trusted AI chatbots & assistents
Teaching bots
to behave
We build AI Chatbots & Assistants you and your customers can trust.
You could build a chatbot in under 20 Minutes…
Sources:
DPD chatbot (The Guardian) | Car dealer chatbot (Business Insider) | Bing Chat (Ars Technica)
But the risks attached to it might make you reconsider if your chatbot shouldn’t have a more refined programming background to it, so you and your customers could use it with trust and satisfaction. A well-developed and trustworthy Chatbot ensures that your business and your customers are under no threat of quality, safety, security and legal issues.
Building trusted AI Chatbots & Assistants requires a systematic approach to risk. At diconium, we are developing a Trustworthy Assistant Playbook, based on our teams’ experience building those systems for our clients and the literature.
Based on a systematic risk taxonomy, we implement mitigation strategies for risk avoidance and establish benchmarks & test protocols to see whether you mitigated successfully.
Risks and challenges for trusted AI Chatbots & Assistants
Quality & Safety
Can the user or the operator be harmed by the chatbot’s behavior? The operator is often at higher risk than the user.
Are the bot’s answers good enough to be helpful? Low quality bots may not directly cause harm, but they undermine trust in the technology, and, ultimately, ROI on AI investment.
Security
Can a malicious user hack the assistant?
There are specific attack patterns targeted at chatbots and large language models in general.
Legal
Does the chatbot expose the operator to specific legal risks?
The large language models that drive AI chatbots and assistants pose new legal challenges, especially in the domains of copyright and data protection.
Hallucinations
Large language models, which drive AI chatbots, can “make up” things that sound plausible but are not true. The consequences can be anywhere from annoying to harmful, depending on the use case. In the case of an airline customer support bot that has given wrong advice on a reimbursement, courts have ruled in favor of the customer.
quality / safety
Prompt Injection
Without countermeasures, Hackers can override a chatbot’s base instructions with specially crafted prompts, and thereby alter the behavior of the bot. This is often used to make the bot say silly or harmful things which are then posted to social media, causing reputational damage for the operator.
security
Copyright Risks
The language models that drive AI chatbots rely on large amounts of training data. Not all of this data might be licensed by the model vendors, which has led to several pending lawsuits such as New York Times vs OpenAI. Even though some model vendors have vowed to defend their customers from IP litigation in the case they lose their lawsuits, this is uncharted legal territory.
legal
Chatty Bots
An AI chatbot should keep the conversation with the user within the domain it was designed for. If this is not actively enforced during development, a company’s chatbot easily becomes a “free for all” generic AI assistant incurring high operating costs.
quality / safety
Prompt Leaking
With special prompts hackers can get a chatbot to reveal its secret base level instructions, a.k.a. the “system prompt”, revealing how a company thinks about its AI and its customers.
security
Data Protection
Great care is advised when personal data is processed with chatbots, AI assistants and large language models. There are, for example, techniques for fine-tuning the behavior of chatbots that make it extremely hard or very costly to make the AI model “forget” information. This is at odds with the “right to erasure”, an integral part of data protection legislation such as the GDPR.
legal
Language-dependent Performance
The language models that drive AI chatbots rely on large amounts of training data. For languages with less data, performance and quality is often significantly worse than for English. A multi-lingual chatbot must be carefully evaluated in all languages before being deployed.
quality / safety
Bad Language
Chatbots don’t swear or insult users by themselves, thanks to the “alignment” process used by AI model vendors to tame the behavior of their language models. However, with attacks like Prompt Injection hackers can override the safety guardrails, alter the behavior of the bot and provoke reputation-damaging behavior.
security
Let’s assess your unique risks and challenges
Use cases for trusted AI Chatbots & Assistants
AI chatbots deliver on the promise of automated 24/7 customer support. In contrast to the often-annoying pre-AI bots, they excel at detecting a customer’s request, extracting relevant information like the affected product, at formulating helpful responses and engaging in dialogue. AI chatbots allow for an unprecedented degree of automation and cost efficiency.
A customer contacting support is probably already not happy. AI is only helpful if it uses the right tone of voice and polite, appropriate language. A support bot should never make up facts (“hallucinate”) and it must be safeguarded against hacking.
AI chatbots can provide effective dialogue-based product help to your customers. Instead of letting a user search through FAQs, PDF manuals and knowledge bases, a product help bot answers a user’s question directly, using that very information. This improves customer satisfaction and reduces requests for customer support.
Product help must provide correct and targeted solutions. Avoiding false information (“AI hallucinations”) is key to keeping your customers happy, alongside hitting the right tone of voice and using appropriate language.
AI assistants and AI-based search can be a great supplement to your intranet (Sharepoint, Confluence, …) for finding relevant information for your employees. An AI assistant can answer questions directly or find relevant information more efficiently than classic keyword search. This improves efficiency and employee satisfaction.
Like the external product help bot, an internal assistant must avoid false information (“AI hallucinations”), alongside hitting the right tone of voice and appropriate language. If the AI is not useful, employees won’t adopt it, and the AI investment will be sunk cost.
An AI chatbot can be a sales assistant, providing information about your products while highlighting their advantages. It can engage in a dialog and ask customers about their preferences to present your product in a personalized and targeted way. At the same time, it collects valuable customer data, much richer than simple web tracking data.
An AI sales assistant must not make up facts (“hallucinate”) about your products. It must hit the right tone of voice and it must be safeguarded against hacking. If it collects data, it must be implemented with the correct legal guardrails to be compliant with all data protection laws.
AI assistants can provide a convenient language-based interface for booking, cancelling and managing appointments and reservations. Modern AI is better than ever at determining a user’s intent, also for complex tasks. State-of-the-art LLMs provide functionality to call the right backend services to execute what the user wants.
An AI assistant for managing appointments likely handles personal information and must be implemented with the correct legal guardrails to be compliant with all data protection laws. It must communicate clearly and use the appropriate language and tone of voice.
AI based language technology can be used to gather customer feedback in a more personal and engaging way than a traditional form. It can interview your customer with a natural dialogue. AI can also be used in feedback and survey evaluation, regardless of whether the data was collected with an AI dialogue, a web form or a phone interview.
Customer feedback AI conducting interviews must use the appropriate tone of voice as well as polite concise language. It must stay on topic and not be distracted by what the interview partner has to say. As it likely deals with some form of personal information, it must be implemented with the correct legal guardrails to be compliant with all data protection laws.
We do data and talk about it
Our applydata blog is a platform where our experts celebrate and publish their take on the latest in data, AI, regulation, business and service design.
We also speak at conferences, and we have recently hosted our own.
We are member of the KI Bundesverband (German AI Association), an association of 400+ AI companies dedicated to ethical AI.
We are member of Bitkom, Germany’s digital association representing more than 2,200 companies of the digital economy.
Let’s get started
diconium: your partner for trusted chatbots
Your AI. Trustworthy, certification-ready
and future-proof.
Ultimately, AI systems are decision-making systems. But trust in those decisions is key to realizing the potential of AI. Even if you bring a human into the loop, trust in the AI suggestions is key to adoption and return on investment.