
How to battle hallucinations in customer-facing Gen AI solutions?

Vibs Abhishek

Imagine this: a customer reaches out to your brand's new AI assistant with a simple question. But instead of a helpful response, they get completely fabricated policy information. And as a brand, you have to honor that policy because it came from an official company channel!

That's the story of Air Canada's chatbot hallucinating, and it's probably the biggest reason why 60% of executives are cautious about implementing gen AI in customer-facing products.

Here at Alltius, we understand the power of AI for customer service. But we also understand the importance of trust. That's why we've dedicated ourselves to creating AI assistants that are reliable, accurate and, most importantly, truthful, using a multi-pronged approach to battle hallucinations!

In this blog, I'll delve into the world of AI hallucinations and explore how Alltius combats this challenge with a combination of techniques, so that you never face a situation like Air Canada's.

Solving for Hallucination: The Alltius approach

At Alltius, we take hallucinations seriously. You can't afford an Air Canada-style fiasco on your hands, so we combine multiple methods, along with our own native methodology, to tackle hallucinations.

Chain of Thoughts

In 2022, Google researchers developed a methodology that breaks complex multi-step problems into intermediate steps, forcing LLMs to think through the problem before answering. This technique, called Chain-of-Thought prompting, boosted LLMs' reasoning and arithmetic abilities.

Chain of thought forces the LLM to rethink its responses and provide an answer that is grounded in the input. It helps the LLM identify flaws and gaps in its reasoning before arriving at an answer, rather than blindly guessing.

Image Source: Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
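Here is a minimal sketch of how chain-of-thought prompting can be applied to a support query. The `call_llm` helper and the prompt wording are illustrative assumptions, not Alltius' actual template; plug in whichever LLM API your stack uses.

```python
# Minimal chain-of-thought prompting sketch. `call_llm` is a hypothetical
# helper standing in for whichever LLM API you use.

COT_TEMPLATE = """You are a support assistant for {company}.
Answer the customer's question using ONLY the policy excerpts below.

Policy excerpts:
{context}

Question: {question}

Think step by step:
1. Restate what the customer is asking.
2. List the policy excerpts that are relevant.
3. Reason from those excerpts to an answer.
4. Give the final answer, citing the excerpts you used.
"""

def answer_with_cot(call_llm, company: str, context: str, question: str) -> str:
    """Build a chain-of-thought prompt and send it to the model."""
    prompt = COT_TEMPLATE.format(company=company, context=context, question=question)
    return call_llm(prompt)
```

The numbered steps make the model lay out its reasoning before committing to an answer, which is the core idea behind chain-of-thought prompting.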

Source Authentication 

For LLMs trained on general information, the sources are innumerable. But when you're training on company documents, you have a limited set of sources. So, to verify that the information is accurate and grounded in those sources, we deploy explicit checks.

Our team has trained mathematical models that identify the sources of information and verify the authenticity of the provided answer. This allows us to cap hallucination rates.

In the sample AI assistant shown in the image, trained on our website, you can see that Alltius states its sources after every answer.
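Alltius' verification models themselves are proprietary, but the sketch below illustrates the general idea with a deliberately simple lexical-overlap check: every sentence of the draft answer must be supported by at least one source chunk, and the supporting chunks double as citations. The function name and threshold are assumptions for illustration.

```python
import re

def _tokens(text: str) -> set:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def verify_against_sources(answer: str, sources: dict, threshold: float = 0.6):
    """Check that every answer sentence overlaps strongly with at least one
    source chunk; return (is_grounded, citations). Illustrative only --
    production systems use trained verification models, not word overlap."""
    citations = set()
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        sent_tokens = _tokens(sentence)
        if not sent_tokens:
            continue
        best_id, best_score = None, 0.0
        for source_id, chunk in sources.items():
            overlap = len(sent_tokens & _tokens(chunk)) / len(sent_tokens)
            if overlap > best_score:
                best_id, best_score = source_id, overlap
        if best_score < threshold:
            return False, []  # an unsupported sentence flags the whole answer
        citations.add(best_id)
    return True, sorted(citations)
```

If any sentence fails the check, the whole answer is flagged instead of being shown to the customer; otherwise the supporting chunk IDs are surfaced as citations.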

One-vs-Many models 

Using one model for all tasks limits capabilities. Studies suggest that combining multiple LLM architectures yields better accuracy than relying on a single LLM.

Through repeated experimentation, our team has arrived at a mix of models that work cohesively across different types of requirements. Alltius' AI assistants use a different architecture when working with mathematical information than when working with image-based documents.
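As a rough illustration of this kind of routing, the sketch below sends image-bearing requests to a multimodal model and numeric questions to a math-tuned one. The model names and routing rules are placeholders, not Alltius' actual configuration.

```python
# Illustrative model router: pick a specialist model per request type.
MODEL_REGISTRY = {
    "math":    "math-tuned-llm",    # numeric / tabular reasoning
    "vision":  "multimodal-llm",    # image-based documents (scans, screenshots)
    "general": "general-chat-llm",  # everything else
}

def route(query: str, has_image_attachment: bool = False) -> str:
    """Return the model best suited to the request."""
    if has_image_attachment:
        return MODEL_REGISTRY["vision"]
    if any(tok in query.lower() for tok in ("calculate", "premium", "interest", "%")):
        return MODEL_REGISTRY["math"]
    return MODEL_REGISTRY["general"]
```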

 

Fine-Tuning on Domain Knowledge

Domain knowledge plays a vital role in improving accuracy on specific tasks. For example, when BloombergGPT was launched, it significantly outperformed ChatGPT on finance-specific tasks, by 16 points.

At Alltius, while working with our clients, we've fine-tuned our AI assistant architecture on domain knowledge for the insurance, banking, fintech, SaaS and manufacturing industries. We've done this by combining transfer learning, RAG over company data and encoded domain knowledge graphs.

By incorporating these into the training process, Alltius ensures that its AI assistants possess a deep understanding of the subject matter. This emphasis on domain knowledge enhances the precision and contextual relevance of generated content, reducing hallucinations.
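As an illustration of the data side of this, the sketch below assembles curated domain Q&A pairs into a JSONL fine-tuning file. The record layout shown is one common shape; the exact format depends on the fine-tuning provider you use, and the example pairs are invented.

```python
import json

# Illustrative preparation of a domain fine-tuning set from curated Q&A
# pairs (e.g. insurance policy FAQs).
domain_pairs = [
    {
        "question": "What is a deductible?",
        "answer": "A deductible is the amount you pay out of pocket before your coverage starts paying.",
    },
    # ... more curated, reviewed domain examples
]

with open("insurance_finetune.jsonl", "w") as f:
    for pair in domain_pairs:
        record = {"prompt": pair["question"], "completion": pair["answer"]}
        f.write(json.dumps(record) + "\n")
```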

Symbolic AI

Another powerful tool in battling AI hallucinations is Symbolic AI. Unlike traditional LLMs, which rely solely on vast data and pattern recognition, Symbolic AI uses rules, logic, and structured knowledge bases to reason through problems. This structured approach ensures that the AI's responses remain grounded in factual, predefined information rather than probabilistic guesswork.

In a customer-facing environment, this blend of symbolic reasoning and LLM capabilities helps ensure that the AI assistant responds with accurate, fact-based answers. For instance, if a customer asks about a company policy, the symbolic AI component checks a set of predefined rules or databases before formulating an answer, ensuring that the response is consistent with the company's actual policies. This prevents hallucinations, as the system is tethered to verifiable sources of truth.
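A minimal sketch of that kind of symbolic check is shown below: policy topics are answered from a structured rule base rather than from the model's draft, so the outgoing response stays tethered to verified facts. The rules, topics and function names are invented for illustration.

```python
# Illustrative symbolic check: policy answers come from a rule base, and the
# LLM draft is only used when no rule applies.
POLICY_RULES = {
    "refund_window_days": 30,
    "bereavement_fare_retroactive": False,
}

def apply_policy_rules(topic: str, llm_draft: str) -> str:
    """Answer policy topics from the rule base; fall back to the LLM draft
    only when no rule applies."""
    if topic == "refunds":
        return (f"Refund requests are accepted within "
                f"{POLICY_RULES['refund_window_days']} days of purchase.")
    if topic == "bereavement" and not POLICY_RULES["bereavement_fare_retroactive"]:
        return "Bereavement fares cannot be applied retroactively after travel."
    return llm_draft  # no rule matched; use the (separately verified) draft
```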

At Alltius, we integrate symbolic reasoning with LLMs to create robust AI assistants. By combining data-driven insights with structured knowledge, our systems prevent inaccurate or misleading responses. Assurance IQ used this combination in their AI tool, which helped their sales team boost call-to-sale conversion rates by 300%. This shows how blending symbolic AI with LLMs delivers not just accuracy but also significant business outcomes.

Contextual Information

AI is only as good as its input; data is the fuel for our AI assistants. To help them perform better, we ensure they are trained on relevant, contextual information from company documents. This restricts their search horizon: everything they know comes from within those documents.

We understand that company information is stored in many formats, and so we've developed our system to extract accurate information from multiple types of data sources.

In its quest to combat hallucinations, Alltius prioritizes providing comprehensive and contextual information. By furnishing its AI systems with ample relevant data points, Alltius minimizes reliance on information generated from outside domains. This approach not only enhances the coherence of generated content but also mitigates the risk of hallucinations stemming from incomplete or misleading context.
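A minimal sketch of this restriction, assuming hypothetical `retrieve_chunks` and `call_llm` helpers: the assistant only ever sees chunks retrieved from company documents, and it is instructed to answer from those chunks alone.

```python
# Illustrative context restriction: the prompt is assembled only from
# chunks retrieved out of company documents.
GROUNDED_PROMPT = """Answer the question using ONLY the company documents below.
Do not use any outside knowledge.

Documents:
{context}

Question: {question}
"""

def answer_from_company_docs(call_llm, retrieve_chunks, question: str, top_k: int = 5) -> str:
    """Retrieve company-document chunks and answer strictly from them."""
    chunks = retrieve_chunks(question, top_k=top_k)  # searches company docs only
    context = "\n\n".join(f"[{c['doc_id']}] {c['text']}" for c in chunks)
    return call_llm(GROUNDED_PROMPT.format(context=context, question=question))
```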

Knowing when to say no 

LLMs hallucinate due to a combination of incomplete training data, lack of objective alignment, prompt engineering challenges, language complexity and architectural limitations, which makes hallucination a difficult problem to beat. But what if your AI assistant would simply say "I don't know the answer" rather than showing an incorrect one?

That's exactly what we do here. Our team has put together a methodology that prompts Alltius' AI assistants to say they don't know the answer rather than guess. This saves your organization from many difficult situations created by hallucinations.
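One common way to implement this, sketched below under assumed helpers and thresholds, is to abstain whenever retrieval confidence is low or the drafted answer fails a grounding check like the one in the source-authentication sketch above.

```python
# Illustrative abstention logic: answer only when retrieval is confident and
# the drafted answer passes the grounding check; otherwise admit not knowing.
FALLBACK = "I don't know the answer to that. Let me connect you with a human agent."

def respond(question, retrieve_chunks, draft_answer, verify_against_sources,
            min_retrieval_score: float = 0.75):
    """Return a grounded answer with citations, or the fallback message."""
    chunks = retrieve_chunks(question)
    if not chunks or max(c["score"] for c in chunks) < min_retrieval_score:
        return FALLBACK  # nothing relevant enough in the company documents
    answer = draft_answer(question, chunks)
    grounded, citations = verify_against_sources(
        answer, {c["doc_id"]: c["text"] for c in chunks}
    )
    if not grounded:
        return FALLBACK  # drafted answer is not supported by the sources
    return answer + "\n\nSources: " + ", ".join(citations)
```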

The Alltius Advantage

When you're deploying customer-facing Gen AI applications, hallucinations are a big no!

We've developed AI assistants for major brands like Prudential, AngelOne and GMR that handle more than 100k conversations daily with almost 0* hallucinations. We've ensured our AI assistants provide answers that are:

  • Contextual to the customer’s query 
  • Extracted from company documents 
  • Far away from hallucinations 

This builds trust in your company's Gen AI applications. And once that's done, you can reap benefits like ticket deflection, lower customer support costs, reduced wait times and, most beneficial of all, increased customer satisfaction and loyalty.

At Alltius, we understand AI and customer journeys deeply. With a team of academic scholars and seasoned AI professionals, Alltius' platform deploys state-of-the-art mechanisms to battle hallucinations, improve accuracy and reduce latency. And all this while conversing with your end users in human-like language.

If you're exploring conversational AI platforms for sales or customer support, try Alltius. We'd suggest you go for a demo, because:

There is a reason why conversational AI and chatbots are almost synonymous.

With similar UIs and applications, it's easy to confuse a chatbot with high-end conversational AI. The real beauty of conversational AI is more than skin deep: it lies in how you can interact with it and the architecture behind it. And to see it all, you should go for a live demo, preferably with your own data, to see the real benefit.

So, if you're looking for a conversational AI platform, we'd invite you to explore Alltius in the following ways:

 

Conclusion

The fight against AI hallucinations is a critical battleground. Brands need to be cautious about using generic customer-facing AI applications that put minimal to no effort into battling hallucinations.

That being said, Alltius has taken a multi-pronged approach to tackling hallucinations in customer-facing gen AI applications. By employing rigorous verification, domain expertise and proactive mitigation strategies, Alltius is enhancing the reliability of its GenAI assistants. With meticulous approaches like these, customers can interact with gen AI applications they can trust and, in turn, benefit from them in the long run.
