All industries, and the FinTech segment in particular, are walking the early legs of the Generative AI and LLM hype cycle. AI, however, is not new to most of these organisations: predictive AI has long been used to profile borrowers, attach risk premiums, and rate investment products. So what is different this time?
There is a growing collective belief among industry leaders that Generative AI has changed the ways of working for good, and that there is enough early proof of value. At the heart of this wave is the democratisation of LLMs (Large Language Models) - both the phrase and the technology. A few things make LLMs different from, and more powerful than, incumbent alternatives.
These LLM boons, unfortunately, come at a price. LLMs are few; powerful and accurate LLMs are fewer still, and almost oligarchic. As a result, organisations that want to ‘do it themselves’ have three choices.
FinTechs today realise that LLMs alone, stitched together as a DIY project, cannot solve these problems. They need the capabilities, the security, and the other strengths of a platform. This is similar to how, in the past, simple access to a Python library for Logistic Regression did not mean readiness for real-time predictive classification.
In this rapidly evolving space, a DIY solution can lock firms into a suboptimal tech stack. This is a problem that can be solved by working with an expert partner leading innovation in GenAI. More and more companies are partnering with such solution providers to shorten the time to value while staying capital efficient. Naturally, they scout for companies that have hyper-optimised themselves along these dimensions.
Setting aside the build-or-buy dilemma, which largely constitutes the ‘effort’, adopting Generative AI as a way of working yields disproportionate ‘returns’. This is largely due to the wide variety of use cases to which it can be applied and show quick value. Here are a few of them.
Retail users (e.g. traders, investors, insurance and banking customers)
Bulk users (e.g. developers, code-based traders)
Customer Success teams
Product teams
To realise the maximum value, FinTech companies have to acknowledge the complexities of solving one or more of these use cases. A good partner, or a native solution, must address these complexities.
Information can be sourced from a variety of places, and the ingestion pipeline must be able to handle this variety, index it, and make it quickly retrievable. For example, information could reside in user transaction databases, support documents, and other internal systems.
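To make the idea concrete, here is a minimal sketch of such a pipeline: records from different sources are normalised into a common document shape and indexed for fast retrieval. The `Document` fields, source names, and the toy inverted index are all illustrative assumptions; a production system would use a proper search or vector store.

```python
from dataclasses import dataclass


@dataclass
class Document:
    """A normalised record, whatever the original source (illustrative)."""
    doc_id: str
    source: str  # e.g. "transactions_db", "support_docs" (assumed names)
    text: str


class KeywordIndex:
    """Toy inverted index: token -> set of doc_ids, for quick retrieval."""

    def __init__(self) -> None:
        self._index: dict[str, set[str]] = {}
        self._docs: dict[str, Document] = {}

    def ingest(self, doc: Document) -> None:
        # Normalise and index every token of the document.
        self._docs[doc.doc_id] = doc
        for token in doc.text.lower().split():
            self._index.setdefault(token, set()).add(doc.doc_id)

    def retrieve(self, query: str) -> list[Document]:
        # Return every document sharing at least one token with the query.
        hits: set[str] = set()
        for token in query.lower().split():
            hits |= self._index.get(token, set())
        return [self._docs[d] for d in sorted(hits)]


index = KeywordIndex()
index.ingest(Document("t-1", "transactions_db", "trade buy 10 AAPL at 182.50"))
index.ingest(Document("s-1", "support_docs",
                      "to check your last trade open the orders tab"))
print([d.doc_id for d in index.retrieve("last trade")])  # ['s-1', 't-1']
```

The point of the sketch is the shape of the problem, not the index itself: once every source is normalised into the same record, retrieval can stay uniform even as new sources are plugged in.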
The same user query could be answered from a host of sources. Which source to invoke, and when, is a search problem that must be solved well before the LLM constructs an answer. For example, ‘What was my last trade?’ has to look into the user transactions database, while ‘How do I check my last trade?’ has to peek into a support document.
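The routing step above can be sketched as a function that runs before any LLM call. The keyword heuristic and source names here are assumptions for illustration; a real system would more likely use an intent classifier or the LLM itself to route.

```python
def route(query: str) -> str:
    """Decide which source should answer, before any LLM call.

    Heuristic sketch: 'how do I' style wording points to support
    content; first-person, account-specific wording points to user data.
    """
    q = query.lower()
    if q.startswith(("how do i", "how can i", "where do i")):
        return "support_docs"       # procedural question -> documentation
    if " my " in f" {q} ":
        return "transactions_db"    # personal question -> user data
    return "general_knowledge"      # fall back to the model's own answer


print(route("What was my last trade?"))       # transactions_db
print(route("How do I check my last trade?")) # support_docs
```

Note that both example queries mention ‘my last trade’, yet they must hit different sources; this is exactly why routing cannot be left to naive keyword matching alone in production.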
The solution must know when to segue into a safer, less punishing workflow. For example, doubtful intents must always be reconfirmed, and priority customers must be handed off to live agents.
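A guarded workflow like this can be sketched as a small decision function. The confidence threshold, field names, and action labels are all assumptions; the point is that escalation and reconfirmation are explicit branches, not LLM behaviour left to chance.

```python
from dataclasses import dataclass


@dataclass
class Turn:
    intent: str             # classified user intent, e.g. "close_account"
    confidence: float       # classifier score in [0, 1]
    priority_customer: bool


CONFIDENCE_FLOOR = 0.75  # assumed threshold; tune per deployment


def next_action(turn: Turn) -> str:
    # Priority customers always get a human, regardless of confidence.
    if turn.priority_customer:
        return "handoff_to_live_agent"
    # Doubtful intents are reconfirmed instead of acted upon.
    if turn.confidence < CONFIDENCE_FLOOR:
        return f"reconfirm_intent:{turn.intent}"
    return f"proceed:{turn.intent}"


print(next_action(Turn("close_account", 0.40, False)))  # reconfirm_intent:close_account
```

Keeping these branches outside the model makes the safety behaviour auditable: one can test the handoff logic without ever invoking an LLM.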
A user new to the platform must be engaged differently from a power user. Similarly, a conversation with a retail trader would look very different from a code-heavy, developer-centric conversation.
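One common way to implement this differentiation is persona-conditioned system prompts. The segment names and prompt wording below are purely illustrative assumptions, but the pattern - look up a prompt by user segment, with a cautious default - is the core idea.

```python
# Illustrative persona-to-prompt mapping; segments and wording are assumed.
PERSONA_PROMPTS = {
    "new_retail": (
        "Explain in plain language, avoid jargon, and walk the user "
        "through platform features step by step."
    ),
    "power_retail": (
        "Be concise; assume familiarity with order types and margin."
    ),
    "developer": (
        "Answer with API references and code snippets; assume the user "
        "reads JSON and error codes fluently."
    ),
}


def system_prompt(segment: str) -> str:
    # Fall back to the most cautious persona for unknown segments.
    return PERSONA_PROMPTS.get(segment, PERSONA_PROMPTS["new_retail"])
```

Defaulting unknown segments to the most cautious persona is a deliberate choice: over-explaining to a power user is a minor annoyance, while jargon-heavy answers to a novice can be costly in a financial context.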
No LLM is perfect out of the box. Since the system deals with financial decisions, it is imperative that the GenAI used is ‘honest’. A simple ‘I don’t know’ in response to a user query is far less punitive than a made-up answer. Short-term honesty builds far more trust than the promise of long-term accuracy.
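One way to enforce this honesty is retrieval-grounded abstention: answer only when retrieved evidence actually supports the query, and say ‘I don’t know’ otherwise. The token-overlap check below is a stand-in for whatever grounding signal a real system uses (retrieval scores, citation checks); function names and the threshold are assumptions.

```python
def grounded_answer(query: str, retrieved: list[str],
                    min_overlap: int = 2) -> str:
    """Answer only when evidence overlaps the query enough; else abstain."""
    q_tokens = set(query.lower().split())
    # Pick the retrieved passage sharing the most tokens with the query.
    best = max(retrieved,
               key=lambda p: len(q_tokens & set(p.lower().split())),
               default="")
    overlap = len(q_tokens & set(best.lower().split()))
    if overlap < min_overlap:
        return "I don't know."  # honest abstention beats a made-up answer
    return f"Based on our records: {best}"


print(grounded_answer("last trade price",
                      ["your last trade price was 182.50"]))
print(grounded_answer("dividend schedule", []))  # I don't know.
```

The asymmetry the article describes is encoded in the threshold: raising `min_overlap` trades coverage for honesty, which is usually the right trade in a financial product.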
Organisations must be cognizant of the fragility of LLMs when faced with these complexities. A sensible approach to incorporating GenAI in core workflows can create a never-before-possible experience for customers of FinTech services, leading to a significant increase not only in the bottom line but also in the top line for these firms.