Most of us have explored generative AI tools, either out of curiosity about the answers they give or to discover whether AI could play a role in modern enterprises. It could revolutionize the way we provide services to customers. Applications range from enriching advisory conversations with ad-hoc insights, to enhancing the client experience by automatically pre-filling forms for workflows, to proactively assisting users with outcome-enhancing actions when conducting a banking task. AI certainly has a role to play in the financial services of the future, but for this to happen, certain preconditions must be fulfilled.
The opportunities and limitations of ChatGPT
The knowledge and insights contained in models like ChatGPT or Bard/Vertex can be used to enrich services and products. These tools can also augment communication between bank and client by preparing, suggesting, and presenting certain data proactively. For this to work in the regulated world of financial services, certain risks of current generative AI models and tools must be understood and managed.
Financial institutions should consider, for example, that the data in current generative AI models may not be up to date. Consequently, use cases built on data and facts that hold true over time work well, while use cases that depend on new or “perishable” data (e.g., news) should be carefully assessed for viability.
Another aspect is understanding and explaining the results, and in particular the ability to trace how an AI model arrived at a certain suggestion. This can be important for regulatory reasons and for avoiding biases in responses.
Another consideration is whether data entered by a bank could find its way into the training data set of the underlying generative AI model, which would mean that any future user could potentially retrieve that data. This has implications for client data protection and bank-client privacy in certain jurisdictions. Large AI model providers are increasingly addressing this issue, but it should still be taken into account when using generative AI tools and models today.
Can you build customer trust with artificial intelligence?
Current generative AI models were not created to uncover the “truth” in data, but to find the most likely answers based on the data in their training set. This means that current generative AI tools and models should not be treated as a sole advisor or knowledgeable professor, but rather as a fast and clever student that provides 80/20 solutions for mining data or producing drafts, which alone saves time and enhances convenience.
AI promises substantial change in banking. But truthfulness, transparency, and data security must be addressed in order to safeguard client trust and regulatory compliance.