Accelerating chatbot deployments with generative AI: LeadSquared’s success story with Amazon Bedrock and Amazon Aurora PostgreSQL on AWS


LeadSquared is a modern SaaS CRM platform that offers sales, marketing, and onboarding solutions tailored to sectors such as BFSI, healthcare, education, and real estate. Its Service CRM goes beyond basic ticketing with centralized support through omnichannel communications, personalized interactions, AI-driven ticket routing, and data-driven insights. LeadSquared faced challenges in accelerating chatbot onboarding due to training requirements, user intent comprehension, query identification, and dialogue management. To address these challenges, LeadSquared adopted a solution that augments large language models (LLMs) with customer-specific data stored in an Amazon Aurora PostgreSQL-Compatible Edition database, improving chatbot responses, streamlining onboarding, and enhancing dialogue management.

Integrating Retrieval Augmented Generation (RAG) capabilities, built on Amazon Aurora PostgreSQL with the pgvector extension and LLMs in Amazon Bedrock, enabled chatbots to deliver natural language responses, improved dialogue management, and a 20% improvement in customer onboarding times. The solution stores vector embeddings in Aurora for hybrid searches, invokes foundation models (FMs) from Amazon Bedrock through an API, and enriches prompts with relevant retrieved data. By incorporating videos, help documents, case histories, FAQs, and knowledge base text, LeadSquared eased chatbot setup, offered personalized experiences, understood user intent more accurately, improved dialogue management, and automated repetitive tasks.
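To illustrate what the pgvector-backed similarity search computes, here is a minimal pure-Python sketch. The `cosine_distance` and `top_k` helpers, the toy `docs` table, and the 3-dimensional embeddings are all illustrative assumptions (real Titan embeddings have far more dimensions, and in the actual solution this ranking happens inside Aurora via pgvector's cosine-distance operator, not in application code):

```python
import math

def cosine_distance(a, b):
    """Cosine distance (1 - cosine similarity), the metric behind
    pgvector's <=> operator."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def top_k(query_embedding, rows, k=3):
    """Return the k rows closest to the query embedding, analogous to:
    SELECT content FROM docs ORDER BY embedding <=> %(q)s LIMIT k;"""
    return sorted(rows, key=lambda r: cosine_distance(query_embedding, r["embedding"]))[:k]

# Toy knowledge base with 3-dimensional embeddings for illustration only.
docs = [
    {"content": "How to reset a password", "embedding": [0.9, 0.1, 0.0]},
    {"content": "Billing and invoices FAQ", "embedding": [0.0, 0.9, 0.4]},
    {"content": "Account recovery steps",  "embedding": [0.8, 0.2, 0.1]},
]

results = top_k([1.0, 0.0, 0.0], docs, k=2)
print([r["content"] for r in results])
# → ['How to reset a password', 'Account recovery steps']
```

In production the embeddings live in an Aurora table with a `vector` column, and the `ORDER BY embedding <=> ... LIMIT k` query shown in the comment does the ranking server-side.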

The RAG mechanism retrieves relevant text from a knowledge base using similarity search, then uses an LLM to generate coherent, contextually relevant responses. In the solution architecture, knowledge resources are transformed into vector representations with the Amazon Titan Text Embeddings model and stored in Aurora using pgvector. A conversation chain, built from the user input and the chat history held in a buffer memory, retrieves relevant documents from the Aurora vector store and passes them to the Claude v2.1 model via Amazon Bedrock to generate responses.
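The prompt-augmentation step can be sketched as follows. The `build_claude_request` helper is a hypothetical name, and the exact prompt wording is an assumption; it only shows the general shape of a Claude v2.x text-completion request body in which retrieved context and buffered chat history are prepended to the user's question:

```python
import json

def build_claude_request(question, retrieved_chunks, chat_history, max_tokens=512):
    """Assemble a Claude v2.x text-completion request body: retrieved
    context and prior turns are prepended so the model answers from
    customer-specific data rather than from its training data alone."""
    context = "\n\n".join(retrieved_chunks)
    history = "\n".join(f"{role}: {text}" for role, text in chat_history)
    prompt = (
        "\n\nHuman: Use only the context below to answer.\n"
        f"<context>\n{context}\n</context>\n"
        f"Conversation so far:\n{history}\n"
        f"Question: {question}"
        "\n\nAssistant:"
    )
    return json.dumps({"prompt": prompt, "max_tokens_to_sample": max_tokens})

body = build_claude_request(
    "How do I reset my password?",
    ["Go to Settings > Security and choose Reset."],   # chunks from the Aurora vector store
    [("Human", "Hi"), ("Assistant", "Hello! How can I help?")],  # buffer memory
)
# The body would then be sent to Bedrock, e.g.:
#   bedrock = boto3.client("bedrock-runtime")
#   response = bedrock.invoke_model(modelId="anthropic.claude-v2:1", body=body)
```

The `Human:`/`Assistant:` turn markers follow Claude's text-completion prompt format; in the actual post, a LangChain-style conversation chain handles this assembly.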

To deploy the application, users can run a Streamlit application after completing specific prerequisites: setting up an Aurora PostgreSQL cluster, an Amazon EC2 instance, and AWS Secrets Manager for database access, and requesting access to FMs in Amazon Bedrock. The code walkthrough demonstrates how to load source data, chunk it, generate embeddings, and create conversation chains, with examples using PDFs, Amazon S3, YouTube videos, PowerPoint presentations, and Word documents as sources to showcase the application's functionality.
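The chunking step in that walkthrough can be sketched with a simple fixed-size splitter. The `chunk_text` helper and its parameter values are illustrative assumptions (the post's walkthrough likely uses a library text splitter); the idea is the same: overlap between chunks keeps sentences cut at a boundary intact in a neighboring chunk before embeddings are generated:

```python
def chunk_text(text, chunk_size=400, overlap=50):
    """Split text into fixed-size character chunks with overlap, so that
    content cut at a chunk boundary still appears whole in the next chunk."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

doc = "A" * 1000  # stand-in for text extracted from a PDF, slide deck, or transcript
pieces = chunk_text(doc, chunk_size=400, overlap=50)
print(len(pieces), [len(p) for p in pieces])
# → 3 [400, 400, 300]
```

Each chunk would then be embedded with the Titan Text Embeddings model and inserted into the Aurora pgvector store.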

The conclusion highlighted the benefits of generative AI technologies in enhancing chatbots for customer interactions and discussed how integrating the pgvector extension in Amazon Aurora PostgreSQL with LLMs in Amazon Bedrock can create intelligent, efficient Q&A bots for businesses. The post also included a message from the authors and detailed their backgrounds in AI-driven business value, data platforms, and generative AI strategy for Amazon Aurora.

Article Source
https://aws.amazon.com/blogs/database/how-leadsquared-accelerated-chatbot-deployments-with-generative-ai-using-amazon-bedrock-and-amazon-aurora-postgresql/