With the advent of LLMs, LivePerson (LP) is ready to provide an enterprise-safe generative AI space for its customers.

My role is to lead the way, centering the design around user needs with guardrails that provide safe and optimized experiences.

I am currently leading the design of this platform, which empowers LP's customers to evaluate the latest conversational GenAI models as well as their own chatbots. My objective is to leverage LP's extensive conversational dataset to create a comprehensive framework that enables effortless testing, fine-tuning, and identification of hallucinations, as well as seamless debugging and simulation of conversations.

Designing for accuracy

This project started off as an internal tool for LP data scientists to evaluate multiple GenAI models, compare prompt options, and experiment with bot settings in multiple places. First, there is a section for Conversation settings under Administration > Bots > Edit. Here users can select a prompt from a list of predefined templates, or create a fully custom prompt. They can also toggle hallucination detection and set how many recent messages are used as context in prompt construction. If the pre-prompt and prompt header overrides are toggled on, the user can make additional changes in the Settings panel of the conversation view during conversations.
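To make the settings above concrete, here is a minimal sketch of how such a conversation-settings object and its context-window behavior could be modeled. The type and field names are my own assumptions for illustration, not LivePerson's actual schema.

```typescript
// Hypothetical model of the Conversation settings described above;
// all names are assumptions, not LP's real configuration schema.
interface ConversationSettings {
  promptTemplate: string;          // a predefined template name, or "custom"
  customPrompt?: string;           // used when promptTemplate is "custom"
  hallucinationDetection: boolean; // toggle for hallucination detection
  contextMessageCount: number;     // how many recent messages feed the prompt
  allowOverrides: boolean;         // pre-prompt / prompt header overrides on
}

// Select the most recent messages to include as context, per the settings.
function buildContext(messages: string[], settings: ConversationSettings): string[] {
  return messages.slice(-settings.contextMessageCount);
}
```

With `contextMessageCount: 2`, for example, only the last two messages of a conversation would be passed into prompt construction.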