I’m building a fully interactive avatar that uses OpenAI on the front end and my existing SQL Server on the back end to answer customer queries in real time. The goal is a smooth, conversational experience that can switch seamlessly between text and voice responses depending on the user’s preference. The avatar must be able to pull three kinds of data on demand (customer account details, order status and history, and product information) while respecting connection-pool limits and keeping response latency low.

I already have the SQL Server schemas documented; what I need is the intelligence layer that interprets a user’s question, calls the right stored procedures, and formats a friendly reply. For voice output, feel free to suggest the stack you’re most comfortable with (e.g., WebRTC, Twilio, or a browser-native solution).

Deliverables
• An AI dialogue engine integrated with OpenAI’s API, tuned for customer-service language
• Middleware in C# (or a comparable language) that authenticates, queries SQL Server, and returns clean JSON for the avatar to consume (a rough sketch of what I mean follows below)
• A front-end component, web or cross-platform, that displays the avatar, handles user input, and supports both text and voice I/O
• Setup instructions and tested deployment scripts (Docker or Azure preferred)

I’ll provide API keys, DB credentials, and sample data once we start. A short proof-of-concept demo showing account-lookup functionality will be the first acceptance milestone, followed by the full multichannel avatar.
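
To make the middleware deliverable concrete, here is a minimal sketch of the shape I have in mind: an ASP.NET Core minimal-API endpoint that calls a stored procedure on SQL Server and returns clean JSON for the avatar to consume, with an explicit connection-pool cap. The route path, the `dbo.usp_GetCustomerAccount` procedure name, and the `CustomerDb` connection-string key are placeholders for illustration only; the real names will come from the schema documentation I hand over at kickoff.

```csharp
// Sketch only: one account-lookup endpoint returning JSON from a stored procedure.
// Assumes Microsoft.Data.SqlClient is referenced and the connection string is
// supplied via configuration; names below are placeholders, not the real schema.
using System.Data;
using Microsoft.Data.SqlClient;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// "Max Pool Size" keeps the service inside the agreed connection-pool limits.
string connStr = builder.Configuration.GetConnectionString("CustomerDb")
                 + ";Max Pool Size=50";

// Hypothetical route and stored-procedure names -- adjust to the documented schema.
app.MapGet("/api/account/{customerId:int}", async (int customerId) =>
{
    await using var conn = new SqlConnection(connStr);
    await conn.OpenAsync();

    await using var cmd = new SqlCommand("dbo.usp_GetCustomerAccount", conn)
    {
        CommandType = CommandType.StoredProcedure
    };
    cmd.Parameters.AddWithValue("@CustomerId", customerId);

    // Flatten the result set into name/value pairs so the avatar layer
    // receives predictable JSON regardless of column order.
    var rows = new List<Dictionary<string, object?>>();
    await using var reader = await cmd.ExecuteReaderAsync();
    while (await reader.ReadAsync())
    {
        var row = new Dictionary<string, object?>();
        for (int i = 0; i < reader.FieldCount; i++)
            row[reader.GetName(i)] = reader.IsDBNull(i) ? null : reader.GetValue(i);
        rows.Add(row);
    }

    return rows.Count > 0 ? Results.Json(rows) : Results.NotFound();
});

app.Run();
```

In the finished system the dialogue engine would decide which endpoint like this to call, for example by using OpenAI function/tool calling to map a question such as “where is my order?” to an order-status route, then phrasing the returned JSON as a friendly reply. The exact routing approach is open to your recommendation.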