This project entails building a generative-AI web application that runs entirely on OpenAI GPT and delivers a smooth, real-time, text-based chat experience. The frontend must be responsive so users on desktop or mobile feel no friction, while the backend handles dynamic prompts, keeps short-term conversational context, and lets me tune how the model responds.

Core functionality
• Secure user authentication (log-in / sign-up)
• Drag-and-drop or button-based file sharing inside the chat window (a file-upload sketch is included below)
• Conversational memory so the AI remembers context during a session and resets cleanly when required

Behind the scenes the app should:
• Call the OpenAI GPT API efficiently, batching requests or streaming tokens where it makes sense to keep latency low (a backend sketch follows this brief)
• Store transient chat history in a lightweight database or in-memory store for quick retrieval
• Offer simple control over system, user, and assistant prompt templates so I can adjust tone or domain-specific guidance without redeploying code (see the configuration sketch below)

Deliverables
1. Source code for both frontend and backend, clearly organised and documented
2. Deployment instructions (Docker or similar) so I can launch on a standard cloud host
3. A short README explaining how to obtain and add my own OpenAI API key, plus configuration for file-size limits and session time-outs
4. A demo URL or video walkthrough showing the chat, authentication flow, and a file upload in action

Acceptance criteria: the chat must authenticate a user, let that user drop in a file, reference or summarise the file content on request, and respond in under five seconds for an average prompt.
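
Reference sketches

A minimal sketch of the streaming and session-memory points above, assuming an Express backend and the official `openai` Node SDK; the route paths, model name, and the `sessions` map are illustrative choices, not requirements from this brief.

```typescript
// Sketch: stream GPT responses over Server-Sent Events while keeping
// transient per-session history in memory. Assumes Express and the official
// `openai` npm package; swap the Map for Redis or a lightweight database if
// sessions must survive a restart.
import express from "express";
import OpenAI from "openai";

const app = express();
app.use(express.json());

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Transient conversational memory, keyed by session id.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };
const sessions = new Map<string, ChatMessage[]>();

app.post("/api/chat/:sessionId", async (req, res) => {
  const { sessionId } = req.params;
  const history = sessions.get(sessionId) ?? [
    { role: "system", content: "You are a helpful assistant." }, // placeholder system prompt
  ];
  history.push({ role: "user", content: req.body.message });

  // Stream tokens back as they arrive to keep perceived latency low.
  res.setHeader("Content-Type", "text/event-stream");
  const stream = await openai.chat.completions.create({
    model: "gpt-4o-mini", // illustrative model name
    messages: history,
    stream: true,
  });

  let reply = "";
  for await (const chunk of stream) {
    const token = chunk.choices[0]?.delta?.content ?? "";
    reply += token;
    res.write(`data: ${JSON.stringify(token)}\n\n`);
  }
  res.end();

  // Remember the assistant turn so the next request has context.
  history.push({ role: "assistant", content: reply });
  sessions.set(sessionId, history);
});

// Resetting a session "cleanly" is just dropping its history.
app.delete("/api/chat/:sessionId", (req, res) => {
  sessions.delete(req.params.sessionId);
  res.sendStatus(204);
});

app.listen(3000);
```

On the client, the same endpoint can be consumed with fetch and a streaming body reader (or an EventSource), appending tokens to the chat window as they arrive.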
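
In the same spirit, a hedged sketch of keeping the prompt templates and operational limits outside the code, so tone, domain guidance, file-size caps, and session time-outs can change without a redeploy; the `prompts.json` file name and the environment-variable names are assumptions for illustration.

```typescript
// Sketch: runtime-loaded prompt templates and limits. The prompts.json file
// name and env-var names below are illustrative only.
import { readFileSync } from "node:fs";

export interface PromptTemplates {
  system: string;        // overall tone / domain-specific guidance
  userPrefix: string;    // text prepended to each user message, if any
  assistantHint: string; // optional steering for assistant turns
}

// Re-read on each call so edits to the file take effect immediately;
// add a short-lived cache if this ever becomes a hot path.
export function loadPrompts(path = "prompts.json"): PromptTemplates {
  return JSON.parse(readFileSync(path, "utf8")) as PromptTemplates;
}

// Operational limits, overridable per deployment (e.g. via Docker env vars).
export const config = {
  maxUploadBytes: Number(process.env.MAX_UPLOAD_BYTES ?? 5 * 1024 * 1024),    // 5 MB default
  sessionTimeoutMs: Number(process.env.SESSION_TIMEOUT_MS ?? 30 * 60 * 1000), // 30 minutes
};
```

A docker-compose file (or equivalent) can then inject these variables per environment, which keeps the deployment-instructions deliverable from baking values into the image.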
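
Finally, a sketch of the file-sharing path under the same assumptions, using the `multer` middleware for the upload and pushing the file text into the session history so the model can reference or summarise it on request; the field name, size cap, and 20 kB context cut-off are placeholders, and `multer` itself is an assumed dependency rather than a requirement.

```typescript
// Sketch: accept a file inside the chat, enforce a configurable size limit,
// and add its text to the session history so the model can reference or
// summarise it on request. Field name, env vars, and caps are illustrative.
import express from "express";
import multer from "multer";

type ChatMessage = { role: "system" | "user" | "assistant"; content: string };
// In practice this would be the same store used by the chat route above.
const sessions = new Map<string, ChatMessage[]>();

const maxUploadBytes = Number(process.env.MAX_UPLOAD_BYTES ?? 5 * 1024 * 1024);
const upload = multer({
  storage: multer.memoryStorage(),
  limits: { fileSize: maxUploadBytes }, // multer rejects anything larger
});

const router = express.Router();

router.post("/api/chat/:sessionId/files", upload.single("file"), (req, res) => {
  if (!req.file) {
    res.status(400).json({ error: "No file received" });
    return;
  }
  // Plain-text files can go straight into context; PDFs or other formats
  // would need a text-extraction step before this point.
  const text = req.file.buffer.toString("utf8").slice(0, 20_000);
  const history = sessions.get(req.params.sessionId) ?? [];
  history.push({
    role: "user",
    content: `Uploaded file "${req.file.originalname}":\n${text}`,
  });
  sessions.set(req.params.sessionId, history);
  res.json({ ok: true, name: req.file.originalname });
});

export default router;
```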