I need a production-ready AI avatar that looks and behaves like a real person, tailored for an entertainment setting and speaking English with a friendly, approachable tone. Within four weeks, I want users to be able to type or talk to the avatar and receive smooth, watermark-free video replies featuring accurate lip-sync, expressive facial animation, natural body movement, and realistic voice cloning of the subject.

Core requirements
• Real-time text and voice input → video + text output
• High-fidelity lip-sync and emotion-driven animation
• Studio-quality voice synthesis cloned from the target voice
• Modular embed code so the avatar can drop into any site or digital experience
• Custom admin dashboard to manage prompts, personalities, and usage analytics
• Deployment pipeline to AWS S3 (or equivalent) for scalable streaming delivery

Technical notes
– I'm open to your preferred stack; suggestions such as Unreal MetaHuman, Unity with ARKit blend shapes, TensorRT, WebRTC, ffmpeg, or ElevenLabs for voice are all acceptable, so long as the end result is seamless and lightweight for the web.
– All assets, code, and model checkpoints must be handed over with clear build and deployment instructions.
– No third-party watermarks may appear in the final videos.

Deliverables (all must be met to consider the project complete)
1. Fully functional AI avatar with lip-synced video response capability.
2. End-to-end text + video conversational interface.
3. Custom admin dashboard with full control over knowledge, tone, and testing logs.
4. Integrated AWS S3 storage for scalable deployment.
5. Source code and configuration, deployment documentation, and a test environment.
6. High-fidelity avatar with no watermark and flexible web embedding.

If you have prior work in AI-driven avatars, voice tech, or real-time animation, please share links or short clips so I can gauge visual quality and sync accuracy. Looking forward to seeing how you can bring this virtual performer to life.
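To make the real-time requirement concrete, here is a minimal sketch of one possible client↔server message contract for a conversation turn. This is an illustration only: the field names (`session_id`, `reply_video_url`, etc.) and the JSON-over-WebSocket framing are my assumptions, not part of the brief.

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class AvatarRequest:
    """One user turn: typed text and/or a reference to an uploaded voice clip."""
    session_id: str
    text: Optional[str] = None        # typed input, if any
    audio_url: Optional[str] = None   # uploaded speech audio, if any

@dataclass
class AvatarReply:
    """One avatar turn: a transcript plus a streamable lip-synced video clip."""
    session_id: str
    reply_text: str
    reply_video_url: str              # e.g. an S3/CDN URL to the rendered clip
    duration_ms: int

def encode(msg) -> str:
    """Serialize a turn for transport (e.g. over a WebSocket)."""
    return json.dumps(asdict(msg))

def decode_reply(raw: str) -> AvatarReply:
    """Parse a serialized avatar reply back into a typed object."""
    return AvatarReply(**json.loads(raw))

# Round-trip example
reply = AvatarReply("sess-1", "Hi there!", "https://cdn.example.com/clip.mp4", 2400)
assert decode_reply(encode(reply)) == reply
```

Pinning down a contract like this early (whatever the final field names are) lets the web embed, the rendering backend, and the admin dashboard be built in parallel against the same interface.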
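For the S3 delivery pipeline, a sketch of one possible storage layout. The bucket layout, key scheme, and `upload_reply_clip` helper are assumptions for discussion, not requirements from the brief; the upload step uses the standard boto3 `upload_file` call and needs AWS credentials to run.

```python
from datetime import datetime, timezone

def clip_key(avatar_id: str, session_id: str, turn: int, when: datetime) -> str:
    """Deterministic S3 key so rendered clips can be cached and purged per session."""
    day = when.strftime("%Y/%m/%d")
    return f"avatars/{avatar_id}/{day}/{session_id}/turn-{turn:04d}.mp4"

def upload_reply_clip(bucket: str, key: str, path: str) -> None:
    """Hypothetical upload step; requires boto3 and configured AWS credentials."""
    import boto3  # deferred import so the key logic above stays dependency-free
    boto3.client("s3").upload_file(
        path, bucket, key,
        ExtraArgs={"ContentType": "video/mp4", "CacheControl": "max-age=86400"},
    )

# Example key for the third reply in a session on 2024-05-01
key = clip_key("ava-01", "sess-1", 3, datetime(2024, 5, 1, tzinfo=timezone.utc))
```

A deterministic, date-partitioned key scheme like this also makes it straightforward to front the bucket with a CDN and to expire old session clips with a lifecycle rule.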