AI Home Assistant with Avatar

Client: AI | Published: 09.10.2025
Budget: $750

I’m ready to move my smart-home setup beyond simple automations and into a full-featured AI experience. Here’s what I need:

• A locally run LLM on an Ubuntu server that drives a voice-based AI avatar.
• The avatar must handle voice control, connect to my existing Home Assistant instance, and deliver personalized AI responses.
• I want to be able to say, for example, “Turn the hallway lights to 30%,” and hear the avatar confirm the action. When the avatar answers questions (weather, calendar, or custom prompts), it should feel conversational, not scripted.

Scope of work

1. Build or configure the core assistant (Python or Node.js are fine) so it securely links to Home Assistant’s API/WebSocket.
2. Integrate a lifelike 3D or 2D avatar that lip-syncs and animates while speaking. I’m open to tools such as Unreal MetaHuman, Ready Player Me, or a lightweight WebGL model, whichever you recommend for smooth desktop performance.
3. Implement high-quality speech recognition (e.g., Whisper, Vosk, or Windows Speech) and TTS (such as ElevenLabs or Amazon Polly).
4. Map common Home Assistant intents (lighting, climate, scenes, sensors) to natural-language commands, with the flexibility to add new intents later.
5. Provide a settings panel so I can enter my Home Assistant URL/token, choose a voice, and toggle avatar appearance or idle animations.
6. Deliver a functional prototype, setup instructions, and commented source code so I can continue tweaking after hand-off.

Success criteria

• Avatar launches on Windows, listens for a wake word, and executes at least ten core Home Assistant actions hands-free.
• Responses feel natural (<1.5 s round-trip on the local network).
• Code is clean, modular, and documented; I can rebuild or extend it without starting from scratch.

If you have previous projects involving Home Assistant integrations, real-time avatars, or speech pipelines, I’d love to see them. Let’s bring a personable AI face to my smart home!
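To make the intent-mapping requirement concrete, here is a minimal sketch of how a spoken command like “Turn the hallway lights to 30%” could be mapped to a Home Assistant service call. The regex patterns and the `light.hallway` entity ID are hypothetical placeholders; a real implementation would load the intent table from configuration so new intents can be added later.

```python
import re

# Hypothetical regex-based intent table: spoken command -> service-call payload.
# Each entry pairs a pattern with a builder that produces the Home Assistant
# domain/service/service_data for the matched command.
INTENTS = [
    (re.compile(r"turn the (?P<name>[\w ]+?) lights? to (?P<pct>\d+)%"),
     lambda m: {
         "domain": "light",
         "service": "turn_on",
         "service_data": {
             "entity_id": f"light.{m.group('name').strip().replace(' ', '_')}",
             "brightness_pct": int(m.group("pct")),
         },
     }),
    (re.compile(r"turn (?P<state>on|off) the (?P<name>[\w ]+?) lights?"),
     lambda m: {
         "domain": "light",
         "service": f"turn_{m.group('state')}",
         "service_data": {
             "entity_id": f"light.{m.group('name').strip().replace(' ', '_')}",
         },
     }),
]

def parse_command(text: str):
    """Return a Home Assistant service-call payload, or None if nothing matches."""
    text = text.lower().strip().rstrip(".!?")
    for pattern, build in INTENTS:
        match = pattern.search(text)
        if match:
            return build(match)
    return None

call = parse_command("Turn the hallway lights to 30%")
# -> light.turn_on with entity_id "light.hallway" and brightness_pct 30
```

A regex table is only a starting point; swapping it for an LLM-backed or Home Assistant conversation-agent parser would not change the shape of the payload the rest of the pipeline consumes.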
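For the secure API/WebSocket link, the message shapes below follow Home Assistant’s documented WebSocket API: the server opens with `auth_required`, the client answers with an `auth` message carrying a long-lived access token, and every subsequent command includes a monotonically increasing `id`. This sketch builds the messages as plain dicts and omits the transport layer entirely; the token and entity ID are placeholders.

```python
import itertools

# Monotonic counter for WebSocket command IDs, as the API requires.
_msg_id = itertools.count(1)

def auth_message(token: str) -> dict:
    """Reply to the server's auth_required message with a long-lived token."""
    return {"type": "auth", "access_token": token}

def call_service(domain: str, service: str, service_data: dict) -> dict:
    """Build a call_service command with the next message ID."""
    return {
        "id": next(_msg_id),
        "type": "call_service",
        "domain": domain,
        "service": service,
        "service_data": service_data,
    }

# Example: the payload the assistant would send over the socket for
# "Turn the hallway lights to 30%" (placeholder entity ID).
msg = call_service("light", "turn_on",
                   {"entity_id": "light.hallway", "brightness_pct": 30})
```

In the finished assistant these dicts would be serialized with `json.dumps` and sent over a `wss://` connection to the instance configured in the settings panel, with responses matched back to commands by `id`.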