I have an Expo-based voice translator built with React Native and need to drop in a talking 3D avatar by tomorrow. Using expo-three, expo-gl, and expo-asset, you'll wire up a custom 3D model so its mouth syncs to my existing text-to-speech stream and the spoken line matches the translated output. The whole solution must stay inside the managed Expo workflow; no ejecting.

Core goals
• Load and display a custom 3D avatar.
• Drive lip-sync in real time from the TTS buffer.
• Route the translated text through the same TTS engine already in the app.
• Hand back clean, commented source fully merged into my repo.

Nice-to-haves (extra credit)
• Subtle facial idles: blinks, breathing, small head turns.
• Multi-language TTS switching that mirrors the translator's language pair.

Acceptance
If I can pull the project, run expo start on the iOS and Android simulators, and watch the avatar speak every translated sentence without dropped frames, the job is done.

Please outline relevant Three.js / React Native + Expo avatar work so I can gauge fit quickly. The fixed budget remains ₹3000 INR and the deadline is tomorrow, so only reply if you can deliver fast and confidently.
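To make the lip-sync goal concrete, here is a rough sketch of one common approach: map the loudness of each TTS audio chunk to a single "mouth open" morph-target weight. This is plain TypeScript with no Expo or Three.js dependencies; the function names (rms, amplitudeToMouthWeight) and the tuning constants are illustrative assumptions, not part of any library API.

```typescript
// Root-mean-square loudness of a PCM chunk (samples normalized to -1..1).
export function rms(samples: number[]): number {
  if (samples.length === 0) return 0;
  const sumSquares = samples.reduce((acc, s) => acc + s * s, 0);
  return Math.sqrt(sumSquares / samples.length);
}

// Map loudness to a 0..1 mouth-open weight, with a noise floor so silence
// closes the mouth and exponential smoothing to avoid frame-to-frame jitter.
// noiseFloor, gain, and smoothing are hypothetical tuning values.
export function amplitudeToMouthWeight(
  samples: number[],
  previousWeight: number,
  noiseFloor = 0.02,
  gain = 4,
  smoothing = 0.6,
): number {
  const loudness = rms(samples);
  const target = loudness < noiseFloor ? 0 : Math.min(1, loudness * gain);
  // Blend toward the target, keeping part of the previous frame's weight.
  return previousWeight * smoothing + target * (1 - smoothing);
}
```

On each render tick the resulting weight would then be written to the avatar's mouth morph target (in Three.js terms, something like mesh.morphTargetInfluences[mouthIndex] = weight, assuming the model exposes such a target).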