I’m looking for a skilled native Android developer (Java) to build and integrate real-time audio streaming features and an offline LLM experience in a native Android app.

The audio work includes capturing microphone audio, streaming it efficiently, handling buffering and network dropouts, and optionally supporting playback of live or recorded streams. On the offline AI side, I need local model inference running fully on device (no server dependency), integrated cleanly into the app UI and tuned for speed, memory, and battery.

You should be comfortable with the Android audio APIs (AudioRecord, AudioTrack, MediaCodec, and ExoPlayer if needed) and have experience with on-device ML runtimes such as ONNX Runtime, TensorFlow Lite, or llama.cpp-based solutions. Strong debugging skills are important, especially around latency, audio glitches, threading, and background execution limits.

Please share one or two relevant Android projects where you’ve done audio streaming, offline speech, or on-device LLM inference, and mention which tools or libraries you used.
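To give a concrete sense of the capture side, here is a minimal sketch of an AudioRecord read loop. The 16 kHz mono PCM format, the 20 ms frame size, and the onFrame hook are illustrative assumptions, not project requirements; real code also needs the RECORD_AUDIO runtime permission granted before this runs.

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public class MicCaptureLoop {
    private static final int SAMPLE_RATE = 16000; // assumption: 16 kHz mono is typical for speech
    private volatile boolean running = true;

    public void capture() {
        int minBuf = AudioRecord.getMinBufferSize(
                SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT);
        // Allocate a few times the minimum buffer to ride out scheduling jitter.
        AudioRecord recorder = new AudioRecord(
                MediaRecorder.AudioSource.VOICE_COMMUNICATION,
                SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT,
                minBuf * 4);

        short[] frame = new short[SAMPLE_RATE / 50]; // 20 ms frames at 16 kHz
        recorder.startRecording();
        try {
            while (running) {
                int read = recorder.read(frame, 0, frame.length);
                if (read > 0) {
                    onFrame(frame, read); // hypothetical hook: encode and enqueue for the network
                }
            }
        } finally {
            recorder.stop();
            recorder.release();
        }
    }

    private void onFrame(short[] pcm, int samples) {
        // Placeholder: hand PCM to an encoder (e.g. MediaCodec/Opus) and a send queue.
    }

    public void stop() { running = false; }
}
```

This loop belongs on a dedicated thread (or a foreground service, given background execution limits); the UI thread should never block on recorder.read().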
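For the buffering and network-dropout piece, one common shape is a bounded drop-oldest queue between the capture/encode thread and the network thread, so a stalled upload never blocks capture or grows memory without bound. This is a sketch of that pattern, assuming a single producer thread; the capacity and frame type are illustrative.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/** Bounded send buffer: if the network stalls, the oldest frames are discarded. */
public class SendBuffer {
    private final BlockingQueue<byte[]> queue;

    public SendBuffer(int capacityFrames) {
        queue = new ArrayBlockingQueue<>(capacityFrames);
    }

    // Called from the capture/encode thread; never blocks (assumes a single producer).
    public void offerFrame(byte[] encodedFrame) {
        while (!queue.offer(encodedFrame)) {
            queue.poll(); // drop the oldest frame to make room
        }
    }

    // Called from the network thread; blocks until a frame is available.
    public byte[] takeFrame() throws InterruptedException {
        return queue.take();
    }
}
```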
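On the offline inference side, the shape I have in mind is something like the following TensorFlow Lite sketch: memory-map a bundled model and run it entirely on device. The asset name model.tflite, the thread count, and the float tensor shapes are assumptions for illustration (an LLM integration via llama.cpp or ONNX Runtime would look different); it presumes the org.tensorflow:tensorflow-lite dependency.

```java
import android.content.Context;
import android.content.res.AssetFileDescriptor;

import org.tensorflow.lite.Interpreter;

import java.io.FileInputStream;
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class OnDeviceModel {
    private final Interpreter interpreter;

    public OnDeviceModel(Context context) throws IOException {
        // Memory-map the model from assets so it isn't copied onto the Java heap.
        // Note: the asset must be stored uncompressed for openFd() to work
        // (e.g. noCompress "tflite" in the Gradle config).
        MappedByteBuffer model = mapAsset(context, "model.tflite"); // hypothetical asset name
        Interpreter.Options opts = new Interpreter.Options();
        opts.setNumThreads(4); // tune thread count for latency vs. battery
        interpreter = new Interpreter(model, opts);
    }

    private static MappedByteBuffer mapAsset(Context ctx, String name) throws IOException {
        AssetFileDescriptor fd = ctx.getAssets().openFd(name);
        try (FileInputStream in = new FileInputStream(fd.getFileDescriptor())) {
            FileChannel ch = in.getChannel();
            return ch.map(FileChannel.MapMode.READ_ONLY,
                    fd.getStartOffset(), fd.getDeclaredLength());
        }
    }

    public float[][] run(float[][] input, int outputLen) {
        float[][] output = new float[1][outputLen];
        interpreter.run(input, output); // inference stays entirely on device
        return output;
    }
}
```

If you've shipped something along these lines, in whatever runtime, that's exactly the experience I'm after.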