I’m building a mobile-first Android application that taps into DeepSeek’s Janus-Pro model to turn dense, complex text prompts into high-fidelity images. The project hinges on one feature above all else: rock-solid support for custom style parameters that users can tweak manually before they hit “Generate.” Everything (UI flow, API calls, even GPU scheduling) should serve that requirement.

Here’s the workflow I have in mind:

• Prompt intake: accept long, richly detailed prompts and run them through DeepSeek’s decoupled visual-encoding pipeline.
• Style controls: expose manual sliders or numeric fields for every tunable parameter (strength, palette bias, texture granularity, etc.), persisting user presets locally. A rough Compose sketch of these controls follows at the end of this brief.
• Generation engine: call either fal.ai or DeepSeek’s hosted API (your recommendation welcome) for fast, low-cost inference, targeting sub-3-second turnaround on modern devices. See the API-call sketch below.
• Gallery: a responsive, swipe-friendly grid that stores each output at full resolution, alongside the prompt text and all style metadata for easy reuse.
• Export: share or download images as high-resolution PNG/JPEG plus a lightweight JSON sidecar holding the prompt and parameter set. See the sidecar sketch below.

Acceptance criteria

1. The end-to-end prompt-to-image flow completes on current mid-range Android phones with >84% semantic alignment, measured against DeepSeek’s reference evaluator.
2. Manual style adjustments show immediate preview feedback and propagate exactly to the generation call.
3. No single generation costs more than the low-tier fal.ai pricing threshold when benchmarked over 100 runs.
4. The codebase is delivered in clean, well-documented Kotlin (Jetpack Compose preferred), with a README covering build, deploy, and API key management.

Please write “pari” in your bid so I know you have read the full brief.

If this sounds like your kind of challenge, let’s talk implementation details and milestones. Budget: under INR 2,000.
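
To make the style-controls requirement concrete, here is a minimal Jetpack Compose sketch of what I have in mind. The parameter names come from the brief above; the ranges, defaults, and layout are placeholder assumptions, not values from DeepSeek’s documentation.

```kotlin
// Illustrative sketch only: the parameter names come from the brief above;
// ranges and defaults are placeholders, not DeepSeek-documented values.
import androidx.compose.foundation.layout.Column
import androidx.compose.material3.Slider
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable

// One immutable object holds every tunable parameter, so the exact values
// shown in the preview are the ones handed to the generation call.
data class StyleParams(
    val strength: Float = 0.7f,          // overall style strength, 0..1
    val paletteBias: Float = 0.5f,       // cool (0) to warm (1) palette bias
    val textureGranularity: Float = 0.5f // coarse (0) to fine (1) texture
)

@Composable
fun StyleControls(params: StyleParams, onChange: (StyleParams) -> Unit) {
    Column {
        Text("Strength: %.2f".format(params.strength))
        Slider(
            value = params.strength,
            onValueChange = { onChange(params.copy(strength = it)) }
        )
        Text("Palette bias: %.2f".format(params.paletteBias))
        Slider(
            value = params.paletteBias,
            onValueChange = { onChange(params.copy(paletteBias = it)) }
        )
        Text("Texture granularity: %.2f".format(params.textureGranularity))
        Slider(
            value = params.textureGranularity,
            onValueChange = { onChange(params.copy(textureGranularity = it)) }
        )
    }
}
```

Hoisting the state like this is what makes acceptance criterion 2 cheap to satisfy: the preview and the generation call read the same StyleParams instance, so nothing can drift between them.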
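
For the generation engine, here is a sketch of a plain HTTPS call using OkHttp. The fal.run model path, request field names, and response shape below are assumptions for illustration only; the real contract should come from whichever provider we settle on.

```kotlin
// Sketch of the generation call over HTTPS with OkHttp. The fal.run model
// path, request field names, and response shape below are assumptions for
// illustration; verify against the provider's docs before building on it.
import okhttp3.MediaType.Companion.toMediaType
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.RequestBody.Companion.toRequestBody
import org.json.JSONObject

class GenerationClient(private val apiKey: String) {
    private val http = OkHttpClient()

    // Blocking call: run it from a background dispatcher (e.g. Dispatchers.IO);
    // Android forbids network I/O on the main thread.
    // Returns the URL of the generated image, or null on failure.
    fun generate(prompt: String, params: StyleParams): String? {
        // StyleParams is the data class from the controls sketch above.
        val payload = JSONObject()
            .put("prompt", prompt)
            .put("strength", params.strength)                      // assumed field name
            .put("palette_bias", params.paletteBias)               // assumed field name
            .put("texture_granularity", params.textureGranularity) // assumed field name
            .toString()
        val request = Request.Builder()
            .url("https://fal.run/fal-ai/janus-pro")               // hypothetical model path
            .header("Authorization", "Key $apiKey")
            .post(payload.toRequestBody("application/json".toMediaType()))
            .build()
        http.newCall(request).execute().use { resp ->
            if (!resp.isSuccessful) return null
            val body = JSONObject(resp.body?.string() ?: return null)
            // Assumed response shape: { "images": [ { "url": "..." } ] }
            return body.optJSONArray("images")
                ?.optJSONObject(0)
                ?.optString("url")
        }
    }
}
```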
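
Finally, a sketch of the export sidecar using kotlinx.serialization. The schema here is my own suggestion, not a fixed spec: the prompt plus the full parameter set, so any gallery item can be regenerated exactly.

```kotlin
// Sketch of the JSON sidecar written next to each exported image. The schema
// is a suggestion, not a fixed spec. Requires the kotlinx-serialization
// Gradle plugin for @Serializable to work.
import kotlinx.serialization.Serializable
import kotlinx.serialization.encodeToString
import kotlinx.serialization.json.Json
import java.io.File

@Serializable
data class GenerationSidecar(
    val prompt: String,
    val strength: Float,
    val paletteBias: Float,
    val textureGranularity: Float,
    val model: String = "janus-pro", // assumed model label
    val createdAtEpochMs: Long = System.currentTimeMillis()
)

// Writes "photo.json" next to "photo.png", mirroring the image's base name,
// so the gallery can pair each image with its prompt and parameters.
fun writeSidecar(imageFile: File, prompt: String, params: StyleParams) {
    val sidecar = GenerationSidecar(
        prompt = prompt,
        strength = params.strength,
        paletteBias = params.paletteBias,
        textureGranularity = params.textureGranularity
    )
    val json = Json { prettyPrint = true }
    imageFile.resolveSibling(imageFile.nameWithoutExtension + ".json")
        .writeText(json.encodeToString(sidecar))
}
```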