I need a production-ready liveness-detection SDK that runs natively on both iOS and Android. The integration must be completely silent: no user gestures, voice commands, or on-screen instructions. Detection should start automatically as soon as the camera opens.

Core detection logic
• Reject frames unless exactly one live face is present; if multiple people appear, the SDK must fail fast.
• Confirm the person is looking straight into the camera.
• Classify and flag: closed eyes, open mouth, face mask, number of detected faces, and overall “live/not-live” status.
• Return structured JSON with confidence scores for every rule above so the host app can set its own pass/fail thresholds (a rough example payload is sketched at the end of this post).

Performance expectations
The classifier should run in real time (≥25 fps) on mid-range devices. A model you have previously trained is preferred, but I’m open to custom training or fine-tuning if it improves accuracy, especially for mask and silent-spoof scenarios.

Deliverables
1. iOS framework (Swift/Obj-C compatible) and Android AAR, each exposing the same public API (an illustrative API sketch is included at the end of this post).
2. Sample apps that demonstrate initialization, camera feed handling, and response parsing.
3. Lightweight documentation covering build settings, permissions, and best-practice thresholds.
4. A short video clip or test suite proving correct behaviour for: single-face pass, multi-face reject, direct-gaze pass, eyes-closed reject, mouth-open reject, and mask reject.

Acceptance criteria
I will test on a mid-tier iPhone and a mid-tier Android handset; any rule with a misclassification rate above 5% will be considered a failed build.

If this matches tech you’ve already delivered, or you can train to these specs, let’s talk.
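
Illustrative payload sketch (not a spec)
To make the structured-JSON requirement concrete, here is roughly the kind of per-frame result I have in mind, written as a Swift Codable model. Every field name and the 0.0–1.0 confidence convention are my own assumptions for illustration; I’m happy to adopt whatever shape your existing SDK already returns.

import Foundation

// Illustrative only: field names and the 0.0–1.0 confidence convention are
// assumptions, not a required schema.
struct LivenessCheck: Codable {
    let passed: Bool          // did this individual rule pass?
    let confidence: Double    // model confidence between 0.0 and 1.0
}

struct LivenessResult: Codable {
    let faceCount: Int               // number of faces detected in the frame
    let singleFace: LivenessCheck    // exactly one face present
    let directGaze: LivenessCheck    // subject looking straight at the camera
    let eyesOpen: LivenessCheck      // fails when the eyes are closed
    let mouthClosed: LivenessCheck   // fails when the mouth is open
    let noMask: LivenessCheck        // fails when a face mask is detected
    let live: LivenessCheck          // overall live / not-live verdict
}

On the host side, parsing would then be a single call such as try JSONDecoder().decode(LivenessResult.self, from: data), and the app applies its own thresholds to the confidence values.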
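
Illustrative API sketch (not a spec)
Likewise, a rough Swift-side sketch of the public surface I would expect the iOS framework to expose; the Android AAR would mirror the same names in Kotlin. All type and method names are placeholders showing the integration style I want (the host app forwards camera frames, the SDK reports per-frame results through a delegate), and LivenessResult refers to the model sketched above.

import AVFoundation

// Placeholder API: names and signatures are assumptions about the desired
// integration style, not an existing interface.
protocol LivenessDetectorDelegate: AnyObject {
    // Called for every analysed frame with the structured result described above.
    func livenessDetector(_ detector: LivenessDetector, didProduce result: LivenessResult)
}

final class LivenessDetector {
    weak var delegate: LivenessDetectorDelegate?

    private let minimumConfidence: Double

    // Tunable threshold with a vendor-chosen default.
    init(minimumConfidence: Double = 0.8) {
        self.minimumConfidence = minimumConfidence
    }

    // The host app forwards frames from its AVCaptureVideoDataOutput here.
    func process(_ sampleBuffer: CMSampleBuffer) {
        // Model inference would happen inside the SDK; results are delivered
        // asynchronously through the delegate callback above.
    }
}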