A Wi-Fi camera is already streaming live RGB frames from our test space. I need you to take those images, label the raw footage as “fall” versus “normal movement,” build a robust model, and demonstrate that it detects falls with better than 99% accuracy, first on your development machine and then on a Raspberry Pi with the same reliability.

Color (RGB) images are the only input, and I am open to any approach that reaches the target accuracy, whether that is a CNN, an SVM, an RNN, or a hybrid of these. Feel free to leverage PyTorch, TensorFlow, OpenCV, ONNX, TensorRT, or whichever stack you trust for edge deployment, as long as it runs smoothly on the Pi without throttling or overheating.

Because my current footage is still unlabeled, the first milestone will be building (or semi-automating) an efficient annotation pipeline so we can create a high-quality training set (a rough sketch of the kind of pre-labeling pass I have in mind is at the end of this brief). Once the model is trained and tuned, I’ll need a lightweight inference script or service, preferably in Python, that opens the incoming video stream, flags a fall in real time, and exposes a simple REST or MQTT alert I can hook into my broader system (also sketched below).

Deliverables and acceptance criteria
• Curated, well-organized labeled dataset with train/val/test splits (see the split sketch below)
• Trained fall-detection model with >99% accuracy, evaluated on held-out data
• Optimized Raspberry Pi build (binaries, weights, or container) matching the same accuracy benchmark (see the export sketch below)
• Source code, setup instructions, and a short demo video showing real-time detection on the Pi

If any clarifications are needed (camera specs, frame rates, or Pi model), let me know and I’ll supply them right away.
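To make the annotation milestone concrete, here is a minimal pre-labeling sketch, assuming OpenCV is available and that simple background subtraction is a usable motion cue on this footage. The stream URL, output folder, and motion threshold are placeholders, not values from my setup; a reviewer would still sort the saved candidate frames into fall / normal by hand (or in a tool such as CVAT or Label Studio).

```python
# Pre-labeling sketch: save high-motion frames as labeling candidates.
# STREAM_URL, OUT_DIR and MOTION_THRESHOLD are placeholder values.
import os

import cv2

STREAM_URL = "rtsp://camera.local/stream"  # hypothetical camera endpoint
OUT_DIR = "candidates"                     # frames queued for manual labeling
MOTION_THRESHOLD = 0.02                    # fraction of foreground pixels; tune on real footage

os.makedirs(OUT_DIR, exist_ok=True)
cap = cv2.VideoCapture(STREAM_URL)
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=25)

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Cheap motion score: ratio of foreground pixels in the mask.
    if cv2.countNonZero(mask) / mask.size > MOTION_THRESHOLD:
        cv2.imwrite(os.path.join(OUT_DIR, f"frame_{frame_idx:06d}.jpg"), frame)
    frame_idx += 1

cap.release()
```

Since falls are temporal events, it may be better to save short clips around each flagged frame rather than single images, so the labeler sees the whole motion.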
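For the train/val/test deliverable, one evaluation detail matters a lot for the 99% target: adjacent frames from the same recording are near-duplicates, so splitting at the frame level would leak information and inflate the test score. A minimal split sketch, assuming the labeled clips end up organized as one folder per recording session (the folder name and ratios are placeholders):

```python
# Split sketch: partition by recording session, not by individual frame,
# so near-duplicate frames never straddle train and test. The folder
# layout and split ratios are assumptions for illustration.
import random
from pathlib import Path

DATA_ROOT = Path("labeled_clips")  # hypothetical: one subfolder per recorded session
sessions = sorted(p for p in DATA_ROOT.iterdir() if p.is_dir())

random.seed(42)  # fixed seed so the split is reproducible
random.shuffle(sessions)

n = len(sessions)
splits = {
    "train": sessions[: int(0.7 * n)],
    "val":   sessions[int(0.7 * n): int(0.85 * n)],
    "test":  sessions[int(0.85 * n):],
}
for name, group in splits.items():
    print(name, [s.name for s in group])
```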
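For the Raspberry Pi build, one portable route (if the model ends up in PyTorch) is an ONNX export that ONNX Runtime's CPU build can then load on the Pi. A minimal export sketch; MobileNetV3-Small and the 224×224 input are stand-ins for whatever architecture actually reaches the accuracy target:

```python
# Export sketch: convert a trained PyTorch model to ONNX for the Pi.
# The architecture and input size below are illustrative assumptions.
import torch
import torchvision

# Stand-in for the real trained network; any nn.Module works here.
model = torchvision.models.mobilenet_v3_small(num_classes=2)
model.eval()

dummy = torch.randn(1, 3, 224, 224)  # one RGB frame, NCHW layout
torch.onnx.export(
    model,
    dummy,
    "fall_model.onnx",
    input_names=["frames"],
    output_names=["logits"],
    opset_version=17,
)
```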
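And for the runtime side, a minimal sketch of the Python inference service with an MQTT alert, assuming onnxruntime and paho-mqtt are installed. The stream URL, broker host, topic, and fall-class index are placeholders, and a real model would likely score a short window of frames rather than each frame in isolation:

```python
# Inference-service sketch: read the camera stream, classify each frame,
# publish an MQTT alert when a fall is detected. STREAM_URL, BROKER_HOST,
# TOPIC and FALL_CLASS are placeholders.
import json
import time

import cv2
import numpy as np
import onnxruntime as ort
import paho.mqtt.client as mqtt

STREAM_URL = "rtsp://camera.local/stream"
BROKER_HOST = "localhost"
TOPIC = "alerts/fall"
FALL_CLASS = 1  # assumed index of the "fall" class in the model output

session = ort.InferenceSession("fall_model.onnx")
input_name = session.get_inputs()[0].name

client = mqtt.Client()  # paho-mqtt 1.x style; 2.x also needs a CallbackAPIVersion
client.connect(BROKER_HOST, 1883)
client.loop_start()

cap = cv2.VideoCapture(STREAM_URL)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocess to the shape the exported model expects (assumed 224x224 NCHW).
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # model assumed trained on RGB
    x = cv2.resize(rgb, (224, 224)).astype(np.float32) / 255.0
    x = np.transpose(x, (2, 0, 1))[None, ...]
    logits = session.run(None, {input_name: x})[0]
    if int(np.argmax(logits)) == FALL_CLASS:
        client.publish(TOPIC, json.dumps({"event": "fall", "ts": time.time()}))

cap.release()
client.loop_stop()
```

A REST variant would swap the MQTT client for a small Flask or FastAPI endpoint, and in production the alert should be debounced (for example, require several consecutive fall frames before publishing).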