I want to turn a Raspberry Pi 4 (8 GB) into a real-time face-and-emotion classifier that can drive hardware outputs. The core of the job is to prepare the Pi, deploy a model built with Teachable AI, and link the inference results to GPIO pins.

Scope of work

• Flash and harden the OS, then install and configure Python, OpenCV, TensorFlow/TensorFlow Lite, and any camera drivers.
• Help me export the emotion-recognition model from Teachable AI (I’ll create the training images there) and optimise it for the Pi.
• Write a clean Python script that:
  – Captures frames from the Pi camera with OpenCV.
  – Runs inference locally and classifies faces and emotions.
  – Maps each classification to one of three actions: turning on LED lights, activating a buzzer, or triggering a motor via GPIO.
• Provide pin-out diagrams, wiring guidance, and a short README so I can reproduce the setup or extend it later.
• Offer at least one live hand-off session (video or screen share) to walk through the install steps and verify the outputs fire correctly.

Deliverables

1. Fully configured Raspberry Pi image, or a detailed bash script/Ansible playbook.
2. Optimised TensorFlow/TFLite model file.
3. Documented Python source with comments and a requirements.txt.
4. Wiring guide and brief usage manual.

A working prototype that recognises emotions at ~10 FPS or better and drives the LED, buzzer, and motor reliably will be considered a successful completion.
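To make the expected shape of the script concrete, here is a minimal sketch of the capture → inference → GPIO loop. Everything specific in it is a placeholder assumption, not a decision from this brief: the BCM pin numbers, the emotion label names, the `model.tflite` filename, and the input normalisation (Teachable-exported models may expect a different preprocessing). The label-to-pin mapping is kept as a small pure function so it can be tested off-device.

```python
# Sketch only: pin numbers, labels, and model path below are hypothetical
# placeholders to be replaced with the project's real values.

LED_PIN, BUZZER_PIN, MOTOR_PIN = 17, 27, 22  # assumed BCM pin numbers

ACTION_PINS = {
    "happy": LED_PIN,        # light the LED
    "angry": BUZZER_PIN,     # sound the buzzer
    "surprised": MOTOR_PIN,  # trigger the motor
}

def pin_for_label(label):
    """Return the GPIO pin to drive for a predicted label, or None."""
    return ACTION_PINS.get(label)

def main():
    # Hardware/ML imports stay inside main() so the mapping logic above
    # remains importable and testable on a machine without a Pi attached.
    import cv2
    import numpy as np
    import RPi.GPIO as GPIO
    from tflite_runtime.interpreter import Interpreter

    GPIO.setmode(GPIO.BCM)
    for pin in ACTION_PINS.values():
        GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

    interpreter = Interpreter(model_path="model.tflite")  # assumed filename
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    labels = ["happy", "angry", "surprised"]  # must match training order

    cap = cv2.VideoCapture(0)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            h, w = inp["shape"][1], inp["shape"][2]
            # Assumed float input scaled to [0, 1]; adjust to the export.
            x = cv2.resize(frame, (w, h)).astype(np.float32)[None] / 255.0
            interpreter.set_tensor(inp["index"], x)
            interpreter.invoke()
            scores = interpreter.get_tensor(out["index"])[0]
            active = pin_for_label(labels[int(scores.argmax())])
            # Drive only the pin mapped to the current emotion.
            for p in ACTION_PINS.values():
                GPIO.output(p, GPIO.HIGH if p == active else GPIO.LOW)
    finally:
        cap.release()
        GPIO.cleanup()

if __name__ == "__main__":
    main()
```

The hired developer would replace the placeholder labels with the classes trained in Teachable AI and move the pin assignments into the wiring guide so the code and diagrams stay in sync.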