I need a Raspberry Pi programmed so it can take a live video feed from an attached camera, run image recognition on each frame to spot custom objects or patterns that I will later define, and then publish the resulting data over Modbus TCP.

Here is the flow I am after:
1. The camera streams video into the Pi.
2. Your code performs real-time detection of the specified custom objects/patterns (OpenCV, TensorFlow Lite, or another lightweight framework is fine as long as it runs smoothly on the Pi).
3. For every detection event, or on a set interval if no object is found, you convert the recognition result into register values and expose them through a Modbus TCP server running on the same Pi.

Deliverables
• Fully commented source code (Python preferred)
• A brief README covering setup, dependencies, and how to swap out the object-detection model
• A tested Modbus TCP register mapping so I can poll the Pi from a SCADA system or PLC right away
• Optional: a simple CLI dashboard or log output to verify detections during development

Acceptance criteria
• Detection latency low enough to keep up with a 30 fps stream
• Successful Modbus reads by an external client on my network, showing the expected register values when test objects appear in front of the camera

If you have questions about the exact objects or want to suggest model architectures, let me know and we can pin that down before you start. To make the idea concrete, rough sketches of the loop and the acceptance check I have in mind are included below.
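
To illustrate step 3, here is a minimal sketch of the kind of loop I am imagining, assuming pymodbus 3.x for the Modbus TCP server and OpenCV for frame capture. The detect() function, the register layout (HR0-HR2), and port 5020 are placeholders of mine, not requirements; I expect the real detector and mapping to come out of our discussion.

```python
# sketch.py -- rough idea only; assumes pymodbus 3.x and opencv-python are installed.
# detect() is a placeholder for whatever model we agree on (e.g. a TFLite interpreter).
import threading
import time

import cv2
from pymodbus.datastore import (ModbusSequentialDataBlock, ModbusServerContext,
                                ModbusSlaveContext)
from pymodbus.server import StartTcpServer

# 100 holding registers starting at address 0; zero_mode=True keeps request
# address 0 mapped to register 0 so the SCADA-side mapping stays obvious.
store = ModbusSlaveContext(hr=ModbusSequentialDataBlock(0, [0] * 100), zero_mode=True)
context = ModbusServerContext(slaves=store, single=True)


def run_modbus_server():
    # Blocking call, so it runs in its own thread; port 5020 avoids needing root.
    StartTcpServer(context=context, address=("0.0.0.0", 5020))


def detect(frame):
    """Placeholder detector.

    Replace with the real model. Should return a list of
    (class_id, confidence_0_to_100) tuples for the agreed objects.
    """
    return []


def main():
    threading.Thread(target=run_modbus_server, daemon=True).start()

    # Attached USB camera; the CSI camera module may need picamera2 instead.
    cap = cv2.VideoCapture(0)
    last_publish = 0.0
    heartbeat_interval = 1.0  # publish at least once per second even with no object

    while True:
        ok, frame = cap.read()
        if not ok:
            continue

        detections = detect(frame)
        now = time.time()

        if detections or now - last_publish >= heartbeat_interval:
            # Illustrative register map (to be agreed):
            #   HR0 = number of detections in this frame
            #   HR1 = class id of the first detection (0 if none)
            #   HR2 = confidence of the first detection, 0-100 (0 if none)
            count = len(detections)
            cls, conf = detections[0] if detections else (0, 0)
            store.setValues(3, 0, [count, int(cls), int(conf)])  # fc 3 = holding registers
            last_publish = now
            print(f"published count={count} class={cls} conf={conf}")


if __name__ == "__main__":
    main()
```

The HR0/HR1/HR2 layout above is only an illustration; I am open to whatever register map is easiest to document for the SCADA/PLC side, as long as it is fixed and written up in the README.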
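
For the acceptance check, a quick poll from another machine on my network along these lines would be enough for me to verify the mapping. This again assumes pymodbus on the client side; the IP address is a placeholder and the register addresses match the server sketch, not a fixed requirement.

```python
# poll_check.py -- hypothetical client-side check, assuming pymodbus 3.x
# and the server sketch running on the Pi.
from pymodbus.client import ModbusTcpClient

PI_IP = "192.168.1.50"  # placeholder; replace with the Pi's actual address

client = ModbusTcpClient(PI_IP, port=5020)
client.connect()

result = client.read_holding_registers(0, count=3)  # HR0..HR2 from the sketch
if not result.isError():
    count, cls, conf = result.registers
    print(f"detections={count} class={cls} confidence={conf}")
else:
    print("Modbus read failed:", result)

client.close()
```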