SageMaker Object Detection Prototype

Customer: AI | Published: 07.11.2025

I’m putting together a proof-of-concept vision pipeline and need help training a lightweight object-detection model in AWS SageMaker, then deploying it for real-time inference with OpenCV.

Core goals
• Detect and classify vehicles, people, animals and any other objects that could present a collision risk.
• Work reliably across urban streets, rural roads, indoor corridors and even forest or dense-foliage paths.
• Produce a Python script that runs on an edge device (Raspberry Pi, Jetson, or similar) and flags obstacles inside a “collision zone,” outputting simple navigation cues such as stop or turn.

What I need from you
1. Prepare or curate a modest annotated dataset, or adapt an open one, and train an efficient model in SageMaker (YOLOv8, SSD, or another fast architecture); a rough training sketch of what I have in mind follows this list.
2. Export the weights and sample code for OpenCV DNN so the model runs locally without cloud calls (see the inference sketch below).
3. Deliver a demo script that:
   • opens a live camera feed or video file,
   • draws bounding boxes with labels and confidence scores,
   • triggers navigation suggestions when an object is too close (see the collision-zone sketch below).
4. Supply a concise README covering setup, dependencies and how to retrain with new data.

Keep the scope lean: I’m after a working prototype, not a polished production system. Clean, well-commented code and clear instructions are more valuable than lavish UIs.
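
To make item 1 concrete, here is a minimal training sketch of what I have in mind, using the SageMaker Python SDK with the built-in SSD object-detection algorithm (YOLOv8 would instead need a custom training container). The bucket name, IAM role ARN, class count, dataset size and hyperparameter values are all placeholders, not decisions.

```python
# Minimal SageMaker training sketch. Assumes annotated data has already been
# converted to RecordIO and uploaded to S3; bucket and role are placeholders.
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role ARN
bucket = "s3://my-collision-poc-bucket"                          # placeholder bucket

# Built-in SSD-based object-detection algorithm image for the current region.
image_uri = image_uris.retrieve("object-detection", session.boto_region_name)

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.p3.2xlarge",   # a single GPU should be enough for a small dataset
    output_path=f"{bucket}/output",
    sagemaker_session=session,
)

# Hyperparameters for the built-in algorithm; values below are starting points only.
estimator.set_hyperparameters(
    base_network="resnet-50",
    num_classes=4,                   # e.g. vehicle, person, animal, other obstacle
    num_training_samples=5000,       # must match the actual dataset size
    mini_batch_size=16,
    epochs=30,
    learning_rate=0.001,
    image_shape=512,
)

# Channels point at the RecordIO files produced from the annotated dataset.
estimator.fit({
    "train": f"{bucket}/train",
    "validation": f"{bucket}/validation",
})
```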
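
For item 2, a minimal inference sketch assuming the trained weights are exported to ONNX (for a YOLOv8 model, the Ultralytics ONNX export would produce this) and loaded with OpenCV’s DNN module. The model path, input resolution, class names and the YOLOv8-style output layout are assumptions to be adjusted to whatever model is finally delivered.

```python
# OpenCV DNN inference sketch: loads an ONNX export of the detector and runs it
# on a single frame, entirely offline. Output parsing assumes a YOLOv8-style
# head with shape (1, 4 + num_classes, N); adjust for a different architecture.
import cv2
import numpy as np

MODEL_PATH = "model.onnx"          # placeholder: ONNX export of the trained model
INPUT_SIZE = 640                   # placeholder: must match the export resolution
CLASS_NAMES = ["vehicle", "person", "animal", "obstacle"]  # placeholder labels
CONF_THRESHOLD = 0.4
NMS_THRESHOLD = 0.5

net = cv2.dnn.readNetFromONNX(MODEL_PATH)

def detect(frame):
    """Return a list of (class_name, confidence, (x, y, w, h)) detections."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (INPUT_SIZE, INPUT_SIZE),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    output = net.forward()         # shape: (1, 4 + num_classes, N)
    preds = output[0].T            # shape: (N, 4 + num_classes)

    boxes, scores, class_ids = [], [], []
    x_scale, y_scale = w / INPUT_SIZE, h / INPUT_SIZE
    for row in preds:
        class_scores = row[4:]
        class_id = int(np.argmax(class_scores))
        confidence = float(class_scores[class_id])
        if confidence < CONF_THRESHOLD:
            continue
        cx, cy, bw, bh = row[:4]
        x = int((cx - bw / 2) * x_scale)
        y = int((cy - bh / 2) * y_scale)
        boxes.append([x, y, int(bw * x_scale), int(bh * y_scale)])
        scores.append(confidence)
        class_ids.append(class_id)

    # Non-maximum suppression to drop overlapping duplicates of the same object.
    detections = []
    indices = cv2.dnn.NMSBoxes(boxes, scores, CONF_THRESHOLD, NMS_THRESHOLD)
    for i in np.array(indices).flatten():
        detections.append((CLASS_NAMES[class_ids[i]], scores[i], tuple(boxes[i])))
    return detections
```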
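
For item 3, a sketch of the collision-zone check and navigation cues I have in mind. It reuses the detect() helper from the inference sketch above (both would live in the same demo script), and the zone geometry, thresholds and window handling are placeholders to tune on the actual edge device.

```python
# Collision-zone sketch: the bottom-centre strip of the frame is treated as the
# robot's path. If a detection overlaps it, emit a simple navigation cue.
import cv2

def collision_zone(frame_w, frame_h):
    """Central strip covering the lower third of the frame (placeholder geometry)."""
    zone_w = int(frame_w * 0.4)
    x1 = (frame_w - zone_w) // 2
    return x1, int(frame_h * 2 / 3), x1 + zone_w, frame_h

def navigation_cue(detections, frame_w, frame_h):
    """Return 'STOP', 'TURN LEFT', 'TURN RIGHT', or 'CLEAR'."""
    zx1, zy1, zx2, zy2 = collision_zone(frame_w, frame_h)
    for _, _, (x, y, w, h) in detections:
        # Axis-aligned overlap test between the detection box and the zone.
        if x < zx2 and x + w > zx1 and y < zy2 and y + h > zy1:
            box_centre = x + w / 2
            if abs(box_centre - frame_w / 2) < frame_w * 0.1:
                return "STOP"      # obstacle dead ahead
            # Otherwise steer away from the side the obstacle occupies.
            return "TURN LEFT" if box_centre > frame_w / 2 else "TURN RIGHT"
    return "CLEAR"

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)          # 0 = default camera; a video file path also works
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        dets = detect(frame)           # detect() comes from the inference sketch above
        for label, conf, (x, y, w, h) in dets:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(frame, f"{label} {conf:.2f}", (x, y - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
        cue = navigation_cue(dets, frame.shape[1], frame.shape[0])
        cv2.putText(frame, cue, (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 255), 2)
        cv2.imshow("collision-poc", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```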