This project involves developing a complete computer vision pipeline for analyzing sports videos and generating automated performance feedback. The system should detect and track multiple objects (player, racket, and ball), estimate body pose, and identify the type of shot being performed.

The model can be built using one of the following approaches:
- Roboflow + YOLO
- Ultralytics YOLOv8/YOLO11 with MediaPipe
- MoveNet/SensiAI combined with a custom classifier

Using the detections, the system must calculate timing and technical performance metrics and output structured JSON in the following format:

    {
      "type_of_shot": "bandeja",
      "strengths": [],
      "improvements": [],
      "score": 82,
      "overlay_url": ""
    }

The solution should also use GPT-4o to generate natural-language coaching feedback based on the analysis. In addition, performance benchmarking must be included, showing latency and cost per processed video. The final delivery should be a script or API ready for integration.

Required Skills:
- Computer vision experience with YOLO, object tracking, and pose estimation
- Proficiency in Python and OpenCV
- Sports or movement analysis experience (highly preferred)
- Experience generating text outputs using OpenAI APIs

Required Examples:
- A demonstration of a sports or movement analysis vision project
- A pose estimation or object detection demo
- GitHub sample code
- Example JSON outputs
- A video showing movement detection

Pre-Qualification Questions:
Before moving ahead, please confirm experience with:
- Multi-object tracking (player, ball, racket)
- Pose and movement analysis
- Action classification (shot-type detection)
- Automated scoring or structured feedback generation

Budget is fixed: 70 AUD
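As a minimal illustration of the required output schema, the Python sketch below assembles and serializes a feedback payload. The function name `build_feedback` and the sample strengths/improvements values are hypothetical placeholders, not part of the brief; a real implementation would populate them from the computed performance metrics.

```python
import json

def build_feedback(shot_type, strengths, improvements, score, overlay_url=""):
    """Assemble the structured feedback payload in the required JSON schema."""
    return {
        "type_of_shot": shot_type,
        "strengths": strengths,
        "improvements": improvements,
        "score": score,
        "overlay_url": overlay_url,
    }

# Hypothetical example values for illustration only:
payload = build_feedback(
    "bandeja",
    ["consistent contact point above shoulder height"],
    ["earlier racket preparation on approach"],
    82,
)
print(json.dumps(payload, indent=2))
```

Keeping the schema in one constructor function makes it easy to validate the output contract in tests before wiring it to the detection and scoring stages.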
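The latency-and-cost benchmarking requirement can be sketched as a simple timing wrapper around the end-to-end pipeline. Here `process_fn` and `cost_per_video` are placeholders: this sketch does not compute real costs, since those depend on the chosen models, GPT-4o token usage, and hardware.

```python
import time

def benchmark(process_fn, video_path, cost_per_video=0.0):
    """Time one end-to-end run of the pipeline and attach a cost estimate.

    cost_per_video is a caller-supplied estimate (e.g. GPT-4o token cost
    plus compute time); this wrapper only measures wall-clock latency.
    """
    start = time.perf_counter()
    result = process_fn(video_path)
    latency_s = time.perf_counter() - start
    return {"result": result, "latency_s": latency_s, "cost_usd": cost_per_video}

# Usage with a stand-in pipeline function and a hypothetical cost figure:
report = benchmark(lambda path: {"type_of_shot": "bandeja"}, "rally.mp4",
                   cost_per_video=0.05)
print(f"latency: {report['latency_s']:.3f}s, cost: ${report['cost_usd']:.2f}")
```

Running this wrapper over a batch of sample videos would yield the per-video latency and cost figures the deliverable asks for.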