I’m midway through building a professional horse-racing prediction and betting intelligence platform and need an experienced backend/data engineer for an initial 2–3 week sprint. The immediate goal is to get the prediction engine running on real race data as quickly as possible.

What you’ll tackle
• Design a clean PostgreSQL schema to store race entries, official results and derived running performance data (a rough, non-binding sketch of what I have in mind is at the end of this brief).
• Build a flexible ingestion pipeline that accepts JSON and CSV from public race results, manual uploads and future licensed data providers.
• Ensure ingestion is source-agnostic, de-duplicates records and timestamps every load (see the de-duplication sketch at the end of this brief).
• Persist raw race data and basic derived performance metrics (finishes, times, odds, running lines) so they are query-ready for modelling.
• Prepare structured feature tables that a separate prediction engine will consume.
• Expose a lightweight internal REST endpoint (Python preferred, Node acceptable) that returns prediction-ready race data (see the endpoint sketch at the end of this brief). Public API delivery will later run via Cloudflare Workers.

Acceptance criteria
• New data files dropped into a storage bucket are automatically ingested, validated and queryable in PostgreSQL within 5 minutes.
• Horse-level performance views correctly reflect finishes, odds, speed proxies and jockey/trainer stats when spot-checked.
• The prediction data endpoint responds in <200 ms using precomputed data.
• A clear README covering the schema, the ingestion flow and the commands to rebuild the system from scratch.

Notes
• The prediction model logic and weighting will be provided separately.
• The primary objective is rapid deployment of a clean, scalable data foundation for the prediction engine.
• Successful delivery will lead to significant ongoing work as we scale into a full betting intelligence platform.
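
Illustrative sketches (non-binding)
To make the data-model bullet concrete, here is a rough sketch of the kind of core tables I have in mind, written as a small Python script that applies the DDL. Every table and column name is a placeholder and the final schema design is yours; the script assumes psycopg2 and a POSTGRES_DSN environment variable.

# Illustrative only: one possible starting shape for the core tables, not the final schema.
# Assumes psycopg2 is installed and POSTGRES_DSN is set; all names are placeholders.
import os
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS races (
    race_id       BIGSERIAL PRIMARY KEY,
    track_code    TEXT NOT NULL,
    race_date     DATE NOT NULL,
    race_number   INT  NOT NULL,
    surface       TEXT,
    distance_m    NUMERIC,
    source        TEXT NOT NULL,               -- which feed or upload supplied the record
    loaded_at     TIMESTAMPTZ NOT NULL DEFAULT now(),
    UNIQUE (track_code, race_date, race_number)
);

CREATE TABLE IF NOT EXISTS race_entries (
    entry_id          BIGSERIAL PRIMARY KEY,
    race_id           BIGINT NOT NULL REFERENCES races(race_id),
    horse_name        TEXT NOT NULL,
    jockey            TEXT,
    trainer           TEXT,
    post_position     INT,
    morning_line_odds NUMERIC,
    UNIQUE (race_id, horse_name)
);

CREATE TABLE IF NOT EXISTS race_results (
    result_id       BIGSERIAL PRIMARY KEY,
    entry_id        BIGINT NOT NULL UNIQUE REFERENCES race_entries(entry_id),
    finish_position INT,
    final_time_s    NUMERIC,
    final_odds      NUMERIC,
    running_line    JSONB,                     -- positions at each call, kept raw for later modelling
    loaded_at       TIMESTAMPTZ NOT NULL DEFAULT now()
);
"""

if __name__ == "__main__":
    # Apply the illustrative DDL; the connection context manager commits on success.
    with psycopg2.connect(os.environ["POSTGRES_DSN"]) as conn:
        with conn.cursor() as cur:
            cur.execute(DDL)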
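
For the de-duplication and load-timestamp requirement, something along these lines is what I picture: each normalised record is hashed and inserts rely on ON CONFLICT to skip repeats. The raw_race_records table and its unique fingerprint column are assumptions for illustration (they are not in the sketch above); the real pipeline would also cover CSV, validation and batching.

# Sketch of the de-duplication idea, assuming a hypothetical raw_race_records table
# with a UNIQUE fingerprint column. Table and column names are placeholders.
import hashlib
import json
import os
import psycopg2
from psycopg2.extras import Json

def record_fingerprint(record: dict) -> str:
    """Stable hash of a normalised record, used as the de-duplication key."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def ingest_raw_records(records: list[dict], source: str) -> int:
    """Insert raw JSON records with a load timestamp, skipping duplicates; returns how many were new."""
    inserted = 0
    with psycopg2.connect(os.environ["POSTGRES_DSN"]) as conn:
        with conn.cursor() as cur:
            for record in records:
                cur.execute(
                    """
                    INSERT INTO raw_race_records (fingerprint, source, payload, loaded_at)
                    VALUES (%s, %s, %s, now())
                    ON CONFLICT (fingerprint) DO NOTHING
                    """,
                    (record_fingerprint(record), source, Json(record)),
                )
                inserted += cur.rowcount   # 1 if inserted, 0 if it was a duplicate
    return inserted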
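
And for the internal prediction-data endpoint, a minimal sketch of the shape I expect: it reads only precomputed rows, which is why the <200 ms target should be realistic. FastAPI and the race_feature_rows table are my assumptions, not requirements; the framework choice is yours as long as it stays in Python or Node.

# Minimal endpoint sketch: return precomputed, prediction-ready rows for one race.
# Assumes FastAPI, psycopg2 and a hypothetical race_feature_rows feature table.
import os
import psycopg2
import psycopg2.extras
from fastapi import FastAPI, HTTPException

app = FastAPI()

@app.get("/internal/races/{race_id}/prediction-input")
def prediction_input(race_id: int):
    # One indexed query against the precomputed feature table; no on-the-fly feature work.
    with psycopg2.connect(os.environ["POSTGRES_DSN"]) as conn:
        with conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor) as cur:
            cur.execute(
                "SELECT * FROM race_feature_rows WHERE race_id = %s ORDER BY post_position",
                (race_id,),
            )
            rows = cur.fetchall()
    if not rows:
        raise HTTPException(status_code=404, detail="race not found or features not built yet")
    return {"race_id": race_id, "runners": rows}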