I want to build a complete AI-driven analytics pipeline that ingests our user data, extracts the right behavioural signals, and continuously recommends the most relevant content and features to each individual user. The heart of the project is data analysis aimed squarely at personalising the user experience, so every decision, from feature engineering to model choice, must optimise for that outcome.

Scope of work
• Clean, transform, and warehouse our raw user data (web & mobile events, basic demographics, session metrics).
• Design and train a robust personalisation model (e.g., collaborative filtering, sequence-aware deep learning, or a hybrid you can justify) using Python with libraries such as Pandas, Scikit-learn, TensorFlow, or PyTorch.
• Build an inference layer or microservice that serves real-time or near-real-time recommendations through a RESTful or gRPC API.
• Instrument clear evaluation metrics (precision@k, recall@k, uplift) and an offline A/B testing framework so we can validate performance before live rollout.
• Package all code, documentation, and environment specs (Docker or Conda) so engineering can maintain the system after hand-off.

Acceptance criteria
1. The model delivers at least a 15% lift in click-through or engagement on a held-out user cohort.
2. The end-to-end pipeline is reproducible with a single command on our staging server.
3. The API returns personalised results in under 300 ms at P95 latency under expected load (1k RPS).

If you've implemented similar personalisation engines and can point to measurable wins, I'm ready to dive into the data with you and push this live.
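To make the collaborative-filtering option in the scope of work concrete, here is a minimal item-item similarity sketch in plain Python. All names (`item_cosine_similarities`, `recommend`, the toy interaction data) are illustrative assumptions, not part of our existing stack; a production model would work on the warehoused event data instead.

```python
from collections import defaultdict
from math import sqrt

def item_cosine_similarities(interactions):
    """Cosine similarity between items from implicit user-item interactions.

    interactions: dict mapping user_id -> set of item_ids the user engaged with.
    Returns a dict mapping (item_a, item_b) pairs (with item_a < item_b) to a score.
    """
    co_counts = defaultdict(int)    # how many users engaged with both items
    item_counts = defaultdict(int)  # how many users engaged with each item
    for items in interactions.values():
        for a in items:
            item_counts[a] += 1
        for a in items:
            for b in items:
                if a < b:
                    co_counts[(a, b)] += 1
    return {
        (a, b): c / sqrt(item_counts[a] * item_counts[b])
        for (a, b), c in co_counts.items()
    }

def recommend(user_items, sims, k=3):
    """Score items the user has not seen by summed similarity to items they have."""
    scores = defaultdict(float)
    for (a, b), s in sims.items():
        if a in user_items and b not in user_items:
            scores[b] += s
        elif b in user_items and a not in user_items:
            scores[a] += s
    return [item for item, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]]
```

A sequence-aware or hybrid model would replace the similarity table, but the serving contract (user in, ranked item list out) stays the same.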
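The evaluation metrics named above (precision@k, recall@k) can be pinned down with a short sketch so both sides agree on the definition before the offline A/B framework is built. The function names here are illustrative:

```python
def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items the user actually engaged with."""
    top_k = recommended[:k]
    if not top_k:
        return 0.0
    return sum(1 for item in top_k if item in relevant) / len(top_k)

def recall_at_k(recommended, relevant, k):
    """Fraction of the user's relevant items captured in the top-k recommendations."""
    if not relevant:
        return 0.0
    top_k = recommended[:k]
    return sum(1 for item in top_k if item in relevant) / len(relevant)
```

For example, with `recommended = ["a", "b", "c", "d"]` and `relevant = {"b", "d"}`, precision@2 is 0.5 (one hit in the top two) and recall@4 is 1.0 (both relevant items recovered).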
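Acceptance criterion 1 is a relative lift, so to avoid ambiguity at sign-off, here is the calculation we would hold the held-out cohort to (the function name and example rates are illustrative, not real numbers from our data):

```python
def relative_lift(rate_control, rate_variant):
    """Relative lift of the variant over control; 0.15 corresponds to a 15% lift."""
    if rate_control <= 0:
        raise ValueError("control rate must be positive")
    return (rate_variant - rate_control) / rate_control

# Hypothetical example: control CTR 4.0%, personalised CTR 4.7%
lift = relative_lift(0.040, 0.047)  # ~0.175, i.e. a 17.5% lift, clearing the 15% bar
```

Note this is relative lift: a move from 4.0% to 4.7% CTR is a 0.7-point absolute gain but a 17.5% relative lift.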