I'm building an Open RAN (O-RAN) solution and now need a production-ready MLOps pipeline around it. Kubeflow will orchestrate every workflow and KServe will handle model serving. The most critical pieces for me are the model-training and model-deployment flows; CI/CD for the surrounding infrastructure is secondary. I already have several trained models that must be containerised and slotted straight into the new pipeline.

Here's what I expect to receive:

• Infrastructure-as-code that spins up the required Kubernetes cluster on any major cloud provider, ready for Kubeflow and KServe
• Kubeflow Pipelines covering data ingest, feature processing, training, validation, and artifact versioning
• KServe endpoints with blue/green or canary rollout support so models can be swapped with zero downtime in the RAN environment
• Automated triggers so fresh data or code pushes retrain the model and redeploy it end-to-end
• Monitoring and alerting (Prometheus/Grafana or equivalent) wired into RAN dashboards for latency, throughput, and accuracy metrics

Deliverables: a Git repository containing all IaC scripts, pipeline YAML, container recipes, and a concise README that lets my team reproduce the entire setup from scratch.

If you've built similar Kubeflow + KServe stacks before, especially for telecom or edge scenarios, let's talk.
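To make the training-flow expectation concrete, here is a minimal sketch of the requested pipeline stages as plain Python. All function names, the toy data, and the 0.9 accuracy gate are illustrative assumptions; in the actual deliverable each stage would be a Kubeflow Pipelines component with real RAN telemetry behind it.

```python
# Illustrative sketch of the requested stages (ingest -> features -> train ->
# validate -> version gate). Names, data, and thresholds are assumptions;
# each function would become a Kubeflow Pipelines component in the real repo.

def ingest_data():
    # Placeholder: pull raw RAN telemetry from object storage.
    return [{"cell_id": 1, "throughput_mbps": 120.0, "label": 1},
            {"cell_id": 2, "throughput_mbps": 35.0, "label": 0}]

def build_features(rows):
    # Placeholder feature processing: normalise throughput to [0, 1].
    peak = max(r["throughput_mbps"] for r in rows)
    return [{**r, "throughput_norm": r["throughput_mbps"] / peak} for r in rows]

def train(features):
    # Placeholder "model": a threshold learned from the labelled rows.
    positives = [f["throughput_norm"] for f in features if f["label"] == 1]
    return {"threshold": min(positives)}

def validate(model, features):
    # Accuracy of the threshold model on the labelled rows (illustrative only).
    correct = sum(
        1 for f in features
        if (f["throughput_norm"] >= model["threshold"]) == bool(f["label"])
    )
    return correct / len(features)

def run_pipeline(min_accuracy=0.9):
    rows = ingest_data()
    features = build_features(rows)
    model = train(features)
    accuracy = validate(model, features)
    # Versioning gate: only publish artifacts that pass validation.
    return {"model": model, "accuracy": accuracy,
            "publish": accuracy >= min_accuracy}
```

The point of the gate at the end is that artifact versioning only happens for models that clear validation, which is the behaviour the pipeline bullet above asks for.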
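The canary-rollout requirement maps onto KServe's `canaryTrafficPercent` field on the InferenceService predictor: when a new revision is applied, KServe routes that share of traffic to it while the rest stays on the previous revision, giving the zero-downtime swap described above. The sketch below builds such a manifest as a plain Python dict; the service name, storage URI, model format, and the 10% split are assumptions for illustration.

```python
import json

def canary_inference_service(name, storage_uri, canary_percent=10):
    """Build a KServe v1beta1 InferenceService manifest as a dict.

    canaryTrafficPercent routes that share of traffic to the newly applied
    revision while the remainder stays on the prior one. The name, storage
    URI, and 10% split here are illustrative assumptions.
    """
    return {
        "apiVersion": "serving.kserve.io/v1beta1",
        "kind": "InferenceService",
        "metadata": {"name": name},
        "spec": {
            "predictor": {
                "canaryTrafficPercent": canary_percent,
                "model": {
                    "modelFormat": {"name": "sklearn"},
                    "storageUri": storage_uri,
                },
            }
        },
    }

manifest = canary_inference_service("ran-traffic-model", "s3://models/ran/v2")
print(json.dumps(manifest, indent=2))
```

Promoting the canary to 100% (or rolling back to 0%) is then just a field update and re-apply, which is what makes the blue/green-style swap safe in the RAN environment.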
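The automated-trigger bullet usually reduces to a small decision function invoked by whatever watches the data store and code repo (a CI webhook or an event sensor). A minimal sketch, assuming illustrative thresholds that are not part of this brief:

```python
def should_retrain(new_samples, live_accuracy,
                   min_new_samples=10_000, min_accuracy=0.92):
    """Decide whether to kick off the end-to-end retrain/redeploy pipeline.

    Called by the component that watches for fresh data or code pushes.
    Both thresholds are illustrative assumptions, not values from the brief.
    """
    if new_samples >= min_new_samples:
        return True, "enough fresh data accumulated for a retrain"
    if live_accuracy < min_accuracy:
        return True, "live accuracy dropped below the target"
    return False, "no trigger condition met"
```

For example, `should_retrain(20_000, 0.95)` fires on data volume, while `should_retrain(100, 0.80)` fires on accuracy decay; either path would launch the same training pipeline and roll the result out through the canary endpoint.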
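For the monitoring bullet, the latency requirement typically becomes a Prometheus alerting rule over a request-latency histogram. The sketch below emits such a rule as a dict; the metric name, label, and 200 ms p99 threshold are assumptions, and the real dashboards would carry parallel rules for throughput and accuracy.

```python
import json

def latency_alert_rule(service, p99_threshold_s=0.2):
    """Build a Prometheus alerting-rule group (as a dict) for serving latency.

    The metric name `request_latency_seconds_bucket` and the 200 ms p99
    threshold are illustrative assumptions; adapt them to the metrics the
    serving runtime actually exposes.
    """
    expr = (
        'histogram_quantile(0.99, sum(rate('
        f'request_latency_seconds_bucket{{service="{service}"}}[5m])) by (le)) '
        f'> {p99_threshold_s}'
    )
    return {
        "groups": [{
            "name": f"{service}-serving",
            "rules": [{
                "alert": "HighInferenceLatency",
                "expr": expr,
                "for": "5m",
                "labels": {"severity": "page"},
                "annotations": {
                    "summary": f"p99 latency on {service} above {p99_threshold_s}s",
                },
            }],
        }]
    }

print(json.dumps(latency_alert_rule("ran-traffic-model"), indent=2))
```

Serialised to YAML, the same structure drops straight into a Prometheus rule file or a PrometheusRule custom resource, which is how it would be wired into the RAN dashboards.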