k6 and SpeedCurve performance logs currently land in cloud storage as raw Kubernetes logs. The task is to move those logs into Grafana and surface the insights I actually care about: request metrics, error rates, resource usage, average response time, and the 90th and 95th percentile latencies.

The workflow I picture is straightforward: pull the existing log files from the storage bucket, transform or enrich them as needed, then feed them into Grafana using whichever data source (Loki via Promtail, Prometheus, InfluxDB, etc.) makes the most sense. Once the pipeline is stable, build a set of Grafana dashboards that break out each metric clearly and update in near-real-time.

Deliverables

• End-to-end ingestion pipeline from the cloud bucket holding the Kubernetes logs into Grafana
• A reusable parsing/transform script or config that extracts the fields needed for every k6 and SpeedCurve run
• Dashboards visualizing request metrics, error rates, CPU/memory usage, average response time, and the 90th & 95th percentile latencies
• Brief hand-off notes on how to add new test runs or extend the dashboards later

Everything must be self-contained, version-controlled, and ready to run in my existing environment.
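As a starting point for the parsing/transform deliverable, here is a minimal sketch, assuming the stored logs contain k6's standard JSON-lines output (`k6 run --out json=...`) with the built-in `http_req_duration` and `http_req_failed` metrics; the summary field names are illustrative, and SpeedCurve records would need their own branch:

```python
import json
import statistics

def percentile(values, pct):
    """Nearest-rank percentile on a sorted copy of values."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, round(pct / 100 * len(ordered)) - 1))
    return ordered[k]

def summarize_k6_lines(lines):
    """Aggregate k6 JSON-lines output into the dashboard fields:
    request count, error rate, and avg / p90 / p95 latency.

    Assumes the standard k6 "Point" records; anything else
    (e.g. the Kubernetes log wrapper's own text) is skipped.
    """
    durations, failures = [], []
    for line in lines:
        try:
            rec = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip non-JSON noise around the k6 output
        if rec.get("type") != "Point":
            continue
        metric, value = rec.get("metric"), rec["data"]["value"]
        if metric == "http_req_duration":
            durations.append(value)          # latency samples in ms
        elif metric == "http_req_failed":
            failures.append(value)           # rate metric: 0 or 1 per request
    return {
        "requests": len(durations),
        "error_rate": (sum(failures) / len(failures)) if failures else 0.0,
        "avg_ms": statistics.fmean(durations) if durations else 0.0,
        "p90_ms": percentile(durations, 90) if durations else 0.0,
        "p95_ms": percentile(durations, 95) if durations else 0.0,
    }
```

The same summary dict can then be pushed to whichever backend the pipeline settles on (e.g. written as InfluxDB line protocol or exposed for Prometheus to scrape), keeping the parsing logic reusable across test runs.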