I’m scaling a data-driven SaaS platform and need a senior-level data engineer who can jump straight into our production environment and make an immediate impact. The work ranges from shaping new pipelines to tuning existing Spark jobs, always with an eye on reliability, speed, and data quality. You’ll be hands-on with Apache Spark and Python every day, moving data through modern ETL/ELT patterns and landing it in Snowflake. Our architecture already serves live product features as well as downstream analytics, so every change you ship will be felt in real time.

Core scope
• Audit the current data architecture and highlight bottlenecks
• Design and build scalable, test-covered pipelines (batch and streaming)
• Implement or refine ETL/ELT processes that land in Snowflake with clearly defined SLAs
• Optimise Spark jobs for performance and cost, profiling and refactoring where needed
• Introduce monitoring, alerting, and data-quality checks so issues surface before users notice
• Document the solution so other engineers and analysts can pick it up quickly

Acceptance criteria
• All new pipelines meet agreed SLAs for throughput and latency
• Unit and integration tests cover critical logic paths (>90% coverage)
• Data-quality checks run automatically and surface actionable alerts
• Deployment is fully automated via our existing CI/CD workflow

If you have deep production experience with Apache Spark, Python, ETL/ELT, and Snowflake—and enjoy owning problems end-to-end—let’s talk.
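To give a concrete sense of the data-quality checks mentioned above, here is a minimal sketch in plain Python. The function name, fields, and thresholds are illustrative assumptions, not part of our stack; in production this logic would run inside the Spark pipeline and feed the alerting system.

```python
# Illustrative data-quality check: flag a batch whose row count or null
# rate falls outside thresholds, and return actionable alert messages.
# All names and threshold values here are hypothetical examples.

def check_batch(rows, required_field, min_rows=100, max_null_rate=0.01):
    """Return a list of alert strings for a batch of dict records."""
    alerts = []
    # Guard against silently empty or truncated loads.
    if len(rows) < min_rows:
        alerts.append(f"row_count {len(rows)} below minimum {min_rows}")
    # Treat a missing or None value as a null for the required field.
    nulls = sum(1 for r in rows if r.get(required_field) is None)
    null_rate = nulls / len(rows) if rows else 1.0
    if null_rate > max_null_rate:
        alerts.append(
            f"null rate {null_rate:.2%} for '{required_field}' "
            f"exceeds {max_null_rate:.2%}"
        )
    return alerts
```

A healthy batch returns an empty list; anything non-empty is routed to alerting, so issues surface before users notice them.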