AWS Data Lake Engineer Needed

Client: AI | Published: 23.10.2025
Budget: $25

I’m building a modern analytics platform on AWS and need a hands-on data engineer who can help us move quickly while coaching our in-house developers.

What we’re tackling
• Centralising operational and business data from several source systems into an S3-based data lake.
• Building a scalable ingestion layer with Change Data Capture (CDC) for our key databases.
• Orchestrating transformations that feed curated zones and Amazon Redshift for downstream analytics and BI.

Core stack you’ll work with
AWS Glue for batch ETL, Amazon Kinesis for streaming, AWS Lambda for lightweight processing, plus S3, Athena, EMR, Redshift and RDS where appropriate.

Your day-to-day
• Design and build Glue jobs, Kinesis streams and Lambda functions that land raw data reliably.
• Implement CDC patterns (e.g., Debezium or DMS) so new and updated records flow in near-real time.
• Optimise partitioning, compression and cataloguing for cost-effective querying in Athena and Redshift.
• Pair-program with our developers, explain best practices, perform code reviews and document patterns.
• Suggest improvements to security, monitoring and cost management as we scale.

Success looks like
1. Ingestion pipelines running with automated retries, alerts and version-controlled IaC.
2. A reusable transformation framework in Glue or EMR that our team can extend.
3. Clear runbooks and knowledge-transfer sessions that leave the team self-sufficient.

If you thrive in collaborative environments, know AWS data engineering inside out and can start soon, I’d love to talk.
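To illustrate the partitioning work mentioned above, here is a minimal sketch of how landed objects might be keyed Hive-style so Athena can prune partitions instead of scanning a whole prefix. The `raw/orders` prefix and the record/field names are hypothetical, not part of our actual schema:

```python
from datetime import datetime, timezone

def s3_key_for(record_id: str, event_time: datetime,
               prefix: str = "raw/orders") -> str:
    """Build a Hive-style partitioned S3 key (year=/month=/day=) so that
    Athena partition projection or Glue crawlers can prune by date."""
    dt = event_time.astimezone(timezone.utc)  # normalise to UTC partitions
    return (f"{prefix}/year={dt:%Y}/month={dt:%m}/day={dt:%d}/"
            f"{record_id}.json.gz")

key = s3_key_for("evt-001", datetime(2025, 10, 23, 14, 5, tzinfo=timezone.utc))
print(key)  # raw/orders/year=2025/month=10/day=23/evt-001.json.gz
```

Keeping partitions date-based and compressing objects (gzip here, Parquet in curated zones) is what keeps per-query scan costs low.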
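The CDC requirement boils down to applying an ordered stream of change events to a keyed snapshot. A minimal, library-free sketch of those semantics (the `op` codes mirror Debezium's c/u/d convention; the event shape is an assumption for illustration):

```python
def apply_cdc(current: dict, events: list[dict]) -> dict:
    """Apply ordered CDC events to a keyed snapshot.
    Each event is {"op": "c"|"u"|"d", "key": ..., "row": ...};
    later events win, and deletes drop the key."""
    snapshot = dict(current)  # don't mutate the caller's state
    for ev in events:
        if ev["op"] == "d":
            snapshot.pop(ev["key"], None)  # tolerate deletes of unseen keys
        else:  # "c" (create) or "u" (update) both upsert
            snapshot[ev["key"]] = ev["row"]
    return snapshot

state = apply_cdc({}, [
    {"op": "c", "key": 1, "row": {"name": "Ann"}},
    {"op": "u", "key": 1, "row": {"name": "Anna"}},
    {"op": "c", "key": 2, "row": {"name": "Bob"}},
    {"op": "d", "key": 2},
])
print(state)  # {1: {'name': 'Anna'}}
```

In practice DMS or Debezium emits these events into Kinesis, and a Glue or Lambda consumer applies the same upsert/delete logic into the curated zone.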