Data Engineer – PySpark, Python, AWS (US Work Support)

Client: AI | Published: 22.10.2025

Experience: 10+ years
Budget: ₹35,000 – ₹40,000 per month
Type: US Work Support

Job Description:
We are looking for a Data Engineer to provide US work support, with strong hands-on experience in PySpark, Python, and AWS services. The role involves developing, optimizing, and maintaining large-scale data processing pipelines and cloud-based solutions.

Key Responsibilities:
- Design and build ETL/ELT pipelines using PySpark and Python (a minimal sketch follows at the end of this posting).
- Ingest, transform, and load data into data lakes and data warehouses.
- Optimize distributed data jobs using Apache Spark (PySpark).
- Work extensively with AWS services: S3, EC2, Lambda, Glue, Redshift, Athena, EMR, Kinesis, Step Functions.
- Support the integration of machine learning data pipelines in production.
- Troubleshoot and improve the performance of existing data workflows.

Required Skills:
- Python: strong programming and scripting skills.
- PySpark / Apache Spark: advanced experience in distributed data processing.
- AWS services: expertise with S3, Glue, Lambda, EMR, Redshift, Athena, Step Functions.
- Databases: proficient in SQL with PostgreSQL/MySQL.
- Knowledge of streaming tools (Kinesis/Kafka) is an added advantage (see the streaming sketch below).
- Solid understanding of data modeling and data governance.
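To make the core responsibility concrete, here is a minimal sketch of the kind of batch ETL pipeline the role describes: read raw data from S3 with PySpark, clean it, and write partitioned Parquet to a data-lake prefix. The bucket names, paths, and column names are hypothetical placeholders, not part of the posting.

```python
# Minimal PySpark ETL sketch: extract raw JSON from S3, transform it,
# and load partitioned Parquet into a data-lake prefix.
# All bucket, path, and column names below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("orders-etl")  # hypothetical job name
    .getOrCreate()
)

# Extract: raw event files landed by some upstream process.
raw = spark.read.json("s3a://example-raw-bucket/orders/")  # placeholder path

# Transform: drop malformed rows, normalize types, derive a partition column.
clean = (
    raw
    .where(F.col("order_id").isNotNull())
    .withColumn("amount", F.col("amount").cast("double"))
    .withColumn("order_date", F.to_date("event_ts"))
)

# Load: date-partitioned Parquet is a common data-lake layout that Athena,
# Glue, and Redshift Spectrum can all query directly.
(
    clean.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3a://example-lake-bucket/curated/orders/")  # placeholder path
)

spark.stop()
```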
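For the streaming skills (Kinesis/Kafka), a Spark Structured Streaming job is one common shape such work takes. The sketch below consumes a Kafka topic and appends micro-batches to S3 as Parquet; it assumes the spark-sql-kafka connector is on the classpath, and the broker, topic, and paths are invented for illustration.

```python
# Structured Streaming sketch: Kafka topic -> Parquet files on S3.
# Requires the spark-sql-kafka package; all names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("clicks-stream").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")  # placeholder broker
    .option("subscribe", "clickstream")                  # placeholder topic
    .load()
    # Kafka delivers bytes; cast the payload to a string for downstream parsing.
    .select(F.col("value").cast("string").alias("payload"), F.col("timestamp"))
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3a://example-lake-bucket/streams/clicks/")        # placeholder
    .option("checkpointLocation", "s3a://example-lake-bucket/chk/clicks/")
    .trigger(processingTime="1 minute")  # micro-batch every minute
    .start()
)

query.awaitTermination()
```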
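Finally, the Lambda/Glue/Step Functions combination listed above is often wired together with a small trigger function. This is only one assumed orchestration pattern, and the job and argument names are invented: a Lambda handler starts a Glue job run, and a Step Functions state machine can poll the returned run ID.

```python
# Hypothetical trigger Lambda: starts a Glue job run (e.g., the PySpark ETL
# sketched above) as one state in a Step Functions workflow.
# Job name and arguments are placeholders.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    # Glue job arguments are conventionally passed with a "--" prefix.
    run = glue.start_job_run(
        JobName="orders-etl",  # placeholder Glue job name
        Arguments={"--input_date": event.get("input_date", "")},
    )
    # Step Functions can poll this run ID to wait for completion.
    return {"job_run_id": run["JobRunId"]}
```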