Senior Data / AI Engineer role with native-level English skills

Client: AI | Published: 24.03.2026
Budget: $25

Needs to hire 2 freelancers

Job Overview:
We are assembling a remote engineering team for startup projects serving US-based clients. We are currently looking for experienced Senior Data / AI Engineers who are comfortable working in structured Agile environments and building reliable, scalable, production-grade data platforms. The ideal candidate has a strong data engineering background and also brings experience supporting AI/ML-related workflows, such as ML data pipelines, feature pipelines, model-serving data preparation, LLM integrations, or MLOps-adjacent systems.

Required Qualifications:
• Strong experience as a Senior Data Engineer, Data / AI Engineer, or similar role
• Hands-on experience building and maintaining ETL/ELT pipelines
• Strong SQL skills and experience working with large-scale datasets
• Strong experience with Python and modern data processing frameworks such as Spark or similar tools
• Experience with modern data warehouses such as Snowflake, BigQuery, Redshift, or Synapse
• Experience with workflow orchestration tools such as Airflow, Prefect, or similar
• Public cloud experience: AWS, Azure, or GCP
• Experience designing scalable, secure, and production-ready data systems
• Familiarity with CI/CD, version control, and deployment best practices
• Experience with Docker and platform tooling
• Experience working with structured, semi-structured, and large-scale production data
• Strong understanding of data modeling, data quality, validation, and pipeline reliability
• Experience using Jira or similar project management tools
• Experience working in Agile/Scrum environments
• Hands-on experience supporting or developing AI/ML data workflows
• Familiarity with ML pipelines, feature preparation, model data flows, or AI-related backend systems
• Ability to collaborate with engineering, product, and AI/ML teams in production environments

Preferred (Plus):
• Experience with dbt
• Experience with real-time or streaming pipelines such as Kafka or similar tools
• Experience with MLOps tools and practices
• Experience supporting or deploying machine learning models in production
• Experience with LLM applications, vector pipelines, embeddings, RAG workflows, or AI-powered backend systems
• Experience with data governance, monitoring, and observability
• Experience working on projects for US-based clients
• Experience in regulated industries such as Healthcare, Finance, or Insurance

Work Schedule Requirement (Important):
Candidates must be available during US business hours:
• 9:00 AM – 5:00 PM EST
• Occasionally CST, depending on project needs
Full overlap with US working hours is required.

Compensation:
• Initial phase (first 2–3 months): fixed budget of $1,000 per month
• After the setup phase: $3,000 – $5,000 per month, depending on experience, role fit, and performance
All payments will be processed through Upwork in accordance with platform policies.

When Applying, Please Include:
• Your primary technical stack
• Years of experience as a Data Engineer or Data / AI Engineer
• ETL/ELT tools and orchestration tools you have used
• Cloud platforms you have used
• Data warehouses you have worked with
• AI/ML-related tools, workflows, or production use cases you have supported
• Confirmation of availability during EST/CST hours
• A brief summary of your experience with US clients or production systems
• A Loom video introduction demonstrating your English communication skills

Important:
To be considered for this role, applicants must include a short Loom video introducing themselves and demonstrating their English communication skills. This is a required part of the application and helps us evaluate communication clarity, professionalism, and overall fit for client-facing collaboration with US-based clients.

To confirm that you have read this job description carefully, please include the word “Orange” in the header of your proposal. Applications without this keyword may not be considered.