I need an end-to-end ETL pipeline designed and implemented on Azure Databricks that will reliably move, transform, and load data coming from three distinct sources: our on-premises databases, files already landing in cloud storage, and several business-critical APIs. Azure Data Factory is in place and should remain the orchestration layer; your work will focus on crafting efficient Databricks notebooks in Python and SQL, wiring them into ADF pipelines, and ensuring each stage (ingest, transform, load) runs on a schedule with proper error handling and alerting.

Acceptance criteria
• Well-structured Databricks notebooks (Python/SQL) that perform all required transformations.
• ADF pipelines that invoke those notebooks, parameterised for dev, test, and prod.
• Connection setups for on-prem gateways, cloud storage containers, and the specified APIs.
• Incremental load logic and a full reload option, documented and verified (see the sketch at the end of this brief).
• README-style documentation covering environment setup, pipeline flow, and a runbook for support.

If you have recent experience building similar Azure Databricks ETL solutions, I'm ready to share schema details and API specs so we can get started right away.
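
For context, here is a rough sketch of the incremental vs. full-reload behaviour I have in mind for the Databricks notebooks. All table, column, and parameter names (staging.sales_orders, curated.sales_orders, order_id, updated_at, load_mode, watermark) are placeholders only; the real merge keys and watermark columns will come from the schema details I'll share.

```python
# Illustrative sketch only: incremental merge vs. full reload in a Databricks notebook,
# driven by parameters passed in from the ADF pipeline. All names are placeholders.
# Assumes Delta tables and the spark/dbutils objects available in a Databricks notebook.
from delta.tables import DeltaTable
from pyspark.sql import functions as F

# ADF base parameters surface in the notebook as widgets
dbutils.widgets.text("load_mode", "incremental")          # "incremental" or "full"
dbutils.widgets.text("watermark", "1970-01-01 00:00:00")  # high-water mark from the last run
load_mode = dbutils.widgets.get("load_mode")
watermark = dbutils.widgets.get("watermark")

source = spark.read.table("staging.sales_orders")         # placeholder source table

if load_mode == "full":
    # Full reload: rewrite the curated table from scratch
    source.write.format("delta").mode("overwrite").saveAsTable("curated.sales_orders")
else:
    # Incremental: merge only the rows changed since the last successful run
    changed = source.filter(F.col("updated_at") > F.lit(watermark))
    target = DeltaTable.forName(spark, "curated.sales_orders")
    (target.alias("t")
           .merge(changed.alias("s"), "t.order_id = s.order_id")
           .whenMatchedUpdateAll()
           .whenNotMatchedInsertAll()
           .execute())
```

In the real notebooks I would expect the table names, merge keys, and watermark to be supplied through the same dev/test/prod parameters the ADF pipelines pass in, rather than hard-coded as above.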