I need a robust data-engineering workflow built on Databricks that connects directly to our existing data lakes. The work covers everything from ingesting raw files, through transformation and enrichment, to delivering well-structured Delta Lake tables ready for downstream analytics and machine-learning teams. The ideal flow will:

• use Auto Loader to ingest new data from the lake into bronze tables,
• apply cleansing, validation, and schema-evolution logic to produce silver tables,
• aggregate and optimize the final gold layer with Z-Ordering and compaction,
• include notebooks or jobs scripted in Python or Scala,
• leverage Databricks Workflows for orchestrated runs, and
• emit basic quality metrics and logs to our chosen monitoring solution.

Deliverables will be considered complete when the pipelines run end-to-end in our workspace, process a sample data-lake folder successfully, and produce clean Delta tables with documented notebook code and a short hand-off guide.
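
For orientation, here is a minimal PySpark sketch of the bronze → silver → gold flow described above. It is illustrative only: the paths, table names, file format, and column names (raw_path, event_id, event_ts, event_type) are assumptions, not details from this brief, and the real pipeline would split these layers into separate Workflows tasks.

```python
# Minimal sketch of the bronze -> silver -> gold flow; paths, table names, and
# columns (event_id, event_ts, event_type) are illustrative assumptions.
import logging
from pyspark.sql import functions as F

log = logging.getLogger("lakehouse_pipeline")

# `spark` is the SparkSession provided by the Databricks notebook/job runtime.
raw_path = "/mnt/datalake/raw/events/"           # assumed data-lake landing folder
chk_root = "/mnt/datalake/_checkpoints/events/"  # assumed checkpoint root

# --- Bronze: Auto Loader ingests new files incrementally ---
bronze_stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", chk_root + "schema")
    .load(raw_path)
)
(
    bronze_stream.writeStream
    .option("checkpointLocation", chk_root + "bronze")
    .option("mergeSchema", "true")   # let new columns evolve into the table
    .trigger(availableNow=True)      # drain the current backlog, then stop
    .toTable("bronze_events")
    .awaitTermination()
)

# --- Silver: cleanse, validate, deduplicate ---
bronze = spark.read.table("bronze_events")
silver = (
    bronze
    .filter(F.col("event_id").isNotNull())               # basic validation rule
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .dropDuplicates(["event_id"])
)
silver.write.mode("append").option("mergeSchema", "true").saveAsTable("silver_events")

# --- Gold: aggregate, then compact and Z-Order for downstream queries ---
gold = (
    spark.read.table("silver_events")
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)
gold.write.mode("overwrite").saveAsTable("gold_daily_events")
spark.sql("OPTIMIZE gold_daily_events ZORDER BY (event_date)")

# --- Basic quality metrics, forwarded to whichever monitoring sink is chosen ---
metrics = {
    "bronze_rows": bronze.count(),
    "silver_rows": spark.read.table("silver_events").count(),
    "null_event_ids_dropped": bronze.filter(F.col("event_id").isNull()).count(),
}
log.info("pipeline quality metrics: %s", metrics)
```

In the delivered version, each layer would normally run as its own task in a Databricks Workflows job with task dependencies, so retries and failure alerts apply per layer rather than to the whole pipeline.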