I’m expanding the funding component of our payments platform and need another set of hands to move faster. The stack is PySpark running on Azure (Databricks and ADLS), with development in PyCharm.

What I want to accomplish

• Cleanly ingest daily files and streaming feeds into our bronze layer
• Transform and enrich them for the silver/gold layers, applying business-level funding rules
• Build rock-solid validation checks so exceptions are surfaced early

Where I could use you most

I already have the high-level pipeline design; I need focused, code-level help writing and testing each stage inside PyCharm. Environment setup is mostly there; just a few tweaks or plug-ins might be required so we stay productive.

Deliverables

1. PySpark notebooks or .py modules for ingestion, processing, and validation, ready to run in our Azure workspace
2. Unit tests that cover critical paths and validation rules
3. A short README outlining execution steps and any PyCharm run configurations you adjust

Code must be readable, modular, and in line with Spark best practices (partitioning, schema evolution, error handling).

If this sounds like your wheelhouse, let’s talk and get the first commit rolling.
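To give a flavor of the validation checks I have in mind, here is a minimal sketch of row-level rule logic. The rule names and thresholds are hypothetical placeholders (the real funding rules come from our business spec), and in the pipeline this would be expressed as Spark column expressions rather than a plain function; it is written in pure Python here only to show the shape of the checks and how they can be unit-tested:

```python
from datetime import date
from typing import Optional

def validate_funding_record(amount: Optional[float],
                            currency: Optional[str],
                            value_date: Optional[date],
                            run_date: date) -> list[str]:
    """Return the list of rule violations for one funding record.

    Hypothetical rules for illustration only. In the actual PySpark job
    the same conditions would be Spark column expressions (preferred
    over Python UDFs for performance) writing an `errors` array column.
    """
    errors: list[str] = []
    # Funding amounts must be present and strictly positive.
    if amount is None or amount <= 0:
        errors.append("NON_POSITIVE_AMOUNT")
    # Currency must look like an ISO 4217 alphabetic code.
    if currency is None or len(currency) != 3 or not currency.isalpha():
        errors.append("BAD_CURRENCY_CODE")
    # Value date cannot be after the run date for this batch.
    if value_date is None or value_date > run_date:
        errors.append("FUTURE_VALUE_DATE")
    return errors
```

Records with a non-empty error list would be routed to an exceptions table rather than silently dropped, which is what I mean by surfacing exceptions early.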