I have an operational Python codebase that is slowing us down in production. The logic is correct, but execution times and memory usage have grown to unacceptable levels as data volumes have increased. I’m looking for a developer who enjoys squeezing every last millisecond out of Python, whether that means profiling with cProfile or py-spy, vectorising calculations with NumPy, introducing async techniques, or refactoring hotspots into Cython: whatever delivers measurable speed-ups while keeping the current behaviour intact.

Here’s what I need from you:
• Run a thorough performance audit to identify the true bottlenecks.
• Implement targeted optimisations and document each change so future maintenance stays straightforward.
• Provide before-and-after benchmarks that clearly demonstrate the improvements on my sample dataset.

The repository is on GitHub, tests are already in place, and I can supply clear success criteria based on execution-time limits and memory caps. If refining algorithms, leveraging concurrency, or smart caching is your speciality, I’d love to see how you can push this code to run faster and leaner.
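To show the kind of audit-then-benchmark loop I have in mind, here is a minimal sketch: profile a hotspot with cProfile, rewrite it with NumPy, and benchmark before vs after while asserting identical results. The function names (`rolling_mean_loop`, `rolling_mean_numpy`) are illustrative placeholders, not from the actual repository.

```python
# Illustrative only: a stand-in hotspot, not code from the real repository.
import cProfile
import pstats
import timeit

import numpy as np


def rolling_mean_loop(values, window):
    """Pure-Python rolling mean: the kind of hotspot a profile exposes."""
    out = []
    for i in range(len(values) - window + 1):
        out.append(sum(values[i:i + window]) / window)
    return out


def rolling_mean_numpy(values, window):
    """Vectorised equivalent using a cumulative-sum trick."""
    c = np.cumsum(np.asarray(values, dtype=float))
    c = np.concatenate(([0.0], c))
    return (c[window:] - c[:-window]) / window


if __name__ == "__main__":
    data = list(range(10_000))

    # 1. Profile to locate the bottleneck (top entries by cumulative time).
    profiler = cProfile.Profile()
    profiler.enable()
    rolling_mean_loop(data, 50)
    profiler.disable()
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)

    # 2. Benchmark before vs after on the same input.
    slow = timeit.timeit(lambda: rolling_mean_loop(data, 50), number=5)
    fast = timeit.timeit(lambda: rolling_mean_numpy(data, 50), number=5)
    print(f"loop: {slow:.4f}s  numpy: {fast:.4f}s  speed-up: {slow / fast:.1f}x")

    # 3. Guard behaviour: results must match before accepting the change.
    assert np.allclose(rolling_mean_loop(data, 50),
                       rolling_mean_numpy(data, 50))
```

This is exactly the shape of deliverable I expect: each optimisation accompanied by a profile showing why the hotspot was chosen, timings on the sample dataset, and a check that behaviour is unchanged.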