I have a working script that only finishes when it can grab more than 100 GB of RAM. Right now I keep it running by renting several VPS instances, which already costs well over $500 a month and still doesn't let me scale efficiently when the workload spikes.

I'm fully open to revisiting how the program uses resources. What I need first is a sharp mind that can study the current setup, pinpoint where memory is wasted, and outline a smarter resource-management strategy. Whether the answer ends up being sharding the job across nodes, streaming data in smaller chunks, redesigning the in-memory structures, or offloading parts to a managed cloud service, I'm ready to explore it, as long as we land on a solution that scales without blowing up the budget.

Deliverables (acceptance criteria):
• Technical audit of the script's memory footprint and runtime profile.
• Step-by-step plan showing how we cut the RAM requirement or distribute it more intelligently, with estimated monthly cost at projected loads.
• Proof-of-concept implementation (Docker-ready) that runs on affordable hardware or a single modest cloud instance.
• Clear documentation so future team members can replicate or extend the solution.

You'll receive full access to the code, sample data, and the current deployment notes once we start. If you enjoy squeezing the most out of hardware, let's talk about making this script leaner and the bills lighter.
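To make the "streaming data in smaller chunks" direction concrete: the post doesn't specify the script's language, so here is a minimal Python sketch of the general idea, assuming the workload can be expressed as an incremental aggregation over its input rather than one pass over a fully materialized dataset. The function names and chunk size are illustrative, not taken from the actual script.

```python
from itertools import islice

def stream_chunks(iterable, chunk_size=10_000):
    """Yield successive fixed-size chunks instead of materializing the whole input."""
    it = iter(iterable)
    while True:
        chunk = list(islice(it, chunk_size))
        if not chunk:
            return
        yield chunk

def chunked_sum(values, chunk_size=10_000):
    """Aggregate incrementally: peak memory is O(chunk_size), not O(len(values))."""
    total = 0
    for chunk in stream_chunks(values, chunk_size):
        total += sum(chunk)
    return total

# Peak memory stays bounded by chunk_size even for very large inputs.
print(chunked_sum(range(1_000_000), chunk_size=50_000))  # 499999500000
```

Whether this pattern applies depends on whether the script's core computation is actually streamable; that is exactly the kind of question the technical audit above should answer first.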