I have a pipeline that must pull very large volumes of data around the clock, so I need an engineer who already knows how to get maximum speed and reliability out of every request. You will design and build the entire scraping stack in Python, store the results in SQL, and proactively choose the endpoints and techniques that deliver the highest throughput.

Core requirements
• Build a production-ready scraper (Python 3.x, requests/asyncio/Scrapy or similar)
• Implement IP rotation, smart retries, and any needed anti-bot workarounds; if you understand Akamai you’ll be right at home here (a rough sketch of the kind of retry/rotation logic I mean appears at the end of this post)
• Optimise queries and database structure in MySQL or PostgreSQL to keep ingestion fast and clean
• Validate every record to guarantee data integrity; bad or duplicate rows must be automatically flagged or discarded (see the insert-time dedup sketch at the end of this post)
• Document the code so a second developer can maintain it without guessing

Deliverables
1. Fully tested scraping modules with configuration files
2. SQL schema plus migration script
3. Short README that explains deployment, cron settings, and scaling guidelines

Acceptance Criteria
– Continuous 24-hour run with no unhandled exceptions
– ≥ 99% accuracy versus manual spot-checks
– Inserts sustain 1,000+ rows per minute without deadlocks

Start your bid with “I know about akamai” so I can filter auto-responses.
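
To give a sense of the level of code I expect, here is a rough sketch of per-request proxy rotation with exponential-backoff retries. It is an illustration only: the proxy list, URL handling, and function name are placeholders, not my actual setup, and your design may differ.

```python
import random
import time

import requests

# Placeholder proxy pool; in production this would come from a rotation service.
PROXIES = [
    "http://proxy1.example.com:8080",
    "http://proxy2.example.com:8080",
]

def fetch(url, max_retries=5, timeout=10):
    """Fetch a URL, rotating proxies and backing off between failed attempts."""
    for attempt in range(max_retries):
        proxy = random.choice(PROXIES)
        try:
            resp = requests.get(
                url,
                proxies={"http": proxy, "https": proxy},
                timeout=timeout,
                headers={"User-Agent": "Mozilla/5.0"},
            )
            # Treat throttling and server errors as retryable.
            if resp.status_code in (403, 429, 500, 502, 503):
                raise requests.RequestException(f"retryable status {resp.status_code}")
            return resp
        except requests.RequestException:
            # Exponential backoff with jitter before the next attempt.
            time.sleep(2 ** attempt + random.random())
    raise RuntimeError(f"gave up on {url} after {max_retries} attempts")
```

Likewise, a minimal sketch of how duplicate rows could be discarded at insert time in PostgreSQL. The table name, columns, and connection string are hypothetical; the point is that deduplication should be enforced by the schema, not left to manual cleanup.

```python
import psycopg2

# Hypothetical table: a natural key from the source site plus the raw payload.
DDL = """
CREATE TABLE IF NOT EXISTS listings (
    external_id TEXT PRIMARY KEY,
    payload     JSONB NOT NULL,
    scraped_at  TIMESTAMPTZ NOT NULL DEFAULT now()
);
"""

UPSERT = """
INSERT INTO listings (external_id, payload)
VALUES (%s, %s)
ON CONFLICT (external_id) DO NOTHING;  -- silently drop duplicate rows
"""

def store(rows, dsn="dbname=scraper user=scraper"):
    """Insert (external_id, json_string) pairs, skipping duplicates."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(DDL)
        cur.executemany(UPSERT, rows)
```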