I’m looking for someone who can reliably pull a high-volume feed from a UK-based retailer’s website and return it to me in clean JSON. Each run should capture complete product details (name, description, stock status), up-to-the-minute pricing, and the associated image URLs.

The site’s catalogue is large, so I need efficient pagination, respectful rate-limiting, and whatever proxy or headless-browser setup is required to stay ahead of blocking measures. I’d like the scrape to run every other day and overwrite or flag any records that have changed since the previous run.

Please deliver:

• A repeatable script or crawler (Python/Scrapy, Node/Puppeteer, or similar) with straightforward config for start URLs and run frequency
• A JSON export for each run, organised by SKU or product ID, plus a short log summarising totals and any errors
• Clear instructions so I can trigger the job on my server, or an offer to host it yourself if you prefer

If you’re already set up for large-scale retail scraping and can show examples of structured JSON you’ve produced, I’m ready to move quickly and keep the work coming.
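To make the change-flagging requirement concrete, here is a minimal sketch of what I have in mind. The record shape, field names, and function are illustrative assumptions for discussion, not a fixed schema — I’m open to whatever structure your existing pipeline produces, as long as runs are keyed by SKU and each run reports what changed since the last one:

```python
# Illustrative per-SKU record shape (field names are assumptions, not a spec).
EXAMPLE_RECORD = {
    "sku": "ABC-123",
    "name": "Example Kettle",
    "description": "1.7 L stainless-steel kettle",
    "price": "29.99",
    "currency": "GBP",
    "in_stock": True,
    "image_urls": ["https://example.com/img/abc-123.jpg"],
}

def diff_runs(previous: dict, current: dict) -> dict:
    """Compare two runs keyed by SKU and flag what changed.

    Returns a summary of new, removed, and changed SKUs so each
    JSON export can carry a short change log alongside the data.
    """
    prev_skus, curr_skus = set(previous), set(current)
    changed = [
        sku for sku in prev_skus & curr_skus
        if previous[sku] != current[sku]
    ]
    return {
        "new": sorted(curr_skus - prev_skus),
        "removed": sorted(prev_skus - curr_skus),
        "changed": sorted(changed),
    }
```

A run would then write the current snapshot to a dated JSON file and the diff summary to the log; the same structure works whether the crawler is built on Scrapy, Puppeteer, or something similar.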