I need a reliable, maintainable script that will crawl a single product-catalogue website and pull down every key detail we offer online. The data I'm after includes:

• Product names and full descriptions
• Prices and current availability status
• High-quality product images (downloaded or direct links, whichever is easier to store)
• Any listed product characteristics/specifications

The scraper should navigate all categories, paginate through results, and handle variants so nothing is missed.

A moderate level of polish is important to me: I'd like clean, well-documented code (Python, Node, or a similarly popular language is fine) plus a quick README that shows how to install dependencies, run the script, and adjust rate limits or credentials if the site requires them.

Please organise the output into a structured format (CSV, JSON, or a simple database table) so I can drop the data straight into our catalogue backend. If images are downloaded, place them in a tidy folder with predictable filenames and reference those names in the data file; the image-naming sketch below shows the sort of thing I mean.

Graceful error handling, basic logging, and the ability to resume from the last successful page are must-haves; the first sketch below shows roughly how I picture the crawl loop covering these.

If the site uses dynamic loading, feel free to leverage Selenium, Playwright, or similar headless browser tools (a minimal Playwright example follows the other sketches); otherwise a straightforward requests/BeautifulSoup approach is perfect.

I'll test the script against a small category first, then run a full crawl. Once the output matches what's visible on the site, the job is complete.
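
To make the pagination, rate-limiting, and resume requirements concrete, here is a minimal sketch of the kind of loop I have in mind, using requests and BeautifulSoup. The URL, CSS selectors, and CSV columns are placeholders I made up for illustration; the real site's markup will dictate the actual values.

```python
import csv
import json
import logging
import time
from pathlib import Path

import requests
from bs4 import BeautifulSoup

BASE_URL = "https://example.com/category/widgets"  # placeholder, not the real site
CHECKPOINT = Path("checkpoint.json")
RATE_LIMIT_SECONDS = 2  # politeness delay between page fetches; make this configurable

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")


def load_checkpoint() -> int:
    """Return the last successfully scraped page number, or 0 if starting fresh."""
    if CHECKPOINT.exists():
        return json.loads(CHECKPOINT.read_text())["last_page"]
    return 0


def save_checkpoint(page: int) -> None:
    """Record the last completed page so an interrupted run can resume."""
    CHECKPOINT.write_text(json.dumps({"last_page": page}))


def scrape_page(page: int) -> list[dict]:
    """Fetch one listing page and extract product rows.

    The CSS selectors here are placeholders; swap in whatever the
    real product cards actually use.
    """
    resp = requests.get(BASE_URL, params={"page": page}, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    products = []
    for card in soup.select(".product-card"):  # placeholder selector
        products.append({
            "name": card.select_one(".product-name").get_text(strip=True),
            "price": card.select_one(".price").get_text(strip=True),
            "availability": card.select_one(".stock").get_text(strip=True),
        })
    return products


def main() -> None:
    start = load_checkpoint() + 1
    with open("products.csv", "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["name", "price", "availability"])
        if start == 1:
            writer.writeheader()
        for page in range(start, 10_000):  # upper bound as a safety net
            rows = scrape_page(page)
            if not rows:  # an empty page is treated as the end of the category
                break
            writer.writerows(rows)
            save_checkpoint(page)
            logging.info("Scraped page %d (%d products)", page, len(rows))
            time.sleep(RATE_LIMIT_SECONDS)


if __name__ == "__main__":
    main()
```

Treating an empty listing page as the end of a category is just one assumption; the real site may signal the last page differently (a disabled "next" link, a total-count field), so adjust that check accordingly.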
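For the image requirement, something along these lines would satisfy the "tidy folder with predictable filenames" point. The slugify rule is one reasonable naming choice, not a mandate; any stable, collision-free scheme works.

```python
import re
from pathlib import Path

import requests

IMAGE_DIR = Path("images")
IMAGE_DIR.mkdir(exist_ok=True)


def slugify(name: str) -> str:
    """Turn a product name into a safe, predictable filename stem."""
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")


def download_image(product_name: str, image_url: str) -> str:
    """Download one image and return the relative path to record in the data file."""
    ext = Path(image_url.split("?")[0]).suffix or ".jpg"
    target = IMAGE_DIR / f"{slugify(product_name)}{ext}"
    if not target.exists():  # skip re-downloads when resuming a crawl
        resp = requests.get(image_url, timeout=30)
        resp.raise_for_status()
        target.write_bytes(resp.content)
    return str(target)
```

The returned path (e.g. a hypothetical images/blue-widget.jpg) is what I'd expect to see in the image column of the CSV or JSON output, so the data file and the folder stay in sync.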
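And if the site turns out to load products with JavaScript, a headless-browser fallback like this Playwright sketch would slot in where the requests call sits in the first example. The selector is again a placeholder, and the rendered HTML can be fed to BeautifulSoup exactly as in the static version.

```python
from playwright.sync_api import sync_playwright


def scrape_dynamic_page(url: str) -> str:
    """Render a JavaScript-heavy page and return its final HTML."""
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")
        # Wait for the product grid to appear before reading the DOM.
        page.wait_for_selector(".product-card", timeout=15_000)  # placeholder selector
        html = page.content()
        browser.close()
    return html
```

Treat all three sketches as illustrations of the shape I'm after, not as the required implementation.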