I need a straightforward solution that pulls product details directly into my system without any manual forms or file imports. The data should be captured automatically via web scraping, so the script or small application must be able to connect to one or more target sites, extract the relevant product fields (name, SKU, price, description, images, and stock status), and store everything neatly in a structured format such as CSV, JSON, or a database table that I can easily query later.

Key points:

• Source method: web scraping only; no API keys or sensor feeds are involved.
• Process: fully automatic capture on a scheduled run or trigger I can adjust.
• Output: clean, deduplicated product records ready for further processing.
• Hand-off: commented code, a quick setup guide, and a brief README explaining dependencies and how to add new target URLs down the line.

Python with BeautifulSoup, Scrapy, or similar libraries is fine, but I'm open to alternatives that achieve the same reliability and speed. Please make sure the solution is lightweight, easy to maintain, and keeps website etiquette in mind (respect robots.txt, reasonable request rates). If you already have a working prototype you can adapt, let me know; otherwise, outline your approach, timeline, and any clarifying questions so we can get this running smoothly.
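To make the expected shape of the deliverable concrete, here is a minimal sketch of the workflow described above using requests and BeautifulSoup. All CSS selectors (`.product-name`, `.product-sku`, etc.), the user-agent string, and the output field names are placeholder assumptions, not tied to any real target site; a real implementation would adapt the selectors per site.

```python
# Minimal scraping sketch: robots.txt check, throttled fetching,
# field extraction, SKU-based deduplication, CSV output.
# Selectors and field names below are illustrative placeholders.
import csv
import time
import urllib.robotparser
from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup

FIELDS = ["name", "sku", "price", "description", "image_urls", "in_stock"]

def allowed_by_robots(url, user_agent="product-scraper"):
    """Consult the site's robots.txt before fetching (basic etiquette)."""
    parts = urlparse(url)
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    try:
        rp.read()
    except OSError:
        return True  # robots.txt unreachable; proceed cautiously
    return rp.can_fetch(user_agent, url)

def parse_product(html):
    """Extract one product record; the CSS classes are placeholders."""
    soup = BeautifulSoup(html, "html.parser")
    return {
        "name": soup.select_one(".product-name").get_text(strip=True),
        "sku": soup.select_one(".product-sku").get_text(strip=True),
        "price": soup.select_one(".product-price").get_text(strip=True),
        "description": soup.select_one(".product-description").get_text(strip=True),
        "image_urls": ";".join(img["src"] for img in soup.select(".product-gallery img")),
        "in_stock": soup.select_one(".stock-status") is not None,
    }

def dedupe(records):
    """Keep only the first record seen for each SKU."""
    seen, unique = set(), []
    for rec in records:
        if rec["sku"] not in seen:
            seen.add(rec["sku"])
            unique.append(rec)
    return unique

def scrape(urls, out_path="products.csv", delay=2.0):
    """Fetch each allowed URL at a polite rate and write deduplicated CSV."""
    records = []
    for url in urls:
        if not allowed_by_robots(url):
            continue
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        records.append(parse_product(resp.text))
        time.sleep(delay)  # throttle to keep request rates reasonable
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(dedupe(records))
```

Swapping in Scrapy would mainly replace `scrape()` with a spider class; the per-site selector logic and the dedup/output steps stay conceptually the same, and new target URLs become new entries in the list passed to `scrape()`.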