I need to pull specific text content from a set of webpages and deliver it in clean, well-structured CSV files. The task is straightforward:

• Write or configure a script that visits each URL I provide, identifies the targeted text sections, and extracts them accurately (no images or numeric tables are involved). A minimal sketch of the kind of script and output I have in mind appears at the end of this brief.
• Save every record to one CSV per site, or to a single consolidated file if that's cleaner, as long as the headers are consistent.
• Include clear setup instructions so I can rerun the process later. Python with BeautifulSoup, Scrapy, or a similar tool is fine; if you prefer another language, just make sure it's easy to install.

I'm interested in speed and reliability rather than a fancy UI. If you've built scrapers for public webpages before and can troubleshoot common blockers such as lazy-loaded content or basic CAPTCHAs, this should be quick work.

Please outline:

1. Your proposed tech stack.
2. Turnaround time for an initial working draft.
3. Any assumptions or constraints you'll need me to confirm (e.g., maximum page count, login access).

Once the CSV output matches the samples I provide, the project is complete.
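For reference, here is a minimal sketch of the shape of script I'm describing, assuming Python with requests and BeautifulSoup. The URL list, the CSS selector, and the output filename are placeholders; I'll supply the real targets and sample CSVs once we start.

```python
import csv

import requests
from bs4 import BeautifulSoup

# Placeholder inputs: real URLs and selectors to be supplied.
URLS = [
    "https://example.com/page-1",
    "https://example.com/page-2",
]
SELECTOR = "div.article-body p"  # hypothetical selector for the target text
OUTPUT_FILE = "scraped_text.csv"


def extract_text(url: str) -> list[dict]:
    """Fetch one page and return a row per matched text section."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    return [
        {
            "url": url,
            "section_index": i,
            "text": element.get_text(strip=True),
        }
        for i, element in enumerate(soup.select(SELECTOR), start=1)
    ]


def main() -> None:
    # One consolidated CSV with a consistent header on every run.
    with open(OUTPUT_FILE, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["url", "section_index", "text"])
        writer.writeheader()
        for url in URLS:
            writer.writerows(extract_text(url))


if __name__ == "__main__":
    main()
```

Something along these lines, extended with whatever handling is needed for lazy-loaded content or per-site quirks, is all I'm after.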