I need a robust, fully automated script that visits a specific website, extracts only the visible text on each page I designate, and saves the results to plain-text (.txt) files. No images, tables, or other assets are required; just the clean textual content.

Key points I care about:

• Stability: the script should keep working even if the site's layout shifts slightly, and it should cope with pagination and with light anti-bot measures (simple CAPTCHAs, rate limits).
• Simplicity: URLs or page ranges must be definable from a single place, ideally a config block or command-line argument, so I can schedule runs via cron without editing code.
• Clean output: strip HTML, scripts, and styling so I receive human-readable UTF-8 text. One text file per page is perfect.
• Clear hand-off: include concise setup instructions and any required libraries (Python 3.x with requests/BeautifulSoup or equivalent is fine, but feel free to suggest alternatives).

Acceptance criteria

1. I can run `python scraper.py target_list.txt` on Ubuntu and see separate .txt files created.
2. A sample run demonstrates complete capture of the pages I provide, with no truncated sections or markup residue.
3. The code is commented well enough that I can tweak selectors myself if the site changes.

Once those points are met, we're done. If you have questions about the target site's structure, let me know and I'll share a sample URL privately.
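For reference, here is a minimal sketch of the kind of script I'm picturing, assuming requests + BeautifulSoup. The target-list format (one URL per line, `#` for comments), the output-file naming scheme, the User-Agent string, and the polite delay between requests are all placeholders of mine, not requirements:

```python
#!/usr/bin/env python3
"""Sketch of scraper.py: read URLs from a text file, fetch each page,
strip markup, and write the visible text to UTF-8 .txt files.
Names, selectors, and constants below are illustrative only."""

import sys
import time
from pathlib import Path
from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup

HEADERS = {"User-Agent": "Mozilla/5.0 (compatible; text-scraper/0.1)"}  # placeholder UA
DELAY_SECONDS = 2  # polite pause between requests to stay under rate limits


def visible_text(html: str) -> str:
    """Return human-readable text with scripts, styles, and tags removed."""
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "noscript"]):
        tag.decompose()
    # Keep block boundaries readable and drop blank lines.
    lines = (line.strip() for line in soup.get_text("\n").splitlines())
    return "\n".join(line for line in lines if line)


def filename_for(url: str) -> str:
    """Derive a filesystem-safe .txt name from the URL (assumed scheme)."""
    parsed = urlparse(url)
    slug = (parsed.netloc + parsed.path).strip("/").replace("/", "_") or "index"
    return f"{slug}.txt"


def main(target_list: str) -> None:
    # One URL per line; lines starting with '#' are treated as comments.
    urls = [u.strip() for u in Path(target_list).read_text().splitlines()
            if u.strip() and not u.startswith("#")]
    for url in urls:
        resp = requests.get(url, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        resp.encoding = resp.apparent_encoding  # best-effort charset detection
        text = visible_text(resp.text)
        out = Path(filename_for(url))
        out.write_text(text, encoding="utf-8")
        print(f"wrote {out} ({len(text)} characters)")
        time.sleep(DELAY_SECONDS)


if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit("usage: python scraper.py target_list.txt")
    main(sys.argv[1])
```

This sketch deliberately leaves out CAPTCHA handling and retry logic; I'm flagging those as open points under the stability bullet rather than prescribing an approach, so suggest whatever you think fits.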