I need a compact Python 3 script that can visit a single specified website, pull all visible text content on the pages I indicate, and save every line into a clean, well-structured CSV file. The CSV will serve as an archive, so column order and consistency matter: no missing separators, stray quotes, or encoding issues (UTF-8 throughout). Feel free to lean on requests, BeautifulSoup, Selenium, or any other open-source library you prefer, as long as the final solution runs from the command line on a standard Windows or Linux machine without extra paid dependencies.

Deliverables
• .py script with clear inline comments
• Sample CSV produced by the script, showing correct structure
• Quick README explaining setup, required Python packages, and how to adjust the target URLs

Acceptance criteria
The script must:
1. Retrieve only text content (no images, no binary blobs).
2. Write the extracted text to CSV in a single pass, one logical unit per row.
3. Handle pagination or multi-page navigation if needed for the site I provide.
4. Exit cleanly without uncaught errors and log basic progress to the console.

Once the script meets these points I can integrate it into my broader archiving workflow.
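For reference, here is a minimal sketch of the core extraction-and-CSV step. It uses only the Python standard library (html.parser and csv) so it runs on Windows or Linux with no extra dependencies; a real solution would likely swap in requests/BeautifulSoup as the brief allows. The function names, the skip list, and the column order (page_url, line_no, text) are illustrative assumptions, not part of the spec:

```python
import csv
from html.parser import HTMLParser


class VisibleTextParser(HTMLParser):
    """Collect visible text, skipping <script>/<style>/<noscript> content."""
    SKIP = {"script", "style", "noscript"}

    def __init__(self):
        super().__init__()
        self.lines = []
        self._skip_depth = 0  # >0 while inside a non-visible element

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth > 0:
            self._skip_depth -= 1

    def handle_data(self, data):
        if self._skip_depth == 0:
            text = data.strip()
            if text:  # drop whitespace-only fragments between tags
                self.lines.append(text)


def extract_visible_text(html):
    """Return the visible text of an HTML page as a list of stripped lines."""
    parser = VisibleTextParser()
    parser.feed(html)
    return parser.lines


def write_rows(rows, path):
    """Write (page_url, line_no, text) tuples to CSV in one pass, UTF-8."""
    # newline="" prevents blank lines on Windows; QUOTE_MINIMAL quotes only
    # fields that contain separators or quotes, keeping the file clean.
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f, quoting=csv.QUOTE_MINIMAL)
        writer.writerow(["page_url", "line_no", "text"])  # fixed column order
        writer.writerows(rows)
```

A command-line wrapper would fetch each target URL (e.g. with urllib.request or requests), call extract_visible_text, number the lines, and hand everything to write_rows, printing a short progress message per page. Opening the file with an explicit encoding and newline="" is what guarantees the "UTF-8 throughout, no stray separators" requirement on both platforms.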