A reusable Python script is required to automate data scraping from a series of publicly accessible web pages. The script should accept a list of URLs, navigate through any paginated content, extract the specified fields, and save the results to CSV and JSON. The task suits someone with an intermediate grasp of Python who is comfortable working with libraries such as requests, BeautifulSoup, and pandas, or, when a site relies on JavaScript, Selenium or Playwright. Clear, well-commented code and concise setup instructions are essential so the script can be dropped into an existing workflow without modification.

Acceptance criteria and deliverables:

• Fully functional .py script that runs from the command line.
• Configuration section (or .env file) for the URL list and field selectors.
• Output in both CSV and JSON, written to an /output directory created by the script if missing.
• A brief README explaining prerequisites, setup, and sample usage.
• Confirmation that the scraper respects robots.txt and rate limits to avoid blocking.

The project is straightforward but demands attention to clean structure, error handling, and maintainability consistent with intermediate-level best practices.
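To illustrate the expected shape of a submission, here is a minimal stdlib-only sketch covering field extraction, a robots.txt check, rate limiting, and dual CSV/JSON output. It is not the deliverable itself: a real bid would likely use requests and BeautifulSoup as noted above, and every name here (scrape, FieldExtractor, the selector format mapping a field name to a tag/class pair) is illustrative, not a required interface.

```python
"""Sketch of the requested scraper using only the standard library.

Selectors are assumed to be {"field_name": ("tag", "css_class")} pairs;
a production version would likely swap html.parser for BeautifulSoup.
"""
import csv
import json
import os
import time
import urllib.request
import urllib.robotparser
from html.parser import HTMLParser
from urllib.parse import urlparse


class FieldExtractor(HTMLParser):
    """Captures the text of the first tag matching each configured selector."""

    def __init__(self, selectors):
        super().__init__()
        self.selectors = selectors          # {"title": ("div", "title"), ...}
        self._capture = None                # field currently being captured
        self.record = {}

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        for name, (want_tag, want_class) in self.selectors.items():
            if tag == want_tag and want_class in classes:
                self._capture = name

    def handle_data(self, data):
        if self._capture and data.strip():
            self.record.setdefault(self._capture, data.strip())
            self._capture = None


def extract_fields(html, selectors):
    parser = FieldExtractor(selectors)
    parser.feed(html)
    return parser.record


def allowed_by_robots(url, agent="scraper-bot"):
    """Check robots.txt before fetching; assume allowed if it is unreachable."""
    parts = urlparse(url)
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    try:
        rp.read()
    except OSError:
        return True
    return rp.can_fetch(agent, url)


def save_outputs(rows, out_dir="output"):
    """Write results.json and results.csv, creating out_dir if missing."""
    os.makedirs(out_dir, exist_ok=True)
    with open(os.path.join(out_dir, "results.json"), "w", encoding="utf-8") as f:
        json.dump(rows, f, indent=2)
    if rows:
        with open(os.path.join(out_dir, "results.csv"), "w",
                  newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
            writer.writeheader()
            writer.writerows(rows)


def scrape(urls, selectors, delay=1.0):
    """Fetch each allowed URL, extract fields, and pause between requests."""
    rows = []
    for url in urls:
        if not allowed_by_robots(url):
            continue
        with urllib.request.urlopen(url) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        rows.append(extract_fields(html, selectors))
        time.sleep(delay)  # crude rate limit; a token bucket would be smoother
    return rows
```

Pagination is deliberately omitted for brevity; in practice scrape would follow a "next page" selector per URL until it disappears, which keeps the per-site configuration in the same selectors structure.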