I need a reliable script that will pull public-record information from an ordinary county website and reorganise it so I can work with the data quickly, ideally in a clean CSV or Excel file. That data is then run through a second website to qualify it against some additional requirements.

The first site does not provide an export function, so the scraper will have to crawl the relevant pages, capture every field that appears in the public-record tables, and normalise names, dates, and addresses before saving. The second site, by contrast, does offer some export capabilities.

Python with BeautifulSoup, Scrapy, or a lightweight Selenium setup is fine, as long as the final code is readable and I can rerun it myself whenever new records appear. Please keep throttling, polite headers, and retries in mind so we stay within the county’s usage limits.

Deliverables:
• Fully commented source code
• One-click run instructions (virtualenv / requirements.txt)
• The first successful data pull in CSV or XLSX format

I will consider the job complete when the script finishes without errors, the dataset matches the fields shown on the site, and spot-checks confirm data accuracy.
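
To give a clearer idea of the approach I have in mind, below is a minimal sketch using requests and BeautifulSoup with throttling, retries, a polite User-Agent, and a CSV export. Every URL, CSS selector, column position, and date format in it is a placeholder I invented for illustration; the real values will depend on the county site's actual layout, so please treat it as a starting point rather than a specification.

import csv
import time
from datetime import datetime

import requests
from bs4 import BeautifulSoup
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Placeholder values -- the real URL, contact address, and delay
# should match the county site and its published usage limits.
BASE_URL = "https://example-county.gov/records"
HEADERS = {"User-Agent": "public-records-script (contact: me@example.com)"}
REQUEST_DELAY = 2.0  # seconds between requests to stay well under usage limits


def make_session() -> requests.Session:
    """Build a session with polite headers and retry/backoff for transient errors."""
    session = requests.Session()
    retries = Retry(total=3, backoff_factor=1.0,
                    status_forcelist=[429, 500, 502, 503])
    session.mount("https://", HTTPAdapter(max_retries=retries))
    session.headers.update(HEADERS)
    return session


def normalise_date(raw: str) -> str:
    """Normalise e.g. '03/07/2024' to ISO 'YYYY-MM-DD'; the input format is a guess."""
    try:
        return datetime.strptime(raw, "%m/%d/%Y").strftime("%Y-%m-%d")
    except ValueError:
        return raw  # leave unparseable values untouched for manual review


def scrape_page(session: requests.Session, url: str) -> list[dict]:
    """Parse one listing page into row dicts. Selectors and column order are placeholders."""
    response = session.get(url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    rows = []
    for tr in soup.select("table.records tr")[1:]:  # skip the header row
        cells = [td.get_text(strip=True) for td in tr.find_all("td")]
        if len(cells) >= 3:
            rows.append({
                "name": cells[0].strip().title(),
                "date": normalise_date(cells[1]),
                "address": " ".join(cells[2].split()),
            })
    return rows


def main() -> None:
    session = make_session()
    all_rows: list[dict] = []
    for page in range(1, 4):  # placeholder page range; the real crawl would follow pagination links
        all_rows.extend(scrape_page(session, f"{BASE_URL}?page={page}"))
        time.sleep(REQUEST_DELAY)  # throttle between requests
    with open("records.csv", "w", newline="", encoding="utf-8") as fh:
        writer = csv.DictWriter(fh, fieldnames=["name", "date", "address"])
        writer.writeheader()
        writer.writerows(all_rows)


if __name__ == "__main__":
    main()

Scrapy or Selenium is equally acceptable if the pages turn out to need JavaScript rendering; the throttling, retry, and normalisation behaviour shown above is what matters most to me.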