I will share a concise PDF that spells out the target URLs, the search logic, and the specific record fields I need. Your job is to turn those written steps into a lightweight, fully automated research tool that fetches web data (more precisely, public records) and saves everything in clean JSON.

Here's the workflow I have in mind: the script reads the parameters from the PDF, visits each listed source, navigates any simple pagination, extracts the designated fields, and writes them to a single JSON file per run. A quick command-line invocation ("python main.py" or similar) should be all it takes to trigger the entire process. A rough sketch of what I mean follows at the end of this brief.

Deliverables
• Complete, well-commented source code
• A sample JSON output generated from a short test run
• A brief README with setup instructions, dependencies, and any runtime flags

Acceptance criteria
• Matches every extraction rule and field name outlined in the PDF
• Runs on Windows, macOS, and Linux without edits (a virtualenv or container is fine)
• Respects polite scraping practices: rate limiting and robots.txt compliance

Python with requests/BeautifulSoup or Scrapy is preferred, but I'm open to Node or Go if you make setup effortless. No UI is required; speed and accuracy are the priorities.
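To make the intended shape concrete, here is a minimal sketch of the kind of script I'm describing, assuming Python with requests and BeautifulSoup. The source URL, CSS selectors, field names, and user-agent string are placeholders (the real values come from the PDF), and the PDF-parsing step is left out; it only illustrates the fetch, paginate, extract, and write-JSON loop with robots.txt checking and rate limiting.

```python
"""Minimal sketch of the scraper described above.

All selectors, field names, and the start URL are placeholders for
illustration only; the real values would be read from the PDF.
"""
import json
import time
from urllib import robotparser
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

USER_AGENT = "public-records-research-bot/0.1"    # hypothetical identifier
DELAY_SECONDS = 2.0                               # polite pause between requests
START_URL = "https://example.gov/records?page=1"  # placeholder source URL


def allowed_by_robots(url: str) -> bool:
    """Check the host's robots.txt before fetching (re-read each time for simplicity)."""
    parts = urlparse(url)
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()
    return rp.can_fetch(USER_AGENT, url)


def scrape(start_url: str) -> list[dict]:
    """Follow simple 'next page' pagination and extract one record per table row."""
    records, url = [], start_url
    session = requests.Session()
    session.headers["User-Agent"] = USER_AGENT
    while url:
        if not allowed_by_robots(url):
            break                                  # respect robots.txt
        resp = session.get(url, timeout=30)
        resp.raise_for_status()
        soup = BeautifulSoup(resp.text, "html.parser")
        for row in soup.select("table.records tr.record"):   # placeholder selector
            records.append({
                "name": row.select_one("td.name").get_text(strip=True),   # placeholder field
                "date": row.select_one("td.date").get_text(strip=True),   # placeholder field
            })
        next_link = soup.select_one("a.next")      # placeholder pagination selector
        url = urljoin(url, next_link["href"]) if next_link else None
        time.sleep(DELAY_SECONDS)                  # rate limiting
    return records


if __name__ == "__main__":
    data = scrape(START_URL)
    with open("output.json", "w", encoding="utf-8") as fh:
        json.dump(data, fh, indent=2, ensure_ascii=False)
    print(f"Wrote {len(data)} records to output.json")
```

On top of this skeleton, the main additions would be a step that reads the parameters from the PDF (pdfplumber or a similar library), the exact field names and extraction rules from that document, and whatever runtime flags you document in the README.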