I’m looking to commission a flexible data-scraping script or small application that I can point at almost any industry niche and reliably collect the information I need. The core requirement is simple: pull data from websites, APIs, and even direct database connections, then deliver everything in a clean, well-structured Excel workbook ready for analysis.

Here’s what I have in mind:

• Sources: The solution must handle standard public websites (including those with JavaScript-rendered content), REST or JSON-based APIs, and common relational databases. A single, unified workflow is ideal, but separate modules that feed the same Excel file are fine too.

• Output: An .xlsx file with clearly labeled sheets and columns. Column names should match the field definitions I specify when I run the scraper.

• Configuration: I’d like to set target URLs, endpoints, queries, and connection strings myself, either in a simple config file or as command-line arguments, without touching the core code each time. A rough sketch of the config-driven workflow I picture appears at the end of this brief.

• Reliability: It should tolerate pagination, rate limits, and simple anti-bot measures. If a CAPTCHA or login wall appears, the script should at least flag the affected record and keep moving; the second sketch at the end shows roughly the behavior I mean.

• Tech stack: Python (Scrapy, BeautifulSoup, Selenium, or Requests) is my default preference, but I’m open to Node.js, Go, or similar if you can demonstrate equal reliability.

Acceptance test:

1. I provide a small list of sample targets across the three source types.
2. Your solution runs on my machine (Windows 10) or in a Docker container with no extra licensing costs.
3. The produced Excel file contains every requested field, correctly mapped, with no duplicate or missing rows.

Please include a brief outline of your planned approach, the libraries you’ll use, and a realistic timeline for a first working version.
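
To make the configuration and output requirements concrete, here is a rough sketch of the shape I picture, in Python with requests, BeautifulSoup, sqlite3, and pandas (with openpyxl installed for .xlsx writing). Every name in it, including CONFIG, the fetch_* helpers, the example URLs, selectors, and connection string, is a placeholder of my own invention, not a requirement.

```python
# Hypothetical, minimal sketch of a config-driven scrape-to-Excel workflow.
# All identifiers, URLs, selectors, and field names below are illustrative.
import sqlite3

import pandas as pd
import requests
from bs4 import BeautifulSoup

# In the real tool this would come from a config file or CLI arguments.
CONFIG = {
    "output": "results.xlsx",
    "sources": [
        {
            "name": "products_api",      # becomes the sheet name
            "type": "api",
            "url": "https://example.com/api/products",
            # source field -> output column, per my field definitions
            "fields": {"id": "id", "title": "name", "price": "price_usd"},
        },
        {
            "name": "listings_site",
            "type": "web",
            "url": "https://example.com/listings",
            "row_selector": "table#listings tbody tr",
            "columns": ["address", "price"],
        },
        {
            "name": "inventory_db",
            "type": "database",
            "dsn": "inventory.sqlite",   # placeholder connection string
            "query": "SELECT sku, qty FROM stock",
        },
    ],
}

def fetch_api(source):
    """Pull JSON records (assumed to be a list of objects) and rename
    fields according to the configured mapping."""
    records = requests.get(source["url"], timeout=30).json()
    mapping = source["fields"]
    return [{out: rec.get(src) for src, out in mapping.items()} for rec in records]

def fetch_web(source):
    """Scrape static HTML table rows via CSS selectors from the config.
    JavaScript-rendered pages would need Selenium or similar instead."""
    html = requests.get(source["url"], timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    return [
        {col: cell.get_text(strip=True)
         for col, cell in zip(source["columns"], row.select("td"))}
        for row in soup.select(source["row_selector"])
    ]

def fetch_database(source):
    """Run the configured query and return the rows as dicts."""
    with sqlite3.connect(source["dsn"]) as conn:
        conn.row_factory = sqlite3.Row
        return [dict(row) for row in conn.execute(source["query"])]

def run(config):
    """Write one clearly labeled sheet per source into a single workbook,
    with columns named exactly as configured."""
    handlers = {"api": fetch_api, "web": fetch_web, "database": fetch_database}
    with pd.ExcelWriter(config["output"]) as writer:
        for source in config["sources"]:
            df = pd.DataFrame(handlers[source["type"]](source))
            df.drop_duplicates(inplace=True)  # guard against duplicate rows
            df.to_excel(writer, sheet_name=source["name"], index=False)

if __name__ == "__main__":
    run(CONFIG)
```

Running it would produce results.xlsx with one sheet per configured source; adding a new target should only ever mean adding a new entry to the config.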
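
And here is roughly the flag-and-continue behavior I mean on the reliability point, again only a sketch: the CAPTCHA/login-wall detection is a deliberately crude text heuristic that would need per-site tuning, and all names are placeholders.

```python
# Hypothetical sketch: back off on rate limits, flag CAPTCHA/login walls,
# and keep moving instead of aborting the whole run.
import time

import requests

def fetch_with_care(urls, max_retries=3):
    session = requests.Session()
    results = []
    for url in urls:
        record = {"url": url, "status": "ok", "html": None}
        for attempt in range(max_retries):
            try:
                resp = session.get(url, timeout=30)
            except requests.RequestException:
                record["status"] = "failed"
                break
            if resp.status_code == 429:  # rate limited: honor Retry-After if given
                retry_after = resp.headers.get("Retry-After", "")
                time.sleep(int(retry_after) if retry_after.isdigit() else 2 ** attempt)
                continue
            # Crude heuristic for a CAPTCHA or login wall.
            if resp.status_code in (401, 403) or "captcha" in resp.text.lower():
                record["status"] = "flagged"  # flag the record and keep moving
            else:
                record["html"] = resp.text
            break
        else:
            record["status"] = "failed"  # retries exhausted
        results.append(record)
    return results
```

Flagged and failed records would then land in the Excel output (or a log) with their status, so nothing silently disappears.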