I have access to a website that lists 1,349,184 companies, and I need every record pulled into a single Excel workbook. The five fields I want for each company are: Name, Telephone number, Email, Number of Employees, and City.

I will provide the site URL along with working login credentials; once signed in, all pages are visible, but they sit behind standard session handling that Selenium can navigate. The script must be written in Python with Selenium as the driver layer. Feel free to add BeautifulSoup, pandas, or any other open-source helper libraries that make the workflow cleaner and faster, but Selenium should do the heavy lifting for navigation and data capture.

Reliability matters more than raw speed: I need the full 1.3M-row dataset delivered intact, with no missing or shuffled columns and each field exactly where it belongs. Note that Excel caps each worksheet at 1,048,576 rows, so the data will need to span at least two sheets within the workbook.

Key deliverables
• A well-commented .py file (or small package) that logs in, iterates through every company page, extracts the five data points, and writes them to an .xlsx file in the stated column order.
• The completed Excel file itself.
• A short README describing dependencies, setup steps, and run-time options (e.g., headless vs. headed, optional delays or retries).

Acceptance criteria
1. Running `python scraper.py` on a fresh machine (after `pip install -r requirements.txt`) produces an Excel workbook containing exactly 1,349,184 data rows plus headers, split across worksheets only as far as the per-sheet row cap requires.
2. Random spot checks against the live site confirm accuracy.
3. The script exits gracefully if the connection drops or the session times out, and it can resume from the last processed record.

If you have experience with large-scale scraping projects and can demonstrate best practices for speed throttling, session reuse, and polite scraping, I'm ready to get started right away. To make expectations concrete, a few rough sketches of the patterns I have in mind follow below.
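First, the core login-and-extract loop. This is a minimal sketch assuming Selenium 4; BASE_URL, the credential environment variables (SCRAPER_USER, SCRAPER_PASS), and every CSS selector are hypothetical placeholders that the real site's markup will replace.

```python
# A minimal sketch of the login-and-extract loop, assuming Selenium 4.
# BASE_URL, the credential environment variables, and every CSS selector
# below are hypothetical placeholders; the real site's markup will dictate
# the actual locators.
import os

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

BASE_URL = "https://example.com"  # placeholder; the real URL is supplied privately


def make_driver(headless: bool = True) -> webdriver.Chrome:
    opts = webdriver.ChromeOptions()
    if headless:
        opts.add_argument("--headless=new")
    return webdriver.Chrome(options=opts)


def login(driver: webdriver.Chrome) -> None:
    driver.get(f"{BASE_URL}/login")  # hypothetical login route
    driver.find_element(By.NAME, "username").send_keys(os.environ["SCRAPER_USER"])
    driver.find_element(By.NAME, "password").send_keys(os.environ["SCRAPER_PASS"])
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
    # Block until a post-login element confirms the session is live.
    WebDriverWait(driver, 30).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, ".company-list"))
    )


def extract_company(driver: webdriver.Chrome, company_url: str) -> dict:
    driver.get(company_url)

    def text(sel: str) -> str:
        return driver.find_element(By.CSS_SELECTOR, sel).text.strip()

    # Selectors are stand-ins; the five fields and their order are the spec.
    return {
        "Name": text(".company-name"),
        "Telephone number": text(".company-phone"),
        "Email": text(".company-email"),
        "Number of Employees": text(".company-size"),
        "City": text(".company-city"),
    }
```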
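Second, acceptance criterion 3 (graceful exit and resume). One workable pattern is an append-only checkpoint file that doubles as the raw output, so a rerun simply skips what has already been captured. The file name, retry count, back-off schedule, and delay window below are all assumptions to tune; extract_company() is the function sketched above.

```python
# A checkpoint/resume pattern for acceptance criterion 3. The append-only CSV
# doubles as the checkpoint: on restart, already-captured URLs are skipped.
# The file name, retry count, back-off schedule, and delay window are all
# assumptions to tune; extract_company() is the function sketched above.
import csv
import pathlib
import random
import time

CHECKPOINT = pathlib.Path("progress.csv")
FIELDS = ["Name", "Telephone number", "Email", "Number of Employees", "City"]


def load_done() -> set:
    """Return the set of company URLs already captured in a previous run."""
    if not CHECKPOINT.exists():
        return set()
    with CHECKPOINT.open(newline="", encoding="utf-8") as f:
        return {row["url"] for row in csv.DictReader(f)}


def append_row(url: str, record: dict) -> None:
    """Append one record, writing the header only when the file is new."""
    new_file = not CHECKPOINT.exists()
    with CHECKPOINT.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["url", *FIELDS])
        if new_file:
            writer.writeheader()
        writer.writerow({"url": url, **record})


def scrape_all(driver, urls, retries: int = 3) -> None:
    done = load_done()
    for url in urls:
        if url in done:
            continue  # resume: skip records captured on a previous run
        for attempt in range(retries):
            try:
                append_row(url, extract_company(driver, url))
                break
            except Exception:
                if attempt == retries - 1:
                    raise  # give up after N attempts; a rerun resumes here
                time.sleep(5 * (attempt + 1))  # back off before retrying
        time.sleep(random.uniform(0.5, 1.5))  # polite throttling between pages
```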
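Finally, producing the deliverable workbook. A sketch of the conversion step is below: openpyxl's write-only mode keeps memory flat at this row count, and the data is split across worksheets because of Excel's 1,048,576-row-per-sheet cap. The sheet-naming scheme is my assumption, and the CSV layout matches the checkpoint sketch above.

```python
# Converting the checkpoint CSV into the deliverable workbook. openpyxl's
# write-only mode keeps memory flat at this row count, and the data is split
# across worksheets because Excel caps each sheet at 1,048,576 rows
# (header included). The CSV layout matches the checkpoint sketch above.
import csv

from openpyxl import Workbook

SHEET_LIMIT = 1_048_576  # Excel's hard per-sheet row cap
FIELDS = ["Name", "Telephone number", "Email", "Number of Employees", "City"]


def csv_to_xlsx(csv_path: str, xlsx_path: str) -> None:
    wb = Workbook(write_only=True)  # no default sheet is created in this mode
    ws = None
    rows_in_sheet = SHEET_LIMIT  # force a new sheet before the first record
    sheet_no = 0
    with open(csv_path, newline="", encoding="utf-8") as f:
        for record in csv.DictReader(f):
            if rows_in_sheet >= SHEET_LIMIT:
                sheet_no += 1
                ws = wb.create_sheet(title=f"Companies {sheet_no}")
                ws.append(FIELDS)  # repeat the header row on every sheet
                rows_in_sheet = 1
            ws.append([record[k] for k in FIELDS])  # checkpoint "url" column dropped
            rows_in_sheet += 1
    wb.save(xlsx_path)
```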