I need a Python-based scraper that pulls complete car-listing information from CarGurus.ca every day. At a minimum, the script has to capture make, model, price, and mileage, but in practice I want every publicly visible field on each listing so that nothing useful is missed.

Here’s what matters to me:

• Reliability – the code must navigate pagination, work around basic anti-bot measures (rotating user-agents, respectful delays), and raise clear errors if the site layout changes.
• Clean output – save to CSV or an SQLite database with consistent column names, ready for later analysis.

You’re free to choose libraries you trust (requests, BeautifulSoup, Selenium, Scrapy, Playwright, etc.); just document any setup steps and keep third-party dependencies to a minimum.

Deliverable: a git-ready project folder containing the scraper, a brief README with run and scheduling instructions, and a sample output file generated from at least a few live pages so I can confirm field coverage.
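To make the reliability and clean-output requirements concrete, here is a minimal standard-library sketch of the pieces I have in mind: user-agent rotation, a respectful randomized delay between requests, and a CSV writer with a fixed column schema. The user-agent strings and the column list are placeholder assumptions, not the real CarGurus.ca field set; the actual fetch/parse logic (requests, BeautifulSoup, etc.) would plug in around these helpers.

```python
import csv
import random
import time
from itertools import cycle

# Placeholder pool of common desktop user-agents to rotate through;
# swap in a maintained list before real use.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36",
]
_ua_pool = cycle(USER_AGENTS)

def next_headers() -> dict:
    """Return request headers carrying the next user-agent in rotation."""
    return {"User-Agent": next(_ua_pool)}

def polite_delay(base: float = 2.0, jitter: float = 1.5) -> float:
    """Sleep for base plus a random jitter between requests; returns seconds slept."""
    delay = base + random.uniform(0, jitter)
    time.sleep(delay)
    return delay

# Hypothetical minimum schema -- the real scraper would extend this with
# every publicly visible field found on a listing page.
FIELDNAMES = ["make", "model", "price", "mileage", "listing_url"]

def write_csv(path: str, listings: list[dict]) -> None:
    """Write listings with consistent columns; unknown keys are dropped,
    missing keys become empty cells rather than shifting the layout."""
    with open(path, "w", newline="", encoding="utf-8") as fh:
        writer = csv.DictWriter(fh, fieldnames=FIELDNAMES, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(listings)
```

In a daily run, the main loop would call `next_headers()` and `polite_delay()` around each page fetch, accumulate parsed listing dicts, and finish with one `write_csv()` call so the output schema stays identical from run to run.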