I’m building a sports-modelling pipeline and now need a robust web-scraping module to feed it fresh numbers every week. The target sites all publish publicly available match information and statistics; links will be shared once we start.

Here’s what I’m after:

• Clean, well-documented code (Python preferred; think Requests/BeautifulSoup, Scrapy or another framework you’re comfortable with) that automatically collects the required data from the specified websites. A rough sketch of the sort of thing I mean is at the bottom of this post.
• An automated weekly run (cron job, Windows Task Scheduler or a simple cloud function is fine), as long as it grabs the latest data without manual intervention.
• Output in CSV or JSON, plus a quick read-me so I can slot the files straight into my existing modelling workflow.
• Basic error handling and logging so I know when a source page changes or a grab fails.

Acceptance criteria: the script runs on my machine, captures every required field from each site, and produces a correctly formatted file on schedule for two consecutive weekly pulls.

If this sounds like your kind of task, let’s get it scraping.
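
To make the brief concrete, here is a minimal sketch of the kind of script I have in mind: Requests + BeautifulSoup, CSV output, and logging so failures show up. The URL, CSS selector and field names below are placeholders I’ve invented for illustration, not the real targets.

    """Illustrative weekly scraper sketch; URL, selectors and fields are placeholders."""
    import csv
    import logging
    from datetime import date

    import requests
    from bs4 import BeautifulSoup

    # Placeholder target and field list; the real values come from the agreed sites.
    SOURCE_URL = "https://example.com/fixtures"                 # hypothetical
    OUTPUT_CSV = f"matches_{date.today():%Y%m%d}.csv"
    FIELDS = ["home_team", "away_team", "score", "match_date"]  # illustrative field set

    logging.basicConfig(filename="scraper.log", level=logging.INFO,
                        format="%(asctime)s %(levelname)s %(message)s")
    log = logging.getLogger(__name__)


    def fetch_rows(url: str) -> list[dict]:
        """Download the page and pull one dict per match row."""
        response = requests.get(url, timeout=30)
        response.raise_for_status()                     # surface HTTP failures in the log
        soup = BeautifulSoup(response.text, "html.parser")
        rows = []
        for tr in soup.select("table.results tr"):      # selector is a placeholder
            cells = [td.get_text(strip=True) for td in tr.select("td")]
            if len(cells) != len(FIELDS):               # layout changed or header row: log and skip
                log.warning("Unexpected row shape (%d cells): %s", len(cells), cells)
                continue
            rows.append(dict(zip(FIELDS, cells)))
        return rows


    def main() -> None:
        try:
            rows = fetch_rows(SOURCE_URL)
            with open(OUTPUT_CSV, "w", newline="", encoding="utf-8") as fh:
                writer = csv.DictWriter(fh, fieldnames=FIELDS)
                writer.writeheader()
                writer.writerows(rows)
            log.info("Wrote %d rows to %s", len(rows), OUTPUT_CSV)
        except Exception:
            log.exception("Weekly pull failed")         # a broken source shows up in the log
            raise


    if __name__ == "__main__":
        main()

For the weekly schedule, a one-line crontab entry (or the Task Scheduler equivalent on Windows) is all I expect, something along the lines of: 0 6 * * 1 /usr/bin/python3 /path/to/scraper.py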