I need a robust web-scraping solution that pulls every public record from https://www.clinicaltrialsregister.eu, including the information hidden behind the “View results” tab, and exports it to a single, well-structured CSV file. The data has to refresh automatically each day so that I always hold a current snapshot of the full registry. The minimum fields I will audit first are the Study Title, Study Results, and Study Dates, but the scraper must capture every other table, note, and metadata point the site exposes, without exception. Pagination, multi-language entries, and the PDF attachments that sometimes appear inside the results section all need to be handled gracefully.

Please code the solution so it can run headless on a Linux server; Python with requests, BeautifulSoup, Selenium, or Scrapy is fine as long as it is reliable and well documented. Rough sketches of the crawl loop and the daily-run wrapper I have in mind are appended at the end of this brief.

Deliverables:
• An executable script (plus requirements.txt) that performs the full crawl and writes/overwrites a CSV.
• A brief README explaining setup, scheduling for the daily run (cron is OK), and how errors are logged.
• One initial full dataset generated by your script so I can validate completeness.

I will consider the work complete when consecutive runs show record counts that match the live site, and random spot-checks confirm that every data point on the “View results” pages is present in the CSV.
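
To make the scope concrete, here is a minimal sketch of the kind of crawl loop I have in mind, using requests and BeautifulSoup. The search endpoint, query parameters, and CSS selectors are my assumptions and would need to be verified against the live site; the real script must also follow each detail_url into the protocol and “View results” pages and flatten every field there, which this sketch omits.

```python
import csv
import time

import requests
from bs4 import BeautifulSoup

BASE = "https://www.clinicaltrialsregister.eu"
SEARCH_URL = BASE + "/ctr-search/search"  # assumed search endpoint; verify against the live site


def fetch_results_page(page: int) -> BeautifulSoup:
    """Download one page of search results and return it parsed."""
    resp = requests.get(SEARCH_URL, params={"query": "", "page": page}, timeout=30)
    resp.raise_for_status()
    return BeautifulSoup(resp.text, "html.parser")


def parse_summaries(soup: BeautifulSoup) -> list:
    """Pull a minimal record from each trial summary box on a results page.
    'div.result' and 'first <a> is the title link' are placeholder selectors."""
    rows = []
    for box in soup.select("div.result"):
        link = box.select_one("a")
        rows.append({
            "study_title": link.get_text(strip=True) if link else "",
            "detail_url": BASE + link["href"] if link and link.has_attr("href") else "",
        })
    return rows


def crawl(max_pages: int = 5000, out_path: str = "euctr_snapshot.csv") -> None:
    """Walk the paginated search results and overwrite one CSV snapshot."""
    all_rows = []
    for page in range(1, max_pages + 1):
        rows = parse_summaries(fetch_results_page(page))
        if not rows:        # an empty page means we have run past the last result
            break
        all_rows.extend(rows)
        time.sleep(1)       # polite crawl delay between requests
    with open(out_path, "w", newline="", encoding="utf-8") as fh:
        writer = csv.DictWriter(fh, fieldnames=["study_title", "detail_url"])
        writer.writeheader()
        writer.writerows(all_rows)


if __name__ == "__main__":
    crawl(max_pages=3)  # small trial run; the delivered script should crawl everything
```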
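
For the daily refresh and error logging, something along these lines would satisfy me. The log path, script path, and cron timing below are placeholders, not requirements.

```python
# Daily-run wrapper sketch. Scheduled via cron, for example:
#   15 2 * * * /usr/bin/python3 /opt/euctr/run_crawl.py >> /var/log/euctr/cron.out 2>&1
import logging
import sys
from logging.handlers import RotatingFileHandler


def configure_logging(log_path: str = "euctr_crawl.log") -> None:
    """Send all messages, including tracebacks, to a size-capped rotating log file."""
    handler = RotatingFileHandler(log_path, maxBytes=5_000_000, backupCount=7)
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(message)s",
        handlers=[handler],
    )


def main() -> int:
    configure_logging()
    try:
        logging.info("daily crawl started")
        # crawl()  # the full-registry crawl from the previous sketch would run here
        logging.info("daily crawl finished")
        return 0
    except Exception:
        logging.exception("daily crawl failed")  # full traceback goes to the log file
        return 1


if __name__ == "__main__":
    sys.exit(main())
```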