I need the text content from a specific public website pulled into a clean, structured Excel file. Once I share the site URL, your job will be to:

• capture every relevant text field across all pages (no images required)
• normalise and de-duplicate the data so each row is consistent
• hand over both the final .xlsx file and a well-commented script (Python, using BeautifulSoup, Scrapy, or Selenium, whichever you prefer) so I can rerun the extraction later.

The site is fully public with no login, but some sections use dynamic loading, so handling JavaScript-rendered content may be necessary. Accuracy matters more than speed; I’ll spot-check the Excel file against the live pages before sign-off.

Please confirm which library you intend to use and the estimated turnaround time, and flag any rate-limiting or captcha issues you anticipate so we can address them up front.
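To make the deliverable concrete, here is a minimal sketch of the kind of script I would expect back. It is only an illustration: the URL, CSS selectors, page count, and column names are all placeholders, since the real site hasn’t been shared yet.

```python
# Minimal sketch of the requested pipeline: crawl pages, pull text
# fields, normalise and de-duplicate, then write an .xlsx file.
# The URL, CSS selectors, page count, and column names below are
# placeholders, not the real target site.
import time

import pandas as pd
import requests
from bs4 import BeautifulSoup

BASE_URL = "https://example.com/listings?page={}"  # placeholder URL
HEADERS = {"User-Agent": "Mozilla/5.0 (compatible; extraction-script)"}


def clean_text(node) -> str:
    """Collapse whitespace in a tag's text; empty string if the tag is missing."""
    return node.get_text(" ", strip=True) if node else ""


def scrape_page(page: int) -> list[dict]:
    """Fetch one page and extract the text fields of every record on it."""
    resp = requests.get(BASE_URL.format(page), headers=HEADERS, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    return [
        {
            "title": clean_text(item.select_one("h2")),            # placeholder selector
            "description": clean_text(item.select_one("p.desc")),  # placeholder selector
        }
        for item in soup.select("div.record")                      # placeholder selector
    ]


def main() -> None:
    rows = []
    for page in range(1, 11):  # placeholder page count
        rows.extend(scrape_page(page))
        time.sleep(1)  # polite delay; helps avoid tripping rate limits
    df = pd.DataFrame(rows)
    df = df.drop_duplicates().reset_index(drop=True)  # de-duplicate rows
    df.to_excel("extract.xlsx", index=False)  # requires openpyxl

if __name__ == "__main__":
    main()
```

For the JavaScript-rendered sections, I assume the requests call would be swapped for a headless browser (Selenium or Playwright) that returns the fully rendered HTML before parsing; the rest of the pipeline would stay the same.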