I have a list of roughly 200 public-facing library staff directories. Each directory holds about forty individual listings, so the final dataset should land near eight thousand rows. From every listing I only need four fields: the person's name, institution, email address, and job title.

Because of the volume, I'm looking for a fully automated scrape rather than manual copy-and-paste. I'm open to any stack you're comfortable with (Python, JavaScript, headless browsers, APIs, whatever gets the job done efficiently and without violating the sites' terms of use).

When the run is complete, send one clean Excel file that:

• Contains a row per staff member, with separate columns for each contact field you capture
• Has consistent headers across all libraries
• Is de-duplicated and free of obvious formatting issues
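To make the cleanup requirements concrete, here is a minimal sketch of the normalize/de-duplicate stage, assuming each scraped listing has already been parsed into a dict. The field names and the choice of email address as the dedup key are my assumptions, not requirements from any particular site; the actual fetching and parsing would depend on each directory's HTML.

```python
import csv

# Assumed canonical headers; these are illustrative, not fixed.
HEADERS = ["name", "institution", "email", "title"]

def normalize(row):
    """Trim whitespace and lowercase the email so duplicates compare equal."""
    out = {h: (row.get(h) or "").strip() for h in HEADERS}
    out["email"] = out["email"].lower()
    return out

def dedupe(rows):
    """Keep the first occurrence of each email; rows without one are kept as-is."""
    seen = set()
    result = []
    for row in map(normalize, rows):
        key = row["email"]
        if key and key in seen:
            continue
        if key:
            seen.add(key)
        result.append(row)
    return result

def write_csv(rows, path):
    """Write rows with one consistent header line (CSV shown for portability)."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=HEADERS)
        writer.writeheader()
        writer.writerows(rows)
```

For the actual .xlsx deliverable, the same rows can be exported with `pandas.DataFrame(rows).to_excel(path, index=False)` (which uses openpyxl under the hood); CSV is used above only to keep the sketch dependency-free.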