I want a lightweight Python script that, given a list of company names or URLs, visits official company websites and leading business directories to pull publicly available details: company name, website URL, domain, telephone number, physical address, and any exposed email addresses.

The crawler must respect robots.txt, back off politely when sites object, and handle common failure cases so the run never stops on a single bad page. BeautifulSoup is my preferred parsing library, but feel free to combine it with requests, aiohttp, or other standard helpers if that speeds things up.

At the end of each run the script should save two files, one CSV and one JSON, both containing the same structured output so I can drop the data straight into downstream pipelines. Please make the code easy to read, with clear function boundaries, docstrings, and a short README that explains how to install the requirements, supply the input list, and launch a crawl.

I will consider the project complete when I can point the program at a small sample list, watch it scrape both company sites and directory listings, and see clean CSV/JSON files generated without triggering blocks or captchas.
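
To make the politeness requirements concrete, the fetching layer could look roughly like the sketch below: it consults robots.txt through the standard-library urllib.robotparser before every request and backs off exponentially, honoring Retry-After on 429/503 responses. This is a minimal sketch, not a finished implementation; the user-agent string and bot URL are placeholders, and it assumes requests and Python 3.10+.

```python
"""Polite fetching: robots.txt check plus exponential backoff (sketch)."""
import time
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

import requests

# Placeholder identity; replace with a real contact URL before running.
USER_AGENT = "CompanyInfoBot/0.1 (+https://example.com/bot)"

_robots_cache: dict[str, RobotFileParser] = {}


def allowed_by_robots(url: str) -> bool:
    """Return True if the host's robots.txt permits fetching this URL."""
    parts = urlparse(url)
    origin = f"{parts.scheme}://{parts.netloc}"
    rp = _robots_cache.get(origin)
    if rp is None:
        rp = RobotFileParser(origin + "/robots.txt")
        try:
            rp.read()
        except OSError:
            # robots.txt unreachable: err on the side of not fetching.
            return False
        _robots_cache[origin] = rp
    return rp.can_fetch(USER_AGENT, url)


def polite_get(url: str, max_tries: int = 3) -> requests.Response | None:
    """GET with robots.txt respect and exponential backoff; None on failure."""
    if not allowed_by_robots(url):
        return None
    for attempt in range(max_tries):
        try:
            resp = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=15)
        except requests.RequestException:
            resp = None
        if resp is not None and resp.status_code == 200:
            return resp
        wait = 2 ** attempt  # 1s, 2s, 4s ...
        if resp is not None and resp.status_code in (429, 503):
            # Honor an explicit Retry-After header when the site objects.
            retry_after = resp.headers.get("Retry-After", "")
            if retry_after.isdigit():
                wait = int(retry_after)
        time.sleep(wait)
    return None
```

Returning None instead of raising lets the caller log the miss and move on, which is exactly the behavior that keeps a run alive across bad pages.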
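
On the parsing side, a single BeautifulSoup pass per page might pull the fields along these lines. The regexes are deliberately crude, the page-title fallback for the company name and the blank address are placeholders, and all of it would need per-site or per-directory refinement:

```python
"""Field extraction from one fetched page with BeautifulSoup (sketch)."""
import re
from urllib.parse import urlparse

from bs4 import BeautifulSoup

# Deliberately simple patterns; real-world pages will need tightening.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")


def extract_contact_fields(html: str, url: str) -> dict:
    """Pull the requested fields out of a single page's HTML."""
    soup = BeautifulSoup(html, "html.parser")
    text = soup.get_text(" ", strip=True)
    # mailto: links are usually the cleanest email source.
    emails = {a["href"][len("mailto:"):].split("?")[0]
              for a in soup.select('a[href^="mailto:"]')}
    emails.update(EMAIL_RE.findall(text))
    phones = PHONE_RE.findall(text)
    title = soup.title.get_text(strip=True) if soup.title else ""
    return {
        "company_name": title,  # crude fallback; refine per site/directory
        "website_url": url,
        "domain": urlparse(url).netloc,
        "phone": phones[0].strip() if phones else "",
        "address": "",  # address markup varies too much for a generic sketch
        "emails": sorted(emails),
    }
```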
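
Writing the two output files from the same in-memory records is straightforward; here is a sketch with an assumed schema (the FIELDS list and the companies filename stem are illustrative, not fixed requirements):

```python
"""Persist one run's records as matching CSV and JSON files (sketch)."""
import csv
import json

# Illustrative schema; keep it in sync with extract_contact_fields.
FIELDS = ["company_name", "website_url", "domain",
          "phone", "address", "emails"]


def save_results(records: list[dict], stem: str = "companies") -> None:
    """Write <stem>.json and <stem>.csv containing the same records."""
    with open(f"{stem}.json", "w", encoding="utf-8") as f:
        json.dump(records, f, indent=2, ensure_ascii=False)
    with open(f"{stem}.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS, extrasaction="ignore")
        writer.writeheader()
        for rec in records:
            row = dict(rec)
            # A list does not fit a flat CSV cell; join multiple emails.
            row["emails"] = "; ".join(rec.get("emails", []))
            writer.writerow(row)
```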
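
Finally, a driver loop that shows the never-stop-on-one-bad-page behavior I expect. It assumes the hypothetical polite_get, extract_contact_fields, and save_results helpers from the sketches above live in the same module:

```python
"""Driver loop; assumes the helpers sketched above are in the same module."""
import logging
import sys


def crawl(targets: list[str]) -> list[dict]:
    """Visit every input; log and skip failures instead of raising."""
    results = []
    for target in targets:
        # Bare domains are upgraded to URLs; plain company names would
        # first need a directory lookup, which is out of scope here.
        url = target if target.startswith("http") else f"https://{target}"
        try:
            resp = polite_get(url)
            if resp is None:
                logging.warning("skipped (robots.txt or unreachable): %s", url)
                continue
            results.append(extract_contact_fields(resp.text, url))
        except Exception:
            # Broad catch so one malformed page cannot halt the batch.
            logging.exception("failed on %s", target)
    return results


if __name__ == "__main__":
    # Usage: python crawl.py input.txt  (one URL or domain per line)
    with open(sys.argv[1], encoding="utf-8") as f:
        targets = [line.strip() for line in f if line.strip()]
    save_results(crawl(targets))
```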