I need a reliable automation script for web-based data extraction from popular social-media sites. The goal is to pull structured information (posts, comments, timestamps, engagement metrics, and any visible public profile details) into a clean, machine-readable format that I can analyze later.

Scope and expectations
• Build a working, documented script (Python is preferred, but I’m open to other languages if they suit the task better).
• Handle login or scrolling logic only when the content absolutely requires it; the process should respect rate limits and site terms.
• Deliver extracted data as JSON or CSV, accompanied by a concise README that explains setup, configuration, and how to extend or adjust target accounts/hashtags/pages.
• Use a modular design: keep the scraping, parsing, and storage layers separate so I can swap pieces out later if a site changes its layout (a rough sketch of what I mean follows at the end of this post).
• Include a brief demo run or sample dataset that proves the script captures the fields above from at least one social-media platform.

Nice-to-have (but not strictly required): retry logic, proxy support, and lightweight scheduling so I can trigger the job daily (a small retry/scheduling sketch is also included at the end).

I’m looking to move quickly, so please outline your proposed approach, your experience with tools like Selenium, Playwright, BeautifulSoup, Scrapy, or similar, and any past examples of social-media scraping work. If you’re an experienced developer, use whatever approach you judge best for automating the process.

Here’s the full cycle to automate: https://docs.google.com/presentation/d/1DC85lepeiYIU4638qH9KFdhYlgkJBlv1eDUxozme_bM/edit?usp=sharing
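To illustrate the layer separation I have in mind, here is a minimal sketch. Every name in it (PostRecord, Scraper, Parser, Storage, run) is a placeholder for discussion, not a required API, and the record schema simply mirrors the fields listed under scope; the real scraper layer would target a specific platform and stay within its rate limits and terms.

import csv
import json
from abc import ABC, abstractmethod
from dataclasses import dataclass, asdict, field
from typing import Iterable, List


@dataclass
class PostRecord:
    """One extracted item, covering the fields listed under scope."""
    post_id: str
    author: str                       # visible public profile handle
    text: str
    timestamp: str                    # ISO 8601
    likes: int = 0                    # engagement metrics
    shares: int = 0
    comments: List[str] = field(default_factory=list)


class Scraper(ABC):
    """Fetches raw payloads for a target account, hashtag, or page."""
    @abstractmethod
    def fetch(self, target: str) -> Iterable[str]: ...


class Parser(ABC):
    """Turns raw payloads into structured PostRecord objects."""
    @abstractmethod
    def parse(self, raw: str) -> Iterable[PostRecord]: ...


class Storage(ABC):
    """Persists records; implementations can be swapped without touching the other layers."""
    @abstractmethod
    def save(self, records: List[PostRecord]) -> None: ...


class JsonStorage(Storage):
    def __init__(self, path: str) -> None:
        self.path = path

    def save(self, records: List[PostRecord]) -> None:
        with open(self.path, "w", encoding="utf-8") as fh:
            json.dump([asdict(r) for r in records], fh, ensure_ascii=False, indent=2)


class CsvStorage(Storage):
    def __init__(self, path: str) -> None:
        self.path = path

    def save(self, records: List[PostRecord]) -> None:
        fieldnames = list(PostRecord.__dataclass_fields__)
        with open(self.path, "w", newline="", encoding="utf-8") as fh:
            writer = csv.DictWriter(fh, fieldnames=fieldnames)
            writer.writeheader()
            for record in records:
                row = asdict(record)
                row["comments"] = " | ".join(row["comments"])  # flatten list for CSV
                writer.writerow(row)


def run(scraper: Scraper, parser: Parser, storage: Storage, targets: List[str]) -> None:
    """Glue code: each layer can be replaced independently if a site changes its layout."""
    records: List[PostRecord] = []
    for target in targets:
        for raw in scraper.fetch(target):
            records.extend(parser.parse(raw))
    storage.save(records)

Concrete Scraper and Parser implementations (Playwright, Scrapy, requests plus BeautifulSoup, or whatever you propose) would plug into run() without any changes to the storage side.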
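For the nice-to-haves, something as simple as the sketch below would be enough for my purposes. It assumes nothing beyond the Python standard library; with_retries is a placeholder name, and the backoff delays are meant to keep requests well within whatever rate limits a site publishes, not to work around them.

import random
import time
from typing import Callable, TypeVar

T = TypeVar("T")


def with_retries(fn: Callable[[], T], attempts: int = 3, base_delay: float = 2.0) -> T:
    """Call fn, retrying on failure with exponential backoff plus jitter."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(base_delay * (2 ** (attempt - 1)) + random.uniform(0, 1))
    raise RuntimeError("with_retries requires attempts >= 1")


# Lightweight daily scheduling can be a plain cron entry, for example:
#   0 6 * * * /usr/bin/python3 /opt/scraper/run.py --config config.yaml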