I am looking for an experienced web scraping and automation developer to build an automated daily workflow for my business. I respond to over 100 government tenders a year and need to automate the discovery and document-retrieval process.

The Goal:
- Scrape 9 Australian Government tender websites daily for newly published tenders in two specific categories (UNSPSC 43000000 - IT, and 81000000 - Engineering/Research).
- Extract key details (Title, Agency, Closing Date, URL) and insert them into a centralized Google Sheet.
- Automatically download the associated tender documents and save them into uniquely named folders in my Google Drive.
- Periodically monitor these specific tenders for any newly published addendums, and automatically download them to the respective Google Drive folder.

The Websites:
- Federal (AusTender)
- New South Wales (BuyNSW)
- Victoria (Tenders VIC)
- Queensland (QTenders)
- South Australia (SA Tenders)
- Northern Territory (NT Tenders)
- Australian Capital Territory (ACT Tenders)
- Tasmania (Tas Tenders)
- Western Australia (Tenders WA)

Technical Requirements & Challenges You Must Handle:
- Anti-Scraping: Many of these sites (like AusTender and BuyNSW) sit behind Cloudflare or similar bot protection. You must know how to handle this (e.g., using headless browsers like Playwright/Puppeteer, or proxies); the first sketch below shows the kind of approach I mean.
- Authentication: Downloading documents and addendums on these platforms requires a logged-in user session. The script must handle my login credentials securely to access and download the files.
- Google Workspace API: You must integrate the outputs seamlessly with Google Sheets and Google Drive via their APIs, as illustrated in the sketches after this list.

Please reply with your proposed tech stack (e.g., Python, Scrapy, Playwright, Make.com, etc.), how you plan to handle the bot protections and logins, and your estimated timeline and budget.
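To illustrate the kind of solution I have in mind, here is a minimal sketch of the daily scrape step, assuming Python with Playwright. The listing URL argument and the CSS selectors are placeholders, not the sites' real markup; each of the 9 sites will need its own selectors, pagination handling, and bot-protection workarounds.

    # Minimal sketch: scrape one tender listing page with a headless browser.
    # The selectors (.tender-row, .title, etc.) are placeholders.
    from playwright.sync_api import sync_playwright

    CATEGORIES = ["43000000", "81000000"]  # UNSPSC codes: IT, Engineering/Research

    def fetch_new_tenders(listing_url: str) -> list[dict]:
        tenders = []
        with sync_playwright() as p:
            # A real headless browser executes the JavaScript challenges
            # that block plain HTTP clients on Cloudflare-protected sites.
            browser = p.chromium.launch(headless=True)
            page = browser.new_page()
            page.goto(listing_url, wait_until="networkidle")
            for row in page.query_selector_all(".tender-row"):
                tenders.append({
                    "title": row.query_selector(".title").inner_text(),
                    "agency": row.query_selector(".agency").inner_text(),
                    "closing_date": row.query_selector(".closing-date").inner_text(),
                    "url": row.query_selector("a").get_attribute("href"),
                })
            browser.close()
        return tenders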
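And a sketch of the Google Workspace side, assuming a service account with the google-api-python-client library. "SHEET_ID", the "Tenders" tab name, and the column layout A:D are placeholders for my actual sheet.

    # Minimal sketch: append one tender row to the sheet and create its Drive folder.
    from google.oauth2.service_account import Credentials
    from googleapiclient.discovery import build

    SCOPES = [
        "https://www.googleapis.com/auth/spreadsheets",
        "https://www.googleapis.com/auth/drive",
    ]
    creds = Credentials.from_service_account_file("service-account.json", scopes=SCOPES)
    sheets = build("sheets", "v4", credentials=creds)
    drive = build("drive", "v3", credentials=creds)

    def record_tender(tender: dict) -> str:
        # Append Title / Agency / Closing Date / URL as one row.
        sheets.spreadsheets().values().append(
            spreadsheetId="SHEET_ID",
            range="Tenders!A:D",
            valueInputOption="USER_ENTERED",
            body={"values": [[tender["title"], tender["agency"],
                              tender["closing_date"], tender["url"]]]},
        ).execute()
        # Create a uniquely named Drive folder for this tender's documents.
        folder = drive.files().create(
            body={"name": tender["title"],
                  "mimeType": "application/vnd.google-apps.folder"},
            fields="id",
        ).execute()
        return folder["id"]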
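For the addendum monitoring, the logic I expect is roughly: list what is already saved in a tender's Drive folder, diff it against the documents currently shown on the (logged-in) tender detail page, and download anything new. A sketch, reusing the drive client above; listed_docs is a hypothetical filename-to-URL mapping produced by the scraper.

    # Minimal sketch: find listed addendums not yet in the tender's Drive folder.
    def find_new_addendums(drive, folder_id: str, listed_docs: dict) -> dict:
        resp = drive.files().list(
            q=f"'{folder_id}' in parents and trashed = false",
            fields="files(name)",
        ).execute()
        already_saved = {f["name"] for f in resp.get("files", [])}
        return {name: url for name, url in listed_docs.items()
                if name not in already_saved}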