Scalable Data Management Solution

Client: AI | Published: 26.11.2025
Budget: $750

I’m putting together a full-stack data management system that can collect, clean, store, and surface large volumes of information in a way that’s secure, fast, and easy to extend. The primary goal is data management; everything we build should serve that purpose. Here’s what I have in mind:

• Centralised, well-structured database architecture (SQL, NoSQL, or a hybrid; open to your recommendation)
• Automated ETL pipelines for reliable data ingestion and transformation (see the illustrative ingestion sketch at the end of this brief)
• A lightweight admin interface for monitoring, querying, and exporting datasets
• Role-based access controls and encryption to keep everything compliant and secure
• Clear documentation so future contributors can step in without guesswork

You’re welcome to lean on the tools you know best, whether that’s Python with Pandas and FastAPI, Node.js with Express, or a modern low-code alternative, so long as the result is stable, scalable, and easy to maintain. Whatever stack you propose, outline why it fits our data volume and performance targets, and specify any open-source frameworks or cloud services you plan to use (AWS, Azure, GCP, Supabase, etc.).

Acceptance criteria

1. I can ingest a sample CSV file and see it land in the database, transformed to the agreed schema, with one click or command.
2. Query response times stay under two seconds for datasets up to 1 M rows (a simple timing check is sketched below).
3. Role-based permissions prevent an unauthorised user from querying or exporting restricted tables (see the access-control sketch below).
4. Documentation covers setup, deployment, and routine maintenance in plain language.

If this sounds like a challenge you can run with, tell me how you’d architect it, which tools you’d pick, and a realistic timeline for MVP delivery.
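
To make the ingestion requirement concrete, here is a minimal sketch of the kind of one-command ETL step meant by acceptance criterion 1, assuming a Python stack with Pandas and SQLAlchemy (one of the options named above). The file name, column mapping, table name, and SQLite URL are all hypothetical placeholders, not an agreed schema.

    import sys

    import pandas as pd
    from sqlalchemy import create_engine

    # Hypothetical agreed schema: raw CSV header -> target column name.
    COLUMN_MAP = {"Customer Name": "customer_name", "Order Total": "order_total"}

    def ingest(csv_path, db_url="sqlite:///warehouse.db"):
        df = pd.read_csv(csv_path)
        df = df.rename(columns=COLUMN_MAP)          # normalise headers
        df = df.dropna(subset=["customer_name"])    # basic cleaning step
        df["order_total"] = df["order_total"].astype(float)
        engine = create_engine(db_url)
        # Append into the target table, creating it on first run.
        df.to_sql("orders", engine, if_exists="append", index=False)
        return len(df)

    if __name__ == "__main__":
        # One command, per acceptance criterion 1:
        #   python ingest.py sales.csv
        print(f"Loaded {ingest(sys.argv[1])} rows")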
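
For the response-time criterion, a rough timing check against the hypothetical orders table above: index the column a representative query filters on, run the query, and measure. This only illustrates how the two-second target could be verified; actual tuning depends on the database you propose.

    import time

    from sqlalchemy import create_engine, text

    engine = create_engine("sqlite:///warehouse.db")

    # Index the column the representative query groups on.
    with engine.begin() as conn:
        conn.execute(text(
            "CREATE INDEX IF NOT EXISTS idx_orders_customer "
            "ON orders (customer_name)"
        ))

    start = time.perf_counter()
    with engine.connect() as conn:
        rows = conn.execute(text(
            "SELECT customer_name, SUM(order_total) AS total "
            "FROM orders GROUP BY customer_name"
        )).fetchall()
    elapsed = time.perf_counter() - start
    print(f"{len(rows)} groups in {elapsed:.3f} s")  # target: under 2 s at 1 M rows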
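
And for the access-control criterion, a sketch of how role-based permissions might gate an export endpoint, assuming FastAPI. The token-to-role store and the restricted-table list are stand-ins for whatever auth backend the final design uses.

    from fastapi import Depends, FastAPI, Header, HTTPException

    app = FastAPI()

    # Stand-in token -> role store; in practice this would be a users
    # table or an identity provider.
    ROLES = {"token-admin": "admin", "token-analyst": "analyst"}
    RESTRICTED_TABLES = {"salaries", "customer_pii"}

    def current_role(authorization: str = Header(...)) -> str:
        role = ROLES.get(authorization)
        if role is None:
            raise HTTPException(status_code=401, detail="Unknown token")
        return role

    @app.get("/export/{table}")
    def export_table(table: str, role: str = Depends(current_role)):
        # Acceptance criterion 3: block unauthorised access to restricted tables.
        if table in RESTRICTED_TABLES and role != "admin":
            raise HTTPException(status_code=403, detail="Table is restricted")
        return {"table": table, "status": "export started"}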