"IMPORTANT: Start your proposal with the exact words 'DEEPSEEK ROUTING' so I know you are a real human and have read this briefing." Senior Lead Developer / Team for Decentralized Compute Network (DePIN/BYOD) – SLM Task Routing Project Overview: We are looking for a highly skilled Senior Developer or a small, agile team to build the MVP of "AIFREENET," a Bring Your Own Device (BYOD) decentralized physical infrastructure network (DePIN). Our goal is to aggregate idle consumer hardware (CPU/GPU) to run inference for open-source AI models. CRITICAL NOTE: You are ONLY building the P2P network and the desktop client. The entire multi-level-marketing (MLM) backend, user dashboard, and financial payout logic are already being developed by a separate team. Your system only needs to provide an API bridge to report compute metrics. The Tech Strategy (Task Routing, NOT Model Slicing): To fit our strict budget and timeline, we are NOT looking to build complex distributed inference (slicing one massive model across multiple nodes). Instead, we use Task Routing: Each user’s desktop client will locally run a quantized Small Language Model (SLM) – specifically the DeepSeek-R1-Distill-Llama-8B (using frameworks like llama.cpp / GGUF). The Master Server simply routes incoming inference requests (prompts) to available idle nodes, and the nodes return the generated text. Core Responsibilities (MVP Phase 1): 1. The Desktop Client (Windows/macOS): Build a lightweight, secure background app. It must profile the host's hardware (VRAM/RAM), download the quantized SLM, and listen for tasks. 2. The P2P Router / Master Node: Implement a central dispatch system (we highly encourage forking existing open-source architectures like BOINC, libp2p, etc., to save time). It handles task queuing, routing to available clients, and basic Proof of Compute/verification. 3. 
The API Bridge: Send automated webhooks (e.g., hourly) to our separate MLM backend containing simple metrics:
{"user_id": "1045", "tasks_completed": 150, "uptime_minutes": 60}

Requirements:
• Proven experience in distributed systems, grid computing, or P2P networks.
• Strong knowledge of local LLM deployment (llama.cpp, Ollama, quantization).
• Tech stack: C++/Rust/Go for the client (performance is key); Node.js/Python/Go for the router.
• Focus on security (sandboxing local inference processes).

Budget & Timeline:
• Strict hard-cap budget for the MVP: $12,000-15,000 (milestone-based).
• Launch deadline: May 1st, 2026. We need a working MVP with end-to-end API communication by mid-April.