I’m looking for an experienced AI/LLM engineer to help set up OpenClaw on my Mac Studio (M4 Max, 128 GB RAM) so that it runs reliably with a fully local model. The objective is a clean, stable setup in which OpenClaw agents can run and call tools using a local LLM, with no reliance on cloud APIs.

Current Setup
• Mac Studio (M4 Max, 128 GB RAM)
• LM Studio installed
• Local models tested previously (Qwen variants)
• OpenClaw previously installed but removed for a clean restart

Scope of Work
• Install and configure OpenClaw from scratch
• Configure it to run with a local LLM via LM Studio
• Ensure tool calling works properly (web search, shell tools, etc.)
• Choose the optimal model and quantization for this hardware
• Configure the system so agents start automatically after a machine reboot
• Set up access so the agents can be used through Telegram and Discord
• Optional: performance tuning for the Mac M4 Max

Candidate Models
• Qwen2.5-Coder-32B / 72B
• MiniMax M2.5
• Other models known to work reliably with OpenClaw

Deliverables
• Fully working OpenClaw setup
• Local model configured and running
• Agents verified to auto-start after a reboot
• Agents accessible via Telegram and Discord
• Step-by-step instructions so the setup can be reproduced or updated later

Ideal Candidate
• Direct experience with OpenClaw or similar agent frameworks
• Experience with LM Studio, llama.cpp, or GGUF models
• Experience optimizing models on Apple Silicon (M-series)
• Experience integrating AI agents with Telegram and Discord bots

When applying, please include:
1. Your experience running OpenClaw or similar agent frameworks
2. Examples of local LLM environments you’ve configured
3. Your recommended model for this hardware
4. Confirmation that you have previously integrated AI agents with Telegram or Discord
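For the auto-start-after-reboot requirement: on macOS this is typically handled with a launchd agent. A minimal sketch, assuming OpenClaw is launched via an `openclaw` executable — the label, binary path, and log paths are all placeholders to be adjusted for the actual install:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <!-- Unique label; placeholder name -->
  <key>Label</key>
  <string>com.example.openclaw</string>
  <!-- Command to run; adjust to the real install path -->
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/bin/openclaw</string>
  </array>
  <!-- Start at login and restart the process if it exits -->
  <key>RunAtLoad</key>
  <true/>
  <key>KeepAlive</key>
  <true/>
  <key>StandardOutPath</key>
  <string>/tmp/openclaw.log</string>
  <key>StandardErrorPath</key>
  <string>/tmp/openclaw.err</string>
</dict>
</plist>
```

Saved under ~/Library/LaunchAgents/ and loaded with `launchctl` (e.g. `launchctl bootstrap gui/$(id -u) <path>` on recent macOS), this would restart the agents automatically after a reboot once the user session is up.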
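For context on the LM Studio requirement: LM Studio exposes an OpenAI-compatible HTTP server (by default at http://localhost:1234/v1), so an agent framework can usually be pointed at it like any OpenAI-style backend. A minimal sketch of the request shape, including one tool definition for tool calling — the model name and the `web_search` tool here are placeholders, not part of any actual OpenClaw configuration:

```python
import json
import urllib.request

# LM Studio's default local server address
LM_STUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "qwen2.5-coder-32b-instruct") -> dict:
    """Build an OpenAI-style chat payload with one example tool definition.

    The model name and the `web_search` tool schema are placeholders;
    the agent framework would normally construct this payload itself.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "web_search",
                    "description": "Search the web for a query.",
                    "parameters": {
                        "type": "object",
                        "properties": {"query": {"type": "string"}},
                        "required": ["query"],
                    },
                },
            }
        ],
        "temperature": 0.2,
    }

def send_chat_request(payload: dict) -> dict:
    """POST the payload to the local LM Studio server (requires it to be running)."""
    req = urllib.request.Request(
        LM_STUDIO_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

A quick `curl` against the same endpoint is the usual smoke test that the local model and tool-calling template are wired up correctly.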
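For the messaging integration: agent replies are usually bridged to Telegram through the Bot API's `sendMessage` endpoint (Discord is analogous via its own bot gateway or webhooks). A minimal sketch of the Telegram half — the token and chat id are placeholders that BotFather and the user's chat would supply:

```python
import json
import urllib.request

def build_send_message(token: str, chat_id: int, text: str) -> tuple[str, dict]:
    """Return (url, payload) for Telegram's sendMessage endpoint.

    The bot token comes from BotFather; chat_id identifies the target chat.
    """
    url = f"https://api.telegram.org/bot{token}/sendMessage"
    payload = {"chat_id": chat_id, "text": text}
    return url, payload

def send_telegram_message(token: str, chat_id: int, text: str) -> dict:
    """POST a message to Telegram (requires a valid token and network access)."""
    url, payload = build_send_message(token, chat_id, text)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

In practice the agent framework's own Telegram/Discord connectors (or libraries such as python-telegram-bot and discord.py) would replace this raw-HTTP sketch; it is shown only to indicate the shape of the integration.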