I’m building a production-grade backend that orchestrates multi-step LLM workflows and serves them through a FastAPI layer. The stack is Python-first, so I need someone who is genuinely comfortable with advanced language features, clean architecture, and test-driven habits.

Core scope
• Design and implement LangGraph/LangChain pipelines that call both Claude and OpenAI models.
• Shape a robust PostgreSQL schema from scratch and write the migration scripts to match.
• Expose all functionality via FastAPI endpoints with proper async handling, input validation, and error management.
• Handle PDF generation with ReportLab (or an equivalent you like) for final user-facing reports.

Claude-specific responsibilities
• Prompt engineering
• Understanding token costs
• Caching and optimization

Additional expectations
– Respect token budgets by implementing smart caching and partial responses where sensible.
– Write concise unit tests and lightweight docs so the codebase remains maintainable.
– Keep an eye on performance-to-cost ratios and log usage metrics for later analysis.

The final deliverable is a Git repository containing the fully functioning FastAPI service, migrations, PDF module, and sample tests, ready to run with Docker.
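To make the caching expectation concrete: one common approach is a content-addressed cache that hashes the model name, prompt, and call parameters, and reuses a stored completion instead of re-calling the API. Below is a minimal stdlib-only sketch of that idea; `call_model`, `ResponseCache`, and the fake client are hypothetical names for illustration, not part of any real SDK.

```python
import hashlib
import json


class ResponseCache:
    """In-memory cache keyed on a hash of (model, prompt, params)."""

    def __init__(self):
        self._store = {}
        self.hits = 0
        self.misses = 0

    def _key(self, model, prompt, **params):
        # Deterministic key: serialize inputs with sorted keys, then hash.
        payload = json.dumps(
            {"model": model, "prompt": prompt, "params": params}, sort_keys=True
        )
        return hashlib.sha256(payload.encode()).hexdigest()

    def get_or_call(self, model, prompt, call_model, **params):
        key = self._key(model, prompt, **params)
        if key in self._store:
            self.hits += 1
            return self._store[key]
        self.misses += 1
        result = call_model(model, prompt, **params)  # the real API call goes here
        self._store[key] = result
        return result


# Hypothetical usage with a fake model call standing in for Claude/OpenAI:
cache = ResponseCache()
fake_client = lambda model, prompt, **p: f"echo:{prompt}"
cache.get_or_call("claude", "hello", fake_client, temperature=0.2)
cache.get_or_call("claude", "hello", fake_client, temperature=0.2)  # cache hit
```

In production this would sit in front of the actual client call and could be backed by Redis or PostgreSQL rather than a dict, but the keying logic stays the same.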
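For the performance-to-cost logging, one lightweight pattern is to record token counts per call and derive spend from a per-model price table. A stdlib-only sketch follows; `UsageTracker` is a hypothetical helper and the rates are placeholder numbers, not real provider pricing.

```python
from dataclasses import dataclass, field

# Placeholder $/1K-token rates -- substitute the provider's current pricing.
PRICE_PER_1K = {
    "claude": {"input": 0.003, "output": 0.015},
    "gpt": {"input": 0.0025, "output": 0.010},
}


@dataclass
class UsageTracker:
    """Accumulates per-call token usage and estimates cumulative cost."""

    calls: list = field(default_factory=list)

    def record(self, model, input_tokens, output_tokens):
        rates = PRICE_PER_1K[model]
        cost = (
            input_tokens * rates["input"] + output_tokens * rates["output"]
        ) / 1000
        self.calls.append(
            {"model": model, "in": input_tokens, "out": output_tokens, "cost": cost}
        )
        return cost

    def total_cost(self):
        return sum(c["cost"] for c in self.calls)


tracker = UsageTracker()
tracker.record("claude", input_tokens=1000, output_tokens=1000)
```

Logging these records (e.g. as structured JSON lines) gives the raw data needed for the later performance-to-cost analysis the posting asks for.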