I’m expanding our LLM-focused R&D pod in Noida and need a hands-on Python engineer who can start contributing on day one. You must be comfortable building asynchronous back-ends in FastAPI, have real production experience creating LangChain chains and agents, and be equally at ease working with both OpenAI and Hugging Face models.

Day-to-day you’ll design and optimise micro-services that call LLMs, manage embeddings in Pinecone or FAISS, and orchestrate data flows through PostgreSQL, MongoDB and Redis, with everything neatly containerised in Docker for local and cloud runs. Expect plenty of room to prototype new AI features, run quick experiments, and then harden the successful ones for scale.

The role is 100% onsite in our Sector-62 Noida office; close collaboration with product and design teams is essential, so remote or hybrid isn’t an option right now.

Key deliverables:
• Clean, well-tested FastAPI endpoints for new AI features
• LangChain agent scripts wired into vector stores and external tools
• Dockerised services with CI hooks ready for staging
• Documentation and hand-off notes for each sprint

You’ll work directly with me and the founding engineering team, commit to two-week sprints, and demo progress every Friday. If you thrive in quick iteration cycles and love shipping AI that users touch immediately, let’s talk.