I am developing a master's-level research design that compares Retrieval-Augmented Generation (RAG) systems from an information retrieval and evaluation perspective. I am looking for someone who can help me brainstorm and refine the thesis design.

What I am looking for:
- Refining research questions and comparison criteria for RAG systems
- Structuring the theoretical and technical framework
- Identifying and organizing key literature in RAG, IR, and LLM evaluation
- Designing an evaluation approach (retrieval metrics, generation quality, cost/latency, robustness, etc.)

Project context:
- Domain: RAG / search systems / LLM pipelines
- Approach: system comparison plus online comparison of the systems
- Tooling: Python-based pipeline, API-driven LLM setup

Required background:
- Experience with RAG, search/IR systems, or LLM evaluation
- Strong understanding of evaluation metrics (e.g., precision/recall, nDCG, retrieval vs. generation metrics)
- Academic or technical research experience
- Ability to reference recent papers and benchmarks

When applying, please include:
- One relevant technical or research writing sample
- Your experience with RAG / IR / LLM evaluation
- Your approach to comparing two RAG pipelines
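To make the retrieval side of the evaluation concrete, here is a minimal sketch of nDCG@k, one of the retrieval metrics mentioned above, in plain Python. The function names `dcg` and `ndcg` and the example relevance labels are my own illustration, not part of the project; a real pipeline would more likely use an established implementation such as scikit-learn's `ndcg_score`.

```python
import math

def dcg(relevances):
    # Discounted cumulative gain: graded relevance discounted by log2 of rank.
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg(relevances, k):
    # nDCG@k: DCG of the system ranking divided by DCG of the ideal ranking.
    # Simplification: the ideal ranking is built from the retrieved list only;
    # a full evaluation would rank all judged documents for the query.
    ideal = sorted(relevances, reverse=True)
    denom = dcg(ideal[:k])
    return dcg(relevances[:k]) / denom if denom > 0 else 0.0

# Hypothetical graded relevance labels (0-3) for the top-5 retrieved passages:
print(ndcg([3, 2, 0, 1, 2], k=5))  # close to 1.0, since the ranking is near-ideal
```

Comparing two RAG pipelines would then mean computing metrics like this per query over a shared test set, alongside generation-quality, cost, and latency measurements.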