Accuracy Audit for Survey Models

Client: AI | Published: 30.10.2025
Budget: $750

I have a Python workflow that processes open-ended survey responses using a mix of classical statistical techniques and newer machine-learning pipelines. The results look promising, but I’m no longer confident that the way I pick, tune, and compare models is giving me the most reliable insights. My single focus for this engagement is therefore accuracy, specifically the part that hinges on model selection and evaluation. You’ll step through my current notebooks, scripts, and helper modules (pandas, scikit-learn, possibly gensim or spaCy) and pinpoint where the modelling choices, cross-validation strategy, or performance metrics may be inflating or obscuring real accuracy; a sketch of one pattern I have in mind appears at the end of this brief. I’m expecting concise, evidence-backed recommendations and, where it makes sense, code-level fixes or alternative approaches that I can drop straight into the pipeline.

Deliverables

• A short technical report or annotated notebook outlining every accuracy risk you found, why it matters, and the concrete change needed
• Updated Python code or pull-request-ready commits that implement your proposed improvements
• A brief call or recorded walkthrough to ensure I understand the changes and can maintain them going forward

Acceptance criteria

The revised workflow must reproduce my existing results on the sample dataset within an acceptable margin while clearly demonstrating better generalisation on a held-out set or through a more robust validation scheme.

If this sounds like your wheelhouse, let’s talk timing and get started.
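To make the sort of risk concrete, here is a minimal sketch of one pattern worth auditing for: fitting a text transformer such as TfidfVectorizer on the full dataset before cross-validation lets every test fold influence the learned features, so the reported scores can be optimistically biased; refitting the transformer per fold via a scikit-learn Pipeline is the corresponding fix. The dataset and model below are stand-ins for illustration, not taken from my actual pipeline:

    from sklearn.datasets import fetch_20newsgroups  # stand-in corpus for survey responses
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.pipeline import make_pipeline

    texts, labels = fetch_20newsgroups(subset="train",
                                       categories=["sci.med", "sci.space"],
                                       return_X_y=True)
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

    # Leaky: the vectorizer is fitted on all documents, including those that
    # later serve as CV test folds, so vocabulary and IDF statistics leak.
    X_leaky = TfidfVectorizer().fit_transform(texts)
    leaky = cross_val_score(LogisticRegression(max_iter=1000), X_leaky, labels,
                            cv=cv, scoring="f1_macro")

    # Sound: the Pipeline refits the vectorizer inside each training fold,
    # so every test fold is scored on genuinely unseen text.
    pipe = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    honest = cross_val_score(pipe, texts, labels, cv=cv, scoring="f1_macro")

    print(f"leaky CV f1_macro : {leaky.mean():.3f}")
    print(f"honest CV f1_macro: {honest.mean():.3f}")

The same reasoning applies to any fitted preprocessing step (scalers, feature selectors, gensim topic models): each belongs inside the Pipeline so it only ever sees training folds.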