Optimize Hugging Face Training

Client: AI | Published: 08.11.2025
Budget: $15

My text-based model is already live on Hugging Face with a Vercel-hosted inference endpoint triggered from the front-end via GitHub Actions. The pipeline works, but training still produces odd, low-accuracy results even after my own data augmentation. I need an experienced Hugging Face practitioner who can jump in immediately to:

• Review my current notebook / Trainer script, hyperparameters, and tokenizer choices.
• Diagnose why accuracy is lagging and propose concrete fixes: a better loss function, curriculum learning, class balancing, weighted sampling, etc. (a class-weighting sketch follows this list).
• Iterate on the training run: tune learning rates, batch sizes, evaluation strategy, and early stopping until we hit a clearly higher accuracy on the held-out set (see the tuning-loop sketch below).
• Push the improved model to the same Hugging Face repository and confirm the Vercel endpoint continues serving the new weights without breaking the existing GitHub deployment workflow (see the final sketch below).

Please be comfortable with Transformers, Accelerate, mixed precision, and fast restarts of interrupted jobs. I’ll provide immediate access to the dataset, current code, and HF token so we can move ASAP, no delays.
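For the class-balancing bullet, one standard Transformers pattern is to subclass Trainer and apply inverse-frequency class weights in the cross-entropy loss. This is a minimal sketch, not the poster's actual code: `train_dataset` is a placeholder for a `datasets.Dataset` with an integer `"label"` column, and the weighting scheme shown is just the common "balanced" heuristic.

```python
import numpy as np
import torch
from transformers import Trainer

# "Balanced" inverse-frequency weights: weight_c = n_samples / (n_classes * count_c).
# train_dataset is a placeholder for the poster's datasets.Dataset with a "label" column.
train_labels = np.array(train_dataset["label"])
counts = np.bincount(train_labels)
class_weights = torch.tensor(len(train_labels) / (len(counts) * counts), dtype=torch.float)


class WeightedLossTrainer(Trainer):
    """Trainer variant that applies class weights in the cross-entropy loss."""

    def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.logits
        loss_fct = torch.nn.CrossEntropyLoss(weight=class_weights.to(logits.device))
        loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
        return (loss, outputs) if return_outputs else loss
```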
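For the tuning-loop bullet, the usual approach pairs an epoch-level evaluation/save strategy with `load_best_model_at_end` and an `EarlyStoppingCallback`, plus `fp16` for mixed precision. Every hyperparameter value below is an illustrative starting point, not a setting from the posting, and `model`, `train_dataset`, `eval_dataset`, `compute_metrics`, and the repo id are placeholders.

```python
from transformers import TrainingArguments, EarlyStoppingCallback

args = TrainingArguments(
    output_dir="checkpoints",             # resumable checkpoints land here
    hub_model_id="user/existing-model",   # placeholder: the repo the endpoint already serves
    learning_rate=2e-5,                   # illustrative starting point, not the poster's value
    per_device_train_batch_size=16,
    num_train_epochs=10,
    eval_strategy="epoch",                # named evaluation_strategy on older transformers releases
    save_strategy="epoch",
    load_best_model_at_end=True,          # required for EarlyStoppingCallback
    metric_for_best_model="accuracy",     # assumes compute_metrics returns an "accuracy" key
    greater_is_better=True,
    fp16=True,                            # mixed precision on CUDA
)

trainer = WeightedLossTrainer(            # the subclass sketched above
    model=model,                          # model/datasets/compute_metrics are placeholders
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    compute_metrics=compute_metrics,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()
```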
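Finally, restarting interrupted jobs and pushing the improved weights back to the same repository are both built into Trainer; the sketch below assumes the `trainer` and `args.hub_model_id` defined above and that an HF token is already configured.

```python
# Resume an interrupted run from the latest checkpoint in output_dir.
trainer.train(resume_from_checkpoint=True)

# Push the improved weights to the existing Hub repo (args.hub_model_id above),
# so the Vercel endpoint keeps serving from the same repository. Assumes the
# HF token is available, e.g. via `huggingface-cli login` or the HF_TOKEN env var.
trainer.push_to_hub(commit_message="Retrain with class-weighted loss and early stopping")
```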