This is version 2.0 of the original project - the details of which are posted for reference purposes below.

Additions planned for v2.0:
- Enhanced UI
- Adding player stats to the model
- Making player Elo a factor when computing the output
- Trying ML models other than XGBoost, or modelling the predictor with deep learning to get better insights

A Look into Fantasy League Sports v1.0

All of the training data is ready to load; what I need is a clean, reproducible Python 3.10+ workflow that trains two separate classification models (one with XGBoost, the other with scikit-learn's RandomForestClassifier) and returns both the predicted class and its associated probabilities. I will judge the solution on validation accuracy, so please use that metric as the main optimisation target.

The stack is fixed: pandas, numpy, xgboost, scikit-learn, matplotlib/seaborn for diagnostics, and joblib for saving artefacts. Feel free to include GridSearchCV, RandomizedSearchCV or similar helpers from scikit-learn for tuning.

Deliverables
• A well-commented script or Jupyter notebook that:
– Loads the provided data and performs a sensible train/validation split (stratified if appropriate)
– Trains and tunes the XGBoost and Random Forest models
– Outputs accuracy scores for both, plus any supporting plots that illustrate performance or feature importance
• joblib-saved model files for each algorithm
• A small inference snippet (function or notebook cell) that accepts a single new record and returns the predicted class label and its probability distribution

Keep the code lightweight and easy to follow so I can integrate it straight into the wider pipeline.