Compare Classifiers on MNIST and Fashion-MNIST

Client: AI | Published: 13.03.2026

I’m running a head-to-head study of several classic and deep-learning classifiers (KNN, Logistic Regression, linear SVM, kernel SVM, and a simple feed-forward neural network) using both the original MNIST digits and the Fashion-MNIST images. I want the two datasets treated with equal weight so that any conclusions hold across handwriting and apparel imagery alike.

Before training, every image batch must pass through normalization and feature scaling, and I’d like to see creative yet reasonable data augmentation (rotations, shifts, noise, etc.) applied consistently to both datasets so we can observe how each model copes with expanded variability.

For each classifier, I need precision, recall, and F1-score reported per class and averaged (macro and weighted). Beyond raw numbers, I’m interested in a concise narrative or visual that explains how model complexity (not just depth or number of neighbors, but also kernel choice, regularization, and hidden-layer width) interacts with the distinct characteristics of the two datasets.

Deliverables
• Well-commented Python notebook(s) or script(s) showing data loading, the preprocessing pipeline, model training, and evaluation
• A short comparative report (PDF or Markdown) highlighting results, insights, and any surprising findings
• Plots or tables that clearly display metric scores and, where helpful, confusion matrices

If you already have utilities built on Scikit-learn or TensorFlow/PyTorch, feel free to leverage them; just keep the workflow reproducible so I can rerun everything on my side.
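To make the augmentation requirement concrete, here is a minimal sketch of a normalization-plus-augmentation step using NumPy and SciPy. It runs on a synthetic 28×28 array standing in for an MNIST image; the rotation range, shift range, and noise level are illustrative assumptions, not fixed requirements:

```python
import numpy as np
from scipy.ndimage import rotate, shift

def normalize(images):
    """Scale uint8 pixel values (0-255, as in MNIST/Fashion-MNIST) to [0, 1]."""
    return images.astype(np.float32) / 255.0

def augment(image, rng):
    """Apply a small random rotation, pixel shift, and Gaussian noise to one image.

    Parameter ranges below are placeholder choices for the sketch.
    """
    angle = rng.uniform(-15, 15)                       # degrees
    dy, dx = rng.integers(-2, 3, size=2)               # pixel shifts
    out = rotate(image, angle, reshape=False, order=1)
    out = shift(out, (dy, dx), order=1)
    out = out + rng.normal(0.0, 0.05, size=out.shape)  # additive noise
    return np.clip(out, 0.0, 1.0)                      # keep values in [0, 1]

rng = np.random.default_rng(0)
# Synthetic 28x28 "image" standing in for a real MNIST digit
fake = (rng.random((28, 28)) * 255).astype(np.uint8)
aug = augment(normalize(fake), rng)
print(aug.shape, aug.min(), aug.max())
```

Applying the same `augment` function (with the same parameter ranges) to both MNIST and Fashion-MNIST is one straightforward way to satisfy the "consistent across both datasets" constraint.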
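For the per-class and averaged metrics, scikit-learn's `classification_report` and `precision_recall_fscore_support` cover exactly what is asked for. A quick sketch, using the bundled `load_digits` dataset as a fast stand-in for MNIST (the model choice and split parameters are placeholders):

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, precision_recall_fscore_support
from sklearn.model_selection import train_test_split

# 8x8 digits ship with scikit-learn, so no download is needed for a demo
X, y = load_digits(return_X_y=True)
X = X / 16.0  # feature scaling: pixel values to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
y_pred = clf.predict(X_te)

# Per-class precision/recall/F1 plus macro and weighted averages in one table
print(classification_report(y_te, y_pred, digits=3))

macro = precision_recall_fscore_support(y_te, y_pred, average="macro")
weighted = precision_recall_fscore_support(y_te, y_pred, average="weighted")
print("macro F1:", round(macro[2], 3), "| weighted F1:", round(weighted[2], 3))
```

The same evaluation code works unchanged for every classifier in the study, which keeps the comparison apples-to-apples; confusion matrices can be added alongside via `sklearn.metrics.ConfusionMatrixDisplay`.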