### **Job Title: MAGIC AI – Official Model Trainer**

**Status:** Contract (with opportunity for long-term collaboration)

**Compensation:**

- $200 competition prize
- $400 official trainer contract
- Public credit on the MAGIC GitHub & HuggingFace repositories (MAGIC + MageV1)

### **About MAGIC AI**

MAGIC is a new AI architecture currently under active development. We're looking for a dedicated and highly skilled machine learning engineer to serve as the Official MAGIC Trainer, working directly on the training pipelines, performance tuning, and dataset refinement for the MAGIC and MageV1 foundation models.

This role is ideal for someone who enjoys experimental architectures, training large models from scratch, and pushing performance boundaries through iterative improvements to the training loops.

### **How Selection Works**

To be considered for the role, applicants must participate in and win the current competition:

- **Competition:** https://www.freelancer.com/contest/Training-MAGIC-AI-2663043/
- **Prize:** $200
- **Goal:** Train the best-performing MAGIC Base model

The winner of this contest will automatically be offered the official training position.
### **Position Summary**

As MAGIC's Official Trainer, you will:

- Work directly with the developer of MAGIC on model training strategy
- Improve and extend the core training script (`training_magic.py`)
- Design, run, and optimize training loops for MAGIC Base and subsequent versions
- Handle preprocessing, batching, scaling strategies, checkpoints, and evaluation
- Experiment with hyperparameters, optimizers, scheduling, and architecture variants
- Provide input on dataset sourcing, creation, and curation
- Collaborate on training reproducibility and documentation
- Help shape the official MAGIC training standards for future models

### **Responsibilities**

- Maintain and enhance MAGIC's training pipeline
- Run training experiments and document performance results
- Implement improvements to stability, speed, and model quality
- Optimize for GPU/multi-GPU setups
- Collaborate on shaping the evolution of MageV1 and future MAGIC versions
- Contribute to best practices and training documentation
- Ensure reproducible training via versioning and clean code updates

### **Requirements**

**Technical Skills:**

- Strong experience with PyTorch (preferred) or a similar deep learning framework
- Hands-on experience training transformers or other large neural architectures
- Strong understanding of training loops, optimization, scheduling, and GPU efficiency
- Experience with large-scale data processing and dataset pipelines
- Familiarity with multi-GPU, distributed, or accelerated training frameworks
- Ability to debug training instabilities, loss issues, and performance bottlenecks

**Bonus Skills:**

- Experience with custom architectures
- Prior contributions to ML open-source projects
- Ability to profile and optimize CUDA workloads
- Research background in LLMs or generative models

### **Compensation & Recognition**

- $200 for winning the competition
- $400 upon completion of MageV1
- Public credit as the Official Trainer of MAGIC and MageV1 on GitHub, HuggingFace, and in the documentation
- Opportunity for expanded future paid roles as MAGIC evolves
- Priority consideration for extended research collaborations

### **How to Apply**

1. Enter the competition: [https://www.freelancer.com/contest/Training-MAGIC-AI-2663043/](https://www.freelancer.com/contest/Training-MAGIC-AI-2663043/)
2. Train and submit the strongest MAGIC Base model.
3. The winner receives the contract and begins work as MAGIC's official trainer.