I built a Python-based bazaar bot for Wizard101, but it's consistently outperformed by another bazaar bot. My priority is improving both speed and accuracy without ballooning resource usage. From the profiling I've done, the main drag seems to lie in the decision-making layer: the bot currently relies on a small machine-learning model to choose when and what to buy or sell, yet the reaction time is still too slow to win contested items.

Here's what I need:

• A careful performance audit of the existing ML-driven decision logic, including timing breakdowns and pinpointed bottlenecks.
• Refactoring or replacement of that section, whether by pruning the model, switching to lighter inference (e.g., scikit-learn → ONNX or TensorFlow Lite), or introducing cached heuristics, so the bot can respond in near-real-time.
• Clean, well-commented code that plugs back into my current Scrapy/requests framework with no regressions in accuracy (measured by correct buy/sell decisions over a 24-hour test run).
• A short report summarising changes, before/after benchmarks, and guidance for future tweaks.

The remainder of the pipeline (scraping, login handling, and network I/O) performs acceptably right now, so focus squarely on decision speed and precision. Familiarity with Python profiling tools (cProfile, py-spy), concurrency (asyncio or threading), and lightweight ML optimisation will be invaluable. If you can demonstrate a consistent edge over the rival bot, we're done. A few rough sketches of the kinds of approaches I have in mind follow below.
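For the audit bullet, something along these lines is what I mean by a timing breakdown: a minimal harness that measures per-decision latency and then runs cProfile over the decision path. The `decide` function and the sample listings here are placeholders, not my real code, and the numbers only matter once the real decision function is dropped in. For the already-running bot, `py-spy top --pid <PID>` gives a live sampling view without any code changes.

```python
# Minimal timing harness for the decision layer. `decide` and `sample`
# are stand-ins; swap in the bot's real decision function and captured
# bazaar listings.
import cProfile
import pstats
import time


def decide(listing):
    """Stand-in for the ML-driven buy/sell decision."""
    return listing["price"] < 2500  # dummy rule so the harness runs end-to-end


def avg_latency(decide_fn, listings, runs=1000):
    """Average wall-clock time per decision, measured without profiler overhead."""
    start = time.perf_counter()
    for _ in range(runs):
        for listing in listings:
            decide_fn(listing)
    return (time.perf_counter() - start) / (runs * len(listings))


if __name__ == "__main__":
    # Example listings only; real captured bazaar data goes here.
    sample = [{"item": "Jade Armor", "price": 1800},
              {"item": "Sky Iron Hasta", "price": 9000}]

    print(f"avg decision latency: {avg_latency(decide, sample) * 1e6:.1f} us")

    # cProfile breakdown to see which calls dominate the decision path.
    profiler = cProfile.Profile()
    profiler.enable()
    for listing in sample * 1000:
        decide(listing)
    profiler.disable()
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(15)
```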
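For the "lighter inference" option, this is the shape of a scikit-learn → ONNX export I'd expect, assuming the current model is a scikit-learn estimator and the `skl2onnx` and `onnxruntime` packages are available. The RandomForestClassifier and random training data are placeholders for the real model and feature pipeline.

```python
# Sketch: export a scikit-learn classifier to ONNX and serve it with
# onnxruntime for faster, lighter inference inside the bot's hot loop.
import numpy as np
import onnxruntime as ort
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
from sklearn.ensemble import RandomForestClassifier  # stand-in for the real model

# Stand-in training data; replace with the bot's real feature matrix and labels.
X = np.random.rand(500, 6).astype(np.float32)
y = (X[:, 0] > 0.5).astype(int)

model = RandomForestClassifier(n_estimators=50).fit(X, y)

# Convert once, offline; the feature count must match the live pipeline.
onnx_model = convert_sklearn(
    model, initial_types=[("input", FloatTensorType([None, X.shape[1]]))]
)

# Load into a lightweight runtime session for inference at decision time.
session = ort.InferenceSession(onnx_model.SerializeToString())
input_name = session.get_inputs()[0].name


def predict(features: np.ndarray) -> np.ndarray:
    """Run the exported model; returns predicted labels for each row."""
    return session.run(None, {input_name: features.astype(np.float32)})[0]


print(predict(X[:3]))
```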
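And for the cached-heuristics idea: a cheap fast path in front of the model, so repeat listings of the same item at roughly the same price never pay inference cost twice. Everything here is hypothetical; `model_predict`, the bucket width, and the cache size would all need tuning against real bazaar traffic.

```python
# Sketch: LRU-cached fast path keyed on (item, coarse price bucket).
# Only genuinely new item/price combinations fall through to the model.
from functools import lru_cache

BUCKET_WIDTH = 250  # gold; collapse nearby prices so the cache gets hits


def model_predict(item_name: str, price: int) -> bool:
    """Placeholder for the real (slower) ML inference call."""
    return price < 2500  # dummy rule so the sketch runs


def price_bucket(price: int) -> int:
    return price // BUCKET_WIDTH


@lru_cache(maxsize=4096)
def cached_decision(item_name: str, bucket: int) -> bool:
    # Cache miss: run the model once for this item/price-bucket combination,
    # using the bucket's lower bound as a representative price.
    return model_predict(item_name, bucket * BUCKET_WIDTH)


def should_buy(item_name: str, price: int) -> bool:
    """Fast-path decision used in the hot loop."""
    return cached_decision(item_name, price_bucket(price))


print(should_buy("Jade Armor", 1799))  # first call hits the model
print(should_buy("Jade Armor", 1810))  # same bucket -> served from the cache
```

The trade-off to document in the report is staleness: a cached decision ignores price movement within a bucket, so the accuracy check over the 24-hour run needs to confirm the bucket width is not costing correct buy/sell calls.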