I need a production-ready image-segmentation pipeline that cleanly separates structural elements (walls, doors, windows, stairs) from full-sheet architectural plans.

The model must run server-side and expose a lightweight REST or GraphQL endpoint so my web application can submit a plan, receive masks or overlaid PNG/SVG layers, and continue its own post-processing. My current stack is Python, FastAPI, and Docker, so training in PyTorch with Ultralytics YOLOv8 (or a demonstrably better architecture) will slot in cleanly.

You can expect hundreds of high-resolution TIFF and PDF drawings; I will supply a curated, annotated subset at project start. The rest of the dataset may need additional labeling, so please account for best-practice augmentation, tiling, and class-imbalance handling.

Deliverables:
• Trained segmentation model (weights + config) meeting an agreed IoU threshold on walls, doors, windows, stairs, and "other".
• Inference script wrapped in FastAPI and dockerized, accepting base64 input or a URL and returning JSON masks plus an optional overlay image.
• Brief README covering environment setup, training commands, and endpoint usage.

Acceptance is straightforward: I will run your container on a fresh GPU instance and test it on 50 unseen plans; masks must reach the agreed IoU threshold and return in under 3 s per plan.

Any questions about the drawings or classes, let's clarify early so the first training cycle is already on the right track.
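To make the expected request/response contract concrete, here is a minimal sketch of the payload handling the inference endpoint would sit behind. The field names `image_b64` and `image_url`, the class list, and the polygon-based mask format are my assumptions for illustration, not fixed requirements; in the real service these helpers would be called from a FastAPI route, and URL fetching is stubbed out here.

```python
import base64
import json

# Assumed class list from the brief; "other" catches everything else.
CLASSES = ["wall", "door", "window", "stairs", "other"]

def decode_request(payload: dict) -> bytes:
    """Accept either a base64-encoded image or a URL, per the brief.

    Field names are hypothetical. URL fetching is stubbed; a real
    service would download the image server-side (e.g. with httpx).
    """
    if "image_b64" in payload:
        return base64.b64decode(payload["image_b64"])
    if "image_url" in payload:
        raise NotImplementedError("fetch payload['image_url'] server-side")
    raise ValueError("payload must contain 'image_b64' or 'image_url'")

def masks_to_json(masks: dict) -> str:
    """Serialise per-class masks into the JSON body the endpoint returns.

    `masks` maps class name -> list of polygons, each polygon a list of
    [x, y] vertices in pixel coordinates (one assumed mask encoding;
    RLE or PNG bitmasks are equally valid choices).
    """
    return json.dumps({
        "classes": CLASSES,
        "masks": {cls: masks.get(cls, []) for cls in CLASSES},
    })
```

Pinning this contract down early (field names, mask encoding, coordinate system) is worth doing before training starts, since the web app's post-processing depends on it.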