I am building an AI-driven flow that pinpoints stuck-at faults while a VLSI circuit is still in the design phase. Rather than relying on exhaustive manual test-bench creation, I want a simulation-based machine-learning model that observes RTL or gate-level simulation data, learns normal behaviour, and flags abnormalities that map to stuck-at conditions before tape-out.

The core idea is to couple traditional fault simulation (e.g., Synopsys VCS, ModelSim, or any similar engine you prefer) with a neural model (TensorFlow, PyTorch, or even a lighter scikit-learn approach if it proves faster) to predict fault locations and, ideally, suggest minimal pattern sets for confirmation. You are free to choose the exact architecture as long as it can be trained on simulation traces and delivers reproducible results. Two rough sketches of what I have in mind follow at the end of this brief.

Deliverables
• A complete, well-commented codebase (Python or a comparable language) that ingests simulation waveforms/netlists, trains, and performs inference.
• Sample dataset generation scripts showing how you injected and labelled stuck-at-0 and stuck-at-1 conditions (see the injection sketch below).
• A short report outlining model architecture, training procedure, key hyper-parameters, and achieved detection accuracy.
• A runnable demo: a command-line tool or Jupyter notebook that I can execute locally to verify results on a fresh netlist.

Acceptance criteria
• ≥95% detection accuracy on an unseen set of injected stuck-at faults.
• Inference time that allows analysis of a 100k-gate design in under 5 minutes on an average GPU or CPU.
• Clear documentation so I can integrate the solution into an existing simulation regression flow.

If this aligns with your expertise in machine learning, fault simulation, and VLSI design, I’m ready to get started right away.
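
Sketch 1: fault injection and labelling. To make the dataset-generation deliverable concrete, here is a minimal sketch of stuck-at injection and labelling on a toy combinational netlist evaluated in pure Python. The netlist, net names, and helper functions are hypothetical placeholders I made up for illustration; in the real flow the response data would come from VCS/ModelSim fault simulation rather than this toy evaluator.

# Minimal sketch of stuck-at fault injection and labelling on a toy
# gate-level netlist evaluated in pure Python. In the real flow the
# responses would come from VCS/ModelSim fault simulation; the netlist,
# net names, and helpers here are hypothetical placeholders.
import itertools

# Hypothetical netlist: each net is driven by (gate_type, input_nets),
# listed in topological order (Python dicts preserve insertion order).
NETLIST = {
    "n1": ("AND", ["a", "b"]),
    "n2": ("OR",  ["b", "c"]),
    "y":  ("XOR", ["n1", "n2"]),
}
PRIMARY_INPUTS = ["a", "b", "c"]
PRIMARY_OUTPUTS = ["y"]

GATE_FUNCS = {
    "AND": lambda ins: int(all(ins)),
    "OR":  lambda ins: int(any(ins)),
    "XOR": lambda ins: ins[0] ^ ins[1],
}

def evaluate(netlist, input_vector, fault=None):
    """Evaluate the netlist; 'fault' is (net, stuck_value) or None."""
    values = dict(input_vector)
    if fault and fault[0] in values:          # stuck-at on a primary input
        values[fault[0]] = fault[1]
    for net, (gate, ins) in netlist.items():  # relies on topological order
        out = GATE_FUNCS[gate]([values[i] for i in ins])
        if fault and net == fault[0]:
            out = fault[1]                    # force stuck-at-0 / stuck-at-1
        values[net] = out
    return [values[o] for o in PRIMARY_OUTPUTS]

def build_dataset(netlist):
    """Apply exhaustive input patterns, inject every single stuck-at fault,
    and label each response signature with (faulty_net, stuck_value)."""
    features, labels = [], []
    all_nets = PRIMARY_INPUTS + list(netlist.keys())
    patterns = list(itertools.product([0, 1], repeat=len(PRIMARY_INPUTS)))
    fault_list = [None] + [(n, v) for n in all_nets for v in (0, 1)]
    for fault in fault_list:
        # One feature row per fault: concatenated outputs over all patterns.
        row = []
        for bits in patterns:
            vec = dict(zip(PRIMARY_INPUTS, bits))
            row.extend(evaluate(netlist, vec, fault))
        features.append(row)
        labels.append("fault_free" if fault is None else f"{fault[0]}_sa{fault[1]}")
    return features, labels

if __name__ == "__main__":
    X, y = build_dataset(NETLIST)
    for row, label in zip(X, y):
        print(label, row)

Note that equivalent faults will produce identical response signatures, so the labelling scheme here (one class per net/polarity pair, plus a fault-free class) would likely need fault collapsing on a real design; I leave that choice to you.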
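
Sketch 2: training/inference loop. And a corresponding sketch of the lighter scikit-learn option mentioned above. The feature matrix here is random stand-in data purely to show the scaffolding (train/test split, fit, held-out accuracy), so the printed accuracy is meaningless; the real model would be trained on the labelled response signatures from the injection step and judged against the ≥95% criterion.

# Sketch of the training/inference loop with a lightweight scikit-learn
# classifier. The feature matrix below is random stand-in data; in the
# actual deliverable it would be the labelled response signatures
# produced by the fault-simulation step (e.g. the script above).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-in data: 2000 response signatures of 64 sampled output bits each,
# labelled with one of 20 hypothetical fault classes or a fault-free class.
X = rng.integers(0, 2, size=(2000, 64))
y = rng.integers(0, 21, size=2000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print(f"held-out accuracy: {accuracy_score(y_test, pred):.3f}")

The runnable demo could simply wrap a loop like this behind a command-line entry point or notebook cell that takes a fresh netlist as input; the exact interface is up to you.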