I need a full-stack engineer who is comfortable at the intersection of audio signal processing and data-rich front-end work. The core task is to turn overnight recordings (8+ hours) into meaningful insights by automatically flagging four acoustic events (heavy snoring, breathing interruptions, normal flow, and a softer “slow snore” that is not always obvious) and then presenting the results in an intuitive web dashboard.

On the back end, an AI/DSP pipeline should sweep through a single-channel WAV (or similar) file, classify every frame, and calculate two continuous “work of breathing” (WOB) metrics: intensity and patient effort. These indices will feed the UI in real time or after batch processing. Accuracy on slow snoring is especially important, so a short model-validation routine or confusion-matrix report will be required.

The interface must feel like a professional monitoring console: two circular gauges for the intensity and effort scores, plus a central sphere whose colour and animation state change with cumulative findings. A scrollable timeline should let me jump straight to any highlighted event and hear a 10- to 30-second trimmed clip without re-encoding the whole file.

Technical freedom is yours: if Python libraries such as librosa, PyTorch, or TensorFlow serve the detection, great; if you prefer another stack, convince me. The front end can be React, Vue, or a comparable modern framework; D3, Three.js, or Plotly can drive the visualisation layer.

Deliverables
• Trained detection model and reproducible inference script
• REST or local API that produces per-second labels and WOB metrics
• Web dashboard reflecting the gauge/sphere design with in-place audio playback
• Brief usage documentation and install notes

If this sounds like your wheelhouse, tell me how you would tackle the slow-snore detection challenge and outline any similar projects you have shipped.
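To make the per-second labeling expectation concrete, here is a minimal stdlib-only sketch that synthesizes a short test WAV and labels each one-second window by RMS energy. This is only a toy stand-in: real detection would use spectral features (for example via librosa) and a trained model, and the file name, `heavy_snore`/`normal_flow` labels, and threshold here are all illustrative assumptions.

```python
import math
import struct
import wave

SR = 16000  # sample rate in Hz

def write_test_wav(path):
    """Synthesize 4 s of mono 16-bit audio: 2 s loud tone, then 2 s near-silence."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SR)
        samples = []
        for n in range(4 * SR):
            amp = 12000 if n < 2 * SR else 300  # loud first half, quiet second half
            samples.append(int(amp * math.sin(2 * math.pi * 120 * n / SR)))
        w.writeframes(struct.pack(f"<{len(samples)}h", *samples))

def per_second_labels(path, loud_rms=2000.0):
    """Label each 1 s window by RMS energy (toy stand-in for a real classifier)."""
    with wave.open(path, "rb") as w:
        sr = w.getframerate()
        labels = []
        while True:
            raw = w.readframes(sr)
            if len(raw) < sr * 2:  # 2 bytes per 16-bit sample: partial/empty window
                break
            vals = struct.unpack(f"<{sr}h", raw)
            rms = math.sqrt(sum(v * v for v in vals) / sr)
            labels.append("heavy_snore" if rms > loud_rms else "normal_flow")
        return labels

write_test_wav("night_demo.wav")
print(per_second_labels("night_demo.wav"))
# → ['heavy_snore', 'heavy_snore', 'normal_flow', 'normal_flow']
```

A real pipeline would replace the RMS threshold with a learned decision over mel-spectrogram or MFCC features, but the per-second windowing and label list shape would stay the same.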
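The validation requirement can be met with something as small as this stdlib sketch, which tabulates (true, predicted) pairs into a confusion matrix and reports per-class recall, the figure that matters most for the hard-to-hear slow snore. The labels and counts below are invented purely for illustration.

```python
from collections import Counter

CLASSES = ["heavy_snore", "breath_interruption", "normal_flow", "slow_snore"]

def confusion_matrix(y_true, y_pred):
    """Count (true, predicted) pairs; rows are true classes, columns predicted."""
    pairs = Counter(zip(y_true, y_pred))
    return [[pairs[(t, p)] for p in CLASSES] for t in CLASSES]

def recall(matrix, cls):
    """Fraction of true `cls` frames the model actually recovered."""
    i = CLASSES.index(cls)
    row_total = sum(matrix[i])
    return matrix[i][i] / row_total if row_total else 0.0

# Toy evaluation data: slow_snore is the class most often missed.
y_true = ["slow_snore"] * 4 + ["normal_flow"] * 4 + ["heavy_snore"] * 2
y_pred = ["slow_snore", "normal_flow", "slow_snore", "normal_flow",
          "normal_flow", "normal_flow", "normal_flow", "normal_flow",
          "heavy_snore", "heavy_snore"]

m = confusion_matrix(y_true, y_pred)
print(recall(m, "slow_snore"))   # → 0.5  (half the slow snores were missed)
print(recall(m, "normal_flow"))  # → 1.0
```

In practice the same report would come from sklearn.metrics, but the point stands: the slow-snore row of the matrix is the acceptance criterion to watch.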
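The "trimmed clip without re-encoding" requirement is cheap for PCM WAV, because a slice is just a raw frame copy; a sketch using only the stdlib `wave` module follows (for compressed containers, an ffmpeg stream copy would play the same role). The file names and the `trim_wav` helper are hypothetical.

```python
import wave

SR = 16000  # sample rate in Hz

def make_silence_wav(path, seconds):
    """Write a mono 16-bit WAV of silence (stand-in for a real overnight recording)."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SR)
        w.writeframes(b"\x00\x00" * (seconds * SR))

def trim_wav(src, dst, start_s, end_s):
    """Copy the [start_s, end_s) slice of a PCM WAV as raw frames: no re-encode."""
    with wave.open(src, "rb") as r:
        sr = r.getframerate()
        r.setpos(int(start_s * sr))
        frames = r.readframes(int((end_s - start_s) * sr))
        with wave.open(dst, "wb") as w:
            w.setparams(r.getparams())  # nframes is rewritten on close
            w.writeframes(frames)

make_silence_wav("full_night.wav", 60)
trim_wav("full_night.wav", "event_clip.wav", 12.0, 27.0)  # 15 s event clip
with wave.open("event_clip.wav", "rb") as w:
    print(w.getnframes() / w.getframerate())  # → 15.0
```

Because only headers and raw frames are touched, extracting a 30-second clip from an 8-hour file takes milliseconds, which is what makes the click-to-listen timeline feasible.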
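For the API deliverable, one plausible shape for a per-second record combining the event label with the two WOB indices is sketched below; every field name and value here is an assumption for discussion, not a spec.

```python
import json

# Hypothetical per-second record; field names and ranges are illustrative only.
sample = {
    "t": 4212,                      # seconds from recording start
    "label": "slow_snore",          # one of the four event classes
    "confidence": 0.81,             # classifier score for the label
    "wob": {"intensity": 0.37, "effort": 0.52},  # continuous 0-1 indices
}

payload = json.dumps(sample)
print(json.loads(payload)["label"])  # → slow_snore
```

A batch response would simply be a list of such records, which the dashboard can consume for both the gauges (latest `wob` values) and the timeline (the `label` stream).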