Audio-Physio Prediction Model

Client: AI | Published: 17.02.2026

I'm building a supervised classifier that learns jointly from audio recordings and accompanying physiological signals. The end goal is a robust prediction model that generalises to new subjects, so every modelling choice, from the feature pipeline through the network architecture and hyper-parameter search, has to be evidence-driven and reproducible.

Here is what I already have: raw multichannel WAV files, synchronised physiological traces (ECG, EDA and respiration), and a draft protocol for train-test splits. What I still need is the deep-learning firepower to turn this into a working model, coded cleanly in Python with TensorFlow or PyTorch, complete with training scripts, an inference wrapper and clear documentation. I'll share the data dictionary, baseline metrics and an annotated notebook outlining some early experiments. From there, I'd like you to refine the preprocessing, design an appropriate architecture (e.g., CNN-RNN or transformer fusion), implement cross-validation and deliver a model that meets or beats the current baseline F1. To make the brief concrete, a few rough, non-binding sketches of the kind of code I have in mind are included at the end of this post.

Deliverables
• End-to-end training code, neatly commented
• Saved model weights plus an inference script that takes new audio + physio files and outputs class probabilities
• Brief report (accuracy, precision, recall, F1, confusion matrix) and guidance on further improvement

Clean, modular code and explain-as-you-go communication matter more to me than glossy presentations, so if classification of multimodal signals is your comfort zone, let's get started.
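On the architecture, one of the options mentioned above is CNN-RNN fusion: a CNN branch over audio spectrograms, an RNN branch over the physio traces, and a joint classification head. Here is a minimal PyTorch sketch of that idea; the layer sizes, the mel-bin count, the three physio channels and the class count are all placeholder assumptions, not project specs:

```python
import torch
import torch.nn as nn

class AudioPhysioNet(nn.Module):
    """CNN encodes audio spectrograms; GRU encodes physio traces; fused head classifies."""

    def __init__(self, n_mels=64, physio_channels=3, n_classes=4):
        super().__init__()
        # Audio branch: 2D CNN over (batch, 1, n_mels, time) log-mel spectrograms.
        self.audio_cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (batch, 32, 1, 1)
        )
        # Physio branch: GRU over (batch, time, channels) traces (ECG, EDA, respiration).
        self.physio_rnn = nn.GRU(physio_channels, 32, batch_first=True)
        # Fusion head: concatenate both 32-dim embeddings, project to class logits.
        self.head = nn.Linear(32 + 32, n_classes)

    def forward(self, audio, physio):
        a = self.audio_cnn(audio).flatten(1)             # (batch, 32)
        _, h = self.physio_rnn(physio)                   # h: (num_layers, batch, 32)
        return self.head(torch.cat([a, h[-1]], dim=1))   # logits, (batch, n_classes)
```

A transformer-fusion variant would swap the GRU for a small encoder over concatenated modality tokens; either way, I'd expect the branches and the fusion head to stay cleanly separated modules.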
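Because generalisation to new subjects is the whole point, cross-validation must split by subject rather than by recording, so no subject's data leaks between train and test folds. A sketch using scikit-learn's GroupKFold; the array shapes and subject_ids are placeholder assumptions standing in for the real data dictionary:

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# Placeholder data: X is (n_samples, ...) features, y is (n_samples,) labels,
# subject_ids maps each sample to its subject so folds never split a subject.
X = np.random.randn(100, 10)
y = np.random.randint(0, 4, size=100)
subject_ids = np.repeat(np.arange(10), 10)  # 10 subjects, 10 samples each

gkf = GroupKFold(n_splits=5)
for fold, (train_idx, test_idx) in enumerate(gkf.split(X, y, groups=subject_ids)):
    # Train on X[train_idx]; evaluate on entirely unseen subjects in X[test_idx].
    held_out = np.unique(subject_ids[test_idx])
    print(f"fold {fold}: held-out subjects {held_out}")
```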
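For the inference script, the contract I care about is simple: take new audio + physio files, return class probabilities. A hedged sketch of that wrapper; load_audio and load_physio are hypothetical preprocessing callables standing in for whatever pipeline we settle on, and the tensor shapes match the architecture sketch above:

```python
import torch

def predict_proba(model, audio_path, physio_path, load_audio, load_physio):
    """Return class probabilities for one audio + physio recording pair.

    load_audio / load_physio are hypothetical helpers that read a file and
    return a tensor shaped as the model expects.
    """
    model.eval()
    with torch.no_grad():
        audio = load_audio(audio_path).unsqueeze(0)     # (1, 1, n_mels, time)
        physio = load_physio(physio_path).unsqueeze(0)  # (1, time, channels)
        logits = model(audio, physio)
        return torch.softmax(logits, dim=1).squeeze(0)  # (n_classes,) probabilities
```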
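Finally, the report metrics (accuracy, precision, recall, F1, confusion matrix) are all standard in scikit-learn, so that deliverable reduces to a few calls per held-out fold. Sketch with placeholder labels; in practice y_true comes from the fold and y_pred from the argmax over predicted probabilities:

```python
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

# Placeholder labels for illustration only.
y_true = np.random.randint(0, 4, size=50)
y_pred = np.random.randint(0, 4, size=50)

print(classification_report(y_true, y_pred, digits=3))  # accuracy + per-class precision/recall/F1
print(confusion_matrix(y_true, y_pred))                  # rows: true class, cols: predicted class
```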