I’m building a proof-of-concept privacy layer for wearable technology that relies on behavioral learning rather than hard-coded rules. By continuously studying user activity patterns, the system should recognize legitimate behavior, flag anomalies, and trigger the right countermeasure: seamless data encryption, blocking of unauthorized access attempts, or prevention of downstream data misuse.

Scope
• Devices: fitness bands, smartwatches, health trackers, and similar wearables.
• Data feed: anonymized streams of time-stamped user actions (steps, heart-rate checks, gesture commands, app interactions).

What I need from you
1. Design and train a lightweight machine-learning model (anomaly detection or sequence-based classification) optimised for on-device or near-edge execution.
2. Implement a decision layer that selects one of three responses (encrypt, quarantine, or alert) based on the model’s confidence score.
3. Provide clean, well-commented Python code (TensorFlow, PyTorch, or scikit-learn are all acceptable) plus a short README explaining data preprocessing, hyperparameters, and how to port the model to an embedded runtime (e.g., TensorFlow Lite, ONNX).
4. Supply a small synthetic data set and demonstrate at least 90% accuracy in distinguishing normal from suspicious activity in a live demo or recorded notebook.

Acceptance criteria
• The model trains and runs locally on a laptop within 10 minutes using the provided sample data.
• The end-to-end pipeline reproduces the results with a single command.
• Clear documentation shows how each privacy concern (encryption, unauthorized access, data misuse) is addressed in the code logic.

If you have prior experience with anomaly detection on limited hardware or have deployed ML models in wearables, your insight will be invaluable. Let’s secure our wearables the smart way.
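To make the modeling ask concrete, here is a minimal sketch of item 1 using scikit-learn’s IsolationForest on a synthetic data set. The feature choices (steps per minute, heart rate) and their distributions are illustrative assumptions, not a specification; a real implementation would use the project’s anonymized action streams.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" wearable activity (assumed plausible ranges):
# steps per minute and heart rate in bpm.
normal = np.column_stack([
    rng.normal(60, 15, 500),   # steps/min
    rng.normal(75, 8, 500),    # heart rate (bpm)
])

# Synthetic "suspicious" activity: readings far outside the learned
# profile, standing in for replayed or injected sensor streams.
suspicious = np.column_stack([
    rng.normal(300, 20, 25),
    rng.normal(180, 10, 25),
])

# Train only on normal behavior; contamination sets the expected
# fraction of in-profile points flagged as outliers.
model = IsolationForest(contamination=0.05, random_state=0)
model.fit(normal)

X_test = np.vstack([normal[:100], suspicious])
y_true = np.array([1] * 100 + [-1] * 25)  # 1 = normal, -1 = anomaly
y_pred = model.predict(X_test)

accuracy = (y_pred == y_true).mean()
print(f"accuracy: {accuracy:.2%}")
```

A lightweight tree ensemble like this trains in seconds on a laptop, which keeps the 10-minute acceptance criterion comfortable; sequence-based alternatives (e.g., a small LSTM autoencoder) would fit the same harness.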
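The decision layer in item 2 can be sketched as a simple threshold policy over a normalized anomaly-confidence score. The threshold values below are hypothetical placeholders to be tuned against the synthetic data, not part of the brief.

```python
def choose_response(score: float,
                    quarantine_at: float = 0.5,
                    alert_at: float = 0.8) -> str:
    """Map an anomaly-confidence score in [0, 1] to one of the three
    responses named in the brief. Thresholds are illustrative.

    - below quarantine_at: behavior looks normal, so apply the default
      lowest-friction response and keep data encrypted ("encrypt")
    - between the thresholds: hold the session for review ("quarantine")
    - at or above alert_at: likely intrusion or misuse ("alert")
    """
    if score >= alert_at:
        return "alert"
    if score >= quarantine_at:
        return "quarantine"
    return "encrypt"

print(choose_response(0.2))   # encrypt
print(choose_response(0.6))   # quarantine
print(choose_response(0.95))  # alert
```

Keeping the policy in a pure function like this makes it trivial to unit-test and to re-tune the thresholds without retraining the model.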