# Project

| # | Title | Team Members | TA | Documents | Sponsor |
|---|---|---|---|---|---|
| 18 | Acoustic Stimulation to Improve Sleep | Bakry Abdalla, John Ludeke, Sid Gurumurthi | Mingrui Liu | | |
# Acoustic Stimulation to Improve Sleep

Team Members:
- Abdalla, Bakry (bakryha2)
- Gurumurthi, Sid (sguru2)
- Ludeke, John (jludeke2)

# Problem

Certain people experience poor-quality sleep as they age or develop sleep disorders because they do not spend enough time in slow wave sleep (SWS). While there are data-driven solutions currently available to the public, they are expensive.

# Solution

Closed-loop auditory stimulation has been shown in research to amplify the oscillations of SWS. When it is time to sleep, the user puts a wearable device on their head. The device consists of an EEG headband with dry electrodes to measure brain activity, connected to an all-purpose, custom PCB that integrates the EEG front end, microcontroller, audio driver, and power management circuitry. The processor detects slow wave sleep and identifies slow wave oscillations. When these waves are detected, the system delivers short, precisely timed bursts of pink noise through an integrated speaker. Data insights about the user's sleep patterns are delivered via a user-facing application. All of this comes at a lower cost than what is currently available.

# Solution Components

## Subsystem 1 – EEG Headband

We will use a commercially available EEG headband, the OpenBCI EEG Headband Kit. This includes the headband, electrodes, and the cables carrying the analog signal.

Components:
- OpenBCI EEG Headband: https://shop.openbci.com/products/openbci-eeg-headband-kit
- Ag-AgCl electrodes
- Ear-clip & snap cables

## Subsystem 2 – Signal Processor

This subsystem takes in analog signals, denoises and amplifies them, processes them digitally, and outputs the result. It is responsible for the core functionality of a commercial EEG interface such as the OpenBCI Cyton, but at a lower cost. It receives raw analog EEG signals from the headband electrodes and converts them into digitized, clean EEG data suitable for downstream analysis.
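As a rough sketch of the digital cleanup this subsystem performs on the digitized signal, the following applies a bandpass filter to keep EEG-relevant bands and a notch filter to reject mains interference. The sampling rate, cutoff frequencies, and 60 Hz mains frequency are illustrative assumptions, not final design values.

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt, sosfiltfilt

FS = 250  # Hz; assumed sampling rate (depends on the ADS1299 configuration)

def clean_eeg(raw, fs=FS):
    """Bandpass to EEG-relevant bands, then notch out mains interference."""
    # 4th-order Butterworth bandpass, 0.5-40 Hz (illustrative cutoffs)
    sos = butter(4, [0.5, 40], btype="bandpass", fs=fs, output="sos")
    x = sosfiltfilt(sos, raw)
    # Notch filter at 60 Hz (assumed North American mains frequency)
    b, a = iirnotch(60, Q=30, fs=fs)
    return filtfilt(b, a, x)
```

Zero-phase filtering (`sosfiltfilt`/`filtfilt`) is used here so the filter does not shift the slow-wave phase, which matters for the phase-locked audio bursts; a real-time implementation on the microcontroller would instead need a causal filter with known delay.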
It amplifies the weak analog signals, then applies analog filtering to limit the bandwidth to EEG-relevant bands and prevent aliasing before analog-to-digital conversion. After digitization, the subsystem performs digital signal processing, including bandpass and notch filtering, to reduce noise and artifacts. An accelerometer is incorporated to detect significant motion events so that the resulting spikes and noise can be removed from the EEG data.

Components:
- Analog front end: Texas Instruments ADS1299
- Microcontroller: PIC32MX250F128B
- Wireless data transmission: RFduino BLE radio module (RFD22301)
- Triple-axis accelerometer: LIS3DH
- Resistors: COM-10969 (ECE Supply Store)
- Capacitors: 75-562R5HKD10, 330820 (ECE Supply Store)
- JFET-input operational amplifier: TL082CP (ECE Supply Store)
- 2.048 MHz standard clock oscillator: C3291-2.048

## Subsystem 3 – Audio Output

After receiving processed EEG data from the Signal Processor subsystem, this subsystem feeds the data to an algorithm that decides whether or not to play a certain frequency of noise through the preferred audio output device (the integrated speaker by default). The algorithm makes this decision by detecting whether the brain signals indicate slow wave sleep is occurring.

Components:
- A slow-wave-sleep detection algorithm (https://pubmed.ncbi.nlm.nih.gov/25637866/)
- One small integrated speaker (665-AST03008MRR)

## Subsystem 4 – Power Delivery

To power the entire system, a power circuit is integrated into the PCB. This circuit manages battery charging and voltage regulation while minimizing heat dissipation for user comfort.

Components:
- 2 AAA batteries: EN92
- Voltage regulator: LM350T
- Capacitors: 75-562R5HKD10
- On/off switch: MULTICOMP 1MS3T1B1M1QE
- Power jack: 163-4013

## Subsystem 5 – User-Facing Application

To improve usability, the User-Facing Application will give the end user insights into their sleep using standard sleep metrics.
Specifically, it will tell the user their time spent awake, in REM sleep, in light sleep, and in deep sleep. We can use a React Native frontend for compatibility with Android and iOS. We can run a lightweight ML model on-device with Python to determine the sleep state, using techniques such as FFT and bandpower analysis. For the backend, Firebase can be used to store our data, which will come in via Bluetooth.

Components:
- React Native
- Firebase

# Criterion For Success

- Headset remains comfortable (4 of 5 people would be willing to wear the device to sleep)
- Signal Processor successfully amplifies and denoises the signal
- Signal Processor successfully converts the analog signal into a digital one
- Audio Output delivers audio in phase with the EEG slow waves to maximize effectiveness
- Audio Output correctly adjusts the audio in response to the input signal from the Signal Processor
- Power Delivery provides enough battery power for the device to last at least 10 hours
- Power Delivery remains cool and comfortable for sleep
- User-Facing Application is intuitive (4 of 5 people would download the app)
- User-Facing Application shows accurate historical data from the user's headband
- User-Facing Application correctly classifies the phases of the user's sleep
- The entire system is easy to use (a new user can figure it out without instruction)
- The entire system works seamlessly
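To illustrate the FFT-and-bandpower approach the application could use for sleep staging, here is a minimal sketch. The band boundaries follow conventional EEG band definitions, but the thresholds and stage rules are purely illustrative placeholders, not the published detection algorithm cited above or a validated classifier.

```python
import numpy as np

FS = 250  # Hz; assumed sampling rate of the incoming EEG stream

# Conventional EEG frequency bands (Hz)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12), "beta": (12, 30)}

def bandpower(epoch, fs=FS):
    """Relative power per EEG band via FFT (Welch's method would be more robust)."""
    spec = np.abs(np.fft.rfft(epoch)) ** 2
    freqs = np.fft.rfftfreq(len(epoch), 1 / fs)
    total = spec[(freqs >= 0.5) & (freqs <= 30)].sum()
    return {name: spec[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in BANDS.items()}

def classify_stage(epoch, fs=FS):
    """Toy staging heuristic: strong relative delta power suggests deep sleep (SWS)."""
    p = bandpower(epoch, fs)
    if p["delta"] > 0.5:       # threshold is a placeholder, not a validated value
        return "deep"
    if p["theta"] > p["alpha"]:
        return "light"
    return "awake/REM"
```

In practice the app would run this on consecutive 30-second epochs (the standard sleep-scoring window) and aggregate the per-epoch labels into the awake/REM/light/deep totals shown to the user.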