# 38: Smart Glasses for the Blind

ECE 445 Instructor's Award

TA: Sanjana Pingali
# Team Members
- Ahmed Nahas (anahas2)
- Siraj Khogeer (khogeer2)
- Abdulrahman Maaieh (amaaieh2)

# Problem:
The underlying motive behind this project is the heart-wrenching fact that, with all the developments in science and technology, the visually impaired have been left with little more than a simple white cane: a stick among today’s scientific novelties. Our overarching goal is to create a wearable assistive device for the visually impaired, giving them an alternative way of “seeing” through sound. The idea revolves around a glasses/headset device that allows the user to walk independently by detecting obstacles and notifying the user, creating a sense of vision through spatial awareness.

# Solution:
Our objective is to create smart glasses/headset that allow the visually impaired to ‘see’ through sound. The general idea is to map the user’s surroundings through depth maps and a normal camera, then map both to audio that allows the user to perceive their surroundings.

We’ll use two low-power I2C ToF imagers to build a depth map of the user’s surroundings, as well as an SPI camera for ML features such as object recognition. These cameras/imagers will be connected to our ESP32-S3-WROOM, which downsamples some of the input and offloads it to our phone app/webpage for heavier processing (object recognition, as well as the depth-map-to-sound algorithm, which is quite complex and builds on research papers we’ve found).
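As a rough sketch of the offload path, a depth frame could be quantized to one byte per pixel before being sent over Wi-Fi. The 4 m maximum range and the payload format below are illustrative assumptions, not a settled protocol:

```python
import struct

def pack_depth_frame(frame_mm, max_mm=4000):
    """Quantize an 8x8 depth frame (mm) to one byte per pixel for Wi-Fi offload.

    frame_mm: 64 distances in millimetres, row-major.
    Returns a 64-byte payload; 255 encodes "at or beyond max range".
    """
    assert len(frame_mm) == 64
    scale = 255 / max_mm
    quantized = [min(255, int(d * scale)) for d in frame_mm]
    return struct.pack("64B", *quantized)

def unpack_depth_frame(payload, max_mm=4000):
    """Inverse of pack_depth_frame (lossy, to roughly max_mm/255 resolution)."""
    return [b * max_mm / 255 for b in struct.unpack("64B", payload)]
```

At 64 bytes per frame, even a 15 fps depth stream is well under 1 kB/s, so the Wi-Fi link is nowhere near a bottleneck.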



---

# Subsystems:
## Subsystem 1: Microcontroller Unit
We will use an ESP32 as our MCU, mainly for its Wi-Fi capabilities as well as its sufficient processing power, which make it suitable for connecting our sensors to the app.
- ESP32-S3 WROOM : https://www.digikey.com/en/products/detail/espressif-systems/ESP32-S3-WROOM-1-N8/15200089


## Subsystem 2: ToF Depth Imagers/Cameras Subsystem
This subsystem is the main sensor subsystem for getting the depth map data. This data will be transformed into audio signals to allow a visually impaired person to perceive obstacles around them.

There will be two ToF sensors to provide a wide combined FOV, connected to the ESP32 MCU through two I2C connections. Each sensor provides an 8x8 pixel depth array over a 63-degree FOV.
- x2 SparkFun Qwiic Mini ToF Imager - VL53L5CX: https://www.sparkfun.com/products/19013
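To combine the two imagers into one wide field of view, each depth pixel column can be mapped to a world azimuth. The ±31.5-degree mounting offsets below are an assumed geometry that tiles the two 63-degree FOVs edge to edge; the actual mounting angles would come from the mechanical design:

```python
def pixel_azimuth_deg(sensor_index, col, fov_deg=63.0, yaw_offsets_deg=(-31.5, 31.5)):
    """Map a depth pixel column (0-7) on one of two ToF imagers to a world azimuth.

    Each 8x8 zone spans fov/8 degrees horizontally; column 0 is the leftmost
    zone. yaw_offsets_deg are assumed mounting angles that tile the two
    63-degree FOVs into a ~126-degree combined view.
    """
    zone = fov_deg / 8.0                        # angular width of one zone
    local = (col + 0.5) * zone - fov_deg / 2.0  # zone centre, sensor-relative
    return yaw_offsets_deg[sensor_index] + local
```

With these offsets the right edge of sensor 0 (column 7) lands next to the left edge of sensor 1 (column 0), giving 16 contiguous horizontal zones.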

## Subsystem 3: SPI Camera Subsystem
This subsystem will allow us to capture a color image of the user’s surroundings. A captured image will allow us to implement egocentric computer vision, processed on the app. We will implement one ML feature as a baseline for this project (one of: scene description, object recognition, etc.). This feedback will only be given when prompted by a button on the PCB: when the user clicks the button on the glasses/headset, they will hear a description of their surroundings. Hence, we don’t need real-time object recognition; a frame rate as low as 1 fps is sufficient, in contrast to the depth maps, which do need lower latency. This is exciting because having such an input allows for other ML features/integrations that can scale drastically beyond this course.
- x1 Mega 3MP SPI Camera Module: https://www.arducam.com/product/presale-mega-3mp-color-rolling-shutter-camera-module-with-solid-camera-case-for-any-microcontroller/
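The button-triggered flow might look like the following sketch, where `capture_fn`, `describe_fn`, and `speak_fn` are placeholders for the real SPI-camera grab, the app-side model, and text-to-speech; the debounce interval is an illustrative choice:

```python
import time

def describe_on_button_press(capture_fn, describe_fn, speak_fn, debounce_s=2.0):
    """On-demand scene description: capture one frame, run ML, speak the result.

    capture_fn, describe_fn, and speak_fn are placeholders for the SPI-camera
    grab, the app-side model, and text-to-speech. A debounce interval keeps
    repeated button presses from queueing redundant captures. Returns the
    spoken text, or None if the press was debounced.
    """
    last = getattr(describe_on_button_press, "_last", 0.0)
    now = time.monotonic()
    if now - last < debounce_s:
        return None
    describe_on_button_press._last = now
    frame = capture_fn()          # single still image; ~1 fps is enough
    text = describe_fn(frame)     # heavy processing happens on the phone
    speak_fn(text)
    return text
```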

## Subsystem 4: Stereo Audio Circuit
This subsystem is in charge of converting the digital audio from the ESP32 and app into stereo output for use with earphones or speakers. This includes digital-to-analog conversion and voltage clamping/regulation. We may also add an adjustable volume option through a potentiometer.

- DAC Circuit
  - 2x Op-Amp for Stereo Output, TLC27L1ACP: https://www.ti.com/product/TLC27L1A/part-details/TLC27L1ACP
- SJ1-3554NG (AUX)
  - Connection to speakers/earphones: https://www.digikey.com/en/products/detail/cui-devices/SJ1-3554NG/738709
- Bone Conduction Transducer (optional, to be tested)
  - Will allow for bone-conduction audio output, easily integrated around the ear in place of earphones; to be tested for effectiveness and replaced with earphones otherwise. https://www.adafruit.com/product/1674
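For the stereo output, one standard way to place an obstacle cue between the two channels is constant-power panning. This sketch assumes a ±60-degree usable field, which is an illustrative number rather than a measured one:

```python
import math

def pan_gains(azimuth_deg, max_deg=60.0):
    """Constant-power stereo panning for an obstacle at a given azimuth.

    Maps azimuth in [-max_deg, +max_deg] (negative = left) to (left, right)
    gains with left^2 + right^2 == 1, so perceived loudness stays constant
    as a cue sweeps across the stereo field.
    """
    a = max(-max_deg, min(max_deg, azimuth_deg)) / max_deg  # -1 .. +1
    theta = (a + 1.0) * math.pi / 4.0                       # 0 .. pi/2
    return math.cos(theta), math.sin(theta)
```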

## Subsystem 5: App Subsystem
- React Native app/webpage that connects directly to the ESP
- Does the heavy processing for the spatial-awareness algorithm as well as object recognition or scene description (using libraries such as YOLO, OpenCV, and TFLite)
- Sends audio output back to the ESP to be played through the stereo audio circuit
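As an illustration of the depth-map-to-spatial-audio step (the real algorithm will build on the research papers mentioned earlier), a minimal mapping could give each depth column its own tone, scale loudness by proximity, and pan by azimuth. The tone frequencies, range, and buffer length below are all illustrative assumptions:

```python
import math

def depth_to_stereo(frame_mm, sample_rate=16000, duration_s=0.1, max_mm=4000):
    """One possible depth-map-to-sound mapping (a sketch, not the final algorithm).

    For each of the 8 columns of an 8x8 ToF frame (row-major, mm), take the
    nearest distance, make closer obstacles louder, pan by the column's
    position, and assign each column its own tone.
    Returns (left, right) lists of float samples.
    """
    n = int(sample_rate * duration_s)
    left = [0.0] * n
    right = [0.0] * n
    for col in range(8):
        column = [frame_mm[row * 8 + col] for row in range(8)]
        d = min(column)                           # nearest obstacle in column
        amp = max(0.0, 1.0 - d / max_mm) / 8      # closer -> louder
        frac = (col + 0.5) / 8                    # 0 (left) .. 1 (right)
        lg = math.cos(frac * math.pi / 2)         # constant-power pan
        rg = math.sin(frac * math.pi / 2)
        freq = 300.0 + 100.0 * col                # one tone per column
        for i in range(n):
            s = amp * math.sin(2 * math.pi * freq * i / sample_rate)
            left[i] += lg * s
            right[i] += rg * s
    return left, right
```

A frame with a close obstacle on the left should produce a buffer whose left-channel energy dominates, which directly supports the left-vs-right criterion below.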

## Subsystem 6: Battery and Power Management
This subsystem is in charge of power delivery, voltage regulation, and battery management for the rest of the circuit and devices. It takes in the unregulated battery voltage and steps it up or down according to each component's needs.

- Main Power Supply
  - Lithium-ion battery pack
- Voltage Regulators
  - Linear, buck, and boost regulators for the MCU, sensors, and DAC
- Enclosure and Routing
  - Plastic enclosure for the battery pack



---

# Criterion for Success

**Obstacle Detection:**
- Be able to identify the difference between an obstacle that is 1 meter away vs. one that is 3 meters away.
- Be able to differentiate between obstacles on the right vs. the left side of the user.
- Be able to perceive an object moving from left to right or right to left in front of the user.

**MCU:**
- Offload data from the sensor subsystems onto the application through a Wi-Fi connection.
- Control and receive data from the sensors (ToF imagers and SPI camera) using SPI and I2C.
- Receive audio from the application and pass it to the DAC for stereo output.

**App/Webpage:**
- Successfully connects to the ESP through Wi-Fi or BLE
- Processes data (ML and depth-map algorithms)
- Processes images using ML for object recognition
- Transforms the depth map into spatial audio
- Sends audio back to the ESP for audio output

**Audio:**
- Have working stereo output on the PCB for use with wired earphones or built-in speakers.
- Have Bluetooth working on the app if a user wants to use wireless audio.
- Potentially add hardware volume control.

**Power:**
- Be able to operate the device using battery power; safe voltage levels and regulation are required.
- 5.5 V max

---

Featured Project
# Musical Hand

Team Members:

- Ramsey Foote (rgfoote2)

- Michelle Zhang (mz32)

- Thomas MacDonald (tcm5)

# Problem

Musical instruments come in all shapes and sizes; however, transporting instruments often involves bulky and heavy cases. Not only can transporting instruments be a hassle, but the initial purchase and maintenance of an instrument can be very expensive. We would like to solve this problem by creating an instrument that is lightweight, compact, and low maintenance.

# Solution

Our project involves a wearable system on the chest and both hands. The left hand will be used to dictate the pitches of three “strings” using relative angles between the palm and fingers. For example, from a flat horizontal hand a small dip in one finger is associated with a low frequency. A greater dip corresponds to a higher frequency pitch. The right hand will modulate the generated sound by adding effects such as vibrato through lateral motion. Finally, the brains of the project will be the central unit, a wearable, chest-mounted subsystem responsible for the audio synthesis and output.

Our solution would provide an instrument that is lightweight and easy to transport. We will be utilizing accelerometers instead of flex sensors to limit wear and tear, which would solve the issue of expensive maintenance typical of more physical synthesis methods.

# Solution Components

The overall solution has three subsystems; a right hand, left hand, and a central unit.

## Subsystem 1 - Left Hand

The left hand subsystem will use four digital accelerometers total: three on the fingers and one on the back of the hand. These sensors will be used to determine the angle between the back of the hand and each of the three fingers (ring, middle, and index) being used for synthesis. Each angle will correspond to an analog signal for pitch with a low frequency corresponding to a completely straight finger and a high frequency corresponding to a completely bent finger. To filter out AC noise, bypass capacitors and possibly resistors will be used when sending the accelerometer signals to the central unit.
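The angle-to-pitch step above can be sketched as follows. The approach assumes the hand is roughly static so each accelerometer reads the gravity vector in its own frame; the 220-880 Hz range and the 90-degree full-bend limit are illustrative assumptions:

```python
import math

def finger_pitch_hz(hand_accel, finger_accel, f_low=220.0, f_high=880.0):
    """Estimate the hand-to-finger bend angle from two accelerometer readings
    and map it to a pitch (a sketch of the left-hand mapping).

    With the hand roughly static, each accelerometer reads the gravity vector
    in its own frame, so the angle between the two readings approximates the
    finger's bend: 0 degrees (straight) -> f_low, 90 degrees -> f_high.
    """
    def unit(v):
        m = math.sqrt(sum(x * x for x in v))
        return [x / m for x in v]
    a, b = unit(hand_accel), unit(finger_accel)
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    angle = math.degrees(math.acos(dot))
    t = max(0.0, min(1.0, angle / 90.0))  # clamp to the playable range
    return f_low + t * (f_high - f_low)
```

Running one instance of this per finger (ring, middle, index) against the shared back-of-hand reading yields the three "string" frequencies.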

## Subsystem 2 - Right Hand

The right-hand subsystem will use one accelerometer to determine the broad movement of the hand. This information will be used to determine how much vibrato there is in the output sound. This system will need the accelerometer, bypass capacitors (0.1 µF), and possibly some resistors if they are needed for the communication scheme used (SPI or I2C).
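One way the motion-to-vibrato mapping could work is sketched below; the smoothing factor, acceleration range, and maximum vibrato depth are illustrative assumptions (the real implementation would run on the microcontroller):

```python
class VibratoTracker:
    """Map right-hand lateral acceleration to vibrato depth (a sketch).

    Exponentially smooths the magnitude of the lateral-axis reading so the
    vibrato depth follows broad hand motion rather than sensor noise.
    """
    def __init__(self, alpha=0.2, max_accel_g=2.0, max_depth_hz=15.0):
        self.alpha = alpha              # smoothing factor, 0..1
        self.max_accel_g = max_accel_g  # acceleration that maps to full vibrato
        self.max_depth_hz = max_depth_hz
        self.level = 0.0

    def update(self, lateral_accel_g):
        """Feed one accelerometer sample; returns current vibrato depth in Hz."""
        mag = min(abs(lateral_accel_g), self.max_accel_g)
        self.level += self.alpha * (mag - self.level)
        return self.max_depth_hz * self.level / self.max_accel_g
```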

## Subsystem 3 - Central Unit

The central subsystem utilizes data from the gloves to determine and generate the correct audio. To do this, two microcontrollers from the STM32F3 series will be used. The left- and right-hand subunits will be connected to the central unit through cabling. One of the microcontrollers will receive information from the sensors on both gloves and use it to calculate the correct frequencies. The other microcontroller uses these frequencies to generate the actual audio. The use of two separate microcontrollers allows the logic to take longer, accounting for slower human response time, while still meeting the need for quick audio updates. At the output, there will be a second-order multiple-feedback filter. This will remove any switching noise while also allowing us to set a gain. It will be built using an LM358 op-amp along with the resistors and capacitors needed to set the filter response and gain. This output will then go to an audio jack that connects to a speaker. In addition, bypass capacitors, pull-up resistors, pull-down resistors, and the necessary programming circuits will be implemented on this board.
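As a worked example for the output stage, the standard MFB low-pass design equations can be used to sanity-check component values. The component naming below follows the common convention (R1 at the input, R2 as the feedback resistor, R3 into the inverting node) and is our assumption, not a value set chosen in the text:

```python
import math

def mfb_lowpass_check(r1, r2, r3, c1, c2):
    """Compute DC gain and cutoff of a second-order multiple-feedback low-pass
    filter from its component values, using the standard MFB design equations.

    Returns (dc_gain, cutoff_hz). dc_gain is negative: MFB stages invert.
    """
    dc_gain = -r2 / r1
    cutoff_hz = 1.0 / (2.0 * math.pi * math.sqrt(r2 * r3 * c1 * c2))
    return dc_gain, cutoff_hz
```

For example, equal 10 kΩ resistors with 1 nF capacitors give unity (inverting) gain and a cutoff near 16 kHz, just above the audio band, which is roughly where a switching-noise filter would want to sit.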

# Criterion For Success

The minimum viable product will consist of two wearable gloves and a central unit that will be connected together via cords. The user will be able to adjust three separate notes that will be played simultaneously using the left hand, and will be able to apply a sound effect using the right hand. The output audio should be able to be heard audibly from a speaker.
