| # | Title | Team Members | TA | Documents | Sponsor |
| --- | --- | --- | --- | --- | --- |
| 69 | Bluetooth Speaker with Motion-based Automated Volume Adjustment | Chirag Kikkeri, Dhruv Vishwanath, Raj Pulugurtha | Abhisheka Mathur Sekar | design_document2.pdf, final_paper1.pdf, photo1.HEIC, photo2.HEIC, presentation1.pptx, proposal2.pdf, video1.pdf | |

# Bluetooth Speaker with Motion-based Automated Volume Adjustment

TEAM MEMBERS:
- Chirag Kikkeri (kikkeri2)
- Dhruv Vishwanath (dhruvv2)
- Raj Pulugurtha (rajkp2)

# PROBLEM
When driving and listening to music, we often want to change the volume based on the speed of the vehicle. For example, drivers raise the volume at higher speeds to hear the music better, and lower it significantly when stopped at a stop light. This is a clear nuisance, but it can also be a major safety hazard: it pulls the driver's concentration away from driving and onto adjusting the volume, especially for drivers who do not use the car's sound system. Beyond driving, this is also a problem for people who bike or skate with a speaker.

# Solution Overview
Our solution is a portable Bluetooth speaker that automatically increases and decreases its volume based on how fast it is moving, and that the user can take in and out of the car. Users will also be able to set minimum and maximum volumes to personalize their listening experience, and a row of LEDs will show the current volume. The speaker will have two modes: one for when it is moving and one for when it is stationary. In stationary mode, the user can raise and lower the volume with buttons; in moving mode, manual volume changes are disabled so that the user stays focused on driving. A rough sketch of this mode logic is given below.
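
As a rough illustration of the two-mode behavior, the plain C++ sketch below clamps the volume to the user-set limits and ignores the buttons while the speaker is moving. The function names, the 0-100 volume scale, and the moving-speed threshold are our own placeholders, not final design values.

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical user-configurable limits (0-100 volume scale assumed).
struct VolumeSettings {
    uint8_t minVolume = 20;
    uint8_t maxVolume = 90;
};

enum class Mode { Stationary, Moving };

// Speed (m/s) above which we consider the speaker "moving"; value is a placeholder.
constexpr float kMovingThreshold = 1.0f;

Mode selectMode(float speedMps) {
    return (speedMps > kMovingThreshold) ? Mode::Moving : Mode::Stationary;
}

// Returns the volume that should actually be applied this cycle.
uint8_t updateVolume(Mode mode, uint8_t currentVolume, int buttonDelta,
                     uint8_t speedBasedVolume, const VolumeSettings& s) {
    int target;
    if (mode == Mode::Moving) {
        // Buttons are ignored; volume follows the speed-based formula.
        target = speedBasedVolume;
    } else {
        // Stationary: only the +/- buttons change the volume.
        target = currentVolume + buttonDelta;
    }
    // Never cross the user-set minimum and maximum.
    return static_cast<uint8_t>(std::clamp(target, (int)s.minVolume, (int)s.maxVolume));
}
```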

# Solution Components:
## Subsystem #1: Power
- Description: This subsystem is key to making the rest of the project operable. To power the speaker and drive the volume adjustments in "moving mode", we will need a battery.
- Components: Lithium-ion battery, USB-based charging port

## Subsystem #2: Bluetooth Connection
- Description: Both the Bluetooth module and the Bluetooth amplifier are essential for wireless communication between the speaker and a media device. Together, these components make the speaker easily portable.
- Components: HC-05 Bluetooth Module, TDA7492P amplifier board

## Subsystem #3: Sensor System
- Description: Arguably the most essential subsystem for our project, the sensor tracks how fast the speaker is moving so that the volume can be adjusted automatically according to a formula we create (the formula maps changes in speed to consistent, corresponding changes in volume). We plan to use an accelerometer, which only reports the speaker's acceleration, so we must integrate its readings over time to estimate speed before the volume can be updated. The sensor will be connected to the PCB along with the Bluetooth amplifier, giving the microcontroller the data it needs to change the volume itself; a sketch of this conversion is given after the component list.
- Components: Accelerometer sensor (https://www.amazon.com/HiLetgo-MPU-6050-Accelerometer-Gyroscope-Converter/dp/B00LP25V1A/ref=sr_1_3?keywords=accelerometer&qid=1675291981&sr=8-3&th=1)
- Microcontroller: STM32F401RE
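
To make the acceleration-to-volume conversion concrete, here is a minimal plain C++ sketch of how acceleration samples could be integrated into a speed estimate and then mapped linearly onto the volume range. The sampling period, drift-compensation constant, and the speed at which volume saturates are placeholder assumptions, not the final formula; for the lab demo the saturation speed would simply be made much smaller (see the note under Criterion for Success).

```cpp
#include <algorithm>

// Assumed sampling period of the accelerometer readings (100 Hz).
constexpr float kDtSeconds = 0.01f;

// Integrate forward acceleration (m/s^2) into an estimated speed (m/s).
// The small decay term keeps integration drift from growing without bound.
float updateSpeed(float speedMps, float forwardAccelMps2) {
    constexpr float kDecay = 0.999f;           // placeholder drift compensation
    speedMps = speedMps * kDecay + forwardAccelMps2 * kDtSeconds;
    return std::max(speedMps, 0.0f);
}

// Map the speed estimate linearly onto [minVol, maxVol].
// kFullVolumeSpeed is the speed at which the volume saturates at maxVol.
float speedToVolume(float speedMps, float minVol, float maxVol) {
    constexpr float kFullVolumeSpeed = 30.0f;  // ~108 km/h, placeholder
    float t = std::clamp(speedMps / kFullVolumeSpeed, 0.0f, 1.0f);
    return minVol + t * (maxVol - minVol);
}
```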

## Subsystem #4: Speaker System
- Description: The physical build of the speaker is important to our project, since the product's appearance directly affects its perceived value and its enclosure determines its durability. Building the speaker requires the Bluetooth components above plus the physical parts that produce sound. Given the components below and wood for the enclosure, we can ask the machine shop to assemble the physical speaker. Once the case is complete, we can mount the remaining subsystems in the empty part of the case and make the necessary connections.
- Components: Woofer (https://www.parts-express.com/GRS-5PF-8-5-1-4-Paper-Cone-Foam-Surround-Woofer-292-405?quantity=1), speaker driver (https://www.parts-express.com/GRS-1TD1-8-1-Dome-Tweeter-8-Ohm-292-462?quantity=1), passive radiator (https://www.parts-express.com/Samsung-U083L03SSK1-3-Poly-Cone-Passive-Radiator-21-23-34-289-2362?quantity=1), audio crossovers (https://www.parts-express.com/Crossover-2-Way-8-Ohm-5-000-Hz-150W-260-198?quantity=1)

## Subsystem #5: User Interface
- Description: The last subsystem is what the user sees on the outside surface of the speaker. The main elements are buttons (on/off, mode switch, min/max volume settings, Bluetooth pairing) and LEDs that show the user the volume level the speaker is currently at; a sketch of the LED mapping is given after the component list.
- Components: Omron B3F switch, SparkFun Qwiic LED Stick (APA102C, COM-18354)
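
As a small illustration of the LED indicator, the sketch below converts the current volume into the number of LEDs to light. The ten-LED count matches the Qwiic LED Stick; the function name and the 0-100 volume scale are our own assumptions.

```cpp
#include <cstdint>

constexpr uint8_t kNumLeds = 10;  // Qwiic LED Stick has 10 APA102C LEDs

// Map a 0-100 volume value to how many LEDs should be lit.
uint8_t volumeToLedCount(uint8_t volumePercent) {
    if (volumePercent > 100) volumePercent = 100;
    // Round to the nearest LED so full volume lights the whole bar.
    return static_cast<uint8_t>((volumePercent * kNumLeds + 50) / 100);
}
```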

# Criterion for Success
- The system is able to play music using Bluetooth connection
- The system is able to precisely adjust volume based on the readings of the accelerometer (same speed should result in same volume)
- The user is able to set minimum and maximum volumes, and those limits are never exceeded
- The user is able to manually change volume when the system is in stationary mode

(For demoing in the lab, we will change our volume formula so that a small change in speed results in a large difference in volume.)

---

# Smart Glasses for the Blind (Featured Project)

# Team Members

- Ahmed Nahas (anahas2)

- Siraj Khogeer (khogeer2)

- Abdulrahman Maaieh (amaaieh2)

# Problem:

The underlying motive behind this project is the heart-wrenching fact that, with all the developments in science and technology, the visually impaired have been left with nothing but a simple white cane: a stick among today’s scientific novelties. Our overarching goal is to create a wearable assistive device for the visually impaired, giving them an alternative way of “seeing” through sound. The idea revolves around glasses/headset that allow the user to walk independently by detecting obstacles and notifying the user, creating a sense of vision through spatial awareness.

# Solution:

Our objective is to create smart glasses/headset that allow the visually impaired to ‘see’ through sound. The general idea is to map the user’s surroundings through depth maps and a normal camera, then map both to audio that allows the user to perceive their surroundings.

We’ll use two low-power I2C ToF imagers to build a depth map of the user’s surroundings, as well as an SPI camera for ML features such as object recognition. These cameras/imagers will be connected to our ESP32-S3 WROOM, which downsamples some of the input and offloads it to our phone app/webpage for heavier processing (for object recognition, as well as for the depth-map-to-sound algorithm, which will be quite complex and builds on research papers we’ve found). A simplified sketch of the depth-map-to-audio mapping is given below.
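
As a very rough sketch of the depth-map-to-audio idea (not the actual algorithm from the papers, which is more sophisticated), the C++ snippet below splits an 8x8 distance grid into left and right halves and turns the nearest obstacle on each side into a louder cue for the corresponding ear. All constants and names are placeholders.

```cpp
#include <algorithm>
#include <cstdint>

// Placeholder maximum range of interest for obstacle cues.
constexpr float kMaxRangeMm = 4000.0f;

struct StereoCue {
    float leftGain;   // 0.0 (silent) .. 1.0 (loud) for the left ear
    float rightGain;  // 0.0 .. 1.0 for the right ear
};

// depthMm: 8x8 grid of distances in millimetres (row-major), as produced by
// the ToF imagers. Nearer obstacles produce a louder cue on that side.
StereoCue depthMapToStereo(const uint16_t depthMm[64]) {
    uint16_t nearestLeft = UINT16_MAX, nearestRight = UINT16_MAX;
    for (int row = 0; row < 8; ++row) {
        for (int col = 0; col < 8; ++col) {
            uint16_t d = depthMm[row * 8 + col];
            if (d == 0) continue;  // 0 treated as "no return"
            if (col < 4) nearestLeft = std::min(nearestLeft, d);
            else         nearestRight = std::min(nearestRight, d);
        }
    }
    auto gain = [](uint16_t d) {
        if (d == UINT16_MAX) return 0.0f;  // nothing detected on this side
        return 1.0f - std::min((float)d, kMaxRangeMm) / kMaxRangeMm;  // nearer -> louder
    };
    return {gain(nearestLeft), gain(nearestRight)};
}
```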

---

# Subsystems:

## Subsystem 1: Microcontroller Unit

We will use an ESP32 as the MCU, mainly for its Wi-Fi capabilities as well as sufficient processing power to connect to and manage our sensors and offload data to the app.

- ESP32-S3 WROOM : https://www.digikey.com/en/products/detail/espressif-systems/ESP32-S3-WROOM-1-N8/15200089

## Subsystem 2: ToF Depth Imagers/Cameras Subsystem

This subsystem is the main sensor subsystem for getting the depth map data. This data will be transformed into audio signals to allow a visually impaired person to perceive obstacles around them.

There will be two ToF sensors, connected to the ESP32 MCU through two I2C connections, to provide a wide FOV. Each sensor provides an 8x8 pixel array at a 63 degree FOV; a minimal read-loop sketch is given after the component link below.

- x2 SparkFun Qwiic Mini ToF Imager - VL53L5CX: https://www.sparkfun.com/products/19013
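
A minimal Arduino-style read loop for one imager might look like the sketch below. It assumes SparkFun's VL53L5CX Arduino library (`SparkFun_VL53L5CX_Library.h`) and its `getRangingData` call, so the exact function names should be checked against the library version we end up using; the second imager would be handled the same way on its own I2C address.

```cpp
#include <Wire.h>
#include <SparkFun_VL53L5CX_Library.h>  // assumed library header name

SparkFun_VL53L5CX imager;
VL53L5CX_ResultsData results;  // holds the 8x8 distance grid

void setup() {
  Serial.begin(115200);
  Wire.begin();
  if (!imager.begin()) {
    Serial.println("VL53L5CX not found");
    while (true) {}
  }
  imager.setResolution(8 * 8);  // full 8x8 mode
  imager.startRanging();
}

void loop() {
  if (imager.isDataReady() && imager.getRangingData(&results)) {
    // results.distance_mm[i] holds the distance for zone i (0..63);
    // this is the grid we would downsample and stream to the app.
    Serial.println(results.distance_mm[0]);
  }
  delay(50);  // ~20 Hz polling, placeholder
}
```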

## Subsystem 3: SPI Camera Subsystem

This subsystem will allow us to capture a colored image of the user’s surroundings. A captured image will allow us to implement egocentric computer vision, processed on the app. We will implement one ML feature as a baseline for this project (one of: scene description, object recognition, etc.). This will only be given as feedback to the user once prompted by a button on the PCB: when the user clicks the button on the glasses/headset, they will hear a description of their surroundings. Hence we don’t need real-time object recognition (as low as 1 fps is sufficient), as opposed to the depth maps, which need a higher frame rate and lower latency. This is exciting because such an input allows for other ML features/integrations that can be scaled drastically beyond this course.

- x1 Mega 3MP SPI Camera Module: https://www.arducam.com/product/presale-mega-3mp-color-rolling-shutter-camera-module-with-solid-camera-case-for-any-microcontroller/

## Subsystem 4: Stereo Audio Circuit

This subsystem is in charge of converting the digital audio from the ESP32 and app into stereo output to be used with earphones or speakers. This includes digital-to-analog conversion and voltage clamping/regulation. We may also add adjustable volume through a potentiometer.

- DAC Circuit

- 2x Op-Amp for Stereo Output, TLC27L1ACP: https://www.ti.com/product/TLC27L1A/part-details/TLC27L1ACP

- SJ1-3554NG (AUX)

- Connection to speakers/earphones https://www.digikey.com/en/products/detail/cui-devices/SJ1-3554NG/738709

- Bone conduction Transducer (optional, to be tested)

- Will allow for a bone conduction audio output, easily integrated around the ear in place of earphones, to be tested for effectiveness. Replaced with earphones otherwise. https://www.adafruit.com/product/1674

## Subsystem 5: App Subsystem

- React Native App/webpage, connects directly to ESP

- Does the heavy processing for the spatial awareness algorithm as well as object recognition or scene description algorithms (using libraries such as YOLO, OpenCV, TFLite)

- Sends audio output back to ESP to be outputted to stereo audio circuit

## Subsystem 6: Battery and Power Management

This subsystem is in charge of power delivery, voltage regulation, and battery management for the rest of the circuit and devices. It takes in the unregulated battery voltage and steps it up or down according to each component's needs.

- Main Power Supply

- Lithium Ion Battery Pack

- Voltage Regulators

- Linear, Buck, Boost regulators for the MCU, Sensors, and DAC

- Enclosure and Routing

- Plastic enclosure for the battery pack

---

# Criterion for Success

**Obstacle Detection:**

- Be able to identify the difference between an obstacle that is 1 meter away vs an obstacle that is 3 meters away.

- Be able to differentiate between obstacles on the right vs the left side of the user

- Be able to perceive an object moving from left to right or right to left in front of the user

**MCU:**

- Offload data from sensor subsystems onto the application through a Wi-Fi connection.

- Control and receive data from sensors (ToF imagers and SPI camera) using SPI and I2C

- Receive audio from application and pass onto DAC for stereo out.

**App/Webpage:**

- Successfully connects to ESP through Wi-Fi or BLE

- Processes data (ML and depth map algorithms)

- Process image using ML for object recognition

- Transforms depth map into spatial audio

- Sends audio back to ESP for audio output

**Audio:**

- Have working stereo output on the PCB for use in wired earphones or built-in speakers

- Have bluetooth working on the app if a user wants to use wireless audio

- Potentially add hardware volume control

**Power:**

- Be able to operate the device using battery power. Safe voltage levels and regulation are needed.

- 5.5V Max
