# Navigation Vest Suite For People With Eye Disability


Team Members & Experiences:
- Jiwoong Jung (jiwoong3): Experienced in machine learning and some embedded programming. Has worked on many research projects and internships requiring expertise in machine learning, software engineering, web development, and app development, and has some experience with embedded programming for telemetry.
- Haoming Mei (hmei7): Experienced in embedded programming and PCB design. Has worked on projects such as lighting, accelerometers, power converters, a high-FET board, and motor control for an RSO, all of which involved an understanding of electronics, PCB design, and programming with STM32 MCUs.
- Pump Vanichjakvong (nv22): Experienced with cloud, machine learning, and embedded programming. Has completed various internships and classes focused on AI, ML, and cloud. Gained experience with telemetry and GPS systems in an RSO, which required expertise in SPI, UART, GPIO, etc. on STM32 MCUs.

# Problem

People with eye disabilities often face significant challenges navigating in their daily lives. Currently available solutions range from white canes and guide dogs to AI-powered smart glasses, many of which are difficult to use and can cost as much as $3,000. In addition, visually impaired pedestrians face hazards, especially in crowded urban areas, including injuries from collisions with obstacles or other people and from uneven terrain. According to the U.S. Department of Transportation's 2021 crash report, 75% of pedestrian fatalities occurred at locations that were not intersections. We therefore aim to design a navigation vest suite that helps people with eye disabilities deal with these issues.

https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/813458.pdf

# Solution
We have devised a solution to ease daily activities for visually impaired individuals, such as walking between two places or navigating a building with multiple obstacles. Our focus will be outdoor navigation in urban areas, where obstacles, terrain, and pedestrians pose hazards. If time permits, we will also handle traffic and crosswalks.




In order to achieve this, we will be utilizing 3 main components:
- Lidar sensors to help the wearer with depth perception tasks
- Vibration Motors to aid navigation (turning left/right)
- Magnetometer to enable more accurate GPS coordination

All of the above components will feed into the sensor fusion algorithm.
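As a rough illustration of what this fusion step could look like, the sketch below combines a desired heading from the navigation app, the wearer's current heading from the magnetometer, and the nearest obstacle distance from LiDAR into left/right vibration intensities. Every threshold and scaling factor here is a placeholder assumption, not a finalized design.

```python
def fuse(desired_heading_deg, current_heading_deg, obstacle_dist_m):
    """Return (left_intensity, right_intensity), each in 0.0-1.0."""
    # Signed heading error in (-180, 180]: positive means "turn right".
    error = (desired_heading_deg - current_heading_deg + 180.0) % 360.0 - 180.0

    # Map heading error to a turn cue; full intensity at +/-90 degrees.
    turn = max(-1.0, min(1.0, error / 90.0))
    left = max(0.0, -turn)
    right = max(0.0, turn)

    # Obstacle override: drive both motors when something is closer than 1 m,
    # with intensity growing as the obstacle gets closer.
    if obstacle_dist_m < 1.0:
        urgency = 1.0 - obstacle_dist_m
        left = max(left, urgency)
        right = max(right, urgency)
    return left, right
```

In a real build this logic would run on the STM32 and drive the motors via PWM; the point of the sketch is only the priority ordering (obstacle proximity overrides turn cues).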

# Solution Components

## Subsystem 1
### Microcontroller System
We plan to use an STM32 microcontroller as the main processing unit. It will receive sensor data from the LiDAR sensors (and from the magnetometer and GPS if time permits), object detection data from the **machine learning system**, and direction data from the navigation app (our own phone app). We will use this information to generate vibration in the direction the wearer should navigate.

### Power Systems
The whole system will be battery powered by a battery module built from battery cells with a 5 V output. It will be connected to the **Microcontroller System**, which will also distribute power to the **Machine Learning System**. We will implement the necessary power protection, buck converters, boost converters, and regulators as required by each sensor and component.
- Battery Module Pack
- Buck Converter (Step-Down)
- Boost Converter (Step-Up)
- Voltage Regulator
- Reverse Polarity Protection
- BMS

## Subsystem 2
### Navigation Locator Systems
Our navigation system will consist of an app that connects directly to the Google Maps API, paired with our onboard sensors. We plan to use a magnetometer to indicate the direction the user is facing (north, south, east, west, etc.). To pinpoint the direction the wearer needs to head, our LiDAR sensors will enable us to run SLAM (Simultaneous Localization and Mapping) and build a map of the environment. With these systems in place, we will be able to assist users in navigation. To deal with terrain hazards, we will use the LiDAR sensors to detect elevation changes the user needs to negotiate.

- LiDAR
- Android App (Connected to Google API)
- Magnetometer
- Vibration Motors
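A minimal sketch of the magnetometer's role, assuming the sensor is held level (no tilt compensation) and already calibrated for hard-iron offsets: the horizontal X/Y field components give a compass heading via `atan2`, which can then be bucketed into a coarse cardinal direction. Axis conventions vary between magnetometer parts, so the mapping below is illustrative.

```python
import math

def heading_deg(mx, my):
    """Compass heading in [0, 360) from horizontal magnetometer components."""
    return (math.degrees(math.atan2(my, mx)) + 360.0) % 360.0

def cardinal(deg):
    """Bucket a heading into one of eight cardinal/intercardinal directions."""
    names = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]
    return names[int((deg + 22.5) % 360 // 45)]
```

On the actual vest, this heading would be compared against the route bearing from the Google Maps directions to decide which vibration motor to fire.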

Extra Features (if time permits):
- Audio output (text-to-speech generated on the Raspberry Pi 5 and sent to the microcontroller through an AUX cable)
## Subsystem 3
### Machine Learning Systems

We plan to deploy an object detection model on a 16 GB Raspberry Pi 5 (already acquired) along with a Pi Camera to detect objects, signs, and people on the road; the detections will be fed to the microcontroller.
- Raspberry Pi 5
- Pi Camera


a) The vision model is expected to have fewer than 5 billion parameters, use convolutional layers, and run on-device on the Raspberry Pi. The processing power of the Raspberry Pi is obviously limited, but we plan to accept that challenge and find ways to improve the model within those hardware constraints.

b) If subtask a) proves too arduous or time-consuming, we can instead use API calls or free open-source models to process the image/video in real time when the user enables the feature. The device pairs with the phone via the Raspberry Pi's Wi-Fi chip to make the API calls. Some of the best candidates we have identified are the YOLO family of models, the MMDetection and MMTracking toolkits, and Detectron2, developed by Facebook AI Research, which supports real-time camera feeds.
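Whichever model runs on the Pi, its detections must reach the STM32. One possible wire format for that link is sketched below: a fixed 8-byte UART frame with a sync byte and checksum. The sync value (0xA5), field layout, and class-ID scheme are illustrative assumptions, not a finalized protocol.

```python
import struct

# Layout: sync (1B) | class_id (1B) | bearing_deg (int16, LE) |
#         distance_cm (uint16, LE) | checksum (uint16, LE, sum of prior bytes)

def encode_detection(class_id, bearing_deg, distance_cm):
    """Pack one detection into an 8-byte frame for the UART link."""
    body = struct.pack("<BBhH", 0xA5, class_id, bearing_deg, distance_cm)
    checksum = sum(body) & 0xFFFF
    return body + struct.pack("<H", checksum)

def decode_detection(frame):
    """Unpack and validate a frame; raises ValueError on corruption."""
    sync, class_id, bearing, dist = struct.unpack("<BBhH", frame[:6])
    (checksum,) = struct.unpack("<H", frame[6:8])
    if sync != 0xA5 or checksum != (sum(frame[:6]) & 0xFFFF):
        raise ValueError("corrupt frame")
    return class_id, bearing, dist
```

A fixed-size binary frame keeps parsing trivial on the STM32 side, where a full JSON parser would be unnecessary overhead.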

# Criterion For Success


### Navigational Motor/Haptic Feedback
1) The haptic feedback (left/right vibration) should exactly match the navigation directions received from the app (turn left/right).

2) The system should be able to detect obstacles, stairs, curbs, and people.

3) The system should be able to detect intersections ahead, and the point of turn, from the LiDAR sensor data.

4) The wearer should be able to follow the designed haptic feedback patterns (tap front to walk forward, tap right to go right, etc.).

### Object Detection
1) Using the Illinois Rules of the Road and the Federal Manual on Uniform Traffic Control Devices guidelines, we will use a total of 10-30 distinct pedestrian road signs to test the object detection capability. We will apply formal ML testing methods such as geometric transformations, photometric transformations, and background clutter. Accuracy will be measured with the general equation (number of correctly classified samples) / (total number of samples).
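The accuracy equation above, written out as a helper for the test harness; the sign labels used below are arbitrary examples, not our final class list.

```python
def accuracy(predictions, ground_truth):
    """(number of correctly classified samples) / (total number of samples)."""
    if len(predictions) != len(ground_truth):
        raise ValueError("prediction and ground-truth lists must align")
    correct = sum(p == g for p, g in zip(predictions, ground_truth))
    return correct / len(ground_truth)
```

The same function can score each perturbed test set (rotated, re-lit, cluttered) separately, so a drop in accuracy can be traced to a specific transformation.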

2) The ML model should be able to detect potential environmental hazards, including but not limited to obstacles, stairs, curbs, and people. We plan to gather multiple hazard scenarios via online research, surveys, and in-person interviews. Based on the collected research, we will build solid test cases to ensure that our device can reliably identify potential hazards. More importantly, we plan to design strict timing and accuracy metrics.


3) The ML model should be able to detect additional road structures such as curbs, crosswalks, and stairs to provide comprehensive environmental awareness. We will use different crosswalks located on the north quad and apply the accuracy measurement techniques mentioned in 1).



### Power and Battery Life

1) The device should support at least 3 hours of battery life.
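A quick sanity check that the 3-hour target is reachable, as a back-of-the-envelope calculation. The current draws below are placeholder estimates for each subsystem, not measured values, and the 10,000 mAh pack size is an assumption.

```python
def runtime_hours(capacity_mah, avg_current_ma):
    """Idealized runtime estimate, ignoring converter losses and discharge limits."""
    return capacity_mah / avg_current_ma

# Placeholder average-current estimates per subsystem (mA).
loads_ma = {
    "raspberry_pi_5": 1500,
    "stm32_board": 100,
    "lidar": 150,
    "vibration_motors": 100,
}
total_ma = sum(loads_ma.values())       # 1850 mA total draw
hours = runtime_hours(10000, total_ma)  # comfortably above the 3 h target
```

The real budget will need to account for converter efficiency and peak (rather than average) draw, but this shows the target is plausible with a commodity pack.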

2) The device should obey the IEC 62368-1 safety standard, which defines energy-source classes such as ES1, ES2, and ES3 and addresses electrical and fire hazards.


