# Navigation Vest Suite For People With Eye Disability


Team Members & Experiences:
- Jiwoong Jung (jiwoong3): Experienced in machine learning and some embedded programming. Has worked on research projects and internships requiring expertise in machine learning, software engineering, web development, and app development, and has some experience with embedded programming for telemetry.
- Haoming Mei (hmei7): Experienced in embedded programming and PCB design. Has worked on projects such as lighting, accelerometers, power converters, a high-FET board, and motor control for an RSO, all involving an understanding of electronics, PCB design, and programming STM32 MCUs.
- Pump Vanichjakvong (nv22): Experienced with cloud computing, machine learning, and embedded programming. Has completed various internships and classes focused on AI, ML, and cloud computing, and has experience with telemetry and GPS systems from an RSO requiring expertise in SPI, UART, GPIO, and more with STM32 MCUs.

# Problem

People with visual impairments often face significant challenges navigating in their daily lives. Currently available solutions range from white canes and guide dogs to AI-powered smart glasses, many of which are difficult to use and can cost as much as $3,000. Navigation is especially hazardous in crowded urban areas, where risks include injuries from collisions with obstacles or other pedestrians and from uneven terrain. According to the U.S. Department of Transportation's 2021 crash report, 75% of pedestrian fatalities occurred at locations that were not intersections. We therefore aim to design a navigation vest suite that helps people with visual impairments deal with these issues.

https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/813458.pdf

# Solution
We have devised a solution to ease daily activities for visually impaired individuals, such as walking between two places or navigating a building with multiple obstacles. Our focus will be outdoor navigation in urban areas, where obstacles, terrain, and pedestrians pose hazards. If time permits, we will also handle traffic and crosswalks.




To achieve this, we will use three main components:
- Lidar sensors to help the wearer with depth perception tasks
- Vibration Motors to aid navigation (turning left/right)
- Magnetometer to provide heading information and enable more accurate GPS guidance

All of the above components will feed into our sensor fusion algorithm.

# Solution Components

## Subsystem 1
### Microcontroller System
We plan to use an STM32 microcontroller as the main processing unit for sensor data from the LiDAR sensors (plus the magnetometer and GPS if time permits), object detection data from the **machine learning system**, and direction data from the navigation app (our own design, running on a phone). We will use this information to generate vibration in the direction the wearer should navigate.
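To make the haptic behavior concrete, here is a minimal sketch of how a heading error from the navigation app could be mapped to a left/right vibration command. It is written in Python for readability (the actual firmware would be C on the STM32), and the deadband and scaling values are assumptions, not tuned parameters.

```python
def haptic_command(target_bearing_deg: float, current_heading_deg: float,
                   deadband_deg: float = 10.0) -> tuple[str, float]:
    """Map heading error to a (motor, intensity) pair.

    Positive error means the wearer should turn right; intensity is a
    0..1 PWM duty cycle for the chosen vibration motor.
    """
    # Wrap the error into [-180, 180) so we always signal the short turn.
    error = (target_bearing_deg - current_heading_deg + 180.0) % 360.0 - 180.0

    if abs(error) < deadband_deg:
        return ("front", 0.3)                    # gentle "keep walking" tap
    motor = "right" if error > 0 else "left"
    intensity = min(abs(error) / 90.0, 1.0)      # stronger buzz for larger error
    return (motor, intensity)
```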

### Power Systems
The whole system will be powered by a battery module containing 5 V battery cells. It will be connected to the **Microcontroller System**, which will in turn supply power to the **Machine Learning System**. We will also implement the necessary power protection, with buck converters, boost converters, and regulators as required by each sensor or component; a rough power-budget estimate follows the list below.
- Battery Module Pack
- Buck Converter (Step-Down)
- Boost Converter (Step-Up)
- Voltage Regulator
- Reverse Polarity Protection
- BMS
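As a rough sanity check on battery sizing against the 3-hour runtime target in the Criterion for Success, the sketch below totals an assumed power budget. Every load figure and the converter efficiency are illustrative assumptions, not measured values.

```python
# Hypothetical power budget for battery sizing; all draws are assumptions.
loads_w = {
    "raspberry_pi_5": 6.0,    # assumed average draw under inference load
    "stm32_board": 0.5,       # MCU plus support circuitry
    "lidar": 0.6,
    "vibration_motors": 1.0,  # duty-cycled average
    "gps_magnetometer": 0.2,
}

target_hours = 3.0
efficiency = 0.85             # assumed overall converter efficiency

total_w = sum(loads_w.values())
required_wh = total_w * target_hours / efficiency

print(f"Total load: {total_w:.1f} W")
print(f"Required capacity: {required_wh:.1f} Wh "
      f"(~{required_wh / 5.0 * 1000:.0f} mAh at 5 V)")
```

Under these assumptions the pack would need roughly 29 Wh (about 5,900 mAh at 5 V), which suggests sizing the battery module with comfortable margin.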

## Subsystem 2
### Navigation Locator Systems
Our navigation system will consist of an app that connects directly to the Google Maps API, paired with our onboard sensors. We plan to use a magnetometer to indicate the direction the user is facing (north, south, east, west, etc.); a sketch of the heading computation follows the component list below. To pinpoint the direction the wearer needs to head, our built-in LiDAR sensors will enable SLAM (Simultaneous Localization and Mapping) to build a map of the environment. With these systems in place, we will be able to assist users in navigation. To deal with terrain hazards, we will use the LiDAR sensors to detect elevation changes the user needs to negotiate.

- LiDAR
- Android app (connected to the Google Maps API)
- Magnetometer
- Vibration Motors
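As an illustration of the magnetometer-based heading computation, the sketch below derives a compass heading from the horizontal field components. The axis convention and declination value are assumptions; a real implementation would also tilt-compensate with accelerometer data and calibrate out hard-iron offsets.

```python
import math

def compass_heading(mx: float, my: float, declination_deg: float = -3.0) -> float:
    """Heading in degrees clockwise from north, assuming the sensor is
    level with x pointing forward and y pointing left."""
    heading = math.degrees(math.atan2(-my, mx))  # 0 deg = magnetic north
    heading += declination_deg                   # assumed local declination
    return heading % 360.0
```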

Extra Features (if time permits):
- Audio output (text-to-speech generated on the Raspberry Pi 5, sent to the microcontroller through an AUX cable)

## Subsystem 3
### Machine Learning Systems

- We plan to deploy an object detection model on a 16 GB Raspberry Pi 5 (which we already have), along with a Pi Camera, to detect objects, signs, and people on the road; the detections will be fed to the microcontroller
- Raspberry Pi 5
- Pi Camera


a) The image/video model is expected to have fewer than 5 billion parameters, built with convolutional layers, and to run on-device on the Raspberry Pi. The processing power of the Raspberry Pi is obviously limited, but we plan to take on that challenge and find ways to improve the model within the hardware constraints.

b) If subtask a) becomes too arduous or time-consuming, we can fall back on API calls or free open-source models to process the image/video in real time when the user enables the feature. The device pairs with the phone via the Raspberry Pi's Wi-Fi chip to enable the API calls. The strongest candidates we have identified are the YOLO family of models, the MMDetection and MMTracking toolkits, and Detectron2, developed by Facebook AI Research, all of which support real-time camera feeds.
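As a minimal sketch of the on-device approach, the loop below runs an off-the-shelf YOLO model on Pi Camera frames and forwards detections to the microcontroller. The model file, confidence threshold, serial port, and message format are assumptions for illustration, not final design choices.

```python
import serial
from picamera2 import Picamera2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                   # small model suited to a Pi
mcu = serial.Serial("/dev/ttyAMA0", 115200)  # assumed UART link to the STM32

cam = Picamera2()
cam.configure(cam.create_preview_configuration(main={"format": "RGB888"}))
cam.start()

while True:
    frame = cam.capture_array()
    results = model(frame, conf=0.5, verbose=False)
    for box in results[0].boxes:
        label = model.names[int(box.cls)]
        # Forward a compact "label,confidence" record to the MCU.
        mcu.write(f"{label},{float(box.conf):.2f}\n".encode())
```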

# Criterion For Success


### Navigational Motor/Haptic Feedback
1) The haptic feedback (left/right vibration) should exactly match the navigation directions received from the app (turn left/right).

2) Being able to detect obstacles, stairs, curbs, and people.

3) Being able to detect intersections ahead, and the point at which to turn, from the LiDAR sensor data.

4) The wearer being able to follow the designed haptic feedback patterns (front tap to walk forward, right tap to turn right, etc.).

### Object Detection
1) Using the Illinois Rules of the Road and the federal Manual on Uniform Traffic Control Devices (MUTCD) guidelines, we will use a total of 10-30 distinct pedestrian road signs to test the object detection capability. We will apply formal ML testing methods such as geometric transformations, photometric transformations, and background clutter. Accuracy will be measured as (number of correctly classified samples) / (total number of samples), as sketched below.
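A minimal sketch of that evaluation: it measures classification accuracy on the sign set under geometric and photometric transformations. The transform strengths and the `model_predict`/dataset interfaces are placeholders for our real pipeline.

```python
import torchvision.transforms as T

transforms = {
    "identity": T.Compose([]),
    "geometric": T.Compose([T.RandomRotation(15),
                            T.RandomResizedCrop(224, scale=(0.8, 1.0))]),
    "photometric": T.Compose([T.ColorJitter(brightness=0.4, contrast=0.4)]),
}

def accuracy(model_predict, dataset, transform):
    """(correctly classified samples) / (total samples) under a transform."""
    correct = sum(model_predict(transform(img)) == label for img, label in dataset)
    return correct / len(dataset)

# Usage: report accuracy per test condition.
# for name, tf in transforms.items():
#     print(name, accuracy(model_predict, sign_dataset, tf))
```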

2) The ML model should be able to detect potential environmental hazards, including but not limited to obstacles, stairs, curbs, and people. We plan to gather multiple hazard scenarios via online research, surveys, and in-person interviews. Based on the collected research, we will build solid test cases to ensure that our device can reliably identify potential hazards. More importantly, we plan to define strict timing and accuracy metrics.


3) The ML model should be able to detect additional road structures such as curbs, crosswalks, and stairs to provide comprehensive environmental awareness. We will use different crosswalks located on the north quad and apply the accuracy measurement techniques described in 1).



### Power and Battery Life

1) The device should support at least 3 hours of battery life.

2) The device should comply with the IEC 62368-1 safety standard, which defines energy-source classes such as ES1, ES2, and ES3 covering electrical shock and fire hazards.



# Antweight Battlebot

Team Members:

- Keegan Teal (kteal2)

- Avik Vaish (avikv2)

- Jeevan Navudu (jnavudu2)

# Problem

In order to compete in Professor Gruev’s robot competition, there are many constraints that need to be met, including:

- Maximum weight (2lbs)

- Allowed materials (3D-printed thermoplastics)

- Locomotion system and fighting tool

- Wireless control via Bluetooth or Wifi

The main goal of this competition is to design a Battlebot that is capable of disrupting the functionality of the other Battlebots with our fighting tool while maintaining our own functionality.

# Solution

For the project, we plan to build a battlebot with a custom electronic speed controller (ESC) that can independently control three brushless motors: two for the drive system and one for the fighting tool. The ESC will be controlled by an STM32 microcontroller, to which we will add a Bluetooth module so we can connect to the robot and specify how much power to send to each motor. To communicate with the robot, we will use a Bluetooth-capable laptop.
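As an illustration of the laptop-side control link, here is a minimal sketch that sends throttle, steering, and weapon commands over a Bluetooth serial port. The port name, baud rate, and ASCII frame format are assumptions, not the final protocol.

```python
import serial

link = serial.Serial("/dev/rfcomm0", 115200, timeout=0.1)  # assumed RFCOMM port

def send_command(throttle: float, steering: float, weapon: float) -> None:
    """Send one control frame; all values are in [-1.0, 1.0]."""
    frame = f"T{throttle:+.2f}S{steering:+.2f}W{weapon:+.2f}\n"
    link.write(frame.encode("ascii"))

send_command(0.50, -0.25, 1.00)  # half throttle, slight left, weapon at full
```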

# Solution Components

## Vehicle Controller

The main subsystem of the robot will be a combined vehicle control board and ESC. This subsystem will contain an STM32 Microcontroller that will serve as the brain for the whole robot. With this MCU, we’ll be able to flash our whole software package that will be able to control the speed and direction of the robot, the robot’s weapon, and the Bluetooth communication.

## Power Module

This subsystem includes the battery, the voltage regulators/converters needed to power the electronics, and the necessary battery monitoring circuitry. Specifically, for the battery, we will use a 14.8V 4S2P LiPo pack to power all the components. There will also be a voltage short detection circuit for the battery that will shut down the robot in case of a short to ensure safe practices. This subsystem also contains a 5V linear regulator and 3.3V linear regulator to power the low voltage electronics.
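For a sense of scale under these choices (the 0.3 A figure is an assumption for illustration): a linear regulator dropping the 14.8 V pack to 5 V dissipates P = (V_in - V_out) * I, so a 0.3 A load on the 5 V rail turns (14.8 - 5) * 0.3, roughly 2.9 W, into heat. This is acceptable for small loads, but it suggests keeping the low-voltage electronics lightweight or considering a small buck stage if the 5 V draw grows.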

## Drivetrain/Powertrain

This subsystem includes the motors and H-bridges needed to control both the wheels and weapon of the robot. The H-bridges will be made with regular N-MOSs that will be controlled by a PWM signal sent from the STM32 MCU. This H-bridge setup will be able to control the voltage and polarity sent to the motors, which will be able to control the speed of the wheels or weapon. This subsystem will also include the mechanical wheels of the robot and actual hardware of the weapon, which will be a spinning object. Since all the wheels and the weapon have the same mechanical motion, they can all use the same hardware and software electronically, with minor adjustments in motor selection and the actual mechanical hardware/peripheral.
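To make the drive control concrete, here is a minimal sketch of tank-style mixing from throttle/steering commands to per-side motor outputs. It is written in Python for readability; the real logic would live in the STM32 firmware.

```python
def tank_mix(throttle: float, steering: float) -> tuple[float, float]:
    """Mix throttle and steering (each in [-1, 1]) into (left, right) outputs.

    The sign of each output selects H-bridge polarity (direction); the
    magnitude becomes the PWM duty cycle for that side's motor.
    """
    left = throttle + steering
    right = throttle - steering
    scale = max(1.0, abs(left), abs(right))  # keep outputs within [-1, 1]
    return left / scale, right / scale
```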

## Bluetooth Module

One big requirement for this project is the ability for the robot to be controlled wirelessly via laptop. The STM32 MCU has Bluetooth capabilities, and with additional peripheral hardware, the robot will be able to communicate over Bluetooth with a laptop. The goal is for the laptop to control the speed, direction, and weapon of the robot wirelessly and also display live telemetry.

## Mechanical Design

The last part of our project would be the mechanical design of the robot chassis and weapon. For the chassis and weapon material, we decided to go with PLA+ as it offers a blend of being strong and robust but not being too brittle. The drive system will be a 2-wheeled tank style drive with one motor controlling each side of the robot. For the weapon, we are looking to utilize a fully 3D-printed drum that will have a 100% infill to maximize the rotational inertia which can lead to bigger impacts.
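For intuition about why a solid, high-infill drum helps: a uniform drum of mass m and radius r has moment of inertia I = (1/2) m r^2 and stores kinetic energy E = (1/2) I w^2. With assumed illustrative numbers, a 0.3 kg drum of 0.03 m radius spinning at 10,000 RPM (w of about 1047 rad/s) gives I of about 1.35e-4 kg*m^2 and E of about 74 J, so small increases in radius or speed pay off quadratically in impact energy.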

## Criterion for Success

We would consider our project a success if we can send throttle and steering commands to the robot from our computer, and if those commands are processed on the robot's microcontroller so that the motors receive the power needed to move and behave the way we want during a match.

## Alternatives

The most commonly used electronics in current antweight battlebots consist mostly of RC drone parts. We plan to create a very similar ESC to those on the market but it will have an integrated Bluetooth wireless capability as well as telemetry monitoring. We also want to focus on minimizing packaging size to lower weight and increase flexibility as much as possible.
