# Project 3: Wearable Mobility-Assistance Device for the Blind and Visually Impaired (BVI)

Team Members: Darui Xu, Haoyu Zhu, Jiashen Ren, Jinnan Zhang

TA: Bo Zhao

Documents: design_document1.pdf
# Problem

Blind and visually impaired (BVI) individuals rely heavily on hearing to navigate safely in daily environments. While walking, they must continuously monitor critical environmental sounds such as approaching vehicles, crosswalk signals, bicycles, and nearby pedestrians. However, many existing assistive navigation devices communicate obstacle information mainly through audio alerts or voice prompts. This creates a major usability and safety issue because the device competes with the same auditory channel that the user depends on for situational awareness.

In addition, audio-based systems often require earphones or louder playback in noisy environments, which can further reduce a user’s ability to perceive surrounding hazards. As a result, these systems may unintentionally compromise safety instead of improving it.

The problem addressed by this project is therefore: **how to provide intuitive and timely obstacle-location information to BVI users without occupying their auditory channel**.

This problem is important because an effective mobility-assistance device must do more than detect obstacles. It must communicate actionable information in a way that is fast, intuitive, wearable, and compatible with the user’s natural navigation behavior. Our project focuses on preserving hearing for environmental awareness while shifting obstacle communication to the tactile channel.

# Solution Overview

This project proposes a **wearable haptic navigation-assistance device** for blind and visually impaired users. The system will detect nearby obstacles in front of the user using an AI vision-based sensing approach and communicate their relative direction and distance through **vibration feedback** rather than sound or speech.

The proposed system will use a camera and onboard processing hardware to capture visual information from the environment. AI-based vision algorithms will analyze the scene in real time to identify nearby obstacles and estimate their relative position with respect to the user. Based on this information, the system will activate vibration motors to convey obstacle direction and distance. For example, vibration on the left side may indicate an obstacle on the left, while stronger or faster vibration may indicate a closer obstacle.

The key innovation of this design is a **non-auditory feedback mapping** that allows users to receive obstacle information while keeping their hearing fully available for environmental sounds. Compared with conventional audio-based systems, this approach is intended to improve safety, reduce sensory conflict, and provide a more intuitive navigation aid in realistic walking scenarios.

To keep the project feasible within the course scope, the prototype will focus on short-range obstacle awareness and vibration-based haptic communication rather than full autonomous navigation or large-scale scene understanding.

# Components

The system will be organized into the following major subsystems:

## AI Vision Sensing Subsystem
The sensing subsystem detects nearby obstacles and estimates their relative position using visual data. Possible components include:
- Camera module
- Embedded AI processing unit or microprocessor
- Computer vision / object detection algorithm
- Distance or relative-position estimation logic

This subsystem is responsible for acquiring environmental information and identifying obstacles in real time.
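As an illustration of how this subsystem might turn a detection into position information, the sketch below converts a detector's bounding box into a coarse direction and a proximity proxy. The `Detection` class, the thirds-based direction split, and the use of box height as a distance proxy are all illustrative assumptions, not the project's final design.

```python
# Hypothetical sketch: mapping a detector's bounding box (normalized to
# [0, 1] image coordinates) to a coarse direction and proximity estimate.

from dataclasses import dataclass

@dataclass
class Detection:
    x_min: float  # normalized bounding-box coordinates
    y_min: float
    x_max: float
    y_max: float

def estimate_direction(det: Detection) -> str:
    """Classify the obstacle as left/center/right from the box center."""
    center_x = (det.x_min + det.x_max) / 2
    if center_x < 1/3:
        return "left"
    if center_x > 2/3:
        return "right"
    return "center"

def estimate_proximity(det: Detection) -> float:
    """Use box height as a crude proximity proxy in [0, 1]:
    closer obstacles occupy more of the frame."""
    return min(1.0, det.y_max - det.y_min)

det = Detection(x_min=0.05, y_min=0.2, x_max=0.30, y_max=0.9)
print(estimate_direction(det))   # left
print(estimate_proximity(det))
```

A real implementation would replace the proximity heuristic with the object-detection model's output and, if available, depth estimation.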

## Processing and Control Subsystem
The processing subsystem interprets the sensing results and determines the correct haptic response. Possible components include:
- Embedded controller or processor
- Obstacle localization logic
- Haptic feedback mapping algorithm
- Timing and control logic

This subsystem converts vision-based obstacle information into control commands for the feedback device.
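One possible shape for the haptic feedback mapping is sketched below: direction selects which motor vibrates, and proximity sets the intensity as a PWM duty cycle. The motor indices, the 0.2 silence threshold, and the 20–100% duty range are illustrative assumptions for this sketch, not committed design parameters.

```python
# Hypothetical haptic feedback mapping: obstacle direction selects the
# motor, and proximity (0 = far, 1 = touching) sets vibration intensity
# as a PWM duty-cycle percentage.

MOTORS = {"left": 0, "center": 1, "right": 2}  # assumed motor indices

def haptic_command(direction: str, proximity: float) -> tuple[int, int]:
    """Return (motor_index, duty_cycle_percent) for one feedback pulse.
    Intensity grows with proximity; below a threshold, stay silent."""
    if proximity < 0.2:                        # far away: no feedback
        return (MOTORS[direction], 0)
    duty = int(20 + 80 * min(1.0, proximity))  # map to 20-100% duty
    return (MOTORS[direction], duty)

print(haptic_command("left", 0.7))   # (0, 76)
```

Making the mapping monotonic (closer obstacles always vibrate harder) and keeping a dead zone for distant objects should help users interpret the cues without training.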

## Vibration Feedback Subsystem
The feedback subsystem communicates obstacle information to the user through tactile vibration cues. Possible components include:
- Vibration motors
- Motor driver circuitry
- Wearable actuator placement
- Feedback mapping design for direction and distance

This subsystem is responsible for conveying obstacle direction and relative distance in an intuitive and distinguishable way.

## Power Subsystem
The power subsystem provides portable and stable power to all electronics. Possible components include:
- Rechargeable battery
- Voltage regulation circuit
- Charging interface
- Power switch and protection circuitry

This subsystem enables continuous wearable operation.

## Wearable Integration Subsystem
The wearable integration subsystem packages the prototype into a form suitable for real use. Possible components include:
- Wearable mounting structure
- Sensor and actuator supports
- Wiring and enclosure management
- Adjustable fastening mechanism

This subsystem ensures that the device is practical, lightweight, and wearable.

# Criteria of Success

The project will be considered successful if the final prototype satisfies the following criteria:

1. The device must detect nearby obstacles within the intended range with reliable performance during indoor testing.

2. The system must communicate obstacle direction and relative distance through vibration feedback in a way that users can correctly interpret during controlled testing.

3. The device must provide obstacle information without using audio output, thereby preserving the user’s auditory awareness of the surrounding environment.

4. The final prototype must function as a wearable, battery-powered system capable of real-time operation during demonstration.

# Augmented Reality and Virtual Reality for Electromagnetics Education

Team Members: Zhanyu Feng, Zhewen Fu, Han Hua, Daosen Sun

Featured Project

# PROBLEM

Many students find electromagnetics a difficult subject to master, partly because electromagnetic waves cannot be visualized directly with the naked eye. The subject therefore becomes an abstraction that relies heavily on mathematical formulation.

# SOLUTION OVERVIEW

We focus on using AR/VR technology for large-scale, complex, and interactive visualization of electromagnetic waves. To speed up the calculation, we will compute the field responses and render the fields in real time, likely accelerated by GPU computing, cluster computation, and other advanced numerical algorithms. In addition, we propose to deliver public, immersive, and interactive education to users. We plan to use the existing VR equipment in the VR square at laboratory building D220 to present users with a wide field of view and high-resolution, high-quality 3D stereoscopic images, making the virtual environment closely comparable to the real world. Users can work together and interact with each other while manipulating the virtual objects. This project also establishes the basis for developing digital-twin technology for electromagnetics that effectively links the real world with digital space.
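To make the computation concrete, the sketch below (an illustration, not the project's actual solver) evaluates the static electric field of point charges on a 2D grid by Coulomb superposition. A vectorized NumPy formulation like this is the kind of per-grid-point arithmetic that maps naturally onto the GPU for real-time rendering.

```python
# Illustrative sketch: electrostatic field of point charges on a 2D grid
# via Coulomb superposition, E = k*q*r_vec/|r|^3, summed over charges.

import numpy as np

def e_field(charges, positions, grid_x, grid_y, k=8.99e9):
    """Superpose the field of each point charge over the whole grid."""
    ex = np.zeros_like(grid_x)
    ey = np.zeros_like(grid_y)
    for q, (px, py) in zip(charges, positions):
        dx, dy = grid_x - px, grid_y - py
        r3 = (dx**2 + dy**2) ** 1.5 + 1e-12  # avoid division by zero
        ex += k * q * dx / r3
        ey += k * q * dy / r3
    return ex, ey

# A dipole: +1 nC at (-0.5, 0) and -1 nC at (0.5, 0).
xs, ys = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
ex, ey = e_field([1e-9, -1e-9], [(-0.5, 0.0), (0.5, 0.0)], xs, ys)
```

The per-charge loop is embarrassingly parallel across grid points, which is why a GPU port (or a cluster decomposition of the grid) is a natural acceleration path.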

# COMPONENTS

1. Numerical computation component: the part responsible for computing the field lines via Maxwell's equations. We will try to offload the work to the GPU for better performance.

2. Graphic rendering component: this part will receive data from the numerical computation component and use renderers to visualize the data.

3. User interface component: this part processes users' actions and allows users to interact with objects in the virtual world.

4. Audio component: this part will generate audio based on the electromagnetic fields on charged objects.

5. Haptic component: this part will interact with the controller to send vibration feedback to users based on the field strength.

# CRITERIA OF SUCCESS

Set up four distinct experiments to illustrate the concepts behind the four Maxwell equations. Students can work together, using controllers to place different types of charged objects and adjust their orientation and position. Students can see both static and real-time electromagnetic fields around charged objects via VR devices. Achieve high frame rates in the virtual world, accelerating the field computation with advanced algorithms so that the rendered electromagnetic fields are smooth.

# WHAT MAKES OUR PROJECT UNIQUE

We will build four distinct scenarios based on the four Maxwell equations, rather than the single Gauss's Law scene made by the UIUC team. In these scenarios, we will render both the electric and magnetic field lines around charged objects, as well as the forces between them.

The experiments allow multiple users to interact with objects simultaneously; in other words, users can cooperate with each other while conducting experiments. While the lab scene made by the UIUC team allows only one user to do the experiment alone, we make the experiments public and allow multiple users to engage in them.

We will use different hardware for the computation. Rather than relying on the CPU, we will parallelize the calculation on the GPU to improve performance and support large-scale field visualization for multiple users.

Compared to the UIUC project, we will not only visualize the fields but also expand the dimensions in which users can perceive the phenomena, i.e., adding haptic feedback and audio feedback to give users a 4D experience.