# Request for Approval (RFA)
## ECE 445 / ME 470 – Spring 2026

**Project Title:** Intelligent Wearable Vision Systems for Assistive Perception

### Team Members

| Name | UID | Major |
| :--- | :--- | :--- |
| Yi Su | 676182091 | Mechanical Engineering |
| Mingyan Gao | 658581716 | Computer Engineering |
| Shengnan Cai | 665630420 | Electrical Engineering |
| Junchen He | 663319500 | Electrical Engineering |

- **Date Submitted:** March 13, 2026
- **Course Instructor:** Prof. Wee-Liat Ong
- **Suggested TA:** Zhao Ruolin (UID: 22571086)

---

## 1. Problem

For visually impaired individuals, navigating everyday environments — hallways, crosswalks, stairs — requires either reliance on others or tools that fall significantly short of what the situation demands. Traditional aids like white canes provide limited spatial awareness, while existing smart glasses tend to either overwhelm users with indiscriminate scene description or fail to operate reliably in real-world conditions.

A key shortcoming of current systems is that they treat perception as a static problem: they do not adapt to whether the user is walking briskly through a crowd, pausing at a curb, or turning into an unfamiliar corridor. The result is feedback that arrives too late, too often, or without meaningful prioritization — reducing rather than enhancing the user's sense of control.

## 2. Solution Overview

We propose a wearable vision system — designed to be worn as glasses or integrated into a lightweight cap — that uses on-device computer vision to continuously monitor the environment and relay only the information most relevant to safe navigation. Rather than describing everything the camera sees, the system focuses on hazards that require near-term action: obstacles at ground level, steps, approaching pedestrians, and doorways. Feedback is delivered through bone-conduction audio and small vibration motors, keeping the user's hands free and their ears open to surrounding sound.

A distinguishing feature of the design is that the system monitors the user's own motion through an inertial sensor and adjusts its behavior accordingly — more alert and faster to flag hazards when the user is moving, quieter and less intrusive when they have stopped. This context-sensitivity is what we believe makes the difference between a system that genuinely aids navigation and one that simply adds noise.
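The motion-adaptive behavior described above can be sketched as a simple thresholding rule on recent IMU samples. All names, thresholds, and distances below are placeholder assumptions for early prototyping, not final design values:

```python
import math

# Sketch of motion-adaptive alerting: warn earlier when the user is walking,
# stay quiet unless a hazard is very close when they have stopped.
# Thresholds here are assumptions to be tuned during prototyping.

WALKING_ACCEL_SPREAD = 1.5     # m/s^2 of variation suggesting the user is walking
ALERT_DISTANCE_MOVING = 3.0    # metres: flag hazards earlier while moving
ALERT_DISTANCE_STOPPED = 1.0   # metres: only flag very close hazards when still

def is_moving(accel_samples):
    """Crude motion check: spread of recent accelerometer magnitudes."""
    mags = [math.sqrt(x * x + y * y + z * z) for (x, y, z) in accel_samples]
    return (max(mags) - min(mags)) > WALKING_ACCEL_SPREAD

def should_alert(hazard_distance_m, accel_samples):
    """Pick the alert distance based on whether the user appears to be walking."""
    threshold = ALERT_DISTANCE_MOVING if is_moving(accel_samples) else ALERT_DISTANCE_STOPPED
    return hazard_distance_m <= threshold
```

In practice the motion check would likely use a windowed variance or a small state machine with hysteresis, so that brief pauses mid-stride do not flip the system between modes.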

## 3. Components

The system is organized into four subsystems, all housed within a wearable form factor.

### 3.1 Sensing Subsystem
A compact RGB or RGB-D camera captures the scene ahead, while an IMU (accelerometer and gyroscope) tracks the user's movement and orientation. The camera will be selected based on trade-offs between power draw, weight, and depth sensing quality; we are currently evaluating a few candidates including OV-series modules and Intel RealSense D4xx compact variants.

### 3.2 Processing and Intelligence Subsystem
The core computation runs on a small edge board — likely a Raspberry Pi 5 or NVIDIA Jetson Nano depending on the latency and power budget we settle on during prototyping. A lightweight object detection model (MobileNet-SSD or similar) handles hazard classification in real time. A separate logic layer fuses the IMU data to decide when and how urgently to trigger feedback, and estimates rough distances to flagged objects using depth information or monocular depth cues.
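The logic layer's prioritization step might look like the sketch below: each detection is scored by a class weight divided by estimated distance, so closer and more dangerous hazards are reported first. The weights and scoring rule are assumptions for illustration, to be refined during testing:

```python
# Sketch of the hazard-prioritisation logic layer: rank detections so
# feedback reports the most urgent hazard first. Class weights and the
# scoring rule are placeholder assumptions, not calibrated values.

HAZARD_WEIGHT = {
    "vehicle": 3.0,
    "stairs": 2.5,
    "pedestrian": 2.0,
    "curb": 1.5,
    "door": 1.0,
}

def prioritise(detections):
    """Order (label, distance_m) detections from most to least urgent.

    Urgency grows with the hazard's class weight and shrinks with distance;
    distance is floored at 0.1 m to avoid division blow-ups for hazards
    essentially at the user.
    """
    def urgency(det):
        label, distance_m = det
        return HAZARD_WEIGHT.get(label, 1.0) / max(distance_m, 0.1)

    return sorted(detections, key=urgency, reverse=True)
```

Keeping this scoring separate from the detector makes it easy to retune urgency during usability tests without retraining the vision model.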

### 3.3 Feedback Subsystem
Audio output uses a bone-conduction transducer so the user can hear ambient sound simultaneously. Haptic output comes from small eccentric rotating mass (ERM) motors positioned to give a rough directional sense — for instance, left versus right — when a hazard is detected nearby. The feedback modality, timing, and phrasing will be iterated based on informal usability tests during development.
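One way the left/right directional cue could be derived is from the hazard's horizontal position in the camera frame, as in this sketch. The frame width, dead-band, and channel names are assumptions to be tuned in the usability tests mentioned above:

```python
# Sketch of mapping a detection's horizontal position in the frame to the
# left/right ERM motors. Resolution and dead-band are assumed values.

FRAME_WIDTH = 640       # pixels, assumed camera resolution
CENTER_DEADBAND = 0.15  # fraction of half-frame treated as "straight ahead"

def haptic_channel(bbox_center_x):
    """Return which motor(s) to pulse for a hazard centred at bbox_center_x."""
    offset = (bbox_center_x / FRAME_WIDTH) - 0.5  # -0.5 (far left) .. +0.5 (far right)
    if offset < -CENTER_DEADBAND:
        return "left"
    if offset > CENTER_DEADBAND:
        return "right"
    return "both"  # hazard roughly straight ahead: pulse both motors
```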

### 3.4 Power and Mechanical Subsystem
Power is provided by a rechargeable Li-Po cell sized to last a full day of use. The enclosure will be designed with wearability as a primary constraint — lightweight materials, balanced weight distribution, and enough environmental sealing to be usable outdoors in light rain or dust.
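A back-of-envelope sizing for the "full day" requirement can be run before any parts are ordered. Every power figure below is a rough assumption for the candidate components, not a measurement:

```python
# Back-of-envelope Li-Po sizing for the full-day battery-life requirement.
# All power draws, hours, and derating are assumptions for planning only.

AVG_POWER_W = {
    "edge_board": 4.0,       # assumed average for a duty-cycled SBC
    "camera": 1.0,
    "imu_and_haptics": 0.3,
    "audio": 0.2,
}
HOURS_REQUIRED = 10          # "full day" of active use, assumed
BATTERY_VOLTAGE = 3.7        # nominal single-cell Li-Po voltage
DERATING = 0.8               # usable fraction of rated capacity, assumed

total_w = sum(AVG_POWER_W.values())                  # average system draw in watts
capacity_wh = total_w * HOURS_REQUIRED / DERATING    # energy the pack must supply
capacity_mah = capacity_wh / BATTERY_VOLTAGE * 1000  # convert to the usual mAh rating

print(f"Roughly {capacity_mah:.0f} mAh at {BATTERY_VOLTAGE} V nominal")
```

Even these optimistic figures imply a pack far too large for a glasses-mounted device, which is useful to know early: it points toward aggressive duty-cycling of the detector, a belt- or cap-mounted battery, or relaxing the full-day target.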

## 4. Criteria of Success

We consider the project successful if the system demonstrates reliable and timely hazard perception in realistic navigation scenarios, both indoors (hallways, stairwells) and outdoors (sidewalks, crosswalks). We will evaluate the following outcomes through structured tests with participants.

- **Perception reliability:** The system should detect and correctly identify the target hazard categories (pedestrians, vehicles, curbs, stairs, doors) with high consistency across typical indoor and outdoor lighting conditions, maintaining a low enough false alarm rate that the feedback remains trustworthy rather than distracting.

- **Response timeliness:** End-to-end latency — from camera capture to delivered feedback — should be short enough that a walking user has sufficient time to react and adjust their path. The system should feel responsive in everyday use rather than lagged.

- **Wearability:** The assembled system should be light and compact enough for comfortable extended wear, with battery life sufficient to cover a full day of normal use without recharging.

- **Usability:** In informal blindfolded navigation tests, participants unfamiliar with the system should be able to interpret the feedback cues quickly and respond appropriately to introduced hazards without verbal instruction. We will iterate on the feedback design until this is achieved to a satisfactory degree.
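The response-timeliness criterion can be turned into a concrete budget check: given assumed per-stage timings and a typical walking speed, how much distance margin remains after the system reacts and the user responds? All numbers below are assumptions to be replaced with measured values during testing:

```python
# Rough latency-budget check for the responsiveness criterion. Stage timings,
# walking speed, and reaction time are assumptions, not measurements.

STAGE_MS = {
    "capture": 33,        # one frame at ~30 fps
    "inference": 120,     # assumed detector time on the edge board
    "fusion_logic": 10,
    "feedback": 30,       # audio/haptic output path
}
WALKING_SPEED_MPS = 1.4   # typical walking pace, assumed
ALERT_DISTANCE_M = 3.0    # distance at which the system flags a hazard
REACTION_TIME_S = 1.0     # time the user needs to adjust course, assumed

system_latency_s = sum(STAGE_MS.values()) / 1000
distance_consumed = WALKING_SPEED_MPS * (system_latency_s + REACTION_TIME_S)
margin_m = ALERT_DISTANCE_M - distance_consumed  # buffer left to stop or steer

print(f"End-to-end latency {system_latency_s * 1000:.0f} ms "
      f"leaves {margin_m:.2f} m of margin")
```

Under these assumptions the user still has over a metre of buffer, which suggests the 3 m alert distance and a sub-200 ms pipeline would be mutually consistent targets; the structured tests would confirm or revise both.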
