# An event-based smart vision node for ultra-low-latency motion detection

## Group Members

Luying Wang, Shuke Wang, Yaxing Zhang, Yueyao Si

**TA:** Aili Wang

**Documents:** proposal1.docx
# Problem



Traditional motion detection systems usually rely on frame-based cameras, which capture full images at fixed intervals. Because consecutive frames are often nearly identical, the system stores and processes a large amount of redundant information, which increases both the data load and the power consumption. Moreover, motion can only be analyzed after a batch of frames has been captured and processed, which is not acceptable for applications that require very low latency.



As a result, the main problem is how to build a vision system that responds to motion efficiently by processing only meaningful visual changes instead of full frames, while demonstrating advantages in latency, resource usage, and power consumption over a conventional frame-based approach.

# Solution Overview



Our solution is to build an event-based vision system using a DVS camera, FPGA, and SNN-inspired processing. Instead of capturing and processing full image frames, the system operates directly on the incoming event stream.



The system first captures event-based visual data from a DVS camera. These events are then sent to the FPGA, where they are received, parsed, and temporarily buffered in real time without reconstructing full frames. The formatted event stream is then passed to a software-based SNN-inspired module, which analyzes motion patterns over time and generates a detection result when meaningful activity is observed. When motion is detected, the result is sent to the output subsystem for display with minimal latency.



If time allows, a frame-based baseline may be implemented for comparison, so that our system can be evaluated in terms of end-to-end latency, event throughput, and power consumption.

# Solution Components & Distribution of Work



### Event-Based Vision Sensor (Shuke Wang – EE)



- Dynamic Vision Sensor (DVS) Camera: Employs a neuromorphic event-based sensor that captures visual information as asynchronous spikes of pixel-level brightness changes. Each event includes pixel coordinates, polarity, and a precise microsecond timestamp, enabling ultra‑low‑latency motion detection without the need for full frame readout.



- High‑Speed Data Interface: Outputs event streams using the Address‑Event Representation (AER) protocol over a high‑bandwidth link. This interface allows direct, real‑time transmission of raw events to the FPGA processing platform, minimizing additional latency and preserving the temporal precision of the sensor.



- Optics and Mounting: The camera is equipped with a suitable lens to match the target field of view and application scenario. It is rigidly mounted on an adjustable stage to facilitate precise alignment and stable imaging conditions during experiments.
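As a concrete illustration, a single event of the kind described above could be modeled in software as follows. The 32-bit word layout here is a hypothetical example for the sketch; the actual AER packing depends on the specific camera and must be taken from its datasheet.

```python
from dataclasses import dataclass

@dataclass
class DVSEvent:
    x: int          # pixel column
    y: int          # pixel row
    polarity: int   # 1 = brightness increase, 0 = decrease
    timestamp: int  # microseconds

def parse_event(word: int, timestamp: int) -> DVSEvent:
    """Unpack one event from a packed 32-bit AER word.

    Assumed (hypothetical) layout: bits [31:17] = y, [16:2] = x,
    bit 1 = polarity. Real sensors use different packings.
    """
    y = (word >> 17) & 0x7FFF
    x = (word >> 2) & 0x7FFF
    polarity = (word >> 1) & 0x1
    return DVSEvent(x=x, y=y, polarity=polarity, timestamp=timestamp)
```

The same field extraction would be done with bit slicing on the FPGA side; this model is only for validating the software pipeline.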

### FPGA Subsystem (Yaxing Zhang – EE)



- The FPGA subsystem serves as the real-time processing platform of the system. It receives the event stream from the DVS camera through a high-speed interface and parses each event into pixel coordinates, polarity, and timestamp.



- The parsed events are temporarily stored in on-chip buffers to maintain stable data flow and handle burst event traffic. The FPGA can also perform lightweight pre-processing such as basic filtering before passing the formatted event stream to the motion detection module.



- This hardware platform ensures low latency and efficient handling of asynchronous event data in the system pipeline.
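The lightweight pre-processing mentioned above could be, for example, a background-activity filter that drops isolated noise events. A behavioral sketch in Python (the hardware version would use per-pixel timestamp memory on the FPGA; the 5 ms support window is an illustrative parameter):

```python
def background_activity_filter(events, dt_us=5000):
    """Pass an event only if a pixel in its 3x3 neighborhood fired
    within the last dt_us microseconds; isolated events are dropped.

    `events` is an iterable of (x, y, polarity, timestamp_us) tuples
    in timestamp order.
    """
    last_ts = {}  # (x, y) -> last event timestamp in microseconds
    passed = []
    for x, y, pol, ts in events:
        support = any(
            ts - last_ts.get((x + dx, y + dy), -10**9) <= dt_us
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        )
        if support:
            passed.append((x, y, pol, ts))
        last_ts[(x, y)] = ts
    return passed
```

Real motion produces spatially correlated events, so it survives the filter, while thermal noise events with no recent neighbors are discarded before reaching the detection module.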

### SNN-Based Motion Detection Subsystem (Luying Wang – ECE)



- An SNN-inspired module that analyzes incoming events, detects motion regions by updating neural activity based on event spikes, builds up motion activity in certain regions, and generates an output when the activity exceeds a threshold.
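A minimal sketch of this accumulate-and-threshold behavior is shown below. The grid size, cell size, leak time constant, and threshold are illustrative placeholders, not the final design values; each grid cell acts like a leaky integrate-and-fire neuron driven by event spikes.

```python
import math

class LeakyMotionDetector:
    """SNN-inspired motion detector sketch (parameters are illustrative).

    Each spatial region holds a leaky "membrane potential" that is
    charged by incoming event spikes and decays exponentially between
    events. When a region's potential crosses `threshold`, motion is
    reported in that region and the potential is reset.
    """
    def __init__(self, grid_w=15, grid_h=12, cell=16,
                 tau_us=20000.0, threshold=8.0):
        self.cell = cell
        self.tau_us = tau_us
        self.threshold = threshold
        self.potential = [[0.0] * grid_w for _ in range(grid_h)]
        self.last_ts = [[0] * grid_w for _ in range(grid_h)]

    def process(self, x, y, ts_us):
        gx, gy = x // self.cell, y // self.cell
        dt = ts_us - self.last_ts[gy][gx]
        # exponential leak since the region's previous event, plus one spike
        v = self.potential[gy][gx] * math.exp(-dt / self.tau_us) + 1.0
        self.potential[gy][gx] = v
        self.last_ts[gy][gx] = ts_us
        if v >= self.threshold:
            self.potential[gy][gx] = 0.0  # reset after firing
            return (gx, gy)               # motion detected in this region
        return None
```

Because the potential leaks away between events, slow background activity never reaches the threshold, while a moving object produces a dense burst of events in one region and triggers a detection.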

### Output Subsystem (Yueyao Si – ME)



- The output subsystem is responsible for presenting the final motion detection result generated by the SNN-inspired module. Once motion activity exceeds the predefined threshold, a detection signal is produced and forwarded to the output controller.



- In the current implementation, the FPGA receives the detection result and triggers a visual indicator such as an LED or display module. When motion is detected, the indicator is activated in real time; otherwise it remains off.



- This subsystem provides a simple and low-latency way to demonstrate the system response to motion events. The output interface can also be extended to support other devices, such as a monitor display, UART logging interface, or external control signals for robotic or embedded applications.

# Criteria of Success

### Functionality

- The complete pipeline runs successfully from event input to final output.



- The motion detection module can correctly identify motion regions from the event stream.



- The output responds correctly to motion: the display turns on when motion is detected and remains off otherwise.



### Performance

- The end-to-end latency is less than 50 ms.



- The measured FPGA board power during operation is less than 5 W.



- The FPGA resource utilization remains below 80% of available logic and memory resources.
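The 50 ms latency target could be verified by logging, for each detection, the timestamp of the triggering event and the time the output actually changed (e.g. captured with a logic analyzer on the LED pin). A small sketch of such a summary, with hypothetical helper and field names:

```python
import statistics

def summarize_latency(event_ts_us, output_ts_us):
    """Summarize per-detection latency samples (all times in microseconds).

    event_ts_us[i] is the timestamp of the event that triggered
    detection i; output_ts_us[i] is when the indicator changed.
    """
    lat_ms = [(o - e) / 1000.0 for e, o in zip(event_ts_us, output_ts_us)]
    return {
        "mean_ms": statistics.mean(lat_ms),
        "max_ms": max(lat_ms),
        "meets_50ms": max(lat_ms) < 50.0,
    }
```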

# References

- [Event-based Vision: A Survey](https://arxiv.org/pdf/1904.08405)



- [Event-based vision on FPGAs – a survey](https://arxiv.org/html/2407.08356v1#bib.bib60)



- [Neuro-Inspired Spike-Based Motion: From Dynamic Vision Sensor to Robot Motor Open-Loop Control through Spike-VITE](https://www.mdpi.com/1424-8220/13/11/15805)



- [A Reconfigurable Architecture for Real-time Event-based Multi-Object Tracking | ACM Transactions on Reconfigurable Technology and Systems](https://dl.acm.org/doi/10.1145/3593587)


# Fixed wing drone with auto-navigation

## Group Members

**Zhibo Teng** NetID: zhibot2

**Yihui Li** NetID: yihuil2

**Ziyang An** NetID: ziyanga2

**Zhanhao He** NetID: zhanhao5

## Problem

Traditional methods of data collection, such as using manned aircraft or ground surveys, can be time-consuming, expensive, and limited in their ability to access certain areas. The multi-rotor UAVs in common use today fly slowly and have a short range per flight, which makes them unsuitable for some long-distance operations; they also require manual control, which limits their convenience. Fixed wing drones with auto-navigation can overcome these limitations by providing a cost-effective and flexible solution for aerial data collection.

The motivation behind our design is to provide a reliable and efficient way to collect high-quality data from the air, which can improve decision-making processes for a variety of industries. The drone can fly pre-determined flight paths, making it easier to cover large areas and collect consistent data. The auto-navigation capabilities can also improve the accuracy of the data collected, reducing the need for manual intervention and minimizing the risk of errors.

## Solution Overview

Our design is a fixed wing drone with auto-navigation capabilities that is optimized for aerial data collection. The drone is equipped with a range of sensors and cameras, as well as software that allows it to fly pre-determined flight paths and collect data in a consistent and accurate manner. Our design solves the problem of inefficient and costly aerial data collection by providing a cost-effective and flexible solution that can cover large areas quickly and accurately. The auto-navigation capabilities of the drone enable it to fly pre-determined flight paths, which allows for consistent and repeatable data collection. This reduces the need for manual intervention, which can improve the accuracy of the data and minimize the risk of errors. Additionally, the drone’s compact size and ability to access difficult-to-reach areas can make it an ideal solution for industries that require detailed aerial data collection.

## Solution Components

### Subsystem #1: Aircraft Structure and Design

* Design the overall structure of the plane, including the wings, fuselage, and tail section

* Use 3D modeling software to create a digital model of the plane

* Choose materials for construction based on their weight, durability, and strength

* Create a physical model of the plane using 3D printing or laser cutting

### Subsystem #2: Flight Control System

* Implement a flight control system that can be operated both manually and automatically

* For manual control, design a control panel that includes a joystick and other necessary controls

* For automatic control, integrate a flight controller module that can be programmed with waypoints and flight parameters

* Choose appropriate sensors for detecting altitude, speed, and orientation of the plane

* Implement algorithms for stabilizing the plane during flight and adjusting control surfaces for directional control
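The stabilization algorithms above are typically built from PID loops on roll, pitch, and yaw. A minimal sketch of one such loop, with illustrative gains and a simple anti-windup guard (the real gains must be tuned on the airframe):

```python
class PID:
    """Minimal PID controller sketch for one control surface."""
    def __init__(self, kp, ki, kd, out_min=-1.0, out_max=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * deriv
        if self.out_min < out < self.out_max:
            self.integral += error * dt  # anti-windup: freeze integral when saturated
        return max(self.out_min, min(self.out_max, out))  # clamp to servo range
```

For example, a roll-stabilization loop would call `update(0.0, measured_roll, dt)` at a fixed rate and feed the clamped output to the aileron servos.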

### Subsystem #3: Power and Propulsion

* Choose a suitable motor and propeller to provide the necessary thrust for the plane

* Design and integrate a battery system that can power the motor and control systems for a sufficient amount of time

* Implement a power management system that can monitor the battery voltage and ensure safe operation of the plane
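The battery monitoring step could classify the pack voltage into flight states. A sketch with typical LiPo per-cell thresholds (the cell count and cutoff values are assumptions to be matched to the chosen battery):

```python
def battery_state(voltage, cells=3, v_cutoff=3.3, v_warn=3.5):
    """Classify a LiPo pack voltage using per-cell thresholds.

    Returns "ok", "warn" (prepare to land), or "cutoff" (land now).
    Thresholds are typical LiPo values, not measured limits.
    """
    per_cell = voltage / cells
    if per_cell <= v_cutoff:
        return "cutoff"
    if per_cell <= v_warn:
        return "warn"
    return "ok"
```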

### Subsystem #4: Communication and Telemetry

* Implement a wireless communication system for transmitting telemetry data and controlling the plane remotely

* Choose a suitable communication protocol such as Wi-Fi or Bluetooth

* Develop a user interface for displaying telemetry data and controlling the plane from a mobile device or computer
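The telemetry link needs an agreed packet format between the drone and the ground interface. A hypothetical fixed-size packet (sequence number plus location, speed, and altitude) could be packed and unpacked as follows; the field set and byte layout are placeholders for the sketch:

```python
import struct

# little-endian: uint32 seq, then four float32 fields
TELEMETRY_FMT = "<Iffff"  # seq, latitude, longitude, speed (m/s), altitude (m)

def pack_telemetry(seq, lat, lon, speed, alt):
    """Serialize one telemetry sample into a 20-byte payload."""
    return struct.pack(TELEMETRY_FMT, seq, lat, lon, speed, alt)

def unpack_telemetry(payload):
    """Deserialize a payload produced by pack_telemetry."""
    seq, lat, lon, speed, alt = struct.unpack(TELEMETRY_FMT, payload)
    return {"seq": seq, "lat": lat, "lon": lon, "speed": speed, "alt": alt}
```

The sequence number lets the ground station detect dropped packets over a lossy wireless link.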

## Criterion for Success

1. Design and complete the UAV model including wings, fuselage, and tail section

2. The UAV can fly stably and can be controlled both manually and automatically

3. Flight data, including location, speed, and altitude, can be monitored in real time

## Distribution of Work

**Zhibo Teng:** Aircraft Structure and Design

**Yihui Li:** Aircraft Structure and Design

**Ziyang An:** Flight Control System, Power and Propulsion

**Zhanhao He:** Flight Control System, Communication and Telemetry