| # | Title | Team Members | TA | Documents | Sponsor |
|---|-------|--------------|----|-----------|---------|
| 12 | Laser System to Shoot Down Mosquitos | Fan Yang, Ruochen Wu, Yuxin Qu, Zhongqi Wu | Xinyi Xu | design_document1.pdf, final_paper1.pdf, other8.pdf, proposal1.pdf | Timothy Lee |
# Laser System to Shoot Down Mosquitos
## Team Members
- Ruochen Wu (rw12)
- Yuxin Qu (yuxin19)
- Zhongqi Wu (zhongqi19)
- Fan Yang (fy10)

## Problem
Around the world, thousands of people suffer disease and death brought on by mosquito bites. An effective method of protection against mosquitoes is therefore necessary. Tracking a mosquito and killing it with a laser may be a feasible solution.

## Solution Overview
First, the laser gun attached to the camera emits a low-power laser to mark the drop point, i.e., where the beam lands. To locate the mosquito, we run yolov5s on our computation platform for real-time detection, then move the camera to shrink the distance between the drop point and the mosquito until the two coincide. The laser gun then emits a high-power laser to destroy the mosquito.

There are several implementation challenges. The first is the computing platform: an embedded development board may be incapable of running yolov5 in real time, so we are considering boards with NPU or CUDA support. Cloud computing is another option, but it may suffer from high latency and low stability. Besides, a mosquito is tiny and may occupy only a few pixels in a frame; if necessary, we may use radar to assist detection. Laser safety is also a major concern: we plan to perform a safety check before emitting the high-power beam and to select a power level that is harmless to humans.
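
A minimal sketch of this detect-aim-fire loop, assuming a yolov5s checkpoint fine-tuned on a mosquito dataset (`mosquito_best.pt` is a placeholder filename) and a hypothetical `TurretDriver` motor/laser interface whose motor side is sketched under the attacking system below; `locate_aiming_dot`, `scene_is_clear`, and `fire_high_power_pulse` are stubs we would still have to implement:

```python
# Minimal sketch of the detect-aim-fire loop. The checkpoint filename and
# the TurretDriver interface are assumptions, not a finished design.
import torch

model = torch.hub.load('ultralytics/yolov5', 'custom', path='mosquito_best.pt')

def detect_mosquito(frame):
    """Return the (x, y) pixel center of the best mosquito detection, or None."""
    results = model(frame[..., ::-1].copy())  # OpenCV BGR -> RGB
    det = results.xyxy[0]                     # rows: [x1, y1, x2, y2, conf, cls]
    if det.shape[0] == 0:
        return None
    x1, y1, x2, y2, conf, cls = det[det[:, 4].argmax()].tolist()
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def aim_and_fire(turret, cap, tolerance_px=3):
    """Close the loop between the aiming dot and the detected mosquito."""
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        target = detect_mosquito(frame)
        if target is None:
            continue
        dot = turret.locate_aiming_dot(frame)   # stub: find the low-power dot
        dx, dy = target[0] - dot[0], target[1] - dot[1]
        if abs(dx) < tolerance_px and abs(dy) < tolerance_px:
            if turret.scene_is_clear(frame):    # safety check before firing
                turret.fire_high_power_pulse()
            break
        turret.step_motors(dx, dy)              # nudge the dot toward the target
```
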
## Solution Components
#### 1. Positioning system:
- High-resolution camera
- Low power laser for aiming
- Software employing yolov5
- Computing platform (cloud server or embedded development board)

#### 2. Attacking system:
- Driver control module (a control sketch follows this list)
- Rotation motor
- High power laser for shooting
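
A minimal sketch of how the driver control module might turn the pixel offset from the positioning system into rotation commands; the `PAN`/`TILT` serial command format and the degrees-per-pixel gain are assumptions for illustration, not a finished protocol:

```python
# Proportional pixel-to-angle correction for the rotation motors.
import serial

DEG_PER_PX = 0.05  # assumed gain, roughly field of view / frame width

class TurretDriver:
    def __init__(self, port='/dev/ttyUSB0'):
        self.link = serial.Serial(port, 115200, timeout=1)
        self.pan = 90.0   # servo center positions in degrees
        self.tilt = 90.0

    def step_motors(self, dx_px, dy_px):
        """Nudge the turret so the aiming dot moves toward the target."""
        self.pan = min(180.0, max(0.0, self.pan + dx_px * DEG_PER_PX))
        self.tilt = min(180.0, max(0.0, self.tilt + dy_px * DEG_PER_PX))
        self.link.write(f'PAN {self.pan:.1f}\n'.encode())
        self.link.write(f'TILT {self.tilt:.1f}\n'.encode())
```
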

## Criterion for Success
1. Be able to detect a mosquito in the scene from a camera and locate its position.
2. The laser device can target and shoot the mosquito.
3. The laser does not harm people (one possible safety interlock is sketched below).
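
One possible safety interlock, assuming we reuse the stock COCO-pretrained yolov5s (whose classes include `person`) as an independent check before the high-power pulse; this is an assumed approach, not our finalized safety design:

```python
# Refuse to fire if any person is detected in the frame.
import torch

safety_model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # COCO weights

def scene_is_clear(frame) -> bool:
    """Return True only if no person is detected in the frame."""
    det = safety_model(frame[..., ::-1].copy()).pandas().xyxy[0]
    return not (det['name'] == 'person').any()
```
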

## Distribution of Work

Ruochen and Zhongqi are responsible for training yolov5 on mosquito datasets. Although mosquitoes are very small and processing speed is limited by the device, both are ECE students and have extensive experience in deep learning.

Fan, who is majoring in ECE, will handle the deployment of yolov5 on the embedded device. It takes time to become familiar with the board's environment and to make full use of its computing resources.

Yuxin will take charge of the driver control module, the laser, and the motors. Yuxin is an EE student who is familiar with control systems. Although we all lack mechanical experience, the mechanical work accounts for only a small portion of the project.

An Intelligent Assistant Using Sign Language

Qianzhong Chen, Howie Liu, Haina Lou, Yike Zhou

Featured Project

# TEAM MEMBERS

Qianzhong Chen (qc19)

Hanwen Liu (hanwenl4)

Haina Lou (hainal2)

Yike Zhou (yikez3)

# TITLE OF THE PROJECT

An Intelligent Assistant Using Sign Language

# PROBLEM & SOLUTION OVERVIEW

Recently, smart home accessories have become more and more common in people's homes. A hub, usually a speaker with a voice user interface, is needed to control private smart home accessories. But an interactive speaker may not be ideal for people who have difficulty speaking or hearing. Therefore, we aim to develop an intelligent assistant that uses sign language: it can understand sign language, interact with people, and act as a real assistant.

# SOLUTION COMPONENTS

## Subsystem1: 12-Degree-of-Freedom Bionic Hand System

- Two movable joints on every finger, driven by 5-V servo motors

- The main parts of the hand manufactured with 3D printing

- The bionic hand is fixed on a 2-DOF electrical platform

- All of the servo motors are controlled by PWM signals from an STM32 microcontroller (the standard servo PWM mapping is sketched below)
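
For reference, a standard hobby servo is positioned by the pulse width of a 50 Hz PWM signal, roughly 1 ms to 2 ms across 0° to 180° (exact limits vary per servo model). A minimal sketch of that mapping, written in Python for clarity even though the STM32 firmware itself would be in C:

```python
# Standard hobby-servo PWM mapping: 50 Hz period, ~1.0-2.0 ms pulse width
# spanning 0-180 degrees. The 1.0/2.0 ms endpoints are typical values,
# not guaranteed ones.
PERIOD_MS = 20.0      # 50 Hz PWM period
MIN_PULSE_MS = 1.0    # pulse width at 0 degrees
MAX_PULSE_MS = 2.0    # pulse width at 180 degrees

def angle_to_duty(angle_deg: float) -> float:
    """Convert a joint angle (0-180 deg) to a PWM duty cycle (0-1)."""
    angle_deg = min(180.0, max(0.0, angle_deg))
    pulse_ms = MIN_PULSE_MS + (MAX_PULSE_MS - MIN_PULSE_MS) * angle_deg / 180.0
    return pulse_ms / PERIOD_MS

# e.g. 90 degrees -> 1.5 ms pulse -> 7.5% duty cycle
assert abs(angle_to_duty(90.0) - 0.075) < 1e-9
```
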

## Subsystem2: The Control System

- The control system consists of embedded modules: the microcontroller; a high-performance edge computing platform that runs the dynamic gesture recognition model; and more than 20 motors that produce the delicate movements of the bionic hand. It also requires a high-precision camera to capture the user's hand gestures. (A sketch of the edge-platform-to-microcontroller link follows below.)
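
A minimal sketch of that link, assuming a simple `SET <joint> <angle>` text protocol over serial; the protocol is an illustration, not our finalized interface. The STM32 firmware would parse each line and update the corresponding PWM channel:

```python
# Edge-platform side of the control link.
import serial

NUM_JOINTS = 12  # one target angle per DOF of the bionic hand

def send_pose(link, angles_deg):
    """Send one target angle per joint to the microcontroller."""
    if len(angles_deg) != NUM_JOINTS:
        raise ValueError('expected one angle per joint')
    for joint, angle in enumerate(angles_deg):
        link.write(f'SET {joint} {angle:.1f}\n'.encode())

link = serial.Serial('/dev/ttyUSB0', 115200, timeout=1)
send_pose(link, [90.0] * NUM_JOINTS)  # command a neutral pose
```
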

## Subsystem3: Dynamic Gesture Recognition System

- An external camera capturing the shape, appearance, and motion of the user's hands

- A pre-trained model that helps the other subsystems figure out the meaning behind the sign language. Specifically, for object detection we intend to adopt the YOLO algorithm as well as MediaPipe, a machine learning framework developed by Google, to recognize different signs efficiently. Given the dynamic nature of gestures, we also hope to build our models on 3D-CNNs and RNNs to better capture spatio-temporal features. (A pipeline sketch follows below.)
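
A sketch of one plausible version of this pipeline: MediaPipe Hands extracts 21 hand landmarks per frame, and a small LSTM (standing in for the RNN stage) classifies the landmark sequence. The clip length and gesture-class count are placeholders:

```python
# Landmark extraction with MediaPipe Hands plus an LSTM sequence classifier.
import cv2
import mediapipe as mp
import torch
import torch.nn as nn

SEQ_LEN, NUM_CLASSES = 30, 10   # assumed: 30-frame clips, 10 gesture classes

class GestureLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=21 * 3, hidden_size=128, batch_first=True)
        self.head = nn.Linear(128, NUM_CLASSES)

    def forward(self, x):               # x: (batch, SEQ_LEN, 63)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])    # classify from the last time step

def landmark_frames(video_path):
    """Yield a 63-dim vector (21 landmarks x (x, y, z)) per detected frame."""
    hands = mp.solutions.hands.Hands(max_num_hands=1)
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_hand_landmarks:
            pts = result.multi_hand_landmarks[0].landmark
            yield [v for p in pts for v in (p.x, p.y, p.z)]
    cap.release()
```
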

# CRITERION OF SUCCESS

- The bionic hand can move freely and fluently as designed, with all 12 DOFs realized. The movement of a single finger joint neither interrupts nor is interrupted by other movements. The bionic hand achieves adequate durability and reliability.

- The control system needs to be reliable and output stable PWM signals to the motors. The edge computing platform we choose should deliver high performance when running the dynamic gesture recognition model.

- Our machine can recognize different signs immediately and react with the corresponding gestures without noticeable delay.

# DISTRIBUTION OF WORK

- Qianzhong Chen (ME): mechanical design and manufacture of the bionic hand; tune the linkage between the motors and mechanical parts; work with Haina to program the STM32 to generate PWM signals and drive the motors.

- Hanwen Liu (CompE): record gesture clips to collect enough data; test camera modules; draft reports; make schedules.

- Haina Lou (EE): implement the embedded control system; program the microcontroller and the AI edge computing module; implement serial communication.

- Yike Zhou (EE): accomplish the object detection subsystem; build and train the machine learning models.