# Project 36: Intelligent Robot Arm

Team Members: Chenghan Li, Haoran Yang, Yiming Li, Yipu Liao

TA: Wee-Liat Ong

Documents: design_document1.pdf, design_document2.pdf, final_paper1.pdf, final_paper2.pdf, other2.pdf, proposal4.pdf, proposal3.pdf
# TEAM MEMBERS
- Haoran Yang [haorany8]
- Chenghan Li [cli104]
- Yiming Li [yiming20]
- Yipu Liao [yipul2]

# TITLE
Intelligent robot arm

# PROBLEM
For individuals with disabilities or limited mobility, it is hard to perform certain everyday tasks. In other circumstances, a task is simply repetitive and tedious for a human. We want to design an intelligent robot arm that helps people with disabilities and frees people from repetitive work.

# SOLUTION OVERVIEW
Our graduation project aims to develop an intelligent robotic arm capable of executing diverse tasks through voice and visual recognition. The overarching concept involves a user verbally identifying an object on a table to the robot. Upon receiving the voice command, the robot uses its camera system to detect the specified object and subsequently manipulates it.

In addition to these fundamental functionalities, the system is designed to interpret intricate voice instructions, such as rotating the object to specific degrees based on predefined references or following a set reference for movement. This innovative project harbors the potential to significantly benefit individuals with visual impairments in managing their daily tasks, as well as aiding those facing critical situations, such as during fires or earthquakes.

# SOLUTION COMPONENTS

# SUBSYSTEM 1
A four- or five-axis robotic arm

# SUBSYSTEM 2

An algorithm that can control the robotic arm to move and grab objects
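
The proposal does not specify the control algorithm, so as an illustrative sketch only: a common starting point for arm control is closed-form inverse kinematics. The example below solves a simplified two-link planar case (the actual arm would have four or five axes); the function names and link-length parameters are our own, not from the proposal.

```python
import math

def ik_two_link(x, y, l1, l2):
    """Inverse kinematics for a planar 2-link arm (elbow-down solution).

    Returns joint angles (theta1, theta2) in radians placing the end
    effector at (x, y); raises ValueError if the target is unreachable.
    """
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle.
    cos_t2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_t2 <= 1.0:
        raise ValueError("target out of reach")
    t2 = math.acos(cos_t2)
    # Shoulder angle: direction to target minus the offset from link 2.
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                       l1 + l2 * math.cos(t2))
    return t1, t2

def fk_two_link(t1, t2, l1, l2):
    """Forward kinematics, used here only to verify the IK solution."""
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return x, y
```

A full solution for the real arm would extend this to more joints (typically numerically, via the Jacobian), but the round-trip check `fk(ik(x, y)) == (x, y)` is the same.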

# SUBSYSTEM 3

A robotic vision system consisting of a camera and an algorithm that can detect a specified object and determine its position relative to the robotic arm
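
To make "relative position" concrete, here is a minimal sketch of the geometry step, assuming a pinhole camera model with known intrinsics and a camera mounted at a fixed, rotation-free offset from the arm base. All parameter names (`fx`, `cx`, `cam_offset`, etc.) are our own assumptions; the real system would add camera calibration and a proper rigid-body transform.

```python
def pixel_to_arm_frame(u, v, depth, fx, fy, cx, cy, cam_offset):
    """Convert a detected pixel (u, v) with known depth (metres) into
    the robot-arm base frame.

    Assumes a pinhole camera whose axes are aligned with the arm frame
    and translated by cam_offset = (ox, oy, oz).
    """
    # Back-project through the pinhole model into the camera frame.
    x_cam = (u - cx) * depth / fx
    y_cam = (v - cy) * depth / fy
    z_cam = depth
    ox, oy, oz = cam_offset
    # Pure translation between camera and arm frames (no rotation assumed).
    return (x_cam + ox, y_cam + oy, z_cam + oz)
```

The detector (e.g. a neural network or color segmentation) would supply `(u, v)`, and a depth camera or stereo pair would supply `depth`.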

# SUBSYSTEM 4

A voice recognition system consisting of a microphone and an algorithm that can identify which object a person is referring to.
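
Once a speech-to-text engine produces a transcript, the remaining step is mapping the text onto the vision system's object vocabulary. A hypothetical sketch of that mapping step (the function name and matching rule are our assumptions, not the proposal's design):

```python
def parse_object_command(transcript, known_objects):
    """Pick out which known object a transcribed voice command names.

    Matches whole words so that e.g. "cup" does not match "cupboard";
    multi-word object names match if all their words appear.
    Returns the matched object name, or None if nothing matches.
    """
    words = transcript.lower().split()
    for obj in known_objects:
        if all(w in words for w in obj.lower().split()):
            return obj
    return None
```

A real system would likely add fuzzy matching and synonyms ("mug" for "cup"), but the interface to the vision system — a single object label — stays the same.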

# CRITERION FOR SUCCESS

1. The robotic arm is able to receive the relative position of an object from the robotic vision system and use that information to grab the target.
2. The robotic vision system is able to detect an object, measure its position relative to the arm, and feed the result back to the robotic arm.
3. The voice recognition system is able to identify which object a person is referring to and pass it to the robotic vision system.
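
The three criteria together describe one data flow: voice names the object, vision localises it, the arm grabs it. A sketch of that flow, with the three subsystems injected as stand-in callables (names and signatures are our own, chosen so the flow can be exercised in isolation):

```python
def fetch_and_grab(command, recognize, locate, grab):
    """End-to-end flow implied by the success criteria.

    recognize: transcript -> object name or None   (criterion 3)
    locate:    object name -> (x, y, z) or None    (criterion 2)
    grab:      (x, y, z) -> None                   (criterion 1)
    Returns True only if every stage succeeds.
    """
    target = recognize(command)
    if target is None:
        return False
    position = locate(target)
    if position is None:
        return False
    grab(position)
    return True
```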

# DISTRIBUTION OF WORK
- Haoran Yang: CAD model, construct robotic arm
- Chenghan Li: design voice recognition system
- Yiming Li: design robotic vision system
- Yipu Liao: design robotic arm control algorithm, construct robotic arm

An Intelligent Assistant Using Sign Language

Qianzhong Chen, Howie Liu, Haina Lou, Yike Zhou

Featured Project

# TEAM MEMBERS

Qianzhong Chen (qc19)

Hanwen Liu (hanwenl4)

Haina Lou (hainal2)

Yike Zhou (yikez3)

# TITLE OF THE PROJECT

An Intelligent Assistant Using Sign Language

# PROBLEM & SOLUTION OVERVIEW

Recently, smart home accessories have become more and more common in people's homes. A hub, usually a speaker with a voice user interface, is needed to control private smart home accessories. But an interactive speaker may not be ideal for people who have difficulty speaking or hearing. Therefore, we aim to develop an intelligent assistant using sign language, which can understand sign language, interact with people, and act as a real assistant.

# SOLUTION COMPONENTS

## Subsystem1: 12-Degree-of-Freedom Bionic Hand System

- Two movable joints per finger, driven by 5 V servo motors

- The main parts of the hand manufactured with 3D printing

- The bionic hand is fixed on a 2-DOF electrical platform

- All of the servo motors controlled by PWM signals transmitted by an STM32 microcontroller
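
As a minimal sketch of the PWM step, assuming a typical hobby-servo convention (roughly a 500-2500 µs pulse over a 180° sweep at 50 Hz — the actual servos' datasheet values may differ): the angle-to-pulse mapping below is what the STM32 would load into a timer compare register.

```python
def servo_pulse_us(angle_deg, min_us=500, max_us=2500, max_angle=180.0):
    """Map a joint angle to a servo PWM pulse width in microseconds.

    Assumes a linear hobby-servo response between min_us and max_us;
    on the STM32 this value would set the timer's output-compare
    register to shape the actual PWM signal.
    """
    if not 0.0 <= angle_deg <= max_angle:
        raise ValueError("angle out of range")
    return min_us + (max_us - min_us) * angle_deg / max_angle
```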

## Subsystem2: The Control System

- The control system consists of embedded modules: the microcontroller, a high-performance edge computing platform that runs the dynamic gesture recognition model, and more than 20 motors that drive the delicate movements of the bionic hand. It also requires a high-precision camera to capture the user's hand gestures.

## Subsystem3: Dynamic Gesture Recognition System

- An external camera capturing the shape, appearance, and motion of the user's hands

- A pre-trained model to help the other subsystems figure out the meaning behind the sign language. More specifically, for object detection we intend to adopt the YOLO algorithm as well as Mediapipe, a machine learning framework developed by Google, to recognize different signs efficiently. Considering the dynamic nature of gestures, we also plan to adopt 3D-CNN and RNN models to better capture the spatio-temporal features.
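
As an illustration of what a landmark-based front end might feed the dynamic model, here is a deliberately simplified static feature: counting extended fingers from 2-D hand landmarks in the Mediapipe 21-point ordering (tip/PIP indices per that ordering; the thumb and the thresholding rule are simplifications of ours, not the project's design).

```python
def count_extended_fingers(landmarks):
    """Rough static-gesture feature from hand landmarks.

    `landmarks` is a list of 21 (x, y) points in the Mediapipe hand
    ordering, with y increasing downward in image coordinates. A finger
    (index..pinky) counts as extended when its tip lies above its PIP
    joint; the thumb is ignored in this simplified sketch. A dynamic
    recogniser (3D-CNN/RNN) would consume sequences of such features.
    """
    # (tip, pip) landmark indices for index, middle, ring, pinky.
    finger_joints = [(8, 6), (12, 10), (16, 14), (20, 18)]
    return sum(1 for tip, pip in finger_joints
               if landmarks[tip][1] < landmarks[pip][1])
```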

# CRITERION OF SUCCESS

- The bionic hand can move freely and fluently as designed, with all 12 DOFs functional. Movement of a single finger joint neither interferes with nor is interrupted by other movements. The hand is durable and reliable.

- The control system needs to be reliable and output stable PWM signals to the motors. The edge computing platform we choose should deliver high performance when running the dynamic gesture recognition model.

- Our machine can recognize different signs immediately and react with the corresponding gestures without obvious delay.

# DISTRIBUTION OF WORK

- Qianzhong Chen (ME): mechanical design and manufacture of the bionic hand; tune the linkage between motors and mechanical parts; work with Haina to program the STM32 to generate PWM signals and drive the motors.

- Hanwen Liu (CompE): record gesture clips to collect enough data; test camera modules; draft reports; make schedules.

- Haina Lou (EE): implement the embedded control system; program the microcontroller and the AI edge computing module; implement serial communication.

- Yike Zhou (EE): accomplish the object detection subsystem; build and train the machine learning models.