Project

| Title | Team Members | TA | Documents | Sponsor |
| --- | --- | --- | --- | --- |
| 4: Automatic Page-Turning Photocopier | Shuchang Dong, Xuan Zhu, Yingying Gao, Yiying Lyu | | design_document1.pdf, design_document2.pdf, design_document3.pdf, other1.pdf, proposal1.pdf | Liangjing Yang |

# Problem
Current photocopying machines require manual page-turning, which is inconvenient and inefficient. Additionally, these machines are limited in their applicability, as they are primarily designed for bound documents such as books. This limitation restricts their use in scenarios where unbound or irregularly shaped documents need to be copied, such as in printing shops or educational institutions. As a result, there is a need for a more versatile and efficient solution to streamline the document copying process in these environments.

# Solution overview
Our solution for reducing human labor when using a photocopier is to create a robotic arm system. The robotic arm has multiple linkages to support its motion, a camera to capture images of each page, a screen that allows operators to monitor progress and perform necessary operations, and adaptive lighting to accommodate varying book bindings and paper conditions. Additionally, the photocopier can be controlled by an automated computer program, enabling seamless page-turning during photographing or scanning and thereby enhancing efficiency and minimizing manual effort.
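
As a rough illustration of the intended control flow, the sketch below (in Python) loops over lighting adjustment, image capture, and page turns until a preset page count is reached or no page is left. The function names (`adjust_lighting`, `capture_page`, `turn_page`, `save_image`) and the 6-second timing check are placeholders for hardware routines not specified in this proposal; this is a minimal sketch of the automation logic, not the actual control program.

```python
import time

PAGE_LIMIT = 200          # hypothetical preset page number from the interface
TURN_TIMEOUT_S = 6.0      # success criterion: one page turn within 6 seconds

def copy_document(adjust_lighting, capture_page, turn_page):
    """Automated capture-and-turn loop built on hypothetical hardware callbacks."""
    for page in range(1, PAGE_LIMIT + 1):
        adjust_lighting()                 # adaptive LED level for this page
        save_image(capture_page(), page)  # high-resolution camera frame

        start = time.monotonic()
        if not turn_page():               # arm turns exactly one page;
            break                         # stop automatically when none remain
        if time.monotonic() - start > TURN_TIMEOUT_S:
            print(f"warning: page {page} turn exceeded {TURN_TIMEOUT_S} s")

def save_image(image, page):
    """Placeholder: write the captured frame to disk, e.g. page_001.png."""
    pass
```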

# Components

## Page Turning subsystem
- A robotic arm or mechanism is required to automatically turn pages. This mechanism will be supported by a robust frame that ensures stability and precision (a turn-sequence sketch follows this list).
- A base frame that is easy to recognize is required to support the material being copied.
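
A single page turn can be decomposed into a short sequence of arm poses: approach the page edge, grip or lift the sheet, sweep it across the binding, release, and return. The sketch below encodes such a sequence as joint-angle waypoints for a hypothetical three-joint arm; the joint names, angles, and `set_joint_angle` driver call are illustrative assumptions rather than the designed mechanism.

```python
import time

# Hypothetical waypoints for one page turn: (shoulder, elbow, wrist) in degrees.
TURN_SEQUENCE = [
    (10,  80,  0),   # approach the right-hand page edge
    (10,  85, 30),   # press lightly / lift the sheet
    (60,  70, 30),   # sweep the page across the binding
    (110, 80,  0),   # lay the page down on the left side
    (10,  80,  0),   # return to the rest pose
]

def set_joint_angle(joint, degrees):
    """Placeholder for the real servo/stepper driver."""
    print(f"joint {joint} -> {degrees} deg")

def turn_one_page(step_delay_s=0.5):
    """Drive the arm through the waypoints for a single page turn."""
    for shoulder, elbow, wrist in TURN_SEQUENCE:
        set_joint_angle("shoulder", shoulder)
        set_joint_angle("elbow", elbow)
        set_joint_angle("wrist", wrist)
        time.sleep(step_delay_s)   # 5 steps x 0.5 s stays well under 6 s

if __name__ == "__main__":
    turn_one_page()
```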

## Photocopying System
- Equipped with a high-resolution camera, this system will capture clear and detailed images of each page.
- The LED lighting will be optimized based on the characteristics of the paper to prevent reflections and ensure complete content capture (a capture-and-glare-check sketch follows this list).
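
One way to meet the criterion of producing images without significant reflections is to check each captured frame for overexposed regions and re-capture with a lower LED level when glare is detected. The sketch below uses OpenCV for capture and a simple saturated-pixel ratio as the glare metric; the camera index, the 2% threshold, and the `set_led_level` call are assumptions for illustration, not measured values from the design.

```python
import cv2
import numpy as np

GLARE_THRESHOLD = 0.02   # assumed: at most 2% near-white pixels per frame

def set_led_level(level):
    """Placeholder for the real LED driver (e.g. a PWM-dimmed light bar)."""
    print(f"LED level set to {level:.1f}")

def glare_ratio(gray):
    """Fraction of pixels that are nearly saturated (possible reflections)."""
    return float(np.mean(gray > 245))

def capture_page(camera_index=0, max_attempts=3):
    """Capture a page image, dimming the LEDs and retrying if glare is seen."""
    cap = cv2.VideoCapture(camera_index)
    try:
        level = 1.0
        for _ in range(max_attempts):
            set_led_level(level)
            ok, frame = cap.read()
            if not ok:
                raise RuntimeError("camera read failed")
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if glare_ratio(gray) <= GLARE_THRESHOLD:
                return frame              # acceptably glare-free image
            level *= 0.7                  # dim the LEDs and retry
        return frame                      # fall back to the last capture
    finally:
        cap.release()
```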

## Input/Output Interface
- An intuitive user interface will allow users to input commands such as page settings.
- A display screen is needed for easy interaction and status updates.

# Criteria of Success
- Turns one page at a time.
- Completes a page-turning action within 6 seconds.
- Produces clear scanned images with no significant shadows or reflections.
- Automatically stops when all pages have been turned or when reaching a preset page number.

An Intelligent Assistant Using Sign Language

Qianzhong Chen, Howie Liu, Haina Lou, Yike Zhou

Featured Project

# TEAM MEMBERS

Qianzhong Chen (qc19)

Hanwen Liu (hanwenl4)

Haina Lou (hainal2)

Yike Zhou (yikez3)

# TITLE OF THE PROJECT

An Intelligent Assistant Using Sign Language

# PROBLEM & SOLUTION OVERVIEW

Recently, smart home accessories have become more and more common in people's homes. A control center, which is usually a speaker with a voice user interface, is needed to control these private smart home accessories. However, an interactive speaker may not be ideal for people who have difficulty speaking or hearing. Therefore, we aim to develop an intelligent assistant that uses sign language: it can understand sign language, interact with people, and act as a real assistant.

# SOLUTION COMPONENTS

## Subsystem1: 12-Degree-of-Freedom Bionic Hand System

- Two movable joints on each finger, driven by 5 V servo motors

- The main parts of the hand manufactured by 3D printing

- The bionic hand is fixed on a 2-DOF motorized platform

- All of the servo motors are controlled by PWM signals generated by an STM32 microcontroller (a pulse-width sketch follows this list)
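
Hobby servos of this class are typically commanded with a 50 Hz PWM signal whose pulse width (roughly 0.5 ms to 2.5 ms) maps linearly to joint angle, and the STM32 timers would be configured to generate those pulses. The sketch below shows only the angle-to-pulse-width arithmetic, in Python, as a reference for the firmware; the 20 ms period and the 500-2500 microsecond endpoints are common servo conventions and should be checked against the chosen motors' datasheets.

```python
PWM_PERIOD_US = 20_000   # 50 Hz frame, a common servo refresh rate
MIN_PULSE_US = 500       # assumed pulse width at 0 degrees
MAX_PULSE_US = 2500      # assumed pulse width at 180 degrees

def angle_to_pulse_us(angle_deg: float) -> float:
    """Linearly map a joint angle in [0, 180] degrees to a pulse width in us."""
    angle_deg = max(0.0, min(180.0, angle_deg))
    return MIN_PULSE_US + (MAX_PULSE_US - MIN_PULSE_US) * angle_deg / 180.0

def angle_to_duty_cycle(angle_deg: float) -> float:
    """Duty cycle (0-1) to load into the timer's compare register."""
    return angle_to_pulse_us(angle_deg) / PWM_PERIOD_US

if __name__ == "__main__":
    for a in (0, 90, 180):
        print(f"{a:3d} deg -> {angle_to_pulse_us(a):6.1f} us, "
              f"duty {angle_to_duty_cycle(a):.4f}")
```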

## Subsystem2: The Control System

- The control system consists of embedded modules: the microcontroller, a high-performance edge computing platform that runs the dynamic gesture recognition model, and more than 20 motors that drive the fine movements of the bionic hand. It also requires a high-precision camera to capture the user's hand gestures (a serial-link sketch follows).
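
The edge computing platform has to pass recognized gestures, or the resulting target joint angles, to the microcontroller, and the team plans to use serial communication for this link. Below is a minimal sketch of the edge side using pyserial; the port name, baud rate, and the comma-separated angle frame are assumptions for illustration, not a specified protocol.

```python
import serial  # pyserial

# Assumed link settings; the real port and baud rate depend on the wiring.
PORT = "/dev/ttyUSB0"
BAUD = 115200

def send_joint_angles(link, angles):
    """Send one frame of joint angles (degrees) as a comma-separated line."""
    frame = ",".join(str(int(a)) for a in angles) + "\n"
    link.write(frame.encode("ascii"))

if __name__ == "__main__":
    with serial.Serial(PORT, BAUD, timeout=1) as link:
        # Example: command the 12 hand joints plus the 2-DOF platform to neutral.
        send_joint_angles(link, [90] * 14)
```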

## Subsystem3: Dynamic Gesture Recognition System

- An external camera captures the shape, appearance, and motion of the target hands

- A pre-trained model helps the other subsystems figure out the meaning behind the sign language. More specifically, for the object detection step we intend to adopt the YOLO algorithm as well as MediaPipe, a machine learning framework developed by Google, to recognize different signs efficiently. Given the dynamic nature of gestures, we also plan to use 3D-CNN and RNN models to better capture the spatio-temporal features (a landmark-extraction sketch follows this list).
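
As a rough starting point for this pipeline, the sketch below uses MediaPipe Hands to extract 21 hand landmarks per frame from a camera stream; the landmark sequences would then be fed to the dynamic-gesture classifier (the 3D-CNN/RNN mentioned above), which is only stubbed out here. The camera index, sequence length, and classifier stub are assumptions for illustration.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def landmark_vector(hand_landmarks):
    """Flatten MediaPipe's 21 (x, y, z) hand landmarks into a single list."""
    return [c for lm in hand_landmarks.landmark for c in (lm.x, lm.y, lm.z)]

def classify_sequence(frames):
    """Stub: a trained 3D-CNN / RNN would map the landmark sequence to a sign."""
    return "unknown"

def run(camera_index=0, sequence_length=30):
    cap = cv2.VideoCapture(camera_index)
    buffer = []
    with mp_hands.Hands(max_num_hands=2, min_detection_confidence=0.5) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                buffer.append(landmark_vector(results.multi_hand_landmarks[0]))
            if len(buffer) == sequence_length:
                print("predicted sign:", classify_sequence(buffer))
                buffer.clear()
    cap.release()

if __name__ == "__main__":
    run()
```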

# CRITERION OF SUCCESS

- The bionic hand can move freely and fluently as designed, with all 12 DOFs realized. The movement of a single finger joint neither interrupts nor is interrupted by other movements. The durability and reliability of the bionic hand are achieved.

- The control system needs to be reliable and output stable PWM signals to the motors. The edge computing platform we choose should deliver high performance when running the dynamic gesture recognition model.

- Our machine can recognize different signs immediately and react with the corresponding gestures without obvious delay.

# DISTRIBUTION OF WORK

- Qianzhong Chen (ME): carry out the mechanical design and manufacturing of the bionic hand; tune the linkage between the motors and the mechanical parts; work with Haina to program the STM32 to generate PWM signals and drive the motors.

- Hanwen Liu (CompE): record gesture clips to collect enough data; test camera modules; draft reports; make schedules.

- Haina Lou (EE): implement the embedded control system; program the microcontroller and the AI edge computing module; implement serial communication.

- Yike Zhou (EE): complete the object detection subsystem; build and train the machine learning models.