Project
| # | Title | Team Members | TA | Documents | Sponsor |
|---|---|---|---|---|---|
| 15 | Vision-Based Sign Language Recognition System for Smart Furniture Control | Chongying Yue, Licheng Xu, Mingzhi Gu, Zihan Xu | Yushi Cheng | proposal1.pdf | |
## Problem

Current smart home systems rely primarily on voice control or mobile apps. Neither method is accessible to the hearing impaired, and controlling devices through a mobile app adds extra steps, lowering interaction efficiency. This project therefore aims to develop a system that controls furniture devices directly through visual gesture recognition, providing a more intuitive and accessible interaction method for smart homes.

## Solution Overview

Our solution is a vision-based sign language recognition system for smart furniture control. A camera captures the user's hand movements in real time, and computer vision techniques detect hand keypoints and gestures, converting them into furniture control commands, *such as turning on the lights*. The gesture recognition results are sent to the main control unit, which parses them into control commands and generates the corresponding control signals to drive the furniture devices.

## Solution Components

### Software Component

- **Real-time Gesture Recognition**: Runs on the vision processing unit. The system acquires hand images through a camera and uses MediaPipe to extract gesture features. Based on these features, a lightweight machine learning model classifies the gesture and recognizes the user's control input.
- **Control Logic**: The main controller receives gesture recognition results from the vision module and parses them into specific control commands. Depending on the command, the system generates PWM or GPIO control signals to drive the physical devices.

### Hardware Component

- **Vision Processing Unit**: A camera module and a vision processing board *(e.g., K230)* that acquires images of the user's hands and runs the gesture recognition algorithms.
- **Main Control Unit**: An STM32 microcontroller that receives the recognition results and generates the corresponding control signals.
- **Execution Drive Module**: Motor drive circuits and relay modules that control the actual furniture devices, *e.g., smart lighting systems*.

## Criteria of Success

- The system stably recognizes at least 5 predefined gestures with an accuracy above 70%.
- The latency from gesture input to furniture device response is under 1 second.
- The system successfully controls at least two types of furniture devices.

## Distribution of Work

- **Zihan Xu:** Develops the visual recognition module and tests gesture recognition accuracy in different environments.
- **Licheng Xu:** Designs the STM32 control programs, parsing gesture commands and generating PWM/GPIO control signals.
- **Chongying Yue:** Responsible for hardware circuit design and implementation, including motor drive circuits and power management.
- **Mingzhi Gu:** Responsible for system architecture design and overall integration, including the design and debugging of the furniture control interface and system stability testing.
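As a rough illustration of the recognition stage, the sketch below classifies gestures from the 21-point hand landmarks that MediaPipe's hand model produces. The capture step (`mediapipe.solutions.hands`) is omitted, and the gesture names, the extended-finger heuristic, and the landmark geometry are illustrative assumptions, not the team's final model.

```python
# Minimal sketch of the gesture-classification stage, assuming input in
# MediaPipe's 21-point hand-landmark format (landmark 0 = wrist).
# The gesture labels and the "extended finger" heuristic are assumptions
# for illustration; the proposal's real classifier is a trained model.
from math import dist

# MediaPipe landmark indices: (fingertip, joint below it) per finger.
FINGERTIPS = {"thumb": (4, 2), "index": (8, 6), "middle": (12, 10),
              "ring": (16, 14), "pinky": (20, 18)}

def extended_fingers(landmarks):
    """Fingers whose tip lies farther from the wrist than the joint
    below it -- a crude test for 'finger is extended'."""
    wrist = landmarks[0]
    return {name for name, (tip, joint) in FINGERTIPS.items()
            if dist(landmarks[tip], wrist) > dist(landmarks[joint], wrist)}

def classify(landmarks):
    """Map extended-finger patterns to illustrative control gestures."""
    up = extended_fingers(landmarks)
    if len(up) == 5:
        return "LIGHT_ON"      # open palm
    if not up:
        return "LIGHT_OFF"     # fist
    if up == {"index"}:
        return "FAN_SPEED_UP"  # pointing
    return "UNKNOWN"
```

A rule of this kind could serve as a baseline against which the lightweight learned model's accuracy target (5 gestures, >70%) is measured.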
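The link between the vision processing unit and the STM32 main controller could be sketched as a small framed protocol. The frame layout below (a 0xA5 header byte, a gesture ID, and an XOR checksum) and the command table are assumptions for illustration; the actual wire format and the STM32-side C firmware are for the team to define.

```python
# Hedged sketch of the gesture-to-command path between the vision unit
# and the main controller. The 3-byte frame format and the command table
# are illustrative assumptions, not the proposal's actual protocol.
HEADER = 0xA5

COMMANDS = {  # gesture ID -> (device, action) the controller would act on
    0x01: ("light", "on"),
    0x02: ("light", "off"),
    0x03: ("fan", "speed_up"),
}

def encode_frame(gesture_id: int) -> bytes:
    """Frame a gesture ID as the vision unit might send it over UART."""
    return bytes([HEADER, gesture_id, HEADER ^ gesture_id])

def parse_frame(frame: bytes):
    """Validate a frame and return its (device, action), or None.
    Rejecting malformed frames keeps a glitch on the serial line from
    toggling a device."""
    if len(frame) != 3 or frame[0] != HEADER:
        return None
    if frame[0] ^ frame[1] != frame[2]:  # checksum mismatch
        return None
    return COMMANDS.get(frame[1])
```

On the STM32 side, the parsed `(device, action)` pair would then select a GPIO toggle or a PWM duty-cycle update, as described under Control Logic.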