# Project 21: Vision-driven Automatic Posture Correction Device

Team Members: Weichong Chen, Xiaoyu Xu, Yilun Chen
Documents: proposal1.pdf
Sponsor: Wee-Liat Ong
# Problem
The digital age has led to increased reliance on portable electronic devices, causing a significant rise in poor sitting postures. Traditional brackets lack dynamic adjustment, forcing users to adapt to fixed screens, which hinders healthy habits. Existing market solutions often fail to optimize the sight-screen relationship or rely on imprecise manual adjustments. This results in health issues such as cervical spine strain, muscle soreness, and carpal tunnel syndrome.
# Solution Overview
The project develops an Automatic Sight Correction Device Bracket. Using visual detection and attitude sensing, the bracket dynamically adjusts its height and tilt angle to maintain the user’s sight in a horizontal state. It is portable, universally compatible with mainstream devices, and powered via USB for mobile use.
# Solution Components
## Subsystem 1 (Hardware)
Core Microcontroller: A high-performance MCU processes data from the camera and gyroscope to perform closed-loop regulation of actuators.
Actuators: Includes a micro electric linear actuator (load capacity ≥2kg) with a linear encoder for precise height adjustment, and a worm gear motor for tilt angle control.
Sensing Modules: A high-definition camera captures facial landmarks (pupil, jawline) with autofocus and low-light compensation. A gyroscope provides real-time attitude data.
Structure & Power: Built from high-strength 3D-printed materials, the frame supports 7–12.9 inch tablets, smartphones, and e-readers. It uses a 5V/2A USB-C power scheme.
Interaction: Includes a one-click start button, emergency stop, and a DIP switch for manual/automatic mode switching.
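Integrated gyro rates drift over time, so the attitude data would typically be fused with an absolute tilt reference before it drives the actuators. A minimal 1-D Kalman-style fusion sketch — the noise values, update rate, and the source of the absolute tilt measurement are illustrative assumptions, not the team's design:

```python
# Minimal 1-D Kalman filter fusing a gyro rate (prediction) with an
# absolute tilt measurement (correction). All noise values and gains
# are illustrative placeholders, not tuned for real hardware.

class TiltKalman:
    def __init__(self, q=0.01, r=0.5):
        self.angle = 0.0   # fused tilt estimate (degrees)
        self.p = 1.0       # estimate variance
        self.q = q         # process noise (gyro drift per second)
        self.r = r         # measurement noise (absolute tilt sensor)

    def update(self, gyro_rate, tilt_meas, dt):
        # Predict: integrate the gyro rate forward in time.
        self.angle += gyro_rate * dt
        self.p += self.q * dt
        # Correct: blend in the absolute tilt measurement.
        k = self.p / (self.p + self.r)        # Kalman gain
        self.angle += k * (tilt_meas - self.angle)
        self.p *= (1.0 - k)
        return self.angle

kf = TiltKalman()
for _ in range(50):   # device held at ~10 degrees, slight gyro bias
    est = kf.update(gyro_rate=0.1, tilt_meas=10.0, dt=0.02)
```

After a few dozen updates the estimate settles near the measured tilt, with the gyro term smoothing out frame-to-frame measurement noise.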
## Subsystem 2 (Software)
Data Processing: Uses the Kalman Filter Algorithm to fuse sensor data and the Mediapipe framework to detect 68 facial landmarks.
Control Logic: A PID Control Algorithm calculates deviations between actual sight and the horizontal standard, driving actuators to correct the bracket without overshoot.
Safety: Automatically triggers alarms and stops adjustment if feedback is lost or deviations persist.
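The control logic above can be sketched end-to-end: the tilt of the line between the two detected pupils gives the deviation from horizontal, and a PID controller turns that deviation into an actuator command. The landmark coordinates, gains, and motor interface below are hypothetical illustrations, not the proposal's tuned values:

```python
import math

# Sketch of the correction loop: pupil-line tilt vs. horizontal -> PID
# -> actuator command. Gains and coordinates are illustrative only.

class PID:
    def __init__(self, kp=1.2, ki=0.1, kd=0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        deriv = (error - self.prev_err) / dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def sight_deviation(left_pupil, right_pupil):
    """Angle (degrees) of the pupil-to-pupil line relative to horizontal."""
    dx = right_pupil[0] - left_pupil[0]
    dy = right_pupil[1] - left_pupil[1]
    return math.degrees(math.atan2(dy, dx))

pid = PID()
# Example frame: the right pupil sits 20 px lower than the left,
# i.e. the head (or bracket) is tilted roughly 9.5 degrees.
error = sight_deviation((100, 200), (220, 220))
command = pid.step(error, dt=0.05)   # would drive the worm-gear motor
```

In the real device this loop would run per camera frame, with the derivative term damping the response to avoid the overshoot mentioned above.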
# Criteria of Success
Efficiency: Completes initial correction within 10 seconds; responds to posture changes exceeding 3°.
Performance: Supports up to 2kg loads; achieves ≥95% recognition accuracy under various lighting.
Safety & Usability: Features torque protection and emergency stop. Setup involves only three steps: place device, fix bracket, and one-click start.

# Keebot, a humanoid robot performing 3D pose imitation

Team Members: Zhi Cen, Hao Hu, Xinyi Lai, Kerui Zhu

Featured Project

# Problem Description

Life is movement, but exercising alone is boring. When people are alone, it is hard to stay motivated to exercise and easy to give up. Faced with the unprecedented COVID-19 pandemic, even more people have to do sports alone at home. Inspired by "Keep", a popular fitness app with many video demonstrations, we want to build a humanoid robot, "Keebot", which can imitate the movements of the user in real time. Compared to a virtual coach in a video, Keebot can provide physical company by doing the same exercises as the user, making exercising alone at home more interesting.

# Solution Overview

Our solution to creating such a movement-imitating robot combines computer vision and robotic design. The user's movement is captured by a fixed, stabilized depth camera. The 3D joint positions are calculated from the camera image with the help of neural networks and the camera's depth information. The 3D joint position data are then translated into motor angular rotation commands and sent to the robot over Bluetooth. The robot realizes the imitation by controlling its servo motors as commanded. Since the 3D position data and mechanical control are not ideal, we leave balance keeping out of scope and fix the robot's trunk to a holder.

# Solution Components

## 3-D Pose Info Translator: from depth camera to 3-D pose info

+ RealSense Depth Camera which can get RGB and depth frames

+ A series of pre-processing steps, such as denoising, normalization and segmentation, to reduce the impact of noise and the environment

+ Pre-trained 2-D Human Pose Estimation model to convert the RGB frames to 2-D pose info

+ Combine the 2-D pose info with the depth frames to get the 3-D pose info
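The last step — lifting a 2-D keypoint to 3-D using its depth value — follows the pinhole camera model. A sketch with illustrative intrinsics (a real RealSense supplies calibrated values, and librealsense provides `rs2_deproject_pixel_to_point` for exactly this):

```python
# Back-project a 2-D keypoint plus its depth into a 3-D point using the
# pinhole camera model. The intrinsics (fx, fy, cx, cy) below are
# illustrative placeholders, not real calibration values.

def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Pixel (u, v) at depth_m metres -> (X, Y, Z) in camera coordinates."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# A detected wrist keypoint at pixel (400, 260), 1.5 m from the camera:
joint_3d = deproject(400, 260, 1.5, fx=615.0, fy=615.0, cx=320.0, cy=240.0)
```

Applying this to every detected 2-D landmark yields the 3-D pose info consumed by the control system below.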

## Control system: from model to motors

+ An STM32-based PCB with a Bluetooth module and servo motor drivers

+ A mapping from the 3-D poses and movements to the joint parameters, based on Inverse Kinematics

+ A closed-loop control system using PID or state-space methods

+ Generation of control signals for the servo motor at each joint
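The inverse-kinematics mapping can be illustrated for a single planar two-link (shoulder–elbow) limb; the link lengths and wrist target below are hypothetical, and the real 14-DOF robot would solve a version of this per limb in 3-D:

```python
import math

# Two-link planar inverse kinematics via the law of cosines: given a
# wrist target (x, y) and link lengths l1, l2, recover joint angles.
# All numeric values are illustrative assumptions.

def ik_2link(x, y, l1, l2):
    """Return (shoulder, elbow) angles in radians reaching point (x, y)."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow bend directly.
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    cos_elbow = max(-1.0, min(1.0, cos_elbow))  # clamp against rounding
    elbow = math.acos(cos_elbow)
    # Shoulder = angle to target minus the offset contributed by link 2.
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

# Reach a wrist target 0.12 m out and 0.08 m up, with 0.10 m arm links:
shoulder, elbow = ik_2link(0.12, 0.08, 0.10, 0.10)
```

The resulting angles would then be converted to servo commands and sent over the Bluetooth link described above.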

## Mechanical structure: the body of the humanoid robot

+ CAD drawings of the robot’s physical structure, with 14 joints (14 DOF).

+ Simulations with the Robotics System Toolbox in MATLAB to test the stability and feasibility of the movements

+ Assembling the robot from 3D-printed parts, fasteners and motors

# Criteria of Success

+ 3-D pose info and movements are extracted from the video captured by the RealSense depth camera

+ The virtual robot can imitate the human's movements in MATLAB simulation

+ The physical robot can imitate the human's movements with its limbs while its trunk is fixed