# Project 3: Design and Control of a Fetching Quadruped

Team Members: Jitao Li, Teng Hou, Wenkang Li, Yikai Cao

Sponsor: Hua Chen

Documents: design_document1.pdf, design_document2.pdf, other1.docx, other2.pdf, other3.pdf, proposal1.pdf, proposal2.pdf

There are various commercially available robotic dog platforms, yet none of them demonstrates a "fetching" skill. One reason is the lack of a manipulator integrated with the dog. To be compatible with the robot dog, the arm needs to be lightweight, accurate, and robust. With the help of visual feedback, the integrated system will be able to perform simple tasks such as fetching. Such a manipulator requires a new design, good coordination of its components, and a dedicated controller.

# Featured Project: Keebot, a Humanoid Robot Performing 3D Pose Imitation

Zhi Cen, Hao Hu, Xinyi Lai, Kerui Zhu

# Problem Description

Life is movement, but exercising alone is boring. When people are alone, it is hard for them to motivate themselves to exercise and easy to give up. Faced with the unprecedented COVID-19 pandemic, even more people have to exercise alone at home. Inspired by "Keep", a popular fitness app with many video demonstrations, we want to build a humanoid robot "Keebot" which can imitate the movements of the user in real time. Compared with a virtual coach in a video, our Keebot can provide physical company by doing the same exercises as the user, thus making exercising alone at home more interesting.

# Solution Overview

Our solution for creating such a movement-imitating robot combines computer vision and robotic design. The user's movement is captured by a fixed, stabilized depth camera. The 3D joint positions are calculated from the camera images with the help of neural networks and the camera's depth information. The 3D joint position data are then translated into motor rotation angles and sent to the robot over Bluetooth. The robot realizes the imitation by controlling its servo motors as commanded. Since neither the 3D position data nor the mechanical control is ideal, we leave keeping the robot's balance out of scope and fix the robot's trunk to a holder.
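As a sketch of the "send over Bluetooth" step, the snippet below packs a set of joint angles into a fixed-size binary frame and writes it to a serial Bluetooth link with pyserial. The frame layout (a 0xA5 header byte, 14 little-endian float32 angles, and an 8-bit checksum) and the port name are illustrative assumptions, not the project's actual protocol.

```python
import struct

import numpy as np
import serial  # pyserial: the Bluetooth module shows up as a serial port

HEADER = 0xA5      # assumed start-of-frame marker
NUM_JOINTS = 14    # the robot has 14 DOF

def pack_joint_command(angles_rad):
    """Pack NUM_JOINTS joint angles (radians) into one binary command frame."""
    assert len(angles_rad) == NUM_JOINTS
    payload = struct.pack(f"<{NUM_JOINTS}f", *(float(a) for a in angles_rad))
    checksum = sum(payload) & 0xFF                  # simple 8-bit checksum
    return bytes([HEADER]) + payload + bytes([checksum])

if __name__ == "__main__":
    # Hypothetical port name; the STM32 firmware would parse the same frame.
    link = serial.Serial("/dev/rfcomm0", baudrate=115200, timeout=0.1)
    link.write(pack_joint_command(np.zeros(NUM_JOINTS)))  # e.g., neutral pose
    link.close()
```

A fixed-size frame like this keeps parsing on the microcontroller side trivial, at the cost of always transmitting all 14 angles per update.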

# Solution Components

## 3-D Pose Info Translator: from depth camera to 3-D pose info

+ RealSense Depth Camera which can get RGB and depth frames

+ Pre-processing steps such as denoising, normalization, and segmentation to reduce the impact of noise and the environment

+ A pre-trained 2-D human pose estimation model to convert the RGB frames into 2-D pose info

+ Combination of the 2-D pose info with the depth frames to obtain the 3-D pose info (a deprojection sketch follows this list)
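The last step can be realized with a standard pinhole-camera back-projection: for a keypoint at pixel (u, v) with depth Z taken from the aligned depth frame, the 3-D camera-frame point is X = (u − cx)·Z/fx, Y = (v − cy)·Z/fy. The sketch below assumes the depth frame is aligned to the RGB frame and expressed in meters; the intrinsics dictionary is a placeholder for the values reported by the RealSense stream profile (pyrealsense2 also offers rs2_deproject_pixel_to_point for the same computation).

```python
import numpy as np

def deproject_keypoints(keypoints_uv, depth_m, intrinsics):
    """Back-project 2-D keypoints to 3-D camera coordinates (pinhole model).

    keypoints_uv : (N, 2) pixel coordinates from the 2-D pose model
    depth_m      : (H, W) depth frame in meters, aligned to the RGB frame
    intrinsics   : dict with fx, fy, cx, cy from the RealSense stream profile
    """
    fx, fy = intrinsics["fx"], intrinsics["fy"]
    cx, cy = intrinsics["cx"], intrinsics["cy"]
    points = np.zeros((len(keypoints_uv), 3))
    for i, (u, v) in enumerate(keypoints_uv):
        z = depth_m[int(round(v)), int(round(u))]   # depth at the keypoint pixel
        points[i] = ((u - cx) * z / fx, (v - cy) * z / fy, z)
    return points

# Example with made-up numbers: one keypoint at pixel (320, 240), 1.5 m away.
demo_depth = np.full((480, 640), 1.5)
demo_intrinsics = {"fx": 600.0, "fy": 600.0, "cx": 320.0, "cy": 240.0}
print(deproject_keypoints([(320, 240)], demo_depth, demo_intrinsics))
```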

## Control system: from model to motors

+ An STM32-based PCB with a Bluetooth module and servo motor drivers

+ A mapping from the 3-D poses and movements to the joint parameters, based on Inverse Kinematics

+ A closed-loop control system using PID or a state-space method (an illustrative sketch follows this list)

+ Generation of control signals for the servo motors at each joint
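As an illustrative sketch of the mapping and control items above, the code below recovers one revolute-joint angle (e.g., the elbow) as the angle between the two adjacent limb vectors in the 3-D pose, then takes one step of a textbook discrete PID update toward that target. The gains, loop rate, and example coordinates are placeholders rather than tuned values from this project.

```python
import numpy as np

def joint_angle(p_parent, p_joint, p_child):
    """Angle (rad) at p_joint between adjacent limb vectors, e.g., shoulder-elbow-wrist."""
    a = np.asarray(p_parent, dtype=float) - np.asarray(p_joint, dtype=float)
    b = np.asarray(p_child, dtype=float) - np.asarray(p_joint, dtype=float)
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

class PID:
    """Minimal discrete PID controller for one servo joint."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target, measured):
        error = target - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: drive the elbow servo toward the angle seen in the user's 3-D pose.
elbow_target = joint_angle([0.0, 0.3, 1.2], [0.1, 0.0, 1.2], [0.3, 0.1, 1.1])
controller = PID(kp=2.0, ki=0.1, kd=0.05, dt=0.02)   # placeholder gains, 50 Hz loop
print(controller.update(elbow_target, measured=0.0))
```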

## Mechanical structure: the body of the humanoid robot

+ CAD drawings of the robot’s physical structure, with 14 joints (14 DOF).

+ Simulations with the Robotics System Toolbox in MATLAB to test the stability and feasibility of the movements

+ Assembling the robot from 3D-printed parts, fasteners, and motors

# Criterion of Success

+ 3-D pose info and movements are extracted from the RealSense depth camera's video stream

+ The virtual robot can imitate the user's movements in the MATLAB simulation

+ The physical robot can imitate the user's movements with its limbs while its trunk is fixed to the holder