Project

| # | Title | Team Members | TA | Documents | Sponsor |
| --- | --- | --- | --- | --- | --- |
| 19 | An immersive human-driven robot detecting foreign matter in tubes | Pengzhao Liu, Shixin Chen, Tianle Weng, Ziyuan Lin | Yutao Zhuang | design_document1.pdf, final_paper1.pdf, proposal3.pdf | Liangjing Yang |
# TEAM MEMBERS:

| Name | NetID |
| --- | --- |
| Chen Shixin | shixinc2 |
| Lin Ziyuan | ziyuanl3 |
| Liu Pengzhao | pl17 |
| Weng Tianle | tianlew3 |

# Title: An immersive human-driven robot detecting foreign matter in tubes

# Problem:

As technology advances in the 21st century, systems such as rockets, chemical transport lines, and underground infrastructure increasingly contain small spaces that humans cannot reach, such as thin tubes. Foreign matter sometimes ends up inside these tubes, and we need to locate it and possibly remove it. Because such spaces are hard to reach and observe, humans cannot enter them directly. Current solutions include fully autonomous robots and robots controlled through a remote handset. However, because the environment inside a tube can be very complex, these solutions are often infeasible or not flexible enough.

# Solution Overview:

We will design a human-driven robot operated in an immersive context, using a self-designed electric car as the platform. The driver changes the speed with voice commands and changes the direction by moving their hands as if holding a real steering wheel. The car's view and position are recorded and displayed on a screen (or on glasses) in front of the driver, even though the actual car may be far away. In this way, the driver can drive the car immersively and make precise, subtle maneuvers when the "road" conditions are complex. The robot detects foreign matter by treating detection as a recognition or segmentation problem and sends back information such as the position of the foreign matter, so that humans can take corresponding action.

# Solution Components

Subsystem #1

A human hand position recognition system. The input is a picture of the driver's hands captured by a camera; after processing, the output is the angle (from -90 to 90 degrees) by which the driver wants to turn the wheel. This signal is sent over a wireless link to the electronic component that steers the car. We will need a processor (a computer GPU) to run the machine learning model for this angle regression problem, along with a camera and a Bluetooth transmitter for communication between the computer and the car. A rough geometric baseline for the regression target is sketched below.
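As a simple illustration of the regression target, here is a minimal geometric baseline, assuming an upstream hand detector (not part of this sketch) already returns the (x, y) image centers of the driver's left and right hands; the learned model would replace this heuristic:

```python
import math

def steering_angle(left_hand, right_hand):
    """Estimate the steering angle in degrees from two hand centers.

    left_hand, right_hand: (x, y) pixel coordinates from a hypothetical
    hand detector. Positive output means clockwise wheel rotation (turn
    right); since image y grows downward, the right hand dropping below
    the left hand gives a positive angle.
    """
    dx = right_hand[0] - left_hand[0]
    dy = right_hand[1] - left_hand[1]  # y grows downward in images
    angle = math.degrees(math.atan2(dy, dx))
    return max(-90.0, min(90.0, angle))  # clamp to the wheel's range

# Example: right hand lower than the left, as when turning clockwise
print(steering_angle((200, 260), (440, 400)))  # ~30.3 degrees (turn right)
```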

Subsystem #2

An audio detection module. The input is the driver's voice; the output is the target speed of the car. A sketch of the command-to-speed mapping follows.
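The recognition model itself is a classification problem; downstream of it, the mapping to speed can be very simple. Here is a minimal sketch, where the command vocabulary and speed setpoints are placeholders to be tuned, not fixed design values:

```python
# Hypothetical command vocabulary and speed setpoints (cm/s). The audio
# module would first classify the driver's voice into one of these labels.
SPEED_COMMANDS = {
    "stop": 0,
    "slow": 10,
    "medium": 25,
    "fast": 40,
}

def speed_from_command(command: str, current_speed: int) -> int:
    """Map a recognized voice command to a target speed, keeping the
    current speed when the command is not in the vocabulary."""
    return SPEED_COMMANDS.get(command.lower().strip(), current_speed)

print(speed_from_command("Fast", 10))     # 40
print(speed_from_command("(noise)", 25))  # 25: unrecognized, unchanged
```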

Subsystem #3

Robot body, which performs the main detection work: a car with an electronic device (such as an Arduino) that controls the steering angle and other operations, a Bluetooth receiver that accepts signals from the main computer, and speed-changing hardware (a voltage-changing circuit) on the car. The computer side of this link is sketched below.
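To make the interface concrete, here is a minimal sketch of the computer side of the Bluetooth link using pyserial, assuming the car's Bluetooth module pairs as a serial port; the port name, baud rate, and frame format are all assumptions rather than fixed design choices:

```python
import serial  # pyserial: a paired Bluetooth module appears as a serial port

# Assumed pairing target, e.g. /dev/rfcomm0 on Linux or COM5 on Windows.
ser = serial.Serial("/dev/rfcomm0", baudrate=9600, timeout=1)

def send_frame(angle_deg: float, speed: int) -> None:
    """Send one newline-terminated steering/speed frame, 'A<angle> S<speed>',
    in an assumed text protocol that the Arduino sketch would parse."""
    ser.write(f"A{angle_deg:.1f} S{speed}\n".encode("ascii"))

send_frame(30.0, 25)  # steer 30 degrees right at 25 cm/s
```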
Subsystem #4

Object recognition/segmentation system. This system recognizes and locates foreign objects inside the tube. We can either implement the neural network on an FPGA board or process the images sent back to the computer. A classical baseline for comparison is sketched below.
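Before committing to a neural network, a classical OpenCV baseline can serve as a sanity check and a point of comparison, assuming the empty tube interior is fairly uniform in brightness so that a foreign object stands out as a blob; the area threshold is a placeholder to be tuned on real footage:

```python
import cv2

def find_foreign_object(frame, min_area=200):
    """Return the bounding box (x, y, w, h) of the largest candidate
    blob, or None. Assumes a roughly uniform tube wall so that Otsu's
    threshold separates a foreign object from the background."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)  # suppress sensor noise
    _, mask = cv2.threshold(blur, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    blobs = [c for c in contours if cv2.contourArea(c) >= min_area]
    if not blobs:
        return None
    return cv2.boundingRect(max(blobs, key=cv2.contourArea))
```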

# Criterion for Success:

(1) Successfully calculate the steering angle from the hand images.
(2) Successfully respond to the driver's voice.
(3) The electrical angle signal can be translated into the car wheels' steering angle.
(4) The car can change speed in response to different voice commands.
(5) The car can detect a foreign object and notify the computer.
(6) Additional functions may be added, such as sweeping out the foreign matter.

# Distribution of Work:

Chen Shixin and Lin Ziyuan: all machine learning algorithms and implementation (audio and image), data processing, and signal transmission between the car and the computer. This part is complex even for ECE students, since it involves both regression and classification problems in the vision and audio domains, as well as understanding and managing the wireless communication of signals.

Liu Pengzhao and Weng Tianle: design and implementation of the entire car, the circuit that controls its movement, Arduino programming, the camera-car mounting system, and any additional functions on the car. This part is complex because, as ME students, we lack background in circuit design and Arduino programming; we must coordinate the incoming digital signals with the car's motion, keep the camera system stable, and learn to work with the sensors.
