# Project 19: An Immersive Human-Driven Robot Detecting Foreign Matter in Tubes

Team Members: Pengzhao Liu, Shixin Chen, Tianle Weng, Ziyuan Lin
TA: Yutao Zhuang
Documents: design_document1.pdf, final_paper1.pdf, proposal3.pdf
Sponsor: Liangjing Yang
# TEAM MEMBERS:

| Name | NetID |
| --- | --- |
| Chen Shixin | shixinc2 |
| Lin Ziyuan | ziyuanl3 |
| Liu Pengzhao | pl17 |
| Weng Tianle | tianlew3 |

# Title: An immersive human-driven robot detecting foreign matter in tubes

# Problem:

As technology advances in the 21st century, systems such as rockets, chemical transport lines, and underground infrastructure increasingly contain small spaces that humans cannot reach, such as thin tubes. Foreign matter can end up inside these tubes, and we need to locate it and sometimes remove it. Because these spaces are hard to reach and observe, current solutions rely on either an autonomous robot or a robot controlled through a remote handset. However, since the environment inside a tube can be very complex, these solutions may be infeasible or not flexible enough.

# Solution Overview:

We will design a human-driven robot operated in an immersive context, using a self-designed electric car as the platform. The driver changes the speed through voice commands and changes the direction by moving their hands as if holding a real steering wheel. The car's view and position are recorded and displayed on a screen (or on glasses) in front of the driver, even though the actual car may be far away. In this way the driver can drive the car immersively and make precise, subtle maneuvers when the "road" condition is very complex. The robot detects foreign matter as a recognition or segmentation problem and sends back information such as the object's position, so that humans can take corresponding action.

# Solution Components

Subsystem #1

A human hand-position recognition system. The input is a picture of the driver's hands captured by a camera. After processing, the output is the angle (from -90 to 90 degrees) the driver wants to turn the wheel. This signal is sent over a wireless link to the electronic component that controls the steering. We will need a processor (a computer GPU) to run the machine-learning model for the angle-regression problem, plus a camera and a Bluetooth transmitter for communication between the car and the computer.
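The proposal treats this as a learned regression problem; before the model is trained, a purely geometric baseline (an illustrative assumption, not the team's final method) can map the two detected hand centroids to a wheel angle, as if the hands were gripping opposite sides of a virtual steering wheel:

```python
import math

def steering_angle(left_hand, right_hand):
    """Estimate the intended steering angle in degrees from the 2D image
    coordinates (x, y) of the driver's left and right hand centroids.

    With both hands level (a centered wheel) the angle is 0; tilting the
    virtual wheel raises one hand and lowers the other. The result is
    clamped to the [-90, 90] range used by the steering subsystem.
    """
    dx = right_hand[0] - left_hand[0]
    dy = right_hand[1] - left_hand[1]  # image y grows downward
    angle = math.degrees(math.atan2(dy, dx))
    return max(-90.0, min(90.0, angle))

# Hands level -> wheel centered
print(steering_angle((100, 200), (300, 200)))  # 0.0
# Right hand lower than left -> positive (clockwise) turn
print(steering_angle((100, 180), (300, 220)))
```

A trained regression network would replace `steering_angle`, but keeping the same degree range and sign convention lets the downstream steering hardware stay unchanged.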

Subsystem #2

An audio-detection module. The input is the driver's voice; the output is the speed of the car.
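One simple way to realize this module (a sketch under assumed design choices; the keyword vocabulary and speed values are hypothetical, and the speech-to-text step is taken as given) is to map recognized keywords in the transcript to discrete speed levels:

```python
# Hypothetical mapping from recognized voice keywords to speed fractions.
SPEED_COMMANDS = {
    "stop": 0.0,
    "slow": 0.3,
    "normal": 0.6,
    "fast": 1.0,
}

def speed_from_transcript(transcript, current_speed):
    """Return the new speed fraction (0.0-1.0) for a recognized voice
    transcript; keep the current speed if no keyword matches."""
    for word in transcript.lower().split():
        if word in SPEED_COMMANDS:
            return SPEED_COMMANDS[word]
    return current_speed

print(speed_from_transcript("go fast now", 0.3))  # 1.0
print(speed_from_transcript("turn left", 0.6))    # 0.6 (no speed keyword)
```

Falling back to the current speed on unrecognized input keeps the car's behavior predictable when the recognizer mishears a command.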

Subsystem #3

The robot body, which performs the main detection work: a car with an electronic controller (such as an Arduino) that sets the steering angle and carries out other operations, a Bluetooth receiver for the signal from the main computer, and speed-changing hardware (a voltage-regulating circuit) on the car.
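The received angle must be converted into a signal the steering hardware understands. Assuming a standard hobby servo (an assumption about the hardware, not stated in the proposal), the [-90, 90] degree range maps linearly onto the common 1000-2000 microsecond PWM pulse range:

```python
def degree_to_pulse_us(angle_deg, min_us=1000, max_us=2000):
    """Map a steering angle in [-90, 90] degrees to a hobby-servo PWM
    pulse width in microseconds. The linear mapping and the 1000-2000 us
    default range are common servo conventions, assumed here."""
    angle = max(-90.0, min(90.0, angle_deg))  # clamp out-of-range input
    span = max_us - min_us
    return min_us + (angle + 90.0) / 180.0 * span

print(degree_to_pulse_us(0))    # 1500.0 (wheel centered)
print(degree_to_pulse_us(-90))  # 1000.0 (full left)
```

The same formula would be trivial to port to the Arduino side, where the controller writes the computed pulse width to the servo pin.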

Subsystem #4

An object recognition/segmentation system. This system aims to recognize and locate the foreign object inside the tube. We can either implement the neural network on an FPGA board or process the images sent back on the computer.

# Criterion for Success:

(1) Successfully calculating the steering-angle change.
(2) Successfully responding to voice commands.
(3) The electrical angle signal can be converted into the car wheels' steering angle.
(4) The car can change speed in response to different voice commands.
(5) The car can detect the object and notify the computer.
(6) Additional functions may be added, such as sweeping out the foreign matter.

# Distribution of Work:

Chen Shixin and Lin Ziyuan: all machine-learning algorithms and implementation (audio and image), data processing, and signal transmission between the car and the computer. This is complex even for ECE students, since we must solve both regression and classification problems in the vision and audio contexts, and we also need to understand and manage the wireless communication of signals.

Liu Pengzhao and Weng Tianle: design and implementation of the entire car, the circuit controlling its movement, Arduino programming, the camera-car system design, and additional on-car functions. This is complex because, as ME students, we lack background in circuit design and Arduino programming; we need to coordinate the incoming digital signal with the car's motion, keep the camera system stable, and learn to work with sensors.
