| # | Title | Team Members | TA | Documents | Sponsor |
| --- | --- | --- | --- | --- | --- |
| 41 | Dodgeball Bots | Qingyan Li | Timothy Lee | design_document1.pdf, proposal2.pdf, proposal3.pdf | |
# Dodgeball Bots

# Members
- Loigen Sodian [sodian.21]
- Isaac Koo Hern En [ikoo2]
- Jaden Peterson Wen [peterson.21]
- Qingyan Li [qingyan5]
- Putu Evaita Jnani [putu.20]

# Problem
Typically, practicing dodgeball requires a second party who acts as the thrower. Unfortunately, the thrower may have imprecise aim or may be entirely unavailable; hence the need for a robot that fully replaces the human thrower. A dodgeball robot that tracks humans and fires balls to hit them is quite complex to build and involves multidisciplinary knowledge (electrical, computing, and mechanical work). Dodgeball bots can also pose safety concerns to humans if a ball is misfired or if the forces involved are too large for the apparatus to handle. Sophisticated decision making, object detection and recognition algorithms, and physical modelling need to be studied in detail to make sure that the robot is safe and smart.

# Solution Overview
Here we present the Dodgeball Bot, a robot that replaces the human thrower in dodgeball practice. Its primary function is to fire small projectiles at specific targets (e.g., selected by color or by specific symbols) using computer vision based on the YOLO v8 machine-learning model. The machine moves independently, and the gun reload can be manually overridden through an IR remote control. All necessary components (including the machine-vision devices) are mounted on the device, and the machine is powered by a 20 V/5 A power bank, so no external connection is necessary.
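As a minimal sketch of the detection step, assuming the `ultralytics` Python package and a pretrained `yolov8n.pt` checkpoint (the camera index and the model weights are placeholders for our own trained model):

```python
# Minimal YOLOv8 detection loop (sketch; camera index and weights file are assumptions)
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # pretrained nano model; swap in a custom-trained checkpoint
cap = cv2.VideoCapture(0)   # camera index 0 is a placeholder

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)
    for box in results[0].boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        print(f"target center ({(x1 + x2) / 2:.0f}, {(y1 + y2) / 2:.0f}), conf={float(box.conf):.2f}")

cap.release()
```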

# Components
## Firing
- Homemade "tennis-ball" launcher as the firing mechanism. Two horizontally aligned motors rotate in opposing directions to launch a tennis-sized ball. The system is open loop. The motors have high RPM and reasonable torque, so the ball is launched quickly and its trajectory approximates a straight line, which reduces the need to determine depth/distance.
- Vertical motor for vertical movement of the gun (shaft and the two rotating motors mentioned previously).
- Reload motors control the flow of balls from the magazine to the chamber, allowing only one ball through at a time. The IR remote may be used to disable the reload motors (through the Arduino) to keep the robot from continuously firing at targets.
- When a target is about to be shot, the Arduino control system will light an LED and wait for 2 seconds before firing; this 2-second window gives the other party time to get ready (see the sketch after this list).
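A sketch of this warn-then-fire sequence as it might run on the vision computer, assuming the Arduino listens for single-byte commands over a serial link; the port name and the `L`/`F`/`S` command protocol are illustrative, not part of the final design:

```python
# Warn-then-fire sequence (sketch; serial port and command bytes are assumptions)
import time
import serial  # pyserial

ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # port name is a placeholder

def fire_at_target():
    ser.write(b"L")   # hypothetical command: turn the warning LED on
    time.sleep(2)     # 2-second window for the other party to get ready
    ser.write(b"F")   # hypothetical command: start firing

def emergency_stop():
    ser.write(b"S")   # hypothetical command: disable the reload motors
```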
## Moving
- Two-axis control for gun elevation and rotation. Elevation motors raise and lower the gun, while rotation motors rotate the turret structure that houses the entire firing system.
- Arduino (with IR sensor) to activate or deactivate the reload mechanism and prevent erroneous operation.
- Using the AI classification, the moving system drives the turret rotation and gun elevation to center the target in the camera view (the camera is mounted on the gun barrel); a sketch of this centering logic follows this list.
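A minimal sketch of the centering logic using proportional control; the gains, dead-band, and motor-command interface are assumptions to be tuned on the real turret:

```python
# Proportional centering of a detected target (sketch; gains and interface are assumptions)
def centering_error(box_xyxy, frame_w, frame_h):
    """Pixel offset of the bounding-box center from the frame center."""
    x1, y1, x2, y2 = box_xyxy
    dx = (x1 + x2) / 2 - frame_w / 2   # positive: target is right of center
    dy = (y1 + y2) / 2 - frame_h / 2   # positive: target is below center
    return dx, dy

def step_commands(dx, dy, k_pan=0.05, k_tilt=0.05, deadband=10):
    """Convert pixel error to pan/tilt speed commands; stop inside the dead-band."""
    pan = 0.0 if abs(dx) < deadband else -k_pan * dx
    tilt = 0.0 if abs(dy) < deadband else -k_tilt * dy
    return pan, tilt  # sent to the turret rotation / gun elevation motors
```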
## Vision
- Thermal camera to capture the surroundings and feed frames to the AI for classification. We classify a region as a target when its heat signature deviates significantly from neighboring pixels (see the sketch after this list).
- Jetson Nano for AI computations.
- Python to create and run the AI model (YOLO v8).
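A minimal sketch of the "deviates from neighboring pixels" rule using OpenCV: subtract a local-mean (blurred) image from the thermal frame and threshold the residual. The kernel size and threshold are assumptions to be tuned on the actual camera:

```python
# Flag pixels whose heat signature deviates from their neighborhood (sketch)
import cv2

def hot_spot_mask(thermal, kernel=31, delta=20):
    """thermal: single-channel uint8 frame; returns a binary mask of hot anomalies."""
    local_mean = cv2.blur(thermal, (kernel, kernel))  # neighborhood average
    residual = cv2.subtract(thermal, local_mean)      # how much hotter than neighbors
    _, mask = cv2.threshold(residual, delta, 255, cv2.THRESH_BINARY)
    return mask
```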
## Body
- Chassis structure to hold all the components together.
- Armor to shield the electronics from the environment, with holes to ensure airflow.
- Internal cooling fan for the electronic devices.
- Power bank to supply power to the electronics.

# Criteria of Success
## Stable operation of the design
- The robot must function autonomously without malfunction throughout its operation.
- It should reliably track human movement and execute precise ball-firing actions without errors or unexpected shutdowns.
- The system should incorporate error-handling mechanisms to prevent a fault from escalating into a safety hazard.
- For example, the robot should immediately stop firing when a software problem is detected, or when the magazine is empty, to prevent motor strain.
- If the motors, processor, or firing mechanism exceed safe operating temperatures, the system should pause operation and cool down before continuing.
- A built-in physical emergency stop button for immediate shutdown if unexpected behavior occurs.
## Positioning accuracy of the gun
- The robot must achieve ≥80% firing accuracy when targeting moving humans within a 5-meter range under standard conditions (flat terrain, good visibility).
## High targeting accuracy
- The system should use reliable sensors and a well-trained model to enhance detection accuracy and prevent false positives. Detection accuracy should be ≥80% for stationary targets and ≥70% for moving targets.
## Cost efficiency of the entire project
- The entire project, including hardware, software, and assembly, must be completed within a budget of 1000 RMB.
- The design should aim for efficiency in power consumption (since the robot is battery-powered) and in material usage, e.g., by reusing components from previous projects.

# Autonomous Behavior Supervisor

## Team members

- Xiaolu Liu (xiaolul2)

- Zhuping Liu(zhuping2)

- Shengjian Chen(sc54)

- Huili Tao(huilit2)

## Problem:

In many real-life scenarios, we need AI systems not only to detect people but also to monitor their behavior. However, today's AI systems can only detect faces; they still lack analysis of movements, so the results they produce are not comprehensive enough. For example, in many high-risk laboratories, we need to ensure not only that the person entering the laboratory is identified, but also that he or she acts in accordance with the regulations to avoid danger. Beyond this, the system can also help supervise students in online exams: we can combine a student's expressions and gaze, as well as movements, to better maintain the fairness of the test.

## Solution Overview:

Our solution to the problem above is an Autonomous Behavior Supervisor. The system mainly consists of a camera and an alarm device. Using real-time photos taken by the camera, the system performs face verification. When the person is successfully verified, the camera starts monitoring the person's behavior and their interaction with the surroundings, and the system determines whether there is a dangerous action or unreasonable behavior. As soon as the system detects something uncommon, the alarm rings. Conversely, if the person fails verification (i.e., does not have permission), the words "You do not have permission" are displayed on the computer screen.

## Solution Components:

### Identification Subsystem:

- Locate the position of a person's face

- Identify whether that face is recorded in our system

The camera captures people's facial information as image input to the system. Several Python libraries, such as OpenCV, provide useful tools for this. The identification process has three steps: first, we establish the records of facial information and store the encoded faceprints. Second, we use the camera to capture the current face image and generate the face-pattern encoding for that image. Finally, we compare the current facial encoding with the information in storage. This is done by setting a threshold: when the similarity exceeds the threshold, we regard the person as recorded; otherwise, the person is banned from the system unless he or she records facial information into it.
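A minimal sketch of this three-step verification, assuming the `face_recognition` Python package; the file names are placeholders, and the 0.6 distance threshold is that library's common default:

```python
# Face verification against stored faceprints (sketch; filenames and threshold are assumptions)
import face_recognition

# Step 1: encode and store the registered faceprint
known = face_recognition.face_encodings(
    face_recognition.load_image_file("registered_user.jpg"))[0]

# Step 2: encode the face captured by the camera
frame = face_recognition.load_image_file("current_capture.jpg")
encodings = face_recognition.face_encodings(frame)

# Step 3: compare against storage with a distance threshold
if encodings and face_recognition.face_distance([known], encodings[0])[0] < 0.6:
    print("Verified: face is recorded in the system")
else:
    print("You do not have permission")
```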

### Supervising Subsystem

- Capture people's behavior

- Recognize the interaction between humans and objects

- Identify what people are doing

This part captures and analyzes people's behavior, that is, the interaction between people and objects. For the algorithm, we initially plan to build on VSG-Net or other established HOI (human-object interaction) models, analyzing and adjusting them to suit our system. The algorithm is a multi-branch network:

- Visual Branch: extracts visual features from people, objects, and the surrounding environment.
- Spatial Attention Branch: models the spatial relationship between human-object pairs.
- Graph Convolutional Branch: treats the scene as a graph, with people and objects as nodes, and models their structural interactions.

This is computational work that requires training on a dataset before the model is applied to the real system. The accuracy may not reach 100%, but we will do our best to improve performance. A simplified illustration of the pair-feature idea follows.
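As a small illustration of the spatial-relationship idea (not the full VSG-Net), the sketch below forms candidate human-object pairs from detected bounding boxes and computes simple relative-position features for each pair; the feature choices are our own assumptions:

```python
# Spatial features for candidate human-object pairs (illustrative sketch, not VSG-Net)
from itertools import product

def box_center(b):
    x1, y1, x2, y2 = b
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def pair_features(human_boxes, object_boxes):
    """Relative offset and scale ratio for every human-object pair."""
    feats = []
    for h, o in product(human_boxes, object_boxes):
        (hx, hy), (ox, oy) = box_center(h), box_center(o)
        h_area = (h[2] - h[0]) * (h[3] - h[1])
        o_area = (o[2] - o[0]) * (o[3] - o[1])
        feats.append({"pair": (h, o),
                      "dx": ox - hx, "dy": oy - hy,         # where the object sits
                      "scale": o_area / max(h_area, 1e-6)})  # relative size
    return feats
```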

### Alarming Subsystem

- Staying silent when common behaviors are detected

- Alarming when dangerous or non-compliant behaviors are detected

This is an alarm apparatus connected to the final stage of our system, used to report dangerous actions or behaviors that are not permitted. If the supervising subsystem detects actions such as "harm people", "illegal experimental operation", or "cheating in exams", the alarming subsystem sounds a warning so that people notice. To achieve this, a "dangerous action library" containing dangerous behaviors should be prepared in advance; when the analysis of actions in the supervising subsystem matches contents of the action library, the system raises the alarm.
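A minimal sketch of the action-library matching, assuming the supervising subsystem emits recognized actions as text labels; the label set and the alarm hook are placeholders:

```python
# Match recognized actions against the "dangerous action library" (sketch)
DANGEROUS_ACTIONS = {"harm people", "illegal experimental operation", "cheating in exams"}

def check_actions(recognized_actions, alarm):
    """Call alarm() once if any recognized action is in the library."""
    hits = DANGEROUS_ACTIONS.intersection(recognized_actions)
    if hits:
        alarm(sorted(hits))

# Example: check_actions({"cheating in exams"}, lambda hits: print("ALARM:", hits))
```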

## Criteria of Success:

- Must have a human face recognition system that determines whether the person is in the backend database.

- The system must detect the human and the surrounding objects on screen and analyze the possible interactions between them.

- Based on the interaction, the system must detect potentially dangerous actions and give out warnings.

## DIVISION OF LABOR AND RESPONSIBILITIES

All members should contribute to the design and progress of the project, and we meet regularly to discuss and push the design forward. Each member is responsible for a certain part, but that is not his or her only work.

- Shengjian Chen: Responsible for the facial recognition part of the project.

- Huili Tao: HOI algorithm modification and applying it to our project.

- Zhuping Liu: Hardware design and connectivity of the project.

- Xiaolu Liu: Detail optimization and testing of the functions.
