# Project 11: Early Response Drone for First Responders

**Team Members:** Aditya Patel, Kevin Gerard, Lohit Muralidharan

**TA:** Manvi Jha

**Documents:** design_document1.pdf, other1.pdf, proposal2.pdf, proposal1.pdf
**Problem:**
Every week, UIUC students receive emails from the Illini-Alert system regarding crimes that have been committed, fires that are occurring, and other dangerous situations to be aware of. With the latest reported median response time of first responders to a 911 call in Champaign County exceeding 6 minutes ([source](https://dph.illinois.gov/topics-services/emergency-preparedness-response/ems/prehospital-data-program/emsresponsetimes.html)), the situation to which emergency personnel are responding can change drastically from the initial details that were provided. To manage the event effectively, first responders need as much accurate information as possible so that the situation can be handled in a timely manner and the safety of everyone involved is prioritized.

**Solution Overview:**
Our solution is to construct a cost-effective drone that first responders can deploy and immediately fly to the location of an emergency event. While en route, they can use the drone’s onboard camera and computer vision capabilities to assess the situation at hand. There are multiple scenarios in which this drone could be particularly beneficial, such as:

- Police: monitor crime scenes and track suspicious individuals; provide aerial surveillance for events with a high density of people (such as sports games, concerts, or protests) to ensure the safety of everyone

- Fire: monitor the spread of fire at the location; obtain information on what kind of fire it is (electrical, chemical) and any potential hazards

- Medical: assess the type and number of injuries suffered, and locations of patients

Our drone system comprises four elements: cloud storage, a backend, a frontend, and the drone itself. The high-level block diagram linked below illustrates which elements communicate with one another, with the arrows indicating the direction of data transfer.

[[Link](https://drive.google.com/file/d/12qx_syQQH0pHcrh7uVouneDARXH_6Dbi/view?usp=sharing)]

In order to create a baseline early response drone, we need to be able to control the drone as well as receive information from it, such as captured frames, altitude, roll, pitch, and yaw. The captured frames and sensor data will be displayed visually in the frontend. However, this data bundle will first be stashed in cloud storage, and the backend will retrieve it when it is ready to receive the data. We include a backend because, if time permits, we want to perform machine learning processing using object detection and tracking models. The other data transmission path sends command signals from the frontend to the drone itself: whenever there is a keyboard click, the key press is shown visually and uploaded to the cloud storage for the drone to act on.
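
As a concrete illustration of the data flowing through the system, the snippet below sketches hypothetical telemetry and command message structures in C++. The struct names, field names, units, and framing are our assumptions for illustration, not a finalized protocol.

```cpp
// Hypothetical message layouts for the drone <-> cloud <-> backend <-> frontend
// data path. Names, units, and framing are placeholders, not a final protocol.
#include <cstdint>
#include <string>

// Telemetry bundle uploaded by the drone alongside each captured frame.
struct TelemetryPacket {
    uint32_t frame_id;        // identifier of the captured frame this data accompanies
    float roll_deg;           // from the MPU6050 IMU
    float pitch_deg;          // from the MPU6050 IMU
    float yaw_deg;            // from the MPU6050 IMU
    float altitude_m;         // derived from BMP280 barometric pressure
    std::string frame_jpeg;   // JPEG-encoded camera frame
};

// Command message generated by a keypress in the frontend and relayed to the drone.
struct CommandPacket {
    char key;                 // e.g. 'w'/'a'/'s'/'d' for pitch and roll inputs
    uint32_t timestamp_ms;    // when the key was pressed, for staleness checks
};
```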

**Solution Components:**
1. **Drone Hardware/Software:**
   - Uses an ESP32 with a SIM7600 module for cellular data transmission.
   - Retrieves roll, pitch, and yaw from an MPU6050 IMU and altitude (via barometric pressure) from a BMP280 (see the firmware sketch after this list).
   - Uses servos to control the flaps, rudder, and ailerons, and a brushless motor with an ESC for the single rotor.

2. **Drone Structure:**
   - We will use foam board rather than LW-PLA or standard PLA because it is easier to repair if damaged.
   - A larger wingspan will be used to make the drone easier to control.

3. **Cloud Storage:**
   - The cloud storage acts as a medium between the drone itself and the C++ backend (see the upload sketch after this list).
   - EXTRA: We are exploring eliminating cloud storage entirely; according to YouTube tutorials, Arduino forums, and ChatGPT-4o, it appears possible to stream data directly over TCP or a higher-level protocol such as HTTP.

4. **C++ Backend:**
   - Uses HTTP requests to retrieve drone data from the cloud storage and forwards it to the TypeScript frontend over WebSockets (see the backend sketch after this list).
   - Uses WebSockets to receive command signals.
   - EXTRA: Run the frames through a DeepSORT model for tracking humans, using either a pre-trained YOLO model or one trained on a dataset collected with the drone itself.

5. **TypeScript Frontend:**
   - Uses WebSockets to send command signals to and retrieve drone data from the C++ backend.
   - Visually displays a command-and-control interface for the user.
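
For the drone hardware component, the Arduino-style firmware sketch below shows one way the ESP32 could read roll, pitch, and altitude. It is a minimal sketch assuming the Adafruit MPU6050 and BMP280 libraries; yaw is omitted because it would require gyro integration or a magnetometer, and the I2C address and sea-level pressure constant are placeholders.

```cpp
// Minimal ESP32 sensor-read sketch (assumes Adafruit_MPU6050 and Adafruit_BMP280
// libraries). I2C address and the sea-level pressure constant are placeholders
// that would need to match the actual build.
#include <Wire.h>
#include <Adafruit_MPU6050.h>
#include <Adafruit_BMP280.h>

Adafruit_MPU6050 mpu;
Adafruit_BMP280 bmp;

void setup() {
  Serial.begin(115200);
  Wire.begin();
  if (!mpu.begin()) {
    Serial.println("MPU6050 not found");
  }
  if (!bmp.begin(0x76)) {   // 0x76 or 0x77 depending on the breakout board
    Serial.println("BMP280 not found");
  }
}

void loop() {
  sensors_event_t accel, gyro, temp;
  mpu.getEvent(&accel, &gyro, &temp);

  // Roll and pitch estimated from the accelerometer alone; yaw would need
  // gyro integration or a magnetometer, so it is not computed here.
  float roll  = atan2(accel.acceleration.y, accel.acceleration.z) * 180.0 / PI;
  float pitch = atan2(-accel.acceleration.x,
                      sqrt(accel.acceleration.y * accel.acceleration.y +
                           accel.acceleration.z * accel.acceleration.z)) * 180.0 / PI;

  // Altitude from barometric pressure, referenced to standard sea-level pressure.
  float altitude = bmp.readAltitude(1013.25);

  Serial.printf("roll=%.1f pitch=%.1f alt=%.1f\n", roll, pitch, altitude);
  delay(100);
}
```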
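
For the cloud storage link, the upload sketch below illustrates how the ESP32 could push a telemetry payload over the SIM7600 modem with an HTTP POST. It assumes the TinyGSM and ArduinoHttpClient libraries; the APN, host, path, and JSON body are placeholders rather than our finalized endpoint.

```cpp
// Hypothetical cellular upload path: ESP32 -> SIM7600 -> cloud storage endpoint.
// Assumes the TinyGSM and ArduinoHttpClient libraries; APN, server, and path
// are placeholders for whatever storage service is ultimately chosen.
#define TINY_GSM_MODEM_SIM7600
#include <TinyGsmClient.h>
#include <ArduinoHttpClient.h>

#define SerialAT Serial1          // UART wired to the SIM7600 module (pins are build-specific)

TinyGsm modem(SerialAT);
TinyGsmClient gsmClient(modem);
HttpClient http(gsmClient, "example-storage-endpoint.com", 80);  // placeholder host

void setup() {
  Serial.begin(115200);
  SerialAT.begin(115200);
  modem.restart();
  modem.gprsConnect("apn-placeholder", "", "");   // carrier-specific APN
}

void loop() {
  // Placeholder JSON body; in practice this would carry the frame and sensor data.
  String body = "{\"roll\":1.2,\"pitch\":-0.4,\"altitude\":35.0}";
  http.post("/telemetry", "application/json", body);
  Serial.println(http.responseStatusCode());
  http.stop();
  delay(1000);
}
```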
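
For the C++ backend, the skeleton below shows one way to poll the cloud storage over HTTP using libcurl before handing the result to whatever WebSocket library is eventually chosen for the frontend link. The endpoint URL and the `forwardToFrontend` function are placeholders for illustration only.

```cpp
// Skeleton of the backend polling loop: fetch the latest telemetry bundle from
// cloud storage with libcurl, then hand it to the (not yet chosen) WebSocket
// layer that serves the TypeScript frontend. URL and forwarding are placeholders.
#include <curl/curl.h>
#include <string>
#include <iostream>
#include <thread>
#include <chrono>

// libcurl write callback: append received bytes into a std::string buffer.
static size_t writeToString(void* data, size_t size, size_t nmemb, void* userp) {
    auto* out = static_cast<std::string*>(userp);
    out->append(static_cast<char*>(data), size * nmemb);
    return size * nmemb;
}

// Placeholder for pushing data to connected frontend clients over WebSockets.
void forwardToFrontend(const std::string& payload) {
    std::cout << "would forward " << payload.size() << " bytes to frontend\n";
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();

    while (curl) {
        std::string response;
        // Placeholder endpoint for the most recent frame + sensor bundle.
        curl_easy_setopt(curl, CURLOPT_URL,
                         "http://example-storage-endpoint.com/telemetry/latest");
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writeToString);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);

        if (curl_easy_perform(curl) == CURLE_OK) {
            forwardToFrontend(response);
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(200));
    }

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
```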

**Criterion for Success:**
- **Stability and Flight Controls:** Smooth operation of the drone in flight at varying altitudes, and non-jerky response to user-controlled inputs
- **Sophisticated UI:** Easy-to-use, well-proportioned web-based user interface for viewing camera frames and sensor data and for controlling the drone’s movements
- **Frame Transmission:** Ability to transmit frames and command signals to and from the cloud storage over a cellular connection, with the C++ backend connecting to that storage
- **Computer Vision:** Time permitting, ability to detect and track objects (people) from a high-up, aerial view using a self-trained ML model

Additionally, for testing and demonstration purposes, we plan to review the university guidelines and restrictions on drone flight. We will then find a suitable location, such as an open field or quadrangle, for launching and landing our drone. For permission, we will need to register the drone with the FAA and the university, and each of our group members will need to take a short test to obtain a drone license.
