# Title
Distributed Species Tracker

# Team Members:
- Ryan Day (rmday2)
- Jonathan Yuen (yuen9)
- Max Shepherd (maxes2)

# Problem
Invasive species are organisms that establish themselves in an environment to which they are not native. They are capable of inflicting great harm on their new ecosystems, killing native species and, in some cases, causing significant economic damage. Removing invasive species is an incredibly intensive and difficult task. Common methods include chemical control, introducing new predators, or even uprooting parts of an ecosystem in a desperate attempt to stop the spread. The burden of controlling invasive species often falls on civilians, who are asked to watch for the invaders and report their locations to help prevent further spreading.

Endangered species are creatures on the brink of extinction. Many conservation efforts aim to restore their populations, including gathering the animals and breeding them in a controlled environment, as well as monitoring them via tracking chips or satellites.

# Solution
We propose a network of nodes that, once deployed in the wild, can capture images and process them to determine whether a species of interest has been in a certain area. The nodes will communicate with one another to compile a report of all the places and times an animal was seen. This improves on satellite imaging, which is hindered by tree cover and brush, and on the manual scouring of wilderness that is often used in the hunt for invasive and endangered species. If deployed for long enough, the network can offer valuable data and present a comprehensive view of a species' behavior.

This semester, we aim to provide a proof of concept for this idea by building a small set of these nodes and demonstrating their ability to recognize an animal and log its whereabouts in a way that is redundant and node-failure-tolerant.

To do this, we will fit each node with a camera that will take images to be processed. If the species being monitored is detected, its location will be sent over the network of nodes via a routing subsystem. A power subsystem will supply and regulate power to the modules in each node, and a sensor subsystem will provide GPS data and infrared detection. The PCB is therefore central to this project: it hosts the MCU, which is responsible for the routing and communication protocols as well as all of the logic for the sensor and power modules, which will also be fitted on the PCB.

All in all, this is a problem we are genuinely excited to turn into a project this semester, and we are determined to see it through.


# Solution Components (Revised portion)
## Subsystem 1 : Routing
This subsystem will establish the network over which the nodes communicate. The nodes will replicate local GPS data amongst themselves. We currently plan to use LoRa, as it best fits our use case: a network that requires low-power, long-range communication in a real-world scenario.

Components:
- LoRa transceiver (RFM95W)
- Antenna
- Microcontroller
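
As a rough illustration of how the replication could work (a minimal sketch, not a final protocol: the packet layout, node IDs, flood-style forwarding, and the use of the RadioHead RH_RF95 driver are all assumptions at this stage), each sighting record could be broadcast, deduplicated by a (node, sequence) pair, and rebroadcast until every node holds a copy:

```cpp
// Sketch of flood-based replication over LoRa using the RadioHead RH_RF95
// driver. The packet layout, pin choices, and dedup scheme are placeholders.
#include <RH_RF95.h>
#include <string.h>

RH_RF95 rf95(/*slaveSelect=*/10, /*interrupt=*/2);  // pins depend on the PCB

struct __attribute__((packed)) Sighting {
  uint8_t  nodeId;     // node that made the detection
  uint16_t seq;        // per-node sequence number, used for deduplication
  float    lat, lon;   // GPS fix at detection time
  uint32_t timestamp;  // seconds since deployment
};

bool alreadySeen(const Sighting &s);  // e.g. ring buffer of (nodeId, seq)
void logLocally(const Sighting &s);   // append to this node's report

void broadcast(const Sighting &s) {
  rf95.send(reinterpret_cast<const uint8_t *>(&s), sizeof(s));
  rf95.waitPacketSent();
}

void onRadioPoll() {
  if (!rf95.available()) return;
  uint8_t buf[RH_RF95_MAX_MESSAGE_LEN];
  uint8_t len = sizeof(buf);
  if (rf95.recv(buf, &len) && len == sizeof(Sighting)) {
    Sighting s;
    memcpy(&s, buf, sizeof(s));
    if (!alreadySeen(s)) {  // dropping duplicates makes the flood terminate
      logLocally(s);
      broadcast(s);         // forward so the record reaches every node
    }
  }
}
```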

## Subsystem 2 : Camera and Classification
This subsystem will be responsible for gathering and classifying images, and it will communicate with the MCU. We now plan to use an ESP32 module to handle our image processing instead of a Raspberry Pi, both to make our design more compact and to save a significant amount of money. When choosing an MCU, we are prioritizing RAM, a suitable camera interface, and processing power. The ESP32-WROOM-32E is our current best guess and has reportedly been used for each of our use cases. As soon as this RFA is approved, we plan to purchase an MCU and a dev board to start testing functionality.

Components:
- Camera
- Microcontroller (interface)
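
To gauge feasibility before the hardware arrives, here is a minimal sketch of the capture path using Espressif's esp32-camera driver (the pin mapping is omitted and the classifier is a stub; the exact module and model are still to be decided):

```cpp
// Minimal capture-and-classify loop using the esp32-camera driver.
// Pin assignments and the classifier are placeholders for the real hardware.
#include "esp_camera.h"

bool speciesDetected(const uint8_t *jpeg, size_t len);  // classifier stub
void reportSighting();                                  // hands off to routing

bool initCamera() {
  camera_config_t config = {};          // zero-init, then fill what we need
  config.pixel_format = PIXFORMAT_JPEG;
  config.frame_size   = FRAMESIZE_QVGA; // small frames keep RAM usage low
  config.jpeg_quality = 12;
  config.fb_count     = 1;
  // config.pin_* fields depend on how the camera module is wired
  return esp_camera_init(&config) == ESP_OK;
}

void captureAndClassify() {
  camera_fb_t *fb = esp_camera_fb_get();  // grab one frame
  if (!fb) return;
  if (speciesDetected(fb->buf, fb->len)) {
    reportSighting();
  }
  esp_camera_fb_return(fb);  // release the buffer back to the driver
}
```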

## Subsystem 3 : Power
This subsystem will handle the supply and regulation of power to the modules in each node.

Components:
- Li-ion battery
- Battery controller
- Boost/buck converters
- USB charger/port

## Subsystem 4 : Sensor
This subsystem will gather GPS data and send it to the MCU. It will also measure infrared radiation, signaling that a creature has passed by the module. This will trigger the camera to take a picture.

Components:
- GPS chip
- Infrared sensor
- Temperature sensor
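
As a sketch of how the pieces might tie together (pin numbers, wiring, and the TinyGPSPlus library are assumptions for illustration), the infrared sensor gates the camera while the latest GPS fix is kept current in the background:

```cpp
// PIR-triggered capture with the latest GPS fix attached (Arduino-style).
// Pin numbers and the TinyGPSPlus library are assumptions for illustration.
#include <TinyGPS++.h>

const int PIR_PIN = 4;        // placeholder pin for the infrared sensor
TinyGPSPlus gps;
HardwareSerial gpsSerial(1);  // ESP32 UART1 wired to the GPS chip

void captureAndClassify();               // from the camera subsystem
void recordFix(double lat, double lon);  // attach fix to the sighting record

void setup() {
  pinMode(PIR_PIN, INPUT);
  gpsSerial.begin(9600, SERIAL_8N1, /*rx=*/16, /*tx=*/17);
}

void loop() {
  while (gpsSerial.available())
    gps.encode(gpsSerial.read());  // keep the most recent fix up to date

  if (digitalRead(PIR_PIN) == HIGH && gps.location.isValid()) {
    recordFix(gps.location.lat(), gps.location.lng());
    captureAndClassify();          // motion detected: wake the camera
  }
}
```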

# Criterion For Success
Data redundancy - We should be able to demonstrate that data gathered on any arbitrary node is reflected on the rest of the nodes in the network.

Detection accuracy - We will demonstrate that the detections made by our camera subsystem are accurately logged (i.e., if a target appears in front of a node, the sighting is logged at the correct location).

Battery life - We will determine a realistic and practical minimum battery life based on the hardware components we end up using.

# Oxygen Delivery Robot

Team Members:

- Rutvik Sayankar (rutviks2)

- Aidan Dunican (dunican2)

- Nazar Kalyniouk (nazark2)

# Problem

Children's interstitial and diffuse lung disease (ChILD) is a collection of diseases or disorders. These diseases cause a thickening of the interstitium (the tissue that extends throughout the lungs) due to scarring, inflammation, or fluid buildup. This eventually affects a patient's ability to breathe and to deliver enough oxygen to the blood.

Numerous children live with these conditions and require supplemental oxygen for their daily activities. This hampers the mobility and freedom of young children, diminishing their growth and confidence. Moreover, parents face an increased burden: beyond caring for their child, they must directly manage the oxygen tank as their child moves around.

# Solution

Given the absence of relevant solutions on the current market, our project aims to ease the challenges faced by parents and give young children the freedom to explore their surroundings. As a proof of concept for an affordable solution, we propose a three-wheeled omnidirectional mobile robot capable of supporting filled oxygen tanks in the size range of M-2 to M-9, weighing 1 - 6 kg (2.2 - 13.2 lbs) when full. Due to time constraints in the class and the objective of demonstrating the feasibility of a low-cost device, we plan to construct a robot at roughly 50% of the scale of the proposed solution. Consequently, our robot will handle simulated tanks with weights ranging from 0.5 - 3 kg (1.1 - 6.6 lbs).

The robot will have a three-wheeled omni-wheel drivetrain and will incorporate two localization subsystems to ensure redundancy and enhance child safety. The first subsystem covers the drivetrain and chassis of the robot; the second uses ultra-wideband (UWB) transceivers to triangulate the child's location relative to the robot in indoor environments; and the final subsystem uses a camera connected to a Raspberry Pi, leveraging OpenCV to improve directional accuracy in tracking the child.

As part of the design, we intend to create a PCB in the form of a Raspberry Pi hat, facilitating convenient access to information generated by our computer vision system. The PCB will incorporate essential components for motor control, with an STM microcontroller serving as the project's central processing unit. This microcontroller will manage the drivetrain, analyze UWB localization data, and execute corresponding actions based on the information obtained.

# Solution Components

## Subsystem 1: Drivetrain and Chassis

This subsystem encompasses the drivetrain for the 3-omni-wheel robot, featuring 3 H-bridges (L298N - each IC contains two H-bridges, so we plan to include enough hardware to switch to a 4-omni-wheel drivetrain if need be) and 3 AndyMark 245 RPM 12V gearmotors equipped with 2-channel encoders. The microcontroller will control the H-bridges. The 3-omni-wheel drive system facilitates zero-radius turning, simplifying the robot's design and reducing costs by minimizing the number of wheels. An omni wheel is characterized by outer rollers that spin freely about axes in the plane of the wheel, enabling sideways sliding while the wheel propels the robot forward or backward without slip. Alongside the drivetrain, the chassis will incorporate 3 HC-SR04 ultrasonic sensors (or three bumper-style limit switches, like a Roomba), providing a redundant system for detecting obstacles in the robot's path.
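
For reference, converting a desired body velocity into individual wheel speeds for a three-omni-wheel base is a short piece of inverse kinematics; a sketch follows, with the mounting angles and base radius as placeholders for the final chassis:

```cpp
// Inverse kinematics for a 3-omni-wheel drive: body velocity -> wheel speeds.
// Mounting angles and base radius are placeholders for the real chassis.
#include <cmath>

const double WHEEL_ANGLES[3] = {M_PI / 2.0, 7.0 * M_PI / 6.0, 11.0 * M_PI / 6.0};
const double BASE_RADIUS = 0.10;  // meters, robot center to wheel center

// vx, vy in m/s (robot frame), omega in rad/s; out[] is surface speed per wheel.
void bodyToWheelSpeeds(double vx, double vy, double omega, double out[3]) {
  for (int i = 0; i < 3; ++i) {
    // Each omni wheel drives only along its own tangent direction; the
    // free-spinning rollers let it slide along the other axis.
    out[i] = -std::sin(WHEEL_ANGLES[i]) * vx
           +  std::cos(WHEEL_ANGLES[i]) * vy
           +  BASE_RADIUS * omega;
  }
}
```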

## Subsystem 2: UWB Localization

This subsystem suggests implementing a module based on the DW1000 Ultra-Wideband (UWB) transceiver IC, similar to the technology found in Apple AirTags. We opt for UWB over Bluetooth due to its significantly superior accuracy, attributed to UWB's precise distance measurement using time-of-flight (ToF) rather than mere signal strength as in Bluetooth.

This project will require three transceiver ICs, with two acting as "anchors" fixed on the robot. The distance to the third transceiver (referred to as the "tag") will always be calculated relative to the anchors. The transceivers we are currently considering report range only when the devices are at least 18 inches apart at full transmit power, or at least 10 inches apart at minimum power. For the "tag," we plan to create a compact PCB containing the transceiver, a small coin-cell battery, and the other components essential for proper transceiver operation. This device can be attached to the child's shirt with Velcro.
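
Since two anchors and two ranges define two intersecting circles, the tag's position in the robot frame follows from basic geometry. A sketch is below; the anchor placement on the x-axis is an assumption, and with only two anchors there is a front/back mirror ambiguity that has to be resolved elsewhere (e.g. by the camera subsystem):

```cpp
// Two-anchor range intersection: estimate the tag's (x, y) in the robot frame.
// Anchors are assumed to sit at (-d/2, 0) and (+d/2, 0), d = baseline length.
// Two anchors leave a mirror ambiguity about the baseline; we return the +y
// branch and assume another subsystem (e.g. the camera) picks the right side.
#include <cmath>
#include <optional>

struct Point { double x, y; };

std::optional<Point> locateTag(double r1, double r2, double d) {
  // Subtracting the two circle equations eliminates y and gives x directly.
  double x = (r1 * r1 - r2 * r2) / (2.0 * d);
  double y2 = r1 * r1 - (x + d / 2.0) * (x + d / 2.0);
  if (y2 < 0.0) return std::nullopt;  // ranges inconsistent (measurement noise)
  return Point{x, std::sqrt(y2)};
}
```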

## Subsystem 3: Computer Vision

This subsystem involves using the OpenCV library on a Raspberry Pi equipped with a camera. By employing pre-trained models, we aim to enhance the reliability and directional accuracy of tracking a young child. The plan is to perform all camera-related processing on the Raspberry Pi and then translate the result into a directional command for the robot when necessary. Given that most common STM32 chips feature I2C buses, we plan to communicate between the Raspberry Pi and our microcontroller over this bus.
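
As a feasibility sketch of this pipeline (OpenCV's stock HOG people detector stands in for whichever pre-trained model we settle on, and the I2C address and one-byte steering encoding are placeholders), the Pi could detect the child, convert the bounding-box offset from image center into a steering command, and write it to the microcontroller over I2C:

```cpp
// Raspberry Pi side: detect a person, map the horizontal offset to a steering
// byte, and send it to the microcontroller over I2C. The detector choice,
// I2C address (0x42), and command encoding are placeholders.
#include <opencv2/opencv.hpp>
#include <fcntl.h>
#include <linux/i2c-dev.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <cstdint>
#include <vector>

int main() {
  cv::VideoCapture cam(0);
  cv::HOGDescriptor hog;
  hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());

  int i2c = open("/dev/i2c-1", O_RDWR);
  if (i2c < 0 || ioctl(i2c, I2C_SLAVE, 0x42) < 0) return 1;  // bus setup failed

  cv::Mat frame;
  while (cam.read(frame)) {
    std::vector<cv::Rect> people;
    hog.detectMultiScale(frame, people);
    if (people.empty()) continue;

    // Horizontal offset of the first detection from image center, in [-1, 1].
    double cx = people[0].x + people[0].width / 2.0;
    double norm = (cx - frame.cols / 2.0) / (frame.cols / 2.0);
    int8_t steer = static_cast<int8_t>(norm * 127);  // signed steering command

    write(i2c, &steer, 1);  // microcontroller turns toward the child
  }
  close(i2c);
  return 0;
}
```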

## Division of Work:

We already have a 3-omni-wheel robot; it is a little smaller than our 50% scale, but it allows us to begin work immediately on UWB localization and computer vision until a new iteration can be made. Simultaneously, we'll reconfigure the drivetrain to ensure compatibility with the additional systems we plan to implement and the ability to move the desired weight. To streamline the process, we'll allocate specific tasks to individual group members - one focusing on UWB, another on computer vision, and the third on the drivetrain. This division of work will allow parallel progress on the different aspects of the project.

# Criterion For Success

Omni-wheel drivetrain that can drive in a specified direction.

Close-range object detection system working (can detect objects inside the path of travel).

UWB Localization down to an accuracy of < 1m.

## Current considerations

We are currently in discussion with Greg at the machine shop about switching to a four-wheeled omni-wheel drivetrain due to the increased weight capacity and integrity of the chassis. To address the safety concerns of this particular project, we are planning to implement the following safety measures:

- Limit robot max speed to <5 MPH

- Using empty tanks/simulated weights. At NO point will we ever work with compressed oxygen. Our goal is just to prove that we can build a robot that can follow a small human.

- We plan to design the base of the robot to be bottom-heavy and wide to prevent any tipping hazard.