Project 10: Distributed Species Tracker

Team Members: Jonathan Yuen, Max Shepherd, Ryan Day
TA: Hanyin Shao
Documents: design_document1.pdf, final_paper1.pdf, photo1.jpeg, photo2.jpeg, presentation1.pdf, proposal1.pdf, video
# Title
Distributed Species Tracker

# Team Members:
- Ryan Day (rmday2)
- Jonathan Yuen (yuen9)
- Max Shepherd (maxes2)

# Problem
Invasive species are organisms that find their way into an environment to which they are not native. They can inflict great harm on their new ecosystems, leading to the death of native species and, in some cases, significant economic damage. Removing invasive species is an intensive and difficult task. Common methods include chemical control, introducing new predators, or even uprooting parts of ecosystems in a desperate attempt to prevent further spread. The burden of controlling invasive species often falls on civilians, who are asked to look out for the invading species, report their locations, and help prevent any further spreading.

Endangered species are creatures on the brink of extinction. Many conservation efforts aim to restore their populations, including gathering the animals and breeding them in a controlled environment, as well as monitoring them via tracking chips or satellite.

# Solution
We propose a network of nodes that, once deployed in the wild, can capture images and process them to determine whether a species of interest has been in a given area. The nodes will communicate with one another to compile a report of all the places and times an animal was seen. This improves on satellite imaging, which is hindered by trees and undergrowth, and on the manual scouring of wilderness often used in the search for invasive and endangered species. If deployed for long enough, the network can offer valuable data and present a comprehensive view of a species' behavior.

This semester, we aim to provide a proof of concept for this idea by building a small set of these nodes and demonstrating their ability to recognize an animal and log its whereabouts in a way that is redundant and node-failure-tolerant.

To do this, we will fit each node with a camera that takes images to be processed. If the monitored species is detected, its location will be sent over the network of nodes via a routing subsystem. A power subsystem will supply and regulate power to the modules in each node, and a sensor subsystem will provide GPS data and infrared detection. The PCB is therefore central to this project: it hosts the MCU, which is responsible for the routing and communication protocols as well as all of the logic for the sensor and power modules, which will also be fitted on the PCB.
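
The per-node flow described above (infrared trigger, classify, attach GPS and time, hand off to routing) can be sketched in a few lines. This is a simulation sketch only, not firmware; `classify_image` and `read_gps` are hypothetical stand-ins for the camera/classifier and GPS modules, and the record fields are our assumption of what a sighting would need to carry.

```python
from dataclasses import dataclass
from typing import Optional, Tuple
import time

@dataclass(frozen=True)
class Sighting:
    node_id: str
    species: str
    lat: float
    lon: float
    timestamp: float

def classify_image(image) -> Optional[str]:
    """Hypothetical classifier stub: return the detected species, or None."""
    return "spotted lanternfly" if image == "lanternfly.jpg" else None

def read_gps() -> Tuple[float, float]:
    """Hypothetical GPS stub: return a fixed (lat, lon) for the sketch."""
    return (40.1106, -88.2073)

def handle_motion_event(node_id: str, image) -> Optional[Sighting]:
    """Called when the infrared sensor trips: classify, then build a record
    that the routing subsystem would replicate across the network."""
    species = classify_image(image)
    if species is None:
        return None  # no species of interest; nothing to route
    lat, lon = read_gps()
    return Sighting(node_id, species, lat, lon, time.time())
```

In the real design the `Sighting` payload would be serialized into a compact LoRa frame rather than kept as a Python object.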

All in all, this is a problem we are excited to turn into a project this semester, and we are determined to complete it.


# Solution Components (Revised portion)
## Subsystem 1 : Routing
This subsystem will establish the network over which the nodes communicate; the nodes will replicate local GPS data amongst themselves. We currently plan to use LoRa, as it best fits our use case: a network requiring low-power, long-range communication in a real-world scenario.

Components:
LoRa transceiver (RFM95W); Antenna; Microcontroller
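
The replication behavior this subsystem targets (every sighting eventually present on every node) can be illustrated with a minimal pairwise-exchange simulation. This is a sketch of the intended data property, not LoRa firmware; the `Node`/`sync` names and the set-union merge are our assumptions.

```python
class Node:
    """Minimal simulation of one tracker node's sighting log."""
    def __init__(self, node_id: str):
        self.node_id = node_id
        self.log = set()  # sighting records as hashable tuples

    def record(self, sighting: tuple):
        self.log.add(sighting)

    def sync(self, other: "Node"):
        """Pairwise exchange: after syncing, both nodes hold the union,
        so repeated contacts spread every record through the network."""
        merged = self.log | other.log
        self.log = set(merged)
        other.log = set(merged)

# Three nodes each witness one sighting, then gossip pairwise.
a, b, c = Node("a"), Node("b"), Node("c")
a.record(("a", "lanternfly", 40.11, -88.20, 1700000000))
b.record(("b", "lanternfly", 40.12, -88.21, 1700000100))
c.record(("c", "lanternfly", 40.13, -88.22, 1700000200))
a.sync(b)  # a and b now share 2 records
b.sync(c)  # b and c now share all 3
a.sync(b)  # a catches up
assert len(a.log) == len(b.log) == len(c.log) == 3
```

Using a set union makes the merge idempotent, so a node that misses a round or fails and rejoins simply catches up on its next contact.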

## Subsystem 2 : Camera and Classification
This subsystem will be responsible for gathering and classifying images, and it will communicate with the MCU. We now plan to use an ESP32 module instead of a Raspberry Pi to handle our image processing, which makes the design more compact and saves a significant amount of money. When choosing an MCU, we are prioritizing RAM, a suitable camera interface, and processing power. The ESP32-WROOM-32E is our current candidate and has reportedly been used for each of our use cases. As soon as this RFA is approved, we plan to purchase an MCU and a dev board to begin testing functionality.

Components:
Camera; Microcontroller (interface)
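
On a constrained MCU the classifier will return a label with a confidence score, and only confident detections of the target species should become sightings. The gating logic below is a sketch; the threshold value and function names are assumptions to be tuned against field data.

```python
CONFIDENCE_THRESHOLD = 0.80  # assumed cutoff; tune against field data

def should_log(label: str, confidence: float, target_species: str) -> bool:
    """Log a sighting only for a confident detection of the target species,
    so noisy low-confidence frames don't pollute the network-wide log."""
    return label == target_species and confidence >= CONFIDENCE_THRESHOLD

assert should_log("lanternfly", 0.91, "lanternfly")
assert not should_log("lanternfly", 0.40, "lanternfly")   # too uncertain
assert not should_log("squirrel", 0.99, "lanternfly")      # wrong species
```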

## Subsystem 3 : Power
This subsystem will handle the supply and regulation of power to the modules in each node.

Components:
Li-ion battery; Battery controller; Boost/buck converters; USB charger/port
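
Since battery life is one of our success criteria, a first-order estimate from a duty-cycled average current draw is useful when sizing the cell. All figures below are hypothetical placeholders (a 2000 mAh cell, ESP32-class deep-sleep and active currents, 1% active duty cycle), not measured values.

```python
def battery_life_hours(capacity_mah: float, sleep_ma: float,
                       active_ma: float, active_fraction: float) -> float:
    """Estimate runtime by dividing capacity by the duty-cycled
    average current draw."""
    avg_ma = sleep_ma * (1 - active_fraction) + active_ma * active_fraction
    return capacity_mah / avg_ma

# Hypothetical figures: 2000 mAh Li-ion cell, ~0.01 mA in deep sleep,
# ~120 mA while capturing/transmitting, active 1% of the time.
hours = battery_life_hours(2000, 0.01, 120, 0.01)
print(round(hours))  # roughly 1650 hours under these assumptions
```

The estimate makes clear that the active-duty fraction, not the sleep current, dominates runtime, which motivates the infrared-triggered capture design.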

## Subsystem 4 : Sensor
This subsystem will gather GPS data and send it to the MCU. It will also measure infrared radiation, signaling that a creature has passed by the module. This will trigger the camera to take a picture.

Components:
GPS chip; Infrared Sensor; Temperature Sensor
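
Most GPS chips emit NMEA 0183 sentences, whose latitude/longitude fields are in ddmm.mmmm form rather than decimal degrees, so the MCU will need a small conversion step before logging. The parser below is a sketch assuming GGA sentences and skipping checksum validation; the example sentence is the standard one from NMEA documentation.

```python
from typing import Tuple

def parse_gga(sentence: str) -> Tuple[float, float]:
    """Convert a NMEA GGA sentence's ddmm.mmmm fields to decimal degrees.
    Checksum validation is omitted in this sketch."""
    fields = sentence.split(",")

    def to_deg(val: str, hemi: str, deg_digits: int) -> float:
        degrees = float(val[:deg_digits])
        minutes = float(val[deg_digits:])
        dec = degrees + minutes / 60.0
        return -dec if hemi in ("S", "W") else dec

    lat = to_deg(fields[2], fields[3], 2)  # latitude: ddmm.mmmm
    lon = to_deg(fields[4], fields[5], 3)  # longitude: dddmm.mmmm
    return lat, lon

lat, lon = parse_gga(
    "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47")
# lat ≈ 48.1173, lon ≈ 11.5167
```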

# Criterion For Success
- Data redundancy: We should be able to demonstrate that data gathered on any arbitrary node is reflected on the rest of the nodes in the network.
- Detection accuracy: We will demonstrate that detections made by our camera subsystem are accurately logged, i.e., that if a target appears in front of a node, the sighting is logged at the correct location.
- Battery life: We will determine a realistic and practical minimum battery life based on the hardware components we end up using.
