TA: Qingyu Li
Documents: design_document1.pdf
Team Members:
- Gautam Pakala (gpakala2)
- Cole Herrmann (colewh2)
- Jake Xiong (yuangx2)

# Problem

Time and energy are scarce resources on UAVs. Traditionally, when a UAV is used for aerial mapping, it takes a picture each time it flies a predetermined distance interval. Since UAVs must be kept lightweight, it's uncommon to find one with enough onboard processing hardware and energy reserves to stitch hundreds of frames into a map. That's why most mapping UAVs perform map generation offsite, on hardware more powerful than the onboard camera and flight controller. In time-sensitive emergencies (open combat, search and rescue, etc.), it may not be possible to land the UAV to render an aerial map, and it would be much more convenient if the drone could render the map itself, so it could be viewed at a ground station over a UDP radio link.
# Solution

We would like to design a camera with onboard hardware acceleration for stitching images together. When stitching images into a panorama or map, several repetitive operations are required to "prep" the images. Operations such as greyscaling, blurring, and convolving images can be performed on a traditional CPU, but both processing time and power consumption improve when such repetitive operations are pipelined through an FPGA. With Cole's ECE 397 funding from last semester, he acquired a Digilent Embedded Vision bundle; we plan on using its Zybo Z7-20 board and PCAM 5C camera module as the basis for our camera. The Zybo board has two ARM Cortex-A9 processor cores that can run Xilinx's embedded Linux distribution, PetaLinux. Running PetaLinux on the camera gives us easier access to the I/O and filesystem on the Zybo board than a bare-metal design would. After completing this project, we plan to integrate the camera into one of Cole's drones, including adding serial communication between the flight controller and the Zybo board (another advantage of building on PetaLinux), which would give access to a plethora of sensors such as GPS and airspeed that could bring a live-rendering aerial mapping drone into reality!
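The repetitive per-pixel "prep" operations above are exactly what the FPGA pipeline would accelerate. As a minimal software reference model (a sketch in Python for illustration, not the SystemVerilog implementation), a 3x3 convolution pass over a greyscale frame looks like this:

```python
def convolve3x3(img, kernel):
    """Apply a 3x3 kernel to a 2D greyscale image (list of lists),
    skipping the 1-pixel border. Models one stage of an FPGA pipeline,
    where this window of multiply-accumulates runs every clock cycle."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * img[y + ky - 1][x + kx - 1]
            out[y][x] = acc
    return out

# A 3x3 box-blur kernel (divide results by 9 for a true mean).
BOX = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
```

On a CPU this is a nested loop per frame; in hardware the same window of nine multiply-accumulates can be pipelined so one output pixel is produced per cycle.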

# Solution Components

## Subsystem 1: Keypoint Detection/Description, and Matching
As mentioned before, development of this project will be done on the Zybo board running the embedded Linux environment. The majority of the code base and the algorithms below will be written in SystemVerilog for the hardware portion. Some image pre-processing may be done in the Linux environment if that is easier to implement.
All image stitching for panoramas involves three main processes: keypoint detection/description, keypoint matching, and homography transformation.
Keypoint detection is the process of identifying keypoints in an image that are recognizable from different angles, lighting, and scales. Many computer vision algorithms accomplish this, such as SIFT, SURF, and FAST, to name a few. We are choosing to implement the FAST algorithm for keypoint detection, not just because it is faster than most other algorithms, but also because it is the least resource-intensive for the FPGA to execute. These algorithms are designed to account for scale and rotational invariance across images.
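The FAST segment test itself is simple: compare the 16 pixels on a Bresenham circle of radius 3 against the candidate center, and declare a corner if enough contiguous circle pixels are all brighter or all darker than the center by a threshold. A Python reference model of this test (the threshold `t` and arc length `n` are illustrative parameters, not values we have chosen yet):

```python
# Offsets of the 16-pixel Bresenham circle (radius 3) used by FAST.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(img, x, y, t=20, n=12):
    """FAST segment test: (x, y) is a corner if at least n contiguous
    circle pixels are all brighter than center+t or all darker than center-t."""
    c = img[y][x]
    # Classify each circle pixel: +1 brighter, -1 darker, 0 similar.
    labels = []
    for dx, dy in CIRCLE:
        p = img[y + dy][x + dx]
        labels.append(1 if p > c + t else (-1 if p < c - t else 0))
    # Search for a contiguous run of length n, wrapping around the circle.
    doubled = labels + labels
    for sign in (1, -1):
        run = 0
        for v in doubled:
            run = run + 1 if v == sign else 0
            if run >= n:
                return True
    return False
```

In hardware, the 16 comparisons and the run-length check reduce to parallel comparators and a small amount of combinational logic per pixel, which is why FAST maps so well to an FPGA.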
Keypoint description gives each identified keypoint a unique descriptor that can be used to identify it in the image. Again, there are many methods of doing this, but the simplest is to compile a matrix of the gradient vectors around each keypoint, which can be obtained by convolving the image with specific filters.
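As a toy illustration of this idea (a sketch, not our final descriptor), the window of gradient vectors around a keypoint can be built with central differences; in the real pipeline these gradients would come from the convolution filters mentioned above:

```python
def gradient_descriptor(img, x, y, r=2):
    """Toy descriptor: the (2r+1)x(2r+1) window of (gx, gy)
    central-difference gradients around the keypoint, flattened
    row by row into a single vector of gradient pairs."""
    desc = []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            px, py = x + dx, y + dy
            gx = img[py][px + 1] - img[py][px - 1]  # horizontal gradient
            gy = img[py + 1][px] - img[py - 1][px]  # vertical gradient
            desc.append((gx, gy))
    return desc
```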
Keypoint matching occurs after keypoints are detected and described in each image. If the difference between two descriptors is below a certain error threshold, the keypoints are said to be a match. Typically, a minimum of four keypoint matches is needed for the homography transformation.
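The thresholded matching step can be sketched as a brute-force nearest-neighbour search (descriptors here are flat numeric tuples; the distance metric and `max_dist` threshold are placeholders we would tune):

```python
def match_keypoints(desc_a, desc_b, max_dist=10.0):
    """Brute-force matcher: pair each descriptor in image A with its
    nearest neighbour in image B, keeping the pair only if the squared
    distance falls under the error threshold max_dist."""
    matches = []
    for i, da in enumerate(desc_a):
        best_j, best_d = None, max_dist
        for j, db in enumerate(desc_b):
            d = sum((a - b) ** 2 for a, b in zip(da, db))
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            matches.append((i, best_j))
    return matches
```

The all-pairs distance computation is another naturally parallel workload: each descriptor comparison is independent, so it pipelines well in hardware.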

## Subsystem 2: Homography Transformation
When image stitching, the angle of the images needs to be rectified to create a clean output panorama. Homography transformation is a common technique that maps the coordinate system of one image onto the plane of a reference image through a 3x3 homography matrix. The homography matrix can be calculated from the keypoint-match matrix by solving a constrained least-squares problem to find the eigenvector with the smallest eigenvalue; this transformation is then applied. One issue with the homography transformation is that the result can be skewed by outliers in the keypoint matching process: pairs that are detected as matches but are not really matches. A common solution to outlier problems like this is the RANSAC algorithm, which transfers easily to hardware and can be used to make the computation of the homography matrix more robust. After the images are warped (transformed) and overlapped, some image blending may be required for a cleaner result, which can be done in the Linux environment.
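Two pieces of this subsystem are easy to model in software: applying a candidate 3x3 homography to a point, and the RANSAC consensus step that scores a candidate matrix by counting how many matches it explains. A sketch of both (the eigenvector solve itself is omitted here; the `tol` reprojection tolerance is an illustrative parameter):

```python
def apply_homography(H, pt):
    """Map a 2D point through a 3x3 homography (row-major nested lists),
    dividing by the third homogeneous coordinate."""
    x, y = pt
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (u / w, v / w)

def count_inliers(H, matches, tol=2.0):
    """RANSAC consensus step: count matched pairs (p, q) that H maps
    to within tol pixels. Outlier matches simply fail this test, so the
    candidate H with the largest count is the most robust estimate."""
    n = 0
    for p, q in matches:
        u, v = apply_homography(H, p)
        if (u - q[0]) ** 2 + (v - q[1]) ** 2 <= tol ** 2:
            n += 1
    return n
```

RANSAC then repeats this scoring over many candidate matrices, each fit from a random minimal set of four matches, and keeps the one with the most inliers.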

## Subsystem 3: HDMI Output
HDMI output operates with the TMDS protocol, and plenty of people have created image renderers for HDMI. Our goal is to transfer the image being processed in the accelerator through an HDMI renderer we design and output it to the HDMI port for instantaneous viewing of results. If creating this part of the accelerator takes too long, it would be simpler to host a webserver and display the image from the Linux environment.
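At the heart of TMDS is an 8b/10b video-data encoding: each byte is first XOR- or XNOR-chained to minimise transitions, then conditionally inverted to keep the link DC-balanced. A Python reference model of that encoding (bits are LSB-first lists; the running-disparity bookkeeping is simplified relative to the full DVI specification, but the decoder exactly inverts this encoder):

```python
def tmds_encode(byte, cnt):
    """Reference model of TMDS 8b/10b video-data encoding.
    Returns (10-bit word as an LSB-first bit list, updated disparity)."""
    d = [(byte >> i) & 1 for i in range(8)]
    n1 = sum(d)
    # Stage 1: transition-minimised intermediate word q[0..8].
    use_xnor = n1 > 4 or (n1 == 4 and d[0] == 0)
    q = [d[0]]
    for i in range(1, 8):
        bit = q[i - 1] ^ d[i]
        q.append(1 - bit if use_xnor else bit)
    q.append(0 if use_xnor else 1)  # q[8] records which operator was used
    # Stage 2: DC balance via conditional inversion, tracked by cnt.
    n1q = sum(q[:8])
    n0q = 8 - n1q
    if cnt == 0 or n1q == n0q:
        invert = q[8] == 0
    else:
        invert = (cnt > 0) == (n1q > n0q)
    out = [(1 - b) if invert else b for b in q[:8]] + [q[8], 1 if invert else 0]
    cnt += (n0q - n1q) if invert else (n1q - n0q)
    return out, cnt

def tmds_decode(word):
    """Invert both encoding stages to recover the original byte."""
    q = list(word[:8])
    if word[9]:                      # undo the DC-balance inversion
        q = [1 - b for b in q]
    d = [q[0]]
    for i in range(1, 8):
        bit = q[i] ^ q[i - 1]
        d.append(bit if word[8] else 1 - bit)
    return sum(b << i for i, b in enumerate(d))
```

The SystemVerilog version of this is small combinational logic per channel, clocked at ten times the pixel rate through the Zybo's serialisers.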

# Criterion For Success

Due to the time-limited nature of ECE 445, we will not have enough time to write a full mapping application on PetaLinux and fully integrate it into the UAV's PX4 Cube flight controller. With this in mind, we plan on building a simpler application that can create panoramas and render them once they are fully processed. At minimum, we would like our camera to generate a panorama from three frames side by side. We also need a way to view the final panorama, which will be either a low-level solution using the TMDS video port on the Zybo board or a high-level solution where it is hosted on a local webpage (PetaLinux already has support for hosting web pages).

S.I.P. (Smart Irrigation Project)

Featured Project

Jackson Lenz, James McMahon
Our project is to be a reliable, robust, and intelligent irrigation controller for use in areas where reliable weather prediction, water supply, and power supply are not found.

Upon completion of the project, our device will be able to determine the moisture level of the soil, the water level in a water tank, and the temperature, humidity, insolation, and barometric pressure of the environment. It will perform some processing on the observed environmental factors to determine whether rain can be expected soon. Comparing this knowledge to the dampness of the soil and the amount of water in reserve, it will either trigger a command to begin irrigation or maintain a command not to irrigate the fields. This device will allow farmers to make much more efficient use of precious water and also avoid dehydrating crops to death.

In developing nations, power is also of concern because it is not as readily available as power here in the United States. For that reason, our device will incorporate several amp-hours of energy storage in the form of rechargeable, maintenance-free, lead acid batteries. These batteries will charge while power is available from the grid and discharge when power is no longer available. This will allow for uninterrupted control of irrigation. When power is available from the grid, our device will be powered by the grid. At other times, the batteries will supply the required power.

The project is titled S.I.P. because it will reduce water wasted and will be very power efficient (by extremely conservative estimates, able to run for 70 hours without input from the grid), thus sipping on both power and water.

We welcome all questions and comments regarding our project in its current form.

Thank you all very much for your time and consideration!