37: Gesture Controlled Audio Sharing System

Team Members: Fred Chang, Ruofan Chen, Ruohua Li
TA: AJ Schroeder
Documents: design_document1.pdf, final_paper1.pdf, other1.pdf, other2.pdf, presentation1.pptx
Problem


When a person is cooking or working, it is hard for them to control an audio system with their hands. A gesture-controlled audio system is therefore handy, especially when cooking or working noise makes voice control unreliable. Additionally, at social gatherings it is rare to find multiple smart speakers that can be paired and synchronized to play the same music, so a plug-and-go sharing system that requires no smart speakers at all becomes incredibly convenient. Combining gesture control with a plug-and-go audio sharing system thus addresses both individual use and social events. No existing product on the market currently offers the convenience of both features.


Solution Overview


Gesture Controlled Audio Sharing System (GCASS) is an audio sharing and coordination system. It aims to provide users with a handy way of controlling audio systems remotely using human gestures. Our proposed system consists of two subsystems: 1) a human gesture capturing and recognition subsystem, which employs a camera along with an embedded system to segment human gestures and convert them to control signals in real time, and 2) a plug-and-go audio sharing and distribution subsystem, which contains one broadcaster and multiple receivers. The receivers can be plugged into any type of audio speaker (a regular speaker or even a magnetic speaker). The setup requires no pairing procedure, and music tracks are automatically synchronized. The whole audio system is controlled by human gestures from the master node.



Solution Components


Component #1: human gesture capturing and recognition module

This module segments human gestures and converts them into control signals. It consists of a camera, a microcontroller, and an RF module mounted on the microcontroller for signal transmission.

Component #2: plug-and-go audio sharing module

The plug-and-go audio sharing module utilizes a novel approach we designed to share music across different kinds of speakers. It will have a single broadcaster chip, which is plugged into the master node, and multiple receiver chips, which can be plugged into any sort of speaker; even a cheap magnetic speaker will suffice. The data transmission relies on a PCB design that will incorporate ATmega328 controllers, RF modules, and some power-regulation and signal-amplification components.
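Since the RF link moves audio in small chunks, each packet needs at least a sequence number (for ordering and loss detection) and a checksum (for integrity). The Python sketch below models one plausible framing; the field sizes and the additive checksum are illustrative assumptions, not the team's actual on-air format, and the real firmware would implement the equivalent logic in C on the ATmega328.

```python
import struct

HEADER = struct.Struct(">HH")  # assumed: 16-bit sequence number, 16-bit checksum

def checksum(payload: bytes) -> int:
    """Simple 16-bit additive checksum over the audio payload."""
    return sum(payload) & 0xFFFF

def make_packet(seq: int, payload: bytes) -> bytes:
    """Frame one chunk of audio samples for the RF link."""
    return HEADER.pack(seq & 0xFFFF, checksum(payload)) + payload

def parse_packet(packet: bytes):
    """Return (seq, payload), or None if the checksum fails."""
    seq, csum = HEADER.unpack_from(packet)
    payload = packet[HEADER.size:]
    if checksum(payload) != csum:
        return None  # corrupted in transit; the receiver drops it
    return seq, payload
```

A receiver can then track the last sequence number it played and conceal a gap (for example, by repeating the previous chunk) whenever a number is skipped.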

Software


[CV algorithm]
The software side utilizes the camera module to capture real-time images and segments human gestures for use as control input.
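One common lightweight way to perform the segmentation step is skin-color thresholding in the YCrCb color space, which separates a hand from most backgrounds before any gesture classification runs. The pure-NumPy sketch below illustrates the idea; the threshold ranges are typical published values, not thresholds tuned for our camera, and a real pipeline would use an image-processing library.

```python
import numpy as np

def skin_mask(rgb: np.ndarray) -> np.ndarray:
    """Return a boolean mask of likely skin pixels in an H x W x 3 RGB frame.

    Uses the common Cr in [133, 173] and Cb in [77, 127] skin range in YCrCb.
    """
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Standard RGB -> YCrCb conversion
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = (r - y) * 0.713 + 128.0
    cb = (b - y) * 0.564 + 128.0
    return (cr >= 133) & (cr <= 173) & (cb >= 77) & (cb <= 127)
```

The largest connected region of the mask would then be handed to the gesture classifier.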

[Audio Sharing System]
The audio sharing and distribution is realized by both hardware (custom PCBs) and software protocols. The protocols running on the microcontrollers handle membership, leader election, and data-integrity issues in a distributed fashion.
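To make the distributed part concrete, here is a sketch of one plausible scheme (an assumption for illustration, not necessarily the team's protocol): each node periodically broadcasts a heartbeat, membership is the set of nodes heard from recently, and the live node with the highest ID is the leader, bully-style.

```python
STALE_AFTER = 3.0  # assumed: seconds without a heartbeat before a node is dropped

class Membership:
    """Tracks live nodes from heartbeats and elects a leader deterministically."""

    def __init__(self):
        self.last_seen = {}  # node_id -> timestamp of last heartbeat

    def heartbeat(self, node_id: int, now: float) -> None:
        self.last_seen[node_id] = now

    def live_nodes(self, now: float):
        return {n for n, t in self.last_seen.items() if now - t <= STALE_AFTER}

    def leader(self, now: float):
        """Bully-style rule: the highest live ID wins."""
        live = self.live_nodes(now)
        return max(live) if live else None
```

Because every node applies the same rule to the same membership view, no extra coordination messages are needed once heartbeats are flowing; if the leader dies, its heartbeats stop and the next-highest live ID takes over automatically.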

Criterion for Success

Our solution will be successful if the audio sharing system can handle multiple nodes simultaneously and synchronize playback with an inter-node skew small enough to be imperceptible to listeners (on the order of tens of milliseconds). The gesture control will be successful if the software can reliably segment human gestures and convert them into the appropriate control signals.

Filtered Back-Projection Optical Demonstration

Tori Fujinami, Xingchen Hong, Jacob Ramsey

Featured Project

Project Description

Computed Tomography, often referred to as CT or CAT scans, is a modern technology used for medical imaging. While many people know of this technology, not many people understand how it works. The concepts behind CT scans are theoretical and often hard to visualize. Professor Carney has indicated that a small-scale device for demonstrational purposes will help students gain a more concrete understanding of the technical components behind this device. Using light rather than x-rays, we will design and build a simplified CT device for use as an educational tool.

Design Methodology

We will build a device with three components: a light source, a screen, and a stand to hold the object. After placing an object on the stand and starting the scan, the device will record three projections by rotating either the camera and screen or the object. Using the three projections in tandem with an algorithm developed with a graduate student, our device will create a 3D reconstruction of the object.
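The reconstruction idea the project's title refers to is filtered back-projection: each 1-D projection is sharpened with a ramp filter and then smeared back across the image plane along its acquisition angle. The NumPy sketch below shows the 2-D slice version of that idea using an analytic Gaussian phantom (whose projection is the same at every angle); the real device works from photo-sensor readings, and three projections yield only a coarse reconstruction.

```python
import numpy as np

def ramp_filter(projection: np.ndarray) -> np.ndarray:
    """Apply the ramp filter |f| to one 1-D projection via the FFT."""
    freqs = np.fft.fftfreq(projection.size)
    return np.real(np.fft.ifft(np.fft.fft(projection) * np.abs(freqs)))

def backproject(projections, angles_deg, size):
    """Smear each filtered projection back across a size x size grid."""
    half = size // 2
    xs, ys = np.meshgrid(np.arange(size) - half, np.arange(size) - half)
    t_axis = np.arange(size) - half
    image = np.zeros((size, size))
    for proj, angle in zip(projections, angles_deg):
        theta = np.radians(angle)
        # Detector coordinate of each pixel for this viewing angle
        t = xs * np.cos(theta) + ys * np.sin(theta)
        image += np.interp(t, t_axis, ramp_filter(proj))
    return image
```

With a centered, rotationally symmetric object, all three projections (say 0°, 60°, and 120°) are identical, and the back-projection produces a bright spot at the center of the grid.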

Hardware

• Motors to rotate camera and screen or object

• Grid of photo sensors built into screen

• Light source

• Power source for each of these components

• Control system for timing between movement, light on, and sensor readings
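The timing requirement in the last bullet amounts to a fixed sequence per projection: move, wait for mechanical settling, illuminate, sample the sensor grid, switch the light off. This hypothetical Python generator just enumerates that command order; the step names and the even 120° spacing are assumptions for illustration, not the team's control design.

```python
def scan_sequence(num_projections: int = 3):
    """Yield the ordered control commands for one complete scan."""
    step = 360 // num_projections  # assumed even angular spacing (120 degrees for 3 views)
    for i in range(num_projections):
        yield ("rotate", i * step)   # motor moves the stage/camera to the next angle
        yield ("settle", None)       # wait for vibration to die down
        yield ("light_on", None)
        yield ("read_sensors", i)    # capture projection i from the photo-sensor grid
        yield ("light_off", None)
```

Keeping the sequence declarative like this makes it easy for a microcontroller loop to step through one command per tick while enforcing the settling and exposure delays between them.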