Final Demo

Description

The Final Demonstration (Final Demo) is the single most important assignment in the course. It is the strongest measure of the success of your project. The evaluation focuses on the criteria of project completion, reliability, and professionalism. You will demo your full project to a group consisting of your Professor, your TA, and a few peer reviewers. Other guests (e.g. alumni, other course staff, visiting scholars, donors) may occasionally be present as well.

Requirements and Grading

Students must be able to demonstrate the full functionality of their project to the instructors. If full functionality is not available, then students must be able to show the parts of the project that do function, following the procedure listed in their Requirements and Verification Table. Credit will not be given for features that cannot be demonstrated, even if those features worked previously but fail at the time of the final demo. Still, for any portion of the project that does not function as specified, students should have hypotheses and supporting evidence for what the problem may be.

Every member of the project team should be ready to justify design decisions and technical aspects of any part of the project (not just his or her own portion). Quantitative results are expected wherever applicable.

Grading is covered by the Demo Rubric, and is out of 150 points. Some of the key points are as follows:

  1. Completion: The project has been entirely completed.
  2. Thoroughness: Care and attention to detail are evident in construction and layout.
  3. Performance: Performance is completely verified, and operation is reliable.
  4. Understanding: Everyone on the project team must be able to demonstrate understanding of his/her technical work and show that all members have contributed significantly.

Submission and Deadlines

Sign-up for a demo time is handled through the PACE system. Again, remember to sign up for a peer review session as well.

VoxBox Robo-Drummer

Featured Project

Our group proposes to create a robot drummer that would respond to human voice "beatboxing" input, received via a conventional dynamic microphone, and translate the input into the corresponding drum hit performance. For example, if the human user issues a bass-kick voice sound, the robot will recognize it and strike the bass drum; likewise for the hi-hat/snare and clap. Our design will minimally cover three different drum hit types (bass hit, snare hit, clap hit) and respond with minimal latency.

This would involve amplifying the analog signal (as dynamic mics produce fairly low-level signals), which would then be sampled by a dsPIC33F DSP/MCU (or comparable chipset) and processed for trigger event recognition. This entails applying Short-Time Fourier Transform (STFT) analysis to provide spectral content data to our event detection algorithm (i.e. recognizing the "control" signal from the human user). The MCU functionality of the dsPIC33F would be used to relay the trigger commands to the actuator circuits controlling the robot.
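
As a rough illustration of the spectral event detection step, the sketch below classifies a single audio frame by comparing the energy in three frequency bands computed with a naive DFT. It is not the team's firmware: the frame length, sample rate, band boundaries, and noise-floor threshold are placeholder assumptions, and a real implementation on the dsPIC33F would use an optimized FFT rather than the direct DFT shown here.

/*
 * Illustrative sketch (not the actual firmware): classify one audio frame
 * into a drum-trigger event by comparing spectral band energies, in the
 * spirit of the STFT-based event detection described above. Frame length,
 * sample rate, band edges, and thresholds are placeholders.
 */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define FRAME_LEN   256        /* samples per analysis frame    */
#define SAMPLE_RATE 8000.0     /* Hz; assumed ADC sampling rate */

typedef enum { EVENT_NONE, EVENT_BASS, EVENT_SNARE, EVENT_CLAP } drum_event_t;

/* Total energy of the DFT bins covering [lo_hz, hi_hz), via a naive DFT.
 * A real implementation would use an FFT routine instead.               */
static double band_energy(const double *frame, double lo_hz, double hi_hz)
{
    int lo_bin = (int)(lo_hz * FRAME_LEN / SAMPLE_RATE);
    int hi_bin = (int)(hi_hz * FRAME_LEN / SAMPLE_RATE);
    double energy = 0.0;

    for (int k = lo_bin; k < hi_bin; k++) {
        double re = 0.0, im = 0.0;
        for (int n = 0; n < FRAME_LEN; n++) {
            double phase = -2.0 * M_PI * k * n / FRAME_LEN;
            re += frame[n] * cos(phase);
            im += frame[n] * sin(phase);
        }
        energy += re * re + im * im;
    }
    return energy;
}

/* Simple decision rule: the dominant band (above a noise floor) decides
 * which drum to trigger. Band edges and the floor are made-up values.   */
static drum_event_t classify_frame(const double *frame)
{
    double low  = band_energy(frame,   40.0,  200.0);   /* kick-like  */
    double mid  = band_energy(frame,  200.0, 1500.0);   /* snare-like */
    double high = band_energy(frame, 1500.0, 3500.0);   /* clap-like  */
    const double noise_floor = 1.0;

    if (low  > mid && low  > high && low  > noise_floor) return EVENT_BASS;
    if (mid  > low && mid  > high && mid  > noise_floor) return EVENT_SNARE;
    if (high > noise_floor)                              return EVENT_CLAP;
    return EVENT_NONE;
}

int main(void)
{
    /* Synthetic 100 Hz burst standing in for a bass-kick utterance. */
    double frame[FRAME_LEN];
    for (int n = 0; n < FRAME_LEN; n++)
        frame[n] = sin(2.0 * M_PI * 100.0 * n / SAMPLE_RATE);

    printf("detected event = %d\n", (int)classify_frame(frame));
    return 0;
}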

The robot in question would be small, about the size of a ventriloquist's dummy. The "drum set" would be scaled accordingly (think pots and pans, like a child would play with). The actuators would likely be based on solenoids rather than motors.
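
On the actuation side, a minimal sketch of how a solenoid striker might be fired is shown below. The pin number, pulse width, and the gpio_set/gpio_clear/delay_ms helpers are hypothetical stand-ins (stubbed out here to print to the console so the example runs on a host); on real hardware they would map to a port latch write and a timer delay, with suitable driver circuitry between the MCU and the coil.

/*
 * Illustrative sketch only: fire a solenoid striker with a fixed-width
 * pulse. Pin number, pulse width, and the helper functions below are
 * hypothetical placeholders, stubbed out so the example runs on a host.
 */
#include <stdint.h>
#include <stdio.h>

#define BASS_SOLENOID_PIN  0    /* placeholder pin assignment */
#define STRIKE_PULSE_MS    20   /* placeholder pulse width    */

/* Host-side stand-ins; on the real board these would be a port latch
 * write and a timer-based delay.                                      */
static void gpio_set(uint8_t pin)   { printf("pin %u high\n", pin); }
static void gpio_clear(uint8_t pin) { printf("pin %u low\n", pin);  }
static void delay_ms(uint16_t ms)   { printf("hold %u ms\n", ms);   }

/* Energize the solenoid just long enough to strike, then release so the
 * coil and its drive transistor do not overheat.                        */
static void strike(uint8_t pin)
{
    gpio_set(pin);
    delay_ms(STRIKE_PULSE_MS);
    gpio_clear(pin);
}

int main(void)
{
    strike(BASS_SOLENOID_PIN);  /* e.g. fired when a bass event is detected */
    return 0;
}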

Beyond these minimal capabilities, we would add analog prefiltering of the input audio signal and amplification of the drum hits as bonus features, if the development and implementation process goes better than expected.