Projects
| # | Title | Team Members | TA | Professor | Documents | Sponsor |
|---|---|---|---|---|---|---|
| 1 | Mobile Hive Checker |
Fiona Cashin Olivia Guido Rawda Abdeltawab |
Hossein Ataee | Arne Fliflet | proposal1.pdf |
Dr. Joy O'Keefe |
| # Team Members: - Fiona Cashin (fcashin2) - Olivia Guido (ojguido2) - Rawda Abdeltawab (rawdaka2) # Problem Beekeepers must routinely monitor hive conditions to maintain healthy colonies. However, manually opening a hive significantly stresses the bees and disrupts their environment, and frequent disturbances can negatively affect bee behavior and productivity. On the other hand, insufficient monitoring can lead to swarming or freezing, resulting in the loss of an entire colony. Each lost colony can cost a beekeeper between $100 and $200. This highlights the need for a non-invasive solution for assessing the health of multiple hives while minimizing stress on the bees. Although monitoring systems are available, they typically cost around $100 per hive, and many of the leading companies in this space are headquartered in Europe. # Solution The proposed solution is a portable device that enables beekeepers to monitor a colony’s health without opening the hive. A small sensor probe is inserted into the hive entrance to collect internal environmental data while the main unit remains outside. The device displays active sensor readings on an integrated screen and indicates whether hive conditions fall within acceptable ranges, such as temperatures between 70 and 97 degrees Fahrenheit. This approach minimizes hive disturbance while still providing essential health data, including temperature, humidity, and carbon dioxide levels. # Solution Components ## Subsystem 1, Temperature and Humidity Monitoring This subsystem measures the internal temperature and humidity of the beehive. Maintaining proper temperature is critical for hive health, as bee eggs will not develop and adult bees may die if the internal temperature falls outside the range of 70 to 97 degrees Fahrenheit. Humidity levels must remain between 50 percent and 60 percent to allow nectar to dry into honey. Excess humidity can promote pest reproduction, while insufficient humidity can cause bee eggs to dehydrate. The device will use a temperature and humidity sensor connected via a long cable, allowing the sensor to be inserted into the hive while the user holds the device externally. The sensor will interface with a microcontroller unit (MCU), which will process the data and display the readings on an LCD screen. The MCU will evaluate whether the temperature and humidity values fall within the acceptable ranges. If the readings are normal, the display will show “PASSED.” If any reading is outside the normal range, the display will show “FAILED.” Components: - Digital Temperature Humidity Sensor : HiLetgo DHT21 - Microcontroller Unit (MCU) : ESP32-C3-WROOM-02 - Liquid Crystal Display (LCD) : B0DN9NMBFW (GODIYMODULES) or B0BWTFN9WF (Hosyond) ## Subsystem 2, Carbon Dioxide Monitoring This subsystem measures the carbon dioxide concentration within the hive. In a beehive, CO2 levels of up to 8 percent can be tolerated; higher levels indicate overcrowding and poor ventilation. The device will include a CO2 sensor connected via cable to the same MCU. The MCU will record the CO2 levels and display the results on the LCD. As with the temperature and humidity subsystem, the MCU will determine whether the CO2 level is within the acceptable range and display “PASSED” or “FAILED” accordingly.
Components: - CO2 Sensor : HiLetgo MHZ19 - Microcontroller Unit (MCU) : ESP32-C3-WROOM-02 - Liquid Crystal Display (LCD) : B0DN9NMBFW (GODIYMODULES) or B0BWTFN9WF (Hosyond) ## Subsystem 3, Microcontroller and Logic The microcontroller coordinates all the subsystems and implements a Finite State Machine (FSM). The MCU runs embedded C firmware that defines an FSM with at least four states, including “Start”, “Reset”, “Testing”, and “Done”. During the “Testing” state, sensor data is acquired via the appropriate communication protocols. Once testing is complete, the collected data is displayed on the LCD, allowing the user to assess the overall health of the hive. The MCU compares each reading against its specified range and prompts either a passed or failed response to be displayed on the device (a minimal sketch of this FSM follows this entry). Components: - Microcontroller Unit (MCU): ESP32; one option is the Espressif ESP32-C3-WROOM-02, which has a 32-bit RISC-V CPU, a built-in antenna, Bluetooth, and Wi-Fi - Programming Interface: USB, used both to charge the battery and to upload code via the Arduino IDE - Reset Button: PTS645SL43-2 LFS, resetting the data on the LCD to test another hive - Power ON Button: PTS645SL43-2 LFS - Liquid Crystal Display (LCD): B0DN9NMBFW (GODIYMODULES) or B0BWTFN9WF (Hosyond) # Criterion For Success - The humidity sensor accurately measures humidity. - The temperature sensor accurately measures temperature. - The display correctly shows the measured temperature. - The display correctly shows the measured humidity. - The display turns on when the ON button is pressed. - A Start screen is shown when the ON button is pressed. - A Testing screen is shown after the Start screen. - A Done screen is displayed when the ON button is pressed the second time. - A Reset Screen is displayed when the reset button is pressed. - The display correctly shows “PASSED” and “FAILED.” - The display shows “PASSED” when all sensor readings are within normal ranges. - The display shows “FAILED” when at least one sensor reading is outside the normal range. - Final product tested on multiple hives. |
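The FSM and PASS/FAIL logic above map naturally to a small piece of firmware. Below is a minimal host-side C sketch of the four states and the threshold checks, using the ranges quoted in the proposal (70–97 °F, 50–60 % humidity, ≤ 8 % CO2); the sensor read is a stub standing in for the DHT21/MH-Z19 drivers, and all names are illustrative rather than the team's actual firmware.

```c
/* Host-side sketch of the proposed four-state FSM and PASS/FAIL logic.
 * Sensor reads are stubbed; on the real device they would come from the
 * DHT21 and MH-Z19 drivers. All names here are illustrative. */
#include <stdio.h>
#include <stdbool.h>

typedef enum { STATE_START, STATE_TESTING, STATE_DONE, STATE_RESET } State;

typedef struct {
    double temp_f;      /* internal hive temperature, degrees Fahrenheit */
    double humidity;    /* relative humidity, percent */
    double co2;         /* CO2 concentration, percent */
} Readings;

/* Stub standing in for the real sensor drivers (hypothetical values). */
static Readings read_sensors(void) {
    Readings r = { 88.0, 55.0, 3.2 };
    return r;
}

/* Acceptable ranges taken from the proposal text. */
static bool within_range(const Readings *r) {
    return r->temp_f   >= 70.0 && r->temp_f   <= 97.0 &&
           r->humidity >= 50.0 && r->humidity <= 60.0 &&
           r->co2      <= 8.0;
}

int main(void) {
    State state = STATE_START;
    Readings r;
    bool done = false;

    while (!done) {
        switch (state) {
        case STATE_START:
            printf("Start: insert probe, press ON to test\n");
            state = STATE_TESTING;
            break;
        case STATE_TESTING:
            r = read_sensors();
            printf("T=%.1fF H=%.1f%% CO2=%.1f%% -> %s\n",
                   r.temp_f, r.humidity, r.co2,
                   within_range(&r) ? "PASSED" : "FAILED");
            state = STATE_DONE;
            break;
        case STATE_DONE:
            printf("Done: press RESET to test another hive\n");
            state = STATE_RESET;
            break;
        case STATE_RESET:
            printf("Reset: clearing display\n");
            done = true;   /* in firmware this would loop back to START */
            break;
        }
    }
    return 0;
}
```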
||||||
| 2 | Bird Simulator |
Anthony Amella Eli Yang Emily Liu |
Shiyuan Duan | Arne Fliflet | proposal1.pdf |
|
| # Bird Simulator Team Members: - Anthony Amella (aamel2) - Emily Liu (el20) - Eli Yang (eliyang2) # Problem FPV drones give people a chance to experience immersive flight through FPV goggles, improving engagement. However, this immersion is primarily visual and does not allow for physical control such as motion cues or body orientation. This leaves a layer of realism missing for people who want an even more exhilarating experience. # Solution Our bird simulator will allow the pilot to control a drone using motion. This system will consist of a drone with a camera, FPV goggles, and a wearable suit fitted with IMUs that reads how the wearer's body moves and is oriented. The motion captured by the suit will then be converted to instructions that the drone can use to maneuver in its environment. # Solution Components ## Visuals We will use 5.8 GHz radio to transmit video data from the drone to the goggles using a pair of transmitters and receivers (RTC6705 and RTC6715). These RF modules handle amplifying, mixing, and modulating/demodulating signals, while leaving us the ability to configure and program the module through SPI with a microcontroller. We will use a camera that outputs analog video to be transmitted by the RTC6705 and received by the RTC6715 module in the goggles, where it is converted to composite video and displayed on a small screen. We expect the development of the other subsystems to require a lot of trial and error, so we will develop a virtual simulation environment using JavaScript/WebGL that will allow testing with fewer safety concerns. ## Drone We will design and manufacture a drone from scratch. The drone body will be waterjet-cut from carbon fiber, similar to existing COTS racing drones. Tentatively, we will build the drone on a 3-inch frame. Notably, the drone will have a servo attached to the FPV camera, which will allow its pitch to be changed mid-flight. This will allow the camera to look forward regardless of the position of the actual drone body, letting the FPV pilot feel more like a bird, since birds generally look forward during flight regardless of their speed. The drone will carry a 5.8GHz AM radio transmitter, as described above, as well as a 2.4GHz SX1280 receiver for control signals from the pilot. We will also make our own ESCs, allowing us to control the motors with a custom BLDC controller with FDMC8010 MOSFETs. The drone will have auto-leveling capabilities, harnessing the IMU in the drone body. This will allow for easier flight, with the drone staying roughly level. ## Control There will be 4 IMUs embedded in a wearable suit that will collect data to be combined and used to determine the motion and orientation of the user: one on each arm, one on the head, and one on the torso. We plan to use the IIM-20670, which includes a gyroscope and accelerometer and communicates with the MCU using SPI. Movements such as head rotation, wing flapping, body orientation, and others to be determined will be translated to stick inputs on a normal drone controller (see the mapping sketch after this entry). We will also make a normal drone controller to override suit inputs and take over control in case the drone starts behaving unexpectedly. Both the suit and the controller will transmit signals using a 2.4 GHz transceiver (SX1280), which will be received by the drone, also equipped with an SX1280. Using these modules requires writing driver code to facilitate communication with the MCU.
# Criterion For Success At a minimum, we will make a drone that is able to control four BLDC motors, as well as receive 2.4GHz control signals and transmit 5.8GHz video. The drone will have some form of auto-leveling with a built-in IMU, as well as a camera with variable pitch. We will also make a bird suit with four IMUs that can generate signals that could control the drone. These signals will initially be used to control a drone simulator, programmed in WebGL. If time permits, these signals will also control the drone, allowing for real-world flight. Of note, Eli Yang has an FAA Remote Pilot Certification, allowing for legal outdoor flight. To start, we will use off-the-shelf FPV goggles, but we will make our own if time permits. |
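To make the suit-to-stick translation concrete, here is a minimal C sketch of one plausible mapping: a fused orientation angle from the torso IMU is passed through a deadband and scaled to a normalized stick command. The angles, deadband, and scaling are assumptions for illustration, not the team's final control law.

```c
/* Sketch of mapping suit IMU orientation to normalized "stick" commands,
 * under assumed conventions (angles in degrees, sticks in [-1, 1]). */
#include <stdio.h>

static double clamp(double v, double lo, double hi) {
    return v < lo ? lo : (v > hi ? hi : v);
}

/* Deadband plus linear scaling: small posture noise is ignored,
 * larger lean angles saturate at full stick deflection. */
static double angle_to_stick(double angle_deg, double deadband_deg,
                             double full_scale_deg) {
    if (angle_deg > -deadband_deg && angle_deg < deadband_deg) return 0.0;
    return clamp(angle_deg / full_scale_deg, -1.0, 1.0);
}

int main(void) {
    /* Hypothetical fused angles from the torso IMU (read over SPI). */
    double torso_roll = 12.0, torso_pitch = -30.0;

    double roll_cmd  = angle_to_stick(torso_roll,  5.0, 45.0);
    double pitch_cmd = angle_to_stick(torso_pitch, 5.0, 45.0);

    printf("roll stick = %.2f, pitch stick = %.2f\n", roll_cmd, pitch_cmd);
    return 0;
}
```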
||||||
| 3 | Heterodyne Bat Detector |
Bill Waltz Evan McGowan Kyle Jedryszek |
Gayatri Chandran | Arne Fliflet | proposal1.pdf |
|
| Team Members: - Bill Waltz (wwaltz2) - Kyle Jedryszek (kaj5) - Evan McGowan (evandm2) # Problem: There is a need for American-made and sold handheld heterodyne bat detectors. Some American bat enthusiasts dislike the bat detectors that plug into phones or tablets, like the ones from Wildlife Acoustics, since the sound produced is not as high-quality as a standard heterodyne. Also, these models cost $300+. The most popular heterodynes are currently produced and sold in the UK and Australia. Specifically, Dr. Joy O'Keefe is in need of a high-quality, mass-producible device to provide several groups of people with a bat detecting device for Bat Walks at the Central Illinois Bat Festival. # Solution A handheld device with a microphone, capable of detecting frequencies between 15kHz-100kHz, which will be amplified before being heterodyned with a mixer circuit. The frequency to be mixed with is controlled by a large dial (with illuminated frequency labels) on the front of the device. The sound will then be amplified and output via quality speakers. The device will also have a power button, a volume dial, a 3.5mm auxiliary port for headphone use, and be powered by AAA batteries. Finally, what might set this apart from every other bat detector is that this model will have stored, prerecorded sound bites that can be played so that first-time users know what to listen for. # Solution Components ## Ultrasonic Receiver To first receive the signal, we will employ an ultrasonic transducer, likely to be the most important and expensive part of the product. Transducer options include Syntiant’s SPVA1A0LR5H-1 microphone, readily available on DigiKey, since it has a frequency rating well into the ultrasonic range. A pre-amplifier using op-amps like the TLV9052/ADA4097 will amplify the desired signal, followed by a high-pass filter to remove low-frequency noise below 20kHz. ## Heterodyne To mix the ultrasonic signal down to baseband, we will employ a double-balanced mixer like the SA612A or MC1496, which also produces the internal oscillator signal. This heterodyned signal is then amplified with another op-amp circuit and passed through to a speaker (the mixing arithmetic is sketched after this entry). Finally, our leading choice for speaker is the Taoglas SPKM.23.8.A: a thin, ~1-inch diameter speaker which will fit nicely into a handheld device. ## Bat Sound Playback Pre-recorded audio clips from other heterodyne bat detectors will be programmed onto a flash memory module, sized somewhere between 32 KB and 512 KB, that can be accessed by a microcontroller. An ATTiny85 is our MCU of choice, as its availability, low cost, and speed satisfy our needs for this project. When the device is on and the user presses a button labeled “Demo,” one of the recordings will play from the speaker or audio jack, preceded by an announcement of which species of bat they are hearing. The programming for the MCU and flash memory will be done via an external programmer (such as the USBasp), with the audio data dumped directly into the external flash storage. ## User Interface The UI will consist of a 3D-printed handheld chassis for the device. The chassis will contain a power button (or switch), which will be either mechanically or electrically connected to the main board, and an adjustable volume knob. The device will have a dial (labeled with both frequencies in kHz and common bat call ranges) to adjust a potentiometer that changes the frequency of the onboard oscillator.
There will also be a dim, non-invasive red or green light that will shine on the frequency dial so that the user can read the dial in the dark. The bottom of the device will have a 3.5mm auxiliary audio port for headphone listeners. # Criterion For Success Our product must accomplish the following objectives to be considered successful: - Total production cost below 50 USD, including casing - Device must be tunable between 15kHz and 100kHz using the onboard tuner, testable using Dr. O’Keefe’s Ultrasound Calibrator - Battery life (rechargeable or otherwise) lasts the length of at least one bat walk (1-2 hours) - Volume control is tunable from muted to more-than-noticeably audible - Selected bat sounds must be audible through the speaker when played - When an ultrasonic source radiates sound, the device must downconvert it to audible frequencies and play it through the onboard speaker |
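The mixing arithmetic behind the heterodyne stage can be checked numerically: multiplying a 45 kHz tone by a 44 kHz local oscillator produces sum (89 kHz) and difference (1 kHz) components, and a low-pass keeps only the audible difference tone. The C sketch below simulates this; the sample rate and filter coefficient are illustrative, not circuit values.

```c
/* Numerical sketch of the heterodyne principle the mixer implements:
 * sin(a)*sin(b) = 0.5*[cos(a-b) - cos(a+b)], so mixing a 45 kHz "bat"
 * tone with a 44 kHz local oscillator yields 89 kHz and 1 kHz parts;
 * a low-pass keeps the audible 1 kHz difference tone. Illustrative only. */
#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void) {
    const double fs    = 400e3;  /* simulation sample rate, Hz */
    const double f_bat = 45e3;   /* incoming ultrasonic tone */
    const double f_lo  = 44e3;   /* dial-controlled oscillator */
    const double alpha = 0.05;   /* one-pole low-pass coefficient */
    double lp = 0.0;

    for (int n = 0; n < 1200; n++) {           /* 3 ms of signal */
        double t = n / fs;
        double mixed = sin(2*M_PI*f_bat*t) * sin(2*M_PI*f_lo*t);
        lp += alpha * (mixed - lp);            /* crude low-pass */
        if (n % 100 == 0)                      /* print every 0.25 ms */
            printf("%8.6f s  mixed=%+.3f  audio=%+.3f\n", t, mixed, lp);
    }
    /* The "audio" column traces a slow ~1 kHz oscillation: the
     * difference frequency |f_bat - f_lo| that the speaker plays. */
    return 0;
}
```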
||||||
| 4 | Scorpion-Lift Ant-Weight BattleBot |
Chen Meng Zisu Jiang Zixin Mao |
Zhuoer Zhang | Viktor Gruev | other1.docx |
|
| Team Members: - Zixin Mao (zixinm2) - Chen Meng (meng28) Problem Many small combat/arena robots fail not because they lack “power,” but because they lose mobility (treads derailing, wheels slipping), cannot recover after being flipped, and cannot reliably control an opponent’s posture. Tracked robots have better traction, but keeping treads aligned under aggressive turning and impacts is difficult. Lifter-style control bots can dominate positioning, but they often struggle with self-righting and maintaining stable contact with opponents. We want to build a tracked, scorpion-shaped control robot that can (1) keep traction and mobility under collisions, (2) self-right and resist flips, and (3) control an opponent using two lifting arms (“claws”) plus a tail-mounted stinger mechanism for pushing/hooking/jabbing/bracing, without using destructive spinning weapons. The goal is a robust platform that demonstrates strong mechanical design and custom high-current circuits (motor drive, actuator drive, and power monitoring), suitable for a senior design scope. Solution We will build a differential-drive tracked platform (left and right tread) with a low center of gravity and a wide stance. Inspired by tracked designs that use self-centering tread geometry to prevent belt derailment, we will incorporate a crowned-pulley/self-centering tread approach to improve reliability during turns and impacts. On top of the base, we add: Two front lifting arms (scorpion “claws”) that use a linkage mechanism for mechanical advantage to lift opponents and self-right. A scorpion tail “stinger” that can be positioned to brace against the ground for self-righting/anti-tip stability and can also be used as a control weapon to jab/push/hook opponents to disrupt their posture and set up lifts. Custom circuit boards: High-current dual motor driver (external MOSFET H-bridges with gate-driver ICs) Tail actuator power stage (H-bridge or MOSFET stage + current sensing + thermal protection) Power distribution + sensing (battery monitoring, current measurement, fusing, kill switch) This system directly addresses the problem: tracks provide traction, crowned/self-centering geometry improves tread retention, lifter arms provide control and self-righting, and the tail stinger adds a controllable “third-point” brace plus an active control/attack mechanism. Subsystems Overview and Physical Design Subsystem 1—Tracked Mobility and Drive Electronics 1) Function: Provide high-traction motion, fast turning, and robust tread retention under impacts. 2) Mechanical Approach: Differential tracked drive: one motor per side. Tread retention strategy: incorporate a crowned pulley/self-centering profile to reduce derailment during turning and shock loads. Commercial track set (baseline): Pololu 30T Track Set – Black, item #3033 (sprockets + tracks). If size/torque needs change, we can swap to a different Pololu track set family (e.g., 22T variants). 3) Actuators/Sensors (Explicit Parts): Gearmotors (with encoders for closed-loop speed control): Pololu 100:1 Metal Gearmotor 37Dx73L mm 12V with 64 CPR encoder, item #4755 (or equivalent 100:1 encoder variant). Motor driver (custom PCB, circuit-level design): TI DRV8701 brushed DC H-bridge gate driver (uses external N-MOSFETs for high current). We will design the H-bridge with appropriately rated MOSFETs, gate resistors, a current shunt, and a protection layout (high-current routing, thermal design).
Prototype/fallback option: The VNH5019-class integrated driver can be used for early bring-up, but the final deliverable targets a discrete MOSFET + gate-driver solution for circuit-level depth. Current/voltage sensing: TI INA219 current shunt/power monitor (I²C) for battery + load telemetry (or per-rail monitoring where needed). 4) Key Circuit Deliverables (What We Will Design and Build): Dual H-bridge power stage (2x DRV8701 + MOSFETs) (prototype fallback: VNH5019-class module) Current sense + current limiting strategy (sense resistor + DRV8701 sense amplifier use) Reverse polarity + fuse + TVS transient suppression 5V/3.3V regulation for logic and servos (as needed) Subsystem 2—Dual Lifting Arms (“Claws”) Mechanism 1) Function: Lift/tilt opponents, perform self-righting, and stabilize the robot during control maneuvers. 2) Mechanical Approach: Two symmetric front arms shaped like scorpion claws. Linkage-based lifter (4-bar or similar) to amplify torque and keep the lifting motion controlled. 3) Components (Explicit Parts): High-torque metal gear servos (example): DS3218 digital servo (~20 kg·cm class)—one per arm, or one actuator with a shared linkage if weight/space demands. Arm position feedback (optional): potentiometer or magnetic encoder (e.g., AS5600) for closed-loop arm control beyond servo internal control. 4) Circuit and Interface Servo power rail design (separate buck regulator, bulk capacitance, brownout prevention) PWM generation from the MCU; optional current monitoring for stall detection Subsystem 3—Scorpion Tail “Stinger” and Driver Stage 1) Function: Provide a controllable tail mechanism that supports (1) self-righting, (2) anti-tip bracing while lifting, and (3) active control/attack against other robots via jabbing, pushing, and hooking. 2) Mechanical Approach: The tail is a rigid arm mounted at the rear/top of the chassis with 1–2 DOF: Pitch joint to raise/lower the tail (primary DOF). (Optional) small yaw adjustment to place the stinger left/right if needed. Tail tip “stinger” end-effector (replaceable modules): Jab/Pusher Tip: a rounded or wedge-shaped tip to shove and unbalance opponents without snagging. Hook Tip: a curved hook profile to catch on opponent edges (weapon guards, chassis lips, or external features) and pull/rotate them into the lifter arms. Brake/Brace Foot: high-friction pad to press into the ground for stability and self-righting. Operating modes: Self-righting push: the tail presses into the floor to lever the chassis upright. Anti-tip brace: as the front arms lift, the tail pushes down to prevent a backflip and stabilize the chassis. Jab/Poke: quick tail motion to disrupt opponent alignment and create an opening for the front claws. Hook-and-control: tail hooks and pulls to rotate the opponent or drag them into a favorable position. 3) Actuators/Sensors (Explicit Parts): Tail pitch actuator (choose one of the following implementation paths): Path A (simpler, lighter): high-torque servo (example: DS3218) for tail pitch joint. Path B (more force and controllable): compact DC gearmotor + lead screw (custom linear actuator) driving tail pitch via a crank linkage. Tail contact/force sensing (optional but recommended for protection and testing): Force-sensitive resistor (FSR) under the brace foot or a small load cell in the tail linkage to estimate applied downforce. Tail joint endstop sensing: limit switch or Hall sensor to prevent over-travel. 
4) Power Electronics (Custom, Circuit-Level Design) If servo-based: a dedicated servo power rail with a buck regulator and bulk capacitance; monitor servo rail voltage sag. If DC motor/linear actuator-based: dedicated tail actuator driver PCB, including: H-bridge motor driver (gate driver + MOSFETs, or a motor-driver IC) Flyback/transient protection Current sensing (shunt + amplifier or INA219 channel) to detect stall and enforce safe limits Thermal monitoring near power devices and firmware cutback Subsystem 4—Wireless Control and Main Controller 1) Function: Reliable teleoperation, safety failsafe, and sensor telemetry. 2) Controller (Explicit Parts): ESP32-WROOM-32E-N4 module as the main MCU (Wi-Fi/BLE for control + telemetry). 3) Features: Wireless control (BLE gamepad or Wi-Fi UDP) Failsafe: if command packets stop for >500 ms (heartbeat) → motors stop, tail relaxes to safe position, arms relax to safe position Telemetry: battery voltage/current, motor currents, temperatures Subsystem 5—Power, Safety, and Compliance 1) Function: Safe high-current operation and course-lab compliance. 2) Planned Safety Hardware Physical kill switch / removable arming plug Main fuse sized for worst-case current + wiring limits Separate “logic” and “power” rails with filtering LiPo-safe practices: voltage cutoff, charging in approved bags/areas, current limiting for high-current loads Physical Design—3D Modeling and Fabrication 1) Modeling Software We will use Autodesk Fusion 360 for the entire mechanical design. 2) Material Since this is a combat robot, material properties are a primary design constraint. We will first consider PLA, PETG, and ABS materials (TBD). 3) Weight Management and Distribution General Weight Budgeting (Depending): Electronics & Motors: ~45% Battery: ~15% Mechanical Frame & 3D Prints: ~30% Fasteners & Tracks: ~10% Criterion for Success All goals below are clearly testable: 1. Mobility/Traction Maintain continuous drive for ≥ 10 minutes on a flat surface (no thermal shutdown). Reach ≥ 1.0 m/s straight-line speed on the lab floor with the full system powered. Execute 10 consecutive aggressive turns (full differential turning) without tread derailment. 2. Lifting Arms Lift a 0.5 kg test block by ≥ 30 mm within 3 seconds, repeated for 10 cycles without mechanical failure. Self-righting: from upside-down, return to upright in ≤ 5 seconds using the arms and tail pose in 3 out of 3 trials. 3. Tail “Stinger” (Stability and Attack/Control) Bracing downforce: when deployed in brace mode, the tail applies ≥ 30 N downward force on a scale (measured at the stinger foot) and holds for ≥ 30 seconds without actuator overheating or mechanical slip. Deployment speed: tail transitions from “stowed” to “bracing” position in ≤ 1.0 second, repeated 10 cycles. Anti-tip effectiveness: during a lift of the 0.5 kg test block, the robot does not tip past a defined angle threshold (e.g., < 45°) in 3 out of 3 trials. Jab/Pusher effectiveness: using the jab/pusher tip, the tail can push a 1.0 kg surrogate block on the lab floor by ≥ 20 cm within 2 seconds (repeatable in 3/3 trials). Hook-and-control: using the hook tip, the tail can latch onto a standardized pull point (e.g., a metal ring/edge on a test fixture) and pull a 0.5 kg load by ≥ 10 cm (repeatable in 3/3 trials). 4. Control and Safety Wireless control range ≥ 10 m line-of-sight with < 150 ms command latency. Failsafe stops the drive and disables high-force actions within ≤ 300 ms of signal loss (verified by logging + stopwatch/LED indicator). 5. 
Circuit-Level Design Validation The custom motor driver and tail actuator PCB operate at the target battery voltage and demonstrate: Current sensing accuracy within ±10% (bench comparison against a multimeter/shunt reference) No overcurrent damage during stall tests (protected shutdown triggers as designed) (A heartbeat-failsafe sketch follows this entry.) |
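The >500 ms heartbeat failsafe described in Subsystem 4 is straightforward to express in firmware. A minimal C sketch follows, with timing and actuator calls stubbed for the host; the function names are hypothetical, not the team's code.

```c
/* Sketch of the heartbeat failsafe: if no command packet arrives for
 * 500 ms, drive stops and the tail/arms relax to safe positions. */
#include <stdio.h>
#include <stdint.h>

#define HEARTBEAT_TIMEOUT_MS 500

static uint32_t last_packet_ms = 0;

/* Called from the radio receive path whenever a valid packet arrives. */
static void on_command_packet(uint32_t now_ms) { last_packet_ms = now_ms; }

/* Called periodically from the main control loop. */
static void failsafe_tick(uint32_t now_ms) {
    if (now_ms - last_packet_ms > HEARTBEAT_TIMEOUT_MS) {
        /* Hypothetical actuator hooks -- real firmware would command
         * the DRV8701 bridges and the tail driver stage here. */
        printf("[%u ms] FAILSAFE: motors stopped, tail/arms relaxed\n",
               (unsigned)now_ms);
    }
}

int main(void) {
    on_command_packet(0);
    for (uint32_t t = 0; t <= 800; t += 100) {
        if (t == 200) on_command_packet(t);   /* simulated packet at 200 ms */
        failsafe_tick(t);                     /* trips once t - 200 > 500 */
    }
    return 0;
}
```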
||||||
| 5 | ANT-WEIGHT BATTLEBOT |
Wenhao Zhang Xiangyi Kong Yuxin Zhang |
Zhuoer Zhang | Viktor Gruev | proposal1.pdf |
|
| # ANT-WEIGHT BATTLEBOT Team Members: - Xiangyi Kong (xkong13) - Yuxin Zhang (yuxinz11) - Wenhao Zhang (wenhaoz5) # Problem Antweight (≤2 lb) combat robots must operate under strict weight, power, and control constraints while enduring repeated impacts, motor stalls, and wireless failures. Stable, fast interplay among power delivery, wireless control, and the mechanical and electronic subsystems is therefore essential. # Solution We propose a 2-lb antweight battlebot with a four-wheel-drive chassis and an active front roller-and-fork weapon. All electronics are integrated on a custom PCB centered on an ESP32 microcontroller. The system is divided into four subsystems—Power, Drive, Weapon, and Control—allowing modular development and testing. Wireless PC-based control is implemented via Wi-Fi or Bluetooth, with firmware failsafes ensuring automatic shutdown on RF link loss. # Solution Components ## Subsystem 1 - Power Supplies stable power to motors and electronics while preventing brownouts, overcurrent damage, and unsafe operation. Components: - 3S LiPo battery (11.1 V) - LM2596S-3.3 (regulator to output 3.3 V) ## Subsystem 2 - Drive Provides reliable locomotion, turning, and pushing power during combat (a drive-mixing sketch follows this entry). Components: - Four DC gear motors - L298N (motor driver) - Four wheels mounted to a 3D-printed chassis ## Subsystem 3 - Weapon Implements the robot’s primary mechanism for engaging and controlling opponents. Components: - Front roller driven by a DC motor - PWM-based motor control circuitry - Other 3D-printed weapon structures (forks and wedge guides) ## Subsystem 4 - Control Handles wireless communication, motion control, weapon control, and safety logic. Components: - ESP32 microcontroller on custom PCB - Integrated Bluetooth radio - Current sensor for safety monitoring - PC-based control interface # Criterion For Success - Weight Compliance: Total robot mass is less than 2.0 lb. - Wireless Control: Robot is reliably controlled from a PC via Bluetooth with failsafe operation. - Mobility: Robot operates continuously for 3 minutes without power resets. - Weapon Reliability: Weapon can be repeatedly actuated without electrical or mechanical failure. |
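One common way to turn PC joystick input into the four-motor drive described above is arcade-style mixing of a throttle axis and a steering axis into left/right duty cycles. A minimal C sketch, with ranges and names as assumptions rather than the team's firmware:

```c
/* Arcade-style drive mixing: one throttle axis and one steering axis
 * from the PC are mixed into left/right PWM duty cycles for the L298N.
 * Input and output ranges are assumptions. */
#include <stdio.h>

static double clamp(double v) { return v > 1.0 ? 1.0 : (v < -1.0 ? -1.0 : v); }

/* throttle, steer in [-1, 1]; outputs in [-1, 1], sign = direction pin. */
static void mix(double throttle, double steer, double *left, double *right) {
    *left  = clamp(throttle + steer);
    *right = clamp(throttle - steer);
}

int main(void) {
    double l, r;
    mix(0.8, 0.3, &l, &r);   /* forward with a gentle right turn */
    printf("left duty %.0f%%, right duty %.0f%%\n", l * 100, r * 100);
    return 0;
}
```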
||||||
| 6 | Interactive Desktop Companion Robot for Stress Relief |
Jiajun Gao Yu-Chen Shih Zichao Wang |
Haocheng Bill Yang | Craig Shultz | proposal1.pdf |
|
| # Team - Jiajun Gao (jiajung3) - Yuchen Shih (ycshih2) - Zichao Wang (zichao3) # Problem Students and office workers often spend extended periods working at desks, leading to mental fatigue, stress, and reduced focus. While mobile applications, videos, or music can provide temporary relief, they often require users to shift attention away from their primary tasks and lack a sense of physical presence. Static desk toys also fail to maintain long-term engagement because they do not adapt to user behavior or provide meaningful interaction. There is a need for an interactive, physically present system that can provide short, low-effort interactions to help users relax without becoming a major distraction. Such a system should be compact, safe for desk use, and capable of responding naturally to user input. # Solution We propose an interactive desktop companion robot designed to reduce stress and boredom through voice interaction, expressive feedback, and simple physical motion. The robot has a compact, box-shaped form factor suitable for desk environments and can move using a tracked or differential-drive base. An ESP32-based controller coordinates audio processing, networking, control logic, and hardware interfaces. The robot supports voice wake-up, natural language conversation using a cloud-based language model, and speech synthesis for verbal responses. Visual expressions are displayed using a small screen or LED indicators to reflect internal states such as listening, thinking, or speaking. Spoken commands can also trigger physical actions, such as rotating, moving closer, or changing expressions. By combining audio, visual, and physical interaction, the system creates an engaging yet lightweight companion that fits naturally into a desk workflow. # Solution Components ## Subsystem 1: Voice Interaction and Audio Processing This subsystem enables natural voice-based interaction between the user and the robot. It performs wake-word detection locally and streams audio data to a remote server for speech recognition and response generation. The subsystem also handles audio playback and interruption control. Audio data is captured using a digital microphone, encoded, and transmitted over a network connection. Responses from the server are received as audio streams and played through an onboard speaker. Local wake-word detection ensures responsiveness and reduces unnecessary network usage. Components: • ESP32-S3 microcontroller with PSRAM • ESP32-S3 integrated Wi-Fi module • I2S digital microphone (INMP441 or equivalent) • I2S audio amplifier (MAX98357A) • 4Ω or 8Ω speaker ## Subsystem 2: Visual Expression and User Feedback This subsystem provides visual feedback that represents the robot’s internal state and interaction context. Visual cues improve usability and convey personality. Different states such as idle, listening, processing, speaking, and error are represented using animations or color patterns. Components: • SPI LCD display (ST7789 or equivalent) or • RGB LEDs (WS2812B or equivalent) ## Subsystem 3: Motion and Actuation This subsystem enables controlled movement on a desk surface. The robot performs simple motions such as forward movement, rotation, and stopping based on voice commands and sensor feedback. Motor control runs in a dedicated task to prevent interference with audio and networking functions. 
Components: • Two DC gear motors • Dual H-bridge motor driver (TB6612FNG or equivalent) • Optional wheel encoders ## Subsystem 4: Power Management and Safety This subsystem manages power distribution and ensures safe operation. The robot is battery-powered to allow untethered use on a desk. Hardware and software protections limit speed, current, and movement range. Components: • Lithium battery with protection circuit • Battery charging module • Voltage regulators (5V and 3.3V) • Physical power switch ## Subsystem 5: Safety Sensing (Desk-Edge Detection + Obstacle Avoidance) This subsystem prevents the robot from falling off the desk and reduces collisions with nearby objects. It continuously monitors both the surface below the robot and the space in front of the robot. When a desk edge (cliff) or obstacle is detected, this subsystem overrides motion commands and triggers an immediate safe response. Desk-edge detection (cliff detection): Two downward-facing distance sensors are mounted near the front-left and front-right corners. They measure the distance from the robot to the desk surface. If either sensor detects a sudden increase in distance beyond a calibrated baseline, the robot immediately stops and performs a short reverse maneuver to move away from the edge. Obstacle avoidance: A forward-facing distance sensor detects objects in front of the robot. If an obstacle is within a predefined safety distance, the robot stops. If the obstacle remains, the robot can optionally rotate in place to search for a clear direction before continuing motion. Control priority: Safety sensing has the highest priority in the motion stack: desk-edge detection (highest priority), then obstacle avoidance, then user/voice motion commands (lowest priority); a sketch of this priority logic follows this entry. Components: 2 × Time-of-Flight distance sensors for downward cliff detection (VL53L0X or equivalent, I2C) 1 × Time-of-Flight distance sensor for forward obstacle detection (VL53L0X or equivalent, I2C) # Criterion For Success The success of this project will be evaluated using the following high-level criteria: 1. The robot connects to a Wi-Fi network and establishes a server connection within 10 seconds of power-on. 2. The system detects a wake word and enters interaction mode within 2 seconds in a quiet environment. 3. The average end-to-end voice interaction latency is less than 5 seconds under normal network conditions. 4. At least five predefined voice commands trigger the correct robot actions with at least 90% accuracy during testing. 5. Visual feedback correctly reflects the system state in all operational modes. 6. The robot operates continuously for at least 30 minutes on battery power during active use. 7. When Wi-Fi is unavailable, the system enters a safe degraded mode without crashing or unsafe motion. 8. During a 10-minute continuous motion demonstration on a desk, the robot does not fall off the desk. 9. In an obstacle test, the robot is commanded to move forward toward a stationary obstacle (for example, a box or book) from multiple start distances for 20 trials. The robot must stop (or stop and turn) before making contact in at least 18/20 trials. |
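The stated priority ordering (cliff over obstacle over user command) reduces to a small decision function. A minimal C sketch follows; the VL53L0X reads are stubbed, and all thresholds are illustrative assumptions rather than calibrated values.

```c
/* Sketch of the stated control priority: cliff detection overrides
 * obstacle avoidance, which overrides voice motion commands. */
#include <stdio.h>

typedef enum { CMD_STOP, CMD_REVERSE, CMD_USER } Command;

static Command choose_command(double left_floor_mm, double right_floor_mm,
                              double front_mm, Command user_cmd) {
    const double CLIFF_BASELINE_MM = 25.0;  /* calibrated desk distance */
    const double CLIFF_DELTA_MM    = 30.0;  /* sudden increase => edge */
    const double OBSTACLE_MM       = 80.0;  /* forward safety distance */

    if (left_floor_mm  > CLIFF_BASELINE_MM + CLIFF_DELTA_MM ||
        right_floor_mm > CLIFF_BASELINE_MM + CLIFF_DELTA_MM)
        return CMD_REVERSE;                 /* highest priority: edge */
    if (front_mm < OBSTACLE_MM)
        return CMD_STOP;                    /* then: obstacle */
    return user_cmd;                        /* lowest: user/voice */
}

int main(void) {
    /* Hypothetical ToF readings: right corner sees past the edge. */
    Command c = choose_command(26.0, 90.0, 300.0, CMD_USER);
    printf("%s\n", c == CMD_REVERSE ? "reverse from edge" :
                   c == CMD_STOP    ? "stop for obstacle" : "follow user");
    return 0;
}
```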
||||||
| 7 | SolarTrack |
Rahul Patel Rishikesh Balaji Siddhant Jain |
Haocheng Bill Yang | Arne Fliflet | proposal1.pdf |
|
| Problem: Fixed solar panels waste potential energy due to changing sun positions and limited monitoring. Solution: This project proposes the design of a self-positioning solar panel system that automatically orients itself to capture the maximum possible solar energy throughout the day and stores that energy in a battery. Unlike fixed panels, the system continuously adjusts its angle using light sensors or a sun-position algorithm controlled by a microcontroller, ensuring the best alignment with the sun as conditions change. The harvested energy is routed through a charge controller to safely charge a battery while protecting against overvoltage, overcurrent, and deep discharge. In addition to energy generation and storage, the system includes a mobile or web application that displays real-time and historical data such as panel voltage and current, total energy generated (Wh), battery state of charge, system efficiency, and power consumption of connected loads. This application allows users to monitor performance, compare tracked versus fixed operation, and understand how environmental conditions impact energy production. Solution Components: Dual-Axis Tracking Mechanism The solar panels will be mounted on a two-axis articulating frame driven by servo and stepper motors. This will allow independent control of both the east-west orientation and the angle at which the solar panels are mounted, enabling the panels to follow the sun’s path across the sky through the day. Light Sensor Array We will use an array of photodiodes or LDR sensors to detect the light intensity at various positions in order to determine the optimal position for the panels (a simple tracking-loop sketch follows this entry). We could also implement an algorithm that calculates the sun’s theoretical position based on GPS coordinates for use during cloudy or partially shaded conditions. Maximum Power Point Tracking Charge Controller We will make use of a charge controller to interface between the solar panel and the battery and operate at the maximum power point. This will help us protect the battery from overcharging, over-discharging, and reverse current flow. Energy Storage and Management System We will incorporate voltage and current sensors to measure the output from the panels, battery charge/discharge rates, and load consumption. We will use these measurements to compute real-time power, cumulative energy, and system efficiency for performance analysis. Wireless Communication Module We will use a WiFi communication module to send system data to a local server or even a cloud-based server. This will allow remote monitoring, firmware updates, and long-term data logging for performance analysis of tracked and fixed-tilt operations. Mobile/Web Application Dashboard We will use an application that will visualize live and historical metrics, including but not limited to orientation angles, power output, energy yield, and tracking efficiency. With the help of this application, users will be able to analyze trends, receive fault alerts, and evaluate the energy gained from solar tracking under different environmental conditions. Criteria for success: The success of this project will be evaluated under the following criteria. Wi-Fi connection between the solar panel/battery and a local/cloud server. Tracking of statistics, such as angle, output, etc., for display later. A cache in which to store tracked statistics should the server be unavailable. Creation of a web app to display the tracked statistics.
Creation of an algorithm allowing for the solar panel to "follow" the sun. Integration of the algorithm onto a microcontroller + interfacing with light sensors and motors. |
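A simple version of the light-sensor tracking loop compares opposing sensor pairs and steps each axis toward the brighter side until the imbalance falls inside a deadband. A minimal C sketch, with ADC reads stubbed and the deadband an assumption:

```c
/* Sketch of the LDR tracking loop: compare opposing sensor pairs and
 * nudge each axis toward the brighter side. ADC reads and step sizes
 * are stubbed assumptions, not the final algorithm. */
#include <stdio.h>
#include <stdlib.h>

#define DEADBAND 20   /* ADC counts of imbalance we tolerate */

/* Stubs standing in for ADC reads of the four LDRs. */
static int read_east(void)  { return 512; }
static int read_west(void)  { return 610; }
static int read_upper(void) { return 540; }
static int read_lower(void) { return 535; }

static int axis_step(int a, int b) {      /* -1, 0, or +1 motor step */
    int diff = a - b;
    if (abs(diff) <= DEADBAND) return 0;
    return diff > 0 ? 1 : -1;
}

int main(void) {
    int azimuth_step   = axis_step(read_east(),  read_west());
    int elevation_step = axis_step(read_upper(), read_lower());
    printf("azimuth step %+d, elevation step %+d\n",
           azimuth_step, elevation_step);
    return 0;
}
```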
||||||
| 8 | Facial Quantum Matching Mirror |
Akhil Morisetty Alex Cheng Ethan Zhang |
Wesley Pang | Yang Zhao | proposal1.pdf |
Illinois Quantum and Microelectronics Park. |
| # Facial Quantum Matching Mirror Team Members: - Akhil Morisetty (akhilm6) - Alex Cheng (xueruic2) - Ethan Zhang (ethanjz2) # Problem Chicago is investing 500 million dollars in the development of the Illinois Quantum and Microelectronics Park. Professor Kwiat is looking for a viable prototype of a Facial Quantum Matching Mirror that he can show investors to persuade them to create a more expensive, museum-ready version. Our task is to create a visually appealing and functioning prototype that Professor Kwiat can show to investors to eventually add to the Illinois Quantum and Microelectronics Park. # Solution We propose a Facial Quantum Matching Mirror, an interactive display device that uses a one-way mirror and facial recognition to reflect a user’s likeness matched with well-known figures in selected categories such as engineers, scientists, or entrepreneurs. When the display is illuminated, the one-way mirror becomes transparent, allowing the user to see the matched character overlaid behind the glass. This creates the illusion that the user is “face-to-face” with a figure who resembles them, combining reflection, computation, and visual storytelling in a single interactive experience. The system consists of a one-way mirror, a display panel of equal size mounted behind the mirror, a surrounding LED light ring, a camera, local storage, a microcontroller, and a user input button, all integrated within a single frame. When the system is idle, the display remains dark, causing the mirror to behave as a reflective surface so the user sees only their own reflection. Upon pressing the button, the user selects a category and the system is activated. The microcontroller triggers visual feedback through the LED ring and commands the camera to capture an image of the user. This image is processed by the facial recognition backend, which identifies the most visually similar individual from the selected category. The result index is returned to the microcontroller, which retrieves the corresponding image from local storage and displays it on the screen. # Solution Components ## Subsystem 1: Display Unit This subsystem serves as the presentation and capture layer of the smart mirror. It uses an onboard camera to capture a photo of the person standing in front of the mirror, and a monitor behind a two-way mirror to render the user experience (UI prompts, loading screens, images, and optional video). During idle mode, the monitor remains black so the mirror looks fully reflective like a normal mirror. When the user presses the start button, the display transitions to a loading interface while the backend subsystems process the captured image and return a match. Once processing completes, the monitor displays the selected quantum scientist/engineer/entrepreneur (and any associated content), giving the mirror the appearance of an interactive digital mirror. Components: - 18’’ x 24’’ Wooden Picture Frame - SANSUI 24” 100Hz PC Monitor - 18” x 24” Glass Mirror - 18” x 24” 50% Reflective Film ## Subsystem 2: LED Sensor Unit This subsystem focuses on providing visual feedback to the participant throughout the interaction process. The LED Sensor Unit is activated after the participant presses the startup button and indicates that the system is processing the facial scan and matching operation. The LEDs will flash in a predefined pattern to signal that the system is active and working.
The LED Sensor Unit receives control signals from the system microcontroller and remains active until an “off” signal is sent by the display subsystem or system controller, indicating that the matched image or video has finished displaying. Once the off signal is received, the LEDs are turned off and the system returns to an idle state. The LED lights are mounted around the frame of the mirror to ensure high visibility and to enhance the overall user experience. Components: - Addressable LED strip: SEZO WS2812B ECO LED Strip Light ## Subsystem 3: Startup Button This subsystem starts the entire process for the project. The participant begins using the mirror by choosing options from a set of buttons available to them: one button selects the desired quantum category, and another starts the camera/scan process. The participant controls when they start the process. The buttons will be stationed next to where the participant stands, with wires connected to the microcontroller subsystem. Components: - Button: 2x16 LCD Display with Controller ## Subsystem 4: System Microcontroller The system microcontroller organizes and communicates between all the other subsystems in the project. All of the logic and transmission of data is handled by this subsystem (a sketch of the matching step follows this entry). Moreover, the software component of the project sends data back and forth between itself and the microcontroller. The system microcontroller is the overarching subsystem in the project, playing a role in every component of the solution. Components: - Microcontroller: ESP32-S3-WROOM-1-N16 # Criterion For Success - Participants are able to select the category they want to find a match in. - The system accurately matches the participant to a person in the selected topic: accuracy should be at least 75%. - After a match has been found, a personal video from the match is displayed. - The device does not start until the participant steps onto the pressure plate. - The LEDs surrounding the mirror should be on from when the user presses the button until the character image disappears. - The image on the monitor should be shown for up to 15 seconds and then return to a black screen. |
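The proposal leaves the facial recognition backend open, but the final matching step it describes (return the index of the most visually similar stored figure) can be sketched as a nearest-neighbor search over face embeddings. The C sketch below is illustrative only; the vector size and values are placeholders, and the embedding model itself is out of scope here.

```c
/* Sketch of the matching step: given a face embedding of the user and
 * stored embeddings for each figure in the chosen category, return the
 * index of the closest one (smallest squared Euclidean distance). */
#include <stdio.h>

#define DIM 4   /* real face embeddings are typically 128+ dimensional */

static double sq_dist(const double *a, const double *b) {
    double s = 0.0;
    for (int i = 0; i < DIM; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
    return s;
}

static int best_match(const double *user, const double gallery[][DIM], int n) {
    int best = 0;
    double best_d = sq_dist(user, gallery[0]);
    for (int i = 1; i < n; i++) {
        double d = sq_dist(user, gallery[i]);
        if (d < best_d) { best_d = d; best = i; }
    }
    return best;   /* index sent back to the ESP32 to pick the image */
}

int main(void) {
    double user[DIM] = { 0.2, 0.7, 0.1, 0.4 };       /* placeholder data */
    double gallery[3][DIM] = { { 0.9, 0.1, 0.3, 0.8 },
                               { 0.3, 0.6, 0.2, 0.5 },
                               { 0.1, 0.9, 0.9, 0.1 } };
    printf("matched index: %d\n", best_match(user, gallery, 3));
    return 0;
}
```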
||||||
| 9 | Automated Cocktail/Mocktail Maker |
Benjamin Kotlarski Dominic Andrejek Nick Kubiak |
Wesley Pang | Joohyung Kim | proposal1.pdf |
|
| # Automated Cocktail/Mocktail Maker Team Members: - Dominic Andrejek (da24) - Benjamin Kotlarski (bkotl2) - Nick Kubiak (nkubi2) # Problem Making cocktails or mocktails can be tricky: many different ingredients must be accurately measured, and the process is prone to human error. In social settings it can also be somewhat time-consuming and inconvenient. While some automatic drink dispensers already exist, most are expensive, very large, and have many limitations. # Solution An automated cocktail and mocktail mixing machine can fix this. Based on a user's input, the system will dispense a precise amount of the specific liquids needed to make that drink. There will be a sensor to check for cup presence so liquid is not spilled everywhere, and user input will be done through buttons, a small graphical UI, or potentially voice. The system will contain multiple containers to hold liquids, with pumps or solenoids controlled by the microcontroller to connect them to the cup. Weight sensors will also be implemented to make sure the correct amount of each liquid is dispensed. In use, if a cup is present and a user gives a recognizable command from the pre-defined recipes, the microcontroller will activate the appropriate pumps/solenoids to drive the correct ingredients to the cup to make the desired drink. Our design will be a more affordable solution (using much cheaper materials) so that more residential users can enjoy the precision and luxury of properly measured drinks without needing external premade pods or paying absurd prices. # Solution Components ## Subsystem 1 - User Interface (UI) Initially this will be a simple push button; in the future it could expand or be reworked into a more complex screen-based UI after expanding the pump system so that there are more drink options with more ingredients. Expansion to multiple buttons for multiple drinks is also a simpler option once we get an initial drink working. An initial button we can use is the 16mm Panel Mount Momentary Pushbutton - Red (Product ID: 1445). ## Subsystem 2 - Stirring Mechanism Since these drinks are produced from two different liquids, they naturally need to be mixed by some means. Thus, the purpose of this subsystem is to automate the drink-stirring process. To do so, it will require two different motors connecting back to the system's microcontroller. The first motor will control the height of the stirring arm so that it can be lowered into and out of the cup, while the second motor will rotate the stirrer to mix the drink. In terms of motors able to do this, we found that the linear up/down motion could be handled with a DFRobot FIT0806, while the rotational motion can be done with an Adafruit 3777. These are both DC powered, so we will use batteries to power our system; if we find issues with either draining batteries very quickly or requiring higher power output, then we will implement an AC-to-DC converter to allow us to use wall power. ## Subsystem 3 - Pumps and Plumbing This subsystem will transport the liquid from our housing containers to the central cup. Upon the signal from the microcontroller, the pumps will turn on, transporting the liquid from the housing container to the cup through small tubes.
Once one pump finishes dispensing its liquid and the amount is verified as correct, the next pump will turn on and dispense the next liquid into the central cup. For the tubing we will use small plastic tubing: 1/6 in. I.D. x 1/4 in. O.D. clear vinyl tubing. The pumps we will use to control the liquid flow are the Adafruit 1150. ## Subsystem 4 - Intercomponent Communication System A microcontroller system to communicate between all of the subsystems. For example, on a user input the microcontroller will tell the pumps to start taking liquid from the housing containers to the central cup. This also includes various colored LEDs to note the status of each step or whether something might be wrong. This subsystem will tell the pumps if/when to dispense, how much to dispense, and how the motors should be moving. We will use an ESP32 microcontroller along with our custom PCB. ## Subsystem 5 - Functionality and Weight Verification System Weight sensors will verify the amounts/presence of liquids and also verify that a cup is present. Each container (both the liquid housings and the central cup) will have a weight sensor below it. The weight sensor below the central cup will have two purposes. First, the microcontroller must read a non-zero/small value from it, along with a user input, to start dispensing liquid. It will also make sure that the amount of liquid lost from the liquid housing (based on weight lost) is regained in the central cup, so we know all liquid is fully transferred and is not stuck in the tubing (a dispensing-loop sketch follows this entry). Candidate weight sensors are the Adafruit 4540, SparkFun TAL220, and Adafruit 454; this decision will be based on the weight limit we determine necessary for our application in the design phase. Regardless of which one is chosen, all of these require the addition of an amplifier to function, for which we will use the HX711 and its driver library. # Criterion For Success In its most fundamental and basic form, our project must be able to successfully produce at least one simple stirred cocktail upon a user's input. This must effectively include the following functions. First is the ability to check whether a cup is present before pouring any liquids, as well as check if there is the right amount of the necessary liquids before pouring. Once that is complete, the stirring and pouring mechanisms should move down into place, and the different liquids get individually poured into the cup. The amounts of each liquid should be measured via the weight sensor below the cup so that each time the drink is produced the portions remain consistent. After each respective liquid is poured into the cup, the stirring device should clearly activate, and when it finishes, the stirring and pouring mechanisms should move back up to their starting positions, with a green LED indicating that the process was completed. If time permits, however, we hope to expand our goals in three different ways. The first is to expand the selection of drinks by having multiple options available to choose from. An additional and slightly different approach to expanding the drink selection would be to incorporate more complex options that require multiple different ingredients. The final goal we hope to achieve would be a more complex and visually appealing UI so that users can easily select between and see different drink options on a screen.
# Alternatives There are three categories of alternatives that currently exist relating to our proposed project. The first is a coaster-like device that connects to a phone app via Bluetooth to weigh the amounts of liquid you add to your cup and guides you in making your drinks. This product, while the least expensive of the options, is by far the simplest and the least automated. The next alternative is a fully automated drink creator that works by having users insert a flavor pod for their desired drink, which it mixes with the correct liquor. While this one comes closer to performing the same function as our idea, its price goes drastically up, and it requires users to purchase the company's specific flavor pods. Finally, the alternative most similar to our design is the Barsys 360 Cocktail Maker Machine, which also takes in various liquids and dispenses them accordingly for whatever mixed drink one desires, but that's where its functionality ends. Besides once again having a very large price tag, it does not automatically stir the drinks for the user. Commercial-grade versions of this type of machine also exist, but these jump even further in price, to around three thousand dollars. |
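The weight-verification idea in Subsystem 5 suggests a simple gravimetric dispensing loop: run one pump until the cup scale gains the target mass, with a timeout to catch liquid stuck in the tubing. A minimal C sketch under those assumptions, with the HX711 read and pump control stubbed:

```c
/* Gravimetric dispensing sketch: pump until the cup gains the target
 * mass, or give up after a timeout. Scale and pump I/O are simulated. */
#include <stdio.h>
#include <stdbool.h>

static double cup_grams = 0.0;                 /* simulated HX711 reading */
static double read_cup_scale(void) { return cup_grams; }
static void set_pump(int id, bool on) {
    printf("pump %d %s\n", id, on ? "ON" : "OFF");
}

/* Returns true if the full target mass landed in the cup. */
static bool dispense(int pump_id, double target_g, int max_ticks) {
    double start = read_cup_scale();
    set_pump(pump_id, true);
    for (int t = 0; t < max_ticks; t++) {
        cup_grams += 5.0;                      /* simulated inflow per tick */
        if (read_cup_scale() - start >= target_g) {
            set_pump(pump_id, false);
            return true;
        }
    }
    set_pump(pump_id, false);
    return false;       /* timeout: liquid may be stuck -> error LED */
}

int main(void) {
    /* Two-ingredient drink: 60 g of ingredient 1, then 90 g of 2. */
    bool ok = dispense(1, 60.0, 100) && dispense(2, 90.0, 100);
    puts(ok ? "drink complete, stir next" : "dispense fault");
    return 0;
}
```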
||||||
| 10 | OmniSense-Dual — Dual-Wearable 360° Blind-Spot Detection, Directional Haptic Hazard Alerts, and Belly-Based Navigation for Pedestrian Safety |
Alex Jin Jiateng Ma Simon Xia |
Wesley Pang | Yang Zhao | proposal1.pdf |
|
| Team Members: - Simon Xia (hx17) - Jiateng Ma (jiateng4) - Alex Jin (jin50) **1. Problem Statement** Pedestrians in urban and campus environments frequently share space with bicycles, e-scooters, cars, and other pedestrians approaching from all directions. Unlike drivers, who benefit from mirrors and active driver-assistance systems, pedestrians have: - Unprotected blind spots Fast-approaching objects from behind or from diagonal sectors are often perceived too late, especially on shared bike/pedestrian paths and narrow sidewalks. - Reduced situational awareness Headphones, smartphones, and other distractions degrade auditory and visual awareness, making it harder to detect hazards or notice subtle visual cues. - Navigation burden Outdoor and indoor navigation typically depend on visually checking a smartphone map or listening to voice guidance. Both approaches demand attention, occupy hands or ears, and can themselves be unsafe in traffic or crowded environments. For visually impaired users, relying solely on audio is also not ideal. Existing systems (smartphone maps, voice navigation, cycling radars, blind canes) each address part of the problem but do not provide integrated 360° safety sensing plus hands-free navigation with clear separation of meaning. **2. Solution Overview** We propose OmniSense-Dual, a dual-wearable system consisting of: - A waist/belly-mounted sensing, compute, and navigation haptic module, and - A head-mounted sensing + haptic hazard alert module Key design choice: - Head channel = hazard alerts only - Belly channel = navigation cues only This cleanly separates “something is dangerous around you” from “where you should go.” Core functions: - 360° Blind-Spot Hazard Awareness The belly module uses mmWave and ToF/ultrasonic sensors to detect approaching objects around the torso. The head module provides an additional sensing plane for head-level obstacles. When a hazard is detected, the headband vibrates on the corresponding side/direction, signaling an urgent warning. Hands-Free Navigation - A smartphone app provides waypoints (outdoor via GPS; optionally indoor via BLE/UWB). The belly module fuses waypoints with IMU heading and encodes navigation instructions as gentle vibration patterns on the belly module (e.g., left side of belt = turn left soon). Navigation never uses the head motors, so it cannot be confused with hazard alerts. OmniSense-Dual is designed for campus walking, urban commuting, and accessibility support, with a strong emphasis on non-visual, non-auditory, and clearly distinguishable feedback. **3. Solution Components** **Component A: Waist/Belly Perception & Compute Module** Placement: Worn around waist or belly using elastic belt. Sensors: - Rear + Rear-Diagonal (L/R): mmWave radar (60 GHz) - Left + Right: ToF (e.g., VL53 series) or ultrasonic - Front-Lower: ToF/IR for low obstacles (curbs, poles, steps) Functions: - Provides 360° sensing at waist plane - Detects moving vs static obstacles - Includes 6-DoF IMU for heading + gait - Includes battery + charger + regulators - Belly haptics used only for navigation **Component B: Head-Mounted Hazard Alert Module** Placement: Headband, cap insert, or lightweight strap. 
Haptic Feedback: 8 directional motors placed at: - Front (0°) - Front-Left (45°) - Left (90°) - Rear-Left (135°) - Rear (180°) - Rear-Right (225°) - Right (270°) - Front-Right (315°) Electronics: - Small BLE SoC/MCU - Optional short-range ToF for head-height obstacles - Small battery or wired power from belt Role: - Only hazard alerts - No navigation patterns **Component C: Navigation & Belly Haptic Interface** Input Source: Phone provides route via GPS (outdoor) or BLE/UWB (indoor). Processing on Belt Module: - Receives desired bearing from phone - Computes angle difference using IMU - Triggers haptic cue on belt **Component D: Safety Hazard Logic** Inputs: - mmWave + ToF/ultrasonic - Optional head ToF - IMU heading Hazards Detected: - Approaching fast objects (bike, scooter) - Sudden close static obstacles - Rear or diagonal intrusion - Low objects in walking path Head Feedback Patterns (Hazard Only): - Default hazard → strong 0.5–1.0 s pulse in the correct motor direction (a bearing-to-motor mapping sketch follows this entry) - High severity → repeated strong pulses - Multiple hazards → priority by time-to-collision **Component E: Electronics & PCB** Belly PCB Includes: - MCU (e.g., STM32H7 or ESP32-S3) - Sensor interfaces (mmWave, ToF, IMU) - BLE for phone + headband - Haptic drivers for belt motors - Li-ion charging + regulation Head PCB Includes: - BLE SoC (e.g., nRF52832/ESP32-C3-Mini) - 8 motor drivers (directional) - Optional ToF - Small battery or connector **4. Criterion for Success** Safety - Detect bikes/scooters ≥ 5 m away with ≥90% recall - Head direction correctness ≥90% - Alert latency ≤250 ms - Dual-plane sensing reduces occlusion misses ≥30% Navigation - Turn accuracy using belly haptics ≥85% - Heading deviation during “straight” ≤10° - Navigation update latency ≤200 ms Channel Separation - Head = hazard, belly = navigation - User classification accuracy (hazard vs nav) ≥90% Usability - Battery life ≥4 hours - Total mass ≤350 g (head ≤150 g) |
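Mapping a hazard bearing to one of the eight head motors (spaced 45° apart as listed above) is a rounding operation. A minimal C sketch, assuming bearings are measured in the same angular convention as the motor layout:

```c
/* Sketch of mapping a hazard bearing (degrees, in the convention of the
 * motor layout above) to one of eight motors 45 degrees apart. Rounding
 * to the nearest sector is an assumption about the final design. */
#include <stdio.h>

static int bearing_to_motor(double bearing_deg) {
    while (bearing_deg < 0.0)    bearing_deg += 360.0;  /* normalize */
    while (bearing_deg >= 360.0) bearing_deg -= 360.0;
    return (int)((bearing_deg + 22.5) / 45.0) % 8;      /* nearest sector */
}

int main(void) {
    const char *names[8] = { "Front", "Front-Left", "Left", "Rear-Left",
                             "Rear", "Rear-Right", "Right", "Front-Right" };
    /* e.g., a bike closing in from roughly the rear-left at 130 degrees */
    int m = bearing_to_motor(130.0);
    printf("pulse motor %d (%s)\n", m, names[m]);
    return 0;
}
```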
||||||
| 11 | Ant-weight Durian Battlebot |
Matthew Jin Timothy Fong Ved Tiwari |
Zhuoer Zhang | Viktor Gruev | proposal1.docx |
|
| # Title Ant-Weight Durian Battlebot # TEAM MEMBERS: - Matthew Jin (mj41) - Tim Fong (tfong5) - Ved Tiwari (vedt2) # PROBLEM We want to design an Ant-weight Battlebot that can outlast and tactically out-compete other entries in the competition. Several restrictions/requisites (outlined by the National Robotics Challenge rulebook) are as follows: - Robot must be under 2 lb (we are not opting for a bipedal/quadrupedal robot) - Usage of an H-bridge motor system - No metal components whatsoever - Weaponry (either passive or active) - Power delivery system (battery) - Usage of sensors/actuators - Must be 3D printed using one or more of 5 materials: PET, PETG, ABS, PLA, PLA+ - Custom PCB to house a microcontroller - Microcontroller must have Bluetooth or WiFi capability to be controlled externally via a nearby PC/laptop - Simple and complete manual shutdown (within 60 s) without the usage of an RF link # SOLUTION We collaboratively decided on a Battlebot design with a passive/counter-type weapon: spikes that cover the outer shell (resembling a Durian shell with rounded, shallower spikes). Numerous other countermeasures and engineering decisions have been incorporated to account for tactics employed by other participating teams. Unlike other common approaches, the absence of an active weapon allows weight to be allocated in other directions. With this passive weaponry, the burden of disarming/decommissioning the competition falls to microcontroller-initiated driver-assistance algorithms and the shell armor design. You’re in trouble. # SOLUTION COMPONENTS ## PASSIVE WEAPONRY The shell spikes are intentionally shallow and rounded to prevent chipping and to maximize structural integrity under impact. This will prove useful against many active weapon forms, namely the hammer and rotary-type Battlebots, in head-on collisions. ## OUTER SHELL The absence of an active weapon gives more wiggle room to make the outer shell thicker. To counter Battlebots with forklift/door-wedge armaments that aim to flip us over, we will intentionally minimize the clearance between the bottom lip of the shell and the bottom of the wheels. Additionally, the shell will be thicker toward the middle/base (compared to the top) to create an even lower center of gravity. This shell will be 3D printed using PETG, given its functional robustness in the context of this Battlebot competition. It is durable, impact resistant, non-brittle, and warp-resistant during the printing process. ## ELECTRONICS Preface: This section covers the sensors, battery system, the microcontroller, AND the electronics + battery trays. We decided to use an STM32 microcontroller over other popular microcontrollers (namely the ESP32) due to its superior compute power. The STM32 gives us a better ability to perform algorithmic computations on board from data collected by our sensors. An example use case is determining if and when the bot is close to flipping over. By estimating the tilt angle from the gyroscope and accelerometer on the IMU, we can command the wheels to spin at a speed that reduces the chances of flipping, as sketched below. Apart from this, the STM32 provides us with native Bluetooth and WiFi support out of the box, eliminating the need to configure separate chips in the microcontroller setup.
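As a hedged illustration of the anti-flip idea above (not the final firmware), a complementary filter can fuse the IMU's gyroscope and accelerometer into a tilt estimate; the constants and input names below are assumptions.

```cpp
// Sketch only: complementary-filter tilt estimate used to flag near-tipping.
#include <cmath>

static float tiltDeg = 0.0f;              // filtered tilt estimate
const float ALPHA = 0.98f;                // gyro weight; accel corrects drift
const float TIP_THRESHOLD_DEG = 45.0f;    // assumed trip point

// gyroRateDps: tilt rate from the gyro; accelTiltDeg: tilt from the gravity vector.
void updateTilt(float gyroRateDps, float accelTiltDeg, float dtSec) {
    tiltDeg = ALPHA * (tiltDeg + gyroRateDps * dtSec) + (1.0f - ALPHA) * accelTiltDeg;
}

bool nearTipping() { return std::fabs(tiltDeg) > TIP_THRESHOLD_DEG; }
```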
For the battery, we have chosen a 4S (14.8 V) 750 mAh LiPo battery, as it provides ample flexibility between power and charge capacity, both of which are important for a nimble Battlebot that can last the entire contest. This battery will be stored in a lower-level tray (again, to lower the center of gravity) to protect it. Additionally, a dedicated battery-protection IC will be utilized. A buck converter will step down the 14.8 V rail to the lower voltages required by the microcontroller and other components. We will use an IMU and load sensors to create two feedback systems: the first between the microcontroller and the Battlebot’s localization, and the second between the microcontroller and the motor health/status. The goal of these two systems is to ensure that the Battlebot’s movement is accurate and that its motors do not malfunction. Alongside the sensors, the microcontroller/PCB will be located in an upper-level tray above the battery tray. ## DRIVETRAIN As outlined in Professor Gruev’s slides, we are to use an H-bridge system. We’ve opted for a multidirectional 4WD setup with the wheels attached to the inner perimeter of the shell. This approach allows fluid motion while simultaneously shielding the wheels from external impacts. Wheels will be made of urethane, as they are heavy (contributing to a lower center of gravity), durable, offer good grip, and wear slowly. Brushless DC motors will be used due to their high power-to-weight ratio and long, reliable lifespan. # CRITERION FOR SUCCESS - Battlebot electronics are well-protected, functional, and durable - Outer shell does not break under expected impact - Spikes do not chip and prove effective in using others’ active weapons against them - Battlebot does not flip over during trial runs/competition scenario reenactments - Battery lasts the entire combat duration |
||||||
| 12 | 4-Wheel-Drive Invertible Ant-Weight Battlebot |
Haoru Li Ziheng Qi Ziyi Wang |
Zhuoer Zhang | Viktor Gruev | proposal1.pdf |
|
| # Ant Weight Battlebot Team Members: - Ziyi Wang (zw67) - Ziheng Qi (zihengq2) - Haoru Li (haorul2) # Problem For ant-weight battlebots, 3D-printed materials introduce significant vulnerabilities. Though many robots can effectively defend against strikes, they are prone to "turtling" and may lose mobility when flipped. Under the competition rules, losing mobility will quickly lead to a knockout. When inverted, weapon systems such as vertical spinners may rotate in an ineffective direction or lose engagement with the opponent entirely, significantly reducing combat effectiveness. Preserving weapon functionality in both orientations remains a critical challenge for ant-weight combat robots. In addition, sudden high-impact collisions can introduce transient power spikes and voltage fluctuations in the power distribution system, which may disrupt onboard electronics or cause overall system instability during operation. # Solution We want to design an invertible 4-Wheel-Drive battlebot with a vertical drum spinner. According to our investigation, a vertical drum spinner is an ideal weapon choice as it is rigid and can effectively flip opponents. To solve the problem of "turtling," the robot uses a symmetric chassis with wheel diameters exceeding the total chassis height, ensuring traction regardless of orientation. The bigger wheels also allow the battlebot to function even after being flipped, and the vertical drum can change its spin direction as well. To address the cognitive load of inverted driving, we integrate an onboard IMU that automatically detects a flip and remaps the motor control logic in the firmware, making the transition seamless for the operator. To ensure electrical stability and prevent brownouts, the custom PCB utilizes a decoupled power architecture. We isolate the high-current weapon system from the sensitive logic rails using a high-efficiency switching regulator and a large bulk capacitor array. The robot is divided into three primary subsystems: Power Management, Control & Sensing, and Drive & Weapon Actuation. # Solution Components ## Subsystem 1: Power Management and Distribution Provides stable, isolated power delivery to all robot subsystems while meeting the 24 V maximum battery voltage requirement. Detailed specifications await motor selection. ## Subsystem 2: Control and Communication Function: Receives operator commands, processes IMU orientation data, and generates appropriate motor control signals with automatic inversion compensation. *Components:* * Microcontroller: ESP32-WROOM-32D module with integrated WiFi/Bluetooth * Part: Espressif ESP32-WROOM-32D * IMU Sensor: 6-axis accelerometer and gyroscope module * Part: InvenSense MPU-6050 (GY-521 breakout module) * Interface: I2C communication at 400 kHz Firmware Logic: - Continuously poll the IMU at 100 Hz to determine Z-axis orientation. - If Z-acceleration indicates inversion (threshold: -8 m/s² to -10 m/s²), apply a 180° phase shift to the drive motor PWM signals to fit the pose change. - Maintain weapon control polarity regardless of orientation. - Implement an exponential response curve on drive inputs for fine control. A sketch of this logic follows.
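Below, readAccelZ and setDrivePwm are placeholders for the team's actual drivers; the thresholds mirror the ones stated above.

```cpp
// Illustrative inversion compensation: hysteresis on Z-acceleration, then a
// sign flip on drive commands only (weapon polarity is left untouched).
extern float readAccelZ();                // m/s^2; ~ +9.8 upright, ~ -9.8 inverted
extern void setDrivePwm(float left, float right);

static bool inverted = false;

void pollOrientation() {                  // called at 100 Hz
    float az = readAccelZ();
    if (!inverted && az < -8.0f) inverted = true;     // matches the -8 m/s^2 threshold
    else if (inverted && az > 8.0f) inverted = false; // hysteresis avoids chattering
}

void drive(float throttle, float turn) {  // operator inputs in [-1, 1]
    if (inverted) { throttle = -throttle; turn = -turn; }  // 180° remap
    setDrivePwm(throttle + turn, throttle - turn);
}
```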
## Subsystem 3: Drive Train Provides four-wheel independent drive with sufficient torque for pushing and maneuverability. Components: * 4 Drive Motors with an expected weight of ~10 g each ## Subsystem 4: Weapon System Vertical drum spinner delivering kinetic energy impacts to destabilize and damage opponents. Performance Targets: - Weapon tip speed: 150-200 mph (conservative for material constraints) - Spin-up time: <3 seconds to operating speed ## Subsystem 5: Chassis and Structure Provides impact-resistant housing for all components while maintaining invertible geometry and meeting weight requirements. # Criterion For Success 1. The total weight of the battlebot should always remain below 2 lb, and the robot should execute a complete motor shutdown within 2 seconds once triggered by a software or hardware switch. 2. Logic systems (ESP32, IMU) must maintain operation during weapon spin-up and simulated impact loads, and the communication link must stay active. 3. The robot works as expected: it moves according to PC inputs without manual adjustment; the weapon spins vertically; it shuts down in time according to PC commands; and it is self-adaptive when flipped (retaining mobility and weapon functionality). 4. The chassis and mounting structures must withstand repeated weapon engagement and collisions without structural failure. |
||||||
| 13 | Invertible-Control Ant-Weight Battle Bot |
Ben Goldman Jack Moran |
Haocheng Bill Yang | Viktor Gruev | proposal1.pdf |
|
| **TEAM MEMBERS:** - Jack Moran (jackm6) - Ben Goldman (bg23) **PROBLEM:** The primary objective is to create a bot weighing under 2 lbs that can disable an opponent in an ant-weight combat battle bot match in a confined space. Winning such a match often requires a high skill level to pilot a robot, especially as bots get flipped or lose control when other bots attack. Additionally, many bots suffer from reliability issues as teams overcomplicate the robotics, which leads to vulnerabilities. We need a solution that maximizes weapon power while simplifying the driving experience for the operator, so all they need to focus on is planning attacks against opponent bots. **SOLUTION:** We propose a 2 lb combat battle bot designed to deliver catastrophic blows to opponents using a double-sided horizontal spinning bar, with an easy-to-use control system to allow for efficient battle. The chassis will feature a large primary weapon consisting of a horizontal spinning bar capable of delivering powerful attacks after winding up, due to its high inertia. This primary weapon will stick out of the front. The sides and back of the bot will be rounded in shape with no sharp edges or corners in order to deflect attacks and prevent opponents' weapons from grabbing on. For controls and movement, the bot will feature two wheels to enable a tank-like steering system. These wheels will be enclosed within the body of the bot so that only a small section, where each wheel contacts the ground, protrudes from the top and bottom of the bot. Small skid sections will allow the remainder of the body to stay low to the ground while still moving easily on smooth surfaces. Since the bot will have a weapon, defense system, and wheels that can operate in either orientation, it will be capable of operating when flipped. However, whenever the bot is inverted, the steering and controls would be reversed, making it hard to command. To combat this, we will include an IMU sensor to detect if the bot has been flipped. The controls would then be inverted automatically so that the driver does not need to track the orientation of the robot and can focus on directing the weapon toward opponents. The bot would be controlled from the driver's laptop. **SOLUTION COMPONENTS:** **Subsystem 1: Mobility and Drive System** This subsystem is responsible for the mobility and driving capabilities of our bot. The bot needs to be highly mobile and fast in order to evade and attack other bots. In addition, this system will need to be capable of operating no matter the orientation of the bot. Using two motors for mobility will allow the bot to turn very efficiently using tank-like steering. - Drive type: Differential wheeled drive (two motors). - Wheel placement: Wheels recessed inside the chassis to protect against direct impacts. Each wheel only slightly protrudes from the top and bottom of the chassis. - Motors: High-torque brushed DC gearmotors sized for ant-weight limits. - Control: Independent left/right motor control via H-bridges on the custom PCB. **Subsystem 2: Spinning Weapon System** The main weapon of our battle bot is a horizontal spinning bar. This piece will be 3D printed in a manner that makes it very strong so it will not break on impact. It will be driven by the bot's third motor. In addition, this weapon must comply with ant-weight regulations; therefore, it must stop completely within 60 seconds of shutoff.
The weapon provides offensive capability while keeping mechanical complexity to a minimum. - Weapon type: Horizontal spinning bar. - Actuation: Brushed DC motor, belt driven or directly driven. - Safety: Software-controlled spin-up sequence and current monitoring to prevent overcurrent or unsafe startup. **Subsystem 3: Orientation Detection and Control Inversion** The battle bot will use an IMU sensor to help the driver control the bot. When flipped upside down by other bots, this bot will detect the inversion and invert all controls. This allows the driver to focus on attacking and evading other bots rather than wasting energy figuring out how to control an upside-down bot with reversed controls. - Sensor: 6-axis IMU (accelerometer + gyroscope). Potential option: MPU-6050 - Function: Detect robot orientation (upright vs inverted). - Control logic: Automatically invert motor commands when inverted so “forward” and “turn” remain intuitive to the operator. **Subsystem 4: Control Electronics and Custom PCB** The PCB and control electronics are responsible for the main control and communication of our robot. Our microcontroller will be the central controller, receiving operator commands and translating them into control signals. It will interface with the IMU to determine the robot’s orientation and apply the correct control logic accordingly. This subsystem also monitors our safety conditions: it will kill all motors and enforce failsafe behavior for our weaponry if communication is lost or there is a fault (a failsafe sketch follows the success criteria). - Microcontroller: ESP32 (Wi-Fi or Bluetooth control). Potential option: ESP32-WROOM-32E - Wireless control: PC-based controller via Wi-Fi/BLE; this capability is built into the ESP32 - Motor drivers: Custom H-bridge circuits for left drive, right drive, and weapon motor. - Power management: LiPo battery. Potential option: Turnigy Nano-Tech 3S LiPo. Includes voltage regulation for logic (3.3 V) and current sensing for protection. - Safety features: Hardware kill switch. Automatic shutdown on RF link loss **Subsystem 5: Mechanical Design and Fabrication** The body of the bot will be primarily 3D printed and will adhere to all requirements of an ant-weight battle bot. Primarily, this means the bot will weigh in under 2 lbs for competition. The chassis will open to allow building and working on the bot, including access to the PCB, microcontroller, battery, and motors. The chassis will also provide the primary defense by being smooth and rounded everywhere other than the front, where the weapon protrudes. This prevents attacks from spinning weapons or claw-like devices from doing damage. In addition, weight distribution will be optimized to keep the center of mass low and stable. - Materials: PLA+, PETG, or ABS. - Weight limit: ≤ 2 lb total robot mass. - Manufacturing: Fully 3D-printed chassis with modular access to electronics. **CRITERIA FOR SUCCESS:** **Mobility and Drive System** - The robot remains fully drivable when inverted. - The robot contains two wheels directly driven by motors such that the front, back, and sides of each wheel are protected by the chassis. **Spinning Weapon System** - Uninterrupted high-speed 360-degree rotation is possible in both directions. - After an impact, the spinning weapon immediately starts to spin up again. - The control system has an operational killswitch which shuts down all operations of the bot. - Weapon comes to a complete stop within 60 seconds after shutoff.
**Orientation Detection and Control Inversion** - Sensors detect both upright and inverted positions, which are displayed on the laptop controlling the bot. - Controls are inverted when the bot is upside down and return to normal when upright, based on the IMU. - Controls invert within 300 ms after the bot flips. **Control Electronics and Custom PCB** - The robot passes all safety shutdown tests required by ant-weight battle bot rules. - The custom PCB operates reliably without overheating or brownouts, meaning it remains operational for ten or more minutes. **Mechanical Design and Fabrication** - The chassis of the battle bot weighs in under 2 lbs. - The chassis of the battle bot is smooth and curved with no sharp corners other than on the main spinning weapon. - The robot is competition-ready and able to participate in the ECE 445 ant-weight battle bot event. |
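The comm-loss failsafe referenced in Subsystem 4 could look roughly like the following; millis(), stopAllMotors(), and the 500 ms timeout are placeholders, not the team's chosen values.

```cpp
// Sketch only: kill drive and weapon motors if operator packets stop arriving.
#include <cstdint>

extern uint32_t millis();                // ms since boot (Arduino-style)
extern void stopAllMotors();             // drives all H-bridge outputs to off

const uint32_t LINK_TIMEOUT_MS = 500;    // assumed threshold
static uint32_t lastPacketMs = 0;
static bool failsafeLatched = false;

void onOperatorPacket() { lastPacketMs = millis(); failsafeLatched = false; }

void safetyTick() {                      // called continuously from the main loop
    if (!failsafeLatched && millis() - lastPacketMs > LINK_TIMEOUT_MS) {
        stopAllMotors();                 // hold failsafe until the link returns
        failsafeLatched = true;
    }
}
```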
||||||
| 14 | PocketScope |
Aaron Holl Caleb Peach Rohan Nagaraj |
Lukas Dumasius | Craig Shultz | proposal1.pdf |
|
| # Team Members: - Rohan Nagaraj (rohan14) - Aaron Holl (amholl2) - Caleb Peach (calebrp2) # Problem Most signal generators and oscilloscopes are large laboratory instruments. They are also very costly and usually reserved for universities and company labs. Currently, there is no cheap, pocket-sized, convenient, and compact signal generator/oscilloscope designed for electricians, hobbyists, and engineers to use in the field while troubleshooting electrical problems. # Solution With advancements in microcontroller technology (specifically cheaper, smaller, and more powerful devices), our team can create a handheld, pocket-sized, two-in-one oscilloscope and signal generator. It will include an LCD touchscreen to display a user interface with a time-versus-voltage/current plot, options for generated signals, and other features for quick measurements such as a voltmeter and ohmmeter. It will also include software-based analysis tools such as FFT, curve fitting, and the ability to export data as a CSV to a computer. Software, ADC, and DAC functionality can be handled through an ESP32 or a similar microcontroller. Basic circuit design using op-amps and voltage dividers can be used to scale larger input signals down to ranges acceptable for the microcontroller’s ADC. The user interface software can be implemented using C and Python. # Solution Components ## Subsystem 1: Voltage and Current vs Time This subsystem will take a real-world signal ranging over [-20 V, 20 V] and scale it down to a 0 to 3.3 V range, since this is the typical input range for a microcontroller’s ADC. We can do this mathematically by dividing the signal by a scaling factor (implemented in a circuit with a voltage divider) and adding an offset (using an op-amp adder circuit) to get it into the suitable range. For example, dividing by roughly 12.1 and adding a 1.65 V offset maps ±20 V onto approximately 0 to 3.3 V. We will use an LM741 op-amp to do this, since it is one of the most popular and widely used op-amps in circuit design. Our microcontroller will be an ESP32 or STM32, since it has an onboard ADC that can read voltages in the 0 to 3.3 V range. It also has the computing ability for small-scale waveform-versus-time graphics and can handle other DSP-intensive threads. ## Subsystem 2: LCD Touchscreen This subsystem will display our application code written in C, Python, and possibly Arduino. It will display the voltage/current waveforms, show menus for signal generation, display spectrogram readings, show analysis tool details, and provide primary control over the device. We will use a bare LCD capacitive-touch display which communicates with our microcontroller over SPI. Adafruit provides a suitable display (https://www.adafruit.com/product/2090) that can be used for this. ## Subsystem 3: USB-C Charging and Computer Exportability - A USB-C PCB mount on our custom PCB will allow for microcontroller programming, battery re-charging, and exporting a .CSV file from the microcontroller to a connected computer - USB-C will support USB 2.0 full speed at 12 Mbps, since this is fast enough to transfer CSV data and machine code to the microcontroller without having to worry about impedance-controlled traces on the D+ and D- lines.
- The UJ20-C-H-C-4-SMT-TR (USB-C PCB mount) will give us this connectivity - USB-C also natively supports a 5 V power supply over the VBUS terminal, so we can use this to charge a rechargeable lithium-ion battery that keeps the device mobile ## Subsystem 4: Time-Varying FFT (Spectrogram) of Input Signal - In software, we will implement a short-time Fourier transform algorithm to show a real-time spectrogram of the input signal - We do this by sampling the signal in short windows, taking the FFT of the instantaneous waveform, displaying it, and then repeating the process in real time so the user can accurately see how the frequency components of the signal change over time ## Subsystem 5: Waveform Signal Generation The user will be able to choose between the following predefined waveform shapes: - Rectangle Wave - Triangle Wave - Sine Wave - Sawtooth Wave - Pulse Signal - Gaussian Noise function These will be generated by the microcontroller (ESP32 or STM32) via PWM through a GPIO pin and amplified to a 0 to 5 V range through an op-amp amplifier (again using the LM741). The frequency, phase, duty cycle, and amplitude of the waveforms will be fully customizable by the user. ## Subsystem 6: Machine Learning Algorithm for Input Waveform Analysis - Implement a machine-learning-based parameter estimation algorithm using gradient descent to fit mathematical models to measured input waveforms (see the sketch at the end of this proposal) - We will base our algorithm on an Nth-order polynomial fit (where N is parameterized by the user, allowing a more accurate fit) - This can be used to characterize transient behavior, dynamic response, and system properties related to impulse and frequency response # Criterion For Success - The device needs to be portable, such that the entire structure fits comfortably in your hand and ideally within a pants or jacket pocket. - The device needs a battery system that supports at least a couple hours of use, to serve users who may be unable to plug the device into an outlet while using it. - The device needs to read any arbitrary signal within a -20 V to +20 V range and display it accurately on the screen. - The screen needs to be easy to read, and the interface must be concise and unobtrusive. The screen should also be sturdy enough to be used frequently without fear of damage. - The device needs an overvoltage protection system that prevents the circuits from burning out if a high-voltage signal is put across the input pins. - The metal pins that read the voltage signal must be adjustable in gap width and/or compatible with a set of detachable probes that can be placed on any two points of a target circuit. # Alternatives Small oscilloscopes have already been implemented and manufactured. Our solution is unique in that we will implement our ideas in a cost-efficient, energy-efficient, space-efficient manner for low-voltage inputs, which is not currently available (current solutions are too big, too expensive, or too power-hungry for low-voltage systems). https://www.digikey.com/en/products/detail/owon-technology-lilliput-electronics-usa-inc/HDS1021M-N/10667422 |
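A hedged sketch of the Subsystem 6 fitting idea (learning rate, iteration count, and input normalization are assumptions, not the final algorithm):

```cpp
// Illustrative batch gradient descent on mean-squared error for an N-th order
// polynomial fit. Assumes t has been normalized to roughly [-1, 1] so the
// gradients stay well conditioned.
#include <vector>
#include <cstddef>

std::vector<float> polyFitGD(const std::vector<float>& t, const std::vector<float>& v,
                             int order, float lr = 0.05f, int iters = 5000) {
    std::vector<float> c(order + 1, 0.0f);            // coefficients c0..cN
    const float n = static_cast<float>(t.size());
    for (int it = 0; it < iters; ++it) {
        std::vector<float> grad(order + 1, 0.0f);
        for (std::size_t i = 0; i < t.size(); ++i) {
            float pred = 0.0f, p = 1.0f;
            for (int k = 0; k <= order; ++k) { pred += c[k] * p; p *= t[i]; }
            const float err = pred - v[i];
            p = 1.0f;
            for (int k = 0; k <= order; ++k) { grad[k] += 2.0f * err * p / n; p *= t[i]; }
        }
        for (int k = 0; k <= order; ++k) c[k] -= lr * grad[k];
    }
    return c;                                         // v(t) ≈ sum of c_k * t^k
}
```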
||||||
| 15 | SafeStep: Smart White Cane Attachment for Audio + Haptic Navigation and Emergency Alerts |
Abdulrahman Almana Arsalan Ahmad Eraad Ahmed |
Abdullah Alawad | Arne Fliflet | proposal1.pdf |
|
| # TEAM: Abdulrahman Almana (aalmana2), Arsalan Ahmed (aahma22), Eraad Ahmed (eahme2) # PROBLEM White canes provide reliable obstacle detection, but they do not give route-level navigation to help a user reach a destination efficiently. This can make it harder for blind or low-vision users to travel independently in unfamiliar areas. In addition, audio-only directions are not always accessible for users who are deaf or hard of hearing, and if a user falls there is often no automatic way to notify others quickly, which can delay assistance. # SOLUTION OVERVIEW We propose a modular smart attachment that mounts onto a standard white cane to improve navigation and safety without replacing the cane’s core purpose. The attachment will connect via Bluetooth to a user’s phone and headphones to support clear spoken directions, and it will also provide vibration-based cues for users who need non-audio feedback. The attachment will include fall detection and a basic emergency alert workflow that sends an alert to a pre-set emergency contact with the user’s last known location. # SOLUTION COMPONENTS **SUBSYSTEM 1, CONNECTIVITY + CONTROL** Handles Bluetooth pairing, basic user controls, and system logic. Planned Components: 1-ESP32 (Bluetooth Low Energy) microcontroller, ESP32-WROOM-32 2-Power switch + SOS button + cancel button 3-LiPo battery + USB-C charging module **SUBSYSTEM 2, NAVIGATION OUTPUT (AUDIO + HAPTICS)** Supports spoken directions through headphones and vibration cues for users who need non-audio feedback. Planned Components: 1-Bluetooth connection to smartphone (using standard maps app audio) 2-Vibration motor (coin vibration motor, 3V) + motor driver (DRV8833) 3-Optional buzzer for confirmations **SUBSYSTEM 3, LOCAL SENSING (WHEN MAPS NOT AVAILABLE)** Provides short-range obstacle warnings and basic direction/heading feedback when GPS/maps are unreliable. Planned Components: 1-Long-range distance sensor (Benewake TFmini-S LiDAR) for obstacle proximity alerts 2-IMU (MPU-9250) for motion/heading estimation **SUBSYSTEM 4, FALL DETECTION + EMERGENCY ALERTING** Detects falls and triggers an emergency workflow through the phone without a custom app. Planned Components: 1-IMU-based fall detection using MPU-9250 data 2-BLE trigger to phone using standard phone shortcut automation 3-Phone sends SMS/call to pre-set emergency contact with last known GPS location # CRITERION FOR SUCCESS 1-The attachment pairs to a smartphone and maintains a Bluetooth connection within 10 meters indoors. 2-The vibration system supports at least four distinct cues (left, right, straight, arrival). 3-The distance sensor detects obstacles within 20 cm to 12 m and triggers a warning vibration within 1 second. 4-Fall detection triggers within 5 seconds of a staged fall-like event and provides a cancel window (ex: 10 seconds). 5-When a fall is confirmed or SOS is pressed, the phone successfully notifies a designated contact and shares location (through phone shortcut automation). 6-The battery supports at least 1 hour of continuous operation. # ALTERNATIVES 1-Smartphone-only navigation: Works for audio, but does not provide haptics for deaf/hard-of-hearing users and is not cane-integrated. 2-Smartwatch fall detection: Helps with emergencies but does not guide navigation through the cane. 3-Dedicated smart cane products: Often expensive and replace the cane instead of adding a modular attachment. 4-Wearable navigation (smart glasses): Higher cost and complexity. |
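A minimal sketch of the Subsystem 4 fall-detection idea, assuming a classic free-fall-then-impact heuristic; the thresholds and driver calls are illustrative, not the team's design:

```cpp
// Sketch only: flag a fall when a free-fall dip in total acceleration is
// followed by an impact spike within a short window.
#include <cstdint>

extern float accelMagnitude();       // |a| in g, from the MPU-9250
extern uint32_t millis();            // ms since boot

const float FREE_FALL_G = 0.4f;      // assumed dip threshold
const float IMPACT_G = 2.5f;         // assumed spike threshold
const uint32_t PAIR_WINDOW_MS = 600; // impact must follow the dip quickly

static uint32_t freeFallAtMs = 0;

bool fallDetected() {                // poll at ~50-100 Hz
    float a = accelMagnitude();
    if (a < FREE_FALL_G) freeFallAtMs = millis();
    return (a > IMPACT_G) && freeFallAtMs != 0 &&
           (millis() - freeFallAtMs < PAIR_WINDOW_MS);
}
```

A confirmed detection would then start the cancel window before triggering the phone's shortcut automation, as described in the criteria above.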
||||||
| 17 | Shower Music Controller |
Amar Patel Shalin Joshi Varnith Aleti |
Eric Tang | Craig Shultz | proposal1.pdf |
|
| # Shower Music Controller Team Members: - Shalin Joshi (shalinj2) - Amar Patel (amarcp2) - Varnith Aleti (valet3) # Problem People often like to listen to music in the shower, but it is very inconvenient to control or play specific music with wet hands, foggy screens, and devices that aren't waterproof. If the person wants to switch the song, they either get the phone wet, have to step out of the shower, or are stuck with whatever song is playing. # Solution The solution is a waterproof device that can be stuck to a shower wall and allows the user to play, pause, skip, and even search for their playlists/songs from Spotify. This device will act as a Bluetooth remote interface that connects to a phone companion app. The app will call the Spotify API and communicate with the device in order to carry out each task. The device will include buttons for playback actions and D-pad buttons to navigate the UI on a screen. # Solution Components ## Subsystem 1 - Embedded UI (Screen + Buttons) Displays different menus and music lists (search, my playlists, now playing) and captures user input using physical buttons. Separate buttons handle playback controls (play, pause, skip, volume), and a D-pad navigates the menus and songs in the UI. The D-pad is implemented using 4 tactile switches (UP/DOWN/LEFT/RIGHT) arranged in a cross layout plus a center SELECT switch, all mounted on the PCB and covered by a waterproof silicone membrane. Components: - SPI TFT display module using ILI9341 controller - Tactile Switches ## Subsystem 2 - Microcontroller + BLE Communication Runs the software for the button controls and handles Bluetooth communication with the phone. Sends commands (play/pause, search query, select track) and receives results/status updates from the phone; a sketch of the message format follows this proposal. Components: - ESP32 Microcontroller ## Subsystem 3 - Phone Companion App + Spotify Integration Handles Spotify authentication and all Web API requests. Translates Bluetooth messages from the device into Spotify actions and returns data back to the device. The app will perform all the music control and Spotify connections and communicate with the device to determine which actions to perform. Components: - Mobile app using Swift or React - Spotify Web API ## Subsystem 4 - Power, Charging, and Water-Resistant Enclosure Provides safe portable power, charging, voltage regulation, and physical waterproofing suitable for shower spray/steam. This subsystem will ensure that the device and its components are water-resistant and can be charged. We will make sure that water doesn’t harm our device by enclosing it in a 3D-printed enclosure. The screen will be covered by a clear acrylic/polycarbonate display window, and the buttons will be lined with a silicone membrane. When the user wants to charge the device, they will remove it from the enclosure and the shower and charge it elsewhere.
Components: - LiPo Battery - Li-ion charger IC/module (USB-powered charging) - 3.3 V regulator for MCU and display - Waterproof enclosure elements - 3D-printed enclosure for the device board and circuitry - Clear acrylic/polycarbonate display window - Silicone membrane for buttons # Criterion For Success - From the shower device, the user can successfully perform the different playback actions with at most 1-2 seconds of delay: Play/Pause, Next Track, Previous Track, Volume Up/Down - Users can enter a search query using the buttons, submit it, receive at least 5 search results on the device screen, select one, and start playback. - The device can connect through Bluetooth to the phone companion app and remain connected for the entire duration of a shower. - The device remains functional after 5 minutes of exposure to shower spray/steam. - The device operates for at least 2 hours of active use on a full charge. |
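One plausible shape for the device-to-app Bluetooth messages implied by Subsystems 2 and 3 (the opcode values and framing are assumptions, not a finalized protocol):

```cpp
// Hypothetical message format: a one-byte opcode, a one-byte payload length,
// then an optional payload (e.g., a UTF-8 search query typed via the D-pad).
#include <cstdint>
#include <cstring>

enum class Cmd : uint8_t {
    PlayPause = 0x01, NextTrack = 0x02, PrevTrack = 0x03,
    VolumeUp = 0x04, VolumeDown = 0x05,
    Search = 0x10,       // payload: UTF-8 query string
    SelectResult = 0x11  // payload: 1-byte result index
};

// Packs a command into a BLE characteristic buffer; returns bytes written (0 on overflow).
size_t packMessage(uint8_t* buf, size_t cap, Cmd cmd,
                   const uint8_t* payload = nullptr, size_t len = 0) {
    if (cap < 2 + len) return 0;
    buf[0] = static_cast<uint8_t>(cmd);
    buf[1] = static_cast<uint8_t>(len);
    if (len) std::memcpy(buf + 2, payload, len);
    return 2 + len;
}
```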
||||||
| 18 | Acoustic Stimulation to Improve Sleep |
Bakry Abdalla John Ludeke Sid Gurumurthi |
Mingrui Liu | Yang Zhao | proposal1.pdf |
Sound Sleep |
| # Acoustic Stimulation to Improve Sleep Team Members: - Abdalla, Bakry (bakryha2) - Gurumurthi, Sid (sguru2) - Ludeke, John (jludeke2) # Problem Certain people experience poor-quality sleep as they age or develop sleep disorders because they do not spend enough time in slow wave sleep (SWS). While there are data-first solutions currently available to the public, they are expensive. # Solution Closed-loop auditory stimulation has been shown through research to amplify the oscillations of SWS. When it is time to sleep, users will put a wearable device on their head. The device will consist of an EEG headband with dry electrodes to measure brain activity, connected to an all-purpose, custom PCB that integrates the EEG front-end, microcontroller, audio driver, and power management circuitry. The processor detects slow wave sleep and identifies slow wave oscillations. When these waves are detected, the system delivers short, precisely timed bursts of pink noise through an integrated speaker. Data insights about the user’s sleep patterns are delivered via a user-facing application. All of this while being cheaper than what is currently available. # Solution Components ## Subsystem 1 – EEG Headband We will be using a commercially available EEG headband, the OpenBCI EEG Headband Kit. This includes the headband, electrodes, and cables carrying the analog signal. Components: - OpenBCI EEG Headband: https://shop.openbci.com/products/openbci-eeg-headband-kit - Ag-AgCl Electrodes - Earclip & snap cables ## Subsystem 2 – Signal Processor Takes in analog signals, denoises and amplifies them, digitally processes them, and then outputs the result. The signal processing subsystem is responsible for performing the core functionality of a commercial EEG interface such as the OpenBCI Cyton, but at a lower cost. It receives raw analog EEG signals from the headband electrodes and converts them into digitized, clean EEG data suitable for downstream analysis. It performs amplification of weak analog electric signals followed by analog filtering to limit bandwidth to EEG-relevant bands and prevent aliasing before analog-to-digital conversion. Following digitization, the subsystem performs digital signal processing, including bandpass and notch filtering, for noise and artifact reduction. An accelerometer will be incorporated to remove spikes and noise in the EEG data during significant motion events. Components: - Analog front end: Texas Instruments ADS1299 - Microcontroller: PIC32MX250F128B - Wireless transmission of data: RFduino BLE radio module (RFD22301) - Triple-Axis Accelerometer: LIS3DH - Resistors: COM-10969 (ECE Supply Store) - Capacitors: 75-562R5HKD10, 330820 (ECE Supply Store) - JFET Input Operational Amplifier: TL082CP (ECE Supply Store) - Standard Clock Oscillators 2.048 MHz: C3291-2.048 ## Subsystem 3 – Audio Output After receiving the processed signals from the signal processor subsystem, this subsystem will provide the data as input to an algorithm which decides whether or not to play a certain frequency of noise through the preferred audio output device (the default will be the speaker). The algorithm makes this decision by detecting whether the brain signals indicate slow wave sleep is occurring; a simplified sketch of such a detector follows this proposal. Components: - An algorithm to detect slow wave sleep (https://pubmed.ncbi.nlm.nih.gov/25637866/) - One small integrated speaker (665-AST03008MRR) ## Subsystem 4 – Power Delivery To provide power for the entire system, a power circuit is integrated into the PCB.
This circuit manages battery charging and voltage regulation while minimizing heat dissipation for user comfort. Components: - 2 AAA batteries: EN92 - Voltage regulator: LM350T - Capacitors: 75-562R5HKD10 - On/off switch: MULTICOMP 1MS3T1B1M1QE - Power jack: 163-4013 ## Subsystem 5 – User-Facing Application To improve usability, the User-Facing Application will give the end user insights into their sleep using standard sleep metrics. Specifically, it will tell the user the time spent awake, in REM sleep, in light sleep, and in deep sleep. We can use a React Native frontend for compatibility with Android and iOS. We can run a lightweight ML model on-device in Python to determine the state of sleep (using FFT and bandpower features). For the backend, Firebase can be used to store our data, which will come in via Bluetooth. Components: - React Native - Firebase # Criterion For Success - Headset remains comfortable (4/5 people would be okay wearing the device to sleep) - Signal Processor successfully amplifies and denoises the signal - Signal Processor successfully converts the analog signal into a digital one - Audio Output delivers audio in phase with EEG waves to maximize effectiveness - Audio Output correctly adjusts audio in correspondence with the input signal from the Signal Processor - Power Delivery gives enough battery power for the device to last at least 10 hours - Power Delivery remains cool and comfortable for sleep - User-Facing Application is intuitive (4/5 people would download the app) - User-Facing Application shows accurate, historical data from the user’s headband - User-Facing Application correctly classifies the phases of the user’s sleep - The entire system is easy to use (a new user can figure it out without instruction) - The entire system works seamlessly |
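For illustration only, and far simpler than the cited detection algorithm: slow wave sleep is characterized by dominant delta-band (0.5-4 Hz) power, so a crude detector could threshold the delta fraction of total EEG bandpower. The bandPower() helper and the 0.5 threshold below are assumptions.

```cpp
// Crude stand-in for the slow-wave-sleep detector described in Subsystem 3.
extern float bandPower(float loHz, float hiHz);  // FFT-based estimate over the latest EEG window

bool inSlowWaveSleep() {
    const float delta = bandPower(0.5f, 4.0f);   // delta band dominates during SWS
    const float total = bandPower(0.5f, 30.0f);  // broadband EEG power
    return total > 0.0f && (delta / total) > 0.5f;  // assumed dominance threshold
}
```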
||||||
| 19 | Cycloidal Hub motor with FOC driver |
Michael Talapin Nithin Durgam |
Eric Tang | Joohyung Kim | proposal1.pdf |
|
| # Title Cycloidal Hub Motor With Custom FOC Drivers Team Members: - Michael Talapin (talapin2) - Nithin Durgam (ndurgam2) # Problem Many modern physical systems need motors that deliver high torque in a compact size with precise motion, capable of handling heavier payloads. # Solution The motor we are building is an internal cycloidal hub motor with custom windings and a custom-milled frame, along with a field-oriented control (FOC) custom motor driver. The internal cycloidal gearbox solves the stated problem through two key properties. One is low backlash, which allows for high-precision motion. The second is the ability to achieve high gear ratios with smaller geometry, which gives the motor high torque and, in turn, the ability to handle heavier payloads. A FOC driver allows direct torque control, speed control, and position control (given an encoder/resolver), all while reducing resonances that come from the mechanical system. # Solution Components This problem is broken down into two major components: the custom motor and the custom FOC driver. These break down further into their respective subsystems. ## Subsystem 1: Electromagnetic Motor Core ### Function To generate torque efficiently within the packaging constraints. The winding and laminations set our motor's kT/kV as well as its torque ripple behavior. An additional useful feature here is tracking the temperature of the stator to enforce thermal limits. ### Key Components **Stator Laminations + Slots:** Form the magnetic circuit so the motor produces torque efficiently with minimal loss. **Custom Windings:** The insulated copper that carries current directly and defines our torque constant, losses, and thermal capability. **Rotor:** Provides the fixed magnetic field the stator pushes against to generate torque. **Insulation System:** Locks windings in place while improving reliability under vibration and thermal cycling. ### Sensors **Stator Temperature Sensor:** (Murata NCP18XH103F03RB NTC) Helps limit torque when the motor is heating up so the windings don't get damaged. ## Subsystem 2: Cycloidal Reduction Gearbox ### Function To multiply torque at the wheel while maintaining compact volume, low backlash, and good shock tolerance. The gearbox turns high motor speed into low-speed wheel torque. By utilizing cycloidal geometry, the motor can achieve a high reduction within size constraints while maintaining low backlash plus high shock-load capability. ### Key Components **Eccentric Input Shaft / Cam:** Creates the eccentric motion that drives the cycloidal disc. **Cycloidal Discs:** The reducing element that converts eccentric motion into a slower, high-torque output. **Ring Pins:** These pins provide the rolling contact interface that shares load and supports high torque with low backlash. **Output Pins:** Collect the disc motion and output the reduced-speed, amplified-torque rotation to the hub. **Bearings:** Carry the loads while keeping alignment stable so the gearbox does not bind or wear easily (part to be decided). **Lubrication:** Reduces wear and heat to increase efficiency and lifetime.
## Subsystem 3: Hub Structure and Custom Milled Frame ### Function In harsh environments we must integrate the wheel bearing and structure while keeping the alignment stable, carrying wheel loads, protecting internals, and providing a heat path. ### Key Components **Custom-milled housing** **Wheel mounting interface** **Bearing seats** **Seals** **Fasteners and Dowel Pins** ## Subsystem 4: Bearings and Sealing ### Function This subsystem ensures the motor supports radial, axial, and moment loads while maintaining alignment and preventing contamination. ### Key Components **Main wheel bearing arrangement** **Gearbox support bearings** **Seals:** O-rings, radial shaft seals, gaskets ## Subsystem 5: Motor Position Sensing ### Function Since FOC requires rotor position, this subsystem provides the rotor electrical angle. ### Sensors **Absolute Encoder:** AS6057P. The purpose of this sensor is to get the absolute position of the rotor. ## Subsystem 6: DC Input and Power Conditioning ### Function Since the motor driver will be a voltage source inverter fed by a DC link, the goal here is to accept supplied power safely, reduce EMI, and stabilize the DC link that feeds the inverter. ### Key Components **Input Connector and Relay:** SLPRB50CPSO. This should be a high-current connector that lets us connect the battery without overheating and loosening in the field. **Precharge Circuit:** Implemented with a resistor and a small relay, this avoids a large inrush current slamming into the DC-link capacitors when we first connect power. **EMI Filter:** Reduces conducted noise so the drive does not interfere with the sensors, comms, and other electronic components. **DC Link Capacitors:** Stabilize the DC bus and supply the ripple current that the inverter creates. **Dump Resistors:** Prevent DC bus overvoltage during aggressive regen when the battery cannot absorb power fast enough. ### Sensors **DC bus voltage sensor:** A resistor divider into an MCU ADC. Lets our microcontroller detect undervoltage/overvoltage and scale our control commands. **DC bus current sensor:** TI INA240A2. Helps measure input power and detect abnormal conditions. ## Subsystem 7: 3-Phase Inverter ### Function Since FOC measures phase currents and DC bus voltage with ADC sampling, we need to convert the DC bus into controlled 3-phase voltages/currents. ### Key Components **6-switch bridge:** The main power stage that creates the 3-phase drive waveforms for the motor. **Current shunts:** WSL3637R0005FEA. These produce a tiny measurable voltage proportional to phase current, allowing FOC to control torque precisely. **Current sense amplifiers:** Amplify the shunt signals and reject PWM noise, allowing our current control loop to stay stable. **Thermal Path:** Removes heat from the power devices so that torque is sustainable at high power. ### Sensors **Power device temperature sensor:** NCP18XH103F03RB NTC. Derate before MOSFETs or PCBs get damaged. **Phase current measurement:** Shunts + INA240. Provides the core feedback signal for our FOC loop. ## Subsystem 8: Gate Driver ### Function To drive the high/low-side switches correctly and survive various faults. The goal here is to handle undervoltage lockout, protect against short circuits, and include active Miller clamps. ### Key Components **Gate driver IC:** TI DRV8353R.
This will properly drive the high-side and low-side MOSFET gates with built-in fault handling. **Gate resistors + Miller clamps:** Help tune switching speed to balance efficiency, EMI, and ringing. ## Subsystem 9: Sensing Front End ### Function Provide clean and accurate signals for the control loop, protection, and derating. ### Key Signals **Phase Currents** **Bus Voltage** **Rotor Position** **Temperatures:** Stator, inverter, rotor, and PCB ambient temperature **Phase Voltages** ## Subsystem 10: Control Compute ### Function The compute necessary for running the real-time control loops and fault handling. ### Key Components: **Microcontroller:** STM32H755ZI; this has enough compute to run the algorithms necessary for a high-end motor drive. **Encoder/Hall Interfaces:** **Communication Peripherals:** How others interface with our motor; in this case the motor will utilize CAN-FD due to its low vulnerability to EMI and its ability to handle longer runs. **Watchdog:** ## Subsystem 11: Firmware & Control Stack ### Function Deliver stable torque, speed, and position control, telemetry logs, and debug capability. A minimal sketch of the inner current loop follows this proposal. ### Key Components: **Sampling & Transforms:** Read the phase currents and put them through Clarke/Park transforms. **Current control:** Regulate Id and Iq. **Modulation:** SVPWM. **Estimator/Position:** Use the motor's encoder for position. **Control Loops:** A PID loop for the Iq command and PID loops for position, speed, and torque. **Derating Logic:** Limit Iq based on temperature or bus voltage. **Telemetry Interface:** A way to keep track of temperatures, currents, bus voltages, faults, and estimated torque/speed/position. ## Subsystem 12: Protection and Functional Safety Layer ### Function Ensure the proper functions are in place for motor protection and safety during operation. ### Key Components: **Protection from fast overcurrent** **Gate Driver UVLO** **Over/undervoltage handling** **Current/torque limiting** **Thermal limiting** **Fault state machine and latching behavior** **Sensor Faults** # Criterion For Success **Continuous Torque:** ≥ 4 Nm **Peak Torque:** ≥ 18 Nm **Max Speed:** ≥ 120 rpm **Backlash:** ≤ 1 degree |
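A minimal sketch of the Subsystem 11 inner current loop (Clarke/Park transforms plus PI regulation of Id/Iq); the gains, scaling, and the assumption ia + ib + ic = 0 are ours, and the vd/vq outputs would feed the inverse Park transform and the SVPWM stage:

```cpp
// Illustrative FOC inner loop: two phase currents in, dq-frame voltage commands out.
#include <cmath>

struct PI {
    float kp, ki, integ = 0.0f;
    float step(float err, float dt) { integ += ki * err * dt; return kp * err + integ; }
};

void focStep(float ia, float ib, float theta, float iqRef, float dt,
             PI& piD, PI& piQ, float& vd, float& vq) {
    // Clarke: 3-phase currents (ia + ib + ic = 0) -> stationary alpha/beta frame.
    float ialpha = ia;
    float ibeta  = (ia + 2.0f * ib) / std::sqrt(3.0f);
    // Park: rotate into the rotor frame using the encoder's electrical angle.
    float s = std::sin(theta), c = std::cos(theta);
    float id =  c * ialpha + s * ibeta;
    float iq = -s * ialpha + c * ibeta;
    // PI current regulators (Id reference = 0 assumed for a surface PM motor).
    vd = piD.step(0.0f - id, dt);
    vq = piQ.step(iqRef - iq, dt);
}
```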
||||||
| 20 | Air Guitar |
Arturo Arroyo Valencia Miaomiao Jin Youngmin Jeon |
Eric Tang | Yang Zhao | proposal1.pdf |
|
| # Air Guitar Team Members: - Miaomiao Jin (mj47) - Youngmin Jeon (yj21) - Arturo Arroyo Valencia (aarro6) # Problem Traditional guitars are bulky and non-portable, making it difficult for musicians to practice or perform in mobile environments. While software-based "virtual guitars" exist, they lack the tactile "muscle memory" of fretting with one hand and strumming with the other. There is a need for a wearable system that captures the physical kinetics of guitar playing without the physical footprint of the instrument. # Solution Air Guitar is a dual-wearable sensor system that mimics the ergonomics of a real guitar. The left hand captures "fretting" finger patterns to determine chords, while the right hand captures "strumming" velocity and timing. By fusing these two data streams wirelessly, the system generates real-time MIDI audio. The design focuses on low-latency wireless communication and precise gesture recognition, allowing the user to play music anywhere without being tethered to a physical instrument or a power outlet. # Solution Components ## Subsystem 1: The Left-Hand "Fret" Controller This subsystem identifies the chord the user is trying to play. It maps the curvature of each finger to a specific digital profile (e.g., specific bend angles = C Major); a chord-matching sketch follows Subsystem 3. - Flex Sensors (4x) [P/N: FS-L-0054-103-ST]: These are long, thin strips placed along the fingers. As the user curls their fingers to form a chord shape, the resistance changes. We use these to measure the degree of flexion for each finger. - Voltage Divider Network: A series of precision resistors used to convert the changing resistance of the flex sensors into a measurable voltage that the microcontroller's ADC (Analog-to-Digital Converter) can read. ## Subsystem 2: The Right-Hand "Strum" Controller This subsystem acts as the "trigger." It determines when a sound should be played and how loud it should be based on the intensity of the movement. - 9-Axis IMU [P/N: BNO055]: This contains an accelerometer and a gyroscope. It tracks the rapid "up and down" motion of a strum. We chose the BNO055 because it has an on-board processor that handles "Sensor Fusion," giving us clean orientation data without taxing our main CPU. - Backup IMU (Plan B): InvenSense MPU-6050. It is widely available and has extensive library support. While it only offers 6-axis sensing (no magnetometer) and requires the ESP32 to handle the Kalman or complementary filtering in code, it is a highly reliable fallback if the BNO055 has procurement delays or I2C clock-stretching issues. - Force Sensitive Resistor (FSR) [P/N: FSR 402]: A small pressure sensor placed on the thumb. This allows the user to simulate "holding a pick." The sound only triggers when the user "squeezes" the virtual pick while strumming. ## Subsystem 3: Processing & Wireless Communication This is the "Brain" of the system. It collects data from both hands and converts it into music. - ESP32 Microcontroller (2x) [P/N: ESP32-WROOM-32E]: One for each hand. These chips are powerful and have built-in Bluetooth and Wi-Fi. - ESP-NOW Protocol: We will use this specialized low-latency wireless protocol to send data from the "Strum" hand to the "Fret" hand in less than 5 ms, ensuring the two hands are perfectly in sync. - BLE MIDI: The final output is sent via Bluetooth Low Energy MIDI to a phone or laptop, allowing the glove to work with any professional music software (like GarageBand or Ableton).
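As referenced in Subsystem 1, here is a hedged sketch of nearest-template chord matching over the four flex-sensor ADC readings; the template values are invented for illustration.

```cpp
// Illustrative only: classify a chord by the smallest L1 distance between
// the live flex readings and stored per-finger ADC templates.
#include <cstdint>
#include <cstdlib>

struct Chord { const char* name; int16_t bend[4]; }; // per-finger ADC targets

const Chord kChords[] = {
    {"C",  {2100,  900, 1500, 3000}},  // made-up calibration numbers
    {"G",  { 800, 2600, 2600,  900}},
    {"Em", { 900, 2400,  900,  900}},
};

const char* classifyChord(const int16_t reading[4]) {
    const char* best = nullptr;
    long bestDist = -1;
    for (const Chord& ch : kChords) {
        long d = 0;
        for (int f = 0; f < 4; ++f) d += std::labs(long(reading[f]) - ch.bend[f]);
        if (bestDist < 0 || d < bestDist) { bestDist = d; best = ch.name; }
    }
    return best;  // a real build would add a confidence/rejection threshold
}
```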
## Subsystem 4: Power Management Since we want the project to be wearable and "Cyberpunk" in style, the power system must be compact and efficient. - LiPo Batteries (2x): Small 3.7 V rechargeable batteries tucked into the wrist straps. - TP4056 Charging Modules: To allow the gloves to be recharged via a standard USB-C cable. - Buck-Boost Converters: To ensure the ESP32 and sensors receive a steady, clean 3.3 V even as the battery voltage drops during use. # Criterion For Success - Latency: The total "Motion-to-Sound" delay must be under 30 ms. Anything higher is noticeable to a musician. **Test Method:** We will program a "Test Mode" where a physical button press on the Strum hand toggles a GPIO pin (HIGH) and simultaneously sends the wireless strum packet. Using an oscilloscope, we will measure the time delta between the GPIO HIGH signal and the arrival of the MIDI Note On message at the receiver's serial port. - Chord Recognition: The system must accurately distinguish between at least 5 different chord shapes with a success rate of >90%. - Dynamic Range: The system must be able to distinguish between a "Soft Strum" and a "Hard Strum," translating that into different MIDI volume levels. - Battery Life: The device must operate continuously for at least 2 hours on a single charge. - Wireless Stability: The ESP-NOW link between hands must maintain a Packet Delivery Ratio (PDR) of ≥99% within a 2-meter radius (the typical wingspan of a human) over a continuous 10-minute testing window. **Test Method:** The Right-Hand unit will send 1,000 packets at the target rate (e.g., 100 Hz). The Left-Hand unit will log the sequence numbers; a successful test results in ≤10 missed packets. |
||||||
| 21 | Vertical Spinner Ant-Weight Battle Bot |
Andrew Bajek Elise Chiang Giovanni Escamilla |
Jiaming Xu | Viktor Gruev | proposal1.pdf |
|
ANT-WEIGHT BATTLEBOT Team Members: - Giovanni Escamilla (gme5) - Andrew Bajek (abajek2) - Elise Chiang (elisenc3) # Problem Antweight combat robots, limited to a maximum mass of 2 lb, must function reliably despite aggressive mechanical stress and demanding control requirements. These systems regularly experience violent impacts, sudden motor stalls, and intermittent wireless links, making fast and dependable coordination between power distribution, control electronics, and mechanical hardware essential. # Solution Our idea for our 2 lb bot is a two-wheel drive with a vertical drum spinner as our weapon. We will develop our own custom PCB with controls centered around an STM32WB-series microcontroller. This controller will not only control our weapon and drive system, but also monitor mechanical stress to limit damage to the battlebot. Overall, our total system will be divided into four sections: power, control, drive, and weapon. Our wireless connection to the PC will be Bluetooth, working in tandem with our microcontroller. # Solution Components ## Subsystem 1 - Power Our power system will power the bot, with additional safety features required for us to compete in the competition. This includes a physical switch to turn off the bot and a voltage regulator to supply the controller. Components: - XT60 connectors (to unplug) - 3S LiPo battery (11.1 V) - We could make our own power regulator; if not, we will use an LM2596 ## Subsystem 2 - Drive Our drive system will allow the battlebot to navigate the arena quickly and precisely in order to deliver attacks and avoid attacks from opposing bots. Components: - Two DC motors, one per side (508 RPM Mini Econ Gear Motor) - Dual H-bridge motor driver (DRV8411) ## Subsystem 3 - Weapon The weapon system serves as the main accessory for engaging the opponent and dealing damage. Components: - DC motor to power the weapon (vertical drum spinner) - Motor control driven by PWM - 3D-printed structures to aid the main weapon (ramps, lifters, etc.) ## Subsystem 4 - Control Our central brain will be the STM32WB microcontroller, which will monitor and control our weapon and drive. In addition, it will monitor the weapon's motor to limit damage to ourselves; a monitoring sketch follows this proposal. Components: - STM32WB series microcontroller - Bluetooth - PC-based control interface - Real-time reliability - Weapon motor stress sensor # Physical Design - Body The body of the battlebot will house and protect the electronics and motors while maintaining structural integrity during combat. We will use Autodesk Fusion 360 to model the body and PLA+ as the 3D printing filament. # Criterion For Success - Weight Compliance: Total weight ≤ 2 lb - Wireless Control: Robot is controlled from a PC via Bluetooth with failsafe operation. - Safety: The bot automatically shuts down in the case of a power fault, loss of control signal, or electrical malfunction. - Mobility: Robot runs continuously for 3 minutes without resets. - Weapon Reliability: The fighting tool operates reliably under repeated activation while maintaining electrical and mechanical performance. - Sensor Addition: Some internal or external sensor that makes the robot react in some way - Responsiveness: Control inputs have a delay of less than 50 ms. |
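A rough sketch of the weapon-motor stress monitoring mentioned in Subsystem 4, with assumed thresholds and placeholder driver calls:

```cpp
// Sketch only: cut weapon PWM duty when a stall/overcurrent condition persists,
// instead of reacting to momentary impact spikes.
#include <cstdint>

extern float readWeaponCurrentAmps();     // e.g., from a shunt + ADC
extern void setWeaponDuty(float duty);    // 0.0 - 1.0

const float STALL_AMPS = 8.0f;            // assumed limit for this motor
static uint16_t overCount = 0;

void weaponMonitorTick(float requestedDuty) {  // called at 1 kHz
    if (readWeaponCurrentAmps() > STALL_AMPS) ++overCount;
    else overCount = 0;
    // Trip after ~50 ms of sustained overcurrent; otherwise pass the request through.
    setWeaponDuty(overCount > 50 ? 0.0f : requestedDuty);
}
```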
||||||
| 22 | Oscillosketch: Handheld XY Etch-a-Sketch Signal Generator for Oscilloscopes |
Eric Vo Josh Jenks |
Xiaodong Ye | Yang Zhao | proposal1.pdf |
|
Team Members: - Josh Jenks (JaJenks2) - Eric Vo (ericvo) # Problem Oscilloscope XY mode is a powerful way to visualize 2D parametric signals and vector-like graphics, but interactive control typically requires multiple bench instruments or ad hoc setups. There is no simple, handheld, purpose-built controller that can safely generate stable, low-noise bipolar X/Y signals for XY mode while providing an intuitive drawing interface. Additionally, producing clean vector-style graphics requires careful mixed-signal design (DAC, filtering, level shifting, buffering, protection) and deterministic embedded control. # Solution We will design a custom PCB and handheld enclosure that connects to an oscilloscope’s CH1 and CH2 inputs (X and Y). The device will function like an Etch-a-Sketch: two rotary encoders control the on-screen cursor position, allowing continuous line drawing on the oscilloscope in XY mode. The PCB will include: - A microcontroller (STM32- or ESP32-class) to read the encoders/buttons and generate X/Y sample streams - An external dual-channel DAC to produce two analog voltages - Analog filtering, level shifting, and buffering to generate bipolar outputs with selectable full scale up to ±5 V - A complete power subsystem powered from USB-C 5 V, including a generated negative rail to support bipolar analog output - Output protection/current limiting so the device cannot damage the oscilloscope inputs under reasonable misuse Stretch goals: add a vector-rendered game/demo mode (Pong; Asteroids as a further stretch), including optional Z-axis blanking to reduce retrace artifacts, and optional line-level audio output to monitor/play back generated signals. # Solution Components ## Subsystem 1: User Input / UI Purpose: Provide intuitive control for drawing and mode selection. Components (examples): - 2x incremental rotary encoders with push switch (e.g., Bourns PEC11R series or equivalent) - 4x tactile pushbuttons (e.g., mode select, clear/recenter, scale/zoom, optional pen/blank) - Optional status LEDs for mode feedback ## Subsystem 2: Microcontroller + Firmware Purpose: Read inputs, maintain drawing state, and generate X/Y sample buffers at a fixed update rate. Components: - MCU (STM32- or ESP32-class) - Example options: ESP32-WROOM-32E module OR STM32G4/F4-class MCU with SPI + timers Firmware features: - Quadrature decoding for encoders; button debouncing - Drawing modes: - Base mode: “etch-a-sketch” continuous drawing (position integration with adjustable step/scale) - Optional modes: predefined shapes/patterns for testing - Fixed-rate DAC update engine (timer driven), with buffered generation to keep output stable independent of UI activity (a sketch of this engine follows this proposal) ## Subsystem 3: Dual-Channel DAC + Analog Output Chain (X and Y) Purpose: Generate clean, low-noise bipolar voltages suitable for oscilloscope XY inputs.
Components (examples): - Dual-channel SPI DAC, 12-bit (Microchip MCP4922 or equivalent) - Reference for stable scaling / midscale (e.g., LM4040-2.5 or equivalent 2.5 V reference) - Optional reconstruction filtering per channel (RC and/or 2nd-order low-pass) to eliminate high-frequency components - Op-amp signal conditioning: - Level shift around midscale + gain to produce bipolar output centered at 0 V - Buffer stage for stable drive into coax cables and oscilloscope inputs - Example op-amp class: dual op-amp supporting ±5 V rails (e.g., OPA2192/OPA2197 class or equivalent) - Output connectors: - 2x PCB-mount BNC connectors (X and Y outputs) - Output protection / safety features (per channel): - Series output resistor (current limiting and stability into cable capacitance) - Clamp diodes to rails to limit overvoltage at the connector - ESD considerations and robust grounding strategy ## Subsystem 4: Power Regulation Purpose: Provide clean digital and analog rails from a safe, convenient input. Components (examples): - USB-C 5 V input (sink configuration with CC resistors) + input protection - 3.3 V regulator for MCU and logic (e.g., AP2112K-3.3 or equivalent) - Negative rail generation for analog (e.g., TPS60403 inverting charge pump or equivalent) to enable bipolar outputs - Power decoupling and analog/digital rail isolation as needed ## (Stretch) Subsystem 5: Z-Axis Blanking Output (Optional) Purpose: Improve vector graphics/game rendering by blanking the beam during “retrace” moves. Components: - Protected Z-output driver (0–5 V-class control) to oscilloscope Z-input Firmware: - Assert blanking during reposition moves; unblank during line segments ## (Stretch) Subsystem 6: Line-Level Audio Output (Optional) Purpose: Provide an auxiliary line out to monitor synthesized signals audibly. Components: - 3.5 mm TRS jack (line out) - AC coupling + attenuation network and optional buffer Firmware: - Optional stereo mapping (e.g., X→Left, Y→Right) after removing DC offset # Criterion For Success The project is considered successful if all of the following are demonstrated and measured: 1. Bipolar XY output with selectable range: - Device generates two analog outputs (X and Y) centered at 0 V, with selectable full-scale up to ±5 V. - Verified with DMM and oscilloscope measurements (documented calibration procedure). 2. Stable interactive drawing in XY mode: - Using the two rotary encoders, a user can draw continuous line art on an oscilloscope in XY mode. - At minimum, demonstrate repeatable drawing of a square and a circle using the controller’s clear/recenter and scaling functions. 3. Deterministic update behavior: - The firmware updates the DAC using a hardware timer or equivalent mechanism to maintain stable output, free of intensity variation, during user interaction. 4. Safe interfacing / cannot damage scope under reasonable misuse: - Output stage includes current limiting and voltage clamping such that accidental output short-to-ground and brief overdrive conditions do not produce damaging currents into the oscilloscope input. - Verified by bench test (short-to-ground test and measurement of limited fault current through series resistor). (Stretch) Demonstrate a vector-rendered mode (Pong; Asteroids further stretch) with reduced retrace artifacts if Z-blanking is implemented. Optional line-out demonstration if implemented. |
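As a concrete illustration of the fixed-rate update engine in Subsystem 2 feeding the Subsystem 3 DAC, here is a minimal Arduino-style C++ sketch for an MCP4922. The chip-select pin and 50 kS/s rate are assumptions, and a production build would use a hardware timer interrupt rather than `micros()` pacing, as the criteria suggest.

```cpp
#include <SPI.h>

const int CS_PIN = 10;                 // hypothetical chip-select wiring
const uint32_t UPDATE_PERIOD_US = 20;  // 50 kS/s per channel (assumed)

volatile uint16_t xSample = 2048;      // midscale = 0 V after level shifting
volatile uint16_t ySample = 2048;      // encoder ISRs would update these

void writeDac(bool channelB, uint16_t value12) {
  // MCP4922 frame: [A/B | BUF | GA | /SHDN | D11..D0]; gain 1x, active.
  uint16_t frame = (channelB ? 0x8000 : 0x0000) | 0x3000 | (value12 & 0x0FFF);
  digitalWrite(CS_PIN, LOW);
  SPI.transfer16(frame);
  digitalWrite(CS_PIN, HIGH);
}

void setup() {
  pinMode(CS_PIN, OUTPUT);
  digitalWrite(CS_PIN, HIGH);
  SPI.begin();
  SPI.beginTransaction(SPISettings(8000000, MSBFIRST, SPI_MODE0));
}

void loop() {
  static uint32_t next = micros();
  if ((int32_t)(micros() - next) >= 0) {  // fixed-rate, wraparound-safe pacing
    next += UPDATE_PERIOD_US;
    writeDac(false, xSample);             // channel A = X
    writeDac(true,  ySample);             // channel B = Y
  }
}
```

Decoupling the sample clock from UI polling in this way is what keeps the trace intensity steady while the user turns the encoders.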
||||||
| 23 | Portable RAW Reconstruction Accelerator for Legacy CCD Imaging |
Arnav Gaddam Guyan Wang Yuhong Chen |
Gerasimos Gerogiannis | other1.docx other2.pdf |
||
| # **RFA: Portable RAW Reconstruction Accelerator for Legacy CCD Imaging** Group Members: Guyan Wang, Yuhong Chen ## **1. Problem Statement** **The "Glass-Silicon Gap":** Many legacy digital cameras (circa 2000-2010) are equipped with premium optics (Leica, Zeiss, high-grade Nikon/Canon glass) that outresolve their internal processing pipelines. While the optical pathway is high-fidelity, the final image quality is bottlenecked by: - **Obsolete Signal Chains:** Early-stage Analogue-to-Digital Converters (ADCs) and readout circuits introduce significant read noise and pattern noise. - **Destructive Processing:** In-camera JPEGs destroy dynamic range and detail. Even legacy RAW files are often processed with rudimentary demosaicing algorithms that fail to distinguish high-frequency texture from sensor noise. - **Usability Void:** Users seeking the unique "CCD look" are forced to rely on cumbersome desktop post-processing workflows (e.g., Lightroom, Topaz), preventing a portable, shoot-to-share experience. ## **2. Solution Overview** **The "Digital Back" External Accelerator:** We propose a standalone, handheld hardware device (a "smart reconstruction box") that interfaces physically with legacy CCD cameras. Instead of relying on the camera's internal image processor, this device ingests the raw sensor data (CCD RAW) and applies a hybrid reconstruction pipeline. The core innovation is a **Hardware-Oriented Hybrid Pipeline**: - **Classical Signal Processing:** Handles deterministic error correction (black level subtraction, gain normalization, hot pixel mapping). - **Learned Estimator (AI):** A lightweight Convolutional Neural Network (CNN) or Vision Transformer model optimized for microcontroller inference (TinyML). This model does not "hallucinate" new details but acts as a probabilistic estimator to separate signal from stochastic noise based on the physics of CCD sensor characteristics. The device will feature a touchscreen interface for file selection and "film simulation" style filter application, targeting an output quality perceptually comparable to a modern full-frame sensor (e.g., Sony A7 III) in terms of dynamic range recovery and noise floor. ## **3. Solution Components** ### **Component A: The Compute Core (Embedded Host)** - **MCU:** STMicroelectronics **STM32H7 Series** (e.g., STM32H747/H757). - _Rationale:_ Dual-core architecture (Cortex-M7 + M4) allows separation of UI logic and heavy DSP operations. The Chrom-ART Accelerator helps with display handling, while the high clock speed supports the computationally intensive reconstruction algorithms. - **Memory:** External SDRAM/HyperRAM expansion (essential for buffering full-resolution RAW files, e.g., 10MP-24MP) and high-speed QSPI Flash for AI model weight storage. ### **Component B: Connectivity & Data Ingestion Interface** - **Physical I/O:** USB OTG (On-The-Go) Host port. - _Function:_ The device acts as a USB Host, mounting the camera (or the camera's card reader) as a Mass Storage Device to pull RAW files (.CR2, .NEF, .RAF, .DNG). - **Storage:** On-board MicroSD card slot for saving processed/reconstructed JPEGs or TIFFs. ### **Component C: Hybrid Reconstruction Algorithm** - **Stage 1 (DSP):** Linearization, dark frame subtraction (optional calibration), and white balance gain application (a sketch of this stage follows the proposal). - **Stage 2 (NPU/AI):** A quantization-aware trained model (likely TFLite for Microcontrollers or STM32-AI) trained specifically on paired _noisy CCD → clean CMOS_ images. - _Task:_ Joint Demosaicing and Denoising (JDD). 
- **Stage 3 (Color):** Application of specific "Film Looks" (LUTs) selected by the user via the UI. ### **Component D: Human-Machine Interface (HMI)** - **Display:** 2.8" to 3.5" Capacitive Touchscreen (SPI or MIPI DSI interface). - **GUI Stack:** TouchGFX or LVGL. - _Workflow:_ User plugs in camera -> Device scans for RAWs -> User selects thumbnails -> User chooses "Filter/Profile" -> Device processes and saves to SD card. ## **4. Criterion for Success** To be considered successful, the prototype must meet the following benchmarks: - **Quality Parity:** The output image, when blind-tested against the same scene shot on a modern CMOS sensor (Sony A7 III class), must show no statistically significant difference in perceived noise at ISO 400-800 equivalent. - **Edge Preservation:** The AI reconstruction must demonstrate a reduction in color moiré and false-color artifacts compared to standard bilinear demosaicing, without "smoothing" genuine texture (measured via MTF charts). - **Latency:** Total processing time for a 10-megapixel RAW file must be under **15 seconds** on the STM32 hardware. - **Universal RAW Support:** Successful parsing and decoding of at least two major legacy formats (e.g., Nikon .NEF from the D200 era and Canon .CR2 from the 5D Classic era). ## **5. Alternatives** - **Desktop Post-Processing (Software Only):** - _Pros:_ Infinite computing power, established tools (DxO PureRAW), highly customizable. - _Cons:_ Destroys the portability of the photography experience; cannot be done "in the field." Users must be proficient with the software's parameters, which requires self-training (not user-friendly). - **Smartphone App (via USB-C dongle):** - _Pros:_ Powerful processors (Snapdragon/A-Series), high-res screens, easy to use. - _Cons:_ Lack of low-level control over USB mass storage protocols for obscure legacy cameras; high friction in file management; operating system overhead prevents bare-metal optimization of the signal pipeline; unique algorithms may not be suitable for legacy cameras. - **FPGA Implementation (Zynq/Cyclone):** - _Pros:_ Parallel processing could make reconstruction instant. - _Cons:_ Significantly higher complexity, cost, and power consumption compared to an STM32 implementation; higher barrier to entry for a "mini project." |
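To make Stage 1 concrete, here is a minimal plain-C++ sketch of the deterministic DSP pass (black level subtraction plus white-balance gain). The RGGB mosaic layout, 16-bit working format, and gain values are illustrative assumptions, not parameters decoded from any particular camera format.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Stage 1: linearize a Bayer RAW buffer in place (assumed RGGB pattern).
void stage1Dsp(std::vector<uint16_t>& raw, int width, int height,
               uint16_t blackLevel, uint16_t whiteLevel,
               float gainR, float gainG, float gainB) {
  for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
      bool evenRow = (y % 2 == 0), evenCol = (x % 2 == 0);
      // Pick the per-pixel white-balance gain from the mosaic position.
      float gain = evenRow ? (evenCol ? gainR : gainG)
                           : (evenCol ? gainG : gainB);
      uint16_t& p = raw[static_cast<size_t>(y) * width + x];
      int v = std::max(0, p - blackLevel);                      // black level
      v = static_cast<int>(v * gain + 0.5f);                    // WB gain
      p = static_cast<uint16_t>(std::min(v, whiteLevel - blackLevel)); // clip
    }
  }
}
```

Keeping this stage purely deterministic matters: it hands the learned estimator (Stage 2) data whose noise statistics have not been distorted by ad hoc in-camera processing.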
||||||
| 24 | Circular Antweight Battlebot (Shovel/Lifter) |
Junyan Bai Yuxuan Guo |
Zhuoer Zhang | Joohyung Kim | proposal1.pdf |
|
| # Circular Antweight Battlebot (Shovel/Lifter) Team Members: * Yuxuan Guo (yuxuang7) * Junyan Bai (junyanb2) # Problem ECE 445 antweight (≤ 2 lb) battlebots must be mostly 3D-printed (allowed plastics), include locomotion + an active tool, and be controlled from a PC over Wi-Fi/Bluetooth using a custom PCB (MCU + wireless + motor control). The robot must support manual shutdown and automatically disable on RF link loss. Many robots fail due to getting stuck, losing traction, or motor stalls that cause brownouts/resets and wireless dropouts. Our problem is to build a compact robot that stays controllable and safe under impacts and stalls while meeting competition shutdown requirements. # Solution We will build a circular “UFO-shaped” robot focused on control and robustness. A recessed two-wheel drivetrain sits inside a low-profile circular chassis to reduce snag points and survive collisions. The weapon is a motor-driven front shovel/lifter used to get under opponents and lift/destabilize them for pushing and pinning. A custom ESP32-based PCB receives PC commands via Wi-Fi (Bluetooth optional) and controls both mobility and shovel actuation. Safety is layered: a manual kill switch, a firmware link-loss failsafe, and hardware current-sense protection that can disable motor drivers during overcurrent/stall events. # Solution Components ## Subsystem 1 — Control & Communication (ESP32 + IMU + LEDs) **Function:** Receive PC commands, run safety logic, and output control signals for drive + weapon. **Components:** * ESP32-WROOM-32D (Wi-Fi/Bluetooth) * MPU-6050 IMU (I2C, planned) * LEDs for power/link/fault **Key requirements:** * Control update rate **≥ 50 Hz** * Link-loss failsafe: if no valid commands for **> 300 ms**, disable all outputs and require re-arm ## Subsystem 2 — Power Supply & Safety (Battery + Kill Switch + Distribution + Current Sense) **Function:** Provide stable rails and enforce fast shutdown + stall protection. **Key requirements:** * Logic rail: **3.3 V ± 5%**, budget **≥ 500 mA**, stays **> 3.0 V** under worst-case load * Kill switch disables motion quickly (target near-instant motor power removal) * Overcurrent/stall protection asserts **FAULT** and disables **EN** within **≤ 50 ms** (threshold TBD; see the current-sense sketch after this proposal) ## Subsystem 3 — Drive Motor (Mobility) **Function:** Provide reliable motion and pushing power with a differential drivetrain. **Components:** * Motor driver + **2x** gearmotors (candidate: N20 / 16 mm) * Recessed wheels **Key requirements:** * Speed **≥ 0.5 m/s** * Push a **1.0 kg** test sled at **≥ 0.1 m/s** for **≥ 2 s** without reset/brownout ## Subsystem 4 — Weapon Motor (Shovel/Lifter Actuation) **Function:** Actuate the front shovel/lifter for opponent control. **Components (planned):** * MG996R servo + shovel linkage **Key requirements:** * Lift a **0.9 kg (2 lb)** test block by **≥ 15 mm** within **≤ 0.5 s**, hold **≥ 5 s** * Jam/stall safety handled via FAULT/EN gating (disable within **≤ 50 ms**) # Criterion For Success 1. **Weight compliance:** Total mass (including battery) stays at or under **2 lb**. 2. **Safety compliance:** The manual kill switch removes motor power, and the link-loss failsafe disables all outputs and requires re-arm when no valid commands arrive (**> 300 ms**). 3. **Reliable operation:** Drive for **≥ 3 min** with no MCU resets; logic rail stays **> 3.0 V**. 4. **Performance:** Push a **1.0 kg** sled for **1 m**, and shovel lifts **0.9 kg** by **≥ 15 mm** within **≤ 0.5 s**. |
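The FAULT/EN gating in Subsystem 2 can be prototyped almost entirely in firmware before committing to a hardware comparator. The sketch below (Arduino-style C++ on the ESP32) is one such prototype; the pins, amplifier scaling, and 8 A threshold are placeholders, since the proposal leaves the threshold TBD.

```cpp
const int SENSE_PIN = 34;            // hypothetical shunt-amp output on ADC
const int MOTOR_EN_PIN = 25;         // hypothetical motor-driver EN line
const float AMPS_PER_COUNT = 10.0f / 4095.0f;  // assumed 0-10 A full scale
const float TRIP_AMPS = 8.0f;        // assumed stall threshold (TBD)
const uint32_t TRIP_HOLD_MS = 20;    // trips well inside the 50 ms spec

uint32_t overSinceMs = 0;
bool faulted = false;

void setup() {
  pinMode(MOTOR_EN_PIN, OUTPUT);
  digitalWrite(MOTOR_EN_PIN, HIGH);  // enabled until a fault latches
}

void loop() {
  float amps = analogRead(SENSE_PIN) * AMPS_PER_COUNT;
  if (amps > TRIP_AMPS) {
    if (overSinceMs == 0) overSinceMs = millis();
    // Latch FAULT only if the overcurrent persists (ignores brief spikes).
    if (!faulted && millis() - overSinceMs >= TRIP_HOLD_MS) {
      digitalWrite(MOTOR_EN_PIN, LOW);   // disable the driver
      faulted = true;                    // stays down until re-armed
    }
  } else {
    overSinceMs = 0;
  }
}
```

A hardware current-sense comparator should still back this up, since firmware alone cannot act if the MCU browns out during the very stall it is trying to catch.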
||||||
| 25 | Building Interior Reconnaissance Drone (BIRD) |
Jack Lavin Jacob Witek Mark Viz |
Shiyuan Duan | Joohyung Kim | proposal1.pdf |
|
| # Building interior reconnaissance drone proposal Team Members: - Mark Viz (markjv2) - Jack Lavin (jlavin4) - Jacob Witek (witek5) # Problem There are many situations when law enforcement or emergency medical service professionals need quick, real-time, useful information about a location they cannot see, without sending in a human to gather it because of the risks present. One of the most important things to know in these situations is whether there are people in a room or area, and if so, where they are located. While promising solutions exist, they can rarely be operated by one person, and they drain time and manpower from situations that usually require both. Our solution attempts to address these issues while providing an easy-to-use interface with critical information. # Solution Our solution is a reconnaissance drone equipped with a camera, other sensing components, and simple autonomous behavior capabilities; the video feed is processed on a separate laptop to determine an accurate location of all people in view of the drone relative to the location of a phone or viewing device nearby. This phone or viewing device would run an augmented-reality application using position information from the drone system to overlay the positions of people near the drone over first-person perspective video. The end result would allow someone to slide/toss the drone into a room, and after a second or two, be able to "see through the wall" to where anyone in the room is. # Solution Components ## Drone and Sensors The drone itself will be a basic lightweight quadcopter design. The frame will be constructed using a 2D design cut from a sheet of carbon fiber and assembled with aluminum hardware and thread locks. The total volume including the rotor blades should not exceed 4" H by 8" W by 8" L at maximum (ideally much less). This simple frame will consist of a rectangular section to mount the PCB and a 2S (7.4 V) LiPo pack of about 2" x 2" or less, and four identical limbs mounted to the corners. On each of the four limbs will be brushless DC motors (EMAX XA2212 2-3S) driven by electronic speed controllers from the PCB (assuming they can't be pre-purchased). The PCB will have a two-pin DuPont/JST connector for battery leads, a TP4056 LiPo charging circuit, and buck converters for the necessary voltage(s), all on the underside. On top, the PCB will house an ESP32-S3 microcontroller, an IMU with decent accuracy, a set of mmWave 24 GHz human-presence sensors (like the LD2410), and ultrasonic transducers forming a phased-array sensor with an accurate, narrow beam to scan for human presence with range. These components will allow the drone to be programmed with very simple and limited autonomous flight behaviors (fly up 5 feet, spin 360 degrees, land) and properly/safely control itself (a sketch of this behavior loop follows below). The ultrasonic transducers and human-sensing radars will be the primary method of determining human presence, with most of the calculation done on the ESP32; however, additional calculation will need to be done on the AR end with the received data. If time and budget allow, we may also include a small 2 MP or 5 MP camera for a WiFi video stream, or a composite video camera for an analog video stream as a backup/failsafe to the other sensors. 
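A minimal way to structure those limited behaviors is a small state machine on the ESP32-S3, sketched below in Arduino-style C++. The setpoint functions are hypothetical hooks into the flight controller (stubbed here), and the altitude and yaw-rate numbers are illustrative only.

```cpp
enum class Phase { ASCEND, SPIN, LAND, DONE };
Phase phase = Phase::ASCEND;

// Hypothetical hooks into the attitude/altitude loops; stubs for illustration.
void setAltitudeTargetMeters(float m) { /* altitude controller hook */ }
void setYawRateDegPerSec(float dps)   { /* yaw-rate controller hook */ }
void commandLand()                    { /* landing-sequence hook */ }

// Called periodically with fused IMU/altitude estimates.
void updateBehavior(float altitudeMeters, float yawTravelDeg) {
  switch (phase) {
    case Phase::ASCEND:
      setAltitudeTargetMeters(1.5f);           // roughly 5 feet
      if (altitudeMeters >= 1.4f) phase = Phase::SPIN;
      break;
    case Phase::SPIN:
      setYawRateDegPerSec(90.0f);              // sweep the sensor array
      if (yawTravelDeg >= 360.0f) phase = Phase::LAND;
      break;
    case Phase::LAND:
      setYawRateDegPerSec(0.0f);
      commandLand();
      phase = Phase::DONE;
      break;
    case Phase::DONE:
      break;                                   // wait for pickup/reset
  }
}
```

Keeping the behavior layer this thin leaves the stabilization loops free to run at full rate on the same MCU.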
A working rough breakdown of the expected mass of each component goes as follows: - 4 hobby motors: ~50 grams (based on consumer measurements) - Carbon fiber frame: ~40 grams (estimate based on similarly styled and sized frames) - 2S 500 mAh battery: ~30 grams (based on common commercial LiPo product info) - PCB with MCU & peripherals: ~50 grams (based on measurements of similar boards) - 10-20 ultrasonic transducers: ~50 grams (based on commercial component info) - Metal hardware/fasteners & miscellaneous: ~25 grams (accounting for error as well) - Total mass: ~255 grams - Total thrust (at 7.6 V, 7.3 A): ~2000 grams (from manufacturer ratings) - Thrust/weight is roughly 7.8 (2000 g / 255 g), well over 2.0, which should allow for quick movement and considerable stability along with the improved frame considerations, plus extra room for more weight if needed. ## AR Viewer or Headset To create a useful augmented-reality display of the collected position data, the simplest way will be to write an app that uses the digital camera and gyroscope/IMU APIs of a smartphone to overlay highlighted human-position data on a live camera view. We would use the Android Studio platform to create this custom app, which would interface with the data incoming from the drone. Building upon the Android APIs, we would overlay the data on the phone camera view. If we have more time, a headset or AR glasses could make the experience more useful (hands-free) and immersive. We may also use a laptop at this stage to run a server alongside the app for better processing. # Working Supply List *some can be found in student self-service, some need to be ordered - Carbon fiber sheet (find appropriate size, 2-3 mm thick) - Aluminum machine screws with Loctite, or bolt/nut with locking washer - 4 EMAX brushless DC motors and mounting hardware - 4 quadcopter rotor blades - 2S (7.4 V) 500 mAh LiPo battery - Custom PCB - ESP32-S3 chip w/ PCB antenna - 20 ultrasonic (40 kHz) transducer cans - 4 mmWave 24 GHz human presence radar sensors - TP4056 LiPo Charging IC (find other necessary SMD components) - DuPont two-pin connector for LiPo charging/discharging (choose whether removable battery design) - Various SMD LEDs to indicate functionalities or states on PCB - Voltage buck converter circuit components - ESC circuit components - Adafruit Accelerometer # Criterion For Success The best criterion for the success of this project is whether our handheld device or headset can effectively communicate human-position data from a visually obstructed location to a nearby user within an accuracy of 1-2 meters, while still allowing the user to carry out personal tasks. The video feed should be stable with minimal latency so as to be effective and usable, and estimated human positions should be updated only when people are positively in view; information about the recency of data should be apparent (perhaps a red highlight on new people, yellow on a stale location, and green for a newly updated position). |
||||||
| 26 | AdheraScent Pill Container |
Albert Liu Anshul Rao Chia-Ti(Cindy) Liu |
Zhuchen Shao | Arne Fliflet | proposal1.pdf |
Adherascent |
| Team Members: - Albert Liu (ycl6) - Chia-Ti (Cindy) Liu (chiatil2) - Anshul Rao (anshulr2) # Problem 1 in 4 adults miss doses of medication due to complex instructions or simply forgetting. Traditional reminders, such as alarms and notifications, are often ignored due to alarm fatigue. There are also many apps addressing this problem; however, seniors and many other adults struggle with using complex apps. Therefore, we are looking to build an automated scent-based pill dispenser to simplify the process and ensure adults take their medications on time. # Solution We propose an olfactory-based medication reminder system using a pill dispenser with a scent emitter as our reminder mechanism. The smell-based reminder added to the traditional pill dispenser is a conditional logic trigger: if the "container open" state is not triggered within a scheduled time window, the device initiates a controlled release of a specific scent. This scent acts as an environmental prompt, persistently reminding the user to take the medicine. The intensity of the scent emission will gradually increase over time until the physical container is opened, at which point the emission will be deactivated. This approach ensures the reminder remains physically present in the user's space. At a high level, our system consists of a pill container with an open/close detection mechanism, a timing unit, a scent emitter, and a power subsystem. # Solution Components ## Subsystem 1: Pill Container, Open/Close Detection This subsystem is responsible for physically storing the medication and detecting if the container is opened. Since the container is designed for multi-day use, we will build it as a 7-day pill box to support the users’ daily medication routines. An open/close detection mechanism will determine whether the container has been opened during a scheduled medication time each day. This means the pillbox will contain 7 separate sensors, one for each day, and will communicate this information to the timing unit subsystem as needed. The detection will be implemented using a simple mechanical or magnetic sensing design such as a reed switch or a limit switch. Once an opening is detected, this subsystem will send a signal indicating the medication was successfully taken. Components: 7 section pill container 7x open/close sensors (possibly a limit switch) ## Subsystem 2: Timing Unit The timing unit subsystem would use a Real-Time Clock (RTC) module integrated within the primary microcontroller. As long as the microcontroller has a coin cell, the RTC will continue running as intended while the main power is off. This means that if the main power happens to be interrupted, the RTC module will still be able to generate the date, time, and other data needed. In normal operation, the microcontroller will poll the RTC module and compare the current time against the scheduled medication window. When the current time enters the configured window for an individual to take their medication, the timing unit will monitor the open/close detection subsystem. Specifically, if the sensor remains in the “closed” state past the scheduled window, the timing unit subsystem will generate a PWM signal to the scent emitter. 
While the pill dispensing mechanism stays in the “closed” state past the scheduled window, the duty cycle of the PWM signal will gradually increase, intensifying the smell over time. Components: ESP32 Microcontroller CR2032 Coin Cell & Holder RTC DS3231 (optional) Buttons / LCD display for adjusting scheduled time ## Subsystem 3: Scent Emitter The scent emitter module is responsible for producing the scent, our physical reminder when the medication is not taken in the scheduled time. When it receives the signal that the container has not been opened in a scheduled window, it will release a controlled amount of scent into the surrounding environment; we intend the emission to be continuous, stopping immediately once the container is opened. To keep the pill container safe and portable, we avoid heating: the scent emitter uses a replaceable scent pad combined with a mechanically controlled valve and a tiny DC fan to regulate scent release. When a missed-medication event is detected, the valve opens and the fan pushes air across the pad, carrying the scent into the environment. The fan speed will ramp up the longer the container stays closed, and the valve will close once the pill container is opened, stopping further emission. Our system will also assume a predetermined effective lifetime for each pad (for example, 20 days) based on our characterization. Then, after a conservative usage estimate (for example, 15 of those 20 days), also tracked by our timing unit, an LED begins blinking to indicate that the scent pad should be replaced. The LED will stop blinking after the pad is replaced. Components: Replaceable scent pad LED Mechanically controlled valve Micro 5 V or 3.7 V DC fan An alternative emitter is a small ultrasonic transducer driven at a suitable frequency to aerosolize particles like a diffuser. ## Subsystem 4: Power Supply This subsystem provides the power needed by all electronic components in the device. To ensure ease of use and portability, our design will be powered by a battery instead of requiring a constant external power source. A voltage regulation circuit will ensure stable operation of the microcontroller and peripherals. In addition, there will be a deep-sleep power-saving state where the microcontroller shuts down the most power-hungry components, such as the CPU or WiFi module, during idle periods. The system/microcontroller will wake from the RTC module via a hardware interrupt when the pill dispenser is opened or closed, as well as during the scheduled medication time. This will ensure that the scent-based medication box works as intended for a longer period of time. Components: Battery Power switch Voltage regulator # Criterion For Success The system correctly detects whether the pill container has been opened during a scheduled medication window. The user must be able to schedule a medication window. The scent emitter must activate automatically within 10 seconds after the scheduled medication window has passed if the pill container has remained in a closed state. This scent-based pill reminder system must produce increasing scent intensity as the duration of the missed medication window increases, based on the PWM signal (25%, 50%, 100%; see the sketch after this proposal). The scent emitter deactivates within 10 seconds once the container is opened. 
The LED starts blinking when the replaceable scent pad has to be changed and stops after it is replaced. The system operates without requiring a smartphone, app, or external display. The device operates reliably for multiple medication cycles without failure. All subsystems integrate into a single functional prototype suitable for demonstration. The prototype has to be smaller than 5 × 2.8 × 0.5 in³ to keep it portable. The scent must be strong enough for real-world testers to recognize. Power consumption must be low enough for the device to function for longer than 2 months before the battery has to be replaced. |
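The 25%/50%/100% escalation named in the criteria reduces to a few lines of firmware. Below is a minimal Arduino-style C++ sketch; the fan pin, the 10-minute step interval, and the use of `analogWrite()` (available on recent ESP32 Arduino cores) are assumptions.

```cpp
const int FAN_PWM_PIN = 5;                           // hypothetical fan MOSFET gate
const uint32_t ESCALATE_MS = 10UL * 60UL * 1000UL;   // assumed 10 min per step
const uint8_t DUTY_STEPS[] = {64, 128, 255};         // ~25%, 50%, 100% duty

// Called periodically by the timing unit once the window has been evaluated.
void updateScentReminder(bool windowMissed, bool boxOpen, uint32_t missedSinceMs) {
  if (!windowMissed || boxOpen) {
    analogWrite(FAN_PWM_PIN, 0);     // deactivate (within the 10 s criterion)
    return;
  }
  uint32_t step = (millis() - missedSinceMs) / ESCALATE_MS;
  if (step > 2) step = 2;            // cap at 100% duty
  analogWrite(FAN_PWM_PIN, DUTY_STEPS[step]);
}
```

The valve would follow the same on/off logic, so a single `windowMissed && !boxOpen` condition can drive both actuators.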
||||||
| 27 | Kombucha Fermentation Control System |
Edwin Xiao John Puthiaparambil Rudy Beauchesne |
Haocheng Bill Yang | Yang Zhao | proposal1.pdf |
|
| # Kombucha Fermentation Control System Team Members: - Rudy Beauchesne (rudyb2) - John Puthiaparambil (jtp7) - Edwin Xiao (edwinyx2) # Problem Home kombucha brewing is becoming increasingly popular, but most options fall into two extremes: expensive commercial systems with automated control, or low-cost DIY methods that depend on frequent manual checks and guesswork. As a result, home brews are often inconsistent from batch to batch, with fermentation running too slow or too fast, acidity drifting outside the desired range, or the process stalling without clear feedback. This unpredictability can lead to inconsistent flavor and, in the worst case, failed or spoiled batches. There is a need for a low-cost, repeatable kombucha brewing system that continuously monitors key conditions like temperature and pH and provides clear, reliable feedback with minimal user intervention. # Solution We propose a low-cost, closed-loop kombucha brewing system designed to make home fermentation more consistent and repeatable. A microcontroller on a custom PCB continuously reads temperature, pH, RGB color, ultrasonic liquid level, and pressure sensors to track fermentation conditions and progress. Using these measurements, the system controls a heating pad to regulate temperature and peristaltic pumps to add fresh tea or remove liquid as needed based on user-defined targets. If feasible within budget, the system will also include a small optional aeration pump (air pump + sterile filter) to provide controlled aeration during primary fermentation. A companion app dashboard (web-based) displays real-time status and logs trends over time so users can monitor brewing without constant manual checking. # Solution Components Subsystem 1: Fermentation Monitoring & Control This subsystem monitors the primary fermentation conditions and regulates temperature to keep the brew in a stable range. Functionality: - Continuously measure temperature, pH, and color trends during F1 (primary fermentation) - Drive a heating pad to maintain a user-defined temperature setpoint (see the control sketch after this proposal) and control pumps for automated liquid handling - Send sensor data to the main controller for closed-loop control and logging Sensors / Components: - Temperature sensor: DS18B20 - Ultrasonic liquid-level sensor: HC-SR04 measures the brew height/volume to detect evaporation and prevent overfilling/underfilling during pump-based tea additions or liquid removal - pH Sensor: Analog pH probe + signal conditioning (PH-4502C module or equivalent front-end) - RGB Color Sensor: TCS34725 - Heating Element: Resistive heating pad controlled via MOSFET - Peristaltic pump(s): 12 V peristaltic pump (food-safe tubing) - Microcontroller: ESP32 Subsystem 2: Fermentation State & Safety Monitoring This subsystem monitors secondary fermentation indicators and system safety. Functionality: - Measure internal pressure buildup during fermentation - Detect abnormal fermentation conditions (overpressure or stalled fermentation) - Provide safety cutoffs and alerts if thresholds are exceeded Sensors / Components: - Pressure Sensor: MPX5700AP or equivalent pressure transducer - Signal Conditioning Circuit: Instrumentation amplifier and filtering - Safety Cutoff: Relay or solid-state switch for heater disable - Status Indicators: LEDs for system state and fault indication Subsystem 3: Data Logging & Web Interface This subsystem provides real-time data logging and user visibility through a web-based dashboard. 
Functionality: - Transmit sensor data (temperature, pH, color, pressure) to a web server - Log historical fermentation data for later analysis - Display real-time plots and system status via a browser-based interface Sensors / Components: - Wireless Interface: ESP32 integrated Wi-Fi - Backend: Lightweight web server or cloud-hosted database (e.g., HTTP/MQTT-based logging) - Frontend: Web dashboard displaying time-series sensor data and system state Subsystem 4: Power Management This subsystem provides regulated and reliable power to all system components. Functionality: - Supply 12 V power to the heating pad and pumps - Step down 12 V to 3.3 V for logic and sensors - Isolate high-power and low-power domains for safety and noise reduction Sensors / Components: - Power Source: 12 V wall adapter - Regulation: DC-DC buck converter (12 V → 3.3 V) - Loads: Heating pad, pumps, ESP32, and sensors Criterion For Success: - Maintain fermentation temperature within ±1°C of the target setpoint for a continuous 48-hour period - Measure pH with ≥0.1 pH resolution and maintain ±0.2 pH accuracy after calibration - Detect and log measurable color changes correlated with fermentation progression - Maintain safe operating pressure below a defined threshold and trigger a shutdown if exceeded - For the final demo, we will start from a deliberately off-condition brew (ice-cooled and pH shifted away from target) and show the system autonomously returning temperature and pH to a reasonable kombucha range using the heating pad and peristaltic pumps while logging and plotting all sensor trends live in the app This project involves significant circuit-level hardware design, including sensor signal conditioning, power management, actuator control, and embedded system integration. The scope and complexity are appropriate for a multi-person team and align with the course requirements. |
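The ±1 °C temperature criterion can be met with simple hysteresis (bang-bang) control of the heating pad, sketched below in Arduino-style C++. The pin, the 26 °C setpoint, and the assumption that the caller supplies a DS18B20 reading (e.g., via the DallasTemperature library) are all placeholders.

```cpp
const int HEATER_PIN = 26;          // hypothetical heating-pad MOSFET gate
const float SETPOINT_C = 26.0f;     // assumed primary-fermentation target
const float HYSTERESIS_C = 0.5f;    // keeps the swing inside the ±1 °C spec

// Call once per control tick with the latest DS18B20 temperature.
void controlHeater(float tempC) {
  if (tempC < SETPOINT_C - HYSTERESIS_C) {
    digitalWrite(HEATER_PIN, HIGH); // below the band: heat
  } else if (tempC > SETPOINT_C + HYSTERESIS_C) {
    digitalWrite(HEATER_PIN, LOW);  // above the band: coast
  }                                 // inside the band: hold the last state
}
```

Hysteresis avoids relay-style chatter on the MOSFET; if tighter regulation is needed, the same hook can be swapped for a slow PID loop without changing the rest of the firmware.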
||||||
| 28 | Modular Screen |
Dale Morrison Sean Halperin Yuzhe He |
Wesley Pang | Craig Shultz | proposal1.pdf |
|
| # Team Members: - Morrison, Dale Joseph Jr (dalejm2) - He Yuzhe (yuzhehe2) - Sean Halperin (seanmh3) # Problem Many applications (tabletop gaming groups, educators, researchers, presenters, and event organizers) require large, flexible, and reconfigurable display systems; however, existing solutions are expensive, bulky, non-modular, and difficult to customize. Users who want visual content often lack an affordable system that can be easily resized, repositioned, and updated with new content. For example, tabletop gaming groups may spend close to $1,000 on TV-table setups that still do not provide a reconfigurable display, making immersion exceedingly difficult. This shows the need for a screen that is customizable, modular, and affordable. # Solution The solution proposed is a modular digital display composed of multiple interlocking screen tiles that connect to form a larger display. Each tile contains a display and communicates with neighboring tiles through magnetic interconnects. A power or control tile will distribute power, detect the layout of the tiles, and set the visual display of each tile. To start, the system will support static and user-uploaded images. Potential uses include classrooms, team meetings, digital canvases, and tabletop gaming. The core idea is as described, but there are advanced features, such as audio and animation, that will be implemented if time allows. # Solution Components ## Subsystem 1, Tile Display Module (Per Tile) This subsystem allows each tile to render its assigned portion of the full image. The display tiles form the user experience; therefore, without high-quality visual output, the modular board would fail to justify replacing paper or conventional screens. To preserve immersion, the overall board needs to appear seamless instead of fragmented. As such, each tile must render its assigned portion in full detail. Each tile will contain a screen, display driver, and electrical connectors that will receive power and image data from the control tile. The tiles will have an MCU for image processing. Each tile will be enclosed in a block housing that keeps the screens flush against one another and maintains alignment. Components: - Display : 6 inch LCD or TFT screen - CreateXplay 6.0 inch TFT Screen Module 1080×2160 - Display Controller Board : HDMI or LVDS - Edge connectors : Magnetic Pogo Pin Connector, 12V 1A Pogopin Male Female 2.5 MM Spring Loaded Connectors - Housing for the Screen - Microcontroller Unit (MCU) : ESP32-C3-WROOM-02 ## Subsystem 2, Tile Interconnect and Layout Detection The key innovation of this project is modularity. Therefore, the board must work regardless of how the user arranges the tiles. This subsystem will provide that capability, allowing users to rearrange tiles freely while ensuring the correct image appears in the correct location. Each tile will include edge contacts that detect when it is connected to a neighboring tile. The power tile will scan these connections and build a grid representing the board. Based on each tile's position data, the power tile will assign the tile a grid coordinate and determine the part of the image that tile should display (rerunning automatically as tiles are moved). 
Implementation: - Connection detection - Layout mapping algorithm on the MCU (see the sketch after this proposal) - Coordinate assignments ## Subsystem 3, Power or Control Tile This subsystem will serve as the control center of the board and will be responsible for ensuring all tiles receive power and image data. The control tile will have one or two MCUs. One MCU manages system logic (layout detection, scene selection, etc.), while the second handles display data. The controller will store images locally (microSD or USB), slice them into tile segments, and transmit the correct image data to each tile. It will also broadcast synchronization signals to ensure all tiles update at the same time. This tile will also include power regulation, ensuring that all connected tiles receive stable voltage and current. Components: - Microcontroller Unit (MCU) : ESP32-C3-WROOM-02 - microSD or flash storage - Power distribution board with protection NCV97200 - Power On Button: PTS645SL43-2 LFS ## Subsystem 4, User Interface and Scene Control Without an intuitive interface, changing what is on the screen would be difficult, which would reduce usability. This subsystem ensures the board can be used in many different scenarios. Basic user controls will be integrated directly into the control tile. For advanced control, the system will provide a Wi-Fi-based web application hosted on the control tile. Users can connect from a phone or laptop to upload images, select scenes, and push them to the board. If app development proves too complex within the semester, the board will support switching between multiple preloaded scenes as a fallback. Components: - Scroll Knob: a scroll wheel that will allow switching of images if app development is too complex # Criterion For Success - The system supports 4 to 9 tiles. - Pressing the power button powers the system and all connected tiles. - The power or control tile automatically detects the board layout. - Each tile displays the correct portion of the full image. - The board displays at least two selectable scenes. - Scene transitions occur without visible misalignment. - The system remains stable under repeated reconfiguration. - Each tile can display a number indicating its relative location (for layout debugging). |
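The layout-mapping step is essentially a breadth-first walk of the neighbor links, as sketched below in plain C++. The `neighborOf()` query is a stand-in for whatever the edge-contact scan actually reports; coordinates are relative to the control tile.

```cpp
#include <cstdint>
#include <map>
#include <queue>

struct Coord { int x, y; };
enum Dir { UP, DOWN, LEFT, RIGHT };
const int DX[4] = {0, 0, -1, 1};
const int DY[4] = {-1, 1, 0, 0};

// Stand-in for the edge-contact scan: tile ID on that edge, or -1 if none.
int neighborOf(int tileId, Dir d) { return -1; /* replace with real scan */ }

std::map<int, Coord> mapLayout(int controlTileId) {
  std::map<int, Coord> grid;
  std::queue<int> frontier;
  grid[controlTileId] = Coord{0, 0};        // control tile is the origin
  frontier.push(controlTileId);
  while (!frontier.empty()) {
    int id = frontier.front(); frontier.pop();
    for (int d = 0; d < 4; ++d) {
      int nb = neighborOf(id, static_cast<Dir>(d));
      if (nb >= 0 && !grid.count(nb)) {     // newly discovered tile
        grid[nb] = Coord{grid[id].x + DX[d], grid[id].y + DY[d]};
        frontier.push(nb);
      }
    }
  }
  return grid;                              // tile ID -> image-slice coordinate
}
```

Rerunning this scan whenever a contact changes state gives the automatic re-mapping described in Subsystem 2.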
||||||
| 29 | EV Battery Thermal Fault Early Detection & Safety Module |
RJ Schneider Skyler Yoon Troy Edwards |
Wenjing Song | Arne Fliflet | ||
| # Team Members - RJ Schneider (rs49) - Skyler Yoon (yy30) - Troy Edwards (troyre2) # Problem Lithium-ion batteries used in electric vehicles can experience abnormal heating due to internal faults, charging stress, or cooling failure. These thermal issues often begin with localized hot spots or an unusually fast increase in temperature before visible failure occurs. While vehicle battery management systems handle internal protection, there is a need for an external, low-voltage monitoring and diagnostic module that can provide early warning and a hardware-level safety output for laboratory testing, validation, and educational demonstration environments. # Solution We propose a battery thermal fault monitoring module that detects early thermal fault indicators using multiple temperature sensors and simple decision logic. The system will use two independent detection paths: a microcontroller-based path for data logging and trend analysis, and a hardware comparator path for fast threshold-based fault detection. A custom PCB will integrate sensor interfaces, signal conditioning, control logic, and alert outputs. The system will be demonstrated using a low-voltage heating element to safely simulate abnormal battery heating behavior. # Solution Components ## Subsystem 1 (Thermal Sensing Front-End) Components: - 10k NTC Thermistors (x3) - 1% Precision Resistors (voltage divider networks) - MCP6002 Rail-to-Rail Op-Amp (or equivalent) Function: This subsystem converts temperature changes into analog voltage signals using thermistor voltage dividers. A simple active low-pass filter is implemented on the PCB to reduce noise from the heating element and power supply. Multiple sensors allow detection of uneven heating across the simulated battery surface. ## Subsystem 2 (Dual-Logic Decision Unit) Components: - ESP32-WROOM-32 Microcontroller - LM311 Voltage Comparator Function: The ESP32 samples temperature data using its ADC and calculates temperature rate-of-rise to generate early warning alerts (a conversion and rate-of-rise sketch follows this proposal). In parallel, the LM311 comparator directly monitors one sensor voltage and triggers a fault output when a fixed temperature threshold is exceeded. This provides a simple hardware backup path that does not rely on firmware execution. ## Subsystem 3 (Power Regulation and Safety Output) Components: - 5V to 3.3V LDO Regulator (e.g., AMS1117-3.3) - SPDT 5V Relay Module - Logic-Level MOSFET (IRLZ44N or equivalent) Function: This subsystem regulates input power for the PCB and provides output signaling. The relay represents a low-voltage safety cutoff output that simulates a charger-disable or contactor-enable signal. The MOSFET is used to control the heating element during demonstration and testing. # Criterion For Success 1. Hardware Fault Trigger: The comparator-based protection path must activate the relay output within 200 ms of exceeding a preset temperature threshold. 2. Early Warning Detection: The ESP32 must trigger a warning alert when the measured temperature rise exceeds a configured rate-of-rise threshold for at least 3 seconds. 3. Temperature Accuracy: PCB sensor readings must be within ±1.5°C of a calibrated reference thermometer. 4. Noise Reduction Performance: The PCB filtering stage must demonstrate reduced ADC signal noise compared to an unfiltered measurement when the heating element is active. 5. Fail-Safe Behavior: The relay output must default to an open (safe) state when system power is removed. |
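For the firmware path, the thermistor divider voltage must first be converted to a temperature before the rate-of-rise logic can run. The sketch below (Arduino-style C++ for the ESP32) assumes the NTC on the low side of the divider with a 10k fixed resistor, a Beta of 3950, and a 2 °C/s warning threshold; all of these are illustrative, not values fixed by the proposal.

```cpp
#include <math.h>

const float R_FIXED = 10000.0f, R0 = 10000.0f, BETA = 3950.0f, T0_K = 298.15f;
const float RISE_LIMIT_C_PER_S = 2.0f;   // assumed early-warning threshold

float adcToCelsius(uint16_t adc) {       // 12-bit ADC assumed (0..4095)
  float v = adc / 4095.0f;               // fraction of the supply voltage
  float rNtc = R_FIXED * v / (1.0f - v); // low-side NTC divider
  float tK = 1.0f / (1.0f / T0_K + logf(rNtc / R0) / BETA);  // Beta equation
  return tK - 273.15f;
}

// Returns true when the rise rate stays above the limit for >= 3 s (criterion 2).
bool earlyWarning(float tempC) {
  static float lastC = NAN;
  static uint32_t lastMs = 0;
  static uint8_t hotSeconds = 0;
  uint32_t now = millis();
  if (isnan(lastC)) { lastC = tempC; lastMs = now; return false; }
  if (now - lastMs < 1000) return false;          // evaluate once per second
  float rate = (tempC - lastC) * 1000.0f / (now - lastMs);
  lastC = tempC; lastMs = now;
  hotSeconds = (rate > RISE_LIMIT_C_PER_S) ? hotSeconds + 1 : 0;
  return hotSeconds >= 3;
}
```

The comparator path stays independent of all of this, which is the point: it trips on the raw divider voltage even if the firmware above never runs.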
||||||
| 30 | American Sign Language Robot Hand Interpreter |
Ankur Prasad Matthew Uthayopas Tunc Gozubuyuk |
Mingrui Liu | Yang Zhao | proposal1.pdf |
|
| **American Sign Language Robot Hand Interpreter** **Team Members**: - Ankur Prasad (ankurp3) - Experienced in Control Systems, Machine Learning, and some embedded programming. Has done projects that train models using Python and has worked with programming and communicating with sensors. Additionally has experience building mechanical systems. - Tunc Gozubuyuk (tuncg2) - Has some experience in PCB design and experience in Control Systems. - Matthew Uthayopas (mnu2) - Experienced in Circuit Design and Signal Processing. Has done internships focused on AI/ML models. Has some experience with PCB design and programming with MCUs. **Problem** There are 500,000 to 1,000,000 people worldwide who use American Sign Language (ASL) to convey their ideas. Every idea matters, and we want every idea to be addressed, understood, and communicated between individuals without any communication barriers. Therefore, we want to engineer a cost-efficient ASL Robot Hand Interpreter to be used as a teaching tool for anyone who wants to learn ASL. Voices of the Unheard: Conversational Challenges Between Signers and Non-Signers and Design Interventions for Adaptive SLT Systems: https://dl.acm.org/doi/10.1145/3706599.3720201 Students With Disabilities: https://nces.ed.gov/programs/coe/pdf/2024/CGG_508c.pdf **Solution** Our solution is to design a programmable robotic hand that will be able to perform all letters of the alphabet in American Sign Language. The hand will be trainable through multiple sensors attached to a separate glove, so we can potentially train the hand to sign whole words. We will focus on the hand displaying ASL words, but if time permits, we will add features that allow interaction with the hand. If Time Permits: The robotic hand will be able to teach American Sign Language without the need for a teacher/interpreter. This can be done by adding audio recognition to the robotic hand so that it can sign words that it picks up. **Solution Components** **Subsystem 1**: Robotic Hand and Actuation Controls This subsystem will bend and restore the joints of the robotic hand, functioning like tendons as it curls and extends the fingers. Mechanical Structure: Fingers made out of popsicle sticks that will be cut, sanded down, and connected with screws and nuts: Popsicle Sticks - https://www.hobbylobby.com/crafts-hobbies/wood-crafts ($0.99) The palm will be made out of cardboard, layered and then glued together. Additionally, there will be cut wood to mount the servo motors. Cardboards - https://a.co/d/1botWA0 ($5) For the tendons we plan to use nylon string routed through the fingers using small screws/holes on the finger segments. We will place winches and spools on the servo horns to wind the string that controls the fingers. Additionally, we will utilize elastic cords to provide a restoring force that returns each finger to its original state. Elastic Cords - https://www.amazon.com/Elastic-Bracelets-Bracelet-Stretchy-Necklaces ($7) We may also utilize springs to ensure that the fingers have enough force when holding a specific hand position. Motor system: Servo motors (x9) which will provide the torque to pull the tendons. Each finger will contain one servo motor except the thumb, which will contain three. 
Then we will have two servo motors for the wrist to allow movement in both directions. Servo Motors - https://www.adafruit.com/product/1143 ($10) Microcontroller (Nano V3.0) - https://a.co/d/bsRC3nZ ($16) We plan to use an ATmega328P MCU to read the resistance of each finger's flex sensor for each letter. The microcontroller will be hooked up to flex sensors attached to each finger. The microcontroller and motor system will be placed inside a recyclable water bottle. Flex Sensors - https://www.pcb-hero.com/products/2-2-resistive-flex-sensor ($2.15) Power System: Our system will eventually be powered by a portable power module. It will be connected to the microcontroller, which will then provide power to all the other components. Power Source: For bench: AC-DC adapter (12 V or 6–8 V, depending on motors) For portable: Turnigy 3300mAh 3S 11.1V Shorty LiPo Battery ($20) - https://hobbyking.com/en_us/turnigy-3300mah-3s-11-1v-30c-shorty.html **Subsystem 2**: Interaction and Teaching This subsystem will be responsible for training and programming the robotic hand. Sensor Glove: Main Glove: Standard cloth glove made for winter We will use 9 flex sensors to gauge the movements of the specific joints and fingers An Arduino Nano, which will be mounted on the glove to read all of the flex sensor data A HC-05 Bluetooth module will be used to send the glove’s sensor data to the main robot hand controller (a letter-to-pose lookup sketch follows this proposal) **Criterion For Success** Sign Language Accuracy The robotic hand should sign each letter of the ASL alphabet correctly when programmed to do so Any words or letters signed should be recognizable by at least 3 testers The device should be able to spell out a 6-letter word, understandable by 3 testers, in a reasonable amount of time Machine Learning Feedback The robotic hand must be able to replicate signs performed with the glove at 85% accuracy The robotic hand should replicate signs within 2-3 seconds of glove movement Battery Life and Power Supply The robotic hand must have at least 2 hours of battery life The device should be able to perform at least 26 different hand signals before losing functionality Time Permitting Features The robotic hand should be able to replicate spoken words at 75% accuracy The camera should be able to detect a human doing sign language against a single-color background |
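One straightforward firmware structure for the hand is a per-letter lookup table of finger angles, sketched below with the standard Arduino Servo library. The pin assignments and pose angles are made-up placeholders (real poses would come from glove training), and the thumb is simplified to a single servo here.

```cpp
#include <Servo.h>

const int NUM_FINGERS = 5;               // one servo per digit (thumb simplified)
const int SERVO_PINS[NUM_FINGERS] = {3, 5, 6, 9, 10};  // assumed wiring
Servo fingers[NUM_FINGERS];

struct Pose { uint8_t angle[NUM_FINGERS]; };  // 0 = extended, 180 = curled

// Placeholder poses for 'A' and 'B'; a full table would cover all 26 letters.
const Pose POSES[] = {
  /* A */ {{60, 180, 180, 180, 180}},
  /* B */ {{170, 0, 0, 0, 0}},
};

void signLetter(char c) {
  if (c < 'A' || c > 'B') return;        // demo table only holds A and B
  const Pose& p = POSES[c - 'A'];
  for (int i = 0; i < NUM_FINGERS; ++i) fingers[i].write(p.angle[i]);
}

void setup() {
  for (int i = 0; i < NUM_FINGERS; ++i) fingers[i].attach(SERVO_PINS[i]);
}

void loop() {
  signLetter('A');
  delay(2000);                           // hold the sign, then switch
  signLetter('B');
  delay(2000);
}
```

Glove training then amounts to recording flex-sensor snapshots and converting each into a `Pose` row, which keeps the playback path identical for trained and hand-coded signs.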
||||||
| 32 | Plant Notification System (Soilmate) |
Emma Hoeger Sigrior Vauhkonen Ysabella Lucero |
Zhuchen Shao | Arne Fliflet | proposal1.pdf |
|
| Plant Notification System (Soilmate) Team Members: - Emma Hoeger (ehoeger2) - Ysabella Lucero (ylucero2) - Sigrior Vauhkonen (sigrior2) # Problem Many houseplant owners struggle to take proper care of their plants. It can be difficult to keep track of when to water them and where to keep them, based on their species of plant and stage of life. Since all of them require water at different frequencies and amounts, it’s also easy to forget to water the plants on time and meet their different schedules. # Solution Our solution is to create a notification system to inform houseplant owners of when they should water their different plants. It will also notify the owner of the conditions of the plant based on various sensors. This will be done by creating an app that the owner can download on their phone, where they will be able to enter their type of plant. Many apps have been created to act as reminders to water plants; however, the majority of them rely on a schedule rather than live data gathered from the plant. Also, those that do use live data from the plant do not track the weather. Our app will track where the plant is originally from and use the weather patterns in that area to determine when it should be watered (i.e., when it is raining). In addition, there will be a soil moisture sensor, humidity sensor, light sensor, and temperature sensor. The soil moisture sensor will alert the owner to water the plant if the moisture is too low, and prevent overwatering of the plant if the moisture is too high. The humidity sensor will alert the owner when humidity is dangerously high or low for the plant, which is especially useful for tropical plants in a non-tropical environment (many houseplants are of a tropical background). The temperature sensor will alert the owner when the room temperature is not in the optimal range for the specific plant. With the integration of software and hardware subsystems, this plant notification system will make taking care of houseplants easier for both beginner and experienced plant owners. Beginner plant owners will find it easier to learn and keep track of the demands of their plants, preventing the most common mistakes that result in the death of their plants. Many experienced plant owners have upwards of 20 plants, and this notification system would make it much simpler to keep track of when to water them all. # Solution Components - ESP32-C61-DevKitC-1-N8R2 - Moisture Sensor (SEN0114) - Temperature & Humidity Sensor (SHTC3-TR-10KS/9477851) - Light Sensor (BH1750) - ADC Module - 5V DC Converter ## Subsystem 1: App Configuration + Weather Data The app (developed using Flutter/Android Studio) will allow the user to add a plant for monitoring: the user will select the plant species, size, light exposure, and the size of the pot. With this information, using a lookup table that holds information for plant species, the app will store target ranges for soil moisture, temperature, humidity, and light, as well as a “home location” (later used to check weather). In the event that a plant species is unknown to the app (not in the lookup table), the user can manually add this information. Once per day, the app will call a weather API (OpenWeatherMap API) using the “home location” of a plant to check for rain in that region. This will be used as a supplementary factor to the data from the soil moisture sensor, and with this a decision will be made on whether to water the plant or not (see the sketch below). 
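A minimal sketch of that decision in C++: the thresholds, the assumption that a higher SEN0114 ADC reading means wetter soil, the `rainInHomeRegion` flag (parsed from the OpenWeatherMap response elsewhere), and the reading of the rain rule as a supplementary nudge toward watering are all illustrative choices, not settled design decisions.

```cpp
#include <cstdint>

struct PlantProfile {
  uint16_t moistureDryThreshold;  // below this ADC count: too dry
  uint16_t moistureWetThreshold;  // above this ADC count: overwatering risk
};

enum class Advice { WATER_NOW, TOO_WET, OK };

Advice wateringDecision(uint16_t moistureAdc, bool rainInHomeRegion,
                        const PlantProfile& p) {
  if (moistureAdc > p.moistureWetThreshold) return Advice::TOO_WET;   // never overwater
  if (moistureAdc < p.moistureDryThreshold) return Advice::WATER_NOW; // sensor wins
  // Soil in range: mimic the native region's weather as a supplementary cue.
  return rainInHomeRegion ? Advice::WATER_NOW : Advice::OK;
}
```

The moisture sensor always has veto power over the forecast cue, which prevents overwatering regardless of what the API reports.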
If the plant should be watered, a notification will be generated to inform the user. The data from the temperature, light, and humidity sensors will also generate notifications if the temperature and/or humidity is out of the recommended range, informing the user that the environment is too hot or too cold, or too moist or too dry. The app will give recommendations to turn the temperature up or down, place the plant in a window with a different facing (north, east, south, west), mist with water if too dry, or open windows if too humid. This will make the app much more beginner-friendly. ## Subsystem 2: Sensor Subsystem The sensor subsystem will use a resistive moisture sensor (SEN0114), a temperature and humidity sensor (SHTC3), and a light sensor (BH1750). All of these sensors except the SEN0114, which requires an ADC module, will use an I2C interface that is compatible with our microcontroller (ESP32). The sensors will send their measurements to the microcontroller to be interpreted and relayed through the app. Our power subsystem will supply the rated voltages of the sensors. ## Subsystem 3: Microcontroller for Communication We must be able to blend our app configuration with our live sensor subsystem to send an alert. We can do this by using the ESP32 microcontroller. It provides Wi-Fi and Bluetooth connectivity for our sensor devices to easily transfer data to our app. It is cost-effective and has low power consumption, which will make it easy to integrate into our design. Furthermore, our group has experience with this microcontroller, so we are confident in its capabilities. ## Subsystem 4: Power Subsystem The power subsystem will deliver power to the sensors and microcontroller. The ESP32 board requires 5 V, while the temperature, humidity, moisture, and light sensors require 3.3 V. The 3.3 V will come from the LDO on the microcontroller board, and we will use a 5 V USB adapter to convert the 120 V AC from the bench to 5 V. # Criterion For Success (Pothos, for example) - Accurately gather soil moisture data - 300-700 Ohms optimal for top 2 inches of soil - Accurately gather temperature data - 60 to 80 degrees Fahrenheit - Accurately gather humidity data - 40 to 60% - Accurately gather light data - 1,000 to 3,000 lux - Accurately transfer data from sensors to app via microcontroller - Be able to track weather conditions - Be able to send alerts through app using sensors/weather conditions - Allow user to enter plant species and size in app - Ensure app can track weather for multiple plant species |
||||||
| 33 | HelpMeRecall |
Michael Jiang Sravya Davuluri William Li |
Hossein Ataee | Craig Shultz | proposal1.pdf proposal2.pdf proposal3.pdf |
|
| # HelpMeRecall Team Members: - Sravya Davuluri (sravyad2) - William Li (wli202) - Michael Jiang (mbjiang2) # Problem Many individuals have difficulty remembering recent activities and completing routine tasks like eating or taking medication. # Solution A standalone assistive device that supports activity recall using sensor-gated voice interaction. It allows users to verbally log activities they have completed, and later query whether a specific activity has been performed. It uses an onboard microphone and on-device audio processing on a microcontroller to perform keyword detection. The device is always on and verifiable with an LED, but voice input is only accepted if the device is worn (capacitive touch sensor) and specific words from a limited vocabulary are said, to avoid accidental logging. To address imperfect detection of supported keywords, we will map several keywords to each activity. In the case of taking medicine, these might be medicine, medication, pill, drug, and prescription. This also simplifies the problem and avoids confidence-threshold issues. To validate a completed action, the action is logged only if an accelerometer detects physical movement around that time, reducing false logging. If a voice log is accepted, haptic feedback is provided by the device. Activities are also timestamped and stored in local memory. When queried, if the device finds that a specific activity has been completed, it affirms it, including the timestamp, through an integrated speaker. The logs reset automatically at midnight since the activities repeat daily. There is also a hard reset button to clear logs, and a separate button to delete the latest log in case of a logging mistake by the user. # Solution Components ## Subsystem 1: Microcontroller Unit and Controls Acts as the central unit for logic. Manages the sensor inputs and executes a finite state machine. The FSM states are start, idle, listening, logging, and replying (sketched in code after this proposal). Components: ESP32-S3-WROOM-1 ## Subsystem 2: Audio input processing unit Captures the voice input from the user and performs keyword detection on a limited vocabulary, where each action can be mapped to multiple set keywords to improve detection. Components: Digital MEMS microphone (INMP441), ESP32-S3-WROOM-1 ## Subsystem 3: Sensor gating and activity validation Uses a capacitive touch sensor and an accelerometer to detect motion, ensuring that voice input is only received and accepted if the device is worn and recent movement is detected, instead of running continuous voice recognition. A "cooldown" period is enforced: the microphone is disabled for 10 seconds if there is motion but no logging during the listening period multiple times in a row, to help conserve battery. Components: Capacitive touch sensor (AT42QT1010), Accelerometer (MPU-6050) ## Subsystem 4: Feedback and Output Uses a speaker for audio feedback in response to the user’s query. This subsystem also provides haptic feedback as an indication of an accepted voice log. To indicate that the device is on, the LED is green. If the device is listening, the LED is yellow. If the device is low on power, the LED is red. Components: Speaker (8 ohm speaker), amplifier (MAX98357A), coin vibration motor, transistor (2N3904), RGB LED ## Subsystem 5: Time logging and local storage Stores the activity voice logs along with timestamps. Allows automatic reset at midnight to support daily repetitive tasks. 
Timekeeping is done using the ESP32’s internal RTC. Components: ESP32-S3-WROOM-1 ## Subsystem 6: Power Supplies power to the device. Components: Battery (Li-Po battery) # Criterion For Success - Correctly detects supported keywords with an accuracy of at least 80% in a quiet environment - Device only logs after verifying physical activity and hearing a keyword from the user within a 5 second window - Upon successful logging, the speaker produces an audible confirmation and the user feels a 2 second vibration - While querying logs, the speaker responds and the LED stays solid - Logs are automatically cleared at midnight and can be manually reset with the reset button - The latest log is deleted upon pushing a separate button - The LED stays solid while the device is powered - False log rate < 1 per hour in normal conversation when worn. |
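A minimal Python sketch of the keyword-to-activity mapping and sensor-gated logging described above. The activity table, the 5-second motion window, and the function names are illustrative assumptions, not the team's firmware.

```python
import time

# Several keywords map to one activity, as the proposal describes for
# "medicine"; the "eating" entry is an assumed example.
ACTIVITY_KEYWORDS = {
    "medicine": {"medicine", "medication", "pill", "drug", "prescription"},
    "eating": {"eat", "meal", "breakfast", "lunch", "dinner"},
}

log = []  # (activity, timestamp) pairs; cleared at midnight by the firmware

def keyword_to_activity(word):
    """Return the activity a detected keyword belongs to, or None."""
    for activity, keywords in ACTIVITY_KEYWORDS.items():
        if word in keywords:
            return activity
    return None

def try_log(word, device_worn, last_motion_time):
    """Accept a voice log only when the touch sensor says the device is
    worn and the accelerometer saw motion within the last 5 seconds."""
    activity = keyword_to_activity(word)
    if activity is None or not device_worn:
        return False
    if time.time() - last_motion_time > 5.0:
        return False  # no recent physical movement: likely a false trigger
    log.append((activity, time.time()))
    return True  # the device would fire haptic feedback here
```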
||||||
| 34 | LabEscape Ultrasonic Directional Speaker |
Arthur Zaro Piotr Nowobilski Sam Royer |
Mingrui Liu | Arne Fliflet | proposal1.pdf |
LabEscape Escape Room |
| # LabEscape Ultrasonic Directional Speaker Team Members: - Piotr Nowobilski (piotrn2) - Sam Royer (sroyer2) - Arthur Zaro (azaro3) # Problem Working with Professor Kwiat for the LabEscape escape room, we want to make an audio-based clue using ultrasonic waves to hide a narrow beam of audio that can only be heard at the intersection of two ultrasonic waves. We need to create the ultrasonic transducer arrays to emit the ultrasonic waves, as well as the drivers that feed the transducers and produce the necessary waves. # Solution We will make 2 separate subcircuit drivers to drive the ultrasonic waves. One will be a standard 40kHz wave serving as a reference, and the other will be a 40kHz carrier wave that uses amplitude modulation to encode an audible audio signal. The intensity of the 40kHz waves will drive the air into a nonlinear regime, allowing the air itself to demodulate the carrier wave against the 40kHz reference, so that the original audio is heard only at the intersection of the 2 waves. For the transducer we will simply wire many individual ultrasonic transducers in parallel, with one array connected to the 40kHz sine wave and the other connected to the 40kHz carrier wave. # Solution Components ## Digital-to-analog Converter We need to store an audio clip digitally so the same clue can play over and over throughout the escape room experience and be discovered at the intersection of the “audio spotlights”. To convert this digitally stored signal into a usable signal for the speakers, we need to convert the digital signal to an analog signal. The ideal resolution would be 16 bits for high quality audio, as we want to minimize the distortion caused by conversion. This will be done through a DAC IC. A serial load DAC seems best, as they have internal 16 bit shift registers; if I sample my audio at 22050Hz, I can maintain good resolution by loading at 22050 * 16 Hz and then moving the signal to the output. Components: DAC8811 - 16 bit serial Digital to Analog converter. Audacity audio software to record and encode 16 bit audio ## Modulating subcircuit We need to convert the new analog signal into a 40kHz signal using amplitude modulation so that the carrier wave and reference wave are at the same base frequency, and upon their crossing with enough power, the signal will demodulate in the air. We are considering implementing this with digital potentiometers in one of the many standard amplitude modulation circuit designs available online, tuning it very precisely with those digital potentiometers to compensate for the tolerances of the resistors and capacitors used in this circuit (a short numerical sketch of the modulation follows the proposal). Components: Digital Potentiometer - MCP4141. ## Signal Amplifier Circuit After we modulate the signal, as well as for the standard 40kHz wave, we need to amplify the signal so that it is powerful enough to drive the air into the nonlinear regime where the audio signal is demodulated at the cross section of the audio beams. Components: LM3886 (high power audio amplifier; the only issue is that its gain falls off at higher frequencies (40kHz), so we may decide to swap this out). ## Filtering Subcircuit A filter subcircuit may be necessary in order to reduce the noise before amplification. Given that most speech frequencies are below 6kHz at the absolute high end and above 80Hz at the absolute low end, this will likely be a band-pass filter to cut out the extreme highs and lows from harmonics and miscellaneous conversion noise.
Initially we will just try a simple first order low pass filter and high pass filter in series, which would only require a capacitor and a potentiometer to tune. If that doesn’t provide enough attenuation, I’ve found some online examples of higher order filters that give higher attenuation and would require a few additional resistors, capacitors, and an op amp. Components: Digital Potentiometer MCP4141 for tuning the filtering circuit. Capacitors for the filtering circuit. Resistors for the filtering circuit. Op Amp (TBD if needed). ## Transducer Array To actually emit the ultrasonic waves, we will need an ultrasonic speaker array to emit both the reference and carrier waves. To do this we will buy several small individual ultrasonic speakers and attach them in parallel so they all simultaneously emit the desired frequency. Components: 25+ small ultrasonic transducers (can buy in bulk) ## Additional Component(s) Stepper motor and motor drivers for panning the speakers into alignment. Flashlight mounted to each transducer array to make the alignment of each speaker clear. # Criterion for Success - The audio and pressure from the ultrasonic waves are very narrow, and finding the intersection between the two ultrasonic “spotlights” requires precision. This beam should be consistent with the attached flashlights. - Audio is only heard at the intersection of the two waves and is neither too loud nor too quiet. - Audio is of clear enough quality that a clue can easily be presented through the transducers. - Transducers and drivers are capable of running for a long period of time while players try to uncover the clue associated with them. |
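As a rough numerical illustration of the modulation scheme above, the following numpy sketch builds the two drive signals: an unmodulated 40 kHz reference and a 40 kHz carrier amplitude-modulated by audio sampled at 22050 Hz. The output rate, test tone, and modulation depth are assumptions for illustration, not the analog circuit design.

```python
import numpy as np

fs_audio = 22050   # audio sample rate from the proposal
fs_out = 352800    # output rate, 16x the audio rate (well above 2 * 40 kHz)
fc = 40000         # ultrasonic carrier frequency in Hz
m = 0.8            # modulation depth (assumed)

t = np.arange(fs_out) / fs_out                # one second of output samples
# 440 Hz test tone standing in for the recorded clue audio
audio = np.sin(2 * np.pi * 440 * np.arange(fs_audio) / fs_audio)
audio_up = np.interp(t, np.arange(fs_audio) / fs_audio, audio)  # upsample

carrier = np.sin(2 * np.pi * fc * t)
am_wave = (1 + m * audio_up) * carrier        # standard DSB amplitude modulation
reference = carrier                           # unmodulated 40 kHz reference beam
```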
||||||
| 35 | UAV Battery Management System with Integrated SOC and SOH Estimation |
Edward Chow Jay Goenka Samar Kumar |
Xiaodong Ye | Arne Fliflet | proposal1.pdf |
|
| # Title UAV Battery Management System with Integrated SOC and SOH Estimation # Team Members: - Edward Chow (ec34) - Jay Sunil Goenka (jgoenka2) - Samar Kumar (sk127) # Problem UAV batteries are safety-critical and performance-critical, as a weak or degraded pack can cause sudden voltage drop, shutdown, reduced flight time, or unsafe thermal behavior. Typical BMS implementations rely primarily on fixed thresholds for voltage, temperature, or current to prevent immediate failures. However, threshold-only systems do not provide predictive insight into battery degradation, and battery health issues are often discovered only after runtime loss or unsafe behavior. Additionally, high discharge currents and fluctuating temperatures are common in UAV operations, which accelerates degradation. A lightweight BMS that not only protects the pack in real time but also estimates battery health and degradation risk would improve reliability, reduce unexpected failures, and enable better operational decisions, such as deciding whether a battery is safe to use or needs to be retired. # Solution To address the delicate nature of UAV batteries, we aim to design and construct a compact and efficient battery management system that seamlessly integrates reliable real-time protection with intelligent prediction. Our primary algorithm for estimating the battery’s State of Charge (SOC) will be coulomb counting, which relies on continuous current measurement (a minimal sketch follows this proposal). We are researching the Kalman filter method as a second algorithm for more accurate estimation. The BMS will also monitor cell voltages and temperatures to ensure safe operation and provide valuable data for battery condition assessment. By analyzing SOC history, voltage behavior, current profiles, and temperature data, the system should be able to estimate the State of Health (SOH) of the battery. Tracking SOH over time will help us understand capacity fade and degradation trends. We also plan to log all measurements and stream them to an external dashboard for visualization and analysis. As an extension, the project could also incorporate a lightweight AI-driven model to assist in SOH estimation and degradation assessment. # Solution Components ## Slave Board The slave board will be responsible for monitoring individual cell voltages and temperatures and supporting passive cell balancing. It will report accurate measurement data to the master board, ensuring safe operation of the battery pack at the cell level. The HW components and sensors include: Cell monitoring IC: Analog Devices LTC6811 or LTC6813 (multi-cell voltage sensing with built-in diagnostics and balance control) isoSPI communication interface: Analog Devices LTC6820 Temperature sensors: 10 kΩ NTC thermistors (e.g., Murata NCP18XH103F03RB) Passive balancing: bleed resistors (33–100 Ω) and N-MOSFETs per cell Cell sense connectors and basic RC filtering/ESD protection Power regulation: buck converter (e.g., TPS62130) and 3.3 V LDO ## Master Board The master board is responsible for performing pack-level protection, SOC and SOH estimation, data logging, and external communication. It enforces safety limits by aggregating data from the slave board.
The HW components and sensors include: Microcontroller: STM32H7 series Current sensing: shunt resistor with TI INA240 current-sense amplifier Protection switching: back-to-back N-channel MOSFETs with gate driver (e.g., BQ76200) Power regulation: buck converter (e.g., TPS62130) and 3.3 V LDO Communication: isoSPI (LTC6820), CAN Data logging: microSD card or onboard flash memory ## BMS Viewer The BMS Viewer will be a software dashboard used to visualize real-time and logged battery data and assess battery health. Potential features: Live display of SOC, SOH, pack voltage, pack current, and temperature Time-series plots of voltage, current, temperature, and SOC Data ingestion via USB, CAN, or wireless telemetry Backend implemented in Python or Node.js with a web-based dashboard # Criterion For Success - BMS detects and mitigates fault conditions within a bounded response time (≤100 ms). - Cell voltage within ±50 mV per cell, pack current within ±10%, temperature within ±5°C after calibration. - SOC remains within ±10% of a reference SOC over a full UAV-like discharge cycle. - SOH estimate is within ±15% of a capacity-based reference and shows consistent degradation trends. - BMS Viewer displays and logs SOC, SOH, pack voltage/current, and temperature in real time. |
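A minimal Python sketch of the coulomb-counting SOC estimate named above, assuming a fixed pack capacity, a constant sampling period, and a positive-equals-discharge sign convention; the real firmware would run on the STM32H7 with calibrated INA240 current readings.

```python
CAPACITY_AH = 5.0   # assumed pack capacity in amp-hours
DT_S = 0.1          # assumed sampling period in seconds

class CoulombCounter:
    """Estimate state of charge by integrating measured current."""

    def __init__(self, soc0=1.0):
        self.soc = soc0  # 1.0 = fully charged

    def update(self, current_a):
        """Fold one current sample (positive = discharge) into the SOC."""
        self.soc -= current_a * DT_S / (CAPACITY_AH * 3600.0)
        self.soc = min(max(self.soc, 0.0), 1.0)  # clamp to [0, 1]
        return self.soc

cc = CoulombCounter()
for i_meas in (20.0, 18.5, 21.2):  # example UAV discharge currents in amps
    soc = cc.update(i_meas)
```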
||||||
| 36 | Slow Wave Sleep Enhancement System RFA |
Aidan Stahl Kavin Bharathi Vikram Chakravarthi |
Hossein Ataee | Yang Zhao | proposal1.pdf proposal2.pdf proposal3.pdf |
Sound Sleep |
| # Slow Wave Sleep Enhancement System ## Disclaimer: We are assisting Team 05 - Acoustic Stimulation to Improve Sleep, who presented during the first class lecture, with this project # Team Members: - Kavin Bharathi (kavinrb2) - Aidan Stahl (ahstahl2) - Vikram Chakravarthi (vikram5) # Problem: Many common neurological conditions, such as Alzheimer’s disease, depression, and memory issues, are associated with lower sleep quality. Specifically, these issues often stem from a lack of a specific type of sleep known as slow wave sleep (SWS). As individuals age, sleep disorders and other sleep-related issues lead to a lack of overall sleep. As a result, the amount of time an individual spends in SWS and the quality of SWS they experience typically decline with age, contributing to many of the issues mentioned above. # Solution: Our team is trying to improve sleep quality using a wearable device that is non-invasive and cost-effective. This device will record EEG waves and then detect when the user is in slow wave sleep (SWS) with the aid of specialized software. Once the user enters SWS, the system emits carefully timed bursts of pink noise through an auditory interface to enhance slow wave activity and extend its duration. This project is being done for the “Team 05 - Acoustic Stimulation to Improve Sleep” proposal by Maggie Li, Nafisa Mostofa, Blake Mosher, and Presanna Raman. Currently, our sponsors have a wearable headset that measures how much time is spent in SWS and a “Cyton + Daisy Biosensing PCB” to process incoming signals. This board costs $2,500, and we are aiming to design an alternative, cheaper PCB within the class budget of $150. Providing a cheaper alternative that offers similar functionality is what makes our project unique and patentable. # Solution Components: ## EEG Leads - EEG leads are conductive electrodes, small metal disks, that are placed on the scalp. These electrodes measure small voltage differences generated by the electrical activity of neurons in the brain. ## MCU/EEG Wave Detection System - The MCU/EEG wave detection system is used to detect the analog EEG waves from the EEG headband, amplify the signal (the EEG waves are very low voltage, so amplification will be necessary), digitize them, and transmit those signals to a computer for further processing to detect SWS. ## Computer/Software - Utilize YASA, an open-source sleep-analysis tool, to analyze EEG signals - Python script to run YASA while EEG data is being collected (see the sketch after this proposal) - Script also starts playing pink noise once SWS is detected - Interactive UI that allows the user to visualize EEG data ## Audio Source - An audio source will be used to play pink noise after the user enters SWS. # Criterion For Success: - Play pink noise after detecting an SWS signal with minimal delay - Correctly classify SWS with high accuracy - Ensure the wearable device is comfortable for the user through survey metrics |
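A hedged sketch of the SWS-detection step using YASA's Python interface (yasa.SleepStaging), assuming the EEG is available as an MNE Raw object. The file name and channel name are hypothetical, and the real system would classify streaming data from the custom PCB rather than a finished recording.

```python
import mne
import yasa

# Hypothetical overnight recording; the real pipeline would ingest live
# EEG samples streamed from the custom acquisition board.
raw = mne.io.read_raw_edf("night_recording.edf", preload=True)

sls = yasa.SleepStaging(raw, eeg_name="C3-A2")  # single-channel staging
hypnogram = sls.predict()                       # one stage per 30 s epoch

for epoch_idx, stage in enumerate(hypnogram):
    if stage == "N3":  # N3 is the slow wave sleep stage
        print(f"Epoch {epoch_idx}: SWS detected, trigger pink noise burst")
```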
||||||
| 37 | Ant-Weight Battlebot - DC Hammer |
Carson Sprague Gage Gathman Ian Purkis |
Haocheng Bill Yang | Viktor Gruev | proposal1.docx proposal2.pdf |
|
| # Ant-Weight Battlebot - DC Hammer Team Members: - Ian Purkis (ipurkis2) - Carson Sprague (cs104) - Gage Gathman (gagemg2) # Problem Statement Many battlebot designs struggle with balancing movement control, durability, offense, and defense within the limitations of competition regulations. We need to design a robust and versatile battlebot that follows competition requirements (namely weight requirements) and can outlast and subdue a variety of competitors. Primary design challenges for most battlebots stem from the diversity of opponent designs and abilities, which often lean on a particular design element to win. Our bot must remain competitive throughout the full match regardless of the opponent or sustained damage. # Solution Our proposed design will take a well-rounded approach to offense and defense, ensuring that our bot can sustain damage and last the full length of the match. Our primary offensive tool will be a motor-powered, sensor-enabled hammer and wedge attachment that allows multiple methods of opponent submission by housing two “attack modes”, letting the driver adapt attack strategy to the design of opposing bots. Our design also includes a significant defensive tool, inversion adjustment, which uses sensors and the bot’s physical shape to prevent knockouts via flips. Our bot will remain functional even if fully inverted. Physical components, especially the hammer, must be modular for quick replacement between matches if damage is taken. This well-rounded design will enable the driver’s creativity during the match by automating the offensive tool (hammer/wedge) and defensive tool (flip adjustment), providing the bot a significant competitive advantage against all types of opposing bots. # Solution Components ## Subsystem 1 - Ultrasonic Sensor Enabled Hammer/Wedge Attachment (Attack Arm) We will embed an ultrasonic sensor into the front of our bot. The sensor will be used as a proximity detector to activate the attack arm motion. The attack arm will have two default configurations, one for either orientation of the bot. A low position, running nearly parallel to the arena surface, will be used for the wedge attack; upon sensor OR driver input, an upward swing will execute, effectively flipping objects in front of the bot. The other resting position points upward, perpendicular to the ground, and upon sensor or driver input performs a downward swing to strike objects in front of the robot. - Ultrasonic Sensor; if we can use a pre-implemented sensor - Adafruit 4007 (https://www.digikey.com/en/products/detail/adafruit-industries-llc/4007/9857020). If we cannot, an infrared LED/detector combo could be used instead - Motor (Weapon) TBD, but something of the following sort; the primary characteristic is a high torque motor for flipping/smashing: 12V 50RPM 694 oz-in Brushed DC Motor (210 grams) (https://www.robotshop.com/products/12v-50rpm-694-oz-in-brushed-dc-motor) - Microcontroller Unit ESP32-S3-WROOM-1 (not dev board, just chip + antenna) ## Subsystem 2 - Gyroscopic Sensor Enabled Control Inversion We will embed a gyroscopic sensor inside the body of the robot. This will allow the software responsible for translating driver input into motor movement to adjust based on the orientation of the bot. If the bot is flipped over, left turns become right turns and vice versa, which would be a challenge for the driver to adjust to quickly. This feature/subsystem will allow the software to make the appropriate adjustments to maintain driver input continuity (a sketch of this logic follows the proposal).
Additionally, the orientation measured by the gyroscopic sensor will modify the resting/default positions of the attack arm (both the resting positions and the rotation direction must be inverted for the arm to continue operating). - Gyroscopic Sensor (potential alternate sensor - an accelerometer such as the MC3416 would do; it should be able to detect orientation satisfactorily) (https://www.digikey.com/en/products/detail/memsic-inc/MC3416/15292804) - Microcontroller Unit - ESP32-S3, see above ## Subsystem 3 - Wireless Control/Driver Input + Steering and Wheel Configuration Our driver will utilize a keyboard for robot control and steering. The W and S keys will control forward and backward motion, with A and D controlling left and right rotation. We will also program the F key to switch attack modes between the hammer and wedge, and the Space bar as an alternative manual attack trigger. These inputs will be communicated wirelessly to the onboard PCB and microcontroller via Bluetooth and translated to the appropriate motors. To enable tank turning we will use 4 wheel drive, as each wheel/motor will require isolated control. The height of the robot’s body will be thinner than the diameter of the wheels, with the wheels’ axles fixed at the midpoint relative to the thickness of the body. This will allow all four wheels to make contact with the ground regardless of orientation and maintain drivability. - Microcontroller Unit - ESP32-S3, see above - Keyboard (simply from a laptop; the laptop will also run the “server” that communicates with the MCU/PCB) - Drive Motors 12mm Diameter 50:1 Micro Metal Gearmotor 12V 600RPM (2 x 10 grams) (https://www.robotshop.com/products/dyna-engine-12mm-diameter-501-micro-metal-gearmotor-12v-600rpm) ## Subsystem 4 - Battery/Power Onboard power source for sensors/controllers/motors as well as components to regulate and distribute power. - Battery 3S (11.1 V) around 500 mAh battery (starting point estimate) (https://hobbyking.com/en_us/turnigy-nano-tech-450mah-3s-45c-lipo-pack-w-xt30) - Control Circuit Regulator AZ1117CH-3.3TRG1 - 3.3 V w/ 18 V max input; output current is 1.7 mA min and 1 A max, well within range (https://www.digikey.com/en/products/detail/diodes-incorporated/AZ1117CH-3-3TRG1/4470985) - Gate Drivers DGD0211C - 3.3 V to 12 V gate drivers, plenty of overhead in capability (https://www.digikey.com/en/products/detail/diodes-incorporated/DGD0211CWT-7/12702560) - H-Bridge MOSFETs FDC655BN - 30 V, 6.3 A N-MOSFETs (https://www.digikey.com/en/products/detail/onsemi/FDC655BN/979810) # Criteria for Success - Ultrasonic sensor accurately triggers the attack arm when an object comes into close proximity - Gyroscopic sensor accurately registers when the robot has been flipped and inverts the controls - Microcontroller takes in driver keyboard inputs for fluid steering - Attack arm’s default position changes based on driver input (horizontal for wedge, vertical for hammer) - Attack arm’s default position changes based on gyroscopic sensor input (default position adjusts to the bot’s orientation) - Tank turning and wheel alignment allow for 360 degree rotation - Robot movements follow driver input: i.e. forward/backward motion, turns, etc. |
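An illustrative Python sketch of the control-inversion logic described above: when the orientation sensor shows the bot is upside down, left/right steering and the arm's default position are swapped. The threshold, angles, and input names are assumptions, not the final firmware.

```python
def is_inverted(accel_z_g):
    """Bot is inverted when gravity reads negative on the body z-axis."""
    return accel_z_g < -0.5  # in g; threshold assumed

def map_turn(turn_input, accel_z_g):
    """turn_input: -1 (left) to +1 (right) from the driver's keyboard.
    Mirror the turn so driver intent stays consistent when flipped."""
    return -turn_input if is_inverted(accel_z_g) else turn_input

def arm_rest_angle(mode, accel_z_g):
    """Default arm position: 'wedge' rests low, 'hammer' rests high.
    Angles (degrees, assumed) flip with the bot's orientation."""
    if mode == "wedge":
        return 0 if not is_inverted(accel_z_g) else 180
    return 90  # hammer rests perpendicular to the ground either way
```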
||||||
| 39 | Auto-Tuner with LCD Display |
John Driscoll Lee Susara Nicholas Chan |
Eric Tang | Yang Zhao | proposal1.pdf |
|
| **Auto-Tuner with LCD Display** **Team:** Nicholas Chan, John Driscoll, Lee Susara **Problem:** For a guitar to be played properly, each string needs to be tuned to the right frequency to produce the right note. This can be done either manually or with assistance from a tuner. We would like to make this process easier by implementing an auto-tuning device that attaches to the pegs of the guitar. While such devices exist, most of them on the market are over $100, so we would like to make a more affordable version. **Solution:** Our solution is to create an auto-tuning device using servo motors and a feedback loop. This solves the problem because it makes the tuner much more affordable while maintaining its main functionality. Our design attaches a servo motor to each peg of the guitar and, while the user plucks the string, our device uses a microphone to measure the frequency and turn the peg as needed (a sketch of this feedback loop follows the proposal). The note being played will also be shown on an LCD display. **Subsystem 1:** One of the subsystems will be the device that attaches to the head of the guitar. This device will have 6 servo motors (HS-318), one for each peg. Each motor will have a clamp that attaches to a peg of the guitar. The device will also have an electret microphone amplifier that picks up sound from the guitar to determine what note is being played. A clamp will be used to keep the whole subsystem in place. **Subsystem 2:** Another subsystem we will need to implement is the control subsystem, which will house our PCB (QFN-16) and logic. We will use a breadboard (103-1100), wires, and various logic chips to implement the correct logic. **Subsystem 3:** The last subsystem we will need is the power and user interface. This will include our battery (EN-22), power switch button (1489), and LCD display, as well as any buttons we need should the user want to tune the guitar to a non-standard tuning. We can use the 2x16 LCD display with controller for this. **Criterion for Success:** For our project to be effective, it must be able to pick up and filter the frequency being played, properly take in the sound as input to determine how the guitar should be tuned, and ensure the motors are powered and functioning as desired. It must also fit on the head of the guitar without being too clunky, and our LCD display must show the correct notes being played. The project as a whole must also be more affordable than the auto-tuners currently on the market. |
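A minimal Python sketch of the tuning feedback loop referenced above: estimate the plucked string's pitch by autocorrelation, then decide which way the servo should turn the peg. The sample rate, tolerance, and standard-tuning target table are illustrative assumptions.

```python
import numpy as np

FS = 8000  # assumed microphone sample rate in Hz
TARGETS = {"E2": 82.41, "A2": 110.00, "D3": 146.83,
           "G3": 196.00, "B3": 246.94, "E4": 329.63}  # standard tuning

def estimate_freq(samples):
    """Autocorrelation pitch estimate for a mono audio buffer."""
    samples = samples - np.mean(samples)
    corr = np.correlate(samples, samples, mode="full")[len(samples) - 1:]
    # corr[k] is now the autocorrelation at lag k
    d = np.diff(corr)
    start = int(np.argmax(d > 0))          # skip past the zero-lag lobe
    period = start + int(np.argmax(corr[start:]))
    return FS / period if period > 0 else 0.0

def servo_direction(measured_hz, target_hz, tol_hz=1.0):
    """+1 = tighten the string, -1 = loosen, 0 = in tune."""
    if abs(measured_hz - target_hz) <= tol_hz:
        return 0
    return 1 if measured_hz < target_hz else -1
```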
||||||
| 40 | Bilateral Earlobe Pulse Timing Measurement Device |
Joshua Joseph Mark Schmitt Zhikuan Zhang |
Shiyuan Duan | Yang Zhao | other1.pdf |
|
| # Bilateral Earlobe Pulse Timing Measurement Device # Team Members Zhikuan Zhang (zhikuan2) Joshua Joseph (jgj3) Mark Schmitt (markfs2) # Problem Pulse transit time (PTT) is widely used as a non-invasive indicator of cardiovascular dynamics, but most existing systems measure PTT at a single peripheral location. There is currently a lack of low-cost, synchronized hardware tools that enable bilateral pulse timing measurements, such as comparing pulse arrival times between the left and right earlobes. Without a dedicated, time-synchronized, multi-channel sensing platform, it is difficult to study or validate whether body posture, head orientation, or environmental conditions introduce measurable bilateral timing differences. This project addresses the need for a custom PCB-based physiological sensing device that can reliably acquire synchronized ECG and bilateral PPG signals and serve as a general-purpose measurement tool for this understudied topic. # Solution This project proposes a PCB-based, multi-channel physiological sensing system consisting of one ECG channel placed near the chest and two PPG channels placed on the left and right earlobes. The system is designed as a measurement and validation tool rather than a research discovery platform. The PCB focuses on low-noise analog front-end design, precise time synchronization, and multi-channel data acquisition. ECG R peaks are used as a timing reference, and pulse arrival times from both PPG channels are compared under controlled conditions such as neutral posture, head tilt, or side lying. # Solution Components ## Subsystem 1: ECG Analog Front End Function: Acquire a clean ECG signal to provide a reliable cardiac timing reference. Components: Instrumentation amplifier such as the AD8232 or an equivalent ECG analog front end; analog high-pass and low-pass filtering stages; driven-right-leg circuit for common-mode noise reduction; surface ECG electrodes. Output: Digitized ECG waveform with clearly detectable R peaks. ## Subsystem 2: Dual PPG Sensing Channels Function: Measure pulse waveforms at the left and right earlobes simultaneously. Components: Two identical PPG sensors, such as the MAX30102 or a discrete LED-and-photodiode design; transimpedance amplifiers for photodiode current sensing; anti-aliasing filters; optical shielding for ambient light rejection. Output: Two synchronized PPG waveforms suitable for pulse arrival time extraction. ## Subsystem 3: Time-Synchronized Data Acquisition and Control Function: Ensure accurate relative timing between the ECG and both PPG channels. Design considerations: All channels are sampled by a single microcontroller ADC or synchronized ADCs; shared clock source using a low-ppm crystal oscillator; hardware-level timestamping of samples; avoid reliance on BLE timing for synchronization (BLE is used only for data transfer, if implemented). Components: Microcontroller such as an STM32 or ESP32; low-drift crystal oscillator; shared sampling clock architecture. # Criterion For Success Requirement 1: ECG signal acquisition. Validation: Clearly visible ECG waveform with identifiable R peaks; elevated heart rate observable after light exercise. Requirement 2: PPG signal acquisition for both earlobes. Validation: Stable and repeatable PPG waveforms captured simultaneously from the left and right earlobes. Requirement 3: Channel time synchronization. Validation: Relative timing jitter between channels below a predefined threshold, such as less than 1 ms; consistent timing results across repeated measurements. Requirement 4: Bilateral pulse timing comparison. Validation: ECG-referenced pulse arrival times
successfully computed for both earlobes under at least two different body conditions (a sketch of this computation follows the proposal). # Scope and Complexity Justification This project involves significant circuit-level hardware design, including low-noise analog front ends, synchronized multi-channel data acquisition, and mixed-signal PCB integration. The system complexity is appropriate for a senior design project and aligns with course expectations. The project is inspired by experience working as a research assistant in a biological sensing laboratory and is positioned as a hardware measurement tool rather than a research discovery platform. |
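A minimal Python sketch of the bilateral pulse-arrival-time comparison described in Requirement 4: detect ECG R peaks, take the next peak in each ear's PPG channel as a simple fiducial point, and difference the left and right arrival times. The sample rate and peak thresholds are illustrative assumptions; the real pipeline might use the PPG waveform foot instead of the peak.

```python
import numpy as np
from scipy.signal import find_peaks

FS = 500  # assumed shared sample rate (one clock for all three channels)

def pulse_arrival_times(ecg, ppg):
    """Seconds from each ECG R peak to the next PPG peak."""
    r_peaks, _ = find_peaks(ecg, height=0.6 * np.max(ecg), distance=FS // 3)
    ppg_peaks, _ = find_peaks(ppg, distance=FS // 3)
    pats = []
    for r in r_peaks:
        later = ppg_peaks[ppg_peaks > r]   # first PPG peak after this beat
        if later.size:
            pats.append((later[0] - r) / FS)
    return np.array(pats)

def bilateral_difference_ms(ecg, ppg_left, ppg_right):
    """Mean left-minus-right pulse arrival time difference, in ms."""
    pat_l = pulse_arrival_times(ecg, ppg_left)
    pat_r = pulse_arrival_times(ecg, ppg_right)
    n = min(pat_l.size, pat_r.size)
    return 1000.0 * float(np.mean(pat_l[:n] - pat_r[:n]))
```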
||||||
| 41 | BetaSpray - Bouldering Route Assistance |
Ingi Helgason Maxwell Beach Prakhar Gupta |
Gayatri Chandran | Viktor Gruev | proposal1.pdf |
|
| # Beta Spray [Link to Discussion](https://courses.grainger.illinois.edu/ece445/pace/view-topic.asp?id=78759) **Team Members:** - Maxwell Beach (mlbeach2) - Ingi Helgason (ingih2) - Prakhar Gupta (prakhar7) # Problem Spray walls in climbing gyms allow users to create endless custom routes, but preserving or sharing those climbs is difficult. Currently, climbers must memorize or manually mark which holds belong to a route. This limitation makes training inconsistent and reduces the collaborative potential of spray wall setups, particularly in community and training gym environments. # Solution Beta Spray introduces a combined scanning and projection system that records and visually reproduces climbing routes. The system maps the spray wall, categorizes each hold, and projects or highlights route-specific holds to guide climbers in real time. Routes can be stored locally or shared across devices over a network. The design includes three primary subsystems: vision mapping, projection control, and user interface. # Solution Components ## Vision Mapping Subsystem This subsystem performs wall scanning and hold detection. A **camera module** (Raspberry Pi Camera Module 3 or Arducam OV5647) will capture high-resolution images under ambient lighting conditions. The **ESP32** will handle image capture and preprocessing using C++ OpenCV bindings. The image recognition algorithm will identify hold contours and assign coordinates relative to wall geometry. If on-device processing proves too compute-intensive, the camera data can be sent via HTTP requests to a remote machine running an OpenCV or TensorFlow Lite inference service for offloaded recognition. To improve reliability in low-light setups, IR LEDs or reflective markers may be added for hold localization. If latency proves too high, a physical layer solution could connect directly to a nearby laptop to speed up computer vision processing. ## Projection Subsystem The projection subsystem highlights route holds using **servo-actuated laser pointers**. Each laser module will be mounted to a **2-axis servo gimbal** arrangement controlled by a microcontroller PWM interface. The system will direct up to four laser beams to indicate sequential handholds as users progress. A benefit of using servos over motors is avoiding PID tuning for motor control loops. If laser precision or safety reliability becomes an issue, an alternative approach will use a **compact DLP or LED projector**, calibrated through the same coordinate mapping. Mechanical design will ensure adjustable pitch angles to accommodate wall inclines up to 45 degrees. ## User Interface Subsystem Users configure and control Beta Spray through a web or mobile interface. The **ESP32** module provides Wi‑Fi and Bluetooth connectivity, and the **ESP‑IDF SDK** enables local route storage through SPI flash or SD card, along with a lightweight HTTP server for remote control. The interface will include climb management (create, save, replay) and calibration controls. If latency or bandwidth limits affect responsiveness, a fallback option is to implement a wired serial or USB configuration interface using a host computer to manage routes and command sequences. A basic mobile or web frontend will be developed using **Flutter** or **Flask**. # Physical Constraints - The system will draw power from a standard outlet (no battery operation needed). - The device will be secured to the floor using a stable stand or rubber bumpers to prevent slipping. 
- The total footprint will be **less than 25 cm × 25 cm**, with a maximum height of **40 cm**, including the laser pointer gimbals. # Criterion for Success Beta Spray will be successful if it can: - Achieve reasonable accuracy in laser pointer targeting to mark holds. - Track a climber’s movement in real time with less than **200 ms** latency. - Interface with a mobile device to change route planning and trajectory. - Operate consistently across varied placement distances and wall angles. Meeting these criteria will validate the feasibility of Beta Spray as a modular and expandable climbing wall visualization platform. |
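A hedged OpenCV sketch of the hold-detection step in the vision mapping subsystem above: threshold the wall image and extract hold contours with centroid coordinates. The file name, blob-size cutoff, and thresholding choice are hypothetical; real frames would come from the camera module.

```python
import cv2

img = cv2.imread("spray_wall.jpg")            # hypothetical wall capture
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
# Otsu's method picks a global threshold separating holds from the wall
_, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
holds = []
for c in contours:
    if cv2.contourArea(c) < 100:              # discard noise blobs (assumed)
        continue
    m = cv2.moments(c)
    cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
    holds.append((cx, cy))                    # hold coordinates for routes
```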
||||||
| 42 | Autonomous Cold Salad Bar |
Siddhaarta Venkatesh Tejas Alagiri Kannan Tinhsu Wen |
Aniket Chatterjee | Craig Shultz | proposal1.jpeg proposal2.pdf |
|
| # **Team:** 1. Tejas Alagiri Kannan (tejasa4) 2. Siddhaarta Venkatesh (sv39) # **Problem:** In the food industry, a huge number of processes are extremely rote and spend manpower on monotonous tasks that could be handled by an autonomous system. One such problem is the use of manpower in assembly-line-format restaurants (e.g., Chipotle, Forage Kitchen, Qdoba, etc.). Just as in the automation industry, where the assembly line has in essence been taken over by 6-DoF arms and robot operators, we believe the manpower in restaurants can also be replaced by a robotic system that provides higher efficiency. We have already seen a large number of processes getting automated in the restaurant industry, such as the automated food bar in sushi restaurants and robotic servers (not widely adopted, unfortunately). # **Solution:** At the outset, we would like to mention that the solution does not aim to automate the entire pipeline from creating the dish to serving it; preparing highly technical dishes is a different problem in itself. We aim to make the serving process more efficient and reduce wait time. Given prepared ingredients such as chopped chicken, chopped onions, and sauces (which we believe is a fair starting point), each ingredient will have its own pipe that dispenses that specific ingredient. Once we receive instructions for what food needs to be prepared, which ingredients to dispense, and in which order, the bowl on a conveyor belt will move back and forth to fill up with those ingredients. These ingredients are funneled from their own pipes, which dispense the ingredients one at a time. The final bowl is then sealed and placed in a shaker that mixes the ingredients, and it is served at the end. # **COMPONENTS:** # **Subsystem 1: Motion** The bowl must be moved around the pipes to get filled. This is what we propose: Conveyor belt: 4 idlers, 2 head pulleys, 1 NEMA 23 motor (or other), 1 gear reducer, 1 motor driver (TB6600), 1 food storage basket, 5 individual dispensary pipes, 5 servo motors, 1 servo motor PWM controller. The dispensary pipes will pump out food using a servo pump filler mechanism, where the servo motor pushes down on the contents of the pump (in a piston-like motion) and squeezes out the food. We will use the ESP32 microcontroller series. # **Subsystem 2: User Interface** For initial testing, simple buttons will determine which dish is chosen. The final device will involve a screen with a natural interface. The simple buttons will just be regular tactile buttons, and the final screen will be an ST7789 LCD display that shows the user what food has been ordered, which options they have chosen for their salad, and how to add/remove particular items with a button press. # **Subsystem 3: Food presentation** We expect to have the final salad well tossed and provided to the user. Once the bowl is filled, which is determined by it passing through the pipes of all its ingredients, the user will close it with a cap. The user will have the choice to have it shaken or not; that feature is an additional button after the food is dispensed. The bowl is then placed in a closed contraption that simply rotates at high speed to mix the food. It is a very similar design to regular boba shakers. Shaker: 1 NEMA 23 motor, 1 gear box, 1 motor driver (TB6600) # **Subsystem 4: Accuracy checking** A major part of this project is to ensure efficiency.
We will incorporate a weight sensor (mini load cell) that tracks the weight of the bowl as items are being dispensed and serves as a check to stop the machine from over-dispensing (a sketch of this check follows the proposal). # **Subsystem 5: Power system** For demonstration purposes the machine will be hooked up to a benchtop power supply or another reliable supply, such as a low-grade DC power supply. Another main component we will add is food-safe tubing to ensure that the food does not get contaminated. # **Criteria for success:** 1. The conveyor belt is able to move consistently so that the bowl sits under the right dispenser. 2. Each dispenser is able to dispense food, both solid and liquid (such as sauces). 3. Each dispenser is able to dispense the right amount of food, within a set range. 4. The initial prototype can, on button press, determine exact motor angles to move the components for an early demo during the semester. 5. The final prototype can, on user request, send a signal to the microprocessor to move the bowl and dispense mock food into it. # **Team work requirements:** 1. CAD every individual component in miniature form to depict the real system (1 week) 2. Use a dev board with motor drivers to demonstrate a breadboard version of success criterion 1 (1 week) 3. Attach the dev board solution to the CAD physical model to account for motor backlash and other physical constraints like power supply issues and overheating (1 week) 4. Start PCB design based on the chosen direction; soldering and debugging (3-4 weeks) 5. Final assembly and testing (1 week) This gives us maybe 1 week of extra leeway for any hindrances. |
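An illustrative Python sketch of the accuracy-checking loop above: run one dispenser until the load cell reports the target added weight, with a timeout so the machine cannot over-dispense. The hardware hooks (read_weight, servo_on, servo_off) and timing values are hypothetical placeholders.

```python
import time

def dispense(ingredient, target_grams, read_weight, servo_on, servo_off,
             timeout_s=10.0):
    """Run one dispenser until target weight is added, or stop on timeout."""
    start_weight = read_weight()
    deadline = time.time() + timeout_s
    servo_on(ingredient)
    try:
        while read_weight() - start_weight < target_grams:
            if time.time() > deadline:
                return False       # fault: jam, empty hopper, or bad scale
            time.sleep(0.05)       # poll the load cell at ~20 Hz (assumed)
    finally:
        servo_off(ingredient)      # always stop the servo, even on fault
    return True
```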
||||||
| 43 | LeafLink |
Hannah Pushparaj Hassan Shafi Praveen Natarajan |
Aniket Chatterjee | Craig Shultz | proposal1.pdf |
|
| LeafLink Team Members: Praveen Natarajan (pn17) Hassan Shafi (hashafi2) Hannah Pushparaj (hsp5) PROBLEM Plants need to be watered regularly to stay alive. In certain scenarios this is not always possible (e.g., going on vacation, forgetting to water). We want a way to automatically water indoor plants to keep them alive. SOLUTION A standalone device that automatically senses the moisture level of the soil and activates a pump that supplies the plant with just the right amount of water to survive. It uses an onboard soil moisture sensor along with a water pump to supply the plant with water. The device is designed to be reliable and easy to understand. A simple light shows what it’s doing (normal, watering, or needs attention). It also includes basic safety limits so it can’t keep running forever if something goes wrong, and it can warn the user if the water container is empty or if the device isn’t able to pump water properly. The device can store a basic history of when it watered the plant so the user can see that it’s working. If we have time, we can add a simple companion app. The app would let the user see the current soil moisture and show a log of recent watering. It would also allow the user to trigger a quick manual watering from their phone if needed (for example, after repotting or during a very hot week). The app is optional, as the device should work on its own even without it. Solution Components Subsystem 1: Control & Processing This subsystem serves as the central controller. An ESP32 on our custom-designed PCB reads soil moisture sensor data, executes the watering logic, and controls the relay module. The PCB integrates power regulation and some basic status indication. Components: - ESP32 - Our custom PCB - 3.3 V voltage regulator - Some LEDs and resistors Subsystem 2: Soil Moisture Sensing This subsystem measures soil moisture and provides an analog voltage to the ESP32 ADC pin to drive the water delivery system. Components: - Capacitive soil moisture sensor Subsystem 3: Water Delivery & Relay Control This subsystem allows the ESP32 microcontroller to turn the water pump on and off by using a relay, which acts as a switch between the ESP32 and the higher-voltage water pump. Essentially, an ESP32 GPIO pin drives the relay input, which switches pump power on and off. Components: - 6-12 V DC water pump - 5 V single-channel relay module - External 5 V power supply - Tubing and water reservoir Subsystem 4: User Feedback & Safety This subsystem provides basic visual feedback on the current state of the LeafLink system and an emergency stop button. Components: - Status LEDs (different colors for idle, watering, error) - Red push button (emergency stop, kills power) Subsystem 5: Wireless Monitoring We will also have a remote monitoring feature using the ESP32’s built-in Wi-Fi. This remote monitoring system will display real-time soil moisture readings (and possibly keep track of old readings over a time period), a history of recent watering events, and a manual watering trigger button.
Components: - ESP32 Wi-Fi (already part of the chip) - Simple mobile or web interface CRITERION FOR SUCCESS - The ESP32 on our custom PCB correctly reads soil moisture data and determines when watering is required independently, with no supervision (a sketch of this logic follows the proposal) - The soil moisture sensor functions properly, with readings that respond correctly (for example, adding water should raise the moisture percentage) - The ESP32 reliably controls the relay to turn the water pump on and off based on soil moisture thresholds - The water pump operates only through the relay and correctly delivers the required amount of water - The status LEDs correctly indicate the current system states, including idle, watering, and error - Pressing the emergency stop button immediately cuts power to the water pump and halts any ongoing operation - The remote monitoring system displays accurate real-time soil moisture data, logs watering events, and allows manual watering control |
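A minimal Python sketch of the watering logic described above: hysteresis thresholds on the moisture reading plus a hard cap on pump runtime, so the pump "can't keep running forever if something goes wrong." Threshold and timing values are illustrative assumptions.

```python
import time

DRY_THRESHOLD = 30   # percent moisture below which we water (assumed)
WET_THRESHOLD = 45   # stop watering once soil reads this wet (assumed)
MAX_PUMP_S = 15      # safety cap on continuous pump runtime (assumed)

def control_step(moisture_pct, pump_on_since, now=None):
    """One control iteration. pump_on_since is the timestamp the pump
    turned on, or None if it is off. Returns (pump_should_run, state)."""
    now = now if now is not None else time.time()
    if pump_on_since and now - pump_on_since > MAX_PUMP_S:
        return False, "error"      # likely empty reservoir or failed pump
    if moisture_pct < DRY_THRESHOLD:
        return True, "watering"
    if moisture_pct >= WET_THRESHOLD:
        return False, "idle"
    # Between thresholds: keep the current state (hysteresis prevents
    # rapid relay chatter around a single threshold).
    return bool(pump_on_since), "watering" if pump_on_since else "idle"
```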
||||||
| 44 | Voice-Activated Geographic Reference Globe |
Mahathi Jayaraman Rijul Roy Varsha Mullangi |
Chihun Song | Joohyung Kim | proposal2.pdf |
|
| Team Members: Mahathi Jayaraman (mj45) Rijul Roy (rijulr2) Varsha Mullangi (varsham3) Problem Many kids these days, especially American kids, do not know their geography very well. In addition, many kids are spending a lot of time on screens and online, which takes them out of the real world. We want to create a solution that lets kids learn geography in a manner that does not require them to be connected to the internet or on a screen. This solution should be usable in classrooms for kids to learn from, and should be able to rotate to accommodate kids’ shorter heights. Solution Our proposed solution is to build a globe that is screen-free and interactive. Rather than manually rotating a globe and having to search for where a certain country is, kids can now simply push a button to activate a microphone and say a country name out loud. The globe will rotate automatically so the country faces a designated front marker and will light up the specified country with LEDs. This will help kids feel more engaged with learning. Solution Components Subsystem 1: Speech Recognition with a Push-to-Talk Mechanism This subsystem implements the speech recognition mechanism of the globe. A simple push button and microphone will be used, connected to the GPIO pins of the ESP32-S3 MCU. While the button is pressed, the microphone collects audio from the user, capturing the country the user wants to find. The MCU runs offline, on-device speech recognition software (ESP-SR) on this audio to determine which country the user wants to find, which then drives the motor control logic and LEDs. Components: ESP32-S3 MCU and ESP-SR package, I2S digital microphone (INMP441) Subsystem 2: Software-Driven Motor Control This subsystem controls how the globe physically rotates to face the input country. A low-speed DC gear motor will be driven by the ESP32-S3 through a motor driver, allowing the MCU to control both the direction and speed of rotation about the axis. A separate motor will be used to tilt the globe up and down, with the globe sitting in a ring with a ball bearing track. Based on the target country’s stored position and the current angle of the globe, the software will calculate the direction and amount of rotation needed to align the country with the front marker (a sketch of this calculation follows the proposal). Feedback from a magnetic angle sensor will be used to track the globe’s position and stop rotation at the correct point. This makes the rotation more reliable and prevents the globe from rotating past the target. Components: 22 RPM 24 Volt DC Globe Inline Gearmotor [500635] Subsystem 3: LED Outline/Markers This subsystem is responsible for the physical identification of countries using LEDs. We will use an LED grid placed behind the globe, ensuring that LEDs line the borders and corners of countries. For a smaller country that is harder to outline, we will use the center point of the country, lighting up only one LED to indicate its location. Since we will be using addressable LEDs, we can assign LEDs to countries so that when a country is chosen, the logic can quickly determine which LEDs to turn on. We will also use one LED near the button that captures audio, helping the user know when audio is being recorded. Components: LED strips (WS2812B) Subsystem 4: Front Marker Reference This subsystem is responsible for rotating the globe to face a designated front marker. This marker will be a point on a ring around the globe.
This will designate where the user of the globe is positioned, so that when the globe rotates to bring the country to this marker, the country is also facing the user. The globe will rotate on multiple axes to achieve this, which can help accommodate kids’ shorter heights: the globe can rotate downward to make areas near the north pole (such as Iceland or the North Pole) visible to kids who may not be tall enough to see the top of the globe. Every time a country is detected through the microphone, that country will automatically rotate to this marker. A slip ring will be used to ensure that the internal wiring does not get tangled as the globe rotates, and limit switches will make sure the globe does not rotate too far in any direction. Components: ESP32-S3 MCU (controller), Adafruit AS5600 magnetic angle sensor (rotation position sensor), slip ring (because it is a rotating system), optional limit switches to prevent overrotation, the motor system (Subsystem 2) Criteria for Success: The system can use the microphone to accurately identify spoken words and check whether the word is in the database of country names. When a country name is spoken, the system lights up the country on the globe. When a country name is spoken, the globe rotates to display the lit country in front of the user. When the word “reset” is given as input, the globe moves back to its default position and all LEDs turn off. The globe correctly detects the spoken country name and rotates automatically so the specified country faces the front marker. |
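An illustrative Python sketch of the rotation calculation referenced in Subsystem 2: given a recognized country's stored longitude/latitude and the current azimuth from the angle sensor, compute the shortest signed rotation that brings the country to the front marker. The country table and the simple tilt-equals-latitude mapping are assumptions for illustration.

```python
COUNTRY_POSITIONS = {            # (longitude, latitude) in degrees, examples
    "france": (2.2, 46.2),
    "japan": (138.3, 36.2),
    "iceland": (-19.0, 64.9),
}

def shortest_rotation(current_deg, target_deg):
    """Signed rotation in [-180, 180) aligning target with the marker."""
    return (target_deg - current_deg + 180.0) % 360.0 - 180.0

def plan_move(country, azimuth_sensor_deg):
    """Return (spin, tilt) motor commands in degrees for one request."""
    lon, lat = COUNTRY_POSITIONS[country]
    spin = shortest_rotation(azimuth_sensor_deg, lon)  # main-axis motor
    tilt = lat   # simplistic: tilt the globe so the latitude faces the user
    return spin, tilt
```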
||||||
| 45 | Focus Dial: A Tactile Hardware Interface for Distraction-Free Focus |
Ahan Goel Amogh Mehta Benjamin Loo |
Frey Zhao | Craig Shultz | proposal1.pdf video video |
|
| **Team Members:** - Amogh Mehta (amoghm3) - Ahan Goel (ahang5) - Benjamin Loo (bloo2) --- # Problem Staying focused is increasingly difficult in an environment saturated with digital distractions. While most modern operating systems provide tools such as Focus Mode or Do Not Disturb, these solutions are embedded within smartphones or computers themselves. Activating or managing them often requires unlocking a phone, navigating menus, or interacting with the very device that causes distraction. This creates friction and makes it easy for users to abandon focus unintentionally. Additionally, many existing productivity tools rely heavily on cloud services or voice assistants, raising concerns around privacy, reliability, and latency. There is a need for a more intentional, low-friction, and privacy-conscious way to manage focus that does not require constant screen interaction. --- # Solution We propose the **Focus Dial**, a standalone hardware controller that allows users to enter, manage, and visualize focus states through a simple physical interaction. By turning a rotary dial, users can activate focus modes, set timers, and receive feedback without opening a phone or navigating software menus. The Focus Dial solves the problem by shifting distraction management from a screen-based interaction to a tactile, human-centered interface. The device communicates wirelessly with user devices (phones, tablets, and computers) to control Focus Mode or Do Not Disturb settings. In addition, the Focus Dial is designed to integrate with IoT devices on the local network, enabling environmental cues—such as smart lights, displays, or other connected devices—to reflect or respond to the user’s focus state. At a high level, the system consists of: - A physical user interface for intentional user input and feedback - An embedded processing and communication subsystem - Wireless integration with user devices and local IoT systems --- # Solution Components ## Subsystem 1: Physical User Interface and Feedback **Purpose:** Functions as the primary **physical user interface**, allowing users to intentionally control focus modes and timers without interacting with screen-based devices. **Function:** This subsystem combines tactile input and multimodal feedback mechanisms to provide intuitive control and clear system state indication. It is composed of the following hardware elements: - **Rotary Position Encoding:** A rotary encoder detects rotational direction and position, enabling users to select focus modes, adjust durations, and confirm actions through deliberate physical motion. - **Haptic Feedback:** A vibration motor provides tactile confirmation for actions such as mode changes, timer start/stop events, and alerts, reinforcing interaction without requiring visual attention. - **OLED/LCD Display:** A circular OLED or LCD display presents contextual information such as the active focus mode, remaining time, or system status. - **Lighting (LED Ring):** An addressable LED ring provides glanceable visual feedback by indicating focus state, progress, or alerts through color and animation. The lighting can also mirror or augment connected IoT lighting systems. 
**Components:** - Rotary encoder with push-button (e.g., Bourns PEC11 series) - Circular OLED or LCD display (e.g., 1.28" round TFT display) - Addressable LED ring (e.g., WS2812B / NeoPixel ring) - Coin vibration motor --- ## Subsystem 2: Embedded Processing and Wireless Communication **Purpose:** Acts as the **central control unit**, coordinating input processing, system state management, and communication between subsystems and external devices. **Function:** Processes rotary encoder input, drives output peripherals (display, LEDs, haptics), and manages wireless communication protocols. **Components:** - Microcontroller with integrated Bluetooth and Wi-Fi (e.g., ESP32) - Power management circuitry - On-board memory for firmware and configuration storage --- ## Subsystem 3: Device and IoT Integration **Purpose:** Enables the Focus Dial to operate as a **local control hub**, synchronizing focus states across personal devices and connected IoT systems. **Function:** Transmits focus state changes to paired devices and triggers context-aware environmental responses. **Components / Interfaces:** - Bluetooth Low Energy (BLE) for communicating with a companion app or OS-level shortcuts - Wi-Fi for local network communication - Integration with IoT devices (e.g., smart lights, displays, or other networked devices) using local protocols such as MQTT or HTTP This subsystem allows the Focus Dial to trigger actions such as dimming lights, changing light color, or notifying other devices when a focus session starts or ends (a short sketch of this follows the proposal). --- # Criterion for Success The project will be considered successful if it meets the following measurable criteria: 1. The rotary encoder reliably detects user input with greater than 95% accuracy. 2. The device activates or deactivates Focus Mode or Do Not Disturb on a paired device via Bluetooth within 1 second of user input. 3. The display, LED lighting, and haptic feedback consistently reflect the correct focus state. 4. The Focus Dial successfully communicates focus state changes to at least one IoT device on the local network. 5. Core functionality operates without requiring an active internet connection. --- **Project Classification:** Innovation (human-centered hardware interface integrating embedded systems, wireless communication, and IoT interaction) |
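A hedged sketch of the IoT integration path in Subsystem 3: when the dial selects a focus state, publish it over local MQTT so smart lights or displays can react. The broker address, topic, and payload format are assumptions, and the device would do this from ESP32 firmware rather than desktop Python.

```python
import json
import paho.mqtt.client as mqtt

# paho-mqtt 1.x style; 2.x additionally requires a CallbackAPIVersion
# argument to Client(). Broker address is a placeholder.
client = mqtt.Client()
client.connect("192.168.1.10")   # assumed local MQTT broker
client.loop_start()              # background thread flushes publishes

def on_dial_select(mode, duration_min):
    """Publish a focus-state change for local IoT devices to consume."""
    payload = json.dumps({"mode": mode, "duration_min": duration_min})
    # retain=True so devices joining later still see the current state
    client.publish("focusdial/state", payload, retain=True)

on_dial_select("deep_work", 45)  # e.g., the user turned the dial two detents
```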
||||||
| 46 | Snooze-Cruiser |
Alex Wang Jiachen Hu Jizhen Chen |
Jiaming Xu | Joohyung Kim | proposal1.pdf |
|
| # Snooze-Cruiser Team Members: Jiachen Hu (hu86) Jizhen Chen (jizhenc2) Alex Wang (zw71) # Problem Many people suffer from sleep inertia, a condition where individuals instinctively silence alarms without fully waking up. Traditional alarm clocks and smartphone alarms rely solely on audio, which can be easily ignored or dismissed while half asleep. Existing alternative solutions such as puzzle-based alarms or flying alarms are often ineffective, unsafe, or impractical in confined environments like dorm rooms and bedrooms. The fundamental issue is that current alarm systems fail to reliably force physical engagement, allowing users to return to sleep without becoming fully alert. A more effective alarm must require the user to physically interact with the system in order to disable it. # Solution We propose Snooze-Cruiser, a two-wheeled differential-drive robotic alarm system that physically moves away from the user when the alarm time is reached. Instead of simply producing sound, the robot navigates around the room, forcing the user to get out of bed and chase it in order to silence the alarm. The robot operates autonomously in a confined indoor space, using onboard sensors for obstacle avoidance and odometry-based localization to remain within a defined area. The alarm is disabled not by pressing a button, but by detecting when the robot has been picked up using inertial sensor data. This interaction ensures that the user must physically wake up and engage with the device. The system is divided into motion control, sensing, alarm/audio, localization, and power management subsystems. # Solution Components ## Subsystem 1: Motion Control and Navigation Function: This subsystem enables the robot to move autonomously, wander unpredictably, and avoid obstacles while remaining within a confined area. Components: Microcontroller: STM32F446RCT6 Motor Driver: DRV8833PWP dual H-bridge motor driver Motors: N20 micro gear motors with quadrature encoders (x2) Inertial Measurement Unit: MPU6050 Obstacle Sensors: VL53L1X Time-of-Flight distance sensors (multiple) Description: The STM32 generates PWM signals to control the motors through the DRV8833 motor driver. Wheel encoders provide feedback for estimating speed and displacement. During alarm operation, the robot drives forward at a base speed and periodically introduces random heading changes. Obstacle avoidance is triggered when distance sensors detect nearby obstacles, causing the robot to turn away and resume wandering motion. Encoder and IMU data are fused to estimate the robot’s position relative to its starting point. ## Subsystem 2: Localization and Soft Geofencing Function: This subsystem prevents the robot from leaving the intended operating area (e.g., a bedroom). Components: Wheel Encoders (from Subsystem 1) IMU: MPU6050 Description: Wheel encoder data and IMU measurements are fused using a Kalman Filter (or equivalent sensor fusion approach) to estimate the robot’s displacement from its starting location. A soft geofence is defined as a radius around this starting point. If the robot exceeds this radius, it enters a return-to-center behavior by rotating toward the estimated origin and driving inward until it re-enters the allowed area. ## Subsystem 3: Alarm Timing and Audio Output Function: This subsystem handles timekeeping and audible alarm generation. Components: Microcontroller: STM32F446RCT6 Audio Amplifier: PAM8301AAF Speaker Description: The STM32 maintains a real-time counter for alarm scheduling.
When the preset alarm time is reached, the microcontroller simultaneously enables the audio amplifier and activates the motion subsystem. The alarm sound continues until a valid caught event is detected. ## Subsystem 4: Caught Detection (User Interaction) Function: This subsystem detects when the robot has been picked up by the user and disables the alarm. Components: IMU: MPU6050 Wheel Encoders Description: Caught detection is performed by analyzing IMU acceleration and vibration data in combination with wheel encoder feedback. A caught event is identified by sudden changes in acceleration magnitude, high-frequency vibrations from human handling, and inconsistencies between wheel motion and measured acceleration (indicating loss of ground contact). Once confirmed, the system immediately stops motor output and silences the alarm (a sketch of this heuristic follows the proposal). ## Subsystem 5: Power Management Function: This subsystem supplies and regulates power for the robot. Components: Battery Charger IC: MCP73844 Rechargeable Battery Voltage Regulation Circuitry Description: The battery supplies power to the MCU, sensors, motor driver, and audio system. The MCP73844 manages battery charging. Voltage regulation ensures stable operation during high current events such as motor startup. # Criterion For Success The project will be considered successful if the following objective criteria are met: Timed Activation: The alarm triggers within ±X seconds of the programmed time. Synchronized Operation: Robot motion and alarm audio start simultaneously upon alarm activation. Autonomous Motion: The robot moves continuously without user intervention during alarm operation. Obstacle Avoidance: The robot avoids obstacles placed in its path without repeated collisions. Confined Operation: The robot remains within a predefined operating radius and returns toward the starting location when the boundary is exceeded. Caught Detection: When picked up by a user, the robot reliably stops motion and audio within a short time window. |
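An illustrative Python sketch of the caught-detection heuristic in Subsystem 4: a pick-up is declared when the acceleration magnitude deviates sharply from 1 g while the wheel encoders report almost no motion (loss of ground contact). The thresholds and confirmation window are assumptions, not the tuned firmware values.

```python
from collections import deque

ACCEL_DEV_G = 0.4        # deviation from 1 g that counts as a handling jolt
WHEEL_SPEED_MIN = 0.05   # m/s; below this the wheels are "not driving"
WINDOW = 10              # samples considered when confirming a pick-up

recent_hits = deque(maxlen=WINDOW)

def caught_step(accel_mag_g, wheel_speed_mps):
    """Feed one fused IMU/encoder sample; True once a pick-up is confirmed."""
    jolt = abs(accel_mag_g - 1.0) > ACCEL_DEV_G
    airborne = wheel_speed_mps < WHEEL_SPEED_MIN  # wheels spin freely or stop
    recent_hits.append(jolt and airborne)
    # Require most of the window to agree, tolerating a couple of misses,
    # so a single bump on the floor does not silence the alarm.
    return len(recent_hits) == WINDOW and sum(recent_hits) >= WINDOW - 2
```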
||||||
| 47 | Combative Hardened Ultra Tumbler |
Abhinav Garg Rahul Ramanathan Krishnamoorthy Shobhit Sinha |
Xiaodong Ye | Joohyung Kim | proposal1.pdf |
|
| # Combative Hardened Ultra Tumbler - Battlebot ## Team Members - Abhinav Garg (ag90) - Rahul Krishnamoorthy (rahulr9) - Shobhit Sinha (ss194) --- ## Problem The antweight battlebot competition requires teams to design a combat robot under strict constraints on weight, materials, safety, and electronics. Robots must weigh under 2 lb, be constructed from approved 3D-printed plastics, and use a custom PCB integrating control and motor driving circuitry. Commercial RC receivers are not permitted. The challenge is to design a compact and reliable robot that integrates motor control, power electronics, and wireless communication while operating under high current loads and repeated mechanical impacts during combat. --- ## Solution We propose to design and build a 2 lb antweight battlebot featuring a spinning drum weapon and a fully custom electronic control system. A custom PCB will serve as the core of the robot and will house an ESP32-C3 microcontroller for computation and wireless communication. The robot will be controlled from a laptop using Bluetooth or Wi-Fi. Two motors will drive a centered two-wheel drivetrain, while a third motor will power the drum spinner weapon. Power will be supplied by a 14.8 V 4S2P LiPo battery. The system emphasizes reliable motor control, safe power management, and robustness to mechanical shock during competition. --- ## Solution Components ### Subsystem 1: Control and Communication System This subsystem handles wireless communication, control logic, and overall system coordination. It uses an ESP32-C3 microcontroller, Bluetooth and Wi-Fi wireless communication, and a USB interface for programming and debugging. --- ### Subsystem 2: Motor Control System This subsystem drives the drivetrain and weapon motors. It uses H-bridge motor driver circuitry controlled through PWM signals generated by the ESP32-C3 and brushless DC motors for drivetrain and weapon actuation. --- ### Subsystem 3: Power Management and Safety This subsystem distributes power and ensures safe operation of the robot. It uses a 14.8 V 4S2P LiPo battery, on-board voltage regulators for logic power, and battery voltage sensing via a resistor divider. Software-based shutdown is implemented to disable the robot on loss of wireless communication. --- ### Subsystem 4: Mechanical Structure and Weapon This subsystem provides structural support and offensive capability. It consists of a 3D-printed PLA or ABS chassis, a spinning drum weapon, and a belt-driven mechanical coupling between the weapon motor and drum. --- ### Optional Subsystem: Inertial Measurement and Weapon Optimization An optional inertial measurement unit (IMU) may be integrated to measure angular motion and vibration of the drum weapon. IMU data can be used to estimate weapon rotational behavior, detect imbalance, and inform software adjustments to improve weapon stability and reliability during operation. --- ## Criterion for Success The project will be considered successful if the robot weighs less than 2 lb and complies with all competition material restrictions, the custom PCB integrates control, motor driving, and power management circuitry, the robot can be reliably controlled from a laptop using Bluetooth or Wi-Fi, the drivetrain provides stable and responsive motion, the drum spinner weapon operates reliably without electrical failure, and the robot safely shuts down when wireless communication is lost. |
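To illustrate the software-based shutdown on loss of wireless communication described in Subsystem 3 above, here is a minimal C++ sketch of a link-loss failsafe; the timeout value and function names are assumptions, not parts of the proposal:

```cpp
#include <cstdint>
#include <cstdio>

// Minimal link-loss failsafe: if no control packet arrives within
// kTimeoutMs, all motor outputs are forced to zero. Timestamps are
// injected so the logic is testable off-target.
constexpr uint32_t kTimeoutMs = 500; // assumed failsafe window

struct Failsafe {
    uint32_t lastPacketMs = 0;
    void onPacket(uint32_t nowMs) { lastPacketMs = nowMs; }
    bool linkLost(uint32_t nowMs) const { return nowMs - lastPacketMs > kTimeoutMs; }
};

// Hypothetical motor command clamped by the failsafe.
int applyFailsafe(const Failsafe& fs, uint32_t nowMs, int requestedPwm) {
    return fs.linkLost(nowMs) ? 0 : requestedPwm;
}

int main() {
    Failsafe fs;
    fs.onPacket(0);                                               // packet at t = 0 ms
    std::printf("t=100: pwm=%d\n", applyFailsafe(fs, 100, 180));  // link alive
    std::printf("t=700: pwm=%d\n", applyFailsafe(fs, 700, 180));  // link lost -> 0
    return 0;
}
```

On the real robot, onPacket would run in the Bluetooth/Wi-Fi receive handler and applyFailsafe in the motor-update loop, so a dropped laptop connection disables both drivetrain and weapon.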
||||||
| 48 | Sleep Position Trainer |
Brian Park Kyle Lee Nick Tse |
Po-Jen Ko | Yang Zhao | proposal1.pdf |
|
| **Team Members:** Brian Park (brianp7) Kyle Lee (klee281) Nick Tse (nstse2) **Problem:** Sleep is essential for overall health and recovery. We want to develop a device that can detect a person’s sleeping position and provide gentle feedback, via vibration, to prompt repositioning. This device is intended to help users improve and maintain healthier sleep patterns. **Solution:** In order to maintain healthy sleep posture, we propose a wearable sleep monitoring device that detects a user’s sleeping position and provides gentle vibration feedback when an adjustment is needed. The device continuously monitors body orientation during sleep and encourages repositioning when prolonged or unhealthy postures are detected, helping users develop healthier sleep habits over time. The system will incorporate a Battery, Microcontroller, Inertial Measurement Unit (IMU), and Eccentric Rotating Mass (ERM) motors to develop a small wearable sleep position trainer. **Solution Components:** **Subsystem 1 (Position Sensing):** Components: Bosch BMI270 IMU A 6-axis IMU will be used to determine whether the user is lying on their back or side. The microcontroller continuously estimates the device’s tilt/roll angle relative to gravity. When the estimated orientation corresponds to a supine posture for longer than a defined time window, the system activates the vibration motors. **Subsystem 2 (User Alert System):** Components: Parallax Inc. 28821 DC Motor Vibration, ERM (Haptic) 9000 RPM 3VDC This vibration mechanism will train the user not to sleep on their back. The device will keep vibrating until the user has turned onto their side, at which point the vibration turns off. **Subsystem 3 (Microcontroller):** Components: Espressif ESP32-S3-WROOM-1 This acts as the device's control unit. It is responsible for interpreting sleep position from IMU data, handling timing logic (vibration delays and cooldowns), and driving the vibration motor. **Subsystem 4 (Physical Build):** Components: 3D-printed case A compact 3D-printed case will protect the PCB, battery, and motor and keep them from shifting during sleep. The enclosure will include strap/clip mounts and ensure the vibration motor is pressed against the body for a noticeable cue, with openings for charging and any button/LED. **Subsystem 5 (Power Management):** Components: 3.7 V Lithium-Ion Battery Rechargeable (Secondary) 100mAh, TI BQ24074 charger/power-path IC, TI TPS62840 3.3 V regulator This subsystem provides rechargeable power and stable 3.3 V for the electronics. The charger safely charges the battery from USB and can allow operation while plugged in. The regulator improves battery life by efficiently converting battery voltage to 3.3 V. **Criterion For Success:** The device is considered successful if it can reliably detect when the user is sleeping on their back and activate vibration feedback during sleep to encourage repositioning, thereby helping to reduce snoring, alleviate sleep apnea symptoms, and ease heartburn or acid reflux. |
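As a rough sketch of the position-sensing logic in Subsystems 1 and 3 above, the following C++ example estimates supine posture from the gravity vector and enforces a dwell window before vibrating; the axis convention, threshold, and window length are assumed for illustration:

```cpp
#include <cmath>
#include <cstdio>

// Supine detection from a 6-axis IMU's gravity vector. With the device
// worn on the chest, az ~ +1 g when the user lies on their back.
// The cosine threshold and 30 s dwell window are assumed values.
constexpr double kSupineCos = 0.8;   // cos(angle to gravity) treated as supine
constexpr double kDwellSec  = 30.0;  // posture must persist this long
constexpr double kSampleDt  = 1.0;   // seconds between samples

bool updateSupineTimer(double ax, double ay, double az, double& dwell) {
    double norm   = std::sqrt(ax * ax + ay * ay + az * az);
    bool   supine = norm > 0.0 && (az / norm) > kSupineCos;
    dwell = supine ? dwell + kSampleDt : 0.0;  // reset on any side posture
    return dwell >= kDwellSec;                 // true -> activate ERM motor
}

int main() {
    double dwell = 0.0;
    // Simulate 40 s lying on the back (az ~ 1 g).
    for (int t = 0; t < 40; ++t) {
        if (updateSupineTimer(0.05, 0.05, 0.98, dwell)) {
            std::printf("t=%ds: vibrate until user rolls to side\n", t);
            break;
        }
    }
    return 0;
}
```

The dwell timer keeps brief rolls through the supine position from triggering the motor, matching the "longer than a defined time window" requirement.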
||||||
| 49 | Move Displaying Chess Board |
Jeanjuella Tipan Matthew Trela Tim Chen |
Wenjing Song | Joohyung Kim | proposal1.pdf |
|
| # Move Displaying Chess Board Team Members: - Matthew Trela (mtrela2) - Tim Chen (taianc2) - Jeanjuella Tipan (jtipa2) # Problem Chess is a game with a high barrier to entry, and often the hardest part of the game for kids to pick up is how the pieces move, where a piece can move, and whether a move is legal. Existing boards that tackle this problem are very expensive and not a practical option for an elementary or middle school chess club. # Solution A physical chess board which shows all legal moves for a piece once it is picked up. The movement of pieces will be detected with a sensor array of reed switches and a board state in memory. The squares will be lit up by an addressable strip of LED lights cut into 8 equal sections and daisy-chained together. This chessboard will also optionally display the best move with a small chess engine in the MCU’s flash memory. The chess board will include a UI to turn best moves on and off, to handle the edge case of promoting to something besides a queen, and to display information such as when an illegal move is played. # Solution Components ## Subsystem 1, Piece Detection Array This subsystem detects the location of each piece using magnets attached to the bottom of the pieces and an array of 64 reed switches. Since the microcontroller cannot handle 64 separate sensor inputs, we will use 4 I2C GPIO expanders. - Reed Switches: Standex-Meder Electronics SW GP560/15-20 AT - Magnets: Magnet Applications N42P062062 - I2C 16 input GPIO expander: Microchip Technology MCP23017-E/ML ## Subsystem 2, LED Move Display This subsystem provides feedback to the user. An addressable LED strip is placed under the board in 8 segments, one for each rank. The segments will be connected with clip connectors for replacing each segment when necessary. When a piece is lifted, as detected by Subsystem 1, the MCU calculates the legal moves and sends a signal to the LEDs to illuminate target squares in a specified color (for example: green for legal moves, red for capturable pieces). - Addressable LED strip: SEZO WS2812B ECO LED Strip Light 16.4 FT - 3Pin LED Strip Connector: DFRobot FIT0861 ## Subsystem 3, Microcontroller and UI The microcontroller will handle all of the logic of our chess system. There will be a simple control loop which polls every sensor to see if the board state has changed. If a piece has been picked up, the microcontroller uses the current board state to see what piece was picked up and what its legal moves are, and then controls the LED strip accordingly. We will use logic to check for errors or desync and have a recovery protocol through the UI if detected. This control loop can be interrupted by input from the UI, such as turning best moves on. The UI is a monochrome OLED screen with some buttons for selecting options. When best moves are on, the board puts the current state into a small chess engine locally stored in the MCU and displays the best move using the LEDs. This happens every time the board state changes. - MCU: ESP32-WROOM-32-N4 - OLED Display: UCTRONICS 0.96 Inch OLED Module 12864 128x64 ## Subsystem 4, Power supply A portable power supply is used to power the LEDs, sensors, microcontroller, and UI display. A capacitor prevents sudden surges or dips in power from crashing the microcontroller. - Power bank: VOLTME Portable charger, 10000mAh 5V/3A - Capacitor: Chemi-Con ESMG160ETD102MJ16S # Criterion For Success
- LEDs can be selectively turned on by the MCU for all 64 squares - Move display and best move display can be turned on and off with the UI controls - All legal moves are accurately displayed by LEDs, including rules such as en passant, castling, and the first move of pawns - Pieces can be detected accurately when lifted, with the event shown on the UI display - Detect pieces picked up and show legal moves in under 1 second - Display the best move in under 3 seconds - We can detect and recover from two pieces on the same square - We can detect and recover from multiple pieces being picked up at the same time and switched # Alternatives Existing solutions include commercial products that cost around $300 or more. They perform almost exactly the same functions as what we propose. It is hard to determine the exact sensing methods other boards use, but we observed RFID and other more elaborate approaches. Our implementation attempts to use the simplest possible sensing apparatus and make up the difference in hardware. There does not exist a product that is both affordable and offers the functionality of displaying moves on the board. |
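As a small illustration of how the board state in memory could drive lift detection (Subsystems 1 and 3 above), here is a C++ sketch that packs the 64 reed-switch readings into a 64-bit occupancy word and diffs successive snapshots; the square numbering and helper are illustrative assumptions, not the team's stated design:

```cpp
#include <cstdint>
#include <cstdio>

// Occupancy from the reed-switch array packed into a 64-bit board,
// bit i = square i (a1 = 0 ... h8 = 63). A lifted piece shows up as a
// bit set in the previous snapshot but cleared in the current one.
int liftedSquare(uint64_t prev, uint64_t curr) {
    uint64_t lifted = prev & ~curr;
    if (lifted == 0) return -1;               // nothing lifted
    int sq = __builtin_ctzll(lifted);         // lowest set bit (GCC/Clang builtin)
    return (lifted & (lifted - 1)) ? -2 : sq; // -2: more than one piece lifted
}

int main() {
    uint64_t prev = 0xFFFF00000000FFFFULL;    // standard starting occupancy
    uint64_t curr = prev & ~(1ULL << 12);     // pawn lifted from e2 (square 12)
    int sq = liftedSquare(prev, curr);
    if (sq >= 0)
        std::printf("Piece lifted from %c%d\n", 'a' + sq % 8, 1 + sq / 8);
    return 0;
}
```

A bitboard representation also makes the multiple-pieces-lifted recovery case in the success criteria cheap to detect, since it appears as more than one cleared bit in a single diff.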
||||||
| 50 | Crowdsurf: Realtime Crowd-Monitoring for indoor spaces |
Ananya Krishnan John Abraham Tanvika Boyineni |
Aniket Chatterjee | proposal1.pdf |
||
| Team Members: Tanvika Boyineni (tanvika3) Ananya Krishnan (ananya10) John Abraham (jabra6) Problem: Indoor public spaces (libraries, study lounges, gyms, student centers) often become congested, but students and facility staff lack real-time, localized information about crowd density and traffic flow. Existing approaches rely on cameras (raising privacy concerns), require manual observation, or provide only building-level estimates that are not actionable for choosing a specific room/entrance. Solution: This project proposes a privacy-preserving, real-time crowd monitoring system that estimates occupancy and directional flow using distributed, non-imaging sensor nodes with local processing. Each node is deployed at an entrance or transition point and performs local detection and direction inference. Processed data is transmitted wirelessly to a central gateway, which aggregates occupancy estimates, logs data, and presents live metrics through a user-facing dashboard. The system emphasizes robustness to sensor noise and communication loss, and ease of deployment. Solution Components: 1. Sensing Subsystem (Doorway Detection and Direction) -Non-imaging sensors per entrance mounted with spatial separation. -Direction inference using ordered sensor triggers -Calibration procedures for mounting height, angle, and baseline noise conditions. 2. Embedded Processing Subsystem -Microcontroller-based state machine for event detection, debouncing, and occupancy updates. -Filtering and gating logic to handle common edge cases such as pausing in doorways, closely following individuals, and short reversals. -Node health monitoring, including sensor timeouts and heartbeat status. 3. Wireless Communication Subsystem -Packet structure includes timestamp, IN/OUT counts, current occupancy estimate, and node status. -Features such as retransmission, periodic heartbeats, and graceful degradation during packet loss. 4. Gateway and Data Logging Subsystem -Gateway device (e.g., a Raspberry Pi) receives telemetry from sensor nodes. -Maintains the system-wide occupancy per entrance or room. -Logs data to persistent storage (CSV) and manages node reconnection. 5. Dashboard and User Interface Subsystem -Live dashboard displaying current occupancy, directional flow rate (people per minute), and recent trends. -Visual indicators for “crowded” vs. “not crowded” states based on configurable thresholds. 6. Hardware and PCB Subsystem (Sensor Node) -Custom PCB using a modular, low-risk design approach -Mechanical enclosure and mounting plan to ensure consistent and repeatable sensor placement. Criterion for Success: The project will be considered successful if the system can accurately demonstrate real-time directional counting and occupancy estimation at one to two doorways using non-imaging sensors. The system must correctly track entries and exits and maintain a live occupancy estimate that updates within one second of a doorway event. A functional dashboard should display current occupancy, flow rate, and node status in real time, while the gateway continuously logs data for at least one hour without interruption. Additionally, a custom-designed PCB must be fabricated and used for at least one sensor node in the final demonstration. The system must remain stable and operational during temporary wireless packet loss events, demonstrating graceful degradation without crashes and automatic recovery once communication resumes.
Node health and connectivity status should be clearly visible through the user interface to allow for basic monitoring and debugging. If time permits, additional success criteria include scaling the system to three or four sensor nodes covering multiple entrances or zones, improving robustness in challenging edge cases such as tailgating or closely spaced groups, and evaluating accuracy as a function of traffic rate. Further extensions may include implementing battery-powered sensor nodes with basic power optimization strategies or adding simple short term congestion prediction based on recent occupancy trends. |
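As an illustration of the microcontroller-based state machine for event detection mentioned in the Embedded Processing Subsystem above, here is a minimal C++ sketch of ordered-trigger direction inference with two doorway sensors; the sensor labels and the drop-on-anomaly behavior are simplifying assumptions:

```cpp
#include <cstdio>

// Direction inference from ordered triggers of two doorway sensors:
// A (outer) then B (inner) counts as an entry; B then A as an exit.
enum class Trig { A, B };

struct DoorCounter {
    int  occupancy = 0;
    Trig first{};
    bool pending = false;

    void onTrigger(Trig t) {
        if (!pending) { first = t; pending = true; return; }
        if (first == Trig::A && t == Trig::B) ++occupancy;        // IN
        else if (first == Trig::B && t == Trig::A && occupancy > 0)
            --occupancy;                                          // OUT
        pending = false;  // same-sensor re-trigger (pause/reversal) is dropped
    }
};

int main() {
    DoorCounter door;
    Trig events[] = {Trig::A, Trig::B,   // one entry
                     Trig::A, Trig::B,   // another entry
                     Trig::B, Trig::A};  // one exit
    for (Trig t : events) door.onTrigger(t);
    std::printf("occupancy = %d\n", door.occupancy);  // prints 1
    return 0;
}
```

A deployed node would add per-sensor debouncing and a timeout that clears a pending trigger when someone pauses in the doorway, per the edge cases the proposal lists.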
||||||
| 51 | Networked Physical Chessboard for Remote Play |
Danny Guller Payton Schutte Quinn Athas |
Wenjing Song | Arne Fliflet | other1.pdf |
|
| # Networked Physical Chessboard for Remote Play Team Members: - Danny Guller - Quinn Athas - Payton Schutte # Problem Online chess makes it easy for intermediate players to find games quickly, but it removes much of what makes chess feel engaging in the first place. Playing on a screen lacks the tactile feedback of moving real pieces, the spatial awareness of a full board, and the sense of presence that comes from sitting in front of a real board. While traditional in-person chess restores these elements, it usually requires both players to be in the same physical location, which limits who you can play and how often. Some existing commercial systems attempt to bridge this gap by combining physical boards with online connectivity, but these solutions are often extremely expensive and inaccessible to most players. As a result, there is currently no widely available, cost-effective way to enjoy a truly physical game of chess with a remote opponent. Players are therefore forced to choose between convenience and the authentic physical experience of the game, motivating the need for a more affordable and accessible solution. # Solution Our solution is a pair of internet-connected physical chessboards that allow two players in different locations to play a real game of chess using physical pieces. Each board tracks the state of the game locally and synchronizes moves with the remote board in real time. By combining physical interaction with networked communication, the system preserves the tactile and spatial experience of chess while removing the requirement for both players to be in the same place. Each board uses Hall effect sensors embedded beneath every square to detect the presence and movement of magnetized chess pieces. When a player moves a piece, the system detects changes in the board state and infers the intended move by comparing the previous and current configurations. To avoid ambiguity caused by partial lifts, piece adjustments, or accidental touches, players must confirm their move using a button on a digital display before it is transmitted. Once a move is confirmed, it is sent over the internet to the opponent’s board. LEDs on the receiving board highlight the source and destination squares, guiding the opponent to physically replicate the move. The use of Hall effect sensors also enables future expansion, such as differentiating piece types using different magnet strengths or polarities, without requiring major hardware redesign. # Solution Components ## Subsystem 1: Piece Detection (Hall Sensors + ADC Row Readout) To detect pieces on all 64 squares without exhausting the microcontroller’s GPIO resources, the board uses one analog Hall effect sensor per square combined with an ADC-based row readout architecture. Eight 8-channel ADCs are used, with each ADC responsible for one row of the chessboard. Each ADC samples the eight Hall sensors in its row and reports the digitized values to the microcontroller over a shared communication line (I2C or SPI). This design limits the number of devices on the communication bus to eight while still allowing the system to poll all squares frequently enough for responsive move detection. The microcontroller continuously polls the ADCs, reconstructing a full 8×8 chess board where pieces correlate to high magnetic fields. A key challenge in this subsystem is avoiding false positives caused by magnetic fringe fields affecting neighboring squares. 
Because magnetic field strength decreases rapidly with distance, cross-square interference can be mitigated by careful square spacing and threshold selection. The system will also perform a calibration step to record baseline sensor values for each square and detect pieces based on deviations from that baseline rather than using a single global threshold. This approach improves robustness to sensor variation and environmental changes. ## Subsystem 2: Move Inference, Legality Checking, and Piece Identification The system infers piece identity primarily through game state tracking rather than direct sensing. Starting from a standard chess setup, the controller maintains an internal board representation and updates it after each confirmed move. As long as pieces are not intentionally swapped, this approach allows the system to correctly track piece types over the course of the game. Even if physical pieces are swapped, the board will only let legal moves of the original piece be played. During a player’s turn, the controller monitors changes in square occupancy and generates a proposed move hypothesis, including captures. Before the move can be confirmed, the system checks whether it is legal under standard chess rules given the current board state. If the move is illegal, confirmation is blocked and the player is notified via visual feedback, prompting them to correct the placement. As an optional advanced feature, we may directly identify piece types using magnets with distinct strengths or polarity patterns. In this case, the analog Hall sensor readings could be used to classify the piece type directly rather than relying entirely on historical tracking. This would improve robustness against cheating and recovery from incorrect piece placement. The main challenge is ensuring sufficient separation between magnet signal ranges so that piece classes remain distinguishable across all squares and across different boards. If time permits, this feature will be implemented with careful calibration and validation. ## Subsystem 3: Networking and Synchronization This subsystem enables two ESP32-based chessboards to communicate over the internet using a centrally hosted server. Each board connects to the server over Wi-Fi and joins a shared game session, with the server responsible for storing and relaying moves between the two players. Communication is handled using HTTPS and a simple REST-style API. When a player confirms a move, the ESP32 sends the move to the server via an HTTP POST request. The opponent’s board periodically polls the server using HTTP GET requests to retrieve any new moves that have occurred since the last update. Each board tracks the most recent move number it has processed. If a board temporarily disconnects, it can reconnect and request any missed moves, allowing the game to resume without resetting or manual intervention. The server enforces move ordering and prevents duplicate updates, ensuring that both boards remain synchronized throughout the game. ## Subsystem 4: Local User Interface (Display + Controls) The local user interface allows players to set up and control the system without needing a separate phone or computer. 
It provides functionality for entering or selecting a game session code, confirming Wi-Fi and server connectivity, indicating whose turn it is, and displaying basic status or error messages such as “waiting for opponent,” “illegal move,” or “connection lost.” The UI also supports the move confirmation workflow by clearly indicating when a move is ready to be sent and when it has been successfully transmitted and received. Our preferred implementation is a small touchscreen display connected to the ESP32, which allows intuitive menu navigation and direct session code entry. As a simpler and lower-cost alternative, we may use a small OLED display with several physical buttons for menu navigation and code entry. In both cases, the interface is intentionally minimal: a player should be able to power on the board, connect to Wi-Fi, join a game, and begin playing with minimal setup. The final choice will depend on integration complexity, responsiveness on the ESP32, and available development time. ## Subsystem 5: Gameplay Loop The gameplay loop is intentionally simple, simulating in-person chess as closely as possible. At the start of the game, the board is set up in the standard configuration. White is prompted on the screen to make the first move after all pieces are set on each board. White moves a piece; if the move is legal, the display prompts White to submit the move, locking their board state. Black’s display then indicates that White has made a move, and LEDs under the source and destination squares light up, indicating which piece to move and where. Black cannot submit a move until their board matches that of the White player. After Black replicates White’s move, Black plays their own move and is prompted to submit. Each move is checked for legality before a submit prompt is revealed. Board state is checked as well to ensure both players' boards are identical. If there are discrepancies in board state on either side, the display will indicate which pieces are out of place and where they should be. Once a winner is determined, the game ends and the display shows who won. # Criterion For Success The project will be considered successful if two physical chessboards located in different places can reliably play a complete game of chess while connected only through the internet. Each board must accurately detect player moves using the Hall effect sensor grid, require explicit move confirmation, and prevent illegal moves from being transmitted. Confirmed moves must be transmitted to the server and received by the opponent’s board in the correct order, with the source and destination squares clearly indicated using LEDs. The system must maintain synchronization between boards even in the presence of temporary network interruptions, allowing a board to reconnect and recover the current game state without manual reset. Finally, the system must support the completion of a full legal chess game of at least 30 moves without desynchronization, missed moves, or unintended move confirmations, while providing clear user feedback throughout gameplay. # Components: - Hall effect sensor: DRV5055A4QDBZR 12.5 mV/mT, ±169-mT Range - MCU: ESP-32 (includes Wi-Fi antenna and capability) - ADC: TLA2528IRTER 12-bit, 8-channel, I2C - Display: DSI Touch Screen LCD Display 800x480 |
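As a sketch of the move-numbering scheme described in Subsystem 3 above, the following C++ example models how a board that tracks the last applied move index can catch up after a disconnect; the in-memory Server stand-in replaces the real HTTPS endpoint, and all names are illustrative assumptions:

```cpp
#include <cstdio>
#include <string>
#include <vector>

// The board remembers the last move index it applied and asks the
// server for everything after it. On hardware this would be an HTTP
// GET against the shared game session; here the server is local.
struct Server {
    std::vector<std::string> moves;  // ordered, authoritative move log
    std::vector<std::string> after(size_t lastApplied) const {
        return {moves.begin() + static_cast<long>(lastApplied), moves.end()};
    }
};

struct Board {
    size_t lastApplied = 0;
    void sync(const Server& server) {
        for (const std::string& mv : server.after(lastApplied)) {
            std::printf("apply %s, light source/destination LEDs\n", mv.c_str());
            ++lastApplied;  // duplicates cannot be applied: the index only grows
        }
    }
};

int main() {
    Server server{{"e2e4", "e7e5", "g1f3"}};
    Board board;
    board.sync(server);              // catch-up after reconnect applies all three
    server.moves.push_back("b8c6");
    board.sync(server);              // next poll applies only the new move
    return 0;
}
```

Because each board only requests moves after its own counter, retransmitted or duplicate server responses are harmless, which is the property the proposal relies on for recovery without manual reset.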
||||||
| 52 | LED Sphere Display |
Ashley Saju David Heydinger Stephanie Eze |
Shiyuan Duan | Craig Shultz | proposal1.pdf |
LabEscape POV |
| # LED Globe Display Team Members: - Ashley Saju (asaju2) - David Heydinger (ddh3) - Stephanie Eze (oeze2) # Problem For LabEscape, an escape room under Prof. Kwiat, a unique LED display would be beneficial to the escape room experience. A spinning LED display should be able to show a timer countdown and wirelessly show any image. # Solution We will design a curved LED strip to be mounted on a rotating platform that spins at a constant speed. Through a web application hosted on the ESP32, we can upload images and text to the image display system for storage and playback. These images will be displayed using persistence of vision by precisely controlling LED light timing based on the angular position and speed of the platform. The position and speed of the platform will be measured by a Hall sensor that detects each revolution of the rotating system, allowing the system to accurately determine when to display certain LED lights. # Solution Components ## Image Displaying System (Microcontroller, Memory, and LEDs) This system handles receiving the image wirelessly or taking a sprite from memory and lighting the LEDs appropriately. An SD card would be used to store sprites of numbers for the timer mode. Shift registers would be used to achieve a speedy parallel output to the LEDs. The LEDs would initially receive a preset voltage, with varying voltages for different colors added if time allows. The potentiometer can be used to adjust LED color. RP2040 microcontroller Micro SD card > 16kB memory 24-bit Shift registers: STP24DP05 24-bit constant current LED sink driver with output error detection RGB LEDs: Strawhat LED 4.8mm RGB (4-Pin) WEDRGB03-CM 10kOhm Potentiometer with knob Resistors ## Wireless Control The ESP32 hosts a web application that is accessible by entering the device’s IP address into a web browser. This web application allows a user to upload text or an image, which the ESP32 processes into a display-ready format. The processed data is then transmitted directly from the ESP32 to the spherical display system for rendering. The initial implementation supports monochrome bitmap images, with plans to extend to multi-color images in future revisions. ESP32-WROVER-B ## Power System Power for the stationary motor will be provided by AAA batteries. However, delivering power to the spinning component is more difficult due to the potential for wires to be tangled. To solve this, we will drive power to the rotating platform using a slip ring, allowing for 360-degree rotation without twisting any electrical connections. Components: AAA battery pack [MIKROE-5351] Power Switch [GSW-18] Slip Ring [ADAFRUIT1196] DC motor [CN-PA22-201213500-G429] Voltage Regulator (buck converter) ## Spinning PCB - angular speed measurement The spinning PCB will include a Hall effect sensor that will detect exactly when one full turn of the PCB has been completed. It will send the measurements to the microcontroller, which will calculate the angular speed of the spinning PCB based on the time interval between measurements. Components: Hall effect sensor [US5881LUA] Voltage Regulator [MIC5219-3.3] Small Magnet [07045HD] # Criterion For Success When operating at full speed, the displayed text and image should be clearly legible from 5 feet away over a period of 10 minutes.
The rotating assembly remains balanced while operating, with no audible thumping exceeding 50 dB or visible oscillation for the duration of 10 minutes. The LED Globe successfully receives and displays image and text uploads within 1 minute per image, without requiring any physical connections. A Hall effect sensor accurately detects when the rotating assembly has completed one revolution, with less than 2% missed detections over 10 minutes. LED brightness is sufficient to display images and text from 5 feet away under standard indoor lighting (300 lux). Timer mode: the timer can be set to any time up to 1 hour in the web application and counts down, resets, and pauses via the web application. |
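As an illustration of the angular-timing scheme in the Spinning PCB subsystem above, here is a minimal C++ sketch that converts the time since the last Hall pulse into an image column index; the column count and microsecond timestamps are assumptions for illustration:

```cpp
#include <cstdint>
#include <cstdio>

// Persistence-of-vision timing: the Hall pulse marks 0 degrees once per
// revolution; the elapsed time since that pulse, divided by the measured
// period, selects which image column to latch into the LED drivers.
// A 128-column image is assumed here.
constexpr uint32_t kColumns = 128;

struct PovTimer {
    uint32_t periodUs    = 1;  // duration of the last full revolution
    uint32_t lastPulseUs = 0;

    void onHallPulse(uint32_t nowUs) {       // called once per revolution
        periodUs    = nowUs - lastPulseUs;
        lastPulseUs = nowUs;
    }
    uint32_t column(uint32_t nowUs) const {  // column to display right now
        uint32_t phase = nowUs - lastPulseUs;
        return (static_cast<uint64_t>(phase) * kColumns / periodUs) % kColumns;
    }
};

int main() {
    PovTimer t;
    t.onHallPulse(0);
    t.onHallPulse(40000);  // 40 ms per revolution = 25 rev/s
    std::printf("column at +10 ms: %u\n", t.column(50000));  // quarter turn -> 32
    return 0;
}
```

On the RP2040, onHallPulse would run in the Hall sensor interrupt and column in the LED refresh loop, so the displayed column stays locked to the measured rotation speed even as the motor drifts.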
||||||
| 53 | [Updated RFA] - Efficient Card Shuffler with Cut Card Insert |
Alex Lo Faso Matt Garrity Steve Mathew |
Aniket Chatterjee | proposal1.pdf |
||
| **Efficient Card Shuffler with Cut Card Insert** Team Members: - Alexander Lo Faso (alofaso2) - Matt Garrity (garrity6) - Steve Mathew (stevem4) **Problem** Card games such as blackjack require shuffling of cards between rounds of play. Over time, this can be a strenuous task for dealers and decreases playing time for players. In addition, games such as blackjack require a cut card to be inserted between rounds at varying deck penetration levels. There are currently no card shuffling machines with a cut card insertion feature. Many commercially available card shuffling machines have very limited features and lack sophistication. These lower-quality machines have limited deck capacities, require a constant push of a button to operate, and require manual retrieval of the shuffled deck, which can be cumbersome when reshuffling the same decks multiple times. **Solution** Our solution is to design and build a card shuffling machine with added features of increased deck shuffle capacity, optical detection of shuffle completion, a retractable motorized shuffled deck tray, and a cut card insert feature with electrical deck penetration customization. These features lead to four subsystems: card deck(s) detection, deck shuffling mechanism, cut card insertion, and completed deck tray extension. The prevailing goal is to make the card shuffler as efficient as possible. Only three inputs are available to the user: a shuffle button, a dial to set the cut card penetration, and a cut card insertion button. The entire shuffle function is fully automated with the push of a button. Once the user is ready for the cut card insertion, they will set the dial and press the cut card insert button, which will electrically align the cut card insertion window and create a delay to give the user time to insert the cut card. **Solution Components** **Subsystem 1 (Card Deck(s) Detection)** This subsystem will detect whether cards are present in the input trays for the shuffler. Detection will be determined through the use of reflective optical sensors, and is critical for preventing overdriven motors and ensuring shuffling runs to completion. The reflective sensors on each tray will measure the light reflected off the bottom card of the stack to determine if the tray is empty or still full. The IR sensors will be mounted flush with the bottom of the tray surface, and their outputs will be fed to a comparator to differentiate between the signals for when no cards are present and when there are cards. The resulting digital signal is read by the MCU through GPIO inputs. When the sensors report no cards are present, the MCU concludes that the shuffling process is complete. - Reflective infrared optical sensor (Vishay TCRT5000) - Comparator IC (LM393) **Subsystem 2 (Deck Shuffling Mechanism)** This subsystem is responsible for the physical shuffling of the cards. It will involve two motors positioned at the bottom of the pre-shuffle deck trays. Each motor will slide one card at a time from its respective card stack inwards into a common pile, forming a shuffled card pile. The motors will contact the cards through a wheel with a rubber edge. Once the shuffle button is pressed and the finished tray is fully retracted (from the previous operation), the motors will begin shuffling.
To ensure that the cards are being shuffled reliably, a beam-break sensor will be positioned below the motor wheels, and as each card passes through the slot, the sensor will generate a pulse that is read by the MCU to confirm that no jam has occurred (or, if there is no pulse, that there has been a jam and the cards need to be reset) and to keep count of the cards that have passed through. The motors will continue shuffling until signaled by Subsystem 1 that there are no more cards remaining to be shuffled. - Servo Motor (Hitec SKU: RB-Hit-27) - Optical beam-break sensor (Omron EE-SX1103) **Subsystem 3 (Cut Card Insertion)** This section will include a user-controlled dial (0-100 scale) which will set the desired depth at which the cut card will be inserted into the shuffled deck (e.g., turning the potentiometer halfway would insert the cut card in the middle of the deck). The dial will be electronically coordinated with a slitted plate which will move along the vertical axis of the card deck based on the dial’s input. A rotary potentiometer will serve as the dial, and the voltage read from the potentiometer will be fed into an ADC on the MCU (the ESP32 includes an ADC). The output of the ADC will be scaled to a corresponding linear displacement for the slitted plate, and the slitted plate will be driven by a stepper motor connected to a linear guide rail which will guide the plate up and down the deck. This allows the user to insert the cut card practically anywhere within the card assortment. Additionally, we will add limit switches at the top and bottom of the rail to prevent any overtravel. - Rotary potentiometer (Bourns 91A1A-B28-L15) - Stepper motor (NEMA 17) - Motor driver (TMC2208) - Linear Guide Rail (MGN9H Linear Guide Rail + Carriage Block) - Limit Switch (Omron SS-3GLPT) **Subsystem 4 (Completed Deck Tray Extension)** This subsystem will be responsible for extending the completed shuffled deck at the end of the shuffle operation. This will require the use of one motor and one optical sensor. The motor used will be a small gear motor attached to a gear track. These are optimal since they have high torque and require low voltage to operate. In addition, we will use an optical sensor to detect when the shuffled deck has been retrieved from the extended tray. Once the shuffled deck is retrieved, the tray will retract automatically. - Reflective infrared optical sensor (Vishay TCRT5000) - Stepper motor (NEMA 17) - Motor driver (TMC2208) - Gear Track (22460300) **Criterion For Success** - Device successfully shuffles 4-6 standard-size decks without any manual intervention - Pressing ‘start’ button once begins the shuffling process - Shuffling continues until all cards from the input trays are emptied and halts once there are no more cards left in the trays, with ~95% accuracy (to account for potential physical mishaps) - If the MCU detects a jam, shuffling stops - Cut card insertion slot moves in accordance with the dial, allowing for desired insertion at any deck penetration level - Bottom tray extends automatically upon shuffling completion - Bottom tray retracts when the IR sensor reads there are no cards on the tray |
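As a small numeric sketch of Subsystem 3's dial-to-displacement scaling above, the following C++ example maps the potentiometer's ADC reading to a stepper target with a clamp as a software overtravel guard; the travel length, steps-per-mm, and ADC range are assumed values, not specifications from the proposal:

```cpp
#include <algorithm>
#include <cstdio>

// Maps the 0-100 cut-card dial (read through the ESP32's 12-bit ADC) to
// a stepper target along the linear rail. Travel length and steps/mm
// depend on the rail and driver microstepping, so both are placeholders.
constexpr int    kAdcMax     = 4095;  // ESP32 12-bit ADC full scale
constexpr double kTravelMm   = 80.0;  // usable rail travel spanning the deck
constexpr double kStepsPerMm = 25.0;  // leadscrew/driver dependent

long dialToSteps(int adcReading) {
    adcReading = std::clamp(adcReading, 0, kAdcMax);  // software overtravel guard
    double fraction = static_cast<double>(adcReading) / kAdcMax;
    return static_cast<long>(fraction * kTravelMm * kStepsPerMm);
}

int main() {
    // Dial at midpoint -> plate halfway up the deck.
    std::printf("mid dial  -> %ld steps\n", dialToSteps(kAdcMax / 2));
    std::printf("full dial -> %ld steps\n", dialToSteps(kAdcMax));
    return 0;
}
```

The hardware limit switches remain the final protection; the clamp only keeps normal commands inside the rail's travel.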
||||||
| 54 | E-PEEL: Electronic Peeling Equipment for Easier Living |
Hyun Jun Paik Saathveek Gowrishankar Varun Ramprakash |
Manvi Jha | Arne Fliflet | design_document1.jpeg proposal1.pdf proposal2.pdf |
|
| Team Members: - Saathveek Gowrishankar (sg59) - Varun Ramprakash (varunr6) - Hyun Jun Paik (hpaik2) # Problem Traditional peelers require grip strength and fine motor control to operate properly and safely. Older adults and other individuals with limited fine motor control, arthritis, tremors, or reduced grip strength often find peeling fruits/vegetables difficult and unsafe. Meal preparation is widely classified as an instrumental activity of daily living (IADL), and the inability to consistently prepare meals can diminish one's independence and quality of life. Several recent papers highlight the lack of available assistive technologies for kitchen-related tasks. One paper (MORPHeus: a Multimodal One-armed Robot-assisted Peeling System with Human Users In-the-loop) even explores a fully autonomous robotic arm that peels vegetables with no human intervention. This solution, however, would be expensive, large, and unrealistic for home kitchens. Additionally, several studies highlight that older adults are less likely to use fully automated solutions and instead prefer semi-autonomous assistive technology that they can reasonably control. # Solution We propose a semi-autonomous peeling assist robot that can solve many of the aforementioned challenges while avoiding the disadvantages of existing proposed solutions. Our proposed solution consists of two primary mechanisms: a motorized conveyor belt and an actively compliant lever arm. Users place a vegetable on the conveyor belt, which then moves the vegetable underneath and across a peeler; the conveyor belt is controlled by three buttons: one for each direction and one to stop. The actively compliant lever arm is fitted with a pressure sensor, a vibration motor, and a vegetable peeler; this allows the peeler’s position to adapt to variations in vegetable shape and position while maintaining a consistent depth of peeling. To ensure continuous and reliable power without runtime limitations, the device will be designed to operate on AC power using an external low-voltage DC adapter. To ensure ease of use, all food-contact components will be removable without tools and easily cleaned. The peeler will be held in place on two rails with a plastic swivel lock at one end, and the plastic conveyor belt will have a removable food-safe silicone/TPU outer layer that clips on; this allows the peeler and conveyor belt cover to be secure when in use but also effortlessly removed for cleaning. LEDs will be included to signal the state of the device (on/off) and the state of the conveyor belt (forward, reverse, paused). # Solution Components ## Subsystem 1: Conveyor Belt This subsystem controls the movement of the vegetable at constant speed, pulling it underneath the peeler blade. The vegetable is peeled lengthwise. Cylindrical vegetables (e.g., zucchini or carrots) are placed on the conveyor belt with their long axis parallel to the belt direction. As the belt moves forward, the vegetable is drawn longitudinally across the blade, allowing the blade to remove peel along the length of the vegetable surface. A single motor drives the conveyor by turning the drive roller through a sprocket-and-chain transmission. The belt is constructed from plastic and covered by a layer of food-grade silicone. The silicone layer attaches to the plastic belt and can be easily attached and removed for cleaning.
- 12V Stepper Motor MEDIUM bipolar - ROB-09238 - Stepper Motor Driver – TB6600 ## Subsystem 2: Blade Holder: Pressure Detector with Vibration Motor This subsystem applies a controlled peeling force to the vegetable using a spring-loaded blade holder with motor-adjustable position, while simultaneously measuring the applied force using a load cell. A TAL220B straight-bar load cell measures the normal contact force applied by the blade. The load cell output is amplified and digitized by an HX711 load cell amplifier, allowing the microcontroller to read and record the applied force. Driven by load-cell feedback, the MG996R servo motor lets the blade track variations in the vegetable surface in real time, maintaining continuous contact with a consistent force applied to the vegetable. To improve peeling, a mini vibration motor (Adafruit 1201) is mounted near the blade holder. The vibration helps the blade slide through the skin more smoothly without increasing applied force. Control Loop: The MG996R servo will be updated at approximately 50 Hz based on load cell feedback. Target Force Value: Initial target normal force is ~1–2 N, which is sufficient to peel typical vegetables like zucchini, carrot, and potato. We will experiment with these values to find the best-performing force. Control Algorithm: We will use a threshold-based incremental adjustment: if the measured force is above the target range, the servo retracts slightly; if below, it advances. This approach is simpler than PID and sufficient for the semi-autonomous design (a short sketch of this loop follows the proposal). Force Range Variation: Peeling force varies with vegetable type and skin toughness. Some papers indicate forces between 0.8 N and 2.5 N are generally effective for common cylindrical vegetables, but again, we'll have to test this. - SparkFun Load Cell (5kg, Straight Bar) – TAL220B - SparkFun Load Cell Amplifier – HX711 - Servo Motor – MG996R - Adafruit Vibrating Mini Motor Disc – ID: 1201 ## Subsystem 3: User Interface: Conveyor Direction Push Buttons This subsystem provides a simple, reliable manual control interface to move the conveyor belt forward or in reverse. The main purpose is jam recovery. If a vegetable binds against the blade, the user can reverse the belt to free it from contact, then resume forward motion. LEDs will be included which indicate the state of the conveyor belt direction. For safety, the peeler will only vibrate when the device is in the peel state, not in the pause or reverse state. Additionally, clicking any button (including reverse) during the peel state will stop the device, moving it into the pause state. The user does not manually feed or hold the vegetable during operation. After placing the vegetable on the conveyor belt, the user steps back and initiates motion using a momentary button press. The blade remains stationary relative to the frame and is never directly contacted during normal operation. A physical blade guard will be added to prevent any direct access to the blade from above or the sides, reducing the risk of accidental contact. - 4 LEDs - 3 Buttons (Forward / Reverse / Pause) - 1 Switch (Power On/Off) ## Subsystem 4: Power, Voltage, and Current Control This subsystem converts standard AC wall power into the low-voltage DC required to safely operate all motors, sensors, and microcontroller components. It ensures continuous, reliable power without runtime limitations and protects user-accessible components from any high voltage.
It also ensures that the power provided to the circuit components does not exceed their maximum ratings. A current sensor will additionally be used to prevent motor burnout during stalls. - AC-to-DC Adapter: Mean Well GST60A24-P1J - Current Sensor - ACS712 # Criteria For Success The device has three states, and the following criteria reference these states. - Pause State: The conveyor belt does not move, and the blade does not vibrate. - Peel State: The conveyor belt moves forward, and the blade vibrates. - Reverse State: The conveyor belt moves backward, and the blade does not vibrate. 1. The device enters the pause state when the on/off button is switched to on. 2. When the forward peel button is pressed and the device is in the pause state, the conveyor belt enters the peel state. 3. If any button other than the forward peel button is clicked during the peel state, the device immediately enters the pause state. 4. When the reverse button is pressed and the device is in the pause state, the conveyor belt enters the reverse state. 5. Once the conveyor belt starts moving forward, it does not stop unless the direction is changed, the conveyor is paused, or power is cut. 6. Once the peeler starts vibrating, it does not stop unless the direction is changed, the conveyor is paused, or power is cut. 7. The device thoroughly peels cylindrical vegetables, covering over 90% of their surface area. Upon achieving consistent success with partially cylindrical vegetables (e.g. zucchini), attempt to peel other varying shapes/sizes of fruits and vegetables. 8. The device minimizes the amount of usable produce being discarded (determined qualitatively through visual observation). 9. The device operates from a standard 120 V AC wall outlet. |
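Following up on the control algorithm described in Subsystem 2 above, here is a minimal C++ sketch of the threshold-based incremental adjustment; the target band edges and per-update step size are assumed starting points that the team plans to tune:

```cpp
#include <cstdio>

// Threshold-based incremental force control for the blade holder:
// retract when above the target band, advance when below, hold inside.
// Band and step size are placeholders pending the planned experiments.
constexpr double kTargetLowN  = 1.0;  // lower edge of target band (N)
constexpr double kTargetHighN = 2.0;  // upper edge of target band (N)
constexpr double kStepDeg     = 1.0;  // servo increment per 50 Hz update

double updateServo(double measuredForceN, double servoDeg) {
    if (measuredForceN > kTargetHighN)     servoDeg -= kStepDeg;  // retract blade
    else if (measuredForceN < kTargetLowN) servoDeg += kStepDeg;  // advance blade
    return servoDeg;  // within band: hold position
}

int main() {
    double servo = 45.0;
    double forces[] = {0.4, 0.7, 1.5, 2.6, 1.8};  // HX711 readings, in newtons
    for (double f : forces) {
        servo = updateServo(f, servo);
        std::printf("force %.1f N -> servo %.1f deg\n", f, servo);
    }
    return 0;
}
```

Holding position inside the band (rather than chasing the setpoint exactly) is what makes this simpler than PID: there is no integrator to wind up when the blade momentarily loses contact.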
||||||
| 55 | HydroFlora (A Context-Aware Watering Can) |
Delilah Dzulkafli Idris Ispandi |
Mingrui Liu | photo1.png proposal1.pdf |
||
| # Team Members: Idris Ispandi (mm120) Delilah Dzulkafli (delilah5) # Problem: Many people care for multiple houseplants with different watering needs, but watering is typically done by intuition and inconsistent habits. Because plant type, pot size, soil type, and moisture all affect how much water a plant actually needs, manual watering often results in overwatering or underwatering. Overwatering can lead to root rot, fungus gnats, and wasted water, while underwatering causes plant stress, slowed growth, and wilting. Existing reminders or generic schedules don’t adapt to real-time soil conditions, and fully automated irrigation systems can be too expensive, complex, or impractical for small indoor plant collections. There is a need for a simple, low-effort tool that helps users deliver the correct amount of water per plant based on measured soil dryness and plant/pot-specific requirements, without requiring a permanently installed system. # Solution: In order to maintain optimal conditions for plants, we propose a smart watering can. The watering can will have two working parts: the MCU connected to a water pump (on the watering can), and the modular sensing unit (on the plant’s pot). The idea is that when you get a new plant, you input the plant type to the MCU, and the recommended amount of water for that plant is stored. The sensor unit constantly broadcasts its readings, so when you pick up the watering can, it tells you which plant needs water based on the previous watering logs. You select the plant, go to the respective pot, and press dispense; the MCU then commands the pump to dispense the amount of water needed to bring the current moisture level up to the recommended level (a sketch of this computation follows the Solution Components). This way, we can ensure that each plant receives the optimal amount of water needed to grow. # Solution Components: - ## Subsystem 1 (Water Dispensing Unit): Components: Peristaltic Liquid Pump with Silicone Tubing Driven by the MCU, this unit is responsible for dispensing the required amount of water. This will be placed in the watering can. https://www.digikey.com/en/products/detail/adafruit-industries-llc/1150/5638299 - ## Subsystem 2 (Sensor Node): Components: Capacitive Soil Moisture Sensor SKU:SEN0193, ESP32-C3-WROOM-02, battery and regulator This unit will have a sensor attached to the plant to measure the soil moisture, and the readings will be transmitted to the main control unit periodically via WiFi/Bluetooth (tradeoffs are still being weighed). https://www.digikey.com/en/products/detail/dfrobot/SEN0193/6588605 - ## Subsystem 3 (Main Control Unit): Components: ESP32-C3-WROOM-02, LCD display, buttons This acts as the device's main control unit. When the user chooses a plant by clicking the buttons (pre-defined for the prototype), the LCD will display which plant the user has selected. It is then responsible for determining the amount of water to be pumped out based on the readings received from the plant’s moisture sensor. - ## Subsystem 4 (Physical Build): Components: A watering can The MCU will be attached at the top of the watering can in a waterproof enclosure. This will be discussed with the machine shop for further input. - ## Subsystem 5 (Power Management): Components: Rechargeable Battery for MCU and LiPo battery for sensor unit This subsystem provides rechargeable power and stable 3.3 V for our electronics. The pump, sensor node, and the control unit will have separate power systems.
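The sketch below illustrates the dispensing computation referenced in the Solution section: the moisture deficit is converted into a volume and then into a pump run time. Both calibration constants are assumed placeholders that would be measured per pot size and per pump:

```cpp
#include <algorithm>
#include <cstdio>

// Converts a plant's moisture deficit into a pump run time. The
// milliliters-per-percent factor and the pump flow rate are assumed
// values to be calibrated experimentally.
constexpr double kMlPerPercent = 15.0;  // ml needed per % moisture (per pot)
constexpr double kPumpMlPerSec = 1.7;   // peristaltic pump flow rate

double pumpSeconds(double targetPct, double currentPct) {
    double deficit  = std::max(0.0, targetPct - currentPct);  // never over-water
    double volumeMl = deficit * kMlPerPercent;
    return volumeMl / kPumpMlPerSec;
}

int main() {
    // Sensor node reports 32% moisture; this plant's stored target is 45%.
    double secs = pumpSeconds(45.0, 32.0);
    std::printf("dispense %.0f ml over %.1f s\n", (45.0 - 32.0) * kMlPerPercent, secs);
    return 0;
}
```

Clamping the deficit at zero means a recently watered plant can never trigger a dispense, which also supports the "moisture reading increases after watering" success criterion below.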
# Criterion For Success: This project will be considered successful if the system can reliably receive soil moisture data from multiple sensor nodes (sensor readings are stable under fixed conditions), accurately determine which plant needs watering, and dispense water within 10% of the target volume while maintaining stable operation: - Sensor nodes produce stable, repeatable moisture values, with readings that increase after watering and decrease over time - Sensor nodes can successfully broadcast soil moisture readings to the main control unit. - The system accurately determines which plant needs watering based on moisture level - Pump dispenses water within 10% of target volume - Different plants result in different dispense volumes - Sensor node operates continuously for >24 hours on battery without recharge - Electronics remain functional after watering |
||||||
| 56 | Automatic Bike Light |
Magdalene Noftz Nathanael Salazar Pesandi Gunasekera |
Chihun Song | Arne Fliflet | proposal1.pdf |
|
| # Automatic Bike Light Team Members: - Magdalene Noftz (noftz2) - Pesandi Gunasekera (pesandi2) - Nathanael Salazar (nsala6) # Problem In the state of Illinois, bicycles ridden legally on the road must have a front light that makes them visible from 500 feet and a rear reflector or rear light. It is also recommended that a bike be visible from at least 100 feet to vehicles approaching from behind. Presently there are no systems in place to adjust the brightness of a bike's headlight the way cars' headlights adjust automatically. There are also no rear lights that automatically turn on or off to alert cars behind the bike of its presence. Additionally, even if cyclists have lights on their bikes, they can forget to turn them on. Similarly, cyclists can forget to turn their lights off, thus draining the battery and making the lights useless. Also, the luminosity of certain lights may not be appropriate for the light level of the environment that the cyclists are biking through. # Solution Bike lights increase visibility and reduce accident risks. Front light brightness is determined by ambient light: the darker the surroundings, the brighter the light. We would ensure this brightness is calibrated for the bike and is always visible from 500 ft ahead. The rear light turning on would be based on the bike’s distance from a car behind the bike. For additional functionality to save energy, if we had time we would like to turn the bike light off if the bike is stationary for long periods of time. # Solution Components Bike (Nathanael’s bike) Front Light - White bike light (Walmart) - Photoresistors - Microcontroller - Vibration sensor (1528-1766-ND) Back Light - Red bike light (Walmart) - Ultrasonic sensor (1738-SEN0313-ND) - Microcontroller - Vibration sensor (1528-1766-ND) ## Subsystem 1: Front light The front light would detect the ambient light of the surroundings and automatically adjust its brightness accordingly. Photoresistors would be placed on top of the light to determine the luminosity of the sunlight or streetlights nearby. In broad daylight, the photoresistors would detect the brightness from the sun. This condition could turn the lights off or set them to a flashing mode to improve the visibility of the cyclist. During nighttime, the lack of surrounding light would be detected by the photoresistors and set the front bike light to a constant beam that varies in intensity depending on the environment. In well-lit areas, such as cities, the microcontroller would set the light to emit an intensity of at least 150 lumens. In semi-lit areas, such as main roads, the light would emit an intensity between 150 and 400 lumens. In very dark areas, such as unlit trails, the light would emit an intensity upwards of 400 lumens. The bike light will contain a vibration sensor to detect when the bike is moving. The vibration sensor would detect when the bike is in motion and turn the light on at the aforementioned intensity level. After 5 minutes of inactivity, the light would automatically turn off. ## Subsystem 2: Rear light The rear light will use an ultrasonic sensor to detect a vehicle behind the bike within a distance of 25 feet. Although the recommended distance is 100 feet, ultrasonic sensors that can detect at this range are very expensive, and so our project will use a range of 25 feet. If the project were to be expanded later on, we would switch the sensor to one that could detect farther.
If the sensor detects a vehicle behind the bike, the microcontroller will turn on the rear light to make the bike visible. Once there is no longer anything detected within the range, the microcontroller will turn the light off. Additionally, the vibration sensor will detect if the bike is in motion and is being used. Once the vibration sensor detects that the bike has not been in motion for five minutes, it will turn off the light fully. # Criterion For Success - Photoresistor detects changes in ambient light - Photoresistor readings are used to adjust the brightness of the front bike light - Ultrasonic sensor detects movement 25 feet behind the bike - Rear light turns on if movement is detected - Vibration sensor correctly detects when the bike is moving - Both lights turn off if the bike has not moved for over five minutes. |
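As an illustration of the front-light logic in Subsystem 1 above, here is a minimal C++ sketch mapping a photoresistor ADC reading to the proposal's brightness tiers and applying the 5-minute inactivity shutoff; the ADC thresholds are assumed placeholders that would be calibrated against real lighting conditions:

```cpp
#include <cstdint>
#include <cstdio>

// Maps the photoresistor reading to the proposal's brightness tiers
// (flash in daylight; >=150, 150-400, and 400+ lumen beams as it gets
// darker) and applies the 5-minute inactivity shutoff.
enum class Mode { Off, Flash, City150, Road150to400, Trail400Plus };

constexpr uint32_t kIdleLimitMs = 5u * 60u * 1000u;  // 5 minutes

Mode frontLightMode(int ambientAdc, uint32_t msSinceMotion) {
    if (msSinceMotion > kIdleLimitMs) return Mode::Off;  // vibration sensor idle
    if (ambientAdc > 3000) return Mode::Flash;           // broad daylight (assumed)
    if (ambientAdc > 1800) return Mode::City150;         // well-lit streets
    if (ambientAdc > 600)  return Mode::Road150to400;    // semi-lit roads
    return Mode::Trail400Plus;                           // unlit trails
}

int main() {
    std::printf("%d\n", static_cast<int>(frontLightMode(3500, 1000)));    // Flash
    std::printf("%d\n", static_cast<int>(frontLightMode(200, 1000)));     // Trail
    std::printf("%d\n", static_cast<int>(frontLightMode(200, 400000)));   // Off
    return 0;
}
```

A real firmware build would also add hysteresis around each threshold so the beam does not flicker between tiers when riding past streetlights.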
||||||
| 57 | Solar Scrubber |
Jonathan Sengstock Sandra Georgy Yehia Ahmed |
Chihun Song | Joohyung Kim | other1.pdf |
|
| Team: Yehia Ahmed (yahme6), Sandra Georgy (sgeor9), Jonathan Sengstock (jms32) Problem Keeping solar panels clean is crucial to their operation; if panels are obscured by dust, dirt, snow, or bird droppings, their power output is critically reduced. Additionally, solar power installations are often in difficult-to-reach or remote locations such as rooftops and fields; this makes frequent cleaning of the solar panels difficult. Solution Our solution, which we call Solar Scrubber, is a robot that navigates on a 2-axis linear guide rail system. The guide rails will be mounted on the top and bottom of the solar array. The main body of the robot will contain the circuitry and electronics, cleaning module, and motors to navigate the guide rail system. Additionally, the Scrubber will have a module connected to the output wires of the solar panel to measure its power output. If a section of the panel is outputting lower power than the rest, the Scrubber will automatically clean that section of the panel. The cleaning module will be a rotating cloth (similar to a mop head) and a water or cleaning solution dispenser. We will be designing our project with the ECE building solar panels as the primary use case. The system is composed of several integrated subsystems, including a rail-based locomotion unit for travel, an MPPT algorithm for power analysis, a cleaning module for scrubbing and fluid delivery, an ESP32 control unit for managing the Finite State Machine and Bluetooth communication, a power conversion system to step down 120V wall power to usable DC voltages, and the solar panel itself, which serves as the operational surface. Locomotion/Movement The locomotion subsystem enables movement across the solar panel through vertical and horizontal drive components powered by 12V DC motors and drivers that interface with the microcontroller to ensure full coverage of the cleaning area. We aim to use linear guide rails, similar to how a 3D printer navigates. MPPT and Algorithm The Maximum Power Point Tracking (MPPT) component extracts maximum power from the solar panel and detects the power losses caused by dirt. The MPPT analyzes the I-V characteristics of the solar panel to identify a group of cells that aren’t meeting expected performance. The MPPT measurements will help us perform targeted cleaning rather than cleaning the full solar array. In addition, the MPPT measurements can be used to compare the output power before and after cleaning to determine the efficiency of the Solar Panel Cleaner. Key components include ADC input (MCU), a current sensor, a perturb-and-observe algorithm in firmware (running on the ESP32), and data logging for power measurements. Cleaning Module The cleaning module features a 12V DC motor with a rotating towel and a 12V water pump for fluid delivery. To bridge the gap between the 120V wall power and the 3.3V logic of the ESP32, the system uses an AC-DC power adapter and an L298N motor driver. The adapter converts the high-voltage wall power into a steady 12V supply, while the motor driver acts as a high-speed electronic switch. By receiving low-voltage commands from the ESP32, the driver directs the 12V power to the scrubbing motor and pump, allowing the Finite State Machine to control the rotation and spraying sequences based on the cleaning path. MCU The ESP32 Development Board acts as the robot's brain and was chosen because it has built-in Bluetooth to allow for manual control and data monitoring.
## Cleaning Module
The cleaning module features a 12V DC motor with a rotating towel and a 12V water pump for fluid delivery. To bridge the gap between the 120V wall power and the 3.3V logic of the ESP32, the system uses an AC-DC power adapter and an L298N motor driver. The adapter converts the high-voltage wall power into a steady 12V supply, while the motor driver acts as a high-speed electronic switch. By receiving low-voltage commands from the ESP32, the driver directs the 12V power to the scrubbing motor and pump, allowing the Finite State Machine to control the rotation and spraying sequences based on the cleaning path.
## MCU
The ESP32 Development Board acts as the robot's brain and was chosen because it has built-in Bluetooth to allow for manual control and data monitoring. The system uses a Finite State Machine (FSM), a logic map that tells the robot whether it should be in Auto mode to clean the panels, Manual mode to respond to Bluetooth commands, or Idle mode when at the home position. The Bluetooth capability is especially important for the MPPT algorithm, as it allows the robot to wirelessly transmit real-time power data to a phone or tablet so the user can see whether the cleaning is actually improving efficiency.
## Power Conversion
The power conversion subsystem supplies and regulates the voltages to all electronic components. Key components include an AC-DC converter (120V AC from the building grid to 12V DC) and DC-DC step-down converters to supply the motors with 12V and the ICs with 3.3V and 5V.
## Solar Panel
The solar panel we will be using is targeted at the panels on the roof of the ECEB. The dimensions of these panels are not posted online, but each panel outputs about 280 Watts. Our project will aim to function on existing solar panels, so purchasing a panel should not be necessary.
# Criterion For Success
To ensure the Solar Scrubber is effective, the following goals will be tested:
- The cleaning module must be able to detect the cells with dirt or debris, enable targeted cleaning, and be able to distinguish dirt from shading or lack of sun.
- Upon cleaning the panel, it should remove the majority of debris (more than 75%).
- The cleaning module should be able to perform a full panel sweep every 2 hours autonomously.
- The entire module should function in a variety of conditions, including temperatures between 0°F and 100°F and weather ranging from sunshine to light rain and snow.
- The electronics and movement units should show little to no sign of breakdown or failure after 50+ uses. |
||||||
| 58 | Adherescent (Team 2) Auto Time Setting Scent Reminder |
Megan Shapland Wenchang Qi |
Jiaming Xu | Craig Shultz | proposal1.pdf |
Adherescent |
| # Adherescent (Team 2) Auto Time Setting Scent Reminder
Team Members:
- Megan Shapland (meganls2)
- Wenchang Qi (qi14)
# Problem
Daily medication is imperative to health, but it is often easy to forget as we grow older and the reliability of our memory, sight, and hearing declines. Traditional medication reminders are lost in the frenzy of notifications and sounds that we experience on a daily basis (as presented by Gaurav Nigam and Brian Mehdian at Adherescent). There is also an ease-of-use problem: many adaptive devices are not adopted because learning to work with a new technology is intimidating, particularly when time setting and user interfaces are confusing.
# Solution
We propose a smart pill dispenser that utilizes scent as the primary notification mechanism. The system is built around a custom-designed PCB integrating an ESP32 microcontroller module. This allows for Wi-Fi connectivity, enabling time synchronization and remote scheduling potential. When a scheduled dose is due, the system triggers a scent release mechanism. The scent persists until the user opens the correct pill compartment. We will achieve the scent generation by electronically interfacing with and controlling a commercial aroma diffuser. The system will also employ magnetic sensors to detect the precise open/closed state of each medication compartment to close the feedback loop.
# Solution Components
## Subsystem 1: Custom Control Electronics (PCB Design)
This subsystem is the central processing unit of the device. Instead of using a pre-made development board, we will design and fabricate a custom PCB to ensure a compact form factor and specific power requirements.
* Microcontroller: An ESP32 Module will be used as the core processor to handle logic and Wi-Fi connectivity.
* Power Management: The PCB will include a Voltage Regulator circuit to step down the external power supply (5V USB) to the voltage required by the logic circuits (3.3V).
* Programming Interface: A UART interface will be exposed on the PCB to allow firmware flashing and debugging via an external serial adapter.
## Subsystem 2: Olfactory Notification Interface
This subsystem is responsible for generating the scent alert. We will adopt a system-integration approach to leverage existing, reliable atomization technology.
* Primary Approach (Commercial Integration): We will reverse-engineer a commercially available Ultrasonic Aroma Diffuser. The control signals of the diffuser will be intercepted and managed by our main PCB.
* Isolation Circuit: To safely interface the low-voltage ESP32 logic with the potentially higher-voltage circuit of the commercial diffuser, we will design an Optocoupler Isolation Circuit on our PCB. This acts as an electronic switch, simulating physical button presses to trigger the scent without electrical risk to the microcontroller.
* Backup Approach (Thermal Diffusion): In the event that the commercial unit cannot be successfully integrated due to space constraints, we will implement a fallback mechanism using Thermal Diffusion. This involves a PTC Heating Element driven by a MOSFET on our PCB to gently heat a scent-infused pad, promoting rapid evaporation.
## Subsystem 3: Compartment State Detection
This subsystem verifies user compliance by monitoring the physical state of the pill box lids.
* Sensors: We will utilize Hall Effect Sensors placed on the PCB or routed to individual compartments. These non-contact sensors offer superior durability compared to mechanical switches.
* Triggers: Small permanent magnets will be embedded into the lid of each pill compartment.
* Logic: The system will read the sensor state to determine if the correct compartment has been opened. If confirmed, the microcontroller will immediately send a signal to stop the scent generation (see the sketch below).
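A minimal sketch of the Subsystem 3 feedback loop, assuming hypothetical helpers (`hall_is_open`, `diffuser_set`) and a fixed compartment count; the real firmware would also handle scheduling and the isolation-circuit details.

```c
/* Compartment-detection feedback loop sketch (illustrative).
 * hall_is_open(i) is an assumed helper that returns true when the magnet
 * on lid i has moved away from its Hall-effect sensor; diffuser_set(on)
 * is an assumed helper that toggles the scent mechanism via the
 * optocoupler isolation circuit.
 */
#include <stdbool.h>
#include <stdint.h>

#define NUM_COMPARTMENTS 7   /* placeholder: one lid per day, for example */

extern bool hall_is_open(uint8_t compartment);
extern void diffuser_set(bool on);

static int scheduled_compartment = -1;  /* set when a dose comes due */

/* Poll in the main loop while a scent alert is active. */
void check_compliance(void)
{
    if (scheduled_compartment < 0)
        return;                           /* no dose pending */

    if (hall_is_open((uint8_t)scheduled_compartment)) {
        diffuser_set(false);              /* correct lid opened: stop scent */
        scheduled_compartment = -1;
    }
}
```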
# Criterion For Success
1. Scheduling Reliability: The device must trigger the scent notification within 5 seconds of the scheduled medication time.
2. Scent Control: The system must successfully turn on the external diffuser via the custom isolation circuit and turn it off automatically when the pill box is opened.
3. Sensor Accuracy: The Hall Effect sensors must detect the Open and Closed states of the compartment with 100% accuracy across consecutive test trials.
4. PCB Functionality: The custom-designed PCB must successfully power the ESP32 module and handle the logic levels without overheating or resetting due to power fluctuations. |
||||||
| 59 | Gesture Controlled Surveillance Robot |
Kushl Saboo Roshni Mathew Suvid Singh |
Argyrios Gerogiannis | Yang Zhao | ||
| # Gesture Controlled Surveillance Robot
Team Members:
- Roshni Mathew (roshnim3)
- Kushl Saboo (kushls2)
- Suvid Singh (suvids2)
# Problem
In disaster and rescue scenarios (collapsed structures, smoke-filled buildings, unstable debris fields), responders often need quick situational awareness without putting people at additional risk. Small ground robots can provide remote surveillance, but many are controlled using joysticks or complex interfaces that require training and constant fine-grained input. In high-stress environments, precise manual control becomes a liability: it increases cognitive load, slows down deployment, and makes it harder for responders to focus on interpreting the scene and coordinating rescue actions. The result is that existing teleoperated robots can be underutilized or difficult to operate effectively when time and attention are limited.
# Solution
We will build a rescue surveillance robot with an intuitive gesture-based control interface that translates simple hand motions into high-level movement commands, paired with onboard safety behaviors to reduce operator burden. The operator wears a gesture device (IMU-based glove or wrist module) that detects orientation/motion and wirelessly transmits commands such as move forward, turn, stop, rotate/scan, and return. The robot executes these commands while enforcing safety constraints (slowing/stopping near obstacles), and provides real-time situational awareness through video streaming and sensor feedback. This enables faster, more natural control than a traditional remote controller, allowing responders to deploy the robot quickly and maintain attention on the environment rather than micromanaging the robot's motion.
# Solution Components
## Subsystem 1
We want to make a glove that recognizes different gestures and transmits the corresponding motion command to the robot. The motions we want the glove to recognize are forward/backward, turn left/right, and stop. Additional features, if we have time, would include "come back" and "spin/dance".
Base System - Custom PCB
1. IMU
2. Bluetooth Transmitter/Receiver
3. 3-4 Flex sensors (1 for each finger)
4. 1 MCU (think Raspberry Pi chip)
5. Buttons to control the mode and power the glove on
6. Battery (PSU)
Additional System:
1. 1 Haptic Feedback Module
With the base system, the purpose of the IMU would be to detect pitch and roll, because these motions correspond with directions. The flex sensors would be used to detect stop and come back. An MCU on the glove will classify the different movements and send commands to the robot (see the gesture-classification sketch at the end of this proposal). For the bonus features, the glove would also include a receiver to support obstacle avoidance: when the robot has detected an obstacle and has stopped, it lets the user know through haptic feedback that it cannot move in that direction. Another bonus feature would give the glove different modes, for example a mode in which it controls the camera's movement (spinning to see different areas).
## Subsystem 2
We want to build a system on the robot. The robot will receive the commands from the glove and then move in the corresponding direction. Here are the components that will be required:
Base System - not PCB
1. Bluetooth Transmitter/Receiver
2. Motors
3. Caterpillar Track (for multi-terrain compatibility)
4. Raspberry Pi Board
Additional System
1. Camera for surveillance
2. TOF (LiDAR) sensors
3. Heat/night vision camera? (Better at looking through debris? Maybe too expensive?)
The robot base system will accept commands from the glove and then move accordingly. We have a caterpillar track for multi-terrain capability. We will use a Raspberry Pi board for receiving and executing the commands; the purpose of the board is that we can easily add other modules for the additional system features. The additional system will include a camera that will transmit the camera data to an external laptop. We will also have LiDAR sensors for obstacle avoidance, so that if a commanded move would cause the robot to hit an obstacle, it will stop and transmit that status back to the glove.
# Criterion For Success
The project will be considered successful if the following functional and performance objectives are met:
## 1. Reliable Gesture Recognition (Glove Subsystem)
The glove must accurately detect user gestures using IMU orientation (pitch and roll) and finger flex sensor inputs. The system must correctly classify and generate control commands corresponding to:
- Move forward
- Move backward
- Turn left
- Turn right
- Stop
## 2. Wireless Communication
The glove subsystem must transmit gesture commands to the robot wirelessly using Bluetooth (BLE).
## 3. Robot Motion Execution
The robot subsystem must correctly interpret received commands and translate them into motion, reliably performing:
- Forward and backward motion
- Left and right turns
- A 360° surveillance spin
## Stretch Goals (Advanced Success Criteria)
### 1. Safety Through Obstacle Avoidance
The robot must integrate onboard distance sensing (ToF/LiDAR) to prevent unsafe movements. The robot must stop before impact. The system must override unsafe commands in real time.
### 2. Haptic Feedback to User (Closed-Loop System)
When the robot is unable to execute a command due to an obstacle, haptic feedback must be sent to the glove to notify the user.
### 3. Camera/Visual Feedback
We will add a camera or thermal/infrared sensing method to detect human presence in low-visibility environments and provide easy remote control. |
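A minimal sketch of the glove-side gesture classifier referenced in Subsystem 1. The tilt threshold, the summed flex-sensor value for a fist, and the fusion of raw IMU data into pitch/roll are all assumptions to be replaced by calibrated values and the team's actual sensor pipeline.

```c
/* Gesture-to-command classifier sketch (illustrative).
 * Maps IMU pitch/roll (degrees, assumed to come from an upstream fusion
 * step) plus a summed flex-sensor ADC reading to the five base commands.
 * All thresholds are placeholders to be tuned during calibration.
 */
typedef enum { CMD_NONE, CMD_FORWARD, CMD_BACKWARD,
               CMD_LEFT, CMD_RIGHT, CMD_STOP } command_t;

#define TILT_DEG   25.0f   /* tilt needed to register a direction */
#define FIST_FLEX  3000    /* summed flex reading for a closed fist */

command_t classify_gesture(float pitch_deg, float roll_deg, int flex_sum)
{
    if (flex_sum > FIST_FLEX) return CMD_STOP;      /* closed fist = stop */
    if (pitch_deg < -TILT_DEG) return CMD_FORWARD;  /* tilt hand forward  */
    if (pitch_deg >  TILT_DEG) return CMD_BACKWARD; /* tilt hand back     */
    if (roll_deg  < -TILT_DEG) return CMD_LEFT;
    if (roll_deg  >  TILT_DEG) return CMD_RIGHT;
    return CMD_NONE;                                /* neutral: no command */
}
```

Evaluating this classifier over short time windows, as the proposal suggests, would reduce spurious commands from transient hand motion.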
||||||
| 60 | FadeX: Automated Nicotine Tapering Device |
Ian Zentner Justin Leith Malik Kelly |
Jiaming Xu | proposal1.pdf |
FadeX | |
| **Team Members:**
* Malik Kelly (mkelly61)
* Justin Leith (jleit3)
* Ian Zentner (iwz2)
**Problem:**
Electronic cigarettes were originally marketed as cessation tools, yet they have become a primary source of addiction. Current cessation methods like gum or patches fail to address the "oral fixation" habit, leading to high relapse rates. Alternatively, "manual tapering" (buying bottles with progressively lower nicotine) is logistically difficult and prone to user error; users often relapse when they cannot find the specific lower concentration they need or struggle with the "cold turkey" steps between available concentrations (e.g., jumping from 5% down to 3%). There is currently no device that automates the tapering process while maintaining the user's behavioral routine.
**Solution:**
FadeX is a Bluetooth-enabled vaporization device that automates nicotine reduction. Unlike standard devices, FadeX utilizes a dual-reservoir system: one pod containing high-concentration nicotine and another containing zero-nicotine dilutant. The device features an active mixing system using micro-peristaltic pumps driven by an ESP32 microcontroller. Based on a schedule set in the companion mobile app, the device calculates and delivers a specific ratio of liquids to the heating element in real time. This allows for a continuous reduction in nicotine that is harder for the user to perceive (e.g., 5.0% to 4.9% to 4.8%) rather than distinct steps. The system includes pod authentication to ensure safe liquid usage and strict software fail-safes to limit dosage per hour. It would also implement safety protocols with regard to temperature, and have charging capability similar to that of current e-cigarettes.
**Solution Components:**
**Subsystem 1: Power & Energy Management**
* **Goal:** Get power in safely, regulate it, and budget it.
* **Power Source:** Samsung SDI INR18650-20S (1-cell Li-ion).
* **Charging:** TP5100 charging module. Premade circuit that powers the microcontroller.
* **User Wake/Enable:** Button to toggle the vape back on after idle, using a watchdog timer.
* **Status/Low-Power Feedback:** RGB LEDs (Battery Low, Puffs Remaining, Error).
**Subsystem 2: Fluid, Mixture, and Sensing (The "Process Plant")**
* **Goal:** Move liquid, know what's happening, and control the blend.
* **Liquid Transport:** Bartels BP7 pump × 2. Used to extract liquid from the capsules and move it into the central chamber to be atomized; isolates the liquid from mechanical parts.
* **Inhalation Detection:** BMP280 barometric pressure sensor or differential pressure sensor for airflow/puff detection.
* **Pump Drive / Ratio Control:** Dual H-bridge driver (L9110S) used with PWM control to set the relative pump rates (see the mixing sketch at the end of this proposal).
**Subsystem 3: Thermal & Aerosol Generation**
* **Goal:** Turn the commanded dose into vapor consistently and safely.
* **Atomizer:** Standard resistance coil (Kanthal A1, ~1.0 Ω) wrapped in organic cotton.
* **Coil Switching/Drive:** N-channel MOSFET (IRLB3034) to fire the coil.
* **Overheat Protection:** NTC thermistor near the coil/atomizer to monitor temperature and prevent overheating.
**Subsystem 4: Tapering Control, Display, and Connectivity Unit**
* **Central Control & Safety Logic:** Handles system state, permissions, and interlocks.
* **Microcontroller:** ESP32 (Wi-Fi/BLE for app connectivity).
* **Waveshare 2inch LCD Display Module:** To display analytics and options to the user.
* **Buttons:** User control of the display and microcontroller logic.
**Criterion For Success:**
* **Mixing Accuracy:** The device must produce a target nicotine concentration with a margin of error less than ±20%.
* **Autonomous Tapering:** The system must successfully alter the concentration of nicotine over a specified amount of time, using smaller or larger step-down increments based on the user's settings (starting concentration value, time period of cessation), over a simulated timeframe without user intervention.
* **Safety & Limits:** The firmware must enforce a "lockout" if the user exceeds a set nicotine limit (e.g., >2mg in 1 hour) or if the coil temperature exceeds safe limits (>250°C).
* **Pod Security:** The device must refuse to fire if the pods are swapped (e.g., the nicotine pod inserted into the dilutant slot) or if an unauthorized pod is detected.
* **Power Conservation:** Despite using power in more ways than the usual e-cigarette, the device should last for around 100 puffs, aiming for close to a full day on one charge. |
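A minimal sketch of the ratio computation behind the Subsystem 2 pump drive, assuming pump flow is roughly proportional to PWM duty and the dilutant pod contains 0% nicotine; the function name and duty-budget parameter are illustrative.

```c
/* Dual-pump mixing-ratio sketch (illustrative).
 * Splits a combined PWM duty budget between the nicotine pump and the
 * dilutant pump so the blended liquid hits a target concentration.
 */
#include <stdint.h>

typedef struct { uint8_t nicotine_duty; uint8_t dilutant_duty; } pump_cmd_t;

pump_cmd_t mix_for_target(float pod_pct, float target_pct, uint8_t pwm_total)
{
    pump_cmd_t cmd = {0, 0};
    if (pod_pct <= 0.0f || target_pct < 0.0f) return cmd;  /* invalid input */

    float frac = target_pct / pod_pct;   /* share of flow from nicotine pod */
    if (frac > 1.0f) frac = 1.0f;        /* cannot exceed pod concentration */

    cmd.nicotine_duty = (uint8_t)(frac * pwm_total + 0.5f);
    cmd.dilutant_duty = (uint8_t)(pwm_total - cmd.nicotine_duty);
    return cmd;
}

/* Example: a 5.0% pod tapered to 4.9% with a duty budget of 200 yields
 * roughly 196 duty on the nicotine pump and 4 on the dilutant pump. */
```

In practice the firmware would also fold in pump calibration curves, since real peristaltic flow is not perfectly linear in duty cycle.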
||||||
| 61 | Automatic Motorized Satellite Tracker/GroundStation & Down Converter Subsystem/RF frontend |
Jumana Schmidt Rishan Patel Wiley Tong |
Jason Jung | Arne Fliflet | proposal1.pdf |
|
| # Automatic Motorized Satellite Tracker/GroundStation & Down Converter Subsystem/RF Frontend
Team Members:
Jumana Schmidt (jumanas2)
Wiley Tong (wileyt2)
Rishan Patel (rishanp2)
# Problem:
There are over 14,000 satellites orbiting the Earth. From real-time weather images, pictures of our Sun, and HAM radio to leaked unencrypted military communications, each satellite is transmitting a variety of readily available data. Some of this data can even be life-saving or critical to our infrastructure. With such intriguing information available, it is no wonder that there has been growing interest in satellite communications across so many different communities. However, accessing satellite data directly or indirectly typically requires some combination of internet-based services, expensive tracking hardware, RF experience, and a lot of manual setup. For off-grid users, remote communities, and students learning RF/satellite communication, this creates a large barrier: even if the satellites are transmitting overhead, it's hard to reliably aim an antenna, lock the signal, and turn that RF into usable decoded output.
Many relevant or interesting satellites, including those for weather, are in low Earth orbit (LEO), which requires real-time tracking through the sky, either manually or with a motorized mount. There are few commercial, affordable motorized antenna mounts, and none of them are truly hands-off and automated. These satellites also usually transmit in L-band and/or S-band. So even though most of the starting equipment can be homemade or cheap, such as an antenna, some free software, and a basic software-defined radio dongle (like an RTL-SDR), these microwave-band signals can still be hard or impossible to properly receive and decode due to limited range. An MMDS or frequency downconverter is required both for a cheap option like an RTL-SDR and even for a step up to a $300 HackRF One. Additionally, there are not many commercial, affordable downconverters available. As a result of both of these obstacles, receiving any updated critical/useful data is often impractical, inconsistent, or too costly for most people to try.
# Solution:
Our overall goal is to help make radio and satellite tracking/reception more accessible for educators, researchers, remote communities, survivalists, and radio enthusiasts alike. To accomplish part of this task, we seek to address two of the most inaccessible and unaffordable aspects: live tracking, and making those microwave transmissions receivable by cheaper SDRs. More specifically, we will create an affordable automatic, motorized satellite tracker/receiver and a custom S-band frequency downconverter.
# Solution Components:
## 1. Motorized Antenna Mount
- RTL-SDR: $30
- Antenna & dish parts: usually negligible (could be free depending on the sources & band type)
- Azimuth motor: $28 https://www.amazon.com/gp/product/B0FMY17QRT/ref=ewc_pr_img_3?smid=AVTJBJ76BDD27&psc=1
- Elevation motor: $37 https://www.amazon.com/dp/B0C69W2QP7/ref=sspa_dk_detail_1?pd_rd_i=B0C69RSJNT&pd_rd_w=dJt1j&content-id=amzn1.sym.386c274b-4bfe-4421-9052-a1a56db557ab&pf_rd_p=386c274b-4bfe-4421-9052-a1a56db557ab&pf_rd_r=5H73NB21EDBPJSF5WR2Y&pd_rd_wg=dDyFo&pd_rd_r=79ee8ae1-1e2f-4b6f-bd54-edc53447b320&sp_csd=d2lkZ2V0TmFtZT1zcF9kZXRhaWxfdGhlbWF0aWM&th=
- 9 DOF IMU: BNO055, $9
- Lazy Susan bearing: $15
- MCB & power management + parts: $8 + negligible
- ESP32: $8
- Mount brackets: Machine Shop
## 2. Down Converter Subsystem/RF Frontend
The RTL-SDR has a max frequency of 1.75 GHz.
In order to receive and demodulate S-band signals, we need to build a downconverter that brings 2-3.5 GHz signals into range of the RTL-SDR. The downconverter is an analog heterodyne: the RF signal from the antenna will be multiplied by a 1.5 GHz local oscillator signal using an RF mixer (see the sanity-check sketch at the end of this proposal). This submodule would require:
- RF LNA (SKY67151-396LF)
- S-band bandpass filter (BPF-AS1600-75+)
- Active RF mixer (LT5560EDD#PBF)
- PLL synthesizer (LMX2531LQ1910E/NOPB)
- Possibly an MCU to control the PLL
- Oscillator reference clock (UCE4031035LK015000-10.0M)
- IF filter (built from LC components or use a detector)
- SMA connectors
- SMD RLC components
- SMD balun, tapped transformers
There will be two boards: an LNA and filter board connected directly to the antenna to reduce loss, and the downconverter board that feeds into the RTL-SDR. Making the LNA and downconverter into separate modules also makes testing easier: even if the more complex downconverter fails, the LNA module can be saved.
# Criterion For Success:
For the motorized antenna mount, we will have succeeded if the device is relatively affordable and able to smoothly and automatically track a satellite, given streamed live TLE coordinates from a computer. We want the user to be able to connect the antenna, SDR, and filters of their choice just once, and then be able to send scheduled coordinates to start tracking a satellite at any time. The S-band downconverter will have been confirmed to work if we can receive S-band satellite communications on much lower, easily accessible frequencies.
## S-Band Satellite Options:
- Hinode Solar B: 2256 MHz
- Jason-3: 2215.92 MHz
- Blue Walker 3: 2245 MHz
- NOAA 20: 2247.5 MHz
In the future, we'd hope to have a dashboard for collected data and logs, to make it into a more automated, full ground station. We also hope to build an adjustable downconverter so that the module can downshift signals beyond 3 GHz.
# Alternatives:
## Motorized Antenna Mount
- Ant Runner Pro: $500
## S-band Down Converter
- RTL-SDR Blog Wideband LNA + Bias Tee: $28 https://a.co/d/0g0wGGSv
- Nooelec HAM It Down: $90-125 https://www.nooelec.com/store/ham-it-down.html?srsltid=AfmBOooLr50utjbiAL63G1_oEChwrt4FRbUYePs9j1fTbOP_XoPrxOto
- Sysmo S-band Cavity Filter: $80 (not always available) https://shop.sysmocom.de/S-Band-cavity-filter-2170-2300-MHz/cf2235-kt30 |
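A quick sanity check of the heterodyne plan: an ideal mixer produces sum and difference products, and the IF filter keeps only the difference, |f_RF - f_LO|. The short program below (frequencies taken from the satellite list above; the 1.75 GHz limit from the RTL-SDR spec quoted earlier) confirms each downlink lands in the RTL-SDR's range after mixing with the 1.5 GHz LO.

```c
/* Heterodyne IF sanity check (illustrative). */
#include <stdio.h>
#include <math.h>

#define F_LO_MHZ        1500.0   /* proposed local oscillator */
#define RTLSDR_MAX_MHZ  1750.0   /* RTL-SDR tuning limit */

int main(void)
{
    const struct { const char *name; double rf_mhz; } sats[] = {
        {"Hinode Solar B", 2256.0},
        {"Jason-3",        2215.92},
        {"Blue Walker 3",  2245.0},
        {"NOAA 20",        2247.5},
    };

    for (unsigned i = 0; i < sizeof sats / sizeof sats[0]; i++) {
        double if_mhz = fabs(sats[i].rf_mhz - F_LO_MHZ);  /* difference product */
        printf("%-15s RF %.2f MHz -> IF %.2f MHz (%s)\n",
               sats[i].name, sats[i].rf_mhz, if_mhz,
               if_mhz <= RTLSDR_MAX_MHZ ? "in range" : "out of range");
    }
    return 0;
}
```

For example, Hinode's 2256 MHz downlink mixes down to a 756 MHz IF, comfortably within the dongle's range.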
||||||
| 62 | AI-Nutritious Culinary Assistant |
Griffin Kelley Jackson Brown Tony Liu |
Aniket Chatterjee | proposal1.pdf |
||
| Team Members:
- Jackson Brown (jcb10)
- Kadin Shaheen (kadinas2) **Actively looking for a 3rd teammate as Kadin dropped out.**
- Tony Liu (zikunl2)
# Problem
The processed food industry has become increasingly toxic due to chemical flavor additives (12%-32% higher cancer risk [1]), yet cooking from scratch often intimidates new chefs. As a result, students and working professionals may rely on convenient but unwholesome meals. Most available 'smart cooking' tools do not provide a real-time experience that guides users through the process from raw ingredients to the finished dish, which reduces the likelihood that the user will learn. It is also inefficient to design recipes adjusted to the user's available ingredients: users waste food, and recipe creation is an expert skill that is difficult to customize. A healthy diet is important for increased productivity and long-term health, but is difficult to accomplish.
# Solution
We propose an AI-Nutritious Culinary Assistant that recognizes available ingredients and generates a personalized recipe with interactive, step-by-step guidance. Using the Meta Quest 3 as the user interface and sensor front-end, the system streams video and voice commands to an edge vision processor running an ingredient recognition pipeline. In addition to vision, the device integrates an environmental sensor module that measures ingredient weight (for portion verification) and ambient temperature (for context/safety telemetry). Finally, the appliance includes a circular rotating seasoning carousel driven by a stepper motor for proportional seasoning action and a servo-actuated gate for controlled dispensing, enabling closed-loop "dispense to target grams" assistance during cooking.
# Solution Components
## Subsystem 1 - VR Headset Sensor Platform (Software)
This subsystem uses the Meta Quest 3 as the primary user interface and sensing front-end. The headset's built-in RGB cameras capture the cooking scene for ingredient recognition, and its microphone captures the user's wake word and spoken prompts. The headset also serves as the output display, presenting step-by-step recipe instructions as an AR/VR overlay. Captured camera frames and voice transcripts/commands are transmitted to the Vision Processor subsystem for inference and planning, while the returned recipe steps and alerts are rendered back in the headset.
### Parts:
Meta Quest 3 (VR headset with RGB cameras + microphone + display)
Wireless link (Wi-Fi) between Quest and Jetson (stream frames + commands, receive instructions)
## Subsystem 2 - Vision Processor (Software)
This subsystem performs the core perception and recipe-planning computation on the edge compute unit. It receives camera frames from the Meta Quest 3 and first detects/segments candidate ingredient regions using a real-time model (YOLO for bounding boxes, or FastSAM for masks). Each candidate region is then cropped (or masked) and passed through a CLIP-style vision encoder to generate an image embedding. In parallel, the system maintains a library of text embeddings for ingredient labels (e.g., "tomato", "onion", "spinach"). By comparing image embeddings to text embeddings (cosine similarity), the processor assigns the most likely ingredient label to each detected region, enabling more flexible recognition than closed-set detection alone.
The final output is a structured ingredient list (label + confidence + location/mask), which is then provided to the LLM agent to generate step-by-step recipes and instructions that are sent back to the headset.
### Parts:
NVIDIA Jetson Nano (edge compute unit for model inference + agent logic)
Candidate region model: YOLOv8/YOLO11 (det/seg) (bounding boxes or segmentation) or FastSAM (mask proposals)
Vision-language classifier: CLIP / OpenCLIP / MobileCLIP (region embedding + text embedding matching)
LLM agent (recipe selection + instruction generation using detected ingredients)
Communication interface: Wi-Fi (Quest ↔ Jetson) for frames/instructions; optional UART/I2C/USB (ESP32 ↔ Jetson) for weight/safety telemetry
## Subsystem 3 — Environmental Sensor Subsystem (Weight + Room Temperature)
This subsystem measures ingredient mass for closed-loop dispensing and ambient temperature for environmental context (improving recipes by understanding room temperature) and essential fire-hazard checks. The ESP32 reads both sensors, filters/calibrates the data (tare + scale factor), and forwards measurements to the main controller.
### Sensors/Components
Load cell(s): 5 kg single-point load cell (e.g., TAL220B 5kg) or 4× half-bridge load cells (for a round platform)
Load cell ADC/Amplifier: HX711 24-bit load cell amplifier breakout/module (HX711)
Ambient temperature sensor: SHT31-DIS (Sensirion SHT31-DIS-B) or BME280 (Bosch BME280) over I2C
MCU interface: ESP32 reads HX711 (GPIO clock/data) + temp sensor (I2C)
## Subsystem 4 — Battery Subsystem (Rechargeable Power for Portability)
This subsystem powers the device from a rechargeable battery, supports charging via USB, and generates stable rails for logic and actuation. It provides regulated 3.3 V for the ESP32/sensors and 5-6 V for the servo/actuators while preventing brownouts during motor current spikes.
### Sensors/Components
Battery pack: 1-cell LiPo, 3.7 V (e.g., 2000-5000 mAh, JST-PH)
Battery charger: TP4056 1S Li-ion charger module (with protection variant preferred) or MCP73831 (Microchip MCP73831T)
3.3 V regulator (logic rail): Buck converter module (e.g., MP1584EN) set to 3.3 V, or an LDO if the current is small
5-6 V regulator (servo rail): Buck converter module (e.g., MP1584EN) set to 5.0-6.0 V
Power monitoring (optional but helpful): MAX17048 LiPo fuel gauge (MAX17048G+U) over I2C
Protection/robustness: power switch + fuse/polyfuse + bulk capacitors near the servo rail
## Subsystem 5 — Rotating Carousel Ring + Dispenser Subsystem (One-Piece Circular Device)
This subsystem handles seasoning and provides the "lazy Susan" mechanism: a circular rotating ring that indexes ingredient pods to a fixed dispense station above the scale. A motor rotates the ring to the selected pod; a servo opens a gate to dispense into the center bowl. The ESP32 controls indexing, homing, and dispensing, using the scale feedback to stop at a target mass (see the sketch at the end of this proposal).
### Sensors/Components
Rotation motor: NEMA 17 stepper motor (e.g., 42BYGH-class)
Stepper driver: DRV8825 or A4988 stepper driver module
Homing/index sensor: A3144 Hall-effect sensor + small neodymium magnet (defines "slot 0")
Dispense actuator: MG90S micro servo (metal gear) or SG90 (lighter duty)
Mechanical drive: GT2 timing belt + pulley set (e.g., GT2 6mm belt, 20T pulley) or friction wheel drive
Ring support: lazy-susan bearing (turntable bearing) or printed rail + small rollers/V-wheels
Dispense hardware: fixed chute + passive pod gate (flap/valve) engaged by the servo at the station
# Criterion For Success
We'd like to set the criterion for success with numerical benchmarks:
- The weight-measurement sensor should have a resolution of 1 gram over a 0 to 100 gram range.
- A fully charged battery should offer approximately 20 minutes of runtime.
- The rotating carousel dispenser should spin to the desired slot within an error of 2.
- Ingredient classification and localization accuracy >= 85% with 10 FPS real-time performance.
- Eventually, the pipeline of feeding input from the VR headset (visual image stream) and sensors (hardware) to the vision processor and returning a contextual recipe back to the VR headset should be completed.
## References:
[1]: Hasenböhler, Anaïs et al. "Intake of food additive preservatives and incidence of cancer: results from the NutriNet-Santé prospective cohort." BMJ (Clinical research ed.) vol. 392 e084917. 7 Jan. 2026, doi:10.1136/bmj-2025-084917 |
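A minimal sketch of the closed-loop "dispense to target grams" behavior from Subsystem 5, assuming hypothetical helpers (`scale_read_grams` for a tared HX711 reading, `gate_set` for the servo gate) and an illustrative early-close margin for material still in flight.

```c
/* "Dispense to target grams" loop sketch (illustrative).
 * Call dispense_poll() periodically (e.g., every 50 ms) while dispensing.
 */
#include <stdbool.h>

extern float scale_read_grams(void);   /* assumed HX711 helper (tared) */
extern void  gate_set(bool open);      /* assumed servo-gate helper */

#define EARLY_CLOSE_G 0.5f  /* close early to allow for in-flight material */

static float target_g = 0.0f;
static bool  dispensing = false;

void dispense_start(float grams)
{
    target_g = grams;
    dispensing = true;
    gate_set(true);                    /* open the pod gate at the station */
}

void dispense_poll(void)
{
    if (!dispensing) return;
    if (scale_read_grams() >= target_g - EARLY_CLOSE_G) {
        gate_set(false);               /* target reached: close the gate */
        dispensing = false;
    }
}
```

The early-close margin would be tuned per ingredient, since coarse and fine seasonings settle on the scale differently.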
||||||
| 65 | Active Postural Correction Vest |
Aparna Srinivasan Jordyn Andrews Sophia Sulkar |
Frey Zhao | Yang Zhao | proposal1.pdf |
|
| # Active Postural Correction Vest
**Team Members:**
- Aparna Srinivasan (aparnas3)
- Jordyn Andrews (jandr25)
- Sophia Sulkar (ssulkar2)
# Problem
Poor posture is an extremely common issue in modern society, especially in the workplace, where employees sit and slouch for hours on end. Long-term slouching can lead to musculoskeletal imbalances, chronic back pain, and reduced respiratory efficiency. Existing solutions are either braces (which do not require any muscular effort from the person) or simple notification devices (which buzz but do not actually enforce correction). There is a lack of active solutions that physically assist the user in regaining proper posture without either demanding constant conscious effort or doing all the work for the user with no effort on their part at all.
# Solution
We propose an Active Postural Correction Vest. Unlike passive braces, this system uses an active electromechanical feedback loop to physically retrain the user's posture, while also letting go so that good posture is maintained by the user, not just the device itself. The device consists of a wearable vest equipped with stretch sensors attached to elastics. These sensors continuously monitor how much the elastics are extended. When the system detects a "slouch" state (indicated by the stretch-sensor reading shifting away from the calibrated threshold), the central PCB triggers a high-torque servo motor mounted on the back plate. The servo reels in an elastic cabling system connected to the shoulder straps, physically pulling the user's shoulders back into proper position. Once the sensors detect that the user has returned to correct posture, the servo releases tension, allowing natural movement and self-maintained posture until the next slouch event.
In terms of safety precautions, we plan to create an assistive device that applies only modest force, so it cannot cause injury. We will also have an emergency stop button as well as an auto shut-off when the resistance level becomes too high. We will additionally filter out noise by adding a timer that only activates the motors if the person sits in a slouched position for a prolonged time (see the debounce sketch at the end of this proposal).
# Solution Components
## Subsystem 1
**Sensing and Input**
This subsystem is responsible for detecting the user's postural state by measuring the tension and force exerted by the brace straps against the body.
- Primary Sensors (Stretch Subsystem): We will use stretch sensors placed between the shoulder strap and the user's clavicle. When the user is well-postured, the straps are taut (indicated by high resistance/voltage). When slouching, the straps loosen or shift (indicated by low resistance/voltage).
- Secondary Sensor (Pressure Subsystem): We will also use pressure sensors on the front of the vest to provide a safety check that strap tension stays within a comfortable limit.
## Subsystem 2
**Mechanical Correction**
This subsystem provides the physical force required to retract the shoulders.
- Actuator: We will use a servo motor, which will be able to reel in the elastic band without being too powerful or dangerous.
- Mechanism: The servo will be mounted on a central back plate, which could be 3D printed, using a spool-and-cable mechanism to shorten the effective length of the shoulder straps.
## Subsystem 3
**Control & Power**
This subsystem processes sensor data and drives the motor.
- Microcontroller: possibly an ESP32 for wireless support
- Power Regulation: batteries, etc.
- Failsafe: kill switch/button
## Subsystem 4
**Bluetooth App**
A Bluetooth-connected app will display posture behavior over time (how often and how long the user slouches). The app would also allow adjustment of sensitivity and comfort limits, and let the user switch between training and brace modes.
# Criterion For Success
- The system shall detect a slouched posture when the stretch sensor output drops below a calibrated upright threshold for >= 30 seconds.
- Normal movements such as walking, reaching, or twisting shall not trigger motor actuation during a 10-minute movement test.
- When a slouch is detected, the servo shall retract the shoulder straps by a fixed distance (in mm) within 10-15 seconds, resulting in visible shoulder retraction.
- The servo shall fully release strap tension within 5 seconds after the stretch sensor returns above the upright threshold.
- Strap pressure shall remain below a predefined safe limit, and the system shall disable the motor immediately when the emergency stop button is pressed.
- The vest shall operate continuously for at least 4 hours on battery power while maintaining full sensing and actuation functionality. |
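A minimal sketch of the slouch-detection debounce described in the Solution and the first success criterion. The helper names (`millis`, `stretch_read`, `servo_retract`) and the calibration variable are assumptions; only the 30-second hold time comes from the proposal.

```c
/* Slouch-detection debounce sketch (illustrative).
 * Actuates only after the stretch-sensor reading has stayed below the
 * calibrated upright threshold for a sustained period, so transient
 * movements (walking, reaching) do not fire the servo.
 */
#include <stdbool.h>
#include <stdint.h>

#define SLOUCH_HOLD_MS 30000u   /* >= 30 s per the success criteria */

extern uint32_t millis(void);        /* assumed millisecond tick helper */
extern float    stretch_read(void);  /* assumed stretch-sensor helper */
extern void     servo_retract(void); /* assumed actuator helper */

static float    upright_threshold = 0.0f;  /* set during calibration */
static uint32_t below_since = 0;
static bool     below = false;

void posture_poll(void)
{
    if (stretch_read() < upright_threshold) {
        if (!below) { below = true; below_since = millis(); }
        else if (millis() - below_since >= SLOUCH_HOLD_MS)
            servo_retract();             /* sustained slouch: correct it */
    } else {
        below = false;                   /* upright again: reset timer */
    }
}
```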
||||||
| 66 | Self-playing Programmable Chromatic Harmonica |
David Zhang Robert Zhu Sean Jasin |
Wenjing Song | Yang Zhao | proposal1.pdf |
|
| # Team Members:
- Sean Jasin (sjasi3@illinois.edu)
- Robert Zhu (robertz4@illinois.edu)
- David Zhang (dzhan6@illinois.edu)
# Problem:
The harmonica is a versatile and simple instrument, yet technically difficult to play. There is demand for live-instrument background music, yet the harmonica is hard to master. Some people lack the time to practice and learn the harmonica; others may no longer be physically able to play it, or may not have access to training or musical education. Self-playing devices exist for keyboard and string instruments, but not for wind instruments. There is a need for a self-playing harmonica that can produce melodies without requiring manual lip or breath control.
# Solution:
The solution is a device that is able to play the harmonica. The self-playing harmonica consists of multiple subsystems. The power supply provides power at all required voltages for the MCU, air pumps, and electronic pneumatic valves. The harmonica-computer interface connects to both the harmonica and the MCU, and is responsible for controlling the flow of air through the harmonica; it consists of pneumatic tubes, air pumps, and electronic valves. The MCU is responsible for controlling the pumps and valves in the harmonica-computer interface, as well as taking a MIDI file and converting it into a sequence of pump and valve motions (see the note-mapping sketch at the end of this proposal). Lastly, songs are uploaded to the MCU through WiFi: we will create a website where the user can upload a MIDI file, and that file is then available to play on the device.
# Subsystems:
- Power supply
- Motor driver
- MCU
- Harmonica-computer interface
- Website for uploading MIDI files
## Power supply (located on the PCB)
The power supply must be capable of supplying 3.3V and 12V power to the device. The 3.3V supply is for the MCU, and the 12V supply is used by the pneumatic valves. We will utilize an AL-12100 12V 120W power supply that plugs into a wall outlet. We will then convert the 12V supply into the signal voltage, 3.3V, on the PCB.
## Motor driver (located on the PCB)
The motor driver will allow the ESP32 to control the DC motor, because the GPIO output of the MCU cannot provide enough power (GPIO is 3V3 at low current; the motor needs 12V). The output of the motor driver will be a 12V PWM signal.
## MCU
For the MCU, we will use an ESP32 for its WiFi capabilities. The MCU has 2 functions: mechanical control and MIDI upload. The mechanical control will take MIDI inputs and play the respective note on the harmonica. This will be controlled by several valves which will allow us to control the airflow into the harmonica.
## Harmonica-Computer Interface
Harmonica: The harmonica that we will be using for the project is a Conjurer-brand 10-hole chromatic harmonica. This harmonica was chosen for its budget-friendliness, as well as its ability to play semitones without requiring "bending", a tongue and embouchure technique for reaching semitones.
Air pump: We will use a Mini 555 Dongguang air pump to supply a continuous, adjustable airflow, with a maximum of 15 LFM or 0.53 CFM. This should allow us to play 10 notes simultaneously. The output airflow will be set by PWM duty cycle, which will allow us to control the volume of the harmonica. The static pressure requirement of the pump is inconsequential, as harmonicas do not require significant air pressure to play.
Electronic valves: The electronic valves will consist of 10 Laccimo 2V025-1/4 12V solenoid valves and one Airtac 4V110-08F 5/2 12V solenoid valve. This will allow air input and output at each hole of the harmonica, as well as switching between blowing air and drawing air.
High-Torque Servo: To operate the slider, a high-torque servo will be used. The DS3218MG has a sufficient control angle and enough torque (20 kg·cm) to operate the lever quickly and precisely.
## Website
Using the ESP32's WiFi capabilities, the MCU will host a mini server on which a user can upload MIDI files. These MIDI files can then be processed by the MCU to be played by the harmonica.
# Criterion for Success:
The success of this project will be based upon these criteria:
- Must be able to blow air into and draw air out of all holes of a chromatic harmonica.
- Must be able to achieve the full range of airflow from 100 to 400 LFM (equivalent to 0.009 CFM to 0.025 CFM given a 4mm x 10mm opening).
- Ability to engage and disengage the slide of a chromatic harmonica.
- Dynamics/volume control of each played note is accurate and successful.
- A .MIDI file is able to be uploaded to the website of the self-playing harmonica system.
- A .MIDI file is able to be transmitted to the MCU via WiFi and performed accurately by the self-playing harmonica system.
- The system must be robust enough to play for 10 minutes continuously. |
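A minimal sketch of the MIDI-note-to-valve mapping the MCU must perform. This assumes a solo-tuned chromatic harmonica in C, where each hole offers one blow note and one draw note and the slide raises either by a semitone; the table covers a single example octave and the actual Conjurer layout would need to be verified.

```c
/* MIDI-note -> hole/blow-draw/slide mapping sketch (illustrative).
 * One solo-tuned octave (MIDI 60..71) as an example; slide = +1 semitone.
 */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint8_t hole;    /* 1-based hole index */
    bool    draw;    /* false = blow (pressure), true = draw (suction) */
    bool    slide;   /* true = slide engaged (+1 semitone) */
} harp_action_t;

/* Index 0 = MIDI 60 (middle C), assuming solo tuning. */
static const harp_action_t note_map[12] = {
    {1, false, false},  /* C  : hole 1 blow          */
    {1, false, true },  /* C# : hole 1 blow + slide  */
    {1, true,  false},  /* D  : hole 1 draw          */
    {1, true,  true },  /* D# : hole 1 draw + slide  */
    {2, false, false},  /* E  : hole 2 blow          */
    {2, true,  false},  /* F  : hole 2 draw          */
    {2, true,  true },  /* F# : hole 2 draw + slide  */
    {3, false, false},  /* G  : hole 3 blow          */
    {3, false, true },  /* G# : hole 3 blow + slide  */
    {3, true,  false},  /* A  : hole 3 draw          */
    {3, true,  true },  /* A# : hole 3 draw + slide  */
    {4, true,  false},  /* B  : hole 4 draw          */
};

bool lookup_note(uint8_t midi_note, harp_action_t *out)
{
    if (midi_note < 60 || midi_note > 71) return false;  /* outside table */
    *out = note_map[midi_note - 60];
    return true;
}
```

Each returned action maps directly onto hardware: the hole index selects a solenoid valve, the draw flag drives the 5/2 direction valve, and the slide flag drives the servo.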
||||||
| 67 | Roomify: A Smart Room System |
Benjamin Chang Owen Wang Warren Lam |
Lukas Dumasius | Craig Shultz | proposal1.pdf |
|
| # _Roomify_: A Smart Room System
*Team Members:*
- Warren Lam (wklam2)
- Benjamin Chang (bchang)
- Owen Wang (owenw2)
# Problem
Room decor (LED lights, music, desk decorations, etc.) is hard to coordinate, leading to unsynchronized room vibes (e.g. slow music and flashing lights) and an excess of handheld remotes. The smart home industry is projected to grow from $127.8 billion to $537.3 billion by 2030. However, there are currently limited products and systems for smart rooms and apartments. The products that exist are developed independently, each with its own remote, controls, and settings. There is no easy way to coordinate all of these devices, creating the need for a centralized room system.
# Solution
_Roomify_ is a centralized room control system where users can easily operate devices in their room like LED lights, TV remotes, Spotify, or other remote-controlled decorations and displays. For example, users will be able to press a single button on their phone or on the Roomify circular display and turn on their TV to the cooking channel, put on yellow lights, and play jazz music, all in one click. _Roomify_ removes the need for multiple remotes and coordinates independent, IR-controlled devices.
_Roomify_ will look like a vinyl player: a hinged wooden box with a round RGB TTL TFT display driven by an ESP32-S3 board. The core functionality of the board will be controlling the round display, IR signal receiving and decoding (to store device remote codes), Wi-Fi communication (to make Spotify API calls), and omni-directional NEC-protocol IR transmission (to transmit remote codes). In addition, we will design "repeaters" that can receive and relay IR signals to increase effective range and spatial coverage.
By "copying" and storing IR remote codes, users can map buttons on their phone to IR signals that interact with devices in the room. Roomify will have an "Add Remote" mode where users can store device remote information within _Roomify_. For example, a user would add the "red" button on an LED string-light remote by aiming the remote at the _Roomify_ box. After _Roomify_ decodes the signal, users can label and save the button code data. The user could then repeat the process for other buttons on the remote. Any device that uses a NEC-protocol remote or has an API (like Spotify) will be operable with _Roomify_.
After adding all room device remotes to _Roomify_, users will be able to create quick-start presets (e.g. green lights with Christmas music and a Snoopy GIF display). When the user selects a preset, _Roomify_ will transmit the necessary IR signals in all directions (14 IR LEDs: 6 to cover the three primary axes, and 8 pointed along the center of each octant) and make Spotify API calls. Aside from using presets, users can also change individual settings, avoiding the need for multiple remotes or apps. _Roomify_ is an extensible, centralized room system that gives full control over decorations, lights, and sound, allowing students and apartment dwellers to control all of their devices and transform their room into a smart room.
# Comparison to Existing Solutions
While smart home products like Google Nest or Amazon Alexa exist, only compatible smart devices can be connected. Roomify is a much more lightweight and cheap solution designed to work with everyday products without requiring any special integration.
# Solution Components
## Subsystem 1: Power
### Function:
The power control unit will consist of an AC-DC converter that allows our unit (the main control box) to pull power from an AC wall outlet. Our device's estimated peak power consumption is around 8 Watts. We aim to have the power unit draw the 120VAC and step it down to 12VDC. From there, we will have other power converters, like an LDO, to power our MCU. The 12V line will go to the DC motor that opens and closes the box.
### Features:
In our power module, we will have an AC-DC converter with a flyback topology. This will be paired with an active PFC to control it such that we will be able to get a good power factor for our system. In addition to this, the transformer will allow us to isolate the high-voltage side from our low-voltage side and all the other parts of our circuit. After our AC-DC converter, we have an LDO to act as a linear voltage source for our MCU in order to provide clean power. Our motor will pull from the AC-DC converter.
## Subsystem 2: IR Remote Signal Receiving and Transmission
### Function:
The IR subsystem in the main control unit copies and stores device remote signals to transmit later (via app control).
### Features:
- Receive and decode IR signals when in "pairing" mode
- Store IR signal in the Roomify database
- Send omni-directional IR signal after receiving an HTTP POST request (from the web app)
- Transmit a device remote signal (hex code) to control room devices
### Implementation:
- Receiver: The IR receiver will connect to one of the GPIO ports on our MCU so the MCU can read the digital highs and lows of the IR signal. To connect it to the MCU, the receiver will be wired in series with a resistor between power and the GPIO pin.
- Transmitter: 14 IR LEDs (6 to cover the three primary axes, and 8 pointed along the center of each octant) driven by the MCU (NEC data encoding; see the timing sketch at the end of this proposal). The topology will be similar to the receiver: each IR LED will connect to a resistor in series and will turn on and off depending on the GPIO outputs from our MCU.
- Control: ESP32-S3 for Wi-Fi to receive requests from the web app
## Subsystem 3: IR Repeaters
### Function:
Extend the effective range and spatial coverage of Roomify by receiving transmitted IR commands, conditioning the signal, and re-emitting it to reach devices outside the direct line-of-sight of the main unit.
### Features:
- Reception of NEC-protocol IR signals from the main Roomify unit
- Signal amplification to restore IR carrier strength
- Bandpass filtering centered at 38 kHz to reject ambient light and noise
- Re-transmission of conditioned IR signals via high-power IR LEDs
- Enables modular and scalable coverage for larger rooms or obstructed layouts
### Implementation:
Each IR repeater will consist of an IR receiver module tuned to the 38 kHz carrier frequency, followed by an analog amplification stage to restore signal amplitude. A bandpass filter will be used to isolate the IR carrier and suppress noise from ambient lighting sources. The conditioned signal will then drive one or more IR LEDs through a transistor-based driver, re-emitting the original command with sufficient radiant intensity to propagate further into the room. The repeater operates transparently and does not decode or modify the IR protocol, minimizing latency and system complexity.
## Subsystem 4: Web Application
### Function:
The web application is the primary user interface for configuring and controlling _Roomify_.
It allows users to add remotes, label and map IR commands, create presets, and trigger synchronized room actions. The app communicates with the ESP32-S3 over Wi-Fi using lightweight HTTP requests.
### Features:
- Users place _Roomify_ in IR-learning mode via the web app. When an IR signal is received and decoded, the user labels and saves the command, such as "Power", "Red", or "Brightness Up."
- Create presets that bundle multiple actions, including IR transmissions, Spotify API calls, and display updates.
- Manually control individual IR or API actions without presets.
### Implementation:
- Frontend: Browser-based UI (Next.js, Tailwind CSS, ShadCN UI)
- Backend: REST server (FastAPI, Postgres)
- External Integration: Spotify Web API for music control
## Subsystem 5: Control Box ("Wooden Vinyl Player")
### Function:
The control box is the housing unit for the round TTL display screen and ESP32-S3. The box will be constructed from wood and painted to look like a hinged vintage wooden vinyl player. The round TTL display screen will have a spinning display to look like a spinning record.
### Features:
- Wooden finish
- Motor-controlled hinges to open and close the box
- Round display screen with spinning display (to look like a vinyl player)
### Implementation:
- Box will be constructed from wood (laser cut)
- The hinges will be controlled through DC brushed motors, which will spin a gear that slowly opens and closes the box.
- RGB-666 interface standard driven by the ESP32-S3
# Criterion For Success
- ### IR Learning and Storage
Successfully decode and store at least 10 unique NEC-protocol IR commands from external remotes. Test: Verify that learned codes match signals sent from the original remote using a second IR receiver.
- ### Omnidirectional IR Control
Control at least two distinct IR-based devices from multiple orientations within a standard dorm room. Test: Activate each device using Roomify without moving the unit or the device.
- ### Web Application Functionality
The web app can reliably trigger IR commands and Spotify API actions over Wi-Fi with latency < 1 s. Test: Measure response time between user action in the web app and device execution.
- ### Preset Execution
Presets must execute multiple actions (IR commands + API calls + display updates) simultaneously. Test: Verify synchronized execution of all actions using multiple devices and a test Spotify account.
- ### Hardware Stability
The system operates continuously for at least 2 hours without resets, Wi-Fi drops, or overheating. Test: Run a continuous preset cycle or manual control session while monitoring temperature, power draw, and connectivity.
- ### Standalone Operation
All functionality works without original remotes or external apps. Test: Operate devices and execute presets after disconnecting original remotes and external control devices.
- ### Safety and Power Compliance
The device operates within safe current limits and maintains stable voltages. Test: Measure voltage and current under typical operation with a benchtop power supply. |
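The NEC transmission referenced in Subsystem 2 has a well-defined frame structure, sketched below. The carrier-gating helpers (`ir_mark`, `ir_space`) are assumed wrappers around a 38 kHz PWM output; the timings are the standard NEC values.

```c
/* NEC frame timing sketch (illustrative).
 * One NEC frame: a 9 ms leading mark, a 4.5 ms space, then 32 bits
 * (address, ~address, command, ~command), LSB first. Each bit is a
 * ~562 us mark followed by a ~562 us space for '0' or a ~1687 us space
 * for '1', plus a final stop mark.
 */
#include <stdint.h>

extern void ir_mark(uint32_t us);   /* 38 kHz carrier ON  for us microseconds */
extern void ir_space(uint32_t us);  /* carrier OFF for us microseconds */

void nec_send(uint8_t address, uint8_t command)
{
    uint32_t frame = (uint32_t)address
                   | ((uint32_t)(uint8_t)~address << 8)
                   | ((uint32_t)command << 16)
                   | ((uint32_t)(uint8_t)~command << 24);

    ir_mark(9000);                  /* leading burst */
    ir_space(4500);

    for (int i = 0; i < 32; i++) {  /* LSB first */
        ir_mark(562);
        ir_space((frame >> i) & 1u ? 1687 : 562);
    }
    ir_mark(562);                   /* final stop bit */
}
```

Decoding in "Add Remote" mode is the mirror image: measure mark/space durations on the receiver GPIO and classify each gap as a 0 or 1.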
||||||
| 68 | Insole Pressure Sensing System for Running |
Aarush Sivanesan Joseph Casino Matthew Weng |
Xiaodong Ye | Craig Shultz | proposal1.pdf |
|
| Members:
Joseph Casino (jcasino2)
Aarush Sivanesan (aarush2)
Matthew Weng (mw87)
# Problem
Runners often develop injuries or inefficient running form due to high impact forces, poor foot-strike mechanics (heel vs midfoot), asymmetrical loading, or inconsistent cadence. Most runners do not have an easy way to measure how their foot actually loads the ground over time, since gait labs and force-sensing soles are expensive and geared towards physical therapy, research, or professional athletics. Existing consumer wearables estimate cadence using wrist/hip motion, but do not directly measure foot-ground pressure/impact. There is a need for a low-profile, shoe-integrated system that can quantify foot impact and pressure distribution during real runs while remaining comfortable, lightweight, and accessible to everyday runners.
# Solution
We propose a thin-film pressure sensor insole system for running shoes that measures the force applied by the foot to the ground throughout each stride. A flexible sensor array embedded on top of the shoe foam (or placed under the insole) will capture pressure through the foot's main contact points (forefoot, heel, and midfoot). A small electronics module will attach to the shoe heel or tongue and contain the MCU, battery, and Bluetooth modules. The MCU will sample the pressure sensors, detect foot-strike events (see the sketch at the end of this proposal), and compute basic metrics such as step count, cadence, contact time, and estimated distance (using cadence/stride-length calibration and optional IMU/GPS data). Data will be streamed over Bluetooth Low Energy (BLE) to a phone for visualization, logging, and further analysis.
# Solution Components
**Subsystem 1: Thin-Film Pressure Sensor Insole Array**
This subsystem senses foot pressure at key regions of the shoe to capture impact patterns and pressure distribution during stance. The sensor insole would fit either on top of or beneath the foam insole of the shoe.
Components:
- Thin-film force sensors (multiple locations): Interlink Electronics FSR 402
- Flexible interconnect/cabling: FFC/FPC cable (0.5 mm pitch) (generic)
- Connector (board-side): Molex 503480-0490 (4-pos FFC/FPC connector) (size can be adjusted based on channel count)
**Subsystem 2: Analog Front-End + ADC Data Acquisition**
This subsystem converts each sensor's analog output into digital data the MCU can read. To sample all the sensors on the foot, we cycle between them with a MUX. We then filter and amplify each sensor signal through the op-amp, and the result is digitized by the ADC.
Components:
- 16-bit ADC: MCP3425A0T-E/CH
- Analog multiplexer: CD74HC4067SM96
- Op-amp: TLV9062IDR
**Subsystem 3: Microcontroller + BLE Wireless Telemetry**
This subsystem houses our MCU, which will control sampling, collect data, timestamp it, and transmit results via BLE.
Components:
- MCU module: ESP32-C3-WROOM-02
- Programming/debug interface: Tag-Connect TC2030-IDC
**Subsystem 4: Optional Motion Sensing (IMU)**
This extra subsystem provides accelerometer/gyro data to gather speed data, estimate and improve stride length, and improve cadence robustness when the pressure signals are noisy.
Components:
- 6-axis IMU: ST LSM6DSOXTR or equivalent
**Subsystem 5: Power Management + Charging**
This subsystem powers the in-shoe electronics safely and supports rechargeable operation if applicable. The design regulates battery voltage to stable rails for the MCU and sensors.
We have a wide range of batteries that we would like to work with initially, to weigh the pros and cons of each.
Components:
Battery options:
- 3.7V Li-Po (300-500 mAh)
- 3V coin battery
- AAA alkaline battery
- Charger IC for Li-Po: MCP73831T-2ACI/OT
- 3.3V regulator: MCP1700T-3302E/TT
**Subsystem 6: Phone Interface / Data Visualization**
This subsystem provides the wireless interface between the device and a smartphone or website, which displays metrics to the runner and logs sessions. Initial versions can use a simple BLE GATT service viewed in a standard BLE app; a custom website or phone UI can be added if time permits.
Components:
- BLE GATT profile (firmware-defined)
- Prototype viewer: nRF Connect app or alternative
# Criterion For Success
- Efficiency: The system shall sample plantar pressure sensor data at a minimum rate of 100 Hz and transmit the data over Bluetooth Low Energy with no more than 5% packet loss during continuous operation.
- Accuracy: The system shall detect foot-strike events and report running cadence with an accuracy of ±3 BPM compared to a stopwatch or smartwatch reference over a controlled running trial.
- Continuity/Longevity: The device shall operate continuously for at least 1 hour on battery power while performing active sensing and BLE data streaming. |
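A minimal sketch of the foot-strike detection and cadence computation referenced in the Solution, assuming the 100 Hz sample rate from the success criteria; the ADC thresholds are placeholders, and the hysteresis gap keeps sensor noise around a single threshold from producing double-counted strikes.

```c
/* Foot-strike detection & cadence sketch (illustrative).
 * Call once per 10 ms sample with the heel-region ADC value.
 */
#include <stdbool.h>
#include <stdint.h>

#define SAMPLE_HZ   100
#define PRESS_ON   1200   /* ADC level that marks ground contact (placeholder) */
#define PRESS_OFF   800   /* lower release level for hysteresis (placeholder)  */

static bool     in_contact = false;
static uint32_t samples_since_strike = 0;
static float    cadence_spm = 0.0f;  /* strike rate of the instrumented foot */

void strike_update(uint16_t heel_adc)
{
    samples_since_strike++;

    if (!in_contact && heel_adc > PRESS_ON) {
        in_contact = true;                       /* new foot strike */
        if (samples_since_strike > 0)
            cadence_spm = 60.0f * SAMPLE_HZ / (float)samples_since_strike;
        samples_since_strike = 0;
    } else if (in_contact && heel_adc < PRESS_OFF) {
        in_contact = false;                      /* toe-off / release */
    }
}
```

Since one insole sees only one foot, total running cadence would be roughly twice this per-foot rate (or fused with IMU data, as Subsystem 4 suggests).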
||||||
| 69 | Paint Color and Gloss Classification Device |
Charis Wang James Lee Victoria Lee |
Chihun Song | Arne Fliflet | proposal1.pdf |
|
| # Paint / Sheen Analysis Device
# Team Members:
- James Lee (jl212)
- Victoria Lee (vlee33)
- Charis Wang (cwang274)
# Problem
Homeowners, renters, and especially college students frequently face the challenge of matching existing wall paint and texture for touch-ups or repairs, often without access to the original paint can. While it is possible to peel a physical chip off the wall to scan it, that is an inconvenient process. Mobile apps exist, but they rely on smartphone cameras, which use auto white balance and are heavily influenced by ambient lighting. These current solutions also do not account for sheen (such as matte vs eggshell), meaning that even the best color match can look off once applied. The result is wasted time and materials and a poor color match.
# Solution
We propose a non-destructive "Paint/Surface Analysis Device" that accurately identifies both wall color and sheen without removing a physical paint chip. Our device utilizes a controlled lighting environment and a spectral color sensor to determine the precise color composition (hex code) of the wall. To address gloss, the device integrates a secondary computer vision subsystem utilizing "raking light" (low-angle side lighting). This illumination technique reveals the paint finish (e.g., gloss vs. semi-gloss).
## Subsystem 1: Microcontroller and Processing
Coordinates sensor data acquisition, executes matching algorithms, and manages system timing. It converts spectral data into a standard color space; from there, we match the color to a color database stored in memory (see the Delta-E sketch at the end of this proposal).
Components: STM32F7 Series Microcontroller (high-performance, with DCMI for camera support)
## Subsystem 2: Sheen Analysis
We intend to shine an LED at a 60-degree angle and measure how much light bounces off. If there is a lot of bounce, the surface is considered glossy; if there is little bounce, the surface is considered matte.
Components: Low-angle "raking light" LED array, AS7341 11-Channel Spectral Sensor, calibrated neutral-white LED, photodiode
## Subsystem 3: Spectral Sensing
Measures the absolute color composition of the sample under calibrated internal lighting.
Components: AS7341 11-Channel Spectral Sensor, calibrated neutral-white LED
## Subsystem 4: User Interface
Displays the identified paint brand, color name, and recommended applicator type.
Components: 2.8" TFT LCD Display, rotary encoder for menu navigation
## Subsystem 5: Power Management
Regulates external power for sensitive analog sensors and high-current LED subsystems.
Components: 12V DC wall adapter, buck converters (5V), and low-noise LDO regulators (3.3V)
## Subsystem 6: Enclosure
Blocks outside light and fixes the spectral sensor's position/angle for reproducible results.
Components: Cardboard box with fixed cutouts for reproducible measurements
# Criterion for Success
- Color Accuracy: Achieve a color match with a Delta-E < 3.0 across multiple measurements, which represents a commercially acceptable match for consumer-grade applications (reference: "How Is Color Measured? Calculating Delta E", ALPOLIC®).
- Sheen Classification: Correctly distinguish between "Gloss," "Semi-Gloss," and "Flat" with 90% accuracy.
- Ambient Isolation: Maintain consistent color readings regardless of external room lighting conditions. |
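The Delta-E metric in the color-accuracy criterion has a standard closed form; the CIE76 variant is simply the Euclidean distance in CIELAB space. The sketch below assumes the conversion from AS7341 spectral channels to L*a*b* happens upstream, and the database-scan helper is illustrative.

```c
/* Delta-E (CIE76) color-difference sketch (illustrative). */
#include <math.h>

typedef struct { float L, a, b; } lab_t;

float delta_e76(lab_t x, lab_t y)
{
    float dL = x.L - y.L, da = x.a - y.a, db = x.b - y.b;
    return sqrtf(dL * dL + da * da + db * db);  /* Euclidean distance in Lab */
}

/* Example: scan the paint database for the closest match; a winning
 * distance below 3.0 meets the stated color-accuracy criterion. */
int best_match(const lab_t *db, int n, lab_t measured)
{
    int best = 0;
    float best_de = delta_e76(db[0], measured);
    for (int i = 1; i < n; i++) {
        float de = delta_e76(db[i], measured);
        if (de < best_de) { best_de = de; best = i; }
    }
    return best;   /* index of the closest paint color */
}
```

More perceptually uniform variants (CIEDE2000) exist, but CIE76 is the simplest baseline consistent with the Delta-E < 3.0 target.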
||||||
| 70 | EduGrid Microgrid Demonstrator |
Ahmet Colak Jason Hart Srijan Kunta |
Abdullah Alawad | proposal1.docx |
||
| EduGrid Microgrid Demonstrator **Team Members:** - Jason Hart (jhart34) - Ahmet Colak (colak2) **Problem:** Students often have limited understanding of how the electric power grid is designed to stay safe and reliable, especially the protection systems (breakers, relays, fault isolation) that prevent small failures from becoming large outages. Because these concepts are not taught in a fun and accessible way, many students do not see what power engineers actually do or why the field matters, which can reduce interest in pursuing power and energy careers. Recent large-scale outages, such as the winter-storm-related failures experienced in Texas, show how grid reliability, planning, and protection directly affect daily life and public safety, highlighting the need for clear, hands-on public education. **Solution:** Our product is an interactive tabletop power grid that lets the user see the flow of power from source to load, with access to switches and controls for isolating faults, correcting the power factor, and visualizing power flow on the grid. Together, these systems will give students an intuitive and engaging tool for learning about the important role of power engineers. **Solution Components:** **Subsystem 1: Power distribution and DC/DC regulation** Provides stable regulated power rails for the entire project so the ESP32-S3, display, LEDs, and indicators operate reliably. This subsystem converts the main input supply into a clean 5 V rail and a clean 3.3 V rail, with protection and power indication. - 5 V system rail buck switching converter: TPS54202DDCR - 3.3 V logic rail low-dropout voltage regulator: TLV75533PDBVR - Input power connector: 2.1 mm DC jack PJ-102A - Power switch: SPST toggle/slide switch (example: EG1218) - Protection: reverse Schottky diode SN74S1053DWR - Power indication LED: standard LED (WP7113SURDK14V) + resistor (generic) **Subsystem 2: Power grid state machine and fault initializer** The brain of the system that stores the states of feeders, faults, measurements, display text, and breakers. Displays voltage, current, and power factor for power factor correction. - Microcontroller: ESP32-S3 - Text display showing V, I, PF, and fault state: SSD1306 - Power factor correction (buttons to add capacitance or inductance): 108-D6C40F1LFS-ND **Subsystem 3: Feeder and breaker tripping behavior** The microcontroller will initialize fault scenarios that require the user to manually trip breakers to isolate faults (see the sketch below). - Breaker toggle switch: 100SP5T1B1M1QEH - Programmable LEDs for power-flow and fault visualization: WS2812B **Subsystem 4: Mechanisms for fault selection, activation, and protection success** - Manual rotary selector: PEC11R-4220F-N0012 - Fault selection/activation light: 732-5017-ND - Buzzer for incorrect fault isolation: CUI CEM-1203 **Criterion For Success:** - Fault selection, activation, and isolation switches with clear LED/noise indication of success or failure. - Clear depiction of power flow and feeder/breaker locations on the board's face. - Product is safely enclosed with no exposed conductors or excessive heating. - Each fault state is selectable and operational. |
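A minimal sketch of how the Subsystem 2 state machine and the Subsystem 3 breaker logic could fit together on the ESP32-S3; the feeder count and data layout are illustrative assumptions, not the final design:

```c
/* Minimal sketch of the fault/breaker interaction across Subsystems 2-4.
 * The feeder count and structure are illustrative assumptions. */
#include <stdbool.h>

#define NUM_FEEDERS 4

typedef enum { GRID_NORMAL, GRID_FAULTED, GRID_ISOLATED } GridState;

typedef struct {
    GridState state;
    int  faulted_feeder;                 /* -1 when no fault is active */
    bool breaker_open[NUM_FEEDERS];
} Grid;

/* Rotary selector + activation button inject a fault on one feeder. */
void inject_fault(Grid *g, int feeder)
{
    g->state = GRID_FAULTED;
    g->faulted_feeder = feeder;          /* WS2812B LEDs flash here */
}

/* Called when the user flips a breaker switch. Returns true when the
 * fault is correctly isolated (success indication); a trip on a healthy
 * feeder returns false and sounds the buzzer. */
bool on_breaker_trip(Grid *g, int feeder)
{
    g->breaker_open[feeder] = true;
    if (g->state == GRID_FAULTED && feeder == g->faulted_feeder) {
        g->state = GRID_ISOLATED;
        g->faulted_feeder = -1;
        return true;
    }
    return false;
}
```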
||||||
| 71 | E-Bike Theft Detection System |
John Paul Hanley Kacper Bakun Paul Harris |
Yulei Shen | Craig Shultz | other1.pdf |
|
| Student 1: Kacper Bakun (kbakun2) Student 2: John Paul Hanley (jhanley5) Student 3: Paul Harris (pharr6) Problem: Bicycle theft is a problem in large cities and small neighborhoods alike, resulting in financial losses for bike-share companies and decreased serviceability for their users. Companies such as Lyft have had multiple occurrences of their Divvy bikes being stolen by "persistent rattling, shaking, or even brute force" methods. These theft attempts exploit the limitations of mechanical locking systems, which have no real-time monitoring or theft deterrence once tampering begins. The article linked below includes a video in which excessive shaking and attempts to dislodge Divvy bikes were successful. While companies try to improve their mechanical locking systems, theft strategies will always change and adapt after new designs are released. Thieves look to exploit these systems late at night, when there is no public supervision and no alarm system to alert the public. https://www.nbcchicago.com/news/local/divvy-bike-theft-video/176532/ Solution: To address the limitations of mechanical locking systems and the fact that thieves target these e-bikes at night, we will implement a system embedded on the bike that can monitor shaking, rattling, and brute force. The system consists of a custom PCB containing a low-power microcontroller, motion and vibration sensors, and an electronic alarm interface. The microcontroller continuously monitors sensor data to detect abnormal vibration patterns associated with rattling, excessive shaking, or brute-force tampering. Using a Finite State Machine (FSM), the system classifies behavior into normal usage, suspicious activity, and confirmed theft attempts. When activity exceeds predefined thresholds over a set time window, the system escalates its response by triggering a loud electronic alarm to deter the thief and alert nearby pedestrians. Alarm timing and reset conditions are managed by a clocking system implemented in firmware to ensure consistent and predictable operation. Subsystem 1: Tamper Sensing + Event Detection This subsystem is responsible for detecting motion patterns that indicate the bike is being tampered with or moved while it is parked and armed. The design will use an accelerometer and gyroscope (IMU), such as the MPU-6050, to monitor vibration, shaking, lifting, and rotation. The IMU allows the system to detect: - Shaking/vibration (repeated rapid acceleration changes typical of someone yanking the bike/lock) - Tilt/lift/rotation (gravity direction changes when the bike is lifted or the frame angle changes) To improve reliability and reduce false alarms (wind, small bumps, people brushing past), the algorithm will evaluate motion over short time windows rather than triggering on a single spike. An optional vibration/impact sensor can be added as a secondary confirmation source, but the IMU will be the primary sensing method. We will also use a digital low-pass filter to block unnecessary background movement, preventing false alarms and unreadable data values. Components: - MPU-6050 Inertial Measurement Unit (IMU) - Digital Low-Pass Filter (DLPF) - Possible additional feature: use a short-time window + RMS (filters out random bumps) - instead of triggering on a single spike, compute RMS energy over a window Subsystem 2: Control + Finite State Machine (FSM) - This decides whether motion is normal or a theft attempt and controls escalation behavior. We will implement an FSM with set thresholds that decide whether a reading from the accelerometer is safe, suspicious, or alarming (see the sketch below). Components: - 1 low-power microcontroller (ESP32 / STM32 / nRF52 / ATmega328P) - Firmware timer/clocking for consistent alarm timing
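A minimal sketch of this FSM, combining the short-window RMS idea from Subsystem 1 with the safe/suspicious/alarming thresholds; all constants are placeholders to be tuned during testing against the 2-second latency target:

```c
/* Minimal sketch of the escalation FSM with a short-window RMS check.
 * All constants are placeholders to be tuned in bench testing. */
#include <math.h>

#define WIN              64     /* accelerometer samples per RMS window */
#define SUS_THRESH       1.2f   /* g: window RMS considered suspicious */
#define ALARM_THRESH     2.0f   /* g: window RMS considered theft-like */
#define ESCALATE_WINDOWS 3      /* consecutive hot windows before alarm */

typedef enum { ARMED, SUSPICIOUS, ALARM } State;

float window_rms(const float *accel_mag, int n)
{
    float sum = 0.0f;
    for (int i = 0; i < n; i++)
        sum += accel_mag[i] * accel_mag[i];
    return sqrtf(sum / (float)n);
}

/* Call once per completed window; hot_windows persists across calls. */
State step(State s, float rms, int *hot_windows)
{
    switch (s) {
    case ARMED:
        *hot_windows = 0;
        return (rms > SUS_THRESH) ? SUSPICIOUS : ARMED;
    case SUSPICIOUS:
        if (rms > ALARM_THRESH)    (*hot_windows)++;
        else if (rms < SUS_THRESH) return ARMED;
        return (*hot_windows >= ESCALATE_WINDOWS) ? ALARM : SUSPICIOUS;
    case ALARM:
    default:
        return ALARM;   /* siren stays on until reset conditions are met */
    }
}
```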
Subsystem 3: Alarm + Public Deterrence - This subsystem makes the theft attempt obvious and unpleasant. It produces the physical response when theft is detected: a high-decibel alarm driven through a transistor or MOSFET driver so the microcontroller can control it safely. The response will be designed to trigger quickly and be loud enough to deter theft and attract attention. Components: - 1 alarm siren reaching 75 dB at 1 meter Subsystem 4: Testing and Validation Setup - This subsystem validates system performance through bench and field testing. Bench tests will involve controlled shaking and lifting to verify detection timing and alarm activation. Field testing will include parking the bike in realistic environments to ensure the system reliably detects theft attempts while minimizing false alarms from normal disturbances. Criterion for Success The Smart Bike Theft Detection System will be considered successful if it meets the following performance criteria during bench and field testing: Tamper Detection Accuracy: - The system must correctly distinguish between normal environmental motion and theft-like tampering with an accuracy of at least 90% over 40 test trials. - Normal motion trials include light bumps, wind-induced movement, and brief contact from pedestrians. - Tamper trials include sustained shaking, repeated rattling, lifting, and rotation of the bike frame. - During 30 minutes of continuous normal parking conditions, the system must trigger no more than one false alarm. This ensures the system is practical for real-world deployment without frequent nuisance alerts. Detection Latency: - For sustained theft-like activity, the system must transition from the armed state to the alarm state within 2 seconds of the tampering event beginning. Alarm Effectiveness: - When a confirmed theft attempt is detected, the alarm subsystem must produce a response that is clearly noticeable to nearby pedestrians: - The device must produce a minimum sound pressure level of 75 dB measured at a distance of 1 meter. FSM Reliability and Recovery: - The Finite State Machine must correctly transition between idle, suspicious, alarm, and reset states without software crashes or undefined behavior over 10 consecutive alarm cycles, returning to the idle state after reset conditions are met. |
||||||
| 72 | Single-Phase AC Power Analyzer |
Isaac Herink Jeffrey Pohlman Joseph Kim |
Eric Tang | Arne Fliflet | other1.pdf |
|
| Team Members: - Isaac Herink (iherink2) - Jeffrey Pohlman (jpohl3) - Joseph Kim (joseph51) # Problem Basic voltage and current measurements do not provide insight into how power is actually being consumed by an AC load. Relevant quantities such as real power and power factor require time-synchronized measurements of voltage and current, which are typically only available from commercial power analyzers. These commercial analyzers are expensive and unnecessary for small-scale laboratory or educational purposes. # Solution Design and build a microcontroller-based, single-phase power quality analyzer that measures voltage and current supplied to a load using isolated sensing circuits. The microcontroller will sample both signals simultaneously and compute RMS values, real power, and power factor in real time. Measurement data will be transmitted to a computer over USB for display and analysis. Example use cases include comparing real power and power factor across common loads (incandescent lamp vs. fan motor vs. phone charger), measuring load startup behavior, and identifying inefficient or abnormal load behavior in educational lab experiments. It provides students with hands-on exposure to AC power measurements without needing expensive commercial equipment. The final system will provide a low-cost, embedded tool for monitoring and analyzing AC power behavior in laboratory and educational environments. # Solution Components ## Subsystem 1 - Power Path (Outlet -> Analyzer -> Load) This subsystem will provide a safe way to place the analyzer in line with the load without the analyzer acting as the load. The load current will flow through internal wiring (with optional fuse protection), and the analyzer measures current using a CT. This subsystem ensures the analyzer itself does not significantly affect load current/voltage. It also provides a simple connecting interface between the outlet, analyzer, and load. Components: Inlet/outlet wiring power cord (McMaster-Carr 71535K42), receptacle (McMaster-Carr 1333N53), fuse (Littelfuse 0217005.MXP), fuse holder (Littelfuse 01550900Z). ## Subsystem 2 - Voltage Sensing Provides an isolated low-voltage representation of the line voltage. The transformer secondary is routed to the PCB for conditioning. Components: AC voltage transformer (120 VAC to 6-12 VAC), HQRP TR038 or equivalent. ## Subsystem 3 - Current Sensing Provides an isolated current measurement of the load. Components: Split-core CT, 5 A to 5 mA (B0G1M449JN) - we may use a CT with a larger secondary current. Voltage and current sensing are isolated with a VT and CT to prevent direct electrical connection between mains and the MCU. ## Subsystem 4 - Analog Signal Conditioning Converts VT/CT signals into clean, bounded voltages that the MCU can sample accurately. This subsystem performs: - Voltage scaling: a resistor divider scales the VT secondary down to a target amplitude compatible with the ADC. - Current-to-voltage conversion: a burden resistor translates the CT secondary waveform into a proportional voltage waveform (for ADC input). - Input protection: series resistors and clamp diodes to limit fault voltages and protect ADC ports. - Filtering: RC low-pass filters to reduce high-frequency noise and prevent aliasing. This subsystem ensures that the MCU receives waveforms that accurately represent line current/voltage. ## Subsystem 5 - Board Power The PCB will be powered from USB 5V (or an external 5V source). A 3.3V regulator supplies the MCU.
Components: Voltage regulator (Diodes Inc AP2112K-3.3TRG1) ## Subsystem 6 - Bias Voltage Generation Both the voltage and current waveforms will be shifted (biased) to sit within the ADC input range, since the ADC cannot measure negative voltages. The PCB will supply a reference voltage of roughly 1.65V (Vmid = 1.65V) from the 3.3V rail using a resistor divider and decoupling capacitor. The conditioned waveforms are then centered around Vmid to remain within the 0-3.3V ADC range. ## Subsystem 7 - Embedded Processing (MCU) A microcontroller will sample the voltage and current channels at a fixed sample rate. The firmware will remove DC offsets, apply any needed calibration factors, and compute: - RMS voltage/current - Real power from the average of the instantaneous product v[n]·i[n] - Apparent power, reactive power, and power factor (see the sketch below) Components: MCU (STMicroelectronics STM32F303CCT6 (LQFP-48)), SWD programming header (Samtec FTSH-105-01-F-DV-K). ## Subsystem 8 - Communication and Display This subsystem will present the computed values on a PC using USB serial (via a USB-UART bridge). A PC-side program (Python or equivalent) will display Vrms, Irms, P, and PF over time. Components: USB-UART bridge (CP2102N), USB connector (GCT USB4085-GF-A). ## Enclosure We will design and 3D print an enclosure to contain the different subsystems. The enclosure will be self-contained and require only AC power and a USB connection. # Criterion For Success - Voltage and current waveforms are sampled at a fixed rate - The device measures voltage and current simultaneously - The device computes RMS voltage/current, real power, reactive power, and power factor - Measurements are displayed on a PC in real time - RMS voltage is measured within ±5% of a commercial analyzer for a resistive load - RMS current is measured within ±10% for at least one load in the 0–5 A range - Real and reactive power are computed within ±10% of a commercial analyzer for a resistive load - Power factor is reported within ±0.10 and correctly distinguishes resistive (PF ~ 1) from inductive loads (PF < 1) - The device is in a self-contained enclosure |
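A minimal sketch of the Subsystem 7 computations over one buffer of synchronized samples, assuming DC offsets (Vmid) have already been removed and calibration to volts/amps applied; for accurate results the buffer should span an integer number of line cycles:

```c
/* Minimal sketch of the per-buffer power math. Assumes v[] and i[] are
 * simultaneous, calibrated samples with the Vmid bias already removed. */
#include <math.h>

typedef struct { float vrms, irms, p, s, q, pf; } PowerReading;

PowerReading compute_power(const float *v, const float *i, int n)
{
    float sum_v2 = 0.0f, sum_i2 = 0.0f, sum_vi = 0.0f;
    for (int k = 0; k < n; k++) {
        sum_v2 += v[k] * v[k];
        sum_i2 += i[k] * i[k];
        sum_vi += v[k] * i[k];           /* instantaneous power */
    }
    PowerReading r;
    r.vrms = sqrtf(sum_v2 / (float)n);
    r.irms = sqrtf(sum_i2 / (float)n);
    r.p    = sum_vi / (float)n;          /* real power: average of v*i */
    r.s    = r.vrms * r.irms;            /* apparent power */
    r.q    = sqrtf(fmaxf(r.s * r.s - r.p * r.p, 0.0f));   /* reactive */
    r.pf   = (r.s > 0.0f) ? r.p / r.s : 0.0f;
    return r;
}
```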
||||||
| 73 | Circle of Life: Automated Desktop Aquaponics System |
Aishwarya Manoj Anjali Aravindhan Estela Medrano Gutierrez |
Manvi Jha | Joohyung Kim | photo1.jpg proposal1.pdf |
|
| # Circle of Life: Automated Desktop Aquaponics System # Team Members: - Aishwarya Manoj (am133) - Anjali Aravindhan (anjalia2) - Estela Medrano (estelam2) # Problem Urban living and limited indoor space make it difficult for individuals to grow fresh produce sustainably. Aquaponic systems offer an efficient solution by combining fish cultivation and plant growth in a closed-loop ecosystem, but existing systems require frequent manual monitoring and maintenance. Current desktop-scale aquaponics kits often lack intelligent control features and are cost-prohibitive for individual users. # Solution This project proposes the design and construction of a small desktop smart aquaponics system integrating automated environmental and fluid control. The system consists of a compact fish tank and plant grow bed forming a closed-loop water circulation path. An electronically controlled pump circulates water between the tank and grow bed, while a motorized dispensing mechanism provides automated fish feeding. A programmable grow-light module delivers controlled lighting cycles for plant growth. Embedded sensors monitor key system conditions such as water flow, pH level, and water temperature. A microcontroller schedules feeding and lighting and processes sensor data. Depending on budget and difficulty, we may add more or fewer capabilities. # Solution Components ## Subsystem 1: Fish Feeder Subsystem A simple automated fish feeder will be implemented using an SG90 servo motor (linked below) operating between two angular positions, one away from the fish tank and another toward the fish tank for dispensing food. A custom 3D-printed food container will be mechanically coupled to the servo shaft using screws and will include a small outlet opening that allows food to dispense when the container is rotated downwards. The servo motor will be controlled via PWM signals generated by a microcontroller, which will also serve as the controller for the other subsystems (the feeding/lighting scheduling is sketched after Subsystem 2). https://www.digikey.com/short/0r42n3vv ## Subsystem 2: Lighting Subsystem The lighting subsystem serves as the artificial light source for plants in our desktop aquaponics system. Its purpose is to make sure plants get the correct amount and intensity of light per day to simulate growth under natural sunlight. The lighting subsystem will use alternating blue and red LEDs to simulate sunlight and promote photosynthesis. We plan on using the Royal Blue and Deep Red ASMW-LL00-NKM0E LEDs from DigiKey (linked below this section) connected to an LED driver, which will both control the lighting system and step down the PCB's input voltage to the 3.08V needed by the lights. This LED driver will be on the same PCB as the microcontroller system and will use the same microcontroller. It will be mounted above the plants and the aquarium portion of the aquaponics system and shine down upon the plants. https://www.digikey.com/short/zcmqv3wj
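A minimal sketch of how the shared microcontroller could schedule the feeder (Subsystem 1) and the grow lights (Subsystem 2); the hours and the time source (RTC vs. tick counter) are illustrative assumptions, not the final design:

```c
/* Minimal sketch of the daily feeding/lighting scheduler on the shared
 * MCU. Hours and the time source are illustrative placeholders. */
#include <stdbool.h>
#include <stdint.h>

#define FEED_HOUR      9     /* dispense food once per day at 09:00 */
#define LIGHT_ON_HOUR  8
#define LIGHT_OFF_HOUR 20    /* roughly 12 h of grow light per day */

static uint16_t last_feed_day = 0xFFFF;

/* Call periodically with the current day-of-year, hour, and minute.
 * servo_dispense() rotates the SG90 to the dump position and back;
 * led_set() drives the LED driver's enable line. */
void run_schedule(uint16_t day, uint8_t hour, uint8_t minute,
                  void (*servo_dispense)(void), void (*led_set)(bool on))
{
    /* Feed exactly once per day at the scheduled time. */
    if (hour == FEED_HOUR && minute == 0 && last_feed_day != day) {
        servo_dispense();
        last_feed_day = day;
    }
    /* Lighting follows a fixed daily window. */
    led_set(hour >= LIGHT_ON_HOUR && hour < LIGHT_OFF_HOUR);
}
```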
## Subsystem 3: Water Quality Subsystem This subsystem monitors water quality through various sensors and allows us to ensure that the aquaponic system is working properly. The three main components of this subsystem are the water flow sensor (314150005 from DigiKey), the water pH sensor (SEN0161 from DigiKey), and the water temperature sensor (waterproof 1-Wire DS18B20 digital temperature sensor). As we have a water pump pushing water up through our aquaponic system and bringing water to the plants above the fish tank, we need to measure the flow rate of the water to ensure that this component is operating effectively. The water flow sensor will thus measure the flow of water and ensure that the water is pumping effectively up the system. Alongside this, we will have a pH sensor to measure the pH of the water, which is critical for the health of both the fish and the plants. We aim to have betta fish in the tank, which require a pH of roughly 6.8 to 7.5, and we will have plants that require that slightly acidic to neutral pH range as well. If the pH is outside of this range, an LED indicator (sourced from our component kit) will let the user know it is time to change the water. Finally, we will have a sensor measuring the temperature of the water to ensure that it is habitable for the fish; betta fish require a temperature of 76 to 85 degrees Fahrenheit. The temperature sensor will measure the temperature of the water in the tank, and if it is too high or too low, an LED indicator will be triggered, allowing the user to change the water or the temperature of their room. https://www.digikey.com/short/r7f95h7j https://www.adafruit.com/product/381 https://www.digikey.com/short/v9btn5d9 ## Subsystem 4: Power Subsystem The power subsystem's main goal is to provide power to the other subsystems in this project, including but not limited to the lighting, fish feeder, water quality, and pump subsystems. To start, the project will need an AC-to-DC 12V converter, linked below. The voltages of the components will be the following: - The microcontroller unit, either an STM32- or ESP32-class IC (for example, an STM32G4/F4 or an ESP32-S3), requires 3.3 volts. - The LEDs require a voltage of 3.08V at 200mA. - The water flow rate sensor requires an input voltage of at least 5V. - The pH sensor also requires an input voltage of 5V. - The water temperature sensor's power is between 3.0V and 5.5V. - The circulation pump ranges from 6V to 18V. - The water feeder servo uses 5V. Thus, we will use a voltage regulator to step the voltage down from 12V to 5V for all of the systems, and an LDO to step it down to 3V for the LEDs. https://www.digikey.com/short/dbfnfn48 https://www.digikey.com/short/bf0mqfjh https://www.amazon.com/12V-Power-Supply-Adapter-Transformer/dp/B07DMFN2YN ## Subsystem 5: Water Pump Subsystem The water pump will be in series with the water flow sensor, sending water from the fish tank up to the plants. We will be using a FIT0563 circulation pump, which is waterproof, and depending on the water flow sensor's outputs, we will control the speed of the circulation pump by PWM-modulating the supply voltage using a MOSFET (see the sketch below). https://www.digikey.com/short/cr79t182
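A minimal sketch of the pump-speed adjustment described above, treating it as a simple proportional loop from measured flow to PWM duty; the target flow and gain are placeholders:

```c
/* Minimal sketch of the pump-speed loop: nudge the MOSFET PWM duty toward
 * a target flow rate using the flow sensor reading. Constants are
 * placeholders. */
#include <stdint.h>

#define TARGET_FLOW_LPM 1.5f   /* desired flow in liters/min (placeholder) */
#define KP_FLOW         8.0f   /* proportional gain (placeholder) */

/* flow_lpm: flow computed from the sensor's pulse rate.
 * Returns the new PWM duty (0..255) for the pump MOSFET gate. */
uint8_t pump_duty(float flow_lpm, uint8_t current_duty)
{
    float error = TARGET_FLOW_LPM - flow_lpm;
    int duty = (int)current_duty + (int)(KP_FLOW * error);
    if (duty < 0)   duty = 0;     /* clamp to the valid PWM range */
    if (duty > 255) duty = 255;
    return (uint8_t)duty;
}
```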
# Criterion For Success - The pH sensor accurately measures the pH - The temperature sensor accurately measures the temperature of the water - The flow rate sensor accurately measures the flow rate - Water is able to flow in a circular loop from the aquarium to the plants and vice versa - The automated fish feeder is able to supply food into the fish tank once every 24 hours - The lighting is mounted above the plants and has a daily lighting schedule that changes based on the time of day (24-hour schedule implemented) - The LED indicators accurately indicate temperature that is too cold or too warm, and water that has a pH too high or too low (unsafe for fish) |
||||||
| 74 | Sensor Integrated Putter for Putting Stroke Analysis |
Kyle Smith Mithesh Ballae Ganesh Nathan Hwang |
Abdullah Alawad | Craig Shultz | proposal1.pdf |
|
| Team Members: - Nathan Hwang (njhwang2) - Kyle Smith (kfsmith2) - Mithesh Ballae (mballae2) # Problem Putting requires a high degree of repeatability and consistency, where small variations in club speed, face angle, and tempo can significantly affect the direction and distance of the ball. That is why the average golfer loses the most strokes in a round on the green. Anyone can hit a ball far; not everyone can consistently get the ball in the hole within par. It is very difficult for golfers to perceive small mechanical differences during practice, which makes it challenging to understand why certain putts drop while others miss. Since putting depends on subtle physical motions, players often rely on subjective feel rather than measurable evidence, which can lead to inconsistency and difficulty identifying the root cause of errors. # Solution We want to design a sensor-integrated putter that measures and analyzes putting stroke mechanics in real time to provide feedback on individual strokes, consistency, and improvement. By embedding sensing and processing hardware into a putter, the system will focus on capturing physical characteristics of the stroke, regardless of whether the putt goes in or not. A motion sensor will track club speed, acceleration and tempo, face orientation, and rotational behavior of the club throughout the stroke. An impact-sensing mechanism will then determine the characteristics of the contact between the ball and club. Additional optical sensing at the club face will watch ball behavior, most notably ball wobble. These measurements will be processed to quantify repeatability and identify stroke variability. The hardware system will connect with a mobile app via Bluetooth that will show putt metrics, store sessions, visualize trends, and provide users with potential sources of error in their game. # Solution Components ## Subsystem 1 - Club Motion Sensing This subsystem measures the dynamics of the stroke mechanics. An IMU mounted on the main PCB, located on the back end of the clubhead, is well suited here because it integrates an accelerometer, gyroscope, and magnetometer, which will provide measurements for club speed, acceleration and deceleration, tempo, and rotational motion throughout the stroke. By continuously tracking the putter's movement before, during, and after impact, this subsystem will capture the physical characteristics of the user's stroke so that we can later provide numbers and feedback, with the ultimate goal of correction, repeatability, and consistency. Component (for breadboarding): Adafruit 9-DOF Orientation IMU Fusion Breakout - BNO085 Component (for PCB): CEVA BNO085 IMU ACCEL/GYRO/MAG I2C 32BIT ## Subsystem 2 - Impact Sensing This subsystem is meant to detect the time and characteristics of contact between the putter and ball. It will use vibration sensors positioned near the heel and toe of the club head. These sensors generate a voltage signal in response to ball impact, allowing us to determine the timestamp and relative location of contact across the club face (heel, center, or toe), as sketched below. The goal when putting is to hit the ball at the center, as this will allow the ball to go straight; any deviation provides a source of error. The impact timestamp will allow for synchronization of motion data from the IMU as well. This subsystem enables evaluation of impact consistency and quality. Components: 2x piezo film strip: TE Connectivity Measurement Specialties LDT0-028K (1002794)
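A minimal sketch of the heel/center/toe classification from the two piezo peaks, assuming peak amplitudes are captured in a short window around the impact instant; the center band is a placeholder to be calibrated:

```c
/* Minimal sketch of heel/center/toe classification from the two piezo
 * strips. The center band is a placeholder to calibrate against real
 * impacts. */
typedef enum { IMPACT_HEEL, IMPACT_CENTER, IMPACT_TOE } ImpactZone;

#define CENTER_BAND 0.15f   /* |normalized difference| below this = center */

ImpactZone classify_impact(float heel_peak, float toe_peak)
{
    float total = heel_peak + toe_peak;
    if (total <= 0.0f)
        return IMPACT_CENTER;                       /* no usable signal */
    float balance = (toe_peak - heel_peak) / total; /* -1 .. +1 */
    if (balance >  CENTER_BAND) return IMPACT_TOE;
    if (balance < -CENTER_BAND) return IMPACT_HEEL;
    return IMPACT_CENTER;
}
```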
## Subsystem 3 - Ball Behavior Sensing We will track distance as well as dimple position utilizing a camera module and corresponding infrared LEDs, which will allow us to observe the rotation and wobble of the ball. Our plan is to drill a small hole in the center of the club face so that the camera can be positioned safely in the back end of the head looking toward the face. Our balls will have a line across the equator, which will allow the camera module to measure the change in roll over time. A good putt goes end over end, so it is important to measure roll stability and wobble, and to catch any off-axis rotation. This, along with our other measurements, will capture all important aspects of the putt. Components: IR LEDs: Vishay TSAL6400; OV2640 camera module ## Subsystem 4 - MCU and Bluetooth Communication This subsystem will be the central processing and control unit of the broader system. Our chosen microcontroller has built-in Bluetooth capability and can interface with our camera module. It is also able to collect data from the IMU through I2C and read analog inputs from the piezo impact sensors. Here, we will take in all of our signals, process them, compute stroke and ball mechanics, and then send the results to our application over Bluetooth. Components: ESP32-S3 microcontroller ## Subsystem 5 - Phone App Piggybacking off of our Bluetooth connection, we will create a phone app that gathers data from our back end and structures it in an interactive, user-friendly format. Using a database, the phone app will not only allow live feedback from the last putt but also let the user examine trends and observe their growth over time. This makes it so a player can try different approaches to improving their putting consistency, as well as learn how to make different kinds of putts. Components: Phone ## Subsystem 6 - Battery and Power We will need to power our microcontroller so that we can gather and process data as well as send it to our app component. To power our system, we will use a LiPo battery, likely around 4V and approximately 200 mAh, which we will regulate down to 3.3V. We will have to allocate space in our physical putter design for this, so the smaller the battery, the better. Components: 3.3 V regulator, LiPo battery ## Subsystem 7 - Club Mounting/Club To mount our PCB/battery and other components, we will need to create a machined fixture that mounts flush to a mallet-head putter. We chose a mallet head for several reasons: first, mallet putters generally offer more control and are the most commonly used design on the PGA Tour, and second, a mallet head offers more real estate for component mounting. Ideally, we would make an aesthetically pleasing housing for our components that fits flush on top of the putter, but we could also mount the battery components near the grip if that helps with balance. Components: Machine shop molded attachment, putter # Criterion For Success In order for our project to be successful, there are several things we need to get right. First, and most importantly, we need to ensure that the weight and balance of the putter with all components mounted is spot on.
A good putter has a fluid feel, and by adding these components we will be altering the pre-existing balance, so it is crucial we account for maintaining this in our design so the putter itself feels good. To test this, we can go to a local golf course and approach golfers warming up. As stated before, golf is a feel game. We will ask whether the putter feels well balanced and whether they would be satisfied using the putter, with the attachment, as their everyday club. To get a consistent result, we can sample 20 different golfers and look for 90% satisfaction. Second, it is crucial we are able to get accurate readings of club acceleration and club angle for each putting stroke. These readings are the most fundamental to understanding the way a player putts the ball, so it is paramount that they are consistently accurate so the player can use this information to improve. To satisfy this, we need to see sensor readings every stroke, with accurate measurements and accurate data processing; we can verify this with fixed putting and sensing calibration tests. Further, using the club face sensors, one must be able to determine the area of impact, which alongside angle and speed completes what a player needs to understand how they are putting. We also need to establish a reliable Bluetooth connection to pair with our app. A successful project will, without fail, display each stroke's data on the app and record it in the database for historical logging. For testing, we can make sure that every putt sends data over Bluetooth to our app, demonstrating a strong connection. We must also make sure that our data is consistent between hardware and software. Lastly, our database should store everything being transmitted from the hardware. While it is not yet apparent that it will be completely necessary, we would ideally have a functioning IR/camera module that tracks wobble and ball roll. This could allow us to better understand the exact link between ball trajectory and our swing factors; however, our sensor positioning should be sufficient without it. Regardless, it would be a worthwhile addition. |
||||||
| 75 | RailRider (Reaction-Wheel Uni-Wheel Inspection Robot with Vision) |
James Recera Varun Sharma Zhanshuo Zhang |
Abdullah Alawad | Joohyung Kim | proposal1.pdf |
|
| # Title RailRider (Reaction-Wheel Uni-Wheel Inspection Robot with Vision) # Team Members - Zhanshuo Zhang (zz128) - Varun Sharma (varuns10) - James Recera (jrecera2) # Problem Many important inspection locations are essentially "thin-structure environments" where a normal robot is awkward or unsafe: narrow beams, cable trays, ladder racks, pipe-rack edges, and long tunnel-like spaces. These places show up in real settings like data centers (overhead cable management and airflow issues), HVAC/ventilation runs (debris, blockages, moisture), industrial facilities (leaks/labels/fasteners), and even "space-inspired" scenarios like a lunar/Martian tunnel scout, where falling off an edge or getting stuck could mean mission failure. A typical RC car is too wide and needs a turning radius, and a drone is loud, has short battery life, and is often not allowed indoors (plus it struggles in confined, dusty, GPS-denied spaces). We want a compact platform that can move on narrow structures and produce useful inspection results instead of only streaming video. # Solution We will build a reaction-wheel-stabilized uni-wheel robot that can travel along a narrow beam/rail while carrying a camera-based perception payload. The core idea is that the robot can balance itself with a tiny contact footprint, so it can ride on structures that would make a 4-wheel robot fall off. Our robot will support two main modes: 1. Teleop + safety override: the user drives it, but the robot prevents unsafe motions near edges/obstacles. 2. Assisted inspection: the robot follows a beam/rail direction using perception cues and logs simple "inspection events" (marker reached, obstacle detected, possible defect). The perception system should detect drop-offs or obstacles early enough to stop, and flag inspection targets for different missions (possibly using AI for some object detection and segmentation). # Solution Components **Subsystem 1: Main Control PCB** Custom PCB that handles power distribution, motor control, sensor IO, and communication. - MCU: ESP32-S3 (WiFi/BLE + good performance for control + telemetry) - IMU: BNO055 (easier) or ICM-20948 (harder but common) for orientation feedback - Reaction wheel motor driver: 3-phase BLDC driver stage (selected based on motor choice; goal is closed-loop reaction wheel torque control) - Wheel motor driver: brushed DC driver or BLDC driver depending on drivetrain - Power rails: battery → buck converters (example: 2S LiPo to regulated 5V + 3.3V) - Current/voltage sensing: measure battery + motor current for safety cutoff / stall detection - Connectors: I2C header for ToF/thermal, UART/USB header for perception module, debug header, kill switch - Safety + reliability: heartbeat/watchdog input from the CV module so the MCU can default to "safe stop" if perception freezes **Subsystem 2: Balancing and Moving** This subsystem keeps the robot upright and moves it forward. - Reaction wheel assembly: BLDC motor + flywheel disk (hub + added rim mass for inertia) - Drive wheel: geared motor or hub motor depending on size and torque needs - Control loop: IMU → controller → reaction wheel torque (and wheel torque as needed), sketched below
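A minimal sketch of this balance loop as a PD controller from IMU tilt to reaction-wheel torque; the gains and torque limit are placeholders that would come from tuning on the real rig:

```c
/* Minimal sketch of the balance loop: PD control from IMU tilt to
 * reaction-wheel torque. All constants are tuning placeholders. */

#define KP_BAL     4.0f   /* N*m per radian of tilt (placeholder) */
#define KD_BAL     0.4f   /* N*m per rad/s of tilt rate (placeholder) */
#define TORQUE_MAX 0.5f   /* N*m, motor/driver limit (placeholder) */

/* tilt: roll angle from the IMU fusion output (rad, 0 = upright).
 * tilt_rate: gyro rate about the roll axis (rad/s).
 * Returns the reaction-wheel torque command, later converted into a
 * BLDC current command by the driver stage. */
float balance_torque(float tilt, float tilt_rate)
{
    float torque = -(KP_BAL * tilt + KD_BAL * tilt_rate);
    if (torque >  TORQUE_MAX) torque =  TORQUE_MAX;
    if (torque < -TORQUE_MAX) torque = -TORQUE_MAX;
    return torque;
}
```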
**Subsystem 3: CV Perception Payload (camera or optional radar)** - Forward-facing camera for obstacle detection and for logging markers/labels during missions - Onboard lighting (LED ring/light bar) for dark environments - Multizone ToF mounted as a hard safety override for sudden gaps or obstacles - (Optional) Thermal array (e.g., MLX90640) to flag hotspots - Possibly replace the camera with radar **Subsystem 4: Communications with Users** - WiFi video/telemetry stream (ESP32 + CV module stream) - Simple laptop dashboard: live video, distance/edge warnings, "event log" (marker reached, obstacle, stop triggered) **Subsystem 5: Mechanical Structure** - Protective cage so that a tip-over doesn't destroy the camera - Modular mounting plate for sensors # Criteria For Success 1. Balance: robot can self-balance in place for ≥ 60 seconds without external support. 2. Narrow-structure traversal: robot can traverse a 2 m rail/beam (target width chosen for our demo rig) at slow speed without falling off. 3. Safety override: perception-based override stops the robot before a drop/obstacle with ≤ 20 cm stopping distance at test speed. 4. Inspection output: robot produces a structured event log with 3 event types, for example: "marker reached / tag detected"; "obstacle detected / stop triggered"; "possible anomaly (debris/loose cable) flagged"; (optional) "thermal hotspot flagged." # References (for future project implementation) (1) "The Wheelbot: A Jumping Reaction Wheel Unicycle," IEEE Robotics and Automation Letters, vol. 7, no. 4, pp. 9683–9690, Oct. 2022. (2) https://github.com/peng-zhihui/ONE-Robot |
||||||
| 76 | FPGA-based stock market data feed handler |
Neel Ghosal Saksham Jairath Sara Sehgal |
Gerasimos Gerogiannis | proposal1.pdf |
||
| Problem Electronic trading systems must process extremely high-rate streams of market data with very low and predictable latency. Software-based feed handlers running on general-purpose CPUs often suffer from nondeterministic delays due to instruction overhead, caching, and operating system scheduling, which can lead to delayed reactions or dropped messages under heavy load. Hardware-based solutions are therefore commonly used in industry to achieve deterministic performance. This project addresses the need for a low-latency, reliable market data processing system. Solution We propose an FPGA-based hardware market data feed handler that processes packetized trading updates in real time. The system ingests a continuous byte stream, parses market data messages, maintains per-symbol trading state (top-of-book), and generates low-latency trigger events when predefined conditions are met. By implementing the data path entirely in hardware, the design provides deterministic latency and high throughput, demonstrating the advantages of hardware acceleration for latency-critical trading workloads. Solution Components Subsystem 1: Input Interface and Buffering Receives incoming data via the FPGA’s UART interface and buffers it using a FIFO implemented in block RAM to prevent data loss during bursts. Components: FPGA UART, BRAM FIFO. Subsystem 2: Packet Parser A finite state machine parses incoming bytes into structured market data messages based on a predefined packet format. Components: SystemVerilog FSM. Subsystem 3: Trading State Manager Maintains best bid and best ask prices per symbol and updates state based on incoming messages. Components: BRAM, comparison logic. Subsystem 4: Trigger Engine and Output Evaluates trading conditions and outputs trigger events and system metrics via UART. Components: Arithmetic/comparison logic, UART transmitter. Criterion For Success Correctly parses all valid packets and updates top-of-book state to match a software reference model. Sustains a target message throughput without data loss. Produces deterministic, bounded latency from packet reception to trigger generation. Detects and reports sequence gaps or malformed packets. Meets FPGA resource and timing constraints. |
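Since the success criteria call for matching a software reference model, here is a minimal sketch of what that model could look like in C; the message layout is an illustrative stand-in for the project's predefined packet format:

```c
/* Minimal sketch of a software reference model for the top-of-book
 * state. The message layout below is an illustrative placeholder. */
#include <stdint.h>

#define NUM_SYMBOLS 16

enum { SIDE_BID = 0, SIDE_ASK = 1 };

typedef struct {
    uint8_t  symbol;            /* index into the per-symbol table */
    uint8_t  side;              /* SIDE_BID or SIDE_ASK */
    uint32_t price;             /* fixed-point price */
} MarketMsg;

typedef struct { uint32_t best_bid, best_ask; } TopOfBook;

static TopOfBook book[NUM_SYMBOLS];   /* mirrors the FPGA's BRAM state */

/* Apply one parsed message; returns 1 if the top of book improved,
 * which is one condition a trigger engine might evaluate. */
int update_book(const MarketMsg *m)
{
    TopOfBook *b = &book[m->symbol % NUM_SYMBOLS];
    if (m->side == SIDE_BID && m->price > b->best_bid) {
        b->best_bid = m->price;
        return 1;
    }
    if (m->side == SIDE_ASK &&
        (b->best_ask == 0 || m->price < b->best_ask)) {
        b->best_ask = m->price;
        return 1;
    }
    return 0;
}
```

The hardware Trading State Manager would be checked by replaying the same byte stream through both this model and the FPGA and comparing the resulting per-symbol state.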
||||||
| 78 | Wearable Basketball Jumpshot Mechanics Analyzer |
Aiden Zack Arjun Vyas Tanmay Nair |
Mingrui Liu | Joohyung Kim | proposal1.pdf |
|
| Tanmay Nair (netid: tanmayn2) Arjun Vyas (netid: avyas9) Aiden Zack (netid: aidenrz2) Problem: A basketball jumpshot involves a chain of body mechanics that requires coordination from the feet to the wrist to achieve a goal that is much more complicated than it looks: making the ball go in the hoop. Players across the world exhibit different mechanics in their jumpshot, so when they reach out to coaches for help, they tend to hear subjective advice that is often inconsistent, difficult to quantify, and, more importantly, hard to fit into the player's perspective. Existing solutions analyze shot trajectory and do not tap into the shooter's biomechanics. In essence, players lack reliable, repeatable data to identify points of improvement in their mechanics, address consistency issues, and record progress. Solution: This project will implement a system dedicated to quantifying a user's basketball jumpshot by analyzing the consistency and timing of the "kinetic chain". It starts with sensor nodes worn on the user's shooting wrist and the knee of the user's shooting side. These nodes will hold an IMU, a microcontroller, and wireless (or wired, TBD) communication. The knee sensor will focus on lower-body motion and take measures related to shot success, such as the timing of the jump and how much the knee flexes to determine the dip. The wrist sensor will look at the upper-body mechanics that finish out the shot, like the angular velocity and release timing of the wrist, along with how high it sits for the follow-through. These two data nodes will be synchronized in our system, their data extracted for timing measures like jump-to-release (see the sketch below), and then processed for evaluation and feedback. This will focus on the repeatability and timing of the user's body mechanics, providing user-oriented assistance that adjusts as the user progresses. Solution Components: PCB, Li-Po battery pack, USB-C charging port, SPI/I2C communication bus, IMU sensors (3-axis accelerometer + 3-axis gyroscope), FPGA*, PCB chest harness. *The FPGA may not be needed if we use IMU sensors with FSYNC/SYNC capability to trigger sampling on the same external edge. Subsystem 1: IMU Sensor on the Knee This IMU sensor will be worn on the user's shooting leg, right above the knee, along the side of the femur. The important metrics to capture here are displacement and angular rotation with respect to the zero calibration (standing straight up). This IMU will be synchronized with the IMU on the wrist and sampled over an SPI/I2C bus that carries data from the sensors to the PCB, where it will be processed and sent to the FPGA via USB/UART. Subsystem 2: IMU Sensor on the Wrist This IMU sensor will be positioned on the back of the user's thumb to accurately record the motion of the wrist. The key metrics we are looking for are angular velocity, physical displacement, and the timing between each of the three phases of the shot. The angular velocity can be determined from the physical start and end positions of the wrist motion during phase 3 of the shot, divided by the elapsed time. The three phases of the shot are: 1) raising the ball (shoulder movement), 2) pushing the ball forward (chest movement + elbow extension), and 3) releasing the ball (wrist movement).
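A minimal sketch of the jump-to-release timing extraction mentioned above, assuming the knee and wrist IMU streams share one synchronized sample clock; the sample rate and event thresholds are illustrative placeholders:

```c
/* Minimal sketch of the jump-to-release timing measure over synchronized
 * knee/wrist IMU buffers. Constants are illustrative placeholders. */
#include <stdint.h>

#define SAMPLE_RATE_HZ   200       /* shared IMU sample rate (assumed) */
#define JUMP_ACCEL_G     1.8f      /* knee vertical accel at jump onset */
#define RELEASE_RATE_DPS 400.0f    /* wrist angular rate marking release */

/* Scan synchronized buffers; returns jump-to-release time in ms,
 * or -1 if either event is not found in the window. */
int jump_to_release_ms(const float *knee_az, const float *wrist_gyro, int n)
{
    int jump_idx = -1, release_idx = -1;
    for (int i = 0; i < n; i++) {
        if (jump_idx < 0 && knee_az[i] > JUMP_ACCEL_G) {
            jump_idx = i;                      /* lower body initiates */
        } else if (jump_idx >= 0 && wrist_gyro[i] > RELEASE_RATE_DPS) {
            release_idx = i;                   /* wrist snap = release */
            break;
        }
    }
    if (jump_idx < 0 || release_idx < 0)
        return -1;
    return (release_idx - jump_idx) * 1000 / SAMPLE_RATE_HZ;
}
```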
Subsystem 3: PCB The PCB is the centerpiece of all external component communications. The two IMU sensors will communicate with the MCU on the PCB via I2C/SPI. The MCU will then send the data to the computer over USB/UART. The data will be interpreted in Python, providing closed-loop feedback to the user. Criterion For Success: - Wrist and knee IMU sensors accurately record motion data - Communication buses accurately read the data off the IMU sensors with low latency and send it to the MCU on the PCB - The MCU can communicate with the computer via USB/UART - We can see the telemetry data, observe significant changes (edge detection/triggers) in behavior via measurements, and quantify these changes in order to provide feedback to the user based on their input. |
||||||
| 79 | Universal Gesture Interface |
Connor Michalec Kenobi Carpenter Kobe Duda |
Lukas Dumasius | Yang Zhao | proposal1.pdf |
|
| # Universal Gesture Interface Team members: - Kenobi Carpenter (joseph48) - Kobe Duda (ksduda2) - Connor Michalec (connor15) # Problem Since the invention of the personal computer, the interface between humans and computers has remained relatively unchanged. The keyboard and mouse layout has proven highly effective for the majority of use cases, but its mostly-discrete nature greatly restricts the possible ways humans can interact with computer applications. Much of the way we interact with the world requires expressive, free-flowing modes of interaction. Activities like playing an instrument, martial arts, dancing, or sculpting often can't simply be described by a series of inputs in the correct order at the correct time. They take place in continuous, 3D space, yet the most complex expression we typically get with a computer is the 2D plane that mouse movement provides. Some solutions exist to address this need, the most notable being VR headsets. However, these headsets tend to be expensive and bulky, and they lead to feelings of fatigue and nausea for many users. As it currently stands, there is no low-cost, low-fatigue, desk-friendly input device that allows continuous spatial interaction on PC. Such a device would open new possibilities for how users interface with programs while also improving accessibility for those lacking fine motor skills, e.g., limited finger dexterity. # Solution We propose a wearable gesture-detecting glove that allows users to interface with computer applications through hand and finger motions. This glove will have a wired USB connection (though wireless would be ideal, we are omitting it for the sake of scope) with two interfaces. The first interface is an HID-compliant mouse, allowing the glove to be used generally for regular applications, while the second interface streams live 3D movement data to be interpreted by specialized applications. This dual-interface approach allows the glove to stand on its own as a general-purpose tool while also granting the extensibility to be leveraged to its full potential by specialized applications. The sensor layout will consist of a 9-DOF IMU placed on the back of the hand for broad movements, three flex sensors on the index finger, middle finger, and thumb, and three force-sensitive resistors (FSRs) on the fingertips to detect touch. Finally, the device will feature on-board DSP on the MCU. It will process raw sensor data and interpret a predefined set of gestures, then send those interpreted actions as discrete inputs to the target USB device. # Solution Components ## Subsystem 1: IMU Unit Components: ICM-20948 This 9-axis IMU will be used for detecting broad-phase translational and rotational movements of the hand. It will be mounted to the back of the palm, and raw sensor data will be sent over SPI to the MCU for processing. ## Subsystem 2: Flex sensors Components: Adafruit Industries Short Flex/Bend Sensor We will mount three flex sensors to the thumb, index finger, and middle finger. Each will be connected to an ADC port through a voltage divider with a 50 kOhm resistor, with 0.1 uF capacitors for noise reduction. Used for detecting specific hand positions. ## Subsystem 3: Touch sensors Components: Geekcreit FSR402 Force Sensitive Resistor Three force-sensitive resistors will be attached to the tips of the thumb, index finger, and middle finger. Similar to the flex sensors, they will be wired to ADCs with voltage dividers (22 kOhm) to be read by the MCU. Used for detecting pinching, tapping, and pressing (see the sketch below).
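A minimal sketch of threshold-based detection for a few of the target gestures, assuming 12-bit ADC readings; the thresholds are placeholders that would be set during calibration for the chosen divider resistors:

```c
/* Minimal sketch of threshold-based gesture detection from the three
 * flex sensors and three fingertip FSRs. Thresholds are placeholders. */
#include <stdint.h>

#define FLEX_CLOSED 2600   /* 12-bit ADC counts indicating a bent finger */
#define FSR_PRESS   1800   /* 12-bit ADC counts indicating tip pressure */

typedef enum { G_NONE, G_HAND_OPEN, G_HAND_CLOSED, G_PINCH_THUMB_INDEX } Gesture;

typedef struct {
    uint16_t flex[3];      /* thumb, index, middle flex sensors */
    uint16_t fsr[3];       /* thumb, index, middle fingertip FSRs */
} GloveSample;

Gesture classify(const GloveSample *s)
{
    int bent = 0;
    for (int i = 0; i < 3; i++)
        if (s->flex[i] > FLEX_CLOSED)
            bent++;

    /* Thumb and index tips pressed together reads as a pinch. */
    if (s->fsr[0] > FSR_PRESS && s->fsr[1] > FSR_PRESS)
        return G_PINCH_THUMB_INDEX;
    if (bent == 3) return G_HAND_CLOSED;
    if (bent == 0) return G_HAND_OPEN;
    return G_NONE;
}
```

The full gesture set (taps, firm presses, thumbs up/down) would layer timing and IMU orientation on top of the same raw readings.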
## Subsystem 4: Microprocessor Components: STM32F405 microcontroller This processor takes as input all of the aforementioned sensor data and sends USB output. It was chosen for its DSP capabilities, as processing sensor inputs and identifying them as gestures will constitute a considerable portion of this project. Attached to the PCB will be a USB port for connecting to a computer, over which identified gestures are sent as inputs. This is also where most of our design decisions will be integrated. For example, the IMU is prone to drift, meaning we'll have to make UX decisions that mitigate its influence, e.g., only moving the mouse when a finger is down on the desk. ## Subsystem 5: Physical Frame Another important aspect of the project will be the physical design itself. In order for our project to be even moderately successful, it has to be wearable. This presents the unique challenge of designing a glove that is both comfortable and able to house the electronic components in a way that does not impede movement. ## Subsystem 6: Associated Software This is not a part of the actual device, but a testbed to demonstrate its capabilities. We will use Unreal Engine 5 to create a very basic flight simulation that allows for controlling the plane with the orientation of the user's hand. For basic testing, we will also have a barebones program that receives gesture inputs over serial and prints them to the screen. # Criterion for success - Hand movements are able to reliably move a mouse on the attached device - The following gestures/actions can be reliably detected and mirrored to the test program - Hand closed - Hand open - Light tap (index/middle/thumb) - Firm press (index/middle/thumb) - Pinching fingers (index-thumb, middle-thumb) - Thumbs up - Thumbs down - User can successfully navigate a plane in the testbed program through a basic course using hand orientation |
||||||
| 80 | Edge-AI based audio classifier |
Ahaan Joishy Kavin Manivasagam Om Dhingra |
Weijie Liang | proposal1.pdf proposal2.pdf |
||
| Problem Overview Most audio-based embedded systems today collect large amounts of raw sensor data but use only simple threshold-based logic for classification. These methods are very sensitive to noise and fail to perform accurately across varied conditions, so they fall short without external computation or cloud services. There is therefore a need for a method that can convert raw captured signals into meaningful classifications locally, under tight power and memory constraints. Solution Overview The proposed project is an Edge-AI embedded system that can classify audio signals (e.g., a clap, laugh, snap, stomp, or speech) in real time using a simple neural net. The system will use a single sensor (a MEMS I2S digital microphone) to collect audio data. The classification will result in an LED-based output, telling the user the result. The system thus eliminates the need for cloud usage/computation and demonstrates the strength of machine learning even under tight constraints. Solution Components Sensor Subsystem: A MEMS microphone with an I2S interface will be used to collect raw audio signals (e.g., clap, speech, snap). Audio will be sampled at a target rate of 16 kHz, which is sufficient and the industry standard (used in voice bots and voice recognition) for speech and common environmental sounds. We use a digital microphone because it removes the need for an analog amplifier. Processing Subsystem: We will use an STM32F411 microcontroller for this project. We chose this microcontroller because it features 512 kB Flash and 128 kB RAM, which is crucial for running the math of a neural net. Furthermore, it has built-in DSP instructions that are crucial for converting raw audio signals into MFCC features in real time. Since we're using a neural net, we also need a chip with a floating-point unit (FPU), which this microcontroller has. The signal chain (to capture signals) would be as follows: the microphone captures audio and sends it digitally over I2S to the microcontroller, which uses DMA to quickly store the data in memory. The audio frames are converted into MFCC features. These features are then fed into our neural-net model. The ML pipeline would be as follows: the obtained MFCC features are fed into our small, dense neural net for classification into predefined types (a minimal sketch of this final stage appears below). TensorFlow Lite Micro will be used to facilitate deployment on the microcontroller without an OS or internet connection. (We may also try ExecuTorch if time permits.) Model size will be kept under 20 kB to ensure real-time performance. Power Subsystem: We will use a 5V USB input to power the board. This will be stepped down to 3.3 V using an on-board voltage regulator. Decoupling capacitors and filtering components will be used to reduce electrical noise that could interfere with stable operation. Criterion for Success Our device can classify at least 3 different sound types correctly with more than 85% accuracy on the recorded test set. The target end-to-end latency (from sound to LED output) is less than 100 ms. Current drawn should be under 60 mA. Test Protocol Description: Our test set will consist of around 50 samples per class and shall be gathered from a variety of noisy and quiet environments. (We shall aim for our model to correctly classify 3 different sound types, extended to 5 types if time permits.)
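A minimal sketch of that final stage: one dense layer followed by an argmax over class scores (softmax is unnecessary when only the top class is needed). The layer sizes are placeholders, and in the real system the weights would come from the trained, quantized TensorFlow Lite Micro model:

```c
/* Minimal sketch of the classifier's final stage: one dense layer plus
 * argmax over class scores. Sizes and weights are placeholders. */
#define N_MFCC  40   /* input feature length (placeholder) */
#define N_CLASS  3   /* e.g., clap / snap / speech, extendable to 5 */

int classify(const float mfcc[N_MFCC],
             const float w[N_CLASS][N_MFCC],
             const float bias[N_CLASS])
{
    float best = -1e30f;
    int best_class = 0;
    for (int c = 0; c < N_CLASS; c++) {
        float acc = bias[c];
        for (int i = 0; i < N_MFCC; i++)
            acc += w[c][i] * mfcc[i];    /* dense-layer dot product */
        if (acc > best) { best = acc; best_class = c; }
    }
    return best_class;                   /* drives the LED output */
}
```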
Alternatives Many existing sound classification systems use cloud-based processing or rely on high-power computing platforms such as smartphones and computers. These methods require a continuous internet connection. Many other methods use threshold-based audio detection, but these cannot work accurately for different types of sounds in varying environments. Our solution differs by performing audio classification on a low-power embedded device, using a simple neural net, without external computing or complex hardware. |
||||||
| 81 | Controllable, User-Friendly 3-Phase Inverter |
Alex Chirita Johnathan Vogt Shyam Peden |
Frey Zhao | Arne Fliflet | appendix1.png appendix2.png proposal1.pdf proposal2.pdf |
|
| # Controllable, User-Friendly 3-Phase Inverter Team Members: - Johnathan Vogt (jsvogt2) - Shyam Peden (speden2) - Alex Chirita (chirita2) # Problem While normal 3-phase AC power systems operate with consistent phase differences of 120 degrees, these systems are not always perfect. There may be occasions (fault conditions) where the power system becomes unbalanced. In order to test small machines under these conditions, one might want to create controllable AC waveforms with adjustable phase angles. # Solution We will create an inverter system capable of producing three AC waveforms with controllable phase angles. Phase A will serve as the 0-degree reference phase, while the B and C phases will be controllable with respect to this reference. This will be achieved using analog control, likely via potentiometers. The PCB will function as a normal 3-phase switching inverter, with switching control handled by the microcontroller, which takes input from analog signals to control the output AC waveforms. There will be five main subsystems: an input stage with a boost topology, three MOSFET H-bridges, a user interface consisting of an encoder, a CONFIRM button, and a small OLED display, and a TI C2000 microcontroller (chosen for its ample PWM channels and high-resolution timers) whose firmware will implement both the user input control and the switching control for the bridges. # Solution Components - Microcontroller: TI C2000 F2800157SPN - Rotary encoder: PEC11R-4215F-S0024 - Button: 320E11BLK - Display: NHD-0420CW-AB3 - Power FETs: G18N20T - Low-side FET drivers: DGD0211CWT-7 - High-side FET drivers: 1EDN7550BXTSA1 - Power capacitors (for bridges) - Low-resistance shunt resistors (voltage, current sensing) - Connectors - Potentiometers for precision voltage division - Inductor cores - Copper wire - Linear voltage regulators for low-power ICs (78xx series) ## Subsystem 1: Boost Stage The main focus of this project is the inverter stage, while the input is a DC source that mimics a solar panel. We will not focus on powering it from an actual solar panel this semester, leaving that as a modification/addition for further development. The input voltage needs to be converted to a setpoint and fed into a common DC bus. This will be done with a half-bridge boost converter. A good quality of this system is the freedom to choose which variables to control. The circuit will be able to respond to quick changes in input voltage, but this semester we will use a constant DC power supply instead of a solar panel to reduce cost and complexity. Therefore, the boost converter will be run by a switching algorithm with a fixed input voltage; later, the algorithm could be changed to an MPPT algorithm if needed. ## Subsystem 2: H-bridges The bridge subsystem will contain three MOSFET H-bridges, each corresponding to one of the phases. Each of the phases will have a similar layout since the control is achieved entirely by the gate signals generated by the microcontroller. We decided to go with an H-bridge because it is a good middle ground between multi-level bridges and a half-bridge. The H-bridge will allow us to generate a good-quality sine wave when averaged out and filtered. The sine wave will be generated from -Vmax to +Vmax, with the sign decided by selecting the proper pair of MOSFETs from the four available. Each MOSFET will have a corresponding low-side or high-side gate driver, which will receive its PWM control signals from the microcontroller.
Each phase will have an LC low-pass filter at the output to reduce switching harmonics. ## Subsystem 3: User interface The user interface will consist of a display, an encoder, and a confirm button. The user will use the encoder and confirm button to navigate the interface, where the phase angles of phases B and C will be set in relation to phase A, which is static. Users can also choose an autoset mode, in which the microcontroller defaults to 120 degrees between each of the phases. ## Subsystem 4: Input control The microcontroller will poll input from the button and encoder, run a loop that checks whether the requested phase is within bounds (-180 to +180 degrees with respect to phase A), and update the appropriate variables, which the switching subsystem will use as the target phase angles. ## Subsystem 5: Switching control The control loop will check the phase angle variables and calculate the expected voltage for each phase. It will then generate the PWM for each phase that matches the needed Vrms. The output voltage, first passed through a resistor divider so that it fits within the microcontroller's maximum operating voltage, will be sampled over the wave period; Vrms will be calculated and compared to the set Vrms, after which the switching signals for each phase will be adjusted. # Criterion for success Our device will be considered successful if we can accurately produce a 3-phase output as displayed on an oscilloscope. Each phase must have the same amplitude and frequency, and the phase angles between phases must match the values set by the user. Across all 3 phases the inverter should be able to output 0.83 A of current (~100 W), and each phase should be able to handle 0.33 A (~40 W). The output RMS voltage is 120 V. |
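As an illustration of the Subsystem 5 control loop, here is a minimal sketch of the Vrms calculation and a simple proportional correction of the PWM modulation index. The divider ratio, gain, and sample handling are placeholder assumptions, not values from our design.

```python
import math

def vrms_from_samples(samples, divider_ratio):
    """Recover the RMS output voltage over one wave period from
    divided-down ADC samples (volts at the microcontroller pin)."""
    mean_square = sum((v * divider_ratio) ** 2 for v in samples) / len(samples)
    return math.sqrt(mean_square)

def adjust_modulation(m, vrms_measured, vrms_target, gain=0.05):
    """Nudge the PWM modulation index toward the Vrms setpoint, clamped to [0, 1]."""
    m += gain * (vrms_target - vrms_measured) / vrms_target
    return min(max(m, 0.0), 1.0)
```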
||||||
| 82 | Real-Time Form Correction Device |
Bhanu Kunam Ishank Pujari Sree Akkina |
Po-Jen Ko | Craig Shultz | proposal1.pdf |
|
| Team Members: - Bhanuprakash Kunam (bkunam2) - Sree Akkina (sakkina2) - Ishank Pujari (ipuja2) # Problem Free weight exercises (dumbbells/barbells) require intense focus, and users often cannot safely look at visual displays while performing complex movements. Additionally, beginners frequently suffer from poor form, such as wobbling or using momentum rather than muscle control, which is difficult to self-diagnose without a personal trainer. # Solution This project proposes the “Smart-Clip,” an IoT attachment for barbells and free weights that utilizes auditory feedback to correct form in real time. The system aims to use an ESP32 microcontroller and a 6-axis IMU to analyze the lift’s stability and trajectory. A piezoelectric buzzer provides sound cues: a “clean” tone confirms a stable, good-form repetition, while a dissonant alert signals excessive wobble or dangerous acceleration. This allows the user to maintain safe positioning while receiving instant coaching on their technique. All form data is logged to an app via Bluetooth for post-workout analysis. # Solution Components ## Data Acquisition (Sensing) The physical clip attaches securely to the dumbbell handle. Inside, a 6-axis Inertial Measurement Unit (IMU) continuously monitors the weight’s movement in 3D space. The accelerometer measures the velocity of the lift (is the user moving too fast or jerking the weight?). The gyroscope measures rotational stability (is the user’s wrist wobbling or tilting excessively?). ## On-Device Processing The ESP32 microcontroller acts as the central processing unit. Instead of sending raw, noisy data to the phone, the ESP32 performs edge computing. Noise filtering applies a smoothing filter to ignore small hand tremors. The form analysis algorithm compares the motion vector against a “gold standard” vertical path. If the vector deviates sideways (wobble) or accelerates beyond a safety threshold (momentum), the system flags the repetition as “Poor Form.” ## Feedback Generation (The Interface) The system employs a dual-loop feedback mechanism to provide both real-time coaching and long-term analytics. For immediate feedback, a passive piezoelectric buzzer emits distinct auditory cues: a sharp, high-pitched beep confirms a valid repetition with proper form, whereas a low, dissonant buzz alerts the user to instability or dangerous acceleration. In parallel, the device utilizes Bluetooth Low Energy (BLE) to transmit detailed performance metrics, such as total count, lift tempo, and stability scores, to a companion mobile application, allowing users to review their workout history and track progress over time. ## App The companion mobile application serves as the centralized hub for workout analytics, receiving data from the Smart-Clip via Bluetooth Low Energy (BLE). It records all session metrics, including repetition counts, tempo, and stability scores, locally on the device, allowing users to track and analyze their long-term progress through historical graphs and trend reports. Beyond data storage, the app acts as a control interface, enabling users to customize the clip’s sensitivity thresholds and audio feedback settings to match their specific training regimen. # Criterion For Success 1. Repetition Accuracy: Counts bicep curls with >= 90% accuracy. 2. Wobble Detection: Detects > 15 degrees of wobble in >= 9/10 trials with no false alerts on clean reps. 3. Feedback Latency: Audio feedback occurs within 200 ms of IMU-detected rep completion. 4. 
Bluetooth Integrity: 100% of completed sets transmit correctly to the app within a 2 m range. 5. Mechanical Stability: Clip rotates less than 10 degrees on the handle during a 10-rep set. 6. Power Efficiency: Operates for at least 1 hour with average current draw under 100 mA. |
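As a sketch of the form analysis algorithm described in On-Device Processing, the snippet below labels a repetition from buffered IMU samples. The threshold values are hypothetical placeholders chosen to mirror the success criteria, not tuned constants.

```python
import math

WOBBLE_DEG = 15.0    # lateral tilt beyond this flags poor form
ACCEL_LIMIT = 12.0   # m/s^2; accelerating the weight past this flags momentum

def classify_rep(accel_samples, tilt_samples_deg):
    """Label one repetition from buffered accelerometer vectors (m/s^2)
    and gyroscope-integrated tilt angles (degrees)."""
    peak_accel = max(math.sqrt(ax * ax + ay * ay + az * az)
                     for ax, ay, az in accel_samples)
    if max(abs(t) for t in tilt_samples_deg) > WOBBLE_DEG:
        return "POOR_FORM_WOBBLE"
    if peak_accel > ACCEL_LIMIT:
        return "POOR_FORM_MOMENTUM"
    return "CLEAN"
```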
||||||
| 84 | AutoServe (Automated Room Service Bot) |
Ethan Jiang Johan Martinez Nikhil Vishnoi |
Po-Jen Ko | Joohyung Kim | proposal1.pdf |
|
| **AutoServe (Automated Room Service Bot)** **Team Members:** - Nikhil Vishnoi (nikhilv4) - Ethan Jiang (ethanj4) - Johan Martinez (jmart454) **Problem** In hotels, apartments, and dormitories, guests or residents often request small amenities such as snacks, toiletries, chargers, and more. Fulfilling these requests often requires manual labor, such as a staff member traveling long distances across hallways and between floors, which is time-consuming, inefficient, and labor-intensive. While some automated delivery robots exist, commercial solutions are extremely expensive and often impractical for smaller deployments or for retrofitting existing buildings. There is a need for an affordable yet flexible indoor delivery system capable of autonomously transporting small items within multi-floor buildings while operating within existing infrastructure constraints. **Solution** We propose a small autonomous indoor delivery robot capable of transporting items between locations in a multi-floor building such as a hotel. The robot will navigate hallways autonomously and use an elevator to travel between floors, allowing it to deliver items from a central base location, such as the hotel lobby snack bar, to a specified destination room. The robot will move autonomously and be monitored wirelessly by staff through a remote UI that displays status updates on deliveries, including when the robot is waiting in the elevator to be transported by hotel staff calling the elevator from the lobby. Elevator actuation is assumed to be externally triggered by the building, as is most common in real hotels, while the robot will autonomously handle entering, riding, and exiting the elevator at the correct floor using sensor detection. This design choice reflects realistic constraints of existing building logistics while allowing the project to focus on autonomous navigation, system integration, and practicality. ESP32-based controllers located on the central unit and the navigation unit will coordinate with each other over their integrated Wi-Fi modules. We will also incorporate graph-based routes optimized for avoiding obstacles, with a proximity sensor to detect obstacles such as people and send the appropriate warnings. Items will be transported in a box with an RFID lock that can only be opened by residents, such as with a hotel keycard. This system would reduce staff workload, improve response time for guests, and demonstrate how embedded robotic platforms can automate common but repetitive manual logistics tasks. **Subsystem 1: Microcontroller Unit** - Two ESP microcontrollers will be used, one for the Central Base Unit and one for the Robot Navigation Unit. - Both microcontrollers will communicate with each other using their integrated Wi-Fi modules with transmitters and receivers. **Subsystem 2: Robot Base Unit** - Will have USB keyboard input (DS_FT312D) and a display to allow the user to input commands to the robot - Display (NHD-0216KZW-AB5) will show a UI for the user to see robot status (charge, where it thinks it is, connection) **Subsystem 3: Robot Unit** - 2 stepper motors (17ME15-1504S) to accurately move the robot over predetermined distances. 
- Will be 3D printed or machined with the machine shop - Motors will be driven using a motor driver (A4988SETTR-T) with the MCU - Display (NHD-0216KZW-AB5) for the robot unit to communicate with nearby people **Subsystem 4: Navigation and Sensing** - Position tracking sensor (TLV493DA1B6HTSA2) to track the robot's x, y, z motion data. Actual map and floor data will be hardcoded into the robot; this data will be used to make sure that the stepper motors are moving correctly. - Proximity sensors (TSSP40) for the MCU to tell when it is being blocked by an obstacle; if it is boxed in, it will communicate with the base unit for help. **Subsystem 5: Robot Charging Station** - The robot will have battery charge detection and will be able to inform the central base unit when it is low on power. - When a delivery is completed and the robot is done working, it will dock into a base charging station that will supply charging current to the lithium-ion batteries using a charge management controller (MCP73811). **Subsystem 6: Security Subsystem** - RFID-based lock system for storing delivered items that opens for residents (either from the base station or with a smart lock) **Criteria for Success** - The central base station can send commands to the navigational robot unit, which is able to use predefined data to go to programmed/stored locations accurately. - The navigational unit is able to identify its location, calculate the route to its next destination, and then move precisely towards it and stop correctly. - The robot unit can avoid obstacles and send status messages back to the central base station. - The robot unit can operate through the elevator and can tell when it is at the right floor and when to exit. |
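As a sketch of the base-to-robot coordination described in Subsystem 1, the snippet below sends one delivery command and reads back a status message. The JSON schema, port, and field names are illustrative assumptions; the actual ESP32 firmware would implement the matching side over its integrated Wi-Fi.

```python
import json
import socket

def make_delivery_command(room, floor):
    # Hypothetical message schema for base-to-robot commands
    return json.dumps({"type": "DELIVER", "room": room, "floor": floor}).encode()

def send_command(robot_ip, room, floor, port=5000):
    """Send one delivery command and wait for a status acknowledgement,
    e.g. {"state": "EN_ROUTE", "battery": 87}."""
    with socket.create_connection((robot_ip, port), timeout=5) as s:
        s.sendall(make_delivery_command(room, floor))
        return json.loads(s.recv(1024))
```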
||||||
| 85 | Modular Desktop Audio Mixer Control |
Aarushi Sharma Dylan Moon |
Yulei Shen | Craig Shultz | proposal1.pdf |
|
| # Modular Desktop Audio Mixer Control Team Members: - Aarushi Sharma (sharma93) - Dylan Moon (dylanm5) # Problem Modern desktop computers have generally revolved around a set paradigm for human-computer interaction: the keyboard and the mouse. However, analog control surfaces can be beneficial for interacting with the many analog-like controls present in a computer. For example, take software volume mixers. They are prevalent in modern computing, shipping with Windows and most Linux-based operating systems. However, they are somewhat difficult to access, usually buried behind multiple menus or requiring the user to open a separate application to adjust the volumes of individual applications. Power users might have music playing in the background, a call in the foreground, and application audio on top of that, each of which needs to be adjusted individually so that important details are heard. Furthermore, gamers frequently have full-screen applications occupying their screen, and minimizing a game to go searching for the volume mixer to turn down a loud voice call takes time. The time spent doing so could be the difference between winning and losing the current match. # Solution We propose a modular audio control panel that sits on a user's desktop and can be physically manipulated to smoothly and easily change the volumes of individual applications. Since the controls we want to target are analog (volume controls for individual applications), the control surfaces the user interacts with will be linear sliders. This allows for quick but also granular control of the volume levels of various applications on the computer. The system consists of two types of components. The first is a base station that connects to the computer, processes inputs (and possibly outputs), and controls and provides power to the other modules. The second is the slider modules. We plan to design the system so that the slider modules can be daisy-chained, allowing a user to choose the number of sliders in their setup. More details can be found in the Solution Components section. If time permits, we also want to explore and implement audio output and post-processing through the device. The inclusion of a DSP chip will process the audio output from the system, which we want to use to implement an equalizer mode that temporarily switches the application volume controls to equalizer band controls, allowing users to dynamically adjust the sound profile. One of the target user bases for this feature is gamers: adjusting their audio profile on the fly facilitates listening for other players’ footsteps by turning up the frequency range that footsteps reside in, giving them an advantage. # Solution Components ## Subsystem 1: Base Module The base module connects to the computer via USB, connects to power (if external power is required, e.g., if the USB power input is too weak), and communicates with the other modules on the daisy chain to send and receive data. For this, the base module will have a microcontroller (ESP32-S3) and bus communication transceivers (if using CAN, though we are investigating whether I2C would be sufficient). We would use pogo pins to interface with the neighboring module. If we add DSP output, we would additionally require a DSP chip (ADAU1701) to implement EQ and a push button to switch control modes. 
## Subsystem 2: Fader Modules The fader modules physically attach to the base module and to each other via magnets. The electrical connection will be made via pogo pins on the sides of each module. Each fader module communicates with the base module using the bus protocol to send fader position data and receive updates about where to set the fader position. For this, the fader modules will need a microcontroller (e.g., ATtiny1614); these MCUs can be weaker than the one in the base because they only read values from the fader and send them over the bus (or vice versa). For the motorized fader itself, Behringer sells replacement fader modules that we can repurpose (X32MOTORFADER). We will also need buttons to act as mute and solo, which mute the application and mute all other applications, respectively. ## Subsystem 3: Integration with OS Windows doesn’t expose the volume mixer controls to hardware, so a program running on the OS is required to receive values from the hardware and apply the changes. For this, we plan to receive input via USB and use Windows APIs to change the values of the mixers. If we can find a Linux-based device to test on, we would also like to support the PipeWire/PulseAudio volume mixer. # Criterion For Success With multiple fader modules connected (e.g., 2), within 0.5 seconds: bringing a fader all the way down sets the volume of the respective application to 0; bringing a fader all the way up sets the volume of the respective application to 100; bringing a fader to the halfway mark sets the volume to around 50 (± 5%). The mute and solo buttons perform their respective functions (mute the respective application, mute all other applications) when pressed. If DSP is implemented: switching to equalizer control and modifying the equalizer results in an audible frequency-band change within 2.5 seconds. |
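As a sketch of the OS-integration program for Windows, the snippet below maps incoming fader positions to per-application session volumes. It assumes the third-party pycaw and pyserial libraries, and the COM port and application names are placeholders; a real build would read the fader-to-application mapping from user configuration.

```python
import serial                           # pyserial
from pycaw.pycaw import AudioUtilities  # Windows Core Audio wrapper

def set_app_volume(process_name, level):
    """Set one application's audio session volume (0.0 to 1.0)."""
    for session in AudioUtilities.GetAllSessions():
        if session.Process and session.Process.name() == process_name:
            session.SimpleAudioVolume.SetMasterVolume(level, None)

FADER_MAP = {0: "Spotify.exe", 1: "Discord.exe"}  # placeholder targets

with serial.Serial("COM3", 115200) as port:
    while True:
        line = port.readline().decode().strip()  # e.g. "1:512" from the base module
        if ":" not in line:
            continue
        idx, raw = line.split(":")
        set_app_volume(FADER_MAP[int(idx)], int(raw) / 1023)
```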
||||||
| 86 | Any-Surface-Stylus for Computer |
Alex Camaj Ethan Forsell John Bledsoe |
Manvi Jha | Craig Shultz | proposal1.pdf |
|
| **Any-Surface-Stylus** Team Members - Ethan Forsell (ethanef2) - John Bledsoe (johndb3) - Looking for 3rd! **Problem:** Having a laptop or tablet with stylus functionality used to be a luxury but is starting to become a necessity. Tasks that require hand-drawn diagrams or figures remain just as prevalent even as paper and pencil become more rare. Those without tablets are left with few options for drawing digitally. The old-fashioned way is to print the document, draw in pen, scan the newly edited document, and re-upload it to the computer. There are also drawing pads, but they are bulky and often made for artists rather than for completing a simple task, making them pricey. It would be convenient to have a stylus that could write on any surface and work with any computer. **Solution:** We want to create a stylus that can connect to any computer and be used on almost any surface. An optical sensor will be used to track movement, similar to a laser mouse. A pressure-sensitive tip on the stylus will control the left-click signal. There will be separate buttons on the side of the stylus to control right-click and scroll inputs. It will connect to the computer through USB, which will also provide power. For the purposes of this project, much of the control hardware will be housed in an intermediate box that connects to the smaller pen through a wire for reliability. - Subsystem 1: Stylus Sensors - Tip pressure, optical sensor, right-click button, and scroll function. - Subsystem 2: PCB and microcontroller - Signal processing system and communications to the computer. **Criteria for Success:** At minimum, this project will be a success if the stylus can be plugged in and work as reliably as a standard laser mouse. The stylus needs to fit comfortably in the hand so as not to interfere with drawing. It must be able to reproduce consistent drawings without erratic movements. |
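To illustrate how the pressure-sensitive tip could drive the left-click signal without chatter, here is a minimal hysteresis sketch; both thresholds are hypothetical 10-bit ADC counts that would be tuned on real hardware.

```python
PRESS_THRESHOLD = 600    # tip force needed to assert left-click
RELEASE_THRESHOLD = 450  # lower release point adds hysteresis against chatter

def update_click(pressure, clicked):
    """Return the new left-click state given the latest tip pressure reading."""
    if not clicked and pressure > PRESS_THRESHOLD:
        return True
    if clicked and pressure < RELEASE_THRESHOLD:
        return False
    return clicked
```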
||||||
| 88 | Catching Z's |
Prineet Parhar Srikar Palani Suprathik Vinayakula |
Zhuchen Shao | proposal1.pdf |
||
| # Title **Catching Z’s** ## Team Members - Suprathik Vinayakula (sv53) - Srikar Palani (palani3) - Prineet Parhar (pparhar2) ## Problem Sudden environmental noises such as sirens, loud neighbors, barking dogs, or door slams are a primary cause of sleep fragmentation, which negatively impacts cognitive performance and long-term health. Conventional white noise machines operate continuously at a fixed volume, which can be unnecessary or ineffective against short, intermittent disturbances. There is a need for a smart bedside system that continuously monitors room acoustics and activates noise masking only when disruptive sounds occur, while remaining off during quiet periods. ## Solution We propose **Catching Z’s**, a bedside embedded system that monitors ambient audio in real time and adaptively generates masking noise in response to disruptive sound events. Using a high-sensitivity microphone and onboard signal processing, the system establishes a baseline ambient noise profile and detects sudden sound spikes based on amplitude and frequency characteristics. When a disturbance is detected, Catching Z’s smoothly fades in white, pink, or brown noise to mask the event, then gradually fades out once the environment returns to baseline. This adaptive response minimizes unnecessary noise while preventing the masking system itself from waking the user. ## Solution Components ### Acoustic Sensing Subsystem This subsystem continuously monitors the ambient sound environment. - **Microphone Module:** Electret microphone with pre-amplifier (MAX4466) to capture low-level room noise with sufficient gain and low distortion. - **Analog-to-Digital Conversion:** The ESP32-S3’s built-in ADC samples the microphone signal at 10–20 kHz for envelope and spectral analysis. ### Processing and Audio Output Subsystem This subsystem performs sound analysis and generates masking audio. - **Microcontroller:** ESP32-S3-WROOM-1, selected for dual-core operation, allowing one core to handle real-time audio sensing while the other manages audio synthesis and playback. - **Audio Amplifier / DAC:** I2S Class-D amplifier (MAX98357A) for efficient digital-to-audio conversion and speaker drive. - **Speaker:** 4 Ω, 3 W full-range speaker (50 mm) for producing broadband masking noise. ### User Interface and Power Subsystem This subsystem provides user control and power regulation. - **User Input:** Rotary encoder (PEC11R-4215F-S0024) to adjust detection sensitivity and masking intensity thresholds. - **Power:** 5 V USB-C input with on-board regulation to 3.3 V using an AMS1117-3.3 LDO regulator. - **Indicators:** Status LEDs to indicate detection events and system state. ## Criterion for Success 1. **Detection Latency:** The system shall trigger masking noise playback within **100 ms** of detecting a sound event exceeding the ambient baseline by **≥ 10 dB**. 2. **Output Capability:** The audio subsystem shall produce masking noise over a controllable range of **40 dB to 75 dB SPL** at the bedside. 3. **Continuous Operation:** The system shall operate continuously for overnight use without performance degradation or audible artifacts. ## Risks and Mitigation - **Overreaction to brief harmless sounds:** Mitigated by minimum-duration thresholds. - **Environmental variability:** Adaptive baseline recalibration during extended quiet periods. |
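As a sketch of the adaptive baseline and fade behavior described above, the snippet below runs one control step per audio frame. The EMA rate and fade increments are illustrative assumptions to be tuned during testing; only the 10 dB trigger comes from the success criteria.

```python
BASELINE_ALPHA = 0.001  # slow EMA so the baseline tracks only gradual drift
TRIGGER_DB = 10.0       # spike threshold above baseline, per the criteria

def control_step(level_db, baseline_db, masking_gain):
    """Adapt the ambient baseline, then fade masking noise in or out."""
    baseline_db += BASELINE_ALPHA * (level_db - baseline_db)
    if level_db - baseline_db >= TRIGGER_DB:
        masking_gain = min(1.0, masking_gain + 0.05)  # smooth fade-in
    else:
        masking_gain = max(0.0, masking_gain - 0.01)  # slower fade-out
    return baseline_db, masking_gain
```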
||||||
| 89 | Screentime Habit Correction Headband |
Colin Moy Jake Chen Zhiyuan Chen |
Weijie Liang | Craig Shultz | proposal1.pdf |
|
| # Screentime Habit Correction Headband Team Members: - Jake Chen (jakezc2) - Colin Moy (colincm2) - Zhiyuan Chen (zc67) # Problem As screens become ever more accessible, many people spend large amounts of time in front of a desktop computer. After some time, their posture deteriorates into slouching and they can end up sitting too close to the screen. Poor posture strains the neck and back and can be detrimental to long-term health. Additionally, when sitting too close to the screen, the eyes can become dry from not blinking enough and get strained. Even with good posture and distance, sitting at the screen for too long can strain the eyes and back. # Solution Our Screentime Habit Correction Headband will allow the user to track their habits during screen time and correct bad ones. Using a headband with two sensors, the device will track the user's posture, based on a calibration performed when the device is powered on, as well as the distance between the user and the screen they are looking at. The device will send feedback to the user via vibrations, a speaker, and an LED when the user's posture deteriorates or they get too close to the screen. In addition, the device will send feedback if the user has been sitting in front of the screen for too long. The headband will be lightweight and wired to a box that contains the bulk of the electronics as well as the device's rechargeable battery. In addition to the physical device, there will also be an app that tracks screen-time and posture data from the device over Bluetooth. # Solution Components ## Power Our power subsystem will contain a lithium-polymer battery with a TP4056 charging module. It will regulate and step down voltages using an LDO and buck converters and distribute them to all other components in the device. Lithium-polymer battery, TP4056, LDL1117-3.3 ## Sensors There are two sensors on the device. The first is the ICM-42670-P, an IMU that senses position and orientation so the MCU can send feedback when the user's posture is bad. The second is the VL53L0X time-of-flight sensor, which detects the distance from the user to a screen. This sensor will tell the MCU to send feedback when the user is too close to their screen. ICM-42670-P, VL53L0X ## Feedback The feedback subsystem consists of a vibration motor (mini ERM), a speaker (piezoelectric buzzer), and two LEDs. There are two cases in which the feedback subsystem will activate: when the user is either slouching or too close to the screen, and when the user has been sitting in front of the screen for too long. Each case will have its own dedicated LED, while both cases will activate the vibration motor and speaker. Coin vibration motor, piezoelectric buzzer, 2 LEDs ## Processing The processing subsystem consists of the microcontroller. The MCU we will use is the ESP32. It will use sensor data as well as its own timer to determine when to send feedback to the user based on time of exposure to a screen, distance to a screen, and posture. The MCU will also process the sensor data so the two cases won't interfere with each other. In addition, the MCU will have Bluetooth capability to communicate with the app and allow it to track data. ESP32-S3 ## App The app will collect data from the sensors over Bluetooth. 
The app will display the time it takes before the user’s posture deteriorates or the screen gets too close to the user, the number of times this occurs, and general statistics such as daily screen time. The app will also graph these statistics so they can be tracked over the course of a week. ## Design The headband will have a switch used to turn the device on and off, with device calibration performed when it is switched on. The headband will contain only the two sensors and the vibration motor, and will be wired to a separate box meant to be placed on the desk. The box will hold everything else: the LEDs, speaker, microcontroller, and power subsystem. # Criterion For Success ## Headband: Accurate distance measurements from headband to screen transmitted to the stationary module (±0.5 in). Lightweight (weight limit of 100 g). Alarm activates when the distance to the screen is less than 12 inches. Alarm activates when the IMU detects the user's head looking down at an angle of over 15 degrees for 3 seconds, or when the IMU detects it has been lowered by at least 2 inches for 3 seconds. Alarm activates when the user has been sitting for at least 60 minutes. Alarm turns off when the user restores posture to within ±0.5 inches of the normal position and is further than 12 inches from the screen. Fast calibration for posture (under 15 seconds). Switch can power the device off and on, as well as trigger calibration when switched on. Device operates for at least 2 hours on a single battery charge. ## App: Values displayed on the app match the values output by the microcontroller (average time from initial screen exposure to unsafe screen distance, average time from initially sitting down to bad posture). Previously recorded values can be displayed in a graph. ## Box: Battery is chargeable by USB-C |
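As a sketch of the alarm logic implied by the criteria above, the snippet below combines the instantaneous distance check with the 3-second posture hold; the function and variable names are hypothetical, and the MCU firmware would call this once per sensor update.

```python
PITCH_LIMIT_DEG = 15.0   # head-down angle that counts as slouching
HOLD_SECONDS = 3.0       # how long the bad angle must persist
MIN_DISTANCE_IN = 12.0   # closest allowed screen distance

def posture_alarm(pitch_deg, distance_in, bad_since, now):
    """Return (alarm_on, bad_since); bad_since is when bad posture began, or None."""
    too_close = distance_in < MIN_DISTANCE_IN
    if pitch_deg > PITCH_LIMIT_DEG:
        bad_since = now if bad_since is None else bad_since
        slouching = (now - bad_since) >= HOLD_SECONDS
    else:
        bad_since, slouching = None, False
    return too_close or slouching, bad_since
```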
||||||
| 91 | Automatic Bike Collision Prevention System |
Charlie Wang Nathan Zhu Rahul Nayak |
Frey Zhao | Craig Shultz | proposal1.pdf |
|
| # Automatic Bike Collision Prevention System Team Members: - Rahul Nayak (rn8) - Charlie Wang (cgwang3) - Nathan Zhu (nyzhu2) # Problem Active pathways like campus sidewalks create high-risk scenarios for cyclists and passersby due to oblivious pedestrians and distracted riding. Traditional bicycle bells are reactive rather than proactive, requiring both the cyclist to recognize a potential collision and react by ringing the bell, and pedestrians to acknowledge the bell and move out of the way. The total time needed to prevent a collision grows even longer once the cyclist's reaction time is taken into account. As such, there is a need for an automated alert system that can identify and distinguish potential collision hazards before they occur. # Solution We will create a handlebar-mounted safety system using three mmWave radar sensors to act as a sort of peripheral vision. The sensors will be arranged as a center sensor plus left and right sensors. The system performs spatial gating: detections transitioning from the peripheral radar sectors into the forward sector are classified as hazards, while detections that remain only in the peripheral sectors are ignored. We estimate a time to collision from the current distance detected and the distance in past readings, and ring the bell at different volumes accordingly. # Solution Components ## Subsystem 1: Power Provides regulated power and system status feedback. Components: - Li-ion 18650 battery: high-capacity power source. - Buck-boost converter: stable 5 V/3.3 V regulation. - Status LEDs: indicate whether the system is on, the sensitivity level, and whether an object is detected. - Sensitivity potentiometer: allows the rider to adjust the magnitude threshold for different environments. ## Subsystem 2: Radar Sensor Array Function: detect object distance. Components: - Three HLK-LD2410 24 GHz mmWave radar modules - Configuration: 1 center (0°), 2 sides angled (30°) - To keep the sensors distinct, small 3D-printed shields will limit each field of view and prevent crosstalk. - This triangular configuration allows for sector-based filtering. - Due to the limited UARTs on the ESP32, the radars will be polled one at a time in a fast round-robin, which also helps prevent crosstalk. ## Subsystem 3: Processing Function: filter noise and determine collision likelihood. Components: - ESP32 microcontroller: UART connection to the radar sensors - Magnitude thresholding: ignore low-energy reflections such as those from pavement or small non-collision objects. - Time-to-collision algorithm: estimate how long until a collision occurs. ## Subsystem 4: Alert System Function: produce an alert tone that scales with the expected collision time. Components: - Piezo buzzer (PS1240): use pulse-width modulation to increase beep frequency - Three alert stages # Criterion For Success The project will be considered successful if all criteria below are met: - Range performance: reliably detect objects from 5 meters away. - Low latency: detection to audio output in less than 150 ms. - Form factor: device is compact enough to mount on handlebars. - False-positive mitigation: thresholding prevents the alarm from triggering on ground objects and other non-hazards. - Peripheral vision: the device detects objects in the peripheral sectors and tracks them as they move into the center sensor's view. - Battery life: the battery should last at least 8 hours on a single charge. |
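As a sketch of the time-to-collision estimate from successive range readings, the snippet below also maps TTC onto the three alert stages; the 1-second and 3-second stage boundaries are placeholder assumptions, not final tuning.

```python
def time_to_collision(d_now, d_prev, dt):
    """Estimate seconds until impact from two successive range readings (meters).
    Returns None when the object is not closing."""
    closing_speed = (d_prev - d_now) / dt  # m/s, positive when approaching
    if closing_speed <= 0:
        return None
    return d_now / closing_speed

def alert_stage(ttc):
    """Map time-to-collision to the three alert stages (0 = silent)."""
    if ttc is None or ttc > 3.0:
        return 0
    return 2 if ttc < 1.0 else 1
```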
||||||
| 93 | Dynamic Violin Fingerboard Attachment |
Adrian Ignaci Kamil Waz Sophia Wilhelm |
Manvi Jha | Yang Zhao | proposal1.pdf |
|
| # Dynamic Violin Fingerboard Attachment Team Members: - Kamil Waz (kwaz2) - Sophia Wilhelm (sophia16) - Adrian Ignaci (aigna3) # Problem Many people would like to learn an instrument; however, not only are instruments expensive, the lessons are just as costly, if not more so. This also assumes lessons are even available where they live. For this reason, many people try to teach themselves how to play, either through experimentation or online resources. However, this path has a distinct lack of feedback that would help correct poor habits or otherwise incorrect playing. # Solution Our project seeks to give those self-learning an instrument, specifically a violin, an extra source of feedback with respect to finger placement (creating the notes) as well as the rhythm played. A dynamic LED display laid on top of the fingerboard will allow learners to better understand proper finger placement in addition to its relation to each note's duration. Furthermore, by using linear/membrane potentiometers we can accurately measure the position of a finger placed along any of the 4 paths (strings) on the fingerboard. This also allows us to collect information on how accurate the placement is, rather than a simple yes or no as to whether the right note was played. To encourage building good habits and continuous practice, we would like to allow users to upload pieces they want to learn. Users will be able to upload files (MIDI) that can then be used on the fingerboard along with an adjustable tempo. This, paired with individual settings for full-piece playthroughs and learning mode (only advancing to the next note after the user plays the current one), will help encourage good, accurate playing whilst making it fun. # Solution Components ## Fingerboard overlay This subsystem is the main source of feedback to and from the user. It will contain an array of individually addressable LEDs (1528-1196-ND), which will display the appropriate fingerings at the appropriate moment in a piece, and membrane potentiometers (SEN-08680), which will give the processor feedback on the user's accuracy. ## Microcontroller The microcontroller will be responsible for a number of key operations, including: file uploading and reading; control of the display; user input validation and data collection. We will use the ESP32 (tentative, as using an RP2040 would be easier, but that would also be bulkier and remove much of the design work). Additionally, an LCD display (most likely 16x2; no specific part, as they are all fairly generic) can be controlled to display piece statistics to the user. ## Piece Play Configuration This component controls the aforementioned settings for how the user wants to play. Specifically, it will control not only the tempo at which pieces are played, but also when the piece progresses. There will be a setting to simply go through the entire piece while tracking statistics, in addition to a setting dedicated to learning the piece, which pauses until each fingering is properly performed. The most straightforward implementation would require only a potentiometer (COM-09939) or two buttons (such as TS02-66-70-BK-100-LCR-D) for tempo and a switch (OS102011MS2QN1). ## Power A pair of standard AA batteries should be sufficient for our needs (we would need the 36-2463-ND enclosure to hold them). However, we would like to consider a rechargeable 3.7 V lithium-ion battery (1528-1839-ND) with the associated charger (TI BQ24074) and regulator (TI TPS62840). 
The rechargeable battery is a potentially dangerous option, as it could endanger the instrument itself (it is a riskier fire hazard than an easily removable AA battery); therefore, it will not be implemented unless there has been extensive testing of the rest of the system within the time frame of this project. ## Case/Enclosure There will be two parts: one for the fingerboard components and one for everything else. The fingerboard overlay will consist of the components surrounded by a sort of envelope made of ~0.3 mm transparent silicone rubber (allowing clarity for the LEDs without compromising the membrane potentiometers). The second part will be a plastic enclosure that must fit either under the fingerboard or under the body (better for weight distribution) of a full-size violin without adding significant weight. It will contain the power supply, microcontroller, and configuration modules. # Criterion For Success The unit is easy to attach to a standard violin. The attachment must accurately display and detect note fingerings from a user-specified piece at an adjustable tempo. User accuracy will be displayed in real time through the LEDs driven by the microcontroller, and an accuracy summary will be displayed at the end of the piece. In addition (though less quantifiable), it must not impede the physical way a user must play the instrument. Note on expertise: Kamil plays violin and knows a number of other violinists. They will be consulted on the physical design. |
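To illustrate how a membrane potentiometer reading could be turned into a placement-accuracy measure (rather than a simple yes/no), here is a minimal sketch; the ADC resolution and sensing length are assumptions standing in for real calibration data.

```python
ADC_MAX = 4095           # 12-bit ADC reading at the far end of the strip
STRING_LENGTH_MM = 130   # sensing length of the membrane potentiometer

def finger_position_mm(adc_reading):
    """Convert a membrane-pot ADC reading to a position along the string."""
    return (adc_reading / ADC_MAX) * STRING_LENGTH_MM

def placement_error_mm(adc_reading, target_mm):
    """Signed distance between the detected finger and the expected note position."""
    return finger_position_mm(adc_reading) - target_mm
```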
||||||
| 94 | RFID Automatic Self Checkout Basket |
Jacob Slabosz Jada-Marie Griggs Oscar Kaplon |
Yulei Shen | proposal1.pdf |
||
| Team Members: - Jacob Slabosz (slabosz2) - Oscar Kaplon (okaplon2) - Jada-Marie Griggs (jgrig7) # Problem Checking out at a store can be a point of frustration for many shoppers. With long lines, customers may wait for long stretches of time until it is their turn at the register, where they then still have to scan each item one by one, either on their own at a self-checkout or with a cashier. Checkouts can also be troublesome for business owners, who have to pay multiple employees to man the registers. Existing mobile self-checkout options aim to fix this, though they still rely on customers being truthful and scanning all of their items. # Solution We propose a smart shopping basket equipped with UHF RFID that automatically detects and tracks items placed inside without immediately charging the customer. As items are added or removed, the system updates a live item list and running total that the shopper can view through a connected web application. When finished shopping, the user explicitly chooses how to complete checkout, either by paying digitally online (e.g., Apple Pay) or by proceeding to a traditional register for cash payment, ensuring transparency and user control. An integrated load cell cross-verifies item additions by detecting weight changes, allowing the system to flag errors when an item is placed in the basket but no RFID tag is detected. By shifting item identification earlier in the shopping process while preserving flexible payment options, the system reduces checkout congestion and operational overhead. # Solution Components ## Subsystem 1: NFC/RFID Sensing System We will make use of UHF (ultra-high-frequency) RFID because it can detect multiple tags “piled” on top of one another and has increased range over standard RFID. Using an M5Stack UHF RFID unit (JRD-4035) as the reader module, we will access the data via the UART connection. Each individual item will have a unique RFID tag (1568-27180-ND or a similar part number). We will also tune the power (attenuation) of the RFID reader so that it only detects items inside the basket. ## Subsystem 2: Status Indicator (LED) The basket will be equipped with RGB LEDs (WS2812B) that can be set to multiple colors. These will display different colors in different patterns (strobing, pulsing, etc.) based on the status of the basket: solid white to indicate the basket is ready, pulsing red to indicate there was an error with an item, or a green pulse to indicate a successful reading of an item. The LEDs will be controlled by the microcontroller. ## Subsystem 3: Brain The microcontroller (ESP32-C3-DEVKITM-1-N4X) will use input from the RFID subsystem to keep track of every item's unique tag and determine which items are in the basket. With Wi-Fi and Bluetooth connectivity, it will communicate with the store's infrastructure (or, in our case, an emulation on a laptop computer). ## Subsystem 4: Load Cell A load cell (1528-4543-ND) that supports 20 kg and an amplifier (1568-13879-ND) will be used to ensure the signals can be picked up by the microcontroller. The load cell will be placed at the bottom of the basket, underneath a plate, so that the weight of all items is registered. 
## Subsystem 5: Power Regulator Given that the RFID module requires a stable 5 V supply and draws significant inrush current during startup, while the microcontroller and LEDs operate at 3.3 V, we propose powering the system via a USB-C input using a breakout board with proper CC termination (Adafruit USB-C breakout or similar). This board exposes the 5 V and GND pins for the circuit while automatically handling the USB-C configuration pins, ensuring that the source delivers a stable 5 V at sufficient current. The 5 V bus directly powers the RFID module and LEDs, with a 470 µF bulk capacitor and 0.1 µF decoupling capacitors placed nearby to absorb startup current spikes and prevent voltage dips. A buck converter (MP1584EN module or similar) steps 5 V down to 3.3 V to reliably supply the microcontroller, sensors, and LEDs. This arrangement ensures stable operation during RFID power-up events, isolates the microcontroller from voltage fluctuations caused by high-current devices, and provides a simple USB-powered system suitable for our project's purposes. Additionally, we will include a PTC fuse (MF-MSMF050-2-ND) to prevent excessive current through the 5 V bus via temperature detection. USB breakout board options: 1568-23055-ND (DigiKey) or, as a cheaper option, the Adafruit USB Type-C Breakout Board - Downstream Connection (Product ID: 4090). ## Subsystem 6: Web Application A web application (running via a smartphone emulator on a laptop) will connect to the device via the “store's” infrastructure (Wi-Fi), allowing a shopper to see a live list of the items in their basket and a running total. The web application will connect to a centralized server (emulated on a computer) and access the information via an API, meaning shoppers do not need to use a Bluetooth connection, which poses security risks. # Criterion For Success - Shall successfully identify the unique ID of an item placed in the basket within 5 seconds with 95% accuracy. - Shall successfully identify the unique IDs of 10 different items placed in the basket at the same time within 10 seconds with 95% accuracy. - Shall not detect items placed further than 12 inches from the basket, with 95% accuracy. - The LED ring shall change color within 5 seconds of an item being placed in the basket and successfully detected. - The LED ring shall change color within 5 seconds of an untagged item being placed in the basket. - The web application shall display all items in the basket with 100% accuracy within 10 seconds of an item being added. |
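As a sketch of the load-cell cross-verification described in the Solution, the snippet below reconciles the RFID inventory with the measured weight; the catalog and tolerance are hypothetical placeholders for data the store's infrastructure would supply.

```python
CATALOG = {"E2001": 450, "E2002": 120}  # tag EPC -> expected weight in grams
TOLERANCE_G = 30                        # allowed mismatch before flagging

def basket_error(tags_seen, measured_weight_g):
    """Return True when the scale disagrees with the RFID inventory,
    e.g. an untagged item was placed in the basket."""
    expected = sum(CATALOG.get(tag, 0) for tag in tags_seen)
    return abs(measured_weight_g - expected) > TOLERANCE_G
```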
||||||
| 95 | Chair-Mounted Anti-Sedentary Detection System with Enforced Movement Clearing |
Chris Huang Jack Gaw Melissa Wang |
Weijie Liang | proposal1.pdf |
||
| # Team Members: - Chris Huang (zexih2) - Melissa Wang (wang569) - Jack Gaw (jgaw3) # Problem Students and office workers often spend long periods sitting at their desks, which can negatively affect physical health, focus, and productivity. Many existing reminder systems, such as phone notifications or simple alarms, are easy to ignore or turn off without actually getting up. As a result, these systems do not effectively reduce prolonged sitting. There is a need for a system that not only detects extended sitting, but also encourages users to physically get up and move in a simple and practical way. # Solution We propose a chair-mounted system that monitors how long a user has been sitting and triggers an alarm after a configurable time threshold. A pressure-based sensor detects whether the user is seated and tracks continuous sitting time. When the sitting time exceeds the threshold, an alarm is activated and cannot be dismissed while the user is still seated. After the user stands up, the system switches to a movement detection mode. All sensors are mounted directly on the chair, and no wearable devices are required. Movement near the chair is detected using vibration and inertial sensors mounted on the chair frame or legs. The alarm is cleared only after the system detects enough movement consistent with short-distance walking. The system is implemented using a simple state-machine-based embedded design and is divided into multiple subsystems, including seat detection, movement detection, user feedback, and a main controller. # Solution Components ## Subsystem 1: Seat Occupancy Detection This subsystem determines whether a user is sitting on the chair and measures how long the user remains seated. The signal is filtered to reduce noise and prevent false transitions. Components: - Force-sensitive resistor (FSR-402) mounted under the seat cushion, or - Load cell sensors mounted under the chair supports - HX711 load cell amplifier (for load cell configuration) - Basic signal conditioning resistors ## Subsystem 2: Chair-Mounted Movement Detection This subsystem checks whether the user has stood up and moved around near the chair. Sensors are mounted on the chair structure to detect vibrations and motion caused by footsteps. This approach is chosen for simplicity and ease of use, even though it is less precise than wearable step counters. Components: - Piezo vibration sensors mounted on chair legs or base - Optional MPU-6050 IMU mounted on the chair frame - Analog and I2C connections to the controller ## Subsystem 3: Alarm and User Interface This subsystem provides feedback to the user and allows basic interaction with the system. Components: - Active piezo buzzer - LEDs for status indication - Push buttons for configuration and reset - Optional small OLED display ## Subsystem 4: Main Controller and Power The main controller coordinates all subsystems, runs the state machine, and controls alarm behavior. All electronics are mounted on the chair and powered locally. Components: - ESP32 microcontroller - USB 5V power supply or rechargeable battery - Wiring and mounting hardware # Criterion For Success 1. The system correctly detects whether the user is seated or not during repeated sit and stand actions. 2. The alarm activates within a few seconds of the configured sitting-time threshold. 3. The alarm cannot be permanently turned off while the user remains seated. 4. After standing up, the alarm is cleared only after sufficient movement near the chair is detected. 5. 
Simple actions such as tapping or shaking the chair while seated do not clear the alarm. 6. The system can successfully complete multiple full cycles of sitting, alarm triggering, movement detection, and reset without failure. |
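As a sketch of the state machine behind criteria 1 through 6, the snippet below advances the controller one tick at a time; the sitting limit and movement target are placeholder values standing in for the configurable thresholds.

```python
SITTING, ALARMING, AWAITING_MOVEMENT, IDLE = range(4)

SIT_LIMIT_S = 45 * 60  # configurable sitting-time threshold
MOVE_TARGET = 20       # footstep vibration events needed to clear the alarm

def step(state, seated, sit_time_s, move_events):
    """Advance the controller one tick; returns (new_state, alarm_on)."""
    if state == IDLE:
        return (SITTING, False) if seated else (IDLE, False)
    if state == SITTING:
        if not seated:
            return IDLE, False
        return (ALARMING, True) if sit_time_s >= SIT_LIMIT_S else (SITTING, False)
    if state == ALARMING:
        # The alarm cannot be dismissed while the user remains seated
        return (ALARMING, True) if seated else (AWAITING_MOVEMENT, True)
    # AWAITING_MOVEMENT: only sustained walking-like movement clears the alarm
    if seated:
        return ALARMING, True
    return (IDLE, False) if move_events >= MOVE_TARGET else (AWAITING_MOVEMENT, True)
```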
||||||
| 96 | Motion Sensing Guitar Pedal System |
Luke Hilgart Nicholas Oberts Spencer Siegellak |
Po-Jen Ko | Yang Zhao | proposal1.pdf |
|
| Problem: One issue that can come up when playing guitar on stage is wanting to switch guitar pedals on and off while playing. If guitarists want to change the effects on their guitar as they play, they would either have to keep their pedalboard on stage or need someone else controlling which pedals are turned on and off. Solution: Our solution to this problem is a motion-sensing attachment that clips onto the bottom of the guitar. The attachment projects a lighting display on the ground indicating which pedals' effects are currently active, and uses motion sensors to detect when the guitarist kicks near each light. The attachment is connected wirelessly to a custom routing box, which routes the signal through the connected pedals, allowing the guitarist on stage to control which pedals' effects are active at any given time. Solution Components: Subsystem 1: Lighting This subsystem will display different colored lights on the ground from the guitar. The entire device will have to be angled from the guitar so that it shines directly onto the ground. One color of light will indicate the effect is active, and another will indicate that it is inactive. Subsystem 2: Motion Sensor This subsystem is responsible for delivering user inputs. When someone steps on a light, that light will change color and the pedal effect associated with that light will activate. When another light is stepped on, the pedals' effects will combine, as they ordinarily would in a standard pedal setup. To remove an effect, the user steps in that area again. Subsystem 3: Pedal Connection Box To use multiple effects, we need foot pedals that are turned on and connected to the box. In a normal guitar pedal arrangement the pedals are connected in series, so the box will route the audio signal in series through whichever pedals are designated by the sensing system. The box will have a wireless receiver that takes in data on which pedals should be activated and uses it to route the signal in and out of the connected pedals. Criterion For Success: For our project to be considered a success, the guitarist should be able to switch between their pedals as they like despite being away from the pedalboard. The criteria for this are: 1) The kick/step motion effectively toggles desired pedals on/off 2) The lights correspond with the correct pedals 3) The pedal connection box correctly routes the signal through the desired pedals |
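As a sketch of how the routing box could engage its bypass relays from the wireless activation data, the snippet below drives one relay per pedal loop from a bitmask; the pin numbers and write_pin callback are placeholders for the actual GPIO interface.

```python
RELAY_PINS = [4, 5, 6, 7]  # one bypass relay per pedal loop (placeholder pins)

def route_signal(active_mask, write_pin):
    """Engage exactly the pedals the player has toggled on; the audio signal
    then passes through those loops in series and bypasses the rest."""
    for i, pin in enumerate(RELAY_PINS):
        write_pin(pin, bool(active_mask & (1 << i)))

# Example: pedals 0 and 2 active
route_signal(0b0101, lambda pin, on: print(pin, "ENGAGE" if on else "BYPASS"))
```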
||||||
| 97 | Facial Matching Display Mirror w/ Motion Sensor |
Connor Tan Keenan Peris Krish Sahni |
Argyrios Gerogiannis | Yang Zhao | proposal1.pdf |
|
| # Facial Matching Smart Mirror w/ Motion Sensor Team Members: - Keenan Peris (peris2) - Krish Sahni (krish3) - Connor Tan (cctan2) # Problem STEM outreach spaces often rely on static posters or non-interactive exhibits to showcase the different career paths one can take within these industries. These “displays” fail to actively engage or create a personal connection with visitors, especially students who may not initially see themselves represented in STEM fields. Thus, there is a need for an interactive, technology-driven exhibit that captures attention, responds to visitor presence, and presents STEM role models in a way that feels personal, modern, and engaging. # Solution We propose an interactive, mirror-like display that appears inactive or reflective until a person stands in front of it. When the system detects a person, it turns on and prompts the user to select the future career they would like to see. The user can then select, from a list of options, which “quantum” career they want to look at. The system then uses a camera and basic image processing to identify the type of person standing in front of it (ethnicity, sex, etc.). This data would then be used to find a matched scientist or engineer and present them via a short video with their name, role, and a brief quote about their interest in science or engineering. # Solution Components ## Subsystem 1: Presence Detection This subsystem utilizes the Xbox Kinect's infrared (IR) depth stream to create a digital "tripwire" for the system. The Raspberry Pi processes the Kinect's raw depth map. By analyzing the depth frames for skeletal silhouettes, the system can distinguish between a human walking toward the mirror and background movement (such as a door swinging), ensuring the interactive UI only triggers when a user is positioned within the optimal 1.5- to 3.5-meter interaction zone. Components: - Raspberry Pi 4 - Microsoft Xbox One And 360 Kinect V2 Model 1520 Motion Sensor Camera ## Subsystem 2: Mirror Display This subsystem is responsible for presenting the interactive user interface and short-form video content of selected STEM role models. The system uses a partially reflective (two-way) mirror placed in front of an LED display, creating a mirror-like appearance when inactive and a dynamic display when content is shown. The LED TV functions as an active backlighting source positioned directly behind the partially reflective mirror. When the system is inactive, the display outputs a dark (near-black) image, causing minimal light transmission through the mirror and preserving a reflective, mirror-like appearance. When activated, the TV increases brightness and displays high-contrast video and UI elements, allowing light to pass through the mirror film and making the content visible to the user. This contrast-based control enables seamless transitions between an inactive mirror state and an active display state without mechanical parts. Components: - 18”x30” Glass or Acrylic Panel - 18”x30” 60%R/40%T Mirror film - INSIGNIA 32" Class F20 Series LED HD Smart Fire TV ## Subsystem 3: Camera & Video/Image Processing This subsystem captures real-time visual data of the area in front of the display using a Logitech C920 webcam and processes it on the Raspberry Pi to detect the presence and position of a person. It provides the raw image data needed for person and face detection. The processing performed in this subsystem prepares image regions of interest for face detection and confidence evaluation. 
This ensures reliable and efficient visual analysis. Components: - USB camera - Logitech C920 - Camera mounting bracket - Raspberry Pi 4 - Image processing software ## Subsystem 4: Interactive UI The interactive UI serves as the bridge between user input and the career-matching database, leveraging Xbox Kinect skeleton tracking for a touchless experience. The interface shows career options that users can select by moving their hands to "hover" over digital buttons. The display is controlled by an STM32 microcontroller. Components: - STM32 Microcontroller - Display - Microsoft Xbox One And 360 Kinect V2 Model 1520 Motion Sensor Camera ## Subsystem 5: Face Detection Confidence Determination This subsystem evaluates whether a face is present in the captured image and determines how confident the system is in that detection. It focuses solely on detection quality, such as face size, position, and clarity. The resulting confidence score is then used to decide when the system should proceed with selecting and displaying a profile, helping prevent false triggers and unreliable matches. Components: - Face detection software module - OpenCV - Raspberry Pi - Image quality evaluation logic ## Subsystem 6: Data-based search This subsystem selects an appropriate matching scientist or engineer profile from a local database once sufficient confidence has been established. Using predefined metadata and selection rules, the subsystem matches user context to available profiles and outputs the selected content to the UI. Components: - Local profile database - JSON or SQLite - Raspberry Pi - Local storage - microSD card - Profile selection and matching logic in software ## Subsystem 7: Power Management This subsystem is responsible for safely distributing power to all low-voltage electronic components in the system, including the Raspberry Pi and the microcontroller-based control hardware. A regulated 5 V DC supply will power the Raspberry Pi, with additional 3.3 V regulated rails for any local microcontroller logic. The subsystem for the microcontroller and Raspberry Pi operates independently of the Kinect sensor and LED TV, which each use their own dedicated power supplies. Components: - Buck Converter LM2596 - MCP1700 / MCP1703 - Schottky Diode - KABCON Kinect Adapter for Xbox One S, Xbox One X, PC Windows 10 8.1 8, Xbox Kinect Power Supply for Xbox 1S, 1X Kinect 2.0 Sensor - INSIGNIA 32" Class F20 Series LED HD Smart Fire TV AC power adapter (comes with TV) # Criterion for Success: Our final product should demonstrate the following capabilities to be considered successful: ## Core Functionality The mirror detects a person standing in front of it, with detection being automatic (no button press needed). The system consistently activates when someone enters range. ## Visual Transformation: The display clearly switches from a mirror-like idle state to digital content. The overlay of image/video and text is visible, aligned, and not confusing or cluttered. ## Correct Content Triggered: The system shows the intended person (scientist, engineer, etc.) along with the correct associated text and/or short video. No random or incorrect activations. Facial matching is consistent and correct more than 80% of the time. ## Responsiveness: The delay from detection to the display change is short enough to feel natural. |
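As a sketch of Subsystem 5's confidence determination, the snippet below uses OpenCV's stock Haar cascade and scores the largest detected face by its relative size and centering; the scoring weights and minimum-size fraction are our assumptions, not validated parameters.

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_with_confidence(frame, min_face_frac=0.15):
    """Return (face_rect, confidence in [0, 1]) for the largest face, if any."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None, 0.0
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    size_score = min(1.0, (w / frame.shape[1]) / min_face_frac)
    center_offset = abs((x + w / 2) / frame.shape[1] - 0.5) * 2  # 0 center, 1 edge
    return (x, y, w, h), size_score * (1.0 - center_offset)
```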
||||||
| 98 | Real Time Piano Input Visualizer For Learning |
Jay Park Nuwan Singhal Sarayu Suresh |
Wenjing Song | Yang Zhao | proposal1.pdf |
|
| Team Members: - Nuwan Singhal (nuwans2) - Jay Park (jaypark3) - Sarayu Suresh (sarayus2) # Problem Learning to play the piano, especially if self-taught, comes with many difficulties. Two of the main ones are learning how to read sheet music and knowing whether your timing is accurate. These hurdles can be difficult to overcome and lead to people giving up, as they have to put in a lot of preparation before they can start playing songs. # Solution Create a hardware solution that controls an RGB LED matrix which responds to MIDI input from a piano. This can be used to learn songs by preloading data through an SD card: visual cues tell the user when to press keys and which keys to press, waiting for the user to press the correct key before moving on. Other features, such as controlling the speed of the song and working with only one hand, can be used to learn incrementally. It could also be used when teaching piano by instead outputting which key is currently being pressed, allowing students to better understand what their teacher is playing. # Solution Components ## Subsystem 1 - LED matrix board to display which keys to press and the user interface The LED board shows which keys should be pressed by the user who is trying to learn the song. It lights up when the user needs to press the key for the song stored on the SD card. There can be multiple modes for playing the music, as well as options for speed and which hand is used. We plan on using the RGB LED Matrix 1528-2094-ND, as it offers multiple colors and enough space to display a few octaves of the piano. We plan on designing this project to mainly work with 4 octaves, as those are the most commonly used, but with this LED matrix, extending it to more octaves remains an option. For the MCU, we found that an STM32F446RET6 would be a possible option, mainly due to the high speed needed to keep the LED matrix refreshed for persistence of vision without major flickering. Additionally, we already have a Nucleo-F446RE, which means it can be used as a dev board, and part of it can later be used as an ST-Link. ## Subsystem 2 - SD card for storing and loading songs Our PCB will have an SD card reader which will store MIDI files of various songs that can be read by the microcontroller and loaded onto the LED board so that users can play along visually to learn the track. We can use the Micro SD Card Reader Module TS-891 for the SD card reader, and a SanDisk ImageMate SDXC flash memory card for storing songs as MIDI files. ## Subsystem 3 - MIDI input for reading and registering piano input in real time The keyboard sends messages whenever a key is pressed or released. The microcontroller reads these messages and extracts the note number and timing information. This data is compared with the expected notes from the SD card to determine whether the user played the correct key. We can use a MIDI jack (SDS-50J) to take in MIDI input from a piano. This is a better option than USB-A, which would require our microcontroller to act as a USB host; MIDI input uses UART directly, which is all we need for our use case. ## Subsystem 4 - User interface to control the system Users need to be able to control certain parts of the system, such as which song to play, which mode to operate in, the speed of playback, and starting and turning the system on and off. 
## Subsystem 4 - User interface to control the system Users need to control parts of the system such as which song to play, which mode to operate in, the playback speed, and starting the song and powering the system on and off. We will use external buttons and switches for these controls, with the LED matrix displaying the interface text. Simple push buttons suffice, since the main controls we need are 'left', 'right', and 'select' (Omron B3F-4055). ## Subsystem 5 - Power Management The project will run from outlet power. An external 5 V barrel-plug adapter, like the ones used in previous labs, converts mains AC to 5 V DC. A low-dropout regulator (AMS1117-3.3) then steps 5 V down to 3.3 V for the MCU. The LED matrix runs directly on 5 V, so no converter is needed for it; since the matrix is powered through bare wires, we can use a 2-input screw terminal block (TB002-500-02BE). For the LED board's data lines, we need a 16-pin header (900-0702461602-ND). # Criterion For Success The system successfully loads a song from the SD card and begins playback. The LED board correctly displays which piano keys should be pressed for the selected song. A user interface allows users to choose the song they wish to play along with details such as the speed, which hand to practice, and switching between input and output modes. MIDI input accurately detects pressed keys in real time and matches them to expected notes, including how long each key is held, and the display shows multiple notes when the song requires chords. The system measures timing differences between expected notes and user input. The system offers the option to wait for the user to press the correct key before advancing through the song, and options to control the playback speed. At the end of a song, the system reports basic performance metrics such as the number of correct notes and the average timing error (see the matching sketch after this proposal). # Alternatives Three main similar solutions currently exist. The first is software such as Synthesia, which requires an internet connection and a smart device running a supported operating system, and the software itself costs money. Our solution is a standalone device, separate from phones or laptops, making it more accessible to younger and older users, more affordable, and usable without an internet connection. The second is a one-dimensional LED strip that sits above the piano keys, such as "The ONE Piano Hi-Lite"; these also require a smart device, but more importantly their one-dimensional lighting means users cannot see upcoming keys in advance, which matters for harder songs and for learning timing. The last option is a product like the "PopuPiano", essentially a combination of an LED strip and a piano; we aim to offer a separate device that existing piano owners can use rather than a new piano, and that product shares many of the same drawbacks as the other two. |
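The matching step referenced in the success criteria could look like the following sketch; the structs, `check_note`, and the end-of-song metric are hypothetical illustrations of the "wait for the correct key" behavior and the average-timing-error report, not the final firmware design.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdlib.h>
#include <stdio.h>

typedef struct {
    uint8_t  note;    /* expected MIDI note number */
    uint32_t due_ms;  /* when it should be played at the current speed */
} expected_note_t;

typedef struct {
    uint32_t correct;
    uint32_t wrong;
    int64_t  abs_err_sum_ms; /* sum of |actual - expected| timing error */
} score_t;

/* Called on each note-on; returns true when playback may advance. */
static bool check_note(const expected_note_t *exp, uint8_t played,
                       uint32_t now_ms, score_t *s)
{
    if (played != exp->note) {
        s->wrong++;
        return false;         /* hold the song until the right key */
    }
    s->correct++;
    s->abs_err_sum_ms += llabs((int64_t)now_ms - (int64_t)exp->due_ms);
    return true;
}

int main(void)
{
    expected_note_t next = { 60, 1000 };  /* middle C due at t = 1 s */
    score_t s = { 0 };
    check_note(&next, 61, 990, &s);       /* wrong key: song waits */
    check_note(&next, 60, 1040, &s);      /* right key, 40 ms late */
    printf("correct=%u wrong=%u avg err=%lld ms\n",
           (unsigned)s.correct, (unsigned)s.wrong,
           (long long)(s.correct ? s.abs_err_sum_ms / s.correct : 0));
    return 0;
}
```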
||||||
| 99 | Predictive Indoor Ventilation Control Using Air Quality Estimation |
Arka Kolay Gulnaaz Sayyad Noah Rockoff |
Hossein Ataee | Arne Fliflet | proposal1.pdf |
|
| Team Members: - Gulnaaz Sayyad (gsayy2) - Noah Rockoff (noahlr2) - Arkaprabha Kolay (akolay2) # Problem Indoor air quality is often poorly managed in homes, classrooms, and office spaces because harmful conditions such as elevated CO2, PM2.5, and humidity are not immediately noticeable to occupants. Poor ventilation can lead to fatigue, reduced concentration, and health issues. Most existing ventilation systems operate on fixed schedules or require manual control, so they do not respond dynamically to changing air quality. The result is either insufficient ventilation that harms occupant health or excessive ventilation that wastes energy. # Solution This project proposes an indoor air quality monitoring and ventilation control system that continuously measures CO2, PM2.5, temperature, and humidity. Based on real-time sensor data, a predictive, model-based controller activates ventilation mechanisms such as fans to regulate air quality proactively, before thresholds are exceeded. The system incorporates a simplified physical model of indoor CO2 dynamics to estimate future air quality trends and inform ventilation decisions (a sketch of this model appears after this proposal). It also includes a software dashboard that displays current conditions and stores air quality data, allowing users to track trends over time while maintaining a healthier indoor environment. # Solution Components ## Air Quality Sensors Sensors to continuously monitor indoor environmental quality: - CO2, temperature, and humidity sensor (Sensirion SCD40, I2C) - PM1006K low-cost PM2.5 sensor ## Microcontroller - Processes sensor data - Executes the predictive ventilation control algorithm - Logs air quality data for analysis ## Ventilation Subsystem - Fan controlled using PWM - MOSFET driver circuit implemented on a custom PCB - Runs based on the data collected from the sensors ## Software Dashboard - Displays live air quality data - Potentially sends alerts - Used for system validation and performance evaluation # Criterion For Success To validate system performance, controlled experiments will create repeatable indoor air quality disturbances. For example, candles or small flames placed near the CO2 sensor will artificially increase CO2 concentration, allowing verification of sensor response and system behavior. These disturbances will be used to evaluate both a baseline threshold-based controller and the proposed predictive control strategy, with ventilation activation and system response logged to compare the two approaches under identical conditions. The project will be considered successful if the following measurable performance criteria are met: - The system predicts CO2 threshold crossings within ±X minutes using the internal air quality model. - Indoor CO2 concentration is maintained below a specified ppm value for at least a majority of occupied operation time. - Compared to the baseline threshold-based controller, the predictive strategy reduces ventilation fan runtime or estimated energy usage by at least a baseline percentage. - The system operates continuously without unintended resets or sensor failures during fan actuation and environmental changes. - Controlled experiments (e.g., candle-based CO2 disturbances) demonstrate repeatable, observable differences between predictive and threshold-based control behavior. |
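As referenced above, a minimal sketch of the predictive piece follows, assuming a single well-mixed zone and a first-order CO2 mass balance; the struct names, parameter values, and 1-minute Euler integration are illustrative choices, not the team's validated model.

```c
#include <stdio.h>

/* Simplified indoor CO2 dynamics for a single well-mixed zone:
 *     dC/dt = (Q/V) * (C_out - C) + G/V
 * with C in ppm, room volume V, ventilation flow Q, and occupant CO2
 * generation G. All parameter values below are illustrative assumptions. */

typedef struct {
    double volume_m3;        /* V */
    double vent_m3_per_h;    /* Q: current ventilation (fan off = leakage) */
    double gen_ppm_m3_per_h; /* G: roughly 18000 per seated adult */
    double outdoor_ppm;      /* C_out, roughly 420 ppm */
} co2_model_t;

/* Forward-simulate with 1-minute Euler steps; return minutes until the
 * threshold is crossed, or -1 if not crossed within the horizon. */
static double minutes_to_threshold(const co2_model_t *m, double c_ppm,
                                   double threshold_ppm, double horizon_min)
{
    const double dt_h = 1.0 / 60.0;
    for (double t = 0.0; t <= horizon_min; t += 1.0) {
        if (c_ppm >= threshold_ppm)
            return t;
        double dcdt = (m->vent_m3_per_h / m->volume_m3)
                          * (m->outdoor_ppm - c_ppm)
                      + m->gen_ppm_m3_per_h / m->volume_m3;
        c_ppm += dcdt * dt_h;
    }
    return -1.0;
}

int main(void)
{
    /* 50 m3 room, fan off (leakage only), two occupants, SCD40 reads 800. */
    co2_model_t room = { 50.0, 10.0, 36000.0, 420.0 };
    double t = minutes_to_threshold(&room, 800.0, 1000.0, 120.0);
    if (t >= 0.0)
        printf("1000 ppm predicted in %.0f min: start the fan early\n", t);
    return 0;
}
```

Whenever the predicted crossing time falls below the fan's spin-up-plus-mixing lead time, the controller would start ventilation early; this anticipatory behavior is the difference the candle experiments are meant to expose against the threshold-based baseline.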
||||||
| 100 | Driving Habits Feedback Module |
Anna Sako Elijah Sutton James Tang |
Lukas Dumasius | Craig Shultz | ||
| Team Members: - Elijah Sutton (esutton3) - James Tang (jhtang2) - Anna Sako (sako2) # Problem According to the Department of Energy, a simple change in driving habits can affect fuel economy by 10%-40%, which translates to $0.38-$1.53/gallon saved! https://www.energy.gov/energysaver/driving-more-efficiently Although many drivers are concerned with fuel efficiency and eco-friendly driving, it is often difficult to understand the specific impact of driving habits on emissions. Especially in older vehicles, actionable driving feedback is limited and counter-intuitive. # Solution We propose a small OBD-II compatible module that can be retrofitted into nearly any vehicle and collects driving data such as throttle, RPM, and speed. This data can then be used to infer other signals such as transmission state and braking. Collectively, this data is fed live into a lightweight ML model that classifies driving styles and mistakes, and the result is relayed to the driver via a distraction-free LED display (RGB strip). The driver can then use this feedback to adjust their driving habits in an intuitive way and achieve the emissions savings that are possible. # Solution Components ## Subsystem 1 The first subsystem is a PCB that is powered by and interfaces with the OBD-II port of a car. The board steps the 12 V chassis power down with a buck converter and uses a CAN transceiver to communicate with the car's ECM to collect data. The MCU on the board handles all communications and hosts a lightweight ML model. ## Subsystem 2 The second subsystem is a distraction-free, intuitive LED display that provides the driver with feedback. It needs to be convenient enough to mount on the dash of any car, discreet enough not to distract, and intuitive enough to give the driver actionable information. This piece defines the entire user experience and is a potential source of danger if it becomes distracting, so it must be designed with great care. ## Subsystem 3 The last subsystem is all software. After the MCU collects the data, it must process it to inform the display. We will start with a threshold / rule-based algorithm that classifies the driver's habits and provides feedback (a sketch follows after this proposal); this will then be developed into a lightweight ML model where improvements can be made. # Criterion For Success To be effective, this project must collect driving data via the OBD-II port, control the LED display, and run as a self-contained power system. At the highest level, the project will be deemed successful if we can improve the vehicle's reported fuel economy for a given trip based on feedback from the device. |
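As a sketch of the rule-based starting point mentioned in Subsystem 3 (before any ML model exists), the classifier below maps samples of standard OBD-II PIDs (0x0C engine RPM, 0x0D vehicle speed, 0x11 throttle position) to LED feedback; every threshold and the enum-to-color mapping are illustrative assumptions.

```c
#include <stdio.h>

typedef struct {
    float throttle_pct; /* PID 0x11, 0-100 */
    float rpm;          /* PID 0x0C */
    float speed_kph;    /* PID 0x0D */
} obd_sample_t;

typedef enum {
    FEEDBACK_GOOD,       /* LED strip: green */
    FEEDBACK_HARD_ACCEL, /* amber */
    FEEDBACK_HARD_BRAKE, /* red */
    FEEDBACK_HIGH_RPM    /* blue: shift up sooner */
} feedback_t;

/* Braking is inferred from deceleration, since many older cars expose
 * no brake-pedal PID over OBD-II. `dt_s` is the sampling interval. */
static feedback_t classify(const obd_sample_t *prev,
                           const obd_sample_t *cur, float dt_s)
{
    float accel = (cur->speed_kph - prev->speed_kph) / dt_s; /* kph/s */

    if (accel < -8.0f)
        return FEEDBACK_HARD_BRAKE;   /* hard braking wastes momentum */
    if (cur->throttle_pct > 70.0f && accel > 6.0f)
        return FEEDBACK_HARD_ACCEL;   /* aggressive launch */
    if (cur->rpm > 3500.0f && cur->speed_kph > 30.0f)
        return FEEDBACK_HIGH_RPM;     /* late upshift hurts economy */
    return FEEDBACK_GOOD;
}

int main(void)
{
    obd_sample_t prev = { 10.0f, 2000.0f, 50.0f };
    obd_sample_t cur  = {  5.0f, 1500.0f, 45.0f }; /* -5 kph in 0.5 s */
    printf("feedback = %d\n", classify(&prev, &cur, 0.5f));
    return 0;
}
```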
||||||
| 101 | Optimization of Hemispherical Imaging System for Subterranean Root Detection |
Alan Ilinskiy Areg Gevorgyan Liam Thompson |
Jason Jung | Arne Fliflet | ||
| Team TBD # Problem Existing methods of collecting phenotype data on plant roots, needed to develop higher-performing plants, are time-consuming and tedious, produce poor image quality, and are neither robust nor portable. # Solution Optimize an innovative design based on the hemispherical root camera prototyped by an ECE445 project last semester. Project goals for this semester include improved image resolution and image-collection accuracy, increased durability and portability for field use, and conducting tests in the greenhouse and in the field. |
||||||
| 103 | Adaptive Solar Panel Canopy for Vineyard Microclimate Control |
Titouan Louis Matthieu Morel Zikora Okonkwo |
Zhuchen Shao | Joohyung Kim | proposal1.pdf |
|
| # Problem Climate change is increasingly threatening vineyards by exposing plants to heat stress and water scarcity. During hot, sunny periods, leaves can overheat, soil moisture evaporates rapidly, and crop yield and quality can decline. Growers currently lack a localized, automated system to manage sunlight and humidity at the plant level without frequent manual intervention or excessive irrigation. # Solution We propose an adaptive shading and microclimate control system that combines a motorized solar-panel canopy with a moisture-capture foam layer beneath it. Environmental sensors for soil moisture, air temperature, humidity, and light intensity continuously monitor the conditions around the plants. The system automatically adjusts the tilt and height of the panels to regulate sunlight exposure and local humidity, while the foam layer captures and retains ambient moisture to further mitigate heat stress. By acting on real-time data, the system optimizes plant protection and water conservation, providing a precise, low-maintenance solution for growers (a sketch of the control loop appears after this proposal). # Solution Components ## Subsystem 1: Environmental Sensing ### Function: Monitor the microclimate around the plants to inform adaptive responses. ### Components: - Soil moisture sensor → capacitive sensor (DFROBOT SEN0193) - Air temperature & humidity sensor → DHT22 - Light intensity sensor → BH1750 - Microcontroller → Arduino Uno ## Subsystem 2: Moving Canopy ### Function: - Thermal protection: orient the panels to cast shade over the plants - Humidity management: adjust the height and tilt to trap or release the humid air generated by the water-capturing layer ### Components: - Linear actuators → Progressive Automations PA-14 - Distance/height feedback → ultrasonic distance sensor HC-SR04 - Tilt control → stepper motor NEMA 23 - Orientation feedback → IMU sensor (BNO055) ## Subsystem 3: Water-Capturing Layer ### Function: - Absorption: capture humidity from the air when it is high - Release: release the water when it is needed ### Components: - Hydrophilic medium → polyurethane foam - Moisture-absorbing material → calcium chloride (CaCl2) # Criterion For Success - Soil moisture retention is improved by about 10-15% compared to uncovered soil during test periods - The system operates autonomously for about 24 hours without manual intervention - Canopy height responds to local humidity, adjusting within 60 seconds to increase or decrease humidity at the plants - Canopy tilt responds to changes in light intensity, tilting within 30 seconds of a sensor threshold breach - Air temperature at the plant canopy is reduced by at least 3 degrees Celsius under high solar exposure # Demonstration For the demonstration, we propose building a microsystem that simulates the environment with a bell jar, a lamp, a hair dryer, and a humidity diffuser. The demonstration will focus on showing the system's response to an increase in temperature as well as on measuring and capturing humidity. We also propose adding a display to show the sensor readings and lowering the thresholds for the demo. |
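As referenced above, the per-cycle control loop on the Arduino Uno might reduce to something like the following sketch; the actuator functions are stubs standing in for the PA-14 linear-actuator and NEMA 23 stepper drivers, and every threshold is an illustrative assumption to be tuned during the bell-jar demo.

```c
#include <stdio.h>

#define LUX_SHADE_THRESHOLD  60000.0f /* bright direct sun */
#define TEMP_SHADE_THRESHOLD    32.0f /* deg C at the plants */
#define HUMIDITY_LOW_PCT        40.0f /* lower canopy to trap moisture */
#define HUMIDITY_HIGH_PCT       75.0f /* raise canopy to ventilate */

typedef struct {
    float soil_moisture_pct; /* SEN0193 */
    float air_temp_c;        /* DHT22 */
    float humidity_pct;      /* DHT22 */
    float light_lux;         /* BH1750 */
} env_t;

/* Hypothetical hardware hooks, stubbed with prints for this sketch. */
static void set_panel_tilt_deg(float deg)  { printf("tilt -> %.0f deg\n", deg); }
static void set_canopy_height_mm(float mm) { printf("height -> %.0f mm\n", mm); }

static void control_step(const env_t *env)
{
    /* Thermal protection: cast shade under strong sun or heat. */
    if (env->light_lux > LUX_SHADE_THRESHOLD ||
        env->air_temp_c > TEMP_SHADE_THRESHOLD)
        set_panel_tilt_deg(45.0f);  /* shading angle; would track the sun */
    else
        set_panel_tilt_deg(0.0f);   /* flat: let light through */

    /* Humidity management via canopy height over the foam layer. */
    if (env->humidity_pct < HUMIDITY_LOW_PCT)
        set_canopy_height_mm(300.0f); /* low: trap moisture near plants */
    else if (env->humidity_pct > HUMIDITY_HIGH_PCT)
        set_canopy_height_mm(900.0f); /* high: release humid air */
}

int main(void)
{
    env_t hot_dry = { 20.0f, 35.0f, 30.0f, 80000.0f };
    control_step(&hot_dry); /* expect: shade tilt and lowered canopy */
    return 0;
}
```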
||||||