Projects
| # | Title | Team Members | TA | Professor | Documents | Sponsor |
|---|---|---|---|---|---|---|
| 1 | Mobile Hive Checker | Fiona Cashin, Olivia Guido, Rawda Abdeltawab | | | | |
# Team Members:
- Fiona Cashin (fcashin2)
- Olivia Guido (ojguido2)
- Rawda Abdeltawab (rawdaka2)

# Problem

Beekeepers must routinely monitor hive conditions to maintain healthy colonies. However, manually opening a hive significantly stresses the bees and disrupts their environment, and frequent disturbances can negatively affect bee behavior and productivity. On the other hand, insufficient monitoring can lead to swarming or freezing, resulting in the loss of an entire colony. Each lost colony can cost a beekeeper between $100 and $200. This highlights the need for a non-invasive way to assess the health of multiple hives while minimizing stress on the bees. Although monitoring systems are available, they typically cost around $100 per hive, and many of the leading companies in this space are headquartered in Europe.

# Solution

The proposed solution is a portable device that enables beekeepers to monitor a colony's health without opening the hive. A small sensor probe is inserted into the hive entrance to collect internal environmental data while the main unit remains outside. The device displays active sensor readings on an integrated screen and indicates whether hive conditions fall within acceptable ranges, such as temperatures between 70 and 97 degrees Fahrenheit. This approach minimizes hive disturbance while still providing essential health data, including temperature, humidity, and carbon dioxide levels.

# Solution Components

## Subsystem 1: Temperature and Humidity Monitoring

This subsystem measures the internal temperature and humidity of the beehive. Maintaining proper temperature is critical for hive health: bee eggs will not develop and adult bees may die if the internal temperature falls outside the range of 70 to 97 degrees Fahrenheit. Humidity must remain between 50 and 60 percent for nectar to dry into honey; excess humidity can promote pest reproduction, while insufficient humidity can cause bee eggs to dehydrate.

The device will use a temperature and humidity sensor connected via a long cable, allowing the sensor to be inserted into the hive while the user holds the device externally. The sensor will interface with a microcontroller unit (MCU), which will process the data and display the readings on an LCD screen. The MCU will evaluate whether the temperature and humidity values fall within the acceptable ranges. If the readings are normal, the display will show "PASSED"; if any reading is out of range, it will show "FAILED".

Components:
- Digital temperature/humidity sensor: HiLetgo DHT21
- Microcontroller unit (MCU): ESP32-C3-WROOM-02
- Liquid crystal display (LCD): B0DN9NMBFW (GODIYMODULES) or B0BWTFN9WF (Hosyond)

## Subsystem 2: Carbon Dioxide Monitoring

This subsystem measures the carbon dioxide concentration within the hive. Bees can tolerate CO2 levels of up to about 8 percent; higher levels indicate overcrowding and poor ventilation. The device will include a CO2 sensor connected via cable to the same MCU. The MCU will record the CO2 level and display the result on the LCD. As with the temperature and humidity subsystem, the MCU will determine whether the CO2 level is within the acceptable range and display "PASSED" or "FAILED" accordingly.
Components:
- CO2 sensor: HiLetgo MH-Z19
- Microcontroller unit (MCU): ESP32-C3-WROOM-02
- Liquid crystal display (LCD): B0DN9NMBFW (GODIYMODULES) or B0BWTFN9WF (Hosyond)

## Subsystem 3: Microcontroller and Logic

The microcontroller coordinates all the subsystems and implements a finite state machine (FSM). The MCU runs embedded C firmware that defines an FSM with at least four states, including "Start", "Reset", "Testing", and "Done" (a minimal sketch follows this proposal). During the "Testing" state, sensor data is acquired via the appropriate communication protocols. Once testing is complete, the collected data is displayed on the LCD, allowing the user to assess the overall health of the hive. The MCU compares each reading against its specified range and prompts either a PASSED or FAILED response on the display.

Components:
- Microcontroller unit (MCU): ESP32; one option is the Espressif ESP32-C3-WROOM-02, which has a 32-bit RISC-V CPU, a built-in antenna, Bluetooth, and Wi-Fi
- Programming interface: USB for uploading code (USB can both charge the battery and upload code), using the Arduino IDE platform
- Reset button: PTS645SL43-2 LFS, resetting the data on the LCD to test another hive
- Power-on button: PTS645SL43-2 LFS
- Liquid crystal display (LCD): B0DN9NMBFW (GODIYMODULES) or B0BWTFN9WF (Hosyond)

# Criterion For Success

- The humidity sensor accurately measures humidity.
- The temperature sensor accurately measures temperature.
- The display correctly shows the measured temperature.
- The display correctly shows the measured humidity.
- The display turns on when the ON button is pressed.
- A Start screen is shown when the ON button is pressed.
- A Testing screen is shown after the Start screen.
- A Done screen is displayed when the ON button is pressed a second time.
- A Reset screen is displayed when the reset button is pressed.
- The display correctly shows "PASSED" and "FAILED".
- The display shows "PASSED" when all sensor readings are within normal ranges.
- The display shows "FAILED" when at least one sensor reading is outside the normal range.
- The final product is tested on multiple hives.
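To make the Subsystem 3 logic concrete, here is a minimal sketch of the four-state FSM and range checks in C. The thresholds come from the proposal text; the sensor-read, display, and button helpers (`read_sensors()`, `lcd_show()`, `button_pressed()`) are hypothetical placeholders, not actual driver calls.

```c
/* Minimal FSM sketch for the hive checker. Helper functions are
 * hypothetical placeholders; thresholds come from the proposal. */
#include <stdbool.h>

typedef enum { STATE_START, STATE_TESTING, STATE_DONE, STATE_RESET } state_t;
typedef struct { float temp_f, humidity_pct, co2_pct; } readings_t;

extern void lcd_show(const char *msg);     /* placeholder display call        */
extern bool button_pressed(int btn);       /* placeholder debounced read      */
extern readings_t read_sensors(void);      /* placeholder DHT21/MH-Z19 read   */
enum { BTN_ON, BTN_RESET };

/* Range check per the proposal: 70-97 F, 50-60 %RH, CO2 <= 8 %. */
static bool hive_ok(readings_t r) {
    return r.temp_f >= 70.0f && r.temp_f <= 97.0f &&
           r.humidity_pct >= 50.0f && r.humidity_pct <= 60.0f &&
           r.co2_pct <= 8.0f;
}

void hive_checker_loop(void) {
    state_t state = STATE_START;
    readings_t r = {0};
    for (;;) {
        switch (state) {
        case STATE_START:                  /* ON button starts a test        */
            lcd_show("START");
            if (button_pressed(BTN_ON)) state = STATE_TESTING;
            break;
        case STATE_TESTING:                /* acquire DHT21 + MH-Z19 data    */
            lcd_show("TESTING");
            r = read_sensors();
            state = STATE_DONE;
            break;
        case STATE_DONE:                   /* report result until reset      */
            lcd_show(hive_ok(r) ? "PASSED" : "FAILED");
            if (button_pressed(BTN_RESET)) state = STATE_RESET;
            break;
        case STATE_RESET:                  /* clear display, test next hive  */
            lcd_show("RESET");
            state = STATE_START;
            break;
        }
    }
}
```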
| 2 | Bird Simulator | Anthony Amella, Eli Yang, Emily Liu | Shiyuan Duan | | | |
# Bird Simulator

Team Members:
- Anthony Amella (aamel2)
- Emily Liu (el20)
- Eli Yang (eliyang2)

# Problem

FPV drones give people a chance to experience immersive flight through FPV goggles, improving engagement. However, this immersion is primarily visual: the pilot cannot control the drone with motion cues or body orientation. The result is an experience that lacks the physical realism sought by people who want an even more exhilarating experience.

# Solution

Our bird simulator will allow the pilot to control a drone using motion. The system will consist of a drone with a camera, FPV goggles, and a wearable suit instrumented with IMUs that reads how the wearer's body moves and is oriented. The motion captured by the suit will then be converted to instructions the drone can use to maneuver in its environment.

# Solution Components

## Visuals

We will use 5.8 GHz radio to transmit video data from the drone to the goggles using a pair of transmitters and receivers (RTC6705 and RTC6715). These RF modules handle amplifying, mixing, and modulating/demodulating signals, while leaving us the ability to configure and program the module over SPI from a microcontroller. We will use a camera that outputs analog video to be transmitted by the RTC6705 and received by the RTC6715 module in the goggles, where it is converted to composite video and displayed on a small screen. We expect the development of the other subsystems to require a lot of trial and error, so we will develop a virtual simulation environment using JavaScript/WebGL that allows testing with fewer safety concerns.

## Drone

We will design and manufacture a drone from scratch. The body of the drone will be waterjet-cut from carbon fiber, similar to existing COTS racing drones. Tentatively, we will build the drone on a 3-inch frame. Notably, the drone will have a servo attached to the FPV camera, allowing its pitch to be changed mid-flight. This lets the camera look forward regardless of the attitude of the drone body, so the FPV pilot feels more like a bird, since birds generally look forward during flight regardless of their speed. The drone will carry the 5.8 GHz analog video transmitter described above, as well as a 2.4 GHz SX1280 receiver for control signals from the pilot. We will also make our own ESCs, allowing us to control the motors with a custom BLDC controller built around FDMC8010 MOSFETs. The drone will have auto-leveling capabilities, harnessing an IMU in the drone body. This will allow for easier flight, with the drone staying roughly level.

## Control

There will be four IMUs embedded in a wearable suit, whose data will be combined to determine the motion and orientation of the user: one on each arm, one on the head, and one on the torso. We plan to use the IIM-20670, which includes a gyroscope and accelerometer and communicates with the MCU over SPI. Movements such as head rotation, wing flapping, body orientation, and others to be determined will be translated to stick inputs on a normal drone controller (as sketched below). We will also build a conventional drone controller to override suit inputs and take over control if the drone starts behaving unexpectedly. Both the suit and the controller will transmit signals using a 2.4 GHz transceiver (SX1280), received by an SX1280 on the drone. Using these modules requires writing driver code to facilitate communication with the MCU.
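As one illustration of the suit-to-stick translation, here is a minimal C sketch that maps head yaw to the yaw channel and the arm pitch difference to roll, assuming upstream sensor fusion already yields Euler angles. The angle sources, gains, and channel assignments are illustrative assumptions, not the team's final mapping.

```c
/* Hypothetical suit-to-stick mapping sketch. Assumes sensor fusion
 * provides Euler angles (degrees) per IMU; outputs use the common
 * 1000-2000 us RC channel convention. Gains are illustrative. */
#include <stdint.h>

typedef struct { float roll, pitch, yaw; } euler_t;

static float clampf(float x, float lo, float hi) {
    return x < lo ? lo : (x > hi ? hi : x);
}

/* Map an angle in [-45, +45] degrees onto a 1000-2000 us channel. */
static uint16_t angle_to_channel(float deg) {
    float norm = clampf(deg / 45.0f, -1.0f, 1.0f);
    return (uint16_t)(1500.0f + 500.0f * norm);
}

void suit_to_sticks(euler_t head, euler_t left_arm, euler_t right_arm,
                    euler_t torso, uint16_t ch[4]) {
    ch[0] = angle_to_channel(head.yaw);                         /* yaw      */
    ch[1] = angle_to_channel(torso.pitch);                      /* pitch    */
    ch[2] = angle_to_channel(left_arm.pitch - right_arm.pitch); /* roll     */
    /* Throttle from the average arm ("flap") posture, illustrative only. */
    ch[3] = angle_to_channel((left_arm.pitch + right_arm.pitch) / 2.0f);
}
```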
# Criterion For Success

At a minimum, we will build a drone that can drive four BLDC motors, receive 2.4 GHz control signals, and transmit 5.8 GHz video. The drone will have some form of auto-leveling using a built-in IMU, as well as a camera with variable pitch. We will also make a bird suit with four IMUs that can generate signals capable of controlling the drone. These signals will initially be used to control a drone simulator, programmed in WebGL. If time permits, they will also control the drone itself, allowing for real-world flight. Of note, Eli Yang has an FAA Remote Pilot Certification, allowing for legal outdoor flight. To start, we will use off-the-shelf FPV goggles, but we will make our own if time permits.
| 3 | Heterodyne Bat Detector | Bill Waltz, Evan McGowan, Kyle Jedryszek | Gayatri Chandran | | | |
Team Members:
- Bill Waltz (wwaltz2)
- Kyle Jedryszek (kaj5)
- Evan McGowan (evandm2)

# Problem

There is a need for American-made and American-sold handheld heterodyne bat detectors. Some American bat enthusiasts dislike the detectors that plug into phones or tablets, like those from Wildlife Acoustics, since the sound produced is not as high-quality as a standard heterodyne; these models also cost $300+. The most popular heterodynes are currently produced and sold in the UK and Australia. Specifically, Dr. Joy O'Keefe needs a high-quality, mass-producible device so that several groups of people can be equipped with bat detectors for Bat Walks at the Central Illinois Bat Festival.

# Solution

A handheld device with a microphone capable of detecting frequencies between 15 kHz and 100 kHz, which will be amplified before being heterodyned with a mixer circuit. The frequency to mix against is controlled by a large dial (with illuminated frequency labels) on the front of the device. The sound will then be amplified and output via a quality speaker. The device will also have a power button, a volume dial, and a 3.5 mm auxiliary port for headphone use, and will be powered by AAA batteries. Finally, what might set this apart from every other bat detector is that this model will have stored, prerecorded sound clips that can be played so that first-time users know what to listen for.

# Solution Components

## Ultrasonic Receiver

To first receive the signal, we will employ an ultrasonic transducer, likely the most important and expensive part of the product. Transducer options include Syntiant's SPVA1A0LR5H-1 microphone, readily available on DigiKey, since it has a frequency rating well into the ultrasonic range. A pre-amplifier using op-amps like the TLV9052/ADA4097 will amplify the desired signal, followed by a high-pass filter to remove low-frequency noise below 20 kHz.

## Heterodyne

To mix the ultrasonic signal down to baseband, we will employ a double-balanced mixer like the SA612A or MC1496, which also produces the internal oscillator signal (see the worked identity at the end of this proposal). The heterodyned signal is then amplified with another op-amp circuit and passed to a speaker. Our leading speaker choice is the Taoglas SPKM.23.8.A: a thin, ~1-inch-diameter speaker that will fit nicely into a handheld device.

## Bat Sound Playback

Pre-recorded audio clips from other heterodyne bat detectors will be programmed onto a flash memory module, sized somewhere between 32 KB and 512 KB, that can be accessed by a microcontroller. An ATtiny85 is our MCU of choice, as its availability, low cost, and speed satisfy our needs for this project. When the device is on and the user presses a button labeled "Demo", one of the recordings will play from the speaker or audio jack, preceded by an announcement of which species of bat they are hearing. The MCU and flash memory will be programmed via an external programmer (such as the USBasp), with the audio data written directly into the external flash storage.

## User Interface

The UI will consist of a 3D-printed handheld chassis for the device. The chassis will contain a power button (or switch), mechanically or electrically connected to the main board, and an adjustable volume knob. The device will have a dial (labeled with both frequencies in kHz and common bat call ranges) that adjusts a potentiometer to change the frequency of the onboard oscillator.
There will also be a dim, non-invasive red or green light shining on the frequency dial so the user can read the dial in the dark. The bottom of the device will have a 3.5 mm auxiliary audio port for headphone listeners.

# Criterion For Success

Our product must accomplish the following objectives to be considered successful:
- Total production cost below 50 USD, including casing
- Device is tunable between 15 kHz and 100 kHz using the onboard tuner, testable using Dr. O'Keefe's ultrasound calibrator
- Battery life (rechargeable or otherwise) lasts the length of at least one bat walk (1-2 hours)
- Volume control is tunable from muted to more-than-noticeably audible
- Selected bat sounds are audible through the speaker when played
- When an ultrasonic source radiates sound, the device downconverts it to audible frequencies and plays it through the onboard speaker
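For reference, the mixer stage works by multiplying the microphone signal with the local oscillator; the standard product-to-sum identity (textbook mixer math, not project-specific analysis) shows where the audible tone comes from:

$$
\cos(2\pi f_b t)\,\cos(2\pi f_{LO} t)
  = \tfrac{1}{2}\cos\!\big(2\pi (f_b - f_{LO})\,t\big)
  + \tfrac{1}{2}\cos\!\big(2\pi (f_b + f_{LO})\,t\big)
$$

For example, a 45 kHz bat call mixed with a 42 kHz oscillator yields a 3 kHz audible difference tone plus an 87 kHz sum component that the audio output stage naturally rejects; turning the dial moves which calls land in the audible band.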
| 4 | Scorpion-Lift Ant-Weight BattleBot | Chen Meng, Zixin Mao | Zhuoer Zhang | | | |
Team Members:
- Zixin Mao (zixinm2)
- Chen Meng (meng28)

# Problem

Many small combat/arena robots fail not because they lack "power," but because they lose mobility (treads derailing, wheels slipping), cannot recover after being flipped, and cannot reliably control an opponent's posture. Tracked robots have better traction, but keeping treads aligned under aggressive turning and impacts is difficult. Lifter-style control bots can dominate positioning, but they often struggle with self-righting and maintaining stable contact with opponents.

We want to build a tracked, scorpion-shaped control robot that can (1) keep traction and mobility under collisions, (2) self-right and resist flips, and (3) control an opponent using two lifting arms ("claws") plus a tail-mounted stinger mechanism for pushing/hooking/jabbing/bracing, without using destructive spinning weapons. The goal is a robust platform that demonstrates strong mechanical design and custom high-current circuits (motor drive, actuator drive, and power monitoring), suitable for a senior design scope.

# Solution

We will build a differential-drive tracked platform (left and right tread) with a low center of gravity and a wide stance. Inspired by tracked designs that use self-centering tread geometry to prevent belt derailment, we will incorporate a crowned-pulley/self-centering tread approach to improve reliability during turns and impacts. On top of the base, we add:
- Two front lifting arms (scorpion "claws") that use a linkage mechanism to gain mechanical advantage, lift opponents, and self-right.
- A scorpion tail "stinger" that can be positioned to brace against the ground for self-righting/anti-tip stability and can also be used as a control weapon to jab/push/hook opponents, disrupting their posture and setting up lifts.
- Custom circuit boards:
  - High-current dual motor driver (external MOSFET H-bridges with gate-driver ICs)
  - Tail actuator power stage (H-bridge or MOSFET stage + current sensing + thermal protection)
  - Power distribution + sensing (battery monitoring, current measurement, fusing, kill switch)

This system directly addresses the problem: tracks provide traction, crowned/self-centering geometry improves tread retention, lifter arms provide control and self-righting, and the tail stinger adds a controllable "third-point" brace plus an active control/attack mechanism.

# Subsystems Overview and Physical Design

## Subsystem 1: Tracked Mobility and Drive Electronics

1) Function: Provide high-traction motion, fast turning, and robust tread retention under impacts.

2) Mechanical approach:
- Differential tracked drive: one motor per side.
- Tread retention strategy: incorporate a crowned pulley/self-centering profile to reduce derailment during turning and shock loads.
- Commercial track set (baseline): Pololu 30T Track Set – Black, item #3033 (sprockets + tracks). If size/torque needs change, we can swap to a different Pololu track set family (e.g., 22T variants).

3) Actuators/sensors (explicit parts):
- Gearmotors (with encoders for closed-loop speed control): Pololu 100:1 Metal Gearmotor 37Dx73L mm 12V with 64 CPR encoder, item #4755 (or an equivalent 100:1 encoder variant).
- Motor driver (custom PCB, circuit-level design): TI DRV8701 brushed DC H-bridge gate driver (uses external N-MOSFETs for high current). We will design the H-bridge with appropriately rated MOSFETs, gate resistors, a current shunt, and a protection layout (high-current routing, thermal design).
- Prototype/fallback option: a VNH5019-class integrated driver can be used for early bring-up, but the final deliverable targets a discrete MOSFET + gate-driver solution for circuit-level depth.
- Current/voltage sensing: TI INA219 current shunt/power monitor (I²C) for battery + load telemetry (or per-rail monitoring where needed).

4) Key circuit deliverables (what we will design and build):
- Dual H-bridge power stage (2x DRV8701 + MOSFETs) (prototype fallback: VNH5019-class module)
- Current sense + current limiting strategy (sense resistor + DRV8701 sense amplifier)
- Reverse polarity protection + fuse + TVS transient suppression
- 5V/3.3V regulation for logic and servos (as needed)

## Subsystem 2: Dual Lifting Arms ("Claws") Mechanism

1) Function: Lift/tilt opponents, perform self-righting, and stabilize the robot during control maneuvers.

2) Mechanical approach:
- Two symmetric front arms shaped like scorpion claws.
- Linkage-based lifter (4-bar or similar) to amplify torque and keep the lifting motion controlled.

3) Components (explicit parts):
- High-torque metal gear servos (example): DS3218 digital servo (~20 kg·cm class), one per arm, or one actuator with a shared linkage if weight/space demands.
- Arm position feedback (optional): potentiometer or magnetic encoder (e.g., AS5600) for closed-loop arm control beyond the servo's internal control.

4) Circuit and interface:
- Servo power rail design (separate buck regulator, bulk capacitance, brownout prevention)
- PWM generation from the MCU; optional current monitoring for stall detection

## Subsystem 3: Scorpion Tail "Stinger" and Driver Stage

1) Function: Provide a controllable tail mechanism that supports (1) self-righting, (2) anti-tip bracing while lifting, and (3) active control/attack against other robots via jabbing, pushing, and hooking.

2) Mechanical approach: The tail is a rigid arm mounted at the rear/top of the chassis with 1–2 DOF:
- Pitch joint to raise/lower the tail (primary DOF).
- (Optional) small yaw adjustment to place the stinger left/right if needed.

Tail tip "stinger" end-effector (replaceable modules):
- Jab/pusher tip: a rounded or wedge-shaped tip to shove and unbalance opponents without snagging.
- Hook tip: a curved hook profile to catch on opponent edges (weapon guards, chassis lips, or external features) and pull/rotate them into the lifter arms.
- Brake/brace foot: high-friction pad to press into the ground for stability and self-righting.

Operating modes:
- Self-righting push: the tail presses into the floor to lever the chassis upright.
- Anti-tip brace: as the front arms lift, the tail pushes down to prevent a backflip and stabilize the chassis.
- Jab/poke: quick tail motion to disrupt opponent alignment and create an opening for the front claws.
- Hook-and-control: the tail hooks and pulls to rotate the opponent or drag them into a favorable position.

3) Actuators/sensors (explicit parts):
- Tail pitch actuator (choose one implementation path):
  - Path A (simpler, lighter): high-torque servo (example: DS3218) for the tail pitch joint.
  - Path B (more force, more controllable): compact DC gearmotor + lead screw (custom linear actuator) driving tail pitch via a crank linkage.
- Tail contact/force sensing (optional but recommended for protection and testing): force-sensitive resistor (FSR) under the brace foot, or a small load cell in the tail linkage, to estimate applied downforce.
- Tail joint endstop sensing: limit switch or Hall sensor to prevent over-travel.
4) Power electronics (custom, circuit-level design):
- If servo-based: a dedicated servo power rail with a buck regulator and bulk capacitance; monitor servo rail voltage sag.
- If DC motor/linear actuator-based: a dedicated tail actuator driver PCB, including:
  - H-bridge motor driver (gate driver + MOSFETs, or a motor-driver IC)
  - Flyback/transient protection
  - Current sensing (shunt + amplifier or an INA219 channel) to detect stall and enforce safe limits
  - Thermal monitoring near power devices, with firmware cutback

## Subsystem 4: Wireless Control and Main Controller

1) Function: Reliable teleoperation, safety failsafe, and sensor telemetry.

2) Controller (explicit parts): ESP32-WROOM-32E-N4 module as the main MCU (Wi-Fi/BLE for control + telemetry).

3) Features:
- Wireless control (BLE gamepad or Wi-Fi UDP)
- Failsafe: if command packets stop for >500 ms (heartbeat), the motors stop and the tail and arms relax to safe positions (see the sketch at the end of this proposal)
- Telemetry: battery voltage/current, motor currents, temperatures

## Subsystem 5: Power, Safety, and Compliance

1) Function: Safe high-current operation and course-lab compliance.

2) Planned safety hardware:
- Physical kill switch / removable arming plug
- Main fuse sized for worst-case current + wiring limits
- Separate "logic" and "power" rails with filtering
- LiPo-safe practices: voltage cutoff, charging in approved bags/areas, current limiting for high-current loads

## Physical Design: 3D Modeling and Fabrication

1) Modeling software: We will use Autodesk Fusion 360 for the entire mechanical design.

2) Material: Since this is a combat robot, material properties are a primary design constraint. We will first consider PLA, PETG, and ABS (final choice TBD).

3) Weight management and distribution (approximate budget, subject to change):
- Electronics & motors: ~45%
- Battery: ~15%
- Mechanical frame & 3D prints: ~30%
- Fasteners & tracks: ~10%

# Criterion for Success

All goals below are clearly testable:

1. Mobility/traction
- Maintain continuous drive for ≥10 minutes on a flat surface (no thermal shutdown).
- Reach ≥1.0 m/s straight-line speed on the lab floor with the full system powered.
- Execute 10 consecutive aggressive turns (full differential turning) without tread derailment.

2. Lifting arms
- Lift a 0.5 kg test block by ≥30 mm within 3 seconds, repeated for 10 cycles without mechanical failure.
- Self-righting: from upside-down, return to upright in ≤5 seconds using the arms and tail pose, in 3 out of 3 trials.

3. Tail "stinger" (stability and attack/control)
- Bracing downforce: when deployed in brace mode, the tail applies ≥30 N downward force on a scale (measured at the stinger foot) and holds for ≥30 seconds without actuator overheating or mechanical slip.
- Deployment speed: the tail transitions from "stowed" to "bracing" position in ≤1.0 second, repeated for 10 cycles.
- Anti-tip effectiveness: during a lift of the 0.5 kg test block, the robot does not tip past a defined angle threshold (e.g., <45°) in 3 out of 3 trials.
- Jab/pusher effectiveness: using the jab/pusher tip, the tail can push a 1.0 kg surrogate block on the lab floor by ≥20 cm within 2 seconds (repeatable in 3/3 trials).
- Hook-and-control: using the hook tip, the tail can latch onto a standardized pull point (e.g., a metal ring/edge on a test fixture) and pull a 0.5 kg load by ≥10 cm (repeatable in 3/3 trials).

4. Control and safety
- Wireless control range ≥10 m line-of-sight with <150 ms command latency.
- Failsafe stops the drive and disables high-force actions within ≤300 ms of signal loss (verified by logging + stopwatch/LED indicator).

5. Circuit-level design validation
- The custom motor driver and tail actuator PCB operate at the target battery voltage and demonstrate:
  - Current sensing accuracy within ±10% (bench-compared to a multimeter/shunt)
  - No overcurrent damage during stall tests (protected shutdown triggers as designed)
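A minimal sketch of the Subsystem 4 heartbeat failsafe, assuming a platform millisecond tick and hypothetical actuator helpers (`drive_stop()`, `tail_safe()`, `arms_safe()`); the 500 ms timeout comes from the proposal.

```c
/* Heartbeat failsafe sketch (hypothetical helpers; 500 ms timeout per
 * the proposal). last_packet_ms is updated by the radio receive path. */
#include <stdint.h>
#include <stdbool.h>

#define HEARTBEAT_TIMEOUT_MS 500

extern uint32_t millis(void);   /* platform tick, e.g., FreeRTOS/Arduino */
extern void drive_stop(void);   /* placeholder: zero both drive motors   */
extern void tail_safe(void);    /* placeholder: relax tail actuator      */
extern void arms_safe(void);    /* placeholder: relax lifter arms        */

static volatile uint32_t last_packet_ms;

/* Call from the radio receive handler on every valid command packet. */
void failsafe_feed(void) { last_packet_ms = millis(); }

/* Call periodically (e.g., every 10 ms) from the control loop. */
bool failsafe_poll(void) {
    if ((uint32_t)(millis() - last_packet_ms) > HEARTBEAT_TIMEOUT_MS) {
        drive_stop();
        tail_safe();
        arms_safe();
        return true;   /* failsafe engaged */
    }
    return false;
}
```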
| 5 | ANT-WEIGHT BATTLEBOT | Wenhao Zhang, Xiangyi Kong, Yuxin Zhang | Zhuoer Zhang | | | |
# ANT-WEIGHT BATTLEBOT

Team Members:
- Xiangyi Kong (xkong13)
- Yuxin Zhang (yuxinz11)
- Wenhao Zhang (wenhaoz5)

# Problem

Antweight (≤2 lb) combat robots must operate under strict weight, power, and control constraints while enduring repeated impacts, motor stalls, and wireless failures. A competitive design therefore depends on stable, fast interaction among power delivery, wireless control, and the integrated mechanical and electronic subsystems.

# Solution

We propose a 2 lb antweight battlebot with a four-wheel-drive chassis and an active front roller-and-fork weapon. All electronics are integrated on a custom PCB centered on an ESP32 microcontroller. The system is divided into four subsystems (Power, Drive, Weapon, and Control), allowing modular development and testing. Wireless PC-based control is implemented via WiFi or Bluetooth, with firmware failsafes ensuring automatic shutdown on RF link loss.

# Solution Components

## Subsystem 1: Power

Supplies stable power to motors and electronics while preventing brownouts, overcurrent damage, and unsafe operation.

Components:
- 3S LiPo battery (11.1 V)
- LM2596S-3.3 (buck regulator for the 3.3 V rail)

## Subsystem 2: Drive

Provides reliable locomotion, turning, and pushing power during combat (a drive-mixing sketch follows this proposal).

Components:
- Four DC gear motors
- L298N motor driver
- Four wheels mounted to a 3D-printed chassis

## Subsystem 3: Weapon

Implements the robot's primary mechanism for engaging and controlling opponents.

Components:
- Front roller driven by a DC motor
- PWM-based motor control circuitry
- Other 3D-printed weapon structure (forks and wedge guides)

## Subsystem 4: Control

Handles wireless communication, motion control, weapon control, and safety logic.

Components:
- ESP32 microcontroller on a custom PCB
- Integrated Bluetooth radio
- Current sensor for safety monitoring
- PC-based control interface

# Criterion For Success

- Weight compliance: total robot mass is less than 2.0 lb.
- Wireless control: the robot is reliably controlled from a PC via Bluetooth, with failsafe operation.
- Mobility: the robot operates continuously for 3 minutes without power resets.
- Weapon reliability: the weapon can be repeatedly actuated without electrical or mechanical failure.
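A minimal sketch of the differential ("arcade") drive mixing that the Control subsystem could feed into the L298N, assuming throttle/steer inputs already normalized to [-1, 1] by the PC link; the names and scaling are illustrative.

```c
/* Arcade-drive mixing sketch: one throttle and one steer input become
 * signed left/right duty values for the L298N. Inputs are assumed to
 * be normalized to [-1, 1] by the PC control link. */
#include <math.h>

typedef struct {
    float left;    /* signed duty, -1 (full reverse) .. +1 (full forward) */
    float right;
} drive_cmd_t;

drive_cmd_t arcade_mix(float throttle, float steer) {
    drive_cmd_t out = { throttle + steer, throttle - steer };
    /* Scale down, rather than clip, when a channel saturates so the
     * turn ratio between the two sides is preserved. */
    float peak = fmaxf(fabsf(out.left), fabsf(out.right));
    if (peak > 1.0f) { out.left /= peak; out.right /= peak; }
    return out;
}
```

The sign of each output would select the H-bridge direction pins, while the magnitude becomes the PWM duty cycle applied to the L298N enable inputs.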
| 6 | Interactive Desktop Companion Robot for Stress Relief | Jiajun Gao, Yu-Chen Shih, Zichao Wang | Haocheng Bill Yang | | | |
# Team

- Jiajun Gao (jiajung3)
- Yuchen Shih (ycshih2)
- Zichao Wang (zichao3)

# Problem

Students and office workers often spend extended periods working at desks, leading to mental fatigue, stress, and reduced focus. While mobile applications, videos, or music can provide temporary relief, they often require users to shift attention away from their primary tasks and lack a sense of physical presence. Static desk toys also fail to maintain long-term engagement because they do not adapt to user behavior or provide meaningful interaction.

There is a need for an interactive, physically present system that can provide short, low-effort interactions to help users relax without becoming a major distraction. Such a system should be compact, safe for desk use, and capable of responding naturally to user input.

# Solution

We propose an interactive desktop companion robot designed to reduce stress and boredom through voice interaction, expressive feedback, and simple physical motion. The robot has a compact, box-shaped form factor suitable for desk environments and can move using a tracked or differential-drive base. An ESP32-based controller coordinates audio processing, networking, control logic, and hardware interfaces.

The robot supports voice wake-up, natural-language conversation using a cloud-based language model, and speech synthesis for verbal responses. Visual expressions are displayed using a small screen or LED indicators to reflect internal states such as listening, thinking, or speaking. Spoken commands can also trigger physical actions, such as rotating, moving closer, or changing expressions. By combining audio, visual, and physical interaction, the system creates an engaging yet lightweight companion that fits naturally into a desk workflow.

# Solution Components

## Subsystem 1: Voice Interaction and Audio Processing

This subsystem enables natural voice-based interaction between the user and the robot. It performs wake-word detection locally and streams audio data to a remote server for speech recognition and response generation. The subsystem also handles audio playback and interruption control. Audio data is captured using a digital microphone, encoded, and transmitted over a network connection. Responses from the server are received as audio streams and played through an onboard speaker. Local wake-word detection ensures responsiveness and reduces unnecessary network usage.

Components:
- ESP32-S3 microcontroller with PSRAM
- ESP32-S3 integrated Wi-Fi module
- I2S digital microphone (INMP441 or equivalent)
- I2S audio amplifier (MAX98357A)
- 4Ω or 8Ω speaker

## Subsystem 2: Visual Expression and User Feedback

This subsystem provides visual feedback that represents the robot's internal state and interaction context. Visual cues improve usability and convey personality. Different states such as idle, listening, processing, speaking, and error are represented using animations or color patterns.

Components:
- SPI LCD display (ST7789 or equivalent), or
- RGB LEDs (WS2812B or equivalent)

## Subsystem 3: Motion and Actuation

This subsystem enables controlled movement on a desk surface. The robot performs simple motions such as forward movement, rotation, and stopping based on voice commands and sensor feedback. Motor control runs in a dedicated task to prevent interference with audio and networking functions.
Components:
- Two DC gear motors
- Dual H-bridge motor driver (TB6612FNG or equivalent)
- Optional wheel encoders

## Subsystem 4: Power Management and Safety

This subsystem manages power distribution and ensures safe operation. The robot is battery-powered to allow untethered use on a desk. Hardware and software protections limit speed, current, and movement range.

Components:
- Lithium battery with protection circuit
- Battery charging module
- Voltage regulators (5V and 3.3V)
- Physical power switch

## Subsystem 5: Safety Sensing (Desk-Edge Detection + Obstacle Avoidance)

This subsystem prevents the robot from falling off the desk and reduces collisions with nearby objects. It continuously monitors both the surface below the robot and the space in front of the robot. When a desk edge (cliff) or obstacle is detected, this subsystem overrides motion commands and triggers an immediate safe response.

Desk-edge detection (cliff detection): Two downward-facing distance sensors are mounted near the front-left and front-right corners. They measure the distance from the robot to the desk surface. If either sensor detects a sudden increase in distance beyond a calibrated baseline, the robot immediately stops and performs a short reverse maneuver to move away from the edge.

Obstacle avoidance: A forward-facing distance sensor detects objects in front of the robot. If an obstacle is within a predefined safety distance, the robot stops. If the obstacle remains, the robot can optionally rotate in place to search for a clear direction before continuing motion.

Control priority: Safety sensing has the highest priority in the motion stack (a sketch of this arbitration follows this proposal):
1. Desk-edge detection (highest priority)
2. Obstacle avoidance
3. User/voice motion commands (lowest priority)

Components:
- 2 × Time-of-Flight distance sensors for downward cliff detection (VL53L0X or equivalent, I2C)
- 1 × Time-of-Flight distance sensor for forward obstacle detection (VL53L0X or equivalent, I2C)

# Criterion For Success

The success of this project will be evaluated using the following high-level criteria:
1. The robot connects to a Wi-Fi network and establishes a server connection within 10 seconds of power-on.
2. The system detects a wake word and enters interaction mode within 2 seconds in a quiet environment.
3. The average end-to-end voice interaction latency is less than 5 seconds under normal network conditions.
4. At least five predefined voice commands trigger the correct robot actions with at least 90% accuracy during testing.
5. Visual feedback correctly reflects the system state in all operational modes.
6. The robot operates continuously for at least 30 minutes on battery power during active use.
7. When Wi-Fi is unavailable, the system enters a safe degraded mode without crashing or unsafe motion.
8. During a 10-minute continuous motion demonstration on a desk, the robot does not fall off the desk.
9. In an obstacle test, the robot is commanded to move forward toward a stationary obstacle (for example, a box or book) from multiple start distances for 20 trials. The robot must stop (or stop and turn) before making contact in at least 18/20 trials.
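A minimal sketch of the Subsystem 5 priority arbitration, assuming the ToF readings are already filtered into millimeter distances; the thresholds and command names are illustrative assumptions, not measured values.

```c
/* Motion arbitration sketch: cliff > obstacle > voice command.
 * Distances are assumed pre-filtered ToF readings in millimeters;
 * thresholds and names are illustrative. */
#include <stdint.h>

#define CLIFF_BASELINE_MM   25   /* calibrated desk-surface distance  */
#define CLIFF_MARGIN_MM     15   /* "sudden increase" beyond baseline */
#define OBSTACLE_STOP_MM   100   /* forward safety distance           */

typedef enum { CMD_STOP, CMD_REVERSE, CMD_ROTATE, CMD_FORWARD } motion_t;

motion_t arbitrate(uint16_t cliff_left_mm, uint16_t cliff_right_mm,
                   uint16_t forward_mm, motion_t voice_cmd) {
    /* 1. Desk-edge detection overrides everything: back away. */
    if (cliff_left_mm  > CLIFF_BASELINE_MM + CLIFF_MARGIN_MM ||
        cliff_right_mm > CLIFF_BASELINE_MM + CLIFF_MARGIN_MM)
        return CMD_REVERSE;

    /* 2. Obstacle ahead: block forward motion; rotate-in-place is allowed. */
    if (forward_mm < OBSTACLE_STOP_MM)
        return (voice_cmd == CMD_FORWARD) ? CMD_STOP : voice_cmd;

    /* 3. Otherwise the user/voice command passes through. */
    return voice_cmd;
}
```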
| 7 | SolarTrack | Rahul Patel, Rishikesh Balaji, Siddhant Jain | Haocheng Bill Yang | | | |
Problem: Fixed solar panels waste potential energy due to changing sun positions and limited monitoring.

Solution: This project proposes a self-positioning solar panel system that automatically orients itself to capture the maximum possible solar energy throughout the day and stores that energy in a battery. Unlike fixed panels, the system continuously adjusts its angle using light sensors or a sun-position algorithm controlled by a microcontroller, ensuring the best alignment with the sun as conditions change. The harvested energy is routed through a charge controller to safely charge a battery while protecting against overvoltage, overcurrent, and deep discharge. In addition to energy generation and storage, the system includes a mobile or web application that displays real-time and historical data such as panel voltage and current, total energy generated (Wh), battery state of charge, system efficiency, and power consumption of connected loads. This application allows users to monitor performance, compare tracked versus fixed operation, and understand how environmental conditions affect energy production.

Solution Components:

Dual-Axis Tracking Mechanism
The solar panels will be mounted on a two-axis articulating frame driven by servo and stepper motors. This allows independent control of both the east-west orientation and the tilt angle at which the panels are mounted, enabling the panels to follow the sun's path across the sky through the day.

Light Sensor Array
We will use an array of photodiodes or LDR sensors to detect the light intensity at various orientations in order to determine the optimal position for the panels. We could also implement an algorithm that calculates the sun's theoretical position from GPS coordinates for use during cloudy or partially shaded conditions.

Maximum Power Point Tracking Charge Controller
We will use a charge controller to interface between the solar panel and the battery and operate at the maximum power point. This will also protect the battery from overcharging, over-discharging, and reverse current flow.

Energy Storage and Management System
We will incorporate voltage and current sensors to measure the output from the panels, battery charge/discharge rates, and load consumption. We will use these measurements to compute real-time power, cumulative energy, and system efficiency for performance analysis.

Wireless Communication Module
We will use a WiFi communication module to send system data to a local server or a cloud-based server. This allows remote monitoring, firmware updates, and long-term data logging for performance analysis of tracked versus fixed-tilt operation.

Mobile/Web Application Dashboard
We will build an application that visualizes live and historical metrics, including but not limited to orientation angles, power output, energy yield, and tracking efficiency. With this application, users will be able to analyze trends, receive fault alerts, and evaluate the energy gained from solar tracking under different environmental conditions.

Criteria for success:
The success of this project will be evaluated under the following criteria.
- Wi-Fi connection between the solar panel/battery and a local/cloud server.
- Tracking of statistics, such as angle and output, for later display.
- A cache in which to store tracked statistics should the server be unavailable.
- Creation of a web app to display the tracked statistics.
- Creation of an algorithm allowing the solar panel to "follow" the sun (sketched below).
- Integration of the algorithm onto a microcontroller, interfacing with the light sensors and motors.
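One common way to implement the "follow the sun" criterion with an LDR array is to compare opposing sensor pairs and step each axis toward the brighter side. The sketch below assumes four LDRs in a cross arrangement and hypothetical ADC/motor-step helpers; the deadband value is illustrative.

```c
/* Light-following sketch for dual-axis tracking. Assumes four LDRs in
 * a cross layout (top/bottom for elevation, left/right for azimuth)
 * and hypothetical ADC/motor helpers; the deadband avoids hunting. */
#include <stdint.h>
#include <stdlib.h>

#define DEADBAND 40   /* ADC counts of imbalance tolerated before moving */

extern uint16_t read_ldr(int channel);   /* placeholder ADC read      */
extern void step_azimuth(int dir);       /* placeholder: -1, 0, or +1 */
extern void step_elevation(int dir);     /* placeholder: -1, 0, or +1 */
enum { LDR_TOP, LDR_BOTTOM, LDR_LEFT, LDR_RIGHT };

static int direction(int delta) {
    if (abs(delta) <= DEADBAND) return 0;   /* balanced: hold position   */
    return delta > 0 ? 1 : -1;              /* move toward brighter side */
}

/* Call periodically, e.g., every few seconds. */
void track_step(void) {
    int az = (int)read_ldr(LDR_RIGHT) - (int)read_ldr(LDR_LEFT);
    int el = (int)read_ldr(LDR_TOP)   - (int)read_ldr(LDR_BOTTOM);
    step_azimuth(direction(az));
    step_elevation(direction(el));
}
```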
| 8 | Facial Quantum Matching Mirror | Akhil Morisetty, Alex Cheng, Ethan Zhang | | | | |
# Facial Quantum Matching Mirror

Team Members:
- Akhil Morisetty (akhilm6)
- Alex Cheng (xueruic2)
- Ethan Zhang (ethanjz2)

# Problem

Chicago is investing 500 million dollars in the development of the Illinois Quantum and Microelectronics Park. Professor Kwiat is looking for a viable prototype of a Facial Quantum Matching Mirror that he can show investors to persuade them to fund a more expensive, museum-ready version. Our task is to create a visually appealing, functioning prototype that Professor Kwiat can present to investors, to eventually be added to the Illinois Quantum and Microelectronics Park.

# Solution

We propose a Facial Quantum Matching Mirror: an interactive display that uses a one-way mirror and facial recognition to reflect a user's likeness matched with well-known figures in selected categories such as engineers, scientists, or entrepreneurs. When the display is illuminated, the one-way mirror becomes transparent, allowing the user to see the matched character overlaid behind the glass. This creates the illusion that the user is "face-to-face" with a figure who resembles them, combining reflection, computation, and visual storytelling in a single interactive experience.

The system consists of a one-way mirror, a display panel of equal size mounted behind the mirror, a surrounding LED light ring, a camera, local storage, a microcontroller, and a user input button, all integrated within a single frame. When the system is idle, the display remains dark, causing the mirror to behave as a reflective surface so the user sees only their own reflection. Upon pressing the button, the user selects a category and the system is activated. The microcontroller triggers visual feedback through the LED ring and commands the camera to capture an image of the user. This image is processed by the facial recognition backend, which identifies the most visually similar individual from the selected category. The result index is returned to the microcontroller, which retrieves the corresponding image from local storage and displays it on the screen.

# Solution Components

## Subsystem 1: Display Unit

This subsystem serves as the presentation and capture layer of the smart mirror. It uses an onboard camera to capture a photo of the person standing in front of the mirror, and a monitor behind a two-way mirror to render the user experience (UI prompts, loading screens, images, and optional video). During idle mode, the monitor remains black so the mirror looks fully reflective, like a normal mirror. When the user presses the start button, the display transitions to a loading interface while the backend subsystems process the captured image and return a match. Once processing completes, the monitor displays the selected quantum scientist/engineer/entrepreneur (and any associated content), giving the mirror the appearance of an interactive digital mirror.

Components:
- 18" x 24" wooden picture frame
- SANSUI 24" 100 Hz PC monitor
- 18" x 24" glass mirror
- 18" x 24" 50% reflective film

## Subsystem 2: LED Sensor Unit

This subsystem provides visual feedback to the participant throughout the interaction. The LED Sensor Unit is activated after the participant presses the startup button and indicates that the system is processing the facial scan and matching operation. The LEDs will flash in a predefined pattern to signal that the system is active and working.
The LED Sensor Unit receives control signals from the system microcontroller and remains active until an "off" signal is sent by the display subsystem or system controller, indicating that the matched image or video has finished displaying. Once the off signal is received, the LEDs are turned off and the system returns to an idle state. The LED lights are mounted around the frame of the mirror to ensure high visibility and to enhance the overall user experience.

Components:
- Addressable LED strip: SEZO WS2812B ECO LED Strip Light

## Subsystem 3: Startup Button

This subsystem starts the entire interaction. The participant begins using the mirror by choosing options from a set of buttons available to them: one control to select the desired quantum category, and another to start the camera/scan process. The participant controls both what they are interested in and when the process starts. The buttons will be stationed next to where the participant stands and wired to the microcontroller subsystem.

Components:
- Button: 2x16 LCD Display with Controller

## Subsystem 4: System Microcontroller

The system microcontroller organizes and communicates between all the other subsystems in the project (the interaction flow is sketched after this proposal). All of the logic and transmission of data is handled by this subsystem, and the software backend sends data back and forth with the microcontroller. The system microcontroller is the overarching subsystem of the project, playing a role in every component of the solution.

Components:
- Microcontroller: ESP32-S3-WROOM-1-N16

# Criterion For Success

- Participants are able to select the category they are interested in finding a match for.
- The system accurately matches the participant to a person in the selected topic, with accuracy of at least 75%.
- After a match has been found, a personal video from the match is displayed.
- The device does not start until the participant steps onto the pressure plate.
- The surrounding LEDs stay on from the time the user presses the button until the character image disappears.
- The image on the monitor is shown for up to 15 seconds, then the screen returns to black.
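A minimal sketch of the Subsystem 4 interaction flow on the ESP32-S3, treating the backend match, LED ring, and display as hypothetical helper calls; the 15-second display timeout comes from the success criteria.

```c
/* Interaction-flow sketch for the mirror controller. Helper functions
 * are hypothetical placeholders for the real subsystem drivers. The
 * 15 s display timeout comes from the success criteria. */
#include <stdint.h>

#define DISPLAY_TIMEOUT_MS (15 * 1000)

typedef enum { IDLE, CAPTURE, MATCHING, SHOWING } mirror_state_t;

extern int  start_pressed(void);
extern int  selected_category(void);
extern void leds_on(void), leds_off(void);
extern void capture_image(void);
extern int  request_match(int category);   /* match index from backend      */
extern void show_match(int index);         /* display stored image/video    */
extern void display_blank(void);           /* black screen: mirror reflects */
extern uint32_t millis(void);

void mirror_task(void) {
    mirror_state_t state = IDLE;
    uint32_t shown_at = 0;
    int match = -1;
    for (;;) {
        switch (state) {
        case IDLE:
            display_blank();                       /* mirror mode           */
            if (start_pressed()) state = CAPTURE;
            break;
        case CAPTURE:
            leds_on();                             /* "processing" feedback */
            capture_image();
            state = MATCHING;
            break;
        case MATCHING:
            match = request_match(selected_category());
            shown_at = millis();
            state = SHOWING;
            break;
        case SHOWING:
            show_match(match);
            if (millis() - shown_at > DISPLAY_TIMEOUT_MS) {
                leds_off();                        /* back to idle mirror   */
                state = IDLE;
            }
            break;
        }
    }
}
```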
| 9 | Automated Cocktail/Mocktail Maker | Benjamin Kotlarski, Dominic Andrejek, Nick Kubiak | | | | |
# Automated Cocktail/Mocktail Maker

Team Members:
- Dominic Andrejek (da24)
- Benjamin Kotlarski (bkotl2)
- Nick Kubiak (nkubi2)

# Problem

Making cocktails or mocktails can be tricky: many different ingredients must be measured accurately, and the process is prone to human error. In social settings it can also be time-consuming and inconvenient. While some automatic drink dispensers already exist, most are expensive, very large, and have many limitations.

# Solution

An automated cocktail and mocktail mixing machine can fix this. Based on a user's input, the system will dispense a precise amount of the specific liquids needed to make that drink. A sensor will check for cup presence so liquid is not spilled everywhere, and user input will be handled through buttons, a small graphical UI, or potentially voice. The system will contain multiple containers to hold liquids, with pumps or solenoids connecting them to the cup under microcontroller control. Weight sensors will also be implemented to make sure the correct amount of each liquid is dispensed. In use, if a cup is present and the user gives a recognizable command from the pre-defined recipes, the microcontroller will activate the appropriate pumps/solenoids to drive the correct ingredients into the cup to make the desired drink. Our design will be a more affordable solution (using much cheaper materials), letting residential users enjoy the precision and luxury of properly measured drinks without external premade pods or absurd prices.

# Solution Components

## Subsystem 1: User Interface (UI)

Initially this will be a simple push button; in the future it could expand or be reworked into a more complex screen-based UI once the pump system grows to support more drink options with more ingredients. Expanding to multiple buttons for multiple drinks is also a simpler option once we get an initial drink working. An initial button we can use: 16mm Panel Mount Momentary Pushbutton - Red, Product ID 1445.

## Subsystem 2: Stirring Mechanism

Since these drinks are produced from two different liquids, they naturally need to be mixed. The purpose of this subsystem is to automate the stirring process. It requires two different motors connected back to the system's microcontroller: the first motor controls the height of the stirring arm so it can be lowered into and out of the cup, while the second motor rotates the stirrer to mix the drink. For motors capable of this, we found that the linear up/down motion could be handled with a DFRobot FIT0806, while the rotational motion can be done with an Adafruit 3777. Both are DC powered, so we will use batteries to power our system; if we find issues with draining batteries quickly or needing higher power output, we will add an AC-to-DC converter so we can use wall power.

## Subsystem 3: Pumps and Plumbing

This subsystem is in charge of transporting the liquid from our housing containers to the central cup. Upon a signal from the microcontroller, the pumps will turn on, moving liquid from the housing containers to the cup through small tubes.
Once one pump finishes dispensing its liquid and the amount is verified as correct, the next pump will turn on and dispense the next liquid into the central cup. For the tubing we will use small plastic tubing: 1/6 in. I.D. x 1/4 in. O.D. clear vinyl tubing. The pumps we will use to control the liquid flow are the Adafruit 1150.

## Subsystem 4: Intercomponent Communication System

A microcontroller system to communicate between all of the subsystems. For example, on a user input the microcontroller will tell the pumps to start moving liquid from the housing containers to the central cup. This also includes various colored LEDs to show the status of each step or whether something might be wrong. The microcontroller will tell each pump if/when to dispense, how much to dispense, and how the motors should move. We will use an ESP32 microcontroller along with our custom PCB.

## Subsystem 5: Functionality and Weight Verification System

Weight sensors will verify the amounts/presence of liquids and also verify that a cup is present. Each container (both the liquid housing containers and the central cup) will have a weight sensor below it. The weight sensor below the central cup has two purposes: first, the microcontroller must read a non-zero value from it, along with a user input, before dispensing any liquid; second, it confirms that the weight lost from the liquid housing is regained in the central cup, so we know all liquid is fully transferred and not stuck in the tubing. Candidate weight sensors are the Adafruit 4540, SparkFun TAL220, and Adafruit 454; the choice will be based on the weight limit we determine necessary during the design phase. Whichever is chosen, all of these require an HX711 amplifier, for which standard driver code is available (see the sketch later in this proposal).

# Criterion For Success

In its most fundamental and basic form, our project must be able to successfully produce at least one simple stirred cocktail upon a user's input. This must include the following functions. First is the ability to check whether a cup is present before pouring any liquids, as well as to check whether the right amount of the necessary liquids is available before pouring. Once that is complete, the stirring and pouring mechanisms should move down into place, and the different liquids get individually poured into the cup. The amount of each liquid should be measured via the weight sensor below the cup so that each time the drink is produced the portions remain consistent. After each liquid is poured into the cup, the stirring device should clearly activate, and when it finishes, the stirring and pouring mechanisms should move back up to their starting positions, with a green LED indicating that the process was completed.

If time permits, we hope to expand our goals in three ways. The first is to expand the selection of drinks by having multiple options available to choose from. A related approach is to incorporate more complex options that require multiple ingredients. The final goal is a more complex and visually appealing UI so that users can easily select between and see different drink options on a screen.
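To illustrate Subsystem 5's closed-loop dispensing, here is a minimal sketch that runs one pump until the cup scale gains the target amount; `hx711_read_grams()` and the pump/timer helpers are hypothetical stand-ins for the real HX711 driver and GPIO code, and the thresholds are illustrative.

```c
/* Dispense-by-weight sketch: run a pump until the cup's scale reports
 * the target gain, with a timeout guard. Helper functions are
 * hypothetical stand-ins for real drivers. */
#include <stdbool.h>
#include <stdint.h>

#define DISPENSE_TIMEOUT_MS 15000
#define CUP_MIN_GRAMS       20.0f   /* below this, no cup is present */

extern float    hx711_read_grams(int scale_id);  /* tared scale reading */
extern void     pump_set(int pump_id, bool on);
extern uint32_t millis(void);

/* Returns true if target_g of liquid was added to the cup. */
bool dispense(int pump_id, int cup_scale_id, float target_g) {
    float start = hx711_read_grams(cup_scale_id);
    if (start < CUP_MIN_GRAMS) return false;     /* no cup: do not pour */

    uint32_t t0 = millis();
    pump_set(pump_id, true);
    while (hx711_read_grams(cup_scale_id) - start < target_g) {
        if (millis() - t0 > DISPENSE_TIMEOUT_MS) {   /* empty container? */
            pump_set(pump_id, false);
            return false;
        }
    }
    pump_set(pump_id, false);
    return true;
}
```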
# Alternatives

Three categories of alternatives currently exist for our proposed project. The first is a coaster-like device that connects to a phone app via Bluetooth to weigh the liquids you add to your cup and guide you in making your drinks. This product, while the least expensive of the options, is by far the simplest and the least automated. The next alternative is a fully automated drink creator that works by having users insert a flavor pod for their desired drink and mixing it with the correct liquor. While this comes closer to performing the same function as our idea, its price is drastically higher and it requires users to purchase the company's specific flavor pods. Finally, the alternative most similar to our design is the Barsys 360 Cocktail Maker Machine, which also takes in various liquids and dispenses them accordingly for whatever mixed drink one desires, but that is where its functionality ends: besides once again carrying a very large price tag, it does not automatically stir the drinks for the user. Commercial-grade versions of this type of machine also exist, but those jump even further in price, to around three thousand dollars.
| 10 | OmniSense-Dual — Dual-Wearable 360° Blind-Spot Detection, Directional Haptic Hazard Alerts, and Belly-Based Navigation for Pedestrian Safety | Alex Jin, Jiateng Ma, Simon Xia | | | | |
Team Members:
- Simon Xia (hx17)
- Jiateng Ma (jiateng4)
- Alex Jin (jin50)

**1. Problem Statement**

Pedestrians in urban and campus environments frequently share space with bicycles, e-scooters, cars, and other pedestrians approaching from all directions. Unlike drivers, who benefit from mirrors and active driver-assistance systems, pedestrians have:

- Unprotected blind spots: Fast-approaching objects from behind or from diagonal sectors are often perceived too late, especially on shared bike/pedestrian paths and narrow sidewalks.
- Reduced situational awareness: Headphones, smartphones, and other distractions degrade auditory and visual awareness, making it harder to detect hazards or notice subtle visual cues.
- Navigation burden: Outdoor and indoor navigation typically depend on visually checking a smartphone map or listening to voice guidance. Both approaches demand attention, occupy hands or ears, and can themselves be unsafe in traffic or crowded environments. For visually impaired users, relying solely on audio is also not ideal.

Existing systems (smartphone maps, voice navigation, cycling radars, blind canes) each address part of the problem but do not provide integrated 360° safety sensing plus hands-free navigation with a clear separation of meaning.

**2. Solution Overview**

We propose OmniSense-Dual, a dual-wearable system consisting of:
- a waist/belly-mounted sensing, compute, and navigation haptic module, and
- a head-mounted sensing + haptic hazard alert module.

Key design choice:
- Head channel = hazard alerts only
- Belly channel = navigation cues only

This cleanly separates "something is dangerous around you" from "where you should go."

Core functions:
- 360° Blind-Spot Hazard Awareness: The belly module uses mmWave and ToF/ultrasonic sensors to detect approaching objects around the torso. The head module provides an additional sensing plane for head-level obstacles. When a hazard is detected, the headband vibrates on the corresponding side/direction, signaling an urgent warning.
- Hands-Free Navigation: A smartphone app provides waypoints (outdoor via GPS; optionally indoor via BLE/UWB). The belly module fuses waypoints with IMU heading and encodes navigation instructions as gentle vibration patterns on the belly module (e.g., left side of belt = turn left soon). Navigation never uses the head motors, so it cannot be confused with hazard alerts.

OmniSense-Dual is designed for campus walking, urban commuting, and accessibility support, with a strong emphasis on non-visual, non-auditory, and clearly distinguishable feedback.

**3. Solution Components**

**Component A: Waist/Belly Perception & Compute Module**

Placement: Worn around the waist or belly using an elastic belt.

Sensors:
- Rear + rear-diagonal (L/R): mmWave radar (60 GHz)
- Left + right: ToF (e.g., VL53 series) or ultrasonic
- Front-lower: ToF/IR for low obstacles (curbs, poles, steps)

Functions:
- Provides 360° sensing at the waist plane
- Detects moving vs. static obstacles
- Includes a 6-DoF IMU for heading + gait
- Includes battery + charger + regulators
- Belly haptics used only for navigation

**Component B: Head-Mounted Hazard Alert Module**

Placement: Headband, cap insert, or lightweight strap.
Haptic feedback: 8 directional motors placed at:
- Front (0°)
- Front-left (45°)
- Left (90°)
- Rear-left (135°)
- Rear (180°)
- Rear-right (225°)
- Right (270°)
- Front-right (315°)

Electronics:
- Small BLE SoC/MCU
- Optional short-range ToF for head-height obstacles
- Small battery or wired power from the belt

Role:
- Hazard alerts only
- No navigation patterns

**Component C: Navigation & Belly Haptic Interface**

Input source: The phone provides the route via GPS (outdoor) or BLE/UWB (indoor).

Processing on the belt module:
- Receives the desired bearing from the phone
- Computes the angle difference using the IMU
- Triggers a haptic cue on the belt

**Component D: Safety Hazard Logic**

Inputs:
- mmWave + ToF/ultrasonic
- Optional head ToF
- IMU heading

Hazards detected:
- Approaching fast objects (bike, scooter)
- Sudden close static obstacles
- Rear or diagonal intrusion
- Low objects in the walking path

Head feedback patterns (hazard only; a motor-selection sketch follows this proposal):
- Default hazard → strong 0.5–1.0 s pulse in the correct motor direction
- High severity → repeated strong pulses
- Multiple hazards → priority by time-to-collision

**Component E: Electronics & PCB**

Belly PCB includes:
- MCU (e.g., STM32H7 or ESP32-S3)
- Sensor interfaces (mmWave, ToF, IMU)
- BLE for phone + headband
- Haptic drivers for belt motors
- Li-ion charging + regulation

Head PCB includes:
- BLE SoC (e.g., nRF52832/ESP32-C3-Mini)
- 8 motor drivers (directional)
- Optional ToF
- Small battery or connector

**4. Criterion for Success**

Safety:
- Detect bikes/scooters ≥5 m away with ≥90% recall
- Head direction correctness ≥90%
- Alert latency ≤250 ms
- Dual-plane sensing reduces occlusion misses by ≥30%

Navigation:
- Turn accuracy using belly haptics ≥85%
- Heading deviation during "straight" ≤10°
- Navigation update latency ≤200 ms

Channel separation:
- Head = hazard, belly = navigation
- User classification accuracy (hazard vs. nav) ≥90%

Usability:
- Battery life ≥4 hours
- Total mass ≤350 g (head ≤150 g)
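A minimal sketch of mapping a hazard bearing to the nearest of the eight head motors, assuming the hazard direction is already expressed relative to the wearer's heading (IMU-compensated); the rounding math follows directly from the 45° motor spacing listed in Component B.

```c
/* Map a hazard bearing (degrees relative to the wearer's heading;
 * 0 = front, with motor i sitting at i*45 degrees per Component B's
 * layout: 0=front, 1=front-left, ..., 7=front-right) to the nearest
 * of the 8 head motors. The bearing is assumed IMU-compensated. */
#include <math.h>

int hazard_motor_index(float bearing_deg) {
    /* Normalize to [0, 360). */
    float b = fmodf(bearing_deg, 360.0f);
    if (b < 0.0f) b += 360.0f;
    /* Round to the nearest 45-degree sector; 8 wraps back to front. */
    int idx = (int)lroundf(b / 45.0f);
    return idx % 8;
}
```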
||||||
| 11 | Ant-weight Durian Battlebot |
Matthew Jin Timothy Fong Ved Tiwari |
Zhuoer Zhang | |||
| # Title Ant-Weight Durian Battlebot # TEAM MEMBERS: - Matthew Jin (mj41) - Tim Fong (tfong5) - Ved Tiwari (vedt2) # PROBLEM We want to design an ant-weight Battlebot that can outlast and tactically out-compete other entries in the competition. Several restrictions/requisites (outlined by the National Robotics Challenge rulebook) are as follows: - Robot must be under 2 lb (we are not opting for a bipedal/quadrupedal robot) - Usage of an H-bridge motor system - No metal components whatsoever - Weaponry (either passive or active) - Power delivery system (battery) - Usage of sensors/actuators - Must be 3D printed using one/multiple of 5 materials: PET, PETG, ABS, PLA, PLA+ - Custom PCB to house a microcontroller - Microcontroller must have Bluetooth or WiFi capability to be controlled externally via a nearby PC/laptop - Simple and complete manual shutdown (within 60 s) without the usage of an RF link # SOLUTION We collaboratively decided on a Battlebot design with a passive/counter-type weapon: spikes that cover the outer shell (resembling a Durian shell with rounded, shallower spikes). Numerous other countermeasures and engineering decisions have been incorporated to account for tactics employed by other participating teams. Unlike other common approaches, the absence of an active weapon allows that weight to contribute in other directions. With passive weaponry, the burden falls on microcontroller-initiated driver-assistance algorithms and the shell armor design to disarm/decommission the competition. You’re in trouble. # SOLUTION COMPONENTS ## PASSIVE WEAPONRY The shell spikes are intentionally shallow and rounded to prevent chipping and to maximize structural integrity under impact. This will prove useful against many active weapon forms, namely hammer- and rotary-type Battlebots in head-on collisions. ## OUTER SHELL The absence of an active weapon gives more wiggle room to make the outer shell thicker. To counter Battlebots with forklift/door-wedge armaments that aim to flip us over, we will intentionally minimize the clearance between the bottom lip of the shell and the bottom of the wheels. Additionally, the shell will be thicker toward the middle/base (compared to the top) to create an even lower center of gravity. The shell will be 3D printed in PETG, given its functional robustness in the context of this Battlebot competition. It is durable, impact resistant, non-brittle, and warp-resistant during the printing process. ## ELECTRONICS Preface: this section covers the sensors, battery system, microcontroller, AND the electronics + battery trays. We decided to use an STM32 microcontroller over other popular microcontrollers (namely the ESP32) due to its superior compute power. The STM32 gives us a better ability to perform algorithmic computations on board using data collected from our sensors. An example use case is determining if and when the bot is close to flipping over. By calculating the tilt from the gyroscope and accelerometer on the IMU, we can command the wheels to spin at a corrective rate to reduce the chances of flipping (see the sketch below). Apart from this, wireless-capable STM32 parts (the STM32WB series) provide native Bluetooth support, eliminating the need to configure a separate radio chip in the microcontroller setup.
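A minimal sketch of that tip-over check, assuming the IMU driver hands us the gravity vector in g; the 45° threshold and the helper names are illustrative placeholders, not tuned values:

```c
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

#define PI 3.14159265358979323846

/* Estimate tilt from the accelerometer's gravity vector (ax, ay, az in g):
 * az is ~ +1 g when the bot sits level, and tilt grows as it tips. */
static double tilt_deg(double ax, double ay, double az)
{
    return atan2(sqrt(ax * ax + ay * ay), az) * 180.0 / PI;
}

/* Flag a likely flip; the firmware could then pulse the wheels against
 * the tipping direction, as described above. */
static bool flip_risk(double ax, double ay, double az)
{
    return tilt_deg(ax, ay, az) > 45.0;
}

int main(void)
{
    printf("tilt = %.1f deg, flip risk = %d\n",
           tilt_deg(0.5, 0.0, 0.87), flip_risk(0.5, 0.0, 0.87));
    return 0;
}
```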
For the battery, we have chosen a 4S (14.8 V) 750 mAh LiPo battery, as it provides ample flexibility between power and charge capacity, both of which are important for a nimble Battlebot that must last the entire contest. This battery will be stored in a lower-level tray (again, to lower the center of gravity) to protect it. Additionally, a dedicated battery-protection IC will be utilized. A buck converter will step the 14.8 V down to power the microcontroller and other components at their lower required voltages. We will use an IMU and load sensors to create two feedback systems. The first feedback system is between the microcontroller and the Battlebot’s localization. The second is between the microcontroller and the motor health/status. The goal of these two systems is to ensure that the Battlebot’s movement is accurate and that its motors do not malfunction. Alongside the sensors, the microcontroller/PCB will be located in an upper-level tray above the battery tray. ## DRIVETRAIN As outlined in Professor Gruev’s slides, we are to use an H-bridge system. We’ve opted for a multidirectional 4WD setup with the wheels attached to the inner perimeter of the shell. This approach allows fluid motion while simultaneously shielding the wheels from external impacts. Wheels will be made of urethane, as they are heavy (contributing to a lower center of gravity), durable, grippy, and slow to wear. Brushless DC motors will be used due to their high power-to-weight ratio and long, reliable lifespan. # CRITERION FOR SUCCESS - Battlebot electronics are well-protected, functional, and durable - Outer shell does not break under expected impact - Spikes do not chip and prove effective in using others’ active weapons against them - Battlebot does not flip over during trial runs/competition scenario reenactments - Battery lasts the entire combat duration |
||||||
| 12 | 4-Wheel-Drive Invertible Ant-Weight Battlebot |
Haoru Li Ziheng Qi Ziyi Wang |
Zhuoer Zhang | |||
| # Ant Weight Battlebot Team Members: - Ziyi Wang (zw67) - Ziheng Qi (zihengq2) - Haoru Li (haorul2) # Problem For ant-weight battlebots, 3D-printed materials introduce significant vulnerabilities. Though many robots can effectively defend against strikes, they are prone to "turtling" and may lose mobility when flipped. Under the competition rules, losing mobility quickly leads to a knockout. When inverted, weapon systems such as vertical spinners may rotate in an ineffective direction or lose engagement with the opponent entirely, significantly reducing combat effectiveness. Preserving weapon functionality in both orientations remains a critical challenge for ant-weight combat robots. In addition, sudden high-impact collisions can introduce transient power spikes and voltage fluctuations in the power distribution system, which may disrupt onboard electronics or cause overall system instability during operation. # Solution We want to design an invertible 4-wheel-drive battlebot with a vertical drum spinner. According to our investigation, a vertical drum spinner is an ideal weapon choice, as it is rigid and can effectively flip opponents. To solve the problem of "turtling," the robot uses a symmetric chassis with wheel diameters exceeding the total chassis height, ensuring traction regardless of orientation. Bigger wheels also allow the battlebot to function after being flipped, and the vertical roller can reverse its direction as well. To address the cognitive load of inverted driving, we integrate an onboard IMU that automatically detects a flip and remaps the motor control logic in the firmware, making the transition seamless for the operator. To ensure electrical stability and prevent brownouts, the custom PCB utilizes a decoupled power architecture. We isolate the high-current weapon system from the sensitive logic rails using a high-efficiency switching regulator and a large bulk capacitor array. The robot is divided into three primary subsystems: Power Management, Control & Sensing, and Drive & Weapon Actuation. # Solution Components ## Subsystem 1: Power Management and Distribution Provides stable, isolated power delivery to all robot subsystems while meeting the 24 V maximum battery voltage requirement. Detailed specifications await motor selection. ## Subsystem 2: Control and Communication Function: Receives operator commands, processes IMU orientation data, and generates appropriate motor control signals with automatic inversion compensation. *Components:* * Microcontroller: ESP32-WROOM-32D module with integrated WiFi/Bluetooth * Part: Espressif ESP32-WROOM-32D * IMU Sensor: 6-axis accelerometer and gyroscope module * Part: InvenSense MPU-6050 (GY-521 breakout module) * Interface: I2C communication at 400 kHz Firmware Logic: Continuously poll the IMU at 100 Hz to determine Z-axis orientation. If the Z-acceleration indicates inversion (threshold: -8 m/s² to -10 m/s²), apply a 180° remap to the drive motor PWM signals to fit the pose change (a sketch of this logic appears at the end of this proposal). Maintain weapon control polarity regardless of orientation. Implement an exponential response curve on drive inputs for fine control. ## Subsystem 3: Drive Train Provides four-wheel independent drive with sufficient torque for pushing and maneuverability. Components: * 4 drive motors with an expected weight of ~10 g each ## Subsystem 4: Weapon System Vertical drum spinner delivering kinetic energy impacts to destabilize and damage opponents.
Performance Targets: Weapon tip speed: 150-200 mph (conservative for material constraints) Spin-up time: <3 seconds to operating speed ## Subsystem 5: Chassis and Structure Provides impact-resistant housing for all components while maintaining invertible geometry and meeting weight requirements. # Criterion For Success 1. The total weight of the battlebot must always remain below 2 lb, and the robot must execute a complete motor shutdown within 2 seconds once triggered by a software or hardware switch. 2. Logic systems (ESP32, IMU) must maintain operation during weapon spin-up and simulated impact loads, and the communication link must stay up. 3. The robot works as expected: it moves according to PC inputs without manual adjustment, the weapon spins vertically, it shuts down promptly on PC command, and it adapts automatically when flipped (retaining mobility and weapon functionality). 4. The chassis and mounting structures must withstand repeated weapon engagement and collisions without structural failure.
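As promised above, a minimal sketch of Subsystem 2's inversion compensation: hysteresis thresholds straddle the proposal's -8 to -10 m/s² band so impact vibration cannot chatter the mapping (function names are illustrative):

```c
#include <stdbool.h>
#include <stdio.h>

#define Z_INVERTED (-8.0)  /* m/s^2: declare "inverted" below this        */
#define Z_UPRIGHT  ( 8.0)  /* m/s^2: declare "upright" only well above it */

static bool inverted = false;

/* Called at the 100 Hz IMU poll rate with the Z-axis acceleration. */
static void update_orientation(double az_mps2)
{
    if (!inverted && az_mps2 < Z_INVERTED) inverted = true;
    else if (inverted && az_mps2 > Z_UPRIGHT) inverted = false;
}

/* Per-side drive commands in [-1, 1]: swapping sides and negating keeps
 * the operator's "forward" pointing forward; weapon polarity is untouched. */
static void remap_drive(double l_in, double r_in, double *l_out, double *r_out)
{
    if (inverted) { *l_out = -r_in; *r_out = -l_in; }
    else          { *l_out =  l_in; *r_out =  r_in; }
}

int main(void)
{
    double l, r;
    update_orientation(-9.5);          /* simulated flip */
    remap_drive(1.0, 1.0, &l, &r);
    printf("inverted=%d L=%.1f R=%.1f\n", inverted, l, r);
    return 0;
}
```
|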
||||||
| 13 | Invertible-Control Ant-Weight Battle Bot |
Ben Goldman Jack Moran |
Haocheng Bill Yang | |||
| **TEAM MEMBERS:** - Jack Moran (jackm6) - Ben Goldman (bg23) **PROBLEM:** The primary objective is to create a bot weighing under 2 lbs to disable an opponent in an ant-weight combat battle bots match in a confined space. Winning a match like this often requires a high skill level to pilot a robot, especially as bots get flipped or lose control when other bots attack. Additionally, many bots suffer from reliability issues as teams overcomplicate the robotics, which leads to vulnerabilities. We need a solution that maximizes weapon power while simplifying the driving experience, so all the operator needs to focus on is planning attacks against opponent bots. **SOLUTION:** We propose a 2 lb combat battle bot designed to deliver catastrophic blows to opponents using a double-sided horizontal spinning bar, with an easy-to-use control system to allow for efficient battle. The chassis will feature a large primary weapon consisting of a horizontal spinning bar capable of delivering powerful attacks after winding up, due to its high inertia. This primary weapon will stick out of the front. The sides and back of the bot will be rounded in shape with no sharp edges or corners in order to deflect attacks and prevent opponents' weapons from grabbing on. For controls and movement, the bot will feature two wheels to enable a tank-like steering system. These wheels will be enclosed within the body of the bot so that only a small section, where each wheel contacts the ground, protrudes from the top and bottom of the bot. Small skid sections will allow the remainder of the body to stay low to the ground while still moving easily on smooth surfaces. Since the bot will have a weapon, defense system, and wheels which can operate in either orientation, this bot will be capable of operating if flipped. However, whenever the bot is inverted, the steering and controls would be inverted, making it hard to command. To combat this, we will include an IMU sensor to detect if the bot has been flipped. The controls are then reversed automatically so that the driver does not need to focus on the orientation of the robot and can focus on directing the weapon at opponents. The bot would be controlled from the driver's laptop. **SOLUTION COMPONENTS:** **Subsystem 1: Mobility and Drive System** This subsystem is responsible for the mobility and driving capabilities of our bot. The bot needs to be highly mobile and fast in order to evade and attack other bots. In addition, this system will need to be capable of operating no matter the orientation of the bot. Using two motors for mobility will allow the bot to turn very efficiently using tank-like steering. - Drive type: Differential wheeled drive (two motors). - Wheel placement: Wheels recessed inside the chassis to protect against direct impacts. Each wheel only slightly protrudes from the top and bottom of the chassis. - Motors: High-torque brushed DC gearmotors sized for ant-weight limits. - Control: Independent left/right motor control via H-bridges on the custom PCB. **Subsystem 2: Spinning Weapon System** The main weapon of our battle bot is a horizontal spinning bar. This piece will be 3D printed in a manner such that it is very strong and will not break on impact. It will be driven by the bot's third motor. In addition, this weapon must comply with ant-weight regulations; therefore, it must stop completely within 60 seconds of shutoff.
The weapon provides offensive capability while keeping mechanical complexity to a minimum. - Weapon type: Horizontal spinning bar. - Actuation: Brushed DC motor, belt driven or directly driven. - Safety: Software-controlled spin-up sequence and current monitoring to prevent overcurrent or unsafe startup. **Subsystem 3: Orientation Detection and Control Inversion** The battle bot will feature IMU sensors to help the driver control the bot. When flipped upside down by other bots, this bot will detect the inversion and invert all controls. This allows the driver to focus on attacking and evading other bots rather than wasting energy figuring out how to control an upside-down bot with reversed controls. - Sensor: 6-axis IMU (accelerometer + gyroscope). Potential option: MPU-6050 - Function: Detect robot orientation (upright vs inverted). - Control logic: Automatically invert motor commands when inverted so “forward” and “turn” remain intuitive to the operator. **Subsystem 4: Control Electronics and Custom PCB** The PCB and control electronics are responsible for the main control and communication of our robot. Our microcontroller will be the central controller, receiving operator commands and translating them into control signals. It will interface with the IMU to determine the robot’s orientation and apply the correct control logic accordingly. This subsystem also monitors our safety conditions: it will kill all motors and enforce failsafe behavior for our weaponry if communication is lost or there is a fault. - Microcontroller: ESP32 (Wi-Fi or Bluetooth control). Potential option: ESP32-WROOM-32E - Wireless control: PC-based controller via Wi-Fi/BLE, included in the ESP32 - Motor drivers: Custom H-bridge circuits for left drive, right drive, and weapon motor. - Power management: LiPo battery. Potential option: Turnigy Nano-Tech 3S LiPo. Would include voltage regulation for logic (3.3 V) and current sensing for protection. - Safety features: Hardware kill switch. Automatic shutdown on RF link loss **Subsystem 5: Mechanical Design and Fabrication** The body of the bot will be primarily 3D printed and will adhere to all requirements of an ant-weight battle bot. Primarily, this means the bot will weigh in under 2 lbs for competition. The chassis can be opened for assembly and maintenance, with access to the PCB, microcontroller, battery, and motors. The chassis will also provide all primary defense by being smooth and rounded everywhere other than the front, where the weapon protrudes. This prevents spinning weapons or claw-like devices from doing damage. In addition, weight distribution will be optimized to keep the center of mass low and stable. - Materials: PLA+, PETG, or ABS. - Weight limit: ≤ 2 lb total robot mass. - Manufacturing: Fully 3D-printed chassis with modular access to electronics. **CRITERIA FOR SUCCESS:** **Mobility and Drive System** - The robot remains fully drivable when inverted. - The robot contains two wheels directly driven by motors such that the front, back, and sides of each wheel are protected by the chassis. **Spinning Weapon System** - Uninterrupted high-speed 360-degree rotation possible in both directions. - After impact, the spinning weapon immediately starts to spin up again. - The control system has an operational killswitch which shuts down all operations of the bot. - Weapon comes to a complete stop within 60 seconds after shutoff.
**Orientation Detection and Control Inversion** - Sensors detect both upright and inverted positions, which are displayed on the laptop controlling the bot. - Controls get inverted when the bot is upside down and return to normal when upright, based on the IMU. - Controls invert within 300 ms after the bot flips. **Control Electronics and Custom PCB** - The robot passes all safety shutdown tests required by ant-weight battle bot rules. - Custom PCB operates reliably without overheating or brownouts, remaining operational for ten or more minutes. **Mechanical Design and Fabrication** - The chassis of the battle bot weighs in under 2 lbs. - The chassis of the battle bot is smooth and curved with no sharp corners other than on the main spinning weapon. - The robot is competition-ready and able to participate in the ECE 445 ant-weight battle bot event.
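To make Subsystem 3's automatic control inversion concrete, here is a minimal sketch assuming arcade-style mixing (throttle plus turn) feeding the two H-bridges; function and variable names are illustrative:

```c
#include <stdbool.h>
#include <stdio.h>

/* With arcade mixing, negating the throttle while keeping the turn term
 * is equivalent to swapping the two wheel commands and negating both,
 * which is exactly the remap a side-over-side flip requires. */
static void mix_drive(double throttle, double turn, bool inverted,
                      double *left, double *right)
{
    if (inverted) throttle = -throttle;  /* operator "forward" stays forward */
    *left  = throttle + turn;
    *right = throttle - turn;
    if (*left  >  1.0) *left  =  1.0;    /* clamp to H-bridge duty range */
    if (*left  < -1.0) *left  = -1.0;
    if (*right >  1.0) *right =  1.0;
    if (*right < -1.0) *right = -1.0;
}

int main(void)
{
    double l, r;
    mix_drive(1.0, 0.2, true, &l, &r);   /* flipped: full forward, slight turn */
    printf("L = %.1f, R = %.1f\n", l, r);
    return 0;
}
```
|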
||||||
| 14 | PocketScope |
Aaron Holl Caleb Peach Rohan Nagaraj |
||||
| # Team Members: - Rohan Nagaraj (rohan14) - Aaron Holl (amholl2) - Caleb Peach (calebrp2) # Problem Signal generation and oscilloscope functionality are mostly limited to large laboratory instruments. They are also very costly and usually reserved for universities and company labs. Currently, there is no cheap, pocket-sized, convenient, and compact signal generator/oscilloscope designed for electricians, hobbyists, and engineers to use in the field while troubleshooting electrical problems. # Solution With advancements in microcontroller technology (specifically cheaper, smaller, and more powerful devices), our team can create a handheld, pocket-sized, two-in-one oscilloscope and signal generator. It will include an OLED screen to display a user interface with a time-versus-voltage/current plot, options for generated signals, and other features for quick measurements such as a voltmeter and ohmmeter. It will also include software-based analysis tools such as FFT, curve-fitting, and the ability to export data as a CSV to a computer. Software, ADC, and DAC functionality can be handled through an ESP32 or a similar microcontroller. Basic circuit design using op-amps and voltage dividers can be used to scale larger input signals down to ranges acceptable for the microcontroller’s ADC. The user interface software can be implemented using C and Python. # Solution Components ## Subsystem 1: Voltage and Current vs Time This subsystem will take a real-world signal ranging over [-20 V, 20 V] and scale it down to a 0 to 3.3 V range, since this is the typical input range for a microcontroller’s ADC. We can do this mathematically by dividing the signal by a scaling factor (implemented in a circuit with a voltage divider) and adding an offset (using an op-amp adder circuit) to get it into the suitable range (a worked sketch of this mapping appears at the end of this proposal). We will use an LM741 op-amp to do this since it is one of the most popular and widely used op-amps in circuit design. Our microcontroller will be an ESP32 or STM32 since it has an onboard ADC that can read voltages in the 0 to 3.3 V range. It also has the computing ability for small-scale graphics for the waveform-vs-time plots and can handle other DSP-intensive threads. ## Subsystem 2: LCD Touchscreen This subsystem will display the user interface rendered by our application code, written in C, Python, and possibly Arduino. It will display the voltage/current waveforms, show menus for signal generation, display spectrogram readings, show analysis tool details, and provide major control over the device. We will use an LCD capacitive-touch bare display which communicates with our microcontroller over SPI. Adafruit provides a suitable display (https://www.adafruit.com/product/2090) that can be used for this. ## Subsystem 3: USB-C Charging and Computer Exportability - A USB-C PCB mount on our custom PCB will allow for microcontroller programming, battery re-charging, and exporting a .CSV file from the microcontroller to a connected computer - USB-C will support USB 2.0 at 12 Mbps, since this is fast enough to transfer CSV data and machine code without having to worry about impedance-controlled traces on the D+ and D- lines.
- The UJ20-C-H-C-4-SMT-TR (USB-C PCB mount) will give us this connectivity - USB-C also natively supports a 5 V power supply over the VBUS terminal, so we can use this to charge a rechargeable lithium-ion battery that allows the device to be mobile ## Subsystem 4: Time-Varying FFT (Spectrogram) of Input Signal - In software, we will implement a short-time Fourier transform algorithm to show a real-time spectrogram of the input signal - We do this by sampling the signal in short windows, taking the FFT of the instantaneous waveform, displaying it, and then repeating the process in real time so the user can accurately see how the frequency components of the signal change over time ## Subsystem 5: Waveform Signal Generation The user will be able to choose between the following predefined waveform shapes: - Rectangle Wave - Triangle Wave - Sine Wave - Sawtooth Wave - Pulse Signal - Gaussian Noise function These will be generated by the microcontroller (ESP32 or STM32) via PWM through a GPIO pin and amplified to a 0 to 5 V range through an op-amp amplifier (again using the LM741). The frequency, phase, duty cycle, and amplitude of the waveforms will be fully customizable by the user. ## Subsystem 6: Machine Learning Algorithm for Input Waveform Analysis - Implement a machine-learning-based parameter estimation algorithm using gradient descent to fit mathematical models to measured input waveforms - We will base our algorithm on an Nth-order polynomial fit (where N is parameterized by the user, with higher N giving a closer fit) - This can be used to characterize transient behavior, dynamic response, and system properties related to impulse and frequency response # Criterion For Success - The device needs to be portable such that the entire structure can fit comfortably in your hand and ideally within a pants or jacket pocket. - The device needs a battery system that can support at least a couple hours of use, to serve users who may be unable to plug the device into an outlet while using it. - The device needs to be able to read any arbitrary signal within a -20 V to +20 V range and display it accurately on the screen. - The screen needs to be easy to read, and the interface must be concise and unobtrusive. The screen should also be sturdy enough to be used frequently without fear of damage. - The device needs an overvoltage protection system that prevents the circuits from burning out if a high-voltage signal is put across the input pins. - The metal pins that read the voltage signal must be adjustable in gap width and/or compatible with a set of detachable probes that can be placed on any two points of a target circuit. # Alternatives Small oscilloscopes have already been implemented and manufactured. Our solution is unique in that we will implement our ideas in a cost-efficient, energy-efficient, space-efficient manner for low-voltage inputs, which is not currently available (current solutions are too big, too expensive, or not energy efficient for low-voltage systems). https://www.digikey.com/en/products/detail/owon-technology-lilliput-electronics-usa-inc/HDS1021M-N/10667422
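As promised in Subsystem 1, a worked sketch of the input-scaling math in reverse: the front end divides the ±20 V input by k = 40/3.3 ≈ 12.12 and shifts it to mid-rail, so the firmware undoes both to recover the real voltage. The 12-bit ADC and 3.3 V reference are assumptions consistent with the parts discussed above:

```c
#include <stdio.h>

#define VREF    3.3            /* ADC reference voltage                */
#define ADC_MAX 4095.0         /* 12-bit ADC full-scale code           */
#define SCALE   (40.0 / 3.3)   /* divider ratio for +/-20 V full scale */
#define OFFSET  (VREF / 2.0)   /* op-amp adder centers input at 1.65 V */

/* Undo the front end's attenuation and offset to recover input volts. */
static double adc_to_input_volts(unsigned raw)
{
    double v_adc = (raw / ADC_MAX) * VREF;   /* volts at the ADC pin */
    return (v_adc - OFFSET) * SCALE;
}

int main(void)
{
    printf("raw 4095 -> %+.2f V\n", adc_to_input_volts(4095)); /* ~ +20 V */
    printf("raw 2048 -> %+.2f V\n", adc_to_input_volts(2048)); /* ~   0 V */
    printf("raw    0 -> %+.2f V\n", adc_to_input_volts(0));    /* ~ -20 V */
    return 0;
}
```
|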
||||||
| 15 | SafeStep: Smart White Cane Attachment for Audio + Haptic Navigation and Emergency Alerts |
Abdulrahman Almana Arsalan Ahmad Eraad Ahmed |
||||
| # TEAM: Abdulrahman Almana (aalmana2), Arsalan Ahmed (aahma22), Eraad Ahmed (eahme2) # PROBLEM White canes provide reliable obstacle detection, but they do not give route-level navigation to help a user reach a destination efficiently. This can make it harder for blind or low-vision users to travel independently in unfamiliar areas. In addition, audio-only directions are not always accessible for users who are deaf or hard of hearing, and if a user falls, there is often no automatic way to notify others quickly, which can delay assistance. # SOLUTION OVERVIEW We propose a modular smart attachment that mounts onto a standard white cane to improve navigation and safety without replacing the cane’s core purpose. The attachment will connect via Bluetooth to a user’s phone and headphones to support clear spoken directions, and it will also provide vibration-based cues for users who need non-audio feedback. The attachment will include fall detection and a basic emergency alert workflow that sends an alert to a pre-set emergency contact with the user’s last known location. # SOLUTION COMPONENTS **SUBSYSTEM 1, CONNECTIVITY + CONTROL** Handles Bluetooth pairing, basic user controls, and system logic. Planned Components: 1-ESP32 (Bluetooth Low Energy) microcontroller, ESP32-WROOM-32 2-Power switch + SOS button + cancel button 3-LiPo battery + USB-C charging module **SUBSYSTEM 2, NAVIGATION OUTPUT (AUDIO + HAPTICS)** Supports spoken directions through headphones and vibration cues for users who need non-audio feedback. Planned Components: 1-Bluetooth connection to smartphone (using standard maps app audio) 2-Vibration motor (coin vibration motor, 3V) + motor driver (DRV8833) 3-Optional buzzer for confirmations **SUBSYSTEM 3, LOCAL SENSING (WHEN MAPS NOT AVAILABLE)** Provides short-range obstacle warnings and basic direction/heading feedback when GPS/maps are unreliable. Planned Components: 1-Long-range distance sensor (Benewake TFmini-S LiDAR) for obstacle proximity alerts 2-IMU (MPU-9250) for motion/heading estimation **SUBSYSTEM 4, FALL DETECTION + EMERGENCY ALERTING** Detects falls and triggers an emergency workflow through the phone without a custom app. Planned Components: 1-IMU-based fall detection using MPU-9250 data (a sketch of the detection logic appears at the end of this proposal) 2-BLE trigger to phone using standard phone shortcut automation 3-Phone sends SMS/call to pre-set emergency contact with last known GPS location # CRITERION FOR SUCCESS 1-The attachment pairs to a smartphone and maintains a Bluetooth connection within 10 meters indoors. 2-The vibration system supports at least four distinct cues (left, right, straight, arrival). 3-The distance sensor detects obstacles within 20 cm to 12 m and triggers a warning vibration within 1 second. 4-Fall detection triggers within 5 seconds of a staged fall-like event and provides a cancel window (e.g., 10 seconds). 5-When a fall is confirmed or SOS is pressed, the phone successfully notifies a designated contact and shares location (through phone shortcut automation). 6-The battery supports at least 1 hour of continuous operation. # ALTERNATIVES 1-Smartphone-only navigation: Works for audio, but does not provide haptics for deaf/hard-of-hearing users and is not cane-integrated. 2-Smartwatch fall detection: Helps with emergencies but does not guide navigation through the cane. 3-Dedicated smart cane products: Often expensive and replace the cane instead of adding a modular attachment. 4-Wearable navigation (smart glasses): Higher cost and complexity.
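As referenced in Subsystem 4, a minimal sketch of one common IMU fall heuristic: a brief free-fall (total acceleration well below 1 g) followed by an impact spike. All thresholds are illustrative placeholders; real values must come from the staged-fall testing in the Criterion for Success:

```c
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

#define FREEFALL_G     0.4   /* total acceleration during free-fall      */
#define IMPACT_G       2.5   /* spike shortly after the free-fall phase  */
#define WINDOW_SAMPLES 50    /* ~0.5 s at a 100 Hz IMU sample rate       */

static int since_freefall = -1;   /* -1: idle; else samples since free-fall */

/* Feed one accelerometer sample (in g); returns true on a fall-like
 * free-fall -> impact sequence, which would open the cancel window
 * before the BLE alert goes to the phone. */
static bool fall_detect(double ax, double ay, double az)
{
    double mag = sqrt(ax * ax + ay * ay + az * az);
    if (mag < FREEFALL_G) { since_freefall = 0; return false; }
    if (since_freefall >= 0 && ++since_freefall <= WINDOW_SAMPLES &&
        mag > IMPACT_G) { since_freefall = -1; return true; }
    if (since_freefall > WINDOW_SAMPLES) since_freefall = -1;
    return false;
}

int main(void)
{
    fall_detect(0.1, 0.1, 0.2);                         /* free-fall sample */
    printf("fall = %d\n", fall_detect(2.0, 1.5, 1.8));  /* impact sample    */
    return 0;
}
```
|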
||||||
| 16 | Dual-Mode Smart Temperature Coaster |
Alan Ilinskiy Areg Gevorgyan |
||||
| ## Team Members - Areg Gevorgyan - Alan Ilinskiy ## Problem Ideal drink temperatures don’t last. Hot drinks cool too quickly, and cold drinks warm up. Existing solutions like thermal mugs only slow heat loss and mostly work for hot drinks, while ice cubes dilute flavor. ## Solution A smart coaster that actively maintains a user-selected drink temperature. The device can both heat and cool using a reversible thermal plate, allowing instant switching between modes. A temperature sensor measures the drink's surface temperature, and a microcontroller runs PID control to stabilize and hold the desired temperature (a sketch of the control step appears at the end of this proposal). The user sets their preferred temperature with a knob and views it on a small display; the coaster automatically adjusts to lock in that temperature. ## Solution Components ### Subsystem 1: Microcontroller and Control Logic Handles system control, the PID loop, and user input. **Components:** STM32 microcontroller ### Subsystem 2: Thermal Regulation Actively heats or cools the drink by reversing current direction. **Components:** High-power Peltier module, custom H-bridge / power PCB ### Subsystem 3: Temperature Sensing Continuously measures drink temperature for closed-loop control. **Components:** IR temperature sensor ### Subsystem 4: User Interface Allows users to set and view temperature preferences. **Components:** Rotary encoder (knob), display ### Subsystem 5: Power Supplies and regulates power for logic and high-current thermal control. **Components:** External power adapter, onboard voltage regulation ## Criterion for Success - Maintains user-set temperature within ±5°C - Seamlessly switches between heating and cooling - Stable PID control without sustained oscillation - Intuitive temperature selection via knob and display
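As referenced in the solution overview, a minimal sketch of one PID step for the Peltier drive. It assumes the IR sensor reports degrees Celsius and the H-bridge accepts a signed duty in [-1, 1] (positive = heat, negative = cool); the gains and clamp values are placeholders to be tuned on hardware:

```c
#include <stdio.h>

typedef struct { double kp, ki, kd, integ, prev_err; } pid_ctrl_t;

/* One control step: returns the signed H-bridge duty command. */
static double pid_step(pid_ctrl_t *p, double setpoint_c, double measured_c,
                       double dt_s)
{
    double err = setpoint_c - measured_c;
    p->integ += err * dt_s;
    if (p->integ >  50.0) p->integ =  50.0;   /* anti-windup clamp */
    if (p->integ < -50.0) p->integ = -50.0;
    double deriv = (err - p->prev_err) / dt_s;
    p->prev_err = err;
    double duty = p->kp * err + p->ki * p->integ + p->kd * deriv;
    if (duty >  1.0) duty =  1.0;             /* saturate the H-bridge */
    if (duty < -1.0) duty = -1.0;
    return duty;
}

int main(void)
{
    pid_ctrl_t c = { 0.08, 0.01, 0.02, 0.0, 0.0 };  /* placeholder gains */
    printf("duty = %.3f\n", pid_step(&c, 60.0, 22.0, 0.1));
    return 0;
}
```

The anti-windup clamp matters here because a Peltier plate changes drink temperature slowly, so an unbounded integral term would otherwise accumulate and overshoot.
|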
||||||
| 17 | Shower Music Controller |
Amar Patel Shalin Joshi Varnith Aleti |
||||
| # Shower Music Controller Team Members: - Shalin Joshi (shalinj2) - Amar Patel (amarcp2) - Varnith Aleti (valet3) # Problem People often like to listen to music when they are in the shower, but it is very inconvenient to control or play specific music with wet hands, foggy screens, and devices that aren't waterproof. If the person wants to switch the song, it leads to getting the phone wet, having to step out of the shower, or just being stuck with whatever song is playing. # Solution The solution is a waterproof device that can be stuck to a shower wall, which allows the user to play, pause, skip, and even search for their playlists/songs from Spotify. This device will act as a Bluetooth remote interface that connects to a phone companion app. The app will call the Spotify API and communicate with the device in order to do each task. The device will include buttons for playback actions and D-pad buttons to navigate the UI on a screen. # Solution Components ## Subsystem 1 - Embedded UI (Screen + Buttons) Displays different menus and music lists (search, my playlists, now playing) and captures user input through physical buttons. Separate buttons will handle playback controls (play, pause, skip, volume), and a D-pad will navigate the menus and songs on the UI. The D-pad is implemented using 4 tactile switches (UP/DOWN/LEFT/RIGHT) arranged in a cross layout plus a center SELECT switch, all mounted on the PCB and covered by a waterproof silicone membrane. Components: - SPI TFT display module using ILI9341 controller - Tactile switches ## Subsystem 2 - Microcontroller + BLE Communication Runs the software for the button controls and handles Bluetooth communication with the phone. Sends commands (play/pause, search query, select track) and receives results/status updates from the phone (a sketch of a possible command format appears at the end of this proposal). Components: - ESP32 Microcontroller ## Subsystem 3 - Phone Companion App + Spotify Integration Handles Spotify authentication and all Web API requests. Translates Bluetooth messages from the device into Spotify actions and returns data back to the device. The app will do all the music control and Spotify connections and communicate with the device in order to know which actions to perform. Components: - Mobile app using Swift or React - Spotify Web API ## Subsystem 4 - Power, Charging, and Water-Resistant Enclosure Provides safe portable power, charging, voltage regulation, and physical waterproofing suitable for shower spray/steam. This subsystem will ensure that the device and its components are water-resistant and have charging capabilities. We will make sure that water doesn’t harm our device by enclosing it in a 3D-printed enclosure. The screen will be covered by a clear acrylic/polycarbonate display window, and the buttons will be lined with a silicone membrane. When the user wants to charge the device, they will remove it from the enclosure and shower and charge it elsewhere.
Components: - LiPo Battery - Li-ion charger IC/module (USB powered charging) - 3.3 V regulator for MCU and display - Waterproof enclosure elements - 3D-printed enclosure for the device board and circuitry - Clear acrylic/polycarbonate display window - Silicone membrane for buttons # Criterion For Success - From the shower device, the user can successfully perform different playback actions with at most 1-2 seconds of delay: Play/Pause, Next Track, Previous Track, Volume Up/Down - Users can enter a search query using buttons, submit it, receive at least 5 search results on the device screen, select one, and start playback. - The device can connect via Bluetooth to the phone companion app and remain connected for the entire duration of a shower - The device remains functional after 5 minutes of exposure to shower spray/steam. - The device operates for at least 2 hours of active use on a full charge.
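As referenced in Subsystem 2, a minimal sketch of a compact device-to-phone command frame; the opcode values and layout are illustrative, not a finalized protocol:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

enum { CMD_PLAY_PAUSE = 0x01, CMD_NEXT = 0x02, CMD_PREV = 0x03,
       CMD_VOL = 0x04, CMD_SEARCH = 0x05 };

/* Frame layout: [opcode][payload length][payload bytes...].
 * Returns the total frame size to hand to the BLE write. */
static int encode_cmd(uint8_t *buf, uint8_t op,
                      const uint8_t *payload, uint8_t len)
{
    buf[0] = op;
    buf[1] = len;
    if (len > 0)
        memcpy(buf + 2, payload, len);
    return 2 + len;
}

int main(void)
{
    uint8_t frame[64];
    int n = encode_cmd(frame, CMD_SEARCH, (const uint8_t *)"lofi", 4);
    printf("frame: %d bytes, opcode 0x%02X, payload len %u\n",
           n, frame[0], frame[1]);
    return 0;
}
```

A tiny length-prefixed format like this keeps parsing on the phone side trivial and leaves room for new opcodes (e.g., playlist paging) later.
|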
||||||
| 18 | Acoustic Stimulation to Improve Sleep |
Bakry Abdalla John Ludeke Sid Gurumurthi |
Mingrui Liu | |||
| # Acoustic Stimulation to Improve Sleep Team Members: - Abdalla, Bakry (bakryha2) - Gurumurthi, Sid (sguru2) - Ludeke, John (jludeke2) # Problem Certain people experience poor-quality sleep as they age or develop sleep disorders because they do not spend enough time in slow wave sleep (SWS). While there are data-first solutions currently available to the public, they are expensive. # Solution Closed-loop auditory stimulation has been shown through research to amplify the oscillations of SWS. When it is time to sleep, users will put a wearable device on their head. The device will consist of an EEG headband with dry electrodes to measure brain activity, connected to an all-purpose, custom PCB that integrates the EEG front-end, microcontroller, audio driver, and power management circuitry. The processor detects slow wave sleep and identifies slow wave oscillations. When these waves are detected, the system delivers short, precisely timed bursts of pink noise through an integrated speaker. Data insights about the user’s sleep patterns are delivered via a user-facing application. All of this while remaining cheaper than what is currently available. # Solution Components ## Subsystem 1 – EEG Headband We will be using a commercially available EEG headband, the OpenBCI EEG Headband Kit. This includes the headband, electrodes, and cables carrying the analog signal. Components: - OpenBCI EEG Headband: https://shop.openbci.com/products/openbci-eeg-headband-kit - Ag-AgCl Electrodes - Earclip & snap cables ## Subsystem 2 – Signal Processor Takes in analog signals, denoises and amplifies, digitally processes, and then outputs. The signal processing subsystem is responsible for performing the core functionality of a commercial EEG interface such as the OpenBCI Cyton, but at a lesser cost. It receives raw analog EEG signals from the headband electrodes and converts them into digitized, clean EEG data suitable for downstream analysis. It performs amplification of weak analog electric signals followed by analog filtering to limit bandwidth to EEG-relevant bands and prevent aliasing before analog-to-digital conversion. Following digitization, the subsystem performs digital signal processing, including bandpass and notch filtering, for noise and artifact reduction. An accelerometer is incorporated to remove spikes and noise in EEG data during significant motion events. Components: - Analog front end: Texas Instruments ADS1299 - Microcontroller: PIC32MX250F128B - Wireless transmission of data: RFduino BLE radio module (RFD22301) - Triple-Axis Accelerometer: LIS3DH - Resistors: COM-10969 (ECE Supply Store) - Capacitors: 75-562R5HKD10, 330820 (ECE Supply Store) - JFET Input Operational Amplifier: TL082CP (ECE Supply Store) - Standard Clock Oscillators 2.048MHz: C3291-2.048 ## Subsystem 3 – Audio Output After receiving the processed EEG signals from the signal processing subsystem, this subsystem provides the data as input to an algorithm which decides whether or not to play a certain frequency of noise through the preferred audio output device (the default will be the speaker). The algorithm makes this decision by detecting whether the brain signals indicate slow wave sleep is occurring. Components: - A special algorithm to detect slow wave sleep (https://pubmed.ncbi.nlm.nih.gov/25637866/) - One small integrated speaker (665-AST03008MRR) ## Subsystem 4 – Power Delivery To provide power for the entire system, a power circuit is integrated into the PCB.
This circuit manages battery charging and voltage regulation while minimizing heat dissipation for user comfort. Components: - 2 AAA batteries: EN92 - Voltage regulator: LM350T - Capacitors: 75-562R5HKD10 - On/off switch: MULTICOMP 1MS3T1B1M1QE - Power jack: 163-4013 ## Subsystem 5 – User-Facing Application To improve usability, the User-Facing Application will give the end user insights into their sleep using standard sleep metrics. Specifically, it will tell the user their time spent not sleeping, in REM sleep, light sleep, and deep sleep. We can use a React Native frontend for compatibility with Android and iOS. We can run a lightweight ML model on-device with Python to determine the state of sleep (using FFT and bandpower features). For the backend, Firebase can be used to store our data, which will come in via Bluetooth. Components: - React Native - Firebase # Criterion For Success - Headset remains comfortable (4/5 people would be okay wearing the device to sleep) - Signal Processor successfully amplifies and denoises the signal - Signal Processor successfully converts the analog signal into a digital one - Audio Output gives audio in phase with EEG waves to maximize effectiveness - Audio Output correctly adjusts audio in correspondence to the input signal from the Signal Processor - Power Delivery gives enough battery power for the device to last at least 10 hours - Power Delivery remains cool and comfortable for sleep - User-Facing Application is intuitive (4/5 people would download the app) - User-Facing Application shows accurate, historical data from the user’s headband - User-Facing Application correctly classifies phases of the user’s sleep - The entire system is easy to use (a new user can figure it out without instruction) - The entire system works seamlessly
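To illustrate the closed-loop idea in Subsystem 3 (this is a crude stand-in, not the cited published algorithm): emphasize the sub-4 Hz band, detect the slow wave's down-state, and fire the pink-noise burst on the following up-phase. The filter coefficients, -40 µV threshold, and 250 Hz rate are all placeholder assumptions:

```c
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

static double slow = 0.0, drift = 0.0;

/* Feed one EEG sample (microvolts, ~250 Hz); returns true when a
 * pink-noise burst should be triggered on the slow wave's up-phase. */
static bool trigger_burst(double eeg_uv)
{
    static bool in_down_state = false;
    slow  += 0.10  * (eeg_uv - slow);   /* passes < ~4 Hz content       */
    drift += 0.002 * (eeg_uv - drift);  /* tracks very slow baseline    */
    double sw = slow - drift;           /* band-limited slow-wave proxy */
    if (sw < -40.0) { in_down_state = true; return false; }
    if (in_down_state && sw > 0.0) { in_down_state = false; return true; }
    return false;
}

int main(void)
{
    /* Two seconds of a synthetic 1 Hz, 80 uV slow wave at 250 Hz. */
    for (int n = 0; n < 500; n++) {
        double eeg = 80.0 * sin(2.0 * 3.14159265358979 * n / 250.0);
        if (trigger_burst(eeg))
            printf("burst at sample %d\n", n);
    }
    return 0;
}
```
|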
||||||
| 19 | Cycloidal Hub motor with FOC driver |
Michael Talapin Nithin Durgam |
||||
| # Title Cycloidal Hub Motor With Custom FOC Drivers Team Members: - Michael Talapin (talapin2) - Nithin Durgam (ndurgam2) # Problem Many modern physical systems need motors that deliver high torque in a compact size with precise motion, capable of handling heavier payloads. # Solution The motor we are building is an internal cycloidal hub motor with custom windings and a custom-milled frame, along with a field-oriented control (FOC) custom motor driver. The internal cycloidal gearbox solves the stated problem through two key properties. One is low backlash, which allows for high-precision motion. The second is the ability of the cycloidal gearbox to reach higher gear ratios in smaller geometry, which gives the motor high torque and in turn the ability to handle heavier payloads. An FOC driver allows direct torque control, speed control, and position control (given an encoder/resolver), all while reducing resonances that come from the mechanical system. # Solution Components This problem is broken down into two major components, the custom motor aspect as well as the custom FOC driver aspect. These components break down into further respective subcomponents. ## Subsystem 1 : Electromagnetic Motor Core ### Function To generate torque efficiently within packaging constraints. The winding and laminations help set our motor's kT/kV as well as its torque ripple behavior. An additional useful feature here is tracking the temperature of our stator to enforce thermal limits. ### Key Components **Stator Laminations + Slots:** Forms the magnetic circuit so the motor produces torque efficiently with low loss. **Custom Windings:** The insulated copper that carries current directly and defines our torque constant, losses, and thermal capability. **Rotor:** Provides the fixed magnetic field the stator pushes against to generate the torque. **Insulation Systems:** Locks windings in place while improving reliability under vibration and thermal cycling. ### Sensors **Stator Temperature Sensor:** (Murata NCP18XH103F03RB NTC) Helps limit torque when the motor is heating up so the windings don't get damaged. ## Subsystem 2 : Cycloidal Reduction Gearbox ### Function To multiply torque at the wheel while maintaining compact volume, low backlash, and good shock tolerance. The gearbox turns high motor speed into low-speed wheel torque. By utilizing the cycloidal geometry, the motor can have a high reduction within size constraints while maintaining low backlash plus high shock-load capability. ### Key Components **Eccentric Input Shaft / Cam:** Creates the eccentric motion that drives the cycloidal disc. **Cycloidal Discs:** The reducing element that converts eccentric motion into a slower, high-torque output. **Ring Pins:** These pins provide the rolling contact interface that shares load and supports high torque with low backlash. **Output Pins:** Collect the disc motion and output the reduced-speed, amplified-torque rotation to the hub. **Bearings:** Carry the loads while keeping alignment stable so the gearbox does not bind or wear easily (part to be decided). **Lubrication:** Reduces wear and heat to increase efficiency and lifetime.
## Subsystem 3: Hub Structure and Custom Milled Frame ### Function In harsh environments we must integrate the wheel bearing and structure while keeping the alignment stable, carrying wheel loads, protecting internals, and providing a heat path. ### Key Components **Custom-milled housing** **Wheel mounting interface** **Bearing seats** **Seals** **Fasteners and Dowel Pins** ## Subsystem 4: Bearings and Sealing Subsystem ### Function This subsystem should ensure the motor supports radial, axial, and moment loads while maintaining alignment and preventing contamination. ### Key Components **Main wheel bearing arrangement** **Gearbox support bearings** **Seals:** O-rings, radial shaft seals, gaskets ## Subsystem 5: Motor Position Sensing ### Function Since FOC requires rotor position, this subsystem provides the rotor electrical angle. ### Sensors **Absolute Encoder:** AS6057P. The purpose of this sensor is to get the absolute position of the rotor. ## Subsystem 6: DC Input and Power Conditioning ### Function Since the motor driver will be a voltage source inverter fed by a DC link, the goal here is to accept supplied power safely, reduce EMI, and stabilize the DC link that feeds the inverter. ### Key Components **Input Connector and Relay:** SLPRB50CPSO. This should be a high-current connector that lets us connect the battery without overheating or loosening in the field. **Precharge Circuit:** Implemented with a resistor and a small relay, this avoids a huge rush of current instantly slamming into the DC-link capacitors when power is first connected. **EMI Filter:** Reduces conducted noise so the drive does not interfere with the sensors, comms, and other electronic components. **DC Link Capacitors:** Stabilize the DC bus and supply the ripple current that the inverter creates. **Dump Resistors:** These prevent DC bus overvoltage during aggressive regen when the battery is not absorbing power fast enough. ### Sensors **DC bus voltage sensor:** A resistor divider into an MCU ADC. Lets our microcontroller detect undervoltage/overvoltage and scale our control commands. **DC bus current sensor:** TI INA240A2. Helps measure input power and detect abnormal conditions. ## Subsystem 7: 3-phase Converter ### Function Since FOC measures phase currents and DC bus voltage with ADC sampling, we need to convert the DC bus into controlled 3-phase voltages/currents. ### Key Components **6-switch bridge:** The main power stage that creates the 3-phase drive waveforms for the motor. **Current shunts:** WSL3637R0005FEA. These produce a tiny measurable voltage proportional to phase current to allow FOC to control torque precisely. **Current sense amplifiers:** Amplify the shunt signals and reject PWM noise, allowing our current control loop to stay stable. **Thermal Path:** Removes heat from the power devices so that torque is sustainable at high power. ### Sensors **Power device temperature sensor:** NCP18XH103F03RB NTC. Derate before the MOSFETs or PCB get damaged. **Phase current measurement:** Shunts + INA240. Provides the core feedback signal for our FOC loop. ## Subsystem 8: Gate Driver ### Function To drive the high-/low-side switches correctly and survive different faults. The goal here is to handle undervoltage lockout, protect against short circuits, and include active Miller clamps. ### Key Components **Gate driver IC:** TI DRV8353R.
This drives the high-side and low-side MOSFET gates properly and provides built-in fault handling. **Gate resistors + Miller clamps:** Help tune switching speed to balance efficiency, EMI, and ringing. ## Subsystem 9: Sensing Front End ### Function Provide clean and accurate signals for the control loop, protection, and derating. ### Key Signals **Phase Currents** **Bus Voltage** **Rotor Position** **Temperatures:** Stator, inverter, rotor, and PCB ambient temperature **Phase Voltages** ## Subsystem 10: Control Compute ### Function The compute necessary for running the real-time control loops and fault handling. ### Key Components: **Microcontroller:** STM32H755ZI; this has enough compute to run the algorithms necessary for a high-end motor. **Encoder/Hall Interfaces:** **Communication Peripherals:** How others interface with our motor; in this case the motor will utilize CAN-FD due to its low vulnerability to EMI and ability to handle longer runs. **Watchdog:** ## Subsystem 11: Firmware & Control Stack ### Function Deliver stable torque, speed, and position control, plus telemetry logs and debug capability. ### Key Components: **Sampling & Transforms:** Read the currents and put them through Clarke/Park transforms (sketched below). **Current control:** Regulate Id and Iq. **Modulation:** SVPWM. **Estimator/Position:** Use the motor's encoder for position. **Control Loops:** PID loop for the Iq command and PID loops for position, speed, and torque. **Derating Logic:** Limit Iq based on temperature or bus voltage. **Telemetry Interface:** A way to keep track of temperatures, currents, bus voltages, faults, and estimated torque/speed/position. ## Subsystem 12: Protection and Functional Safety Layer ### Function Ensure the proper functions are in place for motor protection and safety during operation. ### Key Components: **Protect from fast overcurrent** **Gate Driver UVLO** **Over/undervoltage handling** **Current/torque limiting** **Thermal limiting** **Fault state machine and latching behavior** **Sensor Faults** # Criterion For Success **Continuous Torque:** ≥ 4 Nm **Peak Torque:** ≥ 18 Nm **Max Speed:** ≥ 120 rpm **Backlash:** ≤ 1 degree
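As referenced in Subsystem 11, a minimal sketch of the Clarke and Park transforms at the heart of the FOC loop: two measured phase currents and the encoder's electrical angle yield the Id/Iq values the current loops regulate. This assumes a balanced three-phase machine (ia + ib + ic = 0) and amplitude-invariant scaling:

```c
#include <math.h>
#include <stdio.h>

/* Clarke + Park: phase currents (ia, ib) and rotor electrical angle
 * theta (radians) -> rotor-frame currents (id, iq). */
static void clarke_park(double ia, double ib, double theta,
                        double *id, double *iq)
{
    /* Clarke: three-phase -> stationary alpha/beta frame (ic implied). */
    double i_alpha = ia;
    double i_beta  = (ia + 2.0 * ib) / sqrt(3.0);

    /* Park: rotate into the rotor frame using the encoder angle. */
    double s = sin(theta), c = cos(theta);
    *id =  i_alpha * c + i_beta * s;
    *iq = -i_alpha * s + i_beta * c;
}

int main(void)
{
    double id, iq;
    /* Balanced instant: ia = 1, ib = ic = -0.5, rotor at theta = 0. */
    clarke_park(1.0, -0.5, 0.0, &id, &iq);
    printf("id = %.3f, iq = %.3f\n", id, iq);   /* expect id = 1, iq = 0 */
    return 0;
}
```

In the firmware loop these outputs feed the Id/Iq current controllers, whose voltage commands are rotated back (inverse Park) and handed to the SVPWM modulator.
|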
||||||
| 20 | Air Guitar |
Arturo Arroyo Valencia Miaomiao Jin Youngmin Jeon |
||||
| # Air Guitar Team Members: - Miaomiao Jin (mj47) - Youngmin Jeon (yj21) - Arturo Arroyo Valencia (aarro6) # Problem Traditional guitars are bulky and non-portable, making it difficult for musicians to practice or perform in mobile environments. While software-based "virtual guitars" exist, they lack the tactile "muscle memory" of fretting with one hand and strumming with the other. There is a need for a wearable system that captures the physical kinetics of guitar playing without the physical footprint of the instrument. # Solution Air Guitar is a dual-wearable sensor system that mimics the ergonomics of a real guitar. The left hand captures "fretting" finger patterns to determine chords, while the right hand captures "strumming" velocity and timing. By fusing these two data streams wirelessly, the system generates real-time MIDI audio. The design focuses on low-latency wireless communication and precise gesture recognition, allowing the user to play music anywhere without being tethered to a physical instrument or a power outlet. # Solution Components ## Subsystem 1: The Left-Hand "Fret" Controller This subsystem identifies the chord the user is trying to play. It maps the curvature of each finger to a specific digital profile (e.g., specific bend angles = C Major; a matching sketch appears after Subsystem 3). - Flex Sensors (4x) [P/N: FS-L-0054-103-ST]: These are long, thin strips placed along the fingers. As the user curls their fingers to form a chord shape, the resistance changes. We use these to measure the degree of flexion for each finger. - Voltage Divider Network: A series of precision resistors used to convert the changing resistance of the flex sensors into a measurable voltage that the microcontroller's ADC (Analog-to-Digital Converter) can read. ## Subsystem 2: The Right-Hand "Strum" Controller This subsystem acts as the "trigger." It determines when a sound should be played and how loud it should be based on the intensity of the movement. - 9-Axis IMU [P/N: BNO055]: This contains an accelerometer and a gyroscope. It tracks the rapid "up and down" motion of a strum. We chose the BNO055 because it has an on-board processor that handles "Sensor Fusion," giving us clean orientation data without taxing our main CPU. - Backup IMU (Plan B): InvenSense MPU-6050. It is widely available and has extensive library support. While it only offers 6-axis sensing (no magnetometer) and requires the ESP32 to handle the Kalman or complementary filtering in code, it is a highly reliable fallback if the BNO055 has procurement delays or I2C clock-stretching issues. - Force Sensitive Resistor (FSR) [P/N: FSR 402]: A small pressure sensor placed on the thumb. This allows the user to simulate "holding a pick." The sound only triggers when the user "squeezes" the virtual pick while strumming. ## Subsystem 3: Processing & Wireless Communication This is the "Brain" of the system. It collects data from both hands and converts it into music. - ESP32 Microcontroller (2x) [P/N: ESP32-WROOM-32E]: One for each hand. These chips are powerful and have built-in Bluetooth and Wi-Fi. - ESP-NOW Protocol: We will use this specialized low-latency wireless protocol to send data from the "Strum" hand to the "Fret" hand in less than 5 ms, ensuring the two hands are perfectly in sync. - BLE MIDI: The final output is sent via Bluetooth Low Energy MIDI to a phone or laptop, allowing the glove to work with any professional music software (like GarageBand or Ableton).
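As referenced in Subsystem 1, a minimal sketch of chord recognition by nearest-template matching over the four flex-sensor ADC readings. The template values are made-up calibration numbers; a per-user calibration pass would populate them in practice:

```c
#include <stdio.h>

#define NUM_CHORDS  5
#define NUM_FINGERS 4

/* Placeholder per-chord ADC templates (index, middle, ring, pinky). */
static const int template_adc[NUM_CHORDS][NUM_FINGERS] = {
    {1200, 2100,  900, 1800},   /* C major */
    { 800, 1900, 2200, 1000},   /* G major */
    {2000,  850, 1700, 1400},   /* D major */
    {1500, 1500,  950, 2300},   /* E minor */
    { 900, 2400, 1300, 1900},   /* A minor */
};

static const char *chord_names[NUM_CHORDS] = { "C", "G", "D", "Em", "Am" };

/* Return the index of the stored chord shape closest to the live readings
 * (squared Euclidean distance; no sqrt needed for comparison). */
static int classify_chord(const int adc[NUM_FINGERS])
{
    long best = -1;
    int best_i = 0;
    for (int c = 0; c < NUM_CHORDS; c++) {
        long d = 0;
        for (int f = 0; f < NUM_FINGERS; f++) {
            long e = adc[f] - template_adc[c][f];
            d += e * e;
        }
        if (best < 0 || d < best) { best = d; best_i = c; }
    }
    return best_i;
}

int main(void)
{
    int live[NUM_FINGERS] = {1180, 2150, 880, 1790};  /* near C major */
    printf("chord = %s\n", chord_names[classify_chord(live)]);
    return 0;
}
```

A rejection threshold on the winning distance (reporting "no chord" when the hand is between shapes) would be a natural extension toward the >90% recognition criterion.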
## Subsystem 4: Power Management Since we want the project to be wearable and "Cyberpunk" in style, the power system must be compact and efficient. - LiPo Batteries (2x): Small 3.7 V rechargeable batteries tucked into the wrist straps. - TP4056 Charging Modules: To allow the gloves to be recharged via a standard USB-C cable. - Buck-Boost Converters: To ensure the ESP32 and sensors receive a steady, clean 3.3 V even as the battery voltage drops during use. # Criterion For Success - Latency: The total "Motion-to-Sound" delay must be under 30 ms. Anything higher is noticeable to a musician. **Test Method:** We will program a "Test Mode" where a physical button press on the Strum hand toggles a GPIO pin (HIGH) and simultaneously sends the wireless strum packet. Using an oscilloscope, we will measure the Δt between the GPIO HIGH signal and the arrival of the MIDI Note On message at the receiver's serial port. - Chord Recognition: The system must accurately distinguish between at least 5 different chord shapes with a success rate of >90%. - Dynamic Range: The system must be able to distinguish between a "Soft Strum" and a "Hard Strum," translating that into different MIDI volume levels. - Battery Life: The device must operate continuously for at least 2 hours on a single charge. - Wireless Stability: The ESP-NOW link between hands must maintain a Packet Delivery Ratio (PDR) of ≥ 99% within a 2-meter radius (the typical wingspan of a human) over a continuous 10-minute testing window. **Test Method:** The Right-Hand unit will send 1,000 packets at the target rate (e.g., 100 Hz). The Left-Hand unit will log the sequence numbers; a successful test results in ≤ 10 missed packets. |
||||||
| 21 | Vertial Spinner Ant-Weight Battle Bot |
Andrew Bajek Elise Chiang Giovanni Escamilla |
Jiaming Xu | |||
| ANT-WEIGHT BATTLEBOT Team Members: - Giovanni Escamilla (gme5) - Andrew Bajek (abajek2) - Elise Chiang (elisenc3) # Problem Antweight combat robots, limited to a maximum mass of 2 lb, must function reliably despite aggressive mechanical stress and demanding control requirements. These systems regularly experience violent impacts, sudden motor stalls, and intermittent wireless links, making fast and dependable coordination between power distribution, control electronics, and mechanical hardware essential. # Solution Our idea for our 2 lb bot is a two-wheel drive with a vertical drum spinner as our weapon. We will develop our own custom PCB with controls centered around an STM32WB-series microcontroller. This controller will not only control our weapon and drive system, but also monitor mechanical stress to limit damage done to the battlebot. Overall, our total system will be divided into four sections: power, control, drive, and weapon. Our wireless connection to the PC will be Bluetooth, working in tandem with our microcontroller. # Solution Components ## Subsystem 1 - Power Our power system will give life to our bot, with additional safety features required to compete in the competition. This includes the physical switch to turn off the bot and a voltage regulator to supply the controller. Components: - XT60 Connectors (to unplug) - 3S LiPo battery (11.1 V) - We could make our own power regulator; if not, we will use the LM2596 ## Subsystem 2 - Drive Our drive system will allow the battle bot to navigate the arena quickly and precisely in order to deliver attacks and avoid attacks from opposing bots. Components: - Two DC motors, one per side (508 RPM Mini Econ Gear Motor) - Dual H-bridge motor driver (DRV8411) ## Subsystem 3 - Weapon The weapon system serves as the main means of engaging the opponent for damage. Components: - DC motor to power the weapon (vertical drum spinner) - Motor control driven by PWM - 3D-printed structures to aid the main weapon (ramps, lifters, etc.) ## Subsystem 4 - Control Our central brain will be the STM32WB microcontroller, which will monitor and control our weapon and drive. In addition, it will monitor the weapon's motor to limit damage to ourselves (a sketch of this monitor appears at the end of this proposal). Components: - STM32WB series microcontroller - Bluetooth - PC-based control interface - Real-time reliability - Weapon motor stress sensor # Physical Design - Body The body of the battlebot will house and protect the electronics and motors while maintaining structural integrity during combat. We will use Autodesk Fusion 360 to model the body and PLA+ as the 3D printing filament. # Criterion For Success - Weight Compliance: total weight ≤ 2 lb - Wireless Control: Robot is controlled from a PC via Bluetooth with failsafe operation. - Safety: The bot automatically shuts down in the case of a power fault, loss of control signal, or electrical malfunction. - Mobility: Robot runs continuously for 3 minutes without resets. - Weapon Reliability: The fighting tool operates reliably under repeated activation while maintaining electrical and mechanical performance. - Sensor Addition: Some internal or external sensor that makes the robot react in some way (e.g., the weapon stress monitor). - Responsiveness: Control inputs have a delay of less than 50 ms.
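As referenced in Subsystem 4, a minimal sketch of a current-based weapon stress monitor: derate the weapon PWM quickly on overcurrent and recover slowly once the load clears. The 8 A limit and rate constants are illustrative placeholders, not measured values:

```c
#include <stdio.h>

#define I_LIMIT_A 8.0   /* placeholder weapon-motor current limit */

/* Scale the requested weapon duty [0, 1] by a derate factor that backs
 * off fast on overcurrent and creeps back up when current is healthy. */
static double weapon_duty(double requested, double sensed_current_a)
{
    static double derate = 1.0;
    if (sensed_current_a > I_LIMIT_A)
        derate *= 0.8;            /* back off quickly on a stall spike */
    else if (derate < 1.0)
        derate += 0.01;           /* recover slowly once current drops */
    if (derate > 1.0) derate = 1.0;
    return requested * derate;
}

int main(void)
{
    printf("duty during stall: %.2f\n", weapon_duty(1.0, 12.0));
    printf("duty after stall : %.2f\n", weapon_duty(1.0, 3.0));
    return 0;
}
```
|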
||||||
| 22 | Oscilliosketch: Handheld XY Etch-a-Sketch Signal Generator for Oscilloscopes |
Eric Vo Josh Jenks |
||||
| Team Members: - Josh Jenks (JaJenks2) - Eric Vo (ericvo) # Problem Oscilloscope XY mode is a powerful way to visualize 2D parametric signals and vector-like graphics, but interactive control typically requires multiple bench instruments or ad hoc setups. There is no simple, handheld, purpose-built controller that can safely generate stable, low-noise bipolar X/Y signals for XY mode while providing an intuitive drawing interface. Additionally, producing clean vector-style graphics requires careful mixed-signal design (DAC, filtering, level shifting, buffering, protection) and deterministic embedded control. # Solution We will design a custom PCB and handheld enclosure that connects to an oscilloscope’s CH1 and CH2 inputs (X and Y). The device will function like an Etch-a-Sketch: two rotary encoders control the on-screen cursor position, allowing continuous line drawing on the oscilloscope in XY mode. The PCB will include: - A microcontroller (STM32- or ESP32-class) to read the encoders/buttons and generate X/Y sample streams - An external dual-channel DAC to produce two analog voltages - Analog filtering, level shifting, and buffering to generate bipolar outputs with selectable full scale up to ±5 V - A complete power subsystem powered from USB-C 5 V, including a generated negative rail to support bipolar analog output - Output protection/current limiting so the device cannot damage the oscilloscope inputs under reasonable misuse Stretch goals: add a vector-rendered game/demo mode (Pong; Asteroids as further stretch), including optional Z-axis blanking to reduce retrace artifacts, and optional line-level audio output to monitor/play back generated signals. # Solution Components ## Subsystem 1: User Input / UI Purpose: Provide intuitive control for drawing and mode selection. Components (examples): - 2x incremental rotary encoders with push switch (e.g., Bourns PEC11R series or equivalent) - 4x tactile pushbuttons (e.g., mode select, clear/recenter, scale/zoom, optional pen/blank) - Optional status LEDs for mode feedback ## Subsystem 2: Microcontroller + Firmware Purpose: Read inputs, maintain drawing state, and generate X/Y sample buffers at a fixed update rate. Components: - MCU (STM32- or ESP32-class) - Example options: ESP32-WROOM-32E module OR STM32G4/F4-class MCU with SPI + timers Firmware features: - Quadrature decoding for encoders; button debouncing - Drawing modes: - Base mode: “etch-a-sketch” continuous drawing (position integration with adjustable step/scale) - Optional modes: predefined shapes/patterns for testing - Fixed-rate DAC update engine (timer-driven), with buffered generation to keep output stable independent of UI activity ## Subsystem 3: Dual-Channel DAC + Analog Output Chain (X and Y) Purpose: Generate clean, low-noise bipolar voltages suitable for oscilloscope XY inputs. 
Components (examples): - Dual-channel SPI DAC, 12-bit (Microchip MCP4922 or equivalent) - Reference for stable scaling / midscale (e.g., LM4040-2.5 or equivalent 2.5 V reference) - Optional reconstruction filtering per channel (RC and/or 2nd-order low-pass) to eliminate high-frequency components - Op-amp signal conditioning: - Level shift around midscale + gain to produce bipolar output centered at 0 V - Buffer stage for stable drive into coax cables and oscilloscope inputs - Example op-amp class: dual op-amp supporting ±5 V rails (e.g., OPA2192/OPA2197 class or equivalent) - Output connectors: - 2x PCB-mount BNC connectors (X and Y outputs) - Output protection / safety features (per channel): - Series output resistor (current limiting and stability into cable capacitance) - Clamp diodes to rails to limit overvoltage at the connector - ESD considerations and robust grounding strategy ## Subsystem 4: Power Regulation Purpose: Provide clean digital and analog rails from a safe, convenient input. Components (examples): - USB-C 5 V input (sink configuration with CC resistors) + input protection - 3.3 V regulator for MCU and logic (e.g., AP2112K-3.3 or equivalent) - Negative rail generation for analog (e.g., TPS60403 inverting charge pump or equivalent) to enable bipolar outputs - Power decoupling and analog/digital rail isolation as needed ## (Stretch) Subsystem 5: Z-Axis Blanking Output (Optional) Purpose: Improve vector graphics/game rendering by blanking the beam during “retrace” moves. Components: - Protected Z-output driver (0–5 V-class control) to oscilloscope Z-input Firmware: - Assert blanking during reposition moves; unblank during line segments ## (Stretch) Subsystem 6: Line-Level Audio Output (Optional) Purpose: Provide an auxiliary line out to monitor synthesized signals audibly. Components: - 3.5 mm TRS jack (line out) - AC coupling + attenuation network and optional buffer Firmware: - Optional stereo mapping (e.g., X→Left, Y→Right) after removing DC offset # Criterion For Success The project is considered successful if all of the following are demonstrated and measured: 1. Bipolar XY output with selectable range: - Device generates two analog outputs (X and Y) centered at 0 V, with selectable full-scale up to ±5 V. - Verified with DMM and oscilloscope measurements (documented calibration procedure). 2. Stable interactive drawing in XY mode: - Using the two rotary encoders, a user can draw continuous line art on an oscilloscope in XY mode. - At minimum, demonstrate repeatable drawing of a square and a circle using the controller’s clear/recenter and scaling functions. 3. Deterministic update behavior: - The firmware updates the DAC using a hardware timer or equivalent mechanism to maintain stable, non-intensity-varying output during user interaction. 4. Safe interfacing / cannot damage scope under reasonable misuse: - Output stage includes current limiting and voltage clamping such that accidental output short-to-ground and brief overdrive conditions do not produce damaging currents into the oscilloscope input. - Verified by bench test (short to ground test and measurement of limited fault current through series resistor). (Stretch) Demonstrate a vector-rendered mode (Pong; Asteroids further stretch) with reduced retrace artifacts if Z-blanking is implemented. Optional line-out demonstration if implemented. |
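Since the analog chain hinges on the dual DAC, here is a minimal sketch of driving an MCP4922 over SPI with simultaneous X/Y latching via LDAC. The pin assignments are placeholders, and the circle in `loop()` merely exercises the output chain; the real firmware would run the update from a hardware timer.

```cpp
// MCP4922 X/Y update sketch. Command word: bit15 = channel, bit14 = BUF,
// bit13 = gain select (1 = 1x), bit12 = /SHDN (1 = active), bits 11..0 = code.
#include <SPI.h>
#include <math.h>

const int PIN_CS = 5;      // hypothetical chip-select pin
const int PIN_LDAC = 4;    // hypothetical LDAC pin

void dacWrite12(bool channelB, uint16_t code) {
  uint16_t word = (channelB ? 0x8000 : 0x0000) | 0x3000 | (code & 0x0FFF);
  digitalWrite(PIN_CS, LOW);
  SPI.transfer16(word);
  digitalWrite(PIN_CS, HIGH);
}

void writeXY(uint16_t x, uint16_t y) {
  dacWrite12(false, x);            // load channel A (X)
  dacWrite12(true, y);             // load channel B (Y)
  digitalWrite(PIN_LDAC, LOW);     // pulse LDAC so both outputs update
  digitalWrite(PIN_LDAC, HIGH);    // on the same sample edge
}

void setup() {
  pinMode(PIN_CS, OUTPUT);   digitalWrite(PIN_CS, HIGH);
  pinMode(PIN_LDAC, OUTPUT); digitalWrite(PIN_LDAC, HIGH);
  SPI.begin();
  SPI.beginTransaction(SPISettings(10000000, MSBFIRST, SPI_MODE0));
}

void loop() {
  // Trace a circle around midscale (2048) to verify the full signal path.
  static float t = 0;
  writeXY((uint16_t)(2048 + 1500 * cosf(t)),
          (uint16_t)(2048 + 1500 * sinf(t)));
  t += 0.01f;
}
```

Latching both channels with LDAC avoids the X/Y skew that sequential updates would otherwise draw as a slight diagonal smear.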
||||||
| 23 | Portable RAW Reconstruction Accelerator for Legacy CCD Imaging |
Guyan Wang Yuhong Chen |
other1.docx |
|||
| # **RFA: Portable RAW Reconstruction Accelerator for Legacy CCD Imaging** Group Members: Guyan Wang, Yuhong Chen ## **1. Problem Statement** **The "Glass-Silicon Gap":** Many legacy digital cameras (circa 2000-2010) are equipped with premium optics (Leica, Zeiss, high-grade Nikon/Canon glass) that outresolve their internal processing pipelines. While the optical pathway is high-fidelity, the final image quality is bottlenecked by: - **Obsolete Signal Chains:** Early-stage Analogue-to-Digital Converters (ADCs) and readout circuits introduce significant read noise and pattern noise. - **Destructive Processing:** In-camera JPEGs destroy dynamic range and detail. Even legacy RAW files are often processed with rudimentary demosaicing algorithms that fail to distinguish high-frequency texture from sensor noise. - **Usability Void:** Users seeking the unique "CCD look" are forced to rely on cumbersome desktop post-processing workflows (e.g., Lightroom, Topaz), preventing a portable, shoot-to-share experience. ## **2. Solution Overview** **The "Digital Back" External Accelerator:** We propose a standalone, handheld hardware device, a "smart reconstruction box," that interfaces physically with legacy CCD cameras. Instead of relying on the camera's internal image processor, this device ingests the raw sensor data (CCD RAW) and applies a hybrid reconstruction pipeline. The core innovation is a **Hardware-Oriented Hybrid Pipeline**: - **Classical Signal Processing:** Handles deterministic error correction (black level subtraction, gain normalization, hot pixel mapping). - **Learned Estimator (AI):** A lightweight Convolutional Neural Network (CNN) or Vision Transformer model optimized for microcontroller inference (TinyML). This model does not "hallucinate" new details but acts as a probabilistic estimator to separate signal from stochastic noise based on the physics of CCD sensor characteristics. The device will feature a touchscreen interface for file selection and "film simulation" style filter application, targeting an output quality perceptually comparable to a modern full-frame sensor (e.g., Sony A7 III) in terms of dynamic range recovery and noise floor. ## **3. Solution Components** ### **Component A: The Compute Core (Embedded Host)** - **MCU:** STMicroelectronics **STM32H7 Series** (e.g., STM32H747/H757). - _Rationale:_ Dual-core architecture (Cortex-M7 + M4) allows separation of UI logic and heavy DSP operations. The Chrom-ART Accelerator helps with display handling, while the high clock speed supports the computationally intensive reconstruction algorithms. - **Memory:** External SDRAM/HyperRAM expansion (essential for buffering full-resolution RAW files, e.g., 10MP-24MP) and high-speed QSPI Flash for AI model weight storage. ### **Component B: Connectivity & Data Ingestion Interface** - **Physical I/O:** USB OTG (On-The-Go) Host port. - _Function:_ The device acts as a USB Host, mounting the camera (or the camera's card reader) as a Mass Storage Device to pull RAW files (.CR2, .NEF, .RAF, .DNG). - **Storage:** On-board MicroSD card slot for saving processed/reconstructed JPEGs or TIFFs. ### **Component C: Hybrid Reconstruction Algorithm** - **Stage 1 (DSP):** Linearization, dark frame subtraction (optional calibration), and white balance gain application. - **Stage 2 (NPU/AI):** A quantization-aware trained model (likely TFLite for Microcontrollers or STM32-AI) trained specifically on _noisy CCD to clean CMOS_ image pairs. - _Task:_ Joint Demosaicing and Denoising (JDD). 
- **Stage 3 (Color):** Application of specific "Film Looks" (LUTs) selected by the user via the UI. ### **Component D: Human-Machine Interface (HMI)** - **Display:** 2.8" to 3.5" Capacitive Touchscreen (SPI or MIPI DSI interface). - **GUI Stack:** TouchGFX or LVGL. - _Workflow:_ User plugs in camera -> Device scans for RAWs -> User selects thumbnails -> User chooses "Filter/Profile" -> Device processes and saves to SD card. ## **4. Criterion for Success** To be considered successful, the prototype must meet the following benchmarks: - **Quality Parity:** The output image, when blind-tested against the same scene shot on a modern CMOS sensor (Sony A7 III class), must show statistically insignificant differences in perceived noise at ISO 400-800 equivalent. - **Edge Preservation:** The AI reconstruction must demonstrate a reduction in color moiré and false-color artifacts compared to standard bilinear demosaicing, without "smoothing" genuine texture (measured via MTF charts). - **Latency:** Total processing time for a 10-megapixel RAW file must be under **15 seconds** on the STM32 hardware. - **Universal RAW Support:** Successful parsing and decoding of at least two major legacy formats (e.g., Nikon .NEF from D200 era and Canon .CR2 from 5D Classic era). ## **5. Alternatives** - **Desktop Post-Processing (Software Only):** - _Pros:_ Infinite computing power, established tools (DxO PureRAW), highly customizable. - _Cons:_ Destroys the portability of the photography experience; cannot be done "in the field." Users need to be proficient with the parameters inside the software, which requires self-training and tutoring (not user-friendly). - **Smartphone App (via USB-C dongle):** - _Pros:_ Powerful processors (Snapdragon/A-Series), high-res screens, easy to use. - _Cons:_ Lack of low-level control over USB mass storage protocols for obscure legacy cameras; high friction in file management; operating system overhead prevents bare-metal optimization of the signal pipeline; unique algorithms may not be suitable for legacy cameras. - **FPGA Implementation (Zynq/Cyclone):** - _Pros:_ Parallel processing could make reconstruction instant. - _Cons:_ Significantly higher complexity, cost, and power consumption compared to an STM32 implementation; higher barrier to entry for a "mini project." |
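Stage 1 of the pipeline is deterministic and easy to specify precisely. The sketch below shows black-level subtraction and per-channel white-balance gain applied in place to a Bayer RAW buffer; the RGGB layout and the clamp levels are illustrative assumptions, since the real values come from each camera's metadata.

```cpp
// Stage 1 correction sketch: black-level subtraction + white-balance gains
// on a 16-bit Bayer buffer (RGGB pattern assumed for illustration).
#include <stdint.h>

void stage1Correct(uint16_t *raw, int width, int height,
                   uint16_t blackLevel, uint16_t whiteLevel,
                   float gainR, float gainG, float gainB) {
  const float maxOut = (float)(whiteLevel - blackLevel);
  for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
      // Black-level subtraction, clamped at zero.
      int32_t s = (int32_t)raw[y * width + x] - blackLevel;
      if (s < 0) s = 0;
      // Select the white-balance gain from the Bayer position.
      float g = gainG;                                 // G sites (two per quad)
      if (((y & 1) == 0) && ((x & 1) == 0)) g = gainR; // R site
      if (((y & 1) == 1) && ((x & 1) == 1)) g = gainB; // B site
      float out = s * g;
      raw[y * width + x] = (uint16_t)(out > maxOut ? maxOut : out);
    }
  }
}
```

Keeping this stage in plain fixed-layout loops also makes it a natural candidate for the M7 core's SIMD/DSP instructions later.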
||||||
| 24 | 4WD Wedge + Powered Roller Antweight Battlebot |
Junyan Bai Yuxuan Guo |
Zhuoer Zhang | |||
| # 4WD Wedge + Powered Roller Antweight Battlebot Team Members: - Yuxuan Guo (yuxuang7) - Junyan Bai (junyanb2) # Problem Antweight (≤ 2 lb) combat robots must remain mobile and controllable while enduring impacts, motor stalls, and power transients. Many teams lose matches due to loss of traction, getting stuck on opponents/walls, or electronics brownouts and wireless dropouts that lead to uncontrollable behavior or resets. Therefore, we want a competitive design that emphasizes reliable control and survivability: a low wedge to get under opponents and a powered front roller to help pin/deflect opponents and prevent getting stuck, while using a custom PCB that integrates wireless control, motor driving, and safety shutoffs. # Solution We will build a 2-lb antweight combat robot featuring: - A low-profile front wedge for ground control and deflection - A powered front roller mounted above the wedge lip to assist in pinning, lifting slightly, and guiding opponents - Four-wheel drive (4WD) for pushing power and maneuverability - A custom control PCB centered on an ESP32 to provide PC-based wireless control (WiFi/Bluetooth), motor control, and robust safety mechanisms The system is divided into four main subsystems: (1) Power & Safety, (2) Control & Communication, (3) Drive Train, and (4) Roller Mechanism. The design prioritizes predictable behavior under stalls/impacts and includes automatic shutdown on wireless link loss. # Solution Components ## Subsystem 1 — Power & Safety (Power Management and Distribution) **Function:** Deliver stable power to drive and roller systems while protecting logic electronics from brownouts and ensuring safe shutdown. **Safety features:** - Manual hard shutdown via kill switch - Firmware-controlled motor disable line(s) - Brownout monitoring (ADC measurement of battery/logic rail) ## Subsystem 2 — Control & Communication **Function:** Receive operator commands from a PC, process safety logic, and output PWM/enable signals for motor drivers. **Components:** - Microcontroller + wireless: Espressif ESP32-WROOM-32D (WiFi/Bluetooth) - Status indicators: LEDs for power/armed/link state (part numbers TBD) - Optional orientation sensing (stretch): MPU-6050 IMU module (GY-521) for flip detection and drive remapping **Firmware logic:** - Drive mixing (arcade/tank) for 4WD control (sketched after the success criteria) - Roller speed control - Link-loss failsafe: if command packets stop for > X ms, disable all motors - Input shaping (rate limiting / exponential curve) for controllable driving ## Subsystem 3 — Drive Train (4WD Locomotion) **Function:** Provide reliable mobility and pushing power during combat. **Components:** - 4x Drive motors **Mechanical:** - Four wheels mounted to a 3D-printed chassis - Wheel size chosen to improve traction and reduce high-centering (exact diameter TBD) ## Subsystem 4 — Powered Front Roller (Control Weapon) **Function:** Improve control by pinning/deflecting opponents and reducing the chance of getting stuck on wedges or walls. **Components (with part numbers):** - Roller motor: small brushed DC motor (e.g., N20/130-size class), final selection TBD - Roller driver: shares the motor driver family used by the drive train - Roller structure: 3D-printed roller with compliant sleeve (TPU) or textured surface for grip (material TBD) # Criterion For Success The project will be considered successful if all criteria below are met: 1. **Weight compliance:** Total robot mass (including battery) is **< 2.0 lb**. 2. **Manual shutdown:** Manual kill switch stops all motion within **≤ 2 seconds**. 
3. **Failsafe shutdown:** On wireless link loss (no valid commands for a defined timeout), all motors are disabled within **≤ 2 seconds**. 4. **Mobility reliability:** Robot can drive continuously for **≥ 3 minutes** without MCU resets or power brownouts. 5. **Control effectiveness:** Robot can push a standardized test object (defined weight) across **1 meter** on the arena surface without a stall forcing a reboot. 6. **Roller reliability:** Roller can run continuously for **≥ 60 seconds** without causing logic rail brownout or driver overheat shutdown. 7. **Impact robustness:** After **10 wall-impact tests** (full-speed bump into a rigid barrier), the robot remains operational with no loose power connections and no repeated resets. |
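The drive-mixing and input-shaping items in the firmware list are captured in the short sketch below. The expo and rate-limit constants are illustrative placeholders for values that would be tuned on the actual bot.

```cpp
// Arcade mixing with exponential input shaping and per-cycle rate limiting.
#include <math.h>

const float EXPO = 0.4f;        // 0 = linear stick response, 1 = full expo
const float MAX_STEP = 0.08f;   // max output change per control cycle

// Exponential curve: softens response near center for fine control.
float applyExpo(float x) { return (1.0f - EXPO) * x + EXPO * x * x * x; }

// Limit how fast an output may change between cycles (helps avoid the
// current spikes behind the brownouts named in the problem statement).
float rateLimit(float target, float current) {
  float d = target - current;
  if (d > MAX_STEP) d = MAX_STEP;
  if (d < -MAX_STEP) d = -MAX_STEP;
  return current + d;
}

// Arcade mix: throttle/steer in [-1, 1] -> left/right wheel commands.
void arcadeMix(float throttle, float steer, float &left, float &right) {
  throttle = applyExpo(throttle);
  steer = applyExpo(steer);
  left = throttle + steer;
  right = throttle - steer;
  float m = fmaxf(fabsf(left), fabsf(right));
  if (m > 1.0f) { left /= m; right /= m; }   // keep both sides in range
}
```

`rateLimit()` would wrap each wheel command in the control loop, so full-stick reversals ramp over a few cycles instead of slamming the H-bridges.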
||||||
| 25 | Building Interior Reconnaissance Drone (BIRD) |
Jack Lavin Jacob Witek Mark Viz |
Shiyuan Duan | |||
| # Building interior reconnaissance drone proposal Team Members: - Mark Viz (markjv2) - Jack Lavin (jlavin4) - Jacob Witek (witek5) # Problem There are many situations when law enforcement or emergency medical service professionals need quick, real-time, useful information about a non-visible location without sending in a human to gather this information due to the risks present. One of the most important things to know in these situations is whether there are people in a room or area, and if so, where they are located. While there are promising solutions currently used by these professionals, they can rarely be operated by one person, and they take time and manpower away from situations that usually require both. Our solution attempts to address these issues while providing an easy-to-use interface with critical information. # Solution Our solution is to use a reconnaissance drone equipped with a camera, other sensing components, and simple autonomous behavior capabilities, and to process the video feed on a separate laptop to determine an accurate location of all people in view of the drone relative to the location of a phone or viewing device nearby. This phone or viewing device would run an augmented-reality application using position information from the drone system to overlay the positions of people near the drone over first-person perspective video. The end result would allow someone to slide/toss the drone into a room, and after a second or two, be able to "see through the wall" where anyone in the room is. # Solution Components ## Drone and Sensors The drone itself will be a basic lightweight quadcopter design. The frame will be constructed using a 2D design cut from a sheet of carbon fiber and assembled with aluminum hardware and thread locks. The total volume including the rotor blades should not exceed 4" H by 8" W by 8" L at maximum (ideally much less). This simple frame will consist of a rectangular section to mount the PCB and a 2S (7.4 V) LiPo pack of about 2" x 2" or less, and four identical limbs mounted to the corners. On each of the four limbs will be brushless DC motors (EMAX XA2212 2-3S) driven by electronic speed controllers from the PCB (assuming they can't be pre-purchased). The PCB will have a two-pin DuPont/JST connector for battery leads, a TP4056 LiPo charging circuit, and buck converters for the necessary voltage(s), all on the underside. On top, the PCB will house an ESP32-S3 microcontroller, an IMU with decent accuracy, a set of mmWave 24 GHz human presence sensors (like the LD2410), and ultrasonic transducers forming a phased-array sensor with an accurate, narrow beam to scan for human presence with range. These components will allow the drone to be programmed with very simple and limited autonomous flight behaviors (fly up 5 feet, spin 360 degrees, land; see the state-machine sketch after the mass breakdown) and properly/safely control itself. The ultrasonic transducers and human-sensing radars will be the primary method of determining human presence, mostly calculated on the ESP32; however, additional calculations will need to be made on the AR end with the received data. If time and budget allow, we may also include a small 2 MP or 5 MP camera for a WiFi video stream or a composite video camera for an analog video stream as a backup/failsafe to the other sensors. 
A working rough breakdown of the expected mass of each component is as follows: - 4 hobby motors: ~50 grams (based on consumer measurements) - Carbon fiber frame: ~40 grams (estimate based on similar style and sized frames) - 2S 500 mAh battery: ~30 grams (based on common commercial LiPo product info) - PCB with MCU & peripherals: ~50 grams (based on measurements of similar boards) - 10-20 ultrasonic transducers: ~50 grams (based on commercial component info) - Metal hardware/fasteners & miscellaneous: ~25 grams (accounting for error as well) - Total mass: ~255 grams - Total thrust (at 7.6 V 7.3 A): ~2000 grams (from manufacturer ratings) - Thrust/weight is well over 2.0, which should allow for quick movement and considerable stability along with the improved frame considerations, plus extra room for more weight if needed. ## AR Viewer or Headset To create a useful augmented-reality display of the collected position data, the simplest way will be to write an app that uses the digital camera and gyroscope/IMU APIs of a smartphone to overlay highlighted human position data on a live camera view. We would use the Android Studio platform to create this custom app, which would interface with the data incoming from the drone. Building upon the Android APIs, we would overlay the data on the phone camera view. If we have more time to develop one, a headset or AR glasses could make the experience more useful (hands-free) and immersive. We may also use a laptop at this stage to run a server alongside the app for better processing. # Working Supply List (some can be found in student self-service; some need to be ordered) - Carbon fiber sheet (find appropriate size and 2-3 mm thick) - Aluminum machine screws with Loctite or bolt/nut with locking washer - 4 EMAX brushless DC motors and mounting hardware - 4 quadcopter rotor blades - 2S (7.6 V) 500 mAh LiPo battery - Custom PCB - ESP32-S3 chip w/ PCB antenna - 20 ultrasonic (40 kHz) transducer cans - 4 mmWave 24 GHz human presence radar sensors - TP4056 LiPo Charging IC (find other necessary SMD components) - DuPont two-pin connector for LiPo charging/discharging (choose whether removable battery design) - Various SMD LEDs to indicate functionalities or states on PCB - Voltage buck converter circuit components - ESC circuit components - Adafruit Accelerometer # Criterion For Success The best criterion for the success of this project is whether our handheld device or headset can effectively communicate human position data of a visually obstructed location to a nearby user within an accuracy of 1-2 meters while still allowing the user to carry out personal tasks. The video feed should be stable with minimal latency so as to be effective and usable, and estimated human positions should be updated only when they are positively in view; information about the recency of data should be apparent (maybe a red highlight on new people, yellow on a stale location, and green for a newly updated position). |
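The "very simple and limited autonomous flight behaviors" reduce naturally to a small state machine. The sketch below shows one way to structure it; the altitude/heading sources and the command helpers are hypothetical placeholders for the real attitude and motor controllers.

```cpp
// Autonomous behavior sketch: toss-in, ascend ~5 ft, spin 360 degrees, land.
// commandClimb()/commandYawRate()/motorsOff() are hypothetical stubs for
// the actual flight controller outputs.
enum FlightState { IDLE, ASCEND, SCAN, LAND };
FlightState state = IDLE;
float headingAccumDeg = 0;

void commandClimb(float mps) { /* set vertical-velocity target */ }
void commandYawRate(float dps) { /* set yaw-rate target */ }
void motorsOff() { /* disarm all four ESCs */ }

void flightStep(float altitudeM, float yawRateDps, float dt) {
  switch (state) {
    case IDLE:
      break;                                 // wait for the launch command
    case ASCEND:
      commandClimb(0.5f);
      if (altitudeM >= 1.5f) {               // ~5 feet reached
        headingAccumDeg = 0;
        state = SCAN;
      }
      break;
    case SCAN:
      commandClimb(0);                       // hold altitude
      commandYawRate(90.0f);                 // slow spin while sensors sweep
      headingAccumDeg += yawRateDps * dt;
      if (headingAccumDeg >= 360.0f) state = LAND;
      break;
    case LAND:
      commandYawRate(0);
      commandClimb(-0.3f);                   // gentle descent
      if (altitudeM < 0.05f) { motorsOff(); state = IDLE; }
      break;
  }
}
```

Keeping the behaviors this constrained is what makes the large thrust-to-weight margin above usable for stability rather than aggressive maneuvering.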
||||||
| 26 | AdheraScent Pill Container |
Albert Liu Anshul Rao Chia-Ti(Cindy) Liu |
||||
| Team Members: - Albert Liu (ycl6) - Chia-Ti (Cindy) Liu (chiatil2) - Anshul Rao (anshulr2) # Problem 1 in 4 adults miss doses of medication due to complex instructions or simply forgetting. Traditional reminders, such as alarms and notifications, are often ignored due to alarm fatigue. There are also many apps addressing this problem; however, seniors and many other adults struggle with using complex apps. Therefore, we are looking to build an automated scent-based pill dispenser to simplify the process and ensure adults take their medications on time. # Solution We propose an olfactory-based medication reminder system using a pill dispenser with a scent emitter as our reminder mechanism. The smell-based reminder added to the traditional pill dispenser consists of a conditional logic trigger: if the "container open" state is not triggered within the scheduled time window, the device initiates a controlled release of a specific scent. This scent acts as an environmental prompt, persistently reminding the user to take the medicine. The intensity of the scent emission will gradually increase over time until the physical container is opened, at which point the emission will be deactivated. This approach ensures the reminder remains physically present in the user's space. At a high level, our system consists of a pill container with an open/close detection mechanism, a timing unit, a scent emitter, and a power subsystem. # Solution Components ## Subsystem 1: Pill Container, Open/Close Detection This subsystem is responsible for physically storing the medication and detecting if the container is opened. Because the pill container is designed as a multi-day container, we will design it as a 7-day pill box to support the user's daily medication routine. An open/close detection mechanism will determine whether the container has been opened during the scheduled medication time each day. This means the pillbox will contain 7 separate sensors, one for each day, and will communicate this information to the timing unit subsystem as needed. The detection will be implemented using a simple mechanical or magnetic sensing design such as a reed switch or a limit switch. Once an opening is detected, this subsystem will send a signal indicating the medication was successfully taken. Components: 7-section pill container 7x open/close sensors (possibly a limit switch) ## Subsystem 2: Timing Unit The timing unit subsystem uses a Real-Time Clock (RTC) module integrated with the primary microcontroller. As long as the RTC has a backup coin cell, it will continue running as intended while the main power is off. This means that if the main power is interrupted, the RTC module will still be able to provide the date, time, and other necessary data. Otherwise, the microcontroller will poll the RTC module and compare the current time against the scheduled medication window. When the current time enters the configured window for an individual to take their medication, the timing unit will monitor the open/close detection subsystem. Specifically, if the sensor remains in the “closed” state past the scheduled window, the timing unit subsystem will generate a PWM signal to the scent emitter. 
While the pill dispensing mechanism stays in the “closed” state past the scheduled window, the duty cycle of the PWM signal will gradually increase, intensifying the smell over time (see the PWM sketch after the success criteria). Components: ESP32 Microcontroller CR2032 Coin Cell & Holder RTC DS3231 (optional) Buttons / LCD display for adjusting scheduled time ## Subsystem 3: Scent Emitter The scent emitter module is responsible for producing the scent, our physical reminder, when the medication is not taken on schedule. When it receives the signal that the container was not opened in a scheduled window, it will release a controlled amount of scent into the surrounding environment. The emission will be continuous and will stop immediately once the container is opened. To avoid heating elements and keep our pill container safe and portable, we will implement the scent emitter with a replaceable scent pad combined with a mechanically controlled valve and a small DC fan to regulate the scent release. When a missed medication event is detected, the valve opens to allow air to flow across the pad and emit the scent into the environment. The fan speed will keep increasing while the container remains closed, and the valve will close once the pill container is opened, stopping further emission. Our system will also assume a predetermined effective lifetime for each pad, for example 20 days, determined after our characterization. Then, after a conservative usage estimate (for example, 15 of the 20 days), which is also tracked by our timing unit, an LED begins blinking to indicate that the scent pad should be replaced. The LED stops blinking after the pad is replaced. Components: Replaceable scent pad LED Mechanically controlled valve Micro 5 V or 3.7 V DC fan An alternative scent emitter is a small ultrasonic transducer driven at a set frequency to aerosolize particles like a diffuser. ## Subsystem 4: Power Supply This subsystem provides the power needed by all electronic components in the device. To ensure ease of use and portability, our design will be powered by a battery instead of requiring a constant external power source. A voltage regulation circuit will ensure stable operation of the microcontroller and peripherals. In addition, there will be a deep-sleep power-saving state in which the microcontroller shuts down the most power-hungry components, such as the CPU or WiFi module, during idle periods. The microcontroller will wake via a hardware interrupt from the RTC module when the pill dispenser is opened or closed, as well as during the scheduled medication time. This ensures that the scent-based medication box will work as intended for a longer period of time. Components: Battery Power switch Voltage regulator # Criterion For Success The system correctly detects whether the pill container has been opened during a scheduled medication window. The user must be able to schedule a medication window. The scent emitter must activate automatically within 10 seconds after the scheduled medication window has passed if the pill container has remained closed. The scent intensity must increase with the duration of the missed medication window, stepping through PWM duty cycles of 25%, 50%, and 100%. The scent emitter deactivates within 10 seconds once the container is opened. 
The LED starts blinking when the replaceable scent pad has to be changed and stops after it is replaced. The system operates without requiring a smartphone, app, or external display. The device operates reliably for multiple medication cycles without failure. All subsystems integrate into a single functional prototype suitable for demonstration. The prototype has to be smaller than 5 × 2.8 × 0.5 in³ to keep it portable. The scent must be strong enough for real-world testers to recognize. Power consumption must be low enough for the device to run for longer than 2 months before the battery has to be replaced. |
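The escalating scent output maps directly onto the ESP32's LEDC peripheral. A minimal sketch is below, assuming the Arduino-ESP32 core 2.x LEDC API, an illustrative fan pin, and assumed 15/30-minute escalation points; the 25/50/100% duty steps come from the success criteria above.

```cpp
// Escalating fan PWM sketch (ESP32 LEDC, Arduino core 2.x API).
#include <Arduino.h>

const int FAN_PIN = 26;          // hypothetical fan MOSFET gate pin
const int PWM_CH = 0;

void setup() {
  ledcSetup(PWM_CH, 25000, 8);   // 25 kHz, 8-bit duty resolution (0-255)
  ledcAttachPin(FAN_PIN, PWM_CH);
}

// Called periodically by the timing unit with minutes elapsed past the window.
void updateScentFan(uint32_t minutesLate, bool containerOpen) {
  if (containerOpen) { ledcWrite(PWM_CH, 0); return; }  // stop immediately
  uint8_t duty;
  if (minutesLate < 15)      duty = 64;    // 25% duty
  else if (minutesLate < 30) duty = 128;   // 50% duty
  else                       duty = 255;   // 100% duty
  ledcWrite(PWM_CH, duty);
}

void loop() { /* the timing unit's scheduler drives updateScentFan() */ }
```

Running the fan PWM at 25 kHz keeps switching noise above the audible range, which matters for a device meant to sit quietly in a living space.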
||||||
| 27 | Kombucha Fermentation Control System |
Edwin Xiao John Puthiaparambil Rudy Beauchesne |
Haocheng Bill Yang | |||
| # Kombucha Fermentation Control System Team Members: - Rudy Beauchesne (rudyb2) - John Puthiaparambil (jtp7) - Edwin Xiao (edwinyx2) # Problem Home kombucha brewing is becoming increasingly popular, but most options fall into two extremes: expensive commercial systems with automated control, or low-cost DIY methods that depend on frequent manual checks and guesswork. As a result, home brews are often inconsistent from batch to batch, with fermentation running too slow or too fast, acidity drifting outside the desired range, or the process stalling without clear feedback. This unpredictability can lead to inconsistent flavor and, in the worst case, failed or spoiled batches. There is a need for a low-cost, repeatable kombucha brewing system that continuously monitors key conditions like temperature and pH and provides clear, reliable feedback with minimal user intervention. # Solution We propose a low-cost, closed-loop kombucha brewing system designed to make home fermentation more consistent and repeatable. A microcontroller on a custom PCB continuously reads temperature, pH, RGB color, ultrasonic liquid level, and pressure sensors to track fermentation conditions and progress. Using these measurements, the system controls a heating pad to regulate temperature and peristaltic pumps to add fresh tea or remove liquid as needed based on user-defined targets. If feasible within budget, the system will also include a small optional aeration pump (air pump + sterile filter) to provide controlled aeration during primary fermentation. A companion app dashboard (web-based) displays real-time status and logs trends over time so users can monitor brewing without constant manual checking. # Solution Components Subsystem 1: Fermentation Monitoring & Control This subsystem monitors the primary fermentation conditions and regulates temperature to keep the brew in a stable range. Functionality: - Continuously measure temperature, pH, and color trends during F1 - Drive a heating pad to maintain a user-defined temperature setpoint (control loop sketched after the success criteria) and control pumps for automated liquid handling - Send sensor data to the main controller for closed-loop control and logging Sensors / Components: - Temperature sensor: DS18B20 - Ultrasonic liquid-level sensor: HC-SR04 measures the brew height/volume to detect evaporation and prevent overfilling/underfilling during pump-based tea additions or liquid removal - pH Sensor: Analog pH probe + signal conditioning (PH-4502C module or equivalent front-end) - RGB Color Sensor: TCS34725 - Heating Element: Resistive heating pad controlled via MOSFET - Peristaltic pump(s): 12 V peristaltic pump (food-safe tubing) - Microcontroller: ESP32 Subsystem 2: Fermentation State & Safety Monitoring This subsystem monitors secondary fermentation indicators and system safety. Functionality: - Measure internal pressure buildup during fermentation - Detect abnormal fermentation conditions (overpressure or stalled fermentation) - Provide safety cutoffs and alerts if thresholds are exceeded Sensors / Components: - Pressure Sensor: MPX5700AP or equivalent pressure transducer - Signal Conditioning Circuit: Instrumentation amplifier and filtering - Safety Cutoff: Relay or solid-state switch for heater disable - Status Indicators: LEDs for system state and fault indication Subsystem 3: Data Logging & Web Interface This subsystem provides real-time data logging and user visibility through a web-based dashboard. 
Functionality: - Transmit sensor data (temperature, pH, color, pressure) to a web server - Log historical fermentation data for later analysis - Display real-time plots and system status via a browser-based interface Sensors / Components: - Wireless Interface: ESP32 integrated Wi-Fi - Backend: Lightweight web server or cloud-hosted database (e.g., HTTP/MQTT-based logging) - Frontend: Web dashboard displaying time-series sensor data and system state Subsystem 4: Power Management This subsystem provides regulated and reliable power to all system components. Functionality: - Supply 12 V power to the heating pad and pumps - Step down 12 V to 3.3 V for logic and sensors - Isolate high-power and low-power domains for safety and noise reduction Sensors / Components: - Power Source: 12 V wall adapter - Regulation: DC-DC buck converter (12 V → 3.3 V) - Loads: Heating pad, pumps, ESP32, and sensors Criterion For Success: - Maintain fermentation temperature within ±1°C of the target setpoint for a continuous 48-hour period - Measure pH with ≥0.1 pH resolution and maintain ±0.2 pH accuracy after calibration - Detect and log measurable color changes correlated with fermentation progression - Maintain safe operating pressure below a defined threshold and trigger a shutdown if exceeded - For the final demo, we will start from a deliberately off-condition brew (ice-cooled and pH shifted away from target) and show the system autonomously returning temperature and pH to a reasonable kombucha range using the heating pad and peristaltic pumps while logging and plotting all sensor trends live in the app This project involves significant circuit-level hardware design, including sensor signal conditioning, power management, actuator control, and embedded system integration. The scope and complexity are appropriate for a multi-person team and align with the course requirements. |
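For the ±1°C criterion, a simple hysteresis (bang-bang) loop on the heater is usually sufficient given how slowly a brew vessel changes temperature. The sketch below assumes a MOSFET-switched pad, an illustrative GPIO, and a hypothetical readTempC() helper wrapping the DS18B20 driver; the setpoint and deadband are placeholders.

```cpp
// Heater hysteresis control sketch for the fermentation vessel.
#include <Arduino.h>

const int HEATER_PIN = 25;        // hypothetical MOSFET gate pin
const float SETPOINT_C = 26.0f;   // user-defined fermentation target
const float DEADBAND_C = 0.5f;    // half-width; leaves margin inside ±1°C

// Hypothetical helper wrapping the OneWire/DS18B20 read.
float readTempC() { return 25.0f; /* replace with the real sensor read */ }

void setup() { pinMode(HEATER_PIN, OUTPUT); }

void loop() {
  float t = readTempC();
  if (t < SETPOINT_C - DEADBAND_C) {
    digitalWrite(HEATER_PIN, HIGH);   // below the band: heat
  } else if (t > SETPOINT_C + DEADBAND_C) {
    digitalWrite(HEATER_PIN, LOW);    // above the band: coast
  }
  // Inside the band the heater state is left unchanged (hysteresis),
  // which avoids rapid MOSFET cycling around the setpoint.
  delay(1000);
}
```

If the pad's thermal lag causes overshoot beyond the spec, the same loop structure upgrades cleanly to PI control, with the deadband as a starting tolerance.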
||||||
| 28 | Modular Screen |
Dale Morrison Sean Halperin Yuzhe He |
||||
| # Team Members: - Morrison, Dale Joseph Jr (dalejm2) - He Yuzhe (yuzhehe2) - Sean Halperin (seanmh3) # Problem Many applications (tabletop gaming groups, educators, researchers, presenters, and event organizers) require large, flexible, and reconfigurable display systems; however, existing solutions are expensive, bulky, non-modular, and difficult to customize. Users who want visual content often lack an affordable system that can be easily resized, repositioned, and updated with new content. For example, tabletop gaming groups may spend close to $1,000 on TV-table setups that still do not offer a reconfigurable display, making immersion exceedingly difficult for these groups. This shows the need for a screen that is customizable, modular, and affordable. # Solution The solution proposed is a modular digital display composed of multiple interlocking screen tiles that connect to form a larger display. Each tile contains a display and communicates with neighboring tiles through magnetic interconnects. A power or control tile will distribute power, detect the layout of the tiles, and set the visual display of each tile. To start, the system will support static and user-uploaded images. Something like this could be used in classrooms, team meetings, digital canvases, and tabletop gaming. The core idea is as described, but there are many advanced features, such as audio and animation, that will be implemented if time allows. # Solution Components ## Subsystem 1, Tile Display Module (Per Tile) This subsystem allows each tile to render its assigned portion of the full image. The display tiles form the user experience; therefore, without high-quality visual output, the modular board would fail to justify replacing paper or conventional screens. To preserve immersion, the overall board needs to appear seamless rather than fragmented, so each tile must render its assigned portion in full detail. Each tile will contain a screen, display driver, and electrical connectors that will receive power and image data from the control tile. The tiles will have an MCU for image processing. Each tile will be enclosed in a block housing that avoids visible separation between adjacent screens and maintains alignment. Components: - Display : 6 inch LCD or TFT screen - CreateXplay 6.0 inch TFT Screen Module 1080×2160 - Display Controller Board : HDMI or LVDS - Edge connectors : Magnetic Pogo Pin Connector, 12V 1A Pogopin Male Female 2.5 MM Spring Loaded Connectors - Housing for the Screen - Microcontroller Unit (MCU) : ESP32-C3-WROOM-02 ## Subsystem 2, Tile Interconnect and Layout Detection The key innovation of this project is modularity. Therefore, the board must work regardless of how the user arranges the tiles. This subsystem will provide that capability, allowing users to rearrange tiles freely while ensuring the correct image appears in the correct location. Each tile will include edge contacts that detect when it is connected to a neighboring tile. The power tile will scan the connections and build a grid map of the board. Based on each tile's position data, the power tile will assign the tile a grid coordinate and determine the part of the image that tile should display (rerunning automatically as tiles are moved). 
Implementation: - Connection detection - Layout mapping algorithm on the MCU - Coordinate assignments ## Subsystem 3, Power or Control Tile This subsystem will serve as the control center of the board and will be responsible for ensuring all tiles receive power and image data. The control tile will have one or two MCUs. One MCU manages system logic (layout detection, scene selection, etc.), while the second handles display data. The controller will store images locally (microSD or USB), slice them into tile segments (see the slicing sketch below), and transmit the correct image data to each tile. It will also broadcast synchronization signals to ensure all tiles update at the same time. This tile will also include power regulation, ensuring that all connected tiles receive stable voltage and current. Components: - Microcontroller Unit (MCU) : ESP32-C3-WROOM-02 - microSD or flash storage - Power distribution board with protection NCV97200 - Power On Button: PTS645SL43-2 LFS ## Subsystem 4, User Interface and Scene Control Without an intuitive interface, changing the screen would be difficult, which would reduce usability. This subsystem ensures that the board can be used in many different scenarios. Basic user controls will be integrated directly into the control tile. For advanced control, the system will provide a Wi-Fi-based web application hosted on the control tile. Users can connect from a phone or laptop to upload images, select scenes, and push them to the board. If app development proves too complex within the semester, the board will support switching between multiple preloaded scenes as a fallback. Components: - Scroll Knob: A scroll wheel which will allow switching of images if app development is too complex # Criterion For Success - The system supports 4 to 9 tiles. - Pressing the power button powers the system and all connected tiles. - The power or control tile automatically detects the board layout. - Each tile displays the correct portion of the full image. - The board displays at least two selectable scenes. - Scene transitions occur without visible misalignment. - The system remains stable under repeated reconfiguration. - Each tile can display the number of its relative grid location. |
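Once layout detection assigns each tile a grid coordinate, slicing the stored image is a straightforward crop per tile. A minimal sketch follows; the RGB565 pixel format and the fixed tile dimensions are assumptions for illustration.

```cpp
// Image-slicing sketch: copy one tile's crop of the full frame into a
// per-tile buffer for transmission (RGB565 assumed, row-major layout).
#include <stdint.h>
#include <string.h>

void sliceForTile(const uint16_t *full, int fullW,
                  int tileCol, int tileRow,
                  int tileW, int tileH, uint16_t *out) {
  // Top-left pixel of this tile's region inside the full image.
  const int x0 = tileCol * tileW;
  const int y0 = tileRow * tileH;
  for (int y = 0; y < tileH; ++y) {
    // One contiguous row copy per scanline of the tile.
    memcpy(out + y * tileW,
           full + (y0 + y) * fullW + x0,
           tileW * sizeof(uint16_t));
  }
}
```

The control tile would run this once per tile per scene change, then stream each buffer to the matching grid coordinate along with the synchronization signal.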
||||||
| 29 | EV Battery Thermal Fault Early Detection & Safety Module |
RJ Schneider Skyler Yoon Troy Edwards |
||||
| # Team Members - RJ Schneider (rs49) - Skyler Yoon (yy30) - Troy Edwards (troyre2) # Problem Lithium-ion batteries used in electric vehicles can experience abnormal heating due to internal faults, charging stress, or cooling failure. These thermal issues often begin with localized hot spots or an unusually fast increase in temperature before visible failure occurs. While vehicle battery management systems handle internal protection, there is a need for an external, low-voltage monitoring and diagnostic module that can provide early warning and a hardware-level safety output for laboratory testing, validation, and educational demonstration environments. # Solution We propose a battery thermal fault monitoring module that detects early thermal fault indicators using multiple temperature sensors and simple decision logic. The system will use two independent detection paths: a microcontroller-based path for data logging and trend analysis, and a hardware comparator path for fast threshold-based fault detection. A custom PCB will integrate sensor interfaces, signal conditioning, control logic, and alert outputs. The system will be demonstrated using a low-voltage heating element to safely simulate abnormal battery heating behavior. # Solution Components ## Subsystem 1 (Thermal Sensing Front-End) Components: - 10k NTC Thermistors (x3) - 1% Precision Resistors (voltage divider networks) - MCP6002 Rail-to-Rail Op-Amp (or equivalent) Function: This subsystem converts temperature changes into analog voltage signals using thermistor voltage dividers. A simple active low-pass filter is implemented on the PCB to reduce noise from the heating element and power supply. Multiple sensors allow detection of uneven heating across the simulated battery surface. ## Subsystem 2 (Dual-Logic Decision Unit) Components: - ESP32-WROOM-32 Microcontroller - LM311 Voltage Comparator Function: The ESP32 samples temperature data using its ADC and calculates temperature rate-of-rise to generate early warning alerts. In parallel, the LM311 comparator directly monitors one sensor voltage and triggers a fault output when a fixed temperature threshold is exceeded. This provides a simple hardware backup path that does not rely on firmware execution. ## Subsystem 3 (Power Regulation and Safety Output) Components: - 5V to 3.3V LDO Regulator (e.g., AMS1117-3.3) - SPDT 5V Relay Module - Logic-Level MOSFET (IRLZ44N or equivalent) Function: This subsystem regulates input power for the PCB and provides output signaling. The relay represents a low-voltage safety cutoff output that simulates a charger-disable or contactor-enable signal. The MOSFET is used to control the heating element during demonstration and testing. # Criterion For Success 1. Hardware Fault Trigger: The comparator-based protection path must activate the relay output within 200 ms of exceeding a preset temperature threshold. 2. Early Warning Detection: The ESP32 must trigger a warning alert when the measured temperature rise exceeds a configured rate-of-rise threshold for at least 3 seconds. 3. Temperature Accuracy: PCB sensor readings must be within ±1.5°C of a calibrated reference thermometer. 4. Noise Reduction Performance: The PCB filtering stage must demonstrate reduced ADC signal noise compared to an unfiltered measurement when the heating element is active. 5. Fail-Safe Behavior: The relay output must default to an open (safe) state when system power is removed. |
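For reference, the ESP32 side of Subsystems 1 and 2 amounts to a thermistor conversion plus a sustained rate-of-rise check. The sketch below uses the standard Beta-equation approximation; the divider topology, Beta value, and the 0.5 °C/s warning threshold are illustrative assumptions.

```cpp
// Thermistor conversion + rate-of-rise alarm sketch.
#include <math.h>

const float R_FIXED = 10000.0f;   // divider resistor (NTC to GND assumed)
const float R0 = 10000.0f;        // NTC resistance at 25 °C
const float BETA = 3950.0f;       // typical Beta for 10k NTCs (assumption)
const float RATE_LIMIT_C_PER_S = 0.5f;   // hypothetical warning threshold

float dividerToCelsius(float vOut, float vRef) {
  // Divider: vOut = vRef * R_ntc / (R_ntc + R_FIXED)
  float rNtc = R_FIXED * vOut / (vRef - vOut);
  // Beta equation: 1/T = 1/T0 + ln(R/R0)/B, with T0 = 298.15 K.
  float invT = 1.0f / 298.15f + logf(rNtc / R0) / BETA;
  return 1.0f / invT - 273.15f;
}

// Returns true once the rise rate has exceeded the limit for >= 3 s,
// matching success criterion 2.
bool rateOfRiseAlarm(float tempC, float dtSeconds) {
  static float lastTemp = tempC;    // initialized on first call
  static float overTime = 0;
  float rate = (tempC - lastTemp) / dtSeconds;
  lastTemp = tempC;
  overTime = (rate > RATE_LIMIT_C_PER_S) ? overTime + dtSeconds : 0;
  return overTime >= 3.0f;
}
```

The comparator path stays entirely independent of this code, which is the point: criterion 1 must hold even if the firmware hangs.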
||||||
| 30 | American Sign Language Robot Hand Interpreter |
Ankur Prasad Matthew Uthayopas Tunc Gozubuyuk |
||||
| **American Sign Language Robot Hand Interpreter** **Team Members**: - Ankur Prasad (ankurp3) - Experienced in Control Systems, Machine Learning, and some Embedded programming. Have done projects that train models using Python and have worked with programming and communicating sensors. Additionally have experience building mechanical systems. - Tunc Gozubuyuk (tuncg2) - Have some experience in PCB design and experience in Control Systems. - Matthew Uthayopas (mnu2) - Experienced in Circuit Design and Signal Processing. Have done internships focused on AI/ML models. Have some experience with PCB design and programming with MCUs. **Problem** There are 500,000 to 1,000,000 people worldwide who use American Sign Language (ASL) to convey their ideas. Every idea matters, and we want every idea to be addressed, understood, and communicated between individuals without any communication barriers. Therefore, we want to engineer a cost-efficient ASL Robot Hand Interpreter to be used as a teaching tool for anyone who wants to learn ASL. Voices of the Unheard: Conversational Challenges Between Signers and Non-Signers and Design Interventions for Adaptive SLT Systems: https://dl.acm.org/doi/10.1145/3706599.3720201 Students With Disabilities: https://nces.ed.gov/programs/coe/pdf/2024/CGG_508c.pdf **Solution** Our solution is to design a programmable robotic hand that will be able to perform all letters of the alphabet in American Sign Language. The hand will be trained through multiple sensors attached to a separate glove, so we can potentially train the hand to sign whole words. We will focus on the hand displaying ASL words, but if time permits, we will add features that allow interaction with the hand. If Time Permits: The robotic hand will be able to teach American Sign Language without the need for a teacher/interpreter. This can be done by adding audio recognition to our robotic hand so that it will be able to sign words that it picks up. **Solution Components** **Subsystem 1**: Robotic Hand and Actuation Controls This subsystem will be able to bend and restore the joints of the robotic hand. It will function similarly to tendons when it curls and extends fingers. Mechanical Structure: Fingers made out of popsicle sticks that will be cut and sanded down and connected with screws and nuts: Popsicle Sticks - https://www.hobbylobby.com/crafts-hobbies/wood-crafts($0.99) The palm will be made out of cardboard, layered and then glued together. Additionally, there will be cut wood to mount the servo motors. Cardboards - https://a.co/d/1botWA0 ($5) For the tendons, we plan to use nylon string that will be routed through the fingers using small screws/holes on the finger segments. We will place winches and spools on the servo horns to wind the string that controls the fingers. Additionally, we will utilize elastic cords to provide a restoring force which will return the finger to its original state. Elastic Cords - https://www.amazon.com/Elastic-Bracelets-Bracelet-Stretchy-Necklaces ($7) We will also potentially utilize springs to ensure that the fingers have enough force when holding a specific hand position. Motor system: Servo motors (x9) which will provide the torque to pull the tendons. Each finger will contain one servo motor except the thumb, which will contain three. 
Then we will have two servo motors for the wrist to allow for movement in both directions Servo Motors - https://www.adafruit.com/product/1143?utm ($10) Microcontroller (Nano V3.0) - https://a.co/d/bsRC3nZ ($16) We are planning to use an ATmega328P MCU to read the resistance each finger's flex sensor produces for each letter. The microcontroller will be hooked up to flex sensors attached to each finger. The microcontroller and motor system will be placed inside a recyclable water bottle. Flex Sensors - https://www.pcb-hero.com/products/2-2-resistive-flex-sensor ($2.15) Power System: Our system will eventually be powered by a portable power module. It will be connected to the microcontroller, which will then provide power to all the other components. Power Source: For bench: AC-DC adapter (12 V or 6–8 V, depending on motors) For portable: Turnigy 3300mAh 3S 11.1V Shorty LiPo Battery ($20) - https://hobbyking.com/en_us/turnigy-3300mah-3s-11-1v-30c-shorty.html?wrh_pdp=2&countrycode=US&utm_source=chatgpt.com **Subsystem 2**: Interaction and Teaching This subsystem will be responsible for training and programming the robotic hand. Sensor Glove: Main Glove: Standard cloth glove made for winter We will use 9 flex sensors to gauge the movements of the specific joints and fingers An Arduino Nano, which will be mounted on the glove to read all of the flex sensor data An HC-05 Bluetooth module will be used to send the glove's sensor data to the main robot hand controller **Criterion For Success** Sign Language Accuracy The robotic hand should sign each letter of the ASL alphabet correctly when programmed to do so Any words or letters signaled should be recognizable by at least 3 testers The device should be able to spell out a 6-letter word in a reasonable amount of time which can be understood by 3 testers Machine Learning Feedback The robotic hand must be able to replicate signs performed with the glove at 85% accuracy The robotic hand should replicate signs within 2-3 seconds of glove movement Battery Life and Power Supply The robotic hand must have at least 2 hours of battery life The device should be able to perform at least 26 different hand signals before losing functionality Time Permitting Features The robotic hand should be able to replicate words spoken at 75% accuracy The camera should be able to detect a human doing sign language with a single color background |
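The core glove-to-hand data path is a flex-sensor reading mapped to a servo angle. The sketch below shows one finger's channel on the Nano's ATmega328P; the pin choices and calibration endpoints are illustrative and would come from per-finger tuning.

```cpp
// One-finger flex-to-servo mapping sketch (Arduino Nano / ATmega328P).
#include <Servo.h>

Servo fingerServo;
const int FLEX_PIN = A0;          // flex-sensor divider output (assumed pin)
const int FLEX_STRAIGHT = 620;    // ADC reading, finger straight (calibrated)
const int FLEX_BENT = 350;        // ADC reading, finger fully bent (calibrated)

void setup() {
  fingerServo.attach(9);          // hypothetical servo signal pin
}

void loop() {
  int raw = analogRead(FLEX_PIN);
  // Map the calibrated flex range onto the servo's 0-180 degree travel;
  // map() handles the reversed endpoints directly.
  int angle = map(raw, FLEX_STRAIGHT, FLEX_BENT, 0, 180);
  angle = constrain(angle, 0, 180);
  fingerServo.write(angle);
  delay(20);                      // ~50 Hz update, matching servo refresh
}
```

For training, the glove-side Nano would stream these raw readings over the HC-05 instead of driving servos directly, and the hand-side controller would replay recorded poses.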
||||||
| 32 | Plant Notification System (Soilmate) |
Emma Hoeger Sigrior Vauhkonen Ysabella Lucero |
||||
| Plant Notification System (Soilmate) Team Members: - Emma Hoeger (ehoeger2) - Ysabella Lucero (ylucero2) - Sigrior Vauhkonen (sigrior2) # Problem Many houseplant owners struggle to take proper care of their plants. It can be difficult to keep track of when to water them and where to keep them, based on their species of plant and stage of life. Since all of them require water at different frequencies and amounts, it’s also easy to forget to water the plants on time and meet their different schedules. # Solution Our solution is to create a notification system to inform houseplant owners of when they should water their different plants. It will also notify the owner of the conditions of the plant based on various sensors. This will be done by creating an app that the owner can download on their phone, where they will be able to enter their type of plant. There have been many apps created to act as a reminder to water plants; however, the majority of them rely on a schedule rather than live data gathered from the plant. Also, those that do use live data from the plant do not track the weather. Our app will track where that plant is originally from and use the weather patterns in that area to determine when it should be watered (i.e., when it’s raining). In addition, there will be a soil moisture sensor, humidity sensor, light sensor, and temperature sensor. The soil moisture sensor will alert the owner to water the plant if the moisture is too low, and prevent overwatering of the plant if the moisture is too high. The humidity sensor will alert the owner when humidity is dangerously high or low for the plant, which is especially useful for tropical plants in a non-tropical environment (many houseplants are of a tropical background). The temperature sensor will alert the owner when the room temperature is not in the optimal range for the specific plant. With the integration of software and hardware subsystems, this plant notification system will make taking care of houseplants easier for both beginner and experienced plant owners. Beginner plant owners will find it easier to learn and keep track of the demands of their plants, preventing the most common mistakes that result in the death of their plants. Many experienced plant owners have upwards of 20 plants, and this notification system would make it much simpler to keep track of when to water them all. # Solution Components - ESP32-C61-DevKitC-1-N8R2 - Moisture Sensor (SEN0114) - Temperature & Humidity Sensor (SHTC3-TR-10KS/9477851) - Light Sensor (BH1750) - ADC Module - 5V DC Converter ## Subsystem 1: App Configuration + Weather Data The app (developed using Flutter/Android Studio) will allow the user to add a plant for monitoring: the user will select the plant species, size, light exposure, and the size of the pot. With this information, using a lookup table that holds information for plant species, the app will store target ranges for soil moisture, temperature, humidity, and light, as well as a “home location” (later used to check weather). In the event that a plant species is unknown to the app (not in the lookup table), the user can manually add this information. Once per day, the app will call a weather API (OpenWeatherMap API) using the “home location” of a plant to check for rain in that region. This will be used as a supplementary factor alongside the data from the soil moisture sensor, and with this a decision will be made on whether to water the plant or not (a decision sketch follows the proposal). 
If the plant should be watered, a notification will be generated to inform the user. The data from the temperature, light, and humidity sensors will also generate notifications if the temperature and/or humidity is out of the recommended range, informing the user that the environment is too hot or too cold, or too moist or too dry. It will give recommendations to either turn the temperature down/up, move the plant to a window facing a different direction (north, east, south, west), mist with water if too dry, or open windows if too humid. This will make the app much more beginner-friendly. ## Subsystem 2: Sensor Subsystem The sensor subsystem will use a resistive moisture sensor (SEN0114), temperature and humidity sensor (SHTC3), and a light sensor (BH1750). All of these sensors except the SEN0114, which requires an ADC module, will use an I2C interface that is compatible with our microcontroller (ESP32). The sensors will send their measurements to the microcontroller to be interpreted and relayed through the app. Our power subsystem will supply each sensor with its rated voltage. ## Subsystem 3: Microcontroller for Communication We must be able to blend our app configuration with our live sensor subsystem to send an alert. We can do this by using the ESP32 microcontroller. It will provide Wi-Fi and Bluetooth connectivity for our sensor devices to easily transfer the data to our app. It is cost-effective and has low power consumption, which will make it easy to integrate with our design. Furthermore, our group has experience with this microcontroller, so we are confident in its capabilities. ## Subsystem 4: Power Subsystem The power subsystem will deliver power to the sensors and microcontroller systems. The ESP32 requires 5 V while the temperature, humidity, moisture, and light sensors require 3.3 V. The 3.3 V will come from the LDO on the microcontroller, and we will use a 5 V USB adapter to convert the 120 V AC from the bench to 5 V. # Criterion For Success (Pothos for example) - Accurately gather soil moisture data - 300-700 Ohms optimal for top 2 inches of soil - Accurately gather temperature data - 60 to 80 degrees Fahrenheit - Accurately gather humidity data - 40 to 60% - Accurately gather light data - 1,000 to 3,000 lux - Accurately transfer data from sensors to app via microcontroller - Be able to track weather conditions - Be able to send alerts through app using sensors/weather conditions - Allow user to enter plant species and size in app - Ensure app can track weather for multiple plant species |
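The watering decision combines the live moisture reading with the daily rain check. A minimal sketch of that rule is below; the threshold field and the rain-sensitivity flag are illustrative assumptions about how the per-species lookup table might be structured.

```cpp
// Watering-decision sketch combining sensor data with the weather check.
struct PlantProfile {
  int moistureDryThreshold;   // ADC reading below which the soil counts as dry
  bool rainSensitive;         // e.g., plants near an open window or balcony
};

bool shouldNotifyWater(int moistureAdc, bool rainExpectedToday,
                       const PlantProfile &p) {
  if (moistureAdc >= p.moistureDryThreshold) return false;  // soil still moist
  if (p.rainSensitive && rainExpectedToday) return false;   // rain will cover it
  return true;   // dry and no rain expected: alert the owner
}
```

Because the weather API runs once per day while the sensors stream continuously, the app would re-evaluate this rule on each new sensor reading using the cached rain flag.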
||||||
| 33 | HelpMeRecall |
Michael Jiang Sravya Davuluri William Li |
||||
| # HelpMeRecall Team Members: - Sravya Davuluri (sravyad2) - William Li (wli202) - Michael Jiang (mbjiang2) # Problem Many individuals have difficulty remembering recent activities and completing routine tasks like eating or taking medication. # Solution A standalone assistive device that supports activity recall using sensor-gated voice interaction. It allows users to verbally log activities they have completed, and later query whether a specific activity has been performed. It uses an onboard microphone and on-device audio processing on a microcontroller to perform keyword detection. The device is always on, which will be verifiable with an LED, but voice input is only accepted if the device is worn (capacitive touch sensor) and specific words from a limited vocabulary are spoken, to avoid accidental logging. To address the possibility of missed detections of supported keywords, we will map several keywords to each activity. In the case of taking medicine, these might be "medicine," "medication," "pill," "drug," and "prescription." This also simplifies the problem and mitigates confidence-rate issues. To validate a completed action, the action is logged only if an accelerometer detects physical movement around the same time, reducing false logging. If a voice log is accepted, the device provides haptic feedback. Activities are also timestamped and stored in local memory. If the device finds that a queried activity has been completed, it confirms this, including the timestamp, through an integrated speaker. The logs reset at midnight automatically since the activities repeat daily. There is also a hard reset button to clear logs, as well as a button to delete the latest log in case of a logging mistake by the user. # Solution Components ## Subsystem 1: Microcontroller Unit and Controls Acts as the central unit for logic. Manages the sensor inputs and executes a finite state machine. The FSM states are start, idle, listening, logging, and replying. Components: ESP32-S3-WROOM-1 ## Subsystem 2: Audio input processing unit Captures the voice input from the user and performs keyword detection on a limited vocabulary, where each action can be mapped to multiple set keywords to improve detection. Components: Digital MEMS microphone (INMP441), ESP32-S3-WROOM-1 ## Subsystem 3: Sensor gating and activity validation Uses a capacitive touch sensor and an accelerometer to ensure that voice input is only received and accepted if the device is worn and recent movement is detected, instead of running continuous voice recognition. A "cooldown" period is enforced: the microphone is disabled for 10 seconds if there is motion but no logging during the listening period multiple times in a row, to help conserve battery. Components: Capacitive touch sensor (AT42QT1010), Accelerometer (MPU-6050) ## Subsystem 4: Feedback and Output Uses a speaker for audio feedback as a response to the user's query. This subsystem also provides haptic feedback as an indication of an accepted user voice log. To indicate that the device is on, the LED is green. If the device is listening, the LED is yellow. If the device is low on power, the LED will be red. Components: Speaker (8 ohm speaker), amplifier (MAX98357A), coin vibration motor, transistor (2N3904), RGB LED ## Subsystem 5: Time logging and local storage Stores the activity voice logs along with timestamps. Allows automatic reset at midnight to support daily repetitive tasks.
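A compact C sketch of this log store, assuming epoch-second timestamps from the onboard RTC; the entry format, capacity, and function names are illustrative, not the team's actual firmware:

```c
/* Minimal sketch of the timestamped activity log with midnight
 * reset. Sizes and names are illustrative placeholders. */
#include <stdio.h>
#include <string.h>
#include <time.h>

#define MAX_LOGS 64

typedef struct { char activity[16]; time_t when; } log_entry_t;
static log_entry_t logs[MAX_LOGS];
static int log_count = 0;

void log_activity(const char *activity, time_t now) {
    if (log_count < MAX_LOGS) {
        strncpy(logs[log_count].activity, activity,
                sizeof logs[0].activity - 1);
        logs[log_count].when = now;
        log_count++;
    }
}

void delete_latest_log(void) { if (log_count > 0) log_count--; }

/* Returns 1 and reports the timestamp if the activity was logged today. */
int query_activity(const char *activity, time_t *when_out) {
    for (int i = log_count - 1; i >= 0; i--) {
        if (strcmp(logs[i].activity, activity) == 0) {
            *when_out = logs[i].when;
            return 1;
        }
    }
    return 0;
}

/* Call periodically; clears logs when the local day rolls over. */
void midnight_reset(time_t now) {
    static int last_day = -1;
    struct tm tm_now;
    localtime_r(&now, &tm_now);
    if (last_day != -1 && tm_now.tm_mday != last_day)
        log_count = 0;               /* daily tasks start fresh */
    last_day = tm_now.tm_mday;
}

int main(void) {
    time_t now = time(NULL), when;
    midnight_reset(now);
    log_activity("medicine", now);
    if (query_activity("medicine", &when))
        printf("medicine logged at %s", ctime(&when));
    return 0;
}
```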
Timekeeping is done using the ESP32's internal RTC. Components: ESP32-S3-WROOM-1 ## Subsystem 6: Power Supplies power to the device. Components: Battery (Li-Po battery) # Criterion For Success - Correctly detects supported keywords with an accuracy of at least 80% in a quiet environment - Device will only log upon verifying physical activity and hearing a keyword from the user within a 5-second window - Upon successful logging, the speaker will output audibly and haptic feedback can be felt by the user as a 2-second vibration - While querying logs, the speaker will output and the LED will be solid - Logs will be automatically cleared at midnight and can be manually reset with the reset button - Latest log will be deleted upon pushing a separate button - LED stays solid while the device is powered - False log rate < 1 per hour in normal conversation when worn. |
||||||
| 34 | LabEscape Ultrasonic Directional Speaker |
Sam Royer |
Mingrui Liu | |||
| # LabEscape Ultrasonic Directional Speaker Team Members: - Piotr Nowobilski (piotrn2) - Sam Royer (sroyer2) - Arthur Zaro (azaro3) # Problem Working with Professor Kwiat for the LabEscape escape room, we want to make an audio-based clue using ultrasonic waves to hide a narrow beam of audio that can only be heard at the intersection of two ultrasonic waves. We need to create the ultrasonic transducer arrays to emit the ultrasonic waves, as well as the drivers that feed the transducers and produce the necessary waves. # Solution We will make two separate subcircuit drivers to drive the ultrasonic waves. One will produce a standard 40 kHz wave as a reference, and the other a 40 kHz carrier wave, amplitude-modulated with an audible audio signal. The intensity of the 40 kHz waves will drive the air into a nonlinear regime, allowing the air itself to demodulate the carrier wave against the 40 kHz reference, so the original audio is heard only at the intersection of the two waves. For the transducers, we will simply wire many individual ultrasonic transducers in parallel, with one array connected to the 40 kHz sine wave and the other connected to the 40 kHz carrier wave. # Solution Components ## Digital-to-analog Converter We need to store an audio clip digitally so the same clue plays over and over throughout the escape room experience and may be discovered at the intersection of the "audio spotlights." To convert this digitally stored signal to a usable signal in the speakers, we need to convert the digital signal to an analog signal. The ideal resolution would be 16 bits for high-quality audio, as we want to minimize the distortion caused by conversion. This will be done through a DAC IC. A serial-load DAC seems best, as these have internal 16-bit shift registers; if we sample our audio at 22050 Hz, we can maintain good resolution by loading at 22050 * 16 Hz and then clocking the signal out. Components: DAC8811 - 16 bit serial Digital to Analog converter. Audacity audio software to record and encode 16 bit audio ## Modulating subcircuit We need to convert the analog signal into a 40 kHz signal using amplitude modulation, so that the carrier wave and reference wave are at the same base frequency and, upon their crossing with enough power, the signal demodulates in the air. We are considering implementing this with digital potentiometer(s) in one of the many standard amplitude modulation circuit designs available online, tuning it precisely with those digital potentiometers to compensate for the tolerances of the resistors and capacitors used in the circuit. Components: Digital Potentiometer - MCP4141. ## Signal Amplifier Circuit After we modulate the signal, and likewise for the standard 40 kHz wave, we need to amplify the signal so that it is powerful enough to drive the air nonlinear and let the audio signal demodulate at the intersection of the audio beams. Components: LM3886 (high-power audio amplifier; the only issue is that its available gain falls off at higher frequencies such as 40 kHz, so we may decide to swap this out). ## Filtering Subcircuit A filter subcircuit may be necessary in order to reduce the noise before amplification. Given that speech mostly occupies roughly 80 Hz to 6 kHz, this will likely be a band-pass filter to cut out the extreme highs and lows from harmonics and miscellaneous conversion noise. Initially we will just try a simple first-order low-pass filter and high-pass filter in series, which only requires a capacitor and a potentiometer per stage to tune.
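As a quick sanity check on those stages, a C sketch computing the tuning resistance from the standard first-order cutoff relation f_c = 1 / (2*pi*R*C); the capacitor value is an illustrative placeholder, not a chosen part:

```c
/* Sanity-check sketch for sizing the tuning resistances of the
 * first-order RC stages from f_c = 1 / (2*pi*R*C). */
#include <stdio.h>

static const double PI = 3.141592653589793;

double rc_resistance(double f_cutoff_hz, double c_farads) {
    return 1.0 / (2.0 * PI * f_cutoff_hz * c_farads);
}

int main(void) {
    double c = 10e-9; /* 10 nF placeholder */
    /* pass roughly the 80 Hz - 6 kHz speech band discussed above */
    printf("HPF R for 80 Hz: %.0f ohms\n", rc_resistance(80.0, c));
    printf("LPF R for 6 kHz: %.0f ohms\n", rc_resistance(6000.0, c));
    return 0;
}
```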
If that doesn't provide enough attenuation, we have found online examples of higher-order filters that give more attenuation and would require a few additional resistors, capacitors, and an op amp. Components: Digital Potentiometer MCP4141 for tuning filtering circuit. Capacitors for filtering circuit. Resistors for filtering circuit. Op Amp (TBD if needed). ## Transducer Array To actually emit the ultrasonic waves, we will need an ultrasonic speaker array to emit both the reference and carrier waves. To do this we will buy several small individual ultrasonic speakers and attach them in parallel so they all simultaneously emit the desired frequency. Components: 25+ small ultrasonic transducers (can buy in bulk) ## Additional Component(s) Stepper motor and motor drivers for panning the speakers into alignment. Flashlight mounted to each transducer array to make the alignment of each speaker clear. # Criterion for Success - The audio and pressure from the ultrasonic waves are very narrow, and the intersection between the two ultrasonic "spotlights" requires precision. Each beam should be consistent with its attached flashlight. - Audio is heard only at the intersection of the two waves and is neither too loud nor too quiet. - Audio is of clear enough quality that a clue can easily be presented through the transducers. - Transducers and drivers are capable of running for a long period of time while players try to uncover the clue associated with them. |
||||||
| 35 | UAV Battery Management System with Integrated SOC and SOH Estimation |
Edward Chow Jay Goenka Samar Kumar |
||||
| # Title UAV Battery Management System with Integrated SOC and SOH Estimation # Team Members: - Edward Chow (ec34) - Jay Sunil Goenka (jgoenka2) - Samar Kumar (sk127) # Problem UAV batteries are safety-critical and performance-critical, as a weak or degraded pack can cause sudden voltage drop, shutdown, reduced flight time, or unsafe thermal behavior. Typical BMS implementations rely primarily on fixed thresholds for voltage, temperature, or current to prevent immediate failures. However, threshold-only systems do not provide predictive insight into battery degradation, and battery health issues are often discovered only after runtime loss or unsafe behavior. Additionally, high discharge currents and fluctuating temperatures are common in UAV operations, which accelerates degradation. A lightweight BMS that not only protects the pack in real time but also estimates battery health and degradation risk would improve reliability, reduce unexpected failures, and enable better operational decisions, such as deciding whether a battery is safe to use or needs to be retired. # Solution To address the delicate nature of UAV batteries, we will design and construct a compact and efficient battery management system that integrates reliable real-time protection with intelligent prediction. Our primary algorithm for estimating the battery's State of Charge (SOC) will be coulomb counting, which relies on continuous current measurement. We are researching the Kalman filter method as a second algorithm for more accurate estimation. The BMS will also monitor cell voltages and temperatures to ensure safe operation and provide valuable data for battery condition assessment. By analyzing SOC history, voltage behavior, current profiles, and temperature data, the system should be able to estimate the State of Health (SOH) of the battery. Tracking SOH over time will reveal capacity fade and degradation trends. We also plan to log all measurements and stream them to an external dashboard for visualization and analysis. As an extension, the project could also incorporate a lightweight AI-driven model to assist in SOH estimation and degradation assessment. # Solution Components ## Slave Board The slave board will be responsible for monitoring individual cell voltages and temperatures and supporting passive cell balancing. It will report accurate measurement data to the master board, ensuring safe operation of the battery pack at the cell level. The HW components and sensors include: Cell monitoring IC: Analog Devices LTC6811 or LTC6813 (multi-cell voltage sensing with built-in diagnostics and balance control) isoSPI communication interface: Analog Devices LTC6820 Temperature sensors: 10 kΩ NTC thermistors (e.g., Murata NCP18XH103F03RB) Passive balancing: bleed resistors (33–100 Ω) and N-MOSFETs per cell Cell sense connectors and basic RC filtering/ESD protection Power regulation: buck converter (e.g., TPS62130) and 3.3 V LDO ## Master Board The master board performs pack-level protection, SOC and SOH estimation, data logging, and external communication. It enforces safety limits by aggregating data from the slave board.
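A minimal C sketch of the coulomb-counting SOC update this board would run on each current sample (from the shunt path listed below); the capacity value, sign convention, and names are illustrative assumptions:

```c
/* Sketch of coulomb-counting SOC estimation, assuming periodic
 * current samples. Capacity and sign conventions are placeholders. */
#include <stdio.h>

typedef struct {
    double capacity_ah;   /* usable pack capacity, e.g. from last SOH estimate */
    double soc;           /* 0.0 .. 1.0 */
} soc_estimator_t;

/* current_a: discharge negative, charge positive; dt_s: sample period */
void soc_update(soc_estimator_t *est, double current_a, double dt_s) {
    est->soc += (current_a * dt_s) / (3600.0 * est->capacity_ah);
    if (est->soc > 1.0) est->soc = 1.0;   /* clamp: integration drift */
    if (est->soc < 0.0) est->soc = 0.0;
}

int main(void) {
    soc_estimator_t est = { .capacity_ah = 5.0, .soc = 1.0 };
    /* 20 A UAV-like discharge sampled at 10 Hz for one minute */
    for (int i = 0; i < 600; i++) soc_update(&est, -20.0, 0.1);
    printf("SOC after 1 min at 20 A: %.3f\n", est.soc); /* ~0.933 */
    return 0;
}
```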
The HW components and sensors include: Microcontroller: STM32H7 series Current sensing: shunt resistor with TI INA240 current-sense amplifier Protection switching: back-to-back N-channel MOSFETs with gate driver (e.g., BQ76200) Power regulation: buck converter (e.g., TPS62130) and 3.3 V LDO Communication: isoSPI (LTC6820), CAN Data logging: microSD card or onboard flash memory ## BMS Viewer The BMS Viewer will be a software dashboard used to visualize real-time and logged battery data and assess battery health. Potential features: Live display of SOC, SOH, pack voltage, pack current, and temperature Time-series plots of voltage, current, temperature, and SOC Data ingestion via USB, CAN, or wireless telemetry Backend implemented in Python or Node.js with a web-based dashboard # Criterion For Success - BMS detects and mitigates fault conditions within a bounded response time (≤100 ms). - Cell voltage within ±50 mV per cell, pack current within ±10%, temperature within ±5°C after calibration. - SOC remains within ±10% of a reference SOC over a full UAV-like discharge cycle. - SOH estimate is within ±15% of a capacity-based reference and shows consistent degradation trends. - BMS Viewer displays and logs SOC, SOH, pack voltage/current, and temperature in real time. |
||||||
| 36 | Slow Wave Sleep Enhancement System RFA |
Aidan Stahl Kavin Bharathi Vikram Chakravarthi |
||||
| # Slow Wave Sleep Enhancement System ## Disclaimer: We are assisting Team 05 - Acoustic Stimulation to Improve Sleep, who presented during the first class lecture, with this project # Team Members: - Kavin Bharathi (kavinrb2) - Aidan Stahl (ahstahl2) - Vikram Chakravarthi (vikram5) # Problem: Many common neurological conditions, like Alzheimer's disease, depression, and memory issues, are associated with patients receiving lower-quality sleep. Specifically, these issues often stem from a lack of a specific type of sleep known as slow wave sleep (SWS). As individuals age, sleep disorders and other sleep-related issues lead to a lack of overall sleep. As a result, the amount of time an individual spends in SWS and the quality of SWS they experience typically declines with age, contributing to many of the issues mentioned above. # Solution: Our team is trying to improve sleep quality using a wearable device that is non-invasive and cost-effective. This device will record EEG waves and then detect when the user is in Slow Wave Sleep (SWS) with the aid of specialized software. Once the user enters SWS, the system emits carefully timed bursts of pink noise through an auditory interface to enhance slow wave activity and extend its duration. This project is being done for the "Team 05 - Acoustic Stimulation to Improve Sleep" proposal by Maggie Li, Nafisa Mostofa, Blake Mosher, and Presanna Raman. Currently, our sponsors have a wearable headset that measures how much time is spent in SWS and a "Cyton + Daisy Biosensing PCB" to process incoming signals. This board costs $2,500, and we are aiming to design an alternative, cheaper PCB within the class budget of $150. Providing a cheaper alternative that offers similar functionality is what makes our project unique and patentable. # Solution Components: ## EEG Leads - EEG leads are conductive electrodes, small metal disks, that are placed on the scalp. These electrodes measure small voltage differences generated by the electrical activity of neurons in the brain. ## MCU/EEG Wave Detection System - The MCU/EEG wave detection system detects the analog EEG waves from the EEG headband, amplifies the signal (the EEG waves are very low voltage, so amplification will be necessary), digitizes them, and transmits those signals to a computer for further processing to detect SWS. ## Computer/Software - Utilize YASA, an open-source sleep analysis tool, to analyze EEG signals - Python script to drive the tool while EEG data is being collected - Script also starts playing pink noise once SWS is detected - Interactive UI that allows the user to visualize EEG data ## Audio Source - An audio source will be used to play pink noise after the user enters SWS. # Criterion For Success: - Playing pink noise after detecting an SWS signal with minimal delay - Correctly classify SWS with good accuracy - Ensure the wearable device is comfortable for the user through survey metrics |
||||||
| 37 | Ant-Weight Battlebot - DC Hammer |
Carson Sprague Gage Gathman Ian Purkis |
Haocheng Bill Yang | |||
| # Ant-Weight Battlebot - DC Hammer Team Members: - Ian Purkis (ipurkis2) - Carson Sprague (cs104) - Gage Gathman (gagemg2) # Problem Statement Many battlebot designs struggle to balance movement control, durability, offense, and defense within the limitations of competition regulations. We need to design a robust and versatile battlebot, within competition requirements (namely weight limits), that can outlast and subdue a variety of competitors. The primary design challenge for most battlebots stems from the diversity of opponent designs and abilities, which often lean on a particular design element to win. Our bot must remain competitive throughout the full match regardless of the opponent or sustained damage. # Solution Our proposed design takes a well-rounded approach to offense and defense, ensuring that our bot can sustain damage and last the full length of the match. Our primary offensive tool will be a motor-powered, sensor-enabled hammer and wedge attachment that allows multiple methods of opponent submission by housing two "attack modes," letting the driver adapt attack strategy to the design of opposing bots. Our design also includes a significant defensive tool in inversion adjustment, using sensors and the bot's physical shape to prevent knockouts via flips. Our bot will remain functional even if fully inverted. Physical components, especially the hammer, must be modular for quick replacement between matches if damage is taken. This well-rounded design will enable the driver's creativity during the match by automating the offensive tool (hammer/wedge) and defensive tool (flip adjustment), providing the bot a significant competitive advantage against all types of opposing bots. # Solution Components ## Subsystem 1 - Ultrasonic Sensor Enabled Hammer/Wedge Attachment (Attack Arm) We will embed an ultrasonic sensor into the front of our bot. The sensor will be used as a proximity detector to activate the attack arm motion. The attack arm will have two default configurations, one for either orientation of the bot. A low position, running near parallel to the arena surface, will be used for the wedge attack; upon sensor OR driver input, an upward swing will execute, effectively flipping objects in front of the bot. The other arm resting position will point upward, perpendicular to the ground, and upon sensor or driver input will perform a downward swing to strike objects in front of the robot. - Ultrasonic Sensor; if we can use a pre-built sensor module - Adafruit 4007 (https://www.digikey.com/en/products/detail/adafruit-industries-llc/4007/9857020). If we cannot, an infrared LED/detector combo could be used instead - Motor (Weapon) TBD, but something like the following; the primary requirement is a high-torque motor for flipping/smashing: 12V 50RPM 694 oz-in Brushed DC Motor (210 grams) (https://www.robotshop.com/products/12v-50rpm-694-oz-in-brushed-dc-motor) - Microcontroller Unit ESP32-S3-WROOM-1 (not dev board, just chip + antenna) ## Subsystem 2 - Gyroscopic Sensor Enabled Control Inversion We will embed a gyroscopic sensor inside the body of the robot. This will allow the software responsible for translating driver input into motor movement to adjust based on the orientation of the bot. If the bot is flipped over, left turns become right turns and vice versa, which would be a challenge for the driver to quickly adjust to. This subsystem will allow the software to make the appropriate adjustments to maintain driver input continuity.
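A C sketch of this remapping, assuming signed throttle and turn commands (-100..100) mixed into left/right wheel outputs; the exact swap depends on which axis the bot flips over, so this assumes a roll flip, and all names are illustrative:

```c
/* Sketch of control inversion for a differential drivetrain. */
#include <stdbool.h>
#include <stdio.h>

void mix_drive(int throttle, int turn, bool inverted, int *left, int *right) {
    int l = throttle + turn;
    int r = throttle - turn;
    if (inverted) {            /* swap sides and reverse spin so the   */
        int t = l;             /* driver's "forward" and "left" remain */
        l = -r;                /* consistent while upside down         */
        r = -t;
    }
    *left = l;
    *right = r;
}

int main(void) {
    int l, r;
    mix_drive(80, 40, true, &l, &r);
    printf("left=%d right=%d\n", l, r); /* -40, -120 before clamping */
    return 0;
}
```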
Additionally, the orientation measured by the gyroscopic sensor will modify the resting/default positions of the attack arm so it continues to operate (resting positions and rotation direction must be inverted to continue operation). - Gyroscopic Sensor (potential alternate sensor - an accelerometer such as the MC3416 would do; it should be able to detect orientation satisfactorily) (https://www.digikey.com/en/products/detail/memsic-inc/MC3416/15292804) - Microcontroller Unit - ESP32-S3, see above ## Subsystem 3 - Wireless Control/Driver Input + Steering and Wheel Configuration Our driver will use a keyboard for robot control and steering. The W and S keys will control forward and backward motion, with A and D controlling left and right rotation. We will also program the F key to switch attack modes between the hammer and wedge, and the Space bar as an alternative manual attack trigger. These inputs will be wirelessly communicated to the onboard PCB and microcontroller via Bluetooth and translated to the appropriate motors. To enable tank turning we will use four-wheel drive, as each wheel/motor requires isolated control. The height of the robot's body will be thinner than the diameter of the wheels, with the wheels' axles fixed at the midpoint relative to the thickness of the body. This will allow all four wheels to make contact with the ground regardless of orientation and maintain drivability. - Microcontroller Unit - ESP32-S3, see above - Keyboard (simply from a laptop; the laptop will also run the "server" that communicates with the MCU/PCB) - Drive Motors 12mm Diameter 50:1 Micro Metal Gearmotor 12V 600RPM (2 x 10 grams) (https://www.robotshop.com/products/dyna-engine-12mm-diameter-501-micro-metal-gearmotor-12v-600rpm) ## Subsystem 4 - Battery/Power Onboard power source for sensors/controllers/motors, as well as components to regulate and distribute power. - Battery 3S (11.1 V) around 500 mAh battery (starting-point estimate) (https://hobbyking.com/en_us/turnigy-nano-tech-450mah-3s-45c-lipo-pack-w-xt30) - Control Circuit Regulator AZ1117CH-3.3TRG1 - 3.3 V with 18 V max input; output current is 1.7 mA min and 1 A max, well within range (https://www.digikey.com/en/products/detail/diodes-incorporated/AZ1117CH-3-3TRG1/4470985) - Gate Drivers DGD0211C - 3.3 V to 12 V gate drivers, plenty of headroom (https://www.digikey.com/en/products/detail/diodes-incorporated/DGD0211CWT-7/12702560) - H-Bridge MOSFETs FDC655BN - 30 V, 6.3 A N-MOSFETs (https://www.digikey.com/en/products/detail/onsemi/FDC655BN/979810) # Criteria for Success - Ultrasonic sensor accurately triggers the attack arm when an object comes into close proximity - Gyroscopic sensor accurately registers when the robot has been flipped and inverts controls - Microcontroller takes in driver keyboard inputs for fluid steering - Attack arm's default position changes based on driver input (horizontal for wedge, vertical for hammer) - Attack arm's default position changes based on gyroscopic sensor input (default position adjusts to the bot's orientation) - Tank turning and wheel alignment allow for 360-degree rotation - Robot movements follow driver input: i.e., forward/backward motion, turns, etc. |
||||||
| 39 | Auto-Tuner with LCD Display |
John Driscoll Lee Susara Nicholas Chan |
||||
| **Auto-Tuner with LCD Display** **Team:** Nicholas Chan, John Driscoll, Lee Susara **Problem:** For a guitar to be properly used, each string needs to be tuned to the right frequency to play the right note. This can be done either manually or with assistance from a tuner. We would like to make this process easier by implementing an auto-tuning device that attaches to the pegs of the guitar. While such devices exist, most of them cost over $100, so we would like to make a more affordable one. **Solution:** Our solution is to create an auto-tuning device using servo motors and a feedback loop. This solves the problem because it would make the tuner much more affordable while maintaining its main functionality. Our design attaches a servo motor to each peg of the guitar; while the user plucks a string, our device uses a microphone to measure the frequency and turns the peg as needed. The note being played will also be shown on an LCD display. **Subsystem 1:** One subsystem will be the device that attaches to the head of the guitar. This device will have 6 servo motors (HS-318), one for each peg. Each motor will have a clamp that attaches to a peg of the guitar. The device will also have an electret microphone amplifier picking up sound from the guitar to determine which note is being played. A clamp will be used to keep the whole subsystem in place. **Subsystem 2:** Another subsystem we will need to implement is the control subsystem, which will house our PCB (QFN-16) and logic. We will use a breadboard (103-1100), wires, and various logic chips to implement the correct logic. **Subsystem 3:** The last subsystem we will need is the power and user interface. This will include our battery (EN-22), power switch button (1489), and LCD display, as well as any buttons we need should the user want to tune the guitar to a non-standard tuning. We can use the 2x16 LCD display with controller for this. **Criterion for Success:** For our project to be effective, it must be able to pick up and filter the frequency being played, properly take in the sound as input to determine how the guitar should be tuned, and ensure the motors are powered and functioning as desired. It must also fit on the head of the guitar without being too clunky, and our LCD display must display the correct notes being played. The project as a whole must also be more affordable than the auto-tuners currently on the market. |
||||||
| 40 | Bilateral Earlobe Pulse Timing Measurement Device |
Joshua Joseph Mark Schmitt Zhikuan Zhang |
Shiyuan Duan | |||
| # Bilateral Earlobe Pulse Timing Measurement Device # Team Members Zhikuan Zhang (zhikuan2) Joshua Joseph (jgj3) Mark Schmitt (markfs2) # Problem Pulse transit time (PTT) is widely used as a non-invasive indicator of cardiovascular dynamics, but most existing systems measure PTT at a single peripheral location. There is currently a lack of low-cost, synchronized hardware tools that enable bilateral pulse timing measurements, such as comparing pulse arrival times between the left and right earlobes. Without a dedicated, time-synchronized, multi-channel sensing platform, it is difficult to study or validate whether body posture, head orientation, or environmental conditions introduce measurable bilateral timing differences. This project addresses the need for a custom PCB-based physiological sensing device that can reliably acquire synchronized ECG and bilateral PPG signals and serve as a general-purpose measurement tool for this under-studied topic. # Solution This project proposes a PCB-based multi-channel physiological sensing system consisting of one ECG channel placed near the chest and two PPG channels placed on the left and right earlobes. The system is designed as a measurement and validation tool rather than a research discovery platform. The PCB focuses on low-noise analog front-end design, precise time synchronization, and multi-channel data acquisition. ECG R peaks are used as a timing reference, and pulse arrival times from both PPG channels are compared under controlled conditions such as neutral posture, head tilt, or side lying. # Solution Components ## Subsystem 1: ECG Analog Front End Function: Acquire a clean ECG signal to provide a reliable cardiac timing reference. Components: Instrumentation amplifier such as the AD8232 or an equivalent ECG analog front end; analog high-pass and low-pass filtering stages; driven-right-leg circuit for common-mode noise reduction; surface ECG electrodes. Output: Digitized ECG waveform with clearly detectable R peaks. ## Subsystem 2: Dual PPG Sensing Channels Function: Measure pulse waveforms at the left and right earlobes simultaneously. Components: Two identical PPG sensors such as the MAX30102 or a discrete LED-and-photodiode design; transimpedance amplifiers for photodiode current sensing; anti-aliasing filters; optical shielding for ambient light rejection. Output: Two synchronized PPG waveforms suitable for pulse arrival time extraction. ## Subsystem 3: Time-Synchronized Data Acquisition and Control Function: Ensure accurate relative timing between the ECG and both PPG channels. Design considerations: All channels are sampled by a single microcontroller ADC or synchronized ADCs; shared clock source using a low-ppm crystal oscillator; hardware-level timestamping of samples; no reliance on BLE timing for synchronization (BLE used only for data transfer, if implemented). Components: Microcontroller such as an STM32 or ESP32; low-drift crystal oscillator; shared sampling clock architecture. # Criterion For Success Requirement 1: ECG signal acquisition. Validation: Clearly visible ECG waveform with identifiable R peaks; elevated heart rate observable after light exercise. Requirement 2: PPG signal acquisition for both earlobes. Validation: Stable and repeatable PPG waveforms captured simultaneously from the left and right earlobes. Requirement 3: Channel time synchronization. Validation: Relative timing jitter between channels below a predefined threshold, such as less than 1 ms; consistent timing results across repeated measurements. Requirement 4: Bilateral pulse timing comparison. Validation: ECG-referenced pulse arrival times successfully computed for both earlobes under at least two different body conditions.
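To make Requirement 4 concrete, a C sketch of the ECG-referenced pulse arrival time comparison, assuming beat-aligned arrays of R-peak times and PPG fiducial times in milliseconds have already been extracted; the names and the numbers in the example are illustrative:

```c
/* Sketch of bilateral pulse arrival time (PAT) comparison. */
#include <stdio.h>

/* Mean left-right PAT difference across n beats, in ms. */
double bilateral_pat_diff_ms(const double *r_peak_ms,
                             const double *left_ppg_ms,
                             const double *right_ppg_ms, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        double pat_left  = left_ppg_ms[i]  - r_peak_ms[i];
        double pat_right = right_ppg_ms[i] - r_peak_ms[i];
        sum += pat_left - pat_right;
    }
    return sum / n;
}

int main(void) {
    double r[]  = {0.0, 800.0, 1600.0};      /* ECG R-peak times   */
    double lp[] = {180.0, 982.0, 1781.0};    /* left earlobe PPG   */
    double rp[] = {178.0, 979.0, 1778.0};    /* right earlobe PPG  */
    printf("mean L-R PAT difference: %.2f ms\n",
           bilateral_pat_diff_ms(r, lp, rp, 3)); /* ~2.67 ms */
    return 0;
}
```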
# Scope and Complexity Justification This project involves significant circuit-level hardware design, including low-noise analog front ends, synchronized multi-channel data acquisition, and mixed-signal PCB integration. The system complexity is appropriate for a senior design project and aligns with course expectations. The project is inspired by experience working as a research assistant in a biological sensing laboratory and is positioned as a hardware measurement tool rather than a research discovery platform. |
||||||
| 41 | BetaSpray - Bouldering Route Assistance |
Ingi Helgason Maxwell Beach Prakhar Gupta |
||||
| # Beta Spray [Link to Discussion](https://courses.grainger.illinois.edu/ece445/pace/view-topic.asp?id=78759) **Team Members:** - Maxwell Beach (mlbeach2) - Ingi Helgason (ingih2) - Prakhar Gupta (prakhar7) # Problem Spray walls in climbing gyms allow users to create endless custom routes, but preserving or sharing those climbs is difficult. Currently, climbers must memorize or manually mark which holds belong to a route. This limitation makes training inconsistent and reduces the collaborative potential of spray wall setups, particularly in community and training gym environments. # Solution Beta Spray introduces a combined scanning and projection system that records and visually reproduces climbing routes. The system maps the spray wall, categorizes each hold, and projects or highlights route-specific holds to guide climbers in real time. Routes can be stored locally or shared across devices over a network. The design includes three primary subsystems: vision mapping, projection control, and user interface. # Solution Components ## Vision Mapping Subsystem This subsystem performs wall scanning and hold detection. A **camera module** (Raspberry Pi Camera Module 3 or Arducam OV5647) will capture high-resolution images under ambient lighting conditions. The **ESP32** will handle image capture and preprocessing using C++ OpenCV bindings. The image recognition algorithm will identify hold contours and assign coordinates relative to the wall geometry. If on-device processing proves too compute-intensive, the camera data can be sent via HTTP requests to a remote machine running an OpenCV or TensorFlow Lite inference service for offloaded recognition. To improve reliability in low-light setups, IR LEDs or reflective markers may be added for hold localization. If latency proves too high, a physical-layer solution could connect directly to a nearby laptop to speed up computer vision processing. ## Projection Subsystem The projection subsystem highlights route holds using **servo-actuated laser pointers**. Each laser module will be mounted on a **2-axis servo gimbal** controlled by a microcontroller PWM interface. The system will direct up to four laser beams to indicate sequential handholds as users progress; a sketch of the hold-to-angle mapping follows the user interface subsystem below. A benefit of using servos over motors is avoiding PID tuning for motor control loops. If laser precision or safety reliability becomes an issue, an alternative approach will use a **compact DLP or LED projector**, calibrated through the same coordinate mapping. The mechanical design will ensure adjustable pitch angles to accommodate wall inclines up to 45 degrees. ## User Interface Subsystem Users configure and control Beta Spray through a web or mobile interface. The **ESP32** module provides Wi‑Fi and Bluetooth connectivity, and the **ESP‑IDF SDK** enables local route storage through SPI flash or an SD card, along with a lightweight HTTP server for remote control. The interface will include climb management (create, save, replay) and calibration controls. If latency or bandwidth limits affect responsiveness, a fallback option is to implement a wired serial or USB configuration interface using a host computer to manage routes and command sequences. A basic mobile or web frontend will be developed using **Flutter** or **Flask**.
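As referenced above, a C sketch of mapping a hold's wall coordinates to gimbal pan/tilt angles, assuming the laser origin sits a perpendicular distance d from a flat wall with (0, 0) at the laser's perpendicular foot; wall incline and servo mounting offsets are ignored, and all names are illustrative:

```c
/* Sketch of hold-coordinate to gimbal-angle mapping. */
#include <math.h>
#include <stdio.h>

typedef struct { double pan_deg, tilt_deg; } aim_t;

aim_t aim_at_hold(double x_m, double y_m, double d_m) {
    aim_t a;
    a.pan_deg  = atan2(x_m, d_m) * 180.0 / 3.141592653589793;
    /* tilt uses the in-plane distance so pan and tilt compose correctly */
    a.tilt_deg = atan2(y_m, hypot(x_m, d_m)) * 180.0 / 3.141592653589793;
    return a;
}

int main(void) {
    /* hold 1 m right of center, 2 m up, wall 3 m away */
    aim_t a = aim_at_hold(1.0, 2.0, 3.0);
    printf("pan %.1f deg, tilt %.1f deg\n", a.pan_deg, a.tilt_deg);
    return 0;
}
```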
# Physical Constraints - The system will draw power from a standard outlet (no battery operation needed). - The device will be secured to the floor using a stable stand or rubber bumpers to prevent slipping. - The total footprint will be **less than 25 cm × 25 cm**, with a maximum height of **40 cm**, including the laser pointer gimbals. # Criterion for Success Beta Spray will be successful if it can: - Achieve reasonable accuracy in laser pointer targeting to mark holds. - Track a climber's movement in real time with less than **200 ms** latency. - Interface with a mobile device to change route planning and trajectory. - Operate consistently across varied placement distances and wall angles. Meeting these criteria will validate the feasibility of Beta Spray as a modular and expandable climbing wall visualization platform. |
||||||
| 42 | Autonomous Cold Salad Bar |
Siddhaarta Venkatesh Tejas Alagiri Kannan |
proposal1.jpeg |
|||
| # **Team:** 1. Tejas Alagiri Kannan (tejasa4) 2. Siddhaarta Venkatesh (sv39) # **Problem:** In the food industry, a huge number of processes are extremely rote and spend manpower on monotonous tasks that could be handled by an autonomous system. One such problem is the use of manpower in assembly-line-format restaurants (e.g., Chipotle, Forage Kitchen, Qdoba). Just as in the automation industry, where the assembly line has in essence been taken over by 6-DoF arms and robot operators, we believe the manpower in restaurants can also be replaced by a robotic system that provides higher efficiency. We have already seen a large number of processes get automated in the restaurant industry, such as the automated food bar in sushi restaurants and robotic servers (not widely adopted, unfortunately). # **Solution:** At the outset, we would like to mention that the solution does not aim to automate the entire pipeline from creating the dish to serving it; performing highly technical dishes is a different problem in itself. We aim to make the serving process more efficient and reduce wait time, given prepared ingredients such as chopped chicken, chopped onions, and sauces (which we believe is a fair starting point). Each ingredient will have its own pipe that dispenses that one specific ingredient. Once we receive instructions for what food needs to be prepared, which ingredients to dispense, and in which order, the bowl on a conveyor belt will move back and forth to fill up with those ingredients. These ingredients are funneled from their own pipes, which dispense the ingredients one at a time. The final bowl is then sealed and placed in a shaker, which mixes the ingredients, and it is served at the end. # **COMPONENTS:** # **Subsystem 1: Motion** The bowl must be moved around the pipes to get filled. This is what we propose: Conveyor belt: 4 idlers, 2 head pulleys, 1 NEMA 23 motor (or other), 1 gear reducer, 1 motor driver (TB6600), 1 food storage basket, 5 individual dispensary pipes, 5 servo motors, 1 servo motor PWM controller. The dispensary pipes will pump out food using a servo pump filler mechanism, where the servo motor pushes down on the contents of the pump (in a piston-like motion) and squeezes out the food. We will use the ESP32 microcontroller series. # **Subsystem 2: User Interface** For initial testing, simple buttons determine which dish is chosen. The final device will involve a screen-based interface. The simple buttons will just be regular tactile buttons, and the final screen will be an ST7789 LCD display that shows the user what food has been ordered. It will show the user what options they have chosen for their salad and how to add/remove particular items with a button press. # **Subsystem 3: Food presentation** We expect to have the final salad well tossed and provided to the user. Once the bowl is filled, which is determined by it having passed through the pipes of all its ingredients, the user will close it with a cap. The user will have the choice to have it shaken or not; that feature is an additional button after the food is dispensed. The bowl is then placed in a closed compartment which simply rotates at high speed to mix the food. It is a very similar design to regular boba shakers. Shaker: 1 NEMA 23 motor, 1 gear box, 1 motor driver (TB6600). # **Subsystem 4: Accuracy checking** A major part of this project is to ensure efficiency, so we will incorporate a weight sensor (mini load cell). This weight sensor will track the weight of the bowl as items are being dispensed and will serve as a check to stop the machine from over-dispensing.
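A C sketch of this weight-based cutoff, with the load-cell read and valve control replaced by simulation stand-ins; the tolerance, fill rate, and function names are illustrative assumptions:

```c
/* Sketch of load-cell-gated dispensing with an overshoot guard. */
#include <stdbool.h>
#include <stdio.h>

#define TOLERANCE_G 5.0f   /* acceptable overshoot band, placeholder */

/* --- hardware stand-ins for this sketch --- */
static float simulated_bowl_g = 0.0f;
static bool  valve_open = false;
static float read_load_cell_grams(void) {
    if (valve_open) simulated_bowl_g += 1.5f;  /* ~1.5 g per poll */
    return simulated_bowl_g;
}
static void set_dispenser(int id, bool on) { (void)id; valve_open = on; }

/* Dispense from one pipe until the bowl gains target_g. */
float dispense_ingredient(int dispenser_id, float target_g) {
    float start = read_load_cell_grams();
    set_dispenser(dispenser_id, true);
    float now = start;
    while (now - start < target_g - TOLERANCE_G)
        now = read_load_cell_grams();          /* poll while dispensing */
    set_dispenser(dispenser_id, false);        /* cut off before overshoot */
    return now - start;
}

int main(void) {
    printf("dispensed %.1f g\n", dispense_ingredient(2, 120.0f));
    return 0;
}
```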
# **Subsystem 5: Power system** For demonstration purposes, the machine will be hooked up to a benchtop power supply or another similarly reliable low-grade DC power supply. Another key component we will add is food-safe tubing to ensure that the food does not get contaminated. # **Criteria for success:** 1. The conveyor belt is able to move consistently so that the bowl ends up under the right dispenser. 2. Each dispenser is able to dispense food, both solid and liquid (such as sauces). 3. Each dispenser is able to dispense the right amount of food, within a set range. 4. The initial prototype can, on a button press, determine the exact motor angles to move the components, for an early demo during the semester. 5. The final prototype can, on user request, send a signal to the microprocessor to move the bowl and dispense mock food into it. # **Team work requirements:** 1. CAD every individual component in miniature form to depict the real system (1 week) 2. Use a dev board with motor drivers to demonstrate a breadboard version of criterion 1 (1 week) 3. Attach the dev-board solution to the CAD physical model to account for motor backlash and other physical constraints like power supply issues and overheating (1 week) 4. Start PCB design based on the chosen direction; soldering and debugging (3-4 weeks) 5. Final assembly and testing (1 week) This gives us maybe 1 week of extra leeway for any hindrances. |
||||||
| 43 | LeafLink |
Hannah Pushparaj Hassan Shafi Praveen Natarajan |
||||
| LeafLink Team Members: Praveen Natarajan (pn17) Hassan Shafi (hashafi2) Hannah Pushparaj (hsp5) PROBLEM Plants need to be watered regularly to stay alive. In certain scenarios, this might not always be possible (e.g., going on vacation, forgetting to water). We want a way to automatically water indoor plants to keep them alive. SOLUTION A standalone device that automatically senses the moisture level of the soil and deploys a pump that supplies the plant with just the right amount of water to survive. It uses an onboard soil moisture sensor along with a water pump to supply the plant with water. The device is designed to be reliable and easy to understand. A simple light shows what it's doing (normal, watering, or needs attention). It also includes basic safety limits so it can't keep running forever if something goes wrong, and it can warn the user if the water container is empty or if the device isn't able to pump water properly. The device can store a basic history of when it watered the plant so the user can see that it's working. If we have time, we can add a simple companion app. The app would let the user see the current soil moisture and show a log of recent watering. It would also allow the user to trigger a quick manual watering from their phone if needed (for example, after repotting or during a very hot week). The app is optional, as the device should work on its own even without it. Solution Components Subsystem 1: Control & Processing This subsystem serves as the central controller. An ESP32 on our custom-designed PCB reads soil moisture sensor data, executes the watering logic, and controls the relay module. The PCB integrates power regulation and some basic status indication. Components: - ESP32 - Our custom PCB - 3.3 V voltage regulator - Some LEDs and resistors Subsystem 2: Soil Moisture Sensing This subsystem measures soil moisture and provides an analog voltage to the ESP32 ADC pin to drive the water delivery system. Components: - Capacitive soil moisture sensor Subsystem 3: Water Delivery & Relay Control This subsystem allows the ESP32 microcontroller to turn the water pump on and off using a relay, which acts as a switch between the ESP32 and the higher-voltage water pump. Essentially, the ESP32 GPIO drives the relay input, which switches pump power on and off. Components: - 6-12 V DC water pump - 5 V single-channel relay module - External 5 V power supply - Tubing and water reservoir Subsystem 4: User Feedback & Safety This subsystem provides basic visual feedback based on the current state of the LeafLink system and an emergency stop button. Components: - Status LEDs (different colors for idle, watering, error) - Red push button (emergency stop, kills power) Subsystem 5: Wireless Monitoring We will also have a remote monitoring feature using the ESP32's built-in Wi-Fi. This remote monitoring system will display the real-time soil moisture readings (and may even keep track of old readings over a time period), a history of recent watering events, and a manual watering trigger button. Components: - ESP32 Wi-Fi (already part of chip) - Simple mobile or web interface
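Tying subsystems 1 through 4 together, a C sketch of the watering state machine with the safety limits described above: a capped pump runtime per cycle and a needs-attention state when moisture fails to rise (suggesting an empty reservoir or pump fault). All thresholds and names are illustrative assumptions:

```c
/* Sketch of the watering state machine with safety limits. */
#include <stdbool.h>
#include <stdio.h>

#define DRY_THRESHOLD_PCT 30.0f
#define MAX_PUMP_SECONDS  20
#define MIN_RISE_PCT      2.0f   /* expected gain per watering cycle */

typedef enum { IDLE, WATERING, NEEDS_ATTENTION } state_t;

state_t step(state_t s, float moisture, float moisture_at_start,
             int pump_seconds, bool *pump_on) {
    switch (s) {
    case IDLE:
        *pump_on = (moisture < DRY_THRESHOLD_PCT);
        return *pump_on ? WATERING : IDLE;
    case WATERING:
        if (moisture >= DRY_THRESHOLD_PCT) { *pump_on = false; return IDLE; }
        if (pump_seconds >= MAX_PUMP_SECONDS) {
            *pump_on = false;
            if (moisture - moisture_at_start < MIN_RISE_PCT)
                return NEEDS_ATTENTION;   /* no water delivered: warn */
            return IDLE;                  /* rest, re-check next cycle */
        }
        return WATERING;
    default:                              /* NEEDS_ATTENTION */
        *pump_on = false;                 /* stay off until user reset */
        return NEEDS_ATTENTION;
    }
}

int main(void) {
    bool pump = false;
    state_t s = step(IDLE, 25.0f, 25.0f, 0, &pump);
    printf("state=%d pump=%d\n", (int)s, (int)pump); /* 1 1: watering */
    return 0;
}
```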
CRITERION FOR SUCCESS - The ESP32 on our custom PCB correctly reads soil moisture data and determines when watering is required independently (requiring no supervision) - Ensure proper functionality of the soil moisture sensor by verifying that readings are accurate (for example, adding water should raise the moisture percentage) - The ESP32 reliably controls the relay to turn the water pump on and off based on soil moisture thresholds - The water pump operates only through the relay and correctly distributes the required amount of water - The multiple LEDs correctly indicate the current system states, including idle, watering, and error - Pressing the emergency stop button immediately cuts power to the water pump and halts any ongoing operation - The remote monitoring system displays accurate real-time soil moisture data, logs watering events, and allows manual watering control |
||||||
| 44 | Voice-Activated Geographic Reference Globe |
Mahathi Jayaraman Rijul Roy Varsha Mullangi |
||||
| Team Members: Mahathi Jayaraman (mj45) Rijul Roy (rijulr2) Varsha Mullangi (varsham3) Problem Many kids these days, especially American kids, do not know their geography very well. In addition, many kids spend a lot of time on screens and online, which takes them out of the real world. We want to create a solution where kids can learn geography in a manner that does not require them to be connected to the internet or on a screen. This solution should be usable in classrooms for kids to learn from, and should be able to rotate to accommodate the shorter height of kids. Solution Our proposed solution is to build a globe that is screen-free and interactive. Rather than manually rotating a globe and having to search for where a certain country is, kids can now simply push a button to activate a microphone and say a country name out loud. The globe will automatically rotate the specified country to a designated front marker and light it up with LEDs. This will help kids feel more engaged with learning. Solution Components Subsystem 1: Speech Recognition with a Push-to-Talk Mechanism This subsystem will implement the speech recognition mechanism of the globe. A simple push button and microphone will be used, connected to the GPIO pins of the ESP32-S3 MCU. While the button is pressed, the microphone will collect audio from the user, capturing the specified country the user wants to find. The MCU uses this audio to run offline, on-device speech recognition software (ESP-SR) to determine which country the user wants to find, which will be used to drive the motor control logic and LEDs. Components: ESP32-S3 MCU and ESP-SR Package I2S Digital Microphone (INMP441) Subsystem 2: Software-Driven Motor Control This subsystem controls how the globe physically rotates to face the input country. A low-speed DC gear motor will be driven by the ESP32-S3 through a motor driver, allowing the MCU to control both the direction and speed of rotation about the axis. A separate motor will be used to tilt the globe up and down, with the globe sitting in a ring with a ball bearing track. Based on the target country's stored position and the current angle of the globe, the software will calculate the direction of rotation and the amount of rotation needed to align the country with the front marker. Feedback from a magnetic angle sensor will be used to track the globe's position and stop rotation at the correct point. This makes the rotation more reliable and prevents the globe from rotating past the target. Components: 22 RPM 24 Volt DC Globe Inline Gearmotor [500635] Subsystem 3: LED Outline/Markers This subsystem is responsible for the physical identification of countries using LEDs. We will use an LED grid placed behind the globe, ensuring that LEDs line the borders and corners of countries. For a smaller country that is harder to outline, we will use the center point of the country, lighting up only one LED to indicate its location. Since we will be using addressable LEDs, we will be able to assign LEDs to countries, so that when a country is chosen, the logic can quickly determine which LEDs to turn on. We will also use one LED near the button that captures audio, helping the user know when audio is being recorded. Components: LED strips (WS2812B)
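A C sketch of that country-to-LED lookup, assuming each country stores either a run of border LED indices or a single center index for small countries; the table contents and names are illustrative, not real calibration data:

```c
/* Sketch of the country-to-LED lookup for the addressable strip. */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *name;
    const int  *led_indices;  /* indices into the WS2812B chain */
    int         count;        /* 1 => small country, center point only */
} country_t;

static const int france_border[] = { 102, 103, 104, 118, 119, 120 };
static const int monaco_center[] = { 121 };

static const country_t countries[] = {
    { "france", france_border, 6 },
    { "monaco", monaco_center, 1 },   /* too small to outline */
};

const country_t *find_country(const char *spoken) {
    for (size_t i = 0; i < sizeof countries / sizeof countries[0]; i++)
        if (strcmp(countries[i].name, spoken) == 0) return &countries[i];
    return NULL;   /* not in database -> ignore input */
}

int main(void) {
    const country_t *c = find_country("france");
    if (c)
        for (int i = 0; i < c->count; i++)
            printf("light LED %d\n", c->led_indices[i]);
    return 0;
}
```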
Subsystem 4: Front Marker Reference This subsystem is responsible for rotating the globe to face a designated front marker. This marker will be a point on a ring around the globe. It designates where the user of the globe will be positioned, so that when the globe rotates to bring a country to this marker, the country is also facing the user. The globe will also rotate on multiple axes to face this marker, which can accommodate the shorter height of kids by tilting the globe down to make areas near the top of the globe (such as Iceland or the North Pole) visible to kids who may not be tall enough to see it. Every time a country is detected through the microphone, that country will automatically be rotated to this marker. The slip ring will be used to ensure that the internal components do not get tangled as the globe rotates, and the limit switches will make sure the globe does not rotate too far in any direction. Components: ESP32-S3 MCU (controller) Adafruit AS5600 Magnetic Angle Sensor - rotation position sensor Slip Ring (because it is a rotating system) Optional Limit Switches to prevent over-rotation The Motor System (Subsystem 2) Criteria for Success: The system can use the microphone to accurately identify spoken words and check whether the word is in the database of country names. When a country name is spoken, the system can light up the country on the globe. When a country name is spoken, the globe can rotate to display the lit country in front of the user. When the word "reset" is given as an input, the globe moves back to its default position and all LEDs are turned off. The globe will correctly detect the spoken country name and rotate automatically so the specified country faces the front marker. |
||||||
| 45 | Focus Dial: A Tactile Hardware Interface for Distraction-Free Focus |
Ahan Goel Amogh Mehta Benjamin Loo |
video video |
|||
| **Team Members:** - Amogh Mehta (amoghm3) - Ahan Goel (ahang5) - Benjamin Loo (bloo2) --- # Problem Staying focused is increasingly difficult in an environment saturated with digital distractions. While most modern operating systems provide tools such as Focus Mode or Do Not Disturb, these solutions are embedded within smartphones or computers themselves. Activating or managing them often requires unlocking a phone, navigating menus, or interacting with the very device that causes distraction. This creates friction and makes it easy for users to abandon focus unintentionally. Additionally, many existing productivity tools rely heavily on cloud services or voice assistants, raising concerns around privacy, reliability, and latency. There is a need for a more intentional, low-friction, and privacy-conscious way to manage focus that does not require constant screen interaction. --- # Solution We propose the **Focus Dial**, a standalone hardware controller that allows users to enter, manage, and visualize focus states through a simple physical interaction. By turning a rotary dial, users can activate focus modes, set timers, and receive feedback without opening a phone or navigating software menus. The Focus Dial solves the problem by shifting distraction management from a screen-based interaction to a tactile, human-centered interface. The device communicates wirelessly with user devices (phones, tablets, and computers) to control Focus Mode or Do Not Disturb settings. In addition, the Focus Dial is designed to integrate with IoT devices on the local network, enabling environmental cues—such as smart lights, displays, or other connected devices—to reflect or respond to the user’s focus state. At a high level, the system consists of: - A physical user interface for intentional user input and feedback - An embedded processing and communication subsystem - Wireless integration with user devices and local IoT systems --- # Solution Components ## Subsystem 1: Physical User Interface and Feedback **Purpose:** Functions as the primary **physical user interface**, allowing users to intentionally control focus modes and timers without interacting with screen-based devices. **Function:** This subsystem combines tactile input and multimodal feedback mechanisms to provide intuitive control and clear system state indication. It is composed of the following hardware elements: - **Rotary Position Encoding:** A rotary encoder detects rotational direction and position, enabling users to select focus modes, adjust durations, and confirm actions through deliberate physical motion. - **Haptic Feedback:** A vibration motor provides tactile confirmation for actions such as mode changes, timer start/stop events, and alerts, reinforcing interaction without requiring visual attention. - **OLED/LCD Display:** A circular OLED or LCD display presents contextual information such as the active focus mode, remaining time, or system status. - **Lighting (LED Ring):** An addressable LED ring provides glanceable visual feedback by indicating focus state, progress, or alerts through color and animation. The lighting can also mirror or augment connected IoT lighting systems. 
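As an illustration of how this subsystem's input and feedback could tie together, a C sketch of detent handling for mode and duration selection, assuming the encoder driver already reports ±1 per detent and the push-button confirms; the states, mode names, and increments are illustrative assumptions, not a specified design:

```c
/* Sketch of rotary-dial mode/duration selection. */
#include <stdio.h>

typedef enum { SELECT_MODE, SELECT_DURATION, RUNNING } ui_state_t;

static const char *modes[] = { "Deep Work", "Reading", "Do Not Disturb" };
#define NUM_MODES 3

typedef struct {
    ui_state_t state;
    int mode;            /* index into modes[] */
    int minutes;         /* focus duration */
} dial_t;

void on_detent(dial_t *d, int step) {          /* step is +1 or -1 */
    if (d->state == SELECT_MODE)
        d->mode = (d->mode + step + NUM_MODES) % NUM_MODES;
    else if (d->state == SELECT_DURATION) {
        d->minutes += 5 * step;                /* 5-minute increments */
        if (d->minutes < 5) d->minutes = 5;
    }
}

void on_press(dial_t *d) {                     /* confirm / advance */
    if (d->state == SELECT_MODE) d->state = SELECT_DURATION;
    else if (d->state == SELECT_DURATION) d->state = RUNNING;
    else d->state = SELECT_MODE;               /* end session */
}

int main(void) {
    dial_t d = { SELECT_MODE, 0, 25 };
    on_detent(&d, 1); on_press(&d); on_detent(&d, 1); on_press(&d);
    printf("%s for %d min\n", modes[d.mode], d.minutes); /* Reading, 30 min */
    return 0;
}
```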
**Components:** - Rotary encoder with push-button (e.g., Bourns PEC11 series) - Circular OLED or LCD display (e.g., 1.28" round TFT display) - Addressable LED ring (e.g., WS2812B / NeoPixel ring) - Coin vibration motor --- ## Subsystem 2: Embedded Processing and Wireless Communication **Purpose:** Acts as the **central control unit**, coordinating input processing, system state management, and communication between subsystems and external devices. **Function:** Processes rotary encoder input, drives output peripherals (display, LEDs, haptics), and manages wireless communication protocols. **Components:** - Microcontroller with integrated Bluetooth and Wi-Fi (e.g., ESP32) - Power management circuitry - On-board memory for firmware and configuration storage --- ## Subsystem 3: Device and IoT Integration **Purpose:** Enables the Focus Dial to operate as a **local control hub**, synchronizing focus states across personal devices and connected IoT systems. **Function:** Transmits focus state changes to paired devices and triggers context-aware environmental responses. **Components / Interfaces:** - Bluetooth Low Energy (BLE) for communicating with a companion app or OS-level shortcuts - Wi-Fi for local network communication - Integration with IoT devices (e.g., smart lights, displays, or other networked devices) using local protocols such as MQTT or HTTP This subsystem allows the Focus Dial to trigger actions such as dimming lights, changing light color, or notifying other devices when a focus session starts or ends. --- # Criterion for Success The project will be considered successful if it meets the following measurable criteria: 1. The rotary encoder reliably detects user input with greater than 95% accuracy. 2. The device activates or deactivates Focus Mode or Do Not Disturb on a paired device via Bluetooth within 1 second of user input. 3. The display, LED lighting, and haptic feedback consistently reflect the correct focus state. 4. The Focus Dial successfully communicates focus state changes to at least one IoT device on the local network. 5. Core functionality operates without requiring an active internet connection. --- **Project Classification:** Innovation (human-centered hardware interface integrating embedded systems, wireless communication, and IoT interaction) |
||||||
| 46 | Snooze-Cruiser |
Alex Wang Jiachen Hu Jizhen Chen |
Jiaming Xu | |||
| # Snooze-Cruiser Team Members: Jiachen Hu (hu86) Jizhen Chen (jizhenc2) Alex Wang (zw71) # Problem Many people suffer from sleep inertia, a condition where individuals instinctively silence alarms without fully waking up. Traditional alarm clocks and smartphone alarms rely solely on audio, which can be easily ignored or dismissed while half asleep. Existing alternative solutions, such as puzzle-based alarms or flying alarms, are often ineffective, unsafe, or impractical in confined environments like dorm rooms and bedrooms. The fundamental issue is that current alarm systems fail to reliably force physical engagement, allowing users to return to sleep without becoming fully alert. A more effective alarm must require the user to physically interact with the system in order to disable it. # Solution We propose Snooze-Cruiser, a two-wheeled differential-drive robotic alarm system that physically moves away from the user when the alarm time is reached. Instead of simply producing sound, the robot navigates around the room, forcing the user to get out of bed and chase it in order to silence the alarm. The robot operates autonomously in a confined indoor space, using onboard sensors for obstacle avoidance and odometry-based localization to remain within a defined area. The alarm is disabled not by pressing a button, but by detecting, via inertial sensor data, that the robot has been picked up. This interaction ensures that the user must physically wake up and engage with the device. The system is divided into motion control, sensing, alarm/audio, localization, and power management subsystems. # Solution Components ## Subsystem 1: Motion Control and Navigation Function: This subsystem enables the robot to move autonomously, wander unpredictably, and avoid obstacles while remaining within a confined area. Components: Microcontroller: STM32F446RCT6 Motor Driver: DRV8833PWP dual H-bridge motor driver Motors: N20 micro gear motors with quadrature encoders (x2) Inertial Measurement Unit: MPU6050 Obstacle Sensors: VL53L1X Time-of-Flight distance sensors (multiple) Description: The STM32 generates PWM signals to control the motors through the DRV8833 motor driver. Wheel encoders provide feedback for estimating speed and displacement. During alarm operation, the robot drives forward at a base speed and periodically introduces random heading changes. Obstacle avoidance is triggered when the distance sensors detect nearby obstacles, causing the robot to turn away and resume wandering. Encoder and IMU data are fused to estimate the robot's position relative to its starting point. ## Subsystem 2: Localization and Soft Geofencing Function: This subsystem prevents the robot from leaving the intended operating area (e.g., a bedroom). Components: Wheel Encoders (from Subsystem 1) IMU: MPU6050 Description: Wheel encoder data and IMU measurements are fused using a Kalman filter (or an equivalent sensor fusion approach) to estimate the robot's displacement from its starting location. A soft geofence is defined as a radius around this starting point. If the robot exceeds this radius, it enters a return-to-center behavior, rotating toward the estimated origin and driving inward until it re-enters the allowed area.
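A minimal C sketch of that soft-geofence check, assuming the fused odometry already provides displacement (x, y) in meters from the starting point; the radius and names are illustrative placeholders:

```c
/* Sketch of the soft geofence with return-to-center heading. */
#include <math.h>
#include <stdio.h>

#define GEOFENCE_RADIUS_M 2.0

typedef enum { WANDER, RETURN_TO_CENTER } nav_mode_t;

nav_mode_t geofence_step(double x_m, double y_m, double *heading_out) {
    double r = hypot(x_m, y_m);           /* distance from start point */
    if (r > GEOFENCE_RADIUS_M) {
        /* rotate toward the origin, then drive inward */
        *heading_out = atan2(-y_m, -x_m);
        return RETURN_TO_CENTER;
    }
    return WANDER;
}

int main(void) {
    double heading = 0.0;
    nav_mode_t m = geofence_step(2.5, 0.0, &heading);
    printf("mode=%d heading=%.2f rad\n", (int)m, heading); /* return, pi */
    return 0;
}
```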
When the preset alarm time is reached, the microcontroller simultaneously enables the audio amplifier and activates the motion subsystem. The alarm sound continues until a valid caught event is detected. ## Subsystem 4: Caught Detection (User Interaction) Function: This subsystem detects when the robot has been picked up by the user and disables the alarm. Components: IMU: MPU6050 Wheel Encoders Description: Caught detection is performed by analyzing IMU acceleration and vibration data in combination with wheel encoder feedback. A caught event is identified by sudden changes in acceleration magnitude, high-frequency vibrations from human handling, and inconsistencies between wheel motion and measured acceleration (indicating loss of ground contact). Once confirmed, the system immediately stops motor output and silences the alarm (see the sketch following this proposal). ## Subsystem 5: Power Management Function: This subsystem supplies and regulates power for the robot. Components: Battery Charger IC: MCP73844 Rechargeable Battery Voltage Regulation Circuitry Description: The battery supplies power to the MCU, sensors, motor driver, and audio system. The MCP73844 manages battery charging. Voltage regulation ensures stable operation during high-current events such as motor startup. # Criterion For Success The project will be considered successful if the following objective criteria are met: Timed Activation: The alarm triggers within ±X seconds of the programmed time. Synchronized Operation: Robot motion and alarm audio start simultaneously upon alarm activation. Autonomous Motion: The robot moves continuously without user intervention during alarm operation. Obstacle Avoidance: The robot avoids obstacles placed in its path without repeated collisions. Confined Operation: The robot remains within a predefined operating radius and returns toward the starting location when the boundary is exceeded. Caught Detection: When picked up by a user, the robot reliably stops motion and audio within a short time window. |
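A minimal sketch of the Subsystem 4 caught-detection logic, assuming the firmware already delivers calibrated MPU6050 acceleration samples in g at a fixed rate; the function name and thresholds are hypothetical placeholders, and the encoder cross-check described above would be layered on top:

```c
#include <math.h>
#include <stdbool.h>

/* Hypothetical thresholds -- real values must be tuned on the robot. */
#define ACCEL_SPIKE_G    0.5f  /* deviation from 1 g that suggests handling */
#define CONFIRM_SAMPLES  10    /* consecutive samples needed to confirm     */

/* One IMU sample per call (ax/ay/az in g). Returns true once a pickup is
 * confirmed; the caller then stops the motors and silences the alarm. */
bool caught_detector_update(float ax, float ay, float az)
{
    static int hits = 0;

    /* At rest or in smooth driving, |a| stays near 1 g; lifting the robot
     * produces sustained deviations plus handling vibration. */
    float dev = fabsf(sqrtf(ax * ax + ay * ay + az * az) - 1.0f);

    hits = (dev > ACCEL_SPIKE_G) ? hits + 1 : 0;
    return hits >= CONFIRM_SAMPLES;  /* debounce against bumps and turns */
}
```

The debounce counter reflects the intent above: a single bump while driving should not silence the alarm, but sustained handling should.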
||||||
| 47 | Combative Hardened Ultra Tumbler |
Abhinav Garg Rahul Ramanathan Krishnamoorthy Shobhit Sinha |
||||
| # Combative Hardened Ultra Tumbler - Battlebot ## Team Members - Abhinav Garg (ag90) - Rahul Krishnamoorthy (rahulr9) - Shobhit Sinha (ss194) --- ## Problem The antweight battlebot competition requires teams to design a combat robot under strict constraints on weight, materials, safety, and electronics. Robots must weigh under 2 lb, be constructed from approved 3D-printed plastics, and use a custom PCB integrating control and motor driving circuitry. Commercial RC receivers are not permitted. The challenge is to design a compact and reliable robot that integrates motor control, power electronics, and wireless communication while operating under high current loads and repeated mechanical impacts during combat. --- ## Solution We propose to design and build a 2 lb antweight battlebot featuring a spinning drum weapon and a fully custom electronic control system. A custom PCB will serve as the core of the robot and will house an ESP32-C3 microcontroller for computation and wireless communication. The robot will be controlled from a laptop using Bluetooth or Wi-Fi. Two motors will drive a centered two-wheel drivetrain, while a third motor will power the drum spinner weapon. Power will be supplied by a 14.8 V 4S2P LiPo battery. The system emphasizes reliable motor control, safe power management, and robustness to mechanical shock during competition. --- ## Solution Components ### Subsystem 1: Control and Communication System This subsystem handles wireless communication, control logic, and overall system coordination. It uses an ESP32-C3 microcontroller, Bluetooth and Wi-Fi wireless communication, and a USB interface for programming and debugging. --- ### Subsystem 2: Motor Control System This subsystem drives the drivetrain and weapon motors. It uses H-bridge motor driver circuitry controlled through PWM signals generated by the ESP32-C3 and brushless DC motors for drivetrain and weapon actuation. --- ### Subsystem 3: Power Management and Safety This subsystem distributes power and ensures safe operation of the robot. It uses a 14.8 V 4S2P LiPo battery, on-board voltage regulators for logic power, and battery voltage sensing via a resistor divider. Software-based shutdown is implemented to disable the robot on loss of wireless communication. --- ### Subsystem 4: Mechanical Structure and Weapon This subsystem provides structural support and offensive capability. It consists of a 3D-printed PLA or ABS chassis, a spinning drum weapon, and a belt-driven mechanical coupling between the weapon motor and drum. --- ### Optional Subsystem: Inertial Measurement and Weapon Optimization An optional inertial measurement unit (IMU) may be integrated to measure angular motion and vibration of the drum weapon. IMU data can be used to estimate weapon rotational behavior, detect imbalance, and inform software adjustments to improve weapon stability and reliability during operation. --- ## Criterion for Success The project will be considered successful if the robot weighs less than 2 lb and complies with all competition material restrictions, the custom PCB integrates control, motor driving, and power management circuitry, the robot can be reliably controlled from a laptop using Bluetooth or Wi-Fi, the drivetrain provides stable and responsive motion, the drum spinner weapon operates reliably without electrical failure, and the robot safely shuts down when wireless communication is lost. |
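As a sketch of the software-based shutdown described in Subsystem 3, the firmware can timestamp every valid control packet and disable all motor outputs when the link goes quiet. This assumes a millisecond tick counter is available; the timeout value and function names are placeholders, not part of the proposal:

```c
#include <stdint.h>

#define COMM_TIMEOUT_MS 500u  /* assumed failsafe window; tune per rules */

static uint32_t last_packet_ms;

/* Call whenever a valid control packet arrives from the laptop. */
void comms_packet_received(uint32_t now_ms)
{
    last_packet_ms = now_ms;
}

/* Call periodically from the main loop. motor_disable_all() is a
 * hypothetical hook that zeroes every H-bridge PWM duty cycle, stopping
 * both the drivetrain and the drum weapon. */
void comms_failsafe_poll(uint32_t now_ms, void (*motor_disable_all)(void))
{
    /* Unsigned subtraction stays correct across tick-counter wraparound. */
    if ((uint32_t)(now_ms - last_packet_ms) > COMM_TIMEOUT_MS) {
        motor_disable_all();
    }
}
```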
||||||
| 48 | Sleep Position Trainer |
Brian Park Kyle Lee Nick Tse |
||||
| **Team Members:** Brian Park (brianp7) Kyle Lee (klee281) Nick Tse (nstse2) **Problem:** Sleep is essential for overall health and recovery. We want to develop a device that can detect a person’s sleeping position and provide gentle feedback, via vibration, to prompt repositioning. This device is intended to help users improve and maintain healthier sleep patterns. **Solution:** In order to maintain healthy sleep posture, we propose a wearable sleep monitoring device that detects a user’s sleeping position and provides gentle vibration feedback when an adjustment is needed. The device continuously monitors body orientation during sleep and encourages repositioning when prolonged or unhealthy postures are detected, helping users develop healthier sleep habits over time. The system will incorporate a battery, microcontroller, inertial measurement unit (IMU), and eccentric rotating mass (ERM) motors to form a small wearable sleep position trainer. **Solution Components:** **Subsystem 1 (Position Sensing):** Components: Bosch BMI270 IMU A 6-axis IMU will be used to determine whether the user is lying on their back or side. The microcontroller continuously estimates the device’s tilt/roll angle relative to gravity. When the estimated orientation corresponds to a supine posture for longer than a defined time window, the system activates the vibration motor (see the sketch below). **Subsystem 2 (User Alert System):** Components: Parallax Inc. 28821 DC Motor Vibration, ERM (Haptic) 9000 RPM 3VDC This vibration mechanism will train the user not to sleep on their back. The device will keep vibrating until the user has turned onto their side, at which point the vibration turns off. **Subsystem 3 (Microcontroller):** Components: Espressif ESP32-S3-WROOM-1 This acts as the device's control unit. It is responsible for interpreting sleep position from IMU data, implementing timing logic (vibration delays and cooldowns), and driving the vibration motor. **Subsystem 4 (Physical Build):** Components: 3D-printed case A compact 3D-printed case will protect the PCB, battery, and motor and keep them from shifting during sleep. The enclosure will include strap/clip mounts and ensure the vibration motor is pressed against the body for a noticeable cue, with openings for charging and any button/LED. **Subsystem 5 (Power Management):** Components: 3.7 V Lithium-Ion Battery Rechargeable (Secondary) 100mAh, TI BQ24074 charger/power-path IC, TI TPS62840 3.3 V regulator This subsystem provides rechargeable power and stable 3.3 V for the electronics. The charger safely charges the battery from USB and allows operation while plugged in. The regulator improves battery life by efficiently converting battery voltage to 3.3 V. **Criterion For Success:** The device is considered successful if it can reliably detect when the user is sleeping on their back and activate vibration feedback during sleep to encourage repositioning, thereby helping to reduce snoring, alleviate sleep apnea symptoms, and ease heartburn or acid reflux. |
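A minimal sketch of the Subsystem 1 supine-detection logic referenced above, assuming the BMI270 accelerometer is mounted with its z axis pointing out of the wearer's chest; the roll threshold, hold window, sample rate, and axis convention are all assumptions to be tuned:

```c
#include <math.h>
#include <stdbool.h>

#define SUPINE_ROLL_DEG   30.0f  /* |roll| below this counts as on-back */
#define SUPINE_HOLD_TICKS 600    /* e.g. 60 s at a 10 Hz sample rate    */

/* ax/ay/az are accelerometer readings in g. Returns true once the wearer
 * has been supine longer than the hold window, at which point the MCU
 * drives the ERM motor until the posture changes. */
bool supine_check(float ax, float ay, float az)
{
    static int ticks = 0;

    /* Roll of the torso relative to gravity; ~0 deg when lying on the
     * back, ~+/-90 deg on the side, ~180 deg when prone. */
    float roll_deg = atan2f(ay, az) * 57.2958f;  /* rad -> deg */

    bool on_back = fabsf(roll_deg) < SUPINE_ROLL_DEG;
    ticks = on_back ? ticks + 1 : 0;
    return ticks >= SUPINE_HOLD_TICKS;
}
```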
||||||
| 49 | Move Displaying Chess Board |
Jeanjuella Tipan Matthew Trela Tim Chen |
Wenjing Song | |||
| # Move Displaying Chess Board Team Members: - Matthew Trela (mtrela2) - Tim Chen (taianc2) - Jeanjuella Tipan (jtipa2) # Problem Chess is a game with a high barrier to entry, and often the hardest part of the game for kids to pick up is how the pieces move, where a piece can move, and whether a move is legal. Existing boards that tackle this problem are very expensive and not a practical option for an elementary or middle school chess club. # Solution A physical chess board which shows all legal moves for a piece once it is picked up. The movement of pieces will be detected with a sensor array of reed switches and a board state in memory. The squares will be lit up by an addressable strip of LED lights cut into 8 equal sections and daisy-chained together. This chessboard will also optionally display the best move with a small chess engine in the MCU’s flash memory. The chess board will include a UI to turn best moves on and off, to handle the edge case of promoting to something besides a queen, and to display information such as whether an illegal move was played. # Solution Components ## Subsystem 1, Piece Detection Array This subsystem detects the location of each piece using magnets attached to the bottom of the pieces and an array of 64 reed switches. Since the microcontroller cannot handle 64 separate sensors, we will use 4 I2C GPIO expanders. - Reed Switches: Standex-Meder Electronics SW GP560/15-20 AT - Magnets: Magnet Applications N42P062062 - I2C 16 input GPIO expander: Microchip Technology MCP23017-E/ML ## Subsystem 2, LED Move Display This subsystem provides feedback to the user. An addressable LED strip is placed under the board in 8 segments, one for each rank. The segments will be connected with clip connectors for replacing each segment when necessary. When a piece is lifted, as detected by subsystem 1, the MCU calculates the legal moves and sends a signal to the LEDs to illuminate target squares in a specified color (for example: green for legal moves, red for a capturable piece). - Addressable LED strip: SEZO WS2812B ECO LED Strip Light 16.4 FT - 3Pin LED Strip Connector: DFRobot FIT0861 ## Subsystem 3, Microcontroller and UI The microcontroller will handle all of the logic of our chess system. There will be a simple control loop which polls every sensor to see if the board state has changed (sketched below). If a piece has been picked up, the microcontroller uses the current board state to determine what piece was picked up and what its legal moves are, and then controls the LED strip accordingly. We will use logic to check for errors or desync and have a recovery protocol through the UI if detected. This control loop can be interrupted by input from the UI, such as turning on best moves. The UI is a monochrome OLED screen with some buttons for selecting options. When best moves are on, the board feeds the current state into a small chess engine stored locally in the MCU and displays the best move using the LEDs. This happens every time the board state changes. - MCU: ESP32-WROOM-32-N4 - OLED Display: UCTRONICS 0.96 Inch OLED Module 12864 128x64 ## Subsystem 4, Power supply A portable power supply is used to power the LEDs, sensors, microcontroller, and UI display. A capacitor prevents sudden surges or dips in supply voltage from crashing the microcontroller. - Power bank: VOLTME Portable charger, 10000mAh 5V/3A - Capacitor: Chemi-Con ESMG160ETD102MJ16S # Criterion For Success
- LEDs can be selectively turned on by the MCU for all 64 squares - Move display and best move display can be turned on and off with the UI controls - All legal moves are accurately displayed by LEDs, including rules such as en passant, castling, and the first move of pawns - Pieces are detected accurately when lifted off the board, with the event shown on the UI display - Detect pieces picked up and show legal moves in under 1 second - Display the best move in under 3 seconds - We can detect and recover from two pieces on the same square - We can detect and recover from multiple pieces being picked up at the same time and switched # Alternatives Existing solutions include commercial products that cost around $300 or more. They perform almost exactly the same functions as what we propose. It is hard to determine the exact sensing method other boards use, but we saw RFID and other more extensive methods. Our implementation attempts to use the simplest possible sensing apparatus and make up the difference in hardware. There does not exist a product that is both affordable and offers the functionality of displaying moves on the board. |
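A minimal sketch of the Subsystem 3 control loop referenced above, assuming a driver for the MCP23017 expanders that returns 16 reed-switch bits at a time; `expander_read` and the square ordering are hypothetical placeholders:

```c
#include <stdint.h>

/* Placeholder for the real MCP23017 I2C read; returns 16 reed-switch bits
 * (two ranks) for expander 'idx'. */
extern uint16_t expander_read(int idx);

/* Poll all four expanders into a 64-bit occupancy bitmap, square 0 = a1. */
static uint64_t read_board(void)
{
    uint64_t bits = 0;
    for (int i = 0; i < 4; i++)
        bits |= (uint64_t)expander_read(i) << (16 * i);
    return bits;
}

/* One pass of the control loop: returns the square index (0-63) of a newly
 * lifted piece, or -1 if occupancy did not change. The caller then computes
 * legal moves from the tracked board state and lights the LED strip. */
int poll_for_lift(uint64_t *prev)
{
    uint64_t now    = read_board();
    uint64_t lifted = *prev & ~now;   /* was occupied, now empty */
    *prev = now;
    if (!lifted)
        return -1;

    int sq = 0;                       /* index of the lowest set bit */
    while (!((lifted >> sq) & 1ull))
        sq++;
    return sq;
}
```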
||||||
| 50 | Crowdsurf: Realtime Crowd-Monitoring for indoor spaces |
Ananya Krishnan John Abraham Tanvika Boyineni |
||||
| Team Members: Tanvika Boyineni (tanvika3) Ananya Krishnan (ananya10) John Abraham (jabra6) Problem: Indoor public spaces (libraries, study lounges, gyms, student centers) often become congested, but students and facility staff lack real-time, localized information about crowd density and traffic flow. Existing approaches rely on cameras (raising privacy concerns), require manual observation, or provide only building-level estimates that are not actionable for choosing a specific room or entrance. Solution: This project proposes a privacy-preserving, real-time crowd monitoring system that estimates occupancy and directional flow using distributed, non-imaging sensor nodes with local processing. Each node is deployed at an entrance or transition point and performs local detection and direction inference. Processed data is transmitted wirelessly to a central gateway, which aggregates occupancy estimates, logs data, and presents live metrics through a user-facing dashboard. The system emphasizes robustness to sensor noise and communication loss, and ease of deployment. Solution Components: 1. Sensing Subsystem (Doorway Detection and Direction) - Non-imaging sensors per entrance mounted with spatial separation. - Direction inference using ordered sensor triggers. - Calibration procedures for mounting height, angle, and baseline noise conditions. 2. Embedded Processing Subsystem - Microcontroller-based state machine for event detection, debouncing, and occupancy updates (sketched below). - Filtering and gating logic to handle common edge cases such as pausing in doorways, closely following individuals, and short reversals. - Node health monitoring, including sensor timeouts and heartbeat status. 3. Wireless Communication Subsystem - Packet structure includes timestamp, IN/OUT counts, current occupancy estimate, and node status. - Features such as retransmission, periodic heartbeats, and graceful degradation during packet loss. 4. Gateway and Data Logging Subsystem - Gateway device (e.g., a Raspberry Pi) receives telemetry from sensor nodes. - Maintains the system-wide occupancy per entrance or room. - Logs data to persistent storage (CSV) and manages node reconnection. 5. Dashboard and User Interface Subsystem - Live dashboard displaying current occupancy, directional flow rate (people per minute), and recent trends. - Visual indicators for “crowded” vs. “not crowded” states based on configurable thresholds. 6. Hardware and PCB Subsystem (Sensor Node) - Custom PCB using a modular, low-risk design approach. - Mechanical enclosure and mounting plan to ensure consistent and repeatable sensor placement. Criterion for Success: The project will be considered successful if the system can accurately demonstrate real-time directional counting and occupancy estimation at one to two doorways using non-imaging sensors. The system must correctly track entries and exits and maintain a live occupancy estimate that updates within one second of a doorway event. A functional dashboard should display current occupancy, flow rate, and node status in real time, while the gateway continuously logs data for at least one hour without interruption. Additionally, a custom-designed PCB must be fabricated and used for at least one sensor node in the final demonstration. The system must remain stable and operational during temporary wireless packet loss events, demonstrating graceful degradation without crashes and automatic recovery once communication resumes.
Node health and connectivity status should be clearly visible through the user interface to allow for basic monitoring and debugging. If time permits, additional success criteria include scaling the system to three or four sensor nodes covering multiple entrances or zones, improving robustness in challenging edge cases such as tailgating or closely spaced groups, and evaluating accuracy as a function of traffic rate. Further extensions may include implementing battery-powered sensor nodes with basic power optimization strategies or adding simple short-term congestion prediction based on recent occupancy trends. |
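A minimal sketch of the ordered-trigger state machine in the Embedded Processing Subsystem, assuming two debounced, non-imaging sensors per doorway (A on the outside, B on the inside); the sensor naming and crossing timeout are assumptions:

```c
#include <stdbool.h>
#include <stdint.h>

/* A person entering trips A then B; a person exiting trips B then A. */
typedef enum { IDLE, SAW_A, SAW_B } walk_state_t;

#define EVENT_TIMEOUT_MS 1500u  /* assumed max time to cross the doorway */

static walk_state_t state = IDLE;
static uint32_t     t0;

/* Call on each debounced sensor edge; updates *occupancy in place. */
void doorway_update(bool a_trig, bool b_trig, uint32_t now_ms, int *occupancy)
{
    /* Gating: a pause or short reversal in the doorway discards the
     * partial event instead of miscounting it. */
    if (state != IDLE && (uint32_t)(now_ms - t0) > EVENT_TIMEOUT_MS)
        state = IDLE;

    switch (state) {
    case IDLE:
        if (a_trig)      { state = SAW_A; t0 = now_ms; }
        else if (b_trig) { state = SAW_B; t0 = now_ms; }
        break;
    case SAW_A:                                   /* A -> B = entry */
        if (b_trig) { (*occupancy)++; state = IDLE; }
        break;
    case SAW_B:                                   /* B -> A = exit  */
        if (a_trig) {
            if (*occupancy > 0) (*occupancy)--;
            state = IDLE;
        }
        break;
    }
}
```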
||||||
| 51 | Networked Physical Chessboard for Remote Play |
Danny Guller Payton Schutte Quinn Athas |
Wenjing Song | |||
| # Networked Physical Chessboard for Remote Play Team Members: - Danny Guller - Quinn Athas - Payton Schutte # Problem Online chess makes it easy for intermediate players to find games quickly, but it removes much of what makes chess feel engaging in the first place. Playing on a screen lacks the tactile feedback of moving real pieces, the spatial awareness of a full board, and the sense of presence that comes from sitting in front of a real board. While traditional in-person chess restores these elements, it usually requires both players to be in the same physical location, which limits who you can play and how often. Some existing commercial systems attempt to bridge this gap by combining physical boards with online connectivity, but these solutions are often extremely expensive and inaccessible to most players. As a result, there is currently no widely available, cost-effective way to enjoy a truly physical game of chess with a remote opponent. Players are therefore forced to choose between convenience and the authentic physical experience of the game, motivating the need for a more affordable and accessible solution. # Solution Our solution is a pair of internet-connected physical chessboards that allow two players in different locations to play a real game of chess using physical pieces. Each board tracks the state of the game locally and synchronizes moves with the remote board in real time. By combining physical interaction with networked communication, the system preserves the tactile and spatial experience of chess while removing the requirement for both players to be in the same place. Each board uses Hall effect sensors embedded beneath every square to detect the presence and movement of magnetized chess pieces. When a player moves a piece, the system detects changes in the board state and infers the intended move by comparing the previous and current configurations. To avoid ambiguity caused by partial lifts, piece adjustments, or accidental touches, players must confirm their move using a button on a digital display before it is transmitted. Once a move is confirmed, it is sent over the internet to the opponent’s board. LEDs on the receiving board highlight the source and destination squares, guiding the opponent to physically replicate the move. The use of Hall effect sensors also enables future expansion, such as differentiating piece types using different magnet strengths or polarities, without requiring major hardware redesign. # Solution Components ## Subsystem 1: Piece Detection (Hall Sensors + ADC Row Readout) To detect pieces on all 64 squares without exhausting the microcontroller’s GPIO resources, the board uses one analog Hall effect sensor per square combined with an ADC-based row readout architecture. Eight 8-channel ADCs are used, with each ADC responsible for one row of the chessboard. Each ADC samples the eight Hall sensors in its row and reports the digitized values to the microcontroller over a shared communication line (I2C or SPI). This design limits the number of devices on the communication bus to eight while still allowing the system to poll all squares frequently enough for responsive move detection. The microcontroller continuously polls the ADCs, reconstructing a full 8×8 chess board where pieces correlate to high magnetic fields. A key challenge in this subsystem is avoiding false positives caused by magnetic fringe fields affecting neighboring squares. 
Because magnetic field strength decreases rapidly with distance, cross-square interference can be mitigated by careful square spacing and threshold selection. The system will also perform a calibration step to record baseline sensor values for each square and detect pieces based on deviations from that baseline rather than using a single global threshold. This approach improves robustness to sensor variation and environmental changes. ## Subsystem 2: Move Inference, Legality Checking, and Piece Identification The system infers piece identity primarily through game state tracking rather than direct sensing. Starting from a standard chess setup, the controller maintains an internal board representation and updates it after each confirmed move. As long as pieces are not intentionally swapped, this approach allows the system to correctly track piece types over the course of the game. Even if physical pieces are swapped, the board will only let legal moves of the original piece be played. During a player’s turn, the controller monitors changes in square occupancy and generates a proposed move hypothesis, including captures. Before the move can be confirmed, the system checks whether it is legal under standard chess rules given the current board state. If the move is illegal, confirmation is blocked and the player is notified via visual feedback, prompting them to correct the placement. As an optional advanced feature, we may directly identify piece types using magnets with distinct strengths or polarity patterns. In this case, the analog Hall sensor readings could be used to classify the piece type directly rather than relying entirely on historical tracking. This would improve robustness against cheating and recovery from incorrect piece placement. The main challenge is ensuring sufficient separation between magnet signal ranges so that piece classes remain distinguishable across all squares and across different boards. If time permits, this feature will be implemented with careful calibration and validation. ## Subsystem 3: Networking and Synchronization This subsystem enables two ESP32-based chessboards to communicate over the internet using a centrally hosted server. Each board connects to the server over Wi-Fi and joins a shared game session, with the server responsible for storing and relaying moves between the two players. Communication is handled using HTTPS and a simple REST-style API. When a player confirms a move, the ESP32 sends the move to the server via an HTTP POST request. The opponent’s board periodically polls the server using HTTP GET requests to retrieve any new moves that have occurred since the last update. Each board tracks the most recent move number it has processed. If a board temporarily disconnects, it can reconnect and request any missed moves, allowing the game to resume without resetting or manual intervention. The server enforces move ordering and prevents duplicate updates, ensuring that both boards remain synchronized throughout the game. ## Subsystem 4: Local User Interface (Display + Controls) The local user interface allows players to set up and control the system without needing a separate phone or computer. 
It provides functionality for entering or selecting a game session code, confirming Wi-Fi and server connectivity, indicating whose turn it is, and displaying basic status or error messages such as “waiting for opponent,” “illegal move,” or “connection lost.” The UI also supports the move confirmation workflow by clearly indicating when a move is ready to be sent and when it has been successfully transmitted and received. Our preferred implementation is a small touchscreen display connected to the ESP32, which allows intuitive menu navigation and direct session code entry. As a simpler and lower-cost alternative, we may use a small OLED display with several physical buttons for menu navigation and code entry. In both cases, the interface is intentionally minimal: a player should be able to power on the board, connect to Wi-Fi, join a game, and begin playing with minimal setup. The final choice will depend on integration complexity, responsiveness on the ESP32, and available development time. ## Subsystem 5: Game Play Loop The game play loop is intentionally simple, to simulate in-person chess as closely as possible. At the start of the game, the board is set up in the standard configuration. Once all pieces are set on each board, White is prompted on the screen to make the first move. White moves a piece; if the move is legal, the display prompts White to submit the move, locking their board state. Black’s display then indicates that White has moved, and LEDs under the relevant squares light up to show which piece to move and where. Black cannot submit a move until their board matches that of the White player. After Black replicates White’s move, Black plays their own move and is prompted to submit. Each move is checked for legality before a submit prompt is revealed. Board state is checked as well to ensure both players' boards are identical. If there are discrepancies in board state on either side, the display indicates which pieces are out of place and where they should be. Once a winner is determined, the game ends and the display shows who won. # Criterion For Success The project will be considered successful if two physical chessboards located in different places can reliably play a complete game of chess while connected only through the internet. Each board must accurately detect player moves using the Hall effect sensor grid, require explicit move confirmation, and prevent illegal moves from being transmitted. Confirmed moves must be transmitted to the server and received by the opponent’s board in the correct order, with the source and destination squares clearly indicated using LEDs. The system must maintain synchronization between boards even in the presence of temporary network interruptions, allowing a board to reconnect and recover the current game state without manual reset. Finally, the system must support the completion of a full legal chess game of at least 30 moves without desynchronization, missed moves, or unintended move confirmations, while providing clear user feedback throughout gameplay. # Components: - Hall effect sensor: DRV5055A4QDBZR 12.5 mV/mT, ±169-mT Range - MCU: ESP-32 (includes Wi-Fi antenna and capability) - ADC: TLA2528IRTER 12-bit, 8-channel, I2C - Display: DSI Touch Screen LCD Display 800x480 |
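A minimal sketch of the per-square baseline calibration described in Subsystem 1, assuming a `read_square` helper that returns one Hall sensor's raw counts via the TLA2528 readout; the averaging depth and deviation threshold are placeholders to be tuned against fringe-field crosstalk:

```c
#include <stdbool.h>
#include <stdint.h>

#define N_SQUARES 64
#define DELTA_MIN 150  /* ADC counts of deviation that count as a piece */

static uint16_t baseline[N_SQUARES];

/* Placeholder for the real ADC readout of one square's Hall sensor. */
extern uint16_t read_square(int sq);

/* Run once at power-up with the board empty: average several samples per
 * square so each square gets its own reference level. */
void calibrate_empty_board(void)
{
    for (int sq = 0; sq < N_SQUARES; sq++) {
        uint32_t acc = 0;
        for (int i = 0; i < 16; i++)
            acc += read_square(sq);
        baseline[sq] = (uint16_t)(acc / 16);
    }
}

/* A square is occupied when its reading deviates from its own baseline,
 * rather than crossing one global threshold. */
bool square_occupied(int sq)
{
    int32_t delta = (int32_t)read_square(sq) - baseline[sq];
    if (delta < 0) delta = -delta;   /* magnet polarity may be either way */
    return delta > DELTA_MIN;
}
```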
||||||
| 52 | LED Globe Display |
Ashley Saju David Heydinger Stephanie Eze |
Shiyuan Duan | |||
| # LED Globe Display Team Members: - Ashley Saju (asaju2) - David Heydinger (ddh3) - Stephanie Eze (oeze2) # Problem For LabEscape, an escape room run by Prof. Kwiat, a unique LED display would be beneficial to the escape room experience. A spinning LED display should be able to show a timer countdown and wirelessly show any image. # Solution We will design a curved LED strip to be mounted on a rotating platform that spins at a constant speed. Through a wireless web app, we can upload images and text to the image display system for storage and playback. These images will be displayed using persistence of vision by precisely controlling LED light timing based on the angular position and speed of the platform. The position and speed of the platform will be measured by a Hall effect sensor that detects each revolution of the rotating system, allowing the system to accurately determine when to display certain LED lights. # Solution Components ## Image Displaying System (Microcontroller, Memory, and LEDs) This system handles the process of receiving the image wirelessly or taking a sprite from memory and lighting the LEDs appropriately. An SD card would be used to store sprites of numbers for the timer mode. Shift registers would be used to achieve a speedy parallel output to the LEDs, which will initially receive a preset voltage, with varying voltages for different colors if time allows. The potentiometer can be used to adjust LED color. RP2040 microcontroller Micro SD card > 16kB memory 24-bit Shift registers: STP24DP05 24-bit constant current LED sink driver with output error detection RGB LEDs: Strawhat LED 4.8mm RGB (4-Pin) WEDRGB03-CM 10kOhm Potentiometer with knob Resistors ## Wireless Control The ESP32 hosts a web application that is accessible by entering the device’s IP address into a web browser. This web application allows a user to upload text or an image, which are processed by the ESP32 into a display-ready format. The processed data is then transmitted directly from the ESP32 to the spherical display system for rendering. The initial implementation supports monochrome bitmap images, with plans to extend to multi-color images in future revisions. ESP32-WROVER-B ## Power System Power for the stationary motor will be provided by AAA batteries. However, delivering power to the spinning component is more difficult due to the potential for wires to become tangled. To solve this, we will drive power to the rotating platform using a slip ring, allowing for 360-degree rotation without twisting any electrical connections. Components: AAA battery pack [MIKROE-5351] Power Switch [GSW-18] Slip Ring [ADAFRUIT1196] DC motor [CN-PA22-201213500-G429] Voltage Regulator (buck converter) ## Spinning PCB - angular speed measurement The spinning PCB will include a Hall effect sensor that will detect exactly when one full turn of the PCB has been completed. It will send the measurements to the microprocessor, which will calculate the angular speed of the spinning PCB based on the time interval between measurements (see the timing sketch below). Components: Hall effect sensor [US5881LUA] Voltage Regulator [MIC5219-3.3] Small Magnet [07045HD] # Criterion For Success When operating at full speed, the displayed text and image should be clearly legible from 5 feet away over a period of 10 minutes.
The rotating assembly remains balanced while operating, with no audible thumping exceeding 50 dB or visible oscillation for the duration of 10 minutes. The LED Globe successfully receives and displays image and text uploads within 1 minute per image, without requiring any physical connections. A Hall effect sensor accurately detects when the rotating assembly has completed one revolution, with less than 2% missed detections over 10 minutes. LED brightness is sufficient to display images and text from 5 feet away under standard indoor lighting (300 lux). Timer mode: the timer can be set to a time up to 1 hour in the web application and counts down, resets, and pauses via the web application. |
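A minimal sketch of the angular-timing logic referenced in the Spinning PCB section, assuming a microsecond timestamp is available in the Hall sensor interrupt; the column count and initial period are assumptions:

```c
#include <stdint.h>

#define N_COLUMNS 180u  /* assumed angular resolution per revolution */

static uint32_t period_us = 40000;  /* placeholder until first measurement */
static uint32_t last_pulse_us;

/* Call from the Hall effect sensor interrupt, once per revolution. */
void hall_pulse_isr(uint32_t now_us)
{
    period_us     = now_us - last_pulse_us;  /* time for one full turn */
    last_pulse_us = now_us;
}

/* Map the time since the last pulse to an image column; the render loop
 * then shifts that column of the frame buffer out through the shift
 * registers to the LEDs, producing the persistence-of-vision image. */
uint32_t current_column(uint32_t now_us)
{
    if (period_us == 0)
        return 0;
    uint32_t phase = (now_us - last_pulse_us) % period_us;
    return phase * N_COLUMNS / period_us;
}
```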
||||||
| 53 | [Updated RFA] - Efficient Card Shuffler with Cut Card Insert |
Alex Lo Faso Matt Garrity Steve Mathew |
||||
| **Efficient Card Shuffler with Cut Card Insert** Team Members: - Alexander Lo Faso (alofaso2) - Matt Garrity (garrity6) - Steve Mathew (stevem4) **Problem** Card games such as blackjack require shuffling of cards between rounds of play. Over time this can be a strenuous task for dealers and decrease playing time for players. In addition, games such as blackjack require a cut card to be inserted between rounds at varying deck penetration levels. There are currently no card shuffling machines with a cut card insertion feature. Many commercially available card shuffling machines have very limited features and lack complexity. These lower-quality machines have limited deck capacities, require a constant push of a button to operate, and require manual retrieval of the shuffled deck, which can be cumbersome when reshuffling the same decks multiple times. **Solution** Our solution is to design and build a card shuffling machine with the added features of increased deck shuffle capacity, optical detection of shuffle completion, a retractable motorized shuffled deck tray, and a cut card insert feature with electrical deck penetration customization. These features lead to four subsystems: card deck(s) detection, deck shuffling mechanism, cut card insertion, and completed deck tray extension. The prevailing goal is to make the card shuffler as efficient as possible. There will only be three inputs available to the user: a shuffle button, a dial to set the cut card penetration, and a cut card insertion button. The entire shuffle function is fully automated with the push of a button. Once the user is ready for the cut card insertion, they will set the dial and press the cut card insert button, which will electrically align the cut card insertion window and create a delay to give the user time to insert the cut card. **Solution Components** **Subsystem 1 (Card Deck(s) Detection)** This subsystem will detect whether cards are present in the input trays for the shuffler. Detection will be determined through the use of reflective optical sensors, and is critical for preventing motor overdrive and ensuring shuffling runs to completion. The reflective sensors on each tray will measure the light reflected off the bottom card of the stack to determine if the tray is empty or still full. The IR sensors will sit flush with the bottom of the tray surface, and their output will be fed to a comparator to differentiate between the signals for when no cards are present and for when there are cards. The resulting digital signal is read by the MCU through GPIO inputs. When the sensors report no cards are present, the MCU concludes that the shuffling process is complete. - Reflective infrared optical sensor (Vishay TCRT5000) - Comparator IC (LM393) **Subsystem 2 (Deck Shuffling Mechanism)** This subsystem is responsible for the physical shuffling of the cards. It will involve two motors positioned at the bottom of the pre-shuffle deck trays. Each motor will slide one card at a time from its respective card stack inwards into a common pile, forming a shuffled card pile. The motors will be in contact with the cards by a wheel with a rubber edge. Once the shuffle button is pressed and the finished tray is fully retracted (from the previous operation), the motors will begin shuffling.
To ensure that the cards are being shuffled reliably, a beam-break sensor will be positioned below the motor wheels. As each card passes through the slot, the sensor will generate a pulse that is read by the MCU to confirm that no jam has occurred (or, if there is no pulse, that there has been a jam and the cards must be reset) and to keep count of the cards that have passed through. The motors will continue shuffling until signaled by subsystem 1 that there are no more cards remaining to be shuffled. - Servo Motor (Hitec SKU: RB-Hit-27) - Optical beam-break sensor (Omron EE-SX1103) **Subsystem 3 (Cut Card Insertion)** This section will include a user-controlled dial (0-100 scale) which will set the desired depth at which the cut card will be inserted into the shuffled deck (e.g., turning the potentiometer halfway around would insert a cut card in the middle of the deck). The dial will be electronically coordinated with a slitted plate which will move along the vertical axis of the card deck based on the dial’s input. A rotary potentiometer will serve as the dial, and the voltage read from the potentiometer will be fed into an ADC on the MCU (the ESP32 has one built in). The output of the ADC will be scaled to a corresponding linear displacement for the slitted plate (see the sketch below), and the plate will be driven by a stepper motor connected to a linear guide rail which will guide it up and down the deck. This allows the user to insert the cut card practically anywhere within the card assortment. Additionally, we will add limit switches at the top and bottom of the rail to prevent any overtravel. - Rotary potentiometer (Bourns 91A1A-B28-L15) - Stepper motor (NEMA 17) - Motor driver (TMC2208) - Linear Guide Rail (MGN9H Linear Guide Rail + Carriage Block) - Limit Switch (Omron SS-3GLPT) **Subsystem 4 (Completed Deck Tray Extension)** This subsystem will be responsible for extending the completed shuffled deck at the end of the shuffle operation. This will require the use of one motor and one optical sensor. The motor used will be a small gear motor attached to a gear track. These are optimal since they have high torque and require low voltage to operate. In addition, we will use an optical sensor to detect when the shuffled deck has been retrieved from the extended tray. Once the shuffled deck is retrieved, the tray will retract automatically. - Reflective infrared optical sensor (Vishay TCRT5000) - Stepper motor (NEMA 17) - Motor driver (TMC2208) - Gear Track (22460300) **Criterion For Success** - Device successfully shuffles 4-6 standard size decks without any manual intervention - Pressing the ‘start’ button once begins the shuffling process - Shuffling continues until the input trays are emptied and halts once there are no more cards left in the trays, with ~95% accuracy (to account for potential physical mishaps) - If the MCU detects a jam, shuffling stops - Cut card insertion slot moves in accordance with the dial, allowing insertion at any desired deck penetration level - Bottom tray extends automatically upon shuffling completion - Bottom tray retracts when the IR sensor reads that there are no cards on the tray |
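A minimal sketch of the dial-to-displacement scaling referenced in Subsystem 3, assuming the ESP32's 12-bit ADC range; the travel length, steps-per-mm ratio, and limit-switch helpers are placeholders to be measured on the built rail:

```c
#include <stdbool.h>
#include <stdint.h>

#define ADC_MAX        4095u  /* ESP32 12-bit ADC full scale            */
#define RAIL_TRAVEL_MM 100u   /* assumed usable travel along the deck   */
#define STEPS_PER_MM   80u    /* stepper + rail ratio; measure on build */

extern bool limit_top_hit(void);
extern bool limit_bottom_hit(void);
extern void step_once(int dir);  /* one STEP pulse; dir sets the DIR pin */

/* Scale the dial's raw ADC counts to an absolute step target for the
 * slitted plate along the guide rail. */
uint32_t dial_to_steps(uint16_t adc)
{
    uint32_t mm = (uint32_t)adc * RAIL_TRAVEL_MM / ADC_MAX;
    return mm * STEPS_PER_MM;
}

/* Step toward the target, honoring the end-of-travel limit switches. */
void move_plate(uint32_t *pos, uint32_t target)
{
    while (*pos != target) {
        int dir = (target > *pos) ? 1 : -1;
        if ((dir > 0 && limit_top_hit()) || (dir < 0 && limit_bottom_hit()))
            break;  /* prevent overtravel at either end of the rail */
        step_once(dir);
        *pos = (dir > 0) ? *pos + 1 : *pos - 1;
    }
}
```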
||||||
| 54 | E-PEEL: Electronic Peeling Equipment for Easier Living |
Hyun Jun Paik Saathveek Gowrishankar Varun Ramprakash |
design_document1.jpeg |
|||
| Team Members: - Saathveek Gowrishankar (sg59) - Varun Ramprakash (varunr6) - Hyun Jun Paik (hpaik2) # Problem Traditional peelers require grip strength and fine motor control to properly and safely operate. Older adults and other individuals with limited fine motor control, arthritis, tremors, or reduced grip strength often find peeling fruits/vegetables difficult and unsafe. Meal preparation is widely classified as an instrumental activity of daily living (IADL), and the inability to consistently prepare meals can diminish one's independence and quality of life. Several recent papers highlight the lack of available assistive technologies for kitchen-related tasks. One paper (MORPHeus: a Multimodal One-armed Robot-assisted Peeling System with Human Users In-the-loop) even explores a fully autonomous robotic arm that peels vegetables with no human intervention. This solution, however, would be expensive, large, and unrealistic for home kitchens. Additionally, several studies highlight that older adults are less likely to use fully automated solutions and instead prefer semi-autonomous assistive technology that they can reasonably control. # Solution We propose a semi-autonomous peeling assist robot that can solve many of the aforementioned challenges while avoiding many of the disadvantages of existing proposed solutions. Our proposed solution consists of two primary mechanisms: a motorized conveyor belt and an actively compliant lever arm. Users place a vegetable on the conveyor belt, which can then move the vegetable underneath and across a peeler; the conveyor belt is controlled by three buttons: one for each direction and one to stop. The actively compliant lever arm is fitted with a pressure sensor, a vibration motor, and a vegetable peeler; this allows the peeler’s position to adapt to variations in vegetable shape and position while maintaining a consistent depth of peeling. To ensure continuous and reliable power without runtime limitations, the device will be designed to operate on AC power using an external low-voltage DC adapter. To ensure ease of use, all food-contact components will be removable without tools and easily cleaned. The peeler will be held in place on two rails with a plastic swivel lock at one end, and the plastic conveyor belt will have a removable food-safe silicone/TPU outer layer that clips on; this allows the peeler and conveyor belt cover to be secure when in use but also effortlessly removed for cleaning. LEDs will be included to signal the state of the device (on/off) and the state of the conveyor belt (forward, reverse, paused). # Solution Components ## Subsystem 1: Conveyor Belt This subsystem controls the movement of the vegetable at constant speed, pulling it underneath the peeler blade. The vegetable is peeled lengthwise. Cylindrical vegetables (e.g., zucchini or carrots) are placed on the conveyor belt with their long axis parallel to the belt direction. As the belt moves forward, the vegetable is drawn longitudinally across the blade, allowing the blade to remove peel along the length of the vegetable surface. A single motor rotates the conveyor by rotating the drive roller through a sprocket and chain transmission. The belt is constructed from plastic and covered by a layer of food-grade silicone. The silicone layer clips onto the plastic belt and can be easily removed for cleaning.
- 12V Stepper Motor MEDIUM bipolar - ROB-09238 - Stepper Motor Driver – TB6600 ## Subsystem 2: Blade Holder: Pressure Detector with Vibration Motor This subsystem applies a controlled peeling force to the vegetable using a spring-loaded blade holder with motor-adjustable position, while simultaneously measuring the applied force using a load cell. A TAL220B straight-bar load cell measures the normal contact force applied by the blade. The load cell output is amplified and digitized by an HX711 load cell amplifier, allowing the microcontroller to read and record the applied force. Based on load cell feedback, the MG996R servo motor adjusts the blade position in real time as the blade encounters variations in the vegetable surface, maintaining continuous contact with a consistent applied force. To improve peeling, a mini vibration motor (Adafruit 1201) is mounted near the blade holder. The vibration helps the blade slide through the skin more smoothly without increasing applied force. Control Loop: The MG996R servo will be updated at approximately 50 Hz based on load cell feedback (see the sketch following this proposal). Target Force Value: The initial target normal force is ~1–2 N, which is sufficient to peel typical vegetables like zucchini, carrot, and potato. We will experiment with these values to find the best-performing force. Control Algorithm: We will use a threshold-based incremental adjustment: if the measured force is above the target range, the servo retracts slightly; if below, it advances. This approach is simpler than PID and sufficient for the semi-autonomous design. Force Range Variation: Peeling force varies with vegetable type and skin toughness. Some papers indicate forces between 0.8 N and 2.5 N are generally effective for common cylindrical vegetables, but again, we'll have to test this. - SparkFun Load Cell (5kg, Straight Bar) – TAL220B - SparkFun Load Cell Amplifier – HX711 - Servo Motor – MG996R - Adafruit Vibrating Mini Motor Disc – ID: 1201 ## Subsystem 3: User Interface: Conveyor Direction Push Buttons This subsystem provides a simple, reliable manual control interface to move the conveyor belt forward or in reverse. Its main purpose is jam recovery: if a vegetable binds against the blade, the user can reverse the belt to free it from contact, then resume forward motion. LEDs will be included to indicate the state of the conveyor belt direction. For safety, the peeler will only vibrate when the device is in the peel state, not in the pause or reverse state. Additionally, pressing any button (including reverse) during the peel state will stop the device, moving it into the pause state. The user does not manually feed or hold the vegetable during operation. After placing the vegetable on the conveyor belt, the user steps back and initiates motion using a momentary button press. The blade remains stationary relative to the frame and is never directly contacted during normal operation. A physical blade guard will be added to prevent any direct access to the blade from above or the sides, reducing the risk of accidental contact. - 4 LEDs - 3 Buttons (Forward / Reverse / Pause) - 1 Switch (Power On/Off) ## Subsystem 4: Power, Voltage, and Current Control This subsystem converts standard AC wall power into the low-voltage DC required to safely operate all motors, sensors, and microcontroller components. It ensures continuous, reliable power without runtime limitations and protects user-accessible components from any high voltage.
It also ensures that the power provided to the circuit components does not exceed their maximum power requirements. A current sensor will additionally be used to prevent motor burnout during stalls. - AC-to-DC Adapter: Mean Well GST60A24-P1J - Current Sensor - ACS712 # Criteria For Success The device has three states, and the following criteria reference these states. - Pause State: The conveyor belt does not move, and the blade does not vibrate. - Peel State: The conveyor belt moves forward, and the blade vibrates. - Reverse State: The conveyor belt moves backward, and the blade does not vibrate. 1. The device enters the pause state when the on/off switch is turned on. 2. When the forward peel button is pressed and the device is in the pause state, the device enters the peel state. 3. If any button other than the forward peel button is pressed during the peel state, the device immediately enters the pause state. 4. When the reverse button is pressed and the device is in the pause state, the device enters the reverse state. 5. Once the conveyor belt starts moving forward, it does not stop unless the direction is changed, the conveyor is paused, or power is cut. 6. Once the peeler starts vibrating, it does not stop unless the direction is changed, the conveyor is paused, or power is cut. 7. The device thoroughly peels cylindrical vegetables, covering over 90% of their surface area. Upon achieving consistent success with partially cylindrical vegetables (e.g., zucchini), we will attempt to peel other varying shapes/sizes of fruits and vegetables. 8. The device minimizes the amount of usable produce being discarded (determined qualitatively from visual observations). 9. The device operates from a standard 120 V AC outlet. |
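A minimal sketch of the threshold-based incremental control loop described in Subsystem 2, run at ~50 Hz and using the proposal's 1–2 N starting range; the dead band, step size, and angle limits are assumptions to be tuned during testing:

```c
#define F_TARGET_N 1.5f   /* middle of the 1-2 N starting range       */
#define F_BAND_N   0.25f  /* dead band to avoid servo chatter         */
#define STEP_DEG   1.0f   /* incremental servo move per 50 Hz update  */
#define ANGLE_MIN  0.0f
#define ANGLE_MAX  90.0f

/* One iteration: force_n is the calibrated HX711/TAL220B reading in
 * newtons; returns the new servo angle command for the MG996R. */
float blade_force_update(float force_n, float angle_deg)
{
    if (force_n > F_TARGET_N + F_BAND_N)
        angle_deg -= STEP_DEG;   /* too much force: retract the blade   */
    else if (force_n < F_TARGET_N - F_BAND_N)
        angle_deg += STEP_DEG;   /* too little: advance toward the food */

    /* Clamp to the mechanical range of the blade holder. */
    if (angle_deg < ANGLE_MIN) angle_deg = ANGLE_MIN;
    if (angle_deg > ANGLE_MAX) angle_deg = ANGLE_MAX;
    return angle_deg;
}
```

The dead band is what makes this simpler than PID, as the proposal notes: the servo only moves when the measured force leaves the target window, so small sensor noise produces no motion at all.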
||||||