# Interactive Desktop Companion Robot for Stress Relief

TA: Haocheng Bill Yang
# Team
- Jiajun Gao (jiajung3)
- Yuchen Shih (ycshih2)
- Zichao Wang (zichao3)
# Problem
Students and office workers often spend extended periods working at desks, leading to mental fatigue, stress, and reduced focus. While mobile applications, videos, or music can provide temporary relief, they often require users to shift attention away from their primary tasks and lack a sense of physical presence. Static desk toys also fail to maintain long-term engagement because they do not adapt to user behavior or provide meaningful interaction.
There is a need for an interactive, physically present system that can provide short, low-effort interactions to help users relax without becoming a major distraction. Such a system should be compact, safe for desk use, and capable of responding naturally to user input.

# Solution
We propose an interactive desktop companion robot designed to reduce stress and boredom through voice interaction, expressive feedback, and simple physical motion. The robot has a compact, box-shaped form factor suitable for desk environments and can move using a tracked or differential-drive base. An ESP32-based controller coordinates audio processing, networking, control logic, and hardware interfaces.
The robot supports voice wake-up, natural language conversation using a cloud-based language model, and speech synthesis for verbal responses. Visual expressions are displayed using a small screen or LED indicators to reflect internal states such as listening, thinking, or speaking. Spoken commands can also trigger physical actions, such as rotating, moving closer, or changing expressions. By combining audio, visual, and physical interaction, the system creates an engaging yet lightweight companion that fits naturally into a desk workflow.
# Solution Components
## Subsystem 1: Voice Interaction and Audio Processing
This subsystem enables natural voice-based interaction between the user and the robot. It performs wake-word detection locally and streams audio data to a remote server for speech recognition and response generation. The subsystem also handles audio playback and interruption control.
Audio data is captured using a digital microphone, encoded, and transmitted over a network connection. Responses from the server are received as audio streams and played through an onboard speaker. Local wake-word detection ensures responsiveness and reduces unnecessary network usage.
Components:

• ESP32-S3 microcontroller with PSRAM
• ESP32-S3 integrated Wi-Fi module
• I2S digital microphone (INMP441 or equivalent)
• I2S audio amplifier (MAX98357A)
• 4Ω or 8Ω speaker
## Subsystem 2: Visual Expression and User Feedback
This subsystem provides visual feedback that represents the robot’s internal state and interaction context. Visual cues improve usability and convey personality.
Different states such as idle, listening, processing, speaking, and error are represented using animations or color patterns.
Components:

• SPI LCD display (ST7789 or equivalent) or
• RGB LEDs (WS2812B or equivalent)

## Subsystem 3: Motion and Actuation
This subsystem enables controlled movement on a desk surface. The robot performs simple motions such as forward movement, rotation, and stopping based on voice commands and sensor feedback.
Motor control runs in a dedicated task to prevent interference with audio and networking functions.
Components:

• Two DC gear motors
• Dual H-bridge motor driver (TB6612FNG or equivalent)
• Optional wheel encoders


## Subsystem 4: Power Management and Safety
This subsystem manages power distribution and ensures safe operation. The robot is battery-powered to allow untethered use on a desk. Hardware and software protections limit speed, current, and movement range.
Components:

• Lithium battery with protection circuit
• Battery charging module
• Voltage regulators (5V and 3.3V)
• Physical power switch

## Subsystem 5: Safety Sensing (Desk-Edge Detection + Obstacle Avoidance)

This subsystem prevents the robot from falling off the desk and reduces collisions with nearby objects. It continuously monitors both the surface below the robot and the space in front of the robot. When a desk edge (cliff) or obstacle is detected, this subsystem overrides motion commands and triggers an immediate safe response.

Desk-edge detection (cliff detection):
Two downward-facing distance sensors are mounted near the front-left and front-right corners. They measure the distance from the robot to the desk surface. If either sensor detects a sudden increase in distance beyond a calibrated baseline, the robot immediately stops and performs a short reverse maneuver to move away from the edge.

Obstacle avoidance:
A forward-facing distance sensor detects objects in front of the robot. If an obstacle is within a predefined safety distance, the robot stops. If the obstacle remains, the robot can optionally rotate in place to search for a clear direction before continuing motion.

Control priority:
Safety sensing has the highest priority in the motion stack:

1. Desk-edge detection (highest priority)
2. Obstacle avoidance
3. User/voice motion commands (lowest priority)

Components:

• 2 × Time-of-Flight distance sensors for downward cliff detection (VL53L0X or equivalent, I2C)
• 1 × Time-of-Flight distance sensor for forward obstacle detection (VL53L0X or equivalent, I2C)

# Criterion For Success
The success of this project will be evaluated using the following high-level criteria:
1. The robot connects to a Wi-Fi network and establishes a server connection within 10 seconds of power-on.
2. The system detects a wake word and enters interaction mode within 2 seconds in a quiet environment.
3. The average end-to-end voice interaction latency is less than 5 seconds under normal network conditions.
4. At least five predefined voice commands trigger the correct robot actions with at least 90% accuracy during testing.
5. Visual feedback correctly reflects the system state in all operational modes.
6. The robot operates continuously for at least 30 minutes on battery power during active use.
7. When Wi-Fi is unavailable, the system enters a safe degraded mode without crashing or unsafe motion.
8. During a 10-minute continuous motion demonstration on a desk, the robot does not fall off the desk.
9. In an obstacle test, the robot is commanded to move forward toward a stationary obstacle (for example, a box or book) from multiple start distances for 20 trials. The robot must stop (or stop and turn) before making contact in at least 18/20 trials.

# Microcontroller-based Occupancy Monitoring (MOM)

Team Members:

- Franklin Moy (fmoy3)

- Vish Gopal Sekar (vg12)

- John Li (johnwl2)

# Problem

With the campus returning to normalcy from the pandemic, most, if not all, students have returned to campus for the school year. This means more students will be going to the libraries to study, which in turn means the limited space at the libraries will fill up quickly. Even during the pandemic semesters, many students entered libraries such as Grainger to find study space, only to leave five minutes later because all of the seats were taken. This is a loss not only of study time, but potentially also of the motivation to study at that point in time.

# Solution

We plan on utilizing a fleet of microcontrollers that will scan for nearby Wi-Fi and Bluetooth network signals in different areas of a building. Since students nowadays will be using phones and/or laptops that emit Wi-Fi and Bluetooth signals, scanning for Wi-Fi and Bluetooth signals is a good way to estimate the fullness of a building. Our microcontrollers, which will be deployed in numerous dedicated areas of a building (called sectors), will be able to detect these connections. The microcontrollers will then conduct some light processing to compile the fullness data for its sector. We will then feed this data into an IoT core in the cloud which will process and interpret the data and send it to a web app that will display this information in a user-friendly format.

# Solution Components

## Microcontrollers with Radio Antenna Suite

Each microcontroller will scan for Wi-Fi and Bluetooth packets in its vicinity, then it will compile this data for a set timeframe and send its findings to the IoT Core in the Cloud subsystem. Each microcontroller will be programmed with custom software that will interface with its different radio antennas, compile the data of detected signals, and send this data to the IoT Core in the Cloud subsystem.

The microcontroller that would suit the job would be the ESP32. It can be programmed to run a suite of real-time operating systems, which are perfect for IoT applications such as this one. This enables straightforward software development and easy connectivity with our IoT Core in the Cloud. The ESP32 also comes equipped with a 2.4 GHz Wi-Fi transceiver, which will be used to connect to the IoT Core, and a Bluetooth Low Energy transceiver, which will be part of the radio antenna suite.

Most UIUC Wi-Fi access points are dual-band, meaning that they communicate using both the 2.4 GHz and 5 GHz frequencies. Because of this, we will need to connect a separate dual-band antenna to the ESP32. The simplest solution is to get a USB dual-band Wi-Fi transceiver, such as the TP-Link Nano AC600, and plug it into a USB Type-A breakout board that we will connect to each ESP32's GPIO pins. Our custom software will interface with the USB Wi-Fi transceiver to scan for Wi-Fi activity, while it will use the ESP32's own Bluetooth Low Energy transceiver to scan for Bluetooth activity.

## Battery Backup

It is possible that the power supply to a microcontroller could fail, either due to a faulty power supply or by human interference, such as pulling the plug. To mitigate the effect this would have on the system, we plan on adding a battery backup subsystem to each microcontroller. The battery backup will not only power the microcontroller when wall power is lost, but will also charge the battery while the microcontroller is plugged in.

Most ESP32 development boards, like the Adafruit HUZZAH32, have this subsystem built in. Should we decide to build this subsystem ourselves, we would use the following parts. Most, if not all, ESP32 microcontrollers use 3.3 volts as their operating voltage, so a 3.7-volt battery (in either an 18650 or LiPo form factor) with a voltage regulator would supply the necessary voltage for the microcontroller to operate. A battery charging circuit built around a charge management controller would also be needed to maintain battery safety and health.

## IoT Core in the Cloud

The IoT Core in the Cloud will handle the main processing of the data sent by the microcontrollers. Each microcontroller is connected to the IoT Core, which will likely be hosted on AWS, through the ESP32's included 2.4 GHz Wi-Fi transceiver. We will also host on AWS the web app that interfaces with the IoT Core to display the fullness of the different sectors. This web app will initially be very simple and display only the estimated fullness. The web app will likely be built using a Python web framework such as Flask or Django.

# Criterion For Success

- Identify Wi-Fi and Bluetooth packets from a device and distinguish them from packets sent by different devices.

- Be able to estimate the occupancy of a sector within a reasonable margin of error (15%), as well as being able to compute its fullness relative to that sector's size.

- Display sector capacity information on the web app that is no more than 5 minutes old when a user accesses the page.

- Battery backup system keeps the microcontroller powered for at least 3 hours when the wall outlet is unplugged.
