# AMADEUS - Augmented Modular AI Dialogue and Exchange User System
# Team members:
· Ryan Fu (ryfu2)

· Qiran Pang (qpang2)

· Chengyuan Peng (cpeng14)
# Problem
For many years, people have dreamed of having natural, everyday conversations with robots to fulfill their emotional and lifestyle needs. However, current interactive AI systems are often bulky, and even the most portable solutions still rely on smartphone interactions. Regarding emotional needs, we don’t want to talk to a cold, lifeless screen. Instead, we hope for a more tangible medium—like a child chatting with a SpongeBob toy embedded with AI.
Thus, the needs are clear:

· A more compact AI platform that can easily integrate into various devices.

· On top of that, it should be as affordableable as possible so that it is widely accessible.
# Solution
We are designing an AI-based audio interactive interface.
The baseline feature of the project is a low-cost PCB interface that records audio from the user and sends it over Wi-Fi to an AI model running on a computer; the model processes the audio and replies with audio, which is sent back to the board and played out. We will use an ESP32 microcontroller with Wi-Fi and audio input/output capability to achieve this.
Additional features would be indoor and outdoor modes: in outdoor mode, the user speaks only while a button is pressed, and the input is denoised. The board could also be integrated with headphones or Bluetooth earbuds, and a text display could be embedded on the PCB to show the converted audio as text.
Please view our block diagram via the Google link: https://docs.google.com/document/d/1Uv_b5SzeoN7boqyMyB3Kkgl7XGVAnuv50S6DZ1e3PhY/edit





# Solution Components

# Subsystem 1: AI Web Client
Our language model will be hosted on a cloud-based server. The local MCU will transmit audio to the server via a WiFi module. We are collaborating with a local start-up that will provide the AI model and handle the audio training. However, we also have the option to train our own AI model to create additional characters using their interface.
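The server API has not been fixed yet, so the following is only a rough sketch: assuming the server exposes a single HTTP endpoint that accepts raw PCM audio and streams back the synthesized reply (the URL, content type, and function name below are placeholders, not the start-up's actual interface), the exchange seen from the board could look like this:

```cpp
// Hypothetical request/reply exchange with the AI server. The endpoint,
// content type, and reply format are placeholders, not a defined API.
#include <WiFi.h>
#include <HTTPClient.h>

// Send one recorded utterance and fetch the synthesized reply audio.
// `pcm` holds raw 16-bit mono samples captured from the codec.
size_t queryAiServer(const uint8_t *pcm, size_t pcmLen,
                     uint8_t *reply, size_t replyCap) {
  HTTPClient http;
  http.begin("http://example-ai-server.local/dialogue");  // placeholder URL
  http.addHeader("Content-Type", "application/octet-stream");

  int status = http.POST(const_cast<uint8_t *>(pcm), pcmLen);
  size_t replyLen = 0;
  if (status == HTTP_CODE_OK) {
    WiFiClient *stream = http.getStreamPtr();
    // Simplified: assumes the reply is small enough to arrive promptly.
    while (stream->available() && replyLen < replyCap) {
      replyLen += stream->readBytes(reply + replyLen, replyCap - replyLen);
    }
  }
  http.end();
  return replyLen;  // 0 on failure
}
```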

# Subsystem 2: ESP32 with Wi-Fi Capability
We will use an ESP32 as the main processor for signal handling. Before use, it receives the Wi-Fi credentials from the user's device over Bluetooth so that it can join the network. It then receives the audio signal from the ADC and sends it to the PC as input for the AI Web Client; once the output audio signal comes back from the PC, it is sent to the audio codec for playback.
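As a minimal sketch of the connection step on the Arduino-ESP32 core (the timeout and helper name are our own illustration; the credentials are assumed to have already arrived over Bluetooth, see Subsystem 4):

```cpp
// Minimal Wi-Fi bring-up on the ESP32 Arduino core. `ssid`/`password`
// are assumed to have been received over Bluetooth beforehand.
#include <WiFi.h>

bool connectWifi(const char *ssid, const char *password,
                 uint32_t timeoutMs = 15000) {
  WiFi.mode(WIFI_STA);
  WiFi.begin(ssid, password);

  uint32_t start = millis();
  while (WiFi.status() != WL_CONNECTED) {
    if (millis() - start > timeoutMs) {
      return false;  // give up; prompt the user to re-send credentials
    }
    delay(250);
  }
  return true;  // connected; audio can now be forwarded to the server
}
```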

# Subsystem 3: Power System
The system can be powered through either a USB connection or a 5V battery. The 5V supply directly powers the I/O devices and the programming module. To provide 3.3V power for the microcontroller and audio processing module, a 5V to 3.3V LDO voltage regulator is used to step down the voltage.
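As a rough sanity check on the regulator (the 300 mA figure is an assumed worst-case average draw for the ESP32 plus codec during Wi-Fi transmission, not a measured value), the LDO would dissipate roughly (5 V - 3.3 V) × 0.3 A ≈ 0.5 W, which a small fixed-output regulator with adequate copper pour should handle without a dedicated heatsink.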

# Subsystem 4: Bluetooth Communication
A Bluetooth transceiver module will be connected to the ESP32 processor to receive user input for configuring the internet connection. The user will transmit the internet passcode to the Bluetooth transceiver, which will then relay this information to the microcontroller to establish the connection.
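If the ESP32's built-in Classic Bluetooth (SPP) ends up carrying this step instead of an external transceiver, the provisioning flow could look like the sketch below; the one-line "ssid,password" message format and the device name are assumptions for illustration, not a defined protocol.

```cpp
// Sketch of Wi-Fi credential provisioning over Classic Bluetooth SPP.
#include <Arduino.h>
#include <BluetoothSerial.h>

BluetoothSerial SerialBT;

void setup() {
  Serial.begin(115200);
  SerialBT.begin("AMADEUS");  // device name shown on the user's phone
}

void loop() {
  if (SerialBT.available()) {
    String line = SerialBT.readStringUntil('\n');  // e.g. "MyHomeAP,secret123"
    int comma = line.indexOf(',');
    if (comma > 0) {
      String ssid = line.substring(0, comma);
      String password = line.substring(comma + 1);
      // Hand the credentials to the Wi-Fi subsystem (see Subsystem 2).
      Serial.printf("Got credentials for SSID '%s'\n", ssid.c_str());
    }
  }
}
```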

# Subsystem 5: Audio I/O & Processing
The microphone on the board will capture the audio input, which will be processed by an Audio Codec module. Once the audio output is fetched from the internet into the MCU, it will be transmitted through the Audio Codec and played through a speaker.
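A rough capture-side sketch using the ESP-IDF legacy I2S driver (available from the Arduino core) is shown below; the sample rate, GPIO numbers, and DMA buffer sizes are placeholders, and the codec's own register configuration over I2C is omitted since the codec part has not been chosen.

```cpp
// Placeholder I2S microphone capture setup for the ESP32.
#include <driver/i2s.h>

void setupI2sCapture() {
  i2s_config_t cfg = {
    .mode = (i2s_mode_t)(I2S_MODE_MASTER | I2S_MODE_RX),
    .sample_rate = 16000,                        // assumed speech sample rate
    .bits_per_sample = I2S_BITS_PER_SAMPLE_16BIT,
    .channel_format = I2S_CHANNEL_FMT_ONLY_LEFT,
    .communication_format = I2S_COMM_FORMAT_STAND_I2S,
    .intr_alloc_flags = 0,
    .dma_buf_count = 4,
    .dma_buf_len = 256,
  };
  i2s_pin_config_t pins = {
    .bck_io_num = 26, .ws_io_num = 25,           // placeholder GPIOs
    .data_out_num = I2S_PIN_NO_CHANGE, .data_in_num = 33,
  };
  i2s_driver_install(I2S_NUM_0, &cfg, 0, NULL);
  i2s_set_pin(I2S_NUM_0, &pins);
}

// Read one block of microphone samples into `buf`; returns bytes read.
size_t readMicBlock(int16_t *buf, size_t bytes) {
  size_t bytesRead = 0;
  i2s_read(I2S_NUM_0, buf, bytes, &bytesRead, portMAX_DELAY);
  return bytesRead;
}
```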

# Subsystem 6: Text Display

An additional feature of our project will be a text display. After the ESP32 module converts the audio input/output into text, an LCD screen attached to the microcontroller will display the text output.
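For illustration, assuming a common 16x2 I2C character LCD driven by the LiquidCrystal_I2C library (the display part and the I2C address 0x27 are assumptions, since the final LCD has not been selected), updating the screen with the latest text could look like this:

```cpp
// Illustrative text-display update on an assumed 16x2 I2C character LCD.
#include <Arduino.h>
#include <LiquidCrystal_I2C.h>

LiquidCrystal_I2C lcd(0x27, 16, 2);  // assumed address, columns, rows

void setupDisplay() {
  lcd.init();
  lcd.backlight();
}

// Show the latest transcribed text, truncated to what fits on two rows.
void showTranscript(const String &text) {
  lcd.clear();
  lcd.setCursor(0, 0);
  lcd.print(text.substring(0, 16));
  lcd.setCursor(0, 1);
  if (text.length() > 16) lcd.print(text.substring(16, 32));
}
```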

# Subsystem 7: Debug Module
A serial port will be temporarily integrated into the PCB for debugging the output from the ESP32 processor. Additionally, a programmer will be connected to the MCU for programming purposes.
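During bring-up, the serial port mainly needs to carry simple timestamped log lines from the firmware, for example (assuming Serial.begin() has already been called in setup()):

```cpp
// Timestamped debug logging over the temporary serial port.
#include <Arduino.h>

void debugLog(const char *msg) {
  Serial.printf("[%lu ms] %s\n", millis(), msg);
}
```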





Decentralized Systems for Ground & Aerial Vehicles (DSGAV)

Mingda Ma, Alvin Sun, Jialiang Zhang

Featured Project

# Team Members

* Yixiao Sun (yixiaos3)

* Mingda Ma (mingdam2)

* Jialiang Zhang (jz23)

# Problem Statement

Autonomous delivery over drone networks has become one of the new trends that can save a tremendous amount of labor. However, it is very difficult to scale up because multi-rotor collaboration is inefficient, especially when the drones are carrying payloads. To actually deploy such a system in big cities, we could take advantage of the large ground vehicle network that already exists through rideshare companies like Uber and Lyft. The roof of an automobile has plenty of space to hold regular-size packages with magnets, and the drone network can then optimize for flight time and efficiency while factoring in ground vehicle plans. While dramatically increasing delivery coverage and efficiency, such a strategy raises the challenging problem of docking drones onto moving ground vehicles.

# Solution

Given the scope and time limitations, we aim to tackle one particular component of this project: a decentralized multi-agent control system that synchronizes a ground vehicle and a drone when they are in close proximity. Assumptions such as knowledge of the vehicle states will be made, since this project is aiming for a proof of concept of a core challenge; as we progress, we aim to lift as many of those assumptions as possible. The lab infrastructure, drone, and ground vehicle will be provided by our kind sponsor, Professor Naira Hovakimyan. When the drone approaches the target and has visual contact with the ground vehicle, it will automatically send a docking request through an RF module. The RF receiver on the vehicle will then automatically turn on its assistant devices, such as specific LED light patterns that aid motion synchronization between the ground and aerial vehicles. The ground vehicle will also periodically send its locally planned paths to the drone so that the drone can predict the ground vehicle's trajectory a couple of seconds into the future. This prediction helps the drone stay within close proximity to the ground vehicle by tracking a reference trajectory.
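To make the trajectory-prediction step concrete (the 2D waypoint format and the constant-velocity extrapolation below are our own illustrative assumptions, not a committed design), the drone could extrapolate the vehicle's last reported path segment a short horizon into the future and track the result as a reference:

```cpp
// Sketch of predicting the ground vehicle's position a short horizon ahead
// from its most recent locally planned path.
#include <vector>

struct Waypoint {
  double t;   // seconds since a shared epoch
  double x;   // metres, ground frame
  double y;
};

// Predict where the vehicle will be `horizon` seconds after its last waypoint,
// assuming constant velocity over the final path segment.
Waypoint predictAhead(const std::vector<Waypoint> &path, double horizon) {
  if (path.size() < 2) {
    return path.empty() ? Waypoint{0.0, 0.0, 0.0} : path.back();
  }
  const Waypoint &a = path[path.size() - 2];
  const Waypoint &b = path.back();
  double dt = b.t - a.t;
  double vx = (b.x - a.x) / dt;   // velocity from the last path segment
  double vy = (b.y - a.y) / dt;
  return {b.t + horizon, b.x + vx * horizon, b.y + vy * horizon};
}
```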

### The hardware components include:

Provided by Research Platforms

* A drone

* A ground vehicle

* A camera

Developed by our team

* An LED based docking indicator

* RF communication modules (xbee)

* Onboard compute and communication microprocessor (STM32F4)

* Standalone power source for RF module and processor

# Required Circuit Design

We will integrate the power source, RF communication module and the LED tracking assistant together with our microcontroller within our PCB. The circuit will also automatically trigger the tracking assistant to facilitate its further operations. This special circuit is designed particularly to demonstrate the ability for the drone to precisely track and dock onto the ground vehicle.
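One possible shape for the trigger logic on the vehicle-side microcontroller is sketched below; the message code, timeout value, and state names are placeholders rather than a finalized protocol.

```cpp
// Illustrative vehicle-side trigger logic: the LED docking indicator is
// enabled when a docking request arrives over RF and disabled on timeout.
#include <cstdint>

enum class DockState { Idle, Assisting };

struct DockingAssistant {
  DockState state = DockState::Idle;
  uint32_t lastRequestMs = 0;

  // Called for every frame received from the XBee link.
  void onRfMessage(uint8_t code, uint32_t nowMs) {
    if (code == 0x01) {              // 0x01 = "docking request" (assumed)
      state = DockState::Assisting;
      lastRequestMs = nowMs;
    }
  }

  // Called periodically; returns true while the LED pattern should run.
  bool ledEnabled(uint32_t nowMs) {
    if (state == DockState::Assisting && nowMs - lastRequestMs > 5000) {
      state = DockState::Idle;       // drop assist if requests stop for 5 s
    }
    return state == DockState::Assisting;
  }
};
```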

# Criterion for Success -- Stages

1. When the ground vehicle is moving slowly in a straight line, the drone can autonomously take off from an arbitrary location and end up following it within close proximity.

2. The drone remains in close proximity while the ground vehicle is slowly turning (or navigating arbitrarily at slow speed).

3. The drone can dock autonomously onto a ground vehicle that is moving slowly in a straight line.

4. The drone can dock autonomously onto a ground vehicle that is slowly turning.

5. Increase the speed of the ground vehicle and successfully perform tracking and/or docking.

6. The drone can pick up packages while flying synchronously with the ground vehicle.

We consider the project complete at stage 3. The later stages are advanced features that depend on actual progress.
