# A.I.dan: ChatGPT Integrated Virtual Assistant

Team Members:
- Andrew Scott (ajscott5)
- Leonardo Garcia (lgarci91)
- Brahmteg Minhas (bminhas2)

# Problem
Current virtual assistants (Amazon's Alexa, Apple's Siri, etc.) all rely on conventional web search, primarily Google, as their mechanism for answering the questions posed to them. While they may offer other functionality, like integration with Amazon.com or Spotify, their primary function is to answer questions through audio I/O. With the advent of ChatGPT, keyword-based search is an outdated information-gathering mechanism, and it needs to be replaced within the virtual assistant space.
# Solution
Our solution combines the convenience of a virtual assistant with the power of ChatGPT to create a more powerful and useful home assistant for answering questions. We will use a speech-to-text module to convert the user's voice input to text. The interaction, capturing the user's speech and responding, will be triggered by a wake word such as "Hey A.I.dan". To ask a question, the user says the wake word and then asks their question. Once they have stopped speaking, A.I.dan will send the transcribed message to ChatGPT and, once it gets ChatGPT's response back, use text-to-speech (TTS) to relay it to the user as well as display it on the screen.
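As a rough illustration of the PC-side flow, here is a minimal Python sketch. It assumes the SpeechRecognition, pyttsx3, and pre-1.0 openai packages purely for illustration; the actual module choices, wake-word handling, and ESP32 networking are design decisions this sketch does not cover.

```python
# Minimal sketch of the PC-side question/answer loop (library choices are
# assumptions, not final design decisions).
import openai                    # ChatGPT API client (pre-1.0 interface)
import pyttsx3                   # offline text-to-speech
import speech_recognition as sr  # speech-to-text front end

openai.api_key = "YOUR_API_KEY"  # placeholder
recognizer = sr.Recognizer()
tts = pyttsx3.init()

def answer_once() -> None:
    """Listen for one question, forward it to ChatGPT, and speak the reply."""
    with sr.Microphone() as source:  # stands in for audio streamed from the ESP32
        audio = recognizer.listen(source)  # returns once the speaker goes quiet
    question = recognizer.recognize_google(audio)  # any STT backend works here
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    reply = response["choices"][0]["message"]["content"]
    print(reply)    # on the real device this text goes to the screen
    tts.say(reply)  # and this audio is streamed back to the speaker
    tts.runAndWait()
```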

## Control Unit
Utilizes an ESP32 microcontroller with a Raspberry Pi RP2040. Software on the microcontrollers interfaces with the audio I/O and the screen, and communicates through Wi-Fi with a PC that handles the ChatGPT API as well as the speech-to-text and text-to-speech modules. The microcontroller will also receive from the PC the information to be output to the screen and speaker.

## Audio I/O
Users will interact with our device with their voice. To facilitate this, both a speaker and a microphone will be added to our PCB, and any post-processing we want to do to clean up the audio and increase recognition accuracy will also be done onboard. Any audio input to the microphone will go to the RP2040 for wake-word detection. Once the wake word is detected, the microcontroller will stream audio to a PC through Wi-Fi. Once the PC returns the ChatGPT output after it has been passed through the text-to-speech module, it is played through the speaker.
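The streaming link itself can be prototyped with plain TCP sockets. The hypothetical Python sketch below shows a PC-side receiver that collects one utterance of raw PCM from the ESP32; the sample format, port, and end-of-utterance framing are all assumptions, and the resources linked below use their own protocols.

```python
# Hypothetical PC-side receiver for the ESP32 audio stream. Assumes the
# firmware sends raw 16-bit mono PCM over TCP and closes the socket when
# the user stops speaking; the real framing may differ.
import socket

HOST, PORT = "0.0.0.0", 8080  # placeholder listen address and port

def receive_utterance() -> bytes:
    """Accept one ESP32 connection and collect PCM bytes until it closes."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.bind((HOST, PORT))
        server.listen(1)
        conn, _addr = server.accept()
        chunks = []
        with conn:
            while True:
                data = conn.recv(4096)
                if not data:  # stream ended: the question is complete
                    break
                chunks.append(data)
    return b"".join(chunks)  # hand this buffer to the speech-to-text module
```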
## Screen
Many of ChatGPT's outputs are not easily understood through an audio description. The best example is code segments, which ChatGPT formats as Markdown code blocks. To provide this particular functionality, a screen shall be added externally to our assistant, connected by SPI to the PCB.
# Criterion For Success
To consider this fully successful, at least 75% of attempted basic interactions should succeed. Basic interactions are questions composed entirely of words included in our pre-trained speech-to-text model.
Code (Markdown) as well as traditional text answers must display and speak properly given a successful question. This can be tested by asking the same question to ChatGPT on a separate device and comparing the results.


# Resources

[Example of ESP32 to PC Audio Streaming](https://github.com/MinePro120/ESP32-Audio-Streamer)

[Example of PC to ESP32 Audio Streaming](https://www.hackster.io/julianfschroeter/stream-your-audio-on-the-esp32-2e4661)

# Remotely Controlled Self-balancing Mini Bike

Team Members:

- Will Chen (hongyuc5)

- Jiaming Xu (jx30)

- Eric Tang (leweit2)

# Problem

Bike-share and scooter-share services have become more popular all over the world in recent years, and this mode of travel is gradually gaining recognition and support. Champaign also has a company that provides this service, called Veo. Short-distance traveling with shared bikes between school buildings and bus stops is convenient. However, since the bikes are parked at random locations around the entire city, when we need to use one we often have to look up where a bike is parked and walk to its location. The obvious solutions are not ideal: collecting and redistributing all of the bikes periodically would be costly and inefficient, and deploying enough bikes to saturate the region would also be very cost-inefficient.

# Solution

We think the best way to solve the above problem is to create a self-balancing, self-driving bike that users can summon to their location. To make this solution possible, we first need to design a bike that can balance itself. After that, we will add a remote-control feature to control the bike's movement. Since demonstrating on a real bike would be complicated, we will design a scaled-down mini bicycle on which to implement our self-balancing and remote-control functions.

# Solution Components

## Subsystem 1: Self-balancing part

The self-balancing subsystem is the most important component of this project: it will use one reaction wheel driven by a brushless DC motor to balance the bike based on readings from the accelerometer/gyroscope.

MPU-6050 accelerometer/gyroscope sensor: it will measure the acceleration and angular velocity of the object it is attached to, from which orientation can be estimated; with this information, we can implement the corresponding control algorithm on the reaction wheel to balance the bike.

Brushless DC motor: it will be used to rotate the reaction wheel. BLDC motors tend to have better efficiency and speed control than other motors.

Reaction wheel: we will design the reaction wheel by ourselves in Solidworks, and ask the ECE machine shop to help us machine the metal part.

Battery: it will be used to power the BLDC motor for the reaction wheel, the stepper motor for steering, and another BLDC motor for movement. We are considering using an 11.1 Volt LiPo battery.

Processor: we will use the STM32F103C8T6 as the brain of this project, running the control algorithms and coordinating the various subsystems.
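To illustrate the control approach, here is a simplified Python sketch of one balance-loop iteration: a complementary filter fuses the accelerometer-derived lean angle with the gyro rate, and a PID controller turns the lean error into a reaction-wheel torque command. The real firmware will run in C on the STM32, and every gain and rate below is a placeholder, not a tuned value.

```python
# Illustrative balance loop: complementary filter + PID on MPU-6050 data.
# All constants are made-up placeholders for this sketch.
ALPHA = 0.98                   # complementary-filter weight (assumed)
DT = 0.005                     # 200 Hz control loop (assumed)
KP, KI, KD = 40.0, 0.5, 1.2    # placeholder PID gains

angle = 0.0        # estimated lean angle (rad)
integral = 0.0     # accumulated error for the I term
prev_error = 0.0   # previous error for the D term

def balance_step(accel_angle: float, gyro_rate: float) -> float:
    """One control step: fuse sensor readings, return a torque command."""
    global angle, integral, prev_error
    # Complementary filter: trust the gyro short-term, the accelerometer long-term.
    angle = ALPHA * (angle + gyro_rate * DT) + (1.0 - ALPHA) * accel_angle
    error = 0.0 - angle            # setpoint is upright (zero lean)
    integral += error * DT
    derivative = (error - prev_error) / DT
    prev_error = error
    # This command would be mapped to reaction-wheel BLDC drive current.
    return KP * error + KI * integral + KD * derivative
```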

## Subsystem 2: Bike movement, steering, and remote control

This subsystem will accomplish bike movement and steering with remote control.

Servo motor for movement: it will be used to rotate one of the wheels to achieve bike movement. Servo motors tend to have better efficiency and speed control than other motors.

Stepper motor for steering: in general, stepper motors have better precision and provide higher torque at low speeds than other motors, which makes them perfect for steering the handlebar.

ESP32 2.4GHz Dual-Core WiFi Bluetooth Processor: it has both WiFi and Bluetooth connectivity so it could be used for receiving messages from remote controllers such as Xbox controllers or mobile phones.
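The exact command protocol is still open. As one possibility, the hypothetical Python sketch below has the controller side (a PC or phone app) send a throttle/steering pair to the ESP32 over UDP; the packet format, address, and port are all assumptions.

```python
# Hypothetical remote-control sender: two little-endian floats per packet.
import socket
import struct

BIKE_ADDR = ("192.168.4.1", 5005)  # placeholder ESP32 address and port

def send_command(throttle: float, steering: float) -> None:
    """Send one command packet; both values are normalized to [-1, 1]."""
    packet = struct.pack("<ff", throttle, steering)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(packet, BIKE_ADDR)

send_command(0.3, -0.1)  # gentle forward, slight left
```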

## Subsystem 3: Bike structure design

We plan to design the bike frame structure with Solidworks and have it printed out with a 3D printer. At least one of our team members has previous experience in Solidworks and 3D printing, and we have access to a 3D printer.

3D Printed parts: we plan to use PETG material to print all the bike structure parts. PETG is known to be stronger, more durable, and more heat resistant than PLA.

PCB: the PCB will contain several of the parts mentioned above, such as the ESP32, MPU-6050, STM32, motor driver chips, and other electronic components.

## Bonus Subsystem 4: Collision check and obstacle avoidance

To detect obstacles, we are considering using HC-SR04 ultrasonic sensors, or cameras such as the OV7725 working with the STM32 and an obstacle-detection algorithm. Based on the readings received from these sensors, the bicycle could turn left or right to avoid an obstacle.
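The avoidance decision itself can stay simple. The Python sketch below illustrates one possible rule, assuming three ultrasonic range readings (left, center, right); the sensor arrangement and threshold are placeholders, and the real logic would run on the microcontroller.

```python
# Sketch of a minimal avoidance rule over assumed range readings (cm).
SAFE_DISTANCE_CM = 50.0  # placeholder stopping threshold

def steer_decision(left_cm: float, center_cm: float, right_cm: float) -> str:
    """Pick a direction from three ultrasonic range readings."""
    if center_cm > SAFE_DISTANCE_CM:
        return "straight"
    # Path ahead is blocked: turn toward the side with more clearance.
    return "left" if left_cm > right_cm else "right"
```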

# Criterion For Success

- The bike can balance itself.

- The bike can recover from small external disturbances and maintain self-balancing.

- The bike's movement and steering can be remotely controlled by the user.
