Project 37: Visual chatting and Real-time acting Robot

Team Members: Haozhe Chi, Jiatong Li, Minghua Yang, Zonghai Jing
Sponsor: Gaoang Wang
Documents: design_document2.pdf, final_paper3.pdf, proposal3.pdf, video1.mp4
Group members:
Haozhe Chi, haozhe4
Minghua Yang, minghua3
Zonghai Jing, zonghai2
Jiatong Li, jl180
Problem:
With the rise of large language models (LLMs), large visual-language models (LVLMs) have achieved great success in recent AI development. However, configuring an LVLM system for a robot and making all of the hardware work well around that system remains a major challenge. We aim to design an LVLM-based robot that can react to multimodal inputs.
Solution overview:
We aim to deliver:
- an LVLM system (software);
- a robot arm for actions such as grabbing objects (hardware);
- a mobile base for moving through the environment (hardware);
- a camera for real-time visual input (hardware);
- a laser tracker for indicating the target object (hardware);
- audio equipment for audio input and output (hardware).
Solution components:
LVLM system:
We will deploy a BLIP-2-based AI model for visual-language processing. We will incorporate the strengths of several recent visual-language models, including LLaVA, VideoChat, and Video-LLaMA, to design a better real-time visual-language processing system. This system should support real-time visual chatting with less object hallucination.
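As a rough illustration, a BLIP-2 model can be queried through the Hugging Face transformers library; the checkpoint name, prompt, and image file below are placeholder choices, not our final configuration.

```python
# Sketch: visual question answering with BLIP-2 via Hugging Face transformers.
# The checkpoint, prompt, and image file are illustrative placeholders.
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=dtype
).to(device)

image = Image.open("frame.jpg")  # a frame captured by the robot's camera
prompt = "Question: what object is on the table? Answer:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(device, dtype)

out = model.generate(**inputs, max_new_tokens=30)
print(processor.batch_decode(out, skip_special_tokens=True)[0].strip())
```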
Robot arm and wheels:
We will use the ROS environment to control robot movements. We will apply to use the robot arms in the ZJUI ECE 470 labs and buy wheels for locomotion; we may use a four-wheel design or a tracked design.
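As a sketch of the control path, the snippet below drives the base by publishing velocity commands with rospy; the /cmd_vel topic name and the speed values are assumptions that depend on the base we end up building.

```python
# Sketch: driving the mobile base in ROS by publishing Twist messages.
# The /cmd_vel topic name and speeds are assumptions about the final base.
import rospy
from geometry_msgs.msg import Twist

def drive_forward(pub, speed=0.2, duration=2.0):
    cmd = Twist()
    cmd.linear.x = speed                      # forward velocity, m/s
    rate = rospy.Rate(10)                     # publish at 10 Hz
    end_time = rospy.Time.now() + rospy.Duration(duration)
    while rospy.Time.now() < end_time and not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()
    pub.publish(Twist())                      # zero velocity = stop

if __name__ == "__main__":
    rospy.init_node("wheel_driver")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)
    rospy.sleep(1.0)                          # let subscribers connect
    drive_forward(pub)
```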
Camera:
We will configure cameras for real-time image input. 3D reconstruction may be needed, depending on our LVLM system design.
If multi-view input is needed, we will design a better camera configuration.
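A minimal single-camera capture loop with OpenCV might look like the following; the device index and resolution are placeholders for whatever camera configuration we settle on.

```python
# Sketch: pulling real-time frames from a USB camera with OpenCV.
# Device index 0 and the 640x480 resolution are placeholder choices.
import cv2

cap = cv2.VideoCapture(0)                      # first attached camera
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

while True:
    ok, frame = cap.read()                     # BGR image as a NumPy array
    if not ok:
        break
    # here each frame would be handed to the LVLM system
    cv2.imshow("robot camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):      # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```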
Audio processing:
We will use two audio processing systems: voice recognition and text-to-speech generation, responsible for audio input and output respectively. We will use audio broadcast components so the robot can talk.
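As a rough sketch of the two audio paths, the snippet below uses the speech_recognition and pyttsx3 packages; the final system may well use different recognition and synthesis back ends.

```python
# Sketch of both audio paths: speech -> text, then text -> speech.
# speech_recognition and pyttsx3 are assumed stand-ins for the final stack.
import speech_recognition as sr
import pyttsx3

recognizer = sr.Recognizer()
with sr.Microphone() as mic:                   # audio input path
    recognizer.adjust_for_ambient_noise(mic)
    audio = recognizer.listen(mic)
text = recognizer.recognize_google(audio)      # speech -> text (online API)
print("Heard:", text)

engine = pyttsx3.init()                        # audio output path
engine.say("You said: " + text)                # text -> speech
engine.runAndWait()
```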
Criterion for success:
The robot should provide voice recognition, laser tracking, real-time visual chatting, a multimodal processing system, identification of a specified object, moving to and grabbing that object, and a multi-view camera configuration. All hardware parts should cooperate well in the final demo: not only must every single component function well, the components must also combine to perform more advanced behaviors. For instance, the robot should be able to move toward a specified object while chatting with a human.

Featured Project

VoxBox Robo-Drummer

Our group proposes to create a robot drummer which would respond to human voice "beatboxing" input, via a conventional dynamic microphone, and translate that input into the corresponding drum-hit performance. For example, if the human user issues a bass-kick voice sound, the robot will recognize it and strike the bass drum, and likewise for the hi-hat/snare and the clap. Our design will cover at least 3 different drum hit types (bass hit, snare hit, clap hit), and respond with minimal latency.

This would involve amplifying the analog signal (as dynamic mics produce fairly low-level signals), which would then be sampled by a dsPIC33F DSP/MCU (or a comparable chipset) and processed for trigger-event recognition. This entails applying Short-Time Fourier Transform (STFT) analysis to provide spectral-content data to our event-detection algorithm (i.e. recognizing the "control" signal from the human user). The MCU functionality of the dsPIC33F would be used for relaying the trigger commands to the actuator circuits controlling the robot.
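As a quick prototype of the detection idea (in Python/NumPy rather than the fixed-point dsPIC33F code we would actually ship), the sketch below finds energy onsets in the STFT and classifies each hit by its spectral centroid; the frame sizes, threshold, and band edges are illustrative guesses.

```python
# Prototype of STFT-based trigger detection; all constants are guesses
# to be tuned against recorded beatbox samples.
import numpy as np
from scipy.signal import stft

FS = 16000                      # sample rate, Hz

def detect_hits(samples):
    """Return (time, label) pairs for detected voice-percussion events."""
    f, t, Z = stft(samples, fs=FS, nperseg=256, noverlap=128)
    mag = np.abs(Z)                                    # magnitude spectrogram
    energy = mag.sum(axis=0)                           # per-frame energy
    onsets = np.where(np.diff(energy) > 5.0 * energy.mean())[0] + 1  # energy jumps
    hits = []
    for i in onsets:
        frame = mag[:, i]
        centroid = (f * frame).sum() / (frame.sum() + 1e-12)  # spectral centroid, Hz
        if centroid < 400:
            label = "bass"       # low-frequency "boom"
        elif centroid < 2000:
            label = "snare"      # mid-band burst
        else:
            label = "clap"       # bright, high-frequency content
        hits.append((t[i], label))
    return hits
```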

The robot in question would be small, about the size of a ventriloquist dummy. The "drum set" would be scaled accordingly (think pots and pans, like a child would play with). The actuators would likely be based on solenoids, as opposed to motors.

Beyond these minimal capabilities, we would add analog pre-filtering of the input audio signal and amplification of the drum hits as bonus features, if the development and implementation process goes better than expected.