Project
| # | Title | Team Members | TA | Documents | Sponsor |
|---|---|---|---|---|---|
| 85 | Modular Desktop Audio Mixer Control | Aarushi Sharma, Dylan Moon | Yulei Shen | proposal1.pdf | |
# Modular Desktop Audio Mixer Control

Team Members:
- Aarushi Sharma (sharma93)
- Dylan Moon (dylanm5)

# Problem

Modern desktop computers have generally revolved around a set paradigm for human-computer interaction: the keyboard and the mouse. However, analog control surfaces can make it easier to interact with the many analog-like controls present in a computer. Take software volume mixers, for example. They ship with both Windows and Linux-based operating systems, yet they are somewhat difficult to access: adjusting the volume of an individual application usually means digging through multiple menus or opening a separate application. Power users might have music playing in the background, a call in the foreground, and application audio on top of that, all of which needs to be adjusted individually so that important details can be heard. Gamers, who frequently run full-screen applications, have to minimize their game and go searching for the volume mixer just to turn down a loud voice call in the background; the time spent doing so could be the difference between winning and losing their current match.

# Solution

We propose a modular audio control panel that sits on the user's desk and can be physically manipulated to change the volume of individual applications quickly and easily. Since the controls we want to target are analog (volume controls for individual applications), the control surfaces the user interacts with will be linear sliders. This allows quick but also granular control of the volume levels of the various applications on the computer.

The system consists of two types of components. One is a base station that connects to the computer, processes inputs (and possibly outputs), and controls and powers the other modules. The other is the fader modules with the sliders. We plan to design the system so that the slider modules can be daisy-chained, letting the user choose how many sliders to include in their setup. More details can be found in the Solution Components section.

If time permits, we also want to explore and implement audio output and post-processing through the device. A DSP chip would process the audio output from the system, which we want to use to implement an equalizer mode: the application volume controls temporarily switch to equalizer band controls, allowing users to dynamically adjust the sound profile. Gamers are one target user base for this feature: adjusting the audio profile on the fly makes it easier to listen for other players' footsteps by turning up the frequency range footsteps reside in, giving them an advantage.

# Solution Components

## Subsystem 1: Base Module

The base module connects to the computer via USB, connects to external power if the USB supply is insufficient, and communicates with the other modules on the daisy chain to send and receive data. For this, the base module will have a microcontroller (ESP32-S3) and bus transceivers (required if we use CAN; we are evaluating whether I2C would be sufficient). Pogo pins will interface with the neighboring module. If we add DSP output, we will additionally need a DSP chip (ADAU1701) to implement the EQ and a push button to switch control modes.
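If I2C turns out to be sufficient, the base-module firmware largely reduces to polling each fader module and forwarding position changes to the host. The following is a minimal sketch in Arduino-style C++ for the ESP32-S3; the module addresses, the 2-byte position register, and the line-based USB serial protocol are placeholder assumptions, not a finalized design.

```cpp
// Minimal sketch: base module polls each fader module over I2C and
// forwards position changes to the host over USB CDC (Serial).
// Addresses, register layout, and message format are placeholders.
#include <Wire.h>

constexpr uint8_t FIRST_FADER_ADDR = 0x10;  // hypothetical address of the first module
constexpr uint8_t MAX_FADERS       = 8;     // upper bound on daisy-chained modules

uint16_t lastPosition[MAX_FADERS] = {0};

void setup() {
  Serial.begin(115200);  // USB CDC link to the host program
  Wire.begin();          // ESP32-S3 acts as the I2C master
}

void loop() {
  for (uint8_t i = 0; i < MAX_FADERS; i++) {
    uint8_t addr = FIRST_FADER_ADDR + i;

    // Ask the fader module for its current position (2 bytes, 0..1023).
    if (Wire.requestFrom(addr, (uint8_t)2) != 2) {
      continue;  // no module present at this address
    }
    uint16_t pos = ((uint16_t)Wire.read() << 8) | Wire.read();

    // Only report changes to keep the USB link quiet.
    if (pos != lastPosition[i]) {
      lastPosition[i] = pos;
      // Simple line-based protocol: "F<index>:<0-1023>"
      Serial.printf("F%u:%u\n", (unsigned)i, (unsigned)pos);
    }
  }
  delay(10);  // ~100 Hz polling keeps latency well under the 0.5 s target
}
```

Polling at roughly 100 Hz leaves comfortable headroom for the 0.5-second success criterion while still allowing motor-position updates to flow in the other direction.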
## Subsystem 2: Fader Modules

The fader modules physically attach to the base module and to each other via magnets. The electrical connection is made via pogo pins on the sides of each module. Each fader module communicates with the base module over the bus protocol to send fader position data and receive updates about where to set the fader position. For this, each fader module will need a microcontroller (e.g. ATtiny1614); these MCUs can be less powerful than the one in the base because they only read values from the fader and send them over the bus (or the reverse, for motorized position updates). A minimal fader-side firmware sketch appears after the success criteria below. For the motorized fader itself, Behringer sells replacement fader modules that we can repurpose (X32MOTORFADER). We will also need buttons to act as mute and solo, which mute the application and mute all other applications, respectively.

## Subsystem 3: Integration with OS

Windows does not expose the volume mixer controls to hardware, so a program running on the OS is required to receive values from the hardware and apply them to the mixer. For this, we plan to read input over USB and use Windows APIs to change the mixer values; a host-side sketch also appears after the success criteria. If we can find a Linux-based device to test on, we would also like to support the PipeWire/PulseAudio volume mixer.

# Criterion For Success

With multiple fader modules connected (e.g. 2), within 0.5 seconds:

- Bringing a fader all the way down sets the volume of its respective application to 0.
- Bringing a fader all the way up sets the volume of its respective application to 100.
- Bringing a fader to the halfway mark sets the volume to around 50 (±5%).
- The mute and solo buttons perform their respective functions (mute the respective application, mute all other applications) when pressed.
- If the DSP is implemented: switching to equalizer control and modifying the equalizer results in an audible frequency-band change within 2.5 seconds.
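For the fader modules, a complementary sketch (again assuming I2C, with the same placeholder address and 2-byte position format as the base-module sketch above) would make each module an I2C slave that answers the base module's polls with its current slider position. The pin assignment is hypothetical, and the address would in practice be assigned per module.

```cpp
// Minimal sketch: fader module acts as an I2C slave that reports its
// slider position when polled by the base module (megaTinyCore-style
// Arduino C++ for an ATtiny1614). Address and pin are placeholders.
#include <Wire.h>

constexpr uint8_t MODULE_ADDR = 0x10;  // assigned per module in the real design
constexpr uint8_t FADER_PIN   = A1;    // placeholder analog input for the fader wiper

volatile uint16_t faderPosition = 0;

void onRequest() {
  // Base module asked for 2 bytes: latest 10-bit ADC reading, MSB first.
  Wire.write((uint8_t)(faderPosition >> 8));
  Wire.write((uint8_t)(faderPosition & 0xFF));
}

void setup() {
  pinMode(FADER_PIN, INPUT);
  Wire.begin(MODULE_ADDR);   // join the bus as a slave
  Wire.onRequest(onRequest);
}

void loop() {
  uint16_t sample = analogRead(FADER_PIN);  // 0..1023
  noInterrupts();
  faderPosition = sample;  // guard against a torn 16-bit read in the ISR
  interrupts();
  delay(5);
}
```

Mute/solo button handling and driving the motorized fader toward a commanded position would extend this same structure rather than requiring a different one.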
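On the host side, per-application volume on Windows is reachable through the Core Audio session APIs (IAudioSessionManager2 / ISimpleAudioVolume). The sketch below is a minimal illustration of that path, assuming the default render endpoint and an already-known target process ID; the PID, the fader value, and how values arrive over USB serial are placeholders, and error-path cleanup is omitted for brevity.

```cpp
// Minimal sketch of the host-side program: set the volume of the audio
// sessions owned by one process using the Windows Core Audio session APIs.
// Link against ole32.lib; compile as a console app.
#include <windows.h>
#include <mmdeviceapi.h>
#include <audiopolicy.h>
#include <audioclient.h>

// Set every audio session owned by `targetPid` to `level` (0.0 .. 1.0).
bool SetAppVolume(DWORD targetPid, float level) {
    IMMDeviceEnumerator* enumerator = nullptr;
    if (FAILED(CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                                __uuidof(IMMDeviceEnumerator), (void**)&enumerator)))
        return false;

    IMMDevice* device = nullptr;
    if (FAILED(enumerator->GetDefaultAudioEndpoint(eRender, eConsole, &device)))
        return false;  // cleanup on error paths omitted for brevity

    IAudioSessionManager2* manager = nullptr;
    if (FAILED(device->Activate(__uuidof(IAudioSessionManager2), CLSCTX_ALL,
                                nullptr, (void**)&manager)))
        return false;

    IAudioSessionEnumerator* sessions = nullptr;
    if (FAILED(manager->GetSessionEnumerator(&sessions)))
        return false;

    int count = 0;
    sessions->GetCount(&count);
    bool changed = false;

    for (int i = 0; i < count; i++) {
        IAudioSessionControl* control = nullptr;
        if (FAILED(sessions->GetSession(i, &control))) continue;

        IAudioSessionControl2* control2 = nullptr;
        control->QueryInterface(__uuidof(IAudioSessionControl2), (void**)&control2);

        DWORD pid = 0;
        control2->GetProcessId(&pid);
        if (pid == targetPid) {
            ISimpleAudioVolume* volume = nullptr;
            if (SUCCEEDED(control2->QueryInterface(__uuidof(ISimpleAudioVolume),
                                                   (void**)&volume))) {
                volume->SetMasterVolume(level, nullptr);  // 0.0 = silent, 1.0 = full
                volume->Release();
                changed = true;
            }
        }
        control2->Release();
        control->Release();
    }

    sessions->Release();
    manager->Release();
    device->Release();
    enumerator->Release();
    return changed;
}

int main() {
    CoInitialize(nullptr);
    DWORD pid = 12345;     // placeholder: PID of the target application
    int faderValue = 512;  // would come from the base module over USB serial
    SetAppVolume(pid, faderValue / 1023.0f);
    CoUninitialize();
    return 0;
}
```

A real host program would enumerate sessions once, let the user map each fader index to an application, and then update the matching ISimpleAudioVolume whenever a new fader value arrives over the serial link.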