# Wireless EMG and IMU Sleeve for Hand Gesture Recognition

TA: Michael Molter

# Team Members:

- Jameson Koonce (jrk8)
- Diqing Zuo (diqingz2)
- Harbin Li (hdli2)

# Problem

As the Virtual Reality (VR) space has advanced, the technology has found practical applications in education, engineering, utilities maintenance, and entertainment ([source](https://pmc.ncbi.nlm.nih.gov/articles/PMC9517547/#sec4-ijerph-19-11278)). However, the technology is not yet immersive enough: the majority of users experience some level of cybersickness, characterized by discomfort, during use ([source](https://pmc.ncbi.nlm.nih.gov/articles/PMC8886867/#Sec1)). Part of this loss of immersion can be attributed to how VR consoles track the user's hands: controller-based solutions break immersion, while computer-vision solutions are inaccurate in many hand and arm positions. There needs to be a more effective way to immerse a VR user's arms and hands in a virtual environment.

# Solution

We are looking to create a system that tracks arm movements and recognizes hand gestures for more immersive Virtual Reality (VR) environments. Specifically, we will develop a wireless sleeve lined with Electromyography (EMG) and Inertial Measurement Unit (IMU) sensors to capture electrical signals, orientation, and acceleration from a user's arm, and use on-device machine learning to classify individual finger gestures and track arm movement. This system will be more immersive than existing solutions: the user's hands remain free in the VR environment, and arm motion is tracked even when the arm is out of view. The EMG and IMU sensors sit on a physical sleeve connected to a wireless module, so the sleeve can act as a controller for external devices while leaving the user physically unconstrained. By default, data is processed by our on-sleeve ML framework for classification and tracking, but raw data can also be processed off-sleeve for greater computational capacity at the cost of added latency.

# Solution Components

## Sensor Array System

Description: Array of sensors responsible for collecting and preprocessing the analog signals for use by the processing unit.

- Dry sEMG Electrodes: a large array of dry electrodes for recognizing movements of the hand.
- IMUs (ICM-20948 9-axis IMU): accelerometer, gyroscope, and magnetometer used to track the orientation and movement of the arm (see the orientation-filter sketch after this list).
- Op-amp denoising (OPA4277UA): operational amplifiers for signal conditioning.
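
One common way to fuse the gyroscope and accelerometer readings into an orientation estimate is a complementary filter. Below is a minimal Python sketch under that assumption; the ICM-20948 driver interface is omitted, `gyro_dps` and `accel_g` stand in for raw readings, and the axis conventions depend on how the sensor is mounted on the sleeve:

```python
import math

def complementary_filter(pitch, roll, gyro_dps, accel_g, dt, alpha=0.98):
    """Fuse gyro and accelerometer readings into pitch/roll estimates (degrees).

    gyro_dps: (gx, gy, gz) angular rates in degrees/second
    accel_g:  (ax, ay, az) accelerations in units of g
    """
    gx, gy, _ = gyro_dps
    ax, ay, az = accel_g

    # High-frequency component: integrate the angular rate.
    pitch_gyro = pitch + gx * dt
    roll_gyro = roll + gy * dt

    # Low-frequency component: absolute angles from the gravity vector.
    pitch_acc = math.degrees(math.atan2(ay, math.sqrt(ax * ax + az * az)))
    roll_acc = math.degrees(math.atan2(-ax, az))

    # Blend: trust the gyro short-term, the accelerometer long-term.
    pitch = alpha * pitch_gyro + (1 - alpha) * pitch_acc
    roll = alpha * roll_gyro + (1 - alpha) * roll_acc
    return pitch, roll
```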

## ML-Based Gesture Recognition (Software)

Description: Processes the collected EMG data with ML models to classify hand/finger/arm gestures in real time.

Components:
- Microcontroller (STM32WB55 series MCU): responsible for interfacing with the EMG sensors, preprocessing raw signals, and overall system control.
- ML Framework: TensorFlow Lite for Microcontrollers ([tflite-micro examples on GitHub](https://github.com/tensorflow/tflite-micro/tree/main/tensorflow/lite/micro/examples)), optimized for real-time, low-power inference. The Ninapro database is a possible external dataset.
- Edge processing module (only needed if real-time inference latency requirements are strict): executes the ML model directly on-device for low-latency inference (nRF52840 SoC).
- Training the model on EMG signal data:
We are training our model solely on EMG data we collect ourselves from a single user, focusing on a limited number of gestures first to demonstrate feasibility. The Ninapro dataset could serve as a reference for understanding gesture patterns but would currently not be used directly in training.
The training and optimization of the ML model would be divided into the following parts:
1. Data Collection: Data will be collected from a single user, focusing on a small subset of predefined gestures, then labelled and used to train our model.
2. Feature Extraction: Extract relevant features from the EMG signals, including amplitude, frequency-domain characteristics, and time-domain patterns.
3. Model Architecture: Use a lightweight deep learning model. We are considering two primary approaches, CNN and RNN, and will focus first on a CNN because of its lower processing-power and memory requirements.
Based on the above, we train the ML model and then convert the trained model into a TensorFlow Lite model, as sketched below.
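
For concreteness, here is a minimal Keras sketch of the kind of lightweight 1D CNN we have in mind, together with the TensorFlow Lite conversion step. The window length, electrode count, and layer sizes are placeholder assumptions rather than final design values:

```python
import tensorflow as tf

NUM_CHANNELS = 8   # assumed electrode count
WINDOW = 200       # assumed samples per classification window
NUM_GESTURES = 6   # matches our 6-gesture success criterion

# Lightweight 1D CNN, small enough to quantize for an MCU target.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, NUM_CHANNELS)),
    tf.keras.layers.Conv1D(16, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(NUM_GESTURES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_windows, train_labels, epochs=..., validation_split=0.2)

# Convert to a TensorFlow Lite flatbuffer for tflite-micro deployment.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
with open("emg_cnn.tflite", "wb") as f:
    f.write(converter.convert())
```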
- Classification of EMG signals (text/commands):
We first preprocess the raw EMG signals, applying filtering techniques to remove noise and enhance signal quality. Extracted features such as signal amplitude, frequency content, and time-domain patterns are analyzed to identify gesture characteristics. The processed data is then fed into our trained ML model, which classifies the EMG signals into specific gestures; these are in turn converted into text-based commands, control signals, etc.
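
A sketch of this preprocessing and feature-extraction stage, assuming a typical surface-EMG setup (1 kHz sampling, 20–450 Hz passband) and the classic time-domain features named above:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # assumed sampling rate, Hz

def bandpass(emg, lo=20.0, hi=450.0, order=4):
    """Suppress motion artifacts and high-frequency noise in raw sEMG."""
    b, a = butter(order, [lo / (FS / 2), hi / (FS / 2)], btype="band")
    return filtfilt(b, a, emg)

def features(window):
    """Classic time-domain EMG features for one channel window."""
    mav = np.mean(np.abs(window))               # mean absolute value
    rms = np.sqrt(np.mean(window ** 2))         # root mean square
    zc = np.sum(np.diff(np.sign(window)) != 0)  # zero crossings
    wl = np.sum(np.abs(np.diff(window)))        # waveform length
    return np.array([mav, rms, zc, wl])
```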

## Wireless module

Description: Manages real-time communication between the wearable device and external systems, enabling efficient transmission of classified gesture data for further processing or user interaction.

Components:
- Wireless Protocol: BLE, for efficient, low-power wireless communication.
- Integrated BLE MCU: the STM32WB55 includes a built-in BLE radio, so no separate radio chip is needed (see the host-side receiver sketch below).
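
On the external-device side, here is a minimal sketch using the cross-platform `bleak` Python library to subscribe to a gesture-notification characteristic. The device name, characteristic UUID, and one-byte payload format are hypothetical placeholders for whatever we end up defining in the STM32WB55 GATT table:

```python
import asyncio
from bleak import BleakClient, BleakScanner

GESTURE_CHAR_UUID = "0000abcd-0000-1000-8000-00805f9b34fb"  # hypothetical UUID

def on_gesture(_char, data: bytearray):
    # Assumed payload: one byte carrying the classified gesture index.
    print(f"gesture id: {data[0]}")

async def main():
    device = await BleakScanner.find_device_by_name("EMG-Sleeve")  # hypothetical name
    if device is None:
        raise RuntimeError("sleeve not found")
    async with BleakClient(device) as client:
        await client.start_notify(GESTURE_CHAR_UUID, on_gesture)
        await asyncio.sleep(60.0)  # stream gestures for a minute

asyncio.run(main())
```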

## Physical component (wearable form)

Description: Physical wearable sleeve that houses the sensors, electronics, and power.

- Nylon-spandex sleeve with electrode cutouts
- PCB and electronic mounts
- Li-Po battery and attachment

# Criterion For Success

- Reliability/consistency in discerning gestures
  - Show viability by implementing the system on a single user.
  - Achieve 95% accuracy in recognizing a set of 6 gestures.
  - Demonstrate wearability for extended periods (1+ hours) without significant signal degradation (maintaining 90%+ accuracy).
  - Achieve the same or similar accuracy across wearing sessions, with minimal to no calibration.
- Wireless capability
  - Demonstrate wireless operation, clearly showing gesture recognition and arm-tracking results on an external device.
- Latency
  - Achieve end-to-end latency below 200 ms.


# Schnorr Identification Protocol Key Fob

Team Members:

- Michael Gamota (mgamota2)

- Vasav Nair (vasavbn2)

- Pedro Ocampo (pocamp3)

# Problem

Current car fobs are susceptible to several types of attacks. In a rolling-jam attack, an attacker jams and records a valid "unlock" signal for later replay. Cars with passive keys/cards can be stolen using relay attacks. Since a car can be the most expensive item someone owns, it is unreasonable that they can be stolen so discreetly by hacking the fob/lock combination.

# Solution

By leveraging public-key cryptography, specifically the Schnorr identification protocol, it is possible to create a key fob that is not susceptible to either attack (rolling jam or relay) and that reveals no information about the fob's private key if the signal is intercepted.
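
For reference, one round of the interactive protocol looks like the sketch below (toy group parameters chosen for readability; a real fob would use standardized, much larger parameters and the hardware RNG described later):

```python
import secrets

# Toy Schnorr group: prime p with q | p - 1, and g generating the
# order-q subgroup mod p. Real deployments use much larger parameters.
p, q = 48731, 443            # p - 1 = 110 * 443
g = pow(2, (p - 1) // q, p)  # g has order q (checked below)
assert g != 1

# Fob (prover): private key x, public key y registered with the lock.
x = secrets.randbelow(q - 1) + 1
y = pow(g, x, p)

# 1. Commitment: fob picks a fresh random k and sends t = g^k mod p.
k = secrets.randbelow(q - 1) + 1
t = pow(g, k, p)

# 2. Challenge: lock (verifier) replies with a fresh random c.
c = secrets.randbelow(q)

# 3. Response: fob sends s = k + c*x mod q; s leaks nothing about x
#    because k is random and used only once.
s = (k + c * x) % q

# Verification: lock accepts iff g^s == t * y^c (mod p).
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("fob accepted")
```

Because the lock issues a fresh random challenge on every attempt, a recorded exchange cannot be replayed later, which is what defeats the rolling-jam attack.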

# Solution Components

# Key Fob

## Subsystem 1

Random number generation - We will use a transistor circuit to generate random numbers. This is required by the Schnorr protocol to ensure security.
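
Raw bits from a transistor noise source are typically biased, so some conditioning is usually applied before the bits are used as protocol nonces. One simple option (our assumption here; the exact conditioning step is not specified above) is a von Neumann extractor:

```python
def von_neumann(raw_bits):
    """Debias a raw bit stream: 01 -> 0, 10 -> 1, 00/11 -> discard."""
    out = []
    for b0, b1 in zip(raw_bits[::2], raw_bits[1::2]):
        if b0 != b1:
            out.append(b0)
    return out

# Example: von_neumann([1, 1, 0, 1, 1, 0, 0, 0]) == [0, 1]
```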

## Subsystem 2

Microcontroller - The MCU will perform all the computation needed to construct the protocol messages. We will likely use an ATtiny MCU so we can use the Arduino IDE for programming, though some group members have experience with the STM32 family, so that is another option.

## Subsystem 3

Power - We plan on using either a 5V battery or 3.3V battery with a boost converter to power the fob.

## Subsystem 4

Wireless Communication - We plan on using the 315 MHz frequency band which is currently used by some car fobs. We will need a transmitter and receiver, since the protocol is interactive.

# Lock

## Subsystem 1

Random number generation - We will use a transistor circuit to generate random numbers. This is required by the Schnorr protocol to ensure security.

## Subsystem 2

Microcontroller - This MCU will also perform all the computation needed to construct the protocol messages. We will likely use an ATtiny MCU so we can use the Arduino IDE for programming, though some group members have experience with the STM32 family, so that is another option. This MCU will need a PWM output to control the lock.

## Subsystem 3

Linear Actuator - We plan on using a linear actuator as a deadbolt lock for demonstration purposes.

## Subsystem 4

Wireless Communication - We plan on using the 315 MHz frequency band which is currently used by some car fobs. We will need a transmitter and receiver, since the protocol is interactive.

## Subsystem 5

Power - This subsystem will also likely require 5V, but power sourcing is not an issue since this system would be connected to the car battery. During a demo it would be acceptable to have it plugged into a power supply or a barrel-jack connector from an AC-DC converter.

# Criterion For Success


Our first criterion for success is a reasonably sized fob; there is some concern about the fob's power storage and consumption.

The next criterion for success is communication between the fob and the lock. This will be the first milestone in our design: a message sent from one MCU must be properly received by the other, which we can confirm in the debug terminal.

Once we are sure that we can communicate between the fob and the lock, we will implement the Schnorr protocol on the two systems, with the fob acting as the prover and the lock as the verifier. If the Schnorr implementation is correct, the fob whose public key is associated with full privileges will always be able to unlock the lock.
