Name | NetID | Course |
---|---|---|
Tejus Kurdukar | tkurdu2 | ECE 110 |
Taylor Xia | tyxia2 | ECE 110 |
Purpose: To construct a synthetic arm that responds to and interacts with the user. Our initial designs incorporate either computer vision and machine learning algorithms or wireless communication. We are considering accomplishing this with flex sensors attached to the user via a glove or strap of some sort, or, if we take the computer vision approach, a set of cameras positioned to register and recognize different gestures.
- Introduction
- Background Research
We came up with the idea for this project because we wanted to gain experience with wireless communication systems, Arduino, and fabrication. The robotic hand is meant to be a lighthearted, creative project that can interact with the user in multiple ways and allows us to learn and practice these skills.
We've found many previous projects that mimic the motion of a user, but our project aims to go even further beyond! The first step is having the prosthetic simply mimic the user's hand motion. Then, once we are confident in its ability to recognize motion, we can expand its capabilities to interacting and responding rather than just mimicking. For example, the hand could greet the user with a handshake, a wave, or even a classic game of rock-paper-scissors.
The first article we found laid the foundation for our general design: use flex sensors to detect movement in a glove worn by the user, which then sends wireless signals to an Arduino. The Arduino relays the signals to a servo-based pulley system that dictates the movement of the fingers [1]. This would allow the hand to detect the user's motions and react in a variety of ways. The article also describes how to physically build the hand itself using 3D-printed hinges. We also found an article that goes into detail about gesture recognition using computer vision algorithms, such as Canny edge detection and background subtraction [3]. Paired with a video we found about tracking objects using an endoscope camera [2], we may also incorporate some form of computer vision in our project. If we make enough progress, we may even be able to replace the flex sensors altogether, though we may also just use vision to react to some visual stimulus.
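As a rough illustration of the glove side of this design (this is our own sketch, not code from [1]; the pin assignments, the "HAND1" pipe address, the RF24 library for the NRF transceiver, and the 300-700 calibration range are all assumptions that would need tuning against real hardware), each flex sensor is read as an analog voltage, mapped to a servo angle, and broadcast to the prosthetic:

```cpp
// Hypothetical glove-side sketch (Arduino NANO): read five flex sensors and
// send finger angles over an nRF24L01 transceiver using the RF24 library.
// Pin numbers, the pipe address, and the calibration range are placeholders.
#include <SPI.h>
#include <RF24.h>

RF24 radio(9, 10);                       // CE, CSN pins (assumed wiring)
const byte pipeAddress[6] = "HAND1";     // must match the receiver's address
const int flexPins[5] = {A0, A1, A2, A3, A4};

void setup() {
  radio.begin();
  radio.openWritingPipe(pipeAddress);
  radio.setPALevel(RF24_PA_LOW);         // low power is fine at short range
  radio.stopListening();                 // act as the transmitter
}

void loop() {
  int angles[5];
  for (int i = 0; i < 5; i++) {
    int raw = analogRead(flexPins[i]);                           // flex-sensor voltage divider
    angles[i] = constrain(map(raw, 300, 700, 0, 180), 0, 180);   // raw reading -> servo angle
  }
  radio.write(&angles, sizeof(angles));  // broadcast the five finger angles
  delay(20);                             // ~50 updates per second
}
```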
- Design Details
- Block Diagram / Flow Chart
- System Overview
The block diagram above shows the connections between the essential parts of the system, though we may add more sensors, such as a camera, if time allows. The most important sensors, however, will be our flex sensors; these are responsible for communicating with the Arduino NANO and, by extension, the Arduino UNO. First, an ultrasonic sensor checks whether a person is close enough before activating the prosthetic's NRF transceiver to receive instructions. This both saves power and ensures the user is within range for the NRF transceivers to communicate with each other. Once that requirement is fulfilled, the Arduino NANO attached to the user sends instructions to the NRF transceiver on the UNO attached to the prosthetic. These instructions are then translated into commands for the various servos on the prosthetic, moving the fingers.
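A minimal sketch of the prosthetic side under the same assumptions (hypothetical pin assignments, an HC-SR04-style ultrasonic sensor, a 100 cm activation threshold we chose for illustration, and the RF24 library for the NRF transceivers) could gate listening on the ultrasonic reading and then map received angles onto the finger servos:

```cpp
// Hypothetical prosthetic-side sketch (Arduino UNO): only listen for finger
// angles when the ultrasonic sensor sees someone within ~1 m, then drive the servos.
// Pin numbers, the distance threshold, and the pipe address are placeholders.
#include <SPI.h>
#include <RF24.h>
#include <Servo.h>

RF24 radio(9, 10);                       // CE, CSN pins (assumed wiring)
const byte pipeAddress[6] = "HAND1";     // must match the glove transmitter
const int trigPin = 7, echoPin = 8;      // ultrasonic sensor pins
const int servoPins[5] = {3, 5, 6, A0, A1};
Servo fingers[5];

long readDistanceCm() {
  digitalWrite(trigPin, LOW);  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH); delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  long echoUs = pulseIn(echoPin, HIGH, 30000);   // time out after 30 ms
  if (echoUs == 0) return 999;                   // no echo: treat as out of range
  return (long)(echoUs * 0.034 / 2);             // microseconds -> centimeters
}

void setup() {
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  for (int i = 0; i < 5; i++) fingers[i].attach(servoPins[i]);
  radio.begin();
  radio.openReadingPipe(1, pipeAddress);
}

void loop() {
  if (readDistanceCm() > 100) {          // nobody close enough: stay idle to save power
    radio.stopListening();
    delay(250);
    return;
  }
  radio.startListening();                // user in range: accept instructions
  if (radio.available()) {
    int angles[5];
    radio.read(&angles, sizeof(angles));
    for (int i = 0; i < 5; i++) fingers[i].write(angles[i]);  // move each finger
  }
}
```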
- Parts
Note: We can get the Arduinos considerably cheaper if we go with a generic brand and save some money. We may purchase an endoscope camera later in the semester.
- Possible Challenges
Proper integration of the electronics, writing the code for the transceivers, assembling the arm, wiring the servo-pulley system properly, and getting the circuit design right are all challenges we will likely face. If we decide to add computer vision later, that's a whole other beast.
- References
[1] M. Kilic, "How to Make Wireless / Gesture Control Robotic Hand," Hackster.io, 2018. [Online]. Available: https://www.hackster.io/mertarduino/how-to-make-wireless-gesture-control-robotic-hand-cc7d07. [Accessed: 15-Feb-2020].
[2] C. Macintosh, Computer Vision with Processing, Arduino and Robotic Arm, 2017. [Online]. Available: https://www.youtube.com/watch?v=usZoV4OlgfU. [Accessed: 15-Feb-2020].
[3] Z. Xian and J. Yeo, "Hand Recognition and Gesture Control Using a Laptop Web-camera," Stanford University. [Online]. Available: https://web.stanford.edu/class/cs231a/prev_projects_2016/CS231A_Project_Final.pdf. [Accessed: 15-Feb-2020].
- Final Report and Demo Video