Lab

Recommended Tools

In addition to the resources that the course provides, students may find it useful to obtain the tools below:

  • wire cutter
  • wire stripper
  • needle nose pliers
  • screwdrivers
  • hex set (ball ends)
  • electrical tape
  • small scissors
  • a small file

Lab Resources

The Senior Design Lab is located at ZJUI room D225. This lab provides you access to various equipment, some spare parts, computers, and a space to work on your senior design project. In addition, course staff will make themselves available in this lab during their office hours to provide guidance on your project throughout the semester. Your weekly meetings with your TA will also likely be at this location.

It is our intention that this laboratory space provides you and your team with all the tools you would need to develop and test your project (within reason, of course!). If there is something that you require in the lab to complete your project that does not exist in the lab, talk to your TA and we will see how we can solve your issue.

Lab Rules

There are two overarching rules for working in the Senior Design Lab (and, frankly, any shared lab). First, be safe, and second, be courteous. Lab privileges will be revoked if you fail to complete the required laboratory safety training or if you break any of the lab rules.

Breaking the rules or exhibiting bad laboratory etiquette will lead to a loss of points and/or revocation of laboratory access.

Lab Bench Reservations

We do not expect the lab to become so crowded that finding a lab bench to work at becomes difficult. However, if this does happen (particularly in semesters with very high enrollment), we will move to a Lab Bench Reservation system. Reserving a bench guarantees that spot for you; however, each team may only book one lab bench at a time, and for a maximum of 4 hours per day.

If the lab needs to move to a reservation-based system, you will be notified ahead of time.

A few ground rules:

  1. You may use a lab bench (a) during a time for which you have it reserved or (b) any time during which it is not reserved in the system (on a first-come-first-served basis). However, if you are working at an unreserved bench and somebody reserves it through the online system, the group with the reservation gets the bench.
  2. There is a limit on the amount of time for which you can reserve benches in ZJUI D225. The limit is currently 4 hours of total bench time in the lab per group per day (e.g., 2 hours at Bench A and 2 hours at Bench B would max out your team's reservations for the day). While this may seem restrictive, keep in mind that the course serves more than 30 groups in a typical semester and the lab has only 14 benches. Also keep in mind that you can work at a bench whenever it is unreserved.
  3. Some lab benches have specialized equipment at them, such as digital logic analyzers. Try to reserve the lab bench that has the equipment that you need.
  4. Cancel reservations that you will not need as soon as possible to give other groups a chance to reserve the lab bench. You can cancel a reservation up to 1 hour before its start time and not have it count against your daily allotment.
  5. Conflicts and reports of people not following these rules should be sent to your TA, with the course faculty copied.
  6. Above all, be courteous. Especially near the end of the semester, the lab will be more crowded and many teams are stressed. Clean up the lab bench when you are done with it. Start and end your sessions on time. Be patient and friendly to your peers and try to resolve conflicts professionally. If we notice empty lab benches that have been reserved, we will cancel your reservations and limit your ability to reserve lab benches in the future. Similarly, do not reserve more time than you will need. If we notice that you are frequently canceling reservations, we will limit your ability to reserve lab benches in the future. Finally, do not try to exploit the system and reserve a bench for 30 minutes every hour for eight hours. We will notice this and revoke your ability to reserve a bench.
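The daily allotment rules above (4 hours of total bench time per group per day, with late cancellations still counting) can be sketched as a small check. This is only an illustration; the helper name `hours_remaining` and its data shapes are hypothetical and not part of any actual reservation system:

```python
from datetime import datetime, timedelta

MAX_DAILY_HOURS = 4  # total bench time per group per day

def hours_remaining(reservations, cancellations):
    """Return the bench hours a group may still reserve today.

    `reservations` is a list of (start, end) datetime pairs booked today.
    `cancellations` lists cancelled slots as (start, end, cancelled_at)
    tuples; a cancellation made at least 1 hour before the start time
    does not count against the allotment.
    """
    used = sum((end - start).total_seconds() / 3600
               for start, end in reservations)
    for start, end, cancelled_at in cancellations:
        if cancelled_at > start - timedelta(hours=1):
            # Cancelled too late: the slot still counts.
            used += (end - start).total_seconds() / 3600
    return max(0.0, MAX_DAILY_HOURS - used)
```

For example, 2 hours at Bench A plus 2 hours at Bench B leaves zero hours remaining for the day, matching rule 2 above.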

A Wearable Device Outputting Scene Text For Blind People

Hangtao Jin, Youchuan Liu, Xiaomeng Yang, Changyu Zhu


Featured Project

# Revised

We discussed it with our mentor, Prof. Gaoang Wang, and arrived at a solution to the problem.

## TEAM MEMBERS (NETID)

Xiaomeng Yang (xy20), Youchuan Liu (yl38), Changyu Zhu (changyu4), Hangtao Jin (hangtao2)

## INSTRUCTOR

Prof. Gaoang Wang

## LINK

This idea was pitched on Web Board by Xiaomeng Yang.

https://courses.grainger.illinois.edu/ece445zjui/pace/view-topic.asp?id=64684

## PROBLEM DESCRIPTION

There are about 12 million visually impaired people in China, yet blind pedestrians are rarely seen on the street. One reason is that when blind people travel to an unfamiliar location, it is difficult for them to figure out where they are. They are usually equipped with navigation devices, but the accuracy of such devices is limited, and it is hard for blind people to find the exact position of a destination after arriving nearby. Therefore, we would like to make a device that reads out the scene text around the destination so that blind people can reach the exact place.

## SOLUTION OVERVIEW

We would like to make a device with a micro camera and an earphone. When the user clicks a button, the camera takes a picture and sends it to a remote server through a communication subsystem. On the server, text is extracted and recognized from the picture using a neural network and converted to speech with the Google text-to-speech API. The speech is then sent back and played through the earphone. The device can be attached to the glasses that blind people wear.
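The capture-to-playback flow described above can be sketched as a minimal pipeline. This is a sketch only: the `camera`, `ocr`, `tts`, and `play` callables are stand-ins for the real hardware, the server-side neural OCR model, and the Google text-to-speech API:

```python
def capture_image(camera):
    """Grab one frame when the user presses the button."""
    return camera()  # `camera` is any zero-argument callable returning bytes

def recognize_text(image_bytes, ocr):
    """Extract scene text from the image (server-side OCR model)."""
    return ocr(image_bytes)

def synthesize_speech(text, tts):
    """Convert recognized text to an audio payload (e.g., via a TTS API)."""
    return tts(text)

def on_button_press(camera, ocr, tts, play):
    """Full pipeline: capture -> OCR -> TTS -> playback on the earphone."""
    image = capture_image(camera)
    text = recognize_text(image, ocr)
    if not text:
        text = "No text detected"  # fall back to a spoken notice
    audio = synthesize_speech(text, tts)
    play(audio)
    return text
```

With stub callables in place of the hardware, pressing the button once produces one recognized string and one playback call, which makes the control flow easy to test before the real camera and server exist.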

Navigation equipment can tell blind users the location and direction of their destination, but they still need detailed directions once they are close, and our wearable device helps solve this problem. The camera is fixed to the head, just like our eyes, so when the user turns his or her head, the camera can capture scene text in different directions. Our target scenario is identifying store names along the street. These store signs are generally not tall, about two stories high, so the user can tilt the head up and down to let the camera capture the whole storefront. Therefore, no matter where the store name is, it can be recognized.

For example, if a blind person wants to go to a bookstore, the navigation app will announce that the store is on his right when he is near the destination. However, there may be several stores on his right. The user can then face to the right, take a photo in that direction, and find out whether the store is there. If not, he can turn his head slightly and take another photo in the new direction.

![figure1](https://courses.grainger.illinois.edu/ece445zjui/pace/getfile/18612)

![figure2](https://courses.grainger.illinois.edu/ece445zjui/pace/getfile/18614)

## SOLUTION COMPONENTS

### Interactive Subsystem

The interactive subsystem interacts with the blind and the environment.

- 3-D printed frame that attaches to the glasses through a snap-fit structure and holds all the accessories in place

- Micro camera that can take pictures

- Earphone that can output the speech

### Communication Subsystem

The communication subsystem is used to connect the interactive subsystem with the software processing subsystem.

- A Raspberry Pi (RPi) receives the images taken by the camera and sends them to the remote server through a WiFi module. After processing on the remote server, the RPi receives the speech information (an .mp3 file).
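One way the RPi could package a captured image for upload is a base64-encoded JSON body. This is an assumed schema for illustration: the endpoint URL and the field names `device_id` and `image_b64` are placeholders, not a defined server API:

```python
import base64
import json

SERVER_URL = "http://example.com/api/recognize"  # placeholder endpoint

def build_payload(image_bytes, device_id="rpi-01"):
    """Encode a captured JPEG as a base64 JSON body for the upload request."""
    return json.dumps({
        "device_id": device_id,
        "image_b64": base64.b64encode(image_bytes).decode("ascii"),
    })

def decode_payload(payload):
    """Server side: recover the raw image bytes from the JSON body."""
    body = json.loads(payload)
    return base64.b64decode(body["image_b64"])

# On the Pi, the payload would be sent over WiFi, for example with the
# `requests` library (not executed here):
#   requests.post(SERVER_URL, data=build_payload(img),
#                 headers={"Content-Type": "application/json"})
```

Base64-in-JSON keeps the request easy to debug; a multipart upload of the raw JPEG would be a reasonable alternative if bandwidth on the WiFi link becomes a concern.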

### Software Processing Subsystem

The software processing subsystem processes the images and outputs speech. It includes two parts: text recognition and text-to-speech.

- An OCR neural network that extracts and recognizes Chinese text from the environmental images transmitted by the communication subsystem.

- The Google text-to-speech API is used to convert the recognized text to speech.
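Before synthesis, the OCR output has to be turned into a single utterance. A small helper for that (the joining convention and the fallback string are our assumptions), followed by the hedged TTS call itself:

```python
def announcement_text(recognized_lines):
    """Join OCR output lines into one utterance for TTS.

    Drops empty lines and joins the rest with Chinese commas so the
    synthesized speech pauses between store names.
    """
    lines = [line.strip() for line in recognized_lines if line.strip()]
    return "，".join(lines) if lines else "未检测到文字"  # "no text detected"

# With the gTTS package (a Python wrapper around Google's text-to-speech
# endpoint), the utterance can be rendered to an MP3. This requires
# network access, so it is shown but not executed here:
#   from gtts import gTTS
#   gTTS(announcement_text(["新华书店"]), lang="zh-CN").save("reply.mp3")
```

The resulting .mp3 file is what the communication subsystem would send back to the RPi for playback through the earphone.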

## CRITERION FOR SUCCESS

- Use a neural network to recognize Chinese scene text successfully.

- Use the Google text-to-speech API to convert the recognized text to speech.

- The device can transmit environment pictures or video to the server and receive the speech information correctly.

- Blind people can use the speech information to locate their position.