Name              NetID      Section
Ahmed Nahas       anahas2    ECE 110
Vinamr Sachdeva   vls4       ECE 120
Muhammad Khan     mkhan331   ECE 120
YouYou Yu         youyouy2   ECE 110
John Burns        jeburns2   ECE 120
Ayush Sharma      ayushs5    ECE 110


INTRODUCTION

  The underlying motive behind this project is the heart-wrenching fact that, despite all the developments in science and technology, the visually impaired have been left with little more than a simple white cane: a plain stick among today’s scientific novelties. This is primarily because the economic incentives are weak, so somebody needs to step up and create such products. The major driver behind this project is to make a usable product prototype for the benefit of a largely untargeted class of consumers: blind people.

Our overarching goal is to create a wearable assistive device for the visually impaired by giving them an alternative way of “seeing” through sound. The idea revolves around glasses that allow the user to walk independently by detecting obstacles and notifying the user, creating a sense of vision through spatial awareness. Our objective is to develop a device that is attainable and accessible to all, taking a first step in improving upon the current white cane. This will hopefully open more doors for newer aids for the visually impaired.

  Ahmed Nahas has previously worked on prototypes of smart glasses for the blind, so he has some experience with the challenges we might face and with how best to plan the project and organize the workflow to accelerate research and development. Our task is different in that we are aiming to build a fully functional, convenient, and cost-effective solution.

THE PLAN

  1. Three Phases

With a talented team of six honors lab students, we devised a plan to maximize our output. Our target for this semester was ambitious: we aimed to develop a minimum of three prototypes, starting with an Arduino and breadboard, then downscaling the circuit to an ATtiny, then moving to printed circuit boards small enough to fit inside a 3D-printed wearable glasses enclosure. The three phases:

  Arduino  ----->  ATtiny  ----->  PCB

One thing to note is that we wanted the final wearable glasses prototype to have no circuit components visible, so that it looks like a normal pair of glasses. This was one of the major constraints on our project, as it meant we needed to use the smallest components available. It also meant that these miniature components are not made for prototyping but for industry-grade applications and direct PCB assembly, introducing both software and hardware challenges.

The first phase with the Arduino is the most crucial, as it acts as our proof of concept. This first step includes deciding which components to use, testing each separately, then combining them into one final circuit - the Arduino makes this prototyping and rapid-testing phase very convenient.

The last two phases rest upon the first’s success and would only require slight alterations to the software and circuit schematic.

  2. Tasks and Organization

With our ambitious plan at hand, our team required the utmost effectiveness and organization to build and maintain momentum. Each week, we held a one-hour primary team meeting to discuss progress and next steps, keeping a weekly log of our progress, research findings, and deliverables. We would then divide into three sub-teams (two students each), each focusing on specific tasks for that week. An example:

  • Audio amplifier circuits - (Vinamr, John)

    • Create a circuit schematic, list the parts needed for the build

    • Send coding template to programming team

    • DAC vs PWM with low-pass filtering

  • Sensor circuits - (Ayush, YouYou)

    • Complete circuit schematic diagram for the CH201

    • List parts that are needed for the build

    • The sensor has no wires; decide how to attach them (soldering? manual vs. machine)

  • ATtiny programming research - (Muhammad, Nahas)

    • Complete audio output code

    • Research how to interface with the CH201’s I/O layer over I2C


Having settled on a plan and with each member driven towards our larger goal, we were ready to tackle this design challenge.


DESIGN

The smart glasses concept consists of four main components in the block diagram: input, output, the control unit, and the power supply (see Figure 1). Each of these components is an abstraction of the underlying hardware and software implemented to make the system work.

The input block is centered on the sensor the glasses use to perceive the outside world. The sensor needs to detect the range of objects in its field of view and provide data feedback based on that; it also needs to be power efficient and have a relatively high sample rate. After discussion and research by every team member, the ultrasonic sensor was the final candidate for this project.

The output block represents how the glasses interact with the user. The data sent back from the sensor is processed, and the information reaches the user in another form, such as sound or vibration, since the targeted users have vision disabilities. The feedback provided by the glasses helps the user know where objects are around them and how close they are.

The control unit block is the integral part of the glasses: the brain that processes all of the data coming in and going out. In this design, an Arduino Uno was used to receive sensor data and provide the corresponding feedback to the user. This block also covers the organization of the sensor firmware and the coding of the logic the glasses operate on.

The power supply block represents the power source of the system. Since this system is designed as a smart wearable device, a long battery life is expected. At the same time, the supply has to be relatively small to fit inside the compact glasses design.

In the following few sections, we will describe the technical elements we explored under these categories, digging deeper into the challenges that we encountered with each.


INPUT - DESCRIPTION AND CHALLENGES FACED:

  1. Choosing the sensor

The team initially chose the CH-201 ultrasonic sensor produced by TDK for the system (see Figure 6). Its extremely small size (3 mm x 3 mm), long range, large FOV, and high power efficiency were some of the advantages the team valued in this choice. Since the CH-201 is a compact distance-detection solution aimed at industry, this was the first time most team members had tried to implement an industry-grade product in a design. It was a very involved process for most of us, from reading the datasheet to digesting the application information provided by TDK. While its specifications are perfect for our goals, the main anticipated challenges were interfacing with its miniature hardware (made to be surface-mounted directly on a PCB) and understanding and integrating its software with the other elements of our circuit.

  2. Understanding CH201 Sensor Hardware (Industry-grade)

From the datasheet, we extracted useful information and implementation advice that helped the team immensely when assembling the sensor onto a PCB or a test breadboard. For example, we noted the operating voltage and temperature ranges so that no sensor would be burned out. The pinout was very important for designing the circuitry that connects the sensors to the Arduino and the outputs. The datasheet also includes soldering instructions, which would be useful in the final step of soldering the sensor to the PCB.

  3. Understanding the sensor’s firmware (soniclib.h)

The sensor manufacturer also provides firmware for the sensor on its official website. After downloading the files, we noticed that one important header file, “soniclib.h”, contains all of the sensor’s function definitions, including how to toggle modes, run initializations, and much more. The team put a great amount of time into researching this header file and the related source files so that we could grasp a general idea of how to use the sensor at a higher level. Of course, it was not until after extensive research and experimentation that we discovered further difficulties, which will be discussed later.

  4. PCBs for the sensor

The CH201 sensors are incredibly accurate yet cheap industry-grade sensors, and using them was one of the fundamental goals of our project. These fragile sensors require a slightly lower soldering temperature than usual, and when we tried to connect them to the breadboard, the tiny packages proved complicated to work with. Not only were the pins narrower than the width of a wire, but there was no scaffolding for the sensor whatsoever. This meant we needed an intermediate board to mount the sensor on and connect to the breadboard. We designed a basic breakout board using the EAGLE CAD software (see Figure 7).

As we had no previous experience with printing circuit boards, it was a truly eye-opening experience. Not only did we see how the software works and how developers use open-source tools to accelerate the engineering process, but we also learned that real-world constraints differ greatly from theoretical schematics. Every drill hole and wire must be in just the right place for a circuit to work at full efficiency: the dimensions must be exact and the pins in the right positions. We learned to measure twice and cut once, that is, to double-check our work, after we ruined the first few circuit boards with a short circuit between pins. It is also essential to make the board as small as possible so it blends into the smart glasses frame.

Creating a breakout board gave us a very in-depth taste of the field of electrical engineering. We needed to dig into the sensor's datasheet and find the exact pins, dimensions, and connections while keeping in mind the larger circuit we were building.

  5. Logic gates

We give the user the option to toggle which sensors are active while walking. We achieve this with two switches (A and B) and a combinational logic circuit that transitions between four states based on the inputs from the two switches.

The selection functions for the three sensors L, C, and R (left, center, and right) are:

L = AB + A’B’

C = A’B + A’B’

R = AB’ + A’B’

So, when both switches are ON, only the left sensor will be active.

Circuit: Refer to Figure 8

Truth Table:

A   B   |   L   C   R
0   0   |   1   1   1
0   1   |   0   1   0
1   0   |   0   0   1
1   1   |   1   0   0
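To make this concrete, below is a minimal Arduino sketch implementing the truth table above. The pin numbers and the assumption that each sensor has an active-high enable line are ours for illustration; the actual build's wiring differs.

    // Sensor-select logic from the truth table above.
    // Pin assignments are illustrative, not the actual build's wiring.
    const int SWITCH_A = 2;    // toggle switch A
    const int SWITCH_B = 3;    // toggle switch B
    const int ENABLE_L = 4;    // assumed active-high enable, left sensor
    const int ENABLE_C = 5;    // assumed active-high enable, center sensor
    const int ENABLE_R = 6;    // assumed active-high enable, right sensor

    void setup() {
      pinMode(SWITCH_A, INPUT_PULLUP);   // switch closes to ground when ON
      pinMode(SWITCH_B, INPUT_PULLUP);
      pinMode(ENABLE_L, OUTPUT);
      pinMode(ENABLE_C, OUTPUT);
      pinMode(ENABLE_R, OUTPUT);
    }

    void loop() {
      // With INPUT_PULLUP, the pin reads LOW when the switch is ON.
      bool a = (digitalRead(SWITCH_A) == LOW);
      bool b = (digitalRead(SWITCH_B) == LOW);

      // L = AB + A'B',  C = A'B + A'B',  R = AB' + A'B'
      digitalWrite(ENABLE_L, ((a && b) || (!a && !b)) ? HIGH : LOW);
      digitalWrite(ENABLE_C, ((!a && b) || (!a && !b)) ? HIGH : LOW);
      digitalWrite(ENABLE_R, ((a && !b) || (!a && !b)) ? HIGH : LOW);
    }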



CONTROL UNIT - DESCRIPTION AND CHALLENGES FACED:

  1. Arduino Software 

Understanding the Arduino’s companion software, the Arduino IDE, was paramount to the functionality of the prototype. Some prior knowledge of the C programming language, coupled with looking at examples online, helped us become familiar with the syntax. However, this software was much more directly interconnected with the hardware than anything we had used before. It was interesting to see how even small things like initializing pins and writing the setup and loop functions both differed from and resembled our other programming experiences.

The objective of the software was to initialize the ultrasonic sensors and run them so that they could return and keep track of distances, then output audio based on the direction of and distance to an obstacle. While our initial sensor, the CH-201, was very capable in its distance tracking as well as its field of view, it unfortunately proved too difficult to interface with through the Arduino IDE. For the audio output, we used the Arduino’s PWM output and were able to code an audio cue that could be played on both or just one of the two headphones to alert the user to the direction of an obstacle.

  2. ATtiny Hardware

The Arduino is a powerful tool for the prototyping stage; however, it is not practical to expect someone to walk around with an Arduino on their glasses. We always planned to eventually transition to an ATtiny microcontroller for the final PCB. Although we unfortunately ran out of time to build a wearable prototype, our research reached a point where we should be able to make this transition soon (see next steps). Many factors were important in deciding which ATtiny to use: flash storage for code, size, and of course the number of I/O pins. After much research we decided on the ATtiny1634. We created a full schematic for this microcontroller within the circuit (see Figure 4). It functions almost exactly the same as the Arduino implementation (I/O for the sensors, PWM for the audio). The main challenges remaining for the transition are adapting the Arduino’s code, both in terms of understanding how to program the ATtiny and in fitting the code within the ATtiny’s 16 KB of flash memory.

  3. ATtiny Software

Another potential solution we pursued was Atmel Studio (now Microchip Studio), an IDE for AVR microcontrollers. Hoping that the provided source and header files could be written directly onto the ATmega chip on the Arduino, the team researched the IDE itself and the skills needed to accomplish this. It turned out to be a very sophisticated and complex IDE that none of us had seen before (probably the only comparably complex tool we had encountered is Quartus Prime, used in ECE 120). Although we managed to import the project files into the IDE, no further progress could be made because of the limited online information relevant to our situation and the IDE's long learning curve.

  4. Storage Management

One approach the team took with the Arduino IDE was to create our own library out of the firmware code the manufacturer provides for the sensor, including soniclib.h, the header file mentioned before that holds all the function definitions. This approach quickly ran into obstacles. The files total nearly 20 MB across over 700 files, while the usable memory on the Arduino is hundreds of times smaller. Arduino libraries also require a specific structure built around main .c or .cpp files and their headers - we had hundreds of each. After much testing, we were able to fit these files into that structure and include it as a library, but 15 MB was still far too much to upload to the Arduino. Looking through the packed files, we realized that many HTML files and source files were for the CH-101, a different sensor we were not using. There were also drivers for development on other kinds of boards, which could be ignored since we were only targeting the Arduino. After discarding a great number of files and keeping only those written for the CH-201, we cut the size of the library almost in half. The issue that then arose was that the files were intricately interconnected: removing one file would cause errors in 50 others.
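For reference, the folder layout the Arduino IDE expects looks roughly like the following; all names here are illustrative, not the project's actual file names:

    MyCH201Library/              <- illustrative library name
      library.properties         <- metadata file (name, version, etc.)
      src/
        MyCH201Library.h         <- top-level header a sketch #includes
        soniclib.h               <- the manufacturer's main header
        soniclib.c
        ...                      <- remaining CH-201 sources and headers

with a minimal library.properties along these lines:

    name=MyCH201Library
    version=1.0.0
    sentence=CH-201 sensor firmware wrapped as an Arduino library.

This mirrors the structure described above; the size problem remained regardless of the layout.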

The firmware files described above were only provided as part of an example project by the manufacturer. The manufacturer states that this sensor is made for very low-level applications and that an I/O layer needs to be developed by the user (using soniclib.h as a starting point) to fit their specific needs. The other approach was to use the Arduino's Wire.h library, which would let us communicate directly over the I2C protocol (which we also investigated); but this sensor's implementation sends and receives a multitude of data to support its built-in firmware, meaning we would not be able to decipher the data we received, nor make the process as efficient as possible. We reached out to professionals in the field with 5-8 years of experience working with similar components to ask how we should approach these sensors: they stated that they had not been able to figure these sensors out and found their complexity beyond their scope. We knew it was time to find another sensor to work with.
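For context, the Wire.h route we investigated has this general shape. The device address and register index below are placeholders, since deciphering the CH-201's actual protocol was exactly the obstacle described above:

    #include <Wire.h>

    const int SENSOR_ADDR = 0x29;   // placeholder I2C address, not the CH-201's documented one

    void setup() {
      Wire.begin();                 // join the I2C bus as master
      Serial.begin(9600);
    }

    void loop() {
      // Generic register read: send a register index, then request bytes back.
      Wire.beginTransmission(SENSOR_ADDR);
      Wire.write(0x00);                      // placeholder register index
      Wire.endTransmission(false);           // repeated start, keep the bus
      Wire.requestFrom(SENSOR_ADDR, 2);      // ask for two bytes
      if (Wire.available() >= 2) {
        int raw = (Wire.read() << 8) | Wire.read();
        Serial.println(raw);                 // raw value; meaningless without the firmware docs
      }
      delay(100);
    }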


OUTPUT - DESCRIPTION AND CHALLENGES FACED:

  1. Digital-to-Analog Conversion (DAC)

The selection of the audio file is decided on the Arduino/microcontroller; however, we still needed to turn this information into actual sound. This required a digital-to-analog conversion, so our first approach was to use an external DAC chip that could take the digital output from the microcontroller and produce the analog waveform desired for the earbuds. However, this led us to a common struggle in this project: the more sensors we have, the more useful the glasses can be, so we were eternally rationing microcontroller pins. An external DAC chip simply would not have fit into the rest of our design. A closer look at the purpose of our audio component provided the solution. Most online resources and examples recommend an external DAC for small-scale Arduino-to-audio circuits because it produces the highest-quality sound. However, our glasses do not need great or even good sound quality, only outputs distinct enough from one another for the wearer to tell which direction the object is in. This led us to the low-pass filter approach.

  2. Low-pass filters

Once our audio circuit was complete, we tested it and realized that the audio quality was subpar, especially given that we need reasonably clean audio to deliver 3D cues to the user. We decided to build a low-pass filter for the PWM-generated sound from the Arduino pins. First, we analyzed the frequency composition of our generated sound using the Audacity software (see Figure 2).

Judging from the amplitude-frequency graph, we decided the cutoff frequency should be roughly 1 kHz. We built the filter from a resistor and a capacitor, sized using the formula for the cutoff frequency of an RC low-pass filter:

f_cutoff = 1 / (2πRC), where R is the resistance and C is the capacitance. We used a 1.5 kΩ resistor and a 100 nF capacitor, which gives a cutoff frequency of 1 / (2π x 1500 Ω x 100 nF) ≈ 1.061 kHz.

Figure 5 shows the connections of the low-pass filter along with the audio amplifier (needed because the filter reduces the amplitude of the sound), which will be necessary when moving to 3D audio. We successfully filtered the pulse-width-modulated signal, as measured on an oscilloscope, but when we tried to encode the 3D audio for use on the Arduino, we ran into many obstacles.

  3. Earphones

Before transitioning to bone conduction transducers, we started with simpler, normal earphones, which provide spatial awareness to the visually challenged through stereo audio. The control unit uses conditional statements to determine what exactly should be signaled to the user. Our current model creates spatial awareness by sensing objects directly in front of the user and at their left and right, and the output signals where an object is being detected.

The output signal effectively indicates the direction that is clear for the user to walk in. If there is an object to the front and east, the user is signaled to walk west, with the sound heard only in their left ear; the mirror image applies for an object to the front and west. If the object is only at the front, the user is signaled to walk either east or west by hearing the same sound in both ears, as sketched below.
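A minimal sketch of that decision logic follows; the pin numbers, cue pitch, and demo flags are illustrative, and the real sketch derives the three flags from the sensor distances:

    // Decision logic mapping detected obstacles to earphone cues.
    const int LEFT_EAR_PIN  = 9;     // PWM pin driving the left channel
    const int RIGHT_EAR_PIN = 10;    // PWM pin driving the right channel
    const unsigned int CUE_HZ = 440; // placeholder cue pitch

    void signalDirection(bool objFront, bool objLeft, bool objRight) {
      if (objFront && objRight) {
        tone(LEFT_EAR_PIN, CUE_HZ, 200);    // path clear to the west: left ear only
      } else if (objFront && objLeft) {
        tone(RIGHT_EAR_PIN, CUE_HZ, 200);   // path clear to the east: right ear only
      } else if (objFront) {
        // tone() drives only one pin at a time, so cue each ear in quick
        // succession to approximate "both ears" (see Results).
        tone(LEFT_EAR_PIN, CUE_HZ, 100);
        delay(100);
        tone(RIGHT_EAR_PIN, CUE_HZ, 100);
      }
    }

    void setup() {}

    void loop() {
      // In the real sketch these flags come from the distance readings.
      signalDirection(true, false, true);   // demo: obstacle at front and right
      delay(1000);
    }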

  4. 3D Audio, Stereo Output

One goal for this project was to make the glasses as functional for the user as possible, and one integral element is being able to pinpoint specifically and accurately where the object is. Setting aside the sensor-input requirements, we found the most versatile solution to be 3D audio: using stereo sound to create the illusion of audio being played from specific directions, making a clear distinction between close bearings such as north and northeast. This technique is widely used in movies and music, and is even built into phone video recording and playback. Testing examples online made us certain that this was the correct solution.

First, we had to choose an appropriate sound effect: we recorded a single organ note and found it effective for our goals, noticeable but not irritating over a long period of time. We also chose it so we would have constant, uniform waves, which would let us shrink the audio file to a fraction of a second and simply loop it in our code, solving our storage-management issues.

To create the 3D effect, we tested several methods. The first was editing each ear's channel to create a time difference for when the audio enters each ear, producing the illusion of the source being closer to one ear than the other. This worked for sound waves that were not uniform, since the listener can notice the relative differences in beats, but it did not work for uniform audio, where there is no relative differential between the waveforms.

The other approach was to use 3D audio plugins in Audacity, which tackles the above issues, and it worked perfectly! The 3D spatial-awareness plugins (example shown in Figure 3) let us choose the exact angle from which the audio appears to play, both the surrounding azimuth angle and the angle of elevation. We successfully created 0.2-second audio files for the west, northwest, north, northeast, and east directions.


RESULTS

  1. First Prototype

The first prototype’s purpose was to test our audio circuit. At the time, the CH-201 sensor’s code was still under development; hence, we turned to the HC-SR04, the ultrasonic sensor found in the ECE 110 kit. By using this simple, easy-to-use sensor, we limited the challenges to only the audio circuit and code, allowing much easier debugging and troubleshooting. After procuring three of these sensors, we tested them alongside code that merges the sensors’ input with the audio code we had developed. The results were promising: the sensors would detect objects within a certain range and play a sound, and as the object came closer, the sound would play more frequently. With three sensors, we get at the very least three directions: right, left, and front. In this latest test, there were issues differentiating which direction was actually closest, so an object detected by both the front and right sensors would play through both earphones instead of just the right.
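As an illustration, a single-sensor version of that behavior looks roughly like this; the pins, thresholds, and timings are illustrative rather than the prototype's exact values:

    // Read one HC-SR04 and beep faster as the object gets closer.
    const int TRIG_PIN  = 7;
    const int ECHO_PIN  = 8;
    const int AUDIO_PIN = 9;          // PWM pin driving one earphone channel

    void setup() {
      pinMode(TRIG_PIN, OUTPUT);
      pinMode(ECHO_PIN, INPUT);
    }

    long readDistanceCm() {
      digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
      digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);  // 10 us trigger pulse
      digitalWrite(TRIG_PIN, LOW);
      long us = pulseIn(ECHO_PIN, HIGH, 30000);  // echo time, 30 ms timeout
      return us / 58;                            // ~58 us of round trip per cm
    }

    void loop() {
      long cm = readDistanceCm();
      if (cm > 0 && cm < 200) {                  // object within ~2 m
        tone(AUDIO_PIN, 440, 60);                // short cue
        delay(map(cm, 0, 200, 60, 600));         // closer object -> faster repeats
      } else {
        delay(100);                              // nothing detected; poll again
      }
    }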

One thing to note is that the Arduino only allows one tone output at a given time, so outputting stereo audio to two different earphones under this restriction had us play with the period of each tone: one ear hears the tone for a fraction of a second, then it is played in the other, creating the illusion of both ears hearing it at once. This idea of using PWM with alternating channels had little to no examples online and was developed entirely by our audio-code subteam, as sketched below.
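A stripped-down illustration of the alternating-channel idea (the swap timing here is illustrative, not our tuned value):

    // tone() can only drive one pin at a time, so rapidly swap the cue
    // between the two headphone channels to mimic simultaneous stereo.
    const int LEFT_PIN  = 9;
    const int RIGHT_PIN = 10;

    void setup() {}

    void loop() {
      tone(LEFT_PIN, 440);     // brief burst in the left ear
      delay(40);
      noTone(LEFT_PIN);
      tone(RIGHT_PIN, 440);    // then the right ear
      delay(40);
      noTone(RIGHT_PIN);       // at this swap rate the bursts blend into "both ears"
    }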

The major downside was not being able to encode the 3D audio files we had developed, which would let the user pinpoint exactly where an object is. Issues arose when trying to encode and play these 3D audio files using the method above. Moreover, the limited 8-bit audio quality makes it somewhat harder to accurately deduce where the audio is coming from. We can loosely claim that the sensing part of our product has a least count (angular resolution) of roughly 60 degrees, because we can only tell whether an object is at the front, right, or left. Such low accuracy might cause problems for the user while walking, which we were able to ascertain while testing. We believe we can encode and test these 3D audio files in the future using other encoding techniques or software, while still avoiding a specialized DAC chip.

  2. Second Prototype

Our second prototype’s goal was to test the CH-201 sensor’s code. In preparation, we designed and printed a PCB breakout board to house our tiny CH-201 sensor and set up the circuit components needed for this sensor’s implementation. The major issue that restricted our progress was not getting the sensor’s code completed in time, given the software-related issues explained above. Therefore, we were unable to test this prototype in time.


CONCLUSION AND NEXT STEPS

Overall, while we faced many challenges that kept us from our desired second and third prototypes using an ATtiny and PCBs, we believe this project was a major success. We did not reach our initial ambitious goals, but we learned and explored a ton of new concepts well beyond our current freshman knowledge.

In retrospect, our project divides into two main categories: 3D audio output and CH-201 input. We were successful to a large extent with the 3D audio’s software and hardware, overcoming the many challenges that arose. With the sensor, we faced major obstacles that restricted our progress: over a month of dedicated, focused effort across the team let us explore and understand these sensors deeply, but the issues lurking behind that veneer of mild success grew beyond our current scope. The next steps to overcome these challenges include:

  • Find a better sensor: miniature, yet Arduino- and ATtiny-friendly

  • Optimize the sensor’s code: multiple-sensor input/output functionality

  • Encode the 3D audio files: find a better encoding solution/software

  • Create a final Arduino prototype: successfully test multiple-sensor input and 3D audio output

  • Translate to the ATtiny: transition all code and hardware to an ATtiny

  • Translate to PCBs: design and transition all hardware to a PCB that takes the shape of, and fits inside, a 3D-printed glasses design


All in all, the major lesson of this project is learning to recognize gates: each stage can be described as a gate, and if one gate refuses to open no matter how much time and effort it is given (the CH-201 sensors), we should backtrack and find another suitable pathway. We conclude this report with the learning outcomes and topics explored by each team member:

  • YouYou: 

    • IDE file management in a project

    • Creating an Arduino library

    • Library Storage Management

    • Understanding industry-grade sensors’ documentation (CH-201)

    • I2C communication protocol

  • Ayush:

    • Eagle PCB Design

    • Low-pass filters

    • Understanding industry-grade sensors’ documentation (CH-201)

    • I2C communication protocol

  • Muhammad:

    • Arduino hardware/software

    • Different libraries for sound encoding, playback, etc.

    • CH-201 C code, soniclib.h (industry-grade C programming)

    • Encoding audio

  • John:

    • Arduino Hardware/Software

    • Audio Circuit: Digital to Analog output, developing the circuit

    • Developing the ATtiny and audio circuit schematics

    • ATtiny hardware, datasheets

  • Ahmed:

    • Developing 3D Stereo Audio

    • Arduino/ATtiny software + hardware

    • Understanding the I2C communication protocol

    • Storage management in IDE projects

    • Creating an Arduino library 

    • Understanding the sensor’s complex I/O layer 


APPENDICES


Figure 1: Design Block Diagram

Figure 2: Amplitude-Frequency Graph


Figure 3: Ambeo Plugin in Audacity - Used to create 3D spatial audio files.

Figure 4: ATtiny1634 schematic

Figure 5: Schematic for low-pass filter

Figure 6: CH-201 Ultrasonic sensor



Figure 7: Footprint & symbol for CH201 sensor as designed on EAGLE software


Figure 8: Logic circuit

Hackaday.io. 2021. DIY Bone Conduction Glasses. [online] Available at: <https://hackaday.io/project/164895-diy-bone-conduction-glasses>.

Electronics-Lab.com. 2021. ATtiny85 Audio Sample Player. [online] Available at: <https://www.electronics-lab.com/attiny85-audio-sample-player/>.



Comments:

Hey guys! Seems like an awesome project! I would also consider getting an Arduino as a backup in case the ATtiny9 becomes difficult to prototype on. That ATtiny9 is really limited in capability so those 4 I/O lanes especially may be difficult to work around. Using the Arduino and a tiny 8-bit microprocessor will allow you to develop code to test functionality of your other parts and then you can start developing on the more restrictive chip which will likely be much more difficult.

I don't have too much experience with using these tiny microcontrollers so it may actually be less restrictive than I think but I wanted to give a heads up.

I'll approve your project!

Posted by dbycul2 at Feb 25, 2021 23:46

Hey David,

Thanks for the feedback! Yes, the Attiny9 is pretty limited - we're looking into more powerful versions such as the Attiny2313. As you advised, we should surely start with an Arduino for initial testing and prototyping. Turns out we can actually use the Arduino IDE to program the Attiny chip, so the transition will be more hardware related. Once again, thanks for the insight!

Posted by anahas2 at Feb 27, 2021 19:26