Detailed Instructions
You can do this lab in any programming language; however, you MUST create your own musical notes using a low-level cosine function. If you use any music synthesis API, you will receive no credit for the lab.
The following instructions assume you're using Python.
- Create a program lab2.py.
- Find a song that contains at least 6 notes. For each note, figure out its frequency, duration, and start time. To compute frequency, you might find this table to be useful. To compute duration and start time, I recommend that you choose a tempo (in beats per minute, or bpm), then compute the duration of a quarter note as quarternote = Fs*60/bpm, then compute the duration and start time of every note with respect to quarternote. Finally, create lists to hold these pieces of information:
frequencies = [freq1, freq2, ...]
durations = [dur1, dur2, ...]
starttimes = [stt1, stt2, ...]
where you would replace freq1 by a number, and so on.
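As a concrete illustration, here is what those lists might look like for the first six notes of "Twinkle, Twinkle, Little Star" at 120 bpm. The melody choice, the tempo, and the sampling rate Fs = 8000 are my own assumptions, not part of the assignment:

```python
import math

Fs = 8000                            # sampling rate in Hz (assumed)
bpm = 120                            # tempo in beats per minute (assumed)
quarternote = int(Fs * 60 / bpm)     # quarter-note duration in samples

# First six notes of "Twinkle, Twinkle": C4 C4 G4 G4 A4 A4 (all quarter notes)
frequencies = [261.63, 261.63, 392.00, 392.00, 440.00, 440.00]
durations = [quarternote] * 6
starttimes = [i * quarternote for i in range(6)]
```

With these numbers, quarternote comes out to 4000 samples, and consecutive notes start every 4000 samples.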
- Create a numpy array long enough to hold all of your notes:
songaudio = np.zeros(durations[-1] + starttimes[-1])
- For each note, calculate its sample values and add them to songaudio at the right times. Something like:
for notenum in range(0, len(frequencies)):
    for n in range(0, durations[notenum]):
        songaudio[starttimes[notenum] + n] += math.cos(2*math.pi*frequencies[notenum]*n/Fs)
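If you prefer, the inner loop can be vectorized with numpy. This is only a sketch: the two-note frequencies, durations, and Fs below are placeholder values of my own, standing in for the lists you built earlier:

```python
import numpy as np

Fs = 8000                           # sampling rate in Hz (assumed)
frequencies = [440.0, 523.25]       # example note frequencies (assumed)
durations = [4000, 4000]            # note durations in samples (assumed)
starttimes = [0, 4000]              # note start times in samples (assumed)

# Allocate enough samples for the last note to finish.
songaudio = np.zeros(starttimes[-1] + durations[-1])

for freq, dur, stt in zip(frequencies, durations, starttimes):
    n = np.arange(dur)              # sample indices within this note
    songaudio[stt:stt + dur] += np.cos(2 * np.pi * freq * n / Fs)
```

The slice assignment replaces the inner `for n in ...` loop with one array operation per note.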
- Convert your songaudio from floating point to 16-bit integer, using the np.ndarray.astype function. First you'll need to scale your values so they don't all get set to zero: the scaling must happen before the astype conversion, otherwise the fractional sample values are truncated toward zero. The maximum possible 16-bit integer is +/- math.pow(2,15). If you've added together up to four notes, then your songaudio array already contains values up to +/- 4, so you want to multiply by math.pow(2,13). So you should be able to do something like:
wavframes = (math.pow(2,13)*songaudio).astype('int16')
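To see the scale-then-convert step in isolation, here is a minimal sketch; the single 440 Hz cosine standing in for songaudio is my own example input:

```python
import math
import numpy as np

Fs = 8000
# Example audio: one 440 Hz cosine, one second long (assumed input)
songaudio = np.cos(2 * np.pi * 440 * np.arange(Fs) / Fs)

# Scale first, THEN convert: values now span +/- 8192, well inside int16 range.
wavframes = (math.pow(2, 13) * songaudio).astype('int16')
```

If the astype were applied before the multiplication, every sample would be truncated to 0, 1, or -1 before scaling, and the output would be silence punctuated by clicks.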
- Use wave to save the result to a wav file. For example:
with wave.open('lab2.wav','wb') as f:
    f.setnchannels(1)
    f.setframerate(Fs)
    f.setsampwidth(2)
    f.writeframes(wavframes)
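After writing the file, you can sanity-check it by reading the header back with the same wave module. The one-second 440 Hz example signal here is my own stand-in for your song:

```python
import wave
import numpy as np

Fs = 8000  # sampling rate (assumed)
wavframes = (8192 * np.cos(2 * np.pi * 440 * np.arange(Fs) / Fs)).astype('int16')

with wave.open('lab2.wav', 'wb') as f:
    f.setnchannels(1)          # mono
    f.setframerate(Fs)
    f.setsampwidth(2)          # 2 bytes per sample = 16 bits
    f.writeframes(wavframes.tobytes())

# Read the header back to confirm the parameters were stored correctly.
with wave.open('lab2.wav', 'rb') as f:
    params = f.getparams()
```

A mismatch between setsampwidth and the array dtype (e.g. writing float64 bytes with sampwidth 2) is a common cause of noise-only output, so this check is worth the extra few lines.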
- Compute the energy spectrum of your song, as a function of pitch in semitones. For each pitch (A0 is pitch 0, A4 is pitch 48, and so on), add the power of the sinusoid (1/2, if the amplitude is 1), multiplied by the duration of the sinusoid.
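A sketch of that accumulation, assuming durations are stored in samples (so dividing by Fs gives seconds) and that each note's pitch is the nearest semitone; the three example notes are mine:

```python
import math

Fs = 8000
frequencies = [440.0, 440.0, 880.0]   # example notes: A4, A4, A5 (assumed)
durations = [4000, 4000, 8000]        # durations in samples (assumed)

energy = [0.0] * 120                  # pitches 0..119, pitch 0 = A0
for freq, dur in zip(frequencies, durations):
    pitch = round(12 * math.log2(freq / 27.5))   # A0 = 27.5 Hz is pitch 0
    energy[pitch] += 0.5 * dur / Fs              # power 1/2 times duration in seconds
```

Here both A4 notes land on pitch 48 and the A5 note on pitch 60, matching the A0=0, A4=48 numbering in the instructions.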
- Compute the level spectrum = energy spectrum in decibels. To do this, you'll need to calculate 10*math.log10 only for the pitches that have a nonzero energy spectrum. log10(0) is undefined, so if any pitch has a zero energy spectrum, leave its level spectrum at zero (or at -60, or at some other small value which is lower than the levels of all nonzero energies).
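Converting the energy spectrum to decibels while skipping zeros might look like this; the -60 dB floor and the four example energies are my assumptions:

```python
import math

energy = [0.0, 0.25, 1.0, 0.0]   # example energy spectrum (assumed)

FLOOR_DB = -60.0                 # placeholder level for zero-energy pitches
level = [10 * math.log10(e) if e > 0 else FLOOR_DB for e in energy]
```

The conditional expression guards the log10 call, so zero-energy pitches never reach it; nonzero entries come out at their true level (0.25 maps to about -6 dB, 1.0 to 0 dB).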
- Use matplotlib.pyplot.stem to create an image showing the level (in decibels) as a function of pitch (in semitones). Use matplotlib.pyplot.xlabel to label the X axis, matplotlib.pyplot.ylabel to label the Y axis, and matplotlib.pyplot.savefig to save the image file.
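A minimal plotting sketch; the non-interactive Agg backend, the output filename lab2.png, the axis label wording, and the four-point example data are my own choices (note the pyplot function is savefig, not savefigure):

```python
import matplotlib
matplotlib.use('Agg')            # render without a display (assumed choice)
import matplotlib.pyplot as plt

pitches = list(range(4))         # example pitch indices (assumed)
levels = [-60.0, -6.0, 0.0, -60.0]   # example level spectrum in dB (assumed)

plt.stem(pitches, levels)
plt.xlabel('Pitch (semitones above A0)')
plt.ylabel('Level (dB)')
plt.savefig('lab2.png')
```

In your lab, pitches would run over all 120 semitone bins and levels would be the level spectrum computed in the previous step.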
Deliverables
By 1/31/2017 23:59, upload the following to Compass:
- A wav file containing your song.
- An image file showing the energy spectrum of the song, as a function of log frequency. The abscissa should be labeled in semitones above A0 (12*math.log2(f/27.5)). The ordinate should be labeled in decibels below the most powerful note (10*math.log10(power*duration)).
- A program that creates the song and the figure. If not Python, provide a comment specifying how to run it.
Walkthrough
Here is a video walkthrough of lab 2.