CS440/ECE448 Fall 2019
Assignment 5: Perceptron
Due date: Monday November 4th, 11:55pm
Created By: Justin Lizama and Medhini Narasimhan
Responsible TAs: Hari Cheruvu and Weilin Zhang
In this assignment, we are going to see if we can teach a computer to distinguish
living things from non-living things. More precisely, you will implement the perceptron
algorithm to detect whether or not an image contains an animal.
General guidelines and submission
Basic instructions are the same as in MP 1. To summarize:
You should submit on Gradescope:
- A copy of perceptron.py containing all your new code
Problem Statement
You are given a dataset of images that either do or do not contain animals.
Your task is to write a perceptron algorithm that classifies which images
contain animals. Using the training set, you will learn a perceptron
classifier that predicts the correct class label for an unseen image.
Use the development set to test the accuracy of your learned model.
We will have a separate (unseen) test set that we will use to run your code after you turn it in.
You may use NumPy in this MP to program your solution. Aside from that library,
no other outside non-standard libraries can be used.
The dataset consists of 10000 32x32 color images in total. We have
split this data set for you into 2500 development examples and 7500 training
examples. There are 2999 negative examples and 4501 positive examples in the
training set.
This is a subset of the CIFAR-10 dataset,
provided by Alex Krizhevsky.
The data set can be downloaded here:
data (gzip) or
data (zip).
When you uncompress this, you'll find
a binary object that our reader code
will unpack for you.
The perceptron model is a linear function that tries to separate data into
two or more classes. It does this by learning a set of weight coefficients
w_i and then adding a bias b. Suppose you have features x_1, ..., x_n;
then this can be expressed in the following fashion:
f_{w,b}(x) = \sum_{i=1}^{n} w_i x_i + b
You will use the perceptron learning algorithm to find good weight parameters
w_i and b such that f_{w,b}(x) >= 0 when there is an animal in the image and
f_{w,b}(x) < 0 when there is no animal in the image. Note that in our case we
have 3072 features, because each image is 32x32 with three RGB color channels,
yielding 32*32*3 = 3072.
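For concreteness, here is a minimal NumPy sketch of how this decision value could be
computed for a single flattened image; the variable names are illustrative and are not
part of the provided code.

    import numpy as np

    # Illustrative only: a flattened 32x32 RGB image gives 3072 features.
    image = np.random.rand(3072)       # stand-in for one feature vector x
    weights = np.zeros(3072)           # the w_i, learned during training
    bias = 0.0                         # b, also learned

    f = np.dot(weights, image) + bias  # f_{w,b}(x) = sum_i w_i x_i + b
    label = 1 if f >= 0 else 0         # animal (1) vs. no animal (0)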
Please see the textbook and lecture notes for the perceptron algorithm.
You will be using a single classical perceptron whose output is either +1 or -1
(i.e. sign/step activation function).
- Training: To train the perceptron you are going to need to implement
the perceptron learning algorithm on the training set. Each pixel of the
image is a feature in this case. (A sketch of one training pass is shown
after this list.)
- Development: After you have trained your perceptron classifier,
you will have your model decide whether or not the images in the development set
contain animals. To do this, take the sign of the function f_{w,b}(x).
If it is negative, classify the image as 0; otherwise classify it as 1.
Use only the training set to learn the weights.
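As mentioned in the Training step above, here is a minimal sketch of one pass of the
classical perceptron update rule, assuming the 0/1 training labels are mapped to -1/+1
internally; the function name and loop structure are illustrative, not a required interface.

    import numpy as np

    def perceptron_epoch(weights, bias, train_set, train_labels, lrate):
        # One pass over the training data with the classical mistake-driven update.
        for x, label in zip(train_set, train_labels):
            y = 1 if label == 1 else -1                           # map {0, 1} -> {-1, +1}
            y_hat = 1 if np.dot(weights, x) + bias >= 0 else -1   # sign/step activation
            if y_hat != y:                                        # update only on mistakes
                weights = weights + lrate * y * x
                bias = bias + lrate * y
        return weights, bias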
Perceptron is a simple linear model, and although this is sufficient
in a lot of cases, it has its limits. For extra credit, implement K-Nearest
Neighbors. See how high an accuracy you can get, and find some
justification (just for fun, no need to submit) for why your choice of
model is superior to perceptron for this particular task. You must
implement this algorithm on your own with only standard libraries and
NumPy. The choice of K should be based on experimentation.
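If you attempt this, one straightforward way to structure K-Nearest Neighbors with only
NumPy is sketched below; the function name, the squared Euclidean distance, and the
tie-breaking rule are illustrative choices, not requirements.

    import numpy as np

    def knn_predict(train_set, train_labels, dev_set, k):
        # Illustrative K-NN: squared Euclidean distance plus a majority vote.
        train_labels = np.asarray(train_labels)
        predictions = []
        for x in dev_set:
            dists = np.sum((train_set - x) ** 2, axis=1)  # distance to every training image
            nearest = np.argsort(dists)[:k]               # indices of the k closest images
            votes = train_labels[nearest]                 # their 0/1 labels
            predictions.append(1 if 2 * votes.sum() >= k else 0)  # majority vote, ties -> 1
        return predictions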
We have provided all the code to get you started on your MP (tar, zip),
which means you will only have to implement the logic behind perceptron.
- reader.py - This file is responsible for reading in the data set.
It makes a giant NumPy array of feature vectors corresponding with each image.
- mp5.py - This is the main file that starts the program, and computes the
accuracy, precision, recall, and F1-score using your implementation of perceptron.
- perceptron.py - This is the file where you will be doing all of your work.
Inside the code ...
- The function classify() takes as input the training data, training labels, development set,
learning rate, and maximum number of iterations.
  - The training data provided is the output from reader.py.
  - The training labels are the list of labels corresponding to each image in the training data.
  - The development set is the NumPy array of images that you are going to test your implementation on.
  - The learning rate is the hyperparameter you specified with --lrate (it is 1 by default).
    Reset the value inside your classify() function if you want something other than the default
    value for your final submission.
  - The maximum number of iterations is the number you specified with --max_iter (it is 10 by default).
    Please do not reset this value inside your code.
- You will have classify() output the predicted labels for the development set from your perceptron model.
- The function classifyEC() is where you can implement the extra credit, if you decide to
attempt it. If you use the --extra flag, then classifyEC() will be run instead of classify().
NOTE: In classify(), only implement perceptron on the raw data. Do NOT do any extra credit or anything extra
in classify().
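To make the expected flow concrete, here is a hedged skeleton of how classify() might be
organized, based only on the parameter list described above; the parameter names are
assumptions, and the actual stub in the provided perceptron.py may differ.

    import numpy as np

    def classify(train_set, train_labels, dev_set, learning_rate, max_iter):
        # Skeleton only; argument names are assumed from the description above.
        weights = np.zeros(train_set.shape[1])   # 3072 weights for 32x32x3 images
        bias = 0.0
        for _ in range(max_iter):                # max_iter passes over the training set
            for x, label in zip(train_set, train_labels):
                y = 1 if label == 1 else -1
                if (np.dot(weights, x) + bias >= 0) != (y == 1):   # misclassified?
                    weights = weights + learning_rate * y * x
                    bias = bias + learning_rate * y
        # Predict 1 (animal) when f_{w,b}(x) >= 0, otherwise 0.
        return [1 if np.dot(weights, x) + bias >= 0 else 0 for x in dev_set]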
Do not modify the provided code. You will only have to modify perceptron.py.
To understand more about how to run the MP, run python3 mp5.py -h in your terminal.
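For example, a run with the default settings made explicit might look like the commands
below; depending on how the provided mp5.py locates the dataset, additional arguments may
be required, so treat these invocations as illustrative.

    python3 mp5.py -h                        # list all available options
    python3 mp5.py --lrate 1 --max_iter 10   # run classify() with the default hyperparameters
    python3 mp5.py --extra                   # run classifyEC() instead of classify()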