
CS440/ECE448 Fall 2019

Assignment 6: Neural Nets and PyTorch

Due date: Monday November 18th, 11:55pm


Created By: Justin Lizama, Kedan Li, and Tiantian Fang

The goal of this assignment is to extend your results from MP5, improving the accuracy by employing neural networks (also known as multilayer perceptrons), which are nonlinear extensions of the linear perceptron from MP5. In the first part, you will create a 1980s-style neural network with sigmoid activation functions. In the second part, the goal is to improve this network using more modern techniques such as changing the activation function and/or changing the network architecture or initialization details.

You will be using the PyTorch and NumPy libraries to implement these models. The PyTorch library will do most of the heavy lifting for you, but it is still up to you to implement the right high-level instructions to train the model.


Extra credit may be awarded for going beyond expectations or completing the suggestions below. Notice, however, that the score for each MP is capped at 110%.

Dataset

The dataset consists of 10,000 32x32 color images in total. We have split this data set for you into 2500 development examples and 7500 training examples. There are 2999 negative examples and 4501 positive examples in the training set. This is a subset of the CIFAR-10 dataset, provided by Alex Krizhevsky.

The data set can be downloaded here: data (gzip) or data (zip). When you uncompress this you'll find a binary object that our reader code will unpack for you.

Part 1: Classical Sigmoid Network

The basic neural network model consists of a sequence of hidden layers sandwiched between an input layer and an output layer. Data is fed in at the input layer, passed through the hidden layers, and read out at the output layer. Every neural network induces a function $F_W$, given by propagating the data through the layers.

To make things more precise, in MP5 you learned a function $f_w(x) = \sum_{i=1}^{n} w_i x_i + b$. In this assignment, given weight matrices $W_1, W_2$ with $W_1 \in \mathbb{R}^{h \times d}$, $W_2 \in \mathbb{R}^{2 \times h}$ and bias vectors $b_1 \in \mathbb{R}^{h}$ and $b_2 \in \mathbb{R}^{2}$, you will learn a function $F_W$ defined by $F_W(x) = W_2\,\sigma(W_1 x + b_1) + b_2$, where $\sigma$ is your activation function. In part 1, we will be using the sigmoid activation function, defined as $\sigma(x) = \frac{1}{1 + e^{-x}}$, and we will have $h = 32$ and $d = (32)(32)(3) = 3072$. In other words, we will be using 32 hidden units and 3072 input units, one for each color value of the image's 32x32 pixels.
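For concreteness, here is a minimal PyTorch sketch of this architecture. The standalone nn.Sequential and the variable names are illustrative only; your actual implementation must live inside the provided NeuralNet class described under "Provided Code Skeleton" below.

    import torch
    import torch.nn as nn

    # Part 1 architecture sketch: 3072 -> 32 -> 2 with a sigmoid hidden activation.
    two_layer_net = nn.Sequential(
        nn.Linear(3072, 32),   # computes W1 x + b1
        nn.Sigmoid(),          # sigma(.)
        nn.Linear(32, 2),      # computes W2 (.) + b2; raw scores, no softmax here
    )

    x = torch.randn(4, 3072)   # a fake batch of 4 flattened images
    scores = two_layer_net(x)  # shape (4, 2)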

Training and Development

  • Training: To train the neural network you are going to need to minimize the empirical risk $R(W)$, which is defined as the mean loss determined by some loss function. For this assignment you can use cross entropy for that loss function. In the case of binary classification, the empirical risk is given by $R(W) = -\frac{1}{n}\sum_{i=1}^{n}\left[y_i \log \hat{y}_i + (1-y_i)\log(1-\hat{y}_i)\right]$, where the $y_i$ are the labels and the $\hat{y}_i$ are determined by $\hat{y}_i = \sigma(F_W(x_i))$, where $\sigma$ is the sigmoid function as discussed earlier. For this assignment, you won't really have to implement this yourself: you can just use the PyTorch function torch.nn.CrossEntropyLoss(). Keep in mind that you do not need an extra sigmoid at the end of your network; torch.nn.CrossEntropyLoss will handle this for you.

  • Development: After you have trained your neural network model, you will have your model decide whether or not the images in the development set contain animals. This is done by evaluating your network $F_W$ on each example in the development set and then taking the index of the maximum of the two outputs (i.e. argmax). A short sketch of both the training step and the development-set prediction follows this list.
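As a rough illustration of these two steps, here is a minimal sketch. The network, optimizer, learning rate, and fake data below are placeholders, not part of the provided skeleton.

    import torch
    import torch.nn as nn

    # Placeholder network and fake data, purely to make the snippet self-contained.
    net = nn.Sequential(nn.Linear(3072, 32), nn.Sigmoid(), nn.Linear(32, 2))
    x_batch = torch.randn(100, 3072)        # one batch of 100 flattened images
    y_batch = torch.randint(0, 2, (100,))   # 0/1 labels for that batch
    dev_set = torch.randn(2500, 3072)       # development images

    loss_fn = nn.CrossEntropyLoss()         # applies log-softmax internally
    optimizer = torch.optim.SGD(net.parameters(), lr=0.01)

    # One training step on a batch:
    optimizer.zero_grad()                    # clear old gradients
    loss = loss_fn(net(x_batch), y_batch)    # mean cross-entropy over the batch
    loss.backward()                          # backpropagate
    optimizer.step()                         # update the weights

    # Development-set predictions: index of the larger of the two outputs (argmax).
    with torch.no_grad():
        predictions = torch.argmax(net(dev_set), dim=1)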

Part 2: Modern Network

In this part, you will try to improve your performance by employing more modern machine learning techniques. These include, but are not limited to, the following:
  1. Choice of activation function: Some possible candidates are Tanh, ReLU, ELU, softplus, and LeakyReLU. You may find that choosing the right activation function leads to significantly faster convergence and/or improved overall performance.
  2. L2 Regularization: Regularization is when you try to improve your model's ability to generalize to unseen examples. One commonly used form of regularization is L2 regularization. Let $R(W)$ be the empirical risk (mean loss); you can implement L2 regularization by adding a term that penalizes the norm of the weights. More precisely, your new empirical risk becomes $R(W) := R(W) + \lambda \sum_{w \in P} \|w\|_2^2$, where $P$ is the set of all your parameters and $\lambda$ (usually small) is a hyperparameter chosen by you.
  3. Network Depth and Width: The sort of network you implemented in part 1 is called a two-layer network because it uses two weight matrices. Sometimes it helps performance to add more hidden units and/or more weight matrices to obtain greater representation power and make training easier.
  4. Data Standardization: Convergence speed can be improved greatly by standardizing your data: subtract the sample mean and divide by the sample standard deviation. More precisely, you can alter your data matrix $X$ by simply doing $X := (X - \mu)/\sigma$.
Try to employ some of these techniques in order to attain a test accuracy of at least 0.84. The only stipulation is that you use under 500,000 total parameters: counting every floating-point value in all of your weight matrices and bias terms, the total must stay below 500,000. A sketch combining a few of these techniques appears below.
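As an illustration only, here is a minimal sketch combining a ReLU activation, L2 regularization via the optimizer's weight_decay argument, and data standardization. The hidden sizes, learning rate, and weight_decay value are arbitrary choices, not recommended settings.

    import torch
    import torch.nn as nn

    # Illustrative deeper network with ReLU activations (3072 -> 128 -> 32 -> 2).
    # Parameter count: 3072*128 + 128 + 128*32 + 32 + 32*2 + 2 = 397,538 < 500,000.
    modern_net = nn.Sequential(
        nn.Linear(3072, 128), nn.ReLU(),
        nn.Linear(128, 32), nn.ReLU(),
        nn.Linear(32, 2),
    )

    # L2 regularization can be applied through the optimizer's weight_decay term.
    optimizer = torch.optim.SGD(modern_net.parameters(), lr=0.01, weight_decay=1e-4)

    # Data standardization: subtract the sample mean and divide by the sample std.
    X = torch.randn(7500, 3072)       # fake data standing in for the training set
    X = (X - X.mean()) / X.std()      # per-feature statistics are also a common choice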

Extra Credit Suggestion

While it is possible to obtain nice results with traditional multilayer perceptrons, when doing image classification tasks it is often best to use convolutional neural networks, which are tailored specifically to signal processing tasks such as image recognition. See if you can improve your results using convolutional layers in your network.

Additionally, there are several other techniques besides L2 regularization for improving the generalization of your model. Some ideas are dropout, batch normalization, and choice of loss function. You could also see how far you can take these regularization methods to improve your model.
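Combining these ideas, a tiny convolutional sketch with batch normalization and dropout might look like the following. The layer sizes here are arbitrary illustrations, and you would still need to respect the 500,000-parameter budget and the provided NeuralNet interface.

    import torch
    import torch.nn as nn

    # Illustrative small CNN for 3x32x32 inputs.
    conv_net = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),   # 3x32x32 -> 16x32x32
        nn.BatchNorm2d(16),
        nn.ReLU(),
        nn.MaxPool2d(2),                              # 16x32x32 -> 16x16x16
        nn.Conv2d(16, 32, kernel_size=3, padding=1),  # 16x16x16 -> 32x16x16
        nn.ReLU(),
        nn.MaxPool2d(2),                              # 32x16x16 -> 32x8x8
        nn.Flatten(),
        nn.Dropout(p=0.5),
        nn.Linear(32 * 8 * 8, 2),
    )

    # The reader provides flattened 3072-vectors, so inside forward() you would
    # first reshape, e.g. x.view(-1, 3, 32, 32), assuming channel-first ordering.
    x = torch.randn(4, 3, 32, 32)   # fake batch of 4 images
    scores = conv_net(x)            # shape (4, 2)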

There will be a leaderboard ranking the best models. However, all you need to do to get the full extra credit points is to attain an accuracy of at least 88%.

Provided Code Skeleton

We have provided (tar, zip) all the code to get you started on your MP, which means you will only have to implement the PyTorch neural network model.

  • reader.py - This file is responsible for reading in the data set. It creates a giant NumPy array of feature vectors, one corresponding to each image.

  • mp6.py - This is the main file that starts the program, and computes the accuracy, precision, recall, and F1-score using your implementation.

  • neuralnet.py - This is the file where you will be doing all of your work. You are given a NeuralNet class which implements a torch.nn.Module. This class consists of __init__(), set_parameters(), get_parameters(), forward(), and step() functions.

    In the __init__() function you will need to construct the network architecture. There are multiple ways to do this. One way is to use nn.Linear() and nn.Sequential(). Keep in mind that nn.Linear() uses a Kaiming He uniform initialization to initialize the weight matrices and 0 for the bias terms. Another way is to explicitly define weight matrices $W_1, W_2, \dots$ and bias terms $b_1, b_2, \dots$ as torch.tensor() objects. This way is more hands-on and will allow you to choose your own initialization; however, for this assignment the Kaiming He uniform initialization should suffice and is a good choice. Additionally, you can initialize a torch.optim optimizer object in this function to use to optimize your network in the step() function.

    The forward() function should do a forward pass through your network. This means it should explicitly evaluate $F_W(x)$. This can be done by simply calling your nn.Sequential() object defined in __init__(), or, in the torch.tensor() case, by explicitly multiplying the weight matrices by your data.
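    To make this concrete, here is a minimal sketch of what __init__() and forward() might look like using nn.Sequential(). The constructor arguments shown are guesses at the skeleton's interface; treat this as an outline, not a drop-in implementation, and follow the exact signatures given in neuralnet.py.

        import torch
        import torch.nn as nn

        class NeuralNet(nn.Module):
            def __init__(self, lrate, loss_fn, in_size, out_size):
                # Constructor arguments are illustrative guesses; check the skeleton.
                super(NeuralNet, self).__init__()
                self.loss_fn = loss_fn
                self.model = nn.Sequential(
                    nn.Linear(in_size, 32),   # W1, b1
                    nn.Sigmoid(),
                    nn.Linear(32, out_size),  # W2, b2
                )
                self.optimizer = torch.optim.SGD(self.parameters(), lr=lrate)

            def forward(self, x):
                # Evaluate F_W(x): raw class scores, no extra sigmoid at the end.
                return self.model(x)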

    The step() function should perform one iteration of training. This means it should perform one gradient update on a batch of your training data. You can do this by calling loss_fn(yhat, y).backward() and then either updating the weights directly yourself, or using a torch.optim object that you initialized in __init__() to update the network. Be sure to call zero_grad() on your optimizer in order to clear the gradient buffer.
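    Continuing the class sketched above, a step() implementation might look roughly like this; the method signature is a guess, so follow whatever the skeleton specifies.

            def step(self, x, y):
                # One training iteration on the batch (x, y); returns the loss value.
                self.optimizer.zero_grad()                 # clear the gradient buffer
                loss = self.loss_fn(self.forward(x), y)    # e.g. cross-entropy loss
                loss.backward()                            # backpropagate
                self.optimizer.step()                      # gradient update
                return loss.item()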

    More details on what each of these methods in the NeuralNet class should do is given in the skeleton code.

    The function fit() takes as input the training data, training labels, development set, and maximum number of iterations. The training data provided is the output from reader.py. The training labels are a torch tensor consisting of labels corresponding to each image in the training data. The development set is the torch tensor of images on which you are going to test your implementation. The maximum number of iterations is the number you specified with --max_iter (it is 10 by default). fit() outputs the predicted labels. The fit() function should construct a NeuralNet object, and iteratively call the neural net's step() function to train the network. This should be done by feeding in batches of data determined by the batch size. (You will use a batch size of 100 for this assignment.)
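    A rough outline of fit() is sketched below, assuming the signature fit(train_set, train_labels, dev_set, n_iter) and treating each iteration as one batch; the learning rate and the exact return value are placeholders, so return whatever the skeleton asks for.

        import torch

        def fit(train_set, train_labels, dev_set, n_iter, batch_size=100):
            net = NeuralNet(lrate=0.01, loss_fn=torch.nn.CrossEntropyLoss(),
                            in_size=train_set.shape[1], out_size=2)
            n = train_set.shape[0]
            for it in range(n_iter):
                start = (it * batch_size) % n          # cycle through the data in batches
                x_batch = train_set[start:start + batch_size]
                y_batch = train_labels[start:start + batch_size]
                net.step(x_batch, y_batch)
            with torch.no_grad():
                predictions = torch.argmax(net(dev_set), dim=1)
            return predictions.numpy()                 # dev-set labels predicted by the network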

    Do not modify the provided code. You will only have to modify neuralnet.py.

    To understand more about how to run the MP, run python3 mp6.py -h in your terminal.

    Definitely use the PyTorch docs to help you with implementation details. You can also use this PyTorch Tutorial as a reference to help you with your implementation. There are also other guides out there such as this one.

Deliverables

This MP will be submitted via Gradescope.

When you believe your model has attained an acceptable accuracy on the development set, save your trained model using the torch.save() function. Save your model in a file named net.model and submit it together with your neuralnet.py.
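For example, assuming net is your trained NeuralNet instance (the stand-in Sequential below only makes the snippet runnable), saving the whole module looks like this; check the skeleton or Gradescope instructions if only a state_dict is expected.

    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(3072, 32), nn.Sigmoid(), nn.Linear(32, 2))  # stand-in for your trained model
    torch.save(net, "net.model")   # writes the required net.model file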

Please upload only neuralnet.py and net.model to Gradescope.