The first thing you need to do is to download this file: mp09.zip. It has the following content:
submitted.py: Your homework. Edit it, and then submit it to Gradescope.
mp09_notebook.ipynb: This is a Jupyter notebook to help you debug. You can completely ignore it if you want, although you might find that it gives you useful instructions.
tests: This directory contains the visible tests.
The list of modules you will need to import/install:
torch
numpy
os
torchvision
You will not require a GPU for this MP.
This file (mp09_notebook.ipynb) will walk you through the whole MP with instructions and suggestions, and it is highly recommended that you follow this notebook.
The objective of this assignment is to create a full end-to-end training and testing pipeline for a convolutional neural network (CNN) for the task of image classification on a modified version of the standard vision dataset CIFAR10. You will learn the concept of finetuning your model, in which you freeze your convolutional backbone and finetune newly initialized linear layer(s) for a specific task.
There are 8 target categories: airplane (0), automobile (1), bird (2), deer (3), frog (4), horse (5), ship (6), truck (7). Given an image, your CNN will be expected to classify it into one of these categories.
You will be using PyTorch for this MP. In MP04, you gained some familiarity with the PyTorch library, and you will build upon this foundation in this MP by designing and implementing the whole pipeline from scratch. You will need to consult the PyTorch documentation to help you with implementation details. Please make sure you read the function definitions and descriptions in submitted.py carefully before completing them.
In this section you will create a PyTorch Dataset based on the torch.utils.data.Dataset
class.
Some useful resources:
It is highly recommended to read and understand these resources before diving into the code. Here is a short summary:
The data folder contains 6 files, each of which is a pickled Python object: data_batch_1, data_batch_2, data_batch_3, data_batch_4, data_batch_5, and test_batch. Each file contains roughly 8000 samples from the dataset (roughly, because we only keep 8 of the original 10 classes); the first 5 files correspond to our training set and the test_batch file corresponds to our test set. You will need to read all of these files and import the samples in your code. There are memory-efficient ways to do so, but simply reading all the samples from the files at once (according to train/test mode) will work fine for this assignment. Each file is a dictionary containing the data and the labels, which will be "visible" after unpickling. data is a numpy array of shape (num_samples, 3072) containing the pixel values (in the range [0...255]) for each 32x32 image. The pixel values are stored in a specific order (all R values, then all G values, then all B values, in row-major order), described in detail in the link provided above. labels is a numpy array of shape (num_samples,) containing the categorical label for each sample.
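For reference, here is a minimal sketch of how one of these batch files could be unpickled. It assumes the standard CIFAR-10 distribution format, in which each batch is a Python-2-era pickle whose keys are the byte strings b"data" and b"labels"; verify the keys against the files actually shipped with this MP.
import pickle
import numpy as np

def load_batch(path):
    # Standard CIFAR-10 batches were pickled under Python 2, so encoding="bytes"
    # is needed; the b"data"/b"labels" keys are an assumption to check.
    with open(path, "rb") as f:
        batch = pickle.load(f, encoding="bytes")
    data = np.asarray(batch[b"data"])      # shape (num_samples, 3072), uint8
    labels = np.asarray(batch[b"labels"])  # shape (num_samples,)
    return data, labels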
The Dataset you will need to create has three major member functions: __init__, __len__, and __getitem__. __init__ is the constructor for the class inheriting Dataset - this is where you may want to load the data from the provided data files and store it in some member variable. __len__ returns the length of the dataset (the number of samples) and provides the DataLoader wrapper with an idea of the range for index sampling. __getitem__ should return an image and a label (a single sample) when called with a given numerical index - this function will be called many times when a batch is constructed by the DataLoader.
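Putting these pieces together, a Dataset subclass might look roughly like the sketch below. This is illustrative only, not the required interface of submitted.py (your constructor arguments may differ); it reuses the hypothetical load_batch helper sketched above and assumes a torchvision-style transform is applied per sample.
import numpy as np
from torch.utils.data import Dataset

class CIFAR10Sketch(Dataset):
    """Illustrative only; follow the class and function signatures required by submitted.py."""

    def __init__(self, data_files, transform=None):
        data, labels = [], []
        for path in data_files:
            d, l = load_batch(path)  # hypothetical helper from the sketch above
            data.append(d)
            labels.append(l)
        self.data = np.concatenate(data, axis=0)      # (N, 3072)
        self.labels = np.concatenate(labels, axis=0)  # (N,)
        self.transform = transform

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        # Undo the R/G/B row-major packing into an (H, W, C) uint8 image;
        # transforms.ToTensor() will convert this to a (C, H, W) float tensor in [0, 1].
        img = self.data[idx].reshape(3, 32, 32).transpose(1, 2, 0)
        if self.transform is not None:
            img = self.transform(img)
        return img, int(self.labels[idx])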
import submitted
# helper functions to visualize images from https://pytorch.org/vision/stable/auto_examples/plot_visualization_utils.html
import matplotlib.pyplot as plt
from PIL import Image
import numpy as np
import torch
import torchvision.transforms.functional as F
# names = {0: "ship", 1: "automobile", 2: "dog", 3: "frog", 4: "horse"}
# names = {0: "airplane", 1: "automobile", 2: "bird", 3: "cat", 4: "deer", 5: "dog", 6: "frog", 7: "horse", 8: "ship", 9: "truck"}
names = {0: "airplane", 1: "automobile", 2: "bird", 3: "deer", 4: "frog", 5: "horse", 6: "ship", 7: "truck"}
def show(imgs, figsize=None):
    if not isinstance(imgs, list):
        imgs = [imgs]
    if figsize is not None:
        fig, axs = plt.subplots(ncols=len(imgs), squeeze=False, figsize=figsize)
    else:
        fig, axs = plt.subplots(ncols=len(imgs), squeeze=False)
    for i, img in enumerate(imgs):
        img = img.detach()
        img = F.to_pil_image(img)
        axs[0, i].imshow(np.asarray(img))
        axs[0, i].set(xticklabels=[], yticklabels=[], xticks=[], yticks=[])
Let's visualize some images from one of the provided data batch files and try instantiating our data loader to see whether it works. Note that you need to complete the build_dataset and build_dataloader functions, as well as the CIFAR10 dataset class.
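One plausible reading of build_dataloader (the exact signature and expected keys are defined in submitted.py, so treat this as an assumption) is that it simply forwards loader_params to torch.utils.data.DataLoader:
from torch.utils.data import DataLoader

def build_dataloader_sketch(dataset, loader_params):
    # loader_params is assumed to be a dict such as {"batch_size": 4, "shuffle": True}.
    return DataLoader(dataset, **loader_params)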
from torchvision import transforms
# it may take a little while to build the dataset
example_dataset = submitted.build_dataset(["cifar10_batches/data_batch_1"], transform=transforms.ToTensor())
import random
image, label = example_dataset[random.randint(0, len(example_dataset)-1)]
print("An example of a", names[label])
show(image)
An example of a deer
If you implemented your dataset class and your build_dataset function correctly, you should be able to visualize random images from the dataset through the cell above. Feel free to run that cell as many times as you like to verify that your dataset is working correctly. Now, let's instantiate a DataLoader so that we can sample batches of images from the dataset and visualize these images. Try visualizing multiple different batches. As a quick exercise, try printing out the labels of each element in the batch and verify that they correspond to the images correctly (a one-line sketch of this check follows the next cell).
loader_params = {"batch_size": 4, "shuffle": True}
example_dataloader = submitted.build_dataloader(example_dataset, loader_params=loader_params)
from torchvision.utils import make_grid
plt.rcParams["savefig.bbox"] = 'tight'
image_batch, label_batch = next(iter(example_dataloader))
print("image batch shape: ", image_batch.shape)
print("label batch shape: ", label_batch.shape)
show(make_grid([image_batch[i, :, :, :] for i in range(image_batch.shape[0])], nrow=4), figsize=(32, 32))
image batch shape: torch.Size([4, 3, 32, 32]) label batch shape: torch.Size([4])
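For the quick exercise mentioned above, one simple way to check the labels against the grid is to map each label in the batch to its class name:
# Print the class name for each label in the batch, in the same order as the grid above.
print([names[int(lbl)] for lbl in label_batch])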
In this section you will create your own model based on a pretrained backbone and finetune it on the CIFAR10 dataset.
Some useful resources:
It is highly recommended to read and understand these resources before diving into the code. Here is a short summary:
It is a very common and useful practice to leverage pretrained models (models that have been trained for a specific task already) on other downstream tasks. There can be many reasons to do so: (1) the representations learned by a model for a different dataset/task may transfer well to our desired task, (2) we may want to train the model with less compute resources so we don't want to train the entire model, etc. Intuitively, suppose you have trained a model on the massive ImageNet dataset to recognize all kinds of different objects and you obtain relatively good performance. The features the model has learned to extract through its convolutional backbone to detect "general" objects could be very applicable to a different dataset (with different objects). In this part of the MP, you will leverage a pretrained model and finetune it on our CIFAR10 dataset. We provide a model checkpoint resnet18.pt
in the source code, corresponding to a pretrained version of the ResNet18 architecture. You will need to figure out how to load this model checkpoint and then identify which layers to use as your backbone (hint: only the final part of your network should be excluded from your backbone).
After you load your model checkpoint, you also need to initialize new classification layers on top of the backbone. Think about what type of layers you would need for classification. Finally, you can complete your forward function in your network according to your logical separation of backbone/classifier.
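As one possible structure (a sketch only: the class name, the use of torchvision's resnet18, the assumption that resnet18.pt is a state_dict, and the 8-way output layer are all assumptions rather than requirements of submitted.py), you could treat everything except ResNet18's final fully connected layer as the backbone and add a freshly initialized linear classifier:
import torch
import torch.nn as nn
from torchvision.models import resnet18

class FinetuneNetSketch(nn.Module):
    """Illustrative sketch; adapt it to the class structure required by submitted.py."""

    def __init__(self, checkpoint_path="resnet18.pt", num_classes=8):
        super().__init__()
        base = resnet18()
        # Assumes the provided checkpoint is a state_dict for torchvision's ResNet18;
        # if it is a full pickled model instead, torch.load alone may be enough.
        base.load_state_dict(torch.load(checkpoint_path, map_location="cpu"))
        # Backbone: every layer except the final fully connected classifier.
        self.backbone = nn.Sequential(*list(base.children())[:-1])
        # Newly initialized classification head for the 8 target categories.
        self.classifier = nn.Linear(base.fc.in_features, num_classes)

    def forward(self, x):
        features = self.backbone(x)            # (N, 512, 1, 1) after global average pooling
        features = torch.flatten(features, 1)  # (N, 512)
        return self.classifier(features)       # unnormalized logits, shape (N, num_classes)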
You are almost done with your implementation of your network. The final step is to ensure that your backbone parameters are frozen! This means that the weight parameters in your backbone should not receive any gradient updates during backpropagation. In essence, we are assuming the backbone is already effective for the task at hand (so it no longer needs to be trained) and we only train the classifier. Refer to the PyTorch tutorial for help (there are also many online resources that discuss this topic).
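Here is a minimal sketch of freezing the backbone, assuming the hypothetical FinetuneNetSketch layout from above; passing only the trainable parameters to the optimizer is a common complementary step:
import torch

model = FinetuneNetSketch()  # hypothetical class from the sketch above

# Freeze every backbone parameter so it receives no gradient updates.
for param in model.backbone.parameters():
    param.requires_grad = False

# Only the classifier's (still trainable) parameters are handed to the optimizer.
optimizer = torch.optim.SGD(
    filter(lambda p: p.requires_grad, model.parameters()), lr=0.01
)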
You are now ready to train and test your model! Fill out the train and test functions -- MP04 should give you a good reference for how to do so. Please be careful about the loss function / computation (e.g. if you use negative log likelihood loss, make sure your logits are normalized). run_model should orchestrate the entire training and testing flow and should call the functions you have completed so far.
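For orientation, here is a stripped-down sketch of what train and test might do. It is not the required structure of submitted.py (the real functions likely need logging, multiple epochs, and argument handling), and it assumes CrossEntropyLoss on unnormalized logits, which combines log-softmax and negative log likelihood internally:
import torch
import torch.nn as nn

def train_one_epoch(model, dataloader, optimizer, loss_fn=nn.CrossEntropyLoss()):
    model.train()
    for images, labels in dataloader:
        optimizer.zero_grad()
        logits = model(images)
        loss = loss_fn(logits, labels)
        loss.backward()   # only the unfrozen (classifier) parameters receive gradients
        optimizer.step()

def test_accuracy(model, dataloader):
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for images, labels in dataloader:
            predictions = model(images).argmax(dim=1)
            correct += (predictions == labels).sum().item()
            total += labels.numel()
    return correct / total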
Note: ResNet18 is fairly large to train on a CPU. Do not be alarmed if it takes a few minutes to train and test. Through empirical verification, you should be able to get ~90% performance after 5-7 minutes of training if done correctly.
submitted.run_model()
If you've reached this point, and all of the above sections work, then you're ready to try grading your homework! Before you submit it to Gradescope, try grading it on your own machine. This will run some visible test cases. Note that these visible test cases do not test the accuracy of your model, but we expect your finetuned model to achieve at least 90% accuracy on the test set to pass the hidden test cases. Make sure you test locally and verify that you can achieve 90% accuracy before submitting to the autograder (it may take even longer to run on the autograder).
The exclamation point (!) tells the notebook to run the following as a shell command. Obviously you don't need to run the code this way -- this usage is here just to remind you that you can also, if you wish, run this command in a terminal window.
!pip3 install gradescope_utils
!python grade.py
Requirement already satisfied: gradescope_utils in /Users/pranavsriram/opt/anaconda3/lib/python3.8/site-packages (0.5.0)
Datasets built. Dataloaders built. Model built. Starting training....
Train epoch 0: Loss: 2.0863759517669678, 0 / 39995 [0.0%]
Train epoch 0: Loss: 2.0318400859832764, 64 / 39995 [0.16%]
Train epoch 0: Loss: 1.9658395051956177, 128 / 39995 [0.32%]
... (loss decreases steadily over the rest of epoch 0; output truncated) ...
Test accuracy: 94.47430928866109
Accuracy: tensor(0.9447)
+5 points for accuracy above 0.75
+5 points for accuracy above 0.8
+5 points for accuracy above 0.85
+5 points for accuracy above 0.9
...
----------------------------------------------------------------------
Ran 3 tests in 421.066s

OK