
Grading Policy

Is there a curve?

Individual MPs, labs, and exams are not curved. The total scores for the course may be curved (before any extra credit is calculated). The grade cutoffs will never be any higher than posted; however, they may be lowered. Score distributions for CS 225 remain fairly consistent across semesters, and the cutoffs are already set reasonably, so they almost certainly will not be lowered significantly.

I got a ZERO on the MP but my code works!

The test cases used for grading cover your code more thoroughly than the test cases we provide with the MP. If your code doesn't follow C++ conventions (or is flawed in some other way), it may compile against the provided code but not against the grading test cases.

We almost never accept regrade requests for mistakes on your part, however trivial they seem. However, if there is an error in our test cases themselves, we will fix it and regrade at no penalty to you.

Something was graded incorrectly on my exam.

If you believe a problem on your exam was graded in error, just post your concerns privately on Piazza. We'll take a look and let you know.

About autograding

Grading is somewhat of a black box for most students: you submit some code, and eventually some numbers show up in your repository. But what happens in between? This page aims to answer some common questions about the grading process and explain a bit of what goes on behind the scenes.

Policy

About timeouts:

Each test case has a timeout associated with it. Test cases have a default timeout of 10 seconds, but that can be higher or lower on a per-test basis. If a particular test case has a different timeout, you’ll see a tag like [timeout=5000] in the Catch test file (this would run the test with a timeout of 5 seconds). If your code does not complete within the timeout, you’ll receive no points for that test case.
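For illustration only (the header, class, and weight tag below are made up, not taken from any particular MP), a tagged Catch test case might look something like this:

```cpp
#include "catch.hpp"   // Catch single-header test framework
#include "List.h"      // hypothetical assignment header

// The [timeout=5000] tag is the part the autograder reads: this test must
// finish within 5000 ms (5 seconds) or it earns no points.
TEST_CASE("insertBack handles a large input quickly", "[weight=1][timeout=5000]") {
  List<int> list;
  for (int i = 0; i < 100000; i++) {
    list.insertBack(i);
  }
  REQUIRE(list.size() == 100000);
}
```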

About memory leaks:

Test cases that will be run with Valgrind are tagged with [valgrind] in the Catch test file. When running one of these test cases, we first run the test without Valgrind. If you pass, we do a second pass with Valgrind. If Valgrind reports any memory errors or memory leaks, you'll receive a 0 for the test, even if you passed all of the assertions in the test case.
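As a rough sketch (again with made-up class and method names), a Valgrind-checked test just carries the extra tag. You can also run these tests under Valgrind yourself before submitting; for example, if your test executable is named test, something like valgrind ./test "[valgrind]" runs only the tagged tests.

```cpp
#include "catch.hpp"   // Catch single-header test framework
#include "List.h"      // hypothetical assignment header

// The [valgrind] tag tells the grader to rerun this test under Valgrind after
// it passes normally; any memory leak or error then zeroes out the test.
TEST_CASE("destructor frees every node", "[weight=1][valgrind]") {
  List<int>* list = new List<int>();
  list->insertBack(42);
  REQUIRE(list->size() == 1);
  delete list;   // nodes left unfreed here would fail the Valgrind pass
}
```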

About print statements:

Each test case can produce at most 1MB of output on standard out/standard error, for instance by printing with std::cout. However, this limit also applies to any output that comes from our provided code, for instance warnings when you try to access an out-of-range pixel in a PNG object. If you exceed that 1MB output limit, you’ll receive a 0 for that test case. This limit is in place to prevent you from overloading our grading process or results repos with data. Given this, we strongly encourage you to comment out any print statements in your code before submitting.
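If you want to keep debugging output around without risking that limit, one possible pattern (not something we provide; the macro and function names here are arbitrary) is to guard your prints behind a compile-time flag that you only enable locally:

```cpp
#include <iostream>
#include <string>

// #define MY_DEBUG   // uncomment locally; leave commented out when you submit

// Prints to standard error only when MY_DEBUG is defined, so graded runs
// produce no output from this helper.
void debugLog(const std::string& msg) {
#ifdef MY_DEBUG
  std::cerr << "[debug] " << msg << std::endl;   // counts toward the 1MB limit
#else
  (void)msg;   // silenced in graded runs
#endif
}

int main() {
  debugLog("loaded input PNG");   // visible only in your local debug builds
  return 0;
}
```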

FAQ

Q: How does the autograder know which version of my code to grade?

A: GitHub records when each commit was pushed to the server. Using GitHub's API, we select the most recent commit that was present in your repository at the deadline (either the assignment deadline or one of the nightly deadlines for extra credit runs). You can still push commits to your repository after the deadline; they just won't be considered for grading. Note that we'll only ever consider code committed and pushed to the master branch; code committed to any other branch will not be graded.

Q: Only some of the files in my repo are used for grading. Where do the remaining files come from?

A: We have a private repo of assignment files, which forms the base of the grading process. When we grade your code, we fetch only the "graded" files listed on the MP website and use your copies to replace the corresponding files from our private repo. This also lets us include additional test cases, new Makefile targets, and so on.