Computer science instructors are increasingly adopting autograding tools to help meet the demands of teaching large classes. Autograding tools can save teachers time by providing immediate feedback to students and by automating parts of the grading process. Recent studies also highlight the benefits students perceive when these tools are used in the classroom (Wilcox, 2015). Professors Dan Wallach and Mike Zamansky shared their perspectives on how autograding tools help shape student success.
Wallach teaches a large introductory computer science course at Rice University. Second-year and third-year undergraduates make up most of his course of over a hundred students. Wallach implemented his own autograder, RiceChecks, which builds on the features of the Gradle build-automation tool.
Zamansky teaches a computer science lab that pairs with CS1 at Hunter College. Last year in his course of first-year undergraduate students, he implemented Travis CI, a service that automatically generates builds and runs tests.
For teachers, autograding tools free up time to provide more meaningful feedback on aspects of student work that cannot be assessed by automation. For example, Zamansky used the extra time to focus on parts of the code that revealed “something about the students.” He went on to note that an English teacher would never hand an entire essay to an autograder, because they have to read students’ work in order to get to know them. They can, however, rely on automated grammar checks to handle the tedious parts of grading, freeing up time to focus on the elements of writing that tell them more about the student. Programming teachers can use autograders the same way.
Wallach offered a similar perspective and encouraged teachers to automate whenever possible in order to “free up your human graders to look at the things you can’t automate.” Automating the more tedious parts of grading refocuses teachers’ efforts on more complex elements of a student’s work, and contributes to student success by helping teachers get to know their students.
Wallach estimated that autograding tools improved student grades by 20 percent in his course. He explained that when students can see the grade metrics for a given assignment, “they can continue pushing [changes and running the autograder] until they reach the metrics they want.”
Delayed grade results and feedback on assignments can be a major pain point for students, and autograding tools help speed up the grading process. Wallach recalled, “If I go back in my course evaluation feedback from a couple years ago, the number one complaint was that it takes forever to get grades back. We were taking weeks. Now, assignments are due Sunday night and three-quarters of the students are graded by Monday afternoon. It’s a mammoth, mammoth improvement.”
Human graders can unintentionally introduce various biases. For example, Wallach proposed a scenario with a tough grader and a grader who “lets anything through.” In this scenario, the human graders introduce a bias that creates unfairness and inconsistencies in feedback. He went on to explain that autograding tools help limit bias and keep his grading “as objective as possible” because “there isn’t a lot of wiggle room where the grades might diverge.”
Some students may also be more sensitive to negative feedback, and using an autograder can help make negative feedback feel less personal. “The computer just says pass or fail,” Wallach explained, “and that’s an opportunity for students to not feel bad about it when they know everyone has the same challenge and they’re fighting against the same autograder. It becomes your shared adversary to overcome.”
Autograding tools may even serve as an objective referee during group projects. Wallach shared, “[In a scenario where an assignment team has a dispute], team members can agree they want to get those points. So, a certain amount of objectivity in the autograder might avoid a certain amount of partner strife.” Again, by serving as an objective feedback system, an autograder gives teammates a shared standard to work toward rather than a source of conflict.
Using autograding tools in the classroom provides students with reassurance and helps them feel more confident when they’re first learning to program. Wallach explained that a student’s machine may not be set up correctly, and sometimes their code might pass locally but fail remotely. Further, a student may forget to commit one of their files. However, “rerunning the autograder on GitHub Actions, Travis CI, or whatever else gives the student a little confidence boost. GitHub Actions gives them a green check mark, and that helps reduce a certain amount of their stress.”
Zamansky suggested that autograding tools may be a good fit for more basic assignments across both intro and advanced courses. He explained, “even if you’re taking an advanced class, the first assignments in any given unit are going to be small things just to see if you understand certain ideas. That’s where an autograder can give students the type of feedback they need. Offload that from the teacher.” Although Zamansky estimated that all students “seem to get to the same place” regardless of whether autograding is used, he suggested that using it gets them “to a level of proficiency more quickly.” Ultimately, teachers can maximize the benefits of an autograder by considering when students may benefit most from immediate feedback.
GitHub Classroom, now with autograding, helps you apply this time-saving practice to your courses just as easily as Classroom helps you automatically distribute assignments. Run existing tests on each student repository or author new tests as you create an assignment. Students can see the test run on every commit, providing constant feedback so they can iterate towards success.
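To make the test-on-every-commit idea concrete, here is a minimal sketch of the kind of unit test an autograder might run against a submission. The `average` function stands in for a hypothetical student solution (in a real assignment it would live in the student’s repository, not alongside the tests), and each assertion is one pass/fail check the student would see reported on each commit:

```python
# Hypothetical student submission: compute the mean of a list of numbers.
# In a real Classroom assignment, this would be imported from the
# student's repository rather than defined here.
def average(numbers):
    return sum(numbers) / len(numbers)

# Autograder-style tests: each function is one pass/fail check.
def test_average_basic():
    assert average([1, 2, 3]) == 2

def test_average_floats():
    # Compare floats with a tolerance rather than exact equality.
    assert abs(average([0.5, 1.5]) - 1.0) < 1e-9

if __name__ == "__main__":
    test_average_basic()
    test_average_floats()
    print("All tests passed")
```

A test runner such as pytest would discover and run these checks automatically; the autograder simply reports whether the run succeeded.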
Chris Wilcox. 2015. The Role of Automation in Undergraduate Computer Science Education. In Proceedings of the 46th ACM Technical Symposium on Computer Science Education (SIGCSE ’15). ACM, New York, NY, USA, 90–95.