As part of the bachelor's programme in Computer Science here at TU Delft, students are taught concepts of programming languages. These range from functional concepts such as lambdas and cons-nil lists to more imperative concepts such as records and mutation. And there are as many ways to teach these concepts as there are concepts to teach. One approach is to ask students to implement a definitional interpreter that makes these concepts explicit, which is the approach chosen by Shriram Krishnamurthi and Joe Gibbs Politz for their book Programming and Programming Languages.
For the course we chose to follow Shriram's book, but we replaced the Pyret programming language with Scala. The main reason was that Scala is a much more widely used language, for which helpful resources are far more readily available. As an additional advantage, it teaches students another programming language that is actually useful outside the course. Luckily, most of the Pyret code from the book translated easily to Scala.
I was a teaching assistant when the course was first taught using Shriram's book, now two years ago. With the increasing enrollment in the course, a big part of my efforts was concentrated on making it easier for us to automatically assess the students' submissions, by writing a test suite that exercises their implementations. Since the students had to modify and extend their interpreter each week to support new concepts, it quickly became apparent that maintaining these test suites by hand would become a nightmare if we didn't automate their creation as well. So we used Scala mixin composition to create test suites from the individual test sets of each concept, and wrote a test generator that, given just the test inputs, would spit out a full ScalaTest test suite asserting that the student's implementation produces the same results as our reference implementation.
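To illustrate the idea, here is a minimal sketch of mixin-composed test sets. All names (Interpreter, TestSet, ArithTests, and so on) are hypothetical, and the sketch uses plain assertions rather than the ScalaTest suites we actually generated; it only shows how per-concept traits stack their test inputs and how a conformance check compares a student implementation against the reference.

```scala
// Hypothetical sketch: each language concept contributes a trait of test
// inputs, and a suite is assembled by mixing those traits together.

trait Interpreter {
  def interp(program: String): String
}

trait TestSet {
  def cases: List[String] = Nil
}

// Per-concept test sets prepend their inputs to whatever super provides,
// so mixing in more traits accumulates more tests.
trait ArithTests extends TestSet {
  override def cases: List[String] = "(+ 1 2)" :: "(* 3 4)" :: super.cases
}

trait LambdaTests extends TestSet {
  override def cases: List[String] = "((lambda (x) x) 5)" :: super.cases
}

// The generated suite runs every input through both implementations
// and checks that the results agree.
class ConformanceSuite(student: Interpreter, reference: Interpreter)
    extends ArithTests with LambdaTests {
  def run(): Boolean =
    cases.forall(p => student.interp(p) == reference.interp(p))
}
```

A suite for a later week would simply mix in the new concept's trait, which is what made regenerating the suites each week cheap.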
While this worked well, there was room for improvement. Students tended not to understand why a particular implementation was incorrect, and kept running our test suite against their code to see whether their seemingly random changes actually improved the score. So last year we asked the students to write a test suite for their implementation, including tests for corner cases, before they actually started implementing it. However, this meant another burden on us as teaching assistants, so we automated the assessment of the test suites as well, by running their test suites on slightly crippled versions of our reference implementation. If a test suite contained a test for a particular corner case, it should fail on our crippled implementation and succeed on the non-crippled one. I believe this worked quite well, and it gave the students a better understanding of the concepts they were about to implement.
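The crippled-implementation idea can be sketched as a form of mutation testing. The code below is illustrative only, with hypothetical names and a toy interpreter over arithmetic triples: each "mutant" trait overrides one case of the reference with a deliberate bug, and a student test suite covers that case exactly when it fails on the mutant while passing on the real reference.

```scala
// Hypothetical sketch: a reference interpreter and a deliberately
// "crippled" variant, used to judge whether a student test suite
// actually exercises a given corner of the language.

trait Reference {
  // A toy interpreter: (operator, left operand, right operand) => result.
  def interp(expr: (String, Int, Int)): Int = expr match {
    case ("+", a, b) => a + b
    case ("*", a, b) => a * b
    case _           => sys.error("unknown operator")
  }
}

// A crippled variant that mishandles multiplication on purpose.
trait BrokenMul extends Reference {
  override def interp(expr: (String, Int, Int)): Int = expr match {
    case ("*", a, b) => a + b // deliberate bug
    case other       => super.interp(other)
  }
}

// A student test suite is a list of (input, expected result) pairs;
// it "passes" on an implementation if every expectation holds.
def passes(tests: List[((String, Int, Int), Int)], impl: Reference): Boolean =
  tests.forall { case (e, expected) => impl.interp(e) == expected }
```

Running a student suite against both `new Reference {}` and `new BrokenMul {}` then tells us whether the suite contains a test that distinguishes the two, i.e. whether it covers multiplication.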
For more information, read the paper A scalable infrastructure for teaching concepts of programming languages in Scala with WebLab: an experience report by Tim van der Lippe, Thomas Smith, Eelco Visser and myself, as published in the Proceedings of the 2016 7th ACM SIGPLAN Symposium on Scala.