Just joined the project?
Onboarding materials are in this private GitHub repo. Your advisor can get you access.
Dan Garcia on the why of CBT: Proficiency-Based Learning (5 minutes)
Armando Fox on the how of CBT: from Questions to Question Generators
Slides for Instructional Resiliency Task Force (presented by Armando Fox)
See the main LEARNER Lab website for recent publications and more details on the team and current efforts.
We’re looking for Master’s students and ambitious undergraduates to help with various PrairieLearn-related enhancements. These projects are a mix of research and engineering, but we’d expect any of them to lead to one or more submitted publications, as well as to become contributions to an assessment-authoring system used by thousands of students!
Prerequisites: Strong software development skills, including solid knowledge of Python or the ability to pick it up fast because you’ve used other web frameworks or modern scripting languages. A self-starter who works well in a small team or as one of a pair. The ability to do significant pathfinding on your own: researching what has been done on a particular topic, finding relevant articles online where appropriate (with your advisor’s or a senior student’s guidance), and in general getting from a high-level description of a project to a specific plan to build and execute it.
Compensation: Pay or credit may be possible, depending on the specific project.
Courseware developers for high-enrollment CS courses
Have you been a GSI/reader/etc. for a high-enrollment upper or lower division CS course? Are you excited about the possibility of improving the course’s assessments by using PrairieLearn? Ask your course’s instructor for their blessing to develop content for the course as part of our group. We will embed you in a team to get you started, and figure out a way to get you compensated for the work.
Automatic support for “Cheat traps”
Prof. Nick Weaver has developed a mechanism that helps detect certain forms of cheating on remote (take-home) online exams. We believe his techniques can be adapted so that anyone authoring assessments in PrairieLearn can build them in. Is this possible, and if so, do the resulting traps successfully detect attempts at certain types of cheating?
Automatic support for cheating detection via UI events
Picture this scenario: a student taking a remote exam has another window open on their screen, through which they are illicitly obtaining the answer to (say) a coding question. They copy and paste the answer and submit it. But if you could look at the timing of the keystrokes, you’d see that the student would have had to type impossibly fast to produce that answer. This and other mechanisms can signal less-than-honorable behavior on an online exam. This project would investigate, and ideally prototype, ways of building such detection mechanisms directly into PrairieLearn.
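As a rough illustration of the keystroke-timing idea, here is a minimal sketch in Python. It assumes a hypothetical log of editor events (timestamp plus number of characters inserted) — PrairieLearn does not currently expose such a log, and the thresholds below are illustrative guesses, not validated cutoffs:

```python
from dataclasses import dataclass

# Hypothetical edit-event record; the field names and units are
# assumptions for this sketch, not an existing PrairieLearn schema.
@dataclass
class EditEvent:
    timestamp: float   # seconds since the exam session started
    chars_added: int   # characters inserted by this edit

def flag_paste_like_bursts(events, max_chars_per_sec=15.0, min_burst_chars=80):
    """Flag events where text appeared faster than plausible human typing.

    An event that inserts a large block of text (min_burst_chars or more)
    at a rate above max_chars_per_sec since the previous event looks more
    like a paste than like typing. Both thresholds are illustrative.
    """
    flags = []
    for prev, curr in zip(events, events[1:]):
        dt = curr.timestamp - prev.timestamp
        too_fast = dt <= 0 or (curr.chars_added / dt) > max_chars_per_sec
        if curr.chars_added >= min_burst_chars and too_fast:
            flags.append(curr)
    return flags

# Example: 200 characters appearing half a second after the previous
# keystroke is far beyond plausible typing speed and gets flagged.
events = [EditEvent(0.0, 1), EditEvent(0.5, 200), EditEvent(10.0, 5)]
suspicious = flag_paste_like_bursts(events)
```

A real detector would of course need tuning against genuine student data (fast typists, autocomplete, legitimate pastes from the student's own scratch work), which is exactly the kind of investigation this project entails.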