
NAP (Sub)Problems and their effects on the LM #2

@lambdaTotoro

In the systems meeting today, we talked about how peer grading (and even instructor grading) currently does not update the LM, and how it should in the new system. The consensus was that we probably want to deal with this on the basis of subproblems (if there are any).

What's currently happening (as documented here) is that the LMP catches events from the frontend that carry all the necessary information and updates the LM from them. Here's an example of what that could look like:

{
	"type" : "problem-answer",
	"uri" : "http://mathhub.info/iwgs/quizzes/creative_commons_21.tex",
	"learner" : "ab34efgh",
	"score" : 2.0,
	"max-points" : 2.0,
	"updates" : [{
		"concept" : "http://mathhub.info/smglom/ip/cc-licenses",
		"dimensions" : ["Remember", "Understand", "Evaluate"],
		"quotient" : 1.0
	}],
	"time" : "2023-12-12 16:10:06",
	"payload" : "",
	"comment" : "IWGS Tuesday Quiz 7"
}

The interesting bits are in the list associated with "updates". It allows one event to update multiple concepts differently (flexibility that answer classes require), e.g. "this learner has understood DFS correctly but Prolog syntax poorly". Any combination of cognitive dimensions can be included as well. As of now, the LMP does not look up anything like prerequisites or objectives; it relies only on the list given here (again, because the general information about the problem isn't as precise as answer classes, and those should be what informs the update the most).
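To make the DFS/Prolog example concrete, here is a sketch of what such a multi-concept "updates" list could look like; the two concept URIs are hypothetical placeholders, not actual smglom paths:

"updates" : [{
	"concept" : "http://mathhub.info/smglom/graphs/dfs",
	"dimensions" : ["Understand", "Apply"],
	"quotient" : 0.9
}, {
	"concept" : "http://mathhub.info/smglom/prolog/syntax",
	"dimensions" : ["Remember", "Apply"],
	"quotient" : 0.3
}]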

The "quotient" is a number between 0 and 1 that reflects the performance in this answer. This can, but does not have to, be just the quotient of score over max-points as a first approximation. But again, maximum flexibility for maximum usefulness of answer classes.

What I would imagine happening is something along the following lines:

  • Learner submits an answer to a given practice (sub)problem.
  • Grader (peer or instructor) selects answer classes, adjusts points if necessary, and enters feedback. (We can and should debate if …)
  • LMP gets thrown one event per problem if there are no subproblems, otherwise one per subproblem, and updates the learner model (see the sketch below).
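To sketch that last step: a problem with two subproblems would produce two events for one graded submission, along the lines of the following (the URIs, ids, and points are made up, and how exactly a subproblem is referenced, here as a fragment on the "uri", is still to be designed):

{
	"type" : "problem-answer",
	"uri" : "http://mathhub.info/iwgs/homework/hw05.tex#subproblem-1",
	"learner" : "ab34efgh",
	"score" : 1.0,
	"max-points" : 2.0,
	"updates" : [{
		"concept" : "http://mathhub.info/smglom/graphs/dfs",
		"dimensions" : ["Understand"],
		"quotient" : 0.5
	}],
	"time" : "2023-12-12 16:10:06",
	"payload" : "",
	"comment" : "graded by peer ij56klmn"
}

{
	"type" : "problem-answer",
	"uri" : "http://mathhub.info/iwgs/homework/hw05.tex#subproblem-2",
	"learner" : "ab34efgh",
	"score" : 2.0,
	"max-points" : 2.0,
	"updates" : [{
		"concept" : "http://mathhub.info/smglom/prolog/syntax",
		"dimensions" : ["Remember", "Apply"],
		"quotient" : 1.0
	}],
	"time" : "2023-12-12 16:10:06",
	"payload" : "",
	"comment" : "graded by peer ij56klmn"
}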

I'm not exactly sure which parts of this still need to be designed, but that's my end of it.
