
Create a better system for defining assessments in the adaptive system #9

Open
eharpste opened this issue Jul 28, 2021 · 0 comments

It seems like the system for switching into a post-test mode for an adaptive controller is not working. Rather than just saying "fix it" I think it would be good to sketch out a better vision for how it should work.

Current Situation

It seems like the intention for the testing mode is to define a maximum number of items and a list of test_problems. Once the agent hits the maximum number of items or reaches what the controller defines as mastery, it switches into test mode and runs through the testing set. Essentially this means that when an agent is done it is given a posttest defined by the user.
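For concreteness, here is a minimal sketch of how I read that intended flow. The names used (max_practice_items, has_mastery, next_problem, evaluate) are illustrative assumptions, not the actual AL_Train API:

```python
# Hypothetical sketch of the intended practice -> posttest flow.
# The controller/agent methods shown here are assumptions for illustration.

def run_training(agent, controller, practice_problems, test_problems,
                 max_practice_items):
    # Practice phase: serve adaptively selected problems until the
    # controller declares mastery or the item cap is reached.
    items_given = 0
    while items_given < max_practice_items and not controller.has_mastery(agent):
        problem = controller.next_problem(practice_problems)
        agent.practice(problem)
        items_given += 1

    # Test phase: run through the user-defined testing set once,
    # with no adaptation, as a posttest.
    return [agent.evaluate(problem) for problem in test_problems]
```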

Feature Ideas

Beyond the fact that the posttest-only system doesn't currently seem to work, I think there could be other ways of doing it. Here are some ideas for possible features that I think would be useful in an agent assessment system:

  1. A system for inserting assessments at defined points in a sequence.
    1. In most cases you probably just want a pretest-posttest design, but I could see defining a mid-test or a series of tests throughout the sim run.
    2. Intermediate (mid-) tests could be defined to be injected after every Nth problem.
    3. This would also open up a structure I've thought would be interesting to leverage given the simulation setting: run a full posttest after every item (or even every step). Since a simulated agent doesn't need to manifest a testing effect, this would let you have higher-fidelity incremental assessment than is humanly possible.
  2. A system for defining assessment item pools
    1. Since the current system is only designed for posttests it doesn't have a notion of counterbalancing test forms. It would be great if you could define item pools and settings for how to handle their counterbalancing.
  3. A system for easily picking test items out of the logs.
    1. This relates to AL_Train/#12.
    2. It would be great if the testing mode were injected into the log in a way that you could easily condition on it when calculating a test score. Given 1 and 2 above, it should probably also include the item pool / which test it is so that you can tell what is pre and what is post. A rough sketch of how these pieces might fit together appears below this list.
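To make 1-3 above more concrete, here is a rough sketch of what a declarative assessment schedule with item pools and tagged logging could look like. All of the names here (ItemPool, AssessmentSchedule, the agent/log callables) are hypothetical; this is a sketch of the feature, not existing code:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ItemPool:
    """A named pool of test items split into counterbalanced forms (idea 2)."""
    name: str
    forms: List[List[str]]  # e.g. two parallel forms of item ids

    def assign_form(self, agent_index: int) -> List[str]:
        # Simple counterbalancing: rotate forms across simulated agents.
        return self.forms[agent_index % len(self.forms)]

@dataclass
class AssessmentSchedule:
    """Where assessments get injected into the practice sequence (idea 1)."""
    pretest: Optional[ItemPool] = None
    posttest: Optional[ItemPool] = None
    midtest_every_n: Optional[int] = None  # inject a mid-test after every Nth problem
    midtest_pool: Optional[ItemPool] = None

    def due_after(self, problems_completed: int) -> Optional[ItemPool]:
        # Return a pool if a mid-test is due at this point in the sequence.
        if (self.midtest_every_n and problems_completed > 0
                and problems_completed % self.midtest_every_n == 0):
            return self.midtest_pool
        return None

def give_assessment(agent, pool, phase, agent_index, log):
    """Run one assessment and tag every log row so scores are easy to
    compute by conditioning on phase/pool later (idea 3)."""
    for item in pool.assign_form(agent_index):
        correct = agent.evaluate(item)  # assumed agent API
        log({"item": item,
             "correct": correct,
             "assessment_phase": phase,  # "pretest", "midtest", or "posttest"
             "item_pool": pool.name})
```

A driver loop could then give the pretest before practice, check due_after(...) after each practice problem, and give the posttest once the controller signals mastery or the item cap is hit, with every logged row carrying the phase/pool tags needed to score pre vs. post.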