Discussion + Planning: How to unittest an agent? #72
Comments
Related to #57, which has always been my big concern with testing whole agents. It would be great to have someone go through and try to catalog all the things that use some kind of random choice and see if we can set some master seed.
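I don't know yet where all the randomness lives, but as a minimal sketch, assuming the stochastic choices all go through Python's `random` module and NumPy's global generator, a single helper could seed both (`set_master_seed` is a hypothetical name, not something that exists in the codebase yet):

```python
import random

import numpy as np


def set_master_seed(seed: int) -> None:
    """Seed the RNG sources we know about so agent runs are repeatable.

    This assumes all random choices ultimately go through Python's `random`
    module or NumPy's global generator; anything with its own RNG would
    need to be added here once we catalog it.
    """
    random.seed(seed)
    np.random.seed(seed)
```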
@cmaclell Don't you have something like the virtual tutor idea in the works? One concern with some of that would be overfitting to a test case. It might be nice to have some kind of regular batch process (probably not using current CI tools, but maybe) that re-runs some of our canonical experiments and reports fit to human data, just so we can see cases when we totally break something there.
Made a new issue for CI: #74
I do; it is located here: https://gitlab.cci.drexel.edu/teachable-ai-lab/tutorenvs
You might get overfitting, but it does currently randomly generate problems / orders.
Best wishes,
Chris
So, beyond the issues of why this will be hard to implement, what would be some good tests? Some I can think of:
I guess part of this is: what high-level assumptions do we want to make about all agents? Some that seem reasonable, though they might not apply to all agent types or goals:
I'm not entirely clear on what you both mean by overfitting: does the agent overfit, or is there some fitting in the environment? Thanks @cmaclell, these seem like a step in the right direction; I'll take a closer look. @eharpste, these are all things that would be good to incorporate. For the non-deep-learning agents (or at least ModularAgent), expanding on 1, it would be nice to test things that are inside the agent (more of the unit-test variety) beyond behavior, like:
I meant overfitting in a general software-engineering sense, not an ML sense. Basically, we don't want to stay myopically focused on the few test cases we define in cases where things might change as we explore new directions. For example, there are some known cases where the blocked vs. interleaved effect should invert.
Ahh, I see. The unit tests I'm suggesting would be written on an agent-by-agent basis. For a given implementation there is a set of intended behaviors that should be directly enforced via unit tests. If the intended behaviors change, then the unit tests should change; but if they don't change, then the tests should still pass regardless of implementation changes or additions.
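To make that concrete, here is a rough sketch of what one of these per-agent tests might look like with the standard `unittest` module. The import path, the default construction, the state format, and the expectation that an untrained agent proposes nothing are all assumptions for illustration, not the real API:

```python
import unittest

# Hypothetical import path; point this at wherever ModularAgent actually lives.
from apprentice.agents.ModularAgent import ModularAgent


class TestModularAgentIntendedBehavior(unittest.TestCase):
    """Pin down intended behaviors so they keep passing across refactors."""

    def setUp(self):
        # Placeholder construction; the real agent presumably needs its
        # when/where/which learner configuration here.
        self.agent = ModularAgent()

    def test_untrained_agent_proposes_nothing(self):
        # Assumed intended behavior: with no training, the agent should not
        # propose an action for a state it has never seen (the state format
        # here is a placeholder too).
        response = self.agent.request({"field_A": "3", "field_B": ""})
        self.assertFalse(response)


if __name__ == "__main__":
    unittest.main()
```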
This merge adds several additional integration tests that were on the [expanded_tests](https://github.com/apprenticelearner/AL_Core/tree/expanded-tests) branch; this is relevant to, but does not address, #72. It also implements a test for agent serialization and re-introduces the database saving of agents discussed in #60. The changes also remove all print statements that were not already commented out and replace them with the built-in logging library, as discussed in #73.
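For reference, the pattern is just module-level loggers in place of bare prints (a generic sketch, not the exact code from the merge):

```python
import logging

# One logger per module instead of bare print() calls; verbosity can then be
# controlled centrally, e.g. via logging.basicConfig(level=logging.DEBUG).
log = logging.getLogger(__name__)


def report_transaction(transaction):
    # Hypothetical helper just to show the call style.
    log.debug("Agent transaction: %s", transaction)
```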
There are a lot of bugs we have run into with regard to different behavior across different implementations/generations of the code, and these issues are pretty hard to track down. I would like to move toward having a way to unit test agents.
The way we 'test' agents right now involves running them with altrain (which spins up another server that hosts a tutor in the browser). In the short term we look at the general behavior of the agent transactions as they are printed in the terminal, and maybe we additionally print out some of what is happening internally in the agent. In the long term we look at the learning curves, which usually involves first doing some pre-processing/KC labeling and uploading to DataShop.
It would be nice to have unit tests at the level of "the agent is at this point in training with these skills, we give it these interactions, and we expect this to happen."
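As a sketch of the shape such a test could take (everything here is made up for illustration: the import path, the state and SAI dictionaries, and the exact `train`/`request` signatures):

```python
# Hypothetical import path and API; the state/SAI formats are placeholders.
from apprentice.agents.ModularAgent import ModularAgent


def test_agent_repeats_demonstrated_step():
    agent = ModularAgent()

    state = {"a": "1", "b": "2", "answer": ""}
    sai = {"selection": "answer", "action": "UpdateTextField",
           "inputs": {"value": "3"}}

    # Put the agent "at this point in training" with one worked example...
    agent.train(state, sai, reward=1)

    # ...then give it the same interaction and state what we expect to happen.
    response = agent.request(state)
    assert response.get("selection") == "answer"
    assert response.get("inputs", {}).get("value") == "3"
```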
Some impediments to doing this:
I've been flirting with the idea of having a sort of virtual tutor that is just a knowledge base that gets updated in Python, executes the tutor logic, and calls request/train. This would at least address 1) and maybe also 2).
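Roughly what I have in mind, as a minimal sketch (the field names, SAI format, reward values, and the agent's `request`/`train` calls are all stand-ins for whatever the real interface ends up being):

```python
class VirtualAdditionTutor:
    """A tutor that is just a Python knowledge base plus grading logic:
    no browser, no extra server. Everything here is a placeholder sketch,
    not the real altrain interface.
    """

    def __init__(self, a: int, b: int):
        self.state = {"a": str(a), "b": str(b), "answer": ""}

    def check(self, sai) -> int:
        # The tutor logic runs in-process, so tests can call it directly.
        if sai["selection"] != "answer":
            return -1
        correct = str(int(self.state["a"]) + int(self.state["b"]))
        return 1 if sai["inputs"]["value"] == correct else -1

    def apply(self, sai) -> None:
        self.state[sai["selection"]] = sai["inputs"]["value"]


def run_problem(agent, tutor: VirtualAdditionTutor) -> int:
    """Drive one problem: request a step, grade it, and train on the result."""
    sai = agent.request(tutor.state)        # assumed agent API
    reward = tutor.check(sai)
    agent.train(tutor.state, sai, reward)   # assumed agent API
    if reward > 0:
        tutor.apply(sai)
    return reward
```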