
Automate interaction of successful tests #11

Open
benjamin051000 opened this issue Jan 11, 2023 · 6 comments
Labels
enhancement New feature or request

Comments

@benjamin051000
Member

Option (or default) to run all students' scripts automatically, and only go back to the ones that have issues (compile or sim load errors). For each student's project that loads and runs the TB properly, just output a little table that shows each name and score. Way less interactivity, faster results.

benjamin051000 added the enhancement label Jan 11, 2023
@benjamin051000
Member Author

Have an option to automatically run through all sims and yield a report at the very end (no need to press Return for each student). This will be tricky with sims that break or never finish. Not entirely sure how to handle all the cases here; it may just end up being a "use with caution" flag.

@benjamin051000
Member Author

Not sure whether this could be done with a timeout parameter in the TCL, but it could be done with multiprocessing: have a parent process that kills child procs that take too long.
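
A minimal sketch of the kill-on-timeout idea. Since the simulator runs as an external program anyway, `subprocess.run`'s `timeout` gives the same parent-kills-slow-children behavior without a separate multiprocessing layer. The `vsim` invocation and `run_tb.do` script here are assumptions, not the project's actual commands:

```python
# Sketch only: run one submission's sim in batch mode and kill it if
# it hangs. The vsim command line and run_tb.do script are assumptions.
import subprocess

def run_with_timeout(project_dir: str, timeout_s: float = 120.0) -> bool:
    """Return True if the sim finished cleanly within timeout_s seconds."""
    try:
        result = subprocess.run(
            ["vsim", "-c", "-do", "run_tb.do"],
            cwd=project_dir,
            capture_output=True,
            timeout=timeout_s,
        )
    except subprocess.TimeoutExpired:
        # subprocess.run kills the child for us when the timeout expires.
        return False
    return result.returncode == 0
```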

@benjamin051000
Member Author

This relies on #10 to work well, since separating submissions into their own projects allows for simple parallelization.
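
A sketch of what that fan-out could look like, reusing the hypothetical `run_with_timeout` from the previous comment. Threads are enough here because each worker just blocks on an external sim process:

```python
# Sketch: one project per submission (#10) makes grading a simple map
# over directories. run_with_timeout is the hypothetical helper above.
from concurrent.futures import ThreadPoolExecutor

def grade_all(project_dirs: list[str]) -> dict[str, bool]:
    with ThreadPoolExecutor(max_workers=4) as pool:
        passed = pool.map(run_with_timeout, project_dirs)
    return dict(zip(project_dirs, passed))
```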

@benjamin051000
Member Author

Now that #10 is working out nicely, this is something I've been thinking about more and more. I could see the terminal interface showing the status of each simulation process and its final score (or an X for a failure), and you could select each one to see details (e.g., the transcript log). This would also tie in to #19.

@benjamin051000
Member Author

Testbench results will be confusing to interpret because their grading structures aren't unified. This needs to change.

@benjamin051000
Member Author

We don't actually need #19 in order for this to work. Just have it collect the stdout, parse it, and print a little table. That's a lot easier.
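
A sketch of that parse-and-print step, assuming we capture each sim's stdout and that testbenches print a line like `Score: 9/10`. That line format is an assumption; see the earlier comment about unifying grading output:

```python
# Sketch: pull a score out of each captured transcript and print a
# small table. The "Score: N/M" line format is an assumption.
import re

def parse_score(stdout: str) -> str:
    match = re.search(r"Score:\s*(\d+)\s*/\s*(\d+)", stdout)
    return f"{match.group(1)}/{match.group(2)}" if match else "X"

def print_table(scores: dict[str, str]) -> None:
    width = max(len(name) for name in scores)
    print(f"{'Student':<{width}}  Score")
    for name, score in sorted(scores.items()):
        print(f"{name:<{width}}  {score}")

print_table({"student_a": parse_score("# Score: 9/10"), "student_b": "X"})
```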
