
better tests #162

Open
jeremymanning (Member) opened this issue May 25, 2018 · 0 comments

once we finish cleaning up the code (if it's not already sufficiently clean), we need better tests to verify that things are working -- i.e., for some benchmark dataset, we should verify that reconstruction performance meets some minimum threshold (that way performance can never get worse without us noticing).

for example:

  • we need a way of checking whether things are "working" (i.e., on some sample dataset, can we reconstruct activity well?)
  • check each "condition" (brain object locations are a subset of the model locations, brain and model locations match exactly, brain and model locations partially overlap, brain and model locations are completely distinct)
  • also include some checks for cases where things are expected to break (e.g., not enough locations to fit a model)
  • our checks should go beyond merely checking data types -- they should verify that we get the "right answers" up to some threshold level of similarity or agreement (see the rough sketch below)
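here's a minimal sketch of what these tests might look like, assuming pytest. note that `simulate_data` and `reconstruct` are hypothetical placeholders for whatever simulation/reconstruction functions we end up exposing, and the 0.8 threshold is just an arbitrary starting point:

```python
# rough pytest sketch -- `simulate_data` and `reconstruct` are hypothetical
# stand-ins, and THRESHOLD is a placeholder value we'd need to calibrate
import numpy as np
import pytest

THRESHOLD = 0.8  # minimum acceptable reconstruction correlation


def make_locations(condition, rng):
    """Build (brain_locs, model_locs) pairs for each overlap condition."""
    model_locs = rng.uniform(-50, 50, size=(20, 3))  # fake 3d coordinates
    if condition == "subset":          # brain locations are a subset of model locations
        brain_locs = model_locs[:10]
    elif condition == "match":         # identical location sets
        brain_locs = model_locs.copy()
    elif condition == "partial":       # half shared, half novel
        brain_locs = np.vstack([model_locs[:10],
                                rng.uniform(-50, 50, size=(10, 3))])
    else:                              # "distinct": no shared locations
        brain_locs = rng.uniform(-50, 50, size=(20, 3))
    return brain_locs, model_locs


@pytest.mark.parametrize("condition", ["subset", "match", "partial", "distinct"])
def test_reconstruction_meets_threshold(condition):
    rng = np.random.default_rng(seed=0)  # fixed seed -> deterministic test
    brain_locs, model_locs = make_locations(condition, rng)

    # hypothetical helpers: simulate ground-truth activity at the model
    # locations plus "observed" recordings at the brain locations, then
    # reconstruct full-brain activity from the observations
    truth, observed = simulate_data(brain_locs, model_locs, rng)
    recon = reconstruct(observed, brain_locs, model_locs)

    # check the "right answer" up to a threshold, not just types/shapes
    r = np.corrcoef(truth.ravel(), recon.ravel())[0, 1]
    assert r >= THRESHOLD, f"{condition}: r = {r:.3f} < {THRESHOLD}"


def test_too_few_locations_raises():
    # expected failure mode: a single electrode shouldn't be enough to fit
    # a model, so this should raise rather than return garbage
    rng = np.random.default_rng(seed=0)
    brain_locs = rng.uniform(-50, 50, size=(1, 3))
    model_locs = rng.uniform(-50, 50, size=(20, 3))
    with pytest.raises(ValueError):
        reconstruct(rng.standard_normal((100, 1)), brain_locs, model_locs)
```

parametrizing over the four overlap conditions keeps each scenario as its own test case, so a regression in one condition shows up in CI without being masked by the others.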