once we finish cleaning up the code (if it's not already sufficiently clean), we need better tests to verify things are working-- i.e. for some benchmark dataset, we should verify that performance is at least up to some minimum threshold (that way performance can never get worse without us realizing; see the sketches after the list below).
for example:
- need a way of checking whether things are "working" (i.e. on some sample dataset, can we reconstruct well?)
- check each "condition": brain object locations are a subset of model locations, brain and model locations match exactly, brain and model locations partially overlap, and brain and model locations are completely distinct
- also include some checks for cases where things are expected to break (e.g. not enough locations), verifying that they fail loudly rather than returning garbage
- our checks should go beyond merely checking data types-- they should verify that we get the "right answers" up to some threshold similarity or agreement
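as a concrete starting point, here's a minimal pytest sketch of the benchmark-threshold idea. the imported names (`load_benchmark`, `reconstruct`, `our_package`) are placeholders for whatever our actual API ends up being, and the `0.8` threshold is arbitrary-- we'd calibrate it against current performance:

```python
import numpy as np

# hypothetical helpers -- these names are placeholders, not our real API:
# load_benchmark() would return a small bundled dataset (a brain object,
# a model, and the observed activity at held-out locations), and
# reconstruct(brain, model) would return predicted activity at those
# same held-out locations.
from our_package.testing import load_benchmark, reconstruct

MIN_CORRELATION = 0.8  # placeholder -- calibrate against current performance


def test_reconstruction_meets_benchmark_threshold():
    """reconstruction accuracy on the benchmark set must never regress."""
    brain, model, held_out_truth = load_benchmark()
    predicted = reconstruct(brain, model)

    # compare predicted vs. observed activity at the held-out locations;
    # correlating actual values (rather than just checking dtypes/shapes)
    # is what catches silent regressions in the "right answers"
    r = np.corrcoef(predicted.ravel(), held_out_truth.ravel())[0, 1]
    assert r >= MIN_CORRELATION, (
        f"benchmark correlation {r:.3f} fell below threshold {MIN_CORRELATION}"
    )
```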
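and a sketch of the condition checks plus an expected-failure check, with the same caveat-- `make_locations`, `build_brain`, and `build_model` are hypothetical stand-ins for our real fixtures, and the assumption that too few locations raises `ValueError` is just for illustration:

```python
import numpy as np
import pytest

# hypothetical stand-ins for our real fixtures/API; build_brain(locs) is
# assumed to return (brain_object, ground_truth), where ground_truth is
# the full activity at the model's locations.
from our_package.testing import (
    make_locations, build_brain, build_model, reconstruct,
)


def overlap_cases():
    """(brain_locs, model_locs) pairs covering each location condition."""
    model_locs = make_locations(n=50, seed=0)
    other_locs = make_locations(n=50, seed=1)  # disjoint from model_locs
    return {
        "subset": (model_locs[:20], model_locs),
        "identical": (model_locs, model_locs),
        "partial_overlap": (np.vstack([model_locs[:25], other_locs[:25]]),
                            model_locs),
        "disjoint": (other_locs, model_locs),
    }


@pytest.mark.parametrize("condition",
                         ["subset", "identical", "partial_overlap", "disjoint"])
def test_reconstruction_under_each_location_condition(condition):
    brain_locs, model_locs = overlap_cases()[condition]
    brain, ground_truth = build_brain(brain_locs, seed=0)
    model = build_model(model_locs, seed=0)
    predicted = reconstruct(brain, model)
    r = np.corrcoef(predicted.ravel(), ground_truth.ravel())[0, 1]
    assert r >= 0.5  # placeholder -- per-condition thresholds could differ


def test_too_few_locations_raises():
    """cases that are supposed to break should fail loudly."""
    brain, _ = build_brain(make_locations(n=1, seed=0), seed=0)
    model = build_model(make_locations(n=50, seed=0), seed=0)
    with pytest.raises(ValueError):  # assumed error type -- adjust to the real one
        reconstruct(brain, model)
```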