Releases: PEtab-dev/petab_select
PEtab Select v0.2.1
What's Changed
- Bump codecov/codecov-action from 3 to 5 by @dependabot in #115
- Bump actions/cache from 3 to 4 by @dependabot in #114
- Bump actions/setup-python from 4 to 5 by @dependabot in #113
- Bump actions/checkout from 3 to 4 by @dependabot in #112
- Fix README logo; add Zenodo by @dilpath in #116
- Release 0.2.1 by @dilpath in #117
New Contributors
- @dependabot made their first contribution in #115
Full Changelog: v0.2.0...v0.2.1
PEtab Select v0.2.0
This release contains some major breaking changes, made to support users providing previous calibration results, e.g. from previous model selection runs. The following changes are reflected in the notebook examples.
- **breaking change** previously, calibration tools would call `candidates` at each iteration of model selection. `candidates` has now been renamed to `start_iteration`, and tools are now expected to run `end_iteration` after calibrating the iteration's models. This structure also simplifies the codebase for other features of PEtab Select.
- **breaking change** previously, calibration tools would determine whether to continue model selection based on whether the candidate space contains any models. Now, calibration tools should rely on the `TERMINATE` signal provided by `end_iteration` to determine whether to continue model selection.
- **breaking change** PEtab Select hides user-calibrated models from the calibration tool until `end_iteration` is called. Hence, if a calibration tool does some analysis on the calibrated models of the current iteration, the tool should use the `MODELS` provided by `end_iteration`, and not the `MODELS` provided by `start_iteration`.
In summary, here's some pseudocode showing the old way.
```python
from petab_select.ui import candidates

while True:
    # Get iteration models
    models = candidates(...).models

    # Terminate if no models
    if not models:
        break

    # Calibrate iteration models
    for model in models:
        calibrate(model)

    # Print a summary/analysis of current iteration models (dummy code)
    print_summary_of_iteration_models(models)
```
And here's the new way. Full working examples are given in the updated notebooks, including how to handle the candidate space.
```python
from petab_select.ui import start_iteration, end_iteration
from petab_select.constants import MODELS, TERMINATE

while True:
    # Initialize iteration, get uncalibrated iteration models
    iteration = start_iteration(...)

    # Calibrate iteration models
    for model in iteration[MODELS]:
        calibrate(model)

    # Finalize iteration, get all iteration models and results
    iteration_results = end_iteration(...)

    # Print a summary/analysis of all iteration models (dummy code)
    print_summary_of_iteration_models(iteration_results[MODELS])

    # Terminate if indicated
    if iteration_results[TERMINATE]:
        break
```
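To see the new control flow in isolation, the `start_iteration`/`end_iteration` protocol can be mimicked with stand-in functions. This is a toy sketch, not the real API: `fake_start_iteration` and `fake_end_iteration` are hypothetical stubs that serve batches of dummy model names, while the loop structure matches the new pattern above.

```python
# Toy stand-ins for the iteration protocol (not the real petab_select API).
MODELS = "models"
TERMINATE = "terminate"

# Each "iteration" serves one batch; an empty batch triggers termination.
_batches = [["m0", "m1"], ["m2"], []]

def fake_start_iteration():
    # Stand-in for start_iteration: return the next batch of models
    return {MODELS: _batches.pop(0)}

def fake_end_iteration(models):
    # Stand-in for end_iteration: signal TERMINATE when no models remain
    return {MODELS: models, TERMINATE: not models}

calibrated = []
while True:
    iteration = fake_start_iteration()
    for model in iteration[MODELS]:
        calibrated.append(model)  # stand-in for calibrate(model)
    results = fake_end_iteration(iteration[MODELS])
    if results[TERMINATE]:
        break

print(calibrated)  # → ['m0', 'm1', 'm2']
```

The point of the protocol is that the tool never decides termination itself; it only reacts to the `TERMINATE` flag returned after each iteration is finalized.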
- Other major changes
  - Many thanks to @dweindl for:
    - documentation! https://petab-select.readthedocs.io/
    - GitHub CI fixes and GHA deployments to PyPI and Zenodo
    - visualizations, examples in the gallery: https://petab-select.readthedocs.io/en/develop/examples/visualization.html
  - fixed a bug introduced in 0.1.8, where FAMoS "jump to most distant" moves were not handled correctly
  - the renamed `candidates` -> `start_iteration`:
    - no longer accepts `calibrated_models`, as they are now automatically stored in the `CandidateSpace` with each `end_iteration`
    - `calibrated_models` and `newly_calibrated_models` no longer need to be tracked between iterations; they are now tracked by the candidate space
    - exclusions via `exclude_models` are no longer supported; exclusions can be supplied with `set_excluded_hashes`
  - model hashes are more readable and composed of two parts:
    - the model subspace ID
    - the location of the model in its subspace (the model subspace indices)
  - users can now provide model calibrations from previous model selection runs. This enables them to skip re-calibration of the same models.
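At its core, reusing previous calibrations amounts to a lookup keyed by model hash. The sketch below is a hedged illustration of that idea only: the dictionary, hash strings, and function names are hypothetical and do not reflect the petab_select API.

```python
# Hypothetical cache of earlier results: model hash -> criterion value.
# Hash strings and names are illustrative, not the petab_select format.
previous_results = {"M1-0": 12.3, "M1-1": 45.6}

def get_criterion(model_hash, calibrate):
    """Return a cached result if available; otherwise calibrate and cache."""
    if model_hash in previous_results:
        return previous_results[model_hash]  # skip re-calibration
    value = calibrate(model_hash)
    previous_results[model_hash] = value
    return value

# Usage: only the unseen hash triggers the (dummy) calibration.
calls = []
def dummy_calibrate(model_hash):
    calls.append(model_hash)
    return 99.9

values = [get_criterion(h, dummy_calibrate) for h in ["M1-0", "M1-1", "M1-2"]]
```

Here `calls` ends up containing only `"M1-2"`, the one model without a previous result, which is exactly the saving the feature provides.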
What's Changed
- pre-commit autoupdate by @dweindl in #58
- GHA: Trigger on each push by @dweindl in #61
- Cleanup imports by @dweindl in #59
- Fix some typehints by @dweindl in #60
- Add sphinx configuration by @dweindl in #57
- Add RTD config by @dweindl in #63
- Fix multi-subspace stepwise moves by @dilpath in #65
- Fix argument order in `calculate_aicc` by @dweindl in #71
- Update README by @dweindl in #69
- Fix some type annotations / docstrings by @dweindl in #70
- Set `__all__` by @dweindl in #72
- Fix FAMoS termination; remove other switching methods by @dilpath in #68
- Doc: Skip builtins by @dweindl in #74
- Clean up `governing_method` and `method` in `CandidateSpace`s by @dilpath in #73
- Do not try to calibrate virtual models by @dilpath in #75
- Sort dictionaries used in hashes, for reproducibility by @dilpath in #78
- Model.to_petab: store parameter estimates as nominal values by @dilpath in #77
- Release 0.1.11 by @dilpath in #79
- Fix RTD sphinx ext by @dilpath in #80
- Release 0.1.12 by @dilpath in #81
- Fix #82 by @dilpath in #83
- Release 0.1.13 by @dilpath in #84
- Nicer model hashes; `model_hash_to_model` method by @dilpath in #86
- Skip virtual models in `models_to_yaml_list` by @dilpath in #66
- Action updates via dependabot by @dweindl in #92
- GHA: Add PyPI deployment workflow by @dweindl in #91
- Fix minimum Python requirement by @dweindl in #97
- Doc: Fix heading levels in example by @dweindl in #96
- Get rid of petab DeprecationWarnings by @dweindl in #93
- Add some docs by @dilpath in #67
- File formats to RTD by @dweindl in #102
- Move test suite description to RTD by @dweindl in #101
- User-calibrated models by @dilpath in #87
- Add logo draft by @dilpath in #103
- Set up setuptools_scm by @dweindl in #100
- Doc: Update RTD landing page by @dweindl in #105
- Switch from black/isort/... to ruff by @dweindl in #99
- Update file format spec by @dilpath in #106
- Doc: fix list formatting by @dweindl in #107
- Visualization methods by @dilpath in #36
- Support SSR as criterion by @dilpath in #108
- Release 0.2.0 by @dilpath in #111