
[tune](deps): Bump pytorch-lightning from 1.4.3 to 1.5.5 in /python/requirements/tune #83

Open
dependabot[bot] wants to merge 1 commit into master

Conversation

@dependabot dependabot bot commented on behalf of github Dec 11, 2021

Bumps pytorch-lightning from 1.4.3 to 1.5.5.
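The change itself is a one-line bump of the version pin in the Tune requirements file under /python/requirements/tune (the exact filename and an exact `==` pin are assumptions; the diff is not reproduced here). A minimal, hedged sanity check after installing the updated requirements, assuming standard pytorch-lightning packaging:

```python
# Hedged sketch: confirm the bumped version is what actually got installed.
import pytorch_lightning as pl

assert pl.__version__ == "1.5.5", f"unexpected version: {pl.__version__}"
print("pytorch-lightning", pl.__version__)
```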

Release notes

Sourced from pytorch-lightning's releases.

Standard weekly patch release

[1.5.5] - 2021-12-07

Fixed

  • Disabled batch_size extraction for torchmetrics instances because they accumulate the metrics internally (#10815); see the sketch after this list
  • Fixed an issue with SignalConnector not restoring the default signal handlers on teardown when running on SLURM or with fault-tolerant training enabled (#10611)
  • Fixed SignalConnector._has_already_handler check for callable type (#10483)
  • Fixed an issue where results were duplicated for each dataloader instead of being returned separately per dataloader (#10810)
  • Improved exception message if rich version is less than 10.2.2 (#10839)
  • Fixed uploading best model checkpoint in NeptuneLogger (#10369)
  • Fixed early schedule reset logic in the PyTorch profiler that was causing a data leak (#10837)
  • Fixed a bug that caused incorrect batch indices to be passed to the BasePredictionWriter hooks when using a dataloader with num_workers > 0 (#10870)
  • Fixed an issue with item assignment on the logger on rank > 0 for loggers that support it (#10917)
  • Fixed importing torch_xla.debug for torch-xla<1.8 (#10836)
  • Fixed an issue with DDPSpawnPlugin and related plugins leaving a temporary checkpoint behind (#10934)
  • Fixed a TypeError occurring in the SignalConnector.teardown() method (#10961)
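The first bullet (#10815) concerns Lightning's automatic batch-size extraction. A minimal, hedged sketch of the idiom involved, using the standard LightningModule.log API from 1.5.x (the module and metric names are illustrative, not from this PR):

```python
import torch
from torch import nn
import pytorch_lightning as pl

class ExplicitBatchSizeModule(pl.LightningModule):
    """Passes batch_size to self.log instead of relying on extraction."""

    def __init__(self):
        super().__init__()
        self.net = nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self.net(x), y)
        # With an explicit batch_size, Lightning never has to inspect the
        # batch (or a torchmetrics object) to infer it; cf. #10815 above.
        self.log("train_loss", loss, batch_size=x.size(0))
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
```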

Contributors

@awaelchli @carmocca @four4fish @kaushikb11 @lucmos @mauvilsa @Raalsky @rohitgr7

If we forgot someone due to not matching commit email with GitHub account, let us know :]

Standard weekly patch release

[1.5.4] - 2021-11-30

Fixed

  • Fixed support for --key.help=class with the LightningCLI (#10767)
  • Fixed _compare_version for python packages (#10762)
  • Fixed TensorBoardLogger SummaryWriter not being closed before spawning the processes (#10777)
  • Fixed a consolidation error in Lite when attempting to save the state dict of a sharded optimizer (#10746)
  • Fixed the default logging level for batch hooks associated with training from on_step=False, on_epoch=True to on_step=True, on_epoch=False (#10756); see the sketch after this list
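Because #10756 corrects a default, code that (perhaps unknowingly) relied on the old behavior may want to pin the flags explicitly. A hedged sketch using the standard on_step/on_epoch keywords of self.log in 1.5.x (the module and metric names are illustrative):

```python
import torch
from torch import nn
import pytorch_lightning as pl

class PinnedDefaultsModule(pl.LightningModule):
    """Logs with explicit flags so the #10756 default change is a no-op."""

    def __init__(self):
        super().__init__()
        self.net = nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self.net(x), y)
        # Explicit on_step/on_epoch: independent of the corrected default
        # (on_step=True, on_epoch=False) for batch-level training logging.
        self.log("train_loss", loss, on_step=True, on_epoch=False)
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
```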

Contributors

@awaelchli @carmocca @kaushikb11 @rohitgr7 @tchaton

If we forgot someone due to not matching commit email with GitHub account, let us know :]

Standard weekly patch release

[1.5.3] - 2021-11-24

... (truncated)

Changelog

Sourced from pytorch-lightning's changelog.

[1.5.3] - 2021-11-24

Fixed

  • Fixed ShardedTensor state dict hook registration to check if torch distributed is available (#10621)
  • Fixed an issue with self.log not respecting a tensor's dtype when applying computations (#10076)
  • Fixed LightningLite _wrap_init popping nonexistent keys from DataLoader signature parameters (#10613)
  • Fixed signals being registered within threads (#10610)
  • Fixed an issue that caused Lightning to extract the batch size even though it was set by the user in LightningModule.log (#10408)
  • Fixed Trainer(move_metrics_to_cpu=True) not moving the evaluation logged results to CPU (#10631); see the sketch after this list
  • Fixed the {validation,test}_step outputs getting moved to CPU with Trainer(move_metrics_to_cpu=True) (#10631)
  • Fixed an issue with collecting logged test results with multiple dataloaders (#10522)
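For context on the two #10631 items, this is the Trainer flag involved. A hedged sketch assuming the 1.5.x Trainer API (the other argument values are illustrative):

```python
import pytorch_lightning as pl

# move_metrics_to_cpu asks Lightning to move logged metrics off the
# accelerator to save device memory; #10631 makes this apply to logged
# evaluation results while leaving validation/test step outputs in place.
trainer = pl.Trainer(max_epochs=1, move_metrics_to_cpu=True)
```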

[1.5.2] - 2021-11-16

Fixed

... (truncated)


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Bumps [pytorch-lightning](https://github.com/PyTorchLightning/pytorch-lightning) from 1.4.3 to 1.5.5.
- [Release notes](https://github.com/PyTorchLightning/pytorch-lightning/releases)
- [Changelog](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/CHANGELOG.md)
- [Commits](https://github.com/PyTorchLightning/pytorch-lightning/compare/1.4.3...1.5.5)

---
updated-dependencies:
- dependency-name: pytorch-lightning
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
@dependabot dependabot bot added the dependencies label on Dec 11, 2021
dependabot bot commented on behalf of github Dec 15, 2021

Dependabot tried to update this pull request, but something went wrong. We're looking into it, but in the meantime you can retry the update by commenting @dependabot rebase.

6 similar comments followed on Feb 4, 2022; Aug 16, 2022; Aug 22, 2022; Aug 30, 2022; Jan 25, 2023; and Feb 7, 2023.