change version #29097
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
We still have failing tests. On this PR, the accelerate installation is changed. @amyeroberts mentions she already has some work on solving this, and @ArthurZucker mentions he doesn't want to use accelerate `main`. So let's keep this PR as it is despite the remaining failures.

cc @muellerzr
@ydshieh Do we know the original reasons for using accelerate `main` rather than the stable release in the CI?
@ydshieh Just to confirm - changing the accelerate version here would mean that
@amyeroberts I didn't know about that fix. However, that fix was merged 4 days ago, right? But last night's run still has those failures, so it's not this PR introducing this again (i.e. those failures are still on `main`).
I propose to revert the changes regarding accelerate in this PR (as they didn't fix the issue). If eventually everyone agrees with @ArthurZucker, we can open another PR to use the stable accelerate release.
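For context, the difference between the two approaches being discussed comes down to which pip requirement the CI installs. A minimal sketch, with illustrative variable names and an example version number (this is not the actual CI configuration touched by the PR):

```python
# Two ways a CI job could install accelerate; the version number and variable
# names are illustrative only, not taken from this PR's CI config.

# Option 1: pin to a stable release published on PyPI (reproducible, but a fix
# merged upstream only becomes available after the next release).
ACCELERATE_STABLE = "accelerate>=0.27.0"

# Option 2: install the development version from the main branch (picks up
# unreleased fixes immediately, but can also break the CI unexpectedly).
ACCELERATE_MAIN = "git+https://github.com/huggingface/accelerate@main#egg=accelerate"

# The CI would then run, for example:
#   pip install "accelerate>=0.27.0"
#   pip install "git+https://github.com/huggingface/accelerate@main#egg=accelerate"
```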
@ydshieh OK, what I suggest for the moment is skipping all the models where this currently fails. Separate from this PR, I think there's an issue with our test fetcher. There have been quite a few PRs recently where the whole CI has been green, and then after merging into main, another PR which touches related (but different) code suddenly starts to have CI failures. These tests are one example. Two other cases would be:
I'm pro splitting and pro having stable accelerate for the PR CI and `main` for the scheduled (daily) CI.
Will take a look over the test fetcher. But remember that the test fetcher has some design decisions (by the great @sgugger) to balance coverage as well as CI speed, so the situation described is not surprising (at least to me, since I have faced it a few times already and some fixes were already done). (One related PR, though not an exact match, is #28816)
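To illustrate the coverage-vs-speed trade-off being described, here is a deliberately simplified sketch of how a test fetcher selects tests from a diff. It is not the actual utils/tests_fetcher.py logic, and the file names and mapping are hypothetical:

```python
# Reverse-dependency map: source file -> test files that are run when it changes.
# Hypothetical entries, for illustration only.
REVERSE_DEPS = {
    "src/transformers/modeling_utils.py": ["tests/test_modeling_utils.py"],
    "src/transformers/models/bert/modeling_bert.py": ["tests/models/bert/test_modeling_bert.py"],
}


def tests_to_run(changed_files):
    """Select only the tests mapped to the files touched by the diff."""
    selected = set()
    for changed_file in changed_files:
        selected.update(REVERSE_DEPS.get(changed_file, []))
    return sorted(selected)


# A PR that only touches modeling_utils.py selects test_modeling_utils.py here,
# even if the change indirectly breaks a model test that is not in the map --
# that kind of failure then only shows up after the merge into main.
print(tests_to_run(["src/transformers/modeling_utils.py"]))
```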
Regarding skipping all the models where this currently fails: I will leave it to anyone motivated enough to beat me to it; otherwise I can do it in about 9 hours. For now, I will just wait for the CI of this PR and merge it.
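For reference, skipping the affected tests would typically follow the standard `unittest.skip` pattern. A minimal sketch, with a hypothetical test class and test name (not taken from this PR):

```python
import unittest


class HypotheticalModelTest(unittest.TestCase):
    # Decorating the failing test with `unittest.skip` and pointing at the
    # tracking PR makes it easy to re-enable once the underlying fix is released.
    @unittest.skip(reason="Fails on CI with the current accelerate setup, see #29097.")
    def test_hypothetical_disk_offload(self):
        ...
```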
LGTM thanks
* change version
* nuke
* this doesn't make sense
* update some requirements.py
* revert + no main
* nits
* change cache number
* more pin
* revert

Co-authored-by: ydshieh <[email protected]>
Thanks both 🤗
What does this PR do?
Try to change cached version
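Assuming "cached version" refers to the CI dependency cache key (as the "change cache number" commit suggests), bumping the version prefix in the key forces the jobs to rebuild their environment instead of reusing a stale cache. A minimal sketch, with illustrative key names and numbers:

```python
# CI dependency caches are usually keyed by a version prefix; changing that prefix
# makes every job miss the old cache and reinstall dependencies from scratch.
# The key format and numbers below are illustrative only.
OLD_CACHE_KEY = "v0.7-pip-cache-{checksum}"
NEW_CACHE_KEY = "v0.8-pip-cache-{checksum}"  # new prefix -> cache miss -> fresh install
```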