Update libedgetpu repo for compatibility with recent versions of TensorFlow. #60
Conversation
Prevent compilation failure.
No longer needed; deb packaging will be done natively.
Refactored WORKSPACE after the deprecation of tensorflow/toolchains. More info: https://discuss.tensorflow.org/t/tensorflow-toolchains-has-been-deprecated-and-moved-into-tensorflow-tensorflow/7713
@dmitriykovalev can you or some other maintainer please review this PR? Merging it would fix many of the issues that users of the Coral Edge TPU have with newer TensorFlow and Python versions.
@Namburger So I tested it against the current TF master with both macOS and Linux and all seems to work. To allow the use of TF master for temporary testing, I locally changed this in the `libcoral/WORKSPACE`:
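For illustration only, here is a minimal sketch of what such a pin to TF master can look like in the WORKSPACE, assuming the usual `http_archive` pattern with `TENSORFLOW_COMMIT`/`TENSORFLOW_SHA256`; the commit hash and checksum below are placeholders, not the values that were actually tested:

```python
# Sketch only: pin org_tensorflow to a specific commit from TF master.
# The hash and checksum are placeholders, not real values.
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

TENSORFLOW_COMMIT = "0123456789abcdef0123456789abcdef01234567"  # placeholder
TENSORFLOW_SHA256 = ""  # empty skips verification; fill in the real digest for reproducible builds

http_archive(
    name = "org_tensorflow",
    sha256 = TENSORFLOW_SHA256,
    strip_prefix = "tensorflow-" + TENSORFLOW_COMMIT,
    urls = [
        "https://github.com/tensorflow/tensorflow/archive/" + TENSORFLOW_COMMIT + ".tar.gz",
    ],
)
```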
@feranick Working with the tensorflow repo is a learning experience for me... I'll do a first pass to see how to get this backported to 2.16, and will let you know if we'll need a plan B.
I'd discuss it with @mihaimaruseac. My previous PR was quickly cherry-picked into 2.16.0 thanks to him.
@feranick I've also talked to him; it doesn't seem like there is a plan to pick this into 2.16 at this time :/ As for your change suggestion, could it work to just set …
@Namburger That is unfortunate. With regards to the specific commit, that would build against 2.17.0 at this point, correct? Given that it is unstable and under development, I wouldn't think it's a good idea...
@Namburger, compilation of … works. The concern remains, though, that it's building against an unstable version of 2.17.0. Also, should libedgetpu be built against the same TF? (Stable is now against 2.15.0, but I am prepping the next release with TF 2.16 as soon as it reaches stable.)
@feranick I hear the concerns. It doesn't look like we can get that in for 2.16, so unless we want to sync to a 2.17 dev version, we're stuck in limbo until it comes out... which probably won't happen any time soon. At this point my suggestion to move forward and keep all the main coral repos updated is to sync to …
This sounds good to me @Namburger. The repos are already locally in sync with that commit. I already found a possible breakage due to 2.17 (see log here) when cross-compiling for ARM (there are no issues on macOS and Linux for x86). I forked 2.16.0 and added the visibility patch to test whether the issue is in 2.17 or not. Will report back.
@Namburger Since this PR has been merged and concerns libedgetpu (not libcoral), I am going to continue reporting this in the relevant PR: …
Interesting, doesn't look like …
So, unfortunately this landed too late in the 2.16 release process (it was supposed to also have an RC1 to handle bugs from RC0, but the release team decided otherwise). If you pin to a commit from the master branch, you will always build at that commit, but you will need to re-test whenever you move to a different one. Regarding the commit to pick, I would suggest either the commit where the support got added or one of the nightly commits afterwards (check if there is a pip wheel built).
Well, as unfortunate as that is, I am really grateful to you both @mihaimaruseac and @Namburger for trying to push this forward. I am still running a few build tests; it's pretty easy to set whichever version we want to build against. If I were to build against the latest nightly, how do I find its corresponding commit?
If you build on the day immediately after the nightly release, you can use the commit at the top of the nightly branch. If you build some days later, …
The issue is specific to armv7a. This is the commit whose edits are exactly what the compiler is complaining about. There are no specific details on what drove the commit in the first place...
Hmm, that certainly sounds like the issue. As I understand it, that commit changes the constraint on the registers used for those ops, which matches the error: …
Indeed. Weirdly enough, the change in that commit was only applied to …, not to …, which in fact works fine...
@Namburger I tested it with a local fork of TF 2.17.0 with the … Note that this file (and the whole …) … So maybe the commit was intended to address this problem, but didn't...
@Namburger Disclaimer: I know close to nothing about asm. Still, I tried to address this error, which seems to come from a limitation of the architecture. Following some comments from here, I replaced "r" with "g", so that … is changed to: …
Compilation proceeds beyond this point, but it then stops with a similar error for another external library: …
Not encouraging.
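As background on the constraint letters mentioned above (my own illustration, not the TensorFlow code in question): in GCC/Clang extended inline assembly, "r" forces an operand into a general-purpose register, while "g" allows a register, memory location, or immediate. Loosening a constraint gives the compiler more freedom when registers are scarce, but it only helps if every instruction in the asm template can actually accept the looser operand kind. A toy 32-bit ARM example:

```cpp
// Toy example of extended-asm operand constraints (32-bit ARM only).
// "=r" / "r" pin the output and input to general-purpose registers, which is
// what the "add" instruction requires; a "g" constraint would also permit a
// memory or immediate operand, which "add" could not consume directly.
int increment(int x) {
  int y;
  asm("add %0, %1, #1"  // y = x + 1
      : "=r"(y)         // output: any general-purpose register
      : "r"(x));        // input: any general-purpose register
  return y;
}
```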
Hi @feranick, with all of the changes, would you be able to send a set of commands to reproduce the very first issues?
I'm talking to some folks, trying to get an easy process to repro the original issues and continue on. It appears that …
A procedure for making code changes to tensorflow and testing them against this build would also be nice to have; not sure if this can be easily done with bazel, though.
One idea would be to fork TensorFlow, make changes in your own fork (and sync it from time to time), and then change the TensorFlow archive pinned in the WORKSPACE (lines 68 to 76 at commit b5820ad) to point at your fork.
Then, once it all works, you can make a PR from the fork back to TF, and once that lands you can change the …
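A rough sketch of what pointing those WORKSPACE lines at a fork can look like, mirroring the earlier pin sketch but with the archive URL swapped (the fork owner, commit, and checksum are placeholders, not the repo's actual contents):

```python
# Sketch only: fetch TensorFlow from a personal fork so local patches can be
# tested before they land upstream. <your-user> and the commit are placeholders.
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

TENSORFLOW_COMMIT = "<commit-on-your-fork>"  # placeholder

http_archive(
    name = "org_tensorflow",
    sha256 = "",  # recompute the digest once the commit is final
    strip_prefix = "tensorflow-" + TENSORFLOW_COMMIT,
    urls = [
        "https://github.com/<your-user>/tensorflow/archive/" + TENSORFLOW_COMMIT + ".tar.gz",
    ],
)
```

Once the change lands upstream, the URL can be switched back to the tensorflow/tensorflow archive and the pinned commit bumped.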
@mihaimaruseac Thanks. This is effectively what I did for testing. An advantage of this is that …
Hi @Namburger, here's a slightly revised version of your commands (I can't clone via git@github.com:google-coral/libcoral.git, only via https). Also, you should build using debian:bookworm as it has support for python3.11:
These commands will surely trigger the issue (just tested it). As for …
By means of an update: the author of tensorflow/tensorflow@8419c70 promised to take a look sometime this week!
Hi @Namburger, just wondering whether there is an update on this... Thanks!
Joining the fray here, wondering how the future of the Coral TPU is going to look. Many of us out here bought this with hopes of affordable facial and object recognition for software like Frigate using the Coral devices...
To be honest with you, development is pretty much dead. Google is happy to sell you the boards, ASUS will do so for bulk sales, and yet the platform is dead. Shameful, if you ask me. This is why I embarked on the effort of keeping it alive, with some success in merging changes into the main repo. I also have a fork for the latest work. I am pretty sure that, apart from some Googler with some time to spare, Google basically killed their dev team. So I would not invest too much into the platform unless you are willing to put the work in.
Luckily I only purchased a single PCIe Coral. Unfortunately for me, I'm on a fixed income, so the loss hurts just a smidge more than it would have if I were still at a 6-figure income.
This PR updates TensorFlow (v2.15.0), TF/crosstools, and libusb (1.0.26). This is in relation to issues tensorflow/tensorflow#62371 and #53.