Weights #2

Closed
KaTaiHo opened this issue Apr 25, 2019 · 1 comment

Comments

KaTaiHo commented Apr 25, 2019

Hi,

I came to your GitHub repo because it was recommended in Xilinx/QNN-MO-PYNQ#35. I was wondering how I could export the weights so that I can use them with the QNN-MO-PYNQ tinier yolo. I've followed all the steps in your repo, but I don't know what to do with the .bin files and the .npz files.

It seems like, for deployment, you meant we should copy the .bin files into QNN-MO-PYNQ/qnn/params/binparam-dorefanet/ and then recompile with Vivado. Is this correct?

Additionally, do you happen to have the gen-weights file for converting the weights for QNN-MO-PYNQ?

Thanks!

mohdumar644 (Owner) commented

I did try loading custom weights for the QNN-MO-PYNQ tinier yolo a few months back.
It was not so simple: generating the .bin files involves several technicalities, some of which I could not figure out because Xilinx has not provided their own Finnthesizer. I did discover the 'split' mechanism of evaluating 512 output feature maps as two successive runs of 256 OFMs, and the specific way thresholds and invert bits are packed, but the weight matrix padding for certain layers caused confusion.
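For reference, here is a rough NumPy sketch of that 'split' idea, i.e. exporting a 512-OFM layer as two successive 256-OFM packs, each with its own packed threshold/invert-bit words. This is not the actual Finnthesizer code; the function names, shapes, and bit layout are assumptions purely for illustration:

```python
# Minimal sketch of the "split" mechanism described above (illustrative only).
# A layer with 512 output feature maps is exported as two successive packs of
# 256 OFMs each. All names, shapes, and the threshold/invert-bit packing are
# assumptions, not the real QNN-MO-PYNQ layout.
import numpy as np

PE_OFM_LIMIT = 256  # assumed limit on OFMs processed per run


def split_layer(weights, thresholds, invert_bits):
    """Split an (ofm, ifm, k, k) weight tensor into <=256-OFM packs."""
    ofm = weights.shape[0]
    packs = []
    for start in range(0, ofm, PE_OFM_LIMIT):
        end = min(start + PE_OFM_LIMIT, ofm)
        packs.append({
            "weights": weights[start:end],
            # hypothetical packing: threshold in the low bits, invert flag
            # in the top bit, one 32-bit word per output channel
            "thresh_words": (thresholds[start:end].astype(np.uint32)
                             | (invert_bits[start:end].astype(np.uint32) << 31)),
        })
    return packs


# Example: a 512-OFM layer becomes two 256-OFM packs.
w = np.random.randint(-1, 2, size=(512, 256, 3, 3)).astype(np.int8)
t = np.random.randint(0, 2**16, size=512)
inv = np.random.randint(0, 2, size=512)
for i, pack in enumerate(split_layer(w, t, inv)):
    print(i, pack["weights"].shape, pack["thresh_words"].shape)
```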

I suggest sticking to this repo, as it is much simpler and more straightforward. Of course, you can modify it to use FP32 initial and final convolutional layers, and to use the Darknet shared object instead of the custom post-processing.
