How to correctly dequantize CUBA LIF neuron voltages (neuron.v) when using output-layer voltages as logits for classification and bounding box regression #892
gwgknudayanga asked this question in Q&A (unanswered)
Hi,
I am using the output layer's neuronal voltage values to decide the bounding box coordinates in a DVS-camera-based detection algorithm. When I trained and ran inference in SLAYER, it gave 41% mAP@50. (I use mean-only batch norm and convolutional layers only, and I trained with delay_shift = False. All layers are CUBA LIF. The output layer's voltage threshold is 2048; for the neurons in all other layers the threshold is 1.)

I then ran the trained network on Loihi 2 (both the board and the CPU simulation). I reset the network every 32 time steps, and I read the output-layer neuron voltages with a time-step offset of len(network) + 1. However, even though I did quantization-aware training, the detection and classification accuracies are much lower on the Loihi board and in the Loihi CPU simulation than in SLAYER.

To dequantize the output-layer neuron voltages, i.e. to recover floating-point values from the fixed-point values read from the hardware, I divide them by 32 * 64, as was done in the PilotNet SNN example. But the resulting values differ from the corresponding values I see in SLAYER. Is this the correct way to dequantize?
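To make the step in question concrete, here is a minimal sketch of my readout and dequantization (pure NumPy; `v_out`, `num_layers`, and the buffer shape are placeholders for my actual readout code, not lava API calls):

```python
import numpy as np

# Placeholder buffer of raw fixed-point output-layer voltages, one column
# per time step: shape (num_neurons, num_steps).
num_steps = 32                      # I reset the network every 32 time steps
num_layers = 10                     # placeholder for len(network) in my setup
readout_offset = num_layers + 1     # offset at which I read the voltages

v_out = np.zeros((4, num_steps), dtype=np.int32)  # placeholder readout buffer

# Fixed-point voltages read once the layer pipeline has filled up
v_raw = v_out[:, readout_offset:]

# Dequantization step in question: divide by 32 * 64 as in the PilotNet SNN
# example. This is the scale I am unsure about.
v_float = v_raw / (32 * 64)
```

The division by 32 * 64 is taken directly from the PilotNet SNN example; my question is whether the same scale applies to my network, given that the output layer's threshold is 2048 rather than 1.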
Thanks and Rgds,
Udayanga