
Question: Interpretation of resulting confusion matrix #15

Closed
graulef opened this issue May 23, 2017 · 3 comments
graulef commented May 23, 2017

Hey there, I have one more question. After evaluating the network, I got the following confusion matrix:

[eval] accuracy 0.711933
    label 01: 58 [ 58  00  07  24  05  00  00  04  00  00  01  00  00 ]
    label 02: 96 [ 00  96  00  01  01  00  00  02  01  00  00  00  00 ]
    label 03: 62 [ 04  06  62  17  03  00  00  07  01  00  00  00  00 ]
    label 04: 73 [ 13  01  06  73  03  00  00  01  02  00  02  00  00 ]
    label 05: 80 [ 06  02  03  03  80  00  00  00  05  01  01  00  00 ]
    label 06: 43 [ 17  01  09  04  03  43  10  01  01  01  07  00  03 ]
    label 07: 98 [ 00  00  00  00  00  00  98  01  00  00  00  00  00 ]
    label 08: 91 [ 01  01  01  05  01  00  00  91  01  00  00  00  00 ]
    label 09: 93 [ 01  02  00  02  01  00  00  01  93  00  00  00  00 ]
    label 10: 64 [ 04  18  03  01  04  00  00  00  03  64  00  00  02 ]
    label 11: 25 [ 09  00  01  37  02  00  00  25  00  00  25  00  00 ]
    label 12: --
    label 13: 00 [ 100  00  00  00  00  00  00  00  00  00  00  00  00 ]
Training Finished
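
To make the layout above concrete: each row is a true label, each column a predicted label, and the values look like row-normalized percentages (per-class recall), while the `[eval] accuracy` line is the overall frame accuracy. The sketch below shows how those two readings relate, using a small made-up count matrix (the numbers are illustrative, not taken from this run):

```python
import numpy as np

# Hypothetical raw confusion matrix: rows = true class, cols = predicted class.
# The counts are made up for illustration; the matrix printed in this issue
# shows row-normalized percentages rather than raw counts.
cm = np.array([
    [58,  7, 35],
    [ 4, 90,  6],
    [10,  0, 90],
])

# Per-class recall: diagonal over row sum. This is what each printed
# row percentage corresponds to (e.g. "label 01: 58").
recall = cm.diagonal() / cm.sum(axis=1)

# Overall accuracy: total correct over total samples, matching the
# single "[eval] accuracy" figure.
accuracy = cm.trace() / cm.sum()

print(recall)    # per-class recall, one value per row
print(accuracy)
```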

Now, I know that the results are significantly worse than in the paper, but this is not my question.

My question is: why does class 13 (which I believe is the background noise) get a score of 100 for label 01 in the confusion matrix?

Thanks for your help!

@simonwsw (Owner)
To use MaskZero in RNN packages, an additional class needs to be reserved as the label for frames that are masked. This additional class therefore uses 13 (1-indexed, as in Torch/Lua) as its gesture ID.

Theoretically class 13 shouldn't appear in the confusion matrix at all, as it's been masked out (its row should be all zeros). Either way, it shouldn't affect the result.
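
The masking idea described above can be sketched as follows. This is an assumed setup for illustration, not the repository's actual code: variable-length sequences are padded, padded frames receive the reserved extra label, and those frames are excluded before scoring:

```python
import numpy as np

# Reserved label for masked/padded frames (assumption: 12 real gestures,
# so the extra class is 13, 1-indexed as in Torch/Lua).
MASK_CLASS = 13

labels = np.array([3, 7, 13, 13])   # last two frames are padding
preds  = np.array([3, 1,  1,  1])   # predictions on padded frames are noise

# Exclude masked frames before computing metrics, so the reserved class
# never contributes to accuracy or the confusion matrix.
valid = labels != MASK_CLASS
accuracy = (preds[valid] == labels[valid]).mean()
print(accuracy)  # 0.5 on the two real frames
```

If this filtering step is skipped, the padded frames show up as a spurious row for the reserved class, which would explain the 100 in the label 13 row.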


graulef commented May 24, 2017

Okay got it, thank you for the answer!

mxtx0509 commented May 15, 2018

I ran into a problem when trying to run the code and can't solve it; could you help me? See #20.
