Explainability is hard to understand #10

Open
DrJonnyT opened this issue Aug 14, 2023 · 0 comments
DrJonnyT commented Aug 14, 2023

It would be good to show some general plots of which parts of the image the model looks at overall, rather than just on a per-image basis. There should also be some mention of how these methods work, e.g. doutput/dinput: how does the output change if we change one pixel?
ScoreCAM goes through the activation maps for the layers and weights them accordingly.
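The doutput/dinput idea above can be sketched with finite differences: perturb each pixel slightly and record how much a scalar model output changes. This is a minimal NumPy sketch with a toy linear "model" standing in for the project's actual network (`model_fn`, `sensitivity_map`, and the array shapes are illustrative assumptions, not the repo's API); averaging the per-image maps then gives the "general" picture of where the model looks.

```python
import numpy as np

def sensitivity_map(model_fn, image, eps=1e-3):
    """Finite-difference approximation of d(output)/d(input):
    bump each pixel by eps and measure the change in the scalar output."""
    base = model_fn(image)
    sens = np.zeros_like(image)
    it = np.nditer(image, flags=["multi_index"])
    for _ in it:
        idx = it.multi_index
        perturbed = image.copy()
        perturbed[idx] += eps
        sens[idx] = (model_fn(perturbed) - base) / eps
    return sens

# Toy "model": a weighted sum of pixels, so the sensitivity map
# should recover the weights exactly (up to floating-point error).
rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4))
model_fn = lambda img: float((img * weights).sum())

image = rng.normal(size=(4, 4))
sens = sensitivity_map(model_fn, image)
assert np.allclose(sens, weights, atol=1e-4)

# Averaging per-image maps gives an aggregate view across many inputs,
# i.e. the "general plots" suggested above.
mean_map = np.mean(
    [sensitivity_map(model_fn, rng.normal(size=(4, 4))) for _ in range(3)],
    axis=0,
)
```

For a real network this pixel-by-pixel loop is far too slow; gradient-based saliency computes the same quantity in one backward pass, but the finite-difference version makes the definition concrete.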

Also, should we drop the `penultimate_layer = penultimate_layer_idx` argument? It seems to work better without it. Possibly...
When the argument is omitted, it goes through the model and finds the layer automatically.
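The "weights the activation maps accordingly" step can also be sketched in plain NumPy, independent of the library: mask the input with each normalized activation map, score each masked input with the model, and combine the maps using those scores as weights. Everything here (the toy `model_fn`, the random maps, the softmax weighting) is an illustrative assumption, not the actual ScoreCAM implementation, which also upsamples the maps to the input resolution.

```python
import numpy as np

def score_cam_sketch(model_fn, image, activation_maps):
    """Score-CAM-style weighting: score the input masked by each
    activation map, then return a score-weighted sum of the maps."""
    scores = []
    for amap in activation_maps:
        span = amap.max() - amap.min()
        mask = (amap - amap.min()) / (span + 1e-8)  # normalize map to [0, 1]
        scores.append(model_fn(image * mask))       # score the masked input
    scores = np.array(scores)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                        # softmax over map scores
    cam = np.tensordot(weights, np.stack(activation_maps), axes=1)
    return np.maximum(cam, 0)                       # keep positive evidence only

rng = np.random.default_rng(1)
image = rng.random((8, 8))
maps = [rng.random((8, 8)) for _ in range(4)]      # stand-ins for layer activations
model_fn = lambda img: float(img.sum())            # toy scalar score
cam = score_cam_sketch(model_fn, image, maps)
assert cam.shape == (8, 8)
```

Unlike gradient saliency, this only needs forward passes, which is why ScoreCAM iterates over the activation maps of a (penultimate) layer rather than backpropagating.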
