Before you start, please follow this format for your issue title:
npHard - DL-Vizard

ℹ️ Project information
Please complete all applicable.
Prize Categories Enrolled for:
(Please tick any specific Category that applies)
- Best use of Microsoft Azure
- Best Beginner (your team should be at least 50% Beginners)
- Best "Jugaad" - Best Fun/Silly/Innovative Hack
- Best UiPath Hack
🔥 Your Pitch
Kindly write a pitch for your project. Please do not use more than 500 words.
One of the most debated topics in deep learning is how to interpret and understand a trained model – particularly in the context of high-risk industries like healthcare. The term “black box” has often been associated with deep learning algorithms. How can we trust the results of a model if we can’t explain how it works? Take the example of a deep learning model trained for detecting cancerous tumours. The model tells you that it is 99% sure that it has detected cancer – but it does not tell you why or how it made that decision. Did it find an important clue in the MRI scan? Or was it just a smudge on the scan that was incorrectly detected as a tumour? This is a matter of life and death for the patient and doctors cannot afford to be wrong.
🔦 Any other specific thing you want to highlight?
(Optional)
We'd love 😍 to see more such hacks!
✅ Checklist
Before you post the issue:
- You have followed the issue title format.
- You have mentioned the correct labels.
- You have provided all the information correctly.