When the top neurons for a specific class in a trained neural network are dropped gradually, the model loses its ability to classify that label correctly. Interestingly, the model starts focusing on other regions of the image, with the probability getting distributed among the other classes.
Imagine you have strong memories of particular events in your life, like going to college or the time you met with an accident, stored among the billions of neurons in your brain. Now, as those neurons slowly vanish over the years, you only vaguely remember those events.
Now compare that with the neurons of a model that has learnt to identify a scene: as some of those neurons are dropped, it starts to lose the ability to understand its role and predicts only vaguely with the neurons that remain. Interesting concept, don't you think?
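A minimal sketch of the gradual dropping experiment described above, assuming a torchvision ResNet-18 and treating "units" as the final-layer weights feeding the target class logit; `target_class` and the random input are placeholders for a real class index and preprocessed image:

```python
import torch
import torchvision.models as models

model = models.resnet18(pretrained=True).eval()
target_class = 207  # hypothetical ImageNet class index
x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed input image

with torch.no_grad():
    original = model.fc.weight[target_class].clone()
    order = original.abs().argsort(descending=True)  # strongest units first
    for k in [0, 8, 32, 128]:                        # drop progressively more top units
        ablated = original.clone()
        ablated[order[:k]] = 0.0
        model.fc.weight[target_class] = ablated
        probs = model(x).softmax(dim=1)[0]
        print(f"dropped {k:3d} top units -> "
              f"P(target) = {probs[target_class].item():.4f}, "
              f"new argmax = {probs.argmax().item()}")
    model.fc.weight[target_class] = original         # restore the weights
```

As more top units are zeroed, the target probability falls and the argmax drifts to similar classes, which is exactly the behaviour summarized in the list below.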
- A few powerful units are enough for the model to predict the class.
- The Class Activation Map highlights where the units focus in an image as they are dropped (see the CAM sketch after this list).
- The probability of the ground-truth label drops when a few powerful units are removed.
- That probability gets shared across the most similar other labels as units are dropped.
- As an obvious result, the predicted label changes as units are dropped.
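The CAM bullet above can be made concrete with a short sketch. This is a minimal Class Activation Map computation, assuming a ResNet-18 whose classifier sits on global-average-pooled features; the input tensor is again a placeholder for a preprocessed image:

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(pretrained=True).eval()
x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image

features = {}
def hook(module, inp, out):
    features["maps"] = out.detach()  # (1, C, H, W) final conv feature maps
model.layer4.register_forward_hook(hook)

with torch.no_grad():
    logits = model(x)
pred = logits.argmax(dim=1).item()

# CAM: weight each feature map by the classifier weight for the predicted class.
weights = model.fc.weight[pred]                            # (C,)
cam = (weights[:, None, None] * features["maps"][0]).sum(dim=0)
cam = F.relu(cam)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalize to [0, 1]
cam = F.interpolate(cam[None, None], size=(224, 224),
                    mode="bilinear", align_corners=False)[0, 0]
```

Zeroing entries of `weights` before computing the map reproduces the shift of focus described above; the resulting `cam` can be overlaid on the image with matplotlib or OpenCV, both of which are in requirements.txt.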
Install the dependencies with:

```
pip install -r requirements.txt
```
Here, topmost neurons refers to the neurons with the largest weight values, and bottommost neurons refers to those with the smallest.
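One way this ranking could be done for convolutional units is shown below; this is a sketch under the assumption that a unit is a conv output channel scored by the L1 norm of its filter weights, and the choice of layer is purely illustrative:

```python
import torch
import torchvision.models as models

model = models.resnet18(pretrained=True)
conv = model.layer4[1].conv2                             # last conv layer of resnet18
scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))  # one score per output channel
order = scores.argsort(descending=True)
print("topmost units:", order[:5].tolist())      # largest weight magnitudes
print("bottommost units:", order[-5:].tolist())  # smallest weight magnitudes
```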
Contents of requirements.txt:

- ipywidgets==7.5.1
- opencv_python==4.2.0.34
- numpy==1.19.1
- torchvision==0.7.0
- matplotlib==3.3.1
- torch==1.6.0
- ipython==7.18.1
- Pillow==7.2.0