
Visualizing-and-Understanding-Convolutional-neural-networks

This is a from-scratch implementation of the paper "Visualizing and Understanding Convolutional Networks" by Matthew D. Zeiler and Rob Fergus (https://cs.nyu.edu/~fergus/papers/zeilerECCV2014.pdf).

The implementation uses Keras with the TensorFlow backend.

The implementation is divided into two parts:

  1. Feature visualization with Deconvnet.
  2. Occlusion Sensitivity.

Part 1 - Feature visualization with Deconvnet

This part is described in the notebook "Feature visualization using Deconvnets in Keras". Here the term deconvolution refers to transposed convolution. A minimal code sketch follows the example images below.

Original image -

kangaroo

Features detected by feature map 127 of convolutional layer 3 in block 3 of the VGG16 model -

kangarooblock3_conv3_127_all
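
The core idea, as a minimal sketch (not the notebook's exact code): keep the activations of one feature map, zero out all the others, and map them back to pixel space with a transposed convolution that reuses the layer's own kernels. The layer name, feature-map index, and the random stand-in image below are illustrative assumptions; for layers deeper than block 1, the paper's full deconvnet also needs unpooling with recorded max-pool switches, which is omitted here.

```python
# Minimal sketch: project one feature map of VGG16's first conv layer back to
# pixel space with a transposed convolution that reuses the layer's own weights.
# Layer, feature-map index, and the random stand-in image are illustrative.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import vgg16

model = vgg16.VGG16(weights="imagenet", include_top=False)
layer = model.get_layer("block1_conv1")          # first conv layer: no pooling to undo
feat_model = tf.keras.Model(model.input, layer.output)

img = np.random.rand(1, 224, 224, 3).astype("float32") * 255.0  # stand-in for the kangaroo image
acts = feat_model.predict(vgg16.preprocess_input(img.copy()), verbose=0)

fmap = 12                                        # feature map to visualize (arbitrary choice)
masked = np.zeros_like(acts)
masked[..., fmap] = acts[..., fmap]              # keep only the chosen feature map

# "Deconvolution" here = transposed convolution with the same 3x3 kernels,
# mapping the 64 feature channels back to the 3 image channels.
W, _ = layer.get_weights()                       # W has shape (3, 3, 3, 64)
deconv = tf.keras.layers.Conv2DTranspose(filters=3, kernel_size=3,
                                         padding="same", use_bias=False)
deconv.build(masked.shape)
deconv.set_weights([W])                          # kernel shapes match for this layer
recon = deconv(tf.nn.relu(masked)).numpy()[0]    # ReLU, then project back to pixel space

# Rescale to [0, 255] for display.
recon = (recon - recon.min()) / (recon.max() - recon.min() + 1e-8) * 255.0
```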

Part 2 - Occlusion sensitivity

This experiment tests whether the convnet is actually locating the desired object in the image, rather than responding to some pattern or object in the background.

The experiment is described in the notebook "Occlusion experiment" and is carried out on the MNIST handwritten digit dataset. A minimal code sketch follows the example images below.

Original image -

original_image

Occlusion map -

heatmap
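
As a minimal sketch (not the notebook's exact code): slide a gray patch across the image, record how much the predicted probability of the true class drops at each position, and collect the drops into a heatmap. The `occlusion_map` helper, patch size, stride, and fill value below are illustrative assumptions; `model` is assumed to be an already-trained Keras classifier on 28x28x1 MNIST images.

```python
# Minimal sketch of the occlusion experiment: slide a gray patch over the image,
# measure the drop in the true-class probability, and build a heatmap of the drops.
# `model` is assumed to be a trained Keras classifier on 28x28x1 MNIST inputs;
# patch size, stride, and fill value are illustrative.
import numpy as np

def occlusion_map(model, image, true_label, patch=6, stride=2, fill=0.5):
    """Return a 2D map of the class-probability drop for each occluder position."""
    h, w, _ = image.shape
    baseline = model.predict(image[None], verbose=0)[0, true_label]
    rows = range(0, h - patch + 1, stride)
    cols = range(0, w - patch + 1, stride)
    heat = np.zeros((len(rows), len(cols)))
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            occluded = image.copy()
            occluded[r:r + patch, c:c + patch, :] = fill       # gray square occluder
            prob = model.predict(occluded[None], verbose=0)[0, true_label]
            heat[i, j] = baseline - prob                       # large drop => important region
    return heat

# Usage (assuming x_test/y_test are MNIST arrays scaled to [0, 1] with shape (28, 28, 1)):
# heat = occlusion_map(model, x_test[0], int(y_test[0]))
```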
