DETECTION OF TUBERS WITH CONVOLUTIONAL NEURAL NETWORKS

TEST CASE I

Import packages and functions

In [1]:
# Import packages
%matplotlib inline
from PIL import Image
import numpy as np
import os
import re
from skimage.color import gray2rgb
import matplotlib.pyplot as plt
from sklearn.utils import shuffle
!pip install tensorflow
!pip install keras
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten, GaussianNoise, BatchNormalization, GlobalAveragePooling2D
from keras.layers import Conv2D, MaxPooling2D
from keras.optimizers import Adam
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
from keras.preprocessing import image
from keras.models import Model
from keras import backend as K
from sklearn.metrics import confusion_matrix
from sklearn.metrics import roc_auc_score
!pip install git+https://github.com/raghakot/keras-vis.git --upgrade
from vis.visualization import visualize_cam, visualize_saliency, overlay
from keras import activations
import matplotlib.cm as cm
import zipfile
from keras.models import model_from_json
import matplotlib as mpl
[Output abridged: pip reports tensorflow 1.12.0 and keras 2.2.4 already installed; keras-vis 0.4.1 is cloned from GitHub, built, and reinstalled. Keras prints "Using TensorFlow backend."]
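Note that set_session is imported above but never called. With the TensorFlow 1.x backend, the usual reason to import it is to let the session allocate GPU memory on demand rather than all at once; a minimal sketch of that standard pattern (not executed in this notebook):

# Optional: let the TF 1.x session grow GPU memory on demand
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
set_session(tf.Session(config=config))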

FIRST PART: DATA INGESTION

Import original images from local computer

These images come from a 2-year-old male with TSC. In total, there are 40 images: 20 consecutive axial T2 MRI slices and 20 consecutive axial FLAIR MRI slices.

In [2]:
# Set the figure size
mpl.rcParams['figure.figsize'] = (16,10)
In [3]:
# Unzip files
with zipfile.ZipFile("TestCaseIT2.zip","r") as zip_ref:
    zip_ref.extractall()
with zipfile.ZipFile("TestCaseIFLAIR.zip","r") as zip_ref:
    zip_ref.extractall()

Path to original images folder

In [4]:
# Path to the folder with the original images
pathtoimagesT2test = './TestCaseIT2/'

pathtoimagesFLAIRtest = './TestCaseIFLAIR/'

SECOND PART: IMPORTATION OF FINAL DATA

In [5]:
# Functions to sort images with numbers within their name
def atoi(text):
    return int(text) if text.isdigit() else text

def natural_keys(text):
    return [ atoi(c) for c in re.split(r'(\d+)', text) ]
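
As a quick check, natural_keys sorts numbered filenames in human order rather than lexicographic order (hypothetical filenames for illustration):

# Hypothetical filenames: a plain lexicographic sort would give 1, 10, 2
files = ['slice1.png', 'slice10.png', 'slice2.png']
files.sort(key=natural_keys)
print(files)  # ['slice1.png', 'slice2.png', 'slice10.png']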

Import images and create labels for the T2 set

In [6]:
## T2

# Define the image size
image_size = (224, 224)

# Read in the test images for T2
T2test_images = []
T2test_dir = pathtoimagesT2test
T2test_files = os.listdir(T2test_dir)
T2test_files.sort(key=natural_keys)
# For each image
for f in T2test_files:
  # Open the image
  img = Image.open(T2test_dir + f)
  # Resize the image so that it has a size 224x224
  img = img.resize(image_size)
  # Transform into a numpy array
  img_arr = np.array(img)
  # Transform grayscale 224x224 images to 224x224x3 by replicating the channel
  if img_arr.shape == image_size:
      img_arr = gray2rgb(img_arr)
  # Add the image to the array of images      
  T2test_images.append(img_arr)

# After having transformed all images, transform the list into a numpy array  
T2test_X = np.array(T2test_images)

# Create an array of labels (as read by the radiologist)
T2test_y = np.array([[1], [1], [1], [1], [1], [1], [1], [1], [1], [1], 
                     [1], [1], [1], [1], [1], [1], [1], [1], [1], [1]])

# GPU expects values to be 32-bit floats
T2test_X = T2test_X.astype(np.float32)

# Rescale the values to be between 0 and 1
T2test_X /= 255.
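
The T2 cell above and the FLAIR cell below are identical except for the folder path; a single helper could factor out the shared steps (a minimal sketch assuming the same preprocessing, not part of the original notebook):

def load_mri_folder(path, image_size=(224, 224)):
    # Load, resize, convert to 3 channels, and rescale every image in a folder
    files = sorted(os.listdir(path), key=natural_keys)
    images = []
    for f in files:
        img = Image.open(os.path.join(path, f)).resize(image_size)
        arr = np.array(img)
        if arr.ndim == 2:
            arr = gray2rgb(arr)  # replicate the grayscale channel
        images.append(arr)
    return np.array(images, dtype=np.float32) / 255.

With this helper, the T2 set would load as load_mri_folder(pathtoimagesT2test) and the FLAIR set as load_mri_folder(pathtoimagesFLAIRtest).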
In [7]:
T2test_X.shape
Out[7]:
(20, 224, 224, 3)
In [8]:
# Example of an image to make sure they were converted right
plt.imshow(T2test_X[0])
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.show()
In [9]:
T2test_y.shape
Out[9]:
(20, 1)
In [10]:
T2test_y[0]
Out[10]:
array([1])

Import images and create labels for the FLAIR set

In [11]:
## FLAIR

# Define the image size
image_size = (224, 224)

# Read in the test images for FLAIR
FLAIRtest_images = []
FLAIRtest_dir = pathtoimagesFLAIRtest
FLAIRtest_files = os.listdir(FLAIRtest_dir)
FLAIRtest_files.sort(key=natural_keys)
# For each image
for f in FLAIRtest_files:
  # Open the image
  img = Image.open(FLAIRtest_dir + f)
  # Resize the image so that it has a size 224x224
  img = img.resize(image_size)
  # Transform into a numpy array
  img_arr = np.array(img)
  # Transform grayscale 224x224 images to 224x224x3 by replicating the channel
  if img_arr.shape == image_size:
      img_arr = gray2rgb(img_arr)
  # Add the image to the array of images      
  FLAIRtest_images.append(img_arr)

# After having transformed all images, transform the list into a numpy array  
FLAIRtest_X = np.array(FLAIRtest_images)

# Create an array of labels (as read by the radiologist)
FLAIRtest_y = np.array([[1], [1], [1], [1], [1], [1], [1], [1], [1], [1], 
                     [1], [1], [1], [1], [1], [1], [1], [1], [1], [1]])

# GPU expects values to be 32-bit floats
FLAIRtest_X = FLAIRtest_X.astype(np.float32)

# Rescale the values to be between 0 and 1
FLAIRtest_X /= 255.
In [12]:
FLAIRtest_X.shape
Out[12]:
(20, 224, 224, 3)
In [13]:
# Example of an image to make sure they were converted right
plt.imshow(FLAIRtest_X[0])
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.show()
In [14]:
FLAIRtest_y.shape
Out[14]:
(20, 1)
In [15]:
FLAIRtest_y[0]
Out[15]:
array([1])

THIRD PART: VISUALIZE CLASS ACTIVATION MAPS AND SALIENCY MAPS

Load the model

In [16]:
# load model
json_file = open('InceptionV3.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
model = model_from_json(loaded_model_json)
# load weights into new model
model.load_weights("InceptionV3.h5")
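
For reference, the two files read here are produced by the standard Keras architecture-plus-weights save pattern (a sketch of that convention, not the authors' training code):

# Standard Keras pattern that writes the two files loaded above
with open('InceptionV3.json', 'w') as f:
    f.write(model.to_json())           # architecture only
model.save_weights('InceptionV3.h5')   # weights only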
In [17]:
# Compile model
model.compile(optimizer = Adam(lr = 0.00025), loss = 'binary_crossentropy', metrics = ['accuracy'])
In [18]:
# Generate predictions on test data in the form of probabilities for T2
testInceptionV3T2 = model.predict(T2test_X, batch_size = 16)
testInceptionV3T2
Out[18]:
array([[0.9973788 ],
       [0.99999976],
       [0.99863213],
       [0.9990465 ],
       [0.99920803],
       [0.9999858 ],
       [0.9999331 ],
       [0.9937078 ],
       [0.99964976],
       [0.9999976 ],
       [0.9999964 ],
       [0.9999995 ],
       [0.99993765],
       [0.96819323],
       [0.9969618 ],
       [0.992562  ],
       [0.99612266],
       [0.9984871 ],
       [0.9996356 ],
       [0.99999523]], dtype=float32)
In [19]:
# Generate predictions on test data in the form of probabilities for FLAIR
testInceptionV3FLAIR = model.predict(FLAIRtest_X, batch_size = 16)
testInceptionV3FLAIR
Out[19]:
array([[0.05197744],
       [0.9999956 ],
       [0.9999999 ],
       [0.9999777 ],
       [0.9998404 ],
       [0.9976756 ],
       [0.9997334 ],
       [0.9999999 ],
       [0.99999964],
       [1.        ],
       [0.9999999 ],
       [1.        ],
       [0.9999993 ],
       [0.9999515 ],
       [0.99975413],
       [0.999987  ],
       [0.9999987 ],
       [0.9999995 ],
       [0.99999917],
       [0.9999995 ]], dtype=float32)
In [20]:
# Create the confusion matrix for T2
y_trueT2 = T2test_y
y_predInceptionV3T2 = testInceptionV3T2 > 0.5
confusion_matrix(y_trueT2, y_predInceptionV3T2, labels=[0,1])
Out[20]:
array([[ 0,  0],
       [ 0, 20]], dtype=int64)
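
sklearn orders the matrix with rows as true classes and columns as predicted classes, so with labels=[0,1] the four cells unpack as follows (a quick check):

# Unpack the matrix: rows = true class, columns = predicted class
tn, fp, fn, tp = confusion_matrix(y_trueT2, y_predInceptionV3T2, labels=[0, 1]).ravel()
print(tn, fp, fn, tp)  # 0 0 0 20 for T2: all 20 tuber slices correctly flagged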
In [21]:
# Create the confusion matrix for FLAIR
y_trueFLAIR = FLAIRtest_y
y_predInceptionV3FLAIR = testInceptionV3FLAIR > 0.5
confusion_matrix(y_trueFLAIR, y_predInceptionV3FLAIR, labels=[0,1])
Out[21]:
array([[ 0,  0],
       [ 1, 19]], dtype=int64)
In [22]:
# Calculate accuracy for T2
cmT2 = confusion_matrix(y_trueT2, y_predInceptionV3T2, labels=[0, 1])
accuracy_InceptionV3T2 = (cmT2[0, 0] + cmT2[1, 1]) / cmT2.sum()
print('The accuracy in the test set is {}.'.format(accuracy_InceptionV3T2))
The accuracy in the test set is 1.0.
In [23]:
# Calculate accuracy for FLAIR
cmFLAIR = confusion_matrix(y_trueFLAIR, y_predInceptionV3FLAIR, labels=[0, 1])
accuracy_InceptionV3FLAIR = (cmFLAIR[0, 0] + cmFLAIR[1, 1]) / cmFLAIR.sum()
print('The accuracy in the test set is {}.'.format(accuracy_InceptionV3FLAIR))
The accuracy in the test set is 0.95.
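
Because every slice in this test case is tuber-positive, accuracy coincides with sensitivity (recall); the same numbers can be obtained directly (a sketch, importing recall_score from the already-used sklearn.metrics):

from sklearn.metrics import recall_score
# With an all-positive test set, accuracy equals sensitivity (recall)
print(recall_score(y_trueT2.ravel(), y_predInceptionV3T2.ravel()))        # 1.0
print(recall_score(y_trueFLAIR.ravel(), y_predInceptionV3FLAIR.ravel()))  # 0.95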

Visualize the data

In [24]:
# Visualize the structure and layers of the model
model.layers
Out[24]:
[Output abridged: the list of the model's 314 layers, i.e. the InceptionV3 convolutional base (InputLayer followed by Conv2D, BatchNormalization, Activation, pooling, and Concatenate layers) ending with a GlobalAveragePooling2D layer and two Dense layers.]
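
The visualization loops below hard-code layer_idx=300 for keras-vis. The index can be sanity-checked by position, or looked up by layer name with keras-vis's utilities (a short sketch):

# Check which layer sits at index 300, or look an index up by its name
from vis.utils import utils
print(model.layers[300])
layer_idx = utils.find_layer_idx(model, model.layers[300].name)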
In [25]:
# Iterate through the MRIs in T2

print('\n \n' + '\033[1m' + 'EACH ORIGINAL IMAGE IS ANALYZED WITH TWO METHODS: CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)' + '\033[0m' + '\n')
print('\033[1m' + 'FOR EACH METHOD, THE FIRST IMAGE IS THE ORIGINAL IMAGE, THE SECOND IMAGE IS THE MAP, AND THE THIRD IMAGE IS THE MAP SUPERIMPOSED ON THE ORIGINAL IMAGE WITH A TRANSPARENCY THAT IS PROPORTIONAL TO THE ESTIMATED PROBABILITY OF THE IMAGE HAVING TUBER(S) (HIGHER ESTIMATED PROBABILITIES PRODUCE CLEARLY SEEN MAPS OVERLAID ON THE ORIGINAL IMAGE AND LOWER ESTIMATED PROBABILITIES PRODUCE VERY TRANSPARENT MAPS OVERLAID ON THE ORIGINAL IMAGE)' + '\033[0m' + '\n \n \n \n')


for i in range(T2test_X.shape[0]):
    
  # Print spaces to separate from the next image
  print('\n \n \n \n \n \n \n \n')
  
  # Print real classification of the image
  print('\033[1m' + 'REAL CLASSIFICATION OF THE IMAGE: {}'.format('TUBER(S)' if y_trueT2[i][0]==1 else 'NO TUBER(S)') + '\033[0m')
  # Print model classification and model probability of TSC
  print('Model classification of this image: {} \nEstimated probability of tuber(s): {} \n'.format('TUBER(S)' if testInceptionV3T2[i][0]>0.5 else 'NO TUBER(S)', testInceptionV3T2[i][0]))     


  # Print title
  print('\033[1m' + 'CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)' + '\033[0m')     
    
  # Original image
  plt.subplot(2,3,1)
  plt.imshow(T2test_X[i])
  plt.grid(False)
  plt.xticks([])
  plt.yticks([])

  # Class activation map
  plt.subplot(2,3,2)
  heat_map = visualize_cam(model, layer_idx=300, filter_indices=None, seed_input=T2test_X[i])
  plt.imshow(heat_map)
  plt.grid(False)
  plt.xticks([])
  plt.yticks([])

  # Class activation map superimposed on the original image
  plt.subplot(2,3,3)
  plt.imshow(T2test_X[i])
  plt.imshow(heat_map, alpha = 0.8 * testInceptionV3T2[i][0])
  plt.grid(False)
  plt.xticks([])
  plt.yticks([])

  # Original image
  plt.subplot(2,3,4)
  plt.imshow(T2test_X[i])
  plt.grid(False)
  plt.xticks([])
  plt.yticks([])

  # Saliency map
  heat_map = visualize_saliency(model, layer_idx=300, filter_indices=None, seed_input=T2test_X[i])
  plt.subplot(2,3,5)
  plt.imshow(heat_map)
  plt.grid(False)
  plt.xticks([])
  plt.yticks([])

  # Saliency map superimposed on the original image
  plt.subplot(2,3,6)
  plt.imshow(T2test_X[i])
  plt.imshow(heat_map, alpha = 0.8 * testInceptionV3T2[i][0])
  plt.grid(False)
  plt.xticks([])
  plt.yticks([])

  # Show the figure and close it
  plt.show()
  plt.close()
 
EACH ORIGINAL IMAGE IS ANALYZED WITH TWO METHODS: CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

FOR EACH METHOD, THE FIRST IMAGE IS THE ORIGINAL IMAGE, THE SECOND IMAGE IS THE MAP, AND THE THIRD IMAGE IS THE MAP SUPERIMPOSED ON THE ORIGINAL IMAGE WITH A TRANSPARENCY THAT IS PROPORTIONAL TO THE ESTIMATED PROBABILITY OF THE IMAGE HAVING TUBER(S)

[Output abridged: for each of the 20 T2 slices the loop prints the real classification (TUBER(S) in all 20), the model classification (TUBER(S) in all 20), and the estimated probability of tuber(s) (the same values as Out[18]), followed by the figure with the class activation map (upper row) and the saliency map (lower row).]

SCROLL UP TO SEE THE Grad-CAM AND SALIENCY MAPS OF EACH IMAGE

This 2-year-old male had a high tuber burden, with tubers detected in every T2 MRI slice by our neuroradiologist. The convolutional neural network classified 20 out of 20 T2 MRI slices as having tuber(s), even though some of the tubers were subtle. The accuracy of the convolutional neural network on T2 was 1.0.

Each original image is analyzed with two methods: Gradient-weighted class activation maps (upper row) and saliency maps (lower row).

For each method, the first image is the original image, the second image is the map, and the third image is the map superimposed on the original image with a transparency that is proportional to the estimated probability of the image having tuber(s) (higher estimated probabilities produce clearly seen maps overlaid on the original image and lower estimated probabilities produce very transparent maps overlaid on the original image).

In [26]:
# Iterate through the MRIs in FLAIR

print('\n \n' + '\033[1m' + 'EACH ORIGINAL IMAGE IS ANALYZED WITH TWO METHODS: CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)' + '\033[0m' + '\n')
print('\033[1m' + 'FOR EACH METHOD, THE FIRST IMAGE IS THE ORIGINAL IMAGE, THE SECOND IMAGE IS THE MAP, AND THE THIRD IMAGE IS THE MAP SUPERIMPOSED ON THE ORIGINAL IMAGE WITH A TRANSPARENCY THAT IS PROPORTIONAL TO THE ESTIMATED PROBABILITY OF THE IMAGE HAVING TUBER(S) (HIGHER ESTIMATED PROBABILITIES PRODUCE CLEARLY SEEN MAPS OVERLAID ON THE ORIGINAL IMAGE AND LOWER ESTIMATED PROBABILITIES PRODUCE VERY TRANSPARENT MAPS OVERLAID ON THE ORIGINAL IMAGE)' + '\033[0m' + '\n \n \n \n')


for i in range(FLAIRtest_X.shape[0]):
    
  # Print spaces to separate from the next image
  print('\n \n \n \n \n \n \n \n')
  
  # Print real classification of the image
  print('\033[1m' + 'REAL CLASSIFICATION OF THE IMAGE: {}'.format('TUBER(S)' if y_trueFLAIR[i][0]==1 else 'NO TUBER(S)') + '\033[0m')
  # Print model classification and model probability of TSC
  print('Model classification of this image: {} \nEstimated probability of tuber(s): {} \n'.format('TUBER(S)' if testInceptionV3FLAIR[i][0]>0.5 else 'NO TUBER(S)', testInceptionV3FLAIR[i][0]))     


  # Print title
  print('\033[1m' + 'CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)' + '\033[0m')     
    
  # Original image
  plt.subplot(2,3,1)
  plt.imshow(FLAIRtest_X[i])
  plt.grid(False)
  plt.xticks([])
  plt.yticks([])

  # Class activation map
  plt.subplot(2,3,2)
  heat_map = visualize_cam(model, layer_idx=300, filter_indices=None, seed_input=FLAIRtest_X[i])
  plt.imshow(heat_map)
  plt.grid(False)
  plt.xticks([])
  plt.yticks([])

  # Class activation map superimposed on the original image
  plt.subplot(2,3,3)
  plt.imshow(FLAIRtest_X[i])
  plt.imshow(heat_map, alpha = 0.8 * testInceptionV3FLAIR[i][0])
  plt.grid(False)
  plt.xticks([])
  plt.yticks([])

  # Original image
  plt.subplot(2,3,4)
  plt.imshow(FLAIRtest_X[i])
  plt.grid(False)
  plt.xticks([])
  plt.yticks([])

  # Saliency map
  heat_map = visualize_saliency(model, layer_idx=300, filter_indices=None, seed_input=FLAIRtest_X[i])
  plt.subplot(2,3,5)
  plt.imshow(heat_map)
  plt.grid(False)
  plt.xticks([])
  plt.yticks([])

  # Saliency map superimposed on the original image
  plt.subplot(2,3,6)
  plt.imshow(FLAIRtest_X[i])
  plt.imshow(heat_map, alpha = 0.8 * testInceptionV3FLAIR[i][0])
  plt.grid(False)
  plt.xticks([])
  plt.yticks([])

  # Show the figure and close it
  plt.show()
  plt.close()
 
EACH ORIGINAL IMAGE IS ANALYZED WITH TWO METHODS: CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

FOR EACH METHOD, THE FIRST IMAGE IS THE ORIGINAL IMAGE, THE SECOND IMAGE IS THE MAP, AND THE THIRD IMAGE IS THE MAP SUPERIMPOSED ON THE ORIGINAL IMAGE WITH A TRANSPARENCY THAT IS PROPORTIONAL TO THE ESTIMATED PROBABILITY OF THE IMAGE HAVING TUBER(S)

[Output abridged: for each of the 20 FLAIR slices the loop prints the real classification (TUBER(S) in all 20), the model classification, and the estimated probability of tuber(s) (the same values as Out[19]), followed by the figure with the class activation map (upper row) and the saliency map (lower row). The first slice is the only misclassification: real classification TUBER(S), model classification NO TUBER(S), estimated probability of tuber(s) 0.05197743698954582.]

SCROLL UP TO SEE THE Grad-CAM AND SALIENCY MAPS OF EACH IMAGE

This 2-year-old male had a high tuber burden, with tubers detected in every FLAIR MRI slice by our neuroradiologist. The convolutional neural network classified 19 out of 20 FLAIR MRI slices as having tuber(s), even though some of the tubers were subtle. Interestingly, in the misclassified image the maps show that the convolutional neural network was focusing on the area of the tuber, yet the estimated probability of tuber(s) was very low. The accuracy of the convolutional neural network on FLAIR was 0.95.

Each original image is analyzed with two methods: Gradient-weighted class activation maps (upper row) and saliency maps (lower row).

For each method, the first image is the original image, the second image is the map, and the third image is the map superimposed on the original image with a transparency that is proportional to the estimated probability of the image having tuber(s) (higher estimated probabilities produce clearly seen maps overlaid on the original image and lower estimated probabilities produce very transparent maps overlaid on the original image).