DETECTION OF TUBERS WITH CONVOLUTIONAL NEURAL NETWORKS

TEST CASE II

Import packages and functions

In [1]:
# Import packages
%matplotlib inline
from PIL import Image
import numpy as np
import os
import re
from skimage.color import gray2rgb
import matplotlib.pyplot as plt
from sklearn.utils import shuffle
!pip install tensorflow
!pip install keras
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten, GaussianNoise, BatchNormalization, GlobalAveragePooling2D
from keras.layers import Conv2D, MaxPooling2D
from keras.optimizers import Adam
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
from keras.preprocessing import image
from keras.models import Model
from keras import backend as K
from sklearn.metrics import confusion_matrix
from sklearn.metrics import roc_auc_score
!pip install git+https://github.com/raghakot/keras-vis.git --upgrade
from vis.visualization import visualize_cam, visualize_saliency, overlay
from keras import activations
import matplotlib.cm as cm
import zipfile
from keras.models import model_from_json
import matplotlib as mpl
[pip output trimmed: tensorflow 1.12.0, keras 2.2.4, and their dependencies already satisfied; keras-vis 0.4.1 rebuilt from GitHub and reinstalled]
Using TensorFlow backend.

FIRST PART: DATA INGESTION

Import original images from local computer

These images come from a 5-year-old girl with tuberous sclerosis complex (TSC). In total, there are 45 images: 30 consecutive axial T2 MRI slices and 15 consecutive axial FLAIR MRI slices.

In [2]:
# Set the figure size
mpl.rcParams['figure.figsize'] = (16,10)
In [3]:
# Unzip files
with zipfile.ZipFile("TestCaseIIT2.zip","r") as zip_ref:
    zip_ref.extractall()
with zipfile.ZipFile("TestCaseIIFLAIR.zip","r") as zip_ref:
    zip_ref.extractall()

Path to original images folder

In [4]:
# Path to the folder with the original images
pathtoimagesT2test = './TestCaseIIT2/'

pathtoimagesFLAIRtest = './TestCaseIIFLAIR/'

SECOND PART: IMPORT OF FINAL DATA

In [5]:
# Functions to sort images with numbers within their name
def atoi(text):
    return int(text) if text.isdigit() else text

def natural_keys(text):
    return [ atoi(c) for c in re.split(r'(\d+)', text) ]
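
As a quick sanity check (a sketch, not a cell from the original notebook): plain lexicographic sorting would place 'slice10.png' before 'slice2.png', whereas natural_keys orders the numeric parts numerically. The filenames here are hypothetical.

# Hypothetical filenames, for illustration only
files = ['slice10.png', 'slice2.png', 'slice1.png']
print(sorted(files))                    # ['slice1.png', 'slice10.png', 'slice2.png']
print(sorted(files, key=natural_keys))  # ['slice1.png', 'slice2.png', 'slice10.png']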

Import images and create labels for the T2 set

In [6]:
## T2

# Define the image size
image_size = (224, 224)

# Read in the test images for T2
T2test_images = []
T2test_dir = pathtoimagesT2test
T2test_files = os.listdir(T2test_dir)
T2test_files.sort(key=natural_keys)
# For each image
for f in T2test_files:
  # Open the image
  img = Image.open(T2test_dir + f)
  # Resize the image so that it has a size 224x224
  img = img.resize(image_size)
  # Transform into a numpy array
  img_arr = np.array(img)
  # Transform from 224x224 (grayscale) to 224x224x3 (RGB)
  if img_arr.shape == image_size:
        # Stack the grayscale slice into 3 identical channels
        # (avoids np.expand_dims(img_arr, 3), whose out-of-range axis fails on newer NumPy)
        img_arr = gray2rgb(img_arr)
  # Add the image to the array of images      
  T2test_images.append(img_arr)

# After having transformed all images, transform the list into a numpy array  
T2test_X = np.array(T2test_images)

# Create an array of labels (as read by the radiologist)
T2test_y = np.array([[1], [1], [1], [1], [0], [0], [0], [0], [0], [0], 
                     [0], [0], [0], [0], [0], [0], [0], [0], [0], [0],
                     [0], [0], [0], [0], [0], [0], [0], [0], [0], [0]])

# GPU expects values to be 32-bit floats
T2test_X = T2test_X.astype(np.float32)

# Rescale the values to be between 0 and 1
T2test_X /= 255.
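
The same loading loop is repeated below for the FLAIR set. A minimal refactor sketch that factors the shared steps into one function (the helper name load_mri_folder is an assumption, not part of the original notebook):

# Sketch: shared loader for both modalities (hypothetical helper)
def load_mri_folder(path, image_size=(224, 224)):
    files = sorted(os.listdir(path), key=natural_keys)
    images = []
    for f in files:
        # Open, resize to 224x224, and convert to a numpy array
        img_arr = np.array(Image.open(path + f).resize(image_size))
        if img_arr.shape == image_size:
            # Stack the grayscale slice into 3 identical RGB channels
            img_arr = gray2rgb(img_arr)
        images.append(img_arr)
    # 32-bit floats rescaled to [0, 1]
    return np.array(images).astype(np.float32) / 255.

# Usage (equivalent to the cells above and below):
# T2test_X = load_mri_folder(pathtoimagesT2test)
# FLAIRtest_X = load_mri_folder(pathtoimagesFLAIRtest)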
In [7]:
T2test_X.shape
Out[7]:
(30, 224, 224, 3)
In [8]:
# Example of an image to make sure they were converted right
plt.imshow(T2test_X[0])
plt.grid(b=None)
plt.xticks([])
plt.yticks([])
plt.show()
In [9]:
T2test_y.shape
Out[9]:
(30, 1)
In [10]:
T2test_y[0]
Out[10]:
array([1])

Import images and create labels for the FLAIR set

In [11]:
## FLAIR

# Define the image size
image_size = (224, 224)

# Read in the test images for FLAIR
FLAIRtest_images = []
FLAIRtest_dir = pathtoimagesFLAIRtest
FLAIRtest_files = os.listdir(FLAIRtest_dir)
FLAIRtest_files.sort(key=natural_keys)
# For each image
for f in FLAIRtest_files:
  # Open the image
  img = Image.open(FLAIRtest_dir + f)
  # Resize the image so that it has a size 224x224
  img = img.resize(image_size)
  # Transform into a numpy array
  img_arr = np.array(img)
  # Transform from 224x224 (grayscale) to 224x224x3 (RGB)
  if img_arr.shape == image_size:
        # Stack the grayscale slice into 3 identical channels
        # (avoids np.expand_dims(img_arr, 3), whose out-of-range axis fails on newer NumPy)
        img_arr = gray2rgb(img_arr)
  # Add the image to the array of images      
  FLAIRtest_images.append(img_arr)

# After having transformed all images, transform the list into a numpy array  
FLAIRtest_X = np.array(FLAIRtest_images)

# Create an array of labels (as read by the radiologist)
FLAIRtest_y = np.array([[1], [1], [0], [0], [0], [0], [0], [0], [0], [0], 
                        [0], [0], [0], [0], [0]])

# GPU expects values to be 32-bit floats
FLAIRtest_X = FLAIRtest_X.astype(np.float32)

# Rescale the values to be between 0 and 1
FLAIRtest_X /= 255.
In [12]:
FLAIRtest_X.shape
Out[12]:
(15, 224, 224, 3)
In [13]:
# Example of an image to make sure they were converted right
plt.imshow(FLAIRtest_X[0])
plt.grid(b=None)
plt.xticks([])
plt.yticks([])
plt.show()
In [14]:
FLAIRtest_y.shape
Out[14]:
(15, 1)
In [15]:
FLAIRtest_y[0]
Out[15]:
array([1])

THIRD PART: VISUALIZE CLASS ACTIVATION MAPS AND SALIENCY MAPS

Load the model

In [16]:
# load model
json_file = open('InceptionV3.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
model = model_from_json(loaded_model_json)
# load weights into new model
model.load_weights("InceptionV3.h5")
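
For reference, a JSON-plus-HDF5 pair like the one loaded above is produced with the standard Keras serialization calls; a sketch of the saving side (the training notebook itself is not shown here):

# Sketch: save architecture as JSON and weights as HDF5 (mirror of the loading code above)
with open('InceptionV3.json', 'w') as json_file:
    json_file.write(model.to_json())  # architecture only
model.save_weights('InceptionV3.h5')  # weights only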
In [17]:
# Compile model
model.compile(optimizer = Adam(lr = 0.00025), loss = 'binary_crossentropy', metrics = ['accuracy'])
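
Because the model is compiled with an accuracy metric, the test-set accuracy computed manually from the confusion matrices below can also be obtained directly from Keras; a one-line sketch:

# Sketch: loss and accuracy straight from Keras (should match the manual computation below)
loss_T2, acc_T2 = model.evaluate(T2test_X, T2test_y, batch_size=16)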
In [18]:
# Generate predictions on test data in the form of probabilities for T2
testInceptionV3T2 = model.predict(T2test_X, batch_size = 16)
testInceptionV3T2
Out[18]:
array([[8.2918715e-01],
       [9.8063135e-01],
       [9.8627210e-01],
       [9.9746072e-01],
       [9.9338299e-01],
       [2.3976957e-02],
       [3.7464635e-03],
       [4.6884022e-03],
       [1.5118733e-04],
       [2.9703348e-03],
       [7.1524337e-02],
       [9.9784982e-01],
       [1.0582309e-02],
       [2.9095341e-04],
       [2.7956718e-05],
       [4.2256527e-04],
       [1.4007068e-04],
       [1.9773086e-05],
       [1.5059841e-01],
       [4.2285170e-02],
       [5.4385089e-03],
       [9.8892605e-01],
       [8.1733704e-01],
       [7.8671736e-01],
       [3.4336022e-01],
       [9.9534589e-01],
       [4.7428828e-01],
       [9.9565339e-01],
       [9.8819578e-01],
       [9.9841201e-01]], dtype=float32)
In [19]:
# Generate predictions on test data in the form of probabilities for FLAIR
testInceptionV3FLAIR = model.predict(FLAIRtest_X, batch_size = 16)
testInceptionV3FLAIR
Out[19]:
array([[9.9849749e-01],
       [9.9972647e-01],
       [3.7536051e-05],
       [6.3738248e-06],
       [4.9218893e-06],
       [8.2873239e-06],
       [1.7174463e-05],
       [7.7989198e-06],
       [1.3543068e-06],
       [2.5131803e-06],
       [5.9892441e-06],
       [2.1663054e-06],
       [1.6606858e-05],
       [9.4643647e-06],
       [3.0817321e-04]], dtype=float32)
In [20]:
# Create the confusion matrix for T2
y_trueT2 = T2test_y
y_predInceptionV3T2 = testInceptionV3T2 > 0.5
confusion_matrix(y_trueT2, y_predInceptionV3T2, labels=[0,1])
Out[20]:
array([[17,  9],
       [ 0,  4]], dtype=int64)
In [21]:
# Create the confusion matrix for FLAIR
y_trueFLAIR = FLAIRtest_y
y_predInceptionV3FLAIR = testInceptionV3FLAIR > 0.5
confusion_matrix(y_trueFLAIR, y_predInceptionV3FLAIR, labels=[0,1])
Out[21]:
array([[13,  0],
       [ 0,  2]], dtype=int64)
In [22]:
# Calculate accuracy for T2
cmT2 = confusion_matrix(y_trueT2, y_predInceptionV3T2, labels=[0,1])
accuracy_InceptionV3T2 = (cmT2[0, 0] + cmT2[1, 1]) / cmT2.sum()
print('The accuracy in the test set is {}.'.format(accuracy_InceptionV3T2))
The accuracy in the test set is 0.7.
In [23]:
# Calculate accuracy for FLAIR
cmFLAIR = confusion_matrix(y_trueFLAIR, y_predInceptionV3FLAIR, labels=[0,1])
accuracy_InceptionV3FLAIR = (cmFLAIR[0, 0] + cmFLAIR[1, 1]) / cmFLAIR.sum()
print('The accuracy in the test set is {}.'.format(accuracy_InceptionV3FLAIR))
The accuracy in the test set is 1.0.
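
roc_auc_score is imported at the top of the notebook but never called; as a supplementary sketch (not part of the original analysis), sensitivity, specificity, and AUC for the T2 set can be read off the same predictions, and the FLAIR set can be treated identically:

# Supplementary sketch: per-class rates and AUC for T2 (reuses cmT2 from above)
tn, fp, fn, tp = cmT2.ravel()
print('Sensitivity (T2): {:.2f}'.format(tp / (tp + fn)))  # 4 / (4 + 0) = 1.00
print('Specificity (T2): {:.2f}'.format(tn / (tn + fp)))  # 17 / (17 + 9) = 0.65
print('AUC (T2): {:.2f}'.format(roc_auc_score(y_trueT2.ravel(), testInceptionV3T2.ravel())))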

Visualize the data

In [24]:
# Visualize the structure and layers of the model
model.layers
Out[24]:
[<keras.engine.input_layer.InputLayer at 0xf9bf198>,
 <keras.layers.convolutional.Conv2D at 0xf9bf208>,
 <keras.layers.normalization.BatchNormalization at 0xf9bf780>,
 <keras.layers.core.Activation at 0xf9bf5f8>,
 ...
 <keras.layers.merge.Concatenate at 0xff36198>,
 <keras.layers.pooling.GlobalAveragePooling2D at 0xff361d0>,
 <keras.layers.core.Dense at 0xff36240>,
 <keras.layers.core.Dense at 0xff36390>]
(layer list truncated: 314 layers in total)
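
Rather than hard-coding layer_idx=300 in the visualization calls below, keras-vis can resolve the index from a layer name; a sketch ('dense_2' is an assumed name for the final Dense layer, so check model.summary() for the real one):

# Sketch: look up the layer index by name instead of hard-coding it
# ('dense_2' is an assumption; verify with model.summary())
from vis.utils import utils
layer_idx = utils.find_layer_idx(model, 'dense_2')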
In [25]:
# Iterate through the MRIs in T2

print('\n \n' + '\033[1m' + 'EACH ORIGINAL IMAGE IS ANALYZED WITH TWO METHODS: CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)' + '\033[0m' + '\n')
print('\033[1m' + 'FOR EACH METHOD, THE FIRST IMAGE IS THE ORIGINAL IMAGE, THE SECOND IMAGE IS THE MAP, AND THE THIRD IMAGE IS THE MAP SUPERIMPOSED ON THE ORIGINAL IMAGE WITH A TRANSPARENCY THAT IS PROPORTIONAL TO THE ESTIMATED PROBABILITY OF THE IMAGE HAVING TUBER(S) (HIGHER ESTIMATED PROBABILITIES PRODUCE CLEARLY SEEN MAPS OVERLAID ON THE ORIGINAL IMAGE AND LOWER ESTIMATED PROBABILITIES PRODUCE VERY TRANSPARENT MAPS OVERLAID ON THE ORIGINAL IMAGE)' + '\033[0m' + '\n \n \n \n')


for i in range(T2test_X.shape[0]):
    
  # Print spaces to separate from the next image
  print('\n \n \n \n \n \n \n \n')
  
  # Print real classification of the image
  print('\033[1m' + 'REAL CLASSIFICATION OF THE IMAGE: {}'.format('TUBER(S)' if y_trueT2[i][0]==1 else 'NO TUBER(S)') + '\033[0m')
  # Print model classification and model probability of TSC
  print('Model classification of this image: {} \nEstimated probability of tuber(s): {} \n'.format('TUBER(S)' if testInceptionV3T2[i][0]>0.5 else 'NO TUBER(S)', testInceptionV3T2[i][0]))     


  # Print title
  print('\033[1m' + 'CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)' + '\033[0m')     
    
  # Original image
  plt.subplot(2,3,1)
  plt.imshow(T2test_X[i])
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])
  
  # Heat map
  plt.subplot(2,3,2)
  heat_map = visualize_cam(model, layer_idx=300, filter_indices=None, seed_input=T2test_X[i])
  plt.imshow(heat_map)
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])
  
  # Heat map superimposed on original image
  plt.subplot(2,3,3)
  plt.imshow(T2test_X[i])
  plt.imshow(heat_map, alpha = 0.8 * testInceptionV3T2[i][0])
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])

       
  # Original image
  plt.subplot(2,3,4)
  plt.imshow(T2test_X[i])
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])


  # Heat map
  heat_map = visualize_saliency(model, layer_idx=300, filter_indices=None, seed_input=T2test_X[i])
  plt.subplot(2,3,5)
  plt.imshow(heat_map)
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])
  
  # Heat map superimposed on original image
  plt.subplot(2,3,6)
  plt.imshow(T2test_X[i])
  plt.imshow(heat_map, alpha = 0.8 * testInceptionV3T2[i][0])
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])

  # Show the image and close it
  plt.show()
  plt.close()
 
EACH ORIGINAL IMAGE IS ANALYZED WITH TWO METHODS: CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

FOR EACH METHOD, THE FIRST IMAGE IS THE ORIGINAL IMAGE, THE SECOND IMAGE IS THE MAP, AND THE THIRD IMAGE IS THE MAP SUPERIMPOSED ON THE ORIGINAL IMAGE WITH A TRANSPARENCY THAT IS PROPORTIONAL TO THE ESTIMATED PROBABILITY OF THE IMAGE HAVING TUBER(S) (HIGHER ESTIMATED PROBABILITIES PRODUCE CLEARLY SEEN MAPS OVERLAID ON THE ORIGINAL IMAGE AND LOWER ESTIMATED PROBABILITIES PRODUCE VERY TRANSPARENT MAPS OVERLAID ON THE ORIGINAL IMAGE)

REAL CLASSIFICATION OF THE IMAGE: TUBER(S)
Model classification of this image: TUBER(S) 
Estimated probability of tuber(s): 0.8291871547698975 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: TUBER(S)
Model classification of this image: TUBER(S) 
Estimated probability of tuber(s): 0.9806313514709473 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: TUBER(S)
Model classification of this image: TUBER(S) 
Estimated probability of tuber(s): 0.9862720966339111 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: TUBER(S)
Model classification of this image: TUBER(S) 
Estimated probability of tuber(s): 0.9974607229232788 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: TUBER(S) 
Estimated probability of tuber(s): 0.99338299036026 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: NO TUBER(S) 
Estimated probability of tuber(s): 0.02397695742547512 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: NO TUBER(S) 
Estimated probability of tuber(s): 0.0037464634515345097 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: NO TUBER(S) 
Estimated probability of tuber(s): 0.00468840217217803 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: NO TUBER(S) 
Estimated probability of tuber(s): 0.0001511873269919306 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: NO TUBER(S) 
Estimated probability of tuber(s): 0.002970334840938449 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: NO TUBER(S) 
Estimated probability of tuber(s): 0.07152433693408966 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: TUBER(S) 
Estimated probability of tuber(s): 0.9978498220443726 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: NO TUBER(S) 
Estimated probability of tuber(s): 0.01058230921626091 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: NO TUBER(S) 
Estimated probability of tuber(s): 0.00029095340869389474 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: NO TUBER(S) 
Estimated probability of tuber(s): 2.7956717531196773e-05 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: NO TUBER(S) 
Estimated probability of tuber(s): 0.0004225652664899826 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: NO TUBER(S) 
Estimated probability of tuber(s): 0.00014007068239152431 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: NO TUBER(S) 
Estimated probability of tuber(s): 1.977308602363337e-05 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: NO TUBER(S) 
Estimated probability of tuber(s): 0.150598406791687 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: NO TUBER(S) 
Estimated probability of tuber(s): 0.042285170406103134 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: NO TUBER(S) 
Estimated probability of tuber(s): 0.005438508931547403 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: TUBER(S) 
Estimated probability of tuber(s): 0.9889260530471802 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: TUBER(S) 
Estimated probability of tuber(s): 0.8173370361328125 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: TUBER(S) 
Estimated probability of tuber(s): 0.7867173552513123 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: NO TUBER(S) 
Estimated probability of tuber(s): 0.34336021542549133 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: TUBER(S) 
Estimated probability of tuber(s): 0.9953458905220032 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: NO TUBER(S) 
Estimated probability of tuber(s): 0.47428828477859497 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: TUBER(S) 
Estimated probability of tuber(s): 0.9956533908843994 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: TUBER(S) 
Estimated probability of tuber(s): 0.9881957769393921 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: TUBER(S) 
Estimated probability of tuber(s): 0.998412013053894 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

SCROLL UP TO SEE THE GRAD-CAM AND SALIENCY MAPS OF EACH IMAGE

This 5-year-old girl had a low tuber burden: our neuroradiologist detected tubers in only four of the 30 T2 MRI slices. The convolutional neural network classified 13 of the 30 T2 MRI slices as having tuber(s). In particular, it correctly classified all 4 slices with tuber(s), and it correctly classified 17 of the 26 negative slices as negative. Its accuracy on T2 was 0.7.

Each original image is analyzed with two methods: Gradient-weighted class activation maps (upper row) and saliency maps (lower row).

For each method, the first image is the original image, the second image is the map, and the third image is the map superimposed on the original image with a transparency that is proportional to the estimated probability of the image having tuber(s) (higher estimated probabilities produce clearly seen maps overlaid on the original image and lower estimated probabilities produce very transparent maps overlaid on the original image).

In [26]:
# Iterate through the MRIs in FLAIR

print('\n \n' + '\033[1m' + 'EACH ORIGINAL IMAGE IS ANALYZED WITH TWO METHODS: CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)' + '\033[0m' + '\n')
print('\033[1m' + 'FOR EACH METHOD, THE FIRST IMAGE IS THE ORIGINAL IMAGE, THE SECOND IMAGE IS THE MAP, AND THE THIRD IMAGE IS THE MAP SUPERIMPOSED ON THE ORIGINAL IMAGE WITH A TRANSPARENCY THAT IS PROPORTIONAL TO THE ESTIMATED PROBABILITY OF THE IMAGE HAVING TUBER(S) (HIGHER ESTIMATED PROBABILITIES PRODUCE CLEARLY SEEN MAPS OVERLAID ON THE ORIGINAL IMAGE AND LOWER ESTIMATED PROBABILITIES PRODUCE VERY TRANSPARENT MAPS OVERLAID ON THE ORIGINAL IMAGE)' + '\033[0m' + '\n \n \n \n')


for i in range(FLAIRtest_X.shape[0]):
    
  # Print spaces to separate from the next image
  print('\n \n \n \n \n \n \n \n')
  
  # Print real classification of the image
  print('\033[1m' + 'REAL CLASSIFICATION OF THE IMAGE: {}'.format('TUBER(S)' if y_trueFLAIR[i][0]==1 else 'NO TUBER(S)') + '\033[0m')
  # Print model classification and model probability of TSC
  print('Model classification of this image: {} \nEstimated probability of tuber(s): {} \n'.format('TUBER(S)' if testInceptionV3FLAIR[i][0]>0.5 else 'NO TUBER(S)', testInceptionV3FLAIR[i][0]))     


  # Print title
  print('\033[1m' + 'CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)' + '\033[0m')     
    
  # Original image
  plt.subplot(2,3,1)
  plt.imshow(FLAIRtest_X[i])
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])
  
  # Heat map
  plt.subplot(2,3,2)
  heat_map = visualize_cam(model, layer_idx=300, filter_indices=None, seed_input=FLAIRtest_X[i])
  plt.imshow(heat_map)
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])
  
  # Heat map superimposed on original image
  plt.subplot(2,3,3)
  plt.imshow(FLAIRtest_X[i])
  plt.imshow(heat_map, alpha = 0.8 * testInceptionV3FLAIR[i][0])
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])

       
  # Original image
  plt.subplot(2,3,4)
  plt.imshow(FLAIRtest_X[i])
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])


  # Heat map
  heat_map = visualize_saliency(model, layer_idx=300, filter_indices=None, seed_input=FLAIRtest_X[i])
  plt.subplot(2,3,5)
  plt.imshow(heat_map)
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])
  
  # Heat map superimposed on original image
  plt.subplot(2,3,6)
  plt.imshow(FLAIRtest_X[i])
  plt.imshow(heat_map, alpha = 0.8 * testInceptionV3FLAIR[i][0])
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])

  # Show the image and close it
  plt.show()
  plt.close()
 
EACH ORIGINAL IMAGE IS ANALYZED WITH TWO METHODS: CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

FOR EACH METHOD, THE FIRST IMAGE IS THE ORIGINAL IMAGE, THE SECOND IMAGE IS THE MAP, AND THE THIRD IMAGE IS THE MAP SUPERIMPOSED ON THE ORIGINAL IMAGE WITH A TRANSPARENCY THAT IS PROPORTIONAL TO THE ESTIMATED PROBABILITY OF THE IMAGE HAVING TUBER(S) (HIGHER ESTIMATED PROBABILITIES PRODUCE CLEARLY SEEN MAPS OVERLAID ON THE ORIGINAL IMAGE AND LOWER ESTIMATED PROBABILITIES PRODUCE VERY TRANSPARENT MAPS OVERLAID ON THE ORIGINAL IMAGE)

REAL CLASSIFICATION OF THE IMAGE: TUBER(S)
Model classification of this image: TUBER(S) 
Estimated probability of tuber(s): 0.998497486114502 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: TUBER(S)
Model classification of this image: TUBER(S) 
Estimated probability of tuber(s): 0.9997264742851257 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: NO TUBER(S) 
Estimated probability of tuber(s): 3.7536050513153896e-05 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: NO TUBER(S) 
Estimated probability of tuber(s): 6.373824817273999e-06 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: NO TUBER(S) 
Estimated probability of tuber(s): 4.9218892854696605e-06 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: NO TUBER(S) 
Estimated probability of tuber(s): 8.287323908007238e-06 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: NO TUBER(S) 
Estimated probability of tuber(s): 1.717446320981253e-05 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: NO TUBER(S) 
Estimated probability of tuber(s): 7.798919796186965e-06 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: NO TUBER(S) 
Estimated probability of tuber(s): 1.3543068462240626e-06 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: NO TUBER(S) 
Estimated probability of tuber(s): 2.513180334062781e-06 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: NO TUBER(S) 
Estimated probability of tuber(s): 5.989244073134614e-06 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: NO TUBER(S) 
Estimated probability of tuber(s): 2.1663054212694988e-06 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: NO TUBER(S) 
Estimated probability of tuber(s): 1.6606858480372466e-05 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: NO TUBER(S) 
Estimated probability of tuber(s): 9.464364666200709e-06 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

REAL CLASSIFICATION OF THE IMAGE: NO TUBER(S)
Model classification of this image: NO TUBER(S) 
Estimated probability of tuber(s): 0.00030817321385256946 

CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

SCROLL UP TO SEE THE GRAD-CAM AND SALIENCY MAPS OF EACH IMAGE

This 5-year-old girl had a low tuber burden: our neuroradiologist detected tubers in two of the 15 FLAIR MRI slices. The convolutional neural network classified exactly those 2 of the 15 FLAIR MRI slices as having tuber(s), a perfect classification. Its accuracy on FLAIR was 1.0.

Each original image is analyzed with two methods: Gradient-weighted class activation maps (upper row) and saliency maps (lower row).

For each method, the first image is the original image, the second image is the map, and the third image is the map superimposed on the original image with a transparency that is proportional to the estimated probability of the image having tuber(s) (higher estimated probabilities produce clearly seen maps overlaid on the original image and lower estimated probabilities produce very transparent maps overlaid on the original image).