DETECTION OF TUBERS WITH CONVOLUTIONAL NEURAL NETWORKS

TEST CASE V

Import packages and functions

In [1]:
# Install dependencies (no-ops if already present)
!pip install tensorflow
!pip install keras
!pip install git+https://github.com/raghakot/keras-vis.git --upgrade

# Import packages
%matplotlib inline
from PIL import Image
import numpy as np
import os
import re
import zipfile
from skimage.color import gray2rgb
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from sklearn.utils import shuffle
from sklearn.metrics import confusion_matrix
from sklearn.metrics import roc_auc_score
import tensorflow as tf
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
from keras.preprocessing import image
from keras.models import Sequential, Model, model_from_json
from keras.layers import Dense, Dropout, Activation, Flatten, GaussianNoise, BatchNormalization, GlobalAveragePooling2D
from keras.layers import Conv2D, MaxPooling2D
from keras.optimizers import Adam
from keras import activations
from keras import backend as K
from keras.backend.tensorflow_backend import set_session
from vis.visualization import visualize_cam, visualize_saliency, overlay
[pip output trimmed: tensorflow 1.12.0, keras 2.2.4, and their dependencies were already installed; keras-vis 0.4.1 was rebuilt and reinstalled from GitHub. Keras reports "Using TensorFlow backend." and h5py emits a FutureWarning about issubdtype conversion.]

FIRST PART: DATA INGESTION

Import original images from local computer

These images come from a 4-year-old female with TSC (tuberous sclerosis complex). In total, there are 50 images: 35 consecutive axial T2 MRI slices and 15 consecutive axial FLAIR MRI slices.

In [2]:
# Set the figure size
mpl.rcParams['figure.figsize'] = (16,10)
In [3]:
# Unzip files
with zipfile.ZipFile("TestCaseVT2.zip","r") as zip_ref:
    zip_ref.extractall()
with zipfile.ZipFile("TestCaseVFLAIR.zip","r") as zip_ref:
    zip_ref.extractall()

Path to original images folder

In [4]:
# Path to the folder with the original images
pathtoimagesT2test = './TestCaseVT2/'

pathtoimagesFLAIRtest = './TestCaseVFLAIR/'

SECOND PART: IMPORTATION OF FINAL DATA

In [5]:
# Functions to sort images with numbers within their name
def atoi(text):
    return int(text) if text.isdigit() else text

def natural_keys(text):
    return [ atoi(c) for c in re.split(r'(\d+)', text) ]
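
For example, a plain lexicographic sort puts 'slice10.png' before 'slice2.png', while natural_keys sorts by the embedded number (the filenames here are made up for illustration):

sorted(['slice10.png', 'slice2.png', 'slice1.png'])                    # ['slice1.png', 'slice10.png', 'slice2.png']
sorted(['slice10.png', 'slice2.png', 'slice1.png'], key=natural_keys)  # ['slice1.png', 'slice2.png', 'slice10.png']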

Import images and create labels for the T2 set

In [6]:
## T2

# Define the image size
image_size = (224, 224)

# Read in the test images for T2
T2test_images = []
T2test_dir = pathtoimagesT2test
T2test_files = os.listdir(T2test_dir)
T2test_files.sort(key=natural_keys)
# For each image
for f in T2test_files:
  # Open the image
  img = Image.open(T2test_dir + f)
  # Resize the image so that it has a size 224x224
  img = img.resize(image_size)
  # Transform into a numpy array
  img_arr = np.array(img)
  # Transform grayscale 224x224 images to 224x224x3 (gray2rgb accepts the 2D array directly)
  if img_arr.shape == image_size:
        img_arr = gray2rgb(img_arr)
  # Add the image to the list of images
  T2test_images.append(img_arr)

# After having transformed all images, transform the list into a numpy array  
T2test_X = np.array(T2test_images)

# Create an array of labels (as read by the radiologist)
T2test_y = np.array([[0], [0], [0], [0], [0], [1], [1], [1], [1], [1], 
                     [1], [0], [0], [0], [0], [1], [1], [1], [1], [1],
                     [1], [0], [1], [1], [0], [0], [0], [0], [0], [0],
                     [0], [0], [0], [0], [0]])

# GPU expects values to be 32-bit floats
T2test_X = T2test_X.astype(np.float32)

# Rescale the values to be between 0 and 1
T2test_X /= 255.
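
As a quick sanity check on class balance before evaluating, the label counts can be tallied (a minimal sketch; np.bincount is plain numpy):

# Count slices per class: index 0 = no tuber(s), index 1 = tuber(s)
np.bincount(T2test_y.ravel())  # array([21, 14]): 21 slices without and 14 with tuber(s)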
In [7]:
T2test_X.shape
Out[7]:
(35, 224, 224, 3)
In [8]:
# Example of an image to make sure they were converted right
plt.imshow(T2test_X[0])
plt.grid(b=None)
plt.xticks([])
plt.yticks([])
plt.show()
In [9]:
T2test_y.shape
Out[9]:
(35, 1)
In [10]:
T2test_y[0]
Out[10]:
array([0])
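
The T2 and FLAIR import cells share the same preprocessing steps, so they could be collapsed into one helper. A sketch (the name load_mri_slices is ours, not part of the original notebook):

def load_mri_slices(directory, image_size=(224, 224)):
    # Load, resize, RGB-convert, and rescale all images in a directory, in natural order
    files = sorted(os.listdir(directory), key=natural_keys)
    images = []
    for f in files:
        img = Image.open(os.path.join(directory, f)).resize(image_size)
        img_arr = np.array(img)
        # Grayscale slices come in as 224x224; convert to 224x224x3
        if img_arr.shape == image_size:
            img_arr = gray2rgb(img_arr)
        images.append(img_arr)
    return np.array(images).astype(np.float32) / 255.

# Usage: T2test_X = load_mri_slices(pathtoimagesT2test)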

Import images and create labels for the FLAIR set

In [11]:
## FLAIR

# Define the image size
image_size = (224, 224)

# Read in the test images for FLAIR
FLAIRtest_images = []
FLAIRtest_dir = pathtoimagesFLAIRtest
FLAIRtest_files = os.listdir(FLAIRtest_dir)
FLAIRtest_files.sort(key=natural_keys)
# For each image
for f in FLAIRtest_files:
  # Open the image
  img = Image.open(FLAIRtest_dir + f)
  # Resize the image so that it has a size 224x224
  img = img.resize(image_size)
  # Transform into a numpy array
  img_arr = np.array(img)
  # Transform grayscale 224x224 images to 224x224x3 (gray2rgb accepts the 2D array directly)
  if img_arr.shape == image_size:
        img_arr = gray2rgb(img_arr)
  # Add the image to the list of images
  FLAIRtest_images.append(img_arr)

# After having transformed all images, transform the list into a numpy array  
FLAIRtest_X = np.array(FLAIRtest_images)

# Create an array of labels (as read by the radiologist)
FLAIRtest_y = np.array([[1], [1], [1], [1], [0], [0], [0], [0], [1], [1], 
                        [0], [0], [0], [0], [0]])

# GPU expects values to be 32-bit floats
FLAIRtest_X = FLAIRtest_X.astype(np.float32)

# Rescale the values to be between 0 and 1
FLAIRtest_X /= 255.
In [12]:
FLAIRtest_X.shape
Out[12]:
(15, 224, 224, 3)
In [13]:
# Example of an image to make sure they were converted right
plt.imshow(FLAIRtest_X[0])
plt.grid(b=None)
plt.xticks([])
plt.yticks([])
plt.show()
In [14]:
FLAIRtest_y.shape
Out[14]:
(15, 1)
In [15]:
FLAIRtest_y[0]
Out[15]:
array([1])

THIRD PART: VISUALIZE CLASS ACTIVATION MAPS AND SALIENCY MAPS

Load the model

In [16]:
# load model
json_file = open('InceptionV3.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
model = model_from_json(loaded_model_json)
# load weights into new model
model.load_weights("InceptionV3.h5")
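
For context, files like InceptionV3.json and InceptionV3.h5 are typically produced on the training side along these lines (a presumed sketch; the save step is not part of this notebook):

# Presumed save step after training
with open('InceptionV3.json', 'w') as json_file:
    json_file.write(model.to_json())   # architecture only
model.save_weights('InceptionV3.h5')   # weights only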
In [17]:
# Compile model
model.compile(optimizer = Adam(lr = 0.00025), loss = 'binary_crossentropy', metrics = ['accuracy'])
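
Because the model is compiled with accuracy as a metric, Keras can also score the test sets directly; the accuracies should match the confusion-matrix computations below (a quick optional check):

lossT2, accT2 = model.evaluate(T2test_X, T2test_y, batch_size=16)
lossFLAIR, accFLAIR = model.evaluate(FLAIRtest_X, FLAIRtest_y, batch_size=16)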
In [18]:
# Generate predictions on test data in the form of probabilities for T2
testInceptionV3T2 = model.predict(T2test_X, batch_size = 16)
testInceptionV3T2
Out[18]:
array([[4.4537678e-02],
       [1.9112747e-04],
       [1.6339646e-01],
       [1.9239161e-02],
       [3.6147100e-01],
       [9.9991214e-01],
       [9.8952711e-01],
       [9.9897337e-01],
       [9.9713254e-01],
       [9.9989903e-01],
       [7.3466516e-01],
       [3.2274562e-05],
       [1.3195887e-03],
       [1.3555899e-02],
       [8.7824249e-01],
       [9.3789899e-01],
       [3.8395973e-03],
       [2.2528593e-04],
       [3.8430495e-03],
       [2.6240168e-05],
       [5.9527510e-01],
       [9.7707844e-01],
       [8.5153973e-01],
       [9.6842992e-01],
       [9.2528808e-01],
       [1.9086692e-01],
       [9.4312030e-01],
       [9.9112928e-01],
       [9.9435800e-01],
       [9.9678063e-01],
       [6.5501481e-01],
       [1.7463298e-01],
       [6.2329388e-01],
       [9.9838686e-01],
       [9.7554749e-01]], dtype=float32)
In [19]:
# Generate predictions on test data in the form of probabilities for FLAIR
testInceptionV3FLAIR = model.predict(FLAIRtest_X, batch_size = 16)
testInceptionV3FLAIR
Out[19]:
array([[2.0027260e-01],
       [9.9951100e-01],
       [9.9991965e-01],
       [4.1551375e-01],
       [4.8261394e-05],
       [1.3686615e-05],
       [1.4089873e-05],
       [1.1179532e-05],
       [2.7335857e-06],
       [3.2192083e-06],
       [1.6262208e-03],
       [3.1169711e-07],
       [1.6226636e-06],
       [2.1481846e-06],
       [5.6563484e-05]], dtype=float32)
In [20]:
# Create the confusion matrix for T2
y_trueT2 = T2test_y
y_predInceptionV3T2 = testInceptionV3T2 > 0.5
confusion_matrix(y_trueT2, y_predInceptionV3T2, labels=[0,1])
Out[20]:
array([[10, 11],
       [ 4, 10]], dtype=int64)
In [21]:
# Create the confusion matrix for FLAIR
y_trueFLAIR = FLAIRtest_y
y_predInceptionV3FLAIR = testInceptionV3FLAIR > 0.5
confusion_matrix(y_trueFLAIR, y_predInceptionV3FLAIR)
Out[21]:
array([[9, 0],
       [4, 2]], dtype=int64)
In [22]:
# Calculate accuracy for T2
cmT2 = confusion_matrix(y_trueT2, y_predInceptionV3T2, labels=[0,1])
accuracy_InceptionV3T2 = (cmT2[0, 0] + cmT2[1, 1]) / cmT2.sum()
print('The accuracy in the test set is {}.'.format(accuracy_InceptionV3T2))
The accuracy in the test set is 0.5714285714285714.
In [23]:
# Calculate accuracy for FLAIR
cmFLAIR = confusion_matrix(y_trueFLAIR, y_predInceptionV3FLAIR)
accuracy_InceptionV3FLAIR = (cmFLAIR[0, 0] + cmFLAIR[1, 1]) / cmFLAIR.sum()
print('The accuracy in the test set is {}.'.format(accuracy_InceptionV3FLAIR))
The accuracy in the test set is 0.7333333333333333.
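
Accuracy alone hides the error types. Sensitivity, specificity, and the threshold-free AUC can be computed from the same arrays (a minimal sketch using the already-imported roc_auc_score; cmT2 and cmFLAIR are the confusion matrices computed above):

# Row 0 = no tuber(s), row 1 = tuber(s)
sensitivityT2 = cmT2[1, 1] / float(cmT2[1, 0] + cmT2[1, 1])   # 10/14 ≈ 0.71
specificityT2 = cmT2[0, 0] / float(cmT2[0, 0] + cmT2[0, 1])   # 10/21 ≈ 0.48
aucT2 = roc_auc_score(y_trueT2, testInceptionV3T2)
aucFLAIR = roc_auc_score(y_trueFLAIR, testInceptionV3FLAIR)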

Visualize the data

In [24]:
# Visualize the structure and layers of the model
model.layers
Out[24]:
[Output truncated: model.layers lists every layer of the InceptionV3-based model, from the initial InputLayer through the stacked Conv2D/BatchNormalization/Activation/pooling/Concatenate blocks to the final GlobalAveragePooling2D and two Dense layers.]
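
The visualizations below pass layer_idx=300, a hardcoded position in the layer list above. keras-vis can instead resolve the index from the layer's name, which is less brittle (a sketch; check the actual name with model.layers[-1].name first, since it depends on how the model was built):

from vis.utils import utils

final_layer_name = model.layers[-1].name          # e.g. 'dense_2' (name depends on the build)
layer_idx = utils.find_layer_idx(model, final_layer_name)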
In [25]:
# Iterate through the MRIs in T2

print('\n \n' + '\033[1m' + 'EACH ORIGINAL IMAGE IS ANALYZED WITH TWO METHODS: CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)' + '\033[0m' + '\n')
print('\033[1m' + 'FOR EACH METHOD, THE FIRST IMAGE IS THE ORIGINAL IMAGE, THE SECOND IMAGE IS THE MAP, AND THE THIRD IMAGE IS THE MAP SUPERIMPOSED ON THE ORIGINAL IMAGE WITH A TRANSPARENCY THAT IS PROPORTIONAL TO THE ESTIMATED PROBABILITY OF THE IMAGE HAVING TUBER(S) (HIGHER ESTIMATED PROBABILITIES PRODUCE CLEARLY SEEN MAPS OVERLAID ON THE ORIGINAL IMAGE AND LOWER ESTIMATED PROBABILITIES PRODUCE VERY TRANSPARENT MAPS OVERLAID ON THE ORIGINAL IMAGE)' + '\033[0m' + '\n \n \n \n')


for i in range(T2test_X.shape[0]):
    
  # Print spaces to separate from the next image
  print('\n \n \n \n \n \n \n \n')
  
  # Print real classification of the image
  print('\033[1m' + 'REAL CLASSIFICATION OF THE IMAGE: {}'.format('TUBER(S)' if y_trueT2[i][0]==1 else 'NO TUBER(S)') + '\033[0m')
  # Print model classification and model probability of TSC
  print('Model classification of this image: {} \nEstimated probability of tuber(s): {} \n'.format('TUBER(S)' if testInceptionV3T2[i][0]>0.5 else 'NO TUBER(S)', testInceptionV3T2[i][0]))     


  # Print title
  print('\033[1m' + 'CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)' + '\033[0m')     
    
  # Original image
  plt.subplot(2,3,1)
  plt.imshow(T2test_X[i])
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])
  
  # Heat map
  plt.subplot(2,3,2)
  heat_map = visualize_cam(model, layer_idx=300, filter_indices=None, seed_input=T2test_X[i])
  plt.imshow(heat_map)
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])
  
  # Heat map superimposed on original image
  plt.subplot(2,3,3)
  plt.imshow(T2test_X[i])
  plt.imshow(heat_map, alpha = 0.8 * testInceptionV3T2[i][0])
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])

       
  # Original image
  plt.subplot(2,3,4)
  plt.imshow(T2test_X[i])
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])


  # Heat map
  heat_map = visualize_saliency(model, layer_idx=300, filter_indices=None, seed_input=T2test_X[i])
  plt.subplot(2,3,5)
  plt.imshow(heat_map)
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])
  
  # Heat map superimposed on original image
  plt.subplot(2,3,6)
  plt.imshow(T2test_X[i])
  plt.imshow(heat_map, alpha = 0.8 * testInceptionV3T2[i][0])
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])

  # Show the image and close it
  plt.show()
  plt.close()
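
The loop blends each map into the image by stacking two imshow calls with an alpha channel. The overlay helper imported from vis.visualization is an alternative that blends explicitly; a sketch (visualize_cam returns a uint8 heatmap, so the [0, 1] float image is rescaled to uint8 first; alpha weights the heatmap):

blended = overlay(heat_map, np.uint8(255 * T2test_X[i]), alpha=0.3)
plt.imshow(blended)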
 
EACH ORIGINAL IMAGE IS ANALYZED WITH TWO METHODS: CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

FOR EACH METHOD, THE FIRST IMAGE IS THE ORIGINAL IMAGE, THE SECOND IMAGE IS THE MAP, AND THE THIRD IMAGE IS THE MAP SUPERIMPOSED ON THE ORIGINAL IMAGE WITH A TRANSPARENCY THAT IS PROPORTIONAL TO THE ESTIMATED PROBABILITY OF THE IMAGE HAVING TUBER(S) (HIGHER ESTIMATED PROBABILITIES PRODUCE CLEARLY SEEN MAPS OVERLAID ON THE ORIGINAL IMAGE AND LOWER ESTIMATED PROBABILITIES PRODUCE VERY TRANSPARENT MAPS OVERLAID ON THE ORIGINAL IMAGE)

[Per-slice GradCAM and saliency figures omitted from this export. The printed classifications for the 35 T2 slices are summarized below.]

Slice | Radiologist | Model       | Estimated probability of tuber(s)
    1 | NO TUBER(S) | NO TUBER(S) | 0.0445
    2 | NO TUBER(S) | NO TUBER(S) | 0.0002
    3 | NO TUBER(S) | NO TUBER(S) | 0.1634
    4 | NO TUBER(S) | NO TUBER(S) | 0.0192
    5 | NO TUBER(S) | NO TUBER(S) | 0.3615
    6 | TUBER(S)    | TUBER(S)    | 0.9999
    7 | TUBER(S)    | TUBER(S)    | 0.9895
    8 | TUBER(S)    | TUBER(S)    | 0.9990
    9 | TUBER(S)    | TUBER(S)    | 0.9971
   10 | TUBER(S)    | TUBER(S)    | 0.9999
   11 | TUBER(S)    | TUBER(S)    | 0.7347
   12 | NO TUBER(S) | NO TUBER(S) | 0.0000
   13 | NO TUBER(S) | NO TUBER(S) | 0.0013
   14 | NO TUBER(S) | NO TUBER(S) | 0.0136
   15 | NO TUBER(S) | TUBER(S)    | 0.8782
   16 | TUBER(S)    | TUBER(S)    | 0.9379
   17 | TUBER(S)    | NO TUBER(S) | 0.0038
   18 | TUBER(S)    | NO TUBER(S) | 0.0002
   19 | TUBER(S)    | NO TUBER(S) | 0.0038
   20 | TUBER(S)    | NO TUBER(S) | 0.0000
   21 | TUBER(S)    | TUBER(S)    | 0.5953
   22 | NO TUBER(S) | TUBER(S)    | 0.9771
   23 | TUBER(S)    | TUBER(S)    | 0.8515
   24 | TUBER(S)    | TUBER(S)    | 0.9684
   25 | NO TUBER(S) | TUBER(S)    | 0.9253
   26 | NO TUBER(S) | NO TUBER(S) | 0.1909
   27 | NO TUBER(S) | TUBER(S)    | 0.9431
   28 | NO TUBER(S) | TUBER(S)    | 0.9911
   29 | NO TUBER(S) | TUBER(S)    | 0.9944
   30 | NO TUBER(S) | TUBER(S)    | 0.9968
   31 | NO TUBER(S) | TUBER(S)    | 0.6550
   32 | NO TUBER(S) | NO TUBER(S) | 0.1746
   33 | NO TUBER(S) | TUBER(S)    | 0.6233
   34 | NO TUBER(S) | TUBER(S)    | 0.9984
   35 | NO TUBER(S) | TUBER(S)    | 0.9755

SCROLL UP TO SEE THE GradCAM AND SALIENCY MAPS OF EACH IMAGE

This 4-year-old female had a medium tuber burden: our neuroradiologist detected tubers in 14 of the 35 T2 MRI slices. The convolutional neural network correctly classified 10 of those 14 slices as having tuber(s) and missed several slices with subtle tubers. Interestingly, in the missed slices the convolutional neural network was focusing on the right area for the tuber(s), but the estimated probability was too low to classify the slice as having tuber(s). Its accuracy on T2 was 0.57.

Each original image is analyzed with two methods: Gradient-weighted class activation maps (upper row) and saliency maps (lower row).

For each method, the first image is the original image, the second is the map, and the third is the map superimposed on the original image with a transparency proportional to the estimated probability of the image having tuber(s): higher estimated probabilities produce clearly visible overlays, and lower estimated probabilities produce very transparent overlays.
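
To probe whether a different decision threshold would recover the sub-threshold slices mentioned above, the confusion matrix can be recomputed over a small sweep (a sketch; 0.5 is the threshold used throughout this notebook):

# Confusion matrices for T2 at a few candidate thresholds
for threshold in [0.1, 0.25, 0.5]:
    print(threshold)
    print(confusion_matrix(y_trueT2, testInceptionV3T2 > threshold, labels=[0, 1]))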

In [26]:
# Iterate through the MRIs in FLAIR

print('\n \n' + '\033[1m' + 'EACH ORIGINAL IMAGE IS ANALYZED WITH TWO METHODS: CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)' + '\033[0m' + '\n')
print('\033[1m' + 'FOR EACH METHOD, THE FIRST IMAGE IS THE ORIGINAL IMAGE, THE SECOND IMAGE IS THE MAP, AND THE THIRD IMAGE IS THE MAP SUPERIMPOSED ON THE ORIGINAL IMAGE WITH A TRANSPARENCY THAT IS PROPORTIONAL TO THE ESTIMATED PROBABILITY OF THE IMAGE HAVING TUBER(S) (HIGHER ESTIMATED PROBABILITIES PRODUCE CLEARLY SEEN MAPS OVERLAID ON THE ORIGINAL IMAGE AND LOWER ESTIMATED PROBABILITIES PRODUCE VERY TRANSPARENT MAPS OVERLAID ON THE ORIGINAL IMAGE)' + '\033[0m' + '\n \n \n \n')


for i in range(FLAIRtest_X.shape[0]):
    
  # Print spaces to separate from the next image
  print('\n \n \n \n \n \n \n \n')
  
  # Print real classification of the image
  print('\033[1m' + 'REAL CLASSIFICATION OF THE IMAGE: {}'.format('TUBER(S)' if y_trueFLAIR[i][0]==1 else 'NO TUBER(S)') + '\033[0m')
  # Print model classification and model probability of TSC
  print('Model classification of this image: {} \nEstimated probability of tuber(s): {} \n'.format('TUBER(S)' if testInceptionV3FLAIR[i][0]>0.5 else 'NO TUBER(S)', testInceptionV3FLAIR[i][0]))     


  # Print title
  print('\033[1m' + 'CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)' + '\033[0m')     
    
  # Original image
  plt.subplot(2,3,1)
  plt.imshow(FLAIRtest_X[i])
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])
  
  # Heat map
  plt.subplot(2,3,2)
  heat_map = visualize_cam(model, layer_idx=300, filter_indices=None, seed_input=FLAIRtest_X[i])
  plt.imshow(heat_map)
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])
  
  # Heat map superimposed on original image
  plt.subplot(2,3,3)
  plt.imshow(FLAIRtest_X[i])
  plt.imshow(heat_map, alpha = 0.8 * testInceptionV3FLAIR[i][0])
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])

       
  # Original image
  plt.subplot(2,3,4)
  plt.imshow(FLAIRtest_X[i])
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])


  # Heat map
  heat_map = visualize_saliency(model, layer_idx=300, filter_indices=None, seed_input=FLAIRtest_X[i])
  plt.subplot(2,3,5)
  plt.imshow(heat_map)
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])
  
  # Heat map superimposed on original image
  plt.subplot(2,3,6)
  plt.imshow(FLAIRtest_X[i])
  plt.imshow(heat_map, alpha = 0.8 * testInceptionV3FLAIR[i][0])
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])

  # Show the image and close it
  plt.show()
  plt.close()
 
EACH ORIGINAL IMAGE IS ANALYZED WITH TWO METHODS: CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

FOR EACH METHOD, THE FIRST IMAGE IS THE ORIGINAL IMAGE, THE SECOND IMAGE IS THE MAP, AND THE THIRD IMAGE IS THE MAP SUPERIMPOSED ON THE ORIGINAL IMAGE WITH A TRANSPARENCY THAT IS PROPORTIONAL TO THE ESTIMATED PROBABILITY OF THE IMAGE HAVING TUBER(S) (HIGHER ESTIMATED PROBABILITIES PRODUCE CLEARLY SEEN MAPS OVERLAID ON THE ORIGINAL IMAGE AND LOWER ESTIMATED PROBABILITIES PRODUCE VERY TRANSPARENT MAPS OVERLAID ON THE ORIGINAL IMAGE)

[Per-slice GradCAM and saliency figures omitted from this export. The printed classifications for the 15 FLAIR slices are summarized below.]

Slice | Radiologist | Model       | Estimated probability of tuber(s)
    1 | TUBER(S)    | NO TUBER(S) | 0.2003
    2 | TUBER(S)    | TUBER(S)    | 0.9995
    3 | TUBER(S)    | TUBER(S)    | 0.9999
    4 | TUBER(S)    | NO TUBER(S) | 0.4155
    5 | NO TUBER(S) | NO TUBER(S) | 0.0000
    6 | NO TUBER(S) | NO TUBER(S) | 0.0000
    7 | NO TUBER(S) | NO TUBER(S) | 0.0000
    8 | NO TUBER(S) | NO TUBER(S) | 0.0000
    9 | TUBER(S)    | NO TUBER(S) | 0.0000
   10 | TUBER(S)    | NO TUBER(S) | 0.0000
   11 | NO TUBER(S) | NO TUBER(S) | 0.0016
   12 | NO TUBER(S) | NO TUBER(S) | 0.0000
   13 | NO TUBER(S) | NO TUBER(S) | 0.0000
   14 | NO TUBER(S) | NO TUBER(S) | 0.0000
   15 | NO TUBER(S) | NO TUBER(S) | 0.0001

SCROLL UP TO SEE THE GradCAM AND SALIENCY MAPS OF EACH IMAGE

This 4-year-old female had a medium tuber burden: our neuroradiologist detected tubers in 6 of the 15 FLAIR MRI slices. The convolutional neural network correctly classified 2 of those 6 slices as having tuber(s) and missed several slices with subtle tubers. Interestingly, in some of the missed slices the convolutional neural network was focusing on the right area for the tuber(s), but the estimated probability was too low to classify the slice as having tuber(s). Its accuracy on FLAIR was 0.73.

Each original image is analyzed with two methods: Gradient-weighted class activation maps (upper row) and saliency maps (lower row).

For each method, the first image is the original image, the second is the map, and the third is the map superimposed on the original image with a transparency proportional to the estimated probability of the image having tuber(s): higher estimated probabilities produce clearly visible overlays, and lower estimated probabilities produce very transparent overlays.
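
As a threshold-free summary of both test sets, ROC curves can be plotted from the same prediction arrays (a sketch; roc_curve comes from sklearn and is not imported above):

from sklearn.metrics import roc_curve

# ROC curves for the T2 and FLAIR test sets
for name, y_true, y_score in [('T2', T2test_y, testInceptionV3T2),
                              ('FLAIR', FLAIRtest_y, testInceptionV3FLAIR)]:
    fpr, tpr, _ = roc_curve(y_true, y_score)
    plt.plot(fpr, tpr, label='{} (AUC = {:.2f})'.format(name, roc_auc_score(y_true, y_score)))
plt.plot([0, 1], [0, 1], 'k--')   # chance line
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.legend()
plt.show()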