DETECTION OF TUBERS WITH CONVOLUTIONAL NEURAL NETWORKS

TEST CASE IV

Import packages and functions

In [1]:
# Import packages
%matplotlib inline
from PIL import Image
import numpy as np
import os
import re
from skimage.color import gray2rgb
import matplotlib.pyplot as plt
from sklearn.utils import shuffle
!pip install tensorflow
!pip install keras
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten, GaussianNoise, BatchNormalization, GlobalAveragePooling2D
from keras.layers import Conv2D, MaxPooling2D
from keras.optimizers import Adam
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
from keras.preprocessing import image
from keras.models import Model
from keras import backend as K
from sklearn.metrics import confusion_matrix
from sklearn.metrics import roc_auc_score
!pip install git+https://github.com/raghakot/keras-vis.git --upgrade
from vis.visualization import visualize_cam, visualize_saliency, overlay
from keras import activations
import matplotlib.cm as cm
import zipfile
from keras.models import model_from_json
import matplotlib as mpl
[Installation output condensed: tensorflow 1.12.0 and keras 2.2.4 were already installed, and keras-vis 0.4.1 was cloned from GitHub, built, and reinstalled. Keras reported "Using TensorFlow backend.", along with an ignorable h5py FutureWarning and a notice that a newer pip version was available.]

FIRST PART: DATA INGESTION

Import original images from local computer

These images come from a 10-day-old female with tuberous sclerosis complex (TSC). In total, there are 35 images: 20 consecutive axial T2 MRI slices and 15 consecutive axial FLAIR MRI slices.

In [2]:
# Set the figure size
mpl.rcParams['figure.figsize'] = (16,10)
In [3]:
# Unzip files
with zipfile.ZipFile("TestCaseIVT2.zip","r") as zip_ref:
    zip_ref.extractall()
with zipfile.ZipFile("TestCaseIVFLAIR.zip","r") as zip_ref:
    zip_ref.extractall()
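
As a quick sanity check (not part of the original notebook), the number of extracted slices can be verified before loading; the folder names match the paths defined in the next cell:

# Hypothetical sanity check: count the extracted MRI slices
for folder in ('./TestCaseIVT2/', './TestCaseIVFLAIR/'):
    print('{}: {} files'.format(folder, len(os.listdir(folder))))
# Expected: 20 files for T2 and 15 files for FLAIR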

Path to original images folder

In [4]:
# Path to the folder with the original images
pathtoimagesT2test = './TestCaseIVT2/'

pathtoimagesFLAIRtest = './TestCaseIVFLAIR/'

SECOND PART: IMPORTATION OF FINAL DATA

In [5]:
# Functions to sort images with numbers within their name
def atoi(text):
    return int(text) if text.isdigit() else text

def natural_keys(text):
    return [ atoi(c) for c in re.split(r'(\d+)', text) ]
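
For example, a plain lexicographic sort would place 'slice10' before 'slice2'; sorting with natural_keys keeps consecutive slices in anatomical order (the file names below are hypothetical):

# Illustration with hypothetical file names
files = ['slice10.png', 'slice2.png', 'slice1.png']
print(sorted(files))                    # ['slice1.png', 'slice10.png', 'slice2.png']
print(sorted(files, key=natural_keys))  # ['slice1.png', 'slice2.png', 'slice10.png']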

Import images and create labels for the T2 set

In [6]:
## T2

# Define the image size
image_size = (224, 224)

# Read in the test images for T2
T2test_images = []
T2test_dir = pathtoimagesT2test
T2test_files = os.listdir(T2test_dir)
T2test_files.sort(key=natural_keys)
# For each image
for f in T2test_files:
    # Open the image
    img = Image.open(T2test_dir + f)
    # Resize the image so that it has a size 224x224
    img = img.resize(image_size)
    # Transform into a numpy array
    img_arr = np.array(img)
    # Stack grayscale images from 224x224 into 224x224x3
    if img_arr.shape == image_size:
        img_arr = gray2rgb(img_arr)
    # Add the image to the list of images
    T2test_images.append(img_arr)

# After having transformed all images, transform the list into a numpy array  
T2test_X = np.array(T2test_images)

# Create an array of labels (as read by the radiologist)
T2test_y = np.array([[1], [1], [1], [1], [1], [1], [1], [1], [1], [1], 
                     [1], [1], [1], [1], [1], [1], [1], [1], [1], [1]])

# GPU expects values to be 32-bit floats
T2test_X = T2test_X.astype(np.float32)

# Rescale the values to be between 0 and 1
T2test_X /= 255.
In [7]:
T2test_X.shape
Out[7]:
(20, 224, 224, 3)
In [8]:
# Example of an image to make sure they were converted right
plt.imshow(T2test_X[0])
plt.grid(b=None)
plt.xticks([])
plt.yticks([])
plt.show()
In [9]:
T2test_y.shape
Out[9]:
(20, 1)
In [10]:
T2test_y[0]
Out[10]:
array([1])

Import images and create labels for the FLAIR set

In [11]:
## FLAIR

# Define the image size
image_size = (224, 224)

# Read in the test images for FLAIR
FLAIRtest_images = []
FLAIRtest_dir = pathtoimagesFLAIRtest
FLAIRtest_files = os.listdir(FLAIRtest_dir)
FLAIRtest_files.sort(key=natural_keys)
# For each image
for f in FLAIRtest_files:
    # Open the image
    img = Image.open(FLAIRtest_dir + f)
    # Resize the image so that it has a size 224x224
    img = img.resize(image_size)
    # Transform into a numpy array
    img_arr = np.array(img)
    # Stack grayscale images from 224x224 into 224x224x3
    if img_arr.shape == image_size:
        img_arr = gray2rgb(img_arr)
    # Add the image to the list of images
    FLAIRtest_images.append(img_arr)

# After having transformed all images, transform the list into a numpy array  
FLAIRtest_X = np.array(FLAIRtest_images)

# Create an array of labels (as read by the radiologist)
FLAIRtest_y = np.array([[1], [1], [1], [1], [1], [1], [1], [1], [1], [1], 
                        [1], [1], [1], [1], [0]])

# GPU expects values to be 32-bit floats
FLAIRtest_X = FLAIRtest_X.astype(np.float32)

# Rescale the values to be between 0 and 1
FLAIRtest_X /= 255.
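
The T2 and FLAIR cells above follow exactly the same pattern, so the loading logic could be factored into a single helper. This is a sketch under the same assumptions (grayscale source images, 224x224 target size), not code from the original notebook:

def load_mri_folder(folder, image_size=(224, 224)):
    """Load, resize, and RGB-stack all images in a folder, in natural order."""
    files = sorted(os.listdir(folder), key=natural_keys)
    images = []
    for f in files:
        img = Image.open(os.path.join(folder, f)).resize(image_size)
        img_arr = np.array(img)
        # Stack grayscale images into 3 identical channels
        if img_arr.shape == image_size:
            img_arr = gray2rgb(img_arr)
        images.append(img_arr)
    # 32-bit floats rescaled to [0, 1], as the model expects
    return np.array(images).astype(np.float32) / 255.

# e.g.: FLAIRtest_X = load_mri_folder(pathtoimagesFLAIRtest)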
In [12]:
FLAIRtest_X.shape
Out[12]:
(15, 224, 224, 3)
In [13]:
# Example of an image to make sure they were converted right
plt.imshow(FLAIRtest_X[0])
plt.grid(b=None)
plt.xticks([])
plt.yticks([])
plt.show()
In [14]:
FLAIRtest_y.shape
Out[14]:
(15, 1)
In [15]:
FLAIRtest_y[0]
Out[15]:
array([1])

THIRD PART: VISUALIZE CLASS ACTIVATION MAPS AND SALIENCY MAPS

Load the model

In [16]:
# Load the model architecture from JSON
with open('InceptionV3.json', 'r') as json_file:
    loaded_model_json = json_file.read()
model = model_from_json(loaded_model_json)
# Load the weights into the new model
model.load_weights("InceptionV3.h5")
In [17]:
# Compile model
model.compile(optimizer = Adam(lr = 0.00025), loss = 'binary_crossentropy', metrics = ['accuracy'])
In [18]:
# Generate predictions on test data in the form of probabilities for T2
testInceptionV3T2 = model.predict(T2test_X, batch_size = 16)
testInceptionV3T2
Out[18]:
array([[7.7483660e-01],
       [9.9866629e-01],
       [9.9934405e-01],
       [9.9999559e-01],
       [9.9999678e-01],
       [9.9993682e-01],
       [9.9999940e-01],
       [9.9999487e-01],
       [9.9999988e-01],
       [9.9999857e-01],
       [9.9999690e-01],
       [9.9989235e-01],
       [9.9912065e-01],
       [8.4121972e-02],
       [7.2792331e-03],
       [3.4066194e-04],
       [5.4361597e-03],
       [1.2869253e-02],
       [8.3096051e-01],
       [2.3355914e-02]], dtype=float32)
In [19]:
# Generate predictions on test data in the form of probabilities for FLAIR
testInceptionV3FLAIR = model.predict(FLAIRtest_X, batch_size = 16)
testInceptionV3FLAIR
Out[19]:
array([[0.989881  ],
       [0.97080797],
       [0.39660543],
       [0.01123807],
       [0.00670442],
       [0.2642107 ],
       [0.2170956 ],
       [0.06463054],
       [0.00694439],
       [0.02234792],
       [0.15759513],
       [0.09368443],
       [0.92081976],
       [0.0673559 ],
       [0.99417865]], dtype=float32)
In [20]:
# Create the confusion matrix for T2
y_trueT2 = T2test_y
y_predInceptionV3T2 = testInceptionV3T2 > 0.5
confusion_matrix(y_trueT2, y_predInceptionV3T2, labels=[0,1])
Out[20]:
array([[ 0,  0],
       [ 6, 14]], dtype=int64)
In [21]:
# Create the confusion matrix for FLAIR
y_trueFLAIR = FLAIRtest_y
y_predInceptionV3FLAIR = testInceptionV3FLAIR > 0.5
confusion_matrix(y_trueFLAIR, y_predInceptionV3FLAIR)
Out[21]:
array([[ 0,  1],
       [11,  3]], dtype=int64)
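
Beyond accuracy, the confusion matrices above give sensitivity and specificity directly. A small illustrative helper (not in the original notebook) makes the layout of scikit-learn's binary confusion matrix explicit:

def sensitivity_specificity(y_true, y_pred):
    """Return (sensitivity, specificity) for binary labels 0/1."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn) if (tp + fn) else float('nan')
    specificity = tn / (tn + fp) if (tn + fp) else float('nan')
    return sensitivity, specificity

# e.g., for FLAIR above: tp=3, fn=11, fp=1, tn=0 -> sensitivity 0.21, specificity 0.0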
In [22]:
# Calculate accuracy for T2
cmT2 = confusion_matrix(y_trueT2, y_predInceptionV3T2, labels=[0, 1])
accuracy_InceptionV3T2 = (cmT2[0, 0] + cmT2[1, 1]) / cmT2.sum()
print('The accuracy in the test set is {}.'.format(accuracy_InceptionV3T2))
The accuracy in the test set is 0.7.
In [23]:
# Calculate accuracy for FLAIR
cmFLAIR = confusion_matrix(y_trueFLAIR, y_predInceptionV3FLAIR, labels=[0, 1])
accuracy_InceptionV3FLAIR = (cmFLAIR[0, 0] + cmFLAIR[1, 1]) / cmFLAIR.sum()
print('The accuracy in the test set is {}.'.format(accuracy_InceptionV3FLAIR))
The accuracy in the test set is 0.2.

Visualize the data

In [24]:
# Visualize the structure and layers of the model
model.layers
Out[24]:
[<keras.engine.input_layer.InputLayer at 0xf8f2048>,
 <keras.layers.convolutional.Conv2D at 0xf8f25c0>,
 <keras.layers.normalization.BatchNormalization at 0xf8f2438>,
 <keras.layers.core.Activation at 0xf8f2748>,
 ...
 <keras.layers.pooling.GlobalAveragePooling2D at 0xf989048>,
 <keras.layers.core.Dense at 0xf9890b8>,
 <keras.layers.core.Dense at 0xf989208>]
[Output truncated: 314 layers in total, the InceptionV3 convolutional base followed by a GlobalAveragePooling2D layer and two Dense layers.]
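
The maps below are computed at layer_idx=300, an index into the list above. To check which layer an index refers to, the layers can be enumerated by name; keras-vis also offers a name-based lookup (a sketch assuming the keras-vis 0.4.1 API; the layer name is illustrative):

# Print index and name for the last few layers
for idx, layer in enumerate(model.layers[-5:], start=len(model.layers) - 5):
    print(idx, layer.name)

# Alternative: look up an index by layer name
from vis.utils import utils
# layer_idx = utils.find_layer_idx(model, 'mixed10')  # name used by keras.applications InceptionV3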
In [25]:
# Iterate through the MRIs in T2

print('\n \n' + '\033[1m' + 'EACH ORIGINAL IMAGE IS ANALYZED WITH TWO METHODS: CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)' + '\033[0m' + '\n')
print('\033[1m' + 'FOR EACH METHOD, THE FIRST IMAGE IS THE ORIGINAL IMAGE, THE SECOND IMAGE IS THE MAP, AND THE THIRD IMAGE IS THE MAP SUPERIMPOSED ON THE ORIGINAL IMAGE WITH A TRANSPARENCY THAT IS PROPORTIONAL TO THE ESTIMATED PROBABILITY OF THE IMAGE HAVING TUBER(S) (HIGHER ESTIMATED PROBABILITIES PRODUCE CLEARLY SEEN MAPS OVERLAID ON THE ORIGINAL IMAGE AND LOWER ESTIMATED PROBABILITIES PRODUCE VERY TRANSPARENT MAPS OVERLAID ON THE ORIGINAL IMAGE)' + '\033[0m' + '\n \n \n \n')


for i in range(T2test_X.shape[0]):
    
  # Print spaces to separate from the next image
  print('\n \n \n \n \n \n \n \n')
  
  # Print real classification of the image
  print('\033[1m' + 'REAL CLASSIFICATION OF THE IMAGE: {}'.format('TUBER(S)' if y_trueT2[i][0]==1 else 'NO TUBER(S)') + '\033[0m')
  # Print model classification and model probability of TSC
  print('Model classification of this image: {} \nEstimated probability of tuber(s): {} \n'.format('TUBER(S)' if testInceptionV3T2[i][0]>0.5 else 'NO TUBER(S)', testInceptionV3T2[i][0]))     


  # Print title
  print('\033[1m' + 'CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)' + '\033[0m')     
    
  # Original image
  plt.subplot(2,3,1)
  plt.imshow(T2test_X[i])
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])
  
  # Class activation map (Grad-CAM)
  plt.subplot(2,3,2)
  heat_map = visualize_cam(model, layer_idx=300, filter_indices=None, seed_input=T2test_X[i])
  plt.imshow(heat_map)
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])
  
  # Class activation map superimposed on the original image
  plt.subplot(2,3,3)
  plt.imshow(T2test_X[i])
  plt.imshow(heat_map, alpha = 0.8 * testInceptionV3T2[i][0])
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])

       
  # Original image
  plt.subplot(2,3,4)
  plt.imshow(T2test_X[i])
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])


  # Saliency map
  heat_map = visualize_saliency(model, layer_idx=300, filter_indices=None, seed_input=T2test_X[i])
  plt.subplot(2,3,5)
  plt.imshow(heat_map)
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])
  
  # Saliency map superimposed on the original image
  plt.subplot(2,3,6)
  plt.imshow(T2test_X[i])
  plt.imshow(heat_map, alpha = 0.8 * testInceptionV3T2[i][0])
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])

  # Show the image and close it
  plt.show()
  plt.close()
 
EACH ORIGINAL IMAGE IS ANALYZED WITH TWO METHODS: CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

FOR EACH METHOD, THE FIRST IMAGE IS THE ORIGINAL IMAGE, THE SECOND IMAGE IS THE MAP, AND THE THIRD IMAGE IS THE MAP SUPERIMPOSED ON THE ORIGINAL IMAGE WITH A TRANSPARENCY THAT IS PROPORTIONAL TO THE ESTIMATED PROBABILITY OF THE IMAGE HAVING TUBER(S) (HIGHER ESTIMATED PROBABILITIES PRODUCE CLEARLY SEEN MAPS OVERLAID ON THE ORIGINAL IMAGE AND LOWER ESTIMATED PROBABILITIES PRODUCE VERY TRANSPARENT MAPS OVERLAID ON THE ORIGINAL IMAGE)

[Figures omitted. For each of the 20 T2 slices, the notebook printed the real classification, the model classification, and the estimated probability of tuber(s), followed by the class activation map (upper row) and the saliency map (lower row):]

Slice  Real classification  Model classification  Estimated probability of tuber(s)
 1     TUBER(S)             TUBER(S)              0.7748365998268127
 2     TUBER(S)             TUBER(S)              0.9986662864685059
 3     TUBER(S)             TUBER(S)              0.9993440508842468
 4     TUBER(S)             TUBER(S)              0.9999955892562866
 5     TUBER(S)             TUBER(S)              0.9999967813491821
 6     TUBER(S)             TUBER(S)              0.9999368190765381
 7     TUBER(S)             TUBER(S)              0.9999994039535522
 8     TUBER(S)             TUBER(S)              0.9999948740005493
 9     TUBER(S)             TUBER(S)              0.9999998807907104
10     TUBER(S)             TUBER(S)              0.9999985694885254
11     TUBER(S)             TUBER(S)              0.9999969005584717
12     TUBER(S)             TUBER(S)              0.9998923540115356
13     TUBER(S)             TUBER(S)              0.9991206526756287
14     TUBER(S)             NO TUBER(S)           0.08412197232246399
15     TUBER(S)             NO TUBER(S)           0.0072792330756783485
16     TUBER(S)             NO TUBER(S)           0.0003406619362067431
17     TUBER(S)             NO TUBER(S)           0.005436159670352936
18     TUBER(S)             NO TUBER(S)           0.012869252823293209
19     TUBER(S)             TUBER(S)              0.8309605121612549
20     TUBER(S)             NO TUBER(S)           0.023355914279818535

SCROLL UP TO SEE THE Grad-CAM AND SALIENCY MAPS OF EACH IMAGE

This 10-day-old female had a high tuber burden, with tubers detected in every T2 MRI slice by our neuroradiologist. The convolutional neural network classified 14 out of 20 T2 MRI slices as having tuber(s), even though some tubers were very subtle because of the limited myelination at this age. Interestingly, in most of the images classified as negative, the maps show that the convolutional neural network was focusing on the areas with the tuber(s), but the estimated probability was low. The accuracy of the convolutional neural network on T2 was 0.7.

Each original image is analyzed with two methods: Gradient-weighted class activation maps (upper row) and saliency maps (lower row).

For each method, the first image is the original image, the second image is the map, and the third image is the map superimposed on the original image with a transparency proportional to the estimated probability of the image having tuber(s): higher estimated probabilities produce clearly visible maps overlaid on the original image, while lower estimated probabilities produce very transparent maps.

In [26]:
# Iterate through the MRIs in FLAIR

print('\n \n' + '\033[1m' + 'EACH ORIGINAL IMAGE IS ANALYZED WITH TWO METHODS: CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)' + '\033[0m' + '\n')
print('\033[1m' + 'FOR EACH METHOD, THE FIRST IMAGE IS THE ORIGINAL IMAGE, THE SECOND IMAGE IS THE MAP, AND THE THIRD IMAGE IS THE MAP SUPERIMPOSED ON THE ORIGINAL IMAGE WITH A TRANSPARENCY THAT IS PROPORTIONAL TO THE ESTIMATED PROBABILITY OF THE IMAGE HAVING TUBER(S) (HIGHER ESTIMATED PROBABILITIES PRODUCE CLEARLY SEEN MAPS OVERLAID ON THE ORIGINAL IMAGE AND LOWER ESTIMATED PROBABILITIES PRODUCE VERY TRANSPARENT MAPS OVERLAID ON THE ORIGINAL IMAGE)' + '\033[0m' + '\n \n \n \n')


for i in range(FLAIRtest_X.shape[0]):
    
  # Print spaces to separate from the next image
  print('\n \n \n \n \n \n \n \n')
  
  # Print real classification of the image
  print('\033[1m' + 'REAL CLASSIFICATION OF THE IMAGE: {}'.format('TUBER(S)' if y_trueFLAIR[i][0]==1 else 'NO TUBER(S)') + '\033[0m')
  # Print model classification and model probability of TSC
  print('Model classification of this image: {} \nEstimated probability of tuber(s): {} \n'.format('TUBER(S)' if testInceptionV3FLAIR[i][0]>0.5 else 'NO TUBER(S)', testInceptionV3FLAIR[i][0]))     


  # Print title
  print('\033[1m' + 'CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)' + '\033[0m')     
    
  # Original image
  plt.subplot(2,3,1)
  plt.imshow(FLAIRtest_X[i])
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])
  
  # Class activation map (Grad-CAM)
  plt.subplot(2,3,2)
  heat_map = visualize_cam(model, layer_idx=300, filter_indices=None, seed_input=FLAIRtest_X[i])
  plt.imshow(heat_map)
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])
  
  # Class activation map superimposed on the original image
  plt.subplot(2,3,3)
  plt.imshow(FLAIRtest_X[i])
  plt.imshow(heat_map, alpha = 0.8 * testInceptionV3FLAIR[i][0])
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])

       
  # Original image
  plt.subplot(2,3,4)
  plt.imshow(FLAIRtest_X[i])
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])


  # Saliency map
  heat_map = visualize_saliency(model, layer_idx=300, filter_indices=None, seed_input=FLAIRtest_X[i])
  plt.subplot(2,3,5)
  plt.imshow(heat_map)
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])
  
  # Saliency map superimposed on the original image
  plt.subplot(2,3,6)
  plt.imshow(FLAIRtest_X[i])
  plt.imshow(heat_map, alpha = 0.8 * testInceptionV3FLAIR[i][0])
  plt.grid(b=None)
  plt.xticks([])
  plt.yticks([])

  # Show the image and close it
  plt.show()
  plt.close()
 
EACH ORIGINAL IMAGE IS ANALYZED WITH TWO METHODS: CLASS ACTIVATION MAP (UPPER ROW) AND SALIENCY MAP (LOWER ROW)

FOR EACH METHOD, THE FIRST IMAGE IS THE ORIGINAL IMAGE, THE SECOND IMAGE IS THE MAP, AND THE THIRD IMAGE IS THE MAP SUPERIMPOSED ON THE ORIGINAL IMAGE WITH A TRANSPARENCY THAT IS PROPORTIONAL TO THE ESTIMATED PROBABILITY OF THE IMAGE HAVING TUBER(S) (HIGHER ESTIMATED PROBABILITIES PRODUCE CLEARLY SEEN MAPS OVERLAID ON THE ORIGINAL IMAGE AND LOWER ESTIMATED PROBABILITIES PRODUCE VERY TRANSPARENT MAPS OVERLAID ON THE ORIGINAL IMAGE)

[Figures omitted. For each of the 15 FLAIR slices, the notebook printed the real classification, the model classification, and the estimated probability of tuber(s), followed by the class activation map (upper row) and the saliency map (lower row):]

Slice  Real classification  Model classification  Estimated probability of tuber(s)
 1     TUBER(S)             TUBER(S)              0.9898809790611267
 2     TUBER(S)             TUBER(S)              0.9708079695701599
 3     TUBER(S)             NO TUBER(S)           0.3966054320335388
 4     TUBER(S)             NO TUBER(S)           0.01123807393014431
 5     TUBER(S)             NO TUBER(S)           0.006704422179609537
 6     TUBER(S)             NO TUBER(S)           0.26421070098876953
 7     TUBER(S)             NO TUBER(S)           0.21709559857845306
 8     TUBER(S)             NO TUBER(S)           0.06463053822517395
 9     TUBER(S)             NO TUBER(S)           0.006944387219846249
10     TUBER(S)             NO TUBER(S)           0.02234792336821556
11     TUBER(S)             NO TUBER(S)           0.15759512782096863
12     TUBER(S)             NO TUBER(S)           0.09368443489074707
13     TUBER(S)             TUBER(S)              0.9208197593688965
14     TUBER(S)             NO TUBER(S)           0.06735590100288391
15     NO TUBER(S)          TUBER(S)              0.9941786527633667

SCROLL UP TO SEE THE Grad-CAM AND SALIENCY MAPS OF EACH IMAGE

This 10-day-old female had a high tuber burden, with tubers detected in 14 out of 15 FLAIR MRI slices by our neuroradiologist. The convolutional neural network classified only 3 out of 15 FLAIR MRI slices as having tuber(s), even though some tubers were very subtle because of the limited myelination at this age. Interestingly, in most of the images classified as negative, the maps show that the convolutional neural network was focusing on the areas with the tuber(s), but the estimated probability was low. The accuracy of the convolutional neural network on FLAIR was 0.2.

Each original image is analyzed with two methods: Gradient-weighted class activation maps (upper row) and saliency maps (lower row).

For each method, the first image is the original image, the second image is the map, and the third image is the map superimposed on the original image with a transparency proportional to the estimated probability of the image having tuber(s): higher estimated probabilities produce clearly visible maps overlaid on the original image, while lower estimated probabilities produce very transparent maps.