Libraries

import os
from keras import utils 
import tensorflow as tf

print(tf.test.gpu_device_name())
/device:GPU:0

# location of data
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'

# download the data and extract it
path_to_zip = utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)

# construct paths
PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')

train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')

# parameters for datasets
BATCH_SIZE = 32
IMG_SIZE = (160, 160)

# construct train and validation datasets 
train_dataset = utils.image_dataset_from_directory(train_dir,
                                                   shuffle=True,
                                                   batch_size=BATCH_SIZE,
                                                   image_size=IMG_SIZE)

validation_dataset = utils.image_dataset_from_directory(validation_dir,
                                                        shuffle=True,
                                                        batch_size=BATCH_SIZE,
                                                        image_size=IMG_SIZE)

# construct the test dataset by splitting off the first fifth of the validation batches
val_batches = tf.data.experimental.cardinality(validation_dataset)
test_dataset = validation_dataset.take(val_batches // 5)
validation_dataset = validation_dataset.skip(val_batches // 5)
Downloading data from https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip
68606236/68606236 [==============================] - 0s 0us/step
Found 2000 files belonging to 2 classes.
Found 1000 files belonging to 2 classes.
AUTOTUNE = tf.data.AUTOTUNE

train_dataset = train_dataset.prefetch(buffer_size=AUTOTUNE)
validation_dataset = validation_dataset.prefetch(buffer_size=AUTOTUNE)
test_dataset = test_dataset.prefetch(buffer_size=AUTOTUNE)
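
As a quick sanity check: with 1,000 validation images and a batch size of 32, the validation set starts with 32 batches, so the test set takes 32 // 5 = 6 batches (192 images) and the validation set keeps the remaining 26. A minimal sketch, assuming the datasets above:

# prefetch() preserves cardinality, so the batch counts can be checked here
print(tf.data.experimental.cardinality(validation_dataset).numpy())  # expected: 26
print(tf.data.experimental.cardinality(test_dataset).numpy())        # expected: 6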

Dataset visualization

from matplotlib import pyplot as plt
# rebuild a raw dataset here: the prefetched train_dataset no longer exposes class_names
t_dataset = utils.image_dataset_from_directory(train_dir,
                                               shuffle=True,
                                               batch_size=BATCH_SIZE,
                                               image_size=IMG_SIZE)
t_dataset.class_names
Found 2000 files belonging to 2 classes.
['cats', 'dogs']
def two_row(batch):
  class_names = ['cats', 'dogs']
  fig, ax = plt.subplots(2, 3, figsize=(10, 10))
  for images, labels in batch:
    cnt_cat = 0
    cnt_dog = 0
    i = 0
    # fill the first row with three cats (label 0) and the second with three dogs (label 1)
    while (cnt_cat < 3 or cnt_dog < 3) and i < len(labels):
      if labels[i] == 0 and cnt_cat < 3:
        ax[0, cnt_cat].imshow(images[i].numpy().astype("uint8"))
        ax[0, cnt_cat].set_title(class_names[labels[i]])
        ax[0, cnt_cat].axis("off")
        cnt_cat += 1
      elif labels[i] == 1 and cnt_dog < 3:
        ax[1, cnt_dog].imshow(images[i].numpy().astype("uint8"))
        ax[1, cnt_dog].set_title(class_names[labels[i]])
        ax[1, cnt_dog].axis("off")
        cnt_dog += 1
      i += 1
      
two_row(train_dataset.take(1))

Check frequencies

labels_iterator = train_dataset.unbatch().map(lambda image, label: label).as_numpy_iterator()
dogs = 0
cnt = 0
for lbl in labels_iterator:
  dogs += lbl
  cnt += 1
print("dog: ",dogs/cnt)
print("cat: ",(cnt - dogs) / cnt)
dog:  0.5
cat:  0.5
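
Since the two classes are perfectly balanced, a baseline classifier that always guesses a single class would score 50% accuracy; any useful model has to beat that. The same frequency check can also be written more compactly; a sketch, assuming the train_dataset defined above:

import numpy as np

# count each label with NumPy instead of a manual loop
labels = np.array(list(train_dataset.unbatch()
                                    .map(lambda image, label: label)
                                    .as_numpy_iterator()))
print(np.bincount(labels) / len(labels))  # expected: [0.5 0.5]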

First model: Sequential model

Create a tf.keras.Sequential model using some of the layers we’ve discussed in class. In each model, include at least two Conv2D layers, at least two MaxPooling2D layers, at least one Flatten layer, at least one Dense layer, and at least one Dropout layer. Train your model and plot the history of the accuracy on both the training and validation sets. Give your model the name model1.

To train a model on a Dataset, use syntax like this:

history = model1.fit(train_dataset, epochs=20, validation_data=validation_dataset)

Here and in later parts of this assignment, training for 20 epochs with the Dataset settings described above should be sufficient.

You don’t have to show multiple models, but please do a few experiments to try to get the best validation accuracy you can. Briefly describe a few of the things you tried. Please make sure that you are able to consistently achieve at least 52% validation accuracy in this part (i.e. just a bit better than baseline).

In bold font, describe the validation accuracy of your model during training. You don’t have to be precise. For example, “the accuracy of my model stabilized between 65% and 70% during training.”

Then, compare that to the baseline. How much better did you do? Overfitting can be observed when the training accuracy is much higher than the validation accuracy. Do you observe overfitting in model1?

import keras
from keras import layers
train_dataset.take(1)
<TakeDataset element_spec=(TensorSpec(shape=(None, 160, 160, 3), dtype=tf.float32, name=None), TensorSpec(shape=(None,), dtype=tf.int32, name=None))>
#one batch has a total of 32 images
#the output is either dog or cat for each image: binary classification
#from above, the shape of each input image is (160, 160, 3) (height, width, RGB channels)
model1 = keras.Sequential([
    keras.Input(shape=(160,160,3)),
    layers.Conv2D(filters=32, kernel_size=(5,5), strides=(2,2), activation="relu"),
    layers.Dropout(.1),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Conv2D(32, (3,3), 1, activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Flatten(),
    layers.Dense(32, activation='relu'),
    # 16 output units with the sparse categorical loss below: only units 0 and 1
    # correspond to the labels, so Dense(2) would be the more conventional head
    layers.Dense(16, activation='relu')
])

model1.summary()
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d (Conv2D)             (None, 78, 78, 32)        2432      
                                                                 
 dropout (Dropout)           (None, 78, 78, 32)        0         
                                                                 
 max_pooling2d (MaxPooling2D  (None, 39, 39, 32)       0         
 )                                                               
                                                                 
 conv2d_1 (Conv2D)           (None, 37, 37, 32)        9248      
                                                                 
 max_pooling2d_1 (MaxPooling  (None, 18, 18, 32)       0         
 2D)                                                             
                                                                 
 flatten (Flatten)           (None, 10368)             0         
                                                                 
 dense (Dense)               (None, 32)                331808    
                                                                 
 dense_1 (Dense)             (None, 16)                528       
                                                                 
=================================================================
Total params: 344,016
Trainable params: 344,016
Non-trainable params: 0
_________________________________________________________________
optimizer = 'adam'
#sparse_categorical_crossentropy: a loss for classification models
#whose labels are integers (0, 1, 2, ...); here 0 = cat and 1 = dog
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metrics = ['accuracy']
model1.compile(optimizer, loss, metrics)
history = model1.fit(train_dataset, 
                     epochs=50, 
                     validation_data=validation_dataset)
Epoch 1/50
63/63 [==============================] - 14s 53ms/step - loss: 9.0446 - accuracy: 0.4470 - val_loss: 0.7199 - val_accuracy: 0.5371
Epoch 2/50
63/63 [==============================] - 3s 49ms/step - loss: 0.6849 - accuracy: 0.5975 - val_loss: 0.6745 - val_accuracy: 0.6040
Epoch 3/50
63/63 [==============================] - 4s 60ms/step - loss: 0.6038 - accuracy: 0.6875 - val_loss: 0.6509 - val_accuracy: 0.6597
Epoch 4/50
63/63 [==============================] - 4s 53ms/step - loss: 0.5629 - accuracy: 0.7135 - val_loss: 0.6924 - val_accuracy: 0.6275
Epoch 5/50
63/63 [==============================] - 4s 54ms/step - loss: 0.4958 - accuracy: 0.7600 - val_loss: 0.6700 - val_accuracy: 0.6597
Epoch 6/50
63/63 [==============================] - 3s 49ms/step - loss: 0.4461 - accuracy: 0.7970 - val_loss: 0.7408 - val_accuracy: 0.6411
Epoch 7/50
63/63 [==============================] - 6s 91ms/step - loss: 0.4132 - accuracy: 0.8140 - val_loss: 0.7357 - val_accuracy: 0.6646
Epoch 8/50
63/63 [==============================] - 3s 48ms/step - loss: 0.3683 - accuracy: 0.8305 - val_loss: 0.8711 - val_accuracy: 0.6040
Epoch 9/50
63/63 [==============================] - 5s 72ms/step - loss: 0.3934 - accuracy: 0.8135 - val_loss: 0.8444 - val_accuracy: 0.6262
Epoch 10/50
63/63 [==============================] - 3s 48ms/step - loss: 0.3014 - accuracy: 0.8670 - val_loss: 0.8653 - val_accuracy: 0.6349
Epoch 11/50
63/63 [==============================] - 3s 48ms/step - loss: 0.2654 - accuracy: 0.8865 - val_loss: 0.8746 - val_accuracy: 0.6522
Epoch 12/50
63/63 [==============================] - 5s 80ms/step - loss: 0.2195 - accuracy: 0.9130 - val_loss: 1.1526 - val_accuracy: 0.6262
Epoch 13/50
63/63 [==============================] - 5s 77ms/step - loss: 0.2017 - accuracy: 0.9210 - val_loss: 1.0514 - val_accuracy: 0.6448
Epoch 14/50
63/63 [==============================] - 6s 95ms/step - loss: 0.1705 - accuracy: 0.9330 - val_loss: 1.1487 - val_accuracy: 0.6361
Epoch 15/50
63/63 [==============================] - 3s 48ms/step - loss: 0.1574 - accuracy: 0.9390 - val_loss: 1.2391 - val_accuracy: 0.6460
Epoch 16/50
63/63 [==============================] - 3s 48ms/step - loss: 0.1482 - accuracy: 0.9415 - val_loss: 1.3938 - val_accuracy: 0.6448
Epoch 17/50
63/63 [==============================] - 4s 67ms/step - loss: 0.1119 - accuracy: 0.9560 - val_loss: 1.5561 - val_accuracy: 0.6176
Epoch 18/50
63/63 [==============================] - 4s 53ms/step - loss: 0.1389 - accuracy: 0.9465 - val_loss: 1.5458 - val_accuracy: 0.6361
Epoch 19/50
63/63 [==============================] - 3s 49ms/step - loss: 0.1691 - accuracy: 0.9330 - val_loss: 1.5418 - val_accuracy: 0.6213
Epoch 20/50
63/63 [==============================] - 4s 59ms/step - loss: 0.1459 - accuracy: 0.9485 - val_loss: 1.6921 - val_accuracy: 0.6349
Epoch 21/50
63/63 [==============================] - 3s 48ms/step - loss: 0.1121 - accuracy: 0.9585 - val_loss: 1.7612 - val_accuracy: 0.6411
Epoch 22/50
63/63 [==============================] - 3s 49ms/step - loss: 0.0805 - accuracy: 0.9720 - val_loss: 1.8149 - val_accuracy: 0.6250
Epoch 23/50
63/63 [==============================] - 3s 49ms/step - loss: 0.0809 - accuracy: 0.9695 - val_loss: 2.0050 - val_accuracy: 0.6213
Epoch 24/50
63/63 [==============================] - 5s 74ms/step - loss: 0.0734 - accuracy: 0.9730 - val_loss: 1.9849 - val_accuracy: 0.6262
Epoch 25/50
63/63 [==============================] - 3s 49ms/step - loss: 0.0617 - accuracy: 0.9780 - val_loss: 2.0790 - val_accuracy: 0.6139
Epoch 26/50
63/63 [==============================] - 3s 48ms/step - loss: 0.0718 - accuracy: 0.9735 - val_loss: 2.0649 - val_accuracy: 0.6337
Epoch 27/50
63/63 [==============================] - 5s 75ms/step - loss: 0.0691 - accuracy: 0.9700 - val_loss: 2.1949 - val_accuracy: 0.6151
Epoch 28/50
63/63 [==============================] - 3s 49ms/step - loss: 0.0863 - accuracy: 0.9710 - val_loss: 2.0743 - val_accuracy: 0.6337
Epoch 29/50
63/63 [==============================] - 6s 87ms/step - loss: 0.1642 - accuracy: 0.9455 - val_loss: 2.2722 - val_accuracy: 0.6386
Epoch 30/50
63/63 [==============================] - 6s 92ms/step - loss: 0.1576 - accuracy: 0.9545 - val_loss: 1.8130 - val_accuracy: 0.6262
Epoch 31/50
63/63 [==============================] - 4s 53ms/step - loss: 0.0969 - accuracy: 0.9645 - val_loss: 2.4169 - val_accuracy: 0.6188
Epoch 32/50
63/63 [==============================] - 6s 82ms/step - loss: 0.0848 - accuracy: 0.9710 - val_loss: 2.3626 - val_accuracy: 0.6213
Epoch 33/50
63/63 [==============================] - 3s 47ms/step - loss: 0.0801 - accuracy: 0.9635 - val_loss: 2.3931 - val_accuracy: 0.6399
Epoch 34/50
63/63 [==============================] - 5s 75ms/step - loss: 0.1094 - accuracy: 0.9535 - val_loss: 2.6719 - val_accuracy: 0.6188
Epoch 35/50
63/63 [==============================] - 3s 48ms/step - loss: 0.0587 - accuracy: 0.9790 - val_loss: 2.6054 - val_accuracy: 0.6423
Epoch 36/50
63/63 [==============================] - 3s 48ms/step - loss: 0.0429 - accuracy: 0.9855 - val_loss: 2.5520 - val_accuracy: 0.6324
Epoch 37/50
63/63 [==============================] - 3s 50ms/step - loss: 0.0252 - accuracy: 0.9920 - val_loss: 2.6287 - val_accuracy: 0.6312
Epoch 38/50
63/63 [==============================] - 5s 71ms/step - loss: 0.0432 - accuracy: 0.9850 - val_loss: 2.7542 - val_accuracy: 0.6423
Epoch 39/50
63/63 [==============================] - 3s 49ms/step - loss: 0.0342 - accuracy: 0.9895 - val_loss: 2.8145 - val_accuracy: 0.6361
Epoch 40/50
63/63 [==============================] - 3s 49ms/step - loss: 0.0372 - accuracy: 0.9895 - val_loss: 3.0052 - val_accuracy: 0.6040
Epoch 41/50
63/63 [==============================] - 5s 75ms/step - loss: 0.0514 - accuracy: 0.9855 - val_loss: 2.9740 - val_accuracy: 0.6386
Epoch 42/50
63/63 [==============================] - 3s 48ms/step - loss: 0.0786 - accuracy: 0.9755 - val_loss: 2.9864 - val_accuracy: 0.6176
Epoch 43/50
63/63 [==============================] - 3s 48ms/step - loss: 0.1579 - accuracy: 0.9620 - val_loss: 3.0914 - val_accuracy: 0.6077
Epoch 44/50
63/63 [==============================] - 3s 48ms/step - loss: 0.0814 - accuracy: 0.9675 - val_loss: 2.8460 - val_accuracy: 0.6386
Epoch 45/50
63/63 [==============================] - 5s 76ms/step - loss: 0.1028 - accuracy: 0.9655 - val_loss: 3.1745 - val_accuracy: 0.6250
Epoch 46/50
63/63 [==============================] - 3s 48ms/step - loss: 0.1022 - accuracy: 0.9725 - val_loss: 3.0819 - val_accuracy: 0.6126
Epoch 47/50
63/63 [==============================] - 3s 49ms/step - loss: 0.0784 - accuracy: 0.9760 - val_loss: 3.2478 - val_accuracy: 0.6213
Epoch 48/50
63/63 [==============================] - 5s 74ms/step - loss: 0.0774 - accuracy: 0.9740 - val_loss: 3.0277 - val_accuracy: 0.6312
Epoch 49/50
63/63 [==============================] - 3s 48ms/step - loss: 0.0378 - accuracy: 0.9880 - val_loss: 3.4703 - val_accuracy: 0.6176
Epoch 50/50
63/63 [==============================] - 3s 47ms/step - loss: 0.0343 - accuracy: 0.9890 - val_loss: 3.4245 - val_accuracy: 0.6287
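
The assignment asks for a plot of the training history; a minimal sketch of such a plot, assuming the history object returned by fit above:

# plot training vs. validation accuracy across epochs
plt.plot(history.history["accuracy"], label="training")
plt.plot(history.history["val_accuracy"], label="validation")
plt.xlabel("epoch")
plt.ylabel("accuracy")
plt.legend()
plt.show()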

The validation accuracy of my model stabilized between 61% and 63% during training, more than a 10-percentage-point improvement over the 50% baseline. However, the training accuracy climbed as high as 98.9%, which is clear evidence of overfitting.
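
As an aside on the loss choice: for a two-class problem, a single logit unit with binary crossentropy is the more conventional head than a multi-unit sparse categorical one. A sketch of that alternative (the hypothetical model1b below is not the model trained above):

model1b = keras.Sequential([
    keras.Input(shape=(160, 160, 3)),
    layers.Conv2D(32, (5, 5), strides=(2, 2), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1)  # a single logit for the positive class ("dogs")
])
model1b.compile("adam",
                tf.keras.losses.BinaryCrossentropy(from_logits=True),
                ["accuracy"])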

Second model: With data augmentation

Random Flip

#input: a 3D (unbatched) or 4D (batched) tensor with shape
#  (..., height, width, channels), in "channels_last" format
#output: same shape as the input
r_flip = tf.keras.layers.RandomFlip(
    mode="horizontal_and_vertical", seed=2)
batch = train_dataset.take(1)
for img, labl in batch:
  pic = img[0]
fig, ax = plt.subplots(1, 5, figsize=(10, 10))
for i in range(5):
    ax[i].imshow(r_flip(pic).numpy().astype("uint8"))
    ax[i].axis("off")

Random Rotate

## rotate
r_rotate = tf.keras.layers.RandomRotation(
    # factor: rotation range as a fraction of 2*pi; (-1, 1) allows any angle
    factor=(-1, 1),
    fill_mode="reflect",
    interpolation="bilinear",
    seed=2,
    fill_value=0.0
)
batch = train_dataset.take(1)
for img, labl in batch:
  pic = img[0]
fig, ax = plt.subplots(1, 5, figsize=(10, 10))
for i in range(5):
    ax[i].imshow(r_rotate(pic).numpy().astype("uint8"))
    ax[i].axis("off")
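
Note that Keras augmentation layers such as RandomFlip and RandomRotation are only active in training mode; at inference time they pass images through unchanged. A quick check, assuming pic from the cell above:

# augmentation layers act as the identity outside of training mode
unchanged = tf.reduce_all(tf.equal(r_flip(pic, training=False), pic))
print(unchanged.numpy())  # expected: True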

#adding preprocessing: random flip and random rotate
model2 = keras.Sequential([
    keras.Input(shape=(160,160,3)),
    r_flip,
    r_rotate,
    layers.Conv2D(filters = 32, kernel_size = (5,5), strides=(2,2), activation="relu"),
    layers.Dropout(.1),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Conv2D(32, (3,3), 1, activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Flatten(),
    layers.Dense(32, activation='relu'),
    layers.Dense(16, activation='relu')
])
model2.build()

model2.summary()
Model: "sequential_3"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 random_flip (RandomFlip)    (None, 160, 160, 3)       0         
                                                                 
 random_rotation (RandomRota  (None, 160, 160, 3)      0         
 tion)                                                           
                                                                 
 conv2d_6 (Conv2D)           (None, 78, 78, 32)        2432      
                                                                 
 dropout_3 (Dropout)         (None, 78, 78, 32)        0         
                                                                 
 max_pooling2d_6 (MaxPooling  (None, 39, 39, 32)       0         
 2D)                                                             
                                                                 
 conv2d_7 (Conv2D)           (None, 37, 37, 32)        9248      
                                                                 
 max_pooling2d_7 (MaxPooling  (None, 18, 18, 32)       0         
 2D)                                                             
                                                                 
 flatten_3 (Flatten)         (None, 10368)             0         
                                                                 
 dense_6 (Dense)             (None, 32)                331808    
                                                                 
 dense_7 (Dense)             (None, 16)                528       
                                                                 
=================================================================
Total params: 344,016
Trainable params: 344,016
Non-trainable params: 0
_________________________________________________________________
model2.compile(optimizer, loss, metrics)
history2 = model2.fit(train_dataset, 
                     epochs=50, 
                     validation_data = validation_dataset)
Epoch 1/50
63/63 [==============================] - 12s 136ms/step - loss: 4.0756 - accuracy: 0.4935 - val_loss: 0.7550 - val_accuracy: 0.5186
Epoch 2/50
63/63 [==============================] - 8s 122ms/step - loss: 0.7321 - accuracy: 0.5460 - val_loss: 0.7016 - val_accuracy: 0.5606
Epoch 3/50
63/63 [==============================] - 10s 148ms/step - loss: 0.7107 - accuracy: 0.5390 - val_loss: 0.7209 - val_accuracy: 0.5408
Epoch 4/50
63/63 [==============================] - 9s 143ms/step - loss: 0.7210 - accuracy: 0.5490 - val_loss: 0.6741 - val_accuracy: 0.5879
Epoch 5/50
63/63 [==============================] - 8s 117ms/step - loss: 0.6888 - accuracy: 0.5740 - val_loss: 0.6998 - val_accuracy: 0.5965
Epoch 6/50
63/63 [==============================] - 9s 140ms/step - loss: 0.6974 - accuracy: 0.5610 - val_loss: 0.6889 - val_accuracy: 0.5792
Epoch 7/50
63/63 [==============================] - 9s 142ms/step - loss: 0.6945 - accuracy: 0.5735 - val_loss: 0.6859 - val_accuracy: 0.5532
Epoch 8/50
63/63 [==============================] - 8s 122ms/step - loss: 0.6753 - accuracy: 0.5760 - val_loss: 0.6853 - val_accuracy: 0.5272
Epoch 9/50
63/63 [==============================] - 8s 127ms/step - loss: 0.6654 - accuracy: 0.5855 - val_loss: 0.6860 - val_accuracy: 0.5458
Epoch 10/50
63/63 [==============================] - 9s 141ms/step - loss: 0.6758 - accuracy: 0.5735 - val_loss: 0.6878 - val_accuracy: 0.5594
Epoch 11/50
63/63 [==============================] - 10s 152ms/step - loss: 0.6591 - accuracy: 0.6025 - val_loss: 0.6608 - val_accuracy: 0.5743
Epoch 12/50
63/63 [==============================] - 8s 116ms/step - loss: 0.6761 - accuracy: 0.5735 - val_loss: 0.6518 - val_accuracy: 0.6312
Epoch 13/50
63/63 [==============================] - 9s 141ms/step - loss: 0.7175 - accuracy: 0.5320 - val_loss: 0.8113 - val_accuracy: 0.5062
Epoch 14/50
63/63 [==============================] - 9s 146ms/step - loss: 0.6544 - accuracy: 0.6070 - val_loss: 0.6738 - val_accuracy: 0.5953
Epoch 15/50
63/63 [==============================] - 8s 116ms/step - loss: 0.6617 - accuracy: 0.5960 - val_loss: 0.6667 - val_accuracy: 0.6139
Epoch 16/50
63/63 [==============================] - 9s 141ms/step - loss: 0.6601 - accuracy: 0.6035 - val_loss: 0.6790 - val_accuracy: 0.6077
Epoch 17/50
63/63 [==============================] - 8s 122ms/step - loss: 0.6619 - accuracy: 0.5950 - val_loss: 0.7183 - val_accuracy: 0.5767
Epoch 18/50
63/63 [==============================] - 8s 116ms/step - loss: 0.6759 - accuracy: 0.6170 - val_loss: 0.6822 - val_accuracy: 0.5606
Epoch 19/50
63/63 [==============================] - 9s 133ms/step - loss: 0.6648 - accuracy: 0.5965 - val_loss: 0.7517 - val_accuracy: 0.5136
Epoch 20/50
63/63 [==============================] - 10s 162ms/step - loss: 0.6946 - accuracy: 0.5550 - val_loss: 0.6791 - val_accuracy: 0.5866
Epoch 21/50
63/63 [==============================] - 11s 178ms/step - loss: 0.6878 - accuracy: 0.5665 - val_loss: 0.6356 - val_accuracy: 0.6423
Epoch 22/50
63/63 [==============================] - 10s 158ms/step - loss: 0.6421 - accuracy: 0.6205 - val_loss: 0.6380 - val_accuracy: 0.6250
Epoch 23/50
63/63 [==============================] - 12s 185ms/step - loss: 0.6404 - accuracy: 0.6225 - val_loss: 0.6903 - val_accuracy: 0.5668
Epoch 24/50
63/63 [==============================] - 9s 144ms/step - loss: 0.6565 - accuracy: 0.6205 - val_loss: 0.6455 - val_accuracy: 0.6200
Epoch 25/50
63/63 [==============================] - 9s 133ms/step - loss: 0.6473 - accuracy: 0.6370 - val_loss: 0.6772 - val_accuracy: 0.6114
Epoch 26/50
63/63 [==============================] - 11s 180ms/step - loss: 0.6733 - accuracy: 0.5985 - val_loss: 0.6580 - val_accuracy: 0.6114
Epoch 27/50
63/63 [==============================] - 11s 161ms/step - loss: 0.6459 - accuracy: 0.6315 - val_loss: 0.6503 - val_accuracy: 0.6188
Epoch 28/50
63/63 [==============================] - 13s 200ms/step - loss: 0.6500 - accuracy: 0.6195 - val_loss: 0.6428 - val_accuracy: 0.6312
Epoch 29/50
63/63 [==============================] - 8s 122ms/step - loss: 0.6315 - accuracy: 0.6255 - val_loss: 0.6448 - val_accuracy: 0.6485
Epoch 30/50
63/63 [==============================] - 15s 227ms/step - loss: 0.6568 - accuracy: 0.6200 - val_loss: 0.6090 - val_accuracy: 0.6832
Epoch 31/50
63/63 [==============================] - 10s 146ms/step - loss: 0.6267 - accuracy: 0.6585 - val_loss: 0.6694 - val_accuracy: 0.5928
Epoch 32/50
63/63 [==============================] - 8s 120ms/step - loss: 0.6230 - accuracy: 0.6470 - val_loss: 0.6386 - val_accuracy: 0.6498
Epoch 33/50
63/63 [==============================] - 9s 142ms/step - loss: 0.6318 - accuracy: 0.6435 - val_loss: 0.6334 - val_accuracy: 0.6572
Epoch 34/50
63/63 [==============================] - 10s 151ms/step - loss: 0.6155 - accuracy: 0.6720 - val_loss: 0.6435 - val_accuracy: 0.6485
Epoch 35/50
63/63 [==============================] - 8s 118ms/step - loss: 0.6209 - accuracy: 0.6605 - val_loss: 0.6295 - val_accuracy: 0.6522
Epoch 36/50
63/63 [==============================] - 9s 132ms/step - loss: 0.6013 - accuracy: 0.6655 - val_loss: 0.6611 - val_accuracy: 0.6535
Epoch 37/50
63/63 [==============================] - 9s 144ms/step - loss: 0.6228 - accuracy: 0.6550 - val_loss: 0.6374 - val_accuracy: 0.6473
Epoch 38/50
63/63 [==============================] - 8s 119ms/step - loss: 0.6080 - accuracy: 0.6635 - val_loss: 1.1070 - val_accuracy: 0.5285
Epoch 39/50
63/63 [==============================] - 8s 120ms/step - loss: 0.6583 - accuracy: 0.6095 - val_loss: 0.6477 - val_accuracy: 0.6386
Epoch 40/50
63/63 [==============================] - 10s 154ms/step - loss: 0.6965 - accuracy: 0.5855 - val_loss: 0.7324 - val_accuracy: 0.5396
Epoch 41/50
63/63 [==============================] - 11s 164ms/step - loss: 0.7275 - accuracy: 0.5770 - val_loss: 0.7071 - val_accuracy: 0.5557
Epoch 42/50
63/63 [==============================] - 9s 146ms/step - loss: 0.6915 - accuracy: 0.5775 - val_loss: 0.6913 - val_accuracy: 0.5458
Epoch 43/50
63/63 [==============================] - 8s 121ms/step - loss: 0.6767 - accuracy: 0.5715 - val_loss: 0.6833 - val_accuracy: 0.5866
Epoch 44/50
63/63 [==============================] - 9s 145ms/step - loss: 0.6881 - accuracy: 0.5560 - val_loss: 0.6887 - val_accuracy: 0.6027
Epoch 45/50
63/63 [==============================] - 11s 174ms/step - loss: 0.6501 - accuracy: 0.6280 - val_loss: 0.6613 - val_accuracy: 0.6213
Epoch 46/50
63/63 [==============================] - 8s 122ms/step - loss: 0.6586 - accuracy: 0.6075 - val_loss: 0.6697 - val_accuracy: 0.6238
Epoch 47/50
63/63 [==============================] - 11s 169ms/step - loss: 0.6692 - accuracy: 0.6160 - val_loss: 0.6544 - val_accuracy: 0.6200
Epoch 48/50
63/63 [==============================] - 8s 124ms/step - loss: 0.6299 - accuracy: 0.6465 - val_loss: 0.6437 - val_accuracy: 0.6436
Epoch 49/50
63/63 [==============================] - 8s 117ms/step - loss: 0.6260 - accuracy: 0.6500 - val_loss: 0.6142 - val_accuracy: 0.6770
Epoch 50/50
63/63 [==============================] - 9s 145ms/step - loss: 0.6195 - accuracy: 0.6555 - val_loss: 0.6257 - val_accuracy: 0.6522

The performance of the model with data augmentation is similar to the previous one: the validation accuracy reached somewhere between 62% and 67%, but without significant overfitting, since the training accuracy also hovers around 65%.

Third model: With more data preprocessing

# a small functional "model" that rescales RGB values from [0, 255] to [-1, 1]
i = tf.keras.Input(shape=(160, 160, 3))
x = tf.keras.applications.mobilenet_v2.preprocess_input(i)
preprocessor = tf.keras.Model(inputs=[i], outputs=[x])
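
It is worth sanity-checking what preprocess_input does here: for MobileNetV2 it rescales pixel values from [0, 255] to [-1, 1]. A minimal check:

# verify the rescaling on three representative pixel values
sample = tf.constant([[0.0, 127.5, 255.0]])
print(tf.keras.applications.mobilenet_v2.preprocess_input(sample).numpy())
# expected: [[-1.  0.  1.]]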
#adding preprocessing: normalization plus random flip and random rotate
model3 = keras.Sequential([
    preprocessor,
    r_flip,
    r_rotate,
    layers.Conv2D(filters = 32, kernel_size = (5,5), strides=(2,2), activation="relu"),
    layers.Dropout(.1),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Conv2D(32, (3,3), 1, activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Flatten(),
    layers.Dense(32, activation='relu'),
    layers.Dense(16, activation='relu')
])
model3.build()

model3.summary()
Model: "sequential_5"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 model (Functional)          (None, 160, 160, 3)       0         
                                                                 
 random_flip (RandomFlip)    (None, 160, 160, 3)       0         
                                                                 
 random_rotation (RandomRota  (None, 160, 160, 3)      0         
 tion)                                                           
                                                                 
 conv2d_10 (Conv2D)          (None, 78, 78, 32)        2432      
                                                                 
 dropout_5 (Dropout)         (None, 78, 78, 32)        0         
                                                                 
 max_pooling2d_10 (MaxPoolin  (None, 39, 39, 32)       0         
 g2D)                                                            
                                                                 
 conv2d_11 (Conv2D)          (None, 37, 37, 32)        9248      
                                                                 
 max_pooling2d_11 (MaxPoolin  (None, 18, 18, 32)       0         
 g2D)                                                            
                                                                 
 flatten_5 (Flatten)         (None, 10368)             0         
                                                                 
 dense_10 (Dense)            (None, 32)                331808    
                                                                 
 dense_11 (Dense)            (None, 16)                528       
                                                                 
=================================================================
Total params: 344,016
Trainable params: 344,016
Non-trainable params: 0
_________________________________________________________________
model3.compile(optimizer, loss, metrics)
history3 = model3.fit(train_dataset, 
                     epochs=50, 
                     validation_data = validation_dataset)
Epoch 1/50
63/63 [==============================] - 13s 140ms/step - loss: 0.7305 - accuracy: 0.5600 - val_loss: 0.6759 - val_accuracy: 0.5941
Epoch 2/50
63/63 [==============================] - 11s 179ms/step - loss: 0.6545 - accuracy: 0.6090 - val_loss: 0.6424 - val_accuracy: 0.6324
Epoch 3/50
63/63 [==============================] - 8s 124ms/step - loss: 0.6459 - accuracy: 0.6365 - val_loss: 0.6268 - val_accuracy: 0.6522
Epoch 4/50
63/63 [==============================] - 9s 132ms/step - loss: 0.6331 - accuracy: 0.6340 - val_loss: 0.6149 - val_accuracy: 0.6844
Epoch 5/50
63/63 [==============================] - 9s 147ms/step - loss: 0.6036 - accuracy: 0.6745 - val_loss: 0.5946 - val_accuracy: 0.6757
Epoch 6/50
63/63 [==============================] - 8s 121ms/step - loss: 0.5918 - accuracy: 0.6800 - val_loss: 0.5733 - val_accuracy: 0.6943
Epoch 7/50
63/63 [==============================] - 8s 118ms/step - loss: 0.5924 - accuracy: 0.6745 - val_loss: 0.6273 - val_accuracy: 0.6448
Epoch 8/50
63/63 [==============================] - 13s 192ms/step - loss: 0.5949 - accuracy: 0.6820 - val_loss: 0.6055 - val_accuracy: 0.6621
Epoch 9/50
63/63 [==============================] - 9s 145ms/step - loss: 0.5899 - accuracy: 0.6740 - val_loss: 0.6145 - val_accuracy: 0.6621
Epoch 10/50
63/63 [==============================] - 9s 145ms/step - loss: 0.5893 - accuracy: 0.6915 - val_loss: 0.5710 - val_accuracy: 0.6993
Epoch 11/50
63/63 [==============================] - 8s 119ms/step - loss: 0.5574 - accuracy: 0.7080 - val_loss: 0.5739 - val_accuracy: 0.6931
Epoch 12/50
63/63 [==============================] - 11s 169ms/step - loss: 0.5672 - accuracy: 0.6975 - val_loss: 0.5582 - val_accuracy: 0.6980
Epoch 13/50
63/63 [==============================] - 8s 120ms/step - loss: 0.5543 - accuracy: 0.7110 - val_loss: 0.5619 - val_accuracy: 0.6881
Epoch 14/50
63/63 [==============================] - 8s 119ms/step - loss: 0.5666 - accuracy: 0.6990 - val_loss: 0.5545 - val_accuracy: 0.7067
Epoch 15/50
63/63 [==============================] - 9s 145ms/step - loss: 0.5484 - accuracy: 0.7145 - val_loss: 0.5601 - val_accuracy: 0.7017
Epoch 16/50
63/63 [==============================] - 9s 146ms/step - loss: 0.5491 - accuracy: 0.7130 - val_loss: 0.5683 - val_accuracy: 0.7005
Epoch 17/50
63/63 [==============================] - 9s 139ms/step - loss: 0.5436 - accuracy: 0.7210 - val_loss: 0.5584 - val_accuracy: 0.7067
Epoch 18/50
63/63 [==============================] - 10s 159ms/step - loss: 0.5432 - accuracy: 0.7190 - val_loss: 0.5761 - val_accuracy: 0.7042
Epoch 19/50
63/63 [==============================] - 13s 209ms/step - loss: 0.5325 - accuracy: 0.7370 - val_loss: 0.5598 - val_accuracy: 0.7017
Epoch 20/50
63/63 [==============================] - 16s 241ms/step - loss: 0.5387 - accuracy: 0.7300 - val_loss: 0.5343 - val_accuracy: 0.7141
Epoch 21/50
63/63 [==============================] - 10s 141ms/step - loss: 0.5448 - accuracy: 0.7255 - val_loss: 0.6586 - val_accuracy: 0.6361
Epoch 22/50
63/63 [==============================] - 10s 154ms/step - loss: 0.5316 - accuracy: 0.7300 - val_loss: 0.5711 - val_accuracy: 0.7054
Epoch 23/50
63/63 [==============================] - 10s 153ms/step - loss: 0.5179 - accuracy: 0.7390 - val_loss: 0.5543 - val_accuracy: 0.7191
Epoch 24/50
63/63 [==============================] - 11s 179ms/step - loss: 0.5183 - accuracy: 0.7385 - val_loss: 0.5317 - val_accuracy: 0.7166
Epoch 25/50
63/63 [==============================] - 9s 142ms/step - loss: 0.5146 - accuracy: 0.7460 - val_loss: 0.5647 - val_accuracy: 0.7042
Epoch 26/50
63/63 [==============================] - 9s 143ms/step - loss: 0.5119 - accuracy: 0.7420 - val_loss: 0.5662 - val_accuracy: 0.7104
Epoch 27/50
63/63 [==============================] - 9s 146ms/step - loss: 0.4905 - accuracy: 0.7570 - val_loss: 0.6048 - val_accuracy: 0.7030
Epoch 28/50
63/63 [==============================] - 8s 117ms/step - loss: 0.5120 - accuracy: 0.7390 - val_loss: 0.5372 - val_accuracy: 0.7265
Epoch 29/50
63/63 [==============================] - 9s 144ms/step - loss: 0.4876 - accuracy: 0.7695 - val_loss: 0.5654 - val_accuracy: 0.7067
Epoch 30/50
63/63 [==============================] - 10s 149ms/step - loss: 0.5052 - accuracy: 0.7590 - val_loss: 0.5366 - val_accuracy: 0.7240
Epoch 31/50
63/63 [==============================] - 8s 117ms/step - loss: 0.4856 - accuracy: 0.7650 - val_loss: 0.5385 - val_accuracy: 0.7240
Epoch 32/50
63/63 [==============================] - 10s 148ms/step - loss: 0.4745 - accuracy: 0.7840 - val_loss: 0.5287 - val_accuracy: 0.7290
Epoch 33/50
63/63 [==============================] - 9s 147ms/step - loss: 0.4799 - accuracy: 0.7605 - val_loss: 0.5576 - val_accuracy: 0.7116
Epoch 34/50
63/63 [==============================] - 8s 118ms/step - loss: 0.4811 - accuracy: 0.7600 - val_loss: 0.5079 - val_accuracy: 0.7475
Epoch 35/50
63/63 [==============================] - 10s 149ms/step - loss: 0.4729 - accuracy: 0.7690 - val_loss: 0.5933 - val_accuracy: 0.7054
Epoch 36/50
63/63 [==============================] - 10s 155ms/step - loss: 0.4984 - accuracy: 0.7675 - val_loss: 0.5220 - val_accuracy: 0.7376
Epoch 37/50
63/63 [==============================] - 9s 132ms/step - loss: 0.4845 - accuracy: 0.7555 - val_loss: 0.5131 - val_accuracy: 0.7351
Epoch 38/50
63/63 [==============================] - 9s 143ms/step - loss: 0.4595 - accuracy: 0.7770 - val_loss: 0.5723 - val_accuracy: 0.7054
Epoch 39/50
63/63 [==============================] - 9s 143ms/step - loss: 0.4698 - accuracy: 0.7750 - val_loss: 0.5284 - val_accuracy: 0.7376
Epoch 40/50
63/63 [==============================] - 8s 119ms/step - loss: 0.4727 - accuracy: 0.7730 - val_loss: 0.5317 - val_accuracy: 0.7376
Epoch 41/50
63/63 [==============================] - 8s 126ms/step - loss: 0.4759 - accuracy: 0.7695 - val_loss: 0.5556 - val_accuracy: 0.7104
Epoch 42/50
63/63 [==============================] - 9s 139ms/step - loss: 0.4613 - accuracy: 0.7935 - val_loss: 0.5647 - val_accuracy: 0.7042
Epoch 43/50
63/63 [==============================] - 8s 127ms/step - loss: 0.4545 - accuracy: 0.7775 - val_loss: 0.5112 - val_accuracy: 0.7450
Epoch 44/50
63/63 [==============================] - 10s 147ms/step - loss: 0.4364 - accuracy: 0.7965 - val_loss: 0.5362 - val_accuracy: 0.7389
Epoch 45/50
63/63 [==============================] - 12s 173ms/step - loss: 0.4349 - accuracy: 0.7980 - val_loss: 0.5546 - val_accuracy: 0.7290
Epoch 46/50
63/63 [==============================] - 12s 191ms/step - loss: 0.4516 - accuracy: 0.7870 - val_loss: 0.5443 - val_accuracy: 0.7401
Epoch 47/50
63/63 [==============================] - 9s 145ms/step - loss: 0.4393 - accuracy: 0.7970 - val_loss: 0.5473 - val_accuracy: 0.7463
Epoch 48/50
63/63 [==============================] - 13s 190ms/step - loss: 0.4459 - accuracy: 0.7875 - val_loss: 0.5607 - val_accuracy: 0.7277
Epoch 49/50
63/63 [==============================] - 12s 194ms/step - loss: 0.4347 - accuracy: 0.7980 - val_loss: 0.5752 - val_accuracy: 0.7166
Epoch 50/50
63/63 [==============================] - 10s 150ms/step - loss: 0.4405 - accuracy: 0.7925 - val_loss: 0.5470 - val_accuracy: 0.7191

Given the additional preprocessing layer that rescales the RGB values to the range [-1, 1] before training, the validation accuracy stabilized between 70% and 71%, again without significant overfitting.

Fourth model: With transfer learning

IMG_SHAPE = IMG_SIZE + (3,)
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               weights='imagenet')
base_model.trainable = False

i = tf.keras.Input(shape=IMG_SHAPE)
x = base_model(i, training=False)  # keep the base's BatchNorm layers in inference mode
base_model_layer = tf.keras.Model(inputs=[i], outputs=[x])
#preprocessing and augmentation, followed by the frozen base and a small classification head
model4 = keras.Sequential([
    preprocessor,
    r_flip,
    r_rotate,
    base_model_layer,
    layers.Dropout(.1),
    layers.GlobalMaxPooling2D(),
    layers.Flatten(),  # a no-op here: the pooled output is already flat
    layers.Dense(32, activation='relu'),
    layers.Dense(2)  # output logits, since the loss uses from_logits=True
])

model4.build()

model4.summary()
Model: "sequential_10"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 model (Functional)          (None, 160, 160, 3)       0         
                                                                 
 random_flip (RandomFlip)    (None, 160, 160, 3)       0         
                                                                 
 random_rotation (RandomRota  (None, 160, 160, 3)      0         
 tion)                                                           
                                                                 
 model_2 (Functional)        (None, 5, 5, 1280)        2257984   
                                                                 
 dropout_11 (Dropout)        (None, 5, 5, 1280)        0         
                                                                 
 global_max_pooling2d_4 (Glo  (None, 1280)             0         
 balMaxPooling2D)                                                
                                                                 
 flatten_10 (Flatten)        (None, 1280)              0         
                                                                 
 dense_22 (Dense)            (None, 32)                40992     
                                                                 
 dense_23 (Dense)            (None, 2)                 66        
                                                                 
=================================================================
Total params: 2,299,042
Trainable params: 41,058
Non-trainable params: 2,257,984
_________________________________________________________________
model4.compile(optimizer, loss, metrics)
history4 = model4.fit(train_dataset, 
                     epochs=50, 
                     validation_data = validation_dataset)
Epoch 1/50
63/63 [==============================] - 19s 171ms/step - loss: 0.4589 - accuracy: 0.8075 - val_loss: 0.1331 - val_accuracy: 0.9505
Epoch 2/50
63/63 [==============================] - 9s 137ms/step - loss: 0.2380 - accuracy: 0.8970 - val_loss: 0.0928 - val_accuracy: 0.9703
Epoch 3/50
63/63 [==============================] - 9s 142ms/step - loss: 0.2126 - accuracy: 0.9065 - val_loss: 0.0833 - val_accuracy: 0.9740
Epoch 4/50
63/63 [==============================] - 10s 161ms/step - loss: 0.2054 - accuracy: 0.9155 - val_loss: 0.0818 - val_accuracy: 0.9740
Epoch 5/50
63/63 [==============================] - 10s 159ms/step - loss: 0.1844 - accuracy: 0.9195 - val_loss: 0.0736 - val_accuracy: 0.9678
Epoch 6/50
63/63 [==============================] - 10s 144ms/step - loss: 0.1760 - accuracy: 0.9285 - val_loss: 0.0711 - val_accuracy: 0.9728
Epoch 7/50
63/63 [==============================] - 10s 160ms/step - loss: 0.1777 - accuracy: 0.9290 - val_loss: 0.0645 - val_accuracy: 0.9752
Epoch 8/50
63/63 [==============================] - 10s 159ms/step - loss: 0.1541 - accuracy: 0.9365 - val_loss: 0.0657 - val_accuracy: 0.9765
Epoch 9/50
63/63 [==============================] - 9s 136ms/step - loss: 0.1785 - accuracy: 0.9225 - val_loss: 0.0800 - val_accuracy: 0.9691
Epoch 10/50
63/63 [==============================] - 10s 143ms/step - loss: 0.1592 - accuracy: 0.9280 - val_loss: 0.0620 - val_accuracy: 0.9728
Epoch 11/50
63/63 [==============================] - 10s 160ms/step - loss: 0.1492 - accuracy: 0.9380 - val_loss: 0.0894 - val_accuracy: 0.9653
Epoch 12/50
63/63 [==============================] - 10s 159ms/step - loss: 0.1513 - accuracy: 0.9380 - val_loss: 0.0562 - val_accuracy: 0.9790
Epoch 13/50
63/63 [==============================] - 9s 136ms/step - loss: 0.1436 - accuracy: 0.9415 - val_loss: 0.0700 - val_accuracy: 0.9728
Epoch 14/50
63/63 [==============================] - 10s 160ms/step - loss: 0.1561 - accuracy: 0.9360 - val_loss: 0.0716 - val_accuracy: 0.9678
Epoch 15/50
63/63 [==============================] - 10s 159ms/step - loss: 0.1421 - accuracy: 0.9445 - val_loss: 0.0640 - val_accuracy: 0.9715
Epoch 16/50
63/63 [==============================] - 9s 144ms/step - loss: 0.1533 - accuracy: 0.9430 - val_loss: 0.0718 - val_accuracy: 0.9666
Epoch 17/50
63/63 [==============================] - 9s 136ms/step - loss: 0.1258 - accuracy: 0.9465 - val_loss: 0.0646 - val_accuracy: 0.9740
Epoch 18/50
63/63 [==============================] - 10s 160ms/step - loss: 0.1524 - accuracy: 0.9435 - val_loss: 0.0740 - val_accuracy: 0.9691
Epoch 19/50
63/63 [==============================] - 10s 160ms/step - loss: 0.1326 - accuracy: 0.9470 - val_loss: 0.0717 - val_accuracy: 0.9666
Epoch 20/50
63/63 [==============================] - 10s 154ms/step - loss: 0.1348 - accuracy: 0.9435 - val_loss: 0.0708 - val_accuracy: 0.9728
Epoch 21/50
63/63 [==============================] - 10s 161ms/step - loss: 0.1236 - accuracy: 0.9485 - val_loss: 0.0558 - val_accuracy: 0.9728
Epoch 22/50
63/63 [==============================] - 9s 136ms/step - loss: 0.1478 - accuracy: 0.9355 - val_loss: 0.0543 - val_accuracy: 0.9765
Epoch 23/50
63/63 [==============================] - 10s 159ms/step - loss: 0.1343 - accuracy: 0.9465 - val_loss: 0.0633 - val_accuracy: 0.9752
Epoch 24/50
63/63 [==============================] - 10s 157ms/step - loss: 0.1097 - accuracy: 0.9540 - val_loss: 0.0619 - val_accuracy: 0.9790
Epoch 25/50
63/63 [==============================] - 9s 146ms/step - loss: 0.1158 - accuracy: 0.9525 - val_loss: 0.0714 - val_accuracy: 0.9728
Epoch 26/50
63/63 [==============================] - 9s 137ms/step - loss: 0.1198 - accuracy: 0.9490 - val_loss: 0.0744 - val_accuracy: 0.9703
Epoch 27/50
63/63 [==============================] - 10s 159ms/step - loss: 0.1217 - accuracy: 0.9485 - val_loss: 0.0623 - val_accuracy: 0.9777
Epoch 28/50
63/63 [==============================] - 10s 158ms/step - loss: 0.1345 - accuracy: 0.9460 - val_loss: 0.0599 - val_accuracy: 0.9728
Epoch 29/50
63/63 [==============================] - 10s 155ms/step - loss: 0.1053 - accuracy: 0.9570 - val_loss: 0.0609 - val_accuracy: 0.9740
Epoch 30/50
63/63 [==============================] - 9s 134ms/step - loss: 0.1421 - accuracy: 0.9415 - val_loss: 0.0816 - val_accuracy: 0.9678
Epoch 31/50
63/63 [==============================] - 10s 159ms/step - loss: 0.1043 - accuracy: 0.9540 - val_loss: 0.0724 - val_accuracy: 0.9653
Epoch 32/50
63/63 [==============================] - 10s 158ms/step - loss: 0.1303 - accuracy: 0.9520 - val_loss: 0.0612 - val_accuracy: 0.9740
Epoch 33/50
63/63 [==============================] - 9s 135ms/step - loss: 0.1066 - accuracy: 0.9600 - val_loss: 0.0716 - val_accuracy: 0.9678
Epoch 34/50
63/63 [==============================] - 10s 157ms/step - loss: 0.1045 - accuracy: 0.9565 - val_loss: 0.0577 - val_accuracy: 0.9752
Epoch 35/50
63/63 [==============================] - 10s 161ms/step - loss: 0.0946 - accuracy: 0.9610 - val_loss: 0.0550 - val_accuracy: 0.9777
Epoch 36/50
63/63 [==============================] - 9s 133ms/step - loss: 0.1040 - accuracy: 0.9565 - val_loss: 0.0604 - val_accuracy: 0.9765
Epoch 37/50
63/63 [==============================] - 10s 159ms/step - loss: 0.1001 - accuracy: 0.9595 - val_loss: 0.0606 - val_accuracy: 0.9752
Epoch 38/50
63/63 [==============================] - 10s 158ms/step - loss: 0.1013 - accuracy: 0.9580 - val_loss: 0.0660 - val_accuracy: 0.9752
Epoch 39/50
63/63 [==============================] - 10s 158ms/step - loss: 0.1027 - accuracy: 0.9615 - val_loss: 0.0625 - val_accuracy: 0.9765
Epoch 40/50
63/63 [==============================] - 9s 136ms/step - loss: 0.0997 - accuracy: 0.9625 - val_loss: 0.0550 - val_accuracy: 0.9777
Epoch 41/50
63/63 [==============================] - 10s 158ms/step - loss: 0.1018 - accuracy: 0.9610 - val_loss: 0.0622 - val_accuracy: 0.9814
Epoch 42/50
63/63 [==============================] - 10s 158ms/step - loss: 0.1016 - accuracy: 0.9570 - val_loss: 0.0549 - val_accuracy: 0.9790
Epoch 43/50
63/63 [==============================] - 9s 135ms/step - loss: 0.0981 - accuracy: 0.9625 - val_loss: 0.0678 - val_accuracy: 0.9765
Epoch 44/50
63/63 [==============================] - 9s 141ms/step - loss: 0.1051 - accuracy: 0.9555 - val_loss: 0.0724 - val_accuracy: 0.9740
Epoch 45/50
63/63 [==============================] - 10s 160ms/step - loss: 0.1125 - accuracy: 0.9510 - val_loss: 0.0825 - val_accuracy: 0.9691
Epoch 46/50
63/63 [==============================] - 10s 158ms/step - loss: 0.1122 - accuracy: 0.9590 - val_loss: 0.0705 - val_accuracy: 0.9765
Epoch 47/50
63/63 [==============================] - 9s 135ms/step - loss: 0.0926 - accuracy: 0.9605 - val_loss: 0.0541 - val_accuracy: 0.9765
Epoch 48/50
63/63 [==============================] - 10s 161ms/step - loss: 0.0815 - accuracy: 0.9630 - val_loss: 0.0647 - val_accuracy: 0.9790
Epoch 49/50
63/63 [==============================] - 10s 161ms/step - loss: 0.0965 - accuracy: 0.9655 - val_loss: 0.0597 - val_accuracy: 0.9777
Epoch 50/50
63/63 [==============================] - 9s 136ms/step - loss: 0.0869 - accuracy: 0.9665 - val_loss: 0.0670 - val_accuracy: 0.9777

By incorporating a pretrained network and transferring its learned features to our task, the final validation accuracy reached approximately 97.8%, with no evidence of overfitting, as the training accuracy sits around 96.7%.
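
A natural next step, not performed here, would be fine-tuning: unfreeze the top layers of the base model and continue training at a much lower learning rate, so the pretrained weights are only nudged. A sketch under those assumptions (the cutoff of 100 layers and the name history4_fine are illustrative):

# unfreeze only the top of the base network and retrain gently
base_model.trainable = True
for layer in base_model.layers[:100]:  # keep the first 100 layers frozen
    layer.trainable = False
model4.compile(tf.keras.optimizers.Adam(learning_rate=1e-5), loss, metrics)
history4_fine = model4.fit(train_dataset, epochs=10,
                           validation_data=validation_dataset)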

Evaluate on the test set

import numpy as np

Reference: https://keras.io/api/models/model_training_apis/#predictonbatch-method

test_image = []
test_label = []
for image, label in test_dataset:
  test_image.append(image.numpy())
  test_label.append(label.numpy())
test_images = np.concatenate(test_image)
test_label = np.concatenate(test_label)
test_label[0]  # dog is 1, cat is 0
0
plt.imshow(test_images[0].astype("uint8"))
<matplotlib.image.AxesImage at 0x7f49f4283970>

prediction = model4.predict(test_images)
prediction.shape
6/6 [==============================] - 0s 34ms/step
(192, 2)
pred = np.argmax(prediction, axis=-1)
pred.shape == test_label.shape
True
print("Test accuracy: ",np.sum([pred == test_label])/len(pred))
Test accuracy:  0.9895833333333334
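
The same number can be obtained directly from Keras; a minimal sketch, assuming model4 and the batched test_dataset from above:

# Keras computes loss and accuracy over the batched test set directly
test_loss, test_acc = model4.evaluate(test_dataset)
print("Test accuracy:", test_acc)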

We reached a final classification accuracy of 98.96% on the test dataset using the fourth model, which combined color normalization, random flips and rotations, and transfer learning.