Milestone Project: Food Vision Big

This notebook is an account of my work for the Udemy course TensorFlow Developer Certificate in 2022: Zero to Mastery. This notebook covers:

  • Using TensorFlow Datasets to download and explore data (all of Food101)
  • Creating a preprocessing function for our data
  • Batching and preparing datasets for modelling (making them run fast)
  • Setting up mixed precision training (faster model training)

As a part of the project:

  • Building and training a feature extraction model
  • Fine-tuning the feature extraction model to beat the DeepFood paper
  • Evaluating model results on TensorBoard
  • Evaluating model results by making and plotting predictions.

Check GPU

  • Google Colab offers GPUs, however not all of them are compatible with mixed precision training.

Google Colab offers:

  • K80 (not compatible)
  • P100 (not compatible)
  • Tesla T4 (compatible)

Knowing this, in order to use mixed precision training we need access to a Tesla T4 (from within Google Colab), or if we're using our own hardware, our GPU needs a compute capability score of 7.0+.
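We can also ask TensorFlow for the attached GPU's compute capability from inside the notebook (a minimal sketch, assuming a GPU runtime is attached; get_device_details may return an empty dict on some setups):

import tensorflow as tf

# List the physical GPUs visible to TensorFlow (an empty list means no GPU runtime)
gpus = tf.config.list_physical_devices("GPU")
for gpu in gpus:
  details = tf.config.experimental.get_device_details(gpu)
  name = details.get("device_name", "Unknown GPU")
  compute_capability = details.get("compute_capability") # e.g. (7, 5) for a Tesla T4
  print(f"{name}: compute capability {compute_capability}")
  if compute_capability and compute_capability[0] >= 7:
    print("Mixed precision should give a speedup on this GPU.")
  else:
    print("Mixed precision may run, but likely without a speedup (compute capability < 7.0).")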

# re-running this cell
!nvidia-smi -L
GPU 0: Tesla K80 (UUID: GPU-d5f295c5-6906-37cd-28fd-7ccfe8b8e738)

Get Helper functions

Rather than writing all the functions we need from scratch, we can reuse the helper functions we created before and download them into Colab.

!wget https://raw.githubusercontent.com/mrdbourke/tensorflow-deep-learning/main/extras/helper_functions.py
--2022-02-22 06:33:05--  https://raw.githubusercontent.com/mrdbourke/tensorflow-deep-learning/main/extras/helper_functions.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10246 (10K) [text/plain]
Saving to: ‘helper_functions.py’

helper_functions.py 100%[===================>]  10.01K  --.-KB/s    in 0s      

2022-02-22 06:33:05 (90.6 MB/s) - ‘helper_functions.py’ saved [10246/10246]

from helper_functions import create_tensorboard_callback, plot_loss_curves, compare_historys

Using TensorFlow Datasets

TensorFlow Datasets is a collection of prepared, ready-to-use machine learning datasets. Using TensorFlow Datasets, we can download well-known datasets to work with via its API.

Why use TensorFlow Datasets?

  • Load data already in Tensor format
  • Practice on well-established datasets (for many different problem types)
  • Experiment with different modelling techniques on a consistent dataset.

Why not use TensorFlow Datasets?

  • Datasets are static (they do not change, unlike real-world datasets)
import tensorflow_datasets as tfds
datasets_list = tfds.list_builders() # Get all available datasets in TFDS
print("food101" in datasets_list) # is our target dataset in the list of TFDS datasets?
True
(train_data, test_data), ds_info = tfds.load(name = "food101",
                                             split = ["train", "validation"],
                                             shuffle_files = True,
                                             as_supervised = True, # data returned in tuple format(data,label)
                                             with_info = True)
Downloading and preparing dataset food101/2.0.0 (download: 4.65 GiB, generated: Unknown size, total: 4.65 GiB) to /root/tensorflow_datasets/food101/2.0.0...



Shuffling and writing examples to /root/tensorflow_datasets/food101/2.0.0.incomplete31YQ3Y/food101-train.tfrecord
Shuffling and writing examples to /root/tensorflow_datasets/food101/2.0.0.incomplete31YQ3Y/food101-validation.tfrecord
Dataset food101 downloaded and prepared to /root/tensorflow_datasets/food101/2.0.0. Subsequent calls will reuse this data.

Exploring the Food101 data from TensorFlow Datasets

We want to find:

  • Class names
  • The shape of our input data (image tensors)
  • The datatype of our input data
  • What the labels look like (e.g. are they one-hot encoded or label encoded?)
  • Do the labels match up with the class names?
ds_info.features
FeaturesDict({
    'image': Image(shape=(None, None, 3), dtype=tf.uint8),
    'label': ClassLabel(shape=(), dtype=tf.int64, num_classes=101),
})
classnames = ds_info.features["label"].names
classnames[:10]
['apple_pie',
 'baby_back_ribs',
 'baklava',
 'beef_carpaccio',
 'beef_tartare',
 'beet_salad',
 'beignets',
 'bibimbap',
 'bread_pudding',
 'breakfast_burrito']
train_one_sample = train_data.take(1) # samples are in format (image_tensor, label)
train_one_sample
<TakeDataset element_spec=(TensorSpec(shape=(None, None, 3), dtype=tf.uint8, name=None), TensorSpec(shape=(), dtype=tf.int64, name=None))>
for image, label in train_one_sample:
  print(f"""
  Image shape: {image.shape},
  Image datatype : {image.dtype},
  Target class from Food 101 (tensor form): {label},
  Class name (str form): {classnames[label.numpy()]}
    """)
  Image shape: (512, 512, 3),
  Image datatype : <dtype: 'uint8'>,
  Target class from Food 101 (tensor form): 64,
  Class name (str form): miso_soup
    
# What does our image tensor from TFDS's Food101 look like?
image

<tf.Tensor: shape=(512, 512, 3), dtype=uint8, numpy=
array([[[ 43,  89, 125],
        [ 52,  96, 131],
        [ 85, 128, 162],
        ...,
        [251, 254, 223],
        [250, 253, 222],
        [250, 253, 222]],

       [[ 42,  88, 124],
        [ 53,  97, 132],
        [ 92, 135, 169],
        ...,
        [251, 254, 223],
        [250, 253, 222],
        [250, 253, 222]],

       [[ 45,  89, 124],
        [ 52,  96, 131],
        [ 92, 135, 169],
        ...,
        [251, 254, 223],
        [250, 253, 222],
        [250, 253, 222]],

       ...,

       [[ 91,  99,  86],
        [ 89,  97,  84],
        [ 88,  94,  82],
        ...,
        [ 37,  44,  50],
        [ 34,  41,  47],
        [ 31,  38,  44]],

       [[ 91,  99,  86],
        [ 90,  98,  85],
        [ 88,  96,  83],
        ...,
        [ 38,  43,  47],
        [ 35,  40,  44],
        [ 33,  38,  42]],

       [[ 93, 101,  88],
        [ 93, 101,  88],
        [ 89,  97,  84],
        ...,
        [ 37,  42,  46],
        [ 35,  40,  44],
        [ 35,  40,  44]]], dtype=uint8)>
import tensorflow as tf
tf.reduce_min(image), tf.reduce_max(image)
(<tf.Tensor: shape=(), dtype=uint8, numpy=0>,
 <tf.Tensor: shape=(), dtype=uint8, numpy=255>)

Plot an image from TensorFlow Datasets

import matplotlib.pyplot as plt
plt.style.use('dark_background')
plt.imshow(image)
plt.title(classnames[label.numpy()]) # Add title to image to verify 
plt.axis(False)
(-0.5, 511.5, 511.5, -0.5)

Preprocessing our data

Neural networks perform best when data is prepared in a certain way (e.g. batched, normalized, etc.).
So in order to get it ready for a neural network, you'll often have to write preprocessing functions and map them to your data. What we know about our data:

  • In uint8 datatype.
  • Comprised of tensors of all different sizes (different sized images).
  • Not scaled (the pixel values are between 0 & 255).

What we know models like:

  • Data in float32 dtype (or for mixed precision, a mix of float16 and float32).
  • For batches, TensorFlow likes all of the tensors within a batch to be the same size.
  • Scaled (values between 0 & 1), also called normalized, tensors generally perform better.

Since we're going to be using an EfficientNetBX pretrained model from tf.keras.applications, we don't need to rescale our data (these architectures have rescaling built-in).

This means our functions need to:

  1. Reshape our images to all the same size
  2. Convert the dtype of our image tensors from uint8 to float32
def preprocess_img(image, label, img_shape = 224):
  """

  Converts image datatype from 'uint8' -> 'float32' and reshapes 
  image to [img_shape, img_shape, colour_channels]
  """
  image = tf.image.resize(image, [img_shape, img_shape] ) # Reshape target image
  #image = image/255. # Scaling images (not required for EfficientNetBX models)
  return tf.cast(image, tf.float32), label # returns (float32_image, label) tuple
# Preprocess a single sample of image and check the outputs
preprocessed_img = preprocess_img(image, label)[0]
print(f"Image before preprocessing :\n {image[:2]}..., \n Shape: {image.shape}, \n Datatype:{image.dtype}")
print(f"Image after preprocessing :\n {preprocessed_img[:2]}..., \n Shape: {preprocessed_img.shape} , \n Datatype:{preprocessed_img.dtype}")

Image before preprocessing :
 [[[ 43  89 125]
  [ 52  96 131]
  [ 85 128 162]
  ...
  [251 254 223]
  [250 253 222]
  [250 253 222]]

 [[ 42  88 124]
  [ 53  97 132]
  [ 92 135 169]
  ...
  [251 254 223]
  [250 253 222]
  [250 253 222]]]..., 
 Shape: (512, 512, 3), 
 Datatype:<dtype: 'uint8'>
Image after preprocessing :
 [[[ 48.969387  93.68367  129.04082 ]
  [124.78572  164.07144  195.28572 ]
  [125.372444 158.94388  183.5153  ]
  ...
  [251.78574  254.78574  223.78574 ]
  [251.       254.       223.      ]
  [250.       253.       222.      ]]

 [[ 65.28572  108.688774 143.09183 ]
  [129.93878  169.09184  200.17348 ]
  [ 79.61224  115.04081  140.88266 ]
  ...
  [251.78574  254.78574  223.78574 ]
  [251.       254.       223.      ]
  [250.       253.       222.      ]]]..., 
 Shape: (224, 224, 3) , 
 Datatype:<dtype: 'float32'>

Batch & Prepare datasets

# Map preprocessing function to training data ( and parallelize it)
train_data = train_data.map(map_func = preprocess_img, num_parallel_calls = tf.data.AUTOTUNE)

# Shuffle train data and turn it into data and prefetch it (load it faster)
train_data = train_data.shuffle(buffer_size = 1000).batch(batch_size = 32).prefetch(buffer_size = tf.data.AUTOTUNE)

# Map preprocessing function to test data (and parallelize it), then batch and prefetch it
test_data = test_data.map(map_func = preprocess_img, num_parallel_calls = tf.data.AUTOTUNE)
test_data = test_data.batch(batch_size = 32).prefetch(buffer_size = tf.data.AUTOTUNE)
train_data, test_data
(<PrefetchDataset element_spec=(TensorSpec(shape=(None, 224, 224, 3), dtype=tf.float32, name=None), TensorSpec(shape=(None,), dtype=tf.int64, name=None))>,
 <PrefetchDataset element_spec=(TensorSpec(shape=(None, 224, 224, 3), dtype=tf.float32, name=None), TensorSpec(shape=(None,), dtype=tf.int64, name=None))>)

TensorFlow maps this preprocessing function (preprocess_img) across our training dataset, then shuffles a number of elements, batches them together, and finally prefetches new batches while the model is looking through (finding patterns in) the current batch.

What happens when you use prefetching (faster) versus what happens when you don't use prefetching (slower). Source: Page 422 of Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow Book by Aurélien Géron.

Create modelling callbacks

We're going to create a couple of callbacks to help us while our model trains:

  • TensorBoard callback to log training results (so we can visualize them later)
  • ModelCheckpoint callback to save our model's progress during feature extraction training (so we can reuse the weights later)
from helper_functions import create_tensorboard_callback

# Create a Modelcheckpoint callback to save a model's progress during training
checkpoint_path= "model_checkpoints/cp.ckpt"
model_checkpoint = tf.keras.callbacks.ModelCheckpoint(checkpoint_path,
                                                      monitor = "val_acc",
                                                      save_best_only = True,
                                                      save_weights_only = True,
                                                      verbose = 0) # Dont print whether or not model is being saved
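For reference, the create_tensorboard_callback helper we imported above likely works along these lines (a hedged sketch, not the helper's exact source: it builds a timestamped log directory under dir_name/experiment_name and returns a TensorBoard callback pointed at it):

import datetime
import tensorflow as tf

def make_tensorboard_callback(dir_name, experiment_name):
  # Hypothetical stand-in for helper_functions.create_tensorboard_callback
  log_dir = dir_name + "/" + experiment_name + "/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
  tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir)
  print(f"Saving TensorBoard log files to: {log_dir}")
  return tensorboard_callback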

Set up mixed precision training

For a deeper understanding of mixed precision training, check out the TensorFlow guide: https://www.tensorflow.org/guide/mixed_precision

Mixed precision uses a combination of float32 and float16 datatypes to speed up model training on compatible GPUs.

from tensorflow.keras import mixed_precision
mixed_precision.set_global_policy("mixed_float16") # set global data policy to mixed precision
WARNING:tensorflow:Mixed precision compatibility check (mixed_float16): WARNING
Your GPU may run slowly with dtype policy mixed_float16 because it does not have compute capability of at least 7.0. Your GPU:
  Tesla K80, compute capability 3.7
See https://developer.nvidia.com/cuda-gpus for a list of GPUs and their compute capabilities.
If you will use compatible GPU(s) not attached to this host, e.g. by running a multi-worker model, you can ignore this warning. This message will only be logged once
!nvidia-smi
Tue Feb 22 07:56:39 2022       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03    Driver Version: 460.32.03    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla K80           Off  | 00000000:00:04.0 Off |                    0 |
| N/A   73C    P0    79W / 149W |    159MiB / 11441MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+
mixed_precision.global_policy()
<Policy "mixed_float16">

Build feature extraction model

from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing

# Create a base model
input_shape = (224,224,3)
base_model = tf.keras.applications.EfficientNetB0(include_top=False)
base_model.trainable = False

# Create a functional model
inputs = layers.Input(shape = input_shape, name = "input_layer")
# Note: EfficientNetBX models have rescaling built-in, but if your model doesn't, add a rescaling layer here, e.g.
# inputs = preprocessing.Rescaling(1./255)(inputs)

x = base_model(inputs, training = False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(len(classnames))(x)
outputs = layers.Activation("softmax", dtype = tf.float32, name = "softmax_float32")(x)
model = tf.keras.Model(inputs,outputs)

# Compile the model
model.compile(loss = "sparse_categorical_crossentropy",
              optimizer = tf.keras.optimizers.Adam(),
              metrics = ["accuracy"])
model.summary()
Model: "model_1"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 input_layer (InputLayer)    [(None, 224, 224, 3)]     0         
                                                                 
 efficientnetb0 (Functional)  (None, None, None, 1280)  4049571  
                                                                 
 global_average_pooling2d_2   (None, 1280)             0         
 (GlobalAveragePooling2D)                                        
                                                                 
 dense_1 (Dense)             (None, 101)               129381    
                                                                 
 softmax_float32 (Activation  (None, 101)              0         
 )                                                               
                                                                 
=================================================================
Total params: 4,178,952
Trainable params: 129,381
Non-trainable params: 4,049,571
_________________________________________________________________

Checking layer datatype policies (verifying mixed precision)

for layer in model.layers:
  print(layer.name, layer.trainable, layer.dtype, layer.dtype_policy)
input_layer True float32 <Policy "float32">
efficientnetb0 False float32 <Policy "mixed_float16">
global_average_pooling2d_2 True float32 <Policy "mixed_float16">
dense_1 True float32 <Policy "mixed_float16">
softmax_float32 True float32 <Policy "float32">

Going through the above output we notice:

  • layer.name : the human-readable name of a particular layer
  • layer.trainable : whether or not the layer is trainable
  • layer.dtype : the data type the layer stores its variables in
  • layer.dtype_policy : the data type policy the layer uses to compute on its variables
# Check the base model layers dtypes
for layer in model.layers[1].layers:
  print(layer.name, layer.trainable, layer.dtype, layer.dtype_policy)

input_4 False float32 <Policy "float32">
rescaling_3 False float32 <Policy "mixed_float16">
normalization_3 False float32 <Policy "mixed_float16">
stem_conv_pad False float32 <Policy "mixed_float16">
stem_conv False float32 <Policy "mixed_float16">
stem_bn False float32 <Policy "mixed_float16">
stem_activation False float32 <Policy "mixed_float16">
block1a_dwconv False float32 <Policy "mixed_float16">
block1a_bn False float32 <Policy "mixed_float16">
block1a_activation False float32 <Policy "mixed_float16">
block1a_se_squeeze False float32 <Policy "mixed_float16">
block1a_se_reshape False float32 <Policy "mixed_float16">
block1a_se_reduce False float32 <Policy "mixed_float16">
block1a_se_expand False float32 <Policy "mixed_float16">
block1a_se_excite False float32 <Policy "mixed_float16">
block1a_project_conv False float32 <Policy "mixed_float16">
block1a_project_bn False float32 <Policy "mixed_float16">
block2a_expand_conv False float32 <Policy "mixed_float16">
block2a_expand_bn False float32 <Policy "mixed_float16">
block2a_expand_activation False float32 <Policy "mixed_float16">
block2a_dwconv_pad False float32 <Policy "mixed_float16">
block2a_dwconv False float32 <Policy "mixed_float16">
block2a_bn False float32 <Policy "mixed_float16">
block2a_activation False float32 <Policy "mixed_float16">
block2a_se_squeeze False float32 <Policy "mixed_float16">
block2a_se_reshape False float32 <Policy "mixed_float16">
block2a_se_reduce False float32 <Policy "mixed_float16">
block2a_se_expand False float32 <Policy "mixed_float16">
block2a_se_excite False float32 <Policy "mixed_float16">
block2a_project_conv False float32 <Policy "mixed_float16">
block2a_project_bn False float32 <Policy "mixed_float16">
block2b_expand_conv False float32 <Policy "mixed_float16">
block2b_expand_bn False float32 <Policy "mixed_float16">
block2b_expand_activation False float32 <Policy "mixed_float16">
block2b_dwconv False float32 <Policy "mixed_float16">
block2b_bn False float32 <Policy "mixed_float16">
block2b_activation False float32 <Policy "mixed_float16">
block2b_se_squeeze False float32 <Policy "mixed_float16">
block2b_se_reshape False float32 <Policy "mixed_float16">
block2b_se_reduce False float32 <Policy "mixed_float16">
block2b_se_expand False float32 <Policy "mixed_float16">
block2b_se_excite False float32 <Policy "mixed_float16">
block2b_project_conv False float32 <Policy "mixed_float16">
block2b_project_bn False float32 <Policy "mixed_float16">
block2b_drop False float32 <Policy "mixed_float16">
block2b_add False float32 <Policy "mixed_float16">
block3a_expand_conv False float32 <Policy "mixed_float16">
block3a_expand_bn False float32 <Policy "mixed_float16">
block3a_expand_activation False float32 <Policy "mixed_float16">
block3a_dwconv_pad False float32 <Policy "mixed_float16">
block3a_dwconv False float32 <Policy "mixed_float16">
block3a_bn False float32 <Policy "mixed_float16">
block3a_activation False float32 <Policy "mixed_float16">
block3a_se_squeeze False float32 <Policy "mixed_float16">
block3a_se_reshape False float32 <Policy "mixed_float16">
block3a_se_reduce False float32 <Policy "mixed_float16">
block3a_se_expand False float32 <Policy "mixed_float16">
block3a_se_excite False float32 <Policy "mixed_float16">
block3a_project_conv False float32 <Policy "mixed_float16">
block3a_project_bn False float32 <Policy "mixed_float16">
block3b_expand_conv False float32 <Policy "mixed_float16">
block3b_expand_bn False float32 <Policy "mixed_float16">
block3b_expand_activation False float32 <Policy "mixed_float16">
block3b_dwconv False float32 <Policy "mixed_float16">
block3b_bn False float32 <Policy "mixed_float16">
block3b_activation False float32 <Policy "mixed_float16">
block3b_se_squeeze False float32 <Policy "mixed_float16">
block3b_se_reshape False float32 <Policy "mixed_float16">
block3b_se_reduce False float32 <Policy "mixed_float16">
block3b_se_expand False float32 <Policy "mixed_float16">
block3b_se_excite False float32 <Policy "mixed_float16">
block3b_project_conv False float32 <Policy "mixed_float16">
block3b_project_bn False float32 <Policy "mixed_float16">
block3b_drop False float32 <Policy "mixed_float16">
block3b_add False float32 <Policy "mixed_float16">
block4a_expand_conv False float32 <Policy "mixed_float16">
block4a_expand_bn False float32 <Policy "mixed_float16">
block4a_expand_activation False float32 <Policy "mixed_float16">
block4a_dwconv_pad False float32 <Policy "mixed_float16">
block4a_dwconv False float32 <Policy "mixed_float16">
block4a_bn False float32 <Policy "mixed_float16">
block4a_activation False float32 <Policy "mixed_float16">
block4a_se_squeeze False float32 <Policy "mixed_float16">
block4a_se_reshape False float32 <Policy "mixed_float16">
block4a_se_reduce False float32 <Policy "mixed_float16">
block4a_se_expand False float32 <Policy "mixed_float16">
block4a_se_excite False float32 <Policy "mixed_float16">
block4a_project_conv False float32 <Policy "mixed_float16">
block4a_project_bn False float32 <Policy "mixed_float16">
block4b_expand_conv False float32 <Policy "mixed_float16">
block4b_expand_bn False float32 <Policy "mixed_float16">
block4b_expand_activation False float32 <Policy "mixed_float16">
block4b_dwconv False float32 <Policy "mixed_float16">
block4b_bn False float32 <Policy "mixed_float16">
block4b_activation False float32 <Policy "mixed_float16">
block4b_se_squeeze False float32 <Policy "mixed_float16">
block4b_se_reshape False float32 <Policy "mixed_float16">
block4b_se_reduce False float32 <Policy "mixed_float16">
block4b_se_expand False float32 <Policy "mixed_float16">
block4b_se_excite False float32 <Policy "mixed_float16">
block4b_project_conv False float32 <Policy "mixed_float16">
block4b_project_bn False float32 <Policy "mixed_float16">
block4b_drop False float32 <Policy "mixed_float16">
block4b_add False float32 <Policy "mixed_float16">
block4c_expand_conv False float32 <Policy "mixed_float16">
block4c_expand_bn False float32 <Policy "mixed_float16">
block4c_expand_activation False float32 <Policy "mixed_float16">
block4c_dwconv False float32 <Policy "mixed_float16">
block4c_bn False float32 <Policy "mixed_float16">
block4c_activation False float32 <Policy "mixed_float16">
block4c_se_squeeze False float32 <Policy "mixed_float16">
block4c_se_reshape False float32 <Policy "mixed_float16">
block4c_se_reduce False float32 <Policy "mixed_float16">
block4c_se_expand False float32 <Policy "mixed_float16">
block4c_se_excite False float32 <Policy "mixed_float16">
block4c_project_conv False float32 <Policy "mixed_float16">
block4c_project_bn False float32 <Policy "mixed_float16">
block4c_drop False float32 <Policy "mixed_float16">
block4c_add False float32 <Policy "mixed_float16">
block5a_expand_conv False float32 <Policy "mixed_float16">
block5a_expand_bn False float32 <Policy "mixed_float16">
block5a_expand_activation False float32 <Policy "mixed_float16">
block5a_dwconv False float32 <Policy "mixed_float16">
block5a_bn False float32 <Policy "mixed_float16">
block5a_activation False float32 <Policy "mixed_float16">
block5a_se_squeeze False float32 <Policy "mixed_float16">
block5a_se_reshape False float32 <Policy "mixed_float16">
block5a_se_reduce False float32 <Policy "mixed_float16">
block5a_se_expand False float32 <Policy "mixed_float16">
block5a_se_excite False float32 <Policy "mixed_float16">
block5a_project_conv False float32 <Policy "mixed_float16">
block5a_project_bn False float32 <Policy "mixed_float16">
block5b_expand_conv False float32 <Policy "mixed_float16">
block5b_expand_bn False float32 <Policy "mixed_float16">
block5b_expand_activation False float32 <Policy "mixed_float16">
block5b_dwconv False float32 <Policy "mixed_float16">
block5b_bn False float32 <Policy "mixed_float16">
block5b_activation False float32 <Policy "mixed_float16">
block5b_se_squeeze False float32 <Policy "mixed_float16">
block5b_se_reshape False float32 <Policy "mixed_float16">
block5b_se_reduce False float32 <Policy "mixed_float16">
block5b_se_expand False float32 <Policy "mixed_float16">
block5b_se_excite False float32 <Policy "mixed_float16">
block5b_project_conv False float32 <Policy "mixed_float16">
block5b_project_bn False float32 <Policy "mixed_float16">
block5b_drop False float32 <Policy "mixed_float16">
block5b_add False float32 <Policy "mixed_float16">
block5c_expand_conv False float32 <Policy "mixed_float16">
block5c_expand_bn False float32 <Policy "mixed_float16">
block5c_expand_activation False float32 <Policy "mixed_float16">
block5c_dwconv False float32 <Policy "mixed_float16">
block5c_bn False float32 <Policy "mixed_float16">
block5c_activation False float32 <Policy "mixed_float16">
block5c_se_squeeze False float32 <Policy "mixed_float16">
block5c_se_reshape False float32 <Policy "mixed_float16">
block5c_se_reduce False float32 <Policy "mixed_float16">
block5c_se_expand False float32 <Policy "mixed_float16">
block5c_se_excite False float32 <Policy "mixed_float16">
block5c_project_conv False float32 <Policy "mixed_float16">
block5c_project_bn False float32 <Policy "mixed_float16">
block5c_drop False float32 <Policy "mixed_float16">
block5c_add False float32 <Policy "mixed_float16">
block6a_expand_conv False float32 <Policy "mixed_float16">
block6a_expand_bn False float32 <Policy "mixed_float16">
block6a_expand_activation False float32 <Policy "mixed_float16">
block6a_dwconv_pad False float32 <Policy "mixed_float16">
block6a_dwconv False float32 <Policy "mixed_float16">
block6a_bn False float32 <Policy "mixed_float16">
block6a_activation False float32 <Policy "mixed_float16">
block6a_se_squeeze False float32 <Policy "mixed_float16">
block6a_se_reshape False float32 <Policy "mixed_float16">
block6a_se_reduce False float32 <Policy "mixed_float16">
block6a_se_expand False float32 <Policy "mixed_float16">
block6a_se_excite False float32 <Policy "mixed_float16">
block6a_project_conv False float32 <Policy "mixed_float16">
block6a_project_bn False float32 <Policy "mixed_float16">
block6b_expand_conv False float32 <Policy "mixed_float16">
block6b_expand_bn False float32 <Policy "mixed_float16">
block6b_expand_activation False float32 <Policy "mixed_float16">
block6b_dwconv False float32 <Policy "mixed_float16">
block6b_bn False float32 <Policy "mixed_float16">
block6b_activation False float32 <Policy "mixed_float16">
block6b_se_squeeze False float32 <Policy "mixed_float16">
block6b_se_reshape False float32 <Policy "mixed_float16">
block6b_se_reduce False float32 <Policy "mixed_float16">
block6b_se_expand False float32 <Policy "mixed_float16">
block6b_se_excite False float32 <Policy "mixed_float16">
block6b_project_conv False float32 <Policy "mixed_float16">
block6b_project_bn False float32 <Policy "mixed_float16">
block6b_drop False float32 <Policy "mixed_float16">
block6b_add False float32 <Policy "mixed_float16">
block6c_expand_conv False float32 <Policy "mixed_float16">
block6c_expand_bn False float32 <Policy "mixed_float16">
block6c_expand_activation False float32 <Policy "mixed_float16">
block6c_dwconv False float32 <Policy "mixed_float16">
block6c_bn False float32 <Policy "mixed_float16">
block6c_activation False float32 <Policy "mixed_float16">
block6c_se_squeeze False float32 <Policy "mixed_float16">
block6c_se_reshape False float32 <Policy "mixed_float16">
block6c_se_reduce False float32 <Policy "mixed_float16">
block6c_se_expand False float32 <Policy "mixed_float16">
block6c_se_excite False float32 <Policy "mixed_float16">
block6c_project_conv False float32 <Policy "mixed_float16">
block6c_project_bn False float32 <Policy "mixed_float16">
block6c_drop False float32 <Policy "mixed_float16">
block6c_add False float32 <Policy "mixed_float16">
block6d_expand_conv False float32 <Policy "mixed_float16">
block6d_expand_bn False float32 <Policy "mixed_float16">
block6d_expand_activation False float32 <Policy "mixed_float16">
block6d_dwconv False float32 <Policy "mixed_float16">
block6d_bn False float32 <Policy "mixed_float16">
block6d_activation False float32 <Policy "mixed_float16">
block6d_se_squeeze False float32 <Policy "mixed_float16">
block6d_se_reshape False float32 <Policy "mixed_float16">
block6d_se_reduce False float32 <Policy "mixed_float16">
block6d_se_expand False float32 <Policy "mixed_float16">
block6d_se_excite False float32 <Policy "mixed_float16">
block6d_project_conv False float32 <Policy "mixed_float16">
block6d_project_bn False float32 <Policy "mixed_float16">
block6d_drop False float32 <Policy "mixed_float16">
block6d_add False float32 <Policy "mixed_float16">
block7a_expand_conv False float32 <Policy "mixed_float16">
block7a_expand_bn False float32 <Policy "mixed_float16">
block7a_expand_activation False float32 <Policy "mixed_float16">
block7a_dwconv False float32 <Policy "mixed_float16">
block7a_bn False float32 <Policy "mixed_float16">
block7a_activation False float32 <Policy "mixed_float16">
block7a_se_squeeze False float32 <Policy "mixed_float16">
block7a_se_reshape False float32 <Policy "mixed_float16">
block7a_se_reduce False float32 <Policy "mixed_float16">
block7a_se_expand False float32 <Policy "mixed_float16">
block7a_se_excite False float32 <Policy "mixed_float16">
block7a_project_conv False float32 <Policy "mixed_float16">
block7a_project_bn False float32 <Policy "mixed_float16">
top_conv False float32 <Policy "mixed_float16">
top_bn False float32 <Policy "mixed_float16">
top_activation False float32 <Policy "mixed_float16">

Fit the feature extraction model

If our goal is to fine-tune a pretrained model, the general order of doing things is:

  1. Build a feature extraction model (train a couple of output layers with the base layers frozen)
  2. Fine-tune some of the frozen layers
# Fit the model with callbacks
history_101_food_classes_feature_extract = model.fit(train_data, 
                                                     epochs=3,
                                                     steps_per_epoch=len(train_data),
                                                     validation_data=test_data,
                                                     validation_steps=int(0.15 * len(test_data)),
                                                     callbacks=[create_tensorboard_callback("training_logs", 
                                                                                            "efficientnetb0_101_classes_all_data_feature_extract"),
                                                                model_checkpoint])
Saving TensorBoard log files to: training_logs/efficientnetb0_101_classes_all_data_feature_extract/20220222-094838
Epoch 1/3
2368/2368 [==============================] - 309s 119ms/step - loss: 1.8179 - accuracy: 0.5596 - val_loss: 1.2317 - val_accuracy: 0.6756
Epoch 2/3
2368/2368 [==============================] - 264s 111ms/step - loss: 1.2942 - accuracy: 0.6668 - val_loss: 1.1215 - val_accuracy: 0.7050
Epoch 3/3
2368/2368 [==============================] - 272s 114ms/step - loss: 1.1436 - accuracy: 0.7034 - val_loss: 1.0969 - val_accuracy: 0.7050

Nice, looks like our feature extraction model is performing pretty well. How about we evaluate it on the whole test dataset?

results_feature_extract_model = model.evaluate(test_data)
results_feature_extract_model
790/790 [==============================] - 90s 113ms/step - loss: 1.0927 - accuracy: 0.7059
[1.092657208442688, 0.7059009671211243]
!mkdir -p saved_model
# Save the feature extraction model for later use
model.save("saved_model/food_vision_big")
INFO:tensorflow:Assets written to: saved_model/food_vision_big/assets
loaded_model = tf.keras.models.load_model('saved_model/food_vision_big')
# Check the layers in the base model and see what dtype policy they're using
for layer in base_model.layers:
  print(layer.name, layer.trainable, layer.dtype, layer.dtype_policy)

input_1 False float32 <Policy "float32">
rescaling False float32 <Policy "mixed_float16">
normalization False float32 <Policy "mixed_float16">
stem_conv_pad False float32 <Policy "mixed_float16">
stem_conv False float32 <Policy "mixed_float16">
stem_bn False float32 <Policy "mixed_float16">
stem_activation False float32 <Policy "mixed_float16">
block1a_dwconv False float32 <Policy "mixed_float16">
block1a_bn False float32 <Policy "mixed_float16">
block1a_activation False float32 <Policy "mixed_float16">
block1a_se_squeeze False float32 <Policy "mixed_float16">
block1a_se_reshape False float32 <Policy "mixed_float16">
block1a_se_reduce False float32 <Policy "mixed_float16">
block1a_se_expand False float32 <Policy "mixed_float16">
block1a_se_excite False float32 <Policy "mixed_float16">
block1a_project_conv False float32 <Policy "mixed_float16">
block1a_project_bn False float32 <Policy "mixed_float16">
block2a_expand_conv False float32 <Policy "mixed_float16">
block2a_expand_bn False float32 <Policy "mixed_float16">
block2a_expand_activation False float32 <Policy "mixed_float16">
block2a_dwconv_pad False float32 <Policy "mixed_float16">
block2a_dwconv False float32 <Policy "mixed_float16">
block2a_bn False float32 <Policy "mixed_float16">
block2a_activation False float32 <Policy "mixed_float16">
block2a_se_squeeze False float32 <Policy "mixed_float16">
block2a_se_reshape False float32 <Policy "mixed_float16">
block2a_se_reduce False float32 <Policy "mixed_float16">
block2a_se_expand False float32 <Policy "mixed_float16">
block2a_se_excite False float32 <Policy "mixed_float16">
block2a_project_conv False float32 <Policy "mixed_float16">
block2a_project_bn False float32 <Policy "mixed_float16">
block2b_expand_conv False float32 <Policy "mixed_float16">
block2b_expand_bn False float32 <Policy "mixed_float16">
block2b_expand_activation False float32 <Policy "mixed_float16">
block2b_dwconv False float32 <Policy "mixed_float16">
block2b_bn False float32 <Policy "mixed_float16">
block2b_activation False float32 <Policy "mixed_float16">
block2b_se_squeeze False float32 <Policy "mixed_float16">
block2b_se_reshape False float32 <Policy "mixed_float16">
block2b_se_reduce False float32 <Policy "mixed_float16">
block2b_se_expand False float32 <Policy "mixed_float16">
block2b_se_excite False float32 <Policy "mixed_float16">
block2b_project_conv False float32 <Policy "mixed_float16">
block2b_project_bn False float32 <Policy "mixed_float16">
block2b_drop False float32 <Policy "mixed_float16">
block2b_add False float32 <Policy "mixed_float16">
block3a_expand_conv False float32 <Policy "mixed_float16">
block3a_expand_bn False float32 <Policy "mixed_float16">
block3a_expand_activation False float32 <Policy "mixed_float16">
block3a_dwconv_pad False float32 <Policy "mixed_float16">
block3a_dwconv False float32 <Policy "mixed_float16">
block3a_bn False float32 <Policy "mixed_float16">
block3a_activation False float32 <Policy "mixed_float16">
block3a_se_squeeze False float32 <Policy "mixed_float16">
block3a_se_reshape False float32 <Policy "mixed_float16">
block3a_se_reduce False float32 <Policy "mixed_float16">
block3a_se_expand False float32 <Policy "mixed_float16">
block3a_se_excite False float32 <Policy "mixed_float16">
block3a_project_conv False float32 <Policy "mixed_float16">
block3a_project_bn False float32 <Policy "mixed_float16">
block3b_expand_conv False float32 <Policy "mixed_float16">
block3b_expand_bn False float32 <Policy "mixed_float16">
block3b_expand_activation False float32 <Policy "mixed_float16">
block3b_dwconv False float32 <Policy "mixed_float16">
block3b_bn False float32 <Policy "mixed_float16">
block3b_activation False float32 <Policy "mixed_float16">
block3b_se_squeeze False float32 <Policy "mixed_float16">
block3b_se_reshape False float32 <Policy "mixed_float16">
block3b_se_reduce False float32 <Policy "mixed_float16">
block3b_se_expand False float32 <Policy "mixed_float16">
block3b_se_excite False float32 <Policy "mixed_float16">
block3b_project_conv False float32 <Policy "mixed_float16">
block3b_project_bn False float32 <Policy "mixed_float16">
block3b_drop False float32 <Policy "mixed_float16">
block3b_add False float32 <Policy "mixed_float16">
block4a_expand_conv False float32 <Policy "mixed_float16">
block4a_expand_bn False float32 <Policy "mixed_float16">
block4a_expand_activation False float32 <Policy "mixed_float16">
block4a_dwconv_pad False float32 <Policy "mixed_float16">
block4a_dwconv False float32 <Policy "mixed_float16">
block4a_bn False float32 <Policy "mixed_float16">
block4a_activation False float32 <Policy "mixed_float16">
block4a_se_squeeze False float32 <Policy "mixed_float16">
block4a_se_reshape False float32 <Policy "mixed_float16">
block4a_se_reduce False float32 <Policy "mixed_float16">
block4a_se_expand False float32 <Policy "mixed_float16">
block4a_se_excite False float32 <Policy "mixed_float16">
block4a_project_conv False float32 <Policy "mixed_float16">
block4a_project_bn False float32 <Policy "mixed_float16">
block4b_expand_conv False float32 <Policy "mixed_float16">
block4b_expand_bn False float32 <Policy "mixed_float16">
block4b_expand_activation False float32 <Policy "mixed_float16">
block4b_dwconv False float32 <Policy "mixed_float16">
block4b_bn False float32 <Policy "mixed_float16">
block4b_activation False float32 <Policy "mixed_float16">
block4b_se_squeeze False float32 <Policy "mixed_float16">
block4b_se_reshape False float32 <Policy "mixed_float16">
block4b_se_reduce False float32 <Policy "mixed_float16">
block4b_se_expand False float32 <Policy "mixed_float16">
block4b_se_excite False float32 <Policy "mixed_float16">
block4b_project_conv False float32 <Policy "mixed_float16">
block4b_project_bn False float32 <Policy "mixed_float16">
block4b_drop False float32 <Policy "mixed_float16">
block4b_add False float32 <Policy "mixed_float16">
block4c_expand_conv False float32 <Policy "mixed_float16">
block4c_expand_bn False float32 <Policy "mixed_float16">
block4c_expand_activation False float32 <Policy "mixed_float16">
block4c_dwconv False float32 <Policy "mixed_float16">
block4c_bn False float32 <Policy "mixed_float16">
block4c_activation False float32 <Policy "mixed_float16">
block4c_se_squeeze False float32 <Policy "mixed_float16">
block4c_se_reshape False float32 <Policy "mixed_float16">
block4c_se_reduce False float32 <Policy "mixed_float16">
block4c_se_expand False float32 <Policy "mixed_float16">
block4c_se_excite False float32 <Policy "mixed_float16">
block4c_project_conv False float32 <Policy "mixed_float16">
block4c_project_bn False float32 <Policy "mixed_float16">
block4c_drop False float32 <Policy "mixed_float16">
block4c_add False float32 <Policy "mixed_float16">
block5a_expand_conv False float32 <Policy "mixed_float16">
block5a_expand_bn False float32 <Policy "mixed_float16">
block5a_expand_activation False float32 <Policy "mixed_float16">
block5a_dwconv False float32 <Policy "mixed_float16">
block5a_bn False float32 <Policy "mixed_float16">
block5a_activation False float32 <Policy "mixed_float16">
block5a_se_squeeze False float32 <Policy "mixed_float16">
block5a_se_reshape False float32 <Policy "mixed_float16">
block5a_se_reduce False float32 <Policy "mixed_float16">
block5a_se_expand False float32 <Policy "mixed_float16">
block5a_se_excite False float32 <Policy "mixed_float16">
block5a_project_conv False float32 <Policy "mixed_float16">
block5a_project_bn False float32 <Policy "mixed_float16">
block5b_expand_conv False float32 <Policy "mixed_float16">
block5b_expand_bn False float32 <Policy "mixed_float16">
block5b_expand_activation False float32 <Policy "mixed_float16">
block5b_dwconv False float32 <Policy "mixed_float16">
block5b_bn False float32 <Policy "mixed_float16">
block5b_activation False float32 <Policy "mixed_float16">
block5b_se_squeeze False float32 <Policy "mixed_float16">
block5b_se_reshape False float32 <Policy "mixed_float16">
block5b_se_reduce False float32 <Policy "mixed_float16">
block5b_se_expand False float32 <Policy "mixed_float16">
block5b_se_excite False float32 <Policy "mixed_float16">
block5b_project_conv False float32 <Policy "mixed_float16">
block5b_project_bn False float32 <Policy "mixed_float16">
block5b_drop False float32 <Policy "mixed_float16">
block5b_add False float32 <Policy "mixed_float16">
block5c_expand_conv False float32 <Policy "mixed_float16">
block5c_expand_bn False float32 <Policy "mixed_float16">
block5c_expand_activation False float32 <Policy "mixed_float16">
block5c_dwconv False float32 <Policy "mixed_float16">
block5c_bn False float32 <Policy "mixed_float16">
block5c_activation False float32 <Policy "mixed_float16">
block5c_se_squeeze False float32 <Policy "mixed_float16">
block5c_se_reshape False float32 <Policy "mixed_float16">
block5c_se_reduce False float32 <Policy "mixed_float16">
block5c_se_expand False float32 <Policy "mixed_float16">
block5c_se_excite False float32 <Policy "mixed_float16">
block5c_project_conv False float32 <Policy "mixed_float16">
block5c_project_bn False float32 <Policy "mixed_float16">
block5c_drop False float32 <Policy "mixed_float16">
block5c_add False float32 <Policy "mixed_float16">
block6a_expand_conv False float32 <Policy "mixed_float16">
block6a_expand_bn False float32 <Policy "mixed_float16">
block6a_expand_activation False float32 <Policy "mixed_float16">
block6a_dwconv_pad False float32 <Policy "mixed_float16">
block6a_dwconv False float32 <Policy "mixed_float16">
block6a_bn False float32 <Policy "mixed_float16">
block6a_activation False float32 <Policy "mixed_float16">
block6a_se_squeeze False float32 <Policy "mixed_float16">
block6a_se_reshape False float32 <Policy "mixed_float16">
block6a_se_reduce False float32 <Policy "mixed_float16">
block6a_se_expand False float32 <Policy "mixed_float16">
block6a_se_excite False float32 <Policy "mixed_float16">
block6a_project_conv False float32 <Policy "mixed_float16">
block6a_project_bn False float32 <Policy "mixed_float16">
block6b_expand_conv False float32 <Policy "mixed_float16">
block6b_expand_bn False float32 <Policy "mixed_float16">
block6b_expand_activation False float32 <Policy "mixed_float16">
block6b_dwconv False float32 <Policy "mixed_float16">
block6b_bn False float32 <Policy "mixed_float16">
block6b_activation False float32 <Policy "mixed_float16">
block6b_se_squeeze False float32 <Policy "mixed_float16">
block6b_se_reshape False float32 <Policy "mixed_float16">
block6b_se_reduce False float32 <Policy "mixed_float16">
block6b_se_expand False float32 <Policy "mixed_float16">
block6b_se_excite False float32 <Policy "mixed_float16">
block6b_project_conv False float32 <Policy "mixed_float16">
block6b_project_bn False float32 <Policy "mixed_float16">
block6b_drop False float32 <Policy "mixed_float16">
block6b_add False float32 <Policy "mixed_float16">
block6c_expand_conv False float32 <Policy "mixed_float16">
block6c_expand_bn False float32 <Policy "mixed_float16">
block6c_expand_activation False float32 <Policy "mixed_float16">
block6c_dwconv False float32 <Policy "mixed_float16">
block6c_bn False float32 <Policy "mixed_float16">
block6c_activation False float32 <Policy "mixed_float16">
block6c_se_squeeze False float32 <Policy "mixed_float16">
block6c_se_reshape False float32 <Policy "mixed_float16">
block6c_se_reduce False float32 <Policy "mixed_float16">
block6c_se_expand False float32 <Policy "mixed_float16">
block6c_se_excite False float32 <Policy "mixed_float16">
block6c_project_conv False float32 <Policy "mixed_float16">
block6c_project_bn False float32 <Policy "mixed_float16">
block6c_drop False float32 <Policy "mixed_float16">
block6c_add False float32 <Policy "mixed_float16">
block6d_expand_conv False float32 <Policy "mixed_float16">
block6d_expand_bn False float32 <Policy "mixed_float16">
block6d_expand_activation False float32 <Policy "mixed_float16">
block6d_dwconv False float32 <Policy "mixed_float16">
block6d_bn False float32 <Policy "mixed_float16">
block6d_activation False float32 <Policy "mixed_float16">
block6d_se_squeeze False float32 <Policy "mixed_float16">
block6d_se_reshape False float32 <Policy "mixed_float16">
block6d_se_reduce False float32 <Policy "mixed_float16">
block6d_se_expand False float32 <Policy "mixed_float16">
block6d_se_excite False float32 <Policy "mixed_float16">
block6d_project_conv False float32 <Policy "mixed_float16">
block6d_project_bn False float32 <Policy "mixed_float16">
block6d_drop False float32 <Policy "mixed_float16">
block6d_add False float32 <Policy "mixed_float16">
block7a_expand_conv False float32 <Policy "mixed_float16">
block7a_expand_bn False float32 <Policy "mixed_float16">
block7a_expand_activation False float32 <Policy "mixed_float16">
block7a_dwconv False float32 <Policy "mixed_float16">
block7a_bn False float32 <Policy "mixed_float16">
block7a_activation False float32 <Policy "mixed_float16">
block7a_se_squeeze False float32 <Policy "mixed_float16">
block7a_se_reshape False float32 <Policy "mixed_float16">
block7a_se_reduce False float32 <Policy "mixed_float16">
block7a_se_expand False float32 <Policy "mixed_float16">
block7a_se_excite False float32 <Policy "mixed_float16">
block7a_project_conv False float32 <Policy "mixed_float16">
block7a_project_bn False float32 <Policy "mixed_float16">
top_conv False float32 <Policy "mixed_float16">
top_bn False float32 <Policy "mixed_float16">
top_activation False float32 <Policy "mixed_float16">
results_loaded_model = loaded_model.evaluate(test_data)
790/790 [==============================] - 99s 125ms/step - loss: 1.0927 - accuracy: 0.7059
# Note: this will only work if you've instantiated the results variables above
import numpy as np
assert np.isclose(results_feature_extract_model, results_loaded_model).all()

Milestone Project 1: Food Vision model

TODO:

  • Fine-tuning our feature extraction model to beat the DeepFood paper
  • Evaluating our model results on TensorBoard
  • Evaluating our model results by making and plotting predictions

TODO: Preparing our model's layers for fine-tuning

Next: Fine-tune the feature extraction model to beat the DeepFood paper.
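
As a rough outline, that fine-tuning step will look something like the sketch below (a minimal sketch, not the exact course settings: it assumes we unfreeze the whole base model, recompile with a 10x lower learning rate, and train for a few more epochs):

# Unfreeze all layers of the base model (BatchNorm layers still run in inference
# mode because we called base_model(inputs, training=False) when building the model)
base_model.trainable = True

# Recompile with a lower learning rate so the pretrained weights aren't destroyed
model.compile(loss = "sparse_categorical_crossentropy",
              optimizer = tf.keras.optimizers.Adam(learning_rate = 1e-4), # 10x lower than the Adam default
              metrics = ["accuracy"])

# Continue training from where the feature extraction model left off
history_fine_tune = model.fit(train_data,
                              epochs = 6, # illustrative: a few more epochs than feature extraction
                              initial_epoch = history_101_food_classes_feature_extract.epoch[-1],
                              validation_data = test_data,
                              validation_steps = int(0.15 * len(test_data)))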

Like all good cooking shows, I've saved a model I prepared earlier (the feature extraction model from above) to Google Storage.

You can download it to make sure you're using the same model as originally trained going forward.

!wget https://storage.googleapis.com/ztm_tf_course/food_vision/07_efficientnetb0_feature_extract_model_mixed_precision.zip
--2022-02-22 10:18:04--  https://storage.googleapis.com/ztm_tf_course/food_vision/07_efficientnetb0_feature_extract_model_mixed_precision.zip
Resolving storage.googleapis.com (storage.googleapis.com)... 74.125.69.128, 173.194.193.128, 173.194.194.128, ...
Connecting to storage.googleapis.com (storage.googleapis.com)|74.125.69.128|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 16976857 (16M) [application/zip]
Saving to: ‘07_efficientnetb0_feature_extract_model_mixed_precision.zip’

07_efficientnetb0_f 100%[===================>]  16.19M  65.1MB/s    in 0.2s    

2022-02-22 10:18:04 (65.1 MB/s) - ‘07_efficientnetb0_feature_extract_model_mixed_precision.zip’ saved [16976857/16976857]

!mkdir downloaded_gs_model # create new dir to store downloaded feature extraction model
!unzip 07_efficientnetb0_feature_extract_model_mixed_precision.zip -d downloaded_gs_model
Archive:  07_efficientnetb0_feature_extract_model_mixed_precision.zip
   creating: downloaded_gs_model/07_efficientnetb0_feature_extract_model_mixed_precision/
   creating: downloaded_gs_model/07_efficientnetb0_feature_extract_model_mixed_precision/variables/
  inflating: downloaded_gs_model/07_efficientnetb0_feature_extract_model_mixed_precision/variables/variables.data-00000-of-00001  
  inflating: downloaded_gs_model/07_efficientnetb0_feature_extract_model_mixed_precision/variables/variables.index  
  inflating: downloaded_gs_model/07_efficientnetb0_feature_extract_model_mixed_precision/saved_model.pb  
   creating: downloaded_gs_model/07_efficientnetb0_feature_extract_model_mixed_precision/assets/
# Load and evaluate downloaded GS model
tf.get_logger().setLevel('INFO') # adjust TensorFlow log verbosity
loaded_gs_model = tf.keras.models.load_model("downloaded_gs_model/07_efficientnetb0_feature_extract_model_mixed_precision")

WARNING:tensorflow:SavedModel saved prior to TF 2.5 detected when loading Keras model. Please ensure that you are saving the model with model.save() or tf.keras.models.save_model(), *NOT* tf.saved_model.save(). To confirm, there should be a file named "keras_metadata.pb" in the SavedModel directory.
WARNING:absl:Importing a function (__inference_block1a_activation_layer_call_and_return_conditional_losses_158253) with ops with unsaved custom gradients. Will likely fail if a gradient is requested.
WARNING:absl:Importing a function (__inference_block2a_activation_layer_call_and_return_conditional_losses_191539) with ops with unsaved custom gradients. Will likely fail if a gradient is requested.
WARNING:absl:Importing a function (__inference_block6d_expand_activation_layer_call_and_return_conditional_losses_196076) with ops with unsaved custom gradients. Will likely fail if a gradient is requested.
WARNING:absl:Importing a function (__inference_block6c_activation_layer_call_and_return_conditional_losses_195780) with ops with unsaved custom gradients. Will likely fail if a gradient is requested.
WARNING:absl:Importing a function (__inference_block6d_activation_layer_call_and_return_conditional_losses_196153) with ops with unsaved custom gradients. Will likely fail if a gradient is requested.
WARNING:absl:Importing a function (__inference_model_layer_call_and_return_conditional_losses_180010) with ops with unsaved custom gradients. Will likely fail if a gradient is requested.
WARNING:absl:Importing a function (__inference_stem_activation_layer_call_and_return_conditional_losses_191136) with ops with unsaved custom gradients. Will likely fail if a gradient is requested.
WARNING:absl:Importing a function (__inference_block4c_expand_activation_layer_call_and_return_conditional_losses_160354) with ops with unsaved custom gradients. Will likely fail if a gradient is requested.
WARNING:absl:Importing a function (__inference_block6c_expand_activation_layer_call_and_return_conditional_losses_195703) with ops with unsaved custom gradients. Will likely fail if a gradient is requested.
WARNING:absl:Importing a function (__inference_block3b_expand_activation_layer_call_and_return_conditional_losses_159392) with ops with unsaved custom gradients. Will likely fail if a gradient is requested.
WARNING:absl:Importing a function (__inference_block1a_activation_layer_call_and_return_conditional_losses_191213) with ops with unsaved custom gradients. Will likely fail if a gradient is requested.
WARNING:absl:Importing a function (__inference_block4c_se_reduce_layer_call_and_return_conditional_losses_193678) with ops with unsaved custom gradients. Will likely fail if a gradient is requested.
WARNING:absl:Importing a function (__inference_block5a_se_reduce_layer_call_and_return_conditional_losses_194051) with ops with unsaved custom gradients. Will likely fail if a gradient is requested.
WARNING:absl:Importing a function (__inference_block2b_expand_activation_layer_call_and_return_conditional_losses_158768) with ops with unsaved custom gradients. Will likely fail if a gradient is requested.
WARNING:absl:Importing a function (__inference_block2b_se_reduce_layer_call_and_return_conditional_losses_191907) with ops with unsaved custom gradients. Will likely fail if a gradient is requested.
(warnings truncated: loading the SavedModel prints the warning "Importing a function (...) with ops with unsaved custom gradients. Will likely fail if a gradient is requested." once for many of the EfficientNetB0 layers; the model still loads and is usable, as shown below.)
loaded_gs_model.summary()
Model: "model"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 input_layer (InputLayer)    [(None, 224, 224, 3)]     0         
                                                                 
 efficientnetb0 (Functional)  (None, None, None, 1280)  4049571  
                                                                 
 pooling_layer (GlobalAverag  (None, 1280)             0         
 ePooling2D)                                                     
                                                                 
 dense (Dense)               (None, 101)               129381    
                                                                 
 softmax_float32 (Activation  (None, 101)              0         
 )                                                               
                                                                 
=================================================================
Total params: 4,178,952
Trainable params: 129,381
Non-trainable params: 4,049,571
_________________________________________________________________
results_loaded_gs_model = loaded_gs_model.evaluate(test_data)
results_loaded_gs_model
790/790 [==============================] - 89s 112ms/step - loss: 1.0881 - accuracy: 0.7065
[1.0880852937698364, 0.7065346240997314]
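
As a sanity check, the loaded model's evaluation results should closely match the results the feature extraction model produced before it was saved. A minimal sketch, assuming the pre-save results were kept in a variable such as results_feature_extract_model (a hypothetical name here):

import numpy as np

# results_feature_extract_model is assumed to hold the evaluate() output from before saving;
# the loaded model's loss/accuracy should be (approximately) identical, barring tiny float differences
np.isclose(results_feature_extract_model, results_loaded_gs_model).all()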
for layer in loaded_gs_model.layers:
  layer.trainable = True # set all layers to trainable
  print(layer.name, layer.trainable, layer.dtype, layer.dtype_policy)
input_layer True float32 <Policy "float32">
efficientnetb0 True float32 <Policy "mixed_float16">
pooling_layer True float32 <Policy "mixed_float16">
dense True float32 <Policy "mixed_float16">
softmax_float32 True float32 <Policy "float32">
for layer in loaded_gs_model.layers:
  print(layer.name, layer.trainable, layer.dtype, layer.dtype_policy)
input_layer True float32 <Policy "float32">
efficientnetb0 True float32 <Policy "mixed_float16">
pooling_layer True float32 <Policy "mixed_float16">
dense True float32 <Policy "mixed_float16">
softmax_float32 True float32 <Policy "float32">
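
The dtype policies above show the loaded model kept its mixed precision setup: most layers use the "mixed_float16" policy while the final softmax stays in float32 for numerical stability. If in doubt, the global policy can also be checked directly, as in this minimal sketch:

from tensorflow.keras import mixed_precision

# Should print the "mixed_float16" policy if mixed precision was set up earlier
print(mixed_precision.global_policy())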

So, now all of our layers are trainable. We're going to fine-tune the model on the full dataset with every layer unfrozen and try to beat the results of the DeepFood paper (which reports around 77% top-1 accuracy).

# Monitor the val_loss and stop training if it doesn't improve for 3 epochs
EarlyStopping_callback = tf.keras.callbacks.EarlyStopping(monitor = "val_loss",
                                                                    patience = 3, verbose = 0)

# Create ModelCheckpoint callback to save best model during fine-tuning
# Save the best model only
# Monitor val_loss while training and save the best model (lowest val_loss)
checkpoint_path = "fine_tune_checkpoints/"
model_checkpoint = tf.keras.callbacks.ModelCheckpoint(checkpoint_path,
                                                      save_best_only=True,
                                                      monitor="val_loss")

# Creating learning rate reduction callback
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss",  
                                                 factor=0.2, # multiply the learning rate by 0.2 (reduce by 5x)
                                                 patience=2,
                                                 verbose=1, # print out when learning rate goes down 
                                                 min_lr=1e-7)
# Use the Adam optimizer with a learning rate 10x lower than its default (1e-4),
# which is the lr shown in the fine-tuning logs below

loaded_gs_model.compile(loss = "sparse_categorical_crossentropy",
                        optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4), # 10x lower than the Adam default of 1e-3
                        metrics = ["accuracy"])
# Use 100 epochs as the upper limit (EarlyStopping will cut training short if val_loss stops improving)
# Validate on 15% of the test_data
# Use the create_tensorboard_callback, ModelCheckpoint, EarlyStopping and ReduceLROnPlateau callbacks created earlier

history_101_food_classes_all_data_fine_tune = loaded_gs_model.fit(train_data,
                                                                  epochs = 100,
                                                                  steps_per_epoch = len(train_data),
                                                                  validation_data = test_data,
                                                                  validation_steps = int(0.15*len(test_data)),
                                                                  callbacks = [create_tensorboard_callback("training_logs", "efficientb0_101_classes_all_data_fine_tuning"),
                                                                               model_checkpoint,
                                                                               EarlyStopping_callback,
                                                                               reduce_lr])
Saving TensorBoard log files to: training_logs/efficientb0_101_classes_all_data_fine_tuning/20220222-114347
Epoch 1/100
2367/2368 [============================>.] - ETA: 0s - loss: 0.9892 - accuracy: 0.7442INFO:tensorflow:Assets written to: fine_tune_checkpoints/assets
INFO:tensorflow:Assets written to: fine_tune_checkpoints/assets
2368/2368 [==============================] - 341s 142ms/step - loss: 0.9892 - accuracy: 0.7443 - val_loss: 1.0841 - val_accuracy: 0.7074 - lr: 1.0000e-04
Epoch 2/100
2368/2368 [==============================] - 272s 114ms/step - loss: 0.9892 - accuracy: 0.7443 - val_loss: 1.0922 - val_accuracy: 0.7052 - lr: 1.0000e-04
Epoch 3/100
2367/2368 [============================>.] - ETA: 0s - loss: 0.9892 - accuracy: 0.7443
Epoch 3: ReduceLROnPlateau reducing learning rate to 1.9999999494757503e-05.
2368/2368 [==============================] - 271s 113ms/step - loss: 0.9892 - accuracy: 0.7443 - val_loss: 1.0863 - val_accuracy: 0.7071 - lr: 1.0000e-04
Epoch 4/100
2368/2368 [==============================] - 270s 113ms/step - loss: 0.9892 - accuracy: 0.7443 - val_loss: 1.0884 - val_accuracy: 0.7063 - lr: 2.0000e-05
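
Since the ModelCheckpoint callback saved the model with the lowest val_loss (epoch 1 in this run), that checkpoint could be reloaded and evaluated on the full test set. A minimal sketch, reusing checkpoint_path from above (best_fine_tuned_model is a hypothetical name):

# Load the best fine-tuned model saved during training (SavedModel format directory)
best_fine_tuned_model = tf.keras.models.load_model(checkpoint_path)

# Evaluate on the whole test dataset rather than the 15% used for validation
best_fine_tuned_model.evaluate(test_data)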

TODO: View training results on TensorBoard

Upload the model's training results to TensorBoard.dev so they can be viewed and shared.

!tensorboard dev upload --logdir ./training_logs \
   --name "Fine-tuning EfficientNetB0 on all Food101 Data" \
   --description "Training results for fine-tuning EfficientNetB0 on Food101 Data with learning rate 0.0001"
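
Once uploaded, the experiment gets a shareable TensorBoard.dev URL. Previously uploaded experiments can also be listed and deleted from the same CLI, as in this minimal sketch (the experiment ID is a placeholder):

# List all experiments uploaded under the logged-in account
!tensorboard dev list

# Delete an experiment by its ID (placeholder shown)
!tensorboard dev delete --experiment_id EXPERIMENT_ID_TO_DELETE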

TODO: Evaluate your trained model

Some ideas you might want to go through:

  1. Find the precision, recall and f1 scores for each class (all 101).
  2. Build a confusion matrix covering all 101 classes.
  3. Find your model's most wrong predictions (those with the highest prediction probability but a wrong predicted label).

Sketches for these appear later in this section, once the labels have been extracted correctly and the class names loaded.
pred_probs = loaded_gs_model.predict(test_data, verbose=1) # set verbosity to see how long it will take
790/790 [==============================] - 80s 98ms/step
len(pred_probs)
25250
pred_probs.shape
(25250, 101)
# How do they look?
pred_probs[:10]
print(f"Number of prediction probabilities for sample 0: {len(pred_probs[0])}")
print(f"What prediction probability sample 0 looks like:\n {pred_probs[0]}")
print(f"The class with the highest predicted probability by the model for sample 0: {pred_probs[0].argmax()}")
Number of prediction probabilities for sample 0: 101
What prediction probability sample 0 looks like:
 [1.4401069e-02 8.7248621e-04 6.5537915e-04 1.9259790e-04 2.6844227e-04
 1.4441166e-03 2.2685886e-03 1.0428986e-05 1.0808602e-01 3.4739010e-04
 2.4208028e-03 8.9741443e-05 4.9706534e-03 1.7937253e-03 3.0230531e-03
 3.9168997e-03 2.1527853e-04 1.1786536e-03 2.0797420e-03 1.7351459e-03
 8.2122732e-04 1.9372971e-04 2.7859115e-04 2.6635322e-04 2.4794212e-03
 2.4826851e-04 3.0876575e-02 2.7369595e-05 4.1981181e-03 7.9650054e-06
 2.6739572e-04 4.9948638e-05 1.4542394e-02 1.4254733e-05 2.2348419e-02
 1.3251959e-03 6.5145308e-05 6.4395950e-04 3.6477347e-04 2.8687369e-04
 8.4456497e-06 4.3317920e-04 1.9041091e-02 1.1983844e-03 6.6171189e-05
 1.5174084e-05 4.9374090e-04 8.0134459e-03 3.7708841e-04 1.0130905e-03
 7.9440768e-04 7.9650054e-06 7.5740227e-03 2.9540254e-04 9.9140896e-05
 9.2590140e-05 1.7641925e-03 1.1146712e-04 2.1067130e-05 4.4509084e-03
 9.9738396e-04 1.2657462e-03 1.3471788e-04 1.5055999e-05 1.0029457e-05
 5.9208577e-04 3.2054773e-03 5.1743595e-04 5.0249667e-04 5.2376992e-01
 7.4689458e-05 6.7354698e-04 4.6839113e-03 2.8519772e-04 8.5477560e-04
 4.5383235e-06 5.0249667e-04 6.2171364e-04 1.4843705e-02 6.2904228e-04
 6.9618232e-05 1.5533926e-05 5.1356256e-02 2.0260060e-05 2.4757916e-03
 1.6236639e-03 3.1749285e-05 9.9349557e-04 1.8844793e-06 9.6260263e-03
 1.9032566e-05 1.4824790e-04 8.1074642e-05 1.2244096e-03 4.0932561e-04
 3.2472243e-03 2.1824200e-04 1.2136953e-03 1.1524751e-03 2.3096017e-04
 9.6509956e-02]
The class with the highest predicted probability by the model for sample 0: 69
pred_classes = pred_probs.argmax(axis=1)

# How do the first 10 predicted classes look?
pred_classes[:10]
array([ 69,  57,  46,  69, 100,  57,  10,  17,  16,  22])
y_labels = []
for images, labels in test_data.unbatch(): # unbatch the test data and get images and labels
  y_labels.append(labels.numpy().argmax()) # NOTE: labels here are integer class indices (as_supervised=True), not one-hot, so .argmax() on a scalar always returns 0 -- see the note after the accuracy score below
y_labels[:10] # check what they look like (unshuffled)
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
len(y_labels)
25250
from sklearn.metrics import accuracy_score
sklearn_accuracy = accuracy_score(y_labels, pred_classes)
sklearn_accuracy
0.0057425742574257425
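
This accuracy (under 1%) is nowhere near the ~70.7% reported by model.evaluate() above, which points to a labelling bug rather than a bad model: because as_supervised=True returns integer labels (matching the sparse_categorical_crossentropy loss), calling .argmax() on each scalar label always returns 0, so y_labels ends up as all zeros. A minimal corrected sketch, appending the integer labels directly:

y_labels = []
for images, labels in test_data.unbatch(): # unbatch the test data to get individual labels
  y_labels.append(labels.numpy()) # labels are already integer class indices, so append them as-is

# Recompute accuracy against the predicted classes
sklearn_accuracy = accuracy_score(y_labels, pred_classes)

Provided the test pipeline isn't reshuffled between the predict() call and this loop, this accuracy should line up closely with the evaluate() result above.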
(train_data, test_data), ds_info = tfds.load(name="food101", # target dataset to get from TFDS
                                             split=["train", "validation"], # what splits of data should we get? note: not all datasets have train, valid, test
                                             shuffle_files=True, # shuffle files on download?
                                             as_supervised=True, # download data in tuple format (sample, label), e.g. (image, label)
                                             with_info=True) # include dataset metadata? if so, tfds.load() returns tuple (data, ds_info)
class_names = ds_info.features["label"].names
class_names[:10]
['apple_pie',
 'baby_back_ribs',
 'baklava',
 'beef_carpaccio',
 'beef_tartare',
 'beet_salad',
 'beignets',
 'bibimbap',
 'bread_pudding',
 'breakfast_burrito']
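
With human-readable class names available, the evaluation ideas from the TODO above can be sketched out. A minimal sketch, assuming y_labels has been rebuilt with the corrected (integer) labels and reusing pred_probs and pred_classes from earlier:

from sklearn.metrics import classification_report, confusion_matrix
import pandas as pd

# 1. Precision, recall and F1-score for each of the 101 classes
print(classification_report(y_labels, pred_classes, target_names=class_names))

# 2. Confusion matrix covering all 101 classes (a 101 x 101 array)
conf_mat = confusion_matrix(y_labels, pred_classes)

# 3. "Most wrong" predictions: wrong predictions made with the highest confidence
pred_df = pd.DataFrame({"y_true": y_labels,
                        "y_pred": pred_classes,
                        "pred_conf": pred_probs.max(axis=1),
                        "y_true_classname": [class_names[i] for i in y_labels],
                        "y_pred_classname": [class_names[i] for i in pred_classes]})
most_wrong = pred_df[pred_df["y_true"] != pred_df["y_pred"]].sort_values("pred_conf", ascending=False)
most_wrong.head(10)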