Introduction to Natural Language Processing (NLP) Fundamentals in TensorFlow

NLP has the goal of deriving information out of natural language (this could be sequences of text or speech).

Another common term for NLP problems is sequence models / sequence-to-sequence problems.

Some common applications of NLP:

  • Classification of articles into labels
  • Text Generation
  • Machine Translation
  • Voice Assistants.

All of these are also referred to as sequence problems.

Different types of sequence problems include one-to-one, one-to-many, many-to-one and many-to-many (e.g. classifying a Tweet as disaster/not disaster is a many-to-one problem).

This Notebook covers:

  • Downloading and preparing a text dataset
  • How to prepare text data for modelling (tokenization and embedding)
  • Setting up multiple modelling experiments with recurrent neural networks (RNNs)
  • Building a text feature extraction model using TensorFlow Hub
  • Finding the most wrong prediction examples
  • Using a model we've built to make predictions on text from the wild.

What is a Recurrent Neural Network?

A recurrent neural network (RNN) is a neural network designed for sequential data: it processes a sequence one step at a time while passing a hidden state from one step to the next, so the output at each step can depend on the inputs that came before it.
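
The sketch below (not from the original notebook, shapes are illustrative only) shows this idea with tf.keras.layers.SimpleRNN on dummy data:

# Minimal sketch: a SimpleRNN layer processing a batch of dummy sequences
import tensorflow as tf

batch_size, timesteps, features = 2, 8, 4 # dummy dimensions
dummy_sequences = tf.random.normal((batch_size, timesteps, features))

# The RNN carries a hidden state from one timestep to the next, so later
# outputs can depend on earlier inputs in the sequence
simple_rnn = tf.keras.layers.SimpleRNN(units=16) # returns the final hidden state by default
print(simple_rnn(dummy_sequences).shape) # (2, 16)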

Architecture of an RNN:

| Hyperparameter/Layer type | What does it do? | Typical values |
| --- | --- | --- |
| Input text(s) | Target texts/sequences you'd like to discover patterns in | Whatever you can represent as text or a sequence |
| Input layer | Takes in a target sequence | input_shape = [batch_size, embedding_size] or [batch_size, sequence_shape] |
| Text vectorization layer | Maps input sequences to numbers | Multiple, can create with tf.keras.layers.experimental.preprocessing.TextVectorization |
| Embedding | Turns the mapping of text vectors into an embedding matrix (a representation of how words relate) | Multiple, can create with tf.keras.layers.Embedding |
| RNN cell(s) | Finds patterns in sequences | SimpleRNN, LSTM, GRU |
| Hidden activation | Adds non-linearity to learned features (non-straight lines) | Usually tanh (hyperbolic tangent), tf.keras.activations.tanh |
| Pooling layer | Reduces the dimensionality of learned sequence features (usually used in Conv1D models) | Average (tf.keras.layers.GlobalAveragePooling1D) or max (tf.keras.layers.GlobalMaxPool1D) |
| Fully connected layer | Further refines learned features from recurrent layers | tf.keras.layers.Dense |
| Output layer | Takes learned features and outputs them in the shape of the target labels | output_shape = [number_of_classes] (e.g. 2 for the Disaster/Not Disaster example) |
| Output activation | Adds non-linearity to the output layer | tf.keras.activations.sigmoid (binary classification) or tf.keras.activations.softmax (multi-class) |

Example TensorFlow code for RNN Model:

# 1. Create an LSTM model (text_vectorizer, embedding, train_sentences and
# train_labels are created later in this notebook)
import tensorflow as tf
from tensorflow.keras import layers

inputs = layers.Input(shape=(1,), dtype="string")
x = text_vectorizer(inputs) # turn the input sequence into numbers
x = embedding(x) # create an embedding of the numberized inputs
x = layers.LSTM(64, activation="tanh")(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs, name="LSTM_model")

# 2. Compile the model
model.compile(loss=tf.keras.losses.BinaryCrossentropy(),
              optimizer=tf.keras.optimizers.Adam(),
              metrics=["accuracy"])

# 3. Fit the model
history = model.fit(train_sentences, train_labels, epochs=5)
# Check which GPU we have access to
!nvidia-smi -L
GPU 0: Tesla K80 (UUID: GPU-b8f44027-0c2b-aba8-bdea-b2db6b2c6f28)

Get Helper Functions

!wget https://raw.githubusercontent.com/mrdbourke/tensorflow-deep-learning/main/extras/helper_functions.py

# Import a series of helper functions for the notebook
from helper_functions import unzip_data, create_tensorboard_callback, plot_loss_curves, compare_historys
--2022-03-11 07:41:53--  https://raw.githubusercontent.com/mrdbourke/tensorflow-deep-learning/main/extras/helper_functions.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10246 (10K) [text/plain]
Saving to: ‘helper_functions.py.1’

helper_functions.py 100%[===================>]  10.01K  --.-KB/s    in 0s      

2022-03-11 07:41:53 (74.3 MB/s) - ‘helper_functions.py.1’ saved [10246/10246]

Get Text Dataset

The dataset we're going to be using is Kaggle's introduction to NLP dataset (text samples of Tweets labelled as disaster or not disaster).

Source: Natural Language Processing with Disaster Tweets

!wget https://storage.googleapis.com/ztm_tf_course/nlp_getting_started.zip

# Unzip the dataset
unzip_data("nlp_getting_started.zip")
--2022-03-11 07:41:53--  https://storage.googleapis.com/ztm_tf_course/nlp_getting_started.zip
Resolving storage.googleapis.com (storage.googleapis.com)... 74.125.133.128, 108.177.15.128, 173.194.76.128, ...
Connecting to storage.googleapis.com (storage.googleapis.com)|74.125.133.128|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 607343 (593K) [application/zip]
Saving to: ‘nlp_getting_started.zip.1’

nlp_getting_started 100%[===================>] 593.11K  --.-KB/s    in 0.007s  

2022-03-11 07:41:53 (85.5 MB/s) - ‘nlp_getting_started.zip.1’ saved [607343/607343]

Visualizing a text dataset

To visualize our text samples, we first have to read them in. One way to do this is with pandas.

import pandas as pd
train_df = pd.read_csv("train.csv")
test_df = pd.read_csv("test.csv")
train_df.head()
id keyword location text target
0 1 NaN NaN Our Deeds are the Reason of this #earthquake M... 1
1 4 NaN NaN Forest fire near La Ronge Sask. Canada 1
2 5 NaN NaN All residents asked to 'shelter in place' are ... 1
3 6 NaN NaN 13,000 people receive #wildfires evacuation or... 1
4 7 NaN NaN Just got sent this photo from Ruby #Alaska as ... 1
train_df_shuffled = train_df.sample(frac = 1, random_state=42) # shuffle the training dataframe
train_df_shuffled.head()
id keyword location text target
2644 3796 destruction NaN So you have a new weapon that can cause un-ima... 1
2227 3185 deluge NaN The f$&@ing things I do for #GISHWHES Just... 0
5448 7769 police UK DT @georgegalloway: RT @Galloway4Mayor: ‰ÛÏThe... 1
132 191 aftershock NaN Aftershock back to school kick off was great. ... 0
6845 9810 trauma Montgomery County, MD in response to trauma Children of Addicts deve... 0
test_df.head()
id keyword location text
0 0 NaN NaN Just happened a terrible car crash
1 2 NaN NaN Heard about #earthquake is different cities, s...
2 3 NaN NaN there is a forest fire at spot pond, geese are...
3 9 NaN NaN Apocalypse lighting. #Spokane #wildfires
4 11 NaN NaN Typhoon Soudelor kills 28 in China and Taiwan
train_df.target.value_counts()
0    4342
1    3271
Name: target, dtype: int64
len(train_df), len(test_df)
(7613, 3263)
import random
random_index = random.randint(0, len(train_df)-5) # Create random indexes 
for row in train_df_shuffled[["text", "target"]][random_index: random_index+5].itertuples():
  _, text, target = row
  print(f"Target:{target}", "(real disaster)" if target > 0 else "(not real disaster)")
  print(f"Text:\n {text} \n")
  print("---\n")
Target:1 (real disaster)
Text:
 News Update Huge cliff landslide on road in China - Watch the moment a cliff collapses as huge chunks of rock fall... http://t.co/gaBd0cjmAG 

---

Target:1 (real disaster)
Text:
 #Politics Democracy‰Ûªs hatred for hate: ‰Û_ Dawabsha threaten to erode Israeli democracy. Homegrown terrorism ha...  http://t.co/q8n5Tn8WME 

---

Target:0 (not real disaster)
Text:
 Be Trynna smoke TJ out but he a hoe 

---

Target:1 (real disaster)
Text:
 California Law‰ÛÓNegligence and Fireworks Explosion Incidents http://t.co/d5w2zynP7b 

---

Target:1 (real disaster)
Text:
 USGS EQ: M 1.2 - 23km S of Twentynine Palms California: Time2015-08-05 23:54:09 UTC2015-08-05 16:... http://t.co/T97JmbzOBO #EarthQuake 

---

Split data into training and validation sets

from sklearn.model_selection import train_test_split
train_sentences, val_sentences, train_labels, val_labels = train_test_split(train_df_shuffled["text"].to_numpy(),
                                                                            train_df_shuffled["target"].to_numpy(),
                                                                            test_size = 0.1,# use 10% of training data for validation 
                                                                            random_state = 42)
len(train_sentences), len(val_sentences), len(train_labels), len(val_labels)
(6851, 762, 6851, 762)
train_sentences[:10], train_labels[:10]
(array(['@mogacola @zamtriossu i screamed after hitting tweet',
        'Imagine getting flattened by Kurt Zouma',
        '@Gurmeetramrahim #MSGDoing111WelfareWorks Green S welfare force ke appx 65000 members har time disaster victim ki help ke liye tyar hai....',
        "@shakjn @C7 @Magnums im shaking in fear he's gonna hack the planet",
        'Somehow find you and I collide http://t.co/Ee8RpOahPk',
        '@EvaHanderek @MarleyKnysh great times until the bus driver held us hostage in the mall parking lot lmfao',
        'destroy the free fandom honestly',
        'Weapons stolen from National Guard Armory in New Albany still missing #Gunsense http://t.co/lKNU8902JE',
        '@wfaaweather Pete when will the heat wave pass? Is it really going to be mid month? Frisco Boy Scouts have a canoe trip in Okla.',
        'Patient-reported outcomes in long-term survivors of metastatic colorectal cancer - British Journal of Surgery http://t.co/5Yl4DC1Tqt'],
       dtype=object), array([0, 0, 1, 0, 0, 1, 1, 0, 1, 1]))

Converting text into numbers

When dealing with a text problem, one of the first things you'll have to do before you can build a model is to convert your text to numbers.

There are a few ways to do this, namely:

  • Tokenization: Straight mapping from token to number (can be modelled but quickly gets too big)

  • Embedding: richer representation of relationships between tokens (can limit size + can be learned)

Tokenization vs Embedding

E.g. I am a Human

I = 0  
am = 1  
a = 2
Human = 3

or using one-hot encoding

[[1,0,0,0],
 [0,1,0,0],
 [0,0,1,0],
 [0,0,0,1]]

or by creating an Embedding

[[0.492, 0.005, 0.019],
 [0.060, 0.233, 0.899],
 [0.741, 0.983, 0.567]]
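
As a rough sketch (not part of the original notebook, and the layer choices below are just one way to do it), the three representations above could be produced in TensorFlow like this:

# Sketch: token IDs, one-hot vectors and an embedding for "I am a Human"
import tensorflow as tf

tokens = tf.constant(["I", "am", "a", "Human"])

# 1. Tokenization: straight token -> integer mapping (index 0 is reserved for out-of-vocabulary tokens)
lookup = tf.keras.layers.StringLookup(vocabulary=["I", "am", "a", "Human"])
token_ids = lookup(tokens) # [1, 2, 3, 4]

# 2. One-hot encoding: one column per entry in the vocabulary (including the OOV slot)
one_hot = tf.one_hot(token_ids, depth=lookup.vocabulary_size())

# 3. Embedding: each token becomes a dense, learnable vector (here of length 3)
embedding_demo = tf.keras.layers.Embedding(input_dim=lookup.vocabulary_size(), output_dim=3)
print(embedding_demo(token_ids).shape) # (4, 3)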


Text Vectorization (Tokenization)

train_sentences[:5]
array(['@mogacola @zamtriossu i screamed after hitting tweet',
       'Imagine getting flattened by Kurt Zouma',
       '@Gurmeetramrahim #MSGDoing111WelfareWorks Green S welfare force ke appx 65000 members har time disaster victim ki help ke liye tyar hai....',
       "@shakjn @C7 @Magnums im shaking in fear he's gonna hack the planet",
       'Somehow find you and I collide http://t.co/Ee8RpOahPk'],
      dtype=object)
import tensorflow as tf
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization

# Use the default TextVectorization parameters (just to demonstrate the default values of this layer)
text_vectorizer = TextVectorization(max_tokens = None, # how many words in the vocabulary (automatically adds an out-of-vocabulary token)
                                    standardize = "lower_and_strip_punctuation", # how to clean the text
                                    split = "whitespace", # how to split the tokens
                                    ngrams = None, # create groups of n-words
                                    output_mode = "int", # how to map tokens to numbers
                                    output_sequence_length = None) # how long do you want the sequences to be
                                    # pad_to_max_tokens = True (not valid if using max_tokens=None)
len(train_sentences[0].split())
7
round(sum([len(i.split()) for i in train_sentences])/len(train_sentences))
15
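The average Tweet is around 15 tokens long, which is what we'll use for output_sequence_length below. Another common heuristic (shown only as a sketch, not used in this notebook) is to pick a length that covers ~95% of the sentences instead of the mean:

# Sketch: sequence length covering ~95% of the training sentences
import numpy as np

sentence_lengths = [len(sentence.split()) for sentence in train_sentences]
print(int(np.percentile(sentence_lengths, 95)))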
max_vocab_length = 10000 # max number of words to have in our vocabulary
max_length = 15 # max length our sequences will be (e.g. how many words from a Tweet our model sees)

text_vectorizer = TextVectorization(max_tokens = max_vocab_length,
                                    output_mode ="int",
                                    output_sequence_length = max_length)
text_vectorizer.adapt(train_sentences)
sample_sentence = "There's a flood in my street!"
text_vectorizer([sample_sentence])
<tf.Tensor: shape=(1, 15), dtype=int64, numpy=
array([[264,   3, 232,   4,  13, 698,   0,   0,   0,   0,   0,   0,   0,
          0,   0]])>
# Choose a random sentence from the training dataset and tokenize it
random_sentence = random.choice(train_sentences)
print(f"Original text: \n {random_sentence} \
        \n \n Vectorized version: ")

text_vectorizer([random_sentence])
Original text: 
 @AlfaPedia It might have come out ONLY too burst as a Bomb making him suicide bomber         
 
 Vectorized version: 
<tf.Tensor: shape=(1, 15), dtype=int64, numpy=
array([[   1,   15,  843,   24,  220,   36,  126,  150, 2174,   26,    3,
         108,  572,  158,   87]])>
words_in_vocab = text_vectorizer.get_vocabulary() # get all of the unique words in our training data
top_5_words = words_in_vocab[:5] # get the most common words
bottom_5_words= words_in_vocab[-5:] # get the least common words
print(f"Number of words in vocab: {len(words_in_vocab)}")
print(f"5 most common words: {top_5_words}")
print(f"5 least common words: {bottom_5_words}")
Number of words in vocab: 10000
5 most common words: ['', '[UNK]', 'the', 'a', 'in']
5 least common words: ['pages', 'paeds', 'pads', 'padres', 'paddytomlinson1']

Creating an Embedding using an Embedding Layer

To make our embedding we are going to use TensorFlow's Embedding layer.

The parameters we care most about for our embedding layer:

  • input_dim = the size of our vocabulary
  • output_dim = the size of the output embedding vector, for example, a value of 100 would mean each token gets represented by a vector 100 long
  • input_length = length of sequences being passed to the embedding layer.
from tensorflow.keras import layers

embedding = layers.Embedding(input_dim = max_vocab_length, # set input size (vocabulary size)
                             output_dim = 128, # size of the embedding vector for each token
                             embeddings_initializer = 'uniform', # how to initialize the embedding matrix
                             input_length = max_length # how long is each input
                             )

embedding
<keras.layers.embeddings.Embedding at 0x7f6a9734a110>
# Get a random sentence from the training set
random_sentence = random.choice(train_sentences)
print(f"Original text:\n {random_sentence}\
        \n \nEmbedded version: ")

# Embed the random sentence (turn it into dense vectors of fixed size)
sample_embed = embedding(text_vectorizer([random_sentence]))
sample_embed

Original text:
 Fall back this first break homebuyer miscalculation that could destruction thousands: MwjCdk        
 
Embedded version: 
<tf.Tensor: shape=(1, 15, 128), dtype=float32, numpy=
array([[[ 0.00859234, -0.02991412,  0.00175644, ...,  0.02193626,
          0.04184195, -0.02619057],
        [-0.00470889, -0.04967961, -0.01436696, ..., -0.00270484,
         -0.00828482,  0.01314512],
        [ 0.00405866,  0.02752711,  0.01645878, ..., -0.00964943,
          0.02267227,  0.00371256],
        ...,
        [ 0.00350211, -0.04788604,  0.00196681, ...,  0.02803201,
          0.00803728,  0.02167306],
        [ 0.00350211, -0.04788604,  0.00196681, ...,  0.02803201,
          0.00803728,  0.02167306],
        [ 0.00350211, -0.04788604,  0.00196681, ...,  0.02803201,
          0.00803728,  0.02167306]]], dtype=float32)>
# Check out a single token's embedding (note: random_sentence[0] is the first character of the sentence, not the first word)
sample_embed[0][0], sample_embed[0][0].shape, random_sentence[0]

(<tf.Tensor: shape=(128,), dtype=float32, numpy=
 array([ 0.00859234, -0.02991412,  0.00175644, -0.03367729, -0.03769684,
        -0.0291563 , -0.02644087, -0.03562082,  0.04090923, -0.02225679,
        -0.01017957, -0.04141713, -0.01146468, -0.04587493,  0.02797664,
        -0.02889217,  0.03275966, -0.02597183,  0.03522148, -0.04480093,
         0.0389016 , -0.0169893 ,  0.0142766 , -0.03043303,  0.04030218,
        -0.04211314,  0.03645163,  0.02257297,  0.02544535, -0.00259332,
        -0.01840631, -0.02087172, -0.03521866, -0.01772154, -0.04674302,
        -0.00397594,  0.03044703, -0.00820515, -0.04558386, -0.02431409,
         0.041382  ,  0.02238775,  0.00051622, -0.01694447, -0.01824627,
         0.03566995, -0.04934913, -0.00467784,  0.02524788,  0.02154641,
        -0.0166956 , -0.00147361,  0.02120248,  0.0378341 ,  0.00150269,
        -0.02470231, -0.04485737, -0.03325255, -0.0435687 , -0.02844893,
         0.04605688, -0.04954116,  0.01102605, -0.03360488, -0.00772928,
         0.00679797,  0.03716531, -0.04825834,  0.03694325, -0.04381819,
        -0.01490177,  0.01195564,  0.04442109,  0.02496224,  0.00903082,
        -0.00705872, -0.00284895, -0.0252423 ,  0.01289615,  0.00021081,
         0.01263725,  0.00752894, -0.02753489,  0.03229766, -0.02484094,
        -0.00039848, -0.02652047, -0.03955548, -0.04918641, -0.02551678,
         0.02933357,  0.03285935,  0.03242579, -0.01814985,  0.04405214,
        -0.03942751,  0.04565073,  0.02142942,  0.04559227,  0.02825892,
        -0.04690794,  0.04336696, -0.04502993,  0.03199968, -0.01844629,
        -0.01408292,  0.0381173 ,  0.02374512, -0.04499742,  0.00774354,
         0.03652367, -0.03725274, -0.0037598 , -0.03174049, -0.00407821,
         0.04460347,  0.00373029,  0.01934692,  0.04332647,  0.03005439,
        -0.00177516,  0.02241433, -0.01339862, -0.00733767,  0.03994515,
         0.02193626,  0.04184195, -0.02619057], dtype=float32)>,
 TensorShape([128]),
 'F')

Modelling a text dataset

| Experiment Number | Model |
| --- | --- |
| 0 | Naive Bayes with TF-IDF encoder (baseline) |
| 1 | Feed-forward neural network (dense model) |
| 2 | LSTM (RNN) |
| 3 | GRU (RNN) |
| 4 | Bidirectional-LSTM (RNN) |
| 5 | 1D Convolutional Neural Network |
| 6 | TensorFlow Hub Pretrained Feature Extractor |
| 7 | TensorFlow Hub Pretrained Feature Extractor (10% of data) |

Standard steps involved in running modelling experiments:

  • Create a model
  • Build a model
  • Fit a model
  • Evaluate our model

Model 0: Naive Bayes with TF-IDF encoder

To create our baseline, we'll use scikit-learn's Multinomial Naive Bayes with the TF-IDF formula to convert our words to numbers.

Note: It's common practice to use non-deep-learning algorithms as a baseline because of their speed; later we can run deep learning experiments to see if we can improve upon them.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# Create tokenization and modelling pipeline
model_0 = Pipeline([
                    ("tfidf", TfidfVectorizer()), # convert words to numbers using tfidf
                    ("clf", MultinomialNB()) # Model the text
                    
])

# Fit the pipeline to the training data
model_0.fit(train_sentences, train_labels)
Pipeline(steps=[('tfidf', TfidfVectorizer()), ('clf', MultinomialNB())])
baseline_score = model_0.score(val_sentences, val_labels)
print(f"Our baseline model achieves an accuracy of : {baseline_score*100:.2f}%")
Our baseline model achieves an accuracy of : 79.27%
train_df.target.value_counts()
0    4342
1    3271
Name: target, dtype: int64

So our baseline model is doing better than random guessing: the classes are roughly balanced (about 57% "not disaster" vs 43% "disaster"), so always predicting the majority class would only score around 57%.
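
To see that balance as proportions, we can normalize the value counts (same data as above, just expressed as fractions):

# Proportion of each class in the training data
train_df.target.value_counts(normalize=True) # ~0.57 for class 0 (not disaster), ~0.43 for class 1 (disaster)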

baseline_preds = model_0.predict(val_sentences)
baseline_preds[:20]
array([1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1])

Creating an evaluation function for our model experiments

Rather than writing out the same evaluation code for every model, we can create a function that calculates several metrics at once and reuse it for all of the model experiments.

The function should output the following evaluation metrics:

  • Accuracy
  • Precision
  • Recall
  • F1-Score

Resource: Metrics and scoring: quantifying the quality of predictions

from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def calculate_results(y_true, y_pred):
  """

  Calculates model accuracy, precision, recall and f1 score of a binary classification model
  """
  # Calculate the model accuracy
  model_accuracy = accuracy_score(y_true, y_pred)*100
  # Calculate model precision, recall and f1-score using "weighted" average
  model_precision, model_recall, model_f1, _ = precision_recall_fscore_support(y_true,y_pred, average = "weighted")
  model_results = {"accuracy": model_accuracy,
                   "precision": model_precision,
                   "recall": model_recall,
                   "f1": model_f1}

  return model_results
baseline_results = calculate_results(y_true = val_labels,
                                     y_pred = baseline_preds)

baseline_results
{'accuracy': 79.26509186351706,
 'f1': 0.7862189758049549,
 'precision': 0.8111390004213173,
 'recall': 0.7926509186351706}

Model 1: A Simple Dense Model

from helper_functions import create_tensorboard_callback

# Create a directory to save TensorBoard logs
SAVE_DIR = "model_logs"
from tensorflow.keras import layers
inputs = layers.Input(shape=(1,), dtype="string") # inputs are 1-dimensional strings
x = text_vectorizer(inputs) # turn the input text into numbers
x = embedding(x) # create an embedding of the numerized numbers
x = layers.GlobalAveragePooling1D()(x) # lower the dimensionality of the embedding 
outputs = layers.Dense(1, activation="sigmoid")(x) # create the output layer, want binary outputs so use sigmoid activation
model_1 = tf.keras.Model(inputs, outputs, name="model_1_dense") # construct the model
model_1.summary()
Model: "model_1_dense"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 input_1 (InputLayer)        [(None, 1)]               0         
                                                                 
 text_vectorization_4 (TextV  (None, 15)               0         
 ectorization)                                                   
                                                                 
 embedding_1 (Embedding)     (None, 15, 128)           1280000   
                                                                 
 global_average_pooling1d (G  (None, 128)              0         
 lobalAveragePooling1D)                                          
                                                                 
 dense (Dense)               (None, 1)                 129       
                                                                 
=================================================================
Total params: 1,280,129
Trainable params: 1,280,129
Non-trainable params: 0
_________________________________________________________________
model_1.compile(loss = "binary_crossentropy",
                optimizer = tf.keras.optimizers.Adam(),
                metrics = ["accuracy"])
model_1_history = model_1.fit(train_sentences,
                              train_labels,
                              epochs = 5,
                              validation_data = (val_sentences,val_labels),
                              callbacks = [create_tensorboard_callback(dir_name = SAVE_DIR,
                                                                       experiment_name = "model_1_dense" )])
Saving TensorBoard log files to: model_logs/model_1_dense/20220311-074354
Epoch 1/5
215/215 [==============================] - 5s 7ms/step - loss: 0.6103 - accuracy: 0.6875 - val_loss: 0.5359 - val_accuracy: 0.7598
Epoch 2/5
215/215 [==============================] - 1s 7ms/step - loss: 0.4411 - accuracy: 0.8156 - val_loss: 0.4690 - val_accuracy: 0.7848
Epoch 3/5
215/215 [==============================] - 1s 6ms/step - loss: 0.3472 - accuracy: 0.8600 - val_loss: 0.4571 - val_accuracy: 0.7953
Epoch 4/5
215/215 [==============================] - 1s 7ms/step - loss: 0.2837 - accuracy: 0.8924 - val_loss: 0.4681 - val_accuracy: 0.7927
Epoch 5/5
215/215 [==============================] - 1s 6ms/step - loss: 0.2371 - accuracy: 0.9126 - val_loss: 0.4866 - val_accuracy: 0.7887
model_1.evaluate(val_sentences, val_labels)
24/24 [==============================] - 0s 4ms/step - loss: 0.4866 - accuracy: 0.7887
[0.4866005778312683, 0.7887139320373535]
model_1_pred_probs = model_1.predict(val_sentences)
model_1_pred_probs.shape
(762, 1)
model_1_pred_probs[1]
array([0.8119219], dtype=float32)

These are prediction probabilities that came out of the output layer.

model_1_preds = tf.squeeze(tf.round(model_1_pred_probs))
model_1_preds[:20]
<tf.Tensor: shape=(20,), dtype=float32, numpy=
array([0., 1., 1., 0., 0., 1., 1., 1., 1., 0., 0., 1., 0., 0., 0., 0., 0.,
       0., 0., 0.], dtype=float32)>
model_1_results = calculate_results(y_true = val_labels,
                                    y_pred = model_1_preds)
model_1_results
{'accuracy': 78.87139107611549,
 'f1': 0.7848945056280915,
 'precision': 0.7964015586347394,
 'recall': 0.7887139107611548}
import numpy as np
np.array(list(model_1_results.values())) > np.array(list(baseline_results.values()))
array([False, False, False, False])
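
For a more readable side-by-side comparison, we could also put both results dictionaries into a DataFrame (an optional sketch, not part of the original workflow):

# Optional: compare result dictionaries side-by-side
import pandas as pd

pd.DataFrame({"baseline": baseline_results,
              "model_1_dense": model_1_results})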

Visualizing learned Embeddings:

words_in_vocab = text_vectorizer.get_vocabulary()
len(words_in_vocab), words_in_vocab[:10]
(10000, ['', '[UNK]', 'the', 'a', 'in', 'to', 'of', 'and', 'i', 'is'])
model_1.summary()
Model: "model_1_dense"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 input_1 (InputLayer)        [(None, 1)]               0         
                                                                 
 text_vectorization_4 (TextV  (None, 15)               0         
 ectorization)                                                   
                                                                 
 embedding_1 (Embedding)     (None, 15, 128)           1280000   
                                                                 
 global_average_pooling1d (G  (None, 128)              0         
 lobalAveragePooling1D)                                          
                                                                 
 dense (Dense)               (None, 1)                 129       
                                                                 
=================================================================
Total params: 1,280,129
Trainable params: 1,280,129
Non-trainable params: 0
_________________________________________________________________
# Get the weight matrix of the embedding layer
# (these are the numerical representations of each token in our training data, which have been learned over 5 epochs)
embed_weights = model_1.get_layer("embedding_1").get_weights()[0]
embed_weights # Same size as vocab size and embedding_dim
array([[ 0.02186958, -0.06763938,  0.02146152, ...,  0.01258572,
         0.02883859,  0.00023849],
       [-0.02620384, -0.02298776, -0.02039188, ...,  0.00896182,
         0.03781693, -0.03427465],
       [ 0.0165944 ,  0.01236528,  0.03275075, ...,  0.03906085,
         0.05525858, -0.03807979],
       ...,
       [-0.0099979 ,  0.04100901, -0.04915455, ...,  0.03296768,
         0.03509828,  0.02508564],
       [ 0.01705603, -0.04142731, -0.00240709, ..., -0.05540716,
         0.0721622 , -0.03262765],
       [ 0.06986112, -0.09984355,  0.02866708, ..., -0.02748252,
         0.08362035, -0.03691495]], dtype=float32)
print(embed_weights.shape) # same size as vocab size and embedding_dim
(10000, 128)
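
Each row of the matrix lines up with an entry in the vocabulary, so we can look up a single word's learned vector by its index (a quick sketch):

# Sketch: look up the learned embedding vector for a single word
target_word = words_in_vocab[2] # "the"
target_vector = embed_weights[2] # row index matches the vocabulary index
print(target_word, target_vector.shape) # the (128,)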

Now that we've got the embedding matrix our model has learned to represent our tokens, let's see how we can visualize it. To do so, TensorFlow has a tool called the Embedding Projector: https://projector.tensorflow.org/

And TensorFlow also has an incredible guide on word embeddings: https://www.tensorflow.org/text/guide/word_embeddings

import io
out_v = io.open('vectors.tsv', 'w', encoding='utf-8')
out_m = io.open('metadata.tsv', 'w', encoding='utf-8')

for index, word in enumerate(words_in_vocab):
  if index == 0:
    continue  # skip 0, it's padding.
  vec = embed_weights[index]
  out_v.write('\t'.join([str(x) for x in vec]) + "\n")
  out_m.write(word + "\n")
out_v.close()
out_m.close()
try:
  from google.colab import files
  files.download('vectors.tsv')
  files.download('metadata.tsv')
except Exception:
  pass