return tf.matmul(concatenated, self.kernel)
Now our code works fine:
x = tf.random.normal(shape=(2, 5))
y = MyAntirectifier()(x)
pos.shape: (2, 5)
neg.shape: (2, 5)
concatenated.shape: (2, 10)
kernel.shape: (10, 5)
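For reference, the full layer with the shape prints can be sketched as follows. This is a minimal reconstruction of the MyAntirectifier layer discussed above, with print() calls in call() to surface the intermediate shapes:

```python
import tensorflow as tf
from tensorflow.keras import layers

class MyAntirectifier(layers.Layer):
    def build(self, input_shape):
        output_dim = input_shape[-1]
        # The kernel maps the concatenated (2 * dim) features back to dim.
        self.kernel = self.add_weight(
            shape=(output_dim * 2, output_dim),
            initializer="he_normal",
            trainable=True,
        )

    def call(self, inputs):
        # Keep the positive and negative parts separately.
        pos = tf.nn.relu(inputs)
        neg = tf.nn.relu(-inputs)
        print("pos.shape:", pos.shape)
        print("neg.shape:", neg.shape)
        # Concatenate along the feature axis, then project back down.
        concatenated = tf.concat([pos, neg], axis=-1)
        print("concatenated.shape:", concatenated.shape)
        print("kernel.shape:", self.kernel.shape)
        return tf.matmul(concatenated, self.kernel)

x = tf.random.normal(shape=(2, 5))
y = MyAntirectifier()(x)
```

Once the shapes line up as expected, the print() calls can simply be removed.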
Tip 2: use model.summary() and plot_model() to check layer output shapes
If you're working with complex network topologies, you're going to need a way to visualize how your layers are connected and how they transform the data that passes through them.
Here's an example. Consider this model with three inputs and two outputs (lifted from the Functional API guide):
from tensorflow import keras
from tensorflow.keras import layers
num_tags = 12 # Number of unique issue tags
num_words = 10000 # Size of vocabulary obtained when preprocessing text data
num_departments = 4 # Number of departments for predictions
title_input = keras.Input(
    shape=(None,), name="title"
)  # Variable-length sequence of ints
body_input = keras.Input(shape=(None,), name="body")  # Variable-length sequence of ints
tags_input = keras.Input(
    shape=(num_tags,), name="tags"
)  # Binary vectors of size `num_tags`
# Embed each word in the title into a 64-dimensional vector
title_features = layers.Embedding(num_words, 64)(title_input)
# Embed each word in the body into a 64-dimensional vector
body_features = layers.Embedding(num_words, 64)(body_input)
# Reduce sequence of embedded words in the title into a single 128-dimensional vector
title_features = layers.LSTM(128)(title_features)
# Reduce sequence of embedded words in the body into a single 32-dimensional vector
body_features = layers.LSTM(32)(body_features)
# Merge all available features into a single large vector via concatenation
x = layers.concatenate([title_features, body_features, tags_input])
# Stick a logistic regression for priority prediction on top of the features
priority_pred = layers.Dense(1, name="priority")(x)
# Stick a department classifier on top of the features
department_pred = layers.Dense(num_departments, name="department")(x)
# Instantiate an end-to-end model predicting both priority and department
model = keras.Model(
    inputs=[title_input, body_input, tags_input],
    outputs=[priority_pred, department_pred],
)
Calling summary() can help you check the output shape of each layer:
model.summary()
Model: "functional_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
title (InputLayer) [(None, None)] 0
__________________________________________________________________________________________________
body (InputLayer) [(None, None)] 0
__________________________________________________________________________________________________
embedding (Embedding) (None, None, 64) 640000 title[0][0]
__________________________________________________________________________________________________
embedding_1 (Embedding) (None, None, 64) 640000 body[0][0]
__________________________________________________________________________________________________
lstm (LSTM) (None, 128) 98816 embedding[0][0]
__________________________________________________________________________________________________
lstm_1 (LSTM) (None, 32) 12416 embedding_1[0][0]
__________________________________________________________________________________________________
tags (InputLayer) [(None, 12)] 0
__________________________________________________________________________________________________
concatenate (Concatenate) (None, 172) 0 lstm[0][0]
lstm_1[0][0]
tags[0][0]
__________________________________________________________________________________________________
priority (Dense) (None, 1) 173 concatenate[0][0]
__________________________________________________________________________________________________
department (Dense) (None, 4) 692 concatenate[0][0]
==================================================================================================
Total params: 1,392,097
Trainable params: 1,392,097
Non-trainable params: 0
__________________________________________________________________________________________________
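The same shape information is also available programmatically: every symbolic tensor in a Functional model carries its inferred shape. Here's a minimal sketch using a small stand-in model (the layer names are illustrative, not taken from the model above):

```python
from tensorflow import keras
from tensorflow.keras import layers

# A small illustrative Functional model.
inp = keras.Input(shape=(16,), name="features")
hidden = layers.Dense(8, name="hidden")(inp)
out = layers.Dense(1, name="score")(hidden)
model = keras.Model(inp, out)

# Each symbolic tensor records its inferred shape,
# so you can assert on shapes without printing a full summary.
print(model.get_layer("hidden").output.shape)  # -> (None, 8)
```

This is handy in tests, where you want a hard failure rather than a table to eyeball.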
You can also visualize the entire network topology alongside output shapes using plot_model:
keras.utils.plot_model(model, show_shapes=True)
[plot_model output: the model graph, with per-layer output shapes annotated]
With this plot, any connectivity-level error becomes immediately obvious.
Tip 3: to debug what happens during fit(), use run_eagerly=True
The fit() method is fast: it runs a well-optimized, fully-compiled computation graph. That's great for performance, but it also means that the code you're executing isn't the Python code you've written. This can be problematic when debugging. As you may recall, Python is slow -- so we use it as a staging language, not as an execution language.
Thankfully, there's an easy way to run your code in "debug mode", fully eagerly: pass run_eagerly=True to compile(). Your call to fit() will now get executed line by line, without any optimization. It's slower, but it makes it possible to print the values of intermediate tensors, or to use a Python debugger. Great for debugging.
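As a minimal sketch (the toy model here is purely illustrative; any Keras model works the same way), enabling debug mode is a single argument to compile():

```python
import tensorflow as tf
from tensorflow import keras

# A tiny illustrative model.
model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(1),
])
model.compile(
    optimizer="adam",
    loss="mse",
    run_eagerly=True,  # execute train_step as plain Python, no graph compilation
)

# fit() now runs eagerly: print() calls and pdb breakpoints inside
# a custom train_step (or inside custom layers) behave as expected.
x = tf.random.normal((8, 4))
y = tf.random.normal((8, 1))
model.fit(x, y, epochs=1, verbose=0)
```

Remember to remove run_eagerly=True once you're done debugging, since it disables all graph-level optimization.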
Here's a basic example: let's write a really simple model with a custom train_step. Our model just implements gradient descent, but instead of first-order gradients, it uses a combination of first-order and second-order gradients. Pretty trivial so far.
Can you spot what we're doing wrong?