However, sometimes you will need to dive deeper and write your own code. Here are some common examples:

- Creating a new `Layer` subclass.
- Creating a custom `Metric` subclass.
- Implementing a custom `train_step` on a `Model`.

This document provides a few simple tips to help you navigate debugging in these situations.
## Tip 1: test each part before you test the whole
If you've created any object that has a chance of not working as expected, don't just drop it into your end-to-end process and watch sparks fly. Rather, test your custom object in isolation first. This may seem obvious -- but you'd be surprised how often people don't start with this.

- If you write a custom layer, don't call `fit()` on your entire model just yet. Call your layer on some test data first.
- If you write a custom metric, start by printing its output for some reference inputs.
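For instance, here's what checking a metric in isolation might look like. The `WithinTolerance` class below is a hypothetical example (not from this guide): it tracks the fraction of predictions within a given tolerance of the targets, and we verify it against reference inputs whose expected value we can compute by hand:

```python
import tensorflow as tf


# Hypothetical custom metric: fraction of predictions within `tolerance` of the target.
class WithinTolerance(tf.keras.metrics.Metric):
    def __init__(self, tolerance=0.5, name="within_tolerance", **kwargs):
        super().__init__(name=name, **kwargs)
        self.tolerance = tolerance
        self.total = self.add_weight(name="total", initializer="zeros")
        self.count = self.add_weight(name="count", initializer="zeros")

    def update_state(self, y_true, y_pred, sample_weight=None):
        y_true = tf.convert_to_tensor(y_true, dtype=tf.float32)
        y_pred = tf.convert_to_tensor(y_pred, dtype=tf.float32)
        # Count predictions that land within the tolerance band
        hits = tf.cast(tf.abs(y_true - y_pred) <= self.tolerance, tf.float32)
        self.total.assign_add(tf.reduce_sum(hits))
        self.count.assign_add(tf.cast(tf.size(hits), tf.float32))

    def result(self):
        return self.total / self.count


# Sanity-check against reference inputs with a known expected value:
# 2 of the 3 predictions fall within 0.5 of the target, so we expect 2/3.
m = WithinTolerance(tolerance=0.5)
m.update_state([1.0, 2.0, 3.0], [1.2, 2.9, 3.1])
print(m.result().numpy())
```

If the printed value doesn't match what you computed by hand, you've found your bug before it was buried inside a training loop.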
Here's a simple example. Let's write a custom layer with a bug in it:
```python
import tensorflow as tf
from tensorflow.keras import layers


class MyAntirectifier(layers.Layer):
    def build(self, input_shape):
        output_dim = input_shape[-1]
        self.kernel = self.add_weight(
            shape=(output_dim * 2, output_dim),
            initializer="he_normal",
            name="kernel",
            trainable=True,
        )

    def call(self, inputs):
        # Take the positive part of the input
        pos = tf.nn.relu(inputs)
        # Take the negative part of the input
        neg = tf.nn.relu(-inputs)
        # Concatenate the positive and negative parts
        concatenated = tf.concat([pos, neg], axis=0)
        # Project the concatenation down to the same dimensionality as the input
        return tf.matmul(concatenated, self.kernel)
```
Now, rather than using it in an end-to-end model directly, let's try to call the layer on some test data:
```python
x = tf.random.normal(shape=(2, 5))
y = MyAntirectifier()(x)
```
We get the following error:

```
...
      1 x = tf.random.normal(shape=(2, 5))
----> 2 y = MyAntirectifier()(x)
...
     17 neg = tf.nn.relu(-inputs)
     18 concatenated = tf.concat([pos, neg], axis=0)
---> 19 return tf.matmul(concatenated, self.kernel)
...
InvalidArgumentError: Matrix size-incompatible: In[0]: [4,5], In[1]: [10,5] [Op:MatMul]
```
Looks like our input tensor in the matmul op may have an incorrect shape. Let's add a print statement to check the actual shapes:

```python
class MyAntirectifier(layers.Layer):
    def build(self, input_shape):
        output_dim = input_shape[-1]
        self.kernel = self.add_weight(
            shape=(output_dim * 2, output_dim),
            initializer="he_normal",
            name="kernel",
            trainable=True,
        )

    def call(self, inputs):
        pos = tf.nn.relu(inputs)
        neg = tf.nn.relu(-inputs)
        print("pos.shape:", pos.shape)
        print("neg.shape:", neg.shape)
        concatenated = tf.concat([pos, neg], axis=0)
        print("concatenated.shape:", concatenated.shape)
        print("kernel.shape:", self.kernel.shape)
        return tf.matmul(concatenated, self.kernel)
```
We get the following:

```
pos.shape: (2, 5)
neg.shape: (2, 5)
concatenated.shape: (4, 5)
kernel.shape: (10, 5)
```
Turns out we had the wrong axis for the concat op! We should be concatenating `neg` and `pos` along the feature axis 1, not the batch axis 0. Here's the correct version:
```python
class MyAntirectifier(layers.Layer):
    def build(self, input_shape):
        output_dim = input_shape[-1]
        self.kernel = self.add_weight(
            shape=(output_dim * 2, output_dim),
            initializer="he_normal",
            name="kernel",
            trainable=True,
        )

    def call(self, inputs):
        pos = tf.nn.relu(inputs)
        neg = tf.nn.relu(-inputs)
        print("pos.shape:", pos.shape)
        print("neg.shape:", neg.shape)
        concatenated = tf.concat([pos, neg], axis=1)
        print("concatenated.shape:", concatenated.shape)
        print("kernel.shape:", self.kernel.shape)
        return tf.matmul(concatenated, self.kernel)
```