Their values can vary greatly at each step. This leads us to an obvious idea: let's normalize the gradients before combining them.
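As a standalone sketch of that idea (pure Python, no TensorFlow; the helper names are hypothetical stand-ins for `tf.math.l2_normalize` and the combination step below), normalizing two gradient vectors of wildly different scales puts them on equal footing before averaging:

```python
import math

def l2_normalize(v):
    # Scale a vector to unit L2 norm, like tf.math.l2_normalize does per tensor.
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v] if norm > 0 else list(v)

def combine(g1, g2, w1=0.5, w2=0.5):
    # Normalize each gradient, then take a weighted sum.
    a, b = l2_normalize(g1), l2_normalize(g2)
    return [w1 * x + w2 * y for x, y in zip(a, b)]

# Two gradients whose raw magnitudes differ by a factor of 1000...
first_order = [3.0, 4.0]       # norm 5
second_order = [0.003, 0.004]  # norm 0.005
# ...contribute equally once normalized:
combined = combine(second_order, first_order)  # approximately [0.6, 0.8]
```

Without the normalization, the larger gradient would completely dominate the weighted sum.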
class MyModel(keras.Model):
    def train_step(self, data):
        inputs, targets = data
        trainable_vars = self.trainable_variables
        with tf.GradientTape() as tape2:
            with tf.GradientTape() as tape1:
                preds = self(inputs, training=True)  # Forward pass
                # Compute the loss value
                # (the loss function is configured in `compile()`)
                loss = self.compiled_loss(targets, preds)
            # Compute first-order gradients
            dl_dw = tape1.gradient(loss, trainable_vars)
        # Compute second-order gradients
        d2l_dw2 = tape2.gradient(dl_dw, trainable_vars)

        dl_dw = [tf.math.l2_normalize(w) for w in dl_dw]
        d2l_dw2 = [tf.math.l2_normalize(w) for w in d2l_dw2]

        # Combine first-order and second-order gradients
        grads = [0.5 * w1 + 0.5 * w2 for (w1, w2) in zip(d2l_dw2, dl_dw)]

        # Update weights
        self.optimizer.apply_gradients(zip(grads, trainable_vars))

        # Update metrics (includes the metric that tracks the loss)
        self.compiled_metrics.update_state(targets, preds)
        # Return a dict mapping metric names to current value
        return {m.name: m.result() for m in self.metrics}


model = get_model()
model.compile(
    optimizer=keras.optimizers.SGD(learning_rate=1e-2),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.fit(x_train, y_train, epochs=5, batch_size=1024, validation_split=0.1)
Epoch 1/5
53/53 [==============================] - 1s 15ms/step - loss: 2.1680 - accuracy: 0.2796 - val_loss: 2.0063 - val_accuracy: 0.4688
Epoch 2/5
53/53 [==============================] - 1s 13ms/step - loss: 1.9071 - accuracy: 0.5292 - val_loss: 1.7729 - val_accuracy: 0.6312
Epoch 3/5
53/53 [==============================] - 1s 13ms/step - loss: 1.7098 - accuracy: 0.6197 - val_loss: 1.5966 - val_accuracy: 0.6785
Epoch 4/5
53/53 [==============================] - 1s 13ms/step - loss: 1.5686 - accuracy: 0.6434 - val_loss: 1.4748 - val_accuracy: 0.6875
Epoch 5/5
53/53 [==============================] - 1s 14ms/step - loss: 1.4729 - accuracy: 0.6448 - val_loss: 1.3908 - val_accuracy: 0.6862
<tensorflow.python.keras.callbacks.History at 0x1a1105210>
Now, training converges! It doesn't work well at all, but at least the model learns something.
After spending a few minutes tuning hyperparameters, we arrive at the following configuration that works somewhat well (achieves 97% validation accuracy and seems reasonably robust to overfitting):

- Use 0.2 * w1 + 0.8 * w2 for combining the gradients.
- Use a learning rate schedule that decays over time.
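The decaying schedule used in the final run below is Keras' `InverseTimeDecay`. Assuming TensorFlow's documented continuous form of that schedule, lr(step) = initial_lr / (1 + decay_rate * step / decay_steps), here is a pure-Python sketch of how the rate falls off with the parameters used below:

```python
def inverse_time_decay(step, initial_lr=0.1, decay_steps=25, decay_rate=0.1):
    # Continuous (non-staircase) inverse time decay:
    # lr(step) = initial_lr / (1 + decay_rate * step / decay_steps)
    return initial_lr / (1 + decay_rate * step / decay_steps)

# The rate starts at 0.1 and decays smoothly toward 0:
for step in (0, 25, 250, 2500):
    print(step, inverse_time_decay(step))
```

Unlike a fixed learning rate, this lets the large-ish initial steps shrink as training progresses.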
I'm not going to say that the idea works -- this isn't at all how you're supposed to do second-order optimization (pointers: see the Newton & Gauss-Newton methods, quasi-Newton methods, and BFGS). But hopefully this demonstration gave you an idea of how you can debug your way out of uncomfortable training situations.
Remember: use run_eagerly=True for debugging what happens in fit(). And when your code is finally working as expected, make sure to remove this flag in order to get the best runtime performance!
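Concretely, the flag is just an extra argument to `compile()`. With it set, `train_step` runs eagerly instead of being compiled into a `tf.function`, so you can drop `print()` calls or breakpoints into it (shown here on the `compile()` call from this example):

```python
# Debug configuration: train_step executes eagerly, line by line.
model.compile(
    optimizer=keras.optimizers.SGD(learning_rate=1e-2),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
    run_eagerly=True,  # remove once debugged, to regain tf.function speed
)
```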
Here's our final training run:
class MyModel(keras.Model):
    def train_step(self, data):
        inputs, targets = data
        trainable_vars = self.trainable_variables
        with tf.GradientTape() as tape2:
            with tf.GradientTape() as tape1:
                preds = self(inputs, training=True)  # Forward pass
                # Compute the loss value
                # (the loss function is configured in `compile()`)
                loss = self.compiled_loss(targets, preds)
            # Compute first-order gradients
            dl_dw = tape1.gradient(loss, trainable_vars)
        # Compute second-order gradients
        d2l_dw2 = tape2.gradient(dl_dw, trainable_vars)

        dl_dw = [tf.math.l2_normalize(w) for w in dl_dw]
        d2l_dw2 = [tf.math.l2_normalize(w) for w in d2l_dw2]

        # Combine first-order and second-order gradients
        grads = [0.2 * w1 + 0.8 * w2 for (w1, w2) in zip(d2l_dw2, dl_dw)]

        # Update weights
        self.optimizer.apply_gradients(zip(grads, trainable_vars))

        # Update metrics (includes the metric that tracks the loss)
        self.compiled_metrics.update_state(targets, preds)
        # Return a dict mapping metric names to current value
        return {m.name: m.result() for m in self.metrics}


model = get_model()
lr = keras.optimizers.schedules.InverseTimeDecay(
    initial_learning_rate=0.1, decay_steps=25, decay_rate=0.1
)
model.compile(
    optimizer=keras.optimizers.SGD(lr),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.fit(x_train, y_train, epochs=5, batch_size=1024, validation_split=0.1)