Epoch 9/20
399/399 [==============================] - 2s 5ms/step - loss: 0.0710 - sparse_categorical_accuracy: 0.9827 - val_loss: 0.1613 - val_sparse_categorical_accuracy: 0.9694
Epoch 10/20
399/399 [==============================] - 2s 5ms/step - loss: 0.0633 - sparse_categorical_accuracy: 0.9840 - val_loss: 0.1463 - val_sparse_categorical_accuracy: 0.9758
Epoch 11/20
399/399 [==============================] - 2s 5ms/step - loss: 0.0604 - sparse_categorical_accuracy: 0.9856 - val_loss: 0.1390 - val_sparse_categorical_accuracy: 0.9769
Epoch 12/20
399/399 [==============================] - 2s 5ms/step - loss: 0.0561 - sparse_categorical_accuracy: 0.9865 - val_loss: 0.1761 - val_sparse_categorical_accuracy: 0.9740
Epoch 13/20
399/399 [==============================] - 2s 5ms/step - loss: 0.0589 - sparse_categorical_accuracy: 0.9873 - val_loss: 0.1598 - val_sparse_categorical_accuracy: 0.9769
Epoch 14/20
399/399 [==============================] - 2s 5ms/step - loss: 0.0527 - sparse_categorical_accuracy: 0.9879 - val_loss: 0.1565 - val_sparse_categorical_accuracy: 0.9802
Epoch 15/20
399/399 [==============================] - 2s 5ms/step - loss: 0.0563 - sparse_categorical_accuracy: 0.9878 - val_loss: 0.1970 - val_sparse_categorical_accuracy: 0.9758
Epoch 16/20
399/399 [==============================] - 2s 5ms/step - loss: 0.0525 - sparse_categorical_accuracy: 0.9888 - val_loss: 0.1937 - val_sparse_categorical_accuracy: 0.9757
Epoch 17/20
399/399 [==============================] - 2s 5ms/step - loss: 0.0522 - sparse_categorical_accuracy: 0.9898 - val_loss: 0.1777 - val_sparse_categorical_accuracy: 0.9797
Epoch 18/20
399/399 [==============================] - 2s 5ms/step - loss: 0.0568 - sparse_categorical_accuracy: 0.9894 - val_loss: 0.1831 - val_sparse_categorical_accuracy: 0.9791
Epoch 19/20
399/399 [==============================] - 2s 5ms/step - loss: 0.0526 - sparse_categorical_accuracy: 0.9900 - val_loss: 0.1812 - val_sparse_categorical_accuracy: 0.9782
Epoch 20/20
399/399 [==============================] - 2s 5ms/step - loss: 0.0503 - sparse_categorical_accuracy: 0.9902 - val_loss: 0.2098 - val_sparse_categorical_accuracy: 0.9776
313/313 [==============================] - 0s 731us/step - loss: 0.2002 - sparse_categorical_accuracy: 0.9776
[0.20024622976779938, 0.9775999784469604]
Overview of how to use the TensorFlow NumPy API to write Keras models.

Introduction

NumPy is a hugely successful Python linear algebra library.
TensorFlow recently launched tf_numpy, a TensorFlow implementation of a large subset of the NumPy API. Thanks to tf_numpy, you can write Keras layers or models in the NumPy style!
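For instance (a quick sketch, not part of the original example), familiar NumPy-style calls work unchanged through tnp, but produce TensorFlow tensors:

```python
import tensorflow.experimental.numpy as tnp

# tnp mirrors the NumPy API but runs on TensorFlow tensors.
a = tnp.ones((2, 3))
b = tnp.matmul(a, tnp.ones((3, 2)))  # same call signature as np.matmul
print(b)  # a 2x2 tensor, every entry 3.0
```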
The TensorFlow NumPy API has full integration with the TensorFlow ecosystem. Features such as automatic differentiation, TensorBoard, Keras model callbacks, TPU distribution and model exporting are all supported.
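As a minimal illustration of that integration (a sketch, not taken from the tutorial itself), gradients flow through tnp operations under tf.GradientTape just as they do through regular TensorFlow ops:

```python
import tensorflow as tf
import tensorflow.experimental.numpy as tnp

# Differentiate through a tnp op with tf.GradientTape.
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = tnp.power(x, 2)  # y = x ** 2, computed via the tnp API
grad = tape.gradient(y, x)  # dy/dx = 2 * x
print(float(grad))  # 6.0
```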
Let's run through a few examples.
Setup

TensorFlow NumPy requires TensorFlow 2.5 or later.

import tensorflow as tf
import tensorflow.experimental.numpy as tnp
import keras
import keras.layers as layers
import numpy as np

Optionally, you can call tnp.experimental_enable_numpy_behavior() to enable type promotion in TensorFlow. This allows TNP to more closely follow the NumPy standard.

tnp.experimental_enable_numpy_behavior()
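To see the effect (a quick check under the assumption that numpy behavior is enabled as above), mixing an integer tensor with a Python float no longer raises a dtype error; instead the result is promoted to a floating dtype, as NumPy would do:

```python
import tensorflow as tf
import tensorflow.experimental.numpy as tnp

tnp.experimental_enable_numpy_behavior()

# int tensor + Python float: promoted to a floating result, as in NumPy.
result = tnp.asarray([1, 2], dtype=tf.int32) + 1.5
print(result.dtype)  # a floating dtype, following NumPy's promotion rules
print(result)  # values: 2.5 and 3.5
```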
To test our models, we will use the Boston housing prices regression dataset.

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.boston_housing.load_data(
    path="boston_housing.npz", test_split=0.2, seed=113
)

def evaluate_model(model: keras.Model):
    [loss, percent_error] = model.evaluate(x_test, y_test, verbose=0)
    print("Mean absolute percent error before training: ", percent_error)
    model.fit(x_train, y_train, epochs=200, verbose=0)
    [loss, percent_error] = model.evaluate(x_test, y_test, verbose=0)
    print("Mean absolute percent error after training:", percent_error)
Subclassing keras.Model with TNP

The most flexible way to make use of the Keras API is to subclass the [keras.Model](/api/models/model#model-class) class. Subclassing the Model class gives you the ability to fully customize what occurs in the training loop. This makes subclassing Model a popular option for researchers.
In this example, we will implement a Model subclass that performs regression over the Boston housing dataset using the TNP API. Note that differentiation and gradient descent are handled automatically when using the TNP API alongside Keras.
First, let's define a simple TNPForwardFeedRegressionNetwork class.
class TNPForwardFeedRegressionNetwork(keras.Model):
    def __init__(self, blocks=None, **kwargs):
        super(TNPForwardFeedRegressionNetwork, self).__init__(**kwargs)
        if not isinstance(blocks, list):
            raise ValueError(f"blocks must be a list, got blocks={blocks}")
        self.blocks = blocks
        self.block_weights = None
        self.biases = None

    def build(self, input_shape):
        current_shape = input_shape[1]
        self.block_weights = []
        self.biases = []
        for i, block in enumerate(self.blocks):
            self.block_weights.append(
                self.add_weight(
                    shape=(current_shape, block), trainable=True, name=f"block-{i}"
                )
            )
            self.biases.append(
                self.add_weight(shape=(block,), trainable=True, name=f"bias-{i}")
            )
            current_shape = block
        self.linear_layer = self.add_weight(
            shape=(current_shape, 1), name="linear_projector", trainable=True
        )

    def call(self, inputs):
        activations = inputs
        for w, b in zip(self.block_weights, self.biases):
            activations = tnp.matmul(activations, w) + b
            # ReLU activation
            activations = tnp.maximum(activations, 0.0)
        return tnp.matmul(activations, self.linear_layer)