=================================================================
Total params: 58
Trainable params: 58
Non-trainable params: 0
_________________________________________________________________
Mean absolute percent error before training: 101.17143249511719
Mean absolute percent error after training: 23.479856491088867
You can also seamlessly switch between TNP layers and native Keras layers!
def create_mixed_model():
    return keras.Sequential(
        [
            TNPDense(3, activation=tnp_relu),
            # The model will have no issue using a normal Dense layer
            layers.Dense(3, activation="relu"),
            # ... or switching back to TNP layers!
            TNPDense(1),
        ]
    )
model = create_mixed_model()
model.compile(
    optimizer="adam",
    loss="mean_squared_error",
    metrics=[keras.metrics.MeanAbsolutePercentageError()],
)
model.build((None, 13))
model.summary()
evaluate_model(model)
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
tnp_dense_3 (TNPDense)       (None, 3)                 42
_________________________________________________________________
dense (Dense)                (None, 3)                 12
_________________________________________________________________
tnp_dense_4 (TNPDense)       (None, 1)                 4
=================================================================
Total params: 58
Trainable params: 58
Non-trainable params: 0
_________________________________________________________________
Mean absolute percent error before training: 104.59967041015625
Mean absolute percent error after training: 27.712949752807617
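The parameter counts in these summaries follow the standard dense-layer formula: each fully connected layer stores `inputs × units` weights plus one bias per unit. A quick sanity check of the totals above (plain Python, no TensorFlow required — `dense_param_count` is a helper name introduced here for illustration):

```python
def dense_param_count(n_inputs, n_units):
    # One weight per (input, unit) pair, plus one bias per unit.
    return n_inputs * n_units + n_units

# Layer shapes from the summary above: 13 features -> 3 -> 3 -> 1 units.
counts = [dense_param_count(13, 3), dense_param_count(3, 3), dense_param_count(3, 1)]
print(counts)       # [42, 12, 4]
print(sum(counts))  # 58
```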
The Keras API offers a wide variety of layers. The ability to use them alongside NumPy code can be a huge time saver in projects.
Distribution Strategy
TensorFlow NumPy and Keras integrate with TensorFlow Distribution Strategies. This makes it simple to perform distributed training across multiple GPUs, or even an entire TPU Pod.
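The `TNPDense` layer and `tnp_relu` activation used in the mixed model are defined earlier in this guide. For reference, a minimal sketch of such a TNP-backed dense layer (not the guide's exact code; written against `tf.experimental.numpy`) might look like this:

```python
import tensorflow as tf
import tensorflow.experimental.numpy as tnp
from tensorflow import keras


def tnp_relu(x):
    # ReLU expressed with a TNP op instead of tf.nn.relu.
    return tnp.maximum(x, 0)


class TNPDense(keras.layers.Layer):
    def __init__(self, units, activation=None):
        super().__init__()
        self.units = units
        self.activation = activation

    def build(self, input_shape):
        # Standard Keras weight creation; the forward pass below
        # consumes these variables through TNP ops.
        self.w = self.add_weight(
            name="weights",
            shape=(input_shape[-1], self.units),
            initializer="random_normal",
        )
        self.b = self.add_weight(
            name="bias", shape=(self.units,), initializer="zeros"
        )

    def call(self, inputs):
        outputs = tnp.matmul(inputs, self.w) + self.b
        if self.activation:
            return self.activation(outputs)
        return outputs
```

Because TNP ops produce ordinary TensorFlow tensors, a layer written this way composes freely with built-in `keras.layers` in the same `Sequential` model.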
gpus = tf.config.list_logical_devices("GPU")
if gpus:
    strategy = tf.distribute.MirroredStrategy(gpus)
else:
    # Fall back to the default (no-op) strategy on CPU.
    strategy = tf.distribute.get_strategy()
print("Running with strategy:", str(strategy.__class__.__name__))
with strategy.scope():
    model = create_layered_tnp_model()
    model.compile(
        optimizer="adam",
        loss="mean_squared_error",
        metrics=[keras.metrics.MeanAbsolutePercentageError()],
    )
    model.build((None, 13))
    model.summary()
    evaluate_model(model)
Running with strategy: _DefaultDistributionStrategy
Model: "sequential_2"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
tnp_dense_5 (TNPDense)       (None, 3)                 42
_________________________________________________________________
tnp_dense_6 (TNPDense)       (None, 3)                 12
_________________________________________________________________
tnp_dense_7 (TNPDense)       (None, 1)                 4
=================================================================
Total params: 58
Trainable params: 58
Non-trainable params: 0
_________________________________________________________________
Mean absolute percent error before training: 100.5331039428711
Mean absolute percent error after training: 20.71842384338379
TensorBoard Integration
One of the many benefits of using the Keras API is the ability to monitor training through TensorBoard. Using the TensorFlow NumPy API alongside Keras allows you to easily leverage TensorBoard.
keras.backend.clear_session()
To load TensorBoard from a Jupyter notebook, you can run the following magic:
%load_ext tensorboard
models = [
    (TNPForwardFeedRegressionNetwork(blocks=[3, 3]), "TNPForwardFeedRegressionNetwork"),
    (create_layered_tnp_model(), "layered_tnp_model"),
    (create_mixed_model(), "mixed_model"),
]
for model, model_name in models: