<jupyter_start><jupyter_text>Data-efficient GANs with Adaptive Discriminator Augmentation

**Author:** [András Béres](https://www.linkedin.com/in/andras-beres-789190210)
**Date created:** 2021/10/28
**Last modified:** 2021/10/28
**Description:** Generating images from limited data using the Caltech Birds dataset.

Introduction

GANs

[Generative Adversarial Networks (GANs)](https://arxiv.org/abs/1406.2661) are a popular class of generative deep learning models, commonly used for image generation. They consist of a pair of dueling neural networks, called the discriminator and the generator. The discriminator's task is to distinguish real images from generated (fake) ones, while the generator network tries to fool the discriminator by generating more and more realistic images. If the discriminator is however too easy or too hard to fool, it might fail to provide a useful learning signal for the generator, which is why training GANs is usually considered a difficult task.

Data augmentation for GANs

Data augmentation, a popular technique in deep learning, is the process of randomly applying semantics-preserving transformations to the input data to generate multiple realistic versions of it, thereby effectively multiplying the amount of training data available. The simplest example is left-right flipping an image, which preserves its contents while generating a second unique training sample. Data augmentation is commonly used in supervised learning to prevent overfitting and enhance generalization.

The authors of [StyleGAN2-ADA](https://arxiv.org/abs/2006.06676) show that discriminator overfitting can be an issue in GANs, especially when only a small amount of training data is available. They propose Adaptive Discriminator Augmentation to mitigate this issue.

Applying data augmentation to GANs however is not straightforward. Since the generator is updated using the discriminator's gradients, if the generated images are augmented, the augmentation pipeline has to be differentiable and also has to be GPU-compatible for computational efficiency. Luckily, the [Keras image augmentation layers](https://keras.io/api/layers/preprocessing_layers/image_augmentation/) fulfill both these requirements, and are therefore very well suited for this task.

Invertible data augmentation

A possible difficulty when using data augmentation in generative models is the issue of ["leaky augmentations" (section 2.2)](https://arxiv.org/abs/2006.06676), namely when the model generates images that are already augmented. This would mean that it was not able to separate the augmentation from the underlying data distribution, which can be caused by using non-invertible data transformations. For example, if 0, 90, 180 or 270 degree rotations are performed with equal probability, the original orientation of the images is impossible to infer, and this information is destroyed.

A simple trick to make data augmentations invertible is to only apply them with some probability. That way the original version of the images will be more common, and the data distribution can be inferred. By properly choosing this probability, one can effectively regularize the discriminator without making the augmentations leaky.

Setup<jupyter_code>import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow import keras
from tensorflow.keras import layers<jupyter_output><empty_output><jupyter_text>Hyperparameters<jupyter_code># data
num_epochs = 10 # train for 400 epochs for good results
image_size = 64
# resolution of Kernel Inception Distance measurement, see related section
kid_image_size = 75
padding = 0.25
dataset_name = "caltech_birds2011"
# adaptive discriminator augmentation
max_translation = 0.125
max_rotation = 0.125
max_zoom = 0.25
target_accuracy = 0.85
integration_steps = 1000
# architecture
noise_size = 64
depth = 4
width = 128
leaky_relu_slope = 0.2
dropout_rate = 0.4
# optimization
batch_size = 128
learning_rate = 2e-4
beta_1 = 0.5 # not using the default value of 0.9 is important
ema = 0.99<jupyter_output><empty_output><jupyter_text>Data pipeline

In this example, we will use the [Caltech Birds (2011)](https://www.tensorflow.org/datasets/catalog/caltech_birds2011) dataset for generating images of birds, which is a diverse natural dataset containing less than 6000 images for training. When working with such low amounts of data, one has to take extra care to retain as high data quality as possible. In this example, we use the provided bounding boxes of the birds to cut them out with square crops while preserving their aspect ratios when possible.<jupyter_code>def round_to_int(float_value):
return tf.cast(tf.math.round(float_value), dtype=tf.int32)
def preprocess_image(data):
# unnormalize bounding box coordinates
height = tf.cast(tf.shape(data["image"])[0], dtype=tf.float32)
width = tf.cast(tf.shape(data["image"])[1], dtype=tf.float32)
bounding_box = data["bbox"] * tf.stack([height, width, height, width])
# calculate center and length of longer side, add padding
target_center_y = 0.5 * (bounding_box[0] + bounding_box[2])
target_center_x = 0.5 * (bounding_box[1] + bounding_box[3])
target_size = tf.maximum(
(1.0 + padding) * (bounding_box[2] - bounding_box[0]),
(1.0 + padding) * (bounding_box[3] - bounding_box[1]),
)
# modify crop size to fit into image
target_height = tf.reduce_min(
[target_size, 2.0 * target_center_y, 2.0 * (height - target_center_y)]
)
target_width = tf.reduce_min(
[target_size, 2.0 * target_center_x, 2.0 * (width - target_center_x)]
)
# crop image
image = tf.image.crop_to_bounding_box(
data["image"],
offset_height=round_to_int(target_center_y - 0.5 * target_height),
offset_width=round_to_int(target_center_x - 0.5 * target_width),
target_height=round_to_int(target_height),
target_width=round_to_int(target_width),
)
# resize and clip
# for image downsampling, area interpolation is the preferred method
image = tf.image.resize(
image, size=[image_size, image_size], method=tf.image.ResizeMethod.AREA
)
return tf.clip_by_value(image / 255.0, 0.0, 1.0)
def prepare_dataset(split):
# the validation dataset is shuffled as well, because data order matters
# for the KID calculation
return (
tfds.load(dataset_name, split=split, shuffle_files=True)
.map(preprocess_image, num_parallel_calls=tf.data.AUTOTUNE)
.cache()
.shuffle(10 * batch_size)
.batch(batch_size, drop_remainder=True)
.prefetch(buffer_size=tf.data.AUTOTUNE)
)
train_dataset = prepare_dataset("train")
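# Illustrative addition (not part of the original example): preview a few
# preprocessed training images to sanity-check the bounding-box cropping.
# matplotlib was imported as plt in the setup cell above.
def show_first_images(dataset, num_images=8):
    images = next(iter(dataset))  # one batch of shape (batch_size, image_size, image_size, 3)
    plt.figure(figsize=(2.0 * num_images, 2.0))
    for i in range(num_images):
        plt.subplot(1, num_images, i + 1)
        plt.imshow(images[i].numpy())
        plt.axis("off")
    plt.show()

show_first_images(train_dataset)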
val_dataset = prepare_dataset("test")<jupyter_output><empty_output><jupyter_text>After preprocessing, the training images look like the following:

Kernel inception distance

[Kernel Inception Distance (KID)](https://arxiv.org/abs/1801.01401) was proposed as a replacement for the popular [Frechet Inception Distance (FID)](https://arxiv.org/abs/1706.08500) metric for measuring image generation quality. Both metrics measure the difference between the generated and training distributions in the representation space of an [InceptionV3](https://keras.io/api/applications/inceptionv3/) network pretrained on [ImageNet](https://www.tensorflow.org/datasets/catalog/imagenet2012).

According to the paper, KID was proposed because FID has no unbiased estimator: its expected value is higher when it is measured on fewer images. KID is more suitable for small datasets because its expected value does not depend on the number of samples it is measured on. In my experience it is also computationally lighter, numerically more stable, and simpler to implement because it can be estimated in a per-batch manner.

In this example, the images are evaluated at the minimal possible resolution of the Inception network (75x75 instead of 299x299), and the metric is only measured on the validation set for computational efficiency.<jupyter_code>class KID(keras.metrics.Metric):
def __init__(self, name="kid", **kwargs):
super().__init__(name=name, **kwargs)
# KID is estimated per batch and is averaged across batches
self.kid_tracker = keras.metrics.Mean()
# a pretrained InceptionV3 is used without its classification layer
# transform the pixel values to the 0-255 range, then use the same
# preprocessing as during pretraining
self.encoder = keras.Sequential(
[
layers.InputLayer(input_shape=(image_size, image_size, 3)),
layers.Rescaling(255.0),
layers.Resizing(height=kid_image_size, width=kid_image_size),
layers.Lambda(keras.applications.inception_v3.preprocess_input),
keras.applications.InceptionV3(
include_top=False,
input_shape=(kid_image_size, kid_image_size, 3),
weights="imagenet",
),
layers.GlobalAveragePooling2D(),
],
name="inception_encoder",
)
def polynomial_kernel(self, features_1, features_2):
feature_dimensions = tf.cast(tf.shape(features_1)[1], dtype=tf.float32)
return (features_1 @ tf.transpose(features_2) / feature_dimensions + 1.0) ** 3.0
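    # Note (added for clarity): KID estimates the squared maximum mean discrepancy
    # MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 * E[k(x, y)]
    # between real (x) and generated (y) features, using the cubic polynomial kernel
    # k(a, b) = (a . b / d + 1)^3 computed above, where d is the feature dimension.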
def update_state(self, real_images, generated_images, sample_weight=None):
real_features = self.encoder(real_images, training=False)
generated_features = self.encoder(generated_images, training=False)
# compute polynomial kernels using the two sets of features
kernel_real = self.polynomial_kernel(real_features, real_features)
kernel_generated = self.polynomial_kernel(
generated_features, generated_features
)
kernel_cross = self.polynomial_kernel(real_features, generated_features)
# estimate the squared maximum mean discrepancy using the average kernel values
batch_size = tf.shape(real_features)[0]
batch_size_f = tf.cast(batch_size, dtype=tf.float32)
mean_kernel_real = tf.reduce_sum(kernel_real * (1.0 - tf.eye(batch_size))) / (
batch_size_f * (batch_size_f - 1.0)
)
mean_kernel_generated = tf.reduce_sum(
kernel_generated * (1.0 - tf.eye(batch_size))
) / (batch_size_f * (batch_size_f - 1.0))
mean_kernel_cross = tf.reduce_mean(kernel_cross)
kid = mean_kernel_real + mean_kernel_generated - 2.0 * mean_kernel_cross
# update the average KID estimate
self.kid_tracker.update_state(kid)
def result(self):
return self.kid_tracker.result()
def reset_state(self):
self.kid_tracker.reset_state()<jupyter_output><empty_output><jupyter_text>Adaptive discriminator augmentation

The authors of [StyleGAN2-ADA](https://arxiv.org/abs/2006.06676) propose to change the augmentation probability adaptively during training. Though it is explained differently in the paper, they use [integral control](https://en.wikipedia.org/wiki/PID_controller#Integral) on the augmentation probability to keep the discriminator's accuracy on real images close to a target value. Note that their controlled variable is actually the average sign of the discriminator logits (r_t in the paper), which corresponds to 2 * accuracy - 1.

This method requires two hyperparameters:

1. `target_accuracy`: the target value for the discriminator's accuracy on real images. I recommend selecting its value from the 80-90% range.
2. [`integration_steps`](https://en.wikipedia.org/wiki/PID_controller#Mathematical_form): the number of update steps required for an accuracy error of 100% to transform into an augmentation probability increase of 100%. To give an intuition, this defines how slowly the augmentation probability is changed. I recommend setting this to a relatively high value (1000 in this case) so that the augmentation strength is only adjusted slowly.

The main motivation for this procedure is that the optimal value of the target accuracy is similar across different dataset sizes (see figures 4 and 5 in [the paper](https://arxiv.org/abs/2006.06676)), so it does not have to be re-tuned, because the process automatically applies stronger data augmentation when it is needed.<jupyter_code># "hard sigmoid", useful for binary accuracy calculation from logits
def step(values):
# negative values -> 0.0, positive values -> 1.0
return 0.5 * (1.0 + tf.sign(values))
# augments images with a probability that is dynamically updated during training
class AdaptiveAugmenter(keras.Model):
def __init__(self):
super().__init__()
# stores the current probability of an image being augmented
self.probability = tf.Variable(0.0)
# the corresponding augmentation names from the paper are shown above each layer
# the authors show (see figure 4), that the blitting and geometric augmentations
# are the most helpful in the low-data regime
self.augmenter = keras.Sequential(
[
layers.InputLayer(input_shape=(image_size, image_size, 3)),
# blitting/x-flip:
layers.RandomFlip("horizontal"),
# blitting/integer translation:
layers.RandomTranslation(
height_factor=max_translation,
width_factor=max_translation,
interpolation="nearest",
),
# geometric/rotation:
layers.RandomRotation(factor=max_rotation),
# geometric/isotropic and anisotropic scaling:
layers.RandomZoom(
height_factor=(-max_zoom, 0.0), width_factor=(-max_zoom, 0.0)
),
],
name="adaptive_augmenter",
)
def call(self, images, training):
if training:
augmented_images = self.augmenter(images, training)
# during training either the original or the augmented images are selected
# based on self.probability
augmentation_values = tf.random.uniform(
shape=(batch_size, 1, 1, 1), minval=0.0, maxval=1.0
)
augmentation_bools = tf.math.less(augmentation_values, self.probability)
images = tf.where(augmentation_bools, augmented_images, images)
return images
def update(self, real_logits):
current_accuracy = tf.reduce_mean(step(real_logits))
# the augmentation probability is updated based on the discriminator's
# accuracy on real images
accuracy_error = current_accuracy - target_accuracy
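        # e.g. with target_accuracy = 0.85 and integration_steps = 1000, a real
        # accuracy staying at 0.95 raises the augmentation probability by 1e-4 per step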
self.probability.assign(
tf.clip_by_value(
self.probability + accuracy_error / integration_steps, 0.0, 1.0
)
)<jupyter_output><empty_output><jupyter_text>Network architecture

Here we specify the architecture of the two networks:

* generator: maps a random vector to an image, which should be as realistic as possible
* discriminator: maps an image to a scalar score, which should be high for real and low for generated images

GANs tend to be sensitive to the network architecture. I implemented a DCGAN architecture in this example, because it is relatively stable during training while being simple to implement. We use a constant number of filters throughout the network, use a sigmoid instead of tanh in the last layer of the generator, and use default initialization instead of random normal as further simplifications.

As a good practice, we disable the learnable scale parameter in the batch normalization layers. On one hand, the following relu + convolutional layers make it redundant (as noted in the [documentation](https://keras.io/api/layers/normalization_layers/batch_normalization/)). On the other hand, it should be disabled based on theory when using [spectral normalization (section 4.1)](https://arxiv.org/abs/1802.05957), which is not used here, but is common in GANs. We also disable the bias in the fully connected and convolutional layers, because the following batch normalization makes it redundant.<jupyter_code># DCGAN generator
def get_generator():
noise_input = keras.Input(shape=(noise_size,))
x = layers.Dense(4 * 4 * width, use_bias=False)(noise_input)
x = layers.BatchNormalization(scale=False)(x)
x = layers.ReLU()(x)
x = layers.Reshape(target_shape=(4, 4, width))(x)
for _ in range(depth - 1):
x = layers.Conv2DTranspose(
width, kernel_size=4, strides=2, padding="same", use_bias=False,
)(x)
x = layers.BatchNormalization(scale=False)(x)
x = layers.ReLU()(x)
image_output = layers.Conv2DTranspose(
3, kernel_size=4, strides=2, padding="same", activation="sigmoid",
)(x)
return keras.Model(noise_input, image_output, name="generator")
# DCGAN discriminator
def get_discriminator():
image_input = keras.Input(shape=(image_size, image_size, 3))
x = image_input
for _ in range(depth):
x = layers.Conv2D(
width, kernel_size=4, strides=2, padding="same", use_bias=False,
)(x)
x = layers.BatchNormalization(scale=False)(x)
x = layers.LeakyReLU(alpha=leaky_relu_slope)(x)
x = layers.Flatten()(x)
x = layers.Dropout(dropout_rate)(x)
output_score = layers.Dense(1)(x)
return keras.Model(image_input, output_score, name="discriminator")<jupyter_output><empty_output><jupyter_text>GAN model<jupyter_code>class GAN_ADA(keras.Model):
def __init__(self):
super().__init__()
self.augmenter = AdaptiveAugmenter()
self.generator = get_generator()
self.ema_generator = keras.models.clone_model(self.generator)
self.discriminator = get_discriminator()
self.generator.summary()
self.discriminator.summary()
def compile(self, generator_optimizer, discriminator_optimizer, **kwargs):
super().compile(**kwargs)
# separate optimizers for the two networks
self.generator_optimizer = generator_optimizer
self.discriminator_optimizer = discriminator_optimizer
self.generator_loss_tracker = keras.metrics.Mean(name="g_loss")
self.discriminator_loss_tracker = keras.metrics.Mean(name="d_loss")
self.real_accuracy = keras.metrics.BinaryAccuracy(name="real_acc")
self.generated_accuracy = keras.metrics.BinaryAccuracy(name="gen_acc")
self.augmentation_probability_tracker = keras.metrics.Mean(name="aug_p")
self.kid = KID()
@property
def metrics(self):
return [
self.generator_loss_tracker,
self.discriminator_loss_tracker,
self.real_accuracy,
self.generated_accuracy,
self.augmentation_probability_tracker,
self.kid,
]
def generate(self, batch_size, training):
latent_samples = tf.random.normal(shape=(batch_size, noise_size))
# use ema_generator during inference
if training:
generated_images = self.generator(latent_samples, training)
else:
generated_images = self.ema_generator(latent_samples, training)
return generated_images
def adversarial_loss(self, real_logits, generated_logits):
# this is usually called the non-saturating GAN loss
real_labels = tf.ones(shape=(batch_size, 1))
generated_labels = tf.zeros(shape=(batch_size, 1))
# the generator tries to produce images that the discriminator considers as real
generator_loss = keras.losses.binary_crossentropy(
real_labels, generated_logits, from_logits=True
)
# the discriminator tries to determine if images are real or generated
discriminator_loss = keras.losses.binary_crossentropy(
tf.concat([real_labels, generated_labels], axis=0),
tf.concat([real_logits, generated_logits], axis=0),
from_logits=True,
)
return tf.reduce_mean(generator_loss), tf.reduce_mean(discriminator_loss)
def train_step(self, real_images):
real_images = self.augmenter(real_images, training=True)
# use persistent gradient tape because gradients will be calculated twice
with tf.GradientTape(persistent=True) as tape:
generated_images = self.generate(batch_size, training=True)
# gradient is calculated through the image augmentation
generated_images = self.augmenter(generated_images, training=True)
# separate forward passes for the real and generated images, meaning
# that batch normalization is applied separately
real_logits = self.discriminator(real_images, training=True)
generated_logits = self.discriminator(generated_images, training=True)
generator_loss, discriminator_loss = self.adversarial_loss(
real_logits, generated_logits
)
# calculate gradients and update weights
generator_gradients = tape.gradient(
generator_loss, self.generator.trainable_weights
)
discriminator_gradients = tape.gradient(
discriminator_loss, self.discriminator.trainable_weights
)
self.generator_optimizer.apply_gradients(
zip(generator_gradients, self.generator.trainable_weights)
)
self.discriminator_optimizer.apply_gradients(
zip(discriminator_gradients, self.discriminator.trainable_weights)
)
# update the augmentation probability based on the discriminator's performance
self.augmenter.update(real_logits)
self.generator_loss_tracker.update_state(generator_loss)
self.discriminator_loss_tracker.update_state(discriminator_loss)
self.real_accuracy.update_state(1.0, step(real_logits))
self.generated_accuracy.update_state(0.0, step(generated_logits))
self.augmentation_probability_tracker.update_state(self.augmenter.probability)
# track the exponential moving average of the generator's weights to decrease
# variance in the generation quality
for weight, ema_weight in zip(
self.generator.weights, self.ema_generator.weights
):
ema_weight.assign(ema * ema_weight + (1 - ema) * weight)
# KID is not measured during the training phase for computational efficiency
return {m.name: m.result() for m in self.metrics[:-1]}
def test_step(self, real_images):
generated_images = self.generate(batch_size, training=False)
self.kid.update_state(real_images, generated_images)
# only KID is measured during the evaluation phase for computational efficiency
return {self.kid.name: self.kid.result()}
def plot_images(self, epoch=None, logs=None, num_rows=3, num_cols=6, interval=5):
# plot random generated images for visual evaluation of generation quality
if epoch is None or (epoch + 1) % interval == 0:
num_images = num_rows * num_cols
generated_images = self.generate(num_images, training=False)
plt.figure(figsize=(num_cols * 2.0, num_rows * 2.0))
for row in range(num_rows):
for col in range(num_cols):
index = row * num_cols + col
plt.subplot(num_rows, num_cols, index + 1)
plt.imshow(generated_images[index])
plt.axis("off")
plt.tight_layout()
plt.show()
plt.close()<jupyter_output><empty_output><jupyter_text>Training

One should see from the metrics during training that if the real accuracy (the discriminator's accuracy on real images) is below the target accuracy, the augmentation probability is increased, and vice versa. In my experience, during a healthy GAN training, the discriminator accuracy should stay in the 80-95% range. Below that, the discriminator is too weak; above that, it is too strong.

Note that we track the exponential moving average of the generator's weights, and use that for image generation and KID evaluation.<jupyter_code># create and compile the model
model = GAN_ADA()
model.compile(
generator_optimizer=keras.optimizers.Adam(learning_rate, beta_1),
discriminator_optimizer=keras.optimizers.Adam(learning_rate, beta_1),
)
# save the best model based on the validation KID metric
checkpoint_path = "gan_model"
checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_path,
save_weights_only=True,
monitor="val_kid",
mode="min",
save_best_only=True,
)
# run training and plot generated images periodically
model.fit(
train_dataset,
epochs=num_epochs,
validation_data=val_dataset,
callbacks=[
keras.callbacks.LambdaCallback(on_epoch_end=model.plot_images),
checkpoint_callback,
],
)<jupyter_output><empty_output><jupyter_text>Inference<jupyter_code># load the best model and generate images
model.load_weights(checkpoint_path)
model.plot_images()<jupyter_output><empty_output>
# Drug Molecule Generation with VAE
**Author:** [Victor Basu](https://www.linkedin.com/in/victor-basu-520958147)<br>
**Date created:** 2022/03/10<br>
**Last modified:** 2022/03/24<br>
**Description:** Implementing a Convolutional Variational AutoEncoder (VAE) for Drug Discovery.
<img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/generative/ipynb/molecule_generation.ipynb) <span class="k-dot">β’</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/examples/generative/molecule_generation.py)
---
## Introduction
In this example, we use a Variational Autoencoder to generate molecules for drug discovery.
We use the research papers
[Automatic chemical design using a data-driven continuous representation of molecules](https://arxiv.org/abs/1610.02415)
and [MolGAN: An implicit generative model for small molecular graphs](https://arxiv.org/abs/1805.11973)
as a reference.
The model described in the paper **Automatic chemical design using a data-driven
continuous representation of molecules** generates new molecules via efficient exploration
of open-ended spaces of chemical compounds. The model consists of
three components: Encoder, Decoder and Predictor. The Encoder converts the discrete
representation of a molecule into a real-valued continuous vector, and the Decoder
converts these continuous vectors back to discrete molecule representations. The
Predictor estimates chemical properties from the latent continuous vector representation
of the molecule. Continuous representations allow the use of gradient-based
optimization to efficiently guide the search for optimized functional compounds.
![intro](https://bit.ly/3CtPMzM)
**Figure (a)** - A diagram of the autoencoder used for molecule design, including the
joint property prediction model. Starting from a discrete molecule representation, such
as a SMILES string, the encoder network converts each molecule into a vector in the
latent space, which is effectively a continuous molecule representation. Given a point
in the latent space, the decoder network produces a corresponding SMILES string. A
multilayer perceptron network estimates the value of target properties associated with
each molecule.
**Figure (b)** - Gradient-based optimization in continuous latent space. After training a
surrogate model `f(z)` to predict the properties of molecules based on their latent
representation `z`, we can optimize `f(z)` with respect to `z` to find new latent
representations expected to match specific desired properties. These new latent
representations can then be decoded into SMILES strings, at which point their properties
can be tested empirically.
For an explanation and implementation of MolGAN, please refer to the Keras Example
[**WGAN-GP with R-GCN for the generation of small molecular graphs**](https://bit.ly/3pU6zXK) by
Alexander Kensert. Many of the functions used in the present example are from the above Keras example.
---
## Setup
RDKit is an open source toolkit for cheminformatics and machine learning. This toolkit comes in handy
if one is working in the drug discovery domain. In this example, RDKit is used to conveniently
and efficiently transform SMILES to molecule objects, and then from those obtain sets of atoms
and bonds.
Quoting from
[WGAN-GP with R-GCN for the generation of small molecular graphs](https://keras.io/examples/generative/wgan-graphs/):
**"SMILES expresses the structure of a given molecule in the form of an ASCII string.
The SMILES string is a compact encoding which, for smaller molecules, is relatively human-readable.
Encoding molecules as a string both alleviates and facilitates database and/or web searching
of a given molecule. RDKit uses algorithms to accurately transform a given SMILES to
a molecule object, which can then be used to compute a great number of molecular properties/features."**
```python
!pip -q install rdkit-pypi==2021.9.4
```
```python
import ast
import pandas as pd
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import matplotlib.pyplot as plt
from rdkit import Chem, RDLogger
from rdkit.Chem import BondType
from rdkit.Chem.Draw import MolsToGridImage
RDLogger.DisableLog("rdApp.*")
```
<div class="k-default-codeblock">
```
     |████████████████████████████████| 20.6 MB 1.2 MB/s
```
</div>
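
As a quick, optional illustration of the RDKit workflow described above (this snippet is not
part of the original pipeline), we can parse a single SMILES string into a molecule object and
read a couple of basic facts back from it. The aspirin SMILES used here is just a toy input:

```python
# Minimal RDKit sanity check: SMILES string -> molecule object -> basic properties
smiles = "CC(=O)Oc1ccccc1C(=O)O"  # aspirin, used only as an example input
molecule = Chem.MolFromSmiles(smiles)
print(Chem.MolToSmiles(molecule))  # canonical SMILES
print(molecule.GetNumAtoms())  # number of heavy atoms
```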
---
## Dataset
We use the [**ZINC - A Free Database of Commercially Available Compounds for
Virtual Screening**](https://bit.ly/3IVBI4x) dataset. The dataset comes with molecule
formulae in SMILES representation along with their respective molecular properties such as
**logP** (water-octanol partition coefficient), **SAS** (synthetic
accessibility score) and **QED** (Qualitative Estimate of Drug-likeness).
```python
csv_path = keras.utils.get_file(
"/content/250k_rndm_zinc_drugs_clean_3.csv",
"https://raw.githubusercontent.com/aspuru-guzik-group/chemical_vae/master/models/zinc_properties/250k_rndm_zinc_drugs_clean_3.csv",
)
df = pd.read_csv("/content/250k_rndm_zinc_drugs_clean_3.csv")
df["smiles"] = df["smiles"].apply(lambda s: s.replace("\n", ""))
df.head()
```
<div class="k-default-codeblock">
```
Downloading data from https://raw.githubusercontent.com/aspuru-guzik-group/chemical_vae/master/models/zinc_properties/250k_rndm_zinc_drugs_clean_3.csv
22606589/22606589 [==============================] - 0s 0us/step
```
</div>
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
<div class="k-default-codeblock">
```
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
```
</div>
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>smiles</th>
<th>logP</th>
<th>qed</th>
<th>SAS</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>CC(C)(C)c1ccc2occ(CC(=O)Nc3ccccc3F)c2c1</td>
<td>5.05060</td>
<td>0.702012</td>
<td>2.084095</td>
</tr>
<tr>
<th>1</th>
<td>C[C@@H]1CC(Nc2cncc(-c3nncn3C)c2)C[C@@H](C)C1</td>
<td>3.11370</td>
<td>0.928975</td>
<td>3.432004</td>
</tr>
<tr>
<th>2</th>
<td>N#Cc1ccc(-c2ccc(O[C@@H](C(=O)N3CCCC3)c3ccccc3)...</td>
<td>4.96778</td>
<td>0.599682</td>
<td>2.470633</td>
</tr>
<tr>
<th>3</th>
<td>CCOC(=O)[C@@H]1CCCN(C(=O)c2nc(-c3ccc(C)cc3)n3c...</td>
<td>4.00022</td>
<td>0.690944</td>
<td>2.822753</td>
</tr>
<tr>
<th>4</th>
<td>N#CC1=C(SCC(=O)Nc2cccc(Cl)c2)N=C([O-])[C@H](C#...</td>
<td>3.60956</td>
<td>0.789027</td>
<td>4.035182</td>
</tr>
</tbody>
</table>
</div>
---
## Hyperparameters
```python
SMILE_CHARSET = '["C", "B", "F", "I", "H", "O", "N", "S", "P", "Cl", "Br"]'
bond_mapping = {"SINGLE": 0, "DOUBLE": 1, "TRIPLE": 2, "AROMATIC": 3}
bond_mapping.update(
{0: BondType.SINGLE, 1: BondType.DOUBLE, 2: BondType.TRIPLE, 3: BondType.AROMATIC}
)
SMILE_CHARSET = ast.literal_eval(SMILE_CHARSET)
MAX_MOLSIZE = max(df["smiles"].str.len())
SMILE_to_index = dict((c, i) for i, c in enumerate(SMILE_CHARSET))
index_to_SMILE = dict((i, c) for i, c in enumerate(SMILE_CHARSET))
atom_mapping = dict(SMILE_to_index)
atom_mapping.update(index_to_SMILE)
BATCH_SIZE = 100
EPOCHS = 10
VAE_LR = 5e-4
NUM_ATOMS = 120 # Maximum number of atoms
ATOM_DIM = len(SMILE_CHARSET) # Number of atom types
BOND_DIM = 4 + 1 # Number of bond types
LATENT_DIM = 435 # Size of the latent space
def smiles_to_graph(smiles):
# Converts SMILES to molecule object
molecule = Chem.MolFromSmiles(smiles)
# Initialize adjacency and feature tensor
adjacency = np.zeros((BOND_DIM, NUM_ATOMS, NUM_ATOMS), "float32")
features = np.zeros((NUM_ATOMS, ATOM_DIM), "float32")
# loop over each atom in molecule
for atom in molecule.GetAtoms():
i = atom.GetIdx()
atom_type = atom_mapping[atom.GetSymbol()]
features[i] = np.eye(ATOM_DIM)[atom_type]
# loop over one-hop neighbors
for neighbor in atom.GetNeighbors():
j = neighbor.GetIdx()
bond = molecule.GetBondBetweenAtoms(i, j)
bond_type_idx = bond_mapping[bond.GetBondType().name]
adjacency[bond_type_idx, [i, j], [j, i]] = 1
# Where no bond, add 1 to last channel (indicating "non-bond")
# Notice: channels-first
adjacency[-1, np.sum(adjacency, axis=0) == 0] = 1
# Where no atom, add 1 to last column (indicating "non-atom")
features[np.where(np.sum(features, axis=1) == 0)[0], -1] = 1
return adjacency, features
def graph_to_molecule(graph):
# Unpack graph
adjacency, features = graph
# RWMol is a molecule object intended to be edited
molecule = Chem.RWMol()
# Remove "no atoms" & atoms with no bonds
keep_idx = np.where(
(np.argmax(features, axis=1) != ATOM_DIM - 1)
& (np.sum(adjacency[:-1], axis=(0, 1)) != 0)
)[0]
features = features[keep_idx]
adjacency = adjacency[:, keep_idx, :][:, :, keep_idx]
# Add atoms to molecule
for atom_type_idx in np.argmax(features, axis=1):
atom = Chem.Atom(atom_mapping[atom_type_idx])
_ = molecule.AddAtom(atom)
# Add bonds between atoms in molecule; based on the upper triangles
# of the [symmetric] adjacency tensor
(bonds_ij, atoms_i, atoms_j) = np.where(np.triu(adjacency) == 1)
for (bond_ij, atom_i, atom_j) in zip(bonds_ij, atoms_i, atoms_j):
if atom_i == atom_j or bond_ij == BOND_DIM - 1:
continue
bond_type = bond_mapping[bond_ij]
molecule.AddBond(int(atom_i), int(atom_j), bond_type)
# Sanitize the molecule; for more information on sanitization, see
# https://www.rdkit.org/docs/RDKit_Book.html#molecular-sanitization
flag = Chem.SanitizeMol(molecule, catchErrors=True)
# Let's be strict. If sanitization fails, return None
if flag != Chem.SanitizeFlags.SANITIZE_NONE:
return None
return molecule
```
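
As an optional sanity check (not part of the original example), we can round-trip one SMILES
string through the two helpers defined above; `graph_to_molecule` returns `None` whenever
RDKit sanitization fails:

```python
# Illustrative round-trip check: SMILES -> graph -> molecule
test_graph = smiles_to_graph(df["smiles"][0])
test_molecule = graph_to_molecule(test_graph)
print(Chem.MolToSmiles(test_molecule) if test_molecule is not None else "Sanitization failed")
```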
---
## Generate training set
```python
train_df = df.sample(frac=0.75, random_state=42) # random state is a seed value
train_df.reset_index(drop=True, inplace=True)
adjacency_tensor, feature_tensor, qed_tensor = [], [], []
for idx in range(8000):
adjacency, features = smiles_to_graph(train_df.loc[idx]["smiles"])
qed = train_df.loc[idx]["qed"]
adjacency_tensor.append(adjacency)
feature_tensor.append(features)
qed_tensor.append(qed)
adjacency_tensor = np.array(adjacency_tensor)
feature_tensor = np.array(feature_tensor)
qed_tensor = np.array(qed_tensor)
class RelationalGraphConvLayer(keras.layers.Layer):
def __init__(
self,
units=128,
activation="relu",
use_bias=False,
kernel_initializer="glorot_uniform",
bias_initializer="zeros",
kernel_regularizer=None,
bias_regularizer=None,
**kwargs
):
super().__init__(**kwargs)
self.units = units
self.activation = keras.activations.get(activation)
self.use_bias = use_bias
self.kernel_initializer = keras.initializers.get(kernel_initializer)
self.bias_initializer = keras.initializers.get(bias_initializer)
self.kernel_regularizer = keras.regularizers.get(kernel_regularizer)
self.bias_regularizer = keras.regularizers.get(bias_regularizer)
def build(self, input_shape):
bond_dim = input_shape[0][1]
atom_dim = input_shape[1][2]
self.kernel = self.add_weight(
shape=(bond_dim, atom_dim, self.units),
initializer=self.kernel_initializer,
regularizer=self.kernel_regularizer,
trainable=True,
name="W",
dtype=tf.float32,
)
if self.use_bias:
self.bias = self.add_weight(
shape=(bond_dim, 1, self.units),
initializer=self.bias_initializer,
regularizer=self.bias_regularizer,
trainable=True,
name="b",
dtype=tf.float32,
)
self.built = True
def call(self, inputs, training=False):
adjacency, features = inputs
# Aggregate information from neighbors
x = tf.matmul(adjacency, features[:, None, :, :])
# Apply linear transformation
x = tf.matmul(x, self.kernel)
if self.use_bias:
x += self.bias
# Reduce bond types dim
x_reduced = tf.reduce_sum(x, axis=1)
# Apply non-linear transformation
return self.activation(x_reduced)
```
---
## Build the Encoder and Decoder
The Encoder takes as input a molecule's graph adjacency matrix and feature matrix.
These features are processed via a Graph Convolution layer, then are flattened and
processed by several Dense layers to derive `z_mean` and `log_var`, the
latent-space representation of the molecule.
**Graph Convolution layer**: The relational graph convolution layer implements
non-linearly transformed neighbourhood aggregations. We can define these layers as
follows:
`H_hat**(l+1) = σ(D_hat**(-1) * A_hat * H_hat**(l) * W**(l))`
Where `σ` denotes the non-linear transformation (commonly a ReLU activation), `A_hat` the
adjacency tensor, `H_hat**(l)` the feature tensor at the `l-th` layer, `D_hat**(-1)` the
inverse diagonal degree tensor of `A_hat`, and `W**(l)` the trainable weight tensor
at the `l-th` layer. Specifically, for each bond type (relation), the degree tensor
expresses, in the diagonal, the number of bonds attached to each atom.
Source:
[WGAN-GP with R-GCN for the generation of small molecular graphs](https://keras.io/examples/generative/wgan-graphs/)
The Decoder takes as input the latent-space representation and predicts
the graph adjacency matrix and feature matrix of the corresponding molecules.
```python
def get_encoder(
gconv_units, latent_dim, adjacency_shape, feature_shape, dense_units, dropout_rate
):
adjacency = keras.layers.Input(shape=adjacency_shape)
features = keras.layers.Input(shape=feature_shape)
# Propagate through one or more graph convolutional layers
features_transformed = features
for units in gconv_units:
features_transformed = RelationalGraphConvLayer(units)(
[adjacency, features_transformed]
)
# Reduce 2-D representation of molecule to 1-D
x = keras.layers.GlobalAveragePooling1D()(features_transformed)
# Propagate through one or more densely connected layers
for units in dense_units:
x = layers.Dense(units, activation="relu")(x)
x = layers.Dropout(dropout_rate)(x)
z_mean = layers.Dense(latent_dim, dtype="float32", name="z_mean")(x)
log_var = layers.Dense(latent_dim, dtype="float32", name="log_var")(x)
encoder = keras.Model([adjacency, features], [z_mean, log_var], name="encoder")
return encoder
def get_decoder(dense_units, dropout_rate, latent_dim, adjacency_shape, feature_shape):
latent_inputs = keras.Input(shape=(latent_dim,))
x = latent_inputs
for units in dense_units:
x = keras.layers.Dense(units, activation="tanh")(x)
x = keras.layers.Dropout(dropout_rate)(x)
# Map outputs of previous layer (x) to [continuous] adjacency tensors (x_adjacency)
x_adjacency = keras.layers.Dense(tf.math.reduce_prod(adjacency_shape))(x)
x_adjacency = keras.layers.Reshape(adjacency_shape)(x_adjacency)
# Symmetrify tensors in the last two dimensions
x_adjacency = (x_adjacency + tf.transpose(x_adjacency, (0, 1, 3, 2))) / 2
x_adjacency = keras.layers.Softmax(axis=1)(x_adjacency)
# Map outputs of previous layer (x) to [continuous] feature tensors (x_features)
x_features = keras.layers.Dense(tf.math.reduce_prod(feature_shape))(x)
x_features = keras.layers.Reshape(feature_shape)(x_features)
x_features = keras.layers.Softmax(axis=2)(x_features)
decoder = keras.Model(
latent_inputs, outputs=[x_adjacency, x_features], name="decoder"
)
return decoder
```
---
## Build the Sampling layer
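
The `Sampling` layer below implements the standard VAE reparameterization trick: instead of
sampling `z` directly from `N(z_mean, exp(z_log_var))`, it draws `epsilon ~ N(0, I)` and
computes `z = z_mean + exp(0.5 * z_log_var) * epsilon`, which keeps the sampling step
differentiable with respect to the encoder outputs.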
```python
class Sampling(layers.Layer):
def call(self, inputs):
z_mean, z_log_var = inputs
batch = tf.shape(z_log_var)[0]
dim = tf.shape(z_log_var)[1]
epsilon = tf.keras.backend.random_normal(shape=(batch, dim))
return z_mean + tf.exp(0.5 * z_log_var) * epsilon
```
---
## Build the VAE
This model is trained to optimize four losses:
* Categorical crossentropy
* KL divergence loss
* Property prediction loss
* Graph loss (gradient penalty)
The categorical crossentropy loss function measures the model's
reconstruction accuracy. The property prediction loss measures the discrepancy
between predicted and actual properties after running the latent representation
through a property prediction model; in this implementation it is computed via
binary crossentropy on the QED scores. The gradient
penalty is further guided by the model's property (QED) prediction.
A gradient penalty is an alternative soft constraint on
1-Lipschitz continuity, introduced as an improvement upon the weight clipping scheme
of the original WGAN
("1-Lipschitz continuity" means that the norm of the gradient is at most 1 at every single
point of the function).
It adds a regularization term to the loss function.
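
For reference, the KL term computed in `_compute_loss` below is the usual closed-form KL
divergence between the approximate posterior `N(z_mean, exp(z_log_var))` and a standard
normal prior, summed over the latent dimensions and averaged over the batch:

`kl_loss = -0.5 * sum(1 + z_log_var - z_mean**2 - exp(z_log_var))`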
```python
class MoleculeGenerator(keras.Model):
def __init__(self, encoder, decoder, max_len, **kwargs):
super().__init__(**kwargs)
self.encoder = encoder
self.decoder = decoder
self.property_prediction_layer = layers.Dense(1)
self.max_len = max_len
self.train_total_loss_tracker = keras.metrics.Mean(name="train_total_loss")
self.val_total_loss_tracker = keras.metrics.Mean(name="val_total_loss")
def train_step(self, data):
adjacency_tensor, feature_tensor, qed_tensor = data[0]
graph_real = [adjacency_tensor, feature_tensor]
self.batch_size = tf.shape(qed_tensor)[0]
with tf.GradientTape() as tape:
z_mean, z_log_var, qed_pred, gen_adjacency, gen_features = self(
graph_real, training=True
)
graph_generated = [gen_adjacency, gen_features]
total_loss = self._compute_loss(
z_log_var, z_mean, qed_tensor, qed_pred, graph_real, graph_generated
)
grads = tape.gradient(total_loss, self.trainable_weights)
self.optimizer.apply_gradients(zip(grads, self.trainable_weights))
self.train_total_loss_tracker.update_state(total_loss)
return {"loss": self.train_total_loss_tracker.result()}
def _compute_loss(
self, z_log_var, z_mean, qed_true, qed_pred, graph_real, graph_generated
):
adjacency_real, features_real = graph_real
adjacency_gen, features_gen = graph_generated
adjacency_loss = tf.reduce_mean(
tf.reduce_sum(
keras.losses.categorical_crossentropy(adjacency_real, adjacency_gen),
axis=(1, 2),
)
)
features_loss = tf.reduce_mean(
tf.reduce_sum(
keras.losses.categorical_crossentropy(features_real, features_gen),
axis=(1),
)
)
kl_loss = -0.5 * tf.reduce_sum(
1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), 1
)
kl_loss = tf.reduce_mean(kl_loss)
property_loss = tf.reduce_mean(
keras.losses.binary_crossentropy(qed_true, qed_pred)
)
graph_loss = self._gradient_penalty(graph_real, graph_generated)
return kl_loss + property_loss + graph_loss + adjacency_loss + features_loss
def _gradient_penalty(self, graph_real, graph_generated):
# Unpack graphs
adjacency_real, features_real = graph_real
adjacency_generated, features_generated = graph_generated
# Generate interpolated graphs (adjacency_interp and features_interp)
alpha = tf.random.uniform([self.batch_size])
alpha = tf.reshape(alpha, (self.batch_size, 1, 1, 1))
adjacency_interp = (adjacency_real * alpha) + (1 - alpha) * adjacency_generated
alpha = tf.reshape(alpha, (self.batch_size, 1, 1))
features_interp = (features_real * alpha) + (1 - alpha) * features_generated
# Compute the logits of interpolated graphs
with tf.GradientTape() as tape:
tape.watch(adjacency_interp)
tape.watch(features_interp)
_, _, logits, _, _ = self(
[adjacency_interp, features_interp], training=True
)
# Compute the gradients with respect to the interpolated graphs
grads = tape.gradient(logits, [adjacency_interp, features_interp])
# Compute the gradient penalty
grads_adjacency_penalty = (1 - tf.norm(grads[0], axis=1)) ** 2
grads_features_penalty = (1 - tf.norm(grads[1], axis=2)) ** 2
return tf.reduce_mean(
tf.reduce_mean(grads_adjacency_penalty, axis=(-2, -1))
+ tf.reduce_mean(grads_features_penalty, axis=(-1))
)
def inference(self, batch_size):
z = tf.random.normal((batch_size, LATENT_DIM))
reconstruction_adjacency, reconstruction_features = self.decoder.predict(z)
# obtain one-hot encoded adjacency tensor
adjacency = tf.argmax(reconstruction_adjacency, axis=1)
adjacency = tf.one_hot(adjacency, depth=BOND_DIM, axis=1)
# Remove potential self-loops from adjacency
adjacency = tf.linalg.set_diag(adjacency, tf.zeros(tf.shape(adjacency)[:-1]))
# obtain one-hot encoded feature tensor
features = tf.argmax(reconstruction_features, axis=2)
features = tf.one_hot(features, depth=ATOM_DIM, axis=2)
return [
graph_to_molecule([adjacency[i].numpy(), features[i].numpy()])
for i in range(batch_size)
]
def call(self, inputs):
z_mean, log_var = self.encoder(inputs)
z = Sampling()([z_mean, log_var])
gen_adjacency, gen_features = self.decoder(z)
property_pred = self.property_prediction_layer(z_mean)
return z_mean, log_var, property_pred, gen_adjacency, gen_features
```
---
## Train the model
```python
vae_optimizer = tf.keras.optimizers.Adam(learning_rate=VAE_LR)
encoder = get_encoder(
gconv_units=[9],
adjacency_shape=(BOND_DIM, NUM_ATOMS, NUM_ATOMS),
feature_shape=(NUM_ATOMS, ATOM_DIM),
latent_dim=LATENT_DIM,
dense_units=[512],
dropout_rate=0.0,
)
decoder = get_decoder(
dense_units=[128, 256, 512],
dropout_rate=0.2,
latent_dim=LATENT_DIM,
adjacency_shape=(BOND_DIM, NUM_ATOMS, NUM_ATOMS),
feature_shape=(NUM_ATOMS, ATOM_DIM),
)
model = MoleculeGenerator(encoder, decoder, MAX_MOLSIZE)
model.compile(vae_optimizer)
history = model.fit([adjacency_tensor, feature_tensor, qed_tensor], epochs=EPOCHS)
```
<div class="k-default-codeblock">
```
Epoch 1/10
250/250 [==============================] - 24s 84ms/step - loss: 68958.3946
Epoch 2/10
250/250 [==============================] - 20s 79ms/step - loss: 68819.8421
Epoch 3/10
250/250 [==============================] - 20s 79ms/step - loss: 68830.6720
Epoch 4/10
250/250 [==============================] - 20s 79ms/step - loss: 68816.1486
Epoch 5/10
250/250 [==============================] - 20s 79ms/step - loss: 68825.9977
Epoch 6/10
250/250 [==============================] - 19s 78ms/step - loss: 68818.0771
Epoch 7/10
250/250 [==============================] - 19s 77ms/step - loss: 68815.8525
Epoch 8/10
250/250 [==============================] - 20s 78ms/step - loss: 68820.5459
Epoch 9/10
250/250 [==============================] - 21s 83ms/step - loss: 68806.9465
Epoch 10/10
250/250 [==============================] - 21s 84ms/step - loss: 68805.9879
```
</div>
---
## Inference
We use our model to generate new valid molecules from different points of the latent space.
### Generate unique Molecules with the model
```python
molecules = model.inference(1000)
MolsToGridImage(
[m for m in molecules if m is not None][:1000], molsPerRow=5, subImgSize=(260, 160)
)
```
![png](/img/examples/generative/molecule_generation/molecule_generation_21_0.png)
### Display latent space clusters with respect to molecular properties (QED)
---
```python
def plot_latent(vae, data, labels):
# display a 2D plot of the property in the latent space
z_mean, _ = vae.encoder.predict(data)
plt.figure(figsize=(12, 10))
plt.scatter(z_mean[:, 0], z_mean[:, 1], c=labels)
plt.colorbar()
plt.xlabel("z[0]")
plt.ylabel("z[1]")
plt.show()
plot_latent(model, [adjacency_tensor[:8000], feature_tensor[:8000]], qed_tensor[:8000])
```
![png](/img/examples/generative/molecule_generation/molecule_generation_23_0.png)
---
## Conclusion
In this example, we combined model architectures from two papers,
"Automatic chemical design using a data-driven continuous representation of
molecules" from 2016 and the "MolGAN" paper from 2018. The former paper
treats SMILES inputs as strings and seeks to generate molecule strings in SMILES format,
while the latter paper considers SMILES inputs as graphs (a combination of adjacency
matrices and feature matrices) and seeks to generate molecules as graphs.
This hybrid approach enables a new type of directed gradient-based search through chemical space.
Example available on HuggingFace
| Trained Model | Demo |
| :--: | :--: |
| [![Generic badge](https://img.shields.io/badge/%F0%9F%A4%97%20Model-molecule%20generation%20with%20VAE-black.svg)](https://huggingface.co/keras-io/drug-molecule-generation-with-VAE) | [![Generic badge](https://img.shields.io/badge/%F0%9F%A4%97%20Spaces-molecule%20generation%20with%20VAE-black.svg)](https://huggingface.co/spaces/keras-io/generating-drug-molecule-with-VAE) |
"""
Title: A walk through latent space with Stable Diffusion
Authors: Ian Stenbit, [fchollet](https://twitter.com/fchollet), [lukewood](https://twitter.com/luke_wood_ml)
Date created: 2022/09/28
Last modified: 2022/09/28
Description: Explore the latent manifold of Stable Diffusion.
Accelerator: GPU
"""
"""
## Overview
Generative image models learn a "latent manifold" of the visual world:
a low-dimensional vector space where each point maps to an image.
Going from such a point on the manifold back to a displayable image
is called "decoding" -- in the Stable Diffusion model, this is handled by
the "decoder" model.
![The Stable Diffusion architecture](https://i.imgur.com/2uC8rYJ.png)
This latent manifold of images is continuous and interpolative, meaning that:
1. Moving a little on the manifold only changes the corresponding image a little (continuity).
2. For any two points A and B on the manifold (i.e. any two images), it is possible
to move from A to B via a path where each intermediate point is also on the manifold (i.e.
is also a valid image). Intermediate points would be called "interpolations" between
the two starting images.
Stable Diffusion isn't just an image model, though, it's also a natural language model.
It has two latent spaces: the image representation space learned by the
encoder used during training, and the prompt latent space
which is learned using a combination of pretraining and training-time
fine-tuning.
_Latent space walking_, or _latent space exploration_, is the process of
sampling a point in latent space and incrementally changing the latent
representation. Its most common application is generating animations
where each sampled point is fed to the decoder and is stored as a
frame in the final animation.
For high-quality latent representations, this produces coherent-looking
animations. These animations can provide insight into the feature map of the
latent space, and can ultimately lead to improvements in the training
process. One such GIF is displayed below:
![Panda to Plane](/img/examples/generative/random_walks_with_stable_diffusion/panda2plane.gif)
In this guide, we will show how to take advantage of the Stable Diffusion API
in KerasCV to perform prompt interpolation and circular walks through
Stable Diffusion's visual latent manifold, as well as through
the text encoder's latent manifold.
This guide assumes the reader has a
high-level understanding of Stable Diffusion.
If you haven't already, you should start
by reading the [Stable Diffusion Tutorial](https://keras.io/guides/keras_cv/generate_images_with_stable_diffusion/).
To start, we import KerasCV and load up a Stable Diffusion model using the
optimizations discussed in the tutorial
[Generate images with Stable Diffusion](https://keras.io/guides/keras_cv/generate_images_with_stable_diffusion/).
Note that if you are running with an M1 Mac GPU you should not enable mixed precision.
"""
"""shell
pip install keras-cv --upgrade --quiet
"""
import keras_cv
import keras
import matplotlib.pyplot as plt
from keras import ops
import numpy as np
import math
from PIL import Image
# Enable mixed precision
# (only do this if you have a recent NVIDIA GPU)
keras.mixed_precision.set_global_policy("mixed_float16")
# Instantiate the Stable Diffusion model
model = keras_cv.models.StableDiffusion(jit_compile=True)
"""
## Interpolating between text prompts
In Stable Diffusion, a text prompt is first encoded into a vector,
and that encoding is used to guide the diffusion process.
The latent encoding vector has shape
77x768 (that's huge!), and when we give Stable Diffusion a text prompt, we're
generating images from just one such point on the latent manifold.
To explore more of this manifold, we can interpolate between two text encodings
and generate images at those interpolated points:
"""
prompt_1 = "A watercolor painting of a Golden Retriever at the beach"
prompt_2 = "A still life DSLR photo of a bowl of fruit"
interpolation_steps = 5
encoding_1 = ops.squeeze(model.encode_text(prompt_1))
encoding_2 = ops.squeeze(model.encode_text(prompt_2))
interpolated_encodings = ops.linspace(encoding_1, encoding_2, interpolation_steps)
# Show the size of the latent manifold
print(f"Encoding shape: {encoding_1.shape}")
"""
Once we've interpolated the encodings, we can generate images from each point.
Note that in order to maintain some stability between the resulting images we
keep the diffusion noise constant between images.
"""
seed = 12345
noise = keras.random.normal((512 // 8, 512 // 8, 4), seed=seed)
images = model.generate_image(
interpolated_encodings,
batch_size=interpolation_steps,
diffusion_noise=noise,
)
"""
Now that we've generated some interpolated images, let's take a look at them!
Throughout this tutorial, we're going to export sequences of images as gifs so
that they can be easily viewed with some temporal context. For sequences of
images where the first and last images don't match conceptually, we rubber-band
the gif.
If you're running in Colab, you can view your own GIFs by running:
```
from IPython.display import Image as IImage
IImage("doggo-and-fruit-5.gif")
```
"""
def export_as_gif(filename, images, frames_per_second=10, rubber_band=False):
if rubber_band:
images += images[2:-1][::-1]
images[0].save(
filename,
save_all=True,
append_images=images[1:],
duration=1000 // frames_per_second,
loop=0,
)
export_as_gif(
"doggo-and-fruit-5.gif",
[Image.fromarray(img) for img in images],
frames_per_second=2,
rubber_band=True,
)
"""
![Dog to Fruit 5](https://i.imgur.com/4ZCxZY4.gif)
The results may seem surprising. Generally, interpolating between prompts
produces coherent looking images, and often demonstrates a progressive concept
shift between the contents of the two prompts. This is indicative of a high
quality representation space, that closely mirrors the natural structure
of the visual world.
To best visualize this, we should do a much more fine-grained interpolation,
using hundreds of steps. In order to keep batch size small (so that we don't
OOM our GPU), this requires manually batching our interpolated
encodings.
"""
interpolation_steps = 150
batch_size = 3
batches = interpolation_steps // batch_size
interpolated_encodings = ops.linspace(encoding_1, encoding_2, interpolation_steps)
batched_encodings = ops.split(interpolated_encodings, batches)
images = []
for batch in range(batches):
images += [
Image.fromarray(img)
for img in model.generate_image(
batched_encodings[batch],
batch_size=batch_size,
num_steps=25,
diffusion_noise=noise,
)
]
export_as_gif("doggo-and-fruit-150.gif", images, rubber_band=True)
"""
![Dog to Fruit 150](/img/examples/generative/random_walks_with_stable_diffusion/dog2fruit150.gif)
The resulting gif shows a much clearer and more coherent shift between the two
prompts. Try out some prompts of your own and experiment!
We can even extend this concept for more than one image. For example, we can
interpolate between four prompts:
"""
prompt_1 = "A watercolor painting of a Golden Retriever at the beach"
prompt_2 = "A still life DSLR photo of a bowl of fruit"
prompt_3 = "The eiffel tower in the style of starry night"
prompt_4 = "An architectural sketch of a skyscraper"
interpolation_steps = 6
batch_size = 3
batches = (interpolation_steps**2) // batch_size
encoding_1 = ops.squeeze(model.encode_text(prompt_1))
encoding_2 = ops.squeeze(model.encode_text(prompt_2))
encoding_3 = ops.squeeze(model.encode_text(prompt_3))
encoding_4 = ops.squeeze(model.encode_text(prompt_4))
interpolated_encodings = ops.linspace(
ops.linspace(encoding_1, encoding_2, interpolation_steps),
ops.linspace(encoding_3, encoding_4, interpolation_steps),
interpolation_steps,
)
interpolated_encodings = ops.reshape(
interpolated_encodings, (interpolation_steps**2, 77, 768)
)
batched_encodings = ops.split(interpolated_encodings, batches)
images = []
for batch in range(batches):
images.append(
model.generate_image(
batched_encodings[batch],
batch_size=batch_size,
diffusion_noise=noise,
)
)
def plot_grid(images, path, grid_size, scale=2):
fig, axs = plt.subplots(
grid_size, grid_size, figsize=(grid_size * scale, grid_size * scale)
)
fig.tight_layout()
plt.subplots_adjust(wspace=0, hspace=0)
plt.axis("off")
for ax in axs.flat:
ax.axis("off")
images = images.astype(int)
for i in range(min(grid_size * grid_size, len(images))):
ax = axs.flat[i]
ax.imshow(images[i].astype("uint8"))
ax.axis("off")
for i in range(len(images), grid_size * grid_size):
axs.flat[i].axis("off")
axs.flat[i].remove()
plt.savefig(
fname=path,
pad_inches=0,
bbox_inches="tight",
transparent=False,
dpi=60,
)
images = np.concatenate(images)
plot_grid(images, "4-way-interpolation.jpg", interpolation_steps)
"""
We can also interpolate while allowing diffusion noise to vary by dropping
the `diffusion_noise` parameter:
"""
images = []
for batch in range(batches):
images.append(model.generate_image(batched_encodings[batch], batch_size=batch_size))
images = np.concatenate(images)
plot_grid(images, "4-way-interpolation-varying-noise.jpg", interpolation_steps)
"""
Next up -- let's go for some walks!
## A walk around a text prompt
Our next experiment will be to go for a walk around the latent manifold
starting from a point produced by a particular prompt.
"""
walk_steps = 150
batch_size = 3
batches = walk_steps // batch_size
step_size = 0.005
encoding = ops.squeeze(
model.encode_text("The Eiffel Tower in the style of starry night")
)
# Note that (77, 768) is the shape of the text encoding.
delta = ops.ones_like(encoding) * step_size
walked_encodings = []
for step_index in range(walk_steps):
walked_encodings.append(encoding)
encoding += delta
walked_encodings = ops.stack(walked_encodings)
batched_encodings = ops.split(walked_encodings, batches)
images = []
for batch in range(batches):
images += [
Image.fromarray(img)
for img in model.generate_image(
batched_encodings[batch],
batch_size=batch_size,
num_steps=25,
diffusion_noise=noise,
)
]
export_as_gif("eiffel-tower-starry-night.gif", images, rubber_band=True)
"""
![Eiffel tower walk gif](https://i.imgur.com/9MMYtal.gif)
Perhaps unsurprisingly, walking too far from the encoder's latent manifold
produces images that look incoherent. Try it for yourself by setting
your own prompt, and adjusting `step_size` to increase or decrease the magnitude
of the walk. Note that when the magnitude of the walk gets large, the walk often
leads into areas which produce extremely noisy images.
## A circular walk through the diffusion noise space for a single prompt
Our final experiment is to stick to one prompt and explore the variety of images
that the diffusion model can produce from that prompt. We do this by controlling
the noise that is used to seed the diffusion process.
We create two noise components, `x` and `y`, and do a walk from 0 to 2π, summing
the cosine of our `x` component and the sine of our `y` component to produce noise.
Using this approach, the end of our walk arrives at the same noise inputs where
we began our walk, so we get a "loopable" result!
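Concretely, the noise used at step `t` is `cos(t) * x + sin(t) * y` with `t` running from 0
to 2π, so the first and last frames of the walk are generated from the same noise.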
"""
prompt = "An oil paintings of cows in a field next to a windmill in Holland"
encoding = ops.squeeze(model.encode_text(prompt))
walk_steps = 150
batch_size = 3
batches = walk_steps // batch_size
walk_noise_x = keras.random.normal(noise.shape, dtype="float64")
walk_noise_y = keras.random.normal(noise.shape, dtype="float64")
walk_scale_x = ops.cos(ops.linspace(0, 2, walk_steps) * math.pi)
walk_scale_y = ops.sin(ops.linspace(0, 2, walk_steps) * math.pi)
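# `tensordot` with `axes=0` is an outer product: each scalar in the cosine/sine schedule
# scales the full noise tensor, yielding one noise sample per walk step.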
noise_x = ops.tensordot(walk_scale_x, walk_noise_x, axes=0)
noise_y = ops.tensordot(walk_scale_y, walk_noise_y, axes=0)
noise = ops.add(noise_x, noise_y)
batched_noise = ops.split(noise, batches)
images = []
for batch in range(batches):
images += [
Image.fromarray(img)
for img in model.generate_image(
encoding,
batch_size=batch_size,
num_steps=25,
diffusion_noise=batched_noise[batch],
)
]
export_as_gif("cows.gif", images)
"""
![Happy Cows](/img/examples/generative/random_walks_with_stable_diffusion/happycows.gif)
Experiment with your own prompts and with different values of
`unconditional_guidance_scale`!
## Conclusion
Stable Diffusion offers a lot more than just single text-to-image generation.
Exploring the latent manifold of the text encoder and the noise space of the
diffusion model are two fun ways to experience the power of this model, and
KerasCV makes it easy!
"""
| keras-io/examples/generative/random_walks_with_stable_diffusion.py/0 | {
"file_path": "keras-io/examples/generative/random_walks_with_stable_diffusion.py",
"repo_id": "keras-io",
"token_count": 4341
} | 102 |
"""
Title: Simple custom layer example: Antirectifier
Author: [fchollet](https://twitter.com/fchollet)
Date created: 2016/01/06
Last modified: 2023/11/20
Description: Demonstration of custom layer creation.
Accelerator: GPU
"""
"""
## Introduction
This example shows how to create custom layers, using the Antirectifier layer
(originally proposed as a Keras example script in January 2016), an alternative
to ReLU. Instead of zeroing-out the negative part of the input, it splits the negative
and positive parts and returns the concatenation of the absolute value
of both. This avoids loss of information, at the cost of an increase in dimensionality.
To fix the dimensionality increase, we linearly combine the
features back to a space of the original size.
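For example, given the mean-centered input `[-2.0, 2.0]`, the positive part is `[0.0, 2.0]`
and the negative part is `[2.0, 0.0]`, so the layer works with the concatenation
`[0.0, 2.0, 2.0, 0.0]` before projecting it back down to two features.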
"""
"""
## Setup
"""
import keras
from keras import layers
from keras import ops
"""
## The Antirectifier layer
To implement a custom layer:
- Create the state variables via `add_weight()` in `__init__` or `build()`.
Similarly, you can also create sublayers.
- Implement the `call()` method, taking the layer's input tensor(s) and
returning the output tensor(s).
- Optionally, you can also enable serialization by implementing `get_config()`,
which returns a configuration dictionary.
See also the guide
[Making new layers and models via subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/).
"""
class Antirectifier(layers.Layer):
def __init__(self, initializer="he_normal", **kwargs):
super().__init__(**kwargs)
self.initializer = keras.initializers.get(initializer)
def build(self, input_shape):
output_dim = input_shape[-1]
self.kernel = self.add_weight(
shape=(output_dim * 2, output_dim),
initializer=self.initializer,
name="kernel",
trainable=True,
)
def call(self, inputs):
inputs -= ops.mean(inputs, axis=-1, keepdims=True)
pos = ops.relu(inputs)
neg = ops.relu(-inputs)
concatenated = ops.concatenate([pos, neg], axis=-1)
mixed = ops.matmul(concatenated, self.kernel)
return mixed
def get_config(self):
# Implement get_config to enable serialization. This is optional.
base_config = super().get_config()
config = {"initializer": keras.initializers.serialize(self.initializer)}
return dict(list(base_config.items()) + list(config.items()))
"""
## Let's test-drive it on MNIST
"""
# Training parameters
batch_size = 128
num_classes = 10
epochs = 20
# The data, split between train and test sets
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784)
x_test = x_test.reshape(-1, 784)
x_train = x_train.astype("float32")
x_test = x_test.astype("float32")
x_train /= 255
x_test /= 255
print(x_train.shape[0], "train samples")
print(x_test.shape[0], "test samples")
# Build the model
model = keras.Sequential(
[
keras.Input(shape=(784,)),
layers.Dense(256),
Antirectifier(),
layers.Dense(256),
Antirectifier(),
layers.Dropout(0.5),
layers.Dense(10),
]
)
# Compile the model
model.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=keras.optimizers.RMSprop(),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
# Train the model
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_split=0.15)
# Test the model
model.evaluate(x_test, y_test)
| keras-io/examples/keras_recipes/antirectifier.py/0 | {
"file_path": "keras-io/examples/keras_recipes/antirectifier.py",
"repo_id": "keras-io",
"token_count": 1295
} | 103 |
<jupyter_start><jupyter_text>Endpoint layer pattern**Author:** [fchollet](https://twitter.com/fchollet)**Date created:** 2019/05/10**Last modified:** 2023/11/22**Description:** Demonstration of the "endpoint layer" pattern (layer that handles loss management). Setup<jupyter_code>import os
os.environ["KERAS_BACKEND"] = "tensorflow"
import tensorflow as tf
import keras
import numpy as np<jupyter_output><empty_output><jupyter_text>Usage of endpoint layers in the Functional API An "endpoint layer" has access to the model's targets, and creates arbitrary losses in `call()` using `self.add_loss()` and `Metric.update_state()`. This enables you to define losses and metrics that don't match the usual signature `fn(y_true, y_pred, sample_weight=None)`. Note that you could have separate metrics for training and eval with this pattern.<jupyter_code>class LogisticEndpoint(keras.layers.Layer):
def __init__(self, name=None):
super().__init__(name=name)
self.loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
self.accuracy_metric = keras.metrics.BinaryAccuracy(name="accuracy")
def call(self, logits, targets=None, sample_weight=None):
if targets is not None:
# Compute the training-time loss value and add it
# to the layer using `self.add_loss()`.
loss = self.loss_fn(targets, logits, sample_weight)
self.add_loss(loss)
# Log the accuracy as a metric (we could log arbitrary metrics,
# including different metrics for training and inference.)
self.accuracy_metric.update_state(targets, logits, sample_weight)
# Return the inference-time prediction tensor (for `.predict()`).
        # With a single logit, sigmoid gives the probability (softmax over one unit is always 1).
        return tf.math.sigmoid(logits)
inputs = keras.Input((764,), name="inputs")
logits = keras.layers.Dense(1)(inputs)
targets = keras.Input((1,), name="targets")
sample_weight = keras.Input((1,), name="sample_weight")
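# The targets and sample weights are fed in as regular model inputs, so the endpoint layer
# can add the loss itself -- note that `compile()` below is not given a loss.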
preds = LogisticEndpoint()(logits, targets, sample_weight)
model = keras.Model([inputs, targets, sample_weight], preds)
data = {
"inputs": np.random.random((1000, 764)),
"targets": np.random.random((1000, 1)),
"sample_weight": np.random.random((1000, 1)),
}
model.compile(keras.optimizers.Adam(1e-3))
model.fit(data, epochs=2)<jupyter_output><empty_output><jupyter_text>Exporting an inference-only model Simply don't include `targets` in the model. The weights stay the same.<jupyter_code>inputs = keras.Input((764,), name="inputs")
logits = keras.layers.Dense(1)(inputs)
preds = LogisticEndpoint()(logits, targets=None, sample_weight=None)
inference_model = keras.Model(inputs, preds)
inference_model.set_weights(model.get_weights())
preds = inference_model.predict(np.random.random((1000, 764)))<jupyter_output><empty_output><jupyter_text>Usage of loss endpoint layers in subclassed models<jupyter_code>class LogReg(keras.Model):
def __init__(self):
super().__init__()
self.dense = keras.layers.Dense(1)
self.logistic_endpoint = LogisticEndpoint()
def call(self, inputs):
# Note that all inputs should be in the first argument
# since we want to be able to call `model.fit(inputs)`.
logits = self.dense(inputs["inputs"])
preds = self.logistic_endpoint(
logits=logits,
targets=inputs["targets"],
sample_weight=inputs["sample_weight"],
)
return preds
model = LogReg()
data = {
"inputs": np.random.random((1000, 764)),
"targets": np.random.random((1000, 1)),
"sample_weight": np.random.random((1000, 1)),
}
model.compile(keras.optimizers.Adam(1e-3))
model.fit(data, epochs=2)<jupyter_output><empty_output> | keras-io/examples/keras_recipes/ipynb/endpoint_layer_pattern.ipynb/0 | {
"file_path": "keras-io/examples/keras_recipes/ipynb/endpoint_layer_pattern.ipynb",
"repo_id": "keras-io",
"token_count": 1417
} | 104 |
# Keras debugging tips
**Author:** [fchollet](https://twitter.com/fchollet)<br>
**Date created:** 2020/05/16<br>
**Last modified:** 2023/11/16<br>
**Description:** Four simple tips to help you debug your Keras code.
<img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/keras_recipes/ipynb/debugging_tips.ipynb) <span class="k-dot">β’</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/examples/keras_recipes/debugging_tips.py)
---
## Introduction
It's generally possible to do almost anything in Keras *without writing code* per se:
whether you're implementing a new type of GAN or the latest convnet architecture for
image segmentation, you can usually stick to calling built-in methods. Because all
built-in methods do extensive input validation checks, you will have little to no
debugging to do. A Functional API model made entirely of built-in layers will work on
first try -- if you can compile it, it will run.
However, sometimes, you will need to dive deeper and write your own code. Here are some
common examples:
- Creating a new `Layer` subclass.
- Creating a custom `Metric` subclass.
- Implementing a custom `train_step` on a `Model`.
This document provides a few simple tips to help you navigate debugging in these
situations.
---
## Tip 1: test each part before you test the whole
If you've created any object that has a chance of not working as expected, don't just
drop it in your end-to-end process and watch sparks fly. Rather, test your custom object
in isolation first. This may seem obvious -- but you'd be surprised how often people
don't start with this.
- If you write a custom layer, don't call `fit()` on your entire model just yet. Call
your layer on some test data first.
- If you write a custom metric, start by printing its output for some reference inputs.
Here's a simple example. Let's write a custom layer with a bug in it:
```python
import os
# The last example uses tf.GradientTape and thus requires TensorFlow.
# However, all tips here are applicable with all backends.
os.environ["KERAS_BACKEND"] = "tensorflow"
import keras
from keras import layers
from keras import ops
import numpy as np
import tensorflow as tf
class MyAntirectifier(layers.Layer):
def build(self, input_shape):
output_dim = input_shape[-1]
self.kernel = self.add_weight(
shape=(output_dim * 2, output_dim),
initializer="he_normal",
name="kernel",
trainable=True,
)
def call(self, inputs):
# Take the positive part of the input
pos = ops.relu(inputs)
# Take the negative part of the input
neg = ops.relu(-inputs)
# Concatenate the positive and negative parts
concatenated = ops.concatenate([pos, neg], axis=0)
# Project the concatenation down to the same dimensionality as the input
return ops.matmul(concatenated, self.kernel)
```
Now, rather than using it in an end-to-end model directly, let's try to call the layer on
some test data:
```python
x = tf.random.normal(shape=(2, 5))
y = MyAntirectifier()(x)
```
We get the following error:
```
...
1 x = tf.random.normal(shape=(2, 5))
----> 2 y = MyAntirectifier()(x)
...
17 neg = tf.nn.relu(-inputs)
18 concatenated = tf.concat([pos, neg], axis=0)
---> 19 return tf.matmul(concatenated, self.kernel)
...
InvalidArgumentError: Matrix size-incompatible: In[0]: [4,5], In[1]: [10,5] [Op:MatMul]
```
Looks like our input tensor in the `matmul` op may have an incorrect shape.
Let's add a print statement to check the actual shapes:
```python
class MyAntirectifier(layers.Layer):
def build(self, input_shape):
output_dim = input_shape[-1]
self.kernel = self.add_weight(
shape=(output_dim * 2, output_dim),
initializer="he_normal",
name="kernel",
trainable=True,
)
def call(self, inputs):
pos = ops.relu(inputs)
neg = ops.relu(-inputs)
print("pos.shape:", pos.shape)
print("neg.shape:", neg.shape)
concatenated = ops.concatenate([pos, neg], axis=0)
print("concatenated.shape:", concatenated.shape)
print("kernel.shape:", self.kernel.shape)
return ops.matmul(concatenated, self.kernel)
```
We get the following:
```
pos.shape: (2, 5)
neg.shape: (2, 5)
concatenated.shape: (4, 5)
kernel.shape: (10, 5)
```
Turns out we had the wrong axis for the `concat` op! We should be concatenating `neg` and
`pos` along the feature axis 1, not the batch axis 0. Here's the correct version:
```python
class MyAntirectifier(layers.Layer):
def build(self, input_shape):
output_dim = input_shape[-1]
self.kernel = self.add_weight(
shape=(output_dim * 2, output_dim),
initializer="he_normal",
name="kernel",
trainable=True,
)
def call(self, inputs):
pos = ops.relu(inputs)
neg = ops.relu(-inputs)
print("pos.shape:", pos.shape)
print("neg.shape:", neg.shape)
concatenated = ops.concatenate([pos, neg], axis=1)
print("concatenated.shape:", concatenated.shape)
print("kernel.shape:", self.kernel.shape)
return ops.matmul(concatenated, self.kernel)
```
Now our code works fine:
```python
x = keras.random.normal(shape=(2, 5))
y = MyAntirectifier()(x)
```
<div class="k-default-codeblock">
```
pos.shape: (2, 5)
neg.shape: (2, 5)
concatenated.shape: (2, 10)
kernel.shape: (10, 5)
```
</div>
---
## Tip 2: use `model.summary()` and `plot_model()` to check layer output shapes
If you're working with complex network topologies, you're going to need a way
to visualize how your layers are connected and how they transform the data that passes
through them.
Here's an example. Consider this model with three inputs and two outputs (lifted from the
[Functional API guide](https://keras.io/guides/functional_api/#manipulate-complex-graph-topologies)):
```python
num_tags = 12 # Number of unique issue tags
num_words = 10000 # Size of vocabulary obtained when preprocessing text data
num_departments = 4 # Number of departments for predictions
title_input = keras.Input(
shape=(None,), name="title"
) # Variable-length sequence of ints
body_input = keras.Input(shape=(None,), name="body") # Variable-length sequence of ints
tags_input = keras.Input(
shape=(num_tags,), name="tags"
) # Binary vectors of size `num_tags`
# Embed each word in the title into a 64-dimensional vector
title_features = layers.Embedding(num_words, 64)(title_input)
# Embed each word in the text into a 64-dimensional vector
body_features = layers.Embedding(num_words, 64)(body_input)
# Reduce sequence of embedded words in the title into a single 128-dimensional vector
title_features = layers.LSTM(128)(title_features)
# Reduce sequence of embedded words in the body into a single 32-dimensional vector
body_features = layers.LSTM(32)(body_features)
# Merge all available features into a single large vector via concatenation
x = layers.concatenate([title_features, body_features, tags_input])
# Stick a logistic regression for priority prediction on top of the features
priority_pred = layers.Dense(1, name="priority")(x)
# Stick a department classifier on top of the features
department_pred = layers.Dense(num_departments, name="department")(x)
# Instantiate an end-to-end model predicting both priority and department
model = keras.Model(
inputs=[title_input, body_input, tags_input],
outputs=[priority_pred, department_pred],
)
```
Calling `summary()` can help you check the output shape of each layer:
```python
model.summary()
```
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold">Model: "functional_1"</span>
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace">βββββββββββββββββββββββ³ββββββββββββββββββββ³ββββββββββ³βββββββββββββββββββββββ
β<span style="font-weight: bold"> Layer (type) </span>β<span style="font-weight: bold"> Output Shape </span>β<span style="font-weight: bold"> Param # </span>β<span style="font-weight: bold"> Connected to </span>β
β‘βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ©
β title (<span style="color: #0087ff; text-decoration-color: #0087ff">InputLayer</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>) β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β - β
βββββββββββββββββββββββΌββββββββββββββββββββΌββββββββββΌβββββββββββββββββββββββ€
β body (<span style="color: #0087ff; text-decoration-color: #0087ff">InputLayer</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>) β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β - β
βββββββββββββββββββββββΌββββββββββββββββββββΌββββββββββΌβββββββββββββββββββββββ€
β embedding β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) β <span style="color: #00af00; text-decoration-color: #00af00">640,000</span> β title[<span style="color: #00af00; text-decoration-color: #00af00">0</span>][<span style="color: #00af00; text-decoration-color: #00af00">0</span>] β
β (<span style="color: #0087ff; text-decoration-color: #0087ff">Embedding</span>) β β β β
βββββββββββββββββββββββΌββββββββββββββββββββΌββββββββββΌβββββββββββββββββββββββ€
β embedding_1 β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) β <span style="color: #00af00; text-decoration-color: #00af00">640,000</span> β body[<span style="color: #00af00; text-decoration-color: #00af00">0</span>][<span style="color: #00af00; text-decoration-color: #00af00">0</span>] β
β (<span style="color: #0087ff; text-decoration-color: #0087ff">Embedding</span>) β β β β
βββββββββββββββββββββββΌββββββββββββββββββββΌββββββββββΌβββββββββββββββββββββββ€
β lstm (<span style="color: #0087ff; text-decoration-color: #0087ff">LSTM</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">128</span>) β <span style="color: #00af00; text-decoration-color: #00af00">98,816</span> β embedding[<span style="color: #00af00; text-decoration-color: #00af00">0</span>][<span style="color: #00af00; text-decoration-color: #00af00">0</span>] β
βββββββββββββββββββββββΌββββββββββββββββββββΌββββββββββΌβββββββββββββββββββββββ€
β lstm_1 (<span style="color: #0087ff; text-decoration-color: #0087ff">LSTM</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">32</span>) β <span style="color: #00af00; text-decoration-color: #00af00">12,416</span> β embedding_1[<span style="color: #00af00; text-decoration-color: #00af00">0</span>][<span style="color: #00af00; text-decoration-color: #00af00">0</span>] β
βββββββββββββββββββββββΌββββββββββββββββββββΌββββββββββΌβββββββββββββββββββββββ€
β tags (<span style="color: #0087ff; text-decoration-color: #0087ff">InputLayer</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">12</span>) β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β - β
βββββββββββββββββββββββΌββββββββββββββββββββΌββββββββββΌβββββββββββββββββββββββ€
β concatenate β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">172</span>) β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β lstm[<span style="color: #00af00; text-decoration-color: #00af00">0</span>][<span style="color: #00af00; text-decoration-color: #00af00">0</span>], β
β (<span style="color: #0087ff; text-decoration-color: #0087ff">Concatenate</span>) β β β lstm_1[<span style="color: #00af00; text-decoration-color: #00af00">0</span>][<span style="color: #00af00; text-decoration-color: #00af00">0</span>], β
β β β β tags[<span style="color: #00af00; text-decoration-color: #00af00">0</span>][<span style="color: #00af00; text-decoration-color: #00af00">0</span>] β
βββββββββββββββββββββββΌββββββββββββββββββββΌββββββββββΌβββββββββββββββββββββββ€
β priority (<span style="color: #0087ff; text-decoration-color: #0087ff">Dense</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">1</span>) β <span style="color: #00af00; text-decoration-color: #00af00">173</span> β concatenate[<span style="color: #00af00; text-decoration-color: #00af00">0</span>][<span style="color: #00af00; text-decoration-color: #00af00">0</span>] β
βββββββββββββββββββββββΌββββββββββββββββββββΌββββββββββΌβββββββββββββββββββββββ€
β department (<span style="color: #0087ff; text-decoration-color: #0087ff">Dense</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">4</span>) β <span style="color: #00af00; text-decoration-color: #00af00">692</span> β concatenate[<span style="color: #00af00; text-decoration-color: #00af00">0</span>][<span style="color: #00af00; text-decoration-color: #00af00">0</span>] β
βββββββββββββββββββββββ΄ββββββββββββββββββββ΄ββββββββββ΄βββββββββββββββββββββββ
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Total params: </span><span style="color: #00af00; text-decoration-color: #00af00">1,392,097</span> (5.31 MB)
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Trainable params: </span><span style="color: #00af00; text-decoration-color: #00af00">1,392,097</span> (5.31 MB)
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Non-trainable params: </span><span style="color: #00af00; text-decoration-color: #00af00">0</span> (0.00 B)
</pre>
You can also visualize the entire network topology alongside output shapes using
`plot_model`:
```python
keras.utils.plot_model(model, show_shapes=True)
```
![png](/img/examples/keras_recipes/debugging_tips/debugging_tips_15_0.png)
With this plot, any connectivity-level error becomes immediately obvious.
---
## Tip 3: to debug what happens during `fit()`, use `run_eagerly=True`
The `fit()` method is fast: it runs a well-optimized, fully-compiled computation graph.
That's great for performance, but it also means that the code you're executing isn't the
Python code you've written. This can be problematic when debugging. As you may recall,
Python is slow -- so we use it as a staging language, not as an execution language.
Thankfully, there's an easy way to run your code in "debug mode", fully eagerly:
pass `run_eagerly=True` to `compile()`. Your call to `fit()` will now get executed line
by line, without any optimization. It's slower, but it makes it possible to print the
value of intermediate tensors, or to use a Python debugger. Great for debugging.
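In its simplest form, it's just one extra argument to `compile()` -- shown here with a
generic optimizer and loss purely for illustration:

```python
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    run_eagerly=True,  # Run the training step line by line so prints and debuggers work.
)
```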
Here's a basic example: let's write a really simple model with a custom `train_step()` method.
Our model just implements gradient descent, but instead of first-order gradients,
it uses a combination of first-order and second-order gradients. Pretty simple so far.
Can you spot what we're doing wrong?
```python
class MyModel(keras.Model):
def train_step(self, data):
inputs, targets = data
trainable_vars = self.trainable_variables
with tf.GradientTape() as tape2:
with tf.GradientTape() as tape1:
y_pred = self(inputs, training=True) # Forward pass
# Compute the loss value
# (the loss function is configured in `compile()`)
loss = self.compute_loss(y=targets, y_pred=y_pred)
# Compute first-order gradients
dl_dw = tape1.gradient(loss, trainable_vars)
# Compute second-order gradients
d2l_dw2 = tape2.gradient(dl_dw, trainable_vars)
# Combine first-order and second-order gradients
grads = [0.5 * w1 + 0.5 * w2 for (w1, w2) in zip(d2l_dw2, dl_dw)]
# Update weights
self.optimizer.apply_gradients(zip(grads, trainable_vars))
# Update metrics (includes the metric that tracks the loss)
for metric in self.metrics:
if metric.name == "loss":
metric.update_state(loss)
else:
metric.update_state(targets, y_pred)
# Return a dict mapping metric names to current value
return {m.name: m.result() for m in self.metrics}
```
Let's train a simple model on MNIST with this custom `train_step()`.
We pick, somewhat at random, a batch size of 1024 and a learning rate of 0.01. The general
idea is to use larger batches and a larger learning rate than usual, since our
"improved" gradients should lead us to quicker convergence.
```python
# Construct an instance of MyModel
def get_model():
inputs = keras.Input(shape=(784,))
intermediate = layers.Dense(256, activation="relu")(inputs)
outputs = layers.Dense(10, activation="softmax")(intermediate)
model = MyModel(inputs, outputs)
return model
# Prepare data
(x_train, y_train), _ = keras.datasets.mnist.load_data()
x_train = np.reshape(x_train, (-1, 784)) / 255
model = get_model()
model.compile(
optimizer=keras.optimizers.SGD(learning_rate=1e-2),
loss="sparse_categorical_crossentropy",
)
model.fit(x_train, y_train, epochs=3, batch_size=1024, validation_split=0.1)
```
<div class="k-default-codeblock">
```
Epoch 1/3
53/53 ββββββββββββββββββββ 0s 7ms/step - loss: 2.4264 - val_loss: 2.3036
Epoch 2/3
53/53 ββββββββββββββββββββ 0s 6ms/step - loss: 2.3111 - val_loss: 2.3387
Epoch 3/3
53/53 ββββββββββββββββββββ 0s 7ms/step - loss: 2.3442 - val_loss: 2.3697
<keras.src.callbacks.history.History at 0x29a899600>
```
</div>
Oh no, it doesn't converge! Something is not working as planned.
Time for some step-by-step printing of what's going on with our gradients.
We add various `print` statements in the `train_step` method, and we make sure to pass
`run_eagerly=True` to `compile()` to run our code step-by-step, eagerly.
```python
class MyModel(keras.Model):
def train_step(self, data):
print()
print("----Start of step: %d" % (self.step_counter,))
self.step_counter += 1
inputs, targets = data
trainable_vars = self.trainable_variables
with tf.GradientTape() as tape2:
with tf.GradientTape() as tape1:
y_pred = self(inputs, training=True) # Forward pass
# Compute the loss value
# (the loss function is configured in `compile()`)
loss = self.compute_loss(y=targets, y_pred=y_pred)
# Compute first-order gradients
dl_dw = tape1.gradient(loss, trainable_vars)
# Compute second-order gradients
d2l_dw2 = tape2.gradient(dl_dw, trainable_vars)
print("Max of dl_dw[0]: %.4f" % tf.reduce_max(dl_dw[0]))
print("Min of dl_dw[0]: %.4f" % tf.reduce_min(dl_dw[0]))
print("Mean of dl_dw[0]: %.4f" % tf.reduce_mean(dl_dw[0]))
print("-")
print("Max of d2l_dw2[0]: %.4f" % tf.reduce_max(d2l_dw2[0]))
print("Min of d2l_dw2[0]: %.4f" % tf.reduce_min(d2l_dw2[0]))
print("Mean of d2l_dw2[0]: %.4f" % tf.reduce_mean(d2l_dw2[0]))
# Combine first-order and second-order gradients
grads = [0.5 * w1 + 0.5 * w2 for (w1, w2) in zip(d2l_dw2, dl_dw)]
# Update weights
self.optimizer.apply_gradients(zip(grads, trainable_vars))
# Update metrics (includes the metric that tracks the loss)
for metric in self.metrics:
if metric.name == "loss":
metric.update_state(loss)
else:
metric.update_state(targets, y_pred)
# Return a dict mapping metric names to current value
return {m.name: m.result() for m in self.metrics}
model = get_model()
model.compile(
optimizer=keras.optimizers.SGD(learning_rate=1e-2),
loss="sparse_categorical_crossentropy",
metrics=["sparse_categorical_accuracy"],
run_eagerly=True,
)
model.step_counter = 0
# We pass epochs=1 and steps_per_epoch=10 to only run 10 steps of training.
model.fit(x_train, y_train, epochs=1, batch_size=1024, verbose=0, steps_per_epoch=10)
```
<div class="k-default-codeblock">
```
----Start of step: 0
Max of dl_dw[0]: 0.0332
Min of dl_dw[0]: -0.0288
Mean of dl_dw[0]: 0.0003
-
Max of d2l_dw2[0]: 5.2691
Min of d2l_dw2[0]: -2.6968
Mean of d2l_dw2[0]: 0.0981
```
</div>
<div class="k-default-codeblock">
```
----Start of step: 1
Max of dl_dw[0]: 0.0445
Min of dl_dw[0]: -0.0169
Mean of dl_dw[0]: 0.0013
-
Max of d2l_dw2[0]: 3.3575
Min of d2l_dw2[0]: -1.9024
Mean of d2l_dw2[0]: 0.0726
```
</div>
<div class="k-default-codeblock">
```
----Start of step: 2
Max of dl_dw[0]: 0.0669
Min of dl_dw[0]: -0.0153
Mean of dl_dw[0]: 0.0013
-
Max of d2l_dw2[0]: 5.0661
Min of d2l_dw2[0]: -1.7168
Mean of d2l_dw2[0]: 0.0809
```
</div>
<div class="k-default-codeblock">
```
----Start of step: 3
Max of dl_dw[0]: 0.0545
Min of dl_dw[0]: -0.0125
Mean of dl_dw[0]: 0.0008
-
Max of d2l_dw2[0]: 6.5223
Min of d2l_dw2[0]: -0.6604
Mean of d2l_dw2[0]: 0.0991
```
</div>
<div class="k-default-codeblock">
```
----Start of step: 4
Max of dl_dw[0]: 0.0247
Min of dl_dw[0]: -0.0152
Mean of dl_dw[0]: -0.0001
-
Max of d2l_dw2[0]: 2.8030
Min of d2l_dw2[0]: -0.1156
Mean of d2l_dw2[0]: 0.0321
```
</div>
<div class="k-default-codeblock">
```
----Start of step: 5
Max of dl_dw[0]: 0.0051
Min of dl_dw[0]: -0.0096
Mean of dl_dw[0]: -0.0001
-
Max of d2l_dw2[0]: 0.2545
Min of d2l_dw2[0]: -0.0284
Mean of d2l_dw2[0]: 0.0079
```
</div>
<div class="k-default-codeblock">
```
----Start of step: 6
Max of dl_dw[0]: 0.0041
Min of dl_dw[0]: -0.0102
Mean of dl_dw[0]: -0.0001
-
Max of d2l_dw2[0]: 0.2198
Min of d2l_dw2[0]: -0.0175
Mean of d2l_dw2[0]: 0.0069
```
</div>
<div class="k-default-codeblock">
```
----Start of step: 7
Max of dl_dw[0]: 0.0035
Min of dl_dw[0]: -0.0086
Mean of dl_dw[0]: -0.0001
-
Max of d2l_dw2[0]: 0.1485
Min of d2l_dw2[0]: -0.0175
Mean of d2l_dw2[0]: 0.0060
```
</div>
<div class="k-default-codeblock">
```
----Start of step: 8
Max of dl_dw[0]: 0.0039
Min of dl_dw[0]: -0.0094
Mean of dl_dw[0]: -0.0001
-
Max of d2l_dw2[0]: 0.1454
Min of d2l_dw2[0]: -0.0130
Mean of d2l_dw2[0]: 0.0061
```
</div>
<div class="k-default-codeblock">
```
----Start of step: 9
Max of dl_dw[0]: 0.0028
Min of dl_dw[0]: -0.0087
Mean of dl_dw[0]: -0.0001
-
Max of d2l_dw2[0]: 0.1491
Min of d2l_dw2[0]: -0.0326
Mean of d2l_dw2[0]: 0.0058
<keras.src.callbacks.history.History at 0x2a0d1e440>
```
</div>
What did we learn?
- The first-order and second-order gradients can have values that differ by orders of
magnitude.
- Sometimes, they may not even have the same sign.
- Their values can vary greatly at each step.
This leads us to an obvious idea: let's normalize the gradients before combining them.
```python
class MyModel(keras.Model):
def train_step(self, data):
inputs, targets = data
trainable_vars = self.trainable_variables
with tf.GradientTape() as tape2:
with tf.GradientTape() as tape1:
y_pred = self(inputs, training=True) # Forward pass
# Compute the loss value
# (the loss function is configured in `compile()`)
loss = self.compute_loss(y=targets, y_pred=y_pred)
# Compute first-order gradients
dl_dw = tape1.gradient(loss, trainable_vars)
# Compute second-order gradients
d2l_dw2 = tape2.gradient(dl_dw, trainable_vars)
dl_dw = [tf.math.l2_normalize(w) for w in dl_dw]
d2l_dw2 = [tf.math.l2_normalize(w) for w in d2l_dw2]
# Combine first-order and second-order gradients
grads = [0.5 * w1 + 0.5 * w2 for (w1, w2) in zip(d2l_dw2, dl_dw)]
# Update weights
self.optimizer.apply_gradients(zip(grads, trainable_vars))
# Update metrics (includes the metric that tracks the loss)
for metric in self.metrics:
if metric.name == "loss":
metric.update_state(loss)
else:
metric.update_state(targets, y_pred)
# Return a dict mapping metric names to current value
return {m.name: m.result() for m in self.metrics}
model = get_model()
model.compile(
optimizer=keras.optimizers.SGD(learning_rate=1e-2),
loss="sparse_categorical_crossentropy",
metrics=["sparse_categorical_accuracy"],
)
model.fit(x_train, y_train, epochs=5, batch_size=1024, validation_split=0.1)
```
<div class="k-default-codeblock">
```
Epoch 1/5
53/53 ββββββββββββββββββββ 1s 7ms/step - sparse_categorical_accuracy: 0.1250 - loss: 2.3185 - val_loss: 2.0502 - val_sparse_categorical_accuracy: 0.3373
Epoch 2/5
53/53 ββββββββββββββββββββ 0s 6ms/step - sparse_categorical_accuracy: 0.3966 - loss: 1.9934 - val_loss: 1.8032 - val_sparse_categorical_accuracy: 0.5698
Epoch 3/5
53/53 ββββββββββββββββββββ 0s 7ms/step - sparse_categorical_accuracy: 0.5663 - loss: 1.7784 - val_loss: 1.6241 - val_sparse_categorical_accuracy: 0.6470
Epoch 4/5
53/53 ββββββββββββββββββββ 0s 7ms/step - sparse_categorical_accuracy: 0.6135 - loss: 1.6256 - val_loss: 1.5010 - val_sparse_categorical_accuracy: 0.6595
Epoch 5/5
53/53 ββββββββββββββββββββ 0s 7ms/step - sparse_categorical_accuracy: 0.6216 - loss: 1.5173 - val_loss: 1.4169 - val_sparse_categorical_accuracy: 0.6625
<keras.src.callbacks.history.History at 0x2a0d4c640>
```
</div>
Now, training converges! It doesn't work well at all, but at least the model learns
something.
After spending a few minutes tuning parameters, we get to the following configuration
that works somewhat well (achieves 97% validation accuracy and seems reasonably robust to
overfitting):
- Use `0.2 * w1 + 0.8 * w2` for combining gradients.
- Use a learning rate that decays over time (the final run below uses an inverse time decay schedule).
I'm not going to say that the idea works -- this isn't at all how you're supposed to do
second-order optimization (pointers: see the Newton & Gauss-Newton methods, quasi-Newton
methods, and BFGS). But hopefully this demonstration gave you an idea of how you can
debug your way out of uncomfortable training situations.
Remember: use `run_eagerly=True` for debugging what happens in `fit()`. And when your code
is finally working as expected, make sure to remove this flag in order to get the best
runtime performance!
Here's our final training run:
```python
class MyModel(keras.Model):
def train_step(self, data):
inputs, targets = data
trainable_vars = self.trainable_variables
with tf.GradientTape() as tape2:
with tf.GradientTape() as tape1:
y_pred = self(inputs, training=True) # Forward pass
# Compute the loss value
# (the loss function is configured in `compile()`)
loss = self.compute_loss(y=targets, y_pred=y_pred)
# Compute first-order gradients
dl_dw = tape1.gradient(loss, trainable_vars)
# Compute second-order gradients
d2l_dw2 = tape2.gradient(dl_dw, trainable_vars)
dl_dw = [tf.math.l2_normalize(w) for w in dl_dw]
d2l_dw2 = [tf.math.l2_normalize(w) for w in d2l_dw2]
# Combine first-order and second-order gradients
grads = [0.2 * w1 + 0.8 * w2 for (w1, w2) in zip(d2l_dw2, dl_dw)]
# Update weights
self.optimizer.apply_gradients(zip(grads, trainable_vars))
# Update metrics (includes the metric that tracks the loss)
for metric in self.metrics:
if metric.name == "loss":
metric.update_state(loss)
else:
metric.update_state(targets, y_pred)
# Return a dict mapping metric names to current value
return {m.name: m.result() for m in self.metrics}
model = get_model()
learning_rate = keras.optimizers.schedules.InverseTimeDecay(
initial_learning_rate=0.1, decay_steps=25, decay_rate=0.1
)
model.compile(
    optimizer=keras.optimizers.SGD(learning_rate),
loss="sparse_categorical_crossentropy",
metrics=["sparse_categorical_accuracy"],
)
model.fit(x_train, y_train, epochs=50, batch_size=2048, validation_split=0.1)
```
<div class="k-default-codeblock">
```
Epoch 1/50
27/27 ββββββββββββββββββββ 1s 14ms/step - sparse_categorical_accuracy: 0.5056 - loss: 1.7508 - val_loss: 0.6378 - val_sparse_categorical_accuracy: 0.8658
Epoch 2/50
27/27 ββββββββββββββββββββ 0s 10ms/step - sparse_categorical_accuracy: 0.8407 - loss: 0.6323 - val_loss: 0.4039 - val_sparse_categorical_accuracy: 0.8970
Epoch 3/50
27/27 ββββββββββββββββββββ 0s 10ms/step - sparse_categorical_accuracy: 0.8807 - loss: 0.4472 - val_loss: 0.3243 - val_sparse_categorical_accuracy: 0.9120
Epoch 4/50
27/27 ββββββββββββββββββββ 0s 10ms/step - sparse_categorical_accuracy: 0.8947 - loss: 0.3781 - val_loss: 0.2861 - val_sparse_categorical_accuracy: 0.9235
Epoch 5/50
27/27 ββββββββββββββββββββ 0s 11ms/step - sparse_categorical_accuracy: 0.9022 - loss: 0.3453 - val_loss: 0.2622 - val_sparse_categorical_accuracy: 0.9288
Epoch 6/50
27/27 ββββββββββββββββββββ 0s 11ms/step - sparse_categorical_accuracy: 0.9093 - loss: 0.3243 - val_loss: 0.2523 - val_sparse_categorical_accuracy: 0.9303
Epoch 7/50
27/27 ββββββββββββββββββββ 0s 11ms/step - sparse_categorical_accuracy: 0.9148 - loss: 0.3021 - val_loss: 0.2362 - val_sparse_categorical_accuracy: 0.9338
Epoch 8/50
27/27 ββββββββββββββββββββ 0s 11ms/step - sparse_categorical_accuracy: 0.9184 - loss: 0.2899 - val_loss: 0.2289 - val_sparse_categorical_accuracy: 0.9365
Epoch 9/50
27/27 ββββββββββββββββββββ 0s 11ms/step - sparse_categorical_accuracy: 0.9212 - loss: 0.2784 - val_loss: 0.2183 - val_sparse_categorical_accuracy: 0.9383
Epoch 10/50
27/27 ββββββββββββββββββββ 0s 11ms/step - sparse_categorical_accuracy: 0.9246 - loss: 0.2670 - val_loss: 0.2097 - val_sparse_categorical_accuracy: 0.9405
Epoch 11/50
27/27 ββββββββββββββββββββ 0s 11ms/step - sparse_categorical_accuracy: 0.9267 - loss: 0.2563 - val_loss: 0.2063 - val_sparse_categorical_accuracy: 0.9442
Epoch 12/50
27/27 ββββββββββββββββββββ 0s 12ms/step - sparse_categorical_accuracy: 0.9313 - loss: 0.2412 - val_loss: 0.1965 - val_sparse_categorical_accuracy: 0.9458
Epoch 13/50
27/27 ββββββββββββββββββββ 0s 12ms/step - sparse_categorical_accuracy: 0.9324 - loss: 0.2411 - val_loss: 0.1917 - val_sparse_categorical_accuracy: 0.9472
Epoch 14/50
27/27 ββββββββββββββββββββ 0s 12ms/step - sparse_categorical_accuracy: 0.9359 - loss: 0.2260 - val_loss: 0.1861 - val_sparse_categorical_accuracy: 0.9495
Epoch 15/50
27/27 ββββββββββββββββββββ 0s 12ms/step - sparse_categorical_accuracy: 0.9374 - loss: 0.2234 - val_loss: 0.1804 - val_sparse_categorical_accuracy: 0.9517
Epoch 16/50
27/27 ββββββββββββββββββββ 0s 14ms/step - sparse_categorical_accuracy: 0.9382 - loss: 0.2196 - val_loss: 0.1761 - val_sparse_categorical_accuracy: 0.9528
Epoch 17/50
27/27 ββββββββββββββββββββ 0s 14ms/step - sparse_categorical_accuracy: 0.9417 - loss: 0.2076 - val_loss: 0.1709 - val_sparse_categorical_accuracy: 0.9557
Epoch 18/50
27/27 ββββββββββββββββββββ 0s 13ms/step - sparse_categorical_accuracy: 0.9423 - loss: 0.2032 - val_loss: 0.1664 - val_sparse_categorical_accuracy: 0.9555
Epoch 19/50
27/27 ββββββββββββββββββββ 0s 12ms/step - sparse_categorical_accuracy: 0.9444 - loss: 0.1953 - val_loss: 0.1616 - val_sparse_categorical_accuracy: 0.9582
Epoch 20/50
27/27 ββββββββββββββββββββ 0s 12ms/step - sparse_categorical_accuracy: 0.9451 - loss: 0.1916 - val_loss: 0.1597 - val_sparse_categorical_accuracy: 0.9592
Epoch 21/50
27/27 ββββββββββββββββββββ 0s 13ms/step - sparse_categorical_accuracy: 0.9473 - loss: 0.1866 - val_loss: 0.1563 - val_sparse_categorical_accuracy: 0.9615
Epoch 22/50
27/27 ββββββββββββββββββββ 0s 12ms/step - sparse_categorical_accuracy: 0.9486 - loss: 0.1818 - val_loss: 0.1520 - val_sparse_categorical_accuracy: 0.9617
Epoch 23/50
27/27 ββββββββββββββββββββ 0s 12ms/step - sparse_categorical_accuracy: 0.9502 - loss: 0.1794 - val_loss: 0.1499 - val_sparse_categorical_accuracy: 0.9635
Epoch 24/50
27/27 ββββββββββββββββββββ 0s 12ms/step - sparse_categorical_accuracy: 0.9502 - loss: 0.1759 - val_loss: 0.1466 - val_sparse_categorical_accuracy: 0.9640
Epoch 25/50
27/27 ββββββββββββββββββββ 0s 12ms/step - sparse_categorical_accuracy: 0.9515 - loss: 0.1714 - val_loss: 0.1437 - val_sparse_categorical_accuracy: 0.9645
Epoch 26/50
27/27 ββββββββββββββββββββ 0s 14ms/step - sparse_categorical_accuracy: 0.9535 - loss: 0.1649 - val_loss: 0.1435 - val_sparse_categorical_accuracy: 0.9640
Epoch 27/50
27/27 ββββββββββββββββββββ 0s 13ms/step - sparse_categorical_accuracy: 0.9548 - loss: 0.1628 - val_loss: 0.1411 - val_sparse_categorical_accuracy: 0.9650
Epoch 28/50
27/27 ββββββββββββββββββββ 0s 12ms/step - sparse_categorical_accuracy: 0.9541 - loss: 0.1620 - val_loss: 0.1384 - val_sparse_categorical_accuracy: 0.9655
Epoch 29/50
27/27 ββββββββββββββββββββ 0s 12ms/step - sparse_categorical_accuracy: 0.9564 - loss: 0.1560 - val_loss: 0.1359 - val_sparse_categorical_accuracy: 0.9668
Epoch 30/50
27/27 ββββββββββββββββββββ 0s 12ms/step - sparse_categorical_accuracy: 0.9577 - loss: 0.1547 - val_loss: 0.1338 - val_sparse_categorical_accuracy: 0.9672
Epoch 31/50
27/27 ββββββββββββββββββββ 0s 12ms/step - sparse_categorical_accuracy: 0.9569 - loss: 0.1520 - val_loss: 0.1329 - val_sparse_categorical_accuracy: 0.9663
Epoch 32/50
27/27 ββββββββββββββββββββ 0s 12ms/step - sparse_categorical_accuracy: 0.9582 - loss: 0.1478 - val_loss: 0.1320 - val_sparse_categorical_accuracy: 0.9675
Epoch 33/50
27/27 ββββββββββββββββββββ 0s 12ms/step - sparse_categorical_accuracy: 0.9582 - loss: 0.1483 - val_loss: 0.1292 - val_sparse_categorical_accuracy: 0.9670
Epoch 34/50
27/27 ββββββββββββββββββββ 0s 12ms/step - sparse_categorical_accuracy: 0.9594 - loss: 0.1448 - val_loss: 0.1274 - val_sparse_categorical_accuracy: 0.9677
Epoch 35/50
27/27 ββββββββββββββββββββ 0s 12ms/step - sparse_categorical_accuracy: 0.9587 - loss: 0.1452 - val_loss: 0.1262 - val_sparse_categorical_accuracy: 0.9678
Epoch 36/50
27/27 ββββββββββββββββββββ 0s 12ms/step - sparse_categorical_accuracy: 0.9603 - loss: 0.1418 - val_loss: 0.1251 - val_sparse_categorical_accuracy: 0.9677
Epoch 37/50
27/27 ββββββββββββββββββββ 0s 12ms/step - sparse_categorical_accuracy: 0.9603 - loss: 0.1402 - val_loss: 0.1238 - val_sparse_categorical_accuracy: 0.9682
Epoch 38/50
27/27 ββββββββββββββββββββ 0s 11ms/step - sparse_categorical_accuracy: 0.9618 - loss: 0.1382 - val_loss: 0.1228 - val_sparse_categorical_accuracy: 0.9680
Epoch 39/50
27/27 ββββββββββββββββββββ 0s 12ms/step - sparse_categorical_accuracy: 0.9630 - loss: 0.1335 - val_loss: 0.1213 - val_sparse_categorical_accuracy: 0.9695
Epoch 40/50
27/27 ββββββββββββββββββββ 0s 12ms/step - sparse_categorical_accuracy: 0.9629 - loss: 0.1327 - val_loss: 0.1198 - val_sparse_categorical_accuracy: 0.9698
Epoch 41/50
27/27 ββββββββββββββββββββ 0s 12ms/step - sparse_categorical_accuracy: 0.9639 - loss: 0.1323 - val_loss: 0.1191 - val_sparse_categorical_accuracy: 0.9695
Epoch 42/50
27/27 ββββββββββββββββββββ 0s 12ms/step - sparse_categorical_accuracy: 0.9629 - loss: 0.1346 - val_loss: 0.1183 - val_sparse_categorical_accuracy: 0.9692
Epoch 43/50
27/27 ββββββββββββββββββββ 0s 12ms/step - sparse_categorical_accuracy: 0.9661 - loss: 0.1262 - val_loss: 0.1182 - val_sparse_categorical_accuracy: 0.9700
Epoch 44/50
27/27 ββββββββββββββββββββ 0s 12ms/step - sparse_categorical_accuracy: 0.9652 - loss: 0.1274 - val_loss: 0.1163 - val_sparse_categorical_accuracy: 0.9702
Epoch 45/50
27/27 ββββββββββββββββββββ 0s 12ms/step - sparse_categorical_accuracy: 0.9650 - loss: 0.1259 - val_loss: 0.1154 - val_sparse_categorical_accuracy: 0.9708
Epoch 46/50
27/27 ββββββββββββββββββββ 0s 11ms/step - sparse_categorical_accuracy: 0.9647 - loss: 0.1246 - val_loss: 0.1148 - val_sparse_categorical_accuracy: 0.9703
Epoch 47/50
27/27 ββββββββββββββββββββ 0s 12ms/step - sparse_categorical_accuracy: 0.9659 - loss: 0.1236 - val_loss: 0.1137 - val_sparse_categorical_accuracy: 0.9707
Epoch 48/50
27/27 ββββββββββββββββββββ 0s 12ms/step - sparse_categorical_accuracy: 0.9665 - loss: 0.1221 - val_loss: 0.1133 - val_sparse_categorical_accuracy: 0.9710
Epoch 49/50
27/27 ββββββββββββββββββββ 0s 12ms/step - sparse_categorical_accuracy: 0.9675 - loss: 0.1192 - val_loss: 0.1124 - val_sparse_categorical_accuracy: 0.9712
Epoch 50/50
27/27 ββββββββββββββββββββ 0s 12ms/step - sparse_categorical_accuracy: 0.9664 - loss: 0.1214 - val_loss: 0.1112 - val_sparse_categorical_accuracy: 0.9707
<keras.src.callbacks.history.History at 0x29e76ae60>
```
</div> | keras-io/examples/keras_recipes/md/debugging_tips.md/0 | {
"file_path": "keras-io/examples/keras_recipes/md/debugging_tips.md",
"repo_id": "keras-io",
"token_count": 16790
} | 105 |
"""
Title: Evaluating and exporting scikit-learn metrics in a Keras callback
Author: [lukewood](https://lukewood.xyz)
Date created: 10/07/2021
Last modified: 11/17/2023
Description: This example shows how to use Keras callbacks to evaluate and export non-TensorFlow based metrics.
Accelerator: GPU
"""
"""
## Introduction
[Keras callbacks](https://keras.io/api/callbacks/) allow for the execution of arbitrary
code at various stages of the Keras training process. While Keras offers first-class
support for metric evaluation, [Keras metrics](https://keras.io/api/metrics/) may only
rely on TensorFlow code internally.
While there are TensorFlow implementations of many metrics online, some metrics are
implemented using [NumPy](https://numpy.org/) or another Python-based numerical computation library.
By performing metric evaluation inside of a Keras callback, we can leverage any existing
metric, and ultimately export the result to TensorBoard.
"""
"""
## Jaccard score metric
This example makes use of a sklearn metric, `sklearn.metrics.jaccard_score()`, and
writes the result to TensorBoard using the `tf.summary` API.
This template can be modified slightly to make it work with any existing sklearn metric.
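As a reminder, the Jaccard score for a given class is the intersection-over-union
`|A ∩ B| / |A ∪ B|`, where `A` is the set of samples predicted to belong to the class and
`B` is the set of samples actually labelled with it.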
"""
import os
os.environ["KERAS_BACKEND"] = "tensorflow"
import tensorflow as tf
import keras
from keras import layers
from sklearn.metrics import jaccard_score
import numpy as np
class JaccardScoreCallback(keras.callbacks.Callback):
"""Computes the Jaccard score and logs the results to TensorBoard."""
    def __init__(self, name, x_test, y_test, log_dir):
        super().__init__()
        self.x_test = x_test
self.y_test = y_test
self.keras_metric = keras.metrics.Mean("jaccard_score")
self.epoch = 0
self.summary_writer = tf.summary.create_file_writer(os.path.join(log_dir, name))
    def on_epoch_end(self, epoch, logs=None):
self.epoch += 1
self.keras_metric.reset_state()
predictions = self.model.predict(self.x_test)
jaccard_value = jaccard_score(
np.argmax(predictions, axis=-1), self.y_test, average=None
)
self.keras_metric.update_state(jaccard_value)
self._write_metric(
self.keras_metric.name, self.keras_metric.result().numpy().astype(float)
)
def _write_metric(self, name, value):
with self.summary_writer.as_default():
tf.summary.scalar(
name,
value,
step=self.epoch,
)
self.summary_writer.flush()
"""
## Sample usage
Let's test our `JaccardScoreCallback` class with a Keras model.
"""
# Model / data parameters
num_classes = 10
input_shape = (28, 28, 1)
# The data, split between train and test sets
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Scale images to the [0, 1] range
x_train = x_train.astype("float32") / 255
x_test = x_test.astype("float32") / 255
# Make sure images have shape (28, 28, 1)
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
print("x_train shape:", x_train.shape)
print(x_train.shape[0], "train samples")
print(x_test.shape[0], "test samples")
# Convert class vectors to binary class matrices.
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = keras.Sequential(
[
keras.Input(shape=input_shape),
layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),
layers.MaxPooling2D(pool_size=(2, 2)),
layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
layers.MaxPooling2D(pool_size=(2, 2)),
layers.Flatten(),
layers.Dropout(0.5),
layers.Dense(num_classes, activation="softmax"),
]
)
model.summary()
batch_size = 128
epochs = 15
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
callbacks = [
JaccardScoreCallback(model.name, x_test, np.argmax(y_test, axis=-1), "logs")
]
model.fit(
x_train,
y_train,
batch_size=batch_size,
epochs=epochs,
validation_split=0.1,
callbacks=callbacks,
)
"""
If you now launch a TensorBoard instance using `tensorboard --logdir=logs`, you will
see the `jaccard_score` metric alongside any other exported metrics!
![TensorBoard Jaccard Score](https://i.imgur.com/T4qzrdn.png)
"""
"""
## Conclusion
Many ML practitioners and researchers rely on metrics that may not yet have a TensorFlow
implementation. Keras users can still leverage the wide variety of existing metric
implementations in other frameworks by using a Keras callback. These metrics can be
exported, viewed and analyzed in the TensorBoard like any other metric.
"""
| keras-io/examples/keras_recipes/sklearn_metric_callbacks.py/0 | {
"file_path": "keras-io/examples/keras_recipes/sklearn_metric_callbacks.py",
"repo_id": "keras-io",
"token_count": 1770
} | 106 |
<jupyter_start><jupyter_text>Text Classification using FNet**Author:** [Abheesht Sharma](https://github.com/abheesht17/)**Date created:** 2022/06/01**Last modified:** 2022/12/21**Description:** Text Classification on the IMDb Dataset using `keras_nlp.layers.FNetEncoder` layer. Introduction In this example, we will demonstrate the ability of FNet to achieve comparable results with a vanilla Transformer model on the text classification task. We will be using the IMDb dataset, which is a collection of movie reviews labelled either positive or negative (sentiment analysis). To build the tokenizer, model, etc., we will use components from [KerasNLP](https://github.com/keras-team/keras-nlp). KerasNLP makes life easier for people who want to build NLP pipelines! :) Model Transformer-based language models (LMs) such as BERT, RoBERTa, XLNet, etc. have demonstrated the effectiveness of the self-attention mechanism for computing rich embeddings for input text. However, the self-attention mechanism is an expensive operation, with a time complexity of `O(n^2)`, where `n` is the number of tokens in the input. Hence, there has been an effort to reduce the time complexity of the self-attention mechanism and improve performance without sacrificing the quality of results. In 2020, a paper titled [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) replaced the self-attention layer in BERT with a simple Fourier Transform layer for "token mixing". This resulted in comparable accuracy and a speed-up during training. In particular, a couple of points from the paper stand out: * The authors claim that FNet is 80% faster than BERT on GPUs and 70% faster on TPUs. The reason for this speed-up is two-fold: a) the Fourier Transform layer is unparametrized, i.e., it does not have any parameters, and b) the authors use Fast Fourier Transform (FFT); this reduces the time complexity from `O(n^2)` (in the case of self-attention) to `O(n log n)`. * FNet manages to achieve 92-97% of the accuracy of BERT on the GLUE benchmark. Setup Before we start with the implementation, let's import all the necessary packages.<jupyter_code>!pip install -q --upgrade keras-nlp
!pip install -q --upgrade keras # Upgrade to Keras 3.
import keras_nlp
import keras
import tensorflow as tf
import os
keras.utils.set_random_seed(42)<jupyter_output><empty_output><jupyter_text>Let's also define our hyperparameters.<jupyter_code>BATCH_SIZE = 64
EPOCHS = 3
MAX_SEQUENCE_LENGTH = 512
VOCAB_SIZE = 15000
EMBED_DIM = 128
INTERMEDIATE_DIM = 512<jupyter_output><empty_output><jupyter_text>Loading the dataset First, let's download the IMDB dataset and extract it.<jupyter_code>!wget http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -xzf aclImdb_v1.tar.gz<jupyter_output><empty_output><jupyter_text>Samples are present in the form of text files. Let's inspect the structure of the directory.<jupyter_code>print(os.listdir("./aclImdb"))
print(os.listdir("./aclImdb/train"))
print(os.listdir("./aclImdb/test"))<jupyter_output><empty_output><jupyter_text>The directory contains two sub-directories: `train` and `test`. Each subdirectoryin turn contains two folders: `pos` and `neg` for positive and negative reviews,respectively. Before we load the dataset, let's delete the `./aclImdb/train/unsup`folder since it has unlabelled samples.<jupyter_code>!rm -rf aclImdb/train/unsup<jupyter_output><empty_output><jupyter_text>We'll use the `keras.utils.text_dataset_from_directory` utility to generateour labelled `tf.data.Dataset` dataset from text files.<jupyter_code>train_ds = keras.utils.text_dataset_from_directory(
"aclImdb/train",
batch_size=BATCH_SIZE,
validation_split=0.2,
subset="training",
seed=42,
)
val_ds = keras.utils.text_dataset_from_directory(
"aclImdb/train",
batch_size=BATCH_SIZE,
validation_split=0.2,
subset="validation",
seed=42,
)
test_ds = keras.utils.text_dataset_from_directory("aclImdb/test", batch_size=BATCH_SIZE)<jupyter_output><empty_output><jupyter_text>We will now convert the text to lowercase.<jupyter_code>train_ds = train_ds.map(lambda x, y: (tf.strings.lower(x), y))
val_ds = val_ds.map(lambda x, y: (tf.strings.lower(x), y))
test_ds = test_ds.map(lambda x, y: (tf.strings.lower(x), y))<jupyter_output><empty_output><jupyter_text>Let's print a few samples.<jupyter_code>for text_batch, label_batch in train_ds.take(1):
for i in range(3):
print(text_batch.numpy()[i])
        print(label_batch.numpy()[i])<jupyter_output><empty_output><jupyter_text>Tokenizing the data We'll be using the `keras_nlp.tokenizers.WordPieceTokenizer` layer to tokenize the text. `keras_nlp.tokenizers.WordPieceTokenizer` takes a WordPiece vocabulary and has functions for tokenizing the text, and detokenizing sequences of tokens. Before we define the tokenizer, we first need to train it on the dataset we have. The WordPiece tokenization algorithm is a subword tokenization algorithm; training it on a corpus gives us a vocabulary of subwords. A subword tokenizer is a compromise between word tokenizers (word tokenizers need very large vocabularies for good coverage of input words), and character tokenizers (characters don't really encode meaning like words do). Luckily, KerasNLP makes it very simple to train WordPiece on a corpus with the `keras_nlp.tokenizers.compute_word_piece_vocabulary` utility. Note: The official implementation of FNet uses the SentencePiece Tokenizer.<jupyter_code>def train_word_piece(ds, vocab_size, reserved_tokens):
word_piece_ds = ds.unbatch().map(lambda x, y: x)
vocab = keras_nlp.tokenizers.compute_word_piece_vocabulary(
word_piece_ds.batch(1000).prefetch(2),
vocabulary_size=vocab_size,
reserved_tokens=reserved_tokens,
)
    return vocab<jupyter_output><empty_output><jupyter_text>Every vocabulary has a few special, reserved tokens. We have two such tokens: - `"[PAD]"` - Padding token. Padding tokens are appended to the input sequence when the input sequence length is shorter than the maximum sequence length. - `"[UNK]"` - Unknown token.<jupyter_code>reserved_tokens = ["[PAD]", "[UNK]"]
train_sentences = [element[0] for element in train_ds]
vocab = train_word_piece(train_ds, VOCAB_SIZE, reserved_tokens)<jupyter_output><empty_output><jupyter_text>Let's see some tokens!<jupyter_code>print("Tokens: ", vocab[100:110])<jupyter_output><empty_output><jupyter_text>Now, let's define the tokenizer. We will configure the tokenizer with the vocabularies trained above. We will define a maximum sequence length so that all sequences are padded to the same length, if the length of the sequence is less than the specified sequence length. Otherwise, the sequence is truncated.<jupyter_code>tokenizer = keras_nlp.tokenizers.WordPieceTokenizer(
vocabulary=vocab,
lowercase=False,
sequence_length=MAX_SEQUENCE_LENGTH,
)<jupyter_output><empty_output><jupyter_text>Let's try and tokenize a sample from our dataset! To verify whether the text has been tokenized correctly, we can also detokenize the list of tokens back to the original text.<jupyter_code>input_sentence_ex = train_ds.take(1).get_single_element()[0][0]
input_tokens_ex = tokenizer(input_sentence_ex)
print("Sentence: ", input_sentence_ex)
print("Tokens: ", input_tokens_ex)
print("Recovered text after detokenizing: ", tokenizer.detokenize(input_tokens_ex))<jupyter_output><empty_output><jupyter_text>Formatting the datasetNext, we'll format our datasets in the form that will be fed to the models. Weneed to tokenize the text.<jupyter_code>def format_dataset(sentence, label):
sentence = tokenizer(sentence)
return ({"input_ids": sentence}, label)
def make_dataset(dataset):
dataset = dataset.map(format_dataset, num_parallel_calls=tf.data.AUTOTUNE)
return dataset.shuffle(512).prefetch(16).cache()
train_ds = make_dataset(train_ds)
val_ds = make_dataset(val_ds)
test_ds = make_dataset(test_ds)<jupyter_output><empty_output><jupyter_text>Building the model Now, let's move on to the exciting part - defining our model! We first need an embedding layer, i.e., a layer that maps every token in the input sequence to a vector. This embedding layer can be initialised randomly. We also need a positional embedding layer which encodes the word order in the sequence. The convention is to add, i.e., sum, these two embeddings. KerasNLP has a `keras_nlp.layers.TokenAndPositionEmbedding` layer which does all of the above steps for us. Our FNet classification model consists of three `keras_nlp.layers.FNetEncoder` layers with a `keras.layers.Dense` layer on top. Note: For FNet, masking the padding tokens has a minimal effect on results. In the official implementation, the padding tokens are not masked.<jupyter_code>input_ids = keras.Input(shape=(None,), dtype="int64", name="input_ids")
x = keras_nlp.layers.TokenAndPositionEmbedding(
vocabulary_size=VOCAB_SIZE,
sequence_length=MAX_SEQUENCE_LENGTH,
embedding_dim=EMBED_DIM,
mask_zero=True,
)(input_ids)
x = keras_nlp.layers.FNetEncoder(intermediate_dim=INTERMEDIATE_DIM)(inputs=x)
x = keras_nlp.layers.FNetEncoder(intermediate_dim=INTERMEDIATE_DIM)(inputs=x)
x = keras_nlp.layers.FNetEncoder(intermediate_dim=INTERMEDIATE_DIM)(inputs=x)
x = keras.layers.GlobalAveragePooling1D()(x)
x = keras.layers.Dropout(0.1)(x)
outputs = keras.layers.Dense(1, activation="sigmoid")(x)
fnet_classifier = keras.Model(input_ids, outputs, name="fnet_classifier")<jupyter_output><empty_output><jupyter_text>Training our model We'll use accuracy to monitor training progress on the validation data. Let's train our model for 3 epochs.<jupyter_code>fnet_classifier.summary()
fnet_classifier.compile(
optimizer=keras.optimizers.Adam(learning_rate=0.001),
loss="binary_crossentropy",
metrics=["accuracy"],
)
fnet_classifier.fit(train_ds, epochs=EPOCHS, validation_data=val_ds)<jupyter_output><empty_output><jupyter_text>We obtain a train accuracy of around 92% and a validation accuracy of around 85%. Moreover, for 3 epochs, it takes around 86 seconds to train the model (on Colab with a 16 GB Tesla T4 GPU). Let's calculate the test accuracy.<jupyter_code>fnet_classifier.evaluate(test_ds, batch_size=BATCH_SIZE)<jupyter_output><empty_output><jupyter_text>Comparison with Transformer model Let's compare our FNet Classifier model with a Transformer Classifier model. We keep all the parameters/hyperparameters the same. For example, we use three `TransformerEncoder` layers. We set the number of heads to 2.<jupyter_code>NUM_HEADS = 2
input_ids = keras.Input(shape=(None,), dtype="int64", name="input_ids")
x = keras_nlp.layers.TokenAndPositionEmbedding(
vocabulary_size=VOCAB_SIZE,
sequence_length=MAX_SEQUENCE_LENGTH,
embedding_dim=EMBED_DIM,
mask_zero=True,
)(input_ids)
x = keras_nlp.layers.TransformerEncoder(
intermediate_dim=INTERMEDIATE_DIM, num_heads=NUM_HEADS
)(inputs=x)
x = keras_nlp.layers.TransformerEncoder(
intermediate_dim=INTERMEDIATE_DIM, num_heads=NUM_HEADS
)(inputs=x)
x = keras_nlp.layers.TransformerEncoder(
intermediate_dim=INTERMEDIATE_DIM, num_heads=NUM_HEADS
)(inputs=x)
x = keras.layers.GlobalAveragePooling1D()(x)
x = keras.layers.Dropout(0.1)(x)
outputs = keras.layers.Dense(1, activation="sigmoid")(x)
transformer_classifier = keras.Model(input_ids, outputs, name="transformer_classifier")
transformer_classifier.summary()
transformer_classifier.compile(
optimizer=keras.optimizers.Adam(learning_rate=0.001),
loss="binary_crossentropy",
metrics=["accuracy"],
)
transformer_classifier.fit(train_ds, epochs=EPOCHS, validation_data=val_ds)<jupyter_output><empty_output><jupyter_text>We obtain a train accuracy of around 94% and a validation accuracy of around 86.5%. It takes around 146 seconds to train the model (on Colab with a 16 GB Tesla T4 GPU). Let's calculate the test accuracy.<jupyter_code>transformer_classifier.evaluate(test_ds, batch_size=BATCH_SIZE)<jupyter_output><empty_output>
"file_path": "keras-io/examples/nlp/ipynb/fnet_classification_with_keras_nlp.ipynb",
"repo_id": "keras-io",
"token_count": 4000
} | 107 |
<jupyter_start><jupyter_text>Sentence embeddings using Siamese RoBERTa-networks**Author:** [Mohammed Abu El-Nasr](https://github.com/abuelnasr0)**Date created:** 2023/07/14**Last modified:** 2023/07/14**Description:** Fine-tune a RoBERTa model to generate sentence embeddings using KerasNLP. IntroductionBERT and RoBERTa can be used for semantic textual similarity tasks, where two sentencesare passed to the model and the network predicts whether they are similar or not. Butwhat if we have a large collection of sentences and want to find the most similar pairsin that collection? That will take n*(n-1)/2 inference computations, where n is thenumber of sentences in the collection. For example, if n = 10000, the required time willbe 65 hours on a V100 GPU.A common method to overcome the time overhead issue is to pass one sentence to the model,then average the output of the model, or take the first token (the [CLS] token) and usethem as a [sentence embedding](https://en.wikipedia.org/wiki/Sentence_embedding), thenuse a vector similarity measure like cosine similarity or Manhatten / Euclidean distanceto find close sentences (semantically similar sentences). That will reduce the time tofind the most similar pairs in a collection of 10,000 sentences from 65 hours to 5seconds!If we use RoBERTa directly, that will yield rather bad sentence embeddings. But if wefine-tune RoBERTa using a Siamese network, that will generate semantically meaningfulsentence embeddings. This will enable RoBERTa to be used for new tasks. These tasksinclude:- Large-scale semantic similarity comparison.- Clustering.- Information retrieval via semantic search.In this example, we will show how to fine-tune a RoBERTa model using a Siamese networksuch that it will be able to produce semantically meaningful sentence embeddings and usethem in a semantic search and clustering example.This method of fine-tuning was introduced in[Sentence-BERT](https://arxiv.org/abs/1908.10084) SetupLet's install and import the libraries we need. We'll be using the KerasNLP library inthis example.We will also enable [mixed precision](https://www.tensorflow.org/guide/mixed_precision)training. This will help us reduce the training time.<jupyter_code>!pip install -q --upgrade keras-nlp
!pip install -q --upgrade keras # Upgrade to Keras 3.
import os
os.environ["KERAS_BACKEND"] = "tensorflow"
import keras
import keras_nlp
import tensorflow as tf
import tensorflow_datasets as tfds
import sklearn.cluster as cluster
keras.mixed_precision.set_global_policy("mixed_float16")<jupyter_output><empty_output><jupyter_text>Fine-tune the model using siamese networks[Siamese network](https://en.wikipedia.org/wiki/Siamese_neural_network) is a neuralnetwork architecture that contains two or more subnetworks. The subnetworks share thesame weights. It is used to generate feature vectors for each input and then compare themfor similarity.For our example, the subnetwork will be a RoBERTa model that has a pooling layer on topof it to produce the embeddings of the input sentences. These embeddings will then becompared to each other to learn to produce semantically meaningful embeddings.The pooling strategies used are mean, max, and CLS pooling. Mean pooling produces thebest results. We will use it in our examples. Fine-tune using the regression objective functionFor building the siamese network with the regression objective function, the siamesenetwork is asked to predict the cosine similarity between the embeddings of the two inputsentences.Cosine similarity indicates the angle between the sentence embeddings. If the cosinesimilarity is high, that means there is a small angle between the embeddings; hence, theyare semantically similar. Load the datasetWe will use the STSB dataset to fine-tune the model for the regression objective. STSBconsists of a collection of sentence pairs that are labelled in the range [0, 5]. 0indicates the least semantic similarity between the two sentences, and 5 indicates themost semantic similarity between the two sentences.The range of the cosine similarity is [-1, 1] and it's the output of the siamese network,but the range of the labels in the dataset is [0, 5]. We need to unify the range betweenthe cosine similarity and the dataset labels, so while preparing the dataset, we willdivide the labels by 2.5 and subtract 1.<jupyter_code>TRAIN_BATCH_SIZE = 6
VALIDATION_BATCH_SIZE = 8
TRAIN_NUM_BATCHES = 300
VALIDATION_NUM_BATCHES = 40
AUTOTUNE = tf.data.experimental.AUTOTUNE
def change_range(x):
return (x / 2.5) - 1
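# Quick sanity check (an illustrative addition, not part of the original example):
# the STSB label range [0, 5] maps onto the cosine-similarity range [-1, 1].
assert change_range(0.0) == -1.0 and change_range(5.0) == 1.0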
def prepare_dataset(dataset, num_batches, batch_size):
dataset = dataset.map(
lambda z: (
[z["sentence1"], z["sentence2"]],
[tf.cast(change_range(z["label"]), tf.float32)],
),
num_parallel_calls=AUTOTUNE,
)
dataset = dataset.batch(batch_size)
dataset = dataset.take(num_batches)
dataset = dataset.prefetch(AUTOTUNE)
return dataset
stsb_ds = tfds.load(
"glue/stsb",
)
stsb_train, stsb_valid = stsb_ds["train"], stsb_ds["validation"]
stsb_train = prepare_dataset(stsb_train, TRAIN_NUM_BATCHES, TRAIN_BATCH_SIZE)
stsb_valid = prepare_dataset(stsb_valid, VALIDATION_NUM_BATCHES, VALIDATION_BATCH_SIZE)<jupyter_output><empty_output><jupyter_text>Let's see examples from the dataset of two sentences and their similarity.<jupyter_code>for x, y in stsb_train:
for i, example in enumerate(x):
print(f"sentence 1 : {example[0]} ")
print(f"sentence 2 : {example[1]} ")
print(f"similarity : {y[i]} \n")
break<jupyter_output><empty_output><jupyter_text>Build the encoder model.Now, we'll build the encoder model that will produce the sentence embeddings. It consistsof:- A preprocessor layer to tokenize and generate padding masks for the sentences.- A backbone model that will generate the contextual representation of each token in thesentence.- A mean pooling layer to produce the embeddings. We will use `keras.layers.GlobalAveragePooling1D`to apply the mean pooling to the backbone outputs. We will pass the padding mask to thelayer to exclude padded tokens from being averaged.- A normalization layer to normalize the embeddings as we are using the cosine similarity.<jupyter_code>preprocessor = keras_nlp.models.RobertaPreprocessor.from_preset("roberta_base_en")
backbone = keras_nlp.models.RobertaBackbone.from_preset("roberta_base_en")
inputs = keras.Input(shape=(1,), dtype="string", name="sentence")
x = preprocessor(inputs)
h = backbone(x)
embedding = keras.layers.GlobalAveragePooling1D(name="pooling_layer")(
h, x["padding_mask"]
)
n_embedding = keras.layers.UnitNormalization(axis=1)(embedding)
roberta_normal_encoder = keras.Model(inputs=inputs, outputs=n_embedding)
roberta_normal_encoder.summary()<jupyter_output><empty_output><jupyter_text>Build the Siamese network with the regression objective function.It's described above that the Siamese network has two or more subnetworks, and for thisSiamese model, we need two encoders. But we don't have two encoders; we have only oneencoder, but we will pass the two sentences through it. That way, we can have two pathsto get the embeddings and also shared weights between the two paths.After passing the two sentences to the model and getting the normalized embeddings, wewill multiply the two normalized embeddings to get the cosine similarity between the twosentences.<jupyter_code>class RegressionSiamese(keras.Model):
def __init__(self, encoder, **kwargs):
inputs = keras.Input(shape=(2,), dtype="string", name="sentences")
sen1, sen2 = keras.ops.split(inputs, 2, axis=1)
u = encoder(sen1)
v = encoder(sen2)
cosine_similarity_scores = keras.ops.matmul(u, keras.ops.transpose(v))
super().__init__(
inputs=inputs,
outputs=cosine_similarity_scores,
**kwargs,
)
self.encoder = encoder
def get_encoder(self):
        return self.encoder<jupyter_output><empty_output><jupyter_text>Fit the model Let's try this example before training and compare it to the output after training.<jupyter_code>sentences = [
"Today is a very sunny day.",
"I am hungry, I will get my meal.",
"The dog is eating his food.",
]
query = ["The dog is enjoying his meal."]
encoder = roberta_normal_encoder
sentence_embeddings = encoder(tf.constant(sentences))
query_embedding = encoder(tf.constant(query))
cosine_similarity_scores = tf.matmul(query_embedding, tf.transpose(sentence_embeddings))
for i, sim in enumerate(cosine_similarity_scores[0]):
print(f"cosine similarity score between sentence {i+1} and the query = {sim} ")<jupyter_output><empty_output><jupyter_text>For the training we will use `MeanSquaredError()` as loss function, and `Adam()`optimizer with learning rate = 2e-5.<jupyter_code>roberta_regression_siamese = RegressionSiamese(roberta_normal_encoder)
roberta_regression_siamese.compile(
loss=keras.losses.MeanSquaredError(),
optimizer=keras.optimizers.Adam(2e-5),
jit_compile=False,
)
roberta_regression_siamese.fit(stsb_train, validation_data=stsb_valid, epochs=1)<jupyter_output><empty_output><jupyter_text>Let's try the model after training; we will notice a huge difference in the output. That means that the model after fine-tuning is capable of producing semantically meaningful embeddings, where semantically similar sentences have a small angle between them, and semantically dissimilar sentences have a large angle between them.<jupyter_code>sentences = [
"Today is a very sunny day.",
"I am hungry, I will get my meal.",
"The dog is eating his food.",
]
query = ["The dog is enjoying his food."]
encoder = roberta_regression_siamese.get_encoder()
sentence_embeddings = encoder(tf.constant(sentences))
query_embedding = encoder(tf.constant(query))
cosine_similarities = tf.matmul(query_embedding, tf.transpose(sentence_embeddings))
for i, sim in enumerate(cosine_similarities[0]):
print(f"cosine similarity between sentence {i+1} and the query = {sim} ")<jupyter_output><empty_output><jupyter_text>Fine-tune Using the triplet Objective FunctionFor the Siamese network with the triplet objective function, three sentences are passedto the Siamese network *anchor*, *positive*, and *negative* sentences. *anchor* and*positive* sentences are semantically similar, and *anchor* and *negative* sentences aresemantically dissimilar. The objective is to minimize the distance between the *anchor*sentence and the *positive* sentence, and to maximize the distance between the *anchor*sentence and the *negative* sentence. Load the datasetWe will use the Wikipedia-sections-triplets dataset for fine-tuning. This data setconsists of sentences derived from the Wikipedia website. It has a collection of 3sentences *anchor*, *positive*, *negative*. *anchor* and *positive* are derived from thesame section. *anchor* and *negative* are derived from different sections.This dataset has 1.8 million training triplets and 220,000 test triplets. In thisexample, we will only use 1200 triplets for training and 300 for testing.<jupyter_code>!wget https://sbert.net/datasets/wikipedia-sections-triplets.zip -q
!unzip wikipedia-sections-triplets.zip -d wikipedia-sections-triplets
NUM_TRAIN_BATCHES = 200
NUM_TEST_BATCHES = 75
AUTOTUNE = tf.data.experimental.AUTOTUNE
def prepare_wiki_data(dataset, num_batches):
dataset = dataset.map(
lambda z: ((z["Sentence1"], z["Sentence2"], z["Sentence3"]), 0)
)
dataset = dataset.batch(6)
dataset = dataset.take(num_batches)
dataset = dataset.prefetch(AUTOTUNE)
return dataset
wiki_train = tf.data.experimental.make_csv_dataset(
"wikipedia-sections-triplets/train.csv",
batch_size=1,
num_epochs=1,
)
wiki_test = tf.data.experimental.make_csv_dataset(
"wikipedia-sections-triplets/test.csv",
batch_size=1,
num_epochs=1,
)
wiki_train = prepare_wiki_data(wiki_train, NUM_TRAIN_BATCHES)
wiki_test = prepare_wiki_data(wiki_test, NUM_TEST_BATCHES)<jupyter_output><empty_output><jupyter_text>Build the encoder modelFor this encoder model, we will use RoBERTa with mean pooling and we will not normalizethe output embeddings. The encoder model consists of:- A preprocessor layer to tokenize and generate padding masks for the sentences.- A backbone model that will generate the contextual representation of each token in thesentence.- A mean pooling layer to produce the embeddings.<jupyter_code>preprocessor = keras_nlp.models.RobertaPreprocessor.from_preset("roberta_base_en")
backbone = keras_nlp.models.RobertaBackbone.from_preset("roberta_base_en")
input = keras.Input(shape=(1,), dtype="string", name="sentence")
x = preprocessor(input)
h = backbone(x)
embedding = keras.layers.GlobalAveragePooling1D(name="pooling_layer")(
h, x["padding_mask"]
)
roberta_encoder = keras.Model(inputs=input, outputs=embedding)
roberta_encoder.summary()<jupyter_output><empty_output><jupyter_text>Build the Siamese network with the triplet objective functionFor the Siamese network with the triplet objective function, we will build the model withan encoder, and we will pass the three sentences through that encoder. We will get anembedding for each sentence, and we will calculate the `positive_dist` and`negative_dist` that will be passed to the loss function described below.<jupyter_code>class TripletSiamese(keras.Model):
def __init__(self, encoder, **kwargs):
anchor = keras.Input(shape=(1,), dtype="string")
positive = keras.Input(shape=(1,), dtype="string")
negative = keras.Input(shape=(1,), dtype="string")
ea = encoder(anchor)
ep = encoder(positive)
en = encoder(negative)
positive_dist = keras.ops.sum(keras.ops.square(ea - ep), axis=1)
negative_dist = keras.ops.sum(keras.ops.square(ea - en), axis=1)
positive_dist = keras.ops.sqrt(positive_dist)
negative_dist = keras.ops.sqrt(negative_dist)
output = keras.ops.stack([positive_dist, negative_dist], axis=0)
super().__init__(inputs=[anchor, positive, negative], outputs=output, **kwargs)
self.encoder = encoder
def get_encoder(self):
return self.encoder<jupyter_output><empty_output><jupyter_text>We will use a custom loss function for the triplet objective. The loss function willreceive the distance between the *anchor* and the *positive* embeddings `positive_dist`,and the distance between the *anchor* and the *negative* embeddings `negative_dist`,where they are stacked together in `y_pred`.We will use `positive_dist` and `negative_dist` to compute the loss such that`negative_dist` is larger than `positive_dist` at least by a specific margin.Mathematically, we will minimize this loss function: `max( positive_dist - negative_dist+ margin, 0)`.There is no `y_true` used in this loss function. Note that we set the labels in thedataset to zero, but they will not be used.<jupyter_code>class TripletLoss(keras.losses.Loss):
def __init__(self, margin=1, **kwargs):
super().__init__(**kwargs)
self.margin = margin
def call(self, y_true, y_pred):
positive_dist, negative_dist = tf.unstack(y_pred, axis=0)
losses = keras.ops.relu(positive_dist - negative_dist + self.margin)
        return keras.ops.mean(losses, axis=0)<jupyter_output><empty_output><jupyter_text>Fit the model For the training, we will use the custom `TripletLoss()` loss function, and `Adam()` optimizer with a learning rate = 2e-5.<jupyter_code>roberta_triplet_siamese = TripletSiamese(roberta_encoder)
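# Toy illustration (an addition for clarity, not part of the original example): with the
# default margin of 1, a triplet whose negative is already more than one unit farther away
# than its positive contributes no loss, while a triplet inside the margin is penalised.
toy_distances = tf.constant([[0.2, 0.2], [1.5, 0.8]])  # row 0: positive_dist, row 1: negative_dist
print(TripletLoss().call(None, toy_distances))  # mean of relu([-0.3, 0.4]) -> 0.2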
roberta_triplet_siamese.compile(
loss=TripletLoss(),
optimizer=keras.optimizers.Adam(2e-5),
jit_compile=False,
)
roberta_triplet_siamese.fit(wiki_train, validation_data=wiki_test, epochs=1)<jupyter_output><empty_output><jupyter_text>Let's try this model in a clustering example. Here are 6 questions. first 3 questionsabout learning English, and the last 3 questions about working online. Let's see if theembeddings produced by our encoder will cluster them correctly.<jupyter_code>questions = [
"What should I do to improve my English writting?",
"How to be good at speaking English?",
"How can I improve my English?",
"How to earn money online?",
"How do I earn money online?",
"How to work and earn money through internet?",
]
encoder = roberta_triplet_siamese.get_encoder()
embeddings = encoder(tf.constant(questions))
kmeans = cluster.KMeans(n_clusters=2, random_state=0, n_init="auto").fit(embeddings)
for i, label in enumerate(kmeans.labels_):
print(f"sentence ({questions[i]}) belongs to cluster {label}")<jupyter_output><empty_output> | keras-io/examples/nlp/ipynb/sentence_embeddings_with_sbert.ipynb/0 | {
"file_path": "keras-io/examples/nlp/ipynb/sentence_embeddings_with_sbert.ipynb",
"repo_id": "keras-io",
"token_count": 5392
} | 108 |
# End-to-end Masked Language Modeling with BERT
**Author:** [Ankur Singh](https://twitter.com/ankur310794)<br>
**Date created:** 2020/09/18<br>
**Last modified:** 2020/09/18<br>
<img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/nlp/ipynb/masked_language_modeling.ipynb) <span class="k-dot">β’</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/examples/nlp/masked_language_modeling.py)
**Description:** Implement a Masked Language Model (MLM) with BERT and fine-tune it on the IMDB Reviews dataset.
---
## Introduction
Masked Language Modeling is a fill-in-the-blank task,
where a model uses the context words surrounding a mask token to try to predict what the
masked word should be.
For an input that contains one or more mask tokens,
the model will generate the most likely substitution for each.
Example:
- Input: "I have watched this [MASK] and it was awesome."
- Output: "I have watched this movie and it was awesome."
Masked language modeling is a great way to train a language
model in a self-supervised setting (without human-annotated labels).
Such a model can then be fine-tuned to accomplish various supervised
NLP tasks.
This example teaches you how to build a BERT model from scratch,
train it with the masked language modeling task,
and then fine-tune this model on a sentiment classification task.
We will use the Keras `TextVectorization` and `MultiHeadAttention` layers
to create a BERT Transformer-Encoder network architecture.
Note: This example should be run with `tf-nightly`.
---
## Setup
Install `tf-nightly` via `pip install tf-nightly`.
```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers import TextVectorization
from dataclasses import dataclass
import pandas as pd
import numpy as np
import glob
import re
from pprint import pprint
```
---
## Set-up Configuration
```python
@dataclass
class Config:
MAX_LEN = 256
BATCH_SIZE = 32
LR = 0.001
VOCAB_SIZE = 30000
EMBED_DIM = 128
NUM_HEAD = 8 # used in bert model
FF_DIM = 128 # used in bert model
NUM_LAYERS = 1
config = Config()
```
---
## Load the data
We will first download the IMDB data and load into a Pandas dataframe.
```python
!curl -O https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -xf aclImdb_v1.tar.gz
```
```python
def get_text_list_from_files(files):
text_list = []
for name in files:
with open(name) as f:
for line in f:
text_list.append(line)
return text_list
def get_data_from_text_files(folder_name):
pos_files = glob.glob("aclImdb/" + folder_name + "/pos/*.txt")
pos_texts = get_text_list_from_files(pos_files)
neg_files = glob.glob("aclImdb/" + folder_name + "/neg/*.txt")
neg_texts = get_text_list_from_files(neg_files)
df = pd.DataFrame(
{
"review": pos_texts + neg_texts,
"sentiment": [0] * len(pos_texts) + [1] * len(neg_texts),
}
)
df = df.sample(len(df)).reset_index(drop=True)
return df
train_df = get_data_from_text_files("train")
test_df = get_data_from_text_files("test")
all_data = train_df.append(test_df)
```
<div class="k-default-codeblock">
```
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 80.2M 100 80.2M 0 0 45.3M 0 0:00:01 0:00:01 --:--:-- 45.3M
```
</div>
---
## Dataset preparation
We will use the `TextVectorization` layer to vectorize the text into integer token ids.
It transforms a batch of strings into either
a sequence of token indices (one sample = 1D array of integer token indices, in order)
or a dense representation (one sample = 1D array of float values encoding an unordered set of tokens).
Below, we define 3 preprocessing functions.
1. The `get_vectorize_layer` function builds the `TextVectorization` layer.
2. The `encode` function encodes raw text into integer token ids.
3. The `get_masked_input_and_labels` function will mask input token ids.
It masks 15% of all input tokens in each sequence at random.
```python
def custom_standardization(input_data):
lowercase = tf.strings.lower(input_data)
stripped_html = tf.strings.regex_replace(lowercase, "<br />", " ")
return tf.strings.regex_replace(
stripped_html, "[%s]" % re.escape("!#$%&'()*+,-./:;<=>?@\^_`{|}~"), ""
)
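# Illustrative check (not part of the original notebook): the standardizer lowercases,
# strips "<br />" tags and removes most punctuation before vectorization.
print(custom_standardization(tf.constant(["I LOVED this <br /> movie!!!"])))
# -> roughly: tf.Tensor([b'i loved this   movie'], shape=(1,), dtype=string)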
def get_vectorize_layer(texts, vocab_size, max_seq, special_tokens=["[MASK]"]):
"""Build Text vectorization layer
Args:
        texts (list): List of strings, i.e., input texts
        vocab_size (int): vocab size
        max_seq (int): Maximum sequence length.
special_tokens (list, optional): List of special tokens. Defaults to ['[MASK]'].
Returns:
layers.Layer: Return TextVectorization Keras Layer
"""
vectorize_layer = TextVectorization(
max_tokens=vocab_size,
output_mode="int",
standardize=custom_standardization,
output_sequence_length=max_seq,
)
vectorize_layer.adapt(texts)
# Insert mask token in vocabulary
vocab = vectorize_layer.get_vocabulary()
vocab = vocab[2 : vocab_size - len(special_tokens)] + ["[mask]"]
vectorize_layer.set_vocabulary(vocab)
return vectorize_layer
vectorize_layer = get_vectorize_layer(
all_data.review.values.tolist(),
config.VOCAB_SIZE,
config.MAX_LEN,
special_tokens=["[mask]"],
)
# Get mask token id for masked language model
mask_token_id = vectorize_layer(["[mask]"]).numpy()[0][0]
def encode(texts):
encoded_texts = vectorize_layer(texts)
return encoded_texts.numpy()
def get_masked_input_and_labels(encoded_texts):
# 15% BERT masking
inp_mask = np.random.rand(*encoded_texts.shape) < 0.15
# Do not mask special tokens
inp_mask[encoded_texts <= 2] = False
# Set targets to -1 by default, it means ignore
labels = -1 * np.ones(encoded_texts.shape, dtype=int)
# Set labels for masked tokens
labels[inp_mask] = encoded_texts[inp_mask]
# Prepare input
encoded_texts_masked = np.copy(encoded_texts)
    # Set the input to [MASK] (the last token in the vocabulary) for 90% of the selected tokens
    # This means leaving 10% unchanged
inp_mask_2mask = inp_mask & (np.random.rand(*encoded_texts.shape) < 0.90)
encoded_texts_masked[
inp_mask_2mask
] = mask_token_id # mask token is the last in the dict
# Set 10% to a random token
inp_mask_2random = inp_mask_2mask & (np.random.rand(*encoded_texts.shape) < 1 / 9)
encoded_texts_masked[inp_mask_2random] = np.random.randint(
3, mask_token_id, inp_mask_2random.sum()
)
# Prepare sample_weights to pass to .fit() method
sample_weights = np.ones(labels.shape)
sample_weights[labels == -1] = 0
# y_labels would be same as encoded_texts i.e input tokens
y_labels = np.copy(encoded_texts)
return encoded_texts_masked, y_labels, sample_weights
# We have 25000 examples for training
x_train = encode(train_df.review.values) # encode reviews with vectorizer
y_train = train_df.sentiment.values
train_classifier_ds = (
tf.data.Dataset.from_tensor_slices((x_train, y_train))
.shuffle(1000)
.batch(config.BATCH_SIZE)
)
# We have 25000 examples for testing
x_test = encode(test_df.review.values)
y_test = test_df.sentiment.values
test_classifier_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(
config.BATCH_SIZE
)
# Build dataset for end to end model input (will be used at the end)
test_raw_classifier_ds = tf.data.Dataset.from_tensor_slices(
(test_df.review.values, y_test)
).batch(config.BATCH_SIZE)
# Prepare data for masked language model
x_all_review = encode(all_data.review.values)
x_masked_train, y_masked_labels, sample_weights = get_masked_input_and_labels(
x_all_review
)
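# Quick sanity check (an illustrative addition, not part of the original run): the three
# arrays line up element-wise, and sample_weights is 1 only at the positions the model
# will be asked to predict.
print(x_masked_train.shape, y_masked_labels.shape, sample_weights.shape)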
mlm_ds = tf.data.Dataset.from_tensor_slices(
(x_masked_train, y_masked_labels, sample_weights)
)
mlm_ds = mlm_ds.shuffle(1000).batch(config.BATCH_SIZE)
```
---
## Create BERT model (Pretraining Model) for masked language modeling
We will create a BERT-like pretraining model architecture
using the `MultiHeadAttention` layer.
It will take token ids as inputs (including masked tokens)
and it will predict the correct ids for the masked input tokens.
```python
def bert_module(query, key, value, i):
# Multi headed self-attention
attention_output = layers.MultiHeadAttention(
num_heads=config.NUM_HEAD,
key_dim=config.EMBED_DIM // config.NUM_HEAD,
name="encoder_{}/multiheadattention".format(i),
)(query, key, value)
attention_output = layers.Dropout(0.1, name="encoder_{}/att_dropout".format(i))(
attention_output
)
attention_output = layers.LayerNormalization(
epsilon=1e-6, name="encoder_{}/att_layernormalization".format(i)
)(query + attention_output)
# Feed-forward layer
ffn = keras.Sequential(
[
layers.Dense(config.FF_DIM, activation="relu"),
layers.Dense(config.EMBED_DIM),
],
name="encoder_{}/ffn".format(i),
)
ffn_output = ffn(attention_output)
ffn_output = layers.Dropout(0.1, name="encoder_{}/ffn_dropout".format(i))(
ffn_output
)
sequence_output = layers.LayerNormalization(
epsilon=1e-6, name="encoder_{}/ffn_layernormalization".format(i)
)(attention_output + ffn_output)
return sequence_output
def get_pos_encoding_matrix(max_len, d_emb):
pos_enc = np.array(
[
[pos / np.power(10000, 2 * (j // 2) / d_emb) for j in range(d_emb)]
if pos != 0
else np.zeros(d_emb)
for pos in range(max_len)
]
)
pos_enc[1:, 0::2] = np.sin(pos_enc[1:, 0::2]) # dim 2i
pos_enc[1:, 1::2] = np.cos(pos_enc[1:, 1::2]) # dim 2i+1
return pos_enc
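# Shape check (illustrative, not in the original): one row of sinusoidal encodings per
# position and one column per embedding dimension, with position 0 left as all zeros.
print(get_pos_encoding_matrix(config.MAX_LEN, config.EMBED_DIM).shape)  # (256, 128)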
loss_fn = keras.losses.SparseCategoricalCrossentropy(
reduction=tf.keras.losses.Reduction.NONE
)
loss_tracker = tf.keras.metrics.Mean(name="loss")
class MaskedLanguageModel(tf.keras.Model):
def train_step(self, inputs):
if len(inputs) == 3:
features, labels, sample_weight = inputs
else:
features, labels = inputs
sample_weight = None
with tf.GradientTape() as tape:
predictions = self(features, training=True)
loss = loss_fn(labels, predictions, sample_weight=sample_weight)
# Compute gradients
trainable_vars = self.trainable_variables
gradients = tape.gradient(loss, trainable_vars)
# Update weights
self.optimizer.apply_gradients(zip(gradients, trainable_vars))
# Compute our own metrics
loss_tracker.update_state(loss, sample_weight=sample_weight)
# Return a dict mapping metric names to current value
return {"loss": loss_tracker.result()}
@property
def metrics(self):
# We list our `Metric` objects here so that `reset_states()` can be
# called automatically at the start of each epoch
# or at the start of `evaluate()`.
# If you don't implement this property, you have to call
# `reset_states()` yourself at the time of your choosing.
return [loss_tracker]
def create_masked_language_bert_model():
inputs = layers.Input((config.MAX_LEN,), dtype=tf.int64)
word_embeddings = layers.Embedding(
config.VOCAB_SIZE, config.EMBED_DIM, name="word_embedding"
)(inputs)
position_embeddings = layers.Embedding(
input_dim=config.MAX_LEN,
output_dim=config.EMBED_DIM,
weights=[get_pos_encoding_matrix(config.MAX_LEN, config.EMBED_DIM)],
name="position_embedding",
)(tf.range(start=0, limit=config.MAX_LEN, delta=1))
embeddings = word_embeddings + position_embeddings
encoder_output = embeddings
for i in range(config.NUM_LAYERS):
encoder_output = bert_module(encoder_output, encoder_output, encoder_output, i)
mlm_output = layers.Dense(config.VOCAB_SIZE, name="mlm_cls", activation="softmax")(
encoder_output
)
mlm_model = MaskedLanguageModel(inputs, mlm_output, name="masked_bert_model")
optimizer = keras.optimizers.Adam(learning_rate=config.LR)
mlm_model.compile(optimizer=optimizer)
return mlm_model
id2token = dict(enumerate(vectorize_layer.get_vocabulary()))
token2id = {y: x for x, y in id2token.items()}
class MaskedTextGenerator(keras.callbacks.Callback):
def __init__(self, sample_tokens, top_k=5):
self.sample_tokens = sample_tokens
self.k = top_k
def decode(self, tokens):
return " ".join([id2token[t] for t in tokens if t != 0])
def convert_ids_to_tokens(self, id):
return id2token[id]
def on_epoch_end(self, epoch, logs=None):
prediction = self.model.predict(self.sample_tokens)
masked_index = np.where(self.sample_tokens == mask_token_id)
masked_index = masked_index[1]
mask_prediction = prediction[0][masked_index]
top_indices = mask_prediction[0].argsort()[-self.k :][::-1]
values = mask_prediction[0][top_indices]
for i in range(len(top_indices)):
p = top_indices[i]
v = values[i]
tokens = np.copy(sample_tokens[0])
tokens[masked_index[0]] = p
result = {
"input_text": self.decode(sample_tokens[0].numpy()),
"prediction": self.decode(tokens),
"probability": v,
"predicted mask token": self.convert_ids_to_tokens(p),
}
pprint(result)
sample_tokens = vectorize_layer(["I have watched this [mask] and it was awesome"])
generator_callback = MaskedTextGenerator(sample_tokens.numpy())
bert_masked_model = create_masked_language_bert_model()
bert_masked_model.summary()
```
<div class="k-default-codeblock">
```
Model: "masked_bert_model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 256)] 0
__________________________________________________________________________________________________
word_embedding (Embedding) (None, 256, 128) 3840000 input_1[0][0]
__________________________________________________________________________________________________
tf.__operators__.add (TFOpLambd (None, 256, 128) 0 word_embedding[0][0]
__________________________________________________________________________________________________
encoder_0/multiheadattention (M (None, 256, 128) 66048 tf.__operators__.add[0][0]
tf.__operators__.add[0][0]
tf.__operators__.add[0][0]
__________________________________________________________________________________________________
encoder_0/att_dropout (Dropout) (None, 256, 128) 0 encoder_0/multiheadattention[0][0
__________________________________________________________________________________________________
tf.__operators__.add_1 (TFOpLam (None, 256, 128) 0 tf.__operators__.add[0][0]
encoder_0/att_dropout[0][0]
__________________________________________________________________________________________________
encoder_0/att_layernormalizatio (None, 256, 128) 256 tf.__operators__.add_1[0][0]
__________________________________________________________________________________________________
encoder_0/ffn (Sequential) (None, 256, 128) 33024 encoder_0/att_layernormalization[
__________________________________________________________________________________________________
encoder_0/ffn_dropout (Dropout) (None, 256, 128) 0 encoder_0/ffn[0][0]
__________________________________________________________________________________________________
tf.__operators__.add_2 (TFOpLam (None, 256, 128) 0 encoder_0/att_layernormalization[
encoder_0/ffn_dropout[0][0]
__________________________________________________________________________________________________
encoder_0/ffn_layernormalizatio (None, 256, 128) 256 tf.__operators__.add_2[0][0]
__________________________________________________________________________________________________
mlm_cls (Dense) (None, 256, 30000) 3870000 encoder_0/ffn_layernormalization[
==================================================================================================
Total params: 7,809,584
Trainable params: 7,809,584
Non-trainable params: 0
__________________________________________________________________________________________________
```
</div>
---
## Train and Save
```python
bert_masked_model.fit(mlm_ds, epochs=5, callbacks=[generator_callback])
bert_masked_model.save("bert_mlm_imdb.h5")
```
<div class="k-default-codeblock">
```
Epoch 1/5
1563/1563 [==============================] - ETA: 0s - loss: 7.0111{'input_text': 'i have watched this [mask] and it was awesome',
'predicted mask token': 'this',
'prediction': 'i have watched this this and it was awesome',
'probability': 0.086307295}
{'input_text': 'i have watched this [mask] and it was awesome',
'predicted mask token': 'i',
'prediction': 'i have watched this i and it was awesome',
'probability': 0.066265985}
{'input_text': 'i have watched this [mask] and it was awesome',
'predicted mask token': 'movie',
'prediction': 'i have watched this movie and it was awesome',
'probability': 0.044195656}
{'input_text': 'i have watched this [mask] and it was awesome',
'predicted mask token': 'a',
'prediction': 'i have watched this a and it was awesome',
'probability': 0.04020928}
{'input_text': 'i have watched this [mask] and it was awesome',
'predicted mask token': 'was',
'prediction': 'i have watched this was and it was awesome',
'probability': 0.027878676}
1563/1563 [==============================] - 661s 423ms/step - loss: 7.0111
Epoch 2/5
1563/1563 [==============================] - ETA: 0s - loss: 6.4498{'input_text': 'i have watched this [mask] and it was awesome',
'predicted mask token': 'movie',
'prediction': 'i have watched this movie and it was awesome',
'probability': 0.44448906}
{'input_text': 'i have watched this [mask] and it was awesome',
'predicted mask token': 'film',
'prediction': 'i have watched this film and it was awesome',
'probability': 0.1507494}
{'input_text': 'i have watched this [mask] and it was awesome',
'predicted mask token': 'is',
'prediction': 'i have watched this is and it was awesome',
'probability': 0.06385628}
{'input_text': 'i have watched this [mask] and it was awesome',
'predicted mask token': 'one',
'prediction': 'i have watched this one and it was awesome',
'probability': 0.023549262}
{'input_text': 'i have watched this [mask] and it was awesome',
'predicted mask token': 'was',
'prediction': 'i have watched this was and it was awesome',
'probability': 0.022277055}
1563/1563 [==============================] - 660s 422ms/step - loss: 6.4498
Epoch 3/5
1563/1563 [==============================] - ETA: 0s - loss: 5.8709{'input_text': 'i have watched this [mask] and it was awesome',
'predicted mask token': 'movie',
'prediction': 'i have watched this movie and it was awesome',
'probability': 0.4759983}
{'input_text': 'i have watched this [mask] and it was awesome',
'predicted mask token': 'film',
'prediction': 'i have watched this film and it was awesome',
'probability': 0.18642229}
{'input_text': 'i have watched this [mask] and it was awesome',
'predicted mask token': 'one',
'prediction': 'i have watched this one and it was awesome',
'probability': 0.045611132}
{'input_text': 'i have watched this [mask] and it was awesome',
'predicted mask token': 'is',
'prediction': 'i have watched this is and it was awesome',
'probability': 0.028308254}
{'input_text': 'i have watched this [mask] and it was awesome',
'predicted mask token': 'series',
'prediction': 'i have watched this series and it was awesome',
'probability': 0.027862877}
1563/1563 [==============================] - 661s 423ms/step - loss: 5.8709
Epoch 4/5
771/1563 [=============>................] - ETA: 5:35 - loss: 5.3782
```
</div>
---
## Fine-tune a sentiment classification model
We will fine-tune our self-supervised model on a downstream task of sentiment classification.
To do this, let's create a classifier by adding a pooling layer and a `Dense` layer on top of the
pretrained BERT features.
```python
# Load pretrained bert model
mlm_model = keras.models.load_model(
"bert_mlm_imdb.h5", custom_objects={"MaskedLanguageModel": MaskedLanguageModel}
)
pretrained_bert_model = tf.keras.Model(
mlm_model.input, mlm_model.get_layer("encoder_0/ffn_layernormalization").output
)
# Freeze it
pretrained_bert_model.trainable = False
def create_classifier_bert_model():
inputs = layers.Input((config.MAX_LEN,), dtype=tf.int64)
sequence_output = pretrained_bert_model(inputs)
pooled_output = layers.GlobalMaxPooling1D()(sequence_output)
hidden_layer = layers.Dense(64, activation="relu")(pooled_output)
outputs = layers.Dense(1, activation="sigmoid")(hidden_layer)
classifer_model = keras.Model(inputs, outputs, name="classification")
optimizer = keras.optimizers.Adam()
classifer_model.compile(
optimizer=optimizer, loss="binary_crossentropy", metrics=["accuracy"]
)
return classifer_model
classifer_model = create_classifier_bert_model()
classifer_model.summary()
# Train the classifier with frozen BERT stage
classifer_model.fit(
train_classifier_ds,
epochs=5,
validation_data=test_classifier_ds,
)
# Unfreeze the BERT model for fine-tuning
pretrained_bert_model.trainable = True
optimizer = keras.optimizers.Adam()
classifer_model.compile(
optimizer=optimizer, loss="binary_crossentropy", metrics=["accuracy"]
)
classifer_model.fit(
train_classifier_ds,
epochs=5,
validation_data=test_classifier_ds,
)
```
<div class="k-default-codeblock">
```
Model: "classification"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, 256)] 0
_________________________________________________________________
model (Functional) (None, 256, 128) 3939584
_________________________________________________________________
global_max_pooling1d (Global (None, 128) 0
_________________________________________________________________
dense_2 (Dense) (None, 64) 8256
_________________________________________________________________
dense_3 (Dense) (None, 1) 65
=================================================================
Total params: 3,947,905
Trainable params: 8,321
Non-trainable params: 3,939,584
_________________________________________________________________
Epoch 1/5
782/782 [==============================] - 15s 19ms/step - loss: 0.8096 - accuracy: 0.5498 - val_loss: 0.6406 - val_accuracy: 0.6329
Epoch 2/5
782/782 [==============================] - 14s 18ms/step - loss: 0.6551 - accuracy: 0.6220 - val_loss: 0.6423 - val_accuracy: 0.6338
Epoch 3/5
782/782 [==============================] - 14s 18ms/step - loss: 0.6473 - accuracy: 0.6310 - val_loss: 0.6380 - val_accuracy: 0.6350
Epoch 4/5
782/782 [==============================] - 14s 18ms/step - loss: 0.6307 - accuracy: 0.6471 - val_loss: 0.6432 - val_accuracy: 0.6312
Epoch 5/5
782/782 [==============================] - 14s 18ms/step - loss: 0.6278 - accuracy: 0.6465 - val_loss: 0.6107 - val_accuracy: 0.6678
Epoch 1/5
782/782 [==============================] - 46s 59ms/step - loss: 0.5234 - accuracy: 0.7373 - val_loss: 0.3533 - val_accuracy: 0.8427
Epoch 2/5
782/782 [==============================] - 45s 57ms/step - loss: 0.2808 - accuracy: 0.8814 - val_loss: 0.3252 - val_accuracy: 0.8633
Epoch 3/5
782/782 [==============================] - 43s 55ms/step - loss: 0.1493 - accuracy: 0.9413 - val_loss: 0.4374 - val_accuracy: 0.8486
Epoch 4/5
782/782 [==============================] - 43s 55ms/step - loss: 0.0600 - accuracy: 0.9803 - val_loss: 0.6422 - val_accuracy: 0.8380
Epoch 5/5
782/782 [==============================] - 43s 55ms/step - loss: 0.0305 - accuracy: 0.9893 - val_loss: 0.6064 - val_accuracy: 0.8440
<tensorflow.python.keras.callbacks.History at 0x7f35af4367f0>
```
</div>
---
## Create an end-to-end model and evaluate it
When you want to deploy a model, it's best if it already includes its preprocessing
pipeline, so that you don't have to reimplement the preprocessing logic in your
production environment. Let's create an end-to-end model that incorporates
the `TextVectorization` layer, and let's evaluate. Our model will accept raw strings
as input.
```python
def get_end_to_end(model):
inputs_string = keras.Input(shape=(1,), dtype="string")
indices = vectorize_layer(inputs_string)
outputs = model(indices)
end_to_end_model = keras.Model(inputs_string, outputs, name="end_to_end_model")
optimizer = keras.optimizers.Adam(learning_rate=config.LR)
end_to_end_model.compile(
optimizer=optimizer, loss="binary_crossentropy", metrics=["accuracy"]
)
return end_to_end_model
end_to_end_classification_model = get_end_to_end(classifer_model)
end_to_end_classification_model.evaluate(test_raw_classifier_ds)
```
<div class="k-default-codeblock">
```
782/782 [==============================] - 8s 11ms/step - loss: 0.5967 - accuracy: 0.8446
[0.6064175963401794, 0.8439599871635437]
```
</div>
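The end-to-end model can also be used for inference on raw strings. The snippet below is
an illustrative sketch (it was not part of the original run); the single sigmoid output is
the predicted probability of label `1`, which corresponds to the `neg` folder in the
dataframe built earlier.

```python
sample_review = tf.constant([["this movie was absolutely wonderful"]])
print(end_to_end_classification_model.predict(sample_review))
```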
"file_path": "keras-io/examples/nlp/md/masked_language_modeling.md",
"repo_id": "keras-io",
"token_count": 10033
} | 109 |
"""
Title: Semantic Similarity with BERT
Author: [Mohamad Merchant](https://twitter.com/mohmadmerchant1)
Date created: 2020/08/15
Last modified: 2020/08/29
Description: Natural Language Inference by fine-tuning BERT model on SNLI Corpus.
Accelerator: GPU
"""
"""
## Introduction
Semantic Similarity is the task of determining how similar
two sentences are, in terms of what they mean.
This example demonstrates the use of SNLI (Stanford Natural Language Inference) Corpus
to predict sentence semantic similarity with Transformers.
We will fine-tune a BERT model that takes two sentences as inputs
and that outputs a similarity score for these two sentences.
### References
* [BERT](https://arxiv.org/pdf/1810.04805.pdf)
* [SNLI](https://nlp.stanford.edu/projects/snli/)
"""
"""
## Setup
Note: install HuggingFace `transformers` via `pip install transformers` (version >= 2.11.0).
"""
import numpy as np
import pandas as pd
import tensorflow as tf
import transformers
"""
## Configuration
"""
max_length = 128 # Maximum length of input sentence to the model.
batch_size = 32
epochs = 2
# Labels in our dataset.
labels = ["contradiction", "entailment", "neutral"]
"""
## Load the Data
"""
"""shell
curl -LO https://raw.githubusercontent.com/MohamadMerchant/SNLI/master/data.tar.gz
tar -xvzf data.tar.gz
"""
# There are more than 550k samples in total; we will use 100k for this example.
train_df = pd.read_csv("SNLI_Corpus/snli_1.0_train.csv", nrows=100000)
valid_df = pd.read_csv("SNLI_Corpus/snli_1.0_dev.csv")
test_df = pd.read_csv("SNLI_Corpus/snli_1.0_test.csv")
# Shape of the data
print(f"Total train samples : {train_df.shape[0]}")
print(f"Total validation samples: {valid_df.shape[0]}")
print(f"Total test samples: {valid_df.shape[0]}")
"""
Dataset Overview:
- sentence1: The premise caption that was supplied to the author of the pair.
- sentence2: The hypothesis caption that was written by the author of the pair.
- similarity: This is the label chosen by the majority of annotators.
Where no majority exists, the label "-" is used (we will skip such samples here).
Here are the "similarity" label values in our dataset:
- Contradiction: The sentences share no similarity.
- Entailment: The sentences have similar meaning.
- Neutral: The sentences are neutral.
"""
"""
Let's look at one sample from the dataset:
"""
print(f"Sentence1: {train_df.loc[1, 'sentence1']}")
print(f"Sentence2: {train_df.loc[1, 'sentence2']}")
print(f"Similarity: {train_df.loc[1, 'similarity']}")
"""
## Preprocessing
"""
# We have some NaN entries in our train data, we will simply drop them.
print("Number of missing values")
print(train_df.isnull().sum())
train_df.dropna(axis=0, inplace=True)
"""
Distribution of our training targets.
"""
print("Train Target Distribution")
print(train_df.similarity.value_counts())
"""
Distribution of our validation targets.
"""
print("Validation Target Distribution")
print(valid_df.similarity.value_counts())
"""
The value "-" appears as part of our training and validation targets.
We will skip these samples.
"""
train_df = (
train_df[train_df.similarity != "-"]
.sample(frac=1.0, random_state=42)
.reset_index(drop=True)
)
valid_df = (
valid_df[valid_df.similarity != "-"]
.sample(frac=1.0, random_state=42)
.reset_index(drop=True)
)
"""
One-hot encode training, validation, and test labels.
"""
train_df["label"] = train_df["similarity"].apply(
lambda x: 0 if x == "contradiction" else 1 if x == "entailment" else 2
)
y_train = tf.keras.utils.to_categorical(train_df.label, num_classes=3)
valid_df["label"] = valid_df["similarity"].apply(
lambda x: 0 if x == "contradiction" else 1 if x == "entailment" else 2
)
y_val = tf.keras.utils.to_categorical(valid_df.label, num_classes=3)
test_df["label"] = test_df["similarity"].apply(
lambda x: 0 if x == "contradiction" else 1 if x == "entailment" else 2
)
y_test = tf.keras.utils.to_categorical(test_df.label, num_classes=3)
"""
## Create a custom data generator
"""
class BertSemanticDataGenerator(tf.keras.utils.Sequence):
"""Generates batches of data.
Args:
sentence_pairs: Array of premise and hypothesis input sentences.
labels: Array of labels.
batch_size: Integer batch size.
shuffle: boolean, whether to shuffle the data.
        include_targets: boolean, whether to include the labels.
Returns:
        Tuples `([input_ids, attention_mask, token_type_ids], labels)`
        (or just `[input_ids, attention_mask, token_type_ids]`
        if `include_targets=False`)
"""
def __init__(
self,
sentence_pairs,
labels,
batch_size=batch_size,
shuffle=True,
include_targets=True,
):
self.sentence_pairs = sentence_pairs
self.labels = labels
self.shuffle = shuffle
self.batch_size = batch_size
self.include_targets = include_targets
# Load our BERT Tokenizer to encode the text.
        # We will use the bert-base-uncased pretrained model.
self.tokenizer = transformers.BertTokenizer.from_pretrained(
"bert-base-uncased", do_lower_case=True
)
self.indexes = np.arange(len(self.sentence_pairs))
self.on_epoch_end()
def __len__(self):
# Denotes the number of batches per epoch.
return len(self.sentence_pairs) // self.batch_size
def __getitem__(self, idx):
# Retrieves the batch of index.
indexes = self.indexes[idx * self.batch_size : (idx + 1) * self.batch_size]
sentence_pairs = self.sentence_pairs[indexes]
# With BERT tokenizer's batch_encode_plus batch of both the sentences are
# encoded together and separated by [SEP] token.
encoded = self.tokenizer.batch_encode_plus(
sentence_pairs.tolist(),
add_special_tokens=True,
max_length=max_length,
return_attention_mask=True,
return_token_type_ids=True,
pad_to_max_length=True,
return_tensors="tf",
)
# Convert batch of encoded features to numpy array.
input_ids = np.array(encoded["input_ids"], dtype="int32")
attention_masks = np.array(encoded["attention_mask"], dtype="int32")
token_type_ids = np.array(encoded["token_type_ids"], dtype="int32")
# Set to true if data generator is used for training/validation.
if self.include_targets:
labels = np.array(self.labels[indexes], dtype="int32")
return [input_ids, attention_masks, token_type_ids], labels
else:
return [input_ids, attention_masks, token_type_ids]
def on_epoch_end(self):
# Shuffle indexes after each epoch if shuffle is set to True.
if self.shuffle:
np.random.RandomState(42).shuffle(self.indexes)
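"""
To make the expected batch structure concrete, here is a small illustrative check
(an addition for clarity, not part of the original example): one batch consists of three
`(batch_size, max_length)` integer arrays plus the one-hot labels.
"""

sample_batch = BertSemanticDataGenerator(
    train_df[["sentence1", "sentence2"]].values.astype("str"), y_train
)[0]
(sample_ids, sample_masks, sample_types), sample_labels = sample_batch
print(sample_ids.shape, sample_masks.shape, sample_types.shape, sample_labels.shape)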
"""
## Build the model
"""
# Create the model under a distribution strategy scope.
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
# Encoded token ids from BERT tokenizer.
input_ids = tf.keras.layers.Input(
shape=(max_length,), dtype=tf.int32, name="input_ids"
)
# Attention masks indicates to the model which tokens should be attended to.
attention_masks = tf.keras.layers.Input(
shape=(max_length,), dtype=tf.int32, name="attention_masks"
)
# Token type ids are binary masks identifying different sequences in the model.
token_type_ids = tf.keras.layers.Input(
shape=(max_length,), dtype=tf.int32, name="token_type_ids"
)
# Loading pretrained BERT model.
bert_model = transformers.TFBertModel.from_pretrained("bert-base-uncased")
# Freeze the BERT model to reuse the pretrained features without modifying them.
bert_model.trainable = False
bert_output = bert_model.bert(
input_ids, attention_mask=attention_masks, token_type_ids=token_type_ids
)
sequence_output = bert_output.last_hidden_state
pooled_output = bert_output.pooler_output
# Add trainable layers on top of frozen layers to adapt the pretrained features on the new data.
bi_lstm = tf.keras.layers.Bidirectional(
tf.keras.layers.LSTM(64, return_sequences=True)
)(sequence_output)
# Applying hybrid pooling approach to bi_lstm sequence output.
avg_pool = tf.keras.layers.GlobalAveragePooling1D()(bi_lstm)
max_pool = tf.keras.layers.GlobalMaxPooling1D()(bi_lstm)
concat = tf.keras.layers.concatenate([avg_pool, max_pool])
dropout = tf.keras.layers.Dropout(0.3)(concat)
output = tf.keras.layers.Dense(3, activation="softmax")(dropout)
model = tf.keras.models.Model(
inputs=[input_ids, attention_masks, token_type_ids], outputs=output
)
model.compile(
optimizer=tf.keras.optimizers.Adam(),
loss="categorical_crossentropy",
metrics=["acc"],
)
print(f"Strategy: {strategy}")
model.summary()
"""
Create train and validation data generators
"""
train_data = BertSemanticDataGenerator(
train_df[["sentence1", "sentence2"]].values.astype("str"),
y_train,
batch_size=batch_size,
shuffle=True,
)
valid_data = BertSemanticDataGenerator(
valid_df[["sentence1", "sentence2"]].values.astype("str"),
y_val,
batch_size=batch_size,
shuffle=False,
)
"""
## Train the Model
Training is done only for the top layers to perform "feature extraction",
which will allow the model to use the representations of the pretrained model.
"""
history = model.fit(
train_data,
validation_data=valid_data,
epochs=epochs,
use_multiprocessing=True,
workers=-1,
)
"""
## Fine-tuning
This step must only be performed after the feature extraction model has
been trained to convergence on the new data.
This is an optional last step where `bert_model` is unfrozen and retrained
with a very low learning rate. This can deliver meaningful improvement by
incrementally adapting the pretrained features to the new data.
"""
# Unfreeze the bert_model.
bert_model.trainable = True
# Recompile the model to make the change effective.
model.compile(
optimizer=tf.keras.optimizers.Adam(1e-5),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
model.summary()
"""
## Train the entire model end-to-end
"""
history = model.fit(
train_data,
validation_data=valid_data,
epochs=epochs,
use_multiprocessing=True,
workers=-1,
)
"""
## Evaluate model on the test set
"""
test_data = BertSemanticDataGenerator(
test_df[["sentence1", "sentence2"]].values.astype("str"),
y_test,
batch_size=batch_size,
shuffle=False,
)
model.evaluate(test_data, verbose=1)
"""
## Inference on custom sentences
"""
def check_similarity(sentence1, sentence2):
sentence_pairs = np.array([[str(sentence1), str(sentence2)]])
test_data = BertSemanticDataGenerator(
sentence_pairs,
labels=None,
batch_size=1,
shuffle=False,
include_targets=False,
)
proba = model.predict(test_data[0])[0]
idx = np.argmax(proba)
proba = f"{proba[idx]: .2f}%"
pred = labels[idx]
return pred, proba
"""
Check results on some example sentence pairs.
"""
sentence1 = "Two women are observing something together."
sentence2 = "Two women are standing with their eyes closed."
check_similarity(sentence1, sentence2)
"""
Check results on some example sentence pairs.
"""
sentence1 = "A smiling costumed woman is holding an umbrella"
sentence2 = "A happy woman in a fairy costume holds an umbrella"
check_similarity(sentence1, sentence2)
"""
Check results on some example sentence pairs
"""
sentence1 = "A soccer game with multiple males playing"
sentence2 = "Some men are playing a sport"
check_similarity(sentence1, sentence2)
"""
Example available on HuggingFace
| Trained Model | Demo |
| :--: | :--: |
| [![Generic badge](https://img.shields.io/badge/%F0%9F%A4%97%20Model-semantic%20similarity%20with%20bert-black.svg)](https://huggingface.co/keras-io/bert-semantic-similarity) | [![Generic badge](https://img.shields.io/badge/%F0%9F%A4%97%20Spaces-semantic%20similarity%20with%20bert-black.svg)](https://huggingface.co/spaces/keras-io/bert-semantic-similarity) |
"""
<jupyter_start><jupyter_text>Proximal Policy Optimization**Author:** [Ilias Chrysovergis](https://twitter.com/iliachry)**Date created:** 2021/06/24**Last modified:** 2021/06/24**Description:** Implementation of a Proximal Policy Optimization agent for the CartPole-v0 environment. IntroductionThis code example solves the CartPole-v0 environment using a Proximal Policy Optimization (PPO) agent. CartPole-v0A pole is attached by an un-actuated joint to a cart, which moves along a frictionless track.The system is controlled by applying a force of +1 or -1 to the cart.The pendulum starts upright, and the goal is to prevent it from falling over.A reward of +1 is provided for every timestep that the pole remains upright.The episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center.After 200 steps the episode ends. Thus, the highest return we can get is equal to 200.[CartPole-v0](https://gym.openai.com/envs/CartPole-v0/) Proximal Policy OptimizationPPO is a policy gradient method and can be used for environments with either discrete or continuous action spaces.It trains a stochastic policy in an on-policy way. Also, it utilizes the actor critic method. The actor maps theobservation to an action and the critic gives an expectation of the rewards of the agent for the observation given.Firstly, it collects a set of trajectories for each epoch by sampling from the latest version of the stochastic policy.Then, the rewards-to-go and the advantage estimates are computed in order to update the policy and fit the value function.The policy is updated via a stochastic gradient ascent optimizer, while the value function is fitted via some gradient descent algorithm.This procedure is applied for many epochs until the environment is solved.- [PPO Original Paper](https://arxiv.org/pdf/1707.06347.pdf)- [OpenAI Spinning Up docs - PPO](https://spinningup.openai.com/en/latest/algorithms/ppo.html) NoteThis code example uses Keras and Tensorflow v2. It is based on the PPO Original Paper,the OpenAI's Spinning Up docs for PPO, and the OpenAI's Spinning Up implementation of PPO using Tensorflow v1.[OpenAI Spinning Up Github - PPO](https://github.com/openai/spinningup/blob/master/spinup/algos/tf1/ppo/ppo.py) LibrariesFor this example the following libraries are used:1. `numpy` for n-dimensional arrays2. `tensorflow` and `keras` for building the deep RL PPO agent3. `gym` for getting everything we need about the environment4. `scipy.signal` for calculating the discounted cumulative sums of vectors<jupyter_code>import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import gym
import scipy.signal
import time<jupyter_output><empty_output><jupyter_text>Functions and class<jupyter_code>def discounted_cumulative_sums(x, discount):
# Discounted cumulative sums of vectors for computing rewards-to-go and advantage estimates
return scipy.signal.lfilter([1], [1, float(-discount)], x[::-1], axis=0)[::-1]
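# Illustrative note (not part of the original agent code): the `lfilter` call above
# is equivalent to the explicit backward recursion out[t] = x[t] + discount * out[t + 1];
# for example, discounted_cumulative_sums(np.array([1.0, 1.0, 1.0]), 0.5)
# returns approximately [1.75, 1.5, 1.0].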
class Buffer:
# Buffer for storing trajectories
def __init__(self, observation_dimensions, size, gamma=0.99, lam=0.95):
# Buffer initialization
self.observation_buffer = np.zeros(
(size, observation_dimensions), dtype=np.float32
)
self.action_buffer = np.zeros(size, dtype=np.int32)
self.advantage_buffer = np.zeros(size, dtype=np.float32)
self.reward_buffer = np.zeros(size, dtype=np.float32)
self.return_buffer = np.zeros(size, dtype=np.float32)
self.value_buffer = np.zeros(size, dtype=np.float32)
self.logprobability_buffer = np.zeros(size, dtype=np.float32)
self.gamma, self.lam = gamma, lam
self.pointer, self.trajectory_start_index = 0, 0
def store(self, observation, action, reward, value, logprobability):
# Append one step of agent-environment interaction
self.observation_buffer[self.pointer] = observation
self.action_buffer[self.pointer] = action
self.reward_buffer[self.pointer] = reward
self.value_buffer[self.pointer] = value
self.logprobability_buffer[self.pointer] = logprobability
self.pointer += 1
def finish_trajectory(self, last_value=0):
# Finish the trajectory by computing advantage estimates and rewards-to-go
path_slice = slice(self.trajectory_start_index, self.pointer)
rewards = np.append(self.reward_buffer[path_slice], last_value)
values = np.append(self.value_buffer[path_slice], last_value)
deltas = rewards[:-1] + self.gamma * values[1:] - values[:-1]
self.advantage_buffer[path_slice] = discounted_cumulative_sums(
deltas, self.gamma * self.lam
)
self.return_buffer[path_slice] = discounted_cumulative_sums(
rewards, self.gamma
)[:-1]
self.trajectory_start_index = self.pointer
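# Illustrative note: the two discounted sums above implement
#   advantage_t = sum_{l >= 0} (gamma * lam)^l * delta_{t + l}   (generalized advantage estimation)
#   return_t    = sum_{l >= 0} gamma^l * reward_{t + l}          (rewards-to-go)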
def get(self):
# Get all data of the buffer and normalize the advantages
self.pointer, self.trajectory_start_index = 0, 0
advantage_mean, advantage_std = (
np.mean(self.advantage_buffer),
np.std(self.advantage_buffer),
)
self.advantage_buffer = (self.advantage_buffer - advantage_mean) / advantage_std
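# Illustrative note: standardizing the advantages to zero mean and unit variance
# keeps the scale of the policy-gradient updates consistent across epochs.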
return (
self.observation_buffer,
self.action_buffer,
self.advantage_buffer,
self.return_buffer,
self.logprobability_buffer,
)
def mlp(x, sizes, activation=tf.tanh, output_activation=None):
# Build a feedforward neural network
for size in sizes[:-1]:
x = layers.Dense(units=size, activation=activation)(x)
return layers.Dense(units=sizes[-1], activation=output_activation)(x)
def logprobabilities(logits, a):
# Compute the log-probabilities of taking actions a by using the logits (i.e. the output of the actor)
logprobabilities_all = tf.nn.log_softmax(logits)
logprobability = tf.reduce_sum(
tf.one_hot(a, num_actions) * logprobabilities_all, axis=1
)
return logprobability
# Sample action from actor
@tf.function
def sample_action(observation):
logits = actor(observation)
action = tf.squeeze(tf.random.categorical(logits, 1), axis=1)
return logits, action
# Train the policy by maximizing the PPO-Clip objective
@tf.function
def train_policy(
observation_buffer, action_buffer, logprobability_buffer, advantage_buffer
):
with tf.GradientTape() as tape: # Record operations for automatic differentiation.
ratio = tf.exp(
logprobabilities(actor(observation_buffer), action_buffer)
- logprobability_buffer
)
min_advantage = tf.where(
advantage_buffer > 0,
(1 + clip_ratio) * advantage_buffer,
(1 - clip_ratio) * advantage_buffer,
)
policy_loss = -tf.reduce_mean(
tf.minimum(ratio * advantage_buffer, min_advantage)
)
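# Illustrative note: `min_advantage` together with the `minimum` above implements
# the PPO-Clip objective
#   L = E[min(r_t * A_t, clip(r_t, 1 - clip_ratio, 1 + clip_ratio) * A_t)]
# where r_t is the new-to-old policy probability ratio.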
policy_grads = tape.gradient(policy_loss, actor.trainable_variables)
policy_optimizer.apply_gradients(zip(policy_grads, actor.trainable_variables))
kl = tf.reduce_mean(
logprobability_buffer
- logprobabilities(actor(observation_buffer), action_buffer)
)
kl = tf.reduce_sum(kl)
return kl
# Train the value function by regression on mean-squared error
@tf.function
def train_value_function(observation_buffer, return_buffer):
with tf.GradientTape() as tape: # Record operations for automatic differentiation.
value_loss = tf.reduce_mean((return_buffer - critic(observation_buffer)) ** 2)
value_grads = tape.gradient(value_loss, critic.trainable_variables)
value_optimizer.apply_gradients(zip(value_grads, critic.trainable_variables))<jupyter_output><empty_output><jupyter_text>Hyperparameters<jupyter_code># Hyperparameters of the PPO algorithm
steps_per_epoch = 4000
epochs = 30
gamma = 0.99
clip_ratio = 0.2
policy_learning_rate = 3e-4
value_function_learning_rate = 1e-3
train_policy_iterations = 80
train_value_iterations = 80
lam = 0.97
target_kl = 0.01
hidden_sizes = (64, 64)
# True if you want to render the environment
render = False<jupyter_output><empty_output><jupyter_text>Initializations<jupyter_code># Initialize the environment and get the dimensionality of the
# observation space and the number of possible actions
env = gym.make("CartPole-v0")
observation_dimensions = env.observation_space.shape[0]
num_actions = env.action_space.n
# Initialize the buffer
buffer = Buffer(observation_dimensions, steps_per_epoch)
# Initialize the actor and the critic as keras models
observation_input = keras.Input(shape=(observation_dimensions,), dtype=tf.float32)
logits = mlp(observation_input, list(hidden_sizes) + [num_actions], tf.tanh, None)
actor = keras.Model(inputs=observation_input, outputs=logits)
value = tf.squeeze(
mlp(observation_input, list(hidden_sizes) + [1], tf.tanh, None), axis=1
)
critic = keras.Model(inputs=observation_input, outputs=value)
# Initialize the policy and the value function optimizers
policy_optimizer = keras.optimizers.Adam(learning_rate=policy_learning_rate)
value_optimizer = keras.optimizers.Adam(learning_rate=value_function_learning_rate)
# Initialize the observation, episode return and episode length
observation, episode_return, episode_length = env.reset(), 0, 0<jupyter_output><empty_output><jupyter_text>Train<jupyter_code># Iterate over the number of epochs
for epoch in range(epochs):
# Initialize the sum of the returns, lengths and number of episodes for each epoch
sum_return = 0
sum_length = 0
num_episodes = 0
# Iterate over the steps of each epoch
for t in range(steps_per_epoch):
if render:
env.render()
# Get the logits, action, and take one step in the environment
observation = observation.reshape(1, -1)
logits, action = sample_action(observation)
observation_new, reward, done, _ = env.step(action[0].numpy())
episode_return += reward
episode_length += 1
# Get the value and log-probability of the action
value_t = critic(observation)
logprobability_t = logprobabilities(logits, action)
# Store obs, act, rew, v_t, logp_pi_t
buffer.store(observation, action, reward, value_t, logprobability_t)
# Update the observation
observation = observation_new
# Finish the trajectory if a terminal state is reached
terminal = done
if terminal or (t == steps_per_epoch - 1):
last_value = 0 if done else critic(observation.reshape(1, -1))
buffer.finish_trajectory(last_value)
sum_return += episode_return
sum_length += episode_length
num_episodes += 1
observation, episode_return, episode_length = env.reset(), 0, 0
# Get values from the buffer
(
observation_buffer,
action_buffer,
advantage_buffer,
return_buffer,
logprobability_buffer,
) = buffer.get()
# Update the policy and implement early stopping using KL divergence
for _ in range(train_policy_iterations):
kl = train_policy(
observation_buffer, action_buffer, logprobability_buffer, advantage_buffer
)
if kl > 1.5 * target_kl:
# Early Stopping
break
# Update the value function
for _ in range(train_value_iterations):
train_value_function(observation_buffer, return_buffer)
# Print mean return and length for each epoch
print(
f" Epoch: {epoch + 1}. Mean Return: {sum_return / num_episodes}. Mean Length: {sum_length / num_episodes}"
)<jupyter_output><empty_output> | keras-io/examples/rl/ipynb/ppo_cartpole.ipynb/0 | {
"file_path": "keras-io/examples/rl/ipynb/ppo_cartpole.ipynb",
"repo_id": "keras-io",
"token_count": 4283
} | 111 |
<jupyter_start><jupyter_text>Structured data classification with FeatureSpace**Author:** [fchollet](https://twitter.com/fchollet)**Date created:** 2022/11/09**Last modified:** 2022/11/09**Description:** Classify tabular data in a few lines of code. IntroductionThis example demonstrates how to do structured data classification(also known as tabular data classification), starting from a rawCSV file. Our data includes numerical features,and integer categorical features, and string categorical features.We will use the utility `keras.utils.FeatureSpace` to index,preprocess, and encode our features.The code is adapted from the example[Structured data classification from scratch](https://keras.io/examples/structured_data/structured_data_classification_from_scratch/).While the previous example managed its own low-level feature preprocessing andencoding with Keras preprocessing layers, in this example wedelegate everything to `FeatureSpace`, making the workflowextremely quick and easy. The dataset[Our dataset](https://archive.ics.uci.edu/ml/datasets/heart+Disease) is provided by theCleveland Clinic Foundation for Heart Disease.It's a CSV file with 303 rows. Each row contains information about a patient (a**sample**), and each column describes an attribute of the patient (a **feature**). Weuse the features to predict whether a patient has a heart disease(**binary classification**).Here's the description of each feature:Column| Description| Feature Type------------|--------------------|----------------------Age | Age in years | NumericalSex | (1 = male; 0 = female) | CategoricalCP | Chest pain type (0, 1, 2, 3, 4) | CategoricalTrestbpd | Resting blood pressure (in mm Hg on admission) | NumericalChol | Serum cholesterol in mg/dl | NumericalFBS | fasting blood sugar in 120 mg/dl (1 = true; 0 = false) | CategoricalRestECG | Resting electrocardiogram results (0, 1, 2) | CategoricalThalach | Maximum heart rate achieved | NumericalExang | Exercise induced angina (1 = yes; 0 = no) | CategoricalOldpeak | ST depression induced by exercise relative to rest | NumericalSlope | Slope of the peak exercise ST segment | NumericalCA | Number of major vessels (0-3) colored by fluoroscopy | Both numerical & categoricalThal | 3 = normal; 6 = fixed defect; 7 = reversible defect | CategoricalTarget | Diagnosis of heart disease (1 = true; 0 = false) | Target Setup<jupyter_code>import os
os.environ["KERAS_BACKEND"] = "tensorflow"
import tensorflow as tf
import pandas as pd
import keras
from keras.utils import FeatureSpace<jupyter_output><empty_output><jupyter_text>Preparing the dataLet's download the data and load it into a Pandas dataframe:<jupyter_code>file_url = "http://storage.googleapis.com/download.tensorflow.org/data/heart.csv"
dataframe = pd.read_csv(file_url)<jupyter_output><empty_output><jupyter_text>The dataset includes 303 samples with 14 columns per sample(13 features, plus the target label):<jupyter_code>print(dataframe.shape)<jupyter_output><empty_output><jupyter_text>Here's a preview of a few samples:<jupyter_code>dataframe.head()<jupyter_output><empty_output><jupyter_text>The last column, "target", indicates whether the patienthas a heart disease (1) or not (0).Let's split the data into a training and validation set:<jupyter_code>val_dataframe = dataframe.sample(frac=0.2, random_state=1337)
train_dataframe = dataframe.drop(val_dataframe.index)
print(
"Using %d samples for training and %d for validation"
% (len(train_dataframe), len(val_dataframe))
)<jupyter_output><empty_output><jupyter_text>Let's generate `tf.data.Dataset` objects for each dataframe:<jupyter_code>def dataframe_to_dataset(dataframe):
dataframe = dataframe.copy()
labels = dataframe.pop("target")
ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
ds = ds.shuffle(buffer_size=len(dataframe))
return ds
train_ds = dataframe_to_dataset(train_dataframe)
val_ds = dataframe_to_dataset(val_dataframe)<jupyter_output><empty_output><jupyter_text>Each `Dataset` yields a tuple `(input, target)` where `input` is a dictionary of featuresand `target` is the value `0` or `1`:<jupyter_code>for x, y in train_ds.take(1):
print("Input:", x)
print("Target:", y)<jupyter_output><empty_output><jupyter_text>Let's batch the datasets:<jupyter_code>train_ds = train_ds.batch(32)
val_ds = val_ds.batch(32)<jupyter_output><empty_output><jupyter_text>Configuring a `FeatureSpace`To configure how each feature should be preprocessed,we instantiate a `keras.utils.FeatureSpace`, and wepass to it a dictionary that maps the name of our featuresto a string that describes the feature type.We have a few "integer categorical" features such as `"FBS"`,one "string categorical" feature (`"thal"`),and a few numerical features, which we'd like to normalize-- except `"age"`, which we'd like to discretize intoa number of bins.We also use the `crosses` argumentto capture *feature interactions* for some categoricalfeatures, that is to say, create additional featuresthat represent value co-occurrences for these categorical features.You can compute feature crosses like this for arbitrary sets ofcategorical features -- not just tuples of two features.Because the resulting co-occurences are hashedinto a fixed-sized vector, you don't need to worry about whetherthe co-occurence space is too large.<jupyter_code>feature_space = FeatureSpace(
features={
# Categorical features encoded as integers
"sex": "integer_categorical",
"cp": "integer_categorical",
"fbs": "integer_categorical",
"restecg": "integer_categorical",
"exang": "integer_categorical",
"ca": "integer_categorical",
# Categorical feature encoded as string
"thal": "string_categorical",
# Numerical features to discretize
"age": "float_discretized",
# Numerical features to normalize
"trestbps": "float_normalized",
"chol": "float_normalized",
"thalach": "float_normalized",
"oldpeak": "float_normalized",
"slope": "float_normalized",
},
# We create additional features by hashing
# value co-occurrences for the
# following groups of categorical features.
crosses=[("sex", "age"), ("thal", "ca")],
# The hashing space for these co-occurrences
# will be 32-dimensional.
crossing_dim=32,
# Our utility will one-hot encode all categorical
# features and concat all features into a single
# vector (one vector per sample).
output_mode="concat",
)<jupyter_output><empty_output><jupyter_text>Further customizing a `FeatureSpace`Specifying the feature type via a string name is quick and easy,but sometimes you may want to further configure the preprocessingof each feature. For instance, in our case, our categoricalfeatures don't have a large set of possible values -- it's onlya handful of values per feature (e.g. `1` and `0` for the feature `"FBS"`),and all possible values are represented in the training set.As a result, we don't need to reserve an index to represent "out of vocabulary" valuesfor these features -- which would have been the default behavior.Below, we just specify `num_oov_indices=0` in each of these featuresto tell the feature preprocessor to skip "out of vocabulary" indexing.Other customizations you have access to include specifying the number ofbins for discretizing features of type `"float_discretized"`,or the dimensionality of the hashing space for feature crossing.<jupyter_code>feature_space = FeatureSpace(
features={
# Categorical features encoded as integers
"sex": FeatureSpace.integer_categorical(num_oov_indices=0),
"cp": FeatureSpace.integer_categorical(num_oov_indices=0),
"fbs": FeatureSpace.integer_categorical(num_oov_indices=0),
"restecg": FeatureSpace.integer_categorical(num_oov_indices=0),
"exang": FeatureSpace.integer_categorical(num_oov_indices=0),
"ca": FeatureSpace.integer_categorical(num_oov_indices=0),
# Categorical feature encoded as string
"thal": FeatureSpace.string_categorical(num_oov_indices=0),
# Numerical features to discretize
"age": FeatureSpace.float_discretized(num_bins=30),
# Numerical features to normalize
"trestbps": FeatureSpace.float_normalized(),
"chol": FeatureSpace.float_normalized(),
"thalach": FeatureSpace.float_normalized(),
"oldpeak": FeatureSpace.float_normalized(),
"slope": FeatureSpace.float_normalized(),
},
# Specify feature cross with a custom crossing dim.
crosses=[
FeatureSpace.cross(feature_names=("sex", "age"), crossing_dim=64),
FeatureSpace.cross(
feature_names=("thal", "ca"),
crossing_dim=16,
),
],
output_mode="concat",
)<jupyter_output><empty_output><jupyter_text>Adapt the `FeatureSpace` to the training dataBefore we start using the `FeatureSpace` to build a model, we haveto adapt it to the training data. During `adapt()`, the `FeatureSpace` will:- Index the set of possible values for categorical features.- Compute the mean and variance for numerical features to normalize.- Compute the value boundaries for the different bins for numerical features to discretize.Note that `adapt()` should be called on a `tf.data.Dataset` which yields dictsof feature values -- no labels.<jupyter_code>train_ds_with_no_labels = train_ds.map(lambda x, _: x)
feature_space.adapt(train_ds_with_no_labels)<jupyter_output><empty_output><jupyter_text>At this point, the `FeatureSpace` can be called on a dict of raw feature values, and will return asingle concatenate vector for each sample, combining encoded features and feature crosses.<jupyter_code>for x, _ in train_ds.take(1):
preprocessed_x = feature_space(x)
print("preprocessed_x.shape:", preprocessed_x.shape)
print("preprocessed_x.dtype:", preprocessed_x.dtype)<jupyter_output><empty_output><jupyter_text>Two ways to manage preprocessing: as part of the `tf.data` pipeline, or in the model itselfThere are two ways in which you can leverage your `FeatureSpace`: Asynchronous preprocessing in `tf.data`You can make it part of your data pipeline, before the model. This enables asynchronous parallelpreprocessing of the data on CPU before it hits the model. Do this if you're training on GPU or TPU,or if you want to speed up preprocessing. Usually, this is always the right thing to do during training. Synchronous preprocessing in the modelYou can make it part of your model. This means that the model will expect dicts of raw featurevalues, and the preprocessing batch will be done synchronously (in a blocking manner) before therest of the forward pass. Do this if you want to have an end-to-end model that can processraw feature values -- but keep in mind that your model will only be able to run on CPU,since most types of feature preprocessing (e.g. string preprocessing) are not GPU or TPU compatible.Do not do this on GPU / TPU or in performance-sensitive settings. In general, you want to do in-modelpreprocessing when you do inference on CPU.In our case, we will apply the `FeatureSpace` in the tf.data pipeline during training, but we willdo inference with an end-to-end model that includes the `FeatureSpace`. Let's create a training and validation dataset of preprocessed batches:<jupyter_code>preprocessed_train_ds = train_ds.map(
lambda x, y: (feature_space(x), y), num_parallel_calls=tf.data.AUTOTUNE
)
preprocessed_train_ds = preprocessed_train_ds.prefetch(tf.data.AUTOTUNE)
preprocessed_val_ds = val_ds.map(
lambda x, y: (feature_space(x), y), num_parallel_calls=tf.data.AUTOTUNE
)
preprocessed_val_ds = preprocessed_val_ds.prefetch(tf.data.AUTOTUNE)<jupyter_output><empty_output><jupyter_text>Build a modelTime to build a model -- or rather two models:- A training model that expects preprocessed features (one sample = one vector)- An inference model that expects raw features (one sample = dict of raw feature values)<jupyter_code>dict_inputs = feature_space.get_inputs()
encoded_features = feature_space.get_encoded_features()
x = keras.layers.Dense(32, activation="relu")(encoded_features)
x = keras.layers.Dropout(0.5)(x)
predictions = keras.layers.Dense(1, activation="sigmoid")(x)
training_model = keras.Model(inputs=encoded_features, outputs=predictions)
training_model.compile(
optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"]
)
inference_model = keras.Model(inputs=dict_inputs, outputs=predictions)<jupyter_output><empty_output><jupyter_text>Train the modelLet's train our model for 50 epochs. Note that feature preprocessing is happeningas part of the tf.data pipeline, not as part of the model.<jupyter_code>training_model.fit(
preprocessed_train_ds,
epochs=20,
validation_data=preprocessed_val_ds,
verbose=2,
)<jupyter_output><empty_output><jupyter_text>We quickly get to 80% validation accuracy. Inference on new data with the end-to-end modelNow, we can use our inference model (which includes the `FeatureSpace`)to make predictions based on dicts of raw features values, as follows:<jupyter_code>sample = {
"age": 60,
"sex": 1,
"cp": 1,
"trestbps": 145,
"chol": 233,
"fbs": 1,
"restecg": 2,
"thalach": 150,
"exang": 0,
"oldpeak": 2.3,
"slope": 3,
"ca": 0,
"thal": "fixed",
}
input_dict = {name: tf.convert_to_tensor([value]) for name, value in sample.items()}
predictions = inference_model.predict(input_dict)
print(
f"This particular patient had a {100 * predictions[0][0]:.2f}% probability "
"of having a heart disease, as evaluated by our model."
)<jupyter_output><empty_output> | keras-io/examples/structured_data/ipynb/structured_data_classification_with_feature_space.ipynb/0 | {
"file_path": "keras-io/examples/structured_data/ipynb/structured_data_classification_with_feature_space.ipynb",
"repo_id": "keras-io",
"token_count": 4374
} | 112 |
"""
Title: Structured data classification with FeatureSpace
Author: [fchollet](https://twitter.com/fchollet)
Date created: 2022/11/09
Last modified: 2022/11/09
Description: Classify tabular data in a few lines of code.
Accelerator: GPU
"""
"""
## Introduction
This example demonstrates how to do structured data classification
(also known as tabular data classification), starting from a raw
CSV file. Our data includes numerical features,
integer categorical features, and string categorical features.
We will use the utility `keras.utils.FeatureSpace` to index,
preprocess, and encode our features.
The code is adapted from the example
[Structured data classification from scratch](https://keras.io/examples/structured_data/structured_data_classification_from_scratch/).
While the previous example managed its own low-level feature preprocessing and
encoding with Keras preprocessing layers, in this example we
delegate everything to `FeatureSpace`, making the workflow
extremely quick and easy.
### The dataset
[Our dataset](https://archive.ics.uci.edu/ml/datasets/heart+Disease) is provided by the
Cleveland Clinic Foundation for Heart Disease.
It's a CSV file with 303 rows. Each row contains information about a patient (a
**sample**), and each column describes an attribute of the patient (a **feature**). We
use the features to predict whether a patient has a heart disease
(**binary classification**).
Here's the description of each feature:
Column| Description| Feature Type
------------|--------------------|----------------------
Age | Age in years | Numerical
Sex | (1 = male; 0 = female) | Categorical
CP | Chest pain type (0, 1, 2, 3, 4) | Categorical
Trestbpd | Resting blood pressure (in mm Hg on admission) | Numerical
Chol | Serum cholesterol in mg/dl | Numerical
FBS | fasting blood sugar in 120 mg/dl (1 = true; 0 = false) | Categorical
RestECG | Resting electrocardiogram results (0, 1, 2) | Categorical
Thalach | Maximum heart rate achieved | Numerical
Exang | Exercise induced angina (1 = yes; 0 = no) | Categorical
Oldpeak | ST depression induced by exercise relative to rest | Numerical
Slope | Slope of the peak exercise ST segment | Numerical
CA | Number of major vessels (0-3) colored by fluoroscopy | Both numerical & categorical
Thal | 3 = normal; 6 = fixed defect; 7 = reversible defect | Categorical
Target | Diagnosis of heart disease (1 = true; 0 = false) | Target
"""
"""
## Setup
"""
import os
os.environ["KERAS_BACKEND"] = "tensorflow"
import tensorflow as tf
import pandas as pd
import keras
from keras.utils import FeatureSpace
"""
## Preparing the data
Let's download the data and load it into a Pandas dataframe:
"""
file_url = "http://storage.googleapis.com/download.tensorflow.org/data/heart.csv"
dataframe = pd.read_csv(file_url)
"""
The dataset includes 303 samples with 14 columns per sample
(13 features, plus the target label):
"""
print(dataframe.shape)
"""
Here's a preview of a few samples:
"""
dataframe.head()
"""
The last column, "target", indicates whether the patient
has a heart disease (1) or not (0).
Let's split the data into a training and validation set:
"""
val_dataframe = dataframe.sample(frac=0.2, random_state=1337)
train_dataframe = dataframe.drop(val_dataframe.index)
print(
"Using %d samples for training and %d for validation"
% (len(train_dataframe), len(val_dataframe))
)
"""
Let's generate `tf.data.Dataset` objects for each dataframe:
"""
def dataframe_to_dataset(dataframe):
dataframe = dataframe.copy()
labels = dataframe.pop("target")
ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
ds = ds.shuffle(buffer_size=len(dataframe))
return ds
train_ds = dataframe_to_dataset(train_dataframe)
val_ds = dataframe_to_dataset(val_dataframe)
"""
Each `Dataset` yields a tuple `(input, target)` where `input` is a dictionary of features
and `target` is the value `0` or `1`:
"""
for x, y in train_ds.take(1):
print("Input:", x)
print("Target:", y)
"""
Let's batch the datasets:
"""
train_ds = train_ds.batch(32)
val_ds = val_ds.batch(32)
"""
## Configuring a `FeatureSpace`
To configure how each feature should be preprocessed,
we instantiate a `keras.utils.FeatureSpace`, and we
pass to it a dictionary that maps the name of our features
to a string that describes the feature type.
We have a few "integer categorical" features such as `"FBS"`,
one "string categorical" feature (`"thal"`),
and a few numerical features, which we'd like to normalize
-- except `"age"`, which we'd like to discretize into
a number of bins.
We also use the `crosses` argument
to capture *feature interactions* for some categorical
features, that is to say, create additional features
that represent value co-occurrences for these categorical features.
You can compute feature crosses like this for arbitrary sets of
categorical features -- not just tuples of two features.
Because the resulting co-occurrences are hashed
into a fixed-size vector, you don't need to worry about whether
the co-occurrence space is too large.
"""
feature_space = FeatureSpace(
features={
# Categorical features encoded as integers
"sex": "integer_categorical",
"cp": "integer_categorical",
"fbs": "integer_categorical",
"restecg": "integer_categorical",
"exang": "integer_categorical",
"ca": "integer_categorical",
# Categorical feature encoded as string
"thal": "string_categorical",
# Numerical features to discretize
"age": "float_discretized",
# Numerical features to normalize
"trestbps": "float_normalized",
"chol": "float_normalized",
"thalach": "float_normalized",
"oldpeak": "float_normalized",
"slope": "float_normalized",
},
# We create additional features by hashing
# value co-occurrences for the
# following groups of categorical features.
crosses=[("sex", "age"), ("thal", "ca")],
# The hashing space for these co-occurrences
# will be 32-dimensional.
crossing_dim=32,
# Our utility will one-hot encode all categorical
# features and concat all features into a single
# vector (one vector per sample).
output_mode="concat",
)
"""
## Further customizing a `FeatureSpace`
Specifying the feature type via a string name is quick and easy,
but sometimes you may want to further configure the preprocessing
of each feature. For instance, in our case, our categorical
features don't have a large set of possible values -- it's only
a handful of values per feature (e.g. `1` and `0` for the feature `"FBS"`),
and all possible values are represented in the training set.
As a result, we don't need to reserve an index to represent "out of vocabulary" values
for these features -- which would have been the default behavior.
Below, we just specify `num_oov_indices=0` in each of these features
to tell the feature preprocessor to skip "out of vocabulary" indexing.
Other customizations you have access to include specifying the number of
bins for discretizing features of type `"float_discretized"`,
or the dimensionality of the hashing space for feature crossing.
"""
feature_space = FeatureSpace(
features={
# Categorical features encoded as integers
"sex": FeatureSpace.integer_categorical(num_oov_indices=0),
"cp": FeatureSpace.integer_categorical(num_oov_indices=0),
"fbs": FeatureSpace.integer_categorical(num_oov_indices=0),
"restecg": FeatureSpace.integer_categorical(num_oov_indices=0),
"exang": FeatureSpace.integer_categorical(num_oov_indices=0),
"ca": FeatureSpace.integer_categorical(num_oov_indices=0),
# Categorical feature encoded as string
"thal": FeatureSpace.string_categorical(num_oov_indices=0),
# Numerical features to discretize
"age": FeatureSpace.float_discretized(num_bins=30),
# Numerical features to normalize
"trestbps": FeatureSpace.float_normalized(),
"chol": FeatureSpace.float_normalized(),
"thalach": FeatureSpace.float_normalized(),
"oldpeak": FeatureSpace.float_normalized(),
"slope": FeatureSpace.float_normalized(),
},
# Specify feature cross with a custom crossing dim.
crosses=[
FeatureSpace.cross(feature_names=("sex", "age"), crossing_dim=64),
FeatureSpace.cross(
feature_names=("thal", "ca"),
crossing_dim=16,
),
],
output_mode="concat",
)
"""
## Adapt the `FeatureSpace` to the training data
Before we start using the `FeatureSpace` to build a model, we have
to adapt it to the training data. During `adapt()`, the `FeatureSpace` will:
- Index the set of possible values for categorical features.
- Compute the mean and variance for numerical features to normalize.
- Compute the value boundaries for the different bins for numerical features to discretize.
Note that `adapt()` should be called on a `tf.data.Dataset` which yields dicts
of feature values -- no labels.
"""
train_ds_with_no_labels = train_ds.map(lambda x, _: x)
feature_space.adapt(train_ds_with_no_labels)
"""
At this point, the `FeatureSpace` can be called on a dict of raw feature values, and will return a
single concatenated vector for each sample, combining encoded features and feature crosses.
"""
for x, _ in train_ds.take(1):
preprocessed_x = feature_space(x)
print("preprocessed_x.shape:", preprocessed_x.shape)
print("preprocessed_x.dtype:", preprocessed_x.dtype)
"""
## Two ways to manage preprocessing: as part of the `tf.data` pipeline, or in the model itself
There are two ways in which you can leverage your `FeatureSpace`:
### Asynchronous preprocessing in `tf.data`
You can make it part of your data pipeline, before the model. This enables asynchronous parallel
preprocessing of the data on CPU before it hits the model. Do this if you're training on GPU or TPU,
or if you want to speed up preprocessing. Usually, this is always the right thing to do during training.
### Synchronous preprocessing in the model
You can make it part of your model. This means that the model will expect dicts of raw feature
values, and the preprocessing batch will be done synchronously (in a blocking manner) before the
rest of the forward pass. Do this if you want to have an end-to-end model that can process
raw feature values -- but keep in mind that your model will only be able to run on CPU,
since most types of feature preprocessing (e.g. string preprocessing) are not GPU or TPU compatible.
Do not do this on GPU / TPU or in performance-sensitive settings. In general, you want to do in-model
preprocessing when you do inference on CPU.
In our case, we will apply the `FeatureSpace` in the tf.data pipeline during training, but we will
do inference with an end-to-end model that includes the `FeatureSpace`.
"""
"""
Let's create a training and validation dataset of preprocessed batches:
"""
preprocessed_train_ds = train_ds.map(
lambda x, y: (feature_space(x), y), num_parallel_calls=tf.data.AUTOTUNE
)
preprocessed_train_ds = preprocessed_train_ds.prefetch(tf.data.AUTOTUNE)
preprocessed_val_ds = val_ds.map(
lambda x, y: (feature_space(x), y), num_parallel_calls=tf.data.AUTOTUNE
)
preprocessed_val_ds = preprocessed_val_ds.prefetch(tf.data.AUTOTUNE)
"""
## Build a model
Time to build a model -- or rather two models:
- A training model that expects preprocessed features (one sample = one vector)
- An inference model that expects raw features (one sample = dict of raw feature values)
"""
dict_inputs = feature_space.get_inputs()
encoded_features = feature_space.get_encoded_features()
x = keras.layers.Dense(32, activation="relu")(encoded_features)
x = keras.layers.Dropout(0.5)(x)
predictions = keras.layers.Dense(1, activation="sigmoid")(x)
training_model = keras.Model(inputs=encoded_features, outputs=predictions)
training_model.compile(
optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"]
)
inference_model = keras.Model(inputs=dict_inputs, outputs=predictions)
"""
## Train the model
Let's train our model for 20 epochs. Note that feature preprocessing is happening
as part of the tf.data pipeline, not as part of the model.
"""
training_model.fit(
preprocessed_train_ds,
epochs=20,
validation_data=preprocessed_val_ds,
verbose=2,
)
"""
We quickly get to 80% validation accuracy.
"""
"""
## Inference on new data with the end-to-end model
Now, we can use our inference model (which includes the `FeatureSpace`)
to make predictions based on dicts of raw features values, as follows:
"""
sample = {
"age": 60,
"sex": 1,
"cp": 1,
"trestbps": 145,
"chol": 233,
"fbs": 1,
"restecg": 2,
"thalach": 150,
"exang": 0,
"oldpeak": 2.3,
"slope": 3,
"ca": 0,
"thal": "fixed",
}
input_dict = {name: tf.convert_to_tensor([value]) for name, value in sample.items()}
predictions = inference_model.predict(input_dict)
print(
f"This particular patient had a {100 * predictions[0][0]:.2f}% probability "
"of having a heart disease, as evaluated by our model."
)
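"""
As a quick sanity check (a sketch, not part of the original example), the adapted
`FeatureSpace` can also be applied directly to the same raw sample dict, to inspect
the encoded vector that the training model would receive for this patient:
"""
encoded_sample = feature_space(input_dict)
print("Encoded sample shape:", encoded_sample.shape)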
| keras-io/examples/structured_data/structured_data_classification_with_feature_space.py/0 | {
"file_path": "keras-io/examples/structured_data/structured_data_classification_with_feature_space.py",
"repo_id": "keras-io",
"token_count": 4235
} | 113 |
<jupyter_start><jupyter_text>Timeseries classification from scratch**Author:** [hfawaz](https://github.com/hfawaz/)**Date created:** 2020/07/21**Last modified:** 2023/11/10**Description:** Training a timeseries classifier from scratch on the FordA dataset from the UCR/UEA archive. IntroductionThis example shows how to do timeseries classification from scratch, starting from rawCSV timeseries files on disk. We demonstrate the workflow on the FordA dataset from the[UCR/UEA archive](https://www.cs.ucr.edu/%7Eeamonn/time_series_data_2018/). Setup<jupyter_code>import keras
import numpy as np
import matplotlib.pyplot as plt<jupyter_output><empty_output><jupyter_text>Load the data: the FordA dataset Dataset descriptionThe dataset we are using here is called FordA.The data comes from the UCR archive.The dataset contains 3601 training instances and another 1320 testing instances.Each timeseries corresponds to a measurement of engine noise captured by a motor sensor.For this task, the goal is to automatically detect the presence of a specific issue withthe engine. The problem is a balanced binary classification task. The full description ofthis dataset can be found [here](http://www.j-wichard.de/publications/FordPaper.pdf). Read the TSV dataWe will use the `FordA_TRAIN` file for training and the`FordA_TEST` file for testing. The simplicity of this datasetallows us to demonstrate effectively how to use ConvNets for timeseries classification.In this file, the first column corresponds to the label.<jupyter_code>def readucr(filename):
data = np.loadtxt(filename, delimiter="\t")
y = data[:, 0]
x = data[:, 1:]
return x, y.astype(int)
root_url = "https://raw.githubusercontent.com/hfawaz/cd-diagram/master/FordA/"
x_train, y_train = readucr(root_url + "FordA_TRAIN.tsv")
x_test, y_test = readucr(root_url + "FordA_TEST.tsv")<jupyter_output><empty_output><jupyter_text>Visualize the dataHere we visualize one timeseries example for each class in the dataset.<jupyter_code>classes = np.unique(np.concatenate((y_train, y_test), axis=0))
plt.figure()
for c in classes:
c_x_train = x_train[y_train == c]
plt.plot(c_x_train[0], label="class " + str(c))
plt.legend(loc="best")
plt.show()
plt.close()<jupyter_output><empty_output><jupyter_text>Standardize the dataOur timeseries are already in a single length (500). However, their values areusually in various ranges. This is not ideal for a neural network;in general we should seek to make the input values normalized.For this specific dataset, the data is already z-normalized: each timeseries samplehas a mean equal to zero and a standard deviation equal to one. This type ofnormalization is very common for timeseries classification problems, see[Bagnall et al. (2016)](https://link.springer.com/article/10.1007/s10618-016-0483-9).Note that the timeseries data used here are univariate, meaning we only have one channelper timeseries example.We will therefore transform the timeseries into a multivariate one with one channelusing a simple reshaping via numpy.This will allow us to construct a model that is easily applicable to multivariate timeseries.<jupyter_code>x_train = x_train.reshape((x_train.shape[0], x_train.shape[1], 1))
x_test = x_test.reshape((x_test.shape[0], x_test.shape[1], 1))<jupyter_output><empty_output><jupyter_text>Finally, in order to use `sparse_categorical_crossentropy`, we will have to countthe number of classes beforehand.<jupyter_code>num_classes = len(np.unique(y_train))<jupyter_output><empty_output><jupyter_text>Now we shuffle the training set because we will be using the `validation_split` optionlater when training.<jupyter_code>idx = np.random.permutation(len(x_train))
x_train = x_train[idx]
y_train = y_train[idx]<jupyter_output><empty_output><jupyter_text>Standardize the labels to positive integers.The expected labels will then be 0 and 1.<jupyter_code>y_train[y_train == -1] = 0
y_test[y_test == -1] = 0<jupyter_output><empty_output><jupyter_text>Build a modelWe build a Fully Convolutional Neural Network originally proposed in[this paper](https://arxiv.org/abs/1611.06455).The implementation is based on the TF 2 version provided[here](https://github.com/hfawaz/dl-4-tsc/).The following hyperparameters (kernel_size, filters, the usage of BatchNorm) were foundvia random search using [KerasTuner](https://github.com/keras-team/keras-tuner).<jupyter_code>def make_model(input_shape):
input_layer = keras.layers.Input(input_shape)
conv1 = keras.layers.Conv1D(filters=64, kernel_size=3, padding="same")(input_layer)
conv1 = keras.layers.BatchNormalization()(conv1)
conv1 = keras.layers.ReLU()(conv1)
conv2 = keras.layers.Conv1D(filters=64, kernel_size=3, padding="same")(conv1)
conv2 = keras.layers.BatchNormalization()(conv2)
conv2 = keras.layers.ReLU()(conv2)
conv3 = keras.layers.Conv1D(filters=64, kernel_size=3, padding="same")(conv2)
conv3 = keras.layers.BatchNormalization()(conv3)
conv3 = keras.layers.ReLU()(conv3)
gap = keras.layers.GlobalAveragePooling1D()(conv3)
output_layer = keras.layers.Dense(num_classes, activation="softmax")(gap)
return keras.models.Model(inputs=input_layer, outputs=output_layer)
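# Illustrative note: this is a Fully Convolutional Network -- three
# Conv1D/BatchNorm/ReLU blocks, global average pooling over the time axis,
# and a softmax classification head.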
model = make_model(input_shape=x_train.shape[1:])
keras.utils.plot_model(model, show_shapes=True)<jupyter_output><empty_output><jupyter_text>Train the model<jupyter_code>epochs = 500
batch_size = 32
callbacks = [
keras.callbacks.ModelCheckpoint(
"best_model.keras", save_best_only=True, monitor="val_loss"
),
keras.callbacks.ReduceLROnPlateau(
monitor="val_loss", factor=0.5, patience=20, min_lr=0.0001
),
keras.callbacks.EarlyStopping(monitor="val_loss", patience=50, verbose=1),
]
model.compile(
optimizer="adam",
loss="sparse_categorical_crossentropy",
metrics=["sparse_categorical_accuracy"],
)
history = model.fit(
x_train,
y_train,
batch_size=batch_size,
epochs=epochs,
callbacks=callbacks,
validation_split=0.2,
verbose=1,
)<jupyter_output><empty_output><jupyter_text>Evaluate model on test data<jupyter_code>model = keras.models.load_model("best_model.keras")
test_loss, test_acc = model.evaluate(x_test, y_test)
print("Test accuracy", test_acc)
print("Test loss", test_loss)<jupyter_output><empty_output><jupyter_text>Plot the model's training and validation loss<jupyter_code>metric = "sparse_categorical_accuracy"
plt.figure()
plt.plot(history.history[metric])
plt.plot(history.history["val_" + metric])
plt.title("model " + metric)
plt.ylabel(metric, fontsize="large")
plt.xlabel("epoch", fontsize="large")
plt.legend(["train", "val"], loc="best")
plt.show()
plt.close()<jupyter_output><empty_output> | keras-io/examples/timeseries/ipynb/timeseries_classification_from_scratch.ipynb/0 | {
"file_path": "keras-io/examples/timeseries/ipynb/timeseries_classification_from_scratch.ipynb",
"repo_id": "keras-io",
"token_count": 2313
} | 114 |
"""
Title: 3D image classification from CT scans
Author: [Hasib Zunair](https://twitter.com/hasibzunair)
Date created: 2020/09/23
Last modified: 2024/01/11
Description: Train a 3D convolutional neural network to predict presence of pneumonia.
Accelerator: GPU
"""
"""
## Introduction
This example will show the steps needed to build a 3D convolutional neural network (CNN)
to predict the presence of viral pneumonia in computer tomography (CT) scans. 2D CNNs are
commonly used to process RGB images (3 channels). A 3D CNN is simply the 3D
equivalent: it takes as input a 3D volume or a sequence of 2D frames (e.g. slices in a CT scan).
3D CNNs are a powerful model for learning representations for volumetric data.
## References
- [A survey on Deep Learning Advances on Different 3D DataRepresentations](https://arxiv.org/abs/1808.01462)
- [VoxNet: A 3D Convolutional Neural Network for Real-Time Object Recognition](https://www.ri.cmu.edu/pub_files/2015/9/voxnet_maturana_scherer_iros15.pdf)
- [FusionNet: 3D Object Classification Using MultipleData Representations](https://arxiv.org/abs/1607.05695)
- [Uniformizing Techniques to Process CT scans with 3D CNNs for Tuberculosis Prediction](https://arxiv.org/abs/2007.13224)
"""
"""
## Setup
"""
import os
import zipfile
import numpy as np
import tensorflow as tf # for data preprocessing
import keras
from keras import layers
"""
## Downloading the MosMedData: Chest CT Scans with COVID-19 Related Findings
In this example, we use a subset of the
[MosMedData: Chest CT Scans with COVID-19 Related Findings](https://www.medrxiv.org/content/10.1101/2020.05.20.20100362v1).
This dataset consists of lung CT scans with COVID-19 related findings, as well as without such findings.
We will be using the associated radiological findings of the CT scans as labels to build
a classifier to predict presence of viral pneumonia.
Hence, the task is a binary classification problem.
"""
# Download url of normal CT scans.
url = "https://github.com/hasibzunair/3D-image-classification-tutorial/releases/download/v0.2/CT-0.zip"
filename = os.path.join(os.getcwd(), "CT-0.zip")
keras.utils.get_file(filename, url)
# Download url of abnormal CT scans.
url = "https://github.com/hasibzunair/3D-image-classification-tutorial/releases/download/v0.2/CT-23.zip"
filename = os.path.join(os.getcwd(), "CT-23.zip")
keras.utils.get_file(filename, url)
# Make a directory to store the data.
os.makedirs("MosMedData")
# Unzip data in the newly created directory.
with zipfile.ZipFile("CT-0.zip", "r") as z_fp:
z_fp.extractall("./MosMedData/")
with zipfile.ZipFile("CT-23.zip", "r") as z_fp:
z_fp.extractall("./MosMedData/")
"""
## Loading data and preprocessing
The files are provided in Nifti format with the extension .nii. To read the
scans, we use the `nibabel` package.
You can install the package via `pip install nibabel`. CT scans store raw voxel
intensity in Hounsfield units (HU). They range from -1024 to above 2000 in this dataset.
Values above 400 correspond to bones with different radiointensity, so 400 is used as an upper bound. A threshold
between -1000 and 400 is commonly used to normalize CT scans.
To process the data, we do the following:
* We first rotate the volumes by 90 degrees, so the orientation is fixed
* We scale the HU values to be between 0 and 1.
* We resize width, height and depth.
Here we define several helper functions to process the data. These functions
will be used when building training and validation datasets.
"""
import nibabel as nib
from scipy import ndimage
def read_nifti_file(filepath):
"""Read and load volume"""
# Read file
scan = nib.load(filepath)
# Get raw data
scan = scan.get_fdata()
return scan
def normalize(volume):
"""Normalize the volume"""
min = -1000
max = 400
volume[volume < min] = min
volume[volume > max] = max
volume = (volume - min) / (max - min)
volume = volume.astype("float32")
return volume
def resize_volume(img):
"""Resize across z-axis"""
# Set the desired depth
desired_depth = 64
desired_width = 128
desired_height = 128
# Get current depth
current_depth = img.shape[-1]
current_width = img.shape[0]
current_height = img.shape[1]
# Compute depth factor
depth = current_depth / desired_depth
width = current_width / desired_width
height = current_height / desired_height
depth_factor = 1 / depth
width_factor = 1 / width
height_factor = 1 / height
# Rotate
img = ndimage.rotate(img, 90, reshape=False)
# Resize across z-axis
img = ndimage.zoom(img, (width_factor, height_factor, depth_factor), order=1)
return img
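# Illustrative note: the zoom factors above are desired_size / current_size along
# each axis, so every scan ends up with shape (128, 128, 64); `order=1` selects
# linear interpolation.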
def process_scan(path):
"""Read and resize volume"""
# Read scan
volume = read_nifti_file(path)
# Normalize
volume = normalize(volume)
# Resize width, height and depth
volume = resize_volume(volume)
return volume
"""
Let's read the paths of the CT scans from the class directories.
"""
# Folder "CT-0" consist of CT scans having normal lung tissue,
# no CT-signs of viral pneumonia.
normal_scan_paths = [
os.path.join(os.getcwd(), "MosMedData/CT-0", x)
for x in os.listdir("MosMedData/CT-0")
]
# Folder "CT-23" consist of CT scans having several ground-glass opacifications,
# involvement of lung parenchyma.
abnormal_scan_paths = [
os.path.join(os.getcwd(), "MosMedData/CT-23", x)
for x in os.listdir("MosMedData/CT-23")
]
print("CT scans with normal lung tissue: " + str(len(normal_scan_paths)))
print("CT scans with abnormal lung tissue: " + str(len(abnormal_scan_paths)))
"""
## Build train and validation datasets
Read the scans from the class directories and assign labels. Downsample the scans to have
a shape of 128x128x64. Rescale the raw HU values to the range 0 to 1.
Lastly, split the dataset into train and validation subsets.
"""
# Read and process the scans.
# Each scan is resized across height, width, and depth and rescaled.
abnormal_scans = np.array([process_scan(path) for path in abnormal_scan_paths])
normal_scans = np.array([process_scan(path) for path in normal_scan_paths])
# For the CT scans having presence of viral pneumonia
# assign 1, for the normal ones assign 0.
abnormal_labels = np.array([1 for _ in range(len(abnormal_scans))])
normal_labels = np.array([0 for _ in range(len(normal_scans))])
# Split data in the ratio 70-30 for training and validation.
x_train = np.concatenate((abnormal_scans[:70], normal_scans[:70]), axis=0)
y_train = np.concatenate((abnormal_labels[:70], normal_labels[:70]), axis=0)
x_val = np.concatenate((abnormal_scans[70:], normal_scans[70:]), axis=0)
y_val = np.concatenate((abnormal_labels[70:], normal_labels[70:]), axis=0)
print(
"Number of samples in train and validation are %d and %d."
% (x_train.shape[0], x_val.shape[0])
)
"""
## Data augmentation
The CT scans are also augmented by rotating them at random angles during training. Since
the data is stored in rank-3 tensors of shape `(samples, height, width, depth)`,
we add a dimension of size 1 at axis 4 to be able to perform 3D convolutions on
the data. The new shape is thus `(samples, height, width, depth, 1)`. There are
different kinds of preprocessing and augmentation techniques out there;
this example shows a few simple ones to get started.
"""
import random
from scipy import ndimage
def rotate(volume):
"""Rotate the volume by a few degrees"""
def scipy_rotate(volume):
# define some rotation angles
angles = [-20, -10, -5, 5, 10, 20]
# pick angles at random
angle = random.choice(angles)
# rotate volume
volume = ndimage.rotate(volume, angle, reshape=False)
volume[volume < 0] = 0
volume[volume > 1] = 1
return volume
augmented_volume = tf.numpy_function(scipy_rotate, [volume], tf.float32)
return augmented_volume
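# Illustrative note: `tf.numpy_function` lets the SciPy-based rotation run inside
# the tf.data pipeline, at the cost of executing eagerly in Python and not being
# serializable into a TensorFlow graph.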
def train_preprocessing(volume, label):
"""Process training data by rotating and adding a channel."""
# Rotate volume
volume = rotate(volume)
volume = tf.expand_dims(volume, axis=3)
return volume, label
def validation_preprocessing(volume, label):
"""Process validation data by only adding a channel."""
volume = tf.expand_dims(volume, axis=3)
return volume, label
"""
While defining the train and validation data loaders, the training data is passed through
an augmentation function which randomly rotates the volumes at different angles. Note that both
training and validation data are already rescaled to have values between 0 and 1.
"""
# Define data loaders.
train_loader = tf.data.Dataset.from_tensor_slices((x_train, y_train))
validation_loader = tf.data.Dataset.from_tensor_slices((x_val, y_val))
batch_size = 2
# Augment on the fly during training.
train_dataset = (
train_loader.shuffle(len(x_train))
.map(train_preprocessing)
.batch(batch_size)
.prefetch(2)
)
# Only rescale.
validation_dataset = (
validation_loader.shuffle(len(x_val))
.map(validation_preprocessing)
.batch(batch_size)
.prefetch(2)
)
"""
Visualize an augmented CT scan.
"""
import matplotlib.pyplot as plt
data = train_dataset.take(1)
images, labels = list(data)[0]
images = images.numpy()
image = images[0]
print("Dimension of the CT scan is:", image.shape)
plt.imshow(np.squeeze(image[:, :, 30]), cmap="gray")
"""
Since a CT scan has many slices, let's visualize a montage of the slices.
"""
def plot_slices(num_rows, num_columns, width, height, data):
"""Plot a montage of 20 CT slices"""
data = np.rot90(np.array(data))
data = np.transpose(data)
data = np.reshape(data, (num_rows, num_columns, width, height))
rows_data, columns_data = data.shape[0], data.shape[1]
heights = [slc[0].shape[0] for slc in data]
widths = [slc.shape[1] for slc in data[0]]
fig_width = 12.0
fig_height = fig_width * sum(heights) / sum(widths)
f, axarr = plt.subplots(
rows_data,
columns_data,
figsize=(fig_width, fig_height),
gridspec_kw={"height_ratios": heights},
)
for i in range(rows_data):
for j in range(columns_data):
axarr[i, j].imshow(data[i][j], cmap="gray")
axarr[i, j].axis("off")
plt.subplots_adjust(wspace=0, hspace=0, left=0, right=1, bottom=0, top=1)
plt.show()
# Visualize montage of slices.
# 4 rows and 10 columns for 100 slices of the CT scan.
plot_slices(4, 10, 128, 128, image[:, :, :40])
"""
## Define a 3D convolutional neural network
To make the model easier to understand, we structure it into blocks.
The architecture of the 3D CNN used in this example
is based on [this paper](https://arxiv.org/abs/2007.13224).
"""
def get_model(width=128, height=128, depth=64):
"""Build a 3D convolutional neural network model."""
inputs = keras.Input((width, height, depth, 1))
x = layers.Conv3D(filters=64, kernel_size=3, activation="relu")(inputs)
x = layers.MaxPool3D(pool_size=2)(x)
x = layers.BatchNormalization()(x)
x = layers.Conv3D(filters=64, kernel_size=3, activation="relu")(x)
x = layers.MaxPool3D(pool_size=2)(x)
x = layers.BatchNormalization()(x)
x = layers.Conv3D(filters=128, kernel_size=3, activation="relu")(x)
x = layers.MaxPool3D(pool_size=2)(x)
x = layers.BatchNormalization()(x)
x = layers.Conv3D(filters=256, kernel_size=3, activation="relu")(x)
x = layers.MaxPool3D(pool_size=2)(x)
x = layers.BatchNormalization()(x)
x = layers.GlobalAveragePooling3D()(x)
x = layers.Dense(units=512, activation="relu")(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(units=1, activation="sigmoid")(x)
# Define the model.
model = keras.Model(inputs, outputs, name="3dcnn")
return model
# Build model.
model = get_model(width=128, height=128, depth=64)
model.summary()
"""
## Train model
"""
# Compile model.
initial_learning_rate = 0.0001
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate, decay_steps=100000, decay_rate=0.96, staircase=True
)
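# Illustrative note: with `staircase=True` this schedule evaluates to
#   lr(step) = initial_learning_rate * decay_rate ** (step // decay_steps)
# i.e. the learning rate is multiplied by 0.96 every 100000 optimizer steps.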
model.compile(
loss="binary_crossentropy",
optimizer=keras.optimizers.Adam(learning_rate=lr_schedule),
metrics=["acc"],
run_eagerly=True,
)
# Define callbacks.
checkpoint_cb = keras.callbacks.ModelCheckpoint(
"3d_image_classification.keras", save_best_only=True
)
early_stopping_cb = keras.callbacks.EarlyStopping(monitor="val_acc", patience=15)
# Train the model, doing validation at the end of each epoch
epochs = 100
model.fit(
train_dataset,
validation_data=validation_dataset,
epochs=epochs,
shuffle=True,
verbose=2,
callbacks=[checkpoint_cb, early_stopping_cb],
)
"""
It is important to note that the number of samples is very small (only 200) and we don't
specify a random seed. As such, you can expect significant variance in the results. The full dataset
which consists of over 1000 CT scans can be found [here](https://www.medrxiv.org/content/10.1101/2020.05.20.20100362v1). Using the full
dataset, an accuracy of 83% was achieved. A variability of 6-7% in the classification
performance is observed in both cases.
"""
"""
## Visualizing model performance
Here the model accuracy and loss for the training and the validation sets are plotted.
Since the validation set is class-balanced, accuracy provides an unbiased representation
of the model's performance.
"""
fig, ax = plt.subplots(1, 2, figsize=(20, 3))
ax = ax.ravel()
for i, metric in enumerate(["acc", "loss"]):
ax[i].plot(model.history.history[metric])
ax[i].plot(model.history.history["val_" + metric])
ax[i].set_title("Model {}".format(metric))
ax[i].set_xlabel("epochs")
ax[i].set_ylabel(metric)
ax[i].legend(["train", "val"])
"""
## Make predictions on a single CT scan
"""
# Load best weights.
model.load_weights("3d_image_classification.keras")
prediction = model.predict(np.expand_dims(x_val[0], axis=0))[0]
scores = [1 - prediction[0], prediction[0]]
class_names = ["normal", "abnormal"]
for score, name in zip(scores, class_names):
print(
"This model is %.2f percent confident that CT scan is %s"
% ((100 * score), name)
)
| keras-io/examples/vision/3D_image_classification.py/0 | {
"file_path": "keras-io/examples/vision/3D_image_classification.py",
"repo_id": "keras-io",
"token_count": 4987
} | 115 |
"""
Title: Monocular depth estimation
Author: [Victor Basu](https://www.linkedin.com/in/victor-basu-520958147)
Date created: 2021/08/30
Last modified: 2021/08/30
Description: Implement a depth estimation model with a convnet.
Accelerator: GPU
"""
"""
## Introduction
_Depth estimation_ is a crucial step towards inferring scene geometry from 2D images.
The goal in _monocular depth estimation_ is to predict the depth value of each pixel,
i.e. to infer depth information, given only a single RGB image as input.
This example will show an approach to build a depth estimation model with a convnet
and simple loss functions.
![depth](https://paperswithcode.com/media/thumbnails/task/task-0000000605-d9849a91.jpg)
"""
"""
## Setup
"""
import os
import sys
import tensorflow as tf
from tensorflow.keras import layers
import pandas as pd
import numpy as np
import cv2
import matplotlib.pyplot as plt
tf.random.set_seed(123)
"""
## Downloading the dataset
We will be using the dataset **DIODE: A Dense Indoor and Outdoor Depth Dataset** for this
tutorial. However, we use the validation set to generate the training and evaluation subsets
for our model. The reason we use the validation set rather than the training set of the original dataset is that
the training set consists of 81GB of data, which is challenging to download compared
to the validation set which is only 2.6GB.
Other datasets that you could use are
**[NYU-v2](https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html)**
and **[KITTI](http://www.cvlibs.net/datasets/kitti/)**.
"""
annotation_folder = "/dataset/"
if not os.path.exists(os.path.abspath(".") + annotation_folder):
annotation_zip = tf.keras.utils.get_file(
"val.tar.gz",
cache_subdir=os.path.abspath("."),
origin="http://diode-dataset.s3.amazonaws.com/val.tar.gz",
extract=True,
)
"""
## Preparing the dataset
We only use the indoor images to train our depth estimation model.
"""
path = "val/indoors"
filelist = []
for root, dirs, files in os.walk(path):
for file in files:
filelist.append(os.path.join(root, file))
filelist.sort()
data = {
"image": [x for x in filelist if x.endswith(".png")],
"depth": [x for x in filelist if x.endswith("_depth.npy")],
"mask": [x for x in filelist if x.endswith("_depth_mask.npy")],
}
df = pd.DataFrame(data)
df = df.sample(frac=1, random_state=42)
"""
## Preparing hyperparameters
"""
HEIGHT = 256
WIDTH = 256
LR = 0.0002
EPOCHS = 30
BATCH_SIZE = 32
"""
## Building a data pipeline
1. The pipeline takes a dataframe containing the paths for the RGB images,
as well as the depth and depth mask files.
2. It reads and resizes the RGB images.
3. It reads the depth and depth mask files, processes them to generate the depth map image, and
resizes it.
4. It returns the RGB images and the depth map images for a batch.
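To make this concrete, here is a minimal, illustrative (not executed) sketch of how the
`DataGenerator` defined just below can be indexed to obtain one batch:

```python
# Illustrative sketch only -- `DataGenerator` is defined in the next code block.
loader = DataGenerator(data=df.reset_index(drop=True), batch_size=6, dim=(HEIGHT, WIDTH))
batch_images, batch_depth_maps = loader[0]
print(batch_images.shape)      # (6, HEIGHT, WIDTH, 3)
print(batch_depth_maps.shape)  # (6, HEIGHT, WIDTH, 1)
```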
"""
class DataGenerator(tf.keras.utils.Sequence):
def __init__(self, data, batch_size=6, dim=(768, 1024), n_channels=3, shuffle=True):
"""
Initialization
"""
self.data = data
self.indices = self.data.index.tolist()
self.dim = dim
self.n_channels = n_channels
self.batch_size = batch_size
self.shuffle = shuffle
self.min_depth = 0.1
self.on_epoch_end()
def __len__(self):
return int(np.ceil(len(self.data) / self.batch_size))
def __getitem__(self, index):
if (index + 1) * self.batch_size > len(self.indices):
self.batch_size = len(self.indices) - index * self.batch_size
# Generate one batch of data
# Generate indices of the batch
index = self.indices[index * self.batch_size : (index + 1) * self.batch_size]
# Find list of IDs
batch = [self.indices[k] for k in index]
x, y = self.data_generation(batch)
return x, y
def on_epoch_end(self):
"""
Updates indexes after each epoch
"""
self.index = np.arange(len(self.indices))
        if self.shuffle:
np.random.shuffle(self.index)
def load(self, image_path, depth_map, mask):
"""Load input and target image."""
image_ = cv2.imread(image_path)
image_ = cv2.cvtColor(image_, cv2.COLOR_BGR2RGB)
image_ = cv2.resize(image_, self.dim)
image_ = tf.image.convert_image_dtype(image_, tf.float32)
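        # Load the raw depth map and its validity mask; only pixels where the
        # mask is positive carry a reliable depth value.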
depth_map = np.load(depth_map).squeeze()
mask = np.load(mask)
mask = mask > 0
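        # Clip to a robust maximum depth, move to log space for a more uniform
        # target distribution, and mask out invalid pixels before resizing.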
max_depth = min(300, np.percentile(depth_map, 99))
depth_map = np.clip(depth_map, self.min_depth, max_depth)
depth_map = np.log(depth_map, where=mask)
depth_map = np.ma.masked_where(~mask, depth_map)
depth_map = np.clip(depth_map, 0.1, np.log(max_depth))
depth_map = cv2.resize(depth_map, self.dim)
depth_map = np.expand_dims(depth_map, axis=2)
depth_map = tf.image.convert_image_dtype(depth_map, tf.float32)
return image_, depth_map
def data_generation(self, batch):
x = np.empty((self.batch_size, *self.dim, self.n_channels))
y = np.empty((self.batch_size, *self.dim, 1))
for i, batch_id in enumerate(batch):
x[i,], y[i,] = self.load(
self.data["image"][batch_id],
self.data["depth"][batch_id],
self.data["mask"][batch_id],
)
return x, y
"""
## Visualizing samples
"""
def visualize_depth_map(samples, test=False, model=None):
input, target = samples
cmap = plt.cm.jet
cmap.set_bad(color="black")
if test:
pred = model.predict(input)
fig, ax = plt.subplots(6, 3, figsize=(50, 50))
for i in range(6):
ax[i, 0].imshow((input[i].squeeze()))
ax[i, 1].imshow((target[i].squeeze()), cmap=cmap)
ax[i, 2].imshow((pred[i].squeeze()), cmap=cmap)
else:
fig, ax = plt.subplots(6, 2, figsize=(50, 50))
for i in range(6):
ax[i, 0].imshow((input[i].squeeze()))
ax[i, 1].imshow((target[i].squeeze()), cmap=cmap)
visualize_samples = next(
iter(DataGenerator(data=df, batch_size=6, dim=(HEIGHT, WIDTH)))
)
visualize_depth_map(visualize_samples)
"""
## 3D point cloud visualization
"""
depth_vis = np.flipud(visualize_samples[1][1].squeeze()) # target
img_vis = np.flipud(visualize_samples[0][1].squeeze()) # input
fig = plt.figure(figsize=(15, 10))
ax = plt.axes(projection="3d")
STEP = 3
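# Plot every STEP-th pixel as a point in 3D, colored by the corresponding RGB value.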
for x in range(0, img_vis.shape[0], STEP):
for y in range(0, img_vis.shape[1], STEP):
ax.scatter(
[depth_vis[x, y]] * 3,
[y] * 3,
[x] * 3,
c=tuple(img_vis[x, y, :3] / 255),
s=3,
)
ax.view_init(45, 135)
"""
## Building the model
1. The basic model is from U-Net.
2. Additive skip-connections are implemented in the downscaling block.
"""
class DownscaleBlock(layers.Layer):
def __init__(
self, filters, kernel_size=(3, 3), padding="same", strides=1, **kwargs
):
super().__init__(**kwargs)
self.convA = layers.Conv2D(filters, kernel_size, strides, padding)
self.convB = layers.Conv2D(filters, kernel_size, strides, padding)
self.reluA = layers.LeakyReLU(alpha=0.2)
self.reluB = layers.LeakyReLU(alpha=0.2)
self.bn2a = tf.keras.layers.BatchNormalization()
self.bn2b = tf.keras.layers.BatchNormalization()
self.pool = layers.MaxPool2D((2, 2), (2, 2))
def call(self, input_tensor):
d = self.convA(input_tensor)
x = self.bn2a(d)
x = self.reluA(x)
x = self.convB(x)
x = self.bn2b(x)
x = self.reluB(x)
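        # Additive skip connection: add the first convolution's output back in.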
x += d
p = self.pool(x)
return x, p
class UpscaleBlock(layers.Layer):
def __init__(
self, filters, kernel_size=(3, 3), padding="same", strides=1, **kwargs
):
super().__init__(**kwargs)
self.us = layers.UpSampling2D((2, 2))
self.convA = layers.Conv2D(filters, kernel_size, strides, padding)
self.convB = layers.Conv2D(filters, kernel_size, strides, padding)
self.reluA = layers.LeakyReLU(alpha=0.2)
self.reluB = layers.LeakyReLU(alpha=0.2)
self.bn2a = tf.keras.layers.BatchNormalization()
self.bn2b = tf.keras.layers.BatchNormalization()
self.conc = layers.Concatenate()
def call(self, x, skip):
x = self.us(x)
concat = self.conc([x, skip])
x = self.convA(concat)
x = self.bn2a(x)
x = self.reluA(x)
x = self.convB(x)
x = self.bn2b(x)
x = self.reluB(x)
return x
class BottleNeckBlock(layers.Layer):
def __init__(
self, filters, kernel_size=(3, 3), padding="same", strides=1, **kwargs
):
super().__init__(**kwargs)
self.convA = layers.Conv2D(filters, kernel_size, strides, padding)
self.convB = layers.Conv2D(filters, kernel_size, strides, padding)
self.reluA = layers.LeakyReLU(alpha=0.2)
self.reluB = layers.LeakyReLU(alpha=0.2)
def call(self, x):
x = self.convA(x)
x = self.reluA(x)
x = self.convB(x)
x = self.reluB(x)
return x
"""
## Defining the loss
We will optimize 3 losses in our model:
1. Structural similarity index (SSIM).
2. L1-loss, or Point-wise depth in our case.
3. Depth smoothness loss.
Out of the three loss functions, SSIM contributes the most to improving model performance.
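In code form, the combined objective is roughly the following weighted sum
(the weights mirror those set in `DepthEstimationModel` below):

```python
# Illustrative sketch only -- see `DepthEstimationModel.calculate_loss` for the full version.
loss = 0.85 * ssim_loss + 0.1 * l1_loss + 0.9 * depth_smoothness_loss
```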
"""
class DepthEstimationModel(tf.keras.Model):
def __init__(self):
super().__init__()
self.ssim_loss_weight = 0.85
self.l1_loss_weight = 0.1
self.edge_loss_weight = 0.9
self.loss_metric = tf.keras.metrics.Mean(name="loss")
f = [16, 32, 64, 128, 256]
self.downscale_blocks = [
DownscaleBlock(f[0]),
DownscaleBlock(f[1]),
DownscaleBlock(f[2]),
DownscaleBlock(f[3]),
]
self.bottle_neck_block = BottleNeckBlock(f[4])
self.upscale_blocks = [
UpscaleBlock(f[3]),
UpscaleBlock(f[2]),
UpscaleBlock(f[1]),
UpscaleBlock(f[0]),
]
self.conv_layer = layers.Conv2D(1, (1, 1), padding="same", activation="tanh")
def calculate_loss(self, target, pred):
# Edges
dy_true, dx_true = tf.image.image_gradients(target)
dy_pred, dx_pred = tf.image.image_gradients(pred)
weights_x = tf.exp(tf.reduce_mean(tf.abs(dx_true)))
weights_y = tf.exp(tf.reduce_mean(tf.abs(dy_true)))
# Depth smoothness
smoothness_x = dx_pred * weights_x
smoothness_y = dy_pred * weights_y
depth_smoothness_loss = tf.reduce_mean(abs(smoothness_x)) + tf.reduce_mean(
abs(smoothness_y)
)
# Structural similarity (SSIM) index
ssim_loss = tf.reduce_mean(
1
- tf.image.ssim(
target, pred, max_val=WIDTH, filter_size=7, k1=0.01**2, k2=0.03**2
)
)
# Point-wise depth
l1_loss = tf.reduce_mean(tf.abs(target - pred))
loss = (
(self.ssim_loss_weight * ssim_loss)
+ (self.l1_loss_weight * l1_loss)
+ (self.edge_loss_weight * depth_smoothness_loss)
)
return loss
@property
def metrics(self):
return [self.loss_metric]
def train_step(self, batch_data):
input, target = batch_data
with tf.GradientTape() as tape:
pred = self(input, training=True)
loss = self.calculate_loss(target, pred)
gradients = tape.gradient(loss, self.trainable_variables)
self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))
self.loss_metric.update_state(loss)
return {
"loss": self.loss_metric.result(),
}
def test_step(self, batch_data):
input, target = batch_data
pred = self(input, training=False)
loss = self.calculate_loss(target, pred)
self.loss_metric.update_state(loss)
return {
"loss": self.loss_metric.result(),
}
def call(self, x):
c1, p1 = self.downscale_blocks[0](x)
c2, p2 = self.downscale_blocks[1](p1)
c3, p3 = self.downscale_blocks[2](p2)
c4, p4 = self.downscale_blocks[3](p3)
bn = self.bottle_neck_block(p4)
u1 = self.upscale_blocks[0](bn, c4)
u2 = self.upscale_blocks[1](u1, c3)
u3 = self.upscale_blocks[2](u2, c2)
u4 = self.upscale_blocks[3](u3, c1)
return self.conv_layer(u4)
"""
## Model training
"""
optimizer = tf.keras.optimizers.Adam(
learning_rate=LR,
amsgrad=False,
)
model = DepthEstimationModel()
# Compile the model
model.compile(optimizer)
train_loader = DataGenerator(
    data=df[:260].reset_index(drop=True), batch_size=BATCH_SIZE, dim=(HEIGHT, WIDTH)
)
validation_loader = DataGenerator(
    data=df[260:].reset_index(drop=True), batch_size=BATCH_SIZE, dim=(HEIGHT, WIDTH)
)
model.fit(
train_loader,
epochs=EPOCHS,
validation_data=validation_loader,
)
"""
## Visualizing model output
We visualize the model output over the validation set.
The first image is the RGB image, the second image is the ground truth depth map image
and the third one is the predicted depth map image.
"""
test_loader = next(
iter(
DataGenerator(
            data=df[265:].reset_index(drop=True), batch_size=6, dim=(HEIGHT, WIDTH)
)
)
)
visualize_depth_map(test_loader, test=True, model=model)
test_loader = next(
iter(
DataGenerator(
            data=df[300:].reset_index(drop=True), batch_size=6, dim=(HEIGHT, WIDTH)
)
)
)
visualize_depth_map(test_loader, test=True, model=model)
"""
## Possible improvements
1. You can improve this model by replacing the encoding part of the U-Net with a
pretrained DenseNet or ResNet.
2. Loss functions play an important role in solving this problem.
Tuning the loss functions may yield significant improvement.
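As a hypothetical sketch (the wiring below is an assumption, not part of this example),
a pretrained backbone could be plugged in as the encoder along these lines:

```python
# Hypothetical sketch -- not executed as part of this example.
inputs = tf.keras.Input(shape=(HEIGHT, WIDTH, 3))
backbone = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_tensor=inputs
)
backbone.trainable = False  # freeze first, optionally unfreeze for fine-tuning
# Intermediate activations (via `backbone.get_layer(...).output`) would then
# replace the `DownscaleBlock` outputs as skip connections feeding the
# existing `UpscaleBlock` decoder.
```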
"""
"""
## References
The following papers go deeper into possible approaches for depth estimation.
1. [Depth Prediction Without the Sensors: Leveraging Structure for Unsupervised Learning from Monocular Videos](https://arxiv.org/pdf/1811.06152v1.pdf)
2. [Digging Into Self-Supervised Monocular Depth Estimation](https://openaccess.thecvf.com/content_ICCV_2019/papers/Godard_Digging_Into_Self-Supervised_Monocular_Depth_Estimation_ICCV_2019_paper.pdf)
3. [Deeper Depth Prediction with Fully Convolutional Residual Networks](https://arxiv.org/pdf/1606.00373v2.pdf)
You can also find helpful implementations on Papers With Code for the depth estimation task.
You can use the trained model hosted on [Hugging Face Hub](https://huggingface.co/keras-io/monocular-depth-estimation)
and try the demo on [Hugging Face Spaces](https://huggingface.co/spaces/keras-io/Monocular-Depth-Estimation).
"""
| keras-io/examples/vision/depth_estimation.py/0 | {
"file_path": "keras-io/examples/vision/depth_estimation.py",
"repo_id": "keras-io",
"token_count": 6609
} | 116 |
<jupyter_start><jupyter_text>Next-Frame Video Prediction with Convolutional LSTMs**Author:** [Amogh Joshi](https://github.com/amogh7joshi)**Date created:** 2021/06/02**Last modified:** 2023/11/10**Description:** How to build and train a convolutional LSTM model for next-frame video prediction. IntroductionThe[Convolutional LSTM](https://papers.nips.cc/paper/2015/file/07563a3fe3bbe7e3ba84431ad9d055af-Paper.pdf)architectures bring together time series processing and computer vision byintroducing a convolutional recurrent cell in a LSTM layer. In this example, we will explore theConvolutional LSTM model in an application to next-frame prediction, the processof predicting what video frames come next given a series of past frames. Setup<jupyter_code>import numpy as np
import matplotlib.pyplot as plt
import keras
from keras import layers
import io
import imageio
from IPython.display import Image, display
from ipywidgets import widgets, Layout, HBox<jupyter_output><empty_output><jupyter_text>Dataset ConstructionFor this example, we will be using the[Moving MNIST](http://www.cs.toronto.edu/~nitish/unsupervised_video/)dataset.We will download the dataset and then construct andpreprocess training and validation sets.For next-frame prediction, our model will be using a previous frame,which we'll call `f_n`, to predict a new frame, called `f_(n + 1)`.To allow the model to create these predictions, we'll need to processthe data such that we have "shifted" inputs and outputs, where theinput data is frame `x_n`, being used to predict frame `y_(n + 1)`.<jupyter_code># Download and load the dataset.
fpath = keras.utils.get_file(
"moving_mnist.npy",
"http://www.cs.toronto.edu/~nitish/unsupervised_video/mnist_test_seq.npy",
)
dataset = np.load(fpath)
# Swap the axes representing the number of frames and number of data samples.
dataset = np.swapaxes(dataset, 0, 1)
# We'll pick out 1000 of the 10000 total examples and use those.
dataset = dataset[:1000, ...]
# Add a channel dimension since the images are grayscale.
dataset = np.expand_dims(dataset, axis=-1)
# Split into train and validation sets using indexing to optimize memory.
indexes = np.arange(dataset.shape[0])
np.random.shuffle(indexes)
train_index = indexes[: int(0.9 * dataset.shape[0])]
val_index = indexes[int(0.9 * dataset.shape[0]) :]
train_dataset = dataset[train_index]
val_dataset = dataset[val_index]
# Normalize the data to the 0-1 range.
train_dataset = train_dataset / 255
val_dataset = val_dataset / 255
# We'll define a helper function to shift the frames, where
# `x` is frames 0 to n - 1, and `y` is frames 1 to n.
def create_shifted_frames(data):
x = data[:, 0 : data.shape[1] - 1, :, :]
y = data[:, 1 : data.shape[1], :, :]
return x, y
# Apply the processing function to the datasets.
x_train, y_train = create_shifted_frames(train_dataset)
x_val, y_val = create_shifted_frames(val_dataset)
# Inspect the dataset.
print("Training Dataset Shapes: " + str(x_train.shape) + ", " + str(y_train.shape))
print("Validation Dataset Shapes: " + str(x_val.shape) + ", " + str(y_val.shape))<jupyter_output><empty_output><jupyter_text>Data VisualizationOur data consists of sequences of frames, each of whichare used to predict the upcoming frame. Let's take a lookat some of these sequential frames.<jupyter_code># Construct a figure on which we will visualize the images.
fig, axes = plt.subplots(4, 5, figsize=(10, 8))
# Plot each of the sequential images for one random data example.
data_choice = np.random.choice(range(len(train_dataset)), size=1)[0]
for idx, ax in enumerate(axes.flat):
ax.imshow(np.squeeze(train_dataset[data_choice][idx]), cmap="gray")
ax.set_title(f"Frame {idx + 1}")
ax.axis("off")
# Print information and display the figure.
print(f"Displaying frames for example {data_choice}.")
plt.show()<jupyter_output><empty_output><jupyter_text>Model ConstructionTo build a Convolutional LSTM model, we will use the`ConvLSTM2D` layer, which will accept inputs of shape`(batch_size, num_frames, width, height, channels)`, and returna prediction movie of the same shape.<jupyter_code># Construct the input layer with no definite frame size.
inp = layers.Input(shape=(None, *x_train.shape[2:]))
# We will construct 3 `ConvLSTM2D` layers with batch normalization,
# followed by a `Conv3D` layer for the spatiotemporal outputs.
x = layers.ConvLSTM2D(
filters=64,
kernel_size=(5, 5),
padding="same",
return_sequences=True,
activation="relu",
)(inp)
x = layers.BatchNormalization()(x)
x = layers.ConvLSTM2D(
filters=64,
kernel_size=(3, 3),
padding="same",
return_sequences=True,
activation="relu",
)(x)
x = layers.BatchNormalization()(x)
x = layers.ConvLSTM2D(
filters=64,
kernel_size=(1, 1),
padding="same",
return_sequences=True,
activation="relu",
)(x)
x = layers.Conv3D(
filters=1, kernel_size=(3, 3, 3), activation="sigmoid", padding="same"
)(x)
# Next, we will build the complete model and compile it.
model = keras.models.Model(inp, x)
model.compile(
loss=keras.losses.binary_crossentropy,
optimizer=keras.optimizers.Adam(),
)<jupyter_output><empty_output><jupyter_text>Model TrainingWith our model and data constructed, we can now train the model.<jupyter_code># Define some callbacks to improve training.
early_stopping = keras.callbacks.EarlyStopping(monitor="val_loss", patience=10)
reduce_lr = keras.callbacks.ReduceLROnPlateau(monitor="val_loss", patience=5)
# Define modifiable training hyperparameters.
epochs = 20
batch_size = 5
# Fit the model to the training data.
model.fit(
x_train,
y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_val, y_val),
callbacks=[early_stopping, reduce_lr],
)<jupyter_output><empty_output><jupyter_text>Frame Prediction VisualizationsWith our model now constructed and trained, we can generatesome example frame predictions based on a new video.We'll pick a random example from the validation set andthen choose the first ten frames from them. From there, we canallow the model to predict 10 new frames, which we can compareto the ground truth frame predictions.<jupyter_code># Select a random example from the validation dataset.
example = val_dataset[np.random.choice(range(len(val_dataset)), size=1)[0]]
# Pick the first/last ten frames from the example.
frames = example[:10, ...]
original_frames = example[10:, ...]
# Predict a new set of 10 frames.
for _ in range(10):
# Extract the model's prediction and post-process it.
new_prediction = model.predict(np.expand_dims(frames, axis=0))
new_prediction = np.squeeze(new_prediction, axis=0)
predicted_frame = np.expand_dims(new_prediction[-1, ...], axis=0)
# Extend the set of prediction frames.
frames = np.concatenate((frames, predicted_frame), axis=0)
# Construct a figure for the original and new frames.
fig, axes = plt.subplots(2, 10, figsize=(20, 4))
# Plot the original frames.
for idx, ax in enumerate(axes[0]):
ax.imshow(np.squeeze(original_frames[idx]), cmap="gray")
ax.set_title(f"Frame {idx + 11}")
ax.axis("off")
# Plot the new frames.
new_frames = frames[10:, ...]
for idx, ax in enumerate(axes[1]):
ax.imshow(np.squeeze(new_frames[idx]), cmap="gray")
ax.set_title(f"Frame {idx + 11}")
ax.axis("off")
# Display the figure.
plt.show()<jupyter_output><empty_output><jupyter_text>Predicted VideosFinally, we'll pick a few examples from the validation setand construct some GIFs with them to see the model'spredicted videos.You can use the trained model hosted on [Hugging Face Hub](https://huggingface.co/keras-io/conv-lstm)and try the demo on [Hugging Face Spaces](https://huggingface.co/spaces/keras-io/conv-lstm).<jupyter_code># Select a few random examples from the dataset.
examples = val_dataset[np.random.choice(range(len(val_dataset)), size=5)]
# Iterate over the examples and predict the frames.
predicted_videos = []
for example in examples:
# Pick the first/last ten frames from the example.
frames = example[:10, ...]
original_frames = example[10:, ...]
new_predictions = np.zeros(shape=(10, *frames[0].shape))
# Predict a new set of 10 frames.
for i in range(10):
# Extract the model's prediction and post-process it.
frames = example[: 10 + i + 1, ...]
new_prediction = model.predict(np.expand_dims(frames, axis=0))
new_prediction = np.squeeze(new_prediction, axis=0)
predicted_frame = np.expand_dims(new_prediction[-1, ...], axis=0)
# Extend the set of prediction frames.
new_predictions[i] = predicted_frame
# Create and save GIFs for each of the ground truth/prediction images.
for frame_set in [original_frames, new_predictions]:
# Construct a GIF from the selected video frames.
current_frames = np.squeeze(frame_set)
current_frames = current_frames[..., np.newaxis] * np.ones(3)
current_frames = (current_frames * 255).astype(np.uint8)
current_frames = list(current_frames)
# Construct a GIF from the frames.
with io.BytesIO() as gif:
imageio.mimsave(gif, current_frames, "GIF", duration=200)
predicted_videos.append(gif.getvalue())
# Display the videos.
print(" Truth\tPrediction")
for i in range(0, len(predicted_videos), 2):
# Construct and display an `HBox` with the ground truth and prediction.
box = HBox(
[
widgets.Image(value=predicted_videos[i]),
widgets.Image(value=predicted_videos[i + 1]),
]
)
display(box)<jupyter_output><empty_output> | keras-io/examples/vision/ipynb/conv_lstm.ipynb/0 | {
"file_path": "keras-io/examples/vision/ipynb/conv_lstm.ipynb",
"repo_id": "keras-io",
"token_count": 3393
} | 117 |
<jupyter_start><jupyter_text>Image classification via fine-tuning with EfficientNet**Author:** [Yixing Fu](https://github.com/yixingfu)**Date created:** 2020/06/30**Last modified:** 2023/07/10**Description:** Use EfficientNet with weights pre-trained on imagenet for Stanford Dogs classification. Introduction: what is EfficientNetEfficientNet, first introduced in [Tan and Le, 2019](https://arxiv.org/abs/1905.11946)is among the most efficient models (i.e. requiring least FLOPS for inference)that reaches State-of-the-Art accuracy on bothimagenet and common image classification transfer learning tasks.The smallest base model is similar to [MnasNet](https://arxiv.org/abs/1807.11626), whichreached near-SOTA with a significantly smaller model. By introducing a heuristic way toscale the model, EfficientNet provides a family of models (B0 to B7) that represents agood combination of efficiency and accuracy on a variety of scales. Such a scalingheuristics (compound-scaling, details see[Tan and Le, 2019](https://arxiv.org/abs/1905.11946)) allows theefficiency-oriented base model (B0) to surpass models at every scale, while avoidingextensive grid-search of hyperparameters.A summary of the latest updates on the model is available at[here](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet), where variousaugmentation schemes and semi-supervised learning approaches are applied to furtherimprove the imagenet performance of the models. These extensions of the model can be usedby updating weights without changing model architecture. B0 to B7 variants of EfficientNet*(This section provides some details on "compound scaling", and can be skippedif you're only interested in using the models)*Based on the [original paper](https://arxiv.org/abs/1905.11946) people may have theimpression that EfficientNet is a continuous family of models created by arbitrarilychoosing scaling factor in as Eq.(3) of the paper. However, choice of resolution,depth and width are also restricted by many factors:- Resolution: Resolutions not divisible by 8, 16, etc. cause zero-padding near boundariesof some layers which wastes computational resources. This especially applies to smallervariants of the model, hence the input resolution for B0 and B1 are chosen as 224 and240.- Depth and width: The building blocks of EfficientNet demands channel size to bemultiples of 8.- Resource limit: Memory limitation may bottleneck resolution when depthand width can still increase. In such a situation, increasing depth and/orwidth but keep resolution can still improve performance.As a result, the depth, width and resolution of each variant of the EfficientNet modelsare hand-picked and proven to produce good results, though they may be significantlyoff from the compound scaling formula.Therefore, the keras implementation (detailed below) only provide these 8 models, B0 to B7,instead of allowing arbitray choice of width / depth / resolution parameters. Keras implementation of EfficientNetAn implementation of EfficientNet B0 to B7 has been shipped with Keras since v2.3. Touse EfficientNetB0 for classifying 1000 classes of images from ImageNet, run:```pythonfrom tensorflow.keras.applications import EfficientNetB0model = EfficientNetB0(weights='imagenet')```This model takes input images of shape `(224, 224, 3)`, and the input data should be in therange `[0, 255]`. 
Normalization is included as part of the model.Because training EfficientNet on ImageNet takes a tremendous amount of resources andseveral techniques that are not a part of the model architecture itself. Hence the Kerasimplementation by default loads pre-trained weights obtained via training with[AutoAugment](https://arxiv.org/abs/1805.09501).For B0 to B7 base models, the input shapes are different. Here is a list of input shapeexpected for each model:| Base model | resolution||----------------|-----|| EfficientNetB0 | 224 || EfficientNetB1 | 240 || EfficientNetB2 | 260 || EfficientNetB3 | 300 || EfficientNetB4 | 380 || EfficientNetB5 | 456 || EfficientNetB6 | 528 || EfficientNetB7 | 600 |When the model is intended for transfer learning, the Keras implementationprovides a option to remove the top layers:```model = EfficientNetB0(include_top=False, weights='imagenet')```This option excludes the final `Dense` layer that turns 1280 features on the penultimatelayer into prediction of the 1000 ImageNet classes. Replacing the top layer with customlayers allows using EfficientNet as a feature extractor in a transfer learning workflow.Another argument in the model constructor worth noticing is `drop_connect_rate` which controlsthe dropout rate responsible for [stochastic depth](https://arxiv.org/abs/1603.09382).This parameter serves as a toggle for extra regularization in finetuning, but does notaffect loaded weights. For example, when stronger regularization is desired, try:```pythonmodel = EfficientNetB0(weights='imagenet', drop_connect_rate=0.4)```The default value is 0.2. Example: EfficientNetB0 for Stanford Dogs.EfficientNet is capable of a wide range of image classification tasks.This makes it a good model for transfer learning.As an end-to-end example, we will show using pre-trained EfficientNetB0 on[Stanford Dogs](http://vision.stanford.edu/aditya86/ImageNetDogs/main.html) dataset. Setup and data loading<jupyter_code>import numpy as np
import tensorflow_datasets as tfds
import tensorflow as tf # For tf.data
import matplotlib.pyplot as plt
import keras
from keras import layers
from keras.applications import EfficientNetB0
# IMG_SIZE is determined by EfficientNet model choice
IMG_SIZE = 224
BATCH_SIZE = 64<jupyter_output><empty_output><jupyter_text>Loading dataHere we load data from [tensorflow_datasets](https://www.tensorflow.org/datasets)(hereafter TFDS).Stanford Dogs dataset is provided inTFDS as [stanford_dogs](https://www.tensorflow.org/datasets/catalog/stanford_dogs).It features 20,580 images that belong to 120 classes of dog breeds(12,000 for training and 8,580 for testing).By simply changing `dataset_name` below, you may also try this notebook forother datasets in TFDS such as[cifar10](https://www.tensorflow.org/datasets/catalog/cifar10),[cifar100](https://www.tensorflow.org/datasets/catalog/cifar100),[food101](https://www.tensorflow.org/datasets/catalog/food101),etc. When the images are much smaller than the size of EfficientNet input,we can simply upsample the input images. It has been shown in[Tan and Le, 2019](https://arxiv.org/abs/1905.11946) that transfer learningresult is better for increased resolution even if input images remain small.<jupyter_code>dataset_name = "stanford_dogs"
(ds_train, ds_test), ds_info = tfds.load(
dataset_name, split=["train", "test"], with_info=True, as_supervised=True
)
NUM_CLASSES = ds_info.features["label"].num_classes<jupyter_output><empty_output><jupyter_text>When the dataset include images with various size, we need to resize them into ashared size. The Stanford Dogs dataset includes only images at least 200x200pixels in size. Here we resize the images to the input size needed for EfficientNet.<jupyter_code>size = (IMG_SIZE, IMG_SIZE)
ds_train = ds_train.map(lambda image, label: (tf.image.resize(image, size), label))
ds_test = ds_test.map(lambda image, label: (tf.image.resize(image, size), label))<jupyter_output><empty_output><jupyter_text>Visualizing the dataThe following code shows the first 9 images with their labels.<jupyter_code>def format_label(label):
string_label = label_info.int2str(label)
return string_label.split("-")[1]
label_info = ds_info.features["label"]
for i, (image, label) in enumerate(ds_train.take(9)):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(image.numpy().astype("uint8"))
plt.title("{}".format(format_label(label)))
plt.axis("off")<jupyter_output><empty_output><jupyter_text>Data augmentationWe can use the preprocessing layers APIs for image augmentation.<jupyter_code>img_augmentation_layers = [
layers.RandomRotation(factor=0.15),
layers.RandomTranslation(height_factor=0.1, width_factor=0.1),
layers.RandomFlip(),
layers.RandomContrast(factor=0.1),
]
def img_augmentation(images):
for layer in img_augmentation_layers:
images = layer(images)
return images<jupyter_output><empty_output><jupyter_text>This `Sequential` model object can be used both as a part ofthe model we later build, and as a function to preprocessdata before feeding into the model. Using them as function makesit easy to visualize the augmented images. Here we plot 9 examplesof augmentation result of a given figure.<jupyter_code>for image, label in ds_train.take(1):
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
aug_img = img_augmentation(np.expand_dims(image.numpy(), axis=0))
aug_img = np.array(aug_img)
plt.imshow(aug_img[0].astype("uint8"))
plt.title("{}".format(format_label(label)))
plt.axis("off")<jupyter_output><empty_output><jupyter_text>Prepare inputsOnce we verify the input data and augmentation are working correctly,we prepare dataset for training. The input data are resized to uniform`IMG_SIZE`. The labels are put into one-hot(a.k.a. categorical) encoding. The dataset is batched.Note: `prefetch` and `AUTOTUNE` may in some situation improveperformance, but depends on environment and the specific dataset used.See this [guide](https://www.tensorflow.org/guide/data_performance)for more information on data pipeline performance.<jupyter_code># One-hot / categorical encoding
def input_preprocess_train(image, label):
image = img_augmentation(image)
label = tf.one_hot(label, NUM_CLASSES)
return image, label
def input_preprocess_test(image, label):
label = tf.one_hot(label, NUM_CLASSES)
return image, label
ds_train = ds_train.map(input_preprocess_train, num_parallel_calls=tf.data.AUTOTUNE)
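# `drop_remainder=True` keeps every batch the same size, so tensor shapes stay static.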
ds_train = ds_train.batch(batch_size=BATCH_SIZE, drop_remainder=True)
ds_train = ds_train.prefetch(tf.data.AUTOTUNE)
ds_test = ds_test.map(input_preprocess_test, num_parallel_calls=tf.data.AUTOTUNE)
ds_test = ds_test.batch(batch_size=BATCH_SIZE, drop_remainder=True)<jupyter_output><empty_output><jupyter_text>Training a model from scratchWe build an EfficientNetB0 with 120 output classes, that is initialized from scratch:Note: the accuracy will increase very slowly and may overfit.<jupyter_code>model = EfficientNetB0(
include_top=True,
weights=None,
classes=NUM_CLASSES,
input_shape=(IMG_SIZE, IMG_SIZE, 3),
)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
epochs = 40 # @param {type: "slider", min:10, max:100}
hist = model.fit(ds_train, epochs=epochs, validation_data=ds_test)<jupyter_output><empty_output><jupyter_text>Training the model is relatively fast. This might make it sounds easy to simply train EfficientNet on anydataset wanted from scratch. However, training EfficientNet on smaller datasets,especially those with lower resolution like CIFAR-100, faces the significant challenge ofoverfitting.Hence training from scratch requires very careful choice of hyperparameters and isdifficult to find suitable regularization. It would also be much more demanding in resources.Plotting the training and validation accuracymakes it clear that validation accuracy stagnates at a low value.<jupyter_code>import matplotlib.pyplot as plt
def plot_hist(hist):
plt.plot(hist.history["accuracy"])
plt.plot(hist.history["val_accuracy"])
plt.title("model accuracy")
plt.ylabel("accuracy")
plt.xlabel("epoch")
plt.legend(["train", "validation"], loc="upper left")
plt.show()
plot_hist(hist)<jupyter_output><empty_output><jupyter_text>Transfer learning from pre-trained weightsHere we initialize the model with pre-trained ImageNet weights,and we fine-tune it on our own dataset.<jupyter_code>def build_model(num_classes):
inputs = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3))
model = EfficientNetB0(include_top=False, input_tensor=inputs, weights="imagenet")
# Freeze the pretrained weights
model.trainable = False
# Rebuild top
x = layers.GlobalAveragePooling2D(name="avg_pool")(model.output)
x = layers.BatchNormalization()(x)
top_dropout_rate = 0.2
x = layers.Dropout(top_dropout_rate, name="top_dropout")(x)
outputs = layers.Dense(num_classes, activation="softmax", name="pred")(x)
# Compile
model = keras.Model(inputs, outputs, name="EfficientNet")
optimizer = keras.optimizers.Adam(learning_rate=1e-2)
model.compile(
optimizer=optimizer, loss="categorical_crossentropy", metrics=["accuracy"]
)
return model<jupyter_output><empty_output><jupyter_text>The first step to transfer learning is to freeze all layers and train only the toplayers. For this step, a relatively large learning rate (1e-2) can be used.Note that validation accuracy and loss will usually be better than trainingaccuracy and loss. This is because the regularization is strong, which onlysuppresses training-time metrics.Note that the convergence may take up to 50 epochs depending on choice of learning rate.If image augmentation layers were notapplied, the validation accuracy may only reach ~60%.<jupyter_code>model = build_model(num_classes=NUM_CLASSES)
epochs = 25 # @param {type: "slider", min:8, max:80}
hist = model.fit(ds_train, epochs=epochs, validation_data=ds_test)
plot_hist(hist)<jupyter_output><empty_output><jupyter_text>The second step is to unfreeze a number of layers and fit the model using smallerlearning rate. In this example we show unfreezing all layers, but depending onspecific dataset it may be desireble to only unfreeze a fraction of all layers.When the feature extraction withpretrained model works good enough, this step would give a very limited gain onvalidation accuracy. In our case we only see a small improvement,as ImageNet pretraining already exposed the model to a good amount of dogs.On the other hand, when we use pretrained weights on a dataset that is more differentfrom ImageNet, this fine-tuning step can be crucial as the feature extractor alsoneeds to be adjusted by a considerable amount. Such a situation can be demonstratedif choosing CIFAR-100 dataset instead, where fine-tuning boosts validation accuracyby about 10% to pass 80% on `EfficientNetB0`.A side note on freezing/unfreezing models: setting `trainable` of a `Model` willsimultaneously set all layers belonging to the `Model` to the same `trainable`attribute. Each layer is trainable only if both the layer itself and the modelcontaining it are trainable. Hence when we need to partially freeze/unfreezea model, we need to make sure the `trainable` attribute of the model is setto `True`.<jupyter_code>def unfreeze_model(model):
# We unfreeze the top 20 layers while leaving BatchNorm layers frozen
for layer in model.layers[-20:]:
if not isinstance(layer, layers.BatchNormalization):
layer.trainable = True
optimizer = keras.optimizers.Adam(learning_rate=1e-5)
model.compile(
optimizer=optimizer, loss="categorical_crossentropy", metrics=["accuracy"]
)
unfreeze_model(model)
epochs = 4 # @param {type: "slider", min:4, max:10}
hist = model.fit(ds_train, epochs=epochs, validation_data=ds_test)
plot_hist(hist)<jupyter_output><empty_output> | keras-io/examples/vision/ipynb/image_classification_efficientnet_fine_tuning.ipynb/0 | {
"file_path": "keras-io/examples/vision/ipynb/image_classification_efficientnet_fine_tuning.ipynb",
"repo_id": "keras-io",
"token_count": 4574
} | 118 |
<jupyter_start><jupyter_text>MobileViT: A mobile-friendly Transformer-based model for image classification**Author:** [Sayak Paul](https://twitter.com/RisingSayak)**Date created:** 2021/10/20**Last modified:** 2024/02/11**Description:** MobileViT for image classification with combined benefits of convolutions and Transformers. IntroductionIn this example, we implement the MobileViT architecture([Mehta et al.](https://arxiv.org/abs/2110.02178)),which combines the benefits of Transformers([Vaswani et al.](https://arxiv.org/abs/1706.03762))and convolutions. With Transformers, we can capture long-range dependencies that resultin global representations. With convolutions, we can capture spatial relationships thatmodel locality.Besides combining the properties of Transformers and convolutions, the authors introduceMobileViT as a general-purpose mobile-friendly backbone for different image recognitiontasks. Their findings suggest that, performance-wise, MobileViT is better than othermodels with the same or higher complexity ([MobileNetV3](https://arxiv.org/abs/1905.02244),for example), while being efficient on mobile devices.Note: This example should be run with Tensorflow 2.13 and higher. Imports<jupyter_code>import os
import tensorflow as tf
os.environ["KERAS_BACKEND"] = "tensorflow"
import keras
from keras import layers
from keras import backend
import tensorflow_datasets as tfds
tfds.disable_progress_bar()<jupyter_output><empty_output><jupyter_text>Hyperparameters<jupyter_code># Values are from table 4.
patch_size = 4 # 2x2, for the Transformer blocks.
image_size = 256
expansion_factor = 2 # expansion factor for the MobileNetV2 blocks.<jupyter_output><empty_output><jupyter_text>MobileViT utilitiesThe MobileViT architecture is comprised of the following blocks:* Strided 3x3 convolutions that process the input image.* [MobileNetV2](https://arxiv.org/abs/1801.04381)-style inverted residual blocks fordownsampling the resolution of the intermediate feature maps.* MobileViT blocks that combine the benefits of Transformers and convolutions. It ispresented in the figure below (taken from the[original paper](https://arxiv.org/abs/2110.02178)):<jupyter_code>def conv_block(x, filters=16, kernel_size=3, strides=2):
conv_layer = layers.Conv2D(
filters,
kernel_size,
strides=strides,
activation=keras.activations.swish,
padding="same",
)
return conv_layer(x)
# Reference: https://github.com/keras-team/keras/blob/e3858739d178fe16a0c77ce7fab88b0be6dbbdc7/keras/applications/imagenet_utils.py#L413C17-L435
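# Returns the asymmetric (top, bottom)/(left, right) padding needed so that a
# strided convolution with "valid" padding stays aligned with the input size.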
def correct_pad(inputs, kernel_size):
img_dim = 2 if backend.image_data_format() == "channels_first" else 1
input_size = inputs.shape[img_dim : (img_dim + 2)]
if isinstance(kernel_size, int):
kernel_size = (kernel_size, kernel_size)
if input_size[0] is None:
adjust = (1, 1)
else:
adjust = (1 - input_size[0] % 2, 1 - input_size[1] % 2)
correct = (kernel_size[0] // 2, kernel_size[1] // 2)
return (
(correct[0] - adjust[0], correct[0]),
(correct[1] - adjust[1], correct[1]),
)
# Reference: https://git.io/JKgtC
def inverted_residual_block(x, expanded_channels, output_channels, strides=1):
m = layers.Conv2D(expanded_channels, 1, padding="same", use_bias=False)(x)
m = layers.BatchNormalization()(m)
m = keras.activations.swish(m)
if strides == 2:
m = layers.ZeroPadding2D(padding=correct_pad(m, 3))(m)
m = layers.DepthwiseConv2D(
3, strides=strides, padding="same" if strides == 1 else "valid", use_bias=False
)(m)
m = layers.BatchNormalization()(m)
m = keras.activations.swish(m)
m = layers.Conv2D(output_channels, 1, padding="same", use_bias=False)(m)
m = layers.BatchNormalization()(m)
if keras.ops.equal(x.shape[-1], output_channels) and strides == 1:
return layers.Add()([m, x])
return m
# Reference:
# https://keras.io/examples/vision/image_classification_with_vision_transformer/
def mlp(x, hidden_units, dropout_rate):
for units in hidden_units:
x = layers.Dense(units, activation=keras.activations.swish)(x)
x = layers.Dropout(dropout_rate)(x)
return x
def transformer_block(x, transformer_layers, projection_dim, num_heads=2):
for _ in range(transformer_layers):
# Layer normalization 1.
x1 = layers.LayerNormalization(epsilon=1e-6)(x)
# Create a multi-head attention layer.
attention_output = layers.MultiHeadAttention(
num_heads=num_heads, key_dim=projection_dim, dropout=0.1
)(x1, x1)
# Skip connection 1.
x2 = layers.Add()([attention_output, x])
# Layer normalization 2.
x3 = layers.LayerNormalization(epsilon=1e-6)(x2)
# MLP.
x3 = mlp(
x3,
hidden_units=[x.shape[-1] * 2, x.shape[-1]],
dropout_rate=0.1,
)
# Skip connection 2.
x = layers.Add()([x3, x2])
return x
def mobilevit_block(x, num_blocks, projection_dim, strides=1):
# Local projection with convolutions.
local_features = conv_block(x, filters=projection_dim, strides=strides)
local_features = conv_block(
local_features, filters=projection_dim, kernel_size=1, strides=strides
)
# Unfold into patches and then pass through Transformers.
num_patches = int((local_features.shape[1] * local_features.shape[2]) / patch_size)
non_overlapping_patches = layers.Reshape((patch_size, num_patches, projection_dim))(
local_features
)
global_features = transformer_block(
non_overlapping_patches, num_blocks, projection_dim
)
# Fold into conv-like feature-maps.
folded_feature_map = layers.Reshape((*local_features.shape[1:-1], projection_dim))(
global_features
)
# Apply point-wise conv -> concatenate with the input features.
folded_feature_map = conv_block(
folded_feature_map, filters=x.shape[-1], kernel_size=1, strides=strides
)
local_global_features = layers.Concatenate(axis=-1)([x, folded_feature_map])
    # Fuse the local and global features using a convolution layer.
local_global_features = conv_block(
local_global_features, filters=projection_dim, strides=strides
)
return local_global_features<jupyter_output><empty_output><jupyter_text>**More on the MobileViT block**:* First, the feature representations (A) go through convolution blocks that capture localrelationships. The expected shape of a single entry here would be `(h, w, num_channels)`.* Then they get unfolded into another vector with shape `(p, n, num_channels)`,where `p` is the area of a small patch, and `n` is `(h * w) / p`. So, we end up with `n`non-overlapping patches.* This unfolded vector is then passed through a Tranformer block that captures globalrelationships between the patches.* The output vector (B) is again folded into a vector of shape `(h, w, num_channels)`resembling a feature map coming out of convolutions.Vectors A and B are then passed through two more convolutional layers to fuse the localand global representations. Notice how the spatial resolution of the final vector remainsunchanged at this point. The authors also present an explanation of how the MobileViTblock resembles a convolution block of a CNN. For more details, please refer to theoriginal paper. Next, we combine these blocks together and implement the MobileViT architecture (XXSvariant). The following figure (taken from the original paper) presents a schematicrepresentation of the architecture:<jupyter_code>def create_mobilevit(num_classes=5):
inputs = keras.Input((image_size, image_size, 3))
x = layers.Rescaling(scale=1.0 / 255)(inputs)
# Initial conv-stem -> MV2 block.
x = conv_block(x, filters=16)
x = inverted_residual_block(
x, expanded_channels=16 * expansion_factor, output_channels=16
)
# Downsampling with MV2 block.
x = inverted_residual_block(
x, expanded_channels=16 * expansion_factor, output_channels=24, strides=2
)
x = inverted_residual_block(
x, expanded_channels=24 * expansion_factor, output_channels=24
)
x = inverted_residual_block(
x, expanded_channels=24 * expansion_factor, output_channels=24
)
# First MV2 -> MobileViT block.
x = inverted_residual_block(
x, expanded_channels=24 * expansion_factor, output_channels=48, strides=2
)
x = mobilevit_block(x, num_blocks=2, projection_dim=64)
# Second MV2 -> MobileViT block.
x = inverted_residual_block(
x, expanded_channels=64 * expansion_factor, output_channels=64, strides=2
)
x = mobilevit_block(x, num_blocks=4, projection_dim=80)
# Third MV2 -> MobileViT block.
x = inverted_residual_block(
x, expanded_channels=80 * expansion_factor, output_channels=80, strides=2
)
x = mobilevit_block(x, num_blocks=3, projection_dim=96)
x = conv_block(x, filters=320, kernel_size=1, strides=1)
# Classification head.
x = layers.GlobalAvgPool2D()(x)
outputs = layers.Dense(num_classes, activation="softmax")(x)
return keras.Model(inputs, outputs)
mobilevit_xxs = create_mobilevit()
mobilevit_xxs.summary()<jupyter_output><empty_output><jupyter_text>Dataset preparationWe will be using the[`tf_flowers`](https://www.tensorflow.org/datasets/catalog/tf_flowers)dataset to demonstrate the model. Unlike other Transformer-based architectures,MobileViT uses a simple augmentation pipeline primarily because it has the propertiesof a CNN.<jupyter_code>batch_size = 64
auto = tf.data.AUTOTUNE
resize_bigger = 280
num_classes = 5
def preprocess_dataset(is_training=True):
def _pp(image, label):
if is_training:
# Resize to a bigger spatial resolution and take the random
# crops.
image = tf.image.resize(image, (resize_bigger, resize_bigger))
image = tf.image.random_crop(image, (image_size, image_size, 3))
image = tf.image.random_flip_left_right(image)
else:
image = tf.image.resize(image, (image_size, image_size))
label = tf.one_hot(label, depth=num_classes)
return image, label
return _pp
def prepare_dataset(dataset, is_training=True):
if is_training:
dataset = dataset.shuffle(batch_size * 10)
dataset = dataset.map(preprocess_dataset(is_training), num_parallel_calls=auto)
return dataset.batch(batch_size).prefetch(auto)<jupyter_output><empty_output><jupyter_text>The authors use a multi-scale data sampler to help the model learn representations ofvaried scales. In this example, we discard this part. Load and prepare the dataset<jupyter_code>train_dataset, val_dataset = tfds.load(
"tf_flowers", split=["train[:90%]", "train[90%:]"], as_supervised=True
)
num_train = train_dataset.cardinality()
num_val = val_dataset.cardinality()
print(f"Number of training examples: {num_train}")
print(f"Number of validation examples: {num_val}")
train_dataset = prepare_dataset(train_dataset, is_training=True)
val_dataset = prepare_dataset(val_dataset, is_training=False)<jupyter_output><empty_output><jupyter_text>Train a MobileViT (XXS) model<jupyter_code>learning_rate = 0.002
label_smoothing_factor = 0.1
epochs = 30
optimizer = keras.optimizers.Adam(learning_rate=learning_rate)
loss_fn = keras.losses.CategoricalCrossentropy(label_smoothing=label_smoothing_factor)
def run_experiment(epochs=epochs):
mobilevit_xxs = create_mobilevit(num_classes=num_classes)
mobilevit_xxs.compile(optimizer=optimizer, loss=loss_fn, metrics=["accuracy"])
# When using `save_weights_only=True` in `ModelCheckpoint`, the filepath provided must end in `.weights.h5`
checkpoint_filepath = "/tmp/checkpoint.weights.h5"
checkpoint_callback = keras.callbacks.ModelCheckpoint(
checkpoint_filepath,
monitor="val_accuracy",
save_best_only=True,
save_weights_only=True,
)
mobilevit_xxs.fit(
train_dataset,
validation_data=val_dataset,
epochs=epochs,
callbacks=[checkpoint_callback],
)
mobilevit_xxs.load_weights(checkpoint_filepath)
_, accuracy = mobilevit_xxs.evaluate(val_dataset)
print(f"Validation accuracy: {round(accuracy * 100, 2)}%")
return mobilevit_xxs
mobilevit_xxs = run_experiment()<jupyter_output><empty_output><jupyter_text>Results and TFLite conversionWith about one million parameters, getting to ~85% top-1 accuracy on 256x256 resolution isa strong result. This MobileViT mobile is fully compatible with TensorFlow Lite (TFLite)and can be converted with the following code:<jupyter_code># Serialize the model as a SavedModel.
tf.saved_model.save(mobilevit_xxs, "mobilevit_xxs")
# Convert to TFLite. This form of quantization is called
# post-training dynamic-range quantization in TFLite.
converter = tf.lite.TFLiteConverter.from_saved_model("mobilevit_xxs")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [
tf.lite.OpsSet.TFLITE_BUILTINS, # Enable TensorFlow Lite ops.
tf.lite.OpsSet.SELECT_TF_OPS, # Enable TensorFlow ops.
]
tflite_model = converter.convert()
open("mobilevit_xxs.tflite", "wb").write(tflite_model)<jupyter_output><empty_output> | keras-io/examples/vision/ipynb/mobilevit.ipynb/0 | {
"file_path": "keras-io/examples/vision/ipynb/mobilevit.ipynb",
"repo_id": "keras-io",
"token_count": 4890
} | 119 |
<jupyter_start><jupyter_text>Semantic segmentation with SegFormer and Hugging Face Transformers**Author:** [Sayak Paul](https://twitter.com/RisingSayak)**Date created:** 2023/01/25**Last modified:** 2023/01/29**Description:** Fine-tuning a SegFormer model variant for semantic segmentation. IntroductionIn this example, we show how to fine-tune a SegFormer model variant to dosemantic segmentation on a custom dataset. Semantic segmentation is the task ofassigning a category to each and every pixel of an image. SegFormer was proposed in[SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203).SegFormer uses a hierarchical Transformer architecture (called "Mix Transformer") asits encoder and a lightweight decoder for segmentation. As a result, it yieldsstate-of-the-art performance on semantic segmentation while being more efficient thanexisting models. For more details, check out the original paper.We leverage[Hugging Face Transformers](https://github.com/huggingface/transformers)to load a pretrained SegFormer checkpoint and fine-tune it on a custom dataset.**Note:** this example reuses code from the following sources:* [Official tutorial on segmentation from the TensorFlow team](https://www.tensorflow.org/tutorials/images/segmentation)* [Hugging Face Task guide on segmentation](https://huggingface.co/docs/transformers/main/en/tasks/semantic_segmentation)To run this example, we need to install the `transformers` library:<jupyter_code>!!pip install transformers -q<jupyter_output><empty_output><jupyter_text>Load the dataWe use the [Oxford-IIIT Pets](https://www.robots.ox.ac.uk/~vgg/data/pets/) dataset forthis example. We leverage `tensorflow_datasets` to load the dataset.<jupyter_code>import tensorflow_datasets as tfds
dataset, info = tfds.load("oxford_iiit_pet:3.*.*", with_info=True)<jupyter_output><empty_output><jupyter_text>Prepare the datasetsFor preparing the datasets for training and evaluation, we:* Normalize the images with the mean and standard deviation used during pre-trainingSegFormer.* Subtract 1 from the segmentation masks so that the pixel values start from 0.* Resize the images.* Transpose the images such that they are in `"channels_first"` format. This is to makethem compatible with the SegFormer model from Hugging Face Transformers.<jupyter_code>import tensorflow as tf
from tensorflow.keras import backend
image_size = 512
mean = tf.constant([0.485, 0.456, 0.406])
std = tf.constant([0.229, 0.224, 0.225])
def normalize(input_image, input_mask):
input_image = tf.image.convert_image_dtype(input_image, tf.float32)
input_image = (input_image - mean) / tf.maximum(std, backend.epsilon())
input_mask -= 1
return input_image, input_mask
def load_image(datapoint):
input_image = tf.image.resize(datapoint["image"], (image_size, image_size))
input_mask = tf.image.resize(
datapoint["segmentation_mask"],
(image_size, image_size),
method="bilinear",
)
input_image, input_mask = normalize(input_image, input_mask)
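    # SegFormer from Hugging Face Transformers expects channels-first inputs.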
input_image = tf.transpose(input_image, (2, 0, 1))
return {"pixel_values": input_image, "labels": tf.squeeze(input_mask)}<jupyter_output><empty_output><jupyter_text>We now use the above utilities to prepare `tf.data.Dataset` objects including`prefetch()` for performance. Change the `batch_size` to match the size of the GPU memoryon the GPU that you're using for training.<jupyter_code>auto = tf.data.AUTOTUNE
batch_size = 4
train_ds = (
dataset["train"]
.cache()
.shuffle(batch_size * 10)
.map(load_image, num_parallel_calls=auto)
.batch(batch_size)
.prefetch(auto)
)
test_ds = (
dataset["test"]
.map(load_image, num_parallel_calls=auto)
.batch(batch_size)
.prefetch(auto)
)<jupyter_output><empty_output><jupyter_text>We can check the shapes of the input images and their segmentation maps:<jupyter_code>print(train_ds.element_spec)<jupyter_output><empty_output><jupyter_text>Visualize dataset<jupyter_code>import matplotlib.pyplot as plt
def display(display_list):
plt.figure(figsize=(15, 15))
title = ["Input Image", "True Mask", "Predicted Mask"]
for i in range(len(display_list)):
plt.subplot(1, len(display_list), i + 1)
plt.title(title[i])
plt.imshow(tf.keras.utils.array_to_img(display_list[i]))
plt.axis("off")
plt.show()
for samples in train_ds.take(2):
sample_image, sample_mask = samples["pixel_values"][0], samples["labels"][0]
sample_image = tf.transpose(sample_image, (1, 2, 0))
sample_mask = tf.expand_dims(sample_mask, -1)
display([sample_image, sample_mask])<jupyter_output><empty_output><jupyter_text>Load a pretrained SegFormer checkpointWe now load a pretrained SegFormer model variant from Hugging Face Transformers. TheSegFormer model comes in different variants dubbed as **MiT-B0** to **MiT-B5**. You canfind these checkpoints[here](https://huggingface.co/models?pipeline_tag=image-segmentation&sort=downloads&search=segformer).We load the smallest variant Mix-B0, which produces a good trade-offbetween inference efficiency and predictive performance.<jupyter_code>from transformers import TFSegformerForSemanticSegmentation
model_checkpoint = "nvidia/mit-b0"
id2label = {0: "outer", 1: "inner", 2: "border"}
label2id = {label: id for id, label in id2label.items()}
num_labels = len(id2label)
model = TFSegformerForSemanticSegmentation.from_pretrained(
model_checkpoint,
num_labels=num_labels,
id2label=id2label,
label2id=label2id,
ignore_mismatched_sizes=True,
)<jupyter_output><empty_output><jupyter_text>The warning is telling us that we're throwing away some weights and newly initializingsome others. Don't panic! This is absolutely normal. Since we're using a custom datasetwhich has a different set of semantic class labels than the pre-training dataset,[`TFSegformerForSemanticSegmentation`](https://huggingface.co/docs/transformers/model_doc/segformertransformers.TFSegformerForSemanticSegmentation)is initializing a new decoder head.We can now initialize an optimizer and compile the model with it. Compile the model<jupyter_code>lr = 0.00006
optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
model.compile(optimizer=optimizer)<jupyter_output><empty_output><jupyter_text>Notice that we are not using any loss function for compiling the model. This is becausethe forward pass of the model[implements](https://github.com/huggingface/transformers/blob/820c46a707ddd033975bc3b0549eea200e64c7da/src/transformers/models/segformer/modeling_tf_segformer.pyL873)the loss computation part when we provide labels alongside the input images. Aftercomputing the loss, the model returned a structured `dataclass` object which isthen used to guide the training process.With the compiled model, we can proceed and call `fit()` on it to begin the fine-tuningprocess! Prediction callback to monitor training progressIt helps us to visualize some sample predictions when the model is being fine-tuned,thereby helping us to monitor the progress of the model. This callback is inspired from[this tutorial](https://www.tensorflow.org/tutorials/images/segmentation).<jupyter_code>from IPython.display import clear_output
def create_mask(pred_mask):
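    # The logits are channels-first, so the class dimension is axis 1.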
pred_mask = tf.math.argmax(pred_mask, axis=1)
pred_mask = tf.expand_dims(pred_mask, -1)
return pred_mask[0]
def show_predictions(dataset=None, num=1):
if dataset:
for sample in dataset.take(num):
images, masks = sample["pixel_values"], sample["labels"]
masks = tf.expand_dims(masks, -1)
pred_masks = model.predict(images).logits
images = tf.transpose(images, (0, 2, 3, 1))
display([images[0], masks[0], create_mask(pred_masks)])
else:
display(
[
sample_image,
sample_mask,
create_mask(model.predict(tf.expand_dims(sample_image, 0))),
]
)
class DisplayCallback(tf.keras.callbacks.Callback):
def __init__(self, dataset, **kwargs):
super().__init__(**kwargs)
self.dataset = dataset
def on_epoch_end(self, epoch, logs=None):
clear_output(wait=True)
show_predictions(self.dataset)
print("\nSample Prediction after epoch {}\n".format(epoch + 1))<jupyter_output><empty_output><jupyter_text>Train model<jupyter_code># Increase the number of epochs if the results are not of expected quality.
epochs = 5
history = model.fit(
train_ds,
validation_data=test_ds,
callbacks=[DisplayCallback(test_ds)],
epochs=epochs,
)<jupyter_output><empty_output><jupyter_text>InferenceWe perform inference on a few samples from the test set.<jupyter_code>show_predictions(test_ds, 5)<jupyter_output><empty_output> | keras-io/examples/vision/ipynb/segformer.ipynb/0 | {
"file_path": "keras-io/examples/vision/ipynb/segformer.ipynb",
"repo_id": "keras-io",
"token_count": 3037
} | 120 |
<jupyter_start><jupyter_text>Video Vision Transformer**Author:** [Aritra Roy Gosthipaty](https://twitter.com/ariG23498), [Ayush Thakur](https://twitter.com/ayushthakur0) (equal contribution)**Date created:** 2022/01/12**Last modified:** 2024/01/15**Description:** A Transformer-based architecture for video classification. IntroductionVideos are sequences of images. Let's assume you have an imagerepresentation model (CNN, ViT, etc.) and a sequence model(RNN, LSTM, etc.) at hand. We ask you to tweak the model for videoclassification. The simplest approach would be to apply the imagemodel to individual frames, use the sequence model to learnsequences of image features, then apply a classification head onthe learned sequence representation.The Keras example[Video Classification with a CNN-RNN Architecture](https://keras.io/examples/vision/video_classification/)explains this approach in detail. Alernatively, you can alsobuild a hybrid Transformer-based model for video classification as shown in the Keras example[Video Classification with Transformers](https://keras.io/examples/vision/video_transformers/).In this example, we minimally implement[ViViT: A Video Vision Transformer](https://arxiv.org/abs/2103.15691)by Arnab et al., a **pure Transformer-based** modelfor video classification. The authors propose a novel embedding schemeand a number of Transformer variants to model video clips. We implementthe embedding scheme and one of the variants of the Transformerarchitecture, for simplicity.This example requires `medmnist` package, which can be installedby running the code cell below.<jupyter_code>!pip install -qq medmnist<jupyter_output><empty_output><jupyter_text>Imports<jupyter_code>import os
import io
import imageio
import medmnist
import ipywidgets
import numpy as np
import tensorflow as tf # for data preprocessing only
import keras
from keras import layers, ops
# Setting seed for reproducibility
SEED = 42
os.environ["TF_CUDNN_DETERMINISTIC"] = "1"
keras.utils.set_random_seed(SEED)<jupyter_output><empty_output><jupyter_text>HyperparametersThe hyperparameters are chosen via hyperparametersearch. You can learn more about the process in the "conclusion" section.<jupyter_code># DATA
DATASET_NAME = "organmnist3d"
BATCH_SIZE = 32
AUTO = tf.data.AUTOTUNE
INPUT_SHAPE = (28, 28, 28, 1)
NUM_CLASSES = 11
# OPTIMIZER
LEARNING_RATE = 1e-4
WEIGHT_DECAY = 1e-5
# TRAINING
EPOCHS = 60
# TUBELET EMBEDDING
PATCH_SIZE = (8, 8, 8)
NUM_PATCHES = (INPUT_SHAPE[0] // PATCH_SIZE[0]) ** 2
# ViViT ARCHITECTURE
LAYER_NORM_EPS = 1e-6
PROJECTION_DIM = 128
NUM_HEADS = 8
NUM_LAYERS = 8<jupyter_output><empty_output><jupyter_text>DatasetFor our example we use the[MedMNIST v2: A Large-Scale Lightweight Benchmark for 2D and 3D Biomedical Image Classification](https://medmnist.com/)dataset. The videos are lightweight and easy to train on.<jupyter_code>def download_and_prepare_dataset(data_info: dict):
"""Utility function to download the dataset.
Arguments:
data_info (dict): Dataset metadata.
"""
data_path = keras.utils.get_file(origin=data_info["url"], md5_hash=data_info["MD5"])
with np.load(data_path) as data:
# Get videos
train_videos = data["train_images"]
valid_videos = data["val_images"]
test_videos = data["test_images"]
# Get labels
train_labels = data["train_labels"].flatten()
valid_labels = data["val_labels"].flatten()
test_labels = data["test_labels"].flatten()
return (
(train_videos, train_labels),
(valid_videos, valid_labels),
(test_videos, test_labels),
)
# Get the metadata of the dataset
info = medmnist.INFO[DATASET_NAME]
# Get the dataset
prepared_dataset = download_and_prepare_dataset(info)
(train_videos, train_labels) = prepared_dataset[0]
(valid_videos, valid_labels) = prepared_dataset[1]
(test_videos, test_labels) = prepared_dataset[2]<jupyter_output><empty_output><jupyter_text>`tf.data` pipeline<jupyter_code>def preprocess(frames: tf.Tensor, label: tf.Tensor):
"""Preprocess the frames tensors and parse the labels."""
# Preprocess images
frames = tf.image.convert_image_dtype(
frames[
..., tf.newaxis
], # The new axis is to help for further processing with Conv3D layers
tf.float32,
)
# Parse label
label = tf.cast(label, tf.float32)
return frames, label
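# Quick shape check (illustrative, not part of the original example): a single
# 28x28x28 volume gains a trailing channel axis and its label becomes a float.
_sample_frames, _sample_label = preprocess(train_videos[0], train_labels[0])
print(_sample_frames.shape, _sample_label.dtype)  # (28, 28, 28, 1) float32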
def prepare_dataloader(
videos: np.ndarray,
labels: np.ndarray,
loader_type: str = "train",
batch_size: int = BATCH_SIZE,
):
"""Utility function to prepare the dataloader."""
dataset = tf.data.Dataset.from_tensor_slices((videos, labels))
if loader_type == "train":
dataset = dataset.shuffle(BATCH_SIZE * 2)
dataloader = (
dataset.map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)
.batch(batch_size)
.prefetch(tf.data.AUTOTUNE)
)
return dataloader
trainloader = prepare_dataloader(train_videos, train_labels, "train")
validloader = prepare_dataloader(valid_videos, valid_labels, "valid")
testloader = prepare_dataloader(test_videos, test_labels, "test")<jupyter_output><empty_output><jupyter_text>Tubelet EmbeddingIn ViTs, an image is divided into patches, which are then spatiallyflattened, a process known as tokenization. For a video, one canrepeat this process for individual frames. **Uniform frame sampling**as suggested by the authors is a tokenization scheme in which wesample frames from the video clip and perform simple ViT tokenization.| || :--: || Uniform Frame Sampling [Source](https://arxiv.org/abs/2103.15691) |**Tubelet Embedding** is different in terms of capturing temporalinformation from the video.First, we extract volumes from the video -- these volumes containpatches of the frame and the temporal information as well. The volumesare then flattened to build video tokens.| || :--: || Tubelet Embedding [Source](https://arxiv.org/abs/2103.15691) |<jupyter_code>class TubeletEmbedding(layers.Layer):
def __init__(self, embed_dim, patch_size, **kwargs):
super().__init__(**kwargs)
self.projection = layers.Conv3D(
filters=embed_dim,
kernel_size=patch_size,
strides=patch_size,
padding="VALID",
)
self.flatten = layers.Reshape(target_shape=(-1, embed_dim))
def call(self, videos):
projected_patches = self.projection(videos)
flattened_patches = self.flatten(projected_patches)
return flattened_patches<jupyter_output><empty_output><jupyter_text>Positional EmbeddingThis layer adds positional information to the encoded video tokens.<jupyter_code>class PositionalEncoder(layers.Layer):
def __init__(self, embed_dim, **kwargs):
super().__init__(**kwargs)
self.embed_dim = embed_dim
def build(self, input_shape):
_, num_tokens, _ = input_shape
self.position_embedding = layers.Embedding(
input_dim=num_tokens, output_dim=self.embed_dim
)
self.positions = ops.arange(0, num_tokens, 1)
def call(self, encoded_tokens):
# Encode the positions and add it to the encoded tokens
encoded_positions = self.position_embedding(self.positions)
encoded_tokens = encoded_tokens + encoded_positions
return encoded_tokens<jupyter_output><empty_output><jupyter_text>Video Vision TransformerThe authors suggest 4 variants of Vision Transformer:- Spatio-temporal attention- Factorized encoder- Factorized self-attention- Factorized dot-product attentionIn this example, we will implement the **Spatio-temporal attention**model for simplicity. The following code snippet is heavily inspired from[Image classification with Vision Transformer](https://keras.io/examples/vision/image_classification_with_vision_transformer/).One can also refer to the[official repository of ViViT](https://github.com/google-research/scenic/tree/main/scenic/projects/vivit)which contains all the variants, implemented in JAX.<jupyter_code>def create_vivit_classifier(
tubelet_embedder,
positional_encoder,
input_shape=INPUT_SHAPE,
transformer_layers=NUM_LAYERS,
num_heads=NUM_HEADS,
embed_dim=PROJECTION_DIM,
layer_norm_eps=LAYER_NORM_EPS,
num_classes=NUM_CLASSES,
):
# Get the input layer
inputs = layers.Input(shape=input_shape)
# Create patches.
patches = tubelet_embedder(inputs)
# Encode patches.
encoded_patches = positional_encoder(patches)
# Create multiple layers of the Transformer block.
for _ in range(transformer_layers):
# Layer normalization and MHSA
x1 = layers.LayerNormalization(epsilon=1e-6)(encoded_patches)
attention_output = layers.MultiHeadAttention(
num_heads=num_heads, key_dim=embed_dim // num_heads, dropout=0.1
)(x1, x1)
# Skip connection
x2 = layers.Add()([attention_output, encoded_patches])
# Layer Normalization and MLP
x3 = layers.LayerNormalization(epsilon=1e-6)(x2)
x3 = keras.Sequential(
[
layers.Dense(units=embed_dim * 4, activation=ops.gelu),
layers.Dense(units=embed_dim, activation=ops.gelu),
]
)(x3)
# Skip connection
encoded_patches = layers.Add()([x3, x2])
# Layer normalization and Global average pooling.
representation = layers.LayerNormalization(epsilon=layer_norm_eps)(encoded_patches)
representation = layers.GlobalAvgPool1D()(representation)
# Classify outputs.
outputs = layers.Dense(units=num_classes, activation="softmax")(representation)
# Create the Keras model.
model = keras.Model(inputs=inputs, outputs=outputs)
return model<jupyter_output><empty_output><jupyter_text>Train<jupyter_code>def run_experiment():
# Initialize model
model = create_vivit_classifier(
tubelet_embedder=TubeletEmbedding(
embed_dim=PROJECTION_DIM, patch_size=PATCH_SIZE
),
positional_encoder=PositionalEncoder(embed_dim=PROJECTION_DIM),
)
# Compile the model with the optimizer, loss function
# and the metrics.
optimizer = keras.optimizers.Adam(learning_rate=LEARNING_RATE)
model.compile(
optimizer=optimizer,
loss="sparse_categorical_crossentropy",
metrics=[
keras.metrics.SparseCategoricalAccuracy(name="accuracy"),
keras.metrics.SparseTopKCategoricalAccuracy(5, name="top-5-accuracy"),
],
)
# Train the model.
_ = model.fit(trainloader, epochs=EPOCHS, validation_data=validloader)
_, accuracy, top_5_accuracy = model.evaluate(testloader)
print(f"Test accuracy: {round(accuracy * 100, 2)}%")
print(f"Test top 5 accuracy: {round(top_5_accuracy * 100, 2)}%")
return model
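# Optional sanity check (illustrative, not part of the original example): with the
# 28x28x28 input and 8x8x8 tubelets defined above, the tubelet embedding produces
# a 3x3x3 grid of volumes, i.e. 27 tokens of dimension PROJECTION_DIM.
_demo_embedding = TubeletEmbedding(embed_dim=PROJECTION_DIM, patch_size=PATCH_SIZE)
print(_demo_embedding(ops.zeros((1, *INPUT_SHAPE))).shape)  # (1, 27, 128)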
model = run_experiment()<jupyter_output><empty_output><jupyter_text>Inference<jupyter_code>NUM_SAMPLES_VIZ = 25
testsamples, labels = next(iter(testloader))
testsamples, labels = testsamples[:NUM_SAMPLES_VIZ], labels[:NUM_SAMPLES_VIZ]
ground_truths = []
preds = []
videos = []
for i, (testsample, label) in enumerate(zip(testsamples, labels)):
# Generate gif
testsample = np.reshape(testsample.numpy(), (-1, 28, 28))
with io.BytesIO() as gif:
imageio.mimsave(gif, (testsample * 255).astype("uint8"), "GIF", fps=5)
videos.append(gif.getvalue())
# Get model prediction
output = model.predict(ops.expand_dims(testsample, axis=0))[0]
pred = np.argmax(output, axis=0)
ground_truths.append(label.numpy().astype("int"))
preds.append(pred)
def make_box_for_grid(image_widget, fit):
"""Make a VBox to hold caption/image for demonstrating option_fit values.
Source: https://ipywidgets.readthedocs.io/en/latest/examples/Widget%20Styling.html
"""
# Make the caption
if fit is not None:
fit_str = "'{}'".format(fit)
else:
fit_str = str(fit)
h = ipywidgets.HTML(value="" + str(fit_str) + "")
# Make the green box with the image widget inside it
boxb = ipywidgets.widgets.Box()
boxb.children = [image_widget]
# Compose into a vertical box
vb = ipywidgets.widgets.VBox()
vb.layout.align_items = "center"
vb.children = [h, boxb]
return vb
boxes = []
for i in range(NUM_SAMPLES_VIZ):
ib = ipywidgets.widgets.Image(value=videos[i], width=100, height=100)
true_class = info["label"][str(ground_truths[i])]
pred_class = info["label"][str(preds[i])]
caption = f"T: {true_class} | P: {pred_class}"
boxes.append(make_box_for_grid(ib, caption))
ipywidgets.widgets.GridBox(
boxes, layout=ipywidgets.widgets.Layout(grid_template_columns="repeat(5, 200px)")
)<jupyter_output><empty_output>
# OCR model for reading Captchas
**Author:** [A_K_Nain](https://twitter.com/A_K_Nain)<br>
**Date created:** 2020/06/14<br>
**Last modified:** 2020/06/26<br>
**Description:** How to implement an OCR model using CNNs, RNNs and CTC loss.
<img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/vision/ipynb/captcha_ocr.ipynb) <span class="k-dot">β’</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/examples/vision/captcha_ocr.py)
---
## Introduction
This example demonstrates a simple OCR model built with the Functional API. Apart from
combining CNN and RNN, it also illustrates how you can instantiate a new layer
and use it as an "Endpoint layer" for implementing CTC loss. For a detailed
guide to layer subclassing, please check out
[this page](https://keras.io/guides/making_new_layers_and_models_via_subclassing/)
in the developer guides.
---
## Setup
```python
import os
os.environ["KERAS_BACKEND"] = "tensorflow"
import os
import numpy as np
import matplotlib.pyplot as plt
from pathlib import Path
from collections import Counter
import tensorflow as tf
import keras
from keras import layers
```
---
## Load the data: [Captcha Images](https://www.kaggle.com/fournierp/captcha-version-2-images)
Let's download the data.
```python
!curl -LO https://github.com/AakashKumarNain/CaptchaCracker/raw/master/captcha_images_v2.zip
!unzip -qq captcha_images_v2.zip
```
<div class="k-default-codeblock">
```
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 8863k 100 8863k 0 0 11.9M 0 --:--:-- --:--:-- --:--:-- 141M
```
</div>
The dataset contains 1040 captcha files as `png` images. The label for each sample is a string,
the name of the file (minus the file extension).
We will map each character in the string to an integer for training the model. Similarly,
we will need to map the predictions of the model back to strings. For this purpose
we will maintain two dictionaries, mapping characters to integers, and integers to characters,
respectively.
```python
# Path to the data directory
data_dir = Path("./captcha_images_v2/")
# Get list of all the images
images = sorted(list(map(str, list(data_dir.glob("*.png")))))
labels = [img.split(os.path.sep)[-1].split(".png")[0] for img in images]
characters = set(char for label in labels for char in label)
characters = sorted(list(characters))
print("Number of images found: ", len(images))
print("Number of labels found: ", len(labels))
print("Number of unique characters: ", len(characters))
print("Characters present: ", characters)
# Batch size for training and validation
batch_size = 16
# Desired image dimensions
img_width = 200
img_height = 50
# Factor by which the image is going to be downsampled
# by the convolutional blocks. We will be using two
# convolution blocks and each block will have
# a pooling layer which downsamples the features by a factor of 2.
# Hence total downsampling factor would be 4.
downsample_factor = 4
# Maximum length of any captcha in the dataset
max_length = max([len(label) for label in labels])
```
<div class="k-default-codeblock">
```
Number of images found: 1040
Number of labels found: 1040
Number of unique characters: 19
Characters present: ['2', '3', '4', '5', '6', '7', '8', 'b', 'c', 'd', 'e', 'f', 'g', 'm', 'n', 'p', 'w', 'x', 'y']
```
</div>
---
## Preprocessing
```python
# Mapping characters to integers
char_to_num = layers.StringLookup(vocabulary=list(characters), mask_token=None)
# Mapping integers back to original characters
num_to_char = layers.StringLookup(
vocabulary=char_to_num.get_vocabulary(), mask_token=None, invert=True
)
def split_data(images, labels, train_size=0.9, shuffle=True):
# 1. Get the total size of the dataset
size = len(images)
# 2. Make an indices array and shuffle it, if required
indices = np.arange(size)
if shuffle:
np.random.shuffle(indices)
# 3. Get the size of training samples
train_samples = int(size * train_size)
# 4. Split data into training and validation sets
x_train, y_train = images[indices[:train_samples]], labels[indices[:train_samples]]
x_valid, y_valid = images[indices[train_samples:]], labels[indices[train_samples:]]
return x_train, x_valid, y_train, y_valid
# Splitting data into training and validation sets
x_train, x_valid, y_train, y_valid = split_data(np.array(images), np.array(labels))
def encode_single_sample(img_path, label):
# 1. Read image
img = tf.io.read_file(img_path)
# 2. Decode and convert to grayscale
img = tf.io.decode_png(img, channels=1)
# 3. Convert to float32 in [0, 1] range
img = tf.image.convert_image_dtype(img, tf.float32)
# 4. Resize to the desired size
img = tf.image.resize(img, [img_height, img_width])
# 5. Transpose the image because we want the time
# dimension to correspond to the width of the image.
img = tf.transpose(img, perm=[1, 0, 2])
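    #    (After this transpose the tensor has shape (img_width, img_height, 1),
    #    i.e. (200, 50, 1): each image column becomes one RNN time step.)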
# 6. Map the characters in label to numbers
label = char_to_num(tf.strings.unicode_split(label, input_encoding="UTF-8"))
# 7. Return a dict as our model is expecting two inputs
return {"image": img, "label": label}
```
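As a quick sanity check (not part of the original example), we can round-trip one
label through the two lookup layers defined above and confirm that we recover the
original string:
```python
sample_label = labels[0]
encoded_label = char_to_num(tf.strings.unicode_split(sample_label, input_encoding="UTF-8"))
decoded_label = tf.strings.reduce_join(num_to_char(encoded_label)).numpy().decode("utf-8")
print(sample_label, encoded_label.numpy(), decoded_label)
```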
---
## Create `Dataset` objects
```python
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = (
train_dataset.map(encode_single_sample, num_parallel_calls=tf.data.AUTOTUNE)
.batch(batch_size)
.prefetch(buffer_size=tf.data.AUTOTUNE)
)
validation_dataset = tf.data.Dataset.from_tensor_slices((x_valid, y_valid))
validation_dataset = (
validation_dataset.map(encode_single_sample, num_parallel_calls=tf.data.AUTOTUNE)
.batch(batch_size)
.prefetch(buffer_size=tf.data.AUTOTUNE)
)
```
---
## Visualize the data
```python
_, ax = plt.subplots(4, 4, figsize=(10, 5))
for batch in train_dataset.take(1):
images = batch["image"]
labels = batch["label"]
for i in range(16):
img = (images[i] * 255).numpy().astype("uint8")
label = tf.strings.reduce_join(num_to_char(labels[i])).numpy().decode("utf-8")
ax[i // 4, i % 4].imshow(img[:, :, 0].T, cmap="gray")
ax[i // 4, i % 4].set_title(label)
ax[i // 4, i % 4].axis("off")
plt.show()
```
![png](/img/examples/vision/captcha_ocr/captcha_ocr_13_0.png)
---
## Model
```python
def ctc_batch_cost(y_true, y_pred, input_length, label_length):
label_length = tf.cast(tf.squeeze(label_length, axis=-1), tf.int32)
input_length = tf.cast(tf.squeeze(input_length, axis=-1), tf.int32)
sparse_labels = tf.cast(ctc_label_dense_to_sparse(y_true, label_length), tf.int32)
y_pred = tf.math.log(tf.transpose(y_pred, perm=[1, 0, 2]) + keras.backend.epsilon())
return tf.expand_dims(
tf.compat.v1.nn.ctc_loss(
inputs=y_pred, labels=sparse_labels, sequence_length=input_length
),
1,
)
def ctc_label_dense_to_sparse(labels, label_lengths):
label_shape = tf.shape(labels)
num_batches_tns = tf.stack([label_shape[0]])
max_num_labels_tns = tf.stack([label_shape[1]])
def range_less_than(old_input, current_input):
return tf.expand_dims(tf.range(tf.shape(old_input)[1]), 0) < tf.fill(
max_num_labels_tns, current_input
)
init = tf.cast(tf.fill([1, label_shape[1]], 0), tf.bool)
dense_mask = tf.compat.v1.scan(
range_less_than, label_lengths, initializer=init, parallel_iterations=1
)
dense_mask = dense_mask[:, 0, :]
label_array = tf.reshape(
tf.tile(tf.range(0, label_shape[1]), num_batches_tns), label_shape
)
label_ind = tf.compat.v1.boolean_mask(label_array, dense_mask)
batch_array = tf.transpose(
tf.reshape(
tf.tile(tf.range(0, label_shape[0]), max_num_labels_tns),
tf.reverse(label_shape, [0]),
)
)
batch_ind = tf.compat.v1.boolean_mask(batch_array, dense_mask)
indices = tf.transpose(
tf.reshape(tf.concat([batch_ind, label_ind], axis=0), [2, -1])
)
vals_sparse = tf.compat.v1.gather_nd(labels, indices)
return tf.SparseTensor(
tf.cast(indices, tf.int64), vals_sparse, tf.cast(label_shape, tf.int64)
)
class CTCLayer(layers.Layer):
def __init__(self, name=None):
super().__init__(name=name)
self.loss_fn = ctc_batch_cost
def call(self, y_true, y_pred):
# Compute the training-time loss value and add it
# to the layer using `self.add_loss()`.
batch_len = tf.cast(tf.shape(y_true)[0], dtype="int64")
input_length = tf.cast(tf.shape(y_pred)[1], dtype="int64")
label_length = tf.cast(tf.shape(y_true)[1], dtype="int64")
input_length = input_length * tf.ones(shape=(batch_len, 1), dtype="int64")
label_length = label_length * tf.ones(shape=(batch_len, 1), dtype="int64")
loss = self.loss_fn(y_true, y_pred, input_length, label_length)
self.add_loss(loss)
# At test time, just return the computed predictions
return y_pred
def build_model():
# Inputs to the model
input_img = layers.Input(
shape=(img_width, img_height, 1), name="image", dtype="float32"
)
labels = layers.Input(name="label", shape=(None,), dtype="float32")
# First conv block
x = layers.Conv2D(
32,
(3, 3),
activation="relu",
kernel_initializer="he_normal",
padding="same",
name="Conv1",
)(input_img)
x = layers.MaxPooling2D((2, 2), name="pool1")(x)
# Second conv block
x = layers.Conv2D(
64,
(3, 3),
activation="relu",
kernel_initializer="he_normal",
padding="same",
name="Conv2",
)(x)
x = layers.MaxPooling2D((2, 2), name="pool2")(x)
# We have used two max pool with pool size and strides 2.
# Hence, downsampled feature maps are 4x smaller. The number of
# filters in the last layer is 64. Reshape accordingly before
# passing the output to the RNN part of the model
new_shape = ((img_width // 4), (img_height // 4) * 64)
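    # (With img_width=200 and img_height=50, new_shape is (50, 12 * 64) = (50, 768):
    # 50 time steps, each with 768 features.)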
x = layers.Reshape(target_shape=new_shape, name="reshape")(x)
x = layers.Dense(64, activation="relu", name="dense1")(x)
x = layers.Dropout(0.2)(x)
# RNNs
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True, dropout=0.25))(x)
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True, dropout=0.25))(x)
# Output layer
x = layers.Dense(
len(char_to_num.get_vocabulary()) + 1, activation="softmax", name="dense2"
)(x)
# Add CTC layer for calculating CTC loss at each step
output = CTCLayer(name="ctc_loss")(labels, x)
# Define the model
model = keras.models.Model(
inputs=[input_img, labels], outputs=output, name="ocr_model_v1"
)
# Optimizer
opt = keras.optimizers.Adam()
# Compile the model and return
model.compile(optimizer=opt)
return model
# Get the model
model = build_model()
model.summary()
```
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold">Model: "ocr_model_v1"</span>
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace">βββββββββββββββββββββββ³ββββββββββββββββββββ³ββββββββββ³βββββββββββββββββββββββ
β<span style="font-weight: bold"> Layer (type) </span>β<span style="font-weight: bold"> Output Shape </span>β<span style="font-weight: bold"> Param # </span>β<span style="font-weight: bold"> Connected to </span>β
β‘βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ©
β image (<span style="color: #0087ff; text-decoration-color: #0087ff">InputLayer</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">200</span>, <span style="color: #00af00; text-decoration-color: #00af00">50</span>, β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β - β
β β <span style="color: #00af00; text-decoration-color: #00af00">1</span>) β β β
βββββββββββββββββββββββΌββββββββββββββββββββΌββββββββββΌβββββββββββββββββββββββ€
β Conv1 (<span style="color: #0087ff; text-decoration-color: #0087ff">Conv2D</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">200</span>, <span style="color: #00af00; text-decoration-color: #00af00">50</span>, β <span style="color: #00af00; text-decoration-color: #00af00">320</span> β image[<span style="color: #00af00; text-decoration-color: #00af00">0</span>][<span style="color: #00af00; text-decoration-color: #00af00">0</span>] β
β β <span style="color: #00af00; text-decoration-color: #00af00">32</span>) β β β
βββββββββββββββββββββββΌββββββββββββββββββββΌββββββββββΌβββββββββββββββββββββββ€
β pool1 β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">100</span>, <span style="color: #00af00; text-decoration-color: #00af00">25</span>, β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β Conv1[<span style="color: #00af00; text-decoration-color: #00af00">0</span>][<span style="color: #00af00; text-decoration-color: #00af00">0</span>] β
β (<span style="color: #0087ff; text-decoration-color: #0087ff">MaxPooling2D</span>) β <span style="color: #00af00; text-decoration-color: #00af00">32</span>) β β β
βββββββββββββββββββββββΌββββββββββββββββββββΌββββββββββΌβββββββββββββββββββββββ€
β Conv2 (<span style="color: #0087ff; text-decoration-color: #0087ff">Conv2D</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">100</span>, <span style="color: #00af00; text-decoration-color: #00af00">25</span>, β <span style="color: #00af00; text-decoration-color: #00af00">18,496</span> β pool1[<span style="color: #00af00; text-decoration-color: #00af00">0</span>][<span style="color: #00af00; text-decoration-color: #00af00">0</span>] β
β β <span style="color: #00af00; text-decoration-color: #00af00">64</span>) β β β
βββββββββββββββββββββββΌββββββββββββββββββββΌββββββββββΌβββββββββββββββββββββββ€
β pool2 β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">50</span>, <span style="color: #00af00; text-decoration-color: #00af00">12</span>, β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β Conv2[<span style="color: #00af00; text-decoration-color: #00af00">0</span>][<span style="color: #00af00; text-decoration-color: #00af00">0</span>] β
β (<span style="color: #0087ff; text-decoration-color: #0087ff">MaxPooling2D</span>) β <span style="color: #00af00; text-decoration-color: #00af00">64</span>) β β β
βββββββββββββββββββββββΌββββββββββββββββββββΌββββββββββΌβββββββββββββββββββββββ€
β reshape (<span style="color: #0087ff; text-decoration-color: #0087ff">Reshape</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">50</span>, <span style="color: #00af00; text-decoration-color: #00af00">768</span>) β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β pool2[<span style="color: #00af00; text-decoration-color: #00af00">0</span>][<span style="color: #00af00; text-decoration-color: #00af00">0</span>] β
βββββββββββββββββββββββΌββββββββββββββββββββΌββββββββββΌβββββββββββββββββββββββ€
β dense1 (<span style="color: #0087ff; text-decoration-color: #0087ff">Dense</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">50</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) β <span style="color: #00af00; text-decoration-color: #00af00">49,216</span> β reshape[<span style="color: #00af00; text-decoration-color: #00af00">0</span>][<span style="color: #00af00; text-decoration-color: #00af00">0</span>] β
βββββββββββββββββββββββΌββββββββββββββββββββΌββββββββββΌβββββββββββββββββββββββ€
β dropout (<span style="color: #0087ff; text-decoration-color: #0087ff">Dropout</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">50</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β dense1[<span style="color: #00af00; text-decoration-color: #00af00">0</span>][<span style="color: #00af00; text-decoration-color: #00af00">0</span>] β
βββββββββββββββββββββββΌββββββββββββββββββββΌββββββββββΌβββββββββββββββββββββββ€
β bidirectional β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">50</span>, <span style="color: #00af00; text-decoration-color: #00af00">256</span>) β <span style="color: #00af00; text-decoration-color: #00af00">197,632</span> β dropout[<span style="color: #00af00; text-decoration-color: #00af00">0</span>][<span style="color: #00af00; text-decoration-color: #00af00">0</span>] β
β (<span style="color: #0087ff; text-decoration-color: #0087ff">Bidirectional</span>) β β β β
βββββββββββββββββββββββΌββββββββββββββββββββΌββββββββββΌβββββββββββββββββββββββ€
β bidirectional_1 β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">50</span>, <span style="color: #00af00; text-decoration-color: #00af00">128</span>) β <span style="color: #00af00; text-decoration-color: #00af00">164,352</span> β bidirectional[<span style="color: #00af00; text-decoration-color: #00af00">0</span>][<span style="color: #00af00; text-decoration-color: #00af00">0</span>] β
β (<span style="color: #0087ff; text-decoration-color: #0087ff">Bidirectional</span>) β β β β
βββββββββββββββββββββββΌββββββββββββββββββββΌββββββββββΌβββββββββββββββββββββββ€
β label (<span style="color: #0087ff; text-decoration-color: #0087ff">InputLayer</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>) β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β - β
βββββββββββββββββββββββΌββββββββββββββββββββΌββββββββββΌβββββββββββββββββββββββ€
β dense2 (<span style="color: #0087ff; text-decoration-color: #0087ff">Dense</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">50</span>, <span style="color: #00af00; text-decoration-color: #00af00">21</span>) β <span style="color: #00af00; text-decoration-color: #00af00">2,709</span> β bidirectional_1[<span style="color: #00af00; text-decoration-color: #00af00">0</span>][<span style="color: #00af00; text-decoration-color: #00af00">β¦</span> β
βββββββββββββββββββββββΌββββββββββββββββββββΌββββββββββΌβββββββββββββββββββββββ€
β ctc_loss (<span style="color: #0087ff; text-decoration-color: #0087ff">CTCLayer</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">50</span>, <span style="color: #00af00; text-decoration-color: #00af00">21</span>) β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β label[<span style="color: #00af00; text-decoration-color: #00af00">0</span>][<span style="color: #00af00; text-decoration-color: #00af00">0</span>], β
β β β β dense2[<span style="color: #00af00; text-decoration-color: #00af00">0</span>][<span style="color: #00af00; text-decoration-color: #00af00">0</span>] β
βββββββββββββββββββββββ΄ββββββββββββββββββββ΄ββββββββββ΄βββββββββββββββββββββββ
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Total params: </span><span style="color: #00af00; text-decoration-color: #00af00">432,725</span> (1.65 MB)
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Trainable params: </span><span style="color: #00af00; text-decoration-color: #00af00">432,725</span> (1.65 MB)
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Non-trainable params: </span><span style="color: #00af00; text-decoration-color: #00af00">0</span> (0.00 B)
</pre>
---
## Training
```python
# TODO restore epoch count.
epochs = 100
early_stopping_patience = 10
# Add early stopping
early_stopping = keras.callbacks.EarlyStopping(
monitor="val_loss", patience=early_stopping_patience, restore_best_weights=True
)
# Train the model
history = model.fit(
train_dataset,
validation_data=validation_dataset,
epochs=epochs,
callbacks=[early_stopping],
)
```
<div class="k-default-codeblock">
```
Epoch 1/100
59/59 ββββββββββββββββββββ 22s 229ms/step - loss: 35.8756 - val_loss: 16.3966
Epoch 2/100
59/59 ββββββββββββββββββββ 14s 235ms/step - loss: 16.4092 - val_loss: 16.3648
Epoch 3/100
59/59 ββββββββββββββββββββ 13s 224ms/step - loss: 16.3922 - val_loss: 16.3571
Epoch 4/100
59/59 ββββββββββββββββββββ 13s 218ms/step - loss: 16.3749 - val_loss: 16.3602
Epoch 5/100
59/59 ββββββββββββββββββββ 20s 210ms/step - loss: 16.3756 - val_loss: 16.3513
Epoch 6/100
59/59 ββββββββββββββββββββ 14s 236ms/step - loss: 16.3737 - val_loss: 16.3466
Epoch 7/100
59/59 ββββββββββββββββββββ 13s 227ms/step - loss: 16.3591 - val_loss: 16.3479
Epoch 8/100
59/59 ββββββββββββββββββββ 13s 219ms/step - loss: 16.3505 - val_loss: 16.3436
Epoch 9/100
59/59 ββββββββββββββββββββ 13s 213ms/step - loss: 16.3440 - val_loss: 16.3386
Epoch 10/100
59/59 ββββββββββββββββββββ 13s 226ms/step - loss: 16.3312 - val_loss: 16.3066
Epoch 11/100
59/59 ββββββββββββββββββββ 13s 224ms/step - loss: 16.3077 - val_loss: 16.3288
Epoch 12/100
59/59 ββββββββββββββββββββ 13s 226ms/step - loss: 16.2746 - val_loss: 16.2750
Epoch 13/100
59/59 ββββββββββββββββββββ 13s 214ms/step - loss: 16.1853 - val_loss: 16.1606
Epoch 14/100
59/59 ββββββββββββββββββββ 21s 229ms/step - loss: 16.0636 - val_loss: 16.1616
Epoch 15/100
59/59 ββββββββββββββββββββ 13s 223ms/step - loss: 15.9873 - val_loss: 16.0928
Epoch 16/100
59/59 ββββββββββββββββββββ 13s 224ms/step - loss: 15.9339 - val_loss: 16.0070
Epoch 17/100
59/59 ββββββββββββββββββββ 13s 213ms/step - loss: 15.8379 - val_loss: 15.8443
Epoch 18/100
59/59 ββββββββββββββββββββ 13s 212ms/step - loss: 15.7156 - val_loss: 15.6414
Epoch 19/100
59/59 ββββββββββββββββββββ 21s 224ms/step - loss: 15.5618 - val_loss: 15.5937
Epoch 20/100
59/59 ββββββββββββββββββββ 20s 219ms/step - loss: 15.4386 - val_loss: 15.4481
Epoch 21/100
59/59 ββββββββββββββββββββ 13s 215ms/step - loss: 15.2270 - val_loss: 15.4191
Epoch 22/100
59/59 ββββββββββββββββββββ 14s 229ms/step - loss: 15.0565 - val_loss: 15.1226
Epoch 23/100
59/59 ββββββββββββββββββββ 13s 226ms/step - loss: 14.8641 - val_loss: 14.9598
Epoch 24/100
59/59 ββββββββββββββββββββ 13s 225ms/step - loss: 14.6488 - val_loss: 14.7074
Epoch 25/100
59/59 ββββββββββββββββββββ 20s 213ms/step - loss: 14.3843 - val_loss: 14.4713
Epoch 26/100
59/59 ββββββββββββββββββββ 13s 224ms/step - loss: 14.1244 - val_loss: 14.0645
Epoch 27/100
59/59 ββββββββββββββββββββ 13s 218ms/step - loss: 13.8279 - val_loss: 13.7670
Epoch 28/100
59/59 ββββββββββββββββββββ 20s 218ms/step - loss: 13.4959 - val_loss: 13.5277
Epoch 29/100
59/59 ββββββββββββββββββββ 12s 206ms/step - loss: 13.2192 - val_loss: 13.2536
Epoch 30/100
59/59 ββββββββββββββββββββ 23s 248ms/step - loss: 12.9255 - val_loss: 12.8277
Epoch 31/100
59/59 ββββββββββββββββββββ 19s 220ms/step - loss: 12.5599 - val_loss: 12.6968
Epoch 32/100
59/59 ββββββββββββββββββββ 12s 207ms/step - loss: 12.2893 - val_loss: 12.3682
Epoch 33/100
59/59 ββββββββββββββββββββ 12s 205ms/step - loss: 11.8148 - val_loss: 11.7916
Epoch 34/100
59/59 ββββββββββββββββββββ 21s 215ms/step - loss: 11.3895 - val_loss: 11.6033
Epoch 35/100
59/59 ββββββββββββββββββββ 13s 216ms/step - loss: 11.0912 - val_loss: 11.1269
Epoch 36/100
59/59 ββββββββββββββββββββ 12s 206ms/step - loss: 10.7124 - val_loss: 10.8567
Epoch 37/100
59/59 ββββββββββββββββββββ 12s 203ms/step - loss: 10.2611 - val_loss: 10.5215
Epoch 38/100
59/59 ββββββββββββββββββββ 13s 220ms/step - loss: 9.9407 - val_loss: 10.2151
Epoch 39/100
59/59 ββββββββββββββββββββ 13s 213ms/step - loss: 9.5958 - val_loss: 9.6870
Epoch 40/100
59/59 ββββββββββββββββββββ 20s 208ms/step - loss: 9.2352 - val_loss: 9.2340
Epoch 41/100
59/59 ββββββββββββββββββββ 12s 202ms/step - loss: 8.7480 - val_loss: 8.9227
Epoch 42/100
59/59 ββββββββββββββββββββ 13s 218ms/step - loss: 8.2937 - val_loss: 8.7348
Epoch 43/100
59/59 ββββββββββββββββββββ 13s 214ms/step - loss: 8.0500 - val_loss: 8.3136
Epoch 44/100
59/59 ββββββββββββββββββββ 13s 213ms/step - loss: 7.7643 - val_loss: 7.9847
Epoch 45/100
59/59 ββββββββββββββββββββ 12s 207ms/step - loss: 7.2927 - val_loss: 7.9830
Epoch 46/100
59/59 ββββββββββββββββββββ 12s 200ms/step - loss: 7.0159 - val_loss: 7.4162
Epoch 47/100
59/59 ββββββββββββββββββββ 13s 217ms/step - loss: 6.8198 - val_loss: 7.1488
Epoch 48/100
59/59 ββββββββββββββββββββ 13s 213ms/step - loss: 6.4661 - val_loss: 7.0038
Epoch 49/100
59/59 ββββββββββββββββββββ 20s 210ms/step - loss: 6.1844 - val_loss: 6.7504
Epoch 50/100
59/59 ββββββββββββββββββββ 20s 201ms/step - loss: 5.8523 - val_loss: 6.5577
Epoch 51/100
59/59 ββββββββββββββββββββ 13s 225ms/step - loss: 5.7405 - val_loss: 6.4001
Epoch 52/100
59/59 ββββββββββββββββββββ 20s 215ms/step - loss: 5.3831 - val_loss: 6.3826
Epoch 53/100
59/59 ββββββββββββββββββββ 12s 202ms/step - loss: 5.1238 - val_loss: 6.0649
Epoch 54/100
59/59 ββββββββββββββββββββ 21s 218ms/step - loss: 4.9646 - val_loss: 5.8397
Epoch 55/100
59/59 ββββββββββββββββββββ 20s 213ms/step - loss: 4.7486 - val_loss: 5.7926
Epoch 56/100
59/59 ββββββββββββββββββββ 12s 206ms/step - loss: 4.4270 - val_loss: 5.7480
Epoch 57/100
59/59 ββββββββββββββββββββ 12s 199ms/step - loss: 4.3954 - val_loss: 5.7311
Epoch 58/100
59/59 ββββββββββββββββββββ 12s 205ms/step - loss: 4.2907 - val_loss: 5.6178
Epoch 59/100
59/59 ββββββββββββββββββββ 21s 211ms/step - loss: 4.0034 - val_loss: 5.3565
Epoch 60/100
59/59 ββββββββββββββββββββ 12s 208ms/step - loss: 3.7862 - val_loss: 5.3226
Epoch 61/100
59/59 ββββββββββββββββββββ 12s 198ms/step - loss: 3.7867 - val_loss: 5.1675
Epoch 62/100
59/59 ββββββββββββββββββββ 12s 198ms/step - loss: 3.3635 - val_loss: 4.9778
Epoch 63/100
59/59 ββββββββββββββββββββ 13s 223ms/step - loss: 3.3120 - val_loss: 5.0680
Epoch 64/100
59/59 ββββββββββββββββββββ 13s 213ms/step - loss: 3.2816 - val_loss: 4.9794
Epoch 65/100
59/59 ββββββββββββββββββββ 12s 209ms/step - loss: 3.1493 - val_loss: 4.9307
Epoch 66/100
59/59 ββββββββββββββββββββ 12s 199ms/step - loss: 2.8954 - val_loss: 4.6848
Epoch 67/100
59/59 ββββββββββββββββββββ 12s 200ms/step - loss: 2.9579 - val_loss: 4.7673
Epoch 68/100
59/59 ββββββββββββββββββββ 13s 224ms/step - loss: 2.8408 - val_loss: 4.7547
Epoch 69/100
59/59 ββββββββββββββββββββ 13s 212ms/step - loss: 2.5937 - val_loss: 4.6363
Epoch 70/100
59/59 ββββββββββββββββββββ 12s 206ms/step - loss: 2.5928 - val_loss: 4.6453
Epoch 71/100
59/59 ββββββββββββββββββββ 12s 198ms/step - loss: 2.5662 - val_loss: 4.6460
Epoch 72/100
59/59 ββββββββββββββββββββ 15s 249ms/step - loss: 2.5619 - val_loss: 4.7042
Epoch 73/100
59/59 ββββββββββββββββββββ 18s 211ms/step - loss: 2.3146 - val_loss: 4.5853
Epoch 74/100
59/59 ββββββββββββββββββββ 12s 210ms/step - loss: 2.1848 - val_loss: 4.5865
Epoch 75/100
59/59 ββββββββββββββββββββ 20s 199ms/step - loss: 2.1284 - val_loss: 4.6487
Epoch 76/100
59/59 ββββββββββββββββββββ 13s 218ms/step - loss: 2.0072 - val_loss: 4.5793
Epoch 77/100
59/59 ββββββββββββββββββββ 12s 209ms/step - loss: 1.8963 - val_loss: 4.6183
Epoch 78/100
59/59 ββββββββββββββββββββ 12s 211ms/step - loss: 1.7980 - val_loss: 4.7451
Epoch 79/100
59/59 ββββββββββββββββββββ 12s 198ms/step - loss: 1.7276 - val_loss: 4.6344
Epoch 80/100
59/59 ββββββββββββββββββββ 12s 200ms/step - loss: 1.7558 - val_loss: 4.5365
Epoch 81/100
59/59 ββββββββββββββββββββ 13s 221ms/step - loss: 1.6611 - val_loss: 4.4597
Epoch 82/100
59/59 ββββββββββββββββββββ 12s 209ms/step - loss: 1.6337 - val_loss: 4.5162
Epoch 83/100
59/59 ββββββββββββββββββββ 12s 211ms/step - loss: 1.5404 - val_loss: 4.5297
Epoch 84/100
59/59 ββββββββββββββββββββ 20s 199ms/step - loss: 1.5716 - val_loss: 4.5663
Epoch 85/100
59/59 ββββββββββββββββββββ 13s 216ms/step - loss: 1.5106 - val_loss: 4.5341
Epoch 86/100
59/59 ββββββββββββββββββββ 12s 210ms/step - loss: 1.4508 - val_loss: 4.5627
Epoch 87/100
59/59 ββββββββββββββββββββ 12s 210ms/step - loss: 1.3580 - val_loss: 4.6142
Epoch 88/100
59/59 ββββββββββββββββββββ 20s 198ms/step - loss: 1.3243 - val_loss: 4.4505
Epoch 89/100
59/59 ββββββββββββββββββββ 12s 208ms/step - loss: 1.2391 - val_loss: 4.5890
Epoch 90/100
59/59 ββββββββββββββββββββ 12s 210ms/step - loss: 1.2288 - val_loss: 4.6803
Epoch 91/100
59/59 ββββββββββββββββββββ 20s 208ms/step - loss: 1.1559 - val_loss: 4.6009
Epoch 92/100
59/59 ββββββββββββββββββββ 12s 198ms/step - loss: 1.1157 - val_loss: 4.6105
Epoch 93/100
59/59 ββββββββββββββββββββ 12s 199ms/step - loss: 1.0949 - val_loss: 4.4293
Epoch 94/100
59/59 ββββββββββββββββββββ 13s 225ms/step - loss: 1.0753 - val_loss: 4.3587
Epoch 95/100
59/59 ββββββββββββββββββββ 12s 210ms/step - loss: 0.9857 - val_loss: 4.7014
Epoch 96/100
59/59 ββββββββββββββββββββ 12s 208ms/step - loss: 1.0708 - val_loss: 4.6754
Epoch 97/100
59/59 ββββββββββββββββββββ 12s 201ms/step - loss: 0.9798 - val_loss: 4.4668
Epoch 98/100
59/59 ββββββββββββββββββββ 12s 205ms/step - loss: 0.9349 - val_loss: 4.7812
Epoch 99/100
59/59 ββββββββββββββββββββ 21s 209ms/step - loss: 0.8769 - val_loss: 4.8273
Epoch 100/100
59/59 ββββββββββββββββββββ 20s 202ms/step - loss: 0.9521 - val_loss: 4.5411
```
</div>
---
## Inference
You can use the trained model hosted on [Hugging Face Hub](https://huggingface.co/keras-io/ocr-for-captcha)
and try the demo on [Hugging Face Spaces](https://huggingface.co/spaces/keras-io/ocr-for-captcha).
```python
def ctc_decode(y_pred, input_length, greedy=True, beam_width=100, top_paths=1):
input_shape = tf.shape(y_pred)
num_samples, num_steps = input_shape[0], input_shape[1]
y_pred = tf.math.log(tf.transpose(y_pred, perm=[1, 0, 2]) + keras.backend.epsilon())
input_length = tf.cast(input_length, tf.int32)
if greedy:
(decoded, log_prob) = tf.nn.ctc_greedy_decoder(
inputs=y_pred, sequence_length=input_length
)
else:
(decoded, log_prob) = tf.compat.v1.nn.ctc_beam_search_decoder(
inputs=y_pred,
sequence_length=input_length,
beam_width=beam_width,
top_paths=top_paths,
)
decoded_dense = []
for st in decoded:
st = tf.SparseTensor(st.indices, st.values, (num_samples, num_steps))
decoded_dense.append(tf.sparse.to_dense(sp_input=st, default_value=-1))
return (decoded_dense, log_prob)
# Get the prediction model by extracting layers till the output layer
prediction_model = keras.models.Model(
model.input[0], model.get_layer(name="dense2").output
)
prediction_model.summary()
# A utility function to decode the output of the network
def decode_batch_predictions(pred):
input_len = np.ones(pred.shape[0]) * pred.shape[1]
# Use greedy search. For complex tasks, you can use beam search
results = ctc_decode(pred, input_length=input_len, greedy=True)[0][0][
:, :max_length
]
# Iterate over the results and get back the text
output_text = []
for res in results:
res = tf.strings.reduce_join(num_to_char(res)).numpy().decode("utf-8")
output_text.append(res)
return output_text
# Let's check results on some validation samples
for batch in validation_dataset.take(1):
batch_images = batch["image"]
batch_labels = batch["label"]
preds = prediction_model.predict(batch_images)
pred_texts = decode_batch_predictions(preds)
orig_texts = []
for label in batch_labels:
label = tf.strings.reduce_join(num_to_char(label)).numpy().decode("utf-8")
orig_texts.append(label)
_, ax = plt.subplots(4, 4, figsize=(15, 5))
for i in range(len(pred_texts)):
img = (batch_images[i, :, :, 0] * 255).numpy().astype(np.uint8)
img = img.T
title = f"Prediction: {pred_texts[i]}"
ax[i // 4, i % 4].imshow(img, cmap="gray")
ax[i // 4, i % 4].set_title(title)
ax[i // 4, i % 4].axis("off")
plt.show()
```
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold">Model: "functional_1"</span>
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace">βββββββββββββββββββββββββββββββββββ³ββββββββββββββββββββββββββββ³βββββββββββββ
β<span style="font-weight: bold"> Layer (type) </span>β<span style="font-weight: bold"> Output Shape </span>β<span style="font-weight: bold"> Param # </span>β
β‘βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ©
β image (<span style="color: #0087ff; text-decoration-color: #0087ff">InputLayer</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">200</span>, <span style="color: #00af00; text-decoration-color: #00af00">50</span>, <span style="color: #00af00; text-decoration-color: #00af00">1</span>) β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β Conv1 (<span style="color: #0087ff; text-decoration-color: #0087ff">Conv2D</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">200</span>, <span style="color: #00af00; text-decoration-color: #00af00">50</span>, <span style="color: #00af00; text-decoration-color: #00af00">32</span>) β <span style="color: #00af00; text-decoration-color: #00af00">320</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β pool1 (<span style="color: #0087ff; text-decoration-color: #0087ff">MaxPooling2D</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">100</span>, <span style="color: #00af00; text-decoration-color: #00af00">25</span>, <span style="color: #00af00; text-decoration-color: #00af00">32</span>) β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β Conv2 (<span style="color: #0087ff; text-decoration-color: #0087ff">Conv2D</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">100</span>, <span style="color: #00af00; text-decoration-color: #00af00">25</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) β <span style="color: #00af00; text-decoration-color: #00af00">18,496</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β pool2 (<span style="color: #0087ff; text-decoration-color: #0087ff">MaxPooling2D</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">50</span>, <span style="color: #00af00; text-decoration-color: #00af00">12</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β reshape (<span style="color: #0087ff; text-decoration-color: #0087ff">Reshape</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">50</span>, <span style="color: #00af00; text-decoration-color: #00af00">768</span>) β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β dense1 (<span style="color: #0087ff; text-decoration-color: #0087ff">Dense</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">50</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) β <span style="color: #00af00; text-decoration-color: #00af00">49,216</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β dropout (<span style="color: #0087ff; text-decoration-color: #0087ff">Dropout</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">50</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β bidirectional (<span style="color: #0087ff; text-decoration-color: #0087ff">Bidirectional</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">50</span>, <span style="color: #00af00; text-decoration-color: #00af00">256</span>) β <span style="color: #00af00; text-decoration-color: #00af00">197,632</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β bidirectional_1 (<span style="color: #0087ff; text-decoration-color: #0087ff">Bidirectional</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">50</span>, <span style="color: #00af00; text-decoration-color: #00af00">128</span>) β <span style="color: #00af00; text-decoration-color: #00af00">164,352</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β dense2 (<span style="color: #0087ff; text-decoration-color: #0087ff">Dense</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">50</span>, <span style="color: #00af00; text-decoration-color: #00af00">21</span>) β <span style="color: #00af00; text-decoration-color: #00af00">2,709</span> β
βββββββββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββ΄βββββββββββββ
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Total params: </span><span style="color: #00af00; text-decoration-color: #00af00">432,725</span> (1.65 MB)
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Trainable params: </span><span style="color: #00af00; text-decoration-color: #00af00">432,725</span> (1.65 MB)
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Non-trainable params: </span><span style="color: #00af00; text-decoration-color: #00af00">0</span> (0.00 B)
</pre>
<div class="k-default-codeblock">
```
1/1 ββββββββββββββββββββ 1s 579ms/step
```
</div>
![png](/img/examples/vision/captcha_ocr/captcha_ocr_19_6.png)
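The decoder above uses greedy search. As noted in the code comments, beam search can be
swapped in for harder tasks; a minimal sketch (with an illustrative, untuned beam width,
reusing the `preds` computed above) looks like this:
```python
def decode_batch_predictions_beam(pred, beam_width=10):
    input_len = np.ones(pred.shape[0]) * pred.shape[1]
    results = ctc_decode(
        pred, input_length=input_len, greedy=False, beam_width=beam_width
    )[0][0][:, :max_length]
    output_text = []
    for res in results:
        output_text.append(
            tf.strings.reduce_join(num_to_char(res)).numpy().decode("utf-8")
        )
    return output_text


print(decode_batch_predictions_beam(preds)[:4])
```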
# Gradient Centralization for Better Training Performance
**Author:** [Rishit Dagli](https://github.com/Rishit-dagli)<br>
**Date created:** 06/18/21<br>
**Last modified:** 07/25/23<br>
**Description:** Implement Gradient Centralization to improve training performance of DNNs.
<img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/vision/ipynb/gradient_centralization.ipynb) <span class="k-dot">β’</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/examples/vision/gradient_centralization.py)
---
## Introduction
This example implements [Gradient Centralization](https://arxiv.org/abs/2004.01461), a
new optimization technique for Deep Neural Networks by Yong et al., and demonstrates it
on Laurence Moroney's [Horses or Humans
Dataset](https://www.tensorflow.org/datasets/catalog/horses_or_humans). Gradient
Centralization can both speed up the training process and improve the final generalization
performance of DNNs. It operates directly on gradients by centralizing the gradient
vectors to have zero mean. Gradient Centralization moreover improves the Lipschitzness of
the loss function and its gradient so that the training process becomes more efficient
and stable.
This example requires `tensorflow_datasets` which can be installed with this command:
```
pip install tensorflow-datasets
```
---
## Setup
```python
from time import time
import keras
from keras import layers
from keras.optimizers import RMSprop
from keras import ops
from tensorflow import data as tf_data
import tensorflow_datasets as tfds
```
---
## Prepare the data
For this example, we will be using the [Horses or Humans
dataset](https://www.tensorflow.org/datasets/catalog/horses_or_humans).
```python
num_classes = 2
input_shape = (300, 300, 3)
dataset_name = "horses_or_humans"
batch_size = 128
AUTOTUNE = tf_data.AUTOTUNE
(train_ds, test_ds), metadata = tfds.load(
name=dataset_name,
split=[tfds.Split.TRAIN, tfds.Split.TEST],
with_info=True,
as_supervised=True,
)
print(f"Image shape: {metadata.features['image'].shape}")
print(f"Training images: {metadata.splits['train'].num_examples}")
print(f"Test images: {metadata.splits['test'].num_examples}")
```
<div class="k-default-codeblock">
```
Image shape: (300, 300, 3)
Training images: 1027
Test images: 256
```
</div>
---
## Use Data Augmentation
We will rescale the data to `[0, 1]` and perform simple augmentations to our data.
```python
rescale = layers.Rescaling(1.0 / 255)
data_augmentation = [
layers.RandomFlip("horizontal_and_vertical"),
layers.RandomRotation(0.3),
layers.RandomZoom(0.2),
]
# Helper to apply augmentation
def apply_aug(x):
for aug in data_augmentation:
x = aug(x)
return x
def prepare(ds, shuffle=False, augment=False):
# Rescale dataset
ds = ds.map(lambda x, y: (rescale(x), y), num_parallel_calls=AUTOTUNE)
if shuffle:
ds = ds.shuffle(1024)
# Batch dataset
ds = ds.batch(batch_size)
# Use data augmentation only on the training set
if augment:
ds = ds.map(
lambda x, y: (apply_aug(x), y),
num_parallel_calls=AUTOTUNE,
)
    # Use buffered prefetching
return ds.prefetch(buffer_size=AUTOTUNE)
```
Rescale and augment the data
```python
train_ds = prepare(train_ds, shuffle=True, augment=True)
test_ds = prepare(test_ds)
```
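A quick check (illustrative, not part of the original example) confirms the rescaling:
pixel values in a prepared batch now lie in `[0, 1]`.
```python
image_batch, label_batch = next(iter(train_ds))
print(image_batch.shape, float(image_batch.numpy().min()), float(image_batch.numpy().max()))
```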
---
## Define a model
In this section we will define a Convolutional neural network.
```python
model = keras.Sequential(
[
layers.Input(shape=input_shape),
layers.Conv2D(16, (3, 3), activation="relu"),
layers.MaxPooling2D(2, 2),
layers.Conv2D(32, (3, 3), activation="relu"),
layers.Dropout(0.5),
layers.MaxPooling2D(2, 2),
layers.Conv2D(64, (3, 3), activation="relu"),
layers.Dropout(0.5),
layers.MaxPooling2D(2, 2),
layers.Conv2D(64, (3, 3), activation="relu"),
layers.MaxPooling2D(2, 2),
layers.Conv2D(64, (3, 3), activation="relu"),
layers.MaxPooling2D(2, 2),
layers.Flatten(),
layers.Dropout(0.5),
layers.Dense(512, activation="relu"),
layers.Dense(1, activation="sigmoid"),
]
)
```
---
## Implement Gradient Centralization
We will now
subclass the `RMSProp` optimizer class, modifying the
`keras.optimizers.Optimizer.get_gradients()` method so that it implements Gradient
Centralization. At a high level, the idea is this: say we obtain the gradients of a Dense
or Convolution layer through backpropagation; we then compute the mean of each column
vector of that gradient matrix, and remove that mean from the corresponding column vector.
The experiments in [this paper](https://arxiv.org/abs/2004.01461) on various
applications, including general image classification, fine-grained image classification,
detection and segmentation and Person ReID demonstrate that GC can consistently improve
the performance of DNN learning.
Also, for simplicity, at the moment we are not implementing gradient clipping functionality;
however, this is quite easy to implement.
At the moment we are just creating a subclass for the `RMSProp` optimizer;
however, you could easily reproduce this for any other optimizer or for a custom
optimizer in the same way. We will be using this class in the later section when
we train a model with Gradient Centralization.
```python
class GCRMSprop(RMSprop):
def get_gradients(self, loss, params):
# We here just provide a modified get_gradients() function since we are
# trying to just compute the centralized gradients.
grads = []
gradients = super().get_gradients()
for grad in gradients:
grad_len = len(grad.shape)
if grad_len > 1:
axis = list(range(grad_len - 1))
                grad -= ops.mean(grad, axis=axis, keepdims=True)
grads.append(grad)
return grads
optimizer = GCRMSprop(learning_rate=1e-4)
```
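To see the centralization step in isolation, here is a small illustrative check (not part
of the original example): apply the same operation to a synthetic gradient tensor and
verify that its mean over the non-output axes becomes (numerically) zero.
```python
import numpy as np

# A synthetic stand-in for a Conv2D kernel gradient of shape (3, 3, 16, 32).
fake_grad = ops.convert_to_tensor(np.random.randn(3, 3, 16, 32).astype("float32"))
axis = list(range(len(fake_grad.shape) - 1))
centralized = fake_grad - ops.mean(fake_grad, axis=axis, keepdims=True)
# The largest per-output-channel mean magnitude after centralization is ~0.
print(float(ops.max(ops.abs(ops.mean(centralized, axis=axis)))))
```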
---
## Training utilities
We will also create a callback which allows us to easily measure the total training time
and the time taken for each epoch since we are interested in comparing the effect of
Gradient Centralization on the model we built above.
```python
class TimeHistory(keras.callbacks.Callback):
def on_train_begin(self, logs={}):
self.times = []
def on_epoch_begin(self, batch, logs={}):
self.epoch_time_start = time()
def on_epoch_end(self, batch, logs={}):
self.times.append(time() - self.epoch_time_start)
```
---
## Train the model without GC
We now train the model we built earlier without Gradient Centralization, which we can
compare to the training performance of the model trained with Gradient Centralization.
```python
time_callback_no_gc = TimeHistory()
model.compile(
loss="binary_crossentropy",
optimizer=RMSprop(learning_rate=1e-4),
metrics=["accuracy"],
)
model.summary()
```
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold">Model: "sequential"</span>
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace">βββββββββββββββββββββββββββββββββββ³ββββββββββββββββββββββββββββ³βββββββββββββ
β<span style="font-weight: bold"> Layer (type) </span>β<span style="font-weight: bold"> Output Shape </span>β<span style="font-weight: bold"> Param # </span>β
β‘βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ©
β conv2d (<span style="color: #0087ff; text-decoration-color: #0087ff">Conv2D</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">298</span>, <span style="color: #00af00; text-decoration-color: #00af00">298</span>, <span style="color: #00af00; text-decoration-color: #00af00">16</span>) β <span style="color: #00af00; text-decoration-color: #00af00">448</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β max_pooling2d (<span style="color: #0087ff; text-decoration-color: #0087ff">MaxPooling2D</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">149</span>, <span style="color: #00af00; text-decoration-color: #00af00">149</span>, <span style="color: #00af00; text-decoration-color: #00af00">16</span>) β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β conv2d_1 (<span style="color: #0087ff; text-decoration-color: #0087ff">Conv2D</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">147</span>, <span style="color: #00af00; text-decoration-color: #00af00">147</span>, <span style="color: #00af00; text-decoration-color: #00af00">32</span>) β <span style="color: #00af00; text-decoration-color: #00af00">4,640</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β dropout (<span style="color: #0087ff; text-decoration-color: #0087ff">Dropout</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">147</span>, <span style="color: #00af00; text-decoration-color: #00af00">147</span>, <span style="color: #00af00; text-decoration-color: #00af00">32</span>) β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β max_pooling2d_1 (<span style="color: #0087ff; text-decoration-color: #0087ff">MaxPooling2D</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">73</span>, <span style="color: #00af00; text-decoration-color: #00af00">73</span>, <span style="color: #00af00; text-decoration-color: #00af00">32</span>) β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β conv2d_2 (<span style="color: #0087ff; text-decoration-color: #0087ff">Conv2D</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">71</span>, <span style="color: #00af00; text-decoration-color: #00af00">71</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) β <span style="color: #00af00; text-decoration-color: #00af00">18,496</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β dropout_1 (<span style="color: #0087ff; text-decoration-color: #0087ff">Dropout</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">71</span>, <span style="color: #00af00; text-decoration-color: #00af00">71</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β max_pooling2d_2 (<span style="color: #0087ff; text-decoration-color: #0087ff">MaxPooling2D</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">35</span>, <span style="color: #00af00; text-decoration-color: #00af00">35</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β conv2d_3 (<span style="color: #0087ff; text-decoration-color: #0087ff">Conv2D</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">33</span>, <span style="color: #00af00; text-decoration-color: #00af00">33</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) β <span style="color: #00af00; text-decoration-color: #00af00">36,928</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β max_pooling2d_3 (<span style="color: #0087ff; text-decoration-color: #0087ff">MaxPooling2D</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">16</span>, <span style="color: #00af00; text-decoration-color: #00af00">16</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β conv2d_4 (<span style="color: #0087ff; text-decoration-color: #0087ff">Conv2D</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">14</span>, <span style="color: #00af00; text-decoration-color: #00af00">14</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) β <span style="color: #00af00; text-decoration-color: #00af00">36,928</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β max_pooling2d_4 (<span style="color: #0087ff; text-decoration-color: #0087ff">MaxPooling2D</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">7</span>, <span style="color: #00af00; text-decoration-color: #00af00">7</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β flatten (<span style="color: #0087ff; text-decoration-color: #0087ff">Flatten</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">3136</span>) β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β dropout_2 (<span style="color: #0087ff; text-decoration-color: #0087ff">Dropout</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">3136</span>) β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β dense (<span style="color: #0087ff; text-decoration-color: #0087ff">Dense</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">512</span>) β <span style="color: #00af00; text-decoration-color: #00af00">1,606,144</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β dense_1 (<span style="color: #0087ff; text-decoration-color: #0087ff">Dense</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">1</span>) β <span style="color: #00af00; text-decoration-color: #00af00">513</span> β
βββββββββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββ΄βββββββββββββ
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Total params: </span><span style="color: #00af00; text-decoration-color: #00af00">1,704,097</span> (6.50 MB)
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Trainable params: </span><span style="color: #00af00; text-decoration-color: #00af00">1,704,097</span> (6.50 MB)
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Non-trainable params: </span><span style="color: #00af00; text-decoration-color: #00af00">0</span> (0.00 B)
</pre>
We also save the training history, since we later want to compare the model trained with
and without Gradient Centralization.
```python
history_no_gc = model.fit(
train_ds, epochs=10, verbose=1, callbacks=[time_callback_no_gc]
)
```
<div class="k-default-codeblock">
```
Epoch 1/10
9/9 ββββββββββββββββββββ 24s 778ms/step - accuracy: 0.4772 - loss: 0.7405
Epoch 2/10
9/9 ββββββββββββββββββββ 10s 597ms/step - accuracy: 0.5434 - loss: 0.6861
Epoch 3/10
9/9 ββββββββββββββββββββ 10s 700ms/step - accuracy: 0.5402 - loss: 0.6911
Epoch 4/10
9/9 ββββββββββββββββββββ 9s 586ms/step - accuracy: 0.5884 - loss: 0.6788
Epoch 5/10
9/9 ββββββββββββββββββββ 9s 588ms/step - accuracy: 0.6570 - loss: 0.6564
Epoch 6/10
9/9 ββββββββββββββββββββ 10s 591ms/step - accuracy: 0.6671 - loss: 0.6395
Epoch 7/10
9/9 ββββββββββββββββββββ 10s 594ms/step - accuracy: 0.7010 - loss: 0.6161
Epoch 8/10
9/9 ββββββββββββββββββββ 9s 593ms/step - accuracy: 0.6946 - loss: 0.6129
Epoch 9/10
9/9 ββββββββββββββββββββ 10s 699ms/step - accuracy: 0.6972 - loss: 0.5987
Epoch 10/10
9/9 ββββββββββββββββββββ 11s 623ms/step - accuracy: 0.6839 - loss: 0.6197
```
</div>
---
## Train the model with GC
We will now train the same model, this time using Gradient Centralization. Notice that
our optimizer is the one that applies Gradient Centralization this time.
```python
time_callback_gc = TimeHistory()
model.compile(loss="binary_crossentropy", optimizer=optimizer, metrics=["accuracy"])
model.summary()
history_gc = model.fit(train_ds, epochs=10, verbose=1, callbacks=[time_callback_gc])
```
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold">Model: "sequential"</span>
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace">βββββββββββββββββββββββββββββββββββ³ββββββββββββββββββββββββββββ³βββββββββββββ
β<span style="font-weight: bold"> Layer (type) </span>β<span style="font-weight: bold"> Output Shape </span>β<span style="font-weight: bold"> Param # </span>β
β‘βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ©
β conv2d (<span style="color: #0087ff; text-decoration-color: #0087ff">Conv2D</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">298</span>, <span style="color: #00af00; text-decoration-color: #00af00">298</span>, <span style="color: #00af00; text-decoration-color: #00af00">16</span>) β <span style="color: #00af00; text-decoration-color: #00af00">448</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β max_pooling2d (<span style="color: #0087ff; text-decoration-color: #0087ff">MaxPooling2D</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">149</span>, <span style="color: #00af00; text-decoration-color: #00af00">149</span>, <span style="color: #00af00; text-decoration-color: #00af00">16</span>) β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β conv2d_1 (<span style="color: #0087ff; text-decoration-color: #0087ff">Conv2D</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">147</span>, <span style="color: #00af00; text-decoration-color: #00af00">147</span>, <span style="color: #00af00; text-decoration-color: #00af00">32</span>) β <span style="color: #00af00; text-decoration-color: #00af00">4,640</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β dropout (<span style="color: #0087ff; text-decoration-color: #0087ff">Dropout</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">147</span>, <span style="color: #00af00; text-decoration-color: #00af00">147</span>, <span style="color: #00af00; text-decoration-color: #00af00">32</span>) β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β max_pooling2d_1 (<span style="color: #0087ff; text-decoration-color: #0087ff">MaxPooling2D</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">73</span>, <span style="color: #00af00; text-decoration-color: #00af00">73</span>, <span style="color: #00af00; text-decoration-color: #00af00">32</span>) β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β conv2d_2 (<span style="color: #0087ff; text-decoration-color: #0087ff">Conv2D</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">71</span>, <span style="color: #00af00; text-decoration-color: #00af00">71</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) β <span style="color: #00af00; text-decoration-color: #00af00">18,496</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β dropout_1 (<span style="color: #0087ff; text-decoration-color: #0087ff">Dropout</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">71</span>, <span style="color: #00af00; text-decoration-color: #00af00">71</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β max_pooling2d_2 (<span style="color: #0087ff; text-decoration-color: #0087ff">MaxPooling2D</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">35</span>, <span style="color: #00af00; text-decoration-color: #00af00">35</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β conv2d_3 (<span style="color: #0087ff; text-decoration-color: #0087ff">Conv2D</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">33</span>, <span style="color: #00af00; text-decoration-color: #00af00">33</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) β <span style="color: #00af00; text-decoration-color: #00af00">36,928</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β max_pooling2d_3 (<span style="color: #0087ff; text-decoration-color: #0087ff">MaxPooling2D</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">16</span>, <span style="color: #00af00; text-decoration-color: #00af00">16</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β conv2d_4 (<span style="color: #0087ff; text-decoration-color: #0087ff">Conv2D</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">14</span>, <span style="color: #00af00; text-decoration-color: #00af00">14</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) β <span style="color: #00af00; text-decoration-color: #00af00">36,928</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β max_pooling2d_4 (<span style="color: #0087ff; text-decoration-color: #0087ff">MaxPooling2D</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">7</span>, <span style="color: #00af00; text-decoration-color: #00af00">7</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β flatten (<span style="color: #0087ff; text-decoration-color: #0087ff">Flatten</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">3136</span>) β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β dropout_2 (<span style="color: #0087ff; text-decoration-color: #0087ff">Dropout</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">3136</span>) β <span style="color: #00af00; text-decoration-color: #00af00">0</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β dense (<span style="color: #0087ff; text-decoration-color: #0087ff">Dense</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">512</span>) β <span style="color: #00af00; text-decoration-color: #00af00">1,606,144</span> β
βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββΌβββββββββββββ€
β dense_1 (<span style="color: #0087ff; text-decoration-color: #0087ff">Dense</span>) β (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">1</span>) β <span style="color: #00af00; text-decoration-color: #00af00">513</span> β
βββββββββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββ΄βββββββββββββ
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Total params: </span><span style="color: #00af00; text-decoration-color: #00af00">1,704,097</span> (6.50 MB)
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Trainable params: </span><span style="color: #00af00; text-decoration-color: #00af00">1,704,097</span> (6.50 MB)
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Non-trainable params: </span><span style="color: #00af00; text-decoration-color: #00af00">0</span> (0.00 B)
</pre>
<div class="k-default-codeblock">
```
Epoch 1/10
9/9 ββββββββββββββββββββ 12s 649ms/step - accuracy: 0.7118 - loss: 0.5594
Epoch 2/10
9/9 ββββββββββββββββββββ 10s 592ms/step - accuracy: 0.7249 - loss: 0.5817
Epoch 3/10
9/9 ββββββββββββββββββββ 9s 587ms/step - accuracy: 0.8060 - loss: 0.4448
Epoch 4/10
9/9 ββββββββββββββββββββ 10s 693ms/step - accuracy: 0.8472 - loss: 0.4051
Epoch 5/10
9/9 ββββββββββββββββββββ 10s 594ms/step - accuracy: 0.8386 - loss: 0.3978
Epoch 6/10
9/9 ββββββββββββββββββββ 10s 593ms/step - accuracy: 0.8442 - loss: 0.3976
Epoch 7/10
9/9 ββββββββββββββββββββ 9s 585ms/step - accuracy: 0.7409 - loss: 0.6626
Epoch 8/10
9/9 ββββββββββββββββββββ 10s 587ms/step - accuracy: 0.8191 - loss: 0.4357
Epoch 9/10
9/9 ββββββββββββββββββββ 9s 587ms/step - accuracy: 0.8248 - loss: 0.3974
Epoch 10/10
9/9 ββββββββββββββββββββ 10s 646ms/step - accuracy: 0.8022 - loss: 0.4589
```
</div>
---
## Comparing performance
```python
print("Not using Gradient Centralization")
print(f"Loss: {history_no_gc.history['loss'][-1]}")
print(f"Accuracy: {history_no_gc.history['accuracy'][-1]}")
print(f"Training Time: {sum(time_callback_no_gc.times)}")
print("Using Gradient Centralization")
print(f"Loss: {history_gc.history['loss'][-1]}")
print(f"Accuracy: {history_gc.history['accuracy'][-1]}")
print(f"Training Time: {sum(time_callback_gc.times)}")
```
<div class="k-default-codeblock">
```
Not using Gradient Centralization
Loss: 0.5345584154129028
Accuracy: 0.7604166865348816
Training Time: 112.48799777030945
Using Gradient Centralization
Loss: 0.4014038145542145
Accuracy: 0.8153935074806213
Training Time: 98.31573963165283
```
</div>
Readers are encouraged to try out Gradient Centralization on different datasets from
different domains and experiment with its effect. You are strongly advised to check out
the [original paper](https://arxiv.org/abs/2004.01461) as well - the authors present
several studies on Gradient Centralization showing how it can improve general
performance, generalization, and training efficiency.
Many thanks to [Ali Mustufa Shaikh](https://github.com/ialimustufa) for reviewing this
implementation.
# MixUp augmentation for image classification
**Author:** [Sayak Paul](https://twitter.com/RisingSayak)<br>
**Date created:** 2021/03/06<br>
**Last modified:** 2023/07/24<br>
**Description:** Data augmentation using the mixup technique for image classification.
<img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/vision/ipynb/mixup.ipynb) <span class="k-dot">β’</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/examples/vision/mixup.py)
---
## Introduction
_mixup_ is a *domain-agnostic* data augmentation technique proposed in [mixup: Beyond Empirical Risk Minimization](https://arxiv.org/abs/1710.09412)
by Zhang et al. It's implemented with the following formulas:
![](https://i.ibb.co/DRyHYww/image.png)
(Note that the lambda values are within the [0, 1] range and are sampled from the
[Beta distribution](https://en.wikipedia.org/wiki/Beta_distribution).)
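Concretely, given two labeled examples `(x_i, y_i)` and `(x_j, y_j)`, the formulas above construct a new example `x_new = lambda * x_i + (1 - lambda) * x_j` with label `y_new = lambda * y_i + (1 - lambda) * y_j` (variable names here are just for exposition).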
The technique is quite systematically named. We are literally mixing up the features and
their corresponding labels. Implementation-wise it's simple. Neural networks are prone
to [memorizing corrupt labels](https://arxiv.org/abs/1611.03530). mixup relaxes this by
combining different features with one another (same happens for the labels too) so that
a network does not get overconfident about the relationship between the features and
their labels.
mixup is specifically useful when we are not sure about selecting a set of augmentation
transforms for a given dataset (medical imaging datasets, for example). mixup can be
extended to a variety of data modalities such as computer vision, natural language
processing, speech, and so on.
---
## Setup
```python
import os
os.environ["KERAS_BACKEND"] = "tensorflow"
import numpy as np
import keras
import matplotlib.pyplot as plt
from keras import layers
# TF imports related to tf.data preprocessing
from tensorflow import data as tf_data
from tensorflow import image as tf_image
from tensorflow.random import gamma as tf_random_gamma
```
---
## Prepare the dataset
In this example, we will be using the [FashionMNIST](https://github.com/zalandoresearch/fashion-mnist) dataset. But this same recipe can
be used for other classification datasets as well.
```python
(x_train, y_train), (x_test, y_test) = keras.datasets.fashion_mnist.load_data()
x_train = x_train.astype("float32") / 255.0
x_train = np.reshape(x_train, (-1, 28, 28, 1))
y_train = keras.ops.one_hot(y_train, 10)
x_test = x_test.astype("float32") / 255.0
x_test = np.reshape(x_test, (-1, 28, 28, 1))
y_test = keras.ops.one_hot(y_test, 10)
```
---
## Define hyperparameters
```python
AUTO = tf_data.AUTOTUNE
BATCH_SIZE = 64
EPOCHS = 10
```
---
## Convert the data into TensorFlow `Dataset` objects
```python
# Put aside a few samples to create our validation set
val_samples = 2000
x_val, y_val = x_train[:val_samples], y_train[:val_samples]
new_x_train, new_y_train = x_train[val_samples:], y_train[val_samples:]
train_ds_one = (
tf_data.Dataset.from_tensor_slices((new_x_train, new_y_train))
.shuffle(BATCH_SIZE * 100)
.batch(BATCH_SIZE)
)
train_ds_two = (
tf_data.Dataset.from_tensor_slices((new_x_train, new_y_train))
.shuffle(BATCH_SIZE * 100)
.batch(BATCH_SIZE)
)
# Because we will be mixing up the images and their corresponding labels, we will be
# combining two shuffled datasets from the same training data.
train_ds = tf_data.Dataset.zip((train_ds_one, train_ds_two))
val_ds = tf_data.Dataset.from_tensor_slices((x_val, y_val)).batch(BATCH_SIZE)
test_ds = tf_data.Dataset.from_tensor_slices((x_test, y_test)).batch(BATCH_SIZE)
```
---
## Define the mixup technique function
To perform the mixup routine, we create new virtual datasets using the training data from
the same dataset, and apply a lambda value within the [0, 1] range sampled from a [Beta distribution](https://en.wikipedia.org/wiki/Beta_distribution),
such that, for example, `new_x = lambda * x1 + (1 - lambda) * x2` (where
`x1` and `x2` are images) and the same equation is applied to the labels as well.
```python
def sample_beta_distribution(size, concentration_0=0.2, concentration_1=0.2):
gamma_1_sample = tf_random_gamma(shape=[size], alpha=concentration_1)
gamma_2_sample = tf_random_gamma(shape=[size], alpha=concentration_0)
return gamma_1_sample / (gamma_1_sample + gamma_2_sample)
def mix_up(ds_one, ds_two, alpha=0.2):
# Unpack two datasets
images_one, labels_one = ds_one
images_two, labels_two = ds_two
batch_size = keras.ops.shape(images_one)[0]
# Sample lambda and reshape it to do the mixup
l = sample_beta_distribution(batch_size, alpha, alpha)
x_l = keras.ops.reshape(l, (batch_size, 1, 1, 1))
y_l = keras.ops.reshape(l, (batch_size, 1))
# Perform mixup on both images and labels by combining a pair of images/labels
# (one from each dataset) into one image/label
images = images_one * x_l + images_two * (1 - x_l)
labels = labels_one * y_l + labels_two * (1 - y_l)
return (images, labels)
```
**Note** that here, we are combining two images to create a single one. Theoretically,
we can combine as many as we want, but that comes at an increased computational cost. In
certain cases, it may also not help improve performance.
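As an illustration of that idea, here is a minimal standalone NumPy sketch (not part of the pipeline above; the function name and batches are hypothetical) that mixes three examples at once using weights drawn from a Dirichlet distribution, which generalizes the two-sample Beta-distributed lambda:
```python
import numpy as np


def mix_up_three(x1, y1, x2, y2, x3, y3, alpha=0.2):
    # Each row of `w` sums to 1 and plays the role of (lambda, 1 - lambda, ...)
    # extended to three samples.
    batch_size = x1.shape[0]
    w = np.random.dirichlet([alpha, alpha, alpha], size=batch_size).astype("float32")
    x_w = w.reshape(batch_size, 3, 1, 1, 1)
    y_w = w.reshape(batch_size, 3, 1)
    images = x_w[:, 0] * x1 + x_w[:, 1] * x2 + x_w[:, 2] * x3
    labels = y_w[:, 0] * y1 + y_w[:, 1] * y2 + y_w[:, 2] * y3
    return images, labels


# Quick shape check with random stand-in batches of FashionMNIST-sized data.
xs = [np.random.rand(8, 28, 28, 1).astype("float32") for _ in range(3)]
ys = [np.eye(10, dtype="float32")[np.random.randint(0, 10, 8)] for _ in range(3)]
mixed_x, mixed_y = mix_up_three(xs[0], ys[0], xs[1], ys[1], xs[2], ys[2])
print(mixed_x.shape, mixed_y.shape)  # (8, 28, 28, 1) (8, 10)
```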
---
## Visualize the new augmented dataset
```python
# First create the new dataset using our `mix_up` utility
train_ds_mu = train_ds.map(
lambda ds_one, ds_two: mix_up(ds_one, ds_two, alpha=0.2),
num_parallel_calls=AUTO,
)
# Let's preview 9 samples from the dataset
sample_images, sample_labels = next(iter(train_ds_mu))
plt.figure(figsize=(10, 10))
for i, (image, label) in enumerate(zip(sample_images[:9], sample_labels[:9])):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(image.numpy().squeeze())
print(label.numpy().tolist())
plt.axis("off")
```
<div class="k-default-codeblock">
```
[0.0, 0.9964277148246765, 0.0, 0.0, 0.003572270041331649, 0.0, 0.0, 0.0, 0.0, 0.0]
[0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
[0.0, 0.0, 0.0, 0.0, 0.9794676899909973, 0.02053229510784149, 0.0, 0.0, 0.0, 0.0]
[0.0, 0.0, 0.0, 0.0, 0.9536369442939758, 0.0, 0.0, 0.0, 0.04636305570602417, 0.0]
[0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7631776928901672, 0.0, 0.0, 0.23682232201099396]
[0.0, 0.0, 0.045958757400512695, 0.0, 0.0, 0.0, 0.9540412425994873, 0.0, 0.0, 0.0]
[0.0, 0.0, 0.0, 0.0, 2.8015051611873787e-08, 0.0, 0.0, 1.0, 0.0, 0.0]
[0.0, 0.0, 0.0, 0.0003173351287841797, 0.0, 0.9996826648712158, 0.0, 0.0, 0.0, 0.0]
```
</div>
![png](/img/examples/vision/mixup/mixup_15_1.png)
---
## Model building
```python
def get_training_model():
model = keras.Sequential(
[
layers.Input(shape=(28, 28, 1)),
layers.Conv2D(16, (5, 5), activation="relu"),
layers.MaxPooling2D(pool_size=(2, 2)),
layers.Conv2D(32, (5, 5), activation="relu"),
layers.MaxPooling2D(pool_size=(2, 2)),
layers.Dropout(0.2),
layers.GlobalAveragePooling2D(),
layers.Dense(128, activation="relu"),
layers.Dense(10, activation="softmax"),
]
)
return model
```
For the sake of reproducibility, we serialize the initial random weights of our shallow
network.
```python
initial_model = get_training_model()
initial_model.save_weights("initial_weights.weights.h5")
```
---
## 1. Train the model with the mixed up dataset
```python
model = get_training_model()
model.load_weights("initial_weights.weights.h5")
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(train_ds_mu, validation_data=val_ds, epochs=EPOCHS)
_, test_acc = model.evaluate(test_ds)
print("Test accuracy: {:.2f}%".format(test_acc * 100))
```
<div class="k-default-codeblock">
```
Epoch 1/10
62/907 β[37mβββββββββββββββββββ 2s 3ms/step - accuracy: 0.2518 - loss: 2.2072
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1699655923.381468 16749 device_compiler.h:187] Compiled cluster using XLA! This line is logged at most once for the lifetime of the process.
907/907 ββββββββββββββββββββ 13s 9ms/step - accuracy: 0.5335 - loss: 1.4414 - val_accuracy: 0.7635 - val_loss: 0.6678
Epoch 2/10
907/907 ββββββββββββββββββββ 12s 4ms/step - accuracy: 0.7168 - loss: 0.9688 - val_accuracy: 0.7925 - val_loss: 0.5849
Epoch 3/10
907/907 ββββββββββββββββββββ 5s 4ms/step - accuracy: 0.7525 - loss: 0.8940 - val_accuracy: 0.8290 - val_loss: 0.5138
Epoch 4/10
907/907 ββββββββββββββββββββ 4s 3ms/step - accuracy: 0.7742 - loss: 0.8431 - val_accuracy: 0.8360 - val_loss: 0.4726
Epoch 5/10
907/907 ββββββββββββββββββββ 3s 3ms/step - accuracy: 0.7876 - loss: 0.8095 - val_accuracy: 0.8550 - val_loss: 0.4450
Epoch 6/10
907/907 ββββββββββββββββββββ 3s 3ms/step - accuracy: 0.8029 - loss: 0.7794 - val_accuracy: 0.8560 - val_loss: 0.4178
Epoch 7/10
907/907 ββββββββββββββββββββ 2s 3ms/step - accuracy: 0.8039 - loss: 0.7632 - val_accuracy: 0.8600 - val_loss: 0.4056
Epoch 8/10
907/907 ββββββββββββββββββββ 3s 3ms/step - accuracy: 0.8115 - loss: 0.7465 - val_accuracy: 0.8510 - val_loss: 0.4114
Epoch 9/10
907/907 ββββββββββββββββββββ 3s 3ms/step - accuracy: 0.8115 - loss: 0.7364 - val_accuracy: 0.8645 - val_loss: 0.3983
Epoch 10/10
907/907 ββββββββββββββββββββ 3s 3ms/step - accuracy: 0.8182 - loss: 0.7237 - val_accuracy: 0.8630 - val_loss: 0.3735
157/157 ββββββββββββββββββββ 0s 2ms/step - accuracy: 0.8610 - loss: 0.4030
Test accuracy: 85.82%
```
</div>
---
## 2. Train the model *without* the mixed up dataset
```python
model = get_training_model()
model.load_weights("initial_weights.weights.h5")
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
# Notice that we are NOT using the mixed up dataset here
model.fit(train_ds_one, validation_data=val_ds, epochs=EPOCHS)
_, test_acc = model.evaluate(test_ds)
print("Test accuracy: {:.2f}%".format(test_acc * 100))
```
<div class="k-default-codeblock">
```
Epoch 1/10
907/907 ββββββββββββββββββββ 8s 6ms/step - accuracy: 0.5690 - loss: 1.1928 - val_accuracy: 0.7585 - val_loss: 0.6519
Epoch 2/10
907/907 ββββββββββββββββββββ 5s 2ms/step - accuracy: 0.7525 - loss: 0.6484 - val_accuracy: 0.7860 - val_loss: 0.5799
Epoch 3/10
907/907 ββββββββββββββββββββ 2s 2ms/step - accuracy: 0.7895 - loss: 0.5661 - val_accuracy: 0.8205 - val_loss: 0.5122
Epoch 4/10
907/907 ββββββββββββββββββββ 3s 2ms/step - accuracy: 0.8148 - loss: 0.5126 - val_accuracy: 0.8415 - val_loss: 0.4375
Epoch 5/10
907/907 ββββββββββββββββββββ 3s 2ms/step - accuracy: 0.8306 - loss: 0.4636 - val_accuracy: 0.8610 - val_loss: 0.3913
Epoch 6/10
907/907 ββββββββββββββββββββ 2s 2ms/step - accuracy: 0.8433 - loss: 0.4312 - val_accuracy: 0.8680 - val_loss: 0.3734
Epoch 7/10
907/907 ββββββββββββββββββββ 3s 2ms/step - accuracy: 0.8544 - loss: 0.4072 - val_accuracy: 0.8750 - val_loss: 0.3606
Epoch 8/10
907/907 ββββββββββββββββββββ 3s 2ms/step - accuracy: 0.8577 - loss: 0.3913 - val_accuracy: 0.8735 - val_loss: 0.3520
Epoch 9/10
907/907 ββββββββββββββββββββ 3s 2ms/step - accuracy: 0.8645 - loss: 0.3803 - val_accuracy: 0.8725 - val_loss: 0.3536
Epoch 10/10
907/907 ββββββββββββββββββββ 3s 3ms/step - accuracy: 0.8686 - loss: 0.3597 - val_accuracy: 0.8745 - val_loss: 0.3395
157/157 ββββββββββββββββββββ 1s 4ms/step - accuracy: 0.8705 - loss: 0.3672
Test accuracy: 86.92%
```
</div>
Readers are encouraged to try out mixup on different datasets from different domains and
experiment with the lambda parameter. You are strongly advised to check out the
[original paper](https://arxiv.org/abs/1710.09412) as well - the authors present several ablation studies on mixup
showing how it can improve generalization, as well as show their results of combining
more than two images to create a single one.
---
## Notes
* With mixup, you can create synthetic examples (especially when you lack a large
dataset) without incurring high computational costs.
* [Label smoothing](https://www.pyimagesearch.com/2019/12/30/label-smoothing-with-keras-tensorflow-and-deep-learning/) and mixup usually do not work well together because label smoothing
already modifies the hard labels by some factor.
* mixup does not work well when you are using [Supervised Contrastive
Learning](https://arxiv.org/abs/2004.11362) (SCL) since SCL expects the true labels
during its pre-training phase.
* A few other benefits of mixup include (as described in the [paper](https://arxiv.org/abs/1710.09412)) robustness to
adversarial examples and stabilized GAN (Generative Adversarial Networks) training.
* There are a number of data augmentation techniques that extend mixup such as
[CutMix](https://arxiv.org/abs/1905.04899) and [AugMix](https://arxiv.org/abs/1912.02781).
# Few-Shot learning with Reptile
**Author:** [ADMoreau](https://github.com/ADMoreau)<br>
**Date created:** 2020/05/21<br>
**Last modified:** 2023/07/20<br>
**Description:** Few-shot classification on the Omniglot dataset using Reptile.
<img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/vision/ipynb/reptile.ipynb) <span class="k-dot">β’</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/examples/vision/reptile.py)
---
## Introduction
The [Reptile](https://arxiv.org/abs/1803.02999) algorithm was developed by OpenAI to
perform model-agnostic meta-learning. Specifically, this algorithm was designed to
quickly learn to perform new tasks with minimal training (few-shot learning).
The algorithm works by repeatedly, over a fixed number of meta-iterations, training on a
mini-batch of never-before-seen data with Stochastic Gradient Descent, and then updating
the model weights using the difference between the task-trained weights and the model
weights prior to training.
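Schematically, a single meta-iteration boils down to the following update (a toy NumPy illustration with made-up weight shapes; the actual training loop appears later in this example):
```python
import numpy as np

meta_step_size = 0.25
# Meta-parameters before training on the sampled task (made-up shapes).
old_weights = [np.zeros((3, 3)), np.zeros((3,))]
# Weights obtained after a few SGD steps on a single sampled task.
task_weights = [np.ones((3, 3)), np.ones((3,))]

# Move the meta-parameters a small step toward the task-trained weights.
new_weights = [
    old + (task - old) * meta_step_size
    for old, task in zip(old_weights, task_weights)
]
```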
```python
import os
os.environ["KERAS_BACKEND"] = "tensorflow"
import keras
from keras import layers
import matplotlib.pyplot as plt
import numpy as np
import random
import tensorflow as tf
import tensorflow_datasets as tfds
```
---
## Define the Hyperparameters
```python
learning_rate = 0.003
meta_step_size = 0.25
inner_batch_size = 25
eval_batch_size = 25
meta_iters = 2000
eval_iters = 5
inner_iters = 4
eval_interval = 1
train_shots = 20
shots = 5
classes = 5
```
---
## Prepare the data
The [Omniglot dataset](https://github.com/brendenlake/omniglot/) is a dataset of 1,623
characters taken from 50 different alphabets, with 20 examples for each character.
The 20 samples for each character were drawn online via Amazon's Mechanical Turk. For the
few-shot learning task, `k` samples (or "shots") are drawn randomly from `n` randomly-chosen
classes. These `n` classes are assigned a new set of temporary numerical labels, which is
used to test the model's ability to learn a new task given few examples. In other words, if you
are training on 5 classes, your new class labels will be either 0, 1, 2, 3, or 4.
Omniglot is a great dataset for this task since there are many different classes to draw
from, with a reasonable number of samples for each class.
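For instance, here is a tiny sketch (with made-up class names, separate from the `Dataset` class below) of how an episode's classes get mapped to temporary labels:
```python
import random

# Made-up character class names standing in for Omniglot classes.
all_classes = ["Latin_a", "Greek_beta", "Cyrillic_zhe", "Hebrew_alef", "Braille_k", "Futurama_x"]

# Sample 5 classes for a 5-way episode and relabel them 0..4 for this episode only.
episode_classes = random.sample(all_classes, k=5)
temp_label_map = {name: idx for idx, name in enumerate(episode_classes)}
print(temp_label_map)
```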
```python
class Dataset:
# This class will facilitate the creation of a few-shot dataset
# from the Omniglot dataset that can be sampled from quickly while also
# allowing to create new labels at the same time.
def __init__(self, training):
# Download the tfrecord files containing the omniglot data and convert to a
# dataset.
split = "train" if training else "test"
ds = tfds.load("omniglot", split=split, as_supervised=True, shuffle_files=False)
# Iterate over the dataset to get each individual image and its class,
# and put that data into a dictionary.
self.data = {}
def extraction(image, label):
# This function will shrink the Omniglot images to the desired size,
# scale pixel values and convert the RGB image to grayscale
image = tf.image.convert_image_dtype(image, tf.float32)
image = tf.image.rgb_to_grayscale(image)
image = tf.image.resize(image, [28, 28])
return image, label
for image, label in ds.map(extraction):
image = image.numpy()
label = str(label.numpy())
if label not in self.data:
self.data[label] = []
self.data[label].append(image)
self.labels = list(self.data.keys())
def get_mini_dataset(
self, batch_size, repetitions, shots, num_classes, split=False
):
temp_labels = np.zeros(shape=(num_classes * shots))
temp_images = np.zeros(shape=(num_classes * shots, 28, 28, 1))
if split:
test_labels = np.zeros(shape=(num_classes))
test_images = np.zeros(shape=(num_classes, 28, 28, 1))
# Get a random subset of labels from the entire label set.
label_subset = random.choices(self.labels, k=num_classes)
for class_idx, class_obj in enumerate(label_subset):
# Use enumerated index value as a temporary label for mini-batch in
# few shot learning.
temp_labels[class_idx * shots : (class_idx + 1) * shots] = class_idx
# If creating a split dataset for testing, select an extra sample from each
# label to create the test dataset.
if split:
test_labels[class_idx] = class_idx
images_to_split = random.choices(
self.data[label_subset[class_idx]], k=shots + 1
)
test_images[class_idx] = images_to_split[-1]
temp_images[
class_idx * shots : (class_idx + 1) * shots
] = images_to_split[:-1]
else:
# For each index in the randomly selected label_subset, sample the
# necessary number of images.
temp_images[
class_idx * shots : (class_idx + 1) * shots
] = random.choices(self.data[label_subset[class_idx]], k=shots)
dataset = tf.data.Dataset.from_tensor_slices(
(temp_images.astype(np.float32), temp_labels.astype(np.int32))
)
dataset = dataset.shuffle(100).batch(batch_size).repeat(repetitions)
if split:
return dataset, test_images, test_labels
return dataset
import urllib3
urllib3.disable_warnings() # Disable SSL warnings that may happen during download.
train_dataset = Dataset(training=True)
test_dataset = Dataset(training=False)
```
<div class="k-default-codeblock">
```
Downloading and preparing dataset 17.95 MiB (download: 17.95 MiB, generated: Unknown size, total: 17.95 MiB) to /home/fchollet/tensorflow_datasets/omniglot/3.0.0...
Dl Completed...: 0 url [00:00, ? url/s]
Dl Size...: 0 MiB [00:00, ? MiB/s]
Extraction completed...: 0 file [00:00, ? file/s]
Generating splits...: 0%| | 0/4 [00:00<?, ? splits/s]
Generating train examples...: 0%| | 0/19280 [00:00<?, ? examples/s]
Shuffling /home/fchollet/tensorflow_datasets/omniglot/3.0.0.incomplete1MPXME/omniglot-train.tfrecord*...: 0%β¦
Generating test examples...: 0%| | 0/13180 [00:00<?, ? examples/s]
Shuffling /home/fchollet/tensorflow_datasets/omniglot/3.0.0.incomplete1MPXME/omniglot-test.tfrecord*...: 0%|β¦
Generating small1 examples...: 0%| | 0/2720 [00:00<?, ? examples/s]
Shuffling /home/fchollet/tensorflow_datasets/omniglot/3.0.0.incomplete1MPXME/omniglot-small1.tfrecord*...: 0β¦
Generating small2 examples...: 0%| | 0/3120 [00:00<?, ? examples/s]
Shuffling /home/fchollet/tensorflow_datasets/omniglot/3.0.0.incomplete1MPXME/omniglot-small2.tfrecord*...: 0β¦
Dataset omniglot downloaded and prepared to /home/fchollet/tensorflow_datasets/omniglot/3.0.0. Subsequent calls will reuse this data.
```
</div>
---
## Visualize some examples from the dataset
```python
_, axarr = plt.subplots(nrows=5, ncols=5, figsize=(20, 20))
sample_keys = list(train_dataset.data.keys())
for a in range(5):
for b in range(5):
temp_image = train_dataset.data[sample_keys[a]][b]
temp_image = np.stack((temp_image[:, :, 0],) * 3, axis=2)
temp_image *= 255
temp_image = np.clip(temp_image, 0, 255).astype("uint8")
if b == 2:
axarr[a, b].set_title("Class : " + sample_keys[a])
axarr[a, b].imshow(temp_image, cmap="gray")
axarr[a, b].xaxis.set_visible(False)
axarr[a, b].yaxis.set_visible(False)
plt.show()
```
![png](/img/examples/vision/reptile/reptile_8_0.png)
---
## Build the model
```python
def conv_bn(x):
x = layers.Conv2D(filters=64, kernel_size=3, strides=2, padding="same")(x)
x = layers.BatchNormalization()(x)
return layers.ReLU()(x)
inputs = layers.Input(shape=(28, 28, 1))
x = conv_bn(inputs)
x = conv_bn(x)
x = conv_bn(x)
x = conv_bn(x)
x = layers.Flatten()(x)
outputs = layers.Dense(classes, activation="softmax")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile()
optimizer = keras.optimizers.SGD(learning_rate=learning_rate)
```
---
## Train the model
```python
training = []
testing = []
for meta_iter in range(meta_iters):
frac_done = meta_iter / meta_iters
cur_meta_step_size = (1 - frac_done) * meta_step_size
# Temporarily save the weights from the model.
old_vars = model.get_weights()
# Get a sample from the full dataset.
mini_dataset = train_dataset.get_mini_dataset(
inner_batch_size, inner_iters, train_shots, classes
)
for images, labels in mini_dataset:
with tf.GradientTape() as tape:
preds = model(images)
loss = keras.losses.sparse_categorical_crossentropy(labels, preds)
grads = tape.gradient(loss, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
new_vars = model.get_weights()
# Perform SGD for the meta step.
for var in range(len(new_vars)):
new_vars[var] = old_vars[var] + (
(new_vars[var] - old_vars[var]) * cur_meta_step_size
)
# After the meta-learning step, reload the newly-trained weights into the model.
model.set_weights(new_vars)
# Evaluation loop
if meta_iter % eval_interval == 0:
accuracies = []
for dataset in (train_dataset, test_dataset):
# Sample a mini dataset from the full dataset.
train_set, test_images, test_labels = dataset.get_mini_dataset(
eval_batch_size, eval_iters, shots, classes, split=True
)
old_vars = model.get_weights()
# Train on the samples and get the resulting accuracies.
for images, labels in train_set:
with tf.GradientTape() as tape:
preds = model(images)
loss = keras.losses.sparse_categorical_crossentropy(labels, preds)
grads = tape.gradient(loss, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
test_preds = model.predict(test_images)
test_preds = tf.argmax(test_preds).numpy()
num_correct = (test_preds == test_labels).sum()
# Reset the weights after getting the evaluation accuracies.
model.set_weights(old_vars)
accuracies.append(num_correct / classes)
training.append(accuracies[0])
testing.append(accuracies[1])
if meta_iter % 100 == 0:
print(
"batch %d: train=%f test=%f" % (meta_iter, accuracies[0], accuracies[1])
)
```
<div class="k-default-codeblock">
```
batch 0: train=0.600000 test=0.200000
batch 100: train=0.800000 test=0.200000
batch 200: train=1.000000 test=1.000000
batch 300: train=1.000000 test=0.800000
batch 400: train=1.000000 test=0.600000
batch 500: train=1.000000 test=1.000000
batch 600: train=1.000000 test=0.600000
batch 700: train=1.000000 test=1.000000
batch 800: train=1.000000 test=0.800000
batch 900: train=0.800000 test=0.600000
batch 1000: train=1.000000 test=0.600000
batch 1100: train=1.000000 test=1.000000
batch 1200: train=1.000000 test=1.000000
batch 1300: train=0.600000 test=1.000000
batch 1400: train=1.000000 test=0.600000
batch 1500: train=1.000000 test=1.000000
batch 1600: train=0.800000 test=1.000000
batch 1700: train=0.800000 test=1.000000
batch 1800: train=0.800000 test=1.000000
batch 1900: train=1.000000 test=1.000000
```
</div>
---
## Visualize Results
```python
# First, some preprocessing to smooth the training and testing arrays for display.
window_length = 100
train_s = np.r_[
training[window_length - 1 : 0 : -1],
training,
training[-1:-window_length:-1],
]
test_s = np.r_[
testing[window_length - 1 : 0 : -1], testing, testing[-1:-window_length:-1]
]
w = np.hamming(window_length)
train_y = np.convolve(w / w.sum(), train_s, mode="valid")
test_y = np.convolve(w / w.sum(), test_s, mode="valid")
# Display the training accuracies.
x = np.arange(0, len(test_y), 1)
plt.plot(x, test_y, x, train_y)
plt.legend(["test", "train"])
plt.grid()
train_set, test_images, test_labels = dataset.get_mini_dataset(
eval_batch_size, eval_iters, shots, classes, split=True
)
for images, labels in train_set:
with tf.GradientTape() as tape:
preds = model(images)
loss = keras.losses.sparse_categorical_crossentropy(labels, preds)
grads = tape.gradient(loss, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
test_preds = model.predict(test_images)
test_preds = tf.argmax(test_preds).numpy()
_, axarr = plt.subplots(nrows=1, ncols=5, figsize=(20, 20))
sample_keys = list(train_dataset.data.keys())
for i, ax in zip(range(5), axarr):
temp_image = np.stack((test_images[i, :, :, 0],) * 3, axis=2)
temp_image *= 255
temp_image = np.clip(temp_image, 0, 255).astype("uint8")
ax.set_title(
"Label : {}, Prediction : {}".format(int(test_labels[i]), test_preds[i])
)
ax.imshow(temp_image, cmap="gray")
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plt.show()
```
![png](/img/examples/vision/reptile/reptile_14_1.png)
![png](/img/examples/vision/reptile/reptile_14_2.png)
"""
Title: Natural language image search with a Dual Encoder
Author: [Khalid Salama](https://www.linkedin.com/in/khalid-salama-24403144/)
Date created: 2021/01/30
Last modified: 2021/01/30
Description: Implementation of a dual encoder model for retrieving images that match natural language queries.
Accelerator: GPU
"""
"""
## Introduction
The example demonstrates how to build a dual encoder (also known as two-tower) neural network
model to search for images using natural language. The model is inspired by
the [CLIP](https://openai.com/blog/clip/)
approach, introduced by Alec Radford et al. The idea is to train a vision encoder and a text
encoder jointly to project the representation of images and their captions into the same embedding
space, such that the caption embeddings are located near the embeddings of the images they describe.
This example requires TensorFlow 2.4 or higher.
In addition, [TensorFlow Hub](https://www.tensorflow.org/hub)
and [TensorFlow Text](https://www.tensorflow.org/tutorials/tensorflow_text/intro)
are required for the BERT model, and [TensorFlow Addons](https://www.tensorflow.org/addons)
is required for the AdamW optimizer. These libraries can be installed using the
following command:
```python
pip install -q -U tensorflow-hub tensorflow-text tensorflow-addons
```
"""
"""
## Setup
"""
import os
import collections
import json
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow_hub as hub
import tensorflow_text as text
import tensorflow_addons as tfa
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from tqdm import tqdm
# Suppressing tf.hub warnings
tf.get_logger().setLevel("ERROR")
"""
## Prepare the data
We will use the [MS-COCO](https://cocodataset.org/#home) dataset to train our
dual encoder model. MS-COCO contains over 82,000 images, each of which has at least
5 different caption annotations. The dataset is usually used for
[image captioning](https://www.tensorflow.org/tutorials/text/image_captioning)
tasks, but we can repurpose the image-caption pairs to train our dual encoder
model for image search.
### Download and extract the data
First, let's download the dataset, which consists of two compressed folders:
one with images, and the other with the associated image captions.
Note that the compressed images folder is 13GB in size.
"""
root_dir = "datasets"
annotations_dir = os.path.join(root_dir, "annotations")
images_dir = os.path.join(root_dir, "train2014")
tfrecords_dir = os.path.join(root_dir, "tfrecords")
annotation_file = os.path.join(annotations_dir, "captions_train2014.json")
# Download caption annotation files
if not os.path.exists(annotations_dir):
annotation_zip = tf.keras.utils.get_file(
"captions.zip",
cache_dir=os.path.abspath("."),
origin="http://images.cocodataset.org/annotations/annotations_trainval2014.zip",
extract=True,
)
os.remove(annotation_zip)
# Download image files
if not os.path.exists(images_dir):
image_zip = tf.keras.utils.get_file(
"train2014.zip",
cache_dir=os.path.abspath("."),
origin="http://images.cocodataset.org/zips/train2014.zip",
extract=True,
)
os.remove(image_zip)
print("Dataset is downloaded and extracted successfully.")
with open(annotation_file, "r") as f:
annotations = json.load(f)["annotations"]
image_path_to_caption = collections.defaultdict(list)
for element in annotations:
caption = f"{element['caption'].lower().rstrip('.')}"
image_path = images_dir + "/COCO_train2014_" + "%012d.jpg" % (element["image_id"])
image_path_to_caption[image_path].append(caption)
image_paths = list(image_path_to_caption.keys())
print(f"Number of images: {len(image_paths)}")
"""
### Process and save the data to TFRecord files
You can change the `train_size` parameter to control how many image-caption pairs
will be used for training the dual encoder model.
In this example we set `train_size` to 30,000 images,
which is about 35% of the dataset. We use 2 captions for each
image, thus producing 60,000 image-caption pairs. The size of the training set
affects the quality of the produced encoders, but more examples would lead to
longer training time.
"""
train_size = 30000
valid_size = 5000
captions_per_image = 2
images_per_file = 2000
train_image_paths = image_paths[:train_size]
num_train_files = int(np.ceil(train_size / images_per_file))
train_files_prefix = os.path.join(tfrecords_dir, "train")
valid_image_paths = image_paths[-valid_size:]
num_valid_files = int(np.ceil(valid_size / images_per_file))
valid_files_prefix = os.path.join(tfrecords_dir, "valid")
tf.io.gfile.makedirs(tfrecords_dir)
def bytes_feature(value):
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def create_example(image_path, caption):
feature = {
"caption": bytes_feature(caption.encode()),
"raw_image": bytes_feature(tf.io.read_file(image_path).numpy()),
}
return tf.train.Example(features=tf.train.Features(feature=feature))
def write_tfrecords(file_name, image_paths):
caption_list = []
image_path_list = []
for image_path in image_paths:
captions = image_path_to_caption[image_path][:captions_per_image]
caption_list.extend(captions)
image_path_list.extend([image_path] * len(captions))
with tf.io.TFRecordWriter(file_name) as writer:
for example_idx in range(len(image_path_list)):
example = create_example(
image_path_list[example_idx], caption_list[example_idx]
)
writer.write(example.SerializeToString())
return example_idx + 1
def write_data(image_paths, num_files, files_prefix):
example_counter = 0
for file_idx in tqdm(range(num_files)):
file_name = files_prefix + "-%02d.tfrecord" % (file_idx)
start_idx = images_per_file * file_idx
end_idx = start_idx + images_per_file
example_counter += write_tfrecords(file_name, image_paths[start_idx:end_idx])
return example_counter
train_example_count = write_data(train_image_paths, num_train_files, train_files_prefix)
print(f"{train_example_count} training examples were written to tfrecord files.")
valid_example_count = write_data(valid_image_paths, num_valid_files, valid_files_prefix)
print(f"{valid_example_count} evaluation examples were written to tfrecord files.")
"""
### Create `tf.data.Dataset` for training and evaluation
"""
feature_description = {
"caption": tf.io.FixedLenFeature([], tf.string),
"raw_image": tf.io.FixedLenFeature([], tf.string),
}
def read_example(example):
features = tf.io.parse_single_example(example, feature_description)
raw_image = features.pop("raw_image")
features["image"] = tf.image.resize(
tf.image.decode_jpeg(raw_image, channels=3), size=(299, 299)
)
return features
def get_dataset(file_pattern, batch_size):
return (
tf.data.TFRecordDataset(tf.data.Dataset.list_files(file_pattern))
.map(
read_example,
num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False,
)
.shuffle(batch_size * 10)
.prefetch(buffer_size=tf.data.AUTOTUNE)
.batch(batch_size)
)
"""
## Implement the projection head
The projection head is used to transform the image and the text embeddings to
the same embedding space with the same dimensionality.
"""
def project_embeddings(
embeddings, num_projection_layers, projection_dims, dropout_rate
):
projected_embeddings = layers.Dense(units=projection_dims)(embeddings)
for _ in range(num_projection_layers):
x = tf.nn.gelu(projected_embeddings)
x = layers.Dense(projection_dims)(x)
x = layers.Dropout(dropout_rate)(x)
x = layers.Add()([projected_embeddings, x])
projected_embeddings = layers.LayerNormalization()(x)
return projected_embeddings
"""
## Implement the vision encoder
In this example, we use [Xception](https://keras.io/api/applications/xception/)
from [Keras Applications](https://keras.io/api/applications/) as the base for the
vision encoder.
"""
def create_vision_encoder(
num_projection_layers, projection_dims, dropout_rate, trainable=False
):
# Load the pre-trained Xception model to be used as the base encoder.
xception = keras.applications.Xception(
include_top=False, weights="imagenet", pooling="avg"
)
# Set the trainability of the base encoder.
for layer in xception.layers:
layer.trainable = trainable
# Receive the images as inputs.
inputs = layers.Input(shape=(299, 299, 3), name="image_input")
# Preprocess the input image.
xception_input = tf.keras.applications.xception.preprocess_input(inputs)
# Generate the embeddings for the images using the xception model.
embeddings = xception(xception_input)
# Project the embeddings produced by the model.
outputs = project_embeddings(
embeddings, num_projection_layers, projection_dims, dropout_rate
)
# Create the vision encoder model.
return keras.Model(inputs, outputs, name="vision_encoder")
"""
## Implement the text encoder
We use [BERT](https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-256_A-4/1)
from [TensorFlow Hub](https://tfhub.dev) as the text encoder
"""
def create_text_encoder(
num_projection_layers, projection_dims, dropout_rate, trainable=False
):
# Load the BERT preprocessing module.
preprocess = hub.KerasLayer(
"https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/2",
name="text_preprocessing",
)
# Load the pre-trained BERT model to be used as the base encoder.
bert = hub.KerasLayer(
"https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-512_A-8/1",
"bert",
)
# Set the trainability of the base encoder.
bert.trainable = trainable
# Receive the text as inputs.
inputs = layers.Input(shape=(), dtype=tf.string, name="text_input")
# Preprocess the text.
bert_inputs = preprocess(inputs)
# Generate embeddings for the preprocessed text using the BERT model.
embeddings = bert(bert_inputs)["pooled_output"]
# Project the embeddings produced by the model.
outputs = project_embeddings(
embeddings, num_projection_layers, projection_dims, dropout_rate
)
# Create the text encoder model.
return keras.Model(inputs, outputs, name="text_encoder")
"""
## Implement the dual encoder
To calculate the loss, we compute the pairwise dot-product similarity between
each `caption_i` and `images_j` in the batch as the predictions.
The target similarity between `caption_i` and `image_j` is computed as
the average of the (dot-product similarity between `caption_i` and `caption_j`)
and (the dot-product similarity between `image_i` and `image_j`).
Then, we use crossentropy to compute the loss between the targets and the predictions.
"""
class DualEncoder(keras.Model):
def __init__(self, text_encoder, image_encoder, temperature=1.0, **kwargs):
super().__init__(**kwargs)
self.text_encoder = text_encoder
self.image_encoder = image_encoder
self.temperature = temperature
self.loss_tracker = keras.metrics.Mean(name="loss")
@property
def metrics(self):
return [self.loss_tracker]
def call(self, features, training=False):
# Place each encoder on a separate GPU (if available).
# TF will fall back to available devices if there are fewer than 2 GPUs.
with tf.device("/gpu:0"):
# Get the embeddings for the captions.
caption_embeddings = self.text_encoder(features["caption"], training=training)
with tf.device("/gpu:1"):
# Get the embeddings for the images.
image_embeddings = self.image_encoder(features["image"], training=training)
return caption_embeddings, image_embeddings
def compute_loss(self, caption_embeddings, image_embeddings):
# logits[i][j] is the dot_similarity(caption_i, image_j).
logits = (
tf.matmul(caption_embeddings, image_embeddings, transpose_b=True)
/ self.temperature
)
# images_similarity[i][j] is the dot_similarity(image_i, image_j).
images_similarity = tf.matmul(
image_embeddings, image_embeddings, transpose_b=True
)
# captions_similarity[i][j] is the dot_similarity(caption_i, caption_j).
captions_similarity = tf.matmul(
caption_embeddings, caption_embeddings, transpose_b=True
)
# targets[i][j] = average of dot_similarity(caption_i, caption_j) and dot_similarity(image_i, image_j).
targets = keras.activations.softmax(
(captions_similarity + images_similarity) / (2 * self.temperature)
)
# Compute the loss for the captions using crossentropy
captions_loss = keras.losses.categorical_crossentropy(
y_true=targets, y_pred=logits, from_logits=True
)
# Compute the loss for the images using crossentropy
images_loss = keras.losses.categorical_crossentropy(
y_true=tf.transpose(targets), y_pred=tf.transpose(logits), from_logits=True
)
# Return the average of the caption and image losses.
return (captions_loss + images_loss) / 2
def train_step(self, features):
with tf.GradientTape() as tape:
# Forward pass
caption_embeddings, image_embeddings = self(features, training=True)
loss = self.compute_loss(caption_embeddings, image_embeddings)
# Backward pass
gradients = tape.gradient(loss, self.trainable_variables)
self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))
# Monitor loss
self.loss_tracker.update_state(loss)
return {"loss": self.loss_tracker.result()}
def test_step(self, features):
caption_embeddings, image_embeddings = self(features, training=False)
loss = self.compute_loss(caption_embeddings, image_embeddings)
self.loss_tracker.update_state(loss)
return {"loss": self.loss_tracker.result()}
"""
## Train the dual encoder model
In this experiment, we freeze the base encoders for text and images, and make only
the projection head trainable.
"""
num_epochs = 5 # In practice, train for at least 30 epochs
batch_size = 256
vision_encoder = create_vision_encoder(
num_projection_layers=1, projection_dims=256, dropout_rate=0.1
)
text_encoder = create_text_encoder(
num_projection_layers=1, projection_dims=256, dropout_rate=0.1
)
dual_encoder = DualEncoder(text_encoder, vision_encoder, temperature=0.05)
dual_encoder.compile(
optimizer=tfa.optimizers.AdamW(learning_rate=0.001, weight_decay=0.001)
)
"""
Note that training the model with 60,000 image-caption pairs, with a batch size of 256,
takes around 12 minutes per epoch using a V100 GPU accelerator. If 2 GPUs are available,
the epoch takes around 8 minutes.
"""
print(f"Number of GPUs: {len(tf.config.list_physical_devices('GPU'))}")
print(f"Number of examples (caption-image pairs): {train_example_count}")
print(f"Batch size: {batch_size}")
print(f"Steps per epoch: {int(np.ceil(train_example_count / batch_size))}")
train_dataset = get_dataset(os.path.join(tfrecords_dir, "train-*.tfrecord"), batch_size)
valid_dataset = get_dataset(os.path.join(tfrecords_dir, "valid-*.tfrecord"), batch_size)
# Create a learning rate scheduler callback.
reduce_lr = keras.callbacks.ReduceLROnPlateau(
monitor="val_loss", factor=0.2, patience=3
)
# Create an early stopping callback.
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor="val_loss", patience=5, restore_best_weights=True
)
history = dual_encoder.fit(
train_dataset,
epochs=num_epochs,
validation_data=valid_dataset,
callbacks=[reduce_lr, early_stopping],
)
print("Training completed. Saving vision and text encoders...")
vision_encoder.save("vision_encoder")
text_encoder.save("text_encoder")
print("Models are saved.")
"""
Plotting the training loss:
"""
plt.plot(history.history["loss"])
plt.plot(history.history["val_loss"])
plt.ylabel("Loss")
plt.xlabel("Epoch")
plt.legend(["train", "valid"], loc="upper right")
plt.show()
"""
## Search for images using natural language queries
We can then retrieve images corresponding to natural language queries via
the following steps:
1. Generate embeddings for the images by feeding them into the `vision_encoder`.
2. Feed the natural language query to the `text_encoder` to generate a query embedding.
3. Compute the similarity between the query embedding and the image embeddings
in the index to retrieve the indices of the top matches.
4. Look up the paths of the top matching images to display them.
Note that, after training the `dual_encoder`, only the fine-tuned `vision_encoder`
and `text_encoder` models will be used, while the `dual_encoder` model will be discarded.
"""
"""
### Generate embeddings for the images
We load the images and feed them into the `vision_encoder` to generate their embeddings.
In large scale systems, this step is performed using a parallel data processing framework,
such as [Apache Spark](https://spark.apache.org) or [Apache Beam](https://beam.apache.org).
Generating the image embeddings may take several minutes.
"""
print("Loading vision and text encoders...")
vision_encoder = keras.models.load_model("vision_encoder")
text_encoder = keras.models.load_model("text_encoder")
print("Models are loaded.")
def read_image(image_path):
image_array = tf.image.decode_jpeg(tf.io.read_file(image_path), channels=3)
return tf.image.resize(image_array, (299, 299))
print(f"Generating embeddings for {len(image_paths)} images...")
image_embeddings = vision_encoder.predict(
tf.data.Dataset.from_tensor_slices(image_paths).map(read_image).batch(batch_size),
verbose=1,
)
print(f"Image embeddings shape: {image_embeddings.shape}.")
"""
### Retrieve relevant images
In this example, we use exact matching by computing the dot product similarity
between the input query embedding and the image embeddings, and retrieve the top k
matches. However, *approximate* similarity matching, using frameworks like
[ScaNN](https://github.com/google-research/google-research/tree/master/scann),
[Annoy](https://github.com/spotify/annoy), or [Faiss](https://github.com/facebookresearch/faiss)
is preferred in real-time use cases to scale with a large number of images.
"""
def find_matches(image_embeddings, queries, k=9, normalize=True):
# Get the embedding for the query.
query_embedding = text_encoder(tf.convert_to_tensor(queries))
# Normalize the query and the image embeddings.
if normalize:
image_embeddings = tf.math.l2_normalize(image_embeddings, axis=1)
query_embedding = tf.math.l2_normalize(query_embedding, axis=1)
# Compute the dot product between the query and the image embeddings.
dot_similarity = tf.matmul(query_embedding, image_embeddings, transpose_b=True)
# Retrieve top k indices.
results = tf.math.top_k(dot_similarity, k).indices.numpy()
# Return matching image paths.
return [[image_paths[idx] for idx in indices] for indices in results]
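"""
For reference, here is a minimal sketch of how an approximate index could be built with
[Annoy](https://github.com/spotify/annoy) instead of the exact search above. It assumes
the `annoy` package is installed, which is not a dependency of this example, so the
snippet is left commented out.
"""
# from annoy import AnnoyIndex
#
# # Build one index entry per image embedding, using cosine-style ("angular") similarity.
# annoy_index = AnnoyIndex(image_embeddings.shape[1], "angular")
# for item_idx, item_embedding in enumerate(image_embeddings):
#     annoy_index.add_item(item_idx, item_embedding)
# annoy_index.build(10)  # More trees give higher precision at query time.
# # Retrieve the 9 approximate nearest neighbours of a text query embedding.
# query_embedding = text_encoder(tf.convert_to_tensor(["a plate of healthy food"]))[0]
# print(annoy_index.get_nns_by_vector(query_embedding.numpy(), 9))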
"""
Set the `query` variable to the type of images you want to search for.
Try things like: 'a plate of healthy food',
'a woman wearing a hat is walking down a sidewalk',
'a bird sits near to the water', or 'wild animals are standing in a field'.
"""
query = "a family standing next to the ocean on a sandy beach with a surf board"
matches = find_matches(image_embeddings, [query], normalize=True)[0]
plt.figure(figsize=(20, 20))
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(mpimg.imread(matches[i]))
plt.axis("off")
"""
## Evaluate the retrieval quality
To evaluate the dual encoder model, we use the captions as queries.
We use the out-of-training-sample images and captions to evaluate the retrieval quality,
using top k accuracy. A true prediction is counted if, for a given caption, its associated image
is retrieved within the top k matches.
"""
def compute_top_k_accuracy(image_paths, k=100):
hits = 0
num_batches = int(np.ceil(len(image_paths) / batch_size))
for idx in tqdm(range(num_batches)):
start_idx = idx * batch_size
end_idx = start_idx + batch_size
current_image_paths = image_paths[start_idx:end_idx]
queries = [
image_path_to_caption[image_path][0] for image_path in current_image_paths
]
result = find_matches(image_embeddings, queries, k)
hits += sum(
[
image_path in matches
for (image_path, matches) in list(zip(current_image_paths, result))
]
)
return hits / len(image_paths)
print("Scoring training data...")
train_accuracy = compute_top_k_accuracy(train_image_paths)
print(f"Train accuracy: {round(train_accuracy * 100, 3)}%")
print("Scoring evaluation data...")
eval_accuracy = compute_top_k_accuracy(image_paths[train_size:])
print(f"Eval accuracy: {round(eval_accuracy * 100, 3)}%")
"""
## Final remarks
You can obtain better results by increasing the size of the training sample,
training for more epochs, exploring other base encoders for images and text,
setting the base encoders to be trainable, and tuning the hyperparameters,
especially the `temperature` for the softmax in the loss computation.
Example available on HuggingFace
| Trained Model | Demo |
| :--: | :--: |
| [![Generic badge](https://img.shields.io/badge/%F0%9F%A4%97%20Model-nl%20image%20search-black.svg)](https://huggingface.co/keras-io/dual-encoder-image-search) | [![Generic badge](https://img.shields.io/badge/%F0%9F%A4%97%20Spaces-nl%20image%20search-black.svg)](https://huggingface.co/spaces/keras-io/dual-encoder-image-search) |
"""
| keras-io/examples/vision/nl_image_search.py/0 | {
"file_path": "keras-io/examples/vision/nl_image_search.py",
"repo_id": "keras-io",
"token_count": 8032
} | 126 |
"""
Title: A Vision Transformer without Attention
Author: [Aritra Roy Gosthipaty](https://twitter.com/ariG23498), [Ritwik Raha](https://twitter.com/ritwik_raha), [Shivalika Singh](https://www.linkedin.com/in/shivalika-singh/)
Date created: 2022/02/24
Last modified: 2022/10/15
Description: A minimal implementation of ShiftViT.
Accelerator: GPU
"""
"""
## Introduction
[Vision Transformers](https://arxiv.org/abs/2010.11929) (ViTs) have sparked a wave of
research at the intersection of Transformers and Computer Vision (CV).
ViTs can simultaneously model long- and short-range dependencies, thanks to
the Multi-Head Self-Attention mechanism in the Transformer block. Many researchers believe
that the success of ViTs is purely due to the attention layer, and they seldom
think about other parts of the ViT model.
In the academic paper
[When Shift Operation Meets Vision Transformer: An Extremely Simple Alternative to Attention Mechanism](https://arxiv.org/abs/2201.10801)
the authors propose to demystify the success of ViTs with the introduction of a **NO
PARAMETER** operation in place of the attention operation. They swap the attention
operation with a shifting operation.
In this example, we minimally implement the paper with close alignment to the authors'
[official implementation](https://github.com/microsoft/SPACH/blob/main/models/shiftvit.py).
This example requires TensorFlow 2.9 or higher, as well as TensorFlow Addons, which can
be installed using the following command:
"""
"""shell
pip install -qq -U tensorflow-addons
"""
"""
## Setup and imports
"""
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow_addons as tfa
import pathlib
import glob
# Setting seed for reproducibility
SEED = 42
keras.utils.set_random_seed(SEED)
"""
## Hyperparameters
These are the hyperparameters that we have chosen for the experiment.
Please feel free to tune them.
"""
class Config(object):
# DATA
batch_size = 256
buffer_size = batch_size * 2
input_shape = (32, 32, 3)
num_classes = 10
# AUGMENTATION
image_size = 48
# ARCHITECTURE
patch_size = 4
projected_dim = 96
num_shift_blocks_per_stages = [2, 4, 8, 2]
epsilon = 1e-5
stochastic_depth_rate = 0.2
mlp_dropout_rate = 0.2
num_div = 12
shift_pixel = 1
mlp_expand_ratio = 2
# OPTIMIZER
lr_start = 1e-5
lr_max = 1e-3
weight_decay = 1e-4
# TRAINING
epochs = 100
# INFERENCE
label_map = {
0: "airplane",
1: "automobile",
2: "bird",
3: "cat",
4: "deer",
5: "dog",
6: "frog",
7: "horse",
8: "ship",
9: "truck",
}
tf_ds_batch_size = 20
config = Config()
"""
## Load the CIFAR-10 dataset
We use the CIFAR-10 dataset for our experiments.
"""
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
(x_train, y_train), (x_val, y_val) = (
(x_train[:40000], y_train[:40000]),
(x_train[40000:], y_train[40000:]),
)
print(f"Training samples: {len(x_train)}")
print(f"Validation samples: {len(x_val)}")
print(f"Testing samples: {len(x_test)}")
AUTO = tf.data.AUTOTUNE
train_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_ds = train_ds.shuffle(config.buffer_size).batch(config.batch_size).prefetch(AUTO)
val_ds = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_ds = val_ds.batch(config.batch_size).prefetch(AUTO)
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test))
test_ds = test_ds.batch(config.batch_size).prefetch(AUTO)
"""
## Data Augmentation
The augmentation pipeline consists of:
- Rescaling
- Resizing
- Random cropping
- Random horizontal flipping
_Note_: The image data augmentation layers do not apply
data transformations at inference time. This means that
when these layers are called with `training=False` they
behave differently. Refer to the
[documentation](https://keras.io/api/layers/preprocessing_layers/image_augmentation/)
for more details.
"""
def get_augmentation_model():
"""Build the data augmentation model."""
data_augmentation = keras.Sequential(
[
layers.Resizing(config.input_shape[0] + 20, config.input_shape[0] + 20),
layers.RandomCrop(config.image_size, config.image_size),
layers.RandomFlip("horizontal"),
layers.Rescaling(1 / 255.0),
]
)
return data_augmentation
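"""
As a quick check of the note above: with `training=False` the random layers are inert,
so two passes over the same image produce identical outputs. The small sanity check
below uses the first training image purely for illustration.
"""
demo_augmenter = get_augmentation_model()
demo_image = x_train[:1]
demo_out_1 = demo_augmenter(demo_image, training=False)
demo_out_2 = demo_augmenter(demo_image, training=False)
print("Inference passes identical:", bool(tf.reduce_all(demo_out_1 == demo_out_2)))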
"""
## The ShiftViT architecture
In this section, we build the architecture proposed in
[the ShiftViT paper](https://arxiv.org/abs/2201.10801).
| ![ShiftViT Architecture](https://i.imgur.com/CHU40HX.png) |
| :--: |
| Figure 1: The entire architecture of ShiftViT.
[Source](https://arxiv.org/abs/2201.10801) |
The architecture as shown in Fig. 1, is inspired by
[Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030).
Here the authors propose a modular architecture with 4 stages. Each stage works on its
own spatial size, creating a hierarchical architecture.
An input image of size `HxWx3` is split into non-overlapping patches of size `4x4`.
This is done via the patchify layer which results in individual tokens of feature size `48`
(`4x4x3`). Each stage comprises two parts:
1. Embedding Generation
2. Stacked Shift Blocks
We discuss the stages and the modules in detail in what follows.
_Note_: Compared to the [official implementation](https://github.com/microsoft/SPACH/blob/main/models/shiftvit.py)
we restructure some key components to better fit the Keras API.
"""
"""
### The ShiftViT Block
| ![ShiftViT block](https://i.imgur.com/IDe35vo.gif) |
| :--: |
| Figure 2: From the Model to a Shift Block. |
Each stage in the ShiftViT architecture comprises a Shift Block, as shown in Fig. 2.
| ![Shift Vit Block](https://i.imgur.com/0q13pLu.png) |
| :--: |
| Figure 3: The Shift ViT Block. [Source](https://arxiv.org/abs/2201.10801) |
The Shift Block, as shown in Fig. 3, comprises the following:
1. Shift Operation
2. Layer Normalization
3. MLP Layer
"""
"""
#### The MLP block
The MLP block is intended to be a stack of densely-connected layers.
"""
class MLP(layers.Layer):
"""Get the MLP layer for each shift block.
Args:
mlp_expand_ratio (int): The ratio with which the first feature map is expanded.
mlp_dropout_rate (float): The rate for dropout.
"""
def __init__(self, mlp_expand_ratio, mlp_dropout_rate, **kwargs):
super().__init__(**kwargs)
self.mlp_expand_ratio = mlp_expand_ratio
self.mlp_dropout_rate = mlp_dropout_rate
def build(self, input_shape):
input_channels = input_shape[-1]
initial_filters = int(self.mlp_expand_ratio * input_channels)
self.mlp = keras.Sequential(
[
layers.Dense(
units=initial_filters,
activation=tf.nn.gelu,
),
layers.Dropout(rate=self.mlp_dropout_rate),
layers.Dense(units=input_channels),
layers.Dropout(rate=self.mlp_dropout_rate),
]
)
def call(self, x):
x = self.mlp(x)
return x
"""
#### The DropPath layer
Stochastic depth is a regularization technique that randomly drops a set of
layers. During inference, the layers are kept as they are. It is very
similar to Dropout, but it operates on a block of layers rather
than on individual nodes present inside a layer.
"""
class DropPath(layers.Layer):
"""Drop Path also known as the Stochastic Depth layer.
Reference:
- https://keras.io/examples/vision/cct/#stochastic-depth-for-regularization
- https://github.com/rwightman/pytorch-image-models
"""
def __init__(self, drop_path_prob, **kwargs):
super().__init__(**kwargs)
self.drop_path_prob = drop_path_prob
def call(self, x, training=False):
if training:
keep_prob = 1 - self.drop_path_prob
shape = (tf.shape(x)[0],) + (1,) * (len(tf.shape(x)) - 1)
random_tensor = keep_prob + tf.random.uniform(shape, 0, 1)
random_tensor = tf.floor(random_tensor)
return (x / keep_prob) * random_tensor
return x
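"""
A quick illustration of `DropPath` on a made-up batch: during training, some samples are
zeroed out entirely while the survivors are scaled by `1 / keep_prob`; at inference, the
input passes through unchanged. The toy values below are purely for illustration.
"""
toy_batch = tf.ones((4, 2))
toy_drop_path = DropPath(drop_path_prob=0.5)
print("Training:\n", toy_drop_path(toy_batch, training=True).numpy())
print("Inference:\n", toy_drop_path(toy_batch, training=False).numpy())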
"""
#### Block
The most important operation in this paper is the **shift operation**. In this section,
we describe the shift operation and compare it with its original implementation provided
by the authors.
A generic feature map is assumed to have the shape `[N, H, W, C]`. Here we choose a
`num_div` parameter that decides the division size of the channels. The first 4 divisions
are shifted (by 1 pixel) in the left, right, up, and down directions. The remaining splits
are kept as is. After partial shifting, the shifted channels are padded and the overflowing
pixels are chopped off. This completes the partial shifting operation.
In the original implementation, the code is approximately:
```python
out[:, g * 0:g * 1, :, :-1] = x[:, g * 0:g * 1, :, 1:] # shift left
out[:, g * 1:g * 2, :, 1:] = x[:, g * 1:g * 2, :, :-1] # shift right
out[:, g * 2:g * 3, :-1, :] = x[:, g * 2:g * 3, 1:, :] # shift up
out[:, g * 3:g * 4, 1:, :] = x[:, g * 3:g * 4, :-1, :] # shift down
out[:, g * 4:, :, :] = x[:, g * 4:, :, :] # no shift
```
In TensorFlow it would be infeasible for us to assign shifted channels to a tensor in the
middle of the training process. This is why we have resorted to the following procedure:
1. Split the channels with the `num_div` parameter.
2. Select each of the first four splits and shift and pad them in the respective
directions.
3. After shifting and padding, we concatenate the channels back.
| ![Manim rendered animation for shift operation](https://i.imgur.com/PReeULP.gif) |
| :--: |
| Figure 4: The TensorFlow style shifting |
The entire procedure is illustrated in Fig. 4.
"""
class ShiftViTBlock(layers.Layer):
"""A unit ShiftViT Block
Args:
shift_pixel (int): The number of pixels to shift. Defaults to 1.
mlp_expand_ratio (int): The ratio with which MLP features are
expanded. Defaults to 2.
mlp_dropout_rate (float): The dropout rate used in MLP.
num_div (int): The number of divisions of the feature map's channels.
In total, 4/num_div of the channels will be shifted. Defaults to 12.
epsilon (float): Epsilon constant.
drop_path_prob (float): The drop probability for drop path.
"""
def __init__(
self,
epsilon,
drop_path_prob,
mlp_dropout_rate,
num_div=12,
shift_pixel=1,
mlp_expand_ratio=2,
**kwargs,
):
super().__init__(**kwargs)
self.shift_pixel = shift_pixel
self.mlp_expand_ratio = mlp_expand_ratio
self.mlp_dropout_rate = mlp_dropout_rate
self.num_div = num_div
self.epsilon = epsilon
self.drop_path_prob = drop_path_prob
def build(self, input_shape):
self.H = input_shape[1]
self.W = input_shape[2]
self.C = input_shape[3]
self.layer_norm = layers.LayerNormalization(epsilon=self.epsilon)
self.drop_path = (
DropPath(drop_path_prob=self.drop_path_prob)
if self.drop_path_prob > 0.0
else layers.Activation("linear")
)
self.mlp = MLP(
mlp_expand_ratio=self.mlp_expand_ratio,
mlp_dropout_rate=self.mlp_dropout_rate,
)
def get_shift_pad(self, x, mode):
"""Shifts the channels according to the mode chosen."""
if mode == "left":
offset_height = 0
offset_width = 0
target_height = 0
target_width = self.shift_pixel
elif mode == "right":
offset_height = 0
offset_width = self.shift_pixel
target_height = 0
target_width = self.shift_pixel
elif mode == "up":
offset_height = 0
offset_width = 0
target_height = self.shift_pixel
target_width = 0
else:
offset_height = self.shift_pixel
offset_width = 0
target_height = self.shift_pixel
target_width = 0
crop = tf.image.crop_to_bounding_box(
x,
offset_height=offset_height,
offset_width=offset_width,
target_height=self.H - target_height,
target_width=self.W - target_width,
)
shift_pad = tf.image.pad_to_bounding_box(
crop,
offset_height=offset_height,
offset_width=offset_width,
target_height=self.H,
target_width=self.W,
)
return shift_pad
def call(self, x, training=False):
# Split the feature maps
x_splits = tf.split(x, num_or_size_splits=self.C // self.num_div, axis=-1)
# Shift the feature maps
x_splits[0] = self.get_shift_pad(x_splits[0], mode="left")
x_splits[1] = self.get_shift_pad(x_splits[1], mode="right")
x_splits[2] = self.get_shift_pad(x_splits[2], mode="up")
x_splits[3] = self.get_shift_pad(x_splits[3], mode="down")
# Concatenate the shifted and unshifted feature maps
x = tf.concat(x_splits, axis=-1)
# Add the residual connection
shortcut = x
x = shortcut + self.drop_path(self.mlp(self.layer_norm(x)), training=training)
return x
"""
### The ShiftViT blocks
| ![Shift Blocks](https://i.imgur.com/FKy5NnD.png) |
| :--: |
| Figure 5: Shift Blocks in the architecture. [Source](https://arxiv.org/abs/2201.10801) |
Each stage of the architecture has shift blocks as shown in Fig. 5. Each of these blocks
contains a variable number of stacked ShiftViT blocks (as built in the earlier section).
Shift blocks are followed by a PatchMerging layer that scales down the feature inputs. The
PatchMerging layer helps build the pyramidal structure of the model.
"""
"""
#### The PatchMerging layer
This layer merges adjacent tokens. It scales the feature maps down
spatially while increasing the number of channels. We use a Conv2D layer to merge the
patches.
"""
class PatchMerging(layers.Layer):
"""The Patch Merging layer.
Args:
epsilon (float): The epsilon constant.
"""
def __init__(self, epsilon, **kwargs):
super().__init__(**kwargs)
self.epsilon = epsilon
def build(self, input_shape):
filters = 2 * input_shape[-1]
self.reduction = layers.Conv2D(
filters=filters, kernel_size=2, strides=2, padding="same", use_bias=False
)
self.layer_norm = layers.LayerNormalization(epsilon=self.epsilon)
def call(self, x):
# Apply the patch merging algorithm on the feature maps
x = self.layer_norm(x)
x = self.reduction(x)
return x
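"""
A quick shape check of `PatchMerging` on a dummy tensor (illustration only): the spatial
resolution is halved while the number of channels is doubled.
"""
dummy_tokens = tf.zeros((1, 12, 12, config.projected_dim))
print("Merged shape:", PatchMerging(epsilon=config.epsilon)(dummy_tokens).shape)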
"""
#### Stacked Shift Blocks
Each stage will have a variable number of stacked ShiftViT Blocks, as suggested in
the paper. This is a generic layer that will contain the stacked shift vit blocks
with the patch merging layer as well. Combining the two operations (shift ViT
block and patch merging) is a design choice we picked for better code reusability.
"""
# Note: This layer will have a different depth of stacking
# for different stages on the model.
class StackedShiftBlocks(layers.Layer):
"""The layer containing stacked ShiftViTBlocks.
Args:
epsilon (float): The epsilon constant.
mlp_dropout_rate (float): The dropout rate used in the MLP block.
num_shift_blocks (int): The number of shift vit blocks for this stage.
stochastic_depth_rate (float): The maximum drop path rate chosen.
is_merge (boolean): A flag that determines the use of the Patch Merge
layer after the shift vit blocks.
num_div (int): The division of channels of the feature map. Defaults to 12.
shift_pixel (int): The number of pixels to shift. Defaults to 1.
mlp_expand_ratio (int): The ratio with which the initial dense layer of
the MLP is expanded. Defaults to 2.
"""
def __init__(
self,
epsilon,
mlp_dropout_rate,
num_shift_blocks,
stochastic_depth_rate,
is_merge,
num_div=12,
shift_pixel=1,
mlp_expand_ratio=2,
**kwargs,
):
super().__init__(**kwargs)
self.epsilon = epsilon
self.mlp_dropout_rate = mlp_dropout_rate
self.num_shift_blocks = num_shift_blocks
self.stochastic_depth_rate = stochastic_depth_rate
self.is_merge = is_merge
self.num_div = num_div
self.shift_pixel = shift_pixel
self.mlp_expand_ratio = mlp_expand_ratio
def build(self, input_shapes):
# Calculate stochastic depth probabilities.
# Reference: https://keras.io/examples/vision/cct/#the-final-cct-model
dpr = [
x
for x in np.linspace(
start=0, stop=self.stochastic_depth_rate, num=self.num_shift_blocks
)
]
# Build the shift blocks as a list of ShiftViT Blocks
self.shift_blocks = list()
for num in range(self.num_shift_blocks):
self.shift_blocks.append(
ShiftViTBlock(
num_div=self.num_div,
epsilon=self.epsilon,
drop_path_prob=dpr[num],
mlp_dropout_rate=self.mlp_dropout_rate,
shift_pixel=self.shift_pixel,
mlp_expand_ratio=self.mlp_expand_ratio,
)
)
if self.is_merge:
self.patch_merge = PatchMerging(epsilon=self.epsilon)
def call(self, x, training=False):
for shift_block in self.shift_blocks:
x = shift_block(x, training=training)
if self.is_merge:
x = self.patch_merge(x)
return x
# Since this is a custom layer, we need to override get_config()
# so that the model can be easily saved & loaded after training.
def get_config(self):
config = super().get_config()
config.update(
{
"epsilon": self.epsilon,
"mlp_dropout_rate": self.mlp_dropout_rate,
"num_shift_blocks": self.num_shift_blocks,
"stochastic_depth_rate": self.stochastic_depth_rate,
"is_merge": self.is_merge,
"num_div": self.num_div,
"shift_pixel": self.shift_pixel,
"mlp_expand_ratio": self.mlp_expand_ratio,
}
)
return config
"""
## The ShiftViT model
Build the ShiftViT custom model.
"""
class ShiftViTModel(keras.Model):
"""The ShiftViT Model.
Args:
data_augmentation (keras.Model): A data augmentation model.
projected_dim (int): The dimension to which the patches of the image are
projected.
patch_size (int): The patch size of the images.
num_shift_blocks_per_stages (list[int]): A list with the number of shift
blocks for each stage.
epsilon (float): The epsilon constant.
mlp_dropout_rate (float): The dropout rate used in the MLP block.
stochastic_depth_rate (float): The maximum drop rate probability.
num_div (int): The number of divisions of the channels of the feature
map. Defaults to 12.
shift_pixel (int): The number of pixels to shift. Defaults to 1.
mlp_expand_ratio (int): The ratio by which the initial MLP dense layer
is expanded. Defaults to 2.
"""
def __init__(
self,
data_augmentation,
projected_dim,
patch_size,
num_shift_blocks_per_stages,
epsilon,
mlp_dropout_rate,
stochastic_depth_rate,
num_div=12,
shift_pixel=1,
mlp_expand_ratio=2,
**kwargs,
):
super().__init__(**kwargs)
self.data_augmentation = data_augmentation
self.patch_projection = layers.Conv2D(
filters=projected_dim,
kernel_size=patch_size,
strides=patch_size,
padding="same",
)
self.stages = list()
for index, num_shift_blocks in enumerate(num_shift_blocks_per_stages):
if index == len(num_shift_blocks_per_stages) - 1:
# This is the last stage, do not use the patch merge here.
is_merge = False
else:
is_merge = True
# Build the stages.
self.stages.append(
StackedShiftBlocks(
epsilon=epsilon,
mlp_dropout_rate=mlp_dropout_rate,
num_shift_blocks=num_shift_blocks,
stochastic_depth_rate=stochastic_depth_rate,
is_merge=is_merge,
num_div=num_div,
shift_pixel=shift_pixel,
mlp_expand_ratio=mlp_expand_ratio,
)
)
self.global_avg_pool = layers.GlobalAveragePooling2D()
self.classifier = layers.Dense(config.num_classes)
def get_config(self):
config = super().get_config()
config.update(
{
"data_augmentation": self.data_augmentation,
"patch_projection": self.patch_projection,
"stages": self.stages,
"global_avg_pool": self.global_avg_pool,
"classifier": self.classifier,
}
)
return config
def _calculate_loss(self, data, training=False):
(images, labels) = data
# Augment the images
augmented_images = self.data_augmentation(images, training=training)
# Create patches and project the patches.
projected_patches = self.patch_projection(augmented_images)
# Pass through the stages
x = projected_patches
for stage in self.stages:
x = stage(x, training=training)
# Get the logits.
x = self.global_avg_pool(x)
logits = self.classifier(x)
# Calculate the loss and return it.
total_loss = self.compiled_loss(labels, logits)
return total_loss, labels, logits
def train_step(self, inputs):
with tf.GradientTape() as tape:
total_loss, labels, logits = self._calculate_loss(
data=inputs, training=True
)
# Apply gradients.
train_vars = [
self.data_augmentation.trainable_variables,
self.patch_projection.trainable_variables,
self.global_avg_pool.trainable_variables,
self.classifier.trainable_variables,
]
train_vars = train_vars + [stage.trainable_variables for stage in self.stages]
# Optimize the gradients.
grads = tape.gradient(total_loss, train_vars)
trainable_variable_list = []
for grad, var in zip(grads, train_vars):
for g, v in zip(grad, var):
trainable_variable_list.append((g, v))
self.optimizer.apply_gradients(trainable_variable_list)
# Update the metrics
self.compiled_metrics.update_state(labels, logits)
return {m.name: m.result() for m in self.metrics}
def test_step(self, data):
_, labels, logits = self._calculate_loss(data=data, training=False)
# Update the metrics
self.compiled_metrics.update_state(labels, logits)
return {m.name: m.result() for m in self.metrics}
def call(self, images):
augmented_images = self.data_augmentation(images)
x = self.patch_projection(augmented_images)
for stage in self.stages:
x = stage(x, training=False)
x = self.global_avg_pool(x)
logits = self.classifier(x)
return logits
"""
## Instantiate the model
"""
model = ShiftViTModel(
data_augmentation=get_augmentation_model(),
projected_dim=config.projected_dim,
patch_size=config.patch_size,
num_shift_blocks_per_stages=config.num_shift_blocks_per_stages,
epsilon=config.epsilon,
mlp_dropout_rate=config.mlp_dropout_rate,
stochastic_depth_rate=config.stochastic_depth_rate,
num_div=config.num_div,
shift_pixel=config.shift_pixel,
mlp_expand_ratio=config.mlp_expand_ratio,
)
"""
## Learning rate schedule
In many experiments, we want to warm up the model with a slowly increasing learning rate
and then cool down the model with a slowly decaying learning rate. In the warmup cosine
decay, the learning rate linearly increases for the warmup steps and then decays with a
cosine decay.
"""
# Some code is taken from:
# https://www.kaggle.com/ashusma/training-rfcx-tensorflow-tpu-effnet-b2.
class WarmUpCosine(keras.optimizers.schedules.LearningRateSchedule):
"""A LearningRateSchedule that uses a warmup cosine decay schedule."""
def __init__(self, lr_start, lr_max, warmup_steps, total_steps):
"""
Args:
lr_start: The initial learning rate
lr_max: The maximum learning rate to which lr should increase to in
the warmup steps
warmup_steps: The number of steps for which the model warms up
total_steps: The total number of steps for the model training
"""
super().__init__()
self.lr_start = lr_start
self.lr_max = lr_max
self.warmup_steps = warmup_steps
self.total_steps = total_steps
self.pi = tf.constant(np.pi)
def __call__(self, step):
# Check whether the total number of steps is larger than the warmup
# steps. If not, then throw a value error.
if self.total_steps < self.warmup_steps:
raise ValueError(
f"Total number of steps {self.total_steps} must be"
+ f"larger or equal to warmup steps {self.warmup_steps}."
)
# `cos_annealed_lr` is a graph that increases to 1 from the initial
# step to the warmup step. After that this graph decays to -1 at the
# final step mark.
cos_annealed_lr = tf.cos(
self.pi
* (tf.cast(step, tf.float32) - self.warmup_steps)
/ tf.cast(self.total_steps - self.warmup_steps, tf.float32)
)
# Shift the `cos_annealed_lr` graph up by 1. Now the graph goes
# from 0 to 2. Scale the graph by 0.5 so that it goes from 0
# to 1, then scale it by `lr_max` so that
# it goes from 0 to `lr_max`.
learning_rate = 0.5 * self.lr_max * (1 + cos_annealed_lr)
# Check whether warmup_steps is more than 0.
if self.warmup_steps > 0:
# Check whether lr_max is larger than lr_start. If not, throw a value
# error.
if self.lr_max < self.lr_start:
raise ValueError(
f"lr_start {self.lr_start} must be smaller or"
+ f"equal to lr_max {self.lr_max}."
)
# Calculate the slope with which the learning rate should increase
# in the warmup schedule. The formula for the slope is m = (b - a) / steps.
slope = (self.lr_max - self.lr_start) / self.warmup_steps
# With the formula for a straight line (y = mx+c) build the warmup
# schedule
warmup_rate = slope * tf.cast(step, tf.float32) + self.lr_start
# When the current step is less than the warmup steps, use the line
# graph. When the current step is greater than the warmup steps, use
# the scaled cosine graph.
learning_rate = tf.where(
step < self.warmup_steps, warmup_rate, learning_rate
)
# When the current step is more than the total steps, return 0; else return
# the calculated graph.
return tf.where(
step > self.total_steps, 0.0, learning_rate, name="learning_rate"
)
def get_config(self):
config = {
"lr_start": self.lr_start,
"lr_max": self.lr_max,
"total_steps": self.total_steps,
"warmup_steps": self.warmup_steps,
}
return config
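"""
To build an intuition for the schedule, the short sketch below plots the learning rate
produced by `WarmUpCosine` over a hypothetical run. The step counts used here are made up
purely for illustration; the actual values are computed from the dataset size further
below.
"""
sample_schedule = WarmUpCosine(lr_start=1e-5, lr_max=1e-3, warmup_steps=1500, total_steps=10000)
sample_steps = tf.range(10000, dtype=tf.float32)
plt.plot(sample_steps, sample_schedule(sample_steps))
plt.xlabel("Step")
plt.ylabel("Learning rate")
plt.show()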
"""
## Compile and train the model
"""
# pass sample data to the model so that input shape is available at the time of
# saving the model
sample_ds, _ = next(iter(train_ds))
model(sample_ds, training=False)
# Get the total number of steps for training.
total_steps = int((len(x_train) / config.batch_size) * config.epochs)
# Calculate the number of steps for warmup.
warmup_epoch_percentage = 0.15
warmup_steps = int(total_steps * warmup_epoch_percentage)
# Initialize the warmup cosine schedule.
scheduled_lrs = WarmUpCosine(
lr_start=1e-5,
lr_max=1e-3,
warmup_steps=warmup_steps,
total_steps=total_steps,
)
# Get the optimizer.
optimizer = tfa.optimizers.AdamW(
learning_rate=scheduled_lrs, weight_decay=config.weight_decay
)
# Compile and pretrain the model.
model.compile(
optimizer=optimizer,
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[
keras.metrics.SparseCategoricalAccuracy(name="accuracy"),
keras.metrics.SparseTopKCategoricalAccuracy(5, name="top-5-accuracy"),
],
)
# Train the model
history = model.fit(
train_ds,
epochs=config.epochs,
validation_data=val_ds,
callbacks=[
keras.callbacks.EarlyStopping(
monitor="val_accuracy",
patience=5,
mode="auto",
)
],
)
# Evaluate the model with the test dataset.
print("TESTING")
loss, acc_top1, acc_top5 = model.evaluate(test_ds)
print(f"Loss: {loss:0.2f}")
print(f"Top 1 test accuracy: {acc_top1*100:0.2f}%")
print(f"Top 5 test accuracy: {acc_top5*100:0.2f}%")
"""
## Save trained model
Since we created the model by subclassing, we can't save it in HDF5 format.
It can only be saved in the TF SavedModel format, which is also the recommended format for saving models in general.
"""
model.save("ShiftViT")
"""
## Model inference
"""
"""
**Download sample data for inference**
"""
"""shell
wget -q 'https://tinyurl.com/2p9483sw' -O inference_set.zip
unzip -q inference_set.zip
"""
"""
**Load saved model**
"""
# Custom objects are not included when the model is saved.
# At loading time, these objects need to be passed for reconstruction of the model
saved_model = tf.keras.models.load_model(
"ShiftViT",
custom_objects={"WarmUpCosine": WarmUpCosine, "AdamW": tfa.optimizers.AdamW},
)
"""
**Utility functions for inference**
"""
def process_image(img_path):
# read image file from string path
img = tf.io.read_file(img_path)
# decode jpeg to uint8 tensor
img = tf.io.decode_jpeg(img, channels=3)
# resize image to match input size accepted by model
# use `method` as `nearest` to preserve dtype of input passed to `resize()`
img = tf.image.resize(
img, [config.input_shape[0], config.input_shape[1]], method="nearest"
)
return img
def create_tf_dataset(image_dir):
data_dir = pathlib.Path(image_dir)
# create tf.data dataset using directory of images
predict_ds = tf.data.Dataset.list_files(str(data_dir / "*.jpg"), shuffle=False)
# use map to convert string paths to uint8 image tensors
# setting `num_parallel_calls` helps in processing multiple images in parallel
predict_ds = predict_ds.map(process_image, num_parallel_calls=AUTO)
# create a Prefetch Dataset for better latency & throughput
predict_ds = predict_ds.batch(config.tf_ds_batch_size).prefetch(AUTO)
return predict_ds
def predict(predict_ds):
# ShiftViT model returns logits (non-normalized predictions)
logits = saved_model.predict(predict_ds)
# normalize predictions by calling softmax()
probabilities = tf.nn.softmax(logits)
return probabilities
def get_predicted_class(probabilities):
pred_label = np.argmax(probabilities)
predicted_class = config.label_map[pred_label]
return predicted_class
def get_confidence_scores(probabilities):
# get the indices of the probability scores sorted in descending order
labels = np.argsort(probabilities)[::-1]
confidences = {
config.label_map[label]: np.round((probabilities[label]) * 100, 2)
for label in labels
}
return confidences
"""
**Get predictions**
"""
img_dir = "inference_set"
predict_ds = create_tf_dataset(img_dir)
probabilities = predict(predict_ds)
print(f"probabilities: {probabilities[0]}")
confidences = get_confidence_scores(probabilities[0])
print(confidences)
"""
**View predictions**
"""
plt.figure(figsize=(10, 10))
for images in predict_ds:
for i in range(min(6, probabilities.shape[0])):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(images[i].numpy().astype("uint8"))
predicted_class = get_predicted_class(probabilities[i])
plt.title(predicted_class)
plt.axis("off")
"""
## Conclusion
The most impactful contribution of the paper is not the novel architecture, but
the idea that hierarchical ViTs trained with no attention can perform quite well. This
opens up the question of how essential attention is to the performance of ViTs.
For curious minds, we would suggest reading the
[ConvNeXt](https://arxiv.org/abs/2201.03545) paper, which focuses more on the training
paradigms and architectural details of ViTs than on providing a novel architecture
based on attention.
Acknowledgements:
- We would like to thank [PyImageSearch](https://pyimagesearch.com) for providing us with
resources that helped in the completion of this project.
- We would like to thank [JarvisLabs.ai](https://jarvislabs.ai/) for providing us with the
GPU credits.
- We would like to thank [Manim Community](https://www.manim.community/) for the manim
library.
- A personal note of thanks to [Puja Roychowdhury](https://twitter.com/pleb_talks) for
helping us with the Learning Rate Schedule.
"""
"""
**Example available on HuggingFace**
| Trained Model | Demo |
| :--: | :--: |
| [![Generic badge](https://img.shields.io/badge/%F0%9F%A4%97%20Model-ShiftViT-brightgreen)](https://huggingface.co/keras-io/shiftvit) | [![Generic badge](https://img.shields.io/badge/%F0%9F%A4%97%20Space-ShiftViT-brightgreen)](https://huggingface.co/spaces/keras-io/shiftvit) |
"""
| keras-io/examples/vision/shiftvit.py/0 | {
"file_path": "keras-io/examples/vision/shiftvit.py",
"repo_id": "keras-io",
"token_count": 14401
} | 127 |
<jupyter_start><jupyter_text>Customizing what happens in `fit()` with JAX**Author:** [fchollet](https://twitter.com/fchollet)**Date created:** 2023/06/27**Last modified:** 2023/06/27**Description:** Overriding the training step of the Model class with JAX. IntroductionWhen you're doing supervised learning, you can use `fit()` and everything workssmoothly.When you need to take control of every little detail, you can write your own trainingloop entirely from scratch.But what if you need a custom training algorithm, but you still want to benefit fromthe convenient features of `fit()`, such as callbacks, built-in distribution support,or step fusing?A core principle of Keras is **progressive disclosure of complexity**. You shouldalways be able to get into lower-level workflows in a gradual way. You shouldn't falloff a cliff if the high-level functionality doesn't exactly match your use case. Youshould be able to gain more control over the small details while retaining acommensurate amount of high-level convenience.When you need to customize what `fit()` does, you should **override the training stepfunction of the `Model` class**. This is the function that is called by `fit()` forevery batch of data. You will then be able to call `fit()` as usual -- and it will berunning your own learning algorithm.Note that this pattern does not prevent you from building models with the FunctionalAPI. You can do this whether you're building `Sequential` models, Functional APImodels, or subclassed models.Let's see how that works.<jupyter_code>!pip install keras==3.0.0 --upgrade --quiet<jupyter_output><empty_output><jupyter_text>Setup<jupyter_code>import os
# This guide can only be run with the JAX backend.
os.environ["KERAS_BACKEND"] = "jax"
import jax
import keras
import numpy as np<jupyter_output><empty_output><jupyter_text>A first simple exampleLet's start from a simple example:- We create a new class that subclasses `keras.Model`.- We implement a fully-stateless `compute_loss_and_updates()` methodto compute the loss as well as the updated values for the non-trainablevariables of the model. Internally, it calls `stateless_call()` andthe built-in `compute_loss()`.- We implement a fully-stateless `train_step()` method to compute currentmetric values (including the loss) as well as updated values for thetrainable variables, the optimizer variables, and the metric variables.Note that you can also take into account the `sample_weight` argument by:- Unpacking the data as `x, y, sample_weight = data`- Passing `sample_weight` to `compute_loss()`- Passing `sample_weight` alongside `y` and `y_pred`to metrics in `stateless_update_state()`<jupyter_code>class CustomModel(keras.Model):
def compute_loss_and_updates(
self,
trainable_variables,
non_trainable_variables,
x,
y,
training=False,
):
y_pred, non_trainable_variables = self.stateless_call(
trainable_variables,
non_trainable_variables,
x,
training=training,
)
loss = self.compute_loss(x, y, y_pred)
return loss, (y_pred, non_trainable_variables)
def train_step(self, state, data):
(
trainable_variables,
non_trainable_variables,
optimizer_variables,
metrics_variables,
) = state
x, y = data
# Get the gradient function.
grad_fn = jax.value_and_grad(self.compute_loss_and_updates, has_aux=True)
# Compute the gradients.
(loss, (y_pred, non_trainable_variables)), grads = grad_fn(
trainable_variables,
non_trainable_variables,
x,
y,
training=True,
)
# Update trainable variables and optimizer variables.
(
trainable_variables,
optimizer_variables,
) = self.optimizer.stateless_apply(
optimizer_variables, grads, trainable_variables
)
# Update metrics.
new_metrics_vars = []
for metric in self.metrics:
this_metric_vars = metrics_variables[
len(new_metrics_vars) : len(new_metrics_vars) + len(metric.variables)
]
if metric.name == "loss":
this_metric_vars = metric.stateless_update_state(this_metric_vars, loss)
else:
this_metric_vars = metric.stateless_update_state(
this_metric_vars, y, y_pred
)
logs = metric.stateless_result(this_metric_vars)
new_metrics_vars += this_metric_vars
# Return metric logs and updated state variables.
state = (
trainable_variables,
non_trainable_variables,
optimizer_variables,
new_metrics_vars,
)
return logs, state<jupyter_output><empty_output><jupyter_text>Let's try this out:<jupyter_code># Construct and compile an instance of CustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
# Just use `fit` as usual
x = np.random.random((1000, 32))
y = np.random.random((1000, 1))
model.fit(x, y, epochs=3)<jupyter_output><empty_output><jupyter_text>Going lower-levelNaturally, you could just skip passing a loss function in `compile()`, and instead doeverything *manually* in `train_step`. Likewise for metrics.Here's a lower-level example, that only uses `compile()` to configure the optimizer:<jupyter_code>class CustomModel(keras.Model):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.loss_tracker = keras.metrics.Mean(name="loss")
self.mae_metric = keras.metrics.MeanAbsoluteError(name="mae")
self.loss_fn = keras.losses.MeanSquaredError()
def compute_loss_and_updates(
self,
trainable_variables,
non_trainable_variables,
x,
y,
training=False,
):
y_pred, non_trainable_variables = self.stateless_call(
trainable_variables,
non_trainable_variables,
x,
training=training,
)
loss = self.loss_fn(y, y_pred)
return loss, (y_pred, non_trainable_variables)
def train_step(self, state, data):
(
trainable_variables,
non_trainable_variables,
optimizer_variables,
metrics_variables,
) = state
x, y = data
# Get the gradient function.
grad_fn = jax.value_and_grad(self.compute_loss_and_updates, has_aux=True)
# Compute the gradients.
(loss, (y_pred, non_trainable_variables)), grads = grad_fn(
trainable_variables,
non_trainable_variables,
x,
y,
training=True,
)
# Update trainable variables and optimizer variables.
(
trainable_variables,
optimizer_variables,
) = self.optimizer.stateless_apply(
optimizer_variables, grads, trainable_variables
)
# Update metrics.
loss_tracker_vars = metrics_variables[: len(self.loss_tracker.variables)]
mae_metric_vars = metrics_variables[len(self.loss_tracker.variables) :]
loss_tracker_vars = self.loss_tracker.stateless_update_state(
loss_tracker_vars, loss
)
mae_metric_vars = self.mae_metric.stateless_update_state(
mae_metric_vars, y, y_pred
)
logs = {}
logs[self.loss_tracker.name] = self.loss_tracker.stateless_result(
loss_tracker_vars
)
logs[self.mae_metric.name] = self.mae_metric.stateless_result(mae_metric_vars)
new_metrics_vars = loss_tracker_vars + mae_metric_vars
# Return metric logs and updated state variables.
state = (
trainable_variables,
non_trainable_variables,
optimizer_variables,
new_metrics_vars,
)
return logs, state
@property
def metrics(self):
# We list our `Metric` objects here so that `reset_state()` can be
# called automatically at the start of each epoch
# or at the start of `evaluate()`.
return [self.loss_tracker, self.mae_metric]
# Construct an instance of CustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)
# We don't pass a loss or metrics here.
model.compile(optimizer="adam")
# Just use `fit` as usual -- you can use callbacks, etc.
x = np.random.random((1000, 32))
y = np.random.random((1000, 1))
model.fit(x, y, epochs=5)<jupyter_output><empty_output><jupyter_text>Providing your own evaluation stepWhat if you want to do the same for calls to `model.evaluate()`? Then you wouldoverride `test_step` in exactly the same way. Here's what it looks like:<jupyter_code>class CustomModel(keras.Model):
def test_step(self, state, data):
# Unpack the data.
x, y = data
(
trainable_variables,
non_trainable_variables,
metrics_variables,
) = state
# Compute predictions and loss.
y_pred, non_trainable_variables = self.stateless_call(
trainable_variables,
non_trainable_variables,
x,
training=False,
)
loss = self.compute_loss(x, y, y_pred)
# Update metrics.
new_metrics_vars = []
for metric in self.metrics:
this_metric_vars = metrics_variables[
len(new_metrics_vars) : len(new_metrics_vars) + len(metric.variables)
]
if metric.name == "loss":
this_metric_vars = metric.stateless_update_state(this_metric_vars, loss)
else:
this_metric_vars = metric.stateless_update_state(
this_metric_vars, y, y_pred
)
logs = metric.stateless_result(this_metric_vars)
new_metrics_vars += this_metric_vars
# Return metric logs and updated state variables.
state = (
trainable_variables,
non_trainable_variables,
new_metrics_vars,
)
return logs, state
# Construct an instance of CustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)
model.compile(loss="mse", metrics=["mae"])
# Evaluate with our custom test_step
x = np.random.random((1000, 32))
y = np.random.random((1000, 1))
model.evaluate(x, y)<jupyter_output><empty_output> | keras-io/guides/ipynb/custom_train_step_in_jax.ipynb/0 | {
"file_path": "keras-io/guides/ipynb/custom_train_step_in_jax.ipynb",
"repo_id": "keras-io",
"token_count": 4565
} | 128 |
<jupyter_start><jupyter_text>Multi-GPU distributed training with JAX**Author:** [fchollet](https://twitter.com/fchollet)**Date created:** 2023/07/11**Last modified:** 2023/07/11**Description:** Guide to multi-GPU/TPU training for Keras models with JAX. IntroductionThere are generally two ways to distribute computation across multiple devices:**Data parallelism**, where a single model gets replicated on multiple devices ormultiple machines. Each of them processes different batches of data, then they mergetheir results. There exist many variants of this setup, that differ in how the differentmodel replicas merge results, in whether they stay in sync at every batch or whether theyare more loosely coupled, etc.**Model parallelism**, where different parts of a single model run on different devices,processing a single batch of data together. This works best with models that have anaturally-parallel architecture, such as models that feature multiple branches.This guide focuses on data parallelism, in particular **synchronous data parallelism**,where the different replicas of the model stay in sync after each batch they process.Synchronicity keeps the model convergence behavior identical to what you would see forsingle-device training.Specifically, this guide teaches you how to use `jax.sharding` APIs to train Kerasmodels, with minimal changes to your code, on multiple GPUs or TPUS (typically 2 to 16)installed on a single machine (single host, multi-device training). This is themost common setup for researchers and small-scale industry workflows. SetupLet's start by defining the function that creates the model that we will train,and the function that creates the dataset we will train on (MNIST in this case).<jupyter_code>import os
os.environ["KERAS_BACKEND"] = "jax"
import jax
import numpy as np
import tensorflow as tf
import keras
from jax.experimental import mesh_utils
from jax.sharding import Mesh
from jax.sharding import NamedSharding
from jax.sharding import PartitionSpec as P
def get_model():
# Make a simple convnet with batch normalization and dropout.
inputs = keras.Input(shape=(28, 28, 1))
x = keras.layers.Rescaling(1.0 / 255.0)(inputs)
x = keras.layers.Conv2D(filters=12, kernel_size=3, padding="same", use_bias=False)(
x
)
x = keras.layers.BatchNormalization(scale=False, center=True)(x)
x = keras.layers.ReLU()(x)
x = keras.layers.Conv2D(
filters=24,
kernel_size=6,
use_bias=False,
strides=2,
)(x)
x = keras.layers.BatchNormalization(scale=False, center=True)(x)
x = keras.layers.ReLU()(x)
x = keras.layers.Conv2D(
filters=32,
kernel_size=6,
padding="same",
strides=2,
name="large_k",
)(x)
x = keras.layers.BatchNormalization(scale=False, center=True)(x)
x = keras.layers.ReLU()(x)
x = keras.layers.GlobalAveragePooling2D()(x)
x = keras.layers.Dense(256, activation="relu")(x)
x = keras.layers.Dropout(0.5)(x)
outputs = keras.layers.Dense(10)(x)
model = keras.Model(inputs, outputs)
return model
def get_datasets():
# Load the data and split it between train and test sets
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Scale images to the [0, 1] range
x_train = x_train.astype("float32")
x_test = x_test.astype("float32")
# Make sure images have shape (28, 28, 1)
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
print("x_train shape:", x_train.shape)
print(x_train.shape[0], "train samples")
print(x_test.shape[0], "test samples")
# Create TF Datasets
train_data = tf.data.Dataset.from_tensor_slices((x_train, y_train))
eval_data = tf.data.Dataset.from_tensor_slices((x_test, y_test))
return train_data, eval_data<jupyter_output><empty_output><jupyter_text>Single-host, multi-device synchronous trainingIn this setup, you have one machine with several GPUs or TPUs on it (typically 2 to 16).Each device will run a copy of your model (called a **replica**). For simplicity, inwhat follows, we'll assume we're dealing with 8 GPUs, at no loss of generality.**How it works**At each step of training:- The current batch of data (called **global batch**) is split into 8 different sub-batches (called **local batches**). For instance, if the global batch has 512 samples, each of the 8 local batches will have 64 samples.- Each of the 8 replicas independently processes a local batch: they run a forward pass, then a backward pass, outputting the gradient of the weights with respect to the loss of the model on the local batch.- The weight updates originating from local gradients are efficiently merged across the 8 replicas. Because this is done at the end of every step, the replicas always stay in sync.In practice, the process of synchronously updating the weights of the model replicas ishandled at the level of each individual weight variable. This is done through a usinga `jax.sharding.NamedSharding` that is configured to replicate the variables.**How to use it**To do single-host, multi-device synchronous training with a Keras model, youwould use the `jax.sharding` features. Here's how it works:- We first create a device mesh using `mesh_utils.create_device_mesh`.- We use `jax.sharding.Mesh`, `jax.sharding.NamedSharding` and `jax.sharding.PartitionSpec` to define how to partition JAX arrays. - We specify that we want to replicate the model and optimizer variables across all devices by using a spec with no axis. - We specify that we want to shard the data across devices by using a spec that splits along the batch dimension.- We use `jax.device_put` to replicate the model and optimizer variables across devices. This happens once at the beginning.- In the training loop, for each batch that we process, we use `jax.device_put` to split the batch across devices before invoking the train step.Here's the flow, where each step is split into its own utility function:<jupyter_code># Config
num_epochs = 2
batch_size = 64
train_data, eval_data = get_datasets()
train_data = train_data.batch(batch_size, drop_remainder=True)
model = get_model()
optimizer = keras.optimizers.Adam(1e-3)
loss = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Initialize all state with .build()
(one_batch, one_batch_labels) = next(iter(train_data))
model.build(one_batch)
optimizer.build(model.trainable_variables)
# This is the loss function that will be differentiated.
# Keras provides a pure functional forward pass: model.stateless_call
def compute_loss(trainable_variables, non_trainable_variables, x, y):
y_pred, updated_non_trainable_variables = model.stateless_call(
trainable_variables, non_trainable_variables, x
)
loss_value = loss(y, y_pred)
return loss_value, updated_non_trainable_variables
# Function to compute gradients
compute_gradients = jax.value_and_grad(compute_loss, has_aux=True)
# Training step, Keras provides a pure functional optimizer.stateless_apply
@jax.jit
def train_step(train_state, x, y):
trainable_variables, non_trainable_variables, optimizer_variables = train_state
(loss_value, non_trainable_variables), grads = compute_gradients(
trainable_variables, non_trainable_variables, x, y
)
trainable_variables, optimizer_variables = optimizer.stateless_apply(
optimizer_variables, grads, trainable_variables
)
return loss_value, (
trainable_variables,
non_trainable_variables,
optimizer_variables,
)
# Replicate the model and optimizer variable on all devices
def get_replicated_train_state(devices):
# All variables will be replicated on all devices
var_mesh = Mesh(devices, axis_names=("_"))
# In NamedSharding, axes not mentioned are replicated (all axes here)
var_replication = NamedSharding(var_mesh, P())
# Apply the distribution settings to the model variables
trainable_variables = jax.device_put(model.trainable_variables, var_replication)
non_trainable_variables = jax.device_put(
model.non_trainable_variables, var_replication
)
optimizer_variables = jax.device_put(optimizer.variables, var_replication)
# Combine all state in a tuple
return (trainable_variables, non_trainable_variables, optimizer_variables)
num_devices = len(jax.local_devices())
print(f"Running on {num_devices} devices: {jax.local_devices()}")
devices = mesh_utils.create_device_mesh((num_devices,))
# Data will be split along the batch axis
data_mesh = Mesh(devices, axis_names=("batch",)) # naming axes of the mesh
data_sharding = NamedSharding(
data_mesh,
P(
"batch",
),
) # naming axes of the sharded partition
# Display data sharding
x, y = next(iter(train_data))
sharded_x = jax.device_put(x.numpy(), data_sharding)
print("Data sharding")
jax.debug.visualize_array_sharding(jax.numpy.reshape(sharded_x, [-1, 28 * 28]))
train_state = get_replicated_train_state(devices)
# Custom training loop
for epoch in range(num_epochs):
data_iter = iter(train_data)
for data in data_iter:
x, y = data
sharded_x = jax.device_put(x.numpy(), data_sharding)
loss_value, train_state = train_step(train_state, sharded_x, y.numpy())
print("Epoch", epoch, "loss:", loss_value)
# Post-processing model state update to write them back into the model
trainable_variables, non_trainable_variables, optimizer_variables = train_state
for variable, value in zip(model.trainable_variables, trainable_variables):
variable.assign(value)
for variable, value in zip(model.non_trainable_variables, non_trainable_variables):
variable.assign(value)<jupyter_output><empty_output> | keras-io/guides/ipynb/keras_core/distributed_training_with_jax.ipynb/0 | {
"file_path": "keras-io/guides/ipynb/keras_core/distributed_training_with_jax.ipynb",
"repo_id": "keras-io",
"token_count": 3224
} | 129 |
<jupyter_start><jupyter_text>Using KerasCV COCO Metrics**Author:** [lukewood](https://twitter.com/luke_wood_ml)**Date created:** 2022/04/13**Last modified:** 2022/04/13**Description:** Use KerasCV COCO metrics to evaluate object detection models. OverviewWith KerasCV's COCO metrics implementation, you can easily evaluate your objectdetection model's performance all from within the TensorFlow graph. This guideshows you how to use KerasCV's COCO metrics and integrate it into your own modelevaluation pipeline. Historically, users have evaluated COCO metrics as a post trainingstep. KerasCV offers an in graph implementation of COCO metrics, enabling users toevaluate COCO metrics *during* training!Let's get started using KerasCV's COCO metrics. Input formatAll KerasCV components that process bounding boxes, including COCO metrics, require a`bounding_box_format` parameter. This parameter is used to tell the components whatformat your bounding boxes are in. While this guide uses the `xyxy` format, a fulllist of supported formats is available in[the bounding_box API documentation](https://keras.io/api/keras_cv/bounding_box/formats/).The metrics expect `y_true` to be a `float` Tensor with the shape `[batch,num_images, num_boxes, 5]`, with the ordering of the last set of axes determined by theprovided format. The same is true of `y_pred`, except that an additional `confidence`axis must be provided.Due to the fact that each image may have a different number of bounding boxes,the `num_boxes` dimension may actually have a mismatching shape between images.KerasCV works around this by allowing you to either pass a `RaggedTensor` as aninput to the KerasCV COCO metrics, or padding unused bounding boxes with `-1`.Utility functions to manipulate bounding boxes, transform between formats, andpad bounding box Tensors with `-1s` are available from the[`keras_cv.bounding_box`](https://github.com/keras-team/keras-cv/blob/master/keras_cv/bounding_box)package. Independent metric useThe first usage pattern for KerasCV COCO metrics is to manually call`update_state()` and `result()` methods. This pattern is recommended for userswho want finer grained control of their metric evaluation, or want to use adifferent format for `y_pred` in their model.Let's run through a quick code example. 1.) First, we must construct our metric:<jupyter_code>import keras_cv
# import all modules we will need in this example
import tensorflow as tf
from tensorflow import keras
# only consider boxes with areas less than a 32x32 square.
metric = keras_cv.metrics.COCORecall(
bounding_box_format="xyxy", class_ids=[1, 2, 3], area_range=(0, 32**2)
)<jupyter_output><empty_output><jupyter_text>2.) Create Some Bounding Boxes:<jupyter_code>y_true = tf.ragged.stack(
[
# image 1
tf.constant([[0, 0, 10, 10, 1], [11, 12, 30, 30, 2]], tf.float32),
# image 2
tf.constant([[0, 0, 10, 10, 1]], tf.float32),
]
)
y_pred = tf.ragged.stack(
[
# predictions for image 1
tf.constant([[5, 5, 10, 10, 1, 0.9]], tf.float32),
# predictions for image 2
tf.constant([[0, 0, 10, 10, 1, 1.0], [5, 5, 10, 10, 1, 0.9]], tf.float32),
]
)<jupyter_output><empty_output><jupyter_text>3.) Update metric state:<jupyter_code>metric.update_state(y_true, y_pred)<jupyter_output><empty_output><jupyter_text>4.) Evaluate the result:<jupyter_code>metric.result()<jupyter_output><empty_output><jupyter_text>Evaluating COCORecall for your object detection model is as simple as that! Metric use in a modelYou can also leverage COCORecall in your model's training loop. Let's walk through thisprocess.1.) Construct the metric and a dummy model<jupyter_code>i = keras.layers.Input((None, 6))
model = keras.Model(i, i)<jupyter_output><empty_output><jupyter_text>2.) Create some fake bounding boxes:<jupyter_code>y_true = tf.constant([[[0, 0, 10, 10, 1], [5, 5, 10, 10, 1]]], tf.float32)
y_pred = tf.constant([[[0, 0, 10, 10, 1, 1.0], [5, 5, 10, 10, 1, 0.9]]], tf.float32)<jupyter_output><empty_output><jupyter_text>3.) Create the metric and compile the model<jupyter_code>recall = keras_cv.metrics.COCORecall(
bounding_box_format="xyxy",
max_detections=100,
class_ids=[1],
area_range=(0, 64**2),
name="coco_recall",
)
model.compile(metrics=[recall])<jupyter_output><empty_output><jupyter_text>4.) Use `model.evaluate()` to evaluate the metric<jupyter_code>model.evaluate(y_pred, y_true, return_dict=True)<jupyter_output><empty_output> | keras-io/guides/ipynb/keras_cv/coco_metrics.ipynb/0 | {
"file_path": "keras-io/guides/ipynb/keras_cv/coco_metrics.ipynb",
"repo_id": "keras-io",
"token_count": 1530
} | 130 |
<jupyter_start><jupyter_text>Making new layers and models via subclassing**Author:** [fchollet](https://twitter.com/fchollet)**Date created:** 2019/03/01**Last modified:** 2023/06/25**Description:** Complete guide to writing `Layer` and `Model` objects from scratch. IntroductionThis guide will cover everything you need to know to build your ownsubclassed layers and models. In particular, you'll learn about the following features:- The `Layer` class- The `add_weight()` method- Trainable and non-trainable weights- The `build()` method- Making sure your layers can be used with any backend- The `add_loss()` method- The `training` argument in `call()`- The `mask` argument in `call()`- Making sure your layers can be serializedLet's dive in. Setup<jupyter_code>import numpy as np
import keras
from keras import ops
from keras import layers<jupyter_output><empty_output><jupyter_text>The `Layer` class: the combination of state (weights) and some computationOne of the central abstractions in Keras is the `Layer` class. A layerencapsulates both a state (the layer's "weights") and a transformation frominputs to outputs (a "call", the layer's forward pass).Here's a densely-connected layer. It has two state variables:the variables `w` and `b`.<jupyter_code>class Linear(keras.layers.Layer):
def __init__(self, units=32, input_dim=32):
super().__init__()
self.w = self.add_weight(
shape=(input_dim, units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(shape=(units,), initializer="zeros", trainable=True)
def call(self, inputs):
return ops.matmul(inputs, self.w) + self.b<jupyter_output><empty_output><jupyter_text>You would use a layer by calling it on some tensor input(s), much like a Pythonfunction.<jupyter_code>x = ops.ones((2, 2))
linear_layer = Linear(4, 2)
y = linear_layer(x)
print(y)<jupyter_output><empty_output><jupyter_text>Note that the weights `w` and `b` are automatically tracked by the layer uponbeing set as layer attributes:<jupyter_code>assert linear_layer.weights == [linear_layer.w, linear_layer.b]<jupyter_output><empty_output><jupyter_text>Layers can have non-trainable weightsBesides trainable weights, you can add non-trainable weights to a layer aswell. Such weights are meant not to be taken into account duringbackpropagation, when you are training the layer.Here's how to add and use a non-trainable weight:<jupyter_code>class ComputeSum(keras.layers.Layer):
def __init__(self, input_dim):
super().__init__()
self.total = self.add_weight(
initializer="zeros", shape=(input_dim,), trainable=False
)
def call(self, inputs):
self.total.assign_add(ops.sum(inputs, axis=0))
return self.total
x = ops.ones((2, 2))
my_sum = ComputeSum(2)
y = my_sum(x)
print(y.numpy())
y = my_sum(x)
print(y.numpy())<jupyter_output><empty_output><jupyter_text>It's part of `layer.weights`, but it gets categorized as a non-trainable weight:<jupyter_code>print("weights:", len(my_sum.weights))
print("non-trainable weights:", len(my_sum.non_trainable_weights))
# It's not included in the trainable weights:
print("trainable_weights:", my_sum.trainable_weights)<jupyter_output><empty_output><jupyter_text>Best practice: deferring weight creation until the shape of the inputs is knownOur `Linear` layer above took an `input_dim` argument that was used to computethe shape of the weights `w` and `b` in `__init__()`:<jupyter_code>class Linear(keras.layers.Layer):
def __init__(self, units=32, input_dim=32):
super().__init__()
self.w = self.add_weight(
shape=(input_dim, units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(shape=(units,), initializer="zeros", trainable=True)
def call(self, inputs):
return ops.matmul(inputs, self.w) + self.b<jupyter_output><empty_output><jupyter_text>In many cases, you may not know in advance the size of your inputs, and youwould like to lazily create weights when that value becomes known, some timeafter instantiating the layer.In the Keras API, we recommend creating layer weights in the`build(self, inputs_shape)` method of your layer. Like this:<jupyter_code>class Linear(keras.layers.Layer):
def __init__(self, units=32):
super().__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return ops.matmul(inputs, self.w) + self.b<jupyter_output><empty_output><jupyter_text>The `__call__()` method of your layer will automatically run build the first timeit is called. You now have a layer that's lazy and thus easier to use:<jupyter_code># At instantiation, we don't know on what inputs this is going to get called
linear_layer = Linear(32)
# The layer's weights are created dynamically the first time the layer is called
y = linear_layer(x)<jupyter_output><empty_output><jupyter_text>Implementing `build()` separately as shown above nicely separates creating weightsonly once from using weights in every call. Layers are recursively composableIf you assign a Layer instance as an attribute of another Layer, the outer layerwill start tracking the weights created by the inner layer.We recommend creating such sublayers in the `__init__()` method and leave it tothe first `__call__()` to trigger building their weights.<jupyter_code>class MLPBlock(keras.layers.Layer):
def __init__(self):
super().__init__()
self.linear_1 = Linear(32)
self.linear_2 = Linear(32)
self.linear_3 = Linear(1)
def call(self, inputs):
x = self.linear_1(inputs)
x = keras.activations.relu(x)
x = self.linear_2(x)
x = keras.activations.relu(x)
return self.linear_3(x)
mlp = MLPBlock()
y = mlp(ops.ones(shape=(3, 64))) # The first call to the `mlp` will create the weights
print("weights:", len(mlp.weights))
print("trainable weights:", len(mlp.trainable_weights))<jupyter_output><empty_output><jupyter_text>Backend-agnostic layers and backend-specific layersAs long as a layer only uses APIs from the `keras.ops` namespace(or other Keras namespaces such as `keras.activations`, `keras.random`, or `keras.layers`),then it can be used with any backend -- TensorFlow, JAX, or PyTorch.All layers you've seen so far in this guide work with all Keras backends.The `keras.ops` namespace gives you access to:- The NumPy API, e.g. `ops.matmul`, `ops.sum`, `ops.reshape`, `ops.stack`, etc.- Neural networks-specific APIs such as `ops.softmax`, `ops`.conv`, `ops.binary_crossentropy`, `ops.relu`, etc.You can also use backend-native APIs in your layers (such as `tf.nn` functions),but if you do this, then your layer will only be usable with the backend in question.For instance, you could write the following JAX-specific layer using `jax.numpy`:```pythonimport jaxclass Linear(keras.layers.Layer): ... def call(self, inputs): return jax.numpy.matmul(inputs, self.w) + self.b```This would be the equivalent TensorFlow-specific layer:```pythonimport tensorflow as tfclass Linear(keras.layers.Layer): ... def call(self, inputs): return tf.matmul(inputs, self.w) + self.b```And this would be the equivalent PyTorch-specific layer:```pythonimport torchclass Linear(keras.layers.Layer): ... def call(self, inputs): return torch.matmul(inputs, self.w) + self.b```Because cross-backend compatibility is a tremendously useful property, we stronglyrecommend that you seek to always make your layers backend-agnostic by leveragingonly Keras APIs. The `add_loss()` methodWhen writing the `call()` method of a layer, you can create loss tensors thatyou will want to use later, when writing your training loop. This is doable bycalling `self.add_loss(value)`:<jupyter_code># A layer that creates an activity regularization loss
class ActivityRegularizationLayer(keras.layers.Layer):
def __init__(self, rate=1e-2):
super().__init__()
self.rate = rate
def call(self, inputs):
self.add_loss(self.rate * ops.mean(inputs))
return inputs<jupyter_output><empty_output><jupyter_text>These losses (including those created by any inner layer) can be retrieved via`layer.losses`. This property is reset at the start of every `__call__()` tothe top-level layer, so that `layer.losses` always contains the loss valuescreated during the last forward pass.<jupyter_code>class OuterLayer(keras.layers.Layer):
def __init__(self):
super().__init__()
self.activity_reg = ActivityRegularizationLayer(1e-2)
def call(self, inputs):
return self.activity_reg(inputs)
layer = OuterLayer()
assert len(layer.losses) == 0 # No losses yet since the layer has never been called
_ = layer(ops.zeros((1, 1)))
assert len(layer.losses) == 1 # We created one loss value
# `layer.losses` gets reset at the start of each __call__
_ = layer(ops.zeros((1, 1)))
assert len(layer.losses) == 1 # This is the loss created during the call above<jupyter_output><empty_output><jupyter_text>In addition, the `losses` property also contains regularization losses createdfor the weights of any inner layer:<jupyter_code>class OuterLayerWithKernelRegularizer(keras.layers.Layer):
def __init__(self):
super().__init__()
self.dense = keras.layers.Dense(
32, kernel_regularizer=keras.regularizers.l2(1e-3)
)
def call(self, inputs):
return self.dense(inputs)
layer = OuterLayerWithKernelRegularizer()
_ = layer(ops.zeros((1, 1)))
# This is `1e-3 * sum(layer.dense.kernel ** 2)`,
# created by the `kernel_regularizer` above.
print(layer.losses)<jupyter_output><empty_output><jupyter_text>These losses are meant to be taken into account when writing custom training loops.They also work seamlessly with `fit()` (they get automatically summed and added to the main loss, if any):<jupyter_code>inputs = keras.Input(shape=(3,))
outputs = ActivityRegularizationLayer()(inputs)
model = keras.Model(inputs, outputs)
# If there is a loss passed in `compile`, the regularization
# losses get added to it
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.random((2, 3)), np.random.random((2, 3)))
# It's also possible not to pass any loss in `compile`,
# since the model already has a loss to minimize, via the `add_loss`
# call during the forward pass!
model.compile(optimizer="adam")
model.fit(np.random.random((2, 3)), np.random.random((2, 3)))<jupyter_output><empty_output><jupyter_text>You can optionally enable serialization on your layersIf you need your custom layers to be serializable as part of a[Functional model](/guides/functional_api/),you can optionally implement a `get_config()` method:<jupyter_code>class Linear(keras.layers.Layer):
def __init__(self, units=32):
super().__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return ops.matmul(inputs, self.w) + self.b
def get_config(self):
return {"units": self.units}
# Now you can recreate the layer from its config:
layer = Linear(64)
config = layer.get_config()
print(config)
new_layer = Linear.from_config(config)<jupyter_output><empty_output><jupyter_text>Note that the `__init__()` method of the base `Layer` class takes some keywordarguments, in particular a `name` and a `dtype`. It's good practice to passthese arguments to the parent class in `__init__()` and to include them in thelayer config:<jupyter_code>class Linear(keras.layers.Layer):
def __init__(self, units=32, **kwargs):
super().__init__(**kwargs)
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return ops.matmul(inputs, self.w) + self.b
def get_config(self):
config = super().get_config()
config.update({"units": self.units})
return config
layer = Linear(64)
config = layer.get_config()
print(config)
new_layer = Linear.from_config(config)<jupyter_output><empty_output><jupyter_text>If you need more flexibility when deserializing the layer from its config, youcan also override the `from_config()` class method. This is the baseimplementation of `from_config()`:```pythondef from_config(cls, config): return cls(**config)```To learn more about serialization and saving, see the complete[guide to saving and serializing models](/guides/serialization_and_saving/). Privileged `training` argument in the `call()` methodSome layers, in particular the `BatchNormalization` layer and the `Dropout`layer, have different behaviors during training and inference. For suchlayers, it is standard practice to expose a `training` (boolean) argument inthe `call()` method.By exposing this argument in `call()`, you enable the built-in training andevaluation loops (e.g. `fit()`) to correctly use the layer in training andinference.<jupyter_code>class CustomDropout(keras.layers.Layer):
def __init__(self, rate, **kwargs):
super().__init__(**kwargs)
self.rate = rate
self.seed_generator = keras.random.SeedGenerator(1337)
def call(self, inputs, training=None):
if training:
return keras.random.dropout(
inputs, rate=self.rate, seed=self.seed_generator
)
return inputs<jupyter_output><empty_output><jupyter_text>Privileged `mask` argument in the `call()` methodThe other privileged argument supported by `call()` is the `mask` argument.You will find it in all Keras RNN layers. A mask is a boolean tensor (oneboolean value per timestep in the input) used to skip certain input timestepswhen processing timeseries data.Keras will automatically pass the correct `mask` argument to `__call__()` forlayers that support it, when a mask is generated by a prior layer.Mask-generating layers are the `Embedding`layer configured with `mask_zero=True`, and the `Masking` layer. The `Model` classIn general, you will use the `Layer` class to define inner computation blocks,and will use the `Model` class to define the outer model -- the object youwill train.For instance, in a ResNet50 model, you would have several ResNet blockssubclassing `Layer`, and a single `Model` encompassing the entire ResNet50network.The `Model` class has the same API as `Layer`, with the following differences:- It exposes built-in training, evaluation, and prediction loops(`model.fit()`, `model.evaluate()`, `model.predict()`).- It exposes the list of its inner layers, via the `model.layers` property.- It exposes saving and serialization APIs (`save()`, `save_weights()`...)Effectively, the `Layer` class corresponds to what we refer to in theliterature as a "layer" (as in "convolution layer" or "recurrent layer") or asa "block" (as in "ResNet block" or "Inception block").Meanwhile, the `Model` class corresponds to what is referred to in theliterature as a "model" (as in "deep learning model") or as a "network" (as in"deep neural network").So if you're wondering, "should I use the `Layer` class or the `Model` class?",ask yourself: will I need to call `fit()` on it? Will I need to call `save()`on it? If so, go with `Model`. If not (either because your class is just a blockin a bigger system, or because you are writing training & saving code yourself),use `Layer`.For instance, we could take our mini-resnet example above, and use it to builda `Model` that we could train with `fit()`, and that we could save with`save_weights()`: ```pythonclass ResNet(keras.Model): def __init__(self, num_classes=1000): super().__init__() self.block_1 = ResNetBlock() self.block_2 = ResNetBlock() self.global_pool = layers.GlobalAveragePooling2D() self.classifier = Dense(num_classes) def call(self, inputs): x = self.block_1(inputs) x = self.block_2(x) x = self.global_pool(x) return self.classifier(x)resnet = ResNet()dataset = ...resnet.fit(dataset, epochs=10)resnet.save(filepath.keras)``` Putting it all together: an end-to-end exampleHere's what you've learned so far:- A `Layer` encapsulate a state (created in `__init__()` or `build()`) and somecomputation (defined in `call()`).- Layers can be recursively nested to create new, bigger computation blocks.- Layers are backend-agnostic as long as they only use Keras APIs. You can usebackend-native APIs (such as `jax.numpy`, `torch.nn` or `tf.nn`), but thenyour layer will only be usable with that specific backend.- Layers can create and track losses (typically regularization losses)via `add_loss()`.- The outer container, the thing you want to train, is a `Model`. 
A `Model` isjust like a `Layer`, but with added training and serialization utilities.Let's put all of these things together into an end-to-end example: we're goingto implement a Variational AutoEncoder (VAE) in a backend-agnostic fashion-- so that it runs the same with TensorFlow, JAX, and PyTorch.We'll train it on MNIST digits.Our VAE will be a subclass of `Model`, built as a nested composition of layersthat subclass `Layer`. It will feature a regularization loss (KL divergence).<jupyter_code>class Sampling(layers.Layer):
"""Uses (z_mean, z_log_var) to sample z, the vector encoding a digit."""
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.seed_generator = keras.random.SeedGenerator(1337)
def call(self, inputs):
z_mean, z_log_var = inputs
batch = ops.shape(z_mean)[0]
dim = ops.shape(z_mean)[1]
epsilon = keras.random.normal(shape=(batch, dim), seed=self.seed_generator)
return z_mean + ops.exp(0.5 * z_log_var) * epsilon
class Encoder(layers.Layer):
"""Maps MNIST digits to a triplet (z_mean, z_log_var, z)."""
def __init__(self, latent_dim=32, intermediate_dim=64, name="encoder", **kwargs):
super().__init__(name=name, **kwargs)
self.dense_proj = layers.Dense(intermediate_dim, activation="relu")
self.dense_mean = layers.Dense(latent_dim)
self.dense_log_var = layers.Dense(latent_dim)
self.sampling = Sampling()
def call(self, inputs):
x = self.dense_proj(inputs)
z_mean = self.dense_mean(x)
z_log_var = self.dense_log_var(x)
z = self.sampling((z_mean, z_log_var))
return z_mean, z_log_var, z
class Decoder(layers.Layer):
"""Converts z, the encoded digit vector, back into a readable digit."""
def __init__(self, original_dim, intermediate_dim=64, name="decoder", **kwargs):
super().__init__(name=name, **kwargs)
self.dense_proj = layers.Dense(intermediate_dim, activation="relu")
self.dense_output = layers.Dense(original_dim, activation="sigmoid")
def call(self, inputs):
x = self.dense_proj(inputs)
return self.dense_output(x)
class VariationalAutoEncoder(keras.Model):
"""Combines the encoder and decoder into an end-to-end model for training."""
def __init__(
self,
original_dim,
intermediate_dim=64,
latent_dim=32,
name="autoencoder",
**kwargs
):
super().__init__(name=name, **kwargs)
self.original_dim = original_dim
self.encoder = Encoder(latent_dim=latent_dim, intermediate_dim=intermediate_dim)
self.decoder = Decoder(original_dim, intermediate_dim=intermediate_dim)
def call(self, inputs):
z_mean, z_log_var, z = self.encoder(inputs)
reconstructed = self.decoder(z)
# Add KL divergence regularization loss.
kl_loss = -0.5 * ops.mean(
z_log_var - ops.square(z_mean) - ops.exp(z_log_var) + 1
)
self.add_loss(kl_loss)
return reconstructed<jupyter_output><empty_output><jupyter_text>Let's train it on MNIST using the `fit()` API:<jupyter_code>(x_train, _), _ = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype("float32") / 255
original_dim = 784
vae = VariationalAutoEncoder(784, 64, 32)
optimizer = keras.optimizers.Adam(learning_rate=1e-3)
vae.compile(optimizer, loss=keras.losses.MeanSquaredError())
vae.fit(x_train, x_train, epochs=2, batch_size=64)<jupyter_output><empty_output> | keras-io/guides/ipynb/making_new_layers_and_models_via_subclassing.ipynb/0 | {
"file_path": "keras-io/guides/ipynb/making_new_layers_and_models_via_subclassing.ipynb",
"repo_id": "keras-io",
"token_count": 7448
} | 131 |
# Custom Image Augmentations with BaseImageAugmentationLayer
**Author:** [lukewood](https://twitter.com/luke_wood_ml)<br>
**Date created:** 2022/04/26<br>
**Last modified:** 2023/11/29<br>
**Description:** Use BaseImageAugmentationLayer to implement custom data augmentations.
<img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/guides/ipynb/keras_cv/custom_image_augmentations.ipynb) <span class="k-dot">β’</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/guides/keras_cv/custom_image_augmentations.py)
---
## Overview
Data augmentation is an integral part of training any robust computer vision model.
While KerasCV offers a plethora of prebuilt, high-quality data augmentation techniques,
you may still want to implement your own custom technique.
KerasCV offers a helpful base class for writing data augmentation layers:
`BaseImageAugmentationLayer`.
Any augmentation layer built with `BaseImageAugmentationLayer` will automatically be
compatible with the KerasCV `RandomAugmentationPipeline` class.
This guide will show you how to implement your own custom augmentation layers using
`BaseImageAugmentationLayer`. As an example, we will implement a layer that tints all
images blue.
Currently, KerasCV's preprocessing layers only support the TensorFlow backend with Keras 3.
```python
!pip install -q --upgrade keras-cv
!pip install -q --upgrade keras # Upgrade to Keras 3
```
```python
import os
os.environ["KERAS_BACKEND"] = "tensorflow"
import keras
from keras import ops
from keras import layers
import keras_cv
import matplotlib.pyplot as plt
```
First, let's implement some helper functions for visualization and some transformations.
```python
def imshow(img):
img = img.astype(int)
plt.axis("off")
plt.imshow(img)
plt.show()
def gallery_show(images):
images = images.astype(int)
for i in range(9):
image = images[i]
plt.subplot(3, 3, i + 1)
plt.imshow(image.astype("uint8"))
plt.axis("off")
plt.show()
def transform_value_range(images, original_range, target_range):
images = (images - original_range[0]) / (original_range[1] - original_range[0])
scale_factor = target_range[1] - target_range[0]
return (images * scale_factor) + target_range[0]
def parse_factor(param, min_value=0.0, max_value=1.0, seed=None):
if isinstance(param, keras_cv.core.FactorSampler):
return param
if isinstance(param, float) or isinstance(param, int):
param = (min_value, param)
if param[0] == param[1]:
return keras_cv.core.ConstantFactorSampler(param[0])
return keras_cv.core.UniformFactorSampler(param[0], param[1], seed=seed)
```
---
## BaseImageAugmentationLayer Introduction
Image augmentation should operate on a sample-wise basis; not batch-wise.
This is a common mistake many machine learning practitioners make when implementing
custom techniques.
`BaseImageAugmentationLayer` offers a set of clean abstractions to make implementing image
augmentation techniques on a sample-wise basis much easier.
This is done by allowing the end user to override an `augment_image()` method and then
performing automatic vectorization under the hood.
Most augmentation techniques also must sample from one or more random distributions.
KerasCV offers an abstraction to make random sampling end user configurable: the
`FactorSampler` API.
Finally, many augmentation techniques require some information about the pixel values
present in the input images. KerasCV offers the `value_range` API to simplify the handling of this.
In our example, we will use the `FactorSampler` API, the `value_range` API, and
`BaseImageAugmentationLayer` to implement a robust, configurable, and correct `RandomBlueTint` layer.
---
## Overriding `augment_image()`
Let's start off with the minimum:
```python
class RandomBlueTint(keras_cv.layers.BaseImageAugmentationLayer):
def augment_image(self, image, *args, transformation=None, **kwargs):
# image is of shape (height, width, channels)
[*others, blue] = ops.unstack(image, axis=-1)
blue = ops.clip(blue + 100, 0.0, 255.0)
return ops.stack([*others, blue], axis=-1)
```
Our layer overrides `BaseImageAugmentationLayer.augment_image()`. This method is
used to augment images given to the layer. By default, using
`BaseImageAugmentationLayer` gives you a few nice features for free:
- support for unbatched inputs (HWC Tensor)
- support for batched inputs (BHWC Tensor)
- automatic vectorization on batched inputs (more information on this in automatic
vectorization performance)
Let's check out the result. First, let's download a sample image:
```python
SIZE = (300, 300)
elephants = keras.utils.get_file(
"african_elephant.jpg", "https://i.imgur.com/Bvro0YD.png"
)
elephants = keras.utils.load_img(elephants, target_size=SIZE)
elephants = keras.utils.img_to_array(elephants)
imshow(elephants)
```
<div class="k-default-codeblock">
```
Downloading data from https://i.imgur.com/Bvro0YD.png
4217496/4217496 ━━━━━━━━━━━━━━━━━━━━ 0s 0us/step
```
</div>
![png](/img/guides/custom_image_augmentations/custom_image_augmentations_9_1.png)
Next, let's augment it and visualize the result:
```python
layer = RandomBlueTint()
augmented = layer(elephants)
imshow(ops.convert_to_numpy(augmented))
```
![png](/img/guides/custom_image_augmentations/custom_image_augmentations_11_0.png)
Looks great! We can also call our layer on batched inputs:
```python
layer = RandomBlueTint()
augmented = layer(ops.expand_dims(elephants, axis=0))
imshow(ops.convert_to_numpy(augmented)[0])
```
![png](/img/guides/custom_image_augmentations/custom_image_augmentations_13_0.png)
---
## Adding Random Behavior with the `FactorSampler` API.
Usually an image augmentation technique should not do the same thing on every
invocation of the layer's `__call__` method.
KerasCV offers the `FactorSampler` API to allow users to provide configurable random
distributions.
```python
class RandomBlueTint(keras_cv.layers.BaseImageAugmentationLayer):
"""RandomBlueTint randomly applies a blue tint to images.
Args:
factor: A tuple of two floats, a single float or a
`keras_cv.FactorSampler`. `factor` controls the extent to which the
image is blue shifted. `factor=0.0` makes this layer perform a no-op
operation, while a value of 1.0 uses the degenerated result entirely.
Values between 0 and 1 result in linear interpolation between the original
image and a fully blue image.
Values should be between `0.0` and `1.0`. If a tuple is used, a `factor` is
sampled between the two values for every image augmented. If a single float
is used, a value between `0.0` and the passed float is sampled. In order to
ensure the value is always the same, please pass a tuple with two identical
floats: `(0.5, 0.5)`.
"""
def __init__(self, factor, **kwargs):
super().__init__(**kwargs)
self.factor = parse_factor(factor)
def augment_image(self, image, *args, transformation=None, **kwargs):
[*others, blue] = ops.unstack(image, axis=-1)
blue_shift = self.factor() * 255
blue = ops.clip(blue + blue_shift, 0.0, 255.0)
return ops.stack([*others, blue], axis=-1)
```
Now, we can configure the random behavior of our `RandomBlueTint` layer.
We can give it a range of values to sample from:
```python
many_elephants = ops.repeat(ops.expand_dims(elephants, axis=0), 9, axis=0)
layer = RandomBlueTint(factor=0.5)
augmented = layer(many_elephants)
gallery_show(ops.convert_to_numpy(augmented))
```
![png](/img/guides/custom_image_augmentations/custom_image_augmentations_17_0.png)
Each image is augmented differently with a random factor sampled from the range
`(0, 0.5)`.
We can also configure the layer to draw from a normal distribution:
```python
many_elephants = ops.repeat(ops.expand_dims(elephants, axis=0), 9, axis=0)
factor = keras_cv.core.NormalFactorSampler(
mean=0.3, stddev=0.1, min_value=0.0, max_value=1.0
)
layer = RandomBlueTint(factor=factor)
augmented = layer(many_elephants)
gallery_show(ops.convert_to_numpy(augmented))
```
![png](/img/guides/custom_image_augmentations/custom_image_augmentations_19_0.png)
As you can see, the augmentations are now drawn from a normal distribution.
There are various types of `FactorSamplers` including `UniformFactorSampler`,
`NormalFactorSampler`, and `ConstantFactorSampler`. You can also implement your own.
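If none of the built-in samplers fit your needs, you can subclass `keras_cv.core.FactorSampler` directly. The snippet below is a minimal sketch: the class name is made up for illustration, and the `__call__()` signature mirrors the built-in samplers, so double-check it against your installed KerasCV version.
```python
import tensorflow as tf
class SquaredUniformFactorSampler(keras_cv.core.FactorSampler):
    """Samples uniformly, then squares the sample to bias towards small factors."""
    def __init__(self, lower=0.0, upper=1.0, seed=None):
        self.lower = lower
        self.upper = upper
        self.seed = seed
    def __call__(self, shape=(), dtype="float32"):
        sample = tf.random.uniform(
            shape, minval=self.lower, maxval=self.upper, seed=self.seed, dtype=dtype
        )
        return sample**2
# Because this is a `FactorSampler`, it can be passed directly as `factor`.
layer = RandomBlueTint(factor=SquaredUniformFactorSampler(0.0, 1.0))
augmented = layer(many_elephants)
gallery_show(ops.convert_to_numpy(augmented))
```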
---
## Overriding `get_random_transformation()`
Now, suppose that your layer impacts the prediction targets: whether they are bounding
boxes, classification labels, or regression targets.
Your layer will need to know which augmentations were applied to the image
when augmenting the label.
Luckily, `BaseImageAugmentationLayer` was designed with this in mind.
To handle this issue, `BaseImageAugmentationLayer` has an overridable
`get_random_transformation()` method, alongside `augment_label()`,
`augment_target()` and `augment_bounding_boxes()`.
`augment_segmentation_map()` and others will be added in the future.
Let's add this to our layer.
```python
class RandomBlueTint(keras_cv.layers.BaseImageAugmentationLayer):
"""RandomBlueTint randomly applies a blue tint to images.
Args:
factor: A tuple of two floats, a single float or a
`keras_cv.FactorSampler`. `factor` controls the extent to which the
image is blue shifted. `factor=0.0` makes this layer perform a no-op
operation, while a value of 1.0 uses the degenerated result entirely.
Values between 0 and 1 result in linear interpolation between the original
image and a fully blue image.
Values should be between `0.0` and `1.0`. If a tuple is used, a `factor` is
sampled between the two values for every image augmented. If a single float
is used, a value between `0.0` and the passed float is sampled. In order to
ensure the value is always the same, please pass a tuple with two identical
floats: `(0.5, 0.5)`.
"""
def __init__(self, factor, **kwargs):
super().__init__(**kwargs)
self.factor = parse_factor(factor)
def get_random_transformation(self, **kwargs):
# kwargs holds {"images": image, "labels": label, etc...}
return self.factor() * 255
def augment_image(self, image, transformation=None, **kwargs):
[*others, blue] = ops.unstack(image, axis=-1)
blue = ops.clip(blue + transformation, 0.0, 255.0)
return ops.stack([*others, blue], axis=-1)
def augment_label(self, label, transformation=None, **kwargs):
# you can use transformation somehow if you want
if transformation > 100:
# i.e. maybe class 2 corresponds to blue images
return 2.0
return label
def augment_bounding_boxes(self, bounding_boxes, transformation=None, **kwargs):
# you can also perform no-op augmentations on label types to support them in
# your pipeline.
return bounding_boxes
```
To make use of these new methods, you will need to feed your inputs in with a
dictionary maintaining a mapping from images to targets.
As of now, KerasCV supports the following label types:
- labels via `augment_label()`.
- bounding_boxes via `augment_bounding_boxes()`.
In order to use augmentation layers alongside your prediction targets, you must package
your inputs as follows:
```python
labels = ops.array([[1, 0]])
inputs = {"images": ops.convert_to_tensor(elephants), "labels": labels}
```
Now if we call our layer on the inputs:
```python
layer = RandomBlueTint(factor=(0.6, 0.6))
augmented = layer(inputs)
print(augmented["labels"])
```
<div class="k-default-codeblock">
```
2.0
```
</div>
Both the inputs and labels are augmented.
Note how when `transformation` is > 100 the label is modified to contain 2.0 as
specified in the layer above.
---
## `value_range` support
Imagine you are using your new augmentation layer in many pipelines.
Some pipelines have values in the range `[0, 255]`, some pipelines have normalized their
images to the range `[-1, 1]`, and some use a value range of `[0, 1]`.
If a user calls your layer with an image in value range `[0, 1]`, the outputs will be
nonsense!
```python
layer = RandomBlueTint(factor=(0.1, 0.1))
elephants_0_1 = elephants / 255
print("min and max before augmentation:", elephants_0_1.min(), elephants_0_1.max())
augmented = layer(elephants_0_1)
print(
"min and max after augmentation:",
ops.convert_to_numpy(augmented).min(),
ops.convert_to_numpy(augmented).max(),
)
imshow(ops.convert_to_numpy(augmented * 255).astype(int))
```
<div class="k-default-codeblock">
```
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
min and max before augmentation: 0.0 1.0
min and max after augmentation: 0.0 26.488235
```
</div>
![png](/img/guides/custom_image_augmentations/custom_image_augmentations_27_2.png)
Note that this is an incredibly weak augmentation!
Factor is only set to 0.1.
Let's resolve this issue with KerasCV's `value_range` API.
```python
class RandomBlueTint(keras_cv.layers.BaseImageAugmentationLayer):
"""RandomBlueTint randomly applies a blue tint to images.
Args:
        value_range: a tuple or a list of two elements. The first value
represents the lower bound for values in passed images, the second represents
the upper bound. Images passed to the layer should have values within
`value_range`.
factor: A tuple of two floats, a single float or a
`keras_cv.FactorSampler`. `factor` controls the extent to which the
image is blue shifted. `factor=0.0` makes this layer perform a no-op
operation, while a value of 1.0 uses the degenerated result entirely.
Values between 0 and 1 result in linear interpolation between the original
image and a fully blue image.
Values should be between `0.0` and `1.0`. If a tuple is used, a `factor` is
sampled between the two values for every image augmented. If a single float
is used, a value between `0.0` and the passed float is sampled. In order to
ensure the value is always the same, please pass a tuple with two identical
floats: `(0.5, 0.5)`.
"""
def __init__(self, value_range, factor, **kwargs):
super().__init__(**kwargs)
self.value_range = value_range
self.factor = parse_factor(factor)
def get_random_transformation(self, **kwargs):
# kwargs holds {"images": image, "labels": label, etc...}
return self.factor() * 255
def augment_image(self, image, transformation=None, **kwargs):
image = transform_value_range(image, self.value_range, (0, 255))
[*others, blue] = ops.unstack(image, axis=-1)
blue = ops.clip(blue + transformation, 0.0, 255.0)
result = ops.stack([*others, blue], axis=-1)
result = transform_value_range(result, (0, 255), self.value_range)
return result
def augment_label(self, label, transformation=None, **kwargs):
# you can use transformation somehow if you want
if transformation > 100:
# i.e. maybe class 2 corresponds to blue images
return 2.0
return label
def augment_bounding_boxes(self, bounding_boxes, transformation=None, **kwargs):
# you can also perform no-op augmentations on label types to support them in
# your pipeline.
return bounding_boxes
layer = RandomBlueTint(value_range=(0, 1), factor=(0.1, 0.1))
elephants_0_1 = elephants / 255
print("min and max before augmentation:", elephants_0_1.min(), elephants_0_1.max())
augmented = layer(elephants_0_1)
print(
"min and max after augmentation:",
ops.convert_to_numpy(augmented).min(),
ops.convert_to_numpy(augmented).max(),
)
imshow(ops.convert_to_numpy(augmented * 255).astype(int))
```
<div class="k-default-codeblock">
```
min and max before augmentation: 0.0 1.0
min and max after augmentation: 0.0 1.0
```
</div>
![png](/img/guides/custom_image_augmentations/custom_image_augmentations_29_1.png)
Now our elephants are only slightly blue-tinted. This is the expected behavior when
using a factor of `0.1`. Great!
Now users can configure the layer to support any value range they may need. Note that
only layers that interact with color information should use the value range API.
Many augmentation techniques, such as `RandomRotation`, will not need this.
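As a quick sanity check, the same layer can be pointed at a pipeline that normalizes images to `[-1, 1]` simply by changing `value_range`. This is a small sketch reusing the `elephants` array loaded earlier:
```python
layer = RandomBlueTint(value_range=(-1, 1), factor=(0.1, 0.1))
elephants_minus1_1 = elephants / 127.5 - 1
augmented = layer(elephants_minus1_1)
print(
    "min and max after augmentation:",
    ops.convert_to_numpy(augmented).min(),
    ops.convert_to_numpy(augmented).max(),
)
```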
---
## Auto vectorization performance
If you are wondering:
> Does implementing my augmentations on a sample-wise basis carry performance
implications?
You are not alone!
Luckily, I have performed extensive analysis on the performance of automatic
vectorization, manual vectorization, and unvectorized implementations.
In this benchmark, I implemented a RandomCutout layer using auto vectorization, no auto
vectorization and manual vectorization.
All of these were benchmarked inside of an `@tf.function` annotation.
They were also each benchmarked with the `jit_compile` argument.
The following chart shows the results of this benchmark:
![Auto Vectorization Performance Chart](https://i.imgur.com/NeNhDoi.png)
_The primary takeaway should be that the difference between manual vectorization and
automatic vectorization is marginal!_
Please note that Eager mode performance will be drastically different.
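If you want a rough sense of the cost on your own hardware, a simple timing sketch like the one below can help. This is not the benchmark used for the chart above, and the absolute numbers will vary with your device:
```python
import time
import tensorflow as tf
layer = RandomBlueTint(value_range=(0, 255), factor=0.5)
images = ops.repeat(ops.expand_dims(elephants, axis=0), 64, axis=0)
@tf.function
def augment(images):
    return layer(images)
augment(images)  # Trace once so compilation time is excluded from the timing.
start = time.time()
for _ in range(10):
    augment(images)
print("10 batched calls took", time.time() - start, "seconds")
```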
---
## Common gotchas
Some layers are not able to be automatically vectorized.
An example of this is [GridMask](https://tinyurl.com/ffb5zzf7).
If you receive an error when invoking your layer, try adding the following to your
constructor:
```python
class UnVectorizable(keras_cv.layers.BaseImageAugmentationLayer):
def __init__(self, **kwargs):
super().__init__(**kwargs)
# this disables BaseImageAugmentationLayer's Auto Vectorization
self.auto_vectorize = False
```
Additionally, be sure to accept `**kwargs` in your `augment_*` methods to ensure
forward compatibility. KerasCV will add additional label types in the future, and
if you do not include a `**kwargs` argument your augmentation layers will not be
forward compatible.
---
## Conclusion and next steps
KerasCV offers a standard set of APIs to streamline the process of implementing your
own data augmentation techniques.
These include `BaseImageAugmentationLayer`, the `FactorSampler` API and the
`value_range` API.
We used these APIs to implement a highly configurable `RandomBlueTint` layer.
This layer can take inputs as standalone images, a dictionary with keys of `"images"`
and labels, inputs that are unbatched, or inputs that are batched. Inputs may be in any
value range, and the random distribution used to sample the tint values is end user
configurable.
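Because `RandomBlueTint` subclasses `BaseImageAugmentationLayer`, it can also be combined with built-in layers in a `keras_cv.layers.RandomAugmentationPipeline`. The sketch below assumes the current argument names (`layers`, `augmentations_per_image`), so check them against your installed KerasCV version:
```python
pipeline = keras_cv.layers.RandomAugmentationPipeline(
    layers=[
        RandomBlueTint(value_range=(0, 255), factor=0.5),
        keras_cv.layers.Grayscale(output_channels=3),
    ],
    augmentations_per_image=1,
)
augmented = pipeline(ops.expand_dims(elephants, axis=0))
imshow(ops.convert_to_numpy(augmented)[0])
```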
As follow-up exercises, you can:
- implement your own data augmentation technique using `BaseImageAugmentationLayer`
- [contribute an augmentation layer to KerasCV](https://github.com/keras-team/keras-cv)
- [read through the existing KerasCV augmentation layers](https://tinyurl.com/4txy4m3t)
| keras-io/guides/md/keras_cv/custom_image_augmentations.md/0 | {
"file_path": "keras-io/guides/md/keras_cv/custom_image_augmentations.md",
"repo_id": "keras-io",
"token_count": 6524
} | 132 |
# Migrating Keras 2 code to multi-backend Keras 3
**Author:** [Divyashree Sreepathihalli](https://github.com/divyashreepathihalli)<br>
**Date created:** 2023/10/23<br>
**Last modified:** 2023/10/30<br>
**Description:** Instructions & troubleshooting for migrating your Keras 2 code to multi-backend Keras 3.
<img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/guides/ipynb/migrating_to_keras_3.ipynb) <span class="k-dot">β’</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/guides/migrating_to_keras_3.py)
This guide will help you migrate TensorFlow-only Keras 2 code to multi-backend Keras
3 code. The overhead for the migration is minimal. Once you have migrated,
you can run Keras workflows on top of either JAX, TensorFlow, or PyTorch.
This guide has two parts:
1. Migrating your legacy Keras 2 code to Keras 3, running on top of the TensorFlow backend.
This is generally very easy, though there are minor issues to be mindful of, that we will go over
in detail.
2. Further migrating your Keras 3 + TensorFlow code to multi-backend Keras 3, so that it can run on
JAX and PyTorch.
Let's get started.
---
## Setup
First, let's install `keras-nightly`.
This example uses the TensorFlow backend (`os.environ["KERAS_BACKEND"] = "tensorflow"`).
After you've migrated your code, you can change the `"tensorflow"` string to `"jax"` or `"torch"`
and click "Restart runtime" in Colab, and your code will run on the JAX or PyTorch backend.
```python
!pip install -q keras-nightly
```
```python
import os
os.environ["KERAS_BACKEND"] = "tensorflow"
import keras
import tensorflow as tf
import numpy as np
```
---
## Going from Keras 2 to Keras 3 with the TensorFlow backend
First, replace your imports:
1. Replace `from tensorflow import keras` to `import keras`
2. Replace `from tensorflow.keras import xyz` (e.g. `from tensorflow.keras import layers`)
to `from keras import xyz` (e.g. `from keras import layers`)
3. Replace `tf.keras.*` to `keras.*`
Next, start running your tests. Most of the time, your code will execute on Keras 3 just fine.
All issues you might encounter are detailed below, with their fixes.
### `jit_compile` is set to `True` by default on GPU.
The default value of the `jit_compile` argument to `Model.compile()` has been set to
`True` on GPU in Keras 3. This means that models will be compiled with Just-In-Time (JIT)
compilation by default on GPU.
JIT compilation can improve the performance of some models. However, it may not work with
all TensorFlow operations. If you are using a custom model or layer and you see an
XLA-related error, you may need to set the `jit_compile` argument to `False`. Here is a list
of [known issues](https://www.tensorflow.org/xla/known_issues) encountered when
using XLA with TensorFlow. In addition to these issues, there are some
ops that are not supported by XLA.
The error message you could encounter would be as follows:
```
Detected unsupported operations when trying to compile graph
__inference_one_step_on_data_125[] on XLA_GPU_JIT
```
For example, the following snippet of code will reproduce the above error:
```python
class MyModel(keras.Model):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def call(self, inputs):
string_input = tf.strings.as_string(inputs)
return tf.strings.to_number(string_input)
subclass_model = MyModel()
x_train = np.array([[1, 2, 3], [4, 5, 6]])
subclass_model.compile(optimizer="sgd", loss="mse")
subclass_model.predict(x_train)
```
**How to fix it:** set `jit_compile=False` in `model.compile(..., jit_compile=False)`,
or set the `jit_compile` attribute to `False`, like this:
```python
class MyModel(keras.Model):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def call(self, inputs):
# tf.strings ops aren't support by XLA
string_input = tf.strings.as_string(inputs)
return tf.strings.to_number(string_input)
subclass_model = MyModel()
x_train = np.array([[1, 2, 3], [4, 5, 6]])
subclass_model.jit_compile = False
subclass_model.predict(x_train)
```
<div class="k-default-codeblock">
```
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 45ms/step
array([[1., 2., 3.],
[4., 5., 6.]], dtype=float32)
```
</div>
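The same fix can also be applied at compile time rather than through the attribute, as mentioned above. A minimal sketch reusing `MyModel` and `x_train` from the snippet above:
```python
subclass_model = MyModel()
subclass_model.compile(optimizer="sgd", loss="mse", jit_compile=False)
subclass_model.predict(x_train)
```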
### Saving a model in the TF SavedModel format
Saving to the TF SavedModel format via `model.save()` is no longer supported in Keras 3.
The error message you could encounter would be as follows:
```
>>> model.save("mymodel")
ValueError: Invalid filepath extension for saving. Please add either a `.keras` extension
for the native Keras format (recommended) or a `.h5` extension. Use
`tf.saved_model.save()` if you want to export a SavedModel for use with
TFLite/TFServing/etc. Received: filepath=saved_model.
```
The following snippet of code will reproduce the above error:
```python
sequential_model = keras.Sequential([
keras.layers.Dense(2)
])
sequential_model.save("saved_model")
```
**How to fix it:** use `tf.saved_model.save` instead of `model.save`
```python
sequential_model = keras.Sequential([keras.layers.Dense(2)])
sequential_model(np.random.rand(3, 5))
tf.saved_model.save(sequential_model, "saved_model")
```
<div class="k-default-codeblock">
```
INFO:tensorflow:Assets written to: saved_model/assets
INFO:tensorflow:Assets written to: saved_model/assets
```
</div>
### Loading a TF SavedModel
Loading a TF SavedModel file via `keras.models.load_model()` is no longer supported.
If you try to use `keras.models.load_model()` with a TF SavedModel, you will get the following error:
```python
ValueError: File format not supported: filepath=saved_model. Keras 3 only supports V3
`.keras` files and legacy H5 format files (`.h5` extension). Note that the legacy
SavedModel format is not supported by `load_model()` in Keras 3. In order to reload a
TensorFlow SavedModel as an inference-only layer in Keras 3, use
`keras.layers.TFSMLayer(saved_model, call_endpoint='serving_default')` (note that your
`call_endpoint` might have a different name).
```
The following snippet of code will reproduce the above error:
```python
keras.models.load_model("saved_model")
```
**How to fix it:** Use `keras.layers.TFSMLayer(filepath, call_endpoint="serving_default")` to reload a TF
SavedModel as a Keras layer. This is not limited to SavedModels that originate from Keras -- it will work
with any SavedModel, e.g. TF-Hub models.
```python
keras.layers.TFSMLayer("saved_model", call_endpoint="serving_default")
```
<div class="k-default-codeblock">
```
<TFSMLayer name=tfsm_layer, built=True>
```
</div>
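The reloaded layer behaves like any other inference-only Keras layer, so you can call it directly on data. Note that the structure of the outputs depends on the SavedModel's serving signature, so this sketch may return a dict of named tensors rather than a plain array:
```python
reloaded_layer = keras.layers.TFSMLayer("saved_model", call_endpoint="serving_default")
outputs = reloaded_layer(np.random.rand(3, 5).astype("float32"))
print(outputs)
```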
### Using deeply nested inputs in Functional Models
`Model()` can no longer be passed deeply nested inputs/outputs (nested more than 1 level
deep, e.g. lists of lists of tensors).
You would encounter errors as follows:
```
ValueError: When providing `inputs` as a dict, all values in the dict must be
KerasTensors. Received: inputs={'foo': <KerasTensor shape=(None, 1), dtype=float32,
sparse=None, name=foo>, 'bar': {'baz': <KerasTensor shape=(None, 1), dtype=float32,
sparse=None, name=bar>}} including invalid value {'baz': <KerasTensor shape=(None, 1),
dtype=float32, sparse=None, name=bar>} of type <class 'dict'>
```
The following snippet of code will reproduce the above error:
```python
inputs = {
"foo": keras.Input(shape=(1,), name="foo"),
"bar": {
"baz": keras.Input(shape=(1,), name="bar"),
},
}
outputs = inputs["foo"] + inputs["bar"]["baz"]
keras.Model(inputs, outputs)
```
**How to fix it:** replace nested inputs with dicts, lists, or tuples
of input tensors that are nested at most one level deep.
```python
inputs = {
"foo": keras.Input(shape=(1,), name="foo"),
"bar": keras.Input(shape=(1,), name="bar"),
}
outputs = inputs["foo"] + inputs["bar"]
keras.Model(inputs, outputs)
```
<div class="k-default-codeblock">
```
<Functional name=functional_2, built=True>
```
</div>
### TF autograph
In Keras 2, TF autograph is enabled by default on the `call()` method of custom
layers. In Keras 3, it is not. This means you may have to use cond ops if you're using
control flow, or alternatively you can decorate your `call()` method with `@tf.function`.
You would encounter an error as follows:
```
OperatorNotAllowedInGraphError: Exception encountered when calling MyCustomLayer.call().
Using a symbolic `tf.Tensor` as a Python `bool` is not allowed. You can attempt the
following resolutions to the problem: If you are running in Graph mode, use Eager
execution mode or decorate this function with @tf.function. If you are using AutoGraph,
you can try decorating this function with @tf.function. If that does not work, then you
may be using an unsupported feature or your source code may not be visible to AutoGraph.
Here is a [link for more information](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/ref
erence/limitations.md#access-to-source-code).
```
The following snippet of code will reproduce the above error:
```python
class MyCustomLayer(keras.layers.Layer):
def call(self, inputs):
if tf.random.uniform(()) > 0.5:
return inputs * 2
else:
return inputs / 2
layer = MyCustomLayer()
data = np.random.uniform(size=[3, 3])
model = keras.models.Sequential([layer])
model.compile(optimizer="adam", loss="mse")
model.predict(data)
```
**How to fix it:** decorate your `call()` method with `@tf.function`
```python
class MyCustomLayer(keras.layers.Layer):
@tf.function()
def call(self, inputs):
if tf.random.uniform(()) > 0.5:
return inputs * 2
else:
return inputs / 2
layer = MyCustomLayer()
data = np.random.uniform(size=[3, 3])
model = keras.models.Sequential([layer])
model.compile(optimizer="adam", loss="mse")
model.predict(data)
```
<div class="k-default-codeblock">
```
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 41ms/step
array([[0.69081205, 1.0757748 , 0.06216738],
[0.86100876, 0.92610997, 1.7946503 ],
[1.0368572 , 1.0535108 , 1.1335285 ]], dtype=float32)
```
</div>
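If you would rather keep the layer free of TensorFlow-specific decorators, the same branch can be written with `keras.ops.cond`, which also keeps the layer backend-agnostic. This is a sketch, not part of the original example:
```python
class MyCondLayer(keras.layers.Layer):
    def call(self, inputs):
        # Express the branch as a cond op instead of Python control flow.
        return keras.ops.cond(
            keras.random.uniform(()) > 0.5,
            lambda: inputs * 2,
            lambda: inputs / 2,
        )
layer = MyCondLayer()
data = np.random.uniform(size=[3, 3])
model = keras.models.Sequential([layer])
model.compile(optimizer="adam", loss="mse")
model.predict(data)
```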
### Calling TF ops with a `KerasTensor`
Using a TF op on a Keras tensor during functional model construction is disallowed: "A
KerasTensor cannot be used as input to a TensorFlow function".
The error you would encounter would be as follows:
```
ValueError: A KerasTensor cannot be used as input to a TensorFlow function. A KerasTensor
is a symbolic placeholder for a shape and dtype, used when constructing Keras Functional
models or Keras Functions. You can only use it as input to a Keras layer or a Keras
operation (from the namespaces `keras.layers` and `keras.operations`).
```
The following snippet of code will reproduce the error:
```python
input = keras.layers.Input([2, 2, 1])
tf.squeeze(input)
```
**How to fix it:** use an equivalent op from `keras.ops`.
```python
input = keras.layers.Input([2, 2, 1])
keras.ops.squeeze(input)
```
<div class="k-default-codeblock">
```
<KerasTensor shape=(None, 2, 2), dtype=float32, sparse=None, name=keras_tensor_6>
```
</div>
### Multi-output model `evaluate()`
The `evaluate()` method of a multi-output model no longer returns individual output
losses separately. Instead, you should utilize the `metrics` argument in the `compile()`
method to keep track of these losses.
When dealing with multiple named outputs, such as output_a and output_b, the legacy
`tf.keras` would include <output_a>_loss, <output_b>_loss, and similar entries in
metrics. However, in Keras 3.0, these entries are not automatically added to metrics.
They must be explicitly provided in the metrics list for each individual output.
The following snippet of code will reproduce the above behavior:
```python
from keras import layers
# A functional model with multiple outputs
inputs = layers.Input(shape=(10,))
x1 = layers.Dense(5, activation='relu')(inputs)
x2 = layers.Dense(5, activation='relu')(x1)
output_1 = layers.Dense(5, activation='softmax', name="output_1")(x1)
output_2 = layers.Dense(5, activation='softmax', name="output_2")(x2)
model = keras.Model(inputs=inputs, outputs=[output_1, output_2])
model.compile(optimizer='adam', loss='categorical_crossentropy')
# dummy data
x_test = np.random.uniform(size=[10, 10])
y_test = np.random.uniform(size=[10, 5])
model.evaluate(x_test, y_test)
```
**How to fix it:** specify the metric you want tracked for each output in the
`metrics` argument of `compile()`:
```python
from keras import layers
# A functional model with multiple outputs
inputs = layers.Input(shape=(10,))
x1 = layers.Dense(5, activation="relu")(inputs)
x2 = layers.Dense(5, activation="relu")(x1)
output_1 = layers.Dense(5, activation="softmax", name="output_1")(x1)
output_2 = layers.Dense(5, activation="softmax", name="output_2")(x2)
# dummy data
x_test = np.random.uniform(size=[10, 10])
y_test = np.random.uniform(size=[10, 5])
multi_output_model = keras.Model(inputs=inputs, outputs=[output_1, output_2])
multi_output_model.compile(
optimizer="adam",
loss="categorical_crossentropy",
metrics=["categorical_crossentropy", "categorical_crossentropy"],
)
multi_output_model.evaluate(x_test, y_test)
```
<div class="k-default-codeblock">
```
1/1 ββββββββββββββββββββ 0s 111ms/step - loss: 3.7628 - output_1_categorical_crossentropy: 3.7628
[3.762784481048584, 3.762784481048584]
```
</div>
### TensorFlow variables tracking
Setting a `tf.Variable` as an attribute of a Keras 3 layer or model will not automatically
track the variable, unlike in Keras 2. The following snippet of code will show that the `tf.Variables`
are not being tracked.
```python
class MyCustomLayer(keras.layers.Layer):
def __init__(self, units):
super().__init__()
self.units = units
def build(self, input_shape):
input_dim = input_shape[-1]
self.w = tf.Variable(initial_value=tf.zeros([input_dim, self.units]))
self.b = tf.Variable(initial_value=tf.zeros([self.units,]))
def call(self, inputs):
return keras.ops.matmul(inputs, self.w) + self.b
layer = MyCustomLayer(3)
data = np.random.uniform(size=[3, 3])
model = keras.models.Sequential([layer])
model.compile(optimizer="adam", loss="mse")
model.predict(data)
# The model does not have any trainable variables
for layer in model.layers:
print(layer.trainable_variables)
```
You will see the following warning:
```
UserWarning: The model does not have any trainable weights.
warnings.warn("The model does not have any trainable weights.")
```
**How to fix it:** use the `self.add_weight()` method or opt for a `keras.Variable` instead. If you
are currently using `tf.Variable`, you can switch to `keras.Variable`.
```python
class MyCustomLayer(keras.layers.Layer):
def __init__(self, units):
super().__init__()
self.units = units
def build(self, input_shape):
input_dim = input_shape[-1]
self.w = self.add_weight(
shape=[input_dim, self.units],
initializer="zeros",
)
self.b = self.add_weight(
shape=[
self.units,
],
initializer="zeros",
)
def call(self, inputs):
return keras.ops.matmul(inputs, self.w) + self.b
layer = MyCustomLayer(3)
data = np.random.uniform(size=[3, 3])
model = keras.models.Sequential([layer])
model.compile(optimizer="adam", loss="mse")
model.predict(data)
# Verify that the variables are now being tracked
for layer in model.layers:
print(layer.trainable_variables)
```
<div class="k-default-codeblock">
```
1/1 ββββββββββββββββββββ 0s 30ms/step
[<KerasVariable shape=(3, 3), dtype=float32, path=sequential_2/my_custom_layer_1/variable>, <KerasVariable shape=(3,), dtype=float32, path=sequential_2/my_custom_layer_1/variable_1>]
```
</div>
### `None` entries in nested `call()` arguments
`None` entries are not allowed as part of nested (e.g. list/tuples) tensor
arguments in `Layer.call()`, nor as part of `call()`'s nested return values.
If the `None` in the argument is intentional and serves a specific purpose,
ensure that the argument is optional and structure it as a separate parameter.
For example, consider defining the `call` method with an optional argument.
The following snippet of code will reproduce the error.
```python
class CustomLayer(keras.layers.Layer):
def __init__(self):
super().__init__()
def call(self, inputs):
foo = inputs["foo"]
baz = inputs["bar"]["baz"]
if baz is not None:
return foo + baz
return foo
layer = CustomLayer()
inputs = {
"foo": keras.Input(shape=(1,), name="foo"),
"bar": {
"baz": None,
},
}
layer(inputs)
```
**How to fix it:**
**Solution 1:** Replace `None` with a value, like this:
```python
class CustomLayer(keras.layers.Layer):
def __init__(self):
super().__init__()
def call(self, inputs):
foo = inputs["foo"]
baz = inputs["bar"]["baz"]
return foo + baz
layer = CustomLayer()
inputs = {
"foo": keras.Input(shape=(1,), name="foo"),
"bar": {
"baz": keras.Input(shape=(1,), name="bar"),
},
}
layer(inputs)
```
<div class="k-default-codeblock">
```
<KerasTensor shape=(None, 1), dtype=float32, sparse=False, name=keras_tensor_14>
```
</div>
**Solution 2:** Define the call method with an optional argument.
Here is an example of this fix:
```python
class CustomLayer(keras.layers.Layer):
def __init__(self):
super().__init__()
def call(self, foo, baz=None):
if baz is not None:
return foo + baz
return foo
layer = CustomLayer()
foo = keras.Input(shape=(1,), name="foo")
baz = None
layer(foo, baz=baz)
```
<div class="k-default-codeblock">
```
<KerasTensor shape=(None, 1), dtype=float32, sparse=False, name=keras_tensor_15>
```
</div>
### State-building issues
Keras 3 is significantly stricter than Keras 2 about when state (e.g. numerical weight variables)
can be created. Keras 3 wants all state to be created before the model can be trained. This is a requirement
for using JAX (whereas TensorFlow was very lenient about state creation timing).
Keras layers should create their state either in their constructor (`__init__()` method) or in their `build()` method.
They should avoid creating state in `call()`.
If you ignore this recommendation and create state in `call()`
anyway (e.g. by calling a previously unbuilt layer), then Keras will attempt to build the layer automatically
by calling the `call()` method on symbolic inputs before training.
However, this attempt at automatic state creation may fail in certain cases.
This will cause an error that looks like this:
```
Layer 'frame_position_embedding' looks like it has unbuilt state,
but Keras is not able to trace the layer `call()` in order to build it automatically.
Possible causes:
1. The `call()` method of your layer may be crashing.
Try to `__call__()` the layer eagerly on some test input first to see if it works.
E.g. `x = np.random.random((3, 4)); y = layer(x)`
2. If the `call()` method is correct, then you may need to implement
the `def build(self, input_shape)` method on your layer.
It should create all variables used by the layer
(e.g. by calling `layer.build()` on all its children layers).
```
You could reproduce this error with the following layer, when used with the JAX backend:
```python
class PositionalEmbedding(keras.layers.Layer):
def __init__(self, sequence_length, output_dim, **kwargs):
super().__init__(**kwargs)
self.position_embeddings = layers.Embedding(
input_dim=sequence_length, output_dim=output_dim
)
self.sequence_length = sequence_length
self.output_dim = output_dim
def call(self, inputs):
inputs = keras.ops.cast(inputs, self.compute_dtype)
length = keras.ops.shape(inputs)[1]
positions = keras.ops.arange(start=0, stop=length, step=1)
embedded_positions = self.position_embeddings(positions)
return inputs + embedded_positions
```
**How to fix it:** Do exactly what the error message asks. First, try to run the layer eagerly
to see if the `call()` method is in fact correct (note: if it was working in Keras 2, then it is correct
and does not need to be changed). If it is indeed correct, then you should implement a `build(self, input_shape)`
method that creates all of the layer's state, including the state of sublayers. Here's the fix as applied for the layer above
(note the `build()` method):
```python
class PositionalEmbedding(keras.layers.Layer):
def __init__(self, sequence_length, output_dim, **kwargs):
super().__init__(**kwargs)
self.position_embeddings = layers.Embedding(
input_dim=sequence_length, output_dim=output_dim
)
self.sequence_length = sequence_length
self.output_dim = output_dim
def build(self, input_shape):
self.position_embeddings.build(input_shape)
def call(self, inputs):
inputs = keras.ops.cast(inputs, self.compute_dtype)
length = keras.ops.shape(inputs)[1]
positions = keras.ops.arange(start=0, stop=length, step=1)
embedded_positions = self.position_embeddings(positions)
return inputs + embedded_positions
```
### Removed features
A small number of legacy features with very low usage were removed from Keras 3 as a cleanup measure:
* `keras.layers.ThresholdedReLU` is removed. Instead, you can simply use the `ReLU` layer
with the argument `threshold` (see the short sketch after this list).
* Symbolic `Layer.add_loss()`: Symbolic `add_loss()` is removed (you can still use
`add_loss()` inside the `call()` method of a layer/model).
* Locally connected layers (`LocallyConnected1D`, `LocallyConnected2D`)
are removed due to very low usage. To
use locally connected layers, copy the layer implementation into your own codebase.
* `keras.layers.experimental.RandomFourierFeatures` is removed due to very low usage.
To use it, copy the layer implementation into your own codebase.
* Removed layer attributes: Layer attributes `metrics`, `dynamic` are removed. `metrics` is still
available on the `Model` class.
* The `constants` and `time_major` arguments in RNN layers are removed.
The `constants` argument was a remnant of Theano and had very low usage. The `time_major`
argument also had very low usage.
* `reset_metrics` argument: The `reset_metrics` argument is removed from `model.*_on_batch()`
methods. This argument had very low usage.
* The `keras.constraints.RadialConstraint` object is removed. This object had very low usage.
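As a short sketch of the first item in the list above (the `threshold` value here is just an illustrative assumption):
```python
import keras

# Keras 2 (removed in Keras 3):
# layer = keras.layers.ThresholdedReLU(theta=1.0)

# Keras 3: the `ReLU` layer exposes an equivalent `threshold` argument.
layer = keras.layers.ReLU(threshold=1.0)
```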
---
## Transitioning to backend-agnostic Keras 3
Keras 3 code with the TensorFlow backend will work with native TensorFlow APIs.
However, if you want your code to be backend-agnostic, you will need to:
- Replace all of the `tf.*` API calls with their equivalent Keras APIs.
- Convert your custom `train_step`/`test_step` methods to a multi-framework
implementation.
- Make sure you're using stateless `keras.random` ops correctly in your layers.
Let's go over each point in detail.
### Switching to Keras ops
In many cases, this is the only thing you need to do to start being able to run
your custom layers and metrics with JAX and PyTorch:
replace any `tf.*`, `tf.math.*`, `tf.linalg.*`, etc. with `keras.ops.*`. Most TF ops
should be consistent with Keras 3. If the names differ, they are
highlighted in this guide.
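For example, a small custom metric written against TensorFlow ops can usually be ported by swapping the op namespace (a minimal sketch; `rmse` is an illustrative helper, not an API from this guide):
```python
from keras import ops

# Before (TensorFlow backend only):
#   return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))

# After (runs on JAX, TensorFlow, and PyTorch):
def rmse(y_true, y_pred):
    return ops.sqrt(ops.mean(ops.square(y_pred - y_true)))
```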
#### NumPy ops
Keras implements the NumPy API as part of `keras.ops`.
The table below only lists a small subset of TensorFlow and Keras ops; ops not listed
are usually named the same in both frameworks (e.g. `reshape`, `matmul`, `cast`, etc.)
| TensorFlow | Keras 3.0 |
|--------------------------------------------|-------------------------------------------|
| `tf.abs` | `keras.ops.absolute` |
| `tf.reduce_all` | `keras.ops.all` |
| `tf.reduce_max` | `keras.ops.amax` |
| `tf.reduce_min` | `keras.ops.amin` |
| `tf.reduce_any` | `keras.ops.any` |
| `tf.concat` | `keras.ops.concatenate` |
| `tf.range` | `keras.ops.arange` |
| `tf.acos` | `keras.ops.arccos` |
| `tf.asin` | `keras.ops.arcsin` |
| `tf.asinh` | `keras.ops.arcsinh` |
| `tf.atan` | `keras.ops.arctan` |
| `tf.atan2` | `keras.ops.arctan2` |
| `tf.atanh` | `keras.ops.arctanh` |
| `tf.convert_to_tensor` | `keras.ops.convert_to_tensor` |
| `tf.reduce_mean` | `keras.ops.mean` |
| `tf.clip_by_value` | `keras.ops.clip` |
| `tf.math.conj` | `keras.ops.conjugate` |
| `tf.linalg.diag_part` | `keras.ops.diagonal` |
| `tf.reverse` | `keras.ops.flip` |
| `tf.gather` | `keras.ops.take` |
| `tf.math.is_finite` | `keras.ops.isfinite` |
| `tf.math.is_inf` | `keras.ops.isinf` |
| `tf.math.is_nan` | `keras.ops.isnan` |
| `tf.reduce_max` | `keras.ops.max` |
| `tf.reduce_mean` | `keras.ops.mean` |
| `tf.reduce_min` | `keras.ops.min` |
| `tf.rank` | `keras.ops.ndim` |
| `tf.math.pow` | `keras.ops.power` |
| `tf.reduce_prod` | `keras.ops.prod` |
| `tf.math.reduce_std` | `keras.ops.std` |
| `tf.reduce_sum` | `keras.ops.sum` |
| `tf.gather` | `keras.ops.take` |
| `tf.gather_nd` | `keras.ops.take_along_axis` |
| `tf.math.reduce_variance` | `keras.ops.var` |
#### Other ops
| TensorFlow | Keras 3.0 |
|----------------------------------------------------|-------------------------------------------------------------------|
| `tf.nn.sigmoid_cross_entropy_with_logits` | `keras.ops.binary_crossentropy` (mind the `from_logits` argument) |
| `tf.nn.sparse_softmax_cross_entropy_with_logits` | `keras.ops.sparse_categorical_crossentropy` (mind the `from_logits` argument)|
| `tf.nn.softmax_cross_entropy_with_logits`           | `keras.ops.categorical_crossentropy(target, output, from_logits=False, axis=-1)`|
| `tf.nn.conv1d`, `tf.nn.conv2d`, `tf.nn.conv3d`, `tf.nn.convolution` | `keras.ops.conv` |
| `tf.nn.conv_transpose`, `tf.nn.conv1d_transpose`, `tf.nn.conv2d_transpose`, `tf.nn.conv3d_transpose` | `keras.ops.conv_transpose` |
| `tf.nn.depthwise_conv2d` | `keras.ops.depthwise_conv` |
| `tf.nn.separable_conv2d` | `keras.ops.separable_conv` |
| `tf.nn.batch_normalization` | No direct equivalent; use `keras.layers.BatchNormalization` |
| `tf.nn.dropout` | `keras.random.dropout` |
| `tf.nn.embedding_lookup` | `keras.ops.take` |
| `tf.nn.l2_normalize` | `keras.utils.normalize` (not an op) |
| `x.numpy` | `keras.ops.convert_to_numpy` |
| `tf.scatter_nd_update` | `keras.ops.scatter_update` |
| `tf.tensor_scatter_nd_update` | `keras.ops.slice_update` |
| `tf.signal.fft2d` | `keras.ops.fft2` |
| `tf.signal.inverse_stft` | `keras.ops.istft` |
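As a concrete illustration of the `from_logits` caveat mentioned in the table above (the tensors below are dummy values for illustration):
```python
from keras import ops

logits = ops.array([[2.0, -1.0], [-3.0, 0.5]])
labels = ops.array([[1.0, 0.0], [0.0, 1.0]])

# TF: tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
# Keras 3: pass `from_logits=True` since the inputs are unnormalized logits.
loss = ops.binary_crossentropy(labels, logits, from_logits=True)
```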
### Custom `train_step()` methods
Your models may include a custom `train_step()` or `test_step()` method that relies
on TensorFlow-only APIs -- for instance, your `train_step()` method may leverage TensorFlow's `tf.GradientTape`.
To convert such models to run on JAX or PyTorch, you will have to write a different `train_step()` implementation
for each backend you want to support.
In some cases, you might be able to simply override the `Model.compute_loss()` method and make it fully backend-agnostic,
instead of overriding `train_step()`. Here's an example of a model with a custom `compute_loss()` method which works
across JAX, TensorFlow, and PyTorch:
```python
class MyModel(keras.Model):
def compute_loss(self, x=None, y=None, y_pred=None, sample_weight=None):
loss = keras.ops.sum(keras.losses.mean_squared_error(y, y_pred, sample_weight))
return loss
```
If you need to modify the optimization mechanism itself, beyond the loss computation,
then you will need to override `train_step()`, and implement one `train_step` method per backend, like below.
See the following guides for details on how each backend should be handled:
- [Customizing what happens in `fit()` with JAX](https://keras.io/guides/custom_train_step_in_jax/)
- [Customizing what happens in `fit()` with TensorFlow](https://keras.io/guides/custom_train_step_in_tensorflow/)
- [Customizing what happens in `fit()` with PyTorch](https://keras.io/guides/custom_train_step_in_torch/)
```python
class MyModel(keras.Model):
def train_step(self, *args, **kwargs):
if keras.backend.backend() == "jax":
return self._jax_train_step(*args, **kwargs)
elif keras.backend.backend() == "tensorflow":
return self._tensorflow_train_step(*args, **kwargs)
elif keras.backend.backend() == "torch":
return self._torch_train_step(*args, **kwargs)
def _jax_train_step(self, state, data):
pass # See guide: keras.io/guides/custom_train_step_in_jax/
def _tensorflow_train_step(self, data):
pass # See guide: keras.io/guides/custom_train_step_in_tensorflow/
def _torch_train_step(self, data):
pass # See guide: keras.io/guides/custom_train_step_in_torch/
```
### RNG-using layers
Keras 3 has a new `keras.random` namespace, containing:
- `keras.random.normal`
- `keras.random.uniform`
- `keras.random.shuffle`
- etc.
These operations are **stateless**, which means that if you pass a `seed`
argument, they will return the same result every time. Like this:
```python
print(keras.random.normal(shape=(), seed=123))
print(keras.random.normal(shape=(), seed=123))
```
<div class="k-default-codeblock">
```
tf.Tensor(0.7832616, shape=(), dtype=float32)
tf.Tensor(0.7832616, shape=(), dtype=float32)
```
</div>
Crucially, this differs from the behavior of stateful `tf.random` ops:
```python
print(tf.random.normal(shape=(), seed=123))
print(tf.random.normal(shape=(), seed=123))
```
<div class="k-default-codeblock">
```
tf.Tensor(2.4435377, shape=(), dtype=float32)
tf.Tensor(-0.6386405, shape=(), dtype=float32)
```
</div>
When you write an RNG-using layer, such as a custom dropout layer, you will
want to use a different seed value at each layer call. However, you cannot
just increment a Python integer and pass it, because while this would work fine
when executed eagerly, it would not work as expected when using compilation
(which is available with JAX, TensorFlow, and PyTorch). When compiling the layer,
the first Python integer seed value seen by the layer would be hardcoded into the
compiled graph.
To address this, you should pass as the `seed` argument an instance of a
stateful `keras.random.SeedGenerator` object, like this:
```python
seed_generator = keras.random.SeedGenerator(1337)
print(keras.random.normal(shape=(), seed=seed_generator))
print(keras.random.normal(shape=(), seed=seed_generator))
```
<div class="k-default-codeblock">
```
tf.Tensor(0.6077996, shape=(), dtype=float32)
tf.Tensor(0.8211102, shape=(), dtype=float32)
```
</div>
So when writing an RNG-using layer, you would use the following pattern:
```python
class RandomNoiseLayer(keras.layers.Layer):
def __init__(self, noise_rate, **kwargs):
super().__init__(**kwargs)
self.noise_rate = noise_rate
self.seed_generator = keras.random.SeedGenerator(1337)
def call(self, inputs):
        noise = keras.random.uniform(
            # `shape` is required; generate noise with the same shape as the input.
            shape=keras.ops.shape(inputs),
            minval=0,
            maxval=self.noise_rate,
            seed=self.seed_generator,
        )
return inputs + noise
```
Such a layer is safe to use in any setting -- in eager execution or in a compiled model. Each
layer call will be using a different seed value, as expected.
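As a quick sanity check (an assumed snippet using the `RandomNoiseLayer` defined above), two calls on the same input should add different noise, since each call draws a new seed from the `SeedGenerator`:
```python
layer = RandomNoiseLayer(noise_rate=0.1)
x = keras.ops.zeros((2, 4))
# The two outputs differ, whether the layer runs eagerly or inside a
# compiled model.
print(layer(x))
print(layer(x))
```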
| keras-io/guides/md/migrating_to_keras_3.md/0 | {
"file_path": "keras-io/guides/md/migrating_to_keras_3.md",
"repo_id": "keras-io",
"token_count": 13936
} | 133 |
<meta http-equiv="refresh" content="0; URL='https://keras.io/api/keras_nlp/modeling_layers/transformer_decoder/'" />
| keras-io/redirects/api/keras_nlp/layers/transformer_decoder/index.html/0 | {
"file_path": "keras-io/redirects/api/keras_nlp/layers/transformer_decoder/index.html",
"repo_id": "keras-io",
"token_count": 47
} | 134 |
<meta http-equiv="refresh" content="0; URL='https://keras.io/getting_started/faq/'" />
| keras-io/redirects/getting-started/faq/index.html/0 | {
"file_path": "keras-io/redirects/getting-started/faq/index.html",
"repo_id": "keras-io",
"token_count": 34
} | 135 |
<meta http-equiv="refresh" content="0; URL='https://keras.io/api/layers/recurrent_layers/'" />
| keras-io/redirects/layers/recurrent/index.html/0 | {
"file_path": "keras-io/redirects/layers/recurrent/index.html",
"repo_id": "keras-io",
"token_count": 38
} | 136 |
<meta http-equiv="refresh" content="0; URL='https://keras.io/api/utils/model_plotting_utils/'" />
| keras-io/redirects/visualization/index.html/0 | {
"file_path": "keras-io/redirects/visualization/index.html",
"repo_id": "keras-io",
"token_count": 38
} | 137 |
<jupyter_start><jupyter_text>Learning to Resize in Computer Vision**Author:** [Sayak Paul](https://twitter.com/RisingSayak)**Date created:** 2021/04/30**Last modified:** 2023/12/18**Description:** How to optimally learn representations of images for a given resolution. It is a common belief that if we constrain vision models to perceive things as humans do,their performance can be improved. For example, in [this work](https://arxiv.org/abs/1811.12231),Geirhos et al. showed that the vision models pre-trained on the ImageNet-1k dataset arebiased towards texture, whereas human beings mostly use the shape descriptor to develop acommon perception. But does this belief always apply, especially when it comes to improvingthe performance of vision models?It turns out it may not always be the case. When training vision models, it is common toresize images to a lower dimension ((224 x 224), (299 x 299), etc.) to allow mini-batchlearning and also to keep up the compute limitations. We generally make use of imageresizing methods like **bilinear interpolation** for this step and the resized images donot lose much of their perceptual character to the human eyes. In[Learning to Resize Images for Computer Vision Tasks](https://arxiv.org/abs/2103.09950v1), Talebi et al. showthat if we try to optimize the perceptual quality of the images for the vision modelsrather than the human eyes, their performance can further be improved. They investigatethe following question:**For a given image resolution and a model, how to best resize the given images?**As shown in the paper, this idea helps to consistently improve the performance of thecommon vision models (pre-trained on ImageNet-1k) like DenseNet-121, ResNet-50,MobileNetV2, and EfficientNets. In this example, we will implement the learnable imageresizing module as proposed in the paper and demonstrate that on the[Cats and Dogs dataset](https://www.microsoft.com/en-us/download/details.aspx?id=54765)using the [DenseNet-121](https://arxiv.org/abs/1608.06993) architecture. Setup<jupyter_code>import os
os.environ["KERAS_BACKEND"] = "tensorflow"
import keras
from keras import ops
from keras import layers
import tensorflow as tf
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
import matplotlib.pyplot as plt
import numpy as np<jupyter_output><empty_output><jupyter_text>Define hyperparameters In order to facilitate mini-batch learning, we need to have a fixed shape for the imagesinside a given batch. This is why an initial resizing is required. We first resize allthe images to (300 x 300) shape and then learn their optimal representation for the(150 x 150) resolution.<jupyter_code>INP_SIZE = (300, 300)
TARGET_SIZE = (150, 150)
INTERPOLATION = "bilinear"
AUTO = tf.data.AUTOTUNE
BATCH_SIZE = 64
EPOCHS = 5<jupyter_output><empty_output><jupyter_text>In this example, we will use the bilinear interpolation but the learnable image resizermodule is not dependent on any specific interpolation method. We can also use others,such as bicubic. Load and prepare the datasetFor this example, we will only use 40% of the total training dataset.<jupyter_code>train_ds, validation_ds = tfds.load(
"cats_vs_dogs",
# Reserve 10% for validation
split=["train[:40%]", "train[40%:50%]"],
as_supervised=True,
)
def preprocess_dataset(image, label):
image = ops.image.resize(image, (INP_SIZE[0], INP_SIZE[1]))
label = ops.one_hot(label, num_classes=2)
return (image, label)
train_ds = (
train_ds.shuffle(BATCH_SIZE * 100)
.map(preprocess_dataset, num_parallel_calls=AUTO)
.batch(BATCH_SIZE)
.prefetch(AUTO)
)
validation_ds = (
validation_ds.map(preprocess_dataset, num_parallel_calls=AUTO)
.batch(BATCH_SIZE)
.prefetch(AUTO)
)<jupyter_output><empty_output><jupyter_text>Define the learnable resizer utilitiesThe figure below (courtesy: [Learning to Resize Images for Computer Vision Tasks](https://arxiv.org/abs/2103.09950v1))presents the structure of the learnable resizing module:<jupyter_code>def conv_block(x, filters, kernel_size, strides, activation=layers.LeakyReLU(0.2)):
x = layers.Conv2D(filters, kernel_size, strides, padding="same", use_bias=False)(x)
x = layers.BatchNormalization()(x)
if activation:
x = activation(x)
return x
def res_block(x):
inputs = x
x = conv_block(x, 16, 3, 1)
x = conv_block(x, 16, 3, 1, activation=None)
return layers.Add()([inputs, x])
# Note: user can change num_res_blocks to >1 also if needed
def get_learnable_resizer(filters=16, num_res_blocks=1, interpolation=INTERPOLATION):
inputs = layers.Input(shape=[None, None, 3])
# First, perform naive resizing.
naive_resize = layers.Resizing(*TARGET_SIZE, interpolation=interpolation)(inputs)
# First convolution block without batch normalization.
x = layers.Conv2D(filters=filters, kernel_size=7, strides=1, padding="same")(inputs)
x = layers.LeakyReLU(0.2)(x)
# Second convolution block with batch normalization.
x = layers.Conv2D(filters=filters, kernel_size=1, strides=1, padding="same")(x)
x = layers.LeakyReLU(0.2)(x)
x = layers.BatchNormalization()(x)
# Intermediate resizing as a bottleneck.
bottleneck = layers.Resizing(*TARGET_SIZE, interpolation=interpolation)(x)
# Residual passes.
# First res_block will get bottleneck output as input
x = res_block(bottleneck)
# Remaining res_blocks will get previous res_block output as input
for _ in range(num_res_blocks - 1):
x = res_block(x)
# Projection.
x = layers.Conv2D(
filters=filters, kernel_size=3, strides=1, padding="same", use_bias=False
)(x)
x = layers.BatchNormalization()(x)
# Skip connection.
x = layers.Add()([bottleneck, x])
# Final resized image.
x = layers.Conv2D(filters=3, kernel_size=7, strides=1, padding="same")(x)
final_resize = layers.Add()([naive_resize, x])
return keras.Model(inputs, final_resize, name="learnable_resizer")
learnable_resizer = get_learnable_resizer()<jupyter_output><empty_output><jupyter_text>Visualize the outputs of the learnable resizing moduleHere, we visualize how the resized images would look like after being passed through therandom weights of the resizer.<jupyter_code>sample_images, _ = next(iter(train_ds))
plt.figure(figsize=(16, 10))
for i, image in enumerate(sample_images[:6]):
image = image / 255
ax = plt.subplot(3, 4, 2 * i + 1)
plt.title("Input Image")
plt.imshow(image.numpy().squeeze())
plt.axis("off")
ax = plt.subplot(3, 4, 2 * i + 2)
resized_image = learnable_resizer(image[None, ...])
plt.title("Resized Image")
plt.imshow(resized_image.numpy().squeeze())
plt.axis("off")<jupyter_output><empty_output><jupyter_text>Model building utility<jupyter_code>def get_model():
backbone = keras.applications.DenseNet121(
weights=None,
include_top=True,
classes=2,
input_shape=((TARGET_SIZE[0], TARGET_SIZE[1], 3)),
)
backbone.trainable = True
inputs = layers.Input((INP_SIZE[0], INP_SIZE[1], 3))
x = layers.Rescaling(scale=1.0 / 255)(inputs)
x = learnable_resizer(x)
outputs = backbone(x)
return keras.Model(inputs, outputs)<jupyter_output><empty_output><jupyter_text>The structure of the learnable image resizer module allows for flexible integrations withdifferent vision models. Compile and train our model with learnable resizer<jupyter_code>model = get_model()
model.compile(
loss=keras.losses.CategoricalCrossentropy(label_smoothing=0.1),
optimizer="sgd",
metrics=["accuracy"],
)
model.fit(train_ds, validation_data=validation_ds, epochs=EPOCHS)<jupyter_output><empty_output><jupyter_text>Visualize the outputs of the trained visualizer<jupyter_code>plt.figure(figsize=(16, 10))
for i, image in enumerate(sample_images[:6]):
image = image / 255
ax = plt.subplot(3, 4, 2 * i + 1)
plt.title("Input Image")
plt.imshow(image.numpy().squeeze())
plt.axis("off")
ax = plt.subplot(3, 4, 2 * i + 2)
resized_image = learnable_resizer(image[None, ...])
plt.title("Resized Image")
plt.imshow(resized_image.numpy().squeeze() / 10)
plt.axis("off")<jupyter_output><empty_output> | keras-io/scripts/tmp_2343486/learnable_resizer.ipynb/0 | {
"file_path": "keras-io/scripts/tmp_2343486/learnable_resizer.ipynb",
"repo_id": "keras-io",
"token_count": 2879
} | 138 |
# KerasNLP Layers
KerasNLP layers are `keras.Layer` subclasses for NLP-specific use cases.
These layers are building blocks for common NLP model architectures
(e.g. Transformers).
{{toc}}
| keras-io/templates/api/keras_nlp/layers/index.md/0 | {
"file_path": "keras-io/templates/api/keras_nlp/layers/index.md",
"repo_id": "keras-io",
"token_count": 59
} | 139 |
# Layer weight initializers
## Usage of initializers
Initializers define the way to set the initial random weights of Keras layers.
The keyword arguments used for passing initializers to layers depends on the layer.
Usually, it is simply `kernel_initializer` and `bias_initializer`:
```python
from keras import layers
from keras import initializers
layer = layers.Dense(
units=64,
kernel_initializer=initializers.RandomNormal(stddev=0.01),
bias_initializer=initializers.Zeros()
)
```
All built-in initializers can also be passed via their string identifier:
```python
layer = layers.Dense(
units=64,
kernel_initializer='random_normal',
bias_initializer='zeros'
)
```
---
## Available initializers
The following built-in initializers are available as part of the `keras.initializers` module:
{{autogenerated}}
## Creating custom initializers
### Simple callables
You can pass a custom callable as initializer.
It must take the arguments `shape` (shape of the variable to initialize) and `dtype` (dtype of generated values):
```python
def my_init(shape, dtype=None):
return keras.random.normal(shape, dtype=dtype)
layer = Dense(64, kernel_initializer=my_init)
```
### `Initializer` subclasses
If you need to configure your initializer via various arguments (e.g. `stddev` argument in `RandomNormal`),
you should implement it as a subclass of `keras.initializers.Initializer`.
Initializers should implement a `__call__` method with the following
signature:
```python
def __call__(self, shape, dtype=None):
# returns a tensor of shape `shape` and dtype `dtype`
# containing values drawn from a distribution of your choice.
```
Optionally, you can also implement the method `get_config` and the class
method `from_config` in order to support serialization -- just like with
any Keras object.
Here's a simple example: a random normal initializer.
```python
class ExampleRandomNormal(keras.initializers.Initializer):
def __init__(self, mean, stddev):
self.mean = mean
self.stddev = stddev
    def __call__(self, shape, dtype=None):
return keras.random.normal(
shape, mean=self.mean, stddev=self.stddev, dtype=dtype)
def get_config(self): # To support serialization
return {'mean': self.mean, 'stddev': self.stddev}
```
Note that we don't have to implement `from_config` in the example above since
the constructor arguments of the class and the keys in the config returned by
`get_config` are the same. In this case, the default `from_config`
works fine.
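As a usage sketch (assuming the `ExampleRandomNormal` class defined above), the custom initializer is passed to a layer just like a built-in one, and the default `from_config` restores it from its config:
```python
init = ExampleRandomNormal(mean=0.0, stddev=0.05)
layer = layers.Dense(units=64, kernel_initializer=init)

# Round-trip through the config; the default `from_config` is sufficient here.
restored = ExampleRandomNormal.from_config(init.get_config())
```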
| keras-io/templates/api/layers/initializers.md/0 | {
"file_path": "keras-io/templates/api/layers/initializers.md",
"repo_id": "keras-io",
"token_count": 801
} | 140 |
.blog-content {
text-align: justify;
padding: 2em;
}
strong {
font-weight: 600;
font-family: Arial;
font-size: 1.1em;
}
.irasto {
width: 96%;
margin-left: 2%;
}
h2 {
margin-top: 1em;
margin-bottom: 1em;
}
h3 {
font-size: 1.5rem;
}
.credits {
text-align: center;
padding: 2em;
padding-top: 0;
}
.credits h2 {
margin-top: 0;
}
.credit-header {
text-transform: uppercase;
font-weight: bolder;
font-family: arial;
font-size: 1.2em;
padding: 0.5em;
padding-top: 2em;
}
.credit-subheader {
text-transform: uppercase;
font-weight: bolder;
font-family: arial;
font-size: 0.9em;
padding: 0.2em;
}
.credit-name {
text-align: center;
}
.subtext {
font-size: 0.8em;
padding-top: 1em;
}
.subcredit-name {
font-size: 0.8em;
padding-top: 0;
} | keras-io/theme/css/announcement.css/0 | {
"file_path": "keras-io/theme/css/announcement.css",
"repo_id": "keras-io",
"token_count": 429
} | 141 |
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta name="description" content="Keras Core documentation">
<meta name="author" content="Keras Team">
<title>Keras: Deep Learning for humans</title>
<!-- Bootstrap core CSS -->
<link href="/css/bootstrap.min.css" rel="stylesheet">
<!-- Custom fonts for this template -->
<link href="https://fonts.googleapis.com/css?family=Open+Sans:wght@300;400;500;600&display=swap" rel="stylesheet">
<!-- Custom styles for this template -->
<link href="/css/landing.css" rel="stylesheet">
<link href="/css/announcement.css" rel="stylesheet">
<link href="{{base_url}}css/monokai.css" rel="stylesheet">
<!-- Google Tag Manager -->
<script>(function(w,d,s,l,i){w[l]=w[l]||[];w[l].push({'gtm.start':
new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0],
j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src=
'https://www.googletagmanager.com/gtm.js?id='+i+dl;f.parentNode.insertBefore(j,f);
})(window,document,'script','dataLayer','GTM-5DNGF4N');
</script>
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-175165319-128', 'auto');
ga('send', 'pageview');
</script>
<!-- End Google Tag Manager -->
</head>
<body>
<!-- Google Tag Manager (noscript) -->
<noscript><iframe src="https://www.googletagmanager.com/ns.html?id=GTM-5DNGF4N"
height="0" width="0" style="display:none;visibility:hidden"></iframe></noscript>
<!-- End Google Tag Manager (noscript) -->
<!-- Masthead -->
<header class="masthead text-center">
<div class="container">
<img src='/img/logo.png' class='logo' />
<div class="row">
<div class="col-xl-8 mx-auto">
<h1 class="mb-5">Introducing Keras 3.0</h1>
<div class="row mx-auto">
<div class="col-md px-1">
<a href='{{base_url}}getting_started/' class="btn btn-block btn-lg btn-primary">Get started</a>
</div>
<div class="col-md px-1">
<a href='{{base_url}}api/' class="btn btn-block btn-lg btn-secondary">API docs</a>
</div>
<div class="col-md px-1">
<a href='{{base_url}}guides/' class="btn btn-block btn-lg btn-secondary">Guides</a>
</div>
<div class="col-md px-1">
<a href='https://github.com/keras-team/keras/' class="btn btn-block btn-lg btn-secondary">GitHub</a>
</div>
</div>
</div>
</div>
</header>
<div class="container">
<div class="row">
<div class="col-lg">
<div class="blog-content">
{{content|safe}}
</div>
</div>
</div>
</div>
</body>
</html> | keras-io/theme/keras_3.html/0 | {
"file_path": "keras-io/theme/keras_3.html",
"repo_id": "keras-io",
"token_count": 1570
} | 142 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Benchmark for text generation."""
import time
import tensorflow as tf
from tensorflow import keras
import keras_nlp
SEED = 42
DATASET_ARGS = {
"vocab_size": 40000,
"num_samples": 1000,
"batch_size": 2,
}
MODEL_ARGS = {
"max_length": 64,
"embed_dim": 768,
"num_layers": 8,
"num_heads": 8,
"ff_dim": 3072,
}
TEST_RUNS = [
{
"sampler": "greedy",
"execution_methods": ["xla", "graph"],
},
{
"sampler": "beam",
"execution_methods": ["xla", "graph"],
},
{
"sampler": "top_k",
"execution_methods": ["xla", "graph"],
},
{
"sampler": "top_p",
"execution_methods": ["xla", "graph"],
},
]
def generate_random_ds(vocab_size, num_samples, batch_size, length, seed):
inputs = tf.random.uniform(
shape=(num_samples, length),
minval=0,
maxval=vocab_size - 1,
dtype=tf.dtypes.int32,
seed=seed,
)
ds = tf.data.Dataset.from_tensor_slices(inputs)
ds = ds.batch(batch_size)
return ds
def build_model(
vocab_size, max_length, embed_dim, num_layers, num_heads, ff_dim
):
inputs = keras.layers.Input(shape=(None,), dtype="int32")
# Embedding.
x = keras_nlp.layers.TokenAndPositionEmbedding(
vocabulary_size=vocab_size,
sequence_length=max_length,
embedding_dim=embed_dim,
mask_zero=True,
)(inputs)
# Transformer decoders.
for _ in range(num_layers):
x = keras_nlp.layers.TransformerDecoder(
num_heads=num_heads,
intermediate_dim=ff_dim,
)(x)
# Output.
outputs = keras.layers.Dense(vocab_size)(x)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
def generate_text(
sampler,
next,
prompt,
jit_compile,
):
class TestModel(tf.keras.Model):
def call(self, inputs):
generated = keras_nlp.samplers.get(sampler)(
next=next,
prompt=inputs,
)
return generated
test_model = TestModel()
test_model.compile(jit_compile=jit_compile)
t0 = time.time()
_ = test_model.predict(prompt)
return time.time() - t0
def main():
keras.utils.set_random_seed(SEED)
csv_path = time.strftime("text_gen_%Y-%m-%d_%H-%M-%S.csv")
ds = generate_random_ds(
vocab_size=DATASET_ARGS["vocab_size"],
num_samples=DATASET_ARGS["num_samples"],
batch_size=DATASET_ARGS["batch_size"],
length=MODEL_ARGS["max_length"],
seed=SEED,
)
model = build_model(
vocab_size=DATASET_ARGS["vocab_size"],
max_length=MODEL_ARGS["max_length"],
embed_dim=MODEL_ARGS["embed_dim"],
num_layers=MODEL_ARGS["num_layers"],
num_heads=MODEL_ARGS["num_heads"],
ff_dim=MODEL_ARGS["ff_dim"],
)
def next(prompt, state, index):
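        # Called by the sampler at each decoding step: returns the logits for
        # the token at `index`, plus the (unchanged) sampler state.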
output = model(prompt)
return output[:, index, :], state
print("*************************************\n")
with open(csv_path, "w") as res_handler:
res_handler.write("decoding_strategy,execution_method,time\n")
for test_run in TEST_RUNS:
sampler = test_run["sampler"]
for execution_method in test_run["execution_methods"]:
print(f"Running {sampler} in {execution_method} mode")
if execution_method == "graph":
jit_compile = False
elif execution_method == "xla":
jit_compile = True
time_taken = generate_text(
sampler=sampler,
next=next,
prompt=ds,
jit_compile=jit_compile,
)
print("Time taken: ", time_taken)
res_handler.write(
f"{sampler},{execution_method}," f"{time_taken}\n"
)
print()
print("*************************************")
print(f"Writing results to {csv_path}")
if __name__ == "__main__":
main()
| keras-nlp/benchmarks/text_generation.py/0 | {
"file_path": "keras-nlp/benchmarks/text_generation.py",
"repo_id": "keras-nlp",
"token_count": 2232
} | 143 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
from tensorflow import keras
from keras_nlp.layers import TransformerDecoder
from keras_nlp.layers import TransformerEncoder
class PositionalEmbedding(keras.layers.Layer):
"""The positional embedding class."""
def __init__(self, sequence_length, vocab_size, embed_dim, **kwargs):
super().__init__(**kwargs)
self.token_embeddings = keras.layers.Embedding(
input_dim=vocab_size, output_dim=embed_dim
)
self.position_embeddings = keras.layers.Embedding(
input_dim=sequence_length, output_dim=embed_dim
)
self.sequence_length = sequence_length
self.vocab_size = vocab_size
self.embed_dim = embed_dim
def call(self, inputs):
length = tf.shape(inputs)[-1]
positions = tf.range(start=0, limit=length, delta=1)
embedded_tokens = self.token_embeddings(inputs)
embedded_positions = self.position_embeddings(positions)
return embedded_tokens + embedded_positions
def compute_mask(self, inputs, mask=None):
return tf.math.not_equal(inputs, 0)
class TranslationModel(keras.Model):
"""The machine translation model.
The model is an encoder-decoder structure model. The encoder is a stack of
`keras_nlp.TransformerEncoder`, and the decoder is a stack of
`keras_nlp.TransformerDecoder`. We also pass in the tokenizer for encoder
and decoder so that during save/load, the tokenizer is also kept.
"""
def __init__(
self,
encoder_tokenizer,
decoder_tokenizer,
num_encoders,
num_decoders,
num_heads,
transformer_intermediate_dim,
encoder_vocab_size,
decoder_vocab_size,
embed_dim,
sequence_length,
):
super().__init__()
self.encoders = []
self.decoders = []
for _ in range(num_encoders):
self.encoders.append(
TransformerEncoder(
num_heads=num_heads,
intermediate_dim=transformer_intermediate_dim,
)
)
for _ in range(num_decoders):
self.decoders.append(
TransformerDecoder(
num_heads=num_heads,
intermediate_dim=transformer_intermediate_dim,
)
)
self.encoder_tokenizer = encoder_tokenizer
self.decoder_tokenizer = decoder_tokenizer
self.encoder_embedding = PositionalEmbedding(
sequence_length=sequence_length,
vocab_size=encoder_vocab_size,
embed_dim=embed_dim,
)
self.decoder_embedding = PositionalEmbedding(
sequence_length=sequence_length,
vocab_size=decoder_vocab_size,
embed_dim=embed_dim,
)
self.dense = keras.layers.Dense(
decoder_vocab_size,
activation="softmax",
)
def call(self, inputs):
encoder_input, decoder_input = (
inputs["encoder_inputs"],
inputs["decoder_inputs"],
)
encoded = self.encoder_embedding(encoder_input)
for encoder in self.encoders:
encoded = encoder(encoded)
decoded = self.decoder_embedding(decoder_input)
for decoder in self.decoders:
decoded = decoder(
decoded,
encoded,
use_causal_mask=True,
)
output = self.dense(decoded)
return output
| keras-nlp/examples/machine_translation/model.py/0 | {
"file_path": "keras-nlp/examples/machine_translation/model.py",
"repo_id": "keras-nlp",
"token_count": 1848
} | 144 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.backend import keras
from keras_nlp.backend import ops
@keras_nlp_export("keras_nlp.layers.ReversibleEmbedding")
class ReversibleEmbedding(keras.layers.Embedding):
"""An embedding layer which can project backwards to the input dim.
This layer is an extension of `keras.layers.Embedding` for language models.
This layer can be called "in reverse" with `reverse=True`, in which case the
layer will linearly project from `output_dim` back to `input_dim`.
By default, the reverse projection will use the transpose of the
`embeddings` weights to project to `input_dim` (weights are "tied"). If
`tie_weights=False`, the model will use a separate, trainable variable for
reverse projection.
This layer has no bias terms.
Args:
input_dim: Integer. Size of the vocabulary,
i.e. maximum integer index + 1.
output_dim: Integer. Dimension of the dense embedding.
tie_weights: Boolean, whether or not the matrix for embedding and
the matrix for the `reverse` projection should share the same
weights.
embeddings_initializer: Initializer for the `embeddings`
matrix (see `keras.initializers`).
embeddings_regularizer: Regularizer function applied to
the `embeddings` matrix (see `keras.regularizers`).
embeddings_constraint: Constraint function applied to
the `embeddings` matrix (see `keras.constraints`).
mask_zero: Boolean, whether or not the input value 0 is a special
"padding" value that should be masked out.
reverse_dtype: The dtype for the reverse projection computation.
For stability, it is usually best to use full precision even when
working with half or mixed precision training.
Call arguments:
inputs: The tensor inputs to the layer.
reverse: Boolean. If `True` the layer will perform a linear projection
from `output_dim` to `input_dim`, instead of a normal embedding
            call. Defaults to `False`.
Examples:
```python
batch_size = 16
vocab_size = 100
hidden_dim = 32
seq_length = 50
# Generate random inputs.
token_ids = np.random.randint(vocab_size, size=(batch_size, seq_length))
embedding = keras_nlp.layers.ReversibleEmbedding(vocab_size, hidden_dim)
# Embed tokens to shape `(batch_size, seq_length, hidden_dim)`.
hidden_states = embedding(token_ids)
# Project hidden states to shape `(batch_size, seq_length, vocab_size)`.
    logits = embedding(hidden_states, reverse=True)
```
References:
- [Vaswani et al., 2017](https://arxiv.org/abs/1706.03762)
- [Press and Wolf, 2016](https://arxiv.org/abs/1608.05859)
"""
def __init__(
self,
input_dim,
output_dim,
tie_weights=True,
embeddings_initializer="uniform",
embeddings_regularizer=None,
embeddings_constraint=None,
mask_zero=False,
reverse_dtype="float32",
**kwargs,
):
super().__init__(
input_dim,
output_dim,
embeddings_initializer=embeddings_initializer,
embeddings_regularizer=embeddings_regularizer,
embeddings_constraint=embeddings_constraint,
mask_zero=mask_zero,
**kwargs,
)
self.tie_weights = tie_weights
self.reverse_dtype = reverse_dtype
def build(self, inputs_shape=None):
super().build(inputs_shape)
if not self.tie_weights:
self.reverse_embeddings = self.add_weight(
name="reverse_embeddings",
shape=(self.output_dim, self.input_dim),
initializer=self.embeddings_initializer,
dtype=self.dtype,
)
def call(self, inputs, reverse=False):
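        # With `reverse=True`, project from `output_dim` back to `input_dim`,
        # reusing the transposed embedding matrix when weights are tied.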
if reverse:
if self.tie_weights:
kernel = ops.transpose(ops.convert_to_tensor(self.embeddings))
else:
kernel = self.reverse_embeddings
inputs = ops.cast(inputs, self.reverse_dtype)
kernel = ops.cast(kernel, self.reverse_dtype)
return ops.matmul(inputs, kernel)
return super().call(inputs)
def get_config(self):
config = super().get_config()
config.update(
{
"tie_weights": self.tie_weights,
"reverse_dtype": self.reverse_dtype,
}
)
return config
def load_own_variables(self, store):
if not self.built:
self.build()
self.embeddings.assign(store["0"])
if not self.tie_weights:
# Handle the case where saved weights are tied, but the layer
# weights untied. We can simply assign the embedding weights to both
# variables in this case.
if len(store.keys()) == 1:
self.reverse_embeddings.assign(np.transpose(store["0"]))
else:
self.reverse_embeddings.assign(store["1"])
def compute_output_spec(self, inputs, reverse=False):
output_shape = list(inputs.shape)
if reverse:
output_shape[-1] = self.input_dim
else:
output_shape += [self.output_dim]
return keras.KerasTensor(output_shape, dtype=self.dtype)
| keras-nlp/keras_nlp/layers/modeling/reversible_embedding.py/0 | {
"file_path": "keras-nlp/keras_nlp/layers/modeling/reversible_embedding.py",
"repo_id": "keras-nlp",
"token_count": 2502
} | 145 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
from keras_nlp.backend import ops
from keras_nlp.layers.preprocessing.masked_lm_mask_generator import (
MaskedLMMaskGenerator,
)
from keras_nlp.tests.test_case import TestCase
class MaskedLMMaskGeneratorTest(TestCase):
def setUp(self):
super().setUp()
self.VOCAB = [
"[UNK]",
"[MASK]",
"[RANDOM]",
"[CLS]",
"[SEP]",
"do",
"you",
"like",
"machine",
"learning",
"welcome",
"to",
"keras",
]
self.mask_token_id = self.VOCAB.index("[MASK]")
self.vocabulary_size = len(self.VOCAB)
def test_mask_ragged(self):
masked_lm_masker = MaskedLMMaskGenerator(
vocabulary_size=self.vocabulary_size,
mask_selection_rate=1,
mask_selection_length=4,
mask_token_id=self.mask_token_id,
mask_token_rate=1,
random_token_rate=0,
)
inputs = [[5, 3, 2], [1, 2, 3, 4]]
x = masked_lm_masker(inputs)
self.assertAllEqual(x["token_ids"], [[1, 1, 1], [1, 1, 1, 1]])
self.assertAllEqual(x["mask_positions"], [[0, 1, 2, 0], [0, 1, 2, 3]])
self.assertAllEqual(x["mask_ids"], [[5, 3, 2, 0], [1, 2, 3, 4]])
def test_mask_dense(self):
masked_lm_masker = MaskedLMMaskGenerator(
vocabulary_size=self.vocabulary_size,
mask_selection_rate=1,
mask_selection_length=4,
mask_token_id=self.mask_token_id,
mask_token_rate=1,
random_token_rate=0,
)
inputs = [[5, 3, 2, 4], [1, 2, 3, 4]]
x = masked_lm_masker(inputs)
self.assertAllEqual(x["token_ids"], [[1, 1, 1, 1], [1, 1, 1, 1]])
self.assertAllEqual(x["mask_positions"], [[0, 1, 2, 3], [0, 1, 2, 3]])
self.assertAllEqual(x["mask_ids"], [[5, 3, 2, 4], [1, 2, 3, 4]])
def test_unbatched(self):
masked_lm_masker = MaskedLMMaskGenerator(
vocabulary_size=self.vocabulary_size,
mask_selection_rate=1,
mask_selection_length=4,
mask_token_id=self.mask_token_id,
mask_token_rate=1,
random_token_rate=0,
)
inputs = [5, 3, 2, 4]
x = masked_lm_masker(inputs)
self.assertAllEqual(x["token_ids"], [1, 1, 1, 1])
self.assertAllEqual(x["mask_positions"], [0, 1, 2, 3])
self.assertAllEqual(x["mask_ids"], [5, 3, 2, 4])
def test_random_replacement(self):
masked_lm_masker = MaskedLMMaskGenerator(
vocabulary_size=10_000,
mask_selection_rate=1,
mask_selection_length=4,
mask_token_id=self.mask_token_id,
mask_token_rate=0,
random_token_rate=1,
)
inputs = [5, 3, 2, 4]
x = masked_lm_masker(inputs)
self.assertNotAllEqual(x["token_ids"], [1, 1, 1, 1])
self.assertAllEqual(x["mask_positions"], [0, 1, 2, 3])
self.assertAllEqual(x["mask_ids"], [5, 3, 2, 4])
def test_number_of_masked_position_as_expected(self):
mask_selection_rate = 0.5
mask_selection_length = 5
inputs = [[0, 1, 2], [0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4]]
# Cap the number of masked tokens at 0, so we can test if
# mask_selection_length takes effect.
mask_selection_length = 0
masked_lm_masker = MaskedLMMaskGenerator(
vocabulary_size=self.vocabulary_size,
mask_selection_rate=mask_selection_rate,
mask_token_id=self.mask_token_id,
mask_selection_length=mask_selection_length,
)
outputs = masked_lm_masker(inputs)
self.assertEqual(tf.reduce_sum(outputs["mask_positions"]), 0)
def test_invalid_mask_token(self):
with self.assertRaisesRegex(ValueError, "Mask token id should be*"):
_ = MaskedLMMaskGenerator(
vocabulary_size=self.vocabulary_size,
mask_selection_rate=0.5,
mask_token_id=self.vocabulary_size,
mask_selection_length=5,
)
def test_unselectable_tokens(self):
unselectable_token_ids = [
self.vocabulary_size - 1,
self.vocabulary_size - 2,
]
masked_lm_masker = MaskedLMMaskGenerator(
vocabulary_size=self.vocabulary_size,
mask_selection_rate=1,
mask_token_id=self.mask_token_id,
mask_selection_length=5,
unselectable_token_ids=unselectable_token_ids,
mask_token_rate=1,
random_token_rate=0,
)
outputs = masked_lm_masker([unselectable_token_ids])
# Verify that no token is masked out.
self.assertEqual(ops.sum(outputs["mask_weights"]), 0)
def test_config(self):
unselectable_token_ids = [
self.vocabulary_size - 1,
self.vocabulary_size - 2,
]
masked_lm_masker = MaskedLMMaskGenerator(
vocabulary_size=self.vocabulary_size,
mask_selection_rate=0.5,
mask_token_id=self.mask_token_id,
mask_selection_length=5,
unselectable_token_ids=unselectable_token_ids,
)
config = masked_lm_masker.get_config()
expected_config = {
"vocabulary_size": self.vocabulary_size,
"unselectable_token_ids": unselectable_token_ids,
}
self.assertDictContainsSubset(expected_config, config)
# Test cloned masked_lm_masker can be run.
cloned_masked_lm_masker = MaskedLMMaskGenerator.from_config(config)
inputs = [[5, 3, 2], [1, 2, 3, 4]]
cloned_masked_lm_masker(inputs)
def test_with_tf_data(self):
ds = tf.data.Dataset.from_tensor_slices(
tf.ones((100, 10), dtype="int32")
)
masked_lm_masker = MaskedLMMaskGenerator(
vocabulary_size=self.vocabulary_size,
mask_selection_rate=0.5,
mask_token_id=self.mask_token_id,
mask_selection_length=5,
)
batch_first = ds.batch(8).map(masked_lm_masker)
batch_second = ds.map(masked_lm_masker).batch(8)
self.assertEqual(
batch_first.take(1).get_single_element()["token_ids"].shape,
batch_second.take(1).get_single_element()["token_ids"].shape,
)
| keras-nlp/keras_nlp/layers/preprocessing/masked_lm_mask_generator_test.py/0 | {
"file_path": "keras-nlp/keras_nlp/layers/preprocessing/masked_lm_mask_generator_test.py",
"repo_id": "keras-nlp",
"token_count": 3528
} | 146 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from keras_nlp.backend import ops
from keras_nlp.metrics.perplexity import Perplexity
from keras_nlp.tests.test_case import TestCase
class PerplexityTest(TestCase):
def test_vars_after_initializing_class(self):
perplexity = Perplexity()
self.assertEqual(perplexity.result(), 0.0)
def test_from_logits_without_masking(self):
perplexity = Perplexity(from_logits=True)
y_true = ops.array([[1, 3, 0], [2, 1, 3]])
y_pred = ops.array(
[
[
[1.034, 4.797, 2.82, 1.154],
[2.258, 1.591, 1.811, 1.852],
[3.216, 1.037, 0.3662, 2.7],
],
[
[1.363, 1.726, 1.898, 2.582],
[1.163, 1.943, 1.761, 1.497],
[2.766, 1.453, 2.61, 2.805],
],
]
)
perplexity_val = perplexity(y_true, y_pred)
self.assertAlmostEqual(perplexity_val, 2.6542, delta=1e-3)
def test_from_logits_with_sample_weight(self):
perplexity = Perplexity(from_logits=True)
y_true = ops.array([[1, 3, 0], [2, 1, 3]])
y_pred = ops.array(
[
[
[1.034, 4.797, 2.82, 1.154],
[2.258, 1.591, 1.811, 1.852],
[3.216, 1.037, 0.3662, 2.7],
],
[
[1.363, 1.726, 1.898, 2.582],
[1.163, 1.943, 1.761, 1.497],
[2.766, 1.453, 2.61, 2.805],
],
]
)
sample_wt = ops.cast(y_true != 0, "int32")
perplexity_val = perplexity(y_true, y_pred, sample_wt)
self.assertAlmostEqual(perplexity_val, 2.8789, delta=1e-3)
def test_from_logits_with_mask_token_id(self):
perplexity = Perplexity(from_logits=True, mask_token_id=0)
y_true = ops.array([[1, 3, 0], [2, 1, 3]])
y_pred = ops.array(
[
[
[1.034, 4.797, 2.82, 1.154],
[2.258, 1.591, 1.811, 1.852],
[3.216, 1.037, 0.3662, 2.7],
],
[
[1.363, 1.726, 1.898, 2.582],
[1.163, 1.943, 1.761, 1.497],
[2.766, 1.453, 2.61, 2.805],
],
]
)
perplexity_val = perplexity(y_true, y_pred)
self.assertAlmostEqual(perplexity_val, 2.8789, delta=1e-3)
def test_from_logits_with_mask_token_id_and_sample_weight(self):
perplexity = Perplexity(from_logits=True, mask_token_id=0)
y_true = ops.array([[1, 3, 0], [2, 1, 3]])
y_pred = ops.array(
[
[
[1.034, 4.797, 2.82, 1.154],
[2.258, 1.591, 1.811, 1.852],
[3.216, 1.037, 0.3662, 2.7],
],
[
[1.363, 1.726, 1.898, 2.582],
[1.163, 1.943, 1.761, 1.497],
[2.766, 1.453, 2.61, 2.805],
],
]
)
sample_weight = ops.array([[0.5, 0.1, 0.9], [1, 0.7, 0.5]])
perplexity_val = perplexity(y_true, y_pred, sample_weight)
self.assertAlmostEqual(perplexity_val, 2.9442, delta=1e-3)
def test_two_inputs_from_logits(self):
perplexity = Perplexity(from_logits=True, mask_token_id=0)
y_true_1 = ops.array([[1, 3, 0], [2, 1, 3]])
y_pred_1 = ops.array(
[
[
[1.034, 4.797, 2.82, 1.154],
[2.258, 1.591, 1.811, 1.852],
[3.216, 1.037, 0.3662, 2.7],
],
[
[1.363, 1.726, 1.898, 2.582],
[1.163, 1.943, 1.761, 1.497],
[2.766, 1.453, 2.61, 2.805],
],
]
)
perplexity_val = perplexity(y_true_1, y_pred_1)
self.assertAlmostEqual(perplexity_val, 2.8789, delta=1e-3)
y_true_2 = ops.array([[2, 0, 0], [1, 2, 3]])
y_pred_2 = ops.array(
[
[
[2.887, 0.885, 2.973, 2.582],
[0.3838, 2.629, 1.91, 1.802],
[0.2578, 1.081, 1.125, 2.773],
],
[
[1.623, 2.784, 0.2109, 2.66],
[2.395, 2.01, 0.252, 1.828],
[0.4482, 2.629, 0.9697, 0.998],
],
]
)
perplexity_val = perplexity(y_true_2, y_pred_2)
self.assertAlmostEqual(perplexity_val, 3.9998, delta=1e-3)
def test_from_probs_with_sample_weight(self):
perplexity = Perplexity(from_logits=False)
y_true = ops.array([[1, 3, 0], [2, 1, 3]])
y_pred = ops.array(
[
[
[1.034, 4.797, 2.82, 1.154],
[2.258, 1.591, 1.811, 1.852],
[3.216, 1.037, 0.3662, 2.7],
],
[
[1.363, 1.726, 1.898, 2.582],
[1.163, 1.943, 1.761, 1.497],
[2.766, 1.453, 2.61, 2.805],
],
]
)
y_prob = ops.softmax(y_pred, axis=-1)
sample_wt = ops.cast(y_true != 0, "int32")
perplexity_val = perplexity(y_true, y_prob, sample_wt)
self.assertAlmostEqual(perplexity_val, 2.8789, delta=1e-3)
def test_from_probs_with_pad_token(self):
perplexity = Perplexity(from_logits=False, mask_token_id=0)
y_true = ops.array([[1, 3, 0], [2, 1, 3]])
y_pred = ops.array(
[
[
[1.034, 4.797, 2.82, 1.154],
[2.258, 1.591, 1.811, 1.852],
[3.216, 1.037, 0.3662, 2.7],
],
[
[1.363, 1.726, 1.898, 2.582],
[1.163, 1.943, 1.761, 1.497],
[2.766, 1.453, 2.61, 2.805],
],
]
)
y_prob = ops.softmax(y_pred, axis=-1)
perplexity_val = perplexity(y_true, y_prob)
self.assertAlmostEqual(perplexity_val, 2.8789, delta=1e-3)
def test_reset_state(self):
y_true = ops.array([[1, 3, 0], [2, 1, 3]])
y_pred = ops.array(
[
[
[1.034, 4.797, 2.82, 1.154],
[2.258, 1.591, 1.811, 1.852],
[3.216, 1.037, 0.3662, 2.7],
],
[
[1.363, 1.726, 1.898, 2.582],
[1.163, 1.943, 1.761, 1.497],
[2.766, 1.453, 2.61, 2.805],
],
]
)
perplexity = Perplexity(from_logits=True, mask_token_id=0)
perplexity.update_state(y_true, y_pred)
self.assertNotEqual(perplexity.result(), 0.0)
perplexity.reset_state()
self.assertEqual(perplexity.result(), 0.0)
def test_update_state(self):
perplexity = Perplexity(from_logits=True, mask_token_id=0)
y_true_1 = ops.array([[1, 3, 0], [2, 1, 3]])
y_pred_1 = ops.array(
[
[
[1.034, 4.797, 2.82, 1.154],
[2.258, 1.591, 1.811, 1.852],
[3.216, 1.037, 0.3662, 2.7],
],
[
[1.363, 1.726, 1.898, 2.582],
[1.163, 1.943, 1.761, 1.497],
[2.766, 1.453, 2.61, 2.805],
],
]
)
perplexity.update_state(y_true_1, y_pred_1)
perplexity_val = perplexity.result()
self.assertAlmostEqual(perplexity_val, 2.8789, delta=1e-3)
y_true_2 = ops.array([[2, 0, 0], [1, 2, 3]])
y_pred_2 = ops.array(
[
[
[2.887, 0.885, 2.973, 2.582],
[0.3838, 2.629, 1.91, 1.802],
[0.2578, 1.081, 1.125, 2.773],
],
[
[1.623, 2.784, 0.2109, 2.66],
[2.395, 2.01, 0.252, 1.828],
[0.4482, 2.629, 0.9697, 0.998],
],
]
)
perplexity.update_state(y_true_2, y_pred_2)
perplexity_val = perplexity.result()
self.assertAlmostEqual(perplexity_val, 3.9998, delta=1e-3)
def test_get_config(self):
perplexity = Perplexity(
from_logits=True,
mask_token_id=0,
dtype="float32",
name="perplexity_test",
)
config = perplexity.get_config()
expected_config = {
"from_logits": True,
"mask_token_id": 0,
"dtype": "float32",
"name": "perplexity_test",
}
self.assertEqual(config, expected_config)
| keras-nlp/keras_nlp/metrics/perplexity_test.py/0 | {
"file_path": "keras-nlp/keras_nlp/metrics/perplexity_test.py",
"repo_id": "keras-nlp",
"token_count": 6016
} | 147 |
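As a quick standalone illustration of the metric these tests exercise, the sketch below (random logits, so the printed value itself is meaningless) shows the stateful `update_state` / `result` / `reset_state` flow; it assumes the public `keras_nlp.metrics.Perplexity` export.

```python
import numpy as np
from keras_nlp.metrics import Perplexity

# Perplexity is exp(mean cross-entropy), optionally ignoring a mask token id.
perplexity = Perplexity(from_logits=True, mask_token_id=0)
y_true = np.array([[1, 3, 0], [2, 1, 3]])                     # 0 = padding
y_pred = np.random.uniform(size=(2, 3, 4)).astype("float32")  # logits over 4 classes
perplexity.update_state(y_true, y_pred)
print(float(perplexity.result()))
perplexity.reset_state()
```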
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.layers.preprocessing.multi_segment_packer import (
MultiSegmentPacker,
)
from keras_nlp.models.albert.albert_presets import backbone_presets
from keras_nlp.models.albert.albert_tokenizer import AlbertTokenizer
from keras_nlp.models.preprocessor import Preprocessor
from keras_nlp.utils.keras_utils import (
convert_inputs_to_list_of_tensor_segments,
)
from keras_nlp.utils.keras_utils import pack_x_y_sample_weight
from keras_nlp.utils.python_utils import classproperty
@keras_nlp_export("keras_nlp.models.AlbertPreprocessor")
class AlbertPreprocessor(Preprocessor):
"""An ALBERT preprocessing layer which tokenizes and packs inputs.
This preprocessing layer will do three things:
- Tokenize any number of input segments using the `tokenizer`.
    - Pack the inputs together using a `keras_nlp.layers.MultiSegmentPacker`
      with the appropriate `"[CLS]"`, `"[SEP]"` and `"<pad>"` tokens.
- Construct a dictionary with keys `"token_ids"`, `"segment_ids"` and
`"padding_mask"`, that can be passed directly to
`keras_nlp.models.AlbertBackbone`.
This layer can be used directly with `tf.data.Dataset.map` to preprocess
string data in the `(x, y, sample_weight)` format used by
`keras.Model.fit`.
The call method of this layer accepts three arguments, `x`, `y`, and
`sample_weight`. `x` can be a python string or tensor representing a single
segment, a list of python strings representing a batch of single segments,
or a list of tensors representing multiple segments to be packed together.
`y` and `sample_weight` are both optional, can have any format, and will be
passed through unaltered.
Special care should be taken when using `tf.data` to map over an unlabeled
tuple of string segments. `tf.data.Dataset.map` will unpack this tuple
    directly into the call arguments of this layer, rather than forwarding all
    arguments to `x`. To handle this case, it is recommended to explicitly call
the layer, e.g. `ds.map(lambda seg1, seg2: preprocessor(x=(seg1, seg2)))`.
Args:
tokenizer: A `keras_nlp.models.AlbertTokenizer` instance.
sequence_length: The length of the packed inputs.
truncate: string. The algorithm to truncate a list of batched segments
to fit within `sequence_length`. The value can be either
`round_robin` or `waterfall`:
- `"round_robin"`: Available space is assigned one token at a
time in a round-robin fashion to the inputs that still need
some, until the limit is reached.
- `"waterfall"`: The allocation of the budget is done using a
"waterfall" algorithm that allocates quota in a
left-to-right manner and fills up the buckets until we run
out of budget. It supports an arbitrary number of segments.
Examples:
Directly calling the layer on data.
```python
preprocessor = keras_nlp.models.AlbertPreprocessor.from_preset(
"albert_base_en_uncased"
)
# Tokenize and pack a single sentence.
preprocessor("The quick brown fox jumped.")
# Tokenize a batch of single sentences.
preprocessor(["The quick brown fox jumped.", "Call me Ishmael."])
# Preprocess a batch of sentence pairs.
# When handling multiple sequences, always convert to tensors first!
first = tf.constant(["The quick brown fox jumped.", "Call me Ishmael."])
second = tf.constant(["The fox tripped.", "Oh look, a whale."])
preprocessor((first, second))
# Custom vocabulary.
bytes_io = io.BytesIO()
ds = tf.data.Dataset.from_tensor_slices(["The quick brown fox jumped."])
sentencepiece.SentencePieceTrainer.train(
sentence_iterator=ds.as_numpy_iterator(),
model_writer=bytes_io,
vocab_size=10,
model_type="WORD",
pad_id=0,
unk_id=1,
bos_id=2,
eos_id=3,
pad_piece="<pad>",
unk_piece="<unk>",
bos_piece="[CLS]",
eos_piece="[SEP]",
user_defined_symbols="[MASK]",
)
tokenizer = keras_nlp.models.AlbertTokenizer(
proto=bytes_io.getvalue(),
)
preprocessor = keras_nlp.models.AlbertPreprocessor(tokenizer)
preprocessor("The quick brown fox jumped.")
```
Mapping with `tf.data.Dataset`.
```python
preprocessor = keras_nlp.models.AlbertPreprocessor.from_preset(
"albert_base_en_uncased"
)
first = tf.constant(["The quick brown fox jumped.", "Call me Ishmael."])
second = tf.constant(["The fox tripped.", "Oh look, a whale."])
label = tf.constant([1, 1])
# Map labeled single sentences.
ds = tf.data.Dataset.from_tensor_slices((first, label))
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)
# Map unlabeled single sentences.
ds = tf.data.Dataset.from_tensor_slices(first)
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)
# Map labeled sentence pairs.
ds = tf.data.Dataset.from_tensor_slices(((first, second), label))
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)
# Map unlabeled sentence pairs.
ds = tf.data.Dataset.from_tensor_slices((first, second))
# Watch out for tf.data's default unpacking of tuples here!
# Best to invoke the `preprocessor` directly in this case.
ds = ds.map(
lambda first, second: preprocessor(x=(first, second)),
num_parallel_calls=tf.data.AUTOTUNE,
)
```
"""
def __init__(
self,
tokenizer,
sequence_length=512,
truncate="round_robin",
**kwargs,
):
super().__init__(**kwargs)
self.tokenizer = tokenizer
self.packer = None
self.truncate = truncate
self.sequence_length = sequence_length
def build(self, input_shape):
# Defer packer creation to `build()` so that we can be sure tokenizer
# assets have loaded when restoring a saved model.
self.packer = MultiSegmentPacker(
start_value=self.tokenizer.cls_token_id,
end_value=self.tokenizer.sep_token_id,
pad_value=self.tokenizer.pad_token_id,
truncate=self.truncate,
sequence_length=self.sequence_length,
)
self.built = True
def get_config(self):
config = super().get_config()
config.update(
{
"sequence_length": self.sequence_length,
"truncate": self.truncate,
}
)
return config
def call(self, x, y=None, sample_weight=None):
x = convert_inputs_to_list_of_tensor_segments(x)
x = [self.tokenizer(segment) for segment in x]
token_ids, segment_ids = self.packer(x)
x = {
"token_ids": token_ids,
"segment_ids": segment_ids,
"padding_mask": token_ids != self.tokenizer.pad_token_id,
}
return pack_x_y_sample_weight(x, y, sample_weight)
@property
def sequence_length(self):
"""The padded length of model input sequences."""
return self._sequence_length
@sequence_length.setter
def sequence_length(self, value):
self._sequence_length = value
if self.packer is not None:
self.packer.sequence_length = value
@classproperty
def tokenizer_cls(cls):
return AlbertTokenizer
@classproperty
def presets(cls):
return copy.deepcopy(backbone_presets)
| keras-nlp/keras_nlp/models/albert/albert_preprocessor.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/albert/albert_preprocessor.py",
"repo_id": "keras-nlp",
"token_count": 3242
} | 148 |
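The two `truncate` strategies documented above can be compared directly on the underlying packer. This is a rough sketch with made-up integer token ids; the exact packed output depends on the `MultiSegmentPacker` implementation.

```python
import tensorflow as tf
import keras_nlp

seg1 = tf.ragged.constant([[11, 12, 13, 14, 15]])
seg2 = tf.ragged.constant([[21, 22, 23, 24, 25]])
for truncate in ("round_robin", "waterfall"):
    packer = keras_nlp.layers.MultiSegmentPacker(
        start_value=1,   # stand-in for "[CLS]"
        end_value=2,     # stand-in for "[SEP]"
        pad_value=0,
        sequence_length=8,
        truncate=truncate,
    )
    token_ids, segment_ids = packer([seg1, seg2])
    print(truncate, token_ids.numpy(), segment_ids.numpy())
```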
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import pytest
from keras_nlp.models.deberta_v3.deberta_v3_backbone import DebertaV3Backbone
from keras_nlp.models.deberta_v3.deberta_v3_classifier import (
DebertaV3Classifier,
)
from keras_nlp.models.deberta_v3.deberta_v3_preprocessor import (
DebertaV3Preprocessor,
)
from keras_nlp.models.deberta_v3.deberta_v3_tokenizer import DebertaV3Tokenizer
from keras_nlp.tests.test_case import TestCase
class DebertaV3ClassifierTest(TestCase):
def setUp(self):
# Setup model.
self.preprocessor = DebertaV3Preprocessor(
DebertaV3Tokenizer(
# Generated using create_deberta_v3_test_proto.py
proto=os.path.join(
self.get_test_data_dir(), "deberta_v3_test_vocab.spm"
)
),
sequence_length=5,
)
self.backbone = DebertaV3Backbone(
vocabulary_size=self.preprocessor.tokenizer.vocabulary_size(),
num_layers=2,
num_heads=2,
hidden_dim=2,
intermediate_dim=4,
max_sequence_length=self.preprocessor.sequence_length,
)
self.init_kwargs = {
"preprocessor": self.preprocessor,
"backbone": self.backbone,
"num_classes": 2,
}
self.train_data = (
["the quick brown fox.", "the slow brown fox."], # Features.
[1, 0], # Labels.
)
self.input_data = self.preprocessor(*self.train_data)[0]
def test_classifier_basics(self):
self.run_task_test(
cls=DebertaV3Classifier,
init_kwargs=self.init_kwargs,
train_data=self.train_data,
expected_output_shape=(2, 2),
)
@pytest.mark.large
def test_saved_model(self):
self.run_model_saving_test(
cls=DebertaV3Classifier,
init_kwargs=self.init_kwargs,
input_data=self.input_data,
)
@pytest.mark.extra_large
def test_all_presets(self):
for preset in DebertaV3Classifier.presets:
self.run_preset_test(
cls=DebertaV3Classifier,
preset=preset,
init_kwargs={"num_classes": 2},
input_data=self.input_data,
expected_output_shape=(2, 2),
)
| keras-nlp/keras_nlp/models/deberta_v3/deberta_v3_classifier_test.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/deberta_v3/deberta_v3_classifier_test.py",
"repo_id": "keras-nlp",
"token_count": 1381
} | 149 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.backend import keras
from keras_nlp.models.distil_bert.distil_bert_backbone import DistilBertBackbone
from keras_nlp.models.distil_bert.distil_bert_backbone import (
distilbert_kernel_initializer,
)
from keras_nlp.models.distil_bert.distil_bert_preprocessor import (
DistilBertPreprocessor,
)
from keras_nlp.models.distil_bert.distil_bert_presets import backbone_presets
from keras_nlp.models.task import Task
from keras_nlp.utils.python_utils import classproperty
@keras_nlp_export("keras_nlp.models.DistilBertClassifier")
class DistilBertClassifier(Task):
"""An end-to-end DistilBERT model for classification tasks.
This model attaches a classification head to a
`keras_nlp.model.DistilBertBackbone` instance, mapping from the backbone
outputs to logits suitable for a classification task. For usage of
this model with pre-trained weights, see the `from_preset()` constructor.
This model can optionally be configured with a `preprocessor` layer, in
which case it will automatically apply preprocessing to raw inputs during
`fit()`, `predict()`, and `evaluate()`. This is done by default when
creating the model with `from_preset()`.
Disclaimer: Pre-trained models are provided on an "as is" basis, without
warranties or conditions of any kind. The underlying model is provided by a
third party and subject to a separate license, available
[here](https://github.com/huggingface/transformers).
Args:
backbone: A `keras_nlp.models.DistilBert` instance.
num_classes: int. Number of classes to predict.
preprocessor: A `keras_nlp.models.DistilBertPreprocessor` or `None`. If
`None`, this model will not apply preprocessing, and inputs should
be preprocessed before calling the model.
activation: Optional `str` or callable. The
activation function to use on the model outputs. Set
`activation="softmax"` to return output probabilities.
Defaults to `None`.
hidden_dim: int. The size of the pooler layer.
dropout: float. The dropout probability value, applied after the first
dense layer.
Examples:
Raw string data.
```python
features = ["The quick brown fox jumped.", "I forgot my homework."]
labels = [0, 3]
# Use a shorter sequence length.
preprocessor = keras_nlp.models.DistilBertPreprocessor.from_preset(
"distil_bert_base_en_uncased",
sequence_length=128,
)
# Pretrained classifier.
classifier = keras_nlp.models.DistilBertClassifier.from_preset(
"distil_bert_base_en_uncased",
num_classes=4,
preprocessor=preprocessor,
)
classifier.fit(x=features, y=labels, batch_size=2)
# Re-compile (e.g., with a new learning rate)
classifier.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=keras.optimizers.Adam(5e-5),
jit_compile=True,
)
# Access backbone programmatically (e.g., to change `trainable`).
classifier.backbone.trainable = False
# Fit again.
classifier.fit(x=features, y=labels, batch_size=2)
```
Preprocessed integer data.
```python
features = {
"token_ids": np.ones(shape=(2, 12), dtype="int32"),
"padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]] * 2)
}
labels = [0, 3]
# Pretrained classifier without preprocessing.
classifier = keras_nlp.models.DistilBertClassifier.from_preset(
"distil_bert_base_en_uncased",
num_classes=4,
preprocessor=None,
)
classifier.fit(x=features, y=labels, batch_size=2)
```
Custom backbone and vocabulary.
```python
features = ["The quick brown fox jumped.", "I forgot my homework."]
labels = [0, 3]
vocab = ["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"]
vocab += ["The", "quick", "brown", "fox", "jumped", "."]
tokenizer = keras_nlp.models.DistilBertTokenizer(
vocabulary=vocab,
)
preprocessor = keras_nlp.models.DistilBertPreprocessor(
tokenizer=tokenizer,
sequence_length=128,
)
backbone = keras_nlp.models.DistilBertBackbone(
vocabulary_size=30552,
num_layers=4,
num_heads=4,
hidden_dim=256,
intermediate_dim=512,
max_sequence_length=128,
)
classifier = keras_nlp.models.DistilBertClassifier(
backbone=backbone,
preprocessor=preprocessor,
num_classes=4,
)
classifier.fit(x=features, y=labels, batch_size=2)
```
"""
def __init__(
self,
backbone,
num_classes,
preprocessor=None,
activation=None,
hidden_dim=None,
dropout=0.2,
**kwargs,
):
# === Layers ===
self.backbone = backbone
self.preprocessor = preprocessor
hidden_dim = hidden_dim or backbone.hidden_dim
self.pooled_dense = keras.layers.Dense(
hidden_dim,
activation="relu",
kernel_initializer=distilbert_kernel_initializer(),
dtype=backbone.dtype_policy,
name="pooled_dense",
)
self.output_dropout = keras.layers.Dropout(
dropout,
dtype=backbone.dtype_policy,
name="output_dropout",
)
self.output_dense = keras.layers.Dense(
num_classes,
kernel_initializer=distilbert_kernel_initializer(),
activation=activation,
dtype=backbone.dtype_policy,
name="logits",
)
# === Functional Model ===
inputs = backbone.input
x = backbone(inputs)[:, backbone.cls_token_index, :]
x = self.pooled_dense(x)
x = self.output_dropout(x)
outputs = self.output_dense(x)
super().__init__(
inputs=inputs,
outputs=outputs,
**kwargs,
)
# === Config ===
self.num_classes = num_classes
self.activation = keras.activations.get(activation)
self.hidden_dim = hidden_dim
self.dropout = dropout
# === Default compilation ===
logit_output = self.activation == keras.activations.linear
self.compile(
loss=keras.losses.SparseCategoricalCrossentropy(
from_logits=logit_output
),
optimizer=keras.optimizers.Adam(5e-5),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
jit_compile=True,
)
def get_config(self):
config = super().get_config()
config.update(
{
"num_classes": self.num_classes,
"activation": keras.activations.serialize(self.activation),
"hidden_dim": self.hidden_dim,
"dropout": self.dropout,
}
)
return config
@classproperty
def backbone_cls(cls):
return DistilBertBackbone
@classproperty
def preprocessor_cls(cls):
return DistilBertPreprocessor
@classproperty
def presets(cls):
return copy.deepcopy(backbone_presets)
| keras-nlp/keras_nlp/models/distil_bert/distil_bert_classifier.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/distil_bert/distil_bert_classifier.py",
"repo_id": "keras-nlp",
"token_count": 3297
} | 150 |
# Copyright 2024 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.backend import config
from keras_nlp.backend import keras
from keras_nlp.backend import ops
from keras_nlp.layers.modeling.reversible_embedding import ReversibleEmbedding
from keras_nlp.models.backbone import Backbone
from keras_nlp.models.gemma.gemma_decoder_block import GemmaDecoderBlock
from keras_nlp.models.gemma.gemma_presets import backbone_presets
from keras_nlp.models.gemma.rms_normalization import RMSNormalization
from keras_nlp.utils.python_utils import classproperty
@keras_nlp_export("keras_nlp.models.GemmaBackbone")
class GemmaBackbone(Backbone):
"""Gemma core network with hyperparameters.
This backbone implements the base Transformer network for the Gemma model.
It includes the embedding lookups and transformer layers. This backbone
will output the final hidden states for each token, not generative
predictions over the vocabulary space. For a higher-level object for text
generation, see `keras_nlp.models.GemmaCausalLM`.
The default constructor gives a fully customizable, randomly initialized
Gemma model with any number of layers, heads, and embedding dimensions. To
load preset architectures and weights, use the `from_preset` constructor.
Args:
vocabulary_size: int. The size of the token vocabulary.
num_layers: int. The number of transformer layers.
num_query_heads: int. The number of heads for the query projections in
the attention layer.
num_key_value_heads: int. The number of heads for the key and value
projections in the attention layer.
hidden_dim: int. The size of the transformer hidden state at the end
of each transformer layer.
intermediate_dim: int. The output dimension of the first Dense layer in
a two-layer feedforward network for each transformer.
head_dim: int. The size of each attention head.
        layer_norm_epsilon: float. The epsilon value used for every layer norm
            in the transformer model.
dropout: float. Dropout probability for the Transformer encoder.
dtype: string or `keras.mixed_precision.DTypePolicy`. The dtype to use
            for the model's computations and weights. Note that some
            computations, such as softmax and layer normalization, will always
            be done in float32 precision regardless of dtype.
Example usage:
```python
input_data = {
"token_ids": np.ones(shape=(1, 12), dtype="int32"),
"padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]]),
}
# Pretrained Gemma decoder.
model = keras_nlp.models.GemmaBackbone.from_preset("gemma_2b_en")
model(input_data)
# Randomly initialized Gemma decoder with custom config.
model = keras_nlp.models.GemmaBackbone(
vocabulary_size=50257,
num_layers=12,
num_query_heads=12,
num_key_value_heads=1,
hidden_dim=768,
intermediate_dim=3072,
head_dim=64,
)
model(input_data)
```
"""
def __init__(
self,
vocabulary_size,
num_layers,
num_query_heads,
num_key_value_heads,
hidden_dim,
intermediate_dim,
head_dim,
layer_norm_epsilon=1e-6,
dropout=0,
dtype=None,
**kwargs,
):
if not config.keras_3():
raise ValueError(
"`GemmaBackbone` requires Keras 3. Run `pip install -U keras` "
"upgrade your Keras version, or see https://keras.io/getting_started/ "
"for more info on Keras versions and installation."
)
# === Layers ===
self.token_embedding = ReversibleEmbedding(
input_dim=vocabulary_size,
output_dim=hidden_dim,
tie_weights=True,
embeddings_initializer=keras.initializers.VarianceScaling(
scale=1.0,
mode="fan_in",
distribution="untruncated_normal",
seed=None,
),
dtype=dtype,
name="token_embedding",
)
self.transformer_layers = []
for i in range(num_layers):
layer = GemmaDecoderBlock(
intermediate_dim=intermediate_dim,
hidden_dim=hidden_dim,
num_query_heads=num_query_heads,
head_dim=head_dim,
num_key_value_heads=num_key_value_heads,
dropout=dropout,
dtype=dtype,
name=f"decoder_block_{i}",
)
self.transformer_layers.append(layer)
self.layer_norm = RMSNormalization(
epsilon=layer_norm_epsilon,
dtype=dtype,
name="final_normalization",
)
# === Functional Model ===
token_id_input = keras.Input(
shape=(None,), dtype="float32", name="token_ids"
)
padding_mask_input = keras.Input(
shape=(None,), dtype="float32", name="padding_mask"
)
x = self.token_embedding(token_id_input)
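        # Gemma scales the token embeddings by sqrt(hidden_dim) before they
        # enter the decoder stack.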
x = x * ops.cast(ops.sqrt(hidden_dim), x.dtype)
for transformer_layer in self.transformer_layers:
x = transformer_layer(x, padding_mask=padding_mask_input)
sequence_output = self.layer_norm(x)
super().__init__(
inputs={
"token_ids": token_id_input,
"padding_mask": padding_mask_input,
},
outputs=sequence_output,
**kwargs,
)
# === Config ===
self.vocabulary_size = vocabulary_size
self.num_layers = num_layers
self.num_query_heads = num_query_heads
self.num_key_value_heads = num_key_value_heads
self.hidden_dim = hidden_dim
self.intermediate_dim = intermediate_dim
self.head_dim = head_dim
self.layer_norm_epsilon = layer_norm_epsilon
self.dropout = dropout
def get_config(self):
config = super().get_config()
config.update(
{
"vocabulary_size": self.vocabulary_size,
"num_layers": self.num_layers,
"num_query_heads": self.num_query_heads,
"num_key_value_heads": self.num_key_value_heads,
"hidden_dim": self.hidden_dim,
"intermediate_dim": self.intermediate_dim,
"head_dim": self.head_dim,
"layer_norm_epsilon": self.layer_norm_epsilon,
"dropout": self.dropout,
}
)
return config
@classproperty
def presets(cls):
return copy.deepcopy(backbone_presets)
@staticmethod
def get_layout_map(device_mesh, model_parallel_dim_name="model"):
"""Get a `keras.distribution.LayoutMap` for model parallel distribution.
The returned `LayoutMap` contains the sharding spec for the gemma
backbone weights, so that you can use it to distribute weights across
the accelerators.
Sample usage:
```
# Feel free to change the mesh shape to balance data and model parallel
mesh = keras.distribution.DeviceMesh(
shape=(1, 8), axis_names=('batch', 'model'),
devices=keras.distribution.list_devices())
layout_map = GemmaBackbone.get_layout_map(
mesh, model_parallel_dim_name="model")
distribution = keras.distribution.ModelParallel(
mesh, layout_map, batch_dim_name='batch')
with distribution.scope():
gemma_model = keras_nlp.models.GemmaCausalLM.from_preset()
```
Args:
device_mesh: The `keras.distribution.DeviceMesh` instance for
distribution.
            model_parallel_dim_name: The axis name of the device mesh, where
                the weights should be partitioned.
        Returns:
`keras.distribution.LayoutMap` that contains the sharding spec
of all the model weights.
"""
# The weight path and shape of the Gemma backbone is like below (for 2G)
# token_embedding/embeddings, (256128, 2048), 524550144
# repeat block for decoder
# ...
# decoder_block_17/pre_attention_norm/scale, (2048,), 2048
# decoder_block_17/attention/query/kernel, (8, 2048, 256), 4194304
# decoder_block_17/attention/key/kernel, (8, 2048, 256), 4194304
# decoder_block_17/attention/value/kernel, (8, 2048, 256), 4194304
# decoder_block_17/attention/attention_output/kernel, (8, 256, 2048), 4194304
# decoder_block_17/pre_ffw_norm/scale, (2048,), 2048
# decoder_block_17/ffw_gating/kernel, (2048, 16384), 33554432
# decoder_block_17/ffw_gating_2/kernel, (2048, 16384), 33554432
# decoder_block_17/ffw_linear/kernel, (16384, 2048), 33554432
if not isinstance(device_mesh, keras.distribution.DeviceMesh):
raise ValueError(
"Invalid device_mesh type. Expected `keras.distribution.Device`,"
f" got {type(device_mesh)}"
)
if model_parallel_dim_name not in device_mesh.axis_names:
raise ValueError(
f"{model_parallel_dim_name} is not found in the "
f"device_mesh.axis_names. {device_mesh.axis_name=}"
)
model_dim = model_parallel_dim_name
# The sharding is partition for the hidden_dim of the model.
layout_map = keras.distribution.LayoutMap(device_mesh)
layout_map["token_embedding/embeddings"] = (None, model_dim)
layout_map["decoder_block.*attention.*(query|key|value).*kernel"] = (
None,
model_dim,
None,
)
layout_map["decoder_block.*attention_output.*kernel"] = (
None,
None,
model_dim,
)
layout_map["decoder_block.*ffw_gating.*kernel"] = (model_dim, None)
layout_map["decoder_block.*ffw_linear.*kernel"] = (None, model_dim)
return layout_map
| keras-nlp/keras_nlp/models/gemma/gemma_backbone.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/gemma/gemma_backbone.py",
"repo_id": "keras-nlp",
"token_count": 4765
} | 151 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.backend import keras
from keras_nlp.layers.modeling.position_embedding import PositionEmbedding
from keras_nlp.layers.modeling.reversible_embedding import ReversibleEmbedding
from keras_nlp.layers.modeling.transformer_decoder import TransformerDecoder
from keras_nlp.models.backbone import Backbone
from keras_nlp.models.gpt2.gpt2_presets import backbone_presets
from keras_nlp.utils.keras_utils import gelu_approximate
from keras_nlp.utils.python_utils import classproperty
def _gpt_2_kernel_initializer(stddev=0.02):
return keras.initializers.RandomNormal(stddev=stddev)
@keras_nlp_export("keras_nlp.models.GPT2Backbone")
class GPT2Backbone(Backbone):
"""GPT-2 core network with hyperparameters.
This network implements a Transformer-based decoder network,
Generative Pretrained Transformer-2 (GPT-2), as described in
["Language Models are Unsupervised Multitask Learners"](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf).
It includes the embedding lookups and transformer layers.
The default constructor gives a fully customizable, randomly initialized
GPT-2 model with any number of layers, heads, and embedding
dimensions. To load preset architectures and weights, use the `from_preset`
constructor.
Disclaimer: Pre-trained models are provided on an "as is" basis, without
warranties or conditions of any kind. The underlying model is provided by a
third party and subject to a separate license, available
[here](https://github.com/openai/gpt-2).
Args:
vocabulary_size: int. The size of the token vocabulary.
num_layers: int. The number of transformer layers.
num_heads: int. The number of attention heads for each transformer.
The hidden size must be divisible by the number of attention heads.
        hidden_dim: int. The size of the transformer hidden states at the end
            of each transformer layer.
intermediate_dim: int. The output dimension of the first Dense layer in
a two-layer feedforward network for each transformer.
dropout: float. Dropout probability for the Transformer encoder.
max_sequence_length: int. The maximum sequence length that this encoder
can consume. If `None`, `max_sequence_length` uses the value from
sequence length. This determines the variable shape for positional
embeddings.
dtype: string or `keras.mixed_precision.DTypePolicy`. The dtype to use
            for the model's computations and weights. Note that some
            computations, such as softmax and layer normalization, will always
            be done in float32 precision regardless of dtype.
Example:
```python
input_data = {
"token_ids": np.ones(shape=(1, 12), dtype="int32"),
"padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]]),
}
# Pretrained GPT-2 decoder.
model = keras_nlp.models.GPT2Backbone.from_preset("gpt2_base_en")
model(input_data)
# Randomly initialized GPT-2 decoder with custom config.
model = keras_nlp.models.GPT2Backbone(
vocabulary_size=50257,
num_layers=12,
num_heads=12,
hidden_dim=768,
intermediate_dim=3072,
max_sequence_length=1024,
)
model(input_data)
```
"""
def __init__(
self,
vocabulary_size,
num_layers,
num_heads,
hidden_dim,
intermediate_dim,
dropout=0.1,
max_sequence_length=1024,
dtype=None,
**kwargs,
):
# === Layers ===
self.token_embedding = ReversibleEmbedding(
input_dim=vocabulary_size,
output_dim=hidden_dim,
embeddings_initializer=_gpt_2_kernel_initializer(stddev=0.01),
dtype=dtype,
name="token_embedding",
)
self.position_embedding = PositionEmbedding(
initializer=_gpt_2_kernel_initializer(stddev=0.02),
sequence_length=max_sequence_length,
dtype=dtype,
name="position_embedding",
)
self.embeddings_add = keras.layers.Add(
dtype=dtype,
name="embeddings_add",
)
self.embeddings_dropout = keras.layers.Dropout(
dropout,
dtype=dtype,
name="embeddings_dropout",
)
self.transformer_layers = []
for i in range(num_layers):
self.transformer_layers.append(
TransformerDecoder(
intermediate_dim=intermediate_dim,
num_heads=num_heads,
dropout=dropout,
layer_norm_epsilon=1e-05,
activation=gelu_approximate,
kernel_initializer=_gpt_2_kernel_initializer(stddev=0.02),
normalize_first=True,
dtype=dtype,
name=f"transformer_layer_{i}",
)
)
self.layer_norm = keras.layers.LayerNormalization(
axis=-1,
epsilon=1e-05,
dtype=dtype,
name="layer_norm",
)
# === Functional Model ===
token_id_input = keras.Input(
shape=(None,), dtype="int32", name="token_ids"
)
padding_mask_input = keras.Input(
shape=(None,), dtype="int32", name="padding_mask"
)
# Embed inputs.
tokens = self.token_embedding(token_id_input)
positions = self.position_embedding(tokens)
x = self.embeddings_add((tokens, positions))
x = self.embeddings_dropout(x)
# Apply transformer layers.
for transformer_layer in self.transformer_layers:
x = transformer_layer(x, decoder_padding_mask=padding_mask_input)
sequence_output = self.layer_norm(x)
# Instantiate using the Functional constructor.
super().__init__(
inputs={
"token_ids": token_id_input,
"padding_mask": padding_mask_input,
},
outputs=sequence_output,
**kwargs,
)
# === Config ===
self.vocabulary_size = vocabulary_size
self.num_layers = num_layers
self.num_heads = num_heads
self.hidden_dim = hidden_dim
self.intermediate_dim = intermediate_dim
self.dropout = dropout
self.max_sequence_length = max_sequence_length
def get_config(self):
config = super().get_config()
config.update(
{
"vocabulary_size": self.vocabulary_size,
"num_layers": self.num_layers,
"num_heads": self.num_heads,
"hidden_dim": self.hidden_dim,
"intermediate_dim": self.intermediate_dim,
"dropout": self.dropout,
"max_sequence_length": self.max_sequence_length,
}
)
return config
@classproperty
def presets(cls):
return copy.deepcopy(backbone_presets)
| keras-nlp/keras_nlp/models/gpt2/gpt2_backbone.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/gpt2/gpt2_backbone.py",
"repo_id": "keras-nlp",
"token_count": 3367
} | 152 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
from absl import logging
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.backend import ops
from keras_nlp.models.gpt_neo_x.gpt_neo_x_preprocessor import (
GPTNeoXPreprocessor,
)
from keras_nlp.utils.keras_utils import (
convert_inputs_to_list_of_tensor_segments,
)
from keras_nlp.utils.keras_utils import pack_x_y_sample_weight
@keras_nlp_export("keras_nlp.models.GPTNeoXCausalLMPreprocessor")
class GPTNeoXCausalLMPreprocessor(GPTNeoXPreprocessor):
"""GPT-NeoX Causal LM preprocessor.
This preprocessing layer is meant for use with
`keras_nlp.models.GPTNeoXCausalLM`. By default, it will take in batches of
strings, and return outputs in a `(x, y, sample_weight)` format, where the
`y` label is the next token id in the `x` sequence.
For use with generation, the layer also exposes two methods
`generate_preprocess()` and `generate_postprocess()`. When this preprocessor
is attached to a `keras_nlp.models.GPTNeoXCausalLM` instance, these methods
will be called implicitly in `generate()`. They can also be called
standalone (e.g. to precompute preprocessing inputs for generation in a
separate process).
Args:
tokenizer: A `keras_nlp.models.GPTNeoXTokenizer` instance.
sequence_length: The length of the packed inputs.
add_start_token: If `True`, the preprocessor will prepend the tokenizer
start token to each input sequence.
add_end_token: If `True`, the preprocessor will append the tokenizer
end token to each input sequence.
Call arguments:
x: A string, `tf.Tensor` or list of python strings.
y: Label data. Should always be `None` as the layer generates labels.
sample_weight: Label weights. Should always be `None` as the layer
generates label weights.
sequence_length: Pass to override the configured `sequence_length` of
the layer.
"""
def call(
self,
x,
y=None,
sample_weight=None,
sequence_length=None,
):
if y is not None or sample_weight is not None:
logging.warning(
"`GPTNeoXCausalLMPreprocessor` generates `y` and `sample_weight` "
"based on your input data, but your data already contains `y` "
"or `sample_weight`. Your `y` and `sample_weight` will be "
"ignored."
)
sequence_length = sequence_length or self.sequence_length
x = convert_inputs_to_list_of_tensor_segments(x)[0]
x = self.tokenizer(x)
# Pad with one extra token to account for the truncation below.
token_ids, padding_mask = self.packer(
x,
sequence_length=sequence_length + 1,
add_start_value=self.add_start_token,
add_end_value=self.add_end_token,
)
# The last token does not have a next token, so we truncate it out.
x = {
"token_ids": token_ids[..., :-1],
"padding_mask": padding_mask[..., :-1],
}
# Target `y` will be the next token.
y, sample_weight = token_ids[..., 1:], padding_mask[..., 1:]
return pack_x_y_sample_weight(x, y, sample_weight)
def generate_preprocess(
self,
x,
sequence_length=None,
):
"""Covert strings to integer token input for generation.
Similar to calling the layer for training, this method takes in strings
or tensor strings, tokenizes and packs the input, and computes a padding
mask masking all inputs not filled in with a padded value.
Unlike calling the layer for training, this method does not compute
labels and will never append a `tokenizer.end_token_id` to the end of
the sequence (as generation is expected to continue at the end of the
inputted prompt).
"""
if not self.built:
self.build(None)
x = convert_inputs_to_list_of_tensor_segments(x)[0]
x = self.tokenizer(x)
token_ids, padding_mask = self.packer(
x, sequence_length=sequence_length, add_end_value=False
)
return {
"token_ids": token_ids,
"padding_mask": padding_mask,
}
def generate_postprocess(
self,
x,
):
"""Covert integer token output to strings for generation.
This method reverses `generate_preprocess()`, by first removing all
padding and start/end tokens, and then converting the integer sequence
back to a string.
"""
if not self.built:
self.build(None)
token_ids, padding_mask = x["token_ids"], x["padding_mask"]
if not isinstance(token_ids, tf.Tensor):
token_ids = ops.convert_to_numpy(token_ids)
if not isinstance(padding_mask, tf.Tensor):
padding_mask = ops.convert_to_numpy(padding_mask)
# Strip any special tokens during detokenization (e.g. the start and
# end markers). In the future we could make this configurable.
padding_mask = padding_mask & (token_ids != self.tokenizer.end_token_id)
token_ids = tf.ragged.boolean_mask(token_ids, padding_mask)
return self.tokenizer.detokenize(token_ids)
| keras-nlp/keras_nlp/models/gpt_neo_x/gpt_neo_x_causal_lm_preprocessor.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/gpt_neo_x/gpt_neo_x_causal_lm_preprocessor.py",
"repo_id": "keras-nlp",
"token_count": 2347
} | 153 |
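The feature/label layout produced by `call()` above can be seen with a tiny numpy sketch (the packed token ids below are made up; 0 is treated as padding):

```python
import numpy as np

token_ids = np.array([[5, 11, 12, 13, 2, 0, 0]])
padding_mask = np.array([[1, 1, 1, 1, 1, 0, 0]])

x = {
    "token_ids": token_ids[..., :-1],        # drop the last position
    "padding_mask": padding_mask[..., :-1],
}
y = token_ids[..., 1:]                        # labels are the next token id
sample_weight = padding_mask[..., 1:]         # padded positions get zero weight
```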
# Copyright 2022 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.backend import keras
from keras_nlp.layers.modeling.token_and_position_embedding import (
TokenAndPositionEmbedding,
)
from keras_nlp.layers.modeling.transformer_decoder import TransformerDecoder
from keras_nlp.models.backbone import Backbone
from keras_nlp.models.opt.opt_presets import backbone_presets
from keras_nlp.utils.python_utils import classproperty
def opt_kernel_initializer(stddev=0.02):
return keras.initializers.TruncatedNormal(stddev=stddev)
@keras_nlp_export("keras_nlp.models.OPTBackbone")
class OPTBackbone(Backbone):
"""An OPT decoder network.
This class implements a Transformer-based decoder model as described in
["OPT: Open Pre-trained Transformer Language Models"](https://arxiv.org/abs/2205.01068).
The default constructor gives a fully customizable, randomly initialized OPT
model with any number of layers, heads, and embedding dimensions. To load
preset architectures and weights, use the `from_preset()` constructor.
Disclaimer: Pre-trained models are provided on an "as is" basis, without
warranties or conditions of any kind. The underlying model is provided by a
third party and subject to a separate license, available
[here](https://github.com/facebookresearch/fairseq/).
Args:
vocabulary_size: int. The size of the token vocabulary.
num_layers: int. The number of transformer decoder layers.
num_heads: int. The number of attention heads for each transformer.
The hidden size must be divisible by the number of attention heads.
hidden_dim: int. The hidden size of the transformer decoder layers.
intermediate_dim: int. The output dimension of the first Dense layer in
a two-layer feedforward network for each transformer decoder layer.
dropout: float. Dropout probability for the Transformer decoder.
max_sequence_length: int. The maximum sequence length that this decoder
can consume. If `None`, `max_sequence_length` uses the value from
sequence length. This determines the variable shape for positional
embeddings.
dtype: string or `keras.mixed_precision.DTypePolicy`. The dtype to use
for model computations and weights. Note that some computations,
such as softmax and layer normalization, will always be done at
float32 precision regardless of dtype.
Examples:
```python
input_data = {
"token_ids": np.ones(shape=(1, 12), dtype="int32"),
"padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]]),
}
# Pretrained OPT decoder
model = keras_nlp.models.OPTBackbone.from_preset("opt_125m_en")
model(input_data)
# Randomly initialized OPT decoder model with a custom config
model = keras_nlp.models.OPTBackbone(
vocabulary_size=50265,
num_layers=4,
num_heads=4,
hidden_dim=256,
intermediate_dim=512,
max_sequence_length=128,
)
model(input_data)
```
"""
def __init__(
self,
vocabulary_size,
num_layers,
num_heads,
hidden_dim,
intermediate_dim,
dropout=0.1,
max_sequence_length=2048,
dtype=None,
**kwargs,
):
# === Layers ===
self.embeddings = TokenAndPositionEmbedding(
vocabulary_size=vocabulary_size,
sequence_length=max_sequence_length,
embedding_dim=hidden_dim,
embeddings_initializer=opt_kernel_initializer(),
dtype=dtype,
name="embeddings",
)
self.token_embedding = self.embeddings.token_embedding
self.transformer_layers = []
for i in range(num_layers):
layer = TransformerDecoder(
intermediate_dim=intermediate_dim,
num_heads=num_heads,
dropout=dropout,
activation="relu",
layer_norm_epsilon=1e-5,
normalize_first=True,
kernel_initializer=opt_kernel_initializer(),
dtype=dtype,
name=f"transformer_layer_{i}",
)
self.transformer_layers.append(layer)
self.layer_norm = keras.layers.LayerNormalization(
axis=-1,
epsilon=1e-5,
dtype=dtype,
name="layer_norm",
)
# === Functional Model ===
token_id_input = keras.Input(
shape=(None,), dtype="int32", name="token_ids"
)
padding_mask_input = keras.Input(
shape=(None,), dtype="int32", name="padding_mask"
)
x = self.embeddings(token_id_input)
for transformer_layer in self.transformer_layers:
x = transformer_layer(x, decoder_padding_mask=padding_mask_input)
x = self.layer_norm(x)
super().__init__(
inputs={
"token_ids": token_id_input,
"padding_mask": padding_mask_input,
},
outputs=x,
**kwargs,
)
# === Config ===
self.vocabulary_size = vocabulary_size
self.num_layers = num_layers
self.num_heads = num_heads
self.hidden_dim = hidden_dim
self.intermediate_dim = intermediate_dim
self.dropout = dropout
self.max_sequence_length = max_sequence_length
def get_config(self):
return {
"vocabulary_size": self.vocabulary_size,
"num_layers": self.num_layers,
"num_heads": self.num_heads,
"hidden_dim": self.hidden_dim,
"intermediate_dim": self.intermediate_dim,
"dropout": self.dropout,
"max_sequence_length": self.max_sequence_length,
}
@classproperty
def presets(cls):
return copy.deepcopy(backbone_presets)
| keras-nlp/keras_nlp/models/opt/opt_backbone.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/opt/opt_backbone.py",
"repo_id": "keras-nlp",
"token_count": 2745
} | 154 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pytest
from keras_nlp.models.roberta.roberta_backbone import RobertaBackbone
from keras_nlp.models.roberta.roberta_classifier import RobertaClassifier
from keras_nlp.models.roberta.roberta_preprocessor import RobertaPreprocessor
from keras_nlp.models.roberta.roberta_tokenizer import RobertaTokenizer
from keras_nlp.tests.test_case import TestCase
class RobertaClassifierTest(TestCase):
def setUp(self):
# Setup model.
self.vocab = ["<s>", "<pad>", "</s>", "air", "Δ air", "plane", "Δ at"]
self.vocab += ["port", "<mask>"]
self.vocab = dict([(token, i) for i, token in enumerate(self.vocab)])
self.merges = ["Δ a", "Δ t", "Δ i", "Δ b", "a i", "p l", "n e"]
self.merges += ["Δ a t", "p o", "r t", "Δ t h", "ai r", "pl a", "po rt"]
self.merges += ["Δ ai r", "Δ a i", "pla ne"]
self.preprocessor = RobertaPreprocessor(
RobertaTokenizer(vocabulary=self.vocab, merges=self.merges),
sequence_length=5,
)
self.backbone = RobertaBackbone(
vocabulary_size=self.preprocessor.tokenizer.vocabulary_size(),
num_layers=2,
num_heads=2,
hidden_dim=2,
intermediate_dim=4,
max_sequence_length=self.preprocessor.sequence_length,
)
self.init_kwargs = {
"preprocessor": self.preprocessor,
"backbone": self.backbone,
"num_classes": 2,
}
self.train_data = (
[" airplane at airport", " airplane airport"], # Features.
[1, 0], # Labels.
)
self.input_data = self.preprocessor(*self.train_data)[0]
def test_classifier_basics(self):
self.run_task_test(
cls=RobertaClassifier,
init_kwargs=self.init_kwargs,
train_data=self.train_data,
expected_output_shape=(2, 2),
)
@pytest.mark.large
def test_saved_model(self):
self.run_model_saving_test(
cls=RobertaClassifier,
init_kwargs=self.init_kwargs,
input_data=self.input_data,
)
@pytest.mark.extra_large
def test_all_presets(self):
for preset in RobertaClassifier.presets:
self.run_preset_test(
cls=RobertaClassifier,
preset=preset,
init_kwargs={"num_classes": 2},
input_data=self.input_data,
expected_output_shape=(2, 2),
)
| keras-nlp/keras_nlp/models/roberta/roberta_classifier_test.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/roberta/roberta_classifier_test.py",
"repo_id": "keras-nlp",
"token_count": 1396
} | 155 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
import json
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.models.whisper.whisper_presets import backbone_presets
from keras_nlp.tokenizers.byte_pair_tokenizer import BytePairTokenizer
from keras_nlp.utils.python_utils import classproperty
def _load_dict(dict_or_path):
if isinstance(dict_or_path, str):
with open(dict_or_path, "r", encoding="utf-8") as f:
dict_or_path = json.load(f)
return dict_or_path
@keras_nlp_export("keras_nlp.models.WhisperTokenizer")
class WhisperTokenizer(BytePairTokenizer):
"""Whisper text tokenizer using Byte-Pair Encoding subword segmentation.
This tokenizer class will tokenize raw strings into integer sequences and
is based on `keras_nlp.tokenizers.BytePairTokenizer`.
This tokenizer does not provide truncation or padding of inputs.
Args:
vocabulary: string or dict, maps token to integer ids. If it is a
string, it should be the file path to a json file.
merges: string or list, contains the merge rule. If it is a string,
it should be the file path to merge rules. The merge rule file
should have one merge rule per line. Every merge rule contains
merge entities separated by a space.
special_tokens: string or dict, maps special tokens to integer IDs. If
it is a string, it should be the path to a JSON file.
language_tokens: string or dict, maps language tokens to integer IDs. If
not None, the tokenizer will be assumed to be a multilingual
tokenizer.
"""
def __init__(
self,
vocabulary=None,
merges=None,
special_tokens=None,
language_tokens=None,
**kwargs,
):
special_tokens = _load_dict(special_tokens)
if language_tokens is not None:
language_tokens = _load_dict(language_tokens)
# Necessary special tokens.
self.bos_token = "<|startoftranscript|>"
self.eos_token = "<|endoftext|>"
# TODO: The pad token for the multilingual tokenizer is actually
# "", but it errors out (OOM). After BPE is fixed, we can update
# this to "". For now, we will use `"<|endoftext|>"`.
self.pad_token = "<|endoftext|>"
self.no_timestamps_token = "<|notimestamps|>"
# Task special tokens.
self.translate_token = "<|translate|>"
self.transcribe_token = "<|transcribe|>"
for token in [
self.bos_token,
self.eos_token,
self.pad_token,
self.no_timestamps_token,
self.translate_token,
self.transcribe_token,
]:
if token not in special_tokens:
raise ValueError(
f"Cannot find token `'{token}'` in the provided "
f"`special_tokens`. Please provide `'{token}'` in your "
"`special_tokens`."
)
self.bos_token_id = special_tokens[self.bos_token]
self.eos_token_id = special_tokens[self.eos_token]
self.pad_token_id = special_tokens[self.pad_token]
self.no_timestamps_token_id = special_tokens[self.no_timestamps_token]
self.translate_token_id = special_tokens[self.translate_token]
self.transcribe_token_id = special_tokens[self.transcribe_token]
self.special_tokens = special_tokens
self.language_tokens = language_tokens
# TODO: Add language tokens to `unsplittable_tokens` once we figure
# out the performance issue with a large list.
unsplittable_tokens = list(special_tokens.keys())
super().__init__(
vocabulary=vocabulary,
merges=merges,
unsplittable_tokens=unsplittable_tokens,
**kwargs,
)
def save_assets(self, dir_path):
        # TODO: Whisper is currently mutating its vocabulary before passing
# it to the super class, so we need to restore the unmutated vocabulary
# before saving our assets. We should find a more robust (and memory
# efficient) way to do this.
vocabulary = self.vocabulary
self.vocabulary = self._initial_vocabulary
super().save_assets(dir_path)
self.vocabulary = vocabulary
def set_vocabulary_and_merges(self, vocabulary, merges):
if vocabulary is not None:
vocabulary = _load_dict(vocabulary)
self._initial_vocabulary = dict(vocabulary)
if self.language_tokens is not None:
# Multilingual tokenizer.
# Add language tokens to the vocabulary. This makes
# detokenization easier for us.
vocabulary = {
**vocabulary,
**self.language_tokens,
}
for token in [
self.bos_token,
self.eos_token,
self.pad_token,
self.no_timestamps_token,
self.translate_token,
self.transcribe_token,
]:
vocabulary[token] = self.special_tokens[token]
else:
self._initial_vocabulary = None
super().set_vocabulary_and_merges(vocabulary, merges)
def get_config(self):
config = super().get_config()
# In the constructor, we pass the list of special tokens to the
# `unsplittable_tokens` arg of the superclass' constructor. Hence, we
# delete it from the config here.
del config["unsplittable_tokens"]
config.update(
{
"special_tokens": self.special_tokens,
"language_tokens": self.language_tokens,
}
)
return config
@classproperty
def presets(cls):
return copy.deepcopy(backbone_presets)
| keras-nlp/keras_nlp/models/whisper/whisper_tokenizer.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/whisper/whisper_tokenizer.py",
"repo_id": "keras-nlp",
"token_count": 2796
} | 156 |
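Construction requires the special tokens listed above to be present in `special_tokens`. The toy sketch below uses made-up ids, a throwaway vocabulary and merges, and also places the special tokens in the vocabulary so the byte-pair superclass can resolve them; it is meant only to show the argument shapes, not to produce a useful tokenizer.

```python
import keras_nlp

special_tokens = {
    "<|startoftranscript|>": 9,
    "<|endoftext|>": 10,
    "<|notimestamps|>": 11,
    "<|translate|>": 12,
    "<|transcribe|>": 13,
}
vocab = {"a": 0, "b": 1, "c": 2, "ab": 3, "abc": 4, **special_tokens}
merges = ["a b", "ab c"]
tokenizer = keras_nlp.models.WhisperTokenizer(
    vocabulary=vocab,
    merges=merges,
    special_tokens=special_tokens,
)
```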
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.backend import keras
from keras_nlp.samplers.beam_sampler import BeamSampler
from keras_nlp.samplers.contrastive_sampler import ContrastiveSampler
from keras_nlp.samplers.greedy_sampler import GreedySampler
from keras_nlp.samplers.random_sampler import RandomSampler
from keras_nlp.samplers.top_k_sampler import TopKSampler
from keras_nlp.samplers.top_p_sampler import TopPSampler
@keras_nlp_export("keras_nlp.samplers.serialize")
def serialize(sampler):
return keras.saving.serialize_keras_object(sampler)
@keras_nlp_export("keras_nlp.samplers.deserialize")
def deserialize(config, custom_objects=None):
"""Return a `Sampler` object from its config."""
all_classes = {
"beam": BeamSampler,
"contrastive": ContrastiveSampler,
"greedy": GreedySampler,
"random": RandomSampler,
"top_k": TopKSampler,
"top_p": TopPSampler,
}
return keras.saving.deserialize_keras_object(
config,
module_objects=all_classes,
custom_objects=custom_objects,
printable_module_name="samplers",
)
@keras_nlp_export("keras_nlp.samplers.get")
def get(identifier):
"""Retrieve a KerasNLP sampler by the identifier.
    The `identifier` may be the string name of a sampler class or a sampler
    class itself.
>>> identifier = 'greedy'
>>> sampler = keras_nlp.samplers.get(identifier)
    You can also specify the `config` of the sampler to this function by passing
    a dict containing `class_name` and `config` as an identifier. Also note that
the `class_name` must map to a `Sampler` class.
>>> cfg = {'class_name': 'keras_nlp>GreedySampler', 'config': {}}
>>> sampler = keras_nlp.samplers.get(cfg)
In the case that the `identifier` is a class, this method will return a new
instance of the class by its constructor.
Args:
identifier: String or dict that contains the sampler name or
configurations.
Returns:
        Sampler instance based on the input identifier.
Raises:
ValueError: If the input identifier is not a supported type or in a bad
format.
"""
if identifier is None:
return None
if isinstance(identifier, dict):
return deserialize(identifier)
elif isinstance(identifier, str):
if not identifier.islower():
raise KeyError(
"`keras_nlp.samplers.get()` must take a lowercase string "
f"identifier, but received: {identifier}."
)
return deserialize(identifier)
elif callable(identifier):
return identifier
else:
raise ValueError(
"Could not interpret sampler identifier: " + str(identifier)
)
| keras-nlp/keras_nlp/samplers/serialization.py/0 | {
"file_path": "keras-nlp/keras_nlp/samplers/serialization.py",
"repo_id": "keras-nlp",
"token_count": 1259
} | 157 |
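A small round-trip sketch using these helpers (assuming the public `keras_nlp.samplers` exports shown above):

```python
import keras_nlp

sampler = keras_nlp.samplers.TopKSampler(k=5)
config = keras_nlp.samplers.serialize(sampler)      # -> serializable dict
restored = keras_nlp.samplers.deserialize(config)   # -> a TopKSampler again
same_kind = keras_nlp.samplers.get("top_k")         # lowercase string ids also work
```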
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
from keras_nlp.tests.test_case import TestCase
from keras_nlp.tokenizers.tokenizer import Tokenizer
class SimpleTokenizer(Tokenizer):
__test__ = False # for pytest
def tokenize(self, inputs):
return tf.strings.split(inputs).to_tensor()
def detokenize(self, inputs):
return tf.strings.reduce_join([inputs], separator=" ", axis=-1)
class TokenizerTest(TestCase):
def test_tokenize(self):
input_data = ["the quick brown fox"]
tokenizer = SimpleTokenizer()
tokenize_output = tokenizer.tokenize(input_data)
call_output = tokenizer(input_data)
self.assertAllEqual(tokenize_output, [["the", "quick", "brown", "fox"]])
self.assertAllEqual(call_output, [["the", "quick", "brown", "fox"]])
def test_detokenize(self):
input_data = ["the", "quick", "brown", "fox"]
tokenizer = SimpleTokenizer()
detokenize_output = tokenizer.detokenize(input_data)
self.assertAllEqual(detokenize_output, ["the quick brown fox"])
def test_missing_tokenize_raises(self):
with self.assertRaises(NotImplementedError):
Tokenizer()(["the quick brown fox"])
| keras-nlp/keras_nlp/tokenizers/tokenizer_test.py/0 | {
"file_path": "keras-nlp/keras_nlp/tokenizers/tokenizer_test.py",
"repo_id": "keras-nlp",
"token_count": 625
} | 158 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
from keras_nlp.backend import config
from keras_nlp.backend import keras
from keras_nlp.backend import ops
try:
import tensorflow_text as tf_text
except ImportError:
tf_text = None
def _decode_strings_to_utf8(inputs):
"""Recursively decodes to list of strings with 'utf-8' encoding."""
if isinstance(inputs, bytes):
# Handles the case when the input is a scalar string.
return inputs.decode("utf-8", errors="ignore")
else:
# Recursively iterate when input is a list.
return [_decode_strings_to_utf8(x) for x in inputs]
def tensor_to_list(inputs):
"""Converts a tensor to nested lists.
Args:
inputs: Input tensor, or dict/list/tuple of input tensors.
"""
if not isinstance(inputs, (tf.RaggedTensor, tf.Tensor)):
inputs = tf.convert_to_tensor(inputs)
if isinstance(inputs, tf.RaggedTensor):
list_outputs = inputs.to_list()
elif isinstance(inputs, tf.Tensor):
list_outputs = inputs.numpy()
if inputs.shape.rank != 0:
list_outputs = list_outputs.tolist()
if inputs.dtype == tf.string:
list_outputs = _decode_strings_to_utf8(list_outputs)
return list_outputs
def convert_to_backend_tensor_or_python_list(x):
"""
Convert a tensor to the backend friendly representation of the data.
This wraps `ops.convert_to_tensor` to account for the fact that torch and
jax both lack native types for ragged and string data.
If we encounter one of these types in torch or jax, we will instead convert
the tensor to simple pythonic types (lists of strings).
"""
if isinstance(x, tf.RaggedTensor) or getattr(x, "dtype", None) == tf.string:
return tensor_to_list(x)
return ops.convert_to_tensor(x)
def convert_to_ragged_batch(inputs):
"""Convert pythonic or numpy-like input to a 2-D `tf.RaggedTensor`.
This is useful for text preprocessing layers which deal with already
tokenized or split text.
Args:
inputs: A pythonic or numpy-like input to convert. This input should
represent a possibly batched list of token sequences.
Returns:
An `(inputs, unbatched, rectangular)` tuple, where `inputs` is a
2-D `tf.RaggedTensor`, `unbatched` is `True` if the inputs were
origianlly rank 1, and `rectangular` is `True` if the inputs rows are
all of equal lengths.
"""
# `tf.keras.layers.Layer` does a weird conversion in __call__, where a list
# of lists of ints will become a list of lists of scalar tensors. We could
# clean this up if we no longer need to care about that case.
if isinstance(inputs, (list, tuple)):
if isinstance(inputs[0], (list, tuple)):
rectangular = len(set([len(row) for row in inputs])) == 1
rows = [
tf.convert_to_tensor(row, dtype_hint="int32") for row in inputs
]
inputs = tf.ragged.stack(rows).with_row_splits_dtype("int64")
else:
inputs = tf.convert_to_tensor(inputs)
rectangular = True
elif isinstance(inputs, tf.Tensor):
rectangular = True
elif isinstance(inputs, tf.RaggedTensor):
rectangular = False
elif hasattr(inputs, "__array__"):
inputs = tf.convert_to_tensor(ops.convert_to_numpy(inputs))
rectangular = True
else:
raise ValueError(
f"Unknown tensor type. Tensor input can be passed as "
"tensors, numpy arrays, or python lists. Received: "
f"`type(inputs)={type(inputs)}`"
)
if inputs.shape.rank < 1 or inputs.shape.rank > 2:
raise ValueError(
f"Tokenized tensor input should be rank 1 (unbatched) or "
f"rank 2 (batched). Received: `inputs.shape={input.shape}`"
)
unbatched = inputs.shape.rank == 1
rectangular = rectangular or unbatched
if unbatched:
inputs = tf.expand_dims(inputs, 0)
if isinstance(inputs, tf.Tensor):
inputs = tf.RaggedTensor.from_tensor(inputs)
return inputs, unbatched, rectangular
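# Example (illustrative sketch, not part of the original module): converting
# token-id lists into a ragged batch.
#
#   batch, unbatched, rectangular = convert_to_ragged_batch([[1, 2], [3, 4, 5]])
#   # batch is a 2-D tf.RaggedTensor; unbatched is False; rectangular is False
#
#   row, unbatched, rectangular = convert_to_ragged_batch([1, 2, 3])
#   # the rank-1 input is expanded to shape (1, 3); unbatched is True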
def truncate_at_token(inputs, token, mask):
"""Truncate at first instance of `token`, ignoring `mask`."""
matches = (inputs == token) & (~mask)
end_indices = tf.cast(tf.math.argmax(matches, -1), "int32")
end_indices = tf.where(end_indices == 0, tf.shape(inputs)[-1], end_indices)
return tf.RaggedTensor.from_tensor(inputs, end_indices)
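# Example (illustrative sketch, not part of the original module): truncating
# generated ids at an end token, ignoring positions covered by the prompt mask.
#
#   inputs = tf.constant([[5, 7, 2, 0, 0]])
#   mask = tf.constant([[True, True, False, False, False]])
#   truncate_at_token(inputs, token=2, mask=mask)
#   # -> <tf.RaggedTensor [[5, 7]]>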
def assert_tf_text_installed(symbol_name):
if tf_text is None:
raise ImportError(
f"{symbol_name} requires the `tensorflow-text` package. "
"Please install with `pip install tensorflow-text`."
)
def assert_tf_backend(symbol_name):
if config.backend() != "tensorflow":
raise RuntimeError(
f"{symbol_name} requires the `tensorflow` backend. "
"Please set `KERAS_BACKEND=tensorflow` when running your program."
)
def is_tensor_type(x):
return hasattr(x, "__array__")
def standardize_dtype(dtype):
if config.keras_3():
return keras.backend.standardize_dtype(dtype)
if hasattr(dtype, "name"):
return dtype.name
return dtype
def is_float_dtype(dtype):
return "float" in standardize_dtype(dtype)
def is_int_dtype(dtype):
return "int" in standardize_dtype(dtype)
def is_string_dtype(dtype):
return "string" in standardize_dtype(dtype)
| keras-nlp/keras_nlp/utils/tensor_utils.py/0 | {
"file_path": "keras-nlp/keras_nlp/utils/tensor_utils.py",
"repo_id": "keras-nlp",
"token_count": 2409
} | 159 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import shutil
import numpy as np
import tensorflow as tf
import transformers
from absl import app
from absl import flags
from checkpoint_conversion_utils import get_md5_checksum
import keras_nlp
PRESET_MAP = {
"f_net_base_en": "google/fnet-base",
"f_net_large_en": "google/fnet-large",
}
FLAGS = flags.FLAGS
flags.DEFINE_string(
"preset", None, f'Must be one of {",".join(PRESET_MAP.keys())}'
)
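# Example invocation (illustrative; the flag value must be a key of PRESET_MAP):
#   python tools/checkpoint_conversion/convert_f_net_checkpoints.py \
#       --preset f_net_base_en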
def convert_checkpoints(hf_model):
print("\n-> Convert original weights to KerasNLP format.")
print("\n-> Load KerasNLP model.")
keras_nlp_model = keras_nlp.models.FNetBackbone.from_preset(
FLAGS.preset, load_weights=False
)
hf_wts = hf_model.state_dict()
print("Original weights:")
print(list(hf_wts.keys()))
keras_nlp_model.get_layer("token_embedding").embeddings.assign(
hf_wts["embeddings.word_embeddings.weight"]
)
keras_nlp_model.get_layer("position_embedding").position_embeddings.assign(
hf_wts["embeddings.position_embeddings.weight"]
)
keras_nlp_model.get_layer("segment_embedding").embeddings.assign(
hf_wts["embeddings.token_type_embeddings.weight"]
)
keras_nlp_model.get_layer("embeddings_layer_norm").gamma.assign(
hf_wts["embeddings.LayerNorm.weight"]
)
keras_nlp_model.get_layer("embeddings_layer_norm").beta.assign(
hf_wts["embeddings.LayerNorm.bias"]
)
keras_nlp_model.get_layer("embedding_projection").kernel.assign(
hf_wts["embeddings.projection.weight"].T
)
keras_nlp_model.get_layer("embedding_projection").bias.assign(
hf_wts["embeddings.projection.bias"]
)
for i in range(keras_nlp_model.num_layers):
keras_nlp_model.get_layer(
f"f_net_layer_{i}"
)._mixing_layer_norm.gamma.assign(
hf_wts[f"encoder.layer.{i}.fourier.output.LayerNorm.weight"].numpy()
)
keras_nlp_model.get_layer(
f"f_net_layer_{i}"
)._mixing_layer_norm.beta.assign(
hf_wts[f"encoder.layer.{i}.fourier.output.LayerNorm.bias"].numpy()
)
keras_nlp_model.get_layer(
f"f_net_layer_{i}"
)._intermediate_dense.kernel.assign(
hf_wts[f"encoder.layer.{i}.intermediate.dense.weight"]
.transpose(1, 0)
.numpy()
)
keras_nlp_model.get_layer(
f"f_net_layer_{i}"
)._intermediate_dense.bias.assign(
hf_wts[f"encoder.layer.{i}.intermediate.dense.bias"].numpy()
)
keras_nlp_model.get_layer(
f"f_net_layer_{i}"
)._output_dense.kernel.assign(
hf_wts[f"encoder.layer.{i}.output.dense.weight"]
.transpose(1, 0)
.numpy()
)
keras_nlp_model.get_layer(f"f_net_layer_{i}")._output_dense.bias.assign(
hf_wts[f"encoder.layer.{i}.output.dense.bias"].numpy()
)
keras_nlp_model.get_layer(
f"f_net_layer_{i}"
)._output_layer_norm.gamma.assign(
hf_wts[f"encoder.layer.{i}.output.LayerNorm.weight"].numpy()
)
keras_nlp_model.get_layer(
f"f_net_layer_{i}"
)._output_layer_norm.beta.assign(
hf_wts[f"encoder.layer.{i}.output.LayerNorm.bias"].numpy()
)
keras_nlp_model.get_layer("pooled_dense").kernel.assign(
hf_wts["pooler.dense.weight"].transpose(1, 0).numpy()
)
keras_nlp_model.get_layer("pooled_dense").bias.assign(
hf_wts["pooler.dense.bias"].numpy()
)
# Save the model.
print("\n-> Save KerasNLP model weights.")
keras_nlp_model.save_weights(os.path.join(FLAGS.preset, "model.h5"))
return keras_nlp_model
def extract_vocab(hf_tokenizer):
spm_path = os.path.join(FLAGS.preset, "spiece.model")
print(f"\n-> Save KerasNLP SPM vocabulary file to `{spm_path}`.")
shutil.copyfile(
transformers.utils.hub.get_file_from_repo(
hf_tokenizer.name_or_path, "spiece.model"
),
spm_path,
)
keras_nlp_tokenizer = keras_nlp.models.FNetTokenizer(
proto=spm_path,
)
keras_nlp_preprocessor = keras_nlp.models.FNetPreprocessor(
keras_nlp_tokenizer
)
print("-> Print MD5 checksum of the vocab files.")
print(f"`{spm_path}` md5sum: ", get_md5_checksum(spm_path))
return keras_nlp_preprocessor
def check_output(
keras_nlp_preprocessor,
keras_nlp_model,
hf_tokenizer,
hf_model,
):
print("\n-> Check the outputs.")
sample_text = ["cricket is awesome, easily the best sport in the world!"]
# KerasNLP
keras_nlp_inputs = keras_nlp_preprocessor(tf.constant(sample_text))
keras_nlp_output = keras_nlp_model.predict(keras_nlp_inputs)[
"sequence_output"
]
# HF
hf_inputs = hf_tokenizer(
sample_text, padding="max_length", return_tensors="pt"
)
hf_output = hf_model(**hf_inputs).last_hidden_state
print("KerasNLP output:", keras_nlp_output[0, 0, :10])
print("HF output:", hf_output[0, 0, :10])
print("Difference:", np.mean(keras_nlp_output - hf_output.detach().numpy()))
# Show the MD5 checksum of the model weights.
print(
"Model md5sum: ",
get_md5_checksum(os.path.join(FLAGS.preset, "model.h5")),
)
def main(_):
os.makedirs(FLAGS.preset)
hf_model_name = PRESET_MAP[FLAGS.preset]
print("\n-> Load HF model and HF tokenizer.")
hf_model = transformers.AutoModel.from_pretrained(hf_model_name)
hf_model.eval()
hf_tokenizer = transformers.AutoTokenizer.from_pretrained(hf_model_name)
keras_nlp_model = convert_checkpoints(hf_model)
print("\n -> Load KerasNLP preprocessor.")
keras_nlp_preprocessor = extract_vocab(hf_tokenizer)
check_output(
keras_nlp_preprocessor,
keras_nlp_model,
hf_tokenizer,
hf_model,
)
if __name__ == "__main__":
flags.mark_flag_as_required("preset")
app.run(main)
| keras-nlp/tools/checkpoint_conversion/convert_f_net_checkpoints.py/0 | {
"file_path": "keras-nlp/tools/checkpoint_conversion/convert_f_net_checkpoints.py",
"repo_id": "keras-nlp",
"token_count": 3089
} | 160 |
# Copyright 2024 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import contextlib
import os
import random
import sys
from typing import List
import gemma.xla_model_parallel as xla_model_parallel
import numpy as np
import torch
import torch.multiprocessing
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp
from absl import app
from absl import flags
from gemma.config import GemmaConfig
from gemma.config import get_config_for_2b
from gemma.config import get_config_for_7b
from gemma.model_xla import GemmaForCausalLM
from gemma.tokenizer import Tokenizer
PAD_TOKEN_ID = -1
FILE_PATH = "gemma.ckpt"
TOKENIZER_DIR = "gemma_tokenizer"
PRESET_MAP = {
"gemma_2b_en": get_config_for_2b(),
"gemma_instruct_2b_en": get_config_for_2b(),
"gemma_7b_en": get_config_for_7b(),
"gemma_instruct_7b_en": get_config_for_7b(),
}
SIZE_MAP = {
"2b": get_config_for_2b(),
"7b": get_config_for_7b(),
}
FLAGS = flags.FLAGS
flags.DEFINE_string(
"preset", None, f'Must be one of {",".join(PRESET_MAP.keys())}'
)
flags.DEFINE_string(
"size",
None,
"Size of model. Must be passed if `preset` is not passed. "
"This should be either `2b` or `7b`.",
)
flags.DEFINE_string(
"checkpoint_file",
"gemma.ckpt",
"A PyTorch checkpoint file containing the converted weights.",
)
flags.DEFINE_string(
"vocab_file",
"gemma_tokenizer/vocabulary.spm",
"The file containing the vocabulary for the tokenizer.",
)
flags.DEFINE_string(
"prompt",
"The capital of France is",
"A test prompt for verifying functionality of the PyTorch Gemma model.",
)
# This is a modified version of the `run_xla.py` script in the Hex-LLM Gemma
# repo, used to verify proper functionality after porting checkpoints from Keras.
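# Example invocation (illustrative sketch; the paths shown are the flag defaults):
#   python tools/gemma/run_gemma_xla.py --preset gemma_2b_en \
#       --checkpoint_file gemma.ckpt \
#       --vocab_file gemma_tokenizer/vocabulary.spm \
#       --prompt "The capital of France is"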
@contextlib.contextmanager
def _set_default_tensor_type(dtype: torch.dtype):
"""Sets the default torch dtype to the given dtype."""
torch.set_default_dtype(dtype)
yield
torch.set_default_dtype(torch.float)
def generate(
i: int,
model_config: GemmaConfig,
checkpoint_file: str,
vocab_file: str,
prompts: List[str],
output_lens: List[int],
temperatures: List[float],
top_ps: List[float],
top_ks: List[int],
):
# Set seed from config
seed = model_config.seed
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
device = xm.xla_device()
xm.set_rng_state(seed, device)
rank = xla_model_parallel.get_model_parallel_rank()
world_size = xla_model_parallel.get_model_parallel_world_size()
if rank > 0:
sys.stdout = open(os.devnull, "w")
# Load model with ported weights and place on device
with _set_default_tensor_type(model_config.get_dtype()):
model = GemmaForCausalLM(model_config, world_size, rank, device)
model.load_weights(checkpoint_file)
model = model.to(device).eval()
# Create tokenizer with saved Keras tokenizer state
tokenizer = Tokenizer(vocab_file)
prompt_tokens = [tokenizer.encode(prompt) for prompt in prompts]
min_prompt_len = min(len(p) for p in prompt_tokens)
batch_size = len(prompts)
assert batch_size == len(temperatures)
assert batch_size == len(top_ps)
assert batch_size == len(top_ks)
max_seq_len = max([len(p) + o for p, o in zip(prompt_tokens, output_lens)])
assert max_seq_len <= model_config.max_position_embeddings
if model_config.num_key_value_heads < world_size:
assert world_size % model_config.num_key_value_heads == 0
n_local_heads = 1
else:
assert model_config.num_key_value_heads % world_size == 0
n_local_heads = model_config.num_key_value_heads // world_size
# build KV caches
kv_caches = []
for _ in range(model_config.num_hidden_layers):
k_cache = torch.zeros(
size=(
batch_size,
max_seq_len,
n_local_heads,
model_config.head_dim,
),
dtype=model_config.get_dtype(),
device=device,
)
v_cache = torch.zeros(
size=(
batch_size,
max_seq_len,
n_local_heads,
model_config.head_dim,
),
dtype=model_config.get_dtype(),
device=device,
)
kv_caches.append((k_cache, v_cache))
# prepare inputs
token_ids_tensor = torch.full(
(batch_size, max_seq_len), PAD_TOKEN_ID, dtype=torch.int64
)
input_token_ids_tensor = torch.full(
(batch_size, min_prompt_len), PAD_TOKEN_ID, dtype=torch.int64
)
for i, p in enumerate(prompt_tokens):
token_ids_tensor[i, : len(p)] = torch.tensor(p)
input_token_ids_tensor[i, :min_prompt_len] = torch.tensor(
p[:min_prompt_len]
)
token_ids_tensor = token_ids_tensor.to(device)
prompt_mask_tensor = token_ids_tensor != PAD_TOKEN_ID
input_token_ids_tensor = input_token_ids_tensor.to(device)
input_positions_tensor = torch.arange(
0, min_prompt_len, dtype=torch.int64
).to(device)
mask_tensor = torch.full(
(1, 1, max_seq_len, max_seq_len), -2.3819763e38
).to(torch.float)
mask_tensor = torch.triu(mask_tensor, diagonal=1).to(device)
curr_mask_tensor = mask_tensor.index_select(2, input_positions_tensor)
output_positions_tensor = torch.LongTensor([min_prompt_len - 1]).to(device)
temperatures_tensor = torch.FloatTensor(temperatures).to(device)
top_ps_tensor = torch.FloatTensor(top_ps).to(device)
top_ks_tensor = torch.LongTensor(top_ks).to(device)
output_index = torch.tensor(min_prompt_len, dtype=torch.int64).to(device)
xm.mark_step()
# Prefill up to min_prompt_len tokens, then treat other prefill as decode and ignore output.
for i in range(max_seq_len - min_prompt_len):
next_token_ids = model(
input_token_ids=input_token_ids_tensor,
input_positions=input_positions_tensor,
kv_write_indices=None,
kv_caches=kv_caches,
mask=curr_mask_tensor,
output_positions=output_positions_tensor,
temperatures=temperatures_tensor,
top_ps=top_ps_tensor,
top_ks=top_ks_tensor,
)
curr_prompt_mask = prompt_mask_tensor.index_select(
1, output_index
).squeeze(dim=1)
curr_token_ids = token_ids_tensor.index_select(1, output_index).squeeze(
dim=1
)
output_token_ids = torch.where(
curr_prompt_mask, curr_token_ids, next_token_ids
).unsqueeze(dim=1)
token_ids_tensor.index_copy_(1, output_index, output_token_ids)
input_token_ids_tensor = output_token_ids
input_positions_tensor = output_index
curr_mask_tensor = mask_tensor.index_select(2, input_positions_tensor)
output_positions_tensor = torch.tensor(0, dtype=torch.int64).to(device)
output_index = output_index + 1
xm.mark_step()
# Detokenization.
token_ids = token_ids_tensor.tolist()
results = []
for i, tokens in enumerate(token_ids):
trimmed_output = tokens[
len(prompt_tokens[i]) : len(prompt_tokens[i]) + output_lens[i]
]
if tokenizer.eos_id in trimmed_output:
eos_index = trimmed_output.index(tokenizer.eos_id)
trimmed_output = trimmed_output[:eos_index]
results.append(tokenizer.decode(trimmed_output))
for prompt, result in zip(prompts, results):
print("======================================")
print(f"PROMPT: {prompt}")
print(f"RESULT: {result}")
print("======================================")
def flag_error_handler():
if not FLAGS.preset and not FLAGS.size:
raise ValueError(
"Please pass either a valid Keras preset to `--preset`"
" or supply a model size (`2b` or `7b`) to `--size`."
)
if FLAGS.size and FLAGS.size.lower() not in ["2b", "7b"]:
raise ValueError(
"Invalid `size`. Please pass the appropriate size (`2b` or `7b`) "
"for your model to the `--size` flag."
)
def main(_):
flag_error_handler()
if FLAGS.preset:
model_config = PRESET_MAP[FLAGS.preset]
else:
model_config = SIZE_MAP[FLAGS.size.lower()]
prompts = [
FLAGS.prompt,
]
n = len(prompts)
output_lengths = [10] * n
temperatures = [0.95] * n
top_ps = [1.0] * n
top_ks = [100] * n
xmp.spawn(
generate,
args=(
model_config,
FLAGS.checkpoint_file,
FLAGS.vocab_file,
prompts,
output_lengths,
temperatures,
top_ps,
top_ks,
),
)
if __name__ == "__main__":
app.run(main)
| keras-nlp/tools/gemma/run_gemma_xla.py/0 | {
"file_path": "keras-nlp/tools/gemma/run_gemma_xla.py",
"repo_id": "keras-nlp",
"token_count": 4199
} | 161 |
[report]
fail_under = 85
show_missing = True
| keras-preprocessing/.coveragerc/0 | {
"file_path": "keras-preprocessing/.coveragerc",
"repo_id": "keras-preprocessing",
"token_count": 16
} | 162 |
COPYRIGHT
Copyright (c) 2015 - 2018, the respective contributors.
All rights reserved.
Each contributor holds copyright over their respective contributions.
The project versioning (Git) records all such contribution source information.
The initial code of this repository came from https://github.com/keras-team/keras
(the Keras repository), hence, for author information regarding commits
that occurred earlier than the first commit in the present repository,
please see the original Keras repository.
LICENSE
The MIT License (MIT)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| keras-preprocessing/LICENSE/0 | {
"file_path": "keras-preprocessing/LICENSE",
"repo_id": "keras-preprocessing",
"token_count": 384
} | 163 |
from setuptools import find_packages, setup
long_description = '''
Keras Preprocessing is the data preprocessing
and data augmentation module of the Keras deep learning library.
It provides utilities for working with image data, text data,
and sequence data.
Read the documentation at: https://keras.io/
Keras Preprocessing may be imported directly
from an up-to-date installation of Keras:
```
from keras import preprocessing
```
Keras Preprocessing is compatible with Python 3.6
and is distributed under the MIT license.
'''
setup(name='Keras_Preprocessing',
version='1.1.2',
description='Easy data preprocessing and data augmentation '
'for deep learning models',
long_description=long_description,
author='Keras Team',
url='https://github.com/keras-team/keras-preprocessing',
download_url='https://github.com/keras-team/'
'keras-preprocessing/tarball/1.1.2',
license='MIT',
install_requires=['numpy>=1.9.1'],
extras_require={
'tests': ['pandas',
'Pillow',
'tensorflow', # CPU version
'keras',
'pytest',
'pytest-xdist',
'pytest-cov'],
'pep8': ['flake8'],
'image': ['scipy>=0.14',
'Pillow>=5.2.0'],
},
classifiers=[
'Development Status :: 5 - Production/Stable',
'Intended Audience :: Developers',
'Intended Audience :: Education',
'Intended Audience :: Science/Research',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.6',
'Topic :: Software Development :: Libraries',
'Topic :: Software Development :: Libraries :: Python Modules'
],
packages=find_packages())
| keras-preprocessing/setup.py/0 | {
"file_path": "keras-preprocessing/setup.py",
"repo_id": "keras-preprocessing",
"token_count": 823
} | 164 |
# How to Contribute
We'd love to accept your patches and contributions to this project. There are
just a few small guidelines you need to follow.
## Contributor License Agreement
Contributions to this project must be accompanied by a Contributor License
Agreement. You (or your employer) retain the copyright to your contribution;
this simply gives us permission to use and redistribute your contributions as
part of the project. Head over to <https://cla.developers.google.com/> to see
your current agreements on file or to sign a new one.
You generally only need to submit a CLA once, so if you've already submitted one
(even if it was for a different project), you probably don't need to do it
again.
## Pull Request Guide
Before you submit a pull request, check that it meets these guidelines:
1. Is this the first pull request that you're making with GitHub? If so, read the guide [Making a pull request to an open-source project](https://github.com/gabrieldemarmiesse/getting_started_open_source).
2. Include "resolves #issue_number" in the description of the pull request if applicable and briefly describe your contribution.
3. For the case of bug fixes, add new test cases which would fail before your bug fix.
## Setup Environment
We introduce 2 different options: **GitHub Codespaces**, **VS Code & Remote-Containers**.
You may also use any other environment as long as you install the dependencies in `setup.py`.
To make sure your environment matches ours, we recommend installing like this:
```shell
pip install --upgrade pip
pip install -e ".[tensorflow-cpu,tests]"
echo "sh shell/lint.sh" > .git/hooks/pre-commit
chmod a+x .git/hooks/pre-commit
```
### Option 1: GitHub Codespaces
You can simply open the repository in GitHub Codespaces.
The environment is already setup there.
### Option 2: VS Code & Remote-Containers
Open VS Code.
Install the `Remote-Containers` extension.
Press `F1` key. Enter `Remote-Containers: Open Folder in Container` to open the repository root folder.
The environment is already setup there.
## Run Tests
You can simply open any `*_test.py` file under the `tests` directory
and wait a few seconds; the test tab will appear on the left of the window.
We use PyTest for the tests, so you may also run them with the `pytest` command.
## Code Style
We use `flake8`, `black` and `isort` for linting.
You can run the following manually every time you want to format your code.
1. Run `shell/format.sh` to format your code.
2. Run `shell/lint.sh` to check.
## Rebuilding Protos
If you make changes to any `.proto` file, you'll have to rebuild the generated
`*_pb2.py` files. To do this, run these commands from the root directory of this
project:
```
pip install grpcio-tools
python -m grpc_tools.protoc --python_out=. --grpc_python_out=. --proto_path=. keras_tuner/protos/keras_tuner.proto
python -m grpc_tools.protoc --python_out=. --grpc_python_out=. --proto_path=. keras_tuner/protos/service.proto
```
## Community Guidelines
This project follows [Google's Open Source Community
Guidelines](https://opensource.google.com/conduct/). | keras-tuner/CONTRIBUTING.md/0 | {
"file_path": "keras-tuner/CONTRIBUTING.md",
"repo_id": "keras-tuner",
"token_count": 867
} | 165 |
# Copyright 2019 The KerasTuner Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from keras_tuner import applications
from keras_tuner import oracles
from keras_tuner import tuners
from keras_tuner.api_export import keras_tuner_export
from keras_tuner.engine.hypermodel import HyperModel
from keras_tuner.engine.hyperparameters import HyperParameter
from keras_tuner.engine.hyperparameters import HyperParameters
from keras_tuner.engine.objective import Objective
from keras_tuner.engine.oracle import Oracle
from keras_tuner.engine.oracle import synchronized
from keras_tuner.engine.tuner import Tuner
from keras_tuner.tuners import BayesianOptimization
from keras_tuner.tuners import GridSearch
from keras_tuner.tuners import Hyperband
from keras_tuner.tuners import RandomSearch
from keras_tuner.tuners import SklearnTuner
from keras_tuner.version import __version__
| keras-tuner/keras_tuner/__init__.py/0 | {
"file_path": "keras-tuner/keras_tuner/__init__.py",
"repo_id": "keras-tuner",
"token_count": 388
} | 166 |
# Copyright 2019 The KerasTuner Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from keras_tuner import utils
from keras_tuner.api_export import keras_tuner_export
from keras_tuner.backend import keras
from keras_tuner.engine.hyperparameters import hp_types
from keras_tuner.engine.hyperparameters.hp_types import Boolean
from keras_tuner.engine.hyperparameters.hp_types import Choice
from keras_tuner.engine.hyperparameters.hp_types import Fixed
from keras_tuner.engine.hyperparameters.hp_types import Float
from keras_tuner.engine.hyperparameters.hp_types import Int
from keras_tuner.engine.hyperparameters.hyperparameter import HyperParameter
from keras_tuner.engine.hyperparameters.hyperparameters import HyperParameters
OBJECTS = hp_types.OBJECTS + (
HyperParameter,
HyperParameters,
)
ALL_CLASSES = {cls.__name__: cls for cls in OBJECTS}
@keras_tuner_export("keras_tuner.engine.hyperparameters.deserialize")
def deserialize(config):
return utils.deserialize_keras_object(config, module_objects=ALL_CLASSES)
@keras_tuner_export("keras_tuner.engine.hyperparameters.serialize")
def serialize(obj):
return utils.serialize_keras_object(obj)
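# Example (illustrative sketch, not part of the original module): round-tripping
# a hyperparameter through `serialize`/`deserialize`.
#
#   hp = Int("units", min_value=32, max_value=512, step=32)
#   config = serialize(hp)
#   restored = deserialize(config)  # an equivalent `Int` hyperparameter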
| keras-tuner/keras_tuner/engine/hyperparameters/__init__.py/0 | {
"file_path": "keras-tuner/keras_tuner/engine/hyperparameters/__init__.py",
"repo_id": "keras-tuner",
"token_count": 520
} | 167 |
# Copyright 2019 The KerasTuner Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from keras_tuner.api_export import keras_tuner_export
@keras_tuner_export(["keras_tuner.errors.FailedTrialError"])
class FailedTrialError(Exception):
"""Raise this error to mark a `Trial` as failed.
When this error is raised in a `Trial`, the `Tuner` will not retry the
`Trial` but directly mark it as `"FAILED"`.
Example:
```py
class MyHyperModel(keras_tuner.HyperModel):
def build(self, hp):
# Build the model
...
if too_slow(model):
# Mark the Trial as "FAILED" if the model is too slow.
raise keras_tuner.FailedTrialError("Model is too slow.")
return model
```
"""
pass
@keras_tuner_export(["keras_tuner.errors.FatalError"])
class FatalError(Exception):
"""A fatal error during search to terminate the program.
It is used to terminate the KerasTuner program for errors that need
users' immediate attention. When this error is raised in a `Trial`, it will
not be caught by KerasTuner.
"""
pass
@keras_tuner_export(["keras_tuner.errors.FatalValueError"])
class FatalValueError(FatalError, ValueError):
"""A fatal error during search to terminate the program.
It is a subclass of `FatalError` and `ValueError`.
It is used to terminate the KerasTuner program for errors that need
users' immediate attention. When this error is raised in a `Trial`, it will
not be caught by KerasTuner.
"""
pass
@keras_tuner_export(["keras_tuner.errors.FatalTypeError"])
class FatalTypeError(FatalError, TypeError):
"""A fatal error during search to terminate the program.
It is a subclass of `FatalError` and `TypeError`.
It is used to terminate the KerasTuner program for errors that need
users' immediate attention. When this error is raised in a `Trial`, it will
not be caught by KerasTuner.
"""
pass
@keras_tuner_export(["keras_tuner.errors.FatalRuntimeError"])
class FatalRuntimeError(FatalError, RuntimeError):
"""A fatal error during search to terminate the program.
It is a subclass of `FatalError` and `RuntimeError`.
It is used to terminate the KerasTuner program for errors that need
users' immediate attention. When this error is raised in a `Trial`, it will
not be caught by KerasTuner.
"""
pass
| keras-tuner/keras_tuner/errors.py/0 | {
"file_path": "keras-tuner/keras_tuner/errors.py",
"repo_id": "keras-tuner",
"token_count": 981
} | 168 |
# Copyright 2019 The KerasTuner Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import pandas as pd
import pytest
from sklearn import datasets
from sklearn import decomposition
from sklearn import ensemble
from sklearn import linear_model
from sklearn import metrics
from sklearn import model_selection
from sklearn import neighbors
from sklearn import pipeline
import keras_tuner
def build_model(hp):
model_type = hp.Choice("model_type", ["random_forest", "ridge", "knn"])
if model_type == "random_forest":
with hp.conditional_scope("model_type", "random_forest"):
model = ensemble.RandomForestClassifier(
n_estimators=hp.Int("n_estimators", 10, 50, step=10),
max_depth=hp.Int("max_depth", 3, 10),
)
elif model_type == "ridge":
with hp.conditional_scope("model_type", "ridge"):
model = linear_model.RidgeClassifier(
alpha=hp.Float("alpha", 1e-3, 1, sampling="log")
)
elif model_type == "knn":
with hp.conditional_scope("model_type", "knn"):
k = hp.Int("n_neighbors", 1, 30, default=5)
model = neighbors.KNeighborsClassifier(
n_neighbors=k,
weights=hp.Choice(
"weights", ["uniform", "distance"], default="uniform"
),
)
else:
raise ValueError("Unrecognized model_type")
return model
def build_pipeline(hp):
n_components = hp.Choice("n_components", [2, 5, 10], default=5)
pca = decomposition.PCA(n_components=n_components)
model_type = hp.Choice("model_type", ["random_forest", "ridge", "knn"])
if model_type == "random_forest":
with hp.conditional_scope("model_type", "random_forest"):
model = ensemble.RandomForestClassifier(
n_estimators=hp.Int("n_estimators", 10, 50, step=10),
max_depth=hp.Int("max_depth", 3, 10),
)
elif model_type == "ridge":
with hp.conditional_scope("model_type", "ridge"):
model = linear_model.RidgeClassifier(
alpha=hp.Float("alpha", 1e-3, 1, sampling="log")
)
elif model_type == "knn":
with hp.conditional_scope("model_type", "knn"):
k = hp.Int("n_neighbors", 1, 30, default=5)
model = neighbors.KNeighborsClassifier(
n_neighbors=k,
weights=hp.Choice(
"weights", ["uniform", "distance"], default="uniform"
),
)
else:
raise ValueError("Unrecognized model_type")
skpipeline = pipeline.Pipeline([("pca", pca), ("clf", model)])
return skpipeline
def test_sklearn_tuner_simple_with_np(tmp_path):
tuner = keras_tuner.SklearnTuner(
oracle=keras_tuner.oracles.BayesianOptimizationOracle(
objective=keras_tuner.Objective("score", "max"), max_trials=10
),
hypermodel=build_model,
directory=tmp_path,
)
x = np.random.uniform(size=(50, 10))
y = np.random.randint(0, 2, size=(50,))
tuner.search(x, y)
assert len(tuner.oracle.trials) == 10
best_trial = tuner.oracle.get_best_trials()[0]
assert best_trial.status == "COMPLETED"
assert best_trial.score is not None
assert best_trial.best_step == 0
assert best_trial.metrics.exists("score")
# Make sure best model can be reloaded.
best_model = tuner.get_best_models()[0]
best_model.score(x, y)
@pytest.mark.filterwarnings("ignore:.*column-vector")
def test_sklearn_tuner_with_df(tmp_path):
tuner = keras_tuner.SklearnTuner(
oracle=keras_tuner.oracles.BayesianOptimizationOracle(
objective=keras_tuner.Objective("score", "max"), max_trials=10
),
hypermodel=build_model,
directory=tmp_path,
)
x = pd.DataFrame(np.random.uniform(size=(50, 10)))
y = pd.DataFrame(np.random.randint(0, 2, size=(50,)))
tuner.search(x, y)
assert len(tuner.oracle.trials) == 10
def test_sklearn_custom_scoring_and_cv(tmp_path):
tuner = keras_tuner.SklearnTuner(
oracle=keras_tuner.oracles.BayesianOptimizationOracle(
objective=keras_tuner.Objective("score", "max"), max_trials=10
),
hypermodel=build_model,
scoring=metrics.make_scorer(metrics.balanced_accuracy_score),
cv=model_selection.StratifiedKFold(5),
directory=tmp_path,
)
x = np.random.uniform(size=(50, 10))
y = np.random.randint(0, 2, size=(50,))
tuner.search(x, y)
assert len(tuner.oracle.trials) == 10
best_trial = tuner.oracle.get_best_trials()[0]
assert best_trial.status == "COMPLETED"
assert best_trial.score is not None
assert best_trial.best_step == 0
assert best_trial.metrics.exists("score")
# Make sure best model can be reloaded.
best_model = tuner.get_best_models()[0]
best_model.score(x, y)
def test_sklearn_additional_metrics(tmp_path):
tuner = keras_tuner.SklearnTuner(
oracle=keras_tuner.oracles.BayesianOptimizationOracle(
objective=keras_tuner.Objective("score", "max"), max_trials=10
),
hypermodel=build_model,
metrics=[metrics.balanced_accuracy_score, metrics.recall_score],
directory=tmp_path,
)
x = np.random.uniform(size=(50, 10))
y = np.random.randint(0, 2, size=(50,))
tuner.search(x, y)
assert len(tuner.oracle.trials) == 10
best_trial = tuner.oracle.get_best_trials()[0]
assert best_trial.status == "COMPLETED"
assert best_trial.score is not None
assert best_trial.best_step == 0
assert best_trial.metrics.exists("score")
assert best_trial.metrics.exists("balanced_accuracy_score")
assert best_trial.metrics.exists("recall_score")
# Make sure best model can be reloaded.
best_model = tuner.get_best_models()[0]
best_model.score(x, y)
def test_sklearn_sample_weight(tmp_path):
tuner = keras_tuner.SklearnTuner(
oracle=keras_tuner.oracles.BayesianOptimizationOracle(
objective=keras_tuner.Objective("score", "max"), max_trials=10
),
hypermodel=build_model,
directory=tmp_path,
)
x = np.random.uniform(size=(50, 10))
y = np.random.randint(0, 2, size=(50,))
sample_weight = np.random.uniform(0.1, 1, size=(50,))
tuner.search(x, y, sample_weight=sample_weight)
assert len(tuner.oracle.trials) == 10
best_trial = tuner.oracle.get_best_trials()[0]
assert best_trial.status == "COMPLETED"
assert best_trial.score is not None
assert best_trial.best_step == 0
assert best_trial.metrics.exists("score")
# Make sure best model can be reloaded.
best_model = tuner.get_best_models()[0]
best_model.score(x, y)
def test_sklearn_pipeline(tmp_path):
tuner = keras_tuner.SklearnTuner(
oracle=keras_tuner.oracles.BayesianOptimizationOracle(
objective=keras_tuner.Objective("score", "max"), max_trials=10
),
hypermodel=build_pipeline,
directory=tmp_path,
)
x = np.random.uniform(size=(50, 10))
y = np.random.randint(0, 2, size=(50,))
sample_weight = np.random.uniform(0.1, 1, size=(50,))
tuner.search(x, y, sample_weight=sample_weight)
assert len(tuner.oracle.trials) == 10
best_trial = tuner.oracle.get_best_trials()[0]
assert best_trial.status == "COMPLETED"
assert best_trial.score is not None
assert best_trial.best_step == 0
assert best_trial.metrics.exists("score")
# Make sure best pipeline can be reloaded.
best_pipeline = tuner.get_best_models()[0]
best_pipeline.score(x, y)
def test_sklearn_cv_with_groups(tmp_path):
tuner = keras_tuner.SklearnTuner(
oracle=keras_tuner.oracles.BayesianOptimizationOracle(
objective=keras_tuner.Objective("score", "max"), max_trials=10
),
hypermodel=build_model,
cv=model_selection.GroupKFold(5),
directory=tmp_path,
)
x = np.random.uniform(size=(50, 10))
y = np.random.randint(0, 2, size=(50,))
groups = np.random.randint(0, 5, size=(50,))
tuner.search(x, y, groups=groups)
assert len(tuner.oracle.trials) == 10
best_trial = tuner.oracle.get_best_trials()[0]
assert best_trial.status == "COMPLETED"
assert best_trial.score is not None
assert best_trial.best_step == 0
assert best_trial.metrics.exists("score")
# Make sure best model can be reloaded.
best_model = tuner.get_best_models()[0]
best_model.score(x, y)
def test_sklearn_real_data(tmp_path):
tuner = keras_tuner.SklearnTuner(
oracle=keras_tuner.oracles.BayesianOptimizationOracle(
objective=keras_tuner.Objective("score", "max"), max_trials=10
),
hypermodel=build_model,
scoring=metrics.make_scorer(metrics.accuracy_score),
metrics=metrics.accuracy_score,
cv=model_selection.StratifiedKFold(5),
directory=tmp_path,
)
x, y = datasets.load_iris(return_X_y=True)
x_train, x_test, y_train, y_test = model_selection.train_test_split(
x, y, test_size=0.2
)
tuner.search(x_train, y_train)
best_models = tuner.get_best_models(10)
best_model = best_models[0]
worst_model = best_models[9]
best_model_score = best_model.score(x_test, y_test)
worst_model_score = worst_model.score(x_test, y_test)
assert best_model_score > 0.8
assert best_model_score >= worst_model_score
def test_sklearn_not_install_error(tmp_path):
sklearn_module = keras_tuner.tuners.sklearn_tuner.sklearn
keras_tuner.tuners.sklearn_tuner.sklearn = None
with pytest.raises(ImportError, match="Please install sklearn"):
keras_tuner.SklearnTuner(
oracle=keras_tuner.oracles.BayesianOptimizationOracle(
objective=keras_tuner.Objective("score", "max"), max_trials=10
),
hypermodel=build_model,
directory=tmp_path,
)
keras_tuner.tuners.sklearn_tuner.sklearn = sklearn_module
def test_sklearn_wrong_data_type(tmp_path):
with pytest.raises(RuntimeError, match="Expected the data to be numpy"):
tuner = keras_tuner.SklearnTuner(
oracle=keras_tuner.oracles.BayesianOptimizationOracle(
objective=keras_tuner.Objective("score", "max"), max_trials=10
),
hypermodel=build_model,
scoring=metrics.make_scorer(metrics.accuracy_score),
cv=model_selection.StratifiedKFold(3),
directory=tmp_path,
)
tuner.search([1, 2, 3, 4, 5, 6], [1, 1, 1, 2, 2, 2])
| keras-tuner/keras_tuner/tuners/sklearn_tuner_test.py/0 | {
"file_path": "keras-tuner/keras_tuner/tuners/sklearn_tuner_test.py",
"repo_id": "keras-tuner",
"token_count": 4892
} | 169 |
# Dev container configurations
This directory contains the configuration for dev containers, which is used to
initialize the development environment in **Codespaces**, **Visual Studio
Code**, and **JetBrains IDEs**. The environment is installed with all the
necessary dependencies for development and is ready for linting, formatting, and
running tests.
* **GitHub Codespaces**. Create a codespace for the repo by clicking
the "Code" button on the main page of the repo, selecting the "Codespaces"
tab, and clicking the "+". The configurations will automatically be used.
Follow
[this guide](https://docs.github.com/en/codespaces/developing-in-a-codespace/creating-a-codespace-for-a-repository)
for more details.
* **Visual Studio Code**. Open the root folder of the repo in VS Code. A
notification will pop up to open it in a dev container with the
configuration. Follow
[this guide](https://code.visualstudio.com/docs/devcontainers/tutorial)
for more details.
* **JetBrains IDEs**. Open the `.devcontainer/devcontainer.json` in your
JetBrains IDE. Click the docker icon to create a dev container.
Follow
[this guide](https://www.jetbrains.com/help/idea/connect-to-devcontainer.html)
for more details. | keras/.devcontainer/README.md/0 | {
"file_path": "keras/.devcontainer/README.md",
"repo_id": "keras",
"token_count": 362
} | 170 |
"""Benchmark regularization layers.
To run benchmarks, see the following example command, and change the
flags to your custom values:
```
python3 -m benchmarks.layer_benchmark.regularization_benchmark \
--benchmark_name=benchmark_dropout\
--num_samples=2048 \
--batch_size=256 \
--jit_compile=True
```
"""
from absl import app
from absl import flags
from benchmarks.layer_benchmark.base_benchmark import LayerBenchmark
FLAGS = flags.FLAGS
def benchmark_dropout(
num_samples,
batch_size,
jit_compile=True,
):
layer_name = "Dropout"
init_args = {
"rate": 0.5,
}
benchmark = LayerBenchmark(
layer_name,
init_args,
input_shape=[256, 256, 4],
jit_compile=jit_compile,
)
benchmark.benchmark_predict(
num_samples=num_samples,
batch_size=batch_size,
)
benchmark.benchmark_train(
num_samples=num_samples,
batch_size=batch_size,
)
def benchmark_gaussian_dropout(
num_samples,
batch_size,
jit_compile=True,
):
layer_name = "GaussianDropout"
init_args = {
"rate": 0.5,
}
benchmark = LayerBenchmark(
layer_name,
init_args,
input_shape=[256, 256, 4],
jit_compile=jit_compile,
)
benchmark.benchmark_predict(
num_samples=num_samples,
batch_size=batch_size,
)
benchmark.benchmark_train(
num_samples=num_samples,
batch_size=batch_size,
)
def benchmark_gaussian_noise(
num_samples,
batch_size,
jit_compile=True,
):
layer_name = "GaussianNoise"
init_args = {
"stddev": 0.5,
}
benchmark = LayerBenchmark(
layer_name,
init_args,
input_shape=[256, 256, 4],
jit_compile=jit_compile,
)
benchmark.benchmark_predict(
num_samples=num_samples,
batch_size=batch_size,
)
benchmark.benchmark_train(
num_samples=num_samples,
batch_size=batch_size,
)
def benchmark_spatial_dropout1D(
num_samples,
batch_size,
jit_compile=True,
):
layer_name = "SpatialDropout1D"
init_args = {
"rate": 0.5,
}
benchmark = LayerBenchmark(
layer_name,
init_args,
input_shape=[256, 3],
jit_compile=jit_compile,
)
benchmark.benchmark_predict(
num_samples=num_samples,
batch_size=batch_size,
)
benchmark.benchmark_train(
num_samples=num_samples,
batch_size=batch_size,
)
def benchmark_spatial_dropout2D(
num_samples,
batch_size,
jit_compile=True,
):
layer_name = "SpatialDropout2D"
init_args = {
"rate": 0.5,
}
benchmark = LayerBenchmark(
layer_name,
init_args,
input_shape=[256, 256, 3],
jit_compile=jit_compile,
)
benchmark.benchmark_predict(
num_samples=num_samples,
batch_size=batch_size,
)
benchmark.benchmark_train(
num_samples=num_samples,
batch_size=batch_size,
)
def benchmark_spatial_dropout3D(
num_samples,
batch_size,
jit_compile=True,
):
layer_name = "SpatialDropout3D"
init_args = {
"rate": 0.5,
}
benchmark = LayerBenchmark(
layer_name,
init_args,
input_shape=[32, 32, 32, 3],
jit_compile=jit_compile,
)
benchmark.benchmark_predict(
num_samples=num_samples,
batch_size=batch_size,
)
benchmark.benchmark_train(
num_samples=num_samples,
batch_size=batch_size,
)
BENCHMARK_NAMES = {
"benchmark_dropout": benchmark_dropout,
"benchmark_gaussian_dropout": benchmark_gaussian_dropout,
"benchmark_gaussian_noise": benchmark_gaussian_noise,
"benchmark_spatial_dropout1D": benchmark_spatial_dropout1D,
"benchmark_spatial_dropout2D": benchmark_spatial_dropout2D,
"benchmark_spatial_dropout3D": benchmark_spatial_dropout3D,
}
def main(_):
benchmark_name = FLAGS.benchmark_name
num_samples = FLAGS.num_samples
batch_size = FLAGS.batch_size
jit_compile = FLAGS.jit_compile
if benchmark_name is None:
for name, benchmark_fn in BENCHMARK_NAMES.items():
benchmark_fn(num_samples, batch_size, jit_compile)
return
if benchmark_name not in BENCHMARK_NAMES:
raise ValueError(
f"Invalid benchmark name: {benchmark_name}, `benchmark_name` must "
f"be one of {BENCHMARK_NAMES.keys()}"
)
benchmark_fn = BENCHMARK_NAMES[benchmark_name]
benchmark_fn(num_samples, batch_size, jit_compile)
if __name__ == "__main__":
app.run(main)
| keras/benchmarks/layer_benchmark/regularization_benchmark.py/0 | {
"file_path": "keras/benchmarks/layer_benchmark/regularization_benchmark.py",
"repo_id": "keras",
"token_count": 2198
} | 171 |
# flake8: noqa
import os
# Set backend env to tensorflow
os.environ["KERAS_BACKEND"] = "tensorflow"
import numpy as np
import tensorflow as tf
from keras import Model
from keras import backend
from keras import initializers
from keras import layers
from keras import ops
from keras import optimizers
class MyDense(layers.Layer):
def __init__(self, units, name=None):
super().__init__(name=name)
self.units = units
def build(self, input_shape):
input_dim = input_shape[-1]
w_shape = (input_dim, self.units)
w_value = initializers.GlorotUniform()(w_shape)
self.w = backend.Variable(w_value, name="kernel")
b_shape = (self.units,)
b_value = initializers.Zeros()(b_shape)
self.b = backend.Variable(b_value, name="bias")
def call(self, inputs):
return ops.matmul(inputs, self.w) + self.b
class MyModel(Model):
def __init__(self, hidden_dim, output_dim):
super().__init__()
self.dense1 = MyDense(hidden_dim)
self.dense2 = MyDense(hidden_dim)
self.dense3 = MyDense(output_dim)
def call(self, x):
x = tf.nn.relu(self.dense1(x))
x = tf.nn.relu(self.dense2(x))
return self.dense3(x)
def Dataset():
for _ in range(20):
yield (
np.random.random((32, 128)).astype("float32"),
np.random.random((32, 4)).astype("float32"),
)
def loss_fn(y_true, y_pred):
return ops.sum((y_true - y_pred) ** 2)
model = MyModel(hidden_dim=256, output_dim=4)
optimizer = optimizers.SGD(learning_rate=0.001)
dataset = Dataset()
######### Custom TF workflow ###############
@tf.function(jit_compile=True)
def train_step(data):
x, y = data
with tf.GradientTape() as tape:
y_pred = model(x)
loss = loss_fn(y, y_pred)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
return loss
for data in dataset:
loss = train_step(data)
print("Loss:", float(loss))
| keras/examples/demo_custom_tf_workflow.py/0 | {
"file_path": "keras/examples/demo_custom_tf_workflow.py",
"repo_id": "keras",
"token_count": 895
} | 172 |
from keras.backend.common import global_state
class name_scope:
"""Creates a sub-namespace for variable paths.
Args:
name: Name of the current scope (string).
caller: Optional ID of a caller object (e.g. class instance).
deduplicate: If `True`, if `caller` was passed,
and the previous caller matches the current caller,
and the previous name matches the current name,
do not reenter a new namespace.
"""
def __init__(self, name, caller=None, deduplicate=True):
if not isinstance(name, str) or "/" in name:
raise ValueError(
"Argument `name` must be a string and "
"cannot contain character `/`. "
f"Received: name={name}"
)
self.name = name
self.caller = caller
self.deduplicate = deduplicate
self._pop_on_exit = False
def __enter__(self):
name_scope_stack = global_state.get_global_attribute(
"name_scope_stack", default=[], set_to_default=True
)
if self.deduplicate and name_scope_stack:
parent_caller = name_scope_stack[-1].caller
parent_name = name_scope_stack[-1].name
if (
self.caller is not None
and self.caller is parent_caller
and self.name == parent_name
):
return self
name_scope_stack.append(self)
self._pop_on_exit = True
return self
def __exit__(self, *args, **kwargs):
if self._pop_on_exit:
name_scope_stack = global_state.get_global_attribute(
"name_scope_stack"
)
name_scope_stack.pop()
def current_path():
name_scope_stack = global_state.get_global_attribute("name_scope_stack")
if name_scope_stack is None:
return ""
return "/".join(x.name for x in name_scope_stack)
| keras/keras/backend/common/name_scope.py/0 | {
"file_path": "keras/keras/backend/common/name_scope.py",
"repo_id": "keras",
"token_count": 902
} | 173 |
import jax
import jax.numpy as jnp
import numpy as np
from jax import lax
from jax import nn as jnn
from keras.backend import standardize_data_format
from keras.backend import standardize_dtype
from keras.backend.common.backend_utils import (
compute_conv_transpose_padding_args_for_jax,
)
from keras.backend.config import epsilon
from keras.backend.jax.core import cast
from keras.backend.jax.core import convert_to_tensor
def relu(x):
x = convert_to_tensor(x)
return jnn.relu(x)
def relu6(x):
x = convert_to_tensor(x)
return jnn.relu6(x)
def sigmoid(x):
x = convert_to_tensor(x)
return jnn.sigmoid(x)
def tanh(x):
x = convert_to_tensor(x)
return jnn.tanh(x)
def softplus(x):
x = convert_to_tensor(x)
return jnn.softplus(x)
def softsign(x):
x = convert_to_tensor(x)
return jnn.soft_sign(x)
def silu(x):
x = convert_to_tensor(x)
return jnn.silu(x)
def log_sigmoid(x):
x = convert_to_tensor(x)
return jnn.log_sigmoid(x)
def leaky_relu(x, negative_slope=0.2):
x = convert_to_tensor(x)
return jnn.leaky_relu(x, negative_slope=negative_slope)
def hard_sigmoid(x):
x = convert_to_tensor(x)
return jnn.hard_sigmoid(x)
def hard_silu(x):
x = convert_to_tensor(x)
return jnn.hard_silu(x)
def elu(x, alpha=1.0):
x = convert_to_tensor(x)
return jnn.elu(x, alpha=alpha)
def selu(x):
x = convert_to_tensor(x)
return jnn.selu(x)
def gelu(x, approximate=True):
x = convert_to_tensor(x)
return jnn.gelu(x, approximate)
def softmax(x, axis=-1):
x = convert_to_tensor(x)
return jnn.softmax(x, axis=axis)
def log_softmax(x, axis=-1):
x = convert_to_tensor(x)
return jnn.log_softmax(x, axis=axis)
def _convert_to_spatial_operand(
x,
num_spatial_dims,
data_format="channels_last",
include_batch_and_channels=True,
):
# Helper function that converts an operand to a spatial operand.
x = (x,) * num_spatial_dims if isinstance(x, int) else x
if not include_batch_and_channels:
return x
if data_format == "channels_last":
x = (1,) + x + (1,)
else:
x = (1,) + (1,) + x
return x
def _pool(
inputs,
initial_value,
reduce_fn,
pool_size,
strides=None,
padding="valid",
):
"""Helper function to define pooling functions.
Args:
inputs: input data of rank `N+2`.
initial_value: the initial value for the reduction.
reduce_fn: a reduce function of the form `(T, T) -> T`.
pool_size: a sequence of `N` integers, representing the window size to
reduce over.
strides: a sequence of `N` integers, representing the inter-window
strides (default: `(1, ..., 1)`).
padding: either the string `same` or `valid`.
Returns:
The output of the reduction for each window slice.
"""
if padding not in ("same", "valid"):
raise ValueError(
f"Invalid padding '{padding}', must be 'same' or 'valid'."
)
padding = padding.upper()
return lax.reduce_window(
inputs,
initial_value,
reduce_fn,
pool_size,
strides,
padding,
)
def max_pool(
inputs,
pool_size,
strides=None,
padding="valid",
data_format=None,
):
data_format = standardize_data_format(data_format)
num_spatial_dims = inputs.ndim - 2
pool_size = _convert_to_spatial_operand(
pool_size, num_spatial_dims, data_format
)
strides = pool_size if strides is None else strides
strides = _convert_to_spatial_operand(
strides, num_spatial_dims, data_format
)
return _pool(inputs, -jnp.inf, lax.max, pool_size, strides, padding)
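# Example (illustrative sketch, not part of the original module): 2-D max
# pooling over a channels-last batch.
#
#   x = jnp.ones((2, 8, 8, 3))
#   y = max_pool(x, pool_size=2, strides=2, padding="valid")
#   # y.shape == (2, 4, 4, 3)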
def average_pool(
inputs,
pool_size,
strides,
padding,
data_format=None,
):
data_format = standardize_data_format(data_format)
num_spatial_dims = inputs.ndim - 2
pool_size = _convert_to_spatial_operand(
pool_size, num_spatial_dims, data_format
)
strides = pool_size if strides is None else strides
strides = _convert_to_spatial_operand(
strides, num_spatial_dims, data_format
)
pooled = _pool(inputs, 0.0, lax.add, pool_size, strides, padding)
if padding == "valid":
# Avoid the extra reduce_window.
return pooled / np.prod(pool_size)
else:
# Count the number of valid entries at each input point, then use that
# for computing average. Assumes that any two arrays of same shape will
# be padded the same. Avoid broadcasting on axis where pooling is
# skipped.
shape = [
(a if b != 1 else 1) for (a, b) in zip(inputs.shape, pool_size)
]
window_counts = _pool(
jnp.ones(shape, inputs.dtype),
0.0,
lax.add,
pool_size,
strides,
padding,
)
return pooled / window_counts
def _convert_to_lax_conv_dimension_numbers(
num_spatial_dims,
data_format="channels_last",
transpose=False,
):
"""Create a `lax.ConvDimensionNumbers` for the given inputs."""
num_dims = num_spatial_dims + 2
if data_format == "channels_last":
spatial_dims = tuple(range(1, num_dims - 1))
inputs_dn = (0, num_dims - 1) + spatial_dims
else:
spatial_dims = tuple(range(2, num_dims))
inputs_dn = (0, 1) + spatial_dims
if transpose:
kernel_dn = (num_dims - 2, num_dims - 1) + tuple(range(num_dims - 2))
else:
kernel_dn = (num_dims - 1, num_dims - 2) + tuple(range(num_dims - 2))
return lax.ConvDimensionNumbers(
lhs_spec=inputs_dn, rhs_spec=kernel_dn, out_spec=inputs_dn
)
def conv(
inputs,
kernel,
strides=1,
padding="valid",
data_format=None,
dilation_rate=1,
):
data_format = standardize_data_format(data_format)
num_spatial_dims = inputs.ndim - 2
dimension_numbers = _convert_to_lax_conv_dimension_numbers(
num_spatial_dims,
data_format,
transpose=False,
)
strides = _convert_to_spatial_operand(
strides,
num_spatial_dims,
data_format,
include_batch_and_channels=False,
)
dilation_rate = _convert_to_spatial_operand(
dilation_rate,
num_spatial_dims,
data_format,
include_batch_and_channels=False,
)
if data_format == "channels_last":
channels = inputs.shape[-1]
else:
channels = inputs.shape[1]
kernel_in_channels = kernel.shape[-2]
if channels % kernel_in_channels > 0:
raise ValueError(
"The number of input channels must be evenly divisible by "
f"kernel's in_channels. Received input channels {channels} and "
f"kernel in_channels {kernel_in_channels}. "
)
feature_group_count = channels // kernel_in_channels
return jax.lax.conv_general_dilated(
convert_to_tensor(inputs),
convert_to_tensor(kernel),
strides,
padding,
rhs_dilation=dilation_rate,
dimension_numbers=dimension_numbers,
feature_group_count=feature_group_count,
)
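# Illustrative sketch (assumption): the `feature_group_count` logic above turns
# a kernel whose `in_channels` divides the input channels into a grouped
# convolution. With 8 input channels and a (3, 3, 4, 16) kernel,
# `feature_group_count` is 8 // 4 == 2, i.e. two groups of 4 channels each.
#
#   x = jnp.ones((1, 32, 32, 8))
#   w = jnp.ones((3, 3, 4, 16))                  # in_channels=4, out_channels=16
#   y = conv(x, w, strides=1, padding="same")    # -> (1, 32, 32, 16)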
def depthwise_conv(
inputs,
kernel,
strides=1,
padding="valid",
data_format=None,
dilation_rate=1,
):
data_format = standardize_data_format(data_format)
num_spatial_dims = inputs.ndim - 2
dimension_numbers = _convert_to_lax_conv_dimension_numbers(
num_spatial_dims,
data_format,
transpose=False,
)
strides = _convert_to_spatial_operand(
strides,
num_spatial_dims,
data_format,
include_batch_and_channels=False,
)
dilation_rate = _convert_to_spatial_operand(
dilation_rate,
num_spatial_dims,
data_format,
include_batch_and_channels=False,
)
feature_group_count = (
inputs.shape[-1] if data_format == "channels_last" else inputs.shape[1]
)
kernel = jnp.reshape(
kernel,
kernel.shape[:-2] + (1, feature_group_count * kernel.shape[-1]),
)
return jax.lax.conv_general_dilated(
inputs,
kernel,
strides,
padding,
rhs_dilation=dilation_rate,
dimension_numbers=dimension_numbers,
feature_group_count=feature_group_count,
)
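# Illustrative sketch (assumption): `depthwise_conv` reshapes a Keras-style
# depthwise kernel of shape (kh, kw, channels, depth_multiplier) into the
# (kh, kw, 1, channels * depth_multiplier) layout that
# `lax.conv_general_dilated` expects when `feature_group_count == channels`.
#
#   x = jnp.ones((1, 16, 16, 8))
#   w = jnp.ones((3, 3, 8, 2))                            # depth_multiplier = 2
#   y = depthwise_conv(x, w, strides=1, padding="same")   # -> (1, 16, 16, 16)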
def separable_conv(
inputs,
depthwise_kernel,
pointwise_kernel,
strides=1,
padding="valid",
data_format=None,
dilation_rate=1,
):
data_format = standardize_data_format(data_format)
depthwise_conv_output = depthwise_conv(
inputs,
depthwise_kernel,
strides,
padding,
data_format,
dilation_rate,
)
return conv(
depthwise_conv_output,
pointwise_kernel,
strides=1,
padding="valid",
data_format=data_format,
dilation_rate=dilation_rate,
)
def conv_transpose(
inputs,
kernel,
strides=1,
padding="valid",
output_padding=None,
data_format=None,
dilation_rate=1,
):
data_format = standardize_data_format(data_format)
num_spatial_dims = inputs.ndim - 2
padding_values = compute_conv_transpose_padding_args_for_jax(
input_shape=inputs.shape,
kernel_shape=kernel.shape,
strides=strides,
padding=padding,
output_padding=output_padding,
dilation_rate=dilation_rate,
)
dimension_numbers = _convert_to_lax_conv_dimension_numbers(
num_spatial_dims,
data_format,
transpose=False,
)
strides = _convert_to_spatial_operand(
strides,
num_spatial_dims,
data_format,
include_batch_and_channels=False,
)
dilation_rate = _convert_to_spatial_operand(
dilation_rate,
num_spatial_dims,
data_format,
include_batch_and_channels=False,
)
return jax.lax.conv_transpose(
inputs,
kernel,
strides,
padding=padding_values,
rhs_dilation=dilation_rate,
dimension_numbers=dimension_numbers,
transpose_kernel=True,
)
def one_hot(x, num_classes, axis=-1, dtype="float32"):
x = convert_to_tensor(x)
return jnn.one_hot(x, num_classes, axis=axis, dtype=dtype)
def multi_hot(x, num_classes, axis=-1, dtype="float32"):
x = convert_to_tensor(x)
reduction_axis = 1 if len(x.shape) > 1 else 0
outputs = jnp.max(
one_hot(cast(x, "int32"), num_classes, axis=axis, dtype=dtype),
axis=reduction_axis,
)
return outputs
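# Illustrative sketch (assumption): `multi_hot` collapses the one-hot encoding
# of each index with a max over the sample's index axis, producing a binary
# membership vector per sample.
#
#   multi_hot(jnp.array([[0, 2], [1, 1]]), num_classes=4)
#   # -> [[1., 0., 1., 0.],
#   #     [0., 1., 0., 0.]]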
def categorical_crossentropy(target, output, from_logits=False, axis=-1):
target = jnp.array(target)
output = jnp.array(output)
if target.shape != output.shape:
raise ValueError(
"Arguments `target` and `output` must have the same shape. "
"Received: "
f"target.shape={target.shape}, output.shape={output.shape}"
)
if len(target.shape) < 1:
raise ValueError(
"Arguments `target` and `output` must be at least rank 1. "
"Received: "
f"target.shape={target.shape}, output.shape={output.shape}"
)
if from_logits:
log_prob = jax.nn.log_softmax(output, axis=axis)
else:
output = output / jnp.sum(output, axis, keepdims=True)
output = jnp.clip(output, epsilon(), 1.0 - epsilon())
log_prob = jnp.log(output)
return -jnp.sum(target * log_prob, axis=axis)
def sparse_categorical_crossentropy(target, output, from_logits=False, axis=-1):
target = jnp.array(target, dtype="int32")
output = jnp.array(output)
if len(target.shape) == len(output.shape) and target.shape[-1] == 1:
target = jnp.squeeze(target, axis=-1)
if len(output.shape) < 1:
raise ValueError(
"Argument `output` must be at least rank 1. "
"Received: "
f"output.shape={output.shape}"
)
if target.shape != output.shape[:-1]:
raise ValueError(
"Arguments `target` and `output` must have the same shape "
"up until the last dimension: "
f"target.shape={target.shape}, output.shape={output.shape}"
)
if from_logits:
log_prob = jax.nn.log_softmax(output, axis=axis)
else:
output = output / jnp.sum(output, axis, keepdims=True)
output = jnp.clip(output, epsilon(), 1.0 - epsilon())
log_prob = jnp.log(output)
target = jnn.one_hot(target, output.shape[axis], axis=axis)
return -jnp.sum(target * log_prob, axis=axis)
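# Illustrative sketch (assumption): with integer labels and per-class logits,
# the loss above is the negative log-softmax probability of the true class,
# one scalar per sample.
#
#   target = jnp.array([0, 2])                        # shape (2,)
#   output = jnp.array([[2.0, 1.0, 0.1],              # shape (2, 3) logits
#                       [0.5, 0.5, 3.0]])
#   loss = sparse_categorical_crossentropy(target, output, from_logits=True)
#   # loss.shape == (2,); loss[i] == -log_softmax(output[i])[target[i]]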
def binary_crossentropy(target, output, from_logits=False):
target = jnp.array(target)
output = jnp.array(output)
if target.shape != output.shape:
raise ValueError(
"Arguments `target` and `output` must have the same shape. "
"Received: "
f"target.shape={target.shape}, output.shape={output.shape}"
)
if from_logits:
log_logits = jax.nn.log_sigmoid(output)
log_neg_logits = jax.nn.log_sigmoid(-output)
return -1.0 * target * log_logits - (1.0 - target) * log_neg_logits
output = jnp.clip(output, epsilon(), 1.0 - epsilon())
bce = target * jnp.log(output)
bce += (1.0 - target) * jnp.log(1.0 - output)
return -bce
def moments(x, axes, keepdims=False, synchronized=False):
if synchronized:
raise NotImplementedError(
"Argument synchronized=True is not supported with JAX."
)
# The dynamic range of float16 is too limited for statistics. As a
# workaround, we simply perform the operations on float32 and convert back
# to float16
need_cast = False
ori_dtype = standardize_dtype(x.dtype)
if ori_dtype in ("float16", "bfloat16"):
need_cast = True
x = cast(x, "float32")
mean = jnp.mean(x, axes, keepdims=True)
variance = jnp.var(x, axis=axes, keepdims=True)
if not keepdims:
mean = jnp.squeeze(mean, axes)
variance = jnp.squeeze(variance, axes)
if need_cast:
        # Avoid overflow and underflow when casting back from float32 to
        # float16 by clipping to the representable float16 range.
mean = jnp.clip(
mean, jnp.finfo(jnp.float16).min, jnp.finfo(jnp.float16).max
)
variance = jnp.clip(
variance, jnp.finfo(jnp.float16).min, jnp.finfo(jnp.float16).max
)
mean = cast(mean, ori_dtype)
variance = cast(variance, ori_dtype)
return mean, variance
def batch_normalization(
x, mean, variance, axis, offset=None, scale=None, epsilon=1e-3
):
shape = [1] * len(x.shape)
shape[axis] = mean.shape[0]
mean = jnp.reshape(mean, shape)
variance = jnp.reshape(variance, shape)
inv = jax.lax.rsqrt(variance + epsilon)
if scale is not None:
scale = jnp.reshape(scale, shape)
inv = inv * scale
res = -mean * inv
if offset is not None:
offset = jnp.reshape(offset, shape)
res = res + offset
return x * inv + res
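# Worked form of the algebra above (added for clarity, not library code): the
# function computes
#
#   y = (x - mean) / sqrt(variance + epsilon) * scale + offset
#
# but folds the constants first, with inv = scale / sqrt(variance + epsilon)
# and res = offset - mean * inv, so the per-element work reduces to the single
# fused multiply-add y = x * inv + res.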
def ctc_loss(
target,
output,
target_length,
output_length,
mask_index=0,
):
batch_size, _, _ = output.shape
batch_size, max_target_length = target.shape
output = output.transpose((1, 0, 2))
target = target.transpose((1, 0)).astype("int32")
logits = jnn.log_softmax(output)
mgrid_t, mgrid_b = jnp.meshgrid(
jnp.arange(max_target_length), jnp.arange(batch_size)
)
logprobs_emit = logits[mgrid_t, mgrid_b, target[:, :, None]]
logprobs_mask = logits[:, :, mask_index]
logit_paddings = jnp.array(
jnp.arange(max_target_length) < output_length[:, None],
dtype=jnp.float32,
)
repeat = jnp.array(target[1:] == target[:-1])
repeat = jnp.pad(repeat, ((0, 1), (0, 0))).transpose((1, 0))
_logepsilon = -100000.0
def _iterate(prev, x):
prev_mask, prev_emit = prev
logprob_mask, logprob_emit, pad = x
prev_mask_orig = prev_mask
prev_mask = prev_mask.at[:, 1:].set(
jnp.logaddexp(prev_mask[:, 1:], prev_emit + _logepsilon * repeat),
)
emit = jnp.logaddexp(
prev_mask[:, :-1] + logprob_emit, prev_emit + logprob_emit
)
mask = prev_mask + logprob_mask[:, None]
mask = mask.at[:, 1:].set(
jnp.logaddexp(
mask[:, 1:],
prev_emit + logprob_mask[:, None] + _logepsilon * (1 - repeat),
)
)
pad = pad[:, None]
emit = emit * pad + prev_emit * (1 - pad)
mask = mask * pad + prev_mask_orig * (1 - pad)
return (mask, emit), (mask, emit)
mask_init = jnp.full((batch_size, max_target_length + 1), _logepsilon)
mask_init = mask_init.at[:, 0].set(0.0)
emit_init = jnp.full((batch_size, max_target_length), _logepsilon)
_, (alphas_mask, alphas_emit) = lax.scan(
_iterate,
(mask_init, emit_init),
(logprobs_mask, logprobs_emit, logit_paddings.transpose()),
)
last_alpha_mask = (
alphas_mask[-1]
.at[:, 1:]
.set(jnp.logaddexp(alphas_mask[-1, :, 1:], alphas_emit[-1]))
)
return -last_alpha_mask[jnp.arange(batch_size), target_length]
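# Illustrative sketch of the expected shapes (assumption, not library code):
#
#   output: (batch, max_output_length, num_classes) unnormalized scores;
#           class `mask_index` (default 0) is treated as the CTC blank.
#   target: (batch, max_target_length) integer labels, right padded.
#   target_length / output_length: (batch,) true lengths of each sequence.
#
#   loss = ctc_loss(target, output, target_length, output_length)
#   # loss.shape == (batch,): the per-sample negative log-likelihood.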
# ---- End of file: keras/keras/backend/jax/nn.py ----
import numpy as np
import tree
from keras.utils.nest import pack_sequence_as
def rnn(
step_function,
inputs,
initial_states,
go_backwards=False,
mask=None,
constants=None,
unroll=False,
input_length=None,
time_major=False,
zero_output_for_mask=False,
return_all_outputs=True,
):
def swap_batch_timestep(input_t):
# Swap the batch and timestep dim for the incoming tensor.
axes = list(range(len(input_t.shape)))
axes[0], axes[1] = 1, 0
return np.transpose(input_t, axes)
if not time_major:
inputs = tree.map_structure(swap_batch_timestep, inputs)
flattened_inputs = tree.flatten(inputs)
time_steps = flattened_inputs[0].shape[0]
if mask is not None:
if mask.dtype != "bool":
mask = mask.astype("bool")
if len(mask.shape) == 2:
mask = np.expand_dims(mask, axis=-1)
if not time_major:
mask = swap_batch_timestep(mask)
if constants is None:
constants = []
def _expand_mask(mask_t, input_t, fixed_dim=1):
if tree.is_nested(mask_t):
raise ValueError(
f"mask_t is expected to be tensor, but got {mask_t}"
)
if tree.is_nested(input_t):
raise ValueError(
f"input_t is expected to be tensor, but got {input_t}"
)
rank_diff = len(input_t.shape) - len(mask_t.shape)
for _ in range(rank_diff):
mask_t = np.expand_dims(mask_t, -1)
multiples = [1] * fixed_dim + list(input_t.shape[fixed_dim:])
return np.tile(mask_t, multiples)
if unroll:
if not time_steps:
raise ValueError("Unrolling requires a fixed number of timesteps.")
states = tuple(initial_states)
successive_states = []
successive_outputs = []
        # Process the input tensors. Each input tensor needs to be split on
        # the time_step dim and reversed if go_backwards is True. In the case
        # of nested input, the input is flattened and then transformed
        # individually. The result is a tuple of lists, where each item in the
        # tuple is a list of tensors with shape (batch, feature).
def _process_single_input_t(input_t):
input_t = unstack(input_t) # unstack for time_step dim
if go_backwards:
input_t.reverse()
return input_t
if tree.is_nested(inputs):
processed_input = tree.map_structure(
_process_single_input_t, inputs
)
else:
processed_input = (_process_single_input_t(inputs),)
def _get_input_tensor(time):
inp = [t_[time] for t_ in processed_input]
return pack_sequence_as(inputs, inp)
if mask is not None:
mask_list = unstack(mask)
if go_backwards:
mask_list.reverse()
for i in range(time_steps):
inp = _get_input_tensor(i)
mask_t = mask_list[i]
output, new_states = step_function(
inp, tuple(states) + tuple(constants)
)
tiled_mask_t = _expand_mask(mask_t, output)
if not successive_outputs:
prev_output = np.zeros_like(output)
else:
prev_output = successive_outputs[-1]
output = np.where(tiled_mask_t, output, prev_output)
flat_states = tree.flatten(states)
flat_new_states = tree.flatten(new_states)
tiled_mask_t = tuple(
_expand_mask(mask_t, s) for s in flat_states
)
flat_final_states = tuple(
np.where(m, s, ps)
for m, s, ps in zip(
tiled_mask_t, flat_new_states, flat_states
)
)
states = pack_sequence_as(states, flat_final_states)
if return_all_outputs:
successive_outputs.append(output)
successive_states.append(states)
else:
successive_outputs = [output]
successive_states = [states]
last_output = successive_outputs[-1]
new_states = successive_states[-1]
outputs = np.stack(successive_outputs)
else: # mask is None
for i in range(time_steps):
inp = _get_input_tensor(i)
output, states = step_function(
inp, tuple(states) + tuple(constants)
)
if return_all_outputs:
successive_outputs.append(output)
successive_states.append(states)
else:
successive_outputs = [output]
successive_states = [states]
last_output = successive_outputs[-1]
new_states = successive_states[-1]
outputs = np.stack(successive_outputs)
else: # Unroll == False
if mask is not None:
def _step(states, current_input):
current_input, current_mask = current_input
is_masked = np.all(
np.logical_not(current_mask), axis=-1, keepdims=True
)
output_t, new_states = step_function(current_input, states)
if zero_output_for_mask:
masked_outs = np.where(
is_masked, np.zeros_like(output_t), output_t
)
else:
# Assume the first state is the previous output.
output_tm1 = states[0]
masked_outs = np.where(is_masked, output_tm1, output_t)
new_states = [
np.where(is_masked, s, ns)
for s, ns in zip(states, new_states)
]
return (new_states, masked_outs)
scan_xs = (inputs, mask)
else:
def _step(states, current_input):
output_t, new_states = step_function(current_input, states)
return new_states, output_t
scan_xs = inputs
new_states, outputs = numpy_scan(
f=_step,
init=initial_states,
xs=scan_xs,
reverse=go_backwards,
mask=mask,
)
if go_backwards:
outputs = np.flip(outputs, axis=0)
last_output = outputs[-1]
if not time_major:
outputs = tree.map_structure(swap_batch_timestep, outputs)
return last_output, outputs, new_states
def lstm(*args, **kwargs):
raise NotImplementedError
def gru(*args, **kwargs):
raise NotImplementedError
def unstack(x, axis=0):
return [x.take(i, axis) for i in range(x.shape[axis])]
def numpy_scan(f, init, xs, reverse=False, mask=None):
states = init
outputs = []
if mask is not None:
x, mask = xs
x = np.flip(x, axis=0) if reverse else x
mask = np.flip(mask, axis=0) if reverse else mask
for each_x, each_mask in zip(x, mask):
states, output = f(states, (each_x, each_mask))
outputs.append(output)
else:
xs = np.flip(xs, axis=0) if reverse else xs
for x in xs:
states, output = f(states, x)
outputs.append(output)
outputs = np.array(outputs)
if reverse:
outputs = np.flip(outputs, axis=0)
return states, outputs
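# Illustrative sketch (assumption): `numpy_scan` mirrors `jax.lax.scan` with a
# plain Python loop. For instance, a running sum over the leading axis:
#
#   def step(carry, x):
#       new_carry = carry + x
#       return new_carry, new_carry
#
#   final, ys = numpy_scan(step, init=np.array(0.0), xs=np.array([1.0, 2.0, 3.0]))
#   # final == 6.0, ys == [1.0, 3.0, 6.0]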
def cudnn_ok(*args, **kwargs):
return False
# ---- End of file: keras/keras/backend/numpy/rnn.py ----
import tensorflow as tf
import tree
from keras.utils.nest import pack_sequence_as
def rnn(
step_function,
inputs,
initial_states,
go_backwards=False,
mask=None,
constants=None,
unroll=False,
input_length=None,
time_major=False,
zero_output_for_mask=False,
return_all_outputs=True,
):
"""Iterates over the time dimension of a tensor.
Args:
step_function: RNN step function.
Args;
`input`; Tensor with shape `(samples, ...)` (no time dimension),
representing input for the batch of samples at a certain
time step.
`states`; List of tensors.
Returns;
`output`; Tensor with shape `(samples, output_dim)`
(no time dimension).
`new_states`; List of tensors, same length and shapes
as 'states'. The first state in the list must be the
output tensor at the previous timestep.
inputs: Tensor of temporal data of shape `(samples, time, ...)`
(at least 3D), or nested tensors, and each of which has shape
`(samples, time, ...)`.
initial_states: Tensor with shape `(samples, state_size)`
(no time dimension), containing the initial values for the states
used in the step function. In the case that state_size is in a
nested shape, the shape of initial_states will also follow the
nested structure.
go_backwards: Boolean. If `True`, do the iteration over the time
dimension in reverse order and return the reversed sequence.
mask: Binary tensor with shape `(samples, time, 1)`,
with a zero for every element that is masked.
constants: List of constant values passed at each step.
unroll: Whether to unroll the RNN or to use a symbolic `while_loop`.
input_length: An integer or a 1-D Tensor, depending on whether
the time dimension is fixed-length or not. In case of variable
length input, it is used for masking in case there's no mask
specified.
time_major: Boolean. If `True`, the inputs and outputs will be in shape
`(timesteps, batch, ...)`, whereas in the False case, it will be
`(batch, timesteps, ...)`. Using `time_major = True` is a bit more
efficient because it avoids transposes at the beginning and end of
the RNN calculation. However, most TensorFlow data is batch-major,
so by default this function accepts input and emits output in
batch-major form.
zero_output_for_mask: Boolean. If `True`, the output for masked timestep
will be zeros, whereas in the `False` case, output from previous
timestep is returned.
return_all_outputs: Boolean. If `True`, return the recurrent outputs for
all timesteps in the sequence. If `False`, only return the output
for the last timestep (which consumes less memory).
Returns:
A tuple, `(last_output, outputs, new_states)`.
- `last_output`: the latest output of the rnn,
with shape `(samples, ...)`.
- `outputs`:
- If `return_all_outputs=True`: a tensor with shape
`(samples, time, ...)` where each entry `outputs[s, t]` is the
output of the step function at time `t` for sample `s`
- Else, a tensor equal to `last_output` with shape
`(samples, 1, ...)`
- `new_states`: list of tensors, latest states returned by
the step function, of shape `(samples, ...)`.
"""
input_length = input_length or inputs.shape[1]
def swap_batch_timestep(input_t):
# Swap the batch and timestep dim for the incoming tensor.
axes = list(range(len(input_t.shape)))
axes[0], axes[1] = 1, 0
return tf.transpose(input_t, axes)
if not time_major:
inputs = tree.map_structure(swap_batch_timestep, inputs)
flattened_inputs = tree.flatten(inputs)
time_steps = flattened_inputs[0].shape[0]
time_steps_t = (
tf.shape(flattened_inputs[0])[0] if time_steps is None else time_steps
)
for input_ in flattened_inputs:
input_.shape.with_rank_at_least(3)
if mask is not None:
if mask.dtype != tf.bool:
mask = tf.cast(mask, tf.bool)
if len(mask.shape) == 2:
mask = tf.expand_dims(mask, axis=-1)
if not time_major:
mask = swap_batch_timestep(mask)
if constants is None:
constants = []
# tf.where needs its condition tensor to be the same shape as its two
# result tensors, but in our case the condition (mask) tensor is
# (nsamples, 1), and inputs are (nsamples, ndimensions) or even more.
# So we need to broadcast the mask to match the shape of inputs.
# That's what the tile call does, it just repeats the mask along its
# second dimension n times.
def _expand_mask(mask_t, input_t, fixed_dim=1):
if tree.is_nested(mask_t):
raise ValueError(
f"mask_t is expected to be tensor, but got {mask_t}"
)
if tree.is_nested(input_t):
raise ValueError(
f"input_t is expected to be tensor, but got {input_t}"
)
rank_diff = len(input_t.shape) - len(mask_t.shape)
for _ in range(rank_diff):
mask_t = tf.expand_dims(mask_t, -1)
multiples = [1] * fixed_dim + input_t.shape.as_list()[fixed_dim:]
return tf.tile(mask_t, multiples)
if unroll:
if not time_steps:
raise ValueError("Unrolling requires a fixed number of timesteps.")
states = tuple(initial_states)
successive_states = []
successive_outputs = []
        # Process the input tensors. Each input tensor needs to be split on
        # the time_step dim and reversed if go_backwards is True. In the case
        # of nested input, the input is flattened and then transformed
        # individually. The result is a tuple of lists, where each item in the
        # tuple is a list of tensors with shape (batch, feature).
def _process_single_input_t(input_t):
input_t = tf.unstack(input_t) # unstack for time_step dim
if go_backwards:
input_t.reverse()
return input_t
if tree.is_nested(inputs):
processed_input = tree.map_structure(
_process_single_input_t, inputs
)
else:
processed_input = (_process_single_input_t(inputs),)
def _get_input_tensor(time):
inp = [t_[time] for t_ in processed_input]
return pack_sequence_as(inputs, inp)
if mask is not None:
mask_list = tf.unstack(mask)
if go_backwards:
mask_list.reverse()
for i in range(time_steps):
inp = _get_input_tensor(i)
mask_t = mask_list[i]
output, new_states = step_function(
inp, tuple(states) + tuple(constants)
)
tiled_mask_t = _expand_mask(mask_t, output)
if not successive_outputs:
prev_output = tf.zeros_like(output)
else:
prev_output = successive_outputs[-1]
output = tf.where(tiled_mask_t, output, prev_output)
flat_states = tree.flatten(states)
flat_new_states = tree.flatten(new_states)
tiled_mask_t = tuple(
_expand_mask(mask_t, s) for s in flat_states
)
flat_final_states = tuple(
tf.where(m, s, ps)
for m, s, ps in zip(
tiled_mask_t, flat_new_states, flat_states
)
)
states = pack_sequence_as(states, flat_final_states)
if return_all_outputs:
successive_outputs.append(output)
successive_states.append(states)
else:
successive_outputs = [output]
successive_states = [states]
last_output = successive_outputs[-1]
new_states = successive_states[-1]
outputs = tf.stack(successive_outputs)
if zero_output_for_mask:
last_output = tf.where(
_expand_mask(mask_list[-1], last_output),
last_output,
tf.zeros_like(last_output),
)
outputs = tf.where(
_expand_mask(mask, outputs, fixed_dim=2),
outputs,
tf.zeros_like(outputs),
)
else: # mask is None
for i in range(time_steps):
inp = _get_input_tensor(i)
output, states = step_function(
inp, tuple(states) + tuple(constants)
)
if return_all_outputs:
successive_outputs.append(output)
successive_states.append(states)
else:
successive_outputs = [output]
successive_states = [states]
last_output = successive_outputs[-1]
new_states = successive_states[-1]
outputs = tf.stack(successive_outputs)
else: # Unroll == False
states = tuple(initial_states)
# Create input tensor array, if the inputs is nested tensors, then it
# will be flattened first, and tensor array will be created one per
# flattened tensor.
input_ta = tuple(
tf.TensorArray(
dtype=inp.dtype,
size=time_steps_t,
tensor_array_name=f"input_ta_{i}",
)
for i, inp in enumerate(flattened_inputs)
)
input_ta = tuple(
(
ta.unstack(input_)
if not go_backwards
else ta.unstack(tf.reverse(input_, [0]))
)
for ta, input_ in zip(input_ta, flattened_inputs)
)
        # Get the time(0) input and compute the output for that; the output is
        # used to determine the dtype of the output tensor array. Don't read
        # from input_ta because TensorArray's clear_after_read defaults to True.
input_time_zero = pack_sequence_as(
inputs, [inp[0] for inp in flattened_inputs]
)
# output_time_zero is used to determine the cell output shape and its
# dtype. the value is discarded.
output_time_zero, _ = step_function(
input_time_zero, tuple(initial_states) + tuple(constants)
)
output_ta_size = time_steps_t if return_all_outputs else 1
output_ta = tuple(
tf.TensorArray(
dtype=out.dtype,
size=output_ta_size,
element_shape=out.shape,
tensor_array_name=f"output_ta_{i}",
)
for i, out in enumerate(tree.flatten(output_time_zero))
)
time = tf.constant(0, dtype="int32", name="time")
if input_length is None:
max_iterations = time_steps_t
else:
max_iterations = tf.reduce_max(input_length)
while_loop_kwargs = {
"cond": lambda time, *_: time < time_steps_t,
"maximum_iterations": max_iterations,
"parallel_iterations": 32,
"swap_memory": True,
}
if mask is not None:
if go_backwards:
mask = tf.reverse(mask, [0])
mask_ta = tf.TensorArray(
dtype=tf.bool, size=time_steps_t, tensor_array_name="mask_ta"
)
mask_ta = mask_ta.unstack(mask)
def masking_fn(time):
return mask_ta.read(time)
def compute_masked_output(mask_t, flat_out, flat_mask):
tiled_mask_t = tuple(
_expand_mask(mask_t, o, fixed_dim=len(mask_t.shape))
for o in flat_out
)
return tuple(
tf.where(m, o, fm)
for m, o, fm in zip(tiled_mask_t, flat_out, flat_mask)
)
elif isinstance(input_length, tf.Tensor):
if go_backwards:
max_len = tf.reduce_max(input_length, axis=0)
rev_input_length = tf.subtract(max_len - 1, input_length)
def masking_fn(time):
return tf.less(rev_input_length, time)
else:
def masking_fn(time):
return tf.greater(input_length, time)
def compute_masked_output(mask_t, flat_out, flat_mask):
return tuple(
tf.where(mask_t, o, zo)
for (o, zo) in zip(flat_out, flat_mask)
)
else:
masking_fn = None
if masking_fn is not None:
            # The mask for the output at step T is based on the output at
            # step T - 1. In the case T = 0, a zero-filled tensor is used.
flat_zero_output = tuple(
tf.zeros_like(o) for o in tree.flatten(output_time_zero)
)
def _step(time, output_ta_t, prev_output, *states):
"""RNN step function.
Args:
time: Current timestep value.
output_ta_t: TensorArray.
prev_output: tuple of outputs from time - 1.
*states: List of states.
Returns:
Tuple: `(time + 1, output_ta_t, output) + tuple(new_states)`
"""
current_input = tuple(ta.read(time) for ta in input_ta)
# maybe set shape.
current_input = pack_sequence_as(inputs, current_input)
mask_t = masking_fn(time)
output, new_states = step_function(
current_input, tuple(states) + tuple(constants)
)
# mask output
flat_output = tree.flatten(output)
flat_mask_output = (
flat_zero_output
if zero_output_for_mask
else tree.flatten(prev_output)
)
flat_new_output = compute_masked_output(
mask_t, flat_output, flat_mask_output
)
# mask states
flat_state = tree.flatten(states)
flat_new_state = tree.flatten(new_states)
flat_final_state = compute_masked_output(
mask_t, flat_new_state, flat_state
)
new_states = pack_sequence_as(new_states, flat_final_state)
ta_index_to_write = time if return_all_outputs else 0
output_ta_t = tuple(
ta.write(ta_index_to_write, out)
for ta, out in zip(output_ta_t, flat_new_output)
)
return (time + 1, output_ta_t, tuple(flat_new_output)) + tuple(
new_states
)
final_outputs = tf.while_loop(
body=_step,
loop_vars=(time, output_ta, flat_zero_output) + states,
**while_loop_kwargs,
)
# Skip final_outputs[2] which is the output for final timestep.
new_states = final_outputs[3:]
else:
def _step(time, output_ta_t, *states):
"""RNN step function.
Args:
time: Current timestep value.
output_ta_t: TensorArray.
*states: List of states.
Returns:
Tuple: `(time + 1,output_ta_t) + tuple(new_states)`
"""
current_input = tuple(ta.read(time) for ta in input_ta)
current_input = pack_sequence_as(inputs, current_input)
output, new_states = step_function(
current_input, tuple(states) + tuple(constants)
)
flat_new_state = tree.flatten(new_states)
flat_output = tree.flatten(output)
ta_index_to_write = time if return_all_outputs else 0
output_ta_t = tuple(
ta.write(ta_index_to_write, out)
for ta, out in zip(output_ta_t, flat_output)
)
new_states = pack_sequence_as(initial_states, flat_new_state)
return (time + 1, output_ta_t) + tuple(new_states)
final_outputs = tf.while_loop(
body=_step,
loop_vars=(time, output_ta) + states,
**while_loop_kwargs,
)
new_states = final_outputs[2:]
output_ta = final_outputs[1]
outputs = tuple(o.stack() for o in output_ta)
last_output = tuple(o[-1] for o in outputs)
outputs = pack_sequence_as(output_time_zero, outputs)
last_output = pack_sequence_as(output_time_zero, last_output)
if not time_major:
outputs = tree.map_structure(swap_batch_timestep, outputs)
return last_output, outputs, new_states
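# Illustrative sketch (assumption, not part of the library): driving `rnn` with
# a trivial step function that accumulates its input into a single state.
#
#   def step(inputs_t, states):
#       new_state = states[0] + inputs_t
#       return new_state, [new_state]
#
#   x = tf.ones((4, 5, 3))                # (batch, time, features)
#   init = [tf.zeros((4, 3))]
#   last, seq, states = rnn(step, x, init)
#   # last.shape == (4, 3); seq.shape == (4, 5, 3); states[0] equals last.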
def gru(
inputs,
initial_state,
mask,
kernel,
recurrent_kernel,
bias,
activation,
recurrent_activation,
return_sequences=False,
go_backwards=False,
unroll=False,
time_major=False,
reset_after=True,
):
cudnn_supported = cudnn_ok(
activation,
recurrent_activation,
unroll,
use_bias=bias is not None,
reset_after=reset_after,
)
if not cudnn_supported or mask is not None:
raise NotImplementedError
from keras.backend.tensorflow import Variable
if isinstance(kernel, Variable):
kernel = kernel.value
if isinstance(recurrent_kernel, Variable):
recurrent_kernel = recurrent_kernel.value
if isinstance(bias, Variable):
bias = bias.value
try:
return _cudnn_gru(
inputs,
initial_state,
kernel,
recurrent_kernel,
bias,
mask,
time_major,
go_backwards,
return_sequences,
)
except tf.errors.InvalidArgumentError:
# cuDNN op not found.
raise NotImplementedError
except tf.errors.NotFoundError:
# alternative error: device not found for op
raise NotImplementedError
def _do_gru_arguments_support_cudnn(
activation,
recurrent_activation,
unroll,
use_bias,
reset_after,
):
from keras import activations
from keras import ops
return (
activation in (activations.tanh, tf.tanh, ops.tanh)
and recurrent_activation
in (activations.sigmoid, tf.sigmoid, ops.sigmoid)
and not unroll
and use_bias
and reset_after
)
def _do_lstm_arguments_support_cudnn(
activation,
recurrent_activation,
unroll,
use_bias,
):
from keras import activations
from keras import ops
return (
activation in (activations.tanh, tf.tanh, ops.tanh)
and recurrent_activation
in (activations.sigmoid, tf.sigmoid, ops.sigmoid)
and not unroll
and use_bias
)
def _is_sequence_right_padded(mask):
"""Check the mask tensor and see if it right padded.
For cuDNN kernel, it uses the sequence length param to skip the tailing
timestep. If the data is left padded, or not a strict right padding (has
masked value in the middle of the sequence), then cuDNN kernel won't be work
properly in those cases.
Left padded data: [[False, False, True, True, True]].
Right padded data: [[True, True, True, False, False]].
Mixture of mask/unmasked data: [[True, False, True, False, False]].
Note that for the mixed data example above, the actually data RNN should see
are those 2 Trues (index 0 and 2), the index 1 False should be ignored and
not pollute the internal states.
Args:
mask: the Boolean tensor with shape [batch, timestep]
Returns:
boolean scalar tensor, whether the mask is strictly right padded.
"""
max_seq_length = tf.shape(mask)[1]
count_of_true = tf.reduce_sum(tf.cast(mask, tf.int32), axis=1)
right_padded_mask = tf.sequence_mask(count_of_true, maxlen=max_seq_length)
return tf.reduce_all(
tf.equal(
tf.cast(mask, dtype="bool"),
tf.cast(right_padded_mask, dtype="bool"),
)
)
def _has_fully_masked_sequence(mask):
    # The cuDNN kernel will error out if the input sequence contains any
    # fully masked data. We work around this issue by rerouting the
    # computation to the standard kernel, until the issue on the cuDNN side
    # has been fixed. A fully masked sequence contains all Falses. To make it
    # easy to check, we invert the booleans and check if any sequence is all
    # True.
return tf.reduce_any(
tf.reduce_all(tf.logical_not(tf.cast(mask, dtype="bool")), axis=1)
)
def _standardize_cudnn_weights(weights, biases, shape, transpose_weights=False):
"""Utility function convert variable to cuDNN compatible parameter.
Note that Keras weights for kernels are different from the cuDNN format.
Eg.:
```
Keras cuDNN
[[0, 1, 2], <---> [[0, 2, 4],
[3, 4, 5]] [1, 3, 5]]
```
If the input weights need to be in a unified format, then set
`transpose_weights=True` to convert the weights.
Args:
weights: list of weights for the kernels and recurrent kernels.
biases: list of biases for individual gate.
        shape: the shape for the converted variables that will be fed to cuDNN.
        transpose_weights: boolean, whether to transpose the weights.
    Returns:
        The converted weights that can be fed to cuDNN ops as params.
"""
def convert(w):
return tf.transpose(w) if transpose_weights else w
weights = [tf.reshape(convert(x), shape) for x in weights]
biases = [tf.reshape(x, shape) for x in biases]
return tf.concat(weights + biases, axis=0)
def _compute_sequence_length_from_mask(mask, time_major):
"""Calculate the sequence length tensor (1-D) based on the masking tensor.
The masking tensor is a 2D boolean tensor with shape [batch, timestep]. For
any timestep that should be masked, the corresponding field will be False.
Consider the following example:
a = [[True, True, False, False],
[True, True, True, False]]
It is a (2, 4) tensor, and the corresponding sequence length result should
    be a 1D tensor with value [2, 3]. Note that the masking tensor must be
    right padded, which can be checked by, e.g., `_is_sequence_right_padded()`.
Args:
mask: Boolean tensor with shape [batch, timestep] or [timestep, batch] if
time_major=True.
time_major: Boolean, which indicates whether the mask is time major or
batch major.
Returns:
sequence_length: 1D int32 tensor.
"""
timestep_index = 0 if time_major else 1
return tf.reduce_sum(tf.cast(mask, tf.int32), axis=timestep_index)
def _is_gpu_available():
return bool(tf.config.list_logical_devices("GPU"))
def _cudnn_gru(
inputs,
initial_state,
kernel,
recurrent_kernel,
bias,
mask,
time_major,
go_backwards,
return_sequences,
):
"""GRU with cuDNN implementation which is only available for GPU."""
if mask is not None:
sequence_lengths = _compute_sequence_length_from_mask(mask, time_major)
else:
sequence_lengths = None
if not time_major and sequence_lengths is None:
inputs = tf.transpose(inputs, perm=(1, 0, 2))
seq_axis, batch_axis = (0, 1)
else:
seq_axis, batch_axis = (0, 1) if time_major else (1, 0)
# For init_h, cuDNN expects one more dim of num_layers before or after batch
# dim for time major or batch major inputs respectively
init_h = tf.expand_dims(initial_state, axis=seq_axis)
weights = tf.split(kernel, 3, axis=1)
weights += tf.split(recurrent_kernel, 3, axis=1)
# Note that the bias was initialized as shape (2, 3 * units), flatten it to
# (6 * units)
bias = tf.split(tf.reshape(bias, [-1]), 6)
if tf.sysconfig.get_build_info()["is_cuda_build"]:
# Note that the gate order for cuDNN is different from the canonical
# format. canonical format is [z, r, h], whereas cuDNN is [r, z, h].
# The swap need to be done for kernel, recurrent_kernel, input_bias,
# recurrent_bias.
# z is update gate weights.
# r is reset gate weights.
# h is output gate weights.
weights[0], weights[1] = weights[1], weights[0]
weights[3], weights[4] = weights[4], weights[3]
bias[0], bias[1] = bias[1], bias[0]
bias[3], bias[4] = bias[4], bias[3]
params = _standardize_cudnn_weights(
weights=weights,
biases=bias,
shape=tf.constant([-1]),
transpose_weights=True,
)
if sequence_lengths is not None:
if go_backwards:
# Three reversals are required. E.g.,
            # normal input = [1, 2, 3, 0, 0] # where the 0s need to be masked
# reversed_input_to_cudnn = [3, 2, 1, 0, 0]
# output_from_cudnn = [6, 5, 4, 0, 0]
# expected_output = [0, 0, 6, 5 ,4]
inputs = tf.reverse_sequence(
inputs,
sequence_lengths,
seq_axis=seq_axis,
batch_axis=batch_axis,
)
outputs, h, _, _, _ = tf.raw_ops.CudnnRNNV3(
input=inputs,
input_h=init_h,
input_c=0,
params=params,
is_training=True,
rnn_mode="gru",
sequence_lengths=sequence_lengths,
time_major=time_major,
)
if go_backwards:
outputs = tf.reverse_sequence(
outputs,
sequence_lengths,
seq_axis=seq_axis,
batch_axis=batch_axis,
)
outputs = tf.reverse(outputs, axis=[seq_axis])
else:
if go_backwards:
            # Reverse axis 0 since the input has already been converted to time major.
inputs = tf.reverse(inputs, axis=[0])
outputs, h, _, _ = tf.raw_ops.CudnnRNN(
input=inputs,
input_h=init_h,
input_c=0,
params=params,
is_training=True,
rnn_mode="gru",
)
last_output = outputs[-1]
if not time_major and sequence_lengths is None and return_sequences:
outputs = tf.transpose(outputs, perm=[1, 0, 2])
state = tf.squeeze(h, axis=seq_axis)
    # In the case of variable length input, the cuDNN kernel fills zeros into
    # the output, whereas the default Keras behavior is to carry over the
    # output from t-1, so that in the return_sequences=False case the user
    # gets the final effective output instead of just 0s at the last timestep.
    # To mimic the default Keras behavior, we copy the final h state as the
    # last_output, since it is numerically the same as the output.
if sequence_lengths is not None:
last_output = state
# Match CPU return format
if not return_sequences:
outputs = tf.expand_dims(last_output, axis=0 if time_major else 1)
return (
last_output,
outputs,
state,
)
def cudnn_ok(
activation,
recurrent_activation,
unroll,
use_bias,
reset_after=None,
):
if reset_after is None:
args_supported = _do_lstm_arguments_support_cudnn(
activation=activation,
recurrent_activation=recurrent_activation,
unroll=unroll,
use_bias=use_bias,
)
else:
args_supported = _do_gru_arguments_support_cudnn(
activation=activation,
recurrent_activation=recurrent_activation,
unroll=unroll,
use_bias=use_bias,
reset_after=reset_after,
)
return args_supported and _is_gpu_available()
def lstm(
inputs,
initial_state_h,
initial_state_c,
mask,
kernel,
recurrent_kernel,
bias,
activation,
recurrent_activation,
return_sequences=False,
go_backwards=False,
unroll=False,
time_major=False,
):
cudnn_supported = cudnn_ok(
activation, recurrent_activation, unroll, use_bias=bias is not None
)
if not cudnn_supported or mask is not None:
raise NotImplementedError
from keras.backend.tensorflow import Variable
if isinstance(kernel, Variable):
kernel = kernel.value
if isinstance(recurrent_kernel, Variable):
recurrent_kernel = recurrent_kernel.value
if isinstance(bias, Variable):
bias = bias.value
try:
return _cudnn_lstm(
inputs,
initial_state_h,
initial_state_c,
kernel,
recurrent_kernel,
bias,
mask,
time_major,
go_backwards,
return_sequences,
)
except tf.errors.InvalidArgumentError:
# cuDNN op not found.
raise NotImplementedError
except tf.errors.NotFoundError:
# alternative error: device not found for op
raise NotImplementedError
def _cudnn_lstm(
inputs,
initial_state_h,
initial_state_c,
kernel,
recurrent_kernel,
bias,
mask,
time_major,
go_backwards,
return_sequences,
):
if mask is not None:
sequence_lengths = _compute_sequence_length_from_mask(mask, time_major)
else:
sequence_lengths = None
if not time_major and sequence_lengths is None:
inputs = tf.transpose(inputs, perm=(1, 0, 2))
seq_axis, batch_axis = (0, 1)
else:
seq_axis, batch_axis = (0, 1) if time_major else (1, 0)
# For init_h and init_c, cuDNN expects one more dim of num_layers before or
# after batch dim for time major or batch major inputs respectively
init_h = tf.expand_dims(initial_state_h, axis=seq_axis)
init_c = tf.expand_dims(initial_state_c, axis=seq_axis)
weights = tf.split(kernel, 4, axis=1)
weights += tf.split(recurrent_kernel, 4, axis=1)
    # cuDNN has an extra set of biases for the inputs; we disable them (setting
    # them to 0) so that mathematically it is the same as the canonical LSTM
    # implementation.
full_bias = tf.concat((tf.zeros_like(bias), bias), 0)
if tf.sysconfig.get_build_info()["is_rocm_build"]:
# ROCm MIOpen's weight sequence for LSTM is different from both
# canonical and Cudnn format
# MIOpen: [i, f, o, c] Cudnn/Canonical: [i, f, c, o]
# i is input gate weights.
# f is forget gate weights.
# o is output gate weights.
# c is cell gate weights.
weights = [weights[x] for x in (0, 1, 3, 2, 4, 5, 7, 6)]
# full_bias is a tensor of shape (8*n,)
full_bias = tf.split(full_bias, 8, axis=0)
full_bias = [full_bias[x] for x in (0, 1, 3, 2, 4, 5, 7, 6)]
params = _standardize_cudnn_weights(
weights=weights,
biases=tf.split(full_bias, 8),
shape=tf.constant([-1]),
transpose_weights=True,
)
if sequence_lengths is not None:
if go_backwards:
# Three reversals are required. E.g.,
            # normal input = [1, 2, 3, 0, 0] # where the 0s need to be masked
# reversed_input_to_cudnn = [3, 2, 1, 0, 0]
# output_from_cudnn = [6, 5, 4, 0, 0]
# expected_output = [0, 0, 6, 5 ,4]
inputs = tf.reverse_sequence(
inputs,
sequence_lengths,
seq_axis=seq_axis,
batch_axis=batch_axis,
)
outputs, h, c, _, _ = tf.raw_ops.CudnnRNNV3(
input=inputs,
input_h=init_h,
input_c=init_c,
params=params,
is_training=True,
rnn_mode="lstm",
sequence_lengths=sequence_lengths,
time_major=time_major,
)
if go_backwards:
outputs = tf.reverse_sequence(
outputs,
sequence_lengths,
seq_axis=seq_axis,
batch_axis=batch_axis,
)
outputs = tf.reverse(outputs, axis=[seq_axis])
else:
# # Fill the array with shape [batch] with value of max timesteps.
# sequence_length = array_ops.fill([array_ops.shape(inputs)[1]],
# array_ops.shape(inputs)[0])
if go_backwards:
            # Reverse axis 0 since the input has already been converted to time major.
inputs = tf.reverse(inputs, axis=[0])
outputs, h, c, _ = tf.raw_ops.CudnnRNN(
input=inputs,
input_h=init_h,
input_c=init_c,
params=params,
is_training=True,
rnn_mode="lstm",
)
last_output = outputs[-1]
if not time_major and sequence_lengths is None and return_sequences:
outputs = tf.transpose(outputs, perm=[1, 0, 2])
h = tf.squeeze(h, axis=seq_axis)
c = tf.squeeze(c, axis=seq_axis)
    # In the case of variable length input, the cuDNN kernel fills zeros into
    # the output, whereas the default Keras behavior is to carry over the
    # output from t-1, so that in the return_sequences=False case the user
    # gets the final effective output instead of just 0s at the last timestep.
    # To mimic the default Keras behavior, we copy the final h state as the
    # last_output, since it is numerically the same as the output.
if sequence_lengths is not None:
last_output = h
# Match CPU return format
if not return_sequences:
outputs = tf.expand_dims(last_output, axis=0 if time_major else 1)
return (last_output, outputs, [h, c])
# ---- End of file: keras/keras/backend/tensorflow/rnn.py ----
from keras.backend.torch.optimizers.torch_optimizer import TorchOptimizer
# ---- End of file: keras/keras/backend/torch/optimizers/__init__.py ----
from keras.api_export import keras_export
from keras.callbacks.callback import Callback
from keras.utils import file_utils
@keras_export("keras.callbacks.BackupAndRestore")
class BackupAndRestore(Callback):
"""Callback to back up and restore the training state.
`BackupAndRestore` callback is intended to recover training from an
interruption that has happened in the middle of a `Model.fit` execution, by
backing up the training states in a temporary checkpoint file, at the end of
each epoch. Each backup overwrites the previously written checkpoint file,
so at any given time there is at most one such checkpoint file for
backup/restoring purpose.
If training restarts before completion, the training state (which includes
the `Model` weights and epoch number) is restored to the most recently saved
state at the beginning of a new `Model.fit` run. At the completion of a
`Model.fit` run, the temporary checkpoint file is deleted.
    Note that the user is responsible for bringing jobs back after the
    interruption.
This callback is important for the backup and restore mechanism for fault
tolerance purpose, and the model to be restored from a previous checkpoint
is expected to be the same as the one used to back up. If user changes
arguments passed to compile or fit, the checkpoint saved for fault tolerance
can become invalid.
Example:
>>> class InterruptingCallback(keras.callbacks.Callback):
... def on_epoch_begin(self, epoch, logs=None):
... if epoch == 4:
... raise RuntimeError('Interrupting!')
>>> callback = keras.callbacks.BackupAndRestore(backup_dir="/tmp/backup")
>>> model = keras.models.Sequential([keras.layers.Dense(10)])
>>> model.compile(keras.optimizers.SGD(), loss='mse')
>>> try:
... model.fit(np.arange(100).reshape(5, 20), np.zeros(5), epochs=10,
... batch_size=1, callbacks=[callback, InterruptingCallback()],
... verbose=0)
... except:
... pass
>>> history = model.fit(np.arange(100).reshape(5, 20), np.zeros(5),
... epochs=10, batch_size=1, callbacks=[callback],
... verbose=0)
>>> # Only 6 more epochs are run, since first training got interrupted at
>>> # zero-indexed epoch 4, second training will continue from 4 to 9.
>>> len(history.history['loss'])
    6
Args:
backup_dir: String, path of directory where to store the data
needed to restore the model. The directory
cannot be reused elsewhere to store other files, e.g. by the
`BackupAndRestore` callback of another training run,
or by another callback (e.g. `ModelCheckpoint`)
of the same training run.
save_freq: `"epoch"`, integer, or `False`. When set to `"epoch"`
the callback saves the checkpoint at the end of each epoch.
When set to an integer, the callback saves the checkpoint every
`save_freq` batches. Set `save_freq=False` only if using
preemption checkpointing (i.e. with `save_before_preemption=True`).
delete_checkpoint: Boolean, defaults to `True`. This `BackupAndRestore`
callback works by saving a checkpoint to back up the training state.
If `delete_checkpoint=True`, the checkpoint will be deleted after
training is finished. Use `False` if you'd like to keep the checkpoint
for future usage.
"""
def __init__(
self,
backup_dir,
save_freq="epoch",
delete_checkpoint=True,
):
super().__init__()
self.save_freq = save_freq
self.delete_checkpoint = delete_checkpoint
self._batches_seen_since_last_saving = 0
self._last_batch_seen = 0
self._current_epoch = 0
if not backup_dir:
raise ValueError("Empty `backup_dir` argument passed")
self.backup_dir = backup_dir
self._weights_path = file_utils.join(backup_dir, "latest.weights.h5")
if save_freq != "epoch" and not isinstance(save_freq, int):
raise ValueError(
"Invalid value for argument `save_freq`. "
f"Received: save_freq={save_freq}. "
"Expected either 'epoch' or an integer value."
)
def on_train_begin(self, logs=None):
"""Get training state from temporary file and restore it."""
if file_utils.exists(self._weights_path):
self.model.load_weights(self._weights_path)
def on_epoch_end(self, epoch, logs=None):
self._current_epoch = epoch
if self.save_freq == "epoch":
self._save_model()
def on_train_batch_end(self, batch, logs=None):
if self._should_save_on_batch(batch):
self._save_model()
def _save_model(self):
"""Saves the model.
Args:
epoch: the epoch this iteration is in.
batch: the batch this iteration is in. `None` if the `save_freq`
is set to `"epoch"`.
logs: the `logs` dict passed in to `on_batch_end` or `on_epoch_end`.
"""
# Create host directory if it doesn't exist.
if not file_utils.exists(self.backup_dir):
file_utils.makedirs(self.backup_dir)
self.model.save_weights(filepath=self._weights_path, overwrite=True)
def _should_save_on_batch(self, batch):
"""Handles batch-level saving logic, supports steps_per_execution."""
if self.save_freq == "epoch":
return False
if batch <= self._last_batch_seen: # New epoch.
add_batches = batch + 1 # batches are zero-indexed.
else:
add_batches = batch - self._last_batch_seen
self._batches_seen_since_last_saving += add_batches
self._last_batch_seen = batch
if self._batches_seen_since_last_saving >= self.save_freq:
self._batches_seen_since_last_saving = 0
return True
return False
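    # Illustrative trace (added for clarity): with `save_freq=3`, batches are
    # counted across epoch boundaries. If the last batch index seen was 5 and
    # a new epoch starts at batch 0, `batch <= self._last_batch_seen` detects
    # the wrap-around and adds `batch + 1` batches instead of a negative
    # difference, so a checkpoint is still written every 3 batches.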
def on_train_end(self, logs=None):
if self.delete_checkpoint and file_utils.exists(self.backup_dir):
file_utils.rmtree(self.backup_dir)
# ---- End of file: keras/keras/callbacks/backup_and_restore_callback.py ----
from keras.api_export import keras_export
from keras.callbacks.callback import Callback
from keras.utils import io_utils
from keras.utils.progbar import Progbar
@keras_export("keras.callbacks.ProgbarLogger")
class ProgbarLogger(Callback):
"""Callback that prints metrics to stdout.
Args:
count_mode: One of `"steps"` or `"samples"`.
Whether the progress bar should
count samples seen or steps (batches) seen.
Raises:
ValueError: In case of invalid `count_mode`.
"""
def __init__(self):
super().__init__()
self.seen = 0
self.progbar = None
self.target = None
self.verbose = 1
self.epochs = 1
self._called_in_fit = False
def set_params(self, params):
verbose = params["verbose"]
if verbose == "auto":
verbose = 1
self.verbose = verbose
self.epochs = params["epochs"]
self.target = params["steps"]
def on_train_begin(self, logs=None):
# When this logger is called inside `fit`, validation is silent.
self._called_in_fit = True
def on_test_begin(self, logs=None):
if not self._called_in_fit:
self._reset_progbar()
self._maybe_init_progbar()
def on_predict_begin(self, logs=None):
self._reset_progbar()
self._maybe_init_progbar()
def on_epoch_begin(self, epoch, logs=None):
self._reset_progbar()
self._maybe_init_progbar()
if self.verbose and self.epochs > 1:
io_utils.print_msg(f"Epoch {epoch + 1}/{self.epochs}")
def on_train_batch_end(self, batch, logs=None):
self._update_progbar(batch, logs)
def on_test_batch_end(self, batch, logs=None):
if not self._called_in_fit:
self._update_progbar(batch, logs)
def on_predict_batch_end(self, batch, logs=None):
# Don't pass prediction results.
self._update_progbar(batch, None)
def on_epoch_end(self, epoch, logs=None):
self._finalize_progbar(logs)
def on_test_end(self, logs=None):
if not self._called_in_fit:
self._finalize_progbar(logs)
def on_predict_end(self, logs=None):
self._finalize_progbar(logs)
def _reset_progbar(self):
self.seen = 0
self.progbar = None
def _maybe_init_progbar(self):
if self.progbar is None:
self.progbar = Progbar(
target=self.target, verbose=self.verbose, unit_name="step"
)
def _update_progbar(self, batch, logs=None):
"""Updates the progbar."""
logs = logs or {}
self._maybe_init_progbar()
self.seen = batch + 1 # One-indexed.
if self.verbose == 1:
self.progbar.update(self.seen, list(logs.items()), finalize=False)
def _finalize_progbar(self, logs):
logs = logs or {}
if self.target is None:
self.target = self.seen
self.progbar.target = self.target
self.progbar.update(self.target, list(logs.items()), finalize=True)
# ---- End of file: keras/keras/callbacks/progbar_logger.py ----
"""Tests for inference-only model/layer exporting utilities."""
import os
import numpy as np
import pytest
import tensorflow as tf
from keras import backend
from keras import layers
from keras import models
from keras import testing
from keras import utils
from keras.export import export_lib
from keras.saving import saving_lib
def get_model():
layer_list = [
layers.Dense(10, activation="relu"),
layers.BatchNormalization(),
layers.Dense(1, activation="sigmoid"),
]
model = models.Sequential(layer_list)
return model
@pytest.mark.skipif(
backend.backend() not in ("tensorflow", "jax"),
reason="Export only currently supports the TF and JAX backends.",
)
class ExportArchiveTest(testing.TestCase):
def test_standard_model_export(self):
temp_filepath = os.path.join(self.get_temp_dir(), "exported_model")
model = get_model()
ref_input = tf.random.normal((3, 10))
ref_output = model(ref_input)
export_lib.export_model(model, temp_filepath)
revived_model = tf.saved_model.load(temp_filepath)
self.assertAllClose(
ref_output, revived_model.serve(ref_input), atol=1e-6
)
def test_low_level_model_export(self):
temp_filepath = os.path.join(self.get_temp_dir(), "exported_model")
model = get_model()
ref_input = tf.random.normal((3, 10))
ref_output = model(ref_input)
# Test variable tracking
export_archive = export_lib.ExportArchive()
export_archive.track(model)
self.assertLen(export_archive.variables, 8)
self.assertLen(export_archive.trainable_variables, 6)
self.assertLen(export_archive.non_trainable_variables, 2)
export_archive = export_lib.ExportArchive()
export_archive.track(model)
export_archive.add_endpoint(
"call",
model.call,
input_signature=[
tf.TensorSpec(
shape=(None, 10),
dtype=tf.float32,
)
],
)
export_archive.write_out(temp_filepath)
revived_model = tf.saved_model.load(temp_filepath)
self.assertAllClose(
ref_output, revived_model.call(ref_input), atol=1e-6
)
@pytest.mark.skipif(
backend.backend() != "tensorflow",
reason="Registering a tf.function endpoint is only in TF backend.",
)
def test_endpoint_registration_tf_function(self):
temp_filepath = os.path.join(self.get_temp_dir(), "exported_model")
model = get_model()
ref_input = tf.random.normal((3, 10))
ref_output = model(ref_input)
# Test variable tracking
export_archive = export_lib.ExportArchive()
export_archive.track(model)
self.assertLen(export_archive.variables, 8)
self.assertLen(export_archive.trainable_variables, 6)
self.assertLen(export_archive.non_trainable_variables, 2)
@tf.function()
def my_endpoint(x):
return model(x)
# Test registering an endpoint that is a tf.function (called)
my_endpoint(ref_input) # Trace fn
export_archive.add_endpoint(
"call",
my_endpoint,
)
export_archive.write_out(temp_filepath)
revived_model = tf.saved_model.load(temp_filepath)
self.assertFalse(hasattr(revived_model, "_tracked"))
self.assertAllClose(
ref_output, revived_model.call(ref_input), atol=1e-6
)
self.assertLen(revived_model.variables, 8)
self.assertLen(revived_model.trainable_variables, 6)
self.assertLen(revived_model.non_trainable_variables, 2)
@pytest.mark.skipif(
backend.backend() != "jax",
reason="This test is native to the JAX backend.",
)
def test_jax_endpoint_registration_tf_function(self):
model = get_model()
ref_input = np.random.normal(size=(3, 10))
model(ref_input)
# build a JAX function
def model_call(x):
return model(x)
from jax import default_backend as jax_device
from jax.experimental import jax2tf
native_jax_compatible = not (
jax_device() == "gpu"
and len(tf.config.list_physical_devices("GPU")) == 0
)
# now, convert JAX function
converted_model_call = jax2tf.convert(
model_call,
native_serialization=native_jax_compatible,
polymorphic_shapes=["(b, 10)"],
)
# you can now build a TF inference function
@tf.function(
input_signature=[tf.TensorSpec(shape=(None, 10), dtype=tf.float32)],
autograph=False,
)
def infer_fn(x):
return converted_model_call(x)
ref_output = infer_fn(ref_input)
# Export with TF inference function as endpoint
temp_filepath = os.path.join(self.get_temp_dir(), "my_model")
export_archive = export_lib.ExportArchive()
export_archive.track(model)
export_archive.add_endpoint("serve", infer_fn)
export_archive.write_out(temp_filepath)
# Reload and verify outputs
revived_model = tf.saved_model.load(temp_filepath)
self.assertFalse(hasattr(revived_model, "_tracked"))
self.assertAllClose(
ref_output, revived_model.serve(ref_input), atol=1e-6
)
self.assertLen(revived_model.variables, 8)
self.assertLen(revived_model.trainable_variables, 6)
self.assertLen(revived_model.non_trainable_variables, 2)
# Assert all variables wrapped as `tf.Variable`
assert isinstance(export_archive.variables[0], tf.Variable)
assert isinstance(export_archive.trainable_variables[0], tf.Variable)
assert isinstance(
export_archive.non_trainable_variables[0], tf.Variable
)
def test_layer_export(self):
temp_filepath = os.path.join(self.get_temp_dir(), "exported_layer")
layer = layers.BatchNormalization()
ref_input = tf.random.normal((3, 10))
ref_output = layer(ref_input) # Build layer (important)
export_archive = export_lib.ExportArchive()
export_archive.track(layer)
export_archive.add_endpoint(
"call",
layer.call,
input_signature=[
tf.TensorSpec(
shape=(None, 10),
dtype=tf.float32,
)
],
)
export_archive.write_out(temp_filepath)
revived_layer = tf.saved_model.load(temp_filepath)
self.assertAllClose(
ref_output, revived_layer.call(ref_input), atol=1e-6
)
def test_multi_input_output_functional_model(self):
temp_filepath = os.path.join(self.get_temp_dir(), "exported_model")
x1 = layers.Input((2,))
x2 = layers.Input((2,))
y1 = layers.Dense(3)(x1)
y2 = layers.Dense(3)(x2)
model = models.Model([x1, x2], [y1, y2])
ref_inputs = [tf.random.normal((3, 2)), tf.random.normal((3, 2))]
ref_outputs = model(ref_inputs)
export_archive = export_lib.ExportArchive()
export_archive.track(model)
export_archive.add_endpoint(
"serve",
model.call,
input_signature=[
[
tf.TensorSpec(
shape=(None, 2),
dtype=tf.float32,
),
tf.TensorSpec(
shape=(None, 2),
dtype=tf.float32,
),
]
],
)
export_archive.write_out(temp_filepath)
revived_model = tf.saved_model.load(temp_filepath)
self.assertAllClose(
ref_outputs[0],
revived_model.serve(ref_inputs)[0],
atol=1e-6,
)
self.assertAllClose(
ref_outputs[1],
revived_model.serve(ref_inputs)[1],
atol=1e-6,
)
# Now test dict inputs
model = models.Model({"x1": x1, "x2": x2}, [y1, y2])
ref_inputs = {
"x1": tf.random.normal((3, 2)),
"x2": tf.random.normal((3, 2)),
}
ref_outputs = model(ref_inputs)
export_archive = export_lib.ExportArchive()
export_archive.track(model)
export_archive.add_endpoint(
"serve",
model.call,
input_signature=[
{
"x1": tf.TensorSpec(
shape=(None, 2),
dtype=tf.float32,
),
"x2": tf.TensorSpec(
shape=(None, 2),
dtype=tf.float32,
),
}
],
)
export_archive.write_out(temp_filepath)
revived_model = tf.saved_model.load(temp_filepath)
self.assertAllClose(
ref_outputs[0],
revived_model.serve(ref_inputs)[0],
atol=1e-6,
)
self.assertAllClose(
ref_outputs[1],
revived_model.serve(ref_inputs)[1],
atol=1e-6,
)
# def test_model_with_lookup_table(self):
# tf.debugging.disable_traceback_filtering()
# temp_filepath = os.path.join(self.get_temp_dir(), "exported_model")
# text_vectorization = layers.TextVectorization()
# text_vectorization.adapt(["one two", "three four", "five six"])
# model = models.Sequential(
# [
# layers.Input(shape=(), dtype="string"),
# text_vectorization,
# layers.Embedding(10, 32),
# layers.Dense(1),
# ]
# )
# ref_input = tf.convert_to_tensor(["one two three four"])
# ref_output = model(ref_input)
# export_lib.export_model(model, temp_filepath)
# revived_model = tf.saved_model.load(temp_filepath)
# self.assertAllClose(
# ref_output, revived_model.serve(ref_input), atol=1e-6
# )
def test_track_multiple_layers(self):
temp_filepath = os.path.join(self.get_temp_dir(), "exported_model")
layer_1 = layers.Dense(2)
ref_input_1 = tf.random.normal((3, 4))
ref_output_1 = layer_1(ref_input_1)
layer_2 = layers.Dense(3)
ref_input_2 = tf.random.normal((3, 5))
ref_output_2 = layer_2(ref_input_2)
export_archive = export_lib.ExportArchive()
export_archive.add_endpoint(
"call_1",
layer_1.call,
input_signature=[
tf.TensorSpec(
shape=(None, 4),
dtype=tf.float32,
),
],
)
export_archive.add_endpoint(
"call_2",
layer_2.call,
input_signature=[
tf.TensorSpec(
shape=(None, 5),
dtype=tf.float32,
),
],
)
export_archive.write_out(temp_filepath)
revived_layer = tf.saved_model.load(temp_filepath)
self.assertAllClose(
ref_output_1,
revived_layer.call_1(ref_input_1),
atol=1e-6,
)
self.assertAllClose(
ref_output_2,
revived_layer.call_2(ref_input_2),
atol=1e-6,
)
def test_non_standard_layer_signature(self):
temp_filepath = os.path.join(self.get_temp_dir(), "exported_layer")
layer = layers.MultiHeadAttention(2, 2)
x1 = tf.random.normal((3, 2, 2))
x2 = tf.random.normal((3, 2, 2))
ref_output = layer(x1, x2) # Build layer (important)
export_archive = export_lib.ExportArchive()
export_archive.track(layer)
export_archive.add_endpoint(
"call",
layer.call,
input_signature=[
tf.TensorSpec(
shape=(None, 2, 2),
dtype=tf.float32,
),
tf.TensorSpec(
shape=(None, 2, 2),
dtype=tf.float32,
),
],
)
export_archive.write_out(temp_filepath)
revived_layer = tf.saved_model.load(temp_filepath)
self.assertAllClose(
ref_output,
revived_layer.call(x1, x2),
atol=1e-6,
)
# TODO(nkovela): Remove test when argument name preservation
# workaround is created for JAX backend.
@pytest.mark.skipif(
backend.backend() != "tensorflow",
reason="JAX2TF has issues with argument name preservation.",
)
def test_non_standard_layer_signature_with_kwargs(self):
temp_filepath = os.path.join(self.get_temp_dir(), "exported_layer")
layer = layers.MultiHeadAttention(2, 2)
x1 = tf.random.normal((3, 2, 2))
x2 = tf.random.normal((3, 2, 2))
ref_output = layer(x1, x2) # Build layer (important)
export_archive = export_lib.ExportArchive()
export_archive.track(layer)
export_archive.add_endpoint(
"call",
layer.call,
input_signature=[
tf.TensorSpec(
shape=(None, 2, 2),
dtype=tf.float32,
),
tf.TensorSpec(
shape=(None, 2, 2),
dtype=tf.float32,
),
],
)
export_archive.write_out(temp_filepath)
revived_layer = tf.saved_model.load(temp_filepath)
self.assertAllClose(
ref_output,
revived_layer.call(query=x1, value=x2),
atol=1e-6,
)
def test_variable_collection(self):
temp_filepath = os.path.join(self.get_temp_dir(), "exported_model")
model = models.Sequential(
[
layers.Input((10,)),
layers.Dense(2),
layers.Dense(2),
]
)
# Test variable tracking
export_archive = export_lib.ExportArchive()
export_archive.track(model)
export_archive.add_endpoint(
"call",
model.call,
input_signature=[
tf.TensorSpec(
shape=(None, 10),
dtype=tf.float32,
)
],
)
export_archive.add_variable_collection(
"my_vars", model.layers[1].weights
)
self.assertLen(export_archive._tf_trackable.my_vars, 2)
export_archive.write_out(temp_filepath)
revived_model = tf.saved_model.load(temp_filepath)
self.assertLen(revived_model.my_vars, 2)
def test_export_model_errors(self):
temp_filepath = os.path.join(self.get_temp_dir(), "exported_model")
# Model has not been built
model = models.Sequential([layers.Dense(2)])
with self.assertRaisesRegex(ValueError, "It must be built"):
export_lib.export_model(model, temp_filepath)
# Subclassed model has not been called
class MyModel(models.Model):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.dense = layers.Dense(2)
def build(self, input_shape):
self.dense.build(input_shape)
self.built = True
def call(self, x):
return self.dense(x)
model = MyModel()
model.build((2, 3))
with self.assertRaisesRegex(ValueError, "It must be called"):
export_lib.export_model(model, temp_filepath)
def test_export_archive_errors(self):
temp_filepath = os.path.join(self.get_temp_dir(), "exported_model")
model = models.Sequential([layers.Dense(2)])
model(tf.random.normal((2, 3)))
# Endpoint name reuse
export_archive = export_lib.ExportArchive()
export_archive.track(model)
export_archive.add_endpoint(
"call",
model.call,
input_signature=[
tf.TensorSpec(
shape=(None, 3),
dtype=tf.float32,
)
],
)
with self.assertRaisesRegex(ValueError, "already taken"):
export_archive.add_endpoint(
"call",
model.call,
input_signature=[
tf.TensorSpec(
shape=(None, 3),
dtype=tf.float32,
)
],
)
# Write out with no endpoints
export_archive = export_lib.ExportArchive()
export_archive.track(model)
with self.assertRaisesRegex(ValueError, "No endpoints have been set"):
export_archive.write_out(temp_filepath)
# Invalid object type
with self.assertRaisesRegex(ValueError, "Invalid resource type"):
export_archive = export_lib.ExportArchive()
export_archive.track("model")
# Set endpoint with no input signature
export_archive = export_lib.ExportArchive()
export_archive.track(model)
with self.assertRaisesRegex(
ValueError, "you must provide an `input_signature`"
):
export_archive.add_endpoint(
"call",
model.call,
)
# Set endpoint that has never been called
export_archive = export_lib.ExportArchive()
export_archive.track(model)
@tf.function()
def my_endpoint(x):
return model(x)
export_archive = export_lib.ExportArchive()
export_archive.track(model)
with self.assertRaisesRegex(
ValueError, "you must either provide a function"
):
export_archive.add_endpoint(
"call",
my_endpoint,
)
def test_subclassed_model_export(self):
class CustomModelX(models.Model):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.dense1 = layers.Dense(1)
self.dense2 = layers.Dense(1)
def call(self, inputs):
out = self.dense1(inputs)
return self.dense2(out)
temp_filepath = os.path.join(self.get_temp_dir(), "exported_model")
x = np.random.random((100, 32))
model = CustomModelX()
model.compile(
optimizer="adam",
loss="mse",
)
ref_output = model(x)
model.export(temp_filepath)
revived_model = tf.saved_model.load(temp_filepath)
self.assertAllClose(ref_output, revived_model.serve(x), atol=1e-6)
def test_export_no_assets(self):
temp_filepath = os.path.join(self.get_temp_dir(), "exported_model")
# Case where there are legitimately no assets.
model = models.Sequential([layers.Flatten()])
model(tf.random.normal((2, 3)))
export_archive = export_lib.ExportArchive()
export_archive.add_endpoint(
"call",
model.call,
input_signature=[
tf.TensorSpec(
shape=(None, 3),
dtype=tf.float32,
)
],
)
export_archive.write_out(temp_filepath)
def test_model_export_method(self):
temp_filepath = os.path.join(self.get_temp_dir(), "exported_model")
model = get_model()
ref_input = tf.random.normal((3, 10))
ref_output = model(ref_input)
model.export(temp_filepath)
revived_model = tf.saved_model.load(temp_filepath)
self.assertAllClose(
ref_output, revived_model.serve(ref_input), atol=1e-6
)
@pytest.mark.skipif(
backend.backend() != "tensorflow",
reason="TFSM Layer reloading is only for the TF backend.",
)
class TestTFSMLayer(testing.TestCase):
def test_reloading_export_archive(self):
temp_filepath = os.path.join(self.get_temp_dir(), "exported_model")
model = get_model()
ref_input = tf.random.normal((3, 10))
ref_output = model(ref_input)
export_lib.export_model(model, temp_filepath)
reloaded_layer = export_lib.TFSMLayer(temp_filepath)
self.assertAllClose(reloaded_layer(ref_input), ref_output, atol=1e-7)
self.assertLen(reloaded_layer.weights, len(model.weights))
self.assertLen(
reloaded_layer.trainable_weights, len(model.trainable_weights)
)
self.assertLen(
reloaded_layer.non_trainable_weights,
len(model.non_trainable_weights),
)
# TODO(nkovela): Expand test coverage/debug fine-tuning and
# non-trainable use cases here.
def test_reloading_default_saved_model(self):
temp_filepath = os.path.join(self.get_temp_dir(), "exported_model")
model = get_model()
ref_input = tf.random.normal((3, 10))
ref_output = model(ref_input)
tf.saved_model.save(model, temp_filepath)
reloaded_layer = export_lib.TFSMLayer(
temp_filepath, call_endpoint="serving_default"
)
# The output is a dict, due to the nature of SavedModel saving.
new_output = reloaded_layer(ref_input)
self.assertAllClose(
new_output[list(new_output.keys())[0]],
ref_output,
atol=1e-7,
)
self.assertLen(reloaded_layer.weights, len(model.weights))
self.assertLen(
reloaded_layer.trainable_weights, len(model.trainable_weights)
)
self.assertLen(
reloaded_layer.non_trainable_weights,
len(model.non_trainable_weights),
)
def test_call_training(self):
temp_filepath = os.path.join(self.get_temp_dir(), "exported_model")
utils.set_random_seed(1337)
model = models.Sequential(
[
layers.Input((10,)),
layers.Dense(10),
layers.Dropout(0.99999),
]
)
export_archive = export_lib.ExportArchive()
export_archive.track(model)
export_archive.add_endpoint(
name="call_inference",
fn=lambda x: model(x, training=False),
input_signature=[tf.TensorSpec(shape=(None, 10), dtype=tf.float32)],
)
export_archive.add_endpoint(
name="call_training",
fn=lambda x: model(x, training=True),
input_signature=[tf.TensorSpec(shape=(None, 10), dtype=tf.float32)],
)
export_archive.write_out(temp_filepath)
reloaded_layer = export_lib.TFSMLayer(
temp_filepath,
call_endpoint="call_inference",
call_training_endpoint="call_training",
)
inference_output = reloaded_layer(
tf.random.normal((1, 10)), training=False
)
training_output = reloaded_layer(
tf.random.normal((1, 10)), training=True
)
self.assertAllClose(np.mean(training_output), 0.0, atol=1e-7)
self.assertNotAllClose(np.mean(inference_output), 0.0, atol=1e-7)
def test_serialization(self):
temp_filepath = os.path.join(self.get_temp_dir(), "exported_model")
model = get_model()
ref_input = tf.random.normal((3, 10))
ref_output = model(ref_input)
export_lib.export_model(model, temp_filepath)
reloaded_layer = export_lib.TFSMLayer(temp_filepath)
# Test reinstantiation from config
config = reloaded_layer.get_config()
rereloaded_layer = export_lib.TFSMLayer.from_config(config)
self.assertAllClose(rereloaded_layer(ref_input), ref_output, atol=1e-7)
# Test whole model saving with reloaded layer inside
model = models.Sequential([reloaded_layer])
temp_model_filepath = os.path.join(self.get_temp_dir(), "m.keras")
model.save(temp_model_filepath, save_format="keras_v3")
reloaded_model = saving_lib.load_model(
temp_model_filepath,
custom_objects={"TFSMLayer": export_lib.TFSMLayer},
)
self.assertAllClose(reloaded_model(ref_input), ref_output, atol=1e-7)
def test_errors(self):
# Test missing call endpoint
temp_filepath = os.path.join(self.get_temp_dir(), "exported_model")
model = models.Sequential([layers.Input((2,)), layers.Dense(3)])
export_lib.export_model(model, temp_filepath)
with self.assertRaisesRegex(ValueError, "The endpoint 'wrong'"):
export_lib.TFSMLayer(temp_filepath, call_endpoint="wrong")
# Test missing call training endpoint
with self.assertRaisesRegex(ValueError, "The endpoint 'wrong'"):
export_lib.TFSMLayer(
temp_filepath,
call_endpoint="serve",
call_training_endpoint="wrong",
)
| keras/keras/export/export_lib_test.py/0 | {
"file_path": "keras/keras/export/export_lib_test.py",
"repo_id": "keras",
"token_count": 12701
} | 180 |
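A minimal end-to-end sketch of the `ExportArchive` workflow exercised by the tests above, assuming the TensorFlow backend; the import path, model, and output directory are illustrative.

import tensorflow as tf
from keras import layers, models
from keras.export import export_lib  # path assumed from the file layout above

# Build and call the model once so its variables exist before export.
model = models.Sequential([layers.Input((10,)), layers.Dense(2)])
_ = model(tf.random.normal((3, 10)))

# Track the model, register one serving endpoint, and write a SavedModel.
archive = export_lib.ExportArchive()
archive.track(model)
archive.add_endpoint(
    "serve",
    model.call,
    input_signature=[tf.TensorSpec(shape=(None, 10), dtype=tf.float32)],
)
archive.write_out("/tmp/exported_model")

# The artifact reloads with plain TensorFlow, no Keras required.
revived = tf.saved_model.load("/tmp/exported_model")
print(revived.serve(tf.random.normal((3, 10))).shape)  # (3, 2)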
import numpy as np
import pytest
from keras import testing
from keras.layers.activations import prelu
class PReLUTest(testing.TestCase):
@pytest.mark.requires_trainable_backend
def test_prelu(self):
self.run_layer_test(
prelu.PReLU,
init_kwargs={
"alpha_initializer": "zeros",
"alpha_regularizer": "L1",
"alpha_constraint": "MaxNorm",
"shared_axes": 1,
},
input_shape=(2, 3, 4),
supports_masking=True,
)
def test_prelu_correctness(self):
def np_prelu(x, alpha):
return (x > 0) * x + (x <= 0) * alpha * x
inputs = np.random.randn(2, 10, 5, 3)
prelu_layer = prelu.PReLU(
alpha_initializer="glorot_uniform",
alpha_regularizer="l1",
alpha_constraint="non_neg",
shared_axes=(1, 2),
)
prelu_layer.build(inputs.shape)
weights = np.random.random((1, 1, 3))
prelu_layer.alpha.assign(weights)
ref_out = np_prelu(inputs, weights)
self.assertAllClose(prelu_layer(inputs), ref_out)
| keras/keras/layers/activations/prelu_test.py/0 | {
"file_path": "keras/keras/layers/activations/prelu_test.py",
"repo_id": "keras",
"token_count": 605
} | 181 |
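A small numpy check of the PReLU formula used as the reference above: the identity for positive inputs, an alpha-scaled identity otherwise; the sample values are illustrative.

import numpy as np

def np_prelu(x, alpha):
    # Matches the reference in the test: x for x > 0, alpha * x otherwise.
    return (x > 0) * x + (x <= 0) * alpha * x

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(np_prelu(x, alpha=0.1))  # [-0.2, -0.05, 0.0, 1.5]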
"""Keras base class for transpose convolution layers."""
from keras import activations
from keras import constraints
from keras import initializers
from keras import ops
from keras import regularizers
from keras.backend import standardize_data_format
from keras.backend.common.backend_utils import (
compute_conv_transpose_output_shape,
)
from keras.layers.input_spec import InputSpec
from keras.layers.layer import Layer
from keras.utils.argument_validation import standardize_padding
from keras.utils.argument_validation import standardize_tuple
class BaseConvTranspose(Layer):
"""Abstract N-D transposed convolution layer.
The need for transposed convolutions generally arises from the desire to use
a transformation going in the opposite direction of a normal convolution,
i.e., from something that has the shape of the output of some convolution to
something that has the shape of its input while maintaining a connectivity
pattern that is compatible with said convolution.
Args:
rank: int, the rank of the transposed convolution, e.g. 2 for 2D
transposed convolution.
filters: int, the dimension of the output space (the number of filters
in the transposed convolution).
kernel_size: int or tuple/list of `rank` integers, specifying the size
of the transposed convolution window.
strides: int or tuple/list of `rank` integers, specifying the stride
length of the transposed convolution. If only one int is specified,
the same stride size will be used for all dimensions.
`strides > 1` is incompatible with `dilation_rate > 1`.
padding: string, either `"valid"` or `"same"` (case-insensitive).
`"valid"` means no padding. `"same"` results in padding evenly to
the left/right or up/down of the input such that output has the same
            height/width dimension as the input.
        output_padding: int or tuple/list of `rank` integers, specifying the
            amount of padding along the spatial dimensions of the output
            tensor. The amount of output padding along a given dimension must
            be lower than the stride along that same dimension. If `None`
            (default), the output shape is inferred.
data_format: string, either `"channels_last"` or `"channels_first"`.
The ordering of the dimensions in the inputs. `"channels_last"`
corresponds to inputs with shape `(batch, steps, features)`
while `"channels_first"` corresponds to inputs with shape
`(batch, features, steps)`. It defaults to the `image_data_format`
value found in your Keras config file at `~/.keras/keras.json`.
If you never set it, then it will be `"channels_last"`.
dilation_rate: int or tuple/list of `rank` integers, specifying the
dilation rate to use for dilated convolution. If only one int is
specified, the same dilation rate will be used for all dimensions.
activation: Activation function. If `None`, no activation is applied.
use_bias: bool, if `True`, bias will be added to the output.
kernel_initializer: Initializer for the convolution kernel. If `None`,
the default initializer (`"glorot_uniform"`) will be used.
bias_initializer: Initializer for the bias vector. If `None`, the
default initializer (`"zeros"`) will be used.
kernel_regularizer: Optional regularizer for the convolution kernel.
bias_regularizer: Optional regularizer for the bias vector.
activity_regularizer: Optional regularizer function for the output.
kernel_constraint: Optional projection function to be applied to the
kernel after being updated by an `Optimizer` (e.g. used to implement
norm constraints or value constraints for layer weights). The
function must take as input the unprojected variable and must return
the projected variable (which must have the same shape). Constraints
are not safe to use when doing asynchronous distributed training.
bias_constraint: Optional projection function to be applied to the
bias after being updated by an `Optimizer`.
"""
def __init__(
self,
rank,
filters,
kernel_size,
strides=1,
padding="valid",
output_padding=None,
data_format=None,
dilation_rate=1,
activation=None,
use_bias=True,
kernel_initializer="glorot_uniform",
bias_initializer="zeros",
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
bias_constraint=None,
trainable=True,
name=None,
**kwargs,
):
super().__init__(
trainable=trainable,
name=name,
activity_regularizer=activity_regularizer,
**kwargs,
)
self.rank = rank
self.filters = filters
self.kernel_size = standardize_tuple(kernel_size, rank, "kernel_size")
self.strides = standardize_tuple(strides, rank, "strides")
self.dilation_rate = standardize_tuple(
dilation_rate, rank, "dilation_rate"
)
self.padding = standardize_padding(padding)
if output_padding is None:
self.output_padding = None
else:
self.output_padding = standardize_tuple(
output_padding,
rank,
"output_padding",
)
self.data_format = standardize_data_format(data_format)
self.activation = activations.get(activation)
self.use_bias = use_bias
self.kernel_initializer = initializers.get(kernel_initializer)
self.bias_initializer = initializers.get(bias_initializer)
self.kernel_regularizer = regularizers.get(kernel_regularizer)
self.bias_regularizer = regularizers.get(bias_regularizer)
self.kernel_constraint = constraints.get(kernel_constraint)
self.bias_constraint = constraints.get(bias_constraint)
        self.input_spec = InputSpec(min_ndim=self.rank + 2)
if self.filters is not None and self.filters <= 0:
raise ValueError(
"Invalid value for argument `filters`. Expected a strictly "
f"positive value. Received filters={self.filters}."
)
if not all(self.kernel_size):
raise ValueError(
"The argument `kernel_size` cannot contain 0. Received "
f"kernel_size={self.kernel_size}."
)
if not all(self.strides):
            raise ValueError(
                "The argument `strides` cannot contain 0. Received "
f"strides={self.strides}."
)
if max(self.strides) > 1 and max(self.dilation_rate) > 1:
raise ValueError(
"`strides > 1` not supported in conjunction with "
f"`dilation_rate > 1`. Received: strides={self.strides} and "
f"dilation_rate={self.dilation_rate}"
)
def build(self, input_shape):
if self.data_format == "channels_last":
channel_axis = -1
input_channel = input_shape[-1]
else:
channel_axis = 1
input_channel = input_shape[1]
self.input_spec = InputSpec(
min_ndim=self.rank + 2, axes={channel_axis: input_channel}
)
kernel_shape = self.kernel_size + (
self.filters,
input_channel,
)
self.kernel = self.add_weight(
name="kernel",
shape=kernel_shape,
initializer=self.kernel_initializer,
regularizer=self.kernel_regularizer,
constraint=self.kernel_constraint,
trainable=True,
dtype=self.dtype,
)
if self.use_bias:
self.bias = self.add_weight(
name="bias",
shape=(self.filters,),
initializer=self.bias_initializer,
regularizer=self.bias_regularizer,
constraint=self.bias_constraint,
trainable=True,
dtype=self.dtype,
)
else:
self.bias = None
self.built = True
def call(self, inputs):
outputs = ops.conv_transpose(
inputs,
self.kernel,
strides=list(self.strides),
padding=self.padding,
output_padding=self.output_padding,
dilation_rate=self.dilation_rate,
data_format=self.data_format,
)
if self.use_bias:
if self.data_format == "channels_last":
bias_shape = (1,) * (self.rank + 1) + (self.filters,)
else:
bias_shape = (1, self.filters) + (1,) * self.rank
bias = ops.reshape(self.bias, bias_shape)
outputs += bias
if self.activation is not None:
return self.activation(outputs)
return outputs
def compute_output_shape(self, input_shape):
return compute_conv_transpose_output_shape(
input_shape,
self.kernel_size,
self.filters,
strides=self.strides,
padding=self.padding,
output_padding=self.output_padding,
data_format=self.data_format,
dilation_rate=self.dilation_rate,
)
def get_config(self):
config = super().get_config()
config.update(
{
"filters": self.filters,
"kernel_size": self.kernel_size,
"strides": self.strides,
"padding": self.padding,
"data_format": self.data_format,
"dilation_rate": self.dilation_rate,
"activation": activations.serialize(self.activation),
"use_bias": self.use_bias,
"kernel_initializer": initializers.serialize(
self.kernel_initializer
),
"bias_initializer": initializers.serialize(
self.bias_initializer
),
"kernel_regularizer": regularizers.serialize(
self.kernel_regularizer
),
"bias_regularizer": regularizers.serialize(
self.bias_regularizer
),
"activity_regularizer": regularizers.serialize(
self.activity_regularizer
),
"kernel_constraint": constraints.serialize(
self.kernel_constraint
),
"bias_constraint": constraints.serialize(self.bias_constraint),
}
)
return config
| keras/keras/layers/convolutional/base_conv_transpose.py/0 | {
"file_path": "keras/keras/layers/convolutional/base_conv_transpose.py",
"repo_id": "keras",
"token_count": 4802
} | 182 |
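A short sketch of the output sizes produced by the transposed-convolution layers built on this base class, assuming the default `channels_last` data format; filters and kernel sizes are illustrative.

import numpy as np
from keras import layers

x = np.zeros((1, 16, 16, 4), dtype="float32")

# "same" padding with stride 2 doubles the spatial size: 16 -> 32.
up = layers.Conv2DTranspose(filters=8, kernel_size=3, strides=2, padding="same")
print(up(x).shape)  # (1, 32, 32, 8)

# "valid" padding with stride 1 grows each side by kernel_size - 1: 16 -> 18.
grow = layers.Conv2DTranspose(filters=8, kernel_size=3, strides=1, padding="valid")
print(grow(x).shape)  # (1, 18, 18, 8)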
import numpy as np
import pytest
from absl.testing import parameterized
from keras import layers
from keras import testing
from keras.layers.convolutional.conv_test import np_conv1d
from keras.layers.convolutional.conv_test import np_conv2d
from keras.layers.convolutional.depthwise_conv_test import np_depthwise_conv1d
from keras.layers.convolutional.depthwise_conv_test import np_depthwise_conv2d
class SeparableConvBasicTest(testing.TestCase, parameterized.TestCase):
@parameterized.parameters(
{
"depth_multiplier": 5,
"filters": 5,
"kernel_size": 2,
"strides": 1,
"padding": "valid",
"data_format": "channels_last",
"dilation_rate": 1,
"input_shape": (3, 5, 4),
"output_shape": (3, 4, 5),
},
{
"depth_multiplier": 6,
"filters": 6,
"kernel_size": 2,
"strides": 1,
"padding": "same",
"data_format": "channels_last",
"dilation_rate": (2,),
"input_shape": (3, 4, 4),
"output_shape": (3, 4, 6),
},
{
"depth_multiplier": 6,
"filters": 6,
"kernel_size": 2,
"strides": (2,),
"padding": "valid",
"data_format": "channels_last",
"dilation_rate": 1,
"input_shape": (3, 5, 4),
"output_shape": (3, 2, 6),
},
)
@pytest.mark.requires_trainable_backend
def test_separable_conv1d_basic(
self,
depth_multiplier,
filters,
kernel_size,
strides,
padding,
data_format,
dilation_rate,
input_shape,
output_shape,
):
self.run_layer_test(
layers.SeparableConv1D,
init_kwargs={
"depth_multiplier": depth_multiplier,
"filters": filters,
"kernel_size": kernel_size,
"strides": strides,
"padding": padding,
"data_format": data_format,
"dilation_rate": dilation_rate,
},
input_shape=input_shape,
expected_output_shape=output_shape,
expected_num_trainable_weights=3,
expected_num_non_trainable_weights=0,
expected_num_losses=0,
supports_masking=False,
)
@parameterized.parameters(
{
"depth_multiplier": 5,
"filters": 5,
"kernel_size": 2,
"strides": 1,
"padding": "valid",
"data_format": "channels_last",
"dilation_rate": 1,
"input_shape": (3, 5, 5, 4),
"output_shape": (3, 4, 4, 5),
},
{
"depth_multiplier": 6,
"filters": 6,
"kernel_size": 2,
"strides": 1,
"padding": "same",
"data_format": "channels_last",
"dilation_rate": (2, 2),
"input_shape": (3, 4, 4, 4),
"output_shape": (3, 4, 4, 6),
},
{
"depth_multiplier": 6,
"filters": 6,
"kernel_size": (2, 2),
"strides": (2, 2),
"padding": "valid",
"data_format": "channels_last",
"dilation_rate": (1, 1),
"input_shape": (3, 5, 5, 4),
"output_shape": (3, 2, 2, 6),
},
)
@pytest.mark.requires_trainable_backend
def test_separable_conv2d_basic(
self,
depth_multiplier,
filters,
kernel_size,
strides,
padding,
data_format,
dilation_rate,
input_shape,
output_shape,
):
self.run_layer_test(
layers.SeparableConv2D,
init_kwargs={
"depth_multiplier": depth_multiplier,
"filters": filters,
"kernel_size": kernel_size,
"strides": strides,
"padding": padding,
"data_format": data_format,
"dilation_rate": dilation_rate,
},
input_shape=input_shape,
expected_output_shape=output_shape,
expected_num_trainable_weights=3,
expected_num_non_trainable_weights=0,
expected_num_losses=0,
supports_masking=False,
)
def test_bad_init_args(self):
# `depth_multiplier` is not positive.
with self.assertRaisesRegex(
ValueError,
"Invalid value for argument `depth_multiplier`. "
"Expected a strictly positive value. Received "
"depth_multiplier=0.",
):
layers.SeparableConv1D(depth_multiplier=0, filters=1, kernel_size=1)
# `filters` is not positive.
with self.assertRaisesRegex(
ValueError,
"Invalid value for argument `filters`. Expected a "
"strictly positive value. Received filters=0.",
):
layers.SeparableConv1D(depth_multiplier=1, filters=0, kernel_size=1)
# `kernel_size` has 0.
with self.assertRaisesRegex(
ValueError,
r"The `kernel_size` argument must be a tuple of "
r"\d+ integers. Received kernel_size=\(1, 0\), including values"
r" \{0\} that do not satisfy `value > 0`",
):
layers.SeparableConv2D(
depth_multiplier=2, filters=2, kernel_size=(1, 0)
)
# `strides` has 0.
with self.assertRaisesRegex(
ValueError,
r"The `strides` argument must be a tuple of \d+ "
r"integers. Received strides=\(1, 0\), including values \{0\} "
r"that do not satisfy `value > 0`",
):
layers.SeparableConv2D(
depth_multiplier=2,
filters=2,
kernel_size=(2, 2),
strides=(1, 0),
)
# `dilation_rate > 1` while `strides > 1`.
with self.assertRaisesRegex(
ValueError,
r"`strides > 1` not supported in conjunction with "
r"`dilation_rate > 1`. Received: strides=\(2, 2\) and "
r"dilation_rate=\(2, 1\)",
):
layers.SeparableConv2D(
depth_multiplier=2,
filters=2,
kernel_size=(2, 2),
strides=2,
dilation_rate=(2, 1),
)
class SeparableConvCorrectnessTest(testing.TestCase, parameterized.TestCase):
@parameterized.parameters(
{
"depth_multiplier": 5,
"filters": 5,
"kernel_size": 2,
"strides": 1,
"padding": "valid",
"data_format": "channels_last",
"dilation_rate": 1,
},
{
"depth_multiplier": 6,
"filters": 6,
"kernel_size": 2,
"strides": 1,
"padding": "same",
"data_format": "channels_last",
"dilation_rate": (2,),
},
{
"depth_multiplier": 6,
"filters": 6,
"kernel_size": (2,),
"strides": (2,),
"padding": "valid",
"data_format": "channels_last",
"dilation_rate": 1,
},
)
def test_separable_conv1d(
self,
depth_multiplier,
filters,
kernel_size,
strides,
padding,
data_format,
dilation_rate,
):
layer = layers.SeparableConv1D(
depth_multiplier=depth_multiplier,
filters=filters,
kernel_size=kernel_size,
strides=strides,
padding=padding,
data_format=data_format,
dilation_rate=dilation_rate,
)
inputs = np.random.normal(size=[2, 8, 4])
layer.build(input_shape=inputs.shape)
depthwise_kernel_shape = layer.depthwise_kernel.shape
depthwise_kernel_weights = np.random.normal(size=depthwise_kernel_shape)
layer.depthwise_kernel.assign(depthwise_kernel_weights)
pointwise_kernel_shape = layer.pointwise_kernel.shape
pointwise_kernel_weights = np.random.normal(size=pointwise_kernel_shape)
layer.pointwise_kernel.assign(pointwise_kernel_weights)
bias_weights = np.random.normal(size=(filters,))
layer.bias.assign(bias_weights)
outputs = layer(inputs)
expected_depthwise = np_depthwise_conv1d(
inputs,
depthwise_kernel_weights,
np.zeros(4 * depth_multiplier),
strides=strides,
padding=padding,
data_format=data_format,
dilation_rate=dilation_rate,
)
expected = np_conv1d(
expected_depthwise,
pointwise_kernel_weights,
bias_weights,
strides=1,
padding=padding,
data_format=data_format,
dilation_rate=1,
groups=1,
)
self.assertAllClose(outputs.shape, expected.shape)
self.assertAllClose(outputs, expected, rtol=1e-5, atol=1e-5)
@parameterized.parameters(
{
"depth_multiplier": 5,
"filters": 5,
"kernel_size": 2,
"strides": 1,
"padding": "valid",
"data_format": "channels_last",
"dilation_rate": 1,
},
{
"depth_multiplier": 6,
"filters": 6,
"kernel_size": 2,
"strides": 1,
"padding": "same",
"data_format": "channels_last",
"dilation_rate": (2, 2),
},
{
"depth_multiplier": 6,
"filters": 6,
"kernel_size": (2, 2),
"strides": (2, 2),
"padding": "valid",
"data_format": "channels_last",
"dilation_rate": (1, 1),
},
)
def test_separable_conv2d(
self,
depth_multiplier,
filters,
kernel_size,
strides,
padding,
data_format,
dilation_rate,
):
layer = layers.SeparableConv2D(
depth_multiplier=depth_multiplier,
filters=filters,
kernel_size=kernel_size,
strides=strides,
padding=padding,
data_format=data_format,
dilation_rate=dilation_rate,
)
inputs = np.random.normal(size=[2, 8, 8, 4])
layer.build(input_shape=inputs.shape)
depthwise_kernel_shape = layer.depthwise_kernel.shape
depthwise_kernel_weights = np.random.normal(size=depthwise_kernel_shape)
layer.depthwise_kernel.assign(depthwise_kernel_weights)
pointwise_kernel_shape = layer.pointwise_kernel.shape
pointwise_kernel_weights = np.random.normal(size=pointwise_kernel_shape)
layer.pointwise_kernel.assign(pointwise_kernel_weights)
bias_weights = np.random.normal(size=(filters,))
layer.bias.assign(bias_weights)
outputs = layer(inputs)
expected_depthwise = np_depthwise_conv2d(
inputs,
depthwise_kernel_weights,
np.zeros(4 * depth_multiplier),
strides=strides,
padding=padding,
data_format=data_format,
dilation_rate=dilation_rate,
)
expected = np_conv2d(
expected_depthwise,
pointwise_kernel_weights,
bias_weights,
strides=1,
padding=padding,
data_format=data_format,
dilation_rate=1,
groups=1,
)
self.assertAllClose(outputs.shape, expected.shape)
self.assertAllClose(outputs, expected, rtol=1e-5, atol=1e-5)
| keras/keras/layers/convolutional/separable_conv_test.py/0 | {
"file_path": "keras/keras/layers/convolutional/separable_conv_test.py",
"repo_id": "keras",
"token_count": 6430
} | 183 |
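A brief sketch of the decomposition the reference implementation above relies on: a separable convolution is a depthwise convolution followed by a pointwise (1x1) convolution. Shapes are illustrative; with random weights only the shapes agree, not the values.

import numpy as np
from keras import layers

x = np.random.normal(size=(2, 8, 8, 4)).astype("float32")

# One separable layer...
sep = layers.SeparableConv2D(filters=6, kernel_size=3, use_bias=False)

# ...computes the same kind of result as a depthwise stage plus a 1x1 stage.
depthwise = layers.DepthwiseConv2D(kernel_size=3, depth_multiplier=1, use_bias=False)
pointwise = layers.Conv2D(filters=6, kernel_size=1, use_bias=False)

print(sep(x).shape)                   # (2, 6, 6, 6)
print(pointwise(depthwise(x)).shape)  # (2, 6, 6, 6)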
from keras.api_export import keras_export
from keras.layers.layer import Layer
from keras.saving import serialization_lib
@keras_export("keras.layers.Wrapper")
class Wrapper(Layer):
"""Abstract wrapper base class.
Wrappers take another layer and augment it in various ways.
Do not use this class as a layer, it is only an abstract base class.
Two usable wrappers are the `TimeDistributed` and `Bidirectional` layers.
Args:
layer: The layer to be wrapped.
"""
def __init__(self, layer, **kwargs):
        if not isinstance(layer, Layer):
            raise ValueError(
                f"Layer {layer} supplied to Wrapper isn't "
                "a supported layer type. Please "
                "ensure wrapped layer is a valid Keras layer."
            )
super().__init__(**kwargs)
self.layer = layer
def build(self, input_shape=None):
if not self.layer.built:
self.layer.build(input_shape)
self.layer.built = True
self.built = True
def get_config(self):
config = {"layer": serialization_lib.serialize_keras_object(self.layer)}
base_config = super().get_config()
return {**base_config, **config}
@classmethod
def from_config(cls, config, custom_objects=None):
layer = serialization_lib.deserialize_keras_object(
config.pop("layer"),
custom_objects=custom_objects,
)
return cls(layer, **config)
| keras/keras/layers/core/wrapper.py/0 | {
"file_path": "keras/keras/layers/core/wrapper.py",
"repo_id": "keras",
"token_count": 634
} | 184 |
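A minimal sketch of subclassing `Wrapper`; the class name and scaling factor are made up for illustration, and the default `build` above is relied on to build the wrapped layer.

import numpy as np
from keras import layers

class ScaleOutput(layers.Wrapper):
    """Calls the wrapped layer, then scales its output by a constant."""

    def __init__(self, layer, factor=2.0, **kwargs):
        super().__init__(layer, **kwargs)
        self.factor = factor

    def call(self, inputs):
        return self.layer(inputs) * self.factor

    def get_config(self):
        config = super().get_config()
        config.update({"factor": self.factor})
        return config

wrapped = ScaleOutput(layers.Dense(3), factor=0.5)
print(wrapped(np.ones((2, 4), dtype="float32")).shape)  # (2, 3)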
from keras import backend
from keras.layers.input_spec import InputSpec
from keras.layers.layer import Layer
class BaseGlobalPooling(Layer):
"""Base global pooling layer."""
def __init__(
self, pool_dimensions, data_format=None, keepdims=False, **kwargs
):
super().__init__(**kwargs)
self.data_format = backend.standardize_data_format(data_format)
self.keepdims = keepdims
self.input_spec = InputSpec(ndim=pool_dimensions + 2)
def call(self, inputs):
raise NotImplementedError
def compute_output_shape(self, input_shape):
num_spatial_dims = len(input_shape) - 2
if self.data_format == "channels_last":
if self.keepdims:
return (
(input_shape[0],)
+ (1,) * num_spatial_dims
+ (input_shape[-1],)
)
else:
return (input_shape[0],) + (input_shape[-1],)
else:
if self.keepdims:
return (input_shape[0], input_shape[1]) + (
1,
) * num_spatial_dims
else:
return (input_shape[0], input_shape[1])
def get_config(self):
config = super().get_config()
config.update(
{
"data_format": self.data_format,
"keepdims": self.keepdims,
}
)
return config
| keras/keras/layers/pooling/base_global_pooling.py/0 | {
"file_path": "keras/keras/layers/pooling/base_global_pooling.py",
"repo_id": "keras",
"token_count": 756
} | 185 |
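A quick illustration of the shape rules encoded in `compute_output_shape` above, using a concrete 2D global-pooling layer and assuming the default `channels_last` data format.

import numpy as np
from keras import layers

x = np.random.normal(size=(2, 5, 5, 3)).astype("float32")

print(layers.GlobalAveragePooling2D()(x).shape)               # (2, 3)
print(layers.GlobalAveragePooling2D(keepdims=True)(x).shape)  # (2, 1, 1, 3)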
import numpy as np
from tensorflow import data as tf_data
from keras import layers
from keras import testing
class CategoryEncodingTest(testing.TestCase):
def test_count_output(self):
input_array = np.array([1, 2, 3, 1])
expected_output = np.array([0, 2, 1, 1, 0, 0])
num_tokens = 6
expected_output_shape = (num_tokens,)
layer = layers.CategoryEncoding(num_tokens=6, output_mode="count")
int_data = layer(input_array)
self.assertEqual(expected_output_shape, int_data.shape)
self.assertAllClose(int_data, expected_output)
# Test symbolic call.
output = layer(
layers.Input(batch_shape=input_array.shape, dtype="int32")
)
self.assertEqual(expected_output_shape, output.shape)
self.assertEqual("float32", output.dtype)
def test_count_weighted_output(self):
input_array = np.array([[0, 1], [0, 0], [1, 2], [3, 1]])
count_weights = np.array(
[[0.1, 0.2], [0.1, 0.1], [0.2, 0.3], [0.4, 0.2]]
)
expected_output = np.array(
[
[0.1, 0.2, 0.0, 0.0],
[0.2, 0.0, 0.0, 0.0],
[0.0, 0.2, 0.3, 0.0],
[0.0, 0.2, 0.0, 0.4],
]
)
num_tokens = 4
expected_output_shape = (num_tokens, num_tokens)
layer = layers.CategoryEncoding(num_tokens=4, output_mode="count")
int_data = layer(input_array, count_weights=count_weights)
self.assertEqual(expected_output_shape, int_data.shape)
self.assertAllClose(int_data, expected_output)
# Test symbolic call.
output = layer(
layers.Input(batch_shape=input_array.shape, dtype="int32")
)
self.assertEqual(expected_output_shape, output.shape)
self.assertEqual("float32", output.dtype)
def test_batched_count_output(self):
input_array = np.array([[1, 2, 3, 1], [0, 3, 1, 0]])
expected_output = np.array([[0, 2, 1, 1, 0, 0], [2, 1, 0, 1, 0, 0]])
num_tokens = 6
expected_output_shape = (2, num_tokens)
layer = layers.CategoryEncoding(num_tokens=6, output_mode="count")
int_data = layer(input_array)
self.assertEqual(expected_output_shape, int_data.shape)
self.assertAllClose(int_data, expected_output)
# Test symbolic call.
output = layer(
layers.Input(batch_shape=input_array.shape, dtype="int32")
)
self.assertEqual(expected_output_shape, output.shape)
self.assertEqual("float32", output.dtype)
def test_multi_hot(self):
input_data = np.array([3, 2, 0, 1])
expected_output = np.array([1, 1, 1, 1, 0, 0])
num_tokens = 6
expected_output_shape = (num_tokens,)
# Test call on layer directly.
layer = layers.CategoryEncoding(
num_tokens=num_tokens, output_mode="multi_hot"
)
output_data = layer(input_data)
self.assertAllClose(expected_output, output_data)
self.assertEqual(expected_output_shape, output_data.shape)
# Test symbolic call.
output = layer(
layers.Input(batch_shape=input_data.shape, dtype="int32")
)
self.assertEqual(expected_output_shape, output.shape)
self.assertEqual("float32", output.dtype)
def test_batched_multi_hot(self):
input_data = np.array([[3, 2, 0, 1], [3, 2, 0, 1]])
expected_output = np.array([[1, 1, 1, 1, 0, 0], [1, 1, 1, 1, 0, 0]])
num_tokens = 6
expected_output_shape = (2, num_tokens)
# Test call on layer directly.
layer = layers.CategoryEncoding(
num_tokens=num_tokens, output_mode="multi_hot"
)
output_data = layer(input_data)
self.assertAllClose(expected_output, output_data)
self.assertEqual(expected_output_shape, output_data.shape)
# Test symbolic call.
output = layer(
layers.Input(batch_shape=input_data.shape, dtype="int32")
)
self.assertEqual(expected_output_shape, output.shape)
self.assertEqual("float32", output.dtype)
def test_one_hot(self):
input_data = np.array([3, 2, 0, 1])
expected_output = np.array(
[
[0, 0, 0, 1],
[0, 0, 1, 0],
[1, 0, 0, 0],
[0, 1, 0, 0],
]
)
num_tokens = 4
expected_output_shape = (num_tokens, num_tokens)
# Test call on layer directly.
layer = layers.CategoryEncoding(
num_tokens=num_tokens, output_mode="one_hot"
)
output_data = layer(input_data)
self.assertAllClose(expected_output, output_data)
self.assertEqual(expected_output_shape, output_data.shape)
# Test symbolic call.
output = layer(
layers.Input(batch_shape=input_data.shape, dtype="int32")
)
self.assertEqual(expected_output_shape, output.shape)
self.assertEqual("float32", output.dtype)
def test_batched_one_hot(self):
input_data = np.array([[3, 2, 0, 1], [3, 2, 0, 1]])
expected_output = np.array(
[
[
[0, 0, 0, 1],
[0, 0, 1, 0],
[1, 0, 0, 0],
[0, 1, 0, 0],
],
[
[0, 0, 0, 1],
[0, 0, 1, 0],
[1, 0, 0, 0],
[0, 1, 0, 0],
],
]
)
num_tokens = 4
expected_output_shape = (2, num_tokens, num_tokens)
# Test call on layer directly.
layer = layers.CategoryEncoding(
num_tokens=num_tokens, output_mode="one_hot"
)
output_data = layer(input_data)
self.assertAllClose(expected_output, output_data)
self.assertEqual(expected_output_shape, output_data.shape)
# Test symbolic call.
output = layer(
layers.Input(batch_shape=input_data.shape, dtype="int32")
)
self.assertEqual(expected_output_shape, output.shape)
self.assertEqual("float32", output.dtype)
def test_tf_data_compatibility(self):
layer = layers.CategoryEncoding(
num_tokens=4, output_mode="one_hot", dtype="int32"
)
input_data = np.array([3, 2, 0, 1])
expected_output = np.array(
[
[0, 0, 0, 1],
[0, 0, 1, 0],
[1, 0, 0, 0],
[0, 1, 0, 0],
]
)
ds = tf_data.Dataset.from_tensor_slices(input_data).batch(4).map(layer)
for output in ds.take(1):
output = output.numpy()
self.assertAllClose(output, expected_output)
| keras/keras/layers/preprocessing/category_encoding_test.py/0 | {
"file_path": "keras/keras/layers/preprocessing/category_encoding_test.py",
"repo_id": "keras",
"token_count": 3515
} | 186 |
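A compact sketch contrasting the output modes covered above on one small batch; the token ids are illustrative and must be smaller than `num_tokens`.

import numpy as np
from keras import layers

tokens = np.array([[1, 2, 3, 1], [0, 3, 1, 0]])

multi_hot = layers.CategoryEncoding(num_tokens=4, output_mode="multi_hot")
count = layers.CategoryEncoding(num_tokens=4, output_mode="count")
one_hot = layers.CategoryEncoding(num_tokens=4, output_mode="one_hot")

print(multi_hot(tokens).shape)  # (2, 4): 1 wherever a token id occurs at least once
print(count(tokens).shape)      # (2, 4): per-sample occurrence counts
print(one_hot(np.array([3, 2, 0, 1])).shape)  # (4, 4): one row per scalar input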
import numpy as np
import pytest
from absl.testing import parameterized
from tensorflow import data as tf_data
from keras import backend
from keras import layers
from keras import testing
class NormalizationTest(testing.TestCase, parameterized.TestCase):
@pytest.mark.requires_trainable_backend
def test_normalization_basics(self):
self.run_layer_test(
layers.Normalization,
init_kwargs={
"axis": -1,
},
input_shape=(2, 3),
expected_output_shape=(2, 3),
expected_num_trainable_weights=0,
expected_num_non_trainable_weights=3,
expected_num_seed_generators=0,
expected_num_losses=0,
supports_masking=True,
)
self.run_layer_test(
layers.Normalization,
init_kwargs={
"axis": -1,
"mean": np.array([0.5, 0.2, -0.1]),
"variance": np.array([0.1, 0.2, 0.3]),
},
input_shape=(2, 3),
expected_output_shape=(2, 3),
expected_num_trainable_weights=0,
expected_num_non_trainable_weights=0,
expected_num_seed_generators=0,
expected_num_losses=0,
supports_masking=True,
)
self.run_layer_test(
layers.Normalization,
init_kwargs={
"axis": -1,
"mean": np.array([0.5, 0.2, -0.1]),
"variance": np.array([0.1, 0.2, 0.3]),
"invert": True,
},
input_shape=(2, 3),
expected_output_shape=(2, 3),
expected_num_trainable_weights=0,
expected_num_non_trainable_weights=0,
expected_num_seed_generators=0,
expected_num_losses=0,
supports_masking=True,
)
@parameterized.parameters([("np",), ("tensor",), ("tf.data")])
def test_normalization_adapt(self, input_type):
x = np.random.random((32, 4))
if input_type == "np":
data = x
elif input_type == "tensor":
data = backend.convert_to_tensor(x)
elif input_type == "tf.data":
data = tf_data.Dataset.from_tensor_slices(x).batch(8)
layer = layers.Normalization()
layer.adapt(data)
self.assertTrue(layer.built)
output = layer(x)
output = backend.convert_to_numpy(output)
self.assertAllClose(np.var(output, axis=0), 1.0, atol=1e-5)
self.assertAllClose(np.mean(output, axis=0), 0.0, atol=1e-5)
# Test in high-dim and with tuple axis.
x = np.random.random((32, 4, 3, 5))
if input_type == "np":
data = x
elif input_type == "tensor":
data = backend.convert_to_tensor(x)
elif input_type == "tf.data":
data = tf_data.Dataset.from_tensor_slices(x).batch(8)
layer = layers.Normalization(axis=(1, 2))
layer.adapt(data)
self.assertTrue(layer.built)
output = layer(x)
output = backend.convert_to_numpy(output)
self.assertAllClose(np.var(output, axis=(0, 3)), 1.0, atol=1e-5)
self.assertAllClose(np.mean(output, axis=(0, 3)), 0.0, atol=1e-5)
def test_normalization_errors(self):
# TODO
pass
@pytest.mark.skipif(
backend.backend() != "torch",
reason="Test symbolic call for torch meta device.",
)
def test_call_on_meta_device_after_built(self):
from keras.backend.torch import core
layer = layers.Normalization()
data = np.random.random((32, 4))
layer.adapt(data)
with core.device_scope("meta"):
layer(data)
| keras/keras/layers/preprocessing/normalization_test.py/0 | {
"file_path": "keras/keras/layers/preprocessing/normalization_test.py",
"repo_id": "keras",
"token_count": 1905
} | 187 |
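A short sketch of `Normalization` with explicit statistics and the `invert=True` round trip tested above; the mean and variance values are illustrative.

import numpy as np
from keras import backend, layers

mean = np.array([0.5, 0.2, -0.1])
variance = np.array([0.1, 0.2, 0.3])

norm = layers.Normalization(mean=mean, variance=variance)
denorm = layers.Normalization(mean=mean, variance=variance, invert=True)

x = np.random.normal(size=(8, 3)).astype("float32")
z = norm(x)  # (x - mean) / sqrt(variance), feature-wise

# Inverting recovers the original values up to float error.
print(np.allclose(backend.convert_to_numpy(denorm(z)), x, atol=1e-5))  # True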
import numpy as np
import pytest
from tensorflow import data as tf_data
from keras import backend
from keras import layers
from keras import testing
class RescalingTest(testing.TestCase):
@pytest.mark.requires_trainable_backend
def test_rescaling_basics(self):
self.run_layer_test(
layers.Rescaling,
init_kwargs={"scale": 1.0 / 255, "offset": 0.5},
input_shape=(2, 3),
expected_output_shape=(2, 3),
expected_num_trainable_weights=0,
expected_num_non_trainable_weights=0,
expected_num_seed_generators=0,
expected_num_losses=0,
supports_masking=True,
)
@pytest.mark.requires_trainable_backend
def test_rescaling_dtypes(self):
# int scale
self.run_layer_test(
layers.Rescaling,
init_kwargs={"scale": 2, "offset": 0.5},
input_shape=(2, 3),
expected_output_shape=(2, 3),
expected_num_trainable_weights=0,
expected_num_non_trainable_weights=0,
expected_num_seed_generators=0,
expected_num_losses=0,
supports_masking=True,
)
# int offset
self.run_layer_test(
layers.Rescaling,
init_kwargs={"scale": 1.0, "offset": 2},
input_shape=(2, 3),
expected_output_shape=(2, 3),
expected_num_trainable_weights=0,
expected_num_non_trainable_weights=0,
expected_num_seed_generators=0,
expected_num_losses=0,
supports_masking=True,
)
# int inputs
self.run_layer_test(
layers.Rescaling,
init_kwargs={"scale": 1.0 / 255, "offset": 0.5},
input_shape=(2, 3),
input_dtype="int16",
expected_output_shape=(2, 3),
expected_num_trainable_weights=0,
expected_num_non_trainable_weights=0,
expected_num_seed_generators=0,
expected_num_losses=0,
supports_masking=True,
)
def test_rescaling_correctness(self):
layer = layers.Rescaling(scale=1.0 / 255, offset=0.5)
x = np.random.random((3, 10, 10, 3)) * 255
out = layer(x)
self.assertAllClose(out, x / 255 + 0.5)
def test_tf_data_compatibility(self):
layer = layers.Rescaling(scale=1.0 / 255, offset=0.5)
x = np.random.random((3, 10, 10, 3)) * 255
ds = tf_data.Dataset.from_tensor_slices(x).batch(3).map(layer)
for output in ds.take(1):
output.numpy()
def test_rescaling_with_channels_first_and_vector_scale(self):
config = backend.image_data_format()
backend.set_image_data_format("channels_first")
layer = layers.Rescaling(
scale=[1.0 / 255, 1.5 / 255, 2.0 / 255], offset=0.5
)
x = np.random.random((2, 3, 10, 10)) * 255
layer(x)
backend.set_image_data_format(config)
| keras/keras/layers/preprocessing/rescaling_test.py/0 | {
"file_path": "keras/keras/layers/preprocessing/rescaling_test.py",
"repo_id": "keras",
"token_count": 1522
} | 188 |
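A tiny arithmetic sketch of what `Rescaling` computes, mirroring the correctness test above; the pixel values are illustrative.

import numpy as np
from keras import backend, layers

x = np.array([[0.0, 127.5, 255.0]], dtype="float32")
layer = layers.Rescaling(scale=1.0 / 255, offset=0.5)

# output == x * scale + offset
print(backend.convert_to_numpy(layer(x)))  # [[0.5, 1.0, 1.5]]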
import numpy as np
import pytest
from keras import backend
from keras import layers
from keras import testing
class GaussianDropoutTest(testing.TestCase):
@pytest.mark.requires_trainable_backend
def test_gaussian_dropout_basics(self):
self.run_layer_test(
layers.GaussianDropout,
init_kwargs={
"rate": 0.2,
},
input_shape=(2, 3),
expected_output_shape=(2, 3),
expected_num_trainable_weights=0,
expected_num_non_trainable_weights=0,
expected_num_seed_generators=1,
expected_num_losses=0,
supports_masking=True,
)
def test_gaussian_dropout_correctness(self):
inputs = np.ones((20, 500))
layer = layers.GaussianDropout(0.3, seed=1337)
outputs = layer(inputs, training=True)
self.assertAllClose(
np.std(backend.convert_to_numpy(outputs)),
np.sqrt(0.3 / (1 - 0.3)),
atol=0.02,
)
| keras/keras/layers/regularization/gaussian_dropout_test.py/0 | {
"file_path": "keras/keras/layers/regularization/gaussian_dropout_test.py",
"repo_id": "keras",
"token_count": 509
} | 189 |
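A small numpy sketch of the multiplicative noise the correctness test above checks: in training mode each unit is scaled by a Gaussian with mean 1 and standard deviation sqrt(rate / (1 - rate)); the seed and shapes are illustrative.

import numpy as np

rate = 0.3
rng = np.random.default_rng(1337)
x = np.ones((20, 500))

noise = rng.normal(loc=1.0, scale=np.sqrt(rate / (1.0 - rate)), size=x.shape)
y = x * noise  # what GaussianDropout applies while training (identity at inference)

print(np.std(y))  # close to sqrt(0.3 / 0.7), roughly 0.65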
from keras import ops
from keras.api_export import keras_export
from keras.layers.input_spec import InputSpec
from keras.layers.layer import Layer
@keras_export("keras.layers.RepeatVector")
class RepeatVector(Layer):
"""Repeats the input n times.
Example:
>>> x = keras.Input(shape=(32,))
>>> y = keras.layers.RepeatVector(3)(x)
>>> y.shape
(None, 3, 32)
Args:
n: Integer, repetition factor.
Input shape:
2D tensor with shape `(batch_size, features)`.
Output shape:
3D tensor with shape `(batch_size, n, features)`.
"""
def __init__(self, n, **kwargs):
super().__init__(**kwargs)
self.n = n
if not isinstance(n, int):
raise TypeError(
f"Expected an integer value for `n`, got {type(n)}."
)
self.input_spec = InputSpec(ndim=2)
def compute_output_shape(self, input_shape):
return (input_shape[0], self.n, input_shape[1])
def call(self, inputs):
input_shape = ops.shape(inputs)
reshaped = ops.reshape(inputs, (input_shape[0], 1, input_shape[1]))
return ops.repeat(reshaped, self.n, axis=1)
def get_config(self):
config = {"n": self.n}
base_config = super().get_config()
return {**base_config, **config}
| keras/keras/layers/reshaping/repeat_vector.py/0 | {
"file_path": "keras/keras/layers/reshaping/repeat_vector.py",
"repo_id": "keras",
"token_count": 577
} | 190 |
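A numpy sketch of the same reshape-and-repeat computation performed by `call` above; the shapes are illustrative.

import numpy as np

x = np.arange(6, dtype="float32").reshape(2, 3)  # (batch, features)
n = 4

# Insert a length-1 axis, then repeat along it: (batch, features) -> (batch, n, features).
repeated = np.repeat(x[:, None, :], n, axis=1)
print(repeated.shape)  # (2, 4, 3)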
import numpy as np
import pytest
from absl.testing import parameterized
from keras import initializers
from keras import layers
from keras import testing
class LSTMTest(testing.TestCase, parameterized.TestCase):
@pytest.mark.requires_trainable_backend
def test_basics(self):
self.run_layer_test(
layers.LSTM,
init_kwargs={"units": 3, "dropout": 0.5, "recurrent_dropout": 0.5},
input_shape=(3, 2, 4),
call_kwargs={"training": True},
expected_output_shape=(3, 3),
expected_num_trainable_weights=3,
expected_num_non_trainable_weights=0,
supports_masking=True,
)
self.run_layer_test(
layers.LSTM,
init_kwargs={
"units": 3,
"return_sequences": True,
"bias_regularizer": "l1",
"kernel_regularizer": "l2",
"recurrent_regularizer": "l2",
},
input_shape=(3, 2, 4),
expected_output_shape=(3, 2, 3),
expected_num_losses=3,
expected_num_trainable_weights=3,
expected_num_non_trainable_weights=0,
supports_masking=True,
)
@parameterized.parameters([1, 2])
def test_correctness(self, implementation):
sequence = np.arange(72).reshape((3, 6, 4)).astype("float32")
layer = layers.LSTM(
3,
kernel_initializer=initializers.Constant(0.01),
recurrent_initializer=initializers.Constant(0.02),
bias_initializer=initializers.Constant(0.03),
implementation=implementation,
)
output = layer(sequence)
self.assertAllClose(
np.array(
[
[0.6288687, 0.6288687, 0.6288687],
[0.86899155, 0.86899155, 0.86899155],
[0.9460773, 0.9460773, 0.9460773],
]
),
output,
)
layer = layers.LSTM(
3,
kernel_initializer=initializers.Constant(0.01),
recurrent_initializer=initializers.Constant(0.02),
bias_initializer=initializers.Constant(0.03),
go_backwards=True,
implementation=implementation,
)
output = layer(sequence)
self.assertAllClose(
np.array(
[
[0.35622165, 0.35622165, 0.35622165],
[0.74789524, 0.74789524, 0.74789524],
[0.8872726, 0.8872726, 0.8872726],
]
),
output,
)
layer = layers.LSTM(
3,
kernel_initializer=initializers.Constant(0.01),
recurrent_initializer=initializers.Constant(0.02),
bias_initializer=initializers.Constant(0.03),
unroll=True,
implementation=implementation,
)
output = layer(sequence)
self.assertAllClose(
np.array(
[
[0.6288687, 0.6288687, 0.6288687],
[0.86899155, 0.86899155, 0.86899155],
[0.9460773, 0.9460773, 0.9460773],
]
),
output,
)
layer = layers.LSTM(
3,
kernel_initializer=initializers.Constant(0.01),
recurrent_initializer=initializers.Constant(0.02),
bias_initializer=initializers.Constant(0.03),
unit_forget_bias=False,
implementation=implementation,
)
output = layer(sequence)
self.assertAllClose(
np.array(
[
[0.57019705, 0.57019705, 0.57019705],
[0.8661914, 0.8661914, 0.8661914],
[0.9459622, 0.9459622, 0.9459622],
]
),
output,
)
layer = layers.LSTM(
3,
kernel_initializer=initializers.Constant(0.01),
recurrent_initializer=initializers.Constant(0.02),
bias_initializer=initializers.Constant(0.03),
use_bias=False,
implementation=implementation,
)
output = layer(sequence)
self.assertAllClose(
np.array(
[
[0.54986924, 0.54986924, 0.54986924],
[0.86226785, 0.86226785, 0.86226785],
[0.9443936, 0.9443936, 0.9443936],
]
),
output,
)
def test_statefulness(self):
sequence = np.arange(24).reshape((2, 3, 4)).astype("float32")
layer = layers.LSTM(
4,
stateful=True,
kernel_initializer=initializers.Constant(0.01),
recurrent_initializer=initializers.Constant(0.02),
bias_initializer=initializers.Constant(0.03),
)
layer(sequence)
output = layer(sequence)
self.assertAllClose(
np.array(
[
[0.3124785, 0.3124785, 0.3124785, 0.3124785],
[0.6863672, 0.6863672, 0.6863672, 0.6863672],
]
),
output,
)
layer.reset_state()
layer(sequence)
output = layer(sequence)
self.assertAllClose(
np.array(
[
[0.3124785, 0.3124785, 0.3124785, 0.3124785],
[0.6863672, 0.6863672, 0.6863672, 0.6863672],
]
),
output,
)
def test_pass_initial_state(self):
sequence = np.arange(24).reshape((2, 4, 3)).astype("float32")
initial_state = [
np.arange(4).reshape((2, 2)).astype("float32"),
np.arange(4).reshape((2, 2)).astype("float32"),
]
layer = layers.LSTM(
2,
kernel_initializer=initializers.Constant(0.01),
recurrent_initializer=initializers.Constant(0.02),
bias_initializer=initializers.Constant(0.03),
)
output = layer(sequence, initial_state=initial_state)
self.assertAllClose(
np.array([[0.20574439, 0.3558822], [0.64930826, 0.66276]]),
output,
)
layer = layers.LSTM(
2,
kernel_initializer=initializers.Constant(0.01),
recurrent_initializer=initializers.Constant(0.02),
bias_initializer=initializers.Constant(0.03),
go_backwards=True,
)
output = layer(sequence, initial_state=initial_state)
self.assertAllClose(
np.array([[0.13281618, 0.2790356], [0.5839337, 0.5992567]]),
output,
)
def test_masking(self):
sequence = np.arange(24).reshape((2, 4, 3)).astype("float32")
mask = np.array([[True, True, False, True], [True, False, False, True]])
layer = layers.LSTM(
2,
kernel_initializer=initializers.Constant(0.01),
recurrent_initializer=initializers.Constant(0.02),
bias_initializer=initializers.Constant(0.03),
unroll=True,
)
output = layer(sequence, mask=mask)
self.assertAllClose(
np.array([[0.1524914, 0.1524914], [0.35969394, 0.35969394]]),
output,
)
layer = layers.LSTM(
2,
kernel_initializer=initializers.Constant(0.01),
recurrent_initializer=initializers.Constant(0.02),
bias_initializer=initializers.Constant(0.03),
return_sequences=True,
)
output = layer(sequence, mask=mask)
self.assertAllClose(
np.array(
[
[0.0158891, 0.0158891],
[0.05552047, 0.05552047],
[0.05552047, 0.05552047],
[0.1524914, 0.1524914],
],
),
output[0],
)
self.assertAllClose(
np.array(
[
[0.14185596, 0.14185596],
[0.14185596, 0.14185596],
[0.14185596, 0.14185596],
[0.35969394, 0.35969394],
],
),
output[1],
)
layer = layers.LSTM(
2,
kernel_initializer=initializers.Constant(0.01),
recurrent_initializer=initializers.Constant(0.02),
bias_initializer=initializers.Constant(0.03),
return_sequences=True,
zero_output_for_mask=True,
)
output = layer(sequence, mask=mask)
self.assertAllClose(
np.array(
[
[0.0158891, 0.0158891],
[0.05552047, 0.05552047],
[0.0, 0.0],
[0.1524914, 0.1524914],
],
),
output[0],
)
self.assertAllClose(
np.array(
[
[0.14185596, 0.14185596],
[0.0, 0.0],
[0.0, 0.0],
[0.35969394, 0.35969394],
],
),
output[1],
)
layer = layers.LSTM(
2,
kernel_initializer=initializers.Constant(0.01),
recurrent_initializer=initializers.Constant(0.02),
bias_initializer=initializers.Constant(0.03),
go_backwards=True,
)
output = layer(sequence, mask=mask)
self.assertAllClose(
np.array([[0.10056866, 0.10056866], [0.31006062, 0.31006062]]),
output,
)
| keras/keras/layers/rnn/lstm_test.py/0 | {
"file_path": "keras/keras/layers/rnn/lstm_test.py",
"repo_id": "keras",
"token_count": 5689
} | 191 |
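A compact usage sketch of two features exercised above: passing an explicit `initial_state` and carrying state across calls with `stateful=True`; shapes and values are illustrative.

import numpy as np
from keras import layers

sequence = np.random.normal(size=(2, 4, 3)).astype("float32")

# Explicit initial state: a hidden state and a cell state, each of shape (batch, units).
lstm = layers.LSTM(2)
h0 = np.zeros((2, 2), dtype="float32")
c0 = np.zeros((2, 2), dtype="float32")
print(lstm(sequence, initial_state=[h0, c0]).shape)  # (2, 2)

# Stateful layer: the final state of one call seeds the next, until reset.
stateful = layers.LSTM(2, stateful=True)
first = stateful(sequence)
second = stateful(sequence)  # continues from the state left by the first call
stateful.reset_state()       # back to zeros, as in the statefulness test above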
"""Deprecated text preprocessing APIs from Keras 1."""
import collections
import hashlib
import json
import warnings
import numpy as np
from keras.api_export import keras_export
@keras_export("keras._legacy.preprocessing.text.text_to_word_sequence")
def text_to_word_sequence(
input_text,
filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n',
lower=True,
split=" ",
):
"""DEPRECATED."""
if lower:
input_text = input_text.lower()
translate_dict = {c: split for c in filters}
translate_map = str.maketrans(translate_dict)
input_text = input_text.translate(translate_map)
seq = input_text.split(split)
return [i for i in seq if i]
@keras_export("keras._legacy.preprocessing.text.one_hot")
def one_hot(
input_text,
n,
filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n',
lower=True,
split=" ",
analyzer=None,
):
"""DEPRECATED."""
return hashing_trick(
input_text,
n,
hash_function=hash,
filters=filters,
lower=lower,
split=split,
analyzer=analyzer,
)
@keras_export("keras._legacy.preprocessing.text.hashing_trick")
def hashing_trick(
text,
n,
hash_function=None,
filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n',
lower=True,
split=" ",
analyzer=None,
):
"""DEPRECATED."""
if hash_function is None:
hash_function = hash
elif hash_function == "md5":
def hash_function(w):
return int(hashlib.md5(w.encode()).hexdigest(), 16)
if analyzer is None:
seq = text_to_word_sequence(
text, filters=filters, lower=lower, split=split
)
else:
seq = analyzer(text)
return [(hash_function(w) % (n - 1) + 1) for w in seq]
@keras_export("keras._legacy.preprocessing.text.Tokenizer")
class Tokenizer(object):
"""DEPRECATED."""
def __init__(
self,
num_words=None,
filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n',
lower=True,
split=" ",
char_level=False,
oov_token=None,
analyzer=None,
**kwargs
):
# Legacy support
if "nb_words" in kwargs:
warnings.warn(
"The `nb_words` argument in `Tokenizer` "
"has been renamed `num_words`."
)
num_words = kwargs.pop("nb_words")
document_count = kwargs.pop("document_count", 0)
if kwargs:
raise TypeError("Unrecognized keyword arguments: " + str(kwargs))
self.word_counts = collections.OrderedDict()
self.word_docs = collections.defaultdict(int)
self.filters = filters
self.split = split
self.lower = lower
self.num_words = num_words
self.document_count = document_count
self.char_level = char_level
self.oov_token = oov_token
self.index_docs = collections.defaultdict(int)
self.word_index = {}
self.index_word = {}
self.analyzer = analyzer
def fit_on_texts(self, texts):
for text in texts:
self.document_count += 1
if self.char_level or isinstance(text, list):
if self.lower:
if isinstance(text, list):
text = [text_elem.lower() for text_elem in text]
else:
text = text.lower()
seq = text
else:
if self.analyzer is None:
seq = text_to_word_sequence(
text,
filters=self.filters,
lower=self.lower,
split=self.split,
)
else:
seq = self.analyzer(text)
for w in seq:
if w in self.word_counts:
self.word_counts[w] += 1
else:
self.word_counts[w] = 1
for w in set(seq):
# In how many documents each word occurs
self.word_docs[w] += 1
wcounts = list(self.word_counts.items())
wcounts.sort(key=lambda x: x[1], reverse=True)
# forcing the oov_token to index 1 if it exists
if self.oov_token is None:
sorted_voc = []
else:
sorted_voc = [self.oov_token]
sorted_voc.extend(wc[0] for wc in wcounts)
# note that index 0 is reserved, never assigned to an existing word
self.word_index = dict(
zip(sorted_voc, list(range(1, len(sorted_voc) + 1)))
)
self.index_word = {c: w for w, c in self.word_index.items()}
for w, c in list(self.word_docs.items()):
self.index_docs[self.word_index[w]] = c
def fit_on_sequences(self, sequences):
self.document_count += len(sequences)
for seq in sequences:
seq = set(seq)
for i in seq:
self.index_docs[i] += 1
def texts_to_sequences(self, texts):
return list(self.texts_to_sequences_generator(texts))
def texts_to_sequences_generator(self, texts):
num_words = self.num_words
oov_token_index = self.word_index.get(self.oov_token)
for text in texts:
if self.char_level or isinstance(text, list):
if self.lower:
if isinstance(text, list):
text = [text_elem.lower() for text_elem in text]
else:
text = text.lower()
seq = text
else:
if self.analyzer is None:
seq = text_to_word_sequence(
text,
filters=self.filters,
lower=self.lower,
split=self.split,
)
else:
seq = self.analyzer(text)
vect = []
for w in seq:
i = self.word_index.get(w)
if i is not None:
if num_words and i >= num_words:
if oov_token_index is not None:
vect.append(oov_token_index)
else:
vect.append(i)
elif self.oov_token is not None:
vect.append(oov_token_index)
yield vect
def sequences_to_texts(self, sequences):
return list(self.sequences_to_texts_generator(sequences))
def sequences_to_texts_generator(self, sequences):
num_words = self.num_words
oov_token_index = self.word_index.get(self.oov_token)
for seq in sequences:
vect = []
for num in seq:
word = self.index_word.get(num)
if word is not None:
if num_words and num >= num_words:
if oov_token_index is not None:
vect.append(self.index_word[oov_token_index])
else:
vect.append(word)
elif self.oov_token is not None:
vect.append(self.index_word[oov_token_index])
vect = " ".join(vect)
yield vect
def texts_to_matrix(self, texts, mode="binary"):
sequences = self.texts_to_sequences(texts)
return self.sequences_to_matrix(sequences, mode=mode)
def sequences_to_matrix(self, sequences, mode="binary"):
if not self.num_words:
if self.word_index:
num_words = len(self.word_index) + 1
else:
raise ValueError(
"Specify a dimension (`num_words` argument), "
"or fit on some text data first."
)
else:
num_words = self.num_words
if mode == "tfidf" and not self.document_count:
raise ValueError(
"Fit the Tokenizer on some data before using tfidf mode."
)
x = np.zeros((len(sequences), num_words))
for i, seq in enumerate(sequences):
if not seq:
continue
counts = collections.defaultdict(int)
for j in seq:
if j >= num_words:
continue
counts[j] += 1
for j, c in list(counts.items()):
if mode == "count":
x[i][j] = c
elif mode == "freq":
x[i][j] = c / len(seq)
elif mode == "binary":
x[i][j] = 1
elif mode == "tfidf":
# Use weighting scheme 2 in
# https://en.wikipedia.org/wiki/Tf%E2%80%93idf
tf = 1 + np.log(c)
idf = np.log(
1
+ self.document_count / (1 + self.index_docs.get(j, 0))
)
x[i][j] = tf * idf
else:
raise ValueError("Unknown vectorization mode:", mode)
return x
def get_config(self):
json_word_counts = json.dumps(self.word_counts)
json_word_docs = json.dumps(self.word_docs)
json_index_docs = json.dumps(self.index_docs)
json_word_index = json.dumps(self.word_index)
json_index_word = json.dumps(self.index_word)
return {
"num_words": self.num_words,
"filters": self.filters,
"lower": self.lower,
"split": self.split,
"char_level": self.char_level,
"oov_token": self.oov_token,
"document_count": self.document_count,
"word_counts": json_word_counts,
"word_docs": json_word_docs,
"index_docs": json_index_docs,
"index_word": json_index_word,
"word_index": json_word_index,
}
def to_json(self, **kwargs):
config = self.get_config()
tokenizer_config = {
"class_name": self.__class__.__name__,
"config": config,
}
return json.dumps(tokenizer_config, **kwargs)
@keras_export("keras._legacy.preprocessing.text.tokenizer_from_json")
def tokenizer_from_json(json_string):
"""DEPRECATED."""
tokenizer_config = json.loads(json_string)
config = tokenizer_config.get("config")
word_counts = json.loads(config.pop("word_counts"))
word_docs = json.loads(config.pop("word_docs"))
index_docs = json.loads(config.pop("index_docs"))
# Integer indexing gets converted to strings with json.dumps()
index_docs = {int(k): v for k, v in index_docs.items()}
index_word = json.loads(config.pop("index_word"))
index_word = {int(k): v for k, v in index_word.items()}
word_index = json.loads(config.pop("word_index"))
tokenizer = Tokenizer(**config)
tokenizer.word_counts = word_counts
tokenizer.word_docs = word_docs
tokenizer.index_docs = index_docs
tokenizer.word_index = word_index
tokenizer.index_word = index_word
return tokenizer
| keras/keras/legacy/preprocessing/text.py/0 | {"file_path": "keras/keras/legacy/preprocessing/text.py", "repo_id": "keras", "token_count": 5877} | 192 |
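# A minimal usage sketch for the legacy `Tokenizer` defined above (illustrative
# only; it assumes the `Tokenizer` class and `tokenizer_from_json` are in scope,
# e.g. imported from the module shown above -- the exact import path is an
# assumption, not part of the source).
texts = ["The quick brown fox", "jumped over the lazy dog"]
tokenizer = Tokenizer(num_words=10, oov_token="<unk>")
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)           # lists of word indices
matrix = tokenizer.texts_to_matrix(texts, mode="tfidf")   # shape (2, 10)
# Round-trip through JSON: the restored tokenizer yields identical sequences.
restored = tokenizer_from_json(tokenizer.to_json())
assert restored.texts_to_sequences(texts) == sequences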
import re
import numpy as np
from keras import testing
from keras.metrics import accuracy_metrics
class AccuracyTest(testing.TestCase):
def test_config(self):
acc_obj = accuracy_metrics.Accuracy(name="accuracy", dtype="float32")
self.assertEqual(acc_obj.name, "accuracy")
self.assertEqual(len(acc_obj.variables), 2)
self.assertEqual(acc_obj._dtype, "float32")
# Test get_config
acc_obj_config = acc_obj.get_config()
self.assertEqual(acc_obj_config["name"], "accuracy")
self.assertEqual(acc_obj_config["dtype"], "float32")
# Check save and restore config
acc_obj2 = accuracy_metrics.Accuracy.from_config(acc_obj_config)
self.assertEqual(acc_obj2.name, "accuracy")
self.assertEqual(len(acc_obj2.variables), 2)
self.assertEqual(acc_obj2._dtype, "float32")
def test_unweighted(self):
acc_obj = accuracy_metrics.Accuracy(name="accuracy", dtype="float32")
y_true = np.array([[1], [2], [3], [4]])
y_pred = np.array([[0], [2], [3], [4]])
acc_obj.update_state(y_true, y_pred)
result = acc_obj.result()
self.assertAllClose(result, 0.75, atol=1e-3)
def test_weighted(self):
acc_obj = accuracy_metrics.Accuracy(name="accuracy", dtype="float32")
y_true = np.array([[1], [2], [3], [4]])
y_pred = np.array([[0], [2], [3], [4]])
sample_weight = np.array([1, 1, 0, 0])
acc_obj.update_state(y_true, y_pred, sample_weight=sample_weight)
result = acc_obj.result()
self.assertAllClose(result, 0.5, atol=1e-3)
class BinaryAccuracyTest(testing.TestCase):
def test_config(self):
bin_acc_obj = accuracy_metrics.BinaryAccuracy(
name="binary_accuracy", dtype="float32"
)
self.assertEqual(bin_acc_obj.name, "binary_accuracy")
self.assertEqual(len(bin_acc_obj.variables), 2)
self.assertEqual(bin_acc_obj._dtype, "float32")
# Test get_config
bin_acc_obj_config = bin_acc_obj.get_config()
self.assertEqual(bin_acc_obj_config["name"], "binary_accuracy")
self.assertEqual(bin_acc_obj_config["dtype"], "float32")
# Check save and restore config
bin_acc_obj2 = accuracy_metrics.BinaryAccuracy.from_config(
bin_acc_obj_config
)
self.assertEqual(bin_acc_obj2.name, "binary_accuracy")
self.assertEqual(len(bin_acc_obj2.variables), 2)
self.assertEqual(bin_acc_obj2._dtype, "float32")
def test_unweighted(self):
bin_acc_obj = accuracy_metrics.BinaryAccuracy(
name="binary_accuracy", dtype="float32"
)
y_true = np.array([[1], [1], [0], [0]])
y_pred = np.array([[0.98], [1], [0], [0.6]])
bin_acc_obj.update_state(y_true, y_pred)
result = bin_acc_obj.result()
self.assertAllClose(result, 0.75, atol=1e-3)
# Test broadcasting case
bin_acc_obj = accuracy_metrics.BinaryAccuracy(
name="binary_accuracy", dtype="float32"
)
y_true = np.array([1, 1, 0, 0])
y_pred = np.array([[0.98], [1], [0], [0.6]])
bin_acc_obj.update_state(y_true, y_pred)
result = bin_acc_obj.result()
self.assertAllClose(result, 0.75, atol=1e-3)
def test_weighted(self):
bin_acc_obj = accuracy_metrics.BinaryAccuracy(
name="binary_accuracy", dtype="float32"
)
y_true = np.array([[1], [1], [0], [0]])
y_pred = np.array([[0.98], [1], [0], [0.6]])
sample_weight = np.array([1, 0, 0, 1])
bin_acc_obj.update_state(y_true, y_pred, sample_weight=sample_weight)
result = bin_acc_obj.result()
self.assertAllClose(result, 0.5, atol=1e-3)
def test_threshold(self):
bin_acc_obj_1 = accuracy_metrics.BinaryAccuracy(
name="binary_accuracy", dtype="float32", threshold=0.3
)
bin_acc_obj_2 = accuracy_metrics.BinaryAccuracy(
name="binary_accuracy", dtype="float32", threshold=0.9
)
y_true = np.array([[1], [1], [0], [0]])
y_pred = np.array([[0.98], [0.5], [0.1], [0.2]])
bin_acc_obj_1.update_state(y_true, y_pred)
bin_acc_obj_2.update_state(y_true, y_pred)
result_1 = bin_acc_obj_1.result()
result_2 = bin_acc_obj_2.result()
# Higher threshold must result in lower measured accuracy.
self.assertAllClose(result_1, 1.0)
self.assertAllClose(result_2, 0.75)
def test_invalid_threshold(self):
self.assertRaisesRegex(
ValueError,
re.compile(r"Invalid value for argument `threshold`"),
lambda: accuracy_metrics.BinaryAccuracy(threshold=-0.5),
)
self.assertRaisesRegex(
ValueError,
re.compile(r"Invalid value for argument `threshold`"),
lambda: accuracy_metrics.BinaryAccuracy(threshold=1.5),
)
class CategoricalAccuracyTest(testing.TestCase):
def test_config(self):
cat_acc_obj = accuracy_metrics.CategoricalAccuracy(
name="categorical_accuracy", dtype="float32"
)
self.assertEqual(cat_acc_obj.name, "categorical_accuracy")
self.assertEqual(len(cat_acc_obj.variables), 2)
self.assertEqual(cat_acc_obj._dtype, "float32")
# Test get_config
cat_acc_obj_config = cat_acc_obj.get_config()
self.assertEqual(cat_acc_obj_config["name"], "categorical_accuracy")
self.assertEqual(cat_acc_obj_config["dtype"], "float32")
# Check save and restore config
cat_acc_obj2 = accuracy_metrics.CategoricalAccuracy.from_config(
cat_acc_obj_config
)
self.assertEqual(cat_acc_obj2.name, "categorical_accuracy")
self.assertEqual(len(cat_acc_obj2.variables), 2)
self.assertEqual(cat_acc_obj2._dtype, "float32")
def test_unweighted(self):
cat_acc_obj = accuracy_metrics.CategoricalAccuracy(
name="categorical_accuracy", dtype="float32"
)
y_true = np.array([[0, 0, 1], [0, 1, 0]])
y_pred = np.array([[0.1, 0.9, 0.8], [0.05, 0.95, 0]])
cat_acc_obj.update_state(y_true, y_pred)
result = cat_acc_obj.result()
self.assertAllClose(result, 0.5, atol=1e-3)
def test_weighted(self):
cat_acc_obj = accuracy_metrics.CategoricalAccuracy(
name="categorical_accuracy", dtype="float32"
)
y_true = np.array([[0, 0, 1], [0, 1, 0]])
y_pred = np.array([[0.1, 0.9, 0.8], [0.05, 0.95, 0]])
sample_weight = np.array([0.7, 0.3])
cat_acc_obj.update_state(y_true, y_pred, sample_weight=sample_weight)
result = cat_acc_obj.result()
self.assertAllClose(result, 0.3, atol=1e-3)
class SparseCategoricalAccuracyTest(testing.TestCase):
def test_config(self):
sp_cat_acc_obj = accuracy_metrics.SparseCategoricalAccuracy(
name="sparse_categorical_accuracy", dtype="float32"
)
self.assertEqual(sp_cat_acc_obj.name, "sparse_categorical_accuracy")
self.assertEqual(len(sp_cat_acc_obj.variables), 2)
self.assertEqual(sp_cat_acc_obj._dtype, "float32")
# Test get_config
sp_cat_acc_obj_config = sp_cat_acc_obj.get_config()
self.assertEqual(
sp_cat_acc_obj_config["name"], "sparse_categorical_accuracy"
)
self.assertEqual(sp_cat_acc_obj_config["dtype"], "float32")
# Check save and restore config
sp_cat_acc_obj2 = (
accuracy_metrics.SparseCategoricalAccuracy.from_config(
sp_cat_acc_obj_config
)
)
self.assertEqual(sp_cat_acc_obj2.name, "sparse_categorical_accuracy")
self.assertEqual(len(sp_cat_acc_obj2.variables), 2)
self.assertEqual(sp_cat_acc_obj2._dtype, "float32")
def test_unweighted(self):
sp_cat_acc_obj = accuracy_metrics.SparseCategoricalAccuracy(
name="sparse_categorical_accuracy", dtype="float32"
)
y_true = np.array([[2], [1]])
y_pred = np.array([[0.1, 0.6, 0.3], [0.05, 0.95, 0]])
sp_cat_acc_obj.update_state(y_true, y_pred)
result = sp_cat_acc_obj.result()
self.assertAllClose(result, 0.5, atol=1e-3)
def test_weighted(self):
sp_cat_acc_obj = accuracy_metrics.SparseCategoricalAccuracy(
name="sparse_categorical_accuracy", dtype="float32"
)
y_true = np.array([[2], [1]])
y_pred = np.array([[0.1, 0.6, 0.3], [0.05, 0.95, 0]])
sample_weight = np.array([0.7, 0.3])
sp_cat_acc_obj.update_state(y_true, y_pred, sample_weight=sample_weight)
result = sp_cat_acc_obj.result()
self.assertAllClose(result, 0.3, atol=1e-3)
class TopKCategoricalAccuracyTest(testing.TestCase):
def test_config(self):
top_k_cat_acc_obj = accuracy_metrics.TopKCategoricalAccuracy(
k=1, name="top_k_categorical_accuracy", dtype="float32"
)
self.assertEqual(top_k_cat_acc_obj.name, "top_k_categorical_accuracy")
self.assertEqual(len(top_k_cat_acc_obj.variables), 2)
self.assertEqual(top_k_cat_acc_obj._dtype, "float32")
# Test get_config
top_k_cat_acc_obj_config = top_k_cat_acc_obj.get_config()
self.assertEqual(
top_k_cat_acc_obj_config["name"], "top_k_categorical_accuracy"
)
self.assertEqual(top_k_cat_acc_obj_config["dtype"], "float32")
self.assertEqual(top_k_cat_acc_obj_config["k"], 1)
# Check save and restore config
top_k_cat_acc_obj2 = (
accuracy_metrics.TopKCategoricalAccuracy.from_config(
top_k_cat_acc_obj_config
)
)
self.assertEqual(top_k_cat_acc_obj2.name, "top_k_categorical_accuracy")
self.assertEqual(len(top_k_cat_acc_obj2.variables), 2)
self.assertEqual(top_k_cat_acc_obj2._dtype, "float32")
self.assertEqual(top_k_cat_acc_obj2.k, 1)
def test_unweighted(self):
top_k_cat_acc_obj = accuracy_metrics.TopKCategoricalAccuracy(
k=1, name="top_k_categorical_accuracy", dtype="float32"
)
y_true = np.array([[0, 0, 1], [0, 1, 0]])
y_pred = np.array([[0.1, 0.9, 0.8], [0.05, 0.95, 0]], dtype="float32")
top_k_cat_acc_obj.update_state(y_true, y_pred)
result = top_k_cat_acc_obj.result()
self.assertAllClose(result, 0.5, atol=1e-3)
def test_weighted(self):
top_k_cat_acc_obj = accuracy_metrics.TopKCategoricalAccuracy(
k=1, name="top_k_categorical_accuracy", dtype="float32"
)
y_true = np.array([[0, 0, 1], [0, 1, 0]])
y_pred = np.array([[0.1, 0.9, 0.8], [0.05, 0.95, 0]], dtype="float32")
sample_weight = np.array([0.7, 0.3])
top_k_cat_acc_obj.update_state(
y_true, y_pred, sample_weight=sample_weight
)
result = top_k_cat_acc_obj.result()
self.assertAllClose(result, 0.3, atol=1e-3)
class SparseTopKCategoricalAccuracyTest(testing.TestCase):
def test_config(self):
sp_top_k_cat_acc_obj = accuracy_metrics.SparseTopKCategoricalAccuracy(
k=1, name="sparse_top_k_categorical_accuracy", dtype="float32"
)
self.assertEqual(
sp_top_k_cat_acc_obj.name, "sparse_top_k_categorical_accuracy"
)
self.assertEqual(len(sp_top_k_cat_acc_obj.variables), 2)
self.assertEqual(sp_top_k_cat_acc_obj._dtype, "float32")
# Test get_config
sp_top_k_cat_acc_obj_config = sp_top_k_cat_acc_obj.get_config()
self.assertEqual(
sp_top_k_cat_acc_obj_config["name"],
"sparse_top_k_categorical_accuracy",
)
self.assertEqual(sp_top_k_cat_acc_obj_config["dtype"], "float32")
self.assertEqual(sp_top_k_cat_acc_obj_config["k"], 1)
# Check save and restore config
sp_top_k_cat_acc_obj2 = (
accuracy_metrics.SparseTopKCategoricalAccuracy.from_config(
sp_top_k_cat_acc_obj_config
)
)
self.assertEqual(
sp_top_k_cat_acc_obj2.name, "sparse_top_k_categorical_accuracy"
)
self.assertEqual(len(sp_top_k_cat_acc_obj2.variables), 2)
self.assertEqual(sp_top_k_cat_acc_obj2._dtype, "float32")
self.assertEqual(sp_top_k_cat_acc_obj2.k, 1)
def test_unweighted(self):
sp_top_k_cat_acc_obj = accuracy_metrics.SparseTopKCategoricalAccuracy(
k=1, name="sparse_top_k_categorical_accuracy", dtype="float32"
)
y_true = np.array([2, 1])
y_pred = np.array([[0.1, 0.9, 0.8], [0.05, 0.95, 0]], dtype="float32")
sp_top_k_cat_acc_obj.update_state(y_true, y_pred)
result = sp_top_k_cat_acc_obj.result()
self.assertAllClose(result, 0.5, atol=1e-3)
def test_weighted(self):
sp_top_k_cat_acc_obj = accuracy_metrics.SparseTopKCategoricalAccuracy(
k=1, name="sparse_top_k_categorical_accuracy", dtype="float32"
)
y_true = np.array([2, 1])
y_pred = np.array([[0.1, 0.9, 0.8], [0.05, 0.95, 0]], dtype="float32")
sample_weight = np.array([0.7, 0.3])
sp_top_k_cat_acc_obj.update_state(
y_true, y_pred, sample_weight=sample_weight
)
result = sp_top_k_cat_acc_obj.result()
self.assertAllClose(result, 0.3, atol=1e-3)
| keras/keras/metrics/accuracy_metrics_test.py/0 | {"file_path": "keras/keras/metrics/accuracy_metrics_test.py", "repo_id": "keras", "token_count": 6687} | 193 |
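# Illustrative recomputation of the weighted BinaryAccuracy case exercised in
# the tests above, assuming `keras` and `numpy` are installed. It shows how the
# sample weights enter the weighted average.
import numpy as np
import keras

m = keras.metrics.BinaryAccuracy(threshold=0.5)
y_true = np.array([[1], [1], [0], [0]])
y_pred = np.array([[0.98], [1.0], [0.0], [0.6]])
# Correctness after thresholding at 0.5: [1, 1, 1, 0]
m.update_state(y_true, y_pred, sample_weight=np.array([1, 0, 0, 1]))
# Weighted accuracy = (1*1 + 1*0 + 1*0 + 0*1) / (1 + 0 + 0 + 1) = 0.5
print(float(m.result()))  # ~0.5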
import warnings
from keras import initializers
from keras import ops
from keras.api_export import keras_export
from keras.losses.loss import squeeze_or_expand_to_same_rank
from keras.losses.losses import log_cosh
from keras.losses.losses import mean_absolute_error
from keras.losses.losses import mean_absolute_percentage_error
from keras.losses.losses import mean_squared_error
from keras.losses.losses import mean_squared_logarithmic_error
from keras.metrics import reduction_metrics
from keras.utils.numerical_utils import normalize
@keras_export("keras.metrics.MeanSquaredError")
class MeanSquaredError(reduction_metrics.MeanMetricWrapper):
"""Computes the mean squared error between `y_true` and `y_pred`.
Formula:
```python
loss = mean(square(y_true - y_pred))
```
Args:
name: (Optional) string name of the metric instance.
dtype: (Optional) data type of the metric result.
Example:
>>> m = keras.metrics.MeanSquaredError()
>>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]])
>>> m.result()
0.25
"""
def __init__(self, name="mean_squared_error", dtype=None):
super().__init__(fn=mean_squared_error, name=name, dtype=dtype)
# Metric should be minimized during optimization.
self._direction = "down"
def get_config(self):
return {"name": self.name, "dtype": self.dtype}
@keras_export("keras.metrics.MeanAbsoluteError")
class MeanAbsoluteError(reduction_metrics.MeanMetricWrapper):
"""Computes the mean absolute error between the labels and predictions.
Formula:
```python
loss = mean(abs(y_true - y_pred))
```
Args:
name: (Optional) string name of the metric instance.
dtype: (Optional) data type of the metric result.
Examples:
Standalone usage:
>>> m = keras.metrics.MeanAbsoluteError()
>>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]])
>>> m.result()
0.25
>>> m.reset_state()
>>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]],
... sample_weight=[1, 0])
>>> m.result()
0.5
Usage with `compile()` API:
```python
model.compile(
optimizer='sgd',
loss='mse',
metrics=[keras.metrics.MeanAbsoluteError()])
```
"""
def __init__(self, name="mean_absolute_error", dtype=None):
super().__init__(mean_absolute_error, name, dtype=dtype)
# Metric should be minimized during optimization.
self._direction = "down"
def get_config(self):
return {"name": self.name, "dtype": self.dtype}
@keras_export("keras.metrics.MeanAbsolutePercentageError")
class MeanAbsolutePercentageError(reduction_metrics.MeanMetricWrapper):
"""Computes mean absolute percentage error between `y_true` and `y_pred`.
Formula:
```python
loss = 100 * mean(abs((y_true - y_pred) / y_true))
```
Args:
name: (Optional) string name of the metric instance.
dtype: (Optional) data type of the metric result.
Examples:
Standalone usage:
>>> m = keras.metrics.MeanAbsolutePercentageError()
>>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]])
>>> m.result()
250000000.0
>>> m.reset_state()
>>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]],
... sample_weight=[1, 0])
>>> m.result()
500000000.0
Usage with `compile()` API:
```python
model.compile(
optimizer='sgd',
loss='mse',
metrics=[keras.metrics.MeanAbsolutePercentageError()])
```
"""
def __init__(self, name="mean_absolute_percentage_error", dtype=None):
super().__init__(mean_absolute_percentage_error, name, dtype=dtype)
# Metric should be minimized during optimization.
self._direction = "down"
def get_config(self):
return {"name": self.name, "dtype": self.dtype}
@keras_export("keras.metrics.MeanSquaredLogarithmicError")
class MeanSquaredLogarithmicError(reduction_metrics.MeanMetricWrapper):
"""Computes mean squared logarithmic error between `y_true` and `y_pred`.
Formula:
```python
loss = mean(square(log(y_true + 1) - log(y_pred + 1)))
```
Args:
name: (Optional) string name of the metric instance.
dtype: (Optional) data type of the metric result.
Examples:
Standalone usage:
>>> m = keras.metrics.MeanSquaredLogarithmicError()
>>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]])
>>> m.result()
0.12011322
>>> m.reset_state()
>>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]],
... sample_weight=[1, 0])
>>> m.result()
0.24022643
Usage with `compile()` API:
```python
model.compile(
optimizer='sgd',
loss='mse',
metrics=[keras.metrics.MeanSquaredLogarithmicError()])
```
"""
def __init__(self, name="mean_squared_logarithmic_error", dtype=None):
super().__init__(mean_squared_logarithmic_error, name, dtype=dtype)
# Metric should be minimized during optimization.
self._direction = "down"
def get_config(self):
return {"name": self.name, "dtype": self.dtype}
@keras_export("keras.metrics.RootMeanSquaredError")
class RootMeanSquaredError(reduction_metrics.Mean):
"""Computes root mean squared error metric between `y_true` and `y_pred`.
Formula:
```python
loss = sqrt(mean((y_pred - y_true) ** 2))
```
Args:
name: (Optional) string name of the metric instance.
dtype: (Optional) data type of the metric result.
Examples:
Standalone usage:
>>> m = keras.metrics.RootMeanSquaredError()
>>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]])
>>> m.result()
0.5
>>> m.reset_state()
>>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]],
... sample_weight=[1, 0])
>>> m.result()
0.70710677
Usage with `compile()` API:
```python
model.compile(
optimizer='sgd',
loss='mse',
metrics=[keras.metrics.RootMeanSquaredError()])
```
"""
def __init__(self, name="root_mean_squared_error", dtype=None):
super().__init__(name, dtype=dtype)
# Metric should be minimized during optimization.
self._direction = "down"
def update_state(self, y_true, y_pred, sample_weight=None):
"""Accumulates root mean squared error statistics.
Args:
y_true: The ground truth values.
y_pred: The predicted values.
sample_weight: Optional weighting of each example. Can
be a `Tensor` whose rank is either 0, or the same rank as
`y_true`, and must be broadcastable to `y_true`.
Defaults to `1`.
Returns:
Update op.
"""
y_true = ops.convert_to_tensor(y_true, self._dtype)
y_pred = ops.convert_to_tensor(y_pred, self._dtype)
y_true, y_pred = squeeze_or_expand_to_same_rank(y_true, y_pred)
error_sq = ops.square(y_pred - y_true)
return super().update_state(error_sq, sample_weight=sample_weight)
def result(self):
return ops.sqrt(super().result())
@keras_export("keras.metrics.CosineSimilarity")
class CosineSimilarity(reduction_metrics.MeanMetricWrapper):
"""Computes the cosine similarity between the labels and predictions.
Formula:
```python
loss = sum(l2_norm(y_true) * l2_norm(y_pred))
```
See: [Cosine Similarity](https://en.wikipedia.org/wiki/Cosine_similarity).
This metric keeps the average cosine similarity between `predictions` and
`labels` over a stream of data.
Args:
name: (Optional) string name of the metric instance.
dtype: (Optional) data type of the metric result.
axis: (Optional) Defaults to `-1`. The dimension along which the cosine
similarity is computed.
Examples:
Standalone usage:
>>> # l2_norm(y_true) = [[0., 1.], [1./1.414, 1./1.414]]
>>> # l2_norm(y_pred) = [[1., 0.], [1./1.414, 1./1.414]]
>>> # l2_norm(y_true) . l2_norm(y_pred) = [[0., 0.], [0.5, 0.5]]
>>> # result = mean(sum(l2_norm(y_true) . l2_norm(y_pred), axis=1))
>>> # = ((0. + 0.) + (0.5 + 0.5)) / 2
>>> m = keras.metrics.CosineSimilarity(axis=1)
>>> m.update_state([[0., 1.], [1., 1.]], [[1., 0.], [1., 1.]])
>>> m.result()
0.49999997
>>> m.reset_state()
>>> m.update_state([[0., 1.], [1., 1.]], [[1., 0.], [1., 1.]],
... sample_weight=[0.3, 0.7])
>>> m.result()
0.6999999
Usage with `compile()` API:
```python
model.compile(
optimizer='sgd',
loss='mse',
metrics=[keras.metrics.CosineSimilarity(axis=1)])
```
"""
def __init__(self, name="cosine_similarity", dtype=None, axis=-1):
super().__init__(cosine_similarity, name, dtype=dtype, axis=axis)
# Metric should be maximized during optimization.
self._direction = "up"
def get_config(self):
return {"name": self.name, "dtype": self.dtype}
@keras_export("keras.metrics.LogCoshError")
class LogCoshError(reduction_metrics.MeanMetricWrapper):
"""Computes the logarithm of the hyperbolic cosine of the prediction error.
Formula:
```python
error = y_pred - y_true
logcosh = mean(log((exp(error) + exp(-error))/2), axis=-1)
```
Args:
name: (Optional) string name of the metric instance.
dtype: (Optional) data type of the metric result.
Examples:
Standalone usage:
>>> m = keras.metrics.LogCoshError()
>>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]])
>>> m.result()
0.10844523
>>> m.reset_state()
>>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]],
... sample_weight=[1, 0])
>>> m.result()
0.21689045
Usage with `compile()` API:
```python
model.compile(optimizer='sgd',
loss='mse',
metrics=[keras.metrics.LogCoshError()])
```
"""
def __init__(self, name="logcosh", dtype=None):
super().__init__(log_cosh, name, dtype=dtype)
# Metric should be minimized during optimization.
self._direction = "down"
def get_config(self):
return {"name": self.name, "dtype": self.dtype}
# Adapted from TF-Addons implementation (RSquare class).
@keras_export("keras.metrics.R2Score")
class R2Score(reduction_metrics.Metric):
"""Computes R2 score.
Formula:
```python
sum_squares_residuals = sum((y_true - y_pred) ** 2)
sum_squares = sum((y_true - mean(y_true)) ** 2)
R2 = 1 - sum_squares_residuals / sum_squares
```
This is also called the
[coefficient of determination](
https://en.wikipedia.org/wiki/Coefficient_of_determination).
It indicates how close the fitted regression line
is to ground-truth data.
    - The highest score possible is 1.0. It indicates that the predictors
        perfectly account for variation in the target.
- A score of 0.0 indicates that the predictors do not
account for variation in the target.
- It can also be negative if the model is worse than random.
This metric can also compute the "Adjusted R2" score.
Args:
class_aggregation: Specifies how to aggregate scores corresponding to
different output classes (or target dimensions),
i.e. different dimensions on the last axis of the predictions.
Equivalent to `multioutput` argument in Scikit-Learn.
Should be one of
`None` (no aggregation), `"uniform_average"`,
`"variance_weighted_average"`.
num_regressors: Number of independent regressors used
("Adjusted R2" score). 0 is the standard R2 score.
Defaults to `0`.
name: Optional. string name of the metric instance.
dtype: Optional. data type of the metric result.
Example:
>>> y_true = np.array([[1], [4], [3]], dtype=np.float32)
>>> y_pred = np.array([[2], [4], [4]], dtype=np.float32)
>>> metric = keras.metrics.R2Score()
>>> metric.update_state(y_true, y_pred)
>>> result = metric.result()
>>> result
0.57142854
"""
def __init__(
self,
class_aggregation="uniform_average",
num_regressors=0,
name="r2_score",
dtype=None,
):
super().__init__(name=name, dtype=dtype)
# Metric should be maximized during optimization.
self._direction = "up"
valid_class_aggregation_values = (
None,
"uniform_average",
"variance_weighted_average",
)
if class_aggregation not in valid_class_aggregation_values:
raise ValueError(
"Invalid value for argument `class_aggregation`. Expected "
f"one of {valid_class_aggregation_values}. "
f"Received: class_aggregation={class_aggregation}"
)
if num_regressors < 0:
raise ValueError(
"Invalid value for argument `num_regressors`. "
"Expected a value >= 0. "
f"Received: num_regressors={num_regressors}"
)
self.class_aggregation = class_aggregation
self.num_regressors = num_regressors
self.num_samples = self.add_variable(
shape=(),
initializer=initializers.Zeros(),
name="num_samples",
)
self._built = False
def _build(self, y_true_shape, y_pred_shape):
if len(y_pred_shape) != 2 or len(y_true_shape) != 2:
raise ValueError(
"R2Score expects 2D inputs with shape "
"(batch_size, output_dim). Received input "
f"shapes: y_pred.shape={y_pred_shape} and "
f"y_true.shape={y_true_shape}."
)
if y_pred_shape[-1] is None or y_true_shape[-1] is None:
raise ValueError(
"R2Score expects 2D inputs with shape "
"(batch_size, output_dim), with output_dim fully "
"defined (not None). Received input "
f"shapes: y_pred.shape={y_pred_shape} and "
f"y_true.shape={y_true_shape}."
)
num_classes = y_pred_shape[-1]
self.squared_sum = self.add_variable(
name="squared_sum",
shape=[num_classes],
initializer=initializers.Zeros(),
)
self.sum = self.add_variable(
name="sum",
shape=[num_classes],
initializer=initializers.Zeros(),
)
self.total_mse = self.add_variable(
name="residual",
shape=[num_classes],
initializer=initializers.Zeros(),
)
self.count = self.add_variable(
name="count",
shape=[num_classes],
initializer=initializers.Zeros(),
)
self._built = True
def update_state(self, y_true, y_pred, sample_weight=None):
"""Accumulates root mean squared error statistics.
Args:
y_true: The ground truth values.
y_pred: The predicted values.
sample_weight: Optional weighting of each example. Can
be a `Tensor` whose rank is either 0, or the same rank as
`y_true`, and must be broadcastable to `y_true`.
Defaults to `1`.
Returns:
Update op.
"""
y_true = ops.convert_to_tensor(y_true, dtype=self._dtype)
y_pred = ops.convert_to_tensor(y_pred, dtype=self._dtype)
y_true, y_pred = squeeze_or_expand_to_same_rank(y_true, y_pred)
if not self._built:
self._build(y_true.shape, y_pred.shape)
if sample_weight is None:
sample_weight = 1
sample_weight = ops.convert_to_tensor(sample_weight, dtype=self.dtype)
if len(sample_weight.shape) == 1:
# Make sure there's a features dimension
sample_weight = ops.expand_dims(sample_weight, axis=1)
sample_weight = ops.broadcast_to(sample_weight, ops.shape(y_true))
weighted_y_true = y_true * ops.cast(sample_weight, y_true.dtype)
self.sum.assign(self.sum + ops.sum(weighted_y_true, axis=0))
self.squared_sum.assign(
self.squared_sum + ops.sum(y_true * weighted_y_true, axis=0)
)
self.total_mse.assign(
self.total_mse
+ ops.sum(
(y_true - y_pred) ** 2 * ops.cast(sample_weight, y_true.dtype),
axis=0,
)
)
self.count.assign(self.count + ops.sum(sample_weight, axis=0))
self.num_samples.assign(self.num_samples + ops.size(y_true))
def result(self):
mean = self.sum / self.count
total = self.squared_sum - self.sum * mean
raw_scores = 1 - (self.total_mse / total)
raw_scores = ops.where(ops.isinf(raw_scores), 0.0, raw_scores)
if self.class_aggregation == "uniform_average":
r2_score = ops.mean(raw_scores)
elif self.class_aggregation == "variance_weighted_average":
weighted_sum = ops.sum(total * raw_scores)
sum_of_weights = ops.sum(total)
r2_score = weighted_sum / sum_of_weights
else:
r2_score = raw_scores
if self.num_regressors != 0:
if self.num_regressors > self.num_samples - 1:
warnings.warn(
"More independent predictors than datapoints "
"in adjusted R2 score. Falling back to standard R2 score.",
stacklevel=2,
)
elif self.num_regressors == self.num_samples - 1:
warnings.warn(
"Division by zero in Adjusted R2 score. "
"Falling back to standard R2 score.",
stacklevel=2,
)
else:
n = ops.convert_to_tensor(self.num_samples, dtype="float32")
p = ops.convert_to_tensor(self.num_regressors, dtype="float32")
num = ops.multiply(
ops.subtract(1.0, r2_score), ops.subtract(n, 1.0)
)
den = ops.subtract(ops.subtract(n, p), 1.0)
r2_score = ops.subtract(1.0, ops.divide(num, den))
return r2_score
def reset_state(self):
for v in self.variables:
v.assign(ops.zeros(v.shape, dtype=v.dtype))
def get_config(self):
config = {
"name": self.name,
"dtype": self.dtype,
"class_aggregation": self.class_aggregation,
"num_regressors": self.num_regressors,
}
base_config = super().get_config()
return {**base_config, **config}
def cosine_similarity(y_true, y_pred, axis=-1):
"""Computes the cosine similarity between labels and predictions.
Formula:
```python
loss = sum(l2_norm(y_true) * l2_norm(y_pred))
```
Args:
y_true: Tensor of true targets.
y_pred: Tensor of predicted targets.
axis: Axis along which to determine similarity. Defaults to `-1`.
Returns:
Cosine similarity tensor.
Example:
>>> y_true = [[0., 1.], [1., 1.], [1., 1.]]
>>> y_pred = [[1., 0.], [1., 1.], [-1., -1.]]
    >>> loss = keras.losses.cosine_similarity(y_true, y_pred, axis=-1)
    >>> loss
    [0., 0.99999994, -0.99999994]
"""
y_pred = ops.convert_to_tensor(y_pred)
y_true = ops.convert_to_tensor(y_true, dtype=y_pred.dtype)
y_true, y_pred = squeeze_or_expand_to_same_rank(y_true, y_pred)
y_pred = normalize(y_pred, axis=axis)
y_true = normalize(y_true, axis=axis)
return ops.sum(y_true * y_pred, axis=axis)
| keras/keras/metrics/regression_metrics.py/0 | {"file_path": "keras/keras/metrics/regression_metrics.py", "repo_id": "keras", "token_count": 9079} | 194 |
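# A small numerical check of the R2 formula documented above, assuming `keras`
# and `numpy` are available: the metric should match the hand-computed
# coefficient of determination.
import numpy as np
import keras

y_true = np.array([[1.0], [4.0], [3.0]], dtype="float32")
y_pred = np.array([[2.0], [4.0], [4.0]], dtype="float32")

ss_res = float(np.sum((y_true - y_pred) ** 2))          # 2.0
ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))   # ~4.667
manual_r2 = 1.0 - ss_res / ss_tot                       # ~0.5714

metric = keras.metrics.R2Score()
metric.update_state(y_true, y_pred)
assert abs(float(metric.result()) - manual_r2) < 1e-4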
import collections
import tree
from keras.api_export import keras_export
from keras.backend import KerasTensor
from keras.backend.config import backend
from keras.ops.operation import Operation
from keras.utils.nest import pack_sequence_as
@keras_export("keras.Function")
class Function(Operation):
"""Class that encapsulates a computation graph of Keras operations.
You can use a `Function` to capture the computation graph linking
some input tensors to some output tensors, and reapply the same
computation on new inputs.
A `Function` is similar to a Functional Model, with the difference
that it is stateless (it does not track state variables)
and does not implement the `Layer` API.
Example:
```python
input_1 = keras.KerasTensor(shape=(None, 2, 3))
input_2 = keras.KerasTensor(shape=(None, 2, 3))
x = input_1 + input_2
output = keras.ops.sigmoid(x)
fn = keras.Function(inputs=[input_1, input_2], outputs=output)
input_1_val = np.random.random((4, 2, 3))
input_2_val = np.random.random((4, 2, 3))
output_val = fn([input_1_val, input_2_val])
```
Args:
inputs: `KerasTensor` instance or nested structured of
`KerasTensor` instances.
outputs: `KerasTensor` instance or nested structured of
`KerasTensor` instances. They should be computable
given only the values of `inputs`.
name: String. The name of the function.
"""
def __init__(self, inputs, outputs, name=None):
super().__init__(name=name)
if backend() == "tensorflow":
            # Temporary workaround for
            # https://github.com/keras-team/keras/issues/931
            # This stops tensorflow from wrapping tf.function output in a
            # _DictWrapper object.
_self_setattr_tracking = getattr(
self, "_self_setattr_tracking", True
)
self._self_setattr_tracking = False
self._inputs_struct = tree.map_structure(lambda x: x, inputs)
self._outputs_struct = tree.map_structure(lambda x: x, outputs)
self._inputs = tree.flatten(inputs)
self._outputs = tree.flatten(outputs)
if not self._inputs:
raise ValueError(
"`inputs` argument cannot be empty. Received:\n"
f"inputs={inputs}\n"
f"outputs={outputs}"
)
if not self._outputs:
raise ValueError(
"`outputs` argument cannot be empty. Received:\n"
f"inputs={inputs}\n"
f"outputs={outputs}"
)
if backend() == "tensorflow":
self._self_setattr_tracking = _self_setattr_tracking
(nodes, nodes_by_depth, operations, operations_by_depth) = map_graph(
self._inputs, self._outputs
)
self._nodes = nodes
self._nodes_by_depth = nodes_by_depth
self._operations = operations
self._operations_by_depth = operations_by_depth
@property
def operations(self):
return self._operations[:]
@property
def inputs(self):
return self._inputs
@property
def outputs(self):
return self._outputs
def compute_output_spec(self, inputs):
self._assert_input_compatibility(inputs)
# Check if input shapes are identical to ref input shapes,
# if so take a shortcut.
shortcut = True
for x, x_ref in zip(tree.flatten(inputs), self._inputs):
if x.shape != x_ref.shape:
shortcut = False
break
if shortcut:
return tree.map_structure(
lambda x: KerasTensor(shape=x.shape, dtype=x.dtype),
self._outputs_struct,
)
# No luck; take the long road through the graph.
# Original Keras used a cache to avoid recomputing all this
        # when known input shapes were seen again. Perhaps a good
# idea to bring that back.
return self._run_through_graph(
inputs, operation_fn=lambda op: op.compute_output_spec
)
def call(self, inputs):
"""Computes output tensors for new inputs."""
self._assert_input_compatibility(inputs)
return self._run_through_graph(inputs, operation_fn=lambda op: op)
def _run_through_graph(self, inputs, operation_fn):
"""Execute the graph.
At each node we compute outputs via
`operation_fn(node.operation)(*args, **kwargs)`.
"""
inputs = tree.flatten(inputs)
# Dictionary mapping reference tensors to computed tensors.
tensor_dict = {}
for x, y in zip(self.inputs, inputs):
tensor_dict[id(x)] = y
nodes_by_depth = self._nodes_by_depth
depth_keys = list(nodes_by_depth.keys())
depth_keys.sort(reverse=True)
for depth in depth_keys:
nodes = nodes_by_depth[depth]
for node in nodes:
if not node.operation or node.is_input:
continue # Input tensors already exist.
if any(id(x) not in tensor_dict for x in node.input_tensors):
continue # Node is not computable, try skipping.
args, kwargs = node.arguments.fill_in(tensor_dict)
outputs = operation_fn(node.operation)(*args, **kwargs)
# Update tensor_dict.
for x, y in zip(node.outputs, tree.flatten(outputs)):
tensor_dict[id(x)] = y
output_tensors = []
for x in self.outputs:
output_tensors.append(tensor_dict[id(x)])
return pack_sequence_as(self._outputs_struct, output_tensors)
def _assert_input_compatibility(self, inputs):
try:
tree.assert_same_structure(
inputs, self._inputs_struct, check_types=False
)
except ValueError:
raise ValueError(
"Function was called with an invalid input structure. "
f"Expected input structure: {self._inputs_struct}\n"
f"Received input structure: {inputs}"
)
for x, x_ref in zip(tree.flatten(inputs), self._inputs):
if len(x.shape) != len(x_ref.shape):
raise ValueError(
f"{self.__class__.__name__} was passed "
f"incompatible inputs. For input '{x_ref.name}', "
f"expected shape {x_ref.shape}, but received "
f"instead a tensor with shape {x.shape}."
)
for dim, ref_dim in zip(x.shape, x_ref.shape):
if ref_dim is not None and dim is not None:
if dim != ref_dim:
raise ValueError(
f"{self.__class__.__name__} was passed "
f"incompatible inputs. For input '{x_ref.name}', "
f"expected shape {x_ref.shape}, but received "
f"instead a tensor with shape {x.shape}."
)
def make_node_key(op, node_index):
return str(id(op)) + "_ib-" + str(node_index)
def map_graph(inputs, outputs):
"""Validates a graph's topology and gather its operations and nodes.
Args:
inputs: List of input tensors.
outputs: List of outputs tensors.
Returns:
A tuple `(nodes, nodes_by_depth, operations, operations_by_depth)`.
- network_nodes: dict mapping unique node keys to the Node instances
- nodes_by_depth: dict mapping ints (depth) to lists of node instances.
- operations: list of Operation instances.
- operations_by_depth: dict mapping ints (depth) to lists of Operation
instances.
"""
# "depth" is number of operations between output Node and the Node.
# Nodes are ordered from inputs -> outputs.
nodes_in_decreasing_depth, operation_indices = _build_map(inputs, outputs)
network_nodes = {
make_node_key(node.operation, node.operation._inbound_nodes.index(node))
for node in nodes_in_decreasing_depth
}
nodes_depths = {} # dict {node: depth value}
operations_depths = {} # dict {operation: depth value}
for node in reversed(nodes_in_decreasing_depth):
# If the depth is not set, the node has no outbound nodes (depth 0).
depth = nodes_depths.setdefault(node, 0)
# Update the depth of the corresponding operation
previous_depth = operations_depths.get(node.operation, 0)
# If we've seen this operation before at a higher depth,
# we should use that depth instead of the node depth.
# This is necessary for shared operations that have inputs at different
# depth levels in the graph.
depth = max(depth, previous_depth)
operations_depths[node.operation] = depth
nodes_depths[node] = depth
# Update the depth of inbound nodes.
# The "depth" of a node is the max of the depths
# of all nodes it is connected to + 1.
for node_dep in node.parent_nodes:
previous_depth = nodes_depths.get(node_dep, 0)
nodes_depths[node_dep] = max(depth + 1, previous_depth)
# Handle inputs that are not connected to outputs.
# We do not error out here because the inputs may be used to compute losses
# and metrics.
for input_t in inputs:
input_operation = input_t._keras_history[0]
if input_operation and input_operation not in operations_depths:
operations_depths[input_operation] = 0
operation_indices[input_operation] = -1
nodes_depths[input_operation._inbound_nodes[0]] = 0
network_nodes.add(make_node_key(input_operation, 0))
# Build a dict {depth: list of nodes with this depth}
nodes_by_depth = collections.defaultdict(list)
for node, depth in nodes_depths.items():
nodes_by_depth[depth].append(node)
# Build a dict {depth: list of operations with this depth}
operations_by_depth = collections.defaultdict(list)
for operation, depth in operations_depths.items():
operations_by_depth[depth].append(operation)
# Get sorted list of operation depths.
depth_keys = list(operations_by_depth.keys())
depth_keys.sort(reverse=True)
# Set self.operations ordered by depth.
operations = []
for depth in depth_keys:
operations_for_depth = operations_by_depth[depth]
# Network.operations needs to have a deterministic order:
# here we order them by traversal order.
operations_for_depth.sort(key=lambda x: operation_indices[x])
operations.extend(operations_for_depth)
# Get sorted list of node depths.
depth_keys = list(nodes_by_depth.keys())
depth_keys.sort(reverse=True)
# Check that all tensors required are computable.
# computable_tensors: all tensors in the graph
# that can be computed from the inputs provided.
computable_tensors = set()
for x in inputs:
computable_tensors.add(x)
operations_with_complete_input = [] # To provide a better error msg.
for depth in depth_keys:
for node in nodes_by_depth[depth]:
for x in tree.flatten(node.input_tensors):
if x not in computable_tensors:
operation = node.operation
raise ValueError(
"Graph disconnected: cannot find parent for "
f"tensor {x} at operation '{operation}'. "
"The following previous operations were accessed "
f"without issue: {operations_with_complete_input}"
)
                operations_with_complete_input.append(node.operation.name)
for x in tree.flatten(node.outputs):
computable_tensors.add(x)
# Ensure name unicity, which will be crucial for serialization
# (since serialized nodes refer to operations by their name).
all_names = [operation.name for operation in operations]
for name in all_names:
if all_names.count(name) != 1:
raise ValueError(
f'The name "{name}" is used {all_names.count(name)} '
"times in the model. All operation names should be unique."
)
return network_nodes, nodes_by_depth, operations, operations_by_depth
def _build_map(inputs, outputs):
"""Topologically sort nodes in order from inputs to outputs.
It uses a depth-first search to topologically sort nodes that appear in the
_keras_history connectivity metadata of `outputs`.
    Args:
        inputs: the input tensors at which the graph traversal stops.
        outputs: the output tensors whose _keras_history metadata should be
            walked. This may be an arbitrary nested structure.
Returns:
A tuple like (ordered_nodes, operation_to_first_traversal_index)
ordered_nodes: list of nodes appearing in the keras history,
topologically sorted from original inputs to the `outputs`.
(If outputs have different sets of ancestors, the inputs to one
output may appear after a different output).
operation_to_first_traversal_index:
A dict mapping operation to the traversal index in the DFS where it
        is seen. Note: if an operation is shared by several nodes, the dict
        will only store the index corresponding to the *first* time the
        operation is seen.
"""
finished_nodes = set()
nodes_in_progress = set()
nodes_in_decreasing_depth = [] # nodes from inputs -> outputs.
operation_indices = {} # operation -> in traversal order.
for output in tree.flatten(outputs):
_build_map_helper(
inputs,
output,
finished_nodes,
nodes_in_progress,
nodes_in_decreasing_depth,
operation_indices,
)
return nodes_in_decreasing_depth, operation_indices
def _build_map_helper(
inputs,
tensor,
finished_nodes,
nodes_in_progress,
nodes_in_decreasing_depth,
operation_indices,
):
"""Recursive helper for `_build_map`."""
(
operation,
node_index,
_,
) = tensor._keras_history
if not operation:
return
node = operation._inbound_nodes[node_index]
# Don't repeat work for shared subgraphs
if node in finished_nodes:
return
# Prevent cycles.
if node in nodes_in_progress:
raise ValueError(
f"Tensor {tensor} from operation '{operation.name}' is part of a "
"cycle."
)
# Store the traversal order for operation sorting.
if operation not in operation_indices:
operation_indices[operation] = len(operation_indices)
# Propagate to all previous tensors connected to this node.
nodes_in_progress.add(node)
if not node.is_input and tensor not in tree.flatten(inputs):
for tensor in node.input_tensors:
_build_map_helper(
inputs,
tensor,
finished_nodes,
nodes_in_progress,
nodes_in_decreasing_depth,
operation_indices,
)
finished_nodes.add(node)
nodes_in_progress.remove(node)
nodes_in_decreasing_depth.append(node)
| keras/keras/ops/function.py/0 | {"file_path": "keras/keras/ops/function.py", "repo_id": "keras", "token_count": 6597} | 195 |
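# A quick sketch (assuming `keras` and `numpy` are importable) of the stateless
# re-application property described in the `Function` docstring: the same graph
# serves both eager calls on arrays and symbolic shape inference.
import numpy as np
import keras
from keras import ops

x1 = keras.KerasTensor(shape=(None, 3))
x2 = keras.KerasTensor(shape=(None, 3))
fn = keras.Function(inputs=[x1, x2], outputs=ops.relu(x1 + x2))

# Eager call on concrete arrays: 1 + (-2) < 0, so everything is clipped to 0.
y = fn([np.ones((2, 3)), np.full((2, 3), -2.0)])

# Symbolic call: only shapes/dtypes are propagated, no values are computed.
spec = fn.compute_output_spec(
    [keras.KerasTensor(shape=(5, 3)), keras.KerasTensor(shape=(5, 3))]
)
print(spec.shape)  # (5, 3)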
import math
import numpy as np
import tree
from keras.api_export import keras_export
def broadcast_shapes(shape1, shape2):
"""Broadcast input shapes to a unified shape.
Convert to list for mutability.
Args:
shape1: A tuple or list of integers.
shape2: A tuple or list of integers.
Returns:
output_shape (list of integers or `None`): The broadcasted shape.
Example:
>>> broadcast_shapes((5, 3), (1, 3))
[5, 3]
"""
shape1 = list(shape1)
shape2 = list(shape2)
origin_shape1 = shape1
origin_shape2 = shape2
if len(shape1) > len(shape2):
shape2 = [1] * (len(shape1) - len(shape2)) + shape2
if len(shape1) < len(shape2):
shape1 = [1] * (len(shape2) - len(shape1)) + shape1
output_shape = list(shape1)
for i in range(len(shape1)):
if shape1[i] == 1:
output_shape[i] = shape2[i]
elif shape1[i] is None:
output_shape[i] = None if shape2[i] == 1 else shape2[i]
else:
if shape2[i] == 1 or shape2[i] is None or shape2[i] == shape1[i]:
output_shape[i] = shape1[i]
else:
raise ValueError(
"Cannot broadcast shape, the failure dim has value "
f"{shape1[i]}, which cannot be broadcasted to {shape2[i]}. "
f"Input shapes are: {origin_shape1} and {origin_shape2}."
)
return output_shape
def compute_expand_dims_output_shape(input_shape, axis):
"""Compute the output shape for the `expand_dims` operation.
Args:
input_shape: Input shape.
axis: int for the axis to expand.
Returns:
Tuple of ints: The output shape after the `expand_dims` operation.
"""
input_shape = list(input_shape)
if axis is None:
axis = len(input_shape)
elif axis < 0:
axis = len(input_shape) + 1 + axis
return tuple(input_shape[:axis] + [1] + input_shape[axis:])
def compute_pooling_output_shape(
input_shape,
pool_size,
strides,
padding="valid",
data_format="channels_last",
):
"""Computes the output shape of pooling operations.
Args:
input_shape: Input shape. Must be a tuple of integers.
pool_size: Size of the pooling operation. Must be a tuple of integers.
strides: Stride of the pooling operation. Must be a tuple of integers.
Defaults to `pool_size`.
padding: Padding method. Available methods are `"valid"` or `"same"`.
Defaults to `"valid"`.
data_format: String, either `"channels_last"` or `"channels_first"`.
The ordering of the dimensions in the inputs. `"channels_last"`
corresponds to inputs with shape `(batch, height, width, channels)`
while `"channels_first"` corresponds to inputs with shape
        `(batch, channels, height, width)`. Defaults to `"channels_last"`.
Returns:
Tuple of ints: The output shape of the pooling operation.
Examples:
# Basic usage with square pooling on a single image
>>> compute_pooling_output_shape((1, 4, 4, 1), (2, 2))
(1, 2, 2, 1)
# Strided pooling on a single image with strides different from pool_size
>>> compute_pooling_output_shape((1, 4, 4, 1), (2, 2), strides=(1, 1))
(1, 3, 3, 1)
# Pooling on a batch of images
>>> compute_pooling_output_shape((32, 4, 4, 3), (2, 2))
(32, 2, 2, 3)
"""
strides = pool_size if strides is None else strides
input_shape_origin = list(input_shape)
input_shape = np.array(input_shape)
if data_format == "channels_last":
spatial_shape = input_shape[1:-1]
else:
spatial_shape = input_shape[2:]
none_dims = []
for i in range(len(spatial_shape)):
if spatial_shape[i] is None:
# Set `None` shape to a manual value so that we can run numpy
# computation on `spatial_shape`.
spatial_shape[i] = -1
none_dims.append(i)
pool_size = np.array(pool_size)
if padding == "valid":
output_spatial_shape = (
np.floor((spatial_shape - pool_size) / strides) + 1
)
for i in range(len(output_spatial_shape)):
if i not in none_dims and output_spatial_shape[i] < 0:
raise ValueError(
"Computed output size would be negative. Received: "
f"`inputs.shape={input_shape}` and `pool_size={pool_size}`."
)
elif padding == "same":
output_spatial_shape = np.floor((spatial_shape - 1) / strides) + 1
else:
raise ValueError(
"Argument `padding` must be either 'valid' or 'same'. Received: "
f"padding={padding}"
)
output_spatial_shape = [int(i) for i in output_spatial_shape]
for i in none_dims:
output_spatial_shape[i] = None
output_spatial_shape = tuple(output_spatial_shape)
if data_format == "channels_last":
output_shape = (
(input_shape_origin[0],)
+ output_spatial_shape
+ (input_shape_origin[-1],)
)
else:
output_shape = (
input_shape_origin[0],
input_shape_origin[1],
) + output_spatial_shape
return output_shape
def compute_conv_output_shape(
input_shape,
filters,
kernel_size,
strides=1,
padding="valid",
data_format="channels_last",
dilation_rate=1,
):
"""Compute the output shape of conv ops."""
if data_format == "channels_last":
spatial_shape = input_shape[1:-1]
kernel_shape = kernel_size + (input_shape[-1], filters)
else:
spatial_shape = input_shape[2:]
kernel_shape = kernel_size + (input_shape[1], filters)
if len(kernel_shape) != len(input_shape):
raise ValueError(
"Kernel shape must have the same length as input, but received "
f"kernel of shape {kernel_shape} and "
f"input of shape {input_shape}."
)
if isinstance(dilation_rate, int):
dilation_rate = (dilation_rate,) * len(spatial_shape)
if isinstance(strides, int):
strides = (strides,) * len(spatial_shape)
if len(dilation_rate) != len(spatial_shape):
raise ValueError(
"Dilation must be None, scalar or tuple/list of length of "
"inputs' spatial shape, but received "
f"`dilation_rate={dilation_rate}` and "
f"input of shape {input_shape}."
)
none_dims = []
spatial_shape = np.array(spatial_shape)
for i in range(len(spatial_shape)):
if spatial_shape[i] is None:
# Set `None` shape to a manual value so that we can run numpy
# computation on `spatial_shape`.
spatial_shape[i] = -1
none_dims.append(i)
kernel_spatial_shape = np.array(kernel_shape[:-2])
dilation_rate = np.array(dilation_rate)
if padding == "valid":
output_spatial_shape = (
np.floor(
(spatial_shape - dilation_rate * (kernel_spatial_shape - 1) - 1)
/ strides
)
+ 1
)
for i in range(len(output_spatial_shape)):
if i not in none_dims and output_spatial_shape[i] < 0:
raise ValueError(
"Computed output size would be negative. Received "
f"`inputs shape={input_shape}`, "
f"`kernel shape={kernel_shape}`, "
f"`dilation_rate={dilation_rate}`."
)
elif padding == "same" or padding == "causal":
output_spatial_shape = np.floor((spatial_shape - 1) / strides) + 1
else:
raise ValueError(
"`padding` must be either `'valid'` or `'same'`. Received "
f"{padding}."
)
output_spatial_shape = [int(i) for i in output_spatial_shape]
for i in none_dims:
output_spatial_shape[i] = None
output_spatial_shape = tuple(output_spatial_shape)
if data_format == "channels_last":
output_shape = (
(input_shape[0],) + output_spatial_shape + (kernel_shape[-1],)
)
else:
output_shape = (input_shape[0], kernel_shape[-1]) + output_spatial_shape
return output_shape
def compute_matmul_output_shape(shape1, shape2):
"""Compute the output shape of a `matmul` operation.
Args:
shape1: Shape of the left operand.
shape2: Shape of the right operand.
Returns:
Tuple of ints: The output shape for the `matmul` operation.
"""
if len(shape1) == 1:
shape1 = (1, shape1[0])
if len(shape2) == 1:
shape2 = (shape2[0], 1)
if (
shape1[-1] is not None
and shape2[-2] is not None
and shape1[-1] != shape2[-2]
):
raise ValueError(
"Inner dimensions (`x1.shape[-1]` and `x2.shape[-2]`) must be "
f"equal, but received `x1.shape={shape1}` and "
f"`x2.shape={shape2}`."
)
leading_shape = broadcast_shapes(shape1[:-2], shape2[:-2])
last_2_dims_shape = [shape1[-2], shape2[-1]]
output_shape = leading_shape + last_2_dims_shape
if len(shape1) == 1:
del output_shape[-2]
if len(shape2) == 1:
del output_shape[-1]
return tuple(output_shape)
def compute_reshape_output_shape(input_shape, newshape, newshape_arg_name):
"""Converts `-1` in `newshape` to either an actual dimension or `None`.
This utility does not special case the 0th dimension (batch size).
"""
unknown_dim_count = newshape.count(-1)
if unknown_dim_count > 1:
raise ValueError(
"There must be at most one unknown dimension (-1) in "
f"{newshape_arg_name}. Received: {newshape_arg_name}={newshape}."
)
# If there is a None in input_shape, we can't infer what the -1 is
if None in input_shape:
return tuple(dim if dim != -1 else None for dim in newshape)
input_size = math.prod(input_shape)
# If the `newshape` is fully defined, return it
if unknown_dim_count == 0:
if input_size != math.prod(newshape):
raise ValueError(
"The total size of the tensor must be unchanged. Received: "
f"input_shape={input_shape}, {newshape_arg_name}={newshape}"
)
return newshape
# We have one -1 in `newshape`, compute the actual value
known_output_size = 1
unknown_dim_index = None
for index, dim in enumerate(newshape):
if dim == -1:
unknown_dim_index = index
else:
known_output_size *= dim
if known_output_size == 0 or input_size % known_output_size != 0:
raise ValueError(
"The total size of the tensor must be unchanged, however, the "
"input size cannot by divided by the specified dimensions in "
f"{newshape_arg_name}. Received: input_shape={input_shape}, "
f"{newshape_arg_name}={newshape}"
)
output_shape = list(newshape)
output_shape[unknown_dim_index] = input_size // known_output_size
return tuple(output_shape)
def compute_transpose_output_shape(input_shape, axes):
"""Compute the output shape for the `transpose` operation.
Args:
input_shape: Input shape.
axes: Permutation of the dimensions for the `transpose` operation.
Returns:
Tuple of ints: The output shape after the `transpose` operation.
"""
input_shape = list(input_shape)
if axes is None:
return tuple(input_shape[::-1])
if len(axes) != len(input_shape):
raise ValueError(
"axis must be a list of the same length as the input shape, "
f"expected {len(input_shape)}, but received {len(axes)}."
)
return tuple(input_shape[ax] for ax in axes)
def reduce_shape(shape, axis=None, keepdims=False):
shape = list(shape)
if axis is None:
if keepdims:
return tuple([1 for _ in shape])
else:
return tuple([])
if keepdims:
for ax in axis:
shape[ax] = 1
return tuple(shape)
else:
for ax in sorted(axis, reverse=True):
del shape[ax]
return tuple(shape)
@keras_export("keras.utils.get_source_inputs")
def get_source_inputs(tensor):
"""Returns the list of input tensors necessary to compute `tensor`.
Output will always be a list of tensors
(potentially with 1 element).
Args:
tensor: The tensor to start from.
Returns:
List of input tensors.
"""
if not hasattr(tensor, "_keras_history"):
return tensor
operation, node_index, _ = tensor._keras_history
if not operation or not operation._inbound_nodes:
return [tensor]
else:
node = operation._inbound_nodes[node_index]
if node.is_input:
# Reached input node, stop recursion.
return tree.flatten(node.output_tensors)
else:
source_tensors = []
for tensor in node.input_tensors:
previous_sources = get_source_inputs(tensor)
# Avoid input redundancy.
for x in previous_sources:
if all(x is not t for t in source_tensors):
source_tensors.append(x)
return source_tensors
| keras/keras/ops/operation_utils.py/0 | {"file_path": "keras/keras/ops/operation_utils.py", "repo_id": "keras", "token_count": 6022} | 196 |
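# Hand-checked examples of the shape helpers above. The import path is an
# assumption (these live in an internal utilities module); the arithmetic
# simply follows the formulas implemented in the functions.
from keras.ops.operation_utils import (
    broadcast_shapes,
    compute_conv_output_shape,
)

# NumPy-style broadcasting, with `None` standing in for an unknown dimension.
print(broadcast_shapes((None, 1, 3), (4, 3)))  # [None, 4, 3]

# A 3x3 convolution: "same" padding keeps spatial dims, "valid" shrinks them by 2.
print(compute_conv_output_shape(
    (8, 32, 32, 3), filters=16, kernel_size=(3, 3), padding="same"
))  # (8, 32, 32, 16)
print(compute_conv_output_shape(
    (8, 32, 32, 3), filters=16, kernel_size=(3, 3), padding="valid"
))  # (8, 30, 30, 16)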
# flake8: noqa
import numpy as np
from keras import backend
from keras import ops
from keras import testing
from keras.optimizers.adamw import AdamW
class AdamWTest(testing.TestCase):
def test_config(self):
optimizer = AdamW(
learning_rate=0.5,
weight_decay=0.008,
beta_1=0.5,
beta_2=0.67,
epsilon=1e-5,
amsgrad=True,
)
self.run_class_serialization_test(optimizer)
def test_single_step(self):
optimizer = AdamW(learning_rate=0.5)
grads = ops.array([1.0, 6.0, 7.0, 2.0])
vars = backend.Variable([1.0, 2.0, 3.0, 4.0])
optimizer.apply_gradients(zip([grads], [vars]))
self.assertAllClose(
vars, [0.4980, 1.4960, 2.494, 3.492], rtol=1e-4, atol=1e-4
)
def test_weight_decay(self):
grads, var1, var2, var3 = (
ops.zeros(()),
backend.Variable(2.0),
backend.Variable(2.0, name="exclude"),
backend.Variable(2.0),
)
optimizer_1 = AdamW(learning_rate=1.0, weight_decay=0.004)
optimizer_1.apply_gradients(zip([grads], [var1]))
optimizer_2 = AdamW(learning_rate=1.0, weight_decay=0.004)
optimizer_2.exclude_from_weight_decay(var_names=["exclude"])
optimizer_2.apply_gradients(zip([grads, grads], [var1, var2]))
optimizer_3 = AdamW(learning_rate=1.0, weight_decay=0.004)
optimizer_3.exclude_from_weight_decay(var_list=[var3])
optimizer_3.apply_gradients(zip([grads, grads], [var1, var3]))
self.assertAlmostEqual(var1.numpy(), 1.9760959, decimal=6)
self.assertAlmostEqual(var2.numpy(), 2.0, decimal=6)
self.assertAlmostEqual(var3.numpy(), 2.0, decimal=6)
def test_correctness_with_golden(self):
optimizer = AdamW(learning_rate=1.0, weight_decay=0.5, epsilon=2)
x = backend.Variable(np.ones([10]))
grads = ops.arange(0.1, 1.1, 0.1)
first_grads = ops.full((10,), 0.01)
# fmt: off
golden = np.array(
[[0.4998, 0.4998, 0.4998, 0.4998, 0.4998, 0.4998, 0.4998, 0.4998, 0.4998, 0.4998],
[0.2486, 0.2475, 0.2463, 0.2451, 0.244, 0.2428, 0.2417, 0.2405, 0.2394, 0.2382],
[0.1223, 0.1198, 0.1174, 0.1149, 0.1124, 0.11, 0.1075, 0.1051, 0.1027, 0.1003],
[0.0586, 0.0549, 0.0512, 0.0475, 0.0439, 0.0402, 0.0366, 0.033, 0.0294, 0.0258],
[0.0263, 0.0215, 0.0167, 0.012, 0.0073, 0.0026, -0.0021, -0.0067, -0.0113, -0.0159]]
)
# fmt: on
optimizer.apply_gradients(zip([first_grads], [x]))
for i in range(5):
self.assertAllClose(x, golden[i], rtol=5e-4, atol=5e-4)
optimizer.apply_gradients(zip([grads], [x]))
def test_clip_norm(self):
optimizer = AdamW(clipnorm=1)
grad = [np.array([100.0, 100.0])]
clipped_grad = optimizer._clip_gradients(grad)
self.assertAllClose(clipped_grad[0], [2**0.5 / 2, 2**0.5 / 2])
def test_clip_value(self):
optimizer = AdamW(clipvalue=1)
grad = [np.array([100.0, 100.0])]
clipped_grad = optimizer._clip_gradients(grad)
self.assertAllClose(clipped_grad[0], [1.0, 1.0])
| keras/keras/optimizers/adamw_test.py/0 | {"file_path": "keras/keras/optimizers/adamw_test.py", "repo_id": "keras", "token_count": 1730} | 197 |
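# Sketch of the decoupled weight decay behaviour exercised in the tests above,
# assuming `keras` is installed. With a zero gradient, one AdamW step only
# shrinks the variable by lr * weight_decay * value.
import keras
from keras import ops

var = keras.Variable(2.0)
opt = keras.optimizers.AdamW(learning_rate=1.0, weight_decay=0.004)
opt.apply_gradients([(ops.zeros(()), var)])
print(float(var.numpy()))  # ~1.992 == 2.0 - 1.0 * 0.004 * 2.0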
"""Various learning rate schedule functions."""
import math
from keras import ops
from keras.api_export import keras_export
from keras.saving import serialization_lib
@keras_export("keras.optimizers.schedules.LearningRateSchedule")
class LearningRateSchedule:
"""The learning rate schedule base class.
You can use a learning rate schedule to modulate how the learning rate
of your optimizer changes over time.
Several built-in learning rate schedules are available, such as
`keras.optimizers.schedules.ExponentialDecay` or
`keras.optimizers.schedules.PiecewiseConstantDecay`:
```python
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate=1e-2,
decay_steps=10000,
decay_rate=0.9)
optimizer = keras.optimizers.SGD(learning_rate=lr_schedule)
```
A `LearningRateSchedule` instance can be passed in as the `learning_rate`
argument of any optimizer.
To implement your own schedule object, you should implement the `__call__`
method, which takes a `step` argument (scalar integer tensor, the
current training step count).
Like for any other Keras object, you can also optionally
make your object serializable by implementing the `get_config`
and `from_config` methods.
Example:
```python
class MyLRSchedule(keras.optimizers.schedules.LearningRateSchedule):
def __init__(self, initial_learning_rate):
self.initial_learning_rate = initial_learning_rate
def __call__(self, step):
return self.initial_learning_rate / (step + 1)
optimizer = keras.optimizers.SGD(learning_rate=MyLRSchedule(0.1))
```
"""
def __call__(self, step):
raise NotImplementedError(
f"Learning rate schedule '{self.__class__.__name__}' "
"must override `__call__(self, step)`."
)
def get_config(self):
raise NotImplementedError(
f"Learning rate schedule '{self.__class__.__name__}' "
"must override `get_config()` in order to be serializable."
)
@classmethod
def from_config(cls, config):
"""Instantiates a `LearningRateSchedule` from its config.
Args:
config: Output of `get_config()`.
Returns:
A `LearningRateSchedule` instance.
"""
return cls(**config)
@keras_export("keras.optimizers.schedules.ExponentialDecay")
class ExponentialDecay(LearningRateSchedule):
"""A `LearningRateSchedule` that uses an exponential decay schedule.
When training a model, it is often useful to lower the learning rate as
the training progresses. This schedule applies an exponential decay function
to an optimizer step, given a provided initial learning rate.
The schedule is a 1-arg callable that produces a decayed learning
rate when passed the current optimizer step. This can be useful for changing
the learning rate value across different invocations of optimizer functions.
It is computed as:
```python
def decayed_learning_rate(step):
return initial_learning_rate * decay_rate ^ (step / decay_steps)
```
If the argument `staircase` is `True`, then `step / decay_steps` is
an integer division and the decayed learning rate follows a
staircase function.
You can pass this schedule directly into a `keras.optimizers.Optimizer`
as the learning rate.
Example: When fitting a Keras model, decay every 100000 steps with a base
of 0.96:
```python
initial_learning_rate = 0.1
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate,
decay_steps=100000,
decay_rate=0.96,
staircase=True)
model.compile(optimizer=keras.optimizers.SGD(learning_rate=lr_schedule),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(data, labels, epochs=5)
```
The learning rate schedule is also serializable and deserializable using
`keras.optimizers.schedules.serialize` and
`keras.optimizers.schedules.deserialize`.
Args:
initial_learning_rate: A Python float. The initial learning rate.
decay_steps: A Python integer. Must be positive. See the decay
computation above.
decay_rate: A Python float. The decay rate.
staircase: Boolean. If `True` decay the learning rate at discrete
intervals.
        name: String. Optional name of the operation. Defaults to
            `"ExponentialDecay"`.
Returns:
A 1-arg callable learning rate schedule that takes the current optimizer
step and outputs the decayed learning rate, a scalar tensor of the
same type as `initial_learning_rate`.
"""
def __init__(
self,
initial_learning_rate,
decay_steps,
decay_rate,
staircase=False,
name="ExponentialDecay",
):
super().__init__()
self.initial_learning_rate = initial_learning_rate
self.decay_steps = decay_steps
self.decay_rate = decay_rate
self.staircase = staircase
self.name = name
if self.decay_steps <= 0:
raise ValueError(
"Argument `decay_steps` must be > 0. "
f"Received: decay_steps={self.decay_steps}"
)
def __call__(self, step):
with ops.name_scope(self.name):
initial_learning_rate = ops.convert_to_tensor(
self.initial_learning_rate
)
dtype = initial_learning_rate.dtype
decay_steps = ops.cast(self.decay_steps, dtype)
decay_rate = ops.cast(self.decay_rate, dtype)
global_step_recomp = ops.cast(step, dtype)
p = global_step_recomp / decay_steps
if self.staircase:
p = ops.floor(p)
return ops.multiply(initial_learning_rate, ops.power(decay_rate, p))
def get_config(self):
return {
"initial_learning_rate": self.initial_learning_rate,
"decay_steps": self.decay_steps,
"decay_rate": self.decay_rate,
"staircase": self.staircase,
"name": self.name,
}
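# A plain-Python restatement of the decay rule documented in the
# `ExponentialDecay` docstring above, with `staircase=False`;
# `_exponential_decay_example` is an illustrative helper, not public API.
def _exponential_decay_example(step):
    initial_learning_rate, decay_steps, decay_rate = 0.1, 100000, 0.96
    return initial_learning_rate * decay_rate ** (step / decay_steps)
# _exponential_decay_example(0) is 0.1; after one full decay period,
# _exponential_decay_example(100000) is roughly 0.096.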
@keras_export("keras.optimizers.schedules.PiecewiseConstantDecay")
class PiecewiseConstantDecay(LearningRateSchedule):
"""A `LearningRateSchedule` that uses a piecewise constant decay schedule.
The function returns a 1-arg callable to compute the piecewise constant
when passed the current optimizer step. This can be useful for changing the
learning rate value across different invocations of optimizer functions.
Example: use a learning rate that's 1.0 for the first 100001 steps, 0.5
for the next 10000 steps, and 0.1 for any additional steps.
```python
step = ops.array(0)
boundaries = [100000, 110000]
values = [1.0, 0.5, 0.1]
learning_rate_fn = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries, values)
# Later, whenever we perform an optimization step, we pass in the step.
learning_rate = learning_rate_fn(step)
```
You can pass this schedule directly into a `keras.optimizers.Optimizer`
as the learning rate. The learning rate schedule is also serializable and
deserializable using `keras.optimizers.schedules.serialize` and
`keras.optimizers.schedules.deserialize`.
Args:
boundaries: A list of Python numbers with strictly increasing
entries, and with all elements having the same type as the
optimizer step.
values: A list of Python numbers that specifies the values for the
intervals defined by `boundaries`. It should have one more
element than `boundaries`, and all elements should have the same
type.
name: A string. Optional name of the operation. Defaults to
`"PiecewiseConstant"`.
Returns:
A 1-arg callable learning rate schedule that takes the current optimizer
step and outputs the decayed learning rate, a scalar tensor of the
same type as the boundary tensors.
The output of the 1-arg function that takes the `step`
is `values[0]` when `step <= boundaries[0]`,
`values[1]` when `step > boundaries[0]` and `step <= boundaries[1]`,
..., and `values[-1]` when `step > boundaries[-1]`.
Raises:
ValueError: if the number of elements in the `boundaries` and `values`
lists do not match.
"""
def __init__(self, boundaries, values, name="PiecewiseConstant"):
super().__init__()
if len(boundaries) != len(values) - 1:
raise ValueError(
"The length of boundaries should be 1 less than the length of "
f"values. Received: boundaries={boundaries} of length "
f"{len(boundaries)}, and values={values} "
f"of length {len(values)}."
)
self.boundaries = boundaries
self.values = values
self.name = name
def __call__(self, step):
with ops.name_scope(self.name):
boundaries = [ops.convert_to_tensor(x) for x in self.boundaries]
values = [ops.convert_to_tensor(x) for x in self.values]
step = ops.convert_to_tensor(step)
for i, b in enumerate(boundaries):
if b.dtype != step.dtype:
# We cast the boundaries to have the same type as the step
b = ops.cast(b, step.dtype)
boundaries[i] = b
result_dtype = values[0].dtype
result_value = ops.array(0, dtype=result_dtype)
# For each range between boundaries, we check whether the step is
# within that range, cast the resulting boolean to a number,
# and multiply the result by the corresponding value for the range.
# Taking the sum of these yields a piecewise constant function.
step_less_than_first_boundary = ops.cast(
step <= boundaries[0], result_dtype
)
result_value += step_less_than_first_boundary * values[0]
step_greater_than_last_boundary = ops.cast(
step > boundaries[-1], result_dtype
)
result_value += step_greater_than_last_boundary * values[-1]
for low, high, value in zip(
boundaries[:-1], boundaries[1:], values[1:-1]
):
# Need to bind v here; can do this with lambda v=v: ...
step_in_range = ops.cast(
(step > low) & (step <= high), result_dtype
)
result_value += step_in_range * value
return result_value
def get_config(self):
return {
"boundaries": self.boundaries,
"values": self.values,
"name": self.name,
}
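# A plain-Python restatement of the piecewise-constant rule described in the
# `PiecewiseConstantDecay` docstring above, using the same boundaries and
# values as its example; `_piecewise_constant_example` is an illustrative
# helper, not public API.
def _piecewise_constant_example(step):
    boundaries, values = [100000, 110000], [1.0, 0.5, 0.1]
    if step <= boundaries[0]:
        return values[0]
    if step <= boundaries[1]:
        return values[1]
    return values[2]
# _piecewise_constant_example(100000) is 1.0, (105000) is 0.5,
# and (200000) is 0.1.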
@keras_export("keras.optimizers.schedules.PolynomialDecay")
class PolynomialDecay(LearningRateSchedule):
"""A `LearningRateSchedule` that uses a polynomial decay schedule.
It is commonly observed that a monotonically decreasing learning rate, whose
degree of change is carefully chosen, results in a better performing model.
This schedule applies a polynomial decay function to an optimizer step,
given a provided `initial_learning_rate`, to reach an `end_learning_rate`
in the given `decay_steps`.
It requires a `step` value to compute the decayed learning rate. You
can just pass a backend variable that you increment at each training
step.
The schedule is a 1-arg callable that produces a decayed learning rate
when passed the current optimizer step. This can be useful for changing the
learning rate value across different invocations of optimizer functions.
It is computed as:
```python
def decayed_learning_rate(step):
step = min(step, decay_steps)
return ((initial_learning_rate - end_learning_rate) *
(1 - step / decay_steps) ^ (power)
) + end_learning_rate
```
If `cycle` is True then a multiple of `decay_steps` is used, the first one
that is bigger than `step`.
```python
def decayed_learning_rate(step):
decay_steps = decay_steps * ceil(step / decay_steps)
return ((initial_learning_rate - end_learning_rate) *
(1 - step / decay_steps) ^ (power)
) + end_learning_rate
```
You can pass this schedule directly into a `keras.optimizers.Optimizer`
as the learning rate.
Example: Fit a model while decaying from 0.1 to 0.01 in 10000 steps using
sqrt (i.e. power=0.5):
```python
...
starter_learning_rate = 0.1
end_learning_rate = 0.01
decay_steps = 10000
learning_rate_fn = keras.optimizers.schedules.PolynomialDecay(
starter_learning_rate,
decay_steps,
end_learning_rate,
power=0.5)
model.compile(optimizer=keras.optimizers.SGD(
learning_rate=learning_rate_fn),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(data, labels, epochs=5)
```
The learning rate schedule is also serializable and deserializable using
`keras.optimizers.schedules.serialize` and
`keras.optimizers.schedules.deserialize`.
Args:
initial_learning_rate: A Python float. The initial learning rate.
decay_steps: A Python integer. Must be positive. See the decay
computation above.
end_learning_rate: A Python float. The minimal end learning rate.
power: A Python float. The power of the polynomial. Defaults to
`1.0`.
cycle: A boolean, whether it should cycle beyond decay_steps.
name: String. Optional name of the operation. Defaults to
`"PolynomialDecay"`.
Returns:
A 1-arg callable learning rate schedule that takes the current optimizer
step and outputs the decayed learning rate, a scalar tensor of the
same type as `initial_learning_rate`.
"""
def __init__(
self,
initial_learning_rate,
decay_steps,
end_learning_rate=0.0001,
power=1.0,
cycle=False,
name="PolynomialDecay",
):
super().__init__()
self.initial_learning_rate = initial_learning_rate
self.decay_steps = decay_steps
self.end_learning_rate = end_learning_rate
self.power = power
self.cycle = cycle
self.name = name
if self.decay_steps <= 0:
raise ValueError(
"Argument `decay_steps` must be > 0. "
f"Received: decay_steps={self.decay_steps}"
)
def __call__(self, step):
with ops.name_scope(self.name):
initial_learning_rate = ops.convert_to_tensor(
self.initial_learning_rate
)
dtype = initial_learning_rate.dtype
end_learning_rate = ops.cast(self.end_learning_rate, dtype)
power = ops.cast(self.power, dtype)
global_step_recomp = ops.cast(step, dtype)
decay_steps_recomp = ops.cast(self.decay_steps, dtype)
if self.cycle:
# Find the first multiple of decay_steps that is bigger than
# global_step. If global_step is zero set the multiplier to 1
multiplier = ops.where(
ops.equal(global_step_recomp, 0),
1.0,
ops.ceil(global_step_recomp / self.decay_steps),
)
decay_steps_recomp = ops.multiply(
decay_steps_recomp, multiplier
)
else:
# Make sure that the global_step used is not bigger than
# decay_steps.
global_step_recomp = ops.minimum(
global_step_recomp, decay_steps_recomp
)
p = ops.divide(global_step_recomp, decay_steps_recomp)
return ops.add(
ops.multiply(
initial_learning_rate - end_learning_rate,
ops.power(1 - p, power),
),
end_learning_rate,
)
def get_config(self):
return {
"initial_learning_rate": self.initial_learning_rate,
"decay_steps": self.decay_steps,
"end_learning_rate": self.end_learning_rate,
"power": self.power,
"cycle": self.cycle,
"name": self.name,
}
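# A plain-Python restatement of the non-cycling polynomial decay rule from
# the `PolynomialDecay` docstring above, using its example settings
# (0.1 -> 0.01 over 10000 steps with power=0.5); `_polynomial_decay_example`
# is an illustrative helper, not public API.
def _polynomial_decay_example(step):
    initial_lr, end_lr, decay_steps, power = 0.1, 0.01, 10000, 0.5
    step = min(step, decay_steps)
    return (initial_lr - end_lr) * (1 - step / decay_steps) ** power + end_lr
# _polynomial_decay_example(0) is 0.1, (7500) is 0.055, and any step past
# decay_steps stays at the end learning rate of 0.01.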
@keras_export("keras.optimizers.schedules.InverseTimeDecay")
class InverseTimeDecay(LearningRateSchedule):
"""A `LearningRateSchedule` that uses an inverse time decay schedule.
When training a model, it is often useful to lower the learning rate as
the training progresses. This schedule applies the inverse decay function
to an optimizer step, given a provided initial learning rate.
It requires a `step` value to compute the decayed learning rate. You can
just pass a backend variable that you increment at each training step.
The schedule is a 1-arg callable that produces a decayed learning
rate when passed the current optimizer step. This can be useful for changing
the learning rate value across different invocations of optimizer functions.
It is computed as:
```python
def decayed_learning_rate(step):
        return initial_learning_rate / (1 + decay_rate * step / decay_steps)
```
or, if `staircase` is `True`, as:
```python
def decayed_learning_rate(step):
return initial_learning_rate /
            (1 + decay_rate * floor(step / decay_steps))
```
You can pass this schedule directly into a `keras.optimizers.Optimizer`
as the learning rate.
Example: Fit a Keras model when decaying 1/t with a rate of 0.5:
```python
...
initial_learning_rate = 0.1
decay_steps = 1.0
decay_rate = 0.5
learning_rate_fn = keras.optimizers.schedules.InverseTimeDecay(
initial_learning_rate, decay_steps, decay_rate)
model.compile(optimizer=keras.optimizers.SGD(
learning_rate=learning_rate_fn),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(data, labels, epochs=5)
```
Args:
initial_learning_rate: A Python float. The initial learning rate.
decay_steps: How often to apply decay.
decay_rate: A Python number. The decay rate.
        staircase: Whether to apply decay in a discrete staircase, as
            opposed to continuous, fashion.
name: String. Optional name of the operation. Defaults to
`"InverseTimeDecay"`.
Returns:
A 1-arg callable learning rate schedule that takes the current optimizer
step and outputs the decayed learning rate, a scalar tensor of the
same type as `initial_learning_rate`.
"""
def __init__(
self,
initial_learning_rate,
decay_steps,
decay_rate,
staircase=False,
name="InverseTimeDecay",
):
super().__init__()
self.initial_learning_rate = initial_learning_rate
self.decay_steps = decay_steps
self.decay_rate = decay_rate
self.staircase = staircase
self.name = name
if self.decay_steps <= 0:
raise ValueError(
"Argument `decay_steps` must be > 0. "
f"Received: decay_steps={self.decay_steps}"
)
def __call__(self, step):
with ops.name_scope(self.name):
initial_learning_rate = ops.convert_to_tensor(
self.initial_learning_rate
)
dtype = initial_learning_rate.dtype
decay_steps = ops.cast(self.decay_steps, dtype)
decay_rate = ops.cast(self.decay_rate, dtype)
global_step_recomp = ops.cast(step, dtype)
p = global_step_recomp / decay_steps
if self.staircase:
p = ops.floor(p)
const = ops.cast(ops.array(1), dtype)
denom = ops.add(const, ops.multiply(decay_rate, p))
return ops.divide(initial_learning_rate, denom)
def get_config(self):
return {
"initial_learning_rate": self.initial_learning_rate,
"decay_steps": self.decay_steps,
"decay_rate": self.decay_rate,
"staircase": self.staircase,
"name": self.name,
}
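# A plain-Python restatement of the continuous (`staircase=False`) rule from
# the `InverseTimeDecay` docstring above, using its example settings;
# `_inverse_time_decay_example` is an illustrative helper, not public API.
def _inverse_time_decay_example(step):
    initial_lr, decay_steps, decay_rate = 0.1, 1.0, 0.5
    return initial_lr / (1 + decay_rate * step / decay_steps)
# _inverse_time_decay_example(0) is 0.1, (2) is 0.05, and (6) is 0.025.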
@keras_export("keras.optimizers.schedules.CosineDecay")
class CosineDecay(LearningRateSchedule):
"""A `LearningRateSchedule` that uses a cosine decay with optional warmup.
See [Loshchilov & Hutter, ICLR2016](https://arxiv.org/abs/1608.03983),
SGDR: Stochastic Gradient Descent with Warm Restarts.
For the idea of a linear warmup of our learning rate,
see [Goyal et al.](https://arxiv.org/pdf/1706.02677.pdf).
When we begin training a model, we often want an initial increase in our
    learning rate followed by a decay. If `warmup_target` is not `None`, this
schedule applies a linear increase per optimizer step to our learning rate
from `initial_learning_rate` to `warmup_target` for a duration of
`warmup_steps`. Afterwards, it applies a cosine decay function taking our
learning rate from `warmup_target` to `alpha` for a duration of
`decay_steps`. If `warmup_target` is None we skip warmup and our decay
will take our learning rate from `initial_learning_rate` to `alpha`.
It requires a `step` value to compute the learning rate. You can
just pass a backend variable that you increment at each training step.
The schedule is a 1-arg callable that produces a warmup followed by a
decayed learning rate when passed the current optimizer step. This can be
useful for changing the learning rate value across different invocations of
optimizer functions.
Our warmup is computed as:
```python
def warmup_learning_rate(step):
completed_fraction = step / warmup_steps
        total_delta = warmup_target - initial_learning_rate
return completed_fraction * total_delta
```
And our decay is computed as:
```python
if warmup_target is None:
initial_decay_lr = initial_learning_rate
else:
initial_decay_lr = warmup_target
def decayed_learning_rate(step):
step = min(step, decay_steps)
cosine_decay = 0.5 * (1 + cos(pi * step / decay_steps))
decayed = (1 - alpha) * cosine_decay + alpha
return initial_decay_lr * decayed
```
Example usage without warmup:
```python
decay_steps = 1000
initial_learning_rate = 0.1
lr_decayed_fn = keras.optimizers.schedules.CosineDecay(
initial_learning_rate, decay_steps)
```
Example usage with warmup:
```python
decay_steps = 1000
initial_learning_rate = 0
warmup_steps = 1000
target_learning_rate = 0.1
lr_warmup_decayed_fn = keras.optimizers.schedules.CosineDecay(
initial_learning_rate, decay_steps, warmup_target=target_learning_rate,
warmup_steps=warmup_steps
)
```
You can pass this schedule directly into a `keras.optimizers.Optimizer`
as the learning rate. The learning rate schedule is also serializable and
deserializable using `keras.optimizers.schedules.serialize` and
`keras.optimizers.schedules.deserialize`.
Args:
initial_learning_rate: A Python float. The initial learning rate.
decay_steps: A Python int. Number of steps to decay over.
alpha: A Python float. Minimum learning rate value for decay as a
fraction of `initial_learning_rate`.
name: String. Optional name of the operation. Defaults to
`"CosineDecay"`.
warmup_target: A Python float. The target learning rate for our
warmup phase. Will cast to the `initial_learning_rate` datatype.
            Setting to `None` will skip warmup and begin the decay phase
            from `initial_learning_rate`. Otherwise the scheduler will warm
            up from `initial_learning_rate` to `warmup_target`.
warmup_steps: A Python int. Number of steps to warmup over.
Returns:
A 1-arg callable learning rate schedule that takes the current optimizer
step and outputs the decayed learning rate, a scalar tensor of the
same type as `initial_learning_rate`.
"""
def __init__(
self,
initial_learning_rate,
decay_steps,
alpha=0.0,
name="CosineDecay",
warmup_target=None,
warmup_steps=0,
):
super().__init__()
self.initial_learning_rate = initial_learning_rate
self.decay_steps = decay_steps
self.alpha = alpha
self.name = name
self.warmup_steps = warmup_steps
self.warmup_target = warmup_target
if self.decay_steps <= 0:
raise ValueError(
"Argument `decay_steps` must be > 0. "
f"Received: decay_steps={self.decay_steps}"
)
def _decay_function(self, step, decay_steps, decay_from_lr, dtype):
with ops.name_scope(self.name):
completed_fraction = step / decay_steps
pi = ops.array(math.pi, dtype=dtype)
cosine_decayed = 0.5 * (1.0 + ops.cos(pi * completed_fraction))
decayed = (1 - self.alpha) * cosine_decayed + self.alpha
return ops.multiply(decay_from_lr, decayed)
def _warmup_function(
self, step, warmup_steps, warmup_target, initial_learning_rate
):
with ops.name_scope(self.name):
completed_fraction = step / warmup_steps
total_step_delta = warmup_target - initial_learning_rate
return total_step_delta * completed_fraction + initial_learning_rate
def __call__(self, step):
with ops.name_scope(self.name):
initial_learning_rate = ops.convert_to_tensor(
self.initial_learning_rate
)
dtype = initial_learning_rate.dtype
decay_steps = ops.cast(self.decay_steps, dtype)
global_step_recomp = ops.cast(step, dtype)
if self.warmup_target is None:
global_step_recomp = ops.minimum(
global_step_recomp, decay_steps
)
return self._decay_function(
global_step_recomp,
decay_steps,
initial_learning_rate,
dtype,
)
warmup_target = ops.cast(self.warmup_target, dtype)
warmup_steps = ops.cast(self.warmup_steps, dtype)
global_step_recomp = ops.minimum(
global_step_recomp, decay_steps + warmup_steps
)
return ops.cond(
global_step_recomp < warmup_steps,
lambda: self._warmup_function(
global_step_recomp,
warmup_steps,
warmup_target,
initial_learning_rate,
),
lambda: self._decay_function(
global_step_recomp - warmup_steps,
decay_steps,
warmup_target,
dtype,
),
)
def get_config(self):
return {
"initial_learning_rate": self.initial_learning_rate,
"decay_steps": self.decay_steps,
"alpha": self.alpha,
"name": self.name,
"warmup_target": self.warmup_target,
"warmup_steps": self.warmup_steps,
}
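# A plain-Python restatement of the warmup + cosine decay rule from the
# `CosineDecay` docstring above, using its warmup example settings
# (initial_learning_rate=0, warmup_target=0.1, warmup_steps=1000,
# decay_steps=1000, alpha=0.0); `_cosine_decay_example` is an illustrative
# helper, not public API.
def _cosine_decay_example(step):
    initial_lr, warmup_target, warmup_steps = 0.0, 0.1, 1000
    decay_steps, alpha = 1000, 0.0
    if step < warmup_steps:
        return initial_lr + (warmup_target - initial_lr) * step / warmup_steps
    step = min(step - warmup_steps, decay_steps)
    cosine_decayed = 0.5 * (1 + math.cos(math.pi * step / decay_steps))
    return warmup_target * ((1 - alpha) * cosine_decayed + alpha)
# _cosine_decay_example(500) is 0.05 (halfway through warmup),
# (1500) is roughly 0.05 (halfway through decay), and (2000) is 0.0.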
@keras_export("keras.optimizers.schedules.CosineDecayRestarts")
class CosineDecayRestarts(LearningRateSchedule):
"""A `LearningRateSchedule` that uses a cosine decay schedule with restarts.
See [Loshchilov & Hutter, ICLR2016](https://arxiv.org/abs/1608.03983),
SGDR: Stochastic Gradient Descent with Warm Restarts.
When training a model, it is often useful to lower the learning rate as
the training progresses. This schedule applies a cosine decay function with
restarts to an optimizer step, given a provided initial learning rate.
It requires a `step` value to compute the decayed learning rate. You can
just pass a backend variable that you increment at each training step.
The schedule is a 1-arg callable that produces a decayed learning
rate when passed the current optimizer step. This can be useful for changing
the learning rate value across different invocations of optimizer functions.
The learning rate multiplier first decays
from 1 to `alpha` for `first_decay_steps` steps. Then, a warm
restart is performed. Each new warm restart runs for `t_mul` times more
steps and with `m_mul` times initial learning rate as the new learning rate.
Example usage:
```python
first_decay_steps = 1000
lr_decayed_fn = (
keras.optimizers.schedules.CosineDecayRestarts(
initial_learning_rate,
first_decay_steps))
```
You can pass this schedule directly into a `keras.optimizers.Optimizer`
as the learning rate. The learning rate schedule is also serializable and
deserializable using `keras.optimizers.schedules.serialize` and
`keras.optimizers.schedules.deserialize`.
Args:
initial_learning_rate: A Python float. The initial learning rate.
first_decay_steps: A Python integer. Number of steps to decay over.
t_mul: A Python float. Used to derive the number of iterations in
the i-th period.
m_mul: A Python float. Used to derive the initial learning rate of
the i-th period.
alpha: A Python float. Minimum learning rate value as a fraction of
the `initial_learning_rate`.
name: String. Optional name of the operation. Defaults to
`"SGDRDecay"`.
Returns:
A 1-arg callable learning rate schedule that takes the current optimizer
step and outputs the decayed learning rate, a scalar tensor of the
same type as `initial_learning_rate`.
"""
def __init__(
self,
initial_learning_rate,
first_decay_steps,
t_mul=2.0,
m_mul=1.0,
alpha=0.0,
name="SGDRDecay",
):
super().__init__()
self.initial_learning_rate = initial_learning_rate
self.first_decay_steps = first_decay_steps
self._t_mul = t_mul
self._m_mul = m_mul
self.alpha = alpha
self.name = name
if self.first_decay_steps <= 0:
raise ValueError(
"Argument `first_decay_steps` must be > 0. "
f"Received: first_decay_steps={self.first_decay_steps}"
)
def __call__(self, step):
with ops.name_scope(self.name):
initial_learning_rate = ops.convert_to_tensor(
self.initial_learning_rate
)
dtype = initial_learning_rate.dtype
first_decay_steps = ops.cast(self.first_decay_steps, dtype)
alpha = ops.cast(self.alpha, dtype)
t_mul = ops.cast(self._t_mul, dtype)
m_mul = ops.cast(self._m_mul, dtype)
global_step_recomp = ops.cast(step, dtype)
completed_fraction = global_step_recomp / first_decay_steps
def compute_step(completed_fraction, geometric=False):
"""Helper for `cond` operation."""
if geometric:
# ops.log is sensitive to the precision of dtype, so we need
# the additional casting
i_restart = ops.floor(
ops.log(
ops.cast(
1.0 - completed_fraction * (1.0 - t_mul), dtype
)
)
/ ops.log(t_mul)
)
sum_r = (1.0 - t_mul**i_restart) / (1.0 - t_mul)
completed_fraction = (
completed_fraction - sum_r
) / t_mul**i_restart
else:
i_restart = ops.floor(completed_fraction)
completed_fraction -= i_restart
return i_restart, completed_fraction
i_restart, completed_fraction = ops.cond(
ops.equal(t_mul, 1.0),
lambda: compute_step(completed_fraction, geometric=False),
lambda: compute_step(completed_fraction, geometric=True),
)
m_fac = m_mul**i_restart
cosine_decayed = (
0.5
* m_fac
* (
1.0
+ ops.cos(
ops.array(math.pi, dtype=dtype) * completed_fraction
)
)
)
decayed = (1 - alpha) * cosine_decayed + alpha
return ops.multiply(initial_learning_rate, decayed)
def get_config(self):
return {
"initial_learning_rate": self.initial_learning_rate,
"first_decay_steps": self.first_decay_steps,
"t_mul": self._t_mul,
"m_mul": self._m_mul,
"alpha": self.alpha,
"name": self.name,
}
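# A simplified plain-Python view of `CosineDecayRestarts` above for the
# special case `t_mul=1.0` and `m_mul=1.0`, where every restart period has
# the same length and amplitude; `_cosine_restarts_example` is an
# illustrative helper, not a reimplementation of the geometric case.
def _cosine_restarts_example(step):
    initial_lr, first_decay_steps, alpha = 0.1, 1000, 0.0
    completed_fraction = (step % first_decay_steps) / first_decay_steps
    cosine_decayed = 0.5 * (1 + math.cos(math.pi * completed_fraction))
    return initial_lr * ((1 - alpha) * cosine_decayed + alpha)
# _cosine_restarts_example(0) is 0.1, (500) is 0.05, and (1000) warm-restarts
# back to 0.1.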
@keras_export("keras.optimizers.schedules.serialize")
def serialize(learning_rate_schedule):
"""Serializes a `LearningRateSchedule` into a JSON-compatible dict.
Args:
learning_rate_schedule: The `LearningRateSchedule` object to serialize.
Returns:
A JSON-serializable dict representing the object's config.
Example:
>>> lr_schedule = keras.optimizers.schedules.ExponentialDecay(
... 0.1, decay_steps=100000, decay_rate=0.96, staircase=True)
>>> keras.optimizers.schedules.serialize(lr_schedule)
{'module': 'keras.optimizers.schedules',
'class_name': 'ExponentialDecay', 'config': {...},
'registered_name': None}
"""
return serialization_lib.serialize_keras_object(learning_rate_schedule)
@keras_export("keras.optimizers.schedules.deserialize")
def deserialize(config, custom_objects=None):
"""Instantiates a `LearningRateSchedule` object from a serialized form.
Args:
config: The serialized form of the `LearningRateSchedule`. Dictionary of
the form {'class_name': str, 'config': dict}.
custom_objects: A dictionary mapping class names (or function names) of
custom (non-Keras) objects to class/functions.
Returns:
A `LearningRateSchedule` object.
Example:
```python
# Configuration for PolynomialDecay
config = {
'class_name': 'PolynomialDecay',
'config': {'cycle': False,
'decay_steps': 10000,
'end_learning_rate': 0.01,
'initial_learning_rate': 0.1,
'name': None,
'power': 0.5
}
}
lr_schedule = keras.optimizers.schedules.deserialize(config)
```
"""
return serialization_lib.deserialize_keras_object(
config,
module_objects=globals(),
custom_objects=custom_objects,
printable_module_name="decay",
)
| keras/keras/optimizers/schedules/learning_rate_schedule.py/0 | {
"file_path": "keras/keras/optimizers/schedules/learning_rate_schedule.py",
"repo_id": "keras",
"token_count": 15351
} | 198 |
import os
import zipfile
from absl import logging
from keras.api_export import keras_export
from keras.legacy.saving import legacy_h5_format
from keras.saving import saving_lib
from keras.utils import file_utils
from keras.utils import io_utils
try:
import h5py
except ImportError:
h5py = None
@keras_export(["keras.saving.save_model", "keras.models.save_model"])
def save_model(model, filepath, overwrite=True, **kwargs):
"""Saves a model as a `.keras` file.
Args:
model: Keras model instance to be saved.
filepath: `str` or `pathlib.Path` object. Path where to save the model.
overwrite: Whether we should overwrite any existing model at the target
location, or instead ask the user via an interactive prompt.
Example:
```python
model = keras.Sequential(
[
keras.layers.Dense(5, input_shape=(3,)),
keras.layers.Softmax(),
],
)
model.save("model.keras")
loaded_model = keras.saving.load_model("model.keras")
x = keras.random.uniform((10, 3))
assert np.allclose(model.predict(x), loaded_model.predict(x))
```
Note that `model.save()` is an alias for `keras.saving.save_model()`.
The saved `.keras` file contains:
- The model's configuration (architecture)
- The model's weights
- The model's optimizer's state (if any)
Thus models can be reinstantiated in the exact same state.
"""
include_optimizer = kwargs.pop("include_optimizer", True)
save_format = kwargs.pop("save_format", False)
if save_format:
if str(filepath).endswith((".h5", ".hdf5")) or str(filepath).endswith(
".keras"
):
logging.warning(
"The `save_format` argument is deprecated in Keras 3. "
"We recommend removing this argument as it can be inferred "
"from the file path. "
f"Received: save_format={save_format}"
)
else:
            raise ValueError(
                "The `save_format` argument is deprecated in Keras 3. "
                "Please remove this argument and pass a file path with "
                "either `.keras` or `.h5` extension. "
                f"Received: save_format={save_format}"
            )
if kwargs:
raise ValueError(
"The following argument(s) are not supported: "
f"{list(kwargs.keys())}"
)
# Deprecation warnings
if str(filepath).endswith((".h5", ".hdf5")):
logging.warning(
"You are saving your model as an HDF5 file via "
"`model.save()` or `keras.saving.save_model(model)`. "
"This file format is considered legacy. "
"We recommend using instead the native Keras format, "
"e.g. `model.save('my_model.keras')` or "
"`keras.saving.save_model(model, 'my_model.keras')`. "
)
# If file exists and should not be overwritten.
try:
exists = os.path.exists(filepath)
except TypeError:
exists = False
if exists and not overwrite:
proceed = io_utils.ask_to_proceed_with_overwrite(filepath)
if not proceed:
return
if str(filepath).endswith(".keras"):
saving_lib.save_model(model, filepath)
elif str(filepath).endswith((".h5", ".hdf5")):
legacy_h5_format.save_model_to_hdf5(
model, filepath, overwrite, include_optimizer
)
else:
raise ValueError(
"Invalid filepath extension for saving. "
"Please add either a `.keras` extension for the native Keras "
f"format (recommended) or a `.h5` extension. "
"Use `tf.saved_model.save()` if you want to export a SavedModel "
"for use with TFLite/TFServing/etc. "
f"Received: filepath={filepath}."
)
@keras_export(["keras.saving.load_model", "keras.models.load_model"])
def load_model(filepath, custom_objects=None, compile=True, safe_mode=True):
"""Loads a model saved via `model.save()`.
Args:
filepath: `str` or `pathlib.Path` object, path to the saved model file.
custom_objects: Optional dictionary mapping names
(strings) to custom classes or functions to be
considered during deserialization.
compile: Boolean, whether to compile the model after loading.
safe_mode: Boolean, whether to disallow unsafe `lambda` deserialization.
When `safe_mode=False`, loading an object has the potential to
trigger arbitrary code execution. This argument is only
applicable to the Keras v3 model format. Defaults to True.
Returns:
A Keras model instance. If the original model was compiled,
and the argument `compile=True` is set, then the returned model
will be compiled. Otherwise, the model will be left uncompiled.
Example:
```python
model = keras.Sequential([
keras.layers.Dense(5, input_shape=(3,)),
keras.layers.Softmax()])
model.save("model.keras")
loaded_model = keras.saving.load_model("model.keras")
x = np.random.random((10, 3))
assert np.allclose(model.predict(x), loaded_model.predict(x))
```
Note that the model variables may have different name values
(`var.name` property, e.g. `"dense_1/kernel:0"`) after being reloaded.
It is recommended that you use layer attributes to
access specific variables, e.g. `model.get_layer("dense_1").kernel`.
"""
is_keras_zip = str(filepath).endswith(".keras") and zipfile.is_zipfile(
filepath
)
# Support for remote zip files
if (
file_utils.is_remote_path(filepath)
and not file_utils.isdir(filepath)
and not is_keras_zip
):
local_path = os.path.join(
saving_lib.get_temp_dir(), os.path.basename(filepath)
)
# Copy from remote to temporary local directory
file_utils.copy(filepath, local_path)
# Switch filepath to local zipfile for loading model
if zipfile.is_zipfile(local_path):
filepath = local_path
is_keras_zip = True
if is_keras_zip:
return saving_lib.load_model(
filepath,
custom_objects=custom_objects,
compile=compile,
safe_mode=safe_mode,
)
if str(filepath).endswith((".h5", ".hdf5")):
        return legacy_h5_format.load_model_from_hdf5(
            filepath, custom_objects=custom_objects, compile=compile
        )
elif str(filepath).endswith(".keras"):
raise ValueError(
f"File not found: filepath={filepath}. "
"Please ensure the file is an accessible `.keras` "
"zip file."
)
else:
raise ValueError(
f"File format not supported: filepath={filepath}. "
"Keras 3 only supports V3 `.keras` files and "
"legacy H5 format files (`.h5` extension). "
"Note that the legacy SavedModel format is not "
"supported by `load_model()` in Keras 3. In "
"order to reload a TensorFlow SavedModel as an "
"inference-only layer in Keras 3, use "
"`keras.layers.TFSMLayer("
f"{filepath}, call_endpoint='serving_default')` "
"(note that your `call_endpoint` "
"might have a different name)."
)
def load_weights(model, filepath, skip_mismatch=False, **kwargs):
if str(filepath).endswith(".keras"):
if kwargs:
raise ValueError(f"Invalid keyword arguments: {kwargs}")
saving_lib.load_weights_only(
model, filepath, skip_mismatch=skip_mismatch
)
elif str(filepath).endswith(".weights.h5"):
if kwargs:
raise ValueError(f"Invalid keyword arguments: {kwargs}")
saving_lib.load_weights_only(
model, filepath, skip_mismatch=skip_mismatch
)
elif str(filepath).endswith(".h5") or str(filepath).endswith(".hdf5"):
by_name = kwargs.pop("by_name", False)
if kwargs:
raise ValueError(f"Invalid keyword arguments: {kwargs}")
if not h5py:
raise ImportError(
"Loading a H5 file requires `h5py` to be installed."
)
with h5py.File(filepath, "r") as f:
if "layer_names" not in f.attrs and "model_weights" in f:
f = f["model_weights"]
if by_name:
legacy_h5_format.load_weights_from_hdf5_group_by_name(
f, model, skip_mismatch
)
else:
legacy_h5_format.load_weights_from_hdf5_group(f, model)
else:
raise ValueError(
f"File format not supported: filepath={filepath}. "
"Keras 3 only supports V3 `.keras` and `.weights.h5` "
"files, or legacy V1/V2 `.h5` files."
)
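# A minimal sketch of how the extension-based dispatch in `load_weights`
# above is typically exercised. The checkpoint path is hypothetical, and the
# snippet is wrapped in a helper so importing this module stays side-effect
# free.
def _load_weights_example():
    from keras import layers, models

    model = models.Sequential([layers.Input(shape=(3,)), layers.Dense(5)])
    model.save_weights("ckpt.weights.h5")  # native Keras 3 weights format
    load_weights(model, "ckpt.weights.h5")  # dispatches to saving_lib
    return model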
| keras/keras/saving/saving_api.py/0 | {
"file_path": "keras/keras/saving/saving_api.py",
"repo_id": "keras",
"token_count": 3968
} | 199 |