Model Overview
ALBERT encoder network.
This class implements a bi-directional Transformer-based encoder as described in "ALBERT: A Lite BERT for Self-supervised Learning of Language Representations". ALBERT is a more efficient variant of BERT, and uses parameter reduction techniques such as cross-layer parameter sharing and factorized embedding parameterization. This model class includes the embedding lookups and transformer layers, but not the masked language model or sentence order prediction heads.
The default constructor gives a fully customizable, randomly initialized ALBERT encoder with any number of layers, heads, and embedding dimensions. To load preset architectures and weights, use the `from_preset` constructor.
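For example, pretrained weights can be loaded by preset name (a minimal sketch, assuming the backbone class `keras_hub.models.AlbertBackbone`; available preset names are listed under Presets below):

```python
import keras_hub

# Load the ALBERT encoder (without task heads) with pretrained weights.
backbone = keras_hub.models.AlbertBackbone.from_preset("albert_base_en_uncased")
```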
Disclaimer: Pre-trained models are provided on an "as is" basis, without warranties or conditions of any kind.
Links
- ALBERT Quickstart Notebook
- ALBERT API Documentation
- ALBERT Model Card
- KerasHub Beginner Guide
- KerasHub Model Publishing Guide
Installation
Keras and KerasHub can be installed with:
```bash
pip install -U -q keras-hub
pip install -U -q keras
```
JAX, TensorFlow, and PyTorch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment, see the Keras Getting Started page.
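Keras 3 can run on any of these backends. As a sketch, the backend is selected with the `KERAS_BACKEND` environment variable, which must be set before Keras is imported:

```python
import os

# One of "jax", "tensorflow", or "torch"; set before importing keras.
os.environ["KERAS_BACKEND"] = "jax"

import keras
import keras_hub
```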
Presets
The following model checkpoints are provided by the Keras team. Full code examples for each are available below.
| Preset name | Parameters | Description |
|---|---|---|
| albert_base_en_uncased | 11.68M | 12-layer ALBERT model where all input is lowercased. Trained on English Wikipedia + BooksCorpus. |
| albert_large_en_uncased | 17.68M | 24-layer ALBERT model where all input is lowercased. Trained on English Wikipedia + BooksCorpus. |
| albert_extra_large_en_uncased | 58.72M | 24-layer ALBERT model where all input is lowercased. Trained on English Wikipedia + BooksCorpus. |
| albert_extra_extra_large_en_uncased | 222.60M | 12-layer ALBERT model where all input is lowercased. Trained on English Wikipedia + BooksCorpus. |
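As a rough sanity check (a sketch, assuming the backbone class `keras_hub.models.AlbertBackbone`), a preset can be loaded and its parameter count compared against the table:

```python
import keras_hub

backbone = keras_hub.models.AlbertBackbone.from_preset("albert_large_en_uncased")
print(backbone.count_params())  # Approximately 17.68M for this preset.
```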
Arguments
- `vocabulary_size`: int. The size of the token vocabulary.
- `num_layers`: int, must be divisible by `num_groups`. The number of "virtual" layers, i.e., the total number of times the input sequence will be fed through the groups in one forward pass. The input will be routed to the correct group based on the layer index.
- `num_heads`: int. The number of attention heads for each transformer. The hidden size must be divisible by the number of attention heads.
- `embedding_dim`: int. The size of the embeddings.
- `hidden_dim`: int. The size of the transformer encoding and pooler layers.
- `intermediate_dim`: int. The output dimension of the first `Dense` layer in a two-layer feedforward network for each transformer.
- `num_groups`: int. The number of groups, with each group having `num_inner_repetitions` `TransformerEncoder` layers.
- `num_inner_repetitions`: int. The number of `TransformerEncoder` layers per group.
- `dropout`: float. Dropout probability for the Transformer encoder.
- `max_sequence_length`: int. The maximum sequence length that this encoder can consume. If None, the value of the input sequence length is used. This determines the variable shape for positional embeddings.
- `num_segments`: int. The number of types that the `segment_ids` input can take.

A constructor example is sketched below.
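Putting the arguments together, a randomly initialized encoder can be built directly from the constructor. This is a minimal sketch, assuming the backbone class `keras_hub.models.AlbertBackbone`; the hyperparameter values are illustrative and do not correspond to a released preset:

```python
import numpy as np
import keras_hub

# Randomly initialized ALBERT encoder with an illustrative configuration.
backbone = keras_hub.models.AlbertBackbone(
    vocabulary_size=30000,
    num_layers=12,
    num_heads=12,
    num_groups=1,
    num_inner_repetitions=1,
    embedding_dim=128,
    hidden_dim=768,
    intermediate_dim=3072,
    max_sequence_length=12,
)

# The backbone consumes token ids, segment ids, and a padding mask.
input_data = {
    "token_ids": np.ones(shape=(1, 12), dtype="int32"),
    "segment_ids": np.array([[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0]]),
    "padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]]),
}
outputs = backbone(input_data)
```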
Example Usage
```python
import keras
import keras_hub
import numpy as np

# Raw string data.
features = ["The quick brown fox jumped.", "I forgot my homework."]
labels = [0, 3]

# Pretrained classifier.
classifier = keras_hub.models.AlbertClassifier.from_preset(
    "albert_base_en_uncased",
    num_classes=4,
)
classifier.fit(x=features, y=labels, batch_size=2)
classifier.predict(x=features, batch_size=2)

# Re-compile (e.g., with a new learning rate).
classifier.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.Adam(5e-5),
    jit_compile=True,
)
# Access backbone programmatically (e.g., to change `trainable`).
classifier.backbone.trainable = False
# Fit again.
classifier.fit(x=features, y=labels, batch_size=2)

# Preprocessed integer data.
features = {
    "token_ids": np.ones(shape=(2, 12), dtype="int32"),
    "segment_ids": np.array([[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0]] * 2),
    "padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]] * 2),
}
labels = [0, 3]

# Pretrained classifier without preprocessing.
classifier = keras_hub.models.AlbertClassifier.from_preset(
    "albert_base_en_uncased",
    num_classes=4,
    preprocessor=None,
)
classifier.fit(x=features, y=labels, batch_size=2)
```
Example Usage with Hugging Face URI
```python
import keras
import keras_hub
import numpy as np

# Raw string data.
features = ["The quick brown fox jumped.", "I forgot my homework."]
labels = [0, 3]

# Pretrained classifier.
classifier = keras_hub.models.AlbertClassifier.from_preset(
    "hf://keras/albert_base_en_uncased",
    num_classes=4,
)
classifier.fit(x=features, y=labels, batch_size=2)
classifier.predict(x=features, batch_size=2)

# Re-compile (e.g., with a new learning rate).
classifier.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.Adam(5e-5),
    jit_compile=True,
)
# Access backbone programmatically (e.g., to change `trainable`).
classifier.backbone.trainable = False
# Fit again.
classifier.fit(x=features, y=labels, batch_size=2)

# Preprocessed integer data.
features = {
    "token_ids": np.ones(shape=(2, 12), dtype="int32"),
    "segment_ids": np.array([[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0]] * 2),
    "padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]] * 2),
}
labels = [0, 3]

# Pretrained classifier without preprocessing.
classifier = keras_hub.models.AlbertClassifier.from_preset(
    "hf://keras/albert_base_en_uncased",
    num_classes=4,
    preprocessor=None,
)
classifier.fit(x=features, y=labels, batch_size=2)
```