Commit 103dd94 (verified) by Divyasreepat · Parent: a458ce5

Update README.md with new model card content

Files changed (1): README.md (+29 −0)
To load preset architectures and weights, use the `from_preset` constructor.

Disclaimer: Pre-trained models are provided on an "as is" basis, without
warranties or conditions of any kind.
## Links

* [ALBERT Quickstart Notebook](https://www.kaggle.com/code/laxmareddypatlolla/albert-quickstart-notebook)
* [ALBERT API Documentation](https://keras.io/keras_hub/api/models/albert/)
* [ALBERT Model Card](https://huggingface.co/docs/transformers/en/model_doc/albert)
* [KerasHub Beginner Guide](https://keras.io/guides/keras_hub/getting_started/)
* [KerasHub Model Publishing Guide](https://keras.io/guides/keras_hub/upload/)
## Installation

Keras and KerasHub can be installed with:

```shell
pip install -U -q keras-hub
pip install -U -q keras
```

JAX, TensorFlow, and PyTorch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment, see the [Keras Getting Started](https://keras.io/getting_started/) page.
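Keras 3 picks its compute backend from the `KERAS_BACKEND` environment variable, which must be set before `keras` is first imported. A minimal sketch (the `"jax"` value here is just an example choice):

```python
import os

# Keras 3 reads KERAS_BACKEND at import time, so set it before
# `import keras` runs anywhere in the process.
os.environ["KERAS_BACKEND"] = "jax"  # or "tensorflow", "torch"
```

After this, `import keras` will run on the selected backend for the rest of the process; changing the variable later has no effect without restarting.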
## Presets

The following model checkpoints are provided by the Keras team. Full code examples for each are available below.

| Preset name | Parameters | Description |
|-------------|------------|-------------|
| albert_base_en_uncased | 11.68M | 12-layer ALBERT model where all input is lowercased. Trained on English Wikipedia + BooksCorpus. |
| albert_large_en_uncased | 17.68M | 24-layer ALBERT model where all input is lowercased. Trained on English Wikipedia + BooksCorpus. |
| albert_extra_large_en_uncased | 58.72M | 24-layer ALBERT model where all input is lowercased. Trained on English Wikipedia + BooksCorpus. |
| albert_extra_extra_large_en_uncased | 222.60M | 12-layer ALBERT model where all input is lowercased. Trained on English Wikipedia + BooksCorpus. |

__Arguments__