julien-c (HF staff) committed
Commit f5e29a9
1 Parent(s): ada142f

Migrate model card from transformers-repo


Read announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/codegram/calbert-tiny-uncased/README.md

---
language: "ca"
tags:
- masked-lm
- catalan
- exbert
license: mit
---

# Calbert: a Catalan Language Model

## Introduction

CALBERT is an open-source language model for Catalan, pretrained on the ALBERT architecture.

It is now available on Hugging Face in this `tiny-uncased` version (the one you're looking at), as well as `base-uncased`, and was pretrained on the [OSCAR dataset](https://traces1.inria.fr/oscar/).

For further information or requests, please visit the [GitHub repository](https://github.com/codegram/calbert).

## Pre-trained models

| Model                           | Arch.          | Training data          |
| ------------------------------- | -------------- | ---------------------- |
| `codegram/calbert-tiny-uncased` | Tiny (uncased) | OSCAR (4.3 GB of text) |
| `codegram/calbert-base-uncased` | Base (uncased) | OSCAR (4.3 GB of text) |

## How to use Calbert with Hugging Face

#### Load Calbert and its tokenizer:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("codegram/calbert-tiny-uncased")
model = AutoModel.from_pretrained("codegram/calbert-tiny-uncased")

model.eval()  # disable dropout (or leave in train mode to fine-tune)
```
#### Filling masks using the pipeline

```python
from transformers import pipeline

calbert_fill_mask = pipeline(
    "fill-mask",
    model="codegram/calbert-tiny-uncased",
    tokenizer="codegram/calbert-tiny-uncased",
)
results = calbert_fill_mask("M'agrada [MASK] això")
# results
# [{'sequence': "[CLS] m'agrada molt aixo[SEP]", 'score': 0.4403671622276306, 'token': 61},
#  {'sequence': "[CLS] m'agrada més aixo[SEP]", 'score': 0.050061386078596115, 'token': 43},
#  {'sequence': "[CLS] m'agrada veure aixo[SEP]", 'score': 0.026286985725164413, 'token': 157},
#  {'sequence': "[CLS] m'agrada bastant aixo[SEP]", 'score': 0.022483550012111664, 'token': 2143},
#  {'sequence': "[CLS] m'agrada moltíssim aixo[SEP]", 'score': 0.014491282403469086, 'token': 4867}]
```
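Each candidate returned by the fill-mask pipeline is a dict with `sequence`, `score`, and `token` keys, so picking the single best fill is a one-liner. A minimal sketch, using the scores shown above as hard-coded sample data (`sample_results` is for illustration only, not produced by running the model here):

```python
# Sample output in the shape returned by the fill-mask pipeline
# (hard-coded from the scores shown above, for illustration only)
sample_results = [
    {"sequence": "[CLS] m'agrada molt aixo[SEP]", "score": 0.4403671622276306, "token": 61},
    {"sequence": "[CLS] m'agrada més aixo[SEP]", "score": 0.050061386078596115, "token": 43},
    {"sequence": "[CLS] m'agrada veure aixo[SEP]", "score": 0.026286985725164413, "token": 157},
]

# Each candidate is a dict; pick the one with the highest score
best = max(sample_results, key=lambda r: r["score"])
print(best["sequence"])  # [CLS] m'agrada molt aixo[SEP]
```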
#### Extract contextual embedding features from Calbert output

```python
import torch

# Tokenize into sub-words with SentencePiece
tokenized_sentence = tokenizer.tokenize("M'és una mica igual")
# ['▁m', "'", 'es', '▁una', '▁mica', '▁igual']

# Map tokens to vocabulary IDs and add the special start and end tokens
encoded_sentence = tokenizer.encode(tokenized_sentence)
# [2, 109, 7, 71, 36, 371, 1103, 3]
# NB: this can be done in one step: tokenizer.encode("M'és una mica igual")

# Feed tokens to Calbert as a torch tensor (batch dim 1)
encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0)
embeddings, _ = model(encoded_sentence)
embeddings.size()
# torch.Size([1, 8, 312])
embeddings.detach()
# tensor([[[-0.2726, -0.9855,  0.9643,  ...,  0.3511,  0.3499, -0.1984],
#          [-0.2824, -1.1693, -0.2365,  ..., -3.1866, -0.9386, -1.3718],
#          [-2.3645, -2.2477, -1.6985,  ..., -1.4606, -2.7294,  0.2495],
#          ...,
#          [ 0.8800, -0.0244, -3.0446,  ...,  0.5148, -3.0903,  1.1879],
#          [ 1.1300,  0.2425,  0.2162,  ..., -0.5722, -2.2004,  0.4045],
#          [ 0.4549, -0.2378, -0.2290,  ..., -2.1247, -2.2769, -0.0820]]])
```
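The output above is one 312-dimensional vector per token. A common way to reduce this to a single fixed-size sentence vector, not part of the original card and shown here only as a sketch, is to mean-pool over the token dimension. The tensor below uses random values standing in for real model output, with the same `[1, 8, 312]` shape as above:

```python
import torch

# Stand-in for Calbert output: batch of 1 sentence, 8 tokens, hidden size 312
embeddings = torch.randn(1, 8, 312)

# Mean-pool over the token dimension (dim=1): one vector per sentence
sentence_vector = embeddings.mean(dim=1)
print(sentence_vector.shape)  # torch.Size([1, 312])
```

For real use you would also mask out padding tokens before averaging, so that pad positions do not dilute the mean.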
## Authors

CALBERT was trained and evaluated by [Txus Bach](https://twitter.com/txustice), as part of [Codegram](https://www.codegram.com)'s applied research.

<a href="https://huggingface.co/exbert/?model=codegram/calbert-tiny-uncased&modelKind=bidirectional&sentence=M%27agradaria%20força%20saber-ne%20més">
	<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>