julien-c HF staff committed on
Commit
39f73fa
1 Parent(s): 35860e5

Migrate model card from transformers-repo

Read announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/codegram/calbert-base-uncased/README.md

Files changed (1)
  1. README.md +90 -0
README.md ADDED

---
language: "ca"
tags:
- masked-lm
- catalan
- exbert
license: mit
---

# Calbert: a Catalan Language Model

## Introduction

CALBERT is an open-source language model for Catalan, based on the ALBERT architecture.

It is available on Hugging Face in two versions, `tiny-uncased` and `base-uncased` (the one you are looking at), and was pretrained on the [OSCAR dataset](https://traces1.inria.fr/oscar/).

For further information or requests, please visit the [GitHub repository](https://github.com/codegram/calbert).

## Pre-trained models

| Model                           | Arch.          | Training data          |
| ------------------------------- | -------------- | ---------------------- |
| `codegram/calbert-tiny-uncased` | Tiny (uncased) | OSCAR (4.3 GB of text) |
| `codegram/calbert-base-uncased` | Base (uncased) | OSCAR (4.3 GB of text) |
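
Both checkpoints load the same way; only the model identifier changes. As a minimal sketch, here is the tiny variant (the base variant is covered step by step in the next section):

```python
from transformers import AutoModel, AutoTokenizer

# The tiny checkpoint loads exactly like the base one; only the id differs
tiny_tokenizer = AutoTokenizer.from_pretrained("codegram/calbert-tiny-uncased")
tiny_model = AutoModel.from_pretrained("codegram/calbert-tiny-uncased")
```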

## How to use Calbert with Hugging Face

#### Load Calbert and its tokenizer:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("codegram/calbert-base-uncased")
model = AutoModel.from_pretrained("codegram/calbert-base-uncased")

model.eval()  # disable dropout (or leave in train mode to finetune)
```
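
As a quick sanity check that the expected weights were loaded, you can inspect the model through standard `transformers` attributes (nothing CALBERT-specific):

```python
# Hidden size should be 768 for the base model
print(model.config.hidden_size)

# Total parameter count (ALBERT shares weights across layers, so this stays small)
print(sum(p.numel() for p in model.parameters()))
```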

#### Filling masks using the pipeline

```python
from transformers import pipeline

calbert_fill_mask = pipeline("fill-mask", model="codegram/calbert-base-uncased", tokenizer="codegram/calbert-base-uncased")
results = calbert_fill_mask("M'agrada [MASK] això")
# results
# [{'sequence': "[CLS] m'agrada molt aixo[SEP]", 'score': 0.614592969417572, 'token': 61},
#  {'sequence': "[CLS] m'agrada moltíssim aixo[SEP]", 'score': 0.06058056280016899, 'token': 4867},
#  {'sequence': "[CLS] m'agrada més aixo[SEP]", 'score': 0.017195818945765495, 'token': 43},
#  {'sequence': "[CLS] m'agrada llegir aixo[SEP]", 'score': 0.016321714967489243, 'token': 684},
#  {'sequence': "[CLS] m'agrada escriure aixo[SEP]", 'score': 0.012185849249362946, 'token': 1306}]
```
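
If you only need the predicted tokens themselves, you can decode them from the result dictionaries with the tokenizer loaded above (a minimal sketch):

```python
# Decode each predicted token id back to its surface form
for r in results:
    token = tokenizer.convert_ids_to_tokens(r["token"])  # e.g. '▁molt'
    print(token, round(r["score"], 4))
```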

#### Extract contextual embedding features from Calbert output

```python
import torch

# Tokenize into sub-words with SentencePiece
tokenized_sentence = tokenizer.tokenize("M'és una mica igual")
# ['▁m', "'", 'es', '▁una', '▁mica', '▁igual']

# Encode to vocabulary ids and add the special start and end tokens
encoded_sentence = tokenizer.encode(tokenized_sentence)
# [2, 109, 7, 71, 36, 371, 1103, 3]
# NB: can be done in one step: tokenizer.encode("M'és una mica igual")

# Feed tokens to Calbert as a torch tensor (batch dim 1)
encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0)
embeddings, _ = model(encoded_sentence)
# NB: on transformers v4+, use `embeddings = model(encoded_sentence).last_hidden_state`
embeddings.size()
# torch.Size([1, 8, 768])
embeddings.detach()
# tensor([[[-0.0261,  0.1166, -0.1075,  ..., -0.0368,  0.0193,  0.0017],
#          [ 0.1289, -0.2252,  0.9881,  ..., -0.1353,  0.3534,  0.0734],
#          [-0.0328, -1.2364,  0.9466,  ...,  0.3455,  0.7010, -0.2085],
#          ...,
#          [ 0.0397, -1.0228, -0.2239,  ...,  0.2932,  0.1248,  0.0813],
#          [-0.0261,  0.1165, -0.1074,  ..., -0.0368,  0.0193,  0.0017],
#          [-0.1934, -0.2357, -0.2554,  ...,  0.1831,  0.6085,  0.1421]]])
```
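
If you need a single fixed-size vector per sentence, a common heuristic (not something this model card prescribes) is to mean-pool the token embeddings:

```python
# Average over the token dimension to get one sentence vector
sentence_embedding = embeddings.mean(dim=1)
sentence_embedding.size()
# torch.Size([1, 768])
```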

## Authors

CALBERT was trained and evaluated by [Txus Bach](https://twitter.com/txustice), as part of [Codegram](https://www.codegram.com)'s applied research.

<a href="https://huggingface.co/exbert/?model=codegram/calbert-base-uncased&modelKind=bidirectional&sentence=M%27agradaria%20força%20saber-ne%20més">
	<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>