julien-c HF staff committed on
Commit
5988aaf
1 Parent(s): 7ba9a95

Migrate model card from transformers-repo


Read announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/neuralmind/bert-large-portuguese-cased/README.md

Files changed (1):
  1. README.md (+106 -0)

README.md (new file):
---
language: pt
license: mit
tags:
- bert
- pytorch
datasets:
- brWaC
---

# BERTimbau Large (aka "bert-large-portuguese-cased")

![Bert holding a berimbau](https://imgur.com/JZ7Hynh.jpg)

## Introduction

BERTimbau Large is a pretrained BERT model for Brazilian Portuguese that achieves state-of-the-art performance on three downstream NLP tasks: Named Entity Recognition, Sentence Textual Similarity and Recognizing Textual Entailment. It is available in two sizes: Base and Large.

For further information or requests, please visit the [BERTimbau repository](https://github.com/neuralmind-ai/portuguese-bert/).

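The checkpoint can also serve as a starting point for fine-tuning on tasks such as those above. The snippet below is only an illustrative sketch, not the fine-tuning setup from the paper: it attaches a randomly initialized token-classification head, as one would before training an NER model, and the label set shown is hypothetical.

```python
# Illustrative sketch only: load BERTimbau Large with a fresh (untrained)
# token-classification head, e.g. as a starting point for NER fine-tuning.
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]  # hypothetical label set
tokenizer = AutoTokenizer.from_pretrained('neuralmind/bert-large-portuguese-cased', do_lower_case=False)
ner_model = AutoModelForTokenClassification.from_pretrained(
    'neuralmind/bert-large-portuguese-cased',
    num_labels=len(labels),
)
# ner_model still needs to be fine-tuned on labeled NER data before use.
```
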
## Available models

| Model                                     | Arch.      | #Layers | #Params |
| ----------------------------------------- | ---------- | ------- | ------- |
| `neuralmind/bert-base-portuguese-cased`   | BERT-Base  | 12      | 110M    |
| `neuralmind/bert-large-portuguese-cased`  | BERT-Large | 24      | 335M    |

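The parameter counts in the table can be verified directly from the loaded checkpoints; a small sketch (not part of the original card, and it downloads both models):

```python
# Sanity-check the #Params column by counting parameters of each checkpoint.
from transformers import AutoModel

for name in ("neuralmind/bert-base-portuguese-cased", "neuralmind/bert-large-portuguese-cased"):
    model = AutoModel.from_pretrained(name)
    print(f"{name}: {model.num_parameters() / 1e6:.0f}M parameters")
```
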
## Usage

```python
from transformers import AutoTokenizer  # Or BertTokenizer
from transformers import AutoModelForPreTraining  # Or BertForPreTraining for loading pretraining heads
from transformers import AutoModel  # Or BertModel, for BERT without pretraining heads

model = AutoModelForPreTraining.from_pretrained('neuralmind/bert-large-portuguese-cased')
tokenizer = AutoTokenizer.from_pretrained('neuralmind/bert-large-portuguese-cased', do_lower_case=False)
```

### Masked language modeling prediction example

```python
from transformers import pipeline

pipe = pipeline('fill-mask', model=model, tokenizer=tokenizer)

pipe('Tinha uma [MASK] no meio do caminho.')
# [{'score': 0.5054386258125305,
#   'sequence': '[CLS] Tinha uma pedra no meio do caminho. [SEP]',
#   'token': 5028,
#   'token_str': 'pedra'},
#  {'score': 0.05616172030568123,
#   'sequence': '[CLS] Tinha uma curva no meio do caminho. [SEP]',
#   'token': 9562,
#   'token_str': 'curva'},
#  {'score': 0.02348282001912594,
#   'sequence': '[CLS] Tinha uma parada no meio do caminho. [SEP]',
#   'token': 6655,
#   'token_str': 'parada'},
#  {'score': 0.01795753836631775,
#   'sequence': '[CLS] Tinha uma mulher no meio do caminho. [SEP]',
#   'token': 2606,
#   'token_str': 'mulher'},
#  {'score': 0.015246033668518066,
#   'sequence': '[CLS] Tinha uma luz no meio do caminho. [SEP]',
#   'token': 3377,
#   'token_str': 'luz'}]
```
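
If you prefer to skip the pipeline helper, the same kind of top-5 prediction can be reproduced directly from the masked-language-modeling head. This sketch is not part of the original card and assumes a recent `transformers` version with `AutoModelForMaskedLM`:

```python
# Illustrative sketch: score [MASK] candidates directly with the MLM head.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained('neuralmind/bert-large-portuguese-cased', do_lower_case=False)
mlm_model = AutoModelForMaskedLM.from_pretrained('neuralmind/bert-large-portuguese-cased')

inputs = tokenizer('Tinha uma [MASK] no meio do caminho.', return_tensors='pt')
mask_pos = (inputs['input_ids'][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]

with torch.no_grad():
    logits = mlm_model(**inputs).logits           # (1, seq_len, vocab_size)

probs = logits[0, mask_pos].softmax(dim=-1)       # probabilities for the masked position
top = probs.topk(5)
for score, token_id in zip(top.values[0], top.indices[0]):
    print(f"{tokenizer.convert_ids_to_tokens(int(token_id))}\t{float(score):.4f}")
```
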

### For BERT embeddings

```python
import torch

model = AutoModel.from_pretrained('neuralmind/bert-large-portuguese-cased')
input_ids = tokenizer.encode('Tinha uma pedra no meio do caminho.', return_tensors='pt')

with torch.no_grad():
    outs = model(input_ids)
    encoded = outs[0][0, 1:-1]  # Ignore [CLS] and [SEP] special tokens

# encoded.shape: (8, 1024)
# tensor([[ 1.1872,  0.5606, -0.2264,  ...,  0.0117, -0.1618, -0.2286],
#         [ 1.3562,  0.1026,  0.1732,  ..., -0.3855, -0.0832, -0.1052],
#         [ 0.2988,  0.2528,  0.4431,  ...,  0.2684, -0.5584,  0.6524],
#         ...,
#         [ 0.3405, -0.0140, -0.0748,  ...,  0.6649, -0.8983,  0.5802],
#         [ 0.1011,  0.8782,  0.1545,  ..., -0.1768, -0.8880, -0.1095],
#         [ 0.7912,  0.9637, -0.3859,  ...,  0.2050, -0.1350,  0.0432]])
```
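
A common follow-up, not covered in the original card, is to pool the token vectors into a single fixed-size sentence embedding. A minimal mean-pooling sketch, reusing `model` and `tokenizer` from above:

```python
# Illustrative sketch: mean-pool token embeddings (excluding padding) into one
# 1024-dimensional vector per sentence.
import torch

sentences = ['Tinha uma pedra no meio do caminho.', 'No meio do caminho tinha uma pedra.']
batch = tokenizer(sentences, padding=True, return_tensors='pt')

with torch.no_grad():
    token_embeddings = model(**batch)[0]              # (batch, seq_len, 1024)

mask = batch['attention_mask'].unsqueeze(-1).float()  # (batch, seq_len, 1)
sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embeddings.shape)                      # torch.Size([2, 1024])
```
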

## Citation

If you use our work, please cite:

```bibtex
@inproceedings{souza2020bertimbau,
  author    = {F{\'a}bio Souza and
               Rodrigo Nogueira and
               Roberto Lotufo},
  title     = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese},
  booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)},
  year      = {2020}
}
```