rufimelo committed
Commit b7cf306
1 Parent(s): 1065101

Update README.md

Files changed (1): README.md +81 -63

README.md CHANGED
@@ -1,42 +1,65 @@
 
  ---
  pipeline_tag: sentence-similarity
  tags:
  - sentence-transformers
- - feature-extraction
  - sentence-similarity
  - transformers
  ---

- # {MODEL_NAME}

- This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.

- <!--- Describe your model here -->

  ## Usage (Sentence-Transformers)
-
  Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
-
  ```
  pip install -U sentence-transformers
  ```
-
  Then you can use the model like this:
-
  ```python
  from sentence_transformers import SentenceTransformer
- sentences = ["This is an example sentence", "Each sentence is converted"]

- model = SentenceTransformer('{MODEL_NAME}')
  embeddings = model.encode(sentences)
  print(embeddings)
  ```
-
-
-
  ## Usage (HuggingFace Transformers)
- Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
-
  ```python
  from transformers import AutoTokenizer, AutoModel
  import torch
@@ -48,13 +71,12 @@ def mean_pooling(model_output, attention_mask):
      input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
      return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

-
  # Sentences we want sentence embeddings for
  sentences = ['This is an example sentence', 'Each sentence is converted']

  # Load model from HuggingFace Hub
- tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
- model = AutoModel.from_pretrained('{MODEL_NAME}')

  # Tokenize sentences
  encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
@@ -62,64 +84,60 @@ encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tenso
  # Compute token embeddings
  with torch.no_grad():
      model_output = model(**encoded_input)
-
  # Perform pooling. In this case, mean pooling.
  sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
-
  print("Sentence embeddings:")
  print(sentence_embeddings)
  ```


-
- ## Evaluation Results
-
- <!--- Describe how your model was evaluated -->
-
- For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
-
-
- ## Training
- The model was trained with the parameters:
-
- **DataLoader**:
-
- `torch.utils.data.dataloader.DataLoader` of length 2157 with parameters:
  ```
- {'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
  ```

- **Loss**:
-
- `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`

- Parameters of the fit()-Method:
- ```
- {
-     "epochs": 5,
-     "evaluation_steps": 0,
-     "evaluator": "NoneType",
-     "max_grad_norm": 1,
-     "optimizer_class": "<class 'transformers.optimization.AdamW'>",
-     "optimizer_params": {
-         "lr": 1e-05
-     },
-     "scheduler": "WarmupLinear",
-     "steps_per_epoch": null,
-     "warmup_steps": 1079,
-     "weight_decay": 0.01
  }
- ```

- ## Full Model Architecture
- ```
- SentenceTransformer(
-   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
-   (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
- )
- ```

- ## Citing & Authors

- <!--- Describe where people can find more information -->
 
+
  ---
+ language:
+ - pt
+ thumbnail: "Portuguese BERT for the Legal Domain"
  pipeline_tag: sentence-similarity
  tags:
  - sentence-transformers
  - sentence-similarity
  - transformers
+ datasets:
+ - assin
+ - assin2
+ - stjiris/portuguese-legal-sentences-v0
+ widget:
+ - source_sentence: "O advogado apresentou as provas ao juiz."
+   sentences:
+   - "O juiz leu as provas."
+   - "O juiz leu o recurso."
+   - "O juiz atirou uma pedra."
+   example_title: "Example 1"
+ model-index:
+ - name: BERTimbau
+   results:
+   - task:
+       name: STS
+       type: STS
+     metrics:
+     - name: Pearson Correlation - assin Dataset
+       type: Pearson Correlation
+       value: 0.7716333759993093
+     - name: Pearson Correlation - assin2 Dataset
+       type: Pearson Correlation
+       value: 0.8403302138785704
+     - name: Pearson Correlation - stsb_multi_mt pt Dataset
+       type: Pearson Correlation
+       value: 0.8249826985133595
  ---
+ # stjiris/bert-large-portuguese-cased-legal-mlm-sts-v0
+ This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
+ stjiris/bert-large-portuguese-cased-legal-mlm-sts-v0 derives from [BERTimbau](https://huggingface.co/neuralmind/bert-large-portuguese-cased) large.

+ It was trained with the MLM technique, using a learning rate of 3e-5, on [legal sentences from approximately 30,000 documents](https://huggingface.co/datasets/stjiris/portuguese-legal-sentences-v0).
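The MLM step can be reproduced along these lines with Hugging Face Transformers (a minimal sketch, not the authors' training script: only the 3e-5 learning rate comes from this card, while the corpus file name, batch size, and epoch count are illustrative assumptions):

```python
# Sketch of the domain-adaptive MLM step. Only the 3e-5 learning rate comes
# from this card; the corpus file name, batch size, and epoch count below
# are assumptions.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("neuralmind/bert-large-portuguese-cased")
model = AutoModelForMaskedLM.from_pretrained("neuralmind/bert-large-portuguese-cased")

# Hypothetical file with one legal sentence per line.
corpus = load_dataset("text", data_files={"train": "legal_sentences.txt"})["train"]
corpus = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

# Mask 15% of tokens at random; the model is trained to reconstruct them.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(output_dir="bert-legal-mlm", learning_rate=3e-5,
                         per_device_train_batch_size=8, num_train_epochs=1)
Trainer(model=model, args=args, train_dataset=corpus,
        data_collator=collator).train()
```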
+ It is adapted to the Portuguese legal domain and was fine-tuned for STS on the Portuguese datasets [assin](https://huggingface.co/datasets/assin), [assin2](https://huggingface.co/datasets/assin2), and the Portuguese subset of [stsb_multi_mt](https://huggingface.co/datasets/stsb_multi_mt).

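The STS stage can be sketched with the classic sentence-transformers fit() API. The CosineSimilarityLoss, learning rate 1e-5, batch size 8, 5 epochs, and 1079 warmup steps all appear in the Training section removed above; the sentence pairs and labels here are placeholders standing in for the actual dataset examples:

```python
# Sketch of the STS fine-tuning stage. Loss and hyperparameters are taken from
# the removed Training section above; the two pairs and their labels are
# placeholders for the assin/assin2/stsb_multi_mt examples.
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

# In practice this would be the MLM-adapted checkpoint.
model = SentenceTransformer("neuralmind/bert-large-portuguese-cased")

# Each training pair carries a gold similarity score scaled to [0, 1].
train_examples = [
    InputExample(texts=["O juiz leu as provas.",
                        "O advogado apresentou as provas ao juiz."], label=0.8),
    InputExample(texts=["O juiz leu as provas.",
                        "O juiz atirou uma pedra."], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=8)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)],
          epochs=5, warmup_steps=1079,
          optimizer_params={"lr": 1e-5})
```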
  ## Usage (Sentence-Transformers)
  Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
  ```
  pip install -U sentence-transformers
  ```
  Then you can use the model like this:
  ```python
  from sentence_transformers import SentenceTransformer
+ sentences = ["Isto é um exemplo", "Isto é um outro exemplo"]

+ model = SentenceTransformer('stjiris/bert-large-portuguese-cased-legal-mlm-sts-v0')
  embeddings = model.encode(sentences)
  print(embeddings)
  ```
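Since the model targets semantic similarity, a natural follow-up is to score sentence pairs with cosine similarity; `util.cos_sim` from sentence-transformers does this directly:

```python
# Score the two example sentences from the snippet above.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('stjiris/bert-large-portuguese-cased-legal-mlm-sts-v0')
embeddings = model.encode(["Isto é um exemplo", "Isto é um outro exemplo"],
                          convert_to_tensor=True)
# cos_sim returns a 1x1 tensor here; values closer to 1 mean more similar.
print(util.cos_sim(embeddings[0], embeddings[1]))
```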
 
 
 
  ## Usage (HuggingFace Transformers)
  ```python
  from transformers import AutoTokenizer, AutoModel
  import torch

  # Mean pooling: average the token embeddings, weighted by the attention mask
  def mean_pooling(model_output, attention_mask):
      token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
      input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
      return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

  # Sentences we want sentence embeddings for
  sentences = ['This is an example sentence', 'Each sentence is converted']

  # Load model from HuggingFace Hub
+ tokenizer = AutoTokenizer.from_pretrained('stjiris/bert-large-portuguese-cased-legal-mlm-sts-v0')
+ model = AutoModel.from_pretrained('stjiris/bert-large-portuguese-cased-legal-mlm-sts-v0')

  # Tokenize sentences
  encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

  # Compute token embeddings
  with torch.no_grad():
      model_output = model(**encoded_input)

  # Perform pooling. In this case, mean pooling.
  sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

  print("Sentence embeddings:")
  print(sentence_embeddings)
  ```
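Without sentence-transformers, the same pair scoring can be done on the pooled embeddings from the snippet above, for example by normalizing them and taking a dot product (a small continuation sketch):

```python
# Continuation of the snippet above: compare the two pooled embeddings.
import torch.nn.functional as F

normalized = F.normalize(sentence_embeddings, p=2, dim=1)
# Dot product of unit-length vectors equals cosine similarity.
print(normalized[0] @ normalized[1])
```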


+ ## Full Model Architecture
  ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
+   (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
+ )
  ```
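This block is simply the model's printed repr; with sentence-transformers installed it can be reproduced with:

```python
from sentence_transformers import SentenceTransformer

# Printing a SentenceTransformer lists its module pipeline (Transformer + Pooling).
print(SentenceTransformer('stjiris/bert-large-portuguese-cased-legal-mlm-sts-v0'))
```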
+ ## Citing & Authors

+ If you use this work, please cite:

+ ```bibtex
+ @inproceedings{MeloSemantic,
+     author = {Melo, Rui and Santos, Professor Pedro Alexandre and Dias, Professor Jo\~{a}o},
+     title = {A {Semantic} {Search} {System} for {Supremo} {Tribunal} de {Justi\c{c}a}},
  }

+ @inproceedings{souza2020bertimbau,
+     author = {F{\'a}bio Souza and Rodrigo Nogueira and Roberto Lotufo},
+     title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese},
+     booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)},
+     year = {2020}
+ }

+ @inproceedings{fonseca2016assin,
+     title = {{ASSIN}: Avalia\c{c}\~{a}o de similaridade sem\^{a}ntica e infer\^{e}ncia textual},
+     author = {Fonseca, E. and Santos, L. and Criscuolo, Marcelo and Alu\'{i}sio, S.},
+     booktitle = {Computational Processing of the Portuguese Language-12th International Conference, Tomar, Portugal},
+     pages = {13--15},
+     year = {2016}
+ }

+ @inproceedings{real2020assin,
+     title = {The {ASSIN} 2 shared task: a quick overview},
+     author = {Real, Livy and Fonseca, Erick and Oliveira, Hugo Gon\c{c}alo},
+     booktitle = {International Conference on Computational Processing of the Portuguese Language},
+     pages = {406--412},
+     year = {2020},
+     organization = {Springer}
+ }

+ @inproceedings{huggingface:dataset:stsb_multi_mt,
+     title = {Machine translated multilingual {STS} benchmark dataset},
+     author = {Philip May},
+     year = {2021},
+     url = {https://github.com/PhilipMay/stsb-multi-mt}
+ }

+ ```