rufimelo committed
Commit dbb77b9
1 Parent(s): dda1e5e

Update README.md

Files changed (1)
  1. README.md +24 -41
README.md CHANGED
@@ -1,13 +1,18 @@
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- - feature-extraction
- sentence-similarity
- transformers
---

- # {MODEL_NAME}

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
@@ -27,7 +32,7 @@ Then you can use the model like this:
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

- model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
@@ -53,8 +58,8 @@ def mean_pooling(model_output, attention_mask):
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
- tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
- model = AutoModel.from_pretrained('{MODEL_NAME}')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
@@ -71,52 +76,30 @@ print(sentence_embeddings)
```

- ## Evaluation Results
-
- <!--- Describe how your model was evaluated -->
-
- For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})

## Training
- The model was trained with the parameters:
-
- **DataLoader**:
-
- `torch.utils.data.dataloader.DataLoader` of length 204 with parameters:
- ```
- {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
- ```
-
- **Loss**:
-
- `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
-
- Parameters of the fit()-Method:
- ```
- {
-     "epochs": 1,
-     "evaluation_steps": 0,
-     "evaluator": "NoneType",
-     "max_grad_norm": 1,
-     "optimizer_class": "<class 'transformers.optimization.AdamW'>",
-     "optimizer_params": {
-         "lr": 2e-05
-     },
-     "scheduler": "WarmupLinear",
-     "steps_per_epoch": null,
-     "warmup_steps": 20.400000000000002,
-     "weight_decay": 0.01
- }
- ```

## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
- (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
 
---
+ language:
+ - pt
+ thumbnail: "Portuguese SBERT for the Legal Domain"
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- transformers
+ datasets:
+ - assin
+ - assin2
---

+ # rufimelo/Legal-SBERTimbau-large

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

+ model = SentenceTransformer('rufimelo/Legal-SBERTimbau-large')
embeddings = model.encode(sentences)
print(embeddings)
```
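Since the card highlights semantic search as a use case, here is a minimal sketch of ranking a small corpus by cosine similarity with this model (the Portuguese example sentences are illustrative, not from the card):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('rufimelo/Legal-SBERTimbau-large')

# Toy corpus and query; in practice these would be Portuguese legal texts.
corpus = ["O contrato foi rescindido por justa causa.",
          "A sentença foi proferida pelo tribunal de primeira instância."]
query = "rescisão de contrato"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank corpus entries by cosine similarity to the query.
scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
best = int(scores.argmax())
print(corpus[best], float(scores[best]))
```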
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
+ tokenizer = AutoTokenizer.from_pretrained('rufimelo/Legal-SBERTimbau-large')
+ model = AutoModel.from_pretrained('rufimelo/Legal-SBERTimbau-large')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
```
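The `mean_pooling` helper named in the hunk header (`@@ -53,8 +58,8 @@ def mean_pooling(...)`) is unchanged context that the diff does not show; in sentence-transformers model cards it is conventionally defined as:

```python
import torch

# Mean pooling: average the token embeddings, using the attention mask so
# that padding tokens do not contribute to the sentence embedding.
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element holds all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
```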

+ ## Evaluation Results (STS)

+ | Model | Dataset | Pearson Correlation |
+ | ---------------------------------------- | ------ | ------------------- |
+ | Legal-SBERTimbau-large | Assin | 0.766293861 |
+ | Legal-SBERTimbau-large | Assin2 | 0.823565322 |
+ | paraphrase-multilingual-mpnet-base-v2 | Assin | 0.743740222 |
+ | paraphrase-multilingual-mpnet-base-v2 | Assin2 | 0.823565322 |
+ | paraphrase-multilingual-mpnet-base-v2 fine-tuned with assin(s) | Assin | 0.77641 |
+ | paraphrase-multilingual-mpnet-base-v2 fine-tuned with assin(s) | Assin2 | 0.79831 |
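Pearson scores like these are typically obtained by correlating pair cosine similarities with the gold similarity scores. A sketch, assuming the assin2 test split exposes `premise`, `hypothesis`, and `relatedness_score` fields (field names taken from the dataset card, not from this README):

```python
from datasets import load_dataset
from scipy.stats import pearsonr
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('rufimelo/Legal-SBERTimbau-large')

test = load_dataset('assin2', split='test')
emb1 = model.encode(test['premise'], convert_to_tensor=True)
emb2 = model.encode(test['hypothesis'], convert_to_tensor=True)

# Cosine similarity of each aligned pair, correlated with the gold scores.
cosine_scores = util.cos_sim(emb1, emb2).diagonal().cpu().numpy()
print(pearsonr(cosine_scores, test['relatedness_score'])[0])
```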

## Training

+ Legal-SBERTimbau-large is based on Legal-BERTimbau-large, which in turn derives from [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) Large.
+ It was trained with the multilingual knowledge distillation process, i.e. as a multilingual model, a choice driven by the scarcity of available Portuguese data.
+ It was then fine-tuned on the [assin](https://huggingface.co/datasets/assin) and [assin2](https://huggingface.co/datasets/assin2) datasets.
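The training log removed by this commit records `CosineSimilarityLoss` with batch size 32, lr 2e-05, one epoch, and ~20 warmup steps. A minimal sketch of that fine-tuning stage, assuming `rufimelo/Legal-BERTimbau-large` as the base checkpoint and the assin `premise`/`hypothesis`/`relatedness_score` fields (1-5 scale) from the dataset card:

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses, models

# Assemble base encoder + mean pooling, matching the architecture shown below.
word = models.Transformer('rufimelo/Legal-BERTimbau-large', max_seq_length=75)
pool = models.Pooling(word.get_word_embedding_dimension(), pooling_mode_mean_tokens=True)
model = SentenceTransformer(modules=[word, pool])

# CosineSimilarityLoss expects labels in [0, 1]; assin scores run 1-5.
train = load_dataset('assin', split='train')
examples = [InputExample(texts=[r['premise'], r['hypothesis']],
                         label=(r['relatedness_score'] - 1.0) / 4.0)
            for r in train]

# Hyperparameters from the removed log (warmup_steps was logged as 20.4).
loader = DataLoader(examples, batch_size=32)
loss = losses.CosineSimilarityLoss(model)
model.fit(train_objectives=[(loader, loss)],
          epochs=1,
          warmup_steps=20,
          optimizer_params={'lr': 2e-05},
          weight_decay=0.01)
```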

## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
+ (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
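A quick runtime check that a loaded model matches this configuration (a convenience snippet, not part of the original card):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('rufimelo/Legal-SBERTimbau-large')
print(model.max_seq_length)                      # 75, per the architecture dump
print(model.get_sentence_embedding_dimension())  # 1024-dimensional embeddings
```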