Update README.md
README.md CHANGED
@@ -15,8 +15,8 @@ datasets:
 # rufimelo/Legal-SBERTimbau-large
 
 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
-
-
+Legal-SBERTimbau-large is based on Legal-BERTimbau-large, which derives from [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) Large.
+It is adapted to the Portuguese legal domain.
 
 ## Usage (Sentence-Transformers)
 
@@ -30,7 +30,7 @@ Then you can use the model like this:
 
 ```python
 from sentence_transformers import SentenceTransformer
-sentences = ["
+sentences = ["Isto é um exemplo", "Isto é um outro exemplo"]
 
 model = SentenceTransformer('rufimelo/Legal-SBERTimbau-large')
 embeddings = model.encode(sentences)
@@ -40,7 +40,7 @@ print(embeddings)
 
 
 ## Usage (HuggingFace Transformers)
-
+
 
 ```python
 from transformers import AutoTokenizer, AutoModel
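
The last hunk stops at the `AutoTokenizer`/`AutoModel` import, so the raw-Transformers usage is not visible in this diff. As a rough illustration only (not the snippet from this README), sentence embeddings are typically obtained from a sentence-transformers checkpoint by mean-pooling the token embeddings with the attention mask; only the model name and example sentences are taken from the diff above, everything else is a generic sketch:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Generic sketch: load the checkpoint named in the diff above.
tokenizer = AutoTokenizer.from_pretrained('rufimelo/Legal-SBERTimbau-large')
model = AutoModel.from_pretrained('rufimelo/Legal-SBERTimbau-large')

# Portuguese example sentences reused from the Sentence-Transformers snippet.
sentences = ["Isto é um exemplo", "Isto é um outro exemplo"]

# Tokenize and run the encoder without gradient tracking.
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    output = model(**encoded)

# Mean pooling: average the token embeddings, ignoring padded positions.
token_embeddings = output.last_hidden_state            # (batch, seq_len, 1024)
mask = encoded['attention_mask'].unsqueeze(-1).float() # (batch, seq_len, 1)
embeddings = (token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9)

print(embeddings.shape)  # expected: torch.Size([2, 1024])
```

This mirrors what `SentenceTransformer.encode` does internally for a mean-pooling model; if the checkpoint's pooling configuration differs, the vectors will not match the Sentence-Transformers output exactly.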