nickprock committed
Commit 7770650
1 Parent(s): 94e04b0

Update README.md

Files changed (1): README.md (+6, -10)
README.md CHANGED
@@ -8,7 +8,7 @@ tags:
 
 ---
 
-# {MODEL_NAME}
+# sentence-bert-base-italian-xxl-cased
 
 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
 
@@ -28,7 +28,7 @@ Then you can use the model like this:
 from sentence_transformers import SentenceTransformer
 sentences = ["This is an example sentence", "Each sentence is converted"]
 
-model = SentenceTransformer('{MODEL_NAME}')
+model = SentenceTransformer('nickprock/sentence-bert-base-italian-xxl-cased')
 embeddings = model.encode(sentences)
 print(embeddings)
 ```
@@ -54,8 +54,8 @@ def mean_pooling(model_output, attention_mask):
 sentences = ['This is an example sentence', 'Each sentence is converted']
 
 # Load model from HuggingFace Hub
-tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
-model = AutoModel.from_pretrained('{MODEL_NAME}')
+tokenizer = AutoTokenizer.from_pretrained('nickprock/sentence-bert-base-italian-xxl-cased')
+model = AutoModel.from_pretrained('nickprock/sentence-bert-base-italian-xxl-cased')
 
 # Tokenize sentences
 encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
@@ -77,7 +77,7 @@ print(sentence_embeddings)
 
 <!--- Describe how your model was evaluated -->
 
-For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
+For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=nickprock/sentence-bert-base-italian-xxl-cased)
 
 
 ## Training
@@ -119,8 +119,4 @@ SentenceTransformer(
   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
 )
-```
-
-## Citing & Authors
-
-<!--- Describe where people can find more information -->
+```
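The third hunk's header references a `mean_pooling` helper whose body lies outside the diff. For reference, a minimal sketch of that helper as it appears in standard sentence-transformers model cards; this is illustrative context, not part of the commit:

```python
import torch

# Mean pooling: average the token embeddings, using the attention mask so that
# padding tokens do not contribute. Standard helper referenced by the hunk
# header above; the diff itself only shows its signature.
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element holds per-token embeddings
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    summed = torch.sum(token_embeddings * mask, dim=1)
    counts = torch.clamp(mask.sum(dim=1), min=1e-9)  # guard against division by zero
    return summed / counts
```

Applied to the model output and `encoded_input` shown in the diff, `mean_pooling(model_output, encoded_input['attention_mask'])` yields one 768-dimensional embedding per input sentence.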
 
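With the placeholder replaced by the published model id, the README's quick-start can be run end to end. A minimal sketch of semantic similarity with the renamed model, assuming `sentence-transformers` is installed; the Italian example sentences are hypothetical:

```python
from sentence_transformers import SentenceTransformer, util

# Load the model under the id introduced by this commit.
model = SentenceTransformer('nickprock/sentence-bert-base-italian-xxl-cased')

# Hypothetical Italian sentences, chosen only to illustrate the comparison.
sentences = [
    "Una ragazza si acconcia i capelli.",
    "Una ragazza si sta spazzolando i capelli.",
]

# Encode to 768-dimensional vectors and compare them with cosine similarity.
embeddings = model.encode(sentences, convert_to_tensor=True)
score = util.cos_sim(embeddings[0], embeddings[1])
print(f"cosine similarity: {score.item():.3f}")
```

Since the card's pooling configuration is mean pooling (`pooling_mode_mean_tokens: True`), this path produces the same embeddings as the transformers-based snippet in the diff.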