pere committed on
Commit efc084c
1 Parent(s): 49265e4

Update README.md

Files changed (1): README.md (+12, -0)
README.md CHANGED
@@ -58,6 +58,12 @@ kw_model.extract_keywords(doc, stop_words=None)
 
 The [KeyBERT homepage](https://github.com/MaartenGr/KeyBERT) gives several other examples of how this can be used, for instance how it can be combined with stop words, how longer phrases can be extracted, and how it can directly output the highlighted text.
 
+## Keyword Extraction
+https://github.com/MaartenGr/BERTopic
+
+## Similarity Search
+[Javier]??
+
 
 ## Embeddings and Sentence Similarity (Sentence-Transformers)
 
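The sentence in the hunk above names several variations without showing them. A minimal sketch combining them, using KeyBERT's documented `keyphrase_ngram_range`, `stop_words`, and `highlight` options; the sample text is invented, and `KeyBERT()` here loads its default embedding model rather than the one this README's `kw_model` is built with:

```python
from keybert import KeyBERT

# Invented sample text; the README's own `doc` would be used in practice.
doc = (
    "Supervised learning is the machine learning task of learning a function "
    "that maps an input to an output based on example input-output pairs."
)

# KeyBERT() falls back to its default embedding model here; the README
# constructs `kw_model` with this repository's model instead.
kw_model = KeyBERT()

# The variations the homepage mentions, combined: English stop-word
# filtering, keyphrases of up to three words, and highlighted output.
keywords = kw_model.extract_keywords(
    doc,
    keyphrase_ngram_range=(1, 3),
    stop_words="english",
    highlight=True,
)
print(keywords)
```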
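The new Keyword Extraction section so far holds only a link. As a hedged sketch of what it might grow into, BERTopic's documented quickstart pattern can be pointed at a sentence-transformers embedding model; the model id below is a placeholder, not this repository's:

```python
from bertopic import BERTopic
from sklearn.datasets import fetch_20newsgroups

# Demo corpus from BERTopic's quickstart; any list of strings works.
docs = fetch_20newsgroups(subset="all", remove=("headers", "footers", "quotes"))["data"]

# Placeholder embedding model id; this repository's model could be passed
# instead, either by name or as a loaded SentenceTransformer instance.
topic_model = BERTopic(embedding_model="paraphrase-multilingual-MiniLM-L12-v2")
topics, probs = topic_model.fit_transform(docs)

# Inspect the discovered topics and their top keywords per topic.
print(topic_model.get_topic_info())
```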
 
@@ -92,6 +98,7 @@ print(scipy_cosine_scores)
 
 
 
+
 ## Embeddings and Sentence Similarity (HuggingFace Transformers)
 Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
 
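The code that paragraph introduces falls outside this hunk's context lines. A typical version, following the standard sentence-transformers README pattern of mean pooling weighted by the attention mask, and using a placeholder model id rather than this repository's, looks roughly like this:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Placeholder id: substitute the model this README actually describes.
MODEL_ID = "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"

def mean_pooling(model_output, attention_mask):
    # Average token embeddings, counting only real (non-padding) tokens.
    token_embeddings = model_output[0]  # first element: all token embeddings
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * mask, 1) / torch.clamp(mask.sum(1), min=1e-9)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

sentences = ["This is an example sentence", "Each sentence is converted"]
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    model_output = model(**encoded_input)

sentence_embeddings = mean_pooling(model_output, encoded_input["attention_mask"])
print(sentence_embeddings.shape)  # (2, hidden_size)
```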
 
@@ -135,6 +142,11 @@ print(scipy_cosine_scores)
 
 ```
 
+# Evaluation and Parameters
+
+## Evaluation
+Rov-Arild?
+
 
 ## Training
 The model was trained with the parameters:
 