Lauler committed
Commit 97d49ab
1 parent: a42c18b

evaluation script links

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
````diff
@@ -121,9 +121,9 @@ sentence_pair_scores = cosine_scores.diag()
 df["model_score"] = sentence_pair_scores.cpu().tolist()
 print(df[["score", "model_score"]].corr(method="spearman"))
 print(df[["score", "model_score"]].corr(method="pearson"))
+```
 
 Examples how to evaluate the model on other test sets of the SuperLim suites can be found on the following links: [evaluate_faq.py](https://github.com/kb-labb/swedish-sbert/blob/main/evaluate_faq.py) (Swedish FAQ), [evaluate_swesat.py](https://github.com/kb-labb/swedish-sbert/blob/main/evaluate_swesat.py) (SweSAT synonyms), [evaluate_supersim.py](https://github.com/kb-labb/swedish-sbert/blob/main/evaluate_supersim.py) (SuperSim).
-```
 
 ## Training
 
````
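The hunk moves the README's closing code fence so that the evaluation snippet ends right after the correlation printout. For context, that correlation step can be sketched in isolation; this is a minimal stand-in with invented scores (only the `score`/`model_score` column names and the `corr` calls come from the README):

```python
import pandas as pd

# Hypothetical evaluation frame: "score" is the gold similarity label,
# "model_score" the model's cosine similarity for the same sentence pair.
# The values here are made up purely for illustration.
df = pd.DataFrame({
    "score": [0.1, 0.5, 0.9, 0.3],
    "model_score": [0.2, 0.4, 0.95, 0.25],
})

# Rank correlation (Spearman) and linear correlation (Pearson), as in the README.
spearman = df[["score", "model_score"]].corr(method="spearman")
pearson = df[["score", "model_score"]].corr(method="pearson")

print(spearman)
print(pearson)
```

Since the toy `model_score` column preserves the ranking of `score` exactly, the Spearman correlation is 1.0 here even though the Pearson correlation is slightly below it.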