simonhughes22 committed on
Commit 250806e
1 Parent(s): 040675b

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -28,7 +28,7 @@ This model is based on [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base)
  * [SummaC Benchmark](https://aclanthology.org/2022.tacl-1.10.pdf) (Test Split) - 0.764 Balanced Accuracy, 0.831 AUC Score
  * [AnyScale Ranking Test for Hallucinations](https://www.anyscale.com/blog/llama-2-is-about-as-factually-accurate-as-gpt-4-for-summaries-and-is-30x-cheaper) - 86.6 % Accuracy
 
- ## Usage
+ ## Usage with Sentence Transformers (Recommended)
 
  The model can be used like this:
 
@@ -53,7 +53,7 @@ array([0.61051559, 0.00047493709, 0.99639291, 0.00021221573, 0.99599433, 0.00141
  ```
 
  ## Usage with Transformers AutoModel
- You can use the model also directly with Transformers library (without SentenceTransformers library):
+ You can also use the model directly with the Transformers library (without the SentenceTransformers library):
 
  ```python
  from transformers import AutoTokenizer, AutoModelForSequenceClassification
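
For context on the first hunk: the renamed section "Usage with Sentence Transformers (Recommended)" refers to loading the model as a cross-encoder, but the diff does not include that code block. Below is a minimal sketch of what that path typically looks like; the repository ID is a placeholder and the pair-scoring semantics are assumptions, since neither appears in the hunk.

```python
# Minimal sketch of the Sentence Transformers (CrossEncoder) usage referenced in the diff.
# The repo ID below is a placeholder, not the actual model repository.
from sentence_transformers import CrossEncoder

model = CrossEncoder("org/hallucination-evaluation-model")  # placeholder repo ID

# Each pair is (source text, summary/claim); predict() returns one score per pair.
scores = model.predict([
    ["A man walks into a bar and buys a drink.", "A bloke swigs alcohol at a pub."],
    ["The capital of France is Paris.", "The capital of France is Berlin."],
])
print(scores)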
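```

For the second hunk, the context ends at the import line of the "Usage with Transformers AutoModel" example. A hedged sketch of how that path typically continues is below; the repository ID, the sentence-pair input format, and the sigmoid post-processing are all assumptions not shown in the diff, so the actual model card should be consulted for the real usage.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

repo_id = "org/hallucination-evaluation-model"  # placeholder; the diff does not show the repo ID
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

premises = ["A man walks into a bar and buys a drink."]
hypotheses = ["A bloke swigs alcohol at a pub."]

# Tokenize as sentence pairs, the usual cross-encoder input format (assumed here).
inputs = tokenizer(premises, hypotheses, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Assumes a single-logit head squashed to [0, 1]; verify against the model card's own post-processing.
scores = torch.sigmoid(logits.squeeze(-1))
print(scores)
```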