Add link to GitHub repository
README.md CHANGED
@@ -32,6 +32,9 @@ This model is based on [microsoft/deberta-v3-base](https://huggingface.co/micros
* [SummaC Benchmark](https://aclanthology.org/2022.tacl-1.10.pdf) (Test Split) - 0.764 Balanced Accuracy, 0.831 AUC Score
* [AnyScale Ranking Test for Hallucinations](https://www.anyscale.com/blog/llama-2-is-about-as-factually-accurate-as-gpt-4-for-summaries-and-is-30x-cheaper) - 86.6% Accuracy

+## Results (Leaderboard)
+To stay up to date with results from the latest tests of this model, see the public leaderboard that is maintained and periodically updated in the [vectara/hallucination-leaderboard](https://github.com/vectara/hallucination-leaderboard) GitHub repository.
+
## Note about using the Inference API Widget on the Right

To use the model with the widget, you need to pass both documents as a single string separated by [SEP]. For example:
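As a minimal sketch of how that single-string input can be assembled (the two example sentences below are illustrative placeholders, not taken from the model card):

```python
# Illustrative example texts -- replace with your own source document and summary.
premise = "A man walks into a bar and buys a drink."  # source document
hypothesis = "A bloke swigs alcohol at a pub."        # summary to check

# The widget expects both documents in one string, joined by " [SEP] ".
widget_input = f"{premise} [SEP] {hypothesis}"
print(widget_input)
# → A man walks into a bar and buys a drink. [SEP] A bloke swigs alcohol at a pub.
```

Pasting the resulting string into the widget scores the hypothesis against the premise.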