simonhughes22 committed on
Commit 6dbcd22
Parent(s): 739923d

Update README.md

Files changed (1):
  1. README.md +2 -2
README.md CHANGED
@@ -32,8 +32,8 @@ This model is based on [microsoft/deberta-v3-base](https://huggingface.co/micros
 * [SummaC Benchmark](https://aclanthology.org/2022.tacl-1.10.pdf) (Test Split) - 0.764 Balanced Accuracy, 0.831 AUC Score
 * [AnyScale Ranking Test for Hallucinations](https://www.anyscale.com/blog/llama-2-is-about-as-factually-accurate-as-gpt-4-for-summaries-and-is-30x-cheaper) - 86.6 % Accuracy
 
-## Results (Leaderboard)
-If you want to stay up to date with results of the latest tests using this model, a public leaderboard is maintained and periodically updated on the [vectara/hallucination-leaderboard](https://github.com/vectara/hallucination-leaderboard) GitHub repository.
+## LLM Hallucination Leaderboard
+If you want to stay up to date with results of the latest tests using this model to evaluate the top LLM models, a public leaderboard is maintained and periodically updated on the [vectara/hallucination-leaderboard](https://github.com/vectara/hallucination-leaderboard) GitHub repository.
 
 ## Note about using the Inference API Widget on the Right
 To use the model with the widget, you need to pass both documents as a single string separated with [SEP]. For example:
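The [SEP]-joined input format described in the README section above can be sketched as follows. This is a minimal illustration, not part of the commit: the model ID and the `pipeline` call are assumptions based on the standard Hugging Face `transformers` text-classification API, and the two example sentences are invented placeholders.

```python
# Sketch: building the single-string input the widget expects,
# with both documents separated by [SEP].
document = "A man walks into a bar and buys a drink."  # hypothetical source text
summary = "A bloke swigs alcohol at a pub."            # hypothetical summary to check

# The widget takes both texts as one string separated by [SEP].
widget_input = f"{document} [SEP] {summary}"
print(widget_input)

# To score locally (assumed usage; requires `transformers` and downloads the
# model, so it is commented out here):
# from transformers import pipeline
# classifier = pipeline(
#     "text-classification",
#     model="vectara/hallucination_evaluation_model",  # assumed model ID
# )
# print(classifier(widget_input))
```

The same joined string can be pasted directly into the inference widget on the model page.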