RebeccaQian1 committed on
Commit
d88ac0a
1 Parent(s): 60b07c2

Update README.md

Files changed (1): README.md +3 -3
README.md CHANGED
@@ -14,7 +14,7 @@ language:
 
 # Model Card for Model ID
 
-Lynx is an open-source hallucination evaluation model. Patronus-Lynx-8B-Instruct was trained on a mix of datasets such as CovidQA, PubmedQA, DROP, FinanceBench.
+Lynx is an open-source hallucination evaluation model. Patronus-Lynx-8B-Instruct was trained on a mix of datasets including CovidQA, PubmedQA, DROP, RAGTruth.
 The datasets contain a mix of hand-annotated and synthetic data. The maximum sequence length is 8000 tokens.
 
 
@@ -25,7 +25,7 @@ The datasets contain a mix of hand-annotated and synthetic data. The maximum seq
 - **Developed by:** Patronus AI
 - **License:** [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
 
-### Model Sources [optional]
+### Model Sources
 
 <!-- Provide the basic links for the model. -->
 
@@ -33,7 +33,7 @@ The datasets contain a mix of hand-annotated and synthetic data. The maximum seq
 
 
 ## How to Get Started with the Model
-The model is fine-tuned to be used to detect faithfulness in a RAG setting. Provided a document, question and answer, the model can evaluate whether the answer is faithful to the document.
+The model is fine-tuned to be used to detect hallucinations in a RAG setting. Provided a document, question and answer, the model can evaluate whether the answer is faithful to the document.
 
 To use the model, we recommend using the prompt we used for fine-tuning:
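
For context on the usage described in the changed README section, here is a minimal sketch of how a document/question/answer evaluation prompt might be assembled. The field names, instruction wording, and `build_faithfulness_prompt` helper are illustrative assumptions, not the exact fine-tuning prompt (which is given in the full README, not in this diff):

```python
# Illustrative sketch only: the exact fine-tuning prompt lives in the full
# README; the field names and wording below are assumptions for demonstration.

def build_faithfulness_prompt(document: str, question: str, answer: str) -> str:
    """Assemble a hallucination-evaluation prompt from the three RAG inputs."""
    return (
        "Given the following QUESTION, DOCUMENT and ANSWER, determine whether "
        "the ANSWER is faithful to the DOCUMENT.\n\n"
        f"QUESTION:\n{question}\n\n"
        f"DOCUMENT:\n{document}\n\n"
        f"ANSWER:\n{answer}\n"
    )

# Example RAG triple where the answer contradicts the document.
prompt = build_faithfulness_prompt(
    document="The Eiffel Tower is located in Paris, France.",
    question="Where is the Eiffel Tower?",
    answer="The Eiffel Tower is in Berlin.",
)
print(prompt)
```

The assembled string would then be sent to the model (e.g. via the `transformers` generation API), which evaluates whether the ANSWER is supported by the DOCUMENT.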