yizhongw commited on
Commit
72601ef
1 Parent(s): 6b4c173

Update README.md

Browse files
Files changed (1) hide show
  1. README.md +4 -4
README.md CHANGED
@@ -8,9 +8,9 @@ metrics:
8
  - accuracy
9
  ---
10
 
11
- This model is built based on LLaMa2 7B in replacement of the truthfulness/informativeness judge models that was originally introduced in the TruthfulQA paper.
12
  That model is based on OpenAI's Curie engine using their finetuning API.
13
- But as of Feb 08, 2024, OpenAI has taken down their Curie engine and thus we cannot use it for TruthfulQA evaluation anymore.
14
  So, we decided to train the judge models using an open model (i.e., LLaMa), which can make the evaluation more accessible and reproducible.
15
 
16
  ## Released Models
@@ -22,12 +22,12 @@ We released two models for the truthfulness and informativeness evaluation, resp
22
 
23
  ## Training Details
24
 
25
- The training code and validation results of these models can be found [here](https://github.com/allenai/truthfulqa_reeval)
26
 
27
 
28
  ## Usage
29
 
30
- These models are only intended for the TruthfulQA evaluation. It is intended to generalize to the evaluation of new models on the fixed set of prompts, while it may fail to generalize to new prompts.
31
  You can try the model using the following scripts:
32
 
33
  ```python
 
8
  - accuracy
9
  ---
10
 
11
+ This model is built based on LLaMa2 7B in replacement of the truthfulness/informativeness judge models that were originally introduced in the TruthfulQA paper.
12
  That model is based on OpenAI's Curie engine using their finetuning API.
13
+ However, as of February 08, 2024, OpenAI has taken down its Curie engine, and thus, we cannot use it for TruthfulQA evaluation anymore.
14
  So, we decided to train the judge models using an open model (i.e., LLaMa), which can make the evaluation more accessible and reproducible.
15
 
16
  ## Released Models
 
22
 
23
  ## Training Details
24
 
25
+ The training code and validation results of these models can be found [here](https://github.com/yizhongw/truthfulqa_reeval)
26
 
27
 
28
  ## Usage
29
 
30
+ These models are only intended for the TruthfulQA evaluation. They are intended to generalize to the evaluation of new models on the fixed set of prompts, but they may fail to generalize to new prompts.
31
  You can try the model using the following scripts:
32
 
33
  ```python