Update README.md
README.md
CHANGED
@@ -13,7 +13,7 @@ The model was finetuned with the following prompt: \
 ``"Answer the following question in context:\n\nQuestion: " + samples["prompt"] + " Answer: "`` \
 It should be beneficial to use the same or a similar prompt for inference.
 
-An increase
+An increase in performance compared to [GPT4All-J v1.3](https://huggingface.co/nomic-ai/gpt4all-j) was observed when using two-shot Chain-of-Thought prompting.
 
 | HellaSwag | WinoGrande | BooLQ | ARC-c |
 |:------:|:------:|:------:|:------:|
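
For reference, a minimal sketch of reusing the finetuning prompt template at inference time with the Hugging Face `transformers` library. The checkpoint id and the example question below are placeholders, not taken from the model card; substitute the actual repository id of this finetuned model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint id; replace with this finetuned model's repository id.
model_id = "nomic-ai/gpt4all-j"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Example question (not from the model card), inserted into the same
# template the model was finetuned with.
question = "What is the capital of France?"
prompt = "Answer the following question in context:\n\nQuestion: " + question + " Answer: "

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```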