---
license: apache-2.0
language:
- en
library_name: peft
base_model: meta-llama/Llama-2-13b-hf
pipeline_tag: text-generation
datasets:
- allenai/ai2_arc
- tasksource/Boardgame-QA
- skrishna/coin_flip
- openai/gsm8k
- hotpotqa/hotpot_qa
- ChilleD/LastLetterConcat
- allenai/quartz
- tasksource/strategy-qa
- ConditionalQA
widget:
- text: "[Question] Juan and LaKeisha roll a few objects down a ramp. They want to see which object rolls the farthest. What should they do so they can repeat their investigation?\n[Options] A) Put the objects in groups. B) Change the height of the ramp. C) Choose different objects to roll. D) Record the details of the investigation.\n[Number of answers] 2\n[Answer 1] "
example_title: "Multiple Choice QA"
---
This is the official model from the publication "Fine-Tuning with Divergent Chains of Thought Boosts Reasoning Through Self-Correction in Language Models" (arXiv, 2024).

> TLDR: Divergent Chain of Thought (DCoT) requires models to generate multiple CoTs before choosing an answer. Adding DCoT data to instruction tuning allows models to improve performance through self-correction.

The paper is available on arXiv: https://arxiv.org/abs/2407.03181
# Load the Model
```
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the base model (gated on the Hub; requires accepting the Llama 2 license)
base_model_path = "meta-llama/Llama-2-13b-hf"
model = AutoModelForCausalLM.from_pretrained(
    base_model_path,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Attach the DCoT LoRA adapter (requires the `peft` library to be installed)
peft_model_id = "haritzpuerto/LLaMA2-13B-dcot"
model.load_adapter(peft_model_id)

tokenizer = AutoTokenizer.from_pretrained(base_model_path)
```
# Run the model
## Prompt Template
```
[Question] {question} [Context] {document} [Options] {answer_options} [Number of answers] {k}
```
Note that not all commands (the text in brackets) are mandatory: `[Context]` and `[Options]` are optional.
- `[Context]` refers to a paragraph that contains the answer to the question (for span-extraction QA).
- `[Options]` refers to a list of candidate answers (for multiple-choice QA). The format is `A) {answer option 1} B) {answer option 2} ...`
The minimal template is
```
[Question] {question} [Number of answers] {k}
```
Whether to include the context and options depends on your task.
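A minimal helper for assembling prompts in this format is sketched below. It is illustrative only (the `build_prompt` function is not part of the released code); as in the run example below, it ends the prompt with `[Answer 1] ` to cue the first chain of thought.
```
# Illustrative sketch, not part of the released code.
def build_prompt(question, k, context=None, options=None):
    parts = [f"[Question] {question}"]
    if context is not None:
        parts.append(f"[Context] {context}")
    if options is not None:
        # `options` is a list of answer strings; label them A), B), C), ...
        letters = "ABCDEFGHIJ"
        formatted = " ".join(f"{letters[i]}) {opt}" for i, opt in enumerate(options))
        parts.append(f"[Options] {formatted}")
    parts.append(f"[Number of answers] {k}")
    parts.append("[Answer 1] ")
    return "\n".join(parts)
```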
## Response format
You should expect the model to return text in the following format:
```
[Answer 1]CoT_1
[Answer 2]CoT_2
...
[Final answer] answer
```
You should get as many answers (CoTs) as requested with the command `[Number of answers] {k}`, followed by a single `[Final answer]`.
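If you need the individual CoTs and the final answer as separate strings, a simple parser along the following lines works on this format (illustrative sketch, not part of the released code):
```
import re

# Illustrative sketch, not part of the released code: split a generation into
# its chains of thought and the final answer.
def parse_dcot_output(text):
    cots = re.findall(
        r"\[Answer \d+\](.*?)(?=\[Answer \d+\]|\[Final answer\]|$)",
        text,
        flags=re.DOTALL,
    )
    final = re.search(r"\[Final answer\](.*?)(?:</s>|$)", text, flags=re.DOTALL)
    final_answer = final.group(1).strip() if final else None
    return [c.strip() for c in cots], final_answer
```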
## Run Example
```
# Multiple-choice question asking for two divergent chains of thought
prompt = "[Question] Juan and LaKeisha roll a few objects down a ramp. They want to see which object rolls the farthest. What should they do so they can repeat their investigation?\n[Options] A) Put the objects in groups. B) Change the height of the ramp. C) Choose different objects to roll. D) Record the details of the investigation.\n[Number of answers] 2\n[Answer 1] "
inputs = tokenizer(prompt, return_tensors="pt")
# max_length bounds the total sequence length (prompt + generation)
output = model.generate(**inputs.to("cuda"), max_length=1024)
print(tokenizer.decode(output[0]))
```
You should get an output similar to:
```
<s> [Question] Juan and LaKeisha roll a few objects down a ramp. They want to see which object rolls the farthest. What should they do so they can repeat their investigation?
[Options] A) Put the objects in groups. B) Change the height of the ramp. C) Choose different objects to roll. D) Record the details of the investigation.
[Number of answers] 2
[Answer 1] 1. Juan and LaKeisha want to see which object rolls the farthest.
2. They have already rolled a few objects down the ramp.
3. To repeat their investigation, they need to do something that will affect the outcome of the experiment.
4. Putting the objects in groups will not affect the outcome of the experiment.
5. Changing the height of the ramp may affect the outcome, but it is not the best option as it requires changing the setup of the experiment.
6. Choosing different objects to roll may also affect the outcome, but it is not the best option as it does not address the issue of repeating the experiment.
7. The best option is to record the details of the investigation. This includes the objects used, the height of the ramp, and any other relevant information. By recording the details, Juan and LaKeisha can repeat the experiment with the same conditions and compare the results.
[Answer 2] Step 1: Identify the problem and the question.
Problem: Juan and LaKeisha want to see which object rolls the farthest.
Question: What should they do to repeat their investigation?
Step 2: Evaluate the options.
A) Put the objects in groups. - This option does not directly relate to the question of which object rolls the farthest, so it can be eliminated.
B) Change the height of the ramp. - This option also does not directly relate to the question of which object rolls the farthest, so it can be eliminated.
C) Choose different objects to roll. - This option is a possible solution to the question, but it does not guarantee that the object will roll the farthest.
D) Record the details of the investigation. - This option is a necessary step to repeat the investigation.
Step 3: Choose the best option.
The best option to repeat the investigation is to record the details of the investigation. This will allow them to replicate the conditions of the original experiment and compare the results.
[Final answer] D) Record the details of the investigation.</s>
```
# Training details
We train all models using LoRA with the PEFT library. The main parameters are:
| Param. name | Value |
|---------------------|:-------------------:|
| lora\_r | 64 |
| lora\_alpha | 16 |
| lora\_dropout | 0.1 |
| batch size | 4 |
| learning\_rate | 2e-4 |
| weight\_decay | 0.001 |
| optim | paged\_adamw\_32bit |
| lr\_scheduler\_type | constant |
Please check Appendix B of the paper for more details.
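For reference, the table above roughly corresponds to the following PEFT/`transformers` configuration. This is an illustrative sketch, not the authors' exact training script; the target modules, output directory, and any setting not listed in the table are assumptions.
```
from peft import LoraConfig
from transformers import TrainingArguments

# Illustrative sketch mirroring the hyperparameters in the table above;
# the output directory and unlisted settings are assumptions.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="llama2-13b-dcot",  # hypothetical output path
    per_device_train_batch_size=4,
    learning_rate=2e-4,
    weight_decay=0.001,
    optim="paged_adamw_32bit",
    lr_scheduler_type="constant",
)
```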
# Cite
If you find our work useful, please cite it as follows:
```
@misc{puerto2024dcot,
title={Fine-Tuning with Divergent Chains of Thought Boosts Reasoning Through Self-Correction in Language Models},
author={Haritz Puerto and Tilek Chubakov and Xiaodan Zhu and Harish Tayyar Madabushi and Iryna Gurevych},
year={2024},
eprint={2407.03181},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2407.03181},
}
```