---
language:
- en
license: apache-2.0
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
datasets:
- snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset
metrics:
- accuracy
---

This repo contains the model and tokenizer checkpoints for:
- model family [<b>mistralai/Mistral-7B-Instruct-v0.2</b>](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
- optimized with the loss [<b>KTO</b>](https://twitter.com/winniethexu/status/1732839295365554643)
- aligned using the [snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset](https://huggingface.co/datasets/snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset)
- via 3 iterations of KTO, one epoch per training partition, with each previous iteration's model serving as the reference for the next.
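
As a minimal loading sketch (not an official snippet), the checkpoints can be loaded with 🤗 Transformers as below; `<this-repo-id>` is a placeholder for this repository's actual Hugging Face path.
```
# Minimal loading sketch; "<this-repo-id>" is a placeholder for this repo's path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<this-repo-id>"  # placeholder: replace with this repository's Hugging Face ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # load in the checkpoint's native precision
    device_map="auto",    # requires `accelerate`; remove for CPU-only loading
)
```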

**[03/06/2024]**: We are #2 on the (verified) [Alpaca Eval 2.0 Leaderboard](https://tatsu-lab.github.io/alpaca_eval/) scoring **33.23**!

To prompt this model, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```

<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added at tokenization time and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
You may also use our tokenizer's `apply_chat_template` method to produce this format automatically, for example when doing inference with `chatml` set or when evaluating generations through non-local clients.
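
As an illustrative sketch (assuming the repo ID placeholder and a bundled chat template), the prompt above can be built either manually or via `apply_chat_template`; the template's exact output depends on what ships with the tokenizer.
```
# Sketch: format a conversation in the TuluV2 style described above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("<this-repo-id>")  # placeholder repo ID

messages = [
    {"role": "user", "content": "Hi! I'm looking for a cake recipe."},
    {"role": "assistant", "content": "What kind of cake?"},
    {"role": "user", "content": "Chocolate cake."},
]

# Manual formatting (illustrative): each turn is tagged <|user|> or <|assistant|>,
# the human speaks first, and the prompt ends with an empty <|assistant|> turn.
prompt = "".join(f"<|{m['role']}|>\n{m['content']}\n" for m in messages) + "<|assistant|>\n"

# Or let the tokenizer's chat template do the formatting. The BOS token is added
# automatically at tokenization time; no EOS token is appended to the prompt.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
```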


For more details on KTO and the methodology, refer to our [code repository](https://github.com/ContextualAI/HALOs) or our [blog post](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/).

If you found this work useful, feel free to cite [our work](https://arxiv.org/abs/2402.01306):
```
@techreport{ethayarajh2023halos,
  author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
  title = {Human-Centered Loss Functions (HALOs)},
  institution = {Contextual AI},
  note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
  year = {2023},
}
```