---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

![halos](https://gist.github.com/assets/29318529/fe2d8391-dbd1-4b7e-9dc4-7cb97e55bc06)

This repo contains the model checkpoints for:
- model family <b>pythia6-9b</b> (Pythia-6.9B)
- optimized with the loss <b>SFT+DPO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.

To prompt archangel models, make sure the input format is consistent with that of TuluV2, i.e. `"<s>\n<|user|>\n" + <prompt> + "\n<|assistant|>\n</s>"`. 
Note that the BOS / EOS tokens should be excluded if they are automatically added by your tokenizer during batch collation.
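
As an illustration, here is a minimal sketch using the `transformers` library. The repo id is a placeholder assumption (substitute the checkpoint you actually downloaded), the trailing `</s>` from the template is omitted so the model can generate the assistant turn, and `add_special_tokens=False` prevents the tokenizer from adding a duplicate BOS, per the note above.

```python
# Minimal sketch of prompting an archangel checkpoint with the TuluV2 template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ContextualAI/archangel_sft-dpo_pythia6-9b"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "What is the capital of France?"
# Wrap the raw prompt in the template described above. The BOS is already in
# the string, so we disable the tokenizer's automatic special tokens; the
# trailing EOS is left off so generation can continue past the assistant tag.
text = "<s>\n<|user|>\n" + prompt + "\n<|assistant|>\n"
inputs = tokenizer(text, return_tensors="pt", add_special_tokens=False)

output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
# Strip the prompt tokens and decode only the generated continuation.
response = tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(response)
```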

Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog post](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contain instructions for training your own HALOs and links to our model cards.

If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
  author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
  title = {Human-Centered Loss Functions (HALOs)},
  institution = {Contextual AI},
  note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
  year = {2023},
}
```