xwinxu committed on
Commit
2c3e9b7
1 Parent(s): e59a79d

Upload README.md with huggingface_hub

Files changed (1): README.md +54 -0
README.md ADDED
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

![halos](https://gist.github.com/assets/29318529/fe2d8391-dbd1-4b7e-9dc4-7cb97e55bc06)

This repo contains the model checkpoints for:
- model family <b>llama13b</b>
- optimized with the loss <b>CSFT</b>
- aligned using the SHP, Anthropic HH, and Open Assistant datasets.

To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
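
As a concrete illustration, here is a minimal sketch of prompting a checkpoint like this one with the Hugging Face `transformers` library. The repository id below is an assumption based on this card's model family and loss; substitute the actual checkpoint path for this repo.
```
# Minimal sketch: prompting an Archangel checkpoint with Hugging Face transformers.
# NOTE: the repo id below is an assumption -- replace it with this model's actual path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ContextualAI/archangel_csft_llama13b"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# TuluV2-style prompt: the human speaks first, and the prompt ends with the
# assistant tag so the model continues in the assistant role.
prompt = (
    "<|user|>\n"
    "Hi! I'm looking for a cake recipe.\n"
    "<|assistant|>\n"
)

# The tokenizer adds the BOS token itself (add_special_tokens=True by default);
# no EOS token is appended to the prompt.
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)

# Decode only the tokens generated after the prompt, i.e. the model's reply.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
Decoding only the tokens after the prompt keeps the printed text to the model's reply rather than echoing the conversation.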

Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contain instructions for training your own HALOs and links to our model cards.

If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
  author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
  title = {Human-Centered Loss Functions (HALOs)},
  institution = {Contextual AI},
  note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
  year = {2023},
}
```