stas committed on
Commit bf0c833
1 Parent(s): 2351dc4

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +4 -2
README.md CHANGED
@@ -22,8 +22,8 @@ tags:
 ![halos](https://gist.github.com/assets/29318529/fe2d8391-dbd1-4b7e-9dc4-7cb97e55bc06)
 
 This repo contains the model checkpoints for:
- - model family <b>pythia1-4b</b>
- - optimized with the loss <b>SFT+PPO</b>
+ - model family <b>EleutherAI/pythia-1.4b</b>
+ - optimized with the loss <b>PPO</b>
 - aligned using the SHP, Anthropic HH and Open Assistant datasets.
 
 To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
@@ -40,6 +40,8 @@ Chocolate cake.
 ```
 Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
 
+
+
 Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contain instructions for training your own HALOs and links to our model cards.
 
 If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
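The README above says prompts must follow the TuluV2 chat format, with the BOS token added automatically at tokenization time and no EOS token appended. A minimal sketch of building such a prompt, assuming the standard `<|user|>` / `<|assistant|>` turn markers of TuluV2 (the helper name `format_tulu_prompt` is ours, not part of the repo):

```python
def format_tulu_prompt(user_message: str) -> str:
    """Wrap a user message in a TuluV2-style chat prompt.

    The Archangel tokenizer prepends the BOS token itself, so the string
    starts directly with the user turn, and no EOS token is appended:
    generation should continue right after the assistant marker.
    """
    return f"<|user|>\n{user_message}\n<|assistant|>\n"


prompt = format_tulu_prompt("What is the best cake?")
print(prompt)
```

The resulting string would then be passed to the model's tokenizer as-is (e.g. `tokenizer(prompt)` with `transformers`), without manually adding any special tokens.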