andreaskoepf committed
Commit e96aa1e
1 Parent(s): 9b13660

Update README.md

Files changed (1): README.md (+47, -1)
README.md CHANGED
@@ -1,7 +1,53 @@
---
license: apache-2.0
+ language:
+ - en
+ tags:
+ - sft
+ pipeline_tag: text-generation
+ widget:
+ - text: <|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>
+ - text: <|prompter|>What's the Earth total population<|endoftext|><|assistant|>
+ - text: <|prompter|>Write a story about future of AI development<|endoftext|><|assistant|>
---
- - wandb: https://wandb.ai/open-assistant/supervised-finetuning/runs/770a0t41 ()
+
+ # Open-Assistant SFT-4 12B Model
+
+ This is the 4th-iteration English supervised-fine-tuning (SFT) model of
+ the [Open-Assistant](https://github.com/LAION-AI/Open-Assistant) project.
+ It is based on a Pythia 12B model that was fine-tuned on human demonstrations
+ of assistant conversations collected through the
+ [https://open-assistant.io/](https://open-assistant.io/) human feedback web
+ app before March 25, 2023.
+
+ ## Model Details
+
+ - **Developed by:** [Open-Assistant Contributors](https://open-assistant.io/)
+ - **Model type:** Transformer-based Language Model
+ - **Language:** English
+ - **Finetuned from:** [EleutherAI/pythia-12b-deduped](https://huggingface.co/EleutherAI/pythia-12b-deduped)
+ - **Code:** [Open-Assistant/model/model_training](https://github.com/LAION-AI/Open-Assistant/tree/main/model/model_training)
+ - **Demo:** [Continuations for 250 random prompts](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-03-09_andreaskoepf_oasst-1_12b_7000_sampling_noprefix_lottery.json) ([sampling code](https://github.com/Open-Assistant/oasst-model-eval/blob/3d71f3be100c05cd8ddb568365e036a29fbff8c7/model_eval/manual/sampling_report.py))
+ - **License:** Apache 2.0
+ - **Contact:** [Open-Assistant Discord](https://ykilcher.com/open-assistant-discord)
+
+ ## Prompting
+
+ Two special tokens are used to mark the beginning of user and assistant turns:
+ `<|prompter|>` and `<|assistant|>`. Each turn ends with an `<|endoftext|>` token.
+
+ Input prompt example:
+ ```
+ <|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>
+ ```
+ The input ends with the `<|assistant|>` token to signal that the model should
+ start generating the assistant reply.
+
+ ## Dev Details
+
+ - wandb: https://wandb.ai/open-assistant/supervised-finetuning/runs/770a0t41
- base model: [andreaskoepf/pythia-12b-pre-2000](https://huggingface.co/andreaskoepf/pythia-12b-pre-2000)
- checkpoint: 4000 steps
- [sampling report](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-04-03_andreaskoepf_oasst-sft-4-pythia-12b-epoch-3_5_sampling_noprefix_lottery.json%0Ahttps%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Fchat-gpt%2F2023-04-11_gpt-3.5-turbo_lottery.json)
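
The prompting scheme described in the model card's Prompting section can be sketched as a small helper that assembles the special tokens. This is a minimal illustration, not part of the released Open-Assistant code: `build_prompt` is a hypothetical function, and only the three special tokens (`<|prompter|>`, `<|assistant|>`, `<|endoftext|>`) come from the card itself.

```python
# Sketch of the Open-Assistant SFT-4 prompt format.
# build_prompt is a hypothetical helper; the special tokens are
# taken from the model card's Prompting section.

PROMPTER = "<|prompter|>"
ASSISTANT = "<|assistant|>"
EOS = "<|endoftext|>"

def build_prompt(turns):
    """Format alternating user/assistant turns, each closed by
    <|endoftext|>, and end with <|assistant|> so the model starts
    generating the assistant reply."""
    parts = []
    for i, text in enumerate(turns):
        role = PROMPTER if i % 2 == 0 else ASSISTANT
        parts.append(f"{role}{text}{EOS}")
    return "".join(parts) + ASSISTANT

print(build_prompt(["What is a meme, and what's the history behind this word?"]))
# -> <|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>
```

The resulting string can then be tokenized and passed to any `transformers` causal-LM generation call; the sampling code actually used for the linked reports is in the `oasst-model-eval` repository referenced above.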