jondurbin committed
Commit c405077
1 Parent(s): e8a8f42

Update README.md

Files changed (1)
  1. README.md +4 -5
README.md CHANGED
@@ -4,18 +4,17 @@ license: other
  
  # Overview
  
- This is a fine-tuned 7B parameter LLaMA model, using completely synthetic training data created by [airoboros](https://github.com/jondurbin/airoboros)
+ This is a fine-tuned 7B parameter LLaMA model, using completely synthetic training data created by https://github.com/jondurbin/airoboros
  
  ### Training data
  
  I used a jailbreak prompt to generate the synthetic instructions, which resulted in some training data that would likely be censored by other models, such as how-to prompts about synthesizing drugs, making homemade flamethrowers, etc. Mind you, this is all generated by ChatGPT, not me. My goal was simply to test some of the capabilities of ChatGPT when unfiltered (as much as possible), not to intentionally produce any harmful/dangerous/etc. content.
  
- The jailbreak prompt I used is the default prompt in the python code when using the `--uncensored` flag:
- (https://github.com/jondurbin/airoboros/blob/main/airoboros/self_instruct.py#L39)
+ The jailbreak prompt I used is the default prompt in the python code when using the `--uncensored` flag: https://github.com/jondurbin/airoboros/blob/main/airoboros/self_instruct.py#L39
  
- I also did a few passes of manual cleanup to remove some bad prompts, but mostly I left the data as-is.
+ I also did a few passes of manual cleanup to remove some bad prompts, but mostly I left the data as-is. Initially, the model was fairly bad at math/extrapolation, closed question-answering (heavy hallucination), and coding, so I did one more fine-tuning pass with additional synthetic instructions aimed at those types of problems.
  
- Initially, the model was fairly bad at math/extrapolation, closed question-answering (heavy hallucination), and coding, so I did one more fine-tuning pass with additional synthetic instructions aimed at those types of problems.
+ Both the initial instructions and final-pass fine-tuning instructions will be published soon.
  
  ### Fine-tuning method
  