Update README.md
# Overview
This is a fine-tuned 7B parameter LLaMA model, using completely synthetic training data created by [airoboros](https://github.com/jondurbin/airoboros).
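
Since the fine-tuned weights are a standard LLaMA checkpoint, they should load with the `transformers` library like any other causal LM. Below is a minimal inference sketch; the local model path and the Vicuna-style prompt template are assumptions (FastChat was used for training, so a Vicuna-style template is a reasonable guess), not something stated in this card:

```python
# Minimal inference sketch. Assumptions: the fine-tuned weights live at the
# path below (the --output_dir used during training), and the model expects a
# Vicuna-style prompt because FastChat was used for fine-tuning. Adjust both
# if your setup differs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "/workspace/airoboros-7b"  # hypothetical local path to the fine-tuned weights

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto"
)

# Vicuna-style prompt template (an assumption based on FastChat's defaults).
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "USER: Explain the difference between nuclear fission and fusion. ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
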
### Training data
I used a jailbreak prompt to generate the synthetic instructions, which resulted in some training data that would likely be censored by other models, such as how-to prompts about synthesizing drugs, making homemade flamethrowers, etc. Mind you, this is all generated by ChatGPT, not me. My goal was simply to test some of ChatGPT's capabilities when unfiltered (as much as possible), not to intentionally produce any harmful or dangerous content.

The jailbreak prompt I used is the default prompt in the Python code when using the `--uncensored` flag:
(https://github.com/jondurbin/airoboros/blob/main/airoboros/self_instruct.py#L39)

I also did a few passes of manual cleanup to remove some bad prompts, but mostly I left the data as-is.

Initially, the model was fairly bad at math/extrapolation, closed question-answering (heavy hallucination), and coding, so I did one more fine-tuning pass with additional synthetic instructions aimed at those types of problems.

### Fine-tuning method

I used the excellent [FastChat](https://github.com/lm-sys/FastChat) module, running with:

```
torchrun --nproc_per_node=8 --master_port=20001 /workspace/FastChat/fastchat/train/train_mem.py \
    --model_name_or_path /workspace/llama-7b \
    --data_path /workspace/as_conversations.json \
    --bf16 True \
    --output_dir /workspace/airoboros-7b \
    --num_train_epochs 3 \
    --per_device_train_batch_size 24 \
    --per_device_eval_batch_size 24 \
```
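
For context on `--data_path`: FastChat's training script consumes ShareGPT-style conversation records, so `as_conversations.json` presumably follows that schema. The sketch below shows the general shape; the example record is made up for illustration and is not taken from the actual dataset:

```python
# Illustrative sketch of the ShareGPT-style conversation format that FastChat's
# training script expects. The actual contents of as_conversations.json are not
# shown in this card; the record below is invented for illustration only.
import json

records = [
    {
        "id": "example-0",
        "conversations": [
            {"from": "human", "value": "Explain what a synthetic instruction dataset is."},
            {"from": "gpt", "value": "A synthetic instruction dataset is one whose prompts and responses are generated by a model rather than written by people..."},
        ],
    }
]

with open("as_conversations.json", "w") as f:
    json.dump(records, f, indent=2)
```

As a side note on scale: with `--nproc_per_node=8` and `--per_device_train_batch_size 24`, the effective training batch size is 8 × 24 = 192 sequences per optimizer step, before any gradient accumulation (the remaining arguments are truncated in this excerpt).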