---
license: apache-2.0
datasets:
- ai2_arc
- unalignment/spicy-3.1
- codeparrot/apps
- facebook/belebele
- boolq
- jondurbin/cinematika-v0.1
- drop
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- cais/mmlu
- Muennighoff/natural-instructions
- openbookqa
- piqa
- Vezora/Tested-22k-Python-Alpaca
- cakiki/rosetta-code
- Open-Orca/SlimOrca
- spider
- squad_v2
- migtissera/Synthia-v1.3
- winogrande
---
A bagel, with everything (except DPO)
Overview
This is the pre-DPO version of the mistral-7b model fine-tuned with https://github.com/jondurbin/bagel
You probably want the higher-performing model that underwent DPO: https://huggingface.co/jondurbin/bagel-dpo-7b-v0.1
The only benefit of this model is that it is less "truthful", which may be preferable for roleplay and other scenarios that benefit more from the SFT-only tune.
Data selection
The first step in the process is creating a dataset. In this case, we're actually creating a composite dataset, consisting of both supervised fine-tuning data (SFT) and direct preference optimization (DPO) data.
All instruction data, that is, data that is not plain text (like Project Gutenberg and items from Cinematika) or DPO, is converted into ShareGPT format so it's easier to work with.
See the corresponding code in `bagel/data_sources/*.py` in the repo linked above for the full implementation for each data source.
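For reference, a ShareGPT-style record is just a list of conversation turns. A minimal example (field names follow the common ShareGPT convention and may not match the repo's exact schema):

```python
# Minimal ShareGPT-style record (illustrative; the repo's exact schema may differ).
example = {
    "conversations": [
        {"from": "system", "value": "You are a helpful assistant."},
        {"from": "human", "value": "What is the capital of France?"},
        {"from": "gpt", "value": "The capital of France is Paris."},
    ]
}
```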
Deduplication is done by creating a uuid v5 of the instruction/text, then only adding items not previously seen (where datasets are loaded in order of the confidence score I assign them). This means that if an instruction is in data source "Foo" with confidence 4 as well as in data source "Bar" with confidence score 2, only the entry from "Foo" will be taken.
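A minimal sketch of that deduplication pass (the uuid namespace, field names, and confidence handling here are illustrative, not the repo's actual code):

```python
import uuid

def dedupe(sources):
    """Keep the first occurrence of each instruction, visiting sources
    from highest to lowest confidence so higher-confidence copies win."""
    seen = set()
    kept = []
    for source in sorted(sources, key=lambda s: s["confidence"], reverse=True):
        for item in source["items"]:
            key = uuid.uuid5(uuid.NAMESPACE_DNS, item["instruction"])
            if key not in seen:
                seen.add(key)
                kept.append(item)
    return kept
```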
SFT data sources
Yes, you will see benchmark names in the list, but this only uses the train splits, and a decontamination pass by cosine similarity is performed at the end as a sanity check.
- ai2_arc
  - Abstraction and reasoning dataset, useful in measuring "intelligence" to a certain extent.
- airoboros
  - Variety of categories of synthetic instructions generated by gpt-4.
- apps
  - Python coding dataset with 10k problems.
- belebele
  - Multi-lingual reading comprehension dataset.
- boolq
  - Corpus of yes/no questions (which can be surprisingly difficult for AI to answer apparently?)
- cinematika (instruction and plain text)
  - RP-style data synthesized from movie scripts so the model isn't quite as boring as it otherwise would be.
- drop
  - More reading comprehension.
- gutenberg (plain text)
  - Books/plain text, again to make the model less boring; only a handful of examples, supported by chapterize.
- lmsys_chat_1m (only gpt-4 items, also used for DPO)
  - Chats collected by the lmsys chat arena, containing a wide variety of chats with various models.
- mathinstruct
  - Composite dataset with a variety of math-related tasks and problem/question formats.
- mmlu
  - Massive Multitask Language Understanding - a wide variety of questions about various subject matters.
- natural_instructions
  - Millions of instructions from 1600+ task categories (sampled down substantially, stratified by task type).
- openbookqa
  - Question answering dataset.
- piqa
  - Physical interaction question answering.
- python_alpaca
  - Python instruction/response pairs, validated as functional.
- rosetta_code
  - Code problems and solutions in a variety of programming languages, taken from rosettacode.org.
- slimorca
  - Collection of ~500k gpt-4 verified chats from OpenOrca.
- spider
  - SQL-targeted dataset.
- squad_v2
  - Contextual question answering (RAG).
- synthia
  - GPT-4 generated data using advanced prompting from Migel Tissera.
- winogrande
  - Fill in the blank style prompts.
Only the train splits were used (if a split was provided), and an additional pass of decontamination was performed using approximate nearest neighbor search (via faiss).
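Roughly, that decontamination step looks like the sketch below (the embedding step, index type, and threshold are placeholders; an exact inner-product index is used here for simplicity rather than a truly approximate one):

```python
import faiss
import numpy as np

def decontaminate(train_vecs: np.ndarray, test_vecs: np.ndarray, threshold: float = 0.95) -> np.ndarray:
    """Return a boolean mask of training rows to keep, dropping any row whose
    nearest benchmark/test embedding exceeds the cosine-similarity threshold."""
    train_vecs = np.ascontiguousarray(train_vecs, dtype=np.float32)
    test_vecs = np.ascontiguousarray(test_vecs, dtype=np.float32)
    # L2-normalize so inner product equals cosine similarity.
    faiss.normalize_L2(train_vecs)
    faiss.normalize_L2(test_vecs)
    index = faiss.IndexFlatIP(test_vecs.shape[1])
    index.add(test_vecs)
    sims, _ = index.search(train_vecs, 1)  # top-1 neighbor per training item
    return sims[:, 0] < threshold
```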
Prompt formatting
In sticking with the theme of the bagel, I didn't want to use a single prompt format, so I used four: vicuna, llama-2, alpaca, and chat-ml (sorta). I also didn't want to randomly select a single prompt format for each item (hoping each instruction would generalize more when used in a variety of prompt formats), so each instruction is actually converted into every prompt format.
This means each epoch of our fine-tune is effectively four epochs. So, for the fine-tunes, I would recommend only doing one epoch (or 0.75 epochs). I am testing with a single epoch using a relatively low learning rate.
Alpaca (sort of)
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{system prompt, if provided}
{instruction}

### Response:
```
The main difference here is that because of the dataset formatting and variety of data sources, it would have been much too tedious to add an `### Input:` block, so the inputs are just in the instruction section.
Vicuna
```
{system prompt, if provided, randomly defaulting to "A chat between a user and an unbiased, uncensored assistant."}
USER: {instruction}
ASSISTANT:
```
ChatML (sort of)
I don't really understand the point of having special tokens for `<|im_start|>` and `<|im_end|>`, because in practice they just act as BOS and EOS tokens (but please correct me if I'm wrong).
So, instead of:
```
{bos}<|im_start|>{role}
{text}
<|im_end|>{eos}
```
I just changed it to:
```
{bos}{role}
{text}
{eos}
```
In practice, this would mean tokenization code like this:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('mistralai/mistral-7b-v0.1')
input_str = f"""system
You are a goat.
{tokenizer.eos_token}
{tokenizer.bos_token}user
Tell me how to fry an egg.
{tokenizer.eos_token}
{tokenizer.bos_token}assistant
"""
inputs = tokenizer(input_str, return_tensors="pt")
```
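Continuing that example, generation would look something like the following (the repo ID and generation settings are assumptions, not taken from the repo):

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "jondurbin/bagel-7b-v0.1",   # assumed repo ID for this model
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
outputs = model.generate(**inputs.to(model.device), max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```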
If you really want to use `<|im_start|>` and `<|im_end|>`, just update your `tokenizer_config.json` to use `<|im_start|>` instead of `<s>` and `<|im_end|>` instead of `</s>` when tokenizing. And if you still don't like what I've done to this chat-ml-ish format, feel free to cry into your pillow or fork the code and do a new fine-tune.
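If you go that route, one way to do it programmatically (a sketch; editing `tokenizer_config.json` by hand works just as well) is:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('mistralai/mistral-7b-v0.1')
# Register the ChatML markers as the BOS/EOS tokens.
tokenizer.add_special_tokens({"bos_token": "<|im_start|>", "eos_token": "<|im_end|>"})
# Note: if these tokens are new to the vocab, the model's embeddings must also be
# resized (model.resize_token_embeddings(len(tokenizer))).
tokenizer.save_pretrained("./bagel-tokenizer-chatml")  # writes an updated tokenizer_config.json
```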
Llama-2 chat
```
[INST] <<SYS>>
{system}
<</SYS>>

{instruction} [/INST]
```
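To make the "every instruction in every prompt format" idea concrete, here is a rough sketch of the expansion (abbreviated templates, not the repo's actual rendering code; the chat-ml-ish variant omits the BOS/EOS placeholders):

```python
def render_all_formats(system: str, instruction: str) -> dict:
    """Expand one instruction into all four prompt formats described above."""
    return {
        "alpaca": (
            "Below is an instruction that describes a task. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{system}\n{instruction}\n\n### Response:\n"
        ),
        "vicuna": f"{system}\nUSER: {instruction}\nASSISTANT: ",
        "chatml": f"system\n{system}\nuser\n{instruction}\nassistant\n",
        "llama2": f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{instruction} [/INST]",
    }
```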
Fine-tune
Note: I actually used my fork of qlora's `train.py` for this, but I'm porting it to a minified version here, not tested yet!
More notes: I stopped the fine-tune around 50% because of budget constraints - it's a lot of data...
```bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
  --deepspeed deepspeed.json
```
Deepspeed configuration:
```json
{
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"bf16": {
"enabled": true
},
"zero_optimization": {
"stage": 2,
"contiguous_gradients": true,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 5e8,
"allgather_bucket_size": 5e8
}
}
```