Finetunes Llama-13b+Alpaca to solve problems via stepwise reasoning, trained on the OpenAI PRM800k dataset (specifically our postprocessed version, sl-alex/openai-prm800k-solutions-only).

Model description

This is a fork of llama-13b + chansung/alpaca-lora-13b.

That is: we loaded Llama-13b, applied the Alpaca LoRA, expanded the vocabulary with special tokens for stepwise reasoning, then QLoRA 4-bit finetuned from there (see the loading sketch after the parts list below).

Parts:

  • base model llama-13b
  • LoRA 0 chansung/alpaca-lora-13b
  • LoRA 1
    • adapter_config.json
    • adapter_model.bin
  • tokenizer
    • added_tokens.json
    • special_tokens_map.json
    • tokenizer.model
    • tokenizer_config.json
  • finetuned input/output embedding layers:
    • embed_tokens.pt (state_dict for model.get_input_embeddings(), embed_tokens: Embedding)
    • lm_head.pt (state_dict for model.get_output_embeddings(), lm_head: Linear)
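
To see how these parts fit together at load time, here is a minimal loading sketch. This approximates what evaluate.py does for you; the peft/transformers calls and the merge step are our assumptions, not an excerpt from the script.

import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

# base model
base = LlamaForCausalLM.from_pretrained('huggyllama/llama-13b', torch_dtype=torch.bfloat16)

# tokenizer with the stepwise special tokens already registered
tokenizer = LlamaTokenizer.from_pretrained('sl-alex/llama-13b-alpaca-stepwise-lora-embtuned')

# grow the embedding matrices to cover the added tokens,
# then load the finetuned input/output embeddings
base.resize_token_embeddings(len(tokenizer))
base.get_input_embeddings().load_state_dict(torch.load('embed_tokens.pt'))
base.get_output_embeddings().load_state_dict(torch.load('lm_head.pt'))

# LoRA 0 (Alpaca) merged into the base weights, then LoRA 1 (stepwise) on top
model = PeftModel.from_pretrained(base, 'chansung/alpaca-lora-13b')
model = model.merge_and_unload()
model = PeftModel.from_pretrained(model, 'sl-alex/llama-13b-alpaca-stepwise-lora-embtuned')
model.eval()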

Training

Trained using qlora.py from our stepwise branch of qlora.
Known-good as of commit 3a86919.
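
To reproduce the run, first clone the branch (the same repository used in Usage below):

git clone -b stepwise https://github.com/scottlogic-alex/qlora.git
cd qlora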

python -m qlora \
--model_name_or_path huggyllama/llama-13b \
--lora_name_or_path chansung/alpaca-lora-13b \
--dataset prm800k-solutions \
--dataset_format prm800k-solutions \
--bf16 \
--max_memory_MB 24000 \
--use_bos_token_in_prompt \
--truncate_toward_center \
--source_max_len 184 \
--target_max_len 998 \
--gradient_accumulation_steps 4 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--learning_rate 0.0002 \
--run_name 13b_alpaca_special_tokens_long \
--report_to wandb \
--save_steps 64 \
--save_total_limit 3 \
--max_steps 1664 \
--evaluation_strategy steps \
--eval_steps 64 \
--generate_steps 16 \
--register_process_supervision_tokens
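
The --register_process_supervision_tokens flag registers the stepwise delimiters seen in the example outputs below. In effect (a sketch of the idea, not the fork's actual code):

# add the four process-supervision delimiters and grow the embeddings to match;
# the new embedding rows are then trained alongside the LoRA
special_tokens = ['<|step_start|>', '<|step_end|>', '<|answer_start|>', '<|answer_end|>']
tokenizer.add_special_tokens({'additional_special_tokens': special_tokens})
model.resize_token_embeddings(len(tokenizer))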

Usage

You can load the model using evaluate.py from our stepwise branch of qlora.
Known-good as of commit 3a86919.

You'll need to download embed_tokens.pt and lm_head.pt from this repository, save them to the root of the qlora repository, then run evaluate.py like so:

git clone -b stepwise https://github.com/scottlogic-alex/qlora.git
cd qlora
wget https://huggingface.co/sl-alex/llama-13b-alpaca-stepwise-lora-embtuned/resolve/main/embed_tokens.pt
wget https://huggingface.co/sl-alex/llama-13b-alpaca-stepwise-lora-embtuned/resolve/main/lm_head.pt
python -m evaluate \
--model_name_or_path huggyllama/llama-13b \
--base_lora_model_name_or_path chansung/alpaca-lora-13b \
--tokenizer_model_name_or_path sl-alex/llama-13b-alpaca-stepwise-lora-embtuned \
--lora_model_name_or_path sl-alex/llama-13b-alpaca-stepwise-lora-embtuned \
--input_embedding_path embed_tokens.pt \
--output_embedding_path lm_head.pt \
--use_bos_token_in_prompt \
--overrun_countermeasures False

Prompt style

The model is finetuned for instruction-response, using the Alpaca prompt format with <|step_start|> appended to begin the stepwise reasoning:

prompt = '<s>Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nHow do I feed a family of 5 without ingredients?\n\n### Response:\n<|step_start|>'
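
With the model and tokenizer loaded as sketched above, generation could look like this (the generation parameters are illustrative, not tuned values):

instruction = 'What is the third Fibonacci number?'
prompt = (
    '<s>Below is an instruction that describes a task. '
    'Write a response that appropriately completes the request.\n\n'
    f'### Instruction:\n{instruction}\n\n### Response:\n<|step_start|>'
)
# the template already contains <s>, so don't let the tokenizer add a second BOS
inputs = tokenizer(prompt, return_tensors='pt', add_special_tokens=False).to(model.device)
out = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(out[0][inputs['input_ids'].shape[-1]:]))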

Example outputs:

$ How do I drink a matcha latte?
To drink a matcha latte, I should first remove the matcha from the bag. <|step_end|> <|step_start|> I can do that by opening the bag and shaking out the matcha into a glass or bowl. <|step_end|> <|step_start|> Then, I should add some hot water into the glass or bowl and start stirring the matcha. <|step_end|> <|step_start|> This is to make the matcha easier to mix with the water. <|step_end|> <|step_start|> Next, I should add milk into the glass or bowl and continue stirring. <|step_end|> <|step_start|> This is to make the matcha and the water form a thick mixture. <|step_end|> <|step_start|> Finally, I should add ice cubes into the glass or bowl and stir again. <|step_end|> <|step_start|> This is to make the mixture even thicker and chilled. <|step_end|> <|step_start|> So, a matcha latte is a drink that is a mixture of matcha, water, and milk, with some ice cubes. <|step_end|> <|step_start|> <|answer_start|> To make a matcha latte, I should remove the matcha from the bag, add some hot water into a glass or bowl and stir, add milk into the glass or bowl and stir as well, and finally add ice cubes and stir again. <|step_end|> <|step_start|> <|answer_start|> To drink a matcha latte, I should remove

$ What is the third Fibonacci number?
I know that the first two Fibonacci numbers are 1 and 1, since they are just the ones that appear in the Fibonacci sequence: 1, 1, 2, 3, 5, 8, 13, and so on, where each term is the sum of the previous two. <|step_end|> <|step_start|> To find the third number, I could try to figure out a pattern or a shortcut, but I'll just use the definition and do some calculations. <|step_end|> <|step_start|> The third Fibonacci number is the sum of the first two, so I have 1 + 1 = 2. <|step_end|> <|step_start|> <|answer_start|> 2 <|answer_end|>

$ How do I make a cup of tea?
Ok, the first step is to open the cup of tea. <|step_end|> <|step_start|> Then the second step is to jump in the tea. <|step_end|> <|step_start|> And finally the third step is to drink the jump. <|step_end|> <|step_start|> Let's reverse the order of the steps so that they are more intuitive. <|step_end|> <|step_start|> The first step is to jump in the tea. <|step_end|> <|step_start|> Then the second step is to open the cup of tea. <|step_end|> <|step_start|> And finally the third step is to drink the jump. <|step_end|> <|step_start|> So, I think this is a good way to make a cup of tea. <|step_end|> <|step_start|> Yes, it is. <|answer_start|> How do I make a cup of tea? <|answer_end|>

$ How do I open a jar?
To open a jar, I need to use a key that fits into the lock on the jar. <|step_end|> <|step_start|> I recognize that a common type of key is a skeleton key, which can open many different types of locks, because it has no ridges that would restrict its usage. <|step_end|> <|step_start|> To get a skeleton key, I can cut any old key into pieces and join them back together. <|step_end|> <|step_start|> Alternatively, I can use a special tool that can make cuts exactly perpendicular to each other, forming a key that opens any lock. <|step_end|> <|step_start|> Alternatively, I can use a special trick that can make any old key open the jar. <|step_end|> <|step_start|> This trick involves placing the key on top of the jar and banging it on a hard surface, such as the floor or a table. <|step_end|> <|step_start|> I need to make sure that the key and the jar are both upside-down, and that I am banging them as hard as I can. <|step_end|> <|step_start|> This will create a small hole in the glass on the jar, and then I can use the key to open it. <|step_end|> <|step_start|> I need to be careful not to get cut by the glass, and I need to check if the jar is empty first, otherwise I might poke my finger and
