# minihf_evaluator_openllama_7b
`minihf_evaluator_openllama_7b` is a LoRA instruct fine-tune of OpenLLaMA 7B.
The sequence `<|end|>` was used to separate the prompt and response, so the correct way to prompt the model is `Does 2 + 2 = 4?<|end|>`. The tokenizer will prepend a BOS token (`<s>`) by default, and the response will end with an EOS token (`</s>`).
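As a minimal usage sketch, loading the adapter with `peft` and prompting it in this format might look like the following. The base model repo (`openlm-research/open_llama_7b`) and the local adapter path are assumptions, not confirmed by this card:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "openlm-research/open_llama_7b"  # assumed base model repo
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.float16, device_map="auto"
)
# Load the LoRA adapter on top of the base model (adapter path assumed).
model = PeftModel.from_pretrained(model, "minihf_evaluator_openllama_7b")

# The tokenizer prepends BOS (<s>) automatically; the prompt itself
# ends with the <|end|> separator described above.
inputs = tokenizer("Does 2 + 2 = 4?<|end|>", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```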
## Training procedure
`minihf_evaluator_openllama_7b` was fine-tuned on 100,000 examples drawn 90% from Muennighoff/flan and 10% from databricks/databricks-dolly-15k, with a batch size of 4 per GPU on 8 40GB A100 GPUs. Examples whose prompt and response did not fit into 2,048 tokens were dropped. The fine-tuning was done with the following command:

```
accelerate launch make_evaluator.py --output-dir minihf_evaluator_openllama_7b
```
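As a rough illustration of that length filter (a sketch only, not the actual `make_evaluator.py` logic; the tokenizer repo and field names are assumptions):

```python
from transformers import AutoTokenizer

MAX_TOKENS = 2048
tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_7b")  # assumed

def fits(prompt: str, response: str) -> bool:
    # BOS is prepended by the tokenizer automatically; reserve one
    # slot for the trailing EOS token when counting.
    n = len(tokenizer(prompt + "<|end|>" + response).input_ids)
    return n + 1 <= MAX_TOKENS

examples = [{"prompt": "Does 2 + 2 = 4?", "response": "Yes."}]
kept = [ex for ex in examples if fits(ex["prompt"], ex["response"])]
```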
The following `bitsandbytes` quantization config was used during training (see the sketch after this list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
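In current `transformers` versions, these settings correspond roughly to the `BitsAndBytesConfig` below. This is a reconstruction of the listed values, not the original training code, and the base model repo is an assumption:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit, not 8-bit, loading
    bnb_4bit_quant_type="nf4",              # NF4 quantization
    bnb_4bit_use_double_quant=True,         # nested (double) quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # bfloat16 compute dtype
    llm_int8_threshold=6.0,
)

model = AutoModelForCausalLM.from_pretrained(
    "openlm-research/open_llama_7b",  # assumed base model repo
    quantization_config=bnb_config,
    device_map="auto",
)
```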
### Framework versions
- PEFT 0.4.0.dev0