---
library_name: peft
license: mit
datasets:
- aloobun/mini-math23k-v1
tags:
- math
- phi
---
# phi_mini_math23k_v1

### WIP
**Warning: this adapter is still a work in progress; do not rely on its outputs yet.**
## Usage
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "microsoft/phi-1_5"
adapters_name = "aloobun/phi_mini_math23k_v1"

# Load the base model quantized to 4-bit NF4 with bf16 compute.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
        bnb_4bit_use_double_quant=True,
        bnb_4bit_quant_type="nf4",
    ),
)

# Attach the LoRA adapter on top of the quantized base model.
model = PeftModel.from_pretrained(model, adapters_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
```python
prompt = "What is the largest two-digit integer whose digits are distinct and form a geometric sequence?"
formatted_prompt = f"### Instruction: {prompt} ### Response:"

# Move the inputs to the same device the model was placed on.
inputs = tokenizer(formatted_prompt, return_tensors="pt").to(model.device)
outputs = model.generate(inputs=inputs.input_ids, max_new_tokens=1048)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
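To serve the model without the PEFT wrapper, one option (a sketch, not part of this repo) is to merge the LoRA weights into a half-precision copy of the base model. Merging into 4-bit weights is not supported, so the base is reloaded in bf16 first:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Reload the base model in bf16 (merging requires unquantized weights).
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-1_5",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Fold the adapter weights into the base model and drop the PEFT wrapper.
merged = PeftModel.from_pretrained(base, "aloobun/phi_mini_math23k_v1")
merged = merged.merge_and_unload()
```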
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
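
For reference, the list above corresponds to the following `BitsAndBytesConfig` (a minimal sketch; the `llm_int8_*` fields are left at their defaults, which match the values listed):

```python
import torch
from transformers import BitsAndBytesConfig

# Reconstruction of the training-time quantization config listed above.
# Note: training used float16 compute, while the usage example uses bfloat16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
```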
### Framework versions
- PEFT 0.5.0