For a demonstration, see `demo.ipynb` in the repository files.
To use the checkpoint:

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM

# Load the adapter config, which records the base model it was trained on
config = PeftConfig.from_pretrained("TorpilleAlpha/scanpy-llama")

# Load the base model (or point this at your local Llama-2-7B-chat shards)
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

# Wrap the base model with the LoRA adapter
model = PeftModel.from_pretrained(model, "TorpilleAlpha/scanpy-llama")
```
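Once the adapter is attached, inference works like any other `transformers` causal LM. Below is a minimal generation sketch; the tokenizer choice and the example prompt are assumptions, not part of this card, and Llama-2 chat models generally expect the `[INST]` prompt template:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

# Hypothetical example prompt, wrapped in the Llama-2 chat template
prompt = "[INST] How do I filter low-quality cells in scanpy? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```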
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
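For reference, these values map onto `transformers`' `BitsAndBytesConfig` roughly as follows; this is a reconstruction from the list above, not the original training script:

```python
from transformers import BitsAndBytesConfig

# Reconstructed from the values listed above (8-bit loading enabled;
# the 4-bit fields are inactive since load_in_4bit=False)
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype="float32",
)
```

Such a config would be passed as `quantization_config=bnb_config` to `AutoModelForCausalLM.from_pretrained` when loading the base model.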
## Framework versions

- PEFT 0.5.0