---
language:
  - fi
---

# Jalopeura: A Finnish fine-tune for LLaMA

At this time, the model has been trained on 15,000 prompts. The full alpaca-lora fine-tune will be uploaded at a later date.

## Usage

See the GitHub repository for the code: https://github.com/Aciid/jalopeura

```python
import torch
from peft import PeftModel
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig

# Load the base LLaMA-7B tokenizer and model in 8-bit precision
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Apply the Jalopeura LoRA adapter on top of the base model
model = PeftModel.from_pretrained(model, "aciidix/jalopeura-lora-7b")
```
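
With the adapter loaded, generation works like any other causal LM. The sketch below is a minimal example, assuming an Alpaca-style prompt template and generation settings typical for alpaca-lora; check the GitHub repository for the exact template and parameters used in training.

```python
# Minimal generation sketch; the prompt format and sampling settings below are
# assumptions, not taken from this model card.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nKerro lyhyesti Suomen historiasta.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

generation_config = GenerationConfig(temperature=0.1, top_p=0.75, num_beams=4)
with torch.no_grad():
    output = model.generate(
        input_ids=inputs["input_ids"],
        generation_config=generation_config,
        max_new_tokens=256,
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))
```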