---
language:
- fi
---
# Jalopeura: A Finnish fine-tune for LLaMA
This fine-tune was trained on the full alpaca-lora dataset, translated into Finnish with gpt-3.5-turbo.
## Usage
See the GitHub repo for the training and inference code: https://github.com/Aciid/jalopeura
```python
from peft import PeftModel
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig

# Load the base LLaMA tokenizer and model in 8-bit (requires bitsandbytes and accelerate)
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    load_in_8bit=True,
    device_map="auto",
)

# Apply the Jalopeura LoRA adapter weights on top of the base model
model = PeftModel.from_pretrained(model, "aciidix/jalopeura-lora-7b")
```
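
Once the adapter is loaded, generation follows the usual transformers flow. The sketch below is a minimal example: the Alpaca-style prompt template, the sample Finnish instruction, and the generation settings are assumptions for illustration, so adjust them to match the actual prompt format used in the GitHub repo above.

```python
import torch

# Assumed Alpaca-style prompt; the instruction is Finnish for "Tell briefly about the history of Finland."
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nKerro lyhyesti Suomen historiasta.\n\n### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Example sampling settings; tune these to taste
generation_config = GenerationConfig(
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    max_new_tokens=256,
)

with torch.no_grad():
    output = model.generate(
        input_ids=inputs["input_ids"],
        generation_config=generation_config,
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))
```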