---
language:
- fi
---
|
|
|
# Jalopeura: A Finnish fine-tune for LLaMA
|
|
|
This fine-tune was trained on the full alpaca-lora dataset, translated to Finnish with gpt-3.5-turbo.
|
|
|
## Usage
|
|
|
See the GitHub repository for the full code: https://github.com/Aciid/jalopeura
|
|
|
```python
from peft import PeftModel
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig

# Load the base LLaMA model and tokenizer
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    load_in_8bit=True,
    device_map="auto",
)

# Apply the Jalopeura LoRA adapter on top of the base model
model = PeftModel.from_pretrained(model, "aciidix/jalopeura-lora-7b")
```
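Since the adapter was trained on an alpaca-lora dataset, prompts should follow the Alpaca instruction template. The sketch below builds such a prompt; the exact template wording (and whether it was translated to Finnish during training) is an assumption here, so check the repository for the template actually used.

```python
def build_prompt(instruction: str, context: str = "") -> str:
    """Build an Alpaca-style prompt (assumed template; verify against
    the Jalopeura repo for the exact wording used in fine-tuning)."""
    if context:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{context}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

# Example: a Finnish instruction, no extra input
prompt = build_prompt("Kerro Suomen pääkaupungista.")
```

The resulting string can then be tokenized with `tokenizer(prompt, return_tensors="pt")` and passed to `model.generate(...)`, optionally with a `GenerationConfig` to set sampling parameters such as `temperature` and `max_new_tokens`.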