---
license: mit
language: de
---
Mistral-7B German [LAPT]
===

Mistral-7B v0.1 adapted to German with language adaptive pre-training (LAPT). The adapted weights are distributed as a PEFT adapter that is loaded on top of the base `mistralai/Mistral-7B-v0.1` model.

## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# w/ CPU
model = AutoPeftModelForCausalLM.from_pretrained(
    "atsuki-yamaguchi/Mistral-7B-v0.1-lapt-de"
)
# The tokenizer comes from the base model
tokenizer = AutoTokenizer.from_pretrained(
    "mistralai/Mistral-7B-v0.1"
)

# w/ GPU (8-bit loading requires the bitsandbytes package)
model = AutoPeftModelForCausalLM.from_pretrained(
    "atsuki-yamaguchi/Mistral-7B-v0.1-lapt-de",
    device_map="auto",
    load_in_8bit=True,
)
```
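Once loaded, the adapter behaves like any Hugging Face causal language model. Below is a minimal generation sketch that is not part of the original card; the German prompt is purely illustrative.

```python
# Tokenize an example German prompt and move it to the model's device
inputs = tokenizer("Der Klimawandel ist", return_tensors="pt").to(model.device)

# Generate a short continuation and decode it back to text
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```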
## Citation
```
@article{yamaguchi2024empirical,
  title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference}, 
  author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
  journal={ArXiv},
  year={2024},
  volume={abs/2402.10712},
  url={https://arxiv.org/abs/2402.10712}
}
```

## Link
For more details, please visit the project repository: https://github.com/gucci-j/llm-cva