atsuki-yamaguchi committed on
Commit a08a16f
1 Parent(s): 7c296fc

Update README.md

Files changed (1)
  1. README.md +30 -4
README.md CHANGED
@@ -1,9 +1,35 @@
 ---
-library_name: peft
+license: mit
+language:
+- ar
 ---
-## Training procedure
+TigerBot-7B LAPT + Heuristics Arabic
+===
 
-### Framework versions
+## How to use
+```python
+from peft import AutoPeftModelForCausalLM
+from transformers import AutoTokenizer
 
+model = AutoPeftModelForCausalLM.from_pretrained(
+    "atsuki-yamaguchi/tigerbot-7b-base-heuristics-ar"
+)
+tokenizer = AutoTokenizer.from_pretrained(
+    "atsuki-yamaguchi/tigerbot-7b-base-heuristics-ar"
+)
+```
 
+## Citation
+```
+@article{yamaguchi2024empirical,
+  title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
+  author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
+  journal={ArXiv},
+  year={2024},
+  volume={abs/2402.10712},
+  url={https://arxiv.org/abs/2402.10712}
+}
+```
+
+## Link
+For more details, please visit https://github.com/gucci-j/llm-cva
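
Usage note: the "How to use" snippet in the updated README only loads the PEFT adapter and tokenizer. As a minimal sketch of how generation could then proceed with the standard `transformers` `generate` API, under the assumption that the loading code above works as shown; the prompt and decoding settings below are illustrative and not taken from the model card:

```python
# Minimal generation sketch, assuming the loading code from the "How to use" section.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "atsuki-yamaguchi/tigerbot-7b-base-heuristics-ar"
)
tokenizer = AutoTokenizer.from_pretrained(
    "atsuki-yamaguchi/tigerbot-7b-base-heuristics-ar"
)

prompt = "مرحبا"  # illustrative Arabic prompt, not from the model card
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=50)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```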