atsuki-yamaguchi committed
Commit ffa66e6 (1 parent: 810c426)

Update README.md

Files changed (1):
  1. README.md +33 -4

README.md:
---
library_name: peft
license: mit
language:
- ar
---
BLOOM-7B Arabic [LAPT + CLP+]
===

## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "atsuki-yamaguchi/bloom-7b1-clpp-ar"
)
# Assumption: the CLP+-adapted tokenizer is hosted in the same repository.
tokenizer = AutoTokenizer.from_pretrained(
    "atsuki-yamaguchi/bloom-7b1-clpp-ar"
)
```
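
A minimal generation sketch follows; the prompt and generation settings are illustrative assumptions, not part of the original card:

```python
# Illustrative only: prompt and max_new_tokens are assumptions.
inputs = tokenizer("مرحبا", return_tensors="pt").to(model.device)  # "Hello" in Arabic
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```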

## Citation
```
@article{yamaguchi2024empirical,
    title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
    author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
    journal={ArXiv},
    year={2024},
    volume={abs/2402.10712},
    url={https://arxiv.org/abs/2402.10712}
}
```

## Link
For more details, please visit https://github.com/gucci-j/llm-cva

## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
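
For reference, a sketch of an equivalent `BitsAndBytesConfig` reconstructed from the values listed above (not a file shipped with this repo; the 4-bit fields are reported but inert when loading in 8-bit):

```python
import torch
from transformers import BitsAndBytesConfig

# Reconstructed from the config listed above; only load_in_8bit is active here.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
```

Such a config would be passed as `quantization_config=bnb_config` to `from_pretrained` when reproducing this setup.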

### Framework versions
- PEFT 0.5.0