huyhuyvu01 committed c67e39c (1 parent: 4f2b74a)

Update README.md

Files changed (1): README.md (+60 -0)
---
license: llama2
language:
- vi
- en
---

Starting from vilm/vinallama-7b-chat, I continued pretraining on law and online public-service documents crawled from VBPL.

### Training process
The model was pretrained on a single A600 system.

Hyperparameters are set as follows:
- Training regime: BFloat16 mixed precision
- LoRA config:

```
{
  "base_model_name_or_path": "vilm/vinallama-7b-chat",
  "bias": "none",
  "enable_lora": null,
  "fan_in_fan_out": false,
  "inference_mode": true,
  "lora_alpha": 32.0,
  "lora_dropout": 0.05,
  "merge_weights": false,
  "modules_to_save": [
    "embed_tokens",
    "lm_head"
  ],
  "peft_type": "LORA",
  "r": 8,
  "target_modules": [
    "q_proj",
    "v_proj",
    "k_proj",
    "o_proj",
    "gate_proj",
    "down_proj",
    "up_proj"
  ],
  "task_type": "CAUSAL_LM"
}
```
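
For reference, the configuration above is a standard `peft` LoRA setup. Below is a minimal sketch of how an equivalent config could be rebuilt and attached to the base model; it is not the original training script, and the data pipeline and trainer arguments are omitted:

```python
# Minimal sketch: rebuild the LoRA configuration above with peft and
# attach it to the base model. Not the original training script.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained(
    "vilm/vinallama-7b-chat",
    torch_dtype=torch.bfloat16,  # matches the BFloat16 mixed-precision regime
)

lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj",
                    "gate_proj", "down_proj", "up_proj"],
    modules_to_save=["embed_tokens", "lm_head"],  # embeddings and LM head trained fully
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```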

Please note that **this model requires further supervised fine-tuning (SFT)** before it can be used in practice.

For usage and other considerations, please refer to the [Llama 2](https://github.com/facebookresearch/llama) repository.
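
As a rough illustration, the released adapter could be loaded on top of the base model with `peft` as sketched below before running your own SFT. Note that `<this-repo-id>` is a placeholder, not a real identifier; substitute this model's actual repository id.

```python
# Minimal sketch: attach the released LoRA adapter to the base model.
# "<this-repo-id>" is a placeholder; replace it with this model's actual repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "vilm/vinallama-7b-chat",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("vilm/vinallama-7b-chat")

# Load the LoRA adapter; the resulting model is ready for further SFT or evaluation.
model = PeftModel.from_pretrained(base, "<this-repo-id>")
```

As noted above, the adapter is intended as a starting point and still needs supervised fine-tuning before practical use.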

### Training loss
To be updated.

### Disclaimer

This project is built upon vilm/vinallama-7b-chat, which is in turn built upon Meta's Llama 2 model. It is essential to strictly adhere to the open-source license agreement of Llama 2 when using this model. If you incorporate third-party code, please ensure compliance with the relevant open-source license agreements.
Note that the content generated by the model may be influenced by various factors, such as calculation methods, random elements, and potential inaccuracies in quantization. Consequently, this project offers no guarantees regarding the accuracy of the model's outputs and disclaims any responsibility for consequences resulting from the use of the model's resources and outputs.
Developers employing the models from this project for commercial purposes must adhere to local laws and regulations to ensure the compliance of the model's output content. This project is not accountable for any products or services derived from such usage.

### Contact
huyhuyvu01@gmail.com (personal email)
https://github.com/huyhuyvu01 (GitHub)