zhaozitian committed
Commit
bf456b4
1 Parent(s): d19b4cb

Update README.md

Files changed (1)
  1. README.md +18 -17
README.md CHANGED
@@ -1,32 +1,33 @@
  ---
- library_name: peft
+ license: cc-by-sa-4.0
+ datasets:
+ - izumi-lab/llm-japanese-dataset
+ language:
+ - ja
+ - en
  ---
- ## Training procedure

+ # This model is a Llama2-7b-chat-hf model fine-tuned on a Japanese dataset with LoRA.
+ ## The model was fine-tuned through the joint efforts of Sparticle Inc. and A. I. Hakusan Inc.

- The following `bitsandbytes` quantization config was used during training:
- - load_in_8bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
-
- The following `bitsandbytes` quantization config was used during training:
- - load_in_8bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- ### Framework versions
-
- - PEFT 0.5.0.dev0
-
- - PEFT 0.5.0.dev0
- The following `bitsandbytes` quantization config was used during training:
+ The training set of this model contains:
+
+ 5% of randomly chosen data from llm-japanese-dataset by izumi-lab (a sampling sketch follows the diff).
+
+ The Japanese-alpaca-lora dataset, retrieved from https://github.com/masa3141/japanese-alpaca-lora/tree/main
+
+ For inference, please follow the instructions at https://github.com/tloen/alpaca-lora/ (see the loading sketch after the diff).
+
+ ## Training procedure
+
+ The following `bitsandbytes` quantization config was used during training (shown as code after the diff):
  - load_in_8bit: True
  - llm_int8_threshold: 6.0
  - llm_int8_skip_modules: None
  - llm_int8_enable_fp32_cpu_offload: False
+
  ### Framework versions

  - PEFT 0.5.0.dev0

- - PEFT 0.5.0.dev0
+ You must agree to Meta's license agreement when using this LoRA adapter with Llama-2.
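
The new README says 5% of llm-japanese-dataset was randomly chosen for training. A minimal sketch of such a draw with the Hugging Face `datasets` library; both the `train` split name and the seed are assumptions, since the card does not say how the sample was taken:

```python
from datasets import load_dataset

# Load the izumi-lab instruction dataset named in the model card.
ds = load_dataset("izumi-lab/llm-japanese-dataset", split="train")

# Keep a random 5% subset. The split name and seed are assumptions;
# the card does not specify how the sample was drawn.
subset = ds.shuffle(seed=42).select(range(int(0.05 * len(ds))))
print(f"kept {len(subset)} of {len(ds)} examples")
```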
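For inference, the card defers to the tloen/alpaca-lora instructions. A minimal PEFT-based loading sketch under stated assumptions: `adapter_id` is a placeholder for this repository's id, and the alpaca-style prompt template is assumed from alpaca-lora, not confirmed by the card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"  # gated: requires accepting Meta's license
adapter_id = "path/to/this-lora-adapter"   # placeholder: substitute this repository's id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, load_in_8bit=True, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights

# "What is the capital of Japan?" in an alpaca-style template (assumed, per alpaca-lora).
prompt = "### 指示:\n日本の首都はどこですか。\n\n### 回答:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```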
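The four values under "Training procedure" map one-to-one onto `transformers`' `BitsAndBytesConfig`. A sketch of recreating that quantization setup when loading the base model; the base-model id is taken from the card's description, not from repo metadata:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirror the four bitsandbytes values listed in the model card.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",  # base model, per the card's description
    quantization_config=bnb_config,
    device_map="auto",
)
```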