zhaozitian committed
Commit aa15794
1 Parent(s): a9b1c69

Update README.md

Files changed (1): README.md (+19, -15)
README.md CHANGED
@@ -1,22 +1,26 @@
 ---
-library_name: peft
+license: cc-by-sa-4.0
+datasets:
+- c-s-ale/alpaca-gpt4-data-zh
+language:
+- en
+- zh
 ---
-## Training procedure
 
+This model is a LoRA adapter file from a fine-tuned Llama-2-13b-chat-hf model. This is an experimental model.
 
-The following `bitsandbytes` quantization config was used during training:
-- load_in_8bit: True
-- llm_int8_threshold: 6.0
-- llm_int8_skip_modules: None
-- llm_int8_enable_fp32_cpu_offload: False
+This model is presented by IceBear-AI.
 
-The following `bitsandbytes` quantization config was used during training:
-- load_in_8bit: True
-- llm_int8_threshold: 6.0
-- llm_int8_skip_modules: None
-- llm_int8_enable_fp32_cpu_offload: False
-### Framework versions
+To run it, you need to:
+- Agree to Meta's license terms to download the Llama-2-13b-chat-hf model from here: https://huggingface.co/meta-llama/Llama-2-13b-chat-hf
+- Clone this repository
+- Clone the Alpaca-LoRA repository from here: https://github.com/tloen/alpaca-lora
+- Use this command to run it:
+  python generate.py \
+    --load_8bit \
+    --base_model 'PATH_TO_YOUR_LOCAL_LLAMA_2_13B_CHAT_HF' \
+    --lora_weights 'PATH_TO_YOUR_LOCAL_FILE_OF_THIS_MODEL'
 
-- PEFT 0.5.0.dev0
+You must agree to Meta's Llama 2 license terms to use this model.
 
-- PEFT 0.5.0.dev0
+If you would like to contact us, please don't hesitate to email icebearai@163.com.
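The run steps added in this commit end in a `generate.py` invocation with several flags. As a minimal sketch, the command line can be assembled programmatically before launching it; the helper name `build_generate_command` is illustrative (not part of the Alpaca-LoRA repo), and the path arguments are the same placeholders you must replace with your local checkouts:

```python
import shlex

def build_generate_command(base_model: str, lora_weights: str, load_8bit: bool = True) -> list[str]:
    """Assemble the argv for Alpaca-LoRA's generate.py as shown in the README."""
    cmd = ["python", "generate.py"]
    if load_8bit:
        cmd.append("--load_8bit")  # run the base model in 8-bit, matching the README command
    cmd += ["--base_model", base_model, "--lora_weights", lora_weights]
    return cmd

cmd = build_generate_command(
    "PATH_TO_YOUR_LOCAL_LLAMA_2_13B_CHAT_HF",
    "PATH_TO_YOUR_LOCAL_FILE_OF_THIS_MODEL",
)
# Print a shell-safe version of the command for copy/paste.
print(shlex.join(cmd))
```

Passing the list form directly to `subprocess.run` avoids shell-quoting issues with paths that contain spaces.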