ldwang committed
Commit
f362f93
1 Parent(s): 749b6a5

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +55 -0
README.md ADDED
---
license: other
---

![Aquila_logo](./log.jpeg)

<h4 align="center">
    <p>
        <b>English</b> |
        <a href="https://huggingface.co/BAAI/AquilaChat2-7B/blob/main/README_zh.md">简体中文</a>
    </p>
</h4>

We open-source our **Aquila2** series, which now includes the base language models **Aquila2-7B** and **Aquila2-34B**, the chat models **AquilaChat2-7B** and **AquilaChat2-34B**, and the long-text chat models **AquilaChat2-7B-16k** and **AquilaChat2-34B-16k**.

Additional details of the Aquila2 models will be presented in the official technical report. Please stay tuned for updates on official channels.

## Quick Start: AquilaChat2-7B (Chat model)

### 1. Inference

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

# predict.py is distributed alongside the model weights in this repository.
from predict import predict

device = torch.device("cuda:0")
model_info = "BAAI/AquilaChat2-7B"

tokenizer = AutoTokenizer.from_pretrained(model_info, trust_remote_code=True)

# Optional 4-bit NF4 quantization config (requires bitsandbytes).
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_info,
    trust_remote_code=True,
    torch_dtype=torch.float16,
    # quantization_config=quantization_config,  # Uncomment this line for 4-bit quantization
)
model.eval()
model.to(device)

text = "请给出10个要到北京旅游的理由。"  # "Please give 10 reasons to visit Beijing."
out = predict(model, text, tokenizer=tokenizer, max_gen_len=200, top_p=0.95,
              seed=1234, topk=100, temperature=0.9, sft=True, device=device,
              model_name="AquilaChat2-7B")
print(out)
```
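
If the repository's `predict.py` helper is not on your path, generation can also be run with the standard Hugging Face `generate()` API. The sketch below is only an illustration under that assumption, not the official recipe: it does not apply the chat/SFT prompt formatting that `predict(..., sft=True)` handles, so outputs may differ from the quick-start example above.

```python
# Minimal sketch (assumption): plain generate() without the repo's predict.py helper.
# Sampling parameters roughly mirror the quick-start call above.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

device = torch.device("cuda:0")
model_info = "BAAI/AquilaChat2-7B"

tokenizer = AutoTokenizer.from_pretrained(model_info, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_info, trust_remote_code=True, torch_dtype=torch.float16
).to(device).eval()

prompt = "请给出10个要到北京旅游的理由。"  # "Please give 10 reasons to visit Beijing."
inputs = tokenizer(prompt, return_tensors="pt").to(device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=200,
        do_sample=True,
        top_p=0.95,
        top_k=100,
        temperature=0.9,
    )

# Strip the prompt tokens and decode only the newly generated continuation.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

For chat-style responses, the `predict` helper shown in the quick start above remains the reference path, since it applies the model's conversation formatting.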

## License

The Aquila2 series of open-source models is licensed under the [BAAI Aquila Model Licence Agreement](https://huggingface.co/BAAI/AquilaChat2-7B/blob/main/BAAI-Aquila-Model-License%20-Agreement.pdf).