---
license: other
---

![Aquila_logo](./log.jpeg)

<h4 align="center">
<p>
<b>English</b> |
<a href="https://huggingface.co/BAAI/Aquila2-7B/blob/main/README_zh.md">简体中文</a>
</p>
</h4>

The Aquila language model is the first open-source language model that combines Chinese and English knowledge, a commercial license agreement, and compliance with domestic data regulations.

- 🌟 **Supports open-source commercial licenses**. The source code of the Aquila series models is released under the [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0), while the model weights are released under the [BAAI Aquila Model License Agreement](https://huggingface.co/BAAI/Aquila-7B/resolve/main/BAAI%20Aquila%20Model%20License%20Agreement.pdf). Users may use the models for commercial purposes as long as they comply with the license terms.

- ✍️ **Possesses Chinese and English knowledge**. The Aquila series models are trained from scratch on a high-quality corpus of Chinese and English text, with Chinese corpora accounting for about 40%, ensuring that the models accumulate native Chinese world knowledge during pre-training rather than translated knowledge.

- 👮‍♀️ **Complies with domestic data regulations**. The Chinese corpora of the Aquila series models come from the Chinese datasets that Intelligence Source has accumulated over the years, including Chinese internet data from over 10,000 sources (more than 99% of which are domestic), as well as high-quality Chinese literature and book data provided by authoritative domestic organizations. We will continue to accumulate high-quality, diverse datasets and incorporate them into subsequent training of the Aquila base models.

- 🎯 **Continuous improvement and open sourcing**. We will continue to improve the training data, optimize training methods, and enhance model performance, cultivating a flourishing "model tree" on a stronger base-model foundation and continuously releasing updated open-source versions.

Additional details of the Aquila models will be presented in the official technical report. Please stay tuned for updates on our official channels.

## Quick Start Aquila2-7B

### 1. Inference

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_info = "BAAI/Aquila2-7B"
tokenizer = AutoTokenizer.from_pretrained(model_info, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_info, trust_remote_code=True)
model.eval()
model.to("cuda:0")

text = "汽车EDR是什么"  # "What is an automotive EDR (event data recorder)?"

# Tokenize the prompt and drop the trailing special token so that
# generation continues from the prompt rather than stopping at once.
tokens = tokenizer.encode_plus(text)["input_ids"][:-1]

# Add a batch dimension and move the input onto the same device as the model.
tokens = torch.tensor(tokens)[None, :].to("cuda:0")

with torch.no_grad():
    # 100007 is the end-of-text token id passed as eos_token_id for Aquila.
    out = model.generate(tokens, do_sample=True, max_length=512, eos_token_id=100007)[0]

out = tokenizer.decode(out.cpu().numpy().tolist())
print(out)
```
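
If GPU memory is tight, the weights can optionally be loaded in half precision. This is a minimal sketch using the standard `torch_dtype` argument of `transformers` (not part of the original example); the rest of the inference code above works unchanged:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_info = "BAAI/Aquila2-7B"
tokenizer = AutoTokenizer.from_pretrained(model_info, trust_remote_code=True)

# Assumption: loading in bfloat16 roughly halves GPU memory compared with
# the default float32, usually with little effect on generation quality.
model = AutoModelForCausalLM.from_pretrained(
    model_info,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
)
model.eval()
model.to("cuda:0")
```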

## License

The Aquila-7B and AquilaChat-33B open-source models are licensed under the [BAAI Aquila Model License Agreement](https://huggingface.co/BAAI/Aquila-7B/resolve/main/BAAI%20Aquila%20Model%20License%20Agreement.pdf).