# SkyText

SkyText is an open-source Chinese GPT3 pre-trained large language model released by Singularity-AI. It can perform a variety of [tasks](https://openapi.singularity-ai.com/index.html#/examplesIndex) such as chatting, Q&A, and Chinese-English translation.

## Project Highlights

1. Technical advantage 1: more than 30 data-cleaning processes

With the development of NLP technology, large pre-trained models have gradually become one of the core technologies of artificial intelligence. Training such a model requires a huge amount of text, and web text is naturally the most important source of corpus, so the quality of the training corpus directly affects the quality of the model. To train a model with outstanding capabilities, Singularity-AI applied more than 30 cleaning processes to its training data. This attention to detail is what produces the model's strong results.
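
The individual cleaning processes are not published. As a rough, hedged sketch only, a few typical web-text filters (length checks, symbol-ratio checks, exact deduplication) might look like the following; every rule and threshold here is an illustrative assumption, not Singularity-AI's actual pipeline.

```python
# Illustrative web-text cleaning filters; all rules and thresholds are
# made-up examples, not Singularity-AI's actual pipeline.
import hashlib
import re

def clean_corpus(lines):
    seen = set()
    for line in lines:
        text = line.strip()
        # Drop very short fragments such as navigation menus or captions.
        if len(text) < 10:
            continue
        # Drop lines dominated by punctuation or markup rather than words.
        symbol_ratio = len(re.findall(r"[^\w]", text)) / len(text)
        if symbol_ratio > 0.3:
            continue
        # Exact deduplication by content hash.
        digest = hashlib.md5(text.encode("utf-8")).hexdigest()
        if digest in seen:
            continue
        seen.add(digest)
        yield text

docs = [
    "首页 | 登录 | 注册",            # site chrome: mostly symbols, dropped
    "今天天气很好，适合出门散步。",  # normal sentence, kept
    "今天天气很好，适合出门散步。",  # exact duplicate, dropped
]
print(list(clean_corpus(docs)))  # one surviving sentence
```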

2. Technical advantage 2: an optimized and innovative encoding method for Chinese

The field of large pre-trained models has long been dominated by the English-language community, so the importance of a Chinese pre-trained large model is self-evident. Chinese is written in characters rather than phonetic text such as pinyin, so the way a Chinese model encodes its input should clearly differ from English practice. Based on the characteristics of the Chinese language, Singularity-AI has optimized and innovated a unique Chinese encoding method that is more in line with Chinese language habits, and has rebuilt a Chinese vocabulary that is more conducive to model understanding.
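
SkyText's encoding itself is not documented here, but the motivation is easy to demonstrate: a byte-level BPE vocabulary built for English fragments Chinese characters into byte pieces, while a character-oriented Chinese vocabulary does not. The sketch below only illustrates that contrast, using GPT-2's public English tokenizer as a baseline; it is not SkyText's actual encoding method.

```python
# Contrast between an English byte-level BPE and a character-level
# Chinese vocabulary. Illustrative only; NOT SkyText's actual encoding.
from transformers import AutoTokenizer

text = "今天天气很好"  # "The weather is nice today": six Chinese characters

# GPT-2's byte-level BPE was trained on English, so each Chinese
# character is typically split into several byte-fragment tokens.
english_bpe = AutoTokenizer.from_pretrained("gpt2")
print(len(english_bpe.tokenize(text)))  # noticeably more than 6 tokens

# A character-level Chinese vocabulary assigns one id per character,
# which is closer to how Chinese text is actually composed.
char_vocab = {ch: i for i, ch in enumerate(sorted(set(text)))}
print([char_vocab[ch] for ch in text])  # exactly 6 ids, one per character
```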

## News of Singularity-AI

- [2022.12.15] [AIGC Press Conference of Singularity-AI](https://live.vhall.com/v3/lives/subscribe/697547540)

## Installation

```
Recommended:
transformers>=4.18.0
```
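
For example, the recommended version can be installed with pip:

```
pip install "transformers>=4.18.0"
```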

## Model Usage

```python
# -*- coding: utf-8 -*-
from transformers import GPT2LMHeadModel
from transformers import AutoTokenizer
from transformers import TextGenerationPipeline

# Load the SkyTextTiny weights; trust_remote_code=True is required so the
# checkpoint's custom tokenizer code can run.
model = GPT2LMHeadModel.from_pretrained("SkyWork/SkyTextTiny")
tokenizer = AutoTokenizer.from_pretrained("SkyWork/SkyTextTiny", trust_remote_code=True)
# device=0 places generation on the first GPU; use device=-1 for CPU.
text_generator = TextGenerationPipeline(model, tokenizer, device=0)
input_str = "Today is a "
max_new_tokens = 20
# do_sample=True samples stochastically, so each call can differ.
print(text_generator(input_str, max_new_tokens=max_new_tokens, do_sample=True))
```
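
Because do_sample=True samples at random, each call may return a different continuation. The standard transformers sampling parameters (top_k, top_p, temperature; these are library-level generate() options, not SkyText-specific) pass straight through the pipeline, for example:

```python
# Tighter sampling: consider only likely tokens and damp randomness.
print(text_generator(
    input_str,
    max_new_tokens=max_new_tokens,
    do_sample=True,
    top_k=50,        # sample from the 50 most likely next tokens
    top_p=0.9,       # nucleus sampling over 90% probability mass
    temperature=0.8, # < 1.0 makes the distribution more conservative
))
```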

## License

[MIT License]