This is the repository for the 7B-chat pretrained model.
---
## Model Details
We have released the AceGPT family of large language models, a collection of fully fine-tuned generative text models based on LLaMA2, ranging from 7B to 13B parameters. Our models come in two main categories: AceGPT and AceGPT-chat, where AceGPT-chat is an optimized version specifically designed for dialogue applications. Our models have demonstrated superior performance over all currently available open-source Arabic dialogue models in multiple benchmark tests. Furthermore, in our human evaluations, they have shown satisfaction levels comparable to some closed-source models, such as ChatGPT, in the Arabic language.
## Model Developers
We are from the School of Data Science, the Chinese University of Hong Kong, Shenzhen (CUHKSZ), and the Shenzhen Research Institute of Big Data (SRIBD).
## Variations
The AceGPT family comes in two parameter sizes, 7B and 13B; each size is available as a base model and a -chat model.
## Input
Models input text only.
## Output
Models output text only.
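As a minimal illustration of this text-in, text-out interface, the sketch below loads a chat checkpoint with Hugging Face `transformers` and generates a reply. The repository ID, prompt, and generation settings are assumptions for the sketch (the -chat models may also expect a specific prompt template), not an official quickstart.

```python
# Minimal text-in / text-out sketch with Hugging Face transformers.
# The model ID below is an assumption; substitute the actual repository name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FreedomIntelligence/AceGPT-7B-chat"  # assumed repository ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "ما هي عاصمة المملكة العربية السعودية؟"  # "What is the capital of Saudi Arabia?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# The model consumes text and returns text: decode the generated token IDs.
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```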
## Model Evaluation Results
Experiments on Arabic Vicuna-80 and Arabic AlpacaEval. Numbers are the average performance ratio relative to ChatGPT over three runs. We do not report results for raw Llama-2 models since they cannot properly generate Arabic text.
| Model                        | Arabic Vicuna-80 | Arabic AlpacaEval |
|------------------------------|------------------|-------------------|
| Phoenix (Chen et al., 2023a) | 71.92% ± 0.2%    | 65.62% ± 0.3%     |
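To make the metric concrete, a score of 100% would mean parity with ChatGPT on the benchmark. The sketch below shows one way such a "average performance ratio over three runs" could be computed; the per-run scores and variable names are made-up illustrations, not the authors' evaluation code.

```python
# Illustrative computation of the "average performance ratio relative to
# ChatGPT over three runs" metric; the scores below are hypothetical numbers.
from statistics import mean, stdev

# Per-run benchmark scores (e.g., judge scores on Arabic Vicuna-80).
model_scores = [6.8, 7.0, 6.9]    # hypothetical evaluated-model scores
chatgpt_scores = [9.5, 9.6, 9.6]  # hypothetical ChatGPT scores, same runs

ratios = [m / c * 100 for m, c in zip(model_scores, chatgpt_scores)]
print(f"{mean(ratios):.2f}% ± {stdev(ratios):.1f}%")
```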