This is the repository for the 13B pretrained model.
---

## Model Details
We have released the AceGPT family of large language models, a collection of fully fine-tuned generative text models based on LLaMA2, ranging from 7B to 13B parameters. The family includes two main categories: AceGPT and AceGPT-chat, the latter an optimized version designed specifically for dialogue applications. Our models have demonstrated superior performance over all currently available open-source Arabic dialogue models across multiple benchmarks, and in our human evaluations they achieved satisfaction levels comparable to some closed-source models, such as ChatGPT, in Arabic.

## Model Developers
We are from the School of Data Science, the Chinese University of Hong Kong, Shenzhen (CUHKSZ), and the Shenzhen Research Institute of Big Data (SRIBD).

## Variations
The AceGPT family comes in two parameter sizes, 7B and 13B; each size is available as a base model and a -chat model.

## Input
Models input text only.

## Output
Models output text only.

## Model Evaluation Results
Experiments on Arabic MMLU and EXAMs. 'Average', 'STEM', 'Humanities', 'Social Sciences' and 'Others (Business, Health, Misc)' belong to Arabic MMLU. The best performance is in bold and the second best is underlined.

| Model | Average | STEM | Humanities | Social Sciences | Others (Business, Health, Misc) | EXAMs |
|-------|---------|------|------------|-----------------|---------------------------------|-------|