Update README.md
README.md CHANGED
@@ -13,9 +13,9 @@ This is the repository for the version 2 of the 70B pre-trained model, developed
 ---
 ## Model Details
-We have released the MgGPT family of large language models, which is a collection of fully fine-tuned generative text models, ranging from 8B to 70B parameters. Our models include two main categories:
-## Model Developers
-We are from the King Abdullah University of Science and Technology (KAUST), the Chinese University of Hong Kong, Shenzhen (CUHKSZ) and the Shenzhen Research Institute of Big Data (SRIBD).
+We have released the MgGPT family of large language models, a collection of fully fine-tuned generative text models ranging from 8B to 70B parameters. Our models fall into two main categories: MgGPT and MgGPT-chat. MgGPT-chat is an optimized version designed specifically for dialogue applications. Notably, our models outperform all currently available open-source Arabic dialogue models on multiple benchmarks, and in our human evaluations they achieve satisfaction levels in Arabic comparable to some closed-source models, such as ChatGPT.
+<!-- ## Model Developers
+We are from the King Abdullah University of Science and Technology (KAUST), the Chinese University of Hong Kong, Shenzhen (CUHKSZ) and the Shenzhen Research Institute of Big Data (SRIBD). -->
 ## Variations
 The MgGPT family comes in a range of parameter sizes (8B, 13B, 32B and 70B); each size has a base category and a -chat category.
 <!-- ## Paper -->
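For context on the base/-chat split described under "Variations" in the diff above, here is a minimal sketch of how one of the -chat checkpoints might be loaded with Hugging Face transformers. The repository ID `MgGPT-13B-chat` is a placeholder assumed for illustration; the README does not specify published hub IDs or an inference API.

```python
# Minimal sketch: loading a hypothetical MgGPT -chat checkpoint with
# Hugging Face transformers. The model ID below is a placeholder, not
# an ID confirmed by the README.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MgGPT-13B-chat"  # hypothetical repository ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # requires the `accelerate` package
)

# -chat variants are tuned for dialogue, so a chat-style prompt is used.
messages = [{"role": "user", "content": "Introduce yourself in Arabic."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

A base-category checkpoint would be loaded the same way but prompted with plain text rather than a chat template, since it has not been optimized for dialogue.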