Update README.md
README.md CHANGED
@@ -34,13 +34,13 @@ The large generation of language models focuses on optimizing excellent reasonin

# Introduction

-Ghost 7B Alpha is a large language model fine-tuned from Mistral 7B, with a size of 7 billion parameters. The model was developed with the goal of optimizing reasoning ability, multi-task knowledge and supporting tool usage. The model works well with the main trained and optimized languages being English and Vietnamese.
+**Ghost 7B Alpha** is a large language model fine-tuned from Mistral 7B, with a size of 7 billion parameters. The model was developed with the goal of optimizing reasoning ability, multi-task knowledge and supporting tool usage. The model works well with the main trained and optimized languages being English and Vietnamese.

Overall, the model is suitable when making a pretrained version so you can continue to develop the desired tasks, develop virtual assistants, perform features on tasks such as coding, translation, answering questions, creating documents, etc. It is truly an efficient, fast and extremely cheap open model.

## Specifications

-- Name: Ghost 7B Alpha
+- Name: **Ghost 7B Alpha**.
- Model size: 7 billion parameters.
- Context length: 8K, 8192.
- Languages: English and Vietnamese.

@@ -48,7 +48,7 @@ Overall, the model is suitable when making a pretrained version so you can conti

- License: [Ghost 7B LICENSE AGREEMENT](https://ghost-x.vercel.app/ghost-7b-license).
- Based on: Mistral 7B.
- Distributions: Standard (BF16), GUFF, AWQ.
-- Developed by: Ghost X
+- Developed by: **Ghost X**, [Hieu Lam](https://huggingface.co/lamhieu).

## Distributions

@@ -182,7 +182,7 @@ In summary, the self-driving car's priority should be to protect the lives of pe

</details>

-A reasoning question suddenly popped up during the process of writing an article announcing information about Ghost 7B Alpha
+A reasoning question suddenly popped up during the process of writing an article announcing information about **Ghost 7B Alpha**. The model gave an impressive answer, at least to its creator.

<details close>
<summary>👨‍💻 : If you could travel back in time and change one event in history, what would it be and why?</summary>

@@ -951,7 +951,7 @@ The content of this document will be updated soon. Documentation will guide use

## Deployments

-The models developed by Ghost X have the same goal of being easy to integrate and use in practice to save costs and facilitate development for the community and startups.
+The models developed by **Ghost X** have the same goal of being easy to integrate and use in practice to save costs and facilitate development for the community and startups.

For production deployment with small to large infrastructure, please see more detailed instructions in [this article](https://ghost-x.vercel.app/docs/guides/deployments/). The article will provide the most common and effective deployment solutions with leading, trusted libraries such as vLLM and more. In addition, it also has information about more optimal solutions and methods depending on each need to be able to choose the appropriate solution.

@@ -969,7 +969,7 @@ The results of this evaluation will be updated soon.

MT-bench is a challenging multi-turn question set designed to evaluate the conversational and instruction-following ability of models. [[source from lmsys.org]](https://lmsys.org/blog/2023-06-22-leaderboard)

-Ghost 7B Alpha achieved a decent score for the MT-Bench review, we worked hard to balance the reasoning ability and linguistic insight of both primary languages, English and Vietnamese. Overall, it was able to outperform some large language models such as tulu-30b, guanaco-65b, and mpt-30b-chat which are many times larger.
+**Ghost 7B Alpha** achieved a decent score for the MT-Bench review, we worked hard to balance the reasoning ability and linguistic insight of both primary languages, English and Vietnamese. Overall, it was able to outperform some large language models such as tulu-30b, guanaco-65b, and mpt-30b-chat which are many times larger.

| Model                 | Score        |
| --------------------- | ------------ |

@@ -1037,17 +1037,17 @@ Good friends, who have accompanied the project, Luan Nguyen and Phu Tran.

## Confidence

-In addition to the Ghost 7B Alpha project, Ghost X always wants to develop and improve many better models in the future, better supporting the community and businesses with the most openness possible.
+In addition to the **Ghost 7B Alpha** project, **Ghost X** always wants to develop and improve many better models in the future, better supporting the community and businesses with the most openness possible.

-Revealing the Ghost 7B Beta project plan. This model is expected to outperform with a deeper focus on multi-tasking, math, and reasoning. Along with that is the ability to expand context length and support other languages (highly requested languages).
+Revealing the **Ghost 7B Beta** project plan. This model is expected to outperform with a deeper focus on multi-tasking, math, and reasoning. Along with that is the ability to expand context length and support other languages (highly requested languages).

The organization is being operated and developed by [Hieu Lam](https://huggingface.co/ghost-x)'s personal resources, if there is any development support or consulting request. Please feel free to contact the organization, we are very happy about this. Directly via email: [ghostx.ai.team@gmail.com](mailto:ghostx.ai.team@gmail.com).

-Ghost X is happy to support providing models for server providers, aiming to help startups develop better.
+**Ghost X** is happy to support providing models for server providers, aiming to help startups develop better.

-## Contact
+## Contact

-Follow Ghost X to stay updated with the latest information.
+Follow **Ghost X** to stay updated with the latest information.

- Twitter/X via [@ghostx_ai](https://twitter.com/ghostx_ai).
- HuggingFace via [@ghost-x](https://huggingface.co/ghost-x).
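
The Deployments section of the README points to vLLM for production serving. As a minimal sketch of what that looks like (the Hugging Face model id `ghost-x/ghost-7b-alpha` is an assumed placeholder, not confirmed by this commit; check the organization's Hugging Face page for the published name):

```shell
# Install vLLM (requires a CUDA-capable GPU).
pip install vllm

# Start an OpenAI-compatible server; --max-model-len matches the 8K context
# stated in the Specifications list, and --dtype matches the BF16 distribution.
python -m vllm.entrypoints.openai.api_server \
  --model ghost-x/ghost-7b-alpha \
  --dtype bfloat16 \
  --max-model-len 8192

# Query the server through the standard chat completions endpoint.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "ghost-x/ghost-7b-alpha",
       "messages": [{"role": "user", "content": "Xin chào!"}]}'
```

Any OpenAI-compatible client can then talk to the server, which is what makes this route convenient for the startup-oriented integrations the README describes.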