Update README.md #1
by iwiwi · opened

README.md CHANGED
@@ -7,12 +7,12 @@ language:
 
 # 🐟 EvoLLM-JP-v1-7B
 
-🤗 [Models](https://huggingface.co/SakanaAI) | 📚 [Paper](
+🤗 [Models](https://huggingface.co/SakanaAI) | 📚 [Paper](TODO) | 📝 [Blog](TODO) | 🐦 [Twitter](https://twitter.com/SakanaAILabs)
 
 
 <!-- Provide a quick summary of what the model is/does. -->
 
-**EvoLLM-JP-v1-7B** is an experimental general-purpose Japanese LLM. This model was created using the Evolutionary Model Merge method. Please refer to our [report](
+**EvoLLM-JP-v1-7B** is an experimental general-purpose Japanese LLM. This model was created using the Evolutionary Model Merge method. Please refer to our [report](TODO) and [blog](TODO) for more details. This model was produced by merging the following models. We are grateful to the developers of the source models.
 
 - [Shisa Gamma 7B v1](https://huggingface.co/augmxnt/shisa-gamma-7b-v1)
 - [WizardMath 7B V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
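(Editor's note: the Evolutionary Model Merge method referenced above combines the weights of the two source models listed in this hunk. As a rough illustration only, not SakanaAI's actual recipe (the linked report describes the real search space), naive parameter-space merging reduces to a weighted average of two compatible checkpoints:)

```python
# Illustrative sketch of naive parameter-space model merging: a weighted
# average of the parameters of two models with identical architectures.
# The linear_merge helper and alpha coefficient are hypothetical; the
# evolutionary method searches over much richer merge configurations.

def linear_merge(model_a, model_b, alpha=0.5):
    """In place, set each parameter of model_a to alpha*a + (1 - alpha)*b."""
    params_b = dict(model_b.named_parameters())
    for name, param_a in model_a.named_parameters():
        param_a.data.mul_(alpha).add_(params_b[name].data, alpha=1.0 - alpha)
    return model_a

# Hypothetical usage with the two source models named in the card:
# from transformers import AutoModelForCausalLM
# base = AutoModelForCausalLM.from_pretrained("augmxnt/shisa-gamma-7b-v1")
# math = AutoModelForCausalLM.from_pretrained("WizardLM/WizardMath-7B-V1.1")
# merged = linear_merge(base, math, alpha=0.5)
```

The evolutionary method treats such merge coefficients as a genome and searches for the configuration that maximizes benchmark performance; see the linked report for the actual recipe.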
@@ -67,17 +67,10 @@ print(generated_text)
 - **Language(s):** Japanese
 - **License:** [MICROSOFT RESEARCH LICENSE TERMS](./LICENSE) (due to the inclusion of the WizardMath model)
 - **Repository:** [SakanaAI/evolutionary-model-merge](https://github.com/SakanaAI/evolutionary-model-merge)
-- **Paper:**
-- **Blog:**
+- **Paper:** TODO
+- **Blog:** TODO
 
 
-## Uses
-This model is provided for research and development purposes only and should be considered as an experimental prototype.
-It is not intended for commercial use or deployment in mission-critical environments.
-Use of this model is at the user's own risk, and its performance and outcomes are not guaranteed.
-Sakana AI shall not be liable for any direct, indirect, special, incidental, or consequential damages, or any loss arising from the use of this model, regardless of the results obtained.
-Users must fully understand the risks associated with the use of this model and use it at their own discretion.
-
 
 ## Acknowledgement
 
@@ -91,9 +84,9 @@ We would like to thank the developers of the source models for their contributions.
   title = {Evolutionary Optimization of Model Merging Recipes},
   author = {Takuya Akiba and Makoto Shing and Yujin Tang and Qi Sun and David Ha},
   year = {2024},
-  eprint = {
+  eprint = {TODO},
   archivePrefix = {arXiv},
-  primaryClass = {cs.
+  primaryClass = {cs.CV}
 }
 ```
 
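(Editor's note: the second hunk's header shows `print(generated_text)`, the tail of the model card's usage example, which this diff leaves untouched. A minimal sketch of such a snippet, assuming the model id `SakanaAI/EvoLLM-JP-v1-7B` and the stock `transformers` generation API rather than the card's exact code:)

```python
# Minimal usage sketch (assumed, not the README's exact snippet): load the
# merged model and generate a completion for a Japanese prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SakanaAI/EvoLLM-JP-v1-7B"  # assumed Hugging Face model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# "Hello. Please tell me about Japan's four seasons."
prompt = "こんにちは。日本の四季について教えてください。"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens, not the echoed prompt.
generated_text = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(generated_text)
```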