---
library_name: transformers
license: other
language:
- ja
---
# 🐟 EvoLLM-JP-v1-7B
🤗 [Models](https://huggingface.co/SakanaAI) | 📚 [Paper](TODO) | 📝 [Blog](TODO) | 🐦 [Twitter](https://twitter.com/SakanaAILabs)
**EvoLLM-JP-v1-7B** is an experimental general-purpose Japanese LLM created with the Evolutionary Model Merge method. Please refer to our [report](TODO) and [blog](TODO) for details. The model was produced by merging the following source models, and we are grateful to their developers (an illustrative sketch of weight-space merging follows the list):
- [Shisa Gamma 7B v1](https://huggingface.co/augmxnt/shisa-gamma-7b-v1)
- [WizardMath 7B V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
- [Abel 7B 002](https://huggingface.co/GAIR/Abel-7B-002)
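To give a rough sense of what parameter-space merging involves, the toy sketch below interpolates the weights of two small stand-in models and tunes the mixing coefficient with a simple evolutionary loop. This is only an illustration under assumed names and a placeholder fitness function, not the actual recipe: the Evolutionary Model Merge method described in the report searches much richer merging configurations (e.g., per-layer coefficients and data-flow paths) and evaluates candidates on real benchmarks.

```python
import copy
import random

import torch
import torch.nn as nn

# Two tiny stand-in "source models"; in the real recipe these would be
# full 7B checkpoints such as Shisa Gamma 7B v1 and WizardMath 7B V1.1.
model_a = nn.Linear(4, 4)
model_b = nn.Linear(4, 4)


def merge(w: float) -> nn.Module:
    """Linearly interpolate the parameters of model_a and model_b."""
    merged = copy.deepcopy(model_a)
    with torch.no_grad():
        for p_m, p_a, p_b in zip(merged.parameters(),
                                 model_a.parameters(),
                                 model_b.parameters()):
            p_m.copy_(w * p_a + (1.0 - w) * p_b)
    return merged


def fitness(model: nn.Module) -> float:
    """Placeholder score; a real search would score a downstream benchmark."""
    x = torch.randn(8, 4)
    return -model(x).pow(2).mean().item()


# Toy (1+1) evolutionary search over a single mixing coefficient.
best_w = 0.5
best_f = fitness(merge(best_w))
for _ in range(20):
    cand_w = min(max(best_w + random.gauss(0.0, 0.1), 0.0), 1.0)
    cand_f = fitness(merge(cand_w))
    if cand_f > best_f:
        best_w, best_f = cand_w, cand_f

print(f"best mixing coefficient: {best_w:.2f}")
```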
## Usage
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# 1. load model
device = "cuda" if torch.cuda.is_available() else "cpu"
repo_id = "SakanaAI/EvoLLM-JP-v1-7B"
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model.to(device)

# 2. prepare inputs
# "Please tell me a funny joke in Kansai dialect."
text = "関西弁で面白い冗談を言ってみて下さい。"
messages = [
    # "You are a helpful, unbiased, uncensored assistant."
    {"role": "system", "content": "あなたは役立つ、偏見がなく、検閲されていないアシスタントです。"},
    {"role": "user", "content": text},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_dict=True, return_tensors="pt"
)

# 3. generate
output_ids = model.generate(**inputs.to(device))
# keep only the newly generated tokens, dropping the prompt
output_ids = output_ids[:, inputs.input_ids.shape[1]:]
generated_text = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]
print(generated_text)
```
</details>
## Model Details
- **Developed by:** [Sakana AI](https://sakana.ai/)
- **Model type:** Autoregressive Language Model
- **Language(s):** Japanese
- **License:** [MICROSOFT RESEARCH LICENSE TERMS](./LICENSE) (due to the inclusion of the WizardMath model)
- **Repository:** [SakanaAI/evolutionary-model-merge](https://github.com/SakanaAI/evolutionary-model-merge)
- **Paper:** TODO
- **Blog:** TODO
## Acknowledgement
We would like to thank the developers of the source models for their contributions and for making their work available.
## Citation
```bibtex
@misc{akiba2024evomodelmerge,
title = {Evolutionary Optimization of Model Merging Recipes},
author = {Takuya Akiba and Makoto Shing and Yujin Tang and Qi Sun and David Ha},
year = {2024},
eprint = {TODO},
archivePrefix = {arXiv},
primaryClass = {cs.CV}
}
```