|
--- |
|
base_model: mistralai/Mixtral-8x7B-v0.1 |
|
inference: false |
|
license: apache-2.0 |
|
language: |
|
- zh |
|
- en |
|
--- |
|
|
|
# Chinese-Mixtral-LoRA |
|
<p align="center"> |
|
<a href="https://github.com/ymcui/Chinese-Mixtral"><img src="https://ymcui.com/images/chinese-mixtral-banner.png" width="600"/></a> |
|
</p> |
|
|
|
**Chinese Mixtral GitHub repository: https://github.com/ymcui/Chinese-Mixtral** |
|
|
|
This repository contains **Chinese-Mixtral-LoRA**, the LoRA weights obtained by further pre-training [Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) on Chinese text.
|
|
|
**Note: These LoRA weights must be merged with the original [Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) to obtain the full model weights.**
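
For reference, here is a minimal merging sketch using 🤗 `transformers` and `peft`. The LoRA repo ID `hfl/chinese-mixtral-lora` and the output path are assumptions for illustration; the GitHub repository linked above may also document its own merging procedure. Note that merging an 8x7B mixture-of-experts model requires substantial memory.

```
# Minimal sketch: merge these LoRA weights into the base Mixtral model.
# Assumption: this repo's ID is "hfl/chinese-mixtral-lora"; adjust as needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Attach the LoRA adapter, then bake its deltas into the base weights.
model = PeftModel.from_pretrained(base, "hfl/chinese-mixtral-lora")
model = model.merge_and_unload()
model.save_pretrained("chinese-mixtral-merged")

# Save a tokenizer alongside the merged weights (base tokenizer shown here;
# use the adapter repo's tokenizer instead if it ships one).
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-v0.1")
tokenizer.save_pretrained("chinese-mixtral-merged")
```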
|
|
|
## Other Resources
|
|
|
- For the full model, please see: https://huggingface.co/hfl/chinese-mixtral
|
|
|
- For the GGUF model (compatible with llama.cpp), please see: https://huggingface.co/hfl/chinese-mixtral-gguf
|
|
|
- If you have questions or issues regarding this model, please submit an issue at [https://github.com/ymcui/Chinese-Mixtral/](https://github.com/ymcui/Chinese-Mixtral/).
|
|
|
## Citation |
|
|
|
Please consider citing our paper if you use the resources in this repository.
|
Paper link: https://arxiv.org/abs/2403.01851 |
|
``` |
|
@article{chinese-mixtral, |
|
title={Rethinking LLM Language Adaptation: A Case Study on Chinese Mixtral}, |
|
author={Cui, Yiming and Yao, Xin}, |
|
journal={arXiv preprint arXiv:2403.01851}, |
|
url={https://arxiv.org/abs/2403.01851}, |
|
year={2024} |
|
} |
|
``` |