|
--- |
|
license: other |
|
language: |
|
- en |
|
- zh |
|
library_name: transformers |
|
pipeline_tag: text-generation |
|
inference: false |
|
tags: |
|
- baichuan |
|
- llama2 |
|
- baichuan2 |
|
--- |
|
|
|
This is the LLaMAfied version of the [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) model by Baichuan Inc.
|
|
|
This model was converted using the script at https://github.com/hiyouga/LLaMA-Efficient-Tuning/blob/main/tests/llamafy_baichuan2.py
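The core of the conversion is renaming Baichuan's weights into LLaMA's layout: Baichuan stores the attention query/key/value projections as a single fused `W_pack` matrix, while LLaMA expects three separate `q_proj`/`k_proj`/`v_proj` matrices. The toy sketch below (assumed shapes, not the actual script, which also remaps the remaining weights) illustrates that split:

```python
import torch

# Toy illustration with assumed shapes: Baichuan fuses the q/k/v projections
# into one `W_pack` matrix of shape (3 * hidden_size, hidden_size).
hidden_size = 8  # real Baichuan2-7B uses 4096

w_pack = torch.randn(3 * hidden_size, hidden_size)

# Split along dim 0 into the three LLaMA-style projection weights.
q_proj, k_proj, v_proj = torch.split(w_pack, hidden_size, dim=0)

print(q_proj.shape, k_proj.shape, v_proj.shape)
```

Concatenating the three pieces back along dim 0 reproduces the original fused matrix, so the split is lossless.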
|
|
|
You may use this model for fine-tuning on downstream tasks; we recommend using our efficient fine-tuning toolkit: https://github.com/hiyouga/LLaMA-Efficient-Tuning
|
|
|
- **Developed by:** Baichuan Inc. |
|
- **Language(s) (NLP):** Chinese/English |
|
- **License:** [Baichuan2 License](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf) |
|
|
|
Usage: |
|
|
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("hiyouga/Baichuan2-7B-Base-LLaMAfied", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("hiyouga/Baichuan2-7B-Base-LLaMAfied").cuda()

# Generate a completion from a prompt
inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```