---
language:
- zh
- en
pipeline_tag: text-generation
license: other
license_name: llama3
license_link: LICENSE
tags:
- llama3
- chinese
- meta
---
# llama-3-8b-instruct-262k-chinese-lora
llama-3-8b-instruct-262k-chinese is a chat model fine-tuned from [Llama-3-8B-Instruct-262k](https://huggingface.co/gradientai/Llama-3-8B-Instruct-262k) with the ORPO method on the Chinese/English preference dataset [shibing624/DPO-En-Zh-20k-Preference](https://huggingface.co/datasets/shibing624/DPO-En-Zh-20k-Preference).

For deployment, training, and related details, see the MedicalGPT GitHub repository: [https://github.com/shibing624/MedicalGPT](https://github.com/shibing624/MedicalGPT)
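The exact recipe lives in the MedicalGPT repository; purely as orientation, a minimal ORPO + LoRA fine-tuning sketch with TRL/PEFT might look like the following. The hyperparameters, LoRA targets, and column mapping are assumptions, not the published configuration, and the `tokenizer=` argument name depends on your TRL version.

```python
# Minimal ORPO + LoRA sketch with TRL/PEFT -- an assumption-laden outline,
# not the exact MedicalGPT recipe.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_id = "gradientai/Llama-3-8B-Instruct-262k"
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# ORPOTrainer expects prompt/chosen/rejected columns; map the dataset first if needed.
dataset = load_dataset("shibing624/DPO-En-Zh-20k-Preference", split="train")

peft_config = LoraConfig(  # ranks and target modules are illustrative guesses
    r=16, lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
args = ORPOConfig(  # placeholder hyperparameters
    output_dir="outputs-orpo",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    beta=0.1,  # weight of ORPO's odds-ratio term
    max_length=2048,
    max_prompt_length=1024,
)
trainer = ORPOTrainer(
    model=model, args=args, train_dataset=dataset,
    tokenizer=tokenizer, peft_config=peft_config,
)
trainer.train()
```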
## Related models
- Full model weights: https://huggingface.co/shibing624/llama-3-8b-instruct-262k-chinese
- LoRA weights: https://huggingface.co/shibing624/llama-3-8b-instruct-262k-chinese-lora
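To use the LoRA repository directly, one option (a sketch, assuming a standard PEFT adapter layout in that repo) is to attach it to the base 262k model and merge the weights for standalone inference:

```python
# Sketch: attach the LoRA adapter to the base model, then merge it in.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "gradientai/Llama-3-8B-Instruct-262k"
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_id)

model = PeftModel.from_pretrained(base, "shibing624/llama-3-8b-instruct-262k-chinese-lora")
model = model.merge_and_unload()  # fold the LoRA deltas into the base weights
model.save_pretrained("llama-3-8b-instruct-262k-chinese-merged")
```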
## How to use
```python
import transformers
import torch

model_id = "shibing624/llama-3-8b-instruct-262k-chinese"

# Build a text-generation pipeline in fp16 on GPU.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.float16},
    device="cuda",
)

# Assemble the chat history: an (empty) system prompt plus the user turn.
messages = [
    {"role": "system", "content": ""},
    {"role": "user", "content": "介绍一下机器学习"},  # "Introduce machine learning"
]

# Render the messages with the Llama-3 chat template, appending the
# assistant header so the model starts generating the reply.
prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Llama-3 ends a turn with <|eot_id|>, so treat it as an extra stop token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipeline(
    prompt,
    max_new_tokens=512,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

# The pipeline returns the prompt plus the completion; strip the prompt.
content = outputs[0]["generated_text"][len(prompt):]
print(content)
```
## About Llama-3-8B-Instruct-262k
Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business; see [Gradient](https://huggingface.co/gradientai) to learn more or to collaborate on a custom model.

This model, developed by Gradient and sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai), extends Llama-3 8B's context length from 8k to 262k tokens. It demonstrates that SOTA LLMs can learn to operate on long context with minimal training (< 200M tokens) by appropriately adjusting RoPE theta.
<img src="https://cdn-uploads.huggingface.co/production/uploads/6585dc9be92bc5f258156bd6/hiHWva3CbsrnPvZTp5-lu.png" width="600">
**Approach:**
- [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base
- NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by a new data-driven RoPE theta optimization technique (see the sketch after this list)
- Progressive training on increasing context lengths similar to the [Large World Model](https://huggingface.co/LargeWorldModel) [2] (See details below)
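For intuition, here is a minimal sketch of the commonly cited NTK-aware scaling rule for the RoPE base. Gradient's exact initialization schedule is not given here, so the formula is an assumption; the base theta of 500,000 and head dimension of 128 are Llama-3 8B's values.

```python
# Hypothetical illustration (not Gradient's code): NTK-aware interpolation
# raises the RoPE base so that high-frequency dimensions are stretched less
# than low-frequency ones when the context window grows by `scale`.
def ntk_scaled_rope_base(base: float, scale: float, head_dim: int) -> float:
    return base * scale ** (head_dim / (head_dim - 2))

# Llama-3 8B: base theta = 500000, head_dim = 128.
# Extending 8192 -> 262144 tokens is a 32x scale.
print(ntk_scaled_rope_base(500_000.0, 262_144 / 8_192, 128))  # ~16.9M
```

This only gives the starting point; the data-driven optimization step then moves theta away from it, which is why the final values in the table below (15.3 M at 65K, 207.1 M at 262K) differ from this initialization.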
**Infra:**
We build on top of the EasyContext Blockwise RingAttention library [3] to train scalably and efficiently on contexts up to 262144 tokens on [Crusoe Energy](https://huggingface.co/crusoeai)'s high-performance L40S cluster.
**Data:**
For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B).
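The augmentation recipe is not spelled out in this card. One common way to build long-context samples, shown purely as an assumption, is to pack shorter tokenized documents into fixed-length sequences:

```python
# Hypothetical sketch (the card does not specify Gradient's augmentation):
# concatenate tokenized documents into fixed-length long-context samples.
def pack_documents(token_streams, target_len=262_144):
    """Yield samples of exactly target_len tokens from a stream of documents."""
    buffer = []
    for tokens in token_streams:
        buffer.extend(tokens)
        while len(buffer) >= target_len:
            yield buffer[:target_len]
            buffer = buffer[target_len:]

# Toy usage with fake "tokenized documents":
docs = [[1] * 100_000, [2] * 200_000, [3] * 50_000]
samples = list(pack_documents(docs))
print(len(samples), len(samples[0]))  # 1 262144
```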
**Progressive Training Details:**
| Parameter | 65K | 262K |
|-----------------------------|----------------|------------|
| Initialize From | LLaMA-3-8B-Inst| 65K |
| Sequence Length | 2^16 | 2^18 |
| RoPE theta | 15.3 M | 207.1 M |
| Batch Size (Tokens / Step) | 2.097 M | 4.192 M |
| Steps | 30 | 24 |
| Total Tokens | 63 M | 101 M |
| Learning Rate | 2.00E-05 | 2.00E-05 |
| # GPUs | 32 | 32 |
| GPU Type | NVIDIA L40S | NVIDIA L40S|
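As a quick consistency check, Total Tokens is simply Batch Size × Steps. Assuming the rounded batch sizes correspond to 2^21 and 2^22 tokens per step (2.097 M and ~4.194 M):

```python
# Sanity check: total tokens per stage = batch size (tokens/step) * steps.
for stage, batch, steps in [("65K", 2**21, 30), ("262K", 2**22, 24)]:
    print(f"{stage}: {batch * steps / 1e6:.0f}M tokens")  # 63M, 101M
```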