Update README.md
This is not a chat model. It was trained on the Wizard-LM-Chinese-instruct-evol dataset for several steps to test the model's Chinese-language ability. This is version 1; version 2 will be released with a longer context window and a chat model.
____________________________
Training scenario:
context: 2k
dataset: Wizard-LM-Chinese-instruct-evol
batch size: 8
steps: 500
epochs: 2
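
For reference, below is a minimal sketch of a fine-tuning script that matches these hyperparameters. It is not the original training code: the base checkpoint name, the local data file, and the `instruction`/`output` field names are all assumptions you would need to adjust.

```python
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base_model = "meta-llama/Meta-Llama-3-8B"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)

# Assumed local JSON dump of the Wizard-LM-Chinese-instruct-evol data.
raw = load_dataset("json", data_files="wizardlm_chinese_evol.json")["train"]

def tokenize(example):
    # Concatenate prompt and answer, truncated to the 2k-token context window.
    text = example["instruction"] + "\n" + example["output"]  # assumed field names
    return tokenizer(text, truncation=True, max_length=2048)

train_ds = raw.map(tokenize, remove_columns=raw.column_names)

args = TrainingArguments(
    output_dir="/aml/llama3-ft",
    per_device_train_batch_size=8,  # batch size: 8
    num_train_epochs=2,             # epochs: 2
    max_steps=500,                  # steps: 500 (takes precedence over epochs)
    bf16=True,
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```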
____________________________________________________
How to use?
The standard Hugging Face pipeline API is enough; other serving frameworks such as vLLM also work (see the sketch after the example below). Continued training from this checkpoint is supported.
____________________________________________________
```python
import transformers
import torch

# Path to the fine-tuned checkpoint (local path from the original example).
model_id = "/aml/llama3-ft"

pipeline = transformers.pipeline(
    "text-generation", model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto",
)

# Prompt: "Who can win the election, Trump or Biden?"
pipeline("川普和拜登谁能赢得大选??")
# >> [{'generated_text': '川普和拜登谁能赢得大选?](https://www.voachinese.com'}]
```
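
For vLLM, a minimal serving sketch could look like the following. The checkpoint path mirrors the example above and is an assumption about where the fine-tuned weights live on your machine; sampling settings are illustrative only.

```python
from vllm import LLM, SamplingParams

# Load the fine-tuned weights (assumed local path) with bfloat16 inference.
llm = LLM(model="/aml/llama3-ft", dtype="bfloat16")
params = SamplingParams(temperature=0.7, max_tokens=256)

# Same prompt as the transformers example above.
outputs = llm.generate(["川普和拜登谁能赢得大选?"], params)
print(outputs[0].outputs[0].text)
```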
WeChat: 18618377979, Gmail: zhouboyang1983@gmail.com