BoyangZ committed on
Commit
a64164e
1 Parent(s): 480b25b

Update README.md

Files changed (1): README.md (+37 −20)
README.md CHANGED
metrics:
pipeline_tag: text-generation
---
>> This is not a chat model. It was trained on the Wizard-LM-Chinese-instruct-evol dataset for a small number of steps to test the model's general Chinese ability.
>> This is version 1; version 2 will follow with a longer context window and a chat model.
>>____________________________
>>Train scenario:

>>2k context

>>datasets: Wizard-LM-Chinese-instruct-evol

>>batch size: 8

>>steps: 500

>>epochs: 2
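For a rough sense of scale, the hyperparameters above can be turned into a back-of-the-envelope token count. This is a sketch with two assumptions not stated in the card: that the 500 steps are the total number of optimizer updates, and that "2k context" means 2048 tokens per sequence.

```python
# Back-of-the-envelope scale of the run above. Assumes the 500 steps are
# total optimizer updates and "2k context" means 2048 tokens per sequence.
batch_size = 8
steps = 500
context_len = 2048

sequences_seen = batch_size * steps              # 4,000 sequences
max_tokens_seen = sequences_seen * context_len   # 8,192,000 tokens (upper bound)
print(sequences_seen, max_tokens_seen)  # → 4000 8192000
```

The token figure is an upper bound, since sequences shorter than the full context window contribute fewer tokens.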
>>____________________________________________________
>>How to use?

>>Using the standard Hugging Face Transformers API is enough, or use another framework such as vLLM. Continued training is also supported.

____________________________________________________

```python
import transformers
import torch

model_id = "BoyangZ/llama3-chinese"

# Build a text-generation pipeline in bfloat16, letting Accelerate
# place the model on the available device(s)
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
pipeline("川普和拜登谁能赢得大选?")
```

Example output:

>> [{'generated_text': '川普和拜登谁能赢得大选?](https://www.voachinese.com'}]
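Note that the pipeline returns a list with one dict per prompt, so the generated string has to be pulled out of the `generated_text` field. A small illustrative snippet, using the output shape shown above:

```python
# The text-generation pipeline returns a list with one dict per prompt;
# the string itself lives under the "generated_text" key.
result = [{"generated_text": "川普和拜登谁能赢得大选?](https://www.voachinese.com"}]
text = result[0]["generated_text"]
print(text)
```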

Or load the model and tokenizer directly:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.set_default_device("cuda")

model = AutoModelForCausalLM.from_pretrained(
    "BoyangZ/llama3-chinese", torch_dtype="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
    "BoyangZ/llama3-chinese", trust_remote_code=True
)

# Tokenize the prompt; the attention mask is skipped for this single sequence
inputs = tokenizer(
    "川普和拜登一起竞选,美国总统,谁获胜的几率大,分析一下?",
    return_tensors="pt",
    return_attention_mask=False,
)
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
 
58
 
 
59
 
60
+ >>Wechat:18618377979, Gmail:zhouboyang1983@gmail.com