JustinLin610 committed on
Commit
923bdc7
1 Parent(s): 17070ad

Update README.md

Files changed (1):
  1. README.md +3 -25
README.md CHANGED
@@ -10,7 +10,7 @@ tags:
 - pretrained
 ---
 
-# Qwen2-beta
+# Qwen2-beta-72B
 
 
 ## Introduction
@@ -34,31 +34,9 @@ The code of Qwen2 has been in the latest Hugging face transformers and we advise
 <br>
 
 
-## Quickstart
+## Usage
 
-Here provides a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and how to generate contents.
+We do not advise you to use base language models for text generation. Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model.
-
-```python
-from transformers import AutoModelForCausalLM, AutoTokenizer
-device = "cuda" # the device to load the model onto
-
-model = AutoModelForCausalLM.from_pretrained("Qwen2/Qwen2-beta-7B-Chat", device_map="auto")
-tokenizer = AutoTokenizer.from_pretrained("Qwen2/Qwen2-beta-7B-Chat")
-
-prompt = "Give me a short introduction to large language model."
-
-messages = [{"role": "user", "content": prompt}]
-
-text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
-
-model_inputs = tokenizer([text], return_tensors="pt").to(device)
-
-generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=512, do_sample=True)
-
-generated_ids = [output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)]
-
-response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
-```
 
 
 ## Citation
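
The new Usage section steers readers toward post-training rather than direct generation. As a minimal sketch of that starting point, the snippet below loads the base checkpoint for further training; the repo id `Qwen2/Qwen2-beta-72B` is an assumption inferred from the new title and the naming in the removed snippet, not something this commit confirms.

```python
# Minimal sketch (not from the README) of loading the base model as a
# starting point for post-training (SFT, RLHF, continued pretraining).
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Qwen2/Qwen2-beta-72B",  # assumed repo id; check the model card
    device_map="auto",       # shard the 72B weights across available devices
)
tokenizer = AutoTokenizer.from_pretrained("Qwen2/Qwen2-beta-72B")

# Unlike the removed chat snippet, there is no apply_chat_template step:
# the base model has no chat template. Hand `model` and `tokenizer` to
# your fine-tuning framework (e.g., the TRL library) or a continued-
# pretraining loop instead of calling generate() directly.
```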