afrideva committed on
Commit
6c5b980
1 Parent(s): 1994ffd

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +107 -0
README.md ADDED
---
base_model: Qwen/Qwen2-1.5B-Instruct
inference: true
language:
- en
license: apache-2.0
model_creator: Qwen
model_name: Qwen2-1.5B-Instruct
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- chat
- gguf
- ggml
- quantized
---

# Qwen2-1.5B-Instruct-GGUF

Quantized GGUF model files for [Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct) from [Qwen](https://huggingface.co/Qwen).

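A minimal sketch of loading one of these GGUF files with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the filename below is hypothetical, so substitute the quantization level you actually downloaded from this repo:

```python
# pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2-1.5b-instruct.q4_k_m.gguf",  # hypothetical filename; use the quant file you downloaded
    n_ctx=2048,  # context window; raise it if you have the RAM
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Give me a short introduction to large language models."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```
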
## Original Model Card:

# Qwen2-1.5B-Instruct

## Introduction

Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the instruction-tuned 1.5B Qwen2 model.

Compared with state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.

For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/).
<br>

## Model Details
Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. The models are based on the Transformer architecture with SwiGLU activation, attention QKV bias, grouped-query attention, etc. Additionally, we have an improved tokenizer adapted to multiple natural languages and code.

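As an illustrative aside, not part of the original card: the grouped-query attention layout can be read off the published Hugging Face config, where `num_key_value_heads` is smaller than `num_attention_heads`. A minimal sketch, assuming `transformers` is installed and the Hub is reachable:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Qwen/Qwen2-1.5B-Instruct")
# With grouped-query attention, several query heads share each key/value head,
# so num_key_value_heads is smaller than num_attention_heads.
print(config.num_attention_heads, config.num_key_value_heads)
```
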
## Training details
We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.

## Requirements
The code for Qwen2 is included in the latest Hugging Face `transformers`, and we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error:
```
KeyError: 'qwen2'
```

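For example, the requirement can be met with a pip upgrade:
```
pip install "transformers>=4.37.0"
```
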
## Quickstart

Below is a code snippet using `apply_chat_template` that shows how to load the tokenizer and model, and how to generate content.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-1.5B-Instruct",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-1.5B-Instruct")

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
# Render the conversation into a single prompt string using the model's chat template.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)
# Drop the prompt tokens so that only the newly generated tokens are decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

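If you prefer tokens to be printed as they are generated, `transformers` also provides `TextStreamer`; a small variation on the snippet above, reusing its `model`, `tokenizer`, and `model_inputs`:

```python
from transformers import TextStreamer

# Prints decoded tokens to stdout as they are generated.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512,
    streamer=streamer,
)
```
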
## Evaluation

We briefly compare Qwen2-0.5B-Instruct and Qwen2-1.5B-Instruct with their Qwen1.5 chat counterparts. The results are as follows:

| Datasets | Qwen1.5-0.5B-Chat | **Qwen2-0.5B-Instruct** | Qwen1.5-1.8B-Chat | **Qwen2-1.5B-Instruct** |
| :--- | :---: | :---: | :---: | :---: |
| MMLU | 35.0 | **37.9** | 43.7 | **52.4** |
| HumanEval | 9.1 | **17.1** | 25.0 | **37.8** |
| GSM8K | 11.3 | **40.1** | 35.3 | **61.6** |
| C-Eval | 37.2 | **45.2** | 55.3 | **63.8** |
| IFEval (Prompt Strict-Acc.) | 14.6 | **20.0** | 16.8 | **29.0** |

## Citation

If you find our work helpful, feel free to cite us.

```
@article{qwen2,
  title={Qwen2 Technical Report},
  year={2024}
}
```