TechxGenus committed
Commit 8683379
1 Parent(s): ecbfbf6

Upload folder using huggingface_hub

README.md ADDED
@@ -0,0 +1,59 @@
+ ---
+ license: other
+ license_name: tongyi-qianwen-research
+ license_link: >-
+ https://huggingface.co/Qwen/CodeQwen1.5-7B/blob/main/LICENSE
+ language:
+ - en
+ pipeline_tag: text-generation
+ tags:
+ - pretrained
+ ---
+
+ # CodeQwen1.5-7B
+
+ AWQ-quantized version of the CodeQwen1.5-7B model.
+
+ ---
+
+ ## Introduction
+
+ CodeQwen1.5 is the code-specific version of Qwen1.5: a transformer-based, decoder-only language model pretrained on a large amount of code data.
+
+ * Strong code generation capabilities and competitive performance across a series of benchmarks;
+ * Long-context understanding and generation, with a context length of 64K tokens;
+ * Support for 92 coding languages;
+ * Excellent performance in text-to-SQL, bug fixing, etc.
+
+ For more details, please refer to our [blog post](https://qwenlm.github.io/blog/codeqwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
+
+ ## Model Details
+
+ CodeQwen1.5 is based on Qwen1.5, a language model series that includes decoder models of different sizes. It is trained on 3 trillion tokens of code data and uses grouped-query attention (GQA) for efficient inference.
+
+ ## Requirements
+
+ The code for Qwen1.5 has been merged into the latest Hugging Face transformers, so we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error:
+ ```
+ KeyError: 'qwen2'
+ ```
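+
+ A minimal sketch for checking the installed version up front (the `packaging` module ships among transformers' dependencies):
+
+ ```python
+ # Fail fast if the installed transformers predates the "qwen2" architecture,
+ # which was added in transformers 4.37.0.
+ from packaging import version
+ import transformers
+
+ assert version.parse(transformers.__version__) >= version.parse("4.37.0"), \
+     "Upgrade with: pip install -U 'transformers>=4.37.0'"
+ ```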
+
+ ## Usage
+
+ We do not advise using the base language model for chat. You can use it for fine-tuning, as well as for code infilling, code generation, etc., but please be careful about your stopping criteria.
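+
+ As a minimal code-generation sketch with this AWQ checkpoint (the repository id below is a placeholder for this repo's actual id; AWQ inference additionally requires a CUDA GPU and the `autoawq` package):
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "TechxGenus/CodeQwen1.5-7B-AWQ"  # placeholder: substitute this repo's id
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
+
+ prompt = "def quicksort(arr):"
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ # The base model will happily generate past the code you asked for, so cap
+ # the length and stop on end-of-sequence explicitly.
+ outputs = model.generate(
+     **inputs,
+     max_new_tokens=128,
+     eos_token_id=tokenizer.eos_token_id,
+ )
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```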
+
+ ## Citation
+
+ If you find our work helpful, feel free to cite us.
+
+ ```
+ @article{qwen,
+   title={Qwen Technical Report},
+   author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
+   journal={arXiv preprint arXiv:2309.16609},
+   year={2023}
+ }
+ ```
config.json ADDED
@@ -0,0 +1,38 @@
+ {
+   "_name_or_path": "Qwen/CodeQwen1.5-7B",
+   "architectures": [
+     "Qwen2ForCausalLM"
+   ],
+   "attention_dropout": 0.0,
+   "bos_token_id": 2,
+   "eos_token_id": 2,
+   "hidden_act": "silu",
+   "hidden_size": 4096,
+   "initializer_range": 0.02,
+   "intermediate_size": 13440,
+   "max_position_embeddings": 65536,
+   "max_window_layers": 28,
+   "model_type": "qwen2",
+   "num_attention_heads": 32,
+   "num_hidden_layers": 32,
+   "num_key_value_heads": 4,
+   "quantization_config": {
+     "bits": 4,
+     "group_size": 128,
+     "modules_to_not_convert": null,
+     "quant_method": "awq",
+     "version": "gemm",
+     "zero_point": true
+   },
+   "rms_norm_eps": 1e-05,
+   "rope_theta": 1000000,
+   "rotary_emb_base": 1000000,
+   "seq_length": 65536,
+   "sliding_window": 65536,
+   "tie_word_embeddings": false,
+   "torch_dtype": "float16",
+   "transformers_version": "4.39.3",
+   "use_cache": true,
+   "use_sliding_window": false,
+   "vocab_size": 92416
+ }
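The `quantization_config` block above is what signals AWQ loading to downstream tooling: 4-bit weights, a quantization group size of 128, and the GEMM kernel variant. A small illustrative sketch for inspecting it, assuming a local copy of this `config.json` in the working directory:

```python
import json

# Print the AWQ quantization settings recorded in this checkpoint's config.
with open("config.json") as f:
    config = json.load(f)

quant = config["quantization_config"]
print(f"method={quant['quant_method']}, bits={quant['bits']}, "
      f"group_size={quant['group_size']}, version={quant['version']}")
# Expected: method=awq, bits=4, group_size=128, version=gemm
```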
generation_config.json ADDED
@@ -0,0 +1,11 @@
+ {
+   "bos_token_id": 2,
+   "do_sample": true,
+   "eos_token_id": [
+     4,
+     2
+   ],
+   "pad_token_id": 92298,
+   "top_p": 0.95,
+   "transformers_version": "4.39.3"
+ }
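These generation defaults (sampling enabled, nucleus sampling with `top_p` 0.95, and generation ending on token 4 or 2) are picked up automatically by `model.generate()`. A minimal sketch of loading and overriding them explicitly; the repository id is again a placeholder:

```python
from transformers import GenerationConfig

# Load the sampling defaults shipped with the checkpoint...
gen_config = GenerationConfig.from_pretrained("TechxGenus/CodeQwen1.5-7B-AWQ")  # placeholder id
print(gen_config.do_sample, gen_config.top_p, gen_config.eos_token_id)  # True 0.95 [4, 2]

# ...and override per call without editing the file.
gen_config.top_p = 0.9
# outputs = model.generate(**inputs, generation_config=gen_config)
```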
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b9ddd2c3d29dc02d70f221e3b1f466e0b6fe4900f67e0d2434b78196333023fe
+ size 4888301576
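This entry is a Git LFS pointer rather than the weights themselves; the actual ~4.9 GB `model.safetensors` is materialized on clone or download. A small sketch, assuming a downloaded copy in the working directory, to check it against the digest and size recorded above:

```python
import hashlib
import os

# Compare a downloaded model.safetensors against the LFS pointer's metadata.
path = "model.safetensors"
expected_sha256 = "b9ddd2c3d29dc02d70f221e3b1f466e0b6fe4900f67e0d2434b78196333023fe"
expected_size = 4888301576

assert os.path.getsize(path) == expected_size, "size mismatch"

h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
        h.update(chunk)
assert h.hexdigest() == expected_sha256, "checksum mismatch"
print("model.safetensors matches the LFS pointer")
```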
special_tokens_map.json ADDED
@@ -0,0 +1,38 @@
+ {
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>",
+     "<fim_prefix>",
+     "<fim_middle>",
+     "<fim_suffix>",
+     "<fim_pad>"
+   ],
+   "bos_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<fim_pad>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
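The `<fim_prefix>`/`<fim_middle>`/`<fim_suffix>` tokens above are what enable code infilling with the base model. A sketch of one common fill-in-the-middle prompt layout; the exact ordering here is an assumption carried over from similar code models, so check the upstream CodeQwen1.5 documentation before relying on it:

```python
# Fill-in-the-middle (assumed prefix-suffix-middle layout): the model generates
# the code that belongs between `prefix` and `suffix`, using the special
# tokens listed in special_tokens_map.json above.
prefix = "def fib(n):\n    "
suffix = "\n    return fib(n - 1) + fib(n - 2)"
fim_prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

# With `tokenizer` and `model` loaded as in the usage sketch earlier:
# inputs = tokenizer(fim_prompt, return_tensors="pt").to(model.device)
# outputs = model.generate(**inputs, max_new_tokens=64,
#                          eos_token_id=tokenizer.eos_token_id)
```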
tokenizer.json ADDED
The diff for this file is too large to render.
tokenizer_config.json ADDED
The diff for this file is too large to render.