JustinLin610 committed
Commit d2b72fe
1 Parent(s): 4bb872b

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -2,7 +2,7 @@
 license: other
 license_name: tongyi-qianwen
 license_link: >-
-  https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT
+  https://huggingface.co/Qwen/Qwen2-beta-72B/blob/main/LICENSE
 language:
 - en
 pipeline_tag: text-generation
@@ -24,7 +24,7 @@ Qwen2-beta is the beta version of Qwen2, a transformer-based decoder-only langua
 * No need of `trust_remote_code`.

 For more details, please refer to our blog post and github repo.
-<br>
+

 ## Model Details
 Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, temporarily we did not include GQA and the mixture of SWA and full attention.
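Since the README notes that `trust_remote_code` is no longer needed, the model should load with stock `transformers`. A minimal sketch, assuming the `Qwen/Qwen2-beta-72B` repo id taken from the new license link in this diff (other sizes in the series would load the same way):

```python
# Minimal sketch: loading Qwen2-beta with vanilla transformers, no
# trust_remote_code flag, per the README change above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-beta-72B"  # assumed repo id, from the license_link in this diff

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" spreads the weights across available GPUs (needs accelerate)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Give me a short introduction to large language models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```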
 
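The Model Details paragraph mentions SwiGLU activation without spelling it out. For reference, a minimal sketch of a SwiGLU feed-forward block as defined by Shazeer (2020); the hidden size, bias settings, and layer names here are illustrative assumptions, not Qwen2's exact configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLUFeedForward(nn.Module):
    """Illustrative SwiGLU MLP: down_proj(silu(x @ W_gate) * (x @ W_up))."""

    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.gate_proj = nn.Linear(dim, hidden_dim, bias=False)  # gating branch
        self.up_proj = nn.Linear(dim, hidden_dim, bias=False)    # linear branch
        self.down_proj = nn.Linear(hidden_dim, dim, bias=False)  # back to model dim

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # SwiGLU: the swish (SiLU) of one projection gates the other elementwise.
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))
```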