nhanv committed on
Commit
1e326ab
1 Parent(s): a87bede

Update README.md

Files changed (1): README.md (+2, -3)
README.md CHANGED

```diff
@@ -15,10 +15,9 @@ tags:
 
 ## Introduction
 
-Nxcode-CQ-7B-orpo is the Code-Specific version of Qwen1.5. It is a transformer-based decoder-only language model pretrained on a large amount of data of codes.
+Nxcode-CQ-7B-orpo is an ORPO fine-tune of Qwen/CodeQwen1.5-7B-Chat on 100k samples ours datasets.
 
 * Strong code generation capabilities and competitve performance across a series of benchmarks;
-* Supporting long context understanding and generation with the context length of 64K tokens;
 * Supporting 92 coding languages
 * Excellent performance in text-to-SQL, bug fix, etc.
 
@@ -31,7 +30,7 @@ from transformers import AutoModelForCausalLM, AutoTokenizer
 device = "cuda" # the device to load the model onto
 
 model = AutoModelForCausalLM.from_pretrained(
-    "Qwen/CodeQwen1.5-7B-Chat",
+    "NTQAI/Nxcode-CQ-7B-orpo",
     torch_dtype="auto",
     device_map="auto"
 )
```
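The commit only swaps the checkpoint id in the README's loading snippet. A fuller usage sketch is below, assuming the standard `transformers` chat-template API; the prompt text and `max_new_tokens` value are illustrative choices, not from the commit, and the heavy model download is kept inside `generate` so the message-building helper can be used without a GPU:

```python
MODEL_ID = "NTQAI/Nxcode-CQ-7B-orpo"  # checkpoint id introduced by this commit


def build_messages(user_prompt: str) -> list[dict]:
    # Chat-format payload expected by tokenizer.apply_chat_template
    return [{"role": "user", "content": user_prompt}]


def generate(prompt: str, device: str = "cuda", max_new_tokens: int = 512) -> str:
    # Import locally so the helper above works even without transformers installed;
    # this path downloads ~7B weights and needs a CUDA device.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="auto",
        device_map="auto",
    )
    text = tokenizer.apply_chat_template(
        build_messages(prompt), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer([text], return_tensors="pt").to(device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Keep only the newly generated tokens, dropping the echoed prompt
    new_tokens = output_ids[0][inputs.input_ids.shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("Write a Python function that checks if a number is prime."))
```

The local import and `__main__` guard are deliberate: the expensive download runs only when the script is executed directly, not when the module is imported.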