---
license_name: tongyi-qianwen-research
license_link: https://huggingface.co/Qwen/CodeQwen1.5-7B/blob/main/LICENSE
tags:
- code
pipeline_tag: text-generation
license: other
---
<a href="https://ntq.com.vn" target="_blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/5ee1b417636bdb3834e2da19/etbfTJuVdAub2evNP_E4g.png" width="200"/></a>
## Introduction
Nxcode-CQ-7B-orpo is an [ORPO (Monolithic Preference Optimization without Reference Model)](https://arxiv.org/abs/2403.07691) fine-tune of [Qwen/CodeQwen1.5-7B](https://huggingface.co/Qwen/CodeQwen1.5-7B) on 100k samples of high-quality ranking data.
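For intuition only, ORPO augments the standard supervised (NLL) loss on the chosen response with an odds-ratio penalty between chosen and rejected responses. The sketch below is a hypothetical, simplified rendering of that objective (scalar inputs, illustrative `lam`), not the actual training code used for this model:

```python
import math

def orpo_loss(logp_chosen, logp_rejected, nll_chosen, lam=0.1):
    """Simplified ORPO objective sketch (illustrative only).

    logp_chosen / logp_rejected: average per-token log-probabilities
    of the chosen and rejected completions; nll_chosen: the usual
    SFT loss on the chosen completion; lam: relative weight of the
    odds-ratio term (value here is an assumption).
    """
    def log_odds(logp):
        # odds(y|x) = p / (1 - p), computed from log p for stability
        return logp - math.log1p(-math.exp(logp))

    ratio = log_odds(logp_chosen) - log_odds(logp_rejected)
    # -log sigmoid of the log-odds ratio: small when the chosen
    # response is much more likely than the rejected one
    l_or = -math.log(1.0 / (1.0 + math.exp(-ratio)))
    return nll_chosen + lam * l_or
```

When the chosen response is already far more likely than the rejected one, the penalty term vanishes and the loss reduces to plain SFT.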
## [Evalplus](https://github.com/evalplus/evalplus)
| EvalPlus | pass@1 |
| --- | --- |
| HumanEval | 86.6 |
| HumanEval+ | 83.5 |
| MBPP(v0.2.0) | 82.3 |
| MBPP+(v0.2.0) | 70.4 |
We use a simple template to generate solutions for evalplus:
```python
"Complete the following Python function:\n{prompt}"
```
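Applying the template is a single `str.format` call; the function stub below (`add`) is a made-up stand-in for a HumanEval-style problem:

```python
# Hypothetical example of wrapping a HumanEval-style problem
# in the evalplus prompt template above.
TEMPLATE = "Complete the following Python function:\n{prompt}"

problem = "def add(a: int, b: int) -> int:\n    ..."
full_prompt = TEMPLATE.format(prompt=problem)
print(full_prompt)
```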
[Evalplus Leaderboard](https://evalplus.github.io/leaderboard.html)
| Models | HumanEval | HumanEval+|
|------ | ------ | ------ |
| GPT-4-Turbo (April 2024)| 90.2| 86.6|
| GPT-4 (May 2023)| 88.4| 81.17|
| GPT-4-Turbo (Nov 2023)| 85.4| 79.3|
| CodeQwen1.5-7B-Chat| 83.5| 78.7|
| claude-3-opus (Mar 2024)| 82.9| 76.8|
| DeepSeek-Coder-33B-instruct| 81.1| 75.0|
| WizardCoder-33B-V1.1| 79.9| 73.2|
| OpenCodeInterpreter-DS-33B| 79.3| 73.8|
| speechless-codellama-34B-v2.0| 77.4| 72|
| GPT-3.5-Turbo (Nov 2023)| 76.8| 70.7|
| Llama3-70B-instruct| 76.2| 70.7|
## Bigcode Leaderboard
[Bigcode Leaderboard](https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard)
**09/05/2024**: ranked 1st by average score and 2nd by win rate.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/5ee1b417636bdb3834e2da19/OQonD6a7aNjnN9SsTkFp-.png)
## Quickstart
The following code snippet shows how to load the tokenizer and model, build a prompt with `apply_chat_template`, and generate a completion. Upgrade `transformers` if you hit an error when loading the tokenizer.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained(
"NTQAI/Nxcode-CQ-7B-orpo",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("NTQAI/Nxcode-CQ-7B-orpo")
prompt = '''Complete the following Python function:
from typing import List

def has_close_elements(numbers: List[float], threshold: float) -> bool:
    """ Check if in given list of numbers, are any two numbers closer to each other than
    given threshold.
    >>> has_close_elements([1.0, 2.0, 3.0], 0.5)
    False
    >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)
    True
    """
'''
messages = [
{"role": "user", "content": prompt}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
# greedy decoding; sampling parameters are omitted since do_sample=False ignores them
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, eos_token_id=tokenizer.eos_token_id)
res = tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)
print(res)
```
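For reference, a correct completion of the prompt above looks like the following. This is our own hand-written implementation, not verbatim model output:

```python
from typing import List

def has_close_elements(numbers: List[float], threshold: float) -> bool:
    """Check if any two numbers in the list are closer than threshold."""
    for i, a in enumerate(numbers):
        # compare each element only against later elements
        for b in numbers[i + 1:]:
            if abs(a - b) < threshold:
                return True
    return False
```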
### Contact information
For personal communication related to this project, please contact Nha Nguyen Van (nha.nguyen@ntq-solution.com.vn).