---
license: other
license_name: tongyi-qianwen
license_link: >-
  https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- chat
---

# Nxcode-CQ-7B-orpo

## Introduction

Nxcode-CQ-7B-orpo is an ORPO fine-tune of Qwen/CodeQwen1.5-7B-Chat on 100k samples from our datasets.

* Strong code generation capabilities and competitive performance across a series of benchmarks;
* Support for 92 coding languages;
* Excellent performance on text-to-SQL, bug fixing, etc.

## Quickstart

The code snippet below shows how to load the tokenizer and model, and how to generate content using `apply_chat_template`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "NTQAI/Nxcode-CQ-7B-orpo",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("NTQAI/Nxcode-CQ-7B-orpo")

prompt = "Write a quicksort algorithm in python."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
# Format the chat history into the model's prompt template.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated tokens remain.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

A streaming variant of this snippet is sketched at the end of this card.

### Contact information
For personal communication related to this project, please contact Nha Nguyen Van (nha.nguyen@ntq-solution.com.vn).
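
### Streaming generation

As a follow-up to the Quickstart, the sketch below streams decoded tokens to stdout as they are generated, reusing the `model`, `tokenizer`, and `model_inputs` defined above. It uses `transformers`' `TextStreamer`; the sampling settings here are illustrative assumptions, not recommendations from the model authors.

```python
from transformers import TextStreamer

# Print decoded tokens as they are produced; skip echoing the prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

_ = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512,
    do_sample=True,     # sampling settings below are illustrative, not official
    temperature=0.2,
    top_p=0.95,
    streamer=streamer,
)
```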