OpenCSG-R1-Qwen2.5-Math-3B-V1

[OpenCSG Community] [github] [wechat] [Twitter]

In OpenCSG, 'Open' stands for openness and open source. 'C' represents Converged resources: the integration and full utilization of hybrid, heterogeneous compute resources. 'S' stands for Software refined: software whose development is driven and refined by large models. 'G' represents Generative LM: widely accessible, inclusive, and democratized generative large models.

The vision of OpenCSG is to empower every industry, every company, and every individual to own their own models. We adhere to the principles of openness and open source, making OpenCSG's large-model software stack available to the community. We welcome everyone to use it, share feedback, and contribute.

Model Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "opencsg/OpenCSG-R1-Qwen2.5-Math-3B-V1"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

messages = [
    {
        "role": "user",
        "content": "请你帮我用因式分解拆解123958102这个数字。在 <think> </think> 标签中输出思考过程,并在 <answer> </answer> 标签中返回最终结果,例如 <answer> (1 + 2) / 3 </answer>。在 <think> 标签中逐步思考。",
    },
    {
        # Pre-fill the assistant turn with an opening <think> tag so the
        # model continues the reasoning instead of starting a fresh reply.
        "role": "assistant",
        "content": "让我们逐步解决这个问题。\n<think>",
    },
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    # Continue the pre-filled assistant message (requires a recent
    # transformers release); use add_generation_prompt=True instead if
    # you drop the assistant pre-fill above.
    continue_final_message=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.6
)
# Strip the prompt tokens so only the newly generated text remains.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
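
Because the prompt instructs the model to put its reasoning inside <think> </think> tags and the final result inside <answer> </answer> tags, the output can be post-processed with a simple regular expression. Below is a minimal sketch; the extract_tagged helper is our own illustration and it assumes the model emits well-formed tags. Note that the opening <think> tag came from the pre-filled assistant message, not from the generation, so it must be re-attached before parsing.

import re

def extract_tagged(text, tag):
    # Return the contents of the first <tag> ... </tag> span, or None.
    match = re.search(rf"<{tag}>(.*?)</{tag}>", text, flags=re.DOTALL)
    return match.group(1).strip() if match else None

# Re-attach the opening <think> tag that was part of the prompt.
full_text = "<think>" + response
print("think:", extract_tagged(full_text, "think"))
print("answer:", extract_tagged(full_text, "answer"))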

Training

Hardware

  • GPUs: 8 × NVIDIA A800
  • Training time: 7 hours

Software

Model Details

  • Model size: 3.09B parameters
  • Tensor type: BF16 (Safetensors)
  • Base model: Qwen/Qwen2.5-3B