Model description
This model is an implementation of SocraticLM: a fine-tuned version of Qwen2.5-Math-7B-Instruct trained on the SocraTeach dataset.
Intended uses & limitations
SocraticLM is designed for educational purposes, providing Socratic-style guidance to students who have difficulty learning to solve mathematical problems.
SocraticLM can also solve mathematical problems on its own.
The model mainly supports English and Chinese.
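As an illustration of the tutoring use case, a request can be phrased as an ordinary chat conversation in which the student shows a partial attempt and the model is expected to respond with guiding questions rather than the final answer. The system prompt below is only an illustrative sketch, not necessarily the prompt used during training; the problem-solving prompt actually used in the examples appears under "How to use".

# Illustrative tutoring-style conversation; the system prompt and the student's
# message are assumptions, not taken from the SocraTeach training data.
tutoring_messages = [
    {"role": "system",
     "content": "You are a Socratic tutor. Guide the student with questions instead of giving the final answer."},
    {"role": "user",
     "content": "I computed 48 + 48 = 96 clips for April and May, but I'm not sure that's right. Can you help me?"},
]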
How to use
For Hugging Face Transformers:
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("CogBase-USTC/SocraticLM")
model = AutoModelForCausalLM.from_pretrained(
    "CogBase-USTC/SocraticLM",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# Build a chat-formatted prompt from the system and user messages.
messages = [
    {"role": "system", "content": "Please analyse and solve the following problem step by step."},
    {"role": "user", "content": "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"},
]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Tokenize the prompt and generate a response.
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=4096)
print(tokenizer.decode(outputs[0]))
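Socratic tutoring is typically multi-turn, so the generated reply can be appended to the conversation and the dialogue continued. The sketch below reuses the tokenizer, model, messages, inputs, and outputs objects from the snippet above; the student's follow-up message is invented for illustration.

# Append the model's reply and a hypothetical student follow-up, then generate again.
reply = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
messages.append({"role": "assistant", "content": reply})
messages.append({"role": "user", "content": "I think she sold 24 clips in May. What should I do next?"})

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=4096)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))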
For vLLM:
from vllm import LLM, SamplingParams

# Initialise the vLLM engine.
llm = LLM(model='CogBase-USTC/SocraticLM',
          tokenizer='CogBase-USTC/SocraticLM',
          trust_remote_code=True,
          tensor_parallel_size=1,
          gpu_memory_utilization=0.99,
          enable_chunked_prefill=True,
          max_num_batched_tokens=512,
          max_num_seqs=128)

# Greedy decoding with a fixed seed for reproducible outputs.
sampling_params = SamplingParams(temperature=0, max_tokens=4096, seed=42)

def print_outputs(outputs):
    for output in outputs:
        generated_text = output.outputs[0].text
        print(f"Generated text: {generated_text!r}")
        print("-" * 80)
    print("=" * 80)

conversation = [
    {
        "role": "system",
        "content": "Please analyse and solve the following problem step by step."
    },
    {
        "role": "user",
        "content": "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"
    },
]

outputs = llm.chat(conversation,
                   sampling_params=sampling_params,
                   use_tqdm=False)
print_outputs(outputs)
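The same engine and sampling parameters can be reused across many problems. The sketch below simply loops over a small list of invented example questions and prints each result with the helper defined above.

# Reuse the engine for several problems; the example questions here are invented.
problems = [
    "A baker made 36 cookies and sold a third of them. How many cookies are left?",
    "Tom read 15 pages on Monday and twice as many on Tuesday. How many pages did he read in total?",
]

for question in problems:
    conversation = [
        {"role": "system", "content": "Please analyse and solve the following problem step by step."},
        {"role": "user", "content": question},
    ]
    outputs = llm.chat(conversation, sampling_params=sampling_params, use_tqdm=False)
    print_outputs(outputs)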
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
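For reference, the hyperparameters above correspond roughly to the following Hugging Face TrainingArguments. This is a reconstruction from the list, not the original training script: the per-device eval batch size is inferred from the reported totals and the output directory is a placeholder.

from transformers import TrainingArguments

# Reconstruction of the listed hyperparameters (not the original training script).
training_args = TrainingArguments(
    output_dir="socraticlm-finetune",  # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,      # inferred: 32 total across 4 GPUs
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=20,
)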
Framework versions
- Transformers 4.46.2
- PyTorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3