# Model Card for SkillTree Enhanced Model

## Model Details
This model has been enhanced using the SkillTree approach, which extracts a specific skill from an advanced training or fine-tuning process and applies it to a base model to improve its capabilities in the targeted area (see the sketch after the list below).
- Base Model: tokyotech-llm/Swallow-MS-7b-v0.1
- Skill Tree: HachiML/SkillTree-Math-OpenMath-Mistral-7B-v0.1
- Language(s) (NLP): Japanese
- Functionality Status: Functional (see the response examples below)
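The exact merge procedure is defined by the SkillTree repository; the following is a minimal sketch, assuming the skill tree is published as a task-vector-style weight delta (fine-tuned weights minus base weights) that is added to the target model's parameters. Everything except the two model IDs above is illustrative, and the actual SkillTree procedure may differ.

```python
# Hedged sketch: applying a "skill" as a weight delta (task-vector-style merge).
# Assumes the skill repo stores a delta with the same parameter names/shapes
# as the base model; this is an assumption, not the documented procedure.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "tokyotech-llm/Swallow-MS-7b-v0.1", torch_dtype=torch.bfloat16
)
skill = AutoModelForCausalLM.from_pretrained(
    "HachiML/SkillTree-Math-OpenMath-Mistral-7B-v0.1", torch_dtype=torch.bfloat16
)

# Add the skill delta to every matching parameter of the base model.
with torch.no_grad():
    skill_state = skill.state_dict()
    for name, param in base.state_dict().items():
        if name in skill_state:
            param.add_(skill_state[name])

# The tokenizer is taken unchanged from the base model.
base.save_pretrained("Swallow-MS-7b-v0.1-MathSkill-OpenMath")
AutoTokenizer.from_pretrained("tokyotech-llm/Swallow-MS-7b-v0.1").save_pretrained(
    "Swallow-MS-7b-v0.1-MathSkill-OpenMath"
)
```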
## Uses
This model is intended for mathematical problem solving: given a math word problem in English or Japanese, it generates a short Python program that computes the answer, following the OpenMath solution format shown in the examples below. As with any generative model, outputs should be verified before use in downstream applications.
```python
# Import libraries
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the model and tokenizer
model_name = "HachiML/Swallow-MS-7b-v0.1-MathSkill-OpenMath"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

# Inference 1: English prompt
prompt = "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?\n"
input_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
tokens = model.generate(
    input_ids.to(device=model.device),
    max_new_tokens=256,
    temperature=0.99,
    top_p=0.95,
    do_sample=True,
)
out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)

# Inference 2: Japanese prompt (same problem as above)
prompt = "ナタリアは4月に48人の友人にクリップを売り、5月にはその半分の数のクリップを売った。ナタリアが4月と5月に売ったクリップの数は?:\n"
input_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
tokens = model.generate(
    input_ids.to(device=model.device),
    max_new_tokens=256,
    temperature=0.99,
    top_p=0.95,
    do_sample=True,
)
out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)
```
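Sampling with `temperature=0.99` produces varied solutions between runs; for a reproducible answer you can switch to greedy decoding (a minor variant, not part of the original card):

```python
# Deterministic generation: greedy decoding instead of sampling.
tokens = model.generate(
    input_ids.to(device=model.device),
    max_new_tokens=256,
    do_sample=False,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```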
## Response Examples
Below are examples of responses from the model, demonstrating how the SkillTree enhancement has improved its performance.
### Example 1

- Input:

```
Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?\n
```

- Output:

```
Let's solve this problem using Python code.
<llm-code>
friends_sold_in_April = 48
friends_sold_in_May = friends_sold_in_April / 2
clips_sold_in_April_and_May = friends_sold_in_April + friends_sold_in_May
clips_sold_in_April_and_May
</llm-code>
<llm-code-output>
72.0
</llm-code-output>
Thus, in April and May, Natalia sold \boxed{72} clips in total.
```
### Example 2

- Input (the Japanese version of the Example 1 prompt):

```
ナタリアは4月に48人の友人にクリップを売り、5月にはその半分の数のクリップを売った。ナタリアが4月と5月に売ったクリップの数は?:\n
```

- Output:

```
Let's solve this problem using Python code.
<llm-code>
clip_count = 48
clip_count_sold_4th_month = clip_count
clip_count_sold_5th_month = clip_count_sold_4th_month / 2
clip_count_sold_in_both_months = clip_count_sold_4th_month + clip_count_sold_5th_month
clip_count_sold_in_both_months
</llm-code>
<llm-code-output>
72.0
</llm-code-output>
So, the total number of clip sales is \textbf{72}.
```
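The responses follow the OpenMath convention of embedding executable Python between `<llm-code>` tags. The card does not ship a harness for this, but a minimal sketch of extracting and running the generated code might look as follows; the regular expression and the `run_llm_code` helper are illustrative, not part of the model or the OpenMath tooling.

```python
import re

def run_llm_code(response: str):
    """Extract the first <llm-code> block from a response and execute it.

    Illustrative helper, not part of the model. Executing model-generated
    code is unsafe outside a sandboxed environment.
    """
    match = re.search(r"<llm-code>(.*?)</llm-code>", response, re.DOTALL)
    if match is None:
        return None
    # The final line of the block is a bare expression holding the answer,
    # so execute the body and evaluate the last line separately.
    *body, last = match.group(1).strip().splitlines()
    namespace: dict = {}
    exec("\n".join(body), namespace)
    return eval(last, namespace)

response = """Let's solve this problem using Python code.
<llm-code>
friends_sold_in_April = 48
friends_sold_in_May = friends_sold_in_April / 2
friends_sold_in_April + friends_sold_in_May
</llm-code>"""
print(run_llm_code(response))  # 72.0
```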