---
language: en
datasets:
- algebra_linear_1d_composed
---

# algebra_linear_1d_composed
This is a t5-small model fine-tuned on math_dataset/algebra_linear_1d_composed for the task of solving composed linear 1D algebra equations.

To load the model (required packages: `pip install transformers sentencepiece`):
```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("dbernsohn/algebra_linear_1d_composed")
model = AutoModelWithLMHead.from_pretrained("dbernsohn/algebra_linear_1d_composed")
```
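Note that `AutoModelWithLMHead` is deprecated in recent `transformers` releases; if it is not available in your version, the same checkpoint can be loaded through the seq2seq auto class (a minimal sketch, not part of the original card):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Same checkpoint, loaded through the non-deprecated seq2seq auto class.
tokenizer = AutoTokenizer.from_pretrained("dbernsohn/algebra_linear_1d_composed")
model = AutoModelForSeq2SeqLM.from_pretrained("dbernsohn/algebra_linear_1d_composed")
```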
You can then use the model to solve linear 1D algebra equations and return numeric answers:
```python
query = "Suppose -d = 5 - 16. Let b = -579 + 584. Solve -b*c + 36 = d for c."
input_text = f"{query} </s>"
features = tokenizer([input_text], return_tensors='pt')

model.to('cuda')
output = model.generate(input_ids=features['input_ids'].cuda(),
                        attention_mask=features['attention_mask'].cuda())

tokenizer.decode(output[0])
# <pad> 5</s>
```
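The raw decode keeps the `<pad>` and `</s>` tokens; passing `skip_special_tokens=True` strips them so only the numeric answer remains (a small usage sketch, reusing `output` from above):

```python
# Strip <pad>/</s> so only the predicted number is left.
answer = tokenizer.decode(output[0], skip_special_tokens=True).strip()
print(answer)  # 5
```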
More examples:
- Suppose -d = 5 - 16. Let b = -579 + 584. Solve -b*c + 36 = d for c.
  - Answer: 5 Pred: 5
- Suppose 3*v - l + 9 = 4*v, 0 = -5*v + 5*l - 5. Let f(s) = 3*s**2 + 1. Let g be f(-1). Suppose 63 = g*x - x. Solve -5*i + v + x = 0 for i.
  - Answer: 5 Pred: 5
- Let w be 2 - (0 - 0)/(-2). Let f = -110 - -110. Suppose f*m - 4*m + 3*m = 0. Solve m*v = -w*v for v.
  - Answer: 0 Pred: 0
- Let a(h) = -34*h**3 - 15 + 3*h + 36*h**3 + 8*h**2 + 5*h**2. Let r be a(-6). Solve 2*z = r*z for z.
  - Answer: 0 Pred: 0
- Suppose -3*p + 24 = -3*c, 0*c + 6 = -2*c. Suppose -67 = 4*i + 289. Let t = i + 94. Solve t = 2*y - p for y.
  - Answer: 5 Pred: 5
- Let b = -36 + 53. Suppose -7*u - b = -73. Solve j + 3*j = -u for j.
  - Answer: -2 Pred: -2
- Let h be 8*((-2)/2 + 14)*1. Let y = -101 + h. Solve y*p = -p for p.
  - Answer: 0 Pred: 0
- Let b = 178 - 79. Let s be 9/(-1 - 2 - b/(-22)). Solve s = -k - k for k.
  - Answer: -3 Pred: -3
- Suppose 31 = -4*z + 11, -3*k - 5*z - 22 = 0. Solve 23 = -11*p + k for p.
  - Answer: -2 Pred: -2
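The answer/prediction pairs above can be reproduced with a short loop over the queries. The helper below is a minimal sketch (the `solve` name and the hard-coded examples are illustrative, not taken from the original training code), assuming the tokenizer and model are already loaded as shown earlier:

```python
examples = [
    ("Suppose -d = 5 - 16. Let b = -579 + 584. Solve -b*c + 36 = d for c.", "5"),
    ("Let b = -36 + 53. Suppose -7*u - b = -73. Solve j + 3*j = -u for j.", "-2"),
    ("Suppose 31 = -4*z + 11, -3*k - 5*z - 22 = 0. Solve 23 = -11*p + k for p.", "-2"),
]

def solve(query: str) -> str:
    # Tokenize the question, run greedy generation, and strip special tokens.
    features = tokenizer([f"{query} </s>"], return_tensors='pt').to(model.device)
    output = model.generate(input_ids=features['input_ids'],
                            attention_mask=features['attention_mask'])
    return tokenizer.decode(output[0], skip_special_tokens=True).strip()

for query, answer in examples:
    print(f"Answer: {answer} Pred: {solve(query)}")
```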
The full training process and hyperparameters are available in my GitHub repo.
Created by Dor Bernsohn