MATHWELL
MATHWELL, released with the paper MATHWELL: Generating Educational Math Word Problems Using Teacher Annotations, is a finetuned Llama-2 (70B) model that generates customized, educational grade school math word problems together with Python function solutions to those problems. Generated problems are (1) solvable, (2) accurate, and (3) appropriate, criteria that are essential for successfully supplementing grade school students' math education. On average, 74% of MATHWELL's problems with executable solutions are solvable, accurate, and appropriate.
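For illustration, a generated problem and its paired Python function solution might look like the following (a hypothetical example written for this card, not actual MATHWELL output):

```python
# Hypothetical example of a word problem and its Python function solution
# (illustrative only; not generated by MATHWELL).

# Question: A soccer team has 24 water bottles to share equally among its
# 8 players during practice. How many water bottles does each player get?

def solution():
    # There are 24 water bottles in total
    water_bottles = 24
    # The team has 8 players
    players = 8
    # Each player gets an equal share of the bottles
    bottles_per_player = water_bottles // players
    result = bottles_per_player
    return result
```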
For more details on how MATHWELL was trained and evaluated, please see our paper. Our repo contains a sample script for loading and interacting with MATHWELL.
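If you want to try the model directly, a minimal loading-and-generation sketch is below. It assumes the MATHWELL adapter weights are hosted on the Hugging Face Hub and that you have access to the gated Llama-2 base weights; the repo ids and prompt wording here are assumptions for illustration, not taken from the official sample script:

```python
# Minimal sketch for loading MATHWELL and generating a word problem.
# The adapter repo id and prompt below are assumed, not official.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Llama-2-70b-hf"  # gated base model; requires access approval
adapter_id = "bryanchrist/MATHWELL"    # assumed adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    # 8-bit loading mirrors the quantization config used during training
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)

prompt = (
    "Write a grade school math word problem about soccer and a Python "
    "function with a commented out step-by-step solution to solve the "
    "word problem."
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since the base model has 70B parameters, 8-bit loading substantially reduces the memory footprint at inference time.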
Training procedure
The following bitsandbytes quantization config was used during training (a sketch of the equivalent `BitsAndBytesConfig` follows the list):
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
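For reference, a minimal sketch of how these fields map onto a transformers `BitsAndBytesConfig` (an assumed reconstruction from the list above, not the original training code):

```python
# Assumed reconstruction of the training-time quantization config
# from the fields listed above.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    # 4-bit fields are present in the config but inactive with load_in_8bit=True
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
```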
Framework versions
- PEFT 0.6.0.dev0
Citation
```bibtex
@inproceedings{christ-etal-2024-mathwell,
    title = "{MATHWELL}: Generating Educational Math Word Problems Using Teacher Annotations",
    author = "Christ, Bryan R and
      Kropko, Jonathan and
      Hartvigsen, Thomas",
    editor = "Al-Onaizan, Yaser and
      Bansal, Mohit and
      Chen, Yun-Nung",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2024",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.findings-emnlp.696",
    pages = "11914--11938",
    abstract = "Math word problems are critical K-8 educational tools, but writing them is time consuming and requires extensive expertise. To be educational, problems must be solvable, have accurate answers, and, most importantly, be educationally appropriate. We propose that language models have potential to support K-8 math education by automatically generating word problems. However, evaluating educational appropriateness is hard to quantify. We fill this gap by having teachers evaluate problems generated by LLMs, who find existing models and data often fail to be educationally appropriate. We then explore automatically generating *educational* word problems, ultimately using our expert annotations to finetune a 70B language model. Our model, MATHWELL, is the first K-8 word problem generator targeted at educational appropriateness. Further expert studies find MATHWELL generates problems far more solvable, accurate, and appropriate than public models. MATHWELL also matches GPT-4{'}s problem quality while attaining more appropriate reading levels for K-8 students and avoiding generating harmful questions.",
}
```