---
license: apache-2.0
language:
- en
---
# TinyLlama-1.1B
We used [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) as the base model.
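For reference, a minimal snippet for loading the base model with `transformers` (standard Hugging Face usage, nothing specific to this fine-tune):

```python
# Minimal sketch: load the TinyLlama chat model with transformers.
# Assumes a recent transformers release and a bfloat16-capable GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```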
The goal was to improve performance on basic algebra (i.e., solving systems of linear equations).
The base model was fine-tuned on 8k rows of synthetic solution data generated by [OpenMath-Mistral-7B-v0.1-hf](https://huggingface.co/nvidia/OpenMath-Mistral-7B-v0.1-hf) on [ALG-514](https://paperswithcode.com/sota/math-word-problem-solving-on-alg514).
We used the [NeMo Skills](https://github.com/Kipok/NeMo-Skills) pipeline for inference with code execution and for generating the synthetic data. Hugging Face's SFTTrainer was used for fine-tuning, as the NeMo Skills training pipeline is a buggy mess; a rough sketch of the setup is below. Fine-tuning took 30 minutes on an RTX 3090.
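The sketch below shows the shape of the SFT setup; the dataset file, text column, and hyperparameters are illustrative placeholders, not our exact configuration. It uses the trl 0.7-era API, where `dataset_text_field` and `max_seq_length` are passed directly to `SFTTrainer`; newer trl versions move these into `SFTConfig`.

```python
# Illustrative SFT sketch; the dataset file, column name, and hyperparameters
# are placeholders rather than the exact values used for this model.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

dataset = load_dataset("json", data_files="synthetic_solutions.jsonl", split="train")

trainer = SFTTrainer(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    train_dataset=dataset,
    dataset_text_field="text",  # placeholder column holding prompt + solution
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="tinyllama-alg514-sft",
        per_device_train_batch_size=4,
        num_train_epochs=2,
        learning_rate=2e-5,
        bf16=True,
        logging_steps=50,
    ),
)
trainer.train()
```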
Notes from previous model cards:
> We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
## Eval
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64388bdd43d932c4623e4983/H07dGzwOfzcvP1GFA1GUq.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64388bdd43d932c4623e4983/Qr7rvIms3AL67jltHBXnr.png)
Note that `checkpoint_0` is the base model and `checkpoint_mistral` is OpenMath-Mistral-7B-v0.1-hf.
The performance is _not good_™, but this model could still be used to quickly generate synthetic data, since its coverage of the dataset is decent (see the sketch below). The uploaded model is checkpoint-2.6k.
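If you want to use it that way, here is a sketch of drafting several candidate solutions per problem and keeping only the ones whose final answer verifies. The repo id below is a placeholder for wherever this checkpoint is hosted.

```python
# Sketch: sample multiple candidate solutions per problem, then filter by
# checking the final answer. The repo id is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-org/tinyllama-1.1b-alg514-sft"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Solve the system: x + y = 10 and 2x - y = 2."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    num_return_sequences=4,  # multiple samples per problem improves coverage
)
for text in tokenizer.batch_decode(outputs[:, inputs.shape[1]:], skip_special_tokens=True):
    print(text)  # keep only candidates whose final answer checks out
```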
People involved in creating this fine-tune:
- Coulton Theuer [theuerc@umich.edu]
- Bret Ellenbogen [bretelle@umich.edu]
- Victoria Chang [vgc@umich.edu]