---
dataset_info:
  features:
    - name: instruction
      dtype: string
    - name: output
      dtype: string
  splits:
    - name: train
      num_bytes: 15268888.05
      num_examples: 487500
    - name: test
      num_bytes: 391509.95
      num_examples: 12500
  download_size: 12160789
  dataset_size: 15660398
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
tags:
  - math
---

# Simple Math

Just like my teacher gave me homework, I thought we could also include some of these basics in the training of our models.

The dataset was created with the very simple code that is in this repo. If you add more complex operations and so on, please share the code :D Thank you!
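
As a rough sketch of the idea (not the repo's actual generator script; the operand range and prompt template below are made up for illustration), something like this produces instruction/output pairs in the same shape as the dataset's features:

```python
import random
import operator
from datasets import Dataset

# Toy re-implementation of the idea: pick two operands and a basic operation,
# then format them as an instruction/output pair matching the dataset features.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def make_example(max_value: int = 100) -> dict:
    a = random.randint(0, max_value)
    b = random.randint(0, max_value)
    sym = random.choice(list(OPS))
    return {
        "instruction": f"{a} {sym} {b} = ",  # `instruction` feature (string)
        "output": str(OPS[sym](a, b)),       # `output` feature (string)
    }

# Build a small dataset and split it, mirroring the published train/test layout.
examples = [make_example() for _ in range(10_000)]
ds = Dataset.from_list(examples).train_test_split(test_size=0.025, seed=42)
print(ds["train"][0])
```

The published dataset itself can be pulled directly with `datasets.load_dataset("fblgit/simple-math")`, which exposes the `train` and `test` splits declared in the metadata above.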

## Does it Work?

### 34BEAGLES Evaluation

hf (pretrained=/data/models/UNA-34Beagles-v1-final,dtype=bfloat16,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto (8)
|    Tasks     |Version|Filter|n-shot| Metric |Value |   |Stderr|
|--------------|-------|------|-----:|--------|-----:|---|-----:|
|arc_challenge |Yaml   |none  |    25|acc     |0.7039|±  |0.0133|
|              |       |none  |    25|acc_norm|0.7321|±  |0.0129|
|truthfulqa_mc2|Yaml   |none  |     0|acc     |0.7387|±  |0.0141|

hf (pretrained=/data/models/UNA-34Beagles-v1-final,dtype=bfloat16,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto
|Tasks|Version|  Filter  |n-shot|  Metric   |Value |   |Stderr|
|-----|-------|----------|-----:|-----------|-----:|---|-----:|
|gsm8k|Yaml   |get-answer|     5|exact_match|0.6399|±  |0.0132|

|      Groups      |Version|Filter|n-shot|Metric|Value |   |Stderr|
|------------------|-------|------|-----:|------|-----:|---|-----:|
|mmlu              |N/A    |none  |     0|acc   |0.7477|±  |0.1079|
| - humanities     |N/A    |none  |     0|acc   |0.7188|±  |0.0855|
| - other          |N/A    |none  |     0|acc   |0.7950|±  |0.1057|
| - social_sciences|N/A    |none  |     0|acc   |0.8297|±  |0.0664|
| - stem           |N/A    |none  |     0|acc   |0.6641|±  |0.1291|

### 34BEAGLES-MATH Evaluation

hf (pretrained=/data/models/34BeaglesMath-v1,dtype=bfloat16,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto
|Tasks|Version|  Filter  |n-shot|  Metric   |Value |   |Stderr|
|-----|-------|----------|-----:|-----------|-----:|---|-----:|
|gsm8k|Yaml   |get-answer|     5|exact_match|0.6505|±  |0.0131|

hf (pretrained=/data/models/34BeaglesMath-v1,dtype=bfloat16,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto (8)
|    Tasks     |Version|Filter|n-shot| Metric |Value |   |Stderr|
|--------------|-------|------|-----:|--------|-----:|---|-----:|
|arc_challenge |Yaml   |none  |    25|acc     |0.7090|±  |0.0133|
|              |       |none  |    25|acc_norm|0.7329|±  |0.0129|
|truthfulqa_mc2|Yaml   |none  |     0|acc     |0.7378|±  |0.0141|

|      Groups      |Version|Filter|n-shot|Metric|Value |   |Stderr|
|------------------|-------|------|-----:|------|-----:|---|-----:|
|mmlu              |N/A    |none  |     0|acc   |0.7524|±  |0.1045|
| - humanities     |N/A    |none  |     0|acc   |0.7307|±  |0.0846|
| - other          |N/A    |none  |     0|acc   |0.7937|±  |0.1029|
| - social_sciences|N/A    |none  |     0|acc   |0.8274|±  |0.0667|
| - stem           |N/A    |none  |     0|acc   |0.6708|±  |0.1236|
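
The tables above are lm-evaluation-harness outputs. If you want to reproduce one of these runs, a minimal sketch of the harness's Python API could look like the following (this assumes lm-eval 0.4.x; the exact harness version used for the numbers above isn't stated, and the model path is a local placeholder from the logs):

```python
# Sketch of a 5-shot gsm8k run with lm-evaluation-harness (assumed 0.4.x API).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=/data/models/34BeaglesMath-v1,dtype=bfloat16,trust_remote_code=True",
    tasks=["gsm8k"],
    num_fewshot=5,
    batch_size="auto",
)
print(results["results"]["gsm8k"])  # exact_match and stderr, as in the table above
```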

I think it works. Not too difficult, not too easy; as a curriculum it works well :)

Note to contributors: thank you to everyone contributing to the experiment with beautiful commits and good spirit.

- The model needs some splits.
- The complexity has to be gradual, as shown in the experiments (see the sketch after this list).
- Feel free to contribute to the README evaluation tests.
- Let's aim to build an ablation & paper together. All contributors will be cited.
- Add your log entry in the Versions section so we can keep track, thanks.
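
To make the "gradual complexity" point concrete, here is a small sketch of a staged generator. The stage boundaries and expression format are illustrative assumptions, not the ones used by the repo's gradual-complexity script:

```python
import random

# Hypothetical curriculum stages: each stage widens the operand range and
# allows longer chains of operations. Boundaries are illustrative only.
STAGES = [
    {"max_value": 10,   "max_terms": 2},
    {"max_value": 100,  "max_terms": 3},
    {"max_value": 1000, "max_terms": 4},
]

def make_staged_example(stage: dict) -> dict:
    n_terms = random.randint(2, stage["max_terms"])
    terms = [random.randint(0, stage["max_value"]) for _ in range(n_terms)]
    symbols = [random.choice("+-*") for _ in range(n_terms - 1)]
    expr = str(terms[0])
    for sym, term in zip(symbols, terms[1:]):
        expr += f" {sym} {term}"
    # The expression string is built entirely from generated numbers/operators,
    # so evaluating it yields the ground-truth answer with normal precedence.
    answer = eval(expr)
    return {"instruction": f"{expr} = ", "output": str(answer)}

# A tiny curriculum: a few examples per stage, easiest first.
curriculum = [make_staged_example(stage) for stage in STAGES for _ in range(5)]
print(curriculum[0], curriculum[-1], sep="\n")
```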

## Versions

- 24.01.24: Added gradual complexity in a separate script.
- 20-23.01.24: Multiple contributions with operations and increased complexity in the main generator script.

## Citations

If you use Simple Math to train your model, please cite it in your model card or paper.

```
@misc{simplemath,
  title={Simple-Math: 2+2=4 4-1=3},
  author={Xavier Murias},
  year={2024},
  publisher = {Juanako.AI},
  journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/datasets/fblgit/simple-math}},
}
```