---
license: llama2
datasets:
- gsm8k
- competition_math
language:
- en
metrics:
- exact_match
library_name: transformers
pipeline_tag: text-generation
tags:
- code
- math
---
<h1 align="center">⚠️ Testing: Quantized ToRA-Code-13B-v1.0 (4-bit, 128g) ⚠️</h1>
<hr>
<p align="center"><b>Original README</b></p>
<h1 align="center">
ToRA: A Tool-Integrated Reasoning Agent <br> for Mathematical Problem Solving
</h1>

<p align="center">
<a href="https://microsoft.github.io/ToRA/"><b>[🌐 Website]</b></a> •
<a href="https://arxiv.org/pdf/2309.17452.pdf"><b>[📜 Paper]</b></a> •
<a href="https://huggingface.co/llm-agents"><b>[🤗 HF Models]</b></a> •
<a href="https://github.com/microsoft/ToRA"><b>[🐱 GitHub]</b></a>
<br>
<a href="https://twitter.com/zhs05232838/status/1708860992631763092"><b>[🐦 Twitter]</b></a> •
<a href="https://www.reddit.com/r/LocalLLaMA/comments/1703k6d/tora_a_toolintegrated_reasoning_agent_for/"><b>[💬 Reddit]</b></a> •
<a href="https://notes.aimodels.fyi/researchers-announce-tora-training-language-models-to-better-understand-math-using-external-tools/">[🍀 Unofficial Blog]</a>
<!-- <a href="#-quick-start">Quick Start</a> • -->
<!-- <a href="#%EF%B8%8F-citation">Citation</a> -->
</p>

<p align="center">
Repo for "<a href="https://arxiv.org/pdf/2309.17452.pdf" target="_blank">ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving</a>"
</p>

## 🔥 News

- [2023/10/08] 🔥🔥🔥 All ToRA models released at [HuggingFace](https://huggingface.co/llm-agents)!!!
- [2023/09/29] ToRA paper, repo, and website released.

## 💡 Introduction

ToRA is a series of Tool-integrated Reasoning Agents designed to solve challenging mathematical reasoning problems by interacting with tools, e.g., computation libraries and symbolic solvers. The ToRA series seamlessly integrates natural language reasoning with the use of external tools, combining the analytical power of language with the computational efficiency of tools.

| Model | Size | GSM8k | MATH | AVG@10 math tasks<sup>&dagger;</sup> |
|---|---|---|---|---|
| GPT-4 | - | 92.0 | 42.5 | 78.3 |
| GPT-4 (PAL) | - | 94.2 | 51.8 | 86.4 |
| [ToRA-7B](https://huggingface.co/llm-agents/tora-7b-v1.0) | 7B | 68.8 | 40.1 | 62.4 |
| [ToRA-Code-7B](https://huggingface.co/llm-agents/tora-code-7b-v1.0) | 7B | 72.6 | 44.6 | 66.5 |
| [ToRA-13B](https://huggingface.co/llm-agents/tora-13b-v1.0) | 13B | 72.7 | 43.0 | 65.9 |
| [ToRA-Code-13B](https://huggingface.co/llm-agents/tora-code-13b-v1.0) | 13B | 75.8 | 48.1 | 71.3 |
| [ToRA-Code-34B<sup>*</sup>](https://huggingface.co/llm-agents/tora-code-34b-v1.0) | 34B | 80.7 | **51.0** | 74.8 |
| [ToRA-70B](https://huggingface.co/llm-agents/tora-70b-v1.0) | 70B | **84.3** | 49.7 | **76.9** |

- <sup>*</sup>ToRA-Code-34B is currently the first and only open-source model to achieve over 50% accuracy (pass@1) on the MATH dataset. It significantly outperforms GPT-4's CoT result (51.0 vs. 42.5) and is competitive with GPT-4 solving problems with programs (PAL). By open-sourcing our code and models, we hope more breakthroughs will come!

- <sup>&dagger;</sup>The 10 math tasks are GSM8k, MATH, GSM-Hard, SVAMP, TabMWP, ASDiv, SingleEQ, SingleOP, AddSub, and MultiArith.
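
The tool-integrated loop can be illustrated with a minimal sketch: the model emits a rationale that interleaves text with a Python block, the runtime extracts and executes that block, and the printed result becomes part of the answer. Everything below (the helper name, the rationale string) is a hypothetical illustration, not the official pipeline — the real implementation in the ToRA GitHub repo handles prompting, sandboxing, timeouts, and multi-round interaction.

```python
import contextlib
import io
import re


def run_tool_call(model_output: str) -> str:
    """Extract the first ```python ...``` block from a model rationale,
    execute it, and return whatever it printed.

    Illustrative only: `exec` with no sandboxing must not be used on
    untrusted model output in a real deployment.
    """
    match = re.search(r"```python\n(.*?)```", model_output, re.DOTALL)
    if match is None:
        return ""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(match.group(1), {})
    return buf.getvalue().strip()


# A toy rationale in the interleaved text + code style described above.
rationale = (
    "To find the total, we compute it with a program:\n"
    "```python\n"
    "print(17 * 4 + 12)\n"
    "```\n"
)
print(run_tool_call(rationale))  # 80
```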

## ⚡️ Training

The models are trained on ToRA-Corpus 16k, which contains tool-integrated reasoning trajectories for MATH and GSM8k generated by GPT-4.

We use imitation learning (i.e., SFT) to fine-tune the models, and then apply our proposed *output space shaping* to improve tool-integrated reasoning behaviors. Please refer to the [paper](https://arxiv.org/pdf/2309.17452.pdf) for more details.

## 🪁 Inference & Evaluation

Please refer to ToRA's [GitHub repo](https://github.com/microsoft/ToRA) for inference, evaluation, and training code.
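As a quick-start sketch (not the official inference code), the 🤗 `transformers` API can load and query a ToRA checkpoint. The prompt format, decoding settings, and example repo id below are assumptions; loading this card's 4-bit GPTQ checkpoint may additionally require `auto-gptq`/`optimum`.

```python
def generate_solution(question: str, model_id: str, max_new_tokens: int = 512) -> str:
    """Greedily decode a solution from a ToRA checkpoint.

    The "Question:/Solution:" prompt here is an illustrative assumption,
    not the model's documented prompt format; see the ToRA GitHub repo
    for the official inference setup.
    """
    # Imported lazily so the helper can be defined without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    prompt = f"Question: {question}\n\nSolution:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Drop the prompt tokens and return only the generated continuation.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )


# Downloads the weights and needs a GPU, so it is not run here:
# print(generate_solution("What is 17 * 4 + 12?", "llm-agents/tora-code-13b-v1.0"))
```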

## ☕️ Citation

If you find this repository helpful, please consider citing our paper:

```
@misc{gou2023tora,
  title={ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving},
  author={Zhibin Gou and Zhihong Shao and Yeyun Gong and Yelong Shen and Yujiu Yang and Minlie Huang and Nan Duan and Weizhu Chen},
  year={2023},
  eprint={2309.17452},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```