---
license: apache-2.0
datasets:
- wikitablequestions
metrics:
- accuracy
model-index:
- name: lever-wikitq-codex
  results:
  - task:
      type: code generation
    dataset:
      type: wikitablequestions
      name: WikiTQ (text-to-sql)
    metrics:
      - type: accuracy
        value: 65.8
        verified: false
---

# LEVER (for Codex on WikiTQ)

This is one of the models released with the paper ["LEVER: Learning to Verify Language-to-Code Generation with Execution"](https://arxiv.org/abs/2302.08468).

**Authors:** [Ansong Ni](https://niansong1996.github.io), Srini Iyer, Dragomir Radev, Ves Stoyanov, Wen-tau Yih, Sida I. Wang*, Xi Victoria Lin*

**Note**: This specific model is for Codex on the [WikiTQ](https://github.com/ppasupat/WikiTableQuestions) dataset. For the models trained on other datasets, please see:
* [lever-spider-codex](https://huggingface.co/niansong1996/lever-spider-codex)
* [lever-gsm8k-codex](https://huggingface.co/niansong1996/lever-gsm8k-codex)
* [lever-mbpp-codex](https://huggingface.co/niansong1996/lever-mbpp-codex)

![Model Image](https://huggingface.co/niansong1996/lever-wikitq-codex/resolve/main/lever_info.png)

# Model Details


## Model Description
The advent of pre-trained code language models (CodeLMs) has led to significant progress in language-to-code generation. State-of-the-art approaches in this area combine CodeLM decoding with sample pruning and reranking using test cases or heuristics based on the execution results. However, it is challenging to obtain test cases for many real-world language-to-code applications, and heuristics cannot fully capture the semantic features of the execution results, such as data type and value range, which often indicate the correctness of the program. In this work, we propose LEVER, a simple approach to improve language-to-code generation by learning to verify the generated programs with their execution results. Specifically, we train verifiers to determine whether a program sampled from the CodeLM is correct based on the natural language input, the program itself, and its execution results. The sampled programs are then reranked by combining the verification score with the CodeLM generation probability, and marginalizing over programs with the same execution results. On four datasets across the domains of table QA, math QA, and basic Python programming, LEVER consistently improves over the base CodeLMs (by 4.6% to 10.9% with code-davinci-002) and achieves new state-of-the-art results on all of them.
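
To make the reranking step concrete, here is a minimal, self-contained sketch of the aggregation described above. The candidate fields (`lm_logprob`, `verifier_prob`, `exec_result`) are illustrative assumptions; the actual implementation lives in the [Github Repo](https://github.com/niansong1996/lever).

```python
import math
from collections import defaultdict

def rerank(candidates):
    """Rerank sampled programs by marginalizing the joint generation/
    verification probability over programs with the same execution result."""
    score_by_result = defaultdict(float)
    for c in candidates:
        # p(program | input) * p(correct | input, program, result)
        score_by_result[c["exec_result"]] += math.exp(c["lm_logprob"]) * c["verifier_prob"]
    best_result = max(score_by_result, key=score_by_result.get)
    # Among programs sharing the winning result, return the strongest one.
    pool = [c for c in candidates if c["exec_result"] == best_result]
    return max(pool, key=lambda c: math.exp(c["lm_logprob"]) * c["verifier_prob"])

# Hypothetical usage with three sampled SQL programs:
samples = [
    {"program": "SELECT a ...", "exec_result": "42", "lm_logprob": -1.2, "verifier_prob": 0.9},
    {"program": "SELECT b ...", "exec_result": "42", "lm_logprob": -2.0, "verifier_prob": 0.8},
    {"program": "SELECT c ...", "exec_result": "7",  "lm_logprob": -0.9, "verifier_prob": 0.2},
]
print(rerank(samples)["program"])  # "SELECT a ..." wins after marginalization
```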


- **Developed by:** Yale University and Meta AI
- **Shared by:** Ansong Ni

- **Model type:** Text Classification
- **Language(s) (NLP):** More information needed
- **License:** Apache-2.0
- **Parent Model:** T5-large
- **Resources for more information:**
  - [Github Repo](https://github.com/niansong1996/lever)
  - [Associated Paper](https://arxiv.org/abs/2302.08468)


# Uses
 

## Direct Use

This model is *not* intended to be used directly. LEVER is used to verify and rerank the programs generated by code LLMs (e.g., Codex). We recommend checking out our [Github Repo](https://github.com/niansong1996/lever) for more details.
 
## Downstream Use

LEVER is trained to verify and rerank the programs sampled from code LLMs for different tasks.
More specifically, `lever-wikitq-codex` was trained on the outputs of `code-davinci-002` on the [WikiTQ](https://github.com/ppasupat/WikiTableQuestions) dataset. It can be used to rerank the SQL programs generated by Codex out of the box.
Moreover, it may also be applied to other models' outputs on the WikiTQ dataset, as studied in the [original paper](https://arxiv.org/abs/2302.08468).
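
As a rough sketch, scoring a single Codex sample with `transformers` might look like the following. The input serialization (question, program, and execution result joined with `|`) and the use of "yes"/"no" as label words are assumptions made for illustration; consult the [Github Repo](https://github.com/niansong1996/lever) for the exact format.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "niansong1996/lever-wikitq-codex"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
model.eval()

question = "which country had the most cyclists finish within the top 10?"
program = "SELECT country FROM w GROUP BY country ORDER BY COUNT(*) DESC LIMIT 1"
exec_result = "Italy"
source = f"{question} | {program} | {exec_result}"  # assumed serialization

inputs = tokenizer(source, return_tensors="pt")
start = torch.tensor([[model.config.decoder_start_token_id]])
with torch.no_grad():
    # Distribution over the first decoded token.
    logits = model(**inputs, decoder_input_ids=start).logits[0, -1]
probs = logits.softmax(-1)

# Normalize the probability mass between the assumed "yes"/"no" label words.
yes_id = tokenizer("yes", add_special_tokens=False).input_ids[0]
no_id = tokenizer("no", add_special_tokens=False).input_ids[0]
score = (probs[yes_id] / (probs[yes_id] + probs[no_id])).item()
print(f"verification score: {score:.3f}")
```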


 
## Out-of-Scope Use
 
The model should not be used to intentionally create hostile or alienating environments for people. 
 
# Bias, Risks, and Limitations
 
 
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.



## Recommendations
 
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

# Training Details
 
## Training Data
 
The model is trained with the outputs from `code-davinci-002` model on the [WikiTQ](https://github.com/ppasupat/WikiTableQuestions) dataset.
 
## Training Procedure

For each training example in the WikiTQ dataset, 20 program samples are drawn from the Codex model and then executed to obtain their execution information.
For each example and program sample, the natural language description, the program, and its execution information together form the input used to train the T5-based verifier to predict "yes" or "no" as the verification label.
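
A minimal sketch of how one such training instance might be assembled; the field serialization and the correctness check below are illustrative assumptions, not the repo's exact preprocessing.

```python
def make_instance(question, program, exec_result, gold_answer):
    # Assumed input format: question, program, and execution result joined with "|".
    source = f"{question} | {program} | {exec_result}"
    # A sampled program counts as correct when it executes to the gold answer.
    target = "yes" if exec_result == gold_answer else "no"
    return {"source": source, "target": target}

instance = make_instance(
    question="how many games were played in 2008?",
    program="SELECT COUNT(*) FROM w WHERE year = 2008",
    exec_result="12",
    gold_answer="12",
)
print(instance["target"])  # "yes"
```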

 
### Preprocessing
 
Please follow the instructions in the [Github Repo](https://github.com/niansong1996/lever) to reproduce the results.


 
### Speeds, Sizes, Times
 
More information needed 


 
# Evaluation
 
 
## Testing Data, Factors & Metrics
 
### Testing Data
 
Dev and test set of the [WikiTQ](https://github.com/ppasupat/WikiTableQuestions) dataset.
 
### Factors
More information needed
 
### Metrics
 
Execution accuracy (i.e., pass@1)
 
 
## Results 
 

### WikiTQ Text-to-SQL Generation

|                 | Exec. Acc. (Dev) | Exec. Acc. (Test) |
|-----------------|------------------|-------------------|
| Codex           |         49.6     |       53.0        |
| Codex+LEVER     |         64.6     |       65.8        |



 
# Model Examination

More information needed

 
# Environmental Impact
 
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
 
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
 
# Technical Specifications [optional]
 
## Model Architecture and Objective
 
`lever-wikitq-codex` is based on T5-large.

## Compute Infrastructure
 
More information needed
 
### Hardware
 
 
More information needed
 
### Software
 
More information needed.
 
# Citation

 
**BibTeX:**
 
 
```bibtex
@inproceedings{ni2023lever,
  title={{LEVER}: Learning to Verify Language-to-Code Generation with Execution},
  author={Ni, Ansong and Iyer, Srini and Radev, Dragomir and Stoyanov, Ves and Yih, Wen-tau and Wang, Sida I and Lin, Xi Victoria},
  booktitle={Proceedings of the 40th International Conference on Machine Learning (ICML'23)},
  year={2023}
}
```
 
 
 
 
# Glossary [optional]
 
More information needed

# More Information [optional]
More information needed 

 
# Model Card Author and Contact
 
Ansong Ni, contact info on [personal website](https://niansong1996.github.io)
 
# How to Get Started with the Model
 
This model is *not* intended to be used directly; please follow the instructions in the [Github Repo](https://github.com/niansong1996/lever).