---
license: apache-2.0
datasets:
- spider
metrics:
- accuracy
model-index:
- name: lever-spider-codex
  results:
  - task:
      type: code generation
    dataset:
      type: spider
      name: spider (text-to-sql)
    metrics:
      - type: accuracy
        value: 81.9
        verified: false
---

# LEVER for Codex on Spider
LEVER (Learning to Verify Language-to-Code Generation with Execution) is a verifier trained to rerank candidate SQL programs generated by Codex on the Spider text-to-SQL benchmark.

Basic usage: load the model and tokenizer with the `transformers` library as follows:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("Yale-LILY/lever-spider-codex")
model = T5ForConditionalGeneration.from_pretrained("Yale-LILY/lever-spider-codex")
```
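Once loaded, the verifier can be prompted with a question, a candidate SQL program, and (per the LEVER recipe) the program's execution result. The exact input formatting expected by this checkpoint is not documented here, so the `build_verifier_input` helper below is an illustrative assumption, not the released preprocessing code:

```python
def build_verifier_input(question: str, sql: str, exec_result: str) -> str:
    # Hypothetical formatting: LEVER conditions the verifier on the natural
    # language question, the candidate program, and its execution result.
    # The separators used by the released checkpoint may differ.
    return f"question: {question} sql: {sql} result: {exec_result}"

def score_candidate(model, tokenizer, text: str) -> str:
    # Run a single candidate through the seq2seq verifier and decode
    # its output (e.g., a yes/no verification token).
    inputs = tokenizer(text, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=8)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Example usage (downloads the checkpoint on first call):
# text = build_verifier_input("How many singers do we have?",
#                             "SELECT count(*) FROM singer", "5")
# print(score_candidate(model, tokenizer, text))
```

In the LEVER pipeline, a score like this is combined with the generator's own likelihood to rerank the sampled programs; the snippet above only sketches the per-candidate call.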