Dataset Card for Spider
Dataset Summary
Spider is a large-scale, complex, and cross-domain semantic parsing and text-to-SQL dataset annotated by 11 Yale students. The goal of the Spider challenge is to develop natural language interfaces to cross-domain databases.
Supported Tasks and Leaderboards
The leaderboard can be seen at https://yale-lily.github.io/spider
Languages
The text in the dataset is in English.
Dataset Structure
Data Instances
What do the instances that comprise the dataset represent?
Each instance is a natural language question paired with its equivalent SQL query.
How many instances are there in total?
8,034 in total: 7,000 in the train split and 1,034 in the dev split (see Data Splits).
What data does each instance consist of?
[More Information Needed]
Data Fields
- db_id: Database name
- question: Natural language question to be interpreted into SQL
- query: Target SQL query
- query_toks: List of tokens for the query
- query_toks_no_value: List of tokens for the query with literal values replaced by a placeholder
- question_toks: List of tokens for the question
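To make the field list above concrete, here is a sketch of what a single Spider-style instance looks like. The database id, question, and query below are illustrative values chosen for this example, not guaranteed to match an actual record in the dataset.

```python
# A hypothetical Spider-style instance covering every field listed in this card.
sample = {
    "db_id": "concert_singer",                      # name of the database the query runs against
    "question": "How many singers do we have?",     # natural language question
    "query": "SELECT count(*) FROM singer",         # target SQL query
    "query_toks": ["SELECT", "count", "(", "*", ")", "FROM", "singer"],
    "query_toks_no_value": ["select", "count", "(", "*", ")", "from", "singer"],
    "question_toks": ["How", "many", "singers", "do", "we", "have", "?"],
}

# Sanity check: the instance exposes exactly the fields documented above.
expected_fields = {
    "db_id", "question", "query",
    "query_toks", "query_toks_no_value", "question_toks",
}
assert set(sample) == expected_fields
```

The tokenized fields are convenient for sequence models: `question_toks` feeds the encoder, while `query_toks` (or the value-anonymized `query_toks_no_value`) serves as the decoding target.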
Data Splits
- train: 7,000 question and SQL query pairs
- dev: 1,034 question and SQL query pairs
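The splits above can be loaded with the Hugging Face `datasets` library. This is a sketch: the hub id `"spider"` and the split name `"validation"` (for the dev set) are assumptions about how the dataset is published, so adjust them if the dataset lives under a different namespace or split naming.

```python
# Sketch: load Spider and sanity-check the split sizes stated in this card.
# Assumptions: hub id "spider"; the dev split is exposed as "validation".

EXPECTED_SPLIT_SIZES = {"train": 7000, "validation": 1034}


def load_spider(hub_id: str = "spider"):
    """Load the Spider dataset and verify it matches the documented split sizes."""
    # Imported lazily so this module works without `datasets` installed;
    # requires `pip install datasets` to actually run.
    from datasets import load_dataset

    ds = load_dataset(hub_id)
    for split, expected in EXPECTED_SPLIT_SIZES.items():
        actual = len(ds[split])
        assert actual == expected, f"{split}: expected {expected}, got {actual}"
    return ds
```

Checking the split sizes at load time catches accidental use of a different dataset revision before training starts.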
[More Information Needed]
Dataset Creation
Curation Rationale
[More Information Needed]
Source Data
Initial Data Collection and Normalization
Who are the source language producers?
[More Information Needed]
Annotations
The dataset was annotated by 11 college students at Yale University.
Annotation process
Who are the annotators?
Personal and Sensitive Information
[More Information Needed]
Considerations for Using the Data
Social Impact of Dataset
Discussion of Biases
[More Information Needed]
Other Known Limitations
Additional Information
The authors listed on the homepage maintain and support the dataset.
Dataset Curators
[More Information Needed]
Licensing Information
The Spider dataset is licensed under CC BY-SA 4.0.
Citation Information
@article{yu2018spider,
title={Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task},
author={Yu, Tao and Zhang, Rui and Yang, Kai and Yasunaga, Michihiro and Wang, Dongxu and Li, Zifan and Ma, James and Li, Irene and Yao, Qingning and Roman, Shanelle and others},
journal={arXiv preprint arXiv:1809.08887},
year={2018}
}
Contributions
Thanks to @olinguyen for adding this dataset.