---
language:
- en
license: mit
task_categories:
- sentence-similarity
task_ids:
- semantic-similarity-classification
paperswithcode_id: embedding-data/Amazon-QA
pretty_name: Amazon-QA
tags:
- paraphrase-mining
---
Dataset Card for "Amazon-QA"
Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
Dataset Description
- Homepage: http://jmcauley.ucsd.edu/data/amazon/qa/
- Repository: More Information Needed
- Paper: More Information Needed
- Point of Contact: Julian McAuley
- Size of downloaded dataset files:
- Size of the generated dataset:
- Total amount of disk used: 247 MB
Dataset Summary
This dataset contains Question and Answer data from Amazon.
Disclaimer: The team releasing Amazon-QA did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team.
Supported Tasks
- Sentence Transformers training; useful for semantic search and sentence similarity (see the sketch below).
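For illustration only, here is a minimal semantic-search sketch using the sentence-transformers library; the checkpoint name and the example sentences are assumptions for demonstration and are not part of this dataset.

from sentence_transformers import SentenceTransformer, util

# Illustrative checkpoint; any Sentence Transformers bi-encoder can be used.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical candidate answers to search over.
corpus = [
    "The battery lasts about ten hours on a full charge.",
    "The case is made of brushed aluminum.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

# Embed a question and retrieve the closest answer.
query_embedding = model.encode("How long does the battery last?", convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=1)
print(hits)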
Languages
- English.
Dataset Structure
Each example in the dataset contains a pair of query and answer sentences and is formatted as a dictionary:
{"query": [sentence_1], "pos": [sentence_2]}
{"query": [sentence_1], "pos": [sentence_2]}
...
{"query": [sentence_1], "pos": [sentence_2]}
This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar sentences.
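As a rough, self-contained sketch of such training (the base checkpoint, example pairs, and hyperparameters below are illustrative assumptions, not prescribed by the dataset), query/answer pairs can be fed to a bi-encoder with an in-batch contrastive loss such as MultipleNegativesRankingLoss:

from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Illustrative (query, answer) pairs; in practice these come from the dataset.
pairs = [
    ("Does this phone support wireless charging?", "Yes, it supports Qi wireless charging."),
    ("How long is the power cord?", "The cord is about six feet long."),
]

model = SentenceTransformer("distilbert-base-uncased")  # any base checkpoint works

# Each InputExample holds one (query, positive answer) pair.
train_examples = [InputExample(texts=[query, answer]) for query, answer in pairs]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

# MultipleNegativesRankingLoss treats the other answers in a batch as negatives.
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)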
Usage Example
Install the 🤗 Datasets library with pip install datasets
and load the dataset from the Hub with:
from datasets import load_dataset
dataset = load_dataset("embedding-data/Amazon-QA")
The dataset is loaded as a DatasetDict
and has the format:
DatasetDict({
train: Dataset({
features: ['query', 'pos'],
num_rows: 1095290
})
})
Review an example with:
dataset["train"][0]