---
dataset_info:
  features:
  - name: set
    sequence: string
  splits:
  - name: train
    num_bytes: 1580895690.1875737
    num_examples: 1095326
  download_size: 665292967
  dataset_size: 1580895690.1875737
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- feature-extraction
- text-classification
- sentence-similarity
language:
- en
size_categories:
- 1M<n<10M
---
# Dataset Card for "WikiAnswers Small"
## Dataset Summary
`nikhilchigali/wikianswers_small` is a subset of the `embedding-data/WikiAnswers` dataset ([Link](https://huggingface.co/datasets/embedding-data/WikiAnswers)). This dataset is intended for the owner's personal use, and no rights are claimed over the data.
Whereas the original dataset has `3,386,256` rows, this subset contains only `1,095,326` rows (roughly 32% of the original).
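The exact sampling procedure is not documented here; the snippet below is a minimal sketch of how a random subset of this size could be drawn with 🤗 Datasets (the seed and method are assumptions, not the owner's actual procedure):
```python
from datasets import load_dataset

# Load the full parent dataset (~3.4M rows).
full = load_dataset("embedding-data/WikiAnswers", split="train")

# Shuffle, then keep the first 1,095,326 rows. The seed is an
# arbitrary illustration, not the one used to build this dataset.
subset = full.shuffle(seed=42).select(range(1_095_326))
```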
## Languages
English.
## Dataset Structure
Each example in the dataset contains 25 equivalent sentences and is formatted as a dictionary with a single key, `"set"`, whose value is the list of sentences:
```
{"set": [sentence_1, sentence_2, ..., sentence_25]}
{"set": [sentence_1, sentence_2, ..., sentence_25]}
...
{"set": [sentence_1, sentence_2, ..., sentence_25]}
```
### Usage Example
Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:
```python
from datasets import load_dataset
dataset = load_dataset("nikhilchigali/wikianswers_small")
```
The dataset is loaded as a `DatasetDict` and has the following format for `N` examples (here, `N = 1,095,326`):
```python
DatasetDict({
    train: Dataset({
        features: ['set'],
        num_rows: N
    })
})
```
Inspect example `i` with:
```python
dataset["train"][i]["set"]
```
### Source Data
* `embedding-data/WikiAnswers` on HuggingFace ([Link](https://huggingface.co/datasets/embedding-data/WikiAnswers))