---
language:
- en
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
task_categories:
- feature-extraction
- sentence-similarity
pretty_name: altlex
tags:
- sentence-transformers
dataset_info:
config_name: pair
features:
- name: text
dtype: string
- name: simplified
dtype: string
splits:
- name: train
num_bytes: 29717244
num_examples: 112696
download_size: 19447229
dataset_size: 29717244
configs:
- config_name: pair
data_files:
- split: train
path: pair/train-*
---
# Dataset Card for altlex
This dataset is a collection of pairs of English Wikipedia sentences and their simplified variants. See [altlex](https://github.com/chridey/altlex/) for additional information.
This dataset can be used directly with Sentence Transformers to train embedding models.
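For quick inspection, the `pair` subset described below can be loaded with 🤗 Datasets. This is a minimal sketch; the repository id `sentence-transformers/altlex` is assumed from this card's location:

```python
from datasets import load_dataset

# Load the "pair" configuration defined in this card's metadata.
dataset = load_dataset("sentence-transformers/altlex", "pair", split="train")

# Each row has a "text" column and a "simplified" column.
print(dataset[0])
```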
## Dataset Subsets
### `pair` subset
* Columns: "text", "simplified"
* Column types: `str`, `str`
* Examples:
```python
{
'text': "Valentin Berlinsky retired from the Borodin Quartet in September 2007 , and was succeeded by his protégé , Vladimir Balshin , but he still remained the group 's mentor .",
'simplified': 'Valentin Berlinsky retired from the Borodin Quartet in September 2007 .',
}
```
* Collection strategy: Reading the SimpleWiki dataset from [embedding-training-data](https://huggingface.co/datasets/sentence-transformers/embedding-training-data).
* Deduplicated: No
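
Because each row is a ("text", "simplified") pair, the subset can be fed into a pair-based loss such as `MultipleNegativesRankingLoss`. Below is a minimal training sketch, assuming the `sentence-transformers/altlex` repository id; the base checkpoint `microsoft/mpnet-base` is only an illustrative choice:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

# Load the "pair" subset (repository id assumed from this card).
train_dataset = load_dataset("sentence-transformers/altlex", "pair", split="train")

# Any encoder checkpoint can be used here; mpnet-base is just an example.
model = SentenceTransformer("microsoft/mpnet-base")

# MultipleNegativesRankingLoss treats the two columns as (anchor, positive)
# pairs and uses the other pairs in the batch as negatives.
loss = losses.MultipleNegativesRankingLoss(model)

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```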