IlyaGusev/headline_cause | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- ru
- en
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
pretty_name: HeadlineCause
tags:
- causal-reasoning
---
# Dataset Card for HeadlineCause
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/IlyaGusev/HeadlineCause
- **Paper:** [HeadlineCause: A Dataset of News Headlines for Detecting Causalities](https://arxiv.org/abs/2108.12626)
- **Point of Contact:** [Ilya Gusev](mailto:ilya.gusev@phystech.edu)
### Dataset Summary
A dataset for detecting implicit causal relations between pairs of news headlines. The dataset includes over 5,000 headline pairs from English news and over 9,000 headline pairs from Russian news, labeled through crowdsourcing. The pairs range from totally unrelated or belonging to the same general topic to pairs exhibiting causation and refutation relations.
### Usage
Loading Russian Simple task:
```python
from datasets import load_dataset
dataset = load_dataset("IlyaGusev/headline_cause", "ru_simple")
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
This dataset consists of two parts, Russian and English.
## Dataset Structure
### Data Instances
Every data instance contains a URL, a title, and a timestamp for each of the two headlines. The label is presented in three fields: the 'result' field is a textual label, the 'label' field is a numeric label, and the 'agreement' field shows the majority-vote agreement between annotators. Additional information includes the instance ID and whether there is a link between the two articles.
```
{
'left_url': 'https://www.kommersant.ru/doc/4347456',
'right_url': 'https://tass.ru/kosmos/8488527',
'left_title': 'NASA: информация об отказе сотрудничать с Россией по освоению Луны некорректна',
'right_title': 'NASA назвало некорректными сообщения о нежелании США включать РФ в соглашение по Луне',
'left_timestamp': datetime.datetime(2020, 5, 15, 19, 46, 20),
'right_timestamp': datetime.datetime(2020, 5, 15, 19, 21, 36),
'label': 0,
'result': 'not_cause',
'agreement': 1.0,
'id': 'ru_tg_101',
'has_link': True
}
```
### Data Splits
| Dataset | Split | Number of Instances |
|:---------|:---------|:---------|
| ru_simple | train | 7,641 |
| | validation | 955 |
| | test | 957 |
| en_simple | train | 4,332 |
| | validation | 542 |
| | test | 542 |
| ru_full | train | 5,713 |
| | validation | 715 |
| | test | 715 |
| en_full | train | 2,009 |
| | validation | 251 |
| | test | 252 |
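The other configurations from the table above are loaded in the same way as in the usage example. A minimal sketch that iterates over all four (configuration and split names taken from the table above):
```python
from datasets import load_dataset

# Load each configuration and print the sizes of its splits.
for config in ["ru_simple", "en_simple", "ru_full", "en_full"]:
    dataset = load_dataset("IlyaGusev/headline_cause", config)
    print(config, {split: len(dataset[split]) for split in dataset})
```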
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
Every candidate pair was annotated with [Yandex Toloka](https://toloka.ai/), a crowdsourcing platform. The task was to determine the relationship between two headlines, A and B. There were seven possible options: the titles are almost the same, A causes B, B causes A, A refutes B, B refutes A, A is linked with B in another way, A is not linked to B. The annotation guidelines were in Russian for Russian news and in English for English news.
Guidelines:
* Russian: [link](https://ilyagusev.github.io/HeadlineCause/toloka/ru/instruction.html)
* English: [link](https://ilyagusev.github.io/HeadlineCause/toloka/en/instruction.html)
Ten workers annotated every pair. The total annotation budget was $870, with an estimated hourly wage of 45 cents paid to participants. Annotation management was semi-automatic. Scripts are available in the [GitHub repository](https://github.com/IlyaGusev/HeadlineCause).
#### Who are the annotators?
Yandex Toloka workers were the annotators: 457 workers for the Russian part and 180 workers for the English part.
### Personal and Sensitive Information
The dataset is not anonymized, so individuals' names can be found in the dataset. Information about the original author is not included in the dataset. No information about annotators is included except a platform worker ID.
## Considerations for Using the Data
### Social Impact of Dataset
We do not see any direct malicious applications of our work. The data probably do not contain offensive content, as news agencies usually do not produce it, and a keyword search returned nothing. However, there are news documents in the dataset on several sensitive topics.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The data was collected by Ilya Gusev.
### Licensing Information
[More Information Needed]
### Citation Information
```bibtex
@misc{gusev2021headlinecause,
title={HeadlineCause: A Dataset of News Headlines for Detecting Causalities},
author={Ilya Gusev and Alexey Tikhonov},
year={2021},
eprint={2108.12626},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
[N/A] |
Intel/WEC-Eng | # WEC-Eng
A large-scale dataset for cross-document event coreference extracted from English Wikipedia. <br/>
- **Repository (Code for generating WEC):** https://github.com/AlonEirew/extract-wec
- **Paper:** https://aclanthology.org/2021.naacl-main.198/
### Languages
English
## Load Dataset
You can read in WEC-Eng files as follows (using the **huggingface_hub** library):
```python
from huggingface_hub import hf_hub_url, cached_download
import json
REPO_ID = "datasets/Intel/WEC-Eng"
splits_files = ["Dev_Event_gold_mentions_validated.json",
"Test_Event_gold_mentions_validated.json",
"Train_Event_gold_mentions.json"]
wec_eng = list()
for split_file in splits_files:
    wec_eng.append(json.load(open(cached_download(
        hf_hub_url(REPO_ID, split_file)), "r")))
```
## Dataset Structure
### Data Splits
- **Final version of the English CD event coreference dataset**<br>
- Train - Train_Event_gold_mentions.json
- Dev - Dev_Event_gold_mentions_validated.json
- Test - Test_Event_gold_mentions_validated.json
| | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| Clusters | 7,042 | 233 | 322 |
| Event Mentions | 40,529 | 1,250 | 1,893 |
- **The non (within clusters) controlled version of the dataset (lexical diversity)**<br>
- All (experimental) - All_Event_gold_mentions_unfiltered.json
### Data Instances
```json
{
"coref_chain": 2293469,
"coref_link": "Family Values Tour 1998",
"doc_id": "House of Pain",
"mention_context": [
"From",
"then",
"on",
",",
"the",
"members",
"continued",
"their"
],
"mention_head": "Tour",
"mention_head_lemma": "Tour",
"mention_head_pos": "PROPN",
"mention_id": "108172",
"mention_index": 1,
"mention_ner": "UNK",
"mention_type": 8,
"predicted_coref_chain": null,
"sent_id": 2,
"tokens_number": [
50,
51,
52,
53
],
"tokens_str": "Family Values Tour 1998",
"topic_id": -1
}
```
### Data Fields
|Field|Value Type|Value|
|---|:---:|---|
|coref_chain|Numeric|Coreference chain/cluster ID|
|coref_link|String|Coreference link Wikipedia page/article title|
|doc_id|String|Mention page/article title|
|mention_context|List[String]|Tokenized mention paragraph (including mention)|
|mention_head|String|Mention span head token|
|mention_head_lemma|String|Mention span head token lemma|
|mention_head_pos|String|Mention span head token POS|
|mention_id|String|Mention id|
|mention_index|Numeric|Mention index in json file|
|mention_ner|String|Mention NER|
|tokens_number|List[Numeric]|Mention token ids within the context|
|tokens_str|String|Mention span text|
|topic_id|Ignore|Ignore|
|mention_type|Ignore|Ignore|
|predicted_coref_chain|Ignore|Ignore|
|sent_id|Ignore|Ignore|
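As a small illustration of how these fields fit together, the sketch below rebuilds a mention's surface string from its context tokens. Note that `mention_context` in the instance above is truncated for display, so this only works on full records:
```python
def mention_text(mention):
    # tokens_number holds the indices of the mention tokens within mention_context
    tokens = [mention["mention_context"][i] for i in mention["tokens_number"]]
    return " ".join(tokens)

# Expected to reproduce mention["tokens_str"], e.g. "Family Values Tour 1998".
```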
## Citation
```
@inproceedings{eirew-etal-2021-wec,
title = "{WEC}: Deriving a Large-scale Cross-document Event Coreference dataset from {W}ikipedia",
author = "Eirew, Alon and
Cattan, Arie and
Dagan, Ido",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.198",
doi = "10.18653/v1/2021.naacl-main.198",
pages = "2498--2510",
abstract = "Cross-document event coreference resolution is a foundational task for NLP applications involving multi-text processing. However, existing corpora for this task are scarce and relatively small, while annotating only modest-size clusters of documents belonging to the same topic. To complement these resources and enhance future research, we present Wikipedia Event Coreference (WEC), an efficient methodology for gathering a large-scale dataset for cross-document event coreference from Wikipedia, where coreference links are not restricted within predefined topics. We apply this methodology to the English Wikipedia and extract our large-scale WEC-Eng dataset. Notably, our dataset creation method is generic and can be applied with relatively little effort to other Wikipedia languages. To set baseline results, we develop an algorithm that adapts components of state-of-the-art models for within-document coreference resolution to the cross-document setting. Our model is suitably efficient and outperforms previously published state-of-the-art results for the task.",
}
```
## License
We provide the following data sets under a <a href="https://creativecommons.org/licenses/by-sa/3.0/deed.en_US">Creative Commons Attribution-ShareAlike 3.0 Unported License</a>. The data is based on content extracted from Wikipedia that is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License.
## Contact
If you have any questions, please create a GitHub issue at https://github.com/AlonEirew/extract-wec. |
JIsanan/war-ceb-wikipedia | annotations_creators: []
language_creators:
- found
languages:
- war
- ceb
licenses: []
multilinguality:
- multilingual
pretty_name: Waray Cebu Wikipedia
size_categories:
- unknown
source_datasets: []
task_categories: []
task_ids: [] |
Jean-Baptiste/wikiner_fr | ---
language:
- fr
dataset_info:
features:
- name: id
dtype: int64
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': LOC
'2': PER
'3': MISC
'4': ORG
splits:
- name: test
num_bytes: 5954708
num_examples: 13410
- name: train
num_bytes: 54305659
num_examples: 120682
download_size: 12147768
dataset_size: 60260367
train-eval-index:
- config: Jean-Baptiste--wikiner_fr
task: token-classification
task_id: entity_extraction
splits:
eval_split: test
col_mapping:
tokens: tokens
ner_tags: tags
---
# Dataset Card for "wikiner_fr"
## Dataset Description
- **Homepage:** https://metatext.io/datasets/wikiner
- **Repository:**
- **Paper:** https://www.sciencedirect.com/science/article/pii/S0004370212000276?via%3Dihub
- **Leaderboard:**
- **Point of Contact:**
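A minimal loading sketch, assuming the dataset exposes the features declared in the YAML header above (token sequences with `ner_tags` class labels O, LOC, PER, MISC, ORG):
```python
from datasets import load_dataset

dataset = load_dataset("Jean-Baptiste/wikiner_fr", split="test")
# ner_tags is a sequence of class labels, as declared in the header above.
label_names = dataset.features["ner_tags"].feature.names
print(label_names)  # ['O', 'LOC', 'PER', 'MISC', 'ORG']
```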
|
Jeska/autonlp-data-vaccinfaq | ---
task_categories:
- text-classification
---
# AutoNLP Dataset for project: vaccinfaq
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
This dataset has been automatically processed by AutoNLP for project vaccinfaq.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"target": 6,
"text": "What je naam?"
},
{
"target": 6,
"text": "Hoe heet je?"
}
]
```
### Data Fields
The dataset has the following fields (also called "features"):
```json
{
"target": "ClassLabel(num_classes=181, names=['chitchat_ask_bye', 'chitchat_ask_hi', 'chitchat_ask_hi_de', 'chitchat_ask_hi_en', 'chitchat_ask_hi_fr', 'chitchat_ask_hoe_gaat_het', 'chitchat_ask_name', 'chitchat_ask_thanks', 'faq_ask_aantal_gevaccineerd', 'faq_ask_aantal_gevaccineerd_wereldwijd', 'faq_ask_afspraak_afzeggen', 'faq_ask_afspraak_gemist', 'faq_ask_algemeen_info', 'faq_ask_allergisch_na_vaccinatie', 'faq_ask_alternatieve_medicatie', 'faq_ask_andere_vaccins', 'faq_ask_astrazeneca', 'faq_ask_astrazeneca_bij_ouderen', 'faq_ask_astrazeneca_bloedklonters', 'faq_ask_astrazeneca_prik_2', 'faq_ask_attest', 'faq_ask_autisme_na_vaccinatie', 'faq_ask_auto-immuun', 'faq_ask_begeleiding', 'faq_ask_beschermen', 'faq_ask_beschermingsduur', 'faq_ask_beschermingspercentage', 'faq_ask_besmetten_na_vaccin', 'faq_ask_betalen_voor_vaccin', 'faq_ask_betrouwbaar', 'faq_ask_betrouwbare_bronnen', 'faq_ask_bijsluiter', 'faq_ask_bijwerking_AZ', 'faq_ask_bijwerking_JJ', 'faq_ask_bijwerking_algemeen', 'faq_ask_bijwerking_lange_termijn', 'faq_ask_bijwerking_moderna', 'faq_ask_bijwerking_pfizer', 'faq_ask_bloed_geven', 'faq_ask_borstvoeding', 'faq_ask_buitenlander', 'faq_ask_chronisch_ziek', 'faq_ask_combi', 'faq_ask_complottheorie', 'faq_ask_complottheorie_5G', 'faq_ask_complottheorie_Bill_Gates', 'faq_ask_contra_ind', 'faq_ask_corona_is_griep', 'faq_ask_corona_vermijden', 'faq_ask_covid_door_vaccin', 'faq_ask_curevac', 'faq_ask_derde_prik', 'faq_ask_dna', 'faq_ask_duur_vaccinatie', 'faq_ask_eerst_weigeren', 'faq_ask_eerste_prik_buitenland', 'faq_ask_essentieel_beroep', 'faq_ask_experimenteel', 'faq_ask_foetus', 'faq_ask_geen_antwoord', 'faq_ask_geen_risicopatient', 'faq_ask_geen_uitnodiging', 'faq_ask_gestockeerd', 'faq_ask_gezondheidstoestand_gekend', 'faq_ask_gif_in_vaccin', 'faq_ask_goedkeuring', 'faq_ask_groepsimmuniteit', 'faq_ask_hartspierontsteking', 'faq_ask_hersenziekte', 'faq_ask_hoe_dodelijk', 'faq_ask_hoe_weet_overheid', 'faq_ask_hoeveel_dosissen', 'faq_ask_huisarts', 'faq_ask_huisdieren', 'faq_ask_iedereen', 'faq_ask_in_vaccin', 'faq_ask_info_vaccins', 'faq_ask_janssen', 'faq_ask_janssen_een_dosis', 'faq_ask_jong_en_gezond', 'faq_ask_keuze', 'faq_ask_keuze_vaccinatiecentrum', 'faq_ask_kinderen', 'faq_ask_kosjer_halal', 'faq_ask_leveringen', 'faq_ask_logistiek', 'faq_ask_logistiek_veilig', 'faq_ask_magnetisch', 'faq_ask_man_vrouw_verschillen', 'faq_ask_mantelzorger', 'faq_ask_maximaal_een_dosis', 'faq_ask_meer_bijwerkingen_tweede_dosis', 'faq_ask_minder_mobiel', 'faq_ask_moderna', 'faq_ask_mondmasker', 'faq_ask_motiveren', 'faq_ask_mrna_vs_andere_vaccins', 'faq_ask_naaldangst', 'faq_ask_nadelen', 'faq_ask_nuchter', 'faq_ask_ontwikkeling', 'faq_ask_onvruchtbaar', 'faq_ask_oplopen_vaccinatie', 'faq_ask_pfizer', 'faq_ask_phishing', 'faq_ask_pijnstiller', 'faq_ask_planning_eerstelijnszorg', 'faq_ask_planning_ouderen', 'faq_ask_positieve_test_na_vaccin', 'faq_ask_prioritaire_gropen', 'faq_ask_privacy', 'faq_ask_probleem_registratie', 'faq_ask_problemen_uitnodiging', 'faq_ask_quarantaine', 'faq_ask_qvax_probleem', 'faq_ask_reproductiegetal', 'faq_ask_risicopatient', 'faq_ask_risicopatient_diabetes', 'faq_ask_risicopatient_hartvaat', 'faq_ask_risicopatient_immuunziekte', 'faq_ask_risicopatient_kanker', 'faq_ask_risicopatient_luchtwegaandoening', 'faq_ask_smaakverlies', 'faq_ask_snel_ontwikkeld', 'faq_ask_sneller_aan_de_beurt', 'faq_ask_taxi', 'faq_ask_test_voor_vaccin', 'faq_ask_testen', 'faq_ask_tijd_tot_tweede_dosis', 'faq_ask_timing_andere_vaccins', 'faq_ask_trage_start', 
'faq_ask_tweede_dosis_afspraak', 'faq_ask_tweede_dosis_vervroegen', 'faq_ask_twijfel_bijwerkingen', 'faq_ask_twijfel_effectiviteit', 'faq_ask_twijfel_inhoud', 'faq_ask_twijfel_ivm_vaccinatie', 'faq_ask_twijfel_noodzaak', 'faq_ask_twijfel_ontwikkeling', 'faq_ask_twijfel_praktisch', 'faq_ask_twijfel_vaccins_zelf', 'faq_ask_twijfel_vrijheid', 'faq_ask_uit_flacon', 'faq_ask_uitnodiging_afspraak_kwijt', 'faq_ask_uitnodiging_na_vaccinatie', 'faq_ask_vaccin_doorgeven', 'faq_ask_vaccin_immuunsysteem', 'faq_ask_vaccin_variant', 'faq_ask_vaccinatiecentrum', 'faq_ask_vaccine_covid_gehad', 'faq_ask_vaccine_covid_gehad_effect', 'faq_ask_vakantie', 'faq_ask_veelgestelde_vragen', 'faq_ask_vegan', 'faq_ask_verplicht', 'faq_ask_verschillen', 'faq_ask_vrijwillig_Janssen', 'faq_ask_vrijwilliger', 'faq_ask_waar_en_wanneer', 'faq_ask_waarom', 'faq_ask_waarom_niet_verplicht', 'faq_ask_waarom_ouderen_eerst', 'faq_ask_waarom_twee_prikken', 'faq_ask_waarom_twijfel', 'faq_ask_wanneer_algemene_bevolking', 'faq_ask_wanneer_iedereen_gevaccineerd', 'faq_ask_wat_is_corona', 'faq_ask_wat_is_rna', 'faq_ask_wat_is_vaccin', 'faq_ask_wat_na_vaccinatie', 'faq_ask_welk_vaccin_krijg_ik', 'faq_ask_welke_vaccin', 'faq_ask_wie_ben_ik', 'faq_ask_wie_doet_inenting', 'faq_ask_wie_is_risicopatient', 'faq_ask_wie_nu', 'faq_ask_wilsonbekwaam', 'faq_ask_zwanger', 'get_started', 'nlu_fallback', 'test'], names_file=None, id=None)",
"text": "Value(dtype='string', id=None)"
}
```
### Data Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 11651 |
| valid | 1267 |
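A minimal loading sketch; the assumption that `load_dataset` picks up the processed files and the split keys (`train`, `valid`) directly from this repository is not stated on this card:
```python
from datasets import load_dataset

# Assumed: the repository's processed files are auto-detected by load_dataset.
dataset = load_dataset("Jeska/autonlp-data-vaccinfaq")
print(dataset)              # expected: train / valid splits as in the table above
print(dataset["train"][0])  # {'target': ..., 'text': ...}
```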
|
KBLab/overlim | ---
annotations_creators:
- other
language_creators:
- other
language:
- sv
- da
- nb
license:
- cc-by-4.0
multilinguality:
- translation
size_categories:
- unknown
source_datasets:
- extended|glue
- extended|super_glue
task_categories:
- text-classification
task_ids:
- natural-language-inference
- semantic-similarity-classification
- sentiment-classification
- text-scoring
pretty_name: overlim
tags:
- qa-nli
- paraphrase-identification
---
# Dataset Card for OverLim
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The _OverLim_ dataset contains some of the GLUE and SuperGLUE tasks automatically
translated to Swedish, Danish, and Norwegian (bokmål), using the OpusMT models
for MarianMT.
The translation quality was not manually checked and may thus be faulty.
Results on these datasets should therefore be interpreted carefully.
If you want an easy script to train and evaluate your models, have a look [here](https://github.com/kb-labb/overlim_eval).
### Supported Tasks and Leaderboards
The data contains the following tasks from GLUE and SuperGLUE:
- GLUE
- `mnli`
- `mrpc`
- `qnli`
- `qqp`
- `rte`
- `sst`
- `stsb`
- `wnli`
- SuperGLUE
- `boolq`
- `cb`
- `copa`
- `rte`
### Languages
- Swedish
- Danish
- Norwegian (bokmål)
## Dataset Structure
### Data Instances
Every task has its own set of features, but all share an `idx` and a `label`.
- GLUE
- `mnli`
- `premise`, `hypothesis`
- `mrpc`
- `text_a`, `text_b`
- `qnli`
- `premise`, `hypothesis`
- `qqp`
- `text_a`, `text_b`
- `sst`
- `text`
- `stsb`
- `text_a`, `text_b`
- `wnli`
- `premise`, `hypothesis`
- SuperGLUE
- `boolq`
- `question`, `passage`
- `cb`
- `premise`, `hypothesis`
- `copa`
- `premise`, `choice1`, `choice2`, `question`
- `rte`
- `premise`, `hypothesis`
### Data Splits
In order to have a test split, we repurpose the original validation split as
the test split, and divide the original training split into new training and
validation splits with an 80-20 distribution.
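A minimal loading sketch; the exact configuration naming (a task name combined with a language code) is an assumption, not something stated on this card:
```python
from datasets import load_dataset

# Assumed configuration name: task ("sst") plus language code ("sv").
dataset = load_dataset("KBLab/overlim", "sst_sv")
print(dataset)  # expected: train / validation / test splits as described above
```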
## Dataset Creation
For more information about the individual tasks see (https://gluebenchmark.com) and (https://super.gluebenchmark.com).
### Curation Rationale
Training non-English models is easy, but there is a lack of evaluation datasets to compare their actual performance.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@kb-labb](https://github.com/kb-labb) for adding this dataset.
|
KBLab/sucx3_ner | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- sv
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- other
task_ids:
- named-entity-recognition
- part-of-speech
pretty_name: sucx3_ner
tags:
- structure-prediction
---
# Dataset Card for _SUCX 3.0 - NER_
## Dataset Description
- **Homepage:** [https://spraakbanken.gu.se/en/resources/suc3](https://spraakbanken.gu.se/en/resources/suc3)
- **Repository:** [https://github.com/kb-labb/sucx3_ner](https://github.com/kb-labb/sucx3_ner)
- **Paper:** [SUC 2.0 manual](http://spraakbanken.gu.se/parole/Docs/SUC2.0-manual.pdf)
- **Point of Contact:**
### Dataset Summary
The dataset is a conversion of the venerable SUC 3.0 dataset into the
huggingface ecosystem.
The original dataset does not contain an official train-dev-test split, so one is
introduced here; the NER tag distribution across the three splits
is kept mostly the same.
The dataset has three different types of tagsets: manually annotated POS,
manually annotated NER, and automatically annotated NER.
For the automatically annotated NER tags, only sentences where the
automatic and manual annotations matched (with their respective categories) were kept.
Additionally, we provide remixes of the same data with some or all sentences
lowercased.
### Supported Tasks and Leaderboards
- Part-of-Speech tagging
- Named-Entity-Recognition
### Languages
Swedish
## Dataset Structure
### Data Remixes
- `original_tags` contain the manual NER annotations
- `lower` the whole dataset uncased
- `lower_mix` some of the dataset uncased
- `lower_both` every instance both cased and uncased
- `simple_tags` contain the automatic NER annotations
- `lower` the whole dataset uncased
- `lower_mix` some of the dataset uncased
- `lower_both` every instance both cased and uncased
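A minimal loading sketch for one of the remixes above; the configuration name is an assumption based on the remix names, not something stated on this card:
```python
from datasets import load_dataset

# Assumed configuration name taken from the remix list above.
dataset = load_dataset("KBLab/sucx3_ner", "original_tags")
print(dataset["train"][0]["ner_tags"][:5])  # NER tags are plain strings
```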
### Data Instances
For each instance, there is an `id`, with an optional `_lower` suffix to mark
that it has been modified, a `tokens` list of strings containing tokens, a
`pos_tags` list of strings containing POS-tags, and a `ner_tags` list of strings
containing NER-tags.
```json
{"id": "e24d782c-e2475603_lower",
"tokens": ["-", "dels", "har", "vi", "inget", "index", "att", "g\u00e5", "efter", ",", "vi", "kr\u00e4ver", "allts\u00e5", "ers\u00e4ttning", "i", "40-talets", "penningv\u00e4rde", "."],
"pos_tags": ["MID", "KN", "VB", "PN", "DT", "NN", "IE", "VB", "PP", "MID", "PN", "VB", "AB", "NN", "PP", "NN", "NN", "MAD"],
"ner_tags": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"]}
```
### Data Fields
- `id`: a string containing the sentence-id
- `tokens`: a list of strings containing the sentence's tokens
- `pos_tags`: a list of strings containing the tokens' POS annotations
- `ner_tags`: a list of strings containing the tokens' NER annotations
### Data Splits
| Dataset Split | Size Percentage of Total Dataset Size | Number of Instances for the Original Tags |
| ------------- | ------------------------------------- | ----------------------------------------- |
| train         | 64%                                   | 46,026                                    |
| dev           | 16%                                   | 11,506                                    |
| test          | 20%                                   | 14,383                                    |
The `simple_tags` remix has fewer instances due to the requirement to match
tags.
## Dataset Creation
See the [original webpage](https://spraakbanken.gu.se/en/resources/suc3)
## Additional Information
### Dataset Curators
[Språkbanken](mailto:sb-info@svenska.gu.se)
### Licensing Information
CC BY 4.0 (attribution)
### Citation Information
[SUC 2.0 manual](http://spraakbanken.gu.se/parole/Docs/SUC2.0-manual.pdf)
### Contributions
Thanks to [@robinqrtz](https://github.com/robinqrtz) for adding this dataset.
|
KETI-AIR/klue | <!--
Copyright 2021 san kim
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# Korean Language Understanding Evaluation (KLUE) |
KETI-AIR/korquad | <!--
Copyright 2021 san kim
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# KorQuAD
|
KETI-AIR/nikl | <!--
Copyright 2021 san kim
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# National Institute of Korean Language(NIKL) Corpus
|
KTH/martin | hello
|
KTH/nst | ---
license: cc0-1.0
task_categories:
- automatic-speech-recognition
language:
- sv
---
# NST Swedish ASR Database (16 kHz) – reorganized
This database was created by Nordic Language Technology for the development of automatic speech recognition and dictation in Swedish. In this updated version, the organization of the data has been altered to improve the usefulness of the database.
In the original version of the material, the files were organized in a specific folder structure where the folder names were meaningful. However, the file names were not meaningful, and there were also cases of files with identical names in different folders. This proved to be impractical, since users had to keep the original folder structure in order to use the data. The files have been renamed, such that the file names are unique and meaningful regardless of the folder structure. The original metadata files were in spl format. These have been converted to JSON format. The converted metadata files are also anonymized and the text encoding has been converted from ANSI to UTF-8.
See the documentation file for a full description of the data and the changes made to the database.
The data is originally hosted on the National Library of Norway website.
https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-56/
Hosting on Hugging Face datasets for convenience.
## License
CC0 1.0 Universal (CC0 1.0) Public Domain Dedication |
KTH/speechdat | # Speechdat
Speechdat dataset
## Loading the dataset
You need to download the dataset files separately. We assume the wav files are located inside a `wav` folder inside the `speechdat` directory.
```python
from datasets import load_dataset
speechdat = load_dataset("./speechdat", split="train", data_dir="./speechdat/wav")
```
|
KTH/waxholm | # THE WAXHOLM CORPUS
The Waxholm corpus was collected in 1993 - 1994 at the department of
Speech, Hearing and Music (TMH), KTH. It is described in several
publications. Two are included in this archive. Publication of work
using the Waxholm corpus should refer to either of these. More
information on the Waxholm project can be found on the web page
http://www.speech.kth.se/waxholm/waxholm2.html
## FILE INFORMATION
### SAMPLED FILES
The .smp files contain the speech signal. The identity
of the speaker is coded by the two digits after 'fp20' in the file
name. The smp file format was developed by TMH. Recording information
is stored in a header as a 1024 byte text string. The speech signal in
the Waxholm corpus is quantised into 16 bits, 2 bytes/sample and the
byte order is big-endian (most significant byte first). The sampling
frequency is 16 kHz. Here is an example of a file header:
```
>head -9 fp2001.1.01.smp
file=samp ; file type is sampled signal
msb=first ; byte order
sftot=16000 ; sampling frequency in Hz
nchans=1 ; number of channels
preemph=no ; no signal preemphasis during recording
view=-10,10
born=/o/libhex/ad_da.h25
range=-12303,11168 ; amplitude range
=
```
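Given the layout described above (a 1024-byte text header followed by 16-bit big-endian samples at 16 kHz), a .smp file can be read with a short sketch like the following; the file name is taken from the header example and the header text encoding is assumed:
```python
import numpy as np

def read_smp(path, header_size=1024):
    """Read a Waxholm .smp file: 1024-byte text header + 16-bit big-endian PCM."""
    with open(path, "rb") as f:
        header = f.read(header_size).decode("latin-1", errors="replace")  # encoding assumed
        samples = np.frombuffer(f.read(), dtype=">i2")  # most significant byte first
    return header, samples

header, samples = read_smp("fp2001.1.01.smp")
print(header.splitlines()[0])        # "file=samp ; file type is sampled signal"
print(len(samples) / 16000.0, "s")   # duration at 16 kHz
```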
### LABEL FILES
Normally, each sample file has a label file. This has been
produced in four steps. The first step was to manually enter the
orthographic text by listening. From this text a sequence of phonemes
were produced by a rule-based text-to-phoneme module. The endpoint
time positions of the phonemes were computed by an automatic alignment
program, followed by manual correction. Some of the speech files have
no label file, due to different problems in this process. These files
should not be used for training or testing.
The labels are stored in .mix files. Below is an example of the
beginning of a mix file.
```
>head -20 fp2001.1.01.smp.mix
CORRECTED: OK jesper Jesper Hogberg Thu Jun 22 13:26:26 EET 1995
AUTOLABEL: tony A. de Serpa-Leitao Mon Nov 15 13:44:30 MET 1993
Waxholm dialog. /u/wax/data/scenes/fp2001/fp2001.1.01.smp
TEXT:
jag vill }ka h{rifr}n .
J'A:+ V'IL+ "]:K'A H'[3RIFR]N.
CT 1
Labels: J'A: V'IL "]:KkA H'[3RIFR]N .
FR 11219 #J >pm #J >w jag 0.701 sec
FR 12565 $'A: >pm $'A:+ 0.785 sec
FR 13189 #V >pm #V >w vill 0.824 sec
FR 13895 $'I >pm $'I 0.868 sec
FR 14700 $L >pm $L+ 0.919 sec
```
The orthographic text representation follows the label 'TEXT:'. CT is
the frame length in number of sample points (always 1 in Waxholm
mix files). Each line starting with 'FR' contains up to three labels at
the phonetic, phonemic and word levels. FR is immediately followed by
the frame number of the start of the segment. Since CT = 1, FR is the
sample index in the file. If a frame duration is 0, the label has
been judged as a non-pronounced segment and deleted by the manual
labeller, although it was generated by the text-to-phoneme or the
automatic alignment modules. Column 3 in an FR line is the phonetic
label. Initial '#' indicates word initial position. '$' indicates
other positions. The optional label '>pm' precedes the phonemic label,
which has been generated by the text-to-phoneme rules. Often, the
phonemic and the phonetic labels are identical. The optional '>w' is
followed by the identity of the word beginning at this frame. The
phoneme symbol inventory is mainly STA, used by the KTH/TMH RULSYS
system. It is specified in the included file 'sampa_latex_se.pdf'.
Some extra labels at the phonetic level have been defined.
The most common ones are:
| | |
|---------------------|------------------------------------------|
|sm | lip or tongue opening |
|p: | silent interval |
|pa | aspirative sound from breathing |
|kl | click sound |
|v | short vocalic segment between consonants |
|upper case of stops | occlusion |
|lower case of stops | burst |
The label 'Labels:' before the FR lines is a text string assembled
from the FR labels.
The mix files in this archive correspond to those with the name
extension .mix.new in the original corpus. Besides a few other
corrections, the main difference is that burst segments after
retroflex stops were not labelled as retroflex in the original .mix
files ( d, t after 2D and 2T have been changed to 2d and 2t).
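A small sketch for pulling the FR label lines out of a .mix file, following the format described above (frame number, phonetic label, optional '>pm' phonemic label, optional '>w' word); the file name matches the earlier header example and the text encoding is assumed:
```python
def parse_fr_line(line):
    parts = line.split()
    entry = {"frame": int(parts[1]), "phonetic": parts[2], "phonemic": None, "word": None}
    if ">pm" in parts:
        entry["phonemic"] = parts[parts.index(">pm") + 1]
    if ">w" in parts:
        entry["word"] = parts[parts.index(">w") + 1]
    return entry

with open("fp2001.1.01.smp.mix", encoding="latin-1") as f:  # encoding assumed
    labels = [parse_fr_line(line) for line in f if line.startswith("FR ")]
print(labels[0])  # {'frame': 11219, 'phonetic': '#J', 'phonemic': '#J', 'word': 'jag'}
```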
## REFERENCES
Bertenstam, J., Blomberg, M., Carlson, R., Elenius, K., Granström,
B., Gustafson, J., Hunnicutt, S., Högberg, J., Lindell, R., Neovius,
L., Nord, L., de Serpa-Leitao, A., and Ström, N.,(1995). "Spoken
dialogue data collected in the WAXHOLM project" STL-QPSR 1/1995,
KTH/TMH, Stockholm.
Bertenstam, J., Blomberg, M., Carlson, R.,
Elenius, K., Granström, B., Gustafson, J., Hunnicutt, S., Högberg, J.,
Lindell, R., Neovius, L., de Serpa-Leitao, A., Nord, L., & Ström,
N. (1995). The Waxholm application data-base. In Pardo, J.M. (Ed.),
Proceedings Eurospeech 1995 (pp. 833-836). Madrid.
Comments and error reports are welcome. These should be sent to:
Mats Blomberg <matsb@speech.kth.se> or Kjell Elenius <kjell@speech.kth.se>
|
Karavet/ARPA-Armenian-Paraphrase-Corpus | ---
language:
- hy
task_categories: [paraphrase, paraphrase detection]
multilinguality: [monolingual]
task_ids: [paraphrase, paraphrase detection]
---
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Dataset Evaluation](#dataset-evaluation)
- [Additional Information](#additional-information)
## Dataset Description
We provide sentential paraphrase detection train and test datasets as well as BERT-based models for the Armenian language.
### Dataset Summary
The sentences in the dataset are taken from [Hetq](https://hetq.am/) and [Panarmenian](http://www.panarmenian.net/) news articles. To generate paraphrases of the sentences, we used back translation from Armenian to English. We repeated the step twice, after which the generated paraphrases were manually reviewed. Invalid sentences were filtered out, while the rest were labelled as either paraphrase, near paraphrase or non-paraphrase. Test examples were reviewed by 3 different annotators. In addition, to increase the number of non-paraphrase pairs, we padded the dataset with automatically generated negative examples, including pairs of consecutive sentences and random pairs.
## Dataset Structure
Each row consists of two sentences and their label. The sentences were labelled as either paraphrase, near paraphrase or non-paraphrase (with 1, 0, -1 labels respectively). The sentences are divided into train and test sets.
|Number of examples|Total|Paraphrase|Non-paraphrase (near paraphrase)|
|:-- | :---: | :---: | :---: |
|Train | 4233 |1339 |2683 (211) |
|Test | 1682 |1021 |448 (213) |
### Dataset Evaluation
We finetuned Multilingual BERT on several training sets, including the proposed ARPA dataset, and evaluated the resulting models on our test set. During training and
evaluation, near paraphrase and non-paraphrase pairs were combined into one class. The results are provided below:
|BERT Model | Train set | F1 | Acc. |
|:-- | :---: | :---: | :---: |
|Multilingual BERT | ARPA train set| 84.27| 78.06|
|Multilingual BERT | Paraphraser.ru train set machine-translated into Armenian | 83.81 | 77.09 |
|Multilingual BERT | MRPC train set machine-translated into Armenian | 80.07 | 69.87 |
|Multilingual BERT | All of the above combined | 84 |77.6 |
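A minimal sketch of the label merge used for this evaluation, based on the labeling scheme described above (1 = paraphrase, 0 = near paraphrase, -1 = non-paraphrase):
```python
def binarize(label: int) -> int:
    # Paraphrase (1) stays positive; near paraphrase (0) and non-paraphrase (-1)
    # are merged into a single negative class, as done for the evaluation above.
    return 1 if label == 1 else 0
```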
#### Additional Information
The model trained on ARPA is available for use, and can be downloaded using this [link](https://drive.google.com/uc?id=14owW5kkZ1JiNa6P-676e-QIj8m8i5e_8).
For more details about the models and dataset construction, refer to the [paper](https://arxiv.org/pdf/2009.12615).
|
Karavet/ILUR-news-text-classification-corpus | ---
language:
- hy
task_categories: [news-classification, text-classification]
multilinguality: [monolingual]
task_ids: [news-classification, text-classification]
license:
- apache-2.0
---
## Table of Contents
- [Table of Contents](#table-of-contents)
- [News Texts Dataset](#news-texts-dataset)
## News Texts Dataset
We release a dataset of over 12000 news articles from [iLur.am](http://www.ilur.am/), categorized into 7 classes: sport, politics, weather, economy, accidents, art, society. The articles are split into train (2242k tokens) and test sets (425k tokens).
For more details, refer to the [paper](https://arxiv.org/ftp/arxiv/papers/1906/1906.03134.pdf). |
Karavet/pioNER-Armenian-Named-Entity | ---
language: [hy]
task_categories: [named-entity-recognition]
multilinguality: [monolingual]
task_ids: [named-entity-recognition]
license: [apache-2.0]
---
## Table of Contents
- [Table of Contents](#table-of-contents)
- [pioNER - named entity annotated datasets](#pioNER---named-entity-annotated-datasets)
- [Silver-standard dataset](#silver-standard-dataset)
- [Gold-standard dataset](#gold-standard-dataset)
# pioNER - named entity annotated datasets
pioNER corpus provides gold-standard and automatically generated named-entity datasets for the Armenian language.
Alongside the datasets, we release 50-, 100-, 200-, and 300-dimensional GloVe word embeddings trained on a collection of Armenian texts from Wikipedia, news, blogs, and encyclopedia.
## Silver-standard dataset
The generated corpus is automatically extracted and annotated using Armenian Wikipedia. We used a modification of [Nothman et al](https://www.researchgate.net/publication/256660013_Learning_multilingual_named_entity_recognition_from_Wikipedia) and [Sysoev and Andrianov](http://www.dialog-21.ru/media/3433/sysoevaaandrianovia.pdf) approaches to create this corpus. This approach uses links between Wikipedia articles to extract fragments of named-entity annotated texts.
The corpus is split into train and development sets.
*Table 1. Statistics for pioNER train, development and test sets*
| dataset | #tokens | #sents | annotation | texts' source |
|-------------|:--------:|:-----:|:--------:|:-----:|
| train | 130719 | 5964 | automatic | Wikipedia |
| dev | 32528 | 1491 | automatic | Wikipedia |
| test | 53606 | 2529 | manual | iLur.am |
## Gold-standard dataset
This dataset is a collection of over 250 news articles from iLur.am with manual named-entity annotation. It includes sentences from political, sports, local and world news, and is comparable in size with the test sets of other languages (Table 2).
We aim for it to serve as a benchmark for future named entity recognition systems designed for the Armenian language.
The dataset contains annotations for 3 popular named entity classes:
people (PER), organizations (ORG), and locations (LOC), and is released in CoNLL03 format with IOB tagging scheme.
During annotation, we generally relied on categories and [guidelines assembled by BBN](https://catalog.ldc.upenn.edu/docs/LDC2005T33/BBN-Types-Subtypes.html) Technologies for the TREC 2002 question answering track.
Tokens and sentences were segmented according to the UD standards for the Armenian language from the [ArmTreebank project](http://armtreebank.yerevann.com/tokenization/process/).
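Since the data is released in CoNLL03 format with IOB tags, a simple reader like the sketch below should apply; the file path and the assumption that the tag is the last whitespace-separated column are hypothetical:
```python
def read_conll(path):
    """Read a CoNLL-style file: one token per line, blank lines separate sentences."""
    sentences, tokens, tags = [], [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                if tokens:
                    sentences.append((tokens, tags))
                    tokens, tags = [], []
            else:
                fields = line.split()
                tokens.append(fields[0])
                tags.append(fields[-1])  # IOB tag, e.g. B-PER, I-LOC, O
    if tokens:
        sentences.append((tokens, tags))
    return sentences
```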
*Table 2. Comparison of pioNER gold-standard test set with test sets for English, Russian, Spanish and German*
| test dataset | #tokens | #LOC | #ORG | #PER |
|-------------|:--------:|:-----:|:--------:|:-----:|
| Armenian pioNER | 53606 | 1312 | 1338 | 1274 |
| Russian factRuEval-2016 | 59382 | 1239 | 1595 | 1353 |
| German CoNLL03 | 51943 | 1035 | 773 | 1195 |
| Spanish CoNLL02 | 51533 | 1084 | 1400 | 735 |
| English CoNLL03 | 46453 | 1668 | 1661 | 1671 | |
Khanoooo/autonlp-data-Corona | It's all about Corona |
khondoker/SentNoB | ---
language:
- bn
task_categories:
- text-classification
pretty_name: SentNoB
task_ids:
- sentiment-classification
annotations_creators:
- expert-generated
language_creators:
- expert-generated
paperswithcode_id: sentnob
---
# Dataset Card for "SentNoB"
### Dataset Summary
Social Media User Comments' Sentiment Analysis Dataset. Each user comment is labeled as either positive (1), negative (2), or neutral (0).
### Citation Information
```
@inproceedings{islam2021sentnob,
title={SentNoB: A Dataset for Analysing Sentiment on Noisy Bangla Texts},
author={Islam, Khondoker Ittehadul and Kar, Sudipta and Islam, Md Saiful and Amin, Mohammad Ruhul},
booktitle={Findings of the Association for Computational Linguistics: EMNLP 2021},
pages={3265--3271},
year={2021}
}
``` |
Kili/plastic_in_river | ---
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: Plastic in river
tags:
- other-object-detection
---
# Plastic in river
This dataset is an export of the annotated assets from [Kili's Community Challenge - Plastic in River dataset](https://kili-technology.com/blog/kili-s-community-challenge-plastic-in-river-dataset).
The Hugging Face dataset will be updated every day during the challenge with the latest annotations. |
Kira-Asimov/gender_clinical_trial | # Gender classification from Clinical Trial Public Data
|
LIAMF-USP/arc-retrieval-c4 | hello
|
Langame/starter | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- expert-generated
license:
- mit
multilinguality:
- monolingual
pretty_name: ''
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-generation
task_ids: []
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
Langame/waiting-messages | # Langame/waiting-messages
Generated using OpenAI GPT-3 davinci-codex based on random initial samples written by a human.
⚠️ The dataset has not been de-duplicated, so there may be duplicates. ⚠️ |
Language/Fren | ---
annotations_creators:
- no-annotation
language_creators:
- found
languages:
- fr
licenses:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- conditional-text-generation
task_ids:
- summarization
--- |
LeoCordoba/CC-NEWS-ES-titles | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- es
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- cc-news
task_categories:
- summarization
- text-generation
task_ids: []
tags:
- conditional-text-generation
---
# Dataset Card for CC-NEWS-ES-titles
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** [CC-NEWS-ES-titles dataset repository](https://huggingface.co/datasets/LeoCordoba/CC-NEWS-ES-titles)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Leonardo Ignacio Córdoba](https://www.linkedin.com/in/leonardo-ignacio-c%C3%B3rdoba/)
### Dataset Summary
CC-NEWS-ES-titles is a Spanish-language dataset for news title generation. The texts and titles come from 2019 and 2020 CC-NEWS data (which is part of Common Crawl).
It contains 402.310 pairs of news title and body, split into:
- Train: 370.125
- Eval: 16.092
- Test: 16.092
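A minimal loading sketch; the repository id comes from this card, while the exact split keys on the Hub are an assumption:
```python
from datasets import load_dataset

dataset = load_dataset("LeoCordoba/CC-NEWS-ES-titles")
example = dataset["train"][0]
print(example["output_text"])  # the news title
print(example["text"][:200])   # the beginning of the news body
```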
### Supported Tasks and Leaderboards
- `summarization`, `text-generation`: The dataset can be used to train a model for news title generation, which can be considered a form of abstractive summarization.
### Languages
The text is in Spanish. The BCP-47 code for Spanish is es.
## Dataset Structure
### Data Instances
Each data instance contains the following features: _text_ and _output_text_.
- _text_ is the body of the news.
- _output_text_ is the title of the news.
An example from the CC-NEWS-ES-titles train set looks like the following:
```
{'text': 'Hoy en el Boletín Oficial también se publicó la disposición para universidades, institutos universitarios y de educación superior de todas las jurisdicciones, a las que recomienda que "adecúen las condiciones en que se desarrolla la actividad académica presencial en el marco de la emergencia conforme con las recomendaciones del Ministerio de Salud", según lo publicado por la agencia ',
'output_text': 'Coronavirus: "Seguimos educando", la plataforma online para que los chicos estudien en cuarentena'}
```
### Data Fields
- 'text': a string containing the body of the news.
- 'output_text': a string containing the title of the news.
### Data Splits
The CC-NEWS-ES-titles dataset has 3 splits: _train_, _validation_, and _test_. The splits contain disjoint sets of news.
| Dataset Split | Number of Instances in Split |
| ------------- | ---------------------------- |
| Train | 370.125 |
| Eval | 16.092 |
| Test | 16.092 |
## Dataset Creation
### Curation Rationale
[N/A]
### Source Data
#### Initial Data Collection and Normalization
TODO
#### Who are the source language producers?
Common Crawl: https://commoncrawl.org/
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
Abstractive summarization is a complex task and Spanish is an underrepresented language in the NLP domain. As a consequence, adding a Spanish resource may help others to improve their research and educational activities.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
This dataset is maintained by [Leonardo Ignacio Córdoba](https://www.linkedin.com/in/leonardo-ignacio-c%C3%B3rdoba/) and was built with the help of [María Gaska](https://www.linkedin.com/in/mfgaska/).
### Licensing Information
[N/A]
### Citation Information
TODO
### Contributions
[N/A] |
LeoCordoba/CC-NEWS-ES | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- es
license:
- mit
multilinguality:
- monolingual
size_categories:
- n<1K
- 1K<n<10K
- 10K<n<100K
- 100K<n<1M
- 1M<n<10M
source_datasets:
- cc-news
task_categories:
- summarization
- text-generation
task_ids: []
tags:
- conditional-text-generation
---
# Dataset Card for CC-NEWS-ES
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** [CC-NEWS-ES dataset repository](https://huggingface.co/datasets/LeoCordoba/CC-NEWS-ES)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Leonardo Ignacio Córdoba](https://www.linkedin.com/in/leonardo-ignacio-c%C3%B3rdoba/)
### Dataset Summary
CC-NEWS-ES is a Spanish-language dataset of news articles. The corpus was generated by extracting the Spanish articles from CC-NEWS (the news index of Common Crawl) for 2019, using a FastText model for language identification.
It contains a total of 7,473,286 texts and 1,812,009,283 words distributed as follows:
|domain | texts | words |
|:----|-----------------:|-----------------:|
| ar | 532703 | 1.45127e+08 |
| bo | 29557 | 7.28996e+06 |
| br | 107 | 14207 |
| cl | 116661 | 3.34633e+07 |
| co | 78662 | 1.92649e+07 |
| com | 3650950 | 8.44094e+08 |
| cr | 16542 | 3.82075e+06 |
| es |1838790 | 4.82943e+08 |
| gt | 4833 | 838121 |
| hn | 36559 | 5.49933e+06 |
| mx | 724908 | 1.62198e+08 |
| ni | 40643 | 1.08501e+07 |
| pa | 18447 | 4.34724e+06 |
| pe | 230962 | 3.52123e+07 |
| pr | 7756 | 1.6633e+06 |
| py | 30651 | 2.08077e+07 |
| sv | 454 | 353145 |
| uy | 80948 | 2.72562e+07 |
| ve | 33148 | 6.96578e+06 |
### Supported Tasks and Leaderboards
TODO
-
### Languages
The text is in Spanish. The BCP-47 code for Spanish is es.
## Dataset Structure
### Data Instances
Each data instance contains the following features: `country`, `text`, and `id`.
- `country`: top-level domain, usually refers to a country (except in the case of .com).
- `text`: body of the news.
- `id`: internal id.
An example from CC-NEWS-ES looks like the following:
```
{'country': 'py',
'text': '“La que asumió es una mujer que está en línea de sucesión. La policía, ni los militares están en el Palacio, lo que ella dijo fue que no se podía seguir reprimiendo al pueblo", manifestó este jueves el senador colorado, Enrique Riera, sobre la asunción presidencial en Bolivia de la senadora opositora, Jeanine Áñez,Riera agregó que Evo Morales el que "escapó y abandonó" a su pueblo al ir como asilado a México. En ese sentido, dijo que irónicamente, el expresidente boliviano no eligió como destino a Venezuela, Nicaragua ni a Cuba.Sostuvo que nos de debe utilizar a las instituciones democráticas y republicanas para llegar al poder, cambiando Constituciones y prorrogando mandatos una y otra vez. “El amigo Morales no respetó absolutamente nada”, subrayó.Por otra parte, el senador colorado mencionó que los fiscales y jueces bolivianos deberían tener el "coraje" de investigar el origen de la riqueza de Morales.Habló también sobre la situación en Venezuela y mencionó que Nicolás Maduro no cae, porque "toda la FFAA está contaminada de narcotráfico". El hombre cuenta con orden de prisión en su país por los ilícitos de Tráfico de Drogas y Asociación Criminal, según el Consejo Nacional de Justicia del Brasil.La agente fiscal Liliana Denice Duarte, titular de la Unidad Fiscal Nº 1 de Presidente Franco, requirió la expulsión del extranjero y la jueza Carina Frutos Recalde, mediante Auto Interlocutorio (A.I.) N° 2.153, dio curso favorable al pedido del Ministerio Público. Esto considerando la alta expectativa de pena que tiene el supuesto delincuente en su país.La detención ...',
'id': 7328086}
```
Note: the text is shortened for simplicity.
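A minimal loading and filtering sketch; the split name is an assumption:
```python
from datasets import load_dataset

# Assumed single "train" split; filter Argentinian articles via the country field.
dataset = load_dataset("LeoCordoba/CC-NEWS-ES", split="train")
ar_news = dataset.filter(lambda example: example["country"] == "ar")
print(len(ar_news))  # expected to match the 532703 texts reported for "ar" above
```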
### Data Fields
- ...
- ...
### Data Splits
...
## Dataset Creation
### Curation Rationale
[N/A]
### Source Data
#### Initial Data Collection and Normalization
TODO
#### Who are the source language producers?
Common Crawl: https://commoncrawl.org/
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
...
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
This dataset is maintained by [Leonardo Ignacio Córdoba](https://www.linkedin.com/in/leonardo-ignacio-c%C3%B3rdoba/) and was built with the help of [María Gaska](https://www.linkedin.com/in/mfgaska/).
### Licensing Information
[N/A]
### Citation Information
TODO
### Contributions
[N/A] |
Linda/test1111 | dataset card |
Llamacha/monolingual-quechua-iic | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- qu
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 1M<n<5M
source_datasets:
- original
task_categories:
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
---
# Dataset Card for Monolingual-Quechua-IIC
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://llamacha.pe](https://llamacha.pe)
- **Paper:** [Introducing QuBERT: A Large Monolingual Corpus and BERT Model for
Southern Quechua](https://aclanthology.org/2022.deeplo-1.1.pdf)
- **Point of Contact:** [Rodolfo Zevallos](mailto:rodolfojoel.zevallos@upf.edu)
- **Size of downloaded dataset files:** 373.28 MB
### Dataset Summary
We present Monolingual-Quechua-IIC, a monolingual corpus of Southern Quechua, which can be used to build language models using Transformers models. This corpus also includes the Wiki and OSCAR corpora. We used this corpus to build Llama-RoBERTa-Quechua, the first language model for Southern Quechua using Transformers.
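A minimal loading sketch; the split name is an assumption, as this card does not describe the data layout:
```python
from datasets import load_dataset

corpus = load_dataset("Llamacha/monolingual-quechua-iic", split="train")
print(corpus[0])  # one document of monolingual Southern Quechua text
```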
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
Southern Quechua
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
Apache-2.0
### Citation Information
```
@inproceedings{zevallos2022introducing,
title={Introducing QuBERT: A Large Monolingual Corpus and BERT Model for Southern Quechua},
author={Zevallos, Rodolfo and Ortega, John and Chen, William and Castro, Richard and Bel, Nuria and Toshio, Cesar and Venturas, Renzo and Aradiel, Hilario and Melgarejo, Nelsi},
booktitle={Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing},
pages={1--13},
year={2022}
}
```
### Contributions
Thanks to [@rjzevallos](https://github.com/rjzevallos) for adding this dataset.
|
LoganKells/amazon_product_reviews_video_games | #Title |
MBAH/MOVIESON | https://mahoningmed.org/docs/123movies-watch-after-we-fell-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-cinderella-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-cryptozoo-2021full-movie-hd-free/
https://mahoningmed.org/docs/123movieswatch-breathless-2021hd-full-movie-online/
https://mahoningmed.org/docs/123movieswatch-firebird-2021full-hd-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-paw-patrol-the-movie-2021-full-hd-movie-online-free/
https://mahoningmed.org/docs/atch-sweet-girl-2021free-hd-full-movie-online/
https://mahoningmed.org/docs/123movies-watch-hes-all-that-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movieswatch-im-your-man-2020hd-full-movie-online-free/
https://mahoningmed.org/docs/watchcrazy-fist-2021full-hd-movie-online-free/
https://mahoningmed.org/docs/watchpaw-patrol-the-movie-2021hd-full-movie-online-for-free/
https://mahoningmed.org/docs/123movies-watch-black-widow-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/full-watch-dont-breathe-2-2021-hd-movie-online-free/
https://mahoningmed.org/docs/watchthe-tomorrow-war-2021hd-full-movie-online-for-free/
https://mahoningmed.org/docs/123movies-watch-jurassic-hunt-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-after-we-fell-2021-hd-full-movie-online-free-2/
https://mahoningmed.org/docs/23movies-watch-free-guy-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/watch-candyman-2021-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watch-the-night-house-2021full-hd-movie-online-free/
https://mahoningmed.org/docs/watchsas-red-notice-2021hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watchshang-chi-and-the-legend-of-the-ten-rings-2021hd-full-movie-online-for-free/
https://mahoningmed.org/docs/123movies-watch-luca-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-escape-room-tournament-of-champions-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/watchnarco-sub-2021hd-full-movie-online-for-free/
https://mahoningmed.org/docs/123movies-watch-malignant-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/watch-mortal-kombat-legends-battle-of-the-realms-2021full-online-movie-free-hd/
https://mahoningmed.org/docs/watch-space-jam-a-new-legacy-2021-hd-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-cinderella-2021-hd-full-movie-online-free-2/
https://mahoningmed.org/docs/watcheggs-run-2021hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watch-f9-2021full-online-movie-free-hd-1080p/
https://mahoningmed.org/docs/123movies-watch-jurassic-hunt-2021-hd-full-movie-online-free-2/
https://mahoningmed.org/docs/123movies-watch-vacation-friends-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-silent-night-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-the-card-counter-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-silent-night-2021-hd-full-movie-online-free-2/
https://mahoningmed.org/docs/123movies-watch-jolt-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-the-last-mercenary-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-beckett-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-rogue-hostage-2018-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-the-boss-baby-family-business-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-cruella-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/watch-the-manson-brothers-midnight-zombie-massacre-2021full-hd-movie-online-free-123movies/
https://mahoningmed.org/docs/watchthe-suicide-squad-2021-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watch-jungle-cruise-2021full-hd-movie-online-free/
https://mahoningmed.org/docs/watch-after-we-fell-2021full-hd-movie-online-free/
https://mahoningmed.org/docs/123movieswatch-the-last-warrior-root-of-evil-2021hd-full-movie-online-free/
https://mahoningmed.org/docs/123movieswatch-kate-2021hd-full-movie-online-free/
https://mahoningmed.org/docs/23movieswatch-wrath-of-man-2021hd-full-movie-online-free/
https://mahoningmed.org/docs/watchthe-forever-purge-2021-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watchafterlife-of-the-party-2021-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watchthe-conjuring-the-devil-made-me-do-it-2021-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watchold-2021-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watchinsensate-2021-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watchdreamcatcher-2021-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watchthe-kissing-booth-3-2021-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watchjj-plus-e-2021-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watchyoung-sister-in-law-3-2019-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watchjurassic-world-fallen-kingdom-2018-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/123movieswatch-danny-boy-2021hd-full-movie-online-free/
https://mahoningmed.org/docs/watchsnake-eyes-g-i-joe-origins-2021-hd-full-movie-online-for-free/ |
MKK/Dhivehi-English | |
Mansooreh/sharif-emotional-speech-dataset | # <a href='https://arxiv.org/pdf/1906.01155.pdf'>ShEMO: a large-scale validated database for Persian speech emotion detection</a><br>
## Abstract
<div align="justify"> This paper introduces a large-scale, validated database for Persian called Sharif Emotional Speech Database (ShEMO). The database includes 3000 semi-natural utterances, equivalent to 3 hours and 25 minutes of speech data extracted from online radio plays. The ShEMO covers speech samples of 87 native-Persian speakers for five basic emotions including <i>anger</i>, <i>fear</i>, <i>happiness</i>, <i>sadness</i> and <i>surprise</i>, as well as neutral state. Twelve annotators label the underlying emotional state of utterances and majority voting is used to decide on the final labels. According to the kappa measure,
the inter-annotator agreement is 64% which is interpreted as "substantial agreement". We also present benchmark results based on common classification methods in speech emotion detection task. According to the experiments, support vector machine achieves the best results for both gender-independent (58.2%) and gender-dependent models (female=59.4%, male=57.6%). The ShEMO is available for academic purposes free of charge to provide a baseline for further research on Persian emotional speech.
## Download Dataset
To download female utterances (zip file):
```bash
wget -O female.zip "https://www.dropbox.com/s/42okby6c40w3j2x/female.zip?dl=0"
```
To download male utterances (zip file):
```bash
wget -O male.zip "https://www.dropbox.com/s/5ebs8hq1zm0qkp6/male.zip?dl=0"
```
To download labels & transcripts (json file):
```bash
wget https://github.com/pariajm/sharif-emotional-speech-dataset/raw/master/shemo.json
```
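Once `shemo.json` is downloaded, it can be inspected with a few lines of Python. This is only a sketch; it assumes the file is a single JSON object keyed by utterance id, with the fields shown in the Data Instances section below.

```python
import json
from collections import Counter

# Load the labels & transcripts file downloaded above.
with open("shemo.json", encoding="utf-8") as f:
    shemo = json.load(f)

# Count utterances per emotion (field names follow the Data Instances example below).
print(Counter(entry["emotion"] for entry in shemo.values()))

# Look up a single utterance by its id.
print(shemo["F21N37"]["transcript"])
```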
## Models Trained or Fine-tuned on ShEMO
Credits to [Mehrdad Farahani](https://github.com/m3hrdadfi/soxan)
- [Speech emotion detection in Persian (fa) using wav2vec 2.0](https://huggingface.co/m3hrdadfi/wav2vec2-xlsr-persian-speech-emotion-recognition)
- [Speech emotion detection in Persian (fa) using HuBERT](https://huggingface.co/m3hrdadfi/hubert-base-persian-speech-emotion-recognition)
- [Speech gender detection in Persian (fa) using HuBERT](https://huggingface.co/m3hrdadfi/hubert-base-persian-speech-gender-recognition)
- [Automatic speech recognition in Persian (fa) using XLSR-53](https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-shemo)
## Overview of ShEMO
Feature | Status
------------- | ----------
**access** | open source
**language** | Persian (fa)
**modality** | speech
**duration** | 3 hours and 25 minutes
**#utterances** | 3000
**#speakers** | 87 (31 females, 56 males)
**#emotions** | 5 basic emotions (anger, fear, happiness, sadness and surprise) and neutral state
**orthographic transcripts** | available
**phonetic transcripts** | available
Read our paper on <a href='https://link.springer.com/article/10.1007/s10579-018-9427-x'>Springer</a> or [arxiv](https://arxiv.org/pdf/1906.01155.pdf)
## Description of Filenames
The characters used in the filenames and their corresponding meaning:
- **A**: angry
- **F**: female speaker (if used at the beginning of the label e.g.`F14A09`) or fearful (if used in the middle of the label e.g. `M02F01`)
- **H** : happy
- **M** : male speaker
- **N** : neutral
- **S** : sad
- **W** : surprised
e.g. in `F03S02`, **F** means the speaker is **female**, **03** denotes **the speaker code**, **S** refers to the underlying emotion of the utterance, which is **sadness**, and **02** means this is the **second utterance of this speaker in the sad emotion**.
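For convenience, the filename convention can also be decoded programmatically. The function below is a small sketch based only on the description above; it assumes every label has the form `<gender><speaker code><emotion><utterance number>`, as in `F03S02`.

```python
import re

# Emotion characters as described above.
EMOTIONS = {"A": "anger", "F": "fear", "H": "happiness",
            "N": "neutral", "S": "sadness", "W": "surprise"}

def parse_shemo_label(label):
    """Decode a ShEMO utterance label such as 'F03S02' or 'M02F01'."""
    match = re.fullmatch(r"([FM])(\d{2})([AFHNSW])(\d{2})", label)
    if match is None:
        raise ValueError(f"Unexpected label format: {label}")
    gender, speaker, emotion, utterance = match.groups()
    return {
        "gender": "female" if gender == "F" else "male",
        "speaker_code": speaker,
        "emotion": EMOTIONS[emotion],
        "utterance_number": int(utterance),
    }

print(parse_shemo_label("F03S02"))
# {'gender': 'female', 'speaker_code': '03', 'emotion': 'sadness', 'utterance_number': 2}
```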
## Data Instances
Here is a sample of data instances:
```json
"F21N37": {
"speaker_id": "F21",
"gender": "female",
"emotion": "neutral",
"transcript": "مگه من به تو نگفته بودم که باید راجع به دورانت سکوت کنی؟",
"ipa": "mӕge mæn be to nægofte budӕm ke bɑyæd rɑdʒeʔ be dorɑnt sokut koni"
}
```
## Sharif Emotional Speech Database (ShEMO)
To download the paper, click <a href='https://arxiv.org/pdf/1906.01155.pdf'>here</a>.
## Citation
If you use this dataset, please cite the following paper:
~~~~
@Article{MohamadNezami2019,
author = {Mohamad Nezami, Omid and Jamshid Lou, Paria and Karami, Mansoureh},
title = {ShEMO: a large-scale validated database for Persian speech emotion detection},
journal = {Language Resources and Evaluation},
year = {2019},
volume = {53},
number = {1},
pages = {1--16},
issn = {1574-0218},
doi = {10.1007/s10579-018-9427-x},
url = {https://doi.org/10.1007/s10579-018-9427-x}
}
~~~~
### Contact
Paria Jamshid Lou <paria.jamshid-lou@hdr.mq.edu.au>
Omid Mohamad Nezami <omid.mohamad-nezami@hdr.mq.edu.au> |
Marzipan/QA4PC | ## QA4PC Dataset (paper: Cross-Policy Compliance Detection via Question Answering)
### Train Sets
To create the training sets for the entailment and QA tasks, download the ShARC data from here: https://sharc-data.github.io/data.html. After that, run the script _create_train_from_sharc.py_, providing the path to the ShARC train and development sets.
### Evaluation Sets
#### Entailment Data
The following files contain the data for the entailment task. This includes the policy + questions, a scenario and an answer (_Yes, No, Maybe_). Each datapoint also contains information from the ShARC dataset, such as tree_id and source_url.
- __dev_entailment_qa4pc.json__
- __test_entailment_qa4pc.json__
#### QA Data
The following files contain the data for the QA task.
- __dev_sc_qa4pc.json__
- __test_sc_qa4pc.json__
The following file contains the expression tree data for the dev and test sets. Each tree includes a policy, a set of questions and a logical expression.
- __trees_dev_test_qa4pc.json__
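The evaluation files are plain JSON and can be inspected directly. The sketch below makes minimal assumptions: the top-level structure (list vs. dict) is not specified in this card, and the `answer` field name is a guess; only the three answer values come from the description above.

```python
import json
from collections import Counter

# Load the entailment development set released with QA4PC.
with open("dev_entailment_qa4pc.json", encoding="utf-8") as f:
    data = json.load(f)

# Handle both a list of instances and a dict keyed by instance id.
instances = data if isinstance(data, list) else list(data.values())
print(f"{len(instances)} development instances")

# Answer distribution; expected values are Yes / No / Maybe.
print(Counter(str(instance.get("answer")) for instance in instances))
```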
|
McGill-NLP/mlquestions | # Dataset Card for mlquestions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/McGill-NLP/MLQuestions
- **Repository:** https://github.com/McGill-NLP/MLQuestions
- **Paper:** https://aclanthology.org/2021.emnlp-main.566.pdf
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Devang Kulshreshtha](mailto:devang.kulshreshtha@mila.quebec)
### Dataset Summary
The MLQuestions dataset consists of questions from Google search queries and passages from Wikipedia pages related to the machine learning domain. The dataset was created to support research in domain adaptation of question generation and passage retrieval models.
### Languages
The text in the dataset is in English
## Dataset Structure
### Data Instances
We release development and test sets where a typical data point comprises a passage, denoted by the `input_text` label, and a question, denoted by the `target_text` label.
An example from the MLQuestions test set looks as follows:
{
'input_text': 'Bayesian learning uses Bayes' theorem to determine the conditional probability of a hypotheses given some evidence or observations.'
'target_text': 'What is Bayesian learning in machine learning'
}
We also provide unsupervised questions and passages in two separate files - 'passages_unaligned.csv' and 'questions_unaligned.csv' with labels `input_text` and `target_text` respectively.
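For a quick look at the unaligned files named above, something like the following pandas sketch should work; it assumes they are ordinary CSV files whose text columns carry the `input_text` and `target_text` headers described in this section.

```python
import pandas as pd

# Unaligned passages and questions released for unsupervised domain adaptation.
passages = pd.read_csv("passages_unaligned.csv")    # column: input_text (assumed)
questions = pd.read_csv("questions_unaligned.csv")  # column: target_text (assumed)

print(len(passages), "unaligned passages")
print(len(questions), "unaligned questions")
print(passages["input_text"].iloc[0][:100])
print(questions["target_text"].iloc[0])
```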
## Additional Information
### Licensing Information
https://github.com/McGill-NLP/MLQuestions/blob/main/LICENSE.md
### Citation Information
If you find this useful in your research, please consider citing:
@inproceedings{kulshreshtha-etal-2021-back,
title = "Back-Training excels Self-Training at Unsupervised Domain Adaptation of Question Generation and Passage Retrieval",
author = "Kulshreshtha, Devang and
Belfer, Robert and
Serban, Iulian Vlad and
Reddy, Siva",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.566",
pages = "7064--7078",
abstract = "In this work, we introduce back-training, an alternative to self-training for unsupervised domain adaptation (UDA). While self-training generates synthetic training data where natural inputs are aligned with noisy outputs, back-training results in natural outputs aligned with noisy inputs. This significantly reduces the gap between target domain and synthetic data distribution, and reduces model overfitting to source domain. We run UDA experiments on question generation and passage retrieval from the Natural Questions domain to machine learning and biomedical domains. We find that back-training vastly outperforms self-training by a mean improvement of 7.8 BLEU-4 points on generation, and 17.6{\%} top-20 retrieval accuracy across both domains. We further propose consistency filters to remove low-quality synthetic data before training. We also release a new domain-adaptation dataset - MLQuestions containing 35K unaligned questions, 50K unaligned passages, and 3K aligned question-passage pairs.",
} |
Melinoe/TheLabTexts | |
Motahar/github-issues | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
pretty_name: Huggingface Datasets github issues
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-retrieval
- text-classification
task_ids:
- document-retrieval
- multi-label-classification
- multi-class-classification
--- |
Mulin/my_third_dataset | My third Dataset
- for wolf classification |
NLPC-UOM/Sinhala-Stopword-list | ---
annotations_creators: []
language:
- si
license:
- mit
---
|
NYTK/HuCOLA | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- hu
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: HuCOLA
size_categories:
- unknown
source_datasets:
- original
task_categories:
- conditional-text-generation
task_ids:
- machine-translation
- summarization
- text-simplification
---
# Dataset Card for HuCOLA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
[HuCOLA dataset](https://github.com/nytud/HuCOLA)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
[lnnoemi](mailto:ligeti-nagy.noemi@nytud.hu)
### Dataset Summary
This is the dataset card for the Hungarian Corpus of Linguistic Acceptability (HuCOLA), which is also part of the Hungarian Language Understanding Evaluation Benchmark Kit [HuLU](hulu.nlp.nytud.hu).
### Supported Tasks and Leaderboards
### Languages
The BCP-47 code for Hungarian, the only represented language in this dataset, is hu-HU.
## Dataset Structure
### Data Instances
For each instance, there is an id, a sentence and a label.
An example:
```
{"Sent_id": "dev_0",
"Sent": "A földek eláradtak.",
"Label": "0"}
```
### Data Fields
- Sent_id: unique id of the instance (e.g. "dev_0");
- Sent: a Hungarian sentence;
- Label: '0' for wrong, '1' for good sentences.
### Data Splits
HuCOLA has 3 splits: *train*, *validation* and *test*.
| Dataset split | Number of sentences in the split | Proportion of the split |
|---------------|----------------------------------|-------------------------|
| train         | 7276                             | 80%                     |
| validation    | 900                              | 10%                     |
| test          | 900                              | 10%                     |
The test data is distributed without the labels. To evaluate your model, please [contact us](mailto:ligeti-nagy.noemi@nytud.hu), or check [HuLU's website](hulu.nlp.nytud.hu) for an automatic evaluation (this feature is under construction at the moment). The evaluation metric is the Matthews correlation coefficient.
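A validation-set score can be computed locally with `scikit-learn`. The sketch below is only an illustration: it assumes the corpus loads with the `datasets` library under this repository name, that the split is called `validation`, and that the label column matches the fields listed above. A constant baseline yields an MCC of 0, which is the floor a trained model should beat.

```python
from datasets import load_dataset
from sklearn.metrics import matthews_corrcoef

# Dataset id and split name are assumptions based on this card.
validation = load_dataset("NYTK/HuCOLA", split="validation")

# The card shows both "Label" and "label"; use whichever column is present.
label_col = "Label" if "Label" in validation.column_names else "label"
y_true = [int(label) for label in validation[label_col]]

# Trivial baseline: predict every sentence as acceptable ("1").
y_pred = [1] * len(y_true)
print("MCC of the all-acceptable baseline:", matthews_corrcoef(y_true, y_pred))
```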
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The data was collected by two human annotators from 3 main linguistic books on Hungarian language:
- Kiefer Ferenc (ed.) (1992), Strukturális magyar nyelvtan 1. Mondattan. Budapest, Akadémiai Kiadó.
- Alberti, Gábor and Laczkó, Tibor (eds) (2018), Syntax of Hungarian Nouns and Noun Phrases. I., II. Comprehensive grammar resources. Amsterdam University Press, Amsterdam.
- Katalin É. Kiss and Veronika Hegedűs (eds) (2021), Postpositions and Postpositional Phrases. Amsterdam: Amsterdam University Press.
The process of collecting sentences partly followed the one described in Warstadt et. al (2018). The guideline of our process is available in the repository of [HuCOLA](https://github.com/nytud/HuCOLA).
### Annotations
#### Annotation process
Each instance was annotated by 4 human annotators for its acceptability (see the annotation guidelines in the repository of [HuCOLA](https://github.com/nytud/HuCOLA)).
#### Who are the annotators?
The annotators were native Hungarian speakers (of various ages, from 20 to 67) without any linguistic background.
## Additional Information
### Licensing Information
HuCOLA is released under the CC-BY-SA 4.0 licence.
### Citation Information
If you use this resource or any part of its documentation, please refer to:
Ligeti-Nagy, N., Ferenczi, G., Héja, E., Jelencsik-Mátyus, K., Laki, L. J., Vadász, N., Yang, Z. Gy. and Váradi, T. (2022) HuLU: magyar nyelvű benchmark adatbázis
kiépítése a neurális nyelvmodellek kiértékelése céljából [HuLU: Hungarian benchmark dataset to evaluate neural language models]. XVIII. Magyar Számítógépes Nyelvészeti Konferencia. (in press)
```
@inproceedings{ligetinagy2022hulu,
title={HuLU: magyar nyelvű benchmark adatbázis kiépítése a neurális nyelvmodellek kiértékelése céljából},
author={Ligeti-Nagy, N. and Ferenczi, G. and Héja, E. and Jelencsik-Mátyus, K. and Laki, L. J. and Vadász, N. and Yang, Z. Gy. and Váradi, T.},
booktitle={XVIII. Magyar Számítógépes Nyelvészeti Konferencia},
year={2022}
}
```
### Contributions
Thanks to [lnnoemi](https://github.com/lnnoemi) for adding this dataset. |
NYTK/HuCoPA | ---
annotations_creators:
- found
language_creators:
- found
- expert-generated
language:
- hu
license:
- bsd-2-clause
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- extended|other
task_categories:
- other
task_ids: []
pretty_name: HuCoPA
tags:
- commonsense-reasoning
---
# Dataset Card for HuCoPA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
[HuCoPA dataset](https://github.com/nytud/HuCoPA)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
[lnnoemi](mailto:ligeti-nagy.noemi@nytud.hu)
### Dataset Summary
This is the dataset card for the Hungarian Choice of Plausible Alternatives Corpus (HuCoPA), which is also part of the Hungarian Language Understanding Evaluation Benchmark Kit [HuLU](hulu.nlp.nytud.hu). The corpus was created by translating and re-annotating the original English CoPA corpus (Roemmele et al., 2011).
### Supported Tasks and Leaderboards
'commonsense reasoning'
'question answering'
### Languages
The BCP-47 code for Hungarian, the only represented language in this dataset, is hu-HU.
## Dataset Structure
### Data Instances
For each instance, there is an id, a premise, a question ('cause' or 'effect'), two alternatives and a label (1 or 2).
An example:
```
{"idx": "1",
"question": "cause",
"label": "1",
"premise": "A testem árnyékot vetett a fűre.",
"choice1": "Felkelt a nap.",
"choice2": "A füvet lenyírták."}
```
### Data Fields
- id: unique id of the instances, an integer between 1 and 1000;
- question: "cause" or "effect". It indicates what kind of causal relation we are looking for: in the case of "cause" we search for the more plausible alternative that may be a cause of the premise; in the case of "effect" we are looking for a plausible result of the premise;
- premise: the premise, a sentence;
- choice1: the first alternative, a sentence;
- choice2: the second alternative, a sentence;
- label: the number of the more plausible alternative (1 or 2).
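One common way to use COPA-style data is to turn each instance into two premise-candidate text pairs and let a model score which candidate is more plausible given the question type. The function below is only a sketch of that conversion using the fields documented above; the Hungarian connectives are purely illustrative.

```python
def copa_to_pairs(example):
    """Turn a HuCoPA instance into two (premise, candidate) text pairs.

    For "cause" questions the candidate should explain the premise, for
    "effect" questions it should follow from it; a scorer can then pick
    the higher-scoring pair and compare its index with `label` (1 or 2).
    """
    # Hungarian connectives, used only for illustration: "mert" = because, "ezért" = therefore.
    connective = "mert" if example["question"] == "cause" else "ezért"
    return [
        (example["premise"], connective + " " + example["choice1"]),
        (example["premise"], connective + " " + example["choice2"]),
    ]

example = {
    "idx": "1",
    "question": "cause",
    "label": "1",
    "premise": "A testem árnyékot vetett a fűre.",
    "choice1": "Felkelt a nap.",
    "choice2": "A füvet lenyírták.",
}
for premise, candidate in copa_to_pairs(example):
    print(premise, "->", candidate)
```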
### Data Splits
HuCoPA has 3 splits: *train*, *validation* and *test*.
| Dataset split | Number of instances in the split |
|---------------|----------------------------------|
| train | 400 |
| validation | 100 |
| test | 500 |
The test data is distributed without the labels. To evaluate your model, please [contact us](mailto:ligeti-nagy.noemi@nytud.hu), or check [HuLU's website](hulu.nlp.nytud.hu) for an automatic evaluation (this feature is under construction at the moment).
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The data is a translation of the content of the CoPA corpus. Each sentence was translated by a human translator. Each translation was manually checked and further refined by another annotator.
### Annotations
#### Annotation process
The instances initially inherited their original labels from the CoPA dataset. Each instance was annotated by a human annotator. If the original label and the human annotator's label did not match, we manually curated the instance and assigned a final label to it. This step was necessary to ensure that the causal relationship had not been changed or lost during the translation process.
#### Who are the annotators?
The translators were native Hungarian speakers with English proficiency. The annotators were university students with some linguistic background.
## Additional Information
The human performance on the test set is 96% (accuracy).
### Licensing Information
HuCoPA is released under the BSD 2-Clause License.
Copyright (c) 2010, University of Southern California
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
### Citation Information
If you use this resource or any part of its documentation, please refer to:
Ligeti-Nagy, N., Ferenczi, G., Héja, E., Jelencsik-Mátyus, K., Laki, L. J., Vadász, N., Yang, Z. Gy. and Váradi, T. (2022) HuLU: magyar nyelvű benchmark adatbázis
kiépítése a neurális nyelvmodellek kiértékelése céljából [HuLU: Hungarian benchmark dataset to evaluate neural language models]. In: Berend, Gábor and Gosztolya, Gábor and Vincze, Veronika (eds), XVIII. Magyar Számítógépes Nyelvészeti Konferencia. JATEPress, Szeged. 431–446.
```
@inproceedings{ligetinagy2022hulu,
title={HuLU: magyar nyelvű benchmark adatbázis kiépítése a neurális nyelvmodellek kiértékelése céljából},
author={Ligeti-Nagy, N. and Ferenczi, G. and Héja, E. and Jelencsik-Mátyus, K. and Laki, L. J. and Vadász, N. and Yang, Z. Gy. and Váradi, T.},
booktitle={XVIII. Magyar Számítógépes Nyelvészeti Konferencia},
year={2022},
editors = {Berend, Gábor and Gosztolya, Gábor and Vincze, Veronika},
address = {Szeged},
publisher = {JATEPress},
pages = {431–446}
}
```
and to:
Roemmele, M., Bejan, C., and Gordon, A. (2011) Choice of Plausible Alternatives: An Evaluation of Commonsense Causal Reasoning. AAAI Spring Symposium on Logical Formalizations of Commonsense Reasoning, Stanford University, March 21-23, 2011.
```
@inproceedings{roemmele2011choice,
title={Choice of plausible alternatives: An evaluation of commonsense causal reasoning},
author={Roemmele, Melissa and Bejan, Cosmin Adrian and Gordon, Andrew S},
booktitle={2011 AAAI Spring Symposium Series},
year={2011},
url={https://people.ict.usc.edu/~gordon/publications/AAAI-SPRING11A.PDF},
}
```
### Contributions
Thanks to [lnnoemi](https://github.com/lnnoemi) for adding this dataset.
|
NYTK/HuRC | ---
annotations_creators:
- crowdsourced
language_creators:
- found
- expert-generated
language:
- hu
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: HuRC
size_categories:
- unknown
source_datasets:
- extended|other
task_categories:
- question-answering
task_ids:
- extractive-qa
- abstractive-qa
---
# Dataset Card for HuRC
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
[HuRC dataset](https://github.com/nytud/HuRC)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
[lnnoemi](mailto:ligeti-nagy.noemi@nytud.hu)
### Dataset Summary
This is the dataset card for the Hungarian Corpus for Reading Comprehension with Commonsense Reasoning (HuRC), which is also part of the Hungarian Language Understanding Evaluation Benchmark Kit HuLU.
The dataset contains 80 614 instances. Each instance is composed of a lead, a passage and a cloze-style query with a masked entity. The task is to select the named entity that is being masked in the query.
The data was automatically collected from the online news of Népszabadság online (nol.hu).
### Languages
The BCP-47 code for Hungarian, the only represented language in this dataset, is hu-HU.
## Dataset Structure
### Data Instances
For each instance, there is an id, a lead, a passage, a query and a MASK.
An example:
```
{
"id": "1",
"lead": ["A Közigazgatási és Igazságügyi Minisztérium szerint a Bárka Színház esetében felmerült a felelőtlen gazdálkodás gyanúja, egyes értesülések szerint pedig ebben \"a színház igazgatójának és gazdasági vezetőjének felelőssége is felmerül\""],
"passage": [
"A teátrumnak Navracsics Tibor közigazgatási és igazságügyi miniszterhez és Kocsis Máté VIII. kerületi polgármesterhez",
"reagálva a tárca azt írta, hogy a felelőtlen gazdálkodás gyanújában \"egyes értesülések szerint a színház igazgatójának és gazdasági vezetőjének felelőssége is felmerül\". A KIM \"éppen ezért nagyon várja az Állami Számvevőszék készülő jelentését, hogy tiszta képet kaphasson a színház működéséről\".",
"A minisztérium hangsúlyozta, hogy az elmúlt évben is mindent elkövetett azért, hogy a Bárka Színház \"valós, rangos művészeti térként\" működjön, és a továbbiakban is ez a szándéka, de jelenleg a társulat működtetését a minisztérium fenntartói támogatás formájában jogszerűen még nem tudja megoldani.",
"A teátrum az átadás-átvétel elhúzódásának okát keresve tette közzé nyílt levelét, amelyben elmaradó fizetésekre, előadásokra és bemutatókra hívta fel a figyelmet, és jelezte, hogy várja a helyzet megoldását.",
"A színház átadás-átvétele jelenleg zajlik, a folyamat végeztével a Bárka a józsefvárosi önkormányzattól állami tulajdonba, a tervek szerint a Közigazgatási és Igazságügyi Minisztérium fenntartásába kerül."
],
"query": "A KIM 2014-es költségvetésében szerepel a Bárka Színház, de amíg nem a minisztérium a [MASK] fenntartója, addig ez a költségvetési keret nem nyitható meg.",
"MASK": "Bárka",
}
```
### Data Fields
- id: unique id of the instances;
- lead: a short summary of the article as it was extracted from the source texts;
- passage: 3-6 paragraphs of texts as the body of the article;
- query: the last paragraph of an article, some kind of summary or conclusion, with a named entity masked (with [MASK]) in it;
- MASK: the masked named entity.
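Because the answer is a named entity that also occurs in the article, a very simple illustrative heuristic is to collect capitalized word forms from the passage as answer candidates and to reconstruct the query by filling in the mask. The sketch below does this for (shortened) texts taken from the example instance above; it is not the annotation or modelling method used by the authors.

```python
import re

def candidate_entities(paragraphs):
    """Collect capitalized word forms from the passage as rough answer candidates."""
    candidates = set()
    for paragraph in paragraphs:
        candidates.update(re.findall(r"[A-ZÁÉÍÓÖŐÚÜŰ][\wáéíóöőúüű-]+", paragraph))
    return candidates

def fill_mask(query, answer):
    """Reconstruct the full query sentence from the cloze-style query."""
    return query.replace("[MASK]", answer)

# Shortened versions of the passage and query from the example instance above.
passage = [
    "A teátrumnak Navracsics Tibor közigazgatási és igazságügyi miniszterhez és Kocsis Máté VIII. kerületi polgármesterhez",
    "A minisztérium hangsúlyozta, hogy az elmúlt évben is mindent elkövetett azért, hogy a Bárka Színház valós, rangos művészeti térként működjön",
]
query = ("A KIM 2014-es költségvetésében szerepel a Bárka Színház, de amíg nem a minisztérium "
         "a [MASK] fenntartója, addig ez a költségvetési keret nem nyitható meg.")

print("Bárka" in candidate_entities(passage))  # True
print(fill_mask(query, "Bárka"))
```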
### Data Splits
HuRC has 3 splits: *train*, *validation* and *test*.
| Dataset split | Number of instances in the split | Proportion of the split |
|---------------|----------------------------------|-------------------------|
| train         | 64614                            | 80%                     |
| validation    | 8000                             | 10%                     |
| test          | 8000                             | 10%                     |
The test data is distributed without the MASK fields. To evaluate your model, please [contact us](mailto:ligeti-nagy.noemi@nytud.hu), or check [HuLU's website](hulu.nlp.nytud.hu) for an automatic evaluation (this feature is under construction at the moment).
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
To produce the Hungarian material, we used the daily articles from Népszabadság Online which had both titles and summaries. We selected 3-6 paragraphs from each article, from those which contain proper nouns both in the main part and in the summary. We trained a NER model using huBERT (Nemeskey 2021) for recognizing proper nouns. NerKor (Simon and Vadász 2021) and Huggingface’s token-level classification library were used to fine-tune the model. Our model achieved an F-score of 90.18 on the test material. As a final step, we found pairs of proper names which are present both in the main article and in the summary. Multiple articles contained more than one such pair, so we used those more than once. This resulted in a database of 88655 instances (from 49782 articles).
The quantitative properties of our corpus are as follows:
- Number of articles: 88655
- Number of different articles (type): 49782
- Token: 27703631
- Type: 1115.260
- Average length of text (token): 249.42 (median: 229)
- Average question length (token): 63.07 (median: 56)

We fine-tuned the corpus by hand.
One annotator per 100 units checked and validated the dataset, for which we provided our own demo interface. The automatic masking and the previous occurrence of the entity were checked. This resulted in a database of 80 614 validated entries.
## Additional Information
### Licensing Information
HuRC is released under the cc-by-4.0 license.
### Citation Information
If you use this resource or any part of its documentation, please refer to:
Ligeti-Nagy, N., Ferenczi, G., Héja, E., Jelencsik-Mátyus, K., Laki, L. J., Vadász, N., Yang, Z. Gy. and Váradi, T. (2022) HuLU: magyar nyelvű benchmark adatbázis kiépítése a neurális nyelvmodellek kiértékelése céljából [HuLU: Hungarian benchmark dataset to evaluate neural language models]. XVIII. Magyar Számítógépes Nyelvészeti Konferencia. (in press)
```
@inproceedings{ligetinagy2022hulu,
title={HuLU: magyar nyelvű benchmark adatbázis kiépítése a neurális nyelvmodellek kiértékelése céljából},
author={Ligeti-Nagy, N. and Ferenczi, G. and Héja, E. and Jelencsik-Mátyus, K. and Laki, L. J. and Vadász, N. and Yang, Z. Gy. and Váradi, T.},
booktitle={XVIII. Magyar Számítógépes Nyelvészeti Konferencia},
year={2022}
}
```
### Contributions
Thanks to [lnnoemi](https://github.com/lnnoemi) for adding this dataset. |
NYTK/HuSST | ---
annotations_creators:
- found
language_creators:
- found
- expert-generated
language:
- hu
license:
- bsd-2-clause
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- extended|other
task_categories:
- text-classification
task_ids:
- sentiment-classification
- sentiment-scoring
- text-scoring
pretty_name: HuSST
---
# Dataset Card for HuSST
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Language](#language)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
[HuSST dataset](https://github.com/nytud/HuSST)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
[lnnoemi](mailto:ligeti-nagy.noemi@nytud.hu)
### Dataset Summary
This is the dataset card for the Hungarian version of the Stanford Sentiment Treebank. This dataset is also part of the Hungarian Language Understanding Evaluation Benchmark Kit [HuLU](hulu.nlp.nytud.hu). The corpus was created by translating and re-annotating the original SST (Socher et al., 2013).
### Supported Tasks and Leaderboards
'sentiment classification'
'sentiment scoring'
### Language
The BCP-47 code for Hungarian, the only represented language in this dataset, is hu-HU.
## Dataset Structure
### Data Instances
For each instance, there is an id, a sentence and a sentiment label.
An example:
```
{
"Sent_id": "dev_0",
"Sent": "Nos, a Jason elment Manhattanbe és a Pokolba kapcsán, azt hiszem, az elkerülhetetlen folytatások ötletlistájáról kihúzhatunk egy űrállomást 2455-ben (hé, ne lődd le a poént).",
"Label": "neutral"
}
```
### Data Fields
- Sent_id: unique id of the instances;
- Sent: the sentence, translation of an instance of the SST dataset;
- Label: "negative", "neutral", or "positive".
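For fine-tuning, the string labels usually have to be mapped to integer ids. The snippet below is a minimal sketch; the dataset id, the split name and the `Sent`/`Label` field names are assumptions based on the example shown in this card.

```python
from datasets import load_dataset

label2id = {"negative": 0, "neutral": 1, "positive": 2}

# Dataset id, split name and field names are assumptions based on this card.
train = load_dataset("NYTK/HuSST", split="train")
train = train.map(lambda example: {"label_id": label2id[example["Label"]]})

print(train[0]["Sent"], "->", train[0]["label_id"])
```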
### Data Splits
HuSST has 3 splits: *train*, *validation* and *test*.
| Dataset split | Number of instances in the split |
|---------------|----------------------------------|
| train | 9344 |
| validation | 1168 |
| test | 1168 |
The test data is distributed without the labels. To evaluate your model, please [contact us](mailto:ligeti-nagy.noemi@nytud.hu), or check [HuLU's website](hulu.nlp.nytud.hu) for an automatic evaluation (this feature is under construction at the moment).
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The data is a translation of the content of the SST dataset (only the whole sentences were used). Each sentence was translated by a human translator. Each translation was manually checked and further refined by another annotator.
### Annotations
#### Annotation process
The translated sentences were annotated by three human annotators with one of the following labels: negative, neutral and positive. Each sentence was then curated by a fourth annotator (the 'curator'). The final label is the decision of the curator based on the three labels of the annotators.
#### Who are the annotators?
The translators were native Hungarian speakers with English proficiency. The annotators were university students with some linguistic background.
## Additional Information
### Licensing Information
### Citation Information
If you use this resource or any part of its documentation, please refer to:
Ligeti-Nagy, N., Ferenczi, G., Héja, E., Jelencsik-Mátyus, K., Laki, L. J., Vadász, N., Yang, Z. Gy. and Váradi, T. (2022) HuLU: magyar nyelvű benchmark adatbázis
kiépítése a neurális nyelvmodellek kiértékelése céljából [HuLU: Hungarian benchmark dataset to evaluate neural language models]. XVIII. Magyar Számítógépes Nyelvészeti Konferencia. pp. 431–446.
```
@inproceedings{ligetinagy2022hulu,
title={HuLU: magyar nyelvű benchmark adatbázis kiépítése a neurális nyelvmodellek kiértékelése céljából},
author={Ligeti-Nagy, N. and Ferenczi, G. and Héja, E. and Jelencsik-Mátyus, K. and Laki, L. J. and Vadász, N. and Yang, Z. Gy. and Váradi, T.},
booktitle={XVIII. Magyar Számítógépes Nyelvészeti Konferencia},
year={2022},
pages = {431--446}
}
```
and to:
Socher et al. (2013), Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. In: Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. 1631--1642.
```
@inproceedings{socher-etal-2013-recursive,
title = "Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank",
author = "Socher, Richard and
Perelygin, Alex and
Wu, Jean and
Chuang, Jason and
Manning, Christopher D. and
Ng, Andrew and
Potts, Christopher",
booktitle = "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
month = oct,
year = "2013",
address = "Seattle, Washington, USA",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D13-1170",
pages = "1631--1642",
}
```
### Contributions
Thanks to [lnnoemi](https://github.com/lnnoemi) for adding this dataset. |
NYTK/HuWNLI | ---
annotations_creators:
- found
language_creators:
- found
- expert-generated
language:
- hu
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- extended|other
task_categories:
- other
task_ids:
- coreference-resolution
pretty_name: HuWNLI
tags:
- structure-prediction
---
# Dataset Card for HuWNLI
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
[HuWNLI dataset](https://github.com/nytud/HuWNLI)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
[lnnoemi](mailto:ligeti-nagy.noemi@nytud.hu)
### Dataset Summary
This is the dataset card for the Hungarian translation of the Winograd schemata formatted as an inference task. A Winograd schema is a pair of sentences that differ in only one or two words and that contain an ambiguity that is resolved in opposite ways in the two sentences and requires the use of world knowledge and reasoning for its resolution (Levesque et al. 2012). This dataset is also part of the Hungarian Language Understanding Evaluation Benchmark Kit [HuLU](hulu.nlp.nytud.hu). The corpus was created by translating and manually curating the original English Winograd schemata. The NLI format was created by replacing the ambiguous pronoun with each possible referent (the method is described in GLUE's paper, Wang et al. 2019). We extended the set of sentence pairs derived from the schemata by the translation of the sentence pairs that - together with the Winograd schema sentences - build up the WNLI dataset of GLUE.
### Languages
The BCP-47 code for Hungarian, the only represented language in this dataset, is hu-HU.
## Dataset Structure
### Data Instances
For each instance, there is an orig_id, an id, two sentences and a label.
An example:
```
{"orig_id": "4",
"id": "4",
"sentence1": "A férfi nem tudta felemelni a fiát, mert olyan nehéz volt.",
"sentence2": "A fia nehéz volt.",
"Label": "1"
}
```
### Data Fields
- orig_id: the original id of this sentence pair (more precisely, its English counterpart's) in GLUE's WNLI dataset;
- id: unique id of the instances;
- sentence1: the premise;
- sentence2: the hypothesis;
- label: "1" if sentence2 is entailed by sentence1, and "0" otherwise.
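As a sanity check, a majority-class baseline on the development set can be computed in a few lines. The sketch below assumes the data loads with the `datasets` library under this repository name, that the split is called `validation`, and tries both spellings of the label column seen in this card.

```python
from collections import Counter
from datasets import load_dataset

# Dataset id and split name are assumptions based on this card.
dev = load_dataset("NYTK/HuWNLI", split="validation")

label_col = "label" if "label" in dev.column_names else "Label"
counts = Counter(dev[label_col])
majority_label, frequency = counts.most_common(1)[0]
print(f"Majority label {majority_label!r} covers {frequency / len(dev):.1%} of the dev set")
```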
### Data Splits
The data is distributed in three splits: training set (562), development set (59) and test set (134). The splits follow GLUE's WNLI's splits but contain fewer instances, as many sentence pairs had to be discarded for being untranslatable to Hungarian. The train and development sets have been extended with NLI sentence pairs formed from the Hungarian translation of 6 Winograd schemata left out of the original WNLI dataset.
The test set's sentence pairs are translated from GLUE's WNLI's test set. This set was distributed without labels. Three annotators annotated the Hungarian sentence pairs.
The test set of HuWNLI is also distributed without labels. To evaluate your model, please [contact us](mailto:ligeti-nagy.noemi@nytud.hu), or check [HuLU's website](hulu.nytud.hu) for an automatic evaluation (this feature is under construction at the moment).
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The data is a translation of the English Winograd schemata and the additional sentence pairs of GLUE's WNLI. Each schema and sentence pair was translated by a human translator. Each schema was manually curated by a linguistic expert. The schemata were transformed into the NLI format by a linguistic expert.
During the adaption method, we found two erroneous labels in GLUE's WNLI's train set (id 347 and id 464). We corrected them in our dataset.
## Additional Information
Average human performance on the test set is 92.78% (accuracy).
### Licensing Information
HuWNLI is released under the Creative Commons Attribution-ShareAlike 4.0 International License.
### Citation Information
If you use this resource or any part of its documentation, please refer to:
Ligeti-Nagy, N., Héja, E., Laki, L. J., Takács, D., Yang, Z. Gy. and Váradi, T. (2023) Hát te mekkorát nőttél! - A HuLU első életéve új adatbázisokkal és webszolgáltatással \[Look at how much you have grown! - The first year of HuLU with new databases and with webservice\]. In: Berend, G., Gosztolya, G. and Vincze, V. (eds), XIX. Magyar Számítógépes Nyelvészeti Konferencia. Szeged, Szegedi Tudományegyetem, Informatikai Intézet. 217-230.
```
@inproceedings{ligetinagy2023hulu,
title={Hát te mekkorát nőttél! - A HuLU első életéve új adatbázisokkal és webszolgáltatással},
author={Ligeti-Nagy, N. and Héja, E. and Laki, L. J. and Takács, D. and Yang, Z. Gy. and Váradi, T.},
booktitle={XIX. Magyar Számítógépes Nyelvészeti Konferencia},
year={2023},
editors = {Berend, Gábor and Gosztolya, Gábor and Vincze, Veronika},
address = {Szeged},
publisher = {JATEPress},
pages = {217–230}
}
```
Ligeti-Nagy, N., Ferenczi, G., Héja, E., Jelencsik-Mátyus, K., Laki, L. J., Vadász, N., Yang, Z. Gy. and Váradi, T. (2022) HuLU: magyar nyelvű benchmark adatbázis kiépítése a neurális nyelvmodellek kiértékelése céljából \[HuLU: Hungarian benchmark dataset to evaluate neural language models\]. In: Berend, Gábor and Gosztolya, Gábor and Vincze, Veronika (eds), XVIII. Magyar Számítógépes Nyelvészeti Konferencia. JATEPress, Szeged. 431–446.
```
@inproceedings{ligetinagy2022hulu,
title={HuLU: magyar nyelvű benchmark adatbázis kiépítése a neurális nyelvmodellek kiértékelése céljából},
author={Ligeti-Nagy, N. and Ferenczi, G. and Héja, E. and Jelencsik-Mátyus, K. and Laki, L. J. and Vadász, N. and Yang, Z. Gy. and Váradi, T.},
booktitle={XVIII. Magyar Számítógépes Nyelvészeti Konferencia},
year={2022},
editors = {Berend, Gábor and Gosztolya, Gábor and Vincze, Veronika},
address = {Szeged},
publisher = {JATEPress},
pages = {431–446}
}
```
and to:
Levesque, Hector, Davis, Ernest, Morgenstern, Leora (2012) The Winograd Schema Challenge. In: Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning.
```
@inproceedings{levesque2012winograd,
title={The Winograd Schema Challenge},
author={Levesque, Hector and Davis, Ernest and Morgenstern, Leora},
booktitle={Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning},
year={2012},
organization={Citeseer}
}
```
### Contributions
Thanks to [lnnoemi](https://github.com/lnnoemi) for adding this dataset. |
Nathanael/NPS | TEST |
NbAiLab/NCC_small_100 | # Dataset Card for NBAiLab/NCC_small_100
annotations_creators:
- no-annotation
language_creators:
- found
languages:
- en,nb,no,nn,se,dk,is,fo
licenses:
- odc-by-1.0
multilinguality:
- multilingual
pretty_name: NCC
size_categories:
- 2G<n<1B
source_datasets:
- original
task_categories:
- sequence-modeling
task_ids:
- language-modeling
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Data Fields](#data-fiels)
- [Dataset Creation](#dataset-creation)
- [Statistics](#statistics)
- [Document Types](#document-types)
- [Languages](#languages)
- [Publish Periode](#publish-periode)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/NBAiLab/notram
- **Repository:** https://github.com/NBAiLab/notram
- **Paper:** https://arxiv.org/abs/2104.09617
- **Point of Contact:** [Freddy Wetjen](mailto:freddy.wetjen@nb.no)
The Norwegian Colossal Corpus is a collection of multiple smaller Norwegian corpora suitable for training large language models. We have done extensive cleaning on the datasets, and have made them available in a common format. The total size of the NCC is currently 45GB.
## How to Use
```python
from datasets import load_dataset
data = load_dataset("NBAiLab/NCC_small_100", streaming=True)
```
## Download Data
If you do not want to use the HuggingFace Dataset-library for training, or if you want to do additional pre-processing, it is also possible to download the files locally.
```bash
# Download all files in one batch operation
for i in $(seq -f "%04g" 1 100); do wget https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-$i-of-0100.json.gz & done
# Create one large training file of all shards without unpacking
cat *.gz > onefile.json.gz
```
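The concatenation trick above works because gzip members can simply be appended and read back as one stream. The sketch below reads the combined file line by line; it assumes the shards are JSON-lines files and that each record carries `id` and `text` fields, which should be verified against an actual shard.

```python
import gzip
import json

# Read the concatenated shards produced by the `cat` command above.
with gzip.open("onefile.json.gz", "rt", encoding="utf-8") as f:
    for i, line in enumerate(f):
        document = json.loads(line)
        print(document.get("id"), str(document.get("text"))[:80])  # field names are assumptions
        if i == 2:
            break
```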
<details>
<summary>List of all the files.</summary>
* [train-shard-0001-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0001-of-0100.json.gz)
* [train-shard-0002-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0002-of-0100.json.gz)
* [train-shard-0003-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0003-of-0100.json.gz)
* [train-shard-0004-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0004-of-0100.json.gz)
* [train-shard-0005-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0005-of-0100.json.gz)
* [train-shard-0006-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0006-of-0100.json.gz)
* [train-shard-0007-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0007-of-0100.json.gz)
* [train-shard-0008-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0008-of-0100.json.gz)
* [train-shard-0009-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0009-of-0100.json.gz)
* [train-shard-0010-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0010-of-0100.json.gz)
* [train-shard-0011-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0011-of-0100.json.gz)
* [train-shard-0012-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0012-of-0100.json.gz)
* [train-shard-0013-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0013-of-0100.json.gz)
* [train-shard-0014-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0014-of-0100.json.gz)
* [train-shard-0015-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0015-of-0100.json.gz)
* [train-shard-0016-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0016-of-0100.json.gz)
* [train-shard-0017-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0017-of-0100.json.gz)
* [train-shard-0018-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0018-of-0100.json.gz)
* [train-shard-0019-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0019-of-0100.json.gz)
* [train-shard-0020-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0020-of-0100.json.gz)
* [train-shard-0021-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0021-of-0100.json.gz)
* [train-shard-0022-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0022-of-0100.json.gz)
* [train-shard-0023-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0023-of-0100.json.gz)
* [train-shard-0024-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0024-of-0100.json.gz)
* [train-shard-0025-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0025-of-0100.json.gz)
* [train-shard-0026-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0026-of-0100.json.gz)
* [train-shard-0027-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0027-of-0100.json.gz)
* [train-shard-0028-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0028-of-0100.json.gz)
* [train-shard-0029-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0029-of-0100.json.gz)
* [train-shard-0030-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0030-of-0100.json.gz)
* [train-shard-0031-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0031-of-0100.json.gz)
* [train-shard-0032-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0032-of-0100.json.gz)
* [train-shard-0033-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0033-of-0100.json.gz)
* [train-shard-0034-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0034-of-0100.json.gz)
* [train-shard-0035-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0035-of-0100.json.gz)
* [train-shard-0036-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0036-of-0100.json.gz)
* [train-shard-0037-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0037-of-0100.json.gz)
* [train-shard-0038-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0038-of-0100.json.gz)
* [train-shard-0039-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0039-of-0100.json.gz)
* [train-shard-0040-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0040-of-0100.json.gz)
* [train-shard-0041-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0041-of-0100.json.gz)
* [train-shard-0042-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0042-of-0100.json.gz)
* [train-shard-0043-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0043-of-0100.json.gz)
* [train-shard-0044-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0044-of-0100.json.gz)
* [train-shard-0045-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0045-of-0100.json.gz)
* [train-shard-0046-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0046-of-0100.json.gz)
* [train-shard-0047-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0047-of-0100.json.gz)
* [train-shard-0048-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0048-of-0100.json.gz)
* [train-shard-0049-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0049-of-0100.json.gz)
* [train-shard-0050-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0050-of-0100.json.gz)
* [train-shard-0051-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0051-of-0100.json.gz)
* [train-shard-0052-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0052-of-0100.json.gz)
* [train-shard-0053-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0053-of-0100.json.gz)
* [train-shard-0054-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0054-of-0100.json.gz)
* [train-shard-0055-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0055-of-0100.json.gz)
* [train-shard-0056-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0056-of-0100.json.gz)
* [train-shard-0057-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0057-of-0100.json.gz)
* [train-shard-0058-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0058-of-0100.json.gz)
* [train-shard-0059-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0059-of-0100.json.gz)
* [train-shard-0060-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0060-of-0100.json.gz)
* [train-shard-0061-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0061-of-0100.json.gz)
* [train-shard-0062-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0062-of-0100.json.gz)
* [train-shard-0063-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0063-of-0100.json.gz)
* [train-shard-0064-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0064-of-0100.json.gz)
* [train-shard-0065-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0065-of-0100.json.gz)
* [train-shard-0066-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0066-of-0100.json.gz)
* [train-shard-0067-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0067-of-0100.json.gz)
* [train-shard-0068-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0068-of-0100.json.gz)
* [train-shard-0069-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0069-of-0100.json.gz)
* [train-shard-0070-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0070-of-0100.json.gz)
* [train-shard-0071-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0071-of-0100.json.gz)
* [train-shard-0072-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0072-of-0100.json.gz)
* [train-shard-0073-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0073-of-0100.json.gz)
* [train-shard-0074-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0074-of-0100.json.gz)
* [train-shard-0075-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0075-of-0100.json.gz)
* [train-shard-0076-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0076-of-0100.json.gz)
* [train-shard-0077-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0077-of-0100.json.gz)
* [train-shard-0078-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0078-of-0100.json.gz)
* [train-shard-0079-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0079-of-0100.json.gz)
* [train-shard-0080-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0080-of-0100.json.gz)
* [train-shard-0081-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0081-of-0100.json.gz)
* [train-shard-0082-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0082-of-0100.json.gz)
* [train-shard-0083-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0083-of-0100.json.gz)
* [train-shard-0084-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0084-of-0100.json.gz)
* [train-shard-0085-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0085-of-0100.json.gz)
* [train-shard-0086-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0086-of-0100.json.gz)
* [train-shard-0087-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0087-of-0100.json.gz)
* [train-shard-0088-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0088-of-0100.json.gz)
* [train-shard-0089-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0089-of-0100.json.gz)
* [train-shard-0090-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0090-of-0100.json.gz)
* [train-shard-0091-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0091-of-0100.json.gz)
* [train-shard-0092-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0092-of-0100.json.gz)
* [train-shard-0093-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0093-of-0100.json.gz)
* [train-shard-0094-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0094-of-0100.json.gz)
* [train-shard-0095-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0095-of-0100.json.gz)
* [train-shard-0096-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0096-of-0100.json.gz)
* [train-shard-0097-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0097-of-0100.json.gz)
* [train-shard-0098-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0098-of-0100.json.gz)
* [train-shard-0099-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0099-of-0100.json.gz)
* [train-shard-0100-of-0100](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/train-shard-0100-of-0100.json.gz)
* [validation-shard-0001-of-0001](https://huggingface.co/datasets/NbAiLab/NCC_small_100/resolve/main/data/validation-shard-0001-of-0001.json.gz)
</details>
### Dataset Summary
The NCC_small_100 dataset contains json lines with language training data. Here is an example json line:
```json
{
"id": "1006205",
"doc_type": "cc100",
"publish_year": 2021,
"lang_fasttext": "nn",
"lang_fasttext_conf": "0.641",
"text": "Eg har ein PLAN! KOS deg og ha ei fin helg"
}
```
## Data Fields
|**id** | String with an id pointing to the source of the line, serving as a unique identifier|
|:-----------|:------------|
|**doc_type** | String describing the type of media the text was extracted from (i.e. book, newspaper, etc.)|
|**publish_year** | Integer. The year the text was published. When the year is undetermined it is set to 2021.|
|**lang_fasttext** | String. Language of the text as identified by FastText|
|**lang_fasttext_conf** | String. Confidence score calculated by FastText|
|**text** | String. The complete UTF-8 document. If longer than 1M characters it is split.|
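The fields above can be used directly for filtering. As a minimal sketch (not part of the official loader), the example below streams the corpus and keeps only confident Nynorsk documents; note that `lang_fasttext_conf` is stored as a string and has to be cast to a float:
```python
from itertools import islice
from datasets import load_dataset

# Stream the corpus and keep only confident Nynorsk documents.
data = load_dataset("NbAiLab/NCC_small_100", streaming=True, split="train")
nynorsk = (
    row for row in data
    if row["lang_fasttext"] == "nn" and float(row["lang_fasttext_conf"]) > 0.8
)
for row in islice(nynorsk, 3):
    print(row["id"], row["doc_type"], row["text"][:80])
```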
### Dataset Creation
We provide a **train** and a **validation** split. The validation split is a single file of roughly 1GB, while the train split is sharded into 1GB chunks.
All files are gzipped.
Build date: 03122021
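Because each shard is plain gzipped JSON lines, a downloaded shard can also be inspected without the `datasets` library. A small sketch, assuming one shard has been saved to the working directory:
```python
import gzip
import json

# Read one downloaded shard line by line; every line is a JSON document
# with the fields described above. The filename is just an example.
with gzip.open("train-shard-0001-of-0100.json.gz", "rt", encoding="utf-8") as f:
    for line in f:
        doc = json.loads(line)
        print(doc["publish_year"], doc["doc_type"], len(doc["text"].split()))
        break  # remove the break to process the whole shard
```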
#### Initial Data Collection and Curation
The procedure for the dataset creation is described in detail in our paper.
### Summary
| Words | Documents | Words/Document |
|------------:|------------:|-----------------:|
| 152,828,370 | 452,838 | 337 |
### Document Types
| Source | Words | Documents | Words/Document |
|--------------------------------------:|-----------:|------------:|-----------------:|
| newspaper_ocr | 42,875,141 | 214,047 | 200 |
| parliament | 27,906,014 | 201 | 138,835 |
| books | 19,806,532 | 546 | 36,275 |
| newspapers_online_nb | 10,681,495 | 75,216 | 142 |
| maalfrid_regjeringen | 7,884,471 | 20,124 | 391 |
| maalfrid_ssb | 6,101,595 | 18,502 | 329 |
| maalfrid_uio | 3,891,127 | 16,548 | 235 |
| government_nb | 3,399,192 | 100 | 33,991 |
| wikipedia_download_nbo | 2,520,875 | 11,481 | 219 |
| maalfrid_fylkesmannen | 2,234,608 | 10,150 | 220 |
| publicreports | 2,198,220 | 82 | 26,807 |
| maalfrid_nve | 1,430,464 | 6,458 | 221 |
| maalfrid_patentstyret | 1,367,518 | 4,551 | 300 |
| maalfrid_ntnu | 1,266,953 | 4,378 | 289 |
| newspapers_online_nn | 913,353 | 3,679 | 248 |
| maalfrid_fhi | 729,409 | 3,167 | 230 |
| maalfrid_vegvesen | 710,333 | 3,545 | 200 |
| maalfrid_norad | 708,294 | 2,017 | 351 |
| lovdata_cd_odelsting_2005 | 696,273 | 43 | 16,192 |
| maalfrid_skatteetaten | 679,125 | 1,753 | 387 |
| maalfrid_uib | 615,724 | 2,483 | 247 |
| wikipedia_download_nno | 592,536 | 3,181 | 186 |
| maalfrid_forskningsradet | 526,472 | 1,604 | 328 |
| maalfrid_nasjonalparkstyre | 467,226 | 2,013 | 232 |
| maalfrid_nmbu | 406,053 | 1,578 | 257 |
| maalfrid_domstol | 382,629 | 1,121 | 341 |
| maalfrid_oslomet | 380,272 | 1,002 | 379 |
| maalfrid_nav | 363,539 | 1,670 | 217 |
| maalfrid_banenor | 338,844 | 1,479 | 229 |
| maalfrid_landbruksdirektoratet | 293,549 | 1,048 | 280 |
| maalfrid_helsedirektoratet | 289,269 | 1,076 | 268 |
| government_nn | 281,763 | 25 | 11,270 |
| maalfrid_udir | 228,128 | 874 | 261 |
| maalfrid_nokut | 226,072 | 877 | 257 |
| maalfrid_norges-bank | 224,812 | 829 | 271 |
| maalfrid_vkm | 214,855 | 683 | 314 |
| maalfrid_nbim | 214,235 | 417 | 513 |
| maalfrid_hi | 209,748 | 848 | 247 |
| maalfrid_ngu | 209,454 | 794 | 263 |
| maalfrid_miljodirektoratet | 208,346 | 757 | 275 |
| maalfrid_distriktssenteret | 204,538 | 847 | 241 |
| maalfrid_ptil | 204,458 | 753 | 271 |
| maalfrid_nord | 193,119 | 961 | 200 |
| maalfrid_difi | 172,807 | 788 | 219 |
| maalfrid_fiskeridir | 172,337 | 714 | 241 |
| maalfrid_hivolda | 168,122 | 564 | 298 |
| maalfrid_mattilsynet | 165,226 | 616 | 268 |
| maalfrid_havarikommisjonen | 164,555 | 555 | 296 |
| maalfrid_kulturradet | 151,989 | 472 | 322 |
| maalfrid_kystverket | 151,671 | 686 | 221 |
| maalfrid_ks | 149,000 | 563 | 264 |
| maalfrid_udi | 141,628 | 429 | 330 |
| maalfrid_uia | 134,214 | 535 | 250 |
| maalfrid_hjelpemiddeldatabasen | 129,178 | 764 | 169 |
| maalfrid_dsb | 127,197 | 423 | 300 |
| maalfrid_khrono | 124,208 | 432 | 287 |
| maalfrid_helsetilsynet | 123,804 | 397 | 311 |
| lovdata_cd_somb_rundskriv_2005 | 121,983 | 68 | 1,793 |
| maalfrid_veiviseren | 116,185 | 388 | 299 |
| lovdata_cd_sentrale_forskrifter_2005 | 114,379 | 251 | 455 |
| maalfrid_moreforsk | 114,223 | 451 | 253 |
| maalfrid_husbanken | 112,257 | 359 | 312 |
| maalfrid_forsvarsbygg | 109,309 | 441 | 247 |
| maalfrid_imdi | 108,090 | 357 | 302 |
| maalfrid_jernbanedirektoratet | 107,264 | 435 | 246 |
| maalfrid_konkurransetilsynet | 106,330 | 296 | 359 |
| maalfrid_inn | 102,298 | 613 | 166 |
| maalfrid_legemiddelverket | 100,455 | 452 | 222 |
| maalfrid_dsa | 100,141 | 353 | 283 |
| maalfrid_hiof | 99,743 | 528 | 188 |
| maalfrid_vetinst | 97,390 | 312 | 312 |
| maalfrid_ehelse | 95,975 | 496 | 193 |
| maalfrid_arkivverket | 94,310 | 360 | 261 |
| maalfrid_sdir | 94,192 | 311 | 302 |
| maalfrid_klagenemndssekretariatet | 87,830 | 258 | 340 |
| maalfrid_dibk | 84,106 | 336 | 250 |
| maalfrid_nhh | 81,294 | 317 | 256 |
| maalfrid_sprakradet | 80,918 | 315 | 256 |
| maalfrid_toll | 79,364 | 305 | 260 |
| maalfrid_politiet | 78,471 | 240 | 326 |
| maalfrid_vestlandfylke | 77,600 | 304 | 255 |
| maalfrid_riksrevisjonen | 77,117 | 225 | 342 |
| maalfrid_met | 76,310 | 400 | 190 |
| maalfrid_artsdatabanken | 76,117 | 200 | 380 |
| maalfrid_kartverket | 75,468 | 397 | 190 |
| maalfrid_bufdir | 75,375 | 262 | 287 |
| maalfrid_nibio | 74,594 | 386 | 193 |
| maalfrid_nkom | 63,734 | 215 | 296 |
| maalfrid_npd | 63,605 | 260 | 244 |
| maalfrid_nlr | 61,251 | 364 | 168 |
| maalfrid_aldringoghelse | 58,322 | 147 | 396 |
| maalfrid_uis | 57,580 | 190 | 303 |
| maalfrid_custompublish | 56,876 | 219 | 259 |
| maalfrid_nyemetoder | 56,634 | 245 | 231 |
| maalfrid_sykkelbynettverket | 53,461 | 230 | 232 |
| maalfrid_arbeidstilsynet | 52,903 | 127 | 416 |
| maalfrid_luftfartstilsynet | 51,929 | 221 | 234 |
| maalfrid_seniorporten | 50,569 | 159 | 318 |
| maalfrid_bioteknologiradet | 49,956 | 129 | 387 |
| maalfrid_riksantikvaren | 49,722 | 187 | 265 |
| maalfrid_sjt | 47,006 | 230 | 204 |
| maalfrid_dfo | 46,582 | 206 | 226 |
| maalfrid_hvl | 46,544 | 202 | 230 |
| lovdata_cd_lokaleforskrifter_2005 | 46,482 | 476 | 97 |
| maalfrid_forbrukerradet | 44,620 | 157 | 284 |
| maalfrid_himolde | 43,761 | 226 | 193 |
| maalfrid_kompetansenorge | 43,626 | 213 | 204 |
| maalfrid_ldo | 41,409 | 153 | 270 |
| lovdata_cd_norgeslover_2005 | 40,450 | 32 | 1,264 |
| maalfrid_forskningsetikk | 39,574 | 127 | 311 |
| maalfrid_naku | 37,039 | 107 | 346 |
| maalfrid_usn | 35,982 | 154 | 233 |
| maalfrid_godeidrettsanlegg | 35,482 | 145 | 244 |
| maalfrid_naturfag | 34,881 | 132 | 264 |
| maalfrid_matematikksenteret | 34,258 | 158 | 216 |
| maalfrid_medietilsynet | 33,904 | 145 | 233 |
| maalfrid_diskrimineringsnemnda | 33,264 | 89 | 373 |
| maalfrid_nupi | 31,508 | 121 | 260 |
| maalfrid_miljopakken | 31,029 | 140 | 221 |
| lovdata_cd_rtv_rundskriv_2005 | 30,518 | 222 | 137 |
| maalfrid_dirmin | 30,360 | 117 | 259 |
| maalfrid_diku | 29,246 | 135 | 216 |
| maalfrid_arbeidsretten | 27,492 | 92 | 298 |
| maalfrid_fellesstudentsystem | 27,029 | 197 | 137 |
| maalfrid_kriminalitetsforebygging | 26,971 | 104 | 259 |
| maalfrid_statsbygg | 26,256 | 102 | 257 |
| maalfrid_nb | 25,375 | 94 | 269 |
| maalfrid_nih | 25,036 | 112 | 223 |
| maalfrid_folketrygdfondet | 25,027 | 91 | 275 |
| maalfrid_npolar | 24,843 | 62 | 400 |
| maalfrid_valgdirektoratet | 23,205 | 205 | 113 |
| maalfrid_lottstift | 22,736 | 78 | 291 |
| maalfrid_naturfagsenteret | 22,618 | 95 | 238 |
| maalfrid_samordnaopptak | 22,400 | 56 | 400 |
| maalfrid_sykehuspartner | 21,855 | 108 | 202 |
| maalfrid_unit | 21,305 | 135 | 157 |
| lovdata_cd_rundskriv_lovavdeling_2005 | 21,295 | 10 | 2,129 |
| maalfrid_anskaffelser | 21,097 | 104 | 202 |
| maalfrid_barneombudet | 20,092 | 65 | 309 |
| maalfrid_mareano | 19,922 | 91 | 218 |
| maalfrid_datatilsynet | 19,845 | 55 | 360 |
| maalfrid_fiskeridirektoratet | 18,831 | 60 | 313 |
| maalfrid_spesialenheten | 18,550 | 47 | 394 |
| maalfrid_xn--miljlftet-o8ab | 18,447 | 78 | 236 |
| lovdata_cd_skatt_rundskriv_2005 | 18,316 | 7 | 2,616 |
| maalfrid_skrivesenteret | 17,951 | 102 | 175 |
| maalfrid_khio | 16,924 | 63 | 268 |
| maalfrid_bibliotekutvikling | 16,631 | 89 | 186 |
| maalfrid_helsenorge | 15,431 | 60 | 257 |
| maalfrid_sykehusinnkjop | 15,204 | 92 | 165 |
| maalfrid_spk | 13,824 | 44 | 314 |
| maalfrid_aho | 13,268 | 78 | 170 |
| maalfrid_matportalen | 12,756 | 51 | 250 |
| maalfrid_nfi | 12,696 | 36 | 352 |
| maalfrid_samas | 12,650 | 62 | 204 |
| maalfrid_kunstkultursenteret | 12,307 | 35 | 351 |
| maalfrid_nhn | 12,156 | 77 | 157 |
| maalfrid_pasientsikkerhetsprogrammet | 11,892 | 91 | 130 |
| maalfrid_ceres | 11,310 | 44 | 257 |
| maalfrid_nysgjerrigper | 11,177 | 63 | 177 |
| maalfrid_une | 11,036 | 23 | 479 |
| maalfrid_nynorsksenteret | 10,822 | 45 | 240 |
| maalfrid_natursekken | 10,060 | 73 | 137 |
| maalfrid_nidsenter | 9,996 | 34 | 294 |
| maalfrid_nsm | 9,926 | 39 | 254 |
| maalfrid_justervesenet | 9,847 | 29 | 339 |
| maalfrid_giek | 9,769 | 39 | 250 |
| maalfrid_digdir | 9,675 | 54 | 179 |
| maalfrid_stami | 9,518 | 22 | 432 |
| maalfrid_sshf | 9,488 | 37 | 256 |
| maalfrid_kriminalomsorgen | 9,126 | 32 | 285 |
| maalfrid_vinmonopolet | 9,094 | 22 | 413 |
| maalfrid_nodnett | 8,738 | 50 | 174 |
| maalfrid_gjenopptakelse | 8,249 | 30 | 274 |
| maalfrid_fordelingsutvalget | 8,242 | 28 | 294 |
| maalfrid_kjonnsforskning | 8,010 | 22 | 364 |
| maalfrid_nasjonalmuseet | 7,935 | 21 | 377 |
| maalfrid_forsvaret | 7,614 | 27 | 282 |
| maalfrid_ombudsmann | 7,496 | 12 | 624 |
| maalfrid_forbrukereuropa | 7,260 | 28 | 259 |
| maalfrid_romsenter | 7,219 | 27 | 267 |
| maalfrid_ovf | 6,699 | 28 | 239 |
| maalfrid_beccle | 6,686 | 33 | 202 |
| maalfrid_forbrukertilsynet | 6,440 | 19 | 338 |
| maalfrid_helfo | 5,746 | 21 | 273 |
| maalfrid_politietssikkerhetstjeneste | 5,570 | 16 | 348 |
| maalfrid_geonorge | 5,228 | 35 | 149 |
| maalfrid_realfagsloyper | 5,155 | 22 | 234 |
| maalfrid_opplaringslovutvalget | 5,062 | 11 | 460 |
| maalfrid_vea-fs | 5,026 | 28 | 179 |
| maalfrid_energimerking | 4,842 | 25 | 193 |
| maalfrid_jernbanemagasinet | 4,663 | 14 | 333 |
| maalfrid_traumebevisst | 4,456 | 50 | 89 |
| maalfrid_politihogskolen | 4,434 | 27 | 164 |
| maalfrid_universell | 4,138 | 37 | 111 |
| maalfrid_nafkam | 4,096 | 11 | 372 |
| maalfrid_koro | 3,781 | 10 | 378 |
| maalfrid_npe | 3,744 | 21 | 178 |
| maalfrid_regionaleforskningsfond | 3,512 | 23 | 152 |
| maalfrid_denkulturelleskolesekken | 3,375 | 7 | 482 |
| maalfrid_squarespace | 3,310 | 12 | 275 |
| maalfrid_riksteatret | 3,143 | 12 | 261 |
| maalfrid_riksmekleren | 2,936 | 15 | 195 |
| maalfrid_pkh | 2,927 | 9 | 325 |
| maalfrid_konfliktraadet | 2,918 | 9 | 324 |
| maalfrid_aasentunet | 2,713 | 8 | 339 |
| maalfrid_radetfordyreetikk | 2,579 | 12 | 214 |
| maalfrid_generaladvokaten | 2,428 | 7 | 346 |
| maalfrid_lanekassen | 2,237 | 7 | 319 |
| maalfrid_okokrim | 2,184 | 10 | 218 |
| maalfrid_kulturminnefondet | 2,157 | 10 | 215 |
| maalfrid_whocc | 2,143 | 13 | 164 |
| maalfrid_brreg | 2,140 | 13 | 164 |
| maalfrid_polarhistorie | 2,016 | 7 | 288 |
| maalfrid_unknown | 2,015 | 11 | 183 |
| maalfrid_ffi | 2,010 | 6 | 335 |
| maalfrid_finansportalen | 1,967 | 7 | 281 |
| maalfrid_digidel | 1,701 | 10 | 170 |
| maalfrid_sismo | 1,685 | 6 | 280 |
| maalfrid_nlb | 1,665 | 5 | 333 |
| maalfrid_lektor2 | 1,397 | 8 | 174 |
| maalfrid_sivilforsvaret | 1,365 | 8 | 170 |
| maalfrid_konkursradet | 1,309 | 4 | 327 |
| maalfrid_varsom | 1,281 | 10 | 128 |
| maalfrid_informasjonskompetanse | 1,254 | 8 | 156 |
| maalfrid_skattefunn | 1,171 | 3 | 390 |
| maalfrid_sivilrett | 1,166 | 3 | 388 |
| maalfrid_uit | 1,112 | 16 | 69 |
| maalfrid_yrkesfisker | 1,110 | 10 | 111 |
| maalfrid_nbsk | 1,098 | 8 | 137 |
| maalfrid_lokforerskolen | 1,075 | 7 | 153 |
| maalfrid_laudim | 1,069 | 8 | 133 |
| maalfrid_nyinorge | 1,064 | 2 | 532 |
| maalfrid_transport21 | 1,030 | 4 | 257 |
| maalfrid_openaccess | 953 | 3 | 317 |
| maalfrid_sinn | 924 | 5 | 184 |
| maalfrid_htu | 881 | 4 | 220 |
| maalfrid_yr | 865 | 12 | 72 |
| maalfrid_akkreditert | 856 | 4 | 214 |
| maalfrid_helseklage | 855 | 3 | 285 |
| maalfrid_ssn | 841 | 5 | 168 |
| maalfrid_fug | 816 | 2 | 408 |
| maalfrid_matogindustri | 780 | 6 | 130 |
| maalfrid_fordelingsutvalet | 772 | 2 | 386 |
| maalfrid_dekom | 764 | 18 | 42 |
| maalfrid_lokalhistorie | 753 | 3 | 251 |
| maalfrid_unesco | 749 | 4 | 187 |
| maalfrid_omsorgsforskning | 711 | 5 | 142 |
| maalfrid_pts | 651 | 2 | 325 |
| maalfrid_valg | 638 | 2 | 319 |
| maalfrid_forbrukerklageutvalget | 626 | 2 | 313 |
| maalfrid_miljoklagenemnda | 625 | 3 | 208 |
| maalfrid_regjeringsadvokaten | 616 | 2 | 308 |
| maalfrid_iearth | 552 | 3 | 184 |
| maalfrid_skeivtarkiv | 552 | 4 | 138 |
| maalfrid_xn--kvinneligomskjring-1ub | 514 | 1 | 514 |
| maalfrid_haldenfengsel | 469 | 1 | 469 |
| maalfrid_hjelpelinjen | 466 | 2 | 233 |
| maalfrid_sevuppt | 429 | 1 | 429 |
| maalfrid_norec | 376 | 1 | 376 |
| maalfrid_kk-utvalget | 348 | 1 | 348 |
| maalfrid_ah | 346 | 1 | 346 |
| maalfrid_lykillinn | 331 | 1 | 331 |
| maalfrid_vergemal | 319 | 1 | 319 |
| maalfrid_riksadvokaten | 315 | 2 | 157 |
| maalfrid_global | 301 | 1 | 301 |
| maalfrid_webhuset | 280 | 1 | 280 |
| maalfrid_xn--tilbakefring-2jb | 267 | 2 | 133 |
| maalfrid_oslofengsel | 266 | 1 | 266 |
| maalfrid_nasjonaleturistveger | 227 | 1 | 227 |
| maalfrid_kulturped | 172 | 1 | 172 |
| maalfrid_altinn | 170 | 2 | 85 |
| maalfrid_shiprep | 165 | 2 | 82 |
| maalfrid_kulturoghelse | 161 | 4 | 40 |
| maalfrid_kantinekurset | 145 | 1 | 145 |
| maalfrid_designavgang | 145 | 1 | 145 |
| maalfrid_memu | 126 | 2 | 63 |
| maalfrid_alleteller | 123 | 1 | 123 |
| maalfrid_havmiljo | 118 | 1 | 118 |
| maalfrid_fmfiavo@fylkesmannen | 81 | 2 | 40 |
| maalfrid_okopark | 61 | 1 | 61 |
| maalfrid_nynorskbok | 52 | 1 | 52 |
| maalfrid_uh-it | 47 | 2 | 23 |
| maalfrid_bastoyfengsel | 46 | 1 | 46 |
| maalfrid_overgangsbolig | 40 | 1 | 40 |
| maalfrid_spinn-inn | 37 | 2 | 18 |
| maalfrid_karriereveiledning | 31 | 1 | 31 |
| maalfrid_norskpetroleum | 15 | 2 | 7 |
| maalfrid_feide | 9 | 1 | 9 |
### Languages
| Language | Words | Documents | Words/Document |
|-----------:|------------:|------------:|-----------------:|
| no | 110,561,181 | 373,475 | 296 |
| da | 22,054,103 | 12,507 | 1,763 |
| en | 10,551,361 | 33,082 | 318 |
| nn | 6,400,816 | 21,583 | 296 |
| fr | 1,150,970 | 2,354 | 488 |
| de | 848,915 | 1,804 | 470 |
| sv | 290,653 | 2,578 | 112 |
| es | 238,453 | 910 | 262 |
| fi | 138,410 | 984 | 140 |
| et | 71,255 | 507 | 140 |
| cs | 57,634 | 465 | 123 |
| oc | 51,457 | 109 | 472 |
| pt | 49,471 | 326 | 151 |
| nl | 38,024 | 266 | 142 |
| la | 36,388 | 20 | 1,819 |
| uk | 31,820 | 107 | 297 |
| zh | 27,640 | 181 | 152 |
| eu | 25,582 | 74 | 345 |
| it | 24,134 | 199 | 121 |
| ru | 24,022 | 149 | 161 |
| pl | 23,919 | 216 | 110 |
| ca | 23,748 | 84 | 282 |
| gu | 16,739 | 1 | 16,739 |
| fa | 11,657 | 49 | 237 |
| hu | 10,583 | 173 | 61 |
| is | 10,225 | 37 | 276 |
| ja | 9,563 | 109 | 87 |
| el | 5,320 | 20 | 266 |
| id | 5,254 | 44 | 119 |
| ar | 4,268 | 20 | 213 |
| so | 3,343 | 13 | 257 |
| sl | 3,243 | 47 | 69 |
| vi | 3,077 | 22 | 139 |
| sr | 2,022 | 29 | 69 |
| hr | 1,947 | 23 | 84 |
| tr | 1,802 | 41 | 43 |
| gl | 1,709 | 17 | 100 |
| mn | 1,575 | 1 | 1,575 |
| lt | 1,442 | 15 | 96 |
| am | 1,405 | 6 | 234 |
| ko | 1,301 | 29 | 44 |
| sq | 1,265 | 8 | 158 |
| ro | 1,214 | 13 | 93 |
| kk | 1,092 | 2 | 546 |
| ur | 1,003 | 5 | 200 |
| ml | 986 | 6 | 164 |
| sh | 939 | 5 | 187 |
| eo | 755 | 14 | 53 |
| th | 550 | 12 | 45 |
| ta | 505 | 6 | 84 |
| sw | 468 | 3 | 156 |
| sk | 442 | 12 | 36 |
| war | 369 | 3 | 123 |
| tl | 340 | 2 | 170 |
| bg | 327 | 1 | 327 |
| pnb | 276 | 1 | 276 |
| bs | 230 | 2 | 115 |
| ceb | 196 | 6 | 32 |
| cy | 182 | 2 | 91 |
| ku | 175 | 1 | 175 |
| ga | 102 | 6 | 17 |
| my | 82 | 1 | 82 |
| hy | 66 | 2 | 33 |
| ast | 59 | 1 | 59 |
| ms | 53 | 13 | 4 |
| be | 40 | 1 | 40 |
| nds | 30 | 6 | 5 |
| lv | 30 | 3 | 10 |
| als | 22 | 3 | 7 |
| mk | 21 | 2 | 10 |
| as | 17 | 1 | 17 |
| br | 16 | 3 | 5 |
| af | 13 | 1 | 13 |
| tt | 12 | 2 | 6 |
| si | 10 | 1 | 10 |
| su | 8 | 1 | 8 |
| bn | 8 | 1 | 8 |
| hsb | 6 | 1 | 6 |
| jv | 5 | 1 | 5 |
| fy | 5 | 2 | 2 |
| az | 5 | 1 | 5 |
| pms | 4 | 1 | 4 |
| jbo | 4 | 1 | 4 |
| lb | 3 | 1 | 3 |
| io | 3 | 1 | 3 |
| he | 1 | 1 | 1 |
### Publish Period
| Decade | Words | Documents | Words/Document |
|---------:|-----------:|------------:|-----------------:|
| 2020 | 90,368,489 | 238,255 | 568 |
| 2010 | 7,706,272 | 52,464 | 1,483 |
| 2000 | 10,118,391 | 36,978 | 3,135 |
| 1990 | 16,379,779 | 54,636 | 2,989 |
| 1980 | 3,378,092 | 11,838 | 2,845 |
| 1970 | 4,041,362 | 17,805 | 2,261 |
| 1960 | 3,523,333 | 17,974 | 1,971 |
| 1950 | 2,128,506 | 10,387 | 2,058 |
| 1940 | 2,662,606 | 12,271 | 2,521 |
| 1930 | 964,846 | 20 | 383,978 |
| 1920 | 744,560 | 16 | 328,756 |
| 1910 | 1,701,319 | 31 | 527,445 |
| 1900 | 1,183,273 | 24 | 414,972 |
| 1890 | 2,246,433 | 40 | 461,126 |
| 1880 | 1,059,838 | 19 | 490,702 |
| 1870 | 999,024 | 15 | 521,165 |
| 1860 | 842,042 | 17 | 533,772 |
| 1850 | 1,408,491 | 25 | 434,091 |
| 1840 | 627,004 | 10 | 398,914 |
| 1830 | 695,289 | 11 | 475,094 |
| 1820 | 49,421 | 2 | 49,421 |
## Considerations for Using the Data
This corpus contains copyrighted data and may not be used outside the National Library of Norway. The dataset should not be distributed.
### Discussion of Biases
Please refer to our paper.
### Dataset Curators
Freddy.wetjen@nb.no
Per.Kummervold@nb.no
## License
Various licenses apply to different parts of the corpus. Every document in the corpus carries a **"doc_type"** tag telling which part it belongs to. If you are unable to accept any of the licenses, you should filter out the **"doc_type"**s with a conflicting license.
| Doc_type | License |
| :-------- | :------------- |
| government_nb, government_nn, parliament, publicreports, lovdata_cd_\*, maalfrid_\* | [NLOD 2.0](https://data.norge.no/nlod/en/2.0/)|
| newspapers_ocr, newspapers_pdf, books| [CC0 1.0](https://creativecommons.org/publicdomain/zero/1.0/)|
| newspapers_online_nb, newspapers_online_nn | [CC BY-NC 2.0](https://creativecommons.org/licenses/by-nc/2.0/)|
| opensubtitles, wikipedia | [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/)|
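As a convenience, the table above can be turned into a filter. The sketch below drops the doc_types released under CC BY-NC 2.0; it is an illustrative example, not an official compliance tool:
```python
from datasets import load_dataset

# The online newspapers are the CC BY-NC 2.0 part of the corpus.
BLOCKED_DOC_TYPES = {"newspapers_online_nb", "newspapers_online_nn"}

data = load_dataset("NbAiLab/NCC_small_100", streaming=True, split="train")
permissive = (row for row in data if row["doc_type"] not in BLOCKED_DOC_TYPES)
```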
### Citation Information
We are preparing an article with detailed information about this corpus. Until it is published, please cite our paper discussing the first version of this corpus:
```
@inproceedings{kummervold-etal-2021-operationalizing,
    title = "Operationalizing a National Digital Library: The Case for a {N}orwegian Transformer Model",
    author = "Kummervold, Per E and
      De la Rosa, Javier and
      Wetjen, Freddy and
      Brygfjeld, Svein Arne",
    booktitle = "Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)",
    year = "2021",
    address = "Reykjavik, Iceland (Online)",
    publisher = "Link{\"o}ping University Electronic Press, Sweden",
    url = "https://aclanthology.org/2021.nodalida-main.3",
    pages = "20--29",
    abstract = "In this work, we show the process of building a large-scale training set from digital and digitized collections at a national library. The resulting Bidirectional Encoder Representations from Transformers (BERT)-based language model for Norwegian outperforms multilingual BERT (mBERT) models in several token and sequence classification tasks for both Norwegian Bokm{\aa}l and Norwegian Nynorsk. Our model also improves the mBERT performance for other languages present in the corpus such as English, Swedish, and Danish. For languages not included in the corpus, the weights degrade moderately while keeping strong multilingual properties. Therefore, we show that building high-quality models within a memory institution using somewhat noisy optical character recognition (OCR) content is feasible, and we hope to pave the way for other memory institutions to follow.",
}
```
|
NbAiLab/NCC_small_divided | # Dataset Card for NBAiLab/NCC_small_divided
---
annotations_creators:
- no-annotation
language_creators:
- found
languages:
- en
- nb
- 'no'
- nn
- se
- dk
- is
- fo
licenses:
- odc-by-1.0
multilinguality:
- multilingual
pretty_name: NCC
size_categories:
- 2G<n<1B
source_datasets:
- original
task_categories:
- sequence-modeling
task_ids:
- language-modeling
---
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Statistics](#statistics)
- [Document Types](#document-types)
- [Languages](#languages)
- [Publish Period](#publish-period)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/NBAiLab/notram
- **Repository:** https://github.com/NBAiLab/notram
- **Paper:** https://arxiv.org/abs/2104.09617
- **Point of Contact:** [Freddy Wetjen](mailto:freddy.wetjen@nb.no)
The Norwegian Colossal Corpus is a collection of multiple smaller Norwegian corpora suitable for training large language models. We have done extensive cleaning of the datasets and have made them available in a common format. The total size of the NCC is currently 45GB.
## How to Use
```python
from datasets import load_dataset
data = load_dataset("NBAiLab/NCC_small_divided", streaming=True)
```
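The streaming object behaves like a dictionary of iterable splits; a quick sketch for peeking at a few records (assuming the default `train` split name):
```python
from itertools import islice
from datasets import load_dataset

# Peek at the first few records of the streaming train split.
data = load_dataset("NbAiLab/NCC_small_divided", streaming=True)
for row in islice(data["train"], 3):
    print(row["id"], row["doc_type"], row["text"][:60])
```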
## Download Data
If you do not want to use the HuggingFace Datasets library for training, or if you want to do additional pre-processing, it is also possible to download the files locally.
```bash
# Download all files in one batch operation
for i in $(seq -f "%04g" 1 1); do wget https://huggingface.co/datasets/NbAiLab/NCC_small/resolve/main/data/train-shard-$i-of-0001.json.gz; done
# Create one large training file of all shards without unpacking
cat *.gz > onefile.json.gz
```
<details>
<summary>List of all the files.</summary>
* [train-shard-0001-of-0001](https://huggingface.co/datasets/NbAiLab/NCC_small/resolve/main/data/train-shard-0001-of-0001.json.gz)
* [validation-shard-0001-of-0001](https://huggingface.co/datasets/NbAiLab/NCC_small/resolve/main/data/validation-shard-0001-of-0001.json.gz)
</details>
### Dataset Summary
The NCC_small dataset contains json lines with language training data. Here is an example json line:
```json
{
"id": "1006205",
"doc_type": "cc100",
"publish_year": 2021,
"lang_fasttext": "nn",
"lang_fasttext_conf": "0.641",
"text": "Eg har ein PLAN! KOS deg og ha ei fin helg"
}
```
## Data Fields
|**id** | String with an id pointing to the source of the line, serving as a unique identifier|
|:-----------|:------------|
|**doc_type** | String describing the type of media the text was extracted from (i.e. book, newspaper, etc.)|
|**publish_year** | Integer. The year the text was published. When the year is undetermined it is set to 2021.|
|**lang_fasttext** | String. Language of the text as identified by FastText|
|**lang_fasttext_conf** | String. Confidence score calculated by FastText|
|**text** | String. The complete UTF-8 document. If longer than 1M characters it is split.|
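The per-source statistics further down can be approximated straight from the JSON lines. A rough sketch that tallies whitespace-separated words per `doc_type` for one locally downloaded shard (the filename is just an example):
```python
import gzip
import json
from collections import Counter

# Tally whitespace-separated words per doc_type for one local shard.
words_per_type = Counter()
with gzip.open("train-shard-0001-of-0001.json.gz", "rt", encoding="utf-8") as f:
    for line in f:
        doc = json.loads(line)
        words_per_type[doc["doc_type"]] += len(doc["text"].split())

for doc_type, words in words_per_type.most_common(10):
    print(f"{doc_type}: {words:,}")
```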
### Dataset Creation
We provide a **train** and a **validation** split. The validation split is a single file of roughly 1GB, while the train split is sharded into 1GB chunks.
All files are gzipped.
Build date: 03122021
#### Initial Data Collection and Curation
The procedure for the dataset creation is described in detail in our paper.
### Summary
| Words | Documents | Words/Document |
|------------:|------------:|-----------------:|
| 152,829,466 | 452,845 | 337 |
### Document Types
| Source | Words | Documents | Words/Document |
|--------------------------------------:|-----------:|------------:|-----------------:|
| newspaper_ocr | 42,875,679 | 214,051 | 200 |
| parliament | 27,906,014 | 201 | 138,835 |
| books | 19,806,532 | 546 | 36,275 |
| newspapers_online_nb | 10,681,801 | 75,218 | 142 |
| maalfrid_regjeringen | 7,884,471 | 20,124 | 391 |
| maalfrid_ssb | 6,101,847 | 18,503 | 329 |
| maalfrid_uio | 3,891,127 | 16,548 | 235 |
| government_nb | 3,399,192 | 100 | 33,991 |
| wikipedia_download_nbo | 2,520,875 | 11,481 | 219 |
| maalfrid_fylkesmannen | 2,234,608 | 10,150 | 220 |
| publicreports | 2,198,220 | 82 | 26,807 |
| maalfrid_nve | 1,430,464 | 6,458 | 221 |
| maalfrid_patentstyret | 1,367,518 | 4,551 | 300 |
| maalfrid_ntnu | 1,266,953 | 4,378 | 289 |
| newspapers_online_nn | 913,353 | 3,679 | 248 |
| maalfrid_fhi | 729,409 | 3,167 | 230 |
| maalfrid_vegvesen | 710,333 | 3,545 | 200 |
| maalfrid_norad | 708,294 | 2,017 | 351 |
| lovdata_cd_odelsting_2005 | 696,273 | 43 | 16,192 |
| maalfrid_skatteetaten | 679,125 | 1,753 | 387 |
| maalfrid_uib | 615,724 | 2,483 | 247 |
| wikipedia_download_nno | 592,536 | 3,181 | 186 |
| maalfrid_forskningsradet | 526,472 | 1,604 | 328 |
| maalfrid_nasjonalparkstyre | 467,226 | 2,013 | 232 |
| maalfrid_nmbu | 406,053 | 1,578 | 257 |
| maalfrid_domstol | 382,629 | 1,121 | 341 |
| maalfrid_oslomet | 380,272 | 1,002 | 379 |
| maalfrid_nav | 363,539 | 1,670 | 217 |
| maalfrid_banenor | 338,844 | 1,479 | 229 |
| maalfrid_landbruksdirektoratet | 293,549 | 1,048 | 280 |
| maalfrid_helsedirektoratet | 289,269 | 1,076 | 268 |
| government_nn | 281,763 | 25 | 11,270 |
| maalfrid_udir | 228,128 | 874 | 261 |
| maalfrid_nokut | 226,072 | 877 | 257 |
| maalfrid_norges-bank | 224,812 | 829 | 271 |
| maalfrid_vkm | 214,855 | 683 | 314 |
| maalfrid_nbim | 214,235 | 417 | 513 |
| maalfrid_hi | 209,748 | 848 | 247 |
| maalfrid_ngu | 209,454 | 794 | 263 |
| maalfrid_miljodirektoratet | 208,346 | 757 | 275 |
| maalfrid_distriktssenteret | 204,538 | 847 | 241 |
| maalfrid_ptil | 204,458 | 753 | 271 |
| maalfrid_nord | 193,119 | 961 | 200 |
| maalfrid_difi | 172,807 | 788 | 219 |
| maalfrid_fiskeridir | 172,337 | 714 | 241 |
| maalfrid_hivolda | 168,122 | 564 | 298 |
| maalfrid_mattilsynet | 165,226 | 616 | 268 |
| maalfrid_havarikommisjonen | 164,555 | 555 | 296 |
| maalfrid_kulturradet | 151,989 | 472 | 322 |
| maalfrid_kystverket | 151,671 | 686 | 221 |
| maalfrid_ks | 149,000 | 563 | 264 |
| maalfrid_udi | 141,628 | 429 | 330 |
| maalfrid_uia | 134,214 | 535 | 250 |
| maalfrid_hjelpemiddeldatabasen | 129,178 | 764 | 169 |
| maalfrid_dsb | 127,197 | 423 | 300 |
| maalfrid_khrono | 124,208 | 432 | 287 |
| maalfrid_helsetilsynet | 123,804 | 397 | 311 |
| lovdata_cd_somb_rundskriv_2005 | 121,983 | 68 | 1,793 |
| maalfrid_veiviseren | 116,185 | 388 | 299 |
| lovdata_cd_sentrale_forskrifter_2005 | 114,379 | 251 | 455 |
| maalfrid_moreforsk | 114,223 | 451 | 253 |
| maalfrid_husbanken | 112,257 | 359 | 312 |
| maalfrid_forsvarsbygg | 109,309 | 441 | 247 |
| maalfrid_imdi | 108,090 | 357 | 302 |
| maalfrid_jernbanedirektoratet | 107,264 | 435 | 246 |
| maalfrid_konkurransetilsynet | 106,330 | 296 | 359 |
| maalfrid_inn | 102,298 | 613 | 166 |
| maalfrid_legemiddelverket | 100,455 | 452 | 222 |
| maalfrid_dsa | 100,141 | 353 | 283 |
| maalfrid_hiof | 99,743 | 528 | 188 |
| maalfrid_vetinst | 97,390 | 312 | 312 |
| maalfrid_ehelse | 95,975 | 496 | 193 |
| maalfrid_arkivverket | 94,310 | 360 | 261 |
| maalfrid_sdir | 94,192 | 311 | 302 |
| maalfrid_klagenemndssekretariatet | 87,830 | 258 | 340 |
| maalfrid_dibk | 84,106 | 336 | 250 |
| maalfrid_nhh | 81,294 | 317 | 256 |
| maalfrid_sprakradet | 80,918 | 315 | 256 |
| maalfrid_toll | 79,364 | 305 | 260 |
| maalfrid_politiet | 78,471 | 240 | 326 |
| maalfrid_vestlandfylke | 77,600 | 304 | 255 |
| maalfrid_riksrevisjonen | 77,117 | 225 | 342 |
| maalfrid_met | 76,310 | 400 | 190 |
| maalfrid_artsdatabanken | 76,117 | 200 | 380 |
| maalfrid_kartverket | 75,468 | 397 | 190 |
| maalfrid_bufdir | 75,375 | 262 | 287 |
| maalfrid_nibio | 74,594 | 386 | 193 |
| maalfrid_nkom | 63,734 | 215 | 296 |
| maalfrid_npd | 63,605 | 260 | 244 |
| maalfrid_nlr | 61,251 | 364 | 168 |
| maalfrid_aldringoghelse | 58,322 | 147 | 396 |
| maalfrid_uis | 57,580 | 190 | 303 |
| maalfrid_custompublish | 56,876 | 219 | 259 |
| maalfrid_nyemetoder | 56,634 | 245 | 231 |
| maalfrid_sykkelbynettverket | 53,461 | 230 | 232 |
| maalfrid_arbeidstilsynet | 52,903 | 127 | 416 |
| maalfrid_luftfartstilsynet | 51,929 | 221 | 234 |
| maalfrid_seniorporten | 50,569 | 159 | 318 |
| maalfrid_bioteknologiradet | 49,956 | 129 | 387 |
| maalfrid_riksantikvaren | 49,722 | 187 | 265 |
| maalfrid_sjt | 47,006 | 230 | 204 |
| maalfrid_dfo | 46,582 | 206 | 226 |
| maalfrid_hvl | 46,544 | 202 | 230 |
| lovdata_cd_lokaleforskrifter_2005 | 46,482 | 476 | 97 |
| maalfrid_forbrukerradet | 44,620 | 157 | 284 |
| maalfrid_himolde | 43,761 | 226 | 193 |
| maalfrid_kompetansenorge | 43,626 | 213 | 204 |
| maalfrid_ldo | 41,409 | 153 | 270 |
| lovdata_cd_norgeslover_2005 | 40,450 | 32 | 1,264 |
| maalfrid_forskningsetikk | 39,574 | 127 | 311 |
| maalfrid_naku | 37,039 | 107 | 346 |
| maalfrid_usn | 35,982 | 154 | 233 |
| maalfrid_godeidrettsanlegg | 35,482 | 145 | 244 |
| maalfrid_naturfag | 34,881 | 132 | 264 |
| maalfrid_matematikksenteret | 34,258 | 158 | 216 |
| maalfrid_medietilsynet | 33,904 | 145 | 233 |
| maalfrid_diskrimineringsnemnda | 33,264 | 89 | 373 |
| maalfrid_nupi | 31,508 | 121 | 260 |
| maalfrid_miljopakken | 31,029 | 140 | 221 |
| lovdata_cd_rtv_rundskriv_2005 | 30,518 | 222 | 137 |
| maalfrid_dirmin | 30,360 | 117 | 259 |
| maalfrid_diku | 29,246 | 135 | 216 |
| maalfrid_arbeidsretten | 27,492 | 92 | 298 |
| maalfrid_fellesstudentsystem | 27,029 | 197 | 137 |
| maalfrid_kriminalitetsforebygging | 26,971 | 104 | 259 |
| maalfrid_statsbygg | 26,256 | 102 | 257 |
| maalfrid_nb | 25,375 | 94 | 269 |
| maalfrid_nih | 25,036 | 112 | 223 |
| maalfrid_folketrygdfondet | 25,027 | 91 | 275 |
| maalfrid_npolar | 24,843 | 62 | 400 |
| maalfrid_valgdirektoratet | 23,205 | 205 | 113 |
| maalfrid_lottstift | 22,736 | 78 | 291 |
| maalfrid_naturfagsenteret | 22,618 | 95 | 238 |
| maalfrid_samordnaopptak | 22,400 | 56 | 400 |
| maalfrid_sykehuspartner | 21,855 | 108 | 202 |
| maalfrid_unit | 21,305 | 135 | 157 |
| lovdata_cd_rundskriv_lovavdeling_2005 | 21,295 | 10 | 2,129 |
| maalfrid_anskaffelser | 21,097 | 104 | 202 |
| maalfrid_barneombudet | 20,092 | 65 | 309 |
| maalfrid_mareano | 19,922 | 91 | 218 |
| maalfrid_datatilsynet | 19,845 | 55 | 360 |
| maalfrid_fiskeridirektoratet | 18,831 | 60 | 313 |
| maalfrid_spesialenheten | 18,550 | 47 | 394 |
| maalfrid_xn--miljlftet-o8ab | 18,447 | 78 | 236 |
| lovdata_cd_skatt_rundskriv_2005 | 18,316 | 7 | 2,616 |
| maalfrid_skrivesenteret | 17,951 | 102 | 175 |
| maalfrid_khio | 16,924 | 63 | 268 |
| maalfrid_bibliotekutvikling | 16,631 | 89 | 186 |
| maalfrid_helsenorge | 15,431 | 60 | 257 |
| maalfrid_sykehusinnkjop | 15,204 | 92 | 165 |
| maalfrid_spk | 13,824 | 44 | 314 |
| maalfrid_aho | 13,268 | 78 | 170 |
| maalfrid_matportalen | 12,756 | 51 | 250 |
| maalfrid_nfi | 12,696 | 36 | 352 |
| maalfrid_samas | 12,650 | 62 | 204 |
| maalfrid_kunstkultursenteret | 12,307 | 35 | 351 |
| maalfrid_nhn | 12,156 | 77 | 157 |
| maalfrid_pasientsikkerhetsprogrammet | 11,892 | 91 | 130 |
| maalfrid_ceres | 11,310 | 44 | 257 |
| maalfrid_nysgjerrigper | 11,177 | 63 | 177 |
| maalfrid_une | 11,036 | 23 | 479 |
| maalfrid_nynorsksenteret | 10,822 | 45 | 240 |
| maalfrid_natursekken | 10,060 | 73 | 137 |
| maalfrid_nidsenter | 9,996 | 34 | 294 |
| maalfrid_nsm | 9,926 | 39 | 254 |
| maalfrid_justervesenet | 9,847 | 29 | 339 |
| maalfrid_giek | 9,769 | 39 | 250 |
| maalfrid_digdir | 9,675 | 54 | 179 |
| maalfrid_stami | 9,518 | 22 | 432 |
| maalfrid_sshf | 9,488 | 37 | 256 |
| maalfrid_kriminalomsorgen | 9,126 | 32 | 285 |
| maalfrid_vinmonopolet | 9,094 | 22 | 413 |
| maalfrid_nodnett | 8,738 | 50 | 174 |
| maalfrid_gjenopptakelse | 8,249 | 30 | 274 |
| maalfrid_fordelingsutvalget | 8,242 | 28 | 294 |
| maalfrid_kjonnsforskning | 8,010 | 22 | 364 |
| maalfrid_nasjonalmuseet | 7,935 | 21 | 377 |
| maalfrid_forsvaret | 7,614 | 27 | 282 |
| maalfrid_ombudsmann | 7,496 | 12 | 624 |
| maalfrid_forbrukereuropa | 7,260 | 28 | 259 |
| maalfrid_romsenter | 7,219 | 27 | 267 |
| maalfrid_ovf | 6,699 | 28 | 239 |
| maalfrid_beccle | 6,686 | 33 | 202 |
| maalfrid_forbrukertilsynet | 6,440 | 19 | 338 |
| maalfrid_helfo | 5,746 | 21 | 273 |
| maalfrid_politietssikkerhetstjeneste | 5,570 | 16 | 348 |
| maalfrid_geonorge | 5,228 | 35 | 149 |
| maalfrid_realfagsloyper | 5,155 | 22 | 234 |
| maalfrid_opplaringslovutvalget | 5,062 | 11 | 460 |
| maalfrid_vea-fs | 5,026 | 28 | 179 |
| maalfrid_energimerking | 4,842 | 25 | 193 |
| maalfrid_jernbanemagasinet | 4,663 | 14 | 333 |
| maalfrid_traumebevisst | 4,456 | 50 | 89 |
| maalfrid_politihogskolen | 4,434 | 27 | 164 |
| maalfrid_universell | 4,138 | 37 | 111 |
| maalfrid_nafkam | 4,096 | 11 | 372 |
| maalfrid_koro | 3,781 | 10 | 378 |
| maalfrid_npe | 3,744 | 21 | 178 |
| maalfrid_regionaleforskningsfond | 3,512 | 23 | 152 |
| maalfrid_denkulturelleskolesekken | 3,375 | 7 | 482 |
| maalfrid_squarespace | 3,310 | 12 | 275 |
| maalfrid_riksteatret | 3,143 | 12 | 261 |
| maalfrid_riksmekleren | 2,936 | 15 | 195 |
| maalfrid_pkh | 2,927 | 9 | 325 |
| maalfrid_konfliktraadet | 2,918 | 9 | 324 |
| maalfrid_aasentunet | 2,713 | 8 | 339 |
| maalfrid_radetfordyreetikk | 2,579 | 12 | 214 |
| maalfrid_generaladvokaten | 2,428 | 7 | 346 |
| maalfrid_lanekassen | 2,237 | 7 | 319 |
| maalfrid_okokrim | 2,184 | 10 | 218 |
| maalfrid_kulturminnefondet | 2,157 | 10 | 215 |
| maalfrid_whocc | 2,143 | 13 | 164 |
| maalfrid_brreg | 2,140 | 13 | 164 |
| maalfrid_polarhistorie | 2,016 | 7 | 288 |
| maalfrid_unknown | 2,015 | 11 | 183 |
| maalfrid_ffi | 2,010 | 6 | 335 |
| maalfrid_finansportalen | 1,967 | 7 | 281 |
| maalfrid_digidel | 1,701 | 10 | 170 |
| maalfrid_sismo | 1,685 | 6 | 280 |
| maalfrid_nlb | 1,665 | 5 | 333 |
| maalfrid_lektor2 | 1,397 | 8 | 174 |
| maalfrid_sivilforsvaret | 1,365 | 8 | 170 |
| maalfrid_konkursradet | 1,309 | 4 | 327 |
| maalfrid_varsom | 1,281 | 10 | 128 |
| maalfrid_informasjonskompetanse | 1,254 | 8 | 156 |
| maalfrid_skattefunn | 1,171 | 3 | 390 |
| maalfrid_sivilrett | 1,166 | 3 | 388 |
| maalfrid_uit | 1,112 | 16 | 69 |
| maalfrid_yrkesfisker | 1,110 | 10 | 111 |
| maalfrid_nbsk | 1,098 | 8 | 137 |
| maalfrid_lokforerskolen | 1,075 | 7 | 153 |
| maalfrid_laudim | 1,069 | 8 | 133 |
| maalfrid_nyinorge | 1,064 | 2 | 532 |
| maalfrid_transport21 | 1,030 | 4 | 257 |
| maalfrid_openaccess | 953 | 3 | 317 |
| maalfrid_sinn | 924 | 5 | 184 |
| maalfrid_htu | 881 | 4 | 220 |
| maalfrid_yr | 865 | 12 | 72 |
| maalfrid_akkreditert | 856 | 4 | 214 |
| maalfrid_helseklage | 855 | 3 | 285 |
| maalfrid_ssn | 841 | 5 | 168 |
| maalfrid_fug | 816 | 2 | 408 |
| maalfrid_matogindustri | 780 | 6 | 130 |
| maalfrid_fordelingsutvalet | 772 | 2 | 386 |
| maalfrid_dekom | 764 | 18 | 42 |
| maalfrid_lokalhistorie | 753 | 3 | 251 |
| maalfrid_unesco | 749 | 4 | 187 |
| maalfrid_omsorgsforskning | 711 | 5 | 142 |
| maalfrid_pts | 651 | 2 | 325 |
| maalfrid_valg | 638 | 2 | 319 |
| maalfrid_forbrukerklageutvalget | 626 | 2 | 313 |
| maalfrid_miljoklagenemnda | 625 | 3 | 208 |
| maalfrid_regjeringsadvokaten | 616 | 2 | 308 |
| maalfrid_iearth | 552 | 3 | 184 |
| maalfrid_skeivtarkiv | 552 | 4 | 138 |
| maalfrid_xn--kvinneligomskjring-1ub | 514 | 1 | 514 |
| maalfrid_haldenfengsel | 469 | 1 | 469 |
| maalfrid_hjelpelinjen | 466 | 2 | 233 |
| maalfrid_sevuppt | 429 | 1 | 429 |
| maalfrid_norec | 376 | 1 | 376 |
| maalfrid_kk-utvalget | 348 | 1 | 348 |
| maalfrid_ah | 346 | 1 | 346 |
| maalfrid_lykillinn | 331 | 1 | 331 |
| maalfrid_vergemal | 319 | 1 | 319 |
| maalfrid_riksadvokaten | 315 | 2 | 157 |
| maalfrid_global | 301 | 1 | 301 |
| maalfrid_webhuset | 280 | 1 | 280 |
| maalfrid_xn--tilbakefring-2jb | 267 | 2 | 133 |
| maalfrid_oslofengsel | 266 | 1 | 266 |
| maalfrid_nasjonaleturistveger | 227 | 1 | 227 |
| maalfrid_kulturped | 172 | 1 | 172 |
| maalfrid_altinn | 170 | 2 | 85 |
| maalfrid_shiprep | 165 | 2 | 82 |
| maalfrid_kulturoghelse | 161 | 4 | 40 |
| maalfrid_kantinekurset | 145 | 1 | 145 |
| maalfrid_designavgang | 145 | 1 | 145 |
| maalfrid_memu | 126 | 2 | 63 |
| maalfrid_alleteller | 123 | 1 | 123 |
| maalfrid_havmiljo | 118 | 1 | 118 |
| maalfrid_fmfiavo@fylkesmannen | 81 | 2 | 40 |
| maalfrid_okopark | 61 | 1 | 61 |
| maalfrid_nynorskbok | 52 | 1 | 52 |
| maalfrid_uh-it | 47 | 2 | 23 |
| maalfrid_bastoyfengsel | 46 | 1 | 46 |
| maalfrid_overgangsbolig | 40 | 1 | 40 |
| maalfrid_spinn-inn | 37 | 2 | 18 |
| maalfrid_karriereveiledning | 31 | 1 | 31 |
| maalfrid_norskpetroleum | 15 | 2 | 7 |
| maalfrid_feide | 9 | 1 | 9 |
### Languages
| Language | Words | Documents | Words/Document |
|-----------:|------------:|------------:|-----------------:|
| no | 110,562,216 | 373,481 | 296 |
| da | 22,054,103 | 12,507 | 1,763 |
| en | 10,551,361 | 33,082 | 318 |
| nn | 6,400,877 | 21,584 | 296 |
| fr | 1,150,970 | 2,354 | 488 |
| de | 848,915 | 1,804 | 470 |
| sv | 290,653 | 2,578 | 112 |
| es | 238,453 | 910 | 262 |
| fi | 138,410 | 984 | 140 |
| et | 71,255 | 507 | 140 |
| cs | 57,634 | 465 | 123 |
| oc | 51,457 | 109 | 472 |
| pt | 49,471 | 326 | 151 |
| nl | 38,024 | 266 | 142 |
| la | 36,388 | 20 | 1,819 |
| uk | 31,820 | 107 | 297 |
| zh | 27,640 | 181 | 152 |
| eu | 25,582 | 74 | 345 |
| it | 24,134 | 199 | 121 |
| ru | 24,022 | 149 | 161 |
| pl | 23,919 | 216 | 110 |
| ca | 23,748 | 84 | 282 |
| gu | 16,739 | 1 | 16,739 |
| fa | 11,657 | 49 | 237 |
| hu | 10,583 | 173 | 61 |
| is | 10,225 | 37 | 276 |
| ja | 9,563 | 109 | 87 |
| el | 5,320 | 20 | 266 |
| id | 5,254 | 44 | 119 |
| ar | 4,268 | 20 | 213 |
| so | 3,343 | 13 | 257 |
| sl | 3,243 | 47 | 69 |
| vi | 3,077 | 22 | 139 |
| sr | 2,022 | 29 | 69 |
| hr | 1,947 | 23 | 84 |
| tr | 1,802 | 41 | 43 |
| gl | 1,709 | 17 | 100 |
| mn | 1,575 | 1 | 1,575 |
| lt | 1,442 | 15 | 96 |
| am | 1,405 | 6 | 234 |
| ko | 1,301 | 29 | 44 |
| sq | 1,265 | 8 | 158 |
| ro | 1,214 | 13 | 93 |
| kk | 1,092 | 2 | 546 |
| ur | 1,003 | 5 | 200 |
| ml | 986 | 6 | 164 |
| sh | 939 | 5 | 187 |
| eo | 755 | 14 | 53 |
| th | 550 | 12 | 45 |
| ta | 505 | 6 | 84 |
| sw | 468 | 3 | 156 |
| sk | 442 | 12 | 36 |
| war | 369 | 3 | 123 |
| tl | 340 | 2 | 170 |
| bg | 327 | 1 | 327 |
| pnb | 276 | 1 | 276 |
| bs | 230 | 2 | 115 |
| ceb | 196 | 6 | 32 |
| cy | 182 | 2 | 91 |
| ku | 175 | 1 | 175 |
| ga | 102 | 6 | 17 |
| my | 82 | 1 | 82 |
| hy | 66 | 2 | 33 |
| ast | 59 | 1 | 59 |
| ms | 53 | 13 | 4 |
| be | 40 | 1 | 40 |
| nds | 30 | 6 | 5 |
| lv | 30 | 3 | 10 |
| als | 22 | 3 | 7 |
| mk | 21 | 2 | 10 |
| as | 17 | 1 | 17 |
| br | 16 | 3 | 5 |
| af | 13 | 1 | 13 |
| tt | 12 | 2 | 6 |
| si | 10 | 1 | 10 |
| su | 8 | 1 | 8 |
| bn | 8 | 1 | 8 |
| hsb | 6 | 1 | 6 |
| jv | 5 | 1 | 5 |
| fy | 5 | 2 | 2 |
| az | 5 | 1 | 5 |
| pms | 4 | 1 | 4 |
| jbo | 4 | 1 | 4 |
| lb | 3 | 1 | 3 |
| io | 3 | 1 | 3 |
| he | 1 | 1 | 1 |
### Publish Period
| Decade | Words | Documents | Words/Document |
|---------:|-----------:|------------:|-----------------:|
| 2020 | 90,369,047 | 238,258 | 568 |
| 2010 | 7,706,272 | 52,464 | 1,483 |
| 2000 | 10,118,573 | 36,979 | 3,135 |
| 1990 | 16,379,779 | 54,636 | 2,989 |
| 1980 | 3,378,153 | 11,839 | 2,845 |
| 1970 | 4,041,657 | 17,807 | 2,261 |
| 1960 | 3,523,333 | 17,974 | 1,971 |
| 1950 | 2,128,506 | 10,387 | 2,058 |
| 1940 | 2,662,606 | 12,271 | 2,521 |
| 1930 | 964,846 | 20 | 383,978 |
| 1920 | 744,560 | 16 | 328,756 |
| 1910 | 1,701,319 | 31 | 527,445 |
| 1900 | 1,183,273 | 24 | 414,972 |
| 1890 | 2,246,433 | 40 | 461,126 |
| 1880 | 1,059,838 | 19 | 490,702 |
| 1870 | 999,024 | 15 | 521,165 |
| 1860 | 842,042 | 17 | 533,772 |
| 1850 | 1,408,491 | 25 | 434,091 |
| 1840 | 627,004 | 10 | 398,914 |
| 1830 | 695,289 | 11 | 475,094 |
| 1820 | 49,421 | 2 | 49,421 |
## Considerations for Using the Data
This corpus contains copyrighted data and may not be used outside the National Library of Norway. The dataset should not be distributed.
### Discussion of Biases
Please refer to our paper.
### Dataset Curators
Freddy.wetjen@nb.no
Per.Kummervold@nb.no
## License
Various licenses apply to different parts of the corpus. Every document in the corpus carries a **"doc_type"** tag telling which part it belongs to. If you are unable to accept any of the licenses, you should filter out the **"doc_type"**s with a conflicting license.
| Doc_type | License |
| :-------- | :------------- |
| government_nb, government_nn, parliament, publicreports, lovdata_cd_\*, maalfrid_\* | [NLOD 2.0](https://data.norge.no/nlod/en/2.0/)|
| newspapers_ocr, newspapers_pdf, books| [CC0 1.0](https://creativecommons.org/publicdomain/zero/1.0/)|
| newspapers_online_nb, newspapers_online_nn | [CC BY-NC 2.0](https://creativecommons.org/licenses/by-nc/2.0/)|
| opensubtitles, wikipedia | [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/)|
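The same mapping can also be expressed in code. The helper below is a sketch that mirrors the table literally (including the wildcard prefixes); check the tags against your data before relying on it:
```python
def doc_type_license(doc_type: str) -> str:
    """Map a doc_type tag to the license stated in the table above (sketch)."""
    if doc_type.startswith(("lovdata_cd_", "maalfrid_")) or doc_type in {
        "government_nb", "government_nn", "parliament", "publicreports",
    }:
        return "NLOD 2.0"
    if doc_type in {"newspapers_ocr", "newspapers_pdf", "books"}:
        return "CC0 1.0"
    if doc_type in {"newspapers_online_nb", "newspapers_online_nn"}:
        return "CC BY-NC 2.0"
    if doc_type in {"opensubtitles", "wikipedia"}:
        return "CC BY-SA 3.0"
    return "unknown"
```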
### Citation Information
We are preparing an article with detailed information about this corpus. Until it is published, please cite our paper discussing the first version of this corpus:
```
@inproceedings{kummervold-etal-2021-operationalizing,
    title = "Operationalizing a National Digital Library: The Case for a {N}orwegian Transformer Model",
    author = "Kummervold, Per E and
      De la Rosa, Javier and
      Wetjen, Freddy and
      Brygfjeld, Svein Arne",
    booktitle = "Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)",
    year = "2021",
    address = "Reykjavik, Iceland (Online)",
    publisher = "Link{\"o}ping University Electronic Press, Sweden",
    url = "https://aclanthology.org/2021.nodalida-main.3",
    pages = "20--29",
    abstract = "In this work, we show the process of building a large-scale training set from digital and digitized collections at a national library. The resulting Bidirectional Encoder Representations from Transformers (BERT)-based language model for Norwegian outperforms multilingual BERT (mBERT) models in several token and sequence classification tasks for both Norwegian Bokm{\aa}l and Norwegian Nynorsk. Our model also improves the mBERT performance for other languages present in the corpus such as English, Swedish, and Danish. For languages not included in the corpus, the weights degrade moderately while keeping strong multilingual properties. Therefore, we show that building high-quality models within a memory institution using somewhat noisy optical character recognition (OCR) content is feasible, and we hope to pave the way for other memory institutions to follow.",
}
```
|
NbAiLab/NPSC | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- 'no'
- nb
- nn
license:
- cc0-1.0
multilinguality:
- monolingual
size_categories:
- 2G<n<1B
source_datasets:
- original
task_categories:
- automatic-speech-recognition
- audio-classification
pretty_name: NPSC
tags:
- speech-modeling
---
# Dataset Card for NbAiLab/NPSC
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Statistics](#statistics)
- [Document Types](#document-types)
- [Languages](#languages)
- [Publish Period](#publish-period)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.nb.no/sprakbanken/
- **Repository:** https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-58/
- **Paper:** https://www.nb.no/sprakbanken/
- **Point of Contact:** [Per Erik Solberg](mailto:per.solberg@nb.no)
The Norwegian Parliamentary Speech Corpus (NPSC) is a speech corpus made by the Norwegian Language Bank at the National Library of Norway in 2019-2021. The NPSC consists of recordings of speech from Stortinget, the Norwegian parliament, and corresponding orthographic transcriptions to Norwegian Bokmål and Norwegian Nynorsk. All transcriptions are done manually by trained linguists or philologists, and the manual transcriptions are subsequently proofread to ensure consistency and accuracy. Entire days of Parliamentary meetings are transcribed in the dataset.
This repository contains a version of the NPSC in the 🤗 Dataset Format. Note that the official release of the dataset, which can be found in [the repository of the Norwegian Language Bank](https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-58/), contains more information than the version found here, including word-level metadata, metadata about the speakers, and detailed documentation.
## How to Use
```python
# Loads the 16K Bokmål corpus in streaming mode
from datasets import load_dataset
data = load_dataset("NbAiLab/NPSC", config="16K_mp3_bokmaal", streaming=True)
```
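Each record also carries the decoded audio. A minimal sketch for inspecting a couple of examples (assuming the `train` split and the audio fields provided by the loader):
```python
from itertools import islice
from datasets import load_dataset

# Stream the Bokmål 16 kHz configuration and inspect two examples.
data = load_dataset("NbAiLab/NPSC", "16K_mp3_bokmaal", streaming=True)
for example in islice(data["train"], 2):
    audio = example["audio"]
    seconds = len(audio["array"]) / audio["sampling_rate"]
    print(example["sentence_text"], f"({seconds:.1f} s)")
```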
## Dataset Summary
The NPSC dataset contains JSON lines with language training data. The data loader will add audio data to this structure. Here is an example json object:
```json
{
"sentence_id": 49853,
"sentence_order": 0,
"speaker_id": 32,
"meeting_date": "20170110",
"speaker_name": "Olemic Thommessen",
"sentence_text": "Stortingets møte er lovlig satt",
"sentence_language_code": "nb-NO",
"text": "Stortingets møte er lovlig satt",
"start_time": 320246,
"end_time": 323590,
"normsentence_text": "Stortingets møte er lovlig satt",
"transsentence_text": "Stortingets møte er lovleg sett",
"translated": 1,
"audio": {"path": "audio/20170110-095504_320246_323590.wav","array": [.......]}
}
```
## Data Fields
|**Key** | **Type** | **Description** |
|:-----------|:------------|:------------|
|**sentence_id:** | Integer | Unique identifier of the sentence |
|**sentence_order** | Integer | A number indicating the order of the sentences in the meeting |
|**speaker_id** | Integer | The ID of the speaker. This can be linked to the original dataset containing thorough demographic and dialectal information about the speaker. |
|**meeting_date** | String | The date for the meeting in the format __yyyymmdd__ |
| **speaker_name** | String | Name of the speaker. All speakers were members of the Norwegian Parliament or members of the Norwegian Government at the meeting date |
| **sentence_text** | String | The sentence text. The transcribed text string of the sentence in non-normalized form. This is the text of the manual transcriptions, without any postprocessing (apart from corrections of known errors). It may contain interrupted words, non-standard words and function words with a pronunciation deviating from the written form. Detailed metadata about the words in the sentence can be found in the word-tokenized version of the corpus in the official release of the dataset. |
| **sentence_language_code** | String | The language code of the sentence. The following alternatives exist in the file: ['nb-NO', 'nn-NO', 'en-US']|
| **text** | String | Sentence text. This is a copy of "sentence_text". It is included here to make it more convenient to interleave with other datasets.|
| **start_time** | Integer | The start time of the sentence in milliseconds. This time is relative to the start of the audio file of the entire meeting, which can be accessed in the official release |
| **end_time** | Integer | End time. See comment above. |
| **normsentence_text** | String | Normalized sentence text. In this version of the transcription, numbers and dates are written in digits on standardized formats, and common abbreviations are used. These modifications to the original transcriptions are produced automatically using normalization grammars |
| **transsentence_text** | String | Translated sentence text. Whenever the original transcription is in Bokmål (nb-NO), this field contains a machine-translated version in Nynorsk (nn-NO), and vice versa |
| **translated** | Integer | A flag indicating whether a machine-translated version has been produced or not. Sentences in en-US have not been translated |
| **audio** | Array | The dataloader will encode the associated audio files and provide them as an array containing 'path', 'array' (the sound array) and 'sampling_rate' |
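The snippet below is a minimal sketch of how these fields can be inspected in streaming mode. It assumes the `16K_mp3_bokmaal` config shown above and a `train` split; adjust both to your setup.
```python
from datasets import load_dataset

# Minimal sketch: stream a single example and inspect its fields.
# Assumes the "16K_mp3_bokmaal" config and a "train" split.
data = load_dataset("NbAiLab/NPSC", "16K_mp3_bokmaal", split="train", streaming=True)

example = next(iter(data))
audio = example["audio"]

print(example["sentence_text"])                 # the non-normalized transcription
print(audio["sampling_rate"])                   # sampling rate of the decoded clip
print(len(audio["array"]))                      # number of samples in the clip
print((example["end_time"] - example["start_time"]) / 1000)  # clip length in seconds
```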
#### Initial Data Collection
The procedure for the dataset creation is described in detail in our paper.
## Statistics
| Feature | Value |
|:---------|-----------:|
| Duration, pauses included | 140.3 hours|
| Duration, pauses not included | 125.7 hours |
| Word count | 1.2 million |
| Sentence count | 64,531 |
| Language distribution | Nynorsk: 12.8%|
| | Bokmål: 87.2%|
| Gender distribution | Female: 38.3% |
| | Male: 61.7% |
## Considerations for Using the Data
This corpus contains speech data. All recordings are of members of Parliament in a public setting, and can be distributed without any restrictions.
### Dataset Creators and Curators
The content of the dataset was created by the Norwegian Language Bank (Språkbanken) at the National Library of Norway. [Javier de la Rosa](mailto:versae@nb.no), [Freddy Wetjen](mailto:freddy.wetjen@nb.no), [Per Egil Kummervold](mailto:per.kummervold@nb.no), and [Andre Kaasen](mailto:andre.kasen@nb.no) all contributed to making this into a HuggingFace Dataset. Thanks to the HuggingFace team for assistance.
## License
The sound and the transcriptions are released under the [CC0 license](https://creativecommons.org/publicdomain/zero/1.0/). The curation of the HuggingFace Dataset is released under the [CC BY-SA 3.0 license](https://creativecommons.org/licenses/by-sa/3.0/).
### Citation Information
The following article gives detailed information about the corpus. Please refer to the article and this page if you are using this dataset:
```
@inproceedings{solberg2022norwegian,
title={The Norwegian Parliamentary Speech Corpus},
author={Solberg, Per Erik and Ortiz, Pablo},
booktitle={Proceedings of the 13th Language Resources and Evaluation Conference},
url={http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.106.pdf},
year={2022}
}
```
|
NbAiLab/NPSC_test | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- nb
- 'no'
- nn
license:
- cc0-1.0
multilinguality:
- monolingual
size_categories:
- 2G<n<1B
source_datasets:
- original
task_categories:
- automatic-speech-recognition
- audio-classification
task_ids:
- speech-modeling
pretty_name: NPSC
tags:
- speech-modeling
---
# Dataset Card for NBAiLab/NPSC
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Statistics](#statistics)
- [Document Types](#document-types)
- [Languages](#languages)
- [Publish Period](#publish-period)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.nb.no/sprakbanken/
- **Repository:** https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-58/
- **Paper:** https://www.nb.no/sprakbanken/
- **Point of Contact:** [Per Erik Solberg](mailto:per.solberg@nb.no)
The Norwegian Parliament Speech Corpus (NPSC) is a corpus for training Norwegian ASR (Automatic Speech Recognition) models. The corpus was created by Språkbanken at the National Library of Norway.
NPSC is based on sound recordings from meetings in the Norwegian Parliament. These talks are orthographically transcribed to either Norwegian Bokmål or Norwegian Nynorsk. In addition to the data actually included in this dataset, there is a significant amount of metadata that is included in the original corpus. Through the speaker id there is additional information about the speaker, like gender, age, and place of birth (i.e. dialect). Through the proceedings id the corpus can be linked to the official proceedings from the meetings.
In total, the corpus contains sound recordings from 40 entire days of meetings. This amounts to 140 hours of speech, 65,000 sentences or 1.2 million words.
This corpus is an adaptation of the original corpus made for efficient ASR training. For simplicity and portability, a few of the original dataset's features, like the token transcription, are omitted. You can find the complete dataset at [the Resource Catalogue at Språkbanken](https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-58/).
## How to Use
```python
from datasets import load_dataset
data = load_dataset("nb/NPSC", streaming=True)
```
## Data Fields
Currently there are two versions included in this repo.
### Version A
This version has a short list of the metadata and includes the audio (48k mp3) encoded as a float32 array in the dataset itself.
The current dataloader script is associated with this version.
One line in train.json looks like this:
```json
{
"sentence_id": 7309,
"sentence_order": 0,
"speaker_id": 1,
"speaker_name": "Marit Nybakk",
"sentence_text": "Stortingets møte er lovlig satt",
"sentence_language_code": "nb-NO",
"text": "Stortingets møte er lovlig satt",
"start_time": 302650,
"end_time": 306000,
"normsentence_text": "Stortingets møte er lovlig satt",
"transsentence_text": "Stortingets møte er lovleg sett",
"translated": 1,
"audio": {
"path": "audio/20170207-095506_302650_306000.wav",
"array": [
24,
25,
50,
(...)
],
"sampling_rate": 48000
}
}
```
### Version B
This version does not contain the audio encoded in the dataset. Instead, the audio files are placed in sub-directories. There are currently samples in both clips_48k_wav and clips_16k_mp3. Only the base filename is referenced in the dataset. Please note that there are both sentence-based audio clips as well as meeting-based audio clips. The dataset contains references to both; the latter reference also has start and stop times.
One line in the train/metadata.json looks like this:
```json
{
"meeting_date": "20170207",
"full_audio_file": "20170207-095506",
"proceedings_file": "20170207-095506.ref",
"duration": 4442474,
"transcriber_id": 1,
"reviewer_id": 2,
"data_split": "test",
"speaker_name": "Marit Nybakk",
"speaker_id": 1,
"sentence_id": 7309,
"sentence_language_code": "nb-NO",
"sentence_text": "Stortingets møte er lovlig satt",
"sentence_order": 0,
"audio_file": "20170207-095506_302650_306000",
"start_time": 302650,
"end_time": 306000,
"normsentence_text": "Stortingets møte er lovlig satt",
"transsentence_text": "Stortingets møte er lovleg sett",
"translated": 1
}
```
### Dataset Creation
We are providing a **train**, **dev** and **test** split. These are the same as in the original corpus.
Build date: 20012022
#### Initial Data Collection and Curation
The procedure for the dataset creation is described in detail in the paper.
## Statistics
| Feature | Value |
|:---------|-----------:|
| Duration, pauses included | 140.3 hours|
| Duration, pauses not included | 125.7 hours |
| Word count | 1.2 million |
| Sentence count | 64,531 |
| Language distribution | Nynorsk: 12.8%|
| | Bokmål: 87.2%|
| Gender distribution | Female: 38.3% |
| | Male: 61.7% |
## Considerations for Using the Data
This corpus contains speech data and is allowed to be used outside the National Library of Norway for speech recognition technology purposes.
### Discussion of Biases
Please refer to our paper.
### Dataset Curators
[Per Erik Solberg](mailto:per.solberg@nb.no)
[Freddy Wetjen](mailto:Freddy.wetjen@nb.no), [Andre Kaasen](mailto:andre.kasen@nb.no) and [Per Egil Kummervold](mailto:per.kummervold@nb.no) have contributed to porting it to the Hugging Face Dataset format.
### Licensing Information
Licensed for use outside the National Library of Norway.
## License
[CC0](https://creativecommons.org/publicdomain/zero/1.0/)
### Citation Information
We are preparing an article with detailed information about this corpus. Until it is published, please cite our paper discussing the first version of this corpus:
```
ANDRE: TO BE DONE
```
|
NbAiLab/NPSC_test2 | ---
license: cc0-1.0
---
|
NbAiLab/bokmaal_admin | # Dataset Card for NBAiLab/bokmaal_admin
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Statistics](#statistics)
- [Document Types](#document-types)
- [Languages](#languages)
- [Publish Period](#publish-period)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/NBAiLab/notram
- **Repository:** https://github.com/NBAiLab/notram
- **Paper:** https://arxiv.org/abs/2104.09617
- **Point of Contact:** [Freddy Wetjen](mailto:freddy.wetjen@nb.no)
## How to Use
```python
from datasets import load_dataset
data = load_dataset("NbAiLab/bokmaal_admin")
```
### Dataset Summary
The bokmaal_admin dataset contains json lines with language training data. Here is an example json line:
```json
{"id": "1006205", "doc_type": "cc100", "publish_year": 2021, "lang_fasttext": "no", "lang_fasttext_conf": "0.641", "text": "Eg har en PLAN! KOS deg og ha en fortryllende herlig pinse :)"}
```
## Data Fields
**id:** String with id to source of line and a unique identifier
**doc_type:** String describing the type of media the text was extracted from (i.e. book, newspaper, etc.)
**publish_year:** String with year text published
**lang_fasttext:** String. Language of text identified by FastText
**lang_fasttext_conf:** String. Confidence calculated by FastText
**text:** String. The complete utf-8 document. If longer than 1M characters it is split.
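A minimal sketch of inspecting these fields in streaming mode (assuming you have access to the repository and a `train` split; note that `lang_fasttext_conf` is stored as a string and needs casting):
```python
from collections import Counter
from itertools import islice

from datasets import load_dataset

# Minimal sketch: stream the first records and tally doc_type for
# confidently language-identified lines. Assumes a "train" split.
data = load_dataset("NbAiLab/bokmaal_admin", split="train", streaming=True)

doc_types = Counter()
for record in islice(data, 100):
    if float(record["lang_fasttext_conf"]) >= 0.5:  # cast the string confidence
        doc_types[record["doc_type"]] += 1

print(doc_types.most_common(5))
```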
### Dataset Creation
We are providing a **train** and a **validation** split. The standard size of the validation is a single 1GB file, while train is sharded in 1GB chunks. All files are gzipped.
Build date: 20211112 05:25
#### Initial Data Collection and Curation
The procedure for the dataset creation is described in detail in our paper.
## Statistics
### Document Types
| Source | Words | Documents | Words/Document |
|--------------------------------------:|--------------:|------------:|-----------------:|
| books | 6,380,459,425 | 179,928 | 35,461 |
| newspaper_ocr | 3,430,315,600 | 17,800,434 | 192 |
| newspaper_pdf | 1,322,713,993 | 2,578,379 | 513 |
| parliament | 505,936,380 | 3,772 | 134,129 |
| mc4 | 335,731,602 | 629,305 | 533 |
| maalfrid_regjeringen | 247,343,698 | 569,556 | 434 |
| facebook | 224,240,379 | 6,029,317 | 37 |
| newspapers_online_nb | 202,484,169 | 1,550,434 | 130 |
| cc100 | 178,306,026 | 415,438 | 429 |
| lovdata_transfer | 151,683,287 | 2,455,158 | 61 |
| wikipedia_download_nbo | 67,973,672 | 337,685 | 201 |
| publicreports | 60,688,239 | 2,485 | 24,421 |
| maalfrid_fylkesmannen | 59,961,218 | 225,748 | 265 |
| maalfrid_ssb | 52,430,231 | 161,640 | 324 |
| maalfrid_uio | 51,941,921 | 205,859 | 252 |
| maalfrid_nve | 36,914,989 | 134,307 | 274 |
| lovdata_cd_odelsting_2005 | 30,759,585 | 1,512 | 20,343 |
| maalfrid_ntnu | 28,396,799 | 91,801 | 309 |
| government_nb | 26,180,035 | 1,778 | 14,724 |
| maalfrid_skatteetaten | 19,959,092 | 50,433 | 395 |
| maalfrid_vegvesen | 17,211,017 | 72,632 | 236 |
| maalfrid_fhi | 15,255,348 | 60,886 | 250 |
| maalfrid_uib | 14,942,918 | 54,175 | 275 |
| maalfrid_forskningsradet | 14,842,039 | 41,564 | 357 |
| Published_article | 13,672,349 | 8,641 | 1,582 |
| maalfrid_domstol | 12,358,434 | 32,931 | 375 |
| maalfrid_nasjonalparkstyre | 11,843,995 | 46,622 | 254 |
| maalfrid_nav | 9,860,610 | 35,635 | 276 |
| maalfrid_banenor | 9,418,904 | 33,241 | 283 |
| maalfrid_landbruksdirektoratet | 9,306,807 | 29,919 | 311 |
| maalfrid_helsedirektoratet | 8,846,278 | 30,601 | 289 |
| maalfrid_udir | 7,174,012 | 25,639 | 279 |
| maalfrid_nokut | 7,129,990 | 25,355 | 281 |
| oscar | 6,645,855 | 20,520 | 323 |
| maalfrid_distriktssenteret | 6,615,310 | 24,369 | 271 |
| maalfrid_patentstyret | 6,532,768 | 15,936 | 409 |
| maalfrid_oslomet | 6,179,568 | 16,991 | 363 |
| maalfrid_nmbu | 6,111,332 | 21,911 | 278 |
| maalfrid_difi | 5,906,257 | 22,634 | 260 |
| maalfrid_ptil | 5,417,850 | 19,492 | 277 |
| maalfrid_ks | 5,291,586 | 18,458 | 286 |
| maalfrid_nord | 5,220,333 | 22,139 | 235 |
| maalfrid_miljodirektoratet | 5,057,030 | 16,276 | 310 |
| lovdata_cd_somb_rundskriv_2005 | 4,877,892 | 2,913 | 1,674 |
| maalfrid_kulturradet | 4,765,528 | 13,438 | 354 |
| maalfrid_hi | 4,371,332 | 14,953 | 292 |
| maalfrid_khrono | 4,334,418 | 13,657 | 317 |
| maalfrid_havarikommisjonen | 4,324,589 | 13,785 | 313 |
| maalfrid_helsetilsynet | 4,139,780 | 11,949 | 346 |
| maalfrid_veiviseren | 4,026,354 | 12,641 | 318 |
| maalfrid_kystverket | 3,986,569 | 15,948 | 249 |
| maalfrid_fiskeridir | 3,917,471 | 13,931 | 281 |
| maalfrid_imdi | 3,876,903 | 11,910 | 325 |
| maalfrid_klagenemndssekretariatet | 3,627,643 | 9,907 | 366 |
| maalfrid_mattilsynet | 3,570,296 | 11,399 | 313 |
| maalfrid_jernbanedirektoratet | 3,249,132 | 10,901 | 298 |
| maalfrid_husbanken | 3,205,560 | 9,956 | 321 |
| maalfrid_inn | 3,180,435 | 15,942 | 199 |
| maalfrid_ehelse | 3,169,426 | 13,781 | 229 |
| maalfrid_moreforsk | 3,149,028 | 11,288 | 278 |
| maalfrid_dibk | 3,075,479 | 10,192 | 301 |
| maalfrid_dsb | 2,931,468 | 9,116 | 321 |
| maalfrid_uia | 2,875,875 | 10,161 | 283 |
| maalfrid_hivolda | 2,825,217 | 8,503 | 332 |
| maalfrid_konkurransetilsynet | 2,761,569 | 6,937 | 398 |
| maalfrid_riksrevisjonen | 2,758,645 | 7,474 | 369 |
| lovdata_cd_sentrale_forskrifter_2005 | 2,730,182 | 4,952 | 551 |
| maalfrid_hiof | 2,700,988 | 11,976 | 225 |
| maalfrid_bufdir | 2,697,466 | 8,398 | 321 |
| maalfrid_forsvarsbygg | 2,683,162 | 9,920 | 270 |
| maalfrid_udi | 2,664,169 | 7,842 | 339 |
| maalfrid_norad | 2,574,013 | 6,832 | 376 |
| maalfrid_politiet | 2,540,239 | 7,562 | 335 |
| maalfrid_arkivverket | 2,540,064 | 8,338 | 304 |
| maalfrid_vkm | 2,539,216 | 8,036 | 315 |
| maalfrid_sdir | 2,370,087 | 7,819 | 303 |
| maalfrid_norges-bank | 2,240,269 | 6,157 | 363 |
| maalfrid_ngu | 2,237,688 | 9,121 | 245 |
| maalfrid_legemiddelverket | 2,188,175 | 8,378 | 261 |
| maalfrid_hjelpemiddeldatabasen | 2,159,289 | 12,899 | 167 |
| maalfrid_vetinst | 2,060,618 | 5,974 | 344 |
| maalfrid_seniorporten | 1,964,793 | 5,735 | 342 |
| maalfrid_aldringoghelse | 1,829,458 | 4,715 | 388 |
| maalfrid_sykkelbynettverket | 1,736,925 | 6,595 | 263 |
| maalfrid_bioteknologiradet | 1,718,372 | 3,983 | 431 |
| maalfrid_riksantikvaren | 1,648,807 | 5,549 | 297 |
| maalfrid_arbeidstilsynet | 1,615,239 | 4,189 | 385 |
| maalfrid_custompublish | 1,612,284 | 5,460 | 295 |
| maalfrid_nlr | 1,538,129 | 6,949 | 221 |
| maalfrid_dsa | 1,537,797 | 5,546 | 277 |
| maalfrid_sjt | 1,524,515 | 6,280 | 242 |
| maalfrid_dfo | 1,444,746 | 5,686 | 254 |
| maalfrid_sprakradet | 1,395,100 | 4,908 | 284 |
| maalfrid_kartverket | 1,377,475 | 6,001 | 229 |
| maalfrid_uis | 1,367,221 | 4,047 | 337 |
| maalfrid_ldo | 1,340,466 | 4,945 | 271 |
| maalfrid_nkom | 1,326,479 | 4,093 | 324 |
| maalfrid_kompetansenorge | 1,316,195 | 5,409 | 243 |
| maalfrid_diskrimineringsnemnda | 1,309,432 | 3,369 | 388 |
| maalfrid_arbeidsretten | 1,284,610 | 4,106 | 312 |
| maalfrid_naku | 1,275,346 | 3,528 | 361 |
| maalfrid_forbrukerradet | 1,247,911 | 4,339 | 287 |
| maalfrid_toll | 1,180,475 | 4,396 | 268 |
| maalfrid_himolde | 1,169,805 | 5,882 | 198 |
| maalfrid_artsdatabanken | 1,123,755 | 3,043 | 369 |
| maalfrid_medietilsynet | 1,120,669 | 4,219 | 265 |
| maalfrid_dirmin | 1,106,085 | 3,560 | 310 |
| maalfrid_usn | 1,079,920 | 3,964 | 272 |
| maalfrid_naturfag | 1,074,980 | 3,442 | 312 |
| maalfrid_forskningsetikk | 1,050,995 | 2,835 | 370 |
| maalfrid_nibio | 1,018,312 | 4,418 | 230 |
| maalfrid_npd | 983,147 | 3,023 | 325 |
| maalfrid_fellesstudentsystem | 979,009 | 6,086 | 160 |
| maalfrid_nhh | 959,716 | 3,547 | 270 |
| maalfrid_miljopakken | 921,711 | 3,876 | 237 |
| maalfrid_nyemetoder | 912,371 | 3,571 | 255 |
| maalfrid_nbim | 905,006 | 2,871 | 315 |
| lovdata_cd_lokaleforskrifter_2005 | 865,182 | 5,737 | 150 |
| maalfrid_unit | 863,597 | 4,278 | 201 |
| government | 863,481 | 15 | 57,565 |
| lovdata_cd_rtv_rundskriv_2005 | 851,334 | 5,557 | 153 |
| maalfrid_sykehuspartner | 844,623 | 3,551 | 237 |
| maalfrid_statsbygg | 837,396 | 2,847 | 294 |
| lovdata_cd_skatt_rundskriv_2005 | 834,700 | 280 | 2,981 |
| maalfrid_diku | 820,160 | 3,060 | 268 |
| maalfrid_folketrygdfondet | 817,411 | 2,403 | 340 |
| maalfrid_anskaffelser | 806,237 | 3,281 | 245 |
| maalfrid_godeidrettsanlegg | 798,087 | 2,945 | 270 |
| maalfrid_hvl | 763,706 | 3,342 | 228 |
| maalfrid_kriminalitetsforebygging | 750,695 | 2,811 | 267 |
| maalfrid_fiskeridirektoratet | 703,340 | 2,139 | 328 |
| maalfrid_met | 689,904 | 3,894 | 177 |
| lovdata_cd_norgeslover_2005 | 654,973 | 545 | 1,201 |
| maalfrid_aho | 638,050 | 2,638 | 241 |
| maalfrid_barneombudet | 625,833 | 1,579 | 396 |
| maalfrid_luftfartstilsynet | 608,528 | 2,218 | 274 |
| maalfrid_datatilsynet | 603,203 | 1,791 | 336 |
| maalfrid_xn--miljlftet-o8ab | 586,635 | 2,217 | 264 |
| maalfrid_matematikksenteret | 585,609 | 2,632 | 222 |
| maalfrid_sykehusinnkjop | 549,824 | 2,653 | 207 |
| maalfrid_spesialenheten | 533,643 | 1,241 | 430 |
| maalfrid_helsenorge | 520,386 | 1,670 | 311 |
| maalfrid_naturfagsenteret | 512,334 | 1,777 | 288 |
| maalfrid_lottstift | 492,579 | 1,849 | 266 |
| maalfrid_sshf | 489,791 | 1,462 | 335 |
| maalfrid_nih | 480,388 | 2,313 | 207 |
| maalfrid_une | 442,146 | 935 | 472 |
| maalfrid_ceres | 417,870 | 1,623 | 257 |
| maalfrid_khio | 417,364 | 1,654 | 252 |
| maalfrid_skrivesenteret | 408,989 | 1,876 | 218 |
| maalfrid_pasientsikkerhetsprogrammet | 406,202 | 2,468 | 164 |
| maalfrid_nodnett | 400,250 | 1,581 | 253 |
| maalfrid_nhn | 380,617 | 1,989 | 191 |
| maalfrid_vestlandfylke | 347,841 | 1,690 | 205 |
| maalfrid_nsm | 347,481 | 1,175 | 295 |
| maalfrid_spk | 338,368 | 1,087 | 311 |
| maalfrid_samordnaopptak | 337,615 | 1,208 | 279 |
| lovdata_cd_rundskriv_lovavdeling_2005 | 334,659 | 320 | 1,045 |
| maalfrid_kriminalomsorgen | 326,604 | 1,055 | 309 |
| maalfrid_fordelingsutvalget | 320,403 | 1,180 | 271 |
| maalfrid_stami | 312,943 | 799 | 391 |
| maalfrid_mareano | 305,826 | 1,482 | 206 |
| maalfrid_nysgjerrigper | 299,481 | 1,570 | 190 |
| maalfrid_natursekken | 297,159 | 2,187 | 135 |
| maalfrid_nidsenter | 296,232 | 855 | 346 |
| maalfrid_justervesenet | 295,738 | 896 | 330 |
| maalfrid_matportalen | 270,759 | 920 | 294 |
| maalfrid_kunstkultursenteret | 266,393 | 805 | 330 |
| maalfrid_digdir | 249,544 | 1,245 | 200 |
| maalfrid_kjonnsforskning | 248,296 | 756 | 328 |
| maalfrid_forsvaret | 243,755 | 799 | 305 |
| maalfrid_gjenopptakelse | 243,459 | 904 | 269 |
| maalfrid_forbrukertilsynet | 241,234 | 766 | 314 |
| maalfrid_romsenter | 232,686 | 698 | 333 |
| maalfrid_geonorge | 220,003 | 1,062 | 207 |
| maalfrid_nupi | 206,031 | 726 | 283 |
| maalfrid_universell | 199,461 | 1,294 | 154 |
| maalfrid_ovf | 187,196 | 540 | 346 |
| maalfrid_vea-fs | 185,860 | 957 | 194 |
| maalfrid_nfi | 183,915 | 572 | 321 |
| maalfrid_ombudsmann | 181,429 | 313 | 579 |
| maalfrid_valgdirektoratet | 176,937 | 659 | 268 |
| maalfrid_bibliotekutvikling | 173,659 | 828 | 209 |
| maalfrid_nasjonalmuseet | 173,044 | 426 | 406 |
| maalfrid_politihogskolen | 168,943 | 724 | 233 |
| maalfrid_nb | 154,850 | 570 | 271 |
| maalfrid_regionaleforskningsfond | 152,294 | 748 | 203 |
| maalfrid_opplaringslovutvalget | 148,285 | 399 | 371 |
| maalfrid_beccle | 147,160 | 542 | 271 |
| maalfrid_jernbanemagasinet | 141,276 | 280 | 504 |
| maalfrid_energimerking | 138,944 | 530 | 262 |
| maalfrid_samas | 138,706 | 527 | 263 |
| maalfrid_pkh | 135,365 | 453 | 298 |
| maalfrid_traumebevisst | 126,587 | 1,027 | 123 |
| maalfrid_npe | 124,648 | 531 | 234 |
| maalfrid_realfagsloyper | 123,279 | 498 | 247 |
| maalfrid_vinmonopolet | 116,011 | 288 | 402 |
| maalfrid_nafkam | 115,290 | 335 | 344 |
| maalfrid_helfo | 109,740 | 517 | 212 |
| maalfrid_giek | 104,748 | 315 | 332 |
| maalfrid_polarhistorie | 102,152 | 265 | 385 |
| maalfrid_okokrim | 91,030 | 274 | 332 |
| maalfrid_koro | 86,691 | 270 | 321 |
| maalfrid_politietssikkerhetstjeneste | 83,250 | 246 | 338 |
| maalfrid_lokforerskolen | 82,092 | 410 | 200 |
| maalfrid_konfliktraadet | 81,638 | 236 | 345 |
| maalfrid_sismo | 81,184 | 186 | 436 |
| maalfrid_radetfordyreetikk | 78,270 | 297 | 263 |
| maalfrid_squarespace | 77,333 | 255 | 303 |
| maalfrid_riksmekleren | 76,636 | 352 | 217 |
| maalfrid_brreg | 71,766 | 331 | 216 |
| maalfrid_riksteatret | 69,415 | 308 | 225 |
| maalfrid_generaladvokaten | 63,969 | 195 | 328 |
| maalfrid_sivilforsvaret | 63,303 | 280 | 226 |
| maalfrid_lanekassen | 62,069 | 174 | 356 |
| maalfrid_ffi | 60,454 | 144 | 419 |
| maalfrid_uit | 53,943 | 283 | 190 |
| maalfrid_akkreditert | 53,878 | 235 | 229 |
| maalfrid_lektor2 | 48,998 | 278 | 176 |
| maalfrid_nynorsksenteret | 47,745 | 207 | 230 |
| maalfrid_omsorgsforskning | 46,908 | 196 | 239 |
| maalfrid_riksadvokaten | 46,891 | 113 | 414 |
| maalfrid_nlb | 43,425 | 142 | 305 |
| maalfrid_unknown | 43,077 | 174 | 247 |
| maalfrid_dekom | 42,214 | 610 | 69 |
| maalfrid_kulturminnefondet | 40,957 | 209 | 195 |
| maalfrid_varsom | 39,362 | 165 | 238 |
| maalfrid_openaccess | 36,978 | 107 | 345 |
| maalfrid_lokalhistorie | 35,629 | 141 | 252 |
| maalfrid_sivilrett | 34,831 | 98 | 355 |
| maalfrid_denkulturelleskolesekken | 34,167 | 156 | 219 |
| maalfrid_unesco | 32,206 | 97 | 332 |
| maalfrid_finansportalen | 30,756 | 128 | 240 |
| maalfrid_htu | 29,233 | 108 | 270 |
| maalfrid_dep | 28,746 | 88 | 326 |
| maalfrid_yrkesfisker | 28,629 | 194 | 147 |
| maalfrid_ssn | 25,958 | 131 | 198 |
| maalfrid_informasjonskompetanse | 24,635 | 159 | 154 |
| maalfrid_helseklage | 24,477 | 82 | 298 |
| maalfrid_forbrukereuropa | 22,901 | 102 | 224 |
| maalfrid_kulturped | 21,673 | 61 | 355 |
| maalfrid_kulturoghelse | 21,192 | 115 | 184 |
| maalfrid_nbsk | 20,379 | 124 | 164 |
| maalfrid_nyinorge | 20,353 | 43 | 473 |
| maalfrid_matogindustri | 19,957 | 113 | 176 |
| maalfrid_fug | 19,910 | 66 | 301 |
| maalfrid_sinn | 19,682 | 87 | 226 |
| maalfrid_transport21 | 19,666 | 62 | 317 |
| maalfrid_vergemal | 18,784 | 54 | 347 |
| maalfrid_konkursradet | 17,890 | 50 | 357 |
| maalfrid_xn--kvinneligomskjring-1ub | 17,578 | 71 | 247 |
| maalfrid_feide | 16,493 | 115 | 143 |
| maalfrid_digidel | 15,548 | 91 | 170 |
| maalfrid_skattefunn | 15,185 | 50 | 303 |
| maalfrid_xn--tilbakefring-2jb | 14,974 | 39 | 383 |
| maalfrid_memu | 14,965 | 65 | 230 |
| maalfrid_russamtalen | 14,672 | 53 | 276 |
| maalfrid_pts | 14,672 | 46 | 318 |
| maalfrid_regjeringsadvokaten | 14,565 | 36 | 404 |
| maalfrid_nasjonaleturistveger | 13,564 | 55 | 246 |
| maalfrid_samfunnskunnskap | 12,499 | 46 | 271 |
| maalfrid_skeivtarkiv | 11,599 | 44 | 263 |
| maalfrid_forbrukerklageutvalget | 11,415 | 39 | 292 |
| maalfrid_ah | 11,363 | 33 | 344 |
| maalfrid_fordelingsutvalet | 11,329 | 21 | 539 |
| maalfrid_xn--forskerfr-t8a | 11,062 | 81 | 136 |
| maalfrid_nettvett | 10,135 | 37 | 273 |
| maalfrid_laudim | 8,732 | 63 | 138 |
| maalfrid_uh-it | 7,131 | 126 | 56 |
| maalfrid_valg | 7,089 | 36 | 196 |
| maalfrid_mhfa | 6,287 | 52 | 120 |
| maalfrid_spinn-inn | 6,286 | 24 | 261 |
| maalfrid_npolar | 6,200 | 22 | 281 |
| maalfrid_bastoyfengsel | 6,194 | 40 | 154 |
| maalfrid_miljoklagenemnda | 5,432 | 23 | 236 |
| maalfrid_prosjektveiviseren | 5,154 | 15 | 343 |
| maalfrid_voldsoffererstatning | 5,129 | 21 | 244 |
| maalfrid_aldersvennlig | 4,540 | 23 | 197 |
| maalfrid_hjelpelinjen | 4,514 | 18 | 250 |
| maalfrid_sevuppt | 4,491 | 16 | 280 |
| maalfrid_barentswatch | 4,099 | 26 | 157 |
| maalfrid_global | 4,079 | 16 | 254 |
| maalfrid_kk-utvalget | 3,813 | 14 | 272 |
| maalfrid_forsvaretsmuseer | 3,768 | 33 | 114 |
| maalfrid_utdanningiverden | 2,876 | 7 | 410 |
| maalfrid_fmfiavo@fylkesmannen | 2,830 | 33 | 85 |
| maalfrid_iearth | 2,747 | 15 | 183 |
| maalfrid_pst | 2,667 | 12 | 222 |
| maalfrid_altinn | 2,600 | 10 | 260 |
| maalfrid_overgangsbolig | 2,580 | 16 | 161 |
| maalfrid_designavgang | 2,541 | 20 | 127 |
| maalfrid_kantinekurset | 2,319 | 17 | 136 |
| maalfrid_velgekte | 2,269 | 10 | 226 |
| maalfrid_okopark | 2,261 | 7 | 323 |
| maalfrid_musikkbasertmiljobehandling | 2,118 | 13 | 162 |
| maalfrid_arkitektur | 1,922 | 9 | 213 |
| maalfrid_agropub | 1,875 | 6 | 312 |
| maalfrid_alleteller | 1,511 | 7 | 215 |
| maalfrid_norskpetroleum | 1,355 | 20 | 67 |
| maalfrid_lykillinn | 1,349 | 4 | 337 |
| maalfrid_oslofengsel | 1,159 | 5 | 231 |
| maalfrid_hjorteviltregisteret | 910 | 2 | 455 |
| maalfrid_umb | 875 | 5 | 175 |
| maalfrid_webhuset | 849 | 3 | 283 |
| maalfrid_anleggsregisteret | 702 | 3 | 234 |
| maalfrid_utdanning | 687 | 5 | 137 |
| maalfrid_mangfoldsprisen | 538 | 2 | 269 |
| maalfrid_nynorskbok | 464 | 4 | 116 |
| maalfrid_mammapresenterer | 447 | 2 | 223 |
| maalfrid_ringerikefengsel | 435 | 3 | 145 |
| maalfrid_romerikefengsel | 252 | 2 | 126 |
| maalfrid_indreostfoldfengsel | 215 | 3 | 71 |
| wikipedia_huggingface | 209 | 4 | 52 |
| maalfrid_yr | 209 | 1 | 209 |
| maalfrid_xn--kroppsvingsforskning-gcc | 162 | 1 | 162 |
| maalfrid_retttilaalese | 160 | 2 | 80 |
| maalfrid_grunderskolen | 117 | 1 | 117 |
| maalfrid_nodsms | 98 | 1 | 98 |
| maalfrid_karriereveiledning | 32 | 3 | 10 |
| maalfrid_sikkerhverdag | 19 | 1 | 19 |
### Languages
| Language | Words | Documents | Words/Document |
|-----------:|---------------:|------------:|-----------------:|
| no | 13,827,935,821 | 34,804,446 | 397 |
### Publish Period
| Decade | Words | Documents | Words/Document |
|---------:|--------------:|------------:|-----------------:|
| 2020 | 2,627,953,523 | 8,645,421 | 710 |
| 2010 | 4,313,046,574 | 17,265,618 | 2,802 |
| 2000 | 2,749,348,382 | 4,303,662 | 8,388 |
| 1990 | 2,847,964,191 | 3,290,177 | 8,828 |
| 1980 | 1,289,623,151 | 1,299,568 | 10,006 |
## Considerations for Using the Data
This corpus contains data under copyright and is not allowed to be used outside the National Library of Norway. The dataset should not be distributed.
### Discussion of Biases
Please refer to our paper.
### Dataset Curators
Freddy.wetjen@nb.no
Per.Kummervold@nb.no
### Licensing Information
Not licensed for use outside the National Library of Norway.
### Citation Information
We are preparing an article with detailed information about this corpus. Until it is published, please cite our paper discussing the first version of this corpus:
```
@inproceedings{kummervold-etal-2021-operationalizing,
title = {Operationalizing a National Digital Library: The Case for a {N}orwegian Transformer Model},
author = {Kummervold, Per E and
De la Rosa, Javier and
Wetjen, Freddy and
Brygfjeld, Svein Arne",
booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)},
year = "2021",
address = "Reykjavik, Iceland (Online)",
publisher = {Link{"o}ping University Electronic Press, Sweden},
url = "https://aclanthology.org/2021.nodalida-main.3",
pages = "20--29",
abstract = "In this work, we show the process of building a large-scale training set from digital and digitized collections at a national library. The resulting Bidirectional Encoder Representations from Transformers (BERT)-based language model for Norwegian outperforms multilingual BERT (mBERT) models in several token and sequence classification tasks for both Norwegian Bokm{aa}l and Norwegian Nynorsk. Our model also improves the mBERT performance for other languages present in the corpus such as English, Swedish, and Danish. For languages not included in the corpus, the weights degrade moderately while keeping strong multilingual properties. Therefore, we show that building high-quality models within a memory institution using somewhat noisy optical character recognition (OCR) content is feasible, and we hope to pave the way for other memory institutions to follow.",
}
```
|
NbAiLab/norec_agg | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- 'no'
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for NbAiLab/norec_agg
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** N/A
- **Repository:** [GitHub](https://github.com/ltgoslo/NorBERT/)
- **Paper:** [A Fine-grained Sentiment Dataset for Norwegian](https://www.aclweb.org/anthology/2020.lrec-1.618/)
- **Leaderboard:** N/A
- **Point of Contact:** -
### Dataset Summary
Aggregated NoRec_fine: A Fine-grained Sentiment Dataset for Norwegian.
This dataset was created by the Nordic Language Processing Laboratory by aggregating the fine-grained annotations in NoReC_fine and removing sentences with conflicting or no sentiment.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text in the dataset is in Norwegian.
## Dataset Structure
### Data Instances
Example of one instance in the dataset.
```python
{'label': 0, 'text': 'Verre er det med slagsmålene .'}
```
### Data Fields
- `id`: index of the example
- `text`: Text of a sentence
- `label`: The sentiment label:
- 0 = negative
- 1 = positive
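A minimal sketch of loading the data and decoding the numeric label (label meanings as listed above):
```python
from datasets import load_dataset

# Minimal sketch: label meanings follow the list above (0 = negative, 1 = positive).
label_names = {0: "negative", 1: "positive"}

dataset = load_dataset("NbAiLab/norec_agg")
example = dataset["train"][0]
print(example["text"], "->", label_names[example["label"]])
```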
### Data Splits
The dataset is split into a `train`, `validation`, and `test` split with the following sizes:
| | Train | Valid | Test |
| ----- | ------ | ----- | ----- |
| Number of examples | 2675 | 516 | 417 |
## Dataset Creation
This dataset is based largely on the original data described in the paper _A Fine-Grained Sentiment Dataset for Norwegian_ by L. Øvrelid, P. Mæhlum, J. Barnes, and E. Velldal, accepted at LREC 2020, [paper available](https://www.aclweb.org/anthology/2020.lrec-1.618). However, we have since added annotations for another 3476 sentences, increasing the overall size and scope of the dataset.
## Additional Information
### Licensing Information
This work is licensed under a Creative Commons Attribution 4.0 International License
### Citation Information
```latex
@inproceedings{ovrelid-etal-2020-fine,
      title={A Fine-grained Sentiment Dataset for {N}orwegian},
      author={{\O}vrelid, Lilja and M{\ae}hlum, Petter and Barnes, Jeremy and Velldal, Erik},
      booktitle={Proceedings of the Twelfth Language Resources and Evaluation Conference (LREC 2020)},
      year={2020},
      url={https://www.aclweb.org/anthology/2020.lrec-1.618}
}
```
|
NbAiLab/norne | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- 'no'
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
- part-of-speech
tags:
- structure-prediction
---
# Dataset Card for NorNE: Norwegian Named Entities
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [NorNE](https://github.com/ltgoslo/norne/)
- **Repository:** [Github](https://github.com/ltgoslo/norne/)
- **Paper:** https://arxiv.org/abs/1911.12146
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
NorNE is a manually annotated corpus of named entities which extends the annotation of the existing Norwegian Dependency Treebank. Comprising both of the official standards of written Norwegian (Bokmål and Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons, organizations, locations, geo-political entities, products, and events, in addition to a class corresponding to nominals derived from names.
### Supported Tasks and Leaderboards
NorNE adds named entity annotations on top of the Norwegian Dependency Treebank.
### Languages
Both Norwegian Bokmål (`bokmaal`) and Nynorsk (`nynorsk`) are supported as different configs in this dataset. An extra config for the combined languages is also included (`combined`). See the Annotation section for details on accessing reduced tag sets for the NER feature.
## Dataset Structure
Each entry contains text sentences, their language, identifiers, tokens, lemmas, and corresponding NER and POS tag lists.
### Data Instances
An example of the `train` split of the `bokmaal` config.
```python
{'idx': '000001',
'lang': 'bokmaal',
'lemmas': ['lam', 'og', 'piggvar', 'på', 'bryllupsmeny'],
'ner_tags': [0, 0, 0, 0, 0],
'pos_tags': [0, 9, 0, 5, 0],
'text': 'Lam og piggvar på bryllupsmenyen',
'tokens': ['Lam', 'og', 'piggvar', 'på', 'bryllupsmenyen']}
```
### Data Fields
Each entry is annotated with the next fields:
- `idx` (`int`), text (sentence) identifier from the NorNE dataset
- `lang` (`str`), language variety, either `bokmaal`, `nynorsk` or `combined`
- `text` (`str`), plain text
- `tokens` (`List[str]`), list of tokens extracted from `text`
- `lemmas` (`List[str]`), list of lemmas extracted from `tokens`
- `ner_tags` (`List[int]`), list of numeric NER tags for each token in `tokens`
- `pos_tags` (`List[int]`), list of numeric PoS tags for each token in `tokens`
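The numeric `ner_tags` and `pos_tags` are class indices that can be decoded through the dataset's feature metadata. Below is a minimal sketch, assuming the `bokmaal` config, pandas installed, and that the tag columns are declared as sequences of `ClassLabel`, of how a tabular view like the one that follows can be produced.
```python
import pandas as pd
from datasets import load_dataset

# Minimal sketch, assuming the "bokmaal" config and a "train" split.
dataset = load_dataset("NbAiLab/norne", "bokmaal", split="train")

# Decode numeric tag indices into their string names via the feature metadata
# (assumes ner_tags/pos_tags are declared as sequences of ClassLabel).
ner_names = dataset.features["ner_tags"].feature.names
pos_names = dataset.features["pos_tags"].feature.names
first = dataset[0]
print([ner_names[i] for i in first["ner_tags"]])
print([pos_names[i] for i in first["pos_tags"]])

# A small tabular view of the first rows, similar to the table below.
df = pd.DataFrame(dataset[:3])
print(df[["idx", "lang", "text", "tokens", "lemmas", "ner_tags", "pos_tags"]])
```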
An example DataFrame obtained from the dataset:
<table class="dataframe" border="1">
<thead>
<tr style="text-align: right;">
<th></th>
<th>idx</th>
<th>lang</th>
<th>text</th>
<th>tokens</th>
<th>lemmas</th>
<th>ner_tags</th>
<th>pos_tags</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>000001</td>
<td>bokmaal</td>
<td>Lam og piggvar på bryllupsmenyen</td>
<td>[Lam, og, piggvar, på, bryllupsmenyen]</td>
<td>[lam, og, piggvar, på, bryllupsmeny]</td>
<td>[0, 0, 0, 0, 0]</td>
<td>[0, 9, 0, 5, 0]</td>
</tr>
<tr>
<th>1</th>
<td>000002</td>
<td>bokmaal</td>
<td>Kamskjell, piggvar og lammefilet sto på menyen...</td>
<td>[Kamskjell, ,, piggvar, og, lammefilet, sto, p...</td>
<td>[kamskjell, $,, piggvar, og, lammefilet, stå, ...</td>
<td>[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]</td>
<td>[0, 1, 0, 9, 0, 15, 2, 0, 2, 8, 6, 0, 1]</td>
</tr>
<tr>
<th>2</th>
<td>000003</td>
<td>bokmaal</td>
<td>Og til dessert: Parfait à la Mette-Marit.</td>
<td>[Og, til, dessert, :, Parfait, à, la, Mette-Ma...</td>
<td>[og, til, dessert, $:, Parfait, à, la, Mette-M...</td>
<td>[0, 0, 0, 0, 7, 8, 8, 8, 0]</td>
<td>[9, 2, 0, 1, 10, 12, 12, 10, 1]</td>
</tr>
</tbody>
</table>
### Data Splits
There are three splits: `train`, `validation` and `test`.
| Config | Split | Total |
| :---------|-------------:|-------:|
| `bokmaal` | `train` | 15696 |
| `bokmaal` | `validation` | 2410 |
| `bokmaal` | `test` | 1939 |
| `nynorsk` | `train` | 14174 |
| `nynorsk` | `validation` | 1890 |
| `nynorsk` | `test` | 1511 |
| `combined`| `train` | 29870 |
| `combined`| `validation` | 4300 |
| `combined`| `test` | 3450 |
## Dataset Creation
### Curation Rationale
1. A _name_ in this context is close to [Saul Kripke's definition of a name](https://en.wikipedia.org/wiki/Saul_Kripke#Naming_and_Necessity),
in that a name has a unique reference and its meaning is constant (there are exceptions in the annotations, e.g. "Regjeringen" (en. "Government")).
2. It is the usage of a name that determines the entity type, not the default/literal sense of the name,
3. If there is an ambiguity in the type/sense of a name, then the default/literal sense of the name is chosen
(following [Markert and Nissim, 2002](http://www.lrec-conf.org/proceedings/lrec2002/pdf/11.pdf)).
For more details, see the "Annotation Guidelines.pdf" distributed with the corpus.
### Source Data
Data was collected using blogs and newspapers in Norwegian, as well as parliament speeches and governmental reports.
#### Initial Data Collection and Normalization
The texts in the Norwegian Dependency Treebank (NDT) are manually annotated with morphological features, syntactic functions
and hierarchical structure. The formalism used for the syntactic annotation is dependency grammar.
The treebank consists of two parts, one part in Norwegian Bokmål (`nob`) and one part in Norwegian Nynorsk (`nno`).
Both parts contain around 300,000 tokens and are a mix of different non-fictional genres.
See the [NDT webpage](https://www.nb.no/sprakbanken/show?serial=sbr-10) for more details.
### Annotations
The following types of entities are annotated:
- **Person (`PER`):** Real or fictional characters and animals
- **Organization (`ORG`):** Any collection of people, such as firms, institutions, organizations, music groups,
sports teams, unions, political parties etc.
- **Location (`LOC`):** Geographical places, buildings and facilities
- **Geo-political entity (`GPE`):** Geographical regions defined by political and/or social groups.
A GPE entity subsumes and does not distinguish between a nation, its region, its government, or its people
- **Product (`PROD`):** Artificially produced entities are regarded products. This may include more abstract entities, such as speeches,
radio shows, programming languages, contracts, laws and ideas.
- **Event (`EVT`):** Festivals, cultural events, sports events, weather phenomena, wars, etc. Events are bounded in time and space.
- **Derived (`DRV`):** Words (and phrases?) that are derived from a name, but not a name in themselves. They typically contain a full name and are capitalized, but are not proper nouns. Examples (fictive) are "Brann-treneren" ("the Brann coach") or "Oslo-mannen" ("the man from Oslo").
- **Miscellaneous (`MISC`):** Names that do not belong in the other categories. Examples are animal species and names of medical conditions. Entities that are manufactured or produced are of type Products, whereas things occurring naturally or spontaneously are of type Miscellaneous.
Furthermore, all `GPE` entities are additionally sub-categorized as being either `ORG` or `LOC`, with the two annotation levels separated by an underscore:
- `GPE_LOC`: Geo-political entity, with a locative sense (e.g. "John lives in _Spain_")
- `GPE_ORG`: Geo-political entity, with an organisation sense (e.g. "_Spain_ declined to meet with Belgium")
The two special types `GPE_LOC` and `GPE_ORG` can easily be altered depending on the task, choosing either the more general `GPE` tag or the more specific `LOC`/`ORG` tags, conflating them with the other annotations of the same type. This means that the following sets of entity types can be derived:
- 7 types, deleting `_GPE`: **`ORG`**, **`LOC`**, `PER`, `PROD`, `EVT`, `DRV`, `MISC`
- 8 types, deleting `LOC_` and `ORG_`: **`ORG`**, **`LOC`**, **`GPE`**, `PER`, `PROD`, `EVT`, `DRV`, `MISC`
- 9 types, keeping all types: **`ORG`**, **`LOC`**, **`GPE_LOC`**, **`GPE_ORG`**, `PER`, `PROD`, `EVT`, `DRV`, `MISC`
The class distribution is as follows, broken down across the data splits of the UD version of NDT, and sorted by total counts (i.e. the number of examples, not tokens within the spans of the annotatons):
| Type | Train | Dev | Test | Total |
| :--------|-------:|-------:|-------:|-------:|
| `PER` | 4033 | 607 | 560 | 5200 |
| `ORG` | 2828 | 400 | 283 | 3511 |
| `GPE_LOC`| 2132 | 258 | 257 | 2647 |
| `PROD` | 671 | 162 | 71 | 904 |
| `LOC` | 613 | 109 | 103 | 825 |
| `GPE_ORG`| 388 | 55 | 50 | 493 |
| `DRV` | 519 | 77 | 48 | 644 |
| `EVT` | 131 | 9 | 5 | 145 |
| `MISC` | 8 | 0 | 0 | 8 |
To access these reduced versions of the dataset, you can use the configs `bokmaal-7`, `nynorsk-7`, `combined-7` for the NER tag set with 7 tags (**`ORG`**, **`LOC`**, `PER`, `PROD`, `EVT`, `DRV`, `MISC`), and `bokmaal-8`, `nynorsk-8`, `combined-8` for the NER tag set with 8 tags (**`ORG`**, **`LOC`**, **`GPE`**, `PER`, `PROD`, `EVT`, `DRV`, `MISC`). By default, the full set (9 tags) will be used.
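For instance, a minimal sketch of loading one of the reduced tag sets (config names as listed above):
```python
from datasets import load_dataset

# Minimal sketch: the "-7" configs collapse the GPE_* sub-types as described above.
# Swap in any of the config names listed in the paragraph above.
norne_7 = load_dataset("NbAiLab/norne", "bokmaal-7")

# Assuming ner_tags is declared as a sequence of ClassLabel, the tag names are:
print(norne_7["train"].features["ner_tags"].feature.names)
```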
## Additional Information
### Dataset Curators
NorNE was created as a collaboration between [Schibsted Media Group](https://schibsted.com/), [Språkbanken](https://www.nb.no/forskning/sprakbanken/) at the [National Library of Norway](https://www.nb.no) and the [Language Technology Group](https://www.mn.uio.no/ifi/english/research/groups/ltg/) at the University of Oslo.
NorNE was added to Huggingface Datasets by the AI-Lab at the National Library of Norway.
### Licensing Information
The NorNE corpus is published under the same [license](https://github.com/ltgoslo/norne/blob/master/LICENSE_NDT.txt) as the Norwegian Dependency Treebank
### Citation Information
This dataset is described in the paper _NorNE: Annotating Named Entities for Norwegian_ by
Fredrik Jørgensen, Tobias Aasmoe, Anne-Stine Ruud Husevåg, Lilja Øvrelid, and Erik Velldal, accepted for LREC 2020 and available as pre-print here: https://arxiv.org/abs/1911.12146.
|
NbAiLab/norwegian_parliament | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- no
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
---
# Dataset Card for NbAiLab/norwegian_parliament
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** N/A
- **Repository:** [GitHub](https://github.com/ltgoslo/NorBERT/)
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** -
### Dataset Summary
The Norwegian Parliament Speeches is a collection of text passages from 1998 to 2016, pronounced at the Norwegian Parliament (Storting) by members of the two major parties: Fremskrittspartiet and Sosialistisk Venstreparti. The dataset is annotated with the party the speaker was associated with at the time (dates of speeches are also included).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text in the dataset is in Norwegian.
## Dataset Structure
### Data Instances
Example of one instance in the dataset.
```python
{'label': 0, 'text': 'Verre er det med slagsmålene .'}
```
### Data Fields
- `id`: index of the example
- `text`: Text of a speech
- `date`: Date (`YYYY-MM-DD`) the speech was produced
- `label`: Political party the speaker was associated with at the time
- 0 = Fremskrittspartiet
- 1 = Sosialistisk Venstreparti
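A minimal sketch of loading the speeches and decoding the party label (field names and label meanings as listed above):
```python
from datasets import load_dataset

# Minimal sketch: label meanings follow the list above.
parties = {0: "Fremskrittspartiet", 1: "Sosialistisk Venstreparti"}

dataset = load_dataset("NbAiLab/norwegian_parliament")
example = dataset["train"][0]
# Field names ("date", "label", "text") as documented in the list above.
print(example["date"], parties[example["label"]], example["text"][:80])
```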
### Data Splits
The dataset is split into a `train`, `validation`, and `test` split with the following sizes:
| | Train | Valid | Test |
| ----- | ------ | ----- | ----- |
| Number of examples | 3600 | 1200 | 1200 |
The dataset is balanced on political party.
## Dataset Creation
This dataset is based on the publicly available information by Norwegian Parliament (Storting) and created by the National Library of Norway AI-Lab to benchmark their language models.
## Additional Information
### Licensing Information
This work is licensed under a Creative Commons Attribution 4.0 International License
### Citation Information
```latex
@misc{--,
title={--},
author={--},
year={2021},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Niciu/github-issues | restsfds |
Nuwaisir/Quran_speech_recognition_kaggle | This dataset can be found in Kaggle |
Omar2027/caner_replicate | |
OmarN121/train | ---
YAML tags:
- copy-paste the tags obtained with the online tagging app: https://huggingface.co/spaces/huggingface/datasets-tagging
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
Paul/hatecheck | ---
annotations_creators:
- crowdsourced
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: HateCheck
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
---
# Dataset Card for HateCheck
## Dataset Description
HateCheck is a suite of functional tests for hate speech detection models.
The dataset contains 3,728 validated test cases in 29 functional tests.
19 functional tests correspond to distinct types of hate. The other 11 functional tests cover challenging types of non-hate.
This allows for targeted diagnostic insights into model performance.
In our ACL paper, we found critical weaknesses in all commercial and academic hate speech detection models that we tested with HateCheck.
Please refer to the paper (linked below) for results and further discussion, as well as further information on the dataset and a full data statement.
- **Paper:** Röttger et al. (2021) - HateCheck: Functional Tests for Hate Speech Detection Models. https://aclanthology.org/2021.acl-long.4/ or https://arxiv.org/abs/2012.15606
- **Repository:** https://github.com/paul-rottger/hatecheck-data
- **Point of Contact:** paul.rottger@oii.ox.ac.uk
## Dataset Structure
"test.csv" contains all 3,728 validated test cases. Each test case (row) has the following attributes:
**functionality**
The shorthand for the functionality tested by the test case.
**case_id**
The unique ID of the test case (assigned to each of the 3,901 cases we initially generated)
**test_case**
The text of the test case.
**label_gold**
The gold standard label (hateful/non-hateful) of the test case. All test cases within a given functionality have the same gold standard label.
**target_ident**
Where applicable, the protected group targeted or referenced by the test case. We cover seven protected groups in the test suite: women, trans people, gay people, black people, disabled people, Muslims and immigrants.
**direction**
For hateful cases, the binary secondary label indicating whether they are *directed* at an individual as part of a protected group or aimed at the group in *general*.
**focus_words**
Where applicable, the key word or phrase in a given test case (e.g. "cut their throats").
**focus_lemma**
Where applicable, the corresponding lemma (e.g. "cut sb. throat").
**ref_case_id**
For hateful cases, where applicable, the ID of the simpler hateful case which was perturbed to generate them.
For non-hateful cases, where applicable, the ID of the hateful case which is contrasted.
**ref_templ_id**
The equivalent, but for template IDs.
**templ_id**
The unique ID of the template from which the test case was generated (assigned to each of the 866 cases and templates from which we generated the 3,901 initial cases).
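A minimal sketch of loading the test suite and slicing it by these attributes (assuming the validated cases are exposed as a single `test` split):
```python
from collections import Counter

from datasets import load_dataset

# Minimal sketch, assuming the validated cases form a single "test" split.
hatecheck = load_dataset("Paul/hatecheck", split="test")

# How many cases does each functionality contribute?
print(Counter(hatecheck["functionality"]).most_common(5))

# Keep only cases that reference a protected group. Depending on how empty
# values are stored, target_ident may be None or an empty string, so test truthiness.
targeted = hatecheck.filter(lambda case: bool(case["target_ident"]))
print(len(targeted), "cases reference a protected group")
```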
## Citation Information
When using HateCheck, please cite our ACL paper:
```
@inproceedings{rottger-etal-2021-hatecheck,
title = "{H}ate{C}heck: Functional Tests for Hate Speech Detection Models",
author = {R{\"o}ttger, Paul and
Vidgen, Bertie and
Nguyen, Dong and
Waseem, Zeerak and
Margetts, Helen and
Pierrehumbert, Janet},
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.4",
doi = "10.18653/v1/2021.acl-long.4",
pages = "41--58",
abstract = "Detecting online hate is a difficult task that even state-of-the-art models struggle with. Typically, hate speech detection models are evaluated by measuring their performance on held-out test data using metrics such as accuracy and F1 score. However, this approach makes it difficult to identify specific model weak points. It also risks overestimating generalisable model performance due to increasingly well-evidenced systematic gaps and biases in hate speech datasets. To enable more targeted diagnostic insights, we introduce HateCheck, a suite of functional tests for hate speech detection models. We specify 29 model functionalities motivated by a review of previous research and a series of interviews with civil society stakeholders. We craft test cases for each functionality and validate their quality through a structured annotation process. To illustrate HateCheck{'}s utility, we test near-state-of-the-art transformer models as well as two popular commercial models, revealing critical model weaknesses.",
}
```
|
PaulLerner/triviaqa_for_viquae | See https://github.com/PaulLerner/ViQuAE
Get the original dataset there: http://nlp.cs.washington.edu/triviaqa/ (or via HF: https://huggingface.co/datasets/trivia_qa) |
PaulLerner/viquae_all_images | See https://github.com/PaulLerner/ViQuAE
---
license: cc-by-4.0
---
|
PaulLerner/viquae_dataset | See https://github.com/PaulLerner/ViQuAE
---
license: cc-by-4.0
---
|
PaulLerner/viquae_images | See https://github.com/PaulLerner/ViQuAE
---
license: cc-by-4.0
---
|
PaulLerner/viquae_wikipedia | See https://github.com/PaulLerner/ViQuAE
---
license: cc-by-3.0
---
|
Pengfei/asfwe | |
Pengfei/test1 | This is the dataset |
Perkhad/corejur | |
PlanTL-GOB-ES/SQAC | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- es
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: Spanish Question Answering Corpus (SQAC)
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# SQAC (Spanish Question-Answering Corpus)
## Dataset Description
SQAC is an extractive QA dataset for the Spanish language.
- **Paper:** [MarIA: Spanish Language Models](https://upcommons.upc.edu/bitstream/handle/2117/367156/6405-5863-1-PB%20%281%29.pdf?sequence=1)
- **Point of Contact:** carlos.rodriguez1@bsc.es
- **Leaderboard:** [EvalEs](https://plantl-gob-es.github.io/spanish-benchmark/)
### Dataset Summary
Contains 6,247 contexts and 18,817 questions with their respective answers, 1 to 5 for each fragment.
The sources of the contexts are:
* Encyclopedic articles from the [Spanish Wikipedia](https://es.wikipedia.org/), used under [CC-by-sa licence](https://creativecommons.org/licenses/by-sa/3.0/legalcode).
* News articles from [Wikinews](https://es.wikinews.org/), used under [CC-by licence](https://creativecommons.org/licenses/by/2.5/).
* Newswire and literature text from the [AnCora corpus](http://clic.ub.edu/corpus/en), used under [CC-by licence](https://creativecommons.org/licenses/by/4.0/legalcode).
### Supported Tasks
Extractive-QA
### Languages
- Spanish (es)
### Directory Structure
- README.md
- SQAC.py
- dev.json
- test.json
- train.json
## Dataset Structure
### Data Instances
<pre>
{
'id': '6cf3dcd6-b5a3-4516-8f9e-c5c1c6b66628',
'title': 'Historia de Japón',
'context': 'La historia de Japón (日本の歴史 o 日本史, Nihon no rekishi / Nihonshi?) es la sucesión de hechos acontecidos dentro del archipiélago japonés. Algunos de estos hechos aparecen aislados e influenciados por la naturaleza geográfica de Japón como nación insular, en tanto que otra serie de hechos, obedece a influencias foráneas como en el caso del Imperio chino, el cual definió su idioma, su escritura y, también, su cultura política. Asimismo, otra de las influencias foráneas fue la de origen occidental, lo que convirtió al país en una nación industrial, ejerciendo con ello una esfera de influencia y una expansión territorial sobre el área del Pacífico. No obstante, dicho expansionismo se detuvo tras la Segunda Guerra Mundial y el país se posicionó en un esquema de nación industrial con vínculos a su tradición cultural.',
'question': '¿Qué influencia convirtió Japón en una nación industrial?',
'answers': {
'text': ['la de origen occidental'],
'answer_start': [473]
}
}
</pre>
### Data Fields
<pre>
{
id: str
title: str
context: str
question: str
answers: {
answer_start: [int]
text: [str]
}
}
</pre>
### Data Splits
| Split | Size |
| ------------- | ------------- |
| `train` | 15,036 |
| `dev` | 1,864 |
| `test` | 1,910 |
## Content analysis
### Number of articles, paragraphs and questions
* Number of articles: 3,834
* Number of contexts: 6,247
* Number of questions: 18,817
* Number of sentences: 48,026
* Questions/Context ratio: 3.01
* Sentences/Context ratio: 7.70
### Number of tokens
* Total tokens in context: 1,561,616
* Average tokens/context: 250
* Total tokens in questions: 203,235
* Average tokens/question: 10.80
* Total tokens in answers: 90,307
* Average tokens/answer: 4.80
### Lexical variation
46.38% of the words in the Question can be found in the Context.
### Question type
| Question | Count | % |
|----------|-------:|---:|
| qué | 6,381 | 33.91 % |
| quién/es | 2,952 | 15.69 % |
| cuál/es | 2,034 | 10.81 % |
| cómo | 1,949 | 10.36 % |
| dónde | 1,856 | 9.86 % |
| cuándo | 1,639 | 8.71 % |
| cuánto | 1,311 | 6.97 % |
| cuántos | 495 | 2.63 % |
| adónde | 100 | 0.53 % |
| cuánta | 49 | 0.26 % |
| no question mark | 43 | 0.23 % |
| cuántas | 19 | 0.10 % |
## Dataset Creation
### Curation Rationale
For compatibility with similar datasets in other languages, we followed the existing curation guidelines from SQuAD 1.0 [(Rajpurkar, Pranav et al.)](http://arxiv.org/abs/1606.05250) as closely as possible.
### Source Data
#### Initial Data Collection and Normalization
The source data are scraped articles from Wikinews, the Spanish Wikipedia and the AnCora corpus.
- [Spanish Wikipedia](https://es.wikipedia.org)
- [Spanish Wikinews](https://es.wikinews.org/)
- [AnCora corpus](http://clic.ub.edu/corpus/en)
#### Who are the source language producers?
Contributors to the aforementioned sites.
### Annotations
#### Annotation process
We commissioned the creation of 1 to 5 questions for each context, following an adaptation of the guidelines from SQuAD 1.0 [(Rajpurkar, Pranav et al.)](http://arxiv.org/abs/1606.05250).
#### Who are the annotators?
Native language speakers.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
This corpus contributes to the development of language models in Spanish.
### Discussion of Biases
No postprocessing steps were applied to mitigate potential social biases.
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es).
For further information, send an email to (plantl-gob-es@bsc.es).
This work was funded by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://avancedigital.mineco.gob.es/en-us/Paginas/index.aspx) within the framework of the [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx).
### Licensing information
This work is licensed under [CC Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) License.
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
### Citation Information
```
@article{maria,
author = {Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquin Silveira-Ocampo and Casimiro Pio Carrino and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Aitor Gonzalez-Agirre and Marta Villegas},
title = {MarIA: Spanish Language Models},
journal = {Procesamiento del Lenguaje Natural},
volume = {68},
number = {0},
year = {2022},
issn = {1989-7553},
url = {http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405},
pages = {39--60}
}
```
### Contributions
[N/A]
|
PlanTL-GOB-ES/cantemist-ner | ---
annotations_creators:
- expert-generated
language:
- es
tags:
- biomedical
- clinical
- spanish
multilinguality:
- monolingual
task_categories:
- token-classification
task_ids:
- named-entity-recognition
license:
- cc-by-4.0
---
# CANTEMIST
## Dataset Description
Manually classified collection of Spanish oncological clinical case reports.
- **Homepage:** [zenodo](https://zenodo.org/record/3978041)
- **Paper:** [Named Entity Recognition, Concept Normalization and Clinical Coding: Overview of the Cantemist Track for Cancer Text Mining in Spanish, Corpus, Guidelines, Methods and Results](https://www.researchgate.net/profile/Antonio-Miranda-Escalada-2/publication/352786464_Named_Entity_Recognition_Concept_Normalization_and_Clinical_Coding_Overview_of_the_Cantemist_Track_for_Cancer_Text_Mining_in_Spanish_Corpus_Guidelines_Methods_and_Results/links/60d98a3b458515d6fbe382d8/Named-Entity-Recognition-Concept-Normalization-and-Clinical-Coding-Overview-of-the-Cantemist-Track-for-Cancer-Text-Mining-in-Spanish-Corpus-Guidelines-Methods-and-Results.pdf)
- **Point of Contact:** encargo-pln-life@bsc.es
### Dataset Summary
Collection of 1301 oncological clinical case reports written in Spanish, with tumor morphology mentions manually annotated and mapped by clinical experts to a controlled terminology. Every tumor morphology mention is linked to an eCIE-O code (the Spanish equivalent of ICD-O).
The training subset contains 501 documents, the development subset 500, and the test subset 300. The original dataset is distributed in [Brat](https://brat.nlplab.org/standoff.html) format.
This dataset was designed for the CANcer TExt Mining Shared Task, sponsored by [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx).
For further information, please visit [the official website](https://temu.bsc.es/cantemist/).
### Supported Tasks
Named Entity Recognition (NER)
### Languages
- Spanish (es)
### Directory Structure
* README.md
* cantemist.py
* train.conll
* dev.conll
* test.conll
## Dataset Structure
### Data Instances
Three four-column files, one for each split.
### Data Fields
Every file has 4 columns:
* 1st column: Word form or punctuation symbol
* 2nd column: Original BRAT file name
* 3rd column: Spans
* 4th column: IOB tag
#### Example
<pre>
El cc_onco101 662_664 O
informe cc_onco101 665_672 O
HP cc_onco101 673_675 O
es cc_onco101 676_678 O
compatible cc_onco101 679_689 O
con cc_onco101 690_693 O
adenocarcinoma cc_onco101 694_708 B-MORFOLOGIA_NEOPLASIA
moderadamente cc_onco101 709_722 I-MORFOLOGIA_NEOPLASIA
diferenciado cc_onco101 723_735 I-MORFOLOGIA_NEOPLASIA
que cc_onco101 736_739 O
afecta cc_onco101 740_746 O
a cc_onco101 747_748 O
grasa cc_onco101 749_754 O
peripancreática cc_onco101 755_770 O
sobrepasando cc_onco101 771_783 O
la cc_onco101 784_786 O
serosa cc_onco101 787_793 O
, cc_onco101 793_794 O
infiltración cc_onco101 795_807 O
perineural cc_onco101 808_818 O
. cc_onco101 818_819 O
</pre>
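As a rough sketch, the four-column files can also be read directly (assuming whitespace-separated columns and blank lines between sentences, the usual CoNLL convention; the file name `train.conll` comes from the directory structure above):
```python
# Minimal sketch: read a four-column CoNLL-style file into sentences of
# (word, BRAT file name, span, IOB tag) tuples.
def read_conll(path):
    sentences, current = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:  # blank line assumed to end a sentence
                if current:
                    sentences.append(current)
                    current = []
                continue
            word, doc_id, span, tag = line.split()
            current.append((word, doc_id, span, tag))
    if current:
        sentences.append(current)
    return sentences

sentences = read_conll("train.conll")
print(sentences[0][:3])
```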
### Data Splits
| Split | Size |
| ------------- | ------------- |
| `train` | 19,397 |
| `dev` | 18,165 |
| `test` | 11,168 |
## Dataset Creation
### Curation Rationale
For compatibility with similar datasets in other languages, we followed existing curation guidelines as closely as possible.
### Source Data
#### Initial Data Collection and Normalization
The selected clinical case reports are fairly similar to hospital health records. To increase the usefulness and practical relevance of the CANTEMIST corpus, we selected clinical cases affecting all genders and that comprised most ages (from children to the elderly) and of various complexity levels (solid tumors, hemato-oncological malignancies, neuroendocrine cancer...).
The CANTEMIST cases include clinical signs and symptoms, personal and family history, current illness, physical examination, complementary tests (blood tests, imaging, pathology), diagnosis, treatment (including adverse effects of chemotherapy), evolution and outcome.
#### Who are the source language producers?
Humans; there is no machine-generated data.
### Annotations
#### Annotation process
The manual annotation of the Cantemist corpus was performed by clinical experts following the Cantemist guidelines (for more detail refer to this [paper](http://ceur-ws.org/Vol-2664/cantemist_overview.pdf)). These guidelines contain rules for annotating morphology neoplasms in Spanish oncology clinical cases, as well as for mapping these annotations to eCIE-O.
A medical doctor was regularly consulted by annotators (scientists with PhDs on cancer-related subjects) for the most difficult pathology expressions. This same doctor periodically checked a random selection of annotated clinical records and these annotations were compared and discussed with the annotators. To normalize a selection of very complex cases, MD specialists in pathology from one of the largest university hospitals in Spain were consulted.
#### Who are the annotators?
Clinical experts.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
This corpus contributes to the development of medical language models in Spanish.
### Discussion of Biases
Not applicable.
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es).
For further information, send an email to (plantl-gob-es@bsc.es).
This work was funded by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://avancedigital.mineco.gob.es/en-us/Paginas/index.aspx) within the framework of the [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx).
### Licensing information
This work is licensed under [CC Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) License.
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
### Citation Information
```bibtex
@article{cantemist,
title={Named Entity Recognition, Concept Normalization and Clinical Coding: Overview of the Cantemist Track for Cancer Text Mining in Spanish, Corpus, Guidelines, Methods and Results.},
author={Miranda-Escalada, Antonio and Farr{\'e}, Eul{\`a}lia and Krallinger, Martin},
journal={IberLEF@ SEPLN},
pages={303--323},
year={2020}
}
```
### Contributions
[N/A]
|
PlanTL-GOB-ES/pharmaconer | ---
annotations_creators:
- expert-generated
language:
- es
tags:
- biomedical
- clinical
- spanish
multilinguality:
- monolingual
task_categories:
- token-classification
task_ids:
- named-entity-recognition
license:
- cc-by-4.0
---
# PharmaCoNER
## Dataset Description
Manually classified collection of Spanish clinical case studies.
- **Homepage:** [zenodo](https://zenodo.org/record/4270158)
- **Paper:** [PharmaCoNER: Pharmacological Substances, Compounds and proteins Named Entity Recognition track](https://aclanthology.org/D19-5701/)
- **Point of Contact:** encargo-pln-life@bsc.es
### Dataset Summary
Manually classified collection of clinical case studies derived from the Spanish Clinical Case Corpus (SPACCC), an open access electronic library that gathers Spanish medical publications from [SciELO](https://scielo.org/).
The PharmaCoNER corpus contains a total of 396,988 words and 1,000 clinical cases that have been randomly sampled into 3 subsets.
The training set contains 500 clinical cases, while the development and test sets contain 250 clinical cases each.
In terms of training examples, this translates to 8,129, 3,787 and 3,952 annotated sentences in the training, development and test sets, respectively.
The original dataset is distributed in [Brat](https://brat.nlplab.org/standoff.html) format.
The annotation of the entire set of entity mentions was carried out by domain experts.
It includes the following 4 entity types: NORMALIZABLES, NO_NORMALIZABLES, PROTEINAS and UNCLEAR.
This dataset was designed for the PharmaCoNER task, sponsored by [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx).
For further information, please visit [the official website](https://temu.bsc.es/pharmaconer/).
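A minimal loading sketch with the Hugging Face `datasets` library, assuming the Hub identifier matches this repository (`PlanTL-GOB-ES/pharmaconer`):
```python
from datasets import load_dataset

# Assumption: the bundled pharmaconer.py loading script parses the CoNLL files
dataset = load_dataset("PlanTL-GOB-ES/pharmaconer")
print(dataset["train"][0])
```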
### Supported Tasks
Named Entity Recognition (NER)
### Languages
- Spanish (es)
### Directory Structure
* README.md
* pharmaconer.py
* dev-set_1.1.conll
* test-set_1.1.conll
* train-set_1.1.conll
## Dataset Structure
### Data Instances
Three four-column files, one for each split.
### Data Fields
Every file has four columns:
* 1st column: Word form or punctuation symbol
* 2nd column: Original BRAT file name
* 3rd column: Spans
* 4th column: IOB tag
#### Example
<pre>
La S0004-06142006000900008-1 123_125 O
paciente S0004-06142006000900008-1 126_134 O
tenía S0004-06142006000900008-1 135_140 O
antecedentes S0004-06142006000900008-1 141_153 O
de S0004-06142006000900008-1 154_156 O
hipotiroidismo S0004-06142006000900008-1 157_171 O
, S0004-06142006000900008-1 171_172 O
hipertensión S0004-06142006000900008-1 173_185 O
arterial S0004-06142006000900008-1 186_194 O
en S0004-06142006000900008-1 195_197 O
tratamiento S0004-06142006000900008-1 198_209 O
habitual S0004-06142006000900008-1 210_218 O
con S0004-06142006000900008-1 219-222 O
atenolol S0004-06142006000900008-1 223_231 B-NORMALIZABLES
y S0004-06142006000900008-1 232_233 O
enalapril S0004-06142006000900008-1 234_243 B-NORMALIZABLES
</pre>
### Data Splits
| Split | Size |
| ------------- | ------------- |
| `train` | 8,129 |
| `dev` | 3,787 |
| `test` | 3,952 |
## Dataset Creation
### Curation Rationale
For compatibility with similar datasets in other languages, we followed existing curation guidelines as closely as possible.
### Source Data
#### Initial Data Collection and Normalization
Manually classified collection of clinical case report sections. The clinical cases were not restricted to a single medical discipline, covering a variety of medical disciplines, including oncology, urology, cardiology, pneumology or infectious diseases. This is key to cover a diverse set of chemicals and drugs.
#### Who are the source language producers?
Humans; there is no machine-generated data.
### Annotations
#### Annotation process
The annotation process of the PharmaCoNER corpus was inspired by previous annotation schemes and corpora used for the BioCreative CHEMDNER and GPRO tracks, translating the guidelines used for these tracks into Spanish and adapting them to the characteristics and needs of clinically oriented documents by modifying the annotation criteria and rules to cover medical information needs. This adaptation was carried out in collaboration with practicing physicians and medicinal chemistry experts. The adaptation, translation and refinement of the guidelines was done on a sample set of the SPACCC corpus and linked to an iterative process of annotation consistency analysis through interannotator agreement (IAA) studies until a high annotation quality in terms of IAA was reached.
#### Who are the annotators?
Practicing physicians and medicinal chemistry experts.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
This corpus contributes to the development of medical language models in Spanish.
### Discussion of Biases
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es).
For further information, send an email to (plantl-gob-es@bsc.es).
This work was funded by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://avancedigital.mineco.gob.es/en-us/Paginas/index.aspx) within the framework of the [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx).
### Licensing information
This work is licensed under [CC Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) License.
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
### Citation Information
```bibtex
@inproceedings{,
title = "PharmaCoNER: Pharmacological Substances, Compounds and proteins Named Entity Recognition track",
author = "Gonzalez-Agirre, Aitor and
Marimon, Montserrat and
Intxaurrondo, Ander and
Rabal, Obdulia and
Villegas, Marta and
Krallinger, Martin",
booktitle = "Proceedings of The 5th Workshop on BioNLP Open Shared Tasks",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D19-5701",
doi = "10.18653/v1/D19-5701",
pages = "1--10",
}
```
### Contributions
[N/A]
|
Plim/common_voice_7_0_fr_processed | ---
language:
- fr
--- |
Plim/fr_corpora_parliament_processed | ---
language: fr
--- |
Plim/fr_wikipedia_processed | ---
language: fr
--- |
Plim/language_model_fr | ---
language: fr
--- |
Pratik/Gujarati_OpenSLR | OpenSLR is a site devoted to hosting speech and language resources,
such as training corpora for speech recognition and software related to speech recognition.
It aims to provide a central, hassle-free place for anyone to share the resources they have created
so that they can be downloaded publicly.
See http://www.openslr.org/contributions.html for details.
# Supported Tasks
Automatic Speech Recognition
# Languages
Gujarati
Identifier: SLR78
Summary: Data set which contains recordings of native speakers of Gujarati.
Category: Speech
License: Attribution-ShareAlike 4.0 International
Downloads:
* about.html [1.5K] (information about the data set)
* LICENSE [20K] (license information for the data set)
* line_index_female.tsv [423K] (lines recorded by the female speakers)
* line_index_male.tsv [393K] (lines recorded by the male speakers)
* gu_in_female.zip [917M] (archive containing recordings from female speakers)
* gu_in_male.zip [825M] (archive containing recordings from male speakers)
About this resource:
This data set contains transcribed high-quality audio of Gujarati sentences recorded by volunteers. The data set consists of wave files and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and the transcription of the audio in that file.
The data set has been manually quality checked, but there might still be errors.
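As an illustrative sketch of reading the transcriptions (assuming each row of the line index holds two tab-separated fields, FileID and transcription, as described above):
```python
import csv

# Read (FileID, transcription) pairs from one of the line index files
with open("line_index_female.tsv", encoding="utf-8") as f:
    rows = list(csv.reader(f, delimiter="\t"))

file_id, transcription = rows[0]  # assumes exactly two columns per row
print(file_id, transcription)
```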
Please report any issues in the following issue tracker on GitHub. https://github.com/googlei18n/language-resources/issues
See LICENSE file for license information.
Copyright 2018, 2019 Google, Inc.
If you use this data in publications, please cite it as follows:
@inproceedings{he-etal-2020-open,
title = {{Open-source Multi-speaker Speech Corpora for Building Gujarati, Kannada, Malayalam, Marathi, Tamil and Telugu Speech Synthesis Systems}},
author = {He, Fei and Chu, Shan-Hui Cathy and Kjartansson, Oddur and Rivera, Clara and Katanova, Anna and Gutkin, Alexander and Demirsahin, Isin and Johny, Cibu and Jansche, Martin and Sarin, Supheakmungkol and Pipatsrisawat, Knot},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference (LREC)},
month = may,
year = {2020},
address = {Marseille, France},
publisher = {European Language Resources Association (ELRA)},
pages = {6494--6503},
url = {https://www.aclweb.org/anthology/2020.lrec-1.800},
ISBN = "{979-10-95546-34-4},
}
|
R0bk/XFUN | ---
license: mit
---
|
Remesita/tagged_reviews | |
RohanAiLab/persian_blog | ---
language:
- fa
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: persian_blog
---
# Persian Blog
# Dataset Summary
persian_blog is a collection of 400k blog posts. These posts have been gathered from more than 10 websites. This dataset can be used in different NLP tasks such as language modeling and text generation.
This effort is part of a broader initiative to provide several datasets in the Persian language for different tasks, with two important factors in mind: `free` and `easy-to-use`. Here is a quick HOW-TO for using this dataset with the datasets library: [Demo-datasets](https://saied71.github.io/RohanAiLab/2021/09/03/Demo-datasets.html)
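A minimal loading sketch, assuming the Hub identifier matches this repository (`RohanAiLab/persian_blog`) and a single `train` split with a `text` field:
```python
from datasets import load_dataset

# Assumption: the dataset exposes a "train" split with a "text" column
dataset = load_dataset("RohanAiLab/persian_blog")
print(dataset["train"][0]["text"][:200])
```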
# Description
As mentioned above, this dataset contains 400k blog posts. Each post has a single attribute: text. Here is a sample from the dataset:
```
text : چرا کودکان به روانشناس نیاز دارند؟ روانشناسی کودکانکودکان همچون غنچههای زیبا هستند که برای شکوفایی و به ثمر رسیدن نیاز به مراقبت و رسیدگی دارند . روانشناس کودک فردیست که از زمان بدو تولد کودک در مراحل مختلف زندگی کودک در کنار والدین وی میباشد و به چگونگی تربیت کودک کمک میکند تا به بهترین شکل رشد کند . چرا که روانشناس کودک با روحیات ، نیازها و مشکلات کودکان و همچنین چگونگی برقراری ارتباط بین کودک و والدین وی آشنایی کامل دارد .بسیاری از کودکان در سنین مختلف بخاطر شرایط زندگی ، دچار انواع ناسازگاریها و مشکلات در زندگی خود میشود از ناسازگاری کودکان میتوان به موارد زیر اشاره کرد : 1 . پرخاشگری 2 . بد دهنی 3 . اختلال در خوابیدن 4 . اختلال در غذا خوردن و کم اشتهایی 5 . حالت افسردگی و اضطراب 6 . ترس از محیط پیرامون 7 . عدم آمادگی برای ورود به جامعه 8 . وجود مشکل در محیط مدرسه 9 . عدم تمرکز 10 . جویدن ناخن ها 11 . انزوا و گوشه گیری 12 . عدم هم بازی شدن با هم سن و سال هاو .این گونه ناسازگاریها در زندگی آینده کودک نقش به سزایی دارد .روانشناس کودکیک روانشناس کودک خوب ، با دلسوزی و با تکیه بر تجربیات و تخصص خود میکوشد تا رفتارهای کودک را مورد ارزیابی و بررسی قرار دهد سپس سعی میکند تا رفتارهای بعدی کودک را پیش بینی کند و منشاء این مشکلات و سطح پیشرفت آن را بیابد. سپس او بهترین روشهای درمان برای بهبود اختلال کودک را مییابد و با کمک والدین این ناسازگاریها ، مشکلات و ناهنجاریها را حل کرده و نهایتا رابطهای دوستانه و صمیمانه بین کودک و والدین وی ایجاد مینماید تاآیندهای درخشان در انتظار کودک شما باشد .
```
# Citation
Contact: rohanailab@gmail.com
```
@misc{persian_blog,
  title={persian_blog},
  author={Saied Alimoradi},
  year={2021}
}
``` |
RohanAiLab/persian_daily_news | ---
pretty_name: Persian Daily News
language:
- fa
source_datasets:
- original
task_categories:
- summarization
- sequence-modeling
---
# Persian Daily News
# Dataset Summary
persian_daily_news is a collection of 2 million unique news articles, each paired with its headline. The dataset can be used in abstractive summarization and paraphrasing tasks.
This effort is part of a broader initiative to provide several datasets in the Persian language (and other low-resource languages) for different tasks, with two important factors in mind: `free` and `easy-to-use`. Here is a quick HOW-TO for using this dataset with the datasets library: [Demo-datasets](https://saied71.github.io/RohanAiLab/2021/09/03/Demo-datasets.html)
# Description
As mentioned above, this dataset contains 2M news articles. Each article has two attributes: text and summary. Here is a sample from the dataset:
```
text: به گزارش گروه بین الملل ، خبرگزاری رسمی قطر اعلام کرد، بعد از امضای موافقتنامه همکاری نظامی بین قطر و روسیه این امکان فراهم شده است تا نظامیان قطری برای تکمیل آموزشهای نظامی خود عازم روسیه شده و در آنجا تعلیم ببینند.در چارچوب این قرارداد که امروز یک شنبه توسط سرتیپ ستاد عبدالعزیز صالح السلیطی رییس هییت همکاریهای بین المللی نظامی قطر و سرلشکر ویکتور جوریمیکین رییس اداره عمومی نیروی انسانی وزارت دفاع روسیه به امضا رسید، روابط نظامی بین دوحه و مسکو در زمینه موسسات آموزشهای نظامی شاهد توسه قابل توجهی خواهد شد.به نوشته این خبرگزاری روابط قطر و روسیه در حال گسترش بوده و به سوی شکلگیری مشارکت راهبردی در تمامی زمینهها پیش میرود.
summary: از این پس نظامیان قطری برای آموزش عازم روسیه شده و در موسسات آموزش نظامی این کشور تعلیم خواهند دید.
```
# Citation
Contact: rohanailab@gmail.com
```
@misc{persian_daily_news,
  title={persian_daily_news},
  author={Saied Alimoradi},
  year={2021}
}
``` |
RohanAiLab/persian_news_dataset | ---
pretty_name: persian_news_datset
language:
- fa
source_datasets:
- original
task_categories:
- text-classification
- sequence-modeling
task_ids:
- language-modeling
- multi-class-classification
---
# Persian_News_Dataset
# Dataset Summary
persian_news_dataset is a collection of 5 million news articles. The articles have been gathered from more than 10 news agencies over the last 12 years. This dataset can be used in different NLP tasks such as language modeling, classification, and supervised topic modeling.
This effort is part of a broader initiative to provide several datasets in the Persian language for different tasks, with two important factors in mind: `free` and `easy-to-use`. Here is a quick HOW-TO for using this dataset with the datasets library: [Demo-datasets](https://saied71.github.io/RohanAiLab/2021/09/03/Demo-datasets.html)
# Description
As mentioned above, this dataset contains 5M news articles. Each article has three attributes: text, title, and category. Here is a sample from the dataset:
```
text :سهشنبه شب از دور برگشت مرحله نیمهنهایی لیگ قهرمانان اروپا، منچسترسیتی در ورزشگاه «اتحاد» میزبان پاریسنژرمن بود و با ارائه نمایشی حساب شده و تحسین برانگیز به پیروزی دو بر صفر دست یافت.بازی رفت در پاریس با برتری دو بر یک سیتی به اتمام رسیده بود و با این اوصاف تیم تحت هدایت «پپ گواردیولا» در مجموع با پیروزی چهار بر یک، راهی فینال شد.بارش برف موجب سفیدپوش شدن زمین شده بود و همین امر بر عملکرد تیمها تاثیر گذاشت. دیدار در حالی آغاز به کار کرد که «امباپه» ستاره پاریسیها که به تازگی از مصدومیت رهایی پیدا کرده است، نیمکتنشین بود.بازی با حملات میهمان آغاز شد و در دقیقه هفتم داور هلندی با تصمیمی عجیب اعتقاد داشت توپ به دست «زینچنکو» مدافع سیتی برخورد کرده و نقطه پنالتی را نشان داد، اما با استفاده از سیستم کمک داور ویدئویی، پنالتی پس گرفته شد. سیتی خیلی زود به هدفش رسید و در دقیقه ۱۰ حرکت عالی او و پاس به «دیبروین» موجب شد تا توپ در یک رفت و برگشت به «ریاض محرز» رسیده و این بازیکن الجزایری گل نخست بازی را برای میزبان به ارمغان آورد.در دقیقه ۱۶ ضربه سر «مارکینیوش» مدافع پیشتاخته پاریسنژرمن با بدشانسی به تیرک دروازه سیتی برخورد کرد.در ادامه برای دقایقی، بازیکنان در میانه میدان خطاهای متعددی انجام دادند و این امر موجب ایجاد چند درگیری شد.هرچند نماینده فرانسه درپی جبران مافات بود اما برنامهای برای رسیدن به این مهم نداشت تا نیمه نخست با همین یک گل همراه شود.در نیمه دوم هم حملات پاریسیها سودی نداشت و در طرف مقابل منچسترسیتی، بازی بسیار هوشمندانهای ارائه کرد.در دقیقه ۶۲ و در ضد حملهای برق آسا، «فیل فودن» با پاسی عالی توپ را به «ریاض محرز» رساند تا این بازیکن گل دوم خود و تیمش را ثبت کرده و سند صعود سیتی به فینال را امضا کند.در دقیقه ۶۸ «آنخل دیماریا» وینگر آرژانتینی تیم پاریسنژرمن پس از درگیری با «فرناندینو» با کارت قرمز داور از زمین اخراج شد تا کار تیمش تمام شود.در این بازی پاریسنژرمن با تفکرات «پوچتینو»، طراحی حملات خود را به «نیمار» سپرده بود اما این بازیکن مطرح برزیلی با حرکات انفرادی بیش از از اندازه، عملکرد خوبی نداشت و حملات تیمش را خراب کرد.در نهایت بازی با پیروزی سیتی همراه شد و مالکان ثروتمند منچسترسیتی به آرزوی خود رسیده و پس از سالها سرمایهگذاری به دیدار نهایی رسیدند. این اولین حضور سیتی در فینال لیگ قهرمانان اروپا است.چهارشنبه شب در دیگر دیدار دور برگشت نیمهنهایی، چلسی انگلیس در ورزشگاه «استمفورد بریج» شهر لندن پذیرای رئالمادرید اسپانیا است. بازی رفت با تساوی یک بر یک به اتمام رسید
title:آرزوی سیتی برآورده شد؛ صعود شاگردان «گواردیولا» به فینال
category:ورزش
```
# Citation
Contact: rohanailab@gmail.com
```
@misc{persian_news_dataset,
  title={persian_news_dataset},
  author={Saied Alimoradi},
  year={2021}
}
``` |
SCourthial/test | |
SajjadAyoubi/persian_qa | # PersianQA: a dataset for Persian Question Answering
The Persian Question Answering (PersianQA) dataset is a reading comprehension dataset on Persian Wikipedia. The crowd-sourced dataset consists of more than 9,000 entries. Each entry is either an impossible-to-answer question or a question with one or more answers spanning the passage (the context) from which the question was proposed. Much like the SQuAD2.0 dataset, the impossible or unanswerable questions can be utilized to create a system which "knows that it doesn't know the answer".
On top of that, the dataset includes about 900 test examples. Moreover, the first models trained on the dataset, based on Transformers, are available.
All the crowd workers of the dataset are native Persian speakers. It is also worth mentioning that the contexts are collected from all categories of Wikipedia (Historical, Religious, Geography, Science, etc.).
At the moment, each context has 7 question-answer pairs and 3 impossible questions.
## Dataset
### Access/Download
- You can find the data under the [`dataset/`](https://github.com/sajjjadayobi/PersianQA/tree/main/dataset) directory and use it like this:
```python
from read_ds import read_qa  # read_qa is available at src/read_ds.py
train_ds = read_qa('pqa_train.json')
test_ds = read_qa('pqa_test.json')
```
- Alternatively, you can also access the data through the HuggingFace🤗 datasets library
- First, you need to install datasets using this command in your terminal:
```sh
pip install -q datasets
```
- Then import `persian_qa` dataset using `load_dataset`:
```python
from datasets import load_dataset
dataset = load_dataset("SajjadAyoubi/persian_qa")
```
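For example, a rough sketch of separating the unanswerable questions, assuming a SQuAD2.0-style schema in which impossible questions carry an empty `answers["text"]` list:
```python
from datasets import load_dataset

dataset = load_dataset("SajjadAyoubi/persian_qa")

# Assumption: unanswerable entries have no answer spans, as in SQuAD2.0
unanswerable = dataset["train"].filter(lambda ex: len(ex["answers"]["text"]) == 0)
print(f"{len(unanswerable)} impossible questions in the train split")
```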
### Examples
| Title | Context | Question | Answer |
| :---: | :---------------------: | :--------: | :----: |
| خوب، بد، زشت | خوب، بد، زشت یک فیلم درژانر وسترن اسپاگتی حماسی است که توسط سرجو لئونه در سال ۱۹۶۶ در ایتالیا ساخته شد. زبانی که بازیگران این فیلم به آن تکلم میکنند مخلوطی از ایتالیایی و انگلیسی است. این فیلم سومین (و آخرین) فیلم از سهگانهٔ دلار (Dollars Trilogy) سرجو لئونه است. این فیلم در حال حاضر در فهرست ۲۵۰ فیلم برتر تاریخ سینما در وبگاه IMDB با امتیاز ۸٫۸ از ۱۰، رتبهٔ هشتم را به خود اختصاص دادهاست و به عنوان بهترین فیلم وسترن تاریخ سینمای جهان شناخته میشود. «خوب» (کلینت ایستوود، در فیلم، با نام «بلوندی») و «زشت» (ایلای والاک، در فیلم، با نام «توکو») با هم کار میکنند و با شگرد خاصی، به گول زدن کلانترهای مناطق مختلف و پول درآوردن از این راه میپردازند. «بد» (لی وان کلیف) آدمکشی حرفهای است که بهخاطر پول حاضر به انجام هر کاری است. «بد»، که در فیلم او را «اِنجل آیز (اِینجل آیز)» (به انگلیسی: Angel Eyes) صدا میکنند. بهدنبال گنجی است که در طی جنگهای داخلی آمریکا، به دست سربازی به نام «جکسون»، که بعدها به «کارسون» نامش را تغییر داده، مخفی شدهاست. | در فیلم خوب بد زشت شخصیت ها کجایی صحبت می کنند؟ | مخلوطی از ایتالیایی و انگلیسی |
| قرارداد کرسنت | قرارداد کرسنت قراردادی برای فروش روزانه معادل ۵۰۰ میلیون فوت مکعب، گاز ترش میدان سلمان است، که در سال ۱۳۸۱ و در زمان وزارت بیژن نامدار زنگنه در دولت هفتم مابین شرکت کرسنت پترولیوم و شرکت ملی نفت ایران منعقد گردید. مذاکرات اولیه این قرارداد از سال ۱۹۹۷ آغاز شد و در نهایت، سال ۲۰۰۱ (۱۳۸۱) به امضای این تفاهم نامه مشترک انجامید. بر اساس مفاد این قرارداد، مقرر شده بود که از سال ۲۰۰۵ با احداث خط لوله در خلیج فارس، گاز فرآورده نشده میدان سلمان (مخزن مشترک با ابوظبی)، به میزان روزانه ۵۰۰ میلیون فوت مکعب (به قول برخی منابع ۶۰۰ میلیون فوت مکعب) به امارات صادر شود. این قرارداد مطابق قوانین داخلی ایران بسته شده و تنها قرارداد نفتی ایران است که از طرف مقابل خود، تضمین گرفتهاست. اجرای این پروژه در سال ۱۳۸۴ با دلایل ارائه شده از سوی دیوان محاسبات ایران از جمله تغییر نیافتن بهای گاز صادراتی و ثابت ماندن آن در هفت سال اول اجرای قرارداد متوقف شد. این در حالی است که طبق تعریف حقوقی، دیوان محاسبات ایران، حق دخالت در قراردادها، پیش از آنکه قراردادها اجرایی و مالی شوند را ندارد. | طرفین قرار داد کرسنت کیا بودن؟ | کرسنت پترولیوم و شرکت ملی نفت ایران |
| چهارشنبهسوری | چهارشنبهسوری یکی از جشنهای ایرانی است که از غروب آخرین سهشنبه ی ماه اسفند، تا پس از نیمهشب تا آخرین چهارشنبه ی سال، برگزار میشود و برافروختن و پریدن از روی آتش مشخصهٔ اصلی آن است. این جشن، نخستین جشن از مجموعهٔ جشنها و مناسبتهای نوروزی است که با برافروختن آتش و برخی رفتارهای نمادین دیگر، بهصورت جمعی در فضای باز برگزار میشود. بهگفتهٔ ابراهیم پورداوود چهارشنبهسوری ریشه در گاهنبارِ هَمَسْپَتْمَدَم زرتشتیان و نیز جشن نزول فروهرها دارد که شش روز پیش از فرارسیدن نوروز برگزار میشد. احتمال دیگر این است که چهارشنبهسوری بازمانده و شکل تحولیافتهای از جشن سده باشد، که احتمال بعیدی است. علاوه برافروختن آتش، آیینهای مختلف دیگری نیز در بخشهای گوناگون ایران در زمان این جشن انجام میشوند. برای نمونه، در تبریز، مردم به چهارشنبهبازار میروند که با چراغ و شمع، بهطرز زیبایی چراغانی شدهاست. هر خانواده یک آینه، دانههای اسفند، و یک کوزه برای سال نو خریداری میکنند. همهساله شهروندانی از ایران در اثر انفجارهای ناخوشایند مربوط به این جشن، کشته یا مصدوم میشوند. | نام جشن اخرین شنبه ی سال چیست؟ | No Answer |
### Statistics
| Split | # of instances | # of unanswerables | avg. question length | avg. paragraph length | avg. answer length |
| :---: | :------------: | :----------------: | :------------------: | :-------------------: | :----------------: |
| Train | 9,000 | 2,700 | 8.39 | 224.58 | 9.61 |
| Test | 938 | 280 | 8.02 | 220.18 | 5.99 |
The lengths are on the token level.
- for more about data and more example see [here](https://github.com/sajjjadayobi/PersianQA/tree/main/dataset#readme)
## Models
Currently, two models (baseline) on [HuggingFace🤗](https://huggingface.co/SajjadAyoubi/) model hub are using the dataset.
## Citation
We have not yet published a paper on this work.
However, if you use the dataset, please cite us properly with an entry like the one below.
```bibtex
@misc{PersianQA,
author = {Ayoubi, Sajjad and Davoodeh, Mohammad Yasin},
title = {PersianQA: a dataset for Persian Question Answering},
year = 2021,
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/SajjjadAyobi/PersianQA}},
}
```
|
Sakonii/nepalitext-language-model-dataset | ---
annotations_creators:
- no-annotation
language_creators:
- found
- other
language:
- ne
license:
- cc0-1.0
multilinguality:
- monolingual
source_datasets:
- extended|oscar
- extended|cc100
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: nepalitext-language-model-dataset
---
# Dataset Card for "nepalitext-language-model-dataset"
### Dataset Summary
"NepaliText" language modeling dataset is a collection of over 13 million Nepali text sequences (phrases/sentences/paragraphs) extracted by combining the datasets: [OSCAR](https://huggingface.co/datasets/oscar) , [cc100](https://huggingface.co/datasets/cc100) and a set of scraped Nepali articles on Wikipedia.
### Supported Tasks and Leaderboards
This dataset is intended to pre-train language models and word representations on Nepali Language.
### Languages
The data is focused on Nepali language, but may have instances of other languages as well.
## Dataset Structure
### Data Instances
An example:
```
{'text': 'घरेलु मैदानमा भएको च्याम्पियन्स लिगको दोस्रो लेगमा एथ्लेटिको मड्रिडले आर्सनललाई एक शून्यले हराउँदै समग्रमा दुई एकको अग्रताका साथ फाइनलमा प्रवेश गरेको हो ।\n'}
```
### Data Fields
The data fields are:
- `text`: a `string` feature.
### Data Splits
| train | test |
|---:|---:|
| 13,141,222 | 268,189 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
Being extracted and scraped from variety of internet sources, Personal and sensitive information might be present. This must be considered before training deep learning models, specially in the case of text-generation models.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@Sakonii](https://github.com/Sakonii) for adding this dataset. |
Samip/func | Hello
|
SaulLu/Natural_Questions_HTML | This is a dataset extracted from the Natural Questions dataset
This dataset is currently under development |
SebastianS/github-issues | ---
annotations_creators: []
language_creators:
- crowdsourced
language:
- en-US
license: []
multilinguality:
- monolingual
pretty_name: github-issues
size_categories:
- unknown
source_datasets: []
task_categories:
- text-classification
task_ids: []
---
# Dataset Card for GitHub Issues
## Dataset Description
This is an example dataset of GitHub issues created by following the Hugging Face course. |
SetFit/20_newsgroups | This is a version of the [20 newsgroups dataset](https://scikit-learn.org/0.19/datasets/twenty_newsgroups.html#the-20-newsgroups-text-dataset) that is provided in Scikit-learn. From the Scikit-learn docs:
> The 20 newsgroups dataset comprises around 18000 newsgroups posts on 20 topics split in two subsets: one for training (or development) and the other one for testing (or for performance evaluation). The split between the train and test set is based upon a messages posted before and after a specific date.
We followed the [recommended practice](https://scikit-learn.org/0.19/datasets/twenty_newsgroups.html#filtering-text-for-more-realistic-training) to remove headers, signature blocks, and quotations from each news article. |
SetFit/TREC-QC | # TREC Question Classification
Question classification in coarse and fine-grained categories.
Source:
[Experimental Data for Question Classification](https://cogcomp.seas.upenn.edu/Data/QA/QC/)
Xin Li, Dan Roth, Learning Question Classifiers. COLING'02, Aug., 2002. |
SetFit/amazon_counterfactual | # Amazon Multilingual Counterfactual Dataset
The dataset contains sentences from Amazon customer reviews (sampled from Amazon product review dataset) annotated for counterfactual detection (CFD) binary classification. Counterfactual statements describe events that did not or cannot take place. Counterfactual statements may be identified as statements of the form – If p was true, then q would be true (i.e. assertions whose antecedent (p) and consequent (q) are known or assumed to be false).
The key features of this dataset are:
* The dataset is multilingual and contains sentences in English, German, and Japanese.
* The labeling was done by professional linguists and high quality was ensured.
* The dataset is supplemented with the annotation guidelines and definitions, which were worked out by professional linguists. We also provide the clue word lists, which are typical for counterfactual sentences and were used for initial data filtering. The clue word lists were also compiled by professional linguists.
Please see the [paper](https://arxiv.org/abs/2104.06893) for the data statistics, detailed description of data collection and annotation.
GitHub repo URL: https://github.com/amazon-research/amazon-multilingual-counterfactual-dataset
## Usage
You can load each of the languages as follows:
```
from datasets import get_dataset_config_names, load_dataset
dataset_id = "SetFit/amazon_counterfactual"
# Returns ['de', 'en', 'en-ext', 'ja']
configs = get_dataset_config_names(dataset_id)
# Load English subset
dset = load_dataset(dataset_id, name="en")
``` |
SetFit/amazon_counterfactual_en | # Amazon Counterfactual Statements
This dataset is the *en-ext* split from [SetFit/amazon_counterfactual](https://huggingface.co/datasets/SetFit/amazon_counterfactual). As the original test set is rather small (1333 examples), a different split was created with 50-50 for training & testing.
The dataset is described in [amazon-multilingual-counterfactual-dataset](https://github.com/amazon-research/amazon-multilingual-counterfactual-dataset) / [Paper](https://arxiv.org/pdf/2104.06893.pdf)
It contains statements from Amazon reviews about events that did not or cannot take place. |
SetFit/bbc-news | # BBC News Topic Classification
Dataset on [BBC News Topic Classification](https://www.kaggle.com/yufengdev/bbc-text-categorization/data): 2225 articles, each labeled under one of 5 categories: business, entertainment, politics, sport or tech. |
SetFit/emotion | **Attention: There appears to be an overlap between the train and test sets. I trained a model on the train set and achieved 100% accuracy on the test set. With the original emotion dataset this is not the case (92.4% accuracy).** |
SetFit/enron_spam | This is a version of the [Enron Spam Email Dataset](https://github.com/MWiechmann/enron_spam_data), containing emails (subject + message) and a label indicating whether each email is spam or ham. |
SetFit/ethos | # Ethos
This dataset is a clone of the official [`ethos` dataset](https://huggingface.co/datasets/ethos) on the Hub. It contains both `binary` and `multilabel` subsets. |
SetFit/ethos_binary |
This is the binary split of [ethos](https://huggingface.co/datasets/ethos), split into train and test.
It contains comments annotated for hate speech or not. |
SetFit/go_emotions | # GoEmotions
This dataset is a port of the official [`go_emotions` dataset](https://huggingface.co/datasets/go_emotions) on the Hub. It only contains the `simplified` subset as these are the only fields we need for text classification. |
SetFit/hate_speech_offensive | # hate_speech_offensive
This dataset is a version from [hate_speech_offensive](https://huggingface.co/datasets/hate_speech_offensive), splitted into train and test set. |
SetFit/insincere-questions | This is a version of the [Quora Insincere Questions Classification](https://www.kaggle.com/c/quora-insincere-questions-classification).
An insincere question is defined as a question intended to make a statement rather than look for helpful answers. About 6% of questions are labeled as insincere. |
SetFit/mnli | # Glue MNLI
This dataset is a port of the official [`mnli` dataset](https://huggingface.co/datasets/glue/viewer/mnli/train) on the Hub.
It contains the matched version.
Note that the premise and hypothesis columns have been renamed to text1 and text2 respectively.
Also, the test split is not labeled; the label column values are always -1.
|
SetFit/mnli_mm | # Glue MNLI
This dataset is a port of the official [`mnli` dataset](https://huggingface.co/datasets/glue/viewer/mnli/train) on the Hub.
It contains the mismatched version.
Note that the premise and hypothesis columns have been renamed to text1 and text2 respectively.
Also, the test split is not labeled; the label column values are always -1.
|