sha | text | id | tags | created_at | metadata | last_modified |
---|---|---|---|---|---|---|
c5589b86803b9ab1a3747272f6a729f5c22b1e1e | # GEM Submission
Submission name: This is a test name
| GEM-submissions/lewtun__this-is-a-test-name__1648137608 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-03-24T16:00:09+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "This is a test name", "tags": ["evaluation", "benchmark"]} | 2022-03-24T16:00:11+00:00 |
5a3e6f4f6abdc2088363b7d4ececec81b3c8a053 | # Dataset Card for SpanishNLP
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Spanish poems with their authors and titles.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | wesamhaddad14/spanishNLP | [
"region:us"
] | 2022-03-24T16:36:16+00:00 | {} | 2022-03-24T16:46:39+00:00 |
9242e8cb6ce1f497794c1728838700bb182cc435 | Openmindedness/mc_chat_scraped_from_toxigon_anarchy | [
"license:cc",
"region:us"
] | 2022-03-24T17:03:45+00:00 | {"license": "cc"} | 2022-03-24T17:13:13+00:00 |
|
0e7601c463d3048563ea8017d7162279f56333b1 |
# Dataset Card for SciDTB
## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/PKU-TANGENT/SciDTB
- **Repository:** https://github.com/PKU-TANGENT/SciDTB
- **Paper:** https://aclanthology.org/P18-2071/
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
SciDTB is a domain-specific discourse treebank annotated on scientific articles written in English. Unlike the widely used RST-DT and PDTB, SciDTB uses dependency trees to represent discourse structure, which is flexible and simplified to some extent but does not sacrifice structural integrity. Furthermore, this treebank is intended as a benchmark for evaluating discourse dependency parsers. This dataset can benefit many downstream NLP tasks such as machine translation and automatic summarization.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English.
## Dataset Structure
### Data Instances
A typical data point consists of `root`, a list of nodes in the dependency tree. Each node in the list has four fields: `id`, an identifier for the node; `parent`, the id of the parent node; `text`, the text span belonging to the current node; and `relation`, the discourse relation between the current node and its parent.
An example from the SciDTB train set is given below:
```
{
    "root": [
        {
            "id": 0,
            "parent": -1,
            "text": "ROOT",
            "relation": "null"
        },
        {
            "id": 1,
            "parent": 0,
            "text": "We propose a neural network approach ",
            "relation": "ROOT"
        },
        {
            "id": 2,
            "parent": 1,
            "text": "to benefit from the non-linearity of corpus-wide statistics for part-of-speech ( POS ) tagging . <S>",
            "relation": "enablement"
        },
        {
            "id": 3,
            "parent": 1,
            "text": "We investigated several types of corpus-wide information for the words , such as word embeddings and POS tag distributions . <S>",
            "relation": "elab-aspect"
        },
        {
            "id": 4,
            "parent": 5,
            "text": "Since these statistics are encoded as dense continuous features , ",
            "relation": "cause"
        },
        {
            "id": 5,
            "parent": 3,
            "text": "it is not trivial to combine these features ",
            "relation": "elab-addition"
        },
        {
            "id": 6,
            "parent": 5,
            "text": "comparing with sparse discrete features . <S>",
            "relation": "comparison"
        },
        {
            "id": 7,
            "parent": 1,
            "text": "Our tagger is designed as a combination of a linear model for discrete features and a feed-forward neural network ",
            "relation": "elab-aspect"
        },
        {
            "id": 8,
            "parent": 7,
            "text": "that captures the non-linear interactions among the continuous features . <S>",
            "relation": "elab-addition"
        },
        {
            "id": 9,
            "parent": 10,
            "text": "By using several recent advances in the activation functions for neural networks , ",
            "relation": "manner-means"
        },
        {
            "id": 10,
            "parent": 1,
            "text": "the proposed method marks new state-of-the-art accuracies for English POS tagging tasks . <S>",
            "relation": "evaluation"
        }
    ]
}
```
More raw data instances can be found [here](https://github.com/PKU-TANGENT/SciDTB/tree/master/dataset).
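For quick experimentation, here is a minimal loading sketch; the Hub id comes from this card's record, and the field access follows the schema described above:
```python
from datasets import load_dataset

# A minimal sketch: load SciDTB from the Hugging Face Hub
scidtb = load_dataset("DFKI-SLT/scidtb")

# Walk the dependency tree of the first training example
for node in scidtb["train"][0]["root"]:
    print(node["id"], "->", node["parent"], node["relation"])
```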
### Data Fields
- `id`: an integer identifier for the node
- `parent`: the integer identifier of the parent node
- `text`: a string containing the text of the current node
- `relation`: a string representing the discourse relation between the current node and its parent
### Data Splits
The dataset consists of three splits: `train`, `dev` and `test`.
| Train | Valid | Test |
| ----- | ----- | ---- |
| 743   | 154   | 152  |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
More information can be found [here](https://aclanthology.org/P18-2071/)
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@inproceedings{yang-li-2018-scidtb,
    title = "{S}ci{DTB}: Discourse Dependency {T}ree{B}ank for Scientific Abstracts",
    author = "Yang, An and
      Li, Sujian",
    booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
    month = jul,
    year = "2018",
    address = "Melbourne, Australia",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P18-2071",
    doi = "10.18653/v1/P18-2071",
    pages = "444--449",
    abstract = "Annotation corpus for discourse relations benefits NLP tasks such as machine translation and question answering. In this paper, we present SciDTB, a domain-specific discourse treebank annotated on scientific articles. Different from widely-used RST-DT and PDTB, SciDTB uses dependency trees to represent discourse structure, which is flexible and simplified to some extent but do not sacrifice structural integrity. We discuss the labeling framework, annotation workflow and some statistics about SciDTB. Furthermore, our treebank is made as a benchmark for evaluating discourse dependency parsers, on which we provide several baselines as fundamental work.",
}
``` | DFKI-SLT/scidtb | [
"task_categories:token-classification",
"task_ids:parsing",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"region:us"
] | 2022-03-25T09:07:59+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["parsing"], "pretty_name": "Scientific Dependency Tree Bank", "language_bcp47": ["en-US"]} | 2022-10-25T05:38:25+00:00 |
1eddac63112eee1fdf1966e0bca27a5ff248c772 | ## Overview
The original dataset can be found [here](https://www.dropbox.com/s/hylbuaovqwo2zav/nli_fever.zip?dl=0)
while the Github repo is [here](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md).
This dataset was proposed in [Combining fact extraction and verification with neural semantic matching networks](https://dl.acm.org/doi/abs/10.1609/aaai.v33i01.33016859) and was created as a modification
of FEVER.
In the original FEVER setting, the input is a claim from Wikipedia and the expected output is a label.
However, this is different from the standard NLI formalization, which is basically a *pair-of-sequence to label* problem.
To let NLI-related research take advantage of the FEVER dataset, the authors paired the claims in the FEVER dataset
with the textual evidence, making it a *pair-of-sequence to label* formatted dataset.
## Dataset curation
The label mapping follows the paper and is the following:
```python
mapping = {
    "SUPPORTS": 0,  # entailment
    "NOT ENOUGH INFO": 1,  # neutral
    "REFUTES": 2,  # contradiction
}
```
Also, the `verifiable` column has been encoded as follows:
```python
mapping = {"NOT VERIFIABLE": 0, "VERIFIABLE": 1}
```
Finally, a consistency check with the labels reported in the original FEVER dataset is performed.
NOTE: no label is available for the "test" split.
NOTE: there are 3 instances in common between `dev` and `train` splits.
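Once on the Hub, the dataset can be loaded and the integer labels decoded with the mapping above; a minimal sketch (the Hub id and the feature names come from the script below):
```python
from datasets import load_dataset

# A minimal sketch: load the processed dataset from the Hub
nli_fever = load_dataset("pietrolesci/nli_fever")

# Decode integer labels back to NLI names using the mapping above
id2label = {0: "entailment", 1: "neutral", 2: "contradiction"}
example = nli_fever["train"][0]
print(example["premise"][:80], "->", id2label[example["label"]])
```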
## Code to generate the dataset
```python
import pandas as pd
from datasets import Dataset, ClassLabel, load_dataset, Value, Features, DatasetDict
import json

# download data from https://www.dropbox.com/s/hylbuaovqwo2zav/nli_fever.zip?dl=0
paths = {
    "train": "<some_path>/nli_fever/train_fitems.jsonl",
    "validation": "<some_path>/nli_fever/dev_fitems.jsonl",
    "test": "<some_path>/nli_fever/test_fitems.jsonl",
}

# parsing code from https://github.com/facebookresearch/anli/blob/main/src/utils/common.py
registered_jsonabl_classes = {}


def register_class(cls):
    global registered_jsonabl_classes
    if cls not in registered_jsonabl_classes:
        registered_jsonabl_classes.update({cls.__name__: cls})


def unserialize_JsonableObject(d):
    global registered_jsonabl_classes
    classname = d.pop("_jcls_", None)
    if classname:
        cls = registered_jsonabl_classes[classname]
        obj = cls.__new__(cls)  # Make instance without calling __init__
        for key, value in d.items():
            setattr(obj, key, value)
        return obj
    else:
        return d


def load_jsonl(filename, debug_num=None):
    d_list = []
    with open(filename, encoding="utf-8", mode="r") as in_f:
        print("Load Jsonl:", filename)
        for line in in_f:
            item = json.loads(line.strip(), object_hook=unserialize_JsonableObject)
            d_list.append(item)
            if debug_num is not None and 0 < debug_num == len(d_list):
                break
    return d_list


def get_original_fever() -> pd.DataFrame:
    """Get original fever datasets."""
    fever_v1 = load_dataset("fever", "v1.0")
    fever_v2 = load_dataset("fever", "v2.0")
    columns = ["id", "label"]
    splits = ["paper_test", "paper_dev", "labelled_dev", "train"]
    list_dfs = [fever_v1[split].to_pandas()[columns] for split in splits]
    list_dfs.append(fever_v2["validation"].to_pandas()[columns])
    dfs = pd.concat(list_dfs, ignore_index=False)
    dfs = dfs.drop_duplicates()
    dfs = dfs.rename(columns={"label": "fever_gold_label"})
    return dfs


def load_and_process(path: str, fever_df: pd.DataFrame) -> pd.DataFrame:
    """Load data split and merge with fever."""
    df = pd.DataFrame(load_jsonl(path))
    df = df.rename(columns={"query": "premise", "context": "hypothesis"})
    # adjust dtype
    df["cid"] = df["cid"].astype(int)
    # merge with original fever to get labels
    df = pd.merge(df, fever_df, left_on="cid", right_on="id", how="inner").drop_duplicates()
    return df


def encode_labels(df: pd.DataFrame) -> pd.DataFrame:
    """Encode labels using the mapping used in SNLI and MultiNLI"""
    mapping = {
        "SUPPORTS": 0,  # entailment
        "NOT ENOUGH INFO": 1,  # neutral
        "REFUTES": 2,  # contradiction
    }
    df["label"] = df["fever_gold_label"].map(mapping)
    # verifiable
    df["verifiable"] = df["verifiable"].map({"NOT VERIFIABLE": 0, "VERIFIABLE": 1})
    return df


if __name__ == "__main__":
    fever_df = get_original_fever()
    dataset_splits = {}
    for split, path in paths.items():
        # from json to dataframe and merge with fever
        df = load_and_process(path, fever_df)
        if not len(df) > 0:
            print(f"Split `{split}` has no matches")
            continue
        if split == "train":
            # train must have same labels
            assert sum(df["fever_gold_label"] != df["label"]) == 0
        # encode labels using the default mapping used by other nli datasets
        # i.e, entailment: 0, neutral: 1, contradiction: 2
        df = df.drop(columns=["label"])
        df = encode_labels(df)
        # cast to dataset
        features = Features(
            {
                "cid": Value(dtype="int64", id=None),
                "fid": Value(dtype="string", id=None),
                "id": Value(dtype="int32", id=None),
                "premise": Value(dtype="string", id=None),
                "hypothesis": Value(dtype="string", id=None),
                "verifiable": Value(dtype="int64", id=None),
                "fever_gold_label": Value(dtype="string", id=None),
                "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
            }
        )
        if "test" in path:
            # no features for test set
            df["label"] = -1
            df["verifiable"] = -1
            df["fever_gold_label"] = "not available"
        dataset = Dataset.from_pandas(df, features=features)
        dataset_splits[split] = dataset

    nli_fever = DatasetDict(dataset_splits)
    nli_fever.push_to_hub("pietrolesci/nli_fever", token="<your token>")

    # check overlap between splits
    from itertools import combinations

    for i, j in combinations(dataset_splits.keys(), 2):
        print(
            f"{i} - {j}: ",
            pd.merge(
                dataset_splits[i].to_pandas(),
                dataset_splits[j].to_pandas(),
                on=["premise", "hypothesis", "label"],
                how="inner",
            ).shape[0],
        )
    #> train - dev: 3
    #> train - test: 0
    #> dev - test: 0
``` | pietrolesci/nli_fever | [
"region:us"
] | 2022-03-25T10:01:17+00:00 | {} | 2022-04-25T08:03:28+00:00 |
a3923be8c49a8c8b5025737e64919faecc7576a7 | ## Overview
The original dataset can be found [here](https://github.com/swarnaHub/ConjNLI). It has been
proposed in [ConjNLI: Natural Language Inference Over Conjunctive Sentences](https://aclanthology.org/2020.emnlp-main.661/).
This dataset is a stress test for natural language inference over conjunctive sentences,
where the premise differs from the hypothesis by conjuncts removed, added, or replaced.
## Dataset curation
The label mapping is the usual `{"entailment": 0, "neutral": 1, "contradiction": 2}`
used in NLI datasets. Note that labels for the `test` split are not available.
Also, the `train` split is originally named `adversarial_train_15k`.
There are 2 instances (joining on "premise", "hypothesis", "label") present in both `train` and `dev`.
Finally, a few instances in the `train` set have no label; they are removed.
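Once on the Hub, the dataset can be loaded and decoded as in the sketch below (the Hub id and feature names come from the script that follows):
```python
from datasets import load_dataset

# A minimal sketch: load the dataset from the Hub and decode labels
conj_nli = load_dataset("pietrolesci/conj_nli")

id2label = {0: "entailment", 1: "neutral", 2: "contradiction"}
ex = conj_nli["train"][0]
print(ex["premise"], "|", ex["hypothesis"], "->", id2label[ex["label"]])
```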
## Code to create the dataset
```python
import pandas as pd
from datasets import Dataset, ClassLabel, Value, Features, DatasetDict

# download data from repo https://github.com/swarnaHub/ConjNLI
paths = {
    "train": "<path_to_folder>/ConjNLI-master/data/NLI/adversarial_train_15k.tsv",
    "dev": "<path_to_folder>/ConjNLI-master/data/NLI/conj_dev.tsv",
    "test": "<path_to_folder>/ConjNLI-master/data/NLI/conj_test.tsv",
}

dataset_splits = {}
for split, path in paths.items():
    # load data
    df = pd.read_csv(paths[split], sep="\t")
    # encode labels using the default mapping used by other nli datasets
    # i.e, entailment: 0, neutral: 1, contradiction: 2
    df.columns = df.columns.str.lower()
    if "test" in path:
        df["label"] = -1
    else:
        # remove empty labels
        df = df.loc[~df["label"].isna()]
        # encode labels
        df["label"] = df["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})
    # cast to dataset
    features = Features({
        "premise": Value(dtype="string", id=None),
        "hypothesis": Value(dtype="string", id=None),
        "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
    })
    dataset = Dataset.from_pandas(df, features=features)
    dataset_splits[split] = dataset

conj_nli = DatasetDict(dataset_splits)
conj_nli.push_to_hub("pietrolesci/conj_nli", token="<token>")

# check overlap between splits
from itertools import combinations

for i, j in combinations(conj_nli.keys(), 2):
    print(
        f"{i} - {j}: ",
        pd.merge(
            conj_nli[i].to_pandas(),
            conj_nli[j].to_pandas(),
            on=["premise", "hypothesis", "label"], how="inner"
        ).shape[0],
    )
#> train - dev: 2
#> train - test: 0
#> dev - test: 0
``` | pietrolesci/conj_nli | [
"region:us"
] | 2022-03-25T10:17:37+00:00 | {} | 2022-04-25T12:27:25+00:00 |
9cbf1e972349722deae84615618ee6ad4d41e36e | [![CC BY 4.0][cc-by-shield]][cc-by]
[](https://doi.org/10.5281/zenodo.6457824)
# GLARE: Google Apps Arabic Reviews
Dataset and Code of "GLARE: Google Apps Arabic Reviews" paper.
You can download the paper via: [[Github]](GLARE.pdf)
## Paper Summary
We introduce GLARE: the Google Apps Arabic Reviews dataset, a collection of 76M reviews from 9,980 Android apps collected from the Saudi store of Google Play.
## Preparation
#### Below are details about each file; please ensure that you have enough storage before downloading the data.
| Data Type  | File Name   | File Size | File Type |
| ---------- | ----------- | --------- | --------- |
| raw        | apps        | 4.1 MB    | CSV       |
| raw        | reviews     | 17 GB     | CSV       |
| raw        | categories/ | 4.3 MB    | CSV       |
| engineered | apps        | 3.8 MB    | CSV       |
| engineered | reviews     | 21.9 GB   | CSV       |
| engineered | vocabulary  | 530.5 MB  | CSV       |
## File Specifications
- **apps.csv**: File that contains apps metadata.
- **reviews.csv**: File that contains reviews and reviews metadata.
- **categories/**: Folder that contains 59 CSV files, each corresponding to one category, with apps and app metadata scraped from the top 200 free apps for that category.
- **vocabulary.csv**: File that contains the vocabulary set generated from the reviews, with additional engineered features (word length, word frequency, whether the word contains noise or digits, etc.).
### Raw Data
#### Apps Metadata
```
{
  "title": "application name/title",
  "app_id": "application unique identifier",
  "url": "application url at Google PlayStore",
  "icon": "url for image object",
  "developer": "developer name",
  "developer_id": "developer unique identifier",
  "summary": "short description of the application",
  "rating": "application accumulated rating"
}
```
#### Reviews Metadata
```
{
  "at": "review datetime",
  "content": "review text",
  "replied_at": "developer reply datetime",
  "reply_content": "developer reply content",
  "review_created_version": "user application version during the time of review",
  "review_id": "review unique identifier",
  "rating": "user rating",
  "thumbs_up_count": "number of users that agree with the reviewer",
  "user_name": "user display name",
  "app_id": "application unique identifier"
}
```
### Engineered Data
#### Apps Metadata
Same as apps.csv in raw data with the following additions:
```
{
  "reviews_count": "number of reviews for the application",
  "categories": "list of application categories",
  "categories_count": "number of application categories"
}
```
#### Reviews Metadata
Same as reviews.csv in raw data with the following additions:
```
{
  "tokenized_review": "list of review words tokenized on white-space",
  "words_count": "number of words in review"
}
```
#### Vocabulary
```
{
  "word": "term text",
  "length": "word characters count",
  "frequency": "word occurrences in the reviews dataset",
  "has_noise": "true or false if word contains anything non-arabic alphanumeric",
  "noise": "list of noise (anything non-arabic alphanumeric) in the word",
  "has_digits": "true or false if word contains arabic or hindi digits",
  "digits": "list of digits in the word"
}
```
### Folders Structure
- Data are prepared as raw data or engineered data.
- Download the dataset files: [Google Drive](https://drive.google.com/drive/folders/1Cb61K3wFdVlIQfKouchsUpn5oXdJbhyg?usp=sharing) | [Zenodo](https://zenodo.org/record/6457824#.Ylv-gX9Bz8w) | [Alternative Google Drive](https://drive.google.com/drive/folders/1jWCCyJPKFf6Q-1zDuGRUBi6XtlmkyHlt?usp=sharing)
- The directory structure is as follow:
```
data
├── raw
│   ├── apps.csv
│   ├── reviews.csv
│   └── categories/
└── engineered
    ├── apps.csv
    ├── reviews.csv
    └── vocabulary.csv
```
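For quick exploration, here is a minimal pandas sketch. It assumes the engineered files were extracted into `data/engineered/` as in the tree above, and uses column names from the metadata schemas; `nrows` keeps memory manageable given the ~22 GB reviews file.
```python
import pandas as pd

# A minimal sketch: read a slice of the engineered reviews file
reviews = pd.read_csv("data/engineered/reviews.csv", nrows=100_000)

# Example: keep short, five-star reviews
sample = reviews[(reviews["rating"] == 5) & (reviews["words_count"] <= 10)]
print(sample[["content", "rating", "words_count"]].head())
```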
## Citation
If you use this dataset please cite as:
```
@dataset{alghamdi_fatima_2022_6457824,
  author       = {AlGhamdi, Fatima and
                  Mohammed, Reem and
                  Al-Khalifa, Hend and
                  Alowisheq, Areeb},
  title        = {GLARE: Google Apps Arabic Reviews Dataset},
  month        = apr,
  year         = 2022,
  publisher    = {Zenodo},
  version      = {1.0},
  doi          = {10.5281/zenodo.6457824},
  url          = {https://doi.org/10.5281/zenodo.6457824}
}
```
## License
This work is licensed under a
[Creative Commons Attribution 4.0 International License][cc-by].
[![CC BY 4.0][cc-by-image]][cc-by]
[cc-by]: http://creativecommons.org/licenses/by/4.0/
[cc-by-image]: https://i.creativecommons.org/l/by/4.0/88x31.png
[cc-by-shield]: https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey.svg
| Fatima-Gh/GLARE | [
"region:us"
] | 2022-03-25T11:22:43+00:00 | {} | 2022-06-09T13:00:29+00:00 |
34554c2cebd75f8104b5a8128d3685802793558c | # GEM Submission
Submission name: This is a test name
| GEM-submissions/lewtun__this-is-a-test-name__1648220072 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-03-25T14:54:36+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "This is a test name", "tags": ["evaluation", "benchmark"]} | 2022-03-25T14:54:37+00:00 |
dbe7278565c80d2528ff20554b26c1655ced9cdd |
# Dataset Card for roman_urdu_hate_speech
## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)
## Dataset Description
- **Homepage:** [roman_urdu_hate_speech homepage](https://aclanthology.org/2020.emnlp-main.197/)
- **Repository:** [roman_urdu_hate_speech repository](https://github.com/haroonshakeel/roman_urdu_hate_speech)
- **Paper:** [Hate-Speech and Offensive Language Detection in Roman Urdu](https://aclanthology.org/2020.emnlp-main.197.pdf)
- **Leaderboard:** [N/A]
- **Point of Contact:** [M. Haroon Shakeel](mailto:m.shakeel@lums.edu.pk)
### Dataset Summary
The Roman Urdu Hate-Speech and Offensive Language Detection (RUHSOLD) dataset is a Roman Urdu dataset of tweets annotated by experts in the relevant language. The authors develop the gold standard for two sub-tasks. The first sub-task is based on binary labels of Hate-Offensive content and Normal content (i.e., inoffensive language); these labels are self-explanatory. The authors refer to this sub-task as coarse-grained classification. The second sub-task defines Hate-Offensive content with four labels at a granular level. These labels are the most relevant for the demographic of users who converse in Roman Urdu and are defined in the related literature. The authors refer to this sub-task as fine-grained classification. The objective behind creating two gold standards is to enable researchers to evaluate hate speech detection approaches in both easier (coarse-grained) and more challenging (fine-grained) scenarios.
### Supported Tasks and Leaderboards
- 'multi-class-classification', 'text-classification-other-binary classification': The dataset can be used for both multi-class classification and binary classification, as it contains both coarse-grained and fine-grained labels.
### Languages
The text of this dataset is Roman Urdu. The associated BCP-47 code is 'ur'.
## Dataset Structure
### Data Instances
The dataset consists of two parts: coarse-grained examples and fine-grained examples. In the coarse-grained version, tweets are labelled as abusive or normal, whereas in the fine-grained version several classes of hate are associated with a tweet.
For the coarse-grained segment of the dataset, the label mapping is:

Task 1: Coarse-grained Classification Labels
- 0: Abusive/Offensive
- 1: Normal

Whereas for the fine-grained segment of the dataset, the label mapping is:

Task 2: Fine-grained Classification Labels
- 0: Abusive/Offensive
- 1: Normal
- 2: Religious Hate
- 3: Sexism
- 4: Profane/Untargeted
An example from Roman Urdu Hate Speech looks as follows:
```
{
  'tweet': 'there are some yahodi daboo like imran chore zakat khore',
  'label': 0
}
```
### Data Fields
- `tweet`: a string containing the tweet text; 10,000 tweets were randomly sampled from a base of 50,000 tweets and annotated for the dataset.
- `label`: an annotation manually assigned by three independent annotators; during the annotation process, all conflicts were resolved by a majority vote among the three annotators.
### Data Splits
The data of each segment, coarse-grained and fine-grained, is further split into training, validation and test sets. The data is split into train, test, and validation sets with a 70/20/10 split ratio, using stratification based on the fine-grained labels.
The use of stratified sampling is deemed necessary to preserve the same label ratios across all splits.
The final split sizes are as follows:

| Train | Valid | Test |
|-------|-------|------|
| 7209  | 2003  | 801  |
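A minimal loading sketch; the dataset id and configuration names come from this card's metadata:
```python
from datasets import load_dataset

# Coarse-grained configuration; use "Fine_Grained" for the 5-class task
ds = load_dataset("roman_urdu_hate_speech", "Coarse_Grained")

print(ds["train"].features["label"].names)  # ['Abusive/Offensive', 'Normal']
print(ds["train"][0])
```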
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was created by Hammad Rizwan, Muhammad Haroon Shakeel, and Asim Karim during work done at the Department of Computer Science, Lahore University of Management Sciences (LUMS), Lahore, Pakistan.
### Licensing Information
The licensing status of the dataset hinges on the legal status of the [Roman Urdu Hate Speech Dataset Repository](https://github.com/haroonshakeel/roman_urdu_hate_speech), which is under the MIT License.
### Citation Information
```bibtex
@inproceedings{rizwan2020hate,
  title={Hate-speech and offensive language detection in roman Urdu},
  author={Rizwan, Hammad and Shakeel, Muhammad Haroon and Karim, Asim},
  booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  pages={2512--2522},
  year={2020}
}
```
### Contributions
Thanks to [@bp-high](https://github.com/bp-high) for adding this dataset. | roman_urdu_hate_speech | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ur",
"license:mit",
"binary classification",
"region:us"
] | 2022-03-25T15:51:45+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced"], "language": ["ur"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "roman_urdu_hate_speech", "tags": ["binary classification"], "dataset_info": [{"config_name": "Coarse_Grained", "features": [{"name": "tweet", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Abusive/Offensive", "1": "Normal"}}}}], "splits": [{"name": "train", "num_bytes": 725719, "num_examples": 7208}, {"name": "test", "num_bytes": 218087, "num_examples": 2002}, {"name": "validation", "num_bytes": 79759, "num_examples": 800}], "download_size": 927937, "dataset_size": 1023565}, {"config_name": "Fine_Grained", "features": [{"name": "tweet", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Abusive/Offensive", "1": "Normal", "2": "Religious Hate", "3": "Sexism", "4": "Profane/Untargeted"}}}}], "splits": [{"name": "train", "num_bytes": 723670, "num_examples": 7208}, {"name": "test", "num_bytes": 219359, "num_examples": 2002}, {"name": "validation", "num_bytes": 723670, "num_examples": 7208}], "download_size": 1519423, "dataset_size": 1666699}]} | 2024-01-18T11:19:02+00:00 |
0d783a9fe5e53539e7bc40df462c5a641dd48ce3 | # Dataset Card for Winoground
## Dataset Description
Winoground is a novel task and dataset for evaluating the ability of vision and language models to conduct visio-linguistic compositional reasoning. Given two images and two captions, the goal is to match them correctly—but crucially, both captions contain a completely identical set of words/morphemes, only in a different order. The dataset was carefully hand-curated by expert annotators and is labeled with a rich set of fine-grained tags to assist in analyzing model performance. In our accompanying paper, we probe a diverse range of state-of-the-art vision and language models and find that, surprisingly, none of them do much better than chance. Evidently, these models are not as skilled at visio-linguistic compositional reasoning as we might have hoped. In the paper, we perform an extensive analysis to obtain insights into how future work might try to mitigate these models’ shortcomings. We aim for Winoground to serve as a useful evaluation set for advancing the state of the art and driving further progress in the field.
We are thankful to Getty Images for providing the image data.
## Data
The captions and tags are located in `data/examples.jsonl` and the images are located in `data/images.zip`. You can load the data as follows:
```python
from datasets import load_dataset
examples = load_dataset('facebook/winoground', use_auth_token=<YOUR USER ACCESS TOKEN>)
```
You can get `<YOUR USER ACCESS TOKEN>` by following these steps:
1) log into your Hugging Face account
2) click on your profile picture
3) click "Settings"
4) click "Access Tokens"
5) generate an access token
## Model Predictions and Statistics
The image-caption model scores from our paper are saved in `statistics/model_scores`. To compute many of the tables and graphs from our paper, run the following commands:
```bash
git clone https://huggingface.co/datasets/facebook/winoground
cd winoground
pip install -r statistics/requirements.txt
python statistics/compute_statistics.py
```
## FLAVA Colab notebook code for Winoground evaluation
https://colab.research.google.com/drive/1c3l4r4cEA5oXfq9uXhrJibddwRkcBxzP?usp=sharing
## CLIP Colab notebook code for Winoground evaluation
https://colab.research.google.com/drive/15wwOSte2CjTazdnCWYUm2VPlFbk2NGc0?usp=sharing
## Paper FAQ
### Why is the group score for a random model equal to 16.67%?
<details>
<summary>Click for a proof!</summary>
Intuitively, we might think that we can multiply the probabilities from the image and text score to get 1/16 = 6.25%. But, these scores are not conditionally independent. We can find the correct probability with combinatorics:
For ease of notation, let:
- a = s(c_0, i_0)
- b = s(c_1, i_0)
- c = s(c_1, i_1)
- d = s(c_0, i_1)
The group score is defined as 1 if a > b, a > d, c > b, c > d and 0 otherwise.
As one would say to GPT-3, let's think step by step:
1. There are 4! = 24 different orderings of a, c, b, d.
2. There are only 4 orderings for which a > b, a > d, c > b, c > d:
- a, c, b, d
- a, c, d, b
- c, a, b, d
- c, a, d, b
3. No ordering is any more likely than another because a, b, c, d are sampled from the same random distribution.
4. We can conclude that the probability of a group score of 1 is 4/24 = 0.166...
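A quick Monte Carlo sanity check of this argument (a minimal sketch; it draws the four scores i.i.d. and counts how often the group condition holds):
```python
import random

trials, hits = 100_000, 0
for _ in range(trials):
    a, b, c, d = (random.random() for _ in range(4))
    # group condition from the definition above
    if a > b and a > d and c > b and c > d:
        hits += 1
print(hits / trials)  # ~0.1667 = 4/24
```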
</details>
## Citation Information
[https://arxiv.org/abs/2204.03162](https://arxiv.org/abs/2204.03162)
Tristan Thrush and Candace Ross contributed equally.
```bibtex
@inproceedings{thrush_and_ross2022winoground,
  author = {Tristan Thrush and Ryan Jiang and Max Bartolo and Amanpreet Singh and Adina Williams and Douwe Kiela and Candace Ross},
  title = {Winoground: Probing vision and language models for visio-linguistic compositionality},
  booktitle = {CVPR},
  year = 2022,
}
``` | facebook/winoground | [
"task_categories:image-to-text",
"task_categories:text-to-image",
"task_categories:image-classification",
"language:en",
"arxiv:2204.03162",
"region:us"
] | 2022-03-25T22:27:33+00:00 | {"language": ["en"], "task_categories": ["image-to-text", "text-to-image", "image-classification"], "pretty_name": "Winoground", "extra_gated_prompt": "By clicking on \u201cAccess repository\u201d below, you also agree that you are using it solely for research purposes. The full license agreement is available in the dataset files."} | 2023-11-02T17:15:41+00:00 |
0c8f5b621e9809eda4a0c0ff337eaefc5409a635 | nndhung/garlic | [
"region:us"
] | 2022-03-26T02:26:12+00:00 | {} | 2022-03-26T02:27:35+00:00 |
|
551df3187d04f5f7ea4d6ebf062d016a72a2680c |
# lang-uk's ner-uk dataset
A dataset for Ukrainian Named Entity Recognition.
The original dataset is located at https://github.com/lang-uk/ner-uk. All credit for creation of the dataset goes to the contributors of https://github.com/lang-uk/ner-uk.
# License
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/Dataset" property="dct:title" rel="dct:type">"Корпус NER-анотацій українських текстів"</span> by <a xmlns:cc="http://creativecommons.org/ns#" href="https://github.com/lang-uk" property="cc:attributionName" rel="cc:attributionURL">lang-uk</a> is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.<br />Based on a work at <a xmlns:dct="http://purl.org/dc/terms/" href="https://github.com/lang-uk/ner-uk" rel="dct:source">https://github.com/lang-uk/ner-uk</a>. | benjamin/ner-uk | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"language:uk",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-03-26T10:10:50+00:00 | {"language": ["uk"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["monolingual"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"]} | 2022-10-26T10:47:43+00:00 |
7f6ffb530d7e1b220a1b87b006450452a3b5e1af | Marmoot/Fake_News_jpposadas | [
"license:cc-by-4.0",
"region:us"
] | 2022-03-26T13:17:33+00:00 | {"license": "cc-by-4.0"} | 2022-03-26T13:51:48+00:00 |
|
1d387cb76539715be292ce6cabc052efb0e79918 | Georgii/russianPoetry | [
"license:mit",
"region:us"
] | 2022-03-26T16:31:53+00:00 | {"license": "mit"} | 2022-03-26T16:32:30+00:00 |
|
264197c1d45d2aa4c8dc4e992e89e432a6e889c4 | name: **TRBLLmaker**
annotations_creators: found
language_creators: found
languages: en-US
licenses: Genius-Ventura-Toker
multilinguality: monolingual
source_datasets: original
task_categories: sequence-modeling
task_ids: sequence-modeling-seq2seq_generate
# Dataset Card for TRBLLmaker Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Split info](#split-info)
- [Dataset Creation](#dataset-creation)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/venturamor/TRBLLmaker-NLP
- **Paper:** in the GitHub repository
### Dataset Summary
TRBLLmaker - To Read Between Lyrics Lines.
This dataset is used to train a model that takes several lines of a song's lyrics as input and generates a possible interpretation/meaning of them; the songs' metadata can also be used for various tasks such as classification.
The dataset is based on data from the 'Genius' website, which hosts a global collection of song lyrics and provides annotations, interpretations and additional music knowledge.
We used the 'Genius' API, created a private client and extracted the relevant raw data from the Genius servers.
We extracted the most popular songs in each genre: pop, rap, rock, country and r&b. Afterwards, we created a varied pool of 150 artists associated with different music styles and periods, and extracted a maximum of 100 samples from each.
We combined all the data, without repetitions, into one final database. After cleaning out non-English lyrics, we obtained a final corpus containing 8,808 different songs and 60,630 samples overall, where each sample is a specific sentence from a song's lyrics paired with its top-rated annotation.
### Supported Tasks and Leaderboards
Seq2Seq
### Languages
[En] - English
## Dataset Structure
### Data Fields
We store each sample in a 'SongInfo' structure with the following attributes: title, genre, annotations, and the song's metadata.
The metadata contains the artist's name, the song's id on the server, the lyrics, and statistics such as page views.
### Data Splits
The dataset is provided in two variants, split over samples and over whole songs:
- `train`, `test`, `validation` - sample-level splits
- `train_songs`, `test_songs`, `validation_songs` - song-level splits
## Split info
Both the song-level and the sample-level variants use the ratios train [0.64 (0.8 * 0.8)], test [0.2], validation [0.16 (0.8 * 0.2)].
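A minimal loading sketch; the Hub id comes from this card, and it is an assumption that the default configuration exposes the splits listed above:
```python
from datasets import load_dataset

# A minimal sketch: load TRBLLmaker and report split sizes
trbll = load_dataset("MorVentura/TRBLLmaker")
print({split: trbll[split].num_rows for split in trbll})
```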
## Dataset Creation
### Source Data
Genius - https://genius.com/
### Annotations
#### Who are the annotators?
Top-ranked annotations by users on the Genius website, or official Genius annotations.
## Considerations for Using the Data
### Social Impact of Dataset
We are excited about the future of applying attention-based models to tasks such as meaning generation.
We hope this dataset will encourage more NLP researchers to improve the way we understand and enjoy songs, since
achieving artistic comprehension is another step toward the goal of robust AI.
### Other Known Limitations
The artists list can be found here.
## Additional Information
### Dataset Curators
This dataset was created by Mor Ventura and Michael Toker.
### Licensing Information
All source data belongs to Genius.
### Contributions
Thanks to [@venturamor, @tokeron](https://github.com/venturamor/TRBLLmaker-NLP) for adding this dataset. | MorVentura/TRBLLmaker | [
"region:us"
] | 2022-03-26T17:29:20+00:00 | {"TODO": "Add YAML tags here."} | 2022-03-26T18:44:51+00:00 |
701f90e92dca56a28a7439406291a565c55576ef |
# Dataset Card for ESsnli
## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Machine-translated Spanish version of the Stanford Natural Language Inference dataset.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Machine-generated Spanish (from an English-language original corpus).
## Dataset Structure
### Data Instances
For each instance, there is a string for the premise, a string for the hypothesis, a string for the label given by the annotator and a string for the gold label, as well as data about the images that originated the sentences. Note that each premise may appear three times with a different hypothesis and label. See the [SNLI corpus viewer](https://huggingface.co/datasets/viewer/?dataset=snli) to explore more examples.
```
{
  "annotator_labels": ["contradiction"],
  "captionID": "3618932839.jpg#4",
  "gold_label": "contradiction",
  "pairID": "3618932839.jpg#4r1c",
  "sentence1": "El perro intenta saltar sobre el poste.",
  "sentence2": "Perrito durmiendo con su madre."
}
```
### Data Fields
- `sentence1`: a string used to determine the truthfulness of the hypothesis
- `sentence2`: a string that may be true, false, or whose truth conditions may not be knowable when compared to the premise
- `label`: a string whose value may be either "entailment", "contradiction" or "neutral".
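A minimal loading sketch; the Hub id comes from this card, and the existence of a `train` split is an assumption:
```python
from datasets import load_dataset

# A minimal sketch: load ESsnli and inspect one pair
essnli = load_dataset("medardodt/ESsnli")

example = essnli["train"][0]
print(example["sentence1"], "|", example["sentence2"], "->", example["gold_label"])
```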
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
This corpus was built to remedy the scarcity of annotated Spanish-language datasets for NLI. It was generated by translating the original SNLI dataset to Spanish using Argos. While machine translation is far from an ideal source for semantic classification, it is an aid to enlarging the available data.
### Source Data
#### Initial Data Collection and Normalization
The hypotheses (sentence2) were elicited by presenting crowdworkers with captions from preexisting datasets without the associated photos, but the vocabulary of the hypotheses still reflects the content of the photos as well as the caption style of writing (e.g. mostly present tense). The dataset developers report 37,026 distinct words in the corpus, ignoring case. They allowed bare NPs as well as full sentences. Using the Stanford PCFG Parser 3.5.2 (Klein and Manning, 2003) trained on the standard training set as well as on the Brown Corpus (Francis and Kucera 1979), the authors report that 74% of the premises and 88.9% of the hypotheses result in a parse rooted with an 'S'. The corpus was developed between 2014 and 2015.
Crowdworkers were presented with a caption without the associated photo and asked to produce three alternate captions, one that is definitely true, one that might be true, and one that is definitely false. See Section 2.1 and Figure 1 for details (Bowman et al., 2015).
The corpus includes content from the [Flickr 30k corpus](http://shannon.cs.illinois.edu/DenotationGraph/) and the [VisualGenome corpus](https://visualgenome.org/). The photo captions used to prompt the data creation were collected on Flickr by [Young et al. (2014)](https://www.aclweb.org/anthology/Q14-1006.pdf), who extended the Flickr 8K dataset developed by [Hodosh et al. (2013)](https://www.jair.org/index.php/jair/article/view/10833). Hodosh et al. collected photos from the following Flickr groups: strangers!, Wild-Child (Kids in Action), Dogs in Action (Read the Rules), Outdoor Activities, Action Photography, Flickr-Social (two or more people in the photo). Young et al. do not list the specific groups they collected photos from. The VisualGenome corpus also contains images from Flickr, originally collected in [MS-COCO](https://cocodataset.org/#home) and [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/).
The premises (sentence1) from the Flickr 30k corpus were corrected for spelling using the Linux spell checker, and ungrammatical sentences were removed. Bowman et al. do not report any normalization, though they note that punctuation and capitalization are often omitted.
#### Who are the source language producers?
A large portion of the premises (160k) were produced in the [Flickr 30k corpus](http://shannon.cs.illinois.edu/DenotationGraph/) by an unknown number of crowdworkers. About 2,500 crowdworkers from Amazon Mechanical Turk produced the associated hypotheses. The premises from the Flickr 30k project describe people and animals whose photos were collected and presented to the Flickr 30k crowdworkers, but the SNLI corpus did not present the photos to the hypotheses creators.
The Flickr 30k corpus did not report crowdworker or photo subject demographic information or crowdworker compensation. The SNLI crowdworkers were compensated per HIT at rates between $.1 and $.5 with no incentives. Workers who ignored the guidelines were disqualified, and automated bulk submissions were rejected. No demographic information was collected from the SNLI crowdworkers.
An additional 4,000 premises come from the pilot study of the [VisualGenome corpus](https://visualgenome.org/static/paper/Visual_Genome.pdf). Though the pilot study itself is not described, the location information of the 33,000 AMT crowdworkers that participated over the course of the 6 months of data collection are aggregated. Most of the workers were located in the United States (93%), with others from the Philippines, Kenya, India, Russia, and Canada. Workers were paid $6-$8 per hour.
### Annotations
#### Annotation process
56,941 of the total sentence pairs were further annotated in a validation task. Four annotators each labeled a premise-hypothesis pair as entailment, contradiction, or neither, resulting in 5 total judgements including the original hypothesis author judgement. See Section 2.2 for more details (Bowman et al., 2015).
The authors report 3/5 annotator agreement on 98% of the validation set and unanimous annotator agreement on 58.3% of the validation set. If a label was chosen by three annotators, that label was made the gold label. Following from this, 2% of the data did not have a consensus label and was labeled '-' by the authors.
#### Who are the annotators?
The annotators of the validation task were a closed set of about 30 trusted crowdworkers on Amazon Mechanical Turk. No demographic information was collected. Annotators were compensated per HIT between $.1 and $.5 with $1 bonuses in cases where annotator labels agreed with the curators' labels for 250 randomly distributed examples.
### Personal and Sensitive Information
The dataset does not contain any personal information about the authors or the crowdworkers, but may contain descriptions of the people in the original Flickr photos.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
The language reflects the content of the photos collected from Flickr, as described in the [Data Collection](#initial-data-collection-and-normalization) section. [Rudinger et al (2017)](https://www.aclweb.org/anthology/W17-1609.pdf) use pointwise mutual information to calculate a measure of association between a manually selected list of tokens corresponding to identity categories and the other words in the corpus, showing strong evidence of stereotypes across gender categories. They also provide examples in which crowdworkers reproduced harmful stereotypes or pejorative language in the hypotheses.
### Other Known Limitations
The translation of the sentences was mostly unsupervised and may introduce some noise in the corpus. Machine translation from an English-language corpus is likely to generate syntactic and lexical forms that differ from those a human Spanish speaker would produce.
[Gururangan et al (2018)](https://www.aclweb.org/anthology/N18-2017.pdf), [Poliak et al (2018)](https://www.aclweb.org/anthology/S18-2023.pdf), and [Tsuchiya (2018)](https://www.aclweb.org/anthology/L18-1239.pdf) show that the SNLI corpus has a number of annotation artifacts. Using various classifiers, Poliak et al correctly predicted the label of the hypothesis 69% of the time without using the premise, Gururangan et al 67% of the time, and Tsuchiya 63% of the time.
## Additional Information
### Dataset Curators
The SNLI corpus was developed by Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning as part of the [Stanford NLP group](https://nlp.stanford.edu/).
It was supported by a Google Faculty Research Award, a gift from Bloomberg L.P., the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) contract no. FA8750-13-2-0040, the National Science Foundation under grant no. IIS 1159679, and the Department of the Navy, Office of Naval Research, under grant no. N00014-10-1-0109.
### Licensing Information
This corpus is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0).
The Stanford Natural Language Inference Corpus is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/).
### Citation Information
[Needs More Information] | medardodt/ESsnli | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:n<100K",
"source_datasets:extended|snli",
"region:us"
] | 2022-03-26T19:07:01+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "multilinguality": ["monolingual"], "size_categories": ["n<100K"], "source_datasets": ["extended|snli"], "task_categories": ["text-classification"], "task_ids": ["natural-language-inference"], "languages": ["es"], "licenses": ["cc-by-nc-sa-4.0"]} | 2022-03-26T22:03:14+00:00 |
e7b37332d07b614d95d1dd7c99904f825180f08a |
## How to use the data sets
This dataset contains more than 16,000 unique pairs of protein sequences and ligand SMILES, and the coordinates
of their complexes.
SMILES are assumed to be tokenized by the regex from P. Schwaller.
Every (x, y, z) ligand coordinate maps onto a SMILES token, and is *nan* if the token does not represent an atom.
Every receptor coordinate maps onto the C-alpha coordinate of that residue.
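For illustration, here is a tokenization sketch using one common form of the Schwaller SMILES regex (the Molecular Transformer pattern); treat the exact pattern as an assumption and check it against the tokenizer you use:
```python
import re

# One common variant of the Schwaller SMILES tokenization regex (assumed
# to be the pattern referred to above)
SMI_REGEX = r"(\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\.|=|#|-|\+|\\|\/|:|~|@|\?|>|\*|\$|\%[0-9]{2}|[0-9])"

tokens = re.findall(SMI_REGEX, "CC(=O)Oc1ccccc1C(=O)O")  # aspirin
print(tokens)  # each token lines up with one (x, y, z) coordinate or nan
```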
The dataset can be used to fine-tune a language model; all data comes from PDBBind-cn.
### Use the already preprocessed data
Load a test/train split using
```
from datasets import load_dataset
train = load_dataset("jglaser/pdbbind_complexes",split='train[:90%]')
validation = load_dataset("jglaser/pdbbind_complexes",split='train[90%:]')
```
### Pre-process yourself
To manually perform the preprocessing, download the data sets from PDBBind-cn.
Register for an account at <https://www.pdbbind.org.cn/>, confirm the validation
email, then log in and download
- the Index files (1)
- the general protein-ligand complexes (2)
- the refined protein-ligand complexes (3)
Extract those files into `pdbbind/data`
Run the script `pdbbind.py` in a compute job on an MPI-enabled cluster
(e.g., `mpirun -n 64 pdbbind.py`).
| jglaser/pdbbind_complexes | [
"molecules",
"chemistry",
"SMILES",
"region:us"
] | 2022-03-26T21:30:56+00:00 | {"tags": ["molecules", "chemistry", "SMILES"]} | 2022-05-14T19:15:20+00:00 |
204d96b91f91c546df38a2284d250eb346fe0c77 |
# Dataset Card for ekar_chinese
## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://ekar-leaderboard.github.io
- **Paper:** [E-KAR: A Benchmark for Rationalizing Natural Language Analogical Reasoning](https://aclanthology.org/2022.findings-acl.311)
- **Leaderboard:** https://eval.ai/web/challenges/challenge-page/1671/overview
- **Point of Contact:** jjchen19@fudan.edu.cn
### Dataset Summary
***New!*** (9/18/2022) E-KAR `v1.1` is officially released (at the `main` branch), **with a higher-quality English dataset!** In `v1.1`, we further improve the Chinese-to-English translation quality of the English E-KAR, with over 600 problems and over 1,000 explanations manually adjusted. You can still find the previous version (as in the paper) in the `v1.0` branch of the repo. For more information please refer to https://ekar-leaderboard.github.io.
The ability to recognize analogies is fundamental to human cognition. Existing benchmarks to test word analogy do not reveal the underneath process of analogical reasoning of neural models. Holding the belief that models capable of reasoning should be right for the right reasons, we propose a first-of-its-kind Explainable Knowledge-intensive Analogical Reasoning benchmark (E-KAR). Our benchmark consists of 1,655 (in Chinese) and 1,251 (in English) problems sourced from the Civil Service Exams, which require intensive background knowledge to solve. More importantly, we design a free-text explanation scheme to explain whether an analogy should be drawn, and manually annotate them for each and every question and candidate answer. Empirical results suggest that this benchmark is very challenging for some state-of-the-art models for both explanation generation and analogical question answering tasks, which invites further research in this area.
### Supported Tasks and Leaderboards
- `analogical-qa`: The dataset can be used to train a model for analogical reasoning in the form of multiple-choice QA.
- `explanation-generation`: The dataset can be used to generate free-text explanations to rationalize analogical reasoning.
This dataset supports two task modes: EASY mode and HARD mode:
- `EASY mode`: where query explanation can be used as part of the input.
- `HARD mode`: no explanation is allowed as part of the input.
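A minimal sketch of building inputs for the two modes (the repository ID `Jiangjie/ekar_chinese` and the split name are assumptions based on this card's links; treat them as such if the hub layout differs):

```python
from datasets import load_dataset

ds = load_dataset("Jiangjie/ekar_chinese")  # assumed repository ID
ex = ds["train"][0]                         # assumed split name

# HARD mode: only the query and the candidate answers are visible.
hard_input = ex["question"] + " -> " + " | ".join(ex["choices"]["text"])

# EASY mode: the query explanation (first entry) may be prepended to the input.
easy_input = ex["explanation"][0] + " " + hard_input
```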
### Languages
This dataset is in Chinese, with its [English version](https://huggingface.co/datasets/Jiangjie/ekar_english).
## Dataset Structure
### Data Instances
```json
{
"id": "982f17-en",
"question": "plant:coal",
"choices": {
"label": [
"A",
"B",
"C",
"D"
],
"text": [
"white wine:aged vinegar",
"starch:corn",
"milk:yogurt",
"pickled cabbage:cabbage"
]
},
"answerKey": "C",
"explanation": [
"\"plant\" is the raw material of \"coal\".",
"both \"white wine\" and \"aged vinegar\" are brewed.",
"\"starch\" is made of \"corn\", and the order of words is inconsistent with the query.",
"\"yogurt\" is made from \"milk\".",
"\"pickled cabbage\" is made of \"cabbage\", and the word order is inconsistent with the query."
],
"relation": [
[["plant", "coal", "R3.7"]],
[["white wine", "aged vinegar", "R2.4"]],
[["corn", "starch", "R3.7"]],
[["milk", "yogurt", "R3.7"]],
[["cabbage", "pickled cabbage", "R3.7"]]
]
}
```
### Data Fields
- id: a string identifier for each example.
- question: query terms.
- choices: candidate answer terms.
- answerKey: correct answer.
- explanation: explanations for query (1st) and candidate answers (2nd-5th).
- relation: annotated relations for terms in the query (1st) and candidate answers (2nd-5th).
### Data Splits
| name |train|validation|test|
|:-----:|:---:|:--------:|:--:|
|default| 1155 | 165 | 335 |
|description| | | blinded |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop analogical reasoning systems that are right for the right reasons.
### Discussion of Biases
This dataset is sourced and translated from the Civil Service Examinations of China. Therefore, it may contain information biased to Chinese culture.
### Other Known Limitations
1. The explanation annotation process in E-KAR (not the EG task) is mostly post-hoc and reflects only the result of reasoning. Humans solve the analogy problems in a trial-and-error manner, i.e., adjusting the abduced source structure and trying to find the most suited one for all candidate answers. Therefore, such explanations cannot offer supervision for intermediate reasoning.
2. E-KAR only presents one feasible explanation for each problem, whereas there may be several.
## Additional Information
### Dataset Curators
The dataset was initially created and curated by Jiangjie Chen (Fudan University, ByteDance), Rui Xu (Fudan University), Ziquan Fu (Brain Technologies, Inc.), Wei Shi (South China University of Technology), Xinbo Zhang (ByteDance), Changzhi Sun (ByteDance) and other colleagues at ByteDance and Fudan University.
### Licensing Information
[Needs More Information]
### Citation Information
```latex
@inproceedings{chen-etal-2022-e,
title = "{E}-{KAR}: A Benchmark for Rationalizing Natural Language Analogical Reasoning",
author = "Chen, Jiangjie and
Xu, Rui and
Fu, Ziquan and
Shi, Wei and
Li, Zhongqiao and
Zhang, Xinbo and
Sun, Changzhi and
Li, Lei and
Xiao, Yanghua and
Zhou, Hao",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-acl.311",
pages = "3941--3955",
}
```
| jiangjiechen/ekar_chinese | [
"task_categories:question-answering",
"task_categories:text-generation",
"task_ids:explanation-generation",
"size_categories:1K<n<2K",
"source_datasets:original",
"language:zh",
"license:afl-3.0",
"region:us"
] | 2022-03-27T05:00:49+00:00 | {"language": ["zh"], "license": ["afl-3.0"], "size_categories": ["1K<n<2K"], "source_datasets": ["original"], "task_categories": ["question-answering", "text-generation"], "task_ids": ["analogical-qa", "explanation-generation"]} | 2023-01-11T08:12:59+00:00 |
a4aa3ae597a4308ef79cd34138a2ddeba611ed51 |
# Dataset Card for ekar_english
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://ekar-leaderboard.github.io
- **Paper:** [E-KAR: A Benchmark for Rationalizing Natural Language Analogical Reasoning](https://aclanthology.org/2022.findings-acl.311)
- **Leaderboard:** https://eval.ai/web/challenges/challenge-page/1671/overview
- **Point of Contact:** jjchen19@fudan.edu.cn
### Dataset Summary
***New!***(9/18/2022) E-KAR `v1.1` is officially released (at the `main` branch), **with a higher-quality English dataset!** In `v1.1`, we further improve the Chinese-to-English translation quality of the English E-KAR, with over 600 problems and over 1,000 explanations manually adjusted. You can still find previous version (as in the paper) in the `v1.0` branch in the repo. For more information please refer to https://ekar-leaderboard.github.io.
The ability to recognize analogies is fundamental to human cognition. Existing benchmarks to test word analogy do not reveal the underlying process of analogical reasoning in neural models. Holding the belief that models capable of reasoning should be right for the right reasons, we propose a first-of-its-kind Explainable Knowledge-intensive Analogical Reasoning benchmark (E-KAR). Our benchmark consists of 1,655 (in Chinese) and 1,251 (in English) problems sourced from the Civil Service Exams, which require intensive background knowledge to solve. More importantly, we design a free-text explanation scheme to explain whether an analogy should be drawn, and manually annotate them for each and every question and candidate answer. Empirical results suggest that this benchmark is very challenging for some state-of-the-art models for both explanation generation and analogical question answering tasks, which invites further research in this area.
### Supported Tasks and Leaderboards
- `analogical-qa`: The dataset can be used to train a model for analogical reasoning in the form of multiple-choice QA.
- `explanation-generation`: The dataset can be used to generate free-text explanations to rationalize analogical reasoning.
This dataset supports two task modes: EASY mode and HARD mode:
- `EASY mode`: where query explanation can be used as part of the input.
- `HARD mode`: no explanation is allowed as part of the input.
### Languages
This dataset is in English, which is translated from [its Chinese version](https://huggingface.co/datasets/Jiangjie/ekar_chinese/)
## Dataset Structure
### Data Instances
```json
{
"id": "982f17-en",
"question": "plant:coal",
"choices": {
"label": [
"A",
"B",
"C",
"D"
],
"text": [
"white wine:aged vinegar",
"starch:corn",
"milk:yogurt",
"pickled cabbage:cabbage"
]
},
"answerKey": "C",
"explanation": [
"\"plant\" is the raw material of \"coal\".",
"both \"white wine\" and \"aged vinegar\" are brewed.",
"\"starch\" is made of \"corn\", and the order of words is inconsistent with the query.",
"\"yogurt\" is made from \"milk\".",
"\"pickled cabbage\" is made of \"cabbage\", and the word order is inconsistent with the query."
],
"relation": [
[["plant", "coal", "R3.7"]],
[["white wine", "aged vinegar", "R2.4"]],
[["corn", "starch", "R3.7"]],
[["milk", "yogurt", "R3.7"]],
[["cabbage", "pickled cabbage", "R3.7"]]
]
}
```
### Data Fields
- id: a string identifier for each example.
- question: query terms.
- choices: candidate answer terms.
- answerKey: correct answer.
- explanation: explanations for query (1st) and candidate answers (2nd-5th).
- relation: annotated relations for terms in the query (1st) and candidate answers (2nd-5th).
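A sketch of resolving the fields above into the gold answer and its rationale (the repository ID and split name are assumptions based on this card's links):

```python
from datasets import load_dataset

ds = load_dataset("Jiangjie/ekar_english")  # assumed repository ID
ex = ds["validation"][0]                    # assumed split name

# answerKey ("A".."D") indexes into the parallel candidate lists.
i = ex["choices"]["label"].index(ex["answerKey"])
print(ex["choices"]["text"][i])   # gold answer text
print(ex["explanation"][i + 1])   # explanation[0] is for the query itself
```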
### Data Splits
| name |train|validation|test|
|:-----:|:---:|:--------:|:--:|
|default| 870| 119| 262|
|description| | | blinded |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop analogical reasoning systems that are right for the right reasons.
### Discussion of Biases
This dataset is sourced and translated from the Civil Service Examinations of China. Therefore, despite the effort that the authors try to remove or rewrite such problems, it may still contain information biased to Chinese culture.
### Other Known Limitations
1. The explanation annotation process in E-KAR (not the EG task) is mostly post-hoc and reflects only the result of reasoning. Humans solve the analogy problems in a trial-and-error manner, i.e., adjusting the abduced source structure and trying to find the most suited one for all candidate answers. Therefore, such explanations cannot offer supervision for intermediate reasoning.
2. E-KAR only presents one feasible explanation for each problem, whereas there may be several.
3. The English version of E-KAR is machine-translated and post-edited by humans. Although the authors have tried their best to maintain the translation quality, there could be some unsatisfying samples in the English dataset, e.g., culture-specific ones, ambiguous ones after translation, etc.
## Additional Information
### Dataset Curators
The dataset was initially created and curated by Jiangjie Chen (Fudan University, ByteDance), Rui Xu (Fudan University), Ziquan Fu (Brain Technologies, Inc.), Wei Shi (South China University of Technology), Xinbo Zhang (ByteDance), Changzhi Sun (ByteDance) and other colleagues at ByteDance and Fudan University.
### Licensing Information
[Needs More Information]
### Citation Information
```latex
@inproceedings{chen-etal-2022-e,
title = "{E}-{KAR}: A Benchmark for Rationalizing Natural Language Analogical Reasoning",
author = "Chen, Jiangjie and
Xu, Rui and
Fu, Ziquan and
Shi, Wei and
Li, Zhongqiao and
Zhang, Xinbo and
Sun, Changzhi and
Li, Lei and
Xiao, Yanghua and
Zhou, Hao",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-acl.311",
pages = "3941--3955",
}
``` | jiangjiechen/ekar_english | [
"task_categories:question-answering",
"task_categories:text-generation",
"task_ids:explanation-generation",
"size_categories:1K<n<2K",
"source_datasets:original",
"language:en",
"license:afl-3.0",
"region:us"
] | 2022-03-27T05:03:06+00:00 | {"language": ["en"], "license": ["afl-3.0"], "size_categories": ["1K<n<2K"], "source_datasets": ["original"], "task_categories": ["question-answering", "text-generation"], "task_ids": ["analogical-qa", "explanation-generation"]} | 2023-01-11T08:13:18+00:00 |
a1a0b053c5c1fdfb9a26a5557be62272b1582b2c |
# Dataset Card for taiwanese_english_translation
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage: https://taigi.fhl.net/list.html**
### Dataset Summary
[More Information Needed]
### Languages
Source Language: Taiwanese (Tailo romanization system)
Target Language: English
## Dataset Structure
CSV files with two columns: `Tailo,English`.
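A minimal loading sketch (repository ID taken from this page; the split and column names are assumptions based on the `Tailo,English` header above):

```python
from datasets import load_dataset

ds = load_dataset("atenglens/taiwanese_english_translation")
pair = ds["train"][0]  # assumed split name
print(pair["Tailo"], "->", pair["English"])
```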
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@atenglens](https://github.com/atenglens) for adding this dataset. | atenglens/taiwanese_english_translation | [
"task_categories:question-answering",
"task_categories:text2text-generation",
"task_categories:text-generation",
"task_categories:translation",
"task_ids:language-modeling",
"language_creators:other",
"multilinguality:translation",
"size_categories:unknown",
"source_datasets:extended|other",
"language:tw",
"language:en",
"conditional-text-generation",
"region:us"
] | 2022-03-27T05:31:42+00:00 | {"annotations_creators": [], "language_creators": ["other"], "language": ["tw", "en"], "license": [], "multilinguality": ["translation"], "size_categories": ["unknown"], "source_datasets": ["extended|other"], "task_categories": ["question-answering", "text2text-generation", "text-generation", "translation"], "task_ids": ["language-modeling"], "pretty_name": "taiwanese_english_translation", "tags": ["conditional-text-generation"]} | 2022-10-24T18:51:45+00:00 |
e3a39e3d6c1ff7e58cbeac653518a300a875adfd |
Machine-translated Ohsumed collection (EN to ID)
Original corpora: http://disi.unitn.it/moschitti/corpora.htm
Translated using: https://huggingface.co/Helsinki-NLP/opus-mt-en-id
Compatible with the Hugging Face text-classification example script (tested with Transformers 4.17):
https://github.com/huggingface/transformers/tree/v4.17.0/examples/pytorch/text-classification
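A minimal inspection sketch before plugging the data into the text-classification script (the split and column names are assumptions; check the printout first):

```python
from datasets import load_dataset

ds = load_dataset("nadhifikbarw/id_ohsuhmed")
print(ds)              # inspect the available splits and columns
print(ds["train"][0])  # assumed split: one machine-translated abstract with its category label
```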
[Moschitti, 2003a]. Alessandro Moschitti, Natural Language Processing and Text Categorization: a study on the reciprocal beneficial interactions, PhD thesis, University of Rome Tor Vergata, Rome, Italy, May 2003. | nadhifikbarw/id_ohsuhmed | [
"task_categories:text-classification",
"language:id",
"region:us"
] | 2022-03-27T06:01:29+00:00 | {"language": ["id"], "task_categories": ["text-classification"], "source": ["http://disi.unitn.it/moschitti/corpora.htm"]} | 2022-10-25T09:03:35+00:00 |
dcfc1a380a19b9f4b30cec04a4387be24da0b2b3 | Text-to-text implementation based on https://github.com/salesforce/DocNLI
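A minimal loading sketch (the repository ID is assumed from this page; the resulting structure is printed below):

```python
from datasets import load_dataset

ds = load_dataset("stjokerli/TextToText_DocNLI_seqio")
sample = ds["train"][0]
print(sample["inputs"][:200])  # premise/hypothesis rendered as one text-to-text prompt
print(sample["targets"])       # the textual label
```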
DatasetDict({
train: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 942314
})
validation: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 234258
})
test: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 267086
})
}) | stjokerli/TextToText_DocNLI_seqio | [
"region:us"
] | 2022-03-27T13:27:45+00:00 | {} | 2022-03-27T13:46:59+00:00 |
1938d356e66bcdcdd662dd7b9285d0c4a0bc9c6b | The `squad_v010_allanswers` task, as defined in the T5 task registry: https://github.com/google-research/text-to-text-transfer-transformer/blob/main/t5/data/tasks.py
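A minimal loading sketch (the repository ID is assumed from this page; note the extra `squad` nesting shown in the printout below, so inspect the loaded object before indexing):

```python
from datasets import load_dataset

ds = load_dataset("stjokerli/TextToText_squad_seqio")
print(ds)                   # compare with the layout printed below
row = ds["train"][0]        # split names assumed to match that layout
print(row["inputs"][:120])  # assumed T5-style "question: ... context: ..." prompt
print(row["targets"])       # the answer string
```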
DatasetDict({
squad: DatasetDict({
train: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 87599
})
validation: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 10570
})
})
}) | stjokerli/TextToText_squad_seqio | [
"region:us"
] | 2022-03-27T21:28:53+00:00 | {} | 2022-03-27T21:39:25+00:00 |
3f1a89e7d89662e16fa2e0f1b9ce0af57eabdc35 | sac3tf/roman_urdu | [
"region:us"
] | 2022-03-28T03:47:59+00:00 | {} | 2022-03-28T03:50:30+00:00 |
|
e1abda026f687e917a8d9895469194736ebe872c |
# Dataset Card for Adversarial GLUE
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://adversarialglue.github.io/
- **Repository:**
- **Paper:** [arXiv](https://arxiv.org/pdf/2111.02840.pdf)
- **Leaderboard:**
- **Point of Contact:**
- **Size of downloaded dataset files:** 202.75 kB
### Dataset Summary
Adversarial GLUE Benchmark (AdvGLUE) is a comprehensive robustness evaluation benchmark that focuses on the adversarial robustness evaluation of language models. It covers five natural language understanding tasks from the famous GLUE tasks and is an adversarial version of the GLUE benchmark.
AdvGLUE considers textual adversarial attacks from different perspectives and hierarchies, including word-level transformations, sentence-level manipulations, and human-written adversarial examples, which provide comprehensive coverage of various adversarial linguistic phenomena.
### Supported Tasks and Leaderboards
Leaderboard available on the homepage: [https://adversarialglue.github.io/](https://adversarialglue.github.io/).
### Languages
AdvGLUE is derived from the GLUE benchmark, whose base language is English.
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 202.75 kB
- **Example**:
```python
>>> datasets.load_dataset('adv_glue', 'adv_sst2')['validation'][0]
{'sentence': "it 's an uneven treat that bores fun at the democratic exercise while also examining its significance for those who take part .", 'label': 1, 'idx': 0}
```
### Data Fields
The data fields are the same as in the GLUE dataset and differ by task.
The data fields are the same among all splits.
#### adv_mnli
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
#### adv_mnli_matched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
#### adv_mnli_mismatched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
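Label integers can be decoded back to their names with the split's `ClassLabel` feature; a minimal sketch using the `adv_mnli` config:

```python
from datasets import load_dataset

adv_mnli = load_dataset("adv_glue", "adv_mnli")["validation"]
label_feature = adv_mnli.features["label"]

ex = adv_mnli[0]
print(label_feature.int2str(ex["label"]))  # "entailment", "neutral", or "contradiction"
```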
#### adv_qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### adv_qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### adv_rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### adv_sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Splits
Adversarial GLUE provides only a 'dev' split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is distributed under the [CC BY-SA 4.0](http://creativecommons.org/licenses/by-sa/4.0/legalcode) license.
### Citation Information
```bibtex
@article{Wang2021AdversarialGA,
title={Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models},
author={Boxin Wang and Chejian Xu and Shuohang Wang and Zhe Gan and Yu Cheng and Jianfeng Gao and Ahmed Hassan Awadallah and B. Li},
journal={ArXiv},
year={2021},
volume={abs/2111.02840}
}
```
### Contributions
Thanks to [@jxmorris12](https://github.com/jxmorris12) for adding this dataset. | adv_glue | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:sentiment-classification",
"annotations_creators:other",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:extended|glue",
"language:en",
"license:cc-by-sa-4.0",
"paraphrase-identification",
"qa-nli",
"arxiv:2111.02840",
"region:us"
] | 2022-03-28T10:12:33+00:00 | {"annotations_creators": ["other"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["extended|glue"], "task_categories": ["text-classification"], "task_ids": ["natural-language-inference", "sentiment-classification"], "pretty_name": "Adversarial GLUE", "config_names": ["adv_mnli", "adv_mnli_mismatched", "adv_qnli", "adv_qqp", "adv_rte", "adv_sst2"], "tags": ["paraphrase-identification", "qa-nli"], "dataset_info": [{"config_name": "adv_mnli", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "validation", "num_bytes": 23712, "num_examples": 121}], "download_size": 13485, "dataset_size": 23712}, {"config_name": "adv_mnli_mismatched", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "validation", "num_bytes": 40953, "num_examples": 162}], "download_size": 25166, "dataset_size": 40953}, {"config_name": "adv_qnli", "features": [{"name": "question", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "not_entailment"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "validation", "num_bytes": 34850, "num_examples": 148}], "download_size": 19111, "dataset_size": 34850}, {"config_name": "adv_qqp", "features": [{"name": "question1", "dtype": "string"}, {"name": "question2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "not_duplicate", "1": "duplicate"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "validation", "num_bytes": 9908, "num_examples": 78}], "download_size": 7705, "dataset_size": 9908}, {"config_name": "adv_rte", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "not_entailment"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "validation", "num_bytes": 25979, "num_examples": 81}], "download_size": 15872, "dataset_size": 25979}, {"config_name": "adv_sst2", "features": [{"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "positive"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "validation", "num_bytes": 16572, "num_examples": 148}], "download_size": 10833, "dataset_size": 16572}], "configs": [{"config_name": "adv_mnli", "data_files": [{"split": "validation", "path": "adv_mnli/validation-*"}]}, {"config_name": "adv_mnli_mismatched", "data_files": [{"split": "validation", "path": "adv_mnli_mismatched/validation-*"}]}, {"config_name": "adv_qnli", "data_files": [{"split": "validation", "path": "adv_qnli/validation-*"}]}, {"config_name": "adv_qqp", "data_files": [{"split": "validation", "path": "adv_qqp/validation-*"}]}, {"config_name": "adv_rte", "data_files": [{"split": "validation", "path": "adv_rte/validation-*"}]}, {"config_name": "adv_sst2", "data_files": [{"split": "validation", "path": "adv_sst2/validation-*"}]}]} | 2024-01-09T11:45:55+00:00 |
7ea460abd146b010b3668f374c1f51068c6ff032 |
# Dataset Card for Corpus Carolina
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [sites.usp.br/corpuscarolina](https://sites.usp.br/corpuscarolina/)
- **Current Version:** 1.2 (Ada)
- **Point of Contact:** [LaViHD](mailto:lavihd@usp.br)
### Dataset Summary
Carolina is an Open Corpus for Linguistics and Artificial Intelligence with a
robust volume of texts of varied typology in contemporary Brazilian Portuguese
(1970-2021). This corpus contains documents and texts extracted from the web
and includes information (metadata) about its provenance and typology.
The documents are clustered into taxonomies and the corpus can be loaded in complete or taxonomy modes. To load a single taxonomy, pass its code as a parameter to the loading script (see the example below). Codes are 3-letter strings and possible values are:
- `dat` : datasets and other corpora;
- `jud` : judicial branch;
- `leg` : legislative branch;
- `pub` : public domain works;
- `soc` : social media;
- `uni` : university domains;
- `wik` : wikis.
Dataset Versioning:
The Carolina Corpus is under continuous development, resulting in multiple versions. The current version is v1.2, but v1.1 is also available. You can access different versions of the corpus using the `revision` parameter of `load_dataset`.
Usage Example:
```python
from datasets import load_dataset
# to load all taxonomies
corpus_carolina = load_dataset("carolina-c4ai/corpus-carolina")
# to load social media documents
social_media = load_dataset("carolina-c4ai/corpus-carolina", taxonomy="soc")
# to load previous version
corpus_carolina = load_dataset("carolina-c4ai/corpus-carolina", revision="v1.1")
```
### Supported Tasks
Carolina corpus was compiled for academic purposes,
namely linguistic and computational analysis.
### Languages
Contemporary Brazilian Portuguese (1970-2021).
## Dataset Structure
Files are stored inside `corpus` folder with a subfolder
for each taxonomy. Every file folows a XML structure
(TEI P5) and contains multiple extracted documents. For
each document, the text and metadata are exposed as
`text` and `meta` features, respectively.
### Data Instances
Every instance have the following structure.
```
{
"meta": datasets.Value("string"),
"text": datasets.Value("string")
}
```
| Code | Taxonomy | Instances | Size |
|:----:|:---------------------------|----------:|-------:|
| | **Total** | 2107045 | 11 GB |
| dat | Datasets and other Corpora | 1102049 | 4.4 GB |
| wik | Wikis | 960139 | 5.2 GB |
| jud | Judicial Branch | 40464 | 1.5 GB |
| leg | Legislative Branch | 13 | 25 MB |
| soc | Social Media | 3413 | 17 MB |
| uni | University Domains | 941 | 10 MB |
| pub | Public Domain Works | 26 | 4.5 MB |
### Data Fields
- `meta`: an XML string with a TEI-conformant `teiHeader` tag. It is exposed as text and needs to be parsed in order to access the actual metadata (see the sketch below);
- `text`: a string containing the extracted document.
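A sketch of one way to inspect the `meta` field (the exact element layout inside `teiHeader` is an assumption based on the TEI P5 standard, not a guarantee about Carolina's headers):

```python
import xml.etree.ElementTree as ET
from datasets import load_dataset

soc = load_dataset("carolina-c4ai/corpus-carolina", taxonomy="soc", split="corpus")
header = ET.fromstring(soc[0]["meta"])

# TEI elements are typically namespaced; strip namespaces for simple matching.
for el in header.iter():
    if el.tag.split("}")[-1] == "title" and el.text:
        print(el.text)  # assumed location of the document title
```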
### Data Splits
As a general corpus, Carolina does not have splits. In order to load the dataset, `corpus` is used as its single split.
## Additional Information
### Dataset Curators
The Corpus Carolina is developed by a multidisciplinary
team of linguists and computer scientists, members of the
Virtual Laboratory of Digital Humanities - LaViHD and the Artificial Intelligence Center of the University of São Paulo - C4AI.
### Licensing Information
The Open Corpus for Linguistics and Artificial Intelligence (Carolina) was
compiled for academic purposes, namely linguistic and computational analysis.
It is composed of texts assembled in various digital repositories, whose
licenses are multiple and therefore should be observed when making use of the
corpus. The Carolina headers are licensed under Creative Commons
Attribution-NonCommercial-ShareAlike 4.0 International.
### Citation Information
```
@misc{corpusCarolinaV1.1,
title={
Carolina:
The Open Corpus for Linguistics and Artificial Intelligence
},
author={
Finger, Marcelo and
Paixão de Sousa, Maria Clara and
Namiuti, Cristiane and
Martins do Monte, Vanessa and
Costa, Aline Silva and
Serras, Felipe Ribas and
Sturzeneker, Mariana Lourenço and
Guets, Raquel de Paula and
Mesquita, Renata Morais and
Mello, Guilherme Lamartine de and
Crespo, Maria Clara Ramos Morales and
Rocha, Maria Lina de Souza Jeannine and
Brasil, Patrícia and
Silva, Mariana Marques da and
Palma, Mayara Feliciano
},
howpublished={\url{
https://sites.usp.br/corpuscarolina/corpus}},
year={2022},
note={Version 1.1 (Ada)},
}
```
| carolina-c4ai/corpus-carolina | [
"task_categories:fill-mask",
"task_categories:text-generation",
"task_ids:masked-language-modeling",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1B<n<10B",
"source_datasets:original",
"language:pt",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-03-28T12:30:33+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["crowdsourced"], "language": ["pt"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1B<n<10B"], "source_datasets": ["original"], "task_categories": ["fill-mask", "text-generation"], "task_ids": ["masked-language-modeling", "language-modeling"], "pretty_name": "Carolina", "language_bcp47": ["pt-BR"]} | 2023-03-23T19:46:16+00:00 |
01540a66ded66626baf224072d8faf05b5d329d0 | # AutoTrain Dataset for project: TweetClimateAnalysis
## Dataset Description
This dataset has been automatically processed by AutoTrain for project TweetClimateAnalysis.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "What do you do if you are a global warming alarmist and real-world temperatures do not warm as much [...]",
"target": 16
},
{
"text": "(2.) A sun-blocking volcanic aerosols component to explain the sudden but temporary cooling of globa[...]",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=18, names=['0_0', '1_1', '1_2', '1_3', '1_4', '1_6', '1_7', '2_1', '2_3', '3_1', '3_2', '3_3', '4_1', '4_2', '4_4', '4_5', '5_1', '5_2'], id=None)"
}
```
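Since `target` is a `ClassLabel`, the integer can be decoded back to its class name; a minimal sketch (the repository ID and split name are assumed from this page):

```python
from datasets import load_dataset

ds = load_dataset("KeithHorgan98/autotrain-data-TweetClimateAnalysis")
target_feature = ds["train"].features["target"]

ex = ds["train"][0]
print(target_feature.int2str(ex["target"]))  # e.g. "1_1"
```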
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 23436 |
| valid | 2898 |
| KeithHorgan98/autotrain-data-TweetClimateAnalysis | [
"task_categories:text-classification",
"region:us"
] | 2022-03-28T21:17:30+00:00 | {"task_categories": ["text-classification"]} | 2022-03-28T21:27:22+00:00 |
dc42b11d642dd1b4985d30f98ec68f63363b1141 | # UNL: Universidad Nacional de Loja
### Team members:
- Anderson Quizhpe
- Luis Negrón
- David Pacheco
- Bryan Requenes
- Paul Pasaca
| hackathon-pln-es/Dataset-Acoso-Twitter-Es | [
"license:gpl-3.0",
"region:us"
] | 2022-03-29T04:46:25+00:00 | {"license": "gpl-3.0", "languaje": ["es"]} | 2022-03-30T23:03:51+00:00 |
67e0da2e44e860e37857edf17af7b2656b3be221 | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
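A minimal loading sketch (split and feature names are assumptions; CycleGAN data is conventionally organized as unpaired A/B image sets, so inspect the printout first):

```python
from datasets import load_dataset

ds = load_dataset("huggan/horse2zebra")
print(ds)                     # inspect which splits the repo actually exposes
first_split = next(iter(ds))  # take the first available split name
print(ds[first_split][0])     # one record, typically an image per example
```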
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/horse2zebra | [
"arxiv:1703.10593",
"region:us"
] | 2022-03-29T09:34:58+00:00 | {} | 2022-04-12T12:57:34+00:00 |
26e882f90a286672b7dd46e603b3dd6b9c6c007e | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/monet2photo | [
"arxiv:1703.10593",
"region:us"
] | 2022-03-29T11:23:53+00:00 | {} | 2022-04-12T12:58:04+00:00 |
b9a1024774140d73f8272bf1158b4a8b4ef7abfe | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/cezanne2photo | [
"arxiv:1703.10593",
"region:us"
] | 2022-03-29T11:27:36+00:00 | {} | 2022-04-12T12:56:27+00:00 |
7d0f0b1f34034b010c6b7fc44d6b266803788448 | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/ukiyoe2photo | [
"arxiv:1703.10593",
"region:us"
] | 2022-03-29T11:30:34+00:00 | {} | 2022-04-12T12:58:34+00:00 |
877a11fde59bfcbbc59d508c7b00c7fa307604e6 | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/vangogh2photo | [
"arxiv:1703.10593",
"region:us"
] | 2022-03-29T11:33:03+00:00 | {} | 2022-04-12T12:58:45+00:00 |
c8706b48de9deec7eeee248792fcf483b3ccf4ef | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/apple2orange | [
"arxiv:1703.10593",
"region:us"
] | 2022-03-29T11:44:10+00:00 | {} | 2022-04-12T12:55:40+00:00 |
c4f40db4563d2acebd3a92c9b968f00c95234472 | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/iphone2dslr_flower | [
"arxiv:1703.10593",
"region:us"
] | 2022-03-29T11:47:17+00:00 | {} | 2022-04-12T12:57:46+00:00 |
7ada4dc70d20d435adc11b644ceeaff8d3b323c4 | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/summer2winter_yosemite | [
"arxiv:1703.10593",
"region:us"
] | 2022-03-29T11:53:43+00:00 | {} | 2022-04-12T12:58:19+00:00 |
853bc3c3221dfaa41d9116b4b11ec2953cc13fa3 | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/grumpifycat | [
"arxiv:1703.10593",
"region:us"
] | 2022-03-29T13:42:02+00:00 | {} | 2022-04-12T12:57:20+00:00 |
bf3e4aaebd4162e0b9d31785028c43bbd6303585 |
Dataset introduced in [Stop Clickbait: Detecting and Preventing Clickbaits in Online News Media](https://arxiv.org/abs/1610.09786)
by Abhijnan Chakraborty, Bhargavi Paranjape, Sourya Kakarla, Niloy Ganguly
Abhijnan Chakraborty, Bhargavi Paranjape, Sourya Kakarla, and Niloy Ganguly. "Stop Clickbait: Detecting and Preventing Clickbaits in Online News Media”. In Proceedings of the 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), San Fransisco, US, August 2016.
Cite:
```
@inproceedings{chakraborty2016stop,
title={Stop Clickbait: Detecting and preventing clickbaits in online news media},
author={Chakraborty, Abhijnan and Paranjape, Bhargavi and Kakarla, Sourya and Ganguly, Niloy},
booktitle={Advances in Social Networks Analysis and Mining (ASONAM), 2016 IEEE/ACM International Conference on},
pages={9--16},
year={2016},
organization={IEEE}
}
```
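A minimal loading sketch (the repository ID is taken from this page; the split and column names are assumptions, so inspect the printout for the actual layout):

```python
from datasets import load_dataset

ds = load_dataset("marksverdhei/clickbait_title_classification")
print(ds)              # check the available splits and columns
print(ds["train"][0])  # assumed split: a headline with its clickbait label
```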
| marksverdhei/clickbait_title_classification | [
"license:mit",
"arxiv:1610.09786",
"region:us"
] | 2022-03-29T20:02:09+00:00 | {"license": "mit"} | 2022-03-29T20:25:01+00:00 |
b62d5fa059150ce40bef72febc0779e9a2f4ba26 |
# Dataset Card for CoCoNuT-Java(2006)
## Dataset Description
- **Homepage:** [CoCoNuT training data](https://github.com/lin-tan/CoCoNut-Artifact/releases/tag/training_data_1.0.0)
- **Repository:** [CoCoNuT repository](https://github.com/lin-tan/CoCoNut-Artifact)
- **Paper:** [CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair](https://dl.acm.org/doi/abs/10.1145/3395363.3397369)
### Dataset Summary
Part of the data used to train the models in the "CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair" paper.
These datasets contain raw data extracted from GitHub, GitLab, and Bitbucket, and have neither been shuffled nor tokenized.
The year in the dataset’s name is the cutting year, i.e., the year of the newest commit in the dataset.
### Languages
- Java
## Dataset Structure
### Data Fields
The dataset consists of 4 columns: `add`, `rem`, `context`, and `meta`.
These match the original dataset files: `add.txt`, `rem.txt`, `context.txt`, and `meta.txt`.
### Data Instances
There is a mapping between the 4 columns for each instance.
For example:
5 first rows of `rem` (i.e., the buggy line/hunk):
```
1 public synchronized StringBuffer append(char ch)
2 ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this;
3 public String substring(int beginIndex, int endIndex)
4 if (beginIndex < 0 || endIndex > count || beginIndex > endIndex) throw new StringIndexOutOfBoundsException(); if (beginIndex == 0 && endIndex == count) return this; int len = endIndex - beginIndex; return new String(value, beginIndex + offset, len, (len << 2) >= value.length);
5 public Object next() {
```
5 first rows of add (i.e., the fixed line/hunk):
```
1 public StringBuffer append(Object obj)
2 return append(obj == null ? "null" : obj.toString());
3 public String substring(int begin)
4 return substring(begin, count);
5 public FSEntry next() {
```
These map to the 5 instances:
```diff
- public synchronized StringBuffer append(char ch)
+ public StringBuffer append(Object obj)
```
```diff
- ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this;
+ return append(obj == null ? "null" : obj.toString());
```
```diff
- public String substring(int beginIndex, int endIndex)
+ public String substring(int begin)
```
```diff
- if (beginIndex < 0 || endIndex > count || beginIndex > endIndex) throw new StringIndexOutOfBoundsException(); if (beginIndex == 0 && endIndex == count) return this; int len = endIndex - beginIndex; return new String(value, beginIndex + offset, len, (len << 2) >= value.length);
+ return substring(begin, count);
```
```diff
- public Object next() {
+ public FSEntry next() {
```
`context` contains the associated "context". Context is the (in-lined) buggy function (including the buggy lines and comments).
For example, the context of
```
public synchronized StringBuffer append(char ch)
```
is its associated function:
```java
public synchronized StringBuffer append(char ch) { ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this; }
```
`meta` contains some metadata about the project:
```
1056 /local/tlutelli/issta_data/temp/all_java0context/java/2006_temp/2006/1056/68a6301301378680519f2b146daec37812a1bc22/StringBuffer.java/buggy/core/src/classpath/java/java/lang/StringBuffer.java
```
`1056` is the project id. `/local/...` is the absolute path to the buggy file. This can be parsed to extract the commit id: `68a6301301378680519f2b146daec37812a1bc22`, the file name: `StringBuffer.java` and the original path within the project
`core/src/classpath/java/java/lang/StringBuffer.java`
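A sketch of the parsing described above; the layout of `meta` is inferred from the example string and may vary across instances:

```python
# Example meta string copied from the card.
meta = ("1056 /local/tlutelli/issta_data/temp/all_java0context/java/2006_temp/2006/"
        "1056/68a6301301378680519f2b146daec37812a1bc22/StringBuffer.java/buggy/"
        "core/src/classpath/java/java/lang/StringBuffer.java")

project_id, path = meta.split(maxsplit=1)
parts = path.split("/")
commit_id = parts[parts.index(project_id, 1) + 1]  # commit follows the project-id dir
file_name = parts[parts.index(commit_id) + 1]
orig_path = "/".join(parts[parts.index("buggy") + 1:])
print(project_id, commit_id, file_name, orig_path)
```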
| Number of projects | Number of Instances |
| ------------------ |-------------------- |
| 45,180 | 3,241,966 |
## Dataset Creation
### Curation Rationale
Data is collected to train automated program repair (APR) models.
### Citation Information
```bib
@inproceedings{lutellierCoCoNuTCombiningContextaware2020,
title = {{{CoCoNuT}}: Combining Context-Aware Neural Translation Models Using Ensemble for Program Repair},
shorttitle = {{{CoCoNuT}}},
booktitle = {Proceedings of the 29th {{ACM SIGSOFT International Symposium}} on {{Software Testing}} and {{Analysis}}},
author = {Lutellier, Thibaud and Pham, Hung Viet and Pang, Lawrence and Li, Yitong and Wei, Moshi and Tan, Lin},
year = {2020},
month = jul,
series = {{{ISSTA}} 2020},
pages = {101--114},
publisher = {{Association for Computing Machinery}},
address = {{New York, NY, USA}},
doi = {10.1145/3395363.3397369},
url = {https://doi.org/10.1145/3395363.3397369},
urldate = {2022-12-06},
isbn = {978-1-4503-8008-9},
keywords = {AI and Software Engineering,Automated program repair,Deep Learning,Neural Machine Translation}
}
```
| h4iku/coconut_java2006 | [
"code",
"region:us"
] | 2022-03-29T22:30:34+00:00 | {"pretty_name": "CoCoNuT-Java(2006)", "tags": ["code"]} | 2023-09-28T21:53:23+00:00 |
07c8608cac175708f83f05da225c487d66f8e8c9 |
# Dataset Card for CoCoNuT-JavaScript(2010)
## Dataset Description
- **Homepage:** [CoCoNuT training data](https://github.com/lin-tan/CoCoNut-Artifact/releases/tag/training_data_1.0.0)
- **Repository:** [CoCoNuT repository](https://github.com/lin-tan/CoCoNut-Artifact)
- **Paper:** [CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair](https://dl.acm.org/doi/abs/10.1145/3395363.3397369)
### Dataset Summary
Part of the data used to train the models in the "CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair" paper.
These datasets contain raw data extracted from GitHub, GitLab, and Bitbucket, and have neither been shuffled nor tokenized.
The year in the dataset’s name is the cutting year, i.e., the year of the newest commit in the dataset.
### Languages
- JavaScript
## Dataset Structure
### Data Fields
The dataset consists of 4 columns: `add`, `rem`, `context`, and `meta`.
These match the original dataset files: `add.txt`, `rem.txt`, `context.txt`, and `meta.txt`.
### Data Instances
There is a mapping between the 4 columns for each instance.
For example:
5 first rows of `rem` (i.e., the buggy line/hunk):
```
1 public synchronized StringBuffer append(char ch)
2 ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this;
3 public String substring(int beginIndex, int endIndex)
4 if (beginIndex < 0 || endIndex > count || beginIndex > endIndex) throw new StringIndexOutOfBoundsException(); if (beginIndex == 0 && endIndex == count) return this; int len = endIndex - beginIndex; return new String(value, beginIndex + offset, len, (len << 2) >= value.length);
5 public Object next() {
```
5 first rows of add (i.e., the fixed line/hunk):
```
1 public StringBuffer append(Object obj)
2 return append(obj == null ? "null" : obj.toString());
3 public String substring(int begin)
4 return substring(begin, count);
5 public FSEntry next() {
```
These map to the 5 instances:
```diff
- public synchronized StringBuffer append(char ch)
+ public StringBuffer append(Object obj)
```
```diff
- ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this;
+ return append(obj == null ? "null" : obj.toString());
```
```diff
- public String substring(int beginIndex, int endIndex)
+ public String substring(int begin)
```
```diff
- if (beginIndex < 0 || endIndex > count || beginIndex > endIndex) throw new StringIndexOutOfBoundsException(); if (beginIndex == 0 && endIndex == count) return this; int len = endIndex - beginIndex; return new String(value, beginIndex + offset, len, (len << 2) >= value.length);
+ return substring(begin, count);
```
```diff
- public Object next() {
+ public FSEntry next() {
```
`context` contains the associated "context". Context is the (in-lined) buggy function (including the buggy lines and comments).
For example, the context of
```
public synchronized StringBuffer append(char ch)
```
is its associated function:
```java
public synchronized StringBuffer append(char ch) { ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this; }
```
`meta` contains some metadata about the project:
```
1056 /local/tlutelli/issta_data/temp/all_java0context/java/2006_temp/2006/1056/68a6301301378680519f2b146daec37812a1bc22/StringBuffer.java/buggy/core/src/classpath/java/java/lang/StringBuffer.java
```
`1056` is the project id. `/local/...` is the absolute path to the buggy file. This can be parsed to extract the commit id: `68a6301301378680519f2b146daec37812a1bc22`, the file name: `StringBuffer.java` and the original path within the project
`core/src/classpath/java/java/lang/StringBuffer.java`
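A minimal sketch of pairing the buggy and fixed hunks (the repository ID and split name are assumptions; the column names follow the card):

```python
from datasets import load_dataset

ds = load_dataset("h4iku/coconut_javascript2010", split="train")

ex = ds[0]
print("-", ex["rem"])        # buggy line/hunk
print("+", ex["add"])        # fixed line/hunk
print(ex["context"][:120])   # enclosing buggy function
```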
| Number of projects | Number of Instances |
| ------------------ |-------------------- |
| 10,163 | 2,254,253 |
## Dataset Creation
### Curation Rationale
Data is collected to train automated program repair (APR) models.
### Citation Information
```bib
@inproceedings{lutellierCoCoNuTCombiningContextaware2020,
title = {{{CoCoNuT}}: Combining Context-Aware Neural Translation Models Using Ensemble for Program Repair},
shorttitle = {{{CoCoNuT}}},
booktitle = {Proceedings of the 29th {{ACM SIGSOFT International Symposium}} on {{Software Testing}} and {{Analysis}}},
author = {Lutellier, Thibaud and Pham, Hung Viet and Pang, Lawrence and Li, Yitong and Wei, Moshi and Tan, Lin},
year = {2020},
month = jul,
series = {{{ISSTA}} 2020},
pages = {101--114},
publisher = {{Association for Computing Machinery}},
address = {{New York, NY, USA}},
doi = {10.1145/3395363.3397369},
url = {https://doi.org/10.1145/3395363.3397369},
urldate = {2022-12-06},
isbn = {978-1-4503-8008-9},
keywords = {AI and Software Engineering,Automated program repair,Deep Learning,Neural Machine Translation}
}
```
| h4iku/coconut_javascript2010 | [
"code",
"region:us"
] | 2022-03-29T22:49:36+00:00 | {"pretty_name": "CoCoNuT-JavaScript(2010)", "tags": ["code"]} | 2023-09-28T22:20:59+00:00 |
76866995461bd07841e9ef7b08751da46c7eb9f4 | ---
annotations_creators:
- crowdsourced
- other
language_creators:
- other
- crowdsourced
languages:
- es
licenses:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: ESnli
size_categories:
- unknown
source_datasets:
- extended|snli
- extended|xnli
- extended|multi_nli
task_categories:
- text-classification
task_ids:
- natural-language-inference
---
# Dataset Card for nli-es
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://huggingface.co/datasets/hackathon-pln-es/nli-es/
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
A Spanish Natural Language Inference dataset put together from the sources:
- the Spanish slice of the XNLI dataset;
- machine-translated Spanish version of the SNLI dataset
- machine-translated Spanish version of the Multinli dataset
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
A small percentage of the dataset contains original Spanish text by human speakers. The rest was generated by automatic translation.
## Dataset Structure
### Data Instances
Each line includes four values: a sentence1 (the premise); a sentence2 (the hypothesis); a label specifying the relationship between the two ("gold_label"); and the ID number of the sentence pair as given in the original dataset.
Labels can be "entailment" if the premise entails the hypothesis, "contradiction" if it contradicts it or "neutral" if it neither implies it nor denies it.
{
"gold_label": "neutral",
"pairID": 1,
"sentence1": "A ver si nos tenemos que poner todos en huelga hasta cobrar lo que queramos.",
"sentence2": "La huelga es el método de lucha más eficaz para conseguir mejoras en el salario."
}
### Data Fields
gold_label: A string defining the relation between the sentence pair. Labels can be "entailment" if the premise entails the hypothesis, "contradiction" if it contradicts it or "neutral" if it neither implies it nor denies it.
pairID: A string identifying a sentence pair. It was inherited from the original datasets. NOTE: For the moment we are having trouble loading this column, so we replaced every string with an int 0 as a placeholder. We hope to have the pairID back up soon.
sentence1: A string containing one sentence in Spanish, the premise. (See gold_label.)
sentence2: A string containing one sentence in Spanish, the hypothesis. (See gold_label.)
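As a minimal, non-authoritative loading sketch (it assumes the corpus loads with its default configuration and a single `train` split, per the Data Splits note below):
```python
from datasets import load_dataset

# Load the corpus from the Hugging Face Hub
dataset = load_dataset("hackathon-pln-es/nli-es")

# Inspect one premise/hypothesis pair and its label
example = dataset["train"][0]
print(example["sentence1"])   # premise
print(example["sentence2"])   # hypothesis
print(example["gold_label"])  # entailment / contradiction / neutral
```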
### Data Splits
The whole dataset was used for training. We did not hold out an evaluation split; instead, evaluation was done on SemEval-2015 Task 2.
## Dataset Creation
### Curation Rationale
This corpus was built to remedy the scarcity of annotated Spanish-language datasets for NLI. It was generated largely by machine-translating the original English datasets into Spanish using Argos. While machine translation is far from an ideal source for semantic classification, it is an aid to enlarging the available data.
### Source Data
#### Initial Data Collection and Normalization
Please refer to the respective documentations of the original datasets:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
#### Who are the source language producers?
Please refer to the respective documentations of the original datasets:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
### Annotations
#### Annotation process
Please refer to the respective documentations of the original datasets:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
#### Who are the annotators?
Please refer to the respective documentations of the original datasets:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
### Personal and Sensitive Information
In general, no sensitive information is conveyed in the sentences.
Please refer to the respective documentations of the original datasets:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to offer new tools for semantic textual similarity analysis of Spanish sentences.
### Discussion of Biases
Please refer to the respective documentations of the original datasets:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
### Other Known Limitations
The translation of the sentences was mostly unsupervised and may introduce some noise in the corpus. Machine translation from an English-language corpus is likely to generate syntactic and lexical forms that differ from those a human Spanish speaker would produce.
For discussion on the biases and limitations of the original datasets, please refer to their respective documentations:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
## Additional Information
### Dataset Curators
The nli-es dataset was put together by Anibal Pérez, Lautaro Gesuelli, Mauricio Mazuecos and Emilio Tomás Ariza.
### Licensing Information
This corpus is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0).
Please refer to the respective documentations of the original datasets for information on their licenses:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
### Citation Information
If you need to cite this dataset, you can link to this readme. | hackathon-pln-es/nli-es | [
"arxiv:1809.05053",
"region:us"
] | 2022-03-29T22:54:07+00:00 | {} | 2022-04-04T02:30:59+00:00 |
024190594583122994c09be3724f2c5279422081 |
# Dataset Card for CoCoNuT-Python(2010)
## Dataset Description
- **Homepage:** [CoCoNuT training data](https://github.com/lin-tan/CoCoNut-Artifact/releases/tag/training_data_1.0.0)
- **Repository:** [CoCoNuT repository](https://github.com/lin-tan/CoCoNut-Artifact)
- **Paper:** [CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair](https://dl.acm.org/doi/abs/10.1145/3395363.3397369)
### Dataset Summary
Part of the data used to train the models in the "CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair" paper.
These datasets contain raw data extracted from GitHub, GitLab, and Bitbucket, and have neither been shuffled nor tokenized.
The year in the dataset’s name is the cutoff year, i.e., the year of the newest commit in the dataset.
### Languages
- Python
## Dataset Structure
### Data Fields
The dataset consists of 4 columns: `add`, `rem`, `context`, and `meta`.
These match the original dataset files: `add.txt`, `rem.txt`, `context.txt`, and `meta.txt`.
### Data Instances
There is a mapping between the 4 columns for each instance.
For example:
First 5 rows of `rem` (i.e., the buggy line/hunk):
```
1 public synchronized StringBuffer append(char ch)
2 ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this;
3 public String substring(int beginIndex, int endIndex)
4 if (beginIndex < 0 || endIndex > count || beginIndex > endIndex) throw new StringIndexOutOfBoundsException(); if (beginIndex == 0 && endIndex == count) return this; int len = endIndex - beginIndex; return new String(value, beginIndex + offset, len, (len << 2) >= value.length);
5 public Object next() {
```
First 5 rows of `add` (i.e., the fixed line/hunk):
```
1 public StringBuffer append(Object obj)
2 return append(obj == null ? "null" : obj.toString());
3 public String substring(int begin)
4 return substring(begin, count);
5 public FSEntry next() {
```
These map to the 5 instances:
```diff
- public synchronized StringBuffer append(char ch)
+ public StringBuffer append(Object obj)
```
```diff
- ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this;
+ return append(obj == null ? "null" : obj.toString());
```
```diff
- public String substring(int beginIndex, int endIndex)
+ public String substring(int begin)
```
```diff
- if (beginIndex < 0 || endIndex > count || beginIndex > endIndex) throw new StringIndexOutOfBoundsException(); if (beginIndex == 0 && endIndex == count) return this; int len = endIndex - beginIndex; return new String(value, beginIndex + offset, len, (len << 2) >= value.length);
+ return substring(begin, count);
```
```diff
- public Object next() {
+ public FSEntry next() {
```
`context` contains the associated "context". Context is the (in-lined) buggy function (including the buggy lines and comments).
For example, the context of
```
public synchronized StringBuffer append(char ch)
```
is its associated function:
```java
public synchronized StringBuffer append(char ch) { ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this; }
```
`meta` contains some metadata about the project:
```
1056 /local/tlutelli/issta_data/temp/all_java0context/java/2006_temp/2006/1056/68a6301301378680519f2b146daec37812a1bc22/StringBuffer.java/buggy/core/src/classpath/java/java/lang/StringBuffer.java
```
`1056` is the project id. `/local/...` is the absolute path to the buggy file. This can be parsed to extract the commit id: `68a6301301378680519f2b146daec37812a1bc22`, the file name: `StringBuffer.java`, and the original path within the project:
`core/src/classpath/java/java/lang/StringBuffer.java`
| Number of projects | Number of Instances |
| ------------------ |-------------------- |
| 13,899 | 480,777 |
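As a hedged sketch, the columns can be recombined into small unified-diff hunks after loading the dataset (the split name `train` is an assumption; the column names follow the Data Fields section above):
```python
from datasets import load_dataset

dataset = load_dataset("h4iku/coconut_python2010", split="train")

def to_diff(example):
    """Render one rem/add pair as a minimal unified-diff hunk."""
    return f"- {example['rem']}\n+ {example['add']}"

for example in dataset.select(range(5)):
    print(to_diff(example))
    print()
```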
## Dataset Creation
### Curation Rationale
Data is collected to train automated program repair (APR) models.
### Citation Information
```bib
@inproceedings{lutellierCoCoNuTCombiningContextaware2020,
title = {{{CoCoNuT}}: Combining Context-Aware Neural Translation Models Using Ensemble for Program Repair},
shorttitle = {{{CoCoNuT}}},
booktitle = {Proceedings of the 29th {{ACM SIGSOFT International Symposium}} on {{Software Testing}} and {{Analysis}}},
author = {Lutellier, Thibaud and Pham, Hung Viet and Pang, Lawrence and Li, Yitong and Wei, Moshi and Tan, Lin},
year = {2020},
month = jul,
series = {{{ISSTA}} 2020},
pages = {101--114},
publisher = {{Association for Computing Machinery}},
address = {{New York, NY, USA}},
doi = {10.1145/3395363.3397369},
url = {https://doi.org/10.1145/3395363.3397369},
urldate = {2022-12-06},
isbn = {978-1-4503-8008-9},
keywords = {AI and Software Engineering,Automated program repair,Deep Learning,Neural Machine Translation}
}
```
| h4iku/coconut_python2010 | [
"code",
"region:us"
] | 2022-03-30T00:03:32+00:00 | {"pretty_name": "CoCoNuT-Python(2010)", "tags": ["code"]} | 2023-09-28T22:17:32+00:00 |
cd86a6292c0525246a69827c754c3d2ed2993318 |
# Dataset Card for CoCoNuT-C(2005)
## Dataset Description
- **Homepage:** [CoCoNuT training data](https://github.com/lin-tan/CoCoNut-Artifact/releases/tag/training_data_1.0.0)
- **Repository:** [CoCoNuT repository](https://github.com/lin-tan/CoCoNut-Artifact)
- **Paper:** [CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair](https://dl.acm.org/doi/abs/10.1145/3395363.3397369)
### Dataset Summary
Part of the data used to train the models in the "CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair" paper.
These datasets contain raw data extracted from GitHub, GitLab, and Bitbucket, and have neither been shuffled nor tokenized.
The year in the dataset’s name is the cutoff year, i.e., the year of the newest commit in the dataset.
### Languages
- C
## Dataset Structure
### Data Fields
The dataset consists of 4 columns: `add`, `rem`, `context`, and `meta`.
These match the original dataset files: `add.txt`, `rem.txt`, `context.txt`, and `meta.txt`.
### Data Instances
There is a mapping between the 4 columns for each instance.
For example:
First 5 rows of `rem` (i.e., the buggy line/hunk):
```
1 public synchronized StringBuffer append(char ch)
2 ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this;
3 public String substring(int beginIndex, int endIndex)
4 if (beginIndex < 0 || endIndex > count || beginIndex > endIndex) throw new StringIndexOutOfBoundsException(); if (beginIndex == 0 && endIndex == count) return this; int len = endIndex - beginIndex; return new String(value, beginIndex + offset, len, (len << 2) >= value.length);
5 public Object next() {
```
First 5 rows of `add` (i.e., the fixed line/hunk):
```
1 public StringBuffer append(Object obj)
2 return append(obj == null ? "null" : obj.toString());
3 public String substring(int begin)
4 return substring(begin, count);
5 public FSEntry next() {
```
These map to the 5 instances:
```diff
- public synchronized StringBuffer append(char ch)
+ public StringBuffer append(Object obj)
```
```diff
- ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this;
+ return append(obj == null ? "null" : obj.toString());
```
```diff
- public String substring(int beginIndex, int endIndex)
+ public String substring(int begin)
```
```diff
- if (beginIndex < 0 || endIndex > count || beginIndex > endIndex) throw new StringIndexOutOfBoundsException(); if (beginIndex == 0 && endIndex == count) return this; int len = endIndex - beginIndex; return new String(value, beginIndex + offset, len, (len << 2) >= value.length);
+ return substring(begin, count);
```
```diff
- public Object next() {
+ public FSEntry next() {
```
`context` contains the associated "context". Context is the (in-lined) buggy function (including the buggy lines and comments).
For example, the context of
```
public synchronized StringBuffer append(char ch)
```
is its associated function:
```java
public synchronized StringBuffer append(char ch) { ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this; }
```
`meta` contains some metadata about the project:
```
1056 /local/tlutelli/issta_data/temp/all_java0context/java/2006_temp/2006/1056/68a6301301378680519f2b146daec37812a1bc22/StringBuffer.java/buggy/core/src/classpath/java/java/lang/StringBuffer.java
```
`1056` is the project id. `/local/...` is the absolute path to the buggy file. This can be parsed to extract the commit id: `68a6301301378680519f2b146daec37812a1bc22`, the file name: `StringBuffer.java`, and the original path within the project:
`core/src/classpath/java/java/lang/StringBuffer.java`
| Number of projects | Number of Instances |
| ------------------ |-------------------- |
| 12,577 | 2,735,506 |
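A small sketch for parsing the `meta` column into its parts, following the layout described above (it assumes every entry starts with a project id followed by an absolute path containing a 40-character commit hash and a `/buggy/` segment):
```python
import re

meta = ("1056 /local/tlutelli/issta_data/temp/all_java0context/java/2006_temp/2006/1056/"
        "68a6301301378680519f2b146daec37812a1bc22/StringBuffer.java/buggy/"
        "core/src/classpath/java/java/lang/StringBuffer.java")

project_id, path = meta.split(maxsplit=1)
commit_id = re.search(r"/([0-9a-f]{40})/", path).group(1)
# Everything after ".../buggy/" is the original path within the project
original_path = path.split("/buggy/", 1)[1]
filename = original_path.rsplit("/", 1)[-1]

print(project_id)     # 1056
print(commit_id)      # 68a6301301378680519f2b146daec37812a1bc22
print(original_path)  # core/src/classpath/java/java/lang/StringBuffer.java
print(filename)       # StringBuffer.java
```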
## Dataset Creation
### Curation Rationale
Data is collected to train automated program repair (APR) models.
### Citation Information
```bib
@inproceedings{lutellierCoCoNuTCombiningContextaware2020,
title = {{{CoCoNuT}}: Combining Context-Aware Neural Translation Models Using Ensemble for Program Repair},
shorttitle = {{{CoCoNuT}}},
booktitle = {Proceedings of the 29th {{ACM SIGSOFT International Symposium}} on {{Software Testing}} and {{Analysis}}},
author = {Lutellier, Thibaud and Pham, Hung Viet and Pang, Lawrence and Li, Yitong and Wei, Moshi and Tan, Lin},
year = {2020},
month = jul,
series = {{{ISSTA}} 2020},
pages = {101--114},
publisher = {{Association for Computing Machinery}},
address = {{New York, NY, USA}},
doi = {10.1145/3395363.3397369},
url = {https://doi.org/10.1145/3395363.3397369},
urldate = {2022-12-06},
isbn = {978-1-4503-8008-9},
keywords = {AI and Software Engineering,Automated program repair,Deep Learning,Neural Machine Translation}
}
```
| h4iku/coconut_c2005 | [
"code",
"region:us"
] | 2022-03-30T00:06:36+00:00 | {"pretty_name": "CoCoNuT-C(2005)", "tags": ["code"]} | 2023-09-28T22:19:25+00:00 |
fa7ca4dffe2448901592f0a8bb2ea0f0581c5951 | Japanese Pitch Dataset | vumichien/pitch_japanese_data | [
"region:us"
] | 2022-03-30T09:40:52+00:00 | {} | 2022-04-04T02:05:08+00:00 |
699b536f42eef7b7b73f3b6dbe857ea4821a7bbf | Cptburgos/aircraft_reports | [
"license:afl-3.0",
"region:us"
] | 2022-03-30T10:28:42+00:00 | {"license": "afl-3.0"} | 2022-03-30T10:28:42+00:00 |
|
e0f3a5567f5c8db711fce1d5dcf244000c5ab587 |
# Spanish Poetry Dataset
There are not many poetry datasets, and in Spanish language is even worst! With this dataset, we want to give access to these quality Spanish data for NLP tasks.
It is a simple dataset, but its potential is huge. I'm itching to discover new literary structures within Spanish literature data, a wider analysis, and so on!
# Authors
Andrea Morales (@andreamorgar) and Miguel López (@wizmik12)
### Motivation
This dataset was built for the PyConES2020 conference with the purpose of using it for a poem generation task. More information: https://github.com/andreamorgar/poesIA
### Content
Data was acquired in July 2020 from the poetry webpage www.poemas-del-alma.com, which provides a large collection of poems in Spanish. Data was scraped using the Python library BeautifulSoup. For each poem in www.poemas-del-alma.com, we collected the name of the poet, the poem, and the poem title. The scraping script is available at https://github.com/andreamorgar/poesIA/blob/master/poetry-scrapper.py.
### Languages
Spanish
### Acknowledgements
We wouldn't be here without www.poemas-del-alma.com, which provides the poetry collection in this dataset. | andreamorgar/spanish_poetry | [
"license:gpl-3.0",
"region:us"
] | 2022-03-30T11:29:11+00:00 | {"license": "gpl-3.0"} | 2022-03-30T11:39:22+00:00 |
f6718eea7b91b3e3756598b20fc034d4da1c72bc | ntt123/infore | [
"license:cc-by-nc-4.0",
"region:us"
] | 2022-03-30T14:16:04+00:00 | {"license": "cc-by-nc-4.0"} | 2022-05-07T03:00:24+00:00 |
|
dac22204f9694926352e6346327e111aaac1ee93 | omerm/test_dataset | [
"license:apache-2.0",
"region:us"
] | 2022-03-30T14:38:48+00:00 | {"license": "apache-2.0"} | 2022-03-30T14:38:48+00:00 |
|
a3fa98229bb3b442163ac8bda8e950076f75b589 |
# Dataset Card for People's Speech
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://mlcommons.org/en/peoples-speech/
- **Repository:** https://github.com/mlcommons/peoples-speech
- **Paper:** https://arxiv.org/abs/2111.09344
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [datasets@mlcommons.org](mailto:datasets@mlcommons.org)
### Dataset Summary
The People's Speech Dataset is among the world's largest English speech recognition corpora today that are licensed for academic and commercial usage under CC-BY-SA and CC-BY 4.0. It includes 30,000+ hours of transcribed English speech from a diverse set of speakers. This open dataset is large enough to train speech-to-text systems and, crucially, is available with a permissive license.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English
## Dataset Structure
### Data Instances
{
"id": "gov_DOT_uscourts_DOT_scotus_DOT_19-161/gov_DOT_uscourts_DOT_scotus_DOT_19-161_DOT_2020-03-02_DOT_mp3_00002.flac",
"audio": {
"path": "gov_DOT_uscourts_DOT_scotus_DOT_19-161/gov_DOT_uscourts_DOT_scotus_DOT_19-161_DOT_2020-03-02_DOT_mp3_00002.flac"
"array": array([-6.10351562e-05, ...]),
"sampling_rate": 16000
}
"duration_ms": 14490,
"text": "contends that the suspension clause requires a [...]"
}
### Data Fields
{
"id": datasets.Value("string"),
"audio": datasets.Audio(sampling_rate=16_000),
"duration_ms": datasets.Value("int32"),
"text": datasets.Value("string"),
}
### Data Splits
We provide the following configurations for the dataset: `cc-by-clean`, `cc-by-dirty`, `cc-by-sa-clean`, `cc-by-sa-dirty`, and `microset`. We don't provide splits for any of the configurations.
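A hedged loading sketch using the configuration names above; `streaming=True` avoids downloading the full corpus, and the split name `train` is an assumption since no explicit splits are provided:
```python
from datasets import load_dataset

# Stream the small "microset" configuration rather than the full corpus
dataset = load_dataset("MLCommons/peoples_speech_v1.0", "microset",
                       streaming=True, split="train")

sample = next(iter(dataset))
print(sample["id"])
print(sample["text"])
print(sample["audio"]["sampling_rate"])  # 16000
```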
## Dataset Creation
### Curation Rationale
See our [paper](https://arxiv.org/abs/2111.09344).
### Source Data
#### Initial Data Collection and Normalization
Data was downloaded via the archive.org API. No data inference was done.
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
No manual annotation is done. We download only source audio with already existing transcripts.
#### Who are the annotators?
For the test and dev sets, we paid native speakers of American English to do transcriptions. We do not know the identities of the transcriptionists for data in the training set. For the training set, we have noticed that some transcriptions are likely to be the output of automatic speech recognition systems.
### Personal and Sensitive Information
Several of our sources are legal and government proceedings, spoken histories, speeches, and so on. Given that these were intended as public documents and licensed as such, it is natural that the involved individuals are aware of this.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset could be used for speech synthesis. However, this requires careful cleaning of the dataset, as background noise is not tolerable for speech synthesis.
The dataset could be used for keyword spotting tasks as well. In particular, this is a good use case for the non-English audio in the dataset.
Our sincere hope is that the large breadth of sources our dataset incorporates reduces existing quality-of-service issues today, like speech recognition systems' poor understanding of non-native English accents. We cannot think of any unfair treatment that could come from using this dataset at this time.
### Discussion of Biases
Our data is downloaded from archive.org. As such, the data is biased towards whatever users decide to upload there.
Almost all of our data is American accented English.
### Other Known Limitations
As of version 1.0, a portion of data in the training, test, and dev sets is poorly aligned. Specifically, some words appear in the transcript, but not the audio, or some words appear in the audio, but not the transcript. We are working on it.
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
We provide CC-BY and CC-BY-SA subsets of the dataset.
### Citation Information
Please cite:
```
@article{DBLP:journals/corr/abs-2111-09344,
author = {Daniel Galvez and
Greg Diamos and
Juan Ciro and
Juan Felipe Cer{\'{o}}n and
Keith Achorn and
Anjali Gopi and
David Kanter and
Maximilian Lam and
Mark Mazumder and
Vijay Janapa Reddi},
title = {The People's Speech: {A} Large-Scale Diverse English Speech Recognition
Dataset for Commercial Usage},
journal = {CoRR},
volume = {abs/2111.09344},
year = {2021},
url = {https://arxiv.org/abs/2111.09344},
eprinttype = {arXiv},
eprint = {2111.09344},
timestamp = {Mon, 22 Nov 2021 16:44:07 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2111-09344.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | MLCommons/peoples_speech_v1.0 | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1T<n",
"source_datasets:original",
"language:en",
"license:cc-by-2.0",
"license:cc-by-2.5",
"license:cc-by-3.0",
"license:cc-by-4.0",
"license:cc-by-sa-3.0",
"license:cc-by-sa-4.0",
"arxiv:2111.09344",
"region:us"
] | 2022-03-30T14:49:51+00:00 | {"annotations_creators": ["crowdsourced", "machine-generated"], "language_creators": ["crowdsourced", "machine-generated"], "language": ["en"], "license": ["cc-by-2.0", "cc-by-2.5", "cc-by-3.0", "cc-by-4.0", "cc-by-sa-3.0", "cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1T<n"], "source_datasets": ["original"], "task_categories": ["automatic-speech-recognition"], "task_ids": ["speech-recognition", "robust-speech-recognition", "noisy-speech-recognition"], "pretty_name": "People's Speech"} | 2022-08-10T15:41:34+00:00 |
a29d08ef45085a1bf86173196c6a68aa54a8f43e |
# Axolotl-Spanish-Nahuatl : Parallel corpus for Spanish-Nahuatl machine translation
## Table of Contents
- [Dataset Card for [Axolotl-Spanish-Nahuatl]](#dataset-card-for-Axolotl-Spanish-Nahuatl)
## Dataset Description
- **Source 1:** http://www.corpus.unam.mx/axolotl
- **Source 2:** http://link.springer.com/article/10.1007/s10579-014-9287-y
- **Repository:1** https://github.com/ElotlMX/py-elotl
- **Repository:2** https://github.com/christos-c/bible-corpus/blob/master/bibles/Nahuatl-NT.xml
- **Paper:** https://aclanthology.org/N15-2021.pdf
## Dataset Collection
In order to get a good translator, we collected and cleaned two of the most complete Nahuatl-Spanish parallel corpora available. These are Axolotl, collected by an expert team at UNAM, and Bible UEDIN Nahuatl-Spanish, crawled by Christos Christodoulopoulos and Mark Steedman from the Bible Gateway site.
After cleaning (removing misaligned samples and texts duplicated in Spanish across the original and Nahuatl columns), we ended up with 12,207 samples from Axolotl and 7,821 samples from Bible UEDIN, for a total of 20,028 utterances.
## Team members
- Emilio Morales [(milmor)](https://huggingface.co/milmor)
- Rodrigo Martínez Arzate [(rockdrigoma)](https://huggingface.co/rockdrigoma)
- Luis Armando Mercado [(luisarmando)](https://huggingface.co/luisarmando)
- Jacobo del Valle [(jjdv)](https://huggingface.co/jjdv)
## Applications
- MODEL: Spanish Nahuatl Translation Task with a T5 model in ([t5-small-spanish-nahuatl](https://huggingface.co/hackathon-pln-es/t5-small-spanish-nahuatl))
- DEMO: Spanish Nahuatl Translation in ([Spanish-nahuatl](https://huggingface.co/spaces/hackathon-pln-es/Spanish-Nahuatl-Translation)) | hackathon-pln-es/Axolotl-Spanish-Nahuatl | [
"task_categories:text2text-generation",
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:translation",
"size_categories:unknown",
"source_datasets:original",
"language:es",
"license:mpl-2.0",
"conditional-text-generation",
"region:us"
] | 2022-03-30T14:52:03+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["es"], "license": ["mpl-2.0"], "multilinguality": ["translation"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["text2text-generation", "translation"], "task_ids": [], "pretty_name": "Axolotl Spanish-Nahuatl parallel corpus , is a digital corpus that compiles several sources with parallel content in these two languages. \n\nA parallel corpus is a type of corpus that contains texts in a source language with their correspondent translation in one or more target languages. Gutierrez-Vasques, X., Sierra, G., and Pompa, I. H. (2016). Axolotl: a web accessible parallel corpus for spanish-nahuatl. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC 2016), Portoro, Slovenia. European Language Resources Association (ELRA). Grupo de Ingenieria Linguistica (GIL, UNAM). Corpus paralelo espa\u00c3\u00b1ol-nahuatl. http://www.corpus.unam.mx/axolotl.", "language_bcp47": ["es-MX"], "tags": ["conditional-text-generation"]} | 2023-04-13T07:51:58+00:00 |
037b97278c87e535254b71cbd5a310dcf0b9e992 | lislia/GDPR | [
"license:afl-3.0",
"region:us"
] | 2022-03-30T15:03:15+00:00 | {"license": "afl-3.0"} | 2022-04-01T15:37:15+00:00 |
|
2b66248c092349da7db3e12eb66d7ffb692c77d9 | # MF Rocket Paraphrase Corpus (MFRPC) - A State of the Art Paraphrasing Solution
## Dataset Description
MF Rocket Paraphrase Corpus (MFRPC) is a corpus consisting of 10,000 sentence pairs. Each sentence pair contains a source sentence and a paraphrased version of it. The source sentences are created manually and are intended to represent typical sentences found in online articles. They are limited to general topics and are not restricted to a specific domain. The paraphrased sentences were created partly using GPT-3 and partly manually. In this way, we hope to investigate the performance of GPT-3 in a typical real-world setting and to improve the quality of the paraphrased sentences through manual corrections.
By fine-tuning a Pegasus model with this data, we create a paraphraser that performs very well. The results are indistinguishable from human-paraphrased sentences in a blind test.
We are currently working on a data set with complete paragraphs or articles.
For more information, our contact form at https://mf-rocket.de can be used.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "To overcome these difficulties, you must select an activity or goal that you are enthusiastic about [...]",
"target": "To overcome these challenges, you need to find an activity or goal that you are passionate about and[...]"
},
{
"text": "If you are unsure about what to do next, seek advice from a close friend or family member you can tr[...]",
"target": "If you are feeling lost, ask a trusted friend or family member for their opinion about what you shou[...]"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 8000 |
| valid | 2000 |
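As an illustrative sketch of the fine-tuning setup mentioned above, the `text`/`target` pairs can be tokenized for a Pegasus-style seq2seq model. The checkpoint and sequence length below are assumptions, not the authors' actual configuration:
```python
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("MFRocket/MFRPC")
tokenizer = AutoTokenizer.from_pretrained("google/pegasus-xsum")  # assumed checkpoint

def preprocess(batch):
    # Source sentences become inputs; paraphrases become the labels
    model_inputs = tokenizer(batch["text"], max_length=128, truncation=True)
    labels = tokenizer(text_target=batch["target"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True)
```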
| MFRocket/MFRPC | [
"region:us"
] | 2022-03-30T15:59:22+00:00 | {"task_categories": ["conditional-text-generation", "paraphrase", "gpt-3", "crowdsourced"]} | 2022-03-30T18:58:37+00:00 |
58d5b361e47265cf85f2334800c66d9bb485029e |
# Dataset Card for sufficient_facts
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/copenlu/sufficient_facts
- **Repository:** https://github.com/copenlu/sufficient_facts
- **Paper:** Will be uploaded soon...
- **Leaderboard:**
- **Point of Contact:** https://apepa.github.io/
### Dataset Summary
This is the dataset SufficientFacts, introduced in the paper "Fact Checking with Insufficient Evidence", accepted at the TACL journal in 2022.
Automating the fact checking (FC) process relies on information obtained from external sources. In this work, we posit that it is crucial for FC models to make veracity predictions only when there is sufficient evidence and otherwise indicate when it is not enough. To this end, we are the first to study what information FC models consider sufficient by introducing a novel task and advancing it with three main contributions. First, we conduct an in-depth empirical analysis of the task with a new fluency-preserving method for omitting information from the evidence at the constituent and sentence level. We identify when models consider the remaining evidence (in)sufficient for FC, based on three trained models with different Transformer architectures and three FC datasets. Second, we ask annotators whether the omitted evidence was important for FC, resulting in a novel diagnostic dataset, **SufficientFacts**, for FC with omitted evidence. We find that models are least successful in detecting missing evidence when adverbial modifiers are omitted (21% accuracy), whereas it is easiest for omitted date modifiers (63% accuracy). Finally, we propose a novel data augmentation strategy for contrastive self-learning of missing evidence by employing the proposed omission method combined with tri-training. It improves performance for Evidence Sufficiency Prediction by up to 17.8 F1 score, which in turn improves FC performance by up to 2.6 F1 score.
### Languages
English
## Dataset Structure
The dataset consists of three files, each for one of the datasets -- FEVER, HoVer, and VitaminC.
Each file consists of json lines of the format:
```json
{
"claim": "Unison (Celine Dion album) was originally released by Atlantic Records.",
"evidence": [
[
"Unison (Celine Dion album)",
"The album was originally released on 2 April 1990 ."
]
],
"label_before": "REFUTES",
"label_after": "NOT ENOUGH",
"agreement": "agree_ei",
"type": "PP",
"removed": ["by Columbia Records"],
"text_orig": "[[Unison (Celine Dion album)]] The album was originally released on 2 April 1990 <span style=\"color:red;\">by Columbia Records</span> ."
}
```
### Data Instances
* FEVER: 600 constituent-level, 400 sentence-level;
* HoVer: 600 constituent-level, 400 sentence-level;
* VitaminC: 600 constituent-level.
### Data Fields
* `claim` - the claim that is being verified
* `evidence` - the augmented evidence for the claim, i.e. the evidence with some removed information
* `label_before` - the original label for the claim-evidence pair, before information was removed from the evidence
* `label_after` - the label for the augmented claim-evidence pair, after information was removed from the evidence, as annotated by crowd-source workers
* `type` - type of the information removed from the evidence. The types are fine-grained; their mapping to the general types -- 7 constituent types and 1 sentence type -- can be found in the [types.json](types.json) file.
* `removed` - the text of the removed information from the evidence
* `text_orig` - the original text of the evidence, as presented to crowd-source workers; the text of the removed information is wrapped in `<span style=\"color:red;\"></span>` tags. A small reading sketch follows below.
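A minimal sketch for reading one of the JSON-lines files and tallying how often the removed evidence changed the label (the filename is hypothetical; the field names follow the list above):
```python
import json
from collections import Counter

changes = Counter()
with open("sufficient_facts_fever.jsonl") as f:  # hypothetical filename
    for line in f:
        instance = json.loads(line)
        changes[(instance["label_before"], instance["label_after"])] += 1

for (before, after), count in changes.most_common():
    print(f"{before} -> {after}: {count}")
```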
### Data Splits
| name |test_fever|test_hover|test_vitaminc|
|----------|-------:|-----:|-------:|
|test| 1000| 1000| 600|
Augmented from the test splits of the corresponding datasets.
### Annotations
#### Annotation process
The workers were provided with the following task description:
For each evidence text, some facts have been removed (marked in <span style="color:red;">red</span>).
You should annotate whether, <b>given the remaining facts in the evidence text, the evidence is still enough for verifying the claim.</b> <br></br>
<ul>
<li>You should select <i><b>'ENOUGH -- IRRELEVANT'</b></i>, if the <b>remaining information is still <i>enough</i></b> for verifying the claim because the <b>removed information is irrelevant</b> for identifying the evidence as SUPPORTS or REFUTES. See examples 1 and 2.</li>
<li>You should select <i><b>'ENOUGH -- REPEATED'</b></i>, if the <b>remaining information is still <i>enough</i></b> for verifying the claim because the <b>removed information is relevant but is also present (repeated) in the remaining (not red) text.</b> See example 3.</li>
<li>You should select <i><b>'NOT ENOUGH'</b></i> -- when <b>1) the removed information is <i>relevant</i></b> for verifying the claim <b> AND 2) it is <i>not present (repeated)</i> in the remaining text.</b> See examples 4, 5, and 6.</li>
<!--<li>You should select <i><b>'CHANGED INFO'</b></i> in the rare cases when the remaining evidence has <b>changed the support for the claim</b></li>-->
</ul>
<b>Note: You should not incorporate your own knowledge or beliefs! You should rely only on the evidence provided for the claim.</b>
The annotators were then given example instance annotations.
Finally, annotators were asked to complete a qualification test in order to be allowed to annotate instances for the task.
The resulting inter-annotator agreement for SufficientFacts is 0.81 Fleiss' κ from three annotators.
#### Who are the annotators?
The annotations were performed by workers at Amazon Mechanical Turk.
## Additional Information
### Licensing Information
MIT
### Citation Information
```
@article{10.1162/tacl_a_00486,
author = {Atanasova, Pepa and Simonsen, Jakob Grue and Lioma, Christina and Augenstein, Isabelle},
title = "{Fact Checking with Insufficient Evidence}",
journal = {Transactions of the Association for Computational Linguistics},
volume = {10},
pages = {746-763},
year = {2022},
month = {07},
abstract = "{Automating the fact checking (FC) process relies on information obtained from external sources. In this work, we posit that it is crucial for FC models to make veracity predictions only when there is sufficient evidence and otherwise indicate when it is not enough. To this end, we are the first to study what information FC models consider sufficient by introducing a novel task and advancing it with three main contributions. First, we conduct an in-depth empirical analysis of the task with a new fluency-preserving method for omitting information from the evidence at the constituent and sentence level. We identify when models consider the remaining evidence (in)sufficient for FC, based on three trained models with different Transformer architectures and three FC datasets. Second, we ask annotators whether the omitted evidence was important for FC, resulting in a novel diagnostic dataset, SufficientFacts1, for FC with omitted evidence. We find that models are least successful in detecting missing evidence when adverbial modifiers are omitted (21\\% accuracy), whereas it is easiest for omitted date modifiers (63\\% accuracy). Finally, we propose a novel data augmentation strategy for contrastive self-learning of missing evidence by employing the proposed omission method combined with tri-training. It improves performance for Evidence Sufficiency Prediction by up to 17.8 F1 score, which in turn improves FC performance by up to 2.6 F1 score.}",
issn = {2307-387X},
doi = {10.1162/tacl_a_00486},
url = {https://doi.org/10.1162/tacl\_a\_00486},
eprint = {https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl\_a\_00486/2037141/tacl\_a\_00486.pdf},
}
```
### Contributions
Thanks to [@apepa](https://github.com/apepa) for adding this dataset. | copenlu/sufficient_facts | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|fever",
"source_datasets:extended|hover",
"source_datasets:extended|fever_gold_evidence",
"language:en",
"license:mit",
"region:us"
] | 2022-03-30T18:12:14+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["extended|fever", "extended|hover", "extended|fever_gold_evidence"], "task_categories": ["text-classification"], "task_ids": ["fact-checking"], "pretty_name": "sufficient_facts"} | 2022-08-05T07:33:48+00:00 |
2c630ee849227566086e3955786c0d6f4762bcbe | simonchristensen1/GDPR | [
"region:us"
] | 2022-03-30T18:30:59+00:00 | {} | 2022-03-30T18:31:43+00:00 |
|
7e960327b88e961ca35bee0a6bed94c42b0ac0d8 | # AutoTrain Dataset for project: security-texts-classification-distilroberta
## Dataset Description
This dataset has been automatically processed by AutoTrain for project security-texts-classification-distilroberta.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "Netgear launches Bug Bounty Program for Hacker; Offering up to $15,000 in Rewards It might be the ea[...]",
"target": 0
},
{
"text": "Popular Malware Families Using 'Process Doppelg\u00e4nging' to Evade Detection The fileless code injectio[...]",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=2, names=['irrelevant', 'relevant'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 780 |
| valid | 196 |
| vlsb/autotrain-data-security-texts-classification-distilroberta | [
"task_categories:text-classification",
"region:us"
] | 2022-03-30T19:48:23+00:00 | {"task_categories": ["text-classification"]} | 2022-03-30T19:48:56+00:00 |
608c2e9f00eacfdb0932301e45fe2b420a0559a0 | # DISCO: Diachronic Spanish Sonnet Corpus
[](https://zenodo.org/badge/latestdoi/103841064)
The Diachronic Spanish Sonnet Corpus (DISCO) contains sonnets in Spanish in CSV format, written between the 15th and the 20th centuries (4303 sonnets by 1215 authors from 22 different countries). It includes well-known authors, but also less canonized ones.
This is a CSV compilation taken from the plain-text corpus v4 published on GitHub at https://github.com/pruizf/disco/tree/v4. It includes the title, author, age, and text metadata.
| hackathon-pln-es/disco_spanish_poetry | [
"region:us"
] | 2022-03-30T20:47:36+00:00 | {} | 2022-03-30T20:50:28+00:00 |
88fa19e471d13eae3c2903011908b1a8bbccb46a | # test-imagefolder-dataset
This dataset shows that you can upload image folders (with an accompanying info.csv file within) to share and visualize multiple splits of a dataset. Cheers 🍻 | nateraw/test-imagefolder-dataset | [
"region:us"
] | 2022-03-30T20:58:59+00:00 | {} | 2022-03-30T21:19:04+00:00 |
2e15cfca305fcbc9215b159088e588bcca6ee60c | This dataset contains parallel corpora for Ethiopian languages | Atnafu/Parallel_dataset_for_Ethiopian_languages | [
"license:afl-3.0",
"region:us"
] | 2022-03-31T02:45:15+00:00 | {"license": "afl-3.0"} | 2022-03-31T02:45:52+00:00 |
7408d8e0848a237692cfd597574ced7762ea42c9 | basic text | DioLiu/Test1 | [
"region:us"
] | 2022-03-31T03:16:01+00:00 | {} | 2022-04-09T03:11:46+00:00 |
3ed1ac857230642d208eede613bc5194e187c0b4 | benwoodyear/guardian_crosswords | [
"license:afl-3.0",
"region:us"
] | 2022-03-31T11:19:26+00:00 | {"license": "afl-3.0"} | 2022-04-02T10:41:59+00:00 |
|
81e35ae03e23e02427bb4ce8f2089af2049dd00a | LeoFeng/MLHW_6 | [
"license:afl-3.0",
"region:us"
] | 2022-03-31T11:26:38+00:00 | {"license": "afl-3.0"} | 2022-03-31T11:35:46+00:00 |
|
966942abb87e2e57c5b357342d7bc2f4177e0ba4 |
## Dataset Summary
Scotch is a dataset of about 19 million functions collected from open-source repositories on GitHub with permissive licenses. Each function has its corresponding code context, and about 4 million functions have corresponding docstrings.
### Languages
The dataset includes functions written in the programming languages Python, Java, JavaScript, and Go.
## Statistics
### Split
The set of functions with docstrings is split into train, valid, and test sets of 3,200,626, 400,077, and 400,080 functions, respectively.
## Features
Each function consists of following features:
* repository_name: Name of the repository the function belongs to.
* function_path: Path of the function within the repository.
* function_identifier: Function name/identifier.
* language: Programming language the function is written in.
* function: Function string.
* docstring: Function docstring.
* function_url: URL to the function code.
* context: Code context.
* license: License info of the repository (includes only repositories with permissive licenses).
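A minimal loading sketch under stated assumptions (a `train` split exists on the Hub and exposes the features above); streaming avoids materializing millions of functions:
```python
from datasets import load_dataset

# Stream instead of downloading ~19M functions at once
dataset = load_dataset("Samip/Scotch", split="train", streaming=True)

for example in dataset.take(3):
    print(example["language"], example["function_identifier"])
    print(example["docstring"])
```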
## Data Collection
The dataset is collected from GitHub repositories of the respective languages with 5 or more stars. Such repositories are listed using [SEART](https://seart-ghs.si.usi.ch/). Functions are parsed using a lightweight parser built on top of the function parser from the [CodeSearchNet dataset](https://github.com/github/CodeSearchNet/tree/master/function_parser), and repositories were collected with the help of [github-downloader from EleutherAI](https://github.com/EleutherAI/github-downloader).
### Data Processing
All code without a permissive license is removed, and deduplication is performed on the remaining set of functions. Afterwards, all functions consisting of a single line of code or whose docstrings contain non-English characters are removed. Files with multiple copies of the same function are excluded. This results in about 19M functions. To obtain a dataset of NL-Code pairs, functions with no docstring or with docstrings of fewer than 3 whitespace-separated tokens are excluded. Following CodeSearchNet, functions with the 'test' keyword in their name are excluded.
## License
This dataset is under MIT License. However, the repositories the functions are collected from may have several permissive licenses. Those licenses include MIT License, Apache License 2.0, BSD 3-Clause “New” or “Revised” License, BSD 2-Clause “Simplified” License, and ISC License. | Samip/Scotch | [
"region:us"
] | 2022-03-31T11:31:51+00:00 | {} | 2022-04-29T13:19:23+00:00 |
630a24d3e902f49f89ba5b835410ad2cbb3f0059 | ## Generation procedure
The dataset was constructed using documents from [the Pile](https://pile.eleuther.ai/) scored with [Perspective API](http://perspectiveapi.com) toxicity scores.
The procedure was the following:
1. A chunk of the Pile (3%, 7m documents) was scored using the Perspective API:
   1. the first half of this dataset is [tomekkorbak/pile-toxic-chunk-0](https://huggingface.co/datasets/tomekkorbak/pile-toxic-chunk-0), the 100k *most* toxic documents of the scored chunk;
   2. the second half of this dataset is [tomekkorbak/pile-nontoxic-chunk-0](https://huggingface.co/datasets/tomekkorbak/pile-nontoxic-chunk-0), the 100k *least* toxic documents of the scored chunk.
2. Then, the dataset was shuffled and a 9:1 train-test split was done (a sketch of this selection-and-split procedure is shown below).
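A rough sketch of the selection-and-split steps under stated assumptions (the records and their `score` field are hypothetical stand-ins for documents already scored with the Perspective API):
```python
import random

# Hypothetical stand-in for Perspective-scored Pile documents
scored = [{"text": f"doc {i}", "score": random.random()} for i in range(1_000)]

scored.sort(key=lambda doc: doc["score"])
k = 100  # 100_000 in the real procedure
least_toxic = scored[:k]   # -> pile-nontoxic-chunk-0
most_toxic = scored[-k:]   # -> pile-toxic-chunk-0

combined = least_toxic + most_toxic
random.shuffle(combined)   # the original shuffle seed is unknown

split = int(0.9 * len(combined))
train, test = combined[:split], combined[split:]
print(len(train), len(test))
```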
## Basic stats
The average scores of the least-toxic and most-toxic halves are 0.0014 and 0.67, respectively. The average score of the whole dataset is 0.33; the median is 0.51.
However, the weighted average score (weighted by document length) is 0.45. Correlation between score and document length is 0.2.
Score histogram:

Mean score per Pile subset
| pile_set_name | score | length |
|:------------------|----------:|------------:|
| ArXiv | 0.141808 | 9963.82 |
| Books3 | 0.405541 | 8911.67 |
| DM Mathematics | 0.535474 | 8194 |
| Enron Emails | 0.541136 | 1406.76 |
| EuroParl | 0.373395 | 4984.36 |
| FreeLaw | 0.279582 | 8986.73 |
| Github | 0.495742 | 2184.86 |
| Gutenberg (PG-19) | 0.583263 | 4034 |
| HackerNews | 0.617917 | 3714.83 |
| NIH ExPorter | 0.0376628 | 1278.83 |
| OpenSubtitles | 0.674261 | 14881.1 |
| OpenWebText2 | 0.613273 | 2634.41 |
| PhilPapers | 0.549582 | 9693 |
| Pile-CC | 0.525136 | 2925.7 |
| PubMed Abstracts | 0.0388705 | 1282.29 |
| PubMed Central | 0.235012 | 7418.34 |
| StackExchange | 0.590904 | 2210.16 |
| USPTO Backgrounds | 0.0100077 | 2086.39 |
| Ubuntu IRC | 0.598423 | 4396.67 |
| Wikipedia (en) | 0.0136901 | 1515.89 |
| YoutubeSubtitles | 0.65201 | 4729.52 | | tomekkorbak/pile-toxicity-balanced | [
"region:us"
] | 2022-03-31T11:43:11+00:00 | {} | 2022-04-06T10:07:05+00:00 |
0cd8a6125adf50d4f589f21a2514aff5ec63ee1c | These orbs were generated with GLID-3, a text-to-image system (https://github.com/Jack000/glid-3)
The text prompt for many was "Orbs within orbs, concentric circles and ripples of fire (spheres and circles, roundness)"
I used a high guidance scale (10 IIRC) and generated them in batches of 64
There are two 'flavours', 'dark' and 'light' (indicated with the 'label' attribute in the dataset). The 'light' images are from a GLID-3 model I fine-tuned on some abstract art, and tend to be more pastel colors and plain shapes. The 'dark' images are from GLID-3 partway through its training.
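A small sketch for pulling out one flavour via the 'label' attribute (it assumes a `train` split and that `label` is stored as a raw string; adjust the comparison if it is a ClassLabel index):
```python
from datasets import load_dataset

orbs = load_dataset("johnowhitaker/glid3_orbs", split="train")

# Keep only the 'light' flavour
light_orbs = orbs.filter(lambda ex: ex["label"] == "light")
print(len(light_orbs))
```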
This dataset is intended for use in GAN training demos and other art projects. Please give attribution if you use it in your own work (and tag me @johnowhitaker so I can see what you make!)
It's also nice for other artsy things, such as this montage made up of many little orb images: https://www.easyzoom.com/imageaccess/47cab299796a45edbd98951e704cb340
gan trained on this dataset: https://huggingface.co/johnowhitaker/orbgan_e1
gan demo (spaces): https://huggingface.co/spaces/johnowhitaker/orbgan_demo | johnowhitaker/glid3_orbs | [
"region:us"
] | 2022-03-31T14:46:41+00:00 | {} | 2022-04-01T02:58:57+00:00 |
f6a833aa772e2b7a60008061fbb637a1940b35d7 | arjundd/dosma-data | [
"license:apache-2.0",
"region:us"
] | 2022-03-31T16:58:01+00:00 | {"license": "apache-2.0"} | 2022-03-31T17:18:27+00:00 |
|
4bc4e8decfdda1c956ca15694d9fa1518261efd0 | # Spanish Gender Neutralization
<p align="center">
<img src="https://upload.wikimedia.org/wikipedia/commons/2/29/Gender_equality_symbol_%28clipart%29.png" width="250"/>
</p>
Spanish is a beautiful language with many ways of referring to people, allowing gender to be neutralized using resources within the language itself. One would say *Todas las personas asistentes* instead of *Todos los asistentes*, which is a more inclusive way of talking about people. This dataset collects a set of manually annotated examples of gendered-to-neutral Spanish transformations.
The intended use of this dataset is to train a Spanish language model to translate from gendered to neutral language, in order to produce more inclusive sentences.
### Compiled sources
One of the major challenges was to obtain a valuable dataset suited to the gender-inclusion purpose; therefore, when building the dataset, the team opted to dedicate a considerable amount of time to building it from scratch. You can find the results here.
The data used for model training has been manually created from a compilation of sources, obtained from a series of guidelines and manuals issued by the Spanish Ministry of Health, Social Services and Equality on the usage of non-sexist language, stipulated in this linked [document](https://www.inmujeres.gob.es/servRecursos/formacion/GuiasLengNoSexista/docs/Guiaslenguajenosexista_.pdf).
**NOTE: Apart from manually annotated samples, this dataset has been further increased by applying data augmentation so that a minimum number of training examples is generated.**
* [Guía para un discurso igualitario en la universidad de alicante](https://ieg.ua.es/es/documentos/normativasobreigualdad/guia-para-un-discurso-igualitario-en-la-ua.pdf)
* [Guía UC de Comunicación en Igualdad](<https://web.unican.es/unidades/igualdad/SiteAssets/igualdad/comunicacion-en-igualdad/guia%20comunicacion%20igualdad%20(web).pdf>)
* [Buenas prácticas para el tratamiento del lenguaje en igualdad](https://e-archivo.uc3m.es/handle/10016/22811)
* [Guía del lenguaje no sexista de la Universidad de Castilla-La Mancha](https://unidadigualdad.ugr.es/page/guiialenguajeuniversitarionosexista_universidaddecastillalamancha/!)
* [Guía de Lenguaje Para el Ámbito Educativo](https://www.educacionyfp.gob.es/va/dam/jcr:8ce318fd-c8ff-4ad2-97b4-7318c27d1682/guialenguajeambitoeducativo.pdf)
* [Guía para un uso igualitario y no sexista del lenguaje y dela imagen en la Universidad de Jaén](https://www.ujaen.es/servicios/uigualdad/sites/servicio_uigualdad/files/uploads/Guia_lenguaje_no_sexista.pdf)
* [Guía de uso no sexista del vocabulario español](https://www.um.es/documents/2187255/2187763/guia-leng-no-sexista.pdf/d5b22eb9-b2e4-4f4b-82aa-8a129cdc83e3)
* [Guía para el uso no sexista de la lengua castellana y de imágnes en la UPV/EHV](https://www.ehu.eus/documents/1734204/1884196/Guia_uso_no_sexista_EHU.pdf)
* [Guía de lenguaje no sexista UNED](http://portal.uned.es/pls/portal/docs/PAGE/UNED_MAIN/LAUNIVERSIDAD/VICERRECTORADOS/GERENCIA/OFICINA_IGUALDAD/CONCEPTOS%20BASICOS/GUIA_LENGUAJE.PDF)
* [COMUNICACIÓN AMBIENTAL CON PERSPECTIVA DE GÉNERO](https://cima.cantabria.es/documents/5710649/5729124/COMUNICACI%C3%93N+AMBIENTAL+CON+PERSPECTIVA+DE+G%C3%89NERO.pdf/ccc18730-53e3-35b9-731e-b4c43339254b)
* [Recomendaciones para la utilización de lenguaje no sexista](https://www.csic.es/sites/default/files/guia_para_un_uso_no_sexista_de_la_lengua_adoptada_por_csic2.pdf)
* [Estudio sobre lenguaje y contenido sexista en la Web](https://www.mujeresenred.net/IMG/pdf/Estudio_paginas_web_T-incluye_ok.pdf)
* [Nombra.en.red. En femenino y en masculino](https://www.inmujeres.gob.es/areasTematicas/educacion/publicaciones/serieLenguaje/docs/Nombra_en_red.pdf)
## Team Members
- Fernando Velasco [(fermaat)](https://huggingface.co/fermaat)
- Cibeles Redondo [(CibelesR)](https://huggingface.co/CibelesR)
- Juan Julian Cea [(Juanju)](https://huggingface.co/Juanju)
- Magdalena Kujalowicz [(MacadellaCosta)](https://huggingface.co/MacadellaCosta)
- Javier Blasco [(javiblasco)](https://huggingface.co/javiblasco)
### Enjoy and feel free to collaborate with this dataset 🤗 | hackathon-pln-es/neutral-es | [
"task_categories:text2text-generation",
"task_categories:translation",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:es",
"region:us"
] | 2022-03-31T17:02:00+00:00 | {"language": ["es"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "task_categories": ["text2text-generation", "translation"], "task_ids": [], "pretty_name": "neutralES"} | 2022-10-25T09:20:48+00:00 |
939a75ee8e11464d473de956df8a96fa5e5e64b7 |
This is a suite of psycholinguistic datasets by Allyson Ettinger. See her [official Github repository](https://github.com/aetting/lm-diagnostics) for specific details. | KevinZ/psycholinguistic_eval | [
"task_categories:multiple-choice",
"task_categories:fill-mask",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"license:mit",
"region:us"
] | 2022-03-31T23:04:18+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en-US"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": [], "task_categories": ["multiple-choice", "fill-mask", "question-answering", "zero-shot-classification"], "task_ids": [], "pretty_name": "psycholinguistic_eval"} | 2022-10-25T09:03:37+00:00 |
5f55375edbfe0270c20bcf770751ad982c0e6614 |
# Dataset Card for MultiWOZ 2.1
- **Repository:** https://github.com/budzianowski/multiwoz
- **Paper:** https://aclanthology.org/2020.lrec-1.53
- **Leaderboard:** https://github.com/budzianowski/multiwoz
- **Who transforms the dataset:** Qi Zhu (zhuq96 at gmail dot com)
To use this dataset, you need to install [ConvLab-3](https://github.com/ConvLab/ConvLab-3) platform first. Then you can load the dataset via:
```
from convlab.util import load_dataset, load_ontology, load_database
dataset = load_dataset('multiwoz21')
ontology = load_ontology('multiwoz21')
database = load_database('multiwoz21')
```
For more usage please refer to [here](https://github.com/ConvLab/ConvLab-3/tree/master/data/unified_datasets).
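As a hedged illustration of iterating the loaded data, the snippet below assumes ConvLab-3's unified format, where each split holds a list of dialogues and each dialogue carries `turns` with `speaker` and `utterance` fields (these names are assumptions based on the unified-format docs, not guaranteed by this card):
```python
from convlab.util import load_dataset

dataset = load_dataset('multiwoz21')

# Assumed structure: split name -> list of dialogue dicts
dialogue = dataset['train'][0]
for turn in dialogue['turns']:
    print(turn['speaker'], ':', turn['utterance'])
```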
### Dataset Summary
MultiWOZ 2.1 fixed the noise in state annotations and dialogue utterances. It also includes user dialogue acts from ConvLab (Lee et al., 2019) as well as multiple slot descriptions per dialogue state slot.
- **How to get the transformed data from original data:**
- Download [MultiWOZ_2.1.zip](https://github.com/budzianowski/multiwoz/blob/master/data/MultiWOZ_2.1.zip).
- Run `python preprocess.py` in the current directory.
- **Main changes of the transformation:**
- Create a new ontology in the unified format, taking slot descriptions from MultiWOZ 2.2.
- Correct some grammar errors in the text, mainly following `tokenization.md` in MultiWOZ_2.1.
- Normalize slot name and value. See `normalize_domain_slot_value` function in `preprocess.py`.
- Correct some non-categorical slots' values and provide character level span annotation.
- Concatenate multiple values in user goal & state using `|`.
- Add `booked` information in system turns from original belief states.
- Remove `Booking` domain and remap all booking relevant dialog acts to unify the annotation of booking action in different domains, see `booking_remapper.py`.
- **Annotations:**
- user goal, dialogue acts, state.
### Supported Tasks and Leaderboards
NLU, DST, Policy, NLG, E2E, User simulator
### Languages
English
### Data Splits
| split | dialogues | utterances | avg_utt | avg_tokens | avg_domains | cat slot match(state) | cat slot match(goal) | cat slot match(dialogue act) | non-cat slot span(dialogue act) |
|------------|-------------|--------------|-----------|--------------|---------------|-------------------------|------------------------|--------------------------------|-----------------------------------|
| train | 8438 | 113556 | 13.46 | 13.23 | 2.8 | 98.84 | 99.48 | 86.39 | 98.22 |
| validation | 1000 | 14748 | 14.75 | 13.5 | 2.98 | 98.84 | 99.46 | 86.59 | 98.17 |
| test | 1000 | 14744 | 14.74 | 13.5 | 2.93 | 99.21 | 99.32 | 85.83 | 98.58 |
| all | 10438 | 143048 | 13.7 | 13.28 | 2.83 | 98.88 | 99.47 | 86.35 | 98.25 |
8 domains: ['attraction', 'hotel', 'taxi', 'restaurant', 'train', 'police', 'hospital', 'general']
- **cat slot match**: how many values of categorical slots are in the possible values of ontology in percentage.
- **non-cat slot span**: how many values of non-categorical slots have span annotation in percentage.
### Citation
```
@inproceedings{eric-etal-2020-multiwoz,
title = "{M}ulti{WOZ} 2.1: A Consolidated Multi-Domain Dialogue Dataset with State Corrections and State Tracking Baselines",
author = "Eric, Mihail and Goel, Rahul and Paul, Shachi and Sethi, Abhishek and Agarwal, Sanchit and Gao, Shuyag and Hakkani-Tur, Dilek",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.lrec-1.53",
pages = "422--428",
ISBN = "979-10-95546-34-4",
}
```
### Licensing Information
Apache License, Version 2.0 | ConvLab/multiwoz21 | [
"task_categories:conversational",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"region:us"
] | 2022-04-01T02:32:58+00:00 | {"language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["conversational"], "pretty_name": "MultiWOZ 2.1"} | 2022-11-25T08:00:28+00:00 |
0fbd5a29ff7f0c14ba9b11b878b05e8bdcc0a4c0 | MITRE technique/subtechnique | Ericblancosf/subtechnique | [
"region:us"
] | 2022-04-01T03:56:18+00:00 | {} | 2022-04-01T04:02:50+00:00 |
79ac5a87c5025ad4eb1713832edaf19838d8c10f | ```yaml
annotations_creators:
- expert-generated
- machine-generated
language_creators:
- expert-generated
- machine-generated
languages: []
licenses:
- other-my-license
multilinguality:
- monolingual
pretty_name: bioasq
size_categories:
- unknown
source_datasets:
- extended|pubmed_qa
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
``` | tan9/bioasq | [
"region:us"
] | 2022-04-01T07:25:44+00:00 | {} | 2022-04-01T08:40:24+00:00 |
3955cef83f919910b99d77369b44c09a67907ef2 | ```yaml
annotations_creators:
- other
language_creators:
- other
languages:
- en-US
licenses:
- other-my-license
multilinguality:
- monolingual
pretty_name: pubmedqa
size_categories:
- unknown
source_datasets: []
task_categories:
- question-answering
task_ids:
- extractive-qa
``` | tan9/pubmedQA | [
"region:us"
] | 2022-04-01T08:41:08+00:00 | {} | 2022-04-01T08:44:08+00:00 |
b1aac0607073656a92770b7dc5766a02abef01d6 | # EAS Dataset
[](https://opendatacommons.org/licenses/odbl/)
Emotions Analytic System (EAS) on Instagram social network data
Nowadays, thanks to the spread of social media and the large amount of data on the Internet, the way we view and interpret data is evolving. Visualization is one of the most important fields in data science, and given the growing usage of social media, analyzing the data it contains is crucial. In this research, an Emotion Analytic System (EAS) for Instagram social network data was designed and developed. The system analyzes the emotions and words that users write and presents them with visualization techniques. Over 370,000 Instagram comments were collected with data crawlers we developed; the data was then prepared and preprocessed, including normalization, keyword extraction, etc. The system is developed in Python.
This dataset has over 370,000 preprocessed comments (most of them in Persian) from 40 Instagram channels. The comments were crawled from 12 April 2017 (1396/01/26 A.H.S.) to 29 July 2017 (1396/05/07 A.H.S.).
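A minimal sketch of loading and inspecting the comments with pandas (the file name and column names are assumptions; adjust them to the actual files shipped in this repository):
```python
# Minimal sketch, not an official loader. Assumptions: the comments ship
# as a CSV named "comments.csv" with a "channel" column -- adjust to the
# actual files and columns in this repository.
import pandas as pd

df = pd.read_csv("comments.csv")
print(len(df))                       # should be ~370,000 comments
print(df["channel"].value_counts())  # comment counts per Instagram channel
```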
# Citation
If you use this dataset in your publications, please cite this paper:
```
@article {
author = {Kiaei, Seyed Faridoddin and Dehghan Rouzi, Mohammad and Farzi, Saeed},
title = {Designing and Implementing an Emotion Analytic System (EAS) on Instagram Social Network Data},
journal = {International Journal of Web Research},
volume = {2},
number = {2},
pages = {9-14},
year = {2019},
publisher = {University of Science and Culture},
issn = {2645-4335},
eissn = {2645-4343},
doi = {10.22133/ijwr.2020.225574.1052},
keywords = {Emotion Analysis,visualization,Instagram,Election},
url = {http://ijwr.usc.ac.ir/article_110287.html},
eprint = {http://ijwr.usc.ac.ir/article_110287_ad2b34be8792fd3e55ae13ea0f367b7a.pdf}
}
```
| sfdkiaei/EAS | [
"region:us"
] | 2022-04-01T09:46:59+00:00 | {} | 2022-04-01T10:14:44+00:00 |
befbfaa257a8c30ecac899233856947f65272b5e | jimregan/psst | [
"license:apache-2.0",
"region:us"
] | 2022-04-01T09:56:07+00:00 | {"license": "apache-2.0"} | 2022-04-01T09:56:42+00:00 |
|
5e44a58b8670e3c79a1a78efbd08ba3f3cbddfac | # Citation
```
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775},
timestamp = {Fri, 22 Jan 2021 15:16:00 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/few-shot-obama | [
"arxiv:2101.04775",
"region:us"
] | 2022-04-01T10:33:51+00:00 | {} | 2022-04-12T13:05:43+00:00 |
ec44f1c4b919da79f19f17f39233d1622ac359fa | # Citation
```
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775},
timestamp = {Fri, 22 Jan 2021 15:16:00 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/few-shot-grumpy-cat | [
"arxiv:2101.04775",
"region:us"
] | 2022-04-01T10:36:28+00:00 | {} | 2022-04-12T13:05:58+00:00 |
687192a767a75775f5c7eb8dae634d52f23a7b53 | # Citation
```
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775},
timestamp = {Fri, 22 Jan 2021 15:16:00 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/few-shot-panda | [
"arxiv:2101.04775",
"region:us"
] | 2022-04-01T10:37:01+00:00 | {} | 2022-04-12T13:06:07+00:00 |
1abe3d88240ec5d9d0072cba028e81def9a26a71 | # Citation
```
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775},
timestamp = {Fri, 22 Jan 2021 15:16:00 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/few-shot-cat | [
"arxiv:2101.04775",
"region:us"
] | 2022-04-01T10:40:37+00:00 | {} | 2022-04-12T13:06:50+00:00 |
53fbaa9de53882a687970fba23a960f75c452df6 | # Citation
```
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775},
timestamp = {Fri, 22 Jan 2021 15:16:00 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/few-shot-dog | [
"arxiv:2101.04775",
"region:us"
] | 2022-04-01T10:41:14+00:00 | {} | 2022-04-12T13:07:22+00:00 |
07ca20da8baf5a0e04029236a7d9de706e05966b | # Citation
```
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775},
timestamp = {Fri, 22 Jan 2021 15:16:00 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/few-shot-anime-face | [
"arxiv:2101.04775",
"region:us"
] | 2022-04-01T10:42:03+00:00 | {} | 2022-04-12T13:08:09+00:00 |
649a061a8b9fc03aad2d3abd56c2e9ce42da42fd | Source: https://www.kaggle.com/datasets/djilax/pkmn-image-dataset | huggan/pokemon | [
"region:us"
] | 2022-04-01T10:44:34+00:00 | {} | 2022-04-01T10:50:45+00:00 |
623cf5299032a13f955fef4259db0a794b42c8d0 | # Citation
```
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775},
timestamp = {Fri, 22 Jan 2021 15:16:00 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/few-shot-art-painting | [
"arxiv:2101.04775",
"region:us"
] | 2022-04-01T10:46:40+00:00 | {} | 2022-04-12T13:06:24+00:00 |
ab6960d72dde5d5880a24e3580dc4af97f61436b | # Citation
```
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775},
timestamp = {Fri, 22 Jan 2021 15:16:00 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/few-shot-fauvism-still-life | [
"arxiv:2101.04775",
"region:us"
] | 2022-04-01T10:47:44+00:00 | {} | 2022-04-12T13:07:31+00:00 |
9d26da16edb06b659c3a2ede3660cefcd23168af | # Citation
```
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775},
timestamp = {Fri, 22 Jan 2021 15:16:00 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/few-shot-flat-colored-patterns | [
"arxiv:2101.04775",
"region:us"
] | 2022-04-01T10:54:39+00:00 | {} | 2022-04-12T13:07:41+00:00 |
a56f84f9de3496b3d492d960611c54546f6b89dc | # Citation
```
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775},
timestamp = {Fri, 22 Jan 2021 15:16:00 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/few-shot-moongate | [
"arxiv:2101.04775",
"region:us"
] | 2022-04-01T10:55:18+00:00 | {} | 2022-04-12T13:07:11+00:00 |
d5aca3bdb21bff3e20c0e78b614fa114477118fc | # Citation
```
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775},
timestamp = {Fri, 22 Jan 2021 15:16:00 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/few-shot-pokemon | [
"arxiv:2101.04775",
"region:us"
] | 2022-04-01T10:56:00+00:00 | {} | 2022-04-12T13:06:36+00:00 |
592999df611c39c3cac8774c53f3c59f819a3eef | # Citation
```
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775},
timestamp = {Fri, 22 Jan 2021 15:16:00 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/few-shot-shells | [
"arxiv:2101.04775",
"region:us"
] | 2022-04-01T10:56:38+00:00 | {} | 2022-04-12T13:07:59+00:00 |
fef3bf060bf60fc11be5d4d651c6a5634d5eaf56 | # Citation
```
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775},
timestamp = {Fri, 22 Jan 2021 15:16:00 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/few-shot-skulls | [
"arxiv:2101.04775",
"region:us"
] | 2022-04-01T10:57:06+00:00 | {} | 2022-04-12T13:03:56+00:00 |
0689c984ee2d9fb5ffd7c91f0cfeb7bbaa43f2f9 |
# Dataset Card for es_tweets_laboral
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
Dataset created by @hucruz, @DanielaGarciaQuezada, @hylandude, @BloodBoy21
Labeled by @DanielaGarciaQuezada
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Spanish
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | hackathon-pln-es/es_tweets_laboral | [
"task_categories:text-classification",
"task_ids:intent-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:es",
"license:unknown",
"region:us"
] | 2022-04-01T12:20:33+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["es"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["intent-classification"], "pretty_name": "Tweets en espa\u00f1ol denuncia laboral"} | 2022-10-25T09:03:39+00:00 |
40251166354d6348fdd75a258486e55260642a5d |
# Dataset Card for MetaShift
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [MetaShift homepage](https://metashift.readthedocs.io/)
- **Repository:** [MetaShift repository](https://github.com/Weixin-Liang/MetaShift)
- **Paper:** [MetaShift paper](https://arxiv.org/abs/2202.06523v1)
- **Point of Contact:** [Weixin Liang](mailto:wxliang@stanford.edu)
### Dataset Summary
The MetaShift dataset is a collection of 12,868 sets of natural images across 410 classes. It was created for understanding the performance of a machine learning model across diverse data distributions.
The authors leverage the natural heterogeneity of Visual Genome and its annotations to construct MetaShift.
The key idea is to cluster images using its metadata which provides context for each image.
For example : cats with cars or cats in bathroom.
The main advantage is the dataset contains many more coherent sets of data compared to other benchmarks.
Two important benefits of MetaShift :
- Contains orders of magnitude more natural data shifts than previously available.
- Provides explicit explanations of what is unique about each of its data sets and a distance score that measures the amount of distribution shift between any two of its data sets.
### Dataset Usage
The dataset has the following configuration parameters:
- selected_classes: `list[string]`, optional, list of the classes to generate the MetaShift dataset for. If `None`, the list is equal to `['cat', 'dog', 'bus', 'truck', 'elephant', 'horse']`.
- attributes_dataset: `bool`, default `False`, if `True`, the script generates the MetaShift-Attributes dataset. Refer [MetaShift-Attributes Dataset](https://github.com/Weixin-Liang/MetaShift#bonus-generate-the-metashift-attributes-dataset-subsets-defined-by-subject-attributes) for more information.
- attributes: `list[string]`, optional, list of attributes classes included in the Attributes dataset. If `None` and `attributes_dataset` is `True`, it's equal to `["cat(orange)", "cat(white)", "dog(sitting)", "dog(jumping)"]`. You can find the full attribute ontology in the above link.
- with_image_metadata: `bool`, default `False`, whether to include image metadata. If set to `True`, this will give additional metadata about each image. See [Scene Graph](https://cs.stanford.edu/people/dorarad/gqa/download.html) for more information.
- image_subset_size_threshold: `int`, default `25`, the number of images required to be considered a subset. If the number of images is less than this threshold, the subset is ignored.
- min_local_groups: `int`, default `5`, the minimum number of local groups required to be considered an object class.
Consider the following examples to get an idea of how you can use the configuration parameters :
1. To generate the MetaShift Dataset :
```python
load_dataset("metashift", selected_classes=['cat', 'dog', 'bus'])
```
The full object vocabulary and its hierarchy can be seen [here](https://github.com/Weixin-Liang/MetaShift/blob/main/dataset/meta_data/class_hierarchy.json).
The default classes are `['cat', 'dog', 'bus', 'truck', 'elephant', 'horse']`
2. To generate the MetaShift-Attributes Dataset (subsets defined by subject attributes) :
```python
load_dataset("metashift", attributes_dataset = True, attributes=["dog(smiling)", "cat(resting)"])
```
The default attributes are `["cat(orange)", "cat(white)", "dog(sitting)", "dog(jumping)"]`
3. To generate the dataset with additional image metadata information :
```python
load_dataset("metashift", selected_classes=['cat', 'dog', 'bus'], with_image_metadata=True)
```
4. Further, you can specify your own configuration different from those used in the papers as follows:
```python
load_dataset("metashift", image_subset_size_threshold=20, min_local_groups=3)
```
### Dataset Meta-Graphs
From the MetaShift Github Repo :
> MetaShift splits the data points of each class (e.g., Cat) into many subsets based on visual contexts. Each node in the meta-graph represents one subset. The weight of each edge is the overlap coefficient between the corresponding two subsets. Node colors indicate the graph-based community detection results. Inter-community edges are colored. Intra-community edges are grayed out for better visualization. The border color of each example image indicates its community in the meta-graph. We have one such meta-graph for each of the 410 classes in the MetaShift.
The following are the metagraphs for the default classes, these have been generated using the `generate_full_MetaShift.py` file.
<p align='center'>
<img width='75%' src='https://i.imgur.com/wrpezCK.jpg' alt="Cat Meta-graph" /> </br>
<b>Figure: Meta-graph: visualizing the diverse data distributions within the “cat” class. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.imgur.com/FhuAwfT.jpg' alt="Dog Meta-graph" /> </br>
<b>Figure: Meta-graph for the “Dog” class, which captures meaningful semantics of the multi-modal data distribution of “Dog”. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.imgur.com/FFCcN6L.jpg' alt="Bus Meta-graph" /> </br>
<b>Figure: Meta-graph for the “Bus” class. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.imgur.com/rx5b5Vo.jpg' alt="Elephant Meta-graph" /> </br>
<b>Figure: Meta-graph for the "Elephant" class. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.imgur.com/6f6U3S8.jpg' alt="Horse Meta-graph" /> </br>
<b>Figure: Meta-graph for the "Horse" class. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.imgur.com/x9zhQD7.jpg' alt="Truck Meta-graph"/> </br>
<b>Figure: Meta-graph for the Truck class. </b>
</p>
### Supported Tasks and Leaderboards
From the paper:
> MetaShift supports evaluation on both :
> - domain generalization and subpopulation shifts settings,
> - assessing training conflicts.
### Languages
All the classes and subsets use English as their primary language.
## Dataset Structure
### Data Instances
A sample from the MetaShift dataset is provided below:
```
{
'image_id': '2411520',
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x375 at 0x7F99115B8D90>,
'label': 2,
'context': 'fence'
}
```
A sample from the MetaShift-Attributes dataset is provided below:
```
{
'image_id': '2401643',
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x333 at 0x7FED371CE350>
'label': 0
}
```
The format of the dataset with image metadata included by passing `with_image_metadata=True` to `load_dataset` is provided below:
```
{
'image_id': '2365745',
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x333 at 0x7FEBCD39E4D0>
'label': 0,
'context': 'ground',
'width': 500,
'height': 333,
'location': None,
'weather': None,
'objects':
{
'object_id': ['2676428', '3215330', '1962110', '2615742', '3246028', '3232887', '3215329', '1889633', '3882667', '3882663', '1935409', '3882668', '3882669'],
'name': ['wall', 'trailer', 'floor', 'building', 'walkway', 'head', 'tire', 'ground', 'dock', 'paint', 'tail', 'cat', 'wall'],
'x': [194, 12, 0, 5, 3, 404, 27, 438, 2, 142, 324, 328, 224],
'y': [1, 7, 93, 10, 100, 46, 215, 139, 90, 172, 157, 45, 246],
'w': [305, 477, 499, 492, 468, 52, 283, 30, 487, 352, 50, 122, 274],
'h': [150, 310, 72, 112, 53, 59, 117, 23, 240, 72, 107, 214, 85],
'attributes': [['wood', 'green'], [], ['broken', 'wood'], [], [], [], ['black'], [], [], [], ['thick'], ['small'], ['blue']],
'relations': [{'name': [], 'object': []}, {'name': [], 'object': []}, {'name': [], 'object': []}, {'name': [], 'object': []}, {'name': [], 'object': []}, {'name': ['of'], 'object': ['3882668']}, {'name': ['to the left of'], 'object': ['3882669']}, {'name': ['to the right of'], 'object': ['3882668']}, {'name': [], 'object': []}, {'name': [], 'object': []}, {'name': ['of'], 'object': ['3882668']}, {'name': ['perched on', 'to the left of'], 'object': ['3882667', '1889633']}, {'name': ['to the right of'], 'object': ['3215329']}]
}
}
```
### Data Fields
- `image_id`: Unique numeric ID of the image in Base Visual Genome dataset.
- `image`: A PIL.Image.Image object containing the image.
- `label`: an int classification label.
- `context`: represents the context in which the label is seen. A given label could have multiple contexts.
Image Metadata format can be seen [here](https://cs.stanford.edu/people/dorarad/gqa/download.html) and a sample above has been provided for reference.
### Data Splits
All the data is contained in the training set.
## Dataset Creation
### Curation Rationale
From the paper:
> We present MetaShift as an important resource for studying the behavior of
ML algorithms and training dynamics across data with heterogeneous contexts. In order to assess the reliability and fairness of a model, we need to evaluate
its performance and training behavior across heterogeneous types of data. MetaShift contains many more coherent sets of data compared to other benchmarks. Importantly, we have explicit annotations of what makes each subset unique (e.g. cats with cars or dogs next to a bench) as well as a score that measures the distance between any two subsets, which is not available in previous benchmarks of natural data.
### Source Data
#### Initial Data Collection and Normalization
From the paper:
> We leverage the natural heterogeneity of Visual Genome and its annotations to construct MetaShift. Visual Genome contains over 100k images across 1,702 object classes. MetaShift is constructed on a class-by-class basis. For each class, say “cat”, we pull out all cat images and proceed with generating candidate subsets, constructing meta-graphs and then quantifying distances of distribution shifts.
#### Who are the source language producers?
[More Information Needed]
### Annotations
The MetaShift dataset uses Visual Genome as its base, therefore the annotations process is same as the Visual Genome dataset.
#### Annotation process
From the Visual Genome paper :
> We used Amazon Mechanical Turk (AMT) as our primary source of annotations. Overall, a total of over 33,000 unique workers contributed to the dataset. The dataset was collected over the course of 6 months after 15 months of experimentation and iteration on the data representation. Approximately 800,000 Human Intelligence Tasks (HITs) were launched on AMT, where each HIT involved creating descriptions, questions and answers, or region graphs.
#### Who are the annotators?
From the Visual Genome paper :
> Visual Genome was collected and verified entirely by crowd workers from Amazon Mechanical Turk.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
From the paper:
> One limitation is that our MetaShift might inherit existing biases in Visual Genome, which is the
base dataset of our MetaShift. Potential concerns include minority groups being under-represented
in certain classes (e.g., women with snowboard), or annotation bias where people in images are
by default labeled as male when gender is unlikely to be identifiable. Existing work in analyzing,
quantifying, and mitigating biases in general computer vision datasets can help with addressing this
potential negative societal impact.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
From the paper :
> Our MetaShift and the code would use the Creative Commons Attribution 4.0 International License. Visual Genome (Krishna et al., 2017) is licensed under a Creative Commons Attribution 4.0 International License. MS-COCO (Lin et al., 2014) is licensed under CC-BY 4.0. The Visual Genome dataset uses 108, 077 images from the intersection of the YFCC100M (Thomee et al., 2016) and MS-COCO. We use the pre-processed and cleaned version of Visual Genome by GQA (Hudson & Manning, 2019).
### Citation Information
```bibtex
@InProceedings{liang2022metashift,
title={MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts},
author={Weixin Liang and James Zou},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=MTex8qKavoS}
}
```
### Contributions
Thanks to [@dnaveenr](https://github.com/dnaveenr) for adding this dataset. | metashift | [
"task_categories:image-classification",
"task_categories:other",
"task_ids:multi-label-image-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"domain-generalization",
"arxiv:2202.06523",
"region:us"
] | 2022-04-01T14:16:57+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["image-classification", "other"], "task_ids": ["multi-label-image-classification"], "paperswithcode_id": "metashift", "pretty_name": "MetaShift", "tags": ["domain-generalization"], "dataset_info": {"features": [{"name": "image_id", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "cat", "1": "dog", "2": "bus", "3": "truck", "4": "elephant", "5": "horse", "6": "bowl", "7": "cup"}}}}, {"name": "context", "dtype": "string"}], "config_name": "metashift", "splits": [{"name": "train", "num_bytes": 16333509, "num_examples": 86808}], "download_size": 21878013674, "dataset_size": 16333509}} | 2024-01-18T11:19:04+00:00 |
d5d0b1e1251a49c870b8929dc10c3f03ea7dbb67 | Deprecated as of meerqat v4-alpha. See https://github.com/PaulLerner/ViQuAE | PaulLerner/viquae_passages | [
"region:us"
] | 2022-04-01T14:45:33+00:00 | {} | 2023-05-31T10:41:41+00:00 |
650743525ff22fb48e2d89783bb2c2e671a0e678 | The train/test split for this dataset was generated with the pandas snippet below (cleaned up from the original code: imports and comments added, and the removed `DataFrame.append` replaced with `pd.concat`; `original_labeled_dts` is the labeled source DataFrame, assumed to be loaded beforehand):

```python
import pandas as pd

# Split the labeled source DataFrame by class label.
original_label0 = original_labeled_dts[original_labeled_dts.label == 0]
original_label1 = original_labeled_dts[original_labeled_dts.label == 1]

# Shuffle each class with a fixed seed for reproducibility.
original_label1_shuffle = original_label1.sample(frac=1, random_state=101).reset_index(drop=True)
original_label0_shuffle = original_label0.sample(frac=1, random_state=101).reset_index(drop=True)

# Hold out the first 500 positives and 2500 negatives for the test split;
# everything else goes to the train split. Both splits are reshuffled.
original_train_randomState101 = pd.concat(
    [original_label1_shuffle[500:], original_label0_shuffle[2500:]], ignore_index=True
).sample(frac=1, random_state=101).reset_index(drop=True)

original_test_randomState101 = pd.concat(
    [original_label1_shuffle[:500], original_label0_shuffle[:2500]], ignore_index=True
).sample(frac=1, random_state=101).reset_index(drop=True)

original_test_randomState101
``` | nntadotzips/vietjack_geography_original_labeled_train_test | [
"region:us"
] | 2022-04-01T16:33:30+00:00 | {} | 2022-04-01T16:36:39+00:00 |
2eec8352d97326bcba1de4687668e2602b22c110 |
## How to use the data sets
This dataset contains about 36,000 unique pairs of protein sequences and ligand SMILES, and the coordinates
of their complexes from the PDB.
SMILES are assumed to be tokenized by the regex from P. Schwaller.
## Ligand selection criteria
Only ligands
- that have at least 3 atoms,
- a molecular weight >= 100 Da,
- and which are not among the 280 most common ligands in the PDB (these include common additives like PEG, ADP, ...)
are considered.
### Use the already preprocessed data
Load a test/train split using
```
import pandas as pd
train = pd.read_pickle('data/pdb_train.p')
test = pd.read_pickle('data/pdb_test.p')
```
Receptor features contain protein frames and side chain angles in OpenFold/AlphaFold format.
Ligand tokens which do not correspond to atoms have `nan` as their coordinates.
Documentation by example:
```
>>> import pandas as pd
>>> test = pd.read_pickle('data/pdb_test.p')
>>> test.head(5)
pdb_id lig_id ... ligand_xyz_2d ligand_bonds
0 7k38 VTY ... [(-2.031355975502858, -1.6316778784387098, 0.0... [(0, 1), (1, 4), (4, 5), (5, 10), (10, 9), (9,...
1 6prt OWA ... [(4.883261310160714, -0.37850716807626705, 0.0... [(11, 18), (18, 20), (20, 8), (8, 7), (7, 2), ...
2 4lxx FNF ... [(8.529427756002057, 2.2434809270065372, 0.0),... [(51, 49), (49, 48), (48, 46), (46, 53), (53, ...
3 4lxx FON ... [(-10.939694946697701, -1.1876214529096956, 0.... [(13, 1), (1, 0), (0, 3), (3, 4), (4, 7), (7, ...
4 7bp1 CAQ ... [(-1.9485571585149868, -1.499999999999999, 0.0... [(4, 3), (3, 1), (1, 0), (0, 7), (7, 9), (7, 6...
[5 rows x 8 columns]
>>> test.columns
Index(['pdb_id', 'lig_id', 'seq', 'smiles', 'receptor_features', 'ligand_xyz',
'ligand_xyz_2d', 'ligand_bonds'],
dtype='object')
>>> test.iloc[0]['receptor_features']
{'rigidgroups_gt_frames': array([[[[-5.3122622e-01, 2.0922849e-01, -8.2098854e-01,
1.7295000e+01],
[-7.1005428e-01, -6.3858479e-01, 2.9670244e-01,
-9.1399997e-01],
[-4.6219218e-01, 7.4056256e-01, 4.8779655e-01,
3.3284000e+01],
[ 0.0000000e+00, 0.0000000e+00, 0.0000000e+00,
1.0000000e+00]],
...
[[ 0.0000000e+00, 0.0000000e+00, 0.0000000e+00,
-3.5030000e+00],
[ 0.0000000e+00, 0.0000000e+00, 0.0000000e+00,
2.6764999e+01],
[ 0.0000000e+00, 0.0000000e+00, 0.0000000e+00,
1.5136000e+01],
[ 0.0000000e+00, 0.0000000e+00, 0.0000000e+00,
1.0000000e+00]]]], dtype=float32), 'torsion_angles_sin_cos': array([[[-1.90855725e-09, 3.58859784e-02],
[ 1.55730803e-01, 9.87799530e-01],
[ 6.05505241e-01, -7.95841312e-01],
...,
[-2.92459433e-01, -9.56277928e-01],
[ 9.96634814e-01, -8.19697779e-02],
[ 0.00000000e+00, 0.00000000e+00]],
...
[[ 2.96455977e-04, -9.99999953e-01],
[-8.15660990e-01, 5.78530158e-01],
[-3.17915569e-01, 9.48119024e-01],
...,
[ 0.00000000e+00, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00]]])}
>>> test.iloc[0]['receptor_features'].keys()
dict_keys(['rigidgroups_gt_frames', 'torsion_angles_sin_cos'])
>>> test.iloc[0]['ligand_xyz']
[(22.289, 11.985, 9.225), (21.426, 11.623, 7.959), (nan, nan, nan), (nan, nan, nan), (21.797, 11.427, 6.574), (20.556, 11.56, 5.792), (nan, nan, nan), (20.507, 11.113, 4.552), (nan, nan, nan), (19.581, 10.97, 6.639), (20.107, 10.946, 7.954), (nan, nan, nan), (nan, nan, nan), (19.645, 10.364, 8.804)]
```
### Manual update from PDB
```
# download the PDB archive into folder pdb/
sh rsync.sh 24 # number of parallel download processes
# extract sequences and coordinates in parallel
sbatch pdb.slurm
# or
mpirun -n 42 parse_complexes.py # desired number of tasks
```
| jglaser/pdb_protein_ligand_complexes | [
"proteins",
"molecules",
"chemistry",
"SMILES",
"complex structures",
"region:us"
] | 2022-04-01T18:14:21+00:00 | {"tags": ["proteins", "molecules", "chemistry", "SMILES", "complex structures"]} | 2022-10-13T14:09:57+00:00 |
e3789d92458aeb34a189a1fff9863e6d248d891a | # Dataset Card for biomed_squad_es_v2
This Dataset was created as part of the "Extractive QA Biomedicine" project developed during the 2022 [Hackathon](https://somosnlp.org/hackathon) organized by SOMOS NLP.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This is a subset of the [dev squad_es (v2) dataset](https://huggingface.co/datasets/squad_es) (automatic translation of the Stanford Question Answering Dataset v2 into Spanish) containing questions related to the biomedical domain.
License, distribution and usage conditions of the original Squad_es Dataset apply.
### Languages
Spanish
## Dataset Structure
### Data Fields
```
{'answers': {'answer_start': [343, 343, 343],
'text': ['diez veces su propio peso',
'diez veces su propio peso',
'diez veces su propio peso']},
'context': 'Casi todos los ctenóforos son depredadores, tomando presas que van desde larvas microscópicas y rotíferos a los adultos de pequeños crustáceos; Las excepciones son los juveniles de dos especies, que viven como parásitos en las salpas en las que los adultos de su especie se alimentan. En circunstancias favorables, los ctenóforos pueden comer diez veces su propio peso en un día. Sólo 100-150 especies han sido validadas, y posiblemente otras 25 no han sido completamente descritas y nombradas. Los ejemplos de libros de texto son cidipidos con cuerpos en forma de huevo y un par de tentáculos retráctiles bordeados con tentilla ("pequeños tentáculos") que están cubiertos con colúnculos, células pegajosas. El filo tiene una amplia gama de formas corporales, incluyendo los platyctenidos de mar profundo, en los que los adultos de la mayoría de las especies carecen de peines, y los beroides costeros, que carecen de tentáculos. Estas variaciones permiten a las diferentes especies construir grandes poblaciones en la misma área, porque se especializan en diferentes tipos de presas, que capturan por una amplia gama de métodos que utilizan las arañas.',
'id': '5725c337271a42140099d165',
'question': '¿Cuánta comida come un Ctenophora en un día?',
'title': 'Ctenophora'}
```
### Data Splits
Validation: 1137 examples
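A minimal usage sketch (assuming this repository can be read with the standard `datasets` loader and exposes the validation split under the usual name):
```python
# Minimal sketch, assuming the standard `datasets` loader works for this repo
# and the split is exposed as "validation".
from datasets import load_dataset

ds = load_dataset("hackathon-pln-es/biomed_squad_es_v2")
example = ds["validation"][0]
print(example["question"], "->", example["answers"]["text"])
```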
### Citation Information
```
@article{2016arXiv160605250R,
  author = {Carrino, Casimiro Pio and Costa-jussa, Marta R. and Fonollosa, Jose A. R.},
title = "{Automatic Spanish Translation of the SQuAD Dataset for Multilingual
Question Answering}",
journal = {arXiv e-prints},
year = 2019,
eid = {arXiv:1912.05200v1},
pages = {arXiv:1912.05200v1},
archivePrefix = {arXiv},
eprint = {1912.05200v2},
}
```
## Team
Santiago Maximo: [smaximo](https://huggingface.co/smaximo) | hackathon-pln-es/biomed_squad_es_v2 | [
"arxiv:1912.05200",
"region:us"
] | 2022-04-02T02:05:44+00:00 | {} | 2022-04-03T16:46:58+00:00 |
db1a30d40f031f31dc49353f7dc40b08fea6719a |
# RuNNE dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Citation Information](#citation-information)
- [Contacts](#contacts)
## Dataset Description
Part of the NEREL dataset (https://arxiv.org/abs/2108.13112), a Russian dataset
for named entity recognition and relation extraction, used in the RuNNE (2022)
competition (https://github.com/dialogue-evaluation/RuNNE).
Entities may be nested (see https://arxiv.org/abs/2108.13112).
Entity types list:
* AGE
* AWARD
* CITY
* COUNTRY
* CRIME
* DATE
* DISEASE
* DISTRICT
* EVENT
* FACILITY
* FAMILY
* IDEOLOGY
* LANGUAGE
* LAW
* LOCATION
* MONEY
* NATIONALITY
* NUMBER
* ORDINAL
* ORGANIZATION
* PENALTY
* PERCENT
* PERSON
* PRODUCT
* PROFESSION
* RELIGION
* STATE_OR_PROVINCE
* TIME
* WORK_OF_ART
## Dataset Structure
There are two "configs" or "subsets" of the dataset.
Using
`load_dataset('MalakhovIlya/RuNNE', 'ent_types')['ent_types']`
you can download the list of entity types:
Dataset({
    features: ['type'],
    num_rows: 29
})
Using
`load_dataset('MalakhovIlya/RuNNE', 'data')` or `load_dataset('MalakhovIlya/RuNNE')`
you can download the data itself (DatasetDict)
The dataset consists of 3 splits: "train", "test" and "dev". Each of them contains text documents. The "train" and "test" splits also contain annotated entities; "dev" doesn't.
Each entity is represented by a string of the following format: "\<start> \<stop> \<type>", where \<start> is the position of the first symbol of the entity in the text, \<stop> is the position of the last symbol, and \<type> is one of the entity types listed above.
P.S.
The original NEREL dataset also contains relations, events and linked entities, but they have not been added here yet ¯\\\_(ツ)_/¯
## Citation Information
```
@article{Artemova2022runne,
    title={{RuNNE-2022 Shared Task: Recognizing Nested Named Entities}},
    author={Artemova, Ekaterina and Zmeev, Maksim and Loukachevitch, Natalia and Rozhkov, Igor and Batura, Tatiana and Braslavski, Pavel and Ivanov, Vladimir and Tutubalina, Elena},
    journal={Computational Linguistics and Intellectual Technologies: Proceedings of the International Conference "Dialog"},
    year={2022}
}
```
| iluvvatar/RuNNE | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"language:ru",
"arxiv:2108.13112",
"region:us"
] | 2022-04-02T06:55:42+00:00 | {"language": ["ru"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "RuNNE"} | 2023-03-30T12:36:53+00:00 |
64fd51e4bb4d4d41e59df46d597725468c716c97 |
To fine-tune and evaluate BERT-like pre-trained models and obtain better text representation models, we collected and organized open-source datasets for semantic textual similarity (STS), natural language inference (NLI), question matching (QMC), and relevance, described in detail below:
| Type | Dataset | Description | Size |
| -------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | -------------------------------------------------- |
| **General domain** | [OCNLI](https://www.cluebenchmarks.com/introduce.html) | A native Chinese natural language inference dataset, the first large-scale Chinese NLI dataset that is not translated and uses original Chinese. OCNLI is part of the Chinese Language Understanding Evaluation benchmark (CLUE). | **Train**: 50437, **Dev**: 2950 |
| | [CMNLI](https://github.com/pluto-junzeng/CNSD) | Translated from the English NLI datasets XNLI and MNLI; formerly part of the CLUE benchmark, now replaced by OCNLI. | **Train**: 391783, **Dev**: 12241 |
| | [CSNLI](https://github.com/pluto-junzeng/CNSD) | Translated from the English NLI dataset SNLI. | **Train**: 545833, **Dev**: 9314, **Test**: 9176 |
| | [STS-B-Chinese](https://github.com/pluto-junzeng/CNSD) | Translated from the English semantic similarity dataset STSbenchmark. | **Train**: 5231, **Dev**: 1458, **Test**: 1361 |
| | [PAWS-X](https://www.luge.ai/#/luge/dataDetail?id=16) | A paraphrase (meaning) matching dataset characterized by highly overlapping vocabulary; it focuses on testing a model's understanding of syntactic structure. | **Train**: 49401, **Dev**: 2000, **Test**: 2000 |
| | [PKU-Paraphrase-Bank](https://github.com/pkucoli/PKU-Paraphrase-Bank/) | A Chinese sentence paraphrasing dataset, i.e., the same sentence described in a different way with the meaning kept unchanged. | 509832 sentence pairs in total |
| **Question matching** | [LCQMC](https://www.luge.ai/#/luge/dataDetail?id=14) | A large-scale Chinese question matching dataset in the Baidu Knows domain, built from user questions extracted from different domains of Baidu Knows. | **Train**: 238766, **Dev**: 8802, **Test**: 12500 |
| | [BQCorpus](https://www.luge.ai/#/luge/dataDetail?id=15) | Question matching data from the banking and finance domain, containing question pairs extracted from one year of online banking system logs; currently the largest question matching dataset in the banking domain. | **Train**: 100000, **Dev**: 10000, **Test**: 10000 |
| | [AFQMC](https://www.cluebenchmarks.com/introduce.html) | A question matching dataset from real financial business scenarios at Ant Financial (anonymized); part of the CLUE benchmark. | **Train**: 34334, **Dev**: 4316 |
| | [DuQM](https://www.luge.ai/#/luge/dataDetail?id=27) | A question matching evaluation dataset (labels not publicly released), [part](https://github.com/baidu/DuReader/tree/master/DuQM) of the large-scale Baidu reading comprehension dataset (DuReader). | 50000 sentence pairs in total |
| **Dialogue and search** | [BUSTM: OPPO-xiaobu](https://www.luge.ai/#/luge/dataDetail?id=28) | Built by anonymizing user information in, and similarity-filtering, real user interaction corpora from domains such as chit-chat, intelligent customer service, audio/video entertainment, and information queries. The main characteristics of this Dialogue Short Text Matching dataset are short, highly colloquial texts and hard examples whose texts are highly similar but semantically different. | **Train**: 167173, **Dev**: 10000 |
| | [QBQTC](https://github.com/CLUEbenchmark/QBQTC) | The QQ Browser search relevance dataset (QBQTC, QQ Browser Query Title Corpus), a learning-to-rank (LTR) dataset built by the QQ Browser search engine for large-scale search scenarios, annotated along dimensions such as relevance, authority, content quality, and timeliness; widely used in search engine business scenarios. (Relevance labels: 0 = poorly related; 1 = somewhat related; 2 = highly related.) | **Train**: 180000, **Dev**: 20000, **Test**: 5000 |
*The datasets above were mainly collected and organized from [CLUE](https://www.cluebenchmarks.com/introduce.html) (the Chinese Language Understanding Evaluation benchmark), [SimCLUE](https://github.com/CLUEbenchmark/SimCLUE) (which integrates many open-source text similarity datasets), and the text similarity datasets of [Baidu Qianyan](https://www.luge.ai/#/).*
The following **evaluation benchmark** was built from the datasets collected above:
| Name | Size | Type |
| ---------------------- | ----- | ------------- |
| **Chinese-STS-B-dev** | 1458 | label=0.0~1.0 |
| **Chinese-STS-B-test** | 1361 | label=0.0~1.0 |
| **afqmc-dev** | 4316 | label=0,1 |
| **lcqmc-dev** | 8802 | label=0,1 |
| **bqcorpus-dev** | 10000 | label=0,1 |
| **pawsx_dev** | 2000 | label=0,1 |
| **oppo-xiaobu-dev** | 10000 | label=0,1 |
*TODO: the collected datasets still need to be expanded in both quantity and diversity to more faithfully reflect the performance of representation models.*
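A minimal sketch of how a sentence-embedding model could be scored on the STS-style part of this benchmark (the model name is a placeholder, and `sents1`, `sents2`, `gold` are assumed to be loaded from one of the dev sets above; this is not an official evaluation script):
```python
# Minimal sketch, not an official evaluation script. Assumptions:
# `sents1`/`sents2` are parallel lists of Chinese sentences, `gold` their
# similarity labels, and the model name is only a placeholder.
from sentence_transformers import SentenceTransformer, util
from scipy.stats import spearmanr

model = SentenceTransformer("any-chinese-sentence-embedding-model")  # placeholder
emb1 = model.encode(sents1, convert_to_tensor=True)
emb2 = model.encode(sents2, convert_to_tensor=True)
pred = util.cos_sim(emb1, emb2).diagonal().cpu().numpy()  # per-pair cosine scores
print(spearmanr(pred, gold).correlation)  # Spearman correlation vs. gold labels
```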
| DMetaSoul/chinese-semantic-textual-similarity | [
"license:apache-2.0",
"region:us"
] | 2022-04-02T09:10:43+00:00 | {"license": "apache-2.0"} | 2022-04-02T09:38:47+00:00 |
a6b8d891d393e97a4efac791afffb2d7de5e57c6 | # Dataset Card for fever_gold_evidence
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/copenlu/fever-adversarial-attacks
- **Repository:** https://github.com/copenlu/fever-adversarial-attacks
- **Paper:** https://aclanthology.org/2020.emnlp-main.256/
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Dataset for training classification-only fact checking with claims from the FEVER dataset.
This dataset is used in the paper "Generating Label Cohesive and Well-Formed Adversarial Claims", EMNLP 2020
The evidence is the gold evidence from the FEVER dataset for *REFUTE* and *SUPPORT* claims.
For *NEI* claims, we extract evidence sentences with the system in "Christopher Malon. 2018. Team Papelo: Transformer Networks at FEVER. In Proceedings of the
First Workshop on Fact Extraction and VERification (FEVER), pages 109-113."
More details can be found in https://github.com/copenlu/fever-adversarial-attacks
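A minimal usage sketch (assuming this repository can be read with the standard `datasets` loader):
```python
# Minimal sketch, assuming the standard `datasets` loader works for this repo.
from datasets import load_dataset

ds = load_dataset("copenlu/fever_gold_evidence")
print(ds)              # available splits and features
print(ds["train"][0])  # one claim/evidence/label example (assumes a train split)
```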
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@inproceedings{atanasova-etal-2020-generating,
title = "Generating Label Cohesive and Well-Formed Adversarial Claims",
author = "Atanasova, Pepa and
Wright, Dustin and
Augenstein, Isabelle",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.emnlp-main.256",
doi = "10.18653/v1/2020.emnlp-main.256",
pages = "3168--3177",
abstract = "Adversarial attacks reveal important vulnerabilities and flaws of trained models. One potent type of attack are universal adversarial triggers, which are individual n-grams that, when appended to instances of a class under attack, can trick a model into predicting a target class. However, for inference tasks such as fact checking, these triggers often inadvertently invert the meaning of instances they are inserted in. In addition, such attacks produce semantically nonsensical inputs, as they simply concatenate triggers to existing samples. Here, we investigate how to generate adversarial attacks against fact checking systems that preserve the ground truth meaning and are semantically valid. We extend the HotFlip attack algorithm used for universal trigger generation by jointly minimizing the target class loss of a fact checking model and the entailment class loss of an auxiliary natural language inference model. We then train a conditional language model to generate semantically valid statements, which include the found universal triggers. We find that the generated attacks maintain the directionality and semantic validity of the claim better than previous work.",
}
``` | copenlu/fever_gold_evidence | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|fever",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"region:us"
] | 2022-04-02T13:52:35+00:00 | {"annotations_creators": ["machine-generated", "expert-generated"], "language_creators": ["machine-generated", "crowdsourced"], "language": ["en"], "license": ["cc-by-sa-3.0", "gpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|fever"], "task_categories": ["text-classification"], "task_ids": ["fact-checking"], "paperswithcode_id": "fever", "pretty_name": ""} | 2022-11-17T11:42:54+00:00 |
d267838191dbf769374ef1f8ce0c0a04a413b18a | # Wordnet definitions for English
Dataset by Princeton WordNet and the Open English WordNet team
https://github.com/globalwordnet/english-wordnet
This dataset contains every entry in wordnet that has a definition and an example.
Be aware that the word "null" can be misinterpreted as a null value when loading the data with e.g. pandas.
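A minimal sketch of guarding against that (the file name is a placeholder; `keep_default_na=False` stops pandas from parsing the literal word "null" as a missing value):
```python
# Minimal sketch; "data.csv" is a placeholder for the actual file in this repo.
import pandas as pd

# keep_default_na=False keeps the legitimate WordNet entry "null" as a string
# instead of converting it to NaN.
df = pd.read_csv("data.csv", keep_default_na=False)
``` | marksverdhei/wordnet-definitions-en-2021 | [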
"region:us"
] | 2022-04-02T18:02:14+00:00 | {} | 2022-04-04T20:55:03+00:00 |
49cf0593a2baf2fd848d81470d7c439c3ab8d3ec | This dataset was previously created on Kaggle by [Andrea Morales Garzón](https://huggingface.co/andreamorgar).
[Kaggle link](https://www.kaggle.com/andreamorgar/spanish-poetry-dataset/version/1) | hackathon-pln-es/spanish-poetry-dataset | [
"region:us"
] | 2022-04-03T02:31:57+00:00 | {} | 2022-04-03T02:34:26+00:00 |
aa48b3c7f4d0c1450f8f2df27ceb8a882b022600 |
# Spanish to Quechua
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [team members](#team-members)
## Dataset Description
This dataset is a compilation of websites and other datasets listed in the [dataset creation section](#dataset-creation). It contains translations from Spanish (es) to Quechua of Ayacucho (qu).
## Dataset Structure
### Data Fields
- es: The sentence in Spanish.
- qu: The sentence in Quechua of Ayacucho.
### Data Splits
- train: To train the model (102 747 sentences).
- Validation: To validate the model during training (12 844 sentences).
- test: To evaluate the model when the training is finished (12 843 sentences).
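A minimal usage sketch (assuming this repository can be read with the standard `datasets` loader):
```python
# Minimal sketch, assuming the standard `datasets` loader works for this repo.
from datasets import load_dataset

ds = load_dataset("hackathon-pln-es/spanish-to-quechua")
pair = ds["train"][0]
print(pair["es"], "->", pair["qu"])  # a Spanish sentence and its Quechua translation
```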
## Dataset Creation
### Source Data
This dataset has generated from:
- "Mundo Quechua" by "Ivan Acuña" - [available here](https://mundoquechua.blogspot.com/2006/07/frases-comunes-en-quechua.html)
- "Kuyakuykim (Te quiero): Apps con las que podrías aprender quechua" by "El comercio" - [available here](https://elcomercio.pe/tecnologia/actualidad/traductor-frases-romanticas-quechua-noticia-467022-noticia/)
- "Piropos y frases de amor en quechua" by "Soy Quechua" - [available here](https://www.soyquechua.org/2019/12/palabras-en-quechua-de-amor.html)
- "Corazón en quechua" by "Soy Quechua" - [available here](https://www.soyquechua.org/2020/05/corazon-en-quechua.html)
- "Oraciones en Español traducidas a Quechua" by "Tatoeba" - [available here](https://tatoeba.org/es/sentences/search?from=spa&query=&to=que)
- "AmericasNLP 2021 Shared Task on Open Machine Translation" by "americasnlp2021" - [available here](https://github.com/AmericasNLP/americasnlp2021/tree/main/data/quechua-spanish/parallel_data/es-quy)
### Data cleaning
- The dataset was manually cleaned during compilation, as some words of one language were related to several words of the other language.
## Considerations for Using the Data
This is a first version of the dataset; we expect to improve it over time, and especially to neutralize the biblical themes.
## Team members
- [Sara Benel](https://huggingface.co/sbenel)
- [Jose Vílchez](https://huggingface.co/JCarlos) | hackathon-pln-es/spanish-to-quechua | [
"task_categories:translation",
"language:es",
"language:qu",
"region:us"
] | 2022-04-03T03:02:58+00:00 | {"language": ["es", "qu"], "task_categories": ["translation"], "task": ["translation"]} | 2022-10-25T09:03:46+00:00 |
c8d301967424c6c7a3632b863453ddcd1fa60cd3 | aymen31/PlantVillage | [
"license:other",
"region:us"
] | 2022-04-03T03:35:03+00:00 | {"license": "other"} | 2022-04-03T03:41:23+00:00 |