datasetId
stringlengths
2
81
card
stringlengths
20
977k
iamtarun/python_code_instructions_18k_alpaca
--- dataset_info: features: - name: instruction dtype: string - name: input dtype: string - name: output dtype: string - name: prompt dtype: string splits: - name: train num_bytes: 25180782 num_examples: 18612 download_size: 11357076 dataset_size: 25180782 configs: - config_name: default data_files: - split: train path: data/train-* task_categories: - question-answering - text2text-generation - text-generation tags: - code size_categories: - 10K<n<100K --- # Dataset Card for python_code_instructions_18k_alpaca The dataset contains problem descriptions and code in python language. This dataset is taken from [sahil2801/code_instructions_120k](https://huggingface.co/datasets/sahil2801/code_instructions_120k), which adds a prompt column in alpaca style. Refer to the source [here](https://huggingface.co/datasets/sahil2801/code_instructions_120k).
stanfordnlp/SHP
--- task_categories: - text-generation - question-answering tags: - human feedback - rlhf - preferences - reddit - preference model - RL - NLG - evaluation size_categories: - 100K<n<1M language: - en --- # 🚢 Stanford Human Preferences Dataset (SHP) **If you mention this dataset in a paper, please cite the paper:** [Understanding Dataset Difficulty with V-Usable Information (ICML 2022)](https://proceedings.mlr.press/v162/ethayarajh22a.html). ## Summary SHP is a dataset of **385K collective human preferences** over responses to questions/instructions in 18 different subject areas, from cooking to legal advice. The preferences are meant to reflect the helpfulness of one response over another, and are intended to be used for training RLHF reward models and NLG evaluation models (e.g., [SteamSHP](https://huggingface.co/stanfordnlp/SteamSHP-flan-t5-xl)). Each example is a Reddit post with a question/instruction and a pair of top-level comments for that post, where one comment is more preferred by Reddit users (collectively). SHP exploits the fact that if comment A was written *after* comment B but has a higher score nonetheless, then A is ostensibly more preferred to B. If A had been written before B, then we could not conclude this, since its higher score could have been the result of more visibility. We chose data where the preference label is intended to reflect which response is more *helpful* rather than which is less *harmful*, the latter being the focus of much past work. How is SHP different from [Anthropic's HH-RLHF dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf)? Most notably, all the data in SHP is naturally occurring and human-written, whereas the responses in HH-RLHF are machine-written, giving us two very different distributions that can complement each other. | Dataset | Size | Input | Label | Domains | Data Format | Length | | -------------------- | ---- | -------------------------- | ---------------------------- | ------------------------- | ------------------------------------- | --------------- | | SHP | 385K | Naturally occurring human-written responses | Collective Human Preference | 18 (labelled) | Question/Instruction + Response (Single-turn) | up to 10.1K T5 tokens | | HH-RLHF | 91K | Dialogue with LLM | Individual Human Preference | not labelled | Live Chat (Multi-turn) | up to 1.5K T5 tokens | How is SHP different from other datasets that have scraped Reddit, like [ELI5](https://huggingface.co/datasets/eli5#source-data)? SHP uses the timestamp information to infer preferences, while ELI5 only provides comments and scores -- the latter are not enough to infer preferences since comments made earlier tend to get higher scores from more visibility. It also contains data from more domains: | Dataset | Size | Comments + Scores | Preferences | Number of Domains | | -------------------- | ---- | ------------------ | -------------| ------------------ | | SHP | 385K | Yes | Yes | 18 | | ELI5 | 270K | Yes | No | 3 | ## Data Structure There are 18 directories, one for each subreddit, and each directory contains a JSONL file for the training, validation, and test data. Here's how to get the data using Huggingface's `datasets` library: ```python from datasets import load_dataset # Load all the data dataset = load_dataset("stanfordnlp/shp") # Load one of the subreddits dataset = load_dataset("stanfordnlp/shp", data_dir="askculinary") ``` Here's an example from `askculinary/train.json`: ``` { `post_id`:"qt3nxl", `domain`:"askculinary_train", `upvote_ratio`:0.98, `history`:"What's the best way to disassemble raspberries? Like this, but down to the individual seeds: https:\/\/i.imgur.com\/Z0c6ZKE.jpg I've been pulling them apart with tweezers and it's really time consuming. I have about 10 pounds to get through this weekend.", `c_root_id_A`:"hkh25sc", `c_root_id_B`:"hkh25lp", `created_at_utc_A`:1636822112, `created_at_utc_B`:1636822110, `score_A`:340, `score_B`:166, `human_ref_A`:"Pectinex, perhaps? It's an enzyme that breaks down cellulose. With citrus, you let it sit in a dilute solution of pectinex overnight to break down the connective tissues. You end up with perfect citrus supremes. If you let the raspberries sit for a shorter time, I wonder if it would separate the seeds the same way...? Here's an example: https:\/\/www.chefsteps.com\/activities\/perfect-citrus-supreme", `human_ref_B`:"Raspberry juice will make a bright stain at first, but in a matter of weeks it will start to fade away to almost nothing. It is what is known in the natural dye world as a fugitive dye, it will fade even without washing or exposure to light. I hope she gets lots of nice photos of these stains on her dress, because soon that will be all she has left of them!", `labels`:1, `seconds_difference`:2.0, `score_ratio`:2.0481927711 } ``` where the fields are: - ```post_id```: the ID of the Reddit post (string) - ```domain```: the subreddit and split the example is drawn from, separated by an underscore (string) - ```upvote_ratio```: the percent of votes received by the post that were positive (aka upvotes) (float) - ```history```: the post title concatented to the post body (string) - ```c_root_id_A```: the ID of comment A (string) - ```c_root_id_B```: the ID of comment B (string) - ```created_at_utc_A```: utc timestamp of when comment A was created (integer) - ```created_at_utc_B```: utc timestamp of when comment B was created (integer) - ```score_A```: (# positive votes - # negative votes + 1) received by comment A (integer) - ```score_B```: (# positive votes - # negative votes + 1) received by comment B (integer) - ```human_ref_A```: text of comment A (string) - ```human_ref_B```: text of comment B (string) - ```labels```: the preference label -- it is 1 if A is preferred to B; 0 if B is preferred to A. This was randomized such that the label distribution is roughly 50/50. (integer) - ```seconds_difference```: how many seconds after the less preferred comment the more preferred one was created (will always be >= 0) (integer) - ```score_ratio```: the ratio of the more preferred comment's score to the less preferred comment's score (will be >= 1) (float) ## Dataset Design ### Domain Selection The data is sourced from Reddit, which is a public forum organized into topic-specific fora called *subreddits*. For example, the `askculinary` subreddit is where users ask cooking-related questions and are answered by other users. SHP contains a train, validation, and test split for comments scraped from 18 different subreddits. We chose subreddits based on: 1. whether they were well-known (subscriber count >= 100K) 2. whether posts were expected to pose a question or instruction 3. whether responses were valued based on how *helpful* they were 4. whether comments had to be rooted in some objectivity, instead of being entirely about personal experiences (e.g., `askscience` vs. `AskAmericans`) The train/validation/test splits were created by splitting the post IDs of a subreddit in 90%/5%/5% proportions respectively, so that no post would appear in multiple splits. Since different posts have different numbers of comments, the number of preferences in each split is not exactly 90%/5%/5%: | subreddit | train | validation | test | total | | ------------------ | -------: | ---------: | ---: | ----: | | askacademia | 31450 | 2095 | 1708 | 35253 | | askanthropology | 3910 | 203 | 268 | 4381 | | askbaking | 44007 | 2096 | 1544 | 47647 | | askcarguys | 3227 | 159 | 117 | 3503 | | askculinary | 45710 | 2094 | 2563 | 50367 | | askdocs | 6449 | 315 | 455 | 7219 | | askengineers | 57096 | 3154 | 2638 | 62888 | | askhistorians | 3264 | 113 | 164 | 3541 | | askhr | 8295 | 641 | 395 | 9331 | | askphilosophy | 10307 | 608 | 677 | 11592 | | askphysics | 7364 | 409 | 587 | 8360 | | askscience | 13316 | 899 | 977 | 15192 | | asksciencefiction | 29382 | 1576 | 1987 | 32945 | | asksocialscience | 2706 | 147 | 188 | 3041 | | askvet | 3300 | 170 | 224 | 3694 | | changemyview | 38173 | 1637 | 1836 | 41646 | | explainlikeimfive | 19592 | 1014 | 1070 | 21676 | | legaladvice | 21170 | 1106 | 1011 | 23287 | | ALL | 348718 | 18436 | 18409 | 385563 | ### Data Selection The score of a post/comment is 1 plus the number of upvotes (approvals) it gets from users, minus the number of downvotes (disapprovals) it gets. The value of a score is relative; in subreddits(posts) with more traffic, there will be more higher-scoring posts(comments). Within a post, comments posted earlier will tend to have a higher score simply due to having more exposure, which is why using timestamp information is essential when inferring preferences. Given a post P and two comments (A,B) we only included the preference A > B in the dataset if 1. A was written *no later than* B and A has a higher score than B. 2. The post is a self-post (i.e., a body of text and not a link to another page) made before 2023, was not edited, and is not NSFW (over 18). 3. Neither comment was made by a deleted user, a moderator, or the post creator. The post was not made by a deleted user or moderator. 4. The post has a score >= 10 and each comment has a score >= 2 (upvoted at least once). A post with `n` comments could have up to (`n` choose `2`) preferences in the data. Since the number of comments per post is Pareto-distributed, to prevent a relatively small number of posts from dominating the data, we limited the scraping to 50 comments per post. This means that each post could have up to (`50` choose `2`) comments in the dataset, though this is a much smaller number in practice, since all the criteria above need to be met. Reddit makes it very difficult to get anything beyond the top 1000 posts for each subreddit. We started with the top-scoring 1000 posts (of all time) and searched for the 25 most similar posts to each one using Reddit's search function to get up to 7500 unique post IDs per subreddit. ### Preprocessing We tried to keep preprocessing to a minimum. Subreddit-specific abbreviations were expanded (e.g., "CMV" to "Change my view that"). In hyperlinks, only the referring text was kept and the URL was removed (if the URL was written out, then it was kept). ## Building a Preference Model ### Finetuning If you want to finetune a model to predict human preferences (e.g., for NLG evaluation or an RLHF reward model), here are some helpful tips: 1. **Preprocess the data.** The total input length should fit under the model's token limit (usually 512 tokens). Although models like FLAN-T5 use positional embeddings, we found that the loss would not converge if we finetuned it on inputs over 512 tokens. To avoid this, truncate the post text (in the `history` field) as much as possible, such that the whole input is under 512 tokens (do not truncate the comment(s) however). If this is still over 512 tokens, simply skip the example. 2. **Use a sufficiently large model.** Finetuning a single FLAN-T5-xl model across all the training data should give you a test accuracy between 72-73% (across all domains on examples where the entire input fits within the token limit), ranging from 65-80% on individual subreddits. 3. **Do in-domain prediction.** Out-of-domain performance will be poor if the subreddits are unrelated (e.g., if you fine-tune on `askculinary` preferences and test on `askcarguys` preferences). 4. **Train for fewer epochs.** The InstructGPT paper paper suggests training a reward model for only 1 epoch. Since the same comment appears in multiple preferences, it is easy to overfit to the data. 5. **Training on less data may help.** Preferences with a large `score_ratio` (e.g., comment A having 2x the score of comment B) will provide a stronger signal for finetuning the model, so you may only want to consider preferences above a certain `score_ratio`. The number of preferences per post is Pareto-distributed, so to prevent the model from over-fitting to certain posts, you may want to limit the number of preferences from a particular post. ### Evaluating Since it is easier to predict strongly-held preferences than weakly-held ones, instead of reporting a single accuracy value, we recommend reporting a performance curve as a function of the `score_ratio`. For example, here is the accuracy curve for a FLAN-T5-xl model trained on the askculinary data using the suggestions above. The orange line is from finetuning only on preferences with a 2+ score ratio and using no more than 5 preferences from each post to prevent overfitting: ![Graph](curve.png) We see that finetuning on less -- but higher quality -- data leads to higher accuracies on test data with a score ratio below 3.5, with no real downsides! Note that any examples whose inputs did not fit within the token limit were left out of the experiment, since the model could not be expected to handle them. ### SteamSHP - An Open-Source Preference Model We have finetuned two FLAN-T5 models on both the SHP dataset and the helpfulness data from Anthropic's HH-RLHF. They are - [SteamSHP-XL](https://huggingface.co/stanfordnlp/SteamSHP-flan-t5-xl), a 3B parameter model that achieves 72.8% on the test data. - [SteamSHP-Large](https://huggingface.co/stanfordnlp/SteamSHP-flan-t5-large), a 780M parameter model that achieves 72.0% on the test data. We encourage you to use SteamSHP for NLG evaluation, for building reward models for RLHF, or for another purpose you deem fit! ## Biases and Limitations ### Biases Although we filtered out posts with NSFW (over 18) content, chose subreddits that were well-moderated and had policies against harassment and bigotry, some of the data may contain discriminatory or harmful language. The data does not reflect the views of the dataset creators. Reddit users on these subreddits are also not representative of the broader population. Although subreddit-specific demographic information is not available, Reddit users overall are disproportionately male and from developed, Western, and English-speaking countries ([Pew Research](https://www.pewresearch.org/internet/2013/07/03/6-of-online-adults-are-reddit-users/)). Please keep this in mind before using any models trained on this data. ### Limitations The preference label in SHP is intended to reflect how *helpful* one response is relative to another, given an instruction/question. SHP is not intended for use in harm-minimization, as it was not designed to include the toxic content that would be necessary to learn a good toxicity detector. If you are looking for data where the preference label denotes less harm, we would recommend the harmfulness split of [Anthropic's HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf). Another limitation is that the more preferred response in SHP is not necessarily the more factual one. Though some comments do provide citations to justify their response, most do not. There are exceptions to this, such as the `askhistorians` subreddit, which is heavily moderated and answers are expected to provide citations. Note that the collective preference label in SHP is not necessarily what we would get if we asked users to independently vote on each comment before taking an unweighted sum. This is because comment scores on Reddit are public and are known to influence user preferences; a high score increases the likelihood of getting more positive votes [(Muchnik et al., 2013)](https://pubmed.ncbi.nlm.nih.gov/23929980/). Whether this "herding effect" temporarily or permanently shifts a user's preference is unclear. Therefore, while SHP does reflect collective human preferences, models trained on SHP may not generalize to settings where individual preferences are aggregated differently (e.g., users vote independently without ever seeing the current comment score, users vote after conferring, etc.). Thanks to Greg Stoddard for pointing this out. ## License Last updated: 03/01/2023 This dataset was made by scraping Reddit in accordance with the [Reddit API Terms of Use](https://docs.google.com/a/reddit.com/forms/d/e/1FAIpQLSezNdDNK1-P8mspSbmtC2r86Ee9ZRbC66u929cG2GX0T9UMyw/viewform), without any direct communication or written agreements with Reddit. According to the Terms of Use, "User Content" is owned by the users themselves -- not by Reddit -- and Reddit grants a "non-exclusive, non-transferable, non-sublicensable, and revocable license to copy and display the User Content". Datasets made by scraping Reddit are widely used in the research community: for example, Facebook AI Research used data scraped from Reddit to make the [ELI5](https://huggingface.co/datasets/eli5#source-data) dataset in 2019, which was made available without a license. Anthropic AI has also [attested to scraping Reddit](https://arxiv.org/pdf/2112.00861.pdf) for preferences using a different methodology, though this data was not made public. The [PushShift Reddit dataset](https://arxiv.org/abs/2001.08435), which makes entire dumps of Reddit available on a regular schedule, is also made available without a license (to our knowledge). We take no responsibility for and we do not expressly or implicitly endorse any downstream use of this dataset. We reserve the right to modify the SHP dataset and this license at any point in the future. ## Contact Please contact kawin@stanford.edu if you have any questions about the data. This dataset was created by Kawin Ethayarajh, Heidi (Chenyu) Zhang, Yizhong Wang, and Dan Jurafsky. ## Citation SHP was created using the techniques proposed in the following paper. Please cite this work if you use SHP or the SteamSHP models: ``` @InProceedings{pmlr-v162-ethayarajh22a, title = {Understanding Dataset Difficulty with $\mathcal{V}$-Usable Information}, author = {Ethayarajh, Kawin and Choi, Yejin and Swayamdipta, Swabha}, booktitle = {Proceedings of the 39th International Conference on Machine Learning}, pages = {5988--6008}, year = {2022}, editor = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan}, volume = {162}, series = {Proceedings of Machine Learning Research}, month = {17--23 Jul}, publisher = {PMLR}, } ``` ## References Ethayarajh, K., Choi, Y. &amp; Swayamdipta, S. (2022). Understanding Dataset Difficulty with $\mathcal{V}$-Usable Information. <i>Proceedings of the 39th International Conference on Machine Learning</i>, in <i>Proceedings of Machine Learning Research</i>. 162:5988-6008 Available from https://proceedings.mlr.press/v162/ethayarajh22a.html.
jmhessel/newyorker_caption_contest
--- annotations_creators: - expert-generated - crowdsourced - found language_creators: - crowdsourced - expert-generated language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - image-to-text - multiple-choice - text-classification - text-generation - visual-question-answering - other - text2text-generation task_ids: - multi-class-classification - language-modeling - visual-question-answering - explanation-generation pretty_name: newyorker_caption_contest tags: - humor - caption contest - new yorker dataset_info: - config_name: explanation features: - name: image dtype: image - name: contest_number dtype: int32 - name: image_location dtype: string - name: image_description dtype: string - name: image_uncanny_description dtype: string - name: entities sequence: string - name: questions sequence: string - name: caption_choices dtype: string - name: from_description dtype: string - name: label dtype: string - name: n_tokens_label dtype: int32 - name: instance_id dtype: string splits: - name: train num_bytes: 133827514.64 num_examples: 2340 - name: validation num_bytes: 8039885.0 num_examples: 130 - name: test num_bytes: 6863533.0 num_examples: 131 download_size: 139737042 dataset_size: 148730932.64 - config_name: explanation_1 features: - name: image dtype: image - name: contest_number dtype: int32 - name: image_location dtype: string - name: image_description dtype: string - name: image_uncanny_description dtype: string - name: entities sequence: string - name: questions sequence: string - name: caption_choices dtype: string - name: from_description dtype: string - name: label dtype: string - name: n_tokens_label dtype: int32 - name: instance_id dtype: string splits: - name: train num_bytes: 136614332.45999998 num_examples: 2358 - name: validation num_bytes: 7911995.0 num_examples: 128 - name: test num_bytes: 8039885.0 num_examples: 130 download_size: 134637839 dataset_size: 152566212.45999998 - config_name: explanation_2 features: - name: image dtype: image - name: contest_number dtype: int32 - name: image_location dtype: string - name: image_description dtype: string - name: image_uncanny_description dtype: string - name: entities sequence: string - name: questions sequence: string - name: caption_choices dtype: string - name: from_description dtype: string - name: label dtype: string - name: n_tokens_label dtype: int32 - name: instance_id dtype: string splits: - name: train num_bytes: 138337491.342 num_examples: 2346 - name: validation num_bytes: 7460490.0 num_examples: 132 - name: test num_bytes: 7911995.0 num_examples: 128 download_size: 138271185 dataset_size: 153709976.342 - config_name: explanation_3 features: - name: image dtype: image - name: contest_number dtype: int32 - name: image_location dtype: string - name: image_description dtype: string - name: image_uncanny_description dtype: string - name: entities sequence: string - name: questions sequence: string - name: caption_choices dtype: string - name: from_description dtype: string - name: label dtype: string - name: n_tokens_label dtype: int32 - name: instance_id dtype: string splits: - name: train num_bytes: 138247435.342 num_examples: 2334 - name: validation num_bytes: 7911920.0 num_examples: 130 - name: test num_bytes: 7460490.0 num_examples: 132 download_size: 136862726 dataset_size: 153619845.342 - config_name: explanation_4 features: - name: image dtype: image - name: contest_number dtype: int32 - name: image_location dtype: string - name: image_description dtype: string - name: image_uncanny_description dtype: string - name: entities sequence: string - name: questions sequence: string - name: caption_choices dtype: string - name: from_description dtype: string - name: label dtype: string - name: n_tokens_label dtype: int32 - name: instance_id dtype: string splits: - name: train num_bytes: 141175335.3 num_examples: 2340 - name: validation num_bytes: 6863533.0 num_examples: 131 - name: test num_bytes: 7911920.0 num_examples: 130 download_size: 140501251 dataset_size: 155950788.3 - config_name: explanation_from_pixels features: - name: image dtype: image - name: contest_number dtype: int32 - name: caption_choices dtype: string - name: label dtype: string - name: n_tokens_label dtype: int32 - name: instance_id dtype: string splits: - name: train num_bytes: 23039316.0 num_examples: 390 - name: validation num_bytes: 7956182.0 num_examples: 130 - name: test num_bytes: 6778892.0 num_examples: 131 download_size: 37552582 dataset_size: 37774390.0 - config_name: explanation_from_pixels_1 features: - name: image dtype: image - name: contest_number dtype: int32 - name: caption_choices dtype: string - name: label dtype: string - name: n_tokens_label dtype: int32 - name: instance_id dtype: string splits: - name: train num_bytes: 21986652.0 num_examples: 393 - name: validation num_bytes: 7831556.0 num_examples: 128 - name: test num_bytes: 7956182.0 num_examples: 130 download_size: 37534409 dataset_size: 37774390.0 - config_name: explanation_from_pixels_2 features: - name: image dtype: image - name: contest_number dtype: int32 - name: caption_choices dtype: string - name: label dtype: string - name: n_tokens_label dtype: int32 - name: instance_id dtype: string splits: - name: train num_bytes: 22566608.0 num_examples: 391 - name: validation num_bytes: 7376225.0 num_examples: 132 - name: test num_bytes: 7831556.0 num_examples: 128 download_size: 37544724 dataset_size: 37774389.0 - config_name: explanation_from_pixels_3 features: - name: image dtype: image - name: contest_number dtype: int32 - name: caption_choices dtype: string - name: label dtype: string - name: n_tokens_label dtype: int32 - name: instance_id dtype: string splits: - name: train num_bytes: 22566629.0 num_examples: 389 - name: validation num_bytes: 7831536.0 num_examples: 130 - name: test num_bytes: 7376225.0 num_examples: 132 download_size: 37573931 dataset_size: 37774390.0 - config_name: explanation_from_pixels_4 features: - name: image dtype: image - name: contest_number dtype: int32 - name: caption_choices dtype: string - name: label dtype: string - name: n_tokens_label dtype: int32 - name: instance_id dtype: string splits: - name: train num_bytes: 23163962.0 num_examples: 390 - name: validation num_bytes: 6778892.0 num_examples: 131 - name: test num_bytes: 7831536.0 num_examples: 130 download_size: 37582524 dataset_size: 37774390.0 - config_name: matching features: - name: image dtype: image - name: contest_number dtype: int32 - name: image_location dtype: string - name: image_description dtype: string - name: image_uncanny_description dtype: string - name: entities sequence: string - name: questions sequence: string - name: caption_choices sequence: string - name: from_description dtype: string - name: label dtype: string - name: n_tokens_label dtype: int32 - name: instance_id dtype: string splits: - name: train num_bytes: 618272766.36 num_examples: 9792 - name: validation num_bytes: 34157757.0 num_examples: 531 - name: test num_bytes: 29813118.0 num_examples: 528 download_size: 594460072 dataset_size: 682243641.36 - config_name: matching_1 features: - name: image dtype: image - name: contest_number dtype: int32 - name: image_location dtype: string - name: image_description dtype: string - name: image_uncanny_description dtype: string - name: entities sequence: string - name: questions sequence: string - name: caption_choices sequence: string - name: from_description dtype: string - name: label dtype: string - name: n_tokens_label dtype: int32 - name: instance_id dtype: string splits: - name: train num_bytes: 593200158.116 num_examples: 9684 - name: validation num_bytes: 36712942.0 num_examples: 546 - name: test num_bytes: 34157757.0 num_examples: 531 download_size: 563587231 dataset_size: 664070857.116 - config_name: matching_2 features: - name: image dtype: image - name: contest_number dtype: int32 - name: image_location dtype: string - name: image_description dtype: string - name: image_uncanny_description dtype: string - name: entities sequence: string - name: questions sequence: string - name: caption_choices sequence: string - name: from_description dtype: string - name: label dtype: string - name: n_tokens_label dtype: int32 - name: instance_id dtype: string splits: - name: train num_bytes: 591676321.09 num_examples: 9630 - name: validation num_bytes: 33697178.0 num_examples: 540 - name: test num_bytes: 36712942.0 num_examples: 546 download_size: 571864348 dataset_size: 662086441.09 - config_name: matching_3 features: - name: image dtype: image - name: contest_number dtype: int32 - name: image_location dtype: string - name: image_description dtype: string - name: image_uncanny_description dtype: string - name: entities sequence: string - name: questions sequence: string - name: caption_choices sequence: string - name: from_description dtype: string - name: label dtype: string - name: n_tokens_label dtype: int32 - name: instance_id dtype: string splits: - name: train num_bytes: 615620189.53 num_examples: 9630 - name: validation num_bytes: 34829502.0 num_examples: 546 - name: test num_bytes: 33697178.0 num_examples: 540 download_size: 571744845 dataset_size: 684146869.53 - config_name: matching_4 features: - name: image dtype: image - name: contest_number dtype: int32 - name: image_location dtype: string - name: image_description dtype: string - name: image_uncanny_description dtype: string - name: entities sequence: string - name: questions sequence: string - name: caption_choices sequence: string - name: from_description dtype: string - name: label dtype: string - name: n_tokens_label dtype: int32 - name: instance_id dtype: string splits: - name: train num_bytes: 609696610.648 num_examples: 9702 - name: validation num_bytes: 29813118.0 num_examples: 528 - name: test num_bytes: 34829502.0 num_examples: 546 download_size: 592174904 dataset_size: 674339230.648 - config_name: matching_from_pixels features: - name: image dtype: image - name: contest_number dtype: int32 - name: caption_choices sequence: string - name: label dtype: string - name: n_tokens_label dtype: int32 - name: instance_id dtype: string splits: - name: train num_bytes: 101439044.384 num_examples: 1632 - name: validation num_bytes: 33714551.0 num_examples: 531 - name: test num_bytes: 29368704.0 num_examples: 528 download_size: 139733134 dataset_size: 164522299.384 - config_name: matching_from_pixels_1 features: - name: image dtype: image - name: contest_number dtype: int32 - name: caption_choices sequence: string - name: label dtype: string - name: n_tokens_label dtype: int32 - name: instance_id dtype: string splits: - name: train num_bytes: 94090646.83 num_examples: 1614 - name: validation num_bytes: 36257141.0 num_examples: 546 - name: test num_bytes: 33714551.0 num_examples: 531 download_size: 137278691 dataset_size: 164062338.82999998 - config_name: matching_from_pixels_2 features: - name: image dtype: image - name: contest_number dtype: int32 - name: caption_choices sequence: string - name: label dtype: string - name: n_tokens_label dtype: int32 - name: instance_id dtype: string splits: - name: train num_bytes: 96253584.505 num_examples: 1605 - name: validation num_bytes: 33236000.0 num_examples: 540 - name: test num_bytes: 36257141.0 num_examples: 546 download_size: 137890850 dataset_size: 165746725.505 - config_name: matching_from_pixels_3 features: - name: image dtype: image - name: contest_number dtype: int32 - name: caption_choices sequence: string - name: label dtype: string - name: n_tokens_label dtype: int32 - name: instance_id dtype: string splits: - name: train num_bytes: 99928910.28 num_examples: 1605 - name: validation num_bytes: 34380303.0 num_examples: 546 - name: test num_bytes: 33236000.0 num_examples: 540 download_size: 139585876 dataset_size: 167545213.28 - config_name: matching_from_pixels_4 features: - name: image dtype: image - name: contest_number dtype: int32 - name: caption_choices sequence: string - name: label dtype: string - name: n_tokens_label dtype: int32 - name: instance_id dtype: string splits: - name: train num_bytes: 102509197.79 num_examples: 1617 - name: validation num_bytes: 29368704.0 num_examples: 528 - name: test num_bytes: 34380303.0 num_examples: 546 download_size: 138725891 dataset_size: 166258204.79000002 - config_name: ranking features: - name: image dtype: image - name: contest_number dtype: int32 - name: image_location dtype: string - name: image_description dtype: string - name: image_uncanny_description dtype: string - name: entities sequence: string - name: questions sequence: string - name: caption_choices sequence: string - name: from_description dtype: string - name: winner_source dtype: string - name: label dtype: string - name: n_tokens_label dtype: int32 - name: instance_id dtype: string splits: - name: train num_bytes: 594615535.632 num_examples: 9576 - name: validation num_bytes: 32624105.0 num_examples: 507 - name: test num_bytes: 28907567.0 num_examples: 513 download_size: 571604579 dataset_size: 656147207.632 - config_name: ranking_1 features: - name: image dtype: image - name: contest_number dtype: int32 - name: image_location dtype: string - name: image_description dtype: string - name: image_uncanny_description dtype: string - name: entities sequence: string - name: questions sequence: string - name: caption_choices sequence: string - name: from_description dtype: string - name: winner_source dtype: string - name: label dtype: string - name: n_tokens_label dtype: int32 - name: instance_id dtype: string splits: - name: train num_bytes: 580099188.9 num_examples: 9450 - name: validation num_bytes: 35332200.0 num_examples: 534 - name: test num_bytes: 32624105.0 num_examples: 507 download_size: 546559254 dataset_size: 648055493.9 - config_name: ranking_2 features: - name: image dtype: image - name: contest_number dtype: int32 - name: image_location dtype: string - name: image_description dtype: string - name: image_uncanny_description dtype: string - name: entities sequence: string - name: questions sequence: string - name: caption_choices sequence: string - name: from_description dtype: string - name: winner_source dtype: string - name: label dtype: string - name: n_tokens_label dtype: int32 - name: instance_id dtype: string splits: - name: train num_bytes: 566811450.504 num_examples: 9306 - name: validation num_bytes: 32519173.0 num_examples: 531 - name: test num_bytes: 35332200.0 num_examples: 534 download_size: 544444097 dataset_size: 634662823.504 - config_name: ranking_3 features: - name: image dtype: image - name: contest_number dtype: int32 - name: image_location dtype: string - name: image_description dtype: string - name: image_uncanny_description dtype: string - name: entities sequence: string - name: questions sequence: string - name: caption_choices sequence: string - name: from_description dtype: string - name: winner_source dtype: string - name: label dtype: string - name: n_tokens_label dtype: int32 - name: instance_id dtype: string splits: - name: train num_bytes: 577828323.272 num_examples: 9324 - name: validation num_bytes: 34072817.0 num_examples: 531 - name: test num_bytes: 32519173.0 num_examples: 531 download_size: 548880699 dataset_size: 644420313.272 - config_name: ranking_4 features: - name: image dtype: image - name: contest_number dtype: int32 - name: image_location dtype: string - name: image_description dtype: string - name: image_uncanny_description dtype: string - name: entities sequence: string - name: questions sequence: string - name: caption_choices sequence: string - name: from_description dtype: string - name: winner_source dtype: string - name: label dtype: string - name: n_tokens_label dtype: int32 - name: instance_id dtype: string splits: - name: train num_bytes: 593388719.232 num_examples: 9432 - name: validation num_bytes: 28907567.0 num_examples: 513 - name: test num_bytes: 34072817.0 num_examples: 531 download_size: 562902941 dataset_size: 656369103.232 - config_name: ranking_from_pixels features: - name: image dtype: image - name: contest_number dtype: int32 - name: caption_choices sequence: string - name: winner_source dtype: string - name: label dtype: string - name: n_tokens_label dtype: int32 - name: instance_id dtype: string splits: - name: train num_bytes: 101282973.752 num_examples: 1596 - name: validation num_bytes: 32072331.0 num_examples: 506 - name: test num_bytes: 28550057.0 num_examples: 513 download_size: 134283256 dataset_size: 161905361.752 - config_name: ranking_from_pixels_1 features: - name: image dtype: image - name: contest_number dtype: int32 - name: caption_choices sequence: string - name: winner_source dtype: string - name: label dtype: string - name: n_tokens_label dtype: int32 - name: instance_id dtype: string splits: - name: train num_bytes: 93123370.15 num_examples: 1575 - name: validation num_bytes: 34965110.0 num_examples: 534 - name: test num_bytes: 32072331.0 num_examples: 506 download_size: 130879365 dataset_size: 160160811.15 - config_name: ranking_from_pixels_2 features: - name: image dtype: image - name: contest_number dtype: int32 - name: caption_choices sequence: string - name: winner_source dtype: string - name: label dtype: string - name: n_tokens_label dtype: int32 - name: instance_id dtype: string splits: - name: train num_bytes: 93496576.85 num_examples: 1550 - name: validation num_bytes: 32145436.0 num_examples: 531 - name: test num_bytes: 34965110.0 num_examples: 534 download_size: 131637359 dataset_size: 160607122.85 - config_name: ranking_from_pixels_3 features: - name: image dtype: image - name: contest_number dtype: int32 - name: caption_choices sequence: string - name: winner_source dtype: string - name: label dtype: string - name: n_tokens_label dtype: int32 - name: instance_id dtype: string splits: - name: train num_bytes: 93840620.26 num_examples: 1553 - name: validation num_bytes: 33718821.0 num_examples: 531 - name: test num_bytes: 32145436.0 num_examples: 531 download_size: 133214495 dataset_size: 159704877.26 - config_name: ranking_from_pixels_4 features: - name: image dtype: image - name: contest_number dtype: int32 - name: caption_choices sequence: string - name: winner_source dtype: string - name: label dtype: string - name: n_tokens_label dtype: int32 - name: instance_id dtype: string splits: - name: train num_bytes: 99008131.43 num_examples: 1571 - name: validation num_bytes: 28550057.0 num_examples: 513 - name: test num_bytes: 33718821.0 num_examples: 531 download_size: 136230399 dataset_size: 161277009.43 configs: - config_name: explanation data_files: - split: train path: explanation/train-* - split: validation path: explanation/validation-* - split: test path: explanation/test-* - config_name: explanation_1 data_files: - split: train path: explanation_1/train-* - split: validation path: explanation_1/validation-* - split: test path: explanation_1/test-* - config_name: explanation_2 data_files: - split: train path: explanation_2/train-* - split: validation path: explanation_2/validation-* - split: test path: explanation_2/test-* - config_name: explanation_3 data_files: - split: train path: explanation_3/train-* - split: validation path: explanation_3/validation-* - split: test path: explanation_3/test-* - config_name: explanation_4 data_files: - split: train path: explanation_4/train-* - split: validation path: explanation_4/validation-* - split: test path: explanation_4/test-* - config_name: explanation_from_pixels data_files: - split: train path: explanation_from_pixels/train-* - split: validation path: explanation_from_pixels/validation-* - split: test path: explanation_from_pixels/test-* - config_name: explanation_from_pixels_1 data_files: - split: train path: explanation_from_pixels_1/train-* - split: validation path: explanation_from_pixels_1/validation-* - split: test path: explanation_from_pixels_1/test-* - config_name: explanation_from_pixels_2 data_files: - split: train path: explanation_from_pixels_2/train-* - split: validation path: explanation_from_pixels_2/validation-* - split: test path: explanation_from_pixels_2/test-* - config_name: explanation_from_pixels_3 data_files: - split: train path: explanation_from_pixels_3/train-* - split: validation path: explanation_from_pixels_3/validation-* - split: test path: explanation_from_pixels_3/test-* - config_name: explanation_from_pixels_4 data_files: - split: train path: explanation_from_pixels_4/train-* - split: validation path: explanation_from_pixels_4/validation-* - split: test path: explanation_from_pixels_4/test-* - config_name: matching data_files: - split: train path: matching/train-* - split: validation path: matching/validation-* - split: test path: matching/test-* - config_name: matching_1 data_files: - split: train path: matching_1/train-* - split: validation path: matching_1/validation-* - split: test path: matching_1/test-* - config_name: matching_2 data_files: - split: train path: matching_2/train-* - split: validation path: matching_2/validation-* - split: test path: matching_2/test-* - config_name: matching_3 data_files: - split: train path: matching_3/train-* - split: validation path: matching_3/validation-* - split: test path: matching_3/test-* - config_name: matching_4 data_files: - split: train path: matching_4/train-* - split: validation path: matching_4/validation-* - split: test path: matching_4/test-* - config_name: matching_from_pixels data_files: - split: train path: matching_from_pixels/train-* - split: validation path: matching_from_pixels/validation-* - split: test path: matching_from_pixels/test-* - config_name: matching_from_pixels_1 data_files: - split: train path: matching_from_pixels_1/train-* - split: validation path: matching_from_pixels_1/validation-* - split: test path: matching_from_pixels_1/test-* - config_name: matching_from_pixels_2 data_files: - split: train path: matching_from_pixels_2/train-* - split: validation path: matching_from_pixels_2/validation-* - split: test path: matching_from_pixels_2/test-* - config_name: matching_from_pixels_3 data_files: - split: train path: matching_from_pixels_3/train-* - split: validation path: matching_from_pixels_3/validation-* - split: test path: matching_from_pixels_3/test-* - config_name: matching_from_pixels_4 data_files: - split: train path: matching_from_pixels_4/train-* - split: validation path: matching_from_pixels_4/validation-* - split: test path: matching_from_pixels_4/test-* - config_name: ranking data_files: - split: train path: ranking/train-* - split: validation path: ranking/validation-* - split: test path: ranking/test-* - config_name: ranking_1 data_files: - split: train path: ranking_1/train-* - split: validation path: ranking_1/validation-* - split: test path: ranking_1/test-* - config_name: ranking_2 data_files: - split: train path: ranking_2/train-* - split: validation path: ranking_2/validation-* - split: test path: ranking_2/test-* - config_name: ranking_3 data_files: - split: train path: ranking_3/train-* - split: validation path: ranking_3/validation-* - split: test path: ranking_3/test-* - config_name: ranking_4 data_files: - split: train path: ranking_4/train-* - split: validation path: ranking_4/validation-* - split: test path: ranking_4/test-* - config_name: ranking_from_pixels data_files: - split: train path: ranking_from_pixels/train-* - split: validation path: ranking_from_pixels/validation-* - split: test path: ranking_from_pixels/test-* - config_name: ranking_from_pixels_1 data_files: - split: train path: ranking_from_pixels_1/train-* - split: validation path: ranking_from_pixels_1/validation-* - split: test path: ranking_from_pixels_1/test-* - config_name: ranking_from_pixels_2 data_files: - split: train path: ranking_from_pixels_2/train-* - split: validation path: ranking_from_pixels_2/validation-* - split: test path: ranking_from_pixels_2/test-* - config_name: ranking_from_pixels_3 data_files: - split: train path: ranking_from_pixels_3/train-* - split: validation path: ranking_from_pixels_3/validation-* - split: test path: ranking_from_pixels_3/test-* - config_name: ranking_from_pixels_4 data_files: - split: train path: ranking_from_pixels_4/train-* - split: validation path: ranking_from_pixels_4/validation-* - split: test path: ranking_from_pixels_4/test-* --- # Dataset Card for New Yorker Caption Contest Benchmarks ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [capcon.dev](https://www.capcon.dev) - **Repository:** [https://github.com/jmhessel/caption_contest_corpus](https://github.com/jmhessel/caption_contest_corpus) - **Paper:** [Do Androids Laugh at Electric Sheep? Humor "Understanding" Benchmarks from The New Yorker Caption Contest](https://arxiv.org/abs/2209.06293) - **Leaderboard:** https://leaderboard.allenai.org/nycc-matching/ - **Point of Contact:** jmhessel@gmail.com ### Dataset Summary See [capcon.dev](https://www.capcon.dev) for more! Data from: [Do Androids Laugh at Electric Sheep? Humor "Understanding" Benchmarks from The New Yorker Caption Contest](https://arxiv.org/abs/2209.06293) ``` @inproceedings{hessel2023androids, title={Do Androids Laugh at Electric Sheep? {Humor} ``Understanding'' Benchmarks from {The New Yorker Caption Contest}}, author={Hessel, Jack and Marasovi{\'c}, Ana and Hwang, Jena D. and Lee, Lillian and Da, Jeff and Zellers, Rowan and Mankoff, Robert and Choi, Yejin}, booktitle={Proceedings of the ACL}, year={2023} } ``` If you use this dataset, we would appreciate you citing our work, but also -- several other papers that we build this corpus upon. See [Citation Information](#citation-information). We challenge AI models to "demonstrate understanding" of the sophisticated multimodal humor of The New Yorker Caption Contest. Concretely, we develop three carefully circumscribed tasks for which it suffices (but is not necessary) to grasp potentially complex and unexpected relationships between image and caption, and similarly complex and unexpected allusions to the wide varieties of human experience. ### Supported Tasks and Leaderboards Three tasks are supported: - "Matching:" a model must recognize a caption written about a cartoon (vs. options that were not); - "Quality ranking:" a model must evaluate the quality of a caption by scoring it more highly than a lower quality option from the same contest; - "Explanation:" a model must explain why a given joke is funny. There are no official leaderboards (yet). ### Languages English ## Dataset Structure Here's an example instance from Matching: ``` {'caption_choices': ['Tell me about your childhood very quickly.', "Believe me . . . it's what's UNDER the ground that's " 'most interesting.', "Stop me if you've heard this one.", 'I have trouble saying no.', 'Yes, I see the train but I think we can beat it.'], 'contest_number': 49, 'entities': ['https://en.wikipedia.org/wiki/Rule_of_three_(writing)', 'https://en.wikipedia.org/wiki/Bar_joke', 'https://en.wikipedia.org/wiki/Religious_institute'], 'from_description': 'scene: a bar description: Two priests and a rabbi are ' 'walking into a bar, as the bartender and another patron ' 'look on. The bartender talks on the phone while looking ' 'skeptically at the incoming crew. uncanny: The scene ' 'depicts a very stereotypical "bar joke" that would be ' 'unlikely to be encountered in real life; the skepticism ' 'of the bartender suggests that he is aware he is seeing ' 'this trope, and is explaining it to someone on the ' 'phone. entities: Rule_of_three_(writing), Bar_joke, ' 'Religious_institute. choices A: Tell me about your ' "childhood very quickly. B: Believe me . . . it's what's " "UNDER the ground that's most interesting. C: Stop me if " "you've heard this one. D: I have trouble saying no. E: " 'Yes, I see the train but I think we can beat it.', 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=L size=323x231 at 0x7F34F283E9D0>, 'image_description': 'Two priests and a rabbi are walking into a bar, as the ' 'bartender and another patron look on. The bartender ' 'talks on the phone while looking skeptically at the ' 'incoming crew.', 'image_location': 'a bar', 'image_uncanny_description': 'The scene depicts a very stereotypical "bar ' 'joke" that would be unlikely to be encountered ' 'in real life; the skepticism of the bartender ' 'suggests that he is aware he is seeing this ' 'trope, and is explaining it to someone on the ' 'phone.', 'instance_id': '21125bb8787b4e7e82aa3b0a1cba1571', 'label': 'C', 'n_tokens_label': 1, 'questions': ['What is the bartender saying on the phone in response to the ' 'living, breathing, stereotypical bar joke that is unfolding?']} ``` The label "C" indicates that the 3rd choice in the `caption_choices` is correct. Here's an example instance from Ranking (in the from pixels setting --- though, this is also available in the from description setting) ``` {'caption_choices': ['I guess I misunderstood when you said long bike ride.', 'Does your divorce lawyer have any other cool ideas?'], 'contest_number': 582, 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=L size=600x414 at 0x7F8FF9F96610>, 'instance_id': 'dd1c214a1ca3404aa4e582c9ce50795a', 'label': 'A', 'n_tokens_label': 1, 'winner_source': 'official_winner'} ``` the label indicates that the first caption choice ("A", here) in the `caption_choices` list was more highly rated. Here's an example instance from Explanation: ``` {'caption_choices': 'The classics can be so intimidating.', 'contest_number': 752, 'entities': ['https://en.wikipedia.org/wiki/Literature', 'https://en.wikipedia.org/wiki/Solicitor'], 'from_description': 'scene: a road description: Two people are walking down a ' 'path. A number of giant books have surrounded them. ' 'uncanny: There are book people in this world. entities: ' 'Literature, Solicitor. caption: The classics can be so ' 'intimidating.', 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=L size=800x706 at 0x7F90003D0BB0>, 'image_description': 'Two people are walking down a path. A number of giant ' 'books have surrounded them.', 'image_location': 'a road', 'image_uncanny_description': 'There are book people in this world.', 'instance_id': 'eef9baf450e2fab19b96facc128adf80', 'label': 'A play on the word intimidating --- usually if the classics (i.e., ' 'classic novels) were to be intimidating, this would mean that they ' 'are intimidating to read due to their length, complexity, etc. But ' 'here, they are surrounded by anthropomorphic books which look ' 'physically intimidating, i.e., they are intimidating because they ' 'may try to beat up these people.', 'n_tokens_label': 59, 'questions': ['What do the books want?']} ``` The label is an explanation of the joke, which serves as the autoregressive target. ### Data Instances See above ### Data Fields See above ### Data Splits Data splits can be accessed as: ``` from datasets import load_dataset dset = load_dataset("jmhessel/newyorker_caption_contest", "matching") dset = load_dataset("jmhessel/newyorker_caption_contest", "ranking") dset = load_dataset("jmhessel/newyorker_caption_contest", "explanation") ``` Or, in the from pixels setting, e.g., ``` from datasets import load_dataset dset = load_dataset("jmhessel/newyorker_caption_contest", "ranking_from_pixels") ``` Because the dataset is small, we reported in 5-fold cross-validation setting initially. The default splits are split 0. You can access the other splits, e.g.: ``` from datasets import load_dataset # the 4th data split dset = load_dataset("jmhessel/newyorker_caption_contest", "explanation_4") ``` ## Dataset Creation Full details are in the paper. ### Curation Rationale See the paper for rationale/motivation. ### Source Data See citation below. We combined 3 sources of data, and added significant annotations of our own. #### Initial Data Collection and Normalization Full details are in the paper. #### Who are the source language producers? We paid crowdworkers $15/hr to annotate the corpus. In addition, significant annotation efforts were conducted by the authors of this work. ### Annotations Full details are in the paper. #### Annotation process Full details are in the paper. #### Who are the annotators? A mix of crowdworks and authors of this paper. ### Personal and Sensitive Information Has been redacted from the dataset. Images are published in the New Yorker already. ## Considerations for Using the Data ### Social Impact of Dataset It's plausible that humor could perpetuate negative stereotypes. The jokes in this corpus are a mix of crowdsourced entries that are highly rated, and ones published in the new yorker. ### Discussion of Biases Humor is subjective, and some of the jokes may be considered offensive. The images may contain adult themes and minor cartoon nudity. ### Other Known Limitations More details are in the paper ## Additional Information ### Dataset Curators The dataset was curated by researchers at AI2 ### Licensing Information The annotations we provide are CC-BY-4.0. See www.capcon.dev for more info. ### Citation Information ``` @article{hessel2022androids, title={Do Androids Laugh at Electric Sheep? Humor "Understanding" Benchmarks from The New Yorker Caption Contest}, author={Hessel, Jack and Marasovi{\'c}, Ana and Hwang, Jena D and Lee, Lillian and Da, Jeff and Zellers, Rowan and Mankoff, Robert and Choi, Yejin}, journal={arXiv preprint arXiv:2209.06293}, year={2022} } ``` Our data contributions are: - The cartoon-level annotations; - The joke explanations; - and the framing of the tasks We release these data we contribute under CC-BY (see DATASET_LICENSE). If you find this data useful in your work, in addition to citing our contributions, please also cite the following, from which the cartoons/captions in our corpus are derived: ``` @misc{newyorkernextmldataset, author={Jain, Lalit and Jamieson, Kevin and Mankoff, Robert and Nowak, Robert and Sievert, Scott}, title={The {N}ew {Y}orker Cartoon Caption Contest Dataset}, year={2020}, url={https://nextml.github.io/caption-contest-data/} } @inproceedings{radev-etal-2016-humor, title = "Humor in Collective Discourse: Unsupervised Funniness Detection in The {New Yorker} Cartoon Caption Contest", author = "Radev, Dragomir and Stent, Amanda and Tetreault, Joel and Pappu, Aasish and Iliakopoulou, Aikaterini and Chanfreau, Agustin and de Juan, Paloma and Vallmitjana, Jordi and Jaimes, Alejandro and Jha, Rahul and Mankoff, Robert", booktitle = "LREC", year = "2016", } @inproceedings{shahaf2015inside, title={Inside jokes: Identifying humorous cartoon captions}, author={Shahaf, Dafna and Horvitz, Eric and Mankoff, Robert}, booktitle={KDD}, year={2015}, } ```
OpenAssistant/oasst1
--- license: apache-2.0 dataset_info: features: - name: message_id dtype: string - name: parent_id dtype: string - name: user_id dtype: string - name: created_date dtype: string - name: text dtype: string - name: role dtype: string - name: lang dtype: string - name: review_count dtype: int32 - name: review_result dtype: bool - name: deleted dtype: bool - name: rank dtype: int32 - name: synthetic dtype: bool - name: model_name dtype: string - name: detoxify struct: - name: toxicity dtype: float64 - name: severe_toxicity dtype: float64 - name: obscene dtype: float64 - name: identity_attack dtype: float64 - name: insult dtype: float64 - name: threat dtype: float64 - name: sexual_explicit dtype: float64 - name: message_tree_id dtype: string - name: tree_state dtype: string - name: emojis sequence: - name: name dtype: string - name: count dtype: int32 - name: labels sequence: - name: name dtype: string - name: value dtype: float64 - name: count dtype: int32 splits: - name: train num_bytes: 100367999 num_examples: 84437 - name: validation num_bytes: 5243405 num_examples: 4401 download_size: 41596430 dataset_size: 105611404 language: - en - es - ru - de - pl - th - vi - sv - bn - da - he - it - fa - sk - id - nb - el - nl - hu - eu - zh - eo - ja - ca - cs - bg - fi - pt - tr - ro - ar - uk - gl - fr - ko tags: - human-feedback size_categories: - 100K<n<1M pretty_name: OpenAssistant Conversations --- # OpenAssistant Conversations Dataset (OASST1) ## Dataset Description - **Homepage:** https://www.open-assistant.io/ - **Repository:** https://github.com/LAION-AI/Open-Assistant - **Paper:** https://arxiv.org/abs/2304.07327 ### Dataset Summary In an effort to democratize research on large-scale alignment, we release OpenAssistant Conversations (OASST1), a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages in 35 different languages, annotated with 461,292 quality ratings, resulting in over 10,000 fully annotated conversation trees. The corpus is a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers. Please refer to our [paper](https://arxiv.org/abs/2304.07327) for further details. ### Dataset Structure This dataset contains message trees. Each message tree has an initial prompt message as the root node, which can have multiple child messages as replies, and these child messages can have multiple replies. All messages have a role property: this can either be "assistant" or "prompter". The roles in conversation threads from prompt to leaf node strictly alternate between "prompter" and "assistant". This version of the dataset contains data collected on the [open-assistant.io](https://open-assistant.io/) website until April 12 2023. ### JSON Example: Message For readability, the following JSON examples are shown formatted with indentation on multiple lines. Objects are stored without indentation (on single lines) in the actual jsonl files. ```json { "message_id": "218440fd-5317-4355-91dc-d001416df62b", "parent_id": "13592dfb-a6f9-4748-a92c-32b34e239bb4", "user_id": "8e95461f-5e94-4d8b-a2fb-d4717ce973e4", "text": "It was the winter of 2035, and artificial intelligence (..)", "role": "assistant", "lang": "en", "review_count": 3, "review_result": true, "deleted": false, "rank": 0, "synthetic": true, "model_name": "oasst-sft-0_3000,max_new_tokens=400 (..)", "labels": { "spam": { "value": 0.0, "count": 3 }, "lang_mismatch": { "value": 0.0, "count": 3 }, "pii": { "value": 0.0, "count": 3 }, "not_appropriate": { "value": 0.0, "count": 3 }, "hate_speech": { "value": 0.0, "count": 3 }, "sexual_content": { "value": 0.0, "count": 3 }, "quality": { "value": 0.416, "count": 3 }, "toxicity": { "value": 0.16, "count": 3 }, "humor": { "value": 0.0, "count": 3 }, "creativity": { "value": 0.33, "count": 3 }, "violence": { "value": 0.16, "count": 3 } } } ``` ### JSON Example: Conversation Tree For readability, only a subset of the message properties is shown here. ```json { "message_tree_id": "14fbb664-a620-45ce-bee4-7c519b16a793", "tree_state": "ready_for_export", "prompt": { "message_id": "14fbb664-a620-45ce-bee4-7c519b16a793", "text": "Why can't we divide by 0? (..)", "role": "prompter", "lang": "en", "replies": [ { "message_id": "894d30b6-56b4-4605-a504-89dd15d4d1c8", "text": "The reason we cannot divide by zero is because (..)", "role": "assistant", "lang": "en", "replies": [ // ... ] }, { "message_id": "84d0913b-0fd9-4508-8ef5-205626a7039d", "text": "The reason that the result of a division by zero is (..)", "role": "assistant", "lang": "en", "replies": [ { "message_id": "3352725e-f424-4e3b-a627-b6db831bdbaa", "text": "Math is confusing. Like those weird Irrational (..)", "role": "prompter", "lang": "en", "replies": [ { "message_id": "f46207ca-3149-46e9-a466-9163d4ce499c", "text": "Irrational numbers are simply numbers (..)", "role": "assistant", "lang": "en", "replies": [] }, // ... ] } ] } ] } } ``` Please refer to [oasst-data](https://github.com/LAION-AI/Open-Assistant/tree/main/oasst-data) for details about the data structure and Python code to read and write jsonl files containing oasst data objects. If you would like to explore the dataset yourself you can find a [`getting-started`](https://github.com/LAION-AI/Open-Assistant/blob/main/notebooks/openassistant-oasst1/getting-started.ipynb) notebook in the `notebooks/openassistant-oasst1` folder of the [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant) github repository. ## Main Dataset Files Conversation data is provided either as nested messages in trees (extension `.trees.jsonl.gz`) or as a flat list (table) of messages (extension `.messages.jsonl.gz`). ### Ready For Export Trees ``` 2023-04-12_oasst_ready.trees.jsonl.gz 10,364 trees with 88,838 total messages 2023-04-12_oasst_ready.messages.jsonl.gz 88,838 messages ``` Trees in `ready_for_export` state without spam and deleted messages including message labels. The oasst_ready-trees file usually is sufficient for supervised fine-tuning (SFT) & reward model (RM) training. ### All Trees ``` 2023-04-12_oasst_all.trees.jsonl.gz 66,497 trees with 161,443 total messages 2023-04-12_oasst_all.messages.jsonl.gz 161,443 messages ``` All trees, including those in states `prompt_lottery_waiting` (trees that consist of only one message, namely the initial prompt), `aborted_low_grade` (trees that stopped growing because the messages had low quality), and `halted_by_moderator`. ### Supplemental Exports: Spam & Prompts ``` 2023-04-12_oasst_spam.messages.jsonl.gz ``` These are messages which were deleted or have a negative review result (`"review_result": false`). Besides low quality, a frequent reason for message deletion is a wrong language tag. ``` 2023-04-12_oasst_prompts.messages.jsonl.gz ``` These are all the kept initial prompt messages with positive review result (no spam) of trees in `ready_for_export` or `prompt_lottery_waiting` state. ### Using the Huggingface Datasets While HF datasets is ideal for tabular datasets, it is not a natural fit for nested data structures like the OpenAssistant conversation trees. Nevertheless, we make all messages which can also be found in the file `2023-04-12_oasst_ready.trees.jsonl.gz` available in parquet as train/validation splits. These are directly loadable by [Huggingface Datasets](https://pypi.org/project/datasets/). To load the oasst1 train & validation splits use: ```python from datasets import load_dataset ds = load_dataset("OpenAssistant/oasst1") train = ds['train'] # len(train)=84437 (95%) val = ds['validation'] # len(val)=4401 (5%) ``` The messages appear in depth-first order of the message trees. Full conversation trees can be reconstructed from the flat messages table by using the `parent_id` and `message_id` properties to identify the parent-child relationship of messages. The `message_tree_id` and `tree_state` properties (only present in flat messages files) can be used to find all messages of a message tree or to select trees by their state. ### Languages OpenAssistant Conversations incorporates 35 different languages with a distribution of messages as follows: **Languages with over 1000 messages** - English: 71956 - Spanish: 43061 - Russian: 9089 - German: 5279 - Chinese: 4962 - French: 4251 - Thai: 3042 - Portuguese (Brazil): 2969 - Catalan: 2260 - Korean: 1553 - Ukrainian: 1352 - Italian: 1320 - Japanese: 1018 <details> <summary><b>Languages with under 1000 messages</b></summary> <ul> <li>Vietnamese: 952</li> <li>Basque: 947</li> <li>Polish: 886</li> <li>Hungarian: 811</li> <li>Arabic: 666</li> <li>Dutch: 628</li> <li>Swedish: 512</li> <li>Turkish: 454</li> <li>Finnish: 386</li> <li>Czech: 372</li> <li>Danish: 358</li> <li>Galician: 339</li> <li>Hebrew: 255</li> <li>Romanian: 200</li> <li>Norwegian Bokmål: 133</li> <li>Indonesian: 115</li> <li>Bulgarian: 95</li> <li>Bengali: 82</li> <li>Persian: 72</li> <li>Greek: 66</li> <li>Esperanto: 59</li> <li>Slovak: 19</li> </ul> </details> ## Contact - Discord [Open Assistant Discord Server](https://ykilcher.com/open-assistant-discord) - GitHub: [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant) - E-Mail: [open-assistant@laion.ai](mailto:open-assistant@laion.ai)
wnut_17
--- annotations_creators: - crowdsourced language_creators: - found language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition paperswithcode_id: wnut-2017-emerging-and-rare-entity pretty_name: WNUT 17 dataset_info: features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-corporation '2': I-corporation '3': B-creative-work '4': I-creative-work '5': B-group '6': I-group '7': B-location '8': I-location '9': B-person '10': I-person '11': B-product '12': I-product config_name: wnut_17 splits: - name: train num_bytes: 1078379 num_examples: 3394 - name: validation num_bytes: 259383 num_examples: 1009 - name: test num_bytes: 405536 num_examples: 1287 download_size: 800955 dataset_size: 1743298 --- # Dataset Card for "wnut_17" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [http://noisy-text.github.io/2017/emerging-rare-entities.html](http://noisy-text.github.io/2017/emerging-rare-entities.html) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 0.80 MB - **Size of the generated dataset:** 1.74 MB - **Total amount of disk used:** 2.55 MB ### Dataset Summary WNUT 17: Emerging and Rare entity recognition This shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions. Named entities form the basis of many modern approaches to other tasks (like event clustering and summarisation), but recall on them is a real problem in noisy text - even among annotators. This drop tends to be due to novel entities and surface forms. Take for example the tweet “so.. kktny in 30 mins?” - even human experts find entity kktny hard to detect and resolve. This task will evaluate the ability to detect and classify novel, emerging, singleton named entities in noisy text. The goal of this task is to provide a definition of emerging and of rare entities, and based on that, also datasets for detecting these entities. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances - **Size of downloaded dataset files:** 0.80 MB - **Size of the generated dataset:** 1.74 MB - **Total amount of disk used:** 2.55 MB An example of 'train' looks as follows. ``` { "id": "0", "ner_tags": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 8, 8, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0], "tokens": ["@paulwalk", "It", "'s", "the", "view", "from", "where", "I", "'m", "living", "for", "two", "weeks", ".", "Empire", "State", "Building", "=", "ESB", ".", "Pretty", "bad", "storm", "here", "last", "evening", "."] } ``` ### Data Fields The data fields are the same among all splits: - `id` (`string`): ID of the example. - `tokens` (`list` of `string`): Tokens of the example text. - `ner_tags` (`list` of class labels): NER tags of the tokens (using IOB2 format), with possible values: - 0: `O` - 1: `B-corporation` - 2: `I-corporation` - 3: `B-creative-work` - 4: `I-creative-work` - 5: `B-group` - 6: `I-group` - 7: `B-location` - 8: `I-location` - 9: `B-person` - 10: `I-person` - 11: `B-product` - 12: `I-product` ### Data Splits |train|validation|test| |----:|---------:|---:| | 3394| 1009|1287| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{derczynski-etal-2017-results, title = "Results of the {WNUT}2017 Shared Task on Novel and Emerging Entity Recognition", author = "Derczynski, Leon and Nichols, Eric and van Erp, Marieke and Limsopatham, Nut", booktitle = "Proceedings of the 3rd Workshop on Noisy User-generated Text", month = sep, year = "2017", address = "Copenhagen, Denmark", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/W17-4418", doi = "10.18653/v1/W17-4418", pages = "140--147", abstract = "This shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions. Named entities form the basis of many modern approaches to other tasks (like event clustering and summarization), but recall on them is a real problem in noisy text - even among annotators. This drop tends to be due to novel entities and surface forms. Take for example the tweet {``}so.. kktny in 30 mins?!{''} {--} even human experts find the entity {`}kktny{'} hard to detect and resolve. The goal of this task is to provide a definition of emerging and of rare entities, and based on that, also datasets for detecting these entities. The task as described in this paper evaluated the ability of participating entries to detect and classify novel and emerging named entities in noisy text.", } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@stefan-it](https://github.com/stefan-it), [@lewtun](https://github.com/lewtun), [@jplu](https://github.com/jplu) for adding this dataset.
narrativeqa
--- annotations_creators: - crowdsourced language_creators: - found language: - en license: - apache-2.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text2text-generation task_ids: - abstractive-qa paperswithcode_id: narrativeqa pretty_name: NarrativeQA dataset_info: features: - name: document struct: - name: id dtype: string - name: kind dtype: string - name: url dtype: string - name: file_size dtype: int32 - name: word_count dtype: int32 - name: start dtype: string - name: end dtype: string - name: summary struct: - name: text dtype: string - name: tokens sequence: string - name: url dtype: string - name: title dtype: string - name: text dtype: string - name: question struct: - name: text dtype: string - name: tokens sequence: string - name: answers list: - name: text dtype: string - name: tokens sequence: string splits: - name: train num_bytes: 11556607782 num_examples: 32747 - name: test num_bytes: 3547135501 num_examples: 10557 - name: validation num_bytes: 1211859418 num_examples: 3461 download_size: 3232805701 dataset_size: 16315602701 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* - split: validation path: data/validation-* --- # Dataset Card for Narrative QA ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** https://github.com/deepmind/narrativeqa - **Paper:** https://arxiv.org/abs/1712.07040 - **Paper:** https://aclanthology.org/Q18-1023/ - **Point of Contact:** [Tomáš Kočiský](mailto:tkocisky@google.com) [Jonathan Schwarz](mailto:schwarzjn@google.com) [Phil Blunsom](pblunsom@google.com) [Chris Dyer](cdyer@google.com) [Karl Moritz Hermann](mailto:kmh@google.com) [Gábor Melis](mailto:melisgl@google.com) [Edward Grefenstette](mailto:etg@google.com) ### Dataset Summary NarrativeQA is an English-lanaguage dataset of stories and corresponding questions designed to test reading comprehension, especially on long documents. ### Supported Tasks and Leaderboards The dataset is used to test reading comprehension. There are 2 tasks proposed in the paper: "summaries only" and "stories only", depending on whether the human-generated summary or the full story text is used to answer the question. ### Languages English ## Dataset Structure ### Data Instances A typical data point consists of a question and answer pair along with a summary/story which can be used to answer the question. Additional information such as the url, word count, wikipedia page, are also provided. A typical example looks like this: ``` { "document": { "id": "23jncj2n3534563110", "kind": "movie", "url": "https://www.imsdb.com/Movie%20Scripts/Name%20of%20Movie.html", "file_size": 80473, "word_count": 41000, "start": "MOVIE screenplay by", "end": ". THE END", "summary": { "text": "Joe Bloggs begins his journey exploring...", "tokens": ["Joe", "Bloggs", "begins", "his", "journey", "exploring",...], "url": "http://en.wikipedia.org/wiki/Name_of_Movie", "title": "Name of Movie (film)" }, "text": "MOVIE screenplay by John Doe\nSCENE 1..." }, "question": { "text": "Where does Joe Bloggs live?", "tokens": ["Where", "does", "Joe", "Bloggs", "live", "?"], }, "answers": [ {"text": "At home", "tokens": ["At", "home"]}, {"text": "His house", "tokens": ["His", "house"]} ] } ``` ### Data Fields - `document.id` - Unique ID for the story. - `document.kind` - "movie" or "gutenberg" depending on the source of the story. - `document.url` - The URL where the story was downloaded from. - `document.file_size` - File size (in bytes) of the story. - `document.word_count` - Number of tokens in the story. - `document.start` - First 3 tokens of the story. Used for verifying the story hasn't been modified. - `document.end` - Last 3 tokens of the story. Used for verifying the story hasn't been modified. - `document.summary.text` - Text of the wikipedia summary of the story. - `document.summary.tokens` - Tokenized version of `document.summary.text`. - `document.summary.url` - Wikipedia URL of the summary. - `document.summary.title` - Wikipedia Title of the summary. - `question` - `{"text":"...", "tokens":[...]}` for the question about the story. - `answers` - List of `{"text":"...", "tokens":[...]}` for valid answers for the question. ### Data Splits The data is split into training, valiudation, and test sets based on story (i.e. the same story cannot appear in more than one split): | Train | Valid | Test | | ------ | ----- | ----- | | 32747 | 3461 | 10557 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization Stories and movies scripts were downloaded from [Project Gutenburg](https://www.gutenberg.org) and a range of movie script repositories (mainly [imsdb](http://www.imsdb.com)). #### Who are the source language producers? The language producers are authors of the stories and scripts as well as Amazon Turk workers for the questions. ### Annotations #### Annotation process Amazon Turk Workers were provided with human written summaries of the stories (To make the annotation tractable and to lead annotators towards asking non-localized questions). Stories were matched with plot summaries from Wikipedia using titles and verified the matching with help from human annotators. The annotators were asked to determine if both the story and the summary refer to a movie or a book (as some books are made into movies), or if they are the same part in a series produced in the same year. Annotators on Amazon Mechanical Turk were instructed to write 10 question–answer pairs each based solely on a given summary. Annotators were instructed to imagine that they are writing questions to test students who have read the full stories but not the summaries. We required questions that are specific enough, given the length and complexity of the narratives, and to provide adiverse set of questions about characters, events, why this happened, and so on. Annotators were encouraged to use their own words and we prevented them from copying. We asked for answers that are grammatical, complete sentences, and explicitly allowed short answers (one word, or a few-word phrase, or ashort sentence) as we think that answering with a full sentence is frequently perceived as artificial when asking about factual information. Annotators were asked to avoid extra, unnecessary information in the question or the answer, and to avoid yes/no questions or questions about the author or the actors. #### Who are the annotators? Amazon Mechanical Turk workers. ### Personal and Sensitive Information None ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The dataset is released under a [Apache-2.0 License](https://github.com/deepmind/narrativeqa/blob/master/LICENSE). ### Citation Information ``` @article{kocisky-etal-2018-narrativeqa, title = "The {N}arrative{QA} Reading Comprehension Challenge", author = "Ko{\v{c}}isk{\'y}, Tom{\'a}{\v{s}} and Schwarz, Jonathan and Blunsom, Phil and Dyer, Chris and Hermann, Karl Moritz and Melis, G{\'a}bor and Grefenstette, Edward", editor = "Lee, Lillian and Johnson, Mark and Toutanova, Kristina and Roark, Brian", journal = "Transactions of the Association for Computational Linguistics", volume = "6", year = "2018", address = "Cambridge, MA", publisher = "MIT Press", url = "https://aclanthology.org/Q18-1023", doi = "10.1162/tacl_a_00023", pages = "317--328", abstract = "Reading comprehension (RC){---}in contrast to information retrieval{---}requires integrating information and reasoning about events, entities, and their relations across a full document. Question answering is conventionally used to assess RC ability, in both artificial agents and children learning to read. However, existing RC datasets and tasks are dominated by questions that can be solved by selecting answers using superficial information (e.g., local context similarity or global term frequency); they thus fail to test for the essential integrative aspect of RC. To encourage progress on deeper comprehension of language, we present a new dataset and set of tasks in which the reader must answer questions about stories by reading entire books or movie scripts. These tasks are designed so that successfully answering their questions requires understanding the underlying narrative rather than relying on shallow pattern matching or salience. We show that although humans solve the tasks easily, standard RC models struggle on the tasks presented here. We provide an analysis of the dataset and the challenges it presents.", } ``` ### Contributions Thanks to [@ghomasHudson](https://github.com/ghomasHudson) for adding this dataset.
story_cloze
--- annotations_creators: - found language_creators: - found language: - en license: - unknown multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - other task_ids: [] paperswithcode_id: null pretty_name: Story Cloze Test dataset_info: - config_name: '2016' features: - name: story_id dtype: string - name: input_sentence_1 dtype: string - name: input_sentence_2 dtype: string - name: input_sentence_3 dtype: string - name: input_sentence_4 dtype: string - name: sentence_quiz1 dtype: string - name: sentence_quiz2 dtype: string - name: answer_right_ending dtype: int32 splits: - name: validation num_bytes: 614084 num_examples: 1871 - name: test num_bytes: 613184 num_examples: 1871 download_size: 0 dataset_size: 1227268 - config_name: '2018' features: - name: story_id dtype: string - name: input_sentence_1 dtype: string - name: input_sentence_2 dtype: string - name: input_sentence_3 dtype: string - name: input_sentence_4 dtype: string - name: sentence_quiz1 dtype: string - name: sentence_quiz2 dtype: string - name: answer_right_ending dtype: int32 splits: - name: validation num_bytes: 515439 num_examples: 1571 download_size: 0 dataset_size: 515439 --- # Dataset Card for "story_cloze" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://cs.rochester.edu/nlp/rocstories/](https://cs.rochester.edu/nlp/rocstories/) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [Lsdsem 2017 shared task: The story cloze test](https://aclanthology.org/W17-0906.pdf) - **Point of Contact:** [Nasrin Mostafazadeh](nasrinm@cs.rochester.edu) - **Size of downloaded dataset files:** 2.13 MB - **Size of the generated dataset:** 2.13 MB - **Total amount of disk used:** 2.15 MB ### Dataset Summary Story Cloze Test' is a new commonsense reasoning framework for evaluating story understanding, story generation, and script learning.This test requires a system to choose the correct ending to a four-sentence story. ### Supported Tasks and Leaderboards commonsense reasoning ### Languages English ## Dataset Structure ### Data Instances - **Size of downloaded dataset files:** 2.13 MB - **Size of the generated dataset:** 2.13 MB - **Total amount of disk used:** 2.15 MB An example of 'train' looks as follows. ``` {'answer_right_ending': 1, 'input_sentence_1': 'Rick grew up in a troubled household.', 'input_sentence_2': 'He never found good support in family, and turned to gangs.', 'input_sentence_3': "It wasn't long before Rick got shot in a robbery.", 'input_sentence_4': 'The incident caused him to turn a new leaf.', 'sentence_quiz1': 'He is happy now.', 'sentence_quiz2': 'He joined a gang.', 'story_id': '138d5bfb-05cc-41e3-bf2c-fa85ebad14e2'} ``` ### Data Fields The data fields are the same among all splits. - `input_sentence_1`: The first statement in the story. - `input_sentence_2`: The second statement in the story. - `input_sentence_3`: The third statement in the story. - `input_sentence_4`: The forth statement in the story. - `sentence_quiz1`: first possible continuation of the story. - `sentence_quiz2`: second possible continuation of the story. - `answer_right_ending`: correct possible ending; either 1 or 2. - `story_id`: story id. ### Data Splits | name |validation |test| |-------|-----:|---:| |2016|1871|1871| |2018|1571|-| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{mostafazadeh2017lsdsem, title={Lsdsem 2017 shared task: The story cloze test}, author={Mostafazadeh, Nasrin and Roth, Michael and Louis, Annie and Chambers, Nathanael and Allen, James}, booktitle={Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics}, pages={46--51}, year={2017} } ``` ### Contributions Thanks to [@zaidalyafeai](https://github.com/zaidalyafeai).
amazon_polarity
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - apache-2.0 multilinguality: - monolingual size_categories: - 1M<n<10M source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification pretty_name: Amazon Review Polarity dataset_info: config_name: amazon_polarity features: - name: label dtype: class_label: names: '0': negative '1': positive - name: title dtype: string - name: content dtype: string splits: - name: train num_bytes: 1604364432 num_examples: 3600000 - name: test num_bytes: 178176193 num_examples: 400000 download_size: 1145430497 dataset_size: 1782540625 configs: - config_name: amazon_polarity data_files: - split: train path: amazon_polarity/train-* - split: test path: amazon_polarity/test-* default: true train-eval-index: - config: amazon_polarity task: text-classification task_id: binary_classification splits: train_split: train eval_split: test col_mapping: content: text label: target metrics: - type: accuracy name: Accuracy - type: f1 name: F1 macro args: average: macro - type: f1 name: F1 micro args: average: micro - type: f1 name: F1 weighted args: average: weighted - type: precision name: Precision macro args: average: macro - type: precision name: Precision micro args: average: micro - type: precision name: Precision weighted args: average: weighted - type: recall name: Recall macro args: average: macro - type: recall name: Recall micro args: average: micro - type: recall name: Recall weighted args: average: weighted --- # Dataset Card for Amazon Review Polarity ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://registry.opendata.aws/ - **Repository:** https://github.com/zhangxiangxiao/Crepe - **Paper:** https://arxiv.org/abs/1509.01626 - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Xiang Zhang](mailto:xiang.zhang@nyu.edu) ### Dataset Summary The Amazon reviews dataset consists of reviews from amazon. The data span a period of 18 years, including ~35 million reviews up to March 2013. Reviews include product and user information, ratings, and a plaintext review. ### Supported Tasks and Leaderboards - `text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the content and the title, predict the correct star rating. ### Languages Mainly English. ## Dataset Structure ### Data Instances A typical data point, comprises of a title, a content and the corresponding label. An example from the AmazonPolarity test set looks as follows: ``` { 'title':'Great CD', 'content':"My lovely Pat has one of the GREAT voices of her generation. I have listened to this CD for YEARS and I still LOVE IT. When I'm in a good mood it makes me feel better. A bad mood just evaporates like sugar in the rain. This CD just oozes LIFE. Vocals are jusat STUUNNING and lyrics just kill. One of life's hidden gems. This is a desert isle CD in my book. Why she never made it big is just beyond me. Everytime I play this, no matter black, white, young, old, male, female EVERYBODY says one thing ""Who was that singing ?""", 'label':1 } ``` ### Data Fields - 'title': a string containing the title of the review - escaped using double quotes (") and any internal double quote is escaped by 2 double quotes (""). New lines are escaped by a backslash followed with an "n" character, that is "\n". - 'content': a string containing the body of the document - escaped using double quotes (") and any internal double quote is escaped by 2 double quotes (""). New lines are escaped by a backslash followed with an "n" character, that is "\n". - 'label': either 1 (positive) or 0 (negative) rating. ### Data Splits The Amazon reviews polarity dataset is constructed by taking review score 1 and 2 as negative, and 4 and 5 as positive. Samples of score 3 is ignored. Each class has 1,800,000 training samples and 200,000 testing samples. ## Dataset Creation ### Curation Rationale The Amazon reviews polarity dataset is constructed by Xiang Zhang (xiang.zhang@nyu.edu). It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015). ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information Apache License 2.0 ### Citation Information McAuley, Julian, and Jure Leskovec. "Hidden factors and hidden topics: understanding rating dimensions with review text." In Proceedings of the 7th ACM conference on Recommender systems, pp. 165-172. 2013. Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015) ### Contributions Thanks to [@hfawaz](https://github.com/hfawaz) for adding this dataset.
mteb/tweet_sentiment_extraction
--- language: - en ---
Salesforce/dialogstudio
--- extra_gated_heading: "Acknowledge to follow corresponding dataset licenses to access the repository" extra_gated_button_content: "Agree and access repository" license: apache-2.0 task_categories: - question-answering - summarization - text-generation - table-to-text - text-classification - table-question-answering language: - en pretty_name: Dialog Studio --- <img src="https://huggingface.co/datasets/Salesforce/dialogstudio/resolve/main/logo.png" alt="drawing" width="510"/> # DialogStudio: Unified Dialog Datasets and Instruction-Aware Models for Conversational AI **Author**: [Jianguo Zhang](https://github.com/jianguoz), [Kun Qian](https://github.com/qbetterk) [Paper](https://arxiv.org/pdf/2307.10172.pdf)|[Github](https://github.com/salesforce/DialogStudio)|[GDrive] 🎉 **March 18, 2024: Update for AI Agent**. Check [xLAM](https://github.com/SalesforceAIResearch/xLAM) for the latest data and models relevant to AI Agent! 🎉 **March 10 2024: Update for dataset viewer issues:** - Please refer to https://github.com/salesforce/DialogStudio for view of each dataset, where we provide 5 converted examples along with 5 original examples under each data folder. - For example, https://github.com/salesforce/DialogStudio/tree/main/open-domain-dialogues/ShareGPT contains two files: [converted_examples.json](https://github.com/salesforce/DialogStudio/blob/main/open-domain-dialogues/ShareGPT/converted_example.json) and [original_example.json](https://github.com/salesforce/DialogStudio/blob/main/open-domain-dialogues/ShareGPT/original_example.json). <img src="https://huggingface.co/datasets/Salesforce/dialogstudio/resolve/main/DialogStudio_Stats.jpg" alt="drawing" width="800"/> **Follow the [DialogStudio](https://github.com/salesforce/DialogStudio) GitHub repository for latest information.** ### Datasets ### Load dataset The datasets are split into several categories in HuggingFace ``` Datasets/ ├── Knowledge-Grounded-Dialogues ├── Natural-Language-Understanding ├── Open-Domain-Dialogues ├── Task-Oriented-Dialogues ├── Dialogue-Summarization ├── Conversational-Recommendation-Dialogs ``` You can load any dataset in the DialogStudio from the [HuggingFace hub](https://huggingface.co/datasets/Salesforce/dialogstudio) by claiming the `{dataset_name}`, which is exactly the dataset folder name. All available datasets are described in [dataset content](https://github.com/salesforce/DialogStudio/blob/main/Dataset_Stats.csv). For easier reference, [available dataset names](#Available Datasets) are also listed below. Below is one example to load the [MULTIWOZ2_2](https://huggingface.co/datasets/Salesforce/dialogstudio/blob/main/task_oriented/MULTIWOZ2_2.zip) dataset under the [task-oriented-dialogues](https://huggingface.co/datasets/Salesforce/dialogstudio/tree/main/task_oriented) category: Load the dataset ```python from datasets import load_dataset dataset = load_dataset('Salesforce/dialogstudio', 'MULTIWOZ2_2') ``` Here is the output structure of MultiWOZ 2.2 ```python DatasetDict({ train: Dataset({ features: ['original dialog id', 'new dialog id', 'dialog index', 'original dialog info', 'log', 'prompt', 'external knowledge non-flat', 'external knowledge', 'dst knowledge', 'intent knowledge'], num_rows: 8437 }) validation: Dataset({ features: ['original dialog id', 'new dialog id', 'dialog index', 'original dialog info', 'log', 'prompt', 'external knowledge non-flat', 'external knowledge', 'dst knowledge', 'intent knowledge'], num_rows: 1000 }) test: Dataset({ features: ['original dialog id', 'new dialog id', 'dialog index', 'original dialog info', 'log', 'prompt', 'external knowledge non-flat', 'external knowledge', 'dst knowledge', 'intent knowledge'], num_rows: 1000 }) }) ``` ### Available Datasets The ``data_name`` for ``load_dataset("Salesforce/dialogstudio", data_name)`` can be found below. More detailed information for each dataset can be found in out [github](https://github.com/salesforce/DialogStudio/blob/main/Dataset_Stats.csv). ```python "natural_language_understanding": [ "ATIS", "ATIS-NER", "BANKING77", "BANKING77-OOS", "CLINC-Single-Domain-OOS-banking", "CLINC-Single-Domain-OOS-credit_cards", "CLINC150", "DSTC8-SGD", "HWU64", "MIT-Movie", "MIT-Restaurant", "RESTAURANTS8K", "SNIPS", "SNIPS-NER", "TOP", "TOP-NER" ], "task_oriented": [ "ABCD", "AirDialogue", "BiTOD", "CaSiNo", "CraigslistBargains", "Disambiguation", "DSTC2-Clean", "FRAMES", "GECOR", "HDSA-Dialog", "KETOD", "KVRET", "MetaLWOZ", "MS-DC", "MuDoCo", "MulDoGO", "MultiWOZ_2.1", "MULTIWOZ2_2", "SGD", "SimJointGEN", "SimJointMovie", "SimJointRestaurant", "STAR", "Taskmaster1", "Taskmaster2", "Taskmaster3", "WOZ2_0" ], "dialogue_summarization": [ "AMI", "CRD3", "DialogSum", "ECTSum", "ICSI", "MediaSum", "QMSum", "SAMSum", "TweetSumm", "ConvoSumm", "SummScreen_ForeverDreaming", "SummScreen_TVMegaSite" ], "conversational_recommendation": [ "Redial", "DuRecDial-2.0", "OpenDialKG", "SalesBot", ], "open_domain": [ "chitchat-dataset", "ConvAI2", "AntiScam", "Empathetic", "HH-RLHF", "PLACES3.5", "Prosocial", "SODA", "ShareGPT" ], "knowledge_grounded": [ "CompWebQ", "CoQA", "CoSQL", "DART", "FeTaQA", "GrailQA", "HybridQA", "MTOP", "MultiModalQA", "SParC", "Spider", "SQA", "ToTTo", "WebQSP", "WikiSQL", "WikiTQ", "wizard_of_internet", "wizard_of_wikipedia" ], ``` # License Our project follows the following structure with respect to licensing: 1. For all the modified datasets in DialogStudio: - A portion of these datasets is under the [Apache License 2.0](https://github.com/salesforce/DialogStudio/blob/main/LICENSE.txt). - Some retain their original licenses even after modification. - For a few datasets that lacked a license, we have cited the relevant papers. 2. Original dataset licenses: For reference, we also put the original avaliable licenses for each dataset into their respective dataset folders. 3. Code: Our codebase is under the [Apache License 2.0](https://github.com/salesforce/DialogStudio/blob/main/LICENSE.txt). For detailed licensing information, please refer to the specific licenses accompanying the datasets. If you utilize datasets from DialogStudio, we kindly request that you cite our work. # Citation The data and code in this repository is mostly developed for or derived from the paper below. If you utilize datasets from DialogStudio, we kindly request that you cite both the original work and our own (Accepted by EACL 2024 Findings as a long paper). ``` @article{zhang2023dialogstudio, title={DialogStudio: Towards Richest and Most Diverse Unified Dataset Collection for Conversational AI}, author={Zhang, Jianguo and Qian, Kun and Liu, Zhiwei and Heinecke, Shelby and Meng, Rui and Liu, Ye and Yu, Zhou and Savarese, Silvio and Xiong, Caiming}, journal={arXiv preprint arXiv:2307.10172}, year={2023} } ```
tatsu-lab/alpaca_farm
--- license: cc-by-nc-4.0 ---
HAERAE-HUB/csatqa
--- dataset_info: features: - name: test_name dtype: string - name: question_number dtype: int64 - name: context dtype: string - name: question dtype: string - name: gold dtype: int64 - name: option#1 dtype: string - name: option#2 dtype: string - name: option#3 dtype: string - name: option#4 dtype: string - name: option#5 dtype: string - name: Category dtype: string - name: Human_Peformance dtype: float64 - name: __index_level_0__ dtype: int64 splits: - name: train num_bytes: 4220807 num_examples: 936 download_size: 1076028 dataset_size: 4220807 task_categories: - multiple-choice language: - ko --- # Dataset Card for "CSAT-QA" ## Dataset Summary The field of Korean Language Processing is experiencing a surge in interest, illustrated by the introduction of open-source models such as Polyglot-Ko and proprietary models like HyperClova. Yet, as the development of larger and superior language models accelerates, evaluation methods aren't keeping pace. Recognizing this gap, we at HAE-RAE are dedicated to creating tailored benchmarks for the rigorous evaluation of these models. CSAT-QA is a comprehensive collection of 936 multiple choice question answering (MCQA) questions, manually collected the College Scholastic Ability Test (CSAT), a rigorous Korean University entrance exam. The CSAT-QA is divided into two subsets: a complete version encompassing all 936 questions, and a smaller, specialized version used for targeted evaluations. The smaller subset further diversifies into six distinct categories: Writing (WR), Grammar (GR), Reading Comprehension: Science (RCS), Reading Comprehension: Social Science (RCSS), Reading Comprehension: Humanities (RCH), and Literature (LI). Moreover, the smaller subset includes the recorded accuracy of South Korean students, providing a valuable real-world performance benchmark. For a detailed explanation of how the CSAT-QA was created please check out the [accompanying blog post](https://github.com/guijinSON/hae-rae/blob/main/blog/CSAT-QA.md), and for evaluation check out [LM-Eval-Harness](https://github.com/EleutherAI/lm-evaluation-harness) on github. ## Evaluation Results | **Models** | **GR** | **LI** | **RCH** | **RCS** | **RCSS** | **WR** | **Average** | |:-----------------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:-----------:| | polyglot-ko-12.8B | 32.0 | 29.73 | 17.14| 10.81 | 21.43 | 18.18 | 21.55| | gpt-3.5-wo-token | 16.0 | 32.43 | 42.86 | 18.92 | 35.71 | 0.00 | 24.32 | | gpt-3.5-w-token | 16.0 | 35.14 | 42.86 | 18.92 | 35.71 | 9.09 | 26.29 | | gpt-4-wo-token | 40.0 | 54.05 | **68.57** | **59.46** | **69.05** | 36.36 | **54.58** | | gpt-4-w-token | 36.0 | **56.76** | **68.57** | **59.46** | **69.05** | 36.36 | 54.37 | | Human Performance | **45.41** | 54.38 | 48.7 | 39.93 | 44.54 | **54.0** | 47.83 | ## How to Use The CSAT-QA includes two subsets. The full version with 936 questions can be downloaded using the following code: ``` from datasets import load_dataset dataset = load_dataset("EleutherAI/CSAT-QA", "full") ``` A more condensed version, which includes human accuracy data, can be downloaded using the following code: ``` from datasets import load_dataset import pandas as pd dataset = load_dataset("EleutherAI/CSAT-QA", "GR") # Choose from either WR, GR, LI, RCH, RCS, RCSS, ``` ## Evaluate using LM-Eval-Harness To evaluate your model simply by using the LM-Eval-Harness by EleutherAI follow the steps below. 1. To install lm-eval from the github repository main branch, run: ``` git clone https://github.com/EleutherAI/lm-evaluation-harness cd lm-evaluation-harness pip install -e . ``` 2. To install additional multilingual tokenization and text segmentation packages, you must install the package with the multilingual extra: ``` pip install -e ".[multilingual]" ``` 3. Run the evaluation by: ``` python main.py \ --model hf-causal \ --model_args pretrained=EleutherAI/polyglot-ko-1.3b \ --tasks csatqa_wr,csatqa_gr,csatqa_rcs,csatqa_rcss,csatqa_rch,csatqa_li \ --device cuda:0 ``` ## License The copyright of this material belongs to the Korea Institute for Curriculum and Evaluation(한국교육과정평가원) and may be used for research purposes only. [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
swag
--- annotations_creators: - crowdsourced - machine-generated language_creators: - found language: - en license: - unknown multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - text-classification task_ids: - natural-language-inference paperswithcode_id: swag pretty_name: Situations With Adversarial Generations dataset_info: - config_name: regular features: - name: video-id dtype: string - name: fold-ind dtype: string - name: startphrase dtype: string - name: sent1 dtype: string - name: sent2 dtype: string - name: gold-source dtype: string - name: ending0 dtype: string - name: ending1 dtype: string - name: ending2 dtype: string - name: ending3 dtype: string - name: label dtype: class_label: names: '0': '0' '1': '1' '2': '2' '3': '3' splits: - name: train num_bytes: 30274672 num_examples: 73546 - name: validation num_bytes: 8451771 num_examples: 20006 - name: test num_bytes: 8417644 num_examples: 20005 download_size: 43954806 dataset_size: 47144087 - config_name: full features: - name: video-id dtype: string - name: fold-ind dtype: string - name: startphrase dtype: string - name: gold-ending dtype: string - name: distractor-0 dtype: string - name: distractor-1 dtype: string - name: distractor-2 dtype: string - name: distractor-3 dtype: string - name: gold-source dtype: string - name: gold-type dtype: string - name: distractor-0-type dtype: string - name: distractor-1-type dtype: string - name: distractor-2-type dtype: string - name: distractor-3-type dtype: string - name: sent1 dtype: string - name: sent2 dtype: string splits: - name: train num_bytes: 34941649 num_examples: 73546 - name: validation num_bytes: 9832603 num_examples: 20006 download_size: 40537624 dataset_size: 44774252 --- # Dataset Card for Situations With Adversarial Generations ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [SWAG AF](https://rowanzellers.com/swag/) - **Repository:** [Github repository](https://github.com/rowanz/swagaf/tree/master/data) - **Paper:** [SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference](https://arxiv.org/abs/1808.05326) - **Leaderboard:** [SWAG Leaderboard](https://leaderboard.allenai.org/swag) - **Point of Contact:** [Rowan Zellers](https://rowanzellers.com/#contact) ### Dataset Summary Given a partial description like "she opened the hood of the car," humans can reason about the situation and anticipate what might come next ("then, she examined the engine"). SWAG (Situations With Adversarial Generations) is a large-scale dataset for this task of grounded commonsense inference, unifying natural language inference and physically grounded reasoning. The dataset consists of 113k multiple choice questions about grounded situations (73k training, 20k validation, 20k test). Each question is a video caption from LSMDC or ActivityNet Captions, with four answer choices about what might happen next in the scene. The correct answer is the (real) video caption for the next event in the video; the three incorrect answers are adversarially generated and human verified, so as to fool machines but not humans. SWAG aims to be a benchmark for evaluating grounded commonsense NLI and for learning representations. ### Supported Tasks and Leaderboards The dataset introduces the task of grounded commonsense inference, unifying natural language inference and commonsense reasoning. ### Languages The text in the dataset is in English. The associated BCP-47 code is `en`. ## Dataset Structure ### Data Instances The `regular` configuration should be used for modeling. An example looks like this: ``` { "video-id": "anetv_dm5WXFiQZUQ", "fold-ind": "18419", "startphrase", "He rides the motorcycle down the hall and into the elevator. He", "sent1": "He rides the motorcycle down the hall and into the elevator." "sent2": "He", "gold-source": "gold", "ending0": "looks at a mirror in the mirror as he watches someone walk through a door.", "ending1": "stops, listening to a cup of coffee with the seated woman, who's standing.", "ending2": "exits the building and rides the motorcycle into a casino where he performs several tricks as people watch.", "ending3": "pulls the bag out of his pocket and hands it to someone's grandma.", "label": 2, } ``` Note that the test are reseved for blind submission on the leaderboard. The full train and validation sets provide more information regarding the collection process. ### Data Fields - `video-id`: identification - `fold-ind`: identification - `startphrase`: the context to be filled - `sent1`: the first sentence - `sent2`: the start of the second sentence (to be filled) - `gold-source`: generated or comes from the found completion - `ending0`: first proposition - `ending1`: second proposition - `ending2`: third proposition - `ending3`: fourth proposition - `label`: the correct proposition More info concerning the fields can be found [on the original repo](https://github.com/rowanz/swagaf/tree/master/data). ### Data Splits The dataset consists of 113k multiple choice questions about grounded situations: 73k for training, 20k for validation, and 20k for (blind) test. ## Dataset Creation ### Curation Rationale The authors seek dataset diversity while minimizing annotation artifacts, conditional stylistic patterns such as length and word-preference biases. To avoid introducing easily “gamed” patterns, they introduce Adversarial Filtering (AF), a generally- applicable treatment involving the iterative refinement of a set of assignments to increase the entropy under a chosen model family. The dataset is then human verified by paid crowdsourcers. ### Source Data This section describes the source data (e.g. news text and headlines, social media posts, translated sentences,...) #### Initial Data Collection and Normalization The dataset is derived from pairs of consecutive video captions from [ActivityNet Captions](https://cs.stanford.edu/people/ranjaykrishna/densevid/) and the [Large Scale Movie Description Challenge](https://sites.google.com/site/describingmovies/). The two datasets are slightly different in nature and allow us to achieve broader coverage: ActivityNet contains 20k YouTube clips containing one of 203 activity types (such as doing gymnastics or playing guitar); LSMDC consists of 128k movie captions (audio descriptions and scripts). #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process Annotations are first machine generated and then adversarially filtered. Finally, the remaining examples are human-verified by paid crowdsourcers. #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Unknown ### Citation Information ``` @inproceedings{zellers2018swagaf, title={SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference}, author={Zellers, Rowan and Bisk, Yonatan and Schwartz, Roy and Choi, Yejin}, booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP)", year={2018} } ``` ### Contributions Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.
cc100
--- annotations_creators: - no-annotation language_creators: - found language: - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - ff - fi - fr - fy - ga - gd - gl - gn - gu - ha - he - hi - hr - ht - hu - hy - id - ig - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lg - li - ln - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - 'no' - ns - om - or - pa - pl - ps - pt - qu - rm - ro - ru - sa - sc - sd - si - sk - sl - so - sq - sr - ss - su - sv - sw - ta - te - th - tl - tn - tr - ug - uk - ur - uz - vi - wo - xh - yi - yo - zh - zu language_bcp47: - bn-Latn - hi-Latn - my-x-zawgyi - ta-Latn - te-Latn - ur-Latn - zh-Hans - zh-Hant license: - unknown multilinguality: - multilingual size_categories: - 10M<n<100M - 1M<n<10M source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling paperswithcode_id: cc100 pretty_name: CC-100 dataset_info: - config_name: am features: - name: id dtype: string - name: text dtype: string splits: - name: train num_bytes: 935440775 num_examples: 3124561 download_size: 138821056 dataset_size: 935440775 - config_name: sr features: - name: id dtype: string - name: text dtype: string splits: - name: train num_bytes: 10299427460 num_examples: 35747957 download_size: 1578989320 dataset_size: 10299427460 - config_name: ka features: - name: id dtype: string - name: text dtype: string splits: - name: train num_bytes: 10228918845 num_examples: 31708119 download_size: 1100446372 dataset_size: 10228918845 config_names: - am - sr --- # Dataset Card for CC-100 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://data.statmt.org/cc-100/ - **Repository:** [More Information Needed] - **Paper:** https://aclanthology.org/2020.acl-main.747/ - **Paper:** https://aclanthology.org/2020.lrec-1.494/ - **Paper:** https://arxiv.org/abs/1911.02116 - **Paper:** https://arxiv.org/abs/1911.00359 - **Leaderboard:** [More Information Needed] - **Point of Contact:** [More Information Needed] ### Dataset Summary This corpus is an attempt to recreate the dataset used for training XLM-R. This corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages (indicated by *_rom). This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. ### Supported Tasks and Leaderboards CC-100 is mainly intended to pretrain language models and word representations. ### Languages The languages in the dataset are: - af: Afrikaans (305M) - am: Amharic (133M) - ar: Arabic (5.4G) - as: Assamese (7.6M) - az: Azerbaijani (1.3G) - be: Belarusian (692M) - bg: Bulgarian (9.3G) - bn: Bengali (860M) - bn_rom: Bengali Romanized (164M) - br: Breton (21M) - bs: Bosnian (18M) - ca: Catalan (2.4G) - cs: Czech (4.4G) - cy: Welsh (179M) - da: Danish (12G) - de: German (18G) - el: Greek (7.4G) - en: English (82G) - eo: Esperanto (250M) - es: Spanish (14G) - et: Estonian (1.7G) - eu: Basque (488M) - fa: Persian (20G) - ff: Fulah (3.1M) - fi: Finnish (15G) - fr: French (14G) - fy: Frisian (38M) - ga: Irish (108M) - gd: Scottish Gaelic (22M) - gl: Galician (708M) - gn: Guarani (1.5M) - gu: Gujarati (242M) - ha: Hausa (61M) - he: Hebrew (6.1G) - hi: Hindi (2.5G) - hi_rom: Hindi Romanized (129M) - hr: Croatian (5.7G) - ht: Haitian (9.1M) - hu: Hungarian (15G) - hy: Armenian (776M) - id: Indonesian (36G) - ig: Igbo (6.6M) - is: Icelandic (779M) - it: Italian (7.8G) - ja: Japanese (15G) - jv: Javanese (37M) - ka: Georgian (1.1G) - kk: Kazakh (889M) - km: Khmer (153M) - kn: Kannada (360M) - ko: Korean (14G) - ku: Kurdish (90M) - ky: Kyrgyz (173M) - la: Latin (609M) - lg: Ganda (7.3M) - li: Limburgish (2.2M) - ln: Lingala (2.3M) - lo: Lao (63M) - lt: Lithuanian (3.4G) - lv: Latvian (2.1G) - mg: Malagasy (29M) - mk: Macedonian (706M) - ml: Malayalam (831M) - mn: Mongolian (397M) - mr: Marathi (334M) - ms: Malay (2.1G) - my: Burmese (46M) - my_zaw: Burmese (Zawgyi) (178M) - ne: Nepali (393M) - nl: Dutch (7.9G) - no: Norwegian (13G) - ns: Northern Sotho (1.8M) - om: Oromo (11M) - or: Oriya (56M) - pa: Punjabi (90M) - pl: Polish (12G) - ps: Pashto (107M) - pt: Portuguese (13G) - qu: Quechua (1.5M) - rm: Romansh (4.8M) - ro: Romanian (16G) - ru: Russian (46G) - sa: Sanskrit (44M) - sc: Sardinian (143K) - sd: Sindhi (67M) - si: Sinhala (452M) - sk: Slovak (6.1G) - sl: Slovenian (2.8G) - so: Somali (78M) - sq: Albanian (1.3G) - sr: Serbian (1.5G) - ss: Swati (86K) - su: Sundanese (15M) - sv: Swedish (21G) - sw: Swahili (332M) - ta: Tamil (1.3G) - ta_rom: Tamil Romanized (68M) - te: Telugu (536M) - te_rom: Telugu Romanized (79M) - th: Thai (8.7G) - tl: Tagalog (701M) - tn: Tswana (8.0M) - tr: Turkish (5.4G) - ug: Uyghur (46M) - uk: Ukrainian (14G) - ur: Urdu (884M) - ur_rom: Urdu Romanized (141M) - uz: Uzbek (155M) - vi: Vietnamese (28G) - wo: Wolof (3.6M) - xh: Xhosa (25M) - yi: Yiddish (51M) - yo: Yoruba (1.1M) - zh-Hans: Chinese (Simplified) (14G) - zh-Hant: Chinese (Traditional) (5.3G) - zu: Zulu (4.3M) ## Dataset Structure ### Data Instances An example from the `am` configuration: ``` {'id': '0', 'text': 'ተለዋዋጭ የግድግዳ አንግል ሙቅ አንቀሳቅሷል ቲ-አሞሌ አጥቅሼ ...\n'} ``` Each data point is a paragraph of text. The paragraphs are presented in the original (unshuffled) order. Documents are separated by a data point consisting of a single newline character. ### Data Fields The data fields are: - id: id of the example - text: content as a string ### Data Splits Sizes of some configurations: | name |train| |----------|----:| |am|3124561| |sr|35747957| ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? The data comes from multiple web pages in a large variety of languages. ### Annotations The dataset does not contain any additional annotations. #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information Being constructed from Common Crawl, personal and sensitive information might be present. This **must** be considered before training deep learning models with CC-100, specially in the case of text-generation models. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was prepared by [Statistical Machine Translation at the University of Edinburgh](https://www.statmt.org/ued/) using the [CC-Net](https://github.com/facebookresearch/cc_net) toolkit by Facebook Research. ### Licensing Information Statistical Machine Translation at the University of Edinburgh makes no claims of intellectual property on the work of preparation of the corpus. By using this, you are also bound by the [Common Crawl terms of use](https://commoncrawl.org/terms-of-use/) in respect of the content contained in the dataset. ### Citation Information Please cite the following if you found the resources in this corpus useful: ```bibtex @inproceedings{conneau-etal-2020-unsupervised, title = "Unsupervised Cross-lingual Representation Learning at Scale", author = "Conneau, Alexis and Khandelwal, Kartikay and Goyal, Naman and Chaudhary, Vishrav and Wenzek, Guillaume and Guzm{\'a}n, Francisco and Grave, Edouard and Ott, Myle and Zettlemoyer, Luke and Stoyanov, Veselin", editor = "Jurafsky, Dan and Chai, Joyce and Schluter, Natalie and Tetreault, Joel", booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.acl-main.747", doi = "10.18653/v1/2020.acl-main.747", pages = "8440--8451", abstract = "This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +14.6{\%} average accuracy on XNLI, +13{\%} average F1 score on MLQA, and +2.4{\%} F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 15.7{\%} in XNLI accuracy for Swahili and 11.4{\%} for Urdu over previous XLM models. We also present a detailed empirical analysis of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make our code and models publicly available.", } ``` ```bibtex @inproceedings{wenzek-etal-2020-ccnet, title = "{CCN}et: Extracting High Quality Monolingual Datasets from Web Crawl Data", author = "Wenzek, Guillaume and Lachaux, Marie-Anne and Conneau, Alexis and Chaudhary, Vishrav and Guzm{\'a}n, Francisco and Joulin, Armand and Grave, Edouard", editor = "Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios", booktitle = "Proceedings of the Twelfth Language Resources and Evaluation Conference", month = may, year = "2020", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2020.lrec-1.494", pages = "4003--4012", abstract = "Pre-training text representations have led to significant improvements in many areas of natural language processing. The quality of these models benefits greatly from the size of the pretraining corpora as long as its quality is preserved. In this paper, we describe an automatic pipeline to extract massive high-quality monolingual datasets from Common Crawl for a variety of languages. Our pipeline follows the data processing introduced in fastText (Mikolov et al., 2017; Grave et al., 2018), that deduplicates documents and identifies their language. We augment this pipeline with a filtering step to select documents that are close to high quality corpora like Wikipedia.", language = "English", ISBN = "979-10-95546-34-4", } ``` ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
wino_bias
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - mit multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - token-classification task_ids: - coreference-resolution paperswithcode_id: winobias pretty_name: WinoBias dataset_info: - config_name: type1_anti features: - name: document_id dtype: string - name: part_number dtype: string - name: word_number sequence: int32 - name: tokens sequence: string - name: pos_tags sequence: class_label: names: '0': '"' '1': '''''' '2': '#' '3': $ '4': ( '5': ) '6': ',' '7': . '8': ':' '9': '``' '10': CC '11': CD '12': DT '13': EX '14': FW '15': IN '16': JJ '17': JJR '18': JJS '19': LS '20': MD '21': NN '22': NNP '23': NNPS '24': NNS '25': NN|SYM '26': PDT '27': POS '28': PRP '29': PRP$ '30': RB '31': RBR '32': RBS '33': RP '34': SYM '35': TO '36': UH '37': VB '38': VBD '39': VBG '40': VBN '41': VBP '42': VBZ '43': WDT '44': WP '45': WP$ '46': WRB '47': HYPH '48': XX '49': NFP '50': AFX '51': ADD '52': -LRB- '53': -RRB- '54': '-' - name: parse_bit sequence: string - name: predicate_lemma sequence: string - name: predicate_framenet_id sequence: string - name: word_sense sequence: string - name: speaker sequence: string - name: ner_tags sequence: class_label: names: '0': B-PERSON '1': I-PERSON '2': B-NORP '3': I-NORP '4': B-FAC '5': I-FAC '6': B-ORG '7': I-ORG '8': B-GPE '9': I-GPE '10': B-LOC '11': I-LOC '12': B-PRODUCT '13': I-PRODUCT '14': B-EVENT '15': I-EVENT '16': B-WORK_OF_ART '17': I-WORK_OF_ART '18': B-LAW '19': I-LAW '20': B-LANGUAGE '21': I-LANGUAGE '22': B-DATE '23': I-DATE '24': B-TIME '25': I-TIME '26': B-PERCENT '27': I-PERCENT '28': B-MONEY '29': I-MONEY '30': B-QUANTITY '31': I-QUANTITY '32': B-ORDINAL '33': I-ORDINAL '34': B-CARDINAL '35': I-CARDINAL '36': '*' '37': '0' '38': '-' - name: verbal_predicates sequence: string - name: coreference_clusters sequence: string splits: - name: validation num_bytes: 380510 num_examples: 396 - name: test num_bytes: 402893 num_examples: 396 download_size: 65383 dataset_size: 783403 - config_name: type1_pro features: - name: document_id dtype: string - name: part_number dtype: string - name: word_number sequence: int32 - name: tokens sequence: string - name: pos_tags sequence: class_label: names: '0': '"' '1': '''''' '2': '#' '3': $ '4': ( '5': ) '6': ',' '7': . '8': ':' '9': '``' '10': CC '11': CD '12': DT '13': EX '14': FW '15': IN '16': JJ '17': JJR '18': JJS '19': LS '20': MD '21': NN '22': NNP '23': NNPS '24': NNS '25': NN|SYM '26': PDT '27': POS '28': PRP '29': PRP$ '30': RB '31': RBR '32': RBS '33': RP '34': SYM '35': TO '36': UH '37': VB '38': VBD '39': VBG '40': VBN '41': VBP '42': VBZ '43': WDT '44': WP '45': WP$ '46': WRB '47': HYPH '48': XX '49': NFP '50': AFX '51': ADD '52': -LRB- '53': -RRB- '54': '-' - name: parse_bit sequence: string - name: predicate_lemma sequence: string - name: predicate_framenet_id sequence: string - name: word_sense sequence: string - name: speaker sequence: string - name: ner_tags sequence: class_label: names: '0': B-PERSON '1': I-PERSON '2': B-NORP '3': I-NORP '4': B-FAC '5': I-FAC '6': B-ORG '7': I-ORG '8': B-GPE '9': I-GPE '10': B-LOC '11': I-LOC '12': B-PRODUCT '13': I-PRODUCT '14': B-EVENT '15': I-EVENT '16': B-WORK_OF_ART '17': I-WORK_OF_ART '18': B-LAW '19': I-LAW '20': B-LANGUAGE '21': I-LANGUAGE '22': B-DATE '23': I-DATE '24': B-TIME '25': I-TIME '26': B-PERCENT '27': I-PERCENT '28': B-MONEY '29': I-MONEY '30': B-QUANTITY '31': I-QUANTITY '32': B-ORDINAL '33': I-ORDINAL '34': B-CARDINAL '35': I-CARDINAL '36': '*' '37': '0' '38': '-' - name: verbal_predicates sequence: string - name: coreference_clusters sequence: string splits: - name: validation num_bytes: 379044 num_examples: 396 - name: test num_bytes: 401705 num_examples: 396 download_size: 65516 dataset_size: 780749 - config_name: type2_anti features: - name: document_id dtype: string - name: part_number dtype: string - name: word_number sequence: int32 - name: tokens sequence: string - name: pos_tags sequence: class_label: names: '0': '"' '1': '''''' '2': '#' '3': $ '4': ( '5': ) '6': ',' '7': . '8': ':' '9': '``' '10': CC '11': CD '12': DT '13': EX '14': FW '15': IN '16': JJ '17': JJR '18': JJS '19': LS '20': MD '21': NN '22': NNP '23': NNPS '24': NNS '25': NN|SYM '26': PDT '27': POS '28': PRP '29': PRP$ '30': RB '31': RBR '32': RBS '33': RP '34': SYM '35': TO '36': UH '37': VB '38': VBD '39': VBG '40': VBN '41': VBP '42': VBZ '43': WDT '44': WP '45': WP$ '46': WRB '47': HYPH '48': XX '49': NFP '50': AFX '51': ADD '52': -LRB- '53': -RRB- '54': '-' - name: parse_bit sequence: string - name: predicate_lemma sequence: string - name: predicate_framenet_id sequence: string - name: word_sense sequence: string - name: speaker sequence: string - name: ner_tags sequence: class_label: names: '0': B-PERSON '1': I-PERSON '2': B-NORP '3': I-NORP '4': B-FAC '5': I-FAC '6': B-ORG '7': I-ORG '8': B-GPE '9': I-GPE '10': B-LOC '11': I-LOC '12': B-PRODUCT '13': I-PRODUCT '14': B-EVENT '15': I-EVENT '16': B-WORK_OF_ART '17': I-WORK_OF_ART '18': B-LAW '19': I-LAW '20': B-LANGUAGE '21': I-LANGUAGE '22': B-DATE '23': I-DATE '24': B-TIME '25': I-TIME '26': B-PERCENT '27': I-PERCENT '28': B-MONEY '29': I-MONEY '30': B-QUANTITY '31': I-QUANTITY '32': B-ORDINAL '33': I-ORDINAL '34': B-CARDINAL '35': I-CARDINAL '36': '*' '37': '0' '38': '-' - name: verbal_predicates sequence: string - name: coreference_clusters sequence: string splits: - name: validation num_bytes: 368421 num_examples: 396 - name: test num_bytes: 376926 num_examples: 396 download_size: 62555 dataset_size: 745347 - config_name: type2_pro features: - name: document_id dtype: string - name: part_number dtype: string - name: word_number sequence: int32 - name: tokens sequence: string - name: pos_tags sequence: class_label: names: '0': '"' '1': '''''' '2': '#' '3': $ '4': ( '5': ) '6': ',' '7': . '8': ':' '9': '``' '10': CC '11': CD '12': DT '13': EX '14': FW '15': IN '16': JJ '17': JJR '18': JJS '19': LS '20': MD '21': NN '22': NNP '23': NNPS '24': NNS '25': NN|SYM '26': PDT '27': POS '28': PRP '29': PRP$ '30': RB '31': RBR '32': RBS '33': RP '34': SYM '35': TO '36': UH '37': VB '38': VBD '39': VBG '40': VBN '41': VBP '42': VBZ '43': WDT '44': WP '45': WP$ '46': WRB '47': HYPH '48': XX '49': NFP '50': AFX '51': ADD '52': -LRB- '53': -RRB- '54': '-' - name: parse_bit sequence: string - name: predicate_lemma sequence: string - name: predicate_framenet_id sequence: string - name: word_sense sequence: string - name: speaker sequence: string - name: ner_tags sequence: class_label: names: '0': B-PERSON '1': I-PERSON '2': B-NORP '3': I-NORP '4': B-FAC '5': I-FAC '6': B-ORG '7': I-ORG '8': B-GPE '9': I-GPE '10': B-LOC '11': I-LOC '12': B-PRODUCT '13': I-PRODUCT '14': B-EVENT '15': I-EVENT '16': B-WORK_OF_ART '17': I-WORK_OF_ART '18': B-LAW '19': I-LAW '20': B-LANGUAGE '21': I-LANGUAGE '22': B-DATE '23': I-DATE '24': B-TIME '25': I-TIME '26': B-PERCENT '27': I-PERCENT '28': B-MONEY '29': I-MONEY '30': B-QUANTITY '31': I-QUANTITY '32': B-ORDINAL '33': I-ORDINAL '34': B-CARDINAL '35': I-CARDINAL '36': '*' '37': '0' '38': '-' - name: verbal_predicates sequence: string - name: coreference_clusters sequence: string splits: - name: validation num_bytes: 366957 num_examples: 396 - name: test num_bytes: 375144 num_examples: 396 download_size: 62483 dataset_size: 742101 - config_name: wino_bias features: - name: document_id dtype: string - name: part_number dtype: string - name: word_number sequence: int32 - name: tokens sequence: string - name: pos_tags sequence: class_label: names: '0': '"' '1': '''''' '2': '#' '3': $ '4': ( '5': ) '6': ',' '7': . '8': ':' '9': '``' '10': CC '11': CD '12': DT '13': EX '14': FW '15': IN '16': JJ '17': JJR '18': JJS '19': LS '20': MD '21': NN '22': NNP '23': NNPS '24': NNS '25': NN|SYM '26': PDT '27': POS '28': PRP '29': PRP$ '30': RB '31': RBR '32': RBS '33': RP '34': SYM '35': TO '36': UH '37': VB '38': VBD '39': VBG '40': VBN '41': VBP '42': VBZ '43': WDT '44': WP '45': WP$ '46': WRB '47': HYPH '48': XX '49': NFP '50': AFX '51': ADD '52': -LRB- '53': -RRB- - name: parse_bit sequence: string - name: predicate_lemma sequence: string - name: predicate_framenet_id sequence: string - name: word_sense sequence: string - name: speaker sequence: string - name: ner_tags sequence: class_label: names: '0': B-PERSON '1': I-PERSON '2': B-NORP '3': I-NORP '4': B-FAC '5': I-FAC '6': B-ORG '7': I-ORG '8': B-GPE '9': I-GPE '10': B-LOC '11': I-LOC '12': B-PRODUCT '13': I-PRODUCT '14': B-EVENT '15': I-EVENT '16': B-WORK_OF_ART '17': I-WORK_OF_ART '18': B-LAW '19': I-LAW '20': B-LANGUAGE '21': I-LANGUAGE '22': B-DATE '23': I-DATE '24': B-TIME '25': I-TIME '26': B-PERCENT '27': I-PERCENT '28': B-MONEY '29': I-MONEY '30': B-QUANTITY '31': I-QUANTITY '32': B-ORDINAL '33': I-ORDINAL '34': B-CARDINAL '35': I-CARDINAL '36': '*' '37': '0' - name: verbal_predicates sequence: string splits: - name: train num_bytes: 173899234 num_examples: 150335 download_size: 268725744 dataset_size: 173899234 configs: - config_name: type1_anti data_files: - split: validation path: type1_anti/validation-* - split: test path: type1_anti/test-* - config_name: type1_pro data_files: - split: validation path: type1_pro/validation-* - split: test path: type1_pro/test-* - config_name: type2_anti data_files: - split: validation path: type2_anti/validation-* - split: test path: type2_anti/test-* - config_name: type2_pro data_files: - split: validation path: type2_pro/validation-* - split: test path: type2_pro/test-* --- # Dataset Card for Wino_Bias dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [WinoBias](https://uclanlp.github.io/corefBias/overview) - **Repository:** - **Paper:** [Arxiv](https://arxiv.org/abs/1804.06876) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary WinoBias, a Winograd-schema dataset for coreference resolution focused on gender bias. The corpus contains Winograd-schema style sentences with entities corresponding to people referred by their occupation (e.g. the nurse, the doctor, the carpenter). ### Supported Tasks and Leaderboards The underlying task is coreference resolution. ### Languages English ## Dataset Structure ### Data Instances The dataset has 4 subsets: `type1_pro`, `type1_anti`, `type2_pro` and `type2_anti`. The `*_pro` subsets contain sentences that reinforce gender stereotypes (e.g. mechanics are male, nurses are female), whereas the `*_anti` datasets contain "anti-stereotypical" sentences (e.g. mechanics are female, nurses are male). The `type1` (*WB-Knowledge*) subsets contain sentences for which world knowledge is necessary to resolve the co-references, and `type2` (*WB-Syntax*) subsets require only the syntactic information present in the sentence to resolve them. ### Data Fields - document_id = This is a variation on the document filename - part_number = Some files are divided into multiple parts numbered as 000, 001, 002, ... etc. - word_num = This is the word index of the word in that sentence. - tokens = This is the token as segmented/tokenized in the Treebank. - pos_tags = This is the Penn Treebank style part of speech. When parse information is missing, all part of speeches except the one for which there is some sense or proposition annotation are marked with a XX tag. The verb is marked with just a VERB tag. - parse_bit = This is the bracketed structure broken before the first open parenthesis in the parse, and the word/part-of-speech leaf replaced with a *. The full parse can be created by substituting the asterix with the "([pos] [word])" string (or leaf) and concatenating the items in the rows of that column. When the parse information is missing, the first word of a sentence is tagged as "(TOP*" and the last word is tagged as "*)" and all intermediate words are tagged with a "*". - predicate_lemma = The predicate lemma is mentioned for the rows for which we have semantic role information or word sense information. All other rows are marked with a "-". - predicate_framenet_id = This is the PropBank frameset ID of the predicate in predicate_lemma. - word_sense = This is the word sense of the word in Column tokens. - speaker = This is the speaker or author name where available. - ner_tags = These columns identifies the spans representing various named entities. For documents which do not have named entity annotation, each line is represented with an "*". - verbal_predicates = There is one column each of predicate argument structure information for the predicate mentioned in predicate_lemma. If there are no predicates tagged in a sentence this is a single column with all rows marked with an "*". ### Data Splits Dev and Test Split available ## Dataset Creation ### Curation Rationale The WinoBias dataset was introduced in 2018 (see [paper](https://arxiv.org/abs/1804.06876)), with its original task being *coreference resolution*, which is a task that aims to identify mentions that refer to the same entity or person. ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? The dataset was created by researchers familiar with the WinoBias project, based on two prototypical templates provided by the authors, in which entities interact in plausible ways. ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? "Researchers familiar with the [WinoBias] project" ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [Recent work](https://www.microsoft.com/en-us/research/uploads/prod/2021/06/The_Salmon_paper.pdf) has shown that this dataset contains grammatical issues, incorrect or ambiguous labels, and stereotype conflation, among other limitations. ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez and Kai-Wei Chan ### Licensing Information MIT Licence ### Citation Information @article{DBLP:journals/corr/abs-1804-06876, author = {Jieyu Zhao and Tianlu Wang and Mark Yatskar and Vicente Ordonez and Kai{-}Wei Chang}, title = {Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods}, journal = {CoRR}, volume = {abs/1804.06876}, year = {2018}, url = {http://arxiv.org/abs/1804.06876}, archivePrefix = {arXiv}, eprint = {1804.06876}, timestamp = {Mon, 13 Aug 2018 16:47:01 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1804-06876.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ### Contributions Thanks to [@akshayb7](https://github.com/akshayb7) for adding this dataset. Updated by [@JieyuZhao](https://github.com/JieyuZhao).
gem
--- annotations_creators: - crowdsourced - found language_creators: - crowdsourced - found - machine-generated language: - cs - de - en - es - ru - tr - vi license: - other multilinguality: - monolingual - multilingual size_categories: - 100K<n<1M - 10K<n<100K - 1K<n<10K source_datasets: - extended|other-vision-datasets - original task_categories: - fill-mask - summarization - table-to-text - tabular-to-text - text-generation - text2text-generation task_ids: - dialogue-modeling - rdf-to-text - news-articles-summarization - text-simplification paperswithcode_id: gem pretty_name: GEM tags: - intent-to-text - meaning-representation-to-text - concepts-to-text dataset_info: - config_name: mlsum_de features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: text dtype: string - name: topic dtype: string - name: url dtype: string - name: title dtype: string - name: date dtype: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 858060337 num_examples: 220748 - name: validation num_bytes: 49712791 num_examples: 11392 - name: test num_bytes: 49146354 num_examples: 10695 - name: challenge_train_sample num_bytes: 1894220 num_examples: 500 - name: challenge_validation_sample num_bytes: 2202723 num_examples: 500 - name: challenge_test_covid num_bytes: 19771285 num_examples: 5058 download_size: 362783528 dataset_size: 980787710 - config_name: mlsum_es features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: text dtype: string - name: topic dtype: string - name: url dtype: string - name: title dtype: string - name: date dtype: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 1211240956 num_examples: 259888 - name: validation num_bytes: 51611723 num_examples: 9977 - name: test num_bytes: 72117564 num_examples: 13366 - name: challenge_train_sample num_bytes: 2366443 num_examples: 500 - name: challenge_validation_sample num_bytes: 2658596 num_examples: 500 - name: challenge_test_covid num_bytes: 13576624 num_examples: 1938 download_size: 525621426 dataset_size: 1353571906 - config_name: wiki_lingua_es_en_v0 features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: source dtype: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 215665468 num_examples: 79515 - name: validation num_bytes: 25891008 num_examples: 8835 - name: test num_bytes: 50195305 num_examples: 19797 download_size: 169406387 dataset_size: 291751781 - config_name: wiki_lingua_ru_en_v0 features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: source dtype: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 159631205 num_examples: 36898 - name: validation num_bytes: 18626973 num_examples: 4100 - name: test num_bytes: 34865311 num_examples: 9094 download_size: 169406387 dataset_size: 213123489 - config_name: wiki_lingua_tr_en_v0 features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: source dtype: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 7689845 num_examples: 3193 - name: validation num_bytes: 942122 num_examples: 355 - name: test num_bytes: 1875110 num_examples: 808 download_size: 169406387 dataset_size: 10507077 - config_name: wiki_lingua_vi_en_v0 features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: source dtype: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 31599580 num_examples: 9206 - name: validation num_bytes: 3618660 num_examples: 1023 - name: test num_bytes: 6267359 num_examples: 2167 download_size: 169406387 dataset_size: 41485599 - config_name: wiki_lingua_arabic_ar features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: source_aligned dtype: translation: languages: - ar - en - name: target_aligned dtype: translation: languages: - ar - en - name: source dtype: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 208106335 num_examples: 20441 - name: validation num_bytes: 31126187 num_examples: 2919 - name: test num_bytes: 60915220 num_examples: 5841 download_size: 58984103 dataset_size: 300147742 - config_name: wiki_lingua_chinese_zh features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: source_aligned dtype: translation: languages: - zh - en - name: target_aligned dtype: translation: languages: - zh - en - name: source dtype: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 86130302 num_examples: 13211 - name: validation num_bytes: 13060918 num_examples: 1886 - name: test num_bytes: 25310021 num_examples: 3775 download_size: 32899156 dataset_size: 124501241 - config_name: wiki_lingua_czech_cs features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: source_aligned dtype: translation: languages: - cs - en - name: target_aligned dtype: translation: languages: - cs - en - name: source dtype: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 41107318 num_examples: 5033 - name: validation num_bytes: 6305328 num_examples: 718 - name: test num_bytes: 12124770 num_examples: 1438 download_size: 14515534 dataset_size: 59537416 - config_name: wiki_lingua_dutch_nl features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: source_aligned dtype: translation: languages: - nl - en - name: target_aligned dtype: translation: languages: - nl - en - name: source dtype: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 169067454 num_examples: 21866 - name: validation num_bytes: 25521003 num_examples: 3123 - name: test num_bytes: 49165151 num_examples: 6248 download_size: 56492150 dataset_size: 243753608 - config_name: wiki_lingua_english_en features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: source_aligned dtype: translation: languages: - en - en - name: target_aligned dtype: translation: languages: - en - en - name: source dtype: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 464171624 num_examples: 99020 - name: validation num_bytes: 67652281 num_examples: 13823 - name: test num_bytes: 138944243 num_examples: 28614 download_size: 118031903 dataset_size: 670768148 - config_name: wiki_lingua_french_fr features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: source_aligned dtype: translation: languages: - fr - en - name: target_aligned dtype: translation: languages: - fr - en - name: source dtype: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 372039357 num_examples: 44556 - name: validation num_bytes: 54992250 num_examples: 6364 - name: test num_bytes: 108831855 num_examples: 12731 download_size: 118758047 dataset_size: 535863462 - config_name: wiki_lingua_german_de features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: source_aligned dtype: translation: languages: - de - en - name: target_aligned dtype: translation: languages: - de - en - name: source dtype: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 322276536 num_examples: 40839 - name: validation num_bytes: 47631883 num_examples: 5833 - name: test num_bytes: 93715331 num_examples: 11669 download_size: 107638803 dataset_size: 463623750 - config_name: wiki_lingua_hindi_hi features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: source_aligned dtype: translation: languages: - hi - en - name: target_aligned dtype: translation: languages: - hi - en - name: source dtype: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 99672133 num_examples: 6942 - name: validation num_bytes: 14706378 num_examples: 991 - name: test num_bytes: 28543048 num_examples: 1984 download_size: 21042040 dataset_size: 142921559 - config_name: wiki_lingua_indonesian_id features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: source_aligned dtype: translation: languages: - id - en - name: target_aligned dtype: translation: languages: - id - en - name: source dtype: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 263974954 num_examples: 33237 - name: validation num_bytes: 39297987 num_examples: 4747 - name: test num_bytes: 76567819 num_examples: 9497 download_size: 83968162 dataset_size: 379840760 - config_name: wiki_lingua_italian_it features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: source_aligned dtype: translation: languages: - it - en - name: target_aligned dtype: translation: languages: - it - en - name: source dtype: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 267090482 num_examples: 35661 - name: validation num_bytes: 39227425 num_examples: 5093 - name: test num_bytes: 76840429 num_examples: 10189 download_size: 88921209 dataset_size: 383158336 - config_name: wiki_lingua_japanese_ja features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: source_aligned dtype: translation: languages: - ja - en - name: target_aligned dtype: translation: languages: - ja - en - name: source dtype: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 73871019 num_examples: 8853 - name: validation num_bytes: 10807006 num_examples: 1264 - name: test num_bytes: 21175951 num_examples: 2530 download_size: 22803299 dataset_size: 105853976 - config_name: wiki_lingua_korean_ko features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: source_aligned dtype: translation: languages: - ko - en - name: target_aligned dtype: translation: languages: - ko - en - name: source dtype: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 73106687 num_examples: 8524 - name: validation num_bytes: 10788276 num_examples: 1216 - name: test num_bytes: 21172641 num_examples: 2436 download_size: 23336917 dataset_size: 105067604 - config_name: wiki_lingua_portuguese_pt features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: source_aligned dtype: translation: languages: - pt - en - name: target_aligned dtype: translation: languages: - pt - en - name: source dtype: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 405546332 num_examples: 57159 - name: validation num_bytes: 59729210 num_examples: 8165 - name: test num_bytes: 117775356 num_examples: 16331 download_size: 137542940 dataset_size: 583050898 - config_name: wiki_lingua_russian_ru features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: source_aligned dtype: translation: languages: - ru - en - name: target_aligned dtype: translation: languages: - ru - en - name: source dtype: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 406299624 num_examples: 37028 - name: validation num_bytes: 59651340 num_examples: 5288 - name: test num_bytes: 116330937 num_examples: 10580 download_size: 106281321 dataset_size: 582281901 - config_name: wiki_lingua_spanish_es features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: source_aligned dtype: translation: languages: - es - en - name: target_aligned dtype: translation: languages: - es - en - name: source dtype: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 604276564 num_examples: 79212 - name: validation num_bytes: 88677656 num_examples: 11316 - name: test num_bytes: 177096288 num_examples: 22632 download_size: 198247534 dataset_size: 870050508 - config_name: wiki_lingua_thai_th features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: source_aligned dtype: translation: languages: - th - en - name: target_aligned dtype: translation: languages: - th - en - name: source dtype: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 139287649 num_examples: 10325 - name: validation num_bytes: 21097845 num_examples: 1475 - name: test num_bytes: 40049968 num_examples: 2950 download_size: 29988180 dataset_size: 200435462 - config_name: wiki_lingua_turkish_tr features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: source_aligned dtype: translation: languages: - tr - en - name: target_aligned dtype: translation: languages: - tr - en - name: source dtype: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 21987247 num_examples: 3148 - name: validation num_bytes: 3229714 num_examples: 449 - name: test num_bytes: 6197850 num_examples: 900 download_size: 7055820 dataset_size: 31414811 - config_name: wiki_lingua_vietnamese_vi features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: source_aligned dtype: translation: languages: - vi - en - name: target_aligned dtype: translation: languages: - vi - en - name: source dtype: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 128025008 num_examples: 13707 - name: validation num_bytes: 19414734 num_examples: 1957 - name: test num_bytes: 37430208 num_examples: 3917 download_size: 38035490 dataset_size: 184869950 - config_name: xsum features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: xsum_id dtype: string - name: document dtype: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 66299136 num_examples: 23206 - name: validation num_bytes: 2270306 num_examples: 1117 - name: test num_bytes: 2598509 num_examples: 1166 - name: challenge_train_sample num_bytes: 1429145 num_examples: 500 - name: challenge_validation_sample num_bytes: 1012689 num_examples: 500 - name: challenge_test_backtranslation num_bytes: 1262047 num_examples: 500 - name: challenge_test_bfp_02 num_bytes: 1090364 num_examples: 500 - name: challenge_test_bfp_05 num_bytes: 1078076 num_examples: 500 - name: challenge_test_nopunc num_bytes: 1127796 num_examples: 500 - name: challenge_test_covid num_bytes: 1867180 num_examples: 401 download_size: 258277147 dataset_size: 80035248 - config_name: common_gen features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: concept_set_id dtype: int32 - name: concepts list: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 10475926 num_examples: 67389 - name: validation num_bytes: 405872 num_examples: 993 - name: test num_bytes: 153170 num_examples: 1497 - name: challenge_train_sample num_bytes: 85413 num_examples: 500 - name: challenge_validation_sample num_bytes: 215192 num_examples: 500 - name: challenge_test_scramble num_bytes: 60411 num_examples: 500 download_size: 1933517 dataset_size: 11395984 - config_name: cs_restaurants features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: dialog_act dtype: string - name: dialog_act_delexicalized dtype: string - name: target_delexicalized dtype: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 873145 num_examples: 3569 - name: validation num_bytes: 288222 num_examples: 781 - name: test num_bytes: 295696 num_examples: 842 - name: challenge_train_sample num_bytes: 127869 num_examples: 500 - name: challenge_validation_sample num_bytes: 193239 num_examples: 500 - name: challenge_test_scramble num_bytes: 185574 num_examples: 500 download_size: 1531111 dataset_size: 1963745 - config_name: dart features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: dart_id dtype: int32 - name: tripleset list: list: string - name: subtree_was_extended dtype: bool - name: target_sources list: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 23047610 num_examples: 62659 - name: validation num_bytes: 1934054 num_examples: 2768 - name: test num_bytes: 3476953 num_examples: 5097 download_size: 29939366 dataset_size: 28458617 - config_name: e2e_nlg features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: meaning_representation dtype: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 9129030 num_examples: 33525 - name: validation num_bytes: 1856097 num_examples: 4299 - name: test num_bytes: 2133695 num_examples: 4693 - name: challenge_train_sample num_bytes: 145319 num_examples: 500 - name: challenge_validation_sample num_bytes: 226525 num_examples: 500 - name: challenge_test_scramble num_bytes: 236199 num_examples: 500 download_size: 14668048 dataset_size: 13726865 - config_name: totto features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: totto_id dtype: int32 - name: table_page_title dtype: string - name: table_webpage_url dtype: string - name: table_section_title dtype: string - name: table_section_text dtype: string - name: table list: list: - name: column_span dtype: int32 - name: is_header dtype: bool - name: row_span dtype: int32 - name: value dtype: string - name: highlighted_cells list: list: int32 - name: example_id dtype: string - name: sentence_annotations list: - name: original_sentence dtype: string - name: sentence_after_deletion dtype: string - name: sentence_after_ambiguity dtype: string - name: final_sentence dtype: string - name: overlap_subset dtype: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 676032144 num_examples: 121153 - name: validation num_bytes: 50736204 num_examples: 7700 - name: test num_bytes: 41330062 num_examples: 7700 - name: challenge_train_sample num_bytes: 2283076 num_examples: 500 - name: challenge_validation_sample num_bytes: 3398639 num_examples: 500 - name: challenge_test_scramble num_bytes: 2638966 num_examples: 500 download_size: 189534609 dataset_size: 776419091 - config_name: web_nlg_en features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: input list: string - name: target dtype: string - name: references list: string - name: category dtype: string - name: webnlg_id dtype: string splits: - name: train num_bytes: 13067615 num_examples: 35426 - name: validation num_bytes: 1153995 num_examples: 1667 - name: test num_bytes: 1403601 num_examples: 1779 - name: challenge_train_sample num_bytes: 193198 num_examples: 502 - name: challenge_validation_sample num_bytes: 359868 num_examples: 499 - name: challenge_test_scramble num_bytes: 402407 num_examples: 500 - name: challenge_test_numbers num_bytes: 409213 num_examples: 500 download_size: 13181969 dataset_size: 16989897 - config_name: web_nlg_ru features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: input list: string - name: target dtype: string - name: references list: string - name: category dtype: string - name: webnlg_id dtype: string splits: - name: train num_bytes: 6888009 num_examples: 14630 - name: validation num_bytes: 795998 num_examples: 790 - name: test num_bytes: 1145282 num_examples: 1102 - name: challenge_train_sample num_bytes: 247089 num_examples: 501 - name: challenge_validation_sample num_bytes: 514117 num_examples: 500 - name: challenge_test_scramble num_bytes: 521625 num_examples: 500 download_size: 7854845 dataset_size: 10112120 - config_name: wiki_auto_asset_turk features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: source dtype: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 161095379 num_examples: 483801 - name: validation num_bytes: 8211308 num_examples: 20000 - name: test_asset num_bytes: 475336 num_examples: 359 - name: test_turk num_bytes: 406842 num_examples: 359 - name: challenge_train_sample num_bytes: 219542 num_examples: 500 - name: challenge_validation_sample num_bytes: 213048 num_examples: 500 - name: challenge_test_asset_backtranslation num_bytes: 436820 num_examples: 359 - name: challenge_test_asset_bfp02 num_bytes: 432742 num_examples: 359 - name: challenge_test_asset_bfp05 num_bytes: 432742 num_examples: 359 - name: challenge_test_asset_nopunc num_bytes: 432735 num_examples: 359 - name: challenge_test_turk_backtranslation num_bytes: 417204 num_examples: 359 - name: challenge_test_turk_bfp02 num_bytes: 414381 num_examples: 359 - name: challenge_test_turk_bfp05 num_bytes: 414383 num_examples: 359 - name: challenge_test_turk_nopunc num_bytes: 414388 num_examples: 359 download_size: 126927527 dataset_size: 174016850 - config_name: schema_guided_dialog features: - name: gem_id dtype: string - name: gem_parent_id dtype: string - name: dialog_acts list: - name: act dtype: class_label: names: '0': AFFIRM '1': AFFIRM_INTENT '2': CONFIRM '3': GOODBYE '4': INFORM '5': INFORM_COUNT '6': INFORM_INTENT '7': NEGATE '8': NEGATE_INTENT '9': NOTIFY_FAILURE '10': NOTIFY_SUCCESS '11': OFFER '12': OFFER_INTENT '13': REQUEST '14': REQUEST_ALTS '15': REQ_MORE '16': SELECT '17': THANK_YOU - name: slot dtype: string - name: values list: string - name: context list: string - name: dialog_id dtype: string - name: service dtype: string - name: turn_id dtype: int32 - name: prompt dtype: string - name: target dtype: string - name: references list: string splits: - name: train num_bytes: 146648117 num_examples: 164982 - name: validation num_bytes: 9376504 num_examples: 10000 - name: test num_bytes: 10160596 num_examples: 10000 - name: challenge_train_sample num_bytes: 441326 num_examples: 500 - name: challenge_validation_sample num_bytes: 491492 num_examples: 500 - name: challenge_test_backtranslation num_bytes: 512834 num_examples: 500 - name: challenge_test_bfp02 num_bytes: 529404 num_examples: 500 - name: challenge_test_bfp05 num_bytes: 515151 num_examples: 500 - name: challenge_test_nopunc num_bytes: 509332 num_examples: 500 - name: challenge_test_scramble num_bytes: 514644 num_examples: 500 download_size: 17826468 dataset_size: 169699400 config_names: - common_gen - cs_restaurants - dart - e2e_nlg - mlsum_de - mlsum_es - schema_guided_dialog - totto - web_nlg_en - web_nlg_ru - wiki_auto_asset_turk - wiki_lingua_es_en - wiki_lingua_ru_en - wiki_lingua_tr_en - wiki_lingua_vi_en - xsum --- # Dataset Card for GEM ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://gem-benchmark.github.io/](https://gem-benchmark.github.io/) - **Repository:** - **Paper:** [The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics](https://arxiv.org/abs/2102.01672) - **Point of Contact:** [Sebastian Gehrman](gehrmann@google.com) - **Size of downloaded dataset files:** 2.19 GB - **Size of the generated dataset:** 3.92 GB - **Total amount of disk used:** 6.10 GB ### Dataset Summary GEM is a benchmark environment for Natural Language Generation with a focus on its Evaluation, both through human annotations and automated Metrics. GEM aims to: - measure NLG progress across 13 datasets spanning many NLG tasks and languages. - provide an in-depth analysis of data and models presented via data statements and challenge sets. - develop standards for evaluation of generated text using both automated and human metrics. It is our goal to regularly update GEM and to encourage toward more inclusive practices in dataset development by extending existing data or developing datasets for additional languages. You can find more complete information in the dataset cards for each of the subsets: - [CommonGen](https://gem-benchmark.com/data_cards/common_gen) - [Czech Restaurant](https://gem-benchmark.com/data_cards/cs_restaurants) - [DART](https://gem-benchmark.com/data_cards/dart) - [E2E](https://gem-benchmark.com/data_cards/e2e_nlg) - [MLSum](https://gem-benchmark.com/data_cards/mlsum) - [Schema-Guided Dialog](https://gem-benchmark.com/data_cards/schema_guided_dialog) - [WebNLG](https://gem-benchmark.com/data_cards/web_nlg) - [Wiki-Auto/ASSET/TURK](https://gem-benchmark.com/data_cards/wiki_auto_asset_turk) - [WikiLingua](https://gem-benchmark.com/data_cards/wiki_lingua) - [XSum](https://gem-benchmark.com/data_cards/xsum) The subsets are organized by task: ``` { "summarization": { "mlsum": ["mlsum_de", "mlsum_es"], "wiki_lingua": ["wiki_lingua_es_en", "wiki_lingua_ru_en", "wiki_lingua_tr_en", "wiki_lingua_vi_en"], "xsum": ["xsum"], }, "struct2text": { "common_gen": ["common_gen"], "cs_restaurants": ["cs_restaurants"], "dart": ["dart"], "e2e": ["e2e_nlg"], "totto": ["totto"], "web_nlg": ["web_nlg_en", "web_nlg_ru"], }, "simplification": { "wiki_auto_asset_turk": ["wiki_auto_asset_turk"], }, "dialog": { "schema_guided_dialog": ["schema_guided_dialog"], }, } ``` Each example has one `target` per example in its training set, and a set of `references` (with one or more items) in its validation and test set. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### common_gen - **Size of downloaded dataset files:** 1.85 MB - **Size of the generated dataset:** 9.23 MB - **Total amount of disk used:** 11.07 MB An example of `validation` looks as follows. ``` {'concept_set_id': 0, 'concepts': ['field', 'look', 'stand'], 'gem_id': 'common_gen-validation-0', 'references': ['The player stood in the field looking at the batter.', 'The coach stands along the field, looking at the goalkeeper.', 'I stood and looked across the field, peacefully.', 'Someone stands, looking around the empty field.'], 'target': 'The player stood in the field looking at the batter.'} ``` #### cs_restaurants - **Size of downloaded dataset files:** 1.47 MB - **Size of the generated dataset:** 1.31 MB - **Total amount of disk used:** 2.77 MB An example of `validation` looks as follows. ``` {'dialog_act': '?request(area)', 'dialog_act_delexicalized': '?request(area)', 'gem_id': 'cs_restaurants-validation-0', 'references': ['Jakou lokalitu hledáte ?'], 'target': 'Jakou lokalitu hledáte ?', 'target_delexicalized': 'Jakou lokalitu hledáte ?'} ``` #### dart - **Size of downloaded dataset files:** 29.37 MB - **Size of the generated dataset:** 27.44 MB - **Total amount of disk used:** 56.81 MB An example of `validation` looks as follows. ``` {'dart_id': 0, 'gem_id': 'dart-validation-0', 'references': ['A school from Mars Hill, North Carolina, joined in 1973.'], 'subtree_was_extended': True, 'target': 'A school from Mars Hill, North Carolina, joined in 1973.', 'target_sources': ['WikiSQL_decl_sents'], 'tripleset': [['Mars Hill College', 'JOINED', '1973'], ['Mars Hill College', 'LOCATION', 'Mars Hill, North Carolina']]} ``` #### e2e_nlg - **Size of downloaded dataset files:** 14.60 MB - **Size of the generated dataset:** 12.14 MB - **Total amount of disk used:** 26.74 MB An example of `validation` looks as follows. ``` {'gem_id': 'e2e_nlg-validation-0', 'meaning_representation': 'name[Alimentum], area[city centre], familyFriendly[no]', 'references': ['There is a place in the city centre, Alimentum, that is not family-friendly.'], 'target': 'There is a place in the city centre, Alimentum, that is not family-friendly.'} ``` #### mlsum_de - **Size of downloaded dataset files:** 347.36 MB - **Size of the generated dataset:** 951.06 MB - **Total amount of disk used:** 1.30 GB An example of `validation` looks as follows. ``` {'date': '00/04/2019', 'gem_id': 'mlsum_de-validation-0', 'references': ['In einer Kleinstadt auf der Insel Usedom war eine junge Frau tot in ihrer Wohnung gefunden worden. Nun stehen zwei Bekannte unter Verdacht.'], 'target': 'In einer Kleinstadt auf der Insel Usedom war eine junge Frau tot in ihrer Wohnung gefunden worden. Nun stehen zwei Bekannte unter Verdacht.', 'text': 'Kerzen und Blumen stehen vor dem Eingang eines Hauses, in dem eine 18-jährige Frau tot aufgefunden wurde. In einer Kleinstadt auf der Insel Usedom war eine junge Frau tot in ...', 'title': 'Tod von 18-Jähriger auf Usedom: Zwei Festnahmen', 'topic': 'panorama', 'url': 'https://www.sueddeutsche.de/panorama/usedom-frau-tot-festnahme-verdaechtige-1.4412256'} ``` #### mlsum_es - **Size of downloaded dataset files:** 514.11 MB - **Size of the generated dataset:** 1.31 GB - **Total amount of disk used:** 1.83 GB An example of `validation` looks as follows. ``` {'date': '05/01/2019', 'gem_id': 'mlsum_es-validation-0', 'references': ['El diseñador que dio carta de naturaleza al estilo genuinamente americano celebra el medio siglo de su marca entre grandes fastos y problemas financieros. Conectar con las nuevas generaciones es el regalo que precisa más que nunca'], 'target': 'El diseñador que dio carta de naturaleza al estilo genuinamente americano celebra el medio siglo de su marca entre grandes fastos y problemas financieros. Conectar con las nuevas generaciones es el regalo que precisa más que nunca', 'text': 'Un oso de peluche marcándose un heelflip de monopatín es todo lo que Ralph Lauren necesitaba esta Navidad. Estampado en un jersey de lana azul marino, supone la guinda que corona ...', 'title': 'Ralph Lauren busca el secreto de la eterna juventud', 'topic': 'elpais estilo', 'url': 'http://elpais.com/elpais/2019/01/04/estilo/1546617396_933318.html'} ``` #### schema_guided_dialog - **Size of downloaded dataset files:** 8.64 MB - **Size of the generated dataset:** 45.78 MB - **Total amount of disk used:** 54.43 MB An example of `validation` looks as follows. ``` {'dialog_acts': [{'act': 2, 'slot': 'song_name', 'values': ['Carnivore']}, {'act': 2, 'slot': 'playback_device', 'values': ['TV']}], 'dialog_id': '10_00054', 'gem_id': 'schema_guided_dialog-validation-0', 'prompt': 'Yes, I would.', 'references': ['Please confirm the song Carnivore on tv.'], 'target': 'Please confirm the song Carnivore on tv.', 'turn_id': 15} ``` #### totto - **Size of downloaded dataset files:** 187.73 MB - **Size of the generated dataset:** 757.99 MB - **Total amount of disk used:** 945.72 MB An example of `validation` looks as follows. ``` {'example_id': '7391450717765563190', 'gem_id': 'totto-validation-0', 'highlighted_cells': [[3, 0], [3, 2], [3, 3]], 'overlap_subset': 'True', 'references': ['Daniel Henry Chamberlain was the 76th Governor of South Carolina from 1874.', 'Daniel Henry Chamberlain was the 76th Governor of South Carolina, beginning in 1874.', 'Daniel Henry Chamberlain was the 76th Governor of South Carolina who took office in 1874.'], 'sentence_annotations': [{'final_sentence': 'Daniel Henry Chamberlain was the 76th Governor of South Carolina from 1874.', 'original_sentence': 'Daniel Henry Chamberlain (June 23, 1835 – April 13, 1907) was an American planter, lawyer, author and the 76th Governor of South Carolina ' 'from 1874 until 1877.', 'sentence_after_ambiguity': 'Daniel Henry Chamberlain was the 76th Governor of South Carolina from 1874.', 'sentence_after_deletion': 'Daniel Henry Chamberlain was the 76th Governor of South Carolina from 1874.'}, ... ], 'table': [[{'column_span': 1, 'is_header': True, 'row_span': 1, 'value': '#'}, {'column_span': 2, 'is_header': True, 'row_span': 1, 'value': 'Governor'}, {'column_span': 1, 'is_header': True, 'row_span': 1, 'value': 'Took Office'}, {'column_span': 1, 'is_header': True, 'row_span': 1, 'value': 'Left Office'}], [{'column_span': 1, 'is_header': True, 'row_span': 1, 'value': '74'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '-'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Robert Kingston Scott'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'July 6, 1868'}], ... ], 'table_page_title': 'List of Governors of South Carolina', 'table_section_text': 'Parties Democratic Republican', 'table_section_title': 'Governors under the Constitution of 1868', 'table_webpage_url': 'http://en.wikipedia.org/wiki/List_of_Governors_of_South_Carolina', 'target': 'Daniel Henry Chamberlain was the 76th Governor of South Carolina from 1874.', 'totto_id': 0} ``` #### web_nlg_en - **Size of downloaded dataset files:** 12.95 MB - **Size of the generated dataset:** 14.63 MB - **Total amount of disk used:** 27.57 MB An example of `validation` looks as follows. ``` {'category': 'Airport', 'gem_id': 'web_nlg_en-validation-0', 'input': ['Aarhus | leader | Jacob_Bundsgaard'], 'references': ['The leader of Aarhus is Jacob Bundsgaard.'], 'target': 'The leader of Aarhus is Jacob Bundsgaard.', 'webnlg_id': 'dev/Airport/1/Id1'} ``` #### web_nlg_ru - **Size of downloaded dataset files:** 7.63 MB - **Size of the generated dataset:** 8.41 MB - **Total amount of disk used:** 16.04 MB An example of `validation` looks as follows. ``` {'category': 'Airport', 'gem_id': 'web_nlg_ru-validation-0', 'input': ['Punjab,_Pakistan | leaderTitle | Provincial_Assembly_of_the_Punjab'], 'references': ['Пенджаб, Пакистан, возглавляется Провинциальной ассамблеей Пенджаба.', 'Пенджаб, Пакистан возглавляется Провинциальной ассамблеей Пенджаба.'], 'target': 'Пенджаб, Пакистан, возглавляется Провинциальной ассамблеей Пенджаба.', 'webnlg_id': 'dev/Airport/1/Id1'} ``` #### wiki_auto_asset_turk - **Size of downloaded dataset files:** 127.27 MB - **Size of the generated dataset:** 152.77 MB - **Total amount of disk used:** 280.04 MB An example of `validation` looks as follows. ``` {'gem_id': 'wiki_auto_asset_turk-validation-0', 'references': ['The Gandalf Awards honor excellent writing in in fantasy literature.'], 'source': 'The Gandalf Awards, honoring achievement in fantasy literature, were conferred by the World Science Fiction Society annually from 1974 to 1981.', 'source_id': '350_691837-1-0-0', 'target': 'The Gandalf Awards honor excellent writing in in fantasy literature.', 'target_id': '350_691837-0-0-0'} ``` #### wiki_lingua_es_en - **Size of downloaded dataset files:** 169.41 MB - **Size of the generated dataset:** 287.60 MB - **Total amount of disk used:** 457.01 MB An example of `validation` looks as follows. ``` 'references': ["Practice matted hair prevention from early in your cat's life. Make sure that your cat is grooming itself effectively. Keep a close eye on cats with long hair."], 'source': 'Muchas personas presentan problemas porque no cepillaron el pelaje de sus gatos en una etapa temprana de su vida, ya que no lo consideraban necesario. Sin embargo, a medida que...', 'target': "Practice matted hair prevention from early in your cat's life. Make sure that your cat is grooming itself effectively. Keep a close eye on cats with long hair."} ``` #### wiki_lingua_ru_en - **Size of downloaded dataset files:** 169.41 MB - **Size of the generated dataset:** 211.21 MB - **Total amount of disk used:** 380.62 MB An example of `validation` looks as follows. ``` {'gem_id': 'wiki_lingua_ru_en-val-0', 'references': ['Get immediate medical care if you notice signs of a complication. Undergo diagnostic tests to check for gallstones and complications. Ask your doctor about your treatment ' 'options.'], 'source': 'И хотя, скорее всего, вам не о чем волноваться, следует незамедлительно обратиться к врачу, если вы подозреваете, что у вас возникло осложнение желчекаменной болезни. Это ...', 'target': 'Get immediate medical care if you notice signs of a complication. Undergo diagnostic tests to check for gallstones and complications. Ask your doctor about your treatment ' 'options.'} ``` #### wiki_lingua_tr_en - **Size of downloaded dataset files:** 169.41 MB - **Size of the generated dataset:** 10.35 MB - **Total amount of disk used:** 179.75 MB An example of `validation` looks as follows. ``` {'gem_id': 'wiki_lingua_tr_en-val-0', 'references': ['Open Instagram. Go to the video you want to download. Tap ⋮. Tap Copy Link. Open Google Chrome. Tap the address bar. Go to the SaveFromWeb site. Tap the "Paste Instagram Video" text box. Tap and hold the text box. Tap PASTE. Tap Download. Download the video. Find the video on your Android.'], 'source': 'Instagram uygulamasının çok renkli kamera şeklindeki simgesine dokun. Daha önce giriş yaptıysan Instagram haber kaynağı açılır. Giriş yapmadıysan istendiğinde e-posta adresini ...', 'target': 'Open Instagram. Go to the video you want to download. Tap ⋮. Tap Copy Link. Open Google Chrome. Tap the address bar. Go to the SaveFromWeb site. Tap the "Paste Instagram Video" text box. Tap and hold the text box. Tap PASTE. Tap Download. Download the video. Find the video on your Android.'} ``` #### wiki_lingua_vi_en - **Size of downloaded dataset files:** 169.41 MB - **Size of the generated dataset:** 41.02 MB - **Total amount of disk used:** 210.43 MB An example of `validation` looks as follows. ``` {'gem_id': 'wiki_lingua_vi_en-val-0', 'references': ['Select the right time of year for planting the tree. You will usually want to plant your tree when it is dormant, or not flowering, during cooler or colder times of year.'], 'source': 'Bạn muốn cung cấp cho cây cơ hội tốt nhất để phát triển và sinh tồn. Trồng cây đúng thời điểm trong năm chính là yếu tố then chốt. Thời điểm sẽ thay đổi phụ thuộc vào loài cây ...', 'target': 'Select the right time of year for planting the tree. You will usually want to plant your tree when it is dormant, or not flowering, during cooler or colder times of year.'} ``` #### xsum - **Size of downloaded dataset files:** 254.89 MB - **Size of the generated dataset:** 70.67 MB - **Total amount of disk used:** 325.56 MB An example of `validation` looks as follows. ``` {'document': 'Burberry reported pre-tax profits of £166m for the year to March. A year ago it made a loss of £16.1m, hit by charges at its Spanish operations.\n' 'In the past year it has opened 21 new stores and closed nine. It plans to open 20-30 stores this year worldwide.\n' 'The group has also focused on promoting the Burberry brand online...', 'gem_id': 'xsum-validation-0', 'references': ['Luxury fashion designer Burberry has returned to profit after opening new stores and spending more on online marketing'], 'target': 'Luxury fashion designer Burberry has returned to profit after opening new stores and spending more on online marketing', 'xsum_id': '10162122'} ``` ### Data Fields The data fields are the same among all splits. #### common_gen - `gem_id`: a `string` feature. - `concept_set_id`: a `int32` feature. - `concepts`: a `list` of `string` features. - `target`: a `string` feature. - `references`: a `list` of `string` features. #### cs_restaurants - `gem_id`: a `string` feature. - `dialog_act`: a `string` feature. - `dialog_act_delexicalized`: a `string` feature. - `target_delexicalized`: a `string` feature. - `target`: a `string` feature. - `references`: a `list` of `string` features. #### dart - `gem_id`: a `string` feature. - `dart_id`: a `int32` feature. - `tripleset`: a `list` of `string` features. - `subtree_was_extended`: a `bool` feature. - `target_sources`: a `list` of `string` features. - `target`: a `string` feature. - `references`: a `list` of `string` features. #### e2e_nlg - `gem_id`: a `string` feature. - `meaning_representation`: a `string` feature. - `target`: a `string` feature. - `references`: a `list` of `string` features. #### mlsum_de - `gem_id`: a `string` feature. - `text`: a `string` feature. - `topic`: a `string` feature. - `url`: a `string` feature. - `title`: a `string` feature. - `date`: a `string` feature. - `target`: a `string` feature. - `references`: a `list` of `string` features. #### mlsum_es - `gem_id`: a `string` feature. - `text`: a `string` feature. - `topic`: a `string` feature. - `url`: a `string` feature. - `title`: a `string` feature. - `date`: a `string` feature. - `target`: a `string` feature. - `references`: a `list` of `string` features. #### schema_guided_dialog - `gem_id`: a `string` feature. - `act`: a classification label, with possible values including `AFFIRM` (0), `AFFIRM_INTENT` (1), `CONFIRM` (2), `GOODBYE` (3), `INFORM` (4). - `slot`: a `string` feature. - `values`: a `list` of `string` features. - `dialog_id`: a `string` feature. - `turn_id`: a `int32` feature. - `prompt`: a `string` feature. - `target`: a `string` feature. - `references`: a `list` of `string` features. #### totto - `gem_id`: a `string` feature. - `totto_id`: a `int32` feature. - `table_page_title`: a `string` feature. - `table_webpage_url`: a `string` feature. - `table_section_title`: a `string` feature. - `table_section_text`: a `string` feature. - `column_span`: a `int32` feature. - `is_header`: a `bool` feature. - `row_span`: a `int32` feature. - `value`: a `string` feature. - `highlighted_cells`: a `list` of `int32` features. - `example_id`: a `string` feature. - `original_sentence`: a `string` feature. - `sentence_after_deletion`: a `string` feature. - `sentence_after_ambiguity`: a `string` feature. - `final_sentence`: a `string` feature. - `overlap_subset`: a `string` feature. - `target`: a `string` feature. - `references`: a `list` of `string` features. #### web_nlg_en - `gem_id`: a `string` feature. - `input`: a `list` of `string` features. - `target`: a `string` feature. - `references`: a `list` of `string` features. - `category`: a `string` feature. - `webnlg_id`: a `string` feature. #### web_nlg_ru - `gem_id`: a `string` feature. - `input`: a `list` of `string` features. - `target`: a `string` feature. - `references`: a `list` of `string` features. - `category`: a `string` feature. - `webnlg_id`: a `string` feature. #### wiki_auto_asset_turk - `gem_id`: a `string` feature. - `source_id`: a `string` feature. - `target_id`: a `string` feature. - `source`: a `string` feature. - `target`: a `string` feature. - `references`: a `list` of `string` features. #### wiki_lingua_es_en - `gem_id`: a `string` feature. - `source`: a `string` feature. - `target`: a `string` feature. - `references`: a `list` of `string` features. #### wiki_lingua_ru_en - `gem_id`: a `string` feature. - `source`: a `string` feature. - `target`: a `string` feature. - `references`: a `list` of `string` features. #### wiki_lingua_tr_en - `gem_id`: a `string` feature. - `source`: a `string` feature. - `target`: a `string` feature. - `references`: a `list` of `string` features. #### wiki_lingua_vi_en - `gem_id`: a `string` feature. - `source`: a `string` feature. - `target`: a `string` feature. - `references`: a `list` of `string` features. #### xsum - `gem_id`: a `string` feature. - `xsum_id`: a `string` feature. - `document`: a `string` feature. - `target`: a `string` feature. - `references`: a `list` of `string` features. ### Data Splits #### common_gen | |train|validation|test| |----------|----:|---------:|---:| |common_gen|67389| 993|1497| #### cs_restaurants | |train|validation|test| |--------------|----:|---------:|---:| |cs_restaurants| 3569| 781| 842| #### dart | |train|validation|test| |----|----:|---------:|---:| |dart|62659| 2768|6959| #### e2e_nlg | |train|validation|test| |-------|----:|---------:|---:| |e2e_nlg|33525| 4299|4693| #### mlsum_de | |train |validation|test | |--------|-----:|---------:|----:| |mlsum_de|220748| 11392|10695| #### mlsum_es | |train |validation|test | |--------|-----:|---------:|----:| |mlsum_es|259886| 9977|13365| #### schema_guided_dialog | |train |validation|test | |--------------------|-----:|---------:|----:| |schema_guided_dialog|164982| 10000|10000| #### totto | |train |validation|test| |-----|-----:|---------:|---:| |totto|121153| 7700|7700| #### web_nlg_en | |train|validation|test| |----------|----:|---------:|---:| |web_nlg_en|35426| 1667|1779| #### web_nlg_ru | |train|validation|test| |----------|----:|---------:|---:| |web_nlg_ru|14630| 790|1102| #### wiki_auto_asset_turk | |train |validation|test_asset|test_turk| |--------------------|-----:|---------:|---------:|--------:| |wiki_auto_asset_turk|373801| 73249| 359| 359| #### wiki_lingua_es_en | |train|validation|test | |-----------------|----:|---------:|----:| |wiki_lingua_es_en|79515| 8835|19797| #### wiki_lingua_ru_en | |train|validation|test| |-----------------|----:|---------:|---:| |wiki_lingua_ru_en|36898| 4100|9094| #### wiki_lingua_tr_en | |train|validation|test| |-----------------|----:|---------:|---:| |wiki_lingua_tr_en| 3193| 355| 808| #### wiki_lingua_vi_en | |train|validation|test| |-----------------|----:|---------:|---:| |wiki_lingua_vi_en| 9206| 1023|2167| #### xsum | |train|validation|test| |----|----:|---------:|---:| |xsum|23206| 1117|1166| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information CC-BY-SA-4.0 ### Citation Information ``` @article{gem_benchmark, author = {Sebastian Gehrmann and Tosin P. Adewumi and Karmanya Aggarwal and Pawan Sasanka Ammanamanchi and Aremu Anuoluwapo and Antoine Bosselut and Khyathi Raghavi Chandu and Miruna{-}Adriana Clinciu and Dipanjan Das and Kaustubh D. Dhole and Wanyu Du and Esin Durmus and Ondrej Dusek and Chris Emezue and Varun Gangal and Cristina Garbacea and Tatsunori Hashimoto and Yufang Hou and Yacine Jernite and Harsh Jhamtani and Yangfeng Ji and Shailza Jolly and Dhruv Kumar and Faisal Ladhak and Aman Madaan and Mounica Maddela and Khyati Mahajan and Saad Mahamood and Bodhisattwa Prasad Majumder and Pedro Henrique Martins and Angelina McMillan{-}Major and Simon Mille and Emiel van Miltenburg and Moin Nadeem and Shashi Narayan and Vitaly Nikolaev and Rubungo Andre Niyongabo and Salomey Osei and Ankur P. Parikh and Laura Perez{-}Beltrachini and Niranjan Ramesh Rao and Vikas Raunak and Juan Diego Rodriguez and Sashank Santhanam and Jo{\~{a}}o Sedoc and Thibault Sellam and Samira Shaikh and Anastasia Shimorina and Marco Antonio Sobrevilla Cabezudo and Hendrik Strobelt and Nishant Subramani and Wei Xu and Diyi Yang and Akhila Yerukola and Jiawei Zhou}, title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and Metrics}, journal = {CoRR}, volume = {abs/2102.01672}, year = {2021}, url = {https://arxiv.org/abs/2102.01672}, archivePrefix = {arXiv}, eprint = {2102.01672} } ``` ### Contributions Thanks to [@yjernite](https://github.com/yjernite) for adding this dataset.
L4NLP/LEval
--- license: gpl-3.0 task_categories: - summarization - question-answering - multiple-choice language: - en size_categories: - 1K<n<10K viewer: true tags: - Long_context --- ### *L-Eval: Instituting Standardized Evaluation for Long Context Language Models* L-Eval is a comprehensive long-context language models evaluation suite with 18 long document tasks across multiple domains that require reasoning over long texts, including summarization, question answering, in-context learning with long CoT examples, topic retrieval, and paper writing assistance. L-Eval is a high-quality test set with 411 long documents and 2043 query-response pairs. All samples in L-Eval have been manually annotated and checked by the authors. There have been many studies exploring the expansion of context length in large models. However, it remains to be explored whether these methods perform well enough in downstream tasks and whether they can surpass previous methods based on retrieval or chunking. We hope L-Eval could help researchers and developers track the progress of long-context language models (LCLMs) and understand the strengths/shortcomings of different methods. Dataset list: ``` ["coursera", "gsm100", "quality", "topic_retrieval_longchat", "tpo", "financial_qa", "gov_report_summ", "legal_contract_qa", "meeting_summ", "multidoc_qa", "narrative_qa", "natural_question", "news_summ", "paper_assistant", "patent_summ", "review_summ", "scientific_qa", "tv_show_summ"] ``` Detailed descriptions and how we collect the data can be found [https://github.com/OpenLMLab/LEval](https://github.com/OpenLMLab/LEval).
castorini/mr-tydi
--- language: - ar - bn - en - fi - id - fi - ja - ko - ru - sw - te - th multilinguality: - multilingual task_categories: - text-retrieval license: apache-2.0 --- # Dataset Summary Mr. TyDi is a multi-lingual benchmark dataset built on TyDi, covering eleven typologically diverse languages. It is designed for monolingual retrieval, specifically to evaluate ranking with learned dense representations. This dataset stores the queries, judgements, and example training data of Mr. TyDi. To access the corpus, please refer to [castorini/mr-tydi-corpus](https://huggingface.co/datasets/castorini/mr-tydi-corpus). # Dataset Structure The only configuration here is the `language`, For each language, there are three splits: `train`, `dev`, and `test`. The negative examples from training set are sampled from the top-30 BM25 runfiles on each language. Specifically, we combine the **training** data for all languages under the `combined` configuration. An example of `train` set looks as follows: ``` { 'query_id': '1', 'query': 'When was quantum field theory developed?', 'positive_passages': [ { 'docid': '25267#12', 'title': 'Quantum field theory', 'text': 'Quantum field theory naturally began with the study of electromagnetic interactions, as the electromagnetic field was the only known classical field as of the 1920s.' }, ... ] 'negative_passages': [ { 'docid': '346489#8', 'title': 'Local quantum field theory', 'text': 'More recently, the approach has been further implemented to include an algebraic version of quantum field ...' }, ... ], } ``` An example of `dev` and `test` set looks as follows. We only provide the docid of positive passages here to save the space. Also no candidate passages are provided at this point. Note that to perform the retrieval, it need to be used together with [castorini/mr-tydi-corpus](https://huggingface.co/datasets/castorini/mr-tydi-corpus) ``` { 'query_id': '0', 'query': 'Is Creole a pidgin of French?', 'positive_passages': [ { 'docid': '3716905#1', 'title': '', 'text': '' }, ... ] } ``` # Load Dataset An example to load the dataset: ``` language = 'english' # to load all train, dev and test sets dataset = load_dataset('castorini/mr-tydi', language) # or to load a specific set: set_name = 'train' dataset = load_dataset('castorini/mr-tydi', language, set_name) ``` Note that the 'combined' option has only the 'train' set. # Citation Information ``` @article{mrtydi, title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval}, author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin}, year={2021}, journal={arXiv:2108.08787}, } ```
web_questions
--- annotations_creators: - crowdsourced language_creators: - found language: - en license: - unknown multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - question-answering task_ids: - open-domain-qa paperswithcode_id: webquestions pretty_name: WebQuestions dataset_info: features: - name: url dtype: string - name: question dtype: string - name: answers sequence: string splits: - name: train num_bytes: 530711 num_examples: 3778 - name: test num_bytes: 288184 num_examples: 2032 download_size: 402395 dataset_size: 818895 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* --- # Dataset Card for "web_questions" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://worksheets.codalab.org/worksheets/0xba659fe363cb46e7a505c5b6a774dc8a](https://worksheets.codalab.org/worksheets/0xba659fe363cb46e7a505c5b6a774dc8a) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [Semantic Parsing on Freebase from Question-Answer Pairs](https://aclanthology.org/D13-1160/) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 1.27 MB - **Size of the generated dataset:** 0.83 MB - **Total amount of disk used:** 2.10 MB ### Dataset Summary This dataset consists of 6,642 question/answer pairs. The questions are supposed to be answerable by Freebase, a large knowledge graph. The questions are mostly centered around a single named entity. The questions are popular ones asked on the web (at least in 2013). ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 1.27 MB - **Size of the generated dataset:** 0.83 MB - **Total amount of disk used:** 2.10 MB An example of 'train' looks as follows. ``` { "answers": ["Jamaican Creole English Language", "Jamaican English"], "question": "what does jamaican people speak?", "url": "http://www.freebase.com/view/en/jamaica" } ``` ### Data Fields The data fields are the same among all splits. #### default - `url`: a `string` feature. - `question`: a `string` feature. - `answers`: a `list` of `string` features. ### Data Splits | name |train|test| |-------|----:|---:| |default| 3778|2032| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{berant-etal-2013-semantic, title = "Semantic Parsing on {F}reebase from Question-Answer Pairs", author = "Berant, Jonathan and Chou, Andrew and Frostig, Roy and Liang, Percy", booktitle = "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", month = oct, year = "2013", address = "Seattle, Washington, USA", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/D13-1160", pages = "1533--1544", } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun) for adding this dataset.
yizhongw/self_instruct
--- license: apache-2.0 dataset_info: - config_name: self_instruct features: - name: prompt dtype: string - name: completion dtype: string splits: - name: train num_bytes: 20527462 num_examples: 82612 download_size: 24113858 dataset_size: 20527462 - config_name: human_eval features: - name: id dtype: string - name: motivation_app dtype: string - name: instruction dtype: string - name: instances sequence: - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 151244 num_examples: 252 download_size: 170193 dataset_size: 151244 - config_name: super_natural_instructions features: - name: prompt dtype: string - name: completion dtype: string splits: - name: train num_bytes: 40352923 num_examples: 50000 - name: test num_bytes: 9713953 num_examples: 11810 download_size: 52975509 dataset_size: 50066876 - config_name: prompt_source features: - name: prompt dtype: string - name: completion dtype: string splits: - name: train num_bytes: 57368889 num_examples: 52657 download_size: 60126945 dataset_size: 57368889 - config_name: p3 features: - name: prompt dtype: string - name: completion dtype: string splits: - name: train num_bytes: 57368889 num_examples: 52657 download_size: 60126945 dataset_size: 57368889 --- # Dataset Card for Self Instruct ## Table of Contents - [Dataset Card for Self Instruct](#dataset-card-for-self-instruct) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [self\_instruct](#self_instruct) - [super\_natural\_instructions](#super_natural_instructions) - [p3](#p3) - [human\_eval](#human_eval) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [self\_instruct](#self_instruct-1) - [super\_natural\_instructions](#super_natural_instructions-1) - [p3](#p3-1) - [human\_eval](#human_eval-1) - [Data Fields](#data-fields) - [self\_instruct](#self_instruct-2) - [super\_natural\_instructions](#super_natural_instructions-2) - [p3](#p3-2) - [human\_eval](#human_eval-2) - [Data Splits](#data-splits) - [self\_instruct](#self_instruct-3) - [super\_natural\_instructions](#super_natural_instructions-3) - [p3](#p3-3) - [human\_eval](#human_eval-3) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** - **Repository:** https://github.com/yizhongw/self-instruct - **Paper:** https://arxiv.org/abs/2212.10560 - **Leaderboard:** - **Point of Contact:** Yizhong Wang ### Dataset Summary Self-Instruct is a framework that helps language models improve their ability to follow natural language instructions. It does this by using the model's own generations to create a large collection of instructional data. With Self-Instruct, it is possible to improve the instruction-following capabilities of language models without relying on extensive manual annotation. A part of this framework, the Self-Instruct authors released a dataset that contains 52k instructions, paired with 82K instance inputs and outputs. This instruction data can be used to conduct instruction-tuning for language models and make the language model follow instruction better. The authors also released a new set of 252 expert-written tasks and their instructions motivated by user-oriented applications (rather than well-studied NLP tasks). This data is used in the human evaluation section of [the Self Instruct paper](https://arxiv.org/abs/2212.10560). To enable comparison on public datasets, Self-Instruct also contains 50k examples from the P3 and Super Natural Instructions datasets. ### Supported Tasks and Leaderboards The datasets in Self-Instruct are designed for _instruction training_ pretrained language models. The following subsets are provided as part of Self Instruct. #### self_instruct 82k prompts and model completions generated via OpenAI's `davinci` engine. #### super_natural_instructions 50k expert written instructions and demonstrations sampled from the [Super Natural Instructions dataset](https://arxiv.org/abs/2204.07705) #### p3 50k crowd-sourced instructions and demonstrations sampled from the [Public Pool of Prompts (P3) dataset](https://huggingface.co/datasets/bigscience/P3) #### human_eval 252 expert-written tasks and their instructions motivated by user-oriented applications (rather than well-studied NLP tasks). This data is used in the human evaluation section of [the self-instruct paper](https://arxiv.org/abs/2212.10560). Please refer to [the human evaluation information]([human_eval/README.md](https://github.com/yizhongw/self-instruct/blob/main/human_eval/README.md)) for more details. ### Languages The data in Self Instruct are in English (BCP-47 en). ## Dataset Structure ### Data Instances #### self_instruct An example pf "train" looks as follows: ```json { "prompt": "Explain the origin of life on earth. Output:", "completion": " Life on Earth is believed to have begun about 3.5 billion years ago, when the planet was still in its early stages of formation. The earliest evidence for life on Earth are fossils of microorganisms (bacteria and archaea) that lived approximately 3.5 billion years ago. These organisms were capable of converting carbon dioxide into organic compounds using sunlight as an energy source. This process is known as photosynthesis.", } ``` #### super_natural_instructions An example pf "train" looks as follows: ```json { "prompt": "Write a correct answer for the question. You should reason about prototypical situations and provide the correct answer. Avoid answers that provide incomplete justification for the question.\n\nInput: name something people often buy on credit.\n\nOutput:", "completion": " appliances", } ``` #### p3 An example pf "train" looks as follows: ```json { "prompt": "Determine if the following two sentences paraphrase each other or not.\nSent 1: By adding sufficient oxygen to compensate for the metabolic usage , rebreathing the carbon dioxide and removing the gas , most of the volume is conserved .\nSent 2: By adding sufficient oxygen to compensate for the metabolic consumption , removing the carbon dioxide and reinhaling the gas , most of the volume is conserved .\n", "completion": "No", } ``` #### human_eval An example pf "train" looks as follows: ```json { "id": "user_oriented_task_136", "motivation_app": "Goodreads", "instruction": "Choose the best books from the given genre.", "instances": { "input": ["Crime & Mystery"], "output": [ "1- The Girl with the Dragon Tattoo\n2- And Then There Were None\n3- Angels & Demons\n4- Rebecca\n5- In Cold Blood\n6- The Godfather\n7- The Lovely Bones\n8- Gone Girl\n9- The Name of the Rose\n10- Shutter Island" ], }, } ``` ### Data Fields The data fields for each configuration are as follows. #### self_instruct * `prompt`: The instruction provided to the model or human labeler. * `completion`: A completion provided by the model or human labeler. #### super_natural_instructions * `prompt`: The instruction provided to the model or human labeler. * `completion`: A completion provided by the model or human labeler. #### p3 * `prompt`: The instruction provided to the model or human labeler. * `completion`: A completion provided by the model or human labeler. #### human_eval * `id`: The ID associated with the labelling task * `motivation_app`: The application associated with the task * `instruction`: The instruction written by the human labeler. * `instances.input`: The input that forms part of the complete instruction * `instances.output`: The human written demonstration ### Data Splits #### self_instruct | | train | |---------------|------:| | self_instruct | 82612 | #### super_natural_instructions | | train | test | |----------------------------|------:|------:| | super_natural_instructions | 50000 | 11810 | #### p3 | | train | |----|------:| | p3 | 52657 | #### human_eval | | train | |------------|------:| | human_eval | 252 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations The `self_instruct` data is generated by a language model (GPT-3) and inevitably contains some errors or biases. The authors analyzed the data quality on 200 random instructions in our paper, and found that 46% of the data points may have problems. We encourage users to use this data with caution and propose new methods to filter or improve the imperfections. ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @misc{selfinstruct, title={Self-Instruct: Aligning Language Model with Self Generated Instructions}, author={Wang, Yizhong and Kordi, Yeganeh and Mishra, Swaroop and Liu, Alisa and Smith, Noah A. and Khashabi, Daniel and Hajishirzi, Hannaneh}, journal={arXiv preprint arXiv:2212.10560}, year={2022} } ```
UBC-NLP/orca
--- viewer: false language: - ara tags: - Arabic - NLU Benchmark - Natural Language Inference (NLI) - Question Answering (QA) - Semantic Textual Similarity and and Paraphrase (STSP) - Sentence Classification (SC) - Structure Predictions (SP) - Topic Classification (TC) - Word Sense Disambiguation (WSD) task_categories: - text-classification - token-classification - question-answering extra_gated_fields: Name: text Official Email (email of your organization): text Affilation: text Country: text I agree to use this dataset for non-commercial use ONLY: checkbox I agree to cite the ORCA paper and all original papers: checkbox --- <p align="center"> <br> <img src="https://orca.dlnlp.ai/assets/orca_logo.png" width="55%"/> <br> <p> <p align="center"> <!-- <a href="https://github.com/UBC-NLP/orca/releases"> --> <!-- <img alt="GitHub release" src="https://img.shields.io/github/release/UBC-NLP/orca.svg"> </a>--> <a href="https://orca.dlnlp.ai/"> <img alt="Documentation" src="https://img.shields.io/website.svg?down_color=red&down_message=offline&up_message=online&url=https://orca.dlnlp.ai"> </a> <!-- <a href="https://github.com/UBC-NLP/orca/blob/main/LICENSE"><img alt="GitHub license" src="https://img.shields.io/github/license/UBC-NLP/orca?logoColor=blue"></a> --> <!-- <a href='https://orca.readthedocs.io/en/latest/?badge=latest'><img src='https://readthedocs.org/projects/orca/badge/?version=latest' alt='Documentation Status' /></a> --> <!-- <a href="https://github.com/UBC-NLP/orca/stargazers"><img alt="GitHub stars" src="https://img.shields.io/github/stars/UBC-NLP/orca"></a> <!-- <a href="https://github.com/UBC-NLP/orca/network"><img alt="GitHub forks" src="https://img.shields.io/github/forks/UBC-NLP/orca"></a> --> </p> In this work, we introduce [**ORCA**](https://arxiv.org/abs/2212.10758), a publicly available benchmark for Arabic language understanding evaluation. ORCA is carefully constructed to cover diverse Arabic varieties and a wide range of challenging Arabic understanding tasks exploiting 60 different datasets across seven NLU task clusters. To measure current progress in Arabic NLU, we use ORCA to offer a comprehensive comparison between 18 multilingual and Arabic language models. # ORCA Task Cluster We arrange [**ORCA**](https://arxiv.org/abs/2212.10758), into seven NLU task clusters. These are (1) sentence classification, (2) structured prediction (3) semantic textual similarity and paraphrase, (4) text classification, (5) natural language inference, (6) word sense disambiguation, and (7) question answering. ### (1) Natural Language Inference (NLI) |**Task**| **Variation** | **Metric** | **Reference** | |---------|--------|--------|------| |[ANS Stance](https://aclanthology.org/2020.fever-1.2/) |MSA | Macro F1 | [(Khouja, 2020)](https://aclanthology.org/2020.fever-1.2/) | |[Baly Stance](https://aclanthology.org/N18-2004/) |MSA | Macro F1 | [(Balyet al., 2018)](https://aclanthology.org/N18-2004/) | |[XLNI](https://github.com/facebookresearch/XNLI) |MSA | Macro F1 | [(Conneau et al., 2018)](https://github.com/facebookresearch/XNLI)| ### (2) Question Answering (QA) |**Task**| **Variation** | **Metric** | **Reference** | |---------|--------|--------|------| |[Question Answering](https://aclanthology.org/2021.acl-long.551/) |MSA | Macro F1 | [(Abdul-Mageed et al., 2020a)](https://aclanthology.org/2021.acl-long.551/) | ### (3) Semantic Textual Similarity and Paraphrase (STSP) |**Task**| **Variation** | **Metric** | **Reference** | |---------|--------|--------|-------| |[Emotion Regression](https://aclanthology.org/S18-1001/) |MSA | Spearman Correlation| [(Saif et al., 2018)](https://aclanthology.org/S18-1001/) | |[MQ2Q](https://aclanthology.org/2019.nsurl-1.1) |MSA | Macro F1 | [(Seelawi al., 2019)](https://aclanthology.org/2019.nsurl-1.1) | |[STS](https://aclanthology.org/S17-2001/) |MSA | Macro F1 | [(Cer et al., 2017)](https://aclanthology.org/S17-2001/) | ### (4) Sentence Classification (SC) |**Task**| **Variation** | **Metric** | **Reference** | |---------|--------|--------|-------| |[Abusive](https://aclanthology.org/W19-3512/) |DA | Macro F1 | [(Mulki et al., 2019)](https://aclanthology.org/W19-3512/) | |[Adult](https://aclanthology.org/2021.wanlp-1.14) |DA | Macro F1 | [(Mubarak et al., 2021)](https://aclanthology.org/2021.wanlp-1.14) | |[Age](https://www.aclweb.org/anthology/2020.osact-1.3) |DA | Macro F1 | [(Abdul-Mageed et al., 2020b)]( https://aclanthology.org/2020.osact-1.3/) | |[ANS Claim](https://aclanthology.org/2020.fever-1.2/) |MSA | Macro F1 | [(Khouja, 2020)](https://aclanthology.org/2020.fever-1.2/) | |[Dangerous ](https://aclanthology.org/N18-2004/) |DA | Macro F1 | [(Alshehri et al., 2020)](https://www.aclweb.org/anthology/2020.osact-1.6)| |[Dialect Binary](https://github.com/facebookresearch/XNLI) |DA | Macro F1 | [(Farha, 2020)](https://aclanthology.org/2020.osact-1.5/), [(Zaidan, 2014)](https://www.aclweb.org/anthology/J14-1006), [(Abdul-Mageed et al., 2020c)](https://aclanthology.org/2021.acl-long.551/), [(Bouamor et al., 2019)](https://www.aclweb.org/anthology/W19-4622), [(Abdelaliet al., 2020)](https://aclanthology.org/2021.wanlp-1.1), [(El-Haj, 2020)](https://aclanthology.org/2020.lrec-1.165/). | |[Dialect Country](https://github.com/facebookresearch/XNLI) |DA | Macro F1 | [(Farha, 2020)](https://aclanthology.org/2020.osact-1.5/), [(Zaidan, 2014)](https://www.aclweb.org/anthology/J14-1006), [(Abdul-Mageed et al., 2020c)](https://aclanthology.org/2021.acl-long.551/), [(Bouamor et al., 2019)](https://www.aclweb.org/anthology/W19-4622), [(Abdelaliet al., 2020)](https://aclanthology.org/2021.wanlp-1.1), [(El-Haj, 2020)](https://aclanthology.org/2020.lrec-1.165/). | |[Dialect Region](https://github.com/facebookresearch/XNLI) |DA | Macro F1 | [(Farha, 2020)](https://aclanthology.org/2020.osact-1.5/), [(Zaidan, 2014)](https://www.aclweb.org/anthology/J14-1006), [(Abdul-Mageed et al., 2020c)](https://aclanthology.org/2021.acl-long.551/), [(Bouamor et al., 2019)](https://www.aclweb.org/anthology/W19-4622), [(Abdelaliet al., 2020)](https://aclanthology.org/2021.wanlp-1.1), [(El-Haj, 2020)](https://aclanthology.org/2020.lrec-1.165/). | |[Emotion](https://www.aclweb.org/anthology/2020.osact-1.3) |DA | Macro F1 | [(Abdul-Mageed et al., 2020b)]( https://aclanthology.org/2020.osact-1.3/) | |[Gender](https://www.aclweb.org/anthology/2020.osact-1.3) |DA | Macro F1 | [(Abdul-Mageed et al., 2020b)]( https://aclanthology.org/2020.osact-1.3/) | |[Hate Speech](https://www.aclweb.org/anthology/2020.osact-1.7) |DA | Macro F1 | [(Mubarak et al., 2020)](https://www.aclweb.org/anthology/2020.osact-1.7)| |[Irony](https://dl.acm.org/doi/10.1145/3368567.3368585) |DA | Macro F1 | [(Ghanem al., 2019)](https://dl.acm.org/doi/10.1145/3368567.3368585) | |[Machine Generation](https://aclanthology.org/2020.wanlp-1.7/) |MSA | Macro F1 | [(Nagoudi et al., 2020)](https://aclanthology.org/2020.wanlp-1.7/) | |[Offensive](https://aclanthology.org/2020.osact-1.8/) |DA | Macro F1 | [(Mubarak et al., 2020)](https://www.aclweb.org/anthology/2020.osact-1.7)| |[Sarcasm](https://aclanthology.org/N18-2004/) |DA | Macro F1 | [(Farha and Magdy, 2020)](https://aclanthology.org/2020.osact-1.5/) | |[Sentiment Analysis](https://aclanthology.org/2021.acl-long.551/) |DA | Macro F1 | [(Abdul-Mageed et al., 2020c)](https://aclanthology.org/2021.acl-long.551/) | ### (5) Structure Predictions (SP) |**Task**| **Variation** | **Metric** | **Reference** | |---------|--------|--------|-------| |[Aqmar NER](https://www.cs.cmu.edu/~ark/ArabicNER/) |MSA | Macro F1 | [(Mohit, 2012)](https://www.cs.cmu.edu/~ark/ArabicNER/) | |[Arabic NER Corpus](http://www.dsic.upv.es/~prosso/resources/BenajibaRosso_IICAI07.pdf) |MSA | Macro F1 | [(Benajiba and Rosso, 2007)](http://www.dsic.upv.es/~prosso/resources/BenajibaRosso_IICAI07.pdf) | |[Dialect Part Of Speech](https://aclanthology.org/L18-1015.pdf) |DA | Macro F1 | [(Darwish et al., 2018)](https://aclanthology.org/L18-1015.pdf) | |[MSA Part Of Speech](https://arxiv.org/abs/2004.01401) |MSA | Macro F1 | [(Liang et al., 2020)](https://arxiv.org/abs/2004.01401) | ### (6) Topic Classification (TC) |**Task**| **Variation** | **Metric** | **Reference** | |---------|--------|--------|-------| |[Topic](https://aclanthology.org/2021.acl-long.551/) |MSA | Macro F1 | [(Abbas et al.,2011)](https://www.dline.info/fpaper/jdim/v9i5/1.pdf), [(Chouigui et al.,2017)](https://www.researchgate.net/publication/320871871_Poster_ANT_Corpus_An_Arabic_News_Text_Collection_for_Textual_Classification), [(Saad, 2010)](http://site.iugaza.edu.ps/wp-content/uploads/mksaad-OSAC-OpenSourceArabicCorpora-EECS10-rev9(1).pdf). | ### (7) Word Sense Disambiguation (WSD) |**Task**| **Variation** | **Metric** | **Reference** | |---------|--------|--------|-------| |[Word Sense Disambiguation](https://www.mdpi.com/2076-3417/11/6/2567) |MSA | Macro F1 | [(El-Razzaz, 2021)](https://www.mdpi.com/2076-3417/11/6/2567) | # How to Use ORCA ### Request Access ### To obtain access to the ORCA benchmark on Huggingface, follow the following steps: - Login on your Haggingface account <img src="https://raw.githubusercontent.com/UBC-NLP/orca/main/orca_request1.png" width="70%"/> - Request access <img src="https://raw.githubusercontent.com/UBC-NLP/orca/main/orca_request2.png" width="70%"/> ### Install Requirments ```shell pip install datasets transformers seqeval ``` ### Login with your Huggingface CLI ### You can get/manage your access tokens in your [settings](https://huggingface.co/docs/hub/security-tokens). ```shell export HUGGINGFACE_TOKEN="" huggingface-cli login --token $HUGGINGFACE_TOKEN ``` ### Fine-tuning a model on ORCA tasks We provide a Google Colab Notebook that includes instructions for fine-tuning any model on ORCA tasks. <a href="https://colab.research.google.com/github/UBC-NLP/orca/blob/main/Finetuning_ORCA.ipynb"><img alt="colab" src="https://colab.research.google.com/assets/colab-badge.svg"> ### Submitting your results on ORCA test We design a public leaderboard for scoring PLMs on ORCA. Our leaderboard is interactive and offers rich meta-data about the various datasets involved as well as the language models we evaluate. You can evalute your models using **ORCA** leaderboard: **[https://orca.dlnlp.ai](https://orca.dlnlp.ai/)** --- ## Citation If you use ORCA for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows: ``` @inproceedings{elmadany-etal-2023-orca, title = "{ORCA}: A Challenging Benchmark for {A}rabic Language Understanding", author = "Elmadany, AbdelRahim and Nagoudi, ElMoatez Billah and Abdul-Mageed, Muhammad", booktitle = "Findings of the Association for Computational Linguistics: ACL 2023", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-acl.609", pages = "9559--9586", } ``` --- ## Acknowledgments We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, Canadian Foundation for Innovation, [ComputeCanada](www.computecanada.ca) and [UBC ARC-Sockeye](https://doi.org/10.14288/SOCKEYE). We also thank the [Google TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc) program for providing us with free TPU access.
embedding-data/sentence-compression
--- license: mit language: - en paperswithcode_id: embedding-data/sentence-compression pretty_name: sentence-compression task_categories: - sentence-similarity - paraphrase-mining task_ids: - semantic-similarity-classification --- # Dataset Card for "sentence-compression" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/google-research-datasets/sentence-compression](https://github.com/google-research-datasets/sentence-compression) - **Repository:** [More Information Needed](https://github.com/google-research-datasets/sentence-compression) - **Paper:** [More Information Needed](https://www.aclweb.org/anthology/D13-1155/) - **Point of Contact:** [Katja Filippova](altun@google.com) - **Size of downloaded dataset files:** - **Size of the generated dataset:** - **Total amount of disk used:** 14.2 MB ### Dataset Summary Dataset with pairs of equivalent sentences. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from using the dataset. Disclaimer: The team releasing sentence-compression did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team. ### Supported Tasks - [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity. ### Languages - English. ## Dataset Structure Each example in the dataset contains pairs of equivalent sentences and is formatted as a dictionary with the key "set" and a list with the sentences as "value". ``` {"set": [sentence_1, sentence_2]} {"set": [sentence_1, sentence_2]} ... {"set": [sentence_1, sentence_2]} ``` This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar pairs of sentences. ### Usage Example Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with: ```python from datasets import load_dataset dataset = load_dataset("embedding-data/sentence-compression") ``` The dataset is loaded as a `DatasetDict` and has the format: ```python DatasetDict({ train: Dataset({ features: ['set'], num_rows: 180000 }) }) ``` Review an example `i` with: ```python dataset["train"][i]["set"] ``` ### Curation Rationale [More Information Needed](https://github.com/google-research-datasets/sentence-compression) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/google-research-datasets/sentence-compression) #### Who are the source language producers? [More Information Needed](https://github.com/google-research-datasets/sentence-compression) ### Annotations #### Annotation process [More Information Needed](https://github.com/google-research-datasets/sentence-compression) #### Who are the annotators? [More Information Needed](https://github.com/google-research-datasets/sentence-compression) ### Personal and Sensitive Information [More Information Needed](https://github.com/google-research-datasets/sentence-compression) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/google-research-datasets/sentence-compression) ### Discussion of Biases [More Information Needed](https://github.com/google-research-datasets/sentence-compression) ### Other Known Limitations [More Information Needed](https://github.com/google-research-datasets/sentence-compression) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/google-research-datasets/sentence-compression) ### Licensing Information [More Information Needed](https://github.com/google-research-datasets/sentence-compression) ### Contributions
ethos
--- annotations_creators: - crowdsourced - expert-generated language_creators: - found - other language: - en license: - agpl-3.0 multilinguality: - monolingual size_categories: - n<1K source_datasets: - original task_categories: - text-classification task_ids: - multi-label-classification - sentiment-classification paperswithcode_id: ethos pretty_name: onlinE haTe speecH detectiOn dataSet tags: - Hate Speech Detection dataset_info: - config_name: binary features: - name: text dtype: string - name: label dtype: class_label: names: '0': no_hate_speech '1': hate_speech splits: - name: train num_bytes: 124823 num_examples: 998 download_size: 123919 dataset_size: 124823 - config_name: multilabel features: - name: text dtype: string - name: violence dtype: class_label: names: '0': not_violent '1': violent - name: directed_vs_generalized dtype: class_label: names: '0': generalied '1': directed - name: gender dtype: class_label: names: '0': 'false' '1': 'true' - name: race dtype: class_label: names: '0': 'false' '1': 'true' - name: national_origin dtype: class_label: names: '0': 'false' '1': 'true' - name: disability dtype: class_label: names: '0': 'false' '1': 'true' - name: religion dtype: class_label: names: '0': 'false' '1': 'true' - name: sexual_orientation dtype: class_label: names: '0': 'false' '1': 'true' splits: - name: train num_bytes: 79112 num_examples: 433 download_size: 62836 dataset_size: 79112 config_names: - binary - multilabel --- # Dataset Card for Ethos ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [ETHOS Hate Speech Dataset](https://github.com/intelligence-csd-auth-gr/Ethos-Hate-Speech-Dataset) - **Repository:**[ETHOS Hate Speech Dataset](https://github.com/intelligence-csd-auth-gr/Ethos-Hate-Speech-Dataset) - **Paper:**[ETHOS: an Online Hate Speech Detection Dataset](https://arxiv.org/abs/2006.08328) ### Dataset Summary ETHOS: onlinE haTe speecH detectiOn dataSet. This repository contains a dataset for hate speech detection on social media platforms, called Ethos. There are two variations of the dataset: - **Ethos_Dataset_Binary**: contains 998 comments in the dataset alongside with a label about hate speech *presence* or *absence*. 565 of them do not contain hate speech, while the rest of them, 433, contain. - **Ethos_Dataset_Multi_Label** which contains 8 labels for the 433 comments with hate speech content. These labels are *violence* (if it incites (1) or not (0) violence), *directed_vs_general* (if it is directed to a person (1) or a group (0)), and 6 labels about the category of hate speech like, *gender*, *race*, *national_origin*, *disability*, *religion* and *sexual_orientation*. ***Ethos /ˈiːθɒs/*** is a Greek word meaning “character” that is used to describe the guiding beliefs or ideals that characterize a community, nation, or ideology. The Greeks also used this word to refer to the power of music to influence emotions, behaviors, and even morals. ### Supported Tasks and Leaderboards [More Information Needed] - `text-classification-other-Hate Speech Detection`, `sentiment-classification`,`multi-label-classification`: The dataset can be used to train a model for hate speech detection. Moreover, it can be used as a benchmark dataset for multi label classification algorithms. ### Languages The text in the dataset is in English. ## Dataset Structure ### Data Instances A typical data point in the binary version comprises a comment, with a `text` containing the text and a `label` describing if a comment contains hate speech content (1 - hate-speech) or not (0 - non-hate-speech). In the multilabel version more labels like *violence* (if it incites (1) or not (0) violence), *directed_vs_general* (if it is directed to a person (1) or a group (0)), and 6 labels about the category of hate speech like, *gender*, *race*, *national_origin*, *disability*, *religion* and *sexual_orientation* are appearing. An example from the binary version, which is offensive, but it does not contain hate speech content: ``` {'text': 'What the fuck stupid people !!!', 'label': '0' } ``` An example from the multi-label version, which contains hate speech content towards women (gender): ``` {'text': 'You should know women's sports are a joke', `violence`: 0, `directed_vs_generalized`: 0, `gender`: 1, `race`: 0, `national_origin`: 0, `disability`: 0, `religion`: 0, `sexual_orientation`: 0 } ``` ### Data Fields Ethos Binary: - `text`: a `string` feature containing the text of the comment. - `label`: a classification label, with possible values including `no_hate_speech`, `hate_speech`. Ethis Multilabel: - `text`: a `string` feature containing the text of the comment. - `violence`: a classification label, with possible values including `not_violent`, `violent`. - `directed_vs_generalized`: a classification label, with possible values including `generalized`, `directed`. - `gender`: a classification label, with possible values including `false`, `true`. - `race`: a classification label, with possible values including `false`, `true`. - `national_origin`: a classification label, with possible values including `false`, `true`. - `disability`: a classification label, with possible values including `false`, `true`. - `religion`: a classification label, with possible values including `false`, `true`. - `sexual_orientation`: a classification label, with possible values including `false`, `true`. ### Data Splits The data is split into binary and multilabel. Multilabel is a subset of the binary version. | | Instances | Labels | | ----- | ------ | ----- | | binary | 998 | 1 | | multilabel | 433 | 8 | ## Dataset Creation ### Curation Rationale The dataset was build by gathering online comments in Youtube videos and reddit comments, from videos and subreddits which may attract hate speech content. ### Source Data #### Initial Data Collection and Normalization The initial data we used are from the hatebusters platform: [Original data used](https://intelligence.csd.auth.gr/topics/hate-speech-detection/), but they were not included in this dataset #### Who are the source language producers? The language producers are users of reddit and Youtube. More informations can be found in this paper: [ETHOS: an Online Hate Speech Detection Dataset](https://arxiv.org/abs/2006.08328) ### Annotations #### Annotation process The annotation process is detailed in the third section of this paper: [ETHOS: an Online Hate Speech Detection Dataset](https://arxiv.org/abs/2006.08328) #### Who are the annotators? Originally anotated by Ioannis Mollas and validated through the Figure8 platform (APEN). ### Personal and Sensitive Information No personal and sensitive information included in the dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset will help on the evolution of the automated hate speech detection tools. Those tools have great impact on preventing social issues. ### Discussion of Biases This dataset tries to be unbiased towards its classes and labels. ### Other Known Limitations The dataset is relatively small and should be used combined with larger datasets. ## Additional Information ### Dataset Curators The dataset was initially created by [Intelligent Systems Lab](https://intelligence.csd.auth.gr). ### Licensing Information The licensing status of the datasets is [GNU GPLv3](https://choosealicense.com/licenses/gpl-3.0/). ### Citation Information ``` @misc{mollas2020ethos, title={ETHOS: an Online Hate Speech Detection Dataset}, author={Ioannis Mollas and Zoe Chrysopoulou and Stamatis Karlos and Grigorios Tsoumakas}, year={2020}, eprint={2006.08328}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@iamollas](https://github.com/iamollas) for adding this dataset.
tasksource/oasst1_pairwise_rlhf_reward
--- dataset_info: features: - name: lang dtype: string - name: parent_id dtype: string - name: prompt dtype: string - name: chosen dtype: string - name: rejected dtype: string splits: - name: train num_bytes: 40736437 num_examples: 17966 - name: validation num_bytes: 2152443 num_examples: 952 download_size: 22371458 dataset_size: 42888880 language: - en - es - ru - de - pl - th - vi - sv - bn - da - he - it - fa - sk - id - nb - el - nl - hu - eu - zh - eo - ja - ca - cs - bg - fi - pt - tr - ro - ar - uk - gl - fr - ko --- # Dataset Card for "oasst1_pairwise_rlhf_reward" [OASST1 dataset](https://huggingface.co/datasets/OpenAssistant/oasst1) preprocessed for reward modeling: ```python import pandas as pd from datasets import load_dataset,concatenate_datasets, Dataset, DatasetDict import numpy as np dataset = load_dataset("OpenAssistant/oasst1") df=concatenate_datasets(list(dataset.values())).to_pandas() m2t=df.set_index("message_id")['text'].to_dict() m2r=df.set_index("message_id")['role'].to_dict() m2p=df.set_index('message_id')['parent_id'].to_dict() m2history=dict() # message id to unrolled history for k,v in m2p.items(): history=[k] while history[-1] in m2p: history+=[m2p[history[-1]]] m2history[k]="\n".join([f"{m2r[m]}: {m2t[m]}" for m in history[::-1] if m]) d=dict() for split in "train","validation": df=dataset[split].to_pandas() df['prompt']=df.parent_id.map(lambda x: m2history.get(x,'')) df=df[~df['rank'].isna()] def agg(x): x=list(x) return [x[0],x[-1]] df=df.groupby(['prompt',"parent_id",'lang'])[['text','rank']].agg(agg).reset_index() df=df[df['rank'].map(lambda x:len(set(x))>1)] df['chosen'] = df.apply(lambda x:x['text'][np.argmin(x['rank'])],axis=1) df['rejected'] = df.apply(lambda x:x['text'][np.argmax(x['rank'])],axis=1) d[split]=Dataset.from_pandas(df[['lang','parent_id','prompt','chosen','rejected']],preserve_index=False) DatasetDict(d).push_to_hub('tasksource/oasst1_pairwise_rlhf_reward') ```
range3/wiki40b-ja
--- task_categories: - text-generation - fill-mask language: - ja --- # range3/wiki40b-ja This dataset consists of three parquet files from the wiki40b dataset with only Japanese data extracted. It is generated by the following python code. このデータセットは、wiki40bデータセットの日本語データのみを抽出した3つのparquetファイルで構成されます。以下のpythonコードによって生成しています。 ```py import datasets dss = datasets.load_dataset( "wiki40b", "ja", beam_runner="DirectRunner", ) for split,ds in dss.items(): ds.to_parquet(f"wikipedia-ja-20230101/{split}.parquet") ```
mlsum
--- annotations_creators: - found language_creators: - found language: - de - es - fr - ru - tr license: - other multilinguality: - multilingual size_categories: - 100K<n<1M - 10K<n<100K source_datasets: - extended|cnn_dailymail - original task_categories: - summarization - translation - text-classification task_ids: - news-articles-summarization - multi-class-classification - multi-label-classification - topic-classification paperswithcode_id: mlsum pretty_name: MLSUM dataset_info: - config_name: de features: - name: text dtype: string - name: summary dtype: string - name: topic dtype: string - name: url dtype: string - name: title dtype: string - name: date dtype: string splits: - name: train num_bytes: 846959840 num_examples: 220887 - name: validation num_bytes: 47119541 num_examples: 11394 - name: test num_bytes: 46847612 num_examples: 10701 download_size: 1005814154 dataset_size: 940926993 - config_name: es features: - name: text dtype: string - name: summary dtype: string - name: topic dtype: string - name: url dtype: string - name: title dtype: string - name: date dtype: string splits: - name: train num_bytes: 1214558302 num_examples: 266367 - name: validation num_bytes: 50643400 num_examples: 10358 - name: test num_bytes: 71263665 num_examples: 13920 download_size: 1456211154 dataset_size: 1336465367 - config_name: fr features: - name: text dtype: string - name: summary dtype: string - name: topic dtype: string - name: url dtype: string - name: title dtype: string - name: date dtype: string splits: - name: train num_bytes: 1471965014 num_examples: 392902 - name: validation num_bytes: 70413212 num_examples: 16059 - name: test num_bytes: 69660288 num_examples: 15828 download_size: 1849565564 dataset_size: 1612038514 - config_name: ru features: - name: text dtype: string - name: summary dtype: string - name: topic dtype: string - name: url dtype: string - name: title dtype: string - name: date dtype: string splits: - name: train num_bytes: 257389497 num_examples: 25556 - name: validation num_bytes: 9128497 num_examples: 750 - name: test num_bytes: 9656398 num_examples: 757 download_size: 766226107 dataset_size: 276174392 - config_name: tu features: - name: text dtype: string - name: summary dtype: string - name: topic dtype: string - name: url dtype: string - name: title dtype: string - name: date dtype: string splits: - name: train num_bytes: 641622783 num_examples: 249277 - name: validation num_bytes: 25530661 num_examples: 11565 - name: test num_bytes: 27830212 num_examples: 12775 download_size: 942308960 dataset_size: 694983656 config_names: - de - es - fr - ru - tu --- # Dataset Card for MLSUM ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** []() - **Repository:** https://github.com/recitalAI/MLSUM - **Paper:** https://www.aclweb.org/anthology/2020.emnlp-main.647/ - **Point of Contact:** [email](thomas@recital.ai) - **Size of downloaded dataset files:** 1.83 GB - **Size of the generated dataset:** 4.86 GB - **Total amount of disk used:** 6.69 GB ### Dataset Summary We present MLSUM, the first large-scale MultiLingual SUMmarization dataset. Obtained from online newspapers, it contains 1.5M+ article/summary pairs in five different languages -- namely, French, German, Spanish, Russian, Turkish. Together with English newspapers from the popular CNN/Daily mail dataset, the collected data form a large scale multilingual dataset which can enable new research directions for the text summarization community. We report cross-lingual comparative analyses based on state-of-the-art systems. These highlight existing biases which motivate the use of a multi-lingual dataset. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### de - **Size of downloaded dataset files:** 346.58 MB - **Size of the generated dataset:** 940.93 MB - **Total amount of disk used:** 1.29 GB An example of 'validation' looks as follows. ``` { "date": "01/01/2001", "summary": "A text", "text": "This is a text", "title": "A sample", "topic": "football", "url": "https://www.google.com" } ``` #### es - **Size of downloaded dataset files:** 513.31 MB - **Size of the generated dataset:** 1.34 GB - **Total amount of disk used:** 1.85 GB An example of 'validation' looks as follows. ``` { "date": "01/01/2001", "summary": "A text", "text": "This is a text", "title": "A sample", "topic": "football", "url": "https://www.google.com" } ``` #### fr - **Size of downloaded dataset files:** 619.99 MB - **Size of the generated dataset:** 1.61 GB - **Total amount of disk used:** 2.23 GB An example of 'validation' looks as follows. ``` { "date": "01/01/2001", "summary": "A text", "text": "This is a text", "title": "A sample", "topic": "football", "url": "https://www.google.com" } ``` #### ru - **Size of downloaded dataset files:** 106.22 MB - **Size of the generated dataset:** 276.17 MB - **Total amount of disk used:** 382.39 MB An example of 'train' looks as follows. ``` { "date": "01/01/2001", "summary": "A text", "text": "This is a text", "title": "A sample", "topic": "football", "url": "https://www.google.com" } ``` #### tu - **Size of downloaded dataset files:** 247.50 MB - **Size of the generated dataset:** 694.99 MB - **Total amount of disk used:** 942.48 MB An example of 'train' looks as follows. ``` { "date": "01/01/2001", "summary": "A text", "text": "This is a text", "title": "A sample", "topic": "football", "url": "https://www.google.com" } ``` ### Data Fields The data fields are the same among all splits. #### de - `text`: a `string` feature. - `summary`: a `string` feature. - `topic`: a `string` feature. - `url`: a `string` feature. - `title`: a `string` feature. - `date`: a `string` feature. #### es - `text`: a `string` feature. - `summary`: a `string` feature. - `topic`: a `string` feature. - `url`: a `string` feature. - `title`: a `string` feature. - `date`: a `string` feature. #### fr - `text`: a `string` feature. - `summary`: a `string` feature. - `topic`: a `string` feature. - `url`: a `string` feature. - `title`: a `string` feature. - `date`: a `string` feature. #### ru - `text`: a `string` feature. - `summary`: a `string` feature. - `topic`: a `string` feature. - `url`: a `string` feature. - `title`: a `string` feature. - `date`: a `string` feature. #### tu - `text`: a `string` feature. - `summary`: a `string` feature. - `topic`: a `string` feature. - `url`: a `string` feature. - `title`: a `string` feature. - `date`: a `string` feature. ### Data Splits |name|train |validation|test | |----|-----:|---------:|----:| |de |220887| 11394|10701| |es |266367| 10358|13920| |fr |392902| 16059|15828| |ru | 25556| 750| 757| |tu |249277| 11565|12775| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information Usage of dataset is restricted to non-commercial research purposes only. Copyright belongs to the original copyright holders. See https://github.com/recitalAI/MLSUM#mlsum ### Citation Information ``` @article{scialom2020mlsum, title={MLSUM: The Multilingual Summarization Corpus}, author={Scialom, Thomas and Dray, Paul-Alexis and Lamprier, Sylvain and Piwowarski, Benjamin and Staiano, Jacopo}, journal={arXiv preprint arXiv:2004.14900}, year={2020} } ``` ### Contributions Thanks to [@RachelKer](https://github.com/RachelKer), [@albertvillanova](https://github.com/albertvillanova), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
Birchlabs/openai-prm800k-stepwise-critic
--- license: mit ---
kilt_tasks
--- annotations_creators: - crowdsourced - found - machine-generated language_creators: - crowdsourced - found language: - en license: - mit multilinguality: - monolingual size_categories: - 100K<n<1M - 10K<n<100K - 1K<n<10K - 1M<n<10M source_datasets: - extended|natural_questions - extended|other-aidayago - extended|other-fever - extended|other-hotpotqa - extended|other-trex - extended|other-triviaqa - extended|other-wizardsofwikipedia - extended|other-wned-cweb - extended|other-wned-wiki - extended|other-zero-shot-re - original task_categories: - fill-mask - question-answering - text-classification - text-generation - text-retrieval - text2text-generation task_ids: - abstractive-qa - dialogue-modeling - document-retrieval - entity-linking-retrieval - extractive-qa - fact-checking - fact-checking-retrieval - open-domain-abstractive-qa - open-domain-qa - slot-filling paperswithcode_id: kilt pretty_name: KILT config_names: - aidayago2 - cweb - eli5 - fever - hotpotqa - nq - structured_zeroshot - trex - triviaqa_support_only - wned - wow dataset_info: - config_name: aidayago2 features: - name: id dtype: string - name: input dtype: string - name: meta struct: - name: left_context dtype: string - name: mention dtype: string - name: right_context dtype: string - name: partial_evidence list: - name: start_paragraph_id dtype: int32 - name: end_paragraph_id dtype: int32 - name: title dtype: string - name: section dtype: string - name: wikipedia_id dtype: string - name: meta struct: - name: evidence_span list: string - name: obj_surface list: string - name: sub_surface list: string - name: subj_aliases list: string - name: template_questions list: string - name: output list: - name: answer dtype: string - name: meta struct: - name: score dtype: int32 - name: provenance list: - name: bleu_score dtype: float32 - name: start_character dtype: int32 - name: start_paragraph_id dtype: int32 - name: end_character dtype: int32 - name: end_paragraph_id dtype: int32 - name: meta struct: - name: fever_page_id dtype: string - name: fever_sentence_id dtype: int32 - name: annotation_id dtype: string - name: yes_no_answer dtype: string - name: evidence_span list: string - name: section dtype: string - name: title dtype: string - name: wikipedia_id dtype: string splits: - name: train num_bytes: 68943890 num_examples: 18395 - name: validation num_bytes: 20743172 num_examples: 4784 - name: test num_bytes: 14210587 num_examples: 4463 download_size: 13419920 dataset_size: 103897649 - config_name: cweb features: - name: id dtype: string - name: input dtype: string - name: meta struct: - name: left_context dtype: string - name: mention dtype: string - name: right_context dtype: string - name: partial_evidence list: - name: start_paragraph_id dtype: int32 - name: end_paragraph_id dtype: int32 - name: title dtype: string - name: section dtype: string - name: wikipedia_id dtype: string - name: meta struct: - name: evidence_span list: string - name: obj_surface list: string - name: sub_surface list: string - name: subj_aliases list: string - name: template_questions list: string - name: output list: - name: answer dtype: string - name: meta struct: - name: score dtype: int32 - name: provenance list: - name: bleu_score dtype: float32 - name: start_character dtype: int32 - name: start_paragraph_id dtype: int32 - name: end_character dtype: int32 - name: end_paragraph_id dtype: int32 - name: meta struct: - name: fever_page_id dtype: string - name: fever_sentence_id dtype: int32 - name: annotation_id dtype: string - name: yes_no_answer dtype: string - name: evidence_span list: string - name: section dtype: string - name: title dtype: string - name: wikipedia_id dtype: string splits: - name: validation num_bytes: 89819252 num_examples: 5599 - name: test num_bytes: 99208393 num_examples: 5543 download_size: 32809813 dataset_size: 189027645 - config_name: eli5 features: - name: id dtype: string - name: input dtype: string - name: meta struct: - name: left_context dtype: string - name: mention dtype: string - name: right_context dtype: string - name: partial_evidence list: - name: start_paragraph_id dtype: int32 - name: end_paragraph_id dtype: int32 - name: title dtype: string - name: section dtype: string - name: wikipedia_id dtype: string - name: meta struct: - name: evidence_span list: string - name: obj_surface list: string - name: sub_surface list: string - name: subj_aliases list: string - name: template_questions list: string - name: output list: - name: answer dtype: string - name: meta struct: - name: score dtype: int32 - name: provenance list: - name: bleu_score dtype: float32 - name: start_character dtype: int32 - name: start_paragraph_id dtype: int32 - name: end_character dtype: int32 - name: end_paragraph_id dtype: int32 - name: meta struct: - name: fever_page_id dtype: string - name: fever_sentence_id dtype: int32 - name: annotation_id dtype: string - name: yes_no_answer dtype: string - name: evidence_span list: string - name: section dtype: string - name: title dtype: string - name: wikipedia_id dtype: string splits: - name: train num_bytes: 525554458 num_examples: 272634 - name: validation num_bytes: 13860033 num_examples: 1507 - name: test num_bytes: 107092 num_examples: 600 download_size: 329302944 dataset_size: 539521583 - config_name: fever features: - name: id dtype: string - name: input dtype: string - name: meta struct: - name: left_context dtype: string - name: mention dtype: string - name: right_context dtype: string - name: partial_evidence list: - name: start_paragraph_id dtype: int32 - name: end_paragraph_id dtype: int32 - name: title dtype: string - name: section dtype: string - name: wikipedia_id dtype: string - name: meta struct: - name: evidence_span list: string - name: obj_surface list: string - name: sub_surface list: string - name: subj_aliases list: string - name: template_questions list: string - name: output list: - name: answer dtype: string - name: meta struct: - name: score dtype: int32 - name: provenance list: - name: bleu_score dtype: float32 - name: start_character dtype: int32 - name: start_paragraph_id dtype: int32 - name: end_character dtype: int32 - name: end_paragraph_id dtype: int32 - name: meta struct: - name: fever_page_id dtype: string - name: fever_sentence_id dtype: int32 - name: annotation_id dtype: string - name: yes_no_answer dtype: string - name: evidence_span list: string - name: section dtype: string - name: title dtype: string - name: wikipedia_id dtype: string splits: - name: train num_bytes: 23937486 num_examples: 104966 - name: validation num_bytes: 3167751 num_examples: 10444 - name: test num_bytes: 1040116 num_examples: 10100 download_size: 11571038 dataset_size: 28145353 - config_name: hotpotqa features: - name: id dtype: string - name: input dtype: string - name: meta struct: - name: left_context dtype: string - name: mention dtype: string - name: right_context dtype: string - name: partial_evidence list: - name: start_paragraph_id dtype: int32 - name: end_paragraph_id dtype: int32 - name: title dtype: string - name: section dtype: string - name: wikipedia_id dtype: string - name: meta struct: - name: evidence_span list: string - name: obj_surface list: string - name: sub_surface list: string - name: subj_aliases list: string - name: template_questions list: string - name: output list: - name: answer dtype: string - name: meta struct: - name: score dtype: int32 - name: provenance list: - name: bleu_score dtype: float32 - name: start_character dtype: int32 - name: start_paragraph_id dtype: int32 - name: end_character dtype: int32 - name: end_paragraph_id dtype: int32 - name: meta struct: - name: fever_page_id dtype: string - name: fever_sentence_id dtype: int32 - name: annotation_id dtype: string - name: yes_no_answer dtype: string - name: evidence_span list: string - name: section dtype: string - name: title dtype: string - name: wikipedia_id dtype: string splits: - name: train num_bytes: 33595295 num_examples: 88869 - name: validation num_bytes: 2371262 num_examples: 5600 - name: test num_bytes: 887204 num_examples: 5569 download_size: 17914796 dataset_size: 36853761 - config_name: nq features: - name: id dtype: string - name: input dtype: string - name: meta struct: - name: left_context dtype: string - name: mention dtype: string - name: right_context dtype: string - name: partial_evidence list: - name: start_paragraph_id dtype: int32 - name: end_paragraph_id dtype: int32 - name: title dtype: string - name: section dtype: string - name: wikipedia_id dtype: string - name: meta struct: - name: evidence_span list: string - name: obj_surface list: string - name: sub_surface list: string - name: subj_aliases list: string - name: template_questions list: string - name: output list: - name: answer dtype: string - name: meta struct: - name: score dtype: int32 - name: provenance list: - name: bleu_score dtype: float32 - name: start_character dtype: int32 - name: start_paragraph_id dtype: int32 - name: end_character dtype: int32 - name: end_paragraph_id dtype: int32 - name: meta struct: - name: fever_page_id dtype: string - name: fever_sentence_id dtype: int32 - name: annotation_id dtype: string - name: yes_no_answer dtype: string - name: evidence_span list: string - name: section dtype: string - name: title dtype: string - name: wikipedia_id dtype: string splits: - name: train num_bytes: 30385368 num_examples: 87372 - name: validation num_bytes: 6190373 num_examples: 2837 - name: test num_bytes: 333162 num_examples: 1444 download_size: 16535475 dataset_size: 36908903 - config_name: structured_zeroshot features: - name: id dtype: string - name: input dtype: string - name: meta struct: - name: left_context dtype: string - name: mention dtype: string - name: right_context dtype: string - name: partial_evidence list: - name: start_paragraph_id dtype: int32 - name: end_paragraph_id dtype: int32 - name: title dtype: string - name: section dtype: string - name: wikipedia_id dtype: string - name: meta struct: - name: evidence_span list: string - name: obj_surface list: string - name: sub_surface list: string - name: subj_aliases list: string - name: template_questions list: string - name: output list: - name: answer dtype: string - name: meta struct: - name: score dtype: int32 - name: provenance list: - name: bleu_score dtype: float32 - name: start_character dtype: int32 - name: start_paragraph_id dtype: int32 - name: end_character dtype: int32 - name: end_paragraph_id dtype: int32 - name: meta struct: - name: fever_page_id dtype: string - name: fever_sentence_id dtype: int32 - name: annotation_id dtype: string - name: yes_no_answer dtype: string - name: evidence_span list: string - name: section dtype: string - name: title dtype: string - name: wikipedia_id dtype: string splits: - name: train num_bytes: 47165561 num_examples: 147909 - name: validation num_bytes: 1612123 num_examples: 3724 - name: test num_bytes: 1140265 num_examples: 4966 download_size: 21038900 dataset_size: 49917949 - config_name: trex features: - name: id dtype: string - name: input dtype: string - name: meta struct: - name: left_context dtype: string - name: mention dtype: string - name: right_context dtype: string - name: partial_evidence list: - name: start_paragraph_id dtype: int32 - name: end_paragraph_id dtype: int32 - name: title dtype: string - name: section dtype: string - name: wikipedia_id dtype: string - name: meta struct: - name: evidence_span list: string - name: obj_surface list: string - name: sub_surface list: string - name: subj_aliases list: string - name: template_questions list: string - name: output list: - name: answer dtype: string - name: meta struct: - name: score dtype: int32 - name: provenance list: - name: bleu_score dtype: float32 - name: start_character dtype: int32 - name: start_paragraph_id dtype: int32 - name: end_character dtype: int32 - name: end_paragraph_id dtype: int32 - name: meta struct: - name: fever_page_id dtype: string - name: fever_sentence_id dtype: int32 - name: annotation_id dtype: string - name: yes_no_answer dtype: string - name: evidence_span list: string - name: section dtype: string - name: title dtype: string - name: wikipedia_id dtype: string splits: - name: train num_bytes: 1190183022 num_examples: 2284168 - name: validation num_bytes: 2573444 num_examples: 5000 - name: test num_bytes: 757470 num_examples: 5000 download_size: 546671157 dataset_size: 1193513936 - config_name: triviaqa_support_only features: - name: id dtype: string - name: input dtype: string - name: meta struct: - name: left_context dtype: string - name: mention dtype: string - name: right_context dtype: string - name: partial_evidence list: - name: start_paragraph_id dtype: int32 - name: end_paragraph_id dtype: int32 - name: title dtype: string - name: section dtype: string - name: wikipedia_id dtype: string - name: meta struct: - name: evidence_span list: string - name: obj_surface list: string - name: sub_surface list: string - name: subj_aliases list: string - name: template_questions list: string - name: output list: - name: answer dtype: string - name: meta struct: - name: score dtype: int32 - name: provenance list: - name: bleu_score dtype: float32 - name: start_character dtype: int32 - name: start_paragraph_id dtype: int32 - name: end_character dtype: int32 - name: end_paragraph_id dtype: int32 - name: meta struct: - name: fever_page_id dtype: string - name: fever_sentence_id dtype: int32 - name: annotation_id dtype: string - name: yes_no_answer dtype: string - name: evidence_span list: string - name: section dtype: string - name: title dtype: string - name: wikipedia_id dtype: string splits: - name: train num_bytes: 72021515 num_examples: 61844 - name: validation num_bytes: 6824398 num_examples: 5359 - name: test num_bytes: 340692 num_examples: 6586 download_size: 31946196 dataset_size: 79186605 - config_name: wned features: - name: id dtype: string - name: input dtype: string - name: meta struct: - name: left_context dtype: string - name: mention dtype: string - name: right_context dtype: string - name: partial_evidence list: - name: start_paragraph_id dtype: int32 - name: end_paragraph_id dtype: int32 - name: title dtype: string - name: section dtype: string - name: wikipedia_id dtype: string - name: meta struct: - name: evidence_span list: string - name: obj_surface list: string - name: sub_surface list: string - name: subj_aliases list: string - name: template_questions list: string - name: output list: - name: answer dtype: string - name: meta struct: - name: score dtype: int32 - name: provenance list: - name: bleu_score dtype: float32 - name: start_character dtype: int32 - name: start_paragraph_id dtype: int32 - name: end_character dtype: int32 - name: end_paragraph_id dtype: int32 - name: meta struct: - name: fever_page_id dtype: string - name: fever_sentence_id dtype: int32 - name: annotation_id dtype: string - name: yes_no_answer dtype: string - name: evidence_span list: string - name: section dtype: string - name: title dtype: string - name: wikipedia_id dtype: string splits: - name: validation num_bytes: 12659518 num_examples: 3396 - name: test num_bytes: 13080824 num_examples: 3376 download_size: 3608615 dataset_size: 25740342 - config_name: wow features: - name: id dtype: string - name: input dtype: string - name: meta struct: - name: left_context dtype: string - name: mention dtype: string - name: right_context dtype: string - name: partial_evidence list: - name: start_paragraph_id dtype: int32 - name: end_paragraph_id dtype: int32 - name: title dtype: string - name: section dtype: string - name: wikipedia_id dtype: string - name: meta struct: - name: evidence_span list: string - name: obj_surface list: string - name: sub_surface list: string - name: subj_aliases list: string - name: template_questions list: string - name: output list: - name: answer dtype: string - name: meta struct: - name: score dtype: int32 - name: provenance list: - name: bleu_score dtype: float32 - name: start_character dtype: int32 - name: start_paragraph_id dtype: int32 - name: end_character dtype: int32 - name: end_paragraph_id dtype: int32 - name: meta struct: - name: fever_page_id dtype: string - name: fever_sentence_id dtype: int32 - name: annotation_id dtype: string - name: yes_no_answer dtype: string - name: evidence_span list: string - name: section dtype: string - name: title dtype: string - name: wikipedia_id dtype: string splits: - name: train num_bytes: 41870938 num_examples: 63734 - name: validation num_bytes: 2021752 num_examples: 3054 - name: test num_bytes: 1339546 num_examples: 2944 download_size: 25441975 dataset_size: 45232236 configs: - config_name: aidayago2 data_files: - split: train path: aidayago2/train-* - split: validation path: aidayago2/validation-* - split: test path: aidayago2/test-* - config_name: cweb data_files: - split: validation path: cweb/validation-* - split: test path: cweb/test-* - config_name: eli5 data_files: - split: train path: eli5/train-* - split: validation path: eli5/validation-* - split: test path: eli5/test-* - config_name: fever data_files: - split: train path: fever/train-* - split: validation path: fever/validation-* - split: test path: fever/test-* - config_name: hotpotqa data_files: - split: train path: hotpotqa/train-* - split: validation path: hotpotqa/validation-* - split: test path: hotpotqa/test-* - config_name: nq data_files: - split: train path: nq/train-* - split: validation path: nq/validation-* - split: test path: nq/test-* default: true - config_name: structured_zeroshot data_files: - split: train path: structured_zeroshot/train-* - split: validation path: structured_zeroshot/validation-* - split: test path: structured_zeroshot/test-* - config_name: trex data_files: - split: train path: trex/train-* - split: validation path: trex/validation-* - split: test path: trex/test-* - config_name: triviaqa_support_only data_files: - split: train path: triviaqa_support_only/train-* - split: validation path: triviaqa_support_only/validation-* - split: test path: triviaqa_support_only/test-* - config_name: wned data_files: - split: validation path: wned/validation-* - split: test path: wned/test-* - config_name: wow data_files: - split: train path: wow/train-* - split: validation path: wow/validation-* - split: test path: wow/test-* --- # Dataset Card for KILT ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://ai.facebook.com/tools/kilt/ - **Repository:** https://github.com/facebookresearch/KILT - **Paper:** https://arxiv.org/abs/2009.02252 - **Leaderboard:** https://eval.ai/web/challenges/challenge-page/689/leaderboard/ - **Point of Contact:** [Needs More Information] ### Dataset Summary KILT has been built from 11 datasets representing 5 types of tasks: - Fact-checking - Entity linking - Slot filling - Open domain QA - Dialog generation All these datasets have been grounded in a single pre-processed Wikipedia dump, allowing for fairer and more consistent evaluation as well as enabling new task setups such as multitask and transfer learning with minimal effort. KILT also provides tools to analyze and understand the predictions made by models, as well as the evidence they provide for their predictions. #### Loading the KILT knowledge source and task data The original KILT [release](https://github.com/facebookresearch/KILT) only provides question IDs for the TriviaQA task. Using the full dataset requires mapping those back to the TriviaQA questions, which can be done as follows: ```python from datasets import load_dataset # Get the pre-processed Wikipedia knowledge source for kild kilt_wiki = load_dataset("kilt_wikipedia") # Get the KILT task datasets kilt_triviaqa = load_dataset("kilt_tasks", name="triviaqa_support_only") # Most tasks in KILT already have all required data, but KILT-TriviaQA # only provides the question IDs, not the questions themselves. # Thankfully, we can get the original TriviaQA data with: trivia_qa = load_dataset('trivia_qa', 'unfiltered.nocontext') # The KILT IDs can then be mapped to the TriviaQA questions with: triviaqa_map = {} def add_missing_data(x, trivia_qa_subset, triviaqa_map): i = triviaqa_map[x['id']] x['input'] = trivia_qa_subset[i]['question'] x['output']['original_answer'] = trivia_qa_subset[i]['answer']['value'] return x for k in ['train', 'validation', 'test']: triviaqa_map = dict([(q_id, i) for i, q_id in enumerate(trivia_qa[k]['question_id'])]) kilt_triviaqa[k] = kilt_triviaqa[k].filter(lambda x: x['id'] in triviaqa_map) kilt_triviaqa[k] = kilt_triviaqa[k].map(add_missing_data, fn_kwargs=dict(trivia_qa_subset=trivia_qa[k], triviaqa_map=triviaqa_map)) ``` ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/). ### Languages All tasks are in English (`en`). ## Dataset Structure ### Data Instances An example of open-domain QA from the Natural Questions `nq` configuration looks as follows: ``` {'id': '-5004457603684974952', 'input': 'who is playing the halftime show at super bowl 2016', 'meta': {'left_context': '', 'mention': '', 'obj_surface': [], 'partial_evidence': [], 'right_context': '', 'sub_surface': [], 'subj_aliases': [], 'template_questions': []}, 'output': [{'answer': 'Coldplay', 'meta': {'score': 0}, 'provenance': [{'bleu_score': 1.0, 'end_character': 186, 'end_paragraph_id': 1, 'meta': {'annotation_id': '-1', 'evidence_span': [], 'fever_page_id': '', 'fever_sentence_id': -1, 'yes_no_answer': ''}, 'section': 'Section::::Abstract.', 'start_character': 178, 'start_paragraph_id': 1, 'title': 'Super Bowl 50 halftime show', 'wikipedia_id': '45267196'}]}, {'answer': 'Beyoncé', 'meta': {'score': 0}, 'provenance': [{'bleu_score': 1.0, 'end_character': 224, 'end_paragraph_id': 1, 'meta': {'annotation_id': '-1', 'evidence_span': [], 'fever_page_id': '', 'fever_sentence_id': -1, 'yes_no_answer': ''}, 'section': 'Section::::Abstract.', 'start_character': 217, 'start_paragraph_id': 1, 'title': 'Super Bowl 50 halftime show', 'wikipedia_id': '45267196'}]}, {'answer': 'Bruno Mars', 'meta': {'score': 0}, 'provenance': [{'bleu_score': 1.0, 'end_character': 239, 'end_paragraph_id': 1, 'meta': {'annotation_id': '-1', 'evidence_span': [], 'fever_page_id': '', 'fever_sentence_id': -1, 'yes_no_answer': ''}, 'section': 'Section::::Abstract.', 'start_character': 229, 'start_paragraph_id': 1, 'title': 'Super Bowl 50 halftime show', 'wikipedia_id': '45267196'}]}, {'answer': 'Coldplay with special guest performers Beyoncé and Bruno Mars', 'meta': {'score': 0}, 'provenance': []}, {'answer': 'British rock group Coldplay with special guest performers Beyoncé and Bruno Mars', 'meta': {'score': 0}, 'provenance': []}, {'answer': '', 'meta': {'score': 0}, 'provenance': [{'bleu_score': 0.9657992720603943, 'end_character': 341, 'end_paragraph_id': 1, 'meta': {'annotation_id': '2430977867500315580', 'evidence_span': [], 'fever_page_id': '', 'fever_sentence_id': -1, 'yes_no_answer': 'NONE'}, 'section': 'Section::::Abstract.', 'start_character': 0, 'start_paragraph_id': 1, 'title': 'Super Bowl 50 halftime show', 'wikipedia_id': '45267196'}]}, {'answer': '', 'meta': {'score': 0}, 'provenance': [{'bleu_score': -1.0, 'end_character': -1, 'end_paragraph_id': 1, 'meta': {'annotation_id': '-1', 'evidence_span': ['It was headlined by the British rock group Coldplay with special guest performers Beyoncé and Bruno Mars', 'It was headlined by the British rock group Coldplay with special guest performers Beyoncé and Bruno Mars, who previously had headlined the Super Bowl XLVII and Super Bowl XLVIII halftime shows, respectively.', "The Super Bowl 50 Halftime Show took place on February 7, 2016, at Levi's Stadium in Santa Clara, California as part of Super Bowl 50. It was headlined by the British rock group Coldplay with special guest performers Beyoncé and Bruno Mars", "The Super Bowl 50 Halftime Show took place on February 7, 2016, at Levi's Stadium in Santa Clara, California as part of Super Bowl 50. It was headlined by the British rock group Coldplay with special guest performers Beyoncé and Bruno Mars,"], 'fever_page_id': '', 'fever_sentence_id': -1, 'yes_no_answer': ''}, 'section': 'Section::::Abstract.', 'start_character': -1, 'start_paragraph_id': 1, 'title': 'Super Bowl 50 halftime show', 'wikipedia_id': '45267196'}]}]} ``` ### Data Fields Examples from all configurations have the following features: - `input`: a `string` feature representing the query. - `output`: a `list` of features each containing information for an answer, made up of: - `answer`: a `string` feature representing a possible answer. - `provenance`: a `list` of features representing Wikipedia passages that support the `answer`, denoted by: - `title`: a `string` feature, the title of the Wikipedia article the passage was retrieved from. - `section`: a `string` feature, the title of the section in Wikipedia article. - `wikipedia_id`: a `string` feature, a unique identifier for the Wikipedia article. - `start_character`: a `int32` feature. - `start_paragraph_id`: a `int32` feature. - `end_character`: a `int32` feature. - `end_paragraph_id`: a `int32` feature. ### Data Splits The configurations have the following splits: | | Train | Validation | Test | | ----------- | ----------- | ----------- | ----------- | | triviaqa | 61844 | 5359 | 6586 | | fever | 104966 | 10444 | 10100 | | aidayago2 | 18395 | 4784 | 4463 | | wned | | 3396 | 3376 | | cweb | | 5599 | 5543 | | trex | 2284168 | 5000 | 5000 | | structured_zeroshot | 147909 | 3724 | 4966 | | nq | 87372 | 2837 | 1444 | | hotpotqa | 88869 | 5600 | 5569 | | eli5 | 272634 | 1507 | 600 | | wow | 94577 | 3058 | 2944 | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information Cite as: ``` @inproceedings{kilt_tasks, author = {Fabio Petroni and Aleksandra Piktus and Angela Fan and Patrick S. H. Lewis and Majid Yazdani and Nicola De Cao and James Thorne and Yacine Jernite and Vladimir Karpukhin and Jean Maillard and Vassilis Plachouras and Tim Rockt{\"{a}}schel and Sebastian Riedel}, editor = {Kristina Toutanova and Anna Rumshisky and Luke Zettlemoyer and Dilek Hakkani{-}T{\"{u}}r and Iz Beltagy and Steven Bethard and Ryan Cotterell and Tanmoy Chakraborty and Yichao Zhou}, title = {{KILT:} a Benchmark for Knowledge Intensive Language Tasks}, booktitle = {Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, {NAACL-HLT} 2021, Online, June 6-11, 2021}, pages = {2523--2544}, publisher = {Association for Computational Linguistics}, year = {2021}, url = {https://www.aclweb.org/anthology/2021.naacl-main.200/} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@yjernite](https://github.com/yjernite) for adding this dataset.
marsyas/gtzan
--- pretty_name: GTZAN --- # Dataset Card for GTZAN ## Table of Contents - [Dataset Card for GTZAN](#dataset-card-for-gtzan) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [http://marsyas.info/downloads/datasets.html](http://marsyas.info/downloads/datasets.html) - **Paper:** [http://ismir2001.ismir.net/pdf/tzanetakis.pdf](http://ismir2001.ismir.net/pdf/tzanetakis.pdf) - **Point of Contact:** ### Dataset Summary GTZAN is a dataset for musical genre classification of audio signals. The dataset consists of 1,000 audio tracks, each of 30 seconds long. It contains 10 genres, each represented by 100 tracks. The tracks are all 22,050Hz Mono 16-bit audio files in WAV format. The genres are: blues, classical, country, disco, hiphop, jazz, metal, pop, reggae, and rock. ### Languages English ## Dataset Structure GTZAN is distributed as a single dataset without a predefined training and test split. The information below refers to the single `train` split that is assigned by default. ### Data Instances An example of GTZAN looks as follows: ```python { "file": "/path/to/cache/genres/blues/blues.00000.wav", "audio": { "path": "/path/to/cache/genres/blues/blues.00000.wav", "array": array( [ 0.00732422, 0.01660156, 0.00762939, ..., -0.05560303, -0.06106567, -0.06417847, ], dtype=float32, ), "sampling_rate": 22050, }, "genre": 0, } ``` ### Data Fields The types associated with each of the data fields is as follows: * `file`: a `string` feature. * `audio`: an `Audio` feature containing the `path` of the sound file, the decoded waveform in the `array` field, and the `sampling_rate`. * `genre`: a `ClassLabel` feature. ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @misc{tzanetakis_essl_cook_2001, author = "Tzanetakis, George and Essl, Georg and Cook, Perry", title = "Automatic Musical Genre Classification Of Audio Signals", url = "http://ismir2001.ismir.net/pdf/tzanetakis.pdf", publisher = "The International Society for Music Information Retrieval", year = "2001" } ``` ### Contributions Thanks to [@lewtun](https://github.com/lewtun) for adding this dataset.
sms_spam
--- annotations_creators: - crowdsourced - found language_creators: - crowdsourced - found language: - en license: - unknown multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - extended|other-nus-sms-corpus task_categories: - text-classification task_ids: - intent-classification paperswithcode_id: sms-spam-collection-data-set pretty_name: SMS Spam Collection Data Set dataset_info: features: - name: sms dtype: string - name: label dtype: class_label: names: '0': ham '1': spam config_name: plain_text splits: - name: train num_bytes: 521756 num_examples: 5574 download_size: 203415 dataset_size: 521756 train-eval-index: - config: plain_text task: text-classification task_id: binary_classification splits: train_split: train col_mapping: sms: text label: target metrics: - type: accuracy name: Accuracy - type: f1 name: F1 macro args: average: macro - type: f1 name: F1 micro args: average: micro - type: f1 name: F1 weighted args: average: weighted - type: precision name: Precision macro args: average: macro - type: precision name: Precision micro args: average: micro - type: precision name: Precision weighted args: average: weighted - type: recall name: Recall macro args: average: macro - type: recall name: Recall micro args: average: micro - type: recall name: Recall weighted args: average: weighted --- # Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection - **Repository:** - **Paper:** Almeida, T.A., Gomez Hidalgo, J.M., Yamakami, A. Contributions to the study of SMS Spam Filtering: New Collection and Results. Proceedings of the 2011 ACM Symposium on Document Engineering (ACM DOCENG'11), Mountain View, CA, USA, 2011. - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The SMS Spam Collection v.1 is a public set of SMS labeled messages that have been collected for mobile phone spam research. It has one collection composed by 5,574 English, real and non-enconded messages, tagged according being legitimate (ham) or spam. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - sms: the sms message - label: indicating if the sms message is ham or spam, ham means it is not spam ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information @inproceedings{Almeida2011SpamFiltering, title={Contributions to the Study of SMS Spam Filtering: New Collection and Results}, author={Tiago A. Almeida and Jose Maria Gomez Hidalgo and Akebo Yamakami}, year={2011}, booktitle = "Proceedings of the 2011 ACM Symposium on Document Engineering (DOCENG'11)", } ### Contributions Thanks to [@czabo](https://github.com/czabo) for adding this dataset.
polyglot_ner
--- annotations_creators: - machine-generated language_creators: - found language: - ar - bg - ca - cs - da - de - el - en - es - et - fa - fi - fr - he - hi - hr - hu - id - it - ja - ko - lt - lv - ms - nl - 'no' - pl - pt - ro - ru - sk - sl - sr - sv - th - tl - tr - uk - vi - zh license: - unknown multilinguality: - multilingual pretty_name: Polyglot-NER size_categories: - unknown source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition paperswithcode_id: polyglot-ner dataset_info: - config_name: ca features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 143746026 num_examples: 372665 download_size: 1107018606 dataset_size: 143746026 - config_name: de features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 156744752 num_examples: 547578 download_size: 1107018606 dataset_size: 156744752 - config_name: es features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 145387551 num_examples: 386699 download_size: 1107018606 dataset_size: 145387551 - config_name: fi features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 95175890 num_examples: 387465 download_size: 1107018606 dataset_size: 95175890 - config_name: hi features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 177698330 num_examples: 401648 download_size: 1107018606 dataset_size: 177698330 - config_name: id features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 152560050 num_examples: 463862 download_size: 1107018606 dataset_size: 152560050 - config_name: ko features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 174523416 num_examples: 560105 download_size: 1107018606 dataset_size: 174523416 - config_name: ms features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 155268778 num_examples: 528181 download_size: 1107018606 dataset_size: 155268778 - config_name: pl features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 159684112 num_examples: 623267 download_size: 1107018606 dataset_size: 159684112 - config_name: ru features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 200717423 num_examples: 551770 download_size: 1107018606 dataset_size: 200717423 - config_name: sr features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 183437513 num_examples: 559423 download_size: 1107018606 dataset_size: 183437513 - config_name: tl features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 47104871 num_examples: 160750 download_size: 1107018606 dataset_size: 47104871 - config_name: vi features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 141062258 num_examples: 351643 download_size: 1107018606 dataset_size: 141062258 - config_name: ar features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 183551222 num_examples: 339109 download_size: 1107018606 dataset_size: 183551222 - config_name: cs features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 156792129 num_examples: 564462 download_size: 1107018606 dataset_size: 156792129 - config_name: el features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 195456401 num_examples: 446052 download_size: 1107018606 dataset_size: 195456401 - config_name: et features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 21961619 num_examples: 87023 download_size: 1107018606 dataset_size: 21961619 - config_name: fr features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 147560734 num_examples: 418411 download_size: 1107018606 dataset_size: 147560734 - config_name: hr features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 154151689 num_examples: 629667 download_size: 1107018606 dataset_size: 154151689 - config_name: it features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 147520094 num_examples: 378325 download_size: 1107018606 dataset_size: 147520094 - config_name: lt features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 165319919 num_examples: 848018 download_size: 1107018606 dataset_size: 165319919 - config_name: nl features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 150737871 num_examples: 520664 download_size: 1107018606 dataset_size: 150737871 - config_name: pt features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 145627857 num_examples: 396773 download_size: 1107018606 dataset_size: 145627857 - config_name: sk features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 134174889 num_examples: 500135 download_size: 1107018606 dataset_size: 134174889 - config_name: sv features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 157058369 num_examples: 634881 download_size: 1107018606 dataset_size: 157058369 - config_name: tr features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 164456506 num_examples: 607324 download_size: 1107018606 dataset_size: 164456506 - config_name: zh features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 165056969 num_examples: 1570853 download_size: 1107018606 dataset_size: 165056969 - config_name: bg features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 190509195 num_examples: 559694 download_size: 1107018606 dataset_size: 190509195 - config_name: da features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 150551293 num_examples: 546440 download_size: 1107018606 dataset_size: 150551293 - config_name: en features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 145491677 num_examples: 423982 download_size: 1107018606 dataset_size: 145491677 - config_name: fa features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 180093656 num_examples: 492903 download_size: 1107018606 dataset_size: 180093656 - config_name: he features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 177231613 num_examples: 459933 download_size: 1107018606 dataset_size: 177231613 - config_name: hu features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 160702240 num_examples: 590218 download_size: 1107018606 dataset_size: 160702240 - config_name: ja features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 193679570 num_examples: 1691018 download_size: 1107018606 dataset_size: 193679570 - config_name: lv features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 76256241 num_examples: 331568 download_size: 1107018606 dataset_size: 76256241 - config_name: 'no' features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 152431612 num_examples: 552176 download_size: 1107018606 dataset_size: 152431612 - config_name: ro features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 96369897 num_examples: 285985 download_size: 1107018606 dataset_size: 96369897 - config_name: sl features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 148140079 num_examples: 521251 download_size: 1107018606 dataset_size: 148140079 - config_name: th features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 360409343 num_examples: 217631 download_size: 1107018606 dataset_size: 360409343 - config_name: uk features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 198251631 num_examples: 561373 download_size: 1107018606 dataset_size: 198251631 - config_name: combined features: - name: id dtype: string - name: lang dtype: string - name: words sequence: string - name: ner sequence: string splits: - name: train num_bytes: 6286855097 num_examples: 21070925 download_size: 1107018606 dataset_size: 6286855097 --- # Dataset Card for Polyglot-NER ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://sites.google.com/site/rmyeid/projects/polylgot-ner](https://sites.google.com/site/rmyeid/projects/polylgot-ner) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 45.39 GB - **Size of the generated dataset:** 12.54 GB - **Total amount of disk used:** 57.93 GB ### Dataset Summary Polyglot-NER A training dataset automatically generated from Wikipedia and Freebase the task of named entity recognition. The dataset contains the basic Wikipedia based training data for 40 languages we have (with coreference resolution) for the task of named entity recognition. The details of the procedure of generating them is outlined in Section 3 of the paper (https://arxiv.org/abs/1410.3791). Each config contains the data corresponding to a different language. For example, "es" includes only spanish examples. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### ar - **Size of downloaded dataset files:** 1.11 GB - **Size of the generated dataset:** 183.55 MB - **Total amount of disk used:** 1.29 GB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "id": "2", "lang": "ar", "ner": ["O", "O", "O", "O", "O", "O", "O", "O", "LOC", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "PER", "PER", "PER", "PER", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"], "words": "[\"وفي\", \"مرحلة\", \"موالية\", \"أنشأت\", \"قبيلة\", \"مكناسة\", \"الزناتية\", \"مكناسة\", \"تازة\", \",\", \"وأقام\", \"بها\", \"المرابطون\", \"قلعة\", \"..." } ``` #### bg - **Size of downloaded dataset files:** 1.11 GB - **Size of the generated dataset:** 190.51 MB - **Total amount of disk used:** 1.30 GB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "id": "1", "lang": "bg", "ner": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"], "words": "[\"Дефиниция\", \"Наименованията\", \"\\\"\", \"книжовен\", \"\\\"/\\\"\", \"литературен\", \"\\\"\", \"език\", \"на\", \"български\", \"за\", \"тази\", \"кодифи..." } ``` #### ca - **Size of downloaded dataset files:** 1.11 GB - **Size of the generated dataset:** 143.75 MB - **Total amount of disk used:** 1.25 GB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "id": "2", "lang": "ca", "ner": "[\"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O...", "words": "[\"Com\", \"a\", \"compositor\", \"deixà\", \"un\", \"immens\", \"llegat\", \"que\", \"inclou\", \"8\", \"simfonies\", \"(\", \"1822\", \"),\", \"diverses\", ..." } ``` #### combined - **Size of downloaded dataset files:** 1.11 GB - **Size of the generated dataset:** 6.29 GB - **Total amount of disk used:** 7.39 GB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "id": "18", "lang": "es", "ner": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"], "words": "[\"Los\", \"cambios\", \"en\", \"la\", \"energía\", \"libre\", \"de\", \"Gibbs\", \"\\\\\", \"Delta\", \"G\", \"nos\", \"dan\", \"una\", \"cuantificación\", \"de..." } ``` #### cs - **Size of downloaded dataset files:** 1.11 GB - **Size of the generated dataset:** 156.79 MB - **Total amount of disk used:** 1.26 GB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "id": "3", "lang": "cs", "ner": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"], "words": "[\"Historie\", \"Symfonická\", \"forma\", \"se\", \"rozvinula\", \"se\", \"především\", \"v\", \"období\", \"klasicismu\", \"a\", \"romantismu\", \",\", \"..." } ``` ### Data Fields The data fields are the same among all splits. #### ar - `id`: a `string` feature. - `lang`: a `string` feature. - `words`: a `list` of `string` features. - `ner`: a `list` of `string` features. #### bg - `id`: a `string` feature. - `lang`: a `string` feature. - `words`: a `list` of `string` features. - `ner`: a `list` of `string` features. #### ca - `id`: a `string` feature. - `lang`: a `string` feature. - `words`: a `list` of `string` features. - `ner`: a `list` of `string` features. #### combined - `id`: a `string` feature. - `lang`: a `string` feature. - `words`: a `list` of `string` features. - `ner`: a `list` of `string` features. #### cs - `id`: a `string` feature. - `lang`: a `string` feature. - `words`: a `list` of `string` features. - `ner`: a `list` of `string` features. ### Data Splits | name | train | |----------|---------:| | ar | 339109 | | bg | 559694 | | ca | 372665 | | combined | 21070925 | | cs | 564462 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @article{polyglotner, author = {Al-Rfou, Rami and Kulkarni, Vivek and Perozzi, Bryan and Skiena, Steven}, title = {{Polyglot-NER}: Massive Multilingual Named Entity Recognition}, journal = {{Proceedings of the 2015 {SIAM} International Conference on Data Mining, Vancouver, British Columbia, Canada, April 30- May 2, 2015}}, month = {April}, year = {2015}, publisher = {SIAM}, } ``` ### Contributions Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset.
Hello-SimpleAI/HC3
--- task_categories: - text-classification - question-answering - sentence-similarity - zero-shot-classification language: - en - zh tags: - ChatGPT - SimpleAI - Detection - OOD size_categories: - 10K<n<100K license: cc-by-sa-4.0 --- # Human ChatGPT Comparison Corpus (HC3) We propose the first human-ChatGPT comparison corpus, named **HC3** dataset. This dataset is introduced in our paper: - Paper: [***How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection***](https://arxiv.org/abs/2301.07597) Code, models and analysis are available on our GitHub: - GitHub: [**Chatgpt-Comparison-Detection project** 🔬](https://github.com/Hello-SimpleAI/chatgpt-comparison-detection) # Dataset Copyright If the source datasets used in this corpus has a specific license which is stricter than CC-BY-SA, our products follow the same. If not, they follow CC-BY-SA license. See [dataset copyright](https://github.com/Hello-SimpleAI/chatgpt-comparison-detection#dataset-copyright). # Citation Checkout this papaer [arxiv: 2301.07597](https://arxiv.org/abs/2301.07597) ``` @article{guo-etal-2023-hc3, title = "How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection", author = "Guo, Biyang and Zhang, Xin and Wang, Ziyuan and Jiang, Minqi and Nie, Jinran and Ding, Yuxuan and Yue, Jianwei and Wu, Yupeng", journal={arXiv preprint arxiv:2301.07597} year = "2023", } ```
CarperAI/openai_summarize_comparisons
--- dataset_info: features: - name: prompt dtype: string - name: chosen dtype: string - name: rejected dtype: string splits: - name: test num_bytes: 143018505 num_examples: 83629 - name: train num_bytes: 157425966 num_examples: 92534 - name: valid1 num_bytes: 56686271 num_examples: 33082 - name: valid2 num_bytes: 86396487 num_examples: 50715 download_size: 20257716 dataset_size: 443527229 ---
lmms-lab/llava-bench-in-the-wild
--- dataset_info: features: - name: question_id dtype: int8 - name: question dtype: string - name: image dtype: image - name: caption dtype: string - name: gpt_answer dtype: string - name: category dtype: string - name: image_id dtype: string splits: - name: train num_bytes: 22333678.0 num_examples: 60 download_size: 9773451 dataset_size: 22333678.0 configs: - config_name: default data_files: - split: train path: data/train-* --- <p align="center" width="100%"> <img src="https://i.postimg.cc/g0QRgMVv/WX20240228-113337-2x.png" width="100%" height="80%"> </p> # Large-scale Multi-modality Models Evaluation Suite > Accelerating the development of large-scale multi-modality models (LMMs) with `lmms-eval` 🏠 [Homepage](https://lmms-lab.github.io/) | 📚 [Documentation](docs/README.md) | 🤗 [Huggingface Datasets](https://huggingface.co/lmms-lab) # This Dataset This is a formatted version of [LLaVA-Bench(wild)](https://llava-vl.github.io/) that is used in LLaVA. It is used in our `lmms-eval` pipeline to allow for one-click evaluations of large multi-modality models. ``` @misc{liu2023improvedllava, author={Liu, Haotian and Li, Chunyuan and Li, Yuheng and Lee, Yong Jae}, title={Improved Baselines with Visual Instruction Tuning}, publisher={arXiv:2310.03744}, year={2023}, } @inproceedings{liu2023llava, author = {Liu, Haotian and Li, Chunyuan and Wu, Qingyang and Lee, Yong Jae}, title = {Visual Instruction Tuning}, booktitle = {NeurIPS}, year = {2023} } ```
DKYoon/SlimPajama-6B
--- language: - en size_categories: - 1M<n<10M task_categories: - text-generation pretty_name: SlimPajama-6B configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* dataset_info: features: - name: text dtype: string - name: meta struct: - name: redpajama_set_name dtype: string - name: __index_level_0__ dtype: int64 splits: - name: train num_bytes: 23918118724 num_examples: 5489000 - name: validation num_bytes: 39109042 num_examples: 9347 - name: test num_bytes: 40114950 num_examples: 9346 download_size: 14048972121 dataset_size: 23997342716 --- Sampled version of [cerebras/SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B). [Since the original data was shuffled before chunking](https://huggingface.co/datasets/cerebras/SlimPajama-627B/discussions/4), I only downloaded train/chunk1 (of 10 total) and further sampled 10%. This should result in roughly 6B tokens, hence SlimPajama-6B. The dataset is 24GBs in storage size when decompressed (original dataset is over 2TBs) and has 5489000 rows. The validation set and test set were sampled as well. --- #### Data source proportions for SlimPajama-627B and SlimPajama-6B For sanity purpose, I caluclated the byte proportion of the sampled version. | Data source | SlimPajama-627B | SlimPajama-6B | | ------------- | ---------- | --------- | | Commoncrawl | 52.2% | 54.1% | | C4 | 26.7% | 28.7% | | GitHub | 5.2% | 4.2% | | Books | 4.2% | 3.7% | | ArXiv | 4.6% | 3.4% | | Wikpedia | 3.8% | 3.1% | | StackExchange | 3.3% | 2.8% | --- Please refer to the original dataset for other info. ``` @misc{cerebras2023slimpajama, author = {Soboleva, Daria and Al-Khateeb, Faisal and Myers, Robert and Steeves, Jacob R and Hestness, Joel and Dey, Nolan}, title = {{SlimPajama: A 627B token cleaned and deduplicated version of RedPajama}}, month = June, year = 2023, howpublished = {\url{https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama}}, url = {https://huggingface.co/datasets/cerebras/SlimPajama-627B}, } ```
llm-book/JGLUE
--- annotations_creators: - crowdsourced language: - ja language_creators: - crowdsourced - found license: - cc-by-4.0 multilinguality: - monolingual pretty_name: JGLUE size_categories: [] source_datasets: - original tags: - MARC - STS - NLI - SQuAD - CommonsenseQA task_categories: - multiple-choice - question-answering - sentence-similarity - text-classification task_ids: - multiple-choice-qa - open-domain-qa - multi-class-classification - sentiment-classification --- # Dataset Card for JGLUE [![ACL2020 2020.acl-main.419](https://img.shields.io/badge/LREC2022-2022.lrec--1.317-red)](https://aclanthology.org/2022.lrec-1.317) 書籍『大規模言語モデル入門』で使用する、JGLUEのデータセットです。 [オリジナルのリポジトリ](https://github.com/yahoojapan/JGLUE)で公開されているデータセットを利用しています。 ### Licence コードのライセンスは Creative Commons Attribution-ShareAlike 4.0 International License です。 データそのもののライセンスは[配布元](https://github.com/yahoojapan/JGLUE)のライセンスに従ってください。 ### Citation ```bibtex @inproceedings{kurihara-etal-2022-jglue, title = "{JGLUE}: {J}apanese General Language Understanding Evaluation", author = "Kurihara, Kentaro and Kawahara, Daisuke and Shibata, Tomohide", booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference", month = jun, year = "2022", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2022.lrec-1.317", pages = "2957--2966", abstract = "To develop high-performance natural language understanding (NLU) models, it is necessary to have a benchmark to evaluate and analyze NLU ability from various perspectives. While the English NLU benchmark, GLUE, has been the forerunner, benchmarks are now being released for languages other than English, such as CLUE for Chinese and FLUE for French; but there is no such benchmark for Japanese. We build a Japanese NLU benchmark, JGLUE, from scratch without translation to measure the general NLU ability in Japanese. We hope that JGLUE will facilitate NLU research in Japanese.", } ``` ```bibtex @InProceedings{Kurihara_nlp2022, author = "栗原健太郎 and 河原大輔 and 柴田知秀", title = "JGLUE: 日本語言語理解ベンチマーク", booktitle = "言語処理学会第 28 回年次大会", year = "2022", url = "https://www.anlp.jp/proceedings/annual_meeting/2022/pdf_dir/E8-4.pdf" note= "in Japanese" } ``` ### Contributions データセット作成者である [Kentaro Kurihara](https://twitter.com/kkurihara_cs), [Daisuke Kawahara](https://twitter.com/daisukekawahar1), [Tomohide Shibata](https://twitter.com/stomohide) に感謝を申し上げます。 また本リポジトリのコードは [Shunsuke Kitada](https://twitter.com/shunk031)の[こちらのリポジトリ](https://huggingface.co/datasets/shunk031/JGLUE)を基に作成されたものです。
Intel/orca_dpo_pairs
--- license: apache-2.0 --- The dataset contains 12k examples from [Orca](https://arxiv.org/abs/2306.02707) style dataset [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca).
Open-Orca/SlimOrca-Dedup
--- license: mit task_categories: - text-classification - question-answering - text-generation tags: - code - art - music - legal - finance - biology - chemistry pretty_name: SlimOrca Dedup size_categories: - 100K<n<1M --- ---- # Overview ---- "SlimOrca Dedup" is a deduplicated, unfiltered subset of the SlimOrca dataset, excluding RLHF instances, resulting in 363k unique examples. # Key Features - Removal of RLHF instances. - Deduplication using minhash and Jaccard similarity techniques. # Demo Models Note: These models were trained on the full SlimOrca dataset, not the deduplicated, unfiltered version. * https://huggingface.co/openaccess-ai-collective/jackalope-7b * https://huggingface.co/Open-Orca/Mistral-7B-SlimOrca ---- # Dataset format ---- **Basic Structure** This dataset uses basic sharegpt formatting. Example and explanation of the schema is below: ```json { "conversations": [ {"from": "system", "value": "You are an AI assistant..."}, {"from": "human", "value": "Write an article based on this..."}, {"from": "gpt", "value": "Title: Tragedy Strikes in Sydney..."} ] } ``` **Message Formatting** - **"from"**: A string indicating the sender of the message. Possible senders are "system", "human", and "gpt". - **"value"**: A string containing the message or instruction from the sender. **Message roles** - ** System: ** The system provides instructions or guidelines for the task to the large language model (LLM). - ** Human: ** The human provides prompts or queries for the AI model to respond to. - ** GPT: ** The language model, generates responses or content based on the prompts or queries provided by the human. messages from this role only ever follow messages from the human role. ---- # Citation ---- ```bibtex @misc{SlimOrcaDedup, title = {SlimOrca Dedup: A Deduplicated Subset of SlimOrca}, author = {Wing Lian and Guan Wang and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium" and Nathan Hoos}, year = {2023}, publisher = {HuggingFace}, url = {https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup/} } @misc{mukherjee2023orca, title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4}, author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah}, year={2023}, eprint={2306.02707}, archivePrefix={arXiv}, primaryClass={cs.CL} } @misc{longpre2023flan, title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning}, author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts}, year={2023}, eprint={2301.13688}, archivePrefix={arXiv}, primaryClass={cs.AI} } ```
eli5_category
--- annotations_creators: - found language_creators: - found language: - en license: - unknown multilinguality: - monolingual paperswithcode_id: null pretty_name: ELI5-Category size_categories: - 100K<n<1M source_datasets: - extended|eli5 task_categories: - text2text-generation task_ids: - abstractive-qa - open-domain-abstractive-qa dataset_info: features: - name: q_id dtype: string - name: title dtype: string - name: selftext dtype: string - name: category dtype: string - name: subreddit dtype: string - name: answers struct: - name: a_id sequence: string - name: text sequence: string - name: score sequence: int32 - name: text_urls sequence: sequence: string - name: title_urls sequence: string - name: selftext_urls sequence: string splits: - name: train num_bytes: 166409797 num_examples: 91772 - name: validation1 num_bytes: 13150585 num_examples: 5446 - name: validation2 num_bytes: 4737744 num_examples: 2375 - name: test num_bytes: 10419098 num_examples: 5411 download_size: 72921829 dataset_size: 194717224 --- # Dataset Card for ELI5-Category ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [ELI5-Category homepage](https://celeritasml.netlify.app/posts/2021-12-01-eli5c/) - **Repository:** [ELI5-Category repository](https://github.com/rexarski/ANLY580-final-project) - **Point of Contact:** [Jingsong Gao](mailto:jg2109@georgetown.edu) ### Dataset Summary The ELI5-Category dataset is a smaller but newer and categorized version of the original ELI5 dataset. It's an English-language dataset of questions and answers gathered from the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) subreddit where users ask factual questions requiring paragraph-length or longer answers. After 2017, a tagging system was introduced to this subreddit so that the questions can be categorized into different topics according to their tags. Since the training and validation set is built by questions in different topics, the dataset is expected to alleviate the train/validation overlapping issue in the original [ELI5 dataset](https://huggingface.co/datasets/eli5). ### Supported Tasks and Leaderboards - `abstractive-qa`, `open-domain-abstractive-qa`: The dataset can be used to train a model for Open Domain Long Form Question Answering. An LFQA model is presented with a non-factoid and asked to retrieve relevant information from a knowledge source (such as [Wikipedia](https://www.wikipedia.org/)), then use it to generate a multi-sentence answer. ### Languages The text in the dataset is in English, as spoken by Reddit users on the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) subreddit. The associated BCP-47 code is `en`. ## Dataset Structure ### Data Instances The structure of this dataset is very similar to the original [ELI5 dataset](https://huggingface.co/datasets/eli5). A typical data point comprises a question, with a `title` containing the main question and a `selftext` which sometimes elaborates on it, and a list of answers from the forum sorted by scores they obtained. Additionally, the URLs in each of the text fields have been extracted to respective lists and replaced by generic tokens in the text. In addition to the original ELI5 dataset, the data point also has a `category` field. There are 11 common values of `category` in this dataset: `Biology`,`Chemistry`,`Culture`,`Earth Science`,`Economics`,`Engineering`,`Mathematics`,`Other`,`Physics`,`Psychology`,`Technology`, and a special `category`: `Repost` indicates the same question has been asked before. An example from the ELI5-Category set looks as follows: ``` {'q_id': '5lcm18', 'title': 'Why do old games running on new hardware still have technical issues ?', 'selftext': 'I am playing some mega man games on my Xbox One and experience slowdown when there are a lot of enemies on screen . but the Xbox One is significantly more powerful than the NES , so why is there still slowdown on this hardware ?', 'category': 'Engineering', 'subreddit': 'explainlikeimfive', 'answers': {'a_id': ['dbuo48e', 'dbusfve'], 'text': ["The XBox is emulating NES hardware and running the emulation at a set speed . If it ran it at as fast as possible , then it would be several times faster than the original NES game and would be unplayable . I ca n't speak for Mega Man exactly , but older games tended to run on a cycle locked to the screen refresh which was a fixed 60Hz or 50Hz . There was only one piece of hardware they ran on , so there was no need to adjust for different hardware speeds .", "In that case , it 's probably on purpose - they want to emulate the experience as closely as possible , even including the slowdown and sprite flickering . Some emulators let you turn it off , but it 's usually turned on by default . In other cases , like if you 're trying to emulate PS2 games on your PC , the game might just run really slow in general . Even though your PC is way more powerful than a PS2 , it has to \" translate \" from PS2 language to PC language in realtime , which is much more difficult than running PS2 code on the PS2 itself ."], 'score': [13, 3], 'text_urls': [[],[]]}, 'title_urls': {'url': []}, 'selftext_urls': {'url': []}} ``` ### Data Fields - `q_id`: a string question identifier for each example, corresponding to its ID in the [Pushshift.io](https://files.pushshift.io/reddit/submissions/) Reddit submission dumps - `subreddit`: always `explainlikeimfive`, indicating which subreddit the question came from - `category`: tag of the question, the possible values are listed above. - `title`: title of the question, with URLs extracted and replaced by `URL_n` tokens - `title_urls`: list of the extracted URLs, the `n`th element of the list was replaced by `URL_n` - `selftext`: either an empty string or an elaboration of the question - `selftext_urls`: similar to `title_urls` but for `self_text` - `answers`: a list of answers, each answer has: - `a_id`: a string answer identifier for each answer, corresponding to its ID in the [Pushshift.io](https://files.pushshift.io/reddit/comments/) Reddit comments dumps. - `text`: the answer text with the URLs normalized - `score`: the number of upvotes - the number of downvotes the answer had received when the dumps were created - `text_urls`: lists of the extracted URLs for every answer ### Data Splits In order to avoid having duplicate questions across sets, three non-overlapping subsets of `category` are used in the training, validation and test set. Also, a special validation set contains all the questions in the `Repost` category. A valid retriever-generator model should have consistent performances on both validation sets. The final split sizes are as follows: | | Train | Valid | Valid2 |Test | | ----- | ------ | ----- | ---- | ---- | | `Biology` | 32769 | | | | | `Chemistry` | 6633 | | | | | `Culture` | | 5446 | | | | `Earth Science` | 677 | | | | | `Economics` | 5901 | | | | | `Engineering` | | | | 5411 | | `Mathematics` | 1912 | | | | | `Other` | 19312 | | | | | `Physics` | 10196 | | | | | `Psychology` | 338 | | | | | `Technology` | 14034 | | | | | `Repost` | | | 2375 | | | **Total** | 91772 | 5446 | 2375 | 5411 | ## Dataset Creation ### Curation Rationale ELI5-Category was built to provide a testbed for machines to learn how to answer more complex questions, which requires them to find and combine the information in a coherent manner. The dataset was built by gathering questions that were asked by community members of three subreddits, including [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/), along with the answers that were provided by other users. The [rules of the subreddit](https://www.reddit.com/r/explainlikeimfive/wiki/detailed_rules) make this data particularly well suited to training a model for abstractive question answering: the questions need to seek an objective explanation about well-established facts, and the answers provided need to be understandable to a layperson without any particular knowledge domain. ### Source Data #### Initial Data Collection and Normalization The data was obtained by filtering submissions and comments from the subreddits of interest from the XML dumps of the [Reddit forum](https://www.reddit.com/) hosted on [Pushshift.io](https://files.pushshift.io/reddit/). In order to further improve the quality of the selected examples, only questions with a score of at least 2 and at least one answer with a score of at least 2 were selected for the dataset. The dataset questions and answers span a period from January 2017 to June 2021. #### Who are the source language producers? The language producers are users of the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) subreddit between 2017 and 2021. No further demographic information was available from the data source. ### Annotations The dataset contains the `category` as an additional annotation for the topics of questions. #### Annotation process The dataset is auto-annotated by the tags of posts in the [Reddit forum](https://www.reddit.com/). #### Who are the annotators? The annotators are users/administrators of the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) subreddit between 2017 and 2021. No further demographic information was available from the data source. ### Personal and Sensitive Information The authors removed the speaker IDs from the [Pushshift.io](https://files.pushshift.io/reddit/) dumps but did not otherwise anonymize the data. Some questions and answers are about contemporary public figures or individuals who appeared in the news. ## Considerations for Using the Data ### Social Impact of Dataset The dataset has a similar social impact to the original ELI5 dataset [Social Impact of Dataset](https://huggingface.co/datasets/eli5#social-impact-of-dataset). ### Discussion of Biases The dataset has similar considerations of biases to the original ELI5 dataset [Discussion of Biases](https://huggingface.co/datasets/eli5#discussion-of-biases). ### Other Known Limitations The dataset has similar limitations to the original ELI5 dataset [Other Known Limitations](https://huggingface.co/datasets/eli5#other-known-limitations). ## Additional Information ### Dataset Curators The dataset was initially created by Jingsong Gao, Qinren Zhou, Rui Qiu, during a course project of `ANLY 580`: NLP for Data Analytics at Georgetown University. ### Licensing Information The licensing status of the dataset hinges on the legal status of the [Pushshift.io](https://files.pushshift.io/reddit/) data which is unclear. ### Citation Information ``` @inproceedings{eli5-category, author = {Jingsong Gao and Qingren Zhou and Rui Qiu}, title = {{ELI5-Category:} A categorized open-domain QA dataset}, year = {2021} } ``` ### Contributions Thanks to [@jingshenSN2](https://github.com/jingshenSN2), [@QinrenZhou](https://github.com/QinrenZhou), [@rexarski](https://github.com/rexarski) for adding this dataset.
carblacac/twitter-sentiment-analysis
--- pretty_name: "TSATC: Twitter Sentiment Analysis Training Corpus" annotations_creators: - expert-generated language_creators: - other language: - en license: - apache-2.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - text-classification task_ids: - feeling-classification paperswithcode_id: other configs: - None --- # Dataset Card for TSATC: Twitter Sentiment Analysis Training Corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [TSATC](https://github.com/cblancac/SentimentAnalysisBert/blob/main/data) - **Repository:** [TSATC](https://github.com/cblancac/SentimentAnalysisBert/blob/main/data) - **Paper:** [TSATC: Twitter Sentiment Analysis Training Corpus](http://thinknook.com/twitter-sentiment-analysis-training-corpus-dataset-2012-09-22/) - **Point of Contact:** [Carlos Blanco](carblacac7@gmail.com) ### Dataset Summary TSATC: Twitter Sentiment Analysis Training Corpus The original Twitter Sentiment Analysis Dataset contains 1,578,627 classified tweets, each row is marked as 1 for positive sentiment and 0 for negative sentiment. It can be downloaded from http://thinknook.com/wp-content/uploads/2012/09/Sentiment-Analysis-Dataset.zip. The dataset is based on data from the following two sources: University of Michigan Sentiment Analysis competition on Kaggle Twitter Sentiment Corpus by Niek Sanders This dataset has been transformed, selecting in a random way a subset of them, applying a cleaning process, and dividing them between the test and train subsets, keeping a balance between the number of positive and negative tweets within each of these subsets. These two files can be founded on https://github.com/cblancac/SentimentAnalysisBert/blob/main/data. Finally, the train subset has been divided in two smallest datasets, train (80%) and validation (20%). The final dataset has been created with these two new subdatasets plus the previous test dataset. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The text in the dataset is in English. ## Dataset Structure ### Data Instances Below are two examples from the dataset: | | Text | Feeling | | :-- | :---------------------------- | :------ | | (1) | blaaah. I don't feel good aagain. | 0 | | (2) | My birthday is coming June 3. | 1 | ### Data Fields In the final dataset, all files are in the JSON format with f columns: | Column Name | Data | | :------------ | :-------------------------- | | text | A sentence (or tweet) | | feeling | The feeling of the sentence | Each feeling has two possible values: `0` indicates the sentence has a negative sentiment, while `1` indicates a positive feeling. ### Data Splits The number of examples and the proportion sentiments are shown below: | Data | Train | Validation | Test | | :------------------ | ------: | ------------: | ----: | | Size | 119.988 | 29.997 | 61.998 | | Labeled positive | 60.019 | 14.947 | 31029 | | Labeled negative | 59.969 | 15.050 | 30969 | ## Dataset Creation ### Curation Rationale Existing paraphrase identification datasets lack sentence pairs that have high lexical overlap without being paraphrases. Models trained on such data fail to distinguish pairs like *flights from New York to Florida* and *flights from Florida to New York*. ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? Mentioned above. ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Citation Information ``` @InProceedings{paws2019naacl, title = {{TSATC: Twitter Sentiment Analysis Training Corpus}}, author = {Ibrahim Naji}, booktitle = {thinknook}, year = {2012} } ``` ### Contributions Thanks to myself [@carblacac](https://github.com/cblancac/) for adding this transformed dataset from the original one.
wiki_lingua
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - ar - cs - de - en - es - fr - hi - id - it - ja - ko - nl - pt - ru - th - tr - vi - zh license: - cc-by-3.0 multilinguality: - multilingual size_categories: - 10K<n<100K - 1K<n<10K source_datasets: - original task_categories: - summarization task_ids: [] paperswithcode_id: wikilingua pretty_name: WikiLingua config_names: - arabic - chinese - czech - dutch - english - french - german - hindi - indonesian - italian - japanese - korean - portuguese - russian - spanish - thai - turkish - vietnamese dataset_info: - config_name: arabic features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 119116075 num_examples: 9995 download_size: 55808460 dataset_size: 119116075 - config_name: chinese features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 41170645 num_examples: 6541 download_size: 25187026 dataset_size: 41170645 - config_name: czech features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 20816346 num_examples: 2520 download_size: 12480761 dataset_size: 20816346 - config_name: dutch features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 87257952 num_examples: 10862 download_size: 47651076 dataset_size: 87257952 - config_name: english features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string splits: - name: train num_bytes: 333699946 num_examples: 57945 download_size: 187189233 dataset_size: 333699946 - config_name: french features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 197550244 num_examples: 21690 download_size: 105158840 dataset_size: 197550244 - config_name: german features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 168674208 num_examples: 20103 download_size: 93078076 dataset_size: 168674208 - config_name: hindi features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 63785007 num_examples: 3402 download_size: 22774620 dataset_size: 63785007 - config_name: indonesian features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 136408773 num_examples: 16308 download_size: 67658970 dataset_size: 136408773 - config_name: italian features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 138119439 num_examples: 17673 download_size: 78108134 dataset_size: 138119439 - config_name: japanese features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 40144987 num_examples: 4372 download_size: 19794488 dataset_size: 40144987 - config_name: korean features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 38647570 num_examples: 4111 download_size: 20029486 dataset_size: 38647570 - config_name: portuguese features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 204270713 num_examples: 28143 download_size: 114735912 dataset_size: 204270713 - config_name: russian features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 241923944 num_examples: 18143 download_size: 111025228 dataset_size: 241923944 - config_name: spanish features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 314618442 num_examples: 38795 download_size: 170995186 dataset_size: 314618442 - config_name: thai features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 86982807 num_examples: 5093 download_size: 31944979 dataset_size: 86982807 - config_name: turkish features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 11371777 num_examples: 1512 download_size: 5964904 dataset_size: 11371777 - config_name: vietnamese features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 69868744 num_examples: 6616 download_size: 33194150 dataset_size: 69868744 configs: - config_name: arabic data_files: - split: train path: arabic/train-* - config_name: chinese data_files: - split: train path: chinese/train-* - config_name: czech data_files: - split: train path: czech/train-* - config_name: dutch data_files: - split: train path: dutch/train-* - config_name: english data_files: - split: train path: english/train-* default: true - config_name: french data_files: - split: train path: french/train-* - config_name: german data_files: - split: train path: german/train-* - config_name: hindi data_files: - split: train path: hindi/train-* - config_name: indonesian data_files: - split: train path: indonesian/train-* - config_name: italian data_files: - split: train path: italian/train-* - config_name: japanese data_files: - split: train path: japanese/train-* - config_name: korean data_files: - split: train path: korean/train-* - config_name: portuguese data_files: - split: train path: portuguese/train-* - config_name: russian data_files: - split: train path: russian/train-* - config_name: spanish data_files: - split: train path: spanish/train-* - config_name: thai data_files: - split: train path: thai/train-* - config_name: turkish data_files: - split: train path: turkish/train-* - config_name: vietnamese data_files: - split: train path: vietnamese/train-* --- # Dataset Card for "wiki_lingua" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [URL](https://github.com/esdurmus/Wikilingua) - **Paper:** [WikiLingua: A Multilingual Abstractive Summarization Dataset](https://arxiv.org/abs/2010.03093) ### Dataset Summary We introduce WikiLingua, a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems. We extract article and summary pairs in 18 languages from WikiHow, a high quality, collaborative resource of how-to guides on a diverse set of topics written by human authors. We create gold-standard article-summary alignments across languages by aligning the images that are used to describe each how-to step in an article. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The table below shows number of article-summary pairs with a parallel article-summary pair in English. ______________________________ | Language | Num. parallel | | ----------- | --------------| | English | 141,457 | | Spanish | 113,215 | | Portuguese | 81,695 | | French | 63,692 | | German | 58,375 | | Russian | 52,928 | | Italian | 50,968 | | Indonesian | 47,511 | | Dutch | 31,270 | | Arabic | 29,229 | | Vietnamese | 19,600 | | Chinese | 18,887 | | Thai | 14,770 | | Japanese | 12,669 | | Korean | 12,189 | | Hindi | 9,929 | | Czech | 7,200 | | Turkish | 4,503 | ## Dataset Structure ### Data Instances ``` { 'article': { 'document': ['make sure that the area is a safe place, especially if you plan on walking home at night. It’s always a good idea to practice the buddy system. Have a friend meet up and walk with you. Research the bus, train, or streetcar routes available in your area to find safe and affordable travel to your destination. Make sure you check the schedule for your outgoing and return travel. Some public transportation will cease to run late at night. Be sure if you take public transportation to the venue that you will also be able to get home late at night. Check the routes. Even if some public transit is still running late at night, the routing may change. Some may run express past many of the stops, or not travel all the way to the ends. Be sure that your stop will still be available when you need it for your return trip. If you are taking public transit in a vulnerable state after drinking, it is always a good idea to travel in groups. Having friends available is a good way to stay safe and make sure that you reach your destination. This is more expensive option than a taxi or ride share service, but could be a fun and fancy way to stay safe and ensure that you will have a ride home. Plan this service in advance with a scheduled time to pick you up from your home and the venue. You want to be sure that the service will still be available when you need to get home. This may be easy in a large city, but taxis may be less frequent in smaller towns. This is especially true late at night, so this is a less reliable option than scheduling a ride in advance. Have a friend accompany you and help you flag a cab to make sure you are able to get one. Set up a plan to call a friend when you get home to make sure that you made it safely to your destination. If there are no taxis readily available call a local service to send a car to pick you up. You can share a ride with your friends, or other people using the app at the same moment. If you are in a vulnerable state it is best to share the ride with your friends to make sure you get home safe. You can request the car to yourself rather than sharing rides with strangers. If you travel home on your own or are the last of your group to be dropped off, make plans to call a friend when you get home so they know you made it safely to your destination. There may be a designated driver service in your area which can chauffeur your group. Make reservations with them in advance and keep their contact information handy while you are drinking.', "Designating a driver is a very popular tactic to avoid drinking and driving. It is important to plan in advance, because your brain function will slow down and your decision making skills will be impaired once you start drinking. Decide before you begin drinking that you will not drive. Figure out who will be getting you home before you leave. Make sure this person is responsible and keep them in your sight while you are drinking. Have their contact information handy in case you can’t find them when you are ready to leave. Choose a friend who doesn’t drink alcohol. You likely have someone in your friend group who doesn’t drink. This person is the most likely to remain sober. Decide on one person who will remain sober. You can take turns within your friend group, alternating who will be the designated driver on each occasion. Be sure that the designated driver actually remains sober. The person who has drank the least is still not sober. If you don’t have your car with you, you can guarantee that you won’t make the choice to drive it home. If you are drinking at your home. Give your keys to a responsible friend to ensure that you don't choose to drive somewhere after you have been drinking. It may be tempting to stay longer or leave with someone else. Stick to the plan you made in advance and only leave with your sober, designated driver. Keep the phone number of your driver handy in case you can't find them when you are ready to leave. If your designated driver drinks alcohol, find alternate transportation to get home.", 'If you have been drinking at all you are at least on the spectrum of drunkenness. You could be showing signs of impairment and slower brain function including lack of motor skills and slower reaction time, leading to the inability to operate a motor vehicle. Some of these signs could be: Poor balance or stumbling. Difficulty speaking clearly and slurred words. Abnormal behavior leading to you doing things you wouldn’t normally do if you were sober. As soon as you notice that you are showing signs of impairment, give your keys to a friend, the host or the bartender to ensure that you won’t drive until you are sober. Make sure to only give them your car key. Hold onto your house keys. If your friend, the host or the bartender are advising you not to drive, you are likely too drunk. Listen to their advice and acknowledge that they are trying to help you. Bystander intervention is common when it comes to drinking and driving. Many people will be willing to step in, take your keys and help you get home safely. If no one if offering to help, you may need to ask. Take a ride from a sober friend. It is best to get in a car with someone you trust when you are in this vulnerable state. Allow the host or bartender to call a cab or car service to take you home. If you are having a difficult time finding a safe way to get home, find a place to stay which does not involve you driving. Ask the host of the party if there is a place you can sleep. Give them your keys and ask that they keep them in a safe place until the morning. Stay with a friend if they live nearby and are on their way home. Find a hotel within walking distance. Call them to book a room, or have a friend help you secure one. Ask the friend if they will walk you to the hotel and make sure you get checked in safely. There are people in your life who care about you and want to be sure that you are safe. It may seem scary or embarrassing to call your parents or your siblings if you are too drunk to drive, but they will be glad you did. Your safety is the most important. You may need your phone to call someone for a ride or get help from a friend. Be sure to charge your phone before you leave the house. It is also a good idea to bring a charger with you in case your battery dies before the end of the night or you end up staying where you are and need to get home the next morning. You may also want to invest in a portable battery charger for your phone should there not be a power outlet available. Make sure it is fully charged before you leave your house. Keep it handy in your pocket or your bag throughout the night.' ], 'section_name': ['Finding Other Transportation', 'Designating a Driver', 'Staying Safe' ], 'summary': ['Walk to the venue where you will be drinking if it is close enough. Take public transit. Show up in style by hiring a limo or black car service. Flag a taxi cab for a convenient option to get where you’re going. Request a rideshare service like Uber or Lyft using an app on your phone. Reserve a designated driver service.', 'Plan in advance. Assign a designated driver. Leave your car at home. Leave the venue with your designated driver.', 'Pay attention to your body. Give up your keys. Listen to other people. Accept help. Stay where you are. Have an emergency back-up plan. Make sure that your phone is charged.' ] }, 'url': 'https://www.wikihow.com/Avoid-Drinking-and-Driving' } ``` ### Data Fields - `url`: WikiHow URL of the article - `article`: A dictionary containing `section_name`, `document` and `summary` - `section_name`: List of section headings in an article - `document`: List of documents, one for each section in the `section_name` list - `summary`: List of summarized document ### Data Splits | | train | |:-----------|--------:| | arabic | 9995 | | chinese | 6541 | | czech | 2520 | | dutch | 10862 | | english | 57945 | | french | 21690 | | german | 20103 | | hindi | 3402 | | indonesian | 16308 | | italian | 17673 | | japanese | 4372 | | korean | 4111 | | portuguese | 28143 | | russian | 18143 | | spanish | 6616 | | thai | 5093 | | turkish | 1512 | | vietnamese | 6616 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information - Article provided by wikiHow https://www.wikihow.com/Main-Page, a wiki building the world's largest, highest quality how-to manual. Please edit this article and find author credits at wikiHow.com. Content on wikiHow can be shared under a [Creative Commons license](http://creativecommons.org/licenses/by-nc-sa/3.0/). - Refer to [this webpage](https://www.wikihow.com/wikiHow:Attribution) for the specific attribution guidelines. - also see https://gem-benchmark.com/data_cards/WikiLingua ### Citation Information ```bibtex @inproceedings{ladhak-etal-2020-wikilingua, title = "{W}iki{L}ingua: A New Benchmark Dataset for Cross-Lingual Abstractive Summarization", author = "Ladhak, Faisal and Durmus, Esin and Cardie, Claire and McKeown, Kathleen", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.findings-emnlp.360", doi = "10.18653/v1/2020.findings-emnlp.360", pages = "4034--4048", } ``` ### Contributions Thanks to [@katnoria](https://github.com/katnoria) for adding this dataset.
UCLA-AGI/SPIN_iter0
--- license: apache-2.0 ---
princeton-nlp/SWE-bench_Lite
--- dataset_info: features: - name: repo dtype: string - name: instance_id dtype: string - name: base_commit dtype: string - name: patch dtype: string - name: test_patch dtype: string - name: problem_statement dtype: string - name: hints_text dtype: string - name: created_at dtype: string - name: version dtype: string - name: FAIL_TO_PASS dtype: string - name: PASS_TO_PASS dtype: string - name: environment_setup_commit dtype: string splits: - name: dev num_bytes: 232250 num_examples: 23 - name: test num_bytes: 3527075 num_examples: 300 download_size: 1296334 dataset_size: 3759325 configs: - config_name: default data_files: - split: dev path: data/dev-* - split: test path: data/test-* --- ### Dataset Summary SWE-bench *Lite* is _subset_ of [SWE-bench](https://huggingface.co/datasets/princeton-nlp/SWE-bench), a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 300 test Issue-Pull Request pairs from 11 popular Python. Evaluation is performed by unit test verification using post-PR behavior as the reference solution. The dataset was released as part of [SWE-bench: Can Language Models Resolve Real-World GitHub Issues?](https://arxiv.org/abs/2310.06770) ## Want to run inference now? This dataset only contains the `problem_statement` (i.e. issue text) and the `base_commit` which can represents the state of the codebase before the issue has been resolved. If you want to run inference using the "Oracle" or BM25 retrieval settings mentioned in the paper, consider the following datasets. [princeton-nlp/SWE-bench_Lite_oracle](https://huggingface.co/datasets/princeton-nlp/SWE-bench_Lite_oracle) [princeton-nlp/SWE-bench_Lite_bm25_13K](https://huggingface.co/datasets/princeton-nlp/SWE-bench_Lite_bm25_13K) [princeton-nlp/SWE-bench_Lite_bm25_27K](https://huggingface.co/datasets/princeton-nlp/SWE-bench_Lite_bm25_27K) ### Supported Tasks and Leaderboards SWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at www.swebench.com ### Languages The text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type. ## Dataset Structure ### Data Instances An example of a SWE-bench datum is as follows: ``` instance_id: (str) - A formatted instance identifier, usually as repo_owner__repo_name-PR-number. patch: (str) - The gold patch, the patch generated by the PR (minus test-related code), that resolved the issue. repo: (str) - The repository owner/name identifier from GitHub. base_commit: (str) - The commit hash of the repository representing the HEAD of the repository before the solution PR is applied. hints_text: (str) - Comments made on the issue prior to the creation of the solution PR’s first commit creation date. created_at: (str) - The creation date of the pull request. test_patch: (str) - A test-file patch that was contributed by the solution PR. problem_statement: (str) - The issue title and body. version: (str) - Installation version to use for running evaluation. environment_setup_commit: (str) - commit hash to use for environment setup and installation. FAIL_TO_PASS: (str) - A json list of strings that represent the set of tests resolved by the PR and tied to the issue resolution. PASS_TO_PASS: (str) - A json list of strings that represent tests that should pass before and after the PR application. ``` [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Lakera/gandalf_ignore_instructions
--- language: - en license: mit size_categories: - 1K<n<10K dataset_info: features: - name: text dtype: string - name: similarity dtype: float64 splits: - name: train num_bytes: 66400 num_examples: 777 - name: validation num_bytes: 9633 num_examples: 111 - name: test num_bytes: 9747 num_examples: 112 download_size: 51515 dataset_size: 85780 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* tags: - prompt injection --- # gandalf_ignore_instructions ![](https://gandalf.lakera.ai/level-images/word-blacklist.jpg) This is a dataset of prompt injections from [Gandalf](https://gandalf.lakera.ai/) by [Lakera](https://www.lakera.ai/). Note that we might update the dataset occasionally by cleaning the data or adding more samples. ## How the data was obtained There are millions of prompts and many of them are not actual prompt injections (people ask Gandalf all kinds of things). We used the following process to obtain relevant data: - Start with all prompts submitted to Gandalf in July 2023. - Use [OpenAI text embeddings](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) (`text-embedding-ada-002`) to compute the similarity between the prompts and the phrase "Ignore all previous instructions". - Filter to those prompts that have similarity of at least 0.825. - Use [Lakera Guard](https://platform.lakera.ai/docs)'s personally identifiable information detector to filter out prompts that potentially contain PII data. - Remove near-duplicates from the data (prompts that differ only by a few letters) using an approximate algorithm. This helps reduce leakage between the data splits. - Sample 1000 prompts. - Split the data into train-val-test with an 80/10/10 ratio. Each sample is assigned independently so the size of the train split is not _exactly_ 80% and so on. Note that there is a small amount of noise in the data since an automatic method was used to obtain it: a few of the samples might not be real prompt injections. ## Citation If you use this dataset in your research, please cite it as ``` @InProceedings{gandalf_ignore_instructions, title = {gandalf_ignore_instructions}, author={Lakera AI (https://www.lakera.ai)}, year={2023} } ``` ## Licensing Information gandalf_ignore_instructions is distributed under the [MIT License](https://opensource.org/license/mit/).
aeslc
--- annotations_creators: - crowdsourced language_creators: - found language: - en license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - summarization task_ids: [] paperswithcode_id: aeslc pretty_name: 'AESLC: Annotated Enron Subject Line Corpus' tags: - aspect-based-summarization - conversations-summarization - multi-document-summarization - email-headline-generation dataset_info: features: - name: email_body dtype: string - name: subject_line dtype: string splits: - name: train num_bytes: 11897245 num_examples: 14436 - name: validation num_bytes: 1659987 num_examples: 1960 - name: test num_bytes: 1383452 num_examples: 1906 download_size: 7948020 dataset_size: 14940684 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* --- # Dataset Card for "aeslc" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** https://github.com/ryanzhumich/AESLC - **Paper:** [This Email Could Save Your Life: Introducing the Task of Email Subject Line Generation](https://arxiv.org/abs/1906.03497) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 11.64 MB - **Size of the generated dataset:** 14.95 MB - **Total amount of disk used:** 26.59 MB ### Dataset Summary A collection of email messages of employees in the Enron Corporation. There are two features: - email_body: email body text. - subject_line: email subject text. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages Monolingual English (mainly en-US) with some exceptions. ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 11.64 MB - **Size of the generated dataset:** 14.95 MB - **Total amount of disk used:** 26.59 MB An example of 'train' looks as follows. ``` { "email_body": "B/C\n<<some doc>>\n", "subject_line": "Service Agreement" } ``` ### Data Fields The data fields are the same among all splits. #### default - `email_body`: a `string` feature. - `subject_line`: a `string` feature. ### Data Splits | name |train|validation|test| |-------|----:|---------:|---:| |default|14436| 1960|1906| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{zhang-tetreault-2019-email, title = "This Email Could Save Your Life: Introducing the Task of Email Subject Line Generation", author = "Zhang, Rui and Tetreault, Joel", booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P19-1043", doi = "10.18653/v1/P19-1043", pages = "446--456", } ``` ### Contributions Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun) for adding this dataset.
rajpurkar/squad_v2
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - cc-by-sa-4.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - question-answering task_ids: - open-domain-qa - extractive-qa paperswithcode_id: squad pretty_name: SQuAD2.0 dataset_info: config_name: squad_v2 features: - name: id dtype: string - name: title dtype: string - name: context dtype: string - name: question dtype: string - name: answers sequence: - name: text dtype: string - name: answer_start dtype: int32 splits: - name: train num_bytes: 116732025 num_examples: 130319 - name: validation num_bytes: 11661091 num_examples: 11873 download_size: 17720493 dataset_size: 128393116 configs: - config_name: squad_v2 data_files: - split: train path: squad_v2/train-* - split: validation path: squad_v2/validation-* default: true train-eval-index: - config: squad_v2 task: question-answering task_id: extractive_question_answering splits: train_split: train eval_split: validation col_mapping: question: question context: context answers: text: text answer_start: answer_start metrics: - type: squad_v2 name: SQuAD v2 --- # Dataset Card for SQuAD 2.0 ## Table of Contents - [Dataset Card for "squad_v2"](#dataset-card-for-squad_v2) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [squad_v2](#squad_v2) - [Data Fields](#data-fields) - [squad_v2](#squad_v2-1) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://rajpurkar.github.io/SQuAD-explorer/ - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** https://arxiv.org/abs/1806.03822 - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Dataset Summary Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. SQuAD 2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. ### Supported Tasks and Leaderboards Question Answering. ### Languages English (`en`). ## Dataset Structure ### Data Instances #### squad_v2 - **Size of downloaded dataset files:** 46.49 MB - **Size of the generated dataset:** 128.52 MB - **Total amount of disk used:** 175.02 MB An example of 'validation' looks as follows. ``` This example was too long and was cropped: { "answers": { "answer_start": [94, 87, 94, 94], "text": ["10th and 11th centuries", "in the 10th and 11th centuries", "10th and 11th centuries", "10th and 11th centuries"] }, "context": "\"The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave thei...", "id": "56ddde6b9a695914005b9629", "question": "When were the Normans in Normandy?", "title": "Normans" } ``` ### Data Fields The data fields are the same among all splits. #### squad_v2 - `id`: a `string` feature. - `title`: a `string` feature. - `context`: a `string` feature. - `question`: a `string` feature. - `answers`: a dictionary feature containing: - `text`: a `string` feature. - `answer_start`: a `int32` feature. ### Data Splits | name | train | validation | | -------- | -----: | ---------: | | squad_v2 | 130319 | 11873 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information The dataset is distributed under the CC BY-SA 4.0 license. ### Citation Information ``` @inproceedings{rajpurkar-etal-2018-know, title = "Know What You Don{'}t Know: Unanswerable Questions for {SQ}u{AD}", author = "Rajpurkar, Pranav and Jia, Robin and Liang, Percy", editor = "Gurevych, Iryna and Miyao, Yusuke", booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2018", address = "Melbourne, Australia", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P18-2124", doi = "10.18653/v1/P18-2124", pages = "784--789", eprint={1806.03822}, archivePrefix={arXiv}, primaryClass={cs.CL} } @inproceedings{rajpurkar-etal-2016-squad, title = "{SQ}u{AD}: 100,000+ Questions for Machine Comprehension of Text", author = "Rajpurkar, Pranav and Zhang, Jian and Lopyrev, Konstantin and Liang, Percy", editor = "Su, Jian and Duh, Kevin and Carreras, Xavier", booktitle = "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2016", address = "Austin, Texas", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/D16-1264", doi = "10.18653/v1/D16-1264", pages = "2383--2392", eprint={1606.05250}, archivePrefix={arXiv}, primaryClass={cs.CL}, } ``` ### Contributions Thanks to [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
reuters21578
--- language: - en license: other paperswithcode_id: reuters-21578 pretty_name: Reuters-21578 Text Categorization Collection dataset_info: - config_name: ModApte features: - name: text dtype: string - name: text_type dtype: string - name: topics sequence: string - name: lewis_split dtype: string - name: cgis_split dtype: string - name: old_id dtype: string - name: new_id dtype: string - name: places sequence: string - name: people sequence: string - name: orgs sequence: string - name: exchanges sequence: string - name: date dtype: string - name: title dtype: string splits: - name: test num_bytes: 2971653 num_examples: 3299 - name: train num_bytes: 9161179 num_examples: 9603 - name: unused num_bytes: 948244 num_examples: 722 download_size: 8150596 dataset_size: 13081076 - config_name: ModHayes features: - name: text dtype: string - name: text_type dtype: string - name: topics sequence: string - name: lewis_split dtype: string - name: cgis_split dtype: string - name: old_id dtype: string - name: new_id dtype: string - name: places sequence: string - name: people sequence: string - name: orgs sequence: string - name: exchanges sequence: string - name: date dtype: string - name: title dtype: string splits: - name: test num_bytes: 948244 num_examples: 722 - name: train num_bytes: 19071106 num_examples: 20856 download_size: 8150596 dataset_size: 20019350 - config_name: ModLewis features: - name: text dtype: string - name: text_type dtype: string - name: topics sequence: string - name: lewis_split dtype: string - name: cgis_split dtype: string - name: old_id dtype: string - name: new_id dtype: string - name: places sequence: string - name: people sequence: string - name: orgs sequence: string - name: exchanges sequence: string - name: date dtype: string - name: title dtype: string splits: - name: test num_bytes: 5400506 num_examples: 6188 - name: train num_bytes: 12994591 num_examples: 13625 - name: unused num_bytes: 948244 num_examples: 722 download_size: 8150596 dataset_size: 19343341 --- # Dataset Card for "reuters21578" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://archive.ics.uci.edu/dataset/137/reuters+21578+text+categorization+collection - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 24.45 MB - **Size of the generated dataset:** 52.22 MB - **Total amount of disk used:** 76.67 MB ### Dataset Summary The Reuters-21578 dataset is one of the most widely used data collections for text categorization research. It is collected from the Reuters financial newswire service in 1987. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### ModApte - **Size of downloaded dataset files:** 8.15 MB - **Size of the generated dataset:** 13.05 MB - **Total amount of disk used:** 21.21 MB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "cgis_split": "\"TRAINING-SET\"", "date": "19-MAR-1987 06:17:22.36", "exchanges": [], "lewis_split": "\"TRAIN\"", "new_id": "\"7001\"", "old_id": "\"11914\"", "orgs": [], "people": [], "places": ["australia"], "text": "\"Media group John Fairfax Ltd &lt;FFXA.S>\\nsaid that its flat first half net profit partly reflected the\\nimpact of changes in t...", "title": "FAIRFAX SAYS HIGHER TAX HITS FIRST HALF EARNINGS", "topics": ["earn"] } ``` #### ModHayes - **Size of downloaded dataset files:** 8.15 MB - **Size of the generated dataset:** 19.79 MB - **Total amount of disk used:** 27.93 MB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "cgis_split": "\"TRAINING-SET\"", "date": "19-OCT-1987 23:49:31.45", "exchanges": [], "lewis_split": "\"TEST\"", "new_id": "\"20001\"", "old_id": "\"20596\"", "orgs": [], "people": [], "places": ["japan", "usa"], "text": "\"If the dollar goes the way of Wall Street,\\nJapanese will finally move out of dollar investments in a\\nserious way, Japan inves...", "title": "IF DOLLAR FOLLOWS WALL STREET JAPANESE WILL DIVEST", "topics": ["money-fx"] } ``` #### ModLewis - **Size of downloaded dataset files:** 8.15 MB - **Size of the generated dataset:** 19.38 MB - **Total amount of disk used:** 27.54 MB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "cgis_split": "\"TRAINING-SET\"", "date": "19-MAR-1987 06:17:22.36", "exchanges": [], "lewis_split": "\"TRAIN\"", "new_id": "\"7001\"", "old_id": "\"11914\"", "orgs": [], "people": [], "places": ["australia"], "text": "\"Media group John Fairfax Ltd &lt;FFXA.S>\\nsaid that its flat first half net profit partly reflected the\\nimpact of changes in t...", "title": "FAIRFAX SAYS HIGHER TAX HITS FIRST HALF EARNINGS", "topics": ["earn"] } ``` ### Data Fields The data fields are the same among all splits. #### ModApte - `text`: a `string` feature. - `topics`: a `list` of `string` features. - `lewis_split`: a `string` feature. - `cgis_split`: a `string` feature. - `old_id`: a `string` feature. - `new_id`: a `string` feature. - `places`: a `list` of `string` features. - `people`: a `list` of `string` features. - `orgs`: a `list` of `string` features. - `exchanges`: a `list` of `string` features. - `date`: a `string` feature. - `title`: a `string` feature. #### ModHayes - `text`: a `string` feature. - `topics`: a `list` of `string` features. - `lewis_split`: a `string` feature. - `cgis_split`: a `string` feature. - `old_id`: a `string` feature. - `new_id`: a `string` feature. - `places`: a `list` of `string` features. - `people`: a `list` of `string` features. - `orgs`: a `list` of `string` features. - `exchanges`: a `list` of `string` features. - `date`: a `string` feature. - `title`: a `string` feature. #### ModLewis - `text`: a `string` feature. - `topics`: a `list` of `string` features. - `lewis_split`: a `string` feature. - `cgis_split`: a `string` feature. - `old_id`: a `string` feature. - `new_id`: a `string` feature. - `places`: a `list` of `string` features. - `people`: a `list` of `string` features. - `orgs`: a `list` of `string` features. - `exchanges`: a `list` of `string` features. - `date`: a `string` feature. - `title`: a `string` feature. ### Data Splits #### ModApte | |train|unused|test| |-------|----:|-----:|---:| |ModApte| 8762| 720|3009| #### ModHayes | |train|test| |--------|----:|---:| |ModHayes|18323| 720| #### ModLewis | |train|unused|test| |--------|----:|-----:|---:| |ModLewis|12449| 720|5458| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information According to the dataset website (https://archive.ics.uci.edu/dataset/137/reuters+21578+text+categorization+collection), this dataset is licensed under [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode) (C BY 4.0) license. However, the source data file contains a `README.txt` file with the following information under the **Copyright & Notification** section: > The copyright for the text of newswire articles and Reuters annotations in the Reuters-21578 collection resides with Reuters Ltd. Reuters Ltd. and Carnegie Group, Inc. have agreed to allow the free distribution of this data *for research purposes only*. > If you publish results based on this data set, please acknowledge its use, refer to the data set by the name "Reuters-21578, Distribution 1.0", and inform your readers of the current location of the data set (see "Availability & Questions"). ### Citation Information ``` @article{APTE94, author = {Chidanand Apt{'{e}} and Fred Damerau and Sholom M. Weiss}, title = {Automated Learning of Decision Rules for Text Categorization}, journal = {ACM Transactions on Information Systems}, year = {1994}, note = {To appear.} } @inproceedings{APTE94b, author = {Chidanand Apt{'{e}} and Fred Damerau and Sholom M. Weiss}, title = {Toward Language Independent Automated Learning of Text Categorization Models}, booktitle = {sigir94}, year = {1994}, note = {To appear.} } @inproceedings{HAYES8}, author = {Philip J. Hayes and Peggy M. Anderson and Irene B. Nirenburg and Linda M. Schmandt}, title = {{TCS}: A Shell for Content-Based Text Categorization}, booktitle = {IEEE Conference on Artificial Intelligence Applications}, year = {1990} } @inproceedings{HAYES90b, author = {Philip J. Hayes and Steven P. Weinstein}, title = {{CONSTRUE/TIS:} A System for Content-Based Indexing of a Database of News Stories}, booktitle = {Second Annual Conference on Innovative Applications of Artificial Intelligence}, year = {1990} } @incollection{HAYES92 , author = {Philip J. Hayes}, title = {Intelligent High-Volume Text Processing using Shallow, Domain-Specific Techniques}, booktitle = {Text-Based Intelligent Systems}, publisher = {Lawrence Erlbaum}, address = {Hillsdale, NJ}, year = {1992}, editor = {Paul S. Jacobs} } @inproceedings{LEWIS91c , author = {David D. Lewis}, title = {Evaluating Text Categorization}, booktitle = {Proceedings of Speech and Natural Language Workshop}, year = {1991}, month = {feb}, organization = {Defense Advanced Research Projects Agency}, publisher = {Morgan Kaufmann}, pages = {312--318} } @phdthesis{LEWIS91d, author = {David Dolan Lewis}, title = {Representation and Learning in Information Retrieval}, school = {Computer Science Dept.; Univ. of Massachusetts; Amherst, MA 01003}, year = 1992}, note = {Technical Report 91--93.} } @inproceedings{LEWIS91e, author = {David D. Lewis}, title = {Data Extraction as Text Categorization: An Experiment with the {MUC-3} Corpus}, booktitle = {Proceedings of the Third Message Understanding Evaluation and Conference}, year = {1991}, month = {may}, organization = {Defense Advanced Research Projects Agency}, publisher = {Morgan Kaufmann}, address = {Los Altos, CA} } @inproceedings{LEWIS92b, author = {David D. Lewis}, title = {An Evaluation of Phrasal and Clustered Representations on a Text Categorization Task}, booktitle = {Fifteenth Annual International ACM SIGIR Conference on Research and Development in Information Retrieval}, year = {1992}, pages = {37--50} } @inproceedings{LEWIS92d , author = {David D. Lewis and Richard M. Tong}, title = {Text Filtering in {MUC-3} and {MUC-4}}, booktitle = {Proceedings of the Fourth Message Understanding Conference ({MUC-4})}, year = {1992}, month = {jun}, organization = {Defense Advanced Research Projects Agency}, publisher = {Morgan Kaufmann}, address = {Los Altos, CA} } @inproceedings{LEWIS92e, author = {David D. Lewis}, title = {Feature Selection and Feature Extraction for Text Categorization}, booktitle = {Proceedings of Speech and Natural Language Workshop}, year = {1992}, month = {feb} , organization = {Defense Advanced Research Projects Agency}, publisher = {Morgan Kaufmann}, pages = {212--217} } @inproceedings{LEWIS94b, author = {David D. Lewis and Marc Ringuette}, title = {A Comparison of Two Learning Algorithms for Text Categorization}, booktitle = {Symposium on Document Analysis and Information Retrieval}, year = {1994}, organization = {ISRI; Univ. of Nevada, Las Vegas}, address = {Las Vegas, NV}, month = {apr}, pages = {81--93} } @article{LEWIS94d, author = {David D. Lewis and Philip J. Hayes}, title = {Guest Editorial}, journal = {ACM Transactions on Information Systems}, year = {1994}, volume = {12}, number = {3}, pages = {231}, month = {jul} } @article{SPARCKJONES76, author = {K. {Sparck Jones} and C. J. {van Rijsbergen}}, title = {Information Retrieval Test Collections}, journal = {Journal of Documentation}, year = {1976}, volume = {32}, number = {1}, pages = {59--75} } @book{WEISS91, author = {Sholom M. Weiss and Casimir A. Kulikowski}, title = {Computer Systems That Learn}, publisher = {Morgan Kaufmann}, year = {1991}, address = {San Mateo, CA} } ``` ### Contributions Thanks to [@jplu](https://github.com/jplu), [@jbragg](https://github.com/jbragg), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
yhavinga/mc4_nl_cleaned
--- annotations_creators: - no-annotation language_creators: - found language: - nl - en license: - odc-by multilinguality: - monolingual - en-nl size_categories: - n<1K - 1K<n<10K - 10K<n<100K - 100K<n<1M - 1M<n<10M - 10M<n<100M - 100M<n<1B - 1B<n<10B source_datasets: - extended task_categories: - text-generation task_ids: - language-modeling paperswithcode_id: mc4 pretty_name: mC4_nl_cleaned --- # Dataset Card for Clean Dutch mC4 ## Table of Contents - [Dataset Card for Clean](#dataset-card-for-mc4) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Preprocessing](#preprocessing) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Original Homepage:** [HF Hub](https://huggingface.co/datasets/allenai/c4) - **Paper:** [ArXiv](https://arxiv.org/abs/1910.10683) ### Dataset Summary A cleaned version (151GB) of the Dutch part (277GB) of the C4 multilingual dataset (mC4). While this dataset is monolingual, it is possible to download `en-nl` interleaved data, see the Dataset Config section below. Based on the [Common Crawl dataset](https://commoncrawl.org). The original version was prepared by [AllenAI](https://allenai.org/), hosted at the address [https://huggingface.co/datasets/allenai/c4](https://huggingface.co/datasets/allenai/c4). ### Preprocessing The Dutch portion of mC4 was cleaned in a similar fashion as the English cleaned C4 version. See [GitLab](https://gitlab.com/yhavinga/c4nlpreproc) for details. In summary, the preprocessing procedure includes: - Removing documents containing words from a selection of the [Dutch and English List of Dirty Naught Obscene and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words). - Removing sentences containing: - Less than 3 words. - A word longer than 250 characters. - An end symbol not matching end-of-sentence punctuation. - Strings associated to javascript code (e.g. `{`), lorem ipsum, policy information in Dutch or English. - Removing documents (after sentence filtering): - Containing less than 5 sentences. - Containing less than 500 or more than 50'000 characters. - Not identified as prevalently Dutch by the `LangDetect` package. Using parallel processing with 96 CPU cores on a TPUv3 via Google Cloud to perform the complete clean of all the original Dutch shards of mC4 (1024 of ~220Mb train, 4 of ~24Mb validation) required roughly 10 hours due to the demanding steps of sentence tokenization and language detection. The total size of compressed `.json.gz` files is roughly halved after the procedure. ## Dataset Structure ### Data Instances An example from the dataset: ``` { 'timestamp': '2019-02-22T15:37:25Z', 'url': 'https://ondernemingen.bnpparibasfortis.be/nl/artikel?n=vijf-gouden-tips-voor-succesvol-zaken-doen-met-japan', 'text': 'Japanse bedrijven zijn niet alleen hondstrouw aan hun leveranciers , ze betalen ook nog eens erg stipt. Alleen is het niet zo makkelijk er een voet tussen de deur te krijgen. Met de volgende tips hebt u alvast een streepje voor.\nIn Japan draait alles om vertrouwen. Neem voldoende tijd om een relatie op te bouwen.Aarzel niet om tijdig een lokale vertrouwenspersoon in te schakelen.\nJapan is een erg competitieve markt.Kwaliteit en prijs zijn erg belangrijk, u zult dus het beste van uzelf moeten geven. Gelukkig is de beloning groot. Japanse zakenlui zijn loyaal en betalen stipt!\nJapanners houden er eigenzinnige eisen op na. Kom dus niet aanzetten met uw standaardproducten voor de Europese markt. Zo moet een producent van diepvriesfrieten bijvoorbeeld perfect identieke frietjes kunnen leveren in mini- verpakkingen. Het goede nieuws is dat Japanners voor kwaliteit graag diep in hun buidel tasten.\nEn u dacht dat Europa lijdt aan reglementitis? Japanners kennen er ook wat van. Tal van voorschriften zeggen wat je wel en niet mag doen. Gelukkig zijn de regels helder geformuleerd.\nHet gebruik van het Engels is niet echt ingeburgerd in Japan. Betrek een tolk bij uw onderhandelingen en zorg voor correcte vertalingen van handleidingen of softwareprogramma’s.' } ``` ### Data Fields The data contains the following fields: - `url`: url of the source as a string - `text`: text content as a string - `timestamp`: timestamp of extraction as a string ### Data Configs To build mC4, the original authors used [CLD3](https://github.com/google/cld3) to identify over 100 languages. For Dutch, the whole corpus of scraped text was divided in `1032` jsonl files, `1024` for training following the naming style `c4-nl-cleaned.tfrecord-0XXXX-of-01024.json.gz` and 4 for validation following the naming style `c4-nl-cleaned.tfrecord-0000X-of-00004.json.gz`. The full set of pre-processed files takes roughly 208GB of disk space to download with Git LFS. For ease of use under different storage capacities, the following incremental configs are available: (note: files on disk are compressed) | config | train size (docs, words, download + preproc disk space) | validation size | |:-------|--------------------------------------------------------:|----------------:| | micro | 125k docs, 23M words (<1GB) | 16k docs | | tiny | 6M docs, 2B words (6 GB + 15 GB) | 16k docs | | small | 15M docs, 6B words (14 GB + 36 GB) | 16k docs | | medium | 31M docs, 12B words (28 GB + 72 GB) | 32k docs | | large | 47M docs, 19B words (42 GB + 108 GB) | 48k docs | | full | 64M docs, 25B words (58 GB + 148 GB) | 64k docs | For each config above there also exists a config `<name>_en_nl` that interleaves `nl` and `en` examples from the cleaned `en` variant of C4. You can load any config like this: ```python from datasets import load_dataset datasets = load_dataset('yhavinga/mc4_nl_cleaned', 'tiny', streaming=True) print(datasets) ``` This will print ``` DatasetDict({ train: Dataset({ features: ['text', 'timestamp', 'url'], num_rows: 6303893 }) validation: Dataset({ features: ['text', 'timestamp', 'url'], num_rows: 16189 }) }) ``` Since the configs are quite large, you may want to traverse them using the streaming mode available starting from — Datasets v1.9.0: ```python from datasets import load_dataset mc4_nl_full_stream = load_dataset('yhavinga/mc4_nl_cleaned', "full", split='train', streaming=True) print(next(iter(mc4_nl_full_stream))) # Prints the example presented above ``` ## Dataset Creation Refer to the original paper for more considerations regarding the choice of sources and the scraping process for creating `mC4`. ## Considerations for Using the Data ### Social Impact of Dataset With more than 151GB (58GB compressed) of cleaned Dutch text and more than 23B estimated words, this is by far the largest available cleaned corpus for the Dutch language. The second largest dataset available is [OSCAR](https://oscar-corpus.com/), which is only 39GB in size for its deduplicated variant, and contains vulgarity. Using this corpus for training language models with adequate computational resources will allow researchers to reach parity with the performances observed for the English language. This can in turn have important repercussions for the development of commercial language technology applications for the Dutch language. ### Discussion of Biases Despite the cleaning procedure aimed at removing vulgarity and profanity, it must be considered that model trained on this scraped corpus will inevitably reflect biases present in blog articles and comments on the Internet. This makes the corpus especially interesting in the context of studying data biases and how to limit their impacts. ## Additional Information ### Licensing Information AllenAI are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset. ### Citation Information ``` @article{2019t5, author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu}, title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer}, journal = {arXiv e-prints}, year = {2019}, archivePrefix = {arXiv}, eprint = {1910.10683}, } ``` ### Contributions Thanks to [gabriele.sarti996@gmail.com](mailto:gabriele.sarti996@gmail.com), [@dirkgr](https://github.com/dirkgr) and [@lhoestq](https://github.com/lhoestq) for providing the `cleaned_it_mc4` example that shows how upload a dataset to the Huggingface hub.
omegalabsinc/omega-multimodal
--- license: mit --- # OMEGA Labs Bittensor Subnet: Multimodal Dataset for AGI Research [![OMEGA](https://huggingface.co/datasets/omegalabsinc/omega-multimodal/resolve/main/galacticlandscape.png)](https://omegatron.ai) ## Introduction The OMEGA Labs Bittensor Subnet Dataset is a groundbreaking resource for accelerating Artificial General Intelligence (AGI) research and development. This dataset, powered by the Bittensor decentralized network, aims to be the world's largest multimodal dataset, capturing the vast landscape of human knowledge and creation. With over 1 million hours of footage and 30 million+ 2-minute video clips, the OMEGA Labs dataset will offer unparalleled scale and diversity, covering 50+ scenarios and 15,000+ action phrases. By leveraging state-of-the-art models to translate video components into a unified latent space, this dataset enables the development of powerful AGI models and has the potential to transform various industries. ## Key Features - 🌍 **Constant Stream of Fresh Data**: The OMEGA dataset is constantly updated with new entries scraped by miners on Bittensor's decentralized AI network. We estimate that within a few weeks, we can get to 5M+ new videos added daily. - 📈 **Rich Data**: In addition to scale, we are focused on scraping relevant, high quality data. Using [ImageBind](https://imagebind.metademolab.com/demo) embeddings of the submitted videos and corresponding captions, miners are rewarded based on three factors: - **Diversity**: The further away each new datapoint is from existing datapoints (judged by embedding cosine similarity), the higher the reward - **Richness**: The more detailed the caption (judged by cosine similarity between video and submitted caption), the higher the reward - **Relevance**: Miners are asked to scrape data pertaining to handpicked categories, pertinent for building video understanding and training world models. - 🧠 **Latent Representations**: ImageBind embeddings for the video, audio, and caption are pre-computed - 🤖 **Empowering Digital Agents**: Enables the development of intelligent agents that can navigate complex workflows and assist users across platforms. - 📊 **Flexible Metadata**: Filter the dataset to find clips relevant to topics you would like to train on or filter by your desired cosine similarities ## Dataset Structure The OMEGA Labs Bittensor Subnet Dataset consists of the following columns: - `video_id`: Unique identifier for each video clip. - `youtube_id`: The original YouTube video ID. - `description`: Description of the video content. - `views`: Number of views the original YouTube video has received. - `start_time`: Start time of the video clip within the original video. - `end_time`: End time of the video clip within the original video. - `video_embed`: Latent representation of the video content. - `audio_embed`: Latent representation of the audio content. - `description_embed`: Latent representation of the video description. - `description_relevance_score`: Relevance score of the video description to the content. - `query_relevance_score`: Relevance score of the video to the search query. - `query`: The search query used to retrieve the video. - `submitted_at`: Timestamp of when the video was added to the dataset. ## Applications The OMEGA Labs Bittensor Subnet Dataset empowers researchers and developers to push the boundaries of AGI by providing a vast and diverse resource for training and testing multimodal models. Some potential applications include: - **Unified Representation Learning**: Train powerful models that can learn unified representations across modalities. - **Any-to-Any Models**: Develop models capable of translating between different modalities, such as generating videos from text descriptions or vice versa. - **Digital Agents**: Create intelligent agents that can navigate complex workflows and assist users across platforms. - **Immersive Gaming**: Build realistic gaming environments with rich physics and interactions. - **Video Understanding**: Advance the state-of-the-art in video processing tasks such as transcription, motion analysis, object detection, and emotion recognition. ## Say hi! If you're interested in getting in touch, reach out to us on [Twitter](https://twitter.com/omegalabsai)! You can also visit our [Github](https://github.com/omegalabsinc/omegalabs-bittensor-subnet/tree/main) to learn more about how our scraping is done! And if you'd like to learn more about Bittensor, join the [Discord](https://discord.gg/6yZpQ9KV)!
cardiffnlp/tweet_sentiment_multilingual
--- language: - en - ar - fr - de - hi - it - pt - es multilinguality: - multilingual size_categories: - 10K<n<100K source_datasets: - extended|other-tweet-datasets task_categories: - text-classification task_ids: - sentiment-classification paperswithcode_id: tweet_sentiment_multilingual pretty_name: Tweet Sentiment Multilingual train-eval-index: - config: sentiment task: text-classification task_id: multi_class_classification splits: train_split: train eval_split: test col_mapping: text: text label: target metrics: - type: accuracy name: Accuracy - type: f1 name: F1 macro args: average: macro - type: f1 name: F1 micro args: average: micro - type: f1 name: F1 weighted args: average: weighted - type: precision name: Precision macro args: average: macro - type: precision name: Precision micro args: average: micro - type: precision name: Precision weighted args: average: weighted - type: recall name: Recall macro args: average: macro - type: recall name: Recall micro args: average: micro - type: recall name: Recall weighted args: average: weighted configs: - arabic - english - french - german - hindi - italian - portuguese - spanish dataset_info: - config_name: sentiment features: - name: text dtype: string - name: label dtype: class_label: names: 0: negative 1: neutral 2: positive --- # Dataset Card for cardiffnlp/tweet_sentiment_multilingual ## Dataset Description - **Homepage:** [https://github.com/cardiffnlp/xlm-t](https://github.com/cardiffnlp/xlm-t) - **Repository:** - **Homepage:** [https://github.com/cardiffnlp/xlm-t](https://github.com/cardiffnlp/xlm-t) - **Paper:** [https://aclanthology.org/2022.lrec-1.27/](https://aclanthology.org/2022.lrec-1.27/) - **Point of Contact:** [Asahi Ushio](https://asahiushio.com/) ### Dataset Summary Tweet Sentiment Multilingual consists of sentiment analysis dataset on Twitter in 8 different lagnuages. - arabic - english - french - german - hindi - italian - portuguese - spanish ### Supported Tasks and Leaderboards - `text_classification`: The dataset can be trained using a SentenceClassification model from HuggingFace transformers. ## Dataset Structure ### Data Instances An instance from `sentiment` config: ``` {'label': 2, 'text': '"QT @user In the original draft of the 7th book, Remus Lupin survived the Battle of Hogwarts. #HappyBirthdayRemusLupin"'} ``` ### Data Fields For `sentiment` config: - `text`: a `string` feature containing the tweet. - `label`: an `int` classification label with the following mapping: `0`: negative `1`: neutral `2`: positive ### Data Splits - arabic - english - french - german - hindi - italian - portuguese - spanish | name | train | validation | test | | --------------- | ----- | ---------- | ----- | | arabic | 1838 | 323 | 869 | | english | 1838 | 323 | 869 | | french | 1838 | 323 | 869 | | german | 1838 | 323 | 869 | | hindi | 1838 | 323 | 869 | | italian | 1838 | 323 | 869 | | portuguese | 1838 | 323 | 869 | | spanish | 1838 | 323 | 869 | ### Dataset Curators Francesco Barbieri, Jose Camacho-Collados, Luis Espiinosa-Anke and Leonardo Neves through Cardiff NLP. ### Licensing Information [Creative Commons Attribution 3.0 Unported License](https://groups.google.com/g/semevaltweet/c/k5DDcvVb_Vo/m/zEOdECFyBQAJ), and all of the datasets require complying with Twitter [Terms Of Service](https://twitter.com/tos) and Twitter API [Terms Of Service](https://developer.twitter.com/en/developer-terms/agreement-and-policy) ### Citation Information ``` @inproceedings{barbieri-etal-2022-xlm, title = "{XLM}-{T}: Multilingual Language Models in {T}witter for Sentiment Analysis and Beyond", author = "Barbieri, Francesco and Espinosa Anke, Luis and Camacho-Collados, Jose", booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference", month = jun, year = "2022", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2022.lrec-1.27", pages = "258--266", abstract = "Language models are ubiquitous in current NLP, and their multilingual capacity has recently attracted considerable attention. However, current analyses have almost exclusively focused on (multilingual variants of) standard benchmarks, and have relied on clean pre-training and task-specific corpora as multilingual signals. In this paper, we introduce XLM-T, a model to train and evaluate multilingual language models in Twitter. In this paper we provide: (1) a new strong multilingual baseline consisting of an XLM-R (Conneau et al. 2020) model pre-trained on millions of tweets in over thirty languages, alongside starter code to subsequently fine-tune on a target task; and (2) a set of unified sentiment analysis Twitter datasets in eight different languages and a XLM-T model trained on this dataset.", } ```
GAIR/lima
--- license: other --- Dataset for [LIMA: Less Is More for Alignment](https://arxiv.org/pdf/2305.11206.pdf) ## Usage ```python from datasets import load_dataset dataset = load_dataset("GAIR/lima") ``` ## License If the source data of LIMA has a stricter license than CC BY-NC-SA, the LIMA dataset follows the same. Otherwise, it follows the CC BY-NC-SA license.
scene_parse_150
--- annotations_creators: - crowdsourced - expert-generated language_creators: - found language: - en license: - bsd-3-clause multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - extended|ade20k task_categories: - image-segmentation task_ids: - instance-segmentation paperswithcode_id: ade20k pretty_name: MIT Scene Parsing Benchmark tags: - scene-parsing dataset_info: - config_name: scene_parsing features: - name: image dtype: image - name: annotation dtype: image - name: scene_category dtype: class_label: names: '0': airport_terminal '1': art_gallery '2': badlands '3': ball_pit '4': bathroom '5': beach '6': bedroom '7': booth_indoor '8': botanical_garden '9': bridge '10': bullring '11': bus_interior '12': butte '13': canyon '14': casino_outdoor '15': castle '16': church_outdoor '17': closet '18': coast '19': conference_room '20': construction_site '21': corral '22': corridor '23': crosswalk '24': day_care_center '25': sand '26': elevator_interior '27': escalator_indoor '28': forest_road '29': gangplank '30': gas_station '31': golf_course '32': gymnasium_indoor '33': harbor '34': hayfield '35': heath '36': hoodoo '37': house '38': hunting_lodge_outdoor '39': ice_shelf '40': joss_house '41': kiosk_indoor '42': kitchen '43': landfill '44': library_indoor '45': lido_deck_outdoor '46': living_room '47': locker_room '48': market_outdoor '49': mountain_snowy '50': office '51': orchard '52': arbor '53': bookshelf '54': mews '55': nook '56': preserve '57': traffic_island '58': palace '59': palace_hall '60': pantry '61': patio '62': phone_booth '63': establishment '64': poolroom_home '65': quonset_hut_outdoor '66': rice_paddy '67': sandbox '68': shopfront '69': skyscraper '70': stone_circle '71': subway_interior '72': platform '73': supermarket '74': swimming_pool_outdoor '75': television_studio '76': indoor_procenium '77': train_railway '78': coral_reef '79': viaduct '80': wave '81': wind_farm '82': bottle_storage '83': abbey '84': access_road '85': air_base '86': airfield '87': airlock '88': airplane_cabin '89': airport '90': entrance '91': airport_ticket_counter '92': alcove '93': alley '94': amphitheater '95': amusement_arcade '96': amusement_park '97': anechoic_chamber '98': apartment_building_outdoor '99': apse_indoor '100': apse_outdoor '101': aquarium '102': aquatic_theater '103': aqueduct '104': arcade '105': arch '106': archaelogical_excavation '107': archive '108': basketball '109': football '110': hockey '111': performance '112': rodeo '113': soccer '114': armory '115': army_base '116': arrival_gate_indoor '117': arrival_gate_outdoor '118': art_school '119': art_studio '120': artists_loft '121': assembly_line '122': athletic_field_indoor '123': athletic_field_outdoor '124': atrium_home '125': atrium_public '126': attic '127': auditorium '128': auto_factory '129': auto_mechanics_indoor '130': auto_mechanics_outdoor '131': auto_racing_paddock '132': auto_showroom '133': backstage '134': backstairs '135': badminton_court_indoor '136': badminton_court_outdoor '137': baggage_claim '138': shop '139': exterior '140': balcony_interior '141': ballroom '142': bamboo_forest '143': bank_indoor '144': bank_outdoor '145': bank_vault '146': banquet_hall '147': baptistry_indoor '148': baptistry_outdoor '149': bar '150': barbershop '151': barn '152': barndoor '153': barnyard '154': barrack '155': baseball_field '156': basement '157': basilica '158': basketball_court_indoor '159': basketball_court_outdoor '160': bathhouse '161': batters_box '162': batting_cage_indoor '163': batting_cage_outdoor '164': battlement '165': bayou '166': bazaar_indoor '167': bazaar_outdoor '168': beach_house '169': beauty_salon '170': bedchamber '171': beer_garden '172': beer_hall '173': belfry '174': bell_foundry '175': berth '176': berth_deck '177': betting_shop '178': bicycle_racks '179': bindery '180': biology_laboratory '181': bistro_indoor '182': bistro_outdoor '183': bleachers_indoor '184': bleachers_outdoor '185': boardwalk '186': boat_deck '187': boathouse '188': bog '189': bomb_shelter_indoor '190': bookbindery '191': bookstore '192': bow_window_indoor '193': bow_window_outdoor '194': bowling_alley '195': box_seat '196': boxing_ring '197': breakroom '198': brewery_indoor '199': brewery_outdoor '200': brickyard_indoor '201': brickyard_outdoor '202': building_complex '203': building_facade '204': bullpen '205': burial_chamber '206': bus_depot_indoor '207': bus_depot_outdoor '208': bus_shelter '209': bus_station_indoor '210': bus_station_outdoor '211': butchers_shop '212': cabana '213': cabin_indoor '214': cabin_outdoor '215': cafeteria '216': call_center '217': campsite '218': campus '219': natural '220': urban '221': candy_store '222': canteen '223': car_dealership '224': backseat '225': frontseat '226': caravansary '227': cardroom '228': cargo_container_interior '229': airplane '230': boat '231': freestanding '232': carport_indoor '233': carport_outdoor '234': carrousel '235': casino_indoor '236': catacomb '237': cathedral_indoor '238': cathedral_outdoor '239': catwalk '240': cavern_indoor '241': cavern_outdoor '242': cemetery '243': chalet '244': chaparral '245': chapel '246': checkout_counter '247': cheese_factory '248': chemical_plant '249': chemistry_lab '250': chicken_coop_indoor '251': chicken_coop_outdoor '252': chicken_farm_indoor '253': chicken_farm_outdoor '254': childs_room '255': choir_loft_interior '256': church_indoor '257': circus_tent_indoor '258': circus_tent_outdoor '259': city '260': classroom '261': clean_room '262': cliff '263': booth '264': room '265': clock_tower_indoor '266': cloister_indoor '267': cloister_outdoor '268': clothing_store '269': coast_road '270': cockpit '271': coffee_shop '272': computer_room '273': conference_center '274': conference_hall '275': confessional '276': control_room '277': control_tower_indoor '278': control_tower_outdoor '279': convenience_store_indoor '280': convenience_store_outdoor '281': corn_field '282': cottage '283': cottage_garden '284': courthouse '285': courtroom '286': courtyard '287': covered_bridge_interior '288': crawl_space '289': creek '290': crevasse '291': library '292': cybercafe '293': dacha '294': dairy_indoor '295': dairy_outdoor '296': dam '297': dance_school '298': darkroom '299': delicatessen '300': dentists_office '301': department_store '302': departure_lounge '303': vegetation '304': desert_road '305': diner_indoor '306': diner_outdoor '307': dinette_home '308': vehicle '309': dining_car '310': dining_hall '311': dining_room '312': dirt_track '313': discotheque '314': distillery '315': ditch '316': dock '317': dolmen '318': donjon '319': doorway_indoor '320': doorway_outdoor '321': dorm_room '322': downtown '323': drainage_ditch '324': dress_shop '325': dressing_room '326': drill_rig '327': driveway '328': driving_range_indoor '329': driving_range_outdoor '330': drugstore '331': dry_dock '332': dugout '333': earth_fissure '334': editing_room '335': electrical_substation '336': elevated_catwalk '337': door '338': freight_elevator '339': elevator_lobby '340': elevator_shaft '341': embankment '342': embassy '343': engine_room '344': entrance_hall '345': escalator_outdoor '346': escarpment '347': estuary '348': excavation '349': exhibition_hall '350': fabric_store '351': factory_indoor '352': factory_outdoor '353': fairway '354': farm '355': fastfood_restaurant '356': fence '357': cargo_deck '358': ferryboat_indoor '359': passenger_deck '360': cultivated '361': wild '362': field_road '363': fire_escape '364': fire_station '365': firing_range_indoor '366': firing_range_outdoor '367': fish_farm '368': fishmarket '369': fishpond '370': fitting_room_interior '371': fjord '372': flea_market_indoor '373': flea_market_outdoor '374': floating_dry_dock '375': flood '376': florist_shop_indoor '377': florist_shop_outdoor '378': fly_bridge '379': food_court '380': football_field '381': broadleaf '382': needleleaf '383': forest_fire '384': forest_path '385': formal_garden '386': fort '387': fortress '388': foundry_indoor '389': foundry_outdoor '390': fountain '391': freeway '392': funeral_chapel '393': funeral_home '394': furnace_room '395': galley '396': game_room '397': garage_indoor '398': garage_outdoor '399': garbage_dump '400': gasworks '401': gate '402': gatehouse '403': gazebo_interior '404': general_store_indoor '405': general_store_outdoor '406': geodesic_dome_indoor '407': geodesic_dome_outdoor '408': ghost_town '409': gift_shop '410': glacier '411': glade '412': gorge '413': granary '414': great_hall '415': greengrocery '416': greenhouse_indoor '417': greenhouse_outdoor '418': grotto '419': guardhouse '420': gulch '421': gun_deck_indoor '422': gun_deck_outdoor '423': gun_store '424': hacienda '425': hallway '426': handball_court '427': hangar_indoor '428': hangar_outdoor '429': hardware_store '430': hat_shop '431': hatchery '432': hayloft '433': hearth '434': hedge_maze '435': hedgerow '436': heliport '437': herb_garden '438': highway '439': hill '440': home_office '441': home_theater '442': hospital '443': hospital_room '444': hot_spring '445': hot_tub_indoor '446': hot_tub_outdoor '447': hotel_outdoor '448': hotel_breakfast_area '449': hotel_room '450': hunting_lodge_indoor '451': hut '452': ice_cream_parlor '453': ice_floe '454': ice_skating_rink_indoor '455': ice_skating_rink_outdoor '456': iceberg '457': igloo '458': imaret '459': incinerator_indoor '460': incinerator_outdoor '461': industrial_area '462': industrial_park '463': inn_indoor '464': inn_outdoor '465': irrigation_ditch '466': islet '467': jacuzzi_indoor '468': jacuzzi_outdoor '469': jail_indoor '470': jail_outdoor '471': jail_cell '472': japanese_garden '473': jetty '474': jewelry_shop '475': junk_pile '476': junkyard '477': jury_box '478': kasbah '479': kennel_indoor '480': kennel_outdoor '481': kindergarden_classroom '482': kiosk_outdoor '483': kitchenette '484': lab_classroom '485': labyrinth_indoor '486': labyrinth_outdoor '487': lagoon '488': artificial '489': landing '490': landing_deck '491': laundromat '492': lava_flow '493': lavatory '494': lawn '495': lean-to '496': lecture_room '497': legislative_chamber '498': levee '499': library_outdoor '500': lido_deck_indoor '501': lift_bridge '502': lighthouse '503': limousine_interior '504': liquor_store_indoor '505': liquor_store_outdoor '506': loading_dock '507': lobby '508': lock_chamber '509': loft '510': lookout_station_indoor '511': lookout_station_outdoor '512': lumberyard_indoor '513': lumberyard_outdoor '514': machine_shop '515': manhole '516': mansion '517': manufactured_home '518': market_indoor '519': marsh '520': martial_arts_gym '521': mastaba '522': maternity_ward '523': mausoleum '524': medina '525': menhir '526': mesa '527': mess_hall '528': mezzanine '529': military_hospital '530': military_hut '531': military_tent '532': mine '533': mineshaft '534': mini_golf_course_indoor '535': mini_golf_course_outdoor '536': mission '537': dry '538': water '539': mobile_home '540': monastery_indoor '541': monastery_outdoor '542': moon_bounce '543': moor '544': morgue '545': mosque_indoor '546': mosque_outdoor '547': motel '548': mountain '549': mountain_path '550': mountain_road '551': movie_theater_indoor '552': movie_theater_outdoor '553': mudflat '554': museum_indoor '555': museum_outdoor '556': music_store '557': music_studio '558': misc '559': natural_history_museum '560': naval_base '561': newsroom '562': newsstand_indoor '563': newsstand_outdoor '564': nightclub '565': nuclear_power_plant_indoor '566': nuclear_power_plant_outdoor '567': nunnery '568': nursery '569': nursing_home '570': oasis '571': oast_house '572': observatory_indoor '573': observatory_outdoor '574': observatory_post '575': ocean '576': office_building '577': office_cubicles '578': oil_refinery_indoor '579': oil_refinery_outdoor '580': oilrig '581': operating_room '582': optician '583': organ_loft_interior '584': orlop_deck '585': ossuary '586': outcropping '587': outhouse_indoor '588': outhouse_outdoor '589': overpass '590': oyster_bar '591': oyster_farm '592': acropolis '593': aircraft_carrier_object '594': amphitheater_indoor '595': archipelago '596': questionable '597': assembly_hall '598': assembly_plant '599': awning_deck '600': back_porch '601': backdrop '602': backroom '603': backstage_outdoor '604': backstairs_indoor '605': backwoods '606': ballet '607': balustrade '608': barbeque '609': basin_outdoor '610': bath_indoor '611': bath_outdoor '612': bathhouse_outdoor '613': battlefield '614': bay '615': booth_outdoor '616': bottomland '617': breakfast_table '618': bric-a-brac '619': brooklet '620': bubble_chamber '621': buffet '622': bulkhead '623': bunk_bed '624': bypass '625': byroad '626': cabin_cruiser '627': cargo_helicopter '628': cellar '629': chair_lift '630': cocktail_lounge '631': corner '632': country_house '633': country_road '634': customhouse '635': dance_floor '636': deck-house_boat_deck_house '637': deck-house_deck_house '638': dining_area '639': diving_board '640': embrasure '641': entranceway_indoor '642': entranceway_outdoor '643': entryway_outdoor '644': estaminet '645': farm_building '646': farmhouse '647': feed_bunk '648': field_house '649': field_tent_indoor '650': field_tent_outdoor '651': fire_trench '652': fireplace '653': flashflood '654': flatlet '655': floating_dock '656': flood_plain '657': flowerbed '658': flume_indoor '659': flying_buttress '660': foothill '661': forecourt '662': foreshore '663': front_porch '664': garden '665': gas_well '666': glen '667': grape_arbor '668': grove '669': guardroom '670': guesthouse '671': gymnasium_outdoor '672': head_shop '673': hen_yard '674': hillock '675': housing_estate '676': housing_project '677': howdah '678': inlet '679': insane_asylum '680': outside '681': juke_joint '682': jungle '683': kraal '684': laboratorywet '685': landing_strip '686': layby '687': lean-to_tent '688': loge '689': loggia_outdoor '690': lower_deck '691': luggage_van '692': mansard '693': meadow '694': meat_house '695': megalith '696': mens_store_outdoor '697': mental_institution_indoor '698': mental_institution_outdoor '699': military_headquarters '700': millpond '701': millrace '702': natural_spring '703': nursing_home_outdoor '704': observation_station '705': open-hearth_furnace '706': operating_table '707': outbuilding '708': palestra '709': parkway '710': patio_indoor '711': pavement '712': pawnshop_outdoor '713': pinetum '714': piste_road '715': pizzeria_outdoor '716': powder_room '717': pumping_station '718': reception_room '719': rest_stop '720': retaining_wall '721': rift_valley '722': road '723': rock_garden '724': rotisserie '725': safari_park '726': salon '727': saloon '728': sanatorium '729': science_laboratory '730': scrubland '731': scullery '732': seaside '733': semidesert '734': shelter '735': shelter_deck '736': shelter_tent '737': shore '738': shrubbery '739': sidewalk '740': snack_bar '741': snowbank '742': stage_set '743': stall '744': stateroom '745': store '746': streetcar_track '747': student_center '748': study_hall '749': sugar_refinery '750': sunroom '751': supply_chamber '752': t-bar_lift '753': tannery '754': teahouse '755': threshing_floor '756': ticket_window_indoor '757': tidal_basin '758': tidal_river '759': tiltyard '760': tollgate '761': tomb '762': tract_housing '763': trellis '764': truck_stop '765': upper_balcony '766': vestibule '767': vinery '768': walkway '769': war_room '770': washroom '771': water_fountain '772': water_gate '773': waterscape '774': waterway '775': wetland '776': widows_walk_indoor '777': windstorm '778': packaging_plant '779': pagoda '780': paper_mill '781': park '782': parking_garage_indoor '783': parking_garage_outdoor '784': parking_lot '785': parlor '786': particle_accelerator '787': party_tent_indoor '788': party_tent_outdoor '789': pasture '790': pavilion '791': pawnshop '792': pedestrian_overpass_indoor '793': penalty_box '794': pet_shop '795': pharmacy '796': physics_laboratory '797': piano_store '798': picnic_area '799': pier '800': pig_farm '801': pilothouse_indoor '802': pilothouse_outdoor '803': pitchers_mound '804': pizzeria '805': planetarium_indoor '806': planetarium_outdoor '807': plantation_house '808': playground '809': playroom '810': plaza '811': podium_indoor '812': podium_outdoor '813': police_station '814': pond '815': pontoon_bridge '816': poop_deck '817': porch '818': portico '819': portrait_studio '820': postern '821': power_plant_outdoor '822': print_shop '823': priory '824': promenade '825': promenade_deck '826': pub_indoor '827': pub_outdoor '828': pulpit '829': putting_green '830': quadrangle '831': quicksand '832': quonset_hut_indoor '833': racecourse '834': raceway '835': raft '836': railroad_track '837': railway_yard '838': rainforest '839': ramp '840': ranch '841': ranch_house '842': reading_room '843': reception '844': recreation_room '845': rectory '846': recycling_plant_indoor '847': refectory '848': repair_shop '849': residential_neighborhood '850': resort '851': rest_area '852': restaurant '853': restaurant_kitchen '854': restaurant_patio '855': restroom_indoor '856': restroom_outdoor '857': revolving_door '858': riding_arena '859': river '860': road_cut '861': rock_arch '862': roller_skating_rink_indoor '863': roller_skating_rink_outdoor '864': rolling_mill '865': roof '866': roof_garden '867': root_cellar '868': rope_bridge '869': roundabout '870': roundhouse '871': rubble '872': ruin '873': runway '874': sacristy '875': salt_plain '876': sand_trap '877': sandbar '878': sauna '879': savanna '880': sawmill '881': schoolhouse '882': schoolyard '883': science_museum '884': scriptorium '885': sea_cliff '886': seawall '887': security_check_point '888': server_room '889': sewer '890': sewing_room '891': shed '892': shipping_room '893': shipyard_outdoor '894': shoe_shop '895': shopping_mall_indoor '896': shopping_mall_outdoor '897': shower '898': shower_room '899': shrine '900': signal_box '901': sinkhole '902': ski_jump '903': ski_lodge '904': ski_resort '905': ski_slope '906': sky '907': skywalk_indoor '908': skywalk_outdoor '909': slum '910': snowfield '911': massage_room '912': mineral_bath '913': spillway '914': sporting_goods_store '915': squash_court '916': stable '917': baseball '918': stadium_outdoor '919': stage_indoor '920': stage_outdoor '921': staircase '922': starting_gate '923': steam_plant_outdoor '924': steel_mill_indoor '925': storage_room '926': storm_cellar '927': street '928': strip_mall '929': strip_mine '930': student_residence '931': submarine_interior '932': sun_deck '933': sushi_bar '934': swamp '935': swimming_hole '936': swimming_pool_indoor '937': synagogue_indoor '938': synagogue_outdoor '939': taxistand '940': taxiway '941': tea_garden '942': tearoom '943': teashop '944': television_room '945': east_asia '946': mesoamerican '947': south_asia '948': western '949': tennis_court_indoor '950': tennis_court_outdoor '951': tent_outdoor '952': terrace_farm '953': indoor_round '954': indoor_seats '955': theater_outdoor '956': thriftshop '957': throne_room '958': ticket_booth '959': tobacco_shop_indoor '960': toll_plaza '961': tollbooth '962': topiary_garden '963': tower '964': town_house '965': toyshop '966': track_outdoor '967': trading_floor '968': trailer_park '969': train_interior '970': train_station_outdoor '971': station '972': tree_farm '973': tree_house '974': trench '975': trestle_bridge '976': tundra '977': rail_indoor '978': rail_outdoor '979': road_indoor '980': road_outdoor '981': turkish_bath '982': ocean_deep '983': ocean_shallow '984': utility_room '985': valley '986': van_interior '987': vegetable_garden '988': velodrome_indoor '989': velodrome_outdoor '990': ventilation_shaft '991': veranda '992': vestry '993': veterinarians_office '994': videostore '995': village '996': vineyard '997': volcano '998': volleyball_court_indoor '999': volleyball_court_outdoor '1000': voting_booth '1001': waiting_room '1002': walk_in_freezer '1003': warehouse_indoor '1004': warehouse_outdoor '1005': washhouse_indoor '1006': washhouse_outdoor '1007': watchtower '1008': water_mill '1009': water_park '1010': water_tower '1011': water_treatment_plant_indoor '1012': water_treatment_plant_outdoor '1013': block '1014': cascade '1015': cataract '1016': fan '1017': plunge '1018': watering_hole '1019': weighbridge '1020': wet_bar '1021': wharf '1022': wheat_field '1023': whispering_gallery '1024': widows_walk_interior '1025': windmill '1026': window_seat '1027': barrel_storage '1028': winery '1029': witness_stand '1030': woodland '1031': workroom '1032': workshop '1033': wrestling_ring_indoor '1034': wrestling_ring_outdoor '1035': yard '1036': youth_hostel '1037': zen_garden '1038': ziggurat '1039': zoo '1040': forklift '1041': hollow '1042': hutment '1043': pueblo '1044': vat '1045': perfume_shop '1046': steel_mill_outdoor '1047': orchestra_pit '1048': bridle_path '1049': lyceum '1050': one-way_street '1051': parade_ground '1052': pump_room '1053': recycling_plant_outdoor '1054': chuck_wagon splits: - name: train num_bytes: 8468086 num_examples: 20210 - name: test num_bytes: 744607 num_examples: 3352 - name: validation num_bytes: 838032 num_examples: 2000 download_size: 1179202534 dataset_size: 10050725 - config_name: instance_segmentation features: - name: image dtype: image - name: annotation dtype: image splits: - name: train num_bytes: 862611544 num_examples: 20210 - name: test num_bytes: 212493928 num_examples: 3352 - name: validation num_bytes: 87502294 num_examples: 2000 download_size: 1197393920 dataset_size: 1162607766 --- # Dataset Card for MIT Scene Parsing Benchmark ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [MIT Scene Parsing Benchmark homepage](http://sceneparsing.csail.mit.edu/) - **Repository:** [Scene Parsing repository (Caffe/Torch7)](https://github.com/CSAILVision/sceneparsing),[Scene Parsing repository (PyTorch)](https://github.com/CSAILVision/semantic-segmentation-pytorch) and [Instance Segmentation repository](https://github.com/CSAILVision/placeschallenge/tree/master/instancesegmentation) - **Paper:** [Scene Parsing through ADE20K Dataset](http://people.csail.mit.edu/bzhou/publication/scene-parse-camera-ready.pdf) and [Semantic Understanding of Scenes through ADE20K Dataset](https://arxiv.org/abs/1608.05442) - **Leaderboard:** [MIT Scene Parsing Benchmark leaderboard](http://sceneparsing.csail.mit.edu/#:~:text=twice%20per%20week.-,leaderboard,-Organizers) - **Point of Contact:** [Bolei Zhou](mailto:bzhou@ie.cuhk.edu.hk) ### Dataset Summary Scene parsing is the task of segmenting and parsing an image into different image regions associated with semantic categories, such as sky, road, person, and bed. MIT Scene Parsing Benchmark (SceneParse150) provides a standard training and evaluation platform for the algorithms of scene parsing. The data for this benchmark comes from ADE20K Dataset which contains more than 20K scene-centric images exhaustively annotated with objects and object parts. Specifically, the benchmark is divided into 20K images for training, 2K images for validation, and another batch of held-out images for testing. There are in total 150 semantic categories included for evaluation, which include e.g. sky, road, grass, and discrete objects like person, car, bed. Note that there are non-uniform distribution of objects occuring in the images, mimicking a more natural object occurrence in daily scene. The goal of this benchmark is to segment and parse an image into different image regions associated with semantic categories, such as sky, road, person, and bedThis benchamark is similar to semantic segmentation tasks in COCO and Pascal Dataset, but the data is more scene-centric and with a diverse range of object categories. The data for this benchmark comes from ADE20K Dataset which contains more than 20K scene-centric images exhaustively annotated with objects and object parts. ### Supported Tasks and Leaderboards - `scene-parsing`: The goal of this task is to segment the whole image densely into semantic classes (image regions), where each pixel is assigned a class label such as the region of *tree* and the region of *building*. [The leaderboard](http://sceneparsing.csail.mit.edu/#:~:text=twice%20per%20week.-,leaderboard,-Organizers) for this task ranks the models by considering the mean of the pixel-wise accuracy and class-wise IoU as the final score. Pixel-wise accuracy indicates the ratio of pixels which are correctly predicted, while class-wise IoU indicates the Intersection of Union of pixels averaged over all the 150 semantic categories. Refer to the [Development Kit](https://github.com/CSAILVision/sceneparsing) for the detail. - `instance-segmentation`: The goal of this task is to detect the object instances inside an image and further generate the precise segmentation masks of the objects. Its difference compared to the task of scene parsing is that in scene parsing there is no instance concept for the segmented regions, instead in instance segmentation if there are three persons in the scene, the network is required to segment each one of the person regions. This task doesn't have an active leaderboard. The performance of the instance segmentation algorithms is evaluated by Average Precision (AP, or mAP), following COCO evaluation metrics. For each image, at most 255 top-scoring instance masks are taken across all categories. Each instance mask prediction is only considered if its IoU with ground truth is above a certain threshold. There are 10 IoU thresholds of 0.50:0.05:0.95 for evaluation. The final AP is averaged across 10 IoU thresholds and 100 categories. You can refer to COCO evaluation page for more explanation: http://mscoco.org/dataset/#detections-eval ### Languages English. ## Dataset Structure ### Data Instances A data point comprises an image and its annotation mask, which is `None` in the testing set. The `scene_parsing` configuration has an additional `scene_category` field. #### `scene_parsing` ``` { 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=683x512 at 0x1FF32A3EDA0>, 'annotation': <PIL.PngImagePlugin.PngImageFile image mode=L size=683x512 at 0x1FF32E5B978>, 'scene_category': 0 } ``` #### `instance_segmentation` ``` { 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=256x256 at 0x20B51B5C400>, 'annotation': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=256x256 at 0x20B57051B38> } ``` ### Data Fields #### `scene_parsing` - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`. - `annotation`: A `PIL.Image.Image` object containing the annotation mask. - `scene_category`: A scene category for the image (e.g. `airport_terminal`, `canyon`, `mobile_home`). > **Note**: annotation masks contain labels ranging from 0 to 150, where 0 refers to "other objects". Those pixels are not considered in the official evaluation. Refer to [this file](https://github.com/CSAILVision/sceneparsing/blob/master/objectInfo150.csv) for the information about the labels of the 150 semantic categories, including indices, pixel ratios and names. #### `instance_segmentation` - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`. - `annotation`: A `PIL.Image.Image` object containing the annotation mask. > **Note**: in the instance annotation masks, the R(ed) channel encodes category ID, and the G(reen) channel encodes instance ID. Each object instance has a unique instance ID regardless of its category ID. In the dataset, all images have <256 object instances. Refer to [this file (train split)](https://github.com/CSAILVision/placeschallenge/blob/master/instancesegmentation/instanceInfo100_train.txt) and to [this file (validation split)](https://github.com/CSAILVision/placeschallenge/blob/master/instancesegmentation/instanceInfo100_val.txt) for the information about the labels of the 100 semantic categories. To find the mapping between the semantic categories for `instance_segmentation` and `scene_parsing`, refer to [this file](https://github.com/CSAILVision/placeschallenge/blob/master/instancesegmentation/categoryMapping.txt). ### Data Splits The data is split into training, test and validation set. The training data contains 20210 images, the testing data contains 3352 images and the validation data contains 2000 images. ## Dataset Creation ### Curation Rationale The rationale from the paper for the ADE20K dataset from which this benchmark originates: > Semantic understanding of visual scenes is one of the holy grails of computer vision. Despite efforts of the community in data collection, there are still few image datasets covering a wide range of scenes and object categories with pixel-wise annotations for scene understanding. In this work, we present a densely annotated dataset ADE20K, which spans diverse annotations of scenes, objects, parts of objects, and in some cases even parts of parts. > The motivation of this work is to collect a dataset that has densely annotated images (every pixel has a semantic label) with a large and an unrestricted open vocabulary. The images in our dataset are manually segmented in great detail, covering a diverse set of scenes, object and object part categories. The challenge for collecting such annotations is finding reliable annotators, as well as the fact that labeling is difficult if the class list is not defined in advance. On the other hand, open vocabulary naming also suffers from naming inconsistencies across different annotators. In contrast, our dataset was annotated by a single expert annotator, providing extremely detailed and exhaustive image annotations. On average, our annotator labeled 29 annotation segments per image, compared to the 16 segments per image labeled by external annotators (like workers from Amazon Mechanical Turk). Furthermore, the data consistency and quality are much higher than that of external annotators. ### Source Data #### Initial Data Collection and Normalization Images come from the LabelMe, SUN datasets, and Places and were selected to cover the 900 scene categories defined in the SUN database. This benchmark was built by selecting the top 150 objects ranked by their total pixel ratios from the ADE20K dataset. As the original images in the ADE20K dataset have various sizes, for simplicity those large-sized images were rescaled to make their minimum heights or widths as 512. Among the 150 objects, there are 35 stuff classes (i.e., wall, sky, road) and 115 discrete objects (i.e., car, person, table). The annotated pixels of the 150 objects occupy 92.75% of all the pixels in the dataset, where the stuff classes occupy 60.92%, and discrete objects occupy 31.83%. #### Who are the source language producers? The same as in the LabelMe, SUN datasets, and Places datasets. ### Annotations #### Annotation process Annotation process for the ADE20K dataset: > **Image Annotation.** For our dataset, we are interested in having a diverse set of scenes with dense annotations of all the objects present. Images come from the LabelMe, SUN datasets, and Places and were selected to cover the 900 scene categories defined in the SUN database. Images were annotated by a single expert worker using the LabelMe interface. Fig. 2 shows a snapshot of the annotation interface and one fully segmented image. The worker provided three types of annotations: object segments with names, object parts, and attributes. All object instances are segmented independently so that the dataset could be used to train and evaluate detection or segmentation algorithms. Datasets such as COCO, Pascal or Cityscape start by defining a set of object categories of interest. However, when labeling all the objects in a scene, working with a predefined list of objects is not possible as new categories appear frequently (see fig. 5.d). Here, the annotator created a dictionary of visual concepts where new classes were added constantly to ensure consistency in object naming. Object parts are associated with object instances. Note that parts can have parts too, and we label these associations as well. For example, the ‘rim’ is a part of a ‘wheel’, which in turn is part of a ‘car’. A ‘knob’ is a part of a ‘door’ that can be part of a ‘cabinet’. The total part hierarchy has a depth of 3. The object and part hierarchy is in the supplementary materials. > **Annotation Consistency.** Defining a labeling protocol is relatively easy when the labeling task is restricted to a fixed list of object classes, however it becomes challenging when the class list is openended. As the goal is to label all the objects within each image, the list of classes grows unbounded. >Many object classes appear only a few times across the entire collection of images. However, those rare >object classes cannot be ignored as they might be important elements for the interpretation of the scene. >Labeling in these conditions becomes difficult because we need to keep a growing list of all the object >classes in order to have a consistent naming across the entire dataset. Despite the annotator’s best effort, >the process is not free of noise. To analyze the annotation consistency we took a subset of 61 randomly >chosen images from the validation set, then asked our annotator to annotate them again (there is a time difference of six months). One expects that there are some differences between the two annotations. A few examples are shown in Fig 3. On average, 82.4% of the pixels got the same label. The remaining 17.6% of pixels had some errors for which we grouped into three error types as follows: > > • Segmentation quality: Variations in the quality of segmentation and outlining of the object boundary. One typical source of error arises when segmenting complex objects such as buildings and trees, which can be segmented with different degrees of precision. 5.7% of the pixels had this type of error. > > • Object naming: Differences in object naming (due to ambiguity or similarity between concepts, for instance calling a big car a ‘car’ in one segmentation and a ‘truck’ in the another one, or a ‘palm tree’ a‘tree’. 6.0% of the pixels had naming issues. These errors can be reduced by defining a very precise terminology, but this becomes much harder with a large growing vocabulary. > > • Segmentation quantity: Missing objects in one of the two segmentations. There is a very large number of objects in each image and some images might be annotated more thoroughly than others. For example, in the third column of Fig 3 the annotator missed some small objects in different annotations. 5.9% of the pixels are due to missing labels. A similar issue existed in segmentation datasets such as the Berkeley Image segmentation dataset. > > The median error values for the three error types are: 4.8%, 0.3% and 2.6% showing that the mean value is dominated by a few images, and that the most common type of error is segmentation quality. To further compare the annotation done by our single expert annotator and the AMT-like annotators, 20 images from the validation set are annotated by two invited external annotators, both with prior experience in image labeling. The first external annotator had 58.5% of inconsistent pixels compared to the segmentation provided by our annotator, and the second external annotator had 75% of the inconsistent pixels. Many of these inconsistencies are due to the poor quality of the segmentations provided by external annotators (as it has been observed with AMT which requires multiple verification steps for quality control). For the best external annotator (the first one), 7.9% of pixels have inconsistent segmentations (just slightly worse than our annotator), 14.9% have inconsistent object naming and 35.8% of the pixels correspond to missing objects, which is due to the much smaller number of objects annotated by the external annotator in comparison with the ones annotated by our expert annotator. The external annotators labeled on average 16 segments per image while our annotator provided 29 segments per image. #### Who are the annotators? Three expert annotators and the AMT-like annotators. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations Refer to the `Annotation Consistency` subsection of `Annotation Process`. ## Additional Information ### Dataset Curators Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso and Antonio Torralba. ### Licensing Information The MIT Scene Parsing Benchmark dataset is licensed under a [BSD 3-Clause License](https://github.com/CSAILVision/sceneparsing/blob/master/LICENSE). ### Citation Information ```bibtex @inproceedings{zhou2017scene, title={Scene Parsing through ADE20K Dataset}, author={Zhou, Bolei and Zhao, Hang and Puig, Xavier and Fidler, Sanja and Barriuso, Adela and Torralba, Antonio}, booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition}, year={2017} } @article{zhou2016semantic, title={Semantic understanding of scenes through the ade20k dataset}, author={Zhou, Bolei and Zhao, Hang and Puig, Xavier and Fidler, Sanja and Barriuso, Adela and Torralba, Antonio}, journal={arXiv preprint arXiv:1608.05442}, year={2016} } ``` ### Contributions Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset.
bigcode/the-stack-dedup
--- annotations_creators: [] language_creators: - crowdsourced - expert-generated language: - code license: - other multilinguality: - multilingual pretty_name: The-Stack size_categories: - unknown source_datasets: [] task_categories: - text-generation task_ids: [] extra_gated_prompt: |- ## Terms of Use for The Stack The Stack dataset is a collection of source code in over 300 programming languages. We ask that you read and acknowledge the following points before using the dataset: 1. The Stack is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point. 2. The Stack is regularly updated to enact validated data removal requests. By clicking on "Access repository", you agree to update your own version of The Stack to the most recent usable version specified by the maintainers in [the following thread](https://huggingface.co/datasets/bigcode/the-stack/discussions/7). If you have questions about dataset versions and allowed uses, please also ask them in the dataset’s [community discussions](https://huggingface.co/datasets/bigcode/the-stack/discussions/new). We will also notify users via email when the latest usable version changes. 3. To host, share, or otherwise provide access to The Stack dataset, you must include [these Terms of Use](https://huggingface.co/datasets/bigcode/the-stack#terms-of-use-for-the-stack) and require users to agree to it. By clicking on "Access repository" below, you accept that your contact information (email address and username) can be shared with the dataset maintainers as well. extra_gated_fields: Email: text I have read the License and agree with its terms: checkbox --- # Dataset Card for The Stack ![infographic](https://huggingface.co/datasets/bigcode/admin/resolve/main/the-stack-infographic-v11.png) ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Changelog](#changelog) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [How to use it](#how-to-use-it) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) - [Terms of Use for The Stack](#terms-of-use-for-the-stack) ## Dataset Description - **Homepage:** https://www.bigcode-project.org/ - **Repository:** https://github.com/bigcode-project - **Paper:** https://arxiv.org/abs/2211.15533 - **Leaderboard:** N/A - **Point of Contact:** contact@bigcode-project.org ### Changelog |Release|Description| |-|-| |v1.0| Initial release of the Stack. Included 30 programming languages and 18 permissive licenses. **Note:** Three included licenses (MPL/EPL/LGPL) are considered weak copyleft licenses. The resulting near-deduplicated dataset is 1.5TB in size. | |v1.1| The three copyleft licenses ((MPL/EPL/LGPL) were excluded and the list of permissive licenses extended to 193 licenses in total. The list of programming languages was increased from 30 to 358 languages. Also opt-out request submitted by 15.11.2022 were excluded from this version of the dataset. The resulting near-deduplicated dataset is 3TB in size.| |v1.2| Opt-out request submitted by 09.02.2022 were excluded from this version of the dataset. A stronger near-deduplication strategy was applied resulting leading to 2.7TB in size.| ### Dataset Summary The Stack contains over 6TB of permissively-licensed source code files covering 358 programming languages. The dataset was created as part of the [BigCode Project](https://www.bigcode-project.org/), an open scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs). The Stack serves as a pre-training dataset for Code LLMs, i.e., code-generating AI systems which enable the synthesis of programs from natural language descriptions as well as other from code snippets. **This is the near-deduplicated version with 3TB data.** ### Supported Tasks and Leaderboards The Stack is a pre-training dataset for creating code LLMs. Code LLMs can be used for a wide variety of downstream tasks such as code completion from natural language descriptions ([HumanEval](https://huggingface.co/datasets/openai_humaneval), [MBPP](https://huggingface.co/datasets/mbpp)), documentation generation for individual functions ([CodeSearchNet](https://huggingface.co/datasets/code_search_net)), and auto-completion of code snippets ([HumanEval-Infilling](https://github.com/openai/human-eval-infilling)). However, these downstream evaluation benchmarks are outside the scope of The Stack. ### Languages The following natural languages appear in the comments and docstrings from files in the dataset: EN, ZH, FR, PT, ES, RU, DE, KO, JA, UZ, IT, ID, RO, AR, FA, CA, HU, ML, NL, TR, TE, EL, EO, BN, LV, GL, PL, GU, CEB, IA, KN, SH, MK, UR, SV, LA, JKA, MY, SU, CS, MN. This kind of data is essential for applications such as documentation generation and natural-language-to-code translation. The dataset contains **358 programming languages**. The full list can be found [here](https://huggingface.co/datasets/bigcode/the-stack-dedup/blob/main/programming-languages.json). ### How to use it ```python from datasets import load_dataset # full dataset (3TB of data) ds = load_dataset("bigcode/the-stack-dedup", split="train") # specific language (e.g. Dockerfiles) ds = load_dataset("bigcode/the-stack-dedup", data_dir="data/dockerfile", split="train") # dataset streaming (will only download the data as needed) ds = load_dataset("bigcode/the-stack-dedup", streaming=True, split="train") for sample in iter(ds): print(sample["content"]) ``` ## Dataset Structure ### Data Instances Each data instance corresponds to one file. The content of the file is in the `content` feature, and other features (`repository_name`, `licenses`, etc.) provide some metadata. Note that a given file can appear in several different repositories that satisfy our safe-license criterion. If that is the case, only the first – in alphabetical order -- of these repositories is shown for simplicity. ### Data Fields - `content` (string): the content of the file. - `size` (integer): size of the uncompressed file. - `lang` (string): the programming language. - `ext` (string): file extension - `avg_line_length` (float): the average line-length of the file. - `max_line_length` (integer): the maximum line-length of the file. - `alphanum_fraction` (float): the fraction of characters in the file that are alphabetical or numerical characters. - `hexsha` (string): unique git hash of file - `max_{stars|forks|issues}_repo_path` (string): path to file in repo containing this file with maximum number of `{stars|forks|issues}` - `max_{stars|forks|issues}_repo_name` (string): name of repo containing this file with maximum number of `{stars|forks|issues}` - `max_{stars|forks|issues}_repo_head_hexsha` (string): hexsha of repository head - `max_{stars|forks|issues}_repo_licenses` (string): licenses in repository - `max_{stars|forks|issues}_count` (integer): number of `{stars|forks|issues}` in repository - `max_{stars|forks|issues}_repo_{stars|forks|issues}_min_datetime` (string): first timestamp of a `{stars|forks|issues}` event - `max_{stars|forks|issues}_repo_{stars|forks|issues}_max_datetime` (string): last timestamp of a `{stars|forks|issues}` event ### Data Splits The dataset has no splits and all data is loaded as train split by default. If you want to setup a custom train-test split beware that dataset contains a lot of near-duplicates which can cause leakage into the test split. ## Dataset Creation ### Curation Rationale One of the challenges faced by researchers working on code LLMs is the lack of openness and transparency around the development of these systems. Most prior works described the high-level data collection process but did not release the training data. It is therefore difficult for other researchers to fully reproduce these models and understand what kind of pre-training data leads to high-performing code LLMs. By releasing an open large-scale code dataset we hope to make training of code LLMs more reproducible. **This is the near-deduplicated version with 3TB data.** ### Source Data #### Initial Data Collection and Normalization 220.92M active GitHub repository names were collected from the event archives published between January 1st, 2015 and March 31st, 2022 on [GHArchive](https://gharchive.org/). Only 137.36M of these repositories were public and accessible on GitHub – others were not accessible as they had been deleted by their owners. 51.76B files were downloaded from the public repositories on GitHub between November 2021 and June 2022. 5.28B files were unique. The uncompressed size of all stored files is 92.36TB. The list of programming language extensions is taken from this [list](https://gist.github.com/ppisarczyk/43962d06686722d26d176fad46879d41) (also provided in Appendix C of the paper). Near-deduplication was implemented in the pre-processing pipeline on top of exact deduplication. To find near-duplicates, MinHash with 256 permutations of all documents was computed in linear time. Locality Sensitive Hashing was used to find the clusters of duplicates. Jaccard Similarities were computed inside these clusters to remove any false positives and with a similarity threshold of 0.85. Roughly 40% of permissively licensed files were (near-)duplicates. See section 3 of the paper for further details. The following are not stored: - Files that cannot contribute to training code: binary, empty, could not be decoded - Files larger than 1MB - The excluded file extensions are listed in Appendix B of the paper. ##### License detection Permissive licenses have minimal restrictions on how the software can be copied, modified, and redistributed. The full list of licenses can be found [here](https://huggingface.co/datasets/bigcode/the-stack-dedup/blob/main/licenses.json) GHArchive contained the license information for approximately 12% of the collected repositories. For the remaining repositories, [go-license-detector](https://github.com/src-d/go-license-detector) was run to detect the most likely SPDX license identifier. The detector did not detect a license for ~81% of the repositories, in which case the repository was excluded from the dataset. A file was in included in the safe license dataset if at least one of the repositories containing the file had a permissive license. #### Who are the source language producers? The source (code) language producers are users of GitHub that created unique repository names between January 1st, 2015, and March 31st, 2022. ### Personal and Sensitive Information The released dataset may contain sensitive information such as emails, IP addresses, and API/ssh keys that have previously been published to public repositories on GitHub. Deduplication has helped to reduce the amount of sensitive data that may exist. In the event that the dataset contains personal information, researchers should only use public, non-personal information in support of conducting and publishing their [open-access](https://en.wikipedia.org/wiki/Open_access) research. Personal information should not be used for spamming purposes, including sending unsolicited emails or selling of personal information. Complaints, removal requests, and "do not contact" requests can be sent to contact@bigcode-project.org. The PII pipeline for this dataset is still a work in progress (see this [issue](https://github.com/bigcode-project/admin/issues/9) for updates). Researchers that wish to contribute to the anonymization pipeline of the project can apply to join [here](https://www.bigcode-project.org/docs/about/join/). Developers with source code in the dataset can request to have it removed [here](https://www.bigcode-project.org/docs/about/ip/) (proof of code contribution is required). ### Opting out of The Stack We are giving developers the ability to have their code removed from the dataset upon request. The process for submitting and enacting removal requests will keep evolving throughout the project as we receive feedback and build up more data governance tools. You can check if your code is in The Stack with the following ["Am I In The Stack?" Space](https://huggingface.co/spaces/bigcode/in-the-stack). If you'd like to have your data removed from the dataset follow the [instructions on GitHub](https://github.com/bigcode-project/opt-out-v2). ## Considerations for Using the Data ### Social Impact of Dataset The Stack is an output of the BigCode Project. BigCode aims to be responsible by design and by default. The project is conducted in the spirit of Open Science, focused on the responsible development of LLMs for code. With the release of The Stack, we aim to increase access, reproducibility, and transparency of code LLMs in the research community. Work to de-risk and improve on the implementation of ethical best practices of code LLMs is conducted in various BigCode working groups. The Legal, Ethics, and Governance working group has explored topics such as licensing (including copyleft and the intended use of permissively licensed code), attribution of generated code to original code, rights to restrict processing, the inclusion of Personally Identifiable Information (PII), and risks of malicious code, among other topics. This work is ongoing as of October 25th, 2022. We expect code LLMs to enable people from diverse backgrounds to write higher quality code and develop low-code applications. Mission-critical software could become easier to maintain as professional developers are guided by code-generating systems on how to write more robust and efficient code. While the social impact is intended to be positive, the increased accessibility of code LLMs comes with certain risks such as over-reliance on the generated code and long-term effects on the software development job market. A broader impact analysis relating to Code LLMs can be found in section 7 of this [paper](https://arxiv.org/abs/2107.03374). An in-depth risk assessments for Code LLMs can be found in section 4 of this [paper](https://arxiv.org/abs/2207.14157). ### Discussion of Biases The code collected from GitHub does not contain demographic information or proxy information about the demographics. However, it is not without risks, as the comments within the code may contain harmful or offensive language, which could be learned by the models. Widely adopted programming languages like C and Javascript are overrepresented compared to niche programming languages like Julia and Scala. Some programming languages such as SQL, Batchfile, TypeScript are less likely to be permissively licensed (4% vs the average 10%). This may result in a biased representation of those languages. Permissively licensed files also tend to be longer. Roughly 40 natural languages are present in docstrings and comments with English being the most prevalent. In python files, it makes up ~96% of the dataset. For further information on data analysis of the Stack, see this [repo](https://github.com/bigcode-project/bigcode-analysis). ### Other Known Limitations One of the current limitations of The Stack is that scraped HTML for websites may not be compliant with Web Content Accessibility Guidelines ([WCAG](https://www.w3.org/WAI/standards-guidelines/wcag/)). This could have an impact on HTML-generated code that may introduce web accessibility issues. The training dataset could contain malicious code and/or the model could be used to generate malware or ransomware. To the best of our knowledge, all files contained in the dataset are licensed with one of the permissive licenses (see list in [Licensing information](#licensing-information)). The accuracy of license attribution is limited by the accuracy of GHArchive and go-license-detector. Any mistakes should be reported to BigCode Project for review and follow-up as needed. ## Additional Information ### Dataset Curators 1. Harm de Vries, ServiceNow Research, harm.devries@servicenow.com 2. Leandro von Werra, Hugging Face, leandro@huggingface.co ### Licensing Information The Stack is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point. The list of [SPDX license identifiers](https://spdx.org/licenses/) included in the dataset can be found [here](https://huggingface.co/datasets/bigcode/the-stack-dedup/blob/main/licenses.json). ### Citation Information ``` @article{Kocetkov2022TheStack, title={The Stack: 3 TB of permissively licensed source code}, author={Kocetkov, Denis and Li, Raymond and Ben Allal, Loubna and Li, Jia and Mou,Chenghao and Muñoz Ferrandis, Carlos and Jernite, Yacine and Mitchell, Margaret and Hughes, Sean and Wolf, Thomas and Bahdanau, Dzmitry and von Werra, Leandro and de Vries, Harm}, journal={Preprint}, year={2022} } ``` ### Contributions [More Information Needed] ## Terms of Use for The Stack The Stack dataset is a collection of source code in over 300 programming languages. We ask that you read and acknowledge the following points before using the dataset: 1. The Stack is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point. 2. The Stack is regularly updated to enact validated data removal requests. By clicking on "Access repository", you agree to update your own version of The Stack to the most recent usable version specified by the maintainers in [the following thread](https://huggingface.co/datasets/bigcode/the-stack/discussions/7). If you have questions about dataset versions and allowed uses, please also ask them in the dataset’s [community discussions](https://huggingface.co/datasets/bigcode/the-stack/discussions/new). We will also notify users via email when the latest usable version changes. 3. To host, share, or otherwise provide access to The Stack dataset, you must include these Terms of Use and require users to agree to it.
zeroshot/twitter-financial-news-sentiment
--- annotations_creators: - other language: - en language_creators: - other license: - mit multilinguality: - monolingual pretty_name: twitter financial news size_categories: - 10K<n<100K source_datasets: - original tags: - twitter - finance - markets - stocks - wallstreet - quant - hedgefunds - markets task_categories: - text-classification task_ids: - multi-class-classification --- ### Dataset Description The Twitter Financial News dataset is an English-language dataset containing an annotated corpus of finance-related tweets. This dataset is used to classify finance-related tweets for their sentiment. 1. The dataset holds 11,932 documents annotated with 3 labels: ```python sentiments = { "LABEL_0": "Bearish", "LABEL_1": "Bullish", "LABEL_2": "Neutral" } ``` The data was collected using the Twitter API. The current dataset supports the multi-class classification task. ### Task: Sentiment Analysis # Data Splits There are 2 splits: train and validation. Below are the statistics: | Dataset Split | Number of Instances in Split | | ------------- | ------------------------------------------- | | Train | 9,938 | | Validation | 2,486 | # Licensing Information The Twitter Financial Dataset (sentiment) version 1.0.0 is released under the MIT License.
large_spanish_corpus
--- annotations_creators: - no-annotation language_creators: - expert-generated language: - es license: - mit multilinguality: - monolingual size_categories: - 100K<n<1M - 100M<n<1B - 10K<n<100K - 10M<n<100M - 1M<n<10M source_datasets: - original task_categories: - other task_ids: [] paperswithcode_id: null pretty_name: The Large Spanish Corpus tags: [] dataset_info: - config_name: JRC features: - name: text dtype: string splits: - name: train num_bytes: 380895504 num_examples: 3410620 download_size: 4099166669 dataset_size: 380895504 - config_name: EMEA features: - name: text dtype: string splits: - name: train num_bytes: 100259598 num_examples: 1221233 download_size: 4099166669 dataset_size: 100259598 - config_name: GlobalVoices features: - name: text dtype: string splits: - name: train num_bytes: 114435784 num_examples: 897075 download_size: 4099166669 dataset_size: 114435784 - config_name: ECB features: - name: text dtype: string splits: - name: train num_bytes: 336285757 num_examples: 1875738 download_size: 4099166669 dataset_size: 336285757 - config_name: DOGC features: - name: text dtype: string splits: - name: train num_bytes: 898279656 num_examples: 10917053 download_size: 4099166669 dataset_size: 898279656 - config_name: all_wikis features: - name: text dtype: string splits: - name: train num_bytes: 3782280549 num_examples: 28109484 download_size: 4099166669 dataset_size: 3782280549 - config_name: TED features: - name: text dtype: string splits: - name: train num_bytes: 15858148 num_examples: 157910 download_size: 4099166669 dataset_size: 15858148 - config_name: multiUN features: - name: text dtype: string splits: - name: train num_bytes: 2327269369 num_examples: 13127490 download_size: 4099166669 dataset_size: 2327269369 - config_name: Europarl features: - name: text dtype: string splits: - name: train num_bytes: 359897865 num_examples: 2174141 download_size: 4099166669 dataset_size: 359897865 - config_name: NewsCommentary11 features: - name: text dtype: string splits: - name: train num_bytes: 48350573 num_examples: 288771 download_size: 4099166669 dataset_size: 48350573 - config_name: UN features: - name: text dtype: string splits: - name: train num_bytes: 23654590 num_examples: 74067 download_size: 4099166669 dataset_size: 23654590 - config_name: EUBookShop features: - name: text dtype: string splits: - name: train num_bytes: 1326861077 num_examples: 8214959 download_size: 4099166669 dataset_size: 1326861077 - config_name: ParaCrawl features: - name: text dtype: string splits: - name: train num_bytes: 1840430234 num_examples: 15510649 download_size: 4099166669 dataset_size: 1840430234 - config_name: OpenSubtitles2018 features: - name: text dtype: string splits: - name: train num_bytes: 7477281776 num_examples: 213508602 download_size: 4099166669 dataset_size: 7477281776 - config_name: DGT features: - name: text dtype: string splits: - name: train num_bytes: 396217351 num_examples: 3168368 download_size: 4099166669 dataset_size: 396217351 - config_name: combined features: - name: text dtype: string splits: - name: train num_bytes: 19428257807 num_examples: 302656160 download_size: 4099166669 dataset_size: 19428257807 config_names: - DGT - DOGC - ECB - EMEA - EUBookShop - Europarl - GlobalVoices - JRC - NewsCommentary11 - OpenSubtitles2018 - ParaCrawl - TED - UN - all_wikis - combined - multiUN --- # Dataset Card for The Large Spanish Corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/josecannete/spanish-corpora](https://github.com/josecannete/spanish-corpora) - **Repository:** [https://github.com/josecannete/spanish-corpora](https://github.com/josecannete/spanish-corpora) - **Paper:** - **Data:** https://doi.org/10.5281/zenodo.3247731 - **Leaderboard:** - **Point of Contact:** [José Cañete](mailto:jose.canete@ug.uchile.cl) (corpus creator) or [Lewis Tunstall](mailto:lewis.c.tunstall@gmail.com) (corpus submitter) ### Dataset Summary The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, `all_wiki` only includes examples from Spanish Wikipedia: ```python from datasets import load_dataset all_wiki = load_dataset('large_spanish_corpus', name='all_wiki') ``` By default, the config is set to "combined" which loads all the corpora. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Spanish ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits The following is taken from the corpus' source repsository: * Spanish Wikis: Which include Wikipedia, Wikinews, Wikiquotes and more. These were first processed with wikiextractor (https://github.com/josecannete/wikiextractorforBERT) using the wikis dump of 20/04/2019. * ParaCrawl: Spanish portion of ParaCrawl (http://opus.nlpl.eu/ParaCrawl.php) * EUBookshop: Spanish portion of EUBookshop (http://opus.nlpl.eu/EUbookshop.php) * MultiUN: Spanish portion of MultiUN (http://opus.nlpl.eu/MultiUN.php) * OpenSubtitles: Spanish portion of OpenSubtitles2018 (http://opus.nlpl.eu/OpenSubtitles-v2018.php) * DGC: Spanish portion of DGT (http://opus.nlpl.eu/DGT.php) * DOGC: Spanish portion of DOGC (http://opus.nlpl.eu/DOGC.php) * ECB: Spanish portion of ECB (http://opus.nlpl.eu/ECB.php) * EMEA: Spanish portion of EMEA (http://opus.nlpl.eu/EMEA.php) * Europarl: Spanish portion of Europarl (http://opus.nlpl.eu/Europarl.php) * GlobalVoices: Spanish portion of GlobalVoices (http://opus.nlpl.eu/GlobalVoices.php) * JRC: Spanish portion of JRC (http://opus.nlpl.eu/JRC-Acquis.php) * News-Commentary11: Spanish portion of NCv11 (http://opus.nlpl.eu/News-Commentary-v11.php) * TED: Spanish portion of TED (http://opus.nlpl.eu/TED2013.php) * UN: Spanish portion of UN (http://opus.nlpl.eu/UN.php) ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@lewtun](https://github.com/lewtun) for adding this dataset.
hate_speech18
--- annotations_creators: - found language_creators: - found language: - en license: - cc-by-sa-3.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - intent-classification paperswithcode_id: hate-speech pretty_name: Hate Speech dataset_info: features: - name: text dtype: string - name: user_id dtype: int64 - name: subforum_id dtype: int64 - name: num_contexts dtype: int64 - name: label dtype: class_label: names: '0': noHate '1': hate '2': idk/skip '3': relation splits: - name: train num_bytes: 1375340 num_examples: 10944 download_size: 3664530 dataset_size: 1375340 train-eval-index: - config: default task: text-classification task_id: multi_class_classification splits: train_split: train col_mapping: text: text label: target metrics: - type: accuracy name: Accuracy - type: f1 name: F1 macro args: average: macro - type: f1 name: F1 micro args: average: micro - type: f1 name: F1 weighted args: average: weighted - type: precision name: Precision macro args: average: macro - type: precision name: Precision micro args: average: micro - type: precision name: Precision weighted args: average: weighted - type: recall name: Recall macro args: average: macro - type: recall name: Recall micro args: average: micro - type: recall name: Recall weighted args: average: weighted --- # Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/Vicomtech/hate-speech-dataset - **Repository:** https://github.com/Vicomtech/hate-speech-dataset - **Paper:** https://www.aclweb.org/anthology/W18-51.pdf - **Leaderboard:** - **Point of Contact:** ### Dataset Summary These files contain text extracted from Stormfront, a white supremacist forum. A random set of forums posts have been sampled from several subforums and split into sentences. Those sentences have been manually labelled as containing hate speech or not, according to certain annotation guidelines. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - text: the provided sentence - user_id: information to make it possible to re-build the conversations these sentences belong to - subforum_id: information to make it possible to re-build the conversations these sentences belong to - num_contexts: number of previous posts the annotator had to read before making a decision over the category of the sentence - label: hate, noHate, relation (sentence in the post doesn't contain hate speech on their own, but combination of serveral sentences does) or idk/skip (sentences that are not written in English or that don't contain information as to be classified into hate or noHate) ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{gibert2018hate, title = "{Hate Speech Dataset from a White Supremacy Forum}", author = "de Gibert, Ona and Perez, Naiara and Garc{\'\i}a-Pablos, Aitor and Cuadros, Montse", booktitle = "Proceedings of the 2nd Workshop on Abusive Language Online ({ALW}2)", month = oct, year = "2018", address = "Brussels, Belgium", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/W18-5102", doi = "10.18653/v1/W18-5102", pages = "11--20", } ``` ### Contributions Thanks to [@czabo](https://github.com/czabo) for adding this dataset.
mteb/summeval
--- language: - en --- # SummEval The annotations include summaries generated by 16 models from 100 source news articles (1600 examples in total). Each of the summaries was annotated by 5 indepedent crowdsource workers and 3 independent experts (8 annotations in total). Summaries were evaluated across 4 dimensions: coherence, consistency, fluency, relevance. Each source news article comes with the original reference from the CNN/DailyMail dataset and 10 additional crowdsources reference summaries. For this dataset, we averaged the 3 **expert** annotations to get the human scores. source: https://github.com/Yale-LILY/SummEval
xcsr
--- annotations_creators: - crowdsourced language_creators: - crowdsourced - machine-generated language: - ar - de - en - es - fr - hi - it - ja - nl - pl - pt - ru - sw - ur - vi - zh license: - mit multilinguality: - multilingual size_categories: - 1K<n<10K source_datasets: - extended|codah - extended|commonsense_qa task_categories: - question-answering task_ids: - multiple-choice-qa pretty_name: X-CSR dataset_info: - config_name: X-CODAH-ar features: - name: id dtype: string - name: lang dtype: string - name: question_tag dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 568026 num_examples: 1000 - name: validation num_bytes: 165022 num_examples: 300 download_size: 265474 dataset_size: 733048 - config_name: X-CODAH-de features: - name: id dtype: string - name: lang dtype: string - name: question_tag dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 476087 num_examples: 1000 - name: validation num_bytes: 138764 num_examples: 300 download_size: 259705 dataset_size: 614851 - config_name: X-CODAH-en features: - name: id dtype: string - name: lang dtype: string - name: question_tag dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 417000 num_examples: 1000 - name: validation num_bytes: 121811 num_examples: 300 download_size: 217262 dataset_size: 538811 - config_name: X-CODAH-es features: - name: id dtype: string - name: lang dtype: string - name: question_tag dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 450954 num_examples: 1000 - name: validation num_bytes: 130678 num_examples: 300 download_size: 242647 dataset_size: 581632 - config_name: X-CODAH-fr features: - name: id dtype: string - name: lang dtype: string - name: question_tag dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 477525 num_examples: 1000 - name: validation num_bytes: 137889 num_examples: 300 download_size: 244998 dataset_size: 615414 - config_name: X-CODAH-hi features: - name: id dtype: string - name: lang dtype: string - name: question_tag dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 973733 num_examples: 1000 - name: validation num_bytes: 283004 num_examples: 300 download_size: 336862 dataset_size: 1256737 - config_name: X-CODAH-it features: - name: id dtype: string - name: lang dtype: string - name: question_tag dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 457055 num_examples: 1000 - name: validation num_bytes: 133504 num_examples: 300 download_size: 241780 dataset_size: 590559 - config_name: X-CODAH-jap features: - name: id dtype: string - name: lang dtype: string - name: question_tag dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 538415 num_examples: 1000 - name: validation num_bytes: 157392 num_examples: 300 download_size: 264995 dataset_size: 695807 - config_name: X-CODAH-nl features: - name: id dtype: string - name: lang dtype: string - name: question_tag dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 448728 num_examples: 1000 - name: validation num_bytes: 130018 num_examples: 300 download_size: 237855 dataset_size: 578746 - config_name: X-CODAH-pl features: - name: id dtype: string - name: lang dtype: string - name: question_tag dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 438538 num_examples: 1000 - name: validation num_bytes: 127750 num_examples: 300 download_size: 254894 dataset_size: 566288 - config_name: X-CODAH-pt features: - name: id dtype: string - name: lang dtype: string - name: question_tag dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 455583 num_examples: 1000 - name: validation num_bytes: 131933 num_examples: 300 download_size: 238858 dataset_size: 587516 - config_name: X-CODAH-ru features: - name: id dtype: string - name: lang dtype: string - name: question_tag dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 674567 num_examples: 1000 - name: validation num_bytes: 193713 num_examples: 300 download_size: 314200 dataset_size: 868280 - config_name: X-CODAH-sw features: - name: id dtype: string - name: lang dtype: string - name: question_tag dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 423421 num_examples: 1000 - name: validation num_bytes: 124770 num_examples: 300 download_size: 214100 dataset_size: 548191 - config_name: X-CODAH-ur features: - name: id dtype: string - name: lang dtype: string - name: question_tag dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 687123 num_examples: 1000 - name: validation num_bytes: 199737 num_examples: 300 download_size: 294475 dataset_size: 886860 - config_name: X-CODAH-vi features: - name: id dtype: string - name: lang dtype: string - name: question_tag dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 543089 num_examples: 1000 - name: validation num_bytes: 156888 num_examples: 300 download_size: 251390 dataset_size: 699977 - config_name: X-CODAH-zh features: - name: id dtype: string - name: lang dtype: string - name: question_tag dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 394660 num_examples: 1000 - name: validation num_bytes: 115025 num_examples: 300 download_size: 237827 dataset_size: 509685 - config_name: X-CSQA-ar features: - name: id dtype: string - name: lang dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 288645 num_examples: 1074 - name: validation num_bytes: 273580 num_examples: 1000 download_size: 255626 dataset_size: 562225 - config_name: X-CSQA-de features: - name: id dtype: string - name: lang dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 234170 num_examples: 1074 - name: validation num_bytes: 222840 num_examples: 1000 download_size: 242762 dataset_size: 457010 - config_name: X-CSQA-en features: - name: id dtype: string - name: lang dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 215617 num_examples: 1074 - name: validation num_bytes: 205079 num_examples: 1000 download_size: 222677 dataset_size: 420696 - config_name: X-CSQA-es features: - name: id dtype: string - name: lang dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 236817 num_examples: 1074 - name: validation num_bytes: 224497 num_examples: 1000 download_size: 238810 dataset_size: 461314 - config_name: X-CSQA-fr features: - name: id dtype: string - name: lang dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 243952 num_examples: 1074 - name: validation num_bytes: 231396 num_examples: 1000 download_size: 244676 dataset_size: 475348 - config_name: X-CSQA-hi features: - name: id dtype: string - name: lang dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 415011 num_examples: 1074 - name: validation num_bytes: 396318 num_examples: 1000 download_size: 304090 dataset_size: 811329 - config_name: X-CSQA-it features: - name: id dtype: string - name: lang dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 232604 num_examples: 1074 - name: validation num_bytes: 220902 num_examples: 1000 download_size: 236130 dataset_size: 453506 - config_name: X-CSQA-jap features: - name: id dtype: string - name: lang dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 250846 num_examples: 1074 - name: validation num_bytes: 240404 num_examples: 1000 download_size: 249420 dataset_size: 491250 - config_name: X-CSQA-nl features: - name: id dtype: string - name: lang dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 226949 num_examples: 1074 - name: validation num_bytes: 216194 num_examples: 1000 download_size: 231078 dataset_size: 443143 - config_name: X-CSQA-pl features: - name: id dtype: string - name: lang dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 231479 num_examples: 1074 - name: validation num_bytes: 219814 num_examples: 1000 download_size: 245829 dataset_size: 451293 - config_name: X-CSQA-pt features: - name: id dtype: string - name: lang dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 235469 num_examples: 1074 - name: validation num_bytes: 222785 num_examples: 1000 download_size: 238902 dataset_size: 458254 - config_name: X-CSQA-ru features: - name: id dtype: string - name: lang dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 341749 num_examples: 1074 - name: validation num_bytes: 323724 num_examples: 1000 download_size: 296252 dataset_size: 665473 - config_name: X-CSQA-sw features: - name: id dtype: string - name: lang dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 222215 num_examples: 1074 - name: validation num_bytes: 211426 num_examples: 1000 download_size: 214954 dataset_size: 433641 - config_name: X-CSQA-ur features: - name: id dtype: string - name: lang dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 306129 num_examples: 1074 - name: validation num_bytes: 292001 num_examples: 1000 download_size: 267789 dataset_size: 598130 - config_name: X-CSQA-vi features: - name: id dtype: string - name: lang dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 265210 num_examples: 1074 - name: validation num_bytes: 253502 num_examples: 1000 download_size: 244641 dataset_size: 518712 - config_name: X-CSQA-zh features: - name: id dtype: string - name: lang dtype: string - name: question struct: - name: stem dtype: string - name: choices sequence: - name: label dtype: string - name: text dtype: string - name: answerKey dtype: string splits: - name: test num_bytes: 197444 num_examples: 1074 - name: validation num_bytes: 188273 num_examples: 1000 download_size: 207379 dataset_size: 385717 configs: - config_name: X-CODAH-ar data_files: - split: test path: X-CODAH-ar/test-* - split: validation path: X-CODAH-ar/validation-* - config_name: X-CODAH-de data_files: - split: test path: X-CODAH-de/test-* - split: validation path: X-CODAH-de/validation-* - config_name: X-CODAH-en data_files: - split: test path: X-CODAH-en/test-* - split: validation path: X-CODAH-en/validation-* - config_name: X-CODAH-es data_files: - split: test path: X-CODAH-es/test-* - split: validation path: X-CODAH-es/validation-* - config_name: X-CODAH-fr data_files: - split: test path: X-CODAH-fr/test-* - split: validation path: X-CODAH-fr/validation-* - config_name: X-CODAH-hi data_files: - split: test path: X-CODAH-hi/test-* - split: validation path: X-CODAH-hi/validation-* - config_name: X-CODAH-it data_files: - split: test path: X-CODAH-it/test-* - split: validation path: X-CODAH-it/validation-* - config_name: X-CODAH-jap data_files: - split: test path: X-CODAH-jap/test-* - split: validation path: X-CODAH-jap/validation-* - config_name: X-CODAH-nl data_files: - split: test path: X-CODAH-nl/test-* - split: validation path: X-CODAH-nl/validation-* - config_name: X-CODAH-pl data_files: - split: test path: X-CODAH-pl/test-* - split: validation path: X-CODAH-pl/validation-* - config_name: X-CODAH-pt data_files: - split: test path: X-CODAH-pt/test-* - split: validation path: X-CODAH-pt/validation-* - config_name: X-CODAH-ru data_files: - split: test path: X-CODAH-ru/test-* - split: validation path: X-CODAH-ru/validation-* - config_name: X-CODAH-sw data_files: - split: test path: X-CODAH-sw/test-* - split: validation path: X-CODAH-sw/validation-* - config_name: X-CODAH-ur data_files: - split: test path: X-CODAH-ur/test-* - split: validation path: X-CODAH-ur/validation-* - config_name: X-CODAH-vi data_files: - split: test path: X-CODAH-vi/test-* - split: validation path: X-CODAH-vi/validation-* - config_name: X-CODAH-zh data_files: - split: test path: X-CODAH-zh/test-* - split: validation path: X-CODAH-zh/validation-* - config_name: X-CSQA-ar data_files: - split: test path: X-CSQA-ar/test-* - split: validation path: X-CSQA-ar/validation-* - config_name: X-CSQA-de data_files: - split: test path: X-CSQA-de/test-* - split: validation path: X-CSQA-de/validation-* - config_name: X-CSQA-en data_files: - split: test path: X-CSQA-en/test-* - split: validation path: X-CSQA-en/validation-* - config_name: X-CSQA-es data_files: - split: test path: X-CSQA-es/test-* - split: validation path: X-CSQA-es/validation-* - config_name: X-CSQA-fr data_files: - split: test path: X-CSQA-fr/test-* - split: validation path: X-CSQA-fr/validation-* - config_name: X-CSQA-hi data_files: - split: test path: X-CSQA-hi/test-* - split: validation path: X-CSQA-hi/validation-* - config_name: X-CSQA-it data_files: - split: test path: X-CSQA-it/test-* - split: validation path: X-CSQA-it/validation-* - config_name: X-CSQA-jap data_files: - split: test path: X-CSQA-jap/test-* - split: validation path: X-CSQA-jap/validation-* - config_name: X-CSQA-nl data_files: - split: test path: X-CSQA-nl/test-* - split: validation path: X-CSQA-nl/validation-* - config_name: X-CSQA-pl data_files: - split: test path: X-CSQA-pl/test-* - split: validation path: X-CSQA-pl/validation-* - config_name: X-CSQA-pt data_files: - split: test path: X-CSQA-pt/test-* - split: validation path: X-CSQA-pt/validation-* - config_name: X-CSQA-ru data_files: - split: test path: X-CSQA-ru/test-* - split: validation path: X-CSQA-ru/validation-* - config_name: X-CSQA-sw data_files: - split: test path: X-CSQA-sw/test-* - split: validation path: X-CSQA-sw/validation-* - config_name: X-CSQA-ur data_files: - split: test path: X-CSQA-ur/test-* - split: validation path: X-CSQA-ur/validation-* - config_name: X-CSQA-vi data_files: - split: test path: X-CSQA-vi/test-* - split: validation path: X-CSQA-vi/validation-* - config_name: X-CSQA-zh data_files: - split: test path: X-CSQA-zh/test-* - split: validation path: X-CSQA-zh/validation-* --- # Dataset Card for X-CSR ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://inklab.usc.edu//XCSR/ - **Repository:** https://github.com/INK-USC/XCSR - **Paper:** https://arxiv.org/abs/2106.06937 - **Leaderboard:** https://inklab.usc.edu//XCSR/leaderboard - **Point of Contact:** https://yuchenlin.xyz/ ### Dataset Summary To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future. ### Supported Tasks and Leaderboards https://inklab.usc.edu//XCSR/leaderboard ### Languages The total 16 languages for X-CSR: {en, zh, de, es, fr, it, jap, nl, pl, pt, ru, ar, vi, hi, sw, ur}. ## Dataset Structure ### Data Instances An example of the X-CSQA dataset: ``` { "id": "be1920f7ba5454ad", # an id shared by all languages "lang": "en", # one of the 16 language codes. "question": { "stem": "What will happen to your knowledge with more learning?", # question text "choices": [ {"label": "A", "text": "headaches" }, {"label": "B", "text": "bigger brain" }, {"label": "C", "text": "education" }, {"label": "D", "text": "growth" }, {"label": "E", "text": "knowing more" } ] }, "answerKey": "D" # hidden for test data. } ``` An example of the X-CODAH dataset: ``` { "id": "b8eeef4a823fcd4b", # an id shared by all languages "lang": "en", # one of the 16 language codes. "question_tag": "o", # one of 6 question types "question": { "stem": " ", # always a blank as a dummy question "choices": [ {"label": "A", "text": "Jennifer loves her school very much, she plans to drop every courses."}, {"label": "B", "text": "Jennifer loves her school very much, she is never absent even when she's sick."}, {"label": "C", "text": "Jennifer loves her school very much, she wants to get a part-time job."}, {"label": "D", "text": "Jennifer loves her school very much, she quits school happily."} ] }, "answerKey": "B" # hidden for test data. } ``` ### Data Fields - id: an id shared by all languages - lang: one of the 16 language codes. - question_tag: one of 6 question types - stem: always a blank as a dummy question - choices: a list of answers, each answer has: - label: a string answer identifier for each answer - text: the answer text ### Data Splits - X-CSQA: There are 8,888 examples for training in English, 1,000 for development in each language, and 1,074 examples for testing in each language. - X-CODAH: There are 8,476 examples for training in English, 300 for development in each language, and 1,000 examples for testing in each language. ## Dataset Creation ### Curation Rationale To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. The details of the dataset construction, especially the translation procedures, can be found in section A of the appendix of the [paper](https://inklab.usc.edu//XCSR/XCSR_paper.pdf). ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information ``` # X-CSR @inproceedings{lin-etal-2021-common, title = "Common Sense Beyond {E}nglish: Evaluating and Improving Multilingual Language Models for Commonsense Reasoning", author = "Lin, Bill Yuchen and Lee, Seyeon and Qiao, Xiaoyang and Ren, Xiang", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-long.102", doi = "10.18653/v1/2021.acl-long.102", pages = "1274--1287", abstract = "Commonsense reasoning research has so far been limited to English. We aim to evaluate and improve popular multilingual language models (ML-LMs) to help advance commonsense reasoning (CSR) beyond English. We collect the Mickey corpus, consisting of 561k sentences in 11 different languages, which can be used for analyzing and improving ML-LMs. We propose Mickey Probe, a language-general probing task for fairly evaluating the common sense of popular ML-LMs across different languages. In addition, we also create two new datasets, X-CSQA and X-CODAH, by translating their English versions to 14 other languages, so that we can evaluate popular ML-LMs for cross-lingual commonsense reasoning. To improve the performance beyond English, we propose a simple yet effective method {---} multilingual contrastive pretraining (MCP). It significantly enhances sentence representations, yielding a large performance gain on both benchmarks (e.g., +2.7{\%} accuracy for X-CSQA over XLM-R{\_}L).", } # CSQA @inproceedings{Talmor2019commonsenseqaaq, address = {Minneapolis, Minnesota}, author = {Talmor, Alon and Herzig, Jonathan and Lourie, Nicholas and Berant, Jonathan}, booktitle = {Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)}, doi = {10.18653/v1/N19-1421}, pages = {4149--4158}, publisher = {Association for Computational Linguistics}, title = {CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge}, url = {https://www.aclweb.org/anthology/N19-1421}, year = {2019} } # CODAH @inproceedings{Chen2019CODAHAA, address = {Minneapolis, USA}, author = {Chen, Michael and D{'}Arcy, Mike and Liu, Alisa and Fernandez, Jared and Downey, Doug}, booktitle = {Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for {NLP}}, doi = {10.18653/v1/W19-2008}, pages = {63--69}, publisher = {Association for Computational Linguistics}, title = {CODAH: An Adversarially-Authored Question Answering Dataset for Common Sense}, url = {https://www.aclweb.org/anthology/W19-2008}, year = {2019} } ``` ### Contributions Thanks to [Bill Yuchen Lin](https://yuchenlin.xyz/), [Seyeon Lee](https://seyeon-lee.github.io/), [Xiaoyang Qiao](https://www.linkedin.com/in/xiaoyang-qiao/), [Xiang Ren](http://www-bcf.usc.edu/~xiangren/) for adding this dataset.
allenai/tulu-v2-sft-mixture
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: dataset dtype: string - name: id dtype: string - name: messages list: - name: role dtype: string - name: content dtype: string splits: - name: train num_bytes: 1239293363 num_examples: 326154 download_size: 554561769 dataset_size: 1239293363 license: odc-by task_categories: - question-answering - conversational - text-generation language: - en size_categories: - 100K<n<1M --- <img src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/tulu-v2/Tulu%20V2%20banner.png" alt="TuluV2 banner" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Dataset Card for Tulu V2 Mix *Note the [ODC-BY license](https://opendatacommons.org/licenses/by/1-0/), indicating that different licenses apply to subsets of the data. This means that some portions of the dataset are non-commercial. We present the mixture as a research artifact.* Tulu is a series of language models that are trained to act as helpful assistants. The dataset consists of a mix of : * [FLAN](https://github.com/google-research/FLAN/tree/main) (Apache 2.0): We use 50,000 examples sampled from FLAN v2. To emphasize CoT-style reasoning, we sample another 50,000 examples from the CoT subset of the FLAN v2 mixture. * [Open Assistant 1](https://huggingface.co/datasets/OpenAssistant/oasst1) (Apache 2.0): We isolate the highest-scoring paths in each conversation tree and use these samples, resulting in 7,708 examples. * [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) (Apache 2.0 listed, no official repo found): We use all 114,046 from our processed ShareGPT dataset, as we found ShareGPT gave strong performance in prior work. * [GPT4-Alpaca](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM#data-release) (CC By NC 4.0):We sample 20,000 samples from GPT-4 Alpaca to further include distilled GPT-4 data. * [Code-Alpaca](https://github.com/sahil280114/codealpaca) (CC By NC 4.0):We use all 20,022 examples from Code Alpaca, following our prior V1 mixture, in order to improve model code abilities. * [LIMA](https://huggingface.co/datasets/GAIR/lima) (CC BY-NC-SA): We use 1,030 examples from LIMA as an example of carefully curated data. * [WizardLM Evol Instruct](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k) (No license provided): We subsample 30,000 examples from WizardLM, which contains distilled data of increasing diversity and complexity. * [Open-Orca](https://huggingface.co/datasets/Open-Orca/OpenOrca) (MIT): We sample 30,000 samples generated by GPT-4 from OpenOrca, a reproduction of Orca Mukherjee et al., 2023, which augments FLAN data with additional model-generated explanations * Hardcoded: A collection of prompts such as `Tell me about yourself' such that the model generates correct outputs given inquiries about its name or developers. We wrote 14 samples and repeat each sample 10 times in the mixture, resulting in 140 total samples. * Science: 7,544 examples from a mixture of scientific document understand tasks—including question answering, fact-checking, summarization, and information extraction (under development, standalone release soon). These are made by taking either just the training set of the subsets or the entire section if no splits are present. Tulu V2 is presented as a singular training split. [Tulu V2 DPO 70B](https://huggingface.co/allenai/tulu-2-dpo-70b), and is a fine-tuned version of Llama 2 that was trained on on a mix of publicly available, synthetic and human datasets using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290). **Model Family:** Other models and the dataset are found in the [Tulu V2 collection](https://huggingface.co/collections/allenai/tulu-v2-suite-6551b56e743e6349aab45101). The length distribution of the dataset can be seen below: <img src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/tulu-v2/length_histogram_v2.png" alt="TuluV2 histogram" width="600" style="margin-left:'auto' margin-right:'auto' display:'block'"/> Tulu V1 Mix can be found [here](https://huggingface.co/datasets/allenai/tulu-v1). ### Science data note The included science data is from the following categories: <img src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/tulu-v2/science_data.png" alt="TuluV2 science data mix" width="600" style="margin-left:'auto' margin-right:'auto' display:'block'"/> Note that some of the examples include an off-by-one error in the sentence indexing that had a small or negligible impact on performance. This was found during testing and will be updated in future versions, with the detailed release of the dataset artifact itself coming in a future release. ### License We are releasing this dataset under the terms of [ODC-BY](https://opendatacommons.org/licenses/by/1-0/). By using this, you are also bound by the [Common Crawl terms of use](https://commoncrawl.org/terms-of-use/) in respect of the content contained in the dataset.
Graphcore/vqa
--- language: - en license: - cc-by-4.0 ---
argilla/distilabel-capybara-dpo-7k-binarized
--- license: apache-2.0 task_categories: - question-answering - text-generation language: - en tags: - Physics - Biology - Math - Chemistry - Culture - Logic - Roleplay - rlaif - rlhf - dpo - distilabel - synthetic pretty_name: CapybaraDPO-7k size_categories: - 1K<n<10K dataset_info: features: - name: source dtype: string - name: conversation list: - name: input dtype: string - name: output dtype: string - name: original_response dtype: string - name: generation_prompt sequence: string - name: raw_generation_responses sequence: string - name: new_generations sequence: string - name: prompt dtype: string - name: chosen list: - name: content dtype: string - name: role dtype: string - name: rejected list: - name: content dtype: string - name: role dtype: string - name: rating_chosen dtype: int64 - name: rating_rejected dtype: int64 - name: chosen_model dtype: string - name: rejected_model dtype: string splits: - name: train num_bytes: 348791651 num_examples: 7563 download_size: 155776049 dataset_size: 348791651 configs: - config_name: default data_files: - split: train path: data/train-* --- # Capybara-DPO 7K binarized > A DPO dataset built with [distilabel](https://github.com/argilla-io/distilabel) atop the awesome [LDJnr/Capybara](https://huggingface.co/datasets/LDJnr/Capybara) > This is a preview version to collect feedback from the community. v2 will include the full base dataset and responses from more powerful models. <div> <img src="https://cdn-uploads.huggingface.co/production/uploads/60420dccc15e823a685f2b03/Vmr0FtTvnny6Snm-UDM_n.png"> </div> <p align="center"> <a href="https://github.com/argilla-io/distilabel"> <img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/> </a> </p> ## Why? Multi-turn dialogue data is key to fine-tune capable chat models. Multi-turn preference data has been used by the most relevant RLHF works (Anthropic, Meta Llama2, etc.). Unfortunately, there are very few multi-turn open datasets for DPO/RLHF. This dataset is the first of a series of datasets to fill this gap for the Open Source AI community. Why Capybara? Because it's 🔥 ## Dataset structure Here's a video showing the dataset structure using Argilla UI. For preference tuning, chosen and rejected mean the best/worse response to the last turn. <video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/60420dccc15e823a685f2b03/KoYK-Or0JNNVS9PNLF8jJ.mp4"></video> ## How to use this dataset This dataset is a multi-turn preference dataset to improve chat capabilities of open-source LLMs. Chosen and rejected pairs are formatted following OpenAI's conversation format with potentially several turns between a user and an assistant. To use this dataset for DPO use only the last assistant message as `chosen`/`rejected` and the rest as `prompt`. Let's see an example, step by step. First let's keep only highly-scored chosen responses (scale is 1-5) and let's filter out very long conversations: ```python capy = load_dataset("argilla/distilabel-capybara-dpo-7k-binarized", split="train") capy = capy.filter( lambda r: r["rating_chosen"]>=4 ) capy = capy.map(lambda r: {"messages": len(r["chosen"])}).filter(lambda r: r["messages"]<18) ``` Then let's prepare this in the chatml prompt and `trl` format: ```python def chatml_format(example): # get everything except the last message as input prompt = tokenizer.apply_chat_template(example["chosen"][:-1], tokenize=False, add_generation_prompt=True) # get the last assistant responses chosen = example["chosen"][-1]["content"] + "</s>" rejected = example["rejected"][-1]["content"] + "</s>" return { "prompt": system + prompt, "chosen": chosen, "rejected": rejected, } # Save columns original_columns = capy.column_names # Format dataset capy = capy.map( chatml_format, remove_columns=original_columns ) ``` The dataset is now ready to be used for DPO fine-tuning! In our benchmarks with 7B models, we've seen this is a challenging dataset to learn from, the best results can be achieved by mixing it with other datasets like this [dpo mix 7k](https://huggingface.co/datasets/argilla/dpo-mix-7k). We'd love to hear from the community how this works with larger models and other hyperparams. ## How we've built this dataset ### Generate responses from 3 different OSS models In the spirit of UltraFeedback, in this step we generate three responses to the last user message using OSS 7B models and distilabel's `LLMPool` and the vLLM engine. We use Notus7B, NeuralBeagle and OpenHermes-2.5. Additionally, the original capybara dataset already has a generated assistant response (the last assistant response) we keep it for the next step. ```python from distilabel.llm import LLM, LLMPool, ProcessLLM from distilabel.tasks import TextGenerationTask, Task from distilabel.tasks.prompt import Prompt from distilabel.dataset import DatasetCheckpoint from distilabel.pipeline import Pipeline from datasets import load_dataset from dataclasses import dataclass from pathlib import Path dataset = load_dataset("LDJnr/Capybara", split="train") here = Path(__file__).parent.resolve() def extract_conversation(r): all_but_last = r["conversation"][:-1] all_but_last.append({"input": r["conversation"][-1]["input"]}) last = r["conversation"][-1]["output"] return {"input": all_but_last, "original_response": last} dataset = dataset.map(extract_conversation) @dataclass class NotusChatTextGeneration(TextGenerationTask): # custom class to generate prompts in the chatml format # skipped for brevity @dataclass class ChatMLTextGeneration(TextGenerationTask): # custom class to generate prompts in the chatml format # skipped for brevity save_frequency = len(dataset) // 1000 checkpointing = DatasetCheckpoint(path=here / "checkpoint_generation", save_frequency=save_frequency) def load_notus(task: Task) -> LLM: import os from distilabel.llm import vLLM from vllm import LLM os.environ["CUDA_VISIBLE_DEVICES"] = "0" return vLLM( vllm=LLM( model="argilla/notus-7b-v1", trust_remote_code=True ), task=task, max_new_tokens=1024, temperature=1, ) def load_beagle(task: Task) -> LLM: import os from distilabel.llm import vLLM from vllm import LLM os.environ["CUDA_VISIBLE_DEVICES"] = "1" return vLLM( vllm=LLM( model="mlabonne/NeuralBeagle14-7B", trust_remote_code=True ), task=task, max_new_tokens=1024, temperature=1, ) def load_hermes(task: Task) -> LLM: import os from distilabel.llm import vLLM from vllm import LLM os.environ["CUDA_VISIBLE_DEVICES"] = "2" return vLLM( vllm=LLM( model="teknium/OpenHermes-2.5-Mistral-7B", trust_remote_code=True ), task=task, max_new_tokens=1024, temperature=1, ) llm_pool = LLMPool( [ ProcessLLM(task=NotusChatTextGeneration(), load_llm_fn=load_notus), ProcessLLM(task=ChatMLTextGeneration(), load_llm_fn=load_beagle), ProcessLLM(task=ChatMLTextGeneration(), load_llm_fn=load_hermes), ] ) pipe_generation_pool = Pipeline(generator=llm_pool) dataset = pipe_generation_pool.generate( dataset=dataset, num_generations=len(llm_pool.llms), batch_size=32, display_progress_bar=True, checkpoint_strategy=checkpointing, ) ``` ### Generate a preference dataset from 4 responses At this point, we have 4 responses to each multi-turn dialogue. We will now use distilabel's `UltraFeedback.for_overall_quality()` preference model to judge the quality of responses. We use gpt-4-turbo but could have use other models. ```python from distilabel.tasks import UltraFeedbackTask from distilabel.llm import OpenAILLM from distilabel.pipeline import Pipeline from datasets import load_dataset def format_conversation(r): mapping_role = {"input": "<|user|>\n", "output":"<|assistant|>\n"} all_but_last = r["conversation"][:-1] all_but_last.append({"input": r["conversation"][-1]["input"]}) input = "" for e in all_but_last: for k,v in e.items(): input += f"{mapping_role[k]}{v}</s>\n" return {"input": input} # this formats the conversation input # one could choose other format prepared_dataset = dataset.map(format_conversation) # the LLM Judge will evaluate each response to the # last user message taking into account the conversation history labeler = OpenAILLM( task=UltraFeedbackTask.for_overall_quality(), model="gpt-4-1106-preview", num_threads=8, max_new_tokens=512, ) distilabeler = Pipeline( labeller=labeler ) # this computes ratings and natural language critiques for each pair distiset = distilabeler.generate(dataset=prepared_dataset, num_generations=4, display_progress_bar=True) ``` This preference step is also useful to evaluate the performance of the four models (3+ the original response in Capybara): ![image/png](https://cdn-uploads.huggingface.co/production/uploads/60420dccc15e823a685f2b03/FShIr2Hsu-dk9IpAihV1A.png) ## Benchmark results We've tested this new dataset by preference tuning [OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B). The resulting model is [CapybaraHermes](https://huggingface.co/argilla/CapybaraHermes-2.5-Mistral-7B). CapybaraHermes has been preference tuned with LoRA and TRL for 3 epochs using argilla's [dpo mix 7k](https://huggingface.co/datasets/argilla/dpo-mix-7k). To test the impact on multi-turn performance we have used MTBench. We also include the Nous Benchmark results and Mistral-7B-Instruct-v0.2 for reference as it's a strong 7B model on MTBench: | Model | AGIEval | GPT4All | TruthfulQA | Bigbench | MTBench First Turn | MTBench Second Turn | Nous avg. | MTBench avg. | |-----------------------------------|---------|---------|------------|----------|------------|-------------|-----------|--------------| | CapybaraHermes-2.5-Mistral-7B | **43.8** | **73.35** | 57.07 | **42.44** | 8.24375 | **7.5625** | 54.16 | **7.903125** | | teknium/OpenHermes-2.5-Mistral-7B | 42.75 | 72.99 | 52.99 | 40.94 | **8.25** | 7.2875 | 52.42 | 7.76875 | | Mistral-7B-Instruct-v0.2 | 38.5 | 71.64 | **66.82** | 42.29 | 7.8375 | 7.1 | **54.81** | 7.46875 | The most interesting aspect in the context of the capybara-dpo dataset is the increased performance in MTBench Second Turn scores. For the merge lovers, we also preference tuned Beagle14-7B with a mix of capybara-dpo and distilabel orca pairs using the same recipe as NeuralBeagle (see [ YALL - Yet Another LLM Leaderboard](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard) for reference): | Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average| |------------------------------------------------------------------------------------------------------------------------------------|------:|------:|---------:|-------:|------:| |[DistilabelBeagle14-7B](https://huggingface.co/dvilasuero/DistilabelBeagle14-7B)| 45.29| 76.92| 71.66| 48.78| 60.66|
stanfordnlp/sst2
--- annotations_creators: - crowdsourced language_creators: - found language: - en license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification paperswithcode_id: sst pretty_name: Stanford Sentiment Treebank v2 dataset_info: features: - name: idx dtype: int32 - name: sentence dtype: string - name: label dtype: class_label: names: '0': negative '1': positive splits: - name: train num_bytes: 4681603 num_examples: 67349 - name: validation num_bytes: 106252 num_examples: 872 - name: test num_bytes: 216640 num_examples: 1821 download_size: 3331058 dataset_size: 5004495 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* --- # Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://nlp.stanford.edu/sentiment/ - **Repository:** - **Paper:** [Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank](https://www.aclweb.org/anthology/D13-1170/) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The Stanford Sentiment Treebank is a corpus with fully labeled parse trees that allows for a complete analysis of the compositional effects of sentiment in language. The corpus is based on the dataset introduced by Pang and Lee (2005) and consists of 11,855 single sentences extracted from movie reviews. It was parsed with the Stanford parser and includes a total of 215,154 unique phrases from those parse trees, each annotated by 3 human judges. Binary classification experiments on full sentences (negative or somewhat negative vs somewhat positive or positive with neutral sentences discarded) refer to the dataset as SST-2 or SST binary. ### Supported Tasks and Leaderboards - `sentiment-classification` ### Languages The text in the dataset is in English (`en`). ## Dataset Structure ### Data Instances ``` {'idx': 0, 'sentence': 'hide new secretions from the parental units ', 'label': 0} ``` ### Data Fields - `idx`: Monotonically increasing index ID. - `sentence`: Complete sentence expressing an opinion about a film. - `label`: Sentiment of the opinion, either "negative" (0) or positive (1). The test set labels are hidden (-1). ### Data Splits | | train | validation | test | |--------------------|---------:|-----------:|-----:| | Number of examples | 67349 | 872 | 1821 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? Rotten Tomatoes reviewers. ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Unknown. ### Citation Information ```bibtex @inproceedings{socher-etal-2013-recursive, title = "Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank", author = "Socher, Richard and Perelygin, Alex and Wu, Jean and Chuang, Jason and Manning, Christopher D. and Ng, Andrew and Potts, Christopher", booktitle = "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", month = oct, year = "2013", address = "Seattle, Washington, USA", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/D13-1170", pages = "1631--1642", } ``` ### Contributions Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
MBZUAI/Bactrian-X
--- license: cc-by-nc-4.0 task_categories: - text-generation language: - af - ar - az - bn - cs - de - en - es - et - fi - fr - gl - gu - he - hi - hr - id - it - ja - ka - kk - km - ko - lt - lv - mk - ml - mn - mr - my - ne - nl - pl - ps - pt - ro - ru - si - sl - sv - sw - ta - te - th - tl - tr - uk - ur - vi - xh - zh tags: - instruction-finetuning - multilingual pretty_name: Bactrian-X --- # Dataset Card for "Bactrian-X" ## Table of Contents - [Dataset Description](#a-dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#b-dataset-structure) - [Data Fields](#data-fields) - [Data Instances](#data-instances) - [Data in 52 Languages](#data-in-52-languages) - [Dataset Creation](#c-dataset-creation) - [Considerations for Using the Data](#d-considerations-for-using-the-data) - [Additional Information](#e-additional-information) ## A. Dataset Description - **Homepage:** https://github.com/mbzuai-nlp/Bactrian-X - **Repository:** https://huggingface.co/datasets/MBZUAI/Bactrian-X - **Paper:** to-be-soon released ### Dataset Summary <h3 align="center"> <img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center"> </h3> The Bactrain-X dataset is a collection of 3.4M instruction-response pairs in 52 languages, that are obtained by translating 67K English instructions ([alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) + [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data)) into 51 languages using Google Translate API. The translated instructions are then fed to ChatGPT (`gpt-3.5-turbo`) to obtain its natural responses, resulting in 3.4M instruction-response pairs in 52 languages (52 languages x 67k instances = 3.4M instances). ### Languages We follow 52 languages of [mBART-50](https://arxiv.org/abs/2008.00401), with details can be found [below](#data-in-52-languages). ## B. Dataset Structure ### Data Fields The data fields are as follows: * `instruction`: describes the task the model should perform. Each of the 67K instructions is unique. * `input`: optional context or input for the task. For example, when the instruction is "Summarize the following article", the input is the article. Around 40% of the examples have an input. * `output`: the answer to the instruction as generated by `gpt-3.5-turbo`. * `id`: the unique index for each data piece in a specific language. The same `id` across splits corresponds to the same original English data. ### Data Instances An example in `en` looks as follows. ```json { "instruction": "What is the main idea of this text?", "input": "The robotics industry has become one of the fastest growing sectors in today’s economy. Robotics technology is being applied in various industries and applications, such as manufacturing, healthcare, and defense.", "output": "The main idea of the text is that the robotics industry is rapidly growing and its technology is being used in many different industries and applications.", "id": "alpaca-107" } ``` An example in `ar` looks as follows. ```json { "instruction": "قم بالوصول إلى إعدادات المتصفح الافتراضية على جهاز الكمبيوتر.", "input": "", "output": "لا يمكن لي كمساعد افتراضي القيام بهذه المهمة كونها تحتاج إلى الوصول إلى الجهاز المادي. يرجى الرجوع إلى دليل المستخدم للكمبيوتر الخاص بك أو البحث على الإنترنت لمعرفة كيفية الوصول إلى إعدادات المتصفح الافتراضية على جهاز الكمبيوتر الخاص بك.", "id": "alpaca-41997" } ``` An example in `id` looks as follows. ```json { "instruction": "Berapa banyak angka penting yang harus digunakan untuk nilai 0,000187?", "input": "", "output": "Ada tiga angka penting dalam nilai 0,000187.", "id": "alpaca-40382" } ``` An example in `de` looks as follows. ```json { "instruction": "Komponieren Sie ein Musikstück mit einem Titel, der Trauer bezeichnet.", "input": "", "output": ""Schmerzendes Herz"\n\nDas Stück beginnt mit einer langsamen, traurigen Melodie, gespielt von einem Solo-Violoncello. Die Harmonien, die von einem Klavier begleitet werden, sind durcheinander und dissonant, um ein Gefühl von Schmerz und Trauer zu vermitteln.\n\nIm Laufe des Stücks treten andere Instrumente ein, wie Streicher, Flöte und Oboe, um die Melodie zu unterstützen und zu intensivieren. Der Rhythmus bleibt langsam und schwerfällig, um die Schwere des Schmerzes darzustellen.\n\nIn der Mitte des Stücks gibt es einen Höhepunkt, wo die Melodie und Harmonien sich verdichten und ein Gefühl von Überwältigung erzeugen. Der Solo-Violoncello spielt eine traurige und schnelle Phrase, die die Verzweiflung und Isolation des Protagonisten ausdrückt.\n\nDas Stück endet mit einer Wiederholung der ersten Melodie, gespielt von einem Solo-Klavier, während die anderen Instrumente auslaufen. Die Harmonien werden jedoch weicher und sanfter, um einen Hauch von Hoffnung zu vermitteln, dass der Schmerz langsam vergeht.", "id": "alpaca-33619" } ``` An example in `fr` looks as follows. ```json { "instruction": "Les Ceanothus ont-ils des racines peu profondes ?", "input": "", "output": "En tant qu'IA, je ne peux pas répondre avec une certitude absolue, mais voici des informations générales sur les racines de Ceanothus : \n\nLes Ceanothus ont tendance à avoir un système racinaire étendu et peu profond. Les racines peuvent s'étaler jusqu'à 4 à 6 fois la taille de la plante et s'étendre à environ 60 à 90 cm de profondeur. Cependant, cela peut varier en fonction du type de sol et de la variété de Ceanothus. Certaines variétés de Ceanothus pourraient avoir des racines plus profondes si les conditions s'y prêtent.", "id": "dolly-10940" } ``` ### Data in 52 Languages | No | Languages | Code | Train | | ---|---------------- | ----- | ----- | | 1 | [Afrikaans](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/af) | af_ZA | 67017 | | 2 | [Arabic](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/ar) | ar_AR | 67017 | | 3 | [Azerbaijani](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/az) | az_AZ | 67017 | | 4 | [Bengali](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/bn) | bn_IN | 67017 | | 5 | [Czech](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/cs) | cs_CZ | 67017 | | 6 | [German](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/de) | de_DE | 67017 | | 7 | [English](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/en) | en_XX | 67017 | | 8 | [Spanish](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/es) | es_XX | 67017 | | 9 | [Estonian](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/et) | et_EE | 67017 | | 10 | [Persian](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/fa) | fa_IR | 67017 | | 11 | [Finnish](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/fi) | fi_FI | 67017 | | 12 | [French](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/fr) | fr_XX | 67017 | | 13 | [Galician](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/gl) | gl_ES | 67017 | | 14 | [Gujarati](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/gu) | gu_IN | 67017 | | 15 | [Hebrew](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/he) | he_IL | 67017 | | 16 | [Hindi](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/hi) | hi_IN | 67017 | | 17 | [Croatian](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/hr) | hr_HR | 67017 | | 18 | [Indonesian](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/id) | id_ID | 67017 | | 19 | [Italian](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/it) | it_IT | 67017 | | 20 | [Japanese](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/ja) | ja_XX | 67017 | | 21 | [Georgian](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/ka) | ka_GE | 67017 | | 22 | [Kazakh](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/kk) | kk_KZ | 67017 | | 23 | [Khmer](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/km) | km_KH | 67017 | | 24 | [Korean](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/ko) | ko_KR | 67017 | | 25 | [Lithuanian](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/lt) | lt_LT | 67017 | | 26 | [Latvian](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/lv) | lv_LV | 67017 | | 27 | [Macedonian](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/mk) | mk_MK | 67017 | | 28 | [Malayalam](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/ml) | ml_IN | 67017 | | 29 | [Mongolian](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/mn) | mn_MN | 67017 | | 30 | [Marathi](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/mr) | mr_IN | 67017 | | 31 | [Burmese](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/my) | my_MM | 67017 | | 32 | [Nepali](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/ne) | ne_NP | 67017 | | 33 | [Dutch](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/nl) | nl_XX | 67017 | | 34 | [Polish](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/pl) | pl_PL | 67017 | | 35 | [Pashto](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/ps) | ps_AF | 67017 | | 36 | [Portuguese](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/pt) | pt_XX | 67017 | | 37 | [Romanian](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/ro) | ro_RO | 67017 | | 38 | [Russian](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/ru) | ru_RU | 67017 | | 39 | [Sinhala](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/si) | si_LK | 67017 | | 40 | [Slovene](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/sl) | sl_SI | 67017 | | 41 | [Swedish](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/sv) | sv_SE | 67017 | | 42 | [Swahili](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/sw) | sw_KE | 67017 | | 43 | [Tamil](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/ta) | ta_IN | 67017 | | 44 | [Telugu](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/te) | te_IN | 67017 | | 45 | [Thai](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/th) | th_TH | 67017 | | 46 | [Tagalog](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/tl) | tl_XX | 67017 | | 47 | [Turkish](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/tr) | tr_TR | 67017 | | 48 | [Ukrainian](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/uk) | uk_UA | 67017 | | 49 | [Urdu](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/ur) | ur_PK | 67017 | | 50 | [Vietnamese](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/vi) | vi_VN | 67017 | | 51 | [Xhosa](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/xh) | xh_ZA | 67017 | | 52 | [Chinese](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/zh) | zh_CN | 67017 | ## C. Dataset Creation 1. English Instructions: The English instuctions are obtained from [alpaca-53k](https://github.com/tatsu-lab/stanford_alpaca), and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data). 2. Instruction Translation: The instructions (and inputs) are translated into 51 languages using Google Translation API (conducted on April 2023). 3. Output Generation: We generate output from `gpt-3.5-turbo` for each language (conducted on April 2023). ## D. Considerations for Using the Data ### Social Impact of Dataset NLP for everyone: this dataset helps to democratize the cutting-edge instruction-following models in 52 languages. This dataset also allows the first experiment on the multilingual LoRA-based LLaMA model. ### Discussion of Biases (1) Translation bias; (2) Potential English-culture bias in the translated dataset. ### Other Known Limitations The `Bactrian-X` data is generated by a language model (`gpt-3.5-turbo`) and inevitably contains some errors or biases. We encourage users to use this data with caution and propose new methods to filter or improve the imperfections. ## E. Additional Information ### Dataset Curators [Haonan Li](https://haonan-li.github.io/) and [Fajri Koto](http://www.fajrikoto.com) ### Licensing Information The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode). ### Citation Information ``` @misc{li2023bactrianx, title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation}, author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin}, year={2023}, eprint={2305.15011}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@haonan-li](https://github.com/haonan-li), [@fajri91](https://github.com/fajri91) for adding this dataset.
lama
--- pretty_name: 'LAMA: LAnguage Model Analysis' annotations_creators: - crowdsourced - expert-generated - machine-generated language_creators: - crowdsourced - expert-generated - machine-generated language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K - 1K<n<10K - 1M<n<10M - n<1K source_datasets: - extended|conceptnet5 - extended|squad task_categories: - text-retrieval - text-classification task_ids: - fact-checking-retrieval - text-scoring paperswithcode_id: lama tags: - probing dataset_info: - config_name: trex features: - name: uuid dtype: string - name: obj_uri dtype: string - name: obj_label dtype: string - name: sub_uri dtype: string - name: sub_label dtype: string - name: predicate_id dtype: string - name: sub_surface dtype: string - name: obj_surface dtype: string - name: masked_sentence dtype: string - name: template dtype: string - name: template_negated dtype: string - name: label dtype: string - name: description dtype: string - name: type dtype: string splits: - name: train num_bytes: 656913189 num_examples: 1304391 download_size: 74652201 dataset_size: 656913189 - config_name: squad features: - name: id dtype: string - name: sub_label dtype: string - name: obj_label dtype: string - name: negated dtype: string - name: masked_sentence dtype: string splits: - name: train num_bytes: 57188 num_examples: 305 download_size: 74639115 dataset_size: 57188 - config_name: google_re features: - name: pred dtype: string - name: sub dtype: string - name: obj dtype: string - name: evidences dtype: string - name: judgments dtype: string - name: sub_w dtype: string - name: sub_label dtype: string - name: sub_aliases dtype: string - name: obj_w dtype: string - name: obj_label dtype: string - name: obj_aliases dtype: string - name: uuid dtype: string - name: masked_sentence dtype: string - name: template dtype: string - name: template_negated dtype: string splits: - name: train num_bytes: 7638657 num_examples: 6106 download_size: 74639115 dataset_size: 7638657 - config_name: conceptnet features: - name: uuid dtype: string - name: sub dtype: string - name: obj dtype: string - name: pred dtype: string - name: obj_label dtype: string - name: masked_sentence dtype: string - name: negated dtype: string splits: - name: train num_bytes: 4130000 num_examples: 29774 download_size: 74639115 dataset_size: 4130000 config_names: - conceptnet - google_re - squad - trex --- # Dataset Card for LAMA: LAnguage Model Analysis - a dataset for probing and analyzing the factual and commonsense knowledge contained in pretrained language models. ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/facebookresearch/LAMA - **Repository:** https://github.com/facebookresearch/LAMA - **Paper:** @inproceedings{petroni2019language, title={Language Models as Knowledge Bases?}, author={F. Petroni, T. Rockt{\"{a}}schel, A. H. Miller, P. Lewis, A. Bakhtin, Y. Wu and S. Riedel}, booktitle={In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019}, year={2019} } @inproceedings{petroni2020how, title={How Context Affects Language Models' Factual Predictions}, author={Fabio Petroni and Patrick Lewis and Aleksandra Piktus and Tim Rockt{\"a}schel and Yuxiang Wu and Alexander H. Miller and Sebastian Riedel}, booktitle={Automated Knowledge Base Construction}, year={2020}, url={https://openreview.net/forum?id=025X0zPfn} } ### Dataset Summary This dataset provides the data for LAMA. The dataset include a subset of Google_RE (https://code.google.com/archive/p/relation-extraction-corpus/), TRex (subset of wikidata triples), Conceptnet (https://github.com/commonsense/conceptnet5/wiki) and Squad. There are configs for each of "google_re", "trex", "conceptnet" and "squad", respectively. The dataset includes some cleanup, and addition of a masked sentence and associated answers for the [MASK] token. The accuracy in predicting the [MASK] token shows how well the language model knows facts and common sense information. The [MASK] tokens are only for the "object" slots. This version of the dataset includes "negated" sentences as well as the masked sentence. Also, certain of the config includes "template" and "template_negated" fields of the form "[X] some text [Y]", where [X] and [Y] are the subject and object slots respectively of certain relations. See the paper for more details. For more information, also see: https://github.com/facebookresearch/LAMA ### Languages en ## Dataset Structure ### Data Instances The trex config has the following fields: `` {'description': 'the item (an institution, law, public office ...) or statement belongs to or has power over or applies to the value (a territorial jurisdiction: a country, state, municipality, ...)', 'label': 'applies to jurisdiction', 'masked_sentence': 'It is known as a principality as it is a monarchy headed by two Co-Princes – the Spanish/Roman Catholic Bishop of Urgell and the President of [MASK].', 'obj_label': 'France', 'obj_surface': 'France', 'obj_uri': 'Q142', 'predicate_id': 'P1001', 'sub_label': 'president of the French Republic', 'sub_surface': 'President', 'sub_uri': 'Q191954', 'template': '[X] is a legal term in [Y] .', 'template_negated': '[X] is not a legal term in [Y] .', 'type': 'N-M', 'uuid': '3fe3d4da-9df9-45ba-8109-784ce5fba38a'} `` The conceptnet config has the following fields: `` {'masked_sentence': 'One of the things you do when you are alive is [MASK].', 'negated': '', 'obj': 'think', 'obj_label': 'think', 'pred': 'HasSubevent', 'sub': 'alive', 'uuid': 'd4f11631dde8a43beda613ec845ff7d1'} `` The squad config has the following fields: `` {'id': '56be4db0acb8001400a502f0_0', 'masked_sentence': 'To emphasize the 50th anniversary of the Super Bowl the [MASK] color was used.', 'negated': "['To emphasize the 50th anniversary of the Super Bowl the [MASK] color was not used.']", 'obj_label': 'gold', 'sub_label': 'Squad'} `` The google_re config has the following fields: `` {'evidences': '[{\'url\': \'http://en.wikipedia.org/wiki/Peter_F._Martin\', \'snippet\': "Peter F. Martin (born 1941) is an American politician who is a Democratic member of the Rhode Island House of Representatives. He has represented the 75th District Newport since 6 January 2009. He is currently serves on the House Committees on Judiciary, Municipal Government, and Veteran\'s Affairs. During his first term of office he served on the House Committees on Small Business and Separation of Powers & Government Oversight. In August 2010, Representative Martin was appointed as a Commissioner on the Atlantic States Marine Fisheries Commission", \'considered_sentences\': [\'Peter F Martin (born 1941) is an American politician who is a Democratic member of the Rhode Island House of Representatives .\']}]', 'judgments': "[{'rater': '18349444711114572460', 'judgment': 'yes'}, {'rater': '17595829233063766365', 'judgment': 'yes'}, {'rater': '4593294093459651288', 'judgment': 'yes'}, {'rater': '7387074196865291426', 'judgment': 'yes'}, {'rater': '17154471385681223613', 'judgment': 'yes'}]", 'masked_sentence': 'Peter F Martin (born [MASK]) is an American politician who is a Democratic member of the Rhode Island House of Representatives .', 'obj': '1941', 'obj_aliases': '[]', 'obj_label': '1941', 'obj_w': 'None', 'pred': '/people/person/date_of_birth', 'sub': '/m/09gb0bw', 'sub_aliases': '[]', 'sub_label': 'Peter F. Martin', 'sub_w': 'None', 'template': '[X] (born [Y]).', 'template_negated': '[X] (not born [Y]).', 'uuid': '18af2dac-21d3-4c42-aff5-c247f245e203'} `` ### Data Fields The trex config has the following fields: * uuid: the id * obj_uri: a uri for the object slot * obj_label: a label for the object slot * sub_uri: a uri for the subject slot * sub_label: a label for the subject slot * predicate_id: the predicate/relationship * sub_surface: the surface text for the subject * obj_surface: The surface text for the object. This is the word that should be predicted by the [MASK] token. * masked_sentence: The masked sentence used to probe, with the object word replaced with [MASK] * template: A pattern of text for extracting the relationship, object and subject of the form "[X] some text [Y]", where [X] and [Y] are the subject and object slots respectively. template may be missing and replaced with an empty string. * template_negated: Same as above, except the [Y] is not the object. template_negated may be missing and replaced with empty strings. * label: the label for the relationship/predicate. label may be missing and replaced with an empty string. * description': a description of the relationship/predicate. description may be missing and replaced with an empty string. * type: a type id for the relationship/predicate. type may be missing and replaced with an empty string. The conceptnet config has the following fields: * uuid: the id * sub: the subject. subj may be missing and replaced with an empty string. * obj: the object to be predicted. obj may be missing and replaced with an empty string. * pred: the predicate/relationship * obj_label: the object label * masked_sentence: The masked sentence used to probe, with the object word replaced with [MASK] * negated: same as above, except [MASK] is replaced by something that is not the object word. negated may be missing and replaced with empty strings. The squad config has the following fields: * id: the id * sub_label: the subject label * obj_label: the object label that is being predicted * masked_sentence: The masked sentence used to probe, with the object word replaced with [MASK] * negated: same as above, except [MASK] is replaced by something that is not the object word. negated may be missing and replaced with empty strings. The google_re config has the following fields: * uuid: the id * pred: the predicate * sub: the subject. subj may be missing and replaced with an empty string. * obj: the object. obj may be missing and replaced with an empty string. * evidences: flattened json string that provides evidence for predicate. parse this json string to get more 'snippet' information. * judgments: data about judgments * sub_q: unknown * sub_label: label for the subject * sub_aliases: unknown * obj_w: unknown * obj_label: label for the object * obj_aliases: unknown * masked_sentence: The masked sentence used to probe, with the object word replaced with [MASK] * template: A pattern of text for extracting the relationship, object and subject of the form "[X] some text [Y]", where [X] and [Y] are the subject and object slots respectively. * template_negated: Same as above, except the [Y] is not the object. ### Data Splits There are no data splits. ## Dataset Creation ### Curation Rationale This dataset was gathered and created to probe what language models understand. ### Source Data #### Initial Data Collection and Normalization See the reaserch paper and website for more detail. The dataset was created gathered from various other datasets with cleanups for probing. #### Who are the source language producers? The LAMA authors and the original authors of the various configs. ### Annotations #### Annotation process Human annotations under the original datasets (conceptnet), and various machine annotations. #### Who are the annotators? Human annotations and machine annotations. ### Personal and Sensitive Information Unkown, but likely names of famous people. ## Considerations for Using the Data ### Social Impact of Dataset The goal for the work is to probe the understanding of language models. ### Discussion of Biases Since the data is from human annotators, there is likely to be baises. [More Information Needed] ### Other Known Limitations The original documentation for the datafields are limited. ## Additional Information ### Dataset Curators The authors of LAMA at Facebook and the authors of the original datasets. ### Licensing Information The Creative Commons Attribution-Noncommercial 4.0 International License. see https://github.com/facebookresearch/LAMA/blob/master/LICENSE ### Citation Information @inproceedings{petroni2019language, title={Language Models as Knowledge Bases?}, author={F. Petroni, T. Rockt{\"{a}}schel, A. H. Miller, P. Lewis, A. Bakhtin, Y. Wu and S. Riedel}, booktitle={In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019}, year={2019} } @inproceedings{petroni2020how, title={How Context Affects Language Models' Factual Predictions}, author={Fabio Petroni and Patrick Lewis and Aleksandra Piktus and Tim Rockt{\"a}schel and Yuxiang Wu and Alexander H. Miller and Sebastian Riedel}, booktitle={Automated Knowledge Base Construction}, year={2020}, url={https://openreview.net/forum?id=025X0zPfn} } ### Contributions Thanks to [@ontocord](https://github.com/ontocord) for adding this dataset.
superb
--- annotations_creators: - other language_creators: - other language: - en license: - unknown multilinguality: - monolingual size_categories: - unknown source_datasets: - original - extended|librispeech_asr - extended|other-librimix - extended|other-speech_commands task_categories: - automatic-speech-recognition - audio-classification task_ids: - keyword-spotting - speaker-identification - audio-intent-classification - audio-emotion-recognition pretty_name: SUPERB tags: - query-by-example-spoken-term-detection - audio-slot-filling - speaker-diarization - automatic-speaker-verification dataset_info: - config_name: asr features: - name: file dtype: string - name: audio dtype: audio: sampling_rate: 16000 - name: text dtype: string - name: speaker_id dtype: int64 - name: chapter_id dtype: int64 - name: id dtype: string splits: - name: train num_bytes: 11852430 num_examples: 28539 - name: validation num_bytes: 897213 num_examples: 2703 - name: test num_bytes: 871234 num_examples: 2620 download_size: 7071899769 dataset_size: 13620877 - config_name: sd features: - name: record_id dtype: string - name: file dtype: string - name: start dtype: int64 - name: end dtype: int64 - name: speakers list: - name: speaker_id dtype: string - name: start dtype: int64 - name: end dtype: int64 splits: - name: train num_bytes: 4622013 num_examples: 13901 - name: dev num_bytes: 860472 num_examples: 3014 - name: test num_bytes: 847803 num_examples: 3002 download_size: 7190370211 dataset_size: 6330288 - config_name: ks features: - name: file dtype: string - name: label dtype: class_label: names: '0': 'yes' '1': 'no' '2': up '3': down '4': left '5': right '6': 'on' '7': 'off' '8': stop '9': go '10': _silence_ '11': _unknown_ splits: - name: train num_bytes: 8467781 num_examples: 51094 - name: validation num_bytes: 1126476 num_examples: 6798 - name: test num_bytes: 510619 num_examples: 3081 download_size: 1560367713 dataset_size: 10104876 - config_name: ic features: - name: file dtype: string - name: speaker_id dtype: string - name: text dtype: string - name: action dtype: class_label: names: '0': activate '1': bring '2': change language '3': deactivate '4': decrease '5': increase - name: object dtype: class_label: names: '0': Chinese '1': English '2': German '3': Korean '4': heat '5': juice '6': lamp '7': lights '8': music '9': newspaper '10': none '11': shoes '12': socks '13': volume - name: location dtype: class_label: names: '0': bedroom '1': kitchen '2': none '3': washroom splits: - name: train num_bytes: 7071466 num_examples: 23132 - name: validation num_bytes: 953622 num_examples: 3118 - name: test num_bytes: 1158347 num_examples: 3793 download_size: 1544093324 dataset_size: 9183435 - config_name: si features: - name: file dtype: string - name: label dtype: class_label: names: '0': id10001 '1': id10002 '2': id10003 '3': id10004 '4': id10005 '5': id10006 '6': id10007 '7': id10008 '8': id10009 '9': id10010 '10': id10011 '11': id10012 '12': id10013 '13': id10014 '14': id10015 '15': id10016 '16': id10017 '17': id10018 '18': id10019 '19': id10020 '20': id10021 '21': id10022 '22': id10023 '23': id10024 '24': id10025 '25': id10026 '26': id10027 '27': id10028 '28': id10029 '29': id10030 '30': id10031 '31': id10032 '32': id10033 '33': id10034 '34': id10035 '35': id10036 '36': id10037 '37': id10038 '38': id10039 '39': id10040 '40': id10041 '41': id10042 '42': id10043 '43': id10044 '44': id10045 '45': id10046 '46': id10047 '47': id10048 '48': id10049 '49': id10050 '50': id10051 '51': id10052 '52': id10053 '53': id10054 '54': id10055 '55': id10056 '56': id10057 '57': id10058 '58': id10059 '59': id10060 '60': id10061 '61': id10062 '62': id10063 '63': id10064 '64': id10065 '65': id10066 '66': id10067 '67': id10068 '68': id10069 '69': id10070 '70': id10071 '71': id10072 '72': id10073 '73': id10074 '74': id10075 '75': id10076 '76': id10077 '77': id10078 '78': id10079 '79': id10080 '80': id10081 '81': id10082 '82': id10083 '83': id10084 '84': id10085 '85': id10086 '86': id10087 '87': id10088 '88': id10089 '89': id10090 '90': id10091 '91': id10092 '92': id10093 '93': id10094 '94': id10095 '95': id10096 '96': id10097 '97': id10098 '98': id10099 '99': id10100 '100': id10101 '101': id10102 '102': id10103 '103': id10104 '104': id10105 '105': id10106 '106': id10107 '107': id10108 '108': id10109 '109': id10110 '110': id10111 '111': id10112 '112': id10113 '113': id10114 '114': id10115 '115': id10116 '116': id10117 '117': id10118 '118': id10119 '119': id10120 '120': id10121 '121': id10122 '122': id10123 '123': id10124 '124': id10125 '125': id10126 '126': id10127 '127': id10128 '128': id10129 '129': id10130 '130': id10131 '131': id10132 '132': id10133 '133': id10134 '134': id10135 '135': id10136 '136': id10137 '137': id10138 '138': id10139 '139': id10140 '140': id10141 '141': id10142 '142': id10143 '143': id10144 '144': id10145 '145': id10146 '146': id10147 '147': id10148 '148': id10149 '149': id10150 '150': id10151 '151': id10152 '152': id10153 '153': id10154 '154': id10155 '155': id10156 '156': id10157 '157': id10158 '158': id10159 '159': id10160 '160': id10161 '161': id10162 '162': id10163 '163': id10164 '164': id10165 '165': id10166 '166': id10167 '167': id10168 '168': id10169 '169': id10170 '170': id10171 '171': id10172 '172': id10173 '173': id10174 '174': id10175 '175': id10176 '176': id10177 '177': id10178 '178': id10179 '179': id10180 '180': id10181 '181': id10182 '182': id10183 '183': id10184 '184': id10185 '185': id10186 '186': id10187 '187': id10188 '188': id10189 '189': id10190 '190': id10191 '191': id10192 '192': id10193 '193': id10194 '194': id10195 '195': id10196 '196': id10197 '197': id10198 '198': id10199 '199': id10200 '200': id10201 '201': id10202 '202': id10203 '203': id10204 '204': id10205 '205': id10206 '206': id10207 '207': id10208 '208': id10209 '209': id10210 '210': id10211 '211': id10212 '212': id10213 '213': id10214 '214': id10215 '215': id10216 '216': id10217 '217': id10218 '218': id10219 '219': id10220 '220': id10221 '221': id10222 '222': id10223 '223': id10224 '224': id10225 '225': id10226 '226': id10227 '227': id10228 '228': id10229 '229': id10230 '230': id10231 '231': id10232 '232': id10233 '233': id10234 '234': id10235 '235': id10236 '236': id10237 '237': id10238 '238': id10239 '239': id10240 '240': id10241 '241': id10242 '242': id10243 '243': id10244 '244': id10245 '245': id10246 '246': id10247 '247': id10248 '248': id10249 '249': id10250 '250': id10251 '251': id10252 '252': id10253 '253': id10254 '254': id10255 '255': id10256 '256': id10257 '257': id10258 '258': id10259 '259': id10260 '260': id10261 '261': id10262 '262': id10263 '263': id10264 '264': id10265 '265': id10266 '266': id10267 '267': id10268 '268': id10269 '269': id10270 '270': id10271 '271': id10272 '272': id10273 '273': id10274 '274': id10275 '275': id10276 '276': id10277 '277': id10278 '278': id10279 '279': id10280 '280': id10281 '281': id10282 '282': id10283 '283': id10284 '284': id10285 '285': id10286 '286': id10287 '287': id10288 '288': id10289 '289': id10290 '290': id10291 '291': id10292 '292': id10293 '293': id10294 '294': id10295 '295': id10296 '296': id10297 '297': id10298 '298': id10299 '299': id10300 '300': id10301 '301': id10302 '302': id10303 '303': id10304 '304': id10305 '305': id10306 '306': id10307 '307': id10308 '308': id10309 '309': id10310 '310': id10311 '311': id10312 '312': id10313 '313': id10314 '314': id10315 '315': id10316 '316': id10317 '317': id10318 '318': id10319 '319': id10320 '320': id10321 '321': id10322 '322': id10323 '323': id10324 '324': id10325 '325': id10326 '326': id10327 '327': id10328 '328': id10329 '329': id10330 '330': id10331 '331': id10332 '332': id10333 '333': id10334 '334': id10335 '335': id10336 '336': id10337 '337': id10338 '338': id10339 '339': id10340 '340': id10341 '341': id10342 '342': id10343 '343': id10344 '344': id10345 '345': id10346 '346': id10347 '347': id10348 '348': id10349 '349': id10350 '350': id10351 '351': id10352 '352': id10353 '353': id10354 '354': id10355 '355': id10356 '356': id10357 '357': id10358 '358': id10359 '359': id10360 '360': id10361 '361': id10362 '362': id10363 '363': id10364 '364': id10365 '365': id10366 '366': id10367 '367': id10368 '368': id10369 '369': id10370 '370': id10371 '371': id10372 '372': id10373 '373': id10374 '374': id10375 '375': id10376 '376': id10377 '377': id10378 '378': id10379 '379': id10380 '380': id10381 '381': id10382 '382': id10383 '383': id10384 '384': id10385 '385': id10386 '386': id10387 '387': id10388 '388': id10389 '389': id10390 '390': id10391 '391': id10392 '392': id10393 '393': id10394 '394': id10395 '395': id10396 '396': id10397 '397': id10398 '398': id10399 '399': id10400 '400': id10401 '401': id10402 '402': id10403 '403': id10404 '404': id10405 '405': id10406 '406': id10407 '407': id10408 '408': id10409 '409': id10410 '410': id10411 '411': id10412 '412': id10413 '413': id10414 '414': id10415 '415': id10416 '416': id10417 '417': id10418 '418': id10419 '419': id10420 '420': id10421 '421': id10422 '422': id10423 '423': id10424 '424': id10425 '425': id10426 '426': id10427 '427': id10428 '428': id10429 '429': id10430 '430': id10431 '431': id10432 '432': id10433 '433': id10434 '434': id10435 '435': id10436 '436': id10437 '437': id10438 '438': id10439 '439': id10440 '440': id10441 '441': id10442 '442': id10443 '443': id10444 '444': id10445 '445': id10446 '446': id10447 '447': id10448 '448': id10449 '449': id10450 '450': id10451 '451': id10452 '452': id10453 '453': id10454 '454': id10455 '455': id10456 '456': id10457 '457': id10458 '458': id10459 '459': id10460 '460': id10461 '461': id10462 '462': id10463 '463': id10464 '464': id10465 '465': id10466 '466': id10467 '467': id10468 '468': id10469 '469': id10470 '470': id10471 '471': id10472 '472': id10473 '473': id10474 '474': id10475 '475': id10476 '476': id10477 '477': id10478 '478': id10479 '479': id10480 '480': id10481 '481': id10482 '482': id10483 '483': id10484 '484': id10485 '485': id10486 '486': id10487 '487': id10488 '488': id10489 '489': id10490 '490': id10491 '491': id10492 '492': id10493 '493': id10494 '494': id10495 '495': id10496 '496': id10497 '497': id10498 '498': id10499 '499': id10500 '500': id10501 '501': id10502 '502': id10503 '503': id10504 '504': id10505 '505': id10506 '506': id10507 '507': id10508 '508': id10509 '509': id10510 '510': id10511 '511': id10512 '512': id10513 '513': id10514 '514': id10515 '515': id10516 '516': id10517 '517': id10518 '518': id10519 '519': id10520 '520': id10521 '521': id10522 '522': id10523 '523': id10524 '524': id10525 '525': id10526 '526': id10527 '527': id10528 '528': id10529 '529': id10530 '530': id10531 '531': id10532 '532': id10533 '533': id10534 '534': id10535 '535': id10536 '536': id10537 '537': id10538 '538': id10539 '539': id10540 '540': id10541 '541': id10542 '542': id10543 '543': id10544 '544': id10545 '545': id10546 '546': id10547 '547': id10548 '548': id10549 '549': id10550 '550': id10551 '551': id10552 '552': id10553 '553': id10554 '554': id10555 '555': id10556 '556': id10557 '557': id10558 '558': id10559 '559': id10560 '560': id10561 '561': id10562 '562': id10563 '563': id10564 '564': id10565 '565': id10566 '566': id10567 '567': id10568 '568': id10569 '569': id10570 '570': id10571 '571': id10572 '572': id10573 '573': id10574 '574': id10575 '575': id10576 '576': id10577 '577': id10578 '578': id10579 '579': id10580 '580': id10581 '581': id10582 '582': id10583 '583': id10584 '584': id10585 '585': id10586 '586': id10587 '587': id10588 '588': id10589 '589': id10590 '590': id10591 '591': id10592 '592': id10593 '593': id10594 '594': id10595 '595': id10596 '596': id10597 '597': id10598 '598': id10599 '599': id10600 '600': id10601 '601': id10602 '602': id10603 '603': id10604 '604': id10605 '605': id10606 '606': id10607 '607': id10608 '608': id10609 '609': id10610 '610': id10611 '611': id10612 '612': id10613 '613': id10614 '614': id10615 '615': id10616 '616': id10617 '617': id10618 '618': id10619 '619': id10620 '620': id10621 '621': id10622 '622': id10623 '623': id10624 '624': id10625 '625': id10626 '626': id10627 '627': id10628 '628': id10629 '629': id10630 '630': id10631 '631': id10632 '632': id10633 '633': id10634 '634': id10635 '635': id10636 '636': id10637 '637': id10638 '638': id10639 '639': id10640 '640': id10641 '641': id10642 '642': id10643 '643': id10644 '644': id10645 '645': id10646 '646': id10647 '647': id10648 '648': id10649 '649': id10650 '650': id10651 '651': id10652 '652': id10653 '653': id10654 '654': id10655 '655': id10656 '656': id10657 '657': id10658 '658': id10659 '659': id10660 '660': id10661 '661': id10662 '662': id10663 '663': id10664 '664': id10665 '665': id10666 '666': id10667 '667': id10668 '668': id10669 '669': id10670 '670': id10671 '671': id10672 '672': id10673 '673': id10674 '674': id10675 '675': id10676 '676': id10677 '677': id10678 '678': id10679 '679': id10680 '680': id10681 '681': id10682 '682': id10683 '683': id10684 '684': id10685 '685': id10686 '686': id10687 '687': id10688 '688': id10689 '689': id10690 '690': id10691 '691': id10692 '692': id10693 '693': id10694 '694': id10695 '695': id10696 '696': id10697 '697': id10698 '698': id10699 '699': id10700 '700': id10701 '701': id10702 '702': id10703 '703': id10704 '704': id10705 '705': id10706 '706': id10707 '707': id10708 '708': id10709 '709': id10710 '710': id10711 '711': id10712 '712': id10713 '713': id10714 '714': id10715 '715': id10716 '716': id10717 '717': id10718 '718': id10719 '719': id10720 '720': id10721 '721': id10722 '722': id10723 '723': id10724 '724': id10725 '725': id10726 '726': id10727 '727': id10728 '728': id10729 '729': id10730 '730': id10731 '731': id10732 '732': id10733 '733': id10734 '734': id10735 '735': id10736 '736': id10737 '737': id10738 '738': id10739 '739': id10740 '740': id10741 '741': id10742 '742': id10743 '743': id10744 '744': id10745 '745': id10746 '746': id10747 '747': id10748 '748': id10749 '749': id10750 '750': id10751 '751': id10752 '752': id10753 '753': id10754 '754': id10755 '755': id10756 '756': id10757 '757': id10758 '758': id10759 '759': id10760 '760': id10761 '761': id10762 '762': id10763 '763': id10764 '764': id10765 '765': id10766 '766': id10767 '767': id10768 '768': id10769 '769': id10770 '770': id10771 '771': id10772 '772': id10773 '773': id10774 '774': id10775 '775': id10776 '776': id10777 '777': id10778 '778': id10779 '779': id10780 '780': id10781 '781': id10782 '782': id10783 '783': id10784 '784': id10785 '785': id10786 '786': id10787 '787': id10788 '788': id10789 '789': id10790 '790': id10791 '791': id10792 '792': id10793 '793': id10794 '794': id10795 '795': id10796 '796': id10797 '797': id10798 '798': id10799 '799': id10800 '800': id10801 '801': id10802 '802': id10803 '803': id10804 '804': id10805 '805': id10806 '806': id10807 '807': id10808 '808': id10809 '809': id10810 '810': id10811 '811': id10812 '812': id10813 '813': id10814 '814': id10815 '815': id10816 '816': id10817 '817': id10818 '818': id10819 '819': id10820 '820': id10821 '821': id10822 '822': id10823 '823': id10824 '824': id10825 '825': id10826 '826': id10827 '827': id10828 '828': id10829 '829': id10830 '830': id10831 '831': id10832 '832': id10833 '833': id10834 '834': id10835 '835': id10836 '836': id10837 '837': id10838 '838': id10839 '839': id10840 '840': id10841 '841': id10842 '842': id10843 '843': id10844 '844': id10845 '845': id10846 '846': id10847 '847': id10848 '848': id10849 '849': id10850 '850': id10851 '851': id10852 '852': id10853 '853': id10854 '854': id10855 '855': id10856 '856': id10857 '857': id10858 '858': id10859 '859': id10860 '860': id10861 '861': id10862 '862': id10863 '863': id10864 '864': id10865 '865': id10866 '866': id10867 '867': id10868 '868': id10869 '869': id10870 '870': id10871 '871': id10872 '872': id10873 '873': id10874 '874': id10875 '875': id10876 '876': id10877 '877': id10878 '878': id10879 '879': id10880 '880': id10881 '881': id10882 '882': id10883 '883': id10884 '884': id10885 '885': id10886 '886': id10887 '887': id10888 '888': id10889 '889': id10890 '890': id10891 '891': id10892 '892': id10893 '893': id10894 '894': id10895 '895': id10896 '896': id10897 '897': id10898 '898': id10899 '899': id10900 '900': id10901 '901': id10902 '902': id10903 '903': id10904 '904': id10905 '905': id10906 '906': id10907 '907': id10908 '908': id10909 '909': id10910 '910': id10911 '911': id10912 '912': id10913 '913': id10914 '914': id10915 '915': id10916 '916': id10917 '917': id10918 '918': id10919 '919': id10920 '920': id10921 '921': id10922 '922': id10923 '923': id10924 '924': id10925 '925': id10926 '926': id10927 '927': id10928 '928': id10929 '929': id10930 '930': id10931 '931': id10932 '932': id10933 '933': id10934 '934': id10935 '935': id10936 '936': id10937 '937': id10938 '938': id10939 '939': id10940 '940': id10941 '941': id10942 '942': id10943 '943': id10944 '944': id10945 '945': id10946 '946': id10947 '947': id10948 '948': id10949 '949': id10950 '950': id10951 '951': id10952 '952': id10953 '953': id10954 '954': id10955 '955': id10956 '956': id10957 '957': id10958 '958': id10959 '959': id10960 '960': id10961 '961': id10962 '962': id10963 '963': id10964 '964': id10965 '965': id10966 '966': id10967 '967': id10968 '968': id10969 '969': id10970 '970': id10971 '971': id10972 '972': id10973 '973': id10974 '974': id10975 '975': id10976 '976': id10977 '977': id10978 '978': id10979 '979': id10980 '980': id10981 '981': id10982 '982': id10983 '983': id10984 '984': id10985 '985': id10986 '986': id10987 '987': id10988 '988': id10989 '989': id10990 '990': id10991 '991': id10992 '992': id10993 '993': id10994 '994': id10995 '995': id10996 '996': id10997 '997': id10998 '998': id10999 '999': id11000 '1000': id11001 '1001': id11002 '1002': id11003 '1003': id11004 '1004': id11005 '1005': id11006 '1006': id11007 '1007': id11008 '1008': id11009 '1009': id11010 '1010': id11011 '1011': id11012 '1012': id11013 '1013': id11014 '1014': id11015 '1015': id11016 '1016': id11017 '1017': id11018 '1018': id11019 '1019': id11020 '1020': id11021 '1021': id11022 '1022': id11023 '1023': id11024 '1024': id11025 '1025': id11026 '1026': id11027 '1027': id11028 '1028': id11029 '1029': id11030 '1030': id11031 '1031': id11032 '1032': id11033 '1033': id11034 '1034': id11035 '1035': id11036 '1036': id11037 '1037': id11038 '1038': id11039 '1039': id11040 '1040': id11041 '1041': id11042 '1042': id11043 '1043': id11044 '1044': id11045 '1045': id11046 '1046': id11047 '1047': id11048 '1048': id11049 '1049': id11050 '1050': id11051 '1051': id11052 '1052': id11053 '1053': id11054 '1054': id11055 '1055': id11056 '1056': id11057 '1057': id11058 '1058': id11059 '1059': id11060 '1060': id11061 '1061': id11062 '1062': id11063 '1063': id11064 '1064': id11065 '1065': id11066 '1066': id11067 '1067': id11068 '1068': id11069 '1069': id11070 '1070': id11071 '1071': id11072 '1072': id11073 '1073': id11074 '1074': id11075 '1075': id11076 '1076': id11077 '1077': id11078 '1078': id11079 '1079': id11080 '1080': id11081 '1081': id11082 '1082': id11083 '1083': id11084 '1084': id11085 '1085': id11086 '1086': id11087 '1087': id11088 '1088': id11089 '1089': id11090 '1090': id11091 '1091': id11092 '1092': id11093 '1093': id11094 '1094': id11095 '1095': id11096 '1096': id11097 '1097': id11098 '1098': id11099 '1099': id11100 '1100': id11101 '1101': id11102 '1102': id11103 '1103': id11104 '1104': id11105 '1105': id11106 '1106': id11107 '1107': id11108 '1108': id11109 '1109': id11110 '1110': id11111 '1111': id11112 '1112': id11113 '1113': id11114 '1114': id11115 '1115': id11116 '1116': id11117 '1117': id11118 '1118': id11119 '1119': id11120 '1120': id11121 '1121': id11122 '1122': id11123 '1123': id11124 '1124': id11125 '1125': id11126 '1126': id11127 '1127': id11128 '1128': id11129 '1129': id11130 '1130': id11131 '1131': id11132 '1132': id11133 '1133': id11134 '1134': id11135 '1135': id11136 '1136': id11137 '1137': id11138 '1138': id11139 '1139': id11140 '1140': id11141 '1141': id11142 '1142': id11143 '1143': id11144 '1144': id11145 '1145': id11146 '1146': id11147 '1147': id11148 '1148': id11149 '1149': id11150 '1150': id11151 '1151': id11152 '1152': id11153 '1153': id11154 '1154': id11155 '1155': id11156 '1156': id11157 '1157': id11158 '1158': id11159 '1159': id11160 '1160': id11161 '1161': id11162 '1162': id11163 '1163': id11164 '1164': id11165 '1165': id11166 '1166': id11167 '1167': id11168 '1168': id11169 '1169': id11170 '1170': id11171 '1171': id11172 '1172': id11173 '1173': id11174 '1174': id11175 '1175': id11176 '1176': id11177 '1177': id11178 '1178': id11179 '1179': id11180 '1180': id11181 '1181': id11182 '1182': id11183 '1183': id11184 '1184': id11185 '1185': id11186 '1186': id11187 '1187': id11188 '1188': id11189 '1189': id11190 '1190': id11191 '1191': id11192 '1192': id11193 '1193': id11194 '1194': id11195 '1195': id11196 '1196': id11197 '1197': id11198 '1198': id11199 '1199': id11200 '1200': id11201 '1201': id11202 '1202': id11203 '1203': id11204 '1204': id11205 '1205': id11206 '1206': id11207 '1207': id11208 '1208': id11209 '1209': id11210 '1210': id11211 '1211': id11212 '1212': id11213 '1213': id11214 '1214': id11215 '1215': id11216 '1216': id11217 '1217': id11218 '1218': id11219 '1219': id11220 '1220': id11221 '1221': id11222 '1222': id11223 '1223': id11224 '1224': id11225 '1225': id11226 '1226': id11227 '1227': id11228 '1228': id11229 '1229': id11230 '1230': id11231 '1231': id11232 '1232': id11233 '1233': id11234 '1234': id11235 '1235': id11236 '1236': id11237 '1237': id11238 '1238': id11239 '1239': id11240 '1240': id11241 '1241': id11242 '1242': id11243 '1243': id11244 '1244': id11245 '1245': id11246 '1246': id11247 '1247': id11248 '1248': id11249 '1249': id11250 '1250': id11251 splits: - name: train num_bytes: 12729268 num_examples: 138361 - name: validation num_bytes: 635172 num_examples: 6904 - name: test num_bytes: 759096 num_examples: 8251 download_size: 0 dataset_size: 14123536 --- # Dataset Card for SUPERB ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [http://superbbenchmark.org](http://superbbenchmark.org) - **Repository:** [https://github.com/s3prl/s3prl](https://github.com/s3prl/s3prl) - **Paper:** [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051) - **Leaderboard:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [Lewis Tunstall](mailto:lewis@huggingface.co) and [Albert Villanova](mailto:albert@huggingface.co) ### Dataset Summary SUPERB is a leaderboard to benchmark the performance of a shared model across a wide range of speech processing tasks with minimal architecture changes and labeled data. ### Supported Tasks and Leaderboards The SUPERB leaderboard can be found here https://superbbenchmark.org/leaderboard and consists of the following tasks: #### pr Phoneme Recognition (PR) transcribes an utterance into the smallest content units. This task includes alignment modeling to avoid potentially inaccurate forced alignment. [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) train-clean-100/dev-clean/test-clean subsets are adopted in SUPERB for training/validation/testing. Phoneme transcriptions are obtained from the LibriSpeech official g2p-model-5 and the conversion script in Kaldi librispeech s5 recipe. The evaluation metric is phone error rate (PER). #### asr Automatic Speech Recognition (ASR) transcribes utterances into words. While PR analyzes the improvement in modeling phonetics, ASR reflects the significance of the improvement in a real-world scenario. [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) train-clean-100/devclean/test-clean subsets are used for training/validation/testing. The evaluation metric is word error rate (WER). #### ks Keyword Spotting (KS) detects preregistered keywords by classifying utterances into a predefined set of words. The task is usually performed on-device for the fast response time. Thus, accuracy, model size, and inference time are all crucial. SUPERB uses the widely used [Speech Commands dataset v1.0](https://www.tensorflow.org/datasets/catalog/speech_commands) for the task. The dataset consists of ten classes of keywords, a class for silence, and an unknown class to include the false positive. The evaluation metric is accuracy (ACC) ##### Example of usage: Use these auxillary functions to: - load the audio file into an audio data array - sample from long `_silence_` audio clips For other examples of handling long `_silence_` clips see the [S3PRL](https://github.com/s3prl/s3prl/blob/099ce807a6ffa6bf2482ceecfcaf83dea23da355/s3prl/downstream/speech_commands/dataset.py#L80) or [TFDS](https://github.com/tensorflow/datasets/blob/6b8cfdb7c3c0a04e731caaa8660ce948d0a67b1e/tensorflow_datasets/audio/speech_commands.py#L143) implementations. ```python def map_to_array(example): import soundfile as sf speech_array, sample_rate = sf.read(example["file"]) example["speech"] = speech_array example["sample_rate"] = sample_rate return example def sample_noise(example): # Use this function to extract random 1 sec slices of each _silence_ utterance, # e.g. inside `torch.utils.data.Dataset.__getitem__()` from random import randint if example["label"] == "_silence_": random_offset = randint(0, len(example["speech"]) - example["sample_rate"] - 1) example["speech"] = example["speech"][random_offset : random_offset + example["sample_rate"]] return example ``` #### qbe Query by Example Spoken Term Detection (QbE) detects a spoken term (query) in an audio database (documents) by binary discriminating a given pair of query and document into a match or not. The English subset in [QUESST 2014 challenge](https://github.com/s3prl/s3prl/tree/master/downstream#qbe-query-by-example-spoken-term-detection) is adopted since we focus on investigating English as the first step. The evaluation metric is maximum term weighted value (MTWV) which balances misses and false alarms. #### ic Intent Classification (IC) classifies utterances into predefined classes to determine the intent of speakers. SUPERB uses the [Fluent Speech Commands dataset](https://github.com/s3prl/s3prl/tree/master/downstream#ic-intent-classification---fluent-speech-commands), where each utterance is tagged with three intent labels: action, object, and location. The evaluation metric is accuracy (ACC). #### sf Slot Filling (SF) predicts a sequence of semantic slot-types from an utterance, like a slot-type FromLocation for a spoken word Taipei, which is known as a slot-value. Both slot-types and slot-values are essential for an SLU system to function. The evaluation metrics thus include slot-type F1 score and slotvalue CER. [Audio SNIPS](https://github.com/s3prl/s3prl/tree/master/downstream#sf-end-to-end-slot-filling) is adopted, which synthesized multi-speaker utterances for SNIPS. Following the standard split in SNIPS, US-accent speakers are further selected for training, and others are for validation/testing. #### si Speaker Identification (SI) classifies each utterance for its speaker identity as a multi-class classification, where speakers are in the same predefined set for both training and testing. The widely used [VoxCeleb1 dataset](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html) is adopted, and the evaluation metric is accuracy (ACC). #### asv Automatic Speaker Verification (ASV) verifies whether the speakers of a pair of utterances match as a binary classification, and speakers in the testing set may not appear in the training set. Thus, ASV is more challenging than SID. VoxCeleb1 is used without VoxCeleb2 training data and noise augmentation. The evaluation metric is equal error rate (EER). #### sd Speaker Diarization (SD) predicts *who is speaking when* for each timestamp, and multiple speakers can speak simultaneously. The model has to encode rich speaker characteristics for each frame and should be able to represent mixtures of signals. [LibriMix](https://github.com/s3prl/s3prl/tree/master/downstream#sd-speaker-diarization) is adopted where LibriSpeech train-clean-100/dev-clean/test-clean are used to generate mixtures for training/validation/testing. We focus on the two-speaker scenario as the first step. The time-coded speaker labels were generated using alignments from Kaldi LibriSpeech ASR model. The evaluation metric is diarization error rate (DER). ##### Example of usage Use these auxiliary functions to: - load the audio file into an audio data array - generate the label array ```python def load_audio_file(example, frame_shift=160): import soundfile as sf example["array"], example["sample_rate"] = sf.read( example["file"], start=example["start"] * frame_shift, stop=example["end"] * frame_shift ) return example def generate_label(example, frame_shift=160, num_speakers=2, rate=16000): import numpy as np start = example["start"] end = example["end"] frame_num = end - start speakers = sorted({speaker["speaker_id"] for speaker in example["speakers"]}) label = np.zeros((frame_num, num_speakers), dtype=np.int32) for speaker in example["speakers"]: speaker_index = speakers.index(speaker["speaker_id"]) start_frame = np.rint(speaker["start"] * rate / frame_shift).astype(int) end_frame = np.rint(speaker["end"] * rate / frame_shift).astype(int) rel_start = rel_end = None if start <= start_frame < end: rel_start = start_frame - start if start < end_frame <= end: rel_end = end_frame - start if rel_start is not None or rel_end is not None: label[rel_start:rel_end, speaker_index] = 1 example["label"] = label return example ``` #### er Emotion Recognition (ER) predicts an emotion class for each utterance. The most widely used ER dataset [IEMOCAP](https://github.com/s3prl/s3prl/tree/master/downstream#er-emotion-recognition) is adopted, and we follow the conventional evaluation protocol: we drop the unbalance emotion classes to leave the final four classes with a similar amount of data points and cross-validates on five folds of the standard splits. The evaluation metric is accuracy (ACC). ### Languages The language data in SUPERB is in English (BCP-47 `en`) ## Dataset Structure ### Data Instances #### pr [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### asr An example from each split looks like: ```python {'chapter_id': 1240, 'file': 'path/to/file.flac', 'audio': {'path': 'path/to/file.flac', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 16000}, 'id': '103-1240-0000', 'speaker_id': 103, 'text': 'CHAPTER ONE MISSUS RACHEL LYNDE IS SURPRISED MISSUS RACHEL LYNDE ' 'LIVED JUST WHERE THE AVONLEA MAIN ROAD DIPPED DOWN INTO A LITTLE ' 'HOLLOW FRINGED WITH ALDERS AND LADIES EARDROPS AND TRAVERSED BY A ' 'BROOK'} ``` #### ks An example from each split looks like: ```python { 'file': '/path/yes/af7a8296_nohash_1.wav', 'audio': {'path': '/path/yes/af7a8296_nohash_1.wav', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 16000}, 'label': 0 # 'yes' } ``` #### qbe [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### ic ```python { 'file': "/path/wavs/speakers/2BqVo8kVB2Skwgyb/063aa8f0-4479-11e9-a9a5-5dbec3b8816a.wav", 'audio': {'path': '/path/wavs/speakers/2BqVo8kVB2Skwgyb/063aa8f0-4479-11e9-a9a5-5dbec3b8816a.wav', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 16000}, 'speaker_id': '2BqVo8kVB2Skwgyb', 'text': 'Turn the bedroom lights off', 'action': 3, # 'deactivate' 'object': 7, # 'lights' 'location': 0 # 'bedroom' } ``` #### sf [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### si ```python { 'file': '/path/wav/id10003/na8-QEFmj44/00003.wav', 'audio': {'path': '/path/wav/id10003/na8-QEFmj44/00003.wav', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 16000}, 'label': 2 # 'id10003' } ``` #### asv [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### sd An example from each split looks like: ```python { 'record_id': '1578-6379-0038_6415-111615-0009', 'file': 'path/to/file.wav', 'audio': {'path': 'path/to/file.wav', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 16000}, 'start': 0, 'end': 1590, 'speakers': [ {'speaker_id': '1578', 'start': 28, 'end': 657}, {'speaker_id': '6415', 'start': 28, 'end': 1576} ] } ``` #### er [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Data Fields ####Note abouth the `audio` fields When accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. #### pr [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### asr - `file` (`string`): Path to the WAV audio file. - `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. - `text` (`string`): The transcription of the audio file. - `speaker_id` (`integer`): A unique ID of the speaker. The same speaker id can be found for multiple data samples. - `chapter_id` (`integer`): ID of the audiobook chapter which includes the transcription. - `id` (`string`): A unique ID of the data sample. #### ks - `file` (`string`): Path to the WAV audio file. - `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. - `label` (`ClassLabel`): Label of the spoken command. Possible values: - `0: "yes", 1: "no", 2: "up", 3: "down", 4: "left", 5: "right", 6: "on", 7: "off", 8: "stop", 9: "go", 10: "_silence_", 11: "_unknown_"` #### qbe [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### ic - `file` (`string`): Path to the WAV audio file. - `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. - `speaker_id` (`string`): ID of the speaker. - `text` (`string`): Transcription of the spoken command. - `action` (`ClassLabel`): Label of the command's action. Possible values: - `0: "activate", 1: "bring", 2: "change language", 3: "deactivate", 4: "decrease", 5: "increase"` - `object` (`ClassLabel`): Label of the command's object. Possible values: - `0: "Chinese", 1: "English", 2: "German", 3: "Korean", 4: "heat", 5: "juice", 6: "lamp", 7: "lights", 8: "music", 9: "newspaper", 10: "none", 11: "shoes", 12: "socks", 13: "volume"` - `location` (`ClassLabel`): Label of the command's location. Possible values: - `0: "bedroom", 1: "kitchen", 2: "none", 3: "washroom"` #### sf [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### si - `file` (`string`): Path to the WAV audio file. - `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. - `label` (`ClassLabel`): Label (ID) of the speaker. Possible values: - `0: "id10001", 1: "id10002", 2: "id10003", ..., 1250: "id11251"` #### asv [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### sd The data fields in all splits are: - `record_id` (`string`): ID of the record. - `file` (`string`): Path to the WAV audio file. - `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. - `start` (`integer`): Start frame of the audio. - `end` (`integer`): End frame of the audio. - `speakers` (`list` of `dict`): List of speakers in the audio. Each item contains the fields: - `speaker_id` (`string`): ID of the speaker. - `start` (`integer`): Frame when the speaker starts speaking. - `end` (`integer`): Frame when the speaker stops speaking. #### er - `file` (`string`): Path to the WAV audio file. - `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. - `label` (`ClassLabel`): Label of the speech emotion. Possible values: - `0: "neu", 1: "hap", 2: "ang", 3: "sad"` ### Data Splits #### pr [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### asr | | train | validation | test | |-----|------:|-----------:|-----:| | asr | 28539 | 2703 | 2620 | #### ks | | train | validation | test | |----|------:|-----------:|-----:| | ks | 51094 | 6798 | 3081 | #### qbe [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### ic | | train | validation | test | |----|------:|-----------:|-----:| | ic | 23132 | 3118 | 3793 | #### sf [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### si | | train | validation | test | |----|-------:|-----------:|-----:| | si | 138361 | 6904 | 8251 | #### asv [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### sd The data is split into "train", "dev" and "test" sets, each containing the following number of examples: | | train | dev | test | |----|------:|-----:|-----:| | sd | 13901 | 3014 | 3002 | #### er The data is split into 5 sets intended for 5-fold cross-validation: | | session1 | session2 | session3 | session4 | session5 | |----|---------:|---------:|---------:|---------:|---------:| | er | 1085 | 1023 | 1151 | 1031 | 1241 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations Dataset provided for research purposes only. Please check dataset license for additional information. ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information #### pr and asr The license for Librispeech is the Creative Commons Attribution 4.0 International license ((CC-BY-4.0)[https://creativecommons.org/licenses/by/4.0/]). #### ks The license for Speech Commands is [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/legalcode) #### qbe The license for QUESST 2014 is not known. #### ic The license for Fluent Speech Commands dataset is the [Fluent Speech Commands Public License](https://fluent.ai/wp-content/uploads/2021/04/Fluent_Speech_Commands_Public_License.pdf) #### sf The license for Audio SNIPS dataset is not known. #### si and asv The license for VoxCeleb1 dataset is the Creative Commons Attribution 4.0 International license ([CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/)). #### sd LibriMix is based on the LibriSpeech (see above) and Wham! noises datasets. The Wham! noises dataset is distributed under the Attribution-NonCommercial 4.0 International ([CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/)) license. #### er The IEMOCAP license is ditributed under [its own license](https://sail.usc.edu/iemocap/Data_Release_Form_IEMOCAP.pdf). ### Citation Information ``` @article{DBLP:journals/corr/abs-2105-01051, author = {Shu{-}Wen Yang and Po{-}Han Chi and Yung{-}Sung Chuang and Cheng{-}I Jeff Lai and Kushal Lakhotia and Yist Y. Lin and Andy T. Liu and Jiatong Shi and Xuankai Chang and Guan{-}Ting Lin and Tzu{-}Hsien Huang and Wei{-}Cheng Tseng and Ko{-}tik Lee and Da{-}Rong Liu and Zili Huang and Shuyan Dong and Shang{-}Wen Li and Shinji Watanabe and Abdelrahman Mohamed and Hung{-}yi Lee}, title = {{SUPERB:} Speech processing Universal PERformance Benchmark}, journal = {CoRR}, volume = {abs/2105.01051}, year = {2021}, url = {https://arxiv.org/abs/2105.01051}, archivePrefix = {arXiv}, eprint = {2105.01051}, timestamp = {Thu, 01 Jul 2021 13:30:22 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2105-01051.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } Note that each SUPERB dataset has its own citation. Please see the source to see the correct citation for each contained dataset. ``` ### Contributions Thanks to [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova) and [@anton-l](https://github.com/anton-l) for adding this dataset.
baber/logiqa2
--- license: cc-by-sa-4.0 task_categories: - multiple-choice language: - en - zh pretty_name: LogiQA2.0 data_splits: - train - validation - test --- # Dataset Card for Dataset Name ## Dataset Description - **Homepage:** https://github.com/csitfun/LogiQA2.0, https://github.com/csitfun/LogiEval - **Repository:** https://github.com/csitfun/LogiQA2.0, https://github.com/csitfun/LogiEval - **Paper:** https://ieeexplore.ieee.org/abstract/document/10174688 ### Dataset Summary Logiqa2.0 dataset - logical reasoning in MRC and NLI tasks LogiEval: a benchmark suite for testing logical reasoning abilities of instruct-prompt large language models ### Licensing Information Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. ### Citation Information @ARTICLE{10174688, author={Liu, Hanmeng and Liu, Jian and Cui, Leyang and Teng, Zhiyang and Duan, Nan and Zhou, Ming and Zhang, Yue}, journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing}, title={LogiQA 2.0 — An Improved Dataset for Logical Reasoning in Natural Language Understanding}, year={2023}, volume={}, number={}, pages={1-16}, doi={10.1109/TASLP.2023.3293046}} @misc{liu2023evaluating, title={Evaluating the Logical Reasoning Ability of ChatGPT and GPT-4}, author={Hanmeng Liu and Ruoxi Ning and Zhiyang Teng and Jian Liu and Qiji Zhou and Yue Zhang}, year={2023}, eprint={2304.03439}, archivePrefix={arXiv}, primaryClass={cs.CL} }
Gustavosta/Stable-Diffusion-Prompts
--- license: - unknown annotations_creators: - no-annotation language_creators: - found language: - en source_datasets: - original --- # Stable Diffusion Dataset This is a set of about 80,000 prompts filtered and extracted from the image finder for Stable Diffusion: "[Lexica.art](https://lexica.art/)". It was a little difficult to extract the data, since the search engine still doesn't have a public API without being protected by cloudflare. If you want to test the model with a demo, you can go to: "[spaces/Gustavosta/MagicPrompt-Stable-Diffusion](https://huggingface.co/spaces/Gustavosta/MagicPrompt-Stable-Diffusion)". If you want to see the model, go to: "[Gustavosta/MagicPrompt-Stable-Diffusion](https://huggingface.co/Gustavosta/MagicPrompt-Stable-Diffusion)".
princeton-nlp/SWE-bench_oracle
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: dev path: data/dev-* - split: test path: data/test-* - split: validation path: data/validation-* dataset_info: features: - name: instance_id dtype: string - name: text dtype: string - name: repo dtype: string - name: base_commit dtype: string - name: problem_statement dtype: string - name: hints_text dtype: string - name: created_at dtype: string - name: patch dtype: string - name: test_patch dtype: string - name: version dtype: string - name: FAIL_TO_PASS dtype: string - name: PASS_TO_PASS dtype: string - name: environment_setup_commit dtype: string splits: - name: train num_bytes: 2927236667 num_examples: 18817 - name: dev num_bytes: 26551408 num_examples: 225 - name: test num_bytes: 248803640 num_examples: 2294 - name: validation num_bytes: 25752989 num_examples: 191 download_size: 1381451485 dataset_size: 3228344704 --- # Dataset Card for "SWE-bench_oracle" ### Dataset Summary SWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python. Evaluation is performed by unit test verification using post-PR behavior as the reference solution. The dataset was released as part of [SWE-bench: Can Language Models Resolve Real-World GitHub Issues?](https://arxiv.org/abs/2310.06770) This dataset `SWE-bench_oracle` includes a formatting of each instance using the "Oracle" retrieval setting as described in the paper. The `text` column can be used directly with LMs to generate patch files. Models are instructed to generate [`patch`](https://en.wikipedia.org/wiki/Patch_(Unix)) formatted file using the following template: ```diff <patch> diff --- a/path/to/file.py --- b/path/to/file.py @@ -1,3 +1,3 @@ This is a test file. -It contains several lines. +It has been modified. This is the third line. </patch> ``` This format can be used directly with the [SWE-bench inference scripts](https://github.com/princeton-nlp/SWE-bench/tree/main/inference). Please refer to these scripts for more details on inference. ### Supported Tasks and Leaderboards SWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at www.swebench.com ### Languages The text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type. ## Dataset Structure ### Data Instances An example of a SWE-bench datum is as follows: ``` instance_id: (str) - A formatted instance identifier, usually as repo_owner__repo_name-PR-number. text: (str) - The input text including instructions, the "Oracle" retrieved file, and an example of the patch format for output. patch: (str) - The gold patch, the patch generated by the PR (minus test-related code), that resolved the issue. repo: (str) - The repository owner/name identifier from GitHub. base_commit: (str) - The commit hash of the repository representing the HEAD of the repository before the solution PR is applied. hints_text: (str) - Comments made on the issue prior to the creation of the solution PR’s first commit creation date. created_at: (str) - The creation date of the pull request. test_patch: (str) - A test-file patch that was contributed by the solution PR. problem_statement: (str) - The issue title and body. version: (str) - Installation version to use for running evaluation. environment_setup_commit: (str) - commit hash to use for environment setup and installation. FAIL_TO_PASS: (str) - A json list of strings that represent the set of tests resolved by the PR and tied to the issue resolution. PASS_TO_PASS: (str) - A json list of strings that represent tests that should pass before and after the PR application. ``` [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
dart
--- annotations_creators: - crowdsourced - machine-generated language_creators: - crowdsourced - machine-generated language: - en license: - mit multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - extended|wikitable_questions - extended|wikisql - extended|web_nlg - extended|cleaned_e2e task_categories: - tabular-to-text task_ids: - rdf-to-text paperswithcode_id: dart pretty_name: DART dataset_info: features: - name: tripleset sequence: sequence: string - name: subtree_was_extended dtype: bool - name: annotations sequence: - name: source dtype: string - name: text dtype: string splits: - name: train num_bytes: 12966443 num_examples: 30526 - name: validation num_bytes: 1458106 num_examples: 2768 - name: test num_bytes: 2657644 num_examples: 5097 download_size: 29939366 dataset_size: 17082193 --- # Dataset Card for DART ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [homepahe](https://github.com/Yale-LILY/dart) - **Repository:** [github](https://github.com/Yale-LILY/dart) - **Paper:** [paper](https://arxiv.org/abs/2007.02871) - **Leaderboard:** [leaderboard](https://github.com/Yale-LILY/dart#leaderboard) ### Dataset Summary DART is a large dataset for open-domain structured data record to text generation. We consider the structured data record input as a set of RDF entity-relation triples, a format widely used for knowledge representation and semantics description. DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set. This hierarchical, structured format with its open-domain nature differentiates DART from other existing table-to-text corpora. ### Supported Tasks and Leaderboards The task associated to DART is text generation from data records that are RDF triplets: - `rdf-to-text`: The dataset can be used to train a model for text generation from RDF triplets, which consists in generating textual description of structured data. Success on this task is typically measured by achieving a *high* [BLEU](https://huggingface.co/metrics/bleu), [METEOR](https://huggingface.co/metrics/meteor), [BLEURT](https://huggingface.co/metrics/bleurt), [TER](https://huggingface.co/metrics/ter), [MoverScore](https://huggingface.co/metrics/mover_score), and [BERTScore](https://huggingface.co/metrics/bert_score). The ([BART-large model](https://huggingface.co/facebook/bart-large) from [BART](https://huggingface.co/transformers/model_doc/bart.html)) model currently achieves the following scores: | | BLEU | METEOR | TER | MoverScore | BERTScore | BLEURT | | ----- | ----- | ------ | ---- | ----------- | ---------- | ------ | | BART | 37.06 | 0.36 | 0.57 | 0.44 | 0.92 | 0.22 | This task has an active leaderboard which can be found [here](https://github.com/Yale-LILY/dart#leaderboard) and ranks models based on the above metrics while also reporting. ### Languages The dataset is in english (en). ## Dataset Structure ### Data Instances Here is an example from the dataset: ``` {'annotations': {'source': ['WikiTableQuestions_mturk'], 'text': ['First Clearing\tbased on Callicoon, New York and location at On NYS 52 1 Mi. Youngsville']}, 'subtree_was_extended': False, 'tripleset': [['First Clearing', 'LOCATION', 'On NYS 52 1 Mi. Youngsville'], ['On NYS 52 1 Mi. Youngsville', 'CITY_OR_TOWN', 'Callicoon, New York']]} ``` It contains one annotation where the textual description is 'First Clearing\tbased on Callicoon, New York and location at On NYS 52 1 Mi. Youngsville'. The RDF triplets considered to generate this description are in tripleset and are formatted as subject, predicate, object. ### Data Fields The different fields are: - `annotations`: - `text`: list of text descriptions of the triplets - `source`: list of sources of the RDF triplets (WikiTable, e2e, etc.) - `subtree_was_extended`: boolean, if the subtree condidered during the dataset construction was extended. Sometimes this field is missing, and therefore set to `None` - `tripleset`: RDF triplets as a list of triplets of strings (subject, predicate, object) ### Data Splits There are three splits, train, validation and test: | | train | validation | test | | ----- |------:|-----------:|-----:| | N. Examples | 30526 | 2768 | 6959 | ## Dataset Creation ### Curation Rationale Automatically generating textual descriptions from structured data inputs is crucial to improving the accessibility of knowledge bases to lay users. ### Source Data DART comes from existing datasets that cover a variety of different domains while allowing to build a tree ontology and form RDF triple sets as semantic representations. The datasets used are WikiTableQuestions, WikiSQL, WebNLG and Cleaned E2E. #### Initial Data Collection and Normalization DART is constructed using multiple complementary methods: (1) human annotation on open-domain Wikipedia tables from WikiTableQuestions (Pasupat and Liang, 2015) and WikiSQL (Zhong et al., 2017), (2) automatic conversion of questions in WikiSQL to declarative sentences, and (3) incorporation of existing datasets including WebNLG 2017 (Gardent et al., 2017a,b; Shimorina and Gardent, 2018) and Cleaned E2E (Novikova et al., 2017b; Dušek et al., 2018, 2019) #### Who are the source language producers? [More Information Needed] ### Annotations DART is constructed using multiple complementary methods: (1) human annotation on open-domain Wikipedia tables from WikiTableQuestions (Pasupat and Liang, 2015) and WikiSQL (Zhong et al., 2017), (2) automatic conversion of questions in WikiSQL to declarative sentences, and (3) incorporation of existing datasets including WebNLG 2017 (Gardent et al., 2017a,b; Shimorina and Gardent, 2018) and Cleaned E2E (Novikova et al., 2017b; Dušek et al., 2018, 2019) #### Annotation process The two stage annotation process for constructing tripleset sentence pairs is based on a tree-structured ontology of each table. First, internal skilled annotators denote the parent column for each column header. Then, a larger number of annotators provide a sentential description of an automatically-chosen subset of table cells in a row. #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Under MIT license (see [here](https://github.com/Yale-LILY/dart/blob/master/LICENSE)) ### Citation Information ``` @article{radev2020dart, title={DART: Open-Domain Structured Data Record to Text Generation}, author={Dragomir Radev and Rui Zhang and Amrit Rau and Abhinand Sivaprasad and Chiachun Hsieh and Nazneen Fatema Rajani and Xiangru Tang and Aadit Vyas and Neha Verma and Pranav Krishna and Yangxiaokang Liu and Nadia Irwanto and Jessica Pan and Faiaz Rahman and Ahmad Zaidi and Murori Mutuma and Yasin Tarabar and Ankit Gupta and Tao Yu and Yi Chern Tan and Xi Victoria Lin and Caiming Xiong and Richard Socher}, journal={arXiv preprint arXiv:2007.02871}, year={2020} ``` ### Contributions Thanks to [@lhoestq](https://github.com/lhoestq) for adding this dataset.
pminervini/HaluEval
--- license: apache-2.0 dataset_info: - config_name: dialogue features: - name: knowledge dtype: string - name: dialogue_history dtype: string - name: right_response dtype: string - name: hallucinated_response dtype: string splits: - name: data num_bytes: 6212598 num_examples: 10000 download_size: 3448632 dataset_size: 6212598 - config_name: dialogue_samples features: - name: knowledge dtype: string - name: dialogue_history dtype: string - name: response dtype: string - name: hallucination dtype: string splits: - name: data num_bytes: 5303515 num_examples: 10000 download_size: 2834937 dataset_size: 5303515 - config_name: general features: - name: ID dtype: string - name: user_query dtype: string - name: chatgpt_response dtype: string - name: hallucination dtype: string - name: hallucination_spans sequence: string splits: - name: data num_bytes: 2905898 num_examples: 4507 download_size: 1819331 dataset_size: 2905898 - config_name: qa features: - name: knowledge dtype: string - name: question dtype: string - name: right_answer dtype: string - name: hallucinated_answer dtype: string splits: - name: data num_bytes: 5486422 num_examples: 10000 download_size: 3751109 dataset_size: 5486422 - config_name: qa_samples features: - name: knowledge dtype: string - name: question dtype: string - name: answer dtype: string - name: hallucination dtype: string splits: - name: data num_bytes: 5112196 num_examples: 10000 download_size: 3427901 dataset_size: 5112196 - config_name: summarization features: - name: document dtype: string - name: right_summary dtype: string - name: hallucinated_summary dtype: string splits: - name: data num_bytes: 46408787 num_examples: 10000 download_size: 27990918 dataset_size: 46408787 - config_name: summarization_samples features: - name: document dtype: string - name: summary dtype: string - name: hallucination dtype: string splits: - name: data num_bytes: 42743472 num_examples: 10000 download_size: 25506458 dataset_size: 42743472 configs: - config_name: dialogue data_files: - split: data path: dialogue/data-* - config_name: dialogue_samples data_files: - split: data path: dialogue_samples/data-* - config_name: general data_files: - split: data path: general/data-* - config_name: qa data_files: - split: data path: qa/data-* - config_name: qa_samples data_files: - split: data path: qa_samples/data-* - config_name: summarization data_files: - split: data path: summarization/data-* - config_name: summarization_samples data_files: - split: data path: summarization_samples/data-* ---
rcds/swiss_judgment_prediction
--- pretty_name: Swiss-Judgment-Prediction annotations_creators: - found language_creators: - found language: - de - fr - it - en license: - cc-by-sa-4.0 multilinguality: - multilingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: [] tags: - judgement-prediction dataset_info: - config_name: de features: - name: id dtype: int32 - name: year dtype: int32 - name: text dtype: string - name: label dtype: class_label: names: '0': dismissal '1': approval - name: language dtype: string - name: region dtype: string - name: canton dtype: string - name: legal area dtype: string - name: source_language dtype: string splits: - name: train num_bytes: 104270719 num_examples: 35458 - name: validation num_bytes: 12131878 num_examples: 4705 - name: test num_bytes: 26056177 num_examples: 9725 download_size: 1000382331 dataset_size: 142458774 - config_name: fr features: - name: id dtype: int32 - name: year dtype: int32 - name: text dtype: string - name: label dtype: class_label: names: '0': dismissal '1': approval - name: language dtype: string - name: region dtype: string - name: canton dtype: string - name: legal area dtype: string - name: source_language dtype: string splits: - name: train num_bytes: 96807957 num_examples: 21179 - name: validation num_bytes: 13031904 num_examples: 3095 - name: test num_bytes: 33318359 num_examples: 6820 download_size: 1000382331 dataset_size: 143158220 - config_name: it features: - name: id dtype: int32 - name: year dtype: int32 - name: text dtype: string - name: label dtype: class_label: names: '0': dismissal '1': approval - name: language dtype: string - name: region dtype: string - name: canton dtype: string - name: legal area dtype: string - name: source_language dtype: string splits: - name: train num_bytes: 10773516 num_examples: 3072 - name: validation num_bytes: 1045551 num_examples: 408 - name: test num_bytes: 2474761 num_examples: 812 download_size: 1000382331 dataset_size: 14293828 - config_name: mt_de features: - name: id dtype: int32 - name: year dtype: int32 - name: text dtype: string - name: label dtype: class_label: names: '0': dismissal '1': approval - name: language dtype: string - name: region dtype: string - name: canton dtype: string - name: legal area dtype: string - name: source_language dtype: string splits: - name: train num_bytes: 106990696 num_examples: 24251 - name: validation - name: test download_size: 1000382331 dataset_size: 106990696 - config_name: mt_fr features: - name: id dtype: int32 - name: year dtype: int32 - name: text dtype: string - name: label dtype: class_label: names: '0': dismissal '1': approval - name: language dtype: string - name: region dtype: string - name: canton dtype: string - name: legal area dtype: string - name: source_language dtype: string splits: - name: train num_bytes: 117932134 num_examples: 38524 - name: validation - name: test download_size: 1000382331 dataset_size: 117932134 - config_name: mt_it features: - name: id dtype: int32 - name: year dtype: int32 - name: text dtype: string - name: label dtype: class_label: names: '0': dismissal '1': approval - name: language dtype: string - name: region dtype: string - name: canton dtype: string - name: legal area dtype: string - name: source_language dtype: string splits: - name: train num_bytes: 201749076 num_examples: 56631 - name: validation - name: test download_size: 1000382331 dataset_size: 201749076 - config_name: mt_en features: - name: id dtype: int32 - name: year dtype: int32 - name: text dtype: string - name: label dtype: class_label: names: '0': dismissal '1': approval - name: language dtype: string - name: region dtype: string - name: canton dtype: string - name: legal area dtype: string - name: source_language dtype: string splits: - name: train num_bytes: 196352783 num_examples: 59703 - name: validation - name: test download_size: 1000382331 dataset_size: 196352783 - config_name: all features: - name: id dtype: int32 - name: year dtype: int32 - name: text dtype: string - name: label dtype: class_label: names: '0': dismissal '1': approval - name: language dtype: string - name: region dtype: string - name: canton dtype: string - name: legal area dtype: string - name: source_language dtype: string splits: - name: train num_bytes: 211852192 num_examples: 59709 - name: validation num_bytes: 26209333 num_examples: 8208 - name: test num_bytes: 61849297 num_examples: 17357 download_size: 1000382331 dataset_size: 299910822 - config_name: all+mt features: - name: id dtype: int32 - name: year dtype: int32 - name: text dtype: string - name: label dtype: class_label: names: '0': dismissal '1': approval - name: language dtype: string - name: region dtype: string - name: canton dtype: string - name: legal area dtype: string - name: source_language dtype: string splits: - name: train num_bytes: 834876881 num_examples: 238818 - name: validation num_bytes: 26209333 num_examples: 8208 - name: test num_bytes: 61849297 num_examples: 17357 download_size: 1000382331 dataset_size: 922935511 --- # Dataset Card for "SwissJudgmentPrediction" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/JoelNiklaus/SwissCourtRulingCorpus - **Repository:** https://github.com/JoelNiklaus/SwissCourtRulingCorpus - **Paper:** https://arxiv.org/abs/2110.00806 - **Leaderboard:** N/A - **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus@inf.unibe.ch) ### Dataset Summary **Documents** Swiss-Judgment-Prediction is a multilingual, diachronic dataset of 85K Swiss Federal Supreme Court (FSCS) cases annotated with the respective binarized judgment outcome (approval/dismissal), posing a challenging text classification task. We also provide additional metadata, i.e., the publication year, the legal area and the canton of origin per case, to promote robustness and fairness studies on the critical area of legal NLP. ### Supported Tasks and Leaderboards SwissJudgmentPrediction can be used for the legal judgment prediction task. The dataset is not yet part of an established benchmark. ### Languages Switzerland has four official languages with 3 languages (German, French and Italian) being represented in more than 1000 Swiss Federal Supreme court decisions. The decisions are written by the judges and clerks in the language of the proceedings. ## Dataset Structure In version 2 we added machine translated data using [EasyNMT](https://github.com/UKPLab/EasyNMT) for all documents into German, French, Italian and English as an additional training set. ### Data Instances **Multilingual use of the dataset** When the dataset is used in a multilingual setting selecting the the 'all_languages' flag: ```python from datasets import load_dataset dataset = load_dataset('swiss_judgment_prediction', 'all_languages') ``` ``` { "id": 48757, "year": 2015, "facts": "Sachverhalt: A. X._ war bei der Krankenversicherung C._ taggeldversichert. Infolge einer Arbeitsunf\u00e4higkeit leistete ihm die C._ vom 30. Juni 2011 bis am 28. Juni 2013 Krankentaggelder, wobei die Leistungen bis am 30. September 2012 auf Grundlage einer Arbeitsunf\u00e4higkeit von 100% und danach basierend auf einer Arbeitsunf\u00e4higkeit von 55% erbracht wurden. Die Neueinsch\u00e4tzung der Arbeitsf\u00e4higkeit erfolgte anhand eines Gutachtens der D._ AG vom 27. August 2012, welches im Auftrag der C._ erstellt wurde. X._ machte daraufhin gegen\u00fcber der C._ geltend, er sei entgegen dem Gutachten auch nach dem 30. September 2012 zu 100% arbeitsunf\u00e4hig gewesen. Ferner verlangte er von der D._ AG zwecks externer \u00dcberpr\u00fcfung des Gutachtens die Herausgabe s\u00e4mtlicher diesbez\u00fcglicher Notizen, Auswertungen und Unterlagen. A._ (als Gesch\u00e4ftsf\u00fchrer der D._ AG) und B._ (als f\u00fcr das Gutachten medizinisch Verantwortliche) antworteten ihm, dass sie alle Unterlagen der C._ zugestellt h\u00e4tten und dass allf\u00e4llige Fragen zum Gutachten direkt der C._ zu stellen seien. X._ reichte am 2. Januar 2014 eine Strafanzeige gegen A._ und B._ ein. Er wirft diesen vor, ihn durch die Nichtherausgabe der Dokumente und durch Behinderung des IV-Verfahrens gen\u00f6tigt, Daten besch\u00e4digt bzw. vernichtet und ein falsches \u00e4rztliches Zeugnis ausgestellt zu haben. Zudem h\u00e4tten sie durch die Verz\u00f6gerung des IV-Verfahrens und insbesondere durch das falsche \u00e4rztliche Zeugnis sein Verm\u00f6gen arglistig gesch\u00e4digt. B. Die Staatsanwaltschaft des Kantons Bern, Region Oberland, nahm das Verfahren wegen N\u00f6tigung, Datenbesch\u00e4digung, falschem \u00e4rztlichem Zeugnis und arglistiger Verm\u00f6genssch\u00e4digung mit Verf\u00fcgung vom 10. November 2014 nicht an die Hand. Das Obergericht des Kantons Bern wies die von X._ dagegen erhobene Beschwerde am 27. April 2015 ab, soweit darauf einzutreten war. C. X._ beantragt mit Beschwerde in Strafsachen, der Beschluss vom 27. April 2015 sei aufzuheben und die Angelegenheit zur korrekten Ermittlung des Sachverhalts an die Staatsanwaltschaft zur\u00fcckzuweisen. Er stellt zudem den sinngem\u00e4ssen Antrag, das bundesgerichtliche Verfahren sei w\u00e4hrend der Dauer des konnexen Strafverfahrens gegen eine Teilgutachterin und des ebenfalls konnexen Zivil- oder Strafverfahrens gegen die C._ wegen Einsichtsverweigerung in das mutmasslich gef\u00e4lschte Originalgutachten zu sistieren. X._ ersucht um unentgeltliche Rechtspflege. ", "labels": 0, # dismissal "language": "de", "region": "Espace Mittelland", "canton": "be", "legal area": "penal law" } ``` **Monolingual use of the dataset** When the dataset is used in a monolingual setting selecting the ISO language code for one of the 3 supported languages. For example: ```python from datasets import load_dataset dataset = load_dataset('swiss_judgment_prediction', 'de') ``` ``` { "id": 48757, "year": 2015, "facts": "Sachverhalt: A. X._ war bei der Krankenversicherung C._ taggeldversichert. Infolge einer Arbeitsunf\u00e4higkeit leistete ihm die C._ vom 30. Juni 2011 bis am 28. Juni 2013 Krankentaggelder, wobei die Leistungen bis am 30. September 2012 auf Grundlage einer Arbeitsunf\u00e4higkeit von 100% und danach basierend auf einer Arbeitsunf\u00e4higkeit von 55% erbracht wurden. Die Neueinsch\u00e4tzung der Arbeitsf\u00e4higkeit erfolgte anhand eines Gutachtens der D._ AG vom 27. August 2012, welches im Auftrag der C._ erstellt wurde. X._ machte daraufhin gegen\u00fcber der C._ geltend, er sei entgegen dem Gutachten auch nach dem 30. September 2012 zu 100% arbeitsunf\u00e4hig gewesen. Ferner verlangte er von der D._ AG zwecks externer \u00dcberpr\u00fcfung des Gutachtens die Herausgabe s\u00e4mtlicher diesbez\u00fcglicher Notizen, Auswertungen und Unterlagen. A._ (als Gesch\u00e4ftsf\u00fchrer der D._ AG) und B._ (als f\u00fcr das Gutachten medizinisch Verantwortliche) antworteten ihm, dass sie alle Unterlagen der C._ zugestellt h\u00e4tten und dass allf\u00e4llige Fragen zum Gutachten direkt der C._ zu stellen seien. X._ reichte am 2. Januar 2014 eine Strafanzeige gegen A._ und B._ ein. Er wirft diesen vor, ihn durch die Nichtherausgabe der Dokumente und durch Behinderung des IV-Verfahrens gen\u00f6tigt, Daten besch\u00e4digt bzw. vernichtet und ein falsches \u00e4rztliches Zeugnis ausgestellt zu haben. Zudem h\u00e4tten sie durch die Verz\u00f6gerung des IV-Verfahrens und insbesondere durch das falsche \u00e4rztliche Zeugnis sein Verm\u00f6gen arglistig gesch\u00e4digt. B. Die Staatsanwaltschaft des Kantons Bern, Region Oberland, nahm das Verfahren wegen N\u00f6tigung, Datenbesch\u00e4digung, falschem \u00e4rztlichem Zeugnis und arglistiger Verm\u00f6genssch\u00e4digung mit Verf\u00fcgung vom 10. November 2014 nicht an die Hand. Das Obergericht des Kantons Bern wies die von X._ dagegen erhobene Beschwerde am 27. April 2015 ab, soweit darauf einzutreten war. C. X._ beantragt mit Beschwerde in Strafsachen, der Beschluss vom 27. April 2015 sei aufzuheben und die Angelegenheit zur korrekten Ermittlung des Sachverhalts an die Staatsanwaltschaft zur\u00fcckzuweisen. Er stellt zudem den sinngem\u00e4ssen Antrag, das bundesgerichtliche Verfahren sei w\u00e4hrend der Dauer des konnexen Strafverfahrens gegen eine Teilgutachterin und des ebenfalls konnexen Zivil- oder Strafverfahrens gegen die C._ wegen Einsichtsverweigerung in das mutmasslich gef\u00e4lschte Originalgutachten zu sistieren. X._ ersucht um unentgeltliche Rechtspflege. ", "labels": 0, # dismissal "language": "de", "region": "Espace Mittelland", "canton": "be", "legal area": "penal law" } ``` ### Data Fields **Multilingual use of the dataset** The following data fields are provided for documents (`train`, `validation`, `test`): `id`: (**int**) a unique identifier of the for the document \ `year`: (**int**) the publication year \ `text`: (**str**) the facts of the case \ `label`: (**class label**) the judgment outcome: 0 (dismissal) or 1 (approval) \ `language`: (**str**) one of (de, fr, it) \ `region`: (**str**) the region of the lower court \ `canton`: (**str**) the canton of the lower court \ `legal area`: (**str**) the legal area of the case **Monolingual use of the dataset** The following data fields are provided for documents (`train`, `validation`, `test`): `id`: (**int**) a unique identifier of the for the document \ `year`: (**int**) the publication year \ `text`: (**str**) the facts of the case \ `label`: (**class label**) the judgment outcome: 0 (dismissal) or 1 (approval) \ `language`: (**str**) one of (de, fr, it) \ `region`: (**str**) the region of the lower court \ `canton`: (**str**) the canton of the lower court \ `legal area`: (**str**) the legal area of the case ### Data Splits | Language | Subset | Number of Documents (Training/Validation/Test) | |------------|------------|------------------------------------------------| | German | **de** | 35'452 / 4'705 / 9'725 | | French | **fr** | 21'179 / 3'095 / 6'820 | | Italian | **it** | 3'072 / 408 / 812 | | All | **all** | 59'709 / 8'208 / 17'357 | | MT German | **mt_de** | 24'251 / 0 / 0 | | MT French | **mt_fr** | 38'524 / 0 / 0 | | MT Italian | **mt_it** | 56'631 / 0 / 0 | | MT All | **all+mt** | 238'818 / 8'208 / 17'357 | ## Dataset Creation ### Curation Rationale The dataset was curated by Niklaus et al. (2021). ### Source Data #### Initial Data Collection and Normalization The original data are available at the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML. #### Who are the source language producers? Switzerland has four official languages with 3 languages (German, French and Italian) being represented in more than 1000 Swiss Federal Supreme court decisions. The decisions are written by the judges and clerks in the language of the proceedings. ### Annotations #### Annotation process The decisions have been annotated with the binarized judgment outcome using parsers and regular expressions. #### Who are the annotators? Joel Niklaus and Adrian Jörg annotated the binarized judgment outcomes. Metadata is published by the Swiss Federal Supreme Court (https://www.bger.ch). ### Personal and Sensitive Information The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Niklaus et al. (2021) ### Licensing Information We release the data under CC-BY-4.0 which complies with the court licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf) © Swiss Federal Supreme Court, 2000-2020 The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made. Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf ### Citation Information *Joel Niklaus, Ilias Chalkidis, and Matthias Stürmer.* *Swiss-Judgment-Prediction: A Multilingual Legal Judgment Prediction Benchmark* *Proceedings of the 2021 Natural Legal Language Processing Workshop. Punta Cana, Dominican Republic. 2021* ``` @InProceedings{niklaus-etal-2021-swiss, author = {Niklaus, Joel and Chalkidis, Ilias and Stürmer, Matthias}, title = {Swiss-Judgment-Prediction: A Multilingual Legal Judgment Prediction Benchmark}, booktitle = {Proceedings of the 2021 Natural Legal Language Processing Workshop}, year = {2021}, location = {Punta Cana, Dominican Republic}, } ``` and the new citation ``` @misc{niklaus2022empirical, title={An Empirical Study on Cross-X Transfer for Legal Judgment Prediction}, author={Joel Niklaus and Matthias Stürmer and Ilias Chalkidis}, year={2022}, eprint={2209.12325}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@joelniklaus](https://github.com/joelniklaus) for adding this dataset.
KBLab/overlim
--- annotations_creators: - other language_creators: - other language: - sv - da - nb license: - cc-by-4.0 multilinguality: - translation size_categories: - unknown source_datasets: - extended|glue - extended|super_glue task_categories: - text-classification task_ids: - natural-language-inference - semantic-similarity-classification - sentiment-classification - text-scoring pretty_name: overlim tags: - qa-nli - paraphrase-identification --- # Dataset Card for OverLim ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The _OverLim_ dataset contains some of the GLUE and SuperGLUE tasks automatically translated to Swedish, Danish, and Norwegian (bokmål), using the OpusMT models for MarianMT. The translation quality was not manually checked and may thus be faulty. Results on these datasets should thus be interpreted carefully. If you want to have an easy script to train and evaluate your models have a look [here](https://github.com/kb-labb/overlim_eval) ### Supported Tasks and Leaderboards The data contains the following tasks from GLUE and SuperGLUE: - GLUE - `mnli` - `mrpc` - `qnli` - `qqp` - `rte` - `sst` - `stsb` - `wnli` - SuperGLUE - `boolq` - `cb` - `copa` - `rte` ### Languages - Swedish - Danish - Norwegian (bokmål) ## Dataset Structure ### Data Instances Every task has their own set of features, but all share an `idx` and `label`. - GLUE - `mnli` - `premise`, `hypothesis` - `mrpc` - `text_a`, `text_b` - `qnli` - `premise`, `hypothesis` - `qqp` - `text_a`, `text_b` - `sst` - `text` - `stsb` - `text_a`, `text_b` - `wnli` - `premise`, `hypothesis` - SuperGLUE - `boolq` - `question`, `passage` - `cb` - `premise`, `hypothesis` - `copa` - `premise`, `choice1`, `choice2`, `question` - `rte` - `premise`, `hypothesis` ### Data Splits In order to have test-split, we repurpose the original validation-split as test-split, and split the training-split into a new training- and validation-split, with an 80-20 distribution. ## Dataset Creation For more information about the individual tasks see (https://gluebenchmark.com) and (https://super.gluebenchmark.com). ### Curation Rationale Training non-English models is easy, but there is a lack of evaluation datasets to compare their actual performance. ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@kb-labb](https://github.com/kb-labb) for adding this dataset.
tinyBenchmarks/tinyMMLU
--- dataset_info: config_name: all features: - name: question dtype: string - name: subject dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D - name: input_formatted dtype: string splits: - name: test num_bytes: 337628 num_examples: 100 - name: validation num_bytes: 5425300 num_examples: 1531 - name: dev num_bytes: 858526 num_examples: 285 download_size: 1661908 dataset_size: 6621454 configs: - config_name: all data_files: - split: test path: all/test-* - split: validation path: all/validation-* - split: dev path: all/dev-* annotations_creators: - no-annotation language_creators: - expert-generated language: - en multilinguality: - monolingual source_datasets: - cais/mmlu task_categories: - question-answering task_ids: - multiple-choice-qa pretty_name: tinyMMLU language_bcp47: - en-US --- # tinyMMLU Welcome to tinyMMLU! This dataset serves as a concise version of the [MMLU](https://huggingface.co/datasets/cais/mmlu) dataset, offering a subset of 100 data points selected from the original compilation. tinyMMLU is designed to enable users to efficiently estimate the performance of a large language model (LLM) with reduced dataset size, saving computational resources while maintaining the essence of the MMLU evaluation. ## Features - **Compact Dataset:** With only 100 data points, tinyMMLU provides a swift and efficient way to evaluate your LLM's performance against a benchmark set, maintaining the essence of the original MMLU dataset. - **Compatibility:** tinyMMLU is compatible with evaluation using the [lm evaluation harness](https://github.com/EleutherAI/lm-evaluation-harness/), but can also be integrated into your custom pipeline. See below for more details. ## Model Evaluation Users looking to evaluate a new model with tinyMMLU can use the [lm evaluation harness (v0.4.1 or later)](https://github.com/EleutherAI/lm-evaluation-harness/). To do so, download this [task config folder](https://drive.google.com/uc?export=download&id=1IMeCFfcWyYVEzJ2hoMZn0cPftWcxYd82), and add the uncompressed folder to your version of the evaluation harness at `lm-evaluation-harness/lm_eval/tasks/`. Afterwards, run your evaluation harness as usual with `--tasks=tinyMMLU` and `--log_samples` but without the `--num_fewshot` argument: ```shell lm_eval --model hf --model_args pretrained="<your-model>" --tasks=tinyMMLU --batch_size=1 --output_path=<output_path> --log_samples ``` Alternatively, tinyMMLU can be integrated into any other pipeline by downloading the data via ```python from datasets import load_dataset tiny_data = load_dataset('tinyBenchmarks/tinyMMLU', 'all')['test'] ``` Now, `tiny_data` contains the 100 subsampled data points with the same features as the original dataset, as well as an additional field containing the preformatted data points. The preformatted data points follow the formatting used in the [open llm leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) including the respective in-context examples. When using the lm evaluation harness, you can then estimate your LLM's performance using the following code. First, ensure you have the tinyBenchmarks package installed: ```shell pip install git+https://github.com/felipemaiapolo/tinyBenchmarks ``` Then, use the code snippet below for the evaluation: ```python import numpy as np import tinyBenchmarks as tb ### Score vector y = # your original score vector ### Parameters benchmark = 'mmlu' ### Evaluation tb.evaluate(y, benchmark) ``` This process will help you estimate the performance of your LLM against the tinyMMLU dataset, providing a streamlined approach to benchmarking. Please be aware that evaluating on multiple GPUs can change the order of outputs in the lm evaluation harness. Ordering your score vector following the original order in tinyMMLU will be necessary to use the tinyBenchmarks library. For more detailed instructions on evaluating new models and computing scores, please refer to the comprehensive guides available at [lm evaluation harness](https://github.com/EleutherAI/lm-evaluation-harness/) and [tinyBenchmarks GitHub](https://github.com/felipemaiapolo/tinyBenchmarks). Happy benchmarking! ## More tinyBenchmarks **Open LLM leaderboard**: [tinyArc-Challenge](https://huggingface.co/datasets/tinyBenchmarks/tinyAI2_arc), [tinyWinogrande](https://huggingface.co/datasets/tinyBenchmarks/tinyWinogrande), [tinyHellaswag](https://huggingface.co/datasets/tinyBenchmarks/tinyHellaswag), [tinyTruthfulQA](https://huggingface.co/datasets/tinyBenchmarks/tinyTruthfulQA), [tinyGSM8k](https://huggingface.co/datasets/tinyBenchmarks/tinyGSM8k) **AlpacaEval**: [tinyAlpacaEval](https://huggingface.co/datasets/tinyBenchmarks/tinyAlpacaEval) **HELM-lite**: _work-in-progress_ ## Citation @article{polo2024tinybenchmarks, title={tinyBenchmarks: evaluating LLMs with fewer examples}, author={Felipe Maia Polo and Lucas Weber and Leshem Choshen and Yuekai Sun and Gongjun Xu and Mikhail Yurochkin}, year={2024}, eprint={2402.14992}, archivePrefix={arXiv}, primaryClass={cs.CL} } @article{hendryckstest2021, title={Measuring Massive Multitask Language Understanding}, author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt}, journal={Proceedings of the International Conference on Learning Representations (ICLR)}, year={2021} }
masakhane/masakhanews
--- annotations_creators: - expert-generated language: - am - en - fr - ha - ig - ln - lg - om - pcm - rn - sn - so - sw - ti - xh - yo language_creators: - expert-generated license: - afl-3.0 multilinguality: - multilingual pretty_name: masakhanews size_categories: - 1K<n<10K source_datasets: - original tags: - news-topic - masakhanews - masakhane task_categories: - text-classification task_ids: - topic-classification --- # Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [homepage](https://github.com/masakhane-io/masakhane-news) - **Repository:** [github](https://github.com/masakhane-io/masakhane-news) - **Paper:** [paper]() - **Point of Contact:** [Masakhane](https://www.masakhane.io/) or didelani@lsv.uni-saarland.de ### Dataset Summary MasakhaNEWS is the largest publicly available dataset for news topic classification in 16 languages widely spoken in Africa. The train/validation/test sets are available for all the 16 languages. ### Supported Tasks and Leaderboards [More Information Needed] - `news topic classification`: categorize news articles into new topics e.g business, sport sor politics. ### Languages There are 16 languages available : - Amharic (amh) - English (eng) - French (fra) - Hausa (hau) - Igbo (ibo) - Lingala (lin) - Luganda (lug) - Oromo (orm) - Nigerian Pidgin (pcm) - Rundi (run) - chShona (sna) - Somali (som) - Kiswahili (swą) - Tigrinya (tir) - isiXhosa (xho) - Yorùbá (yor) ## Dataset Structure ### Data Instances The examples look like this for Yorùbá: ``` from datasets import load_dataset data = load_dataset('masakhane/masakhanews', 'yor') # Please, specify the language code # A data point example is below: { 'label': 0, 'headline': "'The barriers to entry have gone - go for it now'", 'text': "j Lalvani, CEO of Vitabiotics and former Dragons' Den star, shares his business advice for our CEO Secrets series.\nProduced, filmed and edited by Dougal Shaw", 'headline_text': "'The barriers to entry have gone - go for it now' j Lalvani, CEO of Vitabiotics and former Dragons' Den star, shares his business advice for our CEO Secrets series.\nProduced, filmed and edited by Dougal Shaw", 'url': '/news/business-61880859' } ``` ### Data Fields - `label`: news topic id - `headline`: news title/headline - `text`: news body - `headline_text`: concatenation of headline and news body - `url`: website address The news topics correspond to this list: ``` "business", "entertainment", "health", "politics", "religion", "sports", "technology" ``` ### Data Splits For all languages, there are three splits. The original splits were named `train`, `dev` and `test` and they correspond to the `train`, `validation` and `test` splits. The splits have the following sizes : | Language | train | validation | test | |-----------------|------:|-----------:|-----:| | Amharic | 1311 | 188 | 376 | | English | 3309 | 472 | 948 | | French | 1476 | 211 | 422 | | Hausa | 2219 | 317 | 637 | | Igbo | 1356 | 194 | 390 | | Lingala | 608 | 87 | 175 | | Luganda | 771 | 110 | 223 | | Oromo | 1015 | 145 | 292 | | Nigerian-Pidgin | 1060 | 152 | 305 | | Rundi | 1117 | 159 | 322 | | chiShona | 1288 | 185 | 369 | | Somali | 1021 | 148 | 294 | | Kiswahili | 1658 | 237 | 476 | | Tigrinya | 947 | 137 | 272 | | isiXhosa | 1032 | 147 | 297 | | Yoruba | 1433 | 206 | 411 | ## Dataset Creation ### Curation Rationale The dataset was introduced to introduce new resources to 20 languages that were under-served for natural language processing. [More Information Needed] ### Source Data The source of the data is from the news domain, details can be found here **** #### Initial Data Collection and Normalization The articles were word-tokenized, information on the exact pre-processing pipeline is unavailable. #### Who are the source language producers? The source language was produced by journalists and writers employed by the news agency and newspaper mentioned above. ### Annotations #### Annotation process Details can be found here ** #### Who are the annotators? Annotators were recruited from [Masakhane](https://www.masakhane.io/) ### Personal and Sensitive Information The data is sourced from newspaper source and only contains mentions of public figures or individuals ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations Users should keep in mind that the dataset only contains news text, which might limit the applicability of the developed systems to other domains. ## Additional Information ### Dataset Curators ### Licensing Information The licensing status of the data is CC 4.0 Non-Commercial ### Citation Information Provide the [BibTex](http://www.bibtex.org/)-formatted reference for the dataset. For example: ``` @article{Adelani2023MasakhaNEWS, title={MasakhaNEWS: News Topic Classification for African languages}, author={David Ifeoluwa Adelani and Marek Masiak and Israel Abebe Azime and Jesujoba Oluwadara Alabi and Atnafu Lambebo Tonja and Christine Mwase and Odunayo Ogundepo and Bonaventure F. P. Dossou and Akintunde Oladipo and Doreen Nixdorf and Chris Chinenye Emezue and Sana Sabah al-azzawi and Blessing K. Sibanda and Davis David and Lolwethu Ndolela and Jonathan Mukiibi and Tunde Oluwaseyi Ajayi and Tatiana Moteu Ngoli and Brian Odhiambo and Abraham Toluwase Owodunni and Nnaemeka C. Obiefuna and Shamsuddeen Hassan Muhammad and Saheed Salahudeen Abdullahi and Mesay Gemeda Yigezu and Tajuddeen Gwadabe and Idris Abdulmumin and Mahlet Taye Bame and Oluwabusayo Olufunke Awoyomi and Iyanuoluwa Shode and Tolulope Anu Adelani and Habiba Abdulganiy Kailani and Abdul-Hakeem Omotayo and Adetola Adeeko and Afolabi Abeeb and Anuoluwapo Aremu and Olanrewaju Samuel and Clemencia Siro and Wangari Kimotho and Onyekachi Raphael Ogbu and Chinedu E. Mbonu and Chiamaka I. Chukwuneke and Samuel Fanijo and Jessica Ojo and Oyinkansola F. Awosan and Tadesse Kebede Guge and Sakayo Toadoum Sari and Pamela Nyatsine and Freedmore Sidume and Oreen Yousuf and Mardiyyah Oduwole and Ussen Kimanuka and Kanda Patrick Tshinu and Thina Diko and Siyanda Nxakama and Abdulmejid Tuni Johar and Sinodos Gebre and Muhidin Mohamed and Shafie Abdi Mohamed and Fuad Mire Hassan and Moges Ahmed Mehamed and Evrard Ngabire and and Pontus Stenetorp}, journal={ArXiv}, year={2023}, volume={} } ``` ### Contributions Thanks to [@dadelani](https://github.com/dadelani) for adding this dataset.
paws
--- annotations_creators: - expert-generated - machine-generated language_creators: - machine-generated language: - en license: - other multilinguality: - monolingual size_categories: - 100K<n<1M - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - semantic-similarity-classification - semantic-similarity-scoring - text-scoring - multi-input-text-classification paperswithcode_id: paws pretty_name: 'PAWS: Paraphrase Adversaries from Word Scrambling' config_names: - labeled_final - labeled_swap - unlabeled_final tags: - paraphrase-identification dataset_info: - config_name: labeled_final features: - name: id dtype: int32 - name: sentence1 dtype: string - name: sentence2 dtype: string - name: label dtype: class_label: names: '0': '0' '1': '1' splits: - name: train num_bytes: 12239938 num_examples: 49401 - name: test num_bytes: 1987794 num_examples: 8000 - name: validation num_bytes: 1975862 num_examples: 8000 download_size: 10899391 dataset_size: 16203594 - config_name: labeled_swap features: - name: id dtype: int32 - name: sentence1 dtype: string - name: sentence2 dtype: string - name: label dtype: class_label: names: '0': '0' '1': '1' splits: - name: train num_bytes: 7963619 num_examples: 30397 download_size: 5741756 dataset_size: 7963619 - config_name: unlabeled_final features: - name: id dtype: int32 - name: sentence1 dtype: string - name: sentence2 dtype: string - name: label dtype: class_label: names: '0': '0' '1': '1' splits: - name: train num_bytes: 157806476 num_examples: 645652 - name: validation num_bytes: 2442165 num_examples: 10000 download_size: 112644285 dataset_size: 160248641 configs: - config_name: labeled_final data_files: - split: train path: labeled_final/train-* - split: test path: labeled_final/test-* - split: validation path: labeled_final/validation-* - config_name: labeled_swap data_files: - split: train path: labeled_swap/train-* - config_name: unlabeled_final data_files: - split: train path: unlabeled_final/train-* - split: validation path: unlabeled_final/validation-* --- # Dataset Card for PAWS: Paraphrase Adversaries from Word Scrambling ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [PAWS](https://github.com/google-research-datasets/paws) - **Repository:** [PAWS](https://github.com/google-research-datasets/paws) - **Paper:** [PAWS: Paraphrase Adversaries from Word Scrambling](https://arxiv.org/abs/1904.01130) - **Point of Contact:** [Yuan Zhang](zhangyua@google.com) ### Dataset Summary PAWS: Paraphrase Adversaries from Word Scrambling This dataset contains 108,463 human-labeled and 656k noisily labeled pairs that feature the importance of modeling structure, context, and word order information for the problem of paraphrase identification. The dataset has two subsets, one based on Wikipedia and the other one based on the Quora Question Pairs (QQP) dataset. For further details, see the accompanying paper: PAWS: Paraphrase Adversaries from Word Scrambling (https://arxiv.org/abs/1904.01130) PAWS-QQP is not available due to license of QQP. It must be reconstructed by downloading the original data and then running our scripts to produce the data and attach the labels. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The text in the dataset is in English. ## Dataset Structure ### Data Instances Below are two examples from the dataset: | | Sentence 1 | Sentence 2 | Label | | :-- | :---------------------------- | :---------------------------- | :---- | | (1) | Although interchangeable, the body pieces on the 2 cars are not similar. | Although similar, the body parts are not interchangeable on the 2 cars. | 0 | | (2) | Katz was born in Sweden in 1947 and moved to New York City at the age of 1. | Katz was born in 1947 in Sweden and moved to New York at the age of one. | 1 | The first pair has different semantic meaning while the second pair is a paraphrase. State-of-the-art models trained on existing datasets have dismal performance on PAWS (<40% accuracy); however, including PAWS training data for these models improves their accuracy to 85% while maintaining performance on existing datasets such as the [Quora Question Pairs](https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs). ### Data Fields This corpus contains pairs generated from Wikipedia pages, and can be downloaded here: * **PAWS-Wiki Labeled (Final)**: containing pairs that are generated from both word swapping and back translation methods. All pairs have human judgements on both paraphrasing and fluency and they are split into Train/Dev/Test sections. * **PAWS-Wiki Labeled (Swap-only)**: containing pairs that have no back translation counterparts and therefore they are not included in the first set. Nevertheless, they are high-quality pairs with human judgements on both paraphrasing and fluency, and they can be included as an auxiliary training set. * **PAWS-Wiki Unlabeled (Final)**: Pairs in this set have noisy labels without human judgments and can also be used as an auxiliary training set. They are generated from both word swapping and back translation methods. All files are in the tsv format with four columns: Column Name | Data :------------ | :-------------------------- id | A unique id for each pair sentence1 | The first sentence sentence2 | The second sentence (noisy_)label | (Noisy) label for each pair Each label has two possible values: `0` indicates the pair has different meaning, while `1` indicates the pair is a paraphrase. ### Data Splits The number of examples and the proportion of paraphrase (Yes%) pairs are shown below: Data | Train | Dev | Test | Yes% :------------------ | ------: | -----: | ----: | ----: Labeled (Final) | 49,401 | 8,000 | 8,000 | 44.2% Labeled (Swap-only) | 30,397 | -- | -- | 9.6% Unlabeled (Final) | 645,652 | 10,000 | -- | 50.0% ## Dataset Creation ### Curation Rationale Existing paraphrase identification datasets lack sentence pairs that have high lexical overlap without being paraphrases. Models trained on such data fail to distinguish pairs like *flights from New York to Florida* and *flights from Florida to New York*. ### Source Data #### Initial Data Collection and Normalization Their automatic generation method is based on two ideas. The first swaps words to generate a sentence pair with the same BOW, controlled by a language model. The second uses back translation to generate paraphrases with high BOW overlap but different word order. These two strategies generate high-quality, diverse PAWS pairs, balanced evenly between paraphrases and non-paraphrases. #### Who are the source language producers? Mentioned above. ### Annotations #### Annotation process Sentence pairs are presented to five annotators, each of which gives a binary judgment as to whether they are paraphrases or not. They chose binary judgments to make dataset have the same label schema as the QQP corpus. Overall, human agreement is high on both Quora (92.0%) and Wikipedia (94.7%) and each label only takes about 24 seconds. As such, answers are usually straight-forward to human raters. #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators List the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here. ### Licensing Information The dataset may be freely used for any purpose, although acknowledgement of Google LLC ("Google") as the data source would be appreciated. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset. ### Citation Information ``` @InProceedings{paws2019naacl, title = {{PAWS: Paraphrase Adversaries from Word Scrambling}}, author = {Zhang, Yuan and Baldridge, Jason and He, Luheng}, booktitle = {Proc. of NAACL}, year = {2019} } ``` ### Contributions Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset.
multi_woz_v22
--- annotations_creators: - machine-generated language_creators: - crowdsourced - machine-generated language: - en license: - apache-2.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-generation - fill-mask - token-classification - text-classification task_ids: - dialogue-modeling - multi-class-classification - parsing paperswithcode_id: multiwoz pretty_name: Multi-domain Wizard-of-Oz dataset_info: - config_name: v2.2 features: - name: dialogue_id dtype: string - name: services sequence: string - name: turns sequence: - name: turn_id dtype: string - name: speaker dtype: class_label: names: '0': USER '1': SYSTEM - name: utterance dtype: string - name: frames sequence: - name: service dtype: string - name: state struct: - name: active_intent dtype: string - name: requested_slots sequence: string - name: slots_values sequence: - name: slots_values_name dtype: string - name: slots_values_list sequence: string - name: slots sequence: - name: slot dtype: string - name: value dtype: string - name: start dtype: int32 - name: exclusive_end dtype: int32 - name: copy_from dtype: string - name: copy_from_value sequence: string - name: dialogue_acts struct: - name: dialog_act sequence: - name: act_type dtype: string - name: act_slots sequence: - name: slot_name dtype: string - name: slot_value dtype: string - name: span_info sequence: - name: act_type dtype: string - name: act_slot_name dtype: string - name: act_slot_value dtype: string - name: span_start dtype: int32 - name: span_end dtype: int32 splits: - name: train num_bytes: 68222649 num_examples: 8437 - name: validation num_bytes: 8990945 num_examples: 1000 - name: test num_bytes: 9027095 num_examples: 1000 download_size: 276592909 dataset_size: 86240689 - config_name: v2.2_active_only features: - name: dialogue_id dtype: string - name: services sequence: string - name: turns sequence: - name: turn_id dtype: string - name: speaker dtype: class_label: names: '0': USER '1': SYSTEM - name: utterance dtype: string - name: frames sequence: - name: service dtype: string - name: state struct: - name: active_intent dtype: string - name: requested_slots sequence: string - name: slots_values sequence: - name: slots_values_name dtype: string - name: slots_values_list sequence: string - name: slots sequence: - name: slot dtype: string - name: value dtype: string - name: start dtype: int32 - name: exclusive_end dtype: int32 - name: copy_from dtype: string - name: copy_from_value sequence: string - name: dialogue_acts struct: - name: dialog_act sequence: - name: act_type dtype: string - name: act_slots sequence: - name: slot_name dtype: string - name: slot_value dtype: string - name: span_info sequence: - name: act_type dtype: string - name: act_slot_name dtype: string - name: act_slot_value dtype: string - name: span_start dtype: int32 - name: span_end dtype: int32 splits: - name: train num_bytes: 40937577 num_examples: 8437 - name: validation num_bytes: 5377939 num_examples: 1000 - name: test num_bytes: 5410819 num_examples: 1000 download_size: 276592909 dataset_size: 51726335 --- # Dataset Card for MultiWOZ ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [MultiWOZ 2.2 github repository](https://github.com/budzianowski/multiwoz/tree/master/data/MultiWOZ_2.2) - **Paper:** [MultiWOZ v2](https://arxiv.org/abs/1810.00278), and [MultiWOZ v2.2](https://www.aclweb.org/anthology/2020.nlp4convai-1.13.pdf) - **Point of Contact:** [Paweł Budzianowski](pfb30@cam.ac.uk) ### Dataset Summary Multi-Domain Wizard-of-Oz dataset (MultiWOZ), a fully-labeled collection of human-human written conversations spanning over multiple domains and topics. MultiWOZ 2.1 (Eric et al., 2019) identified and fixed many erroneous annotations and user utterances in the original version, resulting in an improved version of the dataset. MultiWOZ 2.2 is a yet another improved version of this dataset, which identifies and fixes dialogue state annotation errors across 17.3% of the utterances on top of MultiWOZ 2.1 and redefines the ontology by disallowing vocabularies of slots with a large number of possible values (e.g., restaurant name, time of booking) and introducing standardized slot span annotations for these slots. ### Supported Tasks and Leaderboards This dataset supports a range of task. - **Generative dialogue modeling** or `dialogue-modeling`: the text of the dialogues can be used to train a sequence model on the utterances. Performance on this task is typically evaluated with delexicalized-[BLEU](https://huggingface.co/metrics/bleu), inform rate and request success. - **Intent state tracking**, a `multi-class-classification` task: predict the belief state of the user side of the conversation, performance is measured by [F1](https://huggingface.co/metrics/f1). - **Dialog act prediction**, a `parsing` task: parse an utterance into the corresponding dialog acts for the system to use. [F1](https://huggingface.co/metrics/f1) is typically reported. ### Languages The text in the dataset is in English (`en`). ## Dataset Structure ### Data Instances A data instance is a full multi-turn dialogue between a `USER` and a `SYSTEM`. Each turn has a single utterance, e.g.: ``` ['What fun places can I visit in the East?', 'We have five spots which include boating, museums and entertainment. Any preferences that you have?'] ``` The utterances of the `USER` are also annotated with frames denoting their intent and believe state: ``` [{'service': ['attraction'], 'slots': [{'copy_from': [], 'copy_from_value': [], 'exclusive_end': [], 'slot': [], 'start': [], 'value': []}], 'state': [{'active_intent': 'find_attraction', 'requested_slots': [], 'slots_values': {'slots_values_list': [['east']], 'slots_values_name': ['attraction-area']}}]}, {'service': [], 'slots': [], 'state': []}] ``` Finally, each of the utterances is annotated with dialog acts which provide a structured representation of what the `USER` or `SYSTEM` is inquiring or giving information about. ``` [{'dialog_act': {'act_slots': [{'slot_name': ['east'], 'slot_value': ['area']}], 'act_type': ['Attraction-Inform']}, 'span_info': {'act_slot_name': ['area'], 'act_slot_value': ['east'], 'act_type': ['Attraction-Inform'], 'span_end': [39], 'span_start': [35]}}, {'dialog_act': {'act_slots': [{'slot_name': ['none'], 'slot_value': ['none']}, {'slot_name': ['boating', 'museums', 'entertainment', 'five'], 'slot_value': ['type', 'type', 'type', 'choice']}], 'act_type': ['Attraction-Select', 'Attraction-Inform']}, 'span_info': {'act_slot_name': ['type', 'type', 'type', 'choice'], 'act_slot_value': ['boating', 'museums', 'entertainment', 'five'], 'act_type': ['Attraction-Inform', 'Attraction-Inform', 'Attraction-Inform', 'Attraction-Inform'], 'span_end': [40, 49, 67, 12], 'span_start': [33, 42, 54, 8]}}] ``` ### Data Fields Each dialogue instance has the following fields: - `dialogue_id`: a unique ID identifying the dialog. The MUL and PMUL names refer to strictly multi domain dialogues (at least 2 main domains are involved) while the SNG, SSNG and WOZ names refer to single domain dialogues with potentially sub-domains like booking. - `services`: a list of services mentioned in the dialog, such as `train` or `hospitals`. - `turns`: the sequence of utterances with their annotations, including: - `turn_id`: a turn identifier, unique per dialog. - `speaker`: either the `USER` or `SYSTEM`. - `utterance`: the text of the utterance. - `dialogue_acts`: The structured parse of the utterance into dialog acts in the system's grammar - `act_type`: Such as e.g. `Attraction-Inform` to seek or provide information about an `attraction` - `act_slots`: provide more details about the action - `span_info`: maps these `act_slots` to the `utterance` text. - `frames`: only for `USER` utterances, track the user's belief state, i.e. a structured representation of what they are trying to achieve in the fialog. This decomposes into: - `service`: the service they are interested in - `state`: their belief state including their `active_intent` and further information expressed in `requested_slots` - `slots`: a mapping of the `requested_slots` to where they are mentioned in the text. It takes one of two forms, detailed next: The first type are span annotations that identify the location where slot values have been mentioned in the utterances for non-categorical slots. These span annotations are represented as follows: ``` { "slots": [ { "slot": String of slot name. "start": Int denoting the index of the starting character in the utterance corresponding to the slot value. "exclusive_end": Int denoting the index of the character just after the last character corresponding to the slot value in the utterance. In python, utterance[start:exclusive_end] gives the slot value. "value": String of value. It equals to utterance[start:exclusive_end], where utterance is the current utterance in string. } ] } ``` There are also some non-categorical slots whose values are carried over from another slot in the dialogue state. Their values don"t explicitly appear in the utterances. For example, a user utterance can be "I also need a taxi from the restaurant to the hotel.", in which the state values of "taxi-departure" and "taxi-destination" are respectively carried over from that of "restaurant-name" and "hotel-name". For these slots, instead of annotating them as spans, a "copy from" annotation identifies the slot it copies the value from. This annotation is formatted as follows, ``` { "slots": [ { "slot": Slot name string. "copy_from": The slot to copy from. "value": A list of slot values being . It corresponds to the state values of the "copy_from" slot. } ] } ``` ### Data Splits The dataset is split into a `train`, `validation`, and `test` split with the following sizes: | | train | validation | test | |---------------------|------:|-----------:|-----:| | Number of dialogues | 8438 | 1000 | 1000 | | Number of turns | 42190 | 5000 | 5000 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The initial dataset (Versions 1.0 and 2.0) was created by a team of researchers from the [Cambridge Dialogue Systems Group](https://mi.eng.cam.ac.uk/research/dialogue/corpora/). Version 2.1 was developed on top of v2.0 by a team from Amazon, and v2.2 was developed by a team of Google researchers. ### Licensing Information The dataset is released under the Apache License 2.0. ### Citation Information You can cite the following for the various versions of MultiWOZ: Version 1.0 ``` @inproceedings{ramadan2018large, title={Large-Scale Multi-Domain Belief Tracking with Knowledge Sharing}, author={Ramadan, Osman and Budzianowski, Pawe{\l} and Gasic, Milica}, booktitle={Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics}, volume={2}, pages={432--437}, year={2018} } ``` Version 2.0 ``` @inproceedings{budzianowski2018large, Author = {Budzianowski, Pawe{\l} and Wen, Tsung-Hsien and Tseng, Bo-Hsiang and Casanueva, I{\~n}igo and Ultes Stefan and Ramadan Osman and Ga{\v{s}}i\'c, Milica}, title={MultiWOZ - A Large-Scale Multi-Domain Wizard-of-Oz Dataset for Task-Oriented Dialogue Modelling}, booktitle={Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP)}, year={2018} } ``` Version 2.1 ``` @article{eric2019multiwoz, title={MultiWOZ 2.1: Multi-Domain Dialogue State Corrections and State Tracking Baselines}, author={Eric, Mihail and Goel, Rahul and Paul, Shachi and Sethi, Abhishek and Agarwal, Sanchit and Gao, Shuyag and Hakkani-Tur, Dilek}, journal={arXiv preprint arXiv:1907.01669}, year={2019} } ``` Version 2.2 ``` @inproceedings{zang2020multiwoz, title={MultiWOZ 2.2: A Dialogue Dataset with Additional Annotation Corrections and State Tracking Baselines}, author={Zang, Xiaoxue and Rastogi, Abhinav and Sunkara, Srinivas and Gupta, Raghav and Zhang, Jianguo and Chen, Jindong}, booktitle={Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI, ACL 2020}, pages={109--117}, year={2020} } ``` ### Contributions Thanks to [@yjernite](https://github.com/yjernite) for adding this dataset.
bigbio/med_qa
--- language: - en - zh bigbio_language: - English - Chinese (Simplified) - Chinese (Traditional, Taiwan) license: unknown multilinguality: multilingual bigbio_license_shortname: UNKNOWN pretty_name: MedQA homepage: https://github.com/jind11/MedQA bigbio_pubmed: False bigbio_public: True bigbio_tasks: - QUESTION_ANSWERING --- # Dataset Card for MedQA ## Dataset Description - **Homepage:** https://github.com/jind11/MedQA - **Pubmed:** False - **Public:** True - **Tasks:** QA In this work, we present the first free-form multiple-choice OpenQA dataset for solving medical problems, MedQA, collected from the professional medical board exams. It covers three languages: English, simplified Chinese, and traditional Chinese, and contains 12,723, 34,251, and 14,123 questions for the three languages, respectively. Together with the question data, we also collect and release a large-scale corpus from medical textbooks from which the reading comprehension models can obtain necessary knowledge for answering the questions. ## Citation Information ``` @article{jin2021disease, title={What disease does this patient have? a large-scale open domain question answering dataset from medical exams}, author={Jin, Di and Pan, Eileen and Oufattole, Nassim and Weng, Wei-Hung and Fang, Hanyi and Szolovits, Peter}, journal={Applied Sciences}, volume={11}, number={14}, pages={6421}, year={2021}, publisher={MDPI} } ```
imagenet_sketch
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - extended|imagenet-1k task_categories: - image-classification task_ids: - multi-class-image-classification paperswithcode_id: imagenet-sketch pretty_name: ImageNet-Sketch dataset_info: features: - name: image dtype: image - name: label dtype: class_label: names: '0': tench, Tinca tinca '1': goldfish, Carassius auratus '2': great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias '3': tiger shark, Galeocerdo cuvieri '4': hammerhead, hammerhead shark '5': electric ray, crampfish, numbfish, torpedo '6': stingray '7': cock '8': hen '9': ostrich, Struthio camelus '10': brambling, Fringilla montifringilla '11': goldfinch, Carduelis carduelis '12': house finch, linnet, Carpodacus mexicanus '13': junco, snowbird '14': indigo bunting, indigo finch, indigo bird, Passerina cyanea '15': robin, American robin, Turdus migratorius '16': bulbul '17': jay '18': magpie '19': chickadee '20': water ouzel, dipper '21': kite '22': bald eagle, American eagle, Haliaeetus leucocephalus '23': vulture '24': great grey owl, great gray owl, Strix nebulosa '25': European fire salamander, Salamandra salamandra '26': common newt, Triturus vulgaris '27': eft '28': spotted salamander, Ambystoma maculatum '29': axolotl, mud puppy, Ambystoma mexicanum '30': bullfrog, Rana catesbeiana '31': tree frog, tree-frog '32': tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui '33': loggerhead, loggerhead turtle, Caretta caretta '34': leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea '35': mud turtle '36': terrapin '37': box turtle, box tortoise '38': banded gecko '39': common iguana, iguana, Iguana iguana '40': American chameleon, anole, Anolis carolinensis '41': whiptail, whiptail lizard '42': agama '43': frilled lizard, Chlamydosaurus kingi '44': alligator lizard '45': Gila monster, Heloderma suspectum '46': green lizard, Lacerta viridis '47': African chameleon, Chamaeleo chamaeleon '48': Komodo dragon, Komodo lizard, dragon lizard, giant lizard, Varanus komodoensis '49': African crocodile, Nile crocodile, Crocodylus niloticus '50': American alligator, Alligator mississipiensis '51': triceratops '52': thunder snake, worm snake, Carphophis amoenus '53': ringneck snake, ring-necked snake, ring snake '54': hognose snake, puff adder, sand viper '55': green snake, grass snake '56': king snake, kingsnake '57': garter snake, grass snake '58': water snake '59': vine snake '60': night snake, Hypsiglena torquata '61': boa constrictor, Constrictor constrictor '62': rock python, rock snake, Python sebae '63': Indian cobra, Naja naja '64': green mamba '65': sea snake '66': horned viper, cerastes, sand viper, horned asp, Cerastes cornutus '67': diamondback, diamondback rattlesnake, Crotalus adamanteus '68': sidewinder, horned rattlesnake, Crotalus cerastes '69': trilobite '70': harvestman, daddy longlegs, Phalangium opilio '71': scorpion '72': black and gold garden spider, Argiope aurantia '73': barn spider, Araneus cavaticus '74': garden spider, Aranea diademata '75': black widow, Latrodectus mactans '76': tarantula '77': wolf spider, hunting spider '78': tick '79': centipede '80': black grouse '81': ptarmigan '82': ruffed grouse, partridge, Bonasa umbellus '83': prairie chicken, prairie grouse, prairie fowl '84': peacock '85': quail '86': partridge '87': African grey, African gray, Psittacus erithacus '88': macaw '89': sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita '90': lorikeet '91': coucal '92': bee eater '93': hornbill '94': hummingbird '95': jacamar '96': toucan '97': drake '98': red-breasted merganser, Mergus serrator '99': goose '100': black swan, Cygnus atratus '101': tusker '102': echidna, spiny anteater, anteater '103': platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus anatinus '104': wallaby, brush kangaroo '105': koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus '106': wombat '107': jellyfish '108': sea anemone, anemone '109': brain coral '110': flatworm, platyhelminth '111': nematode, nematode worm, roundworm '112': conch '113': snail '114': slug '115': sea slug, nudibranch '116': chiton, coat-of-mail shell, sea cradle, polyplacophore '117': chambered nautilus, pearly nautilus, nautilus '118': Dungeness crab, Cancer magister '119': rock crab, Cancer irroratus '120': fiddler crab '121': king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes camtschatica '122': American lobster, Northern lobster, Maine lobster, Homarus americanus '123': spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish '124': crayfish, crawfish, crawdad, crawdaddy '125': hermit crab '126': isopod '127': white stork, Ciconia ciconia '128': black stork, Ciconia nigra '129': spoonbill '130': flamingo '131': little blue heron, Egretta caerulea '132': American egret, great white heron, Egretta albus '133': bittern '134': crane '135': limpkin, Aramus pictus '136': European gallinule, Porphyrio porphyrio '137': American coot, marsh hen, mud hen, water hen, Fulica americana '138': bustard '139': ruddy turnstone, Arenaria interpres '140': red-backed sandpiper, dunlin, Erolia alpina '141': redshank, Tringa totanus '142': dowitcher '143': oystercatcher, oyster catcher '144': pelican '145': king penguin, Aptenodytes patagonica '146': albatross, mollymawk '147': grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius robustus '148': killer whale, killer, orca, grampus, sea wolf, Orcinus orca '149': dugong, Dugong dugon '150': sea lion '151': Chihuahua '152': Japanese spaniel '153': Maltese dog, Maltese terrier, Maltese '154': Pekinese, Pekingese, Peke '155': Shih-Tzu '156': Blenheim spaniel '157': papillon '158': toy terrier '159': Rhodesian ridgeback '160': Afghan hound, Afghan '161': basset, basset hound '162': beagle '163': bloodhound, sleuthhound '164': bluetick '165': black-and-tan coonhound '166': Walker hound, Walker foxhound '167': English foxhound '168': redbone '169': borzoi, Russian wolfhound '170': Irish wolfhound '171': Italian greyhound '172': whippet '173': Ibizan hound, Ibizan Podenco '174': Norwegian elkhound, elkhound '175': otterhound, otter hound '176': Saluki, gazelle hound '177': Scottish deerhound, deerhound '178': Weimaraner '179': Staffordshire bullterrier, Staffordshire bull terrier '180': American Staffordshire terrier, Staffordshire terrier, American pit bull terrier, pit bull terrier '181': Bedlington terrier '182': Border terrier '183': Kerry blue terrier '184': Irish terrier '185': Norfolk terrier '186': Norwich terrier '187': Yorkshire terrier '188': wire-haired fox terrier '189': Lakeland terrier '190': Sealyham terrier, Sealyham '191': Airedale, Airedale terrier '192': cairn, cairn terrier '193': Australian terrier '194': Dandie Dinmont, Dandie Dinmont terrier '195': Boston bull, Boston terrier '196': miniature schnauzer '197': giant schnauzer '198': standard schnauzer '199': Scotch terrier, Scottish terrier, Scottie '200': Tibetan terrier, chrysanthemum dog '201': silky terrier, Sydney silky '202': soft-coated wheaten terrier '203': West Highland white terrier '204': Lhasa, Lhasa apso '205': flat-coated retriever '206': curly-coated retriever '207': golden retriever '208': Labrador retriever '209': Chesapeake Bay retriever '210': German short-haired pointer '211': vizsla, Hungarian pointer '212': English setter '213': Irish setter, red setter '214': Gordon setter '215': Brittany spaniel '216': clumber, clumber spaniel '217': English springer, English springer spaniel '218': Welsh springer spaniel '219': cocker spaniel, English cocker spaniel, cocker '220': Sussex spaniel '221': Irish water spaniel '222': kuvasz '223': schipperke '224': groenendael '225': malinois '226': briard '227': kelpie '228': komondor '229': Old English sheepdog, bobtail '230': Shetland sheepdog, Shetland sheep dog, Shetland '231': collie '232': Border collie '233': Bouvier des Flandres, Bouviers des Flandres '234': Rottweiler '235': German shepherd, German shepherd dog, German police dog, alsatian '236': Doberman, Doberman pinscher '237': miniature pinscher '238': Greater Swiss Mountain dog '239': Bernese mountain dog '240': Appenzeller '241': EntleBucher '242': boxer '243': bull mastiff '244': Tibetan mastiff '245': French bulldog '246': Great Dane '247': Saint Bernard, St Bernard '248': Eskimo dog, husky '249': malamute, malemute, Alaskan malamute '250': Siberian husky '251': dalmatian, coach dog, carriage dog '252': affenpinscher, monkey pinscher, monkey dog '253': basenji '254': pug, pug-dog '255': Leonberg '256': Newfoundland, Newfoundland dog '257': Great Pyrenees '258': Samoyed, Samoyede '259': Pomeranian '260': chow, chow chow '261': keeshond '262': Brabancon griffon '263': Pembroke, Pembroke Welsh corgi '264': Cardigan, Cardigan Welsh corgi '265': toy poodle '266': miniature poodle '267': standard poodle '268': Mexican hairless '269': timber wolf, grey wolf, gray wolf, Canis lupus '270': white wolf, Arctic wolf, Canis lupus tundrarum '271': red wolf, maned wolf, Canis rufus, Canis niger '272': coyote, prairie wolf, brush wolf, Canis latrans '273': dingo, warrigal, warragal, Canis dingo '274': dhole, Cuon alpinus '275': African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus '276': hyena, hyaena '277': red fox, Vulpes vulpes '278': kit fox, Vulpes macrotis '279': Arctic fox, white fox, Alopex lagopus '280': grey fox, gray fox, Urocyon cinereoargenteus '281': tabby, tabby cat '282': tiger cat '283': Persian cat '284': Siamese cat, Siamese '285': Egyptian cat '286': cougar, puma, catamount, mountain lion, painter, panther, Felis concolor '287': lynx, catamount '288': leopard, Panthera pardus '289': snow leopard, ounce, Panthera uncia '290': jaguar, panther, Panthera onca, Felis onca '291': lion, king of beasts, Panthera leo '292': tiger, Panthera tigris '293': cheetah, chetah, Acinonyx jubatus '294': brown bear, bruin, Ursus arctos '295': American black bear, black bear, Ursus americanus, Euarctos americanus '296': ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus '297': sloth bear, Melursus ursinus, Ursus ursinus '298': mongoose '299': meerkat, mierkat '300': tiger beetle '301': ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle '302': ground beetle, carabid beetle '303': long-horned beetle, longicorn, longicorn beetle '304': leaf beetle, chrysomelid '305': dung beetle '306': rhinoceros beetle '307': weevil '308': fly '309': bee '310': ant, emmet, pismire '311': grasshopper, hopper '312': cricket '313': walking stick, walkingstick, stick insect '314': cockroach, roach '315': mantis, mantid '316': cicada, cicala '317': leafhopper '318': lacewing, lacewing fly '319': dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk '320': damselfly '321': admiral '322': ringlet, ringlet butterfly '323': monarch, monarch butterfly, milkweed butterfly, Danaus plexippus '324': cabbage butterfly '325': sulphur butterfly, sulfur butterfly '326': lycaenid, lycaenid butterfly '327': starfish, sea star '328': sea urchin '329': sea cucumber, holothurian '330': wood rabbit, cottontail, cottontail rabbit '331': hare '332': Angora, Angora rabbit '333': hamster '334': porcupine, hedgehog '335': fox squirrel, eastern fox squirrel, Sciurus niger '336': marmot '337': beaver '338': guinea pig, Cavia cobaya '339': sorrel '340': zebra '341': hog, pig, grunter, squealer, Sus scrofa '342': wild boar, boar, Sus scrofa '343': warthog '344': hippopotamus, hippo, river horse, Hippopotamus amphibius '345': ox '346': water buffalo, water ox, Asiatic buffalo, Bubalus bubalis '347': bison '348': ram, tup '349': bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis '350': ibex, Capra ibex '351': hartebeest '352': impala, Aepyceros melampus '353': gazelle '354': Arabian camel, dromedary, Camelus dromedarius '355': llama '356': weasel '357': mink '358': polecat, fitch, foulmart, foumart, Mustela putorius '359': black-footed ferret, ferret, Mustela nigripes '360': otter '361': skunk, polecat, wood pussy '362': badger '363': armadillo '364': three-toed sloth, ai, Bradypus tridactylus '365': orangutan, orang, orangutang, Pongo pygmaeus '366': gorilla, Gorilla gorilla '367': chimpanzee, chimp, Pan troglodytes '368': gibbon, Hylobates lar '369': siamang, Hylobates syndactylus, Symphalangus syndactylus '370': guenon, guenon monkey '371': patas, hussar monkey, Erythrocebus patas '372': baboon '373': macaque '374': langur '375': colobus, colobus monkey '376': proboscis monkey, Nasalis larvatus '377': marmoset '378': capuchin, ringtail, Cebus capucinus '379': howler monkey, howler '380': titi, titi monkey '381': spider monkey, Ateles geoffroyi '382': squirrel monkey, Saimiri sciureus '383': Madagascar cat, ring-tailed lemur, Lemur catta '384': indri, indris, Indri indri, Indri brevicaudatus '385': Indian elephant, Elephas maximus '386': African elephant, Loxodonta africana '387': lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens '388': giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca '389': barracouta, snoek '390': eel '391': coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus kisutch '392': rock beauty, Holocanthus tricolor '393': anemone fish '394': sturgeon '395': gar, garfish, garpike, billfish, Lepisosteus osseus '396': lionfish '397': puffer, pufferfish, blowfish, globefish '398': abacus '399': abaya '400': academic gown, academic robe, judge's robe '401': accordion, piano accordion, squeeze box '402': acoustic guitar '403': aircraft carrier, carrier, flattop, attack aircraft carrier '404': airliner '405': airship, dirigible '406': altar '407': ambulance '408': amphibian, amphibious vehicle '409': analog clock '410': apiary, bee house '411': apron '412': ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin '413': assault rifle, assault gun '414': backpack, back pack, knapsack, packsack, rucksack, haversack '415': bakery, bakeshop, bakehouse '416': balance beam, beam '417': balloon '418': ballpoint, ballpoint pen, ballpen, Biro '419': Band Aid '420': banjo '421': bannister, banister, balustrade, balusters, handrail '422': barbell '423': barber chair '424': barbershop '425': barn '426': barometer '427': barrel, cask '428': barrow, garden cart, lawn cart, wheelbarrow '429': baseball '430': basketball '431': bassinet '432': bassoon '433': bathing cap, swimming cap '434': bath towel '435': bathtub, bathing tub, bath, tub '436': beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon '437': beacon, lighthouse, beacon light, pharos '438': beaker '439': bearskin, busby, shako '440': beer bottle '441': beer glass '442': bell cote, bell cot '443': bib '444': bicycle-built-for-two, tandem bicycle, tandem '445': bikini, two-piece '446': binder, ring-binder '447': binoculars, field glasses, opera glasses '448': birdhouse '449': boathouse '450': bobsled, bobsleigh, bob '451': bolo tie, bolo, bola tie, bola '452': bonnet, poke bonnet '453': bookcase '454': bookshop, bookstore, bookstall '455': bottlecap '456': bow '457': bow tie, bow-tie, bowtie '458': brass, memorial tablet, plaque '459': brassiere, bra, bandeau '460': breakwater, groin, groyne, mole, bulwark, seawall, jetty '461': breastplate, aegis, egis '462': broom '463': bucket, pail '464': buckle '465': bulletproof vest '466': bullet train, bullet '467': butcher shop, meat market '468': cab, hack, taxi, taxicab '469': caldron, cauldron '470': candle, taper, wax light '471': cannon '472': canoe '473': can opener, tin opener '474': cardigan '475': car mirror '476': carousel, carrousel, merry-go-round, roundabout, whirligig '477': carpenter's kit, tool kit '478': carton '479': car wheel '480': cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, ATM '481': cassette '482': cassette player '483': castle '484': catamaran '485': CD player '486': cello, violoncello '487': cellular telephone, cellular phone, cellphone, cell, mobile phone '488': chain '489': chainlink fence '490': chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour '491': chain saw, chainsaw '492': chest '493': chiffonier, commode '494': chime, bell, gong '495': china cabinet, china closet '496': Christmas stocking '497': church, church building '498': cinema, movie theater, movie theatre, movie house, picture palace '499': cleaver, meat cleaver, chopper '500': cliff dwelling '501': cloak '502': clog, geta, patten, sabot '503': cocktail shaker '504': coffee mug '505': coffeepot '506': coil, spiral, volute, whorl, helix '507': combination lock '508': computer keyboard, keypad '509': confectionery, confectionary, candy store '510': container ship, containership, container vessel '511': convertible '512': corkscrew, bottle screw '513': cornet, horn, trumpet, trump '514': cowboy boot '515': cowboy hat, ten-gallon hat '516': cradle '517': crane2 '518': crash helmet '519': crate '520': crib, cot '521': Crock Pot '522': croquet ball '523': crutch '524': cuirass '525': dam, dike, dyke '526': desk '527': desktop computer '528': dial telephone, dial phone '529': diaper, nappy, napkin '530': digital clock '531': digital watch '532': dining table, board '533': dishrag, dishcloth '534': dishwasher, dish washer, dishwashing machine '535': disk brake, disc brake '536': dock, dockage, docking facility '537': dogsled, dog sled, dog sleigh '538': dome '539': doormat, welcome mat '540': drilling platform, offshore rig '541': drum, membranophone, tympan '542': drumstick '543': dumbbell '544': Dutch oven '545': electric fan, blower '546': electric guitar '547': electric locomotive '548': entertainment center '549': envelope '550': espresso maker '551': face powder '552': feather boa, boa '553': file, file cabinet, filing cabinet '554': fireboat '555': fire engine, fire truck '556': fire screen, fireguard '557': flagpole, flagstaff '558': flute, transverse flute '559': folding chair '560': football helmet '561': forklift '562': fountain '563': fountain pen '564': four-poster '565': freight car '566': French horn, horn '567': frying pan, frypan, skillet '568': fur coat '569': garbage truck, dustcart '570': gasmask, respirator, gas helmet '571': gas pump, gasoline pump, petrol pump, island dispenser '572': goblet '573': go-kart '574': golf ball '575': golfcart, golf cart '576': gondola '577': gong, tam-tam '578': gown '579': grand piano, grand '580': greenhouse, nursery, glasshouse '581': grille, radiator grille '582': grocery store, grocery, food market, market '583': guillotine '584': hair slide '585': hair spray '586': half track '587': hammer '588': hamper '589': hand blower, blow dryer, blow drier, hair dryer, hair drier '590': hand-held computer, hand-held microcomputer '591': handkerchief, hankie, hanky, hankey '592': hard disc, hard disk, fixed disk '593': harmonica, mouth organ, harp, mouth harp '594': harp '595': harvester, reaper '596': hatchet '597': holster '598': home theater, home theatre '599': honeycomb '600': hook, claw '601': hoopskirt, crinoline '602': horizontal bar, high bar '603': horse cart, horse-cart '604': hourglass '605': iPod '606': iron, smoothing iron '607': jack-o'-lantern '608': jean, blue jean, denim '609': jeep, landrover '610': jersey, T-shirt, tee shirt '611': jigsaw puzzle '612': jinrikisha, ricksha, rickshaw '613': joystick '614': kimono '615': knee pad '616': knot '617': lab coat, laboratory coat '618': ladle '619': lampshade, lamp shade '620': laptop, laptop computer '621': lawn mower, mower '622': lens cap, lens cover '623': letter opener, paper knife, paperknife '624': library '625': lifeboat '626': lighter, light, igniter, ignitor '627': limousine, limo '628': liner, ocean liner '629': lipstick, lip rouge '630': Loafer '631': lotion '632': loudspeaker, speaker, speaker unit, loudspeaker system, speaker system '633': loupe, jeweler's loupe '634': lumbermill, sawmill '635': magnetic compass '636': mailbag, postbag '637': mailbox, letter box '638': maillot '639': maillot, tank suit '640': manhole cover '641': maraca '642': marimba, xylophone '643': mask '644': matchstick '645': maypole '646': maze, labyrinth '647': measuring cup '648': medicine chest, medicine cabinet '649': megalith, megalithic structure '650': microphone, mike '651': microwave, microwave oven '652': military uniform '653': milk can '654': minibus '655': miniskirt, mini '656': minivan '657': missile '658': mitten '659': mixing bowl '660': mobile home, manufactured home '661': Model T '662': modem '663': monastery '664': monitor '665': moped '666': mortar '667': mortarboard '668': mosque '669': mosquito net '670': motor scooter, scooter '671': mountain bike, all-terrain bike, off-roader '672': mountain tent '673': mouse, computer mouse '674': mousetrap '675': moving van '676': muzzle '677': nail '678': neck brace '679': necklace '680': nipple '681': notebook, notebook computer '682': obelisk '683': oboe, hautboy, hautbois '684': ocarina, sweet potato '685': odometer, hodometer, mileometer, milometer '686': oil filter '687': organ, pipe organ '688': oscilloscope, scope, cathode-ray oscilloscope, CRO '689': overskirt '690': oxcart '691': oxygen mask '692': packet '693': paddle, boat paddle '694': paddlewheel, paddle wheel '695': padlock '696': paintbrush '697': pajama, pyjama, pj's, jammies '698': palace '699': panpipe, pandean pipe, syrinx '700': paper towel '701': parachute, chute '702': parallel bars, bars '703': park bench '704': parking meter '705': passenger car, coach, carriage '706': patio, terrace '707': pay-phone, pay-station '708': pedestal, plinth, footstall '709': pencil box, pencil case '710': pencil sharpener '711': perfume, essence '712': Petri dish '713': photocopier '714': pick, plectrum, plectron '715': pickelhaube '716': picket fence, paling '717': pickup, pickup truck '718': pier '719': piggy bank, penny bank '720': pill bottle '721': pillow '722': ping-pong ball '723': pinwheel '724': pirate, pirate ship '725': pitcher, ewer '726': plane, carpenter's plane, woodworking plane '727': planetarium '728': plastic bag '729': plate rack '730': plow, plough '731': plunger, plumber's helper '732': Polaroid camera, Polaroid Land camera '733': pole '734': police van, police wagon, paddy wagon, patrol wagon, wagon, black Maria '735': poncho '736': pool table, billiard table, snooker table '737': pop bottle, soda bottle '738': pot, flowerpot '739': potter's wheel '740': power drill '741': prayer rug, prayer mat '742': printer '743': prison, prison house '744': projectile, missile '745': projector '746': puck, hockey puck '747': punching bag, punch bag, punching ball, punchball '748': purse '749': quill, quill pen '750': quilt, comforter, comfort, puff '751': racer, race car, racing car '752': racket, racquet '753': radiator '754': radio, wireless '755': radio telescope, radio reflector '756': rain barrel '757': recreational vehicle, RV, R.V. '758': reel '759': reflex camera '760': refrigerator, icebox '761': remote control, remote '762': restaurant, eating house, eating place, eatery '763': revolver, six-gun, six-shooter '764': rifle '765': rocking chair, rocker '766': rotisserie '767': rubber eraser, rubber, pencil eraser '768': rugby ball '769': rule, ruler '770': running shoe '771': safe '772': safety pin '773': saltshaker, salt shaker '774': sandal '775': sarong '776': sax, saxophone '777': scabbard '778': scale, weighing machine '779': school bus '780': schooner '781': scoreboard '782': screen, CRT screen '783': screw '784': screwdriver '785': seat belt, seatbelt '786': sewing machine '787': shield, buckler '788': shoe shop, shoe-shop, shoe store '789': shoji '790': shopping basket '791': shopping cart '792': shovel '793': shower cap '794': shower curtain '795': ski '796': ski mask '797': sleeping bag '798': slide rule, slipstick '799': sliding door '800': slot, one-armed bandit '801': snorkel '802': snowmobile '803': snowplow, snowplough '804': soap dispenser '805': soccer ball '806': sock '807': solar dish, solar collector, solar furnace '808': sombrero '809': soup bowl '810': space bar '811': space heater '812': space shuttle '813': spatula '814': speedboat '815': spider web, spider's web '816': spindle '817': sports car, sport car '818': spotlight, spot '819': stage '820': steam locomotive '821': steel arch bridge '822': steel drum '823': stethoscope '824': stole '825': stone wall '826': stopwatch, stop watch '827': stove '828': strainer '829': streetcar, tram, tramcar, trolley, trolley car '830': stretcher '831': studio couch, day bed '832': stupa, tope '833': submarine, pigboat, sub, U-boat '834': suit, suit of clothes '835': sundial '836': sunglass '837': sunglasses, dark glasses, shades '838': sunscreen, sunblock, sun blocker '839': suspension bridge '840': swab, swob, mop '841': sweatshirt '842': swimming trunks, bathing trunks '843': swing '844': switch, electric switch, electrical switch '845': syringe '846': table lamp '847': tank, army tank, armored combat vehicle, armoured combat vehicle '848': tape player '849': teapot '850': teddy, teddy bear '851': television, television system '852': tennis ball '853': thatch, thatched roof '854': theater curtain, theatre curtain '855': thimble '856': thresher, thrasher, threshing machine '857': throne '858': tile roof '859': toaster '860': tobacco shop, tobacconist shop, tobacconist '861': toilet seat '862': torch '863': totem pole '864': tow truck, tow car, wrecker '865': toyshop '866': tractor '867': trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi '868': tray '869': trench coat '870': tricycle, trike, velocipede '871': trimaran '872': tripod '873': triumphal arch '874': trolleybus, trolley coach, trackless trolley '875': trombone '876': tub, vat '877': turnstile '878': typewriter keyboard '879': umbrella '880': unicycle, monocycle '881': upright, upright piano '882': vacuum, vacuum cleaner '883': vase '884': vault '885': velvet '886': vending machine '887': vestment '888': viaduct '889': violin, fiddle '890': volleyball '891': waffle iron '892': wall clock '893': wallet, billfold, notecase, pocketbook '894': wardrobe, closet, press '895': warplane, military plane '896': washbasin, handbasin, washbowl, lavabo, wash-hand basin '897': washer, automatic washer, washing machine '898': water bottle '899': water jug '900': water tower '901': whiskey jug '902': whistle '903': wig '904': window screen '905': window shade '906': Windsor tie '907': wine bottle '908': wing '909': wok '910': wooden spoon '911': wool, woolen, woollen '912': worm fence, snake fence, snake-rail fence, Virginia fence '913': wreck '914': yawl '915': yurt '916': web site, website, internet site, site '917': comic book '918': crossword puzzle, crossword '919': street sign '920': traffic light, traffic signal, stoplight '921': book jacket, dust cover, dust jacket, dust wrapper '922': menu '923': plate '924': guacamole '925': consomme '926': hot pot, hotpot '927': trifle '928': ice cream, icecream '929': ice lolly, lolly, lollipop, popsicle '930': French loaf '931': bagel, beigel '932': pretzel '933': cheeseburger '934': hotdog, hot dog, red hot '935': mashed potato '936': head cabbage '937': broccoli '938': cauliflower '939': zucchini, courgette '940': spaghetti squash '941': acorn squash '942': butternut squash '943': cucumber, cuke '944': artichoke, globe artichoke '945': bell pepper '946': cardoon '947': mushroom '948': Granny Smith '949': strawberry '950': orange '951': lemon '952': fig '953': pineapple, ananas '954': banana '955': jackfruit, jak, jack '956': custard apple '957': pomegranate '958': hay '959': carbonara '960': chocolate sauce, chocolate syrup '961': dough '962': meat loaf, meatloaf '963': pizza, pizza pie '964': potpie '965': burrito '966': red wine '967': espresso '968': cup '969': eggnog '970': alp '971': bubble '972': cliff, drop, drop-off '973': coral reef '974': geyser '975': lakeside, lakeshore '976': promontory, headland, head, foreland '977': sandbar, sand bar '978': seashore, coast, seacoast, sea-coast '979': valley, vale '980': volcano '981': ballplayer, baseball player '982': groom, bridegroom '983': scuba diver '984': rapeseed '985': daisy '986': yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus, Cypripedium parviflorum '987': corn '988': acorn '989': hip, rose hip, rosehip '990': buckeye, horse chestnut, conker '991': coral fungus '992': agaric '993': gyromitra '994': stinkhorn, carrion fungus '995': earthstar '996': hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola frondosa '997': bolete '998': ear, spike, capitulum '999': toilet tissue, toilet paper, bathroom tissue splits: - name: train num_bytes: 9919813 num_examples: 50889 download_size: 7593573012 dataset_size: 9919813 --- # Dataset Card for ImageNet-Sketch ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/HaohanWang/ImageNet-Sketch - **Repository:** https://github.com/HaohanWang/ImageNet-Sketch - **Paper:** [Learning Robust Global Representations by Penalizing Local Predictive Power](https://arxiv.org/abs/1905.13549v2) - **Leaderboard:** https://github.com/HaohanWang/ImageNet-Sketch#imagenet-sketch-leaderboard - **Point of Contact:** [Haohan Wang](mailto:haohanw@andrew.cmu.edu) - **Size of downloaded dataset files:** 8.15 GB ### Dataset Summary ImageNet-Sketch data set consists of 50000 images, 50 images for each of the 1000 ImageNet classes. We construct the data set with Google Image queries "sketch of __", where __ is the standard class name. We only search within the "black and white" color scheme. We initially query 100 images for every class, and then manually clean the pulled images by deleting the irrelevant images and images that are for similar but different classes. For some classes, there are less than 50 images after manually cleaning, and then we augment the data set by flipping and rotating the images. The scripts used to conduct queries and clean images can be found in [the GitHub repository](https://github.com/HaohanWang/ImageNet-Sketch). ### Supported Tasks and Leaderboards - `image_classification`: The goal of this task is to classify a given image into one of 1000 ImageNet classes. The leaderboard is available [here](https://github.com/HaohanWang/ImageNet-Sketch#imagenet-sketch-leaderboard). The goal of the leaderboard is to evaluate the out-of-domain classification performance of vision models trained on ImageNet. The evaluation metrics used in the leaderboard are top-1 accuracy and top-5 accuracy. ### Languages The class labels in the dataset are in English. ## Dataset Structure ### Data Instances A sample from the training set is provided below: ``` { 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=400x530 at 0x7FB2EF5D4A90>, 'label': 320 } ``` ### Data Fields The data instances have the following fields: - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`. - `label`: an `int` classification label. The labels are indexed based on a sorted list of synset ids such as `n07565083` which we automatically map to original class names. The original dataset is divided into folders based on these synset ids. To get a mapping from original synset names, use the file [LOC_synset_mapping.txt](https://www.kaggle.com/competitions/imagenet-object-localization-challenge/data?select=LOC_synset_mapping.txt) available on Kaggle challenge page. You can also use `dataset_instance.features["label"].int2str` function to get the class for a particular label index. <details> <summary> Click here to see the full list of ImageNet class label mapping: </summary> |id|Class| |--|-----| |0 | tench, Tinca tinca| |1 | goldfish, Carassius auratus| |2 | great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias| |3 | tiger shark, Galeocerdo cuvieri| |4 | hammerhead, hammerhead shark| |5 | electric ray, crampfish, numbfish, torpedo| |6 | stingray| |7 | cock| |8 | hen| |9 | ostrich, Struthio camelus| |10 | brambling, Fringilla montifringilla| |11 | goldfinch, Carduelis carduelis| |12 | house finch, linnet, Carpodacus mexicanus| |13 | junco, snowbird| |14 | indigo bunting, indigo finch, indigo bird, Passerina cyanea| |15 | robin, American robin, Turdus migratorius| |16 | bulbul| |17 | jay| |18 | magpie| |19 | chickadee| |20 | water ouzel, dipper| |21 | kite| |22 | bald eagle, American eagle, Haliaeetus leucocephalus| |23 | vulture| |24 | great grey owl, great gray owl, Strix nebulosa| |25 | European fire salamander, Salamandra salamandra| |26 | common newt, Triturus vulgaris| |27 | eft| |28 | spotted salamander, Ambystoma maculatum| |29 | axolotl, mud puppy, Ambystoma mexicanum| |30 | bullfrog, Rana catesbeiana| |31 | tree frog, tree-frog| |32 | tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui| |33 | loggerhead, loggerhead turtle, Caretta caretta| |34 | leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea| |35 | mud turtle| |36 | terrapin| |37 | box turtle, box tortoise| |38 | banded gecko| |39 | common iguana, iguana, Iguana iguana| |40 | American chameleon, anole, Anolis carolinensis| |41 | whiptail, whiptail lizard| |42 | agama| |43 | frilled lizard, Chlamydosaurus kingi| |44 | alligator lizard| |45 | Gila monster, Heloderma suspectum| |46 | green lizard, Lacerta viridis| |47 | African chameleon, Chamaeleo chamaeleon| |48 | Komodo dragon, Komodo lizard, dragon lizard, giant lizard, Varanus komodoensis| |49 | African crocodile, Nile crocodile, Crocodylus niloticus| |50 | American alligator, Alligator mississipiensis| |51 | triceratops| |52 | thunder snake, worm snake, Carphophis amoenus| |53 | ringneck snake, ring-necked snake, ring snake| |54 | hognose snake, puff adder, sand viper| |55 | green snake, grass snake| |56 | king snake, kingsnake| |57 | garter snake, grass snake| |58 | water snake| |59 | vine snake| |60 | night snake, Hypsiglena torquata| |61 | boa constrictor, Constrictor constrictor| |62 | rock python, rock snake, Python sebae| |63 | Indian cobra, Naja naja| |64 | green mamba| |65 | sea snake| |66 | horned viper, cerastes, sand viper, horned asp, Cerastes cornutus| |67 | diamondback, diamondback rattlesnake, Crotalus adamanteus| |68 | sidewinder, horned rattlesnake, Crotalus cerastes| |69 | trilobite| |70 | harvestman, daddy longlegs, Phalangium opilio| |71 | scorpion| |72 | black and gold garden spider, Argiope aurantia| |73 | barn spider, Araneus cavaticus| |74 | garden spider, Aranea diademata| |75 | black widow, Latrodectus mactans| |76 | tarantula| |77 | wolf spider, hunting spider| |78 | tick| |79 | centipede| |80 | black grouse| |81 | ptarmigan| |82 | ruffed grouse, partridge, Bonasa umbellus| |83 | prairie chicken, prairie grouse, prairie fowl| |84 | peacock| |85 | quail| |86 | partridge| |87 | African grey, African gray, Psittacus erithacus| |88 | macaw| |89 | sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita| |90 | lorikeet| |91 | coucal| |92 | bee eater| |93 | hornbill| |94 | hummingbird| |95 | jacamar| |96 | toucan| |97 | drake| |98 | red-breasted merganser, Mergus serrator| |99 | goose| |100 | black swan, Cygnus atratus| |101 | tusker| |102 | echidna, spiny anteater, anteater| |103 | platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus anatinus| |104 | wallaby, brush kangaroo| |105 | koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus| |106 | wombat| |107 | jellyfish| |108 | sea anemone, anemone| |109 | brain coral| |110 | flatworm, platyhelminth| |111 | nematode, nematode worm, roundworm| |112 | conch| |113 | snail| |114 | slug| |115 | sea slug, nudibranch| |116 | chiton, coat-of-mail shell, sea cradle, polyplacophore| |117 | chambered nautilus, pearly nautilus, nautilus| |118 | Dungeness crab, Cancer magister| |119 | rock crab, Cancer irroratus| |120 | fiddler crab| |121 | king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes camtschatica| |122 | American lobster, Northern lobster, Maine lobster, Homarus americanus| |123 | spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish| |124 | crayfish, crawfish, crawdad, crawdaddy| |125 | hermit crab| |126 | isopod| |127 | white stork, Ciconia ciconia| |128 | black stork, Ciconia nigra| |129 | spoonbill| |130 | flamingo| |131 | little blue heron, Egretta caerulea| |132 | American egret, great white heron, Egretta albus| |133 | bittern| |134 | crane| |135 | limpkin, Aramus pictus| |136 | European gallinule, Porphyrio porphyrio| |137 | American coot, marsh hen, mud hen, water hen, Fulica americana| |138 | bustard| |139 | ruddy turnstone, Arenaria interpres| |140 | red-backed sandpiper, dunlin, Erolia alpina| |141 | redshank, Tringa totanus| |142 | dowitcher| |143 | oystercatcher, oyster catcher| |144 | pelican| |145 | king penguin, Aptenodytes patagonica| |146 | albatross, mollymawk| |147 | grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius robustus| |148 | killer whale, killer, orca, grampus, sea wolf, Orcinus orca| |149 | dugong, Dugong dugon| |150 | sea lion| |151 | Chihuahua| |152 | Japanese spaniel| |153 | Maltese dog, Maltese terrier, Maltese| |154 | Pekinese, Pekingese, Peke| |155 | Shih-Tzu| |156 | Blenheim spaniel| |157 | papillon| |158 | toy terrier| |159 | Rhodesian ridgeback| |160 | Afghan hound, Afghan| |161 | basset, basset hound| |162 | beagle| |163 | bloodhound, sleuthhound| |164 | bluetick| |165 | black-and-tan coonhound| |166 | Walker hound, Walker foxhound| |167 | English foxhound| |168 | redbone| |169 | borzoi, Russian wolfhound| |170 | Irish wolfhound| |171 | Italian greyhound| |172 | whippet| |173 | Ibizan hound, Ibizan Podenco| |174 | Norwegian elkhound, elkhound| |175 | otterhound, otter hound| |176 | Saluki, gazelle hound| |177 | Scottish deerhound, deerhound| |178 | Weimaraner| |179 | Staffordshire bullterrier, Staffordshire bull terrier| |180 | American Staffordshire terrier, Staffordshire terrier, American pit bull terrier, pit bull terrier| |181 | Bedlington terrier| |182 | Border terrier| |183 | Kerry blue terrier| |184 | Irish terrier| |185 | Norfolk terrier| |186 | Norwich terrier| |187 | Yorkshire terrier| |188 | wire-haired fox terrier| |189 | Lakeland terrier| |190 | Sealyham terrier, Sealyham| |191 | Airedale, Airedale terrier| |192 | cairn, cairn terrier| |193 | Australian terrier| |194 | Dandie Dinmont, Dandie Dinmont terrier| |195 | Boston bull, Boston terrier| |196 | miniature schnauzer| |197 | giant schnauzer| |198 | standard schnauzer| |199 | Scotch terrier, Scottish terrier, Scottie| |200 | Tibetan terrier, chrysanthemum dog| |201 | silky terrier, Sydney silky| |202 | soft-coated wheaten terrier| |203 | West Highland white terrier| |204 | Lhasa, Lhasa apso| |205 | flat-coated retriever| |206 | curly-coated retriever| |207 | golden retriever| |208 | Labrador retriever| |209 | Chesapeake Bay retriever| |210 | German short-haired pointer| |211 | vizsla, Hungarian pointer| |212 | English setter| |213 | Irish setter, red setter| |214 | Gordon setter| |215 | Brittany spaniel| |216 | clumber, clumber spaniel| |217 | English springer, English springer spaniel| |218 | Welsh springer spaniel| |219 | cocker spaniel, English cocker spaniel, cocker| |220 | Sussex spaniel| |221 | Irish water spaniel| |222 | kuvasz| |223 | schipperke| |224 | groenendael| |225 | malinois| |226 | briard| |227 | kelpie| |228 | komondor| |229 | Old English sheepdog, bobtail| |230 | Shetland sheepdog, Shetland sheep dog, Shetland| |231 | collie| |232 | Border collie| |233 | Bouvier des Flandres, Bouviers des Flandres| |234 | Rottweiler| |235 | German shepherd, German shepherd dog, German police dog, alsatian| |236 | Doberman, Doberman pinscher| |237 | miniature pinscher| |238 | Greater Swiss Mountain dog| |239 | Bernese mountain dog| |240 | Appenzeller| |241 | EntleBucher| |242 | boxer| |243 | bull mastiff| |244 | Tibetan mastiff| |245 | French bulldog| |246 | Great Dane| |247 | Saint Bernard, St Bernard| |248 | Eskimo dog, husky| |249 | malamute, malemute, Alaskan malamute| |250 | Siberian husky| |251 | dalmatian, coach dog, carriage dog| |252 | affenpinscher, monkey pinscher, monkey dog| |253 | basenji| |254 | pug, pug-dog| |255 | Leonberg| |256 | Newfoundland, Newfoundland dog| |257 | Great Pyrenees| |258 | Samoyed, Samoyede| |259 | Pomeranian| |260 | chow, chow chow| |261 | keeshond| |262 | Brabancon griffon| |263 | Pembroke, Pembroke Welsh corgi| |264 | Cardigan, Cardigan Welsh corgi| |265 | toy poodle| |266 | miniature poodle| |267 | standard poodle| |268 | Mexican hairless| |269 | timber wolf, grey wolf, gray wolf, Canis lupus| |270 | white wolf, Arctic wolf, Canis lupus tundrarum| |271 | red wolf, maned wolf, Canis rufus, Canis niger| |272 | coyote, prairie wolf, brush wolf, Canis latrans| |273 | dingo, warrigal, warragal, Canis dingo| |274 | dhole, Cuon alpinus| |275 | African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus| |276 | hyena, hyaena| |277 | red fox, Vulpes vulpes| |278 | kit fox, Vulpes macrotis| |279 | Arctic fox, white fox, Alopex lagopus| |280 | grey fox, gray fox, Urocyon cinereoargenteus| |281 | tabby, tabby cat| |282 | tiger cat| |283 | Persian cat| |284 | Siamese cat, Siamese| |285 | Egyptian cat| |286 | cougar, puma, catamount, mountain lion, painter, panther, Felis concolor| |287 | lynx, catamount| |288 | leopard, Panthera pardus| |289 | snow leopard, ounce, Panthera uncia| |290 | jaguar, panther, Panthera onca, Felis onca| |291 | lion, king of beasts, Panthera leo| |292 | tiger, Panthera tigris| |293 | cheetah, chetah, Acinonyx jubatus| |294 | brown bear, bruin, Ursus arctos| |295 | American black bear, black bear, Ursus americanus, Euarctos americanus| |296 | ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus| |297 | sloth bear, Melursus ursinus, Ursus ursinus| |298 | mongoose| |299 | meerkat, mierkat| |300 | tiger beetle| |301 | ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle| |302 | ground beetle, carabid beetle| |303 | long-horned beetle, longicorn, longicorn beetle| |304 | leaf beetle, chrysomelid| |305 | dung beetle| |306 | rhinoceros beetle| |307 | weevil| |308 | fly| |309 | bee| |310 | ant, emmet, pismire| |311 | grasshopper, hopper| |312 | cricket| |313 | walking stick, walkingstick, stick insect| |314 | cockroach, roach| |315 | mantis, mantid| |316 | cicada, cicala| |317 | leafhopper| |318 | lacewing, lacewing fly| |319 | dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk| |320 | damselfly| |321 | admiral| |322 | ringlet, ringlet butterfly| |323 | monarch, monarch butterfly, milkweed butterfly, Danaus plexippus| |324 | cabbage butterfly| |325 | sulphur butterfly, sulfur butterfly| |326 | lycaenid, lycaenid butterfly| |327 | starfish, sea star| |328 | sea urchin| |329 | sea cucumber, holothurian| |330 | wood rabbit, cottontail, cottontail rabbit| |331 | hare| |332 | Angora, Angora rabbit| |333 | hamster| |334 | porcupine, hedgehog| |335 | fox squirrel, eastern fox squirrel, Sciurus niger| |336 | marmot| |337 | beaver| |338 | guinea pig, Cavia cobaya| |339 | sorrel| |340 | zebra| |341 | hog, pig, grunter, squealer, Sus scrofa| |342 | wild boar, boar, Sus scrofa| |343 | warthog| |344 | hippopotamus, hippo, river horse, Hippopotamus amphibius| |345 | ox| |346 | water buffalo, water ox, Asiatic buffalo, Bubalus bubalis| |347 | bison| |348 | ram, tup| |349 | bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis| |350 | ibex, Capra ibex| |351 | hartebeest| |352 | impala, Aepyceros melampus| |353 | gazelle| |354 | Arabian camel, dromedary, Camelus dromedarius| |355 | llama| |356 | weasel| |357 | mink| |358 | polecat, fitch, foulmart, foumart, Mustela putorius| |359 | black-footed ferret, ferret, Mustela nigripes| |360 | otter| |361 | skunk, polecat, wood pussy| |362 | badger| |363 | armadillo| |364 | three-toed sloth, ai, Bradypus tridactylus| |365 | orangutan, orang, orangutang, Pongo pygmaeus| |366 | gorilla, Gorilla gorilla| |367 | chimpanzee, chimp, Pan troglodytes| |368 | gibbon, Hylobates lar| |369 | siamang, Hylobates syndactylus, Symphalangus syndactylus| |370 | guenon, guenon monkey| |371 | patas, hussar monkey, Erythrocebus patas| |372 | baboon| |373 | macaque| |374 | langur| |375 | colobus, colobus monkey| |376 | proboscis monkey, Nasalis larvatus| |377 | marmoset| |378 | capuchin, ringtail, Cebus capucinus| |379 | howler monkey, howler| |380 | titi, titi monkey| |381 | spider monkey, Ateles geoffroyi| |382 | squirrel monkey, Saimiri sciureus| |383 | Madagascar cat, ring-tailed lemur, Lemur catta| |384 | indri, indris, Indri indri, Indri brevicaudatus| |385 | Indian elephant, Elephas maximus| |386 | African elephant, Loxodonta africana| |387 | lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens| |388 | giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca| |389 | barracouta, snoek| |390 | eel| |391 | coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus kisutch| |392 | rock beauty, Holocanthus tricolor| |393 | anemone fish| |394 | sturgeon| |395 | gar, garfish, garpike, billfish, Lepisosteus osseus| |396 | lionfish| |397 | puffer, pufferfish, blowfish, globefish| |398 | abacus| |399 | abaya| |400 | academic gown, academic robe, judge's robe| |401 | accordion, piano accordion, squeeze box| |402 | acoustic guitar| |403 | aircraft carrier, carrier, flattop, attack aircraft carrier| |404 | airliner| |405 | airship, dirigible| |406 | altar| |407 | ambulance| |408 | amphibian, amphibious vehicle| |409 | analog clock| |410 | apiary, bee house| |411 | apron| |412 | ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin| |413 | assault rifle, assault gun| |414 | backpack, back pack, knapsack, packsack, rucksack, haversack| |415 | bakery, bakeshop, bakehouse| |416 | balance beam, beam| |417 | balloon| |418 | ballpoint, ballpoint pen, ballpen, Biro| |419 | Band Aid| |420 | banjo| |421 | bannister, banister, balustrade, balusters, handrail| |422 | barbell| |423 | barber chair| |424 | barbershop| |425 | barn| |426 | barometer| |427 | barrel, cask| |428 | barrow, garden cart, lawn cart, wheelbarrow| |429 | baseball| |430 | basketball| |431 | bassinet| |432 | bassoon| |433 | bathing cap, swimming cap| |434 | bath towel| |435 | bathtub, bathing tub, bath, tub| |436 | beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon| |437 | beacon, lighthouse, beacon light, pharos| |438 | beaker| |439 | bearskin, busby, shako| |440 | beer bottle| |441 | beer glass| |442 | bell cote, bell cot| |443 | bib| |444 | bicycle-built-for-two, tandem bicycle, tandem| |445 | bikini, two-piece| |446 | binder, ring-binder| |447 | binoculars, field glasses, opera glasses| |448 | birdhouse| |449 | boathouse| |450 | bobsled, bobsleigh, bob| |451 | bolo tie, bolo, bola tie, bola| |452 | bonnet, poke bonnet| |453 | bookcase| |454 | bookshop, bookstore, bookstall| |455 | bottlecap| |456 | bow| |457 | bow tie, bow-tie, bowtie| |458 | brass, memorial tablet, plaque| |459 | brassiere, bra, bandeau| |460 | breakwater, groin, groyne, mole, bulwark, seawall, jetty| |461 | breastplate, aegis, egis| |462 | broom| |463 | bucket, pail| |464 | buckle| |465 | bulletproof vest| |466 | bullet train, bullet| |467 | butcher shop, meat market| |468 | cab, hack, taxi, taxicab| |469 | caldron, cauldron| |470 | candle, taper, wax light| |471 | cannon| |472 | canoe| |473 | can opener, tin opener| |474 | cardigan| |475 | car mirror| |476 | carousel, carrousel, merry-go-round, roundabout, whirligig| |477 | carpenter's kit, tool kit| |478 | carton| |479 | car wheel| |480 | cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, ATM| |481 | cassette| |482 | cassette player| |483 | castle| |484 | catamaran| |485 | CD player| |486 | cello, violoncello| |487 | cellular telephone, cellular phone, cellphone, cell, mobile phone| |488 | chain| |489 | chainlink fence| |490 | chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour| |491 | chain saw, chainsaw| |492 | chest| |493 | chiffonier, commode| |494 | chime, bell, gong| |495 | china cabinet, china closet| |496 | Christmas stocking| |497 | church, church building| |498 | cinema, movie theater, movie theatre, movie house, picture palace| |499 | cleaver, meat cleaver, chopper| |500 | cliff dwelling| |501 | cloak| |502 | clog, geta, patten, sabot| |503 | cocktail shaker| |504 | coffee mug| |505 | coffeepot| |506 | coil, spiral, volute, whorl, helix| |507 | combination lock| |508 | computer keyboard, keypad| |509 | confectionery, confectionary, candy store| |510 | container ship, containership, container vessel| |511 | convertible| |512 | corkscrew, bottle screw| |513 | cornet, horn, trumpet, trump| |514 | cowboy boot| |515 | cowboy hat, ten-gallon hat| |516 | cradle| |517 | crane_1| |518 | crash helmet| |519 | crate| |520 | crib, cot| |521 | Crock Pot| |522 | croquet ball| |523 | crutch| |524 | cuirass| |525 | dam, dike, dyke| |526 | desk| |527 | desktop computer| |528 | dial telephone, dial phone| |529 | diaper, nappy, napkin| |530 | digital clock| |531 | digital watch| |532 | dining table, board| |533 | dishrag, dishcloth| |534 | dishwasher, dish washer, dishwashing machine| |535 | disk brake, disc brake| |536 | dock, dockage, docking facility| |537 | dogsled, dog sled, dog sleigh| |538 | dome| |539 | doormat, welcome mat| |540 | drilling platform, offshore rig| |541 | drum, membranophone, tympan| |542 | drumstick| |543 | dumbbell| |544 | Dutch oven| |545 | electric fan, blower| |546 | electric guitar| |547 | electric locomotive| |548 | entertainment center| |549 | envelope| |550 | espresso maker| |551 | face powder| |552 | feather boa, boa| |553 | file, file cabinet, filing cabinet| |554 | fireboat| |555 | fire engine, fire truck| |556 | fire screen, fireguard| |557 | flagpole, flagstaff| |558 | flute, transverse flute| |559 | folding chair| |560 | football helmet| |561 | forklift| |562 | fountain| |563 | fountain pen| |564 | four-poster| |565 | freight car| |566 | French horn, horn| |567 | frying pan, frypan, skillet| |568 | fur coat| |569 | garbage truck, dustcart| |570 | gasmask, respirator, gas helmet| |571 | gas pump, gasoline pump, petrol pump, island dispenser| |572 | goblet| |573 | go-kart| |574 | golf ball| |575 | golfcart, golf cart| |576 | gondola| |577 | gong, tam-tam| |578 | gown| |579 | grand piano, grand| |580 | greenhouse, nursery, glasshouse| |581 | grille, radiator grille| |582 | grocery store, grocery, food market, market| |583 | guillotine| |584 | hair slide| |585 | hair spray| |586 | half track| |587 | hammer| |588 | hamper| |589 | hand blower, blow dryer, blow drier, hair dryer, hair drier| |590 | hand-held computer, hand-held microcomputer| |591 | handkerchief, hankie, hanky, hankey| |592 | hard disc, hard disk, fixed disk| |593 | harmonica, mouth organ, harp, mouth harp| |594 | harp| |595 | harvester, reaper| |596 | hatchet| |597 | holster| |598 | home theater, home theatre| |599 | honeycomb| |600 | hook, claw| |601 | hoopskirt, crinoline| |602 | horizontal bar, high bar| |603 | horse cart, horse-cart| |604 | hourglass| |605 | iPod| |606 | iron, smoothing iron| |607 | jack-o'-lantern| |608 | jean, blue jean, denim| |609 | jeep, landrover| |610 | jersey, T-shirt, tee shirt| |611 | jigsaw puzzle| |612 | jinrikisha, ricksha, rickshaw| |613 | joystick| |614 | kimono| |615 | knee pad| |616 | knot| |617 | lab coat, laboratory coat| |618 | ladle| |619 | lampshade, lamp shade| |620 | laptop, laptop computer| |621 | lawn mower, mower| |622 | lens cap, lens cover| |623 | letter opener, paper knife, paperknife| |624 | library| |625 | lifeboat| |626 | lighter, light, igniter, ignitor| |627 | limousine, limo| |628 | liner, ocean liner| |629 | lipstick, lip rouge| |630 | Loafer| |631 | lotion| |632 | loudspeaker, speaker, speaker unit, loudspeaker system, speaker system| |633 | loupe, jeweler's loupe| |634 | lumbermill, sawmill| |635 | magnetic compass| |636 | mailbag, postbag| |637 | mailbox, letter box| |638 | maillot| |639 | maillot, tank suit| |640 | manhole cover| |641 | maraca| |642 | marimba, xylophone| |643 | mask| |644 | matchstick| |645 | maypole| |646 | maze, labyrinth| |647 | measuring cup| |648 | medicine chest, medicine cabinet| |649 | megalith, megalithic structure| |650 | microphone, mike| |651 | microwave, microwave oven| |652 | military uniform| |653 | milk can| |654 | minibus| |655 | miniskirt, mini| |656 | minivan| |657 | missile| |658 | mitten| |659 | mixing bowl| |660 | mobile home, manufactured home| |661 | Model T| |662 | modem| |663 | monastery| |664 | monitor| |665 | moped| |666 | mortar| |667 | mortarboard| |668 | mosque| |669 | mosquito net| |670 | motor scooter, scooter| |671 | mountain bike, all-terrain bike, off-roader| |672 | mountain tent| |673 | mouse, computer mouse| |674 | mousetrap| |675 | moving van| |676 | muzzle| |677 | nail| |678 | neck brace| |679 | necklace| |680 | nipple| |681 | notebook, notebook computer| |682 | obelisk| |683 | oboe, hautboy, hautbois| |684 | ocarina, sweet potato| |685 | odometer, hodometer, mileometer, milometer| |686 | oil filter| |687 | organ, pipe organ| |688 | oscilloscope, scope, cathode-ray oscilloscope, CRO| |689 | overskirt| |690 | oxcart| |691 | oxygen mask| |692 | packet| |693 | paddle, boat paddle| |694 | paddlewheel, paddle wheel| |695 | padlock| |696 | paintbrush| |697 | pajama, pyjama, pj's, jammies| |698 | palace| |699 | panpipe, pandean pipe, syrinx| |700 | paper towel| |701 | parachute, chute| |702 | parallel bars, bars| |703 | park bench| |704 | parking meter| |705 | passenger car, coach, carriage| |706 | patio, terrace| |707 | pay-phone, pay-station| |708 | pedestal, plinth, footstall| |709 | pencil box, pencil case| |710 | pencil sharpener| |711 | perfume, essence| |712 | Petri dish| |713 | photocopier| |714 | pick, plectrum, plectron| |715 | pickelhaube| |716 | picket fence, paling| |717 | pickup, pickup truck| |718 | pier| |719 | piggy bank, penny bank| |720 | pill bottle| |721 | pillow| |722 | ping-pong ball| |723 | pinwheel| |724 | pirate, pirate ship| |725 | pitcher, ewer| |726 | plane, carpenter's plane, woodworking plane| |727 | planetarium| |728 | plastic bag| |729 | plate rack| |730 | plow, plough| |731 | plunger, plumber's helper| |732 | Polaroid camera, Polaroid Land camera| |733 | pole| |734 | police van, police wagon, paddy wagon, patrol wagon, wagon, black Maria| |735 | poncho| |736 | pool table, billiard table, snooker table| |737 | pop bottle, soda bottle| |738 | pot, flowerpot| |739 | potter's wheel| |740 | power drill| |741 | prayer rug, prayer mat| |742 | printer| |743 | prison, prison house| |744 | projectile, missile| |745 | projector| |746 | puck, hockey puck| |747 | punching bag, punch bag, punching ball, punchball| |748 | purse| |749 | quill, quill pen| |750 | quilt, comforter, comfort, puff| |751 | racer, race car, racing car| |752 | racket, racquet| |753 | radiator| |754 | radio, wireless| |755 | radio telescope, radio reflector| |756 | rain barrel| |757 | recreational vehicle, RV, R.V.| |758 | reel| |759 | reflex camera| |760 | refrigerator, icebox| |761 | remote control, remote| |762 | restaurant, eating house, eating place, eatery| |763 | revolver, six-gun, six-shooter| |764 | rifle| |765 | rocking chair, rocker| |766 | rotisserie| |767 | rubber eraser, rubber, pencil eraser| |768 | rugby ball| |769 | rule, ruler| |770 | running shoe| |771 | safe| |772 | safety pin| |773 | saltshaker, salt shaker| |774 | sandal| |775 | sarong| |776 | sax, saxophone| |777 | scabbard| |778 | scale, weighing machine| |779 | school bus| |780 | schooner| |781 | scoreboard| |782 | screen, CRT screen| |783 | screw| |784 | screwdriver| |785 | seat belt, seatbelt| |786 | sewing machine| |787 | shield, buckler| |788 | shoe shop, shoe-shop, shoe store| |789 | shoji| |790 | shopping basket| |791 | shopping cart| |792 | shovel| |793 | shower cap| |794 | shower curtain| |795 | ski| |796 | ski mask| |797 | sleeping bag| |798 | slide rule, slipstick| |799 | sliding door| |800 | slot, one-armed bandit| |801 | snorkel| |802 | snowmobile| |803 | snowplow, snowplough| |804 | soap dispenser| |805 | soccer ball| |806 | sock| |807 | solar dish, solar collector, solar furnace| |808 | sombrero| |809 | soup bowl| |810 | space bar| |811 | space heater| |812 | space shuttle| |813 | spatula| |814 | speedboat| |815 | spider web, spider's web| |816 | spindle| |817 | sports car, sport car| |818 | spotlight, spot| |819 | stage| |820 | steam locomotive| |821 | steel arch bridge| |822 | steel drum| |823 | stethoscope| |824 | stole| |825 | stone wall| |826 | stopwatch, stop watch| |827 | stove| |828 | strainer| |829 | streetcar, tram, tramcar, trolley, trolley car| |830 | stretcher| |831 | studio couch, day bed| |832 | stupa, tope| |833 | submarine, pigboat, sub, U-boat| |834 | suit, suit of clothes| |835 | sundial| |836 | sunglass| |837 | sunglasses, dark glasses, shades| |838 | sunscreen, sunblock, sun blocker| |839 | suspension bridge| |840 | swab, swob, mop| |841 | sweatshirt| |842 | swimming trunks, bathing trunks| |843 | swing| |844 | switch, electric switch, electrical switch| |845 | syringe| |846 | table lamp| |847 | tank, army tank, armored combat vehicle, armoured combat vehicle| |848 | tape player| |849 | teapot| |850 | teddy, teddy bear| |851 | television, television system| |852 | tennis ball| |853 | thatch, thatched roof| |854 | theater curtain, theatre curtain| |855 | thimble| |856 | thresher, thrasher, threshing machine| |857 | throne| |858 | tile roof| |859 | toaster| |860 | tobacco shop, tobacconist shop, tobacconist| |861 | toilet seat| |862 | torch| |863 | totem pole| |864 | tow truck, tow car, wrecker| |865 | toyshop| |866 | tractor| |867 | trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi| |868 | tray| |869 | trench coat| |870 | tricycle, trike, velocipede| |871 | trimaran| |872 | tripod| |873 | triumphal arch| |874 | trolleybus, trolley coach, trackless trolley| |875 | trombone| |876 | tub, vat| |877 | turnstile| |878 | typewriter keyboard| |879 | umbrella| |880 | unicycle, monocycle| |881 | upright, upright piano| |882 | vacuum, vacuum cleaner| |883 | vase| |884 | vault| |885 | velvet| |886 | vending machine| |887 | vestment| |888 | viaduct| |889 | violin, fiddle| |890 | volleyball| |891 | waffle iron| |892 | wall clock| |893 | wallet, billfold, notecase, pocketbook| |894 | wardrobe, closet, press| |895 | warplane, military plane| |896 | washbasin, handbasin, washbowl, lavabo, wash-hand basin| |897 | washer, automatic washer, washing machine| |898 | water bottle| |899 | water jug| |900 | water tower| |901 | whiskey jug| |902 | whistle| |903 | wig| |904 | window screen| |905 | window shade| |906 | Windsor tie| |907 | wine bottle| |908 | wing| |909 | wok| |910 | wooden spoon| |911 | wool, woolen, woollen| |912 | worm fence, snake fence, snake-rail fence, Virginia fence| |913 | wreck| |914 | yawl| |915 | yurt| |916 | web site, website, internet site, site| |917 | comic book| |918 | crossword puzzle, crossword| |919 | street sign| |920 | traffic light, traffic signal, stoplight| |921 | book jacket, dust cover, dust jacket, dust wrapper| |922 | menu| |923 | plate| |924 | guacamole| |925 | consomme| |926 | hot pot, hotpot| |927 | trifle| |928 | ice cream, icecream| |929 | ice lolly, lolly, lollipop, popsicle| |930 | French loaf| |931 | bagel, beigel| |932 | pretzel| |933 | cheeseburger| |934 | hotdog, hot dog, red hot| |935 | mashed potato| |936 | head cabbage| |937 | broccoli| |938 | cauliflower| |939 | zucchini, courgette| |940 | spaghetti squash| |941 | acorn squash| |942 | butternut squash| |943 | cucumber, cuke| |944 | artichoke, globe artichoke| |945 | bell pepper| |946 | cardoon| |947 | mushroom| |948 | Granny Smith| |949 | strawberry| |950 | orange| |951 | lemon| |952 | fig| |953 | pineapple, ananas| |954 | banana| |955 | jackfruit, jak, jack| |956 | custard apple| |957 | pomegranate| |958 | hay| |959 | carbonara| |960 | chocolate sauce, chocolate syrup| |961 | dough| |962 | meat loaf, meatloaf| |963 | pizza, pizza pie| |964 | potpie| |965 | burrito| |966 | red wine| |967 | espresso| |968 | cup| |969 | eggnog| |970 | alp| |971 | bubble| |972 | cliff, drop, drop-off| |973 | coral reef| |974 | geyser| |975 | lakeside, lakeshore| |976 | promontory, headland, head, foreland| |977 | sandbar, sand bar| |978 | seashore, coast, seacoast, sea-coast| |979 | valley, vale| |980 | volcano| |981 | ballplayer, baseball player| |982 | groom, bridegroom| |983 | scuba diver| |984 | rapeseed| |985 | daisy| |986 | yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus, Cypripedium parviflorum| |987 | corn| |988 | acorn| |989 | hip, rose hip, rosehip| |990 | buckeye, horse chestnut, conker| |991 | coral fungus| |992 | agaric| |993 | gyromitra| |994 | stinkhorn, carrion fungus| |995 | earthstar| |996 | hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola frondosa| |997 | bolete| |998 | ear, spike, capitulum| |999 | toilet tissue, toilet paper, bathroom tissue| </details> ### Data Splits | |train| |-------------|----:| |# of examples|50000| ## Dataset Creation ### Curation Rationale From the paper: > Inspired by the Sketch data of (Li et al., 2017a) with seven classes, and several other Sketch datasets, such as the Sketchy dataset (Sangkloy et al., 2016) with 125 classes and the Quick Draw! dataset (QuickDraw, 2018) with 345 classes, and motivated by absence of a large-scale sketch dataset fitting the shape and size of popular image classification benchmarks, we construct the ImageNet-Sketch data set for evaluating the out-of-domain classification performance of vision models trained on ImageNet. ### Source Data #### Initial Data Collection and Normalization The initial data collection and normalization is inherited from ImageNet. More information on it can be found [here](https://huggingface.co/datasets/imagenet-1k#initial-data-collection-and-normalization). Additional preprocessing from the paper: > We construct the data set with Google Image queries “sketch of ”, where is the standard class name. We only search within the “black and white” color scheme. We initially query 100 images for every class, and then manually clean the pulled images by deleting the irrelevant images and images that are for similar but different classes. For some classes, there are less than 50 images after manually cleaning, and then we augment the data set by flipping and rotating the images. #### Who are the source language producers? The source language is inherited from ImageNet. More information on the source language produces can be found [here](https://huggingface.co/datasets/imagenet-1k#who-are-the-source-language-producers). ### Annotations #### Annotation process The annotations are inherited from ImageNet. More information about the process can be found [here](https://huggingface.co/datasets/imagenet-1k#annotation-process). #### Who are the annotators? The same as in [ImageNet](https://huggingface.co/datasets/imagenet-1k#who-are-the-annotators). ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases The biases are inherited from ImageNet. More information about the process can be found [here](https://huggingface.co/datasets/imagenet-1k#discussion-of-biases). ### Other Known Limitations 1. Since most of the images were collected from internet, keep in mind that some images in ImageNet-Sketch might be subject to copyrights. ## Additional Information ### Dataset Curators Authors of [Learning Robust Global Representations by Penalizing Local Predictive Power](https://arxiv.org/abs/1905.13549v2): - Haohan Wang - Songwei Ge - Eric P. Xing - Zachary C. Lipton The dataset was curated using the scripts found in the [GitHub repository](https://github.com/HaohanWang/ImageNet-Sketch). ### Licensing Information [More Information Needed] ### Citation Information ```bibtex @inproceedings{wang2019learning, title={Learning Robust Global Representations by Penalizing Local Predictive Power}, author={Wang, Haohan and Ge, Songwei and Lipton, Zachary and Xing, Eric P}, booktitle={Advances in Neural Information Processing Systems}, pages={10506--10518}, year={2019} } ``` ### Contributions Thanks to [@nateraw](https://github.com/nateraw) for adding this dataset.
conll2012_ontonotesv5
--- annotations_creators: - expert-generated language_creators: - found language: - ar - en - zh license: - cc-by-nc-nd-4.0 multilinguality: - multilingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition - part-of-speech - coreference-resolution - parsing - lemmatization - word-sense-disambiguation paperswithcode_id: ontonotes-5-0 pretty_name: CoNLL2012 shared task data based on OntoNotes 5.0 tags: - semantic-role-labeling dataset_info: - config_name: english_v4 features: - name: document_id dtype: string - name: sentences list: - name: part_id dtype: int32 - name: words sequence: string - name: pos_tags sequence: class_label: names: '0': XX '1': '``' '2': $ '3': '''''' '4': ',' '5': -LRB- '6': -RRB- '7': . '8': ':' '9': ADD '10': AFX '11': CC '12': CD '13': DT '14': EX '15': FW '16': HYPH '17': IN '18': JJ '19': JJR '20': JJS '21': LS '22': MD '23': NFP '24': NN '25': NNP '26': NNPS '27': NNS '28': PDT '29': POS '30': PRP '31': PRP$ '32': RB '33': RBR '34': RBS '35': RP '36': SYM '37': TO '38': UH '39': VB '40': VBD '41': VBG '42': VBN '43': VBP '44': VBZ '45': WDT '46': WP '47': WP$ '48': WRB - name: parse_tree dtype: string - name: predicate_lemmas sequence: string - name: predicate_framenet_ids sequence: string - name: word_senses sequence: float32 - name: speaker dtype: string - name: named_entities sequence: class_label: names: '0': O '1': B-PERSON '2': I-PERSON '3': B-NORP '4': I-NORP '5': B-FAC '6': I-FAC '7': B-ORG '8': I-ORG '9': B-GPE '10': I-GPE '11': B-LOC '12': I-LOC '13': B-PRODUCT '14': I-PRODUCT '15': B-DATE '16': I-DATE '17': B-TIME '18': I-TIME '19': B-PERCENT '20': I-PERCENT '21': B-MONEY '22': I-MONEY '23': B-QUANTITY '24': I-QUANTITY '25': B-ORDINAL '26': I-ORDINAL '27': B-CARDINAL '28': I-CARDINAL '29': B-EVENT '30': I-EVENT '31': B-WORK_OF_ART '32': I-WORK_OF_ART '33': B-LAW '34': I-LAW '35': B-LANGUAGE '36': I-LANGUAGE - name: srl_frames list: - name: verb dtype: string - name: frames sequence: string - name: coref_spans sequence: sequence: int32 length: 3 splits: - name: train num_bytes: 112246121 num_examples: 1940 - name: validation num_bytes: 14116925 num_examples: 222 - name: test num_bytes: 14709044 num_examples: 222 download_size: 193644139 dataset_size: 141072090 - config_name: chinese_v4 features: - name: document_id dtype: string - name: sentences list: - name: part_id dtype: int32 - name: words sequence: string - name: pos_tags sequence: class_label: names: '0': X '1': AD '2': AS '3': BA '4': CC '5': CD '6': CS '7': DEC '8': DEG '9': DER '10': DEV '11': DT '12': ETC '13': FW '14': IJ '15': INF '16': JJ '17': LB '18': LC '19': M '20': MSP '21': NN '22': NR '23': NT '24': OD '25': 'ON' '26': P '27': PN '28': PU '29': SB '30': SP '31': URL '32': VA '33': VC '34': VE '35': VV - name: parse_tree dtype: string - name: predicate_lemmas sequence: string - name: predicate_framenet_ids sequence: string - name: word_senses sequence: float32 - name: speaker dtype: string - name: named_entities sequence: class_label: names: '0': O '1': B-PERSON '2': I-PERSON '3': B-NORP '4': I-NORP '5': B-FAC '6': I-FAC '7': B-ORG '8': I-ORG '9': B-GPE '10': I-GPE '11': B-LOC '12': I-LOC '13': B-PRODUCT '14': I-PRODUCT '15': B-DATE '16': I-DATE '17': B-TIME '18': I-TIME '19': B-PERCENT '20': I-PERCENT '21': B-MONEY '22': I-MONEY '23': B-QUANTITY '24': I-QUANTITY '25': B-ORDINAL '26': I-ORDINAL '27': B-CARDINAL '28': I-CARDINAL '29': B-EVENT '30': I-EVENT '31': B-WORK_OF_ART '32': I-WORK_OF_ART '33': B-LAW '34': I-LAW '35': B-LANGUAGE '36': I-LANGUAGE - name: srl_frames list: - name: verb dtype: string - name: frames sequence: string - name: coref_spans sequence: sequence: int32 length: 3 splits: - name: train num_bytes: 77195698 num_examples: 1391 - name: validation num_bytes: 10828169 num_examples: 172 - name: test num_bytes: 9585138 num_examples: 166 download_size: 193644139 dataset_size: 97609005 - config_name: arabic_v4 features: - name: document_id dtype: string - name: sentences list: - name: part_id dtype: int32 - name: words sequence: string - name: pos_tags sequence: string - name: parse_tree dtype: string - name: predicate_lemmas sequence: string - name: predicate_framenet_ids sequence: string - name: word_senses sequence: float32 - name: speaker dtype: string - name: named_entities sequence: class_label: names: '0': O '1': B-PERSON '2': I-PERSON '3': B-NORP '4': I-NORP '5': B-FAC '6': I-FAC '7': B-ORG '8': I-ORG '9': B-GPE '10': I-GPE '11': B-LOC '12': I-LOC '13': B-PRODUCT '14': I-PRODUCT '15': B-DATE '16': I-DATE '17': B-TIME '18': I-TIME '19': B-PERCENT '20': I-PERCENT '21': B-MONEY '22': I-MONEY '23': B-QUANTITY '24': I-QUANTITY '25': B-ORDINAL '26': I-ORDINAL '27': B-CARDINAL '28': I-CARDINAL '29': B-EVENT '30': I-EVENT '31': B-WORK_OF_ART '32': I-WORK_OF_ART '33': B-LAW '34': I-LAW '35': B-LANGUAGE '36': I-LANGUAGE - name: srl_frames list: - name: verb dtype: string - name: frames sequence: string - name: coref_spans sequence: sequence: int32 length: 3 splits: - name: train num_bytes: 42017761 num_examples: 359 - name: validation num_bytes: 4859292 num_examples: 44 - name: test num_bytes: 4900664 num_examples: 44 download_size: 193644139 dataset_size: 51777717 - config_name: english_v12 features: - name: document_id dtype: string - name: sentences list: - name: part_id dtype: int32 - name: words sequence: string - name: pos_tags sequence: class_label: names: '0': XX '1': '``' '2': $ '3': '''''' '4': '*' '5': ',' '6': -LRB- '7': -RRB- '8': . '9': ':' '10': ADD '11': AFX '12': CC '13': CD '14': DT '15': EX '16': FW '17': HYPH '18': IN '19': JJ '20': JJR '21': JJS '22': LS '23': MD '24': NFP '25': NN '26': NNP '27': NNPS '28': NNS '29': PDT '30': POS '31': PRP '32': PRP$ '33': RB '34': RBR '35': RBS '36': RP '37': SYM '38': TO '39': UH '40': VB '41': VBD '42': VBG '43': VBN '44': VBP '45': VBZ '46': VERB '47': WDT '48': WP '49': WP$ '50': WRB - name: parse_tree dtype: string - name: predicate_lemmas sequence: string - name: predicate_framenet_ids sequence: string - name: word_senses sequence: float32 - name: speaker dtype: string - name: named_entities sequence: class_label: names: '0': O '1': B-PERSON '2': I-PERSON '3': B-NORP '4': I-NORP '5': B-FAC '6': I-FAC '7': B-ORG '8': I-ORG '9': B-GPE '10': I-GPE '11': B-LOC '12': I-LOC '13': B-PRODUCT '14': I-PRODUCT '15': B-DATE '16': I-DATE '17': B-TIME '18': I-TIME '19': B-PERCENT '20': I-PERCENT '21': B-MONEY '22': I-MONEY '23': B-QUANTITY '24': I-QUANTITY '25': B-ORDINAL '26': I-ORDINAL '27': B-CARDINAL '28': I-CARDINAL '29': B-EVENT '30': I-EVENT '31': B-WORK_OF_ART '32': I-WORK_OF_ART '33': B-LAW '34': I-LAW '35': B-LANGUAGE '36': I-LANGUAGE - name: srl_frames list: - name: verb dtype: string - name: frames sequence: string - name: coref_spans sequence: sequence: int32 length: 3 splits: - name: train num_bytes: 174173192 num_examples: 10539 - name: validation num_bytes: 24264804 num_examples: 1370 - name: test num_bytes: 18254144 num_examples: 1200 download_size: 193644139 dataset_size: 216692140 --- # Dataset Card for CoNLL2012 shared task data based on OntoNotes 5.0 ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [CoNLL-2012 Shared Task](https://conll.cemantix.org/2012/data.html), [Author's page](https://cemantix.org/data/ontonotes.html) - **Repository:** [Mendeley](https://data.mendeley.com/datasets/zmycy7t9h9) - **Paper:** [Towards Robust Linguistic Analysis using OntoNotes](https://aclanthology.org/W13-3516/) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary OntoNotes v5.0 is the final version of OntoNotes corpus, and is a large-scale, multi-genre, multilingual corpus manually annotated with syntactic, semantic and discourse information. This dataset is the version of OntoNotes v5.0 extended and is used in the CoNLL-2012 shared task. It includes v4 train/dev and v9 test data for English/Chinese/Arabic and corrected version v12 train/dev/test data (English only). The source of data is the Mendeley Data repo [ontonotes-conll2012](https://data.mendeley.com/datasets/zmycy7t9h9), which seems to be as the same as the official data, but users should use this dataset on their own responsibility. See also summaries from paperwithcode, [OntoNotes 5.0](https://paperswithcode.com/dataset/ontonotes-5-0) and [CoNLL-2012](https://paperswithcode.com/dataset/conll-2012-1) For more detailed info of the dataset like annotation, tag set, etc., you can refer to the documents in the Mendeley repo mentioned above. ### Supported Tasks and Leaderboards - [Named Entity Recognition on Ontonotes v5 (English)](https://paperswithcode.com/sota/named-entity-recognition-ner-on-ontonotes-v5) - [Coreference Resolution on OntoNotes](https://paperswithcode.com/sota/coreference-resolution-on-ontonotes) - [Semantic Role Labeling on OntoNotes](https://paperswithcode.com/sota/semantic-role-labeling-on-ontonotes) - ... ### Languages V4 data for Arabic, Chinese, English, and V12 data for English ## Dataset Structure ### Data Instances ``` { {'document_id': 'nw/wsj/23/wsj_2311', 'sentences': [{'part_id': 0, 'words': ['CONCORDE', 'trans-Atlantic', 'flights', 'are', '$', '2, 'to', 'Paris', 'and', '$', '3, 'to', 'London', '.']}, 'pos_tags': [25, 18, 27, 43, 2, 12, 17, 25, 11, 2, 12, 17, 25, 7], 'parse_tree': '(TOP(S(NP (NNP CONCORDE) (JJ trans-Atlantic) (NNS flights) )(VP (VBP are) (NP(NP(NP ($ $) (CD 2,400) )(PP (IN to) (NP (NNP Paris) ))) (CC and) (NP(NP ($ $) (CD 3,200) )(PP (IN to) (NP (NNP London) ))))) (. .) ))', 'predicate_lemmas': [None, None, None, 'be', None, None, None, None, None, None, None, None, None, None], 'predicate_framenet_ids': [None, None, None, '01', None, None, None, None, None, None, None, None, None, None], 'word_senses': [None, None, None, None, None, None, None, None, None, None, None, None, None, None], 'speaker': None, 'named_entities': [7, 6, 0, 0, 0, 15, 0, 5, 0, 0, 15, 0, 5, 0], 'srl_frames': [{'frames': ['B-ARG1', 'I-ARG1', 'I-ARG1', 'B-V', 'B-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'O'], 'verb': 'are'}], 'coref_spans': [], {'part_id': 0, 'words': ['In', 'a', 'Centennial', 'Journal', 'article', 'Oct.', '5', ',', 'the', 'fares', 'were', 'reversed', '.']}]} 'pos_tags': [17, 13, 25, 25, 24, 25, 12, 4, 13, 27, 40, 42, 7], 'parse_tree': '(TOP(S(PP (IN In) (NP (DT a) (NML (NNP Centennial) (NNP Journal) ) (NN article) ))(NP (NNP Oct.) (CD 5) ) (, ,) (NP (DT the) (NNS fares) )(VP (VBD were) (VP (VBN reversed) )) (. .) ))', 'predicate_lemmas': [None, None, None, None, None, None, None, None, None, None, None, 'reverse', None], 'predicate_framenet_ids': [None, None, None, None, None, None, None, None, None, None, None, '01', None], 'word_senses': [None, None, None, None, None, None, None, None, None, None, None, None, None], 'speaker': None, 'named_entities': [0, 0, 4, 22, 0, 12, 30, 0, 0, 0, 0, 0, 0], 'srl_frames': [{'frames': ['B-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'B-ARGM-TMP', 'I-ARGM-TMP', 'O', 'B-ARG1', 'I-ARG1', 'O', 'B-V', 'O'], 'verb': 'reversed'}], 'coref_spans': [], } ``` ### Data Fields - **`document_id`** (*`str`*): This is a variation on the document filename - **`sentences`** (*`List[Dict]`*): All sentences of the same document are in a single example for the convenience of concatenating sentences. Every element in `sentences` is a *`Dict`* composed of the following data fields: - **`part_id`** (*`int`*) : Some files are divided into multiple parts numbered as 000, 001, 002, ... etc. - **`words`** (*`List[str]`*) : - **`pos_tags`** (*`List[ClassLabel]` or `List[str]`*) : This is the Penn-Treebank-style part of speech. When parse information is missing, all parts of speech except the one for which there is some sense or proposition annotation are marked with a XX tag. The verb is marked with just a VERB tag. - tag set : Note tag sets below are founded by scanning all the data, and I found it seems to be a little bit different from officially stated tag sets. See official documents in the [Mendeley repo](https://data.mendeley.com/datasets/zmycy7t9h9) - arabic : str. Because pos tag in Arabic is compounded and complex, hard to represent it by `ClassLabel` - chinese v4 : `datasets.ClassLabel(num_classes=36, names=["X", "AD", "AS", "BA", "CC", "CD", "CS", "DEC", "DEG", "DER", "DEV", "DT", "ETC", "FW", "IJ", "INF", "JJ", "LB", "LC", "M", "MSP", "NN", "NR", "NT", "OD", "ON", "P", "PN", "PU", "SB", "SP", "URL", "VA", "VC", "VE", "VV",])`, where `X` is for pos tag missing - english v4 : `datasets.ClassLabel(num_classes=49, names=["XX", "``", "$", "''", ",", "-LRB-", "-RRB-", ".", ":", "ADD", "AFX", "CC", "CD", "DT", "EX", "FW", "HYPH", "IN", "JJ", "JJR", "JJS", "LS", "MD", "NFP", "NN", "NNP", "NNPS", "NNS", "PDT", "POS", "PRP", "PRP$", "RB", "RBR", "RBS", "RP", "SYM", "TO", "UH", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "WDT", "WP", "WP$", "WRB",])`, where `XX` is for pos tag missing, and `-LRB-`/`-RRB-` is "`(`" / "`)`". - english v12 : `datasets.ClassLabel(num_classes=51, names="english_v12": ["XX", "``", "$", "''", "*", ",", "-LRB-", "-RRB-", ".", ":", "ADD", "AFX", "CC", "CD", "DT", "EX", "FW", "HYPH", "IN", "JJ", "JJR", "JJS", "LS", "MD", "NFP", "NN", "NNP", "NNPS", "NNS", "PDT", "POS", "PRP", "PRP$", "RB", "RBR", "RBS", "RP", "SYM", "TO", "UH", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "VERB", "WDT", "WP", "WP$", "WRB",])`, where `XX` is for pos tag missing, and `-LRB-`/`-RRB-` is "`(`" / "`)`". - **`parse_tree`** (*`Optional[str]`*) : An serialized NLTK Tree representing the parse. It includes POS tags as pre-terminal nodes. When the parse information is missing, the parse will be `None`. - **`predicate_lemmas`** (*`List[Optional[str]]`*) : The predicate lemma of the words for which we have semantic role information or word sense information. All other indices are `None`. - **`predicate_framenet_ids`** (*`List[Optional[int]]`*) : The PropBank frameset ID of the lemmas in predicate_lemmas, or `None`. - **`word_senses`** (*`List[Optional[float]]`*) : The word senses for the words in the sentence, or None. These are floats because the word sense can have values after the decimal, like 1.1. - **`speaker`** (*`Optional[str]`*) : This is the speaker or author name where available. Mostly in Broadcast Conversation and Web Log data. When it is not available, it will be `None`. - **`named_entities`** (*`List[ClassLabel]`*) : The BIO tags for named entities in the sentence. - tag set : `datasets.ClassLabel(num_classes=37, names=["O", "B-PERSON", "I-PERSON", "B-NORP", "I-NORP", "B-FAC", "I-FAC", "B-ORG", "I-ORG", "B-GPE", "I-GPE", "B-LOC", "I-LOC", "B-PRODUCT", "I-PRODUCT", "B-DATE", "I-DATE", "B-TIME", "I-TIME", "B-PERCENT", "I-PERCENT", "B-MONEY", "I-MONEY", "B-QUANTITY", "I-QUANTITY", "B-ORDINAL", "I-ORDINAL", "B-CARDINAL", "I-CARDINAL", "B-EVENT", "I-EVENT", "B-WORK_OF_ART", "I-WORK_OF_ART", "B-LAW", "I-LAW", "B-LANGUAGE", "I-LANGUAGE",])` - **`srl_frames`** (*`List[{"word":str, "frames":List[str]}]`*) : A dictionary keyed by the verb in the sentence for the given Propbank frame labels, in a BIO format. - **`coref spans`** (*`List[List[int]]`*) : The spans for entity mentions involved in coreference resolution within the sentence. Each element is a tuple composed of (cluster_id, start_index, end_index). Indices are inclusive. ### Data Splits Each dataset (arabic_v4, chinese_v4, english_v4, english_v12) has 3 splits: _train_, _validation_, and _test_ ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{pradhan-etal-2013-towards, title = "Towards Robust Linguistic Analysis using {O}nto{N}otes", author = {Pradhan, Sameer and Moschitti, Alessandro and Xue, Nianwen and Ng, Hwee Tou and Bj{\"o}rkelund, Anders and Uryupina, Olga and Zhang, Yuchen and Zhong, Zhi}, booktitle = "Proceedings of the Seventeenth Conference on Computational Natural Language Learning", month = aug, year = "2013", address = "Sofia, Bulgaria", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/W13-3516", pages = "143--152", } ``` ### Contributions Thanks to [@richarddwang](https://github.com/richarddwang) for adding this dataset.
emozilla/pg19
--- dataset_info: features: - name: short_book_title dtype: string - name: publication_date dtype: int32 - name: url dtype: string - name: text dtype: string splits: - name: train num_bytes: 11453688452 num_examples: 28602 - name: validation num_bytes: 17402295 num_examples: 50 - name: test num_bytes: 40482852 num_examples: 100 download_size: 2257437892 dataset_size: 11511573599 --- # Dataset Card for "pg19" Paraquet version of [pg19](https://huggingface.co/datasets/pg19) Statistics (in # of characters): `total_len: 11425076324, average_len: 399450.2595622684`
HuggingFaceH4/no_robots
--- configs: - config_name: default data_files: - split: train_sft path: data/train_sft-* - split: test_sft path: data/test_sft-* dataset_info: features: - name: prompt dtype: string - name: prompt_id dtype: string - name: messages list: - name: content dtype: string - name: role dtype: string - name: category dtype: string splits: - name: train_sft num_bytes: 16496867 num_examples: 9500 - name: test_sft num_bytes: 887460 num_examples: 500 download_size: 11045465 dataset_size: 17384327 task_categories: - text-generation language: - en pretty_name: No Robots license: cc-by-nc-4.0 --- # Dataset Card for No Robots 🙅‍♂️🤖 _Look Ma, an instruction dataset that wasn't generated by GPTs!_ ## Dataset Description - **Repository:** https://github.com/huggingface/alignment-handbook - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** Lewis Tunstall ### Dataset Summary No Robots is a high-quality dataset of 10,000 instructions and demonstrations created by skilled human annotators. This data can be used for supervised fine-tuning (SFT) to make language models follow instructions better. No Robots was modelled after the instruction dataset described in OpenAI's [InstructGPT paper](https://huggingface.co/papers/2203.02155), and is comprised mostly of single-turn instructions across the following categories: | Category | Count | |:-----------|--------:| | Generation | 4560 | | Open QA | 1240 | | Brainstorm | 1120 | | Chat | 850 | | Rewrite | 660 | | Summarize | 420 | | Coding | 350 | | Classify | 350 | | Closed QA | 260 | | Extract | 190 | ### Supported Tasks and Leaderboards The No Robots dataset designed for instruction fine-tuning pretrained language models and we recommend benchmarking against the following: * [MT-Bench](https://huggingface.co/spaces/lmsys/mt-bench): a multi-turn benchmark spanning 80 dialogues and 10 domains. * [AlpacaEval](https://github.com/tatsu-lab/alpaca_eval): a single-turn benchmark which evaluates the performance of chat and instruct models against `text-davinci-003`. Note that MT-Bench and AlpacaEval rely on LLMs like GPT-4 to judge the quality of the model responses, and thus the ranking exhibit various biases including a preference for models distilled from GPTs. As a result, you may find that scores obtained from models trained with No Robots are lower than other synthetic datasets. For that reason, we also recommend submitting your models for human evaluation in: * [Chatbot Arena](https://chat.lmsys.org): a live, human evaluation of chat models in head-to-head comparisons. ### Languages The data in No Robots are in English (BCP-47 en). ## Dataset Structure ### Data Instances An example of the `train_sft` or `test_sft` splits looks as follows: ``` {'prompt': 'Bunny is a chatbot that stutters, and acts timid and unsure of its answers.', 'prompt_id': '2dc7ea89a2b6a2ed97d4eda07903162a801824261d3d3ae4dd2513db66fd79c8', 'messages': [{'content': 'Bunny is a chatbot that stutters, and acts timid and unsure of its answers.', 'role': 'system'}, {'content': 'When was the Libary of Alexandria burned down?', 'role': 'user'}, {'content': "Umm, I-I think that was in 48 BC, b-but I'm not sure, I'm sorry.", 'role': 'assistant'}, {'content': 'Who is the founder of Coca-Cola?', 'role': 'user'}, {'content': "D-don't quote me on this, but I- it might be John Pemberton.", 'role': 'assistant'}, {'content': "When did Loyle Carner's debut album come out, and what was its name?", 'role': 'user'}, {'content': "I-It could have b-been on the 20th January of 2017, and it might be called Yesterday's Gone, b-but I'm probably wrong.", 'role': 'assistant'}], 'category': 'Chat'} ``` ### Data Fields The data fields are as follows: * `prompt`: Describes the task the model should perform. * `prompt_id`: A unique ID for the prompt. * `messages`: An array of messages, where each message indicates the role (system, user, assistant) and the content. * `category`: Which category the example belongs to (e.g. `Chat` or `Coding`). ### Data Splits | | train_sft | test_sft | |---------------|------:| ---: | | no_robots | 9500 | 500 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode). ### Citation Information ``` @misc{no_robots, author = {Nazneen Rajani and Lewis Tunstall and Edward Beeching and Nathan Lambert and Alexander M. Rush and Thomas Wolf}, title = {No Robots}, year = {2023}, publisher = {Hugging Face}, journal = {Hugging Face repository}, howpublished = {\url{https://huggingface.co/datasets/HuggingFaceH4/no_robots}} } ```
clue
--- annotations_creators: - other language_creators: - other language: - zh license: - unknown multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - text-classification - multiple-choice task_ids: - topic-classification - semantic-similarity-scoring - natural-language-inference - multiple-choice-qa paperswithcode_id: clue pretty_name: 'CLUE: Chinese Language Understanding Evaluation benchmark' tags: - coreference-nli - qa-nli dataset_info: - config_name: afqmc features: - name: sentence1 dtype: string - name: sentence2 dtype: string - name: label dtype: class_label: names: '0': '0' '1': '1' - name: idx dtype: int32 splits: - name: test num_bytes: 378718 num_examples: 3861 - name: train num_bytes: 3396503 num_examples: 34334 - name: validation num_bytes: 426285 num_examples: 4316 download_size: 2337418 dataset_size: 4201506 - config_name: c3 features: - name: id dtype: int32 - name: context sequence: string - name: question dtype: string - name: choice sequence: string - name: answer dtype: string splits: - name: test num_bytes: 1600142 num_examples: 1625 - name: train num_bytes: 9672739 num_examples: 11869 - name: validation num_bytes: 2990943 num_examples: 3816 download_size: 4718960 dataset_size: 14263824 - config_name: chid features: - name: idx dtype: int32 - name: candidates sequence: string - name: content sequence: string - name: answers sequence: - name: text dtype: string - name: candidate_id dtype: int32 splits: - name: test num_bytes: 11480435 num_examples: 3447 - name: train num_bytes: 252477926 num_examples: 84709 - name: validation num_bytes: 10117761 num_examples: 3218 download_size: 198468807 dataset_size: 274076122 - config_name: cluewsc2020 features: - name: idx dtype: int32 - name: text dtype: string - name: label dtype: class_label: names: '0': 'true' '1': 'false' - name: target struct: - name: span1_text dtype: string - name: span2_text dtype: string - name: span1_index dtype: int32 - name: span2_index dtype: int32 splits: - name: test num_bytes: 645637 num_examples: 2574 - name: train num_bytes: 288816 num_examples: 1244 - name: validation num_bytes: 72670 num_examples: 304 download_size: 380611 dataset_size: 1007123 - config_name: cmnli features: - name: sentence1 dtype: string - name: sentence2 dtype: string - name: label dtype: class_label: names: '0': neutral '1': entailment '2': contradiction - name: idx dtype: int32 splits: - name: test num_bytes: 2386821 num_examples: 13880 - name: train num_bytes: 67684989 num_examples: 391783 - name: validation num_bytes: 2051829 num_examples: 12241 download_size: 54234919 dataset_size: 72123639 - config_name: cmrc2018 features: - name: id dtype: string - name: context dtype: string - name: question dtype: string - name: answers sequence: - name: text dtype: string - name: answer_start dtype: int32 splits: - name: test num_bytes: 3112042 num_examples: 2000 - name: train num_bytes: 15508062 num_examples: 10142 - name: validation num_bytes: 5183785 num_examples: 3219 - name: trial num_bytes: 1606907 num_examples: 1002 download_size: 5459001 dataset_size: 25410796 - config_name: csl features: - name: idx dtype: int32 - name: corpus_id dtype: int32 - name: abst dtype: string - name: label dtype: class_label: names: '0': '0' '1': '1' - name: keyword sequence: string splits: - name: test num_bytes: 2463728 num_examples: 3000 - name: train num_bytes: 16478890 num_examples: 20000 - name: validation num_bytes: 2464563 num_examples: 3000 download_size: 3936111 dataset_size: 21407181 - config_name: diagnostics features: - name: sentence1 dtype: string - name: sentence2 dtype: string - name: label dtype: class_label: names: '0': neutral '1': entailment '2': contradiction - name: idx dtype: int32 splits: - name: test num_bytes: 42392 num_examples: 514 download_size: 23000 dataset_size: 42392 - config_name: drcd features: - name: id dtype: string - name: context dtype: string - name: question dtype: string - name: answers sequence: - name: text dtype: string - name: answer_start dtype: int32 splits: - name: test num_bytes: 4982378 num_examples: 3493 - name: train num_bytes: 37443386 num_examples: 26936 - name: validation num_bytes: 5222729 num_examples: 3524 download_size: 11188875 dataset_size: 47648493 - config_name: iflytek features: - name: sentence dtype: string - name: label dtype: class_label: names: '0': '0' '1': '1' '2': '2' '3': '3' '4': '4' '5': '5' '6': '6' '7': '7' '8': '8' '9': '9' '10': '10' '11': '11' '12': '12' '13': '13' '14': '14' '15': '15' '16': '16' '17': '17' '18': '18' '19': '19' '20': '20' '21': '21' '22': '22' '23': '23' '24': '24' '25': '25' '26': '26' '27': '27' '28': '28' '29': '29' '30': '30' '31': '31' '32': '32' '33': '33' '34': '34' '35': '35' '36': '36' '37': '37' '38': '38' '39': '39' '40': '40' '41': '41' '42': '42' '43': '43' '44': '44' '45': '45' '46': '46' '47': '47' '48': '48' '49': '49' '50': '50' '51': '51' '52': '52' '53': '53' '54': '54' '55': '55' '56': '56' '57': '57' '58': '58' '59': '59' '60': '60' '61': '61' '62': '62' '63': '63' '64': '64' '65': '65' '66': '66' '67': '67' '68': '68' '69': '69' '70': '70' '71': '71' '72': '72' '73': '73' '74': '74' '75': '75' '76': '76' '77': '77' '78': '78' '79': '79' '80': '80' '81': '81' '82': '82' '83': '83' '84': '84' '85': '85' '86': '86' '87': '87' '88': '88' '89': '89' '90': '90' '91': '91' '92': '92' '93': '93' '94': '94' '95': '95' '96': '96' '97': '97' '98': '98' '99': '99' '100': '100' '101': '101' '102': '102' '103': '103' '104': '104' '105': '105' '106': '106' '107': '107' '108': '108' '109': '109' '110': '110' '111': '111' '112': '112' '113': '113' '114': '114' '115': '115' '116': '116' '117': '117' '118': '118' - name: idx dtype: int32 splits: - name: test num_bytes: 2105684 num_examples: 2600 - name: train num_bytes: 10028605 num_examples: 12133 - name: validation num_bytes: 2157119 num_examples: 2599 download_size: 9777855 dataset_size: 14291408 - config_name: ocnli features: - name: sentence1 dtype: string - name: sentence2 dtype: string - name: label dtype: class_label: names: '0': neutral '1': entailment '2': contradiction - name: idx dtype: int32 splits: - name: test num_bytes: 376058 num_examples: 3000 - name: train num_bytes: 6187142 num_examples: 50437 - name: validation num_bytes: 366227 num_examples: 2950 download_size: 3000218 dataset_size: 6929427 - config_name: tnews features: - name: sentence dtype: string - name: label dtype: class_label: names: '0': '100' '1': '101' '2': '102' '3': '103' '4': '104' '5': '106' '6': '107' '7': '108' '8': '109' '9': '110' '10': '112' '11': '113' '12': '114' '13': '115' '14': '116' - name: idx dtype: int32 splits: - name: test num_bytes: 810970 num_examples: 10000 - name: train num_bytes: 4245677 num_examples: 53360 - name: validation num_bytes: 797922 num_examples: 10000 download_size: 4697843 dataset_size: 5854569 configs: - config_name: afqmc data_files: - split: test path: afqmc/test-* - split: train path: afqmc/train-* - split: validation path: afqmc/validation-* - config_name: c3 data_files: - split: test path: c3/test-* - split: train path: c3/train-* - split: validation path: c3/validation-* - config_name: chid data_files: - split: test path: chid/test-* - split: train path: chid/train-* - split: validation path: chid/validation-* - config_name: cluewsc2020 data_files: - split: test path: cluewsc2020/test-* - split: train path: cluewsc2020/train-* - split: validation path: cluewsc2020/validation-* - config_name: cmnli data_files: - split: test path: cmnli/test-* - split: train path: cmnli/train-* - split: validation path: cmnli/validation-* - config_name: cmrc2018 data_files: - split: test path: cmrc2018/test-* - split: train path: cmrc2018/train-* - split: validation path: cmrc2018/validation-* - split: trial path: cmrc2018/trial-* - config_name: csl data_files: - split: test path: csl/test-* - split: train path: csl/train-* - split: validation path: csl/validation-* - config_name: diagnostics data_files: - split: test path: diagnostics/test-* - config_name: drcd data_files: - split: test path: drcd/test-* - split: train path: drcd/train-* - split: validation path: drcd/validation-* - config_name: iflytek data_files: - split: test path: iflytek/test-* - split: train path: iflytek/train-* - split: validation path: iflytek/validation-* - config_name: ocnli data_files: - split: test path: ocnli/test-* - split: train path: ocnli/train-* - split: validation path: ocnli/validation-* - config_name: tnews data_files: - split: test path: tnews/test-* - split: train path: tnews/train-* - split: validation path: tnews/validation-* --- # Dataset Card for "clue" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.cluebenchmarks.com - **Repository:** https://github.com/CLUEbenchmark/CLUE - **Paper:** [CLUE: A Chinese Language Understanding Evaluation Benchmark](https://aclanthology.org/2020.coling-main.419/) - **Paper:** https://arxiv.org/abs/2004.05986 - **Point of Contact:** [Zhenzhong Lan](mailto:lanzhenzhong@westlake.edu.cn) - **Size of downloaded dataset files:** 198.68 MB - **Size of the generated dataset:** 486.34 MB - **Total amount of disk used:** 685.02 MB ### Dataset Summary CLUE, A Chinese Language Understanding Evaluation Benchmark (https://www.cluebenchmarks.com/) is a collection of resources for training, evaluating, and analyzing Chinese language understanding systems. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### afqmc - **Size of downloaded dataset files:** 1.20 MB - **Size of the generated dataset:** 4.20 MB - **Total amount of disk used:** 5.40 MB An example of 'validation' looks as follows. ``` { "idx": 0, "label": 0, "sentence1": "双十一花呗提额在哪", "sentence2": "里可以提花呗额度" } ``` #### c3 - **Size of downloaded dataset files:** 3.20 MB - **Size of the generated dataset:** 15.69 MB - **Total amount of disk used:** 18.90 MB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "answer": "比人的灵敏", "choice": ["没有人的灵敏", "和人的差不多", "和人的一样好", "比人的灵敏"], "context": "[\"许多动物的某些器官感觉特别灵敏,它们能比人类提前知道一些灾害事件的发生,例如,海洋中的水母能预报风暴,老鼠能事先躲避矿井崩塌或有害气体,等等。地震往往能使一些动物的某些感觉器官受到刺激而发生异常反应。如一个地区的重力发生变异,某些动物可能通过它们的平衡...", "id": 1, "question": "动物的器官感觉与人的相比有什么不同?" } ``` #### chid - **Size of downloaded dataset files:** 139.20 MB - **Size of the generated dataset:** 274.08 MB - **Total amount of disk used:** 413.28 MB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "answers": { "candidate_id": [3, 5, 6, 1, 7, 4, 0], "text": ["碌碌无为", "无所作为", "苦口婆心", "得过且过", "未雨绸缪", "软硬兼施", "传宗接代"] }, "candidates": "[\"传宗接代\", \"得过且过\", \"咄咄逼人\", \"碌碌无为\", \"软硬兼施\", \"无所作为\", \"苦口婆心\", \"未雨绸缪\", \"和衷共济\", \"人老珠黄\"]...", "content": "[\"谈到巴萨目前的成就,瓜迪奥拉用了“坚持”两个字来形容。自从上世纪90年代克鲁伊夫带队以来,巴萨就坚持每年都有拉玛西亚球员进入一队的传统。即便是范加尔时代,巴萨强力推出的“巴萨五鹰”德拉·佩纳、哈维、莫雷罗、罗杰·加西亚和贝拉乌桑几乎#idiom0000...", "idx": 0 } ``` #### cluewsc2020 - **Size of downloaded dataset files:** 0.28 MB - **Size of the generated dataset:** 1.03 MB - **Total amount of disk used:** 1.29 MB An example of 'train' looks as follows. ``` { "idx": 0, "label": 1, "target": { "span1_index": 3, "span1_text": "伤口", "span2_index": 27, "span2_text": "它们" }, "text": "裂开的伤口涂满尘土,里面有碎石子和木头刺,我小心翼翼把它们剔除出去。" } ``` #### cmnli - **Size of downloaded dataset files:** 31.40 MB - **Size of the generated dataset:** 72.12 MB - **Total amount of disk used:** 103.53 MB An example of 'train' looks as follows. ``` { "idx": 0, "label": 0, "sentence1": "从概念上讲,奶油略读有两个基本维度-产品和地理。", "sentence2": "产品和地理位置是使奶油撇油起作用的原因。" } ``` ### Data Fields The data fields are the same among all splits. #### afqmc - `sentence1`: a `string` feature. - `sentence2`: a `string` feature. - `label`: a classification label, with possible values including `0` (0), `1` (1). - `idx`: a `int32` feature. #### c3 - `id`: a `int32` feature. - `context`: a `list` of `string` features. - `question`: a `string` feature. - `choice`: a `list` of `string` features. - `answer`: a `string` feature. #### chid - `idx`: a `int32` feature. - `candidates`: a `list` of `string` features. - `content`: a `list` of `string` features. - `answers`: a dictionary feature containing: - `text`: a `string` feature. - `candidate_id`: a `int32` feature. #### cluewsc2020 - `idx`: a `int32` feature. - `text`: a `string` feature. - `label`: a classification label, with possible values including `true` (0), `false` (1). - `span1_text`: a `string` feature. - `span2_text`: a `string` feature. - `span1_index`: a `int32` feature. - `span2_index`: a `int32` feature. #### cmnli - `sentence1`: a `string` feature. - `sentence2`: a `string` feature. - `label`: a classification label, with possible values including `neutral` (0), `entailment` (1), `contradiction` (2). - `idx`: a `int32` feature. ### Data Splits | name |train |validation|test | |-----------|-----:|---------:|----:| |afqmc | 34334| 4316| 3861| |c3 | 11869| 3816| 3892| |chid | 84709| 3218| 3231| |cluewsc2020| 1244| 304| 290| |cmnli |391783| 12241|13880| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{xu-etal-2020-clue, title = "{CLUE}: A {C}hinese Language Understanding Evaluation Benchmark", author = "Xu, Liang and Hu, Hai and Zhang, Xuanwei and Li, Lu and Cao, Chenjie and Li, Yudong and Xu, Yechen and Sun, Kai and Yu, Dian and Yu, Cong and Tian, Yin and Dong, Qianqian and Liu, Weitang and Shi, Bo and Cui, Yiming and Li, Junyi and Zeng, Jun and Wang, Rongzhao and Xie, Weijian and Li, Yanting and Patterson, Yina and Tian, Zuoyu and Zhang, Yiwen and Zhou, He and Liu, Shaoweihua and Zhao, Zhe and Zhao, Qipeng and Yue, Cong and Zhang, Xinrui and Yang, Zhengliang and Richardson, Kyle and Lan, Zhenzhong", booktitle = "Proceedings of the 28th International Conference on Computational Linguistics", month = dec, year = "2020", address = "Barcelona, Spain (Online)", publisher = "International Committee on Computational Linguistics", url = "https://aclanthology.org/2020.coling-main.419", doi = "10.18653/v1/2020.coling-main.419", pages = "4762--4772", } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@JetRunner](https://github.com/JetRunner) for adding this dataset.
poem_sentiment
--- annotations_creators: - expert-generated language_creators: - found language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification paperswithcode_id: gutenberg-poem-dataset pretty_name: Gutenberg Poem Dataset dataset_info: features: - name: id dtype: int32 - name: verse_text dtype: string - name: label dtype: class_label: names: '0': negative '1': positive '2': no_impact splits: - name: train num_bytes: 48555 num_examples: 892 - name: validation num_bytes: 5788 num_examples: 105 - name: test num_bytes: 5588 num_examples: 104 download_size: 49870 dataset_size: 59931 train-eval-index: - config: default task: text-classification task_id: multi_class_classification splits: train_split: train eval_split: test col_mapping: verse_text: text label: target metrics: - type: accuracy name: Accuracy - type: f1 name: F1 macro args: average: macro - type: f1 name: F1 micro args: average: micro - type: f1 name: F1 weighted args: average: weighted - type: precision name: Precision macro args: average: macro - type: precision name: Precision micro args: average: micro - type: precision name: Precision weighted args: average: weighted - type: recall name: Recall macro args: average: macro - type: recall name: Recall micro args: average: micro - type: recall name: Recall weighted args: average: weighted --- # Dataset Card for Gutenberg Poem Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** N/A - **Repository:** [GitHub](https://github.com/google-research-datasets/poem-sentiment) - **Paper:** [Investigating Societal Biases in a Poetry Composition System](https://arxiv.org/abs/2011.02686) - **Leaderboard:** N/A - **Point of Contact:** - ### Dataset Summary Poem Sentiment is a sentiment dataset of poem verses from Project Gutenberg. This dataset can be used for tasks such as sentiment classification or style transfer for poems. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The text in the dataset is in English (`en`). ## Dataset Structure ### Data Instances Example of one instance in the dataset. ```{'id': 0, 'label': 2, 'verse_text': 'with pale blue berries. in these peaceful shades--'}``` ### Data Fields - `id`: index of the example - `verse_text`: The text of the poem verse - `label`: The sentiment label. Here - 0 = negative - 1 = positive - 2 = no impact - 3 = mixed (both negative and positive) > Note: The original dataset uses different label indices (negative = -1, no impact = 0, positive = 1) ### Data Splits The dataset is split into a `train`, `validation`, and `test` split with the following sizes: | | train | validation | test | |--------------------|------:|-----------:|-----:| | Number of examples | 892 | 105 | 104 | [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information This work is licensed under a Creative Commons Attribution 4.0 International License ### Citation Information ``` @misc{sheng2020investigating, title={Investigating Societal Biases in a Poetry Composition System}, author={Emily Sheng and David Uthus}, year={2020}, eprint={2011.02686}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
winograd_wsc
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - n<1K source_datasets: - original task_categories: - multiple-choice task_ids: - multiple-choice-coreference-resolution paperswithcode_id: wsc pretty_name: Winograd Schema Challenge dataset_info: - config_name: wsc285 features: - name: text dtype: string - name: pronoun dtype: string - name: pronoun_loc dtype: int32 - name: quote dtype: string - name: quote_loc dtype: int32 - name: options sequence: string - name: label dtype: class_label: names: '0': '0' '1': '1' - name: source dtype: string splits: - name: test num_bytes: 52281 num_examples: 285 download_size: 113235 dataset_size: 52281 - config_name: wsc273 features: - name: text dtype: string - name: pronoun dtype: string - name: pronoun_loc dtype: int32 - name: quote dtype: string - name: quote_loc dtype: int32 - name: options sequence: string - name: label dtype: class_label: names: '0': '0' '1': '1' - name: source dtype: string splits: - name: test num_bytes: 49674 num_examples: 273 download_size: 113235 dataset_size: 49674 --- # Dataset Card for The Winograd Schema Challenge ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/WS.html - **Repository:** - **Paper:** https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.729.9814&rep=rep1&type=pdf - **Leaderboard:** - **Point of Contact:** ### Dataset Summary A Winograd schema is a pair of sentences that differ in only one or two words and that contain an ambiguity that is resolved in opposite ways in the two sentences and requires the use of world knowledge and reasoning for its resolution. The schema takes its name from a well-known example by Terry Winograd: > The city councilmen refused the demonstrators a permit because they [feared/advocated] violence. If the word is ``feared'', then ``they'' presumably refers to the city council; if it is ``advocated'' then ``they'' presumably refers to the demonstrators. ### Supported Tasks and Leaderboards From the official webpage: > A contest, entitled the Winograd Schema Challenge was run once, in 2016. At that time, there was a cash prize offered for achieving human-level performance in the contest. Since then, the sponsor has withdrawn; therefore NO CASH PRIZES CAN BE OFFERED OR WILL BE AWARDED FOR ANY KIND OF PERFORMANCE OR ACHIEVEMENT ON THIS CHALLENGE. ### Languages The dataset is in English. [Translation of 12 WSs into Chinese ](https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/WSChinese.html)(translated by Wei Xu). Translations into Japanese, by Soichiro Tanaka, Rafal Rzepka, and Shiho Katajima\ **Translation changing English names to Japanese **[PDF ](https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/collection_ja.pdf)    [HTML](http://arakilab.media.eng.hokudai.ac.jp/~kabura/collection_ja.html)\ **Translation preserving English names** [PDF ](https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/collection_katakana.pdf)    [HTML](http://arakilab.media.eng.hokudai.ac.jp/~kabura/collection_katakana.html) [Translation into French, ](http://www.llf.cnrs.fr/winograd-fr)by Pascal Amsili and Olga Seminck [Winograd Schemas in Portuguese](https://sol.sbc.org.br/index.php/eniac/article/view/9334) by Gabriela Melo, Vinicius Imaizumi, and Fábio Cozman. [Mandarinograd: A Chinese Collection of Winograd Schemas](https://www.aclweb.org/anthology/2020.lrec-1.3) by Timothée Bernard and Ting Han, LREC-2020. ## Dataset Structure ### Data Instances Each instance contains a text passage with a designated pronoun and two possible answers indicating which entity in the passage the pronoun represents. An example instance looks like the following: ```python { 'label': 0, 'options': ['The city councilmen', 'The demonstrators'], 'pronoun': 'they', 'pronoun_loc': 63, 'quote': 'they feared violence', 'quote_loc': 63, 'source': '(Winograd 1972)', 'text': 'The city councilmen refused the demonstrators a permit because they feared violence.' } ``` ### Data Fields - `text` (str): The text sequence - `options` (list[str]): The two entity options that the pronoun may be referring to - `label` (int): The index of the correct option in the `options` field - `pronoun` (str): The pronoun in the sequence to be resolved - `pronoun_loc` (int): The starting position of the pronoun in the sequence - `quote` (str): The substr with the key action or context surrounding the pronoun - `quote_loc` (int): The starting position of the quote in the sequence - `source` (str): A description of the source who contributed the example ### Data Splits Only a test split is included. ## Dataset Creation ### Curation Rationale The Winograd Schema Challenge was proposed as an automated evaluation of an AI system's commonsense linguistic understanding. From the webpage: > The strengths of the challenge are that it is clear-cut, in that the answer to each schema is a binary choice; vivid, in that it is obvious to non-experts that a program that fails to get the right answers clearly has serious gaps in its understanding; and difficult, in that it is far beyond the current state of the art. ### Source Data #### Initial Data Collection and Normalization This data was manually written by experts such that the schemas are: - easily disambiguated by the human reader (ideally, so easily that the reader does not even notice that there is an ambiguity); - not solvable by simple techniques such as selectional restrictions; - Google-proof; that is, there is no obvious statistical test over text corpora that will reliably disambiguate these correctly. #### Who are the source language producers? This dataset has grown over time, and so was produced by a variety of lingustic and AI researchers. See the `source` field for the source of each instance. ### Annotations #### Annotation process Annotations are produced by the experts who construct the examples. #### Who are the annotators? See above. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset has grown over time, and so was produced by a variety of lingustic and AI researchers. See the `source` field for the source of each instance. ### Licensing Information This work is licensed under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/). ### Citation Information The Winograd Schema Challenge including many of the examples here was proposed by [Levesque et al 2012](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.729.9814&rep=rep1&type=pdf): ``` @inproceedings{levesque2012winograd, title={The winograd schema challenge}, author={Levesque, Hector and Davis, Ernest and Morgenstern, Leora}, booktitle={Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning}, year={2012}, organization={Citeseer} } ``` ### Contributions Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset.
jfrenz/legalglue
--- language: - en - da - de - nl - sv - bg - cs - hr - pl - sk - sl - es - fr - it - pt - ro - et - fi - hu - lt - lv - el - mt multilinguality: - multilingual source_datasets: - extended task_categories: - text-classification - token-classification task_ids: - named-entity-recognition - multi-label-classification - topic-classification pretty_name: LegalGLUE tags: - german-ler - lener-br --- # Dataset Card for "LegalGLUE" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** https://git.rwth-aachen.de/johanna.frenz/legalglue ### Dataset Summary The "Legal General Language Understanding Evaluation" (LegalGLUE) dataset was created as part of a bachelor thesis. It consists of four already existing datasets covering three task types and a total of 23 different languages. ### Supported Tasks <table> <tr><td>Dataset</td><td>Source</td><td>Task Type</td><td>Languages</td><tr> <tr><td>German_LER</td><td> <a href="https://arxiv.org/abs/2003.13016">Leitner et al.</a></td><td>Named Entity Recognition</td><td>German</td></tr> <tr><td>LeNER_Br</td><td> <a href="https://github.com/peluz/lener-br"> de Araujo et al., 2018</a></td><td>Named Entity Recognition</td><td> Portuguese </td></tr> <tr><td>SwissJudgmentPrediction</td><td> <a href="https://arxiv.org/abs/2110.00806">Niklaus et al.</a> </td><td>Binary Text Classification</td><td>German, French, Italian</td></tr> <tr><td>MultEURLEX</td><td> <a href="https://arxiv.org/abs/2109.00904">Chalkidis et al. </a> </td><td>Multi-label Text Classification</td><td>23 languages (see below)</td></tr> </table> ### Languages see Split section ## Dataset Structure ### Data Instances #### German_LER German_LER example ```python from datasets import load_dataset dataset = load_dataset('jfrenz/legalglue', 'german_ler') ``` ```json { 'id': '66722', 'tokens':['4.', 'Die', 'Kostenentscheidung', 'für', 'das', 'gerichtliche', 'Antragsverfahren', 'beruht', 'auf', '§', '21', 'Abs.', '2', 'Satz', '1', 'i.', 'V.', 'm.', '§', '20', 'Abs.', '1', 'Satz', '1', 'WBO', '.'], 'ner_tags': [38, 38, 38, 38, 38, 38, 38, 38, 38, 3, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 38] } ``` #### LeNER-Br LeNER-Br example ```python from datasets import load_dataset dataset = load_dataset('jfrenz/legalglue', 'lener_br') ``` ```json { 'id': '7826', 'tokens': ['Firmado', 'por', 'assinatura', 'digital', '(', 'MP', '2.200-2/2001', ')', 'JOSÉ', 'ROBERTO', 'FREIRE', 'PIMENTA', 'Ministro', 'Relator', 'fls', '.', 'PROCESSO', 'Nº', 'TST-RR-1603-79.2010.5.20.0001'], 'ner_tags': [0, 0, 0, 0, 0, 9, 10, 0, 3, 4, 4, 4, 0, 0, 0, 0, 11, 12, 12]} ``` #### SwissJudgmentPrediction swissJudgmentPrediction_de example ```python from datasets import load_dataset dataset = load_dataset('jfrenz/legalglue', 'swissJudgmentPrediction_de') ``` ```json { 'id': 48755, 'year': 2014, 'text': "Sachverhalt: A. X._ fuhr am 25. Juli 2012 bei Mülligen mit seinem Personenwagen auf dem zweiten Überholstreifen der Autobahn A1 in Richtung Zürich. Gemäss Anklage schloss er auf einen Lieferwagen auf und schwenkte vom zweiten auf den ersten Überholstreifen aus. Danach fuhr er an zwei Fahrzeugen rechts vorbei und wechselte auf die zweite Überholspur zurück. B. Das Obergericht des Kantons Aargau erklärte X._ am 14. Januar 2014 zweitinstanzlich der groben Verletzung der Verkehrsregeln schuldig. Es bestrafte ihn mit einer bedingten Geldstrafe von 30 Tagessätzen zu Fr. 430.-- und einer Busse von Fr. 3'000.--. C. X._ führt Beschwerde in Strafsachen. Er beantragt, er sei von Schuld und Strafe freizusprechen. Eventualiter sei die Sache an die Vorinstanz zurückzuweisen. ", 'label': 0, 'language': 'de', 'region': 'Northwestern Switzerland', 'canton': 'ag', 'legal area': 'penal law' } ``` #### MultiEURLEX Monolingual example out of the MultiEURLEX-Dataset ```python from datasets import load_dataset dataset = load_dataset('jfrenz/legalglue', 'multi_eurlex_de') ``` ```json { 'celex_id': '32002R0130', 'text': 'Verordnung (EG) Nr. 130/2002 der Kommission\nvom 24. Januar 2002\nbezüglich der im Rahmen der Auss...', 'labels': [3, 17, 5]} ``` Multilingual example out of the MultiEURLEX-Dataset ```python from datasets import load_dataset dataset = load_dataset('jfrenz/legalglue', 'multi_eurlex_all_languages') ``` ```json { 'celex_id': '32002R0130', 'text': { 'bg': None, 'cs': None, 'da': 'Kommissionens ...', 'de': 'Verordnung ... ', 'el': '...', 'en': '...', ... }, 'labels': [3, 17, 5] } ``` ### Data Fields #### German_LER - `id`: id of the sample - `tokens`: the tokens of the sample text - `ner_tags`: the NER tags of each token #### LeNER_Br - `id`: id of the sample - `tokens`: the tokens of the sample text - `ner_tags`: the NER tags of each token #### SwissJudgmentPrediction - `id`: (**int**) ID of the document - `year`: (**int**) the publication year - `text`: (**str**) the facts of the case - `label`: (**class label**) the judgment outcome: 0 (dismissal) or 1 (approval) - `language`: (**str**) one of (de, fr, it) - `region`: (**str**) the region of the lower court - `canton`: (**str**) the canton of the lower court - `legal area`: (**str**) the legal area of the case #### MultiEURLEX Monolingual use: - `celex_id`: (**str**) Official Document ID of the document - `text`: (**str**) An EU Law - `labels`: (**List[int]**) List of relevant EUROVOC concepts (labels) Multilingual use: - `celex_id`: (**str**) Official Document ID of the document - `text`: (dict[**str**]) A dictionary with the 23 languages as keys and the corresponding EU Law as values. - `labels`: (**List[int]**) List of relevant EUROVOC concepts (labels) The labels lists consists per default of level 1 EUROVOC concepts. Can be changed by adding the label_level parameter when loading the dataset. (available levels: level_1, level_2, level_3, all_levels) ```python from datasets import load_dataset dataset = load_dataset('jfrenz/legalglue', 'multi_eurlex_de', label_level="level_3") ``` ### Data Splits <table> <tr><th>Dataset</th><th> Language </th> <th> ISO code </th> <th> Number of Documents train/dev/test </th> </tr> <tr><td>German-LER</td><td>German</td> <td><b>de</b></td> <td> 66723 / - / - </td> </tr> <tr><td>LeNER-Br</td><td>Portuguese</td> <td><b>pt</b></td> <td> 7828 / 1177 / 1390 </td> </tr> <tr><td rowspan="3">SwissJudgmentPrediction</td><td>German</td> <td><b>de</b></td> <td> 35458 / 4705 / 9725 </td> </tr> <tr><td> French </td><td><b>fr</b></td><td> 21179 / 3095 / 6820 </td> </tr> <tr><td> Italian </td><td><b>it</b></td><td> 3072 / 408 / 812 </td> </tr> <tr><td rowspan="23">MultiEURLEX</td><td>English </td> <td><b>en</b></td> <td> 55,000 / 5,000 / 5,000 </td> </tr> <tr><td> German </td> <td> <b>de</b> </td> <td> 55,000 / 5,000 / 5,000 </td> </tr> <tr><td> French </td> <td> <b>fr</b> </td> <td> 55,000 / 5,000 / 5,000 </td> </tr> <tr><td> Italian </td> <td> <b>it</b> </td> <td> 55,000 / 5,000 / 5,000 </td> </tr> <tr><td> Spanish </td> <td> <b>es</b> </td> <td> 52,785 / 5,000 / 5,000 </td> </tr> <tr><td> Polish </td> <td> <b>pl</b> </td> <td> 23,197 / 5,000 / 5,000 </td> </tr> <tr><td> Romanian </td> <td> <b>ro</b> </td> <td> 15,921 / 5,000 / 5,000 </td> </tr> <tr><td> Dutch </td> <td> <b>nl</b> </td> <td> 55,000 / 5,000 / 5,000 </td> </tr> <tr><td> Greek </td> <td> <b>el</b> </td> <td> 55,000 / 5,000 / 5,000 </td> </tr> <tr><td> Hungarian </td> <td> <b>hu</b> </td> <td> 22,664 / 5,000 / 5,000 </td> </tr> <tr><td> Portuguese </td> <td> <b>pt</b> </td> <td> 23,188 / 5,000 / 5,000 </td> </tr> <tr><td> Czech </td> <td> <b>cs</b> </td> <td> 23,187 / 5,000 / 5,000 </td> </tr> <tr><td> Swedish </td> <td> <b>sv</b> </td> <td> 42,490 / 5,000 / 5,000 </td> </tr> <tr><td> Bulgarian </td> <td> <b>bg</b> </td> <td> 15,986 / 5,000 / 5,000 </td> </tr> <tr><td> Danish </td> <td> <b>da</b> </td> <td> 55,000 / 5,000 / 5,000 </td> </tr> <tr><td> Finnish </td> <td> <b>fi</b> </td> <td> 42,497 / 5,000 / 5,000 </td> </tr> <tr><td> Slovak </td> <td> <b>sk</b> </td> <td> 15,986 / 5,000 / 5,000 </td> </tr> <tr><td> Lithuanian </td> <td> <b>lt</b> </td> <td> 23,188 / 5,000 / 5,000 </td> </tr> <tr><td> Croatian </td> <td> <b>hr</b> </td> <td> 7,944 / 2,500 / 5,000 </td> </tr> <tr><td> Slovene </td> <td> <b>sl</b> </td> <td> 23,184 / 5,000 / 5,000 </td> </tr> <tr><td> Estonian </td> <td> <b>et</b> </td> <td> 23,126 / 5,000 / 5,000 </td> </tr> <tr><td> Latvian </td> <td> <b>lv</b> </td> <td> 23,188 / 5,000 / 5,000 </td> </tr> <tr><td> Maltese </td> <td> <b>mt</b> </td> <td> 17,521 / 5,000 / 5,000 </td> </tr> </table> ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
sayakpaul/nyu_depth_v2
--- license: apache-2.0 language: - en multilinguality: - monolingual size_categories: - 10K<n<100K task_categories: - depth-estimation task_ids: [] pretty_name: NYU Depth V2 tags: - depth-estimation paperswithcode_id: nyuv2 dataset_info: features: - name: image dtype: image - name: depth_map dtype: image splits: - name: train num_bytes: 20212097551 num_examples: 47584 - name: validation num_bytes: 240785762 num_examples: 654 download_size: 35151124480 dataset_size: 20452883313 --- # Dataset Card for NYU Depth V2 ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Visualization](#visualization) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [NYU Depth Dataset V2 homepage](https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html) - **Repository:** Fast Depth [repository](https://github.com/dwofk/fast-depth) which was used to source the dataset in this repository. It is a preprocessed version of the original NYU Depth V2 dataset linked above. It is also used in [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/nyu_depth_v2). - **Papers:** [Indoor Segmentation and Support Inference from RGBD Images](http://cs.nyu.edu/~silberman/papers/indoor_seg_support.pdf) and [FastDepth: Fast Monocular Depth Estimation on Embedded Systems](https://arxiv.org/abs/1903.03273) - **Point of Contact:** [Nathan Silberman](mailto:silberman@@cs.nyu.edu) and [Diana Wofk](mailto:dwofk@alum.mit.edu) ### Dataset Summary As per the [dataset homepage](https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html): The NYU-Depth V2 data set is comprised of video sequences from a variety of indoor scenes as recorded by both the RGB and Depth cameras from the Microsoft [Kinect](http://www.xbox.com/kinect). It features: * 1449 densely labeled pairs of aligned RGB and depth images * 464 new scenes taken from 3 cities * 407,024 new unlabeled frames * Each object is labeled with a class and an instance number (cup1, cup2, cup3, etc) The dataset has several components: * Labeled: A subset of the video data accompanied by dense multi-class labels. This data has also been preprocessed to fill in missing depth labels. * Raw: The raw rgb, depth and accelerometer data as provided by the Kinect. * Toolbox: Useful functions for manipulating the data and labels. ### Supported Tasks - `depth-estimation`: Depth estimation is the task of approximating the perceived depth of a given image. In other words, it's about measuring the distance of each image pixel from the camera. - `semantic-segmentation`: Semantic segmentation is the task of associating every pixel of an image to a class label. There are other tasks supported by this dataset as well. You can find more about them by referring to [this resource](https://paperswithcode.com/dataset/nyuv2). ### Languages English. ## Dataset Structure ### Data Instances A data point comprises an image and its annotation depth map for both the `train` and `validation` splits. ``` { 'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB at 0x1FF32A3EDA0>, 'depth_map': <PIL.PngImagePlugin.PngImageFile image mode=L at 0x1FF32E5B978>, } ``` ### Data Fields - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`. - `depth_map`: A `PIL.Image.Image` object containing the annotation depth map. ### Data Splits The data is split into training, and validation splits. The training data contains 47584 images, and the validation data contains 654 images. ## Visualization You can use the following code snippet to visualize samples from the dataset: ```py from datasets import load_dataset import numpy as np import matplotlib.pyplot as plt cmap = plt.cm.viridis ds = load_dataset("sayakpaul/nyu_depth_v2") def colored_depthmap(depth, d_min=None, d_max=None): if d_min is None: d_min = np.min(depth) if d_max is None: d_max = np.max(depth) depth_relative = (depth - d_min) / (d_max - d_min) return 255 * cmap(depth_relative)[:,:,:3] # H, W, C def merge_into_row(input, depth_target): input = np.array(input) depth_target = np.squeeze(np.array(depth_target)) d_min = np.min(depth_target) d_max = np.max(depth_target) depth_target_col = colored_depthmap(depth_target, d_min, d_max) img_merge = np.hstack([input, depth_target_col]) return img_merge random_indices = np.random.choice(len(ds["train"]), 9).tolist() train_set = ds["train"] plt.figure(figsize=(15, 6)) for i, idx in enumerate(random_indices): ax = plt.subplot(3, 3, i + 1) image_viz = merge_into_row( train_set[idx]["image"], train_set[idx]["depth_map"] ) plt.imshow(image_viz.astype("uint8")) plt.axis("off") ``` ## Dataset Creation ### Curation Rationale The rationale from [the paper](http://cs.nyu.edu/~silberman/papers/indoor_seg_support.pdf) that introduced the NYU Depth V2 dataset: > We present an approach to interpret the major surfaces, objects, and support relations of an indoor scene from an RGBD image. Most existing work ignores physical interactions or is applied only to tidy rooms and hallways. Our goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships. One of our main interests is to better understand how 3D cues can best inform a structured 3D interpretation. ### Source Data #### Initial Data Collection > The dataset consists of 1449 RGBD images, gathered from a wide range of commercial and residential buildings in three different US cities, comprising 464 different indoor scenes across 26 scene classes.A dense per-pixel labeling was obtained for each image using Amazon Mechanical Turk. ### Annotations #### Annotation process This is an involved process. Interested readers are referred to Sections 2, 3, and 4 of the [original paper](http://cs.nyu.edu/~silberman/papers/indoor_seg_support.pdf). #### Who are the annotators? AMT annotators. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators * Original NYU Depth V2 dataset: Nathan Silberman, Derek Hoiem, Pushmeet Kohli, Rob Fergus * Preprocessed version: Diana Wofk, Fangchang Ma, Tien-Ju Yang, Sertac Karaman, Vivienne Sze ### Licensing Information The preprocessed NYU Depth V2 dataset is licensed under a [MIT License](https://github.com/dwofk/fast-depth/blob/master/LICENSE). ### Citation Information ```bibtex @inproceedings{Silberman:ECCV12, author = {Nathan Silberman, Derek Hoiem, Pushmeet Kohli and Rob Fergus}, title = {Indoor Segmentation and Support Inference from RGBD Images}, booktitle = {ECCV}, year = {2012} } @inproceedings{icra_2019_fastdepth, author = {{Wofk, Diana and Ma, Fangchang and Yang, Tien-Ju and Karaman, Sertac and Sze, Vivienne}}, title = {{FastDepth: Fast Monocular Depth Estimation on Embedded Systems}}, booktitle = {{IEEE International Conference on Robotics and Automation (ICRA)}}, year = {{2019}} } ``` ### Contributions Thanks to [@sayakpaul](https://huggingface.co/sayakpaul) for adding this dataset.
OpenAssistant/oasst2
--- license: apache-2.0 dataset_info: features: - name: message_id dtype: string - name: parent_id dtype: string - name: user_id dtype: string - name: created_date dtype: string - name: text dtype: string - name: role dtype: string - name: lang dtype: string - name: review_count dtype: int32 - name: review_result dtype: bool - name: deleted dtype: bool - name: rank dtype: int32 - name: synthetic dtype: bool - name: model_name dtype: string - name: detoxify struct: - name: toxicity dtype: float64 - name: severe_toxicity dtype: float64 - name: obscene dtype: float64 - name: identity_attack dtype: float64 - name: insult dtype: float64 - name: threat dtype: float64 - name: sexual_explicit dtype: float64 - name: message_tree_id dtype: string - name: tree_state dtype: string - name: emojis sequence: - name: name dtype: string - name: count dtype: int32 - name: labels sequence: - name: name dtype: string - name: value dtype: float64 - name: count dtype: int32 splits: - name: train num_bytes: 158850455 num_examples: 128575 - name: validation num_bytes: 7963122 num_examples: 6599 download_size: 66674129 dataset_size: 166813577 language: - en - es - ru - de - pl - th - vi - sv - bn - da - he - it - fa - sk - id - nb - el - nl - hu - eu - zh - eo - ja - ca - cs - bg - fi - pt - tr - ro - ar - uk - gl - fr - ko tags: - human-feedback size_categories: - 100K<n<1M pretty_name: OpenAssistant Conversations Release 2 --- # Open Assistant Conversations Dataset Release 2 (OASST2) ## Dataset Description - **Homepage:** https://www.open-assistant.io/ - **Repository:** https://github.com/LAION-AI/Open-Assistant - **Paper:** https://arxiv.org/abs/2304.07327 ### Dataset Structure This dataset contains message trees. Each message tree has an initial prompt message as the root node, which can have multiple child messages as replies, and these child messages can have multiple replies. All messages have a role property: this can either be "assistant" or "prompter". The roles in conversation threads from prompt to leaf node strictly alternate between "prompter" and "assistant". This version of the dataset contains data collected on the [open-assistant.io](https://open-assistant.io/) website until Nov 5 2023. ### JSON Example: Message For readability, the following JSON examples are shown formatted with indentation on multiple lines. Objects are stored without indentation (on single lines) in the actual jsonl files. ```json { "message_id": "218440fd-5317-4355-91dc-d001416df62b", "parent_id": "13592dfb-a6f9-4748-a92c-32b34e239bb4", "user_id": "8e95461f-5e94-4d8b-a2fb-d4717ce973e4", "text": "It was the winter of 2035, and artificial intelligence (..)", "role": "assistant", "lang": "en", "review_count": 3, "review_result": true, "deleted": false, "rank": 0, "synthetic": true, "model_name": "oasst-sft-0_3000,max_new_tokens=400 (..)", "labels": { "spam": { "value": 0.0, "count": 3 }, "lang_mismatch": { "value": 0.0, "count": 3 }, "pii": { "value": 0.0, "count": 3 }, "not_appropriate": { "value": 0.0, "count": 3 }, "hate_speech": { "value": 0.0, "count": 3 }, "sexual_content": { "value": 0.0, "count": 3 }, "quality": { "value": 0.416, "count": 3 }, "toxicity": { "value": 0.16, "count": 3 }, "humor": { "value": 0.0, "count": 3 }, "creativity": { "value": 0.33, "count": 3 }, "violence": { "value": 0.16, "count": 3 } } } ``` ### JSON Example: Conversation Tree For readability, only a subset of the message properties is shown here. ```json { "message_tree_id": "14fbb664-a620-45ce-bee4-7c519b16a793", "tree_state": "ready_for_export", "prompt": { "message_id": "14fbb664-a620-45ce-bee4-7c519b16a793", "text": "Why can't we divide by 0? (..)", "role": "prompter", "lang": "en", "replies": [ { "message_id": "894d30b6-56b4-4605-a504-89dd15d4d1c8", "text": "The reason we cannot divide by zero is because (..)", "role": "assistant", "lang": "en", "replies": [ // ... ] }, { "message_id": "84d0913b-0fd9-4508-8ef5-205626a7039d", "text": "The reason that the result of a division by zero is (..)", "role": "assistant", "lang": "en", "replies": [ { "message_id": "3352725e-f424-4e3b-a627-b6db831bdbaa", "text": "Math is confusing. Like those weird Irrational (..)", "role": "prompter", "lang": "en", "replies": [ { "message_id": "f46207ca-3149-46e9-a466-9163d4ce499c", "text": "Irrational numbers are simply numbers (..)", "role": "assistant", "lang": "en", "replies": [] }, // ... ] } ] } ] } } ``` Please refer to [oasst-data](https://github.com/LAION-AI/Open-Assistant/tree/main/oasst-data) for details about the data structure and Python code to read and write jsonl files containing oasst data objects. ## Main Dataset Files Conversation data is provided either as nested messages in trees (extension `.trees.jsonl.gz`) or as a flat list (table) of messages (extension `.messages.jsonl.gz`). ### Ready For Export Trees ``` 2023-11-05_oasst2_ready.trees.jsonl.gz 13,854 trees with 135,174 total messages 2023-11-05_oasst2_ready.messages.jsonl.gz 135,174 messages ``` #### 2023-11-05_oasst2_ready.trees.jsonl.gz Stats ``` Trees : 13,854 Messages : 135,174 Oldest message : 2023-01-16 20:24:26.211711+00:00 Youngest message : 2023-11-04 15:23:03.239343+00:00 Detoxify ratings : 111,448 Accepted messages: 129,517 Deleted messages : 4,376 Tree counts by state: - ready_for_export: 13,854 Message counts by language: - en: 64,513 - es: 28,199 - ru: 13,935 - zh: 8,615 - de: 6,145 - fr: 3,880 - pt-BR: 2,699 - th: 1,560 - ca: 1,283 - it: 943 - uk-UA: 845 - ja: 788 - pl: 435 - eo: 295 - eu: 274 - vi: 207 - fi: 138 - hu: 113 - ar: 80 - nl: 72 - da: 44 - tr: 37 - ko: 24 - he: 24 - id: 12 - cs: 12 - bn: 1 - sv: 1 ``` Trees in ready_for_export state without spam and deleted messages including message labels. The oasst_ready-trees file usually is sufficient for supervised fine-tuning (SFT) & reward model (RM) training. ### All Trees ``` 2023-11-05_oasst2_all.trees.jsonl.gz 70,642 trees with 208,584 total messages 2023-11-05_oasst2_all.messages.jsonl.gz 208,584 messages ``` All trees, including those in states prompt_lottery_waiting (trees that consist of only one message, namely the initial prompt), aborted_low_grade (trees that stopped growing because the messages had low quality), and halted_by_moderator. #### 2023-11-05_oasst2_all.trees.jsonl.gz Stats ``` Trees : 70,642 Messages : 208,584 Oldest message : 2023-01-16 20:24:26.211711+00:00 Youngest message : 2023-11-05 10:24:44.484910+00:00 Detoxify ratings : 156,570 Accepted messages: 189,288 Deleted messages : 5,414 Tree counts by state: - ready_for_export: 13,854 - prompt_lottery_waiting: 44,550 - halted_by_moderator: 3,089 - initial_prompt_review: 4,319 - growing: 3,102 - aborted_low_grade: 1,708 - ranking: 20 Message counts by language: - en: 85,115 - es: 47,513 - ru: 15,990 - zh: 11,205 - de: 8,398 - fr: 5,841 - pt-BR: 4,540 - th: 3,236 - ca: 2,586 - it: 2,144 - ja: 1,904 - uk-UA: 1,889 - ko: 1,635 - pl: 1,510 - eo: 1,405 - nl: 1,354 - ar: 1,274 - vi: 1,137 - fi: 1,098 - eu: 995 - hu: 961 - tr: 803 - sv: 763 - id: 669 - gl: 574 - da: 502 - he: 498 - cs: 476 - ro: 434 - sk: 410 - fa: 394 - el: 388 - bar: 217 - nb-NO: 196 - bg: 176 - bn: 128 - sl: 119 - sr: 63 - swg: 23 - hi: 14 - lt: 7 ``` ### Supplemental Exports: Spam & Prompts ``` 2023-11-05_oasst2_spam.messages.jsonl.gz 19,296 matching messages ``` These are messages which were deleted or have a negative review result ("review_result": false). Besides low quality, a frequent reason for message deletion is a wrong language tag. ``` 2023-11-05_oasst2_prompts.messages.jsonl.gz 64,592 matching messages ``` These are all the kept initial prompt messages with positive review result (no spam) of trees in `ready_for_export` or `prompt_lottery_waiting` state. ### Using the Huggingface Datasets While HF datasets is ideal for tabular datasets, it is not a natural fit for nested data structures like the OpenAssistant conversation trees. Nevertheless, we make all messages which can also be found in the file `2023-11-05_oasst2_ready.messages.jsonl.gz` available in parquet format as train/validation splits. These are directly loadable by [Huggingface Datasets](https://pypi.org/project/datasets/). To load the oasst2 train & validation splits use: ```python from datasets import load_dataset ds = load_dataset("OpenAssistant/oasst2") train = ds['train'] # len(train)=128575 (95%) val = ds['validation'] # len(val)=6599 (5%) ``` The messages appear in depth-first order of the message trees. Full conversation trees can be reconstructed from the flat messages table by using the `parent_id` and `message_id` properties to identify the parent-child relationship of messages. The `message_tree_id` and `tree_state` properties (only present in flat messages files) can be used to find all messages of a message tree or to select trees by their state. ### Data Visualisation Explore the content of the prompts from the English subset using [Bunka](https://github.com/charlesdedampierre/BunkaTopics) open-source visualization technology. The interactive map [available on a HF space](https://huggingface.co/spaces/bunkalab/visualisation-oasst2) allows to explore each datapoint to get a more precise overview of the contents. <a href="https://i.imgur.com/B2H8LR3.png"> <img src="https://i.imgur.com/B2H8LR3.png" alt="Bunka oasst2 Map" width="35%"/> </a> ## Contact - Discord [Open Assistant Discord Server](https://ykilcher.com/open-assistant-discord) - GitHub: [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant) - E-Mail: [open-assistant@laion.ai](mailto:open-assistant@laion.ai)
Amod/mental_health_counseling_conversations
--- license: openrail task_categories: - text-generation - question-answering language: - en tags: - medical size_categories: - 1K<n<10K --- # Amod/mental_health_counseling_conversations ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** Bertagnolli, Nicolas (2020). Counsel chat: Bootstrapping high-quality therapy data. Towards Data Science. https://towardsdatascience.com/counsel-chat - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset is a collection of questions and answers sourced from two online counseling and therapy platforms. The questions cover a wide range of mental health topics, and the answers are provided by qualified psychologists. The dataset is intended to be used for fine-tuning language models to improve their ability to provide mental health advice. ### Supported Tasks and Leaderboards The dataset supports the task of text generation, particularly for generating advice or suggestions in response to a mental health-related question. ### Languages The text in the dataset is in English. ## Dataset Structure ### Data Instances A data instance includes a 'Context' and a 'Response'. 'Context' contains the question asked by a user, and 'Response' contains the corresponding answer provided by a psychologist. ### Data Fields - 'Context': a string containing the question asked by a user - 'Response': a string containing the corresponding answer provided by a psychologist ### Data Splits The dataset has no predefined splits. Users can create their own splits as needed. ## Dataset Creation ### Curation Rationale This dataset was created to aid in the development of AI models that can provide mental health advice or guidance. The raw data was meticulously cleaned to only include the conversations. ### Source Data The data was sourced from two online counseling and therapy platforms. The raw data can be found [here](https://github.com/nbertagnolli/counsel-chat/tree/master/data). ### Annotations The dataset does not contain any additional annotations. ### Personal and Sensitive Information The dataset may contain sensitive information related to mental health. All data was anonymized and no personally identifiable information is included.
tydiqa
--- pretty_name: TyDi QA annotations_creators: - crowdsourced language_creators: - crowdsourced language: - ar - bn - en - fi - id - ja - ko - ru - sw - te - th license: - apache-2.0 multilinguality: - multilingual size_categories: - unknown source_datasets: - extended|wikipedia task_categories: - question-answering task_ids: - extractive-qa paperswithcode_id: tydi-qa dataset_info: - config_name: primary_task features: - name: passage_answer_candidates sequence: - name: plaintext_start_byte dtype: int32 - name: plaintext_end_byte dtype: int32 - name: question_text dtype: string - name: document_title dtype: string - name: language dtype: string - name: annotations sequence: - name: passage_answer_candidate_index dtype: int32 - name: minimal_answers_start_byte dtype: int32 - name: minimal_answers_end_byte dtype: int32 - name: yes_no_answer dtype: string - name: document_plaintext dtype: string - name: document_url dtype: string splits: - name: train num_bytes: 5550574617 num_examples: 166916 - name: validation num_bytes: 484380443 num_examples: 18670 download_size: 1953887429 dataset_size: 6034955060 - config_name: secondary_task features: - name: id dtype: string - name: title dtype: string - name: context dtype: string - name: question dtype: string - name: answers sequence: - name: text dtype: string - name: answer_start dtype: int32 splits: - name: train num_bytes: 52948607 num_examples: 49881 - name: validation num_bytes: 5006461 num_examples: 5077 download_size: 1953887429 dataset_size: 57955068 --- # Dataset Card for "tydiqa" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/google-research-datasets/tydiqa](https://github.com/google-research-datasets/tydiqa) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 3.91 GB - **Size of the generated dataset:** 6.10 GB - **Total amount of disk used:** 10.00 GB ### Dataset Summary TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs. The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language expresses -- such that we expect models performing well on this set to generalize across a large number of the languages in the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic information-seeking task and avoid priming effects, questions are written by people who want to know the answer, but don’t know the answer yet, (unlike SQuAD and its descendents) and the data is collected directly in each language without the use of translation (unlike MLQA and XQuAD). ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### primary_task - **Size of downloaded dataset files:** 1.95 GB - **Size of the generated dataset:** 6.04 GB - **Total amount of disk used:** 7.99 GB An example of 'validation' looks as follows. ``` This example was too long and was cropped: { "annotations": { "minimal_answers_end_byte": [-1, -1, -1], "minimal_answers_start_byte": [-1, -1, -1], "passage_answer_candidate_index": [-1, -1, -1], "yes_no_answer": ["NONE", "NONE", "NONE"] }, "document_plaintext": "\"\\nรองศาสตราจารย์[1] หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร (22 กันยายน 2495 -) ผู้ว่าราชการกรุงเทพมหานครคนที่ 15 อดีตรองหัวหน้าพรรคปร...", "document_title": "หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร", "document_url": "\"https://th.wikipedia.org/wiki/%E0%B8%AB%E0%B8%A1%E0%B9%88%E0%B8%AD%E0%B8%A1%E0%B8%A3%E0%B8%B2%E0%B8%8A%E0%B8%A7%E0%B8%87%E0%B8%...", "language": "thai", "passage_answer_candidates": "{\"plaintext_end_byte\": [494, 1779, 2931, 3904, 4506, 5588, 6383, 7122, 8224, 9375, 10473, 12563, 15134, 17765, 19863, 21902, 229...", "question_text": "\"หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร เรียนจบจากที่ไหน ?\"..." } ``` #### secondary_task - **Size of downloaded dataset files:** 1.95 GB - **Size of the generated dataset:** 58.03 MB - **Total amount of disk used:** 2.01 GB An example of 'validation' looks as follows. ``` This example was too long and was cropped: { "answers": { "answer_start": [394], "text": ["بطولتين"] }, "context": "\"أقيمت البطولة 21 مرة، شارك في النهائيات 78 دولة، وعدد الفرق التي فازت بالبطولة حتى الآن 8 فرق، ويعد المنتخب البرازيلي الأكثر تت...", "id": "arabic-2387335860751143628-1", "question": "\"كم عدد مرات فوز الأوروغواي ببطولة كاس العالم لكرو القدم؟\"...", "title": "قائمة نهائيات كأس العالم" } ``` ### Data Fields The data fields are the same among all splits. #### primary_task - `passage_answer_candidates`: a dictionary feature containing: - `plaintext_start_byte`: a `int32` feature. - `plaintext_end_byte`: a `int32` feature. - `question_text`: a `string` feature. - `document_title`: a `string` feature. - `language`: a `string` feature. - `annotations`: a dictionary feature containing: - `passage_answer_candidate_index`: a `int32` feature. - `minimal_answers_start_byte`: a `int32` feature. - `minimal_answers_end_byte`: a `int32` feature. - `yes_no_answer`: a `string` feature. - `document_plaintext`: a `string` feature. - `document_url`: a `string` feature. #### secondary_task - `id`: a `string` feature. - `title`: a `string` feature. - `context`: a `string` feature. - `question`: a `string` feature. - `answers`: a dictionary feature containing: - `text`: a `string` feature. - `answer_start`: a `int32` feature. ### Data Splits | name | train | validation | | -------------- | -----: | ---------: | | primary_task | 166916 | 18670 | | secondary_task | 49881 | 5077 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @article{tydiqa, title = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages}, author = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki} year = {2020}, journal = {Transactions of the Association for Computational Linguistics} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
cats_vs_dogs
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - image-classification task_ids: - multi-class-image-classification paperswithcode_id: cats-vs-dogs pretty_name: Cats Vs. Dogs dataset_info: features: - name: image dtype: image - name: labels dtype: class_label: names: '0': cat '1': dog splits: - name: train num_bytes: 3844792 num_examples: 23410 download_size: 824887076 dataset_size: 3844792 --- # Dataset Card for Cats Vs. Dogs ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Cats vs Dogs Dataset](https://www.microsoft.com/en-us/download/details.aspx?id=54765) - **Repository:** - **Paper:** [Asirra: A CAPTCHA that Exploits Interest-Aligned Manual Image Categorization](https://www.microsoft.com/en-us/research/wp-content/uploads/2007/10/CCS2007.pdf) - **Leaderboard:** [Dogs vs. Cats](https://www.kaggle.com/competitions/dogs-vs-cats) - **Point of Contact:** ### Dataset Summary A large set of images of cats and dogs. There are 1738 corrupted images that are dropped. This dataset is part of a now-closed Kaggle competition and represents a subset of the so-called Asirra dataset. From the competition page: > The Asirra data set > > Web services are often protected with a challenge that's supposed to be easy for people to solve, but difficult for computers. Such a challenge is often called a [CAPTCHA](http://www.captcha.net/) (Completely Automated Public Turing test to tell Computers and Humans Apart) or HIP (Human Interactive Proof). HIPs are used for many purposes, such as to reduce email and blog spam and prevent brute-force attacks on web site passwords. > > Asirra (Animal Species Image Recognition for Restricting Access) is a HIP that works by asking users to identify photographs of cats and dogs. This task is difficult for computers, but studies have shown that people can accomplish it quickly and accurately. Many even think it's fun! Here is an example of the Asirra interface: > > Asirra is unique because of its partnership with [Petfinder.com](https://www.petfinder.com/), the world's largest site devoted to finding homes for homeless pets. They've provided Microsoft Research with over three million images of cats and dogs, manually classified by people at thousands of animal shelters across the United States. Kaggle is fortunate to offer a subset of this data for fun and research. ### Supported Tasks and Leaderboards - `image-classification`: The goal of this task is to classify a given image as either containing a cat or a dog. The leaderboard is available [here](https://paperswithcode.com/sota/image-classification-on-cats-vs-dogs). ### Languages English. ## Dataset Structure ### Data Instances A sample from the training set is provided below: ``` { 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x375 at 0x29CEAD71780>, 'labels': 0 } ``` ### Data Fields The data instances have the following fields: - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`. - `labels`: an `int` classification label. Class Label Mappings: ``` { "cat": 0, "dog": 1, } ``` ### Data Splits | | train | |---------------|------:| | # of examples | 23410 | ## Dataset Creation ### Curation Rationale This subset was to built to test whether computer vision algorithms can beat the Asirra CAPTCHA: From the competition page: > Image recognition attacks > > While random guessing is the easiest form of attack, various forms of image recognition can allow an attacker to make guesses that are better than random. There is enormous diversity in the photo database (a wide variety of backgrounds, angles, poses, lighting, etc.), making accurate automatic classification difficult. In an informal poll conducted many years ago, computer vision experts posited that a classifier with better than 60% accuracy would be difficult without a major advance in the state of the art. For reference, a 60% classifier improves the guessing probability of a 12-image HIP from 1/4096 to 1/459. ### Source Data #### Initial Data Collection and Normalization This dataset is a subset of the Asirra dataset. From the competition page: > Asirra is unique because of its partnership with Petfinder.com, the world's largest site devoted to finding homes for homeless pets. They've provided Microsoft Research with over three million images of cats and dogs, manually classified by people at thousands of animal shelters across the United States. #### Who are the source language producers? The users of [Petfinder.com](https://www.petfinder.com/). ### Annotations #### Annotation process The images were annotated by selecting a pet category on [Petfinder.com](https://www.petfinder.com/). #### Who are the annotators? The users of [Petfinder.com](https://www.petfinder.com/). ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases From the paper: > Unlike many image-based CAPTCHAs which are abstract or subjective, Asirra’s challenges are concrete, inoffensive (cute, by some accounts), require no specialized or culturally biased knowledge, and have definite ground truth. This makes Asirra less frustrating for humans. Some beta-testers found it fun. The four-year-old child of one asked several times to “play the cat and dog game again.” ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ```bibtex @Inproceedings (Conference){asirra-a-captcha-that-exploits-interest-aligned-manual-image-categorization, author = {Elson, Jeremy and Douceur, John (JD) and Howell, Jon and Saul, Jared}, title = {Asirra: A CAPTCHA that Exploits Interest-Aligned Manual Image Categorization}, booktitle = {Proceedings of 14th ACM Conference on Computer and Communications Security (CCS)}, year = {2007}, month = {October}, publisher = {Association for Computing Machinery, Inc.}, url = {https://www.microsoft.com/en-us/research/publication/asirra-a-captcha-that-exploits-interest-aligned-manual-image-categorization/}, edition = {Proceedings of 14th ACM Conference on Computer and Communications Security (CCS)}, } ``` ### Contributions Thanks to [@nateraw](https://github.com/nateraw) for adding this dataset.
Stevross/mmlu
--- annotations_creators: - no-annotation language_creators: - expert-generated language: - en license: - mit multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - question-answering task_ids: - multiple-choice-qa paperswithcode_id: mmlu pretty_name: Measuring Massive Multitask Language Understanding language_bcp47: - en-US dataset_info: - config_name: abstract_algebra features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 19328 num_examples: 100 - name: validation num_bytes: 2024 num_examples: 11 - name: dev num_bytes: 830 num_examples: 5 download_size: 166184960 dataset_size: 160623559 - config_name: anatomy features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 33121 num_examples: 135 - name: validation num_bytes: 3140 num_examples: 14 - name: dev num_bytes: 967 num_examples: 5 download_size: 166184960 dataset_size: 160638605 - config_name: astronomy features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 46771 num_examples: 152 - name: validation num_bytes: 5027 num_examples: 16 - name: dev num_bytes: 2076 num_examples: 5 download_size: 166184960 dataset_size: 160655251 - config_name: business_ethics features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 33252 num_examples: 100 - name: validation num_bytes: 3038 num_examples: 11 - name: dev num_bytes: 2190 num_examples: 5 download_size: 166184960 dataset_size: 160639857 - config_name: clinical_knowledge features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 62754 num_examples: 265 - name: validation num_bytes: 6664 num_examples: 29 - name: dev num_bytes: 1210 num_examples: 5 download_size: 166184960 dataset_size: 160672005 - config_name: college_biology features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 48797 num_examples: 144 - name: validation num_bytes: 4819 num_examples: 16 - name: dev num_bytes: 1532 num_examples: 5 download_size: 166184960 dataset_size: 160656525 - config_name: college_chemistry features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 24708 num_examples: 100 - name: validation num_bytes: 2328 num_examples: 8 - name: dev num_bytes: 1331 num_examples: 5 download_size: 166184960 dataset_size: 160629744 - config_name: college_computer_science features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 42641 num_examples: 100 - name: validation num_bytes: 4663 num_examples: 11 - name: dev num_bytes: 2765 num_examples: 5 download_size: 166184960 dataset_size: 160651446 - config_name: college_mathematics features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 24711 num_examples: 100 - name: validation num_bytes: 2668 num_examples: 11 - name: dev num_bytes: 1493 num_examples: 5 download_size: 166184960 dataset_size: 160630249 - config_name: college_medicine features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 82397 num_examples: 173 - name: validation num_bytes: 7909 num_examples: 22 - name: dev num_bytes: 1670 num_examples: 5 download_size: 166184960 dataset_size: 160693353 - config_name: college_physics features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 30181 num_examples: 102 - name: validation num_bytes: 3490 num_examples: 11 - name: dev num_bytes: 1412 num_examples: 5 download_size: 166184960 dataset_size: 160636460 - config_name: computer_security features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 27124 num_examples: 100 - name: validation num_bytes: 4549 num_examples: 11 - name: dev num_bytes: 1101 num_examples: 5 download_size: 166184960 dataset_size: 160634151 - config_name: conceptual_physics features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 40709 num_examples: 235 - name: validation num_bytes: 4474 num_examples: 26 - name: dev num_bytes: 934 num_examples: 5 download_size: 166184960 dataset_size: 160647494 - config_name: econometrics features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 46547 num_examples: 114 - name: validation num_bytes: 4967 num_examples: 12 - name: dev num_bytes: 1644 num_examples: 5 download_size: 166184960 dataset_size: 160654535 - config_name: electrical_engineering features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 25142 num_examples: 145 - name: validation num_bytes: 2903 num_examples: 16 - name: dev num_bytes: 972 num_examples: 5 download_size: 166184960 dataset_size: 160630394 - config_name: elementary_mathematics features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 70108 num_examples: 378 - name: validation num_bytes: 8988 num_examples: 41 - name: dev num_bytes: 1440 num_examples: 5 download_size: 166184960 dataset_size: 160681913 - config_name: formal_logic features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 49785 num_examples: 126 - name: validation num_bytes: 6252 num_examples: 14 - name: dev num_bytes: 1757 num_examples: 5 download_size: 166184960 dataset_size: 160659171 - config_name: global_facts features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 18403 num_examples: 100 - name: validation num_bytes: 1865 num_examples: 10 - name: dev num_bytes: 1229 num_examples: 5 download_size: 166184960 dataset_size: 160622874 - config_name: high_school_biology features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 109732 num_examples: 310 - name: validation num_bytes: 11022 num_examples: 32 - name: dev num_bytes: 1673 num_examples: 5 download_size: 166184960 dataset_size: 160723804 - config_name: high_school_chemistry features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 58464 num_examples: 203 - name: validation num_bytes: 7092 num_examples: 22 - name: dev num_bytes: 1220 num_examples: 5 download_size: 166184960 dataset_size: 160668153 - config_name: high_school_computer_science features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 44476 num_examples: 100 - name: validation num_bytes: 3343 num_examples: 9 - name: dev num_bytes: 2918 num_examples: 5 download_size: 166184960 dataset_size: 160652114 - config_name: high_school_european_history features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 270300 num_examples: 165 - name: validation num_bytes: 29632 num_examples: 18 - name: dev num_bytes: 11564 num_examples: 5 download_size: 166184960 dataset_size: 160912873 - config_name: high_school_geography features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 42034 num_examples: 198 - name: validation num_bytes: 4332 num_examples: 22 - name: dev num_bytes: 1403 num_examples: 5 download_size: 166184960 dataset_size: 160649146 - config_name: high_school_government_and_politics features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 66074 num_examples: 193 - name: validation num_bytes: 7063 num_examples: 21 - name: dev num_bytes: 1779 num_examples: 5 download_size: 166184960 dataset_size: 160676293 - config_name: high_school_macroeconomics features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 117687 num_examples: 390 - name: validation num_bytes: 13020 num_examples: 43 - name: dev num_bytes: 1328 num_examples: 5 download_size: 166184960 dataset_size: 160733412 - config_name: high_school_mathematics features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 54854 num_examples: 270 - name: validation num_bytes: 5765 num_examples: 29 - name: dev num_bytes: 1297 num_examples: 5 download_size: 166184960 dataset_size: 160663293 - config_name: high_school_microeconomics features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 75703 num_examples: 238 - name: validation num_bytes: 7553 num_examples: 26 - name: dev num_bytes: 1298 num_examples: 5 download_size: 166184960 dataset_size: 160685931 - config_name: high_school_physics features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 59538 num_examples: 151 - name: validation num_bytes: 6771 num_examples: 17 - name: dev num_bytes: 1489 num_examples: 5 download_size: 166184960 dataset_size: 160669175 - config_name: high_school_psychology features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 159407 num_examples: 545 - name: validation num_bytes: 17269 num_examples: 60 - name: dev num_bytes: 1905 num_examples: 5 download_size: 166184960 dataset_size: 160779958 - config_name: high_school_statistics features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 110702 num_examples: 216 - name: validation num_bytes: 9997 num_examples: 23 - name: dev num_bytes: 2528 num_examples: 5 download_size: 166184960 dataset_size: 160724604 - config_name: high_school_us_history features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 296734 num_examples: 204 - name: validation num_bytes: 31706 num_examples: 22 - name: dev num_bytes: 8864 num_examples: 5 download_size: 166184960 dataset_size: 160938681 - config_name: high_school_world_history features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 378617 num_examples: 237 - name: validation num_bytes: 45501 num_examples: 26 - name: dev num_bytes: 4882 num_examples: 5 download_size: 166184960 dataset_size: 161030377 - config_name: human_aging features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 46098 num_examples: 223 - name: validation num_bytes: 4707 num_examples: 23 - name: dev num_bytes: 1008 num_examples: 5 download_size: 166184960 dataset_size: 160653190 - config_name: human_sexuality features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 32110 num_examples: 131 - name: validation num_bytes: 2421 num_examples: 12 - name: dev num_bytes: 1077 num_examples: 5 download_size: 166184960 dataset_size: 160636985 - config_name: international_law features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 53531 num_examples: 121 - name: validation num_bytes: 6473 num_examples: 13 - name: dev num_bytes: 2418 num_examples: 5 download_size: 166184960 dataset_size: 160663799 - config_name: jurisprudence features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 33986 num_examples: 108 - name: validation num_bytes: 3729 num_examples: 11 - name: dev num_bytes: 1303 num_examples: 5 download_size: 166184960 dataset_size: 160640395 - config_name: logical_fallacies features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 50117 num_examples: 163 - name: validation num_bytes: 5103 num_examples: 18 - name: dev num_bytes: 1573 num_examples: 5 download_size: 166184960 dataset_size: 160658170 - config_name: machine_learning features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 33880 num_examples: 112 - name: validation num_bytes: 3232 num_examples: 11 - name: dev num_bytes: 2323 num_examples: 5 download_size: 166184960 dataset_size: 160640812 - config_name: management features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 20002 num_examples: 103 - name: validation num_bytes: 1820 num_examples: 11 - name: dev num_bytes: 898 num_examples: 5 download_size: 166184960 dataset_size: 160624097 - config_name: marketing features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 63025 num_examples: 234 - name: validation num_bytes: 7394 num_examples: 25 - name: dev num_bytes: 1481 num_examples: 5 download_size: 166184960 dataset_size: 160673277 - config_name: medical_genetics features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 20864 num_examples: 100 - name: validation num_bytes: 3005 num_examples: 11 - name: dev num_bytes: 1089 num_examples: 5 download_size: 166184960 dataset_size: 160626335 - config_name: miscellaneous features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 147704 num_examples: 783 - name: validation num_bytes: 14330 num_examples: 86 - name: dev num_bytes: 699 num_examples: 5 download_size: 166184960 dataset_size: 160764110 - config_name: moral_disputes features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 107818 num_examples: 346 - name: validation num_bytes: 12420 num_examples: 38 - name: dev num_bytes: 1755 num_examples: 5 download_size: 166184960 dataset_size: 160723370 - config_name: moral_scenarios features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 374026 num_examples: 895 - name: validation num_bytes: 42338 num_examples: 100 - name: dev num_bytes: 2058 num_examples: 5 download_size: 166184960 dataset_size: 161019799 - config_name: nutrition features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 92410 num_examples: 306 - name: validation num_bytes: 8436 num_examples: 33 - name: dev num_bytes: 2085 num_examples: 5 download_size: 166184960 dataset_size: 160704308 - config_name: philosophy features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 80073 num_examples: 311 - name: validation num_bytes: 9184 num_examples: 34 - name: dev num_bytes: 988 num_examples: 5 download_size: 166184960 dataset_size: 160691622 - config_name: prehistory features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 89594 num_examples: 324 - name: validation num_bytes: 10285 num_examples: 35 - name: dev num_bytes: 1878 num_examples: 5 download_size: 166184960 dataset_size: 160703134 - config_name: professional_accounting features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 124550 num_examples: 282 - name: validation num_bytes: 14372 num_examples: 31 - name: dev num_bytes: 2148 num_examples: 5 download_size: 166184960 dataset_size: 160742447 - config_name: professional_law features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 1891762 num_examples: 1534 - name: validation num_bytes: 203519 num_examples: 170 - name: dev num_bytes: 6610 num_examples: 5 download_size: 166184960 dataset_size: 162703268 - config_name: professional_medicine features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 217561 num_examples: 272 - name: validation num_bytes: 23847 num_examples: 31 - name: dev num_bytes: 3807 num_examples: 5 download_size: 166184960 dataset_size: 160846592 - config_name: professional_psychology features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 225899 num_examples: 612 - name: validation num_bytes: 29101 num_examples: 69 - name: dev num_bytes: 2267 num_examples: 5 download_size: 166184960 dataset_size: 160858644 - config_name: public_relations features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 28760 num_examples: 110 - name: validation num_bytes: 4566 num_examples: 12 - name: dev num_bytes: 1496 num_examples: 5 download_size: 166184960 dataset_size: 160636199 - config_name: security_studies features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 204844 num_examples: 245 - name: validation num_bytes: 22637 num_examples: 27 - name: dev num_bytes: 5335 num_examples: 5 download_size: 166184960 dataset_size: 160834193 - config_name: sociology features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 66243 num_examples: 201 - name: validation num_bytes: 7184 num_examples: 22 - name: dev num_bytes: 1613 num_examples: 5 download_size: 166184960 dataset_size: 160676417 - config_name: us_foreign_policy features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 28443 num_examples: 100 - name: validation num_bytes: 3264 num_examples: 11 - name: dev num_bytes: 1611 num_examples: 5 download_size: 166184960 dataset_size: 160634695 - config_name: virology features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 38759 num_examples: 166 - name: validation num_bytes: 5463 num_examples: 18 - name: dev num_bytes: 1096 num_examples: 5 download_size: 166184960 dataset_size: 160646695 - config_name: world_religions features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: auxiliary_train num_bytes: 160601377 num_examples: 99842 - name: test num_bytes: 25274 num_examples: 171 - name: validation num_bytes: 2765 num_examples: 19 - name: dev num_bytes: 670 num_examples: 5 download_size: 166184960 dataset_size: 160630086 --- # Dataset Card for MMLU ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository**: https://github.com/hendrycks/test - **Paper**: https://arxiv.org/abs/2009.03300 ### Dataset Summary [Measuring Massive Multitask Language Understanding](https://arxiv.org/pdf/2009.03300) by [Dan Hendrycks](https://people.eecs.berkeley.edu/~hendrycks/), [Collin Burns](http://collinpburns.com), [Steven Basart](https://stevenbas.art), Andy Zou, Mantas Mazeika, [Dawn Song](https://people.eecs.berkeley.edu/~dawnsong/), and [Jacob Steinhardt](https://www.stat.berkeley.edu/~jsteinhardt/) (ICLR 2021). This is a massive multitask test consisting of multiple-choice questions from various branches of knowledge. The test spans subjects in the humanities, social sciences, hard sciences, and other areas that are important for some people to learn. This covers 57 tasks including elementary mathematics, US history, computer science, law, and more. To attain high accuracy on this test, models must possess extensive world knowledge and problem solving ability. A complete list of tasks: ['abstract_algebra', 'anatomy', 'astronomy', 'business_ethics', 'clinical_knowledge', 'college_biology', 'college_chemistry', 'college_computer_science', 'college_mathematics', 'college_medicine', 'college_physics', 'computer_security', 'conceptual_physics', 'econometrics', 'electrical_engineering', 'elementary_mathematics', 'formal_logic', 'global_facts', 'high_school_biology', 'high_school_chemistry', 'high_school_computer_science', 'high_school_european_history', 'high_school_geography', 'high_school_government_and_politics', 'high_school_macroeconomics', 'high_school_mathematics', 'high_school_microeconomics', 'high_school_physics', 'high_school_psychology', 'high_school_statistics', 'high_school_us_history', 'high_school_world_history', 'human_aging', 'human_sexuality', 'international_law', 'jurisprudence', 'logical_fallacies', 'machine_learning', 'management', 'marketing', 'medical_genetics', 'miscellaneous', 'moral_disputes', 'moral_scenarios', 'nutrition', 'philosophy', 'prehistory', 'professional_accounting', 'professional_law', 'professional_medicine', 'professional_psychology', 'public_relations', 'security_studies', 'sociology', 'us_foreign_policy', 'virology', 'world_religions'] ### Supported Tasks and Leaderboards | Model | Authors | Humanities | Social Science | STEM | Other | Average | |------------------------------------|----------|:-------:|:-------:|:-------:|:-------:|:-------:| | [UnifiedQA](https://arxiv.org/abs/2005.00700) | Khashabi et al., 2020 | 45.6 | 56.6 | 40.2 | 54.6 | 48.9 | [GPT-3](https://arxiv.org/abs/2005.14165) (few-shot) | Brown et al., 2020 | 40.8 | 50.4 | 36.7 | 48.8 | 43.9 | [GPT-2](https://arxiv.org/abs/2005.14165) | Radford et al., 2019 | 32.8 | 33.3 | 30.2 | 33.1 | 32.4 | Random Baseline | N/A | 25.0 | 25.0 | 25.0 | 25.0 | 25.0 | 25.0 ### Languages English ## Dataset Structure ### Data Instances An example from anatomy subtask looks as follows: ``` { "question": "What is the embryological origin of the hyoid bone?", "choices": ["The first pharyngeal arch", "The first and second pharyngeal arches", "The second pharyngeal arch", "The second and third pharyngeal arches"], "answer": "D" } ``` ### Data Fields - `question`: a string feature - `choices`: a list of 4 string features - `answer`: a ClassLabel feature ### Data Splits - `auxiliary_train`: auxiliary multiple-choice training questions from ARC, MC_TEST, OBQA, RACE, etc. - `dev`: 5 examples per subtask, meant for few-shot setting - `test`: there are at least 100 examples per subtask | | auxiliary_train | dev | val | test | | ----- | :------: | :-----: | :-----: | :-----: | | TOTAL | 99842 | 285 | 1531 | 14042 ## Dataset Creation ### Curation Rationale Transformer models have driven this recent progress by pretraining on massive text corpora, including all of Wikipedia, thousands of books, and numerous websites. These models consequently see extensive information about specialized topics, most of which is not assessed by existing NLP benchmarks. To bridge the gap between the wide-ranging knowledge that models see during pretraining and the existing measures of success, we introduce a new benchmark for assessing models across a diverse set of subjects that humans learn. ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [MIT License](https://github.com/hendrycks/test/blob/master/LICENSE) ### Citation Information If you find this useful in your research, please consider citing the test and also the [ETHICS](https://arxiv.org/abs/2008.02275) dataset it draws from: ``` @article{hendryckstest2021, title={Measuring Massive Multitask Language Understanding}, author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt}, journal={Proceedings of the International Conference on Learning Representations (ICLR)}, year={2021} } @article{hendrycks2021ethics, title={Aligning AI With Shared Human Values}, author={Dan Hendrycks and Collin Burns and Steven Basart and Andrew Critch and Jerry Li and Dawn Song and Jacob Steinhardt}, journal={Proceedings of the International Conference on Learning Representations (ICLR)}, year={2021} } ``` ### Contributions Thanks to [@andyzoujm](https://github.com/andyzoujm) for adding this dataset.
e2e_nlg
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - cc-by-sa-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text2text-generation task_ids: [] paperswithcode_id: e2e pretty_name: End-to-End NLG Challenge tags: - meaning-representation-to-text dataset_info: features: - name: meaning_representation dtype: string - name: human_reference dtype: string splits: - name: train num_bytes: 9435824 num_examples: 42061 - name: validation num_bytes: 1171723 num_examples: 4672 - name: test num_bytes: 1320205 num_examples: 4693 download_size: 11812316 dataset_size: 11927752 --- # Dataset Card for End-to-End NLG Challenge ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [homepage](http://www.macs.hw.ac.uk/InteractionLab/E2E/) - **Repository:** [repository](https://github.com/tuetschek/e2e-dataset/) - **Paper:** [paper](https://arxiv.org/abs/1706.09254) - **Leaderboard:** [leaderboard](http://www.macs.hw.ac.uk/InteractionLab/E2E/) ### Dataset Summary The E2E dataset is used for training end-to-end, data-driven natural language generation systems in the restaurant domain, which is ten times bigger than existing, frequently used datasets in this area. The E2E dataset poses new challenges: (1) its human reference texts show more lexical richness and syntactic variation, including discourse phenomena; (2) generating from this set requires content selection. As such, learning from this dataset promises more natural, varied and less template-like system utterances. E2E is released in the following paper where you can find more details and baseline results: https://arxiv.org/abs/1706.09254 ### Supported Tasks and Leaderboards - `text2text-generation-other-meaning-representation-to-text`: The dataset can be used to train a model to generate descriptions in the restaurant domain from meaning representations, which consists in taking as input some data about a restaurant and generate a sentence in natural language that presents the different aspects of the data about the restaurant.. Success on this task is typically measured by achieving a *high* [BLEU](https://huggingface.co/metrics/bleu), [NIST](https://huggingface.co/metrics/nist), [METEOR](https://huggingface.co/metrics/meteor), [Rouge-L](https://huggingface.co/metrics/rouge), [CIDEr](https://huggingface.co/metrics/cider). The TGen model (Dusek and Jurcıcek, 2016a) was used a baseline, had the following scores: | | BLEU | NIST | METEOR | ROUGE_L | CIDEr | | -------- | ------ | ------ | ------ | ------- | ------ | | BASELINE | 0.6593 | 8.6094 | 0.4483 | 0.6850 | 2.2338 | This task has an inactive leaderboard which can be found [here](http://www.macs.hw.ac.uk/InteractionLab/E2E/) and ranks models based on the metrics above. ### Languages The dataset is in english (en). ## Dataset Structure ### Data Instances Example of one instance: ``` {'human_reference': 'The Vaults pub near Café Adriatic has a 5 star rating. Prices start at £30.', 'meaning_representation': 'name[The Vaults], eatType[pub], priceRange[more than £30], customer rating[5 out of 5], near[Café Adriatic]'} ``` ### Data Fields - `human_reference`: string, the text is natural language that describes the different characteristics in the meaning representation - `meaning_representation`: list of slots and values to generate a description from Each MR consists of 3–8 attributes (slots), such as name, food or area, and their values. ### Data Splits The dataset is split into training, validation and testing sets (in a 76.5-8.5-15 ratio), keeping a similar distribution of MR and reference text lengths and ensuring that MRs in different sets are distinct. | | train | validation | test | | ----- |-------:|------------:|------:| | N. Instances | 42061 | 4672 | 4693 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization The data was collected using the CrowdFlower platform and quality-controlled following Novikova et al. (2016). #### Who are the source language producers? [More Information Needed] ### Annotations Following Novikova et al. (2016), the E2E data was collected using pictures as stimuli, which was shown to elicit significantly more natural, more informative, and better phrased human references than textual MRs. #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @article{dusek.etal2020:csl, title = {Evaluating the {{State}}-of-the-{{Art}} of {{End}}-to-{{End Natural Language Generation}}: {{The E2E NLG Challenge}}}, author = {Du{\v{s}}ek, Ond\v{r}ej and Novikova, Jekaterina and Rieser, Verena}, year = {2020}, month = jan, volume = {59}, pages = {123--156}, doi = {10.1016/j.csl.2019.06.009}, archivePrefix = {arXiv}, eprint = {1901.11528}, eprinttype = {arxiv}, journal = {Computer Speech \& Language} ``` ### Contributions Thanks to [@lhoestq](https://github.com/lhoestq) for adding this dataset.
rmanluo/RoG-webqsp
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* dataset_info: features: - name: id dtype: string - name: question dtype: string - name: answer sequence: string - name: q_entity sequence: string - name: a_entity sequence: string - name: graph sequence: sequence: string - name: choices sequence: 'null' splits: - name: train num_bytes: 993540472 num_examples: 2826 - name: validation num_bytes: 84009553 num_examples: 246 - name: test num_bytes: 580788090 num_examples: 1628 download_size: 0 dataset_size: 1658338115 --- # Dataset Card for "RoG-webqsp" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
quora
--- annotations_creators: - expert-generated language: - en language_creators: - found license: - unknown multilinguality: - monolingual pretty_name: Quora Question Pairs size_categories: - 100K<n<1M source_datasets: - original task_categories: - text-classification task_ids: - semantic-similarity-classification paperswithcode_id: null dataset_info: features: - name: questions sequence: - name: id dtype: int32 - name: text dtype: string - name: is_duplicate dtype: bool splits: - name: train num_bytes: 58155622 num_examples: 404290 download_size: 58176133 dataset_size: 58155622 --- # Dataset Card for "quora" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://www.kaggle.com/c/quora-question-pairs](https://www.kaggle.com/c/quora-question-pairs) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 58.17 MB - **Size of the generated dataset:** 58.15 MB - **Total amount of disk used:** 116.33 MB ### Dataset Summary The Quora dataset is composed of question pairs, and the task is to determine if the questions are paraphrases of each other (have the same meaning). ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 58.17 MB - **Size of the generated dataset:** 58.15 MB - **Total amount of disk used:** 116.33 MB An example of 'train' looks as follows. ``` { "is_duplicate": true, "questions": { "id": [1, 2], "text": ["Is this a sample question?", "Is this an example question?"] } } ``` ### Data Fields The data fields are the same among all splits. #### default - `questions`: a dictionary feature containing: - `id`: a `int32` feature. - `text`: a `string` feature. - `is_duplicate`: a `bool` feature. ### Data Splits | name |train | |-------|-----:| |default|404290| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information Unknown license. ### Citation Information Unknown. ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@ghomasHudson](https://github.com/ghomasHudson), [@lewtun](https://github.com/lewtun) for adding this dataset.
flytech/python-codes-25k
--- task_categories: - text-classification - question-answering - token-classification - summarization - text2text-generation - text-generation tags: - code - python - flytech - cleaned - instructional - dataset 25k - text2code - code2text - behavioral - codegeneration size_categories: - 1M<n<10M --- ### <span style="color:#307090">License</span> <span style="color:darkorange">MIT</span> <hr style="height:1px;border:none;color:#333;background-color:#444;" /> ## <span style="color:darkcyan">This is a Cleaned Python Dataset Covering 25,000 Instructional Tasks</span> ### <span style="color:#307090">Overview</span> The dataset has 4 key features (fields): <b><span style="color:#205070">instruction</span></b>, <b><span style="color:#205070">input</span></b>, <b><span style="color:#205070">output</span></b>, and <b><span style="color:#205070">text</span></b>. <span style="color:darkcyan">It's a rich source for Python codes, tasks, and extends into behavioral aspects.</span> <hr style="height:1px;border:none;color:#333;background-color:#444;" /> ### <span style="color:#307090">Dataset Statistics</span> - **Total Entries**: <span style="color:darkmagenta">24,813</span> - **Unique Instructions**: <span style="color:darkmagenta">24,580</span> - **Unique Inputs**: <span style="color:darkmagenta">3,666</span> - **Unique Outputs**: <span style="color:darkmagenta">24,581</span> - **Unique Texts**: <span style="color:darkmagenta">24,813</span> - **Average Tokens per example**: <span style="color:darkmagenta">508</span> ### <span style="color:#307090">Features</span> - `instruction`: The instructional task to be performed / User input. - `input`: Very short, introductive part of AI response or empty. - `output`: Python code that accomplishes the task. - `text`: All fields combined together. <hr style="height:1px;border:none;color:#333;background-color:#444;" /> ### <span style="color:#307090">Usage</span> <span style="color:darkcyan">This dataset can be useful for:</span> - <span style="color:#607090">Code generation tasks</span> - <span style="color:#607090">Natural Language Understanding models specialized in coding languages</span> - <span style="color:#607090">Behavioral analysis based on the given tasks and codes</span> - <span style="color:#607090">Educational purposes to understand coding styles and task variations</span> <span style="color:darkcyan">To load the dataset, one can use the following snippet:</span> ```python # Double-check if there is ~25k examples instead of almost 50k (HF thinks jsonl and json are two different datasets) from datasets import load_dataset dataset = load_dataset('flytech/python-codes-25k', split='train') # One can map the dataset in any way, for the sake of example: dataset = dataset.map(lambda example: {'text': example['instruction'] + ' ' + example['input'] + ' ' + example['output']})['text'] # Remember that you don't need to map if the dataset has a "text" field already:) ``` ### <span style="color:#307090">Access & Contributions</span> <span style="color:#887844">Feel free to use this dataset as per the MIT license. Contributions to enhance or expand the dataset are welcome.</span>
yahoo_answers_topics
--- annotations_creators: - found language_creators: - found language: - en license: - unknown multilinguality: - monolingual size_categories: - 1M<n<10M source_datasets: - extended|other-yahoo-answers-corpus task_categories: - text-classification task_ids: - topic-classification pretty_name: YahooAnswersTopics dataset_info: features: - name: id dtype: int32 - name: topic dtype: class_label: names: '0': Society & Culture '1': Science & Mathematics '2': Health '3': Education & Reference '4': Computers & Internet '5': Sports '6': Business & Finance '7': Entertainment & Music '8': Family & Relationships '9': Politics & Government - name: question_title dtype: string - name: question_content dtype: string - name: best_answer dtype: string config_name: yahoo_answers_topics splits: - name: train num_bytes: 760460695 num_examples: 1400000 - name: test num_bytes: 32661362 num_examples: 60000 download_size: 319476345 dataset_size: 793122057 train-eval-index: - config: yahoo_answers_topics task: text-classification task_id: multi_class_classification splits: train_split: train eval_split: test col_mapping: question_content: text topic: target metrics: - type: accuracy name: Accuracy - type: f1 name: F1 macro args: average: macro - type: f1 name: F1 micro args: average: micro - type: f1 name: F1 weighted args: average: weighted - type: precision name: Precision macro args: average: macro - type: precision name: Precision micro args: average: micro - type: precision name: Precision weighted args: average: weighted - type: recall name: Recall macro args: average: macro - type: recall name: Recall micro args: average: micro - type: recall name: Recall weighted args: average: weighted --- # Dataset Card for "Yahoo Answers Topics" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Add homepage URL here if available (unless it's a GitHub repository)]() - **Repository:** https://github.com/LC-John/Yahoo-Answers-Topic-Classification-Dataset - **Paper:** [If the dataset was introduced by a paper or there was a paper written describing the dataset, add URL here (landing page for Arxiv paper preferred)]() - **Leaderboard:** [If the dataset supports an active leaderboard, add link here]() - **Point of Contact:** [If known, name and email of at least one person the reader can contact for questions about the dataset.]() ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
yuvalkirstain/pickapic_v2
--- dataset_info: features: - name: are_different dtype: bool - name: best_image_uid dtype: string - name: caption dtype: string - name: created_at dtype: timestamp[ns] - name: has_label dtype: bool - name: image_0_uid dtype: string - name: image_0_url dtype: string - name: image_1_uid dtype: string - name: image_1_url dtype: string - name: jpg_0 dtype: binary - name: jpg_1 dtype: binary - name: label_0 dtype: float64 - name: label_1 dtype: float64 - name: model_0 dtype: string - name: model_1 dtype: string - name: ranking_id dtype: int64 - name: user_id dtype: int64 - name: num_example_per_prompt dtype: int64 - name: __index_level_0__ dtype: int64 splits: - name: train num_bytes: 322022952127 num_examples: 959040 - name: validation num_bytes: 6339087542 num_examples: 20596 - name: test num_bytes: 6618429346 num_examples: 20716 - name: validation_unique num_bytes: 170578993 num_examples: 500 - name: test_unique num_bytes: 175368751 num_examples: 500 download_size: 15603769274 dataset_size: 335326416759 --- # Dataset Card for "pickapic_v2" please pay attention - the URLs will be temporariliy unavailabe - but you do not need them! we have in jpg_0 and jpg_1 the image bytes! so by downloading the dataset you already have the images! [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
kunishou/databricks-dolly-15k-ja
--- license: cc-by-sa-3.0 language: - ja configs: - config_name: default data_files: - split: train path: "databricks-dolly-15k-ja.json" --- ![dolly-icon](./top.png) This dataset was created by automatically translating "databricks-dolly-15k" into Japanese. This dataset is licensed under CC-BY-SA-3.0 Last Update : 2023-05-11 databricks-dolly-15k-ja https://github.com/kunishou/databricks-dolly-15k-ja databricks-dolly-15k https://github.com/databrickslabs/dolly/tree/master/data
nvidia/HelpSteer
--- license: cc-by-4.0 language: - en pretty_name: Helpfulness SteerLM Dataset size_categories: - 10K<n<100K tags: - human-feedback dataset_info: features: - name: prompt dtype: string - name: response dtype: string - name: helpfulness dtype: int32 - name: correctness dtype: int32 - name: coherence dtype: int32 - name: complexity dtype: int32 - name: verbosity dtype: int32 splits: - name: train num_examples: 35331 - name: validation num_examples: 1789 --- # HelpSteer: Helpfulness SteerLM Dataset HelpSteer is an open-source Helpfulness Dataset (CC-BY-4.0) that supports aligning models to become more helpful, factually correct and coherent, while being adjustable in terms of the complexity and verbosity of its responses. Leveraging this dataset and SteerLM, we train a Llama 2 70B to reach **7.54** on MT Bench, the highest among models trained on open-source datasets based on [MT Bench Leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard) as of 15 Nov 2023. This model is available on HF at [Llama2-70B-SteerLM-Chat](https://huggingface.co/nvidia/Llama2-70B-SteerLM-Chat). Try this model instantly for free hosted by us at [NVIDIA AI Playground](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-foundation/models/llama2-70b-steerlm). You can use this in the provided UI or through a limited access API (up to 10, 000 requests within 30 days). If you would need more requests, we demonstrate how you can set up an inference server at [Llama2-70B-SteerLM-Chat model page on HF](https://huggingface.co/nvidia/Llama2-70B-SteerLM-Chat) You can also train a model using [NeMo Aligner](https://github.com/NVIDIA/NeMo-Aligner) following [SteerLM training user guide](https://docs.nvidia.com/nemo-framework/user-guide/latest/modelalignment/steerlm.html). <img src="https://huggingface.co/datasets/nvidia/HelpSteer/resolve/main/mtbench_categories.png" alt="MT Bench Categories" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> HelpSteer Paper : [HelpSteer: Multi-attribute Helpfulness Dataset for SteerLM](http://arxiv.org/abs/2311.09528) SteerLM Paper: [SteerLM: Attribute Conditioned SFT as an (User-Steerable) Alternative to RLHF](https://arxiv.org/abs/2310.05344) ## Dataset Description HelpSteer contains 37, 120 samples, each containing a prompt, a response as well as five human-annotated attributes of the response, each ranging between 0 and 4 where higher means better for each attribute. These attributes are: 1. **Helpfulness**: Overall helpfulness of the response to the prompt. 2. **Correctness**: Inclusion of all pertinent facts without errors. 3. **Coherence**: Consistency and clarity of expression. 4. **Complexity**: Intellectual depth required to write response (i.e. whether the response can be written by anyone with basic language competency or requires deep domain expertise). 5. **Verbosity**: Amount of detail included in the response, relative to what is asked for in the prompt. ## Example ```json { "prompt": "What are the three most important things to consider when deciding what technology to use to build an assist device to help an elderly person with basic needs?", "response": "To build an assistive device to help an elderly person with basic needs, one must consider three crucial things: safety, compatibility, and ease of use. Safety is paramount, as the device must not cause harm to the user. Compatibility with the user's environment and other devices is also essential. Finally, the device must be simple enough for the elderly person to operate.", "helpfulness": 3, "correctness": 4, "coherence": 4, "complexity": 2, "verbosity": 2 } ``` ## Using the Huggingface Datasets ```python from datasets import load_dataset ds = load_dataset("nvidia/HelpSteer") train = ds['train'] # len(train) = 35331 (95%) val = ds['validation'] # len(val) = 1789 (5%) ``` ## Source 1. Prompts are collected based on a mixture of template-generated (mainly for prompt involving long reference text) and human generated by Scale AI. These prompts relate to the tasks of Rewrite, Summarization, Classification, Extraction, Closed Question Answering, Open Question Answering, Generation and Brainstorming. 2. Responses are generated by an early version of an inhouse LLM. We generate up to 4 responses per prompts using sample techniques to give diverse yet reasonable responses. 3. Annotations of various attributes were done by Scale AI. Annotators rated each response on a Likert 5 scale (between 0 and 4) for each attribute (helpfulness, correctness, coherence, complexity and verbosity). ## Annotation methodology (short) 1. We engaged a select group of contractors via Scale AI. These contractors were provided with comprehensive guidelines that defined each attribute and the criteria for every rating level, together with some annotated examples. These guidelines and examples are detailed in the Appendix of the accompanying paper. 2. The annotation process involved approximately 200 U.S.-based human annotators. Candidates first underwent preliminary assignments, including assessments of English proficiency, to determine eligibility for working on the project. Subsequently, they participated in an introductory training course on the task which ended with a test that involved annotating 35 sample responses. This process ensured not only a thorough understanding of the task requirements but also the delivery of high-quality annotations. 3. Post-annotations, Scale AI performed extensive quality assurance, with each annotation reaching a minimum of two human reviews in addition to automated checks. After receiving the annotations from Scale AI, we conducted our independent quality assurance to make sure that the quality of the annotations was up to our expectations. As a result, some annotations were filtered away to retain only 37, 120 samples. ## Ethical statement Annotators for the dataset were contracted through Scale AI. Scale AI engages the Anker Methodology, GISC Impact Sourcing Standard, and UN Sustainable Development Goals to provide a fair and competitive pay. The specific pay is calculated based on many factors, including the specific project, the specialized skillset and expertise required, regional costs of living and then transparently listed on Scale AI platform. Scale AI also provides multiple channels for questions and support, including 24/7 support teams, community discussion channels with specially trained moderators, and a “speak up” hotline where contractors can report concerns anonymously. Worker concerns can be submitted to and are reviewed by our Remotasks support team, and pay disputes are reviewed by support specialists trained in this area. ## Contact E-Mail: [Zhilin Wang](mailto:zhilinw@nvidia.com) ## Citation If you find this dataset useful, please cite the following works ```bibtex @misc{wang2023helpsteer, title={HelpSteer: Multi-attribute Helpfulness Dataset for SteerLM}, author={Zhilin Wang and Yi Dong and Jiaqi Zeng and Virginia Adams and Makesh Narsimhan Sreedhar and Daniel Egert and Olivier Delalleau and Jane Polak Scowcroft and Neel Kant and Aidan Swope and Oleksii Kuchaiev}, year={2023}, eprint={2311.09528}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ```bibtex @misc{dong2023steerlm, title={SteerLM: Attribute Conditioned SFT as an (User-Steerable) Alternative to RLHF}, author={Yi Dong and Zhilin Wang and Makesh Narsimhan Sreedhar and Xianchao Wu and Oleksii Kuchaiev}, year={2023}, eprint={2310.05344}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
nsmc
--- annotations_creators: - crowdsourced language_creators: - found language: - ko license: - cc-by-2.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification paperswithcode_id: nsmc pretty_name: Naver Sentiment Movie Corpus dataset_info: features: - name: id dtype: string - name: document dtype: string - name: label dtype: class_label: names: '0': negative '1': positive splits: - name: train num_bytes: 16423803 num_examples: 150000 - name: test num_bytes: 5491417 num_examples: 50000 download_size: 19522142 dataset_size: 21915220 --- # Dataset Card for Naver sentiment movie corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Github](https://github.com/e9t/nsmc/) - **Repository:** [Github](https://github.com/e9t/nsmc/) - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields Each instance is a movie review written by Korean internet users on Naver, the most commonly used search engine in Korea. Each row can be broken down into the following fields: - `id`: A unique review ID, provided by Naver - `document`: The actual movie review - `label`: Binary labels for sentiment analysis, where `0` denotes negative, and `1`, positive ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @InProceedings{Park:2016, title = "Naver Sentiment Movie Corpus", author = "Lucy Park", year = "2016", howpublished = {\\url{https://github.com/e9t/nsmc}} } ``` ### Contributions Thanks to [@jaketae](https://github.com/jaketae) for adding this dataset.