Languages: English
Multilinguality: monolingual
Size Categories: 10M<n<100M
Language Creators: found
Annotations Creators: no-annotation
Source Datasets: original
Committed by system (HF staff)
Commit: 0e76d65
Parent: 30174bf

Update files from the datasets library (from 1.4.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.4.0

Files changed (1): README.md (+21 −21)
README.md CHANGED
@@ -27,7 +27,7 @@
 - [Citation Information](#citation-information)
 - [Contributions](#contributions)
 
-## [Dataset Description](#dataset-description)
+## Dataset Description
 
 - **Homepage:** [https://yknzhu.wixsite.com/mbweb](https://yknzhu.wixsite.com/mbweb)
 - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
@@ -37,23 +37,23 @@
 - **Size of the generated dataset:** 4629.00 MB
 - **Total amount of disk used:** 5753.87 MB
 
-### [Dataset Summary](#dataset-summary)
+### Dataset Summary
 
 Books are a rich source of both fine-grained information, how a character, an object or a scene looks like, as well as high-level semantics, what someone is thinking, feeling and how these states evolve through a story. This work aims to align books to their movie releases in order to provide rich descriptive explanations for visual content that go semantically far beyond the captions available in current datasets.
 
-### [Supported Tasks](#supported-tasks)
+### Supported Tasks
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Languages](#languages)
+### Languages
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-## [Dataset Structure](#dataset-structure)
+## Dataset Structure
 
 We show detailed information for up to 5 configurations of the dataset.
 
-### [Data Instances](#data-instances)
+### Data Instances
 
 #### plain_text
 
@@ -68,62 +68,62 @@ An example of 'train' looks as follows.
 }
 ```
 
-### [Data Fields](#data-fields)
+### Data Fields
 
 The data fields are the same among all splits.
 
 #### plain_text
 - `text`: a `string` feature.
 
-### [Data Splits Sample Size](#data-splits-sample-size)
+### Data Splits Sample Size
 
 | name | train |
 |----------|-------:|
 |plain_text|74004228|
 
-## [Dataset Creation](#dataset-creation)
+## Dataset Creation
 
-### [Curation Rationale](#curation-rationale)
+### Curation Rationale
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Source Data](#source-data)
+### Source Data
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Annotations](#annotations)
+### Annotations
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Personal and Sensitive Information](#personal-and-sensitive-information)
+### Personal and Sensitive Information
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-## [Considerations for Using the Data](#considerations-for-using-the-data)
+## Considerations for Using the Data
 
-### [Social Impact of Dataset](#social-impact-of-dataset)
+### Social Impact of Dataset
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Discussion of Biases](#discussion-of-biases)
+### Discussion of Biases
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Other Known Limitations](#other-known-limitations)
+### Other Known Limitations
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-## [Additional Information](#additional-information)
+## Additional Information
 
-### [Dataset Curators](#dataset-curators)
+### Dataset Curators
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Licensing Information](#licensing-information)
+### Licensing Information
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Citation Information](#citation-information)
+### Citation Information
 
 ```
 @InProceedings{Zhu_2015_ICCV,
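
The diff above only reworks headings; the schema it documents is unchanged: a single `plain_text` configuration with one string field, `text`, and 74,004,228 train rows. A minimal sketch of consuming a dataset with that schema via the Hugging Face `datasets` library — the hub id `bookcorpus` is an assumption inferred from the card's homepage and citation, not stated in this excerpt:

```python
# Sketch: loading the dataset this card describes with the Hugging Face
# `datasets` library. The hub id "bookcorpus" is an assumption; the schema
# (one plain_text config with a single string field `text`) comes from the
# card itself.

def load_bookcorpus_streaming():
    """Stream the train split so the ~4.6 GB dataset is not downloaded
    up front. The import is lazy, so merely defining this function does
    not require `datasets` to be installed."""
    from datasets import load_dataset  # requires `pip install datasets`
    return load_dataset("bookcorpus", split="train", streaming=True)


def is_plain_text_record(record):
    """Check a record against the card's schema: exactly one field,
    `text`, holding a string."""
    return set(record) == {"text"} and isinstance(record["text"], str)


# Usage (performs a network download, so not executed here):
#   first = next(iter(load_bookcorpus_streaming()))
#   assert is_plain_text_record(first)
```

Streaming mode is used in the sketch because the card lists the generated dataset at 4629.00 MB; a plain `load_dataset` call would materialize all of it on disk first.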