Update README.md
README.md
To collect human-annotated labels, we used Amazon Mechanical Turk (MTurk) to deploy our annotation task. The layout and interface design for the MTurk task can be found in the file `design-layout-mturk.html`.
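
The repository ships only the HTML layout, and the README does not say how the HITs themselves were created, so the snippet below is just a minimal sketch of how such a layout could be published programmatically with `boto3`'s MTurk client; the endpoint, reward, and assignment settings are illustrative placeholders, not the values used for this dataset.

```python
# Minimal sketch (not the authors' deployment script): publish the task layout
# as an MTurk HTMLQuestion HIT using boto3. All settings below are placeholders.
import boto3

# Sandbox endpoint for testing; the production endpoint omits "-sandbox".
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# Load the task layout; per-image fields (image URL, the four candidate labels)
# would be substituted into this HTML before creating each HIT.
with open("design-layout-mturk.html") as f:
    task_html = f.read()

html_question = f"""
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[{task_html}]]></HTMLContent>
  <FrameHeight>600</FrameHeight>
</HTMLQuestion>
"""

response = mturk.create_hit(
    Title="Choose an incorrect label for an image",
    Description="Pick the one label that does NOT describe the shown image.",
    Keywords="image, labeling, annotation",
    Reward="0.02",                      # placeholder reward per assignment
    MaxAssignments=3,                   # placeholder number of annotators per image
    LifetimeInSeconds=7 * 24 * 3600,
    AssignmentDurationInSeconds=300,
    Question=html_question,
)
print("Created HIT:", response["HIT"]["HITId"])
```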

In each task, a single image was enlarged to 200 × 200 pixels for clarity and presented alongside the question: `Choose any one "incorrect" label for this image?` Annotators were given four candidate labels (e.g., `dog, cat, ship, bird`) and were instructed to select the one that does not correctly describe the image.
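
How the individual responses were consolidated is not described here; as a minimal sketch, the snippet below shows one plausible post-processing step, a per-image majority vote over the labels annotators marked as incorrect. The results file `mturk_results.csv` and its column names are hypothetical.

```python
# Minimal sketch (hypothetical file and column names): take, for every image,
# the label most frequently marked as "incorrect" by annotators.
import csv
from collections import Counter, defaultdict

votes = defaultdict(Counter)

# Each row is one annotator's answer for one image task.
with open("mturk_results.csv", newline="") as f:
    for row in csv.DictReader(f):
        votes[row["image_id"]][row["selected_label"]] += 1

# Majority choice per image; ties or single votes may need manual review.
incorrect_labels = {
    image_id: counts.most_common(1)[0][0] for image_id, counts in votes.items()
}

for image_id, label in sorted(incorrect_labels.items())[:5]:
    print(image_id, "->", label)
```
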
## Citing
If you find this dataset useful, please cite the following: