shachardon committed on
Commit b5d745e · 1 Parent(s): a5f32cd

Update README.md

Files changed (1)
  1. README.md +2 -12
README.md CHANGED
@@ -34,16 +34,6 @@ The dataset was introduced as part of the paper: [Human Learning by Model Feedba
 [ourpaper]: https://arxiv.org/abs/2311.12131 "markdown our paper"
 
 
-
-## Dataset Details
-
-### Dataset Description
-
-<!-- Provide a longer summary of what this dataset is. -->
-
-
-- **Language(s) (NLP):** English
-
 ### Dataset Sources
 
 <!-- Provide the basic links for the dataset. -->
@@ -63,14 +53,14 @@ Main Columns:
 - 'timestamp' - a timestamp of the prompt creation
 - 'label' - Ture whether an image that was generated based on that prompt was upscaled, otherwise False.
 - 'id' - unique id of the prompt
-- 'url_png' - like to the generated images (a 4-grid version)
+- 'url_png' - link to the generated images (a 4-grid version)
 - 'main_content' - prefix of the prompt, without trailing magic-words
 - 'concreteness' - concreteness score, based on the [this paper][concpaper]
 - 'word_len' - the number of words
 - 'repeat_words' - the occurrences of each word that appears more than once in the prompt, excluding stop words.
 - 'reapeat_words_ratio' - repeat_words / word_len
 - 'perplexity' - the perplexity GPT-2 assigns to each prompt.
-- 'caption_0-3' - captions that were generated by the BLIP-2 model
+- 'caption_0-3' - captions that were generated by the BLIP-2 model, with the 4 created images as its inputs.
 - 'phase' - train/test split, as was used to train image/text classifiers
 - 'magic_ratio' - the percentage of words that were recognized as magic words in the prompt
 - 'thread_id' - the id of the thread
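The 'repeat_words' / 'reapeat_words_ratio' columns described in the diff above can be illustrated with a short sketch. This is a hypothetical reimplementation of the column definitions, not the dataset's actual preprocessing code; the stop-word list, whitespace tokenization, and the exact counting rule are assumptions:

```python
from collections import Counter

# Placeholder stop-word list; the dataset likely used a standard one (e.g. NLTK's).
STOP_WORDS = {"a", "an", "the", "of", "in", "and", "with"}

def repeat_stats(prompt: str):
    """Return (word_len, repeat_words, repeat_words_ratio) for a prompt."""
    words = prompt.lower().split()
    # Count non-stop-words; repeat_words sums the occurrences of every
    # word that appears more than once, per the column description.
    counts = Counter(w for w in words if w not in STOP_WORDS)
    repeat_words = sum(c for c in counts.values() if c > 1)
    word_len = len(words)
    return word_len, repeat_words, repeat_words / word_len

print(repeat_stats("a dark castle, dark fog, cinematic cinematic lighting"))
```

Here "dark" and "cinematic" each occur twice, so repeat_words counts four occurrences over eight tokens, giving a ratio of 0.5.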