Datasets:
Size: 100K<n<1M
ArXiv:
Tags: image-text retrieval, noisy correspondence learning, NCL-specific benchmark, realistic, industry, mobile user interface
License:
We develop a new dataset named **Noise of Web (NoW)** for NCL.
Please note that because our raw data contains some sensitive business information, we only provide the **encoded image features** (\*_ims.npy) and the **token ids of the tokenized text**. For tokenization, we provide [Tokenizers](https://github.com/huggingface/tokenizers) with [BPE](https://huggingface.co/docs/tokenizers/api/models#tokenizers.models.BPE) to produce \*_caps_bpe.txt, [BertTokenizer](https://huggingface.co/transformers/v3.0.2/model_doc/bert.html#berttokenizer) with the [bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) pre-trained model to produce \*_caps_bert.txt, and [Jieba](https://github.com/fxsjy/jieba) to produce \*_caps_jieba.txt. **The vocabulary size is 10,000 for the BPE tokenizer, 32,702 for BertTokenizer, and 56,271 for Jieba** (recorded in now100k_precomp_vocab\_\*.txt). \*_ids.txt records the data indices in the original 500k dataset. In the future, we may process the original dataset and make it public.
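
For reference, here is a minimal, purely illustrative sketch of the kind of tokenization described above applied to a made-up caption. The raw captions are not released, so this is not the authors' exact preprocessing pipeline, and the sample text is an assumption.

```
# Illustrative only: the raw captions are not released, and this sample text is
# made up. It only shows the kind of tokenization described above.
import jieba
from transformers import BertTokenizer

text = "移动端用户界面示例"  # hypothetical caption

# WordPiece subwords from the bert-base-multilingual-cased vocabulary
bert_tok = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
print(bert_tok.tokenize(text))

# Jieba word segmentation
print(list(jieba.cut(text)))

# The BPE variant uses the Hugging Face tokenizers library with a trained model
# (vocabulary size 10,000 as stated above), e.g.:
#   from tokenizers import Tokenizer
#   print(Tokenizer.from_file("bpe.json").encode(text).tokens)  # "bpe.json" is hypothetical
```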
### Usage

```
import os

import numpy as np


# data_path:  your dataset name and path
# data_split: one of {train, dev, test}
# tokenizer:  one of {bpe, bert, jieba}
# vocabulary size of {bpe, bert, jieba} is {10000, 32702, 56271}
# vocab:      the vocabulary object; vocab("<start>") / vocab("<end>") return the
#             ids of the special tokens
# (the wrapper function below is only illustrative, so that the snippet is self-contained)
def load_split(data_path, data_split, tokenizer, vocab):
    # captions: each line of {split}_caps_{tokenizer}.txt is a comma-separated list of token ids
    captions = []
    with open(os.path.join(data_path, "{}_caps_{}.txt".format(data_split, tokenizer))) as f:
        for line in f:
            captions.append(line.strip())

    # prepend <start> and append <end> to every caption's token ids
    captions_token = []
    for index in range(len(captions)):
        caption = captions[index]
        tokens = caption.split(',')
        caption = []
        caption.append(vocab("<start>"))
        caption.extend([int(token) for token in tokens if token])
        caption.append(vocab("<end>"))
        captions_token.append(caption)

    # images: pre-encoded image features
    images = np.load(os.path.join(data_path, "%s_ims.npy" % data_split))

    return captions_token, images
```
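
A hypothetical call to the helper above; the directory name and the minimal `vocab` stub are assumptions for illustration, and the real special-token ids come from the released now100k_precomp_vocab\_\*.txt files.

```
# Hypothetical usage of the snippet above. The path and the `vocab` stub are
# assumptions; real special-token ids come from the released vocabulary files.
special_ids = {"<start>": 1, "<end>": 2}      # placeholder ids, not the real ones
vocab = lambda token: special_ids[token]

captions_token, images = load_split("data/now100k_precomp", "train", "bpe", vocab)
print(len(captions_token), images.shape)
```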
Additionally, you can search for the string `now100k_precomp` in `co_train.py`, `data.py`, `evaluation.py`, and `run.py` in this repo and refer to those code snippets when adapting the NoW dataset for use in your own code.