Found issues in the training set that could significantly impact the performance

#5
by sanjanagarg - opened

Ran CleanVision on the dataset to find potential issues before training a model. This surfaced a huge number of (near-)duplicates, invalid documents, and completely dark images, all of which could lead to subpar model performance. Here's a glimpse of what the issues look like:
[Screenshots of example flagged issues attached]

Check out the package here: https://github.com/cleanlab/cleanvision
To install the package with Hugging Face dependencies:

pip install "cleanvision[huggingface]"

To reproduce the above results, run these few lines of code:

from cleanvision.imagelab import Imagelab
from datasets import load_dataset

# Load the RVL-CDIP training split from the Hugging Face Hub
dataset = load_dataset("aharley/rvl_cdip", split="train")

# Audit the images for duplicates, dark/light images, and other issues
imagelab = Imagelab(hf_dataset=dataset, image_key="image")
imagelab.find_issues()
imagelab.report()

Very cool! The light and dark images are part of our task, but the "exact duplicates" returned here are very helpful. Image de-duplication has advanced quite a bit since we collected this data.

Is there a way your tool can easily identify images in the train set which have "exact duplicates" in the test set? These would be the top priority, I think.

Yep, you can concatenate the train and test sets and get duplicate results on the entire dataset. All the duplicate sets can be accessed using:
imagelab.info['exact_duplicates']['sets']

You can then use the indices to tell which duplicate sets contain samples from both the train and test sets.

Here's an example of concatenating the two splits into a single HF dataset to run CleanVision on:
https://github.com/cleanlab/cleanvision-examples/blob/main/huggingface_dataset.ipynb
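
For reference, here is a minimal sketch of that workflow. The split bookkeeping via index offsets is my own illustration (not part of the CleanVision API), and it assumes the returned duplicate sets contain integer indices into the concatenated Hugging Face dataset:

from cleanvision.imagelab import Imagelab
from datasets import load_dataset, concatenate_datasets

train = load_dataset("aharley/rvl_cdip", split="train")
test = load_dataset("aharley/rvl_cdip", split="test")

# Concatenate so duplicate detection runs over both splits at once;
# indices below len(train) come from train, the rest from test.
combined = concatenate_datasets([train, test])

imagelab = Imagelab(hf_dataset=combined, image_key="image")
imagelab.find_issues()

# Keep only duplicate sets that mix train and test indices (assumed to be integer indices)
n_train = len(train)
cross_split = [
    dup_set for dup_set in imagelab.info["exact_duplicates"]["sets"]
    if any(i < n_train for i in dup_set) and any(i >= n_train for i in dup_set)
]
print(f"{len(cross_split)} exact-duplicate sets span train and test")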

In addition to the IMAGE NOT AVAILABLE ONLINE images (of which I found about 470), I came across a document that is a letter to Rob. This letter appears 65 times in the dataset, across almost all document classes as well as the train, test, and validation splits.
A list of the "Dear Rob" letters with their associated class labels (path followed by label) is below.

[Attached image: the "Dear Rob" letter]

imagesr\r\g\e\rge31d00\503210033+-0034.tif 3
imagesc\c\e\j\cej80d00\517306722+-6724.tif 3
imagesm\m\r\r\mrr36d00\50603620-3621.tif 14
imagesg\g\t\u\gtu29c00\2084573574a.tif 2
imagese\e\p\m\epm70d00\522753037+-3041.tif 3
imagesp\p\o\h\poh93d00\508523217_508523218.tif 0
imagesw\w\b\e\wbe21d00\515945893+-5894.tif 3
imagesx\x\o\u\xou4aa00\10420895_10420896.tif 6
imagesp\p\h\b\phb45c00\2081956482_6483.tif 12
imagesl\l\t\t\ltt64f00\0060318545.tif 15
imagesc\c\l\j\clj71a00\2057435947_2057435949.tif 7
imagesr\r\n\o\rno83c00\2046023617_3618.tif 15
imagesl\l\t\o\lto85f00\0060113357.tif 8
imagesu\u\z\z\uzz17e00\2028693075.tif 1
imagesq\q\u\s\qus00c00\2085123298.tif 2
imagesk\k\u\m\kum27e00\2028747171.tif 11
imagesl\l\o\t\lot22e00\2501194244.tif 8
imagesm\m\b\z\mbz62c00\2077939966.tif 1
imagesn\n\y\f\nyf22e00\2501668539.tif 9
imagesh\h\h\v\hhv21a00\0071022619.tif 1
imagest\t\b\o\tbo70c00\2078562182.tif 1
imagese\e\a\z\eaz70c00\1001402331.tif 8
imagesx\x\d\q\xdq60d00\520761146+-1146.tif 0
imagesr\r\q\k\rqk99c00\40030538-0543.tif 6
imagesx\x\w\x\xwx34f00\0060314146.tif 12
imagesp\p\i\r\pir62e00\2042033344.tif 15
imagesf\f\f\h\ffh80d00\517543891+-3894.tif 3
imagesi\i\z\v\izv40c00\ti16830375.tif 9
imagesh\h\f\y\hfy80d00\522717672+-7673.tif 3
imagesn\n\n\n\nnn78e00\2015043481.tif 9
imagesx\x\h\t\xht10d00\502613493a-3494.tif 4
imagesl\l\f\z\lfz80e00\89120799_89120802.tif 10
imagesh\h\e\f\hef76c00\2077348049_8052.tif 0
imagesj\j\z\s\jzs09d00\50457773-7773.tif 14
imagesd\d\d\f\ddf75a00\528026434+-6434.tif 2
imagesa\a\v\d\avd91a00\1003657311.tif 9
imagesr\r\u\l\rul94d00\505870213.tif 0
imagesr\r\r\t\rrt25c00\2505287910a.tif 2
imagesq\q\g\q\qgq08e00\1003727246_1003727249.tif 9
imagesq\q\x\g\qxg23e00\2058500960.tif 4
imagesy\y\u\f\yuf51d00\515956033+-6055.tif 3
imagesk\k\q\p\kqp41c00\2085765230.tif 2
imageso\o\l\x\olx85e00\2028425256.tif 14
imagesb\b\y\z\byz40c00\ti17120055.tif 11
imagesi\i\f\p\ifp70e00\89972302.tif 8
imagesx\x\c\c\xcc30e00\91085970.tif 10
imagesi\i\n\r\inr43e00\2022906659.tif 8
imagesn\n\k\w\nkw22e00\2501652918_2501652936.tif 1
imagesd\d\n\i\dni14f00\0000109229.tif 4
imagesa\a\j\a\aja45e00\2040962215.tif 10
imagesv/v/z/o/vzo21f00/0001461863.tif 15
imagesj/j/s/a/jsa53f00/0001489550.tif 14
imagesf/f/o/w/fow43f00/0001476681.tif 13
imagesx/x/j/m/xjm21f00/0001463691.tif 13
imagesw/w/p/v/wpv43f00/0001477312.tif 13
imagesm/m/o/f/mof21f00/0001481423.tif 12
imagesj/j/t/h/jth31f00/0001447623.tif 12
imagesu/u/n/q/unq21f00/0001460963.tif 15
imagesl/l/k/m/lkm01f00/0011567541.tif 13
imagesq/q/k/s/qks31f00/0001437971.tif 13
imagesq/q/h/v/qhv43f00/0001477412.tif 13
imagesz/z/q/y/zqy21f00/0001454946.tif 13
imagesk/k/c/w/kcw01f00/0011565999.tif 15
imagest/t/r/u/tru43f00/0001477962.tif 13
imagesf/f/q/x/fqx43f00/0001491969.tif 0
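
If it helps, here is a minimal sketch for dropping a known list of files from a split, assuming the original RVL-CDIP label-file layout of one "relative/path.tif label" pair per line; the file names dear_rob_paths.txt, labels/train.txt, and labels/train_clean.txt are hypothetical placeholders:

def load_paths(path_file):
    # Read one relative image path per line, ignoring any trailing label
    # and normalizing Windows-style backslashes
    with open(path_file) as f:
        return {line.split()[0].replace("\\", "/") for line in f if line.strip()}

# Hypothetical exclude list containing the "Dear Rob" paths above
exclude = load_paths("dear_rob_paths.txt")

# Rewrite the label file without the excluded documents
with open("labels/train.txt") as src, open("labels/train_clean.txt", "w") as dst:
    for line in src:
        if line.split()[0].replace("\\", "/") not in exclude:
            dst.write(line)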

I did exactly the same type of analysis with fastdup: https://smartandeasy-my.sharepoint.com/:f:/g/personal/jordy_contract_fit/Elmxoa4_ChlCinsT6lBdYDQBcCArMDtIOJ3Z5ku3ZQMu5w?e=rxfkiN (the link points to HTML reports covering duplicates, near-duplicates, etc.).

2023-06-22 11:56:43 [INFO] Total time took 607088 ms
2023-06-22 11:56:43 [INFO] Found a total of 35106 fully identical images (d>0.990), which are 4.39 %
2023-06-22 11:56:43 [INFO] Found a total of 188747 nearly identical images(d>0.980), which are 23.59 %
2023-06-22 11:56:43 [INFO] Found a total of 769216 above threshold images (d>0.900), which are 96.15 %
2023-06-22 11:56:43 [INFO] Found a total of 40079 outlier images         (d<0.050), which are 5.01 %
2023-06-22 11:56:43 [INFO] Min distance found 0.684 max distance 1.000
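
For anyone who wants to reproduce this, here is a minimal sketch of how such a fastdup run can be launched, assuming the images have been exported to a local directory (the directory names are placeholders, and the gallery calls are from memory of the fastdup v1 API):

import fastdup

# Point fastdup at a local export of the RVL-CDIP images;
# "rvl_cdip_images/" and "fastdup_work/" are placeholder paths.
fd = fastdup.create(work_dir="fastdup_work", input_dir="rvl_cdip_images")
fd.run()

# Generate HTML galleries for duplicates and outliers
fd.vis.duplicates_gallery()
fd.vis.outliers_gallery()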

However, I am wondering how we should continue with this analysis. Dropping duplicates makes sense, but beyond that, how can we remove other noisy instances across the whole dataset?
This is also discussed in On Evaluation of Document Classification with RVL-CDIP (https://aclanthology.org/2023.eacl-main.195.pdf)


If you want an easily accessible OCRed version of RVL-CDIP to also compare (near-)duplication at the text level, you could use: https://huggingface.co/datasets/jordyvl/rvl_cdip_easyocr

A potential text-level duplication check is the Jaccard similarity of sets of character n-grams (https://en.wikipedia.org/wiki/W-shingling); see the sketch below.
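
A minimal sketch of that check, with an illustrative shingle size and made-up example strings (the 0.8 threshold mentioned in the comment is arbitrary):

def char_ngrams(text, n=5):
    # Set of character n-grams (shingles) after normalizing case and whitespace
    text = " ".join(text.lower().split())
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def jaccard(a, b):
    # |A intersect B| / |A union B|, defined as 0.0 when both sets are empty
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

doc1 = "Dear Rob, please find enclosed the quarterly report."
doc2 = "Dear Rob, please find enclosed the annual report."
sim = jaccard(char_ngrams(doc1), char_ngrams(doc2))
print(f"Jaccard similarity: {sim:.2f}")  # flag as a near-duplicate above, say, 0.8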

It would be great if we (as a community) could align at least on an updated version of the test set, so that reported results can be compared in a methodologically correct way.
Something like a test set for RVL_CDIP++. For training, anyone could choose what to keep or remove to attain better performance, but at least the test set should be standardized.


Agreed! Such an exercise would be a great way to benchmark de-duplication techniques, too. I am the author of https://aclanthology.org/2023.eacl-main.195.pdf; one caveat we found with near-duplicates is the case where two documents share the same "form" or "template" structure but have slightly different contents (e.g., two people filled out the same form with different entries). An example of this is in Figure 5 of that paper (both the resume and invoice examples), as well as in Figure 17 of the appendix of the arXiv version (https://arxiv.org/pdf/2306.12550.pdf).

I would definitely be interested in developing something like @jordyvl 's RVL_CDIP++, and I wonder if there is interest from anyone else in this thread as well!
