Unsafe Files

#12
by GetzPro - opened

Why are there unsafe files?

OSCAR org

Good question (and catch :p)! It looks like there is the same issue in C4: https://huggingface.co/datasets/allenai/c4/discussions/2.
I guess that a response from @mcpotato could help here too :)

@mcpotato any update on these? Here is another: allenai/c4

Hi! Thanks for the ping on Twitter, I hadn't paid attention to my notifications here.

It may be false positives from our malware scanner ClamAV.

I'll take a look at the files in a few days, currently out of office.

So I took the following file and it still tests positive:

# clamscan --max-scansize=4000M --max-filesize=4000M --verbose --gen-json=yes --allmatch=yes ~/Downloads/pl_meta_part_66.jsonl.gz
Loading:     8s, ETA:   0s [========================>]    8.67M/8.67M sigs
Compiling:   2s, ETA:   0s [========================>]       41/41 tasks

Scanning ~/Downloads/pl_meta_part_66.jsonl.gz
~/Downloads/pl_meta_part_66.jsonl.gz: Win.Trojan.B-194 FOUND

----------- SCAN SUMMARY -----------
Known viruses: 8667134
Engine version: 1.0.0
Scanned directories: 0
Scanned files: 1
Infected files: 1
Data scanned: 1343.86 MB
Data read: 308.17 MB (ratio 4.36:1)
Time: 44.965 sec (0 m 44 s)
Start Date: 2023:05:25 15:05:49
End Date:   2023:05:25 15:06:34

When I check the virus signature, this is what I get:

# sigtool --find-sigs Win.Trojan.B-194 | sigtool --decode-sigs
VIRUS NAME: Win.Trojan.B-194
TARGET TYPE: ANY FILE
OFFSET: *
DECODED SIGNATURE:
Don't turn off the power or remove the diskette while Disk Killer is Processing!

I suppose this means that somewhere in your file there is a match for the sentence above ("Don't turn off ..."), which is the pattern ClamAV uses to decide whether Win.Trojan.B-194 is present.
So I'd err on the side of this not being a false positive: your file may contain the source (or at least the text) of a virus. For the other reported files it's case by case; you can run the commands I ran here against an up-to-date virus db to check yourselves, and there may well be false positives in the mix (see the ClamAV documentation to get started).
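Concretely, checking a file yourself boils down to refreshing the signature database and re-running the scan (assuming clamav is installed; freshclam may need root or the daemon stopped depending on your setup, and the download path below is just where I put my copy):

# freshclam
# clamscan --max-scansize=4000M --max-filesize=4000M --allmatch=yes ~/Downloads/pl_meta_part_66.jsonl.gz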

FYI, I'm thinking of implementing periodic re-scanning of the files. Currently we only scan them once, but it would be best to re-scan them as the virus db evolves over time.
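Nothing is decided yet, but the idea is roughly a scheduled job along these lines (the schedule, paths and log location are purely illustrative, not what the Hub actually runs):

# illustrative crontab entry: refresh signatures and re-scan stored files every Sunday at 03:00
0 3 * * 0 freshclam --quiet && clamscan -r --infected /path/to/stored/files >> /var/log/dataset-rescan.log 2>&1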

Same story for en_meta_part_3137.jsonl.gz:

# clamscan --max-scansize=4000M --max-filesize=4000M --verbose --gen-json=yes --allmatch=yes ~/Downloads/en_meta_part_3137.jsonl.gz
Loading:     7s, ETA:   0s [========================>]    8.67M/8.67M sigs
Compiling:   2s, ETA:   0s [========================>]       41/41 tasks

Scanning ~/Downloads/en_meta_part_3137.jsonl.gz
~/Downloads/en_meta_part_3137.jsonl.gz: Legacy.Trojan.Agent-36873 FOUND

----------- SCAN SUMMARY -----------
Known viruses: 8667134
Engine version: 1.0.0
Scanned directories: 0
Scanned files: 1
Infected files: 1
Data scanned: 1348.39 MB
Data read: 312.43 MB (ratio 4.32:1)
Time: 57.277 sec (0 m 57 s)
Start Date: 2023:05:25 15:14:43
End Date:   2023:05:25 15:15:40
# sigtool --find-sigs Legacy.Trojan.Agent-36873 | sigtool --decode-sigs
VIRUS NAME: Legacy.Trojan.Agent-36873
TARGET TYPE: ANY FILE
OFFSET: *
DECODED SIGNATURE:
we would first send a letter to the mayor of the city where you reside and direct them to close your bank account
OSCAR org

Thanks for the response.
I have checked the flagged document in pl_meta_part_66.jsonl.gz; it looks like an article about viruses (including the messages such viruses would print).
Translating the sentences around "Don't turn off the power or remove the diskette while Disk Killer is Processing!" reveals the following:

Disk Killer
The principle of operation is very identical to its name, although the disk manages to be saved after the virus picks at the data and "encrypts" it
although in this case it is rather too much to say. Once activated, Disk Killer shows a message that reads like this:
Disk Killer -- Version 1.00 by COMPUTER OGRE 04/01/1989 Warning !! Don't turn off the power or remove the diskette while Disk Killer is Processing! PROCESSING
An ignorant user may think - well, the system is picking, picking, telling you not to take out the floppy disk, not to turn off the computer ... well. I would have wondered about the name of the program itself, but what the hell.

This, in my opinion, is not a true positive, but I understand that such cases are hard to tell apart automatically.
I'll check some other examples to get an idea of what's flagged :)
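For reference, locating the offending record does not require decompressing the whole file; a quick zgrep on the decoded signature string is enough (trimming the output, since each JSONL line is very long):

# zgrep -c "Don't turn off the power or remove the diskette" ~/Downloads/pl_meta_part_66.jsonl.gz
# zgrep -n "Don't turn off the power or remove the diskette" ~/Downloads/pl_meta_part_66.jsonl.gz | cut -c 1-300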

For en_meta_part_3137.jsonl.gz, the whole document is a blog post about spam emails, as the first lines show:

"Jay, I have to say you’ve got to be the smartest scammer we’ve come across! This gag inspired me to even get another Spam Jam out there
Posted on Categories comedy, Spam JamTags email, Spam JamLeave a comment on Spam Jam – Jay The Assassin
KPB vs FBI – Spam Jam
It’s been ages since my last Spam Jam (it seems) and this one just couldn’t be ignored
It’s way up there in the creativity department
Definitely an A FOR EFFORT
Needless to say, I’ve never heard of the guy nor know what the hell he’s babbling about

The blog post quotes the spam/scam emails themselves, including the flagged sentence.

I'm not sure we want to go down the path of selectively running the scanner on specific files, or of maintaining custom sets of virus signatures based on file extensions (see the sketch below). Is the virus report something you can live with?
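For context, a "custom set of signatures" would mean pointing the scanner at a hand-curated database instead of the full official one, roughly like this (the database directory is made up); curating and maintaining such sets per file type is exactly the burden I would rather avoid:

# clamscan -d /path/to/custom_signature_dir/ ~/Downloads/pl_meta_part_66.jsonl.gz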

OSCAR org

I can absolutely live with the virus reports! I mainly wanted to explore and document the source of these (potentially) false positives.
It's an interesting problem nonetheless, and I think it would help/reassure our users if we said something about these virus warnings in the README.md.
I guess this issue can be closed, but I'll open another one to remind myself to add that warning to the README.md.

Thanks :)

uj changed discussion status to closed

I think this pertains to a larger discussion, as many files show up as unsafe.
https://huggingface.co/datasets/oscar-corpus/OSCAR-2201/discussions/12#645467a3ecdd69b81350336f

I think on large datasets this is inevitable. I'd rather have a small portion be wrongly flagged as unsafe than have no checks at all. I don't think there's currently a solution that has no false positives and still detects all viruses.

One could argue that since the files in these datasets are not executables they pose no danger (though the compression mechanism could still be abused), and that the checks are therefore run needlessly.
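Just to illustrate that point: a crude guard against decompression abuse is to check the expansion ratio before unpacking. gzip records the uncompressed size in the archive (modulo 4 GiB, so this is only approximate for very large files), and the 100x threshold below is arbitrary:

# gzip -l ~/Downloads/pl_meta_part_66.jsonl.gz
# gzip -l ~/Downloads/pl_meta_part_66.jsonl.gz | awk 'NR==2 && $2 > 100*$1 {print "expansion over 100x, refusing to unpack"}'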

I have to think about this; what are your opinions on the matter?

I'm conflicted; there are positives and negatives to both options (one being removing the checks, the other keeping them).

On removing:

  • As you said, the files (supposedly) pose no danger as-is
  • From my (very limited) inspection, the false positives can be traced to legitimate articles (of good quality, it seems :p). Keeping the warnings would "soft"-remove this content from the corpus, as people might filter out flagged entries.
  • A sizeable portion of these datasets carries these warnings, which makes me wonder how useful they are (people might pay attention to a single warning, but interpret hundreds of warnings as a bug)

On keeping them:

  • While the files pose no danger as-is, the data is used to train models that could in turn output malicious content. If one of the flagged documents contains actually harmful source code and the model's output ends up being executed somewhere, that could pose a security risk.
  • We already use annotations to signal content that could be harmful (see the harmful__pp metadata), as well as length-related annotations. This looks to me like just another annotation, which could be useful to people! (The granularity is wrong here, though: the warning is per file, so we'd need to bake it into our per-record metadata; see the sketch below.)
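To make that concrete, if such a flag lived in the per-record metadata, filtering would be trivial on the user side. The av_flagged field below is purely hypothetical, just to show the shape this could take:

# zcat pl_meta_part_66.jsonl.gz | jq -c 'select(.av_flagged != true)' | gzip > pl_meta_part_66.filtered.jsonl.gz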

Random opinion: HF should probably add a way to view the affected data row/record that triggered the virus flag, and to acknowledge/dismiss such warnings into a file tracked in the system.

I think this might be the best compromise: detect by default, and let warnings be acknowledged and dismissed for items that are falsely flagged.

Who would be deciding that the warning is a false positive?

The dataset authors/maintainers

If you are concerned about them being bad actors, the virus safeguard is pointless anyway, as they could simply have manipulated the data to slip the virus past the scanner, or carried out much worse prompt-poisoning attacks. (Besides, you could still display the number of acknowledged warnings, just in a less aggressive manner, with a link to inspect the data and decide whether the dismissal makes sense.)

The "all in red" and phrasing of "has been marked unsafe" will scare off typical coporate folks (tbh that may not be such a bad thing)

I understand that the display is a bit rough, but I disagree that "if you are concerned about them being bad actors, the virus safeguard is pointless anyway". We need some external verification; otherwise, if the maintainers can "self-validate", then any check is pointless.

OSCAR org

The problem is also that maintainers, in our case, do not necessarily have the means to inspect the whole dataset (it is far too big for us to check manually for viruses), so I do like this feature. However, I do agree with @picocreator that once the antivirus flags a file, a manual check can be performed, and if after that check we conclude that the file is harmless, I wouldn't mind having the option of removing the warning somehow.

If on the contrary there is a threat, we could probably try to address it and remove it from the file.

This warning message has always annoyed me as well.
I would appreciate more information from Hugging Face about why it was triggered, but I guess that's difficult, since it might give potential malicious actors a way to evade detection.

I understand that there are sliding scales of concern when it comes to the maintainer. However, I feel that at the very least there should be an option where the banner says, in yellow (instead of red):

5 files were detected as potentially unsafe, but have been verified by the authors. Click here to find out more

That would kinda help; this thread is kinda doing that already, in a roundabout way.

Also: the number of false flags across many datasets already makes me ignore them, especially when the count seems reasonable given the dataset size.
