---
license: unknown
---

# Dataset Card for the SpamAssassin public mail corpus

## Dataset Description

- **Homepage:** https://spamassassin.apache.org/old/publiccorpus/readme.html

### Dataset Summary

This is a selection of mail messages, assembled by members of the
SpamAssassin project, suitable for use in testing spam filtering systems.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

- The `text` config normalizes all character sets to UTF-8 and dumps the
  MIME tree as a JSON list of lists.
- The `raw` config does not parse messages at all, leaving the full
  headers and content as binary.
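
The exact shape of the dumped MIME tree is not documented here; a minimal
sketch, assuming each node is either a string (a leaf body) or a list of
child nodes, that flattens such a dump back into plain-text chunks:

```python
import json

def flatten_mime_tree(node):
    """Recursively collect leaf strings from a nested list-of-lists MIME dump.

    The node layout (strings as leaves, lists as multipart containers) is an
    assumption about the `text` config, not a documented schema.
    """
    if isinstance(node, str):
        return [node]
    chunks = []
    for child in node:
        chunks.extend(flatten_mime_tree(child))
    return chunks

# Hypothetical dump of a two-part (plain + HTML) message.
tree = json.loads('[["Hello in plain text"], ["<p>Hello in HTML</p>"]]')
print(flatten_mime_tree(tree))
```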

### Data Fields

- `label`: `spam` or `ham`
- `group`: SpamAssassin has grouped these samples into the categories
  `hard_ham`, `easy_ham`, `easy_ham_2`, `spam`, `spam_2`
- `text`: normalized text of the message bodies
- `raw`: full binary headers and contents of messages
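
The relationship between `group` and `label` is not spelled out above; a
hypothetical sketch of the presumed mapping (groups whose names start with
`spam` carry the `spam` label, the rest `ham`) looks like:

```python
def label_for_group(group: str) -> str:
    """Assumed mapping from SpamAssassin group name to binary label.

    This is an inference from the group names, not a documented rule.
    """
    return "spam" if group.startswith("spam") else "ham"

for g in ["hard_ham", "easy_ham", "easy_ham_2", "spam", "spam_2"]:
    print(g, "->", label_for_group(g))
```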

### Data Splits

Only a _train_ split is provided.

## Dataset Creation

### Curation Rationale

It is hoped that this dataset can help verify that modern NLP tools can
solve old NLP problems.

### Source Data

#### Initial Data Collection and Normalization

[The upstream corpus description](https://spamassassin.apache.org/old/publiccorpus/readme.html)
goes into detail on collection methods. The work here to recover text bodies
is largely done with [email.parser](https://docs.python.org/3/library/email.parser.html)
and [ftfy](https://pypi.org/project/ftfy/).
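
A minimal sketch of the kind of recovery step described above, using only the
standard-library `email` package (the actual pipeline also runs ftfy over the
decoded text; its exact invocation is not documented here and is omitted):

```python
from email import message_from_bytes
from email.policy import default

# Hypothetical raw message standing in for one sample of the `raw` config.
RAW = (b"Content-Type: text/plain; charset=utf-8\r\n"
       b"Subject: hello\r\n"
       b"\r\n"
       b"Hi there\r\n")

def text_bodies(raw: bytes) -> list[str]:
    """Decode every text/plain part of a raw message to a Python string.

    The real pipeline would additionally pass each string through
    ftfy.fix_text() to repair mojibake before storing it in `text`.
    """
    msg = message_from_bytes(raw, policy=default)
    bodies = []
    for part in msg.walk():
        if part.get_content_type() == "text/plain":
            bodies.append(part.get_content())
    return bodies

print(text_bodies(RAW))
```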

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]