ealvaradob committed on
Commit 7676290
1 Parent(s): 23b8c17

Update README.md

Files changed (1)
  1. README.md +63 -2
README.md CHANGED
@@ -1,5 +1,5 @@
  ---
- license: mit
+ license: apache-2.0
  task_categories:
  - text-classification
  language:
@@ -11,4 +11,65 @@ tags:
  - url
  - html
  - text
- ---
+ ---
+ # Phishing Dataset
+
+ Phishing dataset compiled from various resources for classification and phishing detection tasks.
+
+ ## Dataset Details
+
+ The dataset has two columns: `text` and `label`. The `text` field contains samples of:
+
+ - URLs
+ - SMS messages
+ - Email messages
+ - HTML code
+
+ Each sample is labeled as **1 (Phishing)** or **0 (Benign)**.
+
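For illustration, here is a minimal sketch of loading the data and inspecting the two columns with the `datasets` library. The repository id `ealvaradob/phishing-dataset` and the `train` split are assumptions and may need to be adjusted to the actual dataset path and configuration.

```python
# Minimal sketch: load the dataset and look at one sample.
# The repo id "ealvaradob/phishing-dataset" and the "train" split are assumptions.
from datasets import load_dataset

dataset = load_dataset("ealvaradob/phishing-dataset", split="train")

sample = dataset[0]
print(sample["text"][:200])  # a URL, SMS, email body, or HTML snippet
print(sample["label"])       # 1 = Phishing, 0 = Benign
```
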
+ ### Source Data
+
+ This dataset is a compilation of four sources, described below:
+
+ - [Mail dataset](https://www.kaggle.com/datasets/subhajournal/phishingemails) containing the body text of various emails that can be used to detect phishing emails
+ through extensive text analysis and classification with machine learning. It contains over 18,000 emails
+ generated by Enron Corporation employees.
+
+ - [SMS message dataset](https://data.mendeley.com/datasets/f45bkkt8pr/1) of more than 5,971 text messages. It includes 489 Spam messages, 638 Smishing messages
+ and 4,844 Ham messages. The dataset contains attributes extracted from malicious messages that can be used
+ to classify messages as malicious or legitimate. The data was collected by converting images obtained from
+ the Internet into text using Python code.
+
+ - [URL dataset](https://www.kaggle.com/datasets/harisudhan411/phishing-and-legitimate-urls) with more than 800,000 URLs, of which 52% of the domains are legitimate and 47% are
+ phishing domains. It is a collection of data samples from various sources; the URLs were collected from the
+ JPCERT website, existing Kaggle datasets, GitHub repositories where the URLs are updated once a year and
+ some open-source databases, including Excel files.
+
+ - [Website dataset](https://data.mendeley.com/datasets/n96ncsr5g4/1) of 80,000 instances of legitimate websites (50,000) and phishing websites (30,000). Each
+ instance contains the URL and the HTML page. Legitimate data were collected from two sources: 1) a simple
+ keyword search on the Google search engine, from which the first 5 URLs of each search were collected,
+ with domain restrictions limiting collection to a maximum of 10 pages per domain to keep the final
+ collection diverse; and 2) approximately 25,874 active URLs collected from the Ebbu2017 Phishing Dataset
+ repository. Three sources were used for the phishing data: PhishTank, OpenPhish and PhishRepo.
+
+ #### Dataset Processing
+
+ Primarily, this dataset is intended to be used in conjunction with the BERT language model. Therefore, it has
+ not been subjected to the traditional preprocessing usually applied in NLP tasks such as text classification.
+
+ _Are stemming, lemmatization, stop-word removal, etc., necessary to improve the performance of BERT?_
+
+ In general, **NO**. Preprocessing will not change the output predictions. In fact, removing stop words (which
+ are considered noise in conventional text representations such as bag-of-words or tf-idf) can and probably will
+ worsen the predictions of your BERT model. Since BERT uses the self-attention mechanism, these "stop words"
+ are valuable information for BERT. The same goes for punctuation: a question mark can certainly change the
+ overall meaning of a sentence. Therefore, eliminating stop words and punctuation marks would only mean
+ eliminating context that BERT could have used to get better results.
+
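To make this concrete, below is a minimal sketch of feeding the raw, unpreprocessed `text` samples to a BERT tokenizer and sequence-classification head with the `transformers` library. The checkpoint `bert-base-uncased`, the two-label head and the example strings are illustrative assumptions, and the model would need to be fine-tuned on this dataset before its predictions mean anything.

```python
# Minimal sketch: raw text goes straight into the tokenizer, with no stemming,
# stop-word removal or punctuation stripping.
# "bert-base-uncased", num_labels=2 and the example texts are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

texts = [
    "Dear user, your account has been suspended. Click http://example-login.example.com to restore access.",
    "Hi Anna, the meeting moved to 3pm tomorrow. See you then!",
]

inputs = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predictions = logits.argmax(dim=-1)  # 1 = Phishing, 0 = Benign (after fine-tuning)
print(predictions.tolist())
```
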
+ However, if this dataset is to be used with another type of model, preprocessing for NLP tasks should
+ perhaps be considered. That is at the discretion of whoever wishes to employ this dataset.
+
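As a contrast, here is a minimal sketch of that alternative route: a classic bag-of-words pipeline (tf-idf with lowercasing and English stop-word removal) feeding a logistic-regression classifier with scikit-learn. Again, the repository id is an assumption, and the preprocessing choices are only examples of what one might consider for a non-BERT model.

```python
# Minimal sketch of a non-BERT baseline where classic preprocessing can matter:
# tf-idf features (lowercasing, English stop-word removal) + logistic regression.
# The repo id "ealvaradob/phishing-dataset" is an assumption.
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

dataset = load_dataset("ealvaradob/phishing-dataset", split="train")

X_train, X_test, y_train, y_test = train_test_split(
    dataset["text"], dataset["label"], test_size=0.2, random_state=42
)

vectorizer = TfidfVectorizer(lowercase=True, stop_words="english", max_features=50_000)
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train_vec, y_train)

print("accuracy:", accuracy_score(y_test, clf.predict(X_test_vec)))
```
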
+ For more information, check these links:
+
+ - https://stackoverflow.com/a/70700145
+ - https://datascience.stackexchange.com/a/113366