---
license: cc-by-4.0
task_categories:
- text-classification
- feature-extraction
- tabular-classification
language:
- af
- en
- et
- sw
- sv
- sq
- de
- ca
- hu
- da
- tl
- so
- fi
- fr
- cs
- hr
- cy
- es
- sl
- tr
- pl
- pt
- nl
- id
- sk
- lt
- no
- lv
- vi
- it
- ro
- ru
- mk
- bg
- th
- ja
- ko
- multilingual
size_categories:
- 1M<n<10M
configs:
- config_name: default
  data_files:
  - split: train
    path:
    - "phishing_features_train.csv"
    - "phishing_url_train.csv"
  - split: test
    path:
    - "phishing_features_val.csv"
    - "phishing_url_val.csv"
---
**I have decided to release the auto-moderation models all at once sometime in July/August 2023. The datasets for training these models will be available first.**
**THE DATA ANALYSIS BELOW IS INCORRECT. I'M REDOING IT.**
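A minimal sketch of loading the CSVs declared in the YAML config above with the `datasets` library, assuming the files have been downloaded locally:

```python
from datasets import load_dataset

# Load the features CSVs as their own train/test splits.
# (The default config above concatenates the URL and features CSVs per split.)
features = load_dataset(
    "csv",
    data_files={
        "train": "phishing_features_train.csv",
        "test": "phishing_features_val.csv",
    },
)
print(features["train"].column_names)
```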
The *features* dataset is original, and my feature extraction method is covered in [feature_extraction.py](./feature_extraction.py).
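For illustration, here is a hypothetical sketch of what URL-level feature extraction can look like; the feature names and logic below are placeholders, and the actual 22+ features live in [feature_extraction.py](./feature_extraction.py):

```python
import re
from urllib.parse import urlparse

# Hypothetical URL-only features; the real extraction also inspects
# page content and requires a live HTTP fetch for several features.
def extract_url_features(url: str) -> dict:
    parsed = urlparse(url)
    host = parsed.netloc
    return {
        "url_length": len(url),
        "uses_https": parsed.scheme == "https",
        "num_subdomains": max(host.count(".") - 1, 0),
        "has_ip_address": bool(re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}(:\d+)?", host)),
        "num_digits": sum(c.isdigit() for c in url),
        "has_at_symbol": "@" in url,
    }

print(extract_url_features("http://192.168.0.1/login?user=admin"))
```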
The *features* dataset covers 911,180 websites that were online at the time of data collection. The plots below show the regression lines and correlation coefficients between the 22+ extracted features and `is_malicious`.
Plotting the lifespans of the URLs shows that the oldest website has been online since November 7th, 2008, while the most recent phishing websites appeared as late as July 10th, 2023.
As of July 2023, there is no correlation between `is_malicious` and the columns `meta_percentage`, `mouseover_changes`, `not_indexed_by_google`, `right_click_disabled`, and `popup_window_has_text_field`, while the scatterplots show negative correlations for `age_of_domain` and `domain_registration_length`, among others. These correlations contradict some [analyses of researchers in 2013 on phishing detection](./Phishing_Websites_Features.docx).
Most features have very weak correlations with `is_malicious`; only a few show even a weak correlation.
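These coefficients can be reproduced in a few lines of pandas; a minimal sketch, assuming a local copy of the features CSV with a numeric `is_malicious` column:

```python
import pandas as pd

df = pd.read_csv("phishing_features_train.csv")

# Pearson correlation of every numeric feature against the label,
# sorted by absolute strength.
corr = (
    df.corr(numeric_only=True)["is_malicious"]
    .drop("is_malicious")
    .sort_values(key=abs, ascending=False)
)
print(corr)
```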
I split the classification task into two stages, anticipating the limited availability of online phishing websites due to their short lifespans, as well as the possibility that existing phishing research is out of date:
1. a small multilingual BERT model, finetuned on 2,436,727 legitimate and malicious URLs, that outputs the confidence level of a URL being malicious to model #2
2. (probably) LightGBM, to analyze that confidence level along with roughly 19 extracted features
This way, I can make the most of the limited phishing websites available.
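A hedged sketch of how the two stages above could fit together; the base model name, feature columns, and training data below are placeholders, not the released models:

```python
import lightgbm as lgb
import numpy as np
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Stage 1: a multilingual BERT scores how likely a URL is malicious.
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
bert = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2
)

def url_confidence(url: str) -> float:
    """Confidence (0..1) that a URL is malicious, per the BERT head."""
    inputs = tokenizer(url, truncation=True, max_length=128, return_tensors="pt")
    with torch.no_grad():
        logits = bert(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

# Stage 2: LightGBM combines that confidence with the extracted features.
X_features = np.random.rand(100, 19)       # placeholder for ~19 extracted features
y = np.random.randint(0, 2, size=100)      # placeholder 0/1 labels
confidences = np.random.rand(100, 1)       # would come from url_confidence()
X = np.hstack([confidences, X_features])

clf = lgb.LGBMClassifier(n_estimators=100)
clf.fit(X, y)
print(clf.predict_proba(X[:5])[:, 1])
```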
![Phish Eater Data Analysis](https://i.imgur.com/Hctxv4h.png)
## Source of the URLs
- https://moz.com/top500
- https://phishtank.org/phish_search.php?valid=y&active=y&Search=Search
- https://www.kaggle.com/datasets/siddharthkumar25/malicious-and-benign-urls
- https://www.kaggle.com/datasets/sid321axn/malicious-urls-dataset
- https://github.com/ESDAUNG/PhishDataset
- https://github.com/JPCERTCC/phishurl-list
- https://github.com/Dogino/Discord-Phishing-URLs
## Reference
- https://www.kaggle.com/datasets/akashkr/phishing-website-dataset
- https://www.kaggle.com/datasets/shashwatwork/web-page-phishing-detection-dataset
- https://www.kaggle.com/datasets/aman9d/phishing-data
## Side notes
- Cloudflare offers an [API for URL scanning](https://developers.cloudflare.com/api/operations/phishing-url-information-get-results-for-a-url-scan), with a generous global rate limit of 1200 requests every 5 minutes.
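To stay under that limit, a simple client-side throttle suffices; a sketch with a placeholder endpoint and auth header (see the linked Cloudflare docs for the real API):

```python
import time
import requests

# 1200 requests per 300 seconds -> at most one request every 0.25 s on average.
MIN_INTERVAL = 300 / 1200

def scan_urls(urls, api_token):
    last_request = 0.0
    for url in urls:
        wait = MIN_INTERVAL - (time.monotonic() - last_request)
        if wait > 0:
            time.sleep(wait)
        last_request = time.monotonic()
        resp = requests.get(
            "https://api.cloudflare.com/client/v4/<placeholder-scan-endpoint>",
            params={"url": url},
            headers={"Authorization": f"Bearer {api_token}"},
            timeout=30,
        )
        yield url, resp.status_code
```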