---
license: odc-by
task_categories:
- text-generation
language:
- en
pretty_name: Primus-FineWeb
tags:
- cybersecurity
- pretraining
- FineWeb
size_categories:
- 1M<n<10M
extra_gated_fields:
  Affiliation: text
  Country: country
  I want to use this dataset for:
    type: select
    options:
    - Research
    - Commercial
    - label: Other
      value: other
  Job title:
    type: select
    options:
    - Student
    - Research graduate
    - AI researcher
    - AI developer/engineer
    - Cybersecurity researcher
    - Reporter
    - Other
  geo: ip_location
library_name: transformers
---

> ⭐ Please download the dataset from [here](https://huggingface.co/datasets/trendmicro-ailab/Primus-FineWeb).

# PRIMUS: A Pioneering Collection of Open-Source Datasets for Cybersecurity LLM Training

## 🤗 Primus-FineWeb

The **Primus-FineWeb** dataset was constructed by filtering cybersecurity-related text from FineWeb, a refined version of Common Crawl. We began by leveraging _Primus-Seed_, a high-quality dataset of manually curated cybersecurity text, as positive samples, then sampled ten times that amount of FineWeb text as negative samples and trained a **binary cybersecurity classifier** based on TinyBERT. Using this classifier, we assigned every FineWeb text a score between **0 and 1** and kept only texts scoring above **0.003**, yielding a corpus of 15.3 billion tokens. However, after discovering a significant amount of duplicate content, we performed deduplication, reducing the final dataset to **🔥 2.57 billion tokens of cybersecurity corpus**.
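In code, the score-and-filter step reads roughly as below. The keyword-based `score` function is a toy stand-in for the TinyBERT classifier (which is not reproduced here); only the 0.003 cutoff comes from the text:

```python
# Toy sketch of the score-and-filter step. The real scorer is the TinyBERT-based
# binary classifier from the paper; this keyword heuristic only illustrates the flow.
THRESHOLD = 0.003

KEYWORDS = {"malware", "phishing", "exploit", "vulnerability", "ransomware"}

def score(text):
    """Stand-in classifier: fraction of words that are cybersecurity keywords."""
    words = [w.strip(".,").lower() for w in text.split()]
    return sum(w in KEYWORDS for w in words) / max(len(words), 1)

def filter_corpus(texts):
    """Keep only texts whose cybersecurity score exceeds the threshold."""
    return [t for t in texts if score(t) > THRESHOLD]

docs = [
    "A new ransomware exploit targets an unpatched vulnerability.",
    "The recipe calls for two cups of flour and a pinch of salt.",
]
kept = filter_corpus(docs)  # only the cybersecurity sentence survives
```

In the actual pipeline, `score` would be the classifier's positive-class probability rather than a keyword ratio.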

🚀🚀 For more details, see our paper:
[https://arxiv.org/abs/2502.11191](https://arxiv.org/abs/2502.11191)

---

## Why was the threshold set at 0.003?

We divided the score range (0–1) into several bins and randomly sampled 50 examples from each bin. These samples were then judged by GPT-4o to estimate the proportion of text that was "_truly_" cybersecurity-related. We found that when the score fell below 0.003, the proportion of cybersecurity text dropped below 50%.
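The calibration procedure above can be sketched as follows. The bin edges here are illustrative assumptions (the paper's actual bins are not listed in this card), and the boolean labels stand in for the GPT-4o judgments:

```python
# Sketch of the threshold-calibration procedure: bin classifier scores, estimate
# per-bin precision from labeled samples, and take the lowest bin edge whose
# precision stays at or above 50%. BIN_EDGES are assumed, not from the paper.
BIN_EDGES = [0.0, 0.001, 0.003, 0.01, 0.1, 1.0]

def bin_index(score):
    """Index of the half-open bin [edge_i, edge_{i+1}) containing score."""
    for i in range(len(BIN_EDGES) - 1):
        if BIN_EDGES[i] <= score < BIN_EDGES[i + 1]:
            return i
    return len(BIN_EDGES) - 2  # score == 1.0 falls into the top bin

def precision_per_bin(samples):
    """samples: (score, is_cyber) pairs -> {bin index: fraction labeled cyber}."""
    hits, totals = {}, {}
    for score, is_cyber in samples:
        b = bin_index(score)
        totals[b] = totals.get(b, 0) + 1
        hits[b] = hits.get(b, 0) + int(is_cyber)
    return {b: hits[b] / totals[b] for b in totals}

def lowest_good_bin(precision, min_precision=0.5):
    """Lower edge of the lowest bin whose sampled precision meets the cutoff."""
    good = [b for b, p in sorted(precision.items()) if p >= min_precision]
    return BIN_EDGES[good[0]] if good else None

# Toy labeled sample: low-score bins are mostly noise, higher bins mostly cyber.
samples = [(0.0005, False), (0.002, False), (0.002, False),
           (0.005, True), (0.005, True), (0.02, True)]
threshold = lowest_good_bin(precision_per_bin(samples))  # lowest acceptable edge
```

With this toy data the lowest acceptable bin edge happens to be 0.003; in practice the paper sampled 50 real examples per bin and scored them with GPT-4o.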

<img src="https://i.imgur.com/XbqpmbI.png" alt="Threshold Selection" width="60%">

## FineWeb: Cybersecurity Score vs. Token Count

<img src="https://i.imgur.com/6twJL1p.png" alt="Cybersecurity Score vs. Token Count" width="65%">

---

## License

This dataset is released under the **ODC-By** license. However, you must still comply with the **FineWeb license** and the **Common Crawl Terms of Use**.