Matteo Rinaldi ruggsea committed on
Commit
a0ada14
•
1 Parent(s): 5ad75a6

- Adding dataset card (c5c2c0e3547e0fd0372dbedec997fc92d0b73e07)
- Revert "Adding dataset card" (5c9a3d9454489fc458c63582e68b9782d167ddb4)
- Finalizing readme (fbf20e72acaabb289a6b6927868f41a253f9420d)
- Fixing YAML (73e73de9f146346dbbb06319aa8b35d091208674)


Co-authored-by: Ruggero Marino Lazzaroni <ruggsea@users.noreply.huggingface.co>

README.md CHANGED
@@ -1,27 +1,84 @@
  ---
  dataset_info:
    features:
-   - name: title
-     dtype: string
-   - name: author
-     dtype: string
-   - name: id
-     dtype: int32
-   - name: timestamp
-     dtype: string
-   - name: progressive_number
-     dtype: int32
-   - name: original_url
-     dtype: string
-   - name: newsgroup
-     dtype: string
-   - name: text
-     dtype: string
    splits:
-   - name: train
-     path: "parquet/*.parquet"
-     num_bytes: 72373684017
-     num_examples: 85010057
    download_size: 0
    dataset_size: 72373684017
  ---
  ---
  dataset_info:
    features:
+   - name: title
+     dtype: string
+   - name: author
+     dtype: string
+   - name: id
+     dtype: int32
+   - name: timestamp
+     dtype: string
+   - name: progressive_number
+     dtype: int32
+   - name: original_url
+     dtype: string
+   - name: newsgroup
+     dtype: string
+   - name: text
+     dtype: string
    splits:
+   - name: train
+     path: "parquet/*.parquet"
+     num_bytes: 72373684017
+     num_examples: 85010057
    download_size: 0
    dataset_size: 72373684017
  ---
+ # Usenet Archive IT Dataset 🇮🇹
+
+ ## Description
+
+ ### Dataset Content
+
+ This dataset contains Usenet posts from Italian-language newsgroups belonging to the `it`, `it-alt` and `italia` hierarchies. The data has been archived and converted to the Parquet format for easy processing. The only preprocessing applied to the text was the removal of the source code of a few malicious scripts that were present in the original data and were causing Hugging Face to flag the dataset as malicious.
+
+ This dataset contributes to the [mii-community](https://huggingface.co/mii-community) project, aimed at advancing the creation of Italian open-source Large Language Models (LLMs). 🇮🇹 🤖
+
+ ### Descriptive Statistics
+
+ The dataset contains 85,010,057 posts from 11,956,999 threads across 539 newsgroups, written between 1995 and 2024. Threads have around 7 posts on average, with a median of 3. The post texts sum to 55,885,335,313 characters, or approximately 10-20B tokens; the average post is 657 characters long, and the median is 380 characters.
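As a quick sanity check, the headline figures above are mutually consistent; a minimal sketch using only the numbers quoted in this section:

```python
# Figures quoted in the Descriptive Statistics section above.
total_chars = 55_885_335_313
num_posts = 85_010_057
num_threads = 11_956_999

avg_post_len = total_chars / num_posts      # mean characters per post
posts_per_thread = num_posts / num_threads  # mean posts per thread

print(round(avg_post_len))         # 657, matching the stated average length
print(round(posts_per_thread, 1))  # 7.1, i.e. "around 7 posts" per thread
```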
+
+ ### Languages
+
+ The dataset should contain only Italian-language posts, although some posts may be in other languages: no language filtering was applied, since posts were expected to be in Italian.
+
+ ## Dataset Structure
+
+ Each record in the dataset has the following fields:
+
+ - `title`: The title of the post.
+ - `author`: The username of the author of the post.
+ - `id`: The unique identifier of the post.
+ - `timestamp`: The timestamp of the post.
+ - `progressive_number`: An integer identifying the thread number within the newsgroup.
+ - `original_url`: The URL of the original post on Google Groups.
+ - `newsgroup`: The name of the newsgroup the post belongs to.
+ - `text`: The text content of the post.
+
+ This repo contains the dataset in Parquet format, split into multiple files inside the `parquet` folder, each holding a portion of the records. The files are named `usenet_converted_*.parquet`, where `*` is a number indicating the order of the file.
+ The original JSONL data is also included, as bz2-compressed files.
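To make the schema concrete, here is a toy record with the eight fields above (all values are invented for illustration; only the field names and dtypes come from the schema):

```python
# Hypothetical record; values are made up, field names follow the schema.
record = {
    "title": "Re: configurazione modem",           # string
    "author": "mario.rossi",                       # string
    "id": 123456,                                  # int32
    "timestamp": "1998-03-14 09:21:00",            # string (exact format may vary)
    "progressive_number": 42,                      # int32: thread number in the newsgroup
    "original_url": "https://groups.google.com/",  # string (placeholder URL)
    "newsgroup": "it.comp.hardware",               # string
    "text": "Ciao, qualcuno ha provato il nuovo modem 56k?",  # string
}

# Exactly the eight fields listed above.
assert set(record) == {
    "title", "author", "id", "timestamp",
    "progressive_number", "original_url", "newsgroup", "text",
}
```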
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ This dataset was compiled and curated by Hugging Face users [manalog](https://huggingface.co/manalog) and [ruggsea](https://huggingface.co/ruggsea), as part of the [mii-community](https://huggingface.co/mii-community) dataset creation effort.
+
+ ### Dataset Rationale
+
+ The dataset was created as part of a broader effort to build high-quality datasets of native Italian text, with the aim of aiding the development of Italian open-source LLMs.
+
+ The dataset is expected to be used for training and fine-tuning language models, as well as for other NLP tasks such as text classification, summarization, and translation. The `text` column contains the raw text of the posts, and the `newsgroup` column contains the name of the newsgroup each post belongs to, which can serve as a label for classification tasks.
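For example, a newsgroup-classification setup pairs each `text` with its `newsgroup` label; a minimal sketch over invented rows (real code would iterate the dataset records instead):

```python
from collections import Counter

# Invented rows standing in for dataset records.
rows = [
    {"text": "Problema con la scheda video", "newsgroup": "it.comp.hardware"},
    {"text": "Ricetta della carbonara",      "newsgroup": "it.hobby.cucina"},
    {"text": "Driver per il monitor nuovo",  "newsgroup": "it.comp.hardware"},
]

# Texts are the model inputs, newsgroup names are the labels.
texts = [r["text"] for r in rows]
labels = [r["newsgroup"] for r in rows]

# Label distribution, useful for checking class balance before training.
print(Counter(labels).most_common(1))  # [('it.comp.hardware', 2)]
```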
75
+
76
+ ## Usage
77
+
78
+ You can load the dataset directly from datasets using the `load_dataset` function. Here's an example:
79
+
80
+ ```python
81
+ from datasets import load_dataset
82
+
83
+ dataset = load_dataset("manalog/UsenetArchiveIT")
84
+ ```
it.comp.os.win.nt_NO_VIRUS.jsonl → it.comp.os.win.nt_NO_VIRUS.jsonl.bz2 RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:86763db842a8d053b2706af57259a99e23ef2191c7499dad9749a0c2888ddb16
- size 5569638

  version https://git-lfs.github.com/spec/v1
+ oid sha256:9e4282ad2a34d8e0c4753b2a0fcbde9b900061715257492b9395398117f2f5ec
+ size 5597309