tags:
- human
size_categories:
- 1B<n<10B
---

# Conversational Usenet Archive IT Dataset 🇮🇹

## Description

### Dataset Content

This dataset is a filtered version of the [Usenet dataset](https://huggingface.co/datasets/mrinaldi/UsenetArchiveIT) containing posts from Italian-language newsgroups in the `it` and `italia` hierarchies. The data has been archived and converted to the Parquet format for easy processing. All posts with more than one message have been grouped into conversations.
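
The filtering-and-grouping step described above could be sketched roughly as follows (a hypothetical illustration with invented post data and field names, not the curators' actual pipeline):

```python
from collections import defaultdict

# Hypothetical sketch of the grouping step described above (not the curators'
# actual code): posts sharing a thread are collected into one conversation,
# and only threads with more than one message are kept.
posts = [
    {"thread_id": "t1", "author": "alice", "body": "Domanda sul kernel"},
    {"thread_id": "t1", "author": "bob", "body": "Risposta alla domanda"},
    {"thread_id": "t2", "author": "carol", "body": "Messaggio singolo"},
]

threads = defaultdict(list)
for post in posts:
    threads[post["thread_id"]].append(post)

# Keep only threads with more than one message, per the description above.
conversations = {tid: msgs for tid, msgs in threads.items() if len(msgs) > 1}
print(sorted(conversations))  # only t1 has more than one message
```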

This dataset contributes to the [mii-community](https://huggingface.co/mii-community) project, aimed at advancing the creation of Italian open-source Large Language Models (LLMs). 🇮🇹 🤖

### Descriptive Statistics

This dataset contains 9,161,482 conversations from about 539 newsgroups, totaling about 18 GB.

### Languages

The dataset should contain only Italian-language posts, but some posts may be in other languages. The dataset has not been language-filtered, as posts were expected to be in Italian.
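
If strictly Italian text is needed, a post-hoc filter could be applied downstream. The sketch below uses a crude stopword heuristic invented for illustration; a real language-identification library would be more reliable:

```python
# The dataset is not language-filtered; a crude post-hoc check could use an
# Italian stopword heuristic (a rough sketch, not a real language detector).
ITALIAN_STOPWORDS = {"il", "la", "di", "che", "e", "un", "una", "per", "non", "sono"}

def looks_italian(text: str, threshold: float = 0.1) -> bool:
    """Return True if enough tokens are common Italian stopwords."""
    words = text.lower().split()
    if not words:
        return False
    hits = sum(1 for word in words if word in ITALIAN_STOPWORDS)
    return hits / len(words) >= threshold
```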

## Dataset Structure

### Features

Each record in the dataset has the following fields:

- `title`: The title of the post.
- `id`: The unique identifier of the post.
- `original_url`: The URL of the original post on Google Groups.
- `newsgroup`: The name of the newsgroup the post belongs to.
- `messages`: An array of messages in the form `[{"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}]`.
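
For illustration, one conversation's `messages` field might look like the following (contents invented; the role alternation is assumed from the schema above):

```python
# Invented example of the `messages` structure described in the feature list.
conversation = [
    {"role": "user", "content": "Qualcuno ha provato la nuova release?"},
    {"role": "assistant", "content": "Sì, installata ieri senza problemi."},
]

roles = [message["role"] for message in conversation]
print(roles)
```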

This repo contains the dataset in the Parquet format.

## Additional Information

### Dataset Curators

This dataset was curated by Hugging Face user [giux78](https://huggingface.co/giux78), but it is only a filtered and grouped version of the [Usenet dataset](https://huggingface.co/datasets/mrinaldi/UsenetArchiveIT) released by [manalog](https://huggingface.co/manalog) and [ruggsea](https://huggingface.co/ruggsea), as part of the [mii-community](https://huggingface.co/mii-community) dataset creation effort.

### Dataset rationale

The dataset was created as part of a bigger effort to create various high-quality datasets of native Italian text, with the aim of aiding the development of Italian open-source LLMs.

## Usage

You can load the dataset directly with the `datasets` library using the `load_dataset` function. Here's an example:

```python
from datasets import load_dataset

dataset = load_dataset("mii-community/UsenetArchiveIT-conversations")
|