---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
dataset_info:
  features:
  - name: text
    dtype: string
  - name: timestamp
    dtype: string
  - name: url
    dtype: string
  splits:
  - name: train
    num_bytes: 137146387873
    num_examples: 18507273
  - name: validation
    num_bytes: 138079468
    num_examples: 18392
  download_size: 4087107539
  dataset_size: 137284467341
license: apache-2.0
task_categories:
- text-generation
language:
- hi
---
# Dataset Card for "mC4-hindi"

This dataset is the Hindi subset of mC4, the multilingual colossal, cleaned version of Common Crawl's web crawl corpus, which contains natural text in 101 languages. The Hindi subset covers a variety of document types, including news articles, blog posts, and social media posts.

It is intended for training and evaluating Hindi natural language processing models and can be used for a variety of tasks, such as pretraining language models, machine translation, text summarization, and question answering.

**Data format**

The dataset is in JSONL format. Each line in the file contains a JSON object with the following fields:

* `text`: the text of the document.
* `timestamp`: the date and time when the document was crawled.
* `url`: the URL of the document.
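
For a quick look at a record, you can load just the small validation split with the `datasets` library (a minimal sketch; it avoids downloading the full train split):

```python
import datasets

# Load only the small validation split (~18k examples) for inspection.
ds = datasets.load_dataset("zicsx/mC4-hindi", split="validation")

example = ds[0]
print(example.keys())  # dict_keys(['text', 'timestamp', 'url'])
print(example["url"])
```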

**Data splits**

The dataset is split into two parts: train and validation. The train split contains 18,507,273 examples, and the validation split contains 18,392 examples (roughly 0.1% of the data).

**Usage**

To use the dataset, load it into a Hugging Face `DatasetDict` using the following code:

```python
import datasets

dataset = datasets.load_dataset("zicsx/mC4-hindi")
```
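
Note that the full dataset is about 137 GB once loaded (per the metadata above). If you do not want to materialize it all on disk, the `datasets` library also supports streaming; here is a minimal sketch:

```python
import datasets

# Streaming fetches records lazily instead of downloading everything up front,
# which is useful given the ~137 GB dataset size reported in the metadata.
stream = datasets.load_dataset("zicsx/mC4-hindi", split="train", streaming=True)

for example in stream:
    print(example["url"])
    break  # stop after the first record
```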

Once you have loaded the dataset, you can access the train and validation splits using the following code:

```python
train_dataset = dataset["train"]
validation_dataset = dataset["validation"]
```

You can then use the dataset to train and evaluate your natural language processing model.
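
As an illustration, here is a minimal preprocessing sketch for language-model pretraining. It assumes the `transformers` library and a Hindi-capable tokenizer such as `google/muril-base-cased`; neither choice is prescribed by this dataset, so substitute whatever fits your model:

```python
import datasets
from transformers import AutoTokenizer

# Hypothetical tokenizer choice; any Hindi-capable tokenizer will do.
tokenizer = AutoTokenizer.from_pretrained("google/muril-base-cased")

dataset = datasets.load_dataset("zicsx/mC4-hindi", split="validation")

def tokenize(batch):
    # Truncate long web documents to a fixed maximum length.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=["text", "timestamp", "url"])
print(tokenized)
```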