---
viewer: false
license: openrail
task_categories:
- summarization
- text-generation
- text2text-generation
language:
- az
tags:
- butabytes
- nlp
- corpus
- az
- azerbaijani
- ai
- tifosiai
- aznlp
pretty_name: butabytes
size_categories:
- 10M<n<100M
---

![](https://i.ibb.co/bs9ktP4/logo.jpg)

# ButaBytes 2.0 - The largest NLP corpus for the Azerbaijani language (43M+ sentences)

ButaBytes is an Azerbaijani corpus designed for a wide range of NLP tasks. It was collected from 3 million sources and covers a diverse range of genres and topics, including politics, economics, science, culture, sports, history, and society. The documents mix contemporary and historical texts drawn from newspapers, magazines, academic journals, Wikipedia articles, and books, providing a comprehensive linguistic and cultural resource for NLP technologies.

## Corpus Structure
### Data Splits
The ButaBytes corpus comprises four main sources (books, Wikipedia, news, and sentences), distributed as follows:

| Source Name      | Number of Instances | Size (GB) |
| ---------------- | ------------------- | --------- |
| sentences.json   | 43,755,942          | 10.1      |
| wikipedia.json   | 178,836             | 0.64      |
| news.json        | 623,964             | 1.37      |
| books.zip        | 434                 | 0.12      |

## Methodology

The ButaBytes corpus was constructed by scraping a wide array of Azerbaijani content to ensure a comprehensive and diverse dataset. Sources included popular and reliable Azerbaijani news websites, public documents, books spanning various genres, and a rich selection of user-generated content such as social media posts and blogs. Specialized cleaning techniques were applied to each content type, improving the accuracy and consistency of the data across the corpus. This approach yields a robust and versatile resource suited to a multitude of NLP applications.

## Usage Instructions

To use ButaBytes, download the corresponding files manually to your device.
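
Alternatively, the files can be fetched programmatically with the `huggingface_hub` client. The snippet below is a minimal sketch; the `repo_id` is a placeholder that should be replaced with this dataset's actual repository path.

```python
from huggingface_hub import hf_hub_download

# Placeholder repo_id: substitute the actual <namespace>/<dataset-name>
# under which ButaBytes is hosted.
local_path = hf_hub_download(
    repo_id="<namespace>/butabytes",
    filename="news.json",
    repo_type="dataset",
)
print(f"Downloaded to {local_path}")
```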

### Reading JSON Files

To read the JSON files from the dataset, use the following function:

```python
import json

def read_local_json(file_path):
    """Load a JSON file and return the parsed data, or None on failure."""
    try:
        # Explicit UTF-8 avoids decoding errors for Azerbaijani text on
        # platforms whose default encoding is not UTF-8.
        with open(file_path, 'r', encoding='utf-8') as file:
            data = json.load(file)
        print(f"Successfully loaded JSON data from {file_path}.")
        return data
    except json.JSONDecodeError:
        print("The file is not valid JSON.")
        return None
    except FileNotFoundError:
        print("The file was not found. Please ensure the file path is correct.")
        return None

# Example usage
file_path = "sentences.json"
data = read_local_json(file_path)
```
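
Note that `sentences.json` is roughly 10 GB, so `json.load` needs enough RAM to hold the entire parsed structure. If memory is tight, a streaming parser such as `ijson` can iterate over records one at a time. The sketch below assumes the file is a top-level JSON array; adjust the item prefix if the structure differs.

```python
import ijson

# Hedged sketch: assumes sentences.json is a top-level JSON array.
# ijson yields one element at a time instead of parsing the whole file.
with open("sentences.json", 'rb') as f:
    for i, record in enumerate(ijson.items(f, 'item')):
        print(record)
        if i >= 4:  # stop after a few records for a quick peek
            break
```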

### Converting JSON Data to DataFrame

```python
import pandas as pd

# The same pattern works for both news.json and wikipedia.json.
for file_path in ["news.json", "wikipedia.json"]:
    data = read_local_json(file_path)
    df = pd.DataFrame(data)
    print(df.head())
```
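
Before further processing, the column layout of each file can be inspected without assuming a particular schema:

```python
# List column names, dtypes, and non-null counts for the loaded frame.
df.info()
```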

### Unzipping and Reading Text Files

```python
import os
import zipfile
import glob
import pandas as pd

def unzip_file(zip_path, extract_to):
    """Extract a zip archive into extract_to, creating the folder if needed."""
    if not os.path.exists(extract_to):
        os.makedirs(extract_to)

    with zipfile.ZipFile(zip_path, 'r') as zip_ref:
        zip_ref.extractall(extract_to)
    print(f"Extracted all files from {zip_path} to {extract_to}")

# Example usage
zip_path = "books.zip"
extract_to = "books"
unzip_file(zip_path, extract_to)

def read_text_files_into_dataframe(root_folder):
    """Read every .txt file under root_folder (recursively) into a DataFrame."""
    all_text_files = glob.glob(os.path.join(root_folder, '**/*.txt'), recursive=True)
    data = []

    for file_path in all_text_files:
        with open(file_path, 'r', encoding='utf-8') as file:
            content = file.read()
            data.append({
                'file_path': file_path,
                'content': content
            })

    df = pd.DataFrame(data)
    return df

# Example usage
root_folder = "books"
df = read_text_files_into_dataframe(root_folder)
print(df.head())
```
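
As a quick sanity check, the resulting frame can be summarized. The snippet below is illustrative only and relies on the `file_path` and `content` columns produced above.

```python
# Illustrative sanity check over the books DataFrame built above.
print(f"Books loaded: {len(df)}")
print(f"Total characters: {df['content'].str.len().sum():,}")
print(f"Longest book: {df.loc[df['content'].str.len().idxmax(), 'file_path']}")
```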

## Considerations for Using the Corpus

### Social Impact

ButaBytes contributes significantly to the NLP research community by providing a valuable resource for developing text generation tools in Azerbaijani. It not only supports the advancement of language technologies but also promotes linguistic diversity and cultural preservation.

### Biases and Limitations

While efforts were made to minimize bias in the corpus, some limitations remain. Users should be cautious with models trained on this data, particularly concerning inherent biases that might influence the performance and fairness of these models.

### Corpus Authors

ButaBytes 2.0 was developed by Tifosi AI (formerly AZNLP), a group of dedicated researchers and data scientists focused on advancing artificial intelligence. The team is committed to ethical sourcing and responsible management of the dataset, ensuring it serves as a reliable and valuable resource for the community.