---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: title
    dtype: string
  - name: source
    dtype: int64
  splits:
  - name: train
    num_bytes: 1551060789
    num_examples: 3602
  download_size: 857089622
  dataset_size: 1551060789
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: apache-2.0
language:
- az
pretty_name: Azerbaijani Book Dataset
---
# Azerbaijani Book Dataset
The Azerbaijani Book Dataset is a curated collection of Azerbaijani language books, encompassing a diverse range of genres and styles. This dataset is designed for use in natural language processing (NLP) research.
## Dataset Structure
The Azerbaijani Book Dataset contains books in a structured format. Each entry consists of:
- Title: The title of the book
- Text: The main text content of the book
- Source: An integer code representing the origin or source of the book
The dataset includes a total of 3,602 books, with approximately 180 million words, making it a comprehensive resource for Azerbaijani language research.
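The schema above can be sketched as follows. The field names (`text`, `title`, `source`) and their types come from the dataset card; the `validate_record` helper and the sample record are illustrative, not part of the dataset's tooling.

```python
# Sketch: checking a record against the schema described above.
# Field names and types come from the dataset card; the sample
# record below is invented for illustration.

def validate_record(record: dict) -> bool:
    """Return True if a record matches the aze-books schema."""
    return (
        isinstance(record.get("text"), str)
        and isinstance(record.get("title"), str)
        and isinstance(record.get("source"), int)
    )

sample = {"title": "Nümunə kitab", "text": "Kitabın mətni ...", "source": 1}
assert validate_record(sample)

# Average book length implied by the card's figures
# (~180 million words across 3,602 books):
print(round(180_000_000 / 3_602))  # roughly 49,972 words per book
```

In practice, the data can be loaded with the standard Hugging Face `datasets` API, e.g. `load_dataset("allmalab/aze-books", split="train")`, or with `streaming=True` to iterate over records without the ~857 MB download upfront.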
## Uses
The `allmalab/aze-books` dataset is released under the Apache-2.0 license and may be used for educational, research, and commercial purposes. Users are encouraged to cite the dataset in any publications or works derived from it:
```bibtex
@inproceedings{isbarov-etal-2024-open,
    title = "Open foundation models for {A}zerbaijani language",
    author = "Isbarov, Jafar and
      Huseynova, Kavsar and
      Mammadov, Elvin and
      Hajili, Mammad and
      Ataman, Duygu",
    editor = {Ataman, Duygu and
      Derin, Mehmet Oguz and
      Ivanova, Sardana and
      K{\"o}ksal, Abdullatif and
      S{\"a}lev{\"a}, Jonne and
      Zeyrek, Deniz},
    booktitle = "Proceedings of the First Workshop on Natural Language Processing for Turkic Languages (SIGTURK 2024)",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand and Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.sigturk-1.2",
    pages = "18--28",
    abstract = "The emergence of multilingual large language models has enabled the development of language understanding and generation systems in Azerbaijani. However, most of the production-grade systems rely on cloud solutions, such as GPT-4. While there have been several attempts to develop open foundation models for Azerbaijani, these works have not found their way into common use due to a lack of systemic benchmarking. This paper encompasses several lines of work that promote open-source foundation models for Azerbaijani. We introduce (1) a large text corpus for Azerbaijani, (2) a family of encoder-only language models trained on this dataset, (3) labeled datasets for evaluating these models, and (4) extensive evaluation that covers all major open-source models with Azerbaijani support.",
}
```
## Recommendations
Use the dataset responsibly and adhere to ethical standards in your research. The creators encourage users to report any quality issues so the dataset can be improved and updated over time.
## Dataset Description
- Curated by: Kavsar Huseynova, Jafar Isbarov, Mirakram Aghalarov
- Funded by: PRODATA LLC
- Shared by: aLLMA Lab
- Languages: Azerbaijani
- DOI: 10.57967/hf/3452
- License: apache-2.0