Modalities: Text
Formats: parquet
Languages: Azerbaijani
Libraries: Datasets, Dask
Note: access to this dataset is gated. The repository is publicly listed, but you must agree to share your contact information and accept the access conditions on the Hub before you can download its files.

Azerbaijani Book Dataset

The Azerbaijani Book Dataset is a curated collection of Azerbaijani language books, encompassing a diverse range of genres and styles. This dataset is designed for use in natural language processing (NLP) research.

Dataset Structure

The Azerbaijani Book Dataset contains books in a structured format. Each entry consists of:

  • Title: The title of the book
  • Text: The main text content of the book
  • Source: An integer code representing the origin or source of the book

The dataset includes a total of 3,602 books, with approximately 180 million words, making it a comprehensive resource for Azerbaijani language research.
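The schema above can be sketched with the Hugging Face `datasets` library. This is a minimal example, not an official loading script: the lowercase column names (`title`, `text`, `source`) are assumed from the field list above and should be checked against `ds.features` in the released parquet files.

```python
def word_count(record: dict) -> int:
    """Approximate word count of one book entry by whitespace splitting,
    mirroring how the ~180M-word total above would be tallied."""
    return len(record["text"].split())


def preview_first_book():
    """Load the gated dataset (accept the conditions on the Hub and
    authenticate, e.g. via `huggingface-cli login`, before calling this)
    and print the first entry's title and approximate word count."""
    from datasets import load_dataset  # pip install datasets

    ds = load_dataset("allmalab/aze-books", split="train")
    first = ds[0]  # one record: title, text, and integer source code
    print(first["title"], word_count(first))
```

Because the dataset is gated, `preview_first_book` will raise an authentication error until the access conditions have been accepted for your account.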

Uses

The allmalab/aze-books dataset is released under the Apache-2.0 license and may be used for educational, research, and commercial purposes. Users are encouraged to cite the dataset in any publications or works derived from it:

@inproceedings{isbarov-etal-2024-open,
    title = "Open foundation models for {A}zerbaijani language",
    author = "Isbarov, Jafar  and
      Huseynova, Kavsar  and
      Mammadov, Elvin  and
      Hajili, Mammad  and
      Ataman, Duygu",
    editor = {Ataman, Duygu  and
      Derin, Mehmet Oguz  and
      Ivanova, Sardana  and
      K{\"o}ksal, Abdullatif  and
      S{\"a}lev{\"a}, Jonne  and
      Zeyrek, Deniz},
    booktitle = "Proceedings of the First Workshop on Natural Language Processing for Turkic Languages (SIGTURK 2024)",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand and Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.sigturk-1.2",
    pages = "18--28",
    abstract = "The emergence of multilingual large language models has enabled the development of language understanding and generation systems in Azerbaijani. However, most of the production-grade systems rely on cloud solutions, such as GPT-4. While there have been several attempts to develop open foundation models for Azerbaijani, these works have not found their way into common use due to a lack of systemic benchmarking. This paper encompasses several lines of work that promote open-source foundation models for Azerbaijani. We introduce (1) a large text corpus for Azerbaijani, (2) a family of encoder-only language models trained on this dataset, (3) labeled datasets for evaluating these models, and (4) extensive evaluation that covers all major open-source models with Azerbaijani support.",
}

Recommendations

Use the dataset responsibly and adhere to ethical standards in your research. The creators encourage users to report any quality issues for ongoing improvements and updates.

Dataset Description

  • Curated by: Kavsar Huseynova, Jafar Isbarov, Mirakram Aghalarov
  • Funded by: PRODATA LLC
  • Shared by: aLLMA Lab
  • Languages: Azerbaijani
  • DOI: 10.57967/hf/3452
  • License: apache-2.0