
Targoman Large Persian Corpus

Dataset Summary

Ever since the invention of the computer, humans have been interested in communicating with computers in human language. The research and endeavors of scientists and engineers brought us to the field of natural language processing and to large language models. Large Language Models (LLMs), with their remarkable capabilities, have caused a revolution in the artificial intelligence industry and changed perceptions about the capabilities of machines. They have shown that artificial intelligence is no longer just a technical issue and should be considered from cultural, political, religious, social and security perspectives in addition to the industrial view and its benefits.

In addition to huge processing infrastructure and engineering talent, the development of LLMs requires access to huge amounts of data. For Persian, many efforts have been made to generate data and many text corpora have been published; however, the vast majority of the created and published corpora are problematic for training LLMs. The main problems are:

  • Most of the published corpora are in the form of plain, continuous text, so the thematic continuity of the texts has been lost.
  • In the majority of corpora, the text is broken into sentences and the continuity of paragraphs is lost.
  • Most of them have been preprocessed according to the needs and opinions of the publisher, and some valuable information has been lost.
  • The source categories are not diverse enough, so they do not cover the full variety of the Persian language.
  • Published corpus sizes are limited compared to the sizes of corpora published in other languages.

With the aim of solving these problems and creating a standard text structure usable in a variety of Persian NLP tasks, Targoman Intelligent Processing Company developed a specialized scraper to crawl and correctly extract content from the Persian web. Using the power of about 50 servers over a period of six months, it collected a large amount of Persian text. This corpus, named "Targoman Large Persian Corpus (TLPC)", has been made available to researchers and developers of Persian language processing tools. In addition to its large volume, this corpus has unique features:

  • Each document is stored independently, and the following metadata is extracted and stored with it; using this metadata, the desired outputs can be obtained from the body:
    • Date
    • Title
    • Subtitle, surtitle and summary (each if any)
    • Document keywords (if any)
    • Links to document images with image descriptions (if any)
    • The structure of paragraphs and the type of each text block (paragraph, explanation, title, link, etc.)
    • Document references (if any)
    • Readers' comments alongside the original text, including the date and name of each commenter
  • The output is in JSON form, which can easily be filtered with the provided scripts and converted to raw text as needed.
  • Each document is categorized at one to three levels, and texts in different categories can easily be separated.
  • Colloquial and formal text are separated.
  • The covered areas are highly varied, including news, blogs, discussion forums, stores, etc.
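Because each document carries its category and text-type metadata, subsets can be selected without any re-parsing of raw text. A minimal sketch, assuming Python; the file path, function names and sample document here are hypothetical, but the field names follow the schema shown in the Data structure section:

```python
import gzip
import json

def iter_docs(path):
    """Yield one parsed document per line from a TLPC jsonl.gz file."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

def is_formal_news(doc):
    """Keep only documents whose major category is News and whose text is Formal."""
    cat = doc.get("category", {})
    return cat.get("major") == "News" and cat.get("textType") == "Formal"

# Example on an in-memory document (a hypothetical sample):
sample = {
    "url": "https://example.com/a",
    "category": {"original": "politics", "textType": "Formal", "major": "News"},
    "title": "...",
}
print(is_formal_news(sample))  # True
```

The same pattern extends to the `minor` and `subminor` category fields for finer-grained selection.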

By scraping more than 450 popular Persian websites, TLPC includes more than 35 billion tokens from more than 65 million documents in various fields. The covered areas are highly diverse and include:

  • News
  • Weblogs
  • Discussion forums
  • Literature
  • Question and answer
  • Law
  • Religious
  • Medical
  • Dialogue
  • Educational
  • References
  • Science
  • And even: ninisite (mostly famous for its highly diverse content generated by individuals)

Targoman's scraper, used to create this corpus, is available as LGPL-v3 open-source software on GitHub.

Data structure

TLPC is not simple plain-text data; it contains all the metadata necessary for natural language processing. The data is published in jsonl.gz format so that the least amount of memory is required during processing. Each line of the file contains a JSON document with the following structure:

{
    url: string,                //Article normalized URL
    category: {
        original: string        //Category as specified by the article
        textType: string        //Main content textType, can be: Formal, Informal, Hybrid, Unknown
        major: string           //Major category can be: News, QA (Question & Answer), Literature, Forum, Weblog, SocialMedia, Doc, Undefined
        minor?: string          //Minor category (see source code for list of available minor categories)
        subminor?: string       //Subminor category (see source code for list of available minor and subminor categories)
    },          
    date: string,               //Article publish date (if specified)
    title: string,              //Title of the article
    aboveTitle?: string,        //Surtitle or any text provided before Title
    subtitle?: string,          //Subtitle or Lead (if provided)
    summary?: string,           //Summary (if provided)
    content?: IntfText[],       //An array containing the main article body; each item's structure will be
                                //{ text: string, type: enuTextType, ref?: string }
                                //where ref is present only for items provided with a hyperlink
    comments?: IntfComment[]    //An array of comments (if any) in the following structure: 
                                //{ text: string, author?: string, date?: string }
    images?: IntfImage[],       //List of image URLs used in the article with their alt-texts.
                                //{ src: string, alt?: string }
    qa? : {                     //If the site has Question/Answer each item will consist of:
        q: IntfComment,         //Question part { text: string, author?: string, date?: string }
        a?: IntfComment[]       //Answers part 
    }[],
    tags?: string[],            //List of tags provided with the article
}
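As an illustration of converting this structure to raw text, one can concatenate the title, subtitle, summary and the text of each content item in order. This is a sketch only; the function name and the sample document are hypothetical, and the field names are taken from the schema above:

```python
import json

def doc_to_text(doc):
    """Flatten one TLPC document into plain text, preserving paragraph order."""
    parts = []
    # Optional header fields, in the order they appear in the schema.
    for key in ("title", "subtitle", "summary"):
        if doc.get(key):
            parts.append(doc[key])
    # Main body: each content item is { text, type, ref? }.
    for item in doc.get("content", []):
        if item.get("text"):
            parts.append(item["text"])
    return "\n".join(parts)

# Hypothetical sample line from a jsonl.gz file:
sample = json.loads(
    '{"url": "https://example.com/x", "title": "T", "subtitle": "S", '
    '"content": [{"text": "P1", "type": "paragraph"}, {"text": "P2", "type": "paragraph"}]}'
)
print(doc_to_text(sample))  # returns "T\nS\nP1\nP2"
```

A real preprocessing script might additionally filter content items by their `type` field (e.g. dropping link-only items) before joining.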

Contribution and Usages

TLPC was initially intended to be used in the development of a Persian LLM by the AlphabetAI collaboration group. The group is formed by the Targoman Intelligent Processing, Akam Innovative Data Processors, Knowledge Technology Era and Part Financial Information Processing companies. After successful use in the development of a Persian LLM, it was extended for the next stages of LLM development such as instruction tuning, task tuning, etc. Currently TLPC is in use by the houshafarin partnership group and also the Persian LLM hackathon. Any contribution to debugging and expanding this corpus is welcome. Fields of contribution are:

  • Debugging the corpus in order to remove duplications and scraping bugs
  • Expanding the corpus with new sites and new content
  • Developing preprocessing scripts to make better use of the corpus

Statistics

As of April 11, 2024, nearly 500 Persian websites had been scraped; processing nearly 200 million URLs yielded about 65 million documents with over 35 billion tokens. Details and updated statistics are published on the [project page](http://oss.targoman.ir/TLPC/#statistics).

Licensing Information

Targoman Intelligent Processing Company, in line with its social responsibility and with the aim of spreading the culture of freedom and strengthening artificial intelligence in Iran, has granted the right to use TLPC under the CC-BY-NC-SA-v4.0 license. At the same time, the company has signed an MoA with the headquarters for the development of artificial intelligence and robotics technologies of ISTI (https://isti.ir/), granting commercial usage of TLPC to all knowledge-based companies approved by ISTI.
