Sana Web Dataset

Dataset Description

The Sana Web Dataset is a large-scale Persian web corpus designed to improve the quality of Persian language models and NLP systems. Because large, clean, and diverse Persian datasets remain scarce, many language models still struggle to understand and generate high-quality Persian text.

To address this challenge, the Sana project focuses on large-scale web crawling, intelligent content extraction, and advanced text normalization for Persian web pages.

Unlike traditional rule-based extractors that depend on website structure or HTML templates, Sana uses a deep neural network specifically trained for Persian web content extraction. The extraction system operates independently of webpage layouts and can automatically identify and extract the main textual content from a wide variety of Persian websites.

The dataset includes cleaned and normalized textual data collected from multiple Persian domains across different categories.


Data Collection

Persian domains were collected from the public web and manually reviewed by human annotators. Each domain was categorized based on its primary topic.

After the annotation phase, the domains were crawled using a high-speed distributed crawler. The downloaded webpages were processed in parallel by multiple extraction pipelines.

The crawling system stores metadata related to each page, including crawl timestamps, parent links, domain information, and page depth.


Content Extraction and Cleaning

Web pages often contain noisy and irrelevant elements such as:

  • Navigation menus
  • Advertisements
  • Sidebars
  • Repeated page components
  • Link directories

To extract the main textual content, a dedicated deep neural network model for Persian was designed and trained.

A separate manually annotated dataset of Persian webpages was created for training the extraction model. Multiple volunteers participated in labeling the useful and non-useful sections of webpages.

In addition, a separate machine learning classifier was developed to detect pages that contain little or no meaningful textual content. Pages consisting mainly of menus, lists of links, or empty structures were filtered before content extraction.
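The classifier itself is not described in detail. As a rough illustration of the kind of signal such a filter can use, the sketch below flags pages whose visible text is dominated by link anchors; the parser and the 0.5 threshold are illustrative assumptions, not the actual Sana model.

```python
from html.parser import HTMLParser

# Hedged sketch: the real Sana filter is a trained ML classifier. This
# link-density heuristic only illustrates one signal such a filter can use.
class LinkDensity(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_link = 0      # nesting depth inside <a> tags
        self.link_chars = 0   # visible characters inside links
        self.total_chars = 0  # all visible characters

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link += 1

    def handle_endtag(self, tag):
        if tag == "a" and self.in_link:
            self.in_link -= 1

    def handle_data(self, data):
        n = len(data.strip())
        self.total_chars += n
        if self.in_link:
            self.link_chars += n

def looks_like_link_page(html: str, threshold: float = 0.5) -> bool:
    """True if the page's visible text is mostly link anchors (or empty)."""
    p = LinkDensity()
    p.feed(html)
    if p.total_chars == 0:
        return True  # empty structure, nothing to extract
    return p.link_chars / p.total_chars > threshold

menu = "<ul><li><a href='/a'>Home</a></li><li><a href='/b'>News</a></li></ul>"
article = "<p>" + "متن اصلی مقاله " * 20 + "<a href='/x'>more</a></p>"
print(looks_like_link_page(menu), looks_like_link_page(article))
```

A page made only of menu links scores a link density of 1.0 and is filtered, while a long article with one inline link passes.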


Dataset Statistics

Metric                     Value
-------------------------  -----------
Crawled Domains            265
Crawled Pages              500,419
Extracted Pages            422,218
Pages Without Useful Text  191,753
Pages With Useful Content  230,465
Unique Content Pages       179,862
Total Tokens               300,689,961
Total Characters           929,332,077
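As a sanity check, the page counts above are internally consistent (pages with and without useful text sum to the extracted total), and they imply an average of about 1,672 tokens per unique content page:

```python
# Figures copied from the statistics table above.
extracted_pages = 422_218
no_text_pages = 191_753
useful_pages = 230_465
unique_pages = 179_862
total_tokens = 300_689_961
total_chars = 929_332_077

# Pages with and without useful text add up to the extracted total.
assert no_text_pages + useful_pages == extracted_pages

# Rough per-page and per-token averages over unique-content pages.
tokens_per_page = total_tokens / unique_pages
chars_per_token = total_chars / total_tokens
print(f"{tokens_per_page:.0f} tokens/page, {chars_per_token:.2f} chars/token")
```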

Example Domains

Some of the crawled domains include:

  • president.ir
  • kanoon.ir
  • tasnimnews.com
  • gama.ir
  • jobinja.ir
  • mehrnews.com
  • soft98.ir
  • khamenei.ir
  • hamshahrionline.ir
  • e-estekhdam.com

Data Format

The dataset is stored in JSON format.

Example structure:

{
  "link": {
    "id": "",
    "domain": "https://example.com",
    "category": "category 1",
    "url": "https://example.com/home/index",
    "depth": 2,
    "anchor": "",
    "referer": "https://example.com",
    "date": "Mon, 22 Sep 2025 12:53:17 GMT"
  },
  "data": {
    "version": 1,
    "main_content": "main content of the url",
    "markdown_content": "main content in markdown format",
    "url": "https://example.com/home/index",
    "create_date": "2025-05-25",
    "metadata": {
      "title": "title",
      "description": "description",
      "lang": "fa",
      "last_date": "2020-01-01",
      "keywords": [
        { "name": "keywords1", "source": "meta/oth" }
      ],
      "author": [
        { "name": "author1", "source": "meta/oth" }
      ],
      "page_type": "website"
    }
  }
}
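A record can be loaded with any standard JSON library. The minimal sketch below uses a placeholder record that follows the schema above (all field values are illustrative, not taken from the real dataset) and pulls out the main text and language:

```python
import json

# Placeholder record following the documented schema; values are examples only.
record_json = """
{
  "link": {
    "id": "",
    "domain": "https://example.com",
    "category": "category 1",
    "url": "https://example.com/home/index",
    "depth": 2,
    "anchor": "",
    "referer": "https://example.com",
    "date": "Mon, 22 Sep 2025 12:53:17 GMT"
  },
  "data": {
    "version": 1,
    "main_content": "main content of the url",
    "markdown_content": "main content in markdown format",
    "url": "https://example.com/home/index",
    "create_date": "2025-05-25",
    "metadata": {"title": "title", "lang": "fa", "page_type": "website"}
  }
}
"""

record = json.loads(record_json)
text = record["data"]["main_content"]           # extracted main text
lang = record["data"]["metadata"].get("lang")   # page language, e.g. "fa"
print(lang, len(text))
```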

Text Processing Pipeline

Several normalization and cleaning steps were applied to the dataset:

  • Replacing non-Persian punctuation and symbols with Persian equivalents
  • Converting Persian and Arabic digits to English digits
  • Removing diacritics
  • Removing extra spaces and applying proper half-space normalization
  • Replacing uncommon Unicode characters with normalized forms
  • Removing unnecessary Unicode symbols and special characters
  • Reducing excessive repeated characters in informal text
  • Proper normalization of Persian verb prefixes such as "می" and "نمی"
  • Removing duplicated textual content
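The pipeline's actual code is not published. The sketch below is a hedged, regex-based illustration of a few of the listed steps (digit conversion, diacritic removal, repeated-character reduction, whitespace cleanup); half-space and verb-prefix normalization are omitted for brevity.

```python
import re

# Illustrative normalization rules, not the actual Sana pipeline.
PERSIAN_DIGITS = "۰۱۲۳۴۵۶۷۸۹"
ARABIC_DIGITS = "٠١٢٣٤٥٦٧٨٩"
DIGIT_MAP = {ord(d): str(i)
             for digits in (PERSIAN_DIGITS, ARABIC_DIGITS)
             for i, d in enumerate(digits)}

DIACRITICS = re.compile(r"[\u064B-\u065F\u0670]")  # Arabic harakat marks
REPEATS = re.compile(r"(.)\1{3,}")                 # 4+ repeated chars -> 2
MULTISPACE = re.compile(r"[ \t]{2,}")              # runs of spaces/tabs

def normalize(text: str) -> str:
    text = text.translate(DIGIT_MAP)   # Persian/Arabic digits -> English
    text = DIACRITICS.sub("", text)    # strip diacritics
    text = REPEATS.sub(r"\1\1", text)  # reduce exaggerated repeats
    text = MULTISPACE.sub(" ", text)   # collapse extra spaces
    return text.strip()

print(normalize("سلاممممم ۱۴۰۴"))
```

Each step here mirrors one bullet above; a production pipeline would also handle half-space (ZWNJ) placement and Unicode character unification.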

Intended Use

The dataset is primarily designed for:

  • Training Persian Large Language Models (LLMs)
  • Language modeling
  • Pretraining transformer-based models
  • Persian NLP research
  • Text normalization research
  • Information extraction
  • Web content extraction systems

Access and Licensing

Access to the dataset requires approval.

Researchers and developers with an established technical profile (for example, GitHub activity, an academic background, or experience in artificial intelligence, natural language processing, or large language models) may receive higher priority during the review process.

The dataset is intended for non-commercial research and educational use.


Limitations

Although extensive filtering and cleaning have been applied, the dataset may still contain:

  • Residual noisy content
  • Crawling artifacts
  • Incomplete pages
  • Biased or domain-specific language
  • Automatically extracted metadata errors

Users are encouraged to apply additional filtering depending on their downstream tasks.
