---
license: unlicense
language:
- en
tags:
- Palestine
- dataset
pretty_name: Palestine
size_categories:
- 10M<n<100M
configs:
- config_name: default
  data_files:
  - split: train
    path: combined/combined_with_metadata.jsonl
---
# Palestine Dataset πŸ‡΅πŸ‡Έ
A curated dataset focused on authentic Palestinian narratives and reporting.
## Data Sources πŸ“Š
* **decolonizepalestine.com** - Educational content and historical documentation
* **electronicintifada.net** - Hundreds of articles: news, analysis, and more
* **palianswers.com** - A crowdsourced database of short responses to Zionist claims
* **english.khamenei.ir** - Articles related to Palestine
* **mondoweiss.net** - Hundreds of articles: news, analysis, and more
* **stand-with-palestine.org** - Interviews and articles
* **bdsmovement.net** - Content from the Boycott, Divestment, and Sanctions (BDS) movement website
## Purpose
This dataset aims to preserve and provide access to information about Palestine by collecting content from selected sources that represent Palestinian perspectives and history.
## Note on Sources
While this dataset strives to provide quality information from carefully selected sources, users are encouraged to:
* Cross-reference information
* Form their own informed conclusions
* Understand that historical events can have multiple perspectives
## Dataset Structure πŸ“
The repository is organized as follows:
* `combined` directory - Combined data from all sources in multiple formats:
  * `combined.csv` and `combined.jsonl` - Basic combined data
  * `combined_with_metadata.csv` and `combined_with_metadata.jsonl` - Combined data with additional metadata
  * `texts` directory - Individual numbered text files
  * `texts_with_metadata` directory - Text files with metadata information
* Per-source directories - `decolonizepalestine`, `electronicintifada`, `mondoweiss`, `palianswers`, `stand-with-palestine`, `khamenei-ir-free-palestine-tag`, `khamenei-ir-palestine-special-page`, and `bdsmovement` - each includes:
  * Raw `texts`, `CSV`, and `JSONL` formats
  * Original website content preserved

Direct download link to the `combined_with_metadata.jsonl` file: [combined_with_metadata.jsonl](https://huggingface.co/datasets/mlibre/palestine/resolve/main/combined/combined_with_metadata.jsonl)
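
The default config can also be loaded directly with the πŸ€— `datasets` library. A minimal sketch (inspect the first record to see the actual column names rather than assuming specific fields):
```python
# Minimal loading sketch: the default config reads
# combined/combined_with_metadata.jsonl as the train split.
from datasets import load_dataset

dataset = load_dataset("mlibre/palestine", split="train")

print(dataset)     # row count and column names
print(dataset[0])  # first record, including its metadata fields
```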
## Contributing 🀝
### 1. Add New Sources πŸ“š
* Submit trustworthy sources about Palestine through pull requests
* Sources can include news outlets, academic papers, historical archives
* Ensure sources provide authentic Palestinian perspectives
* Include source validation and credibility information
### 2. Model Training πŸ€–
* Use this dataset to fine-tune language models (see the sketch after this list)
* Create specialized models focused on Palestinian history and culture
* Help improve AI understanding of Palestinian narratives
* Share your training results and model performance metrics
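
As a starting point, here is a minimal causal-LM fine-tuning sketch with πŸ€— `transformers`. It assumes the records expose a `text` column; `gpt2`, the sequence length, and the hyperparameters are illustrative placeholders, not recommendations:
```python
# Minimal fine-tuning sketch (assumptions: a `text` column exists;
# `gpt2` and all hyperparameters are placeholders).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

dataset = load_dataset("mlibre/palestine", split="train")

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=dataset.column_names)

trainer = Trainer(
    model=AutoModelForCausalLM.from_pretrained("gpt2"),
    args=TrainingArguments(output_dir="palestine-gpt2",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
```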
### 3. Custom AI Applications πŸ’‘
* Develop custom GPTs using this dataset
* Create educational tools and chatbots (see the retrieval sketch after this list)
* Build applications that help spread awareness
* Share your applications and use cases with the community
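
As one concrete starting point, the sketch below builds a small TF-IDF search index over the articles with scikit-learn, the kind of retrieval step a chatbot or educational search tool could sit on top of (again assuming a `text` column and that `scikit-learn` is installed):
```python
# Minimal TF-IDF retrieval sketch (assumption: a `text` column exists).
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

dataset = load_dataset("mlibre/palestine", split="train")
texts = dataset["text"]

vectorizer = TfidfVectorizer(stop_words="english", max_features=50_000)
matrix = vectorizer.fit_transform(texts)

def search(query, top_k=3):
    """Return indices of the top_k documents most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), matrix)[0]
    return scores.argsort()[::-1][:top_k]

for idx in search("history of the Nakba"):
    print(texts[idx][:200], "...")
```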
Feel free to start contributing in any of these areas. Together we can help preserve and share Palestinian knowledge and history. 🌟
## Contact πŸ“§
* <m.gh@linuxmail.org>
* <mlibrego@gmail.com>
* <https://github.com/mlibre>
## How to Regenerate the Dataset
The dataset is generated using the [Clean-Web-Scraper](https://github.com/mlibre/Clean-Web-Scraper) library.
```bash
# clone the scraper and install its dependencies
git clone https://github.com/mlibre/Clean-Web-Scraper
cd Clean-Web-Scraper
npm install --ignore-scripts

# run the example scraping script to regenerate the dataset
node example-usage.js
```
## License βš–οΈ
* The dataset itself is released under the [Unlicense](https://unlicense.org/).
* Content from source websites remains under their respective original licenses.