size_categories:
- 100M<n<1B
---
### Dataset Name: Hindi Niband (Massive Hindi Language Text Dataset)

#### Dataset Overview

This dataset is a comprehensive collection of Hindi text comprising more than 10 billion tokens. It spans a wide range of sources, including Wikipedia articles, news articles, email transcripts, and prompt-generated text. The Hindi-language columns were extracted from CulturaX, a large, cleaned, multilingual dataset for large language models, which we acknowledge and cite below.

#### Data Sources

1. **Wikipedia Articles:** A large corpus of text extracted from Wikipedia articles covering a broad range of topics and domains.
2. **News Articles:** Text sourced from news articles from diverse outlets and regions.
3. **Email Transcripts:** Transcripts of email communications, providing insight into natural language use in electronic correspondence.
4. **Prompt-Generated Text:** Text generated from prompts, or prompts used to generate text, contributing to the dataset's diversity and complexity.
5. **Hindi Data from CulturaX:** Hindi-language columns extracted from the CulturaX dataset, a large, cleaned, multilingual dataset for large language models.

#### Potential Uses

- Training and evaluating natural language generation models for Hindi.
- Exploring model capabilities in narrative generation tasks.
- Research on narrative understanding and generation in Hindi.
- Sentiment analysis and opinion mining on Hindi text.
- Building chatbots or virtual assistants that interact in Hindi.
- Creating educational resources for teaching Hindi language and literature.
- Developing machine translation systems between Hindi and other languages.
- Studying cross-lingual transfer learning to improve Hindi NLP tasks.

#### Importance for Indian Native Languages

This dataset is intended for training large language models (LLMs) and for exploring the capabilities of natural language generation models in Hindi. It serves as a foundation for training and evaluating models that produce coherent, contextually relevant narratives and explanations. It also reflects our commitment to promoting Indian native languages globally: we see the limited availability of such datasets as a major obstacle to innovation within the Indian community. As part of our contribution to the Indian open-source community, we plan to release a much larger collection covering a variety of Indian native languages, empowering researchers, practitioners, and developers to explore and innovate in Indian language processing and generation.

#### Citation

If you use this dataset in your research or applications, please cite the CulturaX dataset as follows:

```bibtex
@misc{nguyen2023culturax,
  title={CulturaX: A Cleaned, Enormous, and Multilingual Dataset for Large Language Models in 167 Languages},
  author={Thuat Nguyen and Chien Van Nguyen and Viet Dac Lai and Hieu Man and Nghia Trung Ngo and Franck Dernoncourt and Ryan A. Rossi and Thien Huu Nguyen},
  year={2023},
  eprint={2309.09400},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

The dataset also includes news article data, whose sources we acknowledge and cite as follows:

```bibtex
@inproceedings{see-etal-2017-get,
  title = "Get To The Point: Summarization with Pointer-Generator Networks",
  author = "See, Abigail and Liu, Peter J. and Manning, Christopher D.",
  booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
  month = jul,
  year = "2017",
  address = "Vancouver, Canada",
  publisher = "Association for Computational Linguistics",
  url = "https://www.aclweb.org/anthology/P17-1099",
  doi = "10.18653/v1/P17-1099",
  pages = "1073--1083"
}

@inproceedings{DBLP:conf/nips/HermannKGEKSB15,
  author = {Karl Moritz Hermann and Tomás Kociský and Edward Grefenstette and Lasse Espeholt and Will Kay and Mustafa Suleyman and Phil Blunsom},
  title = {Teaching Machines to Read and Comprehend},
  year = {2015},
  pages = {1693--1701},
  url = {http://papers.nips.cc/paper/5945-teaching-machines-to-read-and-comprehend},
  booktitle = {NIPS}
}
```

#### License

Please refer to the licensing terms specified by the dataset creators.

#### Disclaimer

The views expressed in the dataset do not necessarily reflect those of the dataset creators or contributors. Users are advised to use the data responsibly and in accordance with ethical guidelines.

This dataset card provides an overview of this massive Hindi text dataset, highlighting its sources, potential uses, citations, and disclaimer.
|