Update README.md
README.md CHANGED
@@ -14,4 +14,80 @@ language:
- ka
- ta
- ur
---

# Bhasha Wiki Indic Context

<!-- Provide a quick summary of the dataset. -->
This dataset contains Wikipedia articles pertaining to the Indian context.

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->
We filtered Indian-context data from the English articles of the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset by keywords (a sketch of this pass is shown below).
Further, we trained a classifier for Indian vs. non-Indian content to narrow down the keyword-filtered English articles.
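As an illustration, a minimal sketch of the keyword pass using the `datasets` library; the keyword list and snapshot name here are illustrative, not the exact ones used during curation:

```python
from datasets import load_dataset

# Illustrative keyword list only; the actual curation used a larger set,
# with a trained Indian-vs-non-Indian classifier applied on top.
KEYWORDS = ("india", "indian", "delhi", "mumbai", "bollywood")

def looks_indic(example):
    # Check the article title and body text for any of the keywords.
    text = f"{example['title']} {example['text']}".lower()
    return any(kw in text for kw in KEYWORDS)

# English articles from the wikimedia/wikipedia source dataset.
wiki = load_dataset("wikimedia/wikipedia", "20231101.en", split="train")
candidates = wiki.filter(looks_indic)
```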
We then translated these articles into 6 Indian languages (Hindi, Bengali, Gujarati, Tamil, Kannada, Urdu) using AI4Bharat's [IndicTrans2](https://huggingface.co/ai4bharat/indictrans2-en-indic-1B). The dataset has been cleaned and can be used for pre-training multilingual LLMs.
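The translation step can be sketched along the lines of the IndicTrans2 model card; the `IndicTransToolkit` import path and the generation settings below are assumptions that may differ across toolkit versions:

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from IndicTransToolkit import IndicProcessor  # assumed import path

MODEL = "ai4bharat/indictrans2-en-indic-1B"
tokenizer = AutoTokenizer.from_pretrained(MODEL, trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL, trust_remote_code=True)
ip = IndicProcessor(inference=True)

sentences = ["The Ganges is the longest river in India."]

# Tag each sentence with the source/target language codes the model
# expects, e.g. eng_Latn -> hin_Deva for English to Hindi.
batch = ip.preprocess_batch(sentences, src_lang="eng_Latn", tgt_lang="hin_Deva")
inputs = tokenizer(batch, padding="longest", return_tensors="pt")
with torch.no_grad():
    generated = model.generate(**inputs, max_length=256, num_beams=5)
decoded = tokenizer.batch_decode(generated, skip_special_tokens=True)
print(ip.postprocess_batch(decoded, lang="hin_Deva")[0])
```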

- **Curated by:** [Soket AI Labs](https://soket.ai/)
- **Language(s) (NLP):** English, Hindi, Bengali, Gujarati, Tamil, Kannada, Urdu
- **License:** cc-by-sa-3.0

## Uses

<!-- Address questions around how the dataset is intended to be used. -->
The dataset is focused on Indian factual content for pre-training LLMs where Indian knowledge and contextual understanding are required.

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
Each row corresponds to a Wikipedia article, with the description of the article in the source language (English) and its translations in 6 Indian languages.
Each per-language description column is a list of sentence chunks that can be concatenated to recover the cleaned article description, as in the sketch below.
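For example, reconstructing an article's text might look like this; the repo id and column names are hypothetical, so check the dataset viewer for the actual schema:

```python
from datasets import load_dataset

# Hypothetical repo id and column names, for illustration only.
ds = load_dataset("soketlabs/bhasha-wiki-indic-context", split="train")

row = ds[0]
# Each language column is a list of sentence chunks; joining them
# yields the cleaned article description in that language.
english_text = " ".join(row["english"])
hindi_text = " ".join(row["hindi"])
```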

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

[More Information Needed]

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
English Wikipedia articles from [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia).

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Though we filtered for Indic-context articles with an emphasis on high recall, some non-Indic articles may still be mixed in.

## Citation [optional]

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[More Information Needed]