Update README.md
README.md
CHANGED
@@ -123,7 +123,10 @@ Encompassing a wide spectrum of content, ranging from social media conversations
 This corpus offers comprehensive insights into the linguistic diversity and cultural nuances of Arabic expression.

 ## Usage
-If you want to use this dataset you pick one among the available configs:
+If you want to use this dataset, pick one of the available configs:
+['Ara--MBZUAI--Bactrian-X',
+ 'Ara--OpenAssistant--oasst1',
+ 'Ary--AbderrahmanSkiredj1--Darija-Wikipedia']

 Example of usage:
 ```python
@@ -131,7 +134,7 @@ dataset = load_dataset('mixed-arabic-datasets', 'Ara--MBZUAI--Bactrian-X')
 ```
 If you load multiple datasets and want to merge them, you can simply leverage `concatenate_datasets()` from `datasets`:
 ```python
-
+dataset3 = concatenate_datasets([dataset1['train'], dataset2['train']])
 ```
 Note: process the datasets before merging to make sure the resulting dataset is consistent.