Datasets:
Contributions are welcome
#1 opened by dofbi
🔍 How to contribute?
- Fine-tuning: NLP experts, train models specific to Wolof (a quick sketch follows below).
- Correction and annotation: improve the translations and annotate the sentences.
Thank you for contributing to the advancement of NLP research! 🙌
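For the fine-tuning item above, here is a minimal sketch, not a prescribed recipe. It assumes the dataset lives at dofbi/jolof with "wolof"/"french" columns and train/test splits (as produced by the script later in this thread), and picks NLLB-200 as the base model only because it already covers Wolof (wol_Latn); any seq2seq checkpoint could be swapped in:

# Hedged sketch: fine-tune a multilingual MT model on Wolof -> French pairs.
# The checkpoint, column names, and hyperparameters below are assumptions.
from datasets import load_dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

checkpoint = "facebook/nllb-200-distilled-600M"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(checkpoint,
                                          src_lang="wol_Latn", tgt_lang="fra_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

ds = load_dataset("dofbi/jolof")
ds = ds.filter(lambda ex: ex["french"] is not None)  # drop untranslated rows

def preprocess(batch):
    # Wolof sentences are the source; French sentences become the labels.
    model_inputs = tokenizer(batch["wolof"], truncation=True, max_length=128)
    labels = tokenizer(text_target=batch["french"], truncation=True, max_length=128)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = ds.map(preprocess, batched=True, remove_columns=["wolof", "french"])

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="nllb-wolof-french",
        per_device_train_batch_size=8,
        learning_rate=2e-5,
        num_train_epochs=3,
    ),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()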
Hello @dofbi, to get a preview of this, I'm sharing the code below, where we proceed in three steps:
# pip install datasets jsonlines
import jsonlines
from datasets import Dataset

# Step 1: load the data from the jsonl file and build two parallel lists
# of Wolof and French sentences.
french = []
wolof = []
with jsonlines.open('dataset.jsonl') as reader:
    for line in reader:
        wolof.append(str(line['input']))
        if line['output'] is None:
            french.append(None)
        elif isinstance(line['output'], dict) and 'definition' in line['output']:
            french.append(str(line['output']['definition']))
        else:
            french.append(str(line['output']))

if len(french) != len(wolof):
    raise ValueError('The number of French and Wolof sentences must be equal')

# Step 2: build the dataset from the lists, then split it into train and
# test sets (train_test_split returns a new DatasetDict, so assign it back).
ds = Dataset.from_dict({"wolof": wolof, "french": french})
ds = ds.train_test_split(test_size=0.2)

# Step 3: push the dataset (both splits) to the Hub.
ds.push_to_hub("dofbi/jolof")
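Once pushed, anyone can load the dataset back from the Hub to check the result (assuming the split DatasetDict above was pushed; the column names come from the script):

from datasets import load_dataset

ds = load_dataset("dofbi/jolof")
print(ds)              # DatasetDict with "train" and "test" splits
print(ds["train"][0])  # e.g. {'wolof': '...', 'french': '...'}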
Hello @abdouaziiz, thank you for this contribution!