Multilinguality: translation
Size Categories: unknown
Language Creators: crowdsourced
Annotations Creators: no-annotation
Source Datasets: original
Fix splits loading

#4
by lhoestq - opened

As mentioned in https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt/discussions/3, there is currently an error preventing the dataset from loading. In particular, the validation split is not loaded correctly because the file location in the dataset script is wrong.

In this PR I fixed the file location, which resolves the loading issue.

cc @Zaid

cc @tiedeman @Muennighoff can you check this PR, please?

Amazing, thanks for fixing! @tiedeman, can you merge this?
I think I introduced the bug, sorry for that.

@Muennighoff apology accepted. I got some free time because of that.

@lhoestq I tried loading the dataset locally using your fix, but I am getting these features; they should be sourceLang, targetLang, sourceString, targetString:

DatasetDict({
    test: Dataset({
        features: ['version https://git-lfs.github.com/spec/v1'],
        num_rows: 1648
    })
    validation: Dataset({
        features: ['version https://git-lfs.github.com/spec/v1'],
        num_rows: 1038
    })
})

Ok, I needed to call git lfs pull because the files were only LFS pointers.
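The odd feature name above is actually the first line of a Git LFS pointer file, which LFS leaves in place of the real data until `git lfs pull` fetches it. A minimal sketch (not part of the PR, the helper name is hypothetical) of how one might detect such a pointer:

```python
# A Git LFS pointer file begins with this exact signature line; the real
# file content only appears after `git lfs pull` replaces the pointer.
LFS_SIGNATURE = b"version https://git-lfs.github.com/spec/v1"

def is_lfs_pointer(path):
    """Return True if `path` looks like an un-fetched Git LFS pointer file."""
    with open(path, "rb") as f:
        head = f.read(len(LFS_SIGNATURE))
    return head == LFS_SIGNATURE
```

A TSV file that still fails this check would produce exactly the symptom shown above: the pointer's first line is parsed as the only column header.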

I'm merging this one if it's ok for everybody

lhoestq changed pull request status to merged

Thanks @lhoestq, how do I use load_dataset with git-lfs pointers?

load_dataset("Helsinki-NLP/tatoeba_mt") should work just fine.
If you did git clone and are loading the dataset from the local files, make sure you have git lfs installed and that your local clone of the dataset repository contains the actual LFS files, not pointers.
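To verify a local clone before loading it, one could scan it for un-fetched pointers. A rough sketch, assuming the helper name and the standard LFS pointer signature (if any paths are returned, running `git lfs pull` in the clone should fix them):

```python
import os

# First line of every Git LFS pointer file.
LFS_SIGNATURE = b"version https://git-lfs.github.com/spec/v1"

def find_lfs_pointers(repo_dir):
    """Return paths under `repo_dir` that are still un-fetched LFS pointers."""
    pointers = []
    for root, _dirs, files in os.walk(repo_dir):
        if ".git" in root.split(os.sep):
            continue  # skip git internals
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(path, "rb") as f:
                    head = f.read(len(LFS_SIGNATURE))
            except OSError:
                continue  # unreadable file; ignore
            if head == LFS_SIGNATURE:
                pointers.append(path)
    return pointers
```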
