Dataset Card for "lince"

Dataset Summary

LinCE is a centralized Linguistic Code-switching Evaluation benchmark (https://ritual.uh.edu/lince/) that contains data for training and evaluating NLP systems on code-switching tasks.
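
The individual configurations can be loaded with the Hugging Face datasets library; a minimal sketch, using the config names listed in this card (recent datasets versions may additionally require trust_remote_code=True for script-based datasets like this one):

from datasets import load_dataset

# Load one LinCE configuration, e.g. Spanish-English language identification.
# The other configs in this card are lid_hineng, lid_msaea, lid_nepeng, ner_hineng.
dataset = load_dataset("lince", "lid_spaeng")
print(dataset)  # DatasetDict with "train", "validation", and "test" splits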

Supported Tasks and Leaderboards

More Information Needed

Languages

More Information Needed

Dataset Structure

Data Instances

lid_hineng

  • Size of downloaded dataset files: 0.41 MB
  • Size of the generated dataset: 2.28 MB
  • Total amount of disk used: 2.69 MB

An example of 'validation' looks as follows.

{
    "idx": 0,
    "lid": ["other", "other", "lang1", "lang1", "lang1", "other", "lang1", "lang1", "lang1", "lang1", "lang1", "lang1", "lang1", "mixed", "lang1", "lang1", "other"],
    "words": ["@ZahirJ", "@BinyavangaW", "Loved", "the", "ending", "!", "I", "could", "have", "offered", "you", "some", "ironic", "chai-tea", "for", "it", ";)"]
}
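
Each entry in lid labels the token at the same position in words, so the two lists can be paired directly; a minimal sketch, assuming the dict above is bound to a variable named example:

for word, label in zip(example["words"], example["lid"]):
    print(f"{word}\t{label}")
# e.g. "@ZahirJ  other", "Loved  lang1", "chai-tea  mixed"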

lid_msaea

  • Size of downloaded dataset files: 0.77 MB
  • Size of the generated dataset: 4.66 MB
  • Total amount of disk used: 5.43 MB

An example of 'train' looks as follows.

This example was too long and was cropped:

{
    "idx": 0,
    "lid": ["ne", "lang2", "other", "lang2", "lang2", "other", "other", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "other", "lang2", "lang2", "lang2", "ne", "lang2", "lang2"],
    "words": "[\"علاء\", \"بخير\", \"،\", \"معنوياته\", \"كويسة\", \".\", \"..\", \"اسخف\", \"حاجة\", \"بس\", \"ان\", \"كل\", \"واحد\", \"منهم\", \"بييقى\", \"مقفول\", \"عليه\"..."
}

lid_nepeng

  • Size of downloaded dataset files: 0.52 MB
  • Size of the generated dataset: 3.06 MB
  • Total amount of disk used: 3.58 MB

An example of 'validation' looks as follows.

{
    "idx": 1,
    "lid": ["other", "lang2", "lang2", "lang2", "lang2", "lang1", "lang1", "lang1", "lang1", "lang1", "lang2", "lang2", "other", "mixed", "lang2", "lang2", "other", "other", "other", "other"],
    "words": ["@nirvikdada", "la", "hamlai", "bhetna", "paayeko", "will", "be", "your", "greatest", "gift", "ni", "dada", ";P", "#TreatChaiyo", "j", "hos", ";)", "@zappylily", "@AsthaGhm", "@ayacs_asis"]
}

lid_spaeng

  • Size of downloaded dataset files: 1.13 MB
  • Size of the generated dataset: 6.51 MB
  • Total amount of disk used: 7.64 MB

An example of 'train' looks as follows.

{
    "idx": 0,
    "lid": ["other", "other", "lang1", "lang1", "lang1", "other", "lang1", "lang1"],
    "words": ["11:11", ".....", "make", "a", "wish", ".......", "night", "night"]
}

ner_hineng

  • Size of downloaded dataset files: 0.13 MB
  • Size of the generated dataset: 0.75 MB
  • Total amount of disk used: 0.88 MB

An example of 'train' looks as follows.

{
    "idx": 1,
    "lid": ["en", "en", "en", "en", "en", "en", "hi", "hi", "hi", "hi", "hi", "hi", "hi", "en", "en", "en", "en", "rest"],
    "ner": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-PERSON", "I-PERSON", "O", "O", "O", "B-PERSON", "I-PERSON"],
    "words": ["I", "liked", "a", "@YouTube", "video", "https://t.co/DmVqhZbdaI", "Kabhi", "Palkon", "Pe", "Aasoon", "Hai-", "Kishore", "Kumar", "-Vocal", "Cover", "By", "Stephen", "Qadir"]
}
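
The ner tags follow the BIO scheme, so entity spans can be recovered by grouping each B- tag with the I- tags that follow it. A hypothetical helper (not part of the dataset loader) illustrating this on the example above:

def bio_spans(words, tags):
    # Collect (entity_type, entity_text) spans from BIO-tagged tokens.
    spans, current, label = [], [], None
    for word, tag in zip(words, tags):
        if tag.startswith("B-"):                # a new span starts here
            if current:
                spans.append((label, " ".join(current)))
            current, label = [word], tag[2:]
        elif tag.startswith("I-") and current:  # continue the open span
            current.append(word)
        else:                                   # "O" (or a stray I-) closes any open span
            if current:
                spans.append((label, " ".join(current)))
            current, label = [], None
    if current:
        spans.append((label, " ".join(current)))
    return spans

# On the ner_hineng example above this yields:
# [("PERSON", "Kishore Kumar"), ("PERSON", "Stephen Qadir")]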

Data Fields

The data fields are the same among all splits.

lid_hineng

  • idx: an int32 feature.
  • words: a list of string features.
  • lid: a list of string features.

lid_msaea

  • idx: an int32 feature.
  • words: a list of string features.
  • lid: a list of string features.

lid_nepeng

  • idx: an int32 feature.
  • words: a list of string features.
  • lid: a list of string features.

lid_spaeng

  • idx: an int32 feature.
  • words: a list of string features.
  • lid: a list of string features.

ner_hineng

  • idx: an int32 feature.
  • words: a list of string features.
  • lid: a list of string features.
  • ner: a list of string features.
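
The declared types can be confirmed from the features attribute of a loaded split; a minimal sketch:

from datasets import load_dataset

ds = load_dataset("lince", "ner_hineng")
print(ds["train"].features)
# idx is an int32 value; words, lid, and ner are sequences of strings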

Data Splits

name         train  validation  test
lid_hineng    4823         744  1854
lid_msaea     8464        1116  1663
lid_nepeng    8451        1332  3228
lid_spaeng   21030        3332  8289
ner_hineng    1243         314   522
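
The counts above can be reproduced from the num_rows attribute of each split; a minimal sketch:

from datasets import load_dataset

for config in ["lid_hineng", "lid_msaea", "lid_nepeng", "lid_spaeng", "ner_hineng"]:
    ds = load_dataset("lince", config)
    print(config, ds["train"].num_rows, ds["validation"].num_rows, ds["test"].num_rows)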

Dataset Creation

Curation Rationale

More Information Needed

Source Data

Initial Data Collection and Normalization

More Information Needed

Who are the source language producers?

More Information Needed

Annotations

Annotation process

More Information Needed

Who are the annotators?

More Information Needed

Personal and Sensitive Information

More Information Needed

Considerations for Using the Data

Social Impact of Dataset

More Information Needed

Discussion of Biases

More Information Needed

Other Known Limitations

More Information Needed

Additional Information

Dataset Curators

More Information Needed

Licensing Information

More Information Needed

Citation Information

@inproceedings{aguilar-etal-2020-lince,
    title = "{L}in{CE}: A Centralized Benchmark for Linguistic Code-switching Evaluation",
    author = "Aguilar, Gustavo  and
      Kar, Sudipta  and
      Solorio, Thamar",
    booktitle = "Proceedings of The 12th Language Resources and Evaluation Conference",
    month = may,
    year = "2020",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://www.aclweb.org/anthology/2020.lrec-1.223",
    pages = "1803--1813",
    language = "English",
    ISBN = "979-10-95546-34-4",
}

Note that each LinCE dataset has its own citation as well. Please refer to the LinCE benchmark site (https://ritual.uh.edu/lince/) for the correct citation for each individual dataset.

Contributions

Thanks to @lhoestq, @thomwolf, @gaguilar for adding this dataset.
