The VoxCommunis Corpus contains acoustic models, lexicons, and force-aligned TextGrids with phone- and word-level segmentations, all derived from the Mozilla Common Voice Corpus. Both the Mozilla Common Voice data and the derivative VoxCommunis Corpus stored here are free to download and use under a CC0 license.

The lexicons are developed using Epitran, the XPF Corpus, Charsiu, and some custom dictionaries. Some manual correction has been applied, and we hope to continue improving these. Any updates from the community are welcome.
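As a rough illustration of the G2P step, the sketch below uses Epitran to generate IPA pronunciations for a few words. The language code and word list are placeholders, not part of the corpus pipeline; the published lexicons also draw on the XPF Corpus, Charsiu, and manual corrections.

```python
# Minimal sketch of G2P-based lexicon generation with Epitran.
# The language code and words below are illustrative placeholders only.
import epitran

epi = epitran.Epitran("spa-Latn")  # Spanish (Latin script) as an example language

words = ["hola", "gracias", "comunidad"]
for word in words:
    ipa = epi.transliterate(word)  # grapheme-to-phoneme conversion to IPA
    print(f"{word}\t{ipa}")        # one tab-separated lexicon-style entry per word
```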

The acoustic models have been trained using the Montreal Forced Aligner, and the force-aligned TextGrids are obtained directly from those alignments. These acoustic models can be downloaded and re-used with the Montreal Forced Aligner for new data.
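For reference, re-using one of these acoustic models on new data might look roughly like the following. The paths and filenames are placeholders, and the call assumes the MFA 2.x-style command line (`mfa align CORPUS_DIR DICTIONARY ACOUSTIC_MODEL OUTPUT_DIR`).

```python
# Rough sketch of aligning new data with a VoxCommunis acoustic model via the
# Montreal Forced Aligner. All paths are placeholders; assumes an MFA 2.x-style CLI.
import subprocess

subprocess.run(
    [
        "mfa", "align",
        "my_corpus/",            # directory of .wav files with matching transcripts
        "mk_lexicon.txt",        # a VoxCommunis lexicon (placeholder filename)
        "mk_acoustic19.zip",     # a VoxCommunis acoustic model (placeholder filename)
        "aligned_textgrids/",    # output directory for the force-aligned TextGrids
    ],
    check=True,
)
```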

The TextGrids contain phone- and word-level alignments of the validated set of the Common Voice data. Each filename encodes, in order: the Common Voice language code, the G2P system, the Common Voice version of the validated transcripts, and the VoxCommunis acoustic model. For example, mk_xpf_textgrids19_acoustic19 contains alignments of the validated portion of the Macedonian Common Voice 19 Corpus, produced with a lexicon generated from the XPF Corpus and an acoustic model trained on that same validated portion of Macedonian Common Voice 19.
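A small sketch of how this naming convention could be unpacked in code; the dictionary keys are just descriptive labels, not an official schema.

```python
# Illustrative parser for the TextGrid archive naming convention described above,
# e.g. "mk_xpf_textgrids19_acoustic19". Field names are descriptive labels only.
def parse_voxcommunis_name(name: str) -> dict:
    lang, g2p, textgrids, acoustic = name.split("_")
    return {
        "language": lang,                                             # Common Voice language code, e.g. "mk"
        "g2p_system": g2p,                                            # e.g. "xpf"
        "common_voice_version": textgrids.removeprefix("textgrids"),  # version of the validated set
        "acoustic_model_version": acoustic.removeprefix("acoustic"),  # VoxCommunis acoustic model version
    }

print(parse_voxcommunis_name("mk_xpf_textgrids19_acoustic19"))
# {'language': 'mk', 'g2p_system': 'xpf', 'common_voice_version': '19', 'acoustic_model_version': '19'}
```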

The spkr_files contain a mapping from the original client_id to a simplified spkr_id.
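A minimal sketch of building a client_id to spkr_id lookup is shown below. It assumes a tab-separated speaker file with "client_id" and "spkr_id" columns; the filename and exact layout are assumptions, so check the actual file first.

```python
# Minimal sketch: build a client_id -> spkr_id lookup from a speaker file.
# Assumes a tab-separated file with "client_id" and "spkr_id" columns;
# the column names and delimiter are assumptions, not a documented schema.
import csv

def load_speaker_map(path: str) -> dict:
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f, delimiter="\t")
        return {row["client_id"]: row["spkr_id"] for row in reader}

# speaker_map = load_speaker_map("mk_spkr_file_19.tsv")  # placeholder filename
```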

The corresponding GitHub repository can be found here: https://github.com/pacscilab/voxcommunis
