
Constitution Multi Lang

A collection of constitutional legal documents from multiple nations, paired with their official language translations.

Rationale behind this

This project aims to collect the official translation pairs of the constitutions of various non-English-speaking nations. Given the importance of such documents to a nation, the translation pairs can be expected to be of high quality. Additionally, in many cases official government documents are "copyright free", removing legal issues in the training process.

This also provides an easily scalable way to obtain reliable translation pairs for AI training.

Repo links

How to contribute (a public contributor)

  1. Obtain official copies and/or links, and place them in the respective country folder in raw-copies

  2. Clean up and convert the raw copies into language markdown pairs in the cleaned folder. Line content must match 1:1 by line number across each pair

  3. Submit a pull request via GitHub
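The 1:1 line-matching requirement in step 2 can be checked mechanically before submitting. Below is a minimal sketch of such a check; the file paths and folder layout are illustrative assumptions, not the repo's actual structure.

```python
from pathlib import Path


def check_line_alignment(path_a: Path, path_b: Path) -> bool:
    """Return True if two cleaned markdown files have the same number of lines.

    The contribution rule is that line N of one language file must correspond
    to line N of the other, so equal line counts are a necessary sanity check.
    """
    lines_a = path_a.read_text(encoding="utf-8").splitlines()
    lines_b = path_b.read_text(encoding="utf-8").splitlines()
    if len(lines_a) != len(lines_b):
        print(
            f"Mismatch: {path_a.name} has {len(lines_a)} lines, "
            f"{path_b.name} has {len(lines_b)}"
        )
        return False
    return True
```

Equal line counts do not guarantee the lines are actually translations of each other, so a manual spot check of a few lines is still worthwhile.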

How to follow up on a completed contribution

  1. Validate the cleaned markdown pairs, and ensure they match the official copies 1:1.

  2. Convert them into translation training pairs at the "vocab", "section", and "document" levels. Generate the .jsonl files into the parsed folder.

  3. Split out some vocab and section pairs to be used in the validation dataset.
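Steps 2 and 3 above can be sketched as follows, assuming line-aligned cleaned files. The function names, output schema (one JSON object per line pair, keyed by language code), and split parameters are illustrative assumptions, not the project's defined format.

```python
import json
import random
from pathlib import Path


def markdown_pairs_to_jsonl(src_path: Path, tgt_path: Path, out_path: Path,
                            src_lang: str = "en", tgt_lang: str = "fr") -> int:
    """Write one JSON object per aligned non-empty line pair; return pair count."""
    src_lines = src_path.read_text(encoding="utf-8").splitlines()
    tgt_lines = tgt_path.read_text(encoding="utf-8").splitlines()
    # The cleaned files are required to match 1:1 by line number.
    assert len(src_lines) == len(tgt_lines), "cleaned files must match 1:1 by line"
    count = 0
    with out_path.open("w", encoding="utf-8") as out:
        for src, tgt in zip(src_lines, tgt_lines):
            src, tgt = src.strip(), tgt.strip()
            if not src or not tgt:
                continue  # skip blank layout lines
            out.write(json.dumps({src_lang: src, tgt_lang: tgt},
                                 ensure_ascii=False) + "\n")
            count += 1
    return count


def split_validation(pairs: list, val_fraction: float = 0.1, seed: int = 42):
    """Shuffle deterministically and hold out a fraction of pairs for validation."""
    rng = random.Random(seed)
    shuffled = pairs[:]
    rng.shuffle(shuffled)
    n_val = max(1, int(len(shuffled) * val_fraction))
    return shuffled[n_val:], shuffled[:n_val]  # (train, validation)
```

This sketch covers only line-level pairs; producing "section" and "document" level pairs would additionally need a pass that groups lines by heading boundaries.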

Example

Canada

The (incomplete) converted markdown pairs (for en/fr) would be:
