Dataset Card for Eduge

Dataset Summary

The Eduge news classification dataset is provided by Bolorsoft LLC and was used to train the production news classifier at Eduge.mn. It contains 75K news articles labeled with 9 categories: урлаг соёл (arts and culture), эдийн засаг (economy), эрүүл мэнд (health), хууль (law), улс төр (politics), спорт (sports), технологи (technology), боловсрол (education), and байгал орчин (environment).
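
A minimal sketch of loading the dataset with the 🤗 Datasets library; the "eduge" dataset id is an assumption and may differ from the actual Hub id:

from datasets import load_dataset

# Assumption: the dataset is published under the "eduge" id on the Hugging Face Hub.
ds = load_dataset("eduge")

print(ds)              # DatasetDict listing the available splits
print(ds["train"][0])  # first example: a dict with 'news' and 'label'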

Supported Tasks and Leaderboards

  • text-classification: the dataset can be used to train a model for 9-class news topic classification, as sketched below.
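
A minimal baseline sketch for the 9-class task using TF-IDF features and logistic regression from scikit-learn; the "eduge" dataset id and the train/test split names are assumptions:

from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

ds = load_dataset("eduge")  # assumed dataset id

# Bag-of-words features over the raw article text.
vectorizer = TfidfVectorizer(max_features=50_000)
X_train = vectorizer.fit_transform(ds["train"]["news"])
X_test = vectorizer.transform(ds["test"]["news"])

# Simple linear classifier over the 9 topic classes.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, ds["train"]["label"])

print("test accuracy:", accuracy_score(ds["test"]["label"], clf.predict(X_test)))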

Languages

The text in the dataset is in Mongolian.

Dataset Structure

Data Instances

For the default configuration:

{
 'label': 0, #  'урлаг соёл' (arts and culture)
 'news': 'Шударга өрсөлдөөн, хэрэглэгчийн төлөө газар 2013 оны дөрөвдүгээр сараас эхлэн Монгол киноны ашиг орлогын мэдээллийг олон нийтэд хүргэж байгаа. Ингэснээр Монголын кино үйлдвэрлэгчид улсад ашиг орлогоо шударгаар төлөх, мөн  чанартай уран бүтээлийн тоо өсөх боломж бүрдэж байгаа юм.',
}

Data Fields

  • news: a complete news article on a specific topic as a string
  • label: the topic class as an integer, one of: "урлаг соёл" (0), "эдийн засаг" (1), "эрүүл мэнд" (2), "хууль" (3), "улс төр" (4), "спорт" (5), "технологи" (6), "боловсрол" (7), "байгал орчин" (8); see the sketch below for converting between ids and names.
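
A short sketch of converting between integer ids and class names, assuming the label column is typed as a datasets.ClassLabel (the "eduge" id is also an assumption):

from datasets import load_dataset

ds = load_dataset("eduge")                     # assumed dataset id
label_feature = ds["train"].features["label"]  # assumed to be a ClassLabel

print(label_feature.names)             # the 9 class names
print(label_feature.int2str(0))        # id -> name, e.g. 'урлаг соёл'
print(label_feature.str2int("спорт"))  # name -> id, e.g. 5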

Data Splits

The set of complete articles is split into a training set and a test set.
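
A quick way to inspect the split sizes, again assuming the "eduge" dataset id:

from datasets import load_dataset

ds = load_dataset("eduge")  # assumed dataset id
for split_name, split in ds.items():
    print(split_name, split.num_rows)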

Dataset Creation

Curation Rationale

[Needs More Information]

Source Data

Initial Data Collection and Normalization

[Needs More Information]

Who are the source language producers?

The articles were aggregated by Eduge.mn, which combines content from shuud.mn, ikon.mn, olloo.mn, news.gogo.mn, montsame.mn, zaluu.com, sonin.mn, medee.mn, and bloombergtv.mn.

Annotations

Annotation process

[Needs More Information]

Who are the annotators?

[Needs More Information]

Personal and Sensitive Information

[Needs More Information]

Considerations for Using the Data

Social Impact of Dataset

[Needs More Information]

Discussion of Biases

[Needs More Information]

Other Known Limitations

[Needs More Information]

Additional Information

Dataset Curators

[Needs More Information]

Licensing Information

[Needs More Information]

Citation Information

No citation available for this dataset.

Contributions

Thanks to @enod for adding this dataset.
