
Dataset Card for KdConv

Dataset Summary

KdConv is a Chinese multi-domain knowledge-driven conversation dataset that grounds the topics of multi-turn conversations in knowledge graphs. KdConv contains 4.5K conversations from three domains (film, music, and travel) and 86K utterances, with an average of 19.0 turns per conversation. The conversations contain in-depth discussions of related topics and natural transitions between multiple topics, and the corpus can also be used to explore transfer learning and domain adaptation.

Supported Tasks and Leaderboards

This dataset can be used for dialogue modelling tasks involving multi-turn, knowledge-grounded conversation.


Languages

The dataset is in Chinese only.

Dataset Structure

Data Instances

Each data instance is a multi-turn conversation between two people, annotated with the knowledge base entries used during the dialogue, e.g.:

  {
      "messages": [
          {
              "message": "对《我喜欢上你时的内心活动》这首歌有了解吗?"
          },
          {
              "attrs": [
                  {
                      "attrname": "Information",
                      "attrvalue": "《我喜欢上你时的内心活动》是由韩寒填词,陈光荣作曲,陈绮贞演唱的歌曲,作为电影《喜欢你》的主题曲于2017年4月10日首发。2018年,该曲先后提名第37届香港电影金像奖最佳原创电影歌曲奖、第7届阿比鹿音乐奖流行单曲奖。",
                      "name": "我喜欢上你时的内心活动"
                  }
              ],
              "message": "有些了解,是电影《喜欢你》的主题曲。"
          },
          {
              "attrs": [
                  {
                      "attrname": "代表作品",
                      "attrvalue": "旅行的意义",
                      "name": "陈绮贞"
                  },
                  {
                      "attrname": "代表作品",
                      "attrvalue": "时间的歌",
                      "name": "陈绮贞"
                  }
              ],
              "message": "我还知道《旅行的意义》与《时间的歌》,都算是她的代表作。"
          },
          {
              "message": "好,有时间我找出来听听。"
          }
      ],
      "name": "我喜欢上你时的内心活动"
  }
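As a rough sketch of working with this structure, the knowledge triplets referenced across a conversation's turns can be flattened into (head entity, relation, tail entity) tuples. The dict literal below is a hand-abridged version of the example instance above (the long "Information" value is shortened):

```python
# Sketch: flatten the knowledge triplets referenced by each turn of a
# KdConv conversation instance. Hand-abridged from the example above.
conversation = {
    "name": "我喜欢上你时的内心活动",
    "messages": [
        {"message": "对《我喜欢上你时的内心活动》这首歌有了解吗?"},
        {
            "message": "有些了解,是电影《喜欢你》的主题曲。",
            "attrs": [
                {
                    "attrname": "Information",
                    "attrvalue": "作为电影《喜欢你》的主题曲于2017年4月10日首发。",
                    "name": "我喜欢上你时的内心活动",
                }
            ],
        },
        {
            "message": "我还知道《旅行的意义》与《时间的歌》,都算是她的代表作。",
            "attrs": [
                {"attrname": "代表作品", "attrvalue": "旅行的意义", "name": "陈绮贞"},
                {"attrname": "代表作品", "attrvalue": "时间的歌", "name": "陈绮贞"},
            ],
        },
        {"message": "好,有时间我找出来听听。"},
    ],
}

# (head entity, relation, tail entity) for every annotated turn.
triplets = [
    (a["name"], a["attrname"], a["attrvalue"])
    for turn in conversation["messages"]
    for a in turn.get("attrs", [])
]
print(triplets[1])  # ('陈绮贞', '代表作品', '旅行的意义')
```

Note that turns without grounded knowledge simply omit the `attrs` key, which is why `turn.get("attrs", [])` is used.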

The corresponding knowledge base entry is a dictionary mapping a head entity to its list of knowledge triplets (head entity, relationship, tail entity), e.g.:

  "忽然之间": [
      "《忽然之间》是歌手莫文蔚演唱的歌曲,由周耀辉,李卓雄填词,林健华谱曲,收录在莫文蔚1999年发行专辑《就是莫文蔚》里。"
  ]

Data Fields

Conversation data fields:

  • name: the starting topic (entity) of the conversation
  • domain: the domain this sample belongs to. Categorical value among {travel, film, music}
  • messages: list of all the turns in the dialogue. For each turn:
    • message: the utterance
    • attrs: list of knowledge graph triplets referred by the utterance. For each triplet:
      • name: the head entity
      • attrname: the relation
      • attrvalue: the tail entity

Knowledge Base data fields:

  • head_entity: the head entity
  • kb_triplets: list of corresponding triplets
  • domain: the domain this sample belongs to. Categorical value among {travel, film, music}
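Given the knowledge base fields above, rows can be indexed by head entity for lookup during dialogue modelling. A minimal sketch, where the two rows and their triplets are hypothetical illustrations of the schema rather than actual dataset rows:

```python
# Sketch: index knowledge-base rows by head entity. The rows below are
# hypothetical illustrations of the schema, not actual dataset rows.
kb_rows = [
    {
        "head_entity": "忽然之间",
        "kb_triplets": [["忽然之间", "演唱", "莫文蔚"]],
        "domain": "music",
    },
    {
        "head_entity": "旅行的意义",
        "kb_triplets": [["旅行的意义", "演唱", "陈绮贞"]],
        "domain": "music",
    },
]

# head entity -> list of [head, relation, tail] triplets
kb_index = {row["head_entity"]: row["kb_triplets"] for row in kb_rows}
print(kb_index["忽然之间"])  # [['忽然之间', '演唱', '莫文蔚']]
```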

Data Splits

The conversation dataset is split into train, validation, and test sets with the following sizes (the per-domain rows sum to the "all" row, so each domain contributes 1200/150/150 conversations):

         train   validation   test
travel    1200          150    150
film      1200          150    150
music     1200          150    150
all       3600          450    450
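The totals in the "all" row can be sanity-checked with a line of arithmetic, since each of the three domains contributes 1200 train, 150 validation, and 150 test conversations:

```python
# Quick arithmetic check of the conversation split totals: three domains,
# each contributing 1200 train / 150 validation / 150 test conversations.
per_domain = {"train": 1200, "validation": 150, "test": 150}
domains = ["travel", "film", "music"]

totals = {split: n * len(domains) for split, n in per_domain.items()}
print(totals)  # {'train': 3600, 'validation': 450, 'test': 450}
```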

The knowledge base dataset has only a train split, with the following sizes:

         train
travel    1154
film      8090
music     4441
all      13685

Dataset Creation

Curation Rationale

[More Information Needed]

Source Data

Initial Data Collection and Normalization

[More Information Needed]

Who are the source language producers?

[More Information Needed]


Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

Apache License 2.0

Citation Information

@inproceedings{zhou-etal-2020-kdconv,
    title = "{K}d{C}onv: A {C}hinese Multi-domain Dialogue Dataset Towards Multi-turn Knowledge-driven Conversation",
    author = "Zhou, Hao  and
      Zheng, Chujie  and
      Huang, Kaili  and
      Huang, Minlie  and
      Zhu, Xiaoyan",
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "",
    doi = "10.18653/v1/2020.acl-main.635",
    pages = "7098--7108",
}


Thanks to @pacman100 for adding this dataset.