---
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
dataset_info:
  features:
    - name: text1
      dtype: string
    - name: text2
      dtype: string
    - name: label
      dtype:
        class_label:
          names:
            '0': '-1'
            '1': '1'
  splits:
    - name: train
      num_bytes: 150266647.47592032
      num_examples: 50712
    - name: test
      num_bytes: 64403801.52407967
      num_examples: 21735
  download_size: 129675237
  dataset_size: 214670449
---

# Dataset Card for "WikiMedical_sentence_similarity"

WikiMedical_sentence_similarity is an adapted, ready-to-use sentence-similarity dataset based on this dataset.

The preprocessing followed three steps:

  • Each text is split into sentences of 256 tokens (using the NLTK tokenizer)
  • Each sentence is paired with a positive sentence when one is found, and with a negative one; negative sentences are drawn randomly from the whole dataset.
  • The train/test split is 70%/30%.
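The pairing step above can be sketched as follows. This is a minimal illustration, not the exact preprocessing script: the function name `build_pairs` and the choice of a neighbouring sentence as the positive pair are assumptions, and integer labels 1/-1 mirror the `class_label` names in the metadata.

```python
import random

def build_pairs(texts, seed=0):
    """Hypothetical sketch of the pairing step: each sentence gets a
    positive pair (here, another sentence from the same source text,
    label 1) and a negative pair drawn at random from the whole corpus
    (label -1). `texts` maps a document id to its list of sentences."""
    rng = random.Random(seed)
    all_sents = [(doc, s) for doc, sents in texts.items() for s in sents]
    pairs = []
    for doc, sents in texts.items():
        for i, sent in enumerate(sents):
            # Positive pair: assumed here to be the next sentence from
            # the same document, when the document has more than one.
            if len(sents) > 1:
                pos = sents[(i + 1) % len(sents)]
                pairs.append({"text1": sent, "text2": pos, "label": 1})
            # Negative pair: a random sentence from a different document.
            other_doc, neg = rng.choice(all_sents)
            while other_doc == doc:
                other_doc, neg = rng.choice(all_sents)
            pairs.append({"text1": sent, "text2": neg, "label": -1})
    return pairs
```

Each row then matches the dataset schema: two text columns and a label indicating whether the pair is positive (1) or negative (-1).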

More Information needed
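The example counts in the metadata are consistent with the stated 70%/30% split; a quick arithmetic check:

```python
# Example counts taken from the dataset_info metadata above.
n_train, n_test = 50712, 21735
total = n_train + n_test

train_frac = n_train / total
test_frac = n_test / total
print(f"train: {train_frac:.2%}, test: {test_frac:.2%}")  # → train: 70.00%, test: 30.00%
```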