---
language:
  - nl
license: eupl-1.1
size_categories:
  - 10K<n<100K
tags:
  - documents
  - fine-tuning
dataset_info:
  features:
    - name: prompt_id
      dtype: int64
    - name: message
      list:
        - name: content
          dtype: string
        - name: role
          dtype: string
  splits:
    - name: train
      num_bytes: 9011452
      num_examples: 9900
    - name: test
      num_bytes: 998068
      num_examples: 1100
    - name: val
      num_bytes: 1000675
      num_examples: 1100
    - name: discard
      num_bytes: 7897005
      num_examples: 8718
  download_size: 5846654
  dataset_size: 18907200
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
      - split: val
        path: data/val-*
      - split: discard
        path: data/discard-*
---

This dataset is used in the Woo-document classification project of the Municipality of Amsterdam. Its purpose is to fine-tune LLMs: each document is formatted into a zero-shot prompt and turned into a conversation, in which the model's ideal response is the predicted class in JSON format.
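For concreteness, here is a minimal sketch of what one record looks like, based on the feature schema declared in the metadata above. The prompt wording, the class label, and the JSON keys of the assistant turn are illustrative assumptions, not taken from the dataset itself:

```python
# A minimal sketch of one record; only the field names ("prompt_id",
# "message", "role", "content") come from the dataset card's schema.
record = {
    "prompt_id": 0,
    "message": [
        {
            "role": "user",
            # Zero-shot prompt containing the (truncated) document text.
            "content": "<zero-shot classification prompt with the first "
                       "200 tokens of the document>",
        },
        {
            "role": "assistant",
            # Ideal response: the predicted class, formatted as JSON.
            # The key "class" and the label value are assumptions.
            "content": '{"class": "<predicted class>"}',
        },
    ],
}

for turn in record["message"]:
    print(turn["role"], "->", turn["content"])
```

The splits can be loaded as usual with `datasets.load_dataset`, using the repository path of this dataset.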

Specifics:

- Truncation: only the first 200 tokens of each document are kept. Documents are tokenized with the Llama tokenizer (see the first sketch below).
- Data split (see the second sketch below):
  - test set: the first 100 documents of each class (1,100 documents in total)
  - train set: the remaining documents, with a maximum of 1,500 per class (11,000 documents), split further into:
    - train set: 90% is used for fine-tuning the model (9,900 documents)
    - val set: 10% is used for evaluating the loss during training (1,100 documents)
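
The truncation step could look roughly like the following sketch. The exact Llama checkpoint is an assumption, since the card only says the documents are tokenized with the Llama tokenizer:

```python
from transformers import AutoTokenizer

# The specific checkpoint below is an assumption; any Llama tokenizer
# from the Hugging Face Hub would follow the same pattern.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

def truncate_document(text: str, max_tokens: int = 200) -> str:
    # Tokenize, keep only the first 200 tokens, and decode back to text.
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    return tokenizer.decode(ids[:max_tokens])
```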
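
And a sketch of the split logic described above. The function name, seed, input shape, and the handling of the `discard` split (declared in the metadata but not described here) are assumptions:

```python
import random
from collections import defaultdict

def split_documents(docs: list[dict], seed: int = 42) -> dict[str, list]:
    """Sketch of the split: per class, the first 100 documents go to the
    test set; the rest, capped at 1,500 per class, form the train pool,
    which is then split 90/10 into train and val. `docs` is assumed to be
    a list of dicts with a "class" key."""
    by_class = defaultdict(list)
    for doc in docs:
        by_class[doc["class"]].append(doc)

    test, pool, discard = [], [], []
    for items in by_class.values():
        test.extend(items[:100])       # first 100 docs per class -> test set
        pool.extend(items[100:1600])   # remaining docs, max 1,500 per class
        discard.extend(items[1600:])   # overflow beyond the cap (assumption:
                                       # this is what the discard split holds)

    random.Random(seed).shuffle(pool)
    cut = int(0.9 * len(pool))         # 90% fine-tuning, 10% validation
    return {"train": pool[:cut], "val": pool[cut:],
            "test": test, "discard": discard}
```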