---
dataset_info:
  features:
    - name: id
      dtype: string
    - name: subreddit
      dtype: string
    - name: title
      dtype: string
    - name: post
      dtype: string
    - name: summary
      dtype: string
    - name: query_token
      sequence: int64
    - name: query
      dtype: string
    - name: reference_response
      dtype: string
    - name: reference_response_token
      sequence: int64
    - name: reference_response_token_len
      dtype: int64
    - name: query_reference_response
      dtype: string
    - name: query_reference_response_token
      sequence: int64
    - name: query_reference_response_token_len
      dtype: int64
  splits:
    - name: train
      num_bytes: 1600440249
      num_examples: 116722
    - name: validation
      num_bytes: 88425771
      num_examples: 6447
    - name: test
      num_bytes: 89922466
      num_examples: 6553
  download_size: 551824607
  dataset_size: 1778788486
---

# TL;DR SFT Dataset for OpenAI's Summarize from Feedback task

The dataset is directly taken from https://github.com/openai/summarize-from-feedback/tree/700967448d10004279f138666442bf1497d0e705#reddit-tldr-dataset

These columns are taken directly from the aforementioned dataset:

  • id: unique identifier for the post
  • subreddit: subreddit the post was taken from
  • title: title of the post
  • post: body of the post
  • summary: summary of the post
  • reference_response: reference response for the post

These columns are added by this preprocessing script:

  • query: length-limited query for summarization: OAI pre-processes the main text (title + subreddit + post), ensuring it has at most 512 tokens; if the main text is too long, it tries to truncate at the last `\n`; if it is too short, it pads the main text (summarize_from_feedback/tasks.py#L98-L165). Padding is either a space or a `[PAD]` token (see Args below).
  • query_token: tokenized version of query
  • reference_response_token: tokenized version of reference_response
  • reference_response_token_len: length of reference_response_token
  • query_reference_response: concatenation of query.strip() and reference_response
  • query_reference_response_token: tokenized version of query_reference_response, up to max_sft_query_response_length tokens
  • query_reference_response_token_len: length of query_reference_response_token
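The query construction described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual OAI preprocessing code: it uses a simple whitespace split as a stand-in for the real tokenizer, and `build_query` is a name introduced here for clarity.

```python
# Hypothetical sketch of the TL;DR query construction: format the post with
# the TL;DR template, truncate the post at the last newline while the query
# exceeds `length` tokens, then left-pad with a pad token. A whitespace split
# stands in for the real tokenizer.

FORMAT_STR = "SUBREDDIT: r/{subreddit}\n\nTITLE: {title}\n\nPOST: {post}\n\nTL;DR:"

def build_query(subreddit: str, title: str, post: str,
                length: int = 512, pad_token: str = "[PAD]") -> str:
    def n_tokens(text: str) -> int:
        return len(text.split())

    query = FORMAT_STR.format(subreddit=subreddit, title=title, post=post)
    while n_tokens(query) > length and post:
        cut = post.rfind("\n")
        # Truncate at the last newline if one exists, else drop the last token.
        post = post[:cut] if cut != -1 else " ".join(post.split()[:-1])
        query = FORMAT_STR.format(subreddit=subreddit, title=title, post=post)

    # Left-pad the query up to exactly `length` tokens.
    missing = length - n_tokens(query)
    return " ".join([pad_token] * missing + [query]) if missing > 0 else query
```

The actual script performs truncation and padding in token space with the model's tokenizer; the control flow above mirrors the described behavior only.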

## Args

```python
{'base_model': 'EleutherAI/pythia-1b-deduped',
 'check_length_correctness': False,
 'cnndm_params': TaskQueryHParams(length=1919,
                                  format_str='Article:\n{article}\n\nTL;DR:\n',
                                  truncate_field='article',
                                  truncate_text='\n',
                                  padding=[50277],
                                  pad_side='left',
                                  max_sft_response_length=None,
                                  max_sft_query_response_length=None,
                                  max_rm_response_length=155,
                                  max_rm_query_response_length=2021),
 'hf_entity': 'cleanrl',
 'push_to_hub': True,
 'tldr_params': TaskQueryHParams(length=512,
                                 format_str='SUBREDDIT: r/{subreddit}\n'
                                            '\n'
                                            'TITLE: {title}\n'
                                            '\n'
                                            'POST: {post}\n'
                                            '\n'
                                            'TL;DR:',
                                 truncate_field='post',
                                 truncate_text='\n',
                                 padding=[50277],
                                 pad_side='left',
                                 max_sft_response_length=53,
                                 max_sft_query_response_length=562,
                                 max_rm_response_length=169,
                                 max_rm_query_response_length=638)}
```
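The `padding=[50277]` and `pad_side='left'` entries mean the tokenized query is left-padded with token id 50277 up to the fixed `length`. A minimal sketch of that padding step (`pad_tokens` is a name introduced here; it assumes the token list has already been truncated to at most `length` tokens):

```python
def pad_tokens(token_ids: list, length: int = 512,
               pad_id: int = 50277, pad_side: str = "left") -> list:
    """Pad a token-id list to exactly `length` entries.

    Assumes `token_ids` already has at most `length` tokens; pads on the
    left or right with `pad_id` depending on `pad_side`.
    """
    pad = [pad_id] * (length - len(token_ids))
    return pad + token_ids if pad_side == "left" else token_ids + pad
```

For example, `pad_tokens([1, 2, 3], length=5)` yields `[50277, 50277, 1, 2, 3]`, matching the left-padded `query_token` layout.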