Upload README.md with huggingface_hub
README.md CHANGED
@@ -40,6 +40,40 @@ dataset_info:
 download_size: 551527888
 dataset_size: 1778788486
 ---
-# Dataset
+# TL;DR SFT Dataset for OpenAI's [Summarize from Feedback](https://openai.com/blog/summarization/) task
 
-
+The dataset is directly taken from https://github.com/openai/summarize-from-feedback/tree/700967448d10004279f138666442bf1497d0e705#reddit-tldr-dataset
+
+These columns are taken directly from the aforementioned dataset:
+
+* **id**: unique identifier for the post
+* **subreddit**: subreddit the post was taken from
+* **title**: title of the post
+* **post**: body of the post
+* **summary**: summary of the post
+* **reference_response**: reference response for the post
+
+These columns are added by this preprocessing script:
+* **query**: length-limited query for summarization: OAI pre-processes the main text (title + subreddit + post), ensuring it has only 512 tokens; if the main text is too long, then it tries to truncate at the last `\n`. If it's too short it pads the main text ([summarize_from_feedback/tasks.py#L98-L165](https://github.com/openai/summarize-from-feedback/blob/700967448d10004279f138666442bf1497d0e705/summarize_from_feedback/tasks.py#L98-L165)). Padding is either space or `[PAD]` token (see Args below).
+* **query_token**: tokenized version of `query`
+* **reference_response_token**: tokenized version of `reference_response`
+* **reference_response_token_len**: length of `reference_response_token`
+* **query_reference_response**: concatenation of `query.strip()` and `reference_response`
+* **query_reference_response_token**: tokenized version of `query_reference_response`, up to `max_sft_query_response_length` tokens
+* **query_reference_response_token_len**: length of `query_reference_response_token`
+
+
+# Args
+
+```python
+{'base_model': 'EleutherAI/pythia-1b-deduped',
+ 'cnndm_params': TaskQueryHParams(length=1919, format_str='Article:\n{article}\n\nTL;DR:\n', truncate_field='article', truncate_text='\n', padding=[50277], pad_side='left'),
+ 'hf_entity': 'cleanrl',
+ 'max_rm_query_response_length': 638,
+ 'max_rm_response_length': 169,
+ 'max_sft_query_response_length': 562,
+ 'max_sft_response_length': 53,
+ 'push_to_hub': True,
+ 'tldr_params': TaskQueryHParams(length=512, format_str='SUBREDDIT: r/{subreddit}\n\nTITLE: {title}\n\nPOST: {post}\n\nTL;DR:', truncate_field='post', truncate_text='\n', padding=[50277], pad_side='left')}
+```
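
The Args block above pretty-prints two `TaskQueryHParams` objects from the preprocessing script. The script's actual class definition is not shown on this page; the following is a plausible reconstruction, with field names taken from the printed repr and meanings inferred from the printed values, not the original source:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TaskQueryHParams:
    # Reconstructed from the printed repr above; the real class lives in the
    # preprocessing script, so types and comments here are an inference.
    length: int          # token budget for the formatted query (512 for TL;DR)
    format_str: str      # template filled in with the dataset columns
    truncate_field: str  # which field gets shortened when the query is over budget
    truncate_text: str   # boundary to cut at when truncating (a newline here)
    padding: List[int]   # token id(s) used to pad short queries ([50277])
    pad_side: str        # 'left', so the prompt and response stay at the end
```

Left padding (`pad_side='left'`) keeps the `TL;DR:` prompt and the reference response at the end of the sequence, which is what a causal language model conditions on during SFT.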
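Putting these parameters together, the `query` column described above can be approximated as follows. This is a simplified sketch, not the OpenAI implementation in `summarize_from_feedback/tasks.py#L98-L165`: it repeatedly truncates the `post` at its last newline until the formatted text fits in 512 tokens, then left-pads the token ids with id 50277; the text-side padding (space vs `[PAD]`) described above is omitted.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1b-deduped")

LENGTH = 512          # tldr_params.length
PAD_TOKEN_ID = 50277  # tldr_params.padding
FORMAT_STR = "SUBREDDIT: r/{subreddit}\n\nTITLE: {title}\n\nPOST: {post}\n\nTL;DR:"

def build_query(row):
    """Approximate the `query` / `query_token` construction for one TL;DR row."""
    post = row["post"]
    while True:
        text = FORMAT_STR.format(subreddit=row["subreddit"], title=row["title"], post=post)
        tokens = tokenizer.encode(text)
        if len(tokens) <= LENGTH:
            break
        # Over budget: cut the post at its last newline and try again.
        post = post[: post.rfind("\n")] if "\n" in post else post[:-1]
    # Under budget: left-pad the token ids up to the fixed length (pad_side='left').
    query_token = [PAD_TOKEN_ID] * (LENGTH - len(tokens)) + tokens
    return text, query_token
```

The SFT target is then simply `query.strip()` followed by `reference_response`, which is what the `query_reference_response` column stores.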