tbenthompson committed on
Commit
66b8bd6
1 Parent(s): 3a3a0a4

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +18 -46
README.md CHANGED
@@ -1,47 +1,19 @@
- ---
- dataset_info:
-   features:
-   - name: text
-     dtype: string
-   - name: token_short
-     dtype: string
-   - name: token_long
-     dtype: string
-   - name: p_short
-     dtype: float32
-   - name: p_long
-     dtype: float32
-   - name: JS
-     dtype: float32
-   - name: long_ids
-     sequence: int32
-   - name: short_max_id
-     dtype: int64
-   - name: long_max_id
-     dtype: int64
-   - name: context
-     dtype: string
-   - name: context_ids
-     sequence: int32
-   - name: p_delta_max
-     dtype: float32
-   - name: logit_excite_max
-     dtype: float32
-   - name: logit_inhibit_max
-     dtype: float32
-   - name: batch
-     dtype: int64
-   - name: sample
-     dtype: int64
-   - name: start
-     dtype: int64
-   splits:
-   - name: scan
-     num_bytes: 466393218
-     num_examples: 1874497
-   download_size: 337479388
-   dataset_size: 466393218
- ---
- # Dataset Card for "pile_scan_4"
-
- [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ # scan_4
+
+ - `text`: The long prompt text, with the first token surrounded by square brackets.
+ - `token_short`: The model's most likely next token given the short prompt.
+ - `token_long`: The model's most likely next token given the long prompt.
+ - `p_short`: The model's predicted probability of `token_short`.
+ - `p_long`: The model's predicted probability of `token_long`.
+ - `JS`: The Jensen-Shannon divergence between the model's next-token distributions for the short and long prompts.
+ - `long_ids`: The token ids of the long prompt.
+ - `short_max_id`: The token id of `token_short`.
+ - `long_max_id`: The token id of `token_long`.
+ - `context`: The text surrounding the prompt.
+ - `context_ids`: The token ids of `context`.
+ - `p_delta_max`: The largest change in probability for any token between the short and long prompts.
+ - `logit_excite_max`: The largest increase in logit for any token between the short and long prompts.
+ - `logit_inhibit_max`: The largest decrease in logit for any token between the short and long prompts.
+ - `batch`: The batch number of the prompt.
+ - `sample`: The sample number of the prompt.
+ - `start`: The start index of the prompt within the sample.
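
The comparison fields above (`JS`, `p_delta_max`, `logit_excite_max`, `logit_inhibit_max`) can be sketched from a pair of next-token logit vectors. The card does not state the log base, smoothing, or sign conventions used, so the choices below (natural log, small epsilon, decreases reported as positive magnitudes) are assumptions:

```python
import numpy as np

def softmax(z):
    """Convert a logit vector to a probability distribution."""
    z = np.asarray(z, dtype=np.float64)
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence (natural log) between two distributions."""
    p = np.asarray(p, dtype=np.float64) + eps
    q = np.asarray(q, dtype=np.float64) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    return 0.5 * np.sum(p * np.log(p / m)) + 0.5 * np.sum(q * np.log(q / m))

def scan_stats(logits_short, logits_long):
    """Summary statistics comparing short- and long-prompt predictions."""
    logits_short = np.asarray(logits_short, dtype=np.float64)
    logits_long = np.asarray(logits_long, dtype=np.float64)
    p_short = softmax(logits_short)
    p_long = softmax(logits_long)
    return {
        "JS": js_divergence(p_short, p_long),
        # largest probability change for any single token
        "p_delta_max": float(np.max(np.abs(p_long - p_short))),
        # largest logit increase (excitation) and decrease (inhibition),
        # both reported as positive magnitudes
        "logit_excite_max": float(np.max(logits_long - logits_short)),
        "logit_inhibit_max": float(np.max(logits_short - logits_long)),
    }

# Identical prompts give zero divergence; diverging logits give positive stats.
print(scan_stats([2.0, 0.0, -1.0], [2.0, 0.0, -1.0]))
print(scan_stats([0.0, 0.0], [1.0, -1.0]))
```

This is a minimal reconstruction for illustration only; the actual values in the dataset were computed by the authors' pipeline, which may differ in normalization details.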