pszemraj and autoevaluator (HF staff) committed
Commit: db7f095
Parent: 7054526

Add verifyToken field to verify evaluation results are produced by Hugging Face's automatic model evaluator (#11)

- Add verifyToken field to verify evaluation results are produced by Hugging Face's automatic model evaluator (797b6e8ba53c5f6efa593e246e0d5dd56005c17c)


Co-authored-by: Evaluation Bot <autoevaluator@users.noreply.huggingface.co>
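
The verifyToken values added in this commit are JSON Web Tokens: the header declares an EdDSA signature and the payload carries a hash and a version field. As a minimal illustration (not part of this commit; the helper names below are made up), this standard-library Python sketch splits such a token and decodes its header and payload. Decoding only inspects the token; confirming it was really issued by Hugging Face's automatic model evaluator would additionally require the Hub's public verification key.

```python
import base64
import json


def _b64url_decode(segment: str) -> bytes:
    """Decode a base64url segment, restoring the padding that JWTs strip."""
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))


def inspect_verify_token(token: str) -> dict:
    """Decode the header and payload of a verifyToken without checking its signature.

    Verifying the EdDSA signature itself would additionally require
    Hugging Face's public verification key, which is not part of this commit.
    """
    header_b64, payload_b64, signature_b64 = token.split(".")
    return {
        "header": json.loads(_b64url_decode(header_b64)),    # e.g. {"alg": "EdDSA", "typ": "JWT"}
        "payload": json.loads(_b64url_decode(payload_b64)),   # e.g. {"hash": "...", "version": 1}
        "signature_len": len(_b64url_decode(signature_b64)),  # 64 bytes for an Ed25519 signature
    }


# Usage: paste any full verifyToken string from the diff below (truncated here).
# print(inspect_verify_token("eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoi..."))
```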

Files changed (1)
  1. README.md +163 -128
README.md CHANGED
@@ -1,6 +1,9 @@
1
  ---
2
  language:
3
  - en
4
  tags:
5
  - summarization
6
  - led
@@ -9,9 +12,6 @@ tags:
9
  - booksum
10
  - long-document
11
  - long-form
12
- license:
13
- - apache-2.0
14
- - bsd-3-clause
15
  datasets:
16
  - kmfoda/booksum
17
  metrics:
@@ -30,39 +30,38 @@ widget:
30
  deviation of the average recurrence interval, the more specific could be the long
31
  term prediction of a future mainshock.
32
  example_title: earthquakes
33
- - text: " A typical feed-forward neural field algorithm. Spatiotemporal coordinates\
34
- \ are fed into a neural network that predicts values in the reconstructed domain.\
35
- \ Then, this domain is mapped to the sensor domain where sensor measurements are\
36
- \ available as supervision. Class and Section Problems Addressed Generalization\
37
- \ (Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid\
38
- \ Representations (Section 3) Computation & memory efficiency, representation\
39
- \ capacity, editability: Forward Maps (Section 4) Inverse problems Network Architecture\
40
- \ (Section 5) Spectral bias, integration & derivatives. Manipulating Neural Fields\
41
- \ (Section 6) Edit ability, constraints, regularization. Table 2: The five classes\
42
- \ of techniques in the neural field toolbox each addresses problems that arise\
43
- \ in learning, inference, and control. (Section 3). We can supervise reconstruction\
44
- \ via differentiable forward maps that transform Or project our domain (e.g, 3D\
45
- \ reconstruction via 2D images; Section 4) With appropriate network architecture\
46
- \ choices, we can overcome neural network spectral biases (blurriness) and efficiently\
47
- \ compute derivatives and integrals (Section 5). Finally, we can manipulate neural\
48
- \ fields to add constraints and regularizations, and to achieve editable representations\
49
- \ (Section 6). Collectively, these classes constitute a 'toolbox' of techniques\
50
- \ to help solve problems with neural fields There are three components in a conditional\
51
- \ neural field: (1) An encoder or inference function \u20AC that outputs the conditioning\
52
- \ latent variable 2 given an observation 0 E(0) =2. 2 is typically a low-dimensional\
53
- \ vector, and is often referred to aS a latent code Or feature code_ (2) A mapping\
54
- \ function 4 between Z and neural field parameters O: Y(z) = O; (3) The neural\
55
- \ field itself $. The encoder \u20AC finds the most probable z given the observations\
56
- \ O: argmaxz P(2/0). The decoder maximizes the inverse conditional probability\
57
- \ to find the most probable 0 given Z: arg- max P(Olz). We discuss different encoding\
58
- \ schemes with different optimality guarantees (Section 2.1.1), both global and\
59
- \ local conditioning (Section 2.1.2), and different mapping functions Y (Section\
60
- \ 2.1.3) 2. Generalization Suppose we wish to estimate a plausible 3D surface\
61
- \ shape given a partial or noisy point cloud. We need a suitable prior over the\
62
- \ sur- face in its reconstruction domain to generalize to the partial observations.\
63
- \ A neural network expresses a prior via the function space of its architecture\
64
- \ and parameters 0, and generalization is influenced by the inductive bias of\
65
- \ this function space (Section 5)."
66
  example_title: scientific paper
67
  - text: ' the big variety of data coming from diverse sources is one of the key properties
68
  of the big data phenomenon. It is, therefore, beneficial to understand how data
@@ -107,50 +106,62 @@ widget:
107
  in their business An important area of data analytics on the edge of corporate
108
  IT and the Internet is Web Analytics.'
109
  example_title: data science textbook
110
- - text: "Transformer-based models have shown to be very useful for many NLP tasks.\
111
- \ However, a major limitation of transformers-based models is its O(n^2)O(n 2)\
112
- \ time & memory complexity (where nn is sequence length). Hence, it's computationally\
113
- \ very expensive to apply transformer-based models on long sequences n > 512n>512.\
114
- \ Several recent papers, e.g. Longformer, Performer, Reformer, Clustered attention\
115
- \ try to remedy this problem by approximating the full attention matrix. You can\
116
- \ checkout \U0001F917's recent blog post in case you are unfamiliar with these\
117
- \ models.\nBigBird (introduced in paper) is one of such recent models to address\
118
- \ this issue. BigBird relies on block sparse attention instead of normal attention\
119
- \ (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a\
120
- \ much lower computational cost compared to BERT. It has achieved SOTA on various\
121
- \ tasks involving very long sequences such as long documents summarization, question-answering\
122
- \ with long contexts.\nBigBird RoBERTa-like model is now available in \U0001F917\
123
- Transformers. The goal of this post is to give the reader an in-depth understanding\
124
- \ of big bird implementation & ease one's life in using BigBird with \U0001F917\
125
- Transformers. But, before going into more depth, it is important to remember that\
126
- \ the BigBird's attention is an approximation of BERT's full attention and therefore\
127
- \ does not strive to be better than BERT's full attention, but rather to be more\
128
- \ efficient. It simply allows to apply transformer-based models to much longer\
129
- \ sequences since BERT's quadratic memory requirement quickly becomes unbearable.\
130
- \ Simply put, if we would have \u221E compute & \u221E time, BERT's attention\
131
- \ would be preferred over block sparse attention (which we are going to discuss\
132
- \ in this post).\nIf you wonder why we need more compute when working with longer\
133
- \ sequences, this blog post is just right for you!\nSome of the main questions\
134
- \ one might have when working with standard BERT-like attention include:\nDo all\
135
- \ tokens really have to attend to all other tokens? Why not compute attention\
136
- \ only over important tokens? How to decide what tokens are important? How to\
137
- \ attend to just a few tokens in a very efficient way? In this blog post, we will\
138
- \ try to answer those questions.\nWhat tokens should be attended to? We will give\
139
- \ a practical example of how attention works by considering the sentence 'BigBird\
140
- \ is now available in HuggingFace for extractive question answering'. In BERT-like\
141
- \ attention, every word would simply attend to all other tokens.\nLet's think\
142
- \ about a sensible choice of key tokens that a queried token actually only should\
143
- \ attend to by writing some pseudo-code. Will will assume that the token available\
144
- \ is queried and build a sensible list of key tokens to attend to.\n>>> # let's\
145
- \ consider following sentence as an example >>> example = ['BigBird', 'is', 'now',\
146
- \ 'available', 'in', 'HuggingFace', 'for', 'extractive', 'question', 'answering']\n\
147
- >>> # further let's assume, we're trying to understand the representation of 'available'\
148
- \ i.e. >>> query_token = 'available' >>> # We will initialize an empty `set` and\
149
- \ fill up the tokens of our interest as we proceed in this section. >>> key_tokens\
150
- \ = [] # => currently 'available' token doesn't have anything to attend Nearby\
151
- \ tokens should be important because, in a sentence (sequence of words), the current\
152
- \ word is highly dependent on neighboring past & future tokens. This intuition\
153
- \ is the idea behind the concept of sliding attention."
154
  example_title: bigbird blog intro
155
  - text: 'The majority of available text summarization datasets include short-form
156
  source documents that lack long-range causal and temporal dependencies, and often
@@ -188,30 +199,36 @@ model-index:
188
  config: kmfoda--booksum
189
  split: test
190
  metrics:
191
- - name: ROUGE-1
192
- type: rouge
193
  value: 31.7308
194
  verified: true
195
- - name: ROUGE-2
196
- type: rouge
197
  value: 5.3311
198
  verified: true
199
- - name: ROUGE-L
200
- type: rouge
201
  value: 16.1465
202
  verified: true
203
- - name: ROUGE-LSUM
204
- type: rouge
205
  value: 29.0883
206
  verified: true
207
- - name: loss
208
- type: loss
209
  value: 4.815707206726074
210
  verified: true
211
- - name: gen_len
212
- type: gen_len
213
  value: 154.9036
214
  verified: true
215
  - task:
216
  type: summarization
217
  name: Summarization
@@ -221,30 +238,36 @@ model-index:
221
  config: samsum
222
  split: test
223
  metrics:
224
- - name: ROUGE-1
225
- type: rouge
226
  value: 33.4484
 
227
  verified: true
228
- - name: ROUGE-2
229
- type: rouge
230
  value: 10.4249
 
231
  verified: true
232
- - name: ROUGE-L
233
- type: rouge
234
  value: 24.5802
 
235
  verified: true
236
- - name: ROUGE-LSUM
237
- type: rouge
238
  value: 29.8226
 
239
  verified: true
240
- - name: loss
241
- type: loss
242
  value: 4.176078796386719
 
243
  verified: true
244
- - name: gen_len
245
- type: gen_len
246
  value: 65.4005
 
247
  verified: true
 
248
  - task:
249
  type: summarization
250
  name: Summarization
@@ -254,30 +277,36 @@ model-index:
254
  config: default
255
  split: test
256
  metrics:
257
- - name: ROUGE-1
258
- type: rouge
259
  value: 40.5843
260
  verified: true
261
- - name: ROUGE-2
262
- type: rouge
263
  value: 17.3401
264
  verified: true
265
- - name: ROUGE-L
266
- type: rouge
267
  value: 25.1256
268
  verified: true
269
- - name: ROUGE-LSUM
270
- type: rouge
271
  value: 34.6619
272
  verified: true
273
- - name: loss
274
- type: loss
275
  value: 4.792657375335693
276
  verified: true
277
- - name: gen_len
278
- type: gen_len
279
  value: 163.9394
280
  verified: true
281
  - task:
282
  type: summarization
283
  name: Summarization
@@ -287,30 +316,36 @@ model-index:
287
  config: default
288
  split: test
289
  metrics:
290
- - name: ROUGE-1
291
- type: rouge
292
  value: 39.0834
293
  verified: true
294
- - name: ROUGE-2
295
- type: rouge
296
  value: 11.4043
297
  verified: true
298
- - name: ROUGE-L
299
- type: rouge
300
  value: 19.1813
301
  verified: true
302
- - name: ROUGE-LSUM
303
- type: rouge
304
  value: 35.1581
305
  verified: true
306
- - name: loss
307
- type: loss
308
  value: 4.654905319213867
309
  verified: true
310
- - name: gen_len
311
- type: gen_len
312
  value: 186.2494
313
  verified: true
314
  ---
315
 
316
  # Longformer Encoder-Decoder (LED) for Narrative-Esque Long Text Summarization
1
  ---
2
  language:
3
  - en
4
+ license:
5
+ - apache-2.0
6
+ - bsd-3-clause
7
  tags:
8
  - summarization
9
  - led
12
  - booksum
13
  - long-document
14
  - long-form
15
  datasets:
16
  - kmfoda/booksum
17
  metrics:
30
  deviation of the average recurrence interval, the more specific could be the long
31
  term prediction of a future mainshock.
32
  example_title: earthquakes
33
+ - text: ' A typical feed-forward neural field algorithm. Spatiotemporal coordinates
34
+ are fed into a neural network that predicts values in the reconstructed domain.
35
+ Then, this domain is mapped to the sensor domain where sensor measurements are
36
+ available as supervision. Class and Section Problems Addressed Generalization
37
+ (Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid
38
+ Representations (Section 3) Computation & memory efficiency, representation capacity,
39
+ editability: Forward Maps (Section 4) Inverse problems Network Architecture (Section
40
+ 5) Spectral bias, integration & derivatives. Manipulating Neural Fields (Section
41
+ 6) Edit ability, constraints, regularization. Table 2: The five classes of techniques
42
+ in the neural field toolbox each addresses problems that arise in learning, inference,
43
+ and control. (Section 3). We can supervise reconstruction via differentiable forward
44
+ maps that transform Or project our domain (e.g, 3D reconstruction via 2D images;
45
+ Section 4) With appropriate network architecture choices, we can overcome neural
46
+ network spectral biases (blurriness) and efficiently compute derivatives and integrals
47
+ (Section 5). Finally, we can manipulate neural fields to add constraints and regularizations,
48
+ and to achieve editable representations (Section 6). Collectively, these classes
49
+ constitute a ''toolbox'' of techniques to help solve problems with neural fields
50
+ There are three components in a conditional neural field: (1) An encoder or inference
51
+ function that outputs the conditioning latent variable 2 given an observation
52
+ 0 E(0) =2. 2 is typically a low-dimensional vector, and is often referred to aS
53
+ a latent code Or feature code_ (2) A mapping function 4 between Z and neural field
54
+ parameters O: Y(z) = O; (3) The neural field itself $. The encoder € finds the
55
+ most probable z given the observations O: argmaxz P(2/0). The decoder maximizes
56
+ the inverse conditional probability to find the most probable 0 given Z: arg-
57
+ max P(Olz). We discuss different encoding schemes with different optimality guarantees
58
+ (Section 2.1.1), both global and local conditioning (Section 2.1.2), and different
59
+ mapping functions Y (Section 2.1.3) 2. Generalization Suppose we wish to estimate
60
+ a plausible 3D surface shape given a partial or noisy point cloud. We need a suitable
61
+ prior over the sur- face in its reconstruction domain to generalize to the partial
62
+ observations. A neural network expresses a prior via the function space of its
63
+ architecture and parameters 0, and generalization is influenced by the inductive
64
+ bias of this function space (Section 5).'
65
  example_title: scientific paper
66
  - text: ' the big variety of data coming from diverse sources is one of the key properties
67
  of the big data phenomenon. It is, therefore, beneficial to understand how data
106
  in their business An important area of data analytics on the edge of corporate
107
  IT and the Internet is Web Analytics.'
108
  example_title: data science textbook
109
+ - text: 'Transformer-based models have shown to be very useful for many NLP tasks.
110
+ However, a major limitation of transformers-based models is its O(n^2)O(n 2) time
111
+ & memory complexity (where nn is sequence length). Hence, it''s computationally
112
+ very expensive to apply transformer-based models on long sequences n > 512n>512.
113
+ Several recent papers, e.g. Longformer, Performer, Reformer, Clustered attention
114
+ try to remedy this problem by approximating the full attention matrix. You can
115
+ checkout 🤗''s recent blog post in case you are unfamiliar with these models.
116
+
117
+ BigBird (introduced in paper) is one of such recent models to address this issue.
118
+ BigBird relies on block sparse attention instead of normal attention (i.e. BERT''s
119
+ attention) and can handle sequences up to a length of 4096 at a much lower computational
120
+ cost compared to BERT. It has achieved SOTA on various tasks involving very long
121
+ sequences such as long documents summarization, question-answering with long contexts.
122
+
123
+ BigBird RoBERTa-like model is now available in 🤗Transformers. The goal of this
124
+ post is to give the reader an in-depth understanding of big bird implementation
125
+ & ease one''s life in using BigBird with 🤗Transformers. But, before going into
126
+ more depth, it is important to remember that the BigBird''s attention is an approximation
127
+ of BERT''s full attention and therefore does not strive to be better than BERT''s
128
+ full attention, but rather to be more efficient. It simply allows to apply transformer-based
129
+ models to much longer sequences since BERT''s quadratic memory requirement quickly
130
+ becomes unbearable. Simply put, if we would have compute & time, BERT''s attention
131
+ would be preferred over block sparse attention (which we are going to discuss
132
+ in this post).
133
+
134
+ If you wonder why we need more compute when working with longer sequences, this
135
+ blog post is just right for you!
136
+
137
+ Some of the main questions one might have when working with standard BERT-like
138
+ attention include:
139
+
140
+ Do all tokens really have to attend to all other tokens? Why not compute attention
141
+ only over important tokens? How to decide what tokens are important? How to attend
142
+ to just a few tokens in a very efficient way? In this blog post, we will try to
143
+ answer those questions.
144
+
145
+ What tokens should be attended to? We will give a practical example of how attention
146
+ works by considering the sentence ''BigBird is now available in HuggingFace for
147
+ extractive question answering''. In BERT-like attention, every word would simply
148
+ attend to all other tokens.
149
+
150
+ Let''s think about a sensible choice of key tokens that a queried token actually
151
+ only should attend to by writing some pseudo-code. Will will assume that the token
152
+ available is queried and build a sensible list of key tokens to attend to.
153
+
154
+ >>> # let''s consider following sentence as an example >>> example = [''BigBird'',
155
+ ''is'', ''now'', ''available'', ''in'', ''HuggingFace'', ''for'', ''extractive'',
156
+ ''question'', ''answering'']
157
+
158
+ >>> # further let''s assume, we''re trying to understand the representation of
159
+ ''available'' i.e. >>> query_token = ''available'' >>> # We will initialize an
160
+ empty `set` and fill up the tokens of our interest as we proceed in this section.
161
+ >>> key_tokens = [] # => currently ''available'' token doesn''t have anything
162
+ to attend Nearby tokens should be important because, in a sentence (sequence of
163
+ words), the current word is highly dependent on neighboring past & future tokens.
164
+ This intuition is the idea behind the concept of sliding attention.'
165
  example_title: bigbird blog intro
166
  - text: 'The majority of available text summarization datasets include short-form
167
  source documents that lack long-range causal and temporal dependencies, and often
199
  config: kmfoda--booksum
200
  split: test
201
  metrics:
202
+ - type: rouge
203
  value: 31.7308
204
+ name: ROUGE-1
205
  verified: true
206
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjJmZjMxYTY0OGU3MzNjNmIzNmYyODNlNDg2ZGRhZDAzNTMwMDM5YWMxODc1OTc1ZWE3MzM2OTg1ODFhZDBkNCIsInZlcnNpb24iOjF9.B8BCKgySYVZW910_1zP0LfCpQYJbAe6loyWut76JlgZb2kV1_x9ybqtNESX0ka-lNqhYyXUNDpuS-7pTmsJVDg
207
+ - type: rouge
208
  value: 5.3311
209
+ name: ROUGE-2
210
  verified: true
211
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzViMmY4ODFjYTc5ODk5MmRhMDQ3ZDRiYWQwMDg0OTk3ZTA4NDAxYTNiNDgyMmI4NDA3ZDMwYWViOTBkODBjNyIsInZlcnNpb24iOjF9.MOhJLDcgvv93mVFL1igIgIiTAH3b2Xa4gmBObq7RF44Mmu8Kxtd1KP7rOlDVFOrtrsooGPGsyE1GMCQ2kqeMDg
212
+ - type: rouge
213
  value: 16.1465
214
+ name: ROUGE-L
215
  verified: true
216
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzNjMzEwMTliZGE3ZmQ4M2UxMDAyMTY3YzJjZmMyMDYyN2YyNDM0N2VhNzI1MDc1YTg4MTRjMmEzNjVkNTk1NCIsInZlcnNpb24iOjF9.XLJ-DVKiYLlbw5E5rWADKbzUzf5fNHhlTCWPCC5dU4NI9Yeh76aR7TPt36ZzLDwTBknnR8KHqlaF8F8YAvBUAg
217
+ - type: rouge
218
  value: 29.0883
219
+ name: ROUGE-LSUM
220
  verified: true
221
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTcwNzEwMmE5NjQxZTkzYmQyZDZmNzllYzYyNGI5OTMyNWMwNjdiM2I2YmM5YjdmY2E5OWQ3OTk3ZDA1MTc3YyIsInZlcnNpb24iOjF9.d6rFxjCB6RJNI_pn2DNNSjuZe4rdvj0RatkaTJRp5lP0F_AFfU5Zn9zRWzZJV7V-xMauIc4UhfdoLp9r_-CABA
222
+ - type: loss
223
  value: 4.815707206726074
224
+ name: loss
225
  verified: true
226
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTMwMTgxMmJkODY3MjkzOWJhMzJhOTIxMWVkODhjZmM0MWUzMWQ1N2JkZjRhOTQxNmU1YWVjYzQ0MDNlZWI3OSIsInZlcnNpb24iOjF9.mkBQHYhYFfDV6F4klXGJ1dSsF-pbCs-6F9zcw6IYznwmXUjtk7m5J4Zt4JAju5LKz4YizvEcUCl_L0WddnfvDA
227
+ - type: gen_len
228
  value: 154.9036
229
+ name: gen_len
230
  verified: true
231
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTc0ZmM1ZDM4MDE0MzY3MDM3OWJhNDkzZjJkZDdkMjU5M2JmMDJjYTIxODA1OTllNmY5ZWQzZDlmNWFiYzk4NiIsInZlcnNpb24iOjF9.VQ_O_xSTz870tnM08PJXQOwg9OsNNwI_HVX4S7AuW57_FzGGyRaWSuGE5SWzRS4Tur9YP0QxV4VV0Yoaoi3IAA
232
  - task:
233
  type: summarization
234
  name: Summarization
238
  config: samsum
239
  split: test
240
  metrics:
241
+ - type: rouge
242
  value: 33.4484
243
+ name: ROUGE-1
244
  verified: true
245
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTk4Yjg1YTc4YmY0MzBiZDU4ZjFhNzI4MjZkMWU1MzBlOWNlMjQ5ODMzY2YzYzRhYjJkMGUzNmI3ZjdkMzIzZSIsInZlcnNpb24iOjF9.AqS8A1OUiM0IZFBEGirv5F3Novk8lSUYSfPc3bYWLA6t-W7wgup3qA207eGbE5j9CkDWZ7QrSG1U6Z9A0sOqAA
246
+ - type: rouge
247
  value: 10.4249
248
+ name: ROUGE-2
249
  verified: true
250
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2U4NjUyNTFmOGM5OTlhZDMyMTlmM2E4OWI2NGFiMDAyMGJjMzRjNWNlMGEyYWFmNTE5ZWMxM2I0ZGZmNWNmOCIsInZlcnNpb24iOjF9.SgJcHJ4qoRWXFvFiwv1PUutWktvsxQNynVPEv-GtBgxd6WI7o561ONyco5U-5tcyE_1SbSCJzz-L-R-q3cvoDA
251
+ - type: rouge
252
  value: 24.5802
253
+ name: ROUGE-L
254
  verified: true
255
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmQ5MDI5MzdiNGE5NDM0MmU5OThmZTBkNjkxMzg5N2IxNGVlODdhZTZhNjg3NzFjYWEyMzA3MTQxNjMyMjRkOCIsInZlcnNpb24iOjF9.Bg5dHqCcJjmxa-xGWNR5lD9g3quX7lKkH0pjiTd2xE5WiPoLLN2c0mYa2GovdW7__WnYwhhHC7es03jmvyZbCw
256
+ - type: rouge
257
  value: 29.8226
258
+ name: ROUGE-LSUM
259
  verified: true
260
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGFhOTEwNGM1MmZkNDk2ZjQ1Y2MyNjM3MGI5MGY3MWVkM2I0MjU2NWFiYmEwMjE4MTJlZWIwOGQ2MjQ3YjgzYSIsInZlcnNpb24iOjF9.W_aQKs10oXQdKEczJBGM3iiwJgb-VaXTpyA3sGof5WbhHf9vITAQA-xvynh5LgKtXQ1zjx737hnHgjEsu_Y0Cw
261
+ - type: loss
262
  value: 4.176078796386719
263
+ name: loss
264
  verified: true
265
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2JhODQ5YTZkNDZkZGYyNGU2MzkxMWU5MTEwMGM2YmVjZTA5YzI5NTMxMDNhYjhlOTAxMzFiMDYwYmM0MjEzZCIsInZlcnNpb24iOjF9.OvZrPBOR5jhkoTGBgsInkH7j3_xpacXHDoT7UIXEnyXzadfBO-O-K6fjalLNZw8wSkbjHIFcL_6S_qTTxPsNAQ
266
+ - type: gen_len
267
  value: 65.4005
268
+ name: gen_len
269
  verified: true
270
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2NhYjc3ZjQzNDEwYmMzOTM0ODkyZTJhZWNhNzZhYmEyZTYxMzA2YTYzMWFjOTA5ZjlhYWMzODg3NzY1ZTUwYSIsInZlcnNpb24iOjF9.vk9bgmtQFeRwdY3VXjtrJr_5wUCIeoAkI3kO0cHxhxmJo6RvUnyXiut72FuB-mlLZvqgiNkaZ-u_bh0Z3DjuCw
271
  - task:
272
  type: summarization
273
  name: Summarization
277
  config: default
278
  split: test
279
  metrics:
280
+ - type: rouge
281
  value: 40.5843
282
+ name: ROUGE-1
283
  verified: true
284
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTVjMDkyMWZjYTQ0NzgzNGUxZjNiMTg3NjU1MWJlNTQ2MWQ1NjE1MDk1OTU4ZjJiNGQ5ODg3Y2VlMWUyMzllNyIsInZlcnNpb24iOjF9.OhqBcVIuHk7fzmdrsWMvUe1bLeVMZVstZUoZpP7C1vR-3aIDl7r6eBmPrt5w-KcNq5p4teNPBsq7oKzbd5ZgDQ
285
+ - type: rouge
286
  value: 17.3401
287
+ name: ROUGE-2
288
  verified: true
289
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGQxYmQzMmE0OTcyNTM5NmMwNjIxNzYxZDcwMDFkYzJkOWY4YWY3NTdhZGRhZDdlMDAxNzcwODQ5OGM3Mzc1MCIsInZlcnNpb24iOjF9.Pksn25EEqvmx757N7Swrd4yXc_xU7-AMN9yNe8lrbBa-l1LoI_2PUASvnjML4f705cfuyMAfb0FkFp5WfER2AA
290
+ - type: rouge
291
  value: 25.1256
292
+ name: ROUGE-L
293
  verified: true
294
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjhjYzI5MDBiMjk2NTY3MDNmZTdiOGYwMTRlYjIwZjAwMjdlNTAyYzdhYTJlODQ4MjYzYmQ3MjRlYTA2YzhhZSIsInZlcnNpb24iOjF9.1jPepsweS2bzIqDverQzzhmhFGch7gpoEGFGqQ8zW7K10aUKWFX8lt-uZAmTa1Z5ZhzyXGBzc3dReFPhWRRJBg
295
+ - type: rouge
296
  value: 34.6619
297
+ name: ROUGE-LSUM
298
  verified: true
299
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2VkZDIxNWJjOTA0NzFjOTIwOTdjYjc1M2EyNDVjZjY2ZjY3MjIxNDk3YTc5YWExNzAwN2FhOTc1NjVhYjBkYiIsInZlcnNpb24iOjF9.8opqHSUckPohoSF9jfPTpXDz2AtDwvdMqOdIXx2kE1tkOcbLPbOBfcc8RhRR98y8S26yC6EYFhFnf03CV2ejAQ
300
+ - type: loss
301
  value: 4.792657375335693
302
+ name: loss
303
  verified: true
304
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTY5ZTRkMGU3OGVkODMzMDU5OWE1NTM5YjA4NDliZDlmNzc2NzZjNjFmNTA3M2EwY2NmN2E0MWJmZjQ5ZDliMiIsInZlcnNpb24iOjF9.KCKdk8xt2NWcMmYKV3-9eVEsFm9MqGllSMu9QCFJFIQlnyNXllHKdBLouoaGQz8IRYXvZKH8_TLDPIQx-31jAg
305
+ - type: gen_len
306
  value: 163.9394
307
+ name: gen_len
308
  verified: true
309
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzdkZDYyZGUzYmFkZmI2NjUwYmQ0MzZjMmIyZjI1YTFiMzM4OThiZjBiMzljOTVkZTgwMjA0NTE5OGM2YmFjMiIsInZlcnNpb24iOjF9.XyMZLUdkUIF32KTJMuv_bJswQCx_Tfg4Fx823cURUixSeoIKps8_a634AreZ3Z8kb7bfE_sFGh3rM9KWsMxlDw
310
  - task:
311
  type: summarization
312
  name: Summarization
316
  config: default
317
  split: test
318
  metrics:
319
+ - type: rouge
320
  value: 39.0834
321
+ name: ROUGE-1
322
  verified: true
323
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjYzMmVlMDM4MTNkMTI4MjAyMTU2YTg1ZWQwNTI1MmJlNGUwZmE1NTRmYTljZTQwY2RlMjcxOTgyZGMyYTc0ZiIsInZlcnNpb24iOjF9.6yuSr7UmsFatwqQ-mEO4gmsEtWI05kGB5Ib2pnl05H1OiPT2uUwmqdUytUw8KTx9u1jv9q0cTF1cL-n2kPEJAA
324
+ - type: rouge
325
  value: 11.4043
326
+ name: ROUGE-2
327
  verified: true
328
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMWI5N2U2ZWI1ODM2MWUwOTIzYTAzNmRhNDA2OWEzZWRjMGEzMjBmY2EwN2YyYzU1NWE0YjIyZDE3MWE0MmMxZCIsInZlcnNpb24iOjF9.wonuxbBl25TzEaHUH_E816nHJ1OSXKfkaq7eJzbLpsfeGwcDklxUSxZxRO7VBiBMaY3Qttf9ywmEIPp40HnpBA
329
+ - type: rouge
330
  value: 19.1813
331
+ name: ROUGE-L
332
  verified: true
333
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjU1NDZhN2NkMzZiZGJkODE4NDZiYjViOTZkNGMyNDlkNjBlZmFjYzU1N2IzMjFjYjY1MDU1Zjk2MzA0M2U4NyIsInZlcnNpb24iOjF9.bTCRzv3J9NiCh4aV23tAWGTvrdQCv_RS40zGwC4AJXtGS40cY7tJHYwBf9U9_rCetDBxqfjJpdaUbCAOglxLAA
334
+ - type: rouge
335
  value: 35.1581
336
+ name: ROUGE-LSUM
337
  verified: true
338
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDNhNTUyZjE4NjYxYjIzYThmMDM2YWNhM2QwYzY1ODI2ZTE3NmNjMmVhOTAzZjZlOWQwYzc1NzU2NDNjNzIxMyIsInZlcnNpb24iOjF9.cWlSbEBgrMN5D-fV_yL9geNMyMkIItcVO3wehNJPzFi3E0v1-4q8pnX-UgjLzto8X7JLi6as2V_HtZE4-C-CDw
339
+ - type: loss
340
  value: 4.654905319213867
341
+ name: loss
342
  verified: true
343
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTc5Nzk0ODhiNWUzNTAxNzk2YzZmMjU2NDliY2UzOTYyYTdmZGEyYjI5NDNhOTE0MGUxOTgxMGVjMmNhM2UyMSIsInZlcnNpb24iOjF9.eBBAebcl3AwkrjR6a8BvoSjDfpw8LWTRFjyIFHVzspvoOKVfnO8_NB_UeR_K127OwXyoZ70Z7X_aKJOe-2kTDA
344
+ - type: gen_len
345
  value: 186.2494
346
+ name: gen_len
347
  verified: true
348
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOWI2NjVlYjgwYWJiMjcyMDUzMzEwNDNjZTMxMDM0MjAzMzk1ZmIwY2Q1ZDQ2Y2M5NDBlMDEzYzFkNWEyNzJmNiIsInZlcnNpb24iOjF9.iZ1Iy7FuWL4GH7LS5EylVj5eZRC3L2ZsbYQapAkMNzR_VXPoMGvoM69Hp-kU7gW55tmz2V4Qxhvoz9cM8fciBA
349
  ---
350
 
351
  # Longformer Encoder-Decoder (LED) for Narrative-Esque Long Text Summarization