jordiclive committed on
Commit 2e42f53
1 Parent(s): d60bd00

Update README.md

Files changed (1): README.md (+473 -1)

---
language:
- en
license:
- apache-2.0
- bsd-3-clause
tags:
- summarization
- led
- summary
- longformer
- booksum
- long-document
- long-form
datasets:
- kmfoda/booksum
metrics:
- rouge
widget:
- text: large earthquakes along a given fault segment do not occur at random intervals
    because it takes time to accumulate the strain energy for the rupture. The rates
    at which tectonic plates move and accumulate strain at their boundaries are approximately
    uniform. Therefore, in first approximation, one may expect that large ruptures
    of the same fault segment will occur at approximately constant time intervals.
    If subsequent main shocks have different amounts of slip across the fault, then
    the recurrence time may vary, and the basic idea of periodic mainshocks must be
    modified. For great plate boundary ruptures the length and slip often vary by
    a factor of 2. Along the southern segment of the San Andreas fault the recurrence
    interval is 145 years with variations of several decades. The smaller the standard
    deviation of the average recurrence interval, the more specific could be the long
    term prediction of a future mainshock.
  example_title: earthquakes
- text: ' A typical feed-forward neural field algorithm. Spatiotemporal coordinates
    are fed into a neural network that predicts values in the reconstructed domain.
    Then, this domain is mapped to the sensor domain where sensor measurements are
    available as supervision. Class and Section Problems Addressed Generalization
    (Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid
    Representations (Section 3) Computation & memory efficiency, representation capacity,
    editability: Forward Maps (Section 4) Inverse problems Network Architecture (Section
    5) Spectral bias, integration & derivatives. Manipulating Neural Fields (Section
    6) Edit ability, constraints, regularization. Table 2: The five classes of techniques
    in the neural field toolbox each addresses problems that arise in learning, inference,
    and control. (Section 3). We can supervise reconstruction via differentiable forward
    maps that transform Or project our domain (e.g, 3D reconstruction via 2D images;
    Section 4) With appropriate network architecture choices, we can overcome neural
    network spectral biases (blurriness) and efficiently compute derivatives and integrals
    (Section 5). Finally, we can manipulate neural fields to add constraints and regularizations,
    and to achieve editable representations (Section 6). Collectively, these classes
    constitute a ''toolbox'' of techniques to help solve problems with neural fields
    There are three components in a conditional neural field: (1) An encoder or inference
    function € that outputs the conditioning latent variable 2 given an observation
    0 E(0) =2. 2 is typically a low-dimensional vector, and is often referred to aS
    a latent code Or feature code_ (2) A mapping function 4 between Z and neural field
    parameters O: Y(z) = O; (3) The neural field itself $. The encoder € finds the
    most probable z given the observations O: argmaxz P(2/0). The decoder maximizes
    the inverse conditional probability to find the most probable 0 given Z: arg-
    max P(Olz). We discuss different encoding schemes with different optimality guarantees
    (Section 2.1.1), both global and local conditioning (Section 2.1.2), and different
    mapping functions Y (Section 2.1.3) 2. Generalization Suppose we wish to estimate
    a plausible 3D surface shape given a partial or noisy point cloud. We need a suitable
    prior over the sur- face in its reconstruction domain to generalize to the partial
    observations. A neural network expresses a prior via the function space of its
    architecture and parameters 0, and generalization is influenced by the inductive
    bias of this function space (Section 5).'
  example_title: scientific paper
- text: ' the big variety of data coming from diverse sources is one of the key properties
    of the big data phenomenon. It is, therefore, beneficial to understand how data
    is generated in various environments and scenarios, before looking at what should
    be done with this data and how to design the best possible architecture to accomplish
    this The evolution of IT architectures, described in Chapter 2, means that the
    data is no longer processed by a few big monolith systems, but rather by a group
    of services In parallel to the processing layer, the underlying data storage has
    also changed and became more distributed This, in turn, required a significant
    paradigm shift as the traditional approach to transactions (ACID) could no longer
    be supported. On top of this, cloud computing is becoming a major approach with
    the benefits of reducing costs and providing on-demand scalability but at the
    same time introducing concerns about privacy, data ownership, etc In the meantime
    the Internet continues its exponential growth: Every day both structured and unstructured
    data is published and available for processing: To achieve competitive advantage
    companies have to relate their corporate resources to external services, e.g.
    financial markets, weather forecasts, social media, etc While several of the sites
    provide some sort of API to access the data in a more orderly fashion; countless
    sources require advanced web mining and Natural Language Processing (NLP) processing
    techniques: Advances in science push researchers to construct new instruments
    for observing the universe O conducting experiments to understand even better
    the laws of physics and other domains. Every year humans have at their disposal
    new telescopes, space probes, particle accelerators, etc These instruments generate
    huge streams of data, which need to be stored and analyzed. The constant drive
    for efficiency in the industry motivates the introduction of new automation techniques
    and process optimization: This could not be done without analyzing the precise
    data that describe these processes. As more and more human tasks are automated,
    machines provide rich data sets, which can be analyzed in real-time to drive efficiency
    to new levels. Finally, it is now evident that the growth of the Internet of Things
    is becoming a major source of data. More and more of the devices are equipped
    with significant computational power and can generate a continuous data stream
    from their sensors. In the subsequent sections of this chapter, we will look at
    the domains described above to see what they generate in terms of data sets. We
    will compare the volumes but will also look at what is characteristic and important
    from their respective points of view. 3.1 The Internet is undoubtedly the largest
    database ever created by humans. While several well described; cleaned, and structured
    data sets have been made available through this medium, most of the resources
    are of an ambiguous, unstructured, incomplete or even erroneous nature. Still,
    several examples in the areas such as opinion mining, social media analysis, e-governance,
    etc, clearly show the potential lying in these resources. Those who can successfully
    mine and interpret the Internet data can gain unique insight and competitive advantage
    in their business An important area of data analytics on the edge of corporate
    IT and the Internet is Web Analytics.'
  example_title: data science textbook
- text: 'Transformer-based models have shown to be very useful for many NLP tasks.
    However, a major limitation of transformers-based models is its O(n^2)O(n 2) time
    & memory complexity (where nn is sequence length). Hence, it''s computationally
    very expensive to apply transformer-based models on long sequences n > 512n>512.
    Several recent papers, e.g. Longformer, Performer, Reformer, Clustered attention
    try to remedy this problem by approximating the full attention matrix. You can
    checkout 🤗''s recent blog post in case you are unfamiliar with these models.

    BigBird (introduced in paper) is one of such recent models to address this issue.
    BigBird relies on block sparse attention instead of normal attention (i.e. BERT''s
    attention) and can handle sequences up to a length of 4096 at a much lower computational
    cost compared to BERT. It has achieved SOTA on various tasks involving very long
    sequences such as long documents summarization, question-answering with long contexts.

    BigBird RoBERTa-like model is now available in 🤗Transformers. The goal of this
    post is to give the reader an in-depth understanding of big bird implementation
    & ease one''s life in using BigBird with 🤗Transformers. But, before going into
    more depth, it is important to remember that the BigBird''s attention is an approximation
    of BERT''s full attention and therefore does not strive to be better than BERT''s
    full attention, but rather to be more efficient. It simply allows to apply transformer-based
    models to much longer sequences since BERT''s quadratic memory requirement quickly
    becomes unbearable. Simply put, if we would have ∞ compute & ∞ time, BERT''s attention
    would be preferred over block sparse attention (which we are going to discuss
    in this post).

    If you wonder why we need more compute when working with longer sequences, this
    blog post is just right for you!

    Some of the main questions one might have when working with standard BERT-like
    attention include:

    Do all tokens really have to attend to all other tokens? Why not compute attention
    only over important tokens? How to decide what tokens are important? How to attend
    to just a few tokens in a very efficient way? In this blog post, we will try to
    answer those questions.

    What tokens should be attended to? We will give a practical example of how attention
    works by considering the sentence ''BigBird is now available in HuggingFace for
    extractive question answering''. In BERT-like attention, every word would simply
    attend to all other tokens.

    Let''s think about a sensible choice of key tokens that a queried token actually
    only should attend to by writing some pseudo-code. Will will assume that the token
    available is queried and build a sensible list of key tokens to attend to.

    >>> # let''s consider following sentence as an example >>> example = [''BigBird'',
    ''is'', ''now'', ''available'', ''in'', ''HuggingFace'', ''for'', ''extractive'',
    ''question'', ''answering'']

    >>> # further let''s assume, we''re trying to understand the representation of
    ''available'' i.e. >>> query_token = ''available'' >>> # We will initialize an
    empty `set` and fill up the tokens of our interest as we proceed in this section.
    >>> key_tokens = [] # => currently ''available'' token doesn''t have anything
    to attend Nearby tokens should be important because, in a sentence (sequence of
    words), the current word is highly dependent on neighboring past & future tokens.
    This intuition is the idea behind the concept of sliding attention.'
  example_title: bigbird blog intro
- text: 'The majority of available text summarization datasets include short-form
    source documents that lack long-range causal and temporal dependencies, and often
    contain strong layout and stylistic biases. While relevant, such datasets will
    offer limited challenges for future generations of text summarization systems.
    We address these issues by introducing BookSum, a collection of datasets for long-form
    narrative summarization. Our dataset covers source documents from the literature
    domain, such as novels, plays and stories, and includes highly abstractive, human
    written summaries on three levels of granularity of increasing difficulty: paragraph-,
    chapter-, and book-level. The domain and structure of our dataset poses a unique
    set of challenges for summarization systems, which include: processing very long
    documents, non-trivial causal and temporal dependencies, and rich discourse structures.
    To facilitate future work, we trained and evaluated multiple extractive and abstractive
    summarization models as baselines for our dataset.'
  example_title: BookSum Abstract
inference:
  parameters:
    max_length: 64
    min_length: 8
    no_repeat_ngram_size: 3
    early_stopping: true
    repetition_penalty: 3.5
    length_penalty: 0.3
    encoder_no_repeat_ngram_size: 3
    num_beams: 4
model-index:
- name: pszemraj/led-large-book-summary
  results:
  - task:
      type: summarization
      name: Summarization
    dataset:
      name: kmfoda/booksum
      type: kmfoda/booksum
      config: kmfoda--booksum
      split: test
    metrics:
    - type: rouge
      value: 31.7308
      name: ROUGE-1
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjJmZjMxYTY0OGU3MzNjNmIzNmYyODNlNDg2ZGRhZDAzNTMwMDM5YWMxODc1OTc1ZWE3MzM2OTg1ODFhZDBkNCIsInZlcnNpb24iOjF9.B8BCKgySYVZW910_1zP0LfCpQYJbAe6loyWut76JlgZb2kV1_x9ybqtNESX0ka-lNqhYyXUNDpuS-7pTmsJVDg
    - type: rouge
      value: 5.3311
      name: ROUGE-2
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzViMmY4ODFjYTc5ODk5MmRhMDQ3ZDRiYWQwMDg0OTk3ZTA4NDAxYTNiNDgyMmI4NDA3ZDMwYWViOTBkODBjNyIsInZlcnNpb24iOjF9.MOhJLDcgvv93mVFL1igIgIiTAH3b2Xa4gmBObq7RF44Mmu8Kxtd1KP7rOlDVFOrtrsooGPGsyE1GMCQ2kqeMDg
    - type: rouge
      value: 16.1465
      name: ROUGE-L
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzNjMzEwMTliZGE3ZmQ4M2UxMDAyMTY3YzJjZmMyMDYyN2YyNDM0N2VhNzI1MDc1YTg4MTRjMmEzNjVkNTk1NCIsInZlcnNpb24iOjF9.XLJ-DVKiYLlbw5E5rWADKbzUzf5fNHhlTCWPCC5dU4NI9Yeh76aR7TPt36ZzLDwTBknnR8KHqlaF8F8YAvBUAg
    - type: rouge
      value: 29.0883
      name: ROUGE-LSUM
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTcwNzEwMmE5NjQxZTkzYmQyZDZmNzllYzYyNGI5OTMyNWMwNjdiM2I2YmM5YjdmY2E5OWQ3OTk3ZDA1MTc3YyIsInZlcnNpb24iOjF9.d6rFxjCB6RJNI_pn2DNNSjuZe4rdvj0RatkaTJRp5lP0F_AFfU5Zn9zRWzZJV7V-xMauIc4UhfdoLp9r_-CABA
    - type: loss
      value: 4.815707206726074
      name: loss
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTMwMTgxMmJkODY3MjkzOWJhMzJhOTIxMWVkODhjZmM0MWUzMWQ1N2JkZjRhOTQxNmU1YWVjYzQ0MDNlZWI3OSIsInZlcnNpb24iOjF9.mkBQHYhYFfDV6F4klXGJ1dSsF-pbCs-6F9zcw6IYznwmXUjtk7m5J4Zt4JAju5LKz4YizvEcUCl_L0WddnfvDA
    - type: gen_len
      value: 154.9036
      name: gen_len
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTc0ZmM1ZDM4MDE0MzY3MDM3OWJhNDkzZjJkZDdkMjU5M2JmMDJjYTIxODA1OTllNmY5ZWQzZDlmNWFiYzk4NiIsInZlcnNpb24iOjF9.VQ_O_xSTz870tnM08PJXQOwg9OsNNwI_HVX4S7AuW57_FzGGyRaWSuGE5SWzRS4Tur9YP0QxV4VV0Yoaoi3IAA
  - task:
      type: summarization
      name: Summarization
    dataset:
      name: samsum
      type: samsum
      config: samsum
      split: test
    metrics:
    - type: rouge
      value: 33.4484
      name: ROUGE-1
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTk4Yjg1YTc4YmY0MzBiZDU4ZjFhNzI4MjZkMWU1MzBlOWNlMjQ5ODMzY2YzYzRhYjJkMGUzNmI3ZjdkMzIzZSIsInZlcnNpb24iOjF9.AqS8A1OUiM0IZFBEGirv5F3Novk8lSUYSfPc3bYWLA6t-W7wgup3qA207eGbE5j9CkDWZ7QrSG1U6Z9A0sOqAA
    - type: rouge
      value: 10.4249
      name: ROUGE-2
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2U4NjUyNTFmOGM5OTlhZDMyMTlmM2E4OWI2NGFiMDAyMGJjMzRjNWNlMGEyYWFmNTE5ZWMxM2I0ZGZmNWNmOCIsInZlcnNpb24iOjF9.SgJcHJ4qoRWXFvFiwv1PUutWktvsxQNynVPEv-GtBgxd6WI7o561ONyco5U-5tcyE_1SbSCJzz-L-R-q3cvoDA
    - type: rouge
      value: 24.5802
      name: ROUGE-L
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmQ5MDI5MzdiNGE5NDM0MmU5OThmZTBkNjkxMzg5N2IxNGVlODdhZTZhNjg3NzFjYWEyMzA3MTQxNjMyMjRkOCIsInZlcnNpb24iOjF9.Bg5dHqCcJjmxa-xGWNR5lD9g3quX7lKkH0pjiTd2xE5WiPoLLN2c0mYa2GovdW7__WnYwhhHC7es03jmvyZbCw
    - type: rouge
      value: 29.8226
      name: ROUGE-LSUM
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGFhOTEwNGM1MmZkNDk2ZjQ1Y2MyNjM3MGI5MGY3MWVkM2I0MjU2NWFiYmEwMjE4MTJlZWIwOGQ2MjQ3YjgzYSIsInZlcnNpb24iOjF9.W_aQKs10oXQdKEczJBGM3iiwJgb-VaXTpyA3sGof5WbhHf9vITAQA-xvynh5LgKtXQ1zjx737hnHgjEsu_Y0Cw
    - type: loss
      value: 4.176078796386719
      name: loss
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2JhODQ5YTZkNDZkZGYyNGU2MzkxMWU5MTEwMGM2YmVjZTA5YzI5NTMxMDNhYjhlOTAxMzFiMDYwYmM0MjEzZCIsInZlcnNpb24iOjF9.OvZrPBOR5jhkoTGBgsInkH7j3_xpacXHDoT7UIXEnyXzadfBO-O-K6fjalLNZw8wSkbjHIFcL_6S_qTTxPsNAQ
    - type: gen_len
      value: 65.4005
      name: gen_len
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2NhYjc3ZjQzNDEwYmMzOTM0ODkyZTJhZWNhNzZhYmEyZTYxMzA2YTYzMWFjOTA5ZjlhYWMzODg3NzY1ZTUwYSIsInZlcnNpb24iOjF9.vk9bgmtQFeRwdY3VXjtrJr_5wUCIeoAkI3kO0cHxhxmJo6RvUnyXiut72FuB-mlLZvqgiNkaZ-u_bh0Z3DjuCw
  - task:
      type: summarization
      name: Summarization
    dataset:
      name: billsum
      type: billsum
      config: default
      split: test
    metrics:
    - type: rouge
      value: 40.5843
      name: ROUGE-1
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTVjMDkyMWZjYTQ0NzgzNGUxZjNiMTg3NjU1MWJlNTQ2MWQ1NjE1MDk1OTU4ZjJiNGQ5ODg3Y2VlMWUyMzllNyIsInZlcnNpb24iOjF9.OhqBcVIuHk7fzmdrsWMvUe1bLeVMZVstZUoZpP7C1vR-3aIDl7r6eBmPrt5w-KcNq5p4teNPBsq7oKzbd5ZgDQ
    - type: rouge
      value: 17.3401
      name: ROUGE-2
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGQxYmQzMmE0OTcyNTM5NmMwNjIxNzYxZDcwMDFkYzJkOWY4YWY3NTdhZGRhZDdlMDAxNzcwODQ5OGM3Mzc1MCIsInZlcnNpb24iOjF9.Pksn25EEqvmx757N7Swrd4yXc_xU7-AMN9yNe8lrbBa-l1LoI_2PUASvnjML4f705cfuyMAfb0FkFp5WfER2AA
    - type: rouge
      value: 25.1256
      name: ROUGE-L
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjhjYzI5MDBiMjk2NTY3MDNmZTdiOGYwMTRlYjIwZjAwMjdlNTAyYzdhYTJlODQ4MjYzYmQ3MjRlYTA2YzhhZSIsInZlcnNpb24iOjF9.1jPepsweS2bzIqDverQzzhmhFGch7gpoEGFGqQ8zW7K10aUKWFX8lt-uZAmTa1Z5ZhzyXGBzc3dReFPhWRRJBg
    - type: rouge
      value: 34.6619
      name: ROUGE-LSUM
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2VkZDIxNWJjOTA0NzFjOTIwOTdjYjc1M2EyNDVjZjY2ZjY3MjIxNDk3YTc5YWExNzAwN2FhOTc1NjVhYjBkYiIsInZlcnNpb24iOjF9.8opqHSUckPohoSF9jfPTpXDz2AtDwvdMqOdIXx2kE1tkOcbLPbOBfcc8RhRR98y8S26yC6EYFhFnf03CV2ejAQ
    - type: loss
      value: 4.792657375335693
      name: loss
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTY5ZTRkMGU3OGVkODMzMDU5OWE1NTM5YjA4NDliZDlmNzc2NzZjNjFmNTA3M2EwY2NmN2E0MWJmZjQ5ZDliMiIsInZlcnNpb24iOjF9.KCKdk8xt2NWcMmYKV3-9eVEsFm9MqGllSMu9QCFJFIQlnyNXllHKdBLouoaGQz8IRYXvZKH8_TLDPIQx-31jAg
    - type: gen_len
      value: 163.9394
      name: gen_len
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzdkZDYyZGUzYmFkZmI2NjUwYmQ0MzZjMmIyZjI1YTFiMzM4OThiZjBiMzljOTVkZTgwMjA0NTE5OGM2YmFjMiIsInZlcnNpb24iOjF9.XyMZLUdkUIF32KTJMuv_bJswQCx_Tfg4Fx823cURUixSeoIKps8_a634AreZ3Z8kb7bfE_sFGh3rM9KWsMxlDw
  - task:
      type: summarization
      name: Summarization
    dataset:
      name: multi_news
      type: multi_news
      config: default
      split: test
    metrics:
    - type: rouge
      value: 39.0834
      name: ROUGE-1
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjYzMmVlMDM4MTNkMTI4MjAyMTU2YTg1ZWQwNTI1MmJlNGUwZmE1NTRmYTljZTQwY2RlMjcxOTgyZGMyYTc0ZiIsInZlcnNpb24iOjF9.6yuSr7UmsFatwqQ-mEO4gmsEtWI05kGB5Ib2pnl05H1OiPT2uUwmqdUytUw8KTx9u1jv9q0cTF1cL-n2kPEJAA
    - type: rouge
      value: 11.4043
      name: ROUGE-2
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMWI5N2U2ZWI1ODM2MWUwOTIzYTAzNmRhNDA2OWEzZWRjMGEzMjBmY2EwN2YyYzU1NWE0YjIyZDE3MWE0MmMxZCIsInZlcnNpb24iOjF9.wonuxbBl25TzEaHUH_E816nHJ1OSXKfkaq7eJzbLpsfeGwcDklxUSxZxRO7VBiBMaY3Qttf9ywmEIPp40HnpBA
    - type: rouge
      value: 19.1813
      name: ROUGE-L
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjU1NDZhN2NkMzZiZGJkODE4NDZiYjViOTZkNGMyNDlkNjBlZmFjYzU1N2IzMjFjYjY1MDU1Zjk2MzA0M2U4NyIsInZlcnNpb24iOjF9.bTCRzv3J9NiCh4aV23tAWGTvrdQCv_RS40zGwC4AJXtGS40cY7tJHYwBf9U9_rCetDBxqfjJpdaUbCAOglxLAA
    - type: rouge
      value: 35.1581
      name: ROUGE-LSUM
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDNhNTUyZjE4NjYxYjIzYThmMDM2YWNhM2QwYzY1ODI2ZTE3NmNjMmVhOTAzZjZlOWQwYzc1NzU2NDNjNzIxMyIsInZlcnNpb24iOjF9.cWlSbEBgrMN5D-fV_yL9geNMyMkIItcVO3wehNJPzFi3E0v1-4q8pnX-UgjLzto8X7JLi6as2V_HtZE4-C-CDw
    - type: loss
      value: 4.654905319213867
      name: loss
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTc5Nzk0ODhiNWUzNTAxNzk2YzZmMjU2NDliY2UzOTYyYTdmZGEyYjI5NDNhOTE0MGUxOTgxMGVjMmNhM2UyMSIsInZlcnNpb24iOjF9.eBBAebcl3AwkrjR6a8BvoSjDfpw8LWTRFjyIFHVzspvoOKVfnO8_NB_UeR_K127OwXyoZ70Z7X_aKJOe-2kTDA
    - type: gen_len
      value: 186.2494
      name: gen_len
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOWI2NjVlYjgwYWJiMjcyMDUzMzEwNDNjZTMxMDM0MjAzMzk1ZmIwY2Q1ZDQ2Y2M5NDBlMDEzYzFkNWEyNzJmNiIsInZlcnNpb24iOjF9.iZ1Iy7FuWL4GH7LS5EylVj5eZRC3L2ZsbYQapAkMNzR_VXPoMGvoM69Hp-kU7gW55tmz2V4Qxhvoz9cM8fciBA
---

# Longformer Encoder-Decoder (LED) for Narrative-Esque Long Text Summarization

<a href="https://colab.research.google.com/gist/pszemraj/3eba944ddc9fc9a4a1bfb21e83b57620/summarization-token-batching.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>

A fine-tuned version of [allenai/led-large-16384](https://huggingface.co/allenai/led-large-16384) on the `BookSum` dataset.

Goal: a model that generalizes well and is useful for summarizing long text in academic and everyday use. The result works well on a wide range of text and can handle up to 16,384 tokens per batch (_if you have the GPU memory to handle that_).

- See the Colab demo linked above or try the [demo on Spaces](https://huggingface.co/spaces/pszemraj/summarize-long-text)

> Note: the hosted inference API is set to generate a maximum of 64 tokens for runtime reasons, so summaries may be truncated (depending on the length of the input text). For best results, use Python as shown below.

---

# Usage - Basic

- Use `encoder_no_repeat_ngram_size=3` when calling the pipeline object to improve summary quality.
- This forces the model to use new vocabulary and produce an abstractive summary; otherwise, it may simply compile the best _extractive_ passages from the input.

Load the model into a pipeline object:

```python
import torch
from transformers import pipeline

hf_name = 'pszemraj/led-large-book-summary'

# use GPU 0 if available, otherwise fall back to CPU
summarizer = pipeline(
    "summarization",
    hf_name,
    device=0 if torch.cuda.is_available() else -1,
)
```

Feed the text into the pipeline object:

```python
wall_of_text = "your words here"

result = summarizer(
    wall_of_text,
    min_length=16,
    max_length=256,
    no_repeat_ngram_size=3,
    encoder_no_repeat_ngram_size=3,
    repetition_penalty=3.5,
    num_beams=4,
    early_stopping=True,
)
```
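
The pipeline returns a list of dictionaries; the generated summary is under the `summary_text` key:

```python
print(result[0]["summary_text"])
```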

**Important:** To generate the best-quality summaries, use the global attention mask when decoding, as demonstrated in [this community notebook](https://colab.research.google.com/drive/12INTTR6n64TzS4RrXZxMSXfrOd9Xzamo?usp=sharing); see the definition of `generate_answer(batch)`.
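
For reference, a minimal sketch of that pattern outside the notebook (assuming `torch` and `transformers` are installed; the generation settings mirror the pipeline example above rather than the notebook's exact values):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

hf_name = 'pszemraj/led-large-book-summary'
tokenizer = AutoTokenizer.from_pretrained(hf_name)
model = AutoModelForSeq2SeqLM.from_pretrained(hf_name)

wall_of_text = "your words here"
inputs = tokenizer(wall_of_text, return_tensors="pt", truncation=True, max_length=16384)

# LED uses windowed local attention; tokens flagged in the global attention mask
# attend to (and are attended by) every position. Flagging the first token is the
# usual convention for LED summarization.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    global_attention_mask=global_attention_mask,
    min_length=16,
    max_length=256,
    no_repeat_ngram_size=3,
    encoder_no_repeat_ngram_size=3,
    repetition_penalty=3.5,
    num_beams=4,
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```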

If you are working under compute constraints, try the base version, [`pszemraj/led-base-book-summary`](https://huggingface.co/pszemraj/led-base-book-summary).
- All generation parameters on the API here are the same as for [the base model](https://huggingface.co/pszemraj/led-base-book-summary), to allow easy comparison between versions.

## Training and evaluation data

- the [BookSum](https://arxiv.org/abs/2105.08209) dataset (this is what adds the `bsd-3-clause` license)
- During training, the input text was the text of the `chapter` and the target was the `summary_text`.
- Eval results can be found [here](https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-79c1c0d8-10905463), with metrics in the sidebar.

## Training procedure

- Training completed on the BookSum dataset for 13 total epochs
- **The final four epochs combined the training and validation sets as 'train' in an effort to increase generalization.**

### Training hyperparameters

#### Initial Three Epochs

The following hyperparameters were used during training (a sketch of the equivalent `Seq2SeqTrainingArguments` follows the list):
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
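
For illustration, these settings would look roughly like the following with 🤗 `Seq2SeqTrainingArguments` (a hypothetical reconstruction, not the original training script; `output_dir` is illustrative):

```python
from transformers import Seq2SeqTrainingArguments

# hypothetical mapping of the listed hyperparameters, not the original script
training_args = Seq2SeqTrainingArguments(
    output_dir="./led-large-book-summary",  # illustrative path
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=4,  # batch size 1 x 4 steps = total train batch size 4
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```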

#### In-between Epochs

Unfortunately, complete records are not on hand for the middle epochs; the following should be representative:

- learning_rate: 4e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 6 (in addition to prior model)

#### Final Two Epochs

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2 (in addition to prior model)


### Framework versions

- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1