---
tags:
- summarization
- summary
- booksum
- long-document
- long-form
license:
- apache-2.0
- bsd-3-clause
datasets:
- kmfoda/booksum
- big_patent
metrics:
- rouge
widget:
- text: large earthquakes along a given fault segment do not occur at random intervals
because it takes time to accumulate the strain energy for the rupture. The rates
at which tectonic plates move and accumulate strain at their boundaries are approximately
uniform. Therefore, in first approximation, one may expect that large ruptures
of the same fault segment will occur at approximately constant time intervals.
If subsequent main shocks have different amounts of slip across the fault, then
the recurrence time may vary, and the basic idea of periodic mainshocks must be
modified. For great plate boundary ruptures the length and slip often vary by
a factor of 2. Along the southern segment of the San Andreas fault the recurrence
interval is 145 years with variations of several decades. The smaller the standard
deviation of the average recurrence interval, the more specific could be the long
term prediction of a future mainshock.
example_title: earthquakes
- text: " A typical feed-forward neural field algorithm. Spatiotemporal coordinates\
\ are fed into a neural network that predicts values in the reconstructed domain.\
\ Then, this domain is mapped to the sensor domain where sensor measurements are\
\ available as supervision. Class and Section Problems Addressed Generalization\
\ (Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid\
\ Representations (Section 3) Computation & memory efficiency, representation\
\ capacity, editability: Forward Maps (Section 4) Inverse problems Network Architecture\
\ (Section 5) Spectral bias, integration & derivatives. Manipulating Neural Fields\
\ (Section 6) Edit ability, constraints, regularization. Table 2: The five classes\
\ of techniques in the neural field toolbox each addresses problems that arise\
\ in learning, inference, and control. (Section 3). We can supervise reconstruction\
\ via differentiable forward maps that transform Or project our domain (e.g, 3D\
\ reconstruction via 2D images; Section 4) With appropriate network architecture\
\ choices, we can overcome neural network spectral biases (blurriness) and efficiently\
\ compute derivatives and integrals (Section 5). Finally, we can manipulate neural\
\ fields to add constraints and regularizations, and to achieve editable representations\
\ (Section 6). Collectively, these classes constitute a 'toolbox' of techniques\
\ to help solve problems with neural fields There are three components in a conditional\
\ neural field: (1) An encoder or inference function \u20AC that outputs the conditioning\
\ latent variable 2 given an observation 0 E(0) =2. 2 is typically a low-dimensional\
\ vector, and is often referred to aS a latent code Or feature code_ (2) A mapping\
\ function 4 between Z and neural field parameters O: Y(z) = O; (3) The neural\
\ field itself $. The encoder \u20AC finds the most probable z given the observations\
\ O: argmaxz P(2/0). The decoder maximizes the inverse conditional probability\
\ to find the most probable 0 given Z: arg- max P(Olz). We discuss different encoding\
\ schemes with different optimality guarantees (Section 2.1.1), both global and\
\ local conditioning (Section 2.1.2), and different mapping functions Y (Section\
\ 2.1.3) 2. Generalization Suppose we wish to estimate a plausible 3D surface\
\ shape given a partial or noisy point cloud. We need a suitable prior over the\
\ sur- face in its reconstruction domain to generalize to the partial observations.\
\ A neural network expresses a prior via the function space of its architecture\
\ and parameters 0, and generalization is influenced by the inductive bias of\
\ this function space (Section 5)."
example_title: scientific paper
- text: 'Is a else or outside the cob and tree written being of early client rope
and you have is for good reasons. On to the ocean in Orange for time. By''s the
aggregate we can bed it yet. Why this please pick up on a sort is do and also
M Getoi''s nerocos and do rain become you to let so is his brother is made in
use and Mjulia''s''s the lay major is aging Masastup coin present sea only of
Oosii rooms set to you We do er do we easy this private oliiishs lonthen might
be okay. Good afternoon everybody. Welcome to this lecture of Computational Statistics.
As you can see, I''m not socially my name is Michael Zelinger. I''m one of the
task for this class and you might have already seen me in the first lecture where
I made a quick appearance. I''m also going to give the tortillas in the last third
of this course. So to give you a little bit about me, I''m a old student here
with better Bulman and my research centres on casual inference applied to biomedical
disasters, so that could be genomics or that could be hospital data. If any of
you is interested in writing a bachelor thesis, a semester paper may be mastathesis
about this topic feel for reach out to me. you have my name on models and my email
address you can find in the directory I''d Be very happy to talk about it. you
do not need to be sure about it, we can just have a chat. So with that said, let''s
get on with the lecture. There''s an exciting topic today I''m going to start
by sharing some slides with you and later on during the lecture we''ll move to
the paper. So bear with me for a few seconds. Well, the projector is starting
up. Okay, so let''s get started. Today''s topic is a very important one. It''s
about a technique which really forms one of the fundamentals of data science,
machine learning, and any sort of modern statistics. It''s called cross validation.
I know you really want to understand this topic I Want you to understand this
and frankly, nobody''s gonna leave Professor Mineshousen''s class without understanding
cross validation. So to set the stage for this, I Want to introduce you to the
validation problem in computational statistics. So the problem is the following:
You trained a model on available data. You fitted your model, but you know the
training data you got could always have been different and some data from the
environment. Maybe it''s a random process. You do not really know what it is,
but you know that somebody else who gets a different batch of data from the same
environment they would get slightly different training data and you do not care
that your method performs as well. On this training data. you want to to perform
well on other data that you have not seen other data from the same environment.
So in other words, the validation problem is you want to quantify the performance
of your model on data that you have not seen. So how is this even possible? How
could you possibly measure the performance on data that you do not know The solution
to? This is the following realization is that given that you have a bunch of data,
you were in charge. You get to control how much that your model sees. It works
in the following way: You can hide data firms model. Let''s say you have a training
data set which is a bunch of doubtless so X eyes are the features those are typically
hide and national vector. It''s got more than one dimension for sure. And the
why why eyes. Those are the labels for supervised learning. As you''ve seen before,
it''s the same set up as we have in regression. And so you have this training
data and now you choose that you only use some of those data to fit your model.
You''re not going to use everything, you only use some of it the other part you
hide from your model. And then you can use this hidden data to do validation from
the point of you of your model. This hidden data is complete by unseen. In other
words, we solve our problem of validation.'
example_title: transcribed audio - lecture
- text: "Transformer-based models have shown to be very useful for many NLP tasks.\
\ However, a major limitation of transformers-based models is its O(n^2)O(n 2)\
\ time & memory complexity (where nn is sequence length). Hence, it's computationally\
\ very expensive to apply transformer-based models on long sequences n > 512n>512.\
\ Several recent papers, e.g. Longformer, Performer, Reformer, Clustered attention\
\ try to remedy this problem by approximating the full attention matrix. You can\
\ checkout \U0001F917's recent blog post in case you are unfamiliar with these\
\ models.\nBigBird (introduced in paper) is one of such recent models to address\
\ this issue. BigBird relies on block sparse attention instead of normal attention\
\ (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a\
\ much lower computational cost compared to BERT. It has achieved SOTA on various\
\ tasks involving very long sequences such as long documents summarization, question-answering\
\ with long contexts.\nBigBird RoBERTa-like model is now available in \U0001F917\
Transformers. The goal of this post is to give the reader an in-depth understanding\
\ of big bird implementation & ease one's life in using BigBird with \U0001F917\
Transformers. But, before going into more depth, it is important to remember that\
\ the BigBird's attention is an approximation of BERT's full attention and therefore\
\ does not strive to be better than BERT's full attention, but rather to be more\
\ efficient. It simply allows to apply transformer-based models to much longer\
\ sequences since BERT's quadratic memory requirement quickly becomes unbearable.\
\ Simply put, if we would have \u221E compute & \u221E time, BERT's attention\
\ would be preferred over block sparse attention (which we are going to discuss\
\ in this post).\nIf you wonder why we need more compute when working with longer\
\ sequences, this blog post is just right for you!\nSome of the main questions\
\ one might have when working with standard BERT-like attention include:\nDo all\
\ tokens really have to attend to all other tokens? Why not compute attention\
\ only over important tokens? How to decide what tokens are important? How to\
\ attend to just a few tokens in a very efficient way? In this blog post, we will\
\ try to answer those questions.\nWhat tokens should be attended to? We will give\
\ a practical example of how attention works by considering the sentence 'BigBird\
\ is now available in HuggingFace for extractive question answering'. In BERT-like\
\ attention, every word would simply attend to all other tokens.\nLet's think\
\ about a sensible choice of key tokens that a queried token actually only should\
\ attend to by writing some pseudo-code. Will will assume that the token available\
\ is queried and build a sensible list of key tokens to attend to.\n>>> # let's\
\ consider following sentence as an example >>> example = ['BigBird', 'is', 'now',\
\ 'available', 'in', 'HuggingFace', 'for', 'extractive', 'question', 'answering']\n\
>>> # further let's assume, we're trying to understand the representation of 'available'\
\ i.e. >>> query_token = 'available' >>> # We will initialize an empty `set` and\
\ fill up the tokens of our interest as we proceed in this section. >>> key_tokens\
\ = [] # => currently 'available' token doesn't have anything to attend Nearby\
\ tokens should be important because, in a sentence (sequence of words), the current\
\ word is highly dependent on neighboring past & future tokens. This intuition\
\ is the idea behind the concept of sliding attention."
example_title: bigbird blog intro
- text: "To be fair, you have to have a very high IQ to understand Rick and Morty.\
\ The humour is extremely subtle, and without a solid grasp of theoretical physics\
\ most of the jokes will go over a typical viewer's head. There's also Rick's\
\ nihilistic outlook, which is deftly woven into his characterisation- his personal\
\ philosophy draws heavily from Narodnaya Volya literature, for instance. The\
\ fans understand this stuff; they have the intellectual capacity to truly appreciate\
\ the depths of these jokes, to realise that they're not just funny- they say\
\ something deep about LIFE. As a consequence people who dislike Rick & Morty\
\ truly ARE idiots- of course they wouldn't appreciate, for instance, the humour\
\ in Rick's existential catchphrase 'Wubba Lubba Dub Dub,' which itself is a cryptic\
\ reference to Turgenev's Russian epic Fathers and Sons. I'm smirking right now\
\ just imagining one of those addlepated simpletons scratching their heads in\
\ confusion as Dan Harmon's genius wit unfolds itself on their television screens.\
\ What fools.. how I pity them. \U0001F602\nAnd yes, by the way, i DO have a Rick\
\ & Morty tattoo. And no, you cannot see it. It's for the ladies' eyes only- and\
\ even then they have to demonstrate that they're within 5 IQ points of my own\
\ (preferably lower) beforehand. Nothin personnel kid \U0001F60E"
example_title: Richard & Mortimer
parameters:
max_length: 64
min_length: 8
no_repeat_ngram_size: 3
early_stopping: true
repetition_penalty: 3.5
length_penalty: 0.3
encoder_no_repeat_ngram_size: 3
num_beams: 4
model-index:
- name: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
results:
- task:
type: summarization
name: Summarization
dataset:
name: kmfoda/booksum
type: kmfoda/booksum
config: kmfoda--booksum
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 23.1439
verified: true
- name: ROUGE-2
type: rouge
value: 3.2393
verified: true
- name: ROUGE-L
type: rouge
value: 12.7038
verified: true
- name: ROUGE-LSUM
type: rouge
value: 19.8101
verified: true
- name: loss
type: loss
value: 2.766307830810547
verified: true
- name: gen_len
type: gen_len
value: 63.4493
verified: true
- task:
type: summarization
name: Summarization
dataset:
name: samsum
type: samsum
config: samsum
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 26.8026
verified: true
- name: ROUGE-2
type: rouge
value: 6.0656
verified: true
- name: ROUGE-L
type: rouge
value: 20.0098
verified: true
- name: ROUGE-LSUM
type: rouge
value: 21.9115
verified: true
- name: loss
type: loss
value: 2.317471981048584
verified: true
- name: gen_len
type: gen_len
value: 19.1111
verified: true
- task:
type: summarization
name: Summarization
dataset:
name: xsum
type: xsum
config: default
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 25.2061
verified: true
- name: ROUGE-2
type: rouge
value: 4.7048
verified: true
- name: ROUGE-L
type: rouge
value: 17.8593
verified: true
- name: ROUGE-LSUM
type: rouge
value: 18.0798
verified: true
- name: loss
type: loss
value: 3.003053665161133
verified: true
- name: gen_len
type: gen_len
value: 27.4815
verified: true
- task:
type: summarization
name: Summarization
dataset:
name: cnn_dailymail
type: cnn_dailymail
config: 3.0.0
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 27.5692
verified: true
- name: ROUGE-2
type: rouge
value: 6.1264
verified: true
- name: ROUGE-L
type: rouge
value: 17.1127
verified: true
- name: ROUGE-LSUM
type: rouge
value: 23.0066
verified: true
- name: loss
type: loss
value: 2.218526601791382
verified: true
- name: gen_len
type: gen_len
value: 39.1952
verified: true
- task:
type: summarization
name: Summarization
dataset:
name: billsum
type: billsum
config: default
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 28.0632
verified: true
- name: ROUGE-2
type: rouge
value: 9.8996
verified: true
- name: ROUGE-L
type: rouge
value: 18.25
verified: true
- name: ROUGE-LSUM
type: rouge
value: 21.9053
verified: true
- name: loss
type: loss
value: 2.032966375350952
verified: true
- name: gen_len
type: gen_len
value: 48.5987
verified: true
- task:
type: summarization
name: Summarization
dataset:
name: big_patent
type: big_patent
config: y
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 34.7848
verified: true
- name: ROUGE-2
type: rouge
value: 9.7549
verified: true
- name: ROUGE-L
type: rouge
value: 22.228
verified: true
- name: ROUGE-LSUM
type: rouge
value: 28.0389
verified: true
- name: loss
type: loss
value: 1.7787292003631592
verified: true
- name: gen_len
type: gen_len
value: 71.6372
verified: true
- task:
type: summarization
name: Summarization
dataset:
name: launch/gov_report
type: launch/gov_report
config: plain_text
split: validation
metrics:
- name: ROUGE-1
type: rouge
value: 23.5925
verified: true
- name: ROUGE-2
type: rouge
value: 5.6762
verified: true
- name: ROUGE-L
type: rouge
value: 13.8108
verified: true
- name: ROUGE-LSUM
type: rouge
value: 20.2437
verified: true
- name: loss
type: loss
value: 2.6377077102661133
verified: true
- name: gen_len
type: gen_len
value: 64.1807
verified: true
---
# README - long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
- This README was added because there wasn't one
- Created 2022-07-31_12-14-50
## about
An experiment in transfer learning with [pszemraj/long-t5-tglobal-base-16384-book-summary](https://huggingface.co/pszemraj/long-t5-tglobal-base-16384-book-summary), evaluating how well the model can adapt to technical documentation via the `big_patent` dataset on Hugging Face.

This checkpoint was trained on subsection `y` of `big_patent` for approximately 400 steps with an effective batch size of 128.
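## basic usage

A minimal inference sketch using the `transformers` pipeline, reusing the generation parameters declared in the widget metadata above. The placeholder input text is an assumption for illustration; substitute your own long document (the model accepts inputs up to 16,384 tokens).

```python
from transformers import pipeline

# Load the checkpoint via the summarization pipeline.
summarizer = pipeline(
    "summarization",
    model="pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2",
)

# Placeholder input; replace with a long document (up to 16,384 tokens).
long_text = "Here is a lot of text I don't want to read. Replace me."

# Generation settings mirror the widget parameters in the card metadata.
result = summarizer(
    long_text,
    max_length=64,
    min_length=8,
    no_repeat_ngram_size=3,
    encoder_no_repeat_ngram_size=3,
    repetition_penalty=3.5,
    length_penalty=0.3,
    num_beams=4,
    early_stopping=True,
)
print(result[0]["summary_text"])
```

The `length_penalty` of 0.3 biases the beam search toward shorter summaries; raise `max_length` and `length_penalty` if you want longer output.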