---
language:
- en
tags:
- summarization
- pegasus
license: apache-2.0
datasets:
- kmfoda/booksum
metrics:
- rouge
widget:
- text: large earthquakes along a given fault segment do not occur at random intervals
    because it takes time to accumulate the strain energy for the rupture. The rates
    at which tectonic plates move and accumulate strain at their boundaries are approximately
    uniform. Therefore, in first approximation, one may expect that large ruptures
    of the same fault segment will occur at approximately constant time intervals.
    If subsequent main shocks have different amounts of slip across the fault, then
    the recurrence time may vary, and the basic idea of periodic mainshocks must be
    modified. For great plate boundary ruptures the length and slip often vary by
    a factor of 2. Along the southern segment of the San Andreas fault the recurrence
    interval is 145 years with variations of several decades. The smaller the standard
    deviation of the average recurrence interval, the more specific could be the long
    term prediction of a future mainshock.
  example_title: earthquakes
- text: " A typical feed-forward neural field algorithm. Spatiotemporal coordinates\
    \ are fed into a neural network that predicts values in the reconstructed domain.\
    \ Then, this domain is mapped to the sensor domain where sensor measurements are\
    \ available as supervision. Class and Section Problems Addressed Generalization\
    \ (Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid\
    \ Representations (Section 3) Computation & memory efficiency, representation\
    \ capacity, editability: Forward Maps (Section 4) Inverse problems Network Architecture\
    \ (Section 5) Spectral bias, integration & derivatives. Manipulating Neural Fields\
    \ (Section 6) Edit ability, constraints, regularization. Table 2: The five classes\
    \ of techniques in the neural field toolbox each addresses problems that arise\
    \ in learning, inference, and control. (Section 3). We can supervise reconstruction\
    \ via differentiable forward maps that transform Or project our domain (e.g, 3D\
    \ reconstruction via 2D images; Section 4) With appropriate network architecture\
    \ choices, we can overcome neural network spectral biases (blurriness) and efficiently\
    \ compute derivatives and integrals (Section 5). Finally, we can manipulate neural\
    \ fields to add constraints and regularizations, and to achieve editable representations\
    \ (Section 6). Collectively, these classes constitute a 'toolbox' of techniques\
    \ to help solve problems with neural fields There are three components in a conditional\
    \ neural field: (1) An encoder or inference function \u20AC that outputs the conditioning\
    \ latent variable 2 given an observation 0 E(0) =2. 2 is typically a low-dimensional\
    \ vector, and is often referred to aS a latent code Or feature code_ (2) A mapping\
    \ function 4 between Z and neural field parameters O: Y(z) = O; (3) The neural\
    \ field itself $. The encoder \u20AC finds the most probable z given the observations\
    \ O: argmaxz P(2/0). The decoder maximizes the inverse conditional probability\
    \ to find the most probable 0 given Z: arg- max P(Olz). We discuss different encoding\
    \ schemes with different optimality guarantees (Section 2.1.1), both global and\
    \ local conditioning (Section 2.1.2), and different mapping functions Y (Section\
    \ 2.1.3) 2. Generalization Suppose we wish to estimate a plausible 3D surface\
    \ shape given a partial or noisy point cloud. We need a suitable prior over the\
    \ sur- face in its reconstruction domain to generalize to the partial observations.\
    \ A neural network expresses a prior via the function space of its architecture\
    \ and parameters 0, and generalization is influenced by the inductive bias of\
    \ this function space (Section 5)."
  example_title: scientific paper
- text: ' the big variety of data coming from diverse sources is one of the key properties
    of the big data phenomenon. It is, therefore, beneficial to understand how data
    is generated in various environments and scenarios, before looking at what should
    be done with this data and how to design the best possible architecture to accomplish
    this The evolution of IT architectures, described in Chapter 2, means that the
    data is no longer processed by a few big monolith systems, but rather by a group
    of services In parallel to the processing layer, the underlying data storage has
    also changed and became more distributed This, in turn, required a significant
    paradigm shift as the traditional approach to transactions (ACID) could no longer
    be supported. On top of this, cloud computing is becoming a major approach with
    the benefits of reducing costs and providing on-demand scalability but at the
    same time introducing concerns about privacy, data ownership, etc In the meantime
    the Internet continues its exponential growth: Every day both structured and unstructured
    data is published and available for processing: To achieve competitive advantage
    companies have to relate their corporate resources to external services, e.g.
    financial markets, weather forecasts, social media, etc While several of the sites
    provide some sort of API to access the data in a more orderly fashion; countless
    sources require advanced web mining and Natural Language Processing (NLP) processing
    techniques: Advances in science push researchers to construct new instruments
    for observing the universe O conducting experiments to understand even better
    the laws of physics and other domains. Every year humans have at their disposal
    new telescopes, space probes, particle accelerators, etc These instruments generate
    huge streams of data, which need to be stored and analyzed. The constant drive
    for efficiency in the industry motivates the introduction of new automation techniques
    and process optimization: This could not be done without analyzing the precise
    data that describe these processes. As more and more human tasks are automated,
    machines provide rich data sets, which can be analyzed in real-time to drive efficiency
    to new levels. Finally, it is now evident that the growth of the Internet of Things
    is becoming a major source of data. More and more of the devices are equipped
    with significant computational power and can generate a continuous data stream
    from their sensors. In the subsequent sections of this chapter, we will look at
    the domains described above to see what they generate in terms of data sets. We
    will compare the volumes but will also look at what is characteristic and important
    from their respective points of view. 3.1 The Internet is undoubtedly the largest
    database ever created by humans. While several well described; cleaned, and structured
    data sets have been made available through this medium, most of the resources
    are of an ambiguous, unstructured, incomplete or even erroneous nature. Still,
    several examples in the areas such as opinion mining, social media analysis, e-governance,
    etc, clearly show the potential lying in these resources. Those who can successfully
    mine and interpret the Internet data can gain unique insight and competitive advantage
    in their business An important area of data analytics on the edge of corporate
    IT and the Internet is Web Analytics.'
  example_title: data science textbook
inference:
  parameters:
    max_length: 64
    no_repeat_ngram_size: 2
    encoder_no_repeat_ngram_size: 3
    repetition_penalty: 2.4
    length_penalty: 0.5
    num_beams: 4
    early_stopping: true
model-index:
- name: pszemraj/pegasus-large-book-summary
  results:
  - task:
      type: summarization
      name: Summarization
    dataset:
      name: kmfoda/booksum
      type: kmfoda/booksum
      config: kmfoda--booksum
      split: test
    metrics:
    - name: ROUGE-1
      type: rouge
      value: 29.9119
      verified: true
    - name: ROUGE-2
      type: rouge
      value: 6.1923
      verified: true
    - name: ROUGE-L
      type: rouge
      value: 14.9786
      verified: true
    - name: ROUGE-LSUM
      type: rouge
      value: 28.0299
      verified: true
    - name: loss
      type: loss
      value: 3.0345687866210938
      verified: true
    - name: gen_len
      type: gen_len
      value: 427.3899
      verified: true
---


# pszemraj/pegasus-large-book-summary

This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on the [booksum](https://github.com/salesforce/booksum) dataset.

## Model description

This is [google/pegasus-large](https://huggingface.co/google/pegasus-large) fine-tuned for abstractive summarization of long-form narrative text, using book chapters and their reference summaries from BookSum.

## Intended uses & limitations

- Standard PEGASUS has a maximum input length of 1,024 tokens, so the model only saw the first 1,024 tokens of each chapter during training and learned to produce the chapter summary from that prefix. Keep this in mind when using the model: for inputs longer than 1,024 tokens, information near the end of the text may be excluded from the summary, and the model is biased toward information presented first (see the usage sketch below, which truncates inputs accordingly).
- The model was trained on the dataset for only one epoch, but it still produces reasonable results.
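
A minimal usage sketch with the `transformers` pipeline. The generation settings simply mirror the inference parameters in the metadata above, and `text` is a placeholder for your own input; treat this as illustrative rather than the definitive way to run the model:

```python
from transformers import pipeline

# load the fine-tuned checkpoint from the Hub
summarizer = pipeline(
    "summarization",
    model="pszemraj/pegasus-large-book-summary",
)

text = "..."  # placeholder: your chapter text goes here

result = summarizer(
    text,
    truncation=True,  # inputs beyond the 1,024-token limit are cut off (see above)
    max_length=64,
    no_repeat_ngram_size=2,
    encoder_no_repeat_ngram_size=3,
    repetition_penalty=2.4,
    length_penalty=0.5,
    num_beams=4,
    early_stopping=True,
)
print(result[0]["summary_text"])
```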

## Training and evaluation data

The model was fine-tuned on [kmfoda/booksum](https://huggingface.co/datasets/kmfoda/booksum). ROUGE scores on the test split are reported in the model-index metadata above.
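
For reference, the dataset can be pulled directly from the Hub (a minimal sketch using the `datasets` library):

```python
from datasets import load_dataset

# BookSum: chapter-level book summarization data
booksum = load_dataset("kmfoda/booksum")
print(booksum)  # inspect the available splits and columns
```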

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
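
As a point of reference, here is a sketch of how these settings might map onto `Seq2SeqTrainingArguments` in the standard Hugging Face `Trainer` setup. The actual training script is not included in this repo, so the `output_dir` and the overall mapping are assumptions:

```python
from transformers import Seq2SeqTrainingArguments

# assumed reconstruction of the hyperparameters listed above;
# Adam betas (0.9, 0.999) and epsilon 1e-08 match the defaults
training_args = Seq2SeqTrainingArguments(
    output_dir="./checkpoints",      # assumed name
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=16,  # 16 x 16 = total train batch size of 256
    lr_scheduler_type="cosine_with_restarts",
    warmup_ratio=0.03,
    num_train_epochs=1,
)
```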

### Training results

ROUGE scores evaluated on the `kmfoda/booksum` test split are listed in the model-index metadata above.

### Framework versions

- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.10.3