com3dian committed on
Commit
1134b4d
1 Parent(s): b2d2945

Update README.md


update model card

Files changed (1): README.md (+5 −2)

README.md CHANGED
@@ -3,6 +3,9 @@ language:
 - en
 tags:
 - summarization
+widget:
+- text: 'We here recount the main elements of a classic bag-of-features model before introducing the simpler DNN-based BagNets in the next paragraph. Bag-of-feature representations can be described by analogy to bag-of-words representations. With bag-of-words, one counts the number of occurrences of words from a vocabulary in a document. This vocabulary contains important words (but not common ones like "and" or "the") and word clusters (i.e. semantically similar words like "gigantic" and "enormous" are subsumed). The counts of each word in the vocabulary are assembled as one long term vector. This is called the bag-of-words document representation because all ordering of the words is lost. Likewise, bag-of-feature representations are based on a vocabulary of visual words which represent clusters of local image features. The term vector for an image is then simply the number of occurrences of each visual word in the vocabulary. This term vector is used as an input to a classifier (e.g. SVM or MLP). Many successful image classification models have been based on this pipeline (Csurka et al., 2004; Jurie & Triggs, 2005; Zhang et al., 2007; Lazebnik et al., 2006), see O’Hara & Draper (2011) for an up-to-date overview.'
+- text: 'The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU [16], ByteNet [18] and ConvS2S [9], all of which use convolutional neural networks as basic building block, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows in the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions [12]. In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as described in section 3.2. \n Self-attention, sometimes called intra-attention is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence. Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations [4, 27, 28, 22].\n End-to-end memory networks are based on a recurrent attention mechanism instead of sequencealigned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks [34].\n To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequencealigned RNNs or convolution. In the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as [17, 18] and [9].'
 license:
 - mit
 pipeline_tag: summarization
@@ -65,7 +68,7 @@ The specific hyperparameters and training details used for fine-tuning this mode
 
 ## Model Performance
 
-The performance of the Bart-Large Slide Generation Model has been evaluated on various metrics, including slide quality, coherence, and relevance. While the model has achieved promising results during evaluation, it is essential to note that no model is perfect, and its performance may vary depending on the input data and specific use cases.
+The Bart-Large Slide Generation Model has undergone thorough human evaluation in a wide range of scientific domains, including AI, mathematics, statistics, history, geography, and climate science, to compare its performance with the [Bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) model.
 
 ## Acknowledgments
 
@@ -75,4 +78,4 @@ If you use this model or find it helpful in your work, please consider citing th
 
 ## License
 
-This model and the associated code are released under the [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0).
+This model and the associated code are released under the [MIT license](https://opensource.org/license/mit/).
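
The first widget example added above describes the classic bag-of-words pipeline: count occurrences of vocabulary words (skipping stop words, merging synonym clusters) into one fixed-order term vector, discarding word order. A minimal sketch of that counting step, using a toy vocabulary and sample sentence that are purely illustrative and not taken from the model card:

```python
from collections import Counter

# Toy vocabulary of "important" words (stop words like "and"/"the" excluded),
# with semantically similar words subsumed into one cluster, as described.
vocab_clusters = {
    "gigantic": "big", "enormous": "big", "big": "big",
    "model": "model", "models": "model",
    "image": "image",
}

def bag_of_words(document: str) -> list[int]:
    """Count vocabulary-cluster occurrences; all word ordering is lost."""
    counts = Counter(
        vocab_clusters[w] for w in document.lower().split() if w in vocab_clusters
    )
    # Assemble the per-cluster counts into one fixed-order term vector.
    clusters = sorted(set(vocab_clusters.values()))  # ["big", "image", "model"]
    return [counts[c] for c in clusters]

vec = bag_of_words("The enormous model processed a gigantic image")
# "enormous" and "gigantic" both fall into the "big" cluster, so vec == [2, 1, 1]
```

In a full bag-of-features classifier this vector would then be fed to, e.g., an SVM or MLP, exactly as the quoted passage outlines.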