![Bart Logo](https://huggingface.co/front/assets/huggingface_logo.svg)

This repository contains the **Bart-Large-paper2slides-summarizer Model**, which has been fine-tuned on the [Automatic Slide Generation from Scientific Papers dataset](https://www.kaggle.com/datasets/andrewmvd/automatic-slide-generation-from-scientific-papers) using unsupervised learning techniques. Its primary focus is to summarize **scientific texts** with precision and accuracy.
## Model Details
The model is made available for usage and experimentation through the Hugging Face Hub.

This particular model, Bart-Large, is the larger version of the Bart model. It consists of 12 encoder and 12 decoder layers and has a total of roughly 400 million parameters.
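
For a quick sanity check, these sizes can be read off the model configuration. A minimal sketch, using the base `facebook/bart-large` checkpoint and assuming this fine-tuned model keeps the standard Bart-Large architecture:

```python
from transformers import AutoConfig

# Inspect the standard Bart-Large architecture. (Assumption: this
# fine-tuned model keeps the base checkpoint's configuration.)
config = AutoConfig.from_pretrained("facebook/bart-large")

print(config.encoder_layers)  # 12
print(config.decoder_layers)  # 12
print(config.d_model)         # 1024 (hidden size)
```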
## Dataset
The model has been fine-tuned on a slide-generation dataset, in which each example pairs input text from a scientific paper with the corresponding slide-deck content. Slide generation is the task of automatically producing informative, visually appealing slides from a source document, and fine-tuning on these pairs teaches the model to produce high-quality, slide-ready summaries of the given input.
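
Schematically, each training pair couples a passage from a paper with the matching slide content. The field names below are purely illustrative, not the dataset's actual schema:

```python
# Illustrative shape of one paper-to-slides training pair; the real
# dataset's field names and file format may differ.
example = {
    "paper_text": "We propose a sequence-to-sequence approach that ...",  # paper excerpt
    "slide_text": "- Seq2seq approach for slide generation\n- Key idea: ...",  # matching slides
}
```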
## Usage
To use this model, you can leverage the Hugging Face [Transformers](https://huggingface.co/transformers/) library. Here's an example of how to use it in Python:
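
The snippet below is a minimal sketch built on the `pipeline` API; the repository id is inferred from the model name above, so substitute the actual checkpoint id if it differs.

```python
from transformers import pipeline

# Load a summarization pipeline with this model. (The repository id is
# inferred from the model name in this README; replace it with the
# actual checkpoint id if it differs.)
summarizer = pipeline(
    "summarization",
    model="com3dian/Bart-large-paper2slides-summarizer",
)

paper_text = "..."  # paste a section of a scientific paper here

# Generate a slide-style summary of the input text.
summary = summarizer(paper_text, max_length=150, min_length=40, do_sample=False)
print(summary[0]["summary_text"])
```

Note that Bart accepts at most 1024 input tokens, so long papers are best split into sections that are summarized one at a time.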