com3dian committed
Commit 6bde4b4
1 Parent(s): fc6a957

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -20,7 +20,7 @@ pipeline_tag: summarization
 ![Bart Logo](https://huggingface.co/front/assets/huggingface_logo.svg)
 
 This repository contains the **Bart-Large-paper2slides-summarizer Model**, which has been fine-tuned on the [Automatic Slide Generation from Scientific Papers dataset](https://www.kaggle.com/datasets/andrewmvd/automatic-slide-generation-from-scientific-papers) with unsupervised learning techniques, following the algorithm from the paper '[Unsupervised Machine Translation Using Monolingual Corpora Only](https://arxiv.org/abs/1711.00043)'.
-Its primary focus is to summarize **scientific texts** with precision and accuracy, the model is parallelly trained with another model from the same contributor.
+Its primary focus is to summarize **scientific texts** with precision and accuracy; the model is trained in parallel with the [**Bart-large-paper2slides-expander**](https://huggingface.co/com3dian/Bart-large-paper2slides-expander) from the same contributor.
 
 ## Model Details
 
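Since the README's `pipeline_tag` is `summarization`, a minimal usage sketch with the `transformers` pipeline may be helpful. The Hub id below is an assumption inferred from the expander link above; the summarizer's exact repo id is not shown in this diff.

```python
# Minimal usage sketch, assuming the model is published on the Hugging Face
# Hub as "com3dian/Bart-large-paper2slides-summarizer" (id inferred from the
# expander link above; not confirmed by this diff).
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="com3dian/Bart-large-paper2slides-summarizer",
)

paper_text = (
    "Transformer-based sequence-to-sequence models have become the dominant "
    "architecture for abstractive summarization. We study how such models "
    "behave when fine-tuned to turn scientific papers into slide text."
)

# BART-large accepts at most 1024 input tokens; chunk longer papers first.
result = summarizer(paper_text, max_length=128, min_length=16, do_sample=False)
print(result[0]["summary_text"])
```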
@@ -67,7 +67,7 @@ pip install transformers
 
 ## Model Fine-tuning Details
 
-The fine-tuning process for this model involved training on the slide generation dataset using unsupervised learning techniques. Unsupervised learning refers to training a model without explicit human-labeled targets. Instead, the model learns to generate slides by maximizing the likelihood of generating the correct output given the input data.
+The fine-tuning process for this model involved training on the slide generation dataset using unsupervised learning techniques. Unsupervised learning refers to training a model without explicit human-labeled targets. Instead, the model learns to back-summarize the output of the expansion model into the original texts.
 
 The specific hyperparameters and training details used for fine-tuning this model are as follows:
 
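The `+` line above describes a back-translation-style objective. Below is a hedged sketch of one such back-summarization step; the actual training script is not part of this diff, so the base checkpoint, optimizer settings, and example text are illustrative assumptions.

```python
# Hedged sketch of one "back-summarization" training step, in the spirit of
# the back-translation objective from "Unsupervised Machine Translation Using
# Monolingual Corpora Only" (arXiv:1711.00043). The base checkpoint, learning
# rate, and example text are illustrative assumptions.
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
# Frozen partner model that expands slide-style text into paper-style text.
expander = BartForConditionalGeneration.from_pretrained(
    "com3dian/Bart-large-paper2slides-expander"
)
# Summarizer being fine-tuned (assumed to start from generic BART-large).
summarizer = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
optimizer = torch.optim.AdamW(summarizer.parameters(), lr=3e-5)

slide_text = "BART: denoising sequence-to-sequence pretraining for generation."

# Step 1: the expander produces a synthetic, longer "paper" passage.
with torch.no_grad():
    expander_inputs = tokenizer(slide_text, return_tensors="pt")
    expanded_ids = expander.generate(**expander_inputs, max_new_tokens=128)
expanded_text = tokenizer.decode(expanded_ids[0], skip_special_tokens=True)

# Step 2: train the summarizer to reconstruct the original slide text from
# the synthetic expansion (a reconstruction / back-summarization loss).
inputs = tokenizer(expanded_text, return_tensors="pt", truncation=True)
labels = tokenizer(slide_text, return_tensors="pt").input_ids
loss = summarizer(**inputs, labels=labels).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

Keeping the expander frozen during this step is also an assumption; the two models could equally be trained in alternation, as in the cited paper.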