# Fine-tuned Longformer for Summarization of Machine Learning Articles

## Model Details

- GitHub: https://github.com/Bakhitovd/led-base-7168-ml
- Model name: bakhitovd/led-base-7168-ml
- Model type: Longformer (allenai/led-base-16384)
- Model description: This Longformer model has been fine-tuned on a focused subset of the arXiv portion of the scientific papers dataset, specifically targeting articles about machine learning. It aims to generate accurate and consistent summaries of machine learning research papers.

## Intended Use

This model is intended for text summarization, specifically summarizing machine learning research papers.

## How to Use

```python
import torch
from transformers import LEDTokenizer, LEDForConditionalGeneration

tokenizer = LEDTokenizer.from_pretrained("bakhitovd/led-base-7168-ml")
model = LEDForConditionalGeneration.from_pretrained("bakhitovd/led-base-7168-ml").to("cuda")
```
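Note that the tokenizer simply truncates anything past `max_length`. For articles longer than the encoder window, one alternative is to split the token sequence into overlapping windows and summarize each piece. A minimal sketch of such a splitter — a hypothetical helper, not part of this model card, with plain Python lists standing in for token IDs:

```python
def chunk_tokens(tokens, window=7168, overlap=256):
    """Split a token list into overlapping windows for piecewise summarization."""
    if window <= overlap:
        raise ValueError("window must exceed overlap")
    step = window - overlap
    return [tokens[i:i + window] for i in range(0, max(len(tokens) - overlap, 1), step)]

# Example: 10 "tokens" into windows of 4 with an overlap of 1
chunk_tokens(list(range(10)), window=4, overlap=1)
# → [[0, 1, 2, 3], [3, 4, 5, 6], [6, 7, 8, 9]]
```

The overlap carries a little context across chunk boundaries; the per-chunk summaries can then be concatenated or summarized again.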

## Use the model for summarization

```python
article = "... long document ..."
inputs_dict = tokenizer(article, padding="max_length", max_length=16384, return_tensors="pt", truncation=True)
input_ids = inputs_dict.input_ids.to("cuda")
global_attention_mask = torch.zeros_like(input_ids)
global_attention_mask[:, 0] = 1  # LED expects global attention on the first token
summary_ids = model.generate(input_ids, global_attention_mask=global_attention_mask)
summary = tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0]
```