RichardErkhov committed on
Commit 548f1e4
1 Parent(s): 6554d96

uploaded readme

Files changed (1):
  1. README.md (+130, -0)

README.md ADDED
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

bart_summarizer_model - bnb 4bits
- Model creator: https://huggingface.co/KipperDev/
- Original model: https://huggingface.co/KipperDev/bart_summarizer_model/

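This repository holds a bitsandbytes 4-bit quantization of the original checkpoint. As a minimal, hedged loading sketch (the NF4/fp16 settings and the repo id are illustrative assumptions, not values published with this quantization):

```python
# Minimal sketch of a bitsandbytes 4-bit load. Assumes a CUDA GPU and
# `pip install transformers accelerate bitsandbytes`.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, BitsAndBytesConfig

model_id = "KipperDev/bart_summarizer_model"  # or the id of this quantized repo

# Quantize to 4-bit on the fly while loading (NF4 weights, fp16 compute).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```
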
Original model description:
---
license: mit
datasets:
- big_patent
language:
- en
metrics:
- rouge
tags:
- summarization
- summarizer
- text summarization
- abstractive summarization
pipeline_tag: summarization
---

[![Generic badge](https://img.shields.io/badge/STATUS-WIP-yellow.svg)](https://shields.io/)

[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1TWasAT17zU90CqgbK98ouDuBXXHtwbVL?usp=sharing)

# Table of Contents

1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Training Details](#training-details)
4. [Training Results](#training-results)
5. [Citation](#citation)
6. [Authors](#authors)

# Model Details

This model is a variant of [facebook/bart-base](https://huggingface.co/facebook/bart-base) fine-tuned specifically for text summarization. It aims to generate concise, coherent, and informative summaries from long documents, leveraging BART's bidirectional (BERT-like) encoder and autoregressive (GPT-like) decoder.

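As a quick illustrative check (not part of the original card), the checkpoint loads as a standard BART encoder-decoder, and its config exposes both stacks:

```python
# Illustrative only: inspect the architecture of the fine-tuned checkpoint.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("KipperDev/bart_summarizer_model")
print(config.model_type)                             # "bart"
print(config.encoder_layers, config.decoder_layers)  # encoder / decoder depth
```
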
# Usage

This model is intended for summarizing long-form texts into concise, informative abstracts. It's particularly useful for professionals and researchers who need to quickly grasp the essence of detailed reports, research papers, or articles without reading the entire text.

## Get Started

Install with `pip`:

```bash
pip install transformers
```

Use in Python:

```python
from transformers import pipeline
from transformers import AutoTokenizer
from transformers import AutoModelForSeq2SeqLM

model_name = "KipperDev/bart_summarizer_model"

# Load the fine-tuned checkpoint and wrap it in a summarization pipeline
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
summarizer = pipeline("summarization", model=model, tokenizer=tokenizer)

# Example usage: prepend the "summarize: " prefix, then generate and decode
prefix = "summarize: "
input_text = "Your input text here."
input_ids = tokenizer.encode(prefix + input_text, return_tensors="pt")
summary_ids = model.generate(input_ids)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)

print(summary)
```

**Note: for the model to work as intended, you need to prepend the `summarize:` prefix to the input text.**

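The `summarizer` pipeline built above can also be called directly. A hedged sketch follows (the generation settings are illustrative examples, not values from the original card); the prefix is still required:

```python
# Illustrative pipeline call; max_length/min_length are example values only.
summary = summarizer(
    prefix + input_text,
    max_length=150,
    min_length=40,
    truncation=True,
)[0]["summary_text"]
print(summary)
```
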
# Training Details

## Training Data

The model was trained using the [Big Patent Dataset](https://huggingface.co/datasets/big_patent), comprising 1.3 million US patent documents and their corresponding human-written summaries. This dataset was chosen for its rich language and complex structure, representative of the challenging nature of document summarization tasks.

Training involved multiple subsets of the dataset to ensure broad coverage and robust model performance across varied document types.

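For reference, here is a minimal sketch of pulling one BIG-PATENT subset with the `datasets` library (the subset choice is an example; the card does not state which subsets were used):

```python
# Example only: load the "a" (Human Necessities) CPC subset of BIG-PATENT.
# Newer `datasets` releases may require trust_remote_code=True for script-based datasets.
from datasets import load_dataset

dataset = load_dataset("big_patent", "a", split="train")
sample = dataset[0]
print(sample["description"][:500])  # source patent text
print(sample["abstract"])           # human-written summary
```
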
## Training Procedure

Training was conducted over three rounds. The initial round used a learning rate of 0.00002, a batch size of 8, and 4 epochs. Subsequent rounds adjusted these to a learning rate of 0.0003, a batch size of 8, and 12 epochs to further refine performance. A linear decay learning rate schedule was also applied to improve learning efficiency over time.

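A hedged reconstruction of the first-round settings as `Seq2SeqTrainingArguments` (the output path and any argument not listed above are assumptions, not the author's actual training script):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="bart_summarizer_model",  # hypothetical path
    learning_rate=2e-5,                  # 0.00002, first round
    per_device_train_batch_size=8,
    num_train_epochs=4,
    lr_scheduler_type="linear",          # linear decay schedule
)
```
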
# Training Results

Model performance was evaluated using the ROUGE metric, highlighting its capability to generate summaries closely aligned with human-written abstracts.

| **Metric**                              | **Value**  |
|-----------------------------------------|------------|
| Evaluation Loss (Eval Loss)             | 1.9244     |
| Rouge-1                                 | 0.5007     |
| Rouge-2                                 | 0.2704     |
| Rouge-L                                 | 0.3627     |
| Rouge-Lsum                              | 0.3636     |
| Average Generation Length (Gen Len)     | 122.1489   |
| Runtime (seconds)                       | 1459.3826  |
| Samples per Second                      | 1.312      |
| Steps per Second                        | 0.164      |

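For context, ROUGE scores of this kind are typically computed with the `evaluate` library; a minimal sketch (an assumption about tooling, not the author's evaluation script):

```python
import evaluate

rouge = evaluate.load("rouge")  # pip install evaluate rouge_score
predictions = ["generated summary text goes here"]
references = ["reference human-written abstract goes here"]
print(rouge.compute(predictions=predictions, references=references))
# {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}
```
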
# Citation

**BibTeX:**

```bibtex
@article{kipper_t5_summarizer,
  % SOON
}
```

# Authors

This model card was written by [Fernanda Kipper](https://www.fernandakipper.com/).