philipp-zettl committed
Commit fbadaa0 • 1 Parent(s): 1c1bd81

Update README.md

Files changed (1)
  1. README.md +234 -152

README.md CHANGED
---
license: mit
datasets:
- philipp-zettl/long-qa
language:
- en
library_name: transformers
pipeline_tag: text2text-generation
widget:
- text: "question: What's part of the Hugging Face Hub? context: The Hugging Face Hub is a
    platform with over 350k models, 75k datasets, and 150k demo apps (Spaces),
    all open source and publicly available, in an online platform where people
    can easily collaborate and build ML together. The Hub works as a central
    place where anyone can explore, experiment, collaborate, and build
    technology with Machine Learning. Are you ready to join the path towards
    open source Machine Learning? πŸ€—"
  example_title: πŸ€— Hub
- text: "question: What data sets can be accessed in the Datasets library? context:
    πŸ€— Datasets is a library for easily accessing and sharing datasets for Audio,
    Computer Vision, and Natural Language Processing (NLP) tasks. Load a dataset
    in a single line of code, and use our powerful data processing methods to
    quickly get your dataset ready for training in a deep learning model. Backed
    by the Apache Arrow format, process large datasets with zero-copy reads without
    any memory constraints for optimal speed and efficiency. We also feature a
    deep integration with the Hugging Face Hub, allowing you to easily load
    and share a dataset with the wider machine learning community. Find your
    dataset today on the Hugging Face Hub, and take an in-depth look inside of
    it with the live viewer."
  example_title: πŸ€— datasets
---

# Model Card for t5-small-long-qa

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->
This model was trained to generate answers to questions from a given context.

- **Developed by:** [philipp-zettl](https://huggingface.co/philipp-zettl)
- **Model type:** Transformer (T5)
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model [optional]:** [google/flan-t5-small](https://huggingface.co/google/flan-t5-small)

### Model Sources [optional]

<!-- Provide the basic links for the model. -->
Fine-tune of the amazing [google/flan-t5-small](https://huggingface.co/google/flan-t5-small)

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
The model is intended to be used to generate answers to questions from a given context.
The context should not exceed the model's _context_ length.
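
For reference, model inputs follow the same `question: ... context: ...` format as the widget examples above; a minimal sketch of building such a prompt:

```python
# A model input is a single string combining question and context
question = "What's part of the Hugging Face Hub?"
context = (
    "The Hugging Face Hub is a platform with over 350k models, "
    "75k datasets, and 150k demo apps (Spaces)."
)
prompt = f"question: {question} context: {context}"
```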
 
## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

No bias evaluation was performed on this model.

## How to Get Started with the Model

Use the code below to get started with the model (the repo id is assumed from this card's title).

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load model and tokenizer (repo id assumed from this model card's title)
model_name = 'philipp-zettl/t5-small-long-qa'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)

context = "This is a long text based on multiple concatenated paragraphs."
question = "My question about something mentioned inside the context."

model_inputs = tokenizer([f"question: {question} context: {context}"], max_length=512, padding=True, truncation=True)
input_ids = torch.tensor(model_inputs['input_ids']).to(device)
attention_mask = torch.tensor(model_inputs['attention_mask']).to(device)
with torch.no_grad():
    sample_output = model.generate(input_ids[:1], attention_mask=attention_mask[:1], max_length=85)
sample_output_text = tokenizer.decode(sample_output[0], skip_special_tokens=True)
input_text = tokenizer.decode(input_ids[0], skip_special_tokens=True)
print(f"Sample Input:\n \"{input_text}\"\n\n")
print(f"Model Output: \"{sample_output_text}\"")
```
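
Alternatively, the `text2text-generation` pipeline declared in the metadata can be used. A minimal sketch, assuming the model is published under the repo id `philipp-zettl/t5-small-long-qa`:

```python
from transformers import pipeline

# Repo id assumed from this model card's title
qa = pipeline('text2text-generation', model='philipp-zettl/t5-small-long-qa')

result = qa(
    'question: Where is the Eiffel Tower? context: The Eiffel Tower stands in Paris, France.',
    max_length=85,
)
print(result[0]['generated_text'])
```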
 
## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

This model was trained on [philipp-zettl/long-qa](https://huggingface.co/datasets/philipp-zettl/long-qa).

It is a synthetic data set created from [philipp-zettl/qg-tydiqa_squad2](https://huggingface.co/datasets/philipp-zettl/qg-tydiqa_squad2) using [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct).

The data set was created by prompting Phi-3 with the following prompt template:
```python
# `sample` is one row of the source QA dataset
msg = f"""
Answer the following question using the content provided in the context.
Do not answer questions where the answer isn't inside the context.


Question: {sample['question']}
Context: {sample['context']}
"""
```
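
For illustration, a sketch of how such a prompt could be sent to Phi-3; the loading and generation settings here are assumptions, not documented values:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: dtype, trust_remote_code, and generation length are assumptions
phi_name = 'microsoft/Phi-3-mini-128k-instruct'
phi_tokenizer = AutoTokenizer.from_pretrained(phi_name)
phi_model = AutoModelForCausalLM.from_pretrained(
    phi_name, torch_dtype=torch.bfloat16, trust_remote_code=True
)

# Wrap the prompt template from above in a single chat turn
inputs = phi_tokenizer.apply_chat_template(
    [{'role': 'user', 'content': msg}],
    add_generation_prompt=True,
    return_tensors='pt',
)
with torch.no_grad():
    output = phi_model.generate(inputs, max_new_tokens=128)
answer = phi_tokenizer.decode(output[0, inputs.shape[-1]:], skip_special_tokens=True)
```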

After generating the synthetic answers, the data set was manually corrected and validated to ensure high quality and consistently longer answers than in the original data sets.

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
Below you can find the full training pipeline used to produce this fine-tune.

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Base model: this fine-tune starts from flan-t5-small
# https://huggingface.co/collections/google/flan-t5-release-65005c39e3201fff885e22fb
model_name = 'google/flan-t5-small'
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Move the model to GPU if available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)
```

Load the dataset:
```python
from datasets import load_dataset

# Load the QA dataset from the Hub
ds = load_dataset('philipp-zettl/long-qa')

# Use the provided splits for training and validation
train_dataset = ds['train']
validation_dataset = ds['test']
```

Preprocessing: tokenize the inputs and labels once up front for faster training cycles, i.e. no tokenization is needed during training anymore.
```python
def preprocess_batch(batch, tokenizer, max_input_length=512, max_output_length=128):
    questions = batch['question']
    contexts = batch['context']
    answers = batch['answer']

    # Build the "question: ... context: ..." prompts and tokenize them
    inputs = [f"question: {q} context: {c}" for q, c in zip(questions, contexts)]
    model_inputs = tokenizer(inputs, max_length=max_input_length, padding=True, truncation=True)

    # Tokenize the target answers as labels
    labels = tokenizer(answers, max_length=max_output_length, padding=True, truncation=True)
    model_inputs['labels'] = labels['input_ids']

    return model_inputs

# Tokenize the dataset
train_dataset = train_dataset.map(lambda batch: preprocess_batch(batch, tokenizer), batched=True)
validation_dataset = validation_dataset.map(lambda batch: preprocess_batch(batch, tokenizer), batched=True)

# Set format for PyTorch
train_dataset.set_format(type='torch', columns=['input_ids', 'attention_mask', 'labels'])
validation_dataset.set_format(type='torch', columns=['input_ids', 'attention_mask', 'labels'])
```
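
One caveat, as an observation rather than part of the original pipeline: the labels are padded with the regular pad token, and `DataCollatorForSeq2Seq` only pads sequences that are shorter than the batch maximum, so the pad positions above can contribute to the loss. A common refinement is to replace them with `-100`, which the cross-entropy loss ignores; `mask_label_padding` below is a hypothetical helper:

```python
def mask_label_padding(label_ids, pad_token_id):
    # Replace pad-token positions with -100 so cross-entropy ignores them
    return [
        [token if token != pad_token_id else -100 for token in seq]
        for seq in label_ids
    ]

# Hypothetical usage inside preprocess_batch:
# model_inputs['labels'] = mask_label_padding(labels['input_ids'], tokenizer.pad_token_id)
```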

The train loop:
```python
from tqdm import tqdm
from transformers import DataCollatorForSeq2Seq
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

torch.cuda.empty_cache()

model_name = 'google/flan-t5-small'
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Move the model to GPU if available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)

# Training parameters
epochs = 50
learning_rate = 3e-5
batch_size = 8
optimizer = torch.optim.AdamW(model.parameters(), lr=learning_rate)

# Create a data collator for padding and batching
data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=model)

# Create DataLoaders with the data collator
train_dataloader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True, collate_fn=data_collator)
validation_dataloader = DataLoader(validation_dataset, batch_size=batch_size, collate_fn=data_collator)

writer = SummaryWriter(comment='t5-small-long-qa')

# Store losses and learning rates
train_losses = []
val_losses = []
learning_rates = []

print("Starting training...")

# Training loop
for epoch in range(epochs):
    model.train()
    total_loss = 0
    print(f"Epoch {epoch+1}/{epochs}")

    progress_bar = tqdm(train_dataloader, desc="Training", leave=False)
    # Log roughly ten times per epoch
    log_every = max(1, len(train_dataloader) // 10)

    for step, batch in enumerate(progress_bar):
        # Move inputs to GPU
        input_ids = batch['input_ids'].to(device)
        attention_mask = batch['attention_mask'].to(device)
        labels = batch['labels'].to(device)

        # Forward pass; the model computes the cross-entropy loss internally
        outputs = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
        loss = outputs.loss
        writer.add_scalar("Loss/train", loss, epoch * len(train_dataloader) + step)

        # Backpropagation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        total_loss += loss.item()

        # Verbose logging
        if step % log_every == 1 or step == len(train_dataloader) - 1:
            progress_bar.set_postfix({
                'step': step,
                'loss': loss.item(),
            })

            # Generate a sample output from the model
            model.eval()
            with torch.no_grad():
                sample_output = model.generate(input_ids[:1], max_length=50)
                sample_output_text = tokenizer.decode(sample_output[0], skip_special_tokens=True)
                input_text = tokenizer.decode(input_ids[0], skip_special_tokens=True)
                writer.add_text("Sample Input", input_text, step)
                writer.add_text("Sample Output", sample_output_text, step)
            model.train()

    avg_train_loss = total_loss / len(train_dataloader)
    train_losses.append(avg_train_loss)
    learning_rates.append(optimizer.param_groups[0]['lr'])

    # Validation step
    model.eval()
    total_val_loss = 0
    with torch.no_grad():
        for batch in validation_dataloader:
            input_ids = batch['input_ids'].to(device)
            attention_mask = batch['attention_mask'].to(device)
            labels = batch['labels'].to(device)

            outputs = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
            val_loss = outputs.loss
            total_val_loss += val_loss.item()

    avg_val_loss = total_val_loss / len(validation_dataloader)
    val_losses.append(avg_val_loss)

    writer.add_scalar("AVG Loss/train", avg_train_loss, epoch)
    writer.add_scalar("AVG Loss/val", avg_val_loss, epoch)

    print(f"Epoch {epoch+1} completed. Avg Train Loss: {avg_train_loss:.4f}, Avg Val Loss: {avg_val_loss:.4f}")

print("Training complete.")
writer.close()
```
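
The pipeline above ends without persisting the weights. A typical final step, not part of the original listing and with an illustrative output path, is to save (and optionally publish) the model:

```python
# Save the fine-tuned model and tokenizer (the path is illustrative)
output_dir = 't5-small-long-qa'
model.save_pretrained(output_dir)
tokenizer.save_pretrained(output_dir)

# Optionally publish to the Hugging Face Hub (requires authentication)
# model.push_to_hub('t5-small-long-qa')
# tokenizer.push_to_hub('t5-small-long-qa')
```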