---
library_name: transformers
tags: []
---

# Model Card for awngsz/nn_model

<!-- Provide a quick summary of what the model is/does. -->
This is the baseline model for the news source classification project, which classifies news headlines as coming from NBC or Fox News.

Please run the following evaluation pipeline code:

# START # 
## Imports 
<pre>from huggingface_hub import hf_hub_download
import joblib

!huggingface-cli login  # notebook command; authenticates access to the Hub

import pandas as pd
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.metrics import accuracy_score

import nltk
from nltk.corpus import stopwords
nltk.download('stopwords')
nltk.download('wordnet')</pre>
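If this is run as a plain Python script rather than a notebook, the shell-style `!huggingface-cli login` line will not work; `huggingface_hub.login` is a programmatic alternative (shown as an optional sketch):

<pre>
from huggingface_hub import login

# Prompts for a Hugging Face access token; equivalent to `huggingface-cli login`
login()
</pre>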


# Load model from Hugging Face (please load test data into test_df below)
<pre>repo_id='awngsz/nn_model'  
filename='nn_model_v3.joblib'

model_file_path=hf_hub_download(repo_id=repo_id, filename=filename)  <br> 
model=joblib.load(model_file_path) 
print(model)  
  
#Load test dataset (assuming the name is the same as the one in the Ed post) <br> 
test_df = pd.read_csv(file_path) 

#Copying the naming convention from the sample dataset in the edpost <br> 
X_test = test_df['title']  
y_test = test_df['labels']  </pre>

# Clean the data

<pre>
def clean_headlines(df, column_name):
    """
    Cleans a specified column in a DataFrame by:
    - Removing <script> elements and HTML tags
    - Removing special characters and collapsing repeated punctuation
    - Removing tabs and newline characters
    - Normalizing whole-word references to US/UN as u.s./u.n.
    - Stripping Fox News boilerplate patterns seen in the data
    - Collapsing extra spaces, including leading/trailing whitespace

    Args:
        df (pd.DataFrame): The DataFrame containing the column to clean
        column_name (str): The name of the column to clean

    Returns:
        pd.DataFrame: A DataFrame with the cleaned column
    """
    # Remove scripts first (tag removal below would otherwise break the pattern),
    # then any remaining HTML tags
    df[column_name] = df[column_name].str.replace(r'<script.*?</script>', '', regex=True)
    df[column_name] = df[column_name].str.replace(r'<[^<]+?>', '', regex=True)

    # Remove special characters
    df[column_name] = df[column_name].str.strip().str.replace(r'[&*|~`^=_+{}[\]<>\\]', ' ', regex=True)

    # Collapse repeating punctuation (e.g. '!!!' -> '!')
    df[column_name] = df[column_name].str.strip().str.replace(r'([?!])\1+', r'\1', regex=True)

    # Remove tabs and newline characters
    df[column_name] = df[column_name].str.replace(r'[\t\n]', ' ', regex=True)

    # Normalize whole-word references to US/UN as u.s./u.n.
    # (word boundaries avoid mangling words like 'BUST' or 'UNDER')
    df[column_name] = df[column_name].str.replace(r'\bUS\b', 'u.s.', regex=True)
    df[column_name] = df[column_name].str.replace(r'\bUN\b', 'u.n.', regex=True)

    # Collapse extra spaces including leading/trailing whitespace
    df[column_name] = df[column_name].str.strip().str.replace(r'\s+', ' ', regex=True)

    # Strip Fox News boilerplate patterns we see in the data
    # ('|' is escaped because it is a regex metacharacter)
    for pattern in [r'fox news poll:', r'\| fox news', r'Fox News', r'fox news',
                    r'news poll:', r'opinion:', r"reporter's notebook"]:
        df[column_name] = df[column_name].str.replace(pattern, '', regex=True)

    # Normalize double quotes to single quotes (currently disabled)
    # df[column_name] = df[column_name].str.replace(r'"', "'", regex=True)

    # Punctuation removal (currently disabled)
    # df[column_name] = df[column_name].str.replace(r'[.,()]', '', regex=True)

    return df </pre>

<pre>
def normalize_headlines(df, column_name):
    """
    Normalizes headlines by:
    - converting them to lowercase
    - removing stopwords
    - lemmatizing words to their base forms

    Args:
        df (pd.DataFrame): The DataFrame containing the column to clean
        column_name (str): The name of the column to clean

    Returns:
        pd.DataFrame: A DataFrame with the cleaned column
    """
    # Convert headlines to lowercase
    df[column_name] = df[column_name].str.lower()

    # Remove stopwords from headlines
    stop_words = set(stopwords.words('english'))
    df[column_name] = df[column_name].apply(
        lambda x: ' '.join(word for word in x.split() if word not in stop_words))

    # Lemmatize words to their base forms
    lemmatizer = nltk.stem.WordNetLemmatizer()
    df[column_name] = df[column_name].apply(
        lambda x: ' '.join(lemmatizer.lemmatize(word) for word in x.split()))

    return df </pre>

<pre>
def handle_missing_data(df, column_name):
    """
    Handles missing or incomplete data in a given column of a DataFrame by:
    - Dropping rows with NULL values in the column
    - Dropping rows whose headline has fewer than three words

    Args:
        df (pd.DataFrame): The DataFrame containing the column to clean
        column_name (str): The name of the column to clean

    Returns:
        pd.DataFrame: A DataFrame with the cleaned column
    """
    # Drop NULL headlines
    df = df.dropna(subset=[column_name])

    # Set a minimum word count threshold
    min_word_count = 3

    # Filter out titles with fewer words
    df = df[df[column_name].str.split().apply(len) >= min_word_count].reset_index(drop=True)

    return df </pre>

<pre>
def consistency_checks(df, column_name):
    """
    Ensures all headlines follow a consistent format by:
    - Removing duplicate headlines

    Args:
        df (pd.DataFrame): The DataFrame containing the column to clean
        column_name (str): The name of the column to clean

    Returns:
        pd.DataFrame: A DataFrame with the cleaned column
    """
    # Remove duplicate headlines
    df = df.drop_duplicates(subset=[column_name])

    # Optionally filter headlines with too few or too many words:
    # df = df[df[column_name].str.split().apply(len).between(3, 20)]

    return df </pre>

<pre>
X_test = clean_headlines(X_test, 'title')
X_test = normalize_headlines(X_test, 'title')
X_test = X_test.dropna(subset = ['title'])
X_test = handle_missing_data(X_test, 'title')
X_test = consistency_checks(X_test, 'title') </pre>
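As a quick sanity check (optional; not part of the evaluation pipeline), the cleaning functions can be exercised on a tiny made-up DataFrame:

<pre>
# Hypothetical smoke test for the cleaning steps; the headlines below are invented.
toy = pd.DataFrame({
    'title': ['Fox News Poll: US economy rebounds!!!',
              '<b>Breaking</b> news from the UN summit today',
              'short'],
    'labels': [1, 1, 0],
})
toy = clean_headlines(toy, 'title')
toy = normalize_headlines(toy, 'title')
toy = handle_missing_data(toy, 'title')   # drops the one-word 'short' row
toy = consistency_checks(toy, 'title')
print(toy['title'].tolist())
</pre>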

# Load the embedding model from Hugging Face. Transformer: DistilBERT


<pre>
def get_embeddings(text_all, tokenizer, model, device, max_len=128):
    '''
    Generate embeddings using a transformer model on GPU if available.
    Args:
    - text_all: List of input texts
    - tokenizer: Tokenizer for the model
    - model: Transformer model
    - device: torch.device to run the computations
    - max_len: Maximum token length for the input
    Returns:
    - embeddings: List of embeddings for each input text
    '''
    embeddings = []
    count = 0
    print('Start embeddings:')

    # Report progress roughly every 10%; max(1, ...) avoids division by zero on tiny inputs
    progress_step = max(1, len(text_all) // 10)

    for text in text_all:
        count += 1
        if count % progress_step == 0:
            print(f'{count / len(text_all) * 100:.1f}% done ...')

        # Tokenize the input text
        model_input_token = tokenizer(
            text,
            add_special_tokens=True,
            max_length=max_len,
            padding='max_length',
            truncation=True,
            return_tensors='pt'
        ).to(device)  # Move input tensors to the target device

        # Generate embeddings without gradient computation
        with torch.no_grad():
            model_output = model(**model_input_token)
            cls_embedding = model_output.last_hidden_state[:, 0, :]  # Use CLS token embedding
            cls_embedding = cls_embedding.squeeze().cpu().numpy()  # Move back to CPU for numpy
            embeddings.append(cls_embedding)

    return embeddings </pre>
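The loop above embeds one headline at a time. For larger test sets, a batched variant is usually much faster; this is a sketch under the same tokenizer/model interface, and the `batch_size` value is an arbitrary assumption:

<pre>
def get_embeddings_batched(text_all, tokenizer, model, device, max_len=128, batch_size=32):
    # Batched alternative to get_embeddings; returns the same per-text CLS embeddings.
    embeddings = []
    texts = list(text_all)
    for start in range(0, len(texts), batch_size):
        batch = texts[start:start + batch_size]
        tokens = tokenizer(batch, add_special_tokens=True, max_length=max_len,
                           padding='max_length', truncation=True,
                           return_tensors='pt').to(device)
        with torch.no_grad():
            output = model(**tokens)
            cls = output.last_hidden_state[:, 0, :].cpu().numpy()  # one CLS row per text
        embeddings.extend(cls)
    return embeddings
</pre>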


# Check for GPU availability
<pre>
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(f'Using device: {device}')

# Load the tokenizer and model for 'distilbert-base-uncased'
print("Loading model and tokenizer...")
tokenizer_news = AutoTokenizer.from_pretrained('distilbert-base-uncased')
model_news = AutoModel.from_pretrained('distilbert-base-uncased').to(device)

# Set the model to evaluation mode
model_news.eval()

############################################# DistilBERT (uncased) Embedding #############################################
print("Computing DistilBERT embeddings for the test data...")

# Extract labels and titles after cleaning, so they stay aligned
y_test = X_test['labels']
X_test = X_test['title']

X_test_embeddings_DBERT = get_embeddings(X_test, tokenizer_news, model_news, device, max_len=128)
print("DistilBERT embeddings for the test data computed!")

prediction = model.predict(X_test_embeddings_DBERT)
</pre>
# Accuracy 
<pre>label_map = {'NBC': 0, 'FoxNews': 1}

def compute_category_accuracy(y_true, y_pred, label):
    # Per-class accuracy: fraction of true instances of `label` predicted correctly
    y_true = np.array(y_true)
    y_pred = np.array(y_pred)
    n_correct = np.sum((y_true == label) & (y_pred == label))
    n_total = np.sum(y_true == label)
    return n_correct / n_total

# Print accuracy
print(f'Test accuracy: {accuracy_score(y_test, prediction) * 100:.2f}%')
print(f'Test accuracy for NBC: {compute_category_accuracy(y_test, prediction, label_map["NBC"]) * 100:.2f}%')
print(f'Test accuracy for FoxNews: {compute_category_accuracy(y_test, prediction, label_map["FoxNews"]) * 100:.2f}%')
</pre>
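For a fuller picture than overall and per-class accuracy, a confusion matrix and classification report can be added (optional sketch; assumes the integer labels follow `label_map` above):

<pre>
from sklearn.metrics import confusion_matrix, classification_report

# Rows are true labels, columns predictions; label order is sorted (0 = NBC, 1 = FoxNews)
print(confusion_matrix(y_test, prediction))
print(classification_report(y_test, prediction, target_names=['NBC', 'FoxNews']))
</pre>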







##### END #####

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

The full evaluation pipeline appears at the top of this card. A minimal load sketch (same `repo_id` and `filename` as the pipeline above):
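<pre>
from huggingface_hub import hf_hub_download
import joblib

# Download and load the classifier from the Hub
model = joblib.load(hf_hub_download(repo_id='awngsz/nn_model', filename='nn_model_v3.joblib'))
</pre>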

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]


#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary



## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]