# -*- coding: utf-8 -*- """Copy of Coding Challenge for Fatima Fellowship Automatically generated by Colaboratory. Original file is located at https://colab.research.google.com/drive/1fCgH-E1EykMl_2gkpitYV7fXPjNRqMcv # Fatima Fellowship Quick Coding Challenge (Pick 1) Thank you for applying to the Fatima Fellowship. To help us select the Fellows and assess your ability to do machine learning research, we are asking that you complete a short coding challenge. Please pick **1 of these 5** coding challenges, whichever is most aligned with your interests. **Due date: 1 week** **How to submit**: Please make a copy of this colab notebook, add your code and results, and submit your colab notebook to the submission link below. If you have never used a colab notebook, [check out this video](https://www.youtube.com/watch?v=i-HnvsehuSw). **Submission link**: https://airtable.com/shrXy3QKSsO2yALd3 # 1. Deep Learning for Vision **Upside down detector**: Train a model to detect if images are upside down * Pick a dataset of natural images (we suggest looking at datasets on the [Hugging Face Hub](https://huggingface.co/datasets?task_categories=task_categories:image-classification&sort=downloads)) * Synthetically turn some of images upside down. Create a training and test set. * Build a neural network (using Tensorflow, PyTorch, or any framework you like) * Train it to classify image orientation until a reasonable accuracy is reached * [Upload the the model to the Hugging Face Hub](https://huggingface.co/docs/hub/adding-a-model), and add a link to your model below. * Look at some of the images that were classified incorrectly. Please explain what you might do to improve your model's performance on these images in the future (you do not need to impelement these suggestions) **Submission instructions**: Please write your code below and include some examples of images that were classified """ ### WRITE YOUR CODE TO TRAIN THE MODEL HERE """**Write up**: * Link to the model on Hugging Face Hub: * Include some examples of misclassified images. Please explain what you might do to improve your model's performance on these images in the future (you do not need to impelement these suggestions) # 2. Deep Learning for NLP **Fake news classifier**: Train a text classification model to detect fake news articles! * Download the dataset here: https://www.kaggle.com/clmentbisaillon/fake-and-real-news-dataset * Develop an NLP model for classification that uses a pretrained language model * Finetune your model on the dataset, and generate an AUC curve of your model on the test set of your choice. * [Upload the the model to the Hugging Face Hub](https://huggingface.co/docs/hub/adding-a-model), and add a link to your model below. * *Answer the following question*: Look at some of the news articles that were classified incorrectly. Please explain what you might do to improve your model's performance on these news articles in the future (you do not need to impelement these suggestions) """ ### WRITE YOUR CODE TO TRAIN THE MODEL HERE """# **Downloading dataset from Kaggle in my drive and giving appropriate permissions**""" !pip install kaggle #upload kaggle.json file in drive ! mkdir ~/.kaggle ! cp kaggle.json ~/.kaggle/ ! chmod 600 ~/.kaggle/kaggle.json ! kaggle datasets download clmentbisaillon/fake-and-real-news-dataset ! unzip fake-and-real-news-dataset.zip """# **Setting Developer environment for using transformers**""" # setting developer environment !pip install datasets transformers[sentencepiece] import os os.environ["'WANDB_MODE'"] = "online" import numpy as np import pandas as pd import torch from tqdm import tqdm from sklearn.metrics import accuracy_score, precision_recall_fscore_support from transformers import Trainer, TrainingArguments from transformers import DistilBertTokenizer, DistilBertForSequenceClassification # from transformers import (RobertaForSequenceClassification, RobertaTokenizer, AdamW) # wanted to try RobertA model too but seems like collab had reached RAM limit. !pip install wandb import wandb wandb.login() # Commented out IPython magic to ensure Python compatibility. # %env WANDB_PROJECT=fake_news_classifier data_path = os.path.join("..", "content") true_df = pd.read_csv(os.path.join(data_path, "True.csv")) fake_df = pd.read_csv(os.path.join(data_path, "Fake.csv")) # adding label fake_df["label"] = [0]*len(fake_df) true_df["label"] = [1]*len(true_df) fake_df df = pd.concat([true_df, fake_df]).sample(frac=1).reset_index(drop=True) """# **Merging title and body under one column**""" df["body"] = df["title"] + '' + df['text'] """# **Removing erraneous data elements**""" df.drop(["title", "text", "subject", "date"], axis=1, inplace=True) # droping duplicates df.drop_duplicates(inplace=True) # empty rows df.dropna(inplace=True) df.head(15) """# **Preparing Training, Validation and Test set**""" train, valid, test = np.split(df.sample(frac=1, random_state=42), [int(.6*len(df)), int(.8*len(df))]) X_train, y_train = train.body, train.label X_valid, y_valid = valid.body, valid.label X_test, y_test = test.body , test.label assert len(X_train) == len(y_train) and len(X_valid) == len(y_valid) and len(X_test) == len(y_test) assert len(df) == len(X_train) + len(X_valid) + len(X_test) len(X_train) len(y_train) """# **Using "distilbert-base-uncased" pretrained model** """ model_name = "distilbert-base-uncased" tokenizer = DistilBertTokenizer.from_pretrained(model_name, do_lower_case=True) model = DistilBertForSequenceClassification.from_pretrained(model_name, num_labels=1) def encode_samples(samples, tokenizer, max_length): """ Converts words to (BERT) tokens. words can be composed of multiple tokens. Parameters ---------- samples: list(str) A X_train_tokens is list of strings where each string is a sentence. tokenizer: transformers.PreTrainedTokenizer The BERT's pre-trained tokenizer. Returns ------- X: list(int) A list of integers where each integer represents the sub-word index according to the `tokenizer`. """ X = {"input_ids": [], "attention_mask": []} for i in tqdm(range(len(samples)//50+1), "Encoding"): batch = samples[i*50:50*(i+1)] tokens_batch = tokenizer(batch, truncation=True, padding=True, max_length=max_length) X["input_ids"].extend(tokens_batch.data["input_ids"]) X["attention_mask"].extend(tokens_batch.data["attention_mask"]) return X max_length = 512 X_train_tokens = encode_samples(X_train.tolist(), tokenizer, max_length) X_valid_tokens = encode_samples(X_valid.tolist(), tokenizer, max_length) X_test_tokens = encode_samples(X_test.tolist(), tokenizer, max_length) class KaggleNewsDataset(torch.utils.data.Dataset): def __init__(self, samples, labels): self.samples = samples self.labels = labels def __getitem__(self, idx): item = {k: torch.tensor(v[idx]) for k, v in self.samples.items()} item["labels"] = torch.tensor([self.labels[idx]], dtype=torch.float) return item def __len__(self): return len(self.samples["input_ids"]) train_dataset = KaggleNewsDataset(X_train_tokens, y_train.tolist()) valid_dataset = KaggleNewsDataset(X_valid_tokens, y_valid.tolist()) test_dataset = KaggleNewsDataset(X_test_tokens, y_test.tolist()) """# **Computing metrics**""" def compute_metrics(pred, threshold=0.5): labels = pred.label_ids preds = torch.nn.Sigmoid()(torch.from_numpy(pred.predictions)) > threshold acc = accuracy_score(labels, preds) precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='binary') return { 'accuracy': acc, 'f1': f1, 'precision': precision, 'recall': recall } #defining epochs and batch_size epochs = 5 batch_size = 16 #emptying cache import torch torch.cuda.empty_cache() trainer = Trainer( model=model, # Transformers model to be trained train_dataset=train_dataset, # training dataset eval_dataset=valid_dataset, # validation set used as evaluation dataset compute_metrics=compute_metrics, # computes metrics args=TrainingArguments( output_dir='./results', # output directory num_train_epochs=epochs, # number of training epochs per_device_train_batch_size=batch_size, # batch size per device per_device_eval_batch_size=batch_size, # batch size for evaluation warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir='./logs', # directory for storing logs load_best_model_at_end=True, # load the best model when finished training (default metric is loss) logging_steps=400, # log & save weights each logging_steps save_steps=400, evaluation_strategy="steps", # evaluate each `logging_steps` report_to="wandb", ) ) # training the model trainer.train() test_predictions = trainer.predict(test_dataset) test_predictions test_predictions[0] test_pred=test_predictions[0].round(2) test_pred y_pred = np.where(test_pred > 0.5, 1, 0) print(y_pred) df_1=pd.DataFrame(test_predictions[0], columns=['prediction']) df_1 test_predictions[1] df_2=pd.DataFrame(test_predictions[1], columns=['actual']) df_2 result = pd.concat([df_1, df_2], axis=1, join='inner') display(result) """# **Reading misclassified news articles**""" for i in range(len(result)): if(y_pred[i]!=test_predictions[1][i]): print(i) #predicted label result.prediction[338] #actual label result.actual[338] for index in [338,1657, 3838, 7601]: print(df.iat[index,1]) print("\n") result.prediction[1657] result.actual[1657] result.prediction[3838] result.actual[3838] """# **Plotting Confusion Matrix** """ from sklearn.metrics import confusion_matrix cf=confusion_matrix(y_test, y_pred) import seaborn as sns import matplotlib.pyplot as plt ax = sns.heatmap(cf, annot=True, cmap='Blues') plt.show() """# **Generating AUC Curve**""" from sklearn import metrics from sklearn.metrics import roc_curve labels = np.array(test_dataset.labels) predictions = torch.nn.Sigmoid()(torch.from_numpy(test_predictions.predictions)).numpy() fpr, tpr, thresholds = roc_curve(labels, predictions) auc = metrics.roc_auc_score(y_test, predictions) import matplotlib.pyplot as plt #create ROC curve plt.plot(fpr,tpr,label="AUC="+str(auc)) plt.ylabel('True Positive Rate') plt.xlabel('False Positive Rate') plt.legend(loc=4) plt.show() """# **Generating related graphs**""" wandb.finish() """# **Link to the model on Hugging Face Hub:** [link to model](https://huggingface.co/ankitkupadhyay/fake_news_classifier) # **Possible reasons for misclassification** 1. Donald Trump has a higher frequency in Fake dataset, so there is apossibility of a bias in the news articles that talks about Donald Trump. This may be one of the reasons of misclassification. 2. News articles containing hashtags(#) are mostly present in Fake dataset. So any true news containing hashtags has higher probability of getting classified as fake. **Write up**: * Link to the model on Hugging Face Hub: * Include some examples of misclassified news articles. Please explain what you might do to improve your model's performance on these news articles in the future (you do not need to impelement these suggestions) # 3. Deep RL / Robotics **RL for Classical Control:** Using any of the [classical control](https://github.com/openai/gym/blob/master/docs/environments.md#classic-control) environments from OpenAI's `gym`, implement a deep NN that learns an optimal policy which maximizes the reward of the environment. * Describe the NN you implemented and the behavior you observe from the agent as the model converges (or diverges). * Plot the reward as a function of steps (or Epochs). Compare your results to a random agent. * Discuss whether you think your model has learned the optimal policy and potential methods for improving it and/or where it might fail. * (Optional) [Upload the the model to the Hugging Face Hub](https://huggingface.co/docs/hub/adding-a-model), and add a link to your model below. You may use any frameworks you like, but you must implement your NN on your own (no pre-defined/trained models like [`stable_baselines`](https://stable-baselines.readthedocs.io/en/master/)). You may use any simulator other than `gym` _however_: * The environment has to be similar to the classical control environments (or more complex like [`robosuite`](https://github.com/ARISE-Initiative/robosuite)). * You cannot choose a game/Atari/text based environment. The purpose of this challenge is to demonstrate an understanding of basic kinematic/dynamic systems. """ ### WRITE YOUR CODE TO TRAIN THE MODEL HERE """**Write up**: * (Optional) link to the model on Hugging Face Hub: * Discuss whether you think your model has learned the optimal policy and potential methods for improving it and/or where it might fail. # 4. Theory / Linear Algebra **Implement Contrastive PCA** Read [this paper](https://www.nature.com/articles/s41467-018-04608-8) and implement contrastive PCA in Python. * First, please discuss what kind of dataset this would make sense to use this method on * Implement the method in Python (do not use previous implementations of the method if they already exist) * Then create a synthetic dataset and apply the method to the synthetic data. Compare with standard PCA. **Write up**: Discuss what kind of dataset it would make sense to use Contrastive PCA """ ### WRITE YOUR CODE HERE """# 5. Systems **Inference on the edge**: Measure the inference times in various computationally-constrained settings * Pick a few different speech detection models (we suggest looking at models on the [Hugging Face Hub](https://huggingface.co/models?pipeline_tag=automatic-speech-recognition&sort=downloads)) * Simulate different memory constraints and CPU allocations that are realistic for edge devices that might run such models, such as smart speakers or microcontrollers, and measure what is the average inference time of the models under these conditions * How does the inference time vary with (1) choice of model (2) available system memory (3) available CPU (4) size of input? Are there any surprising discoveries? (Note that this coding challenge is fairly open-ended, so we will be considering the amount of effort invested in discovering something interesting here). """ ### WRITE YOUR CODE HERE """**Write up**: What surprising discoveries do you see?"""