markdown (stringlengths 0-1.02M) | code (stringlengths 0-832k) | output (stringlengths 0-1.02M) | license (stringlengths 3-36) | path (stringlengths 6-265) | repo_name (stringlengths 6-127) |
---|---|---|---|---|---|
Maybe a few outliers with high GDP do not follow the linear trend that we observed above. | # Data for training
Xfull = np.c_[full_country_stats["GDP per capita"]]
yfull = np.c_[full_country_stats["Life satisfaction"]]
print(Xfull.shape, yfull.shape) | (36, 1) (36, 1)
| MIT | reading_assignments/5_Note-ML Methodology.ipynb | biqar/Fall-2020-ITCS-8156-MachineLearning |
We can better observe the trend by fitting polynomial regression models by changing the degrees. | # polynomial model to this data
for deg in [1, 2, 5, 10, 30]:
plt.figure();
full_country_stats.plot(kind='scatter', x="GDP per capita", y='Life satisfaction',
figsize=(12,4));
plt.axis([0, 110000, 3, 10]);
Xp, mu, sd, wp = poly_regress(Xfull.flatten(), deg, yfull.flatten(), normalize=True)
yp1 = Xp @ wp
# plot curve
plt.plot(Xfull, yp1, 'r-', label=deg);
plt.title("degree: {}".format(deg));
| _____no_output_____ | MIT | reading_assignments/5_Note-ML Methodology.ipynb | biqar/Fall-2020-ITCS-8156-MachineLearning |
What degree do you think the data follow? What is your best pick? Do you see overfitting here? From which one do you see it? As the complexity of the model grows, you may have small training errors. However, there is no guarantee that you have good generalization (you may have a very bad generalization error!). This is called the **Overfitting** problem in machine learning. From the training data, once you have learned the hypothesis *h* (or machine learning model), you can compute the training error $E_{train}(h)$ and the testing error $E_{test}(h)$. Let us say that there is another model $h^\prime$ for which $$ E_{train}(h) < E_{train}(h^\prime) \quad \text{and} \quad E_{test}(h) > E_{test}(h^\prime).$$ Then, we say the hypothesis $h$ is "overfitted." Bias-Variance Tradeoff Here the bias refers to an error from erroneous assumptions, and the variance refers to an error from sensitivity to small variations in the data. Thus, high bias can cause an underfitted model and high variance can cause an overfitted model. Finding the sweet spot that gives good generalization is in our hands. In the same track of discussion, Scott summarizes the errors that we need to consider as follows: - high bias error: an under-performing model that misses the important trends - high variance error: excessively sensitive to small variations in the training data - irreducible error: genuine to the noise in the data; need to clean up the data. From Understanding the Bias-Variance Tradeoff, by Scott Fortmann-Roe Regularization We reduce overfitting by adding a complexity penalty to the loss function. Here follows the loss function for linear regression with an $L2$-norm penalty. $$\begin{align*}E(\wv) &= \sum_i^N ( y_i - t_i)^2 + \lambda \lVert \wv \rVert_2^2 \\ \\ &= \sum_i^N ( y_i - t_i)^2 + \lambda \sum_k^D w_k^2 \\ \\ &= (\Xm \wv - \Tm)^\top (\Xm \wv - \Tm) + \lambda \wv^\top \wv \\ \\ &= \wv^\top \Xm^\top \Xm \wv - 2 \Tm^\top \Xm \wv + \Tm^\top \Tm + \lambda \wv^\top \wv \end{align*}$$ Repeating the derivation as in linear regression, $$\begin{align*}\frac{\partial E(\wv)}{\partial \wv} &= \frac{\partial (\Xm \wv - \Tm)^\top (\Xm \wv - \Tm)}{\partial \wv} + \frac{\partial \lambda \wv^\top \wv}{\partial \wv} \\ \\ &= 2 \Xm^\top \Xm \wv - 2 \Xm^\top \Tm + 2 \lambda \wv\end{align*}$$ Setting this derivative to zero, we reach the solution of *ridge regression*: $$\begin{align*} 2 \Xm^\top \Xm \wv - 2 \Xm^\top \Tm + 2 \lambda \wv &= 0\\\\\big(\Xm^\top \Xm + \lambda \Im \big) \wv &= \Xm^\top \Tm\\\\\wv &= \big(\Xm^\top \Xm + \lambda \Im \big)^{-1} \Xm^\top \Tm.\end{align*}$$ Cross-Validation Now, let us select a model. Even with the regularization, we still need to pick $\lambda$. For polynomial regression, we need to find the degree parameter. When we are mixing multiple algorithms, we still need to know which model to choose. Here, remember that we want a model that has good generalization. The idea is to prepare one dataset (a validation set) by pretending that we cannot see its labels. After choosing a model parameter (or a model) and training it with the training dataset, we test it on the validation data. Comparing the validation errors, we select the one that has the lowest validation error. Finally, we evaluate the model on the testing data. Here follows the K-fold cross-validation that divides the data into K blocks for training, validating and testing. K-fold CV Procedure Feature Selection Another way to get a sparse (possibly better-generalizing) model is to use a small set of the most relevant features. Weight analysis or some other tools can tell us which features are most relevant or irrelevant w.r.t. the training error.
But it is still hard to tell their relevance to the generalization error. Thus, the problem of choosing a minimally relevant set of features is NP-hard even with a perfect estimate of the generalization error. With the inaccurate estimates that we have, it is much harder to find them. Thus, we can simply use cross-validation to select features. We can greedily add (forward selection) or delete (backward selection) the features that decrease the cross-validation error the most. Practice Now, try to write your own 5-fold cross-validation code that follows the procedure above for ridge regression. Try 5 different $\lambda$ values, [0, 0.01, 0.1, 1, 10], for this (an illustrative sketch follows this cell). | # TODO: try to implement your own K-fold CV.
# (This will be a part of next assignment (no solution will be provided.)) | _____no_output_____ | MIT | reading_assignments/5_Note-ML Methodology.ipynb | biqar/Fall-2020-ITCS-8156-MachineLearning |
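An illustrative sketch (not the assignment solution) of the closed-form ridge solution derived above and of a 5-fold cross-validation loop over the suggested $\lambda$ values is given below; the helper names (`ridge_fit`, `kfold_cv_ridge`) and the synthetic toy data are assumptions for illustration only.

import numpy as np

def ridge_fit(X, t, lam):
    # closed-form ridge solution: w = (X^T X + lambda I)^{-1} X^T t
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ t)

def kfold_cv_ridge(X, t, lambdas, k=5, seed=0):
    # split shuffled indices into k folds; train on k-1 folds, validate on the held-out fold
    idx = np.random.default_rng(seed).permutation(len(X))
    folds = np.array_split(idx, k)
    scores = {}
    for lam in lambdas:
        errs = []
        for i in range(k):
            val = folds[i]
            train = np.hstack([folds[j] for j in range(k) if j != i])
            w = ridge_fit(X[train], t[train], lam)
            errs.append(np.mean((X[val] @ w - t[val]) ** 2))
        scores[lam] = np.mean(errs)  # average validation MSE for this lambda
    return scores

# toy example with synthetic data (illustrative only)
rng = np.random.default_rng(1)
X = np.c_[np.ones(60), rng.normal(size=(60, 3))]
t = X @ np.array([1.0, 2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=60)
print(kfold_cv_ridge(X, t, [0, 0.01, 0.1, 1, 10]))

The lowest average validation error would point to the preferred $\lambda$; with real data, the same loop structure can wrap the notebook's own poly_regress helper instead of this toy ridge fit.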
GOOGLE PLAYSTORE ANALYSIS The dataset used in this analysis is taken from [kaggle datasets](https://www.kaggle.com/datasets) In this analysis we took raw data in CSV format and converted it into a dataframe, performed some operations and cleaning of the data, and finally visualized some necessary conclusions obtained from it. Let's import the necessary libraries required for the analysis | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns | _____no_output_____ | MIT | playstore analysis.ipynb | yazalipavan/Playstore_analysis |
Convert the CSV file into a dataframe using pandas | df=pd.read_csv('googleplaystore.csv')
df.head(5) | _____no_output_____ | MIT | playstore analysis.ipynb | yazalipavan/Playstore_analysis |
This is the data we obtained from the CSV file. Let's see some info about this dataframe | df.info() | <class 'pandas.core.frame.DataFrame'>
RangeIndex: 10841 entries, 0 to 10840
Data columns (total 13 columns):
App 10841 non-null object
Category 10841 non-null object
Rating 9367 non-null float64
Reviews 10841 non-null object
Size 10841 non-null object
Installs 10841 non-null object
Type 10840 non-null object
Price 10841 non-null object
Content Rating 10840 non-null object
Genres 10841 non-null object
Last Updated 10841 non-null object
Current Ver 10833 non-null object
Android Ver 10838 non-null object
dtypes: float64(1), object(12)
memory usage: 1.1+ MB
| MIT | playstore analysis.ipynb | yazalipavan/Playstore_analysis |
This dataframe consists of 10841 entries, i.e., information about 10841 apps. It tells us the category to which each app belongs, the rating given by users, the size of the app, the number of reviews given, the count of installs, and some other information. DATA CLEANING Some columns have inappropriate data and data types. These columns need to be cleaned to perform the analysis. SIZE: This column has an inappropriate data type. It needs to be converted into a numeric type after converting every value into MBs. For example, the size of the app is in "string" format. We need to convert it into a numeric value. If the size is "10M", then 'M' is removed to get the numeric value of '10'. If the size is "512k", which depicts the app size in kilobytes, the first 'k' should be removed and the size should be converted to an equivalent in 'megabytes'. | df['Size'] = df['Size'].map(lambda x: x.rstrip('M'))
df['Size'] = df['Size'].map(lambda x: str(round((float(x.rstrip('k'))/1024), 1)) if x[-1]=='k' else x)
df['Size'] = df['Size'].map(lambda x: np.nan if x.startswith('Varies') else x) | _____no_output_____ | MIT | playstore analysis.ipynb | yazalipavan/Playstore_analysis |
Row 10472 has inappropriate data in every column, probably due to an entry mistake, so we are removing that entry from the table | df.drop(10472,inplace=True) | _____no_output_____ | MIT | playstore analysis.ipynb | yazalipavan/Playstore_analysis |
By using pd.to_numeric command we are converting into numeric type | df['Size']=df['Size'].apply(pd.to_numeric) | _____no_output_____ | MIT | playstore analysis.ipynb | yazalipavan/Playstore_analysis |
Installs: The value of Installs is in "string" format. It contains numeric values with commas, which should be removed. Also, the '+' sign should be removed from the end of each string. | df['Installs'] = df['Installs'].map(lambda x: x.rstrip('+'))
df['Installs'] = df['Installs'].map(lambda x: ''.join(x.split(','))) | _____no_output_____ | MIT | playstore analysis.ipynb | yazalipavan/Playstore_analysis |
By using pd.to_numeric command we are converting it into numeric data type | df['Installs']=df['Installs'].apply(pd.to_numeric) | _____no_output_____ | MIT | playstore analysis.ipynb | yazalipavan/Playstore_analysis |
Reviews: The Reviews column is in string format and we need to convert it into a numeric type | df['Reviews']=df['Reviews'].apply(pd.to_numeric) | _____no_output_____ | MIT | playstore analysis.ipynb | yazalipavan/Playstore_analysis |
After cleaning some columns and rows we obtained the required format to perform the analysis | df.head(5) | _____no_output_____ | MIT | playstore analysis.ipynb | yazalipavan/Playstore_analysis |
DATA VISUALIZATION In this we are taking one parameter as a reference and checking the trend of another parameter - whether there is a rise or fall, which categories are more common, what kinds are of more interest, and so on. Basic pie chart to view the distribution of apps across various categories | fig, ax = plt.subplots(figsize=(10, 10), subplot_kw=dict(aspect="equal"))
number_of_apps = df["Category"].value_counts()
labels = number_of_apps.index
sizes = number_of_apps.values
ax.pie(sizes,labeldistance=2,autopct='%1.1f%%')
ax.legend(labels=labels,loc="right",bbox_to_anchor=(0.9, 0, 0.5, 1))
ax.axis("equal")
plt.show() | _____no_output_____ | MIT | playstore analysis.ipynb | yazalipavan/Playstore_analysis |
App count for certain ranges of Ratings In this we are finding the count of apps for each rating range from 0 to 5, i.e., how many apps are rated high and how many are rated low. | bins=pd.cut(df['Rating'],[0.0,1.0,2.0,3.0,4.0,5.0])
rating_df=pd.DataFrame(df.groupby(bins)['App'].count())
rating_df.reset_index(inplace=True)
rating_df
plt.figure(figsize=(12, 6))
axis=sns.barplot('Rating','App',data=rating_df);
axis.set(ylabel= "App count",title='APP COUNT STATISTICS ACCORDING TO RATING'); | _____no_output_____ | MIT | playstore analysis.ipynb | yazalipavan/Playstore_analysis |
We can see that most of the apps have a rating of 4 and above, and very few apps have a rating below 2. Top 5 Apps with highest review count In this we are retrieving the top 5 apps with the most reviews and seeing visually how their review counts compare. | reviews_df=df.sort_values('Reviews').tail(15).drop_duplicates(subset='App')[['App','Reviews','Rating']]
reviews_df
plt.figure(figsize=(12, 6))
axis=sns.lineplot(x="App",y="Reviews",data=reviews_df)
axis.set(title="Top 5 most Reviewed Apps");
sns.set_style('darkgrid') | _____no_output_____ | MIT | playstore analysis.ipynb | yazalipavan/Playstore_analysis |
Facebook has more reviews than any other app in the playstore. Which content type Apps are more in playstore In this we are grouping the apps according to their content rating and visually observing the result | content_df=pd.DataFrame(df.groupby('Content Rating')['App'].count())
content_df.reset_index(inplace=True)
content_df
plt.figure(figsize=(12, 6))
plt.bar(content_df['Content Rating'],content_df['App']);
plt.xlabel('Content Rating')
plt.ylabel('App count')
plt.title('App count for different Contents'); | _____no_output_____ | MIT | playstore analysis.ipynb | yazalipavan/Playstore_analysis |
Most of the apps in the playstore can be used by everyone irrespective of age. Only 3 apps are rated for adults only. --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Free vs Paid Apps Let's see the variations by App type, i.e., paid and free apps | Type_df=df.groupby('Type')[['App']].count()
Type_df['Rating']=df.groupby('Type')['Rating'].mean()
Type_df.reset_index(inplace=True)
Type_df | _____no_output_____ | MIT | playstore analysis.ipynb | yazalipavan/Playstore_analysis |
We found the number of apps that are freely available and their average rating and also number of paid apps and their average rating. | fig, axes = plt.subplots(1, 2, figsize=(18, 6))
axes[0].bar(Type_df.Type,Type_df.App)
axes[0].set_title("Number of free and paid apps")
axes[0].set_ylabel('App count')
axes[1].bar(Type_df.Type,Type_df.Rating)
axes[1].set_title('Average Rating of free and paid apps')
axes[1].set_ylabel('Average Rating'); | _____no_output_____ | MIT | playstore analysis.ipynb | yazalipavan/Playstore_analysis |
Conclusion The average rating of Paid Apps is higher than that of Free apps, so we can say that paid apps are trustworthy and we can invest in them. ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Max Installs In this we are finding the apps with the most installs; as we don't have exact install counts, we got around 20 apps with 1B+ downloads. From these 20 apps we will see some analysis of what types are installed most. | max_installs=df.loc[df['Installs']==df.Installs.max()][['App','Category','Reviews','Rating','Installs','Content Rating']]
max_installs=max_installs.drop_duplicates(subset='App')
max_installs | _____no_output_____ | MIT | playstore analysis.ipynb | yazalipavan/Playstore_analysis |
These are the 20 apps with 1B+ downloads. Which App has the highest rating, and what is the rating trend across these 20 apps? | plt.figure(figsize=(12, 6))
sns.barplot('Rating','App',data=max_installs); | _____no_output_____ | MIT | playstore analysis.ipynb | yazalipavan/Playstore_analysis |
We can see that Google Photos, Instagram and Subway Surfers are the highest-rated apps among those with 1B+ downloads. Though these apps are used by 1B+ users, they still have good ratings. Which content Apps are most Installed We will group the most installed apps according to their content rating and see which content apps are installed most. | content_max_df=pd.DataFrame(max_installs.groupby('Content Rating')['App'].count())
content_max_df.reset_index(inplace=True)
content_max_df
plt.figure(figsize=(12, 6))
axis=sns.barplot('Content Rating','App',data=content_max_df);
axis.set(ylabel= "App count",title='Max Installed APP COUNT STATISTICS ACCORDING TO Content RATING'); | _____no_output_____ | MIT | playstore analysis.ipynb | yazalipavan/Playstore_analysis |
More than 10 apps are of a type that can be used by any age group and about 8 apps are teen-rated apps. Only 1 app is for users aged 10+. Which category Apps are more Installed In this we will group the most installed apps according to their category and see which categories are in high demand | category_max_df=pd.DataFrame(max_installs.groupby('Category')['App'].count())
category_max_df.reset_index(inplace=True)
category_max_df
plt.figure(figsize=(12, 6))
axis=sns.barplot('App','Category',data=category_max_df);
plt.plot(category_max_df.App,category_max_df.Category,'o--r')
axis.set(ylabel= "App count",title='Max Installed APP COUNT STATISTICS ACCORDING TO Category'); | _____no_output_____ | MIT | playstore analysis.ipynb | yazalipavan/Playstore_analysis |
_Lambda School Data Science_ Make explanatory visualizations Today we will reproduce this [example by FiveThirtyEight:](https://fivethirtyeight.com/features/al-gores-new-movie-exposes-the-big-flaw-in-online-movie-ratings/) | from IPython.display import display, Image
url = 'https://fivethirtyeight.com/wp-content/uploads/2017/09/mehtahickey-inconvenient-0830-1.png'
example = Image(url=url, width=400)
display(example) | _____no_output_____ | MIT | module3-make-explanatory-visualizations/LS_DS_223_Make_explanatory_visualizations.ipynb | coding-ss/DS-Unit-1-Sprint-3-Data-Storytelling |
Using this data: https://github.com/fivethirtyeight/data/tree/master/inconvenient-sequel Objectives - add emphasis and annotations to transform visualizations from exploratory to explanatory - remove clutter from visualizations Links - [Strong Titles Are The Biggest Bang for Your Buck](http://stephanieevergreen.com/strong-titles/) - [Remove to improve (the data-ink ratio)](https://www.darkhorseanalytics.com/blog/data-looks-better-naked) - [How to Generate FiveThirtyEight Graphs in Python](https://www.dataquest.io/blog/making-538-plots/) Make prototypes This helps us understand the problem | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
plt.style.use('fivethirtyeight')
fake = pd.Series([38, 3, 2, 1, 2, 4, 6, 5, 5, 33],
index=range(1,11)) # index will start from 0 if not for this
fake.plot.bar(color='C1', width=0.9);
fake2 = pd.Series(
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
2, 2, 2,
3, 3, 3,
4, 4,
5, 5, 5,
6, 6, 6, 6,
7, 7, 7, 7, 7,
8, 8, 8, 8,
9, 9, 9, 9,
10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10])
fake2.value_counts().sort_index().plot.bar(color='C1', width=0.9); | _____no_output_____ | MIT | module3-make-explanatory-visualizations/LS_DS_223_Make_explanatory_visualizations.ipynb | coding-ss/DS-Unit-1-Sprint-3-Data-Storytelling |
Annotate with text | display(example)
plt.style.use('fivethirtyeight')
fake = pd.Series([38, 3, 2, 1, 2, 4, 6, 5, 5, 33],
index=range(1,11)) # index will start from 0 if not for this
fake.plot.bar(color='C1', width=0.9);
# rotate x axis numbers
plt.style.use('fivethirtyeight')
fake = pd.Series([38, 3, 2, 1, 2, 4, 6, 5, 5, 33],
index=range(1,11)) # index will start from 0 if not for this
ax = fake.plot.bar(color='C1', width=0.9)
ax.tick_params(labelrotation=0) #to unrotate or remove the rotation
ax.set(title="'An Inconvenient Sequel: Truth to Power' is divisive");
#or '\'An Inconvenient Sequel: Truth to Power\' is divisive'
plt.style.use('fivethirtyeight')
fake = pd.Series([38, 3, 2, 1, 2, 4, 6, 5, 5, 33],
index=range(1,11)) # index will start from 0 if not for this
ax = fake.plot.bar(color='C1', width=0.9)
ax.tick_params(labelrotation=0)
ax.text(x=-2,y=48,s="'An Inconvenient Sequel: Truth to Power' is divisive",
fontsize=16, fontweight='bold')
ax.text(x=-2,y=45, s='IMDb ratings for the film as of Aug. 29',
fontsize=12)
ax.set(xlabel='Rating',
ylabel='Percent of total votes',
yticks=range(0,50,10));
#(start pt., end pt., increment) | _____no_output_____ | MIT | module3-make-explanatory-visualizations/LS_DS_223_Make_explanatory_visualizations.ipynb | coding-ss/DS-Unit-1-Sprint-3-Data-Storytelling |
Reproduce with real data | df = pd.read_csv('https://raw.githubusercontent.com/fivethirtyeight/data/master/inconvenient-sequel/ratings.csv')
df.shape
df.head()
width,height = df.shape
width*height
pd.options.display.max_columns = 500
df.head()
df.sample(1).T
df.timestamp.describe()
# convert timestamp to date time
df.timestamp = pd.to_datetime(df.timestamp)
df.timestamp.describe()
# Making datetime index of your df
df = df.set_index('timestamp')
df.head()
df['2017-08-09']
# everything from this date
df.category.value_counts() | _____no_output_____ | MIT | module3-make-explanatory-visualizations/LS_DS_223_Make_explanatory_visualizations.ipynb | coding-ss/DS-Unit-1-Sprint-3-Data-Storytelling |
only interested in IMDb users | df.category == 'IMDb users'
# As a filter to select certain rows
df[df.category == 'IMDb users']
lastday = df['2017-08-09']
lastday.head(1)
lastday[lastday.category =='IMDb users'].tail()
lastday[lastday.category =='IMDb users'].respondents.plot();
final = df.tail(1)
#columns = ['1_pct','2_pct','3_pct','4_pct','5_pct','6_pct','7_pct','8_pct','9_pct','10_pct']
#OR
columns = [str(i) + '_pct' for i in range(1,11)]
final[columns]
#OR
#data.index.str.replace('_pct', '')
data = final[columns].T
data
data.plot.bar()
plt.style.use('fivethirtyeight')
ax = data.plot.bar(color='C1', width=0.9)
ax.tick_params(labelrotation=0)
ax.text(x=-2,y=48,s="'An Inconvenient Sequel: Truth to Power' is divisive",
fontsize=16, fontweight='bold')
ax.text(x=-2,y=44, s='IMDb ratings for the film as of Aug. 29',
fontsize=12)
ax.set(xlabel='Rating',
ylabel='Percent of total votes',
yticks=range(0,50,10));
#(start pt., end pt., increment)
# to remove the timestamp texts in the center
# to change the x axis texts
plt.style.use('fivethirtyeight')
ax = data.plot.bar(color='C1', width=0.9, legend=False)
ax.tick_params(labelrotation=0)
ax.text(x=-2,y=48,s="'An Inconvenient Sequel: Truth to Power' is divisive",
fontsize=16, fontweight='bold')
ax.text(x=-2,y=44, s='IMDb ratings for the film as of Aug. 29',
fontsize=12)
ax.set(xlabel='Rating',
ylabel='Percent of total votes',
yticks=range(0,50,10));
data.index = range(1,11)
data
plt.style.use('fivethirtyeight')
ax = data.plot.bar(color='C1', width=0.9, legend=False)
ax.tick_params(labelrotation=0)
ax.text(x=-2,y=48,s="'An Inconvenient Sequel: Truth to Power' is divisive",
fontsize=16, fontweight='bold')
ax.text(x=-2,y=44, s='IMDb ratings for the film as of Aug. 29',
fontsize=12)
ax.set(xlabel='Rating',
ylabel='Percent of total votes',
yticks=range(0,50,10))
plt.xlabel('Rating', fontsize=14); | _____no_output_____ | MIT | module3-make-explanatory-visualizations/LS_DS_223_Make_explanatory_visualizations.ipynb | coding-ss/DS-Unit-1-Sprint-3-Data-Storytelling |
Exam 2 - Gema Castillo García | %load_ext sql
%config SqlMagic.autocommit=True
%sql mysql+pymysql://root:root@127.0.0.1:3306/mysql | _____no_output_____ | MIT | Exam_2/Exam_2_Answers.ipynb | gcg-99/GitExams |
Problem 1: ControlsWrite a Python script that proves that the lines of data in Germplasm.tsv, and LocusGene are in the same sequence, based on the AGI Locus Code (ATxGxxxxxx). (hint: This will help you decide how to load the data into the database) | import pandas as pd
import csv
gp = pd.read_csv('Germplasm.tsv', sep='\t')
matrix2 = gp[gp.columns[0]].to_numpy()
germplasm = matrix2.tolist()
#print(germplasm) ##to see the first column (AGI Locus Codes) of Germplasm.tsv
lg = pd.read_csv('LocusGene.tsv', sep='\t')
matrix2 = lg[lg.columns[0]].to_numpy()
locus = matrix2.tolist()
#print(locus) ##to see the first column (AGI Locus Codes) of LocusGene.tsv
if (germplasm == locus):
print("lines of data are in the same sequence")
else:
print("lines of data are not in the same sequence") | lines of data are in the same sequence
| MIT | Exam_2/Exam_2_Answers.ipynb | gcg-99/GitExams |
**I have only compared the first columns because that is where the AGI Locus Codes are (they are the same in the two tables).** Problem 2: Design and create the database. * It should have two tables - one for each of the two data files * The two tables should be linked in a 1:1 relationship * You may use either sqlMagic or pymysql to build the database | ##creating a database called germplasm
%sql create database germplasm;
##showing the existing databases
%sql show databases;
##selecting the new database to interact with it
%sql use germplasm;
%sql show tables;
##the database is empty (it has not tables as expected)
##showing the structure of the tables I want to add to the germplasm database
germplasm_file = open("Germplasm.tsv", "r")
print(germplasm_file.read())
print()
print()
locus_file = open("LocusGene.tsv", "r")
print(locus_file.read())
germplasm_file.close() ##closing the Germplasm.tsv file
locus_file.close() ##closing the LocusGene.tsv file
##creating a table for Germplasm data
%sql CREATE TABLE Germplasm_table(locus VARCHAR(10) NOT NULL PRIMARY KEY, germplasm VARCHAR(30) NOT NULL, phenotype VARCHAR(1000) NOT NULL, pubmed INTEGER NOT NULL);
%sql DESCRIBE Germplasm_table;
##creating a table for Locus data
%sql CREATE TABLE Locus_table(locus VARCHAR(10) NOT NULL PRIMARY KEY, gene VARCHAR(10) NOT NULL, protein_lenght INTEGER NOT NULL);
%sql DESCRIBE Locus_table;
##showing the created tables
%sql show tables;
##showing all of the data linking the two tables in a 1:1 relationship (it is empty because I have not introduced the data yet)
%sql SELECT Germplasm_table.locus, Germplasm_table.germplasm, Germplasm_table.phenotype, Germplasm_table.pubmed, Locus_table.gene, Locus_table.protein_lenght\
FROM Germplasm_table, Locus_table\
WHERE Germplasm_table.locus = Locus_table.locus; | * mysql+pymysql://root:***@127.0.0.1:3306/mysql
0 rows affected.
| MIT | Exam_2/Exam_2_Answers.ipynb | gcg-99/GitExams |
**- I have designed a database with two tables: Germplasm_table for Germplasm.tsv and Locus_table for LocusGene.tsv****- The primary keys to link the two tables in a 1:1 relationship are in the 'locus' column of each table** Problem 3: Fill the databaseUsing pymysql, create a Python script that reads the data from these files, and fills the database. There are a variety of strategies to accomplish this. I will give all strategies equal credit - do whichever one you are most confident with. | import csv
import re
with open("Germplasm.tsv", "r") as Germplasm_file:
next(Germplasm_file) ##skipping the first row
for line in Germplasm_file:
line = line.rstrip() ##removing blank spaces created by the \n (newline) character at the end of every line
print(line, file=open('Germplasm_wo_header.tsv', 'a'))
Germplasm_woh = open("Germplasm_wo_header.tsv", "r")
import pymysql.cursors
##connecting to the database (db) germplasm
connection = pymysql.connect(host='localhost',
user='root',
password='root',
db='germplasm',
charset='utf8mb4',
cursorclass=pymysql.cursors.DictCursor)
connection.autocommit(True)
try:
with connection.cursor() as cursor:
sql = "INSERT INTO Germplasm_table (locus, germplasm, phenotype, pubmed) VALUES (%s, %s, %s, %s)"
for line in Germplasm_woh.readlines():
field = line.split("\t") ##this splits the lines and inserts each field into a column
fields = (field[0], field[1], field[2], field[3])
cursor.execute(sql, fields)
connection.commit()
finally:
print("inserted")
#connection.close()
%sql SELECT * FROM Germplasm_table;
import csv
import re
with open("LocusGene.tsv", "r") as LocusGene_file:
next(LocusGene_file) ##skipping the first row
for line in LocusGene_file:
line = line.rstrip() ##removing blank spaces created by the \n (newline) character at the end of every line
print(line, file=open('LocusGene_wo_header.tsv', 'a'))
LocusGene_woh = open("LocusGene_wo_header.tsv", "r")
import pymysql.cursors
##connecting to the database (db) germplasm
connection = pymysql.connect(host='localhost',
user='root',
password='root',
db='germplasm',
charset='utf8mb4',
cursorclass=pymysql.cursors.DictCursor)
connection.autocommit(True)
try:
with connection.cursor() as cursor:
sql = "INSERT INTO Locus_table (locus, gene, protein_lenght) VALUES (%s, %s, %s)"
for line in LocusGene_woh.readlines():
field = line.split("\t") ##this splits the lines and inserts each field into a column
fields = (field[0], field[1], field[2])
cursor.execute(sql, fields)
connection.commit()
finally:
print("inserted")
#connection.close()
%sql SELECT * FROM Locus_table; | * mysql+pymysql://root:***@127.0.0.1:3306/mysql
32 rows affected.
| MIT | Exam_2/Exam_2_Answers.ipynb | gcg-99/GitExams |
To do this exercise, I have asked Andrea Álvarez for some help because I did not understand well what you did in the suggested practice to fill databases. **As the 'pubmed' and 'protein_length' columns are for INTEGERS, I have created new TSV files without the header (the first row gave me an error in those columns because of the header).** Problem 4: Create reports, written to a file 1. Create a report that shows the full, joined, content of the two database tables (including a header line) 2. Create a joined report that only includes the Genes SKOR and MAA3 3. Create a report that counts the number of entries for each Chromosome (AT1Gxxxxxx to AT5Gxxxxxx) 4. Create a report that shows the average protein length for the genes on each Chromosome (AT1Gxxxxxx to AT5Gxxxxxx) When creating reports 2 and 3, remember the "Don't Repeat Yourself" rule! All reports should be written to **the same file**. You may name the file anything you wish. | ##creating an empty text file in current directory
report = open('exam2_report.txt', 'x')
import pymysql.cursors
##connecting to the database (db) germplasm
connection = pymysql.connect(host='localhost',
user='root',
password='root',
db='germplasm',
charset='utf8mb4',
cursorclass=pymysql.cursors.DictCursor)
connection.autocommit(True)
print('Problem 4.1. Create a report that shows the full, joined, content of the two database tables (including a header line):', file=open('exam2_report.txt', 'a'))
try:
with connection.cursor() as cursor:
sql = "SELECT 'locus' AS locus, 'germplasm' AS germplasm, 'phenotype' AS phenotype, 'pubmed' AS pubmed, 'gene' AS gene, 'protein_lenght' AS protein_lenght\
UNION ALL SELECT Germplasm_table.locus, Germplasm_table.germplasm, Germplasm_table.phenotype, Germplasm_table.pubmed, Locus_table.gene, Locus_table.protein_lenght\
FROM Germplasm_table, Locus_table\
WHERE Germplasm_table.locus = Locus_table.locus"
cursor.execute(sql)
results = cursor.fetchall()
for result in results:
print(result['locus'],result['germplasm'], result['phenotype'], result['pubmed'], result['gene'], result['protein_lenght'], file=open('exam2_report.txt', 'a'))
finally:
print("Problem 4.1 report written in exam2_report.txt file") | Problem 4.1 report written in exam2_report.txt file
| MIT | Exam_2/Exam_2_Answers.ipynb | gcg-99/GitExams |
**I have omitted the locus column from the Locus_table in 4.1 and 4.2 for not repeating information.** | print('\n\nProblem 4.2. Create a joined report that only includes the Genes SKOR and MAA3:', file=open('exam2_report.txt', 'a'))
try:
with connection.cursor() as cursor:
sql = "SELECT Germplasm_table.locus, Germplasm_table.germplasm, Germplasm_table.phenotype, Germplasm_table.pubmed, Locus_table.gene, Locus_table.protein_lenght\
FROM Germplasm_table, Locus_table\
WHERE Germplasm_table.locus = Locus_table.locus AND (Locus_table.gene = 'SKOR' OR Locus_table.gene = 'MAA3')"
cursor.execute(sql)
results = cursor.fetchall()
for result in results:
print(result['locus'],result['germplasm'], result['phenotype'], result['pubmed'], result['gene'], result['protein_lenght'], file=open('exam2_report.txt', 'a'))
finally:
print("Problem 4.2 report written in exam2_report.txt file")
print('\n\nProblem 4.3. Create a report that counts the number of entries for each Chromosome:', file=open('exam2_report.txt', 'a'))
try:
with connection.cursor() as cursor:
i = 1 ##marks the beginning of the loop (i.e., chromosome 1)
while i < 6:
sql = "SELECT COUNT(*) AS 'Entries for each Chromosome' FROM Germplasm_table WHERE locus REGEXP 'AT"+str(i)+"G'"
cursor.execute(sql)
results = cursor.fetchall()
for result in results:
print("- Chromosome", i, "has", result['Entries for each Chromosome'], "entries.", file=open('exam2_report.txt', 'a'))
i = i +1
finally:
print("Problem 4.3 report written in exam2_report.txt file")
print('\n\nProblem 4.4. Create a report that shows the average protein length for the genes on each Chromosome:', file=open('exam2_report.txt', 'a'))
try:
with connection.cursor() as cursor:
i = 1 ##marks the beginning of the loop (i.e., chromosome 1)
while i < 6:
sql = "SELECT AVG(protein_lenght) AS 'Average protein length for each Chromosome' FROM Locus_table WHERE locus REGEXP 'AT"+str(i)+"G'"
cursor.execute(sql)
results = cursor.fetchall()
for result in results:
print("- Average protein length for chromosome", i, "genes is", result['Average protein length for each Chromosome'], file=open('exam2_report.txt', 'a'))
i = i +1
finally:
print("Problem 4.4 report written in exam2_report.txt file")
##closing the report file with 'Problem 4' answers
report.close() | Problem 4.4 report written in exam2_report.txt file
| MIT | Exam_2/Exam_2_Answers.ipynb | gcg-99/GitExams |
Quick Multi-Processing Tests | import numpy as np
import matplotlib.pyplot as plt
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
import time
import numba
import pandas as pd
import pyspark
from pyspark.sql import SparkSession | _____no_output_____ | Apache-2.0 | Multiprocessing.ipynb | pawarbi/snippets |
Defining a an arbitrary function for testing. The function doesn't mean anything. | def fun(x):
return x * np.sin(10*x) + np.tan(34*x) + np.log(x)
#Calcluate a value for testing
fun(10)
#Plot the function, for testing
x = np.arange(0.1,10,0.5)
plt.plot(x,fun(x)); | _____no_output_____ | Apache-2.0 | Multiprocessing.ipynb | pawarbi/snippets |
Benchmark Without any parallelism, for comparison purposes | %%timeit
n = int(1e7) ## Using a large number to iterate
def f(n):
x = np.random.random(n)
y = (x * np.sin(10*x) + np.tan(34*x) + np.log(x))
return y
f(n) | 652 ms ± 2.79 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
| Apache-2.0 | Multiprocessing.ipynb | pawarbi/snippets |
652 ms without parallel processing ProcessPool Execution [ProcessPoolExecutor](https://docs.python.org/3/library/concurrent.futures.html) executes the processes asynchronously, in parallel, using the number of worker processes assigned. | %%time
with ProcessPoolExecutor(max_workers=4) as executor:
result = executor.map(f, [int(1e7) for i in range(10)]) | Wall time: 312 ms
| Apache-2.0 | Multiprocessing.ipynb | pawarbi/snippets |
Execution time dropped from 652 ms to 312 ms! This can be further optimized by specifying the number of worker processes to use and the chunk size; I will skip that for now (a rough sketch appears at the end of this notebook). ThreadPool Execution Similar to `ProcessPool`, but uses threads instead of separate processes. | %%time
with ThreadPoolExecutor(max_workers=4) as texecute:
result_t = texecute.map(f, [int(1e7) for i in range(10)]) | Wall time: 3.67 s
| Apache-2.0 | Multiprocessing.ipynb | pawarbi/snippets |
Far worse than the benchmark and the `ProcessPool`. I am not entirely sure why, but most likely because the interpreter allows only one thread to run Python bytecode at a time (the GIL), or an I/O bottleneck is being created. Using NUMBA I have used `numba` for JIT compilation for some of my programs for bootstrapping. | %%time
@numba.jit(nopython=True, parallel=True)
def f2(n):
x = np.random.random(n)
y = (x * np.sin(10*x) + np.tan(34*x) + np.log(x))
return y
f2(int(1e7)) | Wall time: 400 ms
| Apache-2.0 | Multiprocessing.ipynb | pawarbi/snippets |
400 ms - so better than the benchmark, but not quite as good as the `ProcessPool` method. Using Spark | spark=( SparkSession.builder.master("local")
.appName("processingtest")
.getOrCreate()
)
from pyspark.sql.types import FloatType
from pyspark.sql.functions import udf
n = int(1e7)
df = pd.DataFrame({"x":np.random.random(n)})
df.head(3)
def f3(x):
return (x * np.sin(10*x) + np.tan(34*x) + np.log(x))
func_udf = udf(lambda x: f3(x), FloatType())
df_spark = spark.createDataFrame(df)
df_spark.withColumn("udf",func_udf("x")) | _____no_output_____ | Apache-2.0 | Multiprocessing.ipynb | pawarbi/snippets |
Inspecting the Spark job shows the execution time as 0.4 s (400 ms), as good as numba and almost as good as `ProcessPool`. Spark would be much more scalable. The only challenge here is that the data needs to be converted to a tabular/dataframe format first. For most business process modeling scenarios that's usually not required and is an added step. | spark.stop() | _____no_output_____ | Apache-2.0 | Multiprocessing.ipynb | pawarbi/snippets |
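Returning to the earlier note about tuning ProcessPoolExecutor: a rough, untuned sketch of what specifying the worker count and chunk size could look like is below. The worker count of os.cpu_count() and chunksize=4 are illustrative assumptions, not benchmarked settings.

import os
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def f(n):
    x = np.random.random(n)
    return x * np.sin(10*x) + np.tan(34*x) + np.log(x)

if __name__ == "__main__":
    # chunksize batches several work items per task, reducing inter-process communication overhead
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as executor:
        results = list(executor.map(f, [int(1e6)] * 20, chunksize=4))
    print(len(results))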
Importing the data | #print(os.listdir('../data'))
df = pd.read_csv('../data/NumberConfirmedOfCases.csv')
#df = df.set_index('Date', append=False)
#df['Date'] = df.apply(lambda x: datetime.strptime(x['Date'], '%d-%m-%Y').date(), axis=1) #convert the date
df = df.groupby('Date')['Cases'].sum().reset_index() #group the data
df['Date'] = pd.to_datetime(df['Date'])
df['date_delta'] = (df['Date'] - df['Date'].min()) / np.timedelta64(1,'D')
df.head()
#print(os.listdir('../data'))
df = pd.read_csv('../data/NumberConfirmedOfCases.csv')
#df = df.set_index('Date', append=False)
#df['Date'] = df.apply(lambda x: datetime.strptime(x['Date'], '%d-%m-%Y').date(), axis=1) #convert the date
df = df.groupby('Date')['Cases'].sum().reset_index() #group the data
df.head()
df.describe()
df.describe(include='O')
df.columns
df.shape
#print('Time period start: {}\nTime period end: {}'.format(df.year.min(),df.year.max())) | _____no_output_____ | MIT | notebooks/Time series analysis.ipynb | AsmaaOmer/CoronavirusCases-in-Sudan |
Visualizing the time series data We are going to use matplotlib to visualise the dataset. | # Time series data: daily confirmed coronavirus cases in Sudan (CSV)
import matplotlib.pyplot as plt
df = pd.read_csv('../data/NumberConfirmedOfCases.csv', parse_dates=['Date'], index_col='Date')
df = df.groupby('Date')['Cases'].sum().reset_index() #group the data
# Draw Plot
def plot_df(df, x, y, title="", xlabel='Date', ylabel='Cases', dpi=100,angle=45):
plt.figure(figsize=(8,4), dpi=dpi)
plt.plot(x, y, color='tab:red')
plt.gca().set(title=title, xlabel=xlabel, ylabel=ylabel)
plt.xticks(rotation=angle)
plt.show()
plot_df(df, x=df.Date, y=df.Cases, title='Dayly infiction')
df['Date'] = pd.to_datetime(df['Date'], format='%Y-%m-%d')
df['Date'].head()
df = df.set_index('Date')
df.index
from pandas import Series
from matplotlib import pyplot
pyplot.figure(figsize=(6,8), dpi= 100)
pyplot.subplot(211)
df.Cases.hist()
pyplot.subplot(212)
df.Cases.plot(kind='kde')
pyplot.show()
from pylab import rcParams
df = pd.read_csv('../data/NumberConfirmedOfCases.csv', parse_dates=['Date'], index_col='Date')
df = df.groupby('Date')['Cases'].sum().reset_index() #group the data
df['Date'] = pd.to_datetime(df['Date'], format='%Y-%m-%d')
df = df.set_index('Date')
rcParams['figure.figsize'] = 8,6
decomposition = sm.tsa.seasonal_decompose(df, model='multiplicative', freq=1)
fig = decomposition.plot()
plt.show()
df = pd.read_csv('../data/NumberConfirmedOfCases.csv', parse_dates=['Date'], index_col='Date')
df = df.groupby('Date')['Cases'].sum().reset_index() #group the data
x = df['Date'].values
y1 = df['Cases'].values
# Plot
fig, ax = plt.subplots(1, 1, figsize=(6,3), dpi= 120)
plt.fill_between(x, y1=y1, y2=-y1, alpha=0.5, linewidth=2, color='seagreen')
plt.ylim(-50, 50)
plt.title('Dayly Infiction (Two Side View)', fontsize=16)
plt.hlines(y=0, xmin=np.min(df.Date), xmax=np.max(df.Date), linewidth=.5)
plt.xticks(rotation=45)
plt.show() | _____no_output_____ | MIT | notebooks/Time series analysis.ipynb | AsmaaOmer/CoronavirusCases-in-Sudan |
Boxplot of Month-wise, Week-wise and Day-wise Distributions You can group the data at different intervals and see how the values are distributed within a given month, week or day, and how they compare over time. The boxplots below make the month-wise, week-wise and day-wise distributions of the case counts evident, which also helps to spot any deviations from the usual pattern. | # Importing the data
df = pd.read_csv('../data/NumberConfirmedOfCases.csv', parse_dates=['Date'], index_col='Date')
df = df.groupby('Date')['Cases'].sum().reset_index() #group the data
df.reset_index(inplace=True)
# Prepare data
#df['year'] = [d.year for d in df.Date]
df['month'] = [d.strftime('%b') for d in df.Date]
df['day']=df['Date'].dt.day
df['week']=df['Date'].dt.week
months = df['month'].unique()
# Plotting
fig, axes = plt.subplots(3,1, figsize=(8,16), dpi= 80)
sns.boxplot(x='month', y='Cases', data=df, ax=axes[0])
sns.boxplot(x='week', y='Cases', data=df,ax=axes[1])
sns.boxplot(x='day', y='Cases', data=df,ax=axes[2])
axes[0].set_title('Month-wise Box Plot', fontsize=18);
axes[1].set_title('Week-wise Box Plot', fontsize=18)
axes[2].set_title('Day-wise Box Plot', fontsize=18)
plt.show() | _____no_output_____ | MIT | notebooks/Time series analysis.ipynb | AsmaaOmer/CoronavirusCases-in-Sudan |
Autocorrelation and partial autocorrelation Autocorrelation measures the relationship between a variable's current value and its past values. Autocorrelation is simply the correlation of a series with its own lags. If a series is significantly autocorrelated, that means the previous values of the series (lags) may be helpful in predicting the current value. Partial autocorrelation conveys similar information, but it captures the pure correlation of a series and its lag, excluding the correlation contributions from the intermediate lags. | from statsmodels.graphics.tsaplots import plot_acf
from statsmodels.graphics.tsaplots import plot_pacf
df = pd.read_csv('../data/NumberConfirmedOfCases.csv')
df = df.groupby('Date')['Cases'].sum().reset_index() #group the data
pyplot.figure(figsize=(6,8), dpi= 100)
pyplot.subplot(211)
plot_acf(df.Cases, ax=pyplot.gca(), lags = len(df.Cases)-1)
pyplot.subplot(212)
plot_pacf(df.Cases, ax=pyplot.gca(), lags = len(df.Cases)-1)
pyplot.show() | _____no_output_____ | MIT | notebooks/Time series analysis.ipynb | AsmaaOmer/CoronavirusCases-in-Sudan |
Lag PlotsA Lag plot is a scatter plot of a time series against a lag of itself. It is normally used to check for autocorrelation. If there is any pattern existing in the series like the one you see below, the series is autocorrelated. If there is no such pattern, the series is likely to be random white noise. | from pandas.plotting import lag_plot
plt.rcParams.update({'ytick.left' : False, 'axes.titlepad':10})
# Import
df = pd.read_csv('../data/NumberConfirmedOfCases.csv')
df = df.groupby('Date')['Cases'].sum().reset_index() #group the data
# Plot
fig, axes = plt.subplots(1, 4, figsize=(10,3), sharex=True, sharey=True, dpi=100)
for i, ax in enumerate(axes.flatten()[:4]):
lag_plot(df.Cases, lag=i+1, ax=ax, c='firebrick')
ax.set_title('Lag ' + str(i+1))
fig.suptitle('Lag Plots of Daily Confirmed Cases', y=1.15)
| _____no_output_____ | MIT | notebooks/Time series analysis.ipynb | AsmaaOmer/CoronavirusCases-in-Sudan |
Estimating the forecastability The more regular and repeatable patterns a time series has, the easier it is to forecast. Since we have a small dataset, we apply Sample Entropy to examine that. Keep in mind that the higher the sample entropy, the more difficult the series is to forecast. | # https://en.wikipedia.org/wiki/Sample_entropy
df = pd.read_csv('../data/NumberConfirmedOfCases.csv')
df = df.groupby('Date')['Cases'].sum().reset_index() #group the data
def SampEn(U, m, r):
"""Compute Sample entropy"""
def _maxdist(x_i, x_j):
return max([abs(ua - va) for ua, va in zip(x_i, x_j)])
def _phi(m):
x = [[U[j] for j in range(i, i + m - 1 + 1)] for i in range(N - m + 1)]
C = [len([1 for j in range(len(x)) if i != j and _maxdist(x[i], x[j]) <= r]) for i in range(len(x))]
return sum(C)
N = len(U)
return -np.log(_phi(m+1) / _phi(m))
print(SampEn(df.Cases, m=2, r=0.2*np.std(df.Cases))) | 0.21622310846963594
| MIT | notebooks/Time series analysis.ipynb | AsmaaOmer/CoronavirusCases-in-Sudan |
Plotting Rolling Statistics We observe that the rolling mean and standard deviation are not constant with respect to time (increasing trend). The time series is hence not stationary. | from statsmodels.tsa.stattools import adfuller
def test_stationarity(timeseries):
#Determining rolling statistics
rolmean = pd.Series(timeseries).rolling(window=12).mean()
rolstd = pd.Series(timeseries).rolling(window=12).std()
#Plot rolling statistics:
orig = plt.plot(timeseries, color='blue',label='Original')
mean = plt.plot(rolmean, color='red', label='Rolling Mean')
std = plt.plot(rolstd, color='black', label = 'Rolling Std')
plt.legend(loc='best')
plt.title('Rolling Mean & Standard Deviation')
plt.show(block=False)
#Perform Dickey-Fuller test:
print ('Results of Dickey-Fuller Test:')
dftest = adfuller(timeseries, autolag='AIC')
dfoutput = pd.Series(dftest[0:4], index=['Test Statistic','p-value','#Lags Used','Number of Observations Used'])
for key,value in dftest[4].items():
dfoutput['Critical Value (%s)'%key] = value
print(dfoutput)
df = pd.read_csv('../data/NumberConfirmedOfCases.csv')
df = df.groupby('Date')['Cases'].sum().reset_index() #group the data
test_stationarity(df['Cases']) | _____no_output_____ | MIT | notebooks/Time series analysis.ipynb | AsmaaOmer/CoronavirusCases-in-Sudan |
The standard deviation and th mean are clearly increasing with time therefore, this is not a stationary series. | from pylab import rcParams
df = pd.read_csv('../data/NumberConfirmedOfCases.csv', parse_dates=['Date'], index_col='Date')
df = df.groupby('Date')['Cases'].sum().reset_index() #group the data
df['Date'] = pd.to_datetime(df['Date'], format='%Y-%m-%d')
df = df.set_index('Date')
ts_log = np.log(df)
plt.plot(ts_log) | _____no_output_____ | MIT | notebooks/Time series analysis.ipynb | AsmaaOmer/CoronavirusCases-in-Sudan |
Remove Trend - Smoothing | n = int(len(df.Cases)/2)
moving_avg = ts_log.rolling(n).mean()
plt.plot(ts_log)
plt.plot(moving_avg, color='red')
ts_log_moving_avg_diff = ts_log.Cases - moving_avg.Cases
ts_log_moving_avg_diff.head(n)
ts_log_moving_avg_diff.dropna(inplace=True)
test_stationarity(ts_log_moving_avg_diff)
expwighted_avg = ts_log.ewm(n).mean()
plt.plot(ts_log)
plt.plot(expwighted_avg, color='red')
ts_log_ewma_diff = ts_log.Cases - expwighted_avg.Cases
test_stationarity(ts_log_ewma_diff)
ts_log_diff = ts_log.Cases - ts_log.Cases.shift()
plt.plot(ts_log_diff)
ts_log_diff.dropna(inplace=True)
test_stationarity(ts_log_diff) | _____no_output_____ | MIT | notebooks/Time series analysis.ipynb | AsmaaOmer/CoronavirusCases-in-Sudan |
Autoregressive Integrated Moving Average (ARIMA) In an ARIMA model there are 3 parameters that are used to help model the major aspects of a time series: seasonality, trend, and noise. These parameters are labeled p, d, and q. Number of AR (Auto-Regressive) terms (p): p is the parameter associated with the auto-regressive aspect of the model, which incorporates past values, i.e., lags of the dependent variable. For instance, if p is 5, the predictors for x(t) will be x(t-1)…x(t-5). Number of Differences (d): d is the parameter associated with the integrated part of the model, which affects the amount of differencing to apply to the time series. Number of MA (Moving Average) terms (q): q is the size of the moving average window of the model, i.e., lagged forecast errors in the prediction equation. For instance, if q is 5, the predictors for x(t) will be e(t-1)…e(t-5), where e(i) is the difference between the moving average at the ith instant and the actual value. | # ARMA example
from statsmodels.tsa.arima_model import ARMA
from random import random
# fit model
model = ARMA(ts_log_diff, order=(2, 1))
model_fit = model.fit(disp=False)
model_fit.summary()
plt.plot(ts_log_diff)
plt.plot(model_fit.fittedvalues, color='red')
plt.title('RSS: %.4f'% np.nansum((model_fit.fittedvalues-ts_log_diff)**2))
ts = df.Cases - df.Cases.shift()
ts.dropna(inplace=True)
pyplot.figure()
pyplot.subplot(211)
plot_acf(ts, ax=pyplot.gca(),lags=n)
pyplot.subplot(212)
plot_pacf(ts, ax=pyplot.gca(),lags=n)
pyplot.show()
#divide into train and validation set
train = df[:int(0.8*(len(df)))]
valid = df[int(0.8*(len(df))):]
#plotting the data
train['Cases'].plot()
valid['Cases'].plot()
#building the model
from pmdarima.arima import auto_arima
model = auto_arima(train, trace=True, error_action='ignore', suppress_warnings=True)
model.fit(train)
forecast = model.predict(n_periods=len(valid))
forecast = pd.DataFrame(forecast,index = valid.index,columns=['Prediction'])
#plot the predictions for validation set
plt.plot(df.Cases, label='Train')
#plt.plot(valid, label='Valid')
plt.plot(forecast, label='Prediction')
plt.show()
from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error, median_absolute_error, mean_squared_log_error
# MAPE is not imported above and is missing from older scikit-learn versions, so define it with NumPy
def mean_absolute_percentage_error(y, pred):
    y, pred = np.asarray(y), np.asarray(pred)
    return np.mean(np.abs((y - pred) / y)) * 100
def evaluate_forecast(y,pred):
results = pd.DataFrame({'r2_score':r2_score(y, pred),
}, index=[0])
results['mean_absolute_error'] = mean_absolute_error(y, pred)
results['median_absolute_error'] = median_absolute_error(y, pred)
results['mse'] = mean_squared_error(y, pred)
results['msle'] = mean_squared_log_error(y, pred)
results['mape'] = mean_absolute_percentage_error(y, pred)
results['rmse'] = np.sqrt(results['mse'])
return results
evaluate_forecast(valid, forecast)
train.head()
train_prophet = pd.DataFrame()
train_prophet['ds'] = train.index
train_prophet['y'] = train.Cases.values
train_prophet.head()
from fbprophet import Prophet
#instantiate Prophet with yearly seasonality and a multiplicative seasonality mode
model = Prophet( yearly_seasonality=True, seasonality_mode = 'multiplicative')
model.fit(train_prophet) #fit the model with your dataframe
# predict 36 periods into the future; 'MS' (month start) is the frequency
future = model.make_future_dataframe(periods = 36, freq = 'MS')
future.tail()
forecast.columns
# now lets make the forecasts
forecast = model.predict(future)
forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()
fig = model.plot(forecast)
#plot the predictions for validation set
plt.plot(valid, label='Valid', color = 'red', linewidth = 2)
plt.show()
model.plot_components(forecast);
y_prophet = pd.DataFrame()
y_prophet['ds'] = df.index
y_prophet['y'] = df.Cases.values
y_prophet = y_prophet.set_index('ds')
forecast_prophet = forecast.set_index('ds')
start_index =5
end_index = 15
evaluate_forecast(y_prophet.y[start_index:end_index], forecast_prophet.yhat[start_index:end_index])
from statsmodels.tsa.arima_model import ARIMA
model = ARIMA(df['Cases'], order=(2, 1, 2))
results_ARIMA = model.fit(disp=-1)
#plt.plot(ts_log_diff)
plt.plot(results_ARIMA.fittedvalues, color='red')
#plt.title('RSS: %.4f'% sum((results_ARIMA.fittedvalues-ts_log_diff)**2))
| _____no_output_____ | MIT | notebooks/Time series analysis.ipynb | AsmaaOmer/CoronavirusCases-in-Sudan |
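To go one step further with the ARIMA(2,1,2) fit above, a short illustrative continuation is sketched below; it relies on the `results_ARIMA` object already fitted with the legacy statsmodels.tsa.arima_model API, and the 14-step horizon is an arbitrary assumption.

# Forecast future case counts from the fitted ARIMA(2,1,2) model above.
# In the legacy statsmodels API, .forecast returns point forecasts, standard errors
# and confidence intervals; the 14-step horizon here is only an illustration.
fc, se, conf = results_ARIMA.forecast(steps=14)
plt.plot(range(len(df)), df['Cases'].values, label='Observed')
plt.plot(range(len(df), len(df) + 14), fc, label='Forecast')
plt.fill_between(range(len(df), len(df) + 14), conf[:, 0], conf[:, 1], alpha=0.2)
plt.legend()
plt.show()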
Bayesian Randomized Benchmarking Demo This is a Bayesian PyMC3 implementation on top of the frequentist interleaved RB from Qiskit Experiments, based on this [WIP tutorial](https://github.com/Qiskit/qiskit-experiments/blob/main/docs/tutorials/rb_example.ipynb) as of July 10, 2021. | import numpy as np
import copy
import qiskit_experiments as qe
import qiskit.circuit.library as circuits
rb = qe.randomized_benchmarking
# for retrieving gate calibration
from datetime import datetime
import qiskit.providers.aer.noise.device as dv
# import the bayesian packages
import pymc3 as pm
import arviz as az
import bayesian_fitter as bf
simulation = True # make your choice here
if simulation:
from qiskit.providers.aer import AerSimulator
from qiskit.test.mock import FakeParis
backend = AerSimulator.from_backend(FakeParis())
else:
from qiskit import IBMQ
IBMQ.load_account()
provider = IBMQ.get_provider(hub='ibm-q')
backend = provider.get_backend('ibmq_lima') # type here hardware backend
import importlib
importlib.reload(bf) | _____no_output_____ | Apache-2.0 | 02-bayesian-rb-example-hierarchical.ipynb | pdc-quantum/qiskit-advocates-bayes-RB |
Running 1-qubit RB | lengths = np.arange(1, 1000, 100)
num_samples = 10
seed = 1010
qubits = [0]
# Run an RB experiment on qubit 0
exp1 = rb.StandardRB(qubits, lengths, num_samples=num_samples, seed=seed)
expdata1 = exp1.run(backend)
# View result data
print(expdata1)
physical_qubits = [0]
nQ = len(qubits)
scale = (2 ** nQ - 1) / 2 ** nQ
interleaved_gate =''
# retrieve from the frequentist model (fm) analysis
# some values,including priors, for the bayesian analysis
perr_fm, popt_fm, epc_est_fm, epc_est_fm_err, experiment_type = bf.retrieve_from_lsf(expdata1)
EPG_dic = expdata1._analysis_results[0]['EPG'][qubits[0]]
# get count data
Y = bf.get_GSP_counts(expdata1._data, len(lengths),range(num_samples))
shots = bf.guess_shots(Y) | _____no_output_____ | Apache-2.0 | 02-bayesian-rb-example-hierarchical.ipynb | pdc-quantum/qiskit-advocates-bayes-RB |
Pooled model | #build model
pooled_model = bf.get_bayesian_model(model_type="pooled",Y=Y,shots=shots,m_gates=lengths,
mu_AB=[popt_fm[0],popt_fm[2]],cov_AB=[perr_fm[0],perr_fm[2]],
alpha_ref=popt_fm[1],
alpha_lower=popt_fm[1]-6*perr_fm[1],
alpha_upper=min(1.-1.E-6,popt_fm[1]+6*perr_fm[1]))
pm.model_to_graphviz(pooled_model)
trace_p = bf.get_trace(pooled_model, target_accept = 0.95)
# backend's recorded EPG
print(rb.RBUtils.get_error_dict_from_backend(backend, qubits))
bf.RB_bayesian_results(pooled_model, trace_p, lengths,
epc_est_fm, epc_est_fm_err, experiment_type, scale,
num_samples, Y, shots, physical_qubits, interleaved_gate, backend,
EPG_dic = EPG_dic) | mean sd hdi_3% hdi_97%
alpha 0.995688 0.000099 0.995500 0.995870
AB[0] 0.476342 0.003614 0.469744 0.483336
AB[1] 0.506909 0.003476 0.500081 0.513033
Model: Frequentist Bayesian
_______________________________________
EPC 2.135e-03 2.156e-03
± sigma ± 1.722e-04 ± 4.950e-05
EPG rz 0.000e+00 0.000e+00
EPG sx 4.322e-04 4.365e-04
EPG x 4.322e-04 4.365e-04
| Apache-2.0 | 02-bayesian-rb-example-hierarchical.ipynb | pdc-quantum/qiskit-advocates-bayes-RB |
Hierarchical model | #build model
original_model = bf.get_bayesian_model(model_type="h_sigma",Y=Y,shots=shots,m_gates=lengths,
mu_AB=[popt_fm[0],popt_fm[2]],cov_AB=[perr_fm[0],perr_fm[2]],
alpha_ref=popt_fm[1],
alpha_lower=popt_fm[1]-6*perr_fm[1],
alpha_upper=min(1.-1.E-6,popt_fm[1]+6*perr_fm[1]),
sigma_theta=0.001,sigma_theta_l=0.0005,sigma_theta_u=0.0015)
pm.model_to_graphviz(original_model)
trace_o = bf.get_trace(original_model, target_accept = 0.95)
# backend's recorded EPG
print(rb.RBUtils.get_error_dict_from_backend(backend, qubits))
bf.RB_bayesian_results(original_model, trace_o, lengths,
epc_est_fm, epc_est_fm_err, experiment_type, scale,
num_samples, Y, shots, physical_qubits, interleaved_gate, backend,
EPG_dic = EPG_dic) | mean sd hdi_3% hdi_97%
alpha 0.995677 0.000103 0.995476 0.995860
AB[0] 0.476004 0.003727 0.469408 0.483239
AB[1] 0.507318 0.003578 0.500497 0.513866
sigma_t 0.001005 0.000290 0.000500 0.001439
GSP[0] 0.981185 0.001334 0.978472 0.983521
GSP[1] 0.814874 0.002270 0.810739 0.819283
GSP[2] 0.706736 0.002638 0.701747 0.711649
GSP[3] 0.636268 0.002425 0.631594 0.640623
GSP[4] 0.591258 0.002131 0.587351 0.595196
GSP[5] 0.561619 0.002015 0.557831 0.565306
GSP[6] 0.543046 0.002158 0.538801 0.546921
GSP[7] 0.530202 0.002393 0.525753 0.534719
GSP[8] 0.522240 0.002650 0.517287 0.527240
GSP[9] 0.516811 0.002883 0.511258 0.522020
Model: Frequentist Bayesian
_______________________________________
EPC 2.135e-03 2.161e-03
± sigma ± 1.722e-04 ± 5.150e-05
EPG rz 0.000e+00 0.000e+00
EPG sx 4.322e-04 4.376e-04
EPG x 4.322e-04 4.376e-04
| Apache-2.0 | 02-bayesian-rb-example-hierarchical.ipynb | pdc-quantum/qiskit-advocates-bayes-RB |
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates
data = sns.load_dataset("iris") | _____no_output_____ | MIT | HW2/HW2.ipynb | DSNortsev/CSE-694-Case-Studies-in-Deep-Learning |
|
**Task 1. Brief description on its value and possible applications.**The IRIS dataset has the following characteristics:* 150 examples of Iris flowers * The first four fields are features that are the characteristics of flower examples. All these fields hold float numbers representing flower measurements. * The last column is the label which represents the Iris species. * Balance class distribution meaning that each category has even amount of instances * Has no missing values One example of possible application is for botanists to find an automated way to categorize each Iris flower they find. For instance, to classify based on photographs, or in our case based on the length and width measurements of their sepals and petals. | data
print(f'CLASS DISTRIBUTION:\n{data.groupby("species").size()}')
print(f'\nSHAPE: {data.shape}')
print(f'\nTOTAL MISSING VALUES:\n{data.isnull().sum()}\n') | CLASS DISTRIBUTION:
species
setosa 50
versicolor 50
virginica 50
dtype: int64
SHAPE: (150, 5)
TOTAL MISSING VALUES:
sepal_length 0
sepal_width 0
petal_length 0
petal_width 0
species 0
dtype: int64
| MIT | HW2/HW2.ipynb | DSNortsev/CSE-694-Case-Studies-in-Deep-Learning |
_________ **Task 2. Summarize and visually report on the Size of this data set including labeling or non-labeled status** For all three species, the respective values of the mean and median of its features are found to be pretty close. This indicates that data is nearly symmetrically distributed with very less presence of outliers. | data.groupby('species').agg(['mean', 'median']) | _____no_output_____ | MIT | HW2/HW2.ipynb | DSNortsev/CSE-694-Case-Studies-in-Deep-Learning |
Standard deviation (or variance) is an indication of how widely the data is spread about the mean. | data.groupby('species').std() | _____no_output_____ | MIT | HW2/HW2.ipynb | DSNortsev/CSE-694-Case-Studies-in-Deep-Learning |
The isolated points for each feature that can be seen in the box-plots below are the outliers in the data. Since these are very few in number, it wouldn't have any significant impact on our analysis. | sns.set(style="ticks")
plt.figure(figsize=(12,10))
plt.subplot(2,2,1)
sns.boxplot(x='species',y='sepal_length',data=data)
plt.subplot(2,2,2)
sns.boxplot(x='species',y='sepal_width',data=data)
plt.subplot(2,2,3)
sns.boxplot(x='species',y='petal_length',data=data)
plt.subplot(2,2,4)
sns.boxplot(x='species',y='petal_width',data=data)
plt.show() | _____no_output_____ | MIT | HW2/HW2.ipynb | DSNortsev/CSE-694-Case-Studies-in-Deep-Learning |
A scatter plot helps to analyze the relationship between two features plotted on the x and y axes. | sns.pairplot(data) | _____no_output_____ | MIT | HW2/HW2.ipynb | DSNortsev/CSE-694-Case-Studies-in-Deep-Learning |
Next, we can make a correlation matrix to see how these features are correlated to each other, using a heatmap from the seaborn library. It can be observed that the petal measurements are highly correlated, while the sepal ones are uncorrelated. We can also see that petal length is highly correlated with sepal length, but not with sepal width. | plt.figure(figsize=(10,11))
sns.heatmap(data.corr(),annot=True, square = True)
plt.plot() | _____no_output_____ | MIT | HW2/HW2.ipynb | DSNortsev/CSE-694-Case-Studies-in-Deep-Learning |
Another way to visualize the data is with a parallel coordinates plot, which represents each row as a line. As can be seen below, petal measurements separate the species better than the sepal ones. | parallel_coordinates(data, "species", color = ['blue', 'red', 'green']); | _____no_output_____ | MIT | HW2/HW2.ipynb | DSNortsev/CSE-694-Case-Studies-in-Deep-Learning |
Now, we can draw a scatter plot of sepal length against sepal width to visualise the iris dataset. We can observe that the blue dots (setosa) are quite clearly separated from the red (versicolor) and green dots (virginica), while separating the red and green dots might be very difficult given only these two features. | labels_names = { 'setosa': 'blue',
'versicolor': 'red',
'virginica': 'green'}
for species, color in labels_names.items():
x = data.loc[data['species'] == species]['sepal_length']
y = data.loc[data['species'] == species]['sepal_width']
plt.scatter(x, y, c=color)
plt.legend(labels_names.keys())
plt.xlabel('sepal_length')
plt.ylabel('sepal_width')
plt.show() | _____no_output_____ | MIT | HW2/HW2.ipynb | DSNortsev/CSE-694-Case-Studies-in-Deep-Learning |
We can also visualise the data on different features such as petal width and petal length. In this case, the decision boundary between blue, green and red dots can be easily determined, which indicates that using all features for training is a good choice. | labels_names = { 'setosa': 'blue',
'versicolor': 'red',
'virginica': 'green'}
for species, color in labels_names.items():
x = data.loc[data['species'] == species]['petal_length']
y = data.loc[data['species'] == species]['petal_width']
plt.scatter(x, y, c=color)
plt.legend(labels_names.keys())
plt.xlabel('petal_length')
plt.ylabel('petal_width')
plt.show() | _____no_output_____ | MIT | HW2/HW2.ipynb | DSNortsev/CSE-694-Case-Studies-in-Deep-Learning |
___________ **3. Propose and perform Deep Learning using this data set.**Report on your implementation as follows:* Justify your selection of techniques and platform* Explain your results and their applicability In our project we are using the Python language. There are two well-known libraries for deep learning, PyTorch and TensorFlow, and each has its own high-level API: for example, Keras is a high-level API for TensorFlow, while fastai is an API for PyTorch. The Iris classification problem is an example of supervised machine learning: the model is trained from examples that contain labels. For our model we plan to use the Keras wrapper for TensorFlow. Deep learning will be performed in the following steps: * Data preprocessing * Model Building * Model Selection In **Data preprocessing**, we need to create data frames for features and labels, normalize the feature data by converting all values into a range between 0 and 1, and convert the species labels to a numerical representation and then to a binary (one-hot) string. Then, the data needs to be split into train and test data sets. **Phase 1: Data Preprocessing** Step 1: Create DataFrames for features and labels | import pandas as pd
from sklearn.preprocessing import LabelBinarizer, LabelEncoder
encoder = LabelBinarizer()
le=LabelEncoder()
seed = 42
data = sns.load_dataset("iris")
# Create X variable with four features
X = data.drop(['species'],axis=1)
# Convert species to int
Y_int = le.fit_transform(data['species'])
# Convert species int to binary representation
Y_binary = encoder.fit_transform(Y_int)
target_names = data['species'].unique()
Y = pd.DataFrame(data=Y_binary, columns=target_names)
print(f'\nNormalized X_test values:\n{X[:5]}')
print(f'\nEncoded Y_test:\n{Y[:5]}') |
Normalized X_test values:
sepal_length sepal_width petal_length petal_width
0 5.1 3.5 1.4 0.2
1 4.9 3.0 1.4 0.2
2 4.7 3.2 1.3 0.2
3 4.6 3.1 1.5 0.2
4 5.0 3.6 1.4 0.2
Encoded Y_test:
setosa versicolor virginica
0 1 0 0
1 1 0 0
2 1 0 0
3 1 0 0
4 1 0 0
| MIT | HW2/HW2.ipynb | DSNortsev/CSE-694-Case-Studies-in-Deep-Learning |
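Since the one-hot encoding will later need to be reversed to report predictions by species name, a quick sanity check can be added here. This is an optional round-trip check (the variable names below are only illustrative):

```python
# Optional round-trip check: one-hot -> integer label -> species name
back_to_int = encoder.inverse_transform(Y_binary[:5])
print(back_to_int)                        # e.g. [0 0 0 0 0]
print(le.inverse_transform(back_to_int))  # e.g. ['setosa' 'setosa' 'setosa' 'setosa' 'setosa']
```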
Step 2: Create training and testing datasets | from sklearn.model_selection import train_test_split
# Split data in train and test with percentage proportion 70%/30%
X_train,X_test,y_train,y_test = train_test_split(X, Y, test_size=0.30,random_state=seed)
print(f'X_train: {X_train.shape}, y_train: {y_train.shape}')
print(f'X_test : {X_test.shape}, y_test : {y_test.shape}') | X_train: (105, 4), y_train: (105, 3)
X_test : (45, 4), y_test : (45, 3)
| MIT | HW2/HW2.ipynb | DSNortsev/CSE-694-Case-Studies-in-Deep-Learning |
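Because the dataset is small, it can also help to stratify the split so that each species keeps the same proportion in the train and test sets. This is an optional variation (not used in the rest of the notebook); it relies on the stratify argument of train_test_split and on the integer labels Y_int defined above:

```python
# Optional: stratified 70/30 split so each species is equally represented in both sets
X_train_s, X_test_s, y_train_s, y_test_s = train_test_split(
    X, Y, test_size=0.30, random_state=seed, stratify=Y_int)

print(y_train_s.sum())  # per-species counts in the stratified training set (roughly 35 each)
```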
Step 3: Normalize the feature data, all values should be in a range from 0 to 1 | import pandas as pd
from sklearn import preprocessing
# Normalize X features, make all values between 0 and 1
X_train = pd.DataFrame(preprocessing.normalize(X_train),
columns=X_train.columns,
index=X_train.index)
X_test = pd.DataFrame(preprocessing.normalize(X_test),
columns=X_test.columns,
index=X_test.index)
print(f'Train sample:\n{X_train.head(4)},\nShape: {X_train.shape}')
print(f'\nTest sample:\n{X_test.head(4)},\nShape: {X_test.shape}') | Train sample:
sepal_length sepal_width petal_length petal_width
81 0.772429 0.337060 0.519634 0.140442
133 0.723660 0.321627 0.585820 0.172300
137 0.698048 0.338117 0.599885 0.196326
75 0.767857 0.349026 0.511905 0.162879,
Shape: (105, 4)
Test sample:
sepal_length sepal_width petal_length petal_width
73 0.736599 0.338111 0.567543 0.144905
18 0.806828 0.537885 0.240633 0.042465
118 0.706006 0.238392 0.632655 0.210885
78 0.733509 0.354530 0.550132 0.183377,
Shape: (45, 4)
| MIT | HW2/HW2.ipynb | DSNortsev/CSE-694-Case-Studies-in-Deep-Learning |
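One caveat: sklearn's preprocessing.normalize rescales each *sample* (row) to unit length rather than scaling each *feature* into the [0, 1] range described above. If true min-max scaling is wanted instead, a minimal sketch would fit a MinMaxScaler on the training split only and apply it to both splits (this is an alternative, not what the rest of the notebook uses):

```python
from sklearn.preprocessing import MinMaxScaler

# Alternative: per-feature min-max scaling into [0, 1], fit on the training set only
# to avoid leaking information from the test set.
scaler = MinMaxScaler()
X_train_mm = pd.DataFrame(scaler.fit_transform(X_train),
                          columns=X_train.columns, index=X_train.index)
X_test_mm = pd.DataFrame(scaler.transform(X_test),
                         columns=X_test.columns, index=X_test.index)
```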
**Phase 2: Model Building** Step 1: Build model IRIS is a classification problem: we need to classify whether an Iris flower is setosa, versicolor or virginica. The softmax activation function is commonly used in the output layer of multi-class classification problems, as it returns the label with the highest probability. The **tf.keras.Sequential** model is a linear stack of layers. Its constructor takes a list of layer instances; in our second model these are one tf.keras.layers.Dense layer with 8 nodes, two layers with 10 nodes, and an output layer with 3 nodes representing our label predictions. The first layer's input_dim parameter corresponds to the number of features in the dataset, which is equal to 4. The **activation** function determines the output of each node in the layer. These non-linearities are important; without them the model would be equivalent to a single layer. There are many tf.keras.activations, such as tanh, sigmoid or relu. In our two models we have decided to use "tanh" and "relu" and compare the performance. The ideal number of hidden layers and neurons depends on the problem and the dataset. Like many aspects of machine learning, picking the best shape of the neural network requires a mixture of knowledge and experimentation. As a rule of thumb, increasing the number of hidden layers and neurons typically creates a more powerful model, which requires more data to train effectively. For our illustration we have used two models, with 3 and 4 layers. Our expectation is that the model with more layers should give a better result. | from keras.models import Sequential
from keras.layers import Dense
def model_with_3_layers():
model = Sequential()
model.add(Dense(27, input_dim=4, activation='relu', name='input_layer'))
model.add(Dense(9, activation='relu', name='layer_1'))
model.add(Dense(3, activation='softmax', name='output_layer'))
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
return model
def model_with_4_layers():
"""build the Keras model callback"""
model = Sequential()
model.add(Dense(8, input_dim=4, activation='tanh', name='layer_1'))
model.add(Dense(10, activation='tanh', name='layer_2'))
model.add(Dense(10, activation='tanh', name='layer_3'))
model.add(Dense(3, activation='softmax', name='output_layer'))
model.compile(loss="categorical_crossentropy",
optimizer="adam",
metrics=['accuracy'])
return model | _____no_output_____ | MIT | HW2/HW2.ipynb | DSNortsev/CSE-694-Case-Studies-in-Deep-Learning |
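To double-check the two architectures (layer shapes and parameter counts), one can optionally print their summaries:

```python
# Optional: inspect layer shapes and parameter counts for both architectures
model_with_3_layers().summary()
model_with_4_layers().summary()
```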
Step 2: Create estimator We can also pass arguments in the construction of the KerasClassifier class that will be passed on to the fit() function internally used to train the neural network. Here, we pass the number of epochs as 200 and batch size as 20 to use when training the model. | from keras.wrappers.scikit_learn import KerasClassifier
estimator = KerasClassifier(
build_fn=model_with_4_layers,
epochs=200, batch_size=20,
verbose=0) | _____no_output_____ | MIT | HW2/HW2.ipynb | DSNortsev/CSE-694-Case-Studies-in-Deep-Learning |
Step 3: Evaluate the Model with k-Fold Cross-Validation Now, the neural network model can be evaluated on the training dataset. scikit-learn has excellent capabilities for evaluating models using a suite of techniques, and the gold standard for evaluating machine learning models is k-fold cross-validation. Since the dataset is quite small, we use 5 folds for cross-validation. | from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
import tensorflow as tf
# Suppress Tensorflow warning
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
estimator = KerasClassifier(
build_fn=model_with_3_layers,
epochs=200, batch_size=20,
verbose=0)
kfold = KFold(n_splits=5, shuffle=True, random_state=seed)
results = cross_val_score(estimator, X_train, y_train, cv=kfold)
print(f'Model Performance:\nmean: {results.mean()*100:.2f}\
\nstd: {results.std()*100:.2f}')
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
estimator = KerasClassifier(
build_fn=model_with_4_layers,
epochs=200, batch_size=20,
verbose=0)
kfold = KFold(n_splits=5, shuffle=True, random_state=seed)
results = cross_val_score(estimator, X_train, y_train, cv=kfold)
print(f'Model Performance:\nmean: {results.mean()*100:.2f}\
\nstd: {results.std()*100:.2f}') | Model Performance:
mean: 98.10
std: 2.33
| MIT | HW2/HW2.ipynb | DSNortsev/CSE-694-Case-Studies-in-Deep-Learning |

Phase 3: Model Selection For our illustration, two models have been used: one with 3 layers and another with 4 layers. We can observe that the accuracies are almost the same, but the loss value is much lower for the model with 4 layers. It can be concluded that adding more layers improves the accuracy and loss, while at the same time requiring more computational power. | md1 = model_with_3_layers()
md1.fit(X_train,
y_train,
epochs=200,
shuffle=True, # shuffle data randomly.
        verbose=0 # silent mode: do not print per-epoch training details
)
# Validate the model with the test dataset
test_error_rate = md1.evaluate(X_test, y_test, verbose=0)
print(f'{md1.metrics_names[1]}: {test_error_rate[1]*100:.2f}')
print(f'{md1.metrics_names[0]}: {test_error_rate[0]*100:.2f}')
md2 = model_with_4_layers()
md2.fit(X_train,
y_train,
epochs=200,
shuffle=True, # shuffle data randomly.
        verbose=0 # silent mode: do not print per-epoch training details
)
# Validate the model with the test dataset
test_error_rate = md2.evaluate(X_test, y_test, verbose=0)
print(f'{md2.metrics_names[1]}: {test_error_rate[1]*100:.2f}')
print(f'{md2.metrics_names[0]}: {test_error_rate[0]*100:.2f}') | accuracy: 95.56
loss: 11.00
| MIT | HW2/HW2.ipynb | DSNortsev/CSE-694-Case-Studies-in-Deep-Learning |
STEP 4: Evaluate model performance on the test data | from sklearn.metrics import confusion_matrix
def evaluate_performace(actual, expected):
"""
    Function accepts two lists with actual and expected labels
"""
flowers = {0:'setosa',
1:'versicolor',
2:'virginica'}
print(f'Flowers in test set: \nSetosa={y_test["setosa"].sum()}\
\nVersicolor={y_test["versicolor"].sum()}\
\nVirginica={y_test["virginica"].sum()}')
for act,exp in zip(actual, expected):
if act != exp:
print(f'ERROR: {flowers[exp]} predicted as {flowers[act]}')
for i,model in enumerate((md1, md2), 1):
print(f'\nEVALUATION OF MODEL {i}')
predicted_targets = model.predict_classes(X_test)
true_targets = encoder.inverse_transform(y_test.values)
evaluate_performace(predicted_targets, true_targets)
# Calculate the confusion matrix using sklearn.metrics
fig, ax =plt.subplots(1,1)
conf_matrix = confusion_matrix(true_targets, predicted_targets)
sns.heatmap(conf_matrix, annot=True, cmap='Blues', xticklabels=target_names,yticklabels=target_names)
print('\n')
|
EVALUATION OF MODEL 1
Flowers in test set:
Setosa=19
Versicolor=13
Virginica=13
ERROR: versicolor predicted as virginica
ERROR: virginica predicted as versicolor
EVALUATION OF MODEL 2
Flowers in test set:
Setosa=19
Versicolor=13
Virginica=13
ERROR: versicolor predicted as virginica
ERROR: virginica predicted as versicolor
| MIT | HW2/HW2.ipynb | DSNortsev/CSE-694-Case-Studies-in-Deep-Learning |
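Beyond the confusion matrices, per-class precision and recall summarize where each model struggles. An optional extra check, using the same predictions as above and sklearn's classification_report:

```python
from sklearn.metrics import classification_report

# Per-class precision/recall/F1 for the 4-layer model on the test set
predicted = md2.predict_classes(X_test)
true = encoder.inverse_transform(y_test.values)
print(classification_report(true, predicted, target_names=target_names))
```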
From the confusion matrix above we can see that the second model with 4 layers outperformed the model with 3 layers, and the prediction was wrong only once, for the versicolor species. ___ **4. Find a publication or report that uses this same data set and compare its methodology and results to what you did** In this last task, we analyze the approach suggested by TensorFlow in the "Custom Training: walkthrough" report [1]. The same deep learning framework has been used. Features and labels are stored in a tf.Tensor structure, whereas in our model all data is stored in a pandas.DataFrame. Label data is converted to a numerical representation, whereas on our side we decided to use a binary (one-hot) representation. The author decided not to normalize the feature data, that is, not to rescale it into the range from 0 to 1; normalization is generally the preferable approach because it allows the model to learn faster. The suggested model uses a Sequential model, which is a linear stack of layers. The stack is built with 4 layers (input and output layers plus two Dense layers with 10 nodes each) and can be simply represented as 4/10/10/3. One of our models, which showed better accuracy and loss, contains 5 layers and can be represented as 4/8/10/10/3. The relu activation function has been chosen for the inner layers; it outputs 0 when the input is negative and returns the input value when it is positive. Both models use a categorical cross-entropy loss (**SparseCategoricalCrossentropy** in the tutorial, and categorical_crossentropy with one-hot labels in our model), which calculates the loss value by taking the model's class probability predictions and the desired labels, and returns the average loss across all examples. To minimize the loss, the tutorial uses the stochastic gradient descent algorithm with a learning rate of 0.01, whereas our model is built with Adam, which is an extension of stochastic gradient descent. Both models are run with almost the same number of epochs. It can be observed that both models return almost the same accuracy and loss. To summarize, both models performed similarly, but in our approach the same result can be achieved by adding an extra inner layer, which helps improve the model but can be more resource-intensive.**References:**[1] - "Custom training: walkthrough", https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/customization/custom_training_walkthrough.ipynbscrollTo=rwxGnsA92emp, Accessed 2018, The TensorFlow Authors | # To convert colab notebook to pdf
!apt-get install texlive texlive-xetex texlive-latex-extra pandoc >/dev/null
!pip install pypandoc >/dev/null
from google.colab import drive
drive.mount('/content/drive')
!cp drive/My\ Drive/Colab\ Notebooks/HW2.ipynb ./
!jupyter nbconvert --to PDF "HW2.ipynb" 2>/dev/null
!cp ./HW2.pdf drive/My\ Drive/Colab\ Notebooks/ | _____no_output_____ | MIT | HW2/HW2.ipynb | DSNortsev/CSE-694-Case-Studies-in-Deep-Learning |
numbers = [1, 2, 3, 4, 5, 6]  # use a descriptive name rather than shadowing the built-in `list`
for element in numbers:
    print(element)
| 1
2
3
4
5
6
| MIT | HolaGitHub.ipynb | andresrivera125/colab-books |
|
Plagiarism Detection, Feature EngineeringIn this project, you will be tasked with building a plagiarism detector that examines an answer text file and performs binary classification; labeling that file as either plagiarized or not, depending on how similar that text file is to a provided, source text. Your first task will be to create some features that can then be used to train a classification model. This task will be broken down into a few discrete steps:* Clean and pre-process the data.* Define features for comparing the similarity of an answer text and a source text, and extract similarity features.* Select "good" features, by analyzing the correlations between different features.* Create train/test `.csv` files that hold the relevant features and class labels for train/test data points.In the _next_ notebook, Notebook 3, you'll use the features and `.csv` files you create in _this_ notebook to train a binary classification model in a SageMaker notebook instance.You'll be defining a few different similarity features, as outlined in [this paper](https://s3.amazonaws.com/video.udacity-data.com/topher/2019/January/5c412841_developing-a-corpus-of-plagiarised-short-answers/developing-a-corpus-of-plagiarised-short-answers.pdf), which should help you build a robust plagiarism detector!To complete this notebook, you'll have to complete all given exercises and answer all the questions in this notebook.> All your tasks will be clearly labeled **EXERCISE** and questions as **QUESTION**.It will be up to you to decide on the features to include in your final training and test data.--- Read in the DataThe cell below will download the necessary, project data and extract the files into the folder `data/`.This data is a slightly modified version of a dataset created by Paul Clough (Information Studies) and Mark Stevenson (Computer Science), at the University of Sheffield. You can read all about the data collection and corpus, at [their university webpage](https://ir.shef.ac.uk/cloughie/resources/plagiarism_corpus.html). > **Citation for data**: Clough, P. and Stevenson, M. Developing A Corpus of Plagiarised Short Answers, Language Resources and Evaluation: Special Issue on Plagiarism and Authorship Analysis, In Press. [Download] | # NOTE:
# you only need to run this cell if you have not yet downloaded the data
# otherwise you may skip this cell or comment it out
#!wget https://s3.amazonaws.com/video.udacity-data.com/topher/2019/January/5c4147f9_data/data.zip
#!unzip data
# import libraries
import pandas as pd
import numpy as np
import os | _____no_output_____ | MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | ubbn/sagemaker-ml-studues |
This plagiarism dataset is made of multiple text files; each of these files has characteristics that are summarized in a `.csv` file named `file_information.csv`, which we can read in using `pandas`. | csv_file = 'data/file_information.csv'
plagiarism_df = pd.read_csv(csv_file)
# print out the first few rows of data info
plagiarism_df.head() | _____no_output_____ | MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | ubbn/sagemaker-ml-studues |
Types of PlagiarismEach text file is associated with one **Task** (task A-E) and one **Category** of plagiarism, which you can see in the above DataFrame. Tasks, A-EEach text file contains an answer to one short question; these questions are labeled as tasks A-E. For example, Task A asks the question: "What is inheritance in object oriented programming?" Categories of plagiarism Each text file has an associated plagiarism label/category:**1. Plagiarized categories: `cut`, `light`, and `heavy`.*** These categories represent different levels of plagiarized answer texts. `cut` answers copy directly from a source text, `light` answers are based on the source text but include some light rephrasing, and `heavy` answers are based on the source text, but *heavily* rephrased (and will likely be the most challenging kind of plagiarism to detect). **2. Non-plagiarized category: `non`.** * `non` indicates that an answer is not plagiarized; the Wikipedia source text is not used to create this answer. **3. Special, source text category: `orig`.*** This is a specific category for the original, Wikipedia source text. We will use these files only for comparison purposes. --- Pre-Process the DataIn the next few cells, you'll be tasked with creating a new DataFrame of desired information about all of the files in the `data/` directory. This will prepare the data for feature extraction and for training a binary, plagiarism classifier. EXERCISE: Convert categorical to numerical dataYou'll notice that the `Category` column in the data, contains string or categorical values, and to prepare these for feature extraction, we'll want to convert these into numerical values. Additionally, our goal is to create a binary classifier and so we'll need a binary class label that indicates whether an answer text is plagiarized (1) or not (0). Complete the below function `numerical_dataframe` that reads in a `file_information.csv` file by name, and returns a *new* DataFrame with a numerical `Category` column and a new `Class` column that labels each answer as plagiarized or not. Your function should return a new DataFrame with the following properties:* 4 columns: `File`, `Task`, `Category`, `Class`. The `File` and `Task` columns can remain unchanged from the original `.csv` file.* Convert all `Category` labels to numerical labels according to the following rules (a higher value indicates a higher degree of plagiarism): * 0 = `non` * 1 = `heavy` * 2 = `light` * 3 = `cut` * -1 = `orig`, this is a special value that indicates an original file.* For the new `Class` column * Any answer text that is not plagiarized (`non`) should have the class label `0`. * Any plagiarized answer texts should have the class label `1`. * And any `orig` texts will have a special label `-1`. Expected outputAfter running your function, you should get a DataFrame with rows that looks like the following: ``` File Task Category Class0 g0pA_taska.txt a 0 01 g0pA_taskb.txt b 3 12 g0pA_taskc.txt c 2 13 g0pA_taskd.txt d 1 14 g0pA_taske.txt e 0 0......99 orig_taske.txt e -1 -1``` | # Read in a csv file and return a transformed dataframe
def numerical_dataframe(csv_file='data/file_information.csv'):
'''Reads in a csv file which is assumed to have `File`, `Category` and `Task` columns.
This function does two things:
1) converts `Category` column values to numerical values
2) Adds a new, numerical `Class` label column.
The `Class` column will label plagiarized answers as 1 and non-plagiarized as 0.
Source texts have a special label, -1.
:param csv_file: The directory for the file_information.csv file
:return: A dataframe with numerical categories and a new `Class` label column'''
orig_df = pd.read_csv(csv_file)
    # map plagiarism categories to numerical labels (higher value = heavier plagiarism)
    category_map = {'non': 0, 'heavy': 1, 'light': 2, 'cut': 3, 'orig': -1}
    new_df = orig_df[['File', 'Task']].copy()  # .copy() avoids a SettingWithCopyWarning
    new_df['Category'] = orig_df['Category'].map(category_map)
    # plagiarized answers (Category > 0) get class 1, 'non' stays 0, source texts keep -1
    new_df['Class'] = [1 if c > 0 else c for c in new_df['Category']]
return new_df
numerical_dataframe().head(100) | _____no_output_____ | MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | ubbn/sagemaker-ml-studues |
Test cellsBelow are a couple of test cells. The first is an informal test where you can check that your code is working as expected by calling your function and printing out the returned result.The **second** cell below is a more rigorous test cell. The goal of a cell like this is to ensure that your code is working as expected, and to form any variables that might be used in _later_ tests/code, in this case, the data frame, `transformed_df`.> The cells in this notebook should be run in chronological order (the order they appear in the notebook). This is especially important for test cells.Often, later cells rely on the functions, imports, or variables defined in earlier cells. For example, some tests rely on previous tests to work.These tests do not test all cases, but they are a great way to check that you are on the right track! | # informal testing, print out the results of a called function
# create new `transformed_df`
transformed_df = numerical_dataframe(csv_file ='data/file_information.csv')
# check work
# check that all categories of plagiarism have a class label = 1
transformed_df.head(20)
# test cell that creates `transformed_df`, if tests are passed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# importing tests
import problem_unittests as tests
# test numerical_dataframe function
tests.test_numerical_df(numerical_dataframe)
# if above test is passed, create NEW `transformed_df`
transformed_df = numerical_dataframe(csv_file ='data/file_information.csv')
# check work
print('\nExample data: ')
transformed_df.head() | Tests Passed!
Example data:
| MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | ubbn/sagemaker-ml-studues |
Text Processing & Splitting DataRecall that the goal of this project is to build a plagiarism classifier. At it's heart, this task is a comparison text; one that looks at a given answer and a source text, compares them and predicts whether an answer has plagiarized from the source. To effectively do this comparison, and train a classifier we'll need to do a few more things: pre-process all of our text data and prepare the text files (in this case, the 95 answer files and 5 original source files) to be easily compared, and split our data into a `train` and `test` set that can be used to train a classifier and evaluate it, respectively. To this end, you've been provided code that adds additional information to your `transformed_df` from above. The next two cells need not be changed; they add two additional columns to the `transformed_df`:1. A `Text` column; this holds all the lowercase text for a `File`, with extraneous punctuation removed.2. A `Datatype` column; this is a string value `train`, `test`, or `orig` that labels a data point as part of our train or test setThe details of how these additional columns are created can be found in the `helpers.py` file in the project directory. You're encouraged to read through that file to see exactly how text is processed and how data is split.Run the cells below to get a `complete_df` that has all the information you need to proceed with plagiarism detection and feature engineering. | """
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import helpers
# create a text column
text_df = helpers.create_text_column(transformed_df)
text_df.head()
# after running the cell above
# check out the processed text for a single file, by row index
row_idx = 0 # feel free to change this index
sample_text = text_df.iloc[0]['Text']
print('Sample processed text:\n\n', sample_text) | Sample processed text:
inheritance is a basic concept of object oriented programming where the basic idea is to create new classes that add extra detail to existing classes this is done by allowing the new classes to reuse the methods and variables of the existing classes and new methods and classes are added to specialise the new class inheritance models the is kind of relationship between entities or objects for example postgraduates and undergraduates are both kinds of student this kind of relationship can be visualised as a tree structure where student would be the more general root node and both postgraduate and undergraduate would be more specialised extensions of the student node or the child nodes in this relationship student would be known as the superclass or parent class whereas postgraduate would be known as the subclass or child class because the postgraduate class extends the student class inheritance can occur on several layers where if visualised would display a larger tree structure for example we could further extend the postgraduate node by adding two extra extended classes to it called msc student and phd student as both these types of student are kinds of postgraduate student this would mean that both the msc student and phd student classes would inherit methods and variables from both the postgraduate and student classes
| MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | ubbn/sagemaker-ml-studues |
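The exact pre-processing lives in `helpers.py`. As a rough illustration only (this is *not* the actual `helpers.create_text_column` implementation), the cleaning amounts to lower-casing the text and stripping punctuation:

```python
import re

def simple_clean(text):
    """Illustrative only: lowercase and keep just word characters and whitespace."""
    text = re.sub(r'[^a-z0-9\s]', ' ', text.lower())
    return re.sub(r'\s+', ' ', text).strip()

print(simple_clean("Inheritance is a BASIC concept, of object-oriented programming!"))
# -> 'inheritance is a basic concept of object oriented programming'
```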
Split data into training and test setsThe next cell will add a `Datatype` column to a given DataFrame to indicate if the record is: * `train` - Training data, for model training.* `test` - Testing data, for model evaluation.* `orig` - The task's original answer from wikipedia. Stratified samplingThe given code uses a helper function which you can view in the `helpers.py` file in the main project directory. This implements [stratified random sampling](https://en.wikipedia.org/wiki/Stratified_sampling) to randomly split data by task & plagiarism amount. Stratified sampling ensures that we get training and test data that is fairly evenly distributed across task & plagiarism combinations. Approximately 26% of the data is held out for testing and 74% of the data is used for training.The function **train_test_dataframe** takes in a DataFrame that it assumes has `Task` and `Category` columns, and, returns a modified frame that indicates which `Datatype` (train, test, or orig) a file falls into. This sampling will change slightly based on a passed in *random_seed*. Due to a small sample size, this stratified random sampling will provide more stable results for a binary plagiarism classifier. Stability here is smaller *variance* in the accuracy of classifier, given a random seed. | random_seed = 1 # can change; set for reproducibility
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import helpers
# create new df with Datatype (train, test, orig) column
# pass in `text_df` from above to create a complete dataframe, with all the information you need
complete_df = helpers.train_test_dataframe(text_df, random_seed=random_seed)
# check results
complete_df.head() | _____no_output_____ | MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | ubbn/sagemaker-ml-studues |
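As a rough sketch of the idea behind the helper (again, *not* the actual `helpers.train_test_dataframe` code), stratified sampling can be pictured as holding out a fixed fraction of answer files within every (Task, Category) group; this sketch assumes pandas >= 1.1 for `GroupBy.sample`:

```python
# Illustrative only: hold out ~26% of answer files within each (Task, Category) group;
# original source texts keep the special 'orig' label.
def sketch_stratified_split(df, test_frac=0.26, seed=1):
    df = df.copy()
    df['Datatype'] = 'orig'
    answers = df[df['Class'] != -1]
    test_idx = (answers.groupby(['Task', 'Category'])
                       .sample(frac=test_frac, random_state=seed).index)
    df.loc[answers.index, 'Datatype'] = 'train'
    df.loc[test_idx, 'Datatype'] = 'test'
    return df

sketch_stratified_split(text_df).groupby('Datatype').size()
```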
Determining PlagiarismNow that you've prepared this data and created a `complete_df` of information, including the text and class associated with each file, you can move on to the task of extracting similarity features that will be useful for plagiarism classification. > Note: The following code exercises, assume that the `complete_df` as it exists now, will **not** have its existing columns modified. The `complete_df` should always include the columns: `['File', 'Task', 'Category', 'Class', 'Text', 'Datatype']`. You can add additional columns, and you can create any new DataFrames you need by copying the parts of the `complete_df` as long as you do not modify the existing values, directly.--- Similarity Features One of the ways we might go about detecting plagiarism, is by computing **similarity features** that measure how similar a given answer text is as compared to the original wikipedia source text (for a specific task, a-e). The similarity features you will use are informed by [this paper on plagiarism detection](https://s3.amazonaws.com/video.udacity-data.com/topher/2019/January/5c412841_developing-a-corpus-of-plagiarised-short-answers/developing-a-corpus-of-plagiarised-short-answers.pdf). > In this paper, researchers created features called **containment** and **longest common subsequence**. Using these features as input, you will train a model to distinguish between plagiarized and not-plagiarized text files. Feature EngineeringLet's talk a bit more about the features we want to include in a plagiarism detection model and how to calculate such features. In the following explanations, I'll refer to a submitted text file as a **Student Answer Text (A)** and the original, wikipedia source file (that we want to compare that answer to) as the **Wikipedia Source Text (S)**. ContainmentYour first task will be to create **containment features**. To understand containment, let's first revisit a definition of [n-grams](https://en.wikipedia.org/wiki/N-gram). An *n-gram* is a sequential word grouping. For example, in a line like "bayes rule gives us a way to combine prior knowledge with new information," a 1-gram is just one word, like "bayes." A 2-gram might be "bayes rule" and a 3-gram might be "combine prior knowledge."> Containment is defined as the **intersection** of the n-gram word count of the Wikipedia Source Text (S) with the n-gram word count of the Student Answer Text (S) *divided* by the n-gram word count of the Student Answer Text.$$ \frac{\sum{count(\text{ngram}_{A}) \cap count(\text{ngram}_{S})}}{\sum{count(\text{ngram}_{A})}} $$If the two texts have no n-grams in common, the containment will be 0, but if _all_ their n-grams intersect then the containment will be 1. Intuitively, you can see how having longer n-gram's in common, might be an indication of cut-and-paste plagiarism. In this project, it will be up to you to decide on the appropriate `n` or several `n`'s to use in your final model. EXERCISE: Create containment featuresGiven the `complete_df` that you've created, you should have all the information you need to compare any Student Answer Text (A) with its appropriate Wikipedia Source Text (S). 
An answer for task A should be compared to the source text for task A, just as answers to tasks B, C, D, and E should be compared to the corresponding original source text.In this exercise, you'll complete the function, `calculate_containment` which calculates containment based upon the following parameters:* A given DataFrame, `df` (which is assumed to be the `complete_df` from above)* An `answer_filename`, such as 'g0pB_taskd.txt' * An n-gram length, `n` Containment calculationThe general steps to complete this function are as follows:1. From *all* of the text files in a given `df`, create an array of n-gram counts; it is suggested that you use a [CountVectorizer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html) for this purpose.2. Get the processed answer and source texts for the given `answer_filename`.3. Calculate the containment between an answer and source text according to the following equation. >$$ \frac{\sum{count(\text{ngram}_{A}) \cap count(\text{ngram}_{S})}}{\sum{count(\text{ngram}_{A})}} $$ 4. Return that containment value.You are encouraged to write any helper functions that you need to complete the function below. | complete_df[complete_df['File'] == 'g0pA_taska.txt'].iloc[0]['Text']
'g0pA_taska.txt'.replace('g0pA','orig')
s = 'g0pA_taska.txt'
'orig' + s[4:]
from sklearn.feature_extraction.text import CountVectorizer
def get_texts(df, filename):
answer = df[df['File'] == filename].iloc[0]['Text']
orig_filename = 'orig' + filename[4:]
orig = df[df['File'] == orig_filename].iloc[0]['Text']
#print(filename)
#print(orig_filename)
return answer, orig
# Calculate the ngram containment for one answer file/source file pair in a df
def calculate_containment(df, n, answer_filename):
'''Calculates the containment between a given answer text and its associated source text.
This function creates a count of ngrams (of a size, n) for each text file in our data.
Then calculates the containment by finding the ngram count for a given answer text,
and its associated source text, and calculating the normalized intersection of those counts.
:param df: A dataframe with columns,
'File', 'Task', 'Category', 'Class', 'Text', and 'Datatype'
:param n: An integer that defines the ngram size
:param answer_filename: A filename for an answer text in the df, ex. 'g0pB_taskd.txt'
:return: A single containment value that represents the similarity
between an answer text and its source text.
'''
a_text, s_text = get_texts(df, answer_filename)
# instantiate an ngram counter
counts = CountVectorizer(analyzer='word', ngram_range=(n,n))
ngrams = counts.fit_transform([a_text, s_text])
ngram_array = ngrams.toarray()
return sum(np.amin(ngram_array,axis=0))/sum(ngram_array[0])
| _____no_output_____ | MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | ubbn/sagemaker-ml-studues |
Test cellsAfter you've implemented the containment function, you can test out its behavior. The cell below iterates through the first few files, and calculates the original category _and_ containment values for a specified n and file.>If you've implemented this correctly, you should see that the non-plagiarized have low or close to 0 containment values and that plagiarized examples have higher containment values, closer to 1.Note what happens when you change the value of n. I recommend applying your code to multiple files and comparing the resultant containment values. You should see that the highest containment values correspond to files with the highest category (`cut`) of plagiarism level. | # select a value for n
n = 1
# indices for first few files
test_indices = range(4)
# iterate through files and calculate containment
category_vals = []
containment_vals = []
for i in test_indices:
# get level of plagiarism for a given file index
category_vals.append(complete_df.loc[i, 'Category'])
# calculate containment for given file and n
filename = complete_df.loc[i, 'File']
print(filename)
c = calculate_containment(complete_df, n, filename)
containment_vals.append(c)
# print out result, does it make sense?
print('Original category values: \n', category_vals)
print()
print(str(n)+'-gram containment values: \n', containment_vals)
ngram_1 = [0.39814814814814814, 1.0, 0.86936936936936937, 0.5935828877005348]
print('Expected values: \n', ngram_1)
assert all(np.isclose(containment_vals, ngram_1, rtol=1e-04)), \
'n=1 calculations are incorrect. Double check the intersection calculation.'
# run this test cell
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# test containment calculation
# params: complete_df from before, and containment function
import problem_unittests as tests
tests.test_containment(complete_df, calculate_containment) | Tests Passed!
| MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | ubbn/sagemaker-ml-studues |
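As one more hand-checkable illustration of the containment formula (independent of the dataframe), consider two tiny sentences; 4 of the 8 answer words also appear in the source, so the 1-gram containment should come out to 0.5:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

a = "this answer copies several words from the source"
s = "the source contains several words"
ngrams = CountVectorizer(analyzer='word', ngram_range=(1, 1)).fit_transform([a, s]).toarray()
# intersection of n-gram counts, normalized by the answer's n-gram count
print(sum(np.amin(ngrams, axis=0)) / sum(ngrams[0]))  # -> 0.5
```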
QUESTION 1: Why can we calculate containment features across *all* data (training & test), prior to splitting the DataFrame for modeling? That is, what about the containment calculation means that the test and training data do not influence each other? **Answer:** Containment is computed independently for each answer/source pair: it uses only that answer text and its corresponding Wikipedia source text, not any of the other files and not the class labels. Since it is a per-file pre-processing step rather than a fitted model, calculating it across all the data does not let the training and test sets influence each other, so we can calculate it for all data at this stage. --- Longest Common SubsequenceContainment is a good way to find overlap in word usage between two documents; it may help identify cases of cut-and-paste as well as paraphrased levels of plagiarism. Since plagiarism is a fairly complex task with varying levels, it's often useful to include other measures of similarity. The paper also discusses a feature called **longest common subsequence**.> The longest common subsequence is the longest string of words (or letters) that are *the same* between the Wikipedia Source Text (S) and the Student Answer Text (A). This value is also normalized by dividing by the total number of words (or letters) in the Student Answer Text. In this exercise, we'll ask you to calculate the longest common subsequence of words between two texts. EXERCISE: Calculate the longest common subsequenceComplete the function `lcs_norm_word`; this should calculate the *longest common subsequence* of words between a Student Answer Text and corresponding Wikipedia Source Text. It may be helpful to think of this in a concrete example. A Longest Common Subsequence (LCS) problem may look as follows:* Given two texts: text A (answer text) of length n, and string S (original source text) of length m. Our goal is to produce their longest common subsequence of words: the longest sequence of words that appear left-to-right in both texts (though the words don't have to be in continuous order).* Consider: * A = "i think pagerank is a link analysis algorithm used by google that uses a system of weights attached to each element of a hyperlinked set of documents" * S = "pagerank is a link analysis algorithm used by the google internet search engine that assigns a numerical weighting to each element of a hyperlinked set of documents"* In this case, we can see that the start of each sentence is fairly similar, having overlap in the sequence of words, "pagerank is a link analysis algorithm used by" before diverging slightly. Then we **continue moving left-to-right along both texts** until we see the next common sequence; in this case it is only one word, "google". Next we find "that" and "a" and finally the same ending "to each element of a hyperlinked set of documents".* Below is a clear visual of how these sequences were found, sequentially, in each text.* Now, those words appear in left-to-right order in each document, sequentially, and even though there are some words in between, we count this as the longest common subsequence between the two texts. * If I count up each word that I found in common I get the value 20. **So, LCS has length 20**. * Next, to normalize this value, divide by the total length of the student answer; in this example that length is only 27. **So, the function `lcs_norm_word` should return the value `20/27` or about `0.7408`.**In this way, LCS is a great indicator of cut-and-paste plagiarism or if someone has referenced the same source text multiple times in an answer.
LCS, dynamic programmingIf you read through the scenario above, you can see that this algorithm depends on looking at two texts and comparing them word by word. You can solve this problem in multiple ways. First, it may be useful to `.split()` each text into lists of comma separated words to compare. Then, you can iterate through each word in the texts and compare them, adding to your value for LCS as you go. The method I recommend for implementing an efficient LCS algorithm is: using a matrix and dynamic programming. **Dynamic programming** is all about breaking a larger problem into a smaller set of subproblems, and building up a complete result without having to repeat any subproblems. This approach assumes that you can split up a large LCS task into a combination of smaller LCS tasks. Let's look at a simple example that compares letters:* A = "ABCD"* S = "BD"We can see right away that the longest subsequence of _letters_ here is 2 (B and D are in sequence in both strings). And we can calculate this by looking at relationships between each letter in the two strings, A and S.Here, I have a matrix with the letters of A on top and the letters of S on the left side:This starts out as a matrix that has as many columns and rows as letters in the strings S and O **+1** additional row and column, filled with zeros on the top and left sides. So, in this case, instead of a 2x4 matrix it is a 3x5.Now, we can fill this matrix up by breaking it into smaller LCS problems. For example, let's first look at the shortest substrings: the starting letter of A and S. We'll first ask, what is the Longest Common Subsequence between these two letters "A" and "B"? **Here, the answer is zero and we fill in the corresponding grid cell with that value.**Then, we ask the next question, what is the LCS between "AB" and "B"?**Here, we have a match, and can fill in the appropriate value 1**.If we continue, we get to a final matrix that looks as follows, with a **2** in the bottom right corner.The final LCS will be that value **2** *normalized* by the number of n-grams in A. So, our normalized value is 2/4 = **0.5**. The matrix rulesOne thing to notice here is that, you can efficiently fill up this matrix one cell at a time. Each grid cell only depends on the values in the grid cells that are directly on top and to the left of it, or on the diagonal/top-left. The rules are as follows:* Start with a matrix that has one extra row and column of zeros.* As you traverse your string: * If there is a match, fill that grid cell with the value to the top-left of that cell *plus* one. So, in our case, when we found a matching B-B, we added +1 to the value in the top-left of the matching cell, 0. * If there is not a match, take the *maximum* value from either directly to the left or the top cell, and carry that value over to the non-match cell.After completely filling the matrix, **the bottom-right cell will hold the non-normalized LCS value**.This matrix treatment can be applied to a set of words instead of letters. Your function should apply this to the words in two texts and return the normalized LCS value. | ss = "aas"
for i in range(1,len(ss)+1):
print(ss[i-1])
d = np.zeros((2, 2))
d[0][0] = 1
d
import re
def clean_text(sentence):
return [re.sub(r'\W+', '', c) for c in sentence.split()]
# Compute the normalized LCS given an answer text and a source text
def lcs_norm_word(answer_text, source_text):
'''Computes the longest common subsequence of words in two texts; returns a normalized value.
:param answer_text: The pre-processed text for an answer text
:param source_text: The pre-processed text for an answer's associated source text
:return: A normalized LCS value'''
answer_words = clean_text(answer_text)
source_words = clean_text(source_text)
    la = len(answer_words)
    ls = len(source_words)
    # dynamic-programming table with one extra row/column of zeros
    table = np.zeros((la+1, ls+1))
    for i in range(1, la+1):
        for j in range(1, ls+1):
            if answer_words[i-1] == source_words[j-1]:
                # match: extend the LCS ending at the previous word of each text
                table[i][j] = table[i-1][j-1] + 1
            else:
                # no match: carry over the best value from the left or top cell
                table[i][j] = max(table[i-1][j], table[i][j-1])
    # the bottom-right cell holds the un-normalized LCS; normalize by the answer length
    return table[la][ls] / la
| _____no_output_____ | MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | ubbn/sagemaker-ml-studues |
Test cellsLet's start by testing out your code on the example given in the initial description.In the below cell, we have specified strings A (answer text) and S (original source text). We know that these texts have 20 words in common and the submitted answer is 27 words long, so the normalized, longest common subsequence should be 20/27. | # Run the test scenario from above
# does your function return the expected value?
A = "i think pagerank is a link analysis algorithm used by google that uses a system of weights attached to each element of a hyperlinked set of documents"
S = "pagerank is a link analysis algorithm used by the google internet search engine that assigns a numerical weighting to each element of a hyperlinked set of documents"
# calculate LCS
lcs = lcs_norm_word(A, S)
print('LCS = ', lcs)
# expected value test
assert lcs==20/27., "Incorrect LCS value, expected about 0.7408, got "+str(lcs)
print('Test passed!') | LCS = 0.7407407407407407
Test passed!
| MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | ubbn/sagemaker-ml-studues |
This next cell runs a more rigorous test. | # run test cell
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# test lcs implementation
# params: complete_df from before, and lcs_norm_word function
tests.test_lcs(complete_df, lcs_norm_word) | Tests Passed!
| MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | ubbn/sagemaker-ml-studues |
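The letter-matrix walkthrough ("ABCD" vs. "BD") can also be checked directly by treating each letter as a separate word:

```python
# The ABCD / BD example from the walkthrough, written as space-separated "words"
print(lcs_norm_word("a b c d", "b d"))  # expected 2/4 = 0.5
```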
Finally, take a look at a few resultant values for `lcs_norm_word`. Just like before, you should see that higher values correspond to higher levels of plagiarism. | # test on your own
test_indices = range(5) # look at first few files
category_vals = []
lcs_norm_vals = []
# iterate through first few docs and calculate LCS
for i in test_indices:
category_vals.append(complete_df.loc[i, 'Category'])
# get texts to compare
answer_text = complete_df.loc[i, 'Text']
task = complete_df.loc[i, 'Task']
# we know that source texts have Class = -1
orig_rows = complete_df[(complete_df['Class'] == -1)]
orig_row = orig_rows[(orig_rows['Task'] == task)]
source_text = orig_row['Text'].values[0]
# calculate lcs
lcs_val = lcs_norm_word(answer_text, source_text)
lcs_norm_vals.append(lcs_val)
# print out result, does it make sense?
print('Original category values: \n', category_vals)
print()
print('Normalized LCS values: \n', lcs_norm_vals) | Original category values:
[0, 3, 2, 1, 0]
Normalized LCS values:
[0.1917808219178082, 0.8207547169811321, 0.8464912280701754, 0.3160621761658031, 0.24257425742574257]
| MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | ubbn/sagemaker-ml-studues |
--- Create All FeaturesNow that you've completed the feature calculation functions, it's time to actually create multiple features and decide on which ones to use in your final model! In the below cells, you're provided two helper functions to help you create multiple features and store those in a DataFrame, `features_df`. Creating multiple containment featuresYour completed `calculate_containment` function will be called in the next cell, which defines the helper function `create_containment_features`. > This function returns a list of containment features, calculated for a given `n` and for *all* files in a df (assumed to the the `complete_df`).For our original files, the containment value is set to a special value, -1.This function gives you the ability to easily create several containment features, of different n-gram lengths, for each of our text files. | """
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# Function returns a list of containment features, calculated for a given n
# Should return a list of length 100 for all files in a complete_df
def create_containment_features(df, n, column_name=None):
containment_values = []
if(column_name==None):
column_name = 'c_'+str(n) # c_1, c_2, .. c_n
# iterates through dataframe rows
for i in df.index:
file = df.loc[i, 'File']
# Computes features using calculate_containment function
if df.loc[i,'Category'] > -1:
c = calculate_containment(df, n, file)
containment_values.append(c)
# Sets value to -1 for original tasks
else:
containment_values.append(-1)
print(str(n)+'-gram containment features created!')
return containment_values
| _____no_output_____ | MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | ubbn/sagemaker-ml-studues |
Creating LCS featuresBelow, your complete `lcs_norm_word` function is used to create a list of LCS features for all the answer files in a given DataFrame (again, this assumes you are passing in the `complete_df`. It assigns a special value for our original, source files, -1. | """
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# Function creates lcs feature and add it to the dataframe
def create_lcs_features(df, column_name='lcs_word'):
lcs_values = []
# iterate through files in dataframe
for i in df.index:
# Computes LCS_norm words feature using function above for answer tasks
if df.loc[i,'Category'] > -1:
# get texts to compare
answer_text = df.loc[i, 'Text']
task = df.loc[i, 'Task']
# we know that source texts have Class = -1
orig_rows = df[(df['Class'] == -1)]
orig_row = orig_rows[(orig_rows['Task'] == task)]
source_text = orig_row['Text'].values[0]
# calculate lcs
lcs = lcs_norm_word(answer_text, source_text)
lcs_values.append(lcs)
# Sets to -1 for original tasks
else:
lcs_values.append(-1)
print('LCS features created!')
return lcs_values
| _____no_output_____ | MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | ubbn/sagemaker-ml-studues |
EXERCISE: Create a features DataFrame by selecting an `ngram_range`The paper suggests calculating the following features: containment *1-gram to 5-gram* and *longest common subsequence*. > In this exercise, you can choose to create even more features, for example from *1-gram to 7-gram* containment features and *longest common subsequence*. You'll want to create at least 6 features to choose from as you think about which to give to your final, classification model. Defining and comparing at least 6 different features allows you to discard any features that seem redundant, and choose to use the best features for your final model!In the below cell **define an n-gram range**; these will be the n's you use to create n-gram containment features. The rest of the feature creation code is provided. | # Define an ngram range
ngram_range = range(1,7)
# The following code may take a minute to run, depending on your ngram_range
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
features_list = []
# Create features in a features_df
all_features = np.zeros((len(ngram_range)+1, len(complete_df)))
# Calculate features for containment for ngrams in range
i=0
for n in ngram_range:
column_name = 'c_'+str(n)
features_list.append(column_name)
# create containment features
all_features[i]=np.squeeze(create_containment_features(complete_df, n))
i+=1
# Calculate features for LCS_Norm Words
features_list.append('lcs_word')
all_features[i]= np.squeeze(create_lcs_features(complete_df))
# create a features dataframe
features_df = pd.DataFrame(np.transpose(all_features), columns=features_list)
# Print all features/columns
print()
print('Features: ', features_list)
print()
# print some results
features_df.head(10) | _____no_output_____ | MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | ubbn/sagemaker-ml-studues |
Correlated FeaturesYou should use feature correlation across the *entire* dataset to determine which features are ***too*** **highly-correlated** with each other to include both features in a single model. For this analysis, you can use the *entire* dataset due to the small sample size we have. All of our features try to measure the similarity between two texts. Since our features are designed to measure similarity, it is expected that these features will be highly-correlated. Many classification models, for example a Naive Bayes classifier, rely on the assumption that features are *not* highly correlated; highly-correlated features may over-inflate the importance of a single feature. So, you'll want to choose your features based on which pairings have the lowest correlation. These correlation values range between 0 and 1; from low to high correlation, and are displayed in a [correlation matrix](https://www.displayr.com/what-is-a-correlation-matrix/), below. | """
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# Create correlation matrix for just Features to determine different models to test
corr_matrix = features_df.corr().abs().round(2)
# display shows all of a dataframe
display(corr_matrix) | _____no_output_____ | MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | ubbn/sagemaker-ml-studues |
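One way to act on this matrix is to list the feature pairs whose correlation exceeds a chosen cutoff, and then keep only one feature from each such pair. A small sketch (the 0.97 threshold is just an illustrative choice):

```python
# List feature pairs that are more correlated than the chosen cutoff
cutoff = 0.97  # illustrative threshold
cols = corr_matrix.columns
highly_correlated = [(cols[i], cols[j], corr_matrix.iloc[i, j])
                     for i in range(len(cols))
                     for j in range(i + 1, len(cols))
                     if corr_matrix.iloc[i, j] > cutoff]
print(highly_correlated)
```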
EXERCISE: Create selected train/test dataComplete the `train_test_data` function below. This function should take in the following parameters:* `complete_df`: A DataFrame that contains all of our processed text data, file info, datatypes, and class labels* `features_df`: A DataFrame of all calculated features, such as containment for ngrams, n= 1-5, and lcs values for each text file listed in the `complete_df` (this was created in the above cells)* `selected_features`: A list of feature column names, ex. `['c_1', 'lcs_word']`, which will be used to select the final features in creating train/test sets of data.It should return two tuples:* `(train_x, train_y)`, selected training features and their corresponding class labels (0/1)* `(test_x, test_y)`, selected training features and their corresponding class labels (0/1)** Note: x and y should be arrays of feature values and numerical class labels, respectively; not DataFrames.**Looking at the above correlation matrix, you should decide on a **cutoff** correlation value, less than 1.0, to determine which sets of features are *too* highly-correlated to be included in the final training and test data. If you cannot find features that are less correlated than some cutoff value, it is suggested that you increase the number of features (longer n-grams) to choose from or use *only one or two* features in your final model to avoid introducing highly-correlated features.Recall that the `complete_df` has a `Datatype` column that indicates whether data should be `train` or `test` data; this should help you split the data appropriately. | # Takes in dataframes and a list of selected features (column names)
# and returns (train_x, train_y), (test_x, test_y)
def train_test_data(complete_df, features_df, selected_features):
'''Gets selected training and test features from given dataframes, and
returns tuples for training and test features and their corresponding class labels.
:param complete_df: A dataframe with all of our processed text data, datatypes, and labels
:param features_df: A dataframe of all computed, similarity features
:param selected_features: An array of selected features that correspond to certain columns in `features_df`
:return: training and test features and labels: (train_x, train_y), (test_x, test_y)'''
# get the training features
train_x = features_df[complete_df['Datatype'] == 'train'][selected_features].to_numpy()
# And training class labels (0 or 1)
train_y = complete_df[complete_df['Datatype'] == 'train']['Class'].to_numpy()
# get the test features and labels
test_x = features_df[complete_df['Datatype'] == 'test'][selected_features].to_numpy()
test_y = complete_df[complete_df['Datatype'] == 'test']['Class'].to_numpy()
return (train_x, train_y), (test_x, test_y)
| _____no_output_____ | MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | ubbn/sagemaker-ml-studues |
Test cells
Below, test out your implementation and create the final train/test data. | #complete_df.loc(list(features_df)[:2])
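# Exploratory scratch work: preview the first two feature columns and the rows in the training split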
[list(features_df)[:2]]
features_df[list(features_df)[:2]]
features_df[complete_df['Datatype'] == 'train'][list(features_df)[:2]].to_numpy()
#list(complete_df[complete_df['Datatype'] == 'train']['Class'])
features_df
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
test_selection = list(features_df)[:2] # first couple columns as a test
# test that the correct train/test data is created
(train_x, train_y), (test_x, test_y) = train_test_data(complete_df, features_df, test_selection)
# params: generated train/test data
tests.test_data_split(train_x, train_y, test_x, test_y) | Tests Passed!
| MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | ubbn/sagemaker-ml-studues |
EXERCISE: Select "good" features
If you passed the test above, you can create your own train/test data, below. Define a list of features you'd like to include in your final model, `selected_features`; this is a list of the feature names you want to include. | # Select your list of features, this should be column names from features_df
# ex. ['c_1', 'lcs_word']
selected_features = ['c_1', 'c_5', 'lcs_word']
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
(train_x, train_y), (test_x, test_y) = train_test_data(complete_df, features_df, selected_features)
# check that division of samples seems correct
# these should add up to 95 (100 - 5 original files)
print('Training size: ', len(train_x))
print('Test size: ', len(test_x))
print()
print('Training df sample: \n', train_x[:10])
from numpy import cov
cov(features_df['c_1'].to_numpy(), features_df['c_2'].to_numpy())
#features_df['c_1'].to_numpy()
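# Note: np.cov above returns the unnormalized covariance matrix; pandas .corr()
# (used below) or np.corrcoef would give the normalized correlation instead.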
# Search the containment features c_1..c_5 for the least-correlated pair
less_corr = 1
less_corr_a = 0
less_corr_b = 0
for i in range(1, 6):
    for j in range(1, 6):
        corr_ij = features_df['c_'+str(i)].corr(features_df['c_'+str(j)])
        if less_corr > corr_ij:
            less_corr = corr_ij
            less_corr_a = i
            less_corr_b = j
print(less_corr)
print(less_corr_a)
print(less_corr_b) | 0.8809022697353123
1
5
| MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | ubbn/sagemaker-ml-studues |
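A small follow-up sketch (my addition, not part of the graded notebook): the loop above only compares the containment features c_1..c_5 with each other, so it can be worth printing the correlations involving `lcs_word` as well before settling on the final selection.

# Check how the chosen containment features relate to the LCS feature
for pair in [('c_1', 'c_5'), ('c_1', 'lcs_word'), ('c_5', 'lcs_word')]:
    print(pair, round(features_df[pair[0]].corr(features_df[pair[1]]), 3))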
Question 2: How did you decide on which features to include in your final model?

**Answer:**
I ran a correlation analysis between each pair of containment features; the results show that c_1 and c_5 are the least-correlated pair, so I chose those two (along with the LCS feature, `lcs_word`).

---

Creating Final Data Files
Now, you are almost ready to move on to training a model in SageMaker!

You'll want to access your train and test data in SageMaker and upload it to S3. In this project, SageMaker will expect the following format for your train/test data:
* Training and test data should be saved in one `.csv` file each, ex. `train.csv` and `test.csv`
* These files should have class labels in the first column and features in the rest of the columns

This format follows the practice outlined in the [SageMaker documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/cdf-training.html), which reads: "Amazon SageMaker requires that a CSV file doesn't have a header record and that the target variable [class label] is in the first column."

EXERCISE: Create csv files
Define a function that takes in x (features) and y (labels) and saves them to one `.csv` file at the path `data_dir/filename`.

It may be useful to use pandas to merge your features and labels into one DataFrame and then convert that into a csv file. You can make sure to get rid of any incomplete rows in a DataFrame by using `dropna`. | fake_x = [ [0.39814815, 0.0001, 0.19178082],
[0.86936937, 0.44954128, 0.84649123],
[0.44086022, 0., 0.22395833] ]
fake_y = [0, 1, 1]
a=np.array(fake_x)
b=np.array(fake_y).reshape(3,1)
np.concatenate((a,b),axis=1)
def make_csv(x, y, filename, data_dir):
'''Merges features and labels and converts them into one csv file with labels in the first column.
:param x: Data features
:param y: Data labels
:param filename: Name of csv file, ex. 'train.csv'
:param data_dir: The directory where files will be saved
'''
# make data dir, if it does not exist
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# your code here
a = np.array(x)
b = np.array(y).reshape(len(y),1)
c = np.concatenate((b,a),axis=1)
np.savetxt(str(data_dir)+'/'+str(filename), c, delimiter=",")
# nothing is returned, but a print statement indicates that the function has run
print('Path created: '+str(data_dir)+'/'+str(filename)) | _____no_output_____ | MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | ubbn/sagemaker-ml-studues |
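A hedged alternative sketch (my addition, not the notebook's required solution): the same file can be written with pandas, which makes the `dropna` suggestion from the instructions easy to apply. It assumes `os` is already imported and keeps the label in the first column, as described above.

import pandas as pd

def make_csv_pandas(x, y, filename, data_dir):
    '''Alternative sketch: merge labels and features with pandas, drop incomplete rows,
    and write a header-less, index-less csv with the label in the first column.'''
    if not os.path.exists(data_dir):
        os.makedirs(data_dir)
    df = pd.concat([pd.DataFrame(y), pd.DataFrame(x)], axis=1).dropna()
    df.to_csv(os.path.join(data_dir, filename), header=False, index=False)
    print('Path created: ' + os.path.join(data_dir, filename))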
Test cells
Test that your code produces the correct format for a `.csv` file, given some text features and labels. | """
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
fake_x = [ [0.39814815, 0.0001, 0.19178082],
[0.86936937, 0.44954128, 0.84649123],
[0.44086022, 0., 0.22395833] ]
fake_y = [0, 1, 1]
make_csv(fake_x, fake_y, filename='to_delete.csv', data_dir='test_csv')
# read in and test dimensions
fake_df = pd.read_csv('test_csv/to_delete.csv', header=None)
# check shape
assert fake_df.shape==(3, 4), \
'The file should have as many rows as data_points and as many columns as features+1 (the extra column holds the class label).'
# check that first column = labels
assert np.all(fake_df.iloc[:,0].values==fake_y), 'First column is not equal to the labels, fake_y.'
print('Tests passed!')
# delete the test csv file, generated above
! rm -rf test_csv | _____no_output_____ | MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | ubbn/sagemaker-ml-studues |
If you've passed the tests above, run the following cell to create `train.csv` and `test.csv` files in a directory that you specify! This will save the data in a local directory. Remember the name of this directory because you will reference it again when uploading this data to S3. | # can change directory, if you want
data_dir = 'plagiarism_data'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
make_csv(train_x, train_y, filename='train.csv', data_dir=data_dir)
make_csv(test_x, test_y, filename='test.csv', data_dir=data_dir) | Path created: plagiarism_data/train.csv
Path created: plagiarism_data/test.csv
| MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | ubbn/sagemaker-ml-studues |
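Since the directory name will be referenced again when uploading to S3, here is a minimal, hedged sketch of that later step using the SageMaker Python SDK; the key prefix is a placeholder I chose, not a value from this notebook.

import sagemaker

session = sagemaker.Session()
bucket = session.default_bucket()        # or your own bucket name
prefix = 'plagiarism-detection-data'     # placeholder key prefix

# Upload the local csv directory to S3 and keep the returned S3 URI for training
input_data = session.upload_data(path=data_dir, bucket=bucket, key_prefix=prefix)
print(input_data)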
Please read
The following notebook is an example of how to use Python to download, pre-process and analyse WaPOR data:
* [1 Bulk download WaPOR data](1_Bulk_download_WaPOR_data.ipynb)
* [2 Preprocess downloaded WaPOR data](2_Preprocess_WaPOR_data.ipynb)
* [3 Analyse WaPOR data](3_Analyse_WaPOR_data.ipynb)

You need the following packages installed to start running the notebook:
* requests
* gdal
* numpy
* pandas
* matplotlib
* pyshp (shapefile)

Check that these packages are installed by running the following code. | import requests
import numpy
import pandas
import matplotlib
import shapefile
import gdal
import osr
import ogr | _____no_output_____ | CC0-1.0 | notebooks_AR/Module1_unit4/0_Start_here.ipynb | LaurenZ-IHE/WAPOROCW |
If the cell above gives no output, the packages are installed correctly. If a package is not installed correctly, the import raises an error, as in the following code. | import notinstalledpackage | _____no_output_____ | CC0-1.0 | notebooks_AR/Module1_unit4/0_Start_here.ipynb | LaurenZ-IHE/WAPOROCW
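As an optional, hedged sketch (my addition, not part of the original course notebook), the same check can be done programmatically so missing packages are reported by name instead of stopping at the first ImportError.

import importlib

required = ['requests', 'gdal', 'numpy', 'pandas', 'matplotlib', 'shapefile', 'osr', 'ogr']
missing = []
for name in required:
    try:
        importlib.import_module(name)   # try to import each required package
    except ImportError:
        missing.append(name)
print('Missing packages:', missing if missing else 'none')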