Tables can be split, rearranged and combined.
df4 = df.copy()
df4
pieces = [df4[6:], df4[3:6], df4[:3]]  # split rows 2+3+3
pieces
df5 = pd.concat(pieces)  # concatenate (rearrange/combine)
df5
df4 + df5  # operation between tables with original index sequence
df0 = df.loc[:, 'Kedai A':'Kedai C']  # slicing and extracting columns
pd.conc...
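The split/recombine idea above can be sketched end-to-end on a small frame (a hypothetical `df` standing in for the tutorial's shop-sales table; values invented):

```python
import pandas as pd
import numpy as np

# Hypothetical 8-row frame standing in for the tutorial's df
df = pd.DataFrame(np.arange(16).reshape(8, 2), columns=['Kedai A', 'Kedai B'])

pieces = [df[6:], df[3:6], df[:3]]   # split into 2+3+3 rows, reordered
df5 = pd.concat(pieces)              # rows keep their original index labels

# Arithmetic aligns on index labels, not row position, so despite the
# shuffled row order every value is simply doubled.
doubled = df + df5
```

The alignment on index labels is why `df4 + df5` works in the tutorial even though `df5`'s rows are in a different order.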
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
*** **_8.3 Plotting Functions_** --- Let us look at some of the simple plotting functions in $Pandas$ (requires the $Matplotlib$ library).
df_add = df.copy()

# Simple auto plotting
%matplotlib inline
df_add.cumsum().plot()

# Reposition the legend
import matplotlib.pyplot as plt
df_add.cumsum().plot()
plt.legend(bbox_to_anchor=[1.3, 1])
In the above example, repositioning the legend requires the `legend` function from the $Matplotlib$ library; therefore, $Matplotlib$ must be explicitly imported.
df_add.cumsum().plot(kind='bar')
plt.legend(bbox_to_anchor=[1.3, 1])
df_add.cumsum().plot(kind='barh', stacked=True)
df_add.cumsum().plot(kind='hist', alpha=0.5)
df_add.cumsum().plot(kind='area', alpha=0.4, stacked=False)
plt.legend(bbox_to_anchor=[1.3, 1])
A 3-dimensional plot can be projected on a canvas but requires the $Axes3D$ toolkit and slightly more complicated settings.
# Plotting a 3D bar plot
from mpl_toolkits.mplot3d import Axes3D
import numpy as np

# Convert the time format into ordinary strings
time_series = pd.Series(df.index.format())

fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(111, projection='3d')

# Plotting the bar graph column by column
for c, z in zip(['r', 'g...
*** **_8.4 Reading And Writing Data To File_** Data in a **_DataFrame_** can be exported to **_csv_** (comma-separated values) and **_Excel_** files. Users can also create a **_DataFrame_** from data in **_csv_** and **_Excel_** files; the data can then be processed.
# Export data to a csv file but separated with <TAB> rather than comma
# (the default separator is the comma)
df.to_csv('Tutorial8/Kedai.txt', sep='\t')

# Export to Excel file
df.to_excel('Tutorial8/Kedai.xlsx', sheet_name='Tarikh', index=True)

# Importing data from csv file (without header)
from_file = pd.rea...
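A round trip of the export/import above can be sketched with a temporary file (hypothetical data; only the tab separator is taken from the original):

```python
import os
import tempfile
import pandas as pd

df = pd.DataFrame({'Kedai A': [1, 2], 'Kedai B': [3, 4]})

# Write tab-separated (the default sep is a comma), then read it back.
path = os.path.join(tempfile.mkdtemp(), 'kedai.txt')
df.to_csv(path, sep='\t')
restored = pd.read_csv(path, sep='\t', index_col=0)
```

Passing `index_col=0` on the way back in recovers the index column that `to_csv` wrote out.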
Germany: LK Aurich (Niedersachsen)
* Homepage of project: https://oscovida.github.io
* Plots are explained at http://oscovida.github.io/plots.html
* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Niedersachsen-LK-Aurich.ipynb)
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview(country="Germany", subregion="LK Aurich", weeks=5);
overview(country="Germany",...
CC-BY-4.0
ipynb/Germany-Niedersachsen-LK-Aurich.ipynb
oscovida/oscovida.github.io
Explore the data in your web browser
- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Niedersachsen-LK-Aurich.ipynb)
- and wait (~1 to 2 minutes)
- Then press SHIFT+RETURN to advance code cell to code cell
- See http://jupyter.org...
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
      f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
Investigation of No-show Appointments Data

Table of Contents
- Introduction
- Data Wrangling
- Exploratory Data Analysis
- Conclusions

Introduction
The data includes information about more than 100,000 Brazilian medical appointments. It records whether the patient showed up for the appointment, as well as some characteristics...
import pandas as pd
import seaborn as sb
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
MIT
No-show-dataset-investigation.ipynb
ilkycakir/Investigate-A-Dataset-Medical-Appt-No-Shows
Data Wrangling
# Load the data and print out a few lines. Perform operations to inspect data
# types and look for instances of missing or possibly errant data.
filename = 'noshowappointments-kagglev2-may-2016.csv'
df = pd.read_csv(filename)
df.head()
df.info()  # no missing values
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 110527 entries, 0 to 110526
Data columns (total 14 columns):
PatientId         110527 non-null float64
AppointmentID     110527 non-null int64
Gender            110527 non-null object
ScheduledDay      110527 non-null object
AppointmentDay    110527 non-null object
Age               ...
The data gives information about the gender and age of the patient, the neighbourhood of the hospital, whether the patient has hypertension, diabetes, or alcoholism, the date and time of the appointment and of scheduling, whether the patient is registered in the scholarship (welfare) program, and whether an SMS reminder was received. When I look at the data ty...
df.describe()
df.isnull().any().sum()  # no missing values
df.duplicated().any()
Data Cleaning A dummy variable named no_showup is created. It takes the value 1 if the patient did not show up, and 0 otherwise. I omitted the PatientId, AppointmentID and No-show columns. There are some rows with an Age value of -1, which does not make much sense, so I dropped these rows. Other than that, the data seems pret...
df['No-show'].unique()
df['no_showup'] = np.where(df['No-show'] == 'Yes', 1, 0)
df.drop(['PatientId', 'AppointmentID', 'No-show'], axis=1, inplace=True)
noshow = df.no_showup == 1
show = df.no_showup == 0
index = df[df.Age == -1].index
df.drop(index, inplace=True)
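The dummy-variable and row-dropping steps can be checked on a toy frame (hypothetical rows mirroring the real columns):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'No-show': ['Yes', 'No', 'No', 'Yes'],
                   'Age':     [25,    40,   -1,   33]})

# 1 when the patient did NOT show up ('Yes' in the original column)
df['no_showup'] = np.where(df['No-show'] == 'Yes', 1, 0)

# Drop the nonsensical Age == -1 rows
df = df.drop(df[df.Age == -1].index)
```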
Exploratory Data Analysis What factors are important in predicting no-show rate?
plt.figure(figsize=(10,6))
df.Age[noshow].plot(kind='hist', alpha=0.5, color='green', bins=20, label='no-show');
df.Age[show].plot(kind='hist', alpha=0.4, color='orange', bins=20, label='show');
plt.legend();
plt.xlabel('Age');
plt.ylabel('Number of Patients');
I started the exploratory data analysis by looking at the relationship between age and no_showup. By looking at the age distributions for patients who showed up and who did not, we cannot say much. There is a spike at around age 0, and the no-show count there is not that high compared to other ages. We can infer that adults ar...
bin_edges = np.arange(0, df.Age.max()+3, 3)
df['age_bins'] = pd.cut(df.Age, bin_edges)
base_color = sb.color_palette()[0]
age_order = df.age_bins.unique().sort_values()
g = sb.FacetGrid(data=df, row='Gender', row_order=['M', 'F'], height=4, aspect=2);
g = g.map(sb.barplot, 'age_bins', 'no_showup', color=base_c...
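The `pd.cut` binning used above can be sketched in isolation (toy ages; the 3-year right-closed bins follow the notebook):

```python
import numpy as np
import pandas as pd

ages = pd.Series([1, 4, 17, 30, 64])
bin_edges = np.arange(0, ages.max() + 3, 3)   # 0, 3, 6, ..., covers the max age
age_bins = pd.cut(ages, bin_edges)            # right-closed: age 1 -> (0, 3]
labels = age_bins.astype(str)
```

Each age lands in exactly one interval, so grouping by `age_bins` gives a per-bin no-show rate.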
The no-show rate is smaller than average for babies (the (0, 3] interval). It then increases with age and reaches a peak at around 15-18 depending on gender. After that point, the no-show rate declines as age increases. So middle-aged and old people are much more careful about their doctor appointments, which is ...
df.groupby('age_bins').size().sort_values().head(8)
df.groupby('Gender').no_showup.mean()
There is not much difference across genders; the no-show rates are close.
order_scholar = [0, 1]
g = sb.FacetGrid(data=df, col='Gender', col_order=['M', 'F'], height=4);
g = g.map(sb.barplot, 'Scholarship', 'no_showup', order=order_scholar, color=base_color, ci=None);
g.axes[0,0].set_ylabel('No-show Rate');
g.axes[0,1].set_ylabel('No-show Rate');
If the patient is in the Brazilian welfare program, then the probability of not showing up for the appointment is larger than for a patient who is not registered in the welfare program. There is no significant difference between males and females.
order_hyper = [0, 1]
g = sb.FacetGrid(data=df, col='Gender', col_order=['M', 'F'], height=4);
g = g.map(sb.barplot, 'Hipertension', 'no_showup', order=order_hyper, color=base_color, ci=None);  # 'Hipertension' is the dataset's own spelling
g.axes[0,0].set_ylabel('No-show Rate');
g.axes[0,1].set_ylabel('No-show Rate');
When the patient has hypertension or diabetes, she would not want to miss doctor appointments. So having a disease to be watched closely incentivizes you to show up for your appointments. Again, being male or female does not make a significant difference in no-show rate.
order_diabetes = [0, 1]
sb.barplot(data=df, x='Diabetes', y='no_showup', hue='Gender', ci=None, order=order_diabetes);
sb.despine();
plt.ylabel('No-show Rate');
plt.legend(loc='lower right');
order_alcol = [0, 1]
sb.barplot(data=df, x='Alcoholism', y='no_showup', hue='Gender', ci=None, order=ord...
The story for alcoholism is a bit different. For a male patient with alcoholism, the probability of not showing up is smaller than for a male without alcoholism. On the other hand, alcoholism makes a female patient's probability of not showing up larger. Here I suspect the number of females havi...
df.groupby(['Gender', 'Alcoholism']).size()
order_handcap = [0, 1, 2, 3, 4]
sb.barplot(data=df, x='Handcap', y='no_showup', hue='Gender', ci=None, order=order_handcap);
sb.despine();
plt.ylabel('No-show Rate');
plt.legend(loc='lower right');
df.groupby(['Handcap', 'Gender']).size()
We cannot see a significant difference across the levels of the Handcap variable. The rate for females at level 4 is 1, but I do not pay attention to this since there are only 2 data points in that group. So being in a different Handcap level does not say much when predicting whether a patient will show up.
plt.figure(figsize=(16,6))
sb.barplot(data=df, x='Neighbourhood', y='no_showup', color=base_color, ci=None);
plt.xticks(rotation=90);
plt.ylabel('No-show Rate');
df.groupby('Neighbourhood').size().sort_values(ascending=True).head(10)
I want to see the no-show rate in different neighborhoods. There is no significant difference across neighborhoods except ILHAS OCEÂNICAS DE TRINDADE, which has only 2 data points in the dataset; such exceptions can occur with so few observations. Lastly, I want to look at how sending SMS to patients to remin...
plt.figure(figsize=(5,5))
sb.barplot(data=df, x='SMS_received', y='no_showup', color=base_color, ci=None);
plt.title('No-show Rate vs SMS received');
plt.ylabel('No-show Rate');
The association between the SMS_received variable and the no-show rate is very counterintuitive. I expect that when a patient receives an SMS reminder, she is more likely to go to the appointment. The graph says the exact opposite: with no SMS the rate is around 16%, whereas with an SMS received it is more than 27%. It...
sb.barplot(data=df, x='SMS_received', y='no_showup', hue='Gender', ci=None);
plt.title('No-show Rate vs SMS received');
plt.ylabel('No-show Rate');
plt.legend(loc='lower right');
Gender does not have a significant impact on the rate, with or without SMS. Below I look at how the no-show rate changes with the time to the appointment day. I convert ScheduledDay and AppointmentDay to datetime. There is no hour information in the AppointmentDay variable; it contains 00:00:00 for all rows, whereas Schedule...
df['ScheduledDay'] = pd.to_datetime(df['ScheduledDay'])
df['AppointmentDay'] = pd.to_datetime(df['AppointmentDay'])
df['time_to_app'] = df['AppointmentDay'] - df['ScheduledDay']
import datetime as dt
rows_to_drop = df[df.time_to_app < dt.timedelta(days=-1)].index
df.drop(rows_to_drop, inplace=True)
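The datetime conversion and the error filter can be sketched on two toy rows (dates invented; only the `timedelta(days=-1)` cutoff comes from the notebook):

```python
import datetime as dt
import pandas as pd

df = pd.DataFrame({
    'ScheduledDay':   ['2016-04-29 10:00:00', '2016-05-02 08:00:00'],
    'AppointmentDay': ['2016-04-29 00:00:00', '2016-04-20 00:00:00'],
})
df['ScheduledDay'] = pd.to_datetime(df['ScheduledDay'])
df['AppointmentDay'] = pd.to_datetime(df['AppointmentDay'])
df['time_to_app'] = df['AppointmentDay'] - df['ScheduledDay']

# Same-day bookings land in (-1 day, 0]; anything below -1 day means the
# appointment predates the scheduling call, i.e. a data error.
df = df.drop(df[df.time_to_app < dt.timedelta(days=-1)].index)
```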
All time_to_app values smaller than -1 day are omitted, since they point to another data error.
time_bins = [dt.timedelta(days=-1, hours=0), dt.timedelta(days=-1, hours=6),
             dt.timedelta(days=-1, hours=12), dt.timedelta(days=-1, hours=15),
             dt.timedelta(days=-1, hours=18), dt.timedelta(days=1),
             dt.timedelta(days=2), dt.timedelta(days=3),
             dt.timedelta(days=7), dt.timedelta(days=15),
             ...
I created bins for the time_to_app variable; they are not equally spaced. I noticed that there is a significant number of patients in the (-1 days, 0 days] bin, so I partitioned it into smaller time bins to see the picture. The number of points in each bin is given above. I group the data by time_bins and look at the no-show rate.
plt.figure(figsize=(9,6))
sb.barplot(data=df, y='time_bins', x='no_showup', hue='SMS_received', ci=None);
plt.xlabel('No-show Rate');
plt.ylabel('Time to Appointment');
When a patient schedules an appointment for the same day, represented by the first 4 rows at the top of the graph above, the no-show rate is much smaller than the average rate of over 20%. If patients schedule an appointment for the same day (meaning they make the schedule several hours before the appointment hour)...
sms_sent = df[(df.AppointmentDay - df.ScheduledDay) >= dt.timedelta(days=2)]
sms_sent.groupby('SMS_received').no_showup.mean()
Test for Embedding, to later move it into a layer
import numpy as np

# Set up a numpy generator for random numbers
random_number_generator = np.random.default_rng()

# First tokenize the protein sequence (or any sequence) in kmers.
def tokenize(protein_seqs, kmer_sz):
    kmers = set()
    # Loop over protein sequences
    for protein_seq in protein_seqs:
        # Loop ...
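The truncated tokenizer collects overlapping k-mers from each sequence; a minimal standalone sketch (the helper name `kmers` is mine, not from the notebook):

```python
def kmers(seq, k):
    """All overlapping substrings of length k, in order."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

tokens = kmers('MKVL', 2)   # ['MK', 'KV', 'VL']
```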
MIT
additional/notebooks/embedding.ipynb
Mees-Molenaar/protein_location
Spectral encoding of categorical features

About a year ago I was working on a regression model which had over a million features. Needless to say, the training was super slow, and the model was overfitting a lot. After investigating this issue, I realized that most of the features were created using 1-hot encoding of ...
import numpy as np
import pandas as pd
np.set_printoptions(linewidth=130)

def normalized_laplacian(A):
    'Compute the normalized Laplacian matrix given the adjacency matrix'
    d = A.sum(axis=0)
    D = np.diag(d)
    L = D - A
    D_rev_sqrt = np.diag(1/np.sqrt(d))
    return D_rev_sqrt @ L @ D_rev_sqrt
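The function can be sanity-checked on a 3-node path graph, whose normalized-Laplacian spectrum is known to be {0, 1, 2} (the example graph is mine, not from the post):

```python
import numpy as np

def normalized_laplacian(A):
    'Compute the normalized Laplacian matrix given the adjacency matrix'
    d = A.sum(axis=0)
    L = np.diag(d) - A
    D_rev_sqrt = np.diag(1 / np.sqrt(d))
    return D_rev_sqrt @ L @ D_rev_sqrt

# Path graph 0 - 1 - 2
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
eigvals = np.sort(np.linalg.eigvalsh(normalized_laplacian(A)))
```

The smallest eigenvalue of a normalized Laplacian is always 0, which is why the analysis below ignores it as uninformative.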
Apache-2.0
spectral-analysis/spectral-encoding-of-categorical-features.ipynb
mlarionov/machine_learning_POC
We will consider an example, where weekdays are similar to each other, but differ a lot from the weekends.
# The adjacency matrix for days of the week
A_dw = np.array([[0,10, 9, 8, 5, 2, 1],
                 [0, 0,10, 9, 5, 2, 1],
                 [0, 0, 0,10, 8, 2, 1],
                 [0, 0, 0, 0,10, 2, 1],
                 [0, 0, 0, 0, 0, 5, 3],
                 [0, 0, 0, 0, 0, 0,10],
                 [0, 0, 0, 0, 0, 0, 0]])
A_dw = A_dw + A_dw.T
A_dw
# The normaliz...
Notice that the eigenvalues are not ordered here. Let's plot the eigenvalues, ignoring the uninformative zero.
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
sns.stripplot(data=sz[1:], jitter=False);
We can see a pretty substantial gap between the first eigenvalue and the rest of the eigenvalues. If this does not give enough model performance, you can include the second eigenvalue, because the gap between it and the higher eigenvalues is also quite substantial. Let's print all eigenvectors:
sv
Look at the second eigenvector. The weekend values have a different size than the weekday values, and Friday is close to zero. This illustrates the transitional role of Friday, which, while a weekday, is also the beginning of the weekend. If we are going to pick the two lowest non-zero eigenvalues, our categorical feature encodi...
# Picking only two eigenvectors
category_vectors = sv[:, [1, 3]]
category_vectors
category_vector_frame = pd.DataFrame(category_vectors,
                                     index=['mon', 'tue', 'wed', 'thu', 'fri', 'sat', 'sun'],
                                     columns=['col1', 'col2']).reset_index()
sns.scatterplot(data=category_vector_frame, x='col1', y...
In the plot above we see that Monday and Tuesday, and also Saturday and Sunday, are clustered close together, while Wednesday, Thursday and Friday are far apart.

Learning the kernel function

In the previous example we assumed that the similarity function is given. Sometimes this is the case, where it can be defined bas...
liq = pd.read_csv('Iowa_Liquor_agg.csv',
                  dtype={'Date': 'str', 'Store Number': 'str', 'Category': 'str',
                         'orders': 'int', 'sales': 'float'},
                  parse_dates=True)
liq.Date = pd.to_datetime(liq.Date)
liq.head()
Since we care about sales, let's encode the day of week using information from the sales column. Let's check the histogram first:
sns.distplot(liq.sales, kde=False);
We see that the distribution is very skewed, so let's try using the log of the sales column instead.
sns.distplot(np.log10(1+liq.sales), kde=False);
This is much better, so we will use the log-transformed sales.
liq["log_sales"] = np.log10(1+liq.sales)
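The `log10(1 + x)` transform maps a skewed scale to roughly even decades while keeping zero sales finite; on toy values (invented for illustration):

```python
import numpy as np

sales = np.array([0.0, 9.0, 99.0, 999.0])   # heavily skewed toy values
log_sales = np.log10(1 + sales)             # the 1+ guards against log10(0)
```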
Here we will follow [this blog](https://amethix.com/entropy-in-machine-learning/) for the computation of the Kullback-Leibler divergence. Also note that, since there are no liquor sales on Sunday, we consider only six days in a week.
from scipy.stats import wasserstein_distance
from numpy import histogram
from scipy.stats import iqr

def dw_data(i):
    return liq[liq.Date.dt.dayofweek == i].log_sales

def wass_from_data(i,j):
    return wasserstein_distance(dw_data(i), dw_data(j)) if i > j else 0.0

distance_matrix = np.fromfunction(np.vectorize...
As we already mentioned, the hyperparameter $\gamma$ has to be tuned. Here we just pick a value that gives a plausible result.
gamma = 100
kernel = np.exp(-gamma * distance_matrix**2)
np.fill_diagonal(kernel, 0)
kernel
norm_lap = normalized_laplacian(kernel)
sz, sv = np.linalg.eig(norm_lap)
sz
sns.stripplot(data=sz[1:], jitter=False);
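The distance-to-similarity step above is a Gaussian (RBF) kernel; on a toy distance matrix (values invented for illustration) it behaves like this:

```python
import numpy as np

distance_matrix = np.array([[0.0, 0.1, 0.8],
                            [0.1, 0.0, 0.7],
                            [0.8, 0.7, 0.0]])

gamma = 10.0
kernel = np.exp(-gamma * distance_matrix**2)  # 1 at distance 0, decays toward 0
np.fill_diagonal(kernel, 0)                   # drop self-loops before the Laplacian
```

Zeroing the diagonal turns the similarity matrix into an adjacency matrix, which is what `normalized_laplacian` expects.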
Ignoring the zero eigenvalue, we can see that there is a bigger gap between the first eigenvalue and the rest of the eigenvalues, even though the values are all in the range between 1 and 1.3. Looking at the eigenvectors,
sv
Ultimately the number of eigenvectors to use is another hyperparameter that should be optimized on a supervised learning task. The Category field is another candidate for spectral analysis, and is probably a better choice since it has more unique values.
len(liq.Category.unique())
unique_categories = liq.Category.unique()

def dw_data_c(i):
    return liq[liq.Category == unique_categories[int(i)]].log_sales

def wass_from_data_c(i,j):
    return wasserstein_distance(dw_data_c(i), dw_data_c(j)) if i > j else 0.0

# WARNING: THIS WILL TAKE A LONG TIME
distance_matrix = np...
We can see that a lot of eigenvalues are grouped around the 1.1 mark. The eigenvalues below that cluster can be used for encoding the Category feature. Please also note that this method is highly sensitive to the selection of the hyperparameter $\gamma$. For illustration, let me pick a higher and a lower gamma.
plot_eigenvalues(500);
plot_eigenvalues(10)
Tuples

In Python, tuples are very similar to lists; however, unlike lists they are *immutable*, meaning they cannot be changed. You would use tuples to represent things that shouldn't be changed, such as days of the week, or dates on a calendar. In this section, we will get a brief overview of the following: 1.) Constr...
# Create a tuple
t = (1,2,3)
# Check len just like a list
len(t)
# Can also mix object types
t = ('one',2)
# Show
t
# Use indexing just like we did in lists
t[0]
# Slicing just like a list
t[-1]
MIT
Python-Programming/Python-3-Bootcamp/00-Python Object and Data Structure Basics/06-Tuples.ipynb
vivekparasharr/Learn-Programming
Basic Tuple MethodsTuples have built-in methods, but not as many as lists do. Let's look at two of them:
# Use .index to enter a value and return the index
t.index('one')
# Use .count to count the number of times a value appears
t.count('one')
ImmutabilityIt can't be stressed enough that tuples are immutable. To drive that point home:
t[0]= 'change'
Because of this immutability, tuples can't grow. Once a tuple is made we can not add to it.
t.append('nope')
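Since `append` fails with an `AttributeError`, the idiomatic way to "grow" a tuple is to build a new one by concatenation (a quick sketch):

```python
t = (1, 2, 3)

try:
    t.append('nope')
except AttributeError:
    pass  # tuples have no append method

t2 = t + ('new',)   # concatenation builds a brand-new tuple
```

Note the trailing comma in `('new',)`: without it, the parentheses would just be grouping and no tuple would be created.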
Multivariate SuSiE and ENLOC model

Aim
This notebook aims to demonstrate a workflow of generating posterior inclusion probabilities (PIPs) from GWAS summary statistics using SuSiE regression and constructing SNP signal clusters from global eQTL analysis data obtained from multivariate SuSiE models.

Methods overview
T...
sos run mvenloc.ipynb merge \
    --cwd output \
    --eqtl-sumstats .. \
    --gwas-sumstats ..

sos run mvenloc.ipynb eqtl \
    --cwd output \
    --sumstats-file .. \
    --ld-region ..

sos run mvenloc.ipynb gwas \
    --cwd output \
    --sumstats-file .. \
    --ld-region ..

sos run mvenloc.ipynb enloc \
    --cwd...
MIT
pipeline/mvenloc.ipynb
seriousbamboo/xqtl-pipeline
Summary
head enloc.enrich.out
head enloc.sig.out
head enloc.snp.out
Command interface
sos run mvenloc.ipynb -h
Implementation
[global]
parameter: cwd = path
parameter: container = ""
Step 0: data formatting

Extract common SNPs between the GWAS summary statistics and the eQTL data
[merger]
# eQTL summary statistics as a list of RData
parameter: eqtl_sumstats = path
# GWAS summary stats in gz format
parameter: gwas_sumstats = path
input: eqtl_sumstats, gwas_sumstats
output: f"{cwd:a}/{eqtl_sumstats:bn}.standardized.gz", f"{cwd:a}/{gwas_sumstats:bn}.standardized.gz"
R: expand = "${ }"
    ### ...
Extract common SNPs between the summary statistics and the LD data
[eqtl_1, gwas_1 (filter LD file and sumstat file)]
parameter: sumstat_file = path
# LD and region information: chr, start, end, LD file
parameter: ld_region = path
input: sumstat_file, for_each = 'ld_region'
output: f"{cwd:a}/{sumstat_file:bn}_{region[0]}_{region[1]}_{region[2]}.z.rds", f"{cwd:a}/{sumstat_file:b...
Step 1: fine-mapping
[eqtl_2, gwas_2 (finemapping)]
# FIXME: RDS file should have included region information
output: f"{_input[0]:nn}.susieR.rds", f"{_input[0]:nn}.susieR_plot.rds"
R:
    susie_results = susieR::susie_rss(z = f_gwas.f$z, R = ld_f2, check_prior = F)
    susieR::susie_plot(susie_results, "PIP")
    susie_results$z = f_gwas.f$...
Step 2: fine-mapping results processing

Construct the eQTL annotation file using eQTL SNP PIPs and credible sets
[eqtl_3 (create signal cluster using CS)]
output: f"{_input[0]:nn}.enloc_annot.gz"
R:
    cs = eqtl[["sets"]][["cs"]][["L1"]]
    o_id = which(var %in% eqtl_id.f$eqtl)
    pip = eqtl$pip[o_id]
    eqtl_annot = cbind(eqtl_id.f, pip) %>% mutate(gene = gene.name, cluster = -1, cluster_pip = 0, total_snps = 0)
    for(snp ...
Export GWAS PIP
[gwas_3 (format PIP into enloc GWAS input)]
output: f"{_input[0]:nn}.enloc_gwas.gz"
R:
    gwas_annot1 = f_gwas.f %>% mutate(pip = susie_results$pip)
    # FIXME: repeat whole process (extracting common snps + fine-mapping) 3 times before the next steps
    gwas_annot_comb = rbind(gwas_annot3, gwas_annot1, gw...
Step 3: Colocalization with FastEnloc
[enloc]
# eQTL summary statistics as a list of RData
# FIXME: to replace later
parameter: eqtl_pip = path
# GWAS summary stats in gz format
parameter: gwas_pip = path
input: eqtl_pip, gwas_pip
output: f"{cwd:a}/{eqtl_pip:bnn}.{gwas_pip:bnn}.xx.gz"
bash:
    fastenloc -eqtl eqtl.annot.txt.gz -gwas gwas.pip.txt.gz ...
Guided Investigation - Anomaly Lookup

__Notebook Version:__ 1.0
__Python Version:__ Python 3.6 (including Python 3.6 - AzureML)
__Required Packages:__ azure 4.0.0, azure-cli-profile 2.1.4
__Platforms Supported:__
- Azure Notebooks Free Compute
- Azure Notebook on DSVM
__Data Source Required:__
- Log Analyt...
# only run once
!pip install --upgrade Azure-Sentinel-Utilities
!pip install azure-cli-core

# User input, saved to the environment store
import os
from SentinelWidgets import WidgetViewHelper
env_dir = %env
helper = WidgetViewHelper()
# Enter Tenant Domain
helper.set_env(env_dir, 'tenant_domain')
# Enter Azure Subscrip...
MIT
Notebooks/Guided Investigation - Anomaly Lookup.ipynb
CrisRomeo/Azure-Sentinel
2. Looking up anomaly entities
# Select a workspace
selected_workspace = WidgetViewHelper.select_log_analytics_workspace(la)
display(selected_workspace)

import ipywidgets as widgets
workspace_id = la.get_workspace_id(selected_workspace.value)
# DateTime format: 2019-07-15T07:05:20.000
q_timestamp = widgets.Text(value='2019-09-15', description='DateTim...
4 - Train models and make predictions

Motivation
- The **`tf.keras`** API offers built-in functions for training, validation and prediction.
- Those functions are easy to use and enable you to train any ML model.
- They also give you a high level of customizability.

Objectives
- Understand the common training workflow in Tens...
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import matplotlib.pyplot as plt

tf.__version__
Apache-2.0
4 - Train models and make predictions.ipynb
oyiakoumis/tensorflow2-course
Table of contents:
* [Overview](overview)
* [Part 1: Setting an optimizer, a loss function, and metrics](part-1)
* [Part 2: Training models and make predictions](part-2)
* [Part 3: Using callbacks](part-3)
* [Part 4: Exercise](part-4)
* [Summary](summary)
* [Where to go next](next)

Overview
- Model training and evalua...
# Load the MNIST dataset
train, test = tf.keras.datasets.mnist.load_data()

# Overview of the dataset:
images, labels = train
print(type(images), type(labels))
print(images.shape, labels.shape)

# First 9 images of the training set:
plt.figure(figsize=(3,3))
for i in range(9):
    plt.subplot(3,3,i+1)
    plt.xticks([])
    ...
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
flatten (Flatten)            (None, 784)               0
____________________________________...
CLIP GradCAM Colab

This Colab notebook uses [GradCAM](https://arxiv.org/abs/1610.02391) on OpenAI's [CLIP](https://openai.com/blog/clip/) model to produce a heatmap highlighting which regions in an image activate the most for a given caption.

**Note:** Currently this only works with the ResNet variants of CLIP. ViT support c...
#@title Install dependencies
#@markdown Please execute this cell by pressing the _Play_ button
#@markdown on the left.
#@markdown **Note**: This installs the software on the Colab
#@markdown notebook in the cloud and not on your computer.
%%capture
!pip install ftfy regex tqdm matplotlib opencv-python scipy scikit...
MIT
demos/CLIP_GradCAM_Visualization.ipynb
AdMoR/clipit
Capstone Project - Flight Delays

Do weather events impact the delay of flights (Brazil)? It is important to see this notebook with the step-by-step of the dataset cleaning process: [https://github.com/davicsilva/dsintensive/blob/master/notebooks/flightDelayPrepData_v2.ipynb](https://github.com/davicsilva/dsinte...
from datetime import datetime

# Pandas and NumPy
import pandas as pd
import numpy as np

# Matplotlib for additional customization
from matplotlib import pyplot as plt
%matplotlib inline

# Seaborn for plotting and styling
import seaborn as sns

# 1. Flight delay: any flight with (real_departure - planned_departure >= ...
Apache-2.0
notebooks/capstone-flightDelay.ipynb
davicsilva/dsintensive
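The delay definition above can be turned into a boolean flag with pandas. A sketch on toy data; the exact threshold is truncated in the source, so 15 minutes is assumed here purely for illustration:

```python
import pandas as pd

# Hypothetical threshold: the source defines a delay as
# (real_departure - planned_departure >= ...), with the value truncated,
# so 15 minutes is assumed here only for illustration.
THRESHOLD = pd.Timedelta(minutes=15)

flights = pd.DataFrame({
    'departure-est':  pd.to_datetime(['2017-01-01 09:00', '2017-01-01 10:00']),
    'departure-real': pd.to_datetime(['2017-01-01 09:05', '2017-01-01 10:40']),
})
# Flag each flight whose actual departure exceeds the planned one by >= THRESHOLD
flights['delayed'] = (flights['departure-real'] - flights['departure-est']) >= THRESHOLD
print(flights['delayed'].tolist())  # → [False, True]
```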
1 - Local flights dataset. For now, only flights from January to September, 2017**A note about date columns on this dataset*** In the original dataset (CSV file from ANAC), the date was not in ISO8601 format (e.g. '2017-10-31 09:03:00')* To fix this I used regex (regular expression) to transform this column directly o...
#[flights] dataset_01 => all "Active Regular Flights" from 2017, from january to september #source: http://www.anac.gov.br/assuntos/dados-e-estatisticas/historico-de-voos #Last access this website: nov, 14th, 2017 flights = pd.read_csv('data/arf2017ISO.csv', sep = ';', dtype = str) flights['departure-est'] = flights[['...
_____no_output_____
Apache-2.0
notebooks/capstone-flightDelay.ipynb
davicsilva/dsintensive
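The date normalization mentioned above (non-ISO8601 dates fixed before loading) can be sketched with pandas; the original ANAC format is not shown in the text, so 'DD/MM/YYYY HH:MM' is assumed here purely for illustration:

```python
import pandas as pd

# Hypothetical source format 'DD/MM/YYYY HH:MM' (the real ANAC format is
# not shown above); parse it and emit ISO8601 strings.
raw = pd.Series(['31/10/2017 09:03', '01/02/2017 18:30'])
iso = pd.to_datetime(raw, format='%d/%m/%Y %H:%M')
print(iso.dt.strftime('%Y-%m-%d %H:%M:%S').tolist())
# → ['2017-10-31 09:03:00', '2017-02-01 18:30:00']
```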
Some EDA tasks
# See: https://stackoverflow.com/questions/37287938/sort-pandas-dataframe-by-value # df_departures = flights.groupby(['airport-A']).size().reset_index(name='number_departures') df_departures.sort_values(by=['number_departures'], ascending=False, inplace=True) df_departures
_____no_output_____
Apache-2.0
notebooks/capstone-flightDelay.ipynb
davicsilva/dsintensive
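The groupby/size/sort pattern above has a shorter equivalent, `value_counts`, which returns the counts already sorted in descending order (a sketch on toy data):

```python
import pandas as pd

flights = pd.DataFrame({'airport-A': ['GRU', 'CGH', 'GRU', 'BSB', 'GRU', 'CGH']})

# Equivalent of groupby('airport-A').size().sort_values(ascending=False)
counts = flights['airport-A'].value_counts()
print(counts.to_dict())  # → {'GRU': 3, 'CGH': 2, 'BSB': 1}
```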
2 - Local airports (list with all ~600 Brazilian public airports)Source: https://goo.gl/mNFuPt (an XLS spreadsheet in Portuguese; last accessed Nov 15th, 2017)
# Airports dataset: all brazilian public airports (updated until october, 2017) airports = pd.read_csv('data/brazilianPublicAirports-out2017.csv', sep = ';', dtype= str) airports.head() # Merge "flights" dataset with "airports" in order to identify # local flights (origin and destination are in Brazil) flights = pd....
_____no_output_____
Apache-2.0
notebooks/capstone-flightDelay.ipynb
davicsilva/dsintensive
3 - List of codes (two letters) used when there was a flight delay (departure)I found two lists that define the two-letter codes used by the aircraft crew to justify a flight delay: a short one and a long one.Source: https://goo.gl/vUC8BX (last accessed: Nov 15th, 2017)
# ------------------------------------------------------------------ # List of codes (two letters) used to justify a delay on the flight # - delayCodesShortlist.csv: list with YYY codes # - delayCodesLongList.csv: list with XXX codes # ------------------------------------------------------------------ delaycodes = pd.r...
_____no_output_____
Apache-2.0
notebooks/capstone-flightDelay.ipynb
davicsilva/dsintensive
4 - The Weather data from https://www.wunderground.com/historyFrom this website I captured sample data from a local airport (Campinas, SP, Brazil): January to September, 2017.The website presents data like this (see [https://goo.gl/oKwzyH](https://goo.gl/oKwzyH)):
# Weather sample: load the CSV with weather historical data (from Campinas, SP, Brazil, 2017) weather = pd.read_csv('data/DataScience-Intensive-weatherAtCampinasAirport-2017-Campinas_Airport_2017Weather.csv', \ sep = ',', dtype = str) weather["date"] = weather["year"].map(str) + "-" + weather["mon...
_____no_output_____
Apache-2.0
notebooks/capstone-flightDelay.ipynb
davicsilva/dsintensive
Request workspace add
t0 = time.time() ekos = EventHandler(**paths) request = ekos.test_requests['request_workspace_add_1'] response_workspace_add = ekos.request_workspace_add(request) ekos.write_test_response('request_workspace_add_1', response_workspace_add) # request = ekos.test_requests['request_workspace_add_2'] # response_workspace_a...
2018-09-20 19:02:54,811 event_handler.py 117 __init__ DEBUG Start EventHandler: event_handler 2018-09-20 19:02:54,814 event_handler.py 152 _load_mapping_objects DEBUG Loading mapping files from pickle file. 2018-09-20 19:02:55,637 event_handler.py 128 __init__ DEBUG Time for mapping: 0.8230469226837158 2018-09-20 19:02...
MIT
notebooks/.ipynb_checkpoints/mw_requests_flow-checkpoint.ipynb
lvikt/ekostat_calculator
Update workspace uuid in test requests
update_workspace_uuid_in_test_requests()
2018-09-20 19:02:56,883 event_handler.py 117 __init__ DEBUG Start EventHandler: event_handler 2018-09-20 19:02:56,886 event_handler.py 152 _load_mapping_objects DEBUG Loading mapping files from pickle file. 2018-09-20 19:02:57,796 event_handler.py 128 __init__ DEBUG Time for mapping: 0.9100518226623535 2018-09-20 19:02...
MIT
notebooks/.ipynb_checkpoints/mw_requests_flow-checkpoint.ipynb
lvikt/ekostat_calculator
Request workspace import default data
# ekos = EventHandler(**paths) # # When copying data the first time all sources has status=0, i.e. no data will be loaded. # request = ekos.test_requests['request_workspace_import_default_data'] # response_import_data = ekos.request_workspace_import_default_data(request) # ekos.write_test_response('request_workspace_i...
_____no_output_____
MIT
notebooks/.ipynb_checkpoints/mw_requests_flow-checkpoint.ipynb
lvikt/ekostat_calculator
Import data from sharkweb
ekos = EventHandler(**paths) request = ekos.test_requests['request_sharkweb_import'] response_sharkweb_import = ekos.request_sharkweb_import(request) ekos.write_test_response('request_sharkweb_import', response_sharkweb_import) ekos.data_params ekos.selection_dicts # ekos = EventHandler(**paths) # ekos.mapping_objects...
_____no_output_____
MIT
notebooks/.ipynb_checkpoints/mw_requests_flow-checkpoint.ipynb
lvikt/ekostat_calculator
Request data source list/edit
ekos = EventHandler(**paths) request = ekos.test_requests['request_workspace_data_sources_list'] response = ekos.request_workspace_data_sources_list(request) ekos.write_test_response('request_workspace_data_sources_list', response) request = response request['data_sources'][0]['status'] = False request['data_source...
2018-09-20 19:31:23,369 event_handler.py 117 __init__ DEBUG Start EventHandler: event_handler 2018-09-20 19:31:23,373 event_handler.py 152 _load_mapping_objects DEBUG Loading mapping files from pickle file. 2018-09-20 19:31:24,259 event_handler.py 128 __init__ DEBUG Time for mapping: 0.8860509395599365 2018-09-20 19:31...
MIT
notebooks/.ipynb_checkpoints/mw_requests_flow-checkpoint.ipynb
lvikt/ekostat_calculator
Request subset add
ekos = EventHandler(**paths) request = ekos.test_requests['request_subset_add_1'] response_subset_add = ekos.request_subset_add(request) ekos.write_test_response('request_subset_add_1', response_subset_add) update_subset_uuid_in_test_requests(subset_alias='mw_subset')
2018-09-20 19:05:16,853 event_handler.py 117 __init__ DEBUG Start EventHandler: event_handler 2018-09-20 19:05:16,857 event_handler.py 152 _load_mapping_objects DEBUG Loading mapping files from pickle file. 2018-09-20 19:05:17,716 event_handler.py 128 __init__ DEBUG Time for mapping: 0.8590488433837891 2018-09-20 19:05...
MIT
notebooks/.ipynb_checkpoints/mw_requests_flow-checkpoint.ipynb
lvikt/ekostat_calculator
Request subset get data filter
ekos = EventHandler(**paths) update_subset_uuid_in_test_requests(subset_alias='mw_subset') request = ekos.test_requests['request_subset_get_data_filter'] response_subset_get_data_filter = ekos.request_subset_get_data_filter(request) ekos.write_test_response('request_subset_get_data_filter', response_subset_get_data_fil...
_____no_output_____
MIT
notebooks/.ipynb_checkpoints/mw_requests_flow-checkpoint.ipynb
lvikt/ekostat_calculator
Request subset set data filter
ekos = EventHandler(**paths) update_subset_uuid_in_test_requests(subset_alias='mw_subset') request = ekos.test_requests['request_subset_set_data_filter'] response_subset_set_data_filter = ekos.request_subset_set_data_filter(request) ekos.write_test_response('request_subset_set_data_filter', response_subset_set_data_fil...
2018-09-20 13:54:00,112 event_handler.py 117 __init__ DEBUG Start EventHandler: event_handler 2018-09-20 13:54:00,112 event_handler.py 152 _load_mapping_objects DEBUG Loading mapping files from pickle file. 2018-09-20 13:54:00,912 event_handler.py 128 __init__ DEBUG Time for mapping: 0.8000011444091797 2018-09-20 13:54...
MIT
notebooks/.ipynb_checkpoints/mw_requests_flow-checkpoint.ipynb
lvikt/ekostat_calculator
Request subset get indicator settings
ekos = EventHandler(**paths) request = ekos.test_requests['request_subset_get_indicator_settings'] # request = ekos.test_requests['request_subset_get_indicator_settings_no_areas'] # print(request['subset']['subset_uuid']) # request['subset']['subset_uuid'] = 'fel' # print(request['subset']['subset_uuid']) response_sub...
2018-09-20 06:50:41,643 event_handler.py 117 __init__ DEBUG Start EventHandler: event_handler 2018-09-20 06:50:41,643 event_handler.py 152 _load_mapping_objects DEBUG Loading mapping files from pickle file. 2018-09-20 06:50:42,330 event_handler.py 128 __init__ DEBUG Time for mapping: 0.6864011287689209 2018-09-20 06:50...
MIT
notebooks/.ipynb_checkpoints/mw_requests_flow-checkpoint.ipynb
lvikt/ekostat_calculator
Request subset set indicator settings
ekos = EventHandler(**paths) request = ekos.test_requests['request_subset_set_indicator_settings'] response_subset_set_indicator_settings = ekos.request_subset_set_indicator_settings(request) ekos.write_test_response('request_subset_set_indicator_settings', response_subset_set_indicator_settings)
2018-09-20 12:09:08,454 event_handler.py 117 __init__ DEBUG Start EventHandler: event_handler 2018-09-20 12:09:08,454 event_handler.py 152 _load_mapping_objects DEBUG Loading mapping files from pickle file. 2018-09-20 12:09:09,234 event_handler.py 128 __init__ DEBUG Time for mapping: 0.780001163482666 2018-09-20 12:09:...
MIT
notebooks/.ipynb_checkpoints/mw_requests_flow-checkpoint.ipynb
lvikt/ekostat_calculator
Request subset calculate status
ekos = EventHandler(**paths) request = ekos.test_requests['request_subset_calculate_status'] response = ekos.request_subset_calculate_status(request) ekos.write_test_response('request_subset_calculate_status', response)
2018-09-20 19:05:31,914 event_handler.py 117 __init__ DEBUG Start EventHandler: event_handler 2018-09-20 19:05:31,917 event_handler.py 152 _load_mapping_objects DEBUG Loading mapping files from pickle file. 2018-09-20 19:05:32,790 event_handler.py 128 __init__ DEBUG Time for mapping: 0.8740499019622803 2018-09-20 19:05...
MIT
notebooks/.ipynb_checkpoints/mw_requests_flow-checkpoint.ipynb
lvikt/ekostat_calculator
Request subset result get
ekos = EventHandler(**paths) request = ekos.test_requests['request_workspace_result'] response_workspace_result = ekos.request_workspace_result(request) ekos.write_test_response('request_workspace_result', response_workspace_result) response_workspace_result['subset']['a4e53080-2c68-40d5-957f-8cc4dbf77815']['result'][...
_____no_output_____
MIT
notebooks/.ipynb_checkpoints/mw_requests_flow-checkpoint.ipynb
lvikt/ekostat_calculator
Notebook adapted from http://www.pieriandata.com NumPy Indexing and SelectionIn this lecture we will discuss how to select elements or groups of elements from an array.
import numpy as np #Creating sample array arr = np.arange(0,11) #Show arr
_____no_output_____
MIT
theory/NumPy/01-NumPy-Indexing-and-Selection.ipynb
CrtomirJuren/python-delavnica
Bracket Indexing and SelectionThe simplest way to pick one or some elements of an array looks very similar to Python lists:
#Get a value at an index arr[8] #Get values in a range arr[1:5] #Get values in a range arr[0:5] # l = ['a', 'b', 'c'] # l[0:2]
_____no_output_____
MIT
theory/NumPy/01-NumPy-Indexing-and-Selection.ipynb
CrtomirJuren/python-delavnica
BroadcastingNumPy arrays differ from normal Python lists because of their ability to broadcast. With lists, you can only reassign parts of a list with new parts of the same size and shape. That is, if you wanted to replace the first 5 elements in a list with a new value, you would have to pass in a new 5 element list....
l = list(range(10)) l l[0:5] = [100,100,100,100,100] l #Setting a value with index range (Broadcasting) arr[0:5]=100 #Show arr # Reset array, we'll see why I had to reset in a moment arr = np.arange(0,11) #Show arr #Important notes on Slices slice_of_arr = arr[0:6] #Show slice slice_of_arr #Change Slice slice_of_a...
_____no_output_____
MIT
theory/NumPy/01-NumPy-Indexing-and-Selection.ipynb
CrtomirJuren/python-delavnica
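The broadcasting contrast described above, in a minimal sketch: a list slice needs a replacement of matching length, while a NumPy slice broadcasts a single scalar:

```python
import numpy as np

l = list(range(10))
l[0:5] = [100] * 5   # lists need a replacement of matching length
arr = np.arange(10)
arr[0:5] = 100       # arrays broadcast the scalar across the slice
print(l[:5], arr[:5].tolist())  # → [100, 100, 100, 100, 100] [100, 100, 100, 100, 100]
```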
Now note the changes also occur in our original array!
arr
_____no_output_____
MIT
theory/NumPy/01-NumPy-Indexing-and-Selection.ipynb
CrtomirJuren/python-delavnica
Data is not copied, it's a view of the original array! This avoids memory problems!
#To get a copy, need to be explicit arr_copy = arr.copy() arr_copy
_____no_output_____
MIT
theory/NumPy/01-NumPy-Indexing-and-Selection.ipynb
CrtomirJuren/python-delavnica
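The view-versus-copy behavior can be verified directly: writing through a slice mutates the original array, while writing to a `.copy()` does not (a quick sketch):

```python
import numpy as np

arr = np.arange(5)
view = arr[1:4]
view[:] = 99              # a slice is a view: this writes through to arr

arr2 = np.arange(5)
copy = arr2[1:4].copy()
copy[:] = 99              # a copy owns its data: arr2 is untouched

print(arr.tolist(), arr2.tolist())  # → [0, 99, 99, 99, 4] [0, 1, 2, 3, 4]
```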
Indexing a 2D array (matrices)The general format is **arr_2d[row][col]** or **arr_2d[row,col]**. I recommend using the comma notation for clarity.
arr_2d = np.array(([5,10,15],[20,25,30],[35,40,45])) #Show arr_2d #Indexing row arr_2d[1] # Format is arr_2d[row][col] or arr_2d[row,col] # Getting individual element value arr_2d[1][0] # Getting individual element value arr_2d[1,0] # 2D array slicing #Shape (2,2) from top right corner arr_2d[:2,1:] #Shape bottom ro...
_____no_output_____
MIT
theory/NumPy/01-NumPy-Indexing-and-Selection.ipynb
CrtomirJuren/python-delavnica
More Indexing HelpIndexing a 2D matrix can be a bit confusing at first, especially when you start to add in step size. Try Google image searching *NumPy indexing* to find useful images, like this one: Image source: http://www.scipy-lectures.org/intro/numpy/numpy.html Conditional SelectionThis is a very fundamental co...
arr = np.arange(1,11) arr arr > 4 bool_arr = arr>4 bool_arr arr[bool_arr] arr[arr>2] x = 2 arr[arr>x]
_____no_output_____
MIT
theory/NumPy/01-NumPy-Indexing-and-Selection.ipynb
CrtomirJuren/python-delavnica
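Boolean masks can also be combined; NumPy uses the element-wise operators `&` and `|` (with parentheses around each comparison), not Python's `and`/`or`. A short sketch:

```python
import numpy as np

arr = np.arange(1, 11)
mask = (arr > 2) & (arr < 7)   # element-wise AND; parentheses are required
print(arr[mask].tolist())      # → [3, 4, 5, 6]
```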
Chapter 4: Linear models[Link to outline](https://docs.google.com/document/d/1fwep23-95U-w1QMPU31nOvUnUXE2X3s_Dbk5JuLlKAY/edit#heading=h.9etj7aw4al9w)Concept map:![concepts_LINEARMODELS.png](attachment:c335ebb2-f116-486c-8737-22e517de3146.png) Notebook setup
import numpy as np import pandas as pd import scipy as sp import seaborn as sns from scipy.stats import uniform, norm # notebooks figs setup %matplotlib inline import matplotlib.pyplot as plt sns.set(rc={'figure.figsize':(8,5)}) blue, orange = sns.color_palette()[0], sns.color_palette()[1] # silence annoying warnin...
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
4.1 Linear models for relationship between two numeric variables- def'n linear model: **y ~ m*x + b**, a.k.a. linear regression- Amy has collected a new dataset: - Instead of receiving a fixed amount of stats training (100 hours), **each employee now receives a variable amount of stats training (anywhere from 0 ho...
# Load data into a pandas dataframe df2 = pd.read_excel("data/ELV_vs_hours.ods", sheet_name="Data") # df2 df2.describe() # plot ELV vs. hours data sns.scatterplot(x='hours', y='ELV', data=df2) # linear model plot (preview) # sns.lmplot(x='hours', y='ELV', data=df2, ci=False)
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
Types of linear relationship between input and outputDifferent possible relationships between the number of hours of stats training and ELV gains:![figures/ELV_as_function_of_stats_hours.png](figures/ELV_as_function_of_stats_hours.png) 4.2 Fitting linear models- Main idea: use `fit` method from `statsmodels.ols` and ...
import statsmodels.formula.api as smf model = smf.ols('ELV ~ 1 + hours', data=df2) result = model.fit() # extact the best-fit model parameters beta0, beta1 = result.params beta0, beta1 # data points sns.scatterplot(x='hours', y='ELV', data=df2) # linear model for data x = df2['hours'].values # input = hours ymodel ...
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
Alternative model fitting methods2. fit using statsmodels [`OLS`](https://www.statsmodels.org/stable/generated/statsmodels.regression.linear_model.OLS.html)3. solution using [`linregress`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.linregress.html) from `scipy`4. solution using [`minimize`](htt...
# extract hours and ELV data from df2 x = df2['hours'].values # hours data as an array y = df2['ELV'].values # ELV data as an array x.shape, y.shape # x
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
Two of the approaches required "packaging" the x-values along with a column of ones, to form a matrix (called a design matrix). Luckily `statsmodels` provides a convenient function for this:
import statsmodels.api as sm # add a column of ones to the x data X = sm.add_constant(x) X.shape # X
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
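`sm.add_constant` is equivalent to stacking a column of ones next to `x` by hand (a NumPy sketch of what the design matrix contains):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
# Same layout as sm.add_constant(x): a ones column, then the x column
X = np.column_stack([np.ones_like(x), x])
print(X.tolist())  # → [[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
```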
____ 2. fit using statsmodels OLS
model2 = sm.OLS(y, X) result2 = model2.fit() # result2.summary() result2.params
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
____ 3. solution using `linregress` from `scipy`
from scipy.stats import linregress result3 = linregress(x, y) result3.intercept, result3.slope
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
____ 4. Using an optimization approach
from scipy.optimize import minimize def sse(beta, x=x, y=y): """Compute the sum-of-squared-errors objective function.""" sumse = 0.0 for xi, yi in zip(x, y): yi_pred = beta[0] + beta[1]*xi ei = (yi_pred-yi)**2 sumse += ei return sumse result4 = minimize(sse, x0=[0,0]) beta0, be...
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
____ 5. Linear algebra solutionWe obtain the least squares solution using the Moore–Penrose inverse formula:$$ \large \vec{\beta} = (X^{\sf T} X)^{-1}X^{\sf T}\; \vec{y}$$
# 5. linear algebra solution using `numpy` import numpy as np result5 = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y) beta0, beta1 = result5 beta0, beta1
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
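The normal-equations formula above works for small, well-conditioned problems; in practice `np.linalg.lstsq` solves the same least-squares problem more stably. A sketch on synthetic data showing the two agree:

```python
import numpy as np

# Synthetic data with known intercept 10 and slope 0.5
rng = np.random.default_rng(42)
x = rng.uniform(0, 100, size=50)
y = 10.0 + 0.5 * x + rng.normal(0, 1, size=50)
X = np.column_stack([np.ones_like(x), x])

beta_ne = np.linalg.inv(X.T @ X) @ X.T @ y        # normal equations
beta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)   # stable least-squares solver
print(np.allclose(beta_ne, beta_ls))              # → True
```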
_____ Using scikit-learn
# 6. solution using `LinearRegression` from scikit-learn from sklearn import linear_model model6 = linear_model.LinearRegression() model6.fit(x[:,np.newaxis], y) model6.intercept_, model6.coef_
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
4.3 Interpreting linear models- model fit checks - $R^2$ [coefficient of determination](https://en.wikipedia.org/wiki/Coefficient_of_determination) = the proportion of the variation in the dependent variable that is predictable from the independent variable - plot of residuals - many other: see [scikit docs](h...
result.summary()
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
Model parameters
beta0, beta1 = result.params result.params
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
The $R^2$ coefficient of determination$R^2 = 1$ corresponds to perfect prediction
result.rsquared
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
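The $R^2$ value can also be computed by hand as $R^2 = 1 - SS_{\text{res}}/SS_{\text{tot}}$; a sketch on synthetic data, checked against `linregress`:

```python
import numpy as np
from scipy.stats import linregress

# Synthetic linear data with noise
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 40)
y = 2.0 + 3.0 * x + rng.normal(0, 2, 40)

res = linregress(x, y)
y_pred = res.intercept + res.slope * x
ss_res = np.sum((y - y_pred) ** 2)      # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)    # total sum of squares
r2 = 1 - ss_res / ss_tot
print(np.isclose(r2, res.rvalue ** 2))  # → True
```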
Hypothesis testing for slope coefficientIs there a non-zero slope coefficient?- **null hypothesis $H_0$**: `hours` has no effect on `ELV`, which is equivalent to $\beta_1 = 0$: $$ \large H_0: \qquad \textrm{ELV} \sim \mathcal{N}(\color{red}{\beta_0}, \sigma^2) \qquad \qquad \qquad $$- **alternative hypothesis...
# p-value under the null hypothesis of zero slope or "no effect of `hours` on `ELV`" result.pvalues.loc['hours'] # 95% confidence interval for the hours-slope parameter # result.conf_int() CI_hours = list(result.conf_int().loc['hours']) CI_hours
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
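The slope p-value comes from a t-test of $H_0: \beta_1 = 0$ with statistic $t = \hat\beta_1 / \mathrm{SE}(\hat\beta_1)$ and $n - 2$ degrees of freedom. A sketch with `linregress` on synthetic data confirming that relationship:

```python
import numpy as np
from scipy.stats import linregress, t as t_dist

# Synthetic data with a genuine (nonzero) slope
rng = np.random.default_rng(7)
x = rng.uniform(0, 100, 30)
y = 100.0 + 0.3 * x + rng.normal(0, 5, 30)

res = linregress(x, y)
t_stat = res.slope / res.stderr                     # t statistic for H0: slope = 0
p_val = 2 * t_dist.sf(abs(t_stat), df=len(x) - 2)   # two-sided p-value
print(np.isclose(p_val, res.pvalue))                # → True
```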
Predictions using the modelWe can use the model we obtained to predict (interpolate) the ELV for future employees.
sns.scatterplot(x='hours', y='ELV', data=df2) ymodel = beta0 + beta1*x sns.lineplot(x=x, y=ymodel)
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks
What ELV can we expect from a new employee who takes 50 hours of stats training?
result.predict({'hours':[50]}) result.predict({'hours':[100]})
_____no_output_____
MIT
stats_overview/04_LINEAR_MODELS.ipynb
minireference/noBSstatsnotebooks