Class RecommenderEvaluator

To make the metrics easier to evaluate, I created a class that receives the original ratings and the predicted ratings of every recommender system, and defines functions to extract all the metrics established in section 1 of the capstone report. Let's take a look at a summary of the class before looking at the code:

- **Constructor (init)**: receives all recommendation algorithms, plus the actual rating list and the list of items. All data is contained in the data downloaded from Coursera. Besides storing all recommendation algorithms, the constructor also calculates the 20 most frequent items, which are used in the popularity metric calculation.
- **get_observed_ratings**: as the ratings matrix is sparse, this method returns only the items a user with id `userId` has purchased.
- **get_top_n**: by ordering the predicted ratings of each recommendation algorithm, we can extract what its 'top' recommendations for a given user would be. Given a parameter $n$, we return the top $n$ recommendations of every recommendation algorithm.
- **rmse**: by comparing the ratings a given user gave to items with the ratings an algorithm predicted for that user, we get an idea of how much error the algorithm makes when predicting the user's ratings. Here we don't work with lists, as each user usually rates only a small number of items; we take all the items the user has rated, recover these items from the algorithms' predictions and then calculate the error.
- **nDCG**: looking at lists now, we can measure how close the ranked lists are to the optimal ordering. Using the scoring factor defined in the report, we calculate the overall DCG of each recommender's list and then normalise it following the nDCG definition (see the formulas right after this list).
- **Price and availability diversity**: diversity metrics that evaluate how the recommended items' prices vary, *i.e.*, the standard deviation of the price: the higher, the better in this case. The same holds for the availability index, where a higher standard deviation means the models recommend a mix of items that are and are not present in local stores.
- **Popularity**: a popularity-oriented recommender tries to recommend items with a high chance of being purchased. In the formulation of this metric, an item has a high chance of being purchased if many people have already purchased it. In the class constructor, we take the observed ratings and the item list and select the top $n$ (default = 20) most purchased items. For a recommendation list, we return the ratio of recommended items that fall inside this top-$n$ list.
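Before diving into the code, it may help to restate the two accuracy-style formulas the class implements: RMSE over the items the user actually rated, and DCG/nDCG computed on the report's transformed scores $s_k$ (this is only a restatement of the standard definitions used by the methods below, not extra methodology):

$$
\mathrm{RMSE}(u)=\sqrt{\frac{1}{|I_u|}\sum_{i\in I_u}\left(\hat{r}_{u,i}-r_{u,i}\right)^{2}},\qquad
\mathrm{DCG@}n=\sum_{k=1}^{n}\frac{s_k}{\log_2(k+1)},\qquad
\mathrm{nDCG@}n=\frac{\mathrm{DCG@}n}{\mathrm{IDCG@}n}
$$

where $I_u$ is the set of items user $u$ rated and $\mathrm{IDCG@}n$ is the DCG of the same scores sorted in descending order, exactly as computed in the `nDCG` method.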
import numpy as np
import pandas as pd


class RecommenderEvaluator:

    def __init__(self, items, actual_ratings, content_based, user_user, item_item, matrix_fact, pers_bias):
        self.items = items
        self.actual_ratings = actual_ratings
        # static data containing the average score given by each user
        self.average_rating_per_userid = actual_ratings.apply(lambda row: np.average(row[~np.isnan(row)]))
        self.content_based = content_based
        self.user_user = user_user
        self.item_item = item_item
        self.matrix_fact = matrix_fact
        self.pers_bias = pers_bias
        # aggregate list. Makes for loops among all recommenders' predictions easier
        self.recommenders_list = [self.content_based, self.user_user, self.item_item, self.matrix_fact, self.pers_bias]
        self.recommenders_list_names = ['content_based', 'user_user', 'item_item', 'matrix_fact', 'pers_bias']
        # Used for the item popularity metric.
        # Calculate the 20 most popular items (items which most of the customers bought)
        N_LIM = 20
        perc_users_bought_item = self.actual_ratings.apply(lambda item: np.sum(~np.isnan(item)), axis=0) / actual_ratings.shape[1]
        sort_pop_items = np.argsort(perc_users_bought_item)[::-1]
        self.pop_items = perc_users_bought_item.iloc[sort_pop_items][:N_LIM].index.values.astype(np.int)

    def get_observed_ratings(self, userId):
        """ Returns all the items a given user evaluated and their ratings. Used mainly by all the metrics calculations
        :parameter: userId - user id
        :return: array of rated items. Index is the item id and value is the item rating
        """
        userId = str(userId)
        filtered_ratings = self.actual_ratings[userId]
        rated_items = filtered_ratings[~np.isnan(filtered_ratings)]
        return rated_items

    def get_top_n(self, userId, n):
        """ Get the top n recommendations for every recommender in the list given a user id
        :parameter: userId - user id
        :parameter: n - max number of recommendations to return
        :return: dictionary where the key is the recommender's name and the value is an array of size n with the top n recommendations
        """
        userId = str(userId)
        predicted_ratings = dict()
        for recommender, recommender_name in zip(self.recommenders_list, self.recommenders_list_names):
            item_ids = recommender[userId].argsort().sort_values()[:n].index.values
            predicted_ratings[recommender_name] = item_ids
        return predicted_ratings

    def rmse(self, userId):
        """ Root Mean Square Error between the recommender's predictions and the actual ratings
        :parameter: userId - user id
        :return: dataframe containing the rmse from all recommenders given user id
        """
        userId = str(userId)
        observed_ratings = self.get_observed_ratings(userId)
        rmse_list = {'rmse': []}
        for recommender in self.recommenders_list:
            predicted_ratings = recommender.loc[observed_ratings.index, userId]
            rmse_list['rmse'].append(np.sqrt(np.average((predicted_ratings - observed_ratings)**2)))
        rmse_list = pd.DataFrame(rmse_list, index=self.recommenders_list_names)
        return rmse_list

    def nDCG(self, userId, top_n=5, individual_recommendation=None):
        """ Normalised Discounted Cumulative Gain for all recommenders given user id
        :parameter: userId - user id
        :return: dataframe containing the nDCG from all recommenders given user id
        """
        ri = self.get_observed_ratings(userId)
        if(individual_recommendation is None):
            topn = self.get_top_n(userId, top_n)
            results_pandas_index = self.recommenders_list_names
        else:
            topn = individual_recommendation
            results_pandas_index = list(individual_recommendation.keys())
        # 1st step: Given recommendations, transform each list into scores
        # (see the score transcriptions in the capstone report)
        scores_all = []
        for name, item_list in topn.items():
            scores = np.empty_like(item_list)   # initialise 'random' array
            scores[:] = -10
            # check which items returned by the recommender the user already rated
            is_already_rated = np.isin(item_list, ri.index.values)
            # items users didn't rate receive score = 0
            scores[~is_already_rated] = 0
            for index, score in enumerate(scores):
                if(score != 0):
                    # for each recommended item the user rated, score according to the report
                    if(ri[item_list[index]] < self.average_rating_per_userid[userId] - 1):
                        scores[index] = -1
                    elif((ri[item_list[index]] >= self.average_rating_per_userid[userId] - 1) &
                         (ri[item_list[index]] < self.average_rating_per_userid[userId] + 0.5)):
                        scores[index] = 1
                    else:
                        scores[index] = 2
            scores_all.append(scores)           # append all the transformed scores
        # 2nd step: Given scores, calculate the model's DCG, ideal DCG and then nDCG
        nDCG_all = dict()
        for index_model, scores_model in enumerate(scores_all):    # for each model
            # calculate the model's DCG
            model_DCG = 0
            for index, score in enumerate(scores_model):
                index_ = index + 1
                model_DCG = model_DCG + score/np.log2(index_ + 1)
            # calculate the model's ideal DCG
            ideal_rank_items = np.sort(scores_model)[::-1]
            ideal_rank_DCG = 0
            for index, ideal_score in enumerate(ideal_rank_items):
                index_ = index + 1
                ideal_rank_DCG = ideal_rank_DCG + ideal_score/np.log2(index_ + 1)
            if((ideal_rank_DCG == 0) | (np.abs(ideal_rank_DCG) < np.abs(model_DCG))):
                # if the ideal DCG is 0 or only negative scores came up
                nDCG = 0
            else:
                # calculate the final nDCG when the ideal DCG is != 0
                nDCG = model_DCG/ideal_rank_DCG
            nDCG_all[results_pandas_index[index_model]] = nDCG     # save each model's nDCG in a dict
        # convert it to a dataframe
        result_final = pd.DataFrame(nDCG_all, index=range(1)).transpose()
        result_final.columns = ['nDCG']
        return result_final

    def price_diversity(self, userId, top_n=5, individual_recommendation=None):
        """ Mean and standard deviation of the price of the top n products recommended by each algorithm.
        The intuition for a price-wise diverse recommender is a high price standard deviation
        :parameter: userId - user id
        :return: dataframe containing the price's mean and standard deviation from all recommenders given user id
        """
        if(individual_recommendation is None):
            topn = self.get_top_n(userId, top_n)
        else:
            topn = individual_recommendation
        stats = pd.DataFrame()
        for key, value in topn.items():
            data_filtered = self.items.loc[topn[key]][['Price']].agg(['mean', 'std']).transpose()
            data_filtered.index = [key]
            stats = stats.append(data_filtered)
        return stats

    def availability_diversity(self, userId, top_n=5, individual_recommendation=None):
        """ Mean and standard deviation of the availability index of the top n products recommended by each algorithm.
        The intuition for a high availability diversity is a small mean value of the availability index
        :parameter: userId - user id
        :return: dataframe containing the availability index's mean and standard deviation from all recommenders given user id
        """
        if(individual_recommendation is None):
            topn = self.get_top_n(userId, top_n)
        else:
            topn = individual_recommendation
        stats = pd.DataFrame()
        for key, value in topn.items():
            data_filtered = self.items.loc[topn[key]][['Availability']].agg(['mean', 'std']).transpose()
            data_filtered.index = [key]
            stats = stats.append(data_filtered)
        return stats

    def popularity(self, userId, top_n=5, individual_recommendation=None):
        """ Return how many items of the top n recommended items are among the most popular purchased items.
        Default is the 20 most purchased items.
        :parameter: userId - user id
        :return: dataframe containing the popular-item score of the recommended list from all recommenders given user id
        """
        if(individual_recommendation is None):
            topn = self.get_top_n(userId, top_n)
            results_pandas_index = self.recommenders_list_names
        else:
            topn = individual_recommendation
            results_pandas_index = list(individual_recommendation.keys())
        results = {'popularity': []}
        for recommender, recommendations in topn.items():
            popularity = np.sum(np.isin(recommendations, self.pop_items))
            results['popularity'].append(popularity)
        return pd.DataFrame(results, index=results_pandas_index)

    def precision_at_n(self, userId, top_n=5, individual_recommendation=None):
        """ Ratio of the top n recommended items that the user has actually rated (precision@n) """
        if(individual_recommendation is None):
            topn = self.get_top_n(userId, top_n)
            results_pandas_index = self.recommenders_list_names
        else:
            topn = individual_recommendation
            results_pandas_index = list(individual_recommendation.keys())
        observed_ratings = self.get_observed_ratings(userId).index.values
        precisions = {'precision_at_' + str(top_n): []}
        for recommender, recommendations in topn.items():
            precisions['precision_at_' + str(top_n)].append(np.sum(np.isin(recommendations, observed_ratings)) / top_n)
        return pd.DataFrame(precisions, index=results_pandas_index)
_____no_output_____
MIT
notebooks/reco-tut-asr-99-10-metrics-calculation.ipynb
sparsh-ai/reco-tut-asr
Test methods

Just to get an idea of the output of each method, let's call all of them for a test user. In the next section we will calculate these metrics for all users.
userId = '64'
re = RecommenderEvaluator(items, actual_ratings, content_based, user_user, item_item, matrix_fact, pers_bias)
_____no_output_____
MIT
notebooks/reco-tut-asr-99-10-metrics-calculation.ipynb
sparsh-ai/reco-tut-asr
Test RMSE
re.rmse(userId)
_____no_output_____
MIT
notebooks/reco-tut-asr-99-10-metrics-calculation.ipynb
sparsh-ai/reco-tut-asr
Test nDCG
re.nDCG(userId)
_____no_output_____
MIT
notebooks/reco-tut-asr-99-10-metrics-calculation.ipynb
sparsh-ai/reco-tut-asr
Test Diversity - Price and Availability
re.price_diversity(userId)
re.availability_diversity(userId)
_____no_output_____
MIT
notebooks/reco-tut-asr-99-10-metrics-calculation.ipynb
sparsh-ai/reco-tut-asr
Test Popularity
re.popularity(userId)
_____no_output_____
MIT
notebooks/reco-tut-asr-99-10-metrics-calculation.ipynb
sparsh-ai/reco-tut-asr
Test Precision@N
re.precision_at_n(userId)
_____no_output_____
MIT
notebooks/reco-tut-asr-99-10-metrics-calculation.ipynb
sparsh-ai/reco-tut-asr
Average metrics over all users

Specifically for user 907, the recommendations from the user-user algorithm came back all null (in the original dataset). This particularly impacted the RMSE calculation, as a single NaN ruins the entire average. So, specifically for RMSE, we do a separate calculation; all the other metrics are calculated in the next code block.
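As a tiny illustration (not from the original notebook) of why that single NaN matters when averaging with plain NumPy:

```python
import numpy as np

errors = np.array([1.07, 1.14, np.nan])   # one recommender returned no prediction
print(np.mean(errors))                    # nan -> the whole average is lost
print(np.nanmean(errors))                 # 1.105 -> the NaN is simply ignored
```

Zero-filling user 907's user-user RMSE and dividing that column by one fewer user, as done below, amounts to the same correction.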
re = RecommenderEvaluator(items, actual_ratings, content_based, user_user, item_item, matrix_fact, pers_bias) i = 0 count = np.array([0,0,0,0,0]) for userId in actual_ratings.columns: if(userId == '907'): rmse_recommenders = re.rmse(userId).fillna(0) else: rmse_recommenders = re.rmse(userId) count = count + rmse_recommenders['rmse'] # as we didn't use user 907 for user user, divide it by the number of users - 1 denominator = [len(actual_ratings.columns)] * 5 denominator[1] = len(actual_ratings.columns) - 1 print('Average RMSE for all users') count/ denominator count_nDCG = np.array([0,0,0,0,0]) count_diversity_price = np.ndarray([5,2]) count_diversity_availability = np.ndarray([5,2]) count_popularity = np.array([0,0,0,0,0]) count_precision_at_5 = np.array([0,0,0,0,0]) for userId in actual_ratings.columns: nDCG_recommenders = re.nDCG(userId) count_nDCG = count_nDCG + nDCG_recommenders['nDCG'] diversity_price_recommenders = re.price_diversity(userId) count_diversity_price = count_diversity_price + diversity_price_recommenders[['mean','std']] diversity_availability_recommenders = re.availability_diversity(userId) count_diversity_availability = count_diversity_availability + diversity_availability_recommenders[['mean','std']] popularity_recommenders = re.popularity(userId) count_popularity = count_popularity + popularity_recommenders['popularity'] precision_recommenders = re.precision_at_n(userId) count_precision_at_5 = count_precision_at_5 + precision_recommenders['precision_at_5'] print('\n---') print('Average nDCG') print('---\n') print(count_nDCG/len(actual_ratings.columns)) print('\n---') print('Average Price - Diversity Measure') print('---\n') print(count_diversity_price/len(actual_ratings.columns)) print('\n---') print('Average Availability - Diversity Measure') print('---\n') print(count_diversity_availability/len(actual_ratings.columns)) print('\n---') print('Average Popularity') print('---\n') print(count_popularity/len(actual_ratings.columns)) print('---\n') print('Average Precision@5') print('---\n') print(count_precision_at_5/len(actual_ratings.columns))
--- Average nDCG --- content_based 0.136505 item_item 0.146798 matrix_fact 0.155888 pers_bias 0.125180 user_user 0.169080 Name: nDCG, dtype: float64 --- Average Price - Diversity Measure --- mean std content_based 19.286627 19.229536 user_user 21.961776 25.275120 item_item 25.931943 32.224609 matrix_fact 21.165554 26.236822 pers_bias 9.938984 5.159261 --- Average Availability - Diversity Measure --- mean std content_based 0.623888 0.225789 user_user 0.682751 0.230219 item_item 0.655725 0.223781 matrix_fact 0.601153 0.202596 pers_bias 0.638596 0.202630 --- Average Popularity --- content_based 0.00 user_user 0.01 item_item 0.00 matrix_fact 0.00 pers_bias 0.00 Name: popularity, dtype: float64 --- Average Precision@5 --- content_based 0.050 user_user 0.066 item_item 0.076 matrix_fact 0.064 pers_bias 0.052 Name: precision_at_5, dtype: float64
MIT
notebooks/reco-tut-asr-99-10-metrics-calculation.ipynb
sparsh-ai/reco-tut-asr
Final Analysis

In terms of **RMSE**, user-user collaborative filtering proved the most effective, although not by a significant margin. For the nDCG rank score, user-user and item-item collaborative filtering were the best.

In terms of price diversity, the item-item algorithm was the most diverse, recommending products that vary by roughly 32 dollars around the mean item price. Matrix factorisation and user-user follow right behind, with price standard deviations around 25 dollars. An interesting case here is the *pers_bias* algorithm, which recommended essentially cheap products with a low standard deviation.

For the availability index, all the algorithms besides user-user managed to recommend items not so present in local stores **together** with items present in local stores, as we can see from their high availability-index standard deviations.

In terms of popularity, no algorithm obtained good scores the way we defined the metric. So, if popularity becomes a focus in the future, we can either change the popularity concept or adjust the recommenders so that they predict higher scores for the most popular items in the store.

After this evaluation, the item-item recommender seemed to have the best overall performance, highlighted by its diversity scores. Unfortunately, the items it suggests are on the whole pricey, so we can check whether there is a possible mixture with the pers_bias algorithm, which recommended cheap items with a low price standard deviation. Matrix factorisation performed well too, but it didn't outperform any of the other recommenders.

Hybridization Techniques - Part III

We try four different types of hybridization here:

1. Linear ensemble
2. Non-linear ensemble
3. Top 1 from each recommender
4. Recommender switching

The first two options approach the recommender's performance in terms of how well it predicts the users' ratings, so their only evaluation will be in terms of RMSE. The third approach is built on the intuition that, if we take the top 1 recommendation from each algorithm, the resulting 5-item list will be better at identifying 'good' items for users. In this case, we consider an item 'good' if the recommender suggested an item the user has already bought. Therefore, the final measurement for this hybridization mechanism is precision@5, as we end up with a 5-item list.

The final mixing algorithm rests on how collaborative filtering mechanisms perform for items that don't have enough users/ratings in their calculations. Since this is a well-known weakness of these recommenders, the idea is to check how many items would be affected if we established a threshold of enough data in order to use collaborative filtering. Otherwise, if an item doesn't have enough support in the form of users' ratings, we could fall back to a content-based recommendation or even, as a last resort, a non-personalised one.

Dataset Creation and User Sample Definition

Dataset

For the first and second approaches, we need another perspective on the data. The dataset contains all the existing ratings from all users and concatenates the predictions made by the 5 traditional recommenders. The idea is to use the observed rating as the target variable and all recommenders' predictions as the explanatory variables, *i.e.*, treat this as a regression problem.
obs_ratings_list = [] content_based_list = [] user_user_list = [] item_item_list = [] matrix_fact_list = [] pers_bias_list = [] re = RecommenderEvaluator(items, actual_ratings, content_based, user_user, item_item, matrix_fact, pers_bias) for userId in actual_ratings.columns: observed_ratings = re.get_observed_ratings(userId) obs_ratings_list.extend(observed_ratings.values) content_based_list.extend(content_based.loc[observed_ratings.index, userId].values) user_user_list.extend(user_user.loc[observed_ratings.index, userId].values) item_item_list.extend(item_item.loc[observed_ratings.index, userId].values) matrix_fact_list.extend(matrix_fact.loc[observed_ratings.index, userId].values) pers_bias_list.extend(pers_bias.loc[observed_ratings.index, userId].values) dataset = pd.DataFrame({'rating': obs_ratings_list, 'content_based':content_based_list, 'user_user': user_user_list, 'item_item':item_item_list, 'matrix_fact':matrix_fact_list,'pers_bias':pers_bias_list}) dataset = dataset.dropna() dataset.head()
_____no_output_____
MIT
notebooks/reco-tut-asr-99-10-metrics-calculation.ipynb
sparsh-ai/reco-tut-asr
To get an idea of the results, let's choose 3 users at random to show the predictions produced by the new hybrid models.
np.random.seed(42)
sample_users = np.random.choice(actual_ratings.columns, 3).astype(str)
print('sample_users: ' + str(sample_users))
sample_users: ['1528' '3524' '417']
MIT
notebooks/reco-tut-asr-99-10-metrics-calculation.ipynb
sparsh-ai/reco-tut-asr
Get recommenders' predictions for sample users in order to create input for ensemble models (hybridization I and II)
from collections import OrderedDict df_sample = pd.DataFrame() for user in sample_users: content_based_ = re.content_based[user] user_user_ = re.user_user[user] item_item_ = re.item_item[user] matrix_fact_ = re.matrix_fact[user] pers_bias_ = re.pers_bias[user] df_sample = df_sample.append(pd.DataFrame(OrderedDict({'user':user,'item':actual_ratings.index.values,'content_based':content_based_, 'user_user':user_user_, 'item_item':item_item_, 'matrix_fact':matrix_fact_,'pers_bias':pers_bias_})), ignore_index=True) df_sample.head()
_____no_output_____
MIT
notebooks/reco-tut-asr-99-10-metrics-calculation.ipynb
sparsh-ai/reco-tut-asr
Focus on Performance (RMSE) I - Linear Model
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

linear = LinearRegression()
print('RMSE for linear ensemble of recommender systems:')
# cross_val_score defaults to R^2 for regressors, so explicitly request (negative) MSE
# and convert the cross-validated average into an RMSE
np.sqrt(-np.mean(cross_val_score(linear, dataset.drop('rating', axis=1), dataset['rating'],
                                 cv=5, scoring='neg_mean_squared_error')))
RMSE for linear ensemble of recommender systems:
MIT
notebooks/reco-tut-asr-99-10-metrics-calculation.ipynb
sparsh-ai/reco-tut-asr
Predictions for sample users: Creating top 5 recommendations for sample users
pred_cols = ['content_based', 'user_user', 'item_item', 'matrix_fact', 'pers_bias']

predictions = linear.fit(dataset.drop('rating', axis=1), dataset['rating']).predict(df_sample[pred_cols])
recommendations = pd.DataFrame(OrderedDict({'user': df_sample['user'],
                                            'item': df_sample['item'],
                                            'predictions': predictions}))
recommendations.groupby('user').apply(lambda df_user: df_user.loc[df_user['predictions'].sort_values(ascending=False)[:5].index.values])
_____no_output_____
MIT
notebooks/reco-tut-asr-99-10-metrics-calculation.ipynb
sparsh-ai/reco-tut-asr
Focus on Performance (RMSE) II - Ensemble
from sklearn.ensemble import RandomForestRegressor

rf = RandomForestRegressor(random_state=42)
print('RMSE for non linear ensemble of recommender systems:')
# As above, request (negative) MSE rather than the default R^2 and convert to RMSE
np.sqrt(-np.mean(cross_val_score(rf, dataset.drop('rating', axis=1), dataset['rating'],
                                 cv=5, scoring='neg_mean_squared_error')))
RMSE for non linear ensemble of recommender systems:
MIT
notebooks/reco-tut-asr-99-10-metrics-calculation.ipynb
sparsh-ai/reco-tut-asr
Predictions for sample users:
predictions = rf.fit(dataset.drop('rating', axis=1), dataset['rating']).predict(df_sample[pred_cols])
recommendations = pd.DataFrame(OrderedDict({'user': df_sample['user'],
                                            'item': df_sample['item'],
                                            'predictions': predictions}))
recommendations.groupby('user').apply(lambda df_user: df_user.loc[df_user['predictions'].sort_values(ascending=False)[:5].index.values])
_____no_output_____
MIT
notebooks/reco-tut-asr-99-10-metrics-calculation.ipynb
sparsh-ai/reco-tut-asr
Focus on Recommendations - Top 1 from each Recommender

With the all-top-1 recommender, we can evaluate performance not just with RMSE but with all the list metrics we evaluated before. As a business constraint, we will pay extra attention to the *precision@5* metric (defined below), as a general indication of how good the recommender is at providing suggestions that the user will buy, or has already bought in this case.

The majority of metrics were on the same scale as the best metrics in the all-models comparison. However, it is worth highlighting that the top-1-of-each recommender had the best *precision@5* among all recommenders, showing it to be a **suitable hybridization mechanism**.
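For completeness, the precision@5 reported here follows the standard precision@N definition implemented in `precision_at_n` above (a restatement, not a new metric):

$$
\text{precision@}N(u)=\frac{\bigl|\,\mathrm{Rec}_N(u)\cap\mathrm{Rated}(u)\,\bigr|}{N},\qquad N=5
$$

i.e., the fraction of the $N$ recommended items that the user has already rated/purchased; for this hybrid, $\mathrm{Rec}_5(u)$ is simply the union of each recommender's single best item.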
count_nDCG = np.array([0]) count_diversity_price = np.ndarray([1,2]) count_diversity_availability = np.ndarray([1,2]) count_popularity = np.array([0]) count_precision = np.array([0]) for userId in actual_ratings.columns: top_n_1 = re.get_top_n(userId,1) user_items = {} user_items['top_1_all'] = [a[0] for a in top_n_1.values()] nDCG_recommenders = re.nDCG(userId, individual_recommendation = user_items) count_nDCG = count_nDCG + nDCG_recommenders['nDCG'] diversity_price_recommenders = re.price_diversity(userId, individual_recommendation = user_items) count_diversity_price = count_diversity_price + diversity_price_recommenders[['mean','std']] diversity_availability_recommenders = re.availability_diversity(userId, individual_recommendation = user_items) count_diversity_availability = count_diversity_availability + diversity_availability_recommenders[['mean','std']] popularity_recommenders = re.popularity(userId, individual_recommendation = user_items) count_popularity = count_popularity + popularity_recommenders['popularity'] precision_recommenders = re.precision_at_n(userId, individual_recommendation = user_items) count_precision = count_precision + precision_recommenders['precision_at_5'] print('\n---') print('Average nDCG') print('---\n') print(count_nDCG/len(actual_ratings.columns)) print('\n---') print('Average Price - Diversity Measure') print('---\n') print(count_diversity_price/len(actual_ratings.columns)) print('\n---') print('Average Availability - Diversity Measure') print('---\n') print(count_diversity_availability/len(actual_ratings.columns)) print('\n---') print('Average Popularity') print('---\n') print(count_popularity/len(actual_ratings.columns)) print('\n---') print('Average Precision@5') print('---\n') print(count_precision/len(actual_ratings.columns))
--- Average nDCG --- top_1_all 0.159211 Name: nDCG, dtype: float64 --- Average Price - Diversity Measure --- mean std top_1_all 16.4625 14.741783 --- Average Availability - Diversity Measure --- mean std top_1_all 0.575683 0.161168 --- Average Popularity --- top_1_all 0.0 Name: popularity, dtype: float64 --- Average Precision@5 --- top_1_all 0.082 Name: precision_at_5, dtype: float64
MIT
notebooks/reco-tut-asr-99-10-metrics-calculation.ipynb
sparsh-ai/reco-tut-asr
Predictions for sample users:
results = {}
for user_sample in sample_users:
    results[user_sample] = [a[0] for a in list(re.get_top_n(user_sample, 1).values())]
results
_____no_output_____
MIT
notebooks/reco-tut-asr-99-10-metrics-calculation.ipynb
sparsh-ai/reco-tut-asr
Focus on Recommendations - Switching algorithm

Can we use a Content Based Recommender for items with fewer evaluations?

We can see in the cumulative histogram that only around 20% of the rated items had 10 or more ratings. This signals that we could prioritize a content-based recommender, or even a non-personalised one, for the majority of items that don't have a sufficient number of ratings for the collaborative filtering algorithms to be stable.
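The switching rule itself is never implemented in this notebook; the sketch below is only an illustration of the idea, assuming the items × users layout used by the prediction DataFrames elsewhere in the notebook and a hypothetical 10-rating threshold taken from the histogram discussion:

```python
import numpy as np

MIN_RATINGS = 10  # hypothetical threshold suggested by the cumulative histogram

def switching_prediction(item_id, user_id, item_item, content_based, actual_ratings):
    """Fall back to content-based when an item has too few ratings for collaborative filtering."""
    n_ratings = np.sum(~np.isnan(actual_ratings.loc[item_id]))  # how many users rated this item
    if n_ratings >= MIN_RATINGS:
        return item_item.loc[item_id, user_id]        # enough support: use collaborative filtering
    return content_based.loc[item_id, user_id]        # otherwise: content-based fallback
```

A production version would also need the non-personalised last resort mentioned above, but this is the core of the switching logic.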
import matplotlib.pyplot as plt item_nbr_ratings = actual_ratings.apply(lambda col: np.sum(~np.isnan(col)), axis=1) item_max_nbr_ratings = item_nbr_ratings.max() range_item_max_nbr_ratings = range(item_max_nbr_ratings+1) plt.figure(figsize=(15,3)) plt.subplot(121) nbr_ratings_items = [] for i in range_item_max_nbr_ratings: nbr_ratings_items.append(len(item_nbr_ratings[item_nbr_ratings == i])) plt.plot(nbr_ratings_items) plt.xlabel('Number of ratings') plt.ylabel('Amount of items') plt.title('Histogram of amount of ratings') plt.subplot(122) cum_nbr_ratings_items = [] for i in range(len(nbr_ratings_items)): cum_nbr_ratings_items.append(np.sum(nbr_ratings_items[:i])) cum_nbr_ratings_items = np.array(cum_nbr_ratings_items) plt.plot(cum_nbr_ratings_items/actual_ratings.shape[0]) plt.xlabel('Number of ratings') plt.ylabel('Cumulative distribution') plt.title('Cumulative histogram of amount of ratings');
_____no_output_____
MIT
notebooks/reco-tut-asr-99-10-metrics-calculation.ipynb
sparsh-ai/reco-tut-asr
Geometric operations

Overlay analysis

In this tutorial, the aim is to make an overlay analysis where we create a new layer based on geometries from one dataset that `intersect` with geometries of another layer. As our test case, we will select polygon grid cells from `TravelTimes_to_5975375_RailwayStation_Helsinki.shp` that intersect with the municipality borders of Helsinki found in `Helsinki_borders.shp`.

Typical overlay operations are (source: [QGIS docs](https://docs.qgis.org/2.8/en/docs/gentle_gis_introduction/vector_spatial_analysis_buffers.html#more-spatial-analysis-tools)):

![](img/overlay_operations.png)

Download data

For this lesson, you should [download a data package](https://github.com/AutoGIS/data/raw/master/L4_data.zip) that includes 3 files:

1. Helsinki_borders.shp
2. Travel_times_to_5975375_RailwayStation.shp
3. Amazon_river.shp

```
$ cd /home/jovyan/notebooks/L4
$ wget https://github.com/AutoGIS/data/raw/master/L4_data.zip
$ unzip L4_data.zip
```

Let's first read the data and see what they look like.

- Import required packages and read in the input data:
import geopandas as gpd
import matplotlib.pyplot as plt
import shapely.speedups
%matplotlib inline

# File paths
border_fp = "data/Helsinki_borders.shp"
grid_fp = "data/TravelTimes_to_5975375_RailwayStation.shp"

# Read files
grid = gpd.read_file(grid_fp)
hel = gpd.read_file(border_fp)
_____no_output_____
MIT
geometric-operations.ipynb
AdrianKriger/Automating-GIS-Processess
- Visualize the layers:
# Plot the layers
ax = grid.plot(facecolor='gray')
hel.plot(ax=ax, facecolor='None', edgecolor='blue')
_____no_output_____
MIT
geometric-operations.ipynb
AdrianKriger/Automating-GIS-Processess
Here the grey area is the Travel Time Matrix grid (13231 grid squares) that covers the Helsinki region, and the blue area represents the municipality of Helsinki. Our goal is to conduct an overlay analysis and select the geometries from the grid polygon layer that intersect with the Helsinki municipality polygon.

When conducting overlay analysis, it is important to check that the CRS of the layers match!

- Check if the Helsinki polygon and the grid polygon are in the same CRS:
# Ensure that the CRS matches, if not raise an AssertionError
assert hel.crs == grid.crs, "CRS differs between layers!"
_____no_output_____
MIT
geometric-operations.ipynb
AdrianKriger/Automating-GIS-Processess
Indeed, they do. Hence, the prerequisite for conducting spatial operations between the layers is fulfilled (the map we plotted also indicated this).

- Let's do an overlay analysis and create a new layer from the grid polygons that `intersect` with our Helsinki layer. We can use a function called `overlay()` to conduct the overlay analysis; it takes as input 1) the GeoDataFrame the selection is taken from, 2) the GeoDataFrame used for making the selection, and 3) the parameter `how` that controls how the overlay analysis is conducted (possible values are `'intersection'`, `'union'`, `'symmetric_difference'`, `'difference'`, and `'identity'`):
intersection = gpd.overlay(grid, hel, how='intersection')
_____no_output_____
MIT
geometric-operations.ipynb
AdrianKriger/Automating-GIS-Processess
- Let's plot our data and see what we have:
intersection.plot(color="b")
_____no_output_____
MIT
geometric-operations.ipynb
AdrianKriger/Automating-GIS-Processess
As a result, we now have only those grid cells that intersect with the Helsinki borders. As we can see, **the grid cells are clipped based on the boundary.**

- What about the data attributes? Let's see what we have:
print(intersection.head())
car_m_d car_m_t car_r_d car_r_t from_id pt_m_d pt_m_t pt_m_tt \ 0 29476 41 29483 46 5876274 29990 76 95 1 29456 41 29462 46 5876275 29866 74 95 2 36772 50 36778 56 5876278 33541 116 137 3 36898 49 36904 56 5876279 33720 119 141 4 29411 40 29418 44 5878128 29944 75 95 pt_r_d pt_r_t pt_r_tt to_id walk_d walk_t GML_ID NAMEFIN \ 0 24984 77 99 5975375 25532 365 27517366 Helsinki 1 24860 75 93 5975375 25408 363 27517366 Helsinki 2 44265 130 146 5975375 31110 444 27517366 Helsinki 3 44444 132 155 5975375 31289 447 27517366 Helsinki 4 24938 76 99 5975375 25486 364 27517366 Helsinki NAMESWE NATCODE geometry 0 Helsingfors 091 POLYGON ((402250.000 6685750.000, 402024.224 6... 1 Helsingfors 091 POLYGON ((402367.890 6685750.000, 402250.000 6... 2 Helsingfors 091 POLYGON ((403250.000 6685750.000, 403148.515 6... 3 Helsingfors 091 POLYGON ((403456.484 6685750.000, 403250.000 6... 4 Helsingfors 091 POLYGON ((402000.000 6685500.000, 401900.425 6...
MIT
geometric-operations.ipynb
AdrianKriger/Automating-GIS-Processess
As we can see, due to the overlay analysis, the dataset contains the attributes from both input layers.

- Let's save our result grid as a GeoJSON file, a commonly used file format nowadays for storing spatial data.
# Output filepath
outfp = "data/TravelTimes_to_5975375_RailwayStation_Helsinki.geojson"

# Use GeoJSON driver
intersection.to_file(outfp, driver="GeoJSON")
_____no_output_____
MIT
geometric-operations.ipynb
AdrianKriger/Automating-GIS-Processess
There are many more examples for different types of overlay analysis in the [Geopandas documentation](http://geopandas.org/set_operations.html) where you can go and learn more.

Aggregating data

Data aggregation refers to a process where we combine data into groups. When doing spatial data aggregation, we merge the geometries together into coarser units (based on some attribute), and can also calculate summary statistics for these combined geometries from the original, more detailed values. For example, suppose that we are interested in studying continents, but we only have country-level data like the country dataset. If we aggregate the data by continent, we would convert the country-level data into a continent-level dataset.

In this tutorial, we will aggregate our travel time data by car travel times (column `car_r_t`), i.e. the grid cells that have the same travel time to the Railway Station will be merged together.

- For doing the aggregation we will use a function called `dissolve()` that takes as input the column that will be used for conducting the aggregation:
# Conduct the aggregation
dissolved = intersection.dissolve(by="car_r_t")

# What did we get
print(dissolved.head())
geometry car_m_d car_m_t \ car_r_t -1 MULTIPOLYGON (((388000.000 6668750.000, 387750... -1 -1 0 POLYGON ((386000.000 6672000.000, 385750.000 6... 0 0 7 POLYGON ((386250.000 6671750.000, 386000.000 6... 1051 7 8 MULTIPOLYGON (((386250.000 6671500.000, 386000... 1286 8 9 MULTIPOLYGON (((386500.000 6671250.000, 386250... 1871 9 car_r_d from_id pt_m_d pt_m_t pt_m_tt pt_r_d pt_r_t pt_r_tt \ car_r_t -1 -1 5913094 -1 -1 -1 -1 -1 -1 0 0 5975375 0 0 0 0 0 0 7 1051 5973739 617 5 6 617 5 6 8 1286 5973736 706 10 10 706 10 10 9 1871 5970457 1384 11 13 1394 11 12 to_id walk_d walk_t GML_ID NAMEFIN NAMESWE NATCODE car_r_t -1 -1 -1 -1 27517366 Helsinki Helsingfors 091 0 5975375 0 0 27517366 Helsinki Helsingfors 091 7 5975375 448 6 27517366 Helsinki Helsingfors 091 8 5975375 706 10 27517366 Helsinki Helsingfors 091 9 5975375 1249 18 27517366 Helsinki Helsingfors 091
MIT
geometric-operations.ipynb
AdrianKriger/Automating-GIS-Processess
- Let's compare the number of cells in the layers before and after the aggregation:
print('Rows in original intersection GeoDataFrame:', len(intersection))
print('Rows in dissolved layer:', len(dissolved))
Rows in original intersection GeoDataFrame: 3826 Rows in dissolved layer: 51
MIT
geometric-operations.ipynb
AdrianKriger/Automating-GIS-Processess
Indeed the number of rows in our data has decreased and the Polygons were merged together.

What actually happened here? Let's take a closer look.

- Let's see what columns we have now in our GeoDataFrame:
print(dissolved.columns)
Index(['geometry', 'car_m_d', 'car_m_t', 'car_r_d', 'from_id', 'pt_m_d', 'pt_m_t', 'pt_m_tt', 'pt_r_d', 'pt_r_t', 'pt_r_tt', 'to_id', 'walk_d', 'walk_t', 'GML_ID', 'NAMEFIN', 'NAMESWE', 'NATCODE'], dtype='object')
MIT
geometric-operations.ipynb
AdrianKriger/Automating-GIS-Processess
As we can see, the column that we used for conducting the aggregation (`car_r_t`) can no longer be found in the list of columns. What happened to it?

- Let's take a look at the index of our GeoDataFrame:
print(dissolved.index)
Int64Index([-1, 0, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 56], dtype='int64', name='car_r_t')
MIT
geometric-operations.ipynb
AdrianKriger/Automating-GIS-Processess
Aha! Now we understand where our column went: it is used as the index of our `dissolved` GeoDataFrame.

- Now we can, for example, select only those geometries from the layer that are exactly 15 minutes away from the Helsinki Railway Station:
# Select only geometries that are within 15 minutes away
dissolved.iloc[15]

# See the data type
print(type(dissolved.iloc[15]))

# See the data
print(dissolved.iloc[15].head())
geometry (POLYGON ((388250.0001354316 6668750.000042891... car_m_d 12035 car_m_t 18 car_r_d 11997 from_id 5903886 Name: 20, dtype: object
MIT
geometric-operations.ipynb
AdrianKriger/Automating-GIS-Processess
As a result, we now have a pandas `Series` object containing basically one row from our original aggregated GeoDataFrame.

Let's also visualize those 15-minute grid cells.

- First, we need to convert the selected row back to a GeoDataFrame:
# Create a GeoDataFrame
selection = gpd.GeoDataFrame([dissolved.iloc[15]], crs=dissolved.crs)
_____no_output_____
MIT
geometric-operations.ipynb
AdrianKriger/Automating-GIS-Processess
- Plot the selection on top of the entire grid:
# Plot all the grid cells, and the grid cells that are 15 minutes away from the Railway Station
ax = dissolved.plot(facecolor='gray')
selection.plot(ax=ax, facecolor='red')
_____no_output_____
MIT
geometric-operations.ipynb
AdrianKriger/Automating-GIS-Processess
Simplifying geometries

Sometimes it might be useful to simplify geometries. This could be something to consider, for example, when you have very detailed spatial features that cover the whole world. If you make a map that covers the whole world, it is unnecessary to have really detailed geometries because it is simply impossible to see those small details on your map. Furthermore, it takes a long time to render a large quantity of features into a map. Here, we will see how it is possible to simplify geometric features in Python.

As an example we will use data representing the Amazon river in South America, and simplify its geometries.

- Let's first read the data and see what the river looks like:
import geopandas as gpd

# File path
fp = "data/Amazon_river.shp"
data = gpd.read_file(fp)

# Print crs
print(data.crs)

# Plot the river
data.plot();
PROJCS["Mercator_2SP",GEOGCS["GCS_GRS 1980(IUGG, 1980)",DATUM["D_unknown",SPHEROID["GRS80",6378137,298.257222101]],PRIMEM["Unknown",0],UNIT["Degree",0.0174532925199433]],PROJECTION["Mercator_2SP"],PARAMETER["standard_parallel_1",-2],PARAMETER["central_meridian",-43],PARAMETER["false_easting",5000000],PARAMETER["false_northing",10000000],UNIT["metre",1,AUTHORITY["EPSG","9001"]],AXIS["Easting",EAST],AXIS["Northing",NORTH]]
MIT
geometric-operations.ipynb
AdrianKriger/Automating-GIS-Processess
The LineString presented here is quite detailed, so let's see how we can generalize it a bit. As we can see from the coordinate reference system, the data is projected in a metric system using a [Mercator projection based on the SIRGAS datum](http://spatialreference.org/ref/sr-org/7868/).

- Generalization can be done easily by using a Shapely function called `.simplify()`. The `tolerance` parameter can be used to adjust how much the geometries should be generalized. **The tolerance value is tied to the coordinate system of the geometries**. Hence, the value we pass here is 20 000 **meters** (20 kilometers).
# Generalize geometry data2 = data.copy() data2['geom_gen'] = data2.simplify(tolerance=20000) # Set geometry to be our new simlified geometry data2 = data2.set_geometry('geom_gen') # Plot data2.plot() # plot them side-by-side %matplotlib inline import matplotlib.pyplot as plt #basic config fig, (ax1,ax2) = plt.subplots(nrows=1, ncols=2, figsize=(20, 16)) #ax1, ax2 = axes #1st plot ax1 = data.plot(ax=ax1, color='red', alpha=0.5) ax1.set_title('Original') #2nd plot ax2 = data2.plot(ax=ax2, color='orange', alpha=0.5) ax2.set_title('Generalize') fig.tight_layout()
_____no_output_____
MIT
geometric-operations.ipynb
AdrianKriger/Automating-GIS-Processess
GNN Implementation

- Name: Abhishek Aditya BS
- SRN: PES1UG19CS019
- VI Semester 'A' Section
- Date: 27-04-2022
import sys if 'google.colab' in sys.modules: %pip install -q stellargraph[demos]==1.2.1 import pandas as pd import os import stellargraph as sg from stellargraph.mapper import FullBatchNodeGenerator from stellargraph.layer import GCN from tensorflow.keras import layers, optimizers, losses, metrics, Model from sklearn import preprocessing, model_selection from IPython.display import display, HTML import matplotlib.pyplot as plt dataset=sg.datasets.Cora() display(HTML(dataset.description)) G, node_subjects = dataset.load() print(G.info()) node_subjects.value_counts().to_frame() train_subjects, test_subjects = model_selection.train_test_split(node_subjects, train_size=140, test_size=None, stratify=node_subjects) val_subjects, test_subjects = model_selection.train_test_split(test_subjects, train_size=500, test_size=None, stratify=test_subjects) train_subjects.value_counts().to_frame() target_encoding=preprocessing.LabelBinarizer() train_targets=target_encoding.fit_transform(train_subjects) val_targets=target_encoding.transform(val_subjects) test_targets=target_encoding.transform(test_subjects) from stellargraph.mapper.full_batch_generators import FullBatchGenerator generator = FullBatchNodeGenerator(G, method="gcn") train_gen=generator.flow(train_subjects.index, train_targets) gcn=GCN(layer_sizes=[16,16], activations=['relu', 'relu'], generator=generator, dropout=0.5) x_inp, x_out = gcn.in_out_tensors() x_out predictions=layers.Dense(units=train_targets.shape[1], activation="softmax")(x_out) model=Model(inputs=x_inp, outputs=predictions) model.compile(optimizer=optimizers.Adam(lr=0.01), loss=losses.categorical_crossentropy, metrics=["acc"]) val_gen = generator.flow(val_subjects.index, val_targets) from tensorflow.keras.callbacks import EarlyStopping os_callback = EarlyStopping(monitor="val_acc", patience=50, restore_best_weights=True) history = model.fit(train_gen, epochs=200, validation_data=val_gen, verbose=2, shuffle=False, callbacks=[os_callback]) sg.utils.plot_history(history) test_gen=generator.flow(test_subjects.index, test_targets) all_nodes=node_subjects.index all_gen=generator.flow(all_nodes) all_predictions=model.predict(all_gen) node_predictions=target_encoding.inverse_transform(all_predictions.squeeze()) df=pd.DataFrame({"Predicted":node_predictions, "True":node_subjects}) df.head(20) embedding_model=Model(inputs=x_inp, outputs=x_out) emb=embedding_model.predict(all_gen) emb.shape from sklearn.decomposition import PCA from sklearn.manifold import TSNE X=emb.squeeze(0) X.shape transform = TSNE trans=transform(n_components=2) X_reduced=trans.fit_transform(X) X_reduced.shape fig, ax = plt.subplots(figsize=(7, 7)) ax.scatter( X_reduced[:, 0], X_reduced[:, 1], c=node_subjects.astype("category").cat.codes, cmap="jet", alpha=0.7, ) ax.set( aspect="equal", xlabel="$X_1$", ylabel="$X_2$", title=f"{transform.__name__} visualization of GCN embeddings for cora dataset", )
_____no_output_____
MIT
Topics of Deep Learning Lab/Lab-3/GNN.ipynb
Abhishek-Aditya-bs/Lab-Projects-and-Assignments
Within repo
import pandas as pd train_file = ["mesos", "usergrid", "appceleratorstudio", "appceleratorstudio", "titanium", "aptanastudio", "mule", "mulestudio"] test_file = ["usergrid", "mesos", "aptanastudio", "titanium", "appceleratorstudio", "titanium", "mulestudio", "mule"] mae = [1.07, 1.14, 2.75, 1.99, 2.85, 3.41, 3.14, 2.31] df_gpt = pd.DataFrame(data={"group": ["Within Repository" for i in range(8)], "approach": ["Deep-SE" for i in range(8)], "train_file": train_file, "test_file": test_file, "mae": mae}) df = pd.read_csv("./within_repo_abe0.csv") df = df.append(df_gpt) df.to_csv("./within_repo_abe0.csv", index=False)
_____no_output_____
MIT
abe0/ignore_process_csv.ipynb
awsm-research/gpt2sp
Cross repo
import pandas as pd train_file = ["clover", "talendesb", "talenddataquality", "mule", "talenddataquality", "mulestudio", "appceleratorstudio", "appceleratorstudio"] test_file = ["usergrid", "mesos", "aptanastudio", "titanium", "appceleratorstudio", "titanium", "mulestudio", "mule"] mae = [1.57, 2.08, 5.37, 6.36, 5.55, 2.67, 4.24, 2.7] df = pd.read_csv("./cross_repo_abe0.csv") df_gpt = pd.DataFrame(data={"group": ["Cross Repository" for i in range(8)], "approach": ["Deep-SE" for i in range(8)], "train_file": train_file, "test_file": test_file, "mae": mae}) df = df.append(df_gpt) df.to_csv("./cross_repo_abe0.csv", index=False)
_____no_output_____
MIT
abe0/ignore_process_csv.ipynb
awsm-research/gpt2sp
Solution to puzzle number 5
import pandas as pd
import numpy as np

data = pd.read_csv('../inputs/puzzle5_input.csv')
data = [val for val in data.columns]
data[:10]
_____no_output_____
MIT
puzzle_notebooks/puzzle5.ipynb
fromdatavistodatascience/adventofcode2019
Part 5.1

After providing 1 to the only input instruction and passing all the tests, what diagnostic code does the program produce?

More rules:

- Opcode 3 takes a single integer as input and saves it to the position given by its only parameter.
- Opcode 4 outputs the value of its only parameter.

Functions now need to support parameter mode 1 (immediate mode):

- Immediate mode - in immediate mode, a parameter is interpreted as a value: if the parameter is 50, its value is 50.
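To make the parameter-mode layout concrete, here is a small worked decoding of the first instruction of the example program `1002,4,3,4,33` (illustrative only; the notebook's `opcode_instructions` and `extract_p_modes` helpers below extract the same digits with string slicing). Reading the instruction right to left, the last two digits are the opcode and the remaining digits are the parameter modes:

```python
# Decode the instruction 1002: opcode 02 (multiply), first parameter in position
# mode (0), second parameter in immediate mode (1), third parameter (the write
# target) in position mode (leading zero omitted).
instruction = 1002
opcode = instruction % 100                 # -> 2
mode_1 = (instruction // 100) % 10         # -> 0 (position mode)
mode_2 = (instruction // 1000) % 10        # -> 1 (immediate mode)
mode_3 = (instruction // 10000) % 10       # -> 0 (position mode)
print(opcode, mode_1, mode_2, mode_3)      # 2 0 1 0
```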
user_ID = 1 numbers = 1002,4,3,4,33 def opcode_instructions(intcode): "Function that breaks the opcode instructions into pieces" str_intcode = str(intcode) opcode = str_intcode[-2:] int_opcode = int(opcode) return int_opcode def extract_p_modes(intcode): "Function that extracts the p_modes" str_p_modes = str(intcode) p_modes_dic = {} for n, val in enumerate(str_p_modes[:-2]): p_modes_dic[f'p_mode_{n+1}'] = val return p_modes_dic def opcode_1(i, new_numbers, p_modes): "Function that adds together numbers read from two positions and stores the result in a third position" second_item = new_numbers[i+1] third_item = new_numbers[i+2] position_item = new_numbers[i+3] if (p_modes[0] == 0) & (p_modes[1] == 0): sum_of_second_and_third = new_numbers[second_item] + new_numbers[third_item] elif (p_modes[0] == 1) & (p_modes[1] == 0): sum_of_second_and_third = second_item + new_numbers[third_item] elif (p_modes[0] == 0) & (p_modes[1] == 1): sum_of_second_and_third = new_numbers[second_item] + third_item else: sum_of_second_and_third = second_item + third_item new_numbers[position_item] = sum_of_second_and_third return new_numbers def opcode_2(i, new_numbers, p_modes): "Function that multiplies together numbers read from two positions and stores the result in a third position" second_item = new_numbers[i+1] third_item = new_numbers[i+2] position_item = new_numbers[i+3] if (p_modes[0] == 0) & (p_modes[1] == 0): m_of_second_and_third = new_numbers[second_item] * new_numbers[third_item] elif (p_modes[0] == 1) & (p_modes[1] == 0): m_of_second_and_third = second_item * new_numbers[third_item] elif (p_modes[0] == 0) & (p_modes[1] == 1): m_of_second_and_third = new_numbers[second_item] * third_item else: m_of_second_and_third = second_item * third_item new_numbers[position_item] = m_of_second_and_third return new_numbers def opcode_3(i, new_numbers, inpt): "Function takes a single integer as input and saves it to the position given by its only parameter" val = input_value second_item = new_numbers[i+1] new_numbers[second_item] = val return new_numbers # from puzzle n2 copy the intcode function def modifiedintcodefunction(numbers, input_value): "Function that similates that of an Intcode program but takes into account extra information." new_numbers = [num for num in numbers] i = 0 output_values = [] while i < len(new_numbers): opcode = opcode_instructions(new_numbers[i]) p_modes = extract_p_modes(new_numbers[i]) if new_numbers[i] == 1: new_numbers = opcode_1(i, new_numbers, p_modes) i = i + 4 elif new_numbers[i] == 2: new_numbers = opcode_2(i, new_numbers, p_modes) i = i + 4 elif new_numbers[i] == 3: new_numbers = opcode_3(i, new_numbers, inpt) i = i + 2 elif new_numbers[i] == 4: output_values.append(new_numbers[i+1]) i = i + 2 elif new_numbers[i] == 99: break else: continue #Return the first item after the code has run. first_item = new_numbers[0] return first_item
_____no_output_____
MIT
puzzle_notebooks/puzzle5.ipynb
fromdatavistodatascience/adventofcode2019
Selecting the first speech to see what we need to clean.
filename = os.path.join(path, dirs[0]) # dirs is a list, and we are going to study the first element dirs[0] text_file = open(filename, 'r') #open the first file dirs[0] lines = text_file.read() # read the file lines # print what is in the file lines.replace('\n', ' ') # remove the \n symbols by replacing with an empty space #print (lines) sotu_data = [] #create an empty list sotu_dict = {} # create an empty dictionary so that we can use file names to list the speeches by name
_____no_output_____
MIT
state_of_union_main.ipynb
gequitz/State_of_The_Union_Analysis_NLP
Putting all the speeches into a list, after cleaning them
#The filter() function returns an iterator were the items are filtered #through a function to test if the item is accepted or not. # str.isalpha : checks if it is an alpha character. # lower() : transform everything to lower case # split() : Split a string into a list where each word is a list item # loop over all the files: for i in range(len(dirs)): # loop on all the speeches, dirs is the list of speeches filename = os.path.join(path, dirs[i]) # location of the speeches text_file = open(filename, 'r') # read the speeches lines = text_file.read() #read the speeches lines = lines.replace('\n', ' ') #replace \n by an empty string # tranform the speeches in lower cases, split them into a list and then filter to accept only alpha characters # finally it joins the words with an empty space clean_lines = ' '.join(filter(str.isalpha, lines.lower().split())) #print(clean_lines) sotu_data.append(clean_lines) # append the clean speeches to the sotu_data list. sotu_dict[filename] = clean_lines # store in dict so we can access clean_lines by filename. sotu_data[10] #11th speech/element speech_name = 'Wilson_1919.txt' sotu_dict[path + '\\' + speech_name]
_____no_output_____
MIT
state_of_union_main.ipynb
gequitz/State_of_The_Union_Analysis_NLP
Count Vectorize
#from notebook #vectorizer = CountVectorizer(stop_words='english') #remove stop words: a, the, and, etc. vectorizer = TfidfVectorizer(stop_words='english', max_df = 0.42, min_df = 0.01) #remove stop words: a, the, and, etc. doc_word = vectorizer.fit_transform(sotu_data) #transform into sparse matrix (0, 1, 2, etc. for instance(s) in document) pairwise_similarity = doc_word * doc_word.T doc_word.shape # 228 = number of documents, 20932 = # of unique words) #pairwise_similarity.toarray()
_____no_output_____
MIT
state_of_union_main.ipynb
gequitz/State_of_The_Union_Analysis_NLP
Compare how similar speeches are to one another
df_similarity = pd.DataFrame(pairwise_similarity.toarray(), index = dirs, columns = dirs) df_similarity.head() #similarity dataframe, compares each document to eachother df_similarity.to_pickle("df_similarity.pkl") #pickle file df_similarity['Speech_str'] = dirs #matrix comparing speech similarity df_similarity['Year'] =df_similarity['Speech_str'].replace('[^0-9]', '', regex=True) df_similarity.drop(['Speech_str'], axis=1) df_similarity = df_similarity.sort_values(by=['Year']) df_similarity.head() plt.subplots(2, 2, figsize=(30, 15), sharex=True) #4 speeches similarity # plt.rcParams.update({'font.size': 20}) plt.subplot(2, 2, 1) plt.plot(df_similarity['Year'], df_similarity['Adams_1797.txt']) plt.title("Similarity for Adams 1797 speech") plt.xlabel("Year") plt.ylabel("Similarity") plt.axhline(y=0.0, color='k', linestyle='-') plt.xticks(['1800', '1850','1900','1950','2000']) # Set label locations. plt.subplot(2, 2, 2) plt.plot(df_similarity['Year'], df_similarity['Roosevelt_1945.txt']) plt.title("Similarity for Roosevelt 1945 speech") plt.xlabel("Year") plt.ylabel("Similarity") plt.axhline(y=0.0, color='k', linestyle='-') plt.xticks(['1800', '1850','1900','1950','2000']) # Set label locations. plt.subplot(2, 2, 3) plt.plot(df_similarity['Year'], df_similarity['Obama_2014.txt']) plt.title("Similarity for Obama 2014 speech") plt.xlabel("Year") plt.ylabel("Similarity") plt.axhline(y=0.0, color='k', linestyle='-') plt.xticks(['1800', '1850','1900','1950','2000']) # Set label locations. plt.subplot(2, 2, 4) plt.plot(df_similarity['Year'], df_similarity['Trump_2018.txt']) plt.title("Similarity for Trump 2018 speech") plt.xlabel("Year") plt.ylabel("Similarity") plt.axhline(y=0.0, color='k', linestyle='-') plt.xticks(['1800', '1850','1900','1950','2000']) # Set label locations. plt.subplots_adjust(top=0.90, bottom=0.02, wspace=0.30, hspace=0.3) #sns.set() plt.show() #(sotu_dict.keys()) #for i in range(0,len(dirs)): # print(dirs[i])
_____no_output_____
MIT
state_of_union_main.ipynb
gequitz/State_of_The_Union_Analysis_NLP
Transforming the doc into a dataframe
# We have to convert `.toarray()` because the vectorizer returns a sparse matrix. # For a big corpus, we would skip the dataframe and keep the output sparse. #pd.DataFrame(doc_word.toarray(), index=sotu_data, columns=vectorizer.get_feature_names()).head(10) #doc_word.toarray() makes 7x19 table, otherwise it would be #represented in 2 columns #from notebook pd.DataFrame(doc_word.toarray(), index=dirs, columns=vectorizer.get_feature_names()).head(95) #doc_word.toarray() makes 7x19 table, otherwise it would be #represented in 2 columns
_____no_output_____
MIT
state_of_union_main.ipynb
gequitz/State_of_The_Union_Analysis_NLP
Topic Modeling using nmf
n_topics = 8 # number of topics nmf_model = NMF(n_topics) # create an object doc_topic = nmf_model.fit_transform(doc_word) #break into 10 components like SVD topic_word = pd.DataFrame(nmf_model.components_.round(3), #,"component_9","component_10","component_11","component_12" index = ["component_1","component_2","component_3","component_4","component_5","component_6","component_7","component_8"], columns = vectorizer.get_feature_names()) #8 components in final draft topic_word #https://stackoverflow.com/questions/16486252/is-it-possible-to-use-argsort-in-descending-order/16486299 #list the top words for each Component: def print_top_words(model, feature_names, n_top_words): for topic_idx, topic in enumerate(model.components_): # loop over the model components print("Component_" + "%d:" % topic_idx ) # print the component # join the top words by an empty space # argsort : sorts the list in increasing order, meaning the top are the last words # then select the top words # -1 loops backwards # reading from the tail to find the largest elements print(" ".join([feature_names[i] for i in topic.argsort()[:-n_top_words - 1:-1]])) print()
_____no_output_____
MIT
state_of_union_main.ipynb
gequitz/State_of_The_Union_Analysis_NLP
Top 15 words in each component
n_top_words = 15 feature_names = vectorizer.get_feature_names() print_top_words(nmf_model, feature_names, n_top_words) #Component x Speech H = pd.DataFrame(doc_topic.round(5), index=dirs, #,"component_9","component_10" columns = ["component_1","component_2", "component_3","component_4","component_5","component_6","component_7","component_8"]) H.head() H.iloc[30:35] H.iloc[60:70] H.iloc[225:230]
_____no_output_____
MIT
state_of_union_main.ipynb
gequitz/State_of_The_Union_Analysis_NLP
Use NMF to plot the top words for each of the 8 components

def plot_top_words(model, feature_names, n_top_words, title):
    fig, axes = plt.subplots(2, 4, figsize=(30, 15), sharex=True)
    axes = axes.flatten()
    for topic_idx, topic in enumerate(model.components_):
        top_features_ind = topic.argsort()[:-n_top_words - 1:-1]
        top_features = [feature_names[i] for i in top_features_ind]
        weights = topic[top_features_ind]
        ax = axes[topic_idx]
        ax.barh(top_features, weights, height=0.7)
        ax.set_title(f'Topic {topic_idx + 1}', fontdict={'fontsize': 30})
        ax.invert_yaxis()
        ax.tick_params(axis='both', which='major', labelsize=20)
        for i in 'top right left'.split():
            ax.spines[i].set_visible(False)
    fig.suptitle(title, fontsize=40)
    plt.subplots_adjust(top=0.90, bottom=0.05, wspace=0.90, hspace=0.3)
    plt.show()
n_top_words = 12 feature_names = vectorizer.get_feature_names() plot_top_words(nmf_model, feature_names, n_top_words, 'Topics in NMF model') #title
_____no_output_____
MIT
state_of_union_main.ipynb
gequitz/State_of_The_Union_Analysis_NLP
Sort speeches Chronologically
H1 = H.copy()  # copy so the component matrix H is not modified in place
H1['Speech_str'] = dirs
H1['Year'] = H1['Speech_str'].replace('[^0-9]', '', regex=True)
H1 = H1.sort_values(by = ['Year'])
H1.to_csv("Data_H1.csv", index = False)  # save the chronologically sorted speeches in this csv
H1.head()
H1.to_pickle("H1.pkl")  # pickle the chronologically sorted dataframe
_____no_output_____
MIT
state_of_union_main.ipynb
gequitz/State_of_The_Union_Analysis_NLP
Plots of Components over Time (check Powerpoint/Readme for more insights)
plt.subplots(4, 2, figsize=(30, 15), sharex=True) plt.rcParams.update({'font.size': 20}) plt.subplot(4, 2, 1) plt.plot(H1['Year'], H1['component_1'] ) #Label axis and titles for all plots plt.title("19th Century Economic Terms") plt.xlabel("Year") plt.ylabel("Component_1") plt.axhline(y=0.0, color='k', linestyle='-') plt.xticks(['1800', '1850','1900','1950','2000']) # Set label locations. plt.subplot(4, 2, 2) plt.plot(H1['Year'], H1['component_2']) plt.title("Modern Economic Language") plt.xlabel("Year") plt.ylabel("Component_2") plt.axhline(y=0.0, color='k', linestyle='-') plt.xticks(['1800', '1850','1900','1950','2000']) # Set label locations. plt.subplot(4, 2, 3) plt.plot(H1['Year'], H1['component_3']) plt.title("Growth of US Gov't & Programs") plt.xlabel("Year") plt.ylabel("Component_3") plt.axhline(y=0.0, color='k', linestyle='-') plt.xticks(['1800', '1850','1900','1950','2000']) # Set label locations. plt.subplot(4, 2, 4) plt.plot(H1['Year'], H1['component_4']) plt.title("Early Foreign Policy & War") plt.xlabel("Year") plt.ylabel("Component_4") plt.axhline(y=0.0, color='k', linestyle='-') plt.xticks(['1800', '1850','1900','1950','2000']) # Set label locations. plt.subplot(4, 2, 5) plt.plot(H1['Year'], H1['component_5']) plt.title("Progressive Era & Roaring 20s") plt.xlabel("Year") plt.ylabel("Component_5") plt.axhline(y=0.0, color='k', linestyle='-') plt.xticks(['1800', '1850','1900','1950','2000']) # Set label locations. plt.subplot(4, 2, 6) plt.plot(H1['Year'], H1['component_6']) plt.title("Before, During, After the Civil War") plt.xlabel("Year") plt.ylabel("Component_6") plt.axhline(y=0.0, color='k', linestyle='-') plt.xticks(['1800', '1850','1900','1950','2000']) # Set label locations. plt.subplot(4, 2, 7) plt.plot(H1['Year'], H1['component_7']) plt.title("World War & Cold War") plt.xlabel("Year") plt.ylabel("Component_7") plt.axhline(y=0.0, color='k', linestyle='-') plt.xticks(['1800', '1850','1900','1950','2000']) # Set label locations. plt.subplot(4, 2, 8) plt.plot(H1['Year'], H1['component_8']) plt.title("Iraq War & Terrorism") plt.xlabel("Year") plt.ylabel("Component_8") plt.axhline(y=0.0, color='k', linestyle='-') plt.xticks(['1800', '1850','1900','1950','2000']) # Set label locations. plt.subplots_adjust(top=0.90, bottom=0.02, wspace=0.30, hspace=0.4) plt.show()
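The eight subplot blocks above differ only in the component index and title, so the same grid can be produced with a loop. A compact sketch (not from the original notebook), reusing the titles given above and assuming `H1` from the previous cell:

```python
titles = ["19th Century Economic Terms", "Modern Economic Language",
          "Growth of US Gov't & Programs", "Early Foreign Policy & War",
          "Progressive Era & Roaring 20s", "Before, During, After the Civil War",
          "World War & Cold War", "Iraq War & Terrorism"]

plt.subplots(4, 2, figsize=(30, 15), sharex=True)
plt.rcParams.update({'font.size': 20})
for i, title in enumerate(titles, start=1):
    plt.subplot(4, 2, i)
    plt.plot(H1['Year'], H1[f'component_{i}'])
    plt.title(title)
    plt.xlabel("Year")
    plt.ylabel(f"Component_{i}")
    plt.axhline(y=0.0, color='k', linestyle='-')
    plt.xticks(['1800', '1850', '1900', '1950', '2000'])  # set label locations
plt.subplots_adjust(top=0.90, bottom=0.02, wspace=0.30, hspace=0.4)
plt.show()
```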
_____no_output_____
MIT
state_of_union_main.ipynb
gequitz/State_of_The_Union_Analysis_NLP
Component 1: 19th Century Economics
H1.iloc[75:85] #Starts 1831. Peak starts 1868 (apex=1894), Nosedive in 1901 w/ Teddy. 4 Yr resurgence under Taft (1909-1912)
_____no_output_____
MIT
state_of_union_main.ipynb
gequitz/State_of_The_Union_Analysis_NLP
Component 2: Modern Economic Language
H1.iloc[205:215] #1960s: Starts under JFK in 1961, peaks w/ Clinton, dips post 9/11 Bush, resurgence under Obama
_____no_output_____
MIT
state_of_union_main.ipynb
gequitz/State_of_The_Union_Analysis_NLP
Component 3: Growth of US Government and Federal Programs
H1.iloc[155:165] #1921, 1929-1935. Big peak in 1946-1950 (1951 Cold War). 1954-1961 Eisenhower. Low after Reagan Revolution (1984)
_____no_output_____
MIT
state_of_union_main.ipynb
gequitz/State_of_The_Union_Analysis_NLP
Component 4: Early Foreign Policy and War
H1.iloc[30:40] #Highest from 1790-1830, Washington to Jackson
_____no_output_____
MIT
state_of_union_main.ipynb
gequitz/State_of_The_Union_Analysis_NLP
Component 5: Progressive Era, Roaring 20s
H1.iloc[115:125] #Peaks in 1900-1930.Especially Teddy Roosevelt. Dip around WW1
_____no_output_____
MIT
state_of_union_main.ipynb
gequitz/State_of_The_Union_Analysis_NLP
Component 6: War Before, During, and After the Civil War
H1.iloc[70:80] #Starts w/ Jackson 1829, Peaks w/ Mexican-American War (1846-1848). Drops 60% w/ Lincoln. Peak ends w/ Johnson 1868. Remains pretty low after 1876 (Reconstruction ends)
_____no_output_____
MIT
state_of_union_main.ipynb
gequitz/State_of_The_Union_Analysis_NLP
Component 7: World Wars and Korean War
H1.iloc[155:165] #Minor peak around WW1. Massive spike in response to the Cold War and Korean War (1951). Eisenhower drops (except 1960 U2). Johnson Vietnam. Peaks again 1980 (Jimmy Carter foreign policy crises)
_____no_output_____
MIT
state_of_union_main.ipynb
gequitz/State_of_The_Union_Analysis_NLP
Component 8: Iraq War and Terrorism
H1.iloc[210:220] #Minor peak w/ Bush 1990. BIG peak w/ Bush 2002. Ends w/ Obama 2009. Resurgence in 2016/18 (ISIS?)
_____no_output_____
MIT
state_of_union_main.ipynb
gequitz/State_of_The_Union_Analysis_NLP
Word Cloud
from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator speech_name = 'Lincoln_1864.txt' sotu_dict[path + '\\' + speech_name] #example = sotu_data[0] example = sotu_dict[path + '\\' + speech_name] wordcloud = WordCloud(max_words=100).generate(example) plt.title("WordCloud of " + speech_name) plt.imshow(wordcloud, interpolation='bilinear') plt.axis("off") plt.show()
_____no_output_____
MIT
state_of_union_main.ipynb
gequitz/State_of_The_Union_Analysis_NLP
Average Monthly Temperatures, 1970-2004

**Date:** 2021-12-02

**Reference:**
library(TTR) options( jupyter.plot_mimetypes = "image/svg+xml", repr.plot.width = 7, repr.plot.height = 5 )
_____no_output_____
MIT
jupyter/2_time_series/2_03_ljk_decompose_seasonal.ipynb
ljk233/R249
Summary

The aim of this notebook was to show how to decompose seasonal time series data using **R** so the trend, seasonal and irregular components can be estimated.

Data on the average monthly temperatures in central England from January 1970 to December 2004 was plotted. The series was decomposed using the `decompose` function from `R.stats` and the seasonal factors displayed as a `matrix`. A seasonally adjusted series was calculated by subtracting the seasonal factors from the original series. The seasonally adjusted series was used to plot an estimate of the trend component by taking a simple moving average. The irregular component was estimated by subtracting the estimate of the trend and seasonal components from the original time series.

Get the data

Data on the average monthly temperatures in central England from January 1970 to December 2004 is shown below.
monthlytemps <- read.csv("..\\..\\data\\moderntemps.csv") head(monthlytemps) modtemps <- monthlytemps$temperature
_____no_output_____
MIT
jupyter/2_time_series/2_03_ljk_decompose_seasonal.ipynb
ljk233/R249
Plot the time series
ts_modtemps <- ts(modtemps, start = c(1970, 1), frequency = 12) plot.ts(ts_modtemps, xlab = "year", ylab = "temperature")
_____no_output_____
MIT
jupyter/2_time_series/2_03_ljk_decompose_seasonal.ipynb
ljk233/R249
The time series is highly seasonal with little evidence of a trend. There appears to be a constant level of approximately 10$^{\circ}$C.

Decompose the data

Use the `decompose` function from `R.stats` to return estimates of the trend, seasonal, and irregular components of the time series.
decomp_ts <- decompose(ts_modtemps)
_____no_output_____
MIT
jupyter/2_time_series/2_03_ljk_decompose_seasonal.ipynb
ljk233/R249
Seasonal factors

Calculate the seasonal factors of the decomposed time series. Cast the `seasonal` time series object held in `decomp_ts` to a `vector`, slice the new vector to isolate a single period, and then cast the sliced vector to a named `matrix`.
sf <- as.vector(decomp_ts$seasonal) (matrix(sf[1:12], dimnames = list(month.abb, c("factors"))))
_____no_output_____
MIT
jupyter/2_time_series/2_03_ljk_decompose_seasonal.ipynb
ljk233/R249
_Add a comment_

Plot the components

Plot the trend, seasonal, and irregular components in a single graphic.
plot(decomp_ts, xlab = "year")
_____no_output_____
MIT
jupyter/2_time_series/2_03_ljk_decompose_seasonal.ipynb
ljk233/R249
Plot the individual components of the decomposition by accessing the variables held in the `decomp_ts` object. This will generally make the components easier to understand.
plot(decomp_ts$trend, xlab = "year", ylab = "temperature (Celsius)") title(main = "Trend component") plot(decomp_ts$seasonal, xlab = "year", ylab = "temperature (Celsius)") title(main = "Seasonal component") plot(decomp_ts$random, xlab = "year", ylab = "temperature (Celsius)") title(main = "Irregular component")
_____no_output_____
MIT
jupyter/2_time_series/2_03_ljk_decompose_seasonal.ipynb
ljk233/R249
_Add comment on trend, seasonal, and irregular components._

_Which component dominates the series?_

Seasonally adjusted plot

Plot the seasonally adjusted series by subtracting the seasonal factors from the original series.
adjusted_ts <- ts_modtemps - decomp_ts$seasonal plot(adjusted_ts, xlab = "year", ylab = "temperature (Celsius)") title(main = "Seasonally adjusted series")
_____no_output_____
MIT
jupyter/2_time_series/2_03_ljk_decompose_seasonal.ipynb
ljk233/R249
This new seasonally adjusted series only contains the trend and irregular components, so it can be treated as if it is non-seasonal data. Estimate the trend component by taking a simple moving average of order 35.
sma35_adjusted_ts <- SMA(adjusted_ts, n = 35) plot.ts(sma35_adjusted_ts, xlab = "year", ylab = "temperature (Celsius)") title(main = "Trend component (ma35)")
_____no_output_____
MIT
jupyter/2_time_series/2_03_ljk_decompose_seasonal.ipynb
ljk233/R249
PySchools without Thomas High School 9th graders

Dependencies and data
# Dependencies import os import numpy as np import pandas as pd # School data school_path = os.path.join('data', 'schools.csv') # school data path school_df = pd.read_csv(school_path) # Student data student_path = os.path.join('data', 'students.csv') # student data path student_df = pd.read_csv(student_path) school_df.shape, student_df.shape # Change Thomas High School 9th grade scores to NaN student_df.loc[(student_df['school_name'].str.contains('Thomas')) & (student_df['grade'] == '9th'), ['reading_score', 'math_score']] = np.NaN student_df.loc[(student_df['school_name'].str.contains('Thomas')) & (student_df['grade'] == '9th'), ['reading_score', 'math_score']].head(3)
_____no_output_____
MIT
pyschools/analysis2.ipynb
tri-bui/sandbox-analytics
Clean student names
# Prefixes to remove: "Miss ", "Dr. ", "Mr. ", "Ms. ", "Mrs. " # Suffixes to remove: " MD", " DDS", " DVM", " PhD" fixes_to_remove = ['Miss ', '\w+\. ', ' [DMP]\w?[DMS]'] # regex for prefixes and suffixes str_to_remove = r'|'.join(fixes_to_remove) # join into a single raw str # Remove inappropriate prefixes and suffixes student_df['student_name'] = student_df['student_name'].str.replace(str_to_remove, '', regex=True) # Check prefixes and suffixes student_names = [n.split() for n in student_df['student_name'].tolist() if len(n.split()) > 2] pre = list(set([name[0] for name in student_names if len(name[0]) <= 4])) # prefixes suf = list(set([name[-1] for name in student_names if len(name[-1]) <= 4])) # suffixes print(pre, suf)
['Juan', 'Noah', 'Cory', 'Omar', 'Eric', 'Ryan', 'Sean', 'Jon', 'Cody', 'Todd', 'Erik', 'Greg', 'Adam', 'Seth', 'Tony', 'Mark'] ['V', 'IV', 'Jr.', 'III', 'II']
MIT
pyschools/analysis2.ipynb
tri-bui/sandbox-analytics
Merge data
# Add binary vars for passing score student_df['pass_read'] = (student_df.reading_score >= 70).astype(int) # passing reading score student_df['pass_math'] = (student_df.math_score >= 70).astype(int) # passing math score student_df['pass_both'] = np.min([student_df.pass_read, student_df.pass_math], axis=0) # passing both scores student_df.head(3) # Add budget per student var school_df['budget_per_student'] = (school_df['budget'] / school_df['size']).round().astype(int) # Bin budget per student school_df['spending_lvl'] = pd.qcut(school_df['budget_per_student'], 4, labels=range(1, 5)) # Bin school size school_df['school_size'] = pd.qcut(school_df['size'], 3, labels=['Small', 'Medium', 'Large']) school_df # Merge data df = pd.merge(student_df, school_df, on='school_name', how='left') df.info()
<class 'pandas.core.frame.DataFrame'> Int64Index: 39170 entries, 0 to 39169 Data columns (total 17 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Student ID 39170 non-null int64 1 student_name 39170 non-null object 2 gender 39170 non-null object 3 grade 39170 non-null object 4 school_name 39170 non-null object 5 reading_score 38709 non-null float64 6 math_score 38709 non-null float64 7 pass_read 39170 non-null int64 8 pass_math 39170 non-null int64 9 pass_both 39170 non-null int64 10 School ID 39170 non-null int64 11 type 39170 non-null object 12 size 39170 non-null int64 13 budget 39170 non-null int64 14 budget_per_student 39170 non-null int64 15 spending_lvl 39170 non-null category 16 school_size 39170 non-null category dtypes: category(2), float64(2), int64(8), object(5) memory usage: 4.9+ MB
MIT
pyschools/analysis2.ipynb
tri-bui/sandbox-analytics
District summary
# District summary district_summary = pd.DataFrame(school_df[['size', 'budget']].sum(), columns=['District']).T district_summary['Total Schools'] = school_df.shape[0] district_summary = district_summary[['Total Schools', 'size', 'budget']] district_summary_cols = ['Total Schools', 'Total Students', 'Total Budget'] district_summary # Score cols score_cols = ['reading_score', 'math_score', 'pass_read', 'pass_math', 'pass_both'] score_cols_new = ['Average Reading Score', 'Average Math Score', '% Passing Reading', '% Passing Math', '% Passing Overall'] # Add scores to district summary for col, val in df[score_cols].mean().items(): if 'pass' in col: val *= 100 district_summary[col] = val district_summary # Rename cols district_summary.columns = district_summary_cols + score_cols_new district_summary # Format columns for col in district_summary.columns: if 'Total' in col: district_summary[col] = district_summary[col].apply('{:,}'.format) if 'Average' in col: district_summary[col] = district_summary[col].round(2) if '%' in col: district_summary[col] = district_summary[col].round().astype(int) district_summary
_____no_output_____
MIT
pyschools/analysis2.ipynb
tri-bui/sandbox-analytics
School summary
# School cols school_cols = ['type', 'size', 'budget', 'budget_per_student', 'reading_score', 'math_score', 'pass_read', 'pass_math', 'pass_both'] school_cols_new = ['School Type', 'Total Students', 'Total Budget', 'Budget Per Student'] school_cols_new += score_cols_new # School summary school_summary = df.groupby('school_name')[school_cols].agg({ 'type': 'max', 'size': 'max', 'budget': 'max', 'budget_per_student': 'max', 'reading_score': 'mean', 'math_score': 'mean', 'pass_read': 'mean', 'pass_math': 'mean', 'pass_both': 'mean' }) school_summary.head(3) # Rename cols school_summary.index.name = None school_summary.columns = school_cols_new # Format values for col in school_summary.columns: if 'Total' in col: school_summary[col] = school_summary[col].apply('{:,}'.format) if 'Average' in col: school_summary[col] = school_summary[col].round(2) if '%' in col: school_summary[col] = (school_summary[col] * 100).round().astype(int) school_summary
_____no_output_____
MIT
pyschools/analysis2.ipynb
tri-bui/sandbox-analytics
Scores by grade
# Reading scores by grade of each school grade_read_scores = pd.pivot_table(df, index='school_name', columns='grade', values='reading_score', aggfunc='mean').round(2) grade_read_scores.index.name = None grade_read_scores.columns.name = 'Reading scores' grade_read_scores = grade_read_scores[['9th', '10th', '11th', '12th']] grade_read_scores # Math scores by grade of each school grade_math_scores = pd.pivot_table(df, index='school_name', columns='grade', values='math_score', aggfunc='mean').round(2) grade_math_scores.index.name = None grade_math_scores.columns.name = 'Math Scores' grade_math_scores = grade_math_scores[['9th', '10th', '11th', '12th']] grade_math_scores
_____no_output_____
MIT
pyschools/analysis2.ipynb
tri-bui/sandbox-analytics
Scores by budget per student
# Scores by spending spending_scores = df.groupby('spending_lvl')[score_cols].mean().round(2) for col in spending_scores.columns: if "pass" in col: spending_scores[col] = (spending_scores[col] * 100).astype(int) spending_scores # Formatting spending_scores.index.name = 'Spending Level' spending_scores.columns = score_cols_new spending_scores
_____no_output_____
MIT
pyschools/analysis2.ipynb
tri-bui/sandbox-analytics
Scores by school size
# Scores by school size size_scores = df.groupby('school_size')[score_cols].mean().round(2) for col in size_scores.columns: if "pass" in col: size_scores[col] = (size_scores[col] * 100).astype(int) size_scores # Formatting size_scores.index.name = 'School Size' size_scores.columns = score_cols_new size_scores
_____no_output_____
MIT
pyschools/analysis2.ipynb
tri-bui/sandbox-analytics
Scores by school type
# Scores by school type type_scores = df.groupby('type')[score_cols].mean().round(2) for col in type_scores.columns: if "pass" in col: type_scores[col] = (type_scores[col] * 100).astype(int) type_scores # Formatting type_scores.index.name = 'School Type' type_scores.columns = score_cols_new type_scores
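The spending-level, school-size, and school-type summaries above repeat the same groupby, rounding, and renaming steps, so a small helper could produce all three. A sketch only, assuming `df`, `score_cols`, and `score_cols_new` as defined earlier in this notebook:

```python
def scores_by(group_col, index_name):
    """Group mean scores by a column, convert pass rates to %, and relabel."""
    summary = df.groupby(group_col)[score_cols].mean().round(2)
    for col in summary.columns:
        if 'pass' in col:
            summary[col] = (summary[col] * 100).astype(int)
    summary.index.name = index_name
    summary.columns = score_cols_new
    return summary

spending_scores = scores_by('spending_lvl', 'Spending Level')
size_scores = scores_by('school_size', 'School Size')
type_scores = scores_by('type', 'School Type')
```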
_____no_output_____
MIT
pyschools/analysis2.ipynb
tri-bui/sandbox-analytics
Instructions

Implement a multi-output cross entropy loss in PyTorch.

Throughout this whole problem we use multi-output models:

* predicting 4 localization coordinates
* predicting 4 keypoint coordinates + whale id and callosity pattern
* predicting whale id and callosity pattern

In order for that to work, your loss function needs to cooperate. Remember that for the simple single-output models the following function will work:

```python
import torch.nn.functional as F

single_output_loss = F.nll_loss(output, target)
```

Your Solution

Your solution function should be called solution. In this case we leave it for consistency but you don't need to do anything with it. CONFIG is a dictionary with all parameters that you want to pass to your solution function.
def solution(outputs, targets): """ Args: outputs: list of torch.autograd.Variables containing model outputs targets: list of torch.autograd.Variables containing targets for each output Returns: loss_value: torch.autograd.Variable object """ return loss_value
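Not the official solution, just a minimal sketch of one way `solution` could combine per-output losses, assuming each element of `outputs` holds log-probabilities (as `F.nll_loss` expects) and the matching element of `targets` holds class indices; weighting every head equally is an assumption:

```python
import torch.nn.functional as F

def solution(outputs, targets):
    """Sum the negative log-likelihood loss over every (output, target) pair."""
    losses = [F.nll_loss(output, target) for output, target in zip(outputs, targets)]
    # equal weighting across heads is an assumption; per-task weights could be added here
    loss_value = sum(losses)  # the sum stays on the autograd graph, so .backward() still works
    return loss_value
```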
_____no_output_____
MIT
resources/whales/tasks/task5.ipynb
pknut/minerva
**ANALYSIS OF FINANCIAL INCLUSION IN EAST AFRICA BETWEEN 2016 AND 2018**

DEFINING THE QUESTION

The research problem is to figure out how we can predict which individuals are most likely to have or use a bank account.

METRIC FOR SUCCESS

The solution should provide an indication of the state of financial inclusion in Kenya, Rwanda, Tanzania, and Uganda, while providing insights into some of the key demographic factors that might drive individuals' financial outcomes.

THE CONTEXT

Financial inclusion remains one of the main obstacles to economic and human development in Africa. For example, across Kenya, Rwanda, Tanzania, and Uganda only 9.1 million adults (or 13.9% of the adult population) have access to or use a commercial bank account. Traditionally, access to bank accounts has been regarded as an indicator of financial inclusion. Despite the proliferation of mobile money in Africa and the growth of innovative fintech solutions, banks still play a pivotal role in facilitating access to financial services. Access to bank accounts enables households to save and facilitate payments while also helping businesses build up their credit-worthiness and improve their access to other financial services. Therefore, access to bank accounts is an essential contributor to long-term economic growth.

EXPERIMENTAL DESIGN TAKEN

The procedure taken is:

1. Definition of the question
2. Reading and checking of the data
3. External data source validation
4. Cleaning of the dataset
5. Exploratory analysis

DATA RELEVANCE

The data contains demographic information and the financial services used by individuals in East Africa. It is extracted from various Finscope surveys ranging from 2016 to 2018. The data files include:

Variable Definitions: http://bit.ly/VariableDefinitions

Dataset: http://bit.ly/FinancialDataset

FinAccess Kenya 2018: https://fsdkenya.org/publication/finaccess2019/

Finscope Rwanda 2016: http://www.statistics.gov.rw/publication/finscope-rwanda-2016

Finscope Tanzania 2017: http://www.fsdt.or.tz/finscope/

Finscope Uganda 2018: http://fsduganda.or.ug/finscope-2018-survey-report/

This data is relevant to the project since it provides the insights needed to address the research question.

LOADING LIBRARIES
# importing libraries import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt
_____no_output_____
MIT
Moringa_Data_Science_Core_W2_Independent_Project_2021_09_Moreen_Mugambi_Python_Notebook_ipyn.ipynb
MoreenMarutaData/FINANCIAL-INCLUSION-IN-EAST-AFRICA-MORINGA-CORE-WEEK-2-PROJECT
READING AND CHECKING DATA
# loading and viewing variable definitions dataset url = "http://bit.ly/VariableDefinitions" vb_df = pd.read_csv(url) vb_df # loading and viewing financial dataset url2 = "http://bit.ly/FinancialDataset" fds = pd.read_csv(url2) fds fds.shape fds.head() fds.tail() fds.dtypes fds.columns fds.info() fds.describe() fds.describe(include=object) len(fds) fds.nunique() fds.count()
_____no_output_____
MIT
Moringa_Data_Science_Core_W2_Independent_Project_2021_09_Moreen_Mugambi_Python_Notebook_ipyn.ipynb
MoreenMarutaData/FINANCIAL-INCLUSION-IN-EAST-AFRICA-MORINGA-CORE-WEEK-2-PROJECT
EXTERNAL DATA SOURCE VALIDATION

FinAccess Kenya 2018: https://fsdkenya.org/publication/finaccess2019/

Finscope Rwanda 2016: http://www.statistics.gov.rw/publication/finscope-rwanda-2016

Finscope Tanzania 2017: http://www.fsdt.or.tz/finscope/

Finscope Uganda 2018: http://fsduganda.or.ug/finscope-2018-survey-report/

CLEANING THE DATASET
fds.head(2) # CHECKING FOR OUTLIERS IN YEAR COLUMN sns.boxplot(x=fds['year']) fds.shape # dropping year column outliers fds1= fds[fds['year']<2020] fds1.shape # CHECKING FOR OUTLIERS IN HOUSEHOLD SIZE COLUMN sns.boxplot(x=fds1['household_size']) # dropping household size outliers fds2 =fds1[fds1['household_size']<10] fds2.shape # CHECKING FOR OUTLIERS IN AGE OF RESPONDENT sns.boxplot(x=fds2['Respondent Age']) # dropping age of respondent outliers fds3 = fds2[fds2['Respondent Age']<82] fds3.shape # plotting the final boxplots fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2,2, figsize=(10, 7)) fig.suptitle('Boxplots') sns.boxplot(fds3['Respondent Age'], ax=ax1) sns.boxplot(fds3['year'], ax=ax2) sns.boxplot(fds3['household_size'], ax=ax3) plt.show() # the outliers have finally been droppped # CHECKING FOR NULLL OR MISSING DATA fds3.isnull().sum() # dropping nulls fds4 = fds3.dropna() fds4.shape # dropping duplicates fds4.drop_duplicates().head(2) # changing column names and columns to lowercase #for columns in fds.columns: #fds1[columns] = fds[columns].astype(str).str.lower() #fds1 #fds1.rename(columns=str.lower) # renaming columns fds5 = fds4.rename(columns={'Type of Location':'location_type', 'Has a Bank account' : 'bank account','Cell Phone Access':'cellphone_access', 'Respondent Age': 'age_of_respondent', 'The relathip with head': 'relationship_with_head', 'Level of Educuation' : 'education_level', 'Type of Job': 'job_type'}) fds5.head(2) fds5.shape fds5.size fds5.nunique() fds5['country'].unique() fds5['year'].unique() fds5['bank account'].unique() fds5['location_type'].unique() fds5['cellphone_access'].unique() fds5['education_level'].unique() fds5.drop(fds5.loc[fds5['education_level'] ==6].index, inplace=True) fds5['education_level'].unique() fds5['gender_of_respondent'].unique() fds5['household_size'].unique() fds5.drop(fds5.loc[fds5['household_size'] == 0].index, inplace=True) fds5['household_size'].unique() fds5['job_type'].unique() fds5['relationship_with_head'].unique() fds5['age_of_respondent'].unique()
_____no_output_____
MIT
Moringa_Data_Science_Core_W2_Independent_Project_2021_09_Moreen_Mugambi_Python_Notebook_ipyn.ipynb
MoreenMarutaData/FINANCIAL-INCLUSION-IN-EAST-AFRICA-MORINGA-CORE-WEEK-2-PROJECT
**EXPLORATORY ANALYSIS**

1. UNIVARIATE ANALYSIS

a. NUMERICAL VARIABLES

MODE
fds5['year'].mode() fds5['household_size'].mode() fds5['age_of_respondent'].mode()
_____no_output_____
MIT
Moringa_Data_Science_Core_W2_Independent_Project_2021_09_Moreen_Mugambi_Python_Notebook_ipyn.ipynb
MoreenMarutaData/FINANCIAL-INCLUSION-IN-EAST-AFRICA-MORINGA-CORE-WEEK-2-PROJECT
MEAN
fds5['age_of_respondent'].mean() fds5['household_size'].mean() fds5.mean()
_____no_output_____
MIT
Moringa_Data_Science_Core_W2_Independent_Project_2021_09_Moreen_Mugambi_Python_Notebook_ipyn.ipynb
MoreenMarutaData/FINANCIAL-INCLUSION-IN-EAST-AFRICA-MORINGA-CORE-WEEK-2-PROJECT
MEDIAN
fds5['age_of_respondent'].median() fds5['household_size'].median() fds5.median()
_____no_output_____
MIT
Moringa_Data_Science_Core_W2_Independent_Project_2021_09_Moreen_Mugambi_Python_Notebook_ipyn.ipynb
MoreenMarutaData/FINANCIAL-INCLUSION-IN-EAST-AFRICA-MORINGA-CORE-WEEK-2-PROJECT
RANGE
a = fds5['age_of_respondent'].max() b = fds5['age_of_respondent'].min() c = a-b print('The range of the age for the respondents is', c) d = fds5['household_size'].max() e = fds5['household_size'].min() f = d-e print('The range of the household_sizes is', f)
The range of the household_sizes is 8.0
MIT
Moringa_Data_Science_Core_W2_Independent_Project_2021_09_Moreen_Mugambi_Python_Notebook_ipyn.ipynb
MoreenMarutaData/FINANCIAL-INCLUSION-IN-EAST-AFRICA-MORINGA-CORE-WEEK-2-PROJECT
QUANTILE AND INTERQUANTILE
fds5.quantile([0.25,0.5,0.75]) # FINDING THE INTERQUANTILE RANGE = IQR Q3 = fds5['age_of_respondent'].quantile(0.75) Q2 = fds5['age_of_respondent'].quantile(0.25) IQR= Q3-Q2 print('The IQR for the respondents age is', IQR) q3 = fds5['household_size'].quantile(0.75) q2 = fds5['household_size'].quantile(0.25) iqr = q3-q2 print('The IQR for household sizes is', iqr)
_____no_output_____
MIT
Moringa_Data_Science_Core_W2_Independent_Project_2021_09_Moreen_Mugambi_Python_Notebook_ipyn.ipynb
MoreenMarutaData/FINANCIAL-INCLUSION-IN-EAST-AFRICA-MORINGA-CORE-WEEK-2-PROJECT
STANDARD DEVIATION
fds5.std()
_____no_output_____
MIT
Moringa_Data_Science_Core_W2_Independent_Project_2021_09_Moreen_Mugambi_Python_Notebook_ipyn.ipynb
MoreenMarutaData/FINANCIAL-INCLUSION-IN-EAST-AFRICA-MORINGA-CORE-WEEK-2-PROJECT
VARIANCE
fds5.var()
_____no_output_____
MIT
Moringa_Data_Science_Core_W2_Independent_Project_2021_09_Moreen_Mugambi_Python_Notebook_ipyn.ipynb
MoreenMarutaData/FINANCIAL-INCLUSION-IN-EAST-AFRICA-MORINGA-CORE-WEEK-2-PROJECT
KURTOSIS
fds5.kurt()
_____no_output_____
MIT
Moringa_Data_Science_Core_W2_Independent_Project_2021_09_Moreen_Mugambi_Python_Notebook_ipyn.ipynb
MoreenMarutaData/FINANCIAL-INCLUSION-IN-EAST-AFRICA-MORINGA-CORE-WEEK-2-PROJECT
SKEWNESS
fds5.skew()
_____no_output_____
MIT
Moringa_Data_Science_Core_W2_Independent_Project_2021_09_Moreen_Mugambi_Python_Notebook_ipyn.ipynb
MoreenMarutaData/FINANCIAL-INCLUSION-IN-EAST-AFRICA-MORINGA-CORE-WEEK-2-PROJECT
b. CATEGORICAL

MODE
fds5.mode().head(1) fds5['age_of_respondent'].plot(kind="hist") plt.xlabel('ages of respondents') plt.ylabel('frequency') plt.title(' Frequency of the ages of the respondents') country=fds5['country'].value_counts() print(country) # Plotting the pie chart colors=['pink','white','cyan','yellow'] country.plot(kind='pie',colors=colors,autopct='%1.1f%%',shadow=True,startangle=90) plt.title('Distribution of the respondents by country') bank =fds5['bank account'].value_counts() # Plotting the pie chart colors=['plum', 'aqua'] bank.plot(kind='pie',colors=colors,autopct='%1.1f%%',shadow=True,startangle=90) plt.title('availability of bank accounts') location=fds5['location_type'].value_counts() # Plotting the pie chart colors=['aquamarine','beige'] location.plot(kind='pie',colors=colors,autopct='%1.3f%%',shadow=True,startangle=00) plt.title('Distribution of the respondents according to location') celly =fds5['cellphone_access'].value_counts() # Plotting the pie chart colors=['plum','lavender'] celly.plot(kind='pie',colors=colors,autopct='%1.1f%%',shadow=True,startangle=0) plt.title('cellphone access for the respondents') gen =fds5['gender_of_respondent'].value_counts() # Plotting the pie chart colors=['red','lavender'] gen.plot(kind='pie',colors=colors,autopct='%1.1f%%',shadow=True,startangle=0) plt.title('gender distribution')
_____no_output_____
MIT
Moringa_Data_Science_Core_W2_Independent_Project_2021_09_Moreen_Mugambi_Python_Notebook_ipyn.ipynb
MoreenMarutaData/FINANCIAL-INCLUSION-IN-EAST-AFRICA-MORINGA-CORE-WEEK-2-PROJECT
CONCLUSION AND RECOMMENDATION

Most of the data was collected in Rwanda. Most of the data was collected in rural areas. Most of those who were interviewed were women. Most of the population has mobile phones. There were several outliers.

Since 75% of the population has phones, phones should be used as the main channel for information about and awareness of banking services.

2. BIVARIATE ANALYSIS
fds5.head() #@title Since i am predicting the likelihood of the respondents using the bank,I shall be comparing all variables against the bank account column.
_____no_output_____
MIT
Moringa_Data_Science_Core_W2_Independent_Project_2021_09_Moreen_Mugambi_Python_Notebook_ipyn.ipynb
MoreenMarutaData/FINANCIAL-INCLUSION-IN-EAST-AFRICA-MORINGA-CORE-WEEK-2-PROJECT
NUMERICAL VS NUMERICAL
sns.pairplot(fds5) plt.show() # pearson correlation of numerical variables sns.heatmap(fds5.corr(),annot=True) plt.show() # possible weak correlation fds5.corr()
_____no_output_____
MIT
Moringa_Data_Science_Core_W2_Independent_Project_2021_09_Moreen_Mugambi_Python_Notebook_ipyn.ipynb
MoreenMarutaData/FINANCIAL-INCLUSION-IN-EAST-AFRICA-MORINGA-CORE-WEEK-2-PROJECT
CATEGORICAL VS CATEGORICAL
# Grouping bank usage by country country1 = fds5.groupby('country')['bank account'].value_counts(normalize=True).unstack() colors= ['lightpink', 'skyblue'] country1.plot(kind='bar', figsize=(8, 6), color=colors, stacked=True) plt.title('Bank usage by country', fontsize=15, y=1.015) plt.xlabel('country', fontsize=14, labelpad=15) plt.xticks(rotation = 360) plt.ylabel('Bank usage by country', fontsize=14, labelpad=15) plt.show() # Bank usage by gender gender1 = fds5.groupby('gender_of_respondent')['bank account'].value_counts(normalize=True).unstack() colors= ['lightpink', 'skyblue'] gender1.plot(kind='bar', figsize=(8, 6), color=colors, stacked=True) plt.title('Bank usage by gender', fontsize=17, y=1.015) plt.xlabel('gender', fontsize=17, labelpad=17) plt.xticks(rotation = 360) plt.ylabel('Bank usage by gender', fontsize=17, labelpad=17) plt.show() # Bank usage depending on level of education ed2 = fds5.groupby('education_level')['bank account'].value_counts(normalize=True).unstack() colors= ['cyan', 'darkcyan'] ed2.plot(kind='barh', figsize=(8, 6), color=colors, stacked=True) plt.title('Bank usage by level of education', fontsize=17, y=1.015) plt.xlabel('frequency', fontsize=17, labelpad=17) plt.xticks(rotation = 360) plt.ylabel('level of education', fontsize=17, labelpad=17) plt.show() ms = fds5.groupby('marital_status')['bank account'].value_counts(normalize=True).unstack() colors= ['coral', 'orange'] ms.plot(kind='barh', figsize=(8, 6), color=colors, stacked=True) plt.title('Bank usage by marital status', fontsize=17, y=1.015) plt.xlabel('frequency', fontsize=17, labelpad=17) plt.xticks(rotation = 360) plt.ylabel('marital status', fontsize=17, labelpad=17) gj = fds5.groupby('gender_of_respondent')['job_type'].value_counts(normalize=True).unstack() #colors= ['coral', 'orange'] gj.plot(kind='bar', figsize=(8, 6), stacked=True) plt.title('job type by gender', fontsize=17, y=1.015) plt.xlabel('gender_of_respondent', fontsize=17, labelpad=17) plt.xticks(rotation = 360) plt.ylabel('job type', fontsize=17, labelpad=17)
_____no_output_____
MIT
Moringa_Data_Science_Core_W2_Independent_Project_2021_09_Moreen_Mugambi_Python_Notebook_ipyn.ipynb
MoreenMarutaData/FINANCIAL-INCLUSION-IN-EAST-AFRICA-MORINGA-CORE-WEEK-2-PROJECT
NUMERICAL VS CATEGORICAL

IMPLEMENTING AND CHALLENGING SOLUTION
_____no_output_____
MIT
Moringa_Data_Science_Core_W2_Independent_Project_2021_09_Moreen_Mugambi_Python_Notebook_ipyn.ipynb
MoreenMarutaData/FINANCIAL-INCLUSION-IN-EAST-AFRICA-MORINGA-CORE-WEEK-2-PROJECT
Most of those interviewed do not have bank accounts; of these, 80% are uneducated. Most of the population that participated is married, followed by single/never married. Most of the population has a primary school education level. Most of the population is involved in farming, followed by self employment. Bank usage has more males than females. More channeling needs to be done in Kenya as it has the fewest bank users.

3. MULTIVARIATE ANALYSIS
# Multivariate analysis - This is a statistical analysis that involves observation and analysis of more than one statistical outcome variable at a time # LETS MAKE A COPY fds_new = fds5.copy() fds_new.columns fds_new.dtypes # IMPORTING THE LABEL ENCODER from sklearn.preprocessing import LabelEncoder le = LabelEncoder() # encoding categorial values fds_new['country']=le.fit_transform(fds_new['country'].astype(str)) fds_new['location_type']=le.fit_transform(fds_new['location_type'].astype(str)) fds_new['cellphone_access']=le.fit_transform(fds_new['cellphone_access'].astype(str)) fds_new['gender_of_respondent']=le.fit_transform(fds_new['gender_of_respondent'].astype(str)) fds_new['relationship_with_head']=le.fit_transform(fds_new['relationship_with_head'].astype(str)) fds_new['marital_status']=le.fit_transform(fds_new['marital_status'].astype(str)) fds_new['education_level']=le.fit_transform(fds_new['education_level'].astype(str)) fds_new['job_type']=le.fit_transform(fds_new['job_type'].astype(str)) fds_new.sample(5) # dropping unnecessary columns fds_new.drop(['age_of_respondent','uniqueid','year'], axis=1).head(2)
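The repeated `fit_transform` calls above can be driven from a list of the categorical columns. A sketch of the same encoding step (assuming `fds5` and the imported `LabelEncoder`); note that `drop` returns a new dataframe, so its result must be reassigned for the columns to actually be removed:

```python
cat_cols = ['country', 'location_type', 'cellphone_access', 'gender_of_respondent',
            'relationship_with_head', 'marital_status', 'education_level', 'job_type']

fds_new = fds5.copy()
le = LabelEncoder()
for col in cat_cols:
    # refit the encoder per column, mirroring the cell above
    fds_new[col] = le.fit_transform(fds_new[col].astype(str))

# drop() is not in-place: assign the result so the columns are really removed
fds_new = fds_new.drop(['age_of_respondent', 'uniqueid', 'year'], axis=1)
fds_new.head(2)
```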
_____no_output_____
MIT
Moringa_Data_Science_Core_W2_Independent_Project_2021_09_Moreen_Mugambi_Python_Notebook_ipyn.ipynb
MoreenMarutaData/FINANCIAL-INCLUSION-IN-EAST-AFRICA-MORINGA-CORE-WEEK-2-PROJECT
FACTOR ANALYSIS
# Installing factor analyzer !pip install factor_analyzer==0.2.3 from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity chi_square_value,p_value=calculate_bartlett_sphericity(fds_new) chi_square_value, p_value # In Bartlett ’s test, the p-value is 0. The test was statistically significant, # indicating that the observed correlation matrix is not an identity matrix. # Value of KMO less than 0.6 is considered inadequate. # #from factor_analyzer.factor_analyzer import calculate_kmo #kmo_all,kmo_model=calculate_kmo(fds_new) #calculate_kmo(fds_new) # Choosing the Number of Factors from factor_analyzer.factor_analyzer import FactorAnalyzer # Creating factor analysis object and perform factor analysis fa = FactorAnalyzer() fa.analyze(fds_new, 10, rotation=None) # Checking the Eigenvalues ev, v = fa.get_eigenvalues() ev # We choose the factors that are > 1. # so we choose 4 factors only # PERFOMING FACTOR ANALYSIS FOR 4 FACTORS fa = FactorAnalyzer() fa.analyze(fds_new, 4, rotation="varimax") fa.loadings # GETTING VARIANCE FOR THE FACTORS fa.get_factor_variance()
_____no_output_____
MIT
Moringa_Data_Science_Core_W2_Independent_Project_2021_09_Moreen_Mugambi_Python_Notebook_ipyn.ipynb
MoreenMarutaData/FINANCIAL-INCLUSION-IN-EAST-AFRICA-MORINGA-CORE-WEEK-2-PROJECT
Configure
sample_size = 0 max_closure_size = 10000 max_distance = 0.22 cluster_distance_threshold = 0.155 super_cluster_distance_threshold = 0.205 num_candidates = 1000 eps = 0.000001 model_filename = '../data/models/anc-triplet-bilstm-100-512-40-05.pth' # process_nicknames = True # werelate_names_filename = 'givenname_similar_names.werelate.20210414.tsv' # nicknames_filename = '../data/models/givenname_nicknames.txt' # name_freqs_filename = 'given-final.normal.txt' # clusters_filename = 'givenname_clusters.tsv' # super_clusters_filename = 'givenname_super_clusters.tsv' werelate_names_filename = '../data/external/surname_similar_names.werelate.20210414.tsv' nicknames_filename = '' name_freqs_filename = '../data/external/surname-final.normal.txt' clusters_filename = '../data/models/ancestry_surname_clusters-20211028.tsv' super_clusters_filename = '../data/models/ancestry_surname_super_clusters-20211028.tsv' is_surname = True
_____no_output_____
MIT
reports/80_cluster_anc_triplet-initial.ipynb
rootsdev/nama
Read WeRelate names into all_names

Later, we'll want to read frequent FS names into all_names.
# TODO rewrite this in just a few lines using pandas def load_werelate_names(path, is_surname): name_variants = defaultdict(set) with fopen(path, mode="r", encoding="utf-8") as f: is_header = True for line in f: if is_header: is_header = False continue fields = line.rstrip().split("\t") # normalize should only return a single name piece, but loop just in case for name_piece in normalize(fields[0], is_surname): confirmed_variants = fields[1].strip().split(" ") if len(fields) >= 2 else [] computer_variants = fields[2].strip().split(" ") if len(fields) == 3 else [] variants = confirmed_variants + computer_variants for variant in variants: for variant_piece in normalize(variant, is_surname): name_variants[name_piece].add(variant_piece) return name_variants all_names = set() name_variants = load_werelate_names(werelate_names_filename, is_surname) print(len(name_variants)) for k, v in name_variants.items(): all_names.add(add_padding(k)) all_names.update(add_padding(variant) for variant in v) print(len(all_names), next(iter(all_names))) name_variants = None
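One possible pandas-based rewrite suggested by the TODO above — a sketch only, assuming the same tab-separated layout (name, confirmed variants, computer variants) and the `normalize` helper used elsewhere in this notebook; `load_werelate_names_pandas` is a hypothetical name, not part of the repo:

```python
from collections import defaultdict
import pandas as pd

def load_werelate_names_pandas(path, is_surname):
    df = pd.read_csv(path, sep="\t", header=0,
                     names=["name", "confirmed", "computer"],
                     dtype=str, keep_default_na=False).fillna("")
    name_variants = defaultdict(set)
    for _, row in df.iterrows():
        # space-separated variant lists; empty fields split to []
        variants = row["confirmed"].split() + row["computer"].split()
        for name_piece in normalize(row["name"], is_surname):
            for variant in variants:
                name_variants[name_piece].update(normalize(variant, is_surname))
    return name_variants
```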
_____no_output_____
MIT
reports/80_cluster_anc_triplet-initial.ipynb
rootsdev/nama
Read nicknames and remove from names
def load_nicknames(path): nicknames = defaultdict(set) with fopen(path, mode="r", encoding="utf-8") as f: for line in f: names = line.rstrip().split(" ") # normalize should only return a single name piece, but loop just in case for name_piece in normalize(names[0], False): orig_name = add_padding(name_piece) for nickname in names[1:]: for nickname_piece in normalize(nickname, False): nicknames[add_padding(nickname_piece)].add(orig_name) return nicknames name_nicks = defaultdict(set) if not is_surname: nick_names = load_nicknames(nicknames_filename) for nick, names in nick_names.items(): for name in names: name_nicks[name].add(nick) print(next(iter(nick_names.items())), "nick_names", len(nick_names.keys()), "name_nicks", len(name_nicks.keys())) all_names -= set(nickname for nickname in nick_names.keys()) print(len(all_names))
_____no_output_____
MIT
reports/80_cluster_anc_triplet-initial.ipynb
rootsdev/nama
Map names to ids
def map_names_to_ids(names): ids = range(len(names)) return dict(zip(names, ids)), dict(zip(ids, names)) name_ids, id_names = map_names_to_ids(all_names) print(next(iter(name_ids.items())), next(iter(id_names.items())))
_____no_output_____
MIT
reports/80_cluster_anc_triplet-initial.ipynb
rootsdev/nama