Quality distribution per chromosome
with size_controller(FULL_FIG_W, FULL_FIG_H):
    bqc = gffData.boxplot(column='Confidence', by='RefContigID')
opticalmapping/xmap_reader.ipynb
sauloal/ipython
mit
Position distribution
with size_controller(FULL_FIG_W, FULL_FIG_H):
    hs = gffData['RefStartPos'].hist()
Position distribution per chromosome
hsc = gffData['RefStartPos'].hist(by=gffData['RefContigID'], figsize=(CHROM_FIG_W, CHROM_FIG_H), layout=(len(chromosomes), 1))
Length distribution
with size_controller(FULL_FIG_W, FULL_FIG_H):
    hl = gffData['qry_match_len'].hist()
Length distribution per chromosome
hlc = gffData['qry_match_len'].hist(by=gffData['RefContigID'], figsize=(CHROM_FIG_W, CHROM_FIG_H), layout=(len(chromosomes),1))
Create a new dataset for practice
# Select three centroids
cluster_center_1 = np.array([2, 3])
cluster_center_2 = np.array([6, 6])
cluster_center_3 = np.array([10, 1])
# Generate random samples around the chosen centroids
cluster_data_1 = np.random.randn(100, 2) + cluster_center_1
cluster_data_2 = np.random.randn(100, 2) + cluster_center_2
cl...
2019/09-clustering/Notebook_KMeans_Answer.ipynb
InsightLab/data-science-cookbook
mit
1. Implement the K-means algorithm In this step you will implement the functions that make up the K-means algorithm, one by one. It is important to read and understand the documentation of each function, especially the expected dimensions of the output data. 1.1 Initialize the centroids The first step of the algorithm consists of initiali...
def calculate_initial_centers(dataset, k):
    """
    Initializes the initial centroids arbitrarily

    Arguments:
    dataset -- Data set - [m,n]
    k -- Desired number of centroids

    Returns:
    centroids -- List of computed centroids - [k,n]
    """
    #### CODE ...
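The truncated helper above can be completed as a minimal sketch. Picking k distinct random rows of the dataset is one common arbitrary initialization; this is an assumption for illustration, not necessarily the notebook's official answer:

```python
import numpy as np

def calculate_initial_centers(dataset, k):
    """Pick k distinct random rows of the dataset as initial centroids - [k,n]."""
    indices = np.random.choice(dataset.shape[0], size=k, replace=False)
    return dataset[indices]

data = np.random.randn(50, 2)
centroids = calculate_initial_centers(data, 3)
print(centroids.shape)  # (3, 2)
```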
1.2 Define the clusters In the second step of the algorithm, the cluster of each data point is determined according to the computed centroids. 1.2.1 Distance function Implement the Euclidean distance function between two points (a, b), defined by the equation: $$ dist(a, b) = \sqrt{(a_1-b_1)^{2}+(a_2-b_2)^{2}+ ... + (a_n-b_n)^{2}...
def euclidean_distance(a, b):
    """
    Computes the Euclidean distance between points a and b

    Arguments:
    a -- A point in space - [1,n]
    b -- A point in space - [1,n]

    Returns:
    distance -- Euclidean distance between the points
    """
    #### CODE HERE ####
    distance = np....
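A self-contained sketch completing the distance function from the equation above:

```python
import numpy as np

def euclidean_distance(a, b):
    """Euclidean distance between points a and b - [1,n] each."""
    return np.sqrt(np.sum((np.asarray(a) - np.asarray(b)) ** 2))

print(euclidean_distance([0, 0], [3, 4]))  # 5.0
```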
1.2.2 Compute the nearest centroid Using the distance function implemented previously, complete the function below to compute the centroid nearest to an arbitrary point. Hint: https://docs.scipy.org/doc/numpy/reference/generated/numpy.argmin.html
def nearest_centroid(a, centroids):
    """
    Computes the index of the centroid nearest to point a

    Arguments:
    a -- A point in space - [1,n]
    centroids -- List of centroids - [k,n]

    Returns:
    nearest_index -- Index of the nearest centroid
    """
    #### CODE HERE ####
    ...
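Following the np.argmin hint, the truncated body can be sketched as:

```python
import numpy as np

def nearest_centroid(a, centroids):
    """Index of the centroid nearest to point a, found with np.argmin."""
    distances = np.linalg.norm(np.asarray(centroids) - np.asarray(a), axis=1)
    return int(np.argmin(distances))

centroids = np.array([[0.0, 0.0], [10.0, 10.0]])
print(nearest_centroid([1.0, 1.0], centroids))  # 0
```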
1.2.3 Compute the nearest centroid for each point of the dataset Using the previous function, which returns the index of the nearest centroid, compute the nearest centroid for each point of the dataset.
def all_nearest_centroids(dataset, centroids):
    """
    Computes the index of the nearest centroid for each point of the dataset

    Arguments:
    dataset -- Data set - [m,n]
    centroids -- List of centroids - [k,n]

    Returns:
    nearest_indexes -- Indices of the nearest centroid...
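A vectorized sketch of the same step, computing all pairwise distances at once via broadcasting (a loop over nearest_centroid would work equally well):

```python
import numpy as np

def all_nearest_centroids(dataset, centroids):
    """Index of the nearest centroid for every row of dataset - [m,]."""
    # Pairwise distances via broadcasting: [m,1,n] - [1,k,n] -> [m,k]
    dists = np.linalg.norm(dataset[:, None, :] - centroids[None, :, :], axis=2)
    return np.argmin(dists, axis=1)

dataset = np.array([[0.0, 0.0], [9.0, 9.0], [1.0, 0.0]])
centroids = np.array([[0.0, 0.0], [10.0, 10.0]])
print(all_nearest_centroids(dataset, centroids))  # [0 1 0]
```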
1.3 Evaluation metric After forming the clusters, how do we know whether the result is good? For that, we need to define an evaluation metric. The K-means algorithm aims to choose centroids that minimize the sum of the squared distances between the data points of a cluster and its centroid. This metric is kno...
def inertia(dataset, centroids, nearest_indexes):
    """
    Sum of squared distances of the samples to the nearest cluster center.

    Arguments:
    dataset -- Data set - [m,n]
    centroids -- List of centroids - [k,n]
    nearest_indexes -- Indices of the nearest centroids -...
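The metric described above (sum of squared sample-to-assigned-centroid distances) can be sketched as:

```python
import numpy as np

def inertia(dataset, centroids, nearest_indexes):
    """Sum of squared distances of each sample to its assigned centroid."""
    diffs = dataset - centroids[nearest_indexes]
    return float(np.sum(diffs ** 2))

dataset = np.array([[0.0, 0.0], [2.0, 0.0]])
centroids = np.array([[1.0, 0.0]])
idx = np.array([0, 0])
print(inertia(dataset, centroids, idx))  # 2.0
```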
1.4 Update the clusters In this step, the centroids are recomputed. The new value of each centroid is the mean of all data points assigned to its cluster.
def update_centroids(dataset, centroids, nearest_indexes):
    """
    Updates the centroids

    Arguments:
    dataset -- Data set - [m,n]
    centroids -- List of centroids - [k,n]
    nearest_indexes -- Indices of the nearest centroids - [m,1]

    Returns:
    centroids -- List of cen...
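A sketch of the update step described above; keeping an empty cluster's centroid unchanged is an assumption made here for robustness:

```python
import numpy as np

def update_centroids(dataset, centroids, nearest_indexes):
    """Each centroid becomes the mean of the points assigned to it."""
    new_centroids = centroids.copy()
    for i in range(len(centroids)):
        members = dataset[nearest_indexes == i]
        if len(members) > 0:  # leave empty clusters where they are
            new_centroids[i] = members.mean(axis=0)
    return new_centroids

dataset = np.array([[0.0, 0.0], [2.0, 0.0], [10.0, 10.0]])
centroids = np.array([[5.0, 5.0], [9.0, 9.0]])
idx = np.array([0, 0, 1])
print(update_centroids(dataset, centroids, idx))  # [[ 1.  0.] [10. 10.]]
```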
2. K-means 2.1 Complete algorithm Using the functions implemented previously, complete the K-means class!
class KMeans():

    def __init__(self, n_clusters=8, max_iter=300):
        self.n_clusters = n_clusters
        self.max_iter = max_iter

    def fit(self, X):
        # Initialize the centroids
        self.cluster_centers_ = calculate_initial_centers(X, self.n_clusters)
        # Compute the ...
2.2 Compare with the Scikit-Learn implementation Use scikit-learn's K-means implementation on the same dataset. Show the inertia value and the clusters generated by the model. You can reuse the structure of the previous code cell. Hint: https://scikit-learn.org/stable/modules/generated/sk...
from sklearn.cluster import KMeans as scikit_KMeans

scikit_kmeans = scikit_KMeans(n_clusters=3)
scikit_kmeans.fit(dataset)
print("Inertia = ", scikit_kmeans.inertia_)

plt.scatter(dataset[:,0], dataset[:,1], c=scikit_kmeans.labels_)
plt.scatter(scikit_kmeans.cluster_centers_[:,0], scikit_kmeans.cluster_c...
3. Elbow method Implement the elbow method and show the best K for the dataset.
n_clusters_test = 8
n_sequence = np.arange(1, n_clusters_test + 1)
inertia_vec = np.zeros(n_clusters_test)

for index, n_cluster in enumerate(n_sequence):
    inertia_vec[index] = KMeans(n_clusters=n_cluster).fit(dataset).inertia_

plt.plot(n_sequence, inertia_vec, 'ro-')
plt.show()
Calculating Molar Fluorescence (MF) of Free Ligand 1. Maximum likelihood curve-fitting Find the maximum likelihood estimate, $\theta^*$, i.e. the curve that minimizes the squared error $\theta^* = \text{argmin}_\theta \sum_i |y_i - f_\theta(x_i)|^2$ (assuming i.i.d. Gaussian noise) Y = MF*L + BKG Y: Fluorescence read (Flu unit)...
import numpy as np
from scipy import optimize
import matplotlib.pyplot as plt
%matplotlib inline

def model(x, slope, intercept):
    ''' 1D linear model in the format scipy.optimize.curve_fit expects: '''
    return x*slope + intercept

# generate some data
#X = np.random.rand(1000)
#true_slope=1.0
#true_intercept=0.0
#...
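Since the cell above is truncated, here is a self-contained sketch of the same idea: fitting Y = MF*L + BKG with scipy.optimize.curve_fit on synthetic data (the true slope/intercept and noise level below are made up for illustration):

```python
import numpy as np
from scipy import optimize

def model(x, slope, intercept):
    """1D linear model in the form scipy.optimize.curve_fit expects."""
    return x * slope + intercept

rng = np.random.default_rng(0)
L = rng.uniform(0, 10, 200)                   # ligand amounts (arbitrary units)
Y = 2.5 * L + 1.0 + rng.normal(0, 0.1, 200)   # fluorescence with Gaussian noise

(MF, BKG), cov = optimize.curve_fit(model, L, Y)
print(round(MF, 1), round(BKG, 1))  # close to 2.5 and 1.0
```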
examples/ipynbs/data-analysis/hsa/analyzing_FLU_hsa_lig1_20150922.ipynb
sonyahanson/assaytools
lgpl-2.1
The following are the scripts for Sentiment Analysis with NLP.
score_file = 'reviews_score.csv' review_file = 'reviews.csv' def read_score_review(score_file, review_file): """Read score and review data.""" score_df = pd.read_csv(score_file) review_df = pd.read_csv(review_file) return score_df, review_df def groupby_agg_data(df, gkey='gkey', rid='rid'): """Gro...
notebook/sentiment_nlp.ipynb
bowen0701/data_science
bsd-2-clause
Collect Data We first read the score and review raw datasets. Score dataset (two columns): hotel_review_id: hotel review sequence ID; rating_overall: overall accommodation rating. Review dataset (three columns): hotel_review_id: hotel review sequence ID; review_title: review title; review_comments: detailed review comments.
score_raw_df, review_raw_df = read_score_review(score_file, review_file) print(len(score_raw_df)) print(len(review_raw_df)) score_raw_df.head(5) review_raw_df.head(5)
EDA with Datasets Check missing / abnormal data
count_missing_data(score_raw_df, cols=['hotel_review_id', 'rating_overall']) score_raw_df[score_raw_df.rating_overall.isnull()] count_missing_data(review_raw_df, cols=['hotel_review_id', 'review_title', 'review_comments']) abnorm_df = slice_abnormal_id(score_raw_df, rid='hotel...
Group-by aggregate score distributions From the following results we can observe that the rating_overall scores are imbalanced: only about $1\%$ of records have low scores ($\le 5$), while about $99\%$ have scores $\ge 6$; some records have a missing score.
score_raw_df.rating_overall.unique() score_agg_df = groupby_agg_data( score_raw_df, gkey='rating_overall', rid='hotel_review_id') score_agg_df
Pre-process Datasets Remove missing / abnormal data Since there are few records (only 27) having missing hotel_review_id and rating_overall score, we just ignore them.
score_df, review_df = remove_missing_abnormal_data( score_raw_df, review_raw_df, rid='hotel_review_id', score_col='rating_overall') score_df.head(5) review_df.head(5)
Join score & review datasets To leverage fast vectorized operations with Pandas DataFrames, we join the score and review datasets.
score_review_df_ = join_score_review(score_df, review_df) score_review_df_.head(5)
The following is the procedure for processing the natural language texts. Concat review_title and review_comments Following Occam's razor: since review_title and review_comments both contain natural language, we can simply concatenate them into one sentence for further natural language processing.
score_review_df = concat_review_title_comments( score_review_df_, concat_cols=['review_title', 'review_comments'], concat_2col='review_title_comments') score_review_df.head(5)
Lower review_title_comments
score_review_df = lower_review_title_comments( score_review_df, lower_col='review_title_comments') score_review_df.head(5)
Tokenize and remove stopwords Tokenizing is an important technique by which we split a sentence into a vector of individual words. However, natural language text contains many stopwords that carry little information, for example: he, is, at, which, and on. We therefore remove them from the vector of to...
start_token_time = time.time() score_review_token_df = preprocess_sentence_par( score_review_df, sen_col='review_title_comments', sen_token_col='review_title_comments_token', num_proc=32) end_token_time = time.time() print('Time for tokenizing: {}'.format(end_token_time - start_token_time)) score_review...
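The preprocess_sentence_par helper above is project-specific; the core tokenize-and-filter step it parallelizes can be sketched with a small hand-picked stopword set (in practice a full list such as NLTK's stopword corpus would be used):

```python
# Tiny illustrative stopword set (a real pipeline would use a full corpus)
STOPWORDS = {'he', 'is', 'at', 'which', 'on', 'the', 'a', 'and'}

def tokenize_remove_stopwords(sentence):
    """Split a lowercased sentence into words and drop stopwords."""
    return [w for w in sentence.split() if w not in STOPWORDS]

print(tokenize_remove_stopwords('the room is clean and quiet'))
# ['room', 'clean', 'quiet']
```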
Get bag of words The tokenized words may contain duplicated words, and for simplicity, we would like to apply the Bag of Words, which just represents the sentence as a bag (multiset) of its words, ignoring grammar and even word order. Here, following the Occam's Razor Principle again, we do not keep word frequencies, t...
start_bow_time = time.time() score_review_bow_df = get_bag_of_words_par( score_review_token_df, sen_token_col='review_title_comments_token', bow_col='review_title_comments_bow', num_proc=32) end_bow_time= time.time() print('Time for bag of words: {}'.format(end_bow_time - start_bow_time)) score_review_b...
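Since word frequencies are discarded, each review reduces to a word-presence mapping. A minimal sketch of that representation (the dict-of-booleans form is an assumption; it matches what NLTK-style classifiers typically consume):

```python
def bag_of_words(tokens):
    """Represent a token list as a word-presence dict; frequencies are discarded."""
    return {w: True for w in tokens}

print(bag_of_words(['clean', 'quiet', 'clean']))
# {'clean': True, 'quiet': True}
```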
Sentiment Analysis Label data Since we would like to polarize data with consideration for the imbalanced data problem as mentioned before, we decide to label ratings 2, 3 and 4 by "negative", ratings 9 and 10 by "positive".
neg_review_ls = label_review( score_review_bow_df, scores_ls=[2, 3, 4], label='negative', score_col='rating_overall', review_col='review_title_comments_bow') pos_review_ls = label_review( score_review_bow_df, scores_ls=[9, 10], label='positive', score_col='rating_overall', review_col='r...
Split training and test sets We split the data into training and test sets using a $75\%$ / $25\%$ rule.
train_set, test_set = create_train_test_sets( pos_review_ls, neg_review_ls, train_percent=0.75) train_set[10]
Naive Bayes Classification We first apply Naive Bayes Classifier to learn positive or negative sentiment.
nb_clf = train_naive_bayes(train_set)
Model evaluation We evaluate our model by positive / negative precision and recall. From the results we can observe that our model performs fairly well.
eval_naive_bayes(test_set, nb_clf)
Measure Real-World Performance Predict label based on bag of words
start_pred_time = time.time() pred_label_df = pred_labels( score_review_bow_df, nb_clf, bow_col='review_title_comments_bow', pred_col='pred_label') end_pred_time = time.time() print('Time for prediction: {}'.format(end_pred_time - start_pred_time)) pred_label_df.head(5)
Compare the two labels' score distributions From the following boxplot, we can observe that our model performs reasonably well in the real world, even with this surprisingly simple machine learning model. We could further apply divergence measures, such as the Kullback-Leibler divergence, to quantify the rating_overall distribu...
box_data = get_boxplot_data( pred_label_df, pred_col='pred_label', score_col='rating_overall') plot_box(box_data, title='Box Plot for rating_overall by Sentiment Classes', xlab='class', ylab='rating_overall', xticks=['positive', 'negative'], figsize=(12, 7))
Loading and visualizing training data The training data consists of 5000 images of handwritten digits, each 20x20 pixels. We will display a random selection of 25 of them.
ex3data1 = scipy.io.loadmat("./ex3data1.mat") X = ex3data1['X'] y = ex3data1['y'][:,0] y[y==10] = 0 m, n = X.shape m, n fig = plt.figure(figsize=(5,5)) fig.subplots_adjust(wspace=0.05, hspace=0.15) import random display_rows, display_cols = (5, 5) for i in range(display_rows * display_cols): ax = fig.add_subpl...
ex3/ml-ex3-onevsall.ipynb
noammor/coursera-machinelearning-python
mit
Part 2: Vectorize Logistic Regression In this part of the exercise, you will reuse your logistic regression code from the last exercise. Your task here is to make sure that your regularized logistic regression implementation is vectorized. After that, you will implement one-vs-all classification for the handwritten digi...
def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def h(theta, x):
    return sigmoid(x.dot(theta))

#LRCOSTFUNCTION Compute cost and gradient for logistic regression with
#regularization
#   J = LRCOSTFUNCTION(theta, X, y, lambda) computes the cost of using
#   theta as the parameter for regularized logistic regressio...
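Since the cost function above is truncated, here is a minimal vectorized sketch of a regularized logistic-regression cost, written independently of the exercise's helper names (the function name lr_cost is made up; by convention theta[0] is not penalized):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def lr_cost(theta, X, y, lam):
    """Regularized logistic-regression cost; the bias term theta[0] is not penalized."""
    m = len(y)
    h = sigmoid(X.dot(theta))
    J = (-y.dot(np.log(h)) - (1 - y).dot(np.log(1 - h))) / m
    J += lam / (2 * m) * np.sum(theta[1:] ** 2)
    return J

X = np.array([[1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 1.0])
theta = np.zeros(2)
print(round(lr_cost(theta, X, y, 1.0), 4))  # ln(2) ~ 0.6931
```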
Training set accuracy:
(predictions == y).mean()
<h3> Extract sample data from BigQuery </h3> The dataset that we will use is <a href="https://console.cloud.google.com/bigquery?project=nyc-tlc&p=nyc-tlc&d=yellow&t=trips&page=table">a BigQuery public dataset</a>. Click on the link, and look at the column names. Switch to the Details tab to verify that the number of r...
%%bigquery SELECT FORMAT_TIMESTAMP( "%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime, pickup_longitude, pickup_latitude, dropoff_longitude, dropoff_latitude, passenger_count, trip_distance, tolls_amount, fare_amount, total_amount FROM `nyc-tlc.yellow.trips` # TODO 1 LIMIT 10
courses/machine_learning/deepdive2/launching_into_ml/solutions/explore_data.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
<h3> Exploring data </h3> Let's explore this dataset and clean it up as necessary. We'll use the Python Seaborn package to visualize graphs and Pandas to do the slicing and filtering.
# TODO 2 ax = sns.regplot( x="trip_distance", y="fare_amount", fit_reg=False, ci=None, truncate=True, data=trips) ax.figure.set_size_inches(10, 8)
Hmm ... do you see something wrong with the data that needs addressing? It appears that we have a lot of invalid data that is being coded as zero distance and some fare amounts that are definitely illegitimate. Let's remove them from our analysis. We can do this by modifying the BigQuery query to keep only trips longer...
%%bigquery trips SELECT FORMAT_TIMESTAMP( "%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime, pickup_longitude, pickup_latitude, dropoff_longitude, dropoff_latitude, passenger_count, trip_distance, tolls_amount, fare_amount, total_amount FROM `nyc-tlc.yellow.trips` ...
Looks good! We now have our ML datasets and are ready to train ML models, validate them and evaluate them. <h3> Benchmark </h3> Before we start building complex ML models, it is a good idea to come up with a very simple model and use that as a benchmark. My model is going to be to simply divide the mean fare_amount by...
def distance_between(lat1, lon1, lat2, lon2): # Haversine formula to compute distance "as the crow flies". lat1_r = np.radians(lat1) lat2_r = np.radians(lat2) lon_diff_r = np.radians(lon2 - lon1) sin_prod = np.sin(lat1_r) * np.sin(lat2_r) cos_prod = np.cos(lat1_r) * np.cos(lat2_r) * np.cos(lon_d...
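The helper above is truncated. Its comment says "Haversine formula", but the sin/cos products it starts computing match the spherical law of cosines; a self-contained sketch completing that form (Earth radius 6371 km assumed):

```python
import numpy as np

def distance_between(lat1, lon1, lat2, lon2):
    """Great-circle distance in km, "as the crow flies" (spherical law of cosines)."""
    lat1_r, lat2_r = np.radians(lat1), np.radians(lat2)
    lon_diff_r = np.radians(lon2 - lon1)
    sin_prod = np.sin(lat1_r) * np.sin(lat2_r)
    cos_prod = np.cos(lat1_r) * np.cos(lat2_r) * np.cos(lon_diff_r)
    # Clip to guard against rounding just outside [-1, 1]
    return 6371 * np.arccos(np.clip(sin_prod + cos_prod, -1, 1))

# One degree of latitude is roughly 111 km
print(round(distance_between(40.0, -74.0, 41.0, -74.0)))  # 111
```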
Here, we visualize the generated multi- and singly-connected BBNs.
with warnings.catch_warnings(): warnings.simplefilter('ignore') plt.figure(figsize=(10, 5)) plt.subplot(121) nx.draw(nx_multi_bbn, with_labels=True, font_weight='bold') plt.title('Multi-connected BBN') plt.subplot(122) nx.draw(nx_singly_bbn, with_labels=True, font_weight='bold') p...
jupyter/generate-bbn.ipynb
vangj/py-bbn
apache-2.0
Now, let's print out the probabilities of each node for the multi- and singly-connected BBNs.
join_tree = InferenceController.apply(m_bbn) for node in join_tree.get_bbn_nodes(): potential = join_tree.get_bbn_potential(node) print(node) print(potential) print('>') join_tree = InferenceController.apply(s_bbn) for node in join_tree.get_bbn_nodes(): potential = join_tree.get_bbn_potential(node)...
Generate a lot of graphs and visualize them
def generate_graphs(n=10, prog='neato', multi=True): d = {} for i in range(n): max_nodes = np.random.randint(3, 8) max_iter = np.random.randint(10, 100) if multi is True: g, p = generate_multi_bbn(max_nodes, max_iter=max_iter) else: g, p = gener...
Efficient Computation of Powers The function power takes two natural numbers $m$ and $n$ and computes $m^n$. Our first implementation is inefficient and takes $n-1$ multiplications to compute $m^n$.
def power(m, n):
    r = 1
    for i in range(n):
        r *= m
    return r

power(2, 3), power(3, 2)

%%time
p = power(3, 500000)
p
Python/Chapter-02/Power.ipynb
karlstroetmann/Algorithms
gpl-2.0
Next, we try a recursive implementation that is based on the following two equations: 1. $m^0 = 1$ 2. $m^n = \begin{cases} m^{n//2} \cdot m^{n//2} & \text{if } n \text{ is even}; \\ m^{n//2} \cdot m^{n//2} \cdot m & \text{if } n \text{ is odd}. \end{cases}$
def power(m, n):
    if n == 0:
        return 1
    p = power(m, n // 2)
    if n % 2 == 0:
        return p * p
    else:
        return p * p * m

%%time
p = power(3, 500000)
LDA Classifier Object & Fit Now we are in a position to run the LDA classifier. This, as you can see from the three lines below, is as easy as it gets.
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA # Create LDA object and run classifier lda = LDA(solver='lsqr') lda = lda.fit(X_train, y_train) lda
0208_LDA-QDA.ipynb
bMzi/ML_in_Finance
mit
The parameter solver='lsqr' specifies the method by which the covariance matrix is estimated. lsqr follows the approach introduced in the preceding subsection. Others such as svd or eigen are available. See Scikit-learn's guide or the function description. Every function in sklearn has different attributes and methods....
lda.covariance_
In a Jupyter notebook, to see all options you can simply type lda. and hit tab. LDA Performance Here are some basic metrics on how the LDA classifier performed on the training data.
print('default-rate: {0: .4f}'.format(np.sum(y_train)/len(y_train))) print('score: {0: .4f}'.format(lda.score(X_train, y_train))) print('error-rate: {0: .4f}'.format(1-lda.score(X_train, y_train)))
Overall, 3.33% of all observations defaulted. If we simply labeled each entry as 'non-default', we would have an error rate of this magnitude. So, in comparison to this naive classifier, LDA seems to have some skill in predicting the default. IMPORTANT NOTE: In order to be in line with James et al. (2015), the text...
# Relabel variables as discussed X_test = X_train y_test = y_train # Predict labels y_pred = lda.predict(X_test) # Sklearn's confusion matrix print(metrics.confusion_matrix(y_test, y_pred)) # Manual confusion matrix as pandas DataFrame confm = pd.DataFrame({'Predicted default status': y_pred, '...
The confusion matrix tells us that for the non-defaulters, LDA only misclassified 22 of them. This is an excellent rate. However, out of the 333 (=253 + 80) people who actually defaulted, LDA classified only 80 correctly. This means our classifier missed out on 76.0% of those who actually defaulted! For a credit card a...
# Calculated posterior probabilities posteriors = lda.predict_proba(X_test) posteriors[:5, :]
The function lda.predict_proba() provides the posterior probabilities of $\Pr(\text{default = 0}|X=x)$ in the first column and $\Pr(\text{default = 1}|X=x)$ in the second. The latter column is what we are interested in. Out of convenience we use sklearn's binarize function to classify all probabilities above the thresh...
from sklearn.preprocessing import binarize # Set threshold and get classes thresh = 0.2 y_pred020 = binarize([posteriors[:, 1]], thresh)[0] # new confusion matrix (threshold of 0.2) print(metrics.confusion_matrix(y_test, y_pred020))
Now LDA misclassifies only 140 out of 333 defaults, or 42.0%. That's a sharp improvement over the 76.0% from before. But this comes at a price: before, of those who did not default, LDA mislabeled only 22 (or 0.2%) incorrectly. This number has now increased to 232 (or 2.4%). Combined, the total error rate increased from 2.75...
# Array of thresholds thresh = np.linspace(0, 0.5, num=100) er = [] # Total error rate der = [] # Defaults error rate nder = [] # Non-Defaults error rate for t in thresh: # Sort/arrange data y_pred_class = binarize([posteriors[:, 1]], t)[0] confm = metrics.confusion_matrix(y_test, y_pred_class) ...
How do we know what threshold value is best? Unfortunately there's no formula for it. "Such a decision must be based on domain knowledge, such as detailed information about costs associated with defaults" (James et al. (2013, p.147)) and it will always be a trade-off: if we increase the threshold we reduce the missed n...
# Assign confusion matrix values to variables confm = metrics.confusion_matrix(y_test, y_pred) print(confm) TP = confm[1, 1] # True positives TN = confm[0, 0] # True negatives FP = confm[0, 1] # False positives FN = confm[1, 0] # False negatives
So far we've encountered the following performance metrics: Score, Error rate, Sensitivity, and Specificity. We briefly recapture their meaning, how they are calculated and how to call them in Scikit-learn. We will make use of the functions in the metrics sublibrary of sklearn. Score Score = (TN + TP) / (TN + TP + FP...
print((TN + TP) / (TN + TP + FP + FN)) print(metrics.accuracy_score(y_test, y_pred)) print(lda.score(X_test, y_test))
Error rate Error rate = 1 - Score or Error rate = (FP + FN) / (TN + TP + FP + FN) Fraction of (overall) incorrectly predicted classes Also known as "Misclassification Rate"
print((FP + FN) / (TN + TP + FP + FN)) print(1 - metrics.accuracy_score(y_test, y_pred)) print(1 - lda.score(X_test, y_test))
Specificity Specificity = TN / (TN + FP) Fraction of correctly predicted negatives (e.g. 'non-defaults')
print(TN / (TN + FP))
Sensitivity or Recall Sensitivity = TP / (TP + FN) Fraction of correctly predicted 'positives' (e.g. 'defaults'). Basically asks the question: "When the actual value is positive, how often is the prediction correct?" Also known as True positive rate Counterpart to Precision
print(TP / (TP + FN)) print(metrics.recall_score(y_test, y_pred))
The above four classification performance metrics we already encountered. There are two more metrics we want to cover: Precision and the F-Score. Precision Precision = TP / (TP + FP) Refers to the accuracy of a positive ('default') prediction. Basically asks the question: "When a positive value is predicted, how often...
print(TP / (TP + FP)) print(metrics.precision_score(y_test, y_pred))
<img src="Graphics/0208_ConfusionMatrixDefault.png" alt="ConfusionMatrixDefault" style="width: 800px;"/> F-Score Van Rijsbergen (1979) introduced a measure that is still widely used to evaluate the accuracy of predictions in two-class (binary) classification problems: the F-Score. It combines Precision and Recall (aka ...
print(metrics.confusion_matrix(y_test, y_pred)) print(metrics.f1_score(y_test, y_pred)) print(((1+1**2) * TP)/((1+1**2) * TP + FN + FP)) print(metrics.classification_report(y_test, y_pred))
Let us compare this to the situation where we set the posterior probability threshold for 'default' at 20%.
# Confusion matrix & clf-report for cut-off # value Pr(default=yes | X = x) > 0.20 print(metrics.confusion_matrix(y_test, y_pred020)) print(metrics.classification_report(y_test, y_pred020))
We see that by reducing the cut-off level from $\Pr(\text{default} = 1| X=x) > 0.5$ to $\Pr(\text{default} = 1| X=x) > 0.2$ precision decreases but recall improves. This changes the $F_1$-score. Does this mean that a threshold of 20% is more appropriate? In general, one could argue for a 'yes'. Yet, as mentioned befor...
# Extract data displayed in above plot precision, recall, threshold = metrics.precision_recall_curve(y_test, posteriors[:, 1]) print('Precision: ', precision) print('Recall: ', recall) print('Threshold: ', threshold)
This one can easily visualize - done in the next code snippet. We also add some more information to the plot by displaying the Average Precision (AP) and the Area under the Curve (AUC). The former summarizes the plot in that it calculates the weighted mean of precisions achieved at each threshold, with the increase in ...
# Calculate the average precisions score y_dec_bry = lda.decision_function(X_test) average_precision = metrics.average_precision_score(y_test, y_dec_bry) # Calculate AUC prec_recall_auc = metrics.auc(recall, precision) # Plot Precision/Recall variations given different # levels of thresholds plt.plot(recall, precisio...
ROC Curve Having introduced the major performance measures, let us now discuss the so-called ROC curve (short for "receiver operating characteristics"). This is a very popular way of visualizing the performance of binary classifiers. Its origins are in signal detection theory during WWII (Flaach (2017)) but it has since...
# Compute ROC curve and ROC area (AUC) for each class fpr, tpr, thresholds = metrics.roc_curve(y_test, posteriors[:, 1]) roc_auc = metrics.auc(fpr, tpr) plt.figure(figsize=(6, 6)) plt.plot(fpr, tpr, lw=2, label='ROC curve (area = {0: 0.2f})'.format(roc_auc)) plt.plot([0, 1], [0, 1], lw=2, c = 'k', linestyle='--') plt....
An AUC value of 0.95 is close to the maximum of 1 and should be deemed very good. The dashed black line puts this in perspective: it represents the "no information" classifier; this is what we would expect if the probability of default is not associated with 'student' status and 'balance'. Such a classifier, that perfo...
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis as QDA # Run qda on training data qda = QDA().fit(X_train, y_train) qda # Predict classes for qda y_pred_qda = qda.predict(X_test) posteriors_qda = qda.predict_proba(X_test)[:, 1] # Print performance metrics print(metrics.confusion_matrix(y_test...
The performance seems to be slightly better than with LDA. Let's plot the ROC curve for both LDA and QDA.
# Compute ROC curve and ROC area (AUC) for each class fpr_qda, tpr_qda, _ = metrics.roc_curve(y_test, posteriors_qda) roc_auc_qda = metrics.auc(fpr_qda, tpr_qda) plt.figure(figsize=(6, 6)) plt.plot(fpr, tpr, lw=2, label='LDA ROC (AUC = {0: 0.2f})'.format(roc_auc)) plt.plot(fpr_qda, tpr_qda, lw=2, label='QDA ROC (AUC =...
With respect to Sensitivity (Recall) and Specificity, LDA and QDA perform virtually identically. Therefore, one might give the edge here to QDA because of its slightly better Recall and $F_1$-Score. Reality and the Gaussian Assumption for LDA & QDA Despite the rather strict assumptions regarding normal distribution, LDA a...
lda.predict(X_test)[:10]
classifier.predict_proba() we have also introduced above: it provides probabilities of $\Pr(y = 0|X=x)$ in the first column and $\Pr(y = 1|X=x)$ in the second.
lda.predict_proba(X_test)[:10]
Finally, classifier.decision_function() predicts confidence scores given the feature matrix. The confidence score for each sample is the signed distance of that sample to the hyperplane. What this exactly means should become clearer once we have discussed the support vector classifier (SVC).
lda.decision_function(X_test)[:10]
ROC & Precision-Recall Curve in Sklearn Version 0.22.1 Starting with Scikit-learn version 0.22.1 the plotting of the ROC and Precision-Recall Curve was integrated into Scikit-learn and there's now a function available to cut the plotting work a bit short. Below two code snippets that show how to do it.
# Plot Precision-Recall Curve disp = metrics.plot_precision_recall_curve(lda, X_test, y_test); disp = metrics.plot_roc_curve(lda, X_test, y_test);
0208_LDA-QDA.ipynb
bMzi/ML_in_Finance
mit
Question 1
df['temperature'].hist()
statistics project 1/sliderule_dsi_inferential_statistics_exercise_1.ipynb
ThomasProctor/Slide-Rule-Data-Intensive
mit
No, this sample isn't normal; it is definitely skewed. However, "this is a condition for the CLT... to apply" is just wrong. The whole power of the CLT is that it says that the distribution of sample means (not the sample distribution) tends to a normal distribution regardless of the distribution of the population or sa...
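To see the CLT in action, here is a quick numpy sketch (independent of the body-temperature data) drawing samples of size 130 from a strongly skewed population:

```python
import numpy as np

rng = np.random.RandomState(42)

# A strongly skewed population (exponential); the population is NOT normal
population = rng.exponential(scale=1.0, size=200_000)

# Distribution of means of samples of size 130 (the notebook's sample size)
sample_means = rng.choice(population, size=(5000, 130)).mean(axis=1)

# The sample means cluster tightly around the population mean,
# with spread close to sigma/sqrt(n), exactly as the CLT predicts
predicted_se = population.std() / np.sqrt(130)
```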
m=df['temperature'].mean() m
statistics project 1/sliderule_dsi_inferential_statistics_exercise_1.ipynb
ThomasProctor/Slide-Rule-Data-Intensive
mit
With 130 data points, it really doesn't matter if we use the normal or t distribution. A t distribution with 129 degrees of freedom is essentially a normal distribution, so the results should not be very different. However, in this day and age I don't see the purpose of even bothering with the normal distribution. Looki...
from scipy.stats import t, norm from math import sqrt patients=df.shape[0] n=patients-1 patients SE=df['temperature'].std()/sqrt(n) SE
statistics project 1/sliderule_dsi_inferential_statistics_exercise_1.ipynb
ThomasProctor/Slide-Rule-Data-Intensive
mit
Our null hypothesis is that the true average body temperature is $98.6^\circ F$. We'll be calculating the probability of finding a value less than or equal to the mean we obtained in this data given that this null hypothesis is true, i.e. our alternative hypothesis is that the true average body temperature is less than...
t.cdf((m-98.6)/SE,n) norm.cdf((m-98.6)/SE)
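The same one-sided p-value can be obtained from scipy's built-in one-sample t-test. A sketch on synthetic data (the temperatures below are made up): `ttest_1samp` returns a two-sided p-value, which we halve because the sample mean lies below the hypothesized value.

```python
import numpy as np
from scipy.stats import t, ttest_1samp

rng = np.random.RandomState(0)
sample = rng.normal(loc=98.25, scale=0.73, size=130)  # synthetic temperatures

mu0 = 98.6
se = sample.std(ddof=1) / np.sqrt(len(sample))
p_manual = t.cdf((sample.mean() - mu0) / se, df=len(sample) - 1)

stat, p_two_sided = ttest_1samp(sample, mu0)
p_builtin = p_two_sided / 2  # valid here because stat < 0
```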
statistics project 1/sliderule_dsi_inferential_statistics_exercise_1.ipynb
ThomasProctor/Slide-Rule-Data-Intensive
mit
Regardless of what distribution we assume we are drawing our sample means from, the probability of seeing this data or averages less than it if the true average body temperature was 98.6 is basically zero. Question 3
print(m+t.ppf(0.95,n)*SE) print(m-t.ppf(0.95,n)*SE) t.ppf(0.95,n)*SE
statistics project 1/sliderule_dsi_inferential_statistics_exercise_1.ipynb
ThomasProctor/Slide-Rule-Data-Intensive
mit
Our estimate of the true average human body temperature is thus $98.2^\circ F \pm 0.1$. This confidence interval, however, does not answer the question 'At what temperature should we consider someone's temperature to be "abnormal"?'. We can look at the population distribution, and see right away that the majority of ou...
df['temperature'].quantile([.1,.9])
statistics project 1/sliderule_dsi_inferential_statistics_exercise_1.ipynb
ThomasProctor/Slide-Rule-Data-Intensive
mit
This range, 97.29-99.10 degrees F includes 80% of the patients in our sample. This shows the dramatic difference between the population distribution and the sample distribution of the mean; we looked at the sample distribution (from the confidence interval), and found that 90% of the population fell within a $\pm 0.1^\...
males=df[df['gender']=='M'] males.describe() females=df[df['gender']=='F'] females.describe() SEgender=sqrt(females['temperature'].var()/females.shape[0]+males['temperature'].var()/males.shape[0]) SEgender mgender=females['temperature'].mean()-males['temperature'].mean() mgender 2*(1-t.cdf(mgender/SEgender,21))
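The manual calculation can be cross-checked against scipy's Welch two-sample t-test. A sketch on synthetic male/female temperatures (not the notebook's dataframe):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.RandomState(7)
male_temps = rng.normal(98.1, 0.70, size=65)
female_temps = rng.normal(98.4, 0.74, size=65)

# Welch's t-test does not assume equal variances in the two groups
stat, p_value = ttest_ind(female_temps, male_temps, equal_var=False)

# Manual Welch statistic for comparison
se = np.sqrt(female_temps.var(ddof=1) / 65 + male_temps.var(ddof=1) / 65)
stat_manual = (female_temps.mean() - male_temps.mean()) / se
```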
statistics project 1/sliderule_dsi_inferential_statistics_exercise_1.ipynb
ThomasProctor/Slide-Rule-Data-Intensive
mit
Now we need to import the library in our notebook. There are a number of different ways to do it, depending on what part of matplotlib we want to import and how it should be imported into the namespace. This is one of the most common ones; it means that we will use the plt. prefix to refer to the Matplotlib API
import matplotlib.pyplot as plt
vmfiles/IPNB/Examples/a Basic/03 Matplotlib essentials.ipynb
paulovn/ml-vm-notebook
bsd-3-clause
Matplotlib allows extensive customization of the graph aspect. Some of these customizations come together in "styles". Let's see which styles are available:
from __future__ import print_function print(plt.style.available) # Let's choose one style. And while we are at it, define thicker lines and big graphic sizes plt.style.use('bmh') plt.rcParams['lines.linewidth'] = 1.5 plt.rcParams['figure.figsize'] = (15, 5)
vmfiles/IPNB/Examples/a Basic/03 Matplotlib essentials.ipynb
paulovn/ml-vm-notebook
bsd-3-clause
Simple plots Without much more ado, let's display a simple graphic. For that we define a vector variable, and a function of that vector to be plotted
import numpy as np x = np.arange( -10, 11 ) y = x*x
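The vectors can then be displayed with the declarative plotting command described below. A self-contained sketch (the Agg backend is selected here only so it runs headless; drop that line in a live notebook):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for non-interactive runs
import matplotlib.pyplot as plt
import numpy as np

x = np.arange(-10, 11)
y = x * x

plt.plot(x, y)          # declarative (Matlab-style) call
plt.xlabel("x")
plt.ylabel("y = x^2")
ax = plt.gca()
```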
vmfiles/IPNB/Examples/a Basic/03 Matplotlib essentials.ipynb
paulovn/ml-vm-notebook
bsd-3-clause
Matplotlib syntax Matplotlib commands have two variants: * A declarative syntax, with direct plotting commands. It is inspired by Matlab graphics syntax, so if you know Matlab it will be easy. It is the one used above. * An object-oriented syntax, more complicated but somewhat more powerful The next cell shows an exam...
# Create a figure object fig = plt.figure() # Add a graph to the figure. We get an axes object ax = fig.add_subplot(1, 1, 1) # specify (nrows, ncols, axnum) # Create two vectors: x, y x = np.linspace(0, 10, 1000) y = np.sin(x) # Plot those vectors on the axes we have ax.plot(x, y) # Add another plot to the same a...
vmfiles/IPNB/Examples/a Basic/03 Matplotlib essentials.ipynb
paulovn/ml-vm-notebook
bsd-3-clause
Tabular data with independent (Shapley value) masking
# build a Permutation explainer and explain the model predictions on the given dataset explainer = shap.explainers.Permutation(model.predict_proba, X) shap_values = explainer(X[:100]) # get just the explanations for the positive class shap_values = shap_values[...,1]
notebooks/api_examples/explainers/Permutation.ipynb
slundberg/shap
mit
Tabular data with partition (Owen value) masking While Shapley values result from treating each feature independently of the other features, it is often useful to enforce a structure on the model inputs. Enforcing such a structure produces a structure game (i.e. a game with rules about valid input feature coalitions), ...
# build a clustering of the features based on shared information about y clustering = shap.utils.hclust(X, y) # above we implicitly used shap.maskers.Independent by passing a raw dataframe as the masker # now we explicitly use a Partition masker that uses the clustering we just computed masker = shap.maskers.Partition...
notebooks/api_examples/explainers/Permutation.ipynb
slundberg/shap
mit
Now, we create a TranslationModel instance:
nmt_model = TranslationModel(params, model_type='GroundHogModel', model_name='tutorial_model', vocabularies=dataset.vocabulary, store_path='trained_models/tutorial_model/', v...
examples/2_training_tutorial.ipynb
Sasanita/nmt-keras
mit
Now, we must define the inputs and outputs mapping from our Dataset instance to our model
inputMapping = dict() for i, id_in in enumerate(params['INPUTS_IDS_DATASET']): pos_source = dataset.ids_inputs.index(id_in) id_dest = nmt_model.ids_inputs[i] inputMapping[id_dest] = pos_source nmt_model.setInputsMapping(inputMapping) outputMapping = dict() for i, id_out in enumerate(params['OUTPUTS_IDS_DAT...
examples/2_training_tutorial.ipynb
Sasanita/nmt-keras
mit
We can add some callbacks for controlling the training (e.g. Sampling each N updates, early stop, learning rate annealing...). For instance, let's build an Early-Stop callback. Every 2 epochs, it will compute the 'coco' scores on the development set. If the metric 'Bleu_4' doesn't improve for more than 5 checki...
extra_vars = {'language': 'en', 'n_parallel_loaders': 8, 'tokenize_f': eval('dataset.' + 'tokenize_none'), 'beam_size': 12, 'maxlen': 50, 'model_inputs': ['source_text', 'state_below'], 'model_outputs': ['target_text'], 'd...
examples/2_training_tutorial.ipynb
Sasanita/nmt-keras
mit
Now we are almost ready to train. We set up some training parameters...
training_params = {'n_epochs': 100, 'batch_size': 40, 'maxlen': 30, 'epochs_for_save': 1, 'verbose': 0, 'eval_on_sets': [], 'n_parallel_loaders': 8, 'extra_callbacks': callbacks, ...
examples/2_training_tutorial.ipynb
Sasanita/nmt-keras
mit
The TicTaeToe Game
from IPython.display import Image Image(filename='images/TicTaeToe.png')
midterm/kookmin_midterm_정인환.ipynb
initialkommit/kookmin
mit
This is a simple implementation of the TicTaeToe game in which the user moves first and plays it out against the computer. In the future I plan to extend it with machine learning to improve its playing strength.
# %load TicTaeToe.py import sys import random # How to play print("Source: http://www.practicepython.org") print("==================================") print("Be the first to connect three marks") print("in a row horizontally, vertically,") print("or diagonally to win. The user (U)") print("and the Computer (C) take turns.") print("==================================\n") # ...
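The core of such a game is the three-in-a-row check. A minimal self-contained sketch (the board encoding below is an assumption for illustration, not taken from TicTaeToe.py):

```python
def winner(board):
    """Return 'U', 'C', or None for a 3x3 board given as a list of 3 strings.

    Cells hold 'U' (user), 'C' (computer) or ' ' (empty).
    """
    lines = []
    lines.extend(board)                                       # rows
    lines.extend("".join(col) for col in zip(*board))         # columns
    lines.append("".join(board[i][i] for i in range(3)))      # main diagonal
    lines.append("".join(board[i][2 - i] for i in range(3)))  # anti-diagonal
    for line in lines:
        if line == "UUU":
            return "U"
        if line == "CCC":
            return "C"
    return None
```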
midterm/kookmin_midterm_정인환.ipynb
initialkommit/kookmin
mit
Problem 1.1) Make a simple mean coadd Simulate N observations of a star, and coadd them by taking the mean of the N observations. (We can only do this because they are already astrometrically and photometrically aligned and have the same background value.)
MU = 35 S = 100 F = 100 FWHM = 5 x = np.arange(100) # simulate a single observation of the star and plot: y = # complete pixel_plot(x, y) # Write a simulateN function that returns an array of size (N, x) # representing N realizations of your simulated star # This will stand in as a stack of multiple observations of ...
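One possible solution sketch, assuming a 1-D Gaussian star profile on a flat background with Gaussian sky noise (the exact model, and what pixel_plot expects, are assumptions):

```python
import numpy as np

MU, S, F, FWHM = 35, 100, 100, 5  # background, sky variance, flux, seeing
x = np.arange(100)

def simulate(x, mu=MU, s=S, f=F, fwhm=FWHM, rng=None):
    """One noisy 1-D observation: background + Gaussian star + sky noise."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = fwhm / 2.3548  # FWHM -> Gaussian sigma
    star = f * np.exp(-0.5 * ((x - 50) / sigma) ** 2)
    return mu + star + rng.normal(0, np.sqrt(s), size=x.size)

def simulateN(x, N, rng=None):
    """Stack of N observations, shape (N, len(x))."""
    rng = np.random.default_rng(0) if rng is None else rng
    return np.array([simulate(x, rng=rng) for _ in range(N)])

stack = simulateN(x, 50)
coadd = stack.mean(axis=0)  # simple mean coadd
```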
Sessions/Session11/Day4/CoadditionAndSubtraction.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 1.2) SNR vs N Now compute the observed SNR of the simulated star on each coadd and compare to the expected SNR in the idealized case. The often repeated mnemonic for the SNR increase as a function of the number of images, $N$, is that the noise decreases like $\sqrt{N}$. This is of course the idealized case where the noise in...
# complete # hint. One way to start this # std = [] # flux = [] # Ns = np.arange(1, 1000, 5) # for N in Ns: # y = simulateN(...) # complete # plt.plot(Ns, ..., label="coadd") # plt.plot(Ns, ..., label="expected") # plt.xlabel('N') # plt.ylabel('pixel noise') # plt.legend() # complete # plt.plot(Ns, ......
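A sketch of the expected scaling on pure-noise pixels (no star), which isolates the $1/\sqrt{N}$ behavior of the coadd noise:

```python
import numpy as np

S = 100  # sky-noise variance per pixel
rng = np.random.default_rng(1)

Ns = [1, 4, 16, 64, 256]
measured = []
for N in Ns:
    # coadd N pure-noise images of 10_000 pixels and measure the pixel noise
    stack = rng.normal(0, np.sqrt(S), size=(N, 10_000))
    measured.append(stack.mean(axis=0).std())

expected = [np.sqrt(S) / np.sqrt(N) for N in Ns]
```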
Sessions/Session11/Day4/CoadditionAndSubtraction.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 2) PSFs and Image weights in coadds Problem (1) pretends that the input images are identical in quality, however this is never the case in practice. In practice, adding another image does not necessarily increase the SNR. For example, imagine you have two exposures, but in one the dome light was accidentally le...
# complete
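The textbook result that inverse-variance weights minimize the coadd variance can be sketched directly; here the "dome light" exposure is simply an image with much larger noise (values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n_pix = 50_000

sigmas = np.array([10.0, 100.0])  # a good image and a 'dome light' image
stack = np.array([rng.normal(0, s, n_pix) for s in sigmas])

# Equal weights vs inverse-variance weights (both normalized to sum to 1)
w_equal = np.array([0.5, 0.5])
w_ivar = (1 / sigmas**2) / np.sum(1 / sigmas**2)

coadd_equal = (w_equal[:, None] * stack).sum(axis=0)
coadd_ivar = (w_ivar[:, None] * stack).sum(axis=0)
```

Note that the inverse-variance coadd is even slightly less noisy than the good image alone, while the equal-weight coadd is dominated by the bad image.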
Sessions/Session11/Day4/CoadditionAndSubtraction.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 2.2 Weighting images in Variable Seeing Simulate 50 observations with FWHM's ranging from 2-10 pixels. Keep the flux amplitude F, and sky noise S both fixed. Generate two coadds, (1) with the weights that minimize variance and (2) with the weights that maximize point source SNR. Weights should add up to 1. Pl...
# complete
Sessions/Session11/Day4/CoadditionAndSubtraction.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 2.3 Image variance vs per pixel variance (Challenge Problem) Why do we use per image variances instead of per pixel variances? Let's see! Start tracking the per-pixel variance when you simulate the star. Make a coadd of 200 observations with FWHM's ranging from 2 to 20 pixels. Make a coadd weighted by the per-...
# complete
Sessions/Session11/Day4/CoadditionAndSubtraction.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 3.2) What if we have amazing astrometric registration and shrink the astrometric offset by a factor of a thousand? Is there a star sufficiently bright to produce the same dipole? What is its FLUX SCALE?
ASTROM_OFFSET = 0.0001 PSF = 1. # complete # Plot both dipoles (for the offset=0.1 and the offset=0.0001 in the same figure. # Same or different subplots up to you.
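A minimal sketch of how a dipole arises from an astrometric offset: subtracting two identical Gaussian stars shifted by a small offset leaves a positive and a negative lobe (the 1-D model below is an assumption for illustration):

```python
import numpy as np

PSF = 1.0            # Gaussian sigma of the PSF, in pixels
ASTROM_OFFSET = 0.1  # misregistration between the two epochs

x = np.linspace(-10, 10, 2001)

def star(center, flux=1.0, sigma=PSF):
    return flux * np.exp(-0.5 * ((x - center) / sigma) ** 2)

# Difference image: same star, slightly misregistered between the two epochs
dipole = star(0.0) - star(ASTROM_OFFSET)
```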
Sessions/Session11/Day4/CoadditionAndSubtraction.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 3.3) Distance between peaks. Does the distance between the dipole's positive and negative peaks depend on the astrometric offset? If not, what does it depend on? You can answer this by visualizing the dipoles vs offsets. But for a challenge, measure the distance between peaks and plot them as a function of ast...
# complete
Sessions/Session11/Day4/CoadditionAndSubtraction.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Matrix creation
B = np.arange(9).reshape(3, 3) print(B) A = np.array([ [-1, 42, 10], [12, 0, 9] ]) print(A) # inspecting the matrices print(A.shape) # 2 x 3 print(B.shape) # 3 x 3 # We have 2 dimensions `X1` and `X2` print(A.ndim) print(B.ndim) Zeros = np.zeros((2, 3)) print(Zeros) Ones = np.ones((3, 3)) print(Ones) Empt...
disciplines/SME0819 - Matrices for Applied Statistics/0x00_Fundamentals/Matrices - Fundamentals.ipynb
jhonatancasale/graduation-pool
apache-2.0
Vector creation
print(np.arange(5, 30, 7)) print(np.arange(10, 13, .3)) print(np.linspace(0, 2, 13))
disciplines/SME0819 - Matrices for Applied Statistics/0x00_Fundamentals/Matrices - Fundamentals.ipynb
jhonatancasale/graduation-pool
apache-2.0
np.arange behavior with large arrays
print(np.arange(10000)) print(np.arange(10000).reshape(100,100))
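NumPy abbreviates the printout of arrays above its print threshold (1000 elements by default), replacing the middle with an ellipsis. A quick check:

```python
import numpy as np

big = np.arange(10_000)

# The repr keeps the first and last few elements and elides the rest
text = repr(big)
```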
disciplines/SME0819 - Matrices for Applied Statistics/0x00_Fundamentals/Matrices - Fundamentals.ipynb
jhonatancasale/graduation-pool
apache-2.0
Basic Operations $$A_{mxn} \pm B_{mxn} \mapsto C_{mxn}$$ $$u_{1xn} \pm v_{1xn} \mapsto w_{1xn} \quad (u_n \pm v_n \mapsto w_n)$$
A = np.array([10, 20, 30, 40, 50, -1]) B = np.linspace(0, 1, A.size) print("{} + {} -> {}".format(A, B, A + B)) print("{} - {} -> {}".format(A, B, A - B))
disciplines/SME0819 - Matrices for Applied Statistics/0x00_Fundamentals/Matrices - Fundamentals.ipynb
jhonatancasale/graduation-pool
apache-2.0
$$f:M_{mxn} \to M_{mxn}$$ $$a_{ij} \mapsto a_{ij}^2$$
print("{} ** 2 -> {}".format(A, A ** 2))
disciplines/SME0819 - Matrices for Applied Statistics/0x00_Fundamentals/Matrices - Fundamentals.ipynb
jhonatancasale/graduation-pool
apache-2.0
$$f:M_{mxn} \to M_{mxn}$$ $$a_{ij} \mapsto 2\sin(a_{ij})$$
print("2 * sin({}) -> {}".format(A, 2 * np.sin(A)))
disciplines/SME0819 - Matrices for Applied Statistics/0x00_Fundamentals/Matrices - Fundamentals.ipynb
jhonatancasale/graduation-pool
apache-2.0