Next, let's build the dictionaries that convert indices back to words for the target and source vocabularies. Inference Model. Encoder Inference:
# encoder inference latent_dim=500 #/content/gdrive/MyDrive/Text Summarizer/ #load the model model = models.load_model("Text_Summarizer.h5") #construct encoder model from the output of 6 layer i.e.last LSTM layer en_outputs,state_h_enc,state_c_enc = model.layers[6].output en_states=[state_h_enc,state_c_enc] #add input and state from the layer. en_model = Model(model.input[0],[en_outputs]+en_states)
_____no_output_____
MIT
text-summarization-attention-mechanism.ipynb
buddhadeb33/Text-Summarization-Attention-Mechanism
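The index-to-word dictionaries mentioned above are only built later, inside the full decoding cell; as a minimal sketch (using the `in_tokenizer` and `tr_tokenizer` objects that the notebook assumes were fitted earlier), they look like this:
# Sketch: reverse lookup tables from the fitted Keras tokenizers
reverse_source_word_index = in_tokenizer.index_word   # index -> word (source/input vocabulary)
reverse_target_word_index = tr_tokenizer.index_word   # index -> word (target/summary vocabulary)
target_word_index = tr_tokenizer.word_index           # word -> index, used to look up 'sos'/'eos'
The decode_sequence function further below uses exactly these lookups to turn predicted indices back into words.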
Decoder Inference:
# decoder inference #create Input object for hidden and cell state for decoder #shape of layer with hidden or latent dimension dec_state_input_h = Input(shape=(latent_dim,)) dec_state_input_c = Input(shape=(latent_dim,)) dec_hidden_state_input = Input(shape=(max_in_len,latent_dim)) # Get the embeddings and input layer from the model dec_inputs = model.input[1] dec_emb_layer = model.layers[5] dec_lstm = model.layers[7] dec_embedding= dec_emb_layer(dec_inputs) #add input and initialize LSTM layer with encoder LSTM states. dec_outputs2, state_h2, state_c2 = dec_lstm(dec_embedding, initial_state=[dec_state_input_h,dec_state_input_c])
_____no_output_____
MIT
text-summarization-attention-mechanism.ipynb
buddhadeb33/Text-Summarization-Attention-Mechanism
Attention Inference:
#Attention layer attention = model.layers[8] attn_out2 = attention([dec_outputs2,dec_hidden_state_input]) merge2 = Concatenate(axis=-1)([dec_outputs2, attn_out2])
_____no_output_____
MIT
text-summarization-attention-mechanism.ipynb
buddhadeb33/Text-Summarization-Attention-Mechanism
Dense layer
#Dense layer dec_dense = model.layers[10] dec_outputs2 = dec_dense(merge2) # Finally define the Model Class dec_model = Model( [dec_inputs] + [dec_hidden_state_input,dec_state_input_h,dec_state_input_c], [dec_outputs2] + [state_h2, state_c2]) #create a dictionary with a key as index and value as words. reverse_target_word_index = tr_tokenizer.index_word reverse_source_word_index = in_tokenizer.index_word target_word_index = tr_tokenizer.word_index def decode_sequence(input_seq): # get the encoder output and states by passing the input sequence en_out, en_h, en_c = en_model.predict(input_seq) # target sequence with inital word as 'sos' target_seq = np.zeros((1, 1)) target_seq[0, 0] = target_word_index['sos'] # if the iteration reaches the end of text than it will be stop the iteration stop_condition = False # append every predicted word in decoded sentence decoded_sentence = "" while not stop_condition: # get predicted output, hidden and cell state. output_words, dec_h, dec_c = dec_model.predict([target_seq] + [en_out, en_h, en_c]) # get the index and from the dictionary get the word for that index. word_index = np.argmax(output_words[0, -1, :]) text_word = reverse_target_word_index[word_index] decoded_sentence += text_word + " " # Exit condition: either hit max length # or find a stop word or last word. if text_word == "eos" or len(decoded_sentence) > max_tr_len: stop_condition = True # update target sequence to the current word index. target_seq = np.zeros((1, 1)) target_seq[0, 0] = word_index en_h, en_c = dec_h, dec_c # return the deocded sentence return decoded_sentence # inp_review = input("Enter : ") inp_review = "Both the Google platforms provide a great cloud environment for any ML work to be deployed to. The features of them both are equally competent. Notebooks can be downloaded and later uploaded between the two. However, Colab comparatively provides greater flexibility to adjust the batch sizes.Saving or storing of models is easier on Colab since it allows them to be saved and stored to Google Drive. Also if one is using TensorFlow, using TPUs would be preferred on Colab. It is also faster than Kaggle. For a use case demanding more power and longer running processes, Colab is preferred." print("Review :", inp_review) inp_review = clean(inp_review, "inputs") inp_review = ' '.join(inp_review) inp_x = in_tokenizer.texts_to_sequences([inp_review]) inp_x = pad_sequences(inp_x, maxlen=max_in_len, padding='post') summary = decode_sequence(inp_x.reshape(1, max_in_len)) if 'eos' in summary: summary = summary.replace('eos', '') print("\nPredicted summary:", summary); print("\n")
Review : Both the Google platforms provide a great cloud environment for any ML work to be deployed to. The features of them both are equally competent. Notebooks can be downloaded and later uploaded between the two. However, Colab comparatively provides greater flexibility to adjust the batch sizes.Saving or storing of models is easier on Colab since it allows them to be saved and stored to Google Drive. Also if one is using TensorFlow, using TPUs would be preferred on Colab. It is also faster than Kaggle. For a use case demanding more power and longer running processes, Colab is preferred. Predicted summary: great
MIT
text-summarization-attention-mechanism.ipynb
buddhadeb33/Text-Summarization-Attention-Mechanism
Brainstorm CTF phantom tutorial dataset. Here we compute the evoked from raw for the Brainstorm CTF phantom tutorial dataset. For comparison, see [1]_ and: http://neuroimage.usc.edu/brainstorm/Tutorials/PhantomCtf References: [1] Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM. Brainstorm: A User-Friendly Application for MEG/EEG Analysis. Computational Intelligence and Neuroscience, vol. 2011, Article ID 879716, 13 pages, 2011. doi:10.1155/2011/879716
# Authors: Eric Larson <larson.eric.d@gmail.com> # # License: BSD (3-clause) import os.path as op import numpy as np import matplotlib.pyplot as plt import mne from mne import fit_dipole from mne.datasets.brainstorm import bst_phantom_ctf from mne.io import read_raw_ctf print(__doc__)
_____no_output_____
BSD-3-Clause
0.15/_downloads/plot_brainstorm_phantom_ctf.ipynb
drammock/mne-tools.github.io
The data were collected with a CTF system at 2400 Hz.
data_path = bst_phantom_ctf.data_path() # Switch to these to use the higher-SNR data: # raw_path = op.join(data_path, 'phantom_200uA_20150709_01.ds') # dip_freq = 7. raw_path = op.join(data_path, 'phantom_20uA_20150603_03.ds') dip_freq = 23. erm_path = op.join(data_path, 'emptyroom_20150709_01.ds') raw = read_raw_ctf(raw_path, preload=True)
_____no_output_____
BSD-3-Clause
0.15/_downloads/plot_brainstorm_phantom_ctf.ipynb
drammock/mne-tools.github.io
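As a quick sanity check on the sampling rate mentioned above (a small addition, not part of the original tutorial), the loaded Raw object stores it in its measurement info:
# Confirm the acquisition sampling rate of the loaded recording
print('Sampling rate: %s Hz' % raw.info['sfreq'])  # expected to be 2400.0 for this dataset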
The sinusoidal signal is generated on channel HDAC006, so we can use that to obtain precise timing.
sinusoid, times = raw[raw.ch_names.index('HDAC006-4408')] plt.figure() plt.plot(times[times < 1.], sinusoid.T[times < 1.])
_____no_output_____
BSD-3-Clause
0.15/_downloads/plot_brainstorm_phantom_ctf.ipynb
drammock/mne-tools.github.io
Let's create some events using this signal by thresholding the sinusoid.
events = np.where(np.diff(sinusoid > 0.5) > 0)[1] + raw.first_samp events = np.vstack((events, np.zeros_like(events), np.ones_like(events))).T
_____no_output_____
BSD-3-Clause
0.15/_downloads/plot_brainstorm_phantom_ctf.ipynb
drammock/mne-tools.github.io
The CTF software compensation works reasonably well:
raw.plot()
_____no_output_____
BSD-3-Clause
0.15/_downloads/plot_brainstorm_phantom_ctf.ipynb
drammock/mne-tools.github.io
But here we can get slightly better noise suppression, lower localization bias, and a better dipole goodness of fit with spatio-temporal (tSSS) Maxwell filtering:
raw.apply_gradient_compensation(0) # must un-do software compensation first mf_kwargs = dict(origin=(0., 0., 0.), st_duration=10.) raw = mne.preprocessing.maxwell_filter(raw, **mf_kwargs) raw.plot()
_____no_output_____
BSD-3-Clause
0.15/_downloads/plot_brainstorm_phantom_ctf.ipynb
drammock/mne-tools.github.io
Our choice of tmin and tmax should capture exactly one cycle, so we can make the unusual choice of baselining using the entire epoch when creating our evoked data. We also then crop to a single time point (t=0) because this is a peak in our signal.
tmin = -0.5 / dip_freq tmax = -tmin epochs = mne.Epochs(raw, events, event_id=1, tmin=tmin, tmax=tmax, baseline=(None, None)) evoked = epochs.average() evoked.plot() evoked.crop(0., 0.)
_____no_output_____
BSD-3-Clause
0.15/_downloads/plot_brainstorm_phantom_ctf.ipynb
drammock/mne-tools.github.io
Let's use a sphere head geometry model and check the coordinate alignment and the sphere location.
sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=None) mne.viz.plot_alignment(raw.info, subject='sample', meg='helmet', bem=sphere, dig=True, surfaces=['brain']) del raw, epochs
_____no_output_____
BSD-3-Clause
0.15/_downloads/plot_brainstorm_phantom_ctf.ipynb
drammock/mne-tools.github.io
To do a dipole fit, let's use the covariance provided by the empty room recording.
raw_erm = read_raw_ctf(erm_path).apply_gradient_compensation(0) raw_erm = mne.preprocessing.maxwell_filter(raw_erm, coord_frame='meg', **mf_kwargs) cov = mne.compute_raw_covariance(raw_erm) del raw_erm dip, residual = fit_dipole(evoked, cov, sphere)
_____no_output_____
BSD-3-Clause
0.15/_downloads/plot_brainstorm_phantom_ctf.ipynb
drammock/mne-tools.github.io
Compare the actual position with the estimated one.
expected_pos = np.array([18., 0., 49.]) diff = np.sqrt(np.sum((dip.pos[0] * 1000 - expected_pos) ** 2)) print('Actual pos: %s mm' % np.array_str(expected_pos, precision=1)) print('Estimated pos: %s mm' % np.array_str(dip.pos[0] * 1000, precision=1)) print('Difference: %0.1f mm' % diff) print('Amplitude: %0.1f nAm' % (1e9 * dip.amplitude[0])) print('GOF: %0.1f %%' % dip.gof[0])
_____no_output_____
BSD-3-Clause
0.15/_downloads/plot_brainstorm_phantom_ctf.ipynb
drammock/mne-tools.github.io
Brackets: https://app.codility.com/programmers/lessons/7-stacks_and_queues/brackets/
from typing import List def solution(S) : stack : List[str] = [] for ch in S : if ch == '{' or ch == '(' or ch == '[' : stack.append(ch) else: if len(stack) == 0 : return 0 lastch = stack.pop() if ch == '}' and lastch != '{' : return 0 if ch == ')' and lastch != '(' : return 0 if ch == ']' and lastch != '[' : return 0 return 0 if (len(stack) > 0) else 1 assert(solution("{[()()]}") == 1) assert(solution("([)()]") == 0) assert(solution("") == 1) assert(solution("[]{}") == 1) assert(solution("[]{") == 0)
_____no_output_____
Apache-2.0
codility-lessons/7 Stacks and Queues.ipynb
stanislawbartkowski/learnml
Fish: https://app.codility.com/programmers/lessons/7-stacks_and_queues/fish/
from typing import List def solution(A : List[int], B : List[int]) -> int : assert(len(A) == len(B)) stackup : List[int] = [] eatenfish : int = 0 for i in range(len(B)) : if B[i] == 1 : stackup.append(A[i]) else : while len(stackup) > 0 : eatenfish += 1 currup : int = stackup.pop() if currup > A[i] : stackup.append(currup) break return len(A) - eatenfish assert(solution([4,3,2,1,5],[0,1,0,0,0]) == 2) assert(solution([4],[1])==1)
_____no_output_____
Apache-2.0
codility-lessons/7 Stacks and Queues.ipynb
stanislawbartkowski/learnml
Nesting: https://app.codility.com/programmers/lessons/7-stacks_and_queues/nesting/
def solution(S : str) -> int : numof : int = 0 for c in S : if c == "(" : numof += 1 else: if numof == 0 : return 0 numof -= 1 return 1 if numof == 0 else 0 assert (solution("(()(())())") == 1) assert (solution("())") == 0) assert (solution("") == 1)
_____no_output_____
Apache-2.0
codility-lessons/7 Stacks and Queues.ipynb
stanislawbartkowski/learnml
StoneWall: https://app.codility.com/programmers/lessons/7-stacks_and_queues/stone_wall/
from typing import List def solution(H : List[int]) -> int : assert(len(H) > 0) stack : List[int] = [] no : int = 0 for i in range(len(H)) : while (len(stack) > 0) and H[i] < stack[len(stack) -1] : no += 1 stack.pop() if len(stack) == 0 or H[i] > stack[len(stack) -1] : stack.append(H[i]) return no + len(stack) assert(solution([8,8,5,7,9,8,7,4,8]) ==7) assert(solution([8,8,5,7,9,8,7,8,4]) == 7) assert(solution([8,8]) == 1)
_____no_output_____
Apache-2.0
codility-lessons/7 Stacks and Queues.ipynb
stanislawbartkowski/learnml
Ensemble Learning. Initial Imports
import warnings warnings.filterwarnings('ignore') import numpy as np import pandas as pd from pathlib import Path from collections import Counter from sklearn.metrics import balanced_accuracy_score from sklearn.metrics import confusion_matrix from imblearn.metrics import classification_report_imbalanced
_____no_output_____
ADSL
credit_risk_ensemble.ipynb
THaoV1001/Classification-Homework
Read the CSV and Perform Basic Data Cleaning
# Load the data file_path = Path('lending_data.csv') df = pd.read_csv(file_path) # Preview the data df.head() # homeowner column is categorical, change to numerical so it can be scaled later on from sklearn.preprocessing import LabelEncoder label_encoder = LabelEncoder() label_encoder.fit(df["homeowner"]) df["homeowner"] = label_encoder.transform(df["homeowner"]) df.head()
_____no_output_____
ADSL
credit_risk_ensemble.ipynb
THaoV1001/Classification-Homework
Split the Data into Training and Testing
# Create our features X = df.drop(columns="loan_status") # Create our target y = df["loan_status"].to_frame() X.describe() # Check the balance of our target values y['loan_status'].value_counts() # Split the X and y into X_train, X_test, y_train, y_test # Create X_train, X_test, y_train, y_test from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1, stratify=y) X_train
_____no_output_____
ADSL
credit_risk_ensemble.ipynb
THaoV1001/Classification-Homework
Data Pre-Processing. Scale the training and testing data using the `StandardScaler` from `sklearn`. Remember that when scaling the data, you only scale the features data (`X_train` and `X_test`).
# Create the StandardScaler instance from sklearn.preprocessing import StandardScaler scaler = StandardScaler() # Fit the Standard Scaler with the training data # When fitting scaling functions, only train on the training dataset X_scaler = scaler.fit(X_train) # Scale the training and testing data X_train_scaled = X_scaler.transform(X_train) X_test_scaled = X_scaler.transform(X_test)
_____no_output_____
ADSL
credit_risk_ensemble.ipynb
THaoV1001/Classification-Homework
Ensemble Learners

In this section, you will compare two ensemble algorithms to determine which one results in the best performance. You will train a Balanced Random Forest Classifier and an Easy Ensemble Classifier. For each algorithm, be sure to complete the following steps:

1. Train the model using the training data.
2. Calculate the balanced accuracy score from sklearn.metrics.
3. Display the confusion matrix from sklearn.metrics.
4. Generate a classification report using `classification_report_imbalanced` from imbalanced-learn.
5. For the Balanced Random Forest Classifier only, print the feature importances sorted in descending order (most important to least important) along with each feature's score.

Note: Use a random state of 1 for each algorithm to ensure consistency between tests.

Balanced Random Forest Classifier
# Resample the training data with the BalancedRandomForestClassifier from imblearn.ensemble import BalancedRandomForestClassifier brf = BalancedRandomForestClassifier(n_estimators=100, random_state=1) #100 trees # random forest use 50/50 probability decision, so I think scaled data is not required brf.fit(X_train, y_train) # Calculated the balanced accuracy score from sklearn.metrics import balanced_accuracy_score y_pred = brf.predict(X_test) balanced_accuracy_score(y_test, y_pred) # Display the confusion matrix from sklearn.metrics import confusion_matrix confusion_matrix(y_test, y_pred) # Print the imbalanced classification report from imblearn.metrics import classification_report_imbalanced print(classification_report_imbalanced(y_test, y_pred)) # List the features sorted in descending order by feature importance importances = brf.feature_importances_ sorted(zip(brf.feature_importances_, X.columns), reverse=True)
_____no_output_____
ADSL
credit_risk_ensemble.ipynb
THaoV1001/Classification-Homework
Easy Ensemble Classifier
# Train the Classifier from imblearn.ensemble import EasyEnsembleClassifier eec = EasyEnsembleClassifier(n_estimators=100, random_state=1) eec.fit(X_train, y_train) # Calculated the balanced accuracy score y_pred = eec.predict(X_test) balanced_accuracy_score(y_test, y_pred) # Display the confusion matrix confusion_matrix(y_test, y_pred) # Print the imbalanced classification report print(classification_report_imbalanced(y_test, y_pred))
pre rec spe f1 geo iba sup high_risk 0.84 1.00 0.99 0.91 0.99 0.99 625 low_risk 1.00 0.99 1.00 1.00 0.99 0.99 18759 avg / total 0.99 0.99 1.00 0.99 0.99 0.99 19384
ADSL
credit_risk_ensemble.ipynb
THaoV1001/Classification-Homework
pip install pennylane pip install torch pip install tensorflow pip install sklearn pip install pennylane-qiskit import pennylane as qml from pennylane import numpy as np dev = qml.device("default.qubit", wires=2) @qml.qnode(device=dev) def cos_func(x, w): qml.RX(x, wires=0) qml.templates.BasicEntanglerLayers(w, wires=range(2)) return qml.expval(qml.PauliZ(0)) layer = 4 weights = qml.init.basic_entangler_layers_uniform(layer, 2) xs = np.linspace(-np.pi, 4*np.pi, requires_grad=False) ys = np.cos(xs) opt = qml.AdamOptimizer() epochs = 10 for epoch in range(epochs): for x, y in zip(xs, ys): cost = lambda weights:(cos_func(x, weights) - y) ** 2 weights = opt.step(cost, weights) ys_trained = [cos_func(x, weights) for x in xs] import matplotlib.pyplot as plt plt.figure() plt.plot(xs, ys_trained, marker="o", label="Cos(x") plt.legend() plt.show()
_____no_output_____
MIT
Xanadu3.ipynb
olgOk/XanaduTraining
Preparing the W state. Using the Autograd interface, train a circuit to prepare the 3-qubit W state: $|W\rangle = \frac{1}{\sqrt{3}}(|001\rangle + |010\rangle + |100\rangle)$
qubits = 3 w = np.array([0, 1, 1, 0, 1, 0, 0, 0]) / np.sqrt(3) w_projector = w[:, np.newaxis] * w w_decomp = qml.utils.decompose_hamiltonian(w_projector) H = qml.Hamiltonian(*w_decomp) def prepare_w(weights, wires): qml.templates.StronglyEntanglingLayers(weights, wires=wires) dev = qml.device("default.qubit", wires=qubits) qnodes = qml.map(prepare_w, H.ops, dev) w_overlap = qml.dot(H.coeffs, qnodes) layers = 4 weights = qml.init.strong_ent_layers_uniform(layers, qubits) opt = qml.RMSPropOptimizer() epochs = 50 for i in range(epochs): weights = opt.step(lambda weights: -w_overlap(weights), weights) if i % 5 == 0: print(i, w_overlap(weights)) output_overlap = w_overlap(weights) output_state = np.round(dev.state, 3)
_____no_output_____
MIT
Xanadu3.ipynb
olgOk/XanaduTraining
Quantum-based Optimization
dev = qml.device('default.qubit', wires=1) @qml.qnode(dev) def rotation(thetas): qml.RX(1, wires=0) qml.RZ(1, wires=0) qml.RX(thetas[0], wires=0) qml.RY(thetas[1], wires=0) return qml.expval(qml.PauliZ(0)) opt = qml.RotoselectOptimizer() import sklearn.datasets data = sklearn.datasets.load_iris() x = data["data"] y = data["target"] np.random.seed(1967) x, y = zip(*np.random.permutation(list(zip(x, y)))) split = 125 x_train = x[:split] x_test = x[split:] y_train = y[:split] y_test = y[split:]
_____no_output_____
MIT
Xanadu3.ipynb
olgOk/XanaduTraining
Part 1 - Color images **TIAGO PEREIRA DALL'OCA - 206341**
from scipy import misc from scipy import ndimage import cv2 import numpy as np import matplotlib.pyplot as plt img = cv2.imread('imagens/baboon.png') img.shape
_____no_output_____
MIT
parte1.ipynb
tiagodalloca/mc920-trabalho1
a) This part is fairly self-explanatory: a matrix is created that will multiply the vectors representing the three color channels of each pixel.
matriz_a = np.array([[0.393, 0.769, 0.189], [0.394, 0.686, 0.168], [0.272, 0.534, 0.131]]) img_a = np.dot(img, matriz_a)/255 img_a = img_a.clip(max=[1,1,1]) img_a.shape plt.imshow(img_a)
_____no_output_____
MIT
parte1.ipynb
tiagodalloca/mc920-trabalho1
b) Similar to item "a", but now we perform a vector multiplication that results in a single-channel (grayscale) image.
vetor_b = np.array([0.2989, 0.5870, 0.1140]) img_b = np.tensordot(img, vetor_b, axes=([2], [0]))/255 img_b = img_b.clip(max=[1]).reshape(img.shape[0:2]) img_b.shape plt.imshow(img_b)
_____no_output_____
MIT
parte1.ipynb
tiagodalloca/mc920-trabalho1
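As an optional cross-check (an addition for illustration, not part of the original assignment), OpenCV's built-in conversion produces a single-channel image to compare against the weighted sum above; note that cv2.imread loads channels in BGR order, which matters when interpreting per-channel weights:
# Optional comparison: OpenCV's own grayscale conversion (expects the BGR input that cv2.imread returns)
img_gray_cv = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
plt.imshow(img_gray_cv, cmap='gray')
plt.show()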
Regular Expressions. Regular expressions are text-matching patterns described with a formal syntax. You'll often hear regular expressions referred to as 'regex' or 'regexp' in conversation. Regular expressions can include a variety of rules, from finding repetition, to text-matching, and much more. As you advance in Python you'll see that a lot of your parsing problems can be solved with regular expressions (they're also a common interview question!). If you're familiar with Perl, you'll notice that the syntax for regular expressions is very similar in Python. We will be using the re module with Python for this lecture. Let's get started! Searching for Patterns in Text. One of the most common uses for the re module is finding patterns in text. Let's do a quick example of using the search method in the re module to find some text:
import re # List of patterns to search for patterns = ['term1', 'term2'] # Text to parse text = 'This is a string with term1, but it does not have the other term.' for pattern in patterns: print('Searching for "%s" in:\n "%s"\n' %(pattern,text)) #Check for match if re.search(pattern,text): print('Match was found. \n') else: print('No Match was found.\n')
Searching for "term1" in: "This is a string with term1, but it does not have the other term." Match was found. Searching for "term2" in: "This is a string with term1, but it does not have the other term." No Match was found.
MIT
Python-Programming/Python-3-Bootcamp/13-Advanced Python Modules/.ipynb_checkpoints/05-Regular Expressions - re-checkpoint.ipynb
vivekparasharr/Learn-Programming
Now we've seen that re.search() will take the pattern, scan the text, and then return a **Match** object. If no pattern is found, **None** is returned. To give a clearer picture of this match object, check out the cell below:
# List of patterns to search for pattern = 'term1' # Text to parse text = 'This is a string with term1, but it does not have the other term.' match = re.search(pattern,text) type(match)
_____no_output_____
MIT
Python-Programming/Python-3-Bootcamp/13-Advanced Python Modules/.ipynb_checkpoints/05-Regular Expressions - re-checkpoint.ipynb
vivekparasharr/Learn-Programming
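As noted above, when the pattern is absent re.search() returns None rather than a Match object; a quick check, continuing with the same variables:
# No match: re.search returns None, which is falsy
no_match = re.search('term2', text)
print(no_match)           # None
print(no_match is None)   # True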
This **Match** object returned by the search() method is more than just a Boolean or None, it contains information about the match, including the original input string, the regular expression that was used, and the location of the match. Let's see the methods we can use on the match object:
# Show start of match match.start() # Show end match.end()
_____no_output_____
MIT
Python-Programming/Python-3-Bootcamp/13-Advanced Python Modules/.ipynb_checkpoints/05-Regular Expressions - re-checkpoint.ipynb
vivekparasharr/Learn-Programming
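Beyond start() and end(), the Match object also keeps the matched text, the original string, and the compiled pattern; a short sketch using the match from above:
# Inspect what the Match object carries
print(match.group())      # the matched text: 'term1'
print(match.span())       # (start, end) positions as a tuple
print(match.string)       # the original input string that was searched
print(match.re.pattern)   # the pattern that was used: 'term1'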
Split with regular expressions. Let's see how we can split with the re syntax. This should look similar to how you used the split() method with strings.
# Term to split on split_term = '@' phrase = 'What is the domain name of someone with the email: hello@gmail.com' # Split the phrase re.split(split_term,phrase)
_____no_output_____
MIT
Python-Programming/Python-3-Bootcamp/13-Advanced Python Modules/.ipynb_checkpoints/05-Regular Expressions - re-checkpoint.ipynb
vivekparasharr/Learn-Programming
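A couple of extra split examples (the pattern choices are illustrative), showing re.split() with a character class and an escape code rather than a single literal character:
# Split on one or more non-word characters (roughly: extract the words)
print(re.split(r'\W+', 'hello, world... regex!'))   # ['hello', 'world', 'regex', ''] (trailing '' from the final '!')
# Split on either a comma or a semicolon
print(re.split(r'[,;]', 'a,b;c,d'))                 # ['a', 'b', 'c', 'd']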
Note how re.split() returns a list with the term to split on removed; the items in the list are the split-up pieces of the string. Create a couple more examples for yourself to make sure you understand! Finding all instances of a pattern. You can use re.findall() to find all the instances of a pattern in a string. For example:
# Returns a list of all matches re.findall('match','test phrase match is in middle')
_____no_output_____
MIT
Python-Programming/Python-3-Bootcamp/13-Advanced Python Modules/.ipynb_checkpoints/05-Regular Expressions - re-checkpoint.ipynb
vivekparasharr/Learn-Programming
re Pattern Syntax. This will be the bulk of this lecture on using re with Python. Regular expressions support a huge variety of patterns beyond simply finding where a single string occurred. We can use *metacharacters* along with re to find specific types of patterns. Since we will be testing multiple re syntax forms, let's create a function that will print out results given a list of various regular expressions and a phrase to parse:
def multi_re_find(patterns,phrase): ''' Takes in a list of regex patterns Prints a list of all matches ''' for pattern in patterns: print('Searching the phrase using the re check: %r' %(pattern)) print(re.findall(pattern,phrase)) print('\n')
_____no_output_____
MIT
Python-Programming/Python-3-Bootcamp/13-Advanced Python Modules/.ipynb_checkpoints/05-Regular Expressions - re-checkpoint.ipynb
vivekparasharr/Learn-Programming
Repetition Syntax

There are five ways to express repetition in a pattern:

1. A pattern followed by the meta-character * is repeated zero or more times.
2. Replace the * with + and the pattern must appear at least once.
3. Using ? means the pattern appears zero or one time.
4. For a specific number of occurrences, use {m} after the pattern, where **m** is replaced with the number of times the pattern should repeat.
5. Use {m,n} where **m** is the minimum number of repetitions and **n** is the maximum. Leaving out **n** ({m,}) means the value appears at least **m** times, with no maximum.

Now we will see an example of each of these using our multi_re_find function:
test_phrase = 'sdsd..sssddd...sdddsddd...dsds...dsssss...sdddd' test_patterns = [ 'sd*', # s followed by zero or more d's 'sd+', # s followed by one or more d's 'sd?', # s followed by zero or one d's 'sd{3}', # s followed by three d's 'sd{2,3}', # s followed by two to three d's ] multi_re_find(test_patterns,test_phrase)
Searching the phrase using the re check: 'sd*' ['sd', 'sd', 's', 's', 'sddd', 'sddd', 'sddd', 'sd', 's', 's', 's', 's', 's', 's', 'sdddd'] Searching the phrase using the re check: 'sd+' ['sd', 'sd', 'sddd', 'sddd', 'sddd', 'sd', 'sdddd'] Searching the phrase using the re check: 'sd?' ['sd', 'sd', 's', 's', 'sd', 'sd', 'sd', 'sd', 's', 's', 's', 's', 's', 's', 'sd'] Searching the phrase using the re check: 'sd{3}' ['sddd', 'sddd', 'sddd', 'sddd'] Searching the phrase using the re check: 'sd{2,3}' ['sddd', 'sddd', 'sddd', 'sddd']
MIT
Python-Programming/Python-3-Bootcamp/13-Advanced Python Modules/.ipynb_checkpoints/05-Regular Expressions - re-checkpoint.ipynb
vivekparasharr/Learn-Programming
Character Sets. Character sets are used when you wish to match any one of a group of characters at a point in the input. Brackets are used to construct character set inputs. For example: the input [ab] searches for occurrences of either **a** or **b**. Let's see some examples:
test_phrase = 'sdsd..sssddd...sdddsddd...dsds...dsssss...sdddd' test_patterns = ['[sd]', # either s or d 's[sd]+'] # s followed by one or more s or d multi_re_find(test_patterns,test_phrase)
Searching the phrase using the re check: '[sd]' ['s', 'd', 's', 'd', 's', 's', 's', 'd', 'd', 'd', 's', 'd', 'd', 'd', 's', 'd', 'd', 'd', 'd', 's', 'd', 's', 'd', 's', 's', 's', 's', 's', 's', 'd', 'd', 'd', 'd'] Searching the phrase using the re check: 's[sd]+' ['sdsd', 'sssddd', 'sdddsddd', 'sds', 'sssss', 'sdddd']
MIT
Python-Programming/Python-3-Bootcamp/13-Advanced Python Modules/.ipynb_checkpoints/05-Regular Expressions - re-checkpoint.ipynb
vivekparasharr/Learn-Programming
It makes sense that the first input [sd] returns every instance of s or d. Also, the second input s[sd]+ returns any full strings that begin with an s and continue with s or d characters until another character is reached. Exclusion. We can use ^ to exclude terms by incorporating it into the bracket syntax notation. For example: [^...] will match any single character not in the brackets. Let's see some examples:
test_phrase = 'This is a string! But it has punctuation. How can we remove it?'
_____no_output_____
MIT
Python-Programming/Python-3-Bootcamp/13-Advanced Python Modules/.ipynb_checkpoints/05-Regular Expressions - re-checkpoint.ipynb
vivekparasharr/Learn-Programming
Use [^!.? ] to check for matches that are not a !, ., ?, or space. Add a + to check that the match appears at least once. This basically translates into finding the words.
re.findall('[^!.? ]+',test_phrase)
_____no_output_____
MIT
Python-Programming/Python-3-Bootcamp/13-Advanced Python Modules/.ipynb_checkpoints/05-Regular Expressions - re-checkpoint.ipynb
vivekparasharr/Learn-Programming
Character Ranges. As character sets grow larger, typing every character that should (or should not) match could become very tedious. A more compact format using character ranges lets you define a character set to include all of the contiguous characters between a start and stop point. The format used is [start-end]. Common use cases are to search for a specific range of letters in the alphabet. For instance, [a-f] would return matches with any occurrence of letters between a and f. Let's walk through some examples:
test_phrase = 'This is an example sentence. Lets see if we can find some letters.' test_patterns=['[a-z]+', # sequences of lower case letters '[A-Z]+', # sequences of upper case letters '[a-zA-Z]+', # sequences of lower or upper case letters '[A-Z][a-z]+'] # one upper case letter followed by lower case letters multi_re_find(test_patterns,test_phrase)
Searching the phrase using the re check: '[a-z]+' ['his', 'is', 'an', 'example', 'sentence', 'ets', 'see', 'if', 'we', 'can', 'find', 'some', 'letters'] Searching the phrase using the re check: '[A-Z]+' ['T', 'L'] Searching the phrase using the re check: '[a-zA-Z]+' ['This', 'is', 'an', 'example', 'sentence', 'Lets', 'see', 'if', 'we', 'can', 'find', 'some', 'letters'] Searching the phrase using the re check: '[A-Z][a-z]+' ['This', 'Lets']
MIT
Python-Programming/Python-3-Bootcamp/13-Advanced Python Modules/.ipynb_checkpoints/05-Regular Expressions - re-checkpoint.ipynb
vivekparasharr/Learn-Programming
Escape Codes. You can use special escape codes to find specific types of patterns in your data, such as digits, non-digits, whitespace, and more. For example:

| Code | Meaning |
|------|---------|
| \d | a digit |
| \D | a non-digit |
| \s | whitespace (tab, space, newline, etc.) |
| \S | non-whitespace |
| \w | alphanumeric |
| \W | non-alphanumeric |

Escapes are indicated by prefixing the character with a backslash \. Unfortunately, a backslash must itself be escaped in normal Python strings, and that results in expressions that are difficult to read. Using raw strings, created by prefixing the literal value with r, eliminates this problem and maintains readability. Personally, I think this use of r to escape a backslash is probably one of the things that blocks someone who is not familiar with regex in Python from being able to read regex code at first. Hopefully after seeing these examples this syntax will become clear.
test_phrase = 'This is a string with some numbers 1233 and a symbol #hashtag' test_patterns=[ r'\d+', # sequence of digits r'\D+', # sequence of non-digits r'\s+', # sequence of whitespace r'\S+', # sequence of non-whitespace r'\w+', # alphanumeric characters r'\W+', # non-alphanumeric ] multi_re_find(test_patterns,test_phrase)
Searching the phrase using the re check: '\\d+' ['1233'] Searching the phrase using the re check: '\\D+' ['This is a string with some numbers ', ' and a symbol #hashtag'] Searching the phrase using the re check: '\\s+' [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] Searching the phrase using the re check: '\\S+' ['This', 'is', 'a', 'string', 'with', 'some', 'numbers', '1233', 'and', 'a', 'symbol', '#hashtag'] Searching the phrase using the re check: '\\w+' ['This', 'is', 'a', 'string', 'with', 'some', 'numbers', '1233', 'and', 'a', 'symbol', 'hashtag'] Searching the phrase using the re check: '\\W+' [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' #']
MIT
Python-Programming/Python-3-Bootcamp/13-Advanced Python Modules/.ipynb_checkpoints/05-Regular Expressions - re-checkpoint.ipynb
vivekparasharr/Learn-Programming
PTN Template. This notebook serves as a template for single-dataset PTN experiments. It can be run on its own by setting STANDALONE to True (do a find for "STANDALONE" to see where), but it is intended to be executed as part of a *papermill.py script. See any of the experiments with a papermill script to get started with that workflow.
%load_ext autoreload %autoreload 2 %matplotlib inline import os, json, sys, time, random import numpy as np import torch from torch.optim import Adam from easydict import EasyDict import matplotlib.pyplot as plt from steves_models.steves_ptn import Steves_Prototypical_Network from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper from steves_utils.iterable_aggregator import Iterable_Aggregator from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig from steves_utils.torch_sequential_builder import build_sequential from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path) from steves_utils.PTN.utils import independent_accuracy_assesment from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory from steves_utils.ptn_do_report import ( get_loss_curve, get_results_table, get_parameters_table, get_domain_accuracies, ) from steves_utils.transforms import get_chained_transform
_____no_output_____
MIT
experiments/tuned_1v2/oracle.run2/trials/4/trial.ipynb
stevester94/csc500-notebooks
Required Parameters. These are the allowed parameters, not defaults. Each of these values needs to be present in the injected parameters (the notebook will raise an exception if they are not present). Papermill uses the cell tag "parameters" to inject the real parameters below this cell. Enable tags to see what I mean.
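For context, a hypothetical papermill invocation that injects parameters into the tagged cell might look like the following sketch (the notebook paths and parameter values here are purely illustrative, not taken from the repository's actual driver script):
# Hypothetical driver: papermill overwrites the cell tagged "parameters" in the executed copy
import papermill as pm

pm.execute_notebook(
    "trial.ipynb",              # this template (illustrative path)
    "trial_executed.ipynb",     # where the executed copy is written
    parameters={
        "experiment_name": "example_run",
        "lr": 0.0001,
        "seed": 1337,
        # ... every key listed in required_parameters below must be supplied
    },
)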
required_parameters = { "experiment_name", "lr", "device", "seed", "dataset_seed", "labels_source", "labels_target", "domains_source", "domains_target", "num_examples_per_domain_per_label_source", "num_examples_per_domain_per_label_target", "n_shot", "n_way", "n_query", "train_k_factor", "val_k_factor", "test_k_factor", "n_epoch", "patience", "criteria_for_best", "x_transforms_source", "x_transforms_target", "episode_transforms_source", "episode_transforms_target", "pickle_name", "x_net", "NUM_LOGS_PER_EPOCH", "BEST_MODEL_PATH", "torch_default_dtype" } standalone_parameters = {} standalone_parameters["experiment_name"] = "STANDALONE PTN" standalone_parameters["lr"] = 0.0001 standalone_parameters["device"] = "cuda" standalone_parameters["seed"] = 1337 standalone_parameters["dataset_seed"] = 1337 standalone_parameters["num_examples_per_domain_per_label_source"]=100 standalone_parameters["num_examples_per_domain_per_label_target"]=100 standalone_parameters["n_shot"] = 3 standalone_parameters["n_query"] = 2 standalone_parameters["train_k_factor"] = 1 standalone_parameters["val_k_factor"] = 2 standalone_parameters["test_k_factor"] = 2 standalone_parameters["n_epoch"] = 100 standalone_parameters["patience"] = 10 standalone_parameters["criteria_for_best"] = "target_accuracy" standalone_parameters["x_transforms_source"] = ["unit_power"] standalone_parameters["x_transforms_target"] = ["unit_power"] standalone_parameters["episode_transforms_source"] = [] standalone_parameters["episode_transforms_target"] = [] standalone_parameters["torch_default_dtype"] = "torch.float32" standalone_parameters["x_net"] = [ {"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}}, {"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },}, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm2d", "kargs": {"num_features":256}}, {"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },}, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm2d", "kargs": {"num_features":80}}, {"class": "Flatten", "kargs": {}}, {"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm1d", "kargs": {"num_features":256}}, {"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}}, ] # Parameters relevant to results # These parameters will basically never need to change standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10 standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth" # uncomment for CORES dataset from steves_utils.CORES.utils import ( ALL_NODES, ALL_NODES_MINIMUM_1000_EXAMPLES, ALL_DAYS ) standalone_parameters["labels_source"] = ALL_NODES standalone_parameters["labels_target"] = ALL_NODES standalone_parameters["domains_source"] = [1] standalone_parameters["domains_target"] = [2,3,4,5] standalone_parameters["pickle_name"] = "cores.stratified_ds.2022A.pkl" # Uncomment these for ORACLE dataset # from steves_utils.ORACLE.utils_v2 import ( # ALL_DISTANCES_FEET, # ALL_RUNS, # ALL_SERIAL_NUMBERS, # ) # standalone_parameters["labels_source"] = ALL_SERIAL_NUMBERS # standalone_parameters["labels_target"] = ALL_SERIAL_NUMBERS # standalone_parameters["domains_source"] = [8,20, 38,50] # standalone_parameters["domains_target"] = [14, 26, 32, 44, 56] # standalone_parameters["pickle_name"] = "oracle.frame_indexed.stratified_ds.2022A.pkl" # 
standalone_parameters["num_examples_per_domain_per_label_source"]=1000 # standalone_parameters["num_examples_per_domain_per_label_target"]=1000 # Uncomment these for Metahan dataset # standalone_parameters["labels_source"] = list(range(19)) # standalone_parameters["labels_target"] = list(range(19)) # standalone_parameters["domains_source"] = [0] # standalone_parameters["domains_target"] = [1] # standalone_parameters["pickle_name"] = "metehan.stratified_ds.2022A.pkl" # standalone_parameters["n_way"] = len(standalone_parameters["labels_source"]) # standalone_parameters["num_examples_per_domain_per_label_source"]=200 # standalone_parameters["num_examples_per_domain_per_label_target"]=100 standalone_parameters["n_way"] = len(standalone_parameters["labels_source"]) # Parameters parameters = { "experiment_name": "tuned_1v2:oracle.run2", "device": "cuda", "lr": 0.0001, "labels_source": [ "3123D52", "3123D65", "3123D79", "3123D80", "3123D54", "3123D70", "3123D7B", "3123D89", "3123D58", "3123D76", "3123D7D", "3123EFE", "3123D64", "3123D78", "3123D7E", "3124E4A", ], "labels_target": [ "3123D52", "3123D65", "3123D79", "3123D80", "3123D54", "3123D70", "3123D7B", "3123D89", "3123D58", "3123D76", "3123D7D", "3123EFE", "3123D64", "3123D78", "3123D7E", "3124E4A", ], "episode_transforms_source": [], "episode_transforms_target": [], "domains_source": [8, 32, 50], "domains_target": [14, 20, 26, 38, 44], "num_examples_per_domain_per_label_source": -1, "num_examples_per_domain_per_label_target": -1, "n_shot": 3, "n_way": 16, "n_query": 2, "train_k_factor": 3, "val_k_factor": 2, "test_k_factor": 2, "torch_default_dtype": "torch.float32", "n_epoch": 50, "patience": 3, "criteria_for_best": "target_accuracy", "x_net": [ {"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}}, { "class": "Conv2d", "kargs": { "in_channels": 1, "out_channels": 256, "kernel_size": [1, 7], "bias": False, "padding": [0, 3], }, }, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm2d", "kargs": {"num_features": 256}}, { "class": "Conv2d", "kargs": { "in_channels": 256, "out_channels": 80, "kernel_size": [2, 7], "bias": True, "padding": [0, 3], }, }, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm2d", "kargs": {"num_features": 80}}, {"class": "Flatten", "kargs": {}}, {"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}}, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm1d", "kargs": {"num_features": 256}}, {"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}}, ], "NUM_LOGS_PER_EPOCH": 10, "BEST_MODEL_PATH": "./best_model.pth", "pickle_name": "oracle.Run2_10kExamples_stratified_ds.2022A.pkl", "x_transforms_source": ["unit_mag"], "x_transforms_target": ["unit_mag"], "dataset_seed": 500, "seed": 500, } # Set this to True if you want to run this template directly STANDALONE = False if STANDALONE: print("parameters not injected, running with standalone_parameters") parameters = standalone_parameters if not 'parameters' in locals() and not 'parameters' in globals(): raise Exception("Parameter injection failed") #Use an easy dict for all the parameters p = EasyDict(parameters) supplied_keys = set(p.keys()) if supplied_keys != required_parameters: print("Parameters are incorrect") if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters)) if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys)) raise RuntimeError("Parameters are incorrect") 
################################### # Set the RNGs and make it all deterministic ################################### np.random.seed(p.seed) random.seed(p.seed) torch.manual_seed(p.seed) torch.use_deterministic_algorithms(True) ########################################### # The stratified datasets honor this ########################################### torch.set_default_dtype(eval(p.torch_default_dtype)) ################################### # Build the network(s) # Note: It's critical to do this AFTER setting the RNG # (This is due to the randomized initial weights) ################################### x_net = build_sequential(p.x_net) start_time_secs = time.time() ################################### # Build the dataset ################################### if p.x_transforms_source == []: x_transform_source = None else: x_transform_source = get_chained_transform(p.x_transforms_source) if p.x_transforms_target == []: x_transform_target = None else: x_transform_target = get_chained_transform(p.x_transforms_target) if p.episode_transforms_source == []: episode_transform_source = None else: raise Exception("episode_transform_source not implemented") if p.episode_transforms_target == []: episode_transform_target = None else: raise Exception("episode_transform_target not implemented") eaf_source = Episodic_Accessor_Factory( labels=p.labels_source, domains=p.domains_source, num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_source, iterator_seed=p.seed, dataset_seed=p.dataset_seed, n_shot=p.n_shot, n_way=p.n_way, n_query=p.n_query, train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor), pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name), x_transform_func=x_transform_source, example_transform_func=episode_transform_source, ) train_original_source, val_original_source, test_original_source = eaf_source.get_train(), eaf_source.get_val(), eaf_source.get_test() eaf_target = Episodic_Accessor_Factory( labels=p.labels_target, domains=p.domains_target, num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_target, iterator_seed=p.seed, dataset_seed=p.dataset_seed, n_shot=p.n_shot, n_way=p.n_way, n_query=p.n_query, train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor), pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name), x_transform_func=x_transform_target, example_transform_func=episode_transform_target, ) train_original_target, val_original_target, test_original_target = eaf_target.get_train(), eaf_target.get_val(), eaf_target.get_test() transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda) val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda) test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda) train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda) val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda) test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda) datasets = EasyDict({ "source": { "original": {"train":train_original_source, "val":val_original_source, "test":test_original_source}, "processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source} }, "target": { "original": {"train":train_original_target, "val":val_original_target, "test":test_original_target}, "processed": 
{"train":train_processed_target, "val":val_processed_target, "test":test_processed_target} }, }) # Some quick unit tests on the data from steves_utils.transforms import get_average_power, get_average_magnitude q_x, q_y, s_x, s_y, truth = next(iter(train_processed_source)) assert q_x.dtype == eval(p.torch_default_dtype) assert s_x.dtype == eval(p.torch_default_dtype) print("Visually inspect these to see if they line up with expected values given the transforms") print('x_transforms_source', p.x_transforms_source) print('x_transforms_target', p.x_transforms_target) print("Average magnitude, source:", get_average_magnitude(q_x[0].numpy())) print("Average power, source:", get_average_power(q_x[0].numpy())) q_x, q_y, s_x, s_y, truth = next(iter(train_processed_target)) print("Average magnitude, target:", get_average_magnitude(q_x[0].numpy())) print("Average power, target:", get_average_power(q_x[0].numpy())) ################################### # Build the model ################################### model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=(2,256)) optimizer = Adam(params=model.parameters(), lr=p.lr) ################################### # train ################################### jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device) jig.train( train_iterable=datasets.source.processed.train, source_val_iterable=datasets.source.processed.val, target_val_iterable=datasets.target.processed.val, num_epochs=p.n_epoch, num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH, patience=p.patience, optimizer=optimizer, criteria_for_best=p.criteria_for_best, ) total_experiment_time_secs = time.time() - start_time_secs ################################### # Evaluate the model ################################### source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test) target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test) source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val) target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val) history = jig.get_history() total_epochs_trained = len(history["epoch_indices"]) val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val)) confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl) per_domain_accuracy = per_domain_accuracy_from_confusion(confusion) # Add a key to per_domain_accuracy for if it was a source domain for domain, accuracy in per_domain_accuracy.items(): per_domain_accuracy[domain] = { "accuracy": accuracy, "source?": domain in p.domains_source } # Do an independent accuracy assesment JUST TO BE SURE! 
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device) # _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device) # _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device) # _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device) # assert(_source_test_label_accuracy == source_test_label_accuracy) # assert(_target_test_label_accuracy == target_test_label_accuracy) # assert(_source_val_label_accuracy == source_val_label_accuracy) # assert(_target_val_label_accuracy == target_val_label_accuracy) experiment = { "experiment_name": p.experiment_name, "parameters": dict(p), "results": { "source_test_label_accuracy": source_test_label_accuracy, "source_test_label_loss": source_test_label_loss, "target_test_label_accuracy": target_test_label_accuracy, "target_test_label_loss": target_test_label_loss, "source_val_label_accuracy": source_val_label_accuracy, "source_val_label_loss": source_val_label_loss, "target_val_label_accuracy": target_val_label_accuracy, "target_val_label_loss": target_val_label_loss, "total_epochs_trained": total_epochs_trained, "total_experiment_time_secs": total_experiment_time_secs, "confusion": confusion, "per_domain_accuracy": per_domain_accuracy, }, "history": history, "dataset_metrics": get_dataset_metrics(datasets, "ptn"), } ax = get_loss_curve(experiment) plt.show() get_results_table(experiment) get_domain_accuracies(experiment) print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"]) print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"]) json.dumps(experiment)
_____no_output_____
MIT
experiments/tuned_1v2/oracle.run2/trials/4/trial.ipynb
stevester94/csc500-notebooks
Recommender Systems 2018/19, Practice 4 - Similarity with Cython. Cython is a superset of Python, allowing you to use C-like operations and import C code. Cython files (.pyx) are compiled and support static typing.
import time import numpy as np
_____no_output_____
MIT
Jupyter notebook/Practice 4 - Cython.ipynb
marcomussi/RecommenderSystemPolimi
Let's implement something simple
def isPrime(n): i = 2 # Usually you loop up to sqrt(n) while i < n: if n % i == 0: return False i += 1 return True print("Is prime 2? {}".format(isPrime(2))) print("Is prime 3? {}".format(isPrime(3))) print("Is prime 5? {}".format(isPrime(5))) print("Is prime 15? {}".format(isPrime(15))) print("Is prime 20? {}".format(isPrime(20))) start_time = time.time() result = isPrime(80000023) print("Is Prime 80000023? {}, time required {:.2f} sec".format(result, time.time()-start_time))
Is Prime 80000023? True, time required 8.19 sec
MIT
Jupyter notebook/Practice 4 - Cython.ipynb
marcomussi/RecommenderSystemPolimi
Load the Cython magic command; this takes care of the compilation step. If you are writing code outside Jupyter you'll have to compile using other tools (a minimal example follows below).
%load_ext Cython
_____no_output_____
MIT
Jupyter notebook/Practice 4 - Cython.ipynb
marcomussi/RecommenderSystemPolimi
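For reference, outside Jupyter the usual route is a small build script plus cythonize; a minimal sketch, assuming the function lives in a file called is_prime.pyx (the file name is illustrative):
# setup.py -- compile the .pyx module with: python setup.py build_ext --inplace
from setuptools import setup
from Cython.Build import cythonize

setup(ext_modules=cythonize("is_prime.pyx"))
After building, the compiled module can be imported like any other Python module.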
Declare the Cython function by pasting the same code as before. The function will be compiled and then executed through a Python interface.
%%cython def isPrime(n): i = 2 # Usually you loop up to sqrt(n) while i < n: if n % i == 0: return False i += 1 return True start_time = time.time() result = isPrime(80000023) print("Is Prime 80000023? {}, time required {:.2f} sec".format(result, time.time()-start_time))
Is Prime 80000023? True, time required 4.81 sec
MIT
Jupyter notebook/Practice 4 - Cython.ipynb
marcomussi/RecommenderSystemPolimi
As you can see, by just compiling the same code we got some improvement. To go seriously higher, we have to use some static typing.
%%cython # Declare the tipe of the arguments def isPrime(long n): # Declare index of for loop cdef long i i = 2 # Usually you loop up to sqrt(n) while i < n: if n % i == 0: return False i += 1 return True start_time = time.time() result = isPrime(80000023) print("Is Prime 80000023? {}, time required {:.2f} sec".format(result, time.time()-start_time))
Is Prime 80000023? True, time required 0.94 sec
MIT
Jupyter notebook/Practice 4 - Cython.ipynb
marcomussi/RecommenderSystemPolimi
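Independently of Cython, the comment in the code above ("Usually you loop up to sqrt(n)") points at an algorithmic speed-up; a small sketch of that variant in plain Python (an addition for illustration, not part of the original practice):
def isPrimeSqrt(n):
    # Only check divisors up to sqrt(n): if n = a*b, at least one factor is <= sqrt(n)
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

print(isPrimeSqrt(80000023))  # True, with far fewer iterations than looping up to n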
Cython code with two type declarations, for n and i, runs almost 9x faster than the interpreted Python version in the timings above (8.19 s vs 0.94 s).

Main benefits of Cython:
* Compiled, no interpreter
* Static typing, no overhead
* Fast loops, no need to vectorize. Vectorization sometimes performs lots of useless operations
* Numpy, which is fast in Python, often becomes slow compared to carefully written Cython code

Similarity with Cython

Load the usual data.
from urllib.request import urlretrieve import zipfile # skip the download #urlretrieve ("http://files.grouplens.org/datasets/movielens/ml-10m.zip", "data/Movielens_10M/movielens_10m.zip") dataFile = zipfile.ZipFile("data/Movielens_10M/movielens_10m.zip") URM_path = dataFile.extract("ml-10M100K/ratings.dat", path = "data/Movielens_10M") URM_file = open(URM_path, 'r') def rowSplit (rowString): split = rowString.split("::") split[3] = split[3].replace("\n","") split[0] = int(split[0]) split[1] = int(split[1]) split[2] = float(split[2]) split[3] = int(split[3]) result = tuple(split) return result URM_file.seek(0) URM_tuples = [] for line in URM_file: URM_tuples.append(rowSplit (line)) userList, itemList, ratingList, timestampList = zip(*URM_tuples) userList = list(userList) itemList = list(itemList) ratingList = list(ratingList) timestampList = list(timestampList) import scipy.sparse as sps URM_all = sps.coo_matrix((ratingList, (userList, itemList))) URM_all = URM_all.tocsr() URM_all from Notebooks_utils.data_splitter import train_test_holdout URM_train, URM_test = train_test_holdout(URM_all, train_perc = 0.8) URM_train
_____no_output_____
MIT
Jupyter notebook/Practice 4 - Cython.ipynb
marcomussi/RecommenderSystemPolimi
Since we cannot store the whole similarity matrix in memory, we compute it one row at a time
itemIndex=1 item_ratings = URM_train[:,itemIndex] item_ratings = item_ratings.toarray().squeeze() item_ratings.shape this_item_weights = URM_train.T.dot(item_ratings) this_item_weights.shape
_____no_output_____
MIT
Jupyter notebook/Practice 4 - Cython.ipynb
marcomussi/RecommenderSystemPolimi
Once we have the scores for that row, we get the TopK
k=10 top_k_idx = np.argsort(this_item_weights) [-k:] top_k_idx import scipy.sparse as sps # Function hiding some conversion checks def check_matrix(X, format='csc', dtype=np.float32): if format == 'csc' and not isinstance(X, sps.csc_matrix): return X.tocsc().astype(dtype) elif format == 'csr' and not isinstance(X, sps.csr_matrix): return X.tocsr().astype(dtype) elif format == 'coo' and not isinstance(X, sps.coo_matrix): return X.tocoo().astype(dtype) elif format == 'dok' and not isinstance(X, sps.dok_matrix): return X.todok().astype(dtype) elif format == 'bsr' and not isinstance(X, sps.bsr_matrix): return X.tobsr().astype(dtype) elif format == 'dia' and not isinstance(X, sps.dia_matrix): return X.todia().astype(dtype) elif format == 'lil' and not isinstance(X, sps.lil_matrix): return X.tolil().astype(dtype) else: return X.astype(dtype)
_____no_output_____
MIT
Jupyter notebook/Practice 4 - Cython.ipynb
marcomussi/RecommenderSystemPolimi
Create a Basic Collaborative filtering recommender using only cosine similarity
class BasicItemKNN_CF_Recommender(object): """ ItemKNN recommender with cosine similarity and no shrinkage""" def __init__(self, URM): self.dataset = URM def compute_similarity(self, URM): # We explore the matrix column-wise URM = check_matrix(URM, 'csc') values = [] rows = [] cols = [] start_time = time.time() processedItems = 0 # Compute all similarities for each item using vectorization for itemIndex in range(URM.shape[0]): processedItems += 1 if processedItems % 100==0: itemPerSec = processedItems/(time.time()-start_time) print("Similarity item {}, {:.2f} item/sec, required time {:.2f} min".format( processedItems, itemPerSec, URM.shape[0]/itemPerSec/60)) # All ratings for a given item item_ratings = URM[:,itemIndex] item_ratings = item_ratings.toarray().squeeze() # Compute item similarities this_item_weights = URM_train.T.dot(item_ratings) # Sort indices and select TopK top_k_idx = np.argsort(this_item_weights) [-self.k:] # Incrementally build sparse matrix values.extend(this_item_weights[top_k_idx]) rows.extend(np.arange(URM.shape[0])[top_k_idx]) cols.extend(np.ones(self.k) * itemIndex) self.W_sparse = sps.csc_matrix((values, (rows, cols)), shape=(URM.shape[0], URM.shape[0]), dtype=np.float32) def fit(self, k=50, shrinkage=100): self.k = k self.shrinkage = shrinkage item_weights = self.compute_similarity(self.dataset) item_weights = check_matrix(item_weights, 'csr') def recommend(self, user_id, at=None, exclude_seen=True): # compute the scores using the dot product user_profile = self.URM[user_id] scores = user_profile.dot(self.W_sparse).toarray().ravel() if exclude_seen: scores = self.filter_seen(user_id, scores) # rank items ranking = scores.argsort()[::-1] return ranking[:at] def filter_seen(self, user_id, scores): start_pos = self.URM.indptr[user_id] end_pos = self.URM.indptr[user_id+1] user_profile = self.URM.indices[start_pos:end_pos] scores[user_profile] = -np.inf return scores
_____no_output_____
MIT
Jupyter notebook/Practice 4 - Cython.ipynb
marcomussi/RecommenderSystemPolimi
Let's isolate the compute_similarity function
def compute_similarity(URM, k=100): # We explore the matrix column-wise URM = check_matrix(URM, 'csc') n_items = URM.shape[0] values = [] rows = [] cols = [] start_time = time.time() processedItems = 0 # Compute all similarities for each item using vectorization # for itemIndex in range(n_items): for itemIndex in range(1000): processedItems += 1 if processedItems % 100==0: itemPerSec = processedItems/(time.time()-start_time) print("Similarity item {}, {:.2f} item/sec, required time {:.2f} min".format( processedItems, itemPerSec, n_items/itemPerSec/60)) # All ratings for a given item item_ratings = URM[:,itemIndex] item_ratings = item_ratings.toarray().squeeze() # Compute item similarities this_item_weights = URM.T.dot(item_ratings) # Sort indices and select TopK top_k_idx = np.argsort(this_item_weights) [-k:] # Incrementally build sparse matrix values.extend(this_item_weights[top_k_idx]) rows.extend(np.arange(URM.shape[0])[top_k_idx]) cols.extend(np.ones(k) * itemIndex) W_sparse = sps.csc_matrix((values, (rows, cols)), shape=(n_items, n_items), dtype=np.float32) return W_sparse compute_similarity(URM_train)
Similarity item 100, 81.61 item/sec, required time 14.62 min Similarity item 200, 80.34 item/sec, required time 14.85 min Similarity item 300, 80.08 item/sec, required time 14.89 min Similarity item 400, 80.50 item/sec, required time 14.82 min Similarity item 500, 80.02 item/sec, required time 14.91 min Similarity item 600, 80.30 item/sec, required time 14.85 min Similarity item 700, 80.23 item/sec, required time 14.87 min Similarity item 800, 80.58 item/sec, required time 14.80 min Similarity item 900, 81.18 item/sec, required time 14.69 min Similarity item 1000, 81.15 item/sec, required time 14.70 min
MIT
Jupyter notebook/Practice 4 - Cython.ipynb
marcomussi/RecommenderSystemPolimi
We see that computing the similarity takes more or less 15 minutes. Now we use the exact same code, but we compile it.
%%cython import time import numpy as np import scipy.sparse as sps def compute_similarity_compiled(URM, k=100): # We explore the matrix column-wise URM = URM.tocsc() n_items = URM.shape[0] values = [] rows = [] cols = [] start_time = time.time() processedItems = 0 # Compute all similarities for each item using vectorization # for itemIndex in range(n_items): for itemIndex in range(1000): processedItems += 1 if processedItems % 100==0: itemPerSec = processedItems/(time.time()-start_time) print("Similarity item {}, {:.2f} item/sec, required time {:.2f} min".format( processedItems, itemPerSec, n_items/itemPerSec/60)) # All ratings for a given item item_ratings = URM[:,itemIndex] item_ratings = item_ratings.toarray().squeeze() # Compute item similarities this_item_weights = URM.T.dot(item_ratings) # Sort indices and select TopK top_k_idx = np.argsort(this_item_weights) [-k:] # Incrementally build sparse matrix values.extend(this_item_weights[top_k_idx]) rows.extend(np.arange(URM.shape[0])[top_k_idx]) cols.extend(np.ones(k) * itemIndex) W_sparse = sps.csc_matrix((values, (rows, cols)), shape=(n_items, n_items), dtype=np.float32) return W_sparse compute_similarity_compiled(URM_train)
Similarity item 100, 56.48 item/sec, required time 21.12 min Similarity item 200, 56.12 item/sec, required time 21.25 min Similarity item 300, 56.58 item/sec, required time 21.08 min Similarity item 400, 56.42 item/sec, required time 21.14 min Similarity item 500, 56.74 item/sec, required time 21.02 min Similarity item 600, 56.90 item/sec, required time 20.96 min Similarity item 700, 56.90 item/sec, required time 20.96 min Similarity item 800, 56.97 item/sec, required time 20.94 min Similarity item 900, 56.84 item/sec, required time 20.99 min Similarity item 1000, 56.57 item/sec, required time 21.08 min
MIT
Jupyter notebook/Practice 4 - Cython.ipynb
marcomussi/RecommenderSystemPolimi
As opposed to the previous example, compilation by itself is not very helpful. Why? Because the compiler is just porting to C all the operations that the Python interpreter would have to perform, dynamic typing included. Now try to add some types.
%%cython import time import numpy as np import scipy.sparse as sps cimport numpy as np def compute_similarity_compiled(URM, int k=100): cdef int itemIndex, processedItems # We use the numpy syntax, allowing us to perform vectorized operations cdef np.ndarray[double, ndim=1] item_ratings, this_item_weights cdef np.ndarray[long, ndim=1] top_k_idx # We explore the matrix column-wise URM = URM.tocsc() n_items = URM.shape[0] values = [] rows = [] cols = [] start_time = time.time() processedItems = 0 # Compute all similarities for each item using vectorization # for itemIndex in range(n_items): for itemIndex in range(1000): processedItems += 1 if processedItems % 100==0: itemPerSec = processedItems/(time.time()-start_time) print("Similarity item {}, {:.2f} item/sec, required time {:.2f} min".format( processedItems, itemPerSec, n_items/itemPerSec/60)) # All ratings for a given item item_ratings = URM[:,itemIndex].toarray().squeeze() # Compute item similarities this_item_weights = URM.T.dot(item_ratings) # Sort indices and select TopK top_k_idx = np.argsort(this_item_weights) [-k:] # Incrementally build sparse matrix values.extend(this_item_weights[top_k_idx]) rows.extend(np.arange(URM.shape[0])[top_k_idx]) cols.extend(np.ones(k) * itemIndex) W_sparse = sps.csc_matrix((values, (rows, cols)), shape=(n_items, n_items), dtype=np.float32) return W_sparse compute_similarity_compiled(URM_train)
Similarity item 100, 57.80 item/sec, required time 20.64 min Similarity item 200, 53.69 item/sec, required time 22.22 min Similarity item 300, 54.57 item/sec, required time 21.86 min Similarity item 400, 54.07 item/sec, required time 22.06 min Similarity item 500, 54.65 item/sec, required time 21.83 min Similarity item 600, 54.82 item/sec, required time 21.76 min Similarity item 700, 55.08 item/sec, required time 21.66 min Similarity item 800, 55.30 item/sec, required time 21.57 min Similarity item 900, 55.64 item/sec, required time 21.44 min Similarity item 1000, 55.80 item/sec, required time 21.38 min
MIT
Jupyter notebook/Practice 4 - Cython.ipynb
marcomussi/RecommenderSystemPolimi
Still no luck! Why? There are a few reasons:
* We are getting the data from the sparse matrix using its interface, which is SLOW
* We are transforming sparse data into a dense array, which is SLOW
* We are performing a dot product against a dense vector

You could find a workaround... here we do something different.

Proposed solution: change the algorithm! Instead of performing the dot product, let's implement something that computes the similarity using the sparse data directly. We loop through the data and selectively update the similarity matrix cells. Underlying idea:
* When I select an item I know which users rated it
* Instead of looping through the other items trying to find common users, I use the URM to find which other items that user rated
* The user I am considering will be common between the two, so I increment the similarity of the two items
* Instead of following the path item1 -> loop item2 -> find user, I go item1 -> loop user -> loop item2
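As a rough illustration of this idea (a plain-Python sketch, not the code used in the rest of this notebook), one item's similarities can be accumulated by walking the index arrays of the CSC and CSR representations of the URM directly; the Cython class at the end of this section implements exactly this loop with static types:

```python
import numpy as np
import scipy.sparse as sps

def item_similarity_sketch(URM, item_id):
    """Similarities of item_id against all items, via the user loop (illustrative sketch)."""
    URM_csc = URM.tocsc()   # fast access to the users who rated a given item
    URM_csr = URM.tocsr()   # fast access to the items rated by a given user
    scores = np.zeros(URM.shape[1])

    # users who rated item_id, together with their ratings for it
    start, end = URM_csc.indptr[item_id], URM_csc.indptr[item_id + 1]
    for user_id, rating_input in zip(URM_csc.indices[start:end], URM_csc.data[start:end]):
        u_start, u_end = URM_csr.indptr[user_id], URM_csr.indptr[user_id + 1]
        # every other item this user rated has this user in common with item_id
        for other_item, rating_other in zip(URM_csr.indices[u_start:u_end],
                                            URM_csr.data[u_start:u_end]):
            if other_item != item_id:
                scores[other_item] += rating_input * rating_other
    return scores

# e.g. item_similarity_sketch(URM_train, 1) reproduces the dot-product similarities of item 1
```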
data_matrix = np.array([[1,1,0,1],[0,1,1,1],[1,0,1,0]]) data_matrix = sps.csc_matrix(data_matrix) data_matrix.todense()
_____no_output_____
MIT
Jupyter notebook/Practice 4 - Cython.ipynb
marcomussi/RecommenderSystemPolimi
Example: compute the similarities for item 1. Step 1: get the users that rated item 1.
users_rated_item = data_matrix[:,1] users_rated_item.indices
_____no_output_____
MIT
Jupyter notebook/Practice 4 - Cython.ipynb
marcomussi/RecommenderSystemPolimi
Step 2: count how many times those users rated other items
item_similarity = data_matrix[users_rated_item.indices].sum(axis = 0) np.array(item_similarity).squeeze()
_____no_output_____
MIT
Jupyter notebook/Practice 4 - Cython.ipynb
marcomussi/RecommenderSystemPolimi
Verify our result against the standard dot-product method. We can see that the similarity values for column 1 are identical.
similarity_matrix_product = data_matrix.T.dot(data_matrix) similarity_matrix_product.toarray()[:,1] # The following code works for implicit feedback only def compute_similarity_new_algorithm(URM, k=100): # We explore the matrix column-wise URM = check_matrix(URM, 'csc') URM.data = np.ones_like(URM.data) n_items = URM.shape[0] values = [] rows = [] cols = [] start_time = time.time() processedItems = 0 # Compute all similarities for each item using vectorization # for itemIndex in range(n_items): for itemIndex in range(1000): processedItems += 1 if processedItems % 100==0: itemPerSec = processedItems/(time.time()-start_time) print("Similarity item {}, {:.2f} item/sec, required time {:.2f} min".format( processedItems, itemPerSec, n_items/itemPerSec/60)) # All ratings for a given item users_rated_item = URM.indices[URM.indptr[itemIndex]:URM.indptr[itemIndex+1]] # Compute item similarities this_item_weights = URM[users_rated_item].sum(axis = 0) this_item_weights = np.array(this_item_weights).squeeze() # Sort indices and select TopK top_k_idx = np.argsort(this_item_weights) [-k:] # Incrementally build sparse matrix values.extend(this_item_weights[top_k_idx]) rows.extend(np.arange(URM.shape[0])[top_k_idx]) cols.extend(np.ones(k) * itemIndex) W_sparse = sps.csc_matrix((values, (rows, cols)), shape=(n_items, n_items), dtype=np.float32) return W_sparse compute_similarity_new_algorithm(URM_train)
Similarity item 100, 28.04 item/sec, required time 42.53 min Similarity item 200, 28.37 item/sec, required time 42.04 min Similarity item 300, 28.85 item/sec, required time 41.35 min Similarity item 400, 28.77 item/sec, required time 41.45 min Similarity item 500, 29.20 item/sec, required time 40.85 min Similarity item 600, 28.85 item/sec, required time 41.34 min Similarity item 700, 29.60 item/sec, required time 40.30 min Similarity item 800, 29.91 item/sec, required time 39.88 min Similarity item 900, 30.54 item/sec, required time 39.06 min Similarity item 1000, 30.61 item/sec, required time 38.96 min
MIT
Jupyter notebook/Practice 4 - Cython.ipynb
marcomussi/RecommenderSystemPolimi
Slower, but expected: dot-product operations are implemented very efficiently, while here we are using an indirect approach. Now let's write this algorithm in Cython.
%%cython import time import numpy as np cimport numpy as np from cpython.array cimport array, clone import scipy.sparse as sps cdef class Cosine_Similarity: cdef int TopK cdef long n_items # Arrays containing the sparse data cdef int[:] user_to_item_row_ptr, user_to_item_cols cdef int[:] item_to_user_rows, item_to_user_col_ptr cdef double[:] user_to_item_data, item_to_user_data # In case you select no TopK cdef double[:,:] W_dense def __init__(self, URM, TopK = 100): """ Dataset must be a matrix with items as columns :param dataset: :param TopK: """ super(Cosine_Similarity, self).__init__() self.n_items = URM.shape[1] self.TopK = min(TopK, self.n_items) URM = URM.tocsr() self.user_to_item_row_ptr = URM.indptr self.user_to_item_cols = URM.indices self.user_to_item_data = np.array(URM.data, dtype=np.float64) URM = URM.tocsc() self.item_to_user_rows = URM.indices self.item_to_user_col_ptr = URM.indptr self.item_to_user_data = np.array(URM.data, dtype=np.float64) if self.TopK == 0: self.W_dense = np.zeros((self.n_items,self.n_items)) cdef int[:] getUsersThatRatedItem(self, long item_id): return self.item_to_user_rows[self.item_to_user_col_ptr[item_id]:self.item_to_user_col_ptr[item_id+1]] cdef int[:] getItemsRatedByUser(self, long user_id): return self.user_to_item_cols[self.user_to_item_row_ptr[user_id]:self.user_to_item_row_ptr[user_id+1]] cdef double[:] computeItemSimilarities(self, long item_id_input): """ For every item the cosine similarity against other items depends on whether they have users in common. The more common users the higher the similarity. The basic implementation is: - Select the first item - Loop through all other items -- Given the two items, get the users they have in common -- Update the similarity considering all common users That is VERY slow due to the common user part, in which a long data structure is looped multiple times. A better way is to use the data structure in a different way skipping the search part, getting directly the information we need. 
The implementation here used is: - Select the first item - Initialize a zero valued array for the similarities - Get the users who rated the first item - Loop through the users -- Given a user, get the items he rated (second item) -- Update the similarity of the items he rated """ # Create template used to initialize an array with zeros # Much faster than np.zeros(self.n_items) cdef array[double] template_zero = array('d') cdef array[double] result = clone(template_zero, self.n_items, zero=True) cdef long user_index, user_id, item_index, item_id_second cdef int[:] users_that_rated_item = self.getUsersThatRatedItem(item_id_input) cdef int[:] items_rated_by_user cdef double rating_item_input, rating_item_second # Get users that rated the items for user_index in range(len(users_that_rated_item)): user_id = users_that_rated_item[user_index] rating_item_input = self.item_to_user_data[self.item_to_user_col_ptr[item_id_input]+user_index] # Get all items rated by that user items_rated_by_user = self.getItemsRatedByUser(user_id) for item_index in range(len(items_rated_by_user)): item_id_second = items_rated_by_user[item_index] # Do not compute the similarity on the diagonal if item_id_second != item_id_input: # Increment similairty rating_item_second = self.user_to_item_data[self.user_to_item_row_ptr[user_id]+item_index] result[item_id_second] += rating_item_input*rating_item_second return result def compute_similarity(self): cdef int itemIndex, innerItemIndex cdef long long topKItemIndex cdef long long[:] top_k_idx # Declare numpy data type to use vetor indexing and simplify the topK selection code cdef np.ndarray[long, ndim=1] top_k_partition, top_k_partition_sorting cdef np.ndarray[np.float64_t, ndim=1] this_item_weights_np #cdef long[:] top_k_idx cdef double[:] this_item_weights cdef long processedItems = 0 # Data structure to incrementally build sparse matrix # Preinitialize max possible length cdef double[:] values = np.zeros((self.n_items*self.TopK)) cdef int[:] rows = np.zeros((self.n_items*self.TopK,), dtype=np.int32) cdef int[:] cols = np.zeros((self.n_items*self.TopK,), dtype=np.int32) cdef long sparse_data_pointer = 0 start_time = time.time() # Compute all similarities for each item for itemIndex in range(self.n_items): processedItems += 1 if processedItems % 10000==0 or processedItems==self.n_items: itemPerSec = processedItems/(time.time()-start_time) print("Similarity item {} ( {:2.0f} % ), {:.2f} item/sec, required time {:.2f} min".format( processedItems, processedItems*1.0/self.n_items*100, itemPerSec, (self.n_items-processedItems) / itemPerSec / 60)) this_item_weights = self.computeItemSimilarities(itemIndex) if self.TopK == 0: for innerItemIndex in range(self.n_items): self.W_dense[innerItemIndex,itemIndex] = this_item_weights[innerItemIndex] else: # Sort indices and select TopK # Using numpy implies some overhead, unfortunately the plain C qsort function is even slower # top_k_idx = np.argsort(this_item_weights) [-self.TopK:] # Sorting is done in three steps. 
Faster then plain np.argsort for higher number of items # because we avoid sorting elements we already know we don't care about # - Partition the data to extract the set of TopK items, this set is unsorted # - Sort only the TopK items, discarding the rest # - Get the original item index this_item_weights_np = - np.array(this_item_weights) # Get the unordered set of topK items top_k_partition = np.argpartition(this_item_weights_np, self.TopK-1)[0:self.TopK] # Sort only the elements in the partition top_k_partition_sorting = np.argsort(this_item_weights_np[top_k_partition]) # Get original index top_k_idx = top_k_partition[top_k_partition_sorting] # Incrementally build sparse matrix for innerItemIndex in range(len(top_k_idx)): topKItemIndex = top_k_idx[innerItemIndex] values[sparse_data_pointer] = this_item_weights[topKItemIndex] rows[sparse_data_pointer] = topKItemIndex cols[sparse_data_pointer] = itemIndex sparse_data_pointer += 1 if self.TopK == 0: return np.array(self.W_dense) else: values = np.array(values[0:sparse_data_pointer]) rows = np.array(rows[0:sparse_data_pointer]) cols = np.array(cols[0:sparse_data_pointer]) W_sparse = sps.csr_matrix((values, (rows, cols)), shape=(self.n_items, self.n_items), dtype=np.float32) return W_sparse cosine_cython = Cosine_Similarity(URM_train, TopK=100) start_time = time.time() cosine_cython.compute_similarity() print("Similarity computed in {:.2f} seconds".format(time.time()-start_time))
Similarity item 10000 ( 15 % ), 722.73 item/sec, required time 1.27 min Similarity item 20000 ( 31 % ), 1152.12 item/sec, required time 0.65 min Similarity item 30000 ( 46 % ), 1413.59 item/sec, required time 0.41 min Similarity item 40000 ( 61 % ), 1611.02 item/sec, required time 0.26 min Similarity item 50000 ( 77 % ), 1761.78 item/sec, required time 0.14 min Similarity item 60000 ( 92 % ), 1876.49 item/sec, required time 0.05 min Similarity item 65134 ( 100 % ), 1929.34 item/sec, required time 0.00 min Similarity computed in 33.94 seconds
MIT
Jupyter notebook/Practice 4 - Cython.ipynb
marcomussi/RecommenderSystemPolimi
Better... much better. There are a few other things you could do, but at this point it is not worth the effort. How to use Cython outside a notebook: Step 1: create a .pyx file and write your code. Step 2: create a compilation script "compileCython.py" with the following content:
# This code will not run in a notebook cell

try:
    from setuptools import setup
    from setuptools import Extension
except ImportError:
    from distutils.core import setup
    from distutils.extension import Extension


from Cython.Distutils import build_ext
import numpy
import sys
import re


if len(sys.argv) != 4:
    raise ValueError("Wrong number of parameters received. Expected 4, got {}".format(sys.argv))


# Get the name of the file to compile
fileToCompile = sys.argv[1]

# Remove the argument from sys argv in order for it to contain only what setup needs
del sys.argv[1]

extensionName = re.sub(r"\.pyx", "", fileToCompile)


ext_modules = Extension(extensionName,
                [fileToCompile],
                extra_compile_args=['-O3'],
                include_dirs=[numpy.get_include(),],
                )

setup(
    cmdclass={'build_ext': build_ext},
    ext_modules=[ext_modules]
)
_____no_output_____
MIT
Jupyter notebook/Practice 4 - Cython.ipynb
marcomussi/RecommenderSystemPolimi
Step 3: compile your code with the following command: `python compileCython.py Cosine_Similarity_Cython.pyx build_ext --inplace`

Step 4: generate the Cython report and look for "yellow lines". The report is an .html file which shows how many operations are necessary to translate each Python operation into C code. If a line is white, it has a direct C translation. If it is yellow, it will require many indirect steps that will slow down execution. Some of those steps may be inevitable, some may be removed via static typing. IMPORTANT: white does not mean fast!! If a system call is involved, that part might be slow anyway. The report is generated with `cython -a Cosine_Similarity_Cython.pyx`

Step 5: add static types and C functions to remove "yellow" lines. If you use a variable only as a C object, use primitive types: `cdef int namevar`, `cdef double namevar`, `cdef float namevar`. If you call a function only within C code, use a specific declaration "cdef": `cdef function_name(self, int param1, double param2): ...`

Step 6: iterate steps 4 and 5 until you are satisfied with how clean your code is, then compile. An example of non-optimized code can be found in the source folder of this notebook with the _SLOW suffix.

Step 7: the compilation generates a file whose name is something like "Cosine_Similarity_Cython.cpython-36m-x86_64-linux-gnu.so"; the name tells you the source file, the architecture it is compiled for and the OS.

Step 8: import and use the compiled file as if it were a Python class
from Base.Simialrity.Cython.Cosine_Similarity_Cython import Cosine_Similarity cosine_cython = Cosine_Similarity(URM_train, TopK=100) start_time = time.time() cosine_cython.compute_similarity() print("Similarity computed in {:.2f} seconds".format(time.time()-start_time))
_____no_output_____
MIT
Jupyter notebook/Practice 4 - Cython.ipynb
marcomussi/RecommenderSystemPolimi
15 PDEs: Solution with Time Stepping. Heat Equation: The **heat equation** can be derived from Fourier's law and energy conservation (see the [lecture notes on the heat equation (PDF)](https://github.com/ASU-CompMethodsPhysics-PHY494/PHY494-resources/blob/master/15_PDEs/15_PDEs_LectureNotes_HeatEquation.pdf))

$$\frac{\partial T(\mathbf{x}, t)}{\partial t} = \frac{K}{C\rho} \nabla^2 T(\mathbf{x}, t),$$

Problem: insulated metal bar (1D heat equation). A metal bar of length $L$ is insulated along its length and held at 0ºC at its ends. Initially, the whole bar is at 100ºC. Calculate $T(x, t)$ for $t>0$.

Analytic solution: Solve by separation of variables and power series. The general solution that obeys the boundary conditions $T(0, t) = T(L, t) = 0$ is

$$T(x, t) = \sum_{n=1}^{+\infty} A_n \sin(k_n x)\, \exp\left(-\frac{k_n^2 K t}{C\rho}\right), \quad k_n = \frac{n\pi}{L}$$

The specific solution that satisfies $T(x, 0) = T_0 = 100^\circ\text{C}$ leads to $A_n = 4 T_0/n\pi$ for $n$ odd:

$$T(x, t) = \sum_{n=1,3,5,\dots}^{+\infty} \frac{4 T_0}{n \pi} \sin(k_n x)\, \exp\left(-\frac{k_n^2 K t}{C\rho}\right)$$
import numpy as np import matplotlib.pyplot as plt %matplotlib inline plt.style.use('ggplot') def T_bar(x, t, T0, L, K=237, C=900, rho=2700, nmax=1000): T = np.zeros_like(x) eta = K / (C*rho) for n in range(1, nmax, 2): kn = n*np.pi/L T += 4*T0/(np.pi * n) * np.sin(kn*x) * np.exp(-kn*kn * eta * t) return T T0 = 100. L = 1.0 X = np.linspace(0, L, 100) for t in np.linspace(0, 3000, 50): plt.plot(X, T_bar(X, t, T0, L)) plt.xlabel(r"$x$ (m)") plt.ylabel(r"$T$ ($^\circ$C)");
_____no_output_____
CC-BY-4.0
15_PDEs/15_PDEs.ipynb
ASU-CompMethodsPhysics-PHY494/PHY494-resources-2018
Numerical solution: Leap frog. Discretize (finite difference): For the time domain we only have the initial values, so we use a simple forward difference for the time derivative:

$$\frac{\partial T(x,t)}{\partial t} \approx \frac{T(x, t+\Delta t) - T(x, t)}{\Delta t}$$

For the spatial derivative we initially have all values, so we can use the more accurate central difference approximation:

$$\frac{\partial^2 T(x, t)}{\partial x^2} \approx \frac{T(x+\Delta x, t) + T(x-\Delta x, t) - 2 T(x, t)}{\Delta x^2}$$

Thus, the heat equation can be written as the finite difference equation

$$\frac{T(x, t+\Delta t) - T(x, t)}{\Delta t} = \frac{K}{C\rho} \frac{T(x+\Delta x, t) + T(x-\Delta x, t) - 2 T(x, t)}{\Delta x^2}$$

which can be reordered so that the RHS contains only known terms and the LHS future terms. Index $i$ is the spatial index, and $j$ the time index: $x = x_0 + i \Delta x$, $t = t_0 + j \Delta t$.

$$T_{i, j+1} = (1 - 2\eta) T_{i,j} + \eta(T_{i+1,j} + T_{i-1, j}), \quad \eta := \frac{K \Delta t}{C \rho \Delta x^2}$$

Thus we can step forward in time ("leap frog"), using only known values.

Solve the 1D heat equation numerically for an iron bar:
* $K = 237$ W/(m K)
* $C = 900$ J/K
* $\rho = 2700$ kg/m$^3$
* $L = 1$ m
* $T_0 = 373$ K and $T_b = 273$ K
* $T(x, 0) = T_0$ and $T(0, t) = T(L, t) = T_b$

Key considerations: The key line is the computation of the new temperature field at time step $j+1$ from the temperature distribution at time step $j$. It can be written purely with numpy array operations (see last lecture!):

```python
T[1:-1] = (1 - 2*eta) * T[1:-1] + eta * (T[2:] + T[:-2])
```

Note that the range operator `T[start:end]` *excludes* `end`, so in order to include `T[1], T[2], ..., T[-2]` (but not the rightmost `T[-1]`) we have to use `T[1:-1]`. The *boundary conditions* are fixed for all times:

```python
T[0] = T[-1] = Tb
```

The *initial conditions* (at time step `j=0`)

```python
T[1:-1] = T0
```

are only used to compute the distribution of temperatures at the next step `j=1`.

Solution
import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D %matplotlib notebook
_____no_output_____
CC-BY-4.0
15_PDEs/15_PDEs.ipynb
ASU-CompMethodsPhysics-PHY494/PHY494-resources-2018
For HTML/nbviewer output, use inline:
%matplotlib inline L_rod = 1. # m t_max = 3000. # s Dx = 0.02 # m Dt = 2 # s Nx = int(L_rod // Dx) Nt = int(t_max // Dt) Kappa = 237 # W/(m K) CHeat = 900 # J/K rho = 2700 # kg/m^3 T0 = 373 # K Tb = 273 # K eta = Kappa * Dt / (CHeat * rho * Dx**2) eta2 = 1 - 2*eta step = 20 # plot solution every n steps print("Nx = {0}, Nt = {1}".format(Nx, Nt)) print("eta = {0}".format(eta)) T = np.zeros(Nx) T_plot = np.zeros((Nt//step + 1, Nx)) # initial conditions T[1:-1] = T0 # boundary conditions T[0] = T[-1] = Tb t_index = 0 T_plot[t_index, :] = T for jt in range(1, Nt): T[1:-1] = eta2 * T[1:-1] + eta*(T[2:] + T[:-2]) if jt % step == 0 or jt == Nt-1: t_index += 1 T_plot[t_index, :] = T print("Iteration {0:5d}".format(jt), end="\r") else: print("Completed {0:5d} iterations: t={1} s".format(jt, jt*Dt))
Nx = 49, Nt = 1500 eta = 0.4876543209876543 Iteration 20 Iteration 40 Iteration 60 Iteration 80 Iteration 100 Iteration 120 Iteration 140 Iteration 160 Iteration 180 Iteration 200 Iteration 220 Iteration 240 Iteration 260 Iteration 280 Iteration 300 Iteration 320 Iteration 340 Iteration 360 Iteration 380 Iteration 400 Iteration 420 Iteration 440 Iteration 460 Iteration 480 Iteration 500 Iteration 520 Iteration 540 Iteration 560 Iteration 580 Iteration 600 Iteration 620 Iteration 640 Iteration 660 Iteration 680 Iteration 700 Iteration 720 Iteration 740 Iteration 760 Iteration 780 Iteration 800 Iteration 820 Iteration 840 Iteration 860 Iteration 880 Iteration 900 Iteration 920 Iteration 940 Iteration 960 Iteration 980 Iteration 1000 Iteration 1020 Iteration 1040 Iteration 1060 Iteration 1080 Iteration 1100 Iteration 1120 Iteration 1140 Iteration 1160 Iteration 1180 Iteration 1200 Iteration 1220 Iteration 1240 Iteration 1260 Iteration 1280 Iteration 1300 Iteration 1320 Iteration 1340 Iteration 1360 Iteration 1380 Iteration 1400 Iteration 1420 Iteration 1440 Iteration 1460 Iteration 1480 Iteration 1499 Completed 1499 iterations: t=2998 s
CC-BY-4.0
15_PDEs/15_PDEs.ipynb
ASU-CompMethodsPhysics-PHY494/PHY494-resources-2018
Visualization: Visualize (you can use the code as is). Note how we make the plot use proper units by multiplying with `Dt * step` and `Dx`.
X, Y = np.meshgrid(range(T_plot.shape[0]), range(T_plot.shape[1])) Z = T_plot[X, Y] fig = plt.figure() ax = fig.add_subplot(111, projection="3d") ax.plot_wireframe(X*Dt*step, Y*Dx, Z) ax.set_xlabel(r"time $t$ (s)") ax.set_ylabel(r"position $x$ (m)") ax.set_zlabel(r"temperature $T$ (K)") fig.tight_layout()
_____no_output_____
CC-BY-4.0
15_PDEs/15_PDEs.ipynb
ASU-CompMethodsPhysics-PHY494/PHY494-resources-2018
2D as above for the analytical solution…
X = Dx * np.arange(T_plot.shape[1]) plt.plot(X, T_plot.T) plt.xlabel(r"$x$ (m)") plt.ylabel(r"$T$ (K)");
_____no_output_____
CC-BY-4.0
15_PDEs/15_PDEs.ipynb
ASU-CompMethodsPhysics-PHY494/PHY494-resources-2018
Slower solution: I benchmarked this slow solution at 89.7 ms and the fast solution at 14.8 ms (with all `print` calls commented out), so the explicit loop is not that much worse (probably because the overhead of array copying etc. is high).
L_rod = 1. # m t_max = 3000. # s Dx = 0.02 # m Dt = 2 # s Nx = int(L_rod // Dx) Nt = int(t_max // Dt) Kappa = 237 # W/(m K) CHeat = 900 # J/K rho = 2700 # kg/m^3 T0 = 373 # K Tb = 273 # K eta = Kappa * Dt / (CHeat * rho * Dx**2) eta2 = 1 - 2*eta step = 20 # plot solution every n steps print("Nx = {0}, Nt = {1}".format(Nx, Nt)) print("eta = {0}".format(eta)) T = np.zeros(Nx) T_new = np.zeros_like(T) T_plot = np.zeros((int(np.ceil(Nt/step)) + 1, Nx)) # initial conditions T[1:-1] = T0 # boundary conditions T[0] = T[-1] = Tb T_new[:] = T t_index = 0 T_plot[t_index, :] = T for jt in range(1, Nt): # T[1:-1] = eta2 * T[1:-1] + eta*(T[2:] + T[:-2]) for ix in range(1, Nx-1): T_new[ix] = eta2 * T[ix] + eta*(T[ix+1] + T[ix-1]) T[:] = T_new if jt % step == 0 or jt == Nt-1: t_index += 1 T_plot[t_index, :] = T print("Iteration {0:5d}".format(jt), end="\r") else: print("Completed {0:5d} iterations: t={1} s".format(jt, jt*Dt)) X, Y = np.meshgrid(range(T_plot.shape[0]), range(T_plot.shape[1])) Z = T_plot[X, Y] fig = plt.figure() ax = fig.add_subplot(111, projection="3d") ax.plot_wireframe(X*Dt*step, Y*Dx, Z) ax.set_xlabel(r"time $t$ (s)") ax.set_ylabel(r"position $x$ (m)") ax.set_zlabel(r"temperature $T$ (K)") fig.tight_layout()
_____no_output_____
CC-BY-4.0
15_PDEs/15_PDEs.ipynb
ASU-CompMethodsPhysics-PHY494/PHY494-resources-2018
Stability of the solution: Empirical investigation of the stability. Investigate the solution for different values of `Dt` and `Dx`. Can you discern patterns for stable/unstable solutions? Report `Dt`, `Dx`, and `eta`
* for 3 stable solutions
* for 3 unstable solutions
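As a rough starting point (not part of the original exercise), the sketch below simply tabulates `eta` for a handful of `(Dt, Dx)` combinations, reusing the material constants from the cells above; the helper functions in the next cell can then be used to check each combination empirically.

```python
# Sketch: evaluate eta for a few (Dt, Dx) combinations (values chosen for illustration)
Kappa, CHeat, rho = 237, 900, 2700   # W/(m K), J/K, kg/m^3, same constants as above

for Dt, Dx in [(2, 0.02), (1, 0.02), (2, 0.04), (2, 0.01), (5, 0.02), (1, 0.01)]:
    eta = Kappa * Dt / (CHeat * rho * Dx**2)
    print("Dt = {0:4.1f} s, Dx = {1:5.3f} m  ->  eta = {2:.3f}".format(Dt, Dx, eta))
```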
def calculate_T(L_rod=1, t_max=3000, Dx=0.02, Dt=2, T0=373, Tb=273, step=20): Nx = int(L_rod // Dx) Nt = int(t_max // Dt) Kappa = 237 # W/(m K) CHeat = 900 # J/K rho = 2700 # kg/m^3 eta = Kappa * Dt / (CHeat * rho * Dx**2) eta2 = 1 - 2*eta print("Nx = {0}, Nt = {1}".format(Nx, Nt)) print("eta = {0}".format(eta)) T = np.zeros(Nx) T_plot = np.zeros((int(np.ceil(Nt/step)) + 1, Nx)) # initial conditions T[1:-1] = T0 # boundary conditions T[0] = T[-1] = Tb t_index = 0 T_plot[t_index, :] = T for jt in range(1, Nt): T[1:-1] = eta2 * T[1:-1] + eta*(T[2:] + T[:-2]) if jt % step == 0 or jt == Nt-1: t_index += 1 T_plot[t_index, :] = T print("Iteration {0:5d}".format(jt), end="\r") else: print("Completed {0:5d} iterations: t={1} s".format(jt, jt*Dt)) return T_plot def plot_T(T_plot, Dx, Dt, step): X, Y = np.meshgrid(range(T_plot.shape[0]), range(T_plot.shape[1])) Z = T_plot[X, Y] fig = plt.figure() ax = fig.add_subplot(111, projection="3d") ax.plot_wireframe(X*Dt*step, Y*Dx, Z) ax.set_xlabel(r"time $t$ (s)") ax.set_ylabel(r"position $x$ (m)") ax.set_zlabel(r"temperature $T$ (K)") fig.tight_layout() return ax T_plot = calculate_T(Dx=0.01, Dt=2, step=20) plot_T(T_plot, 0.01, 2, 20)
Nx = 99, Nt = 1500 eta = 1.9506172839506173 Iteration 20 Iteration 40 Iteration 60 Iteration 80 Iteration 100 Iteration 120 Iteration 140 Iteration 160 Iteration 180 Iteration 200 Iteration 220 Iteration 240 Iteration 260 Iteration 280 Iteration 300 Iteration 320 Iteration 340 Iteration 360 Iteration 380 Iteration 400 Iteration 420 Iteration 440 Iteration 460 Iteration 480 Iteration 500 Iteration 520 Iteration 540 Iteration 560 Iteration 580 Iteration 600 Iteration 620 Iteration 640 Iteration 660 Iteration 680 Iteration 700 Iteration 720 Iteration 740 Iteration 760 Iteration 780 Iteration 800 Iteration 820 Iteration 840 Iteration 860 Iteration 880 Iteration 900 Iteration 920 Iteration 940 Iteration 960 Iteration 980 Iteration 1000 Iteration 1020 Iteration 1040 Iteration 1060 Iteration 1080 Iteration 1100 Iteration 1120 Iteration 1140 Iteration 1160 Iteration 1180 Iteration 1200 Iteration 1220 Iteration 1240 Iteration 1260 Iteration 1280 Iteration 1300 Iteration 1320 Iteration 1340 Iteration 1360 Iteration 1380 Iteration 1400 Iteration 1420 Iteration 1440 Iteration 1460 Iteration 1480 Iteration 1499 Completed 1499 iterations: t=2998 s
CC-BY-4.0
15_PDEs/15_PDEs.ipynb
ASU-CompMethodsPhysics-PHY494/PHY494-resources-2018
Note that *decreasing* the value of $\Delta x$ made the solution *unstable*. This is strange: we have gotten used to the idea that working on a finer mesh increases the detail (until we hit round-off error) and just becomes computationally more expensive. But here the algorithm suddenly becomes unstable (and it is not just round-off). For certain combinations of values of $\Delta t$ and $\Delta x$ the solution becomes unstable. Empirically, bigger $\eta$ leads to instability. (In fact, $\eta \geq \frac{1}{2}$ is unstable for the leapfrog algorithm as we will see.)

Von Neumann stability analysis: If the difference equation solution diverges then we *know* that we have a bad approximation to the original PDE. Von Neumann stability analysis starts from the assumption that *eigenmodes* of the difference equation can be written as

$$T_{m,j} = \xi(k)^j e^{ikm\Delta x}, \quad t=j\Delta t,\ x=m\Delta x $$

with the unknown wave vectors $k=2\pi/\lambda$ and unknown complex functions, the *amplification factors* $\xi(k)$. Solutions of the difference equation can be written as linear superpositions of these basis functions. But they are only stable if the eigenmodes are stable, i.e., they will not grow in time (with $j$). This is the case when

$$|\xi(k)| < 1$$

for all $k$. Insert the eigenmodes into the finite difference equation

$$T_{m, j+1} = (1 - 2\eta) T_{m,j} + \eta(T_{m+1,j} + T_{m-1, j})$$

to obtain

\begin{align}
\xi(k)^{j+1} e^{ikm\Delta x} &= (1 - 2\eta) \xi(k)^{j} e^{ikm\Delta x} + \eta(\xi(k)^{j} e^{ik(m+1)\Delta x} + \xi(k)^{j} e^{ik(m-1)\Delta x})\\
\xi(k) &= (1 - 2\eta) + \eta(e^{ik\Delta x} + e^{-ik\Delta x})\\
\xi(k) &= 1 - 2\eta + 2\eta \cos k\Delta x\\
\xi(k) &= 1 + 2\eta\big(\cos k\Delta x - 1\big)
\end{align}

For $|\xi(k)| < 1$ (and all possible $k$):

\begin{align}
|\xi(k)| < 1 \quad &\Leftrightarrow \quad \xi^2(k) < 1\\
(1 + 2y)^2 = 1 + 4y + 4y^2 &< 1 \quad \text{with}\ \ y = \eta(\cos k\Delta x - 1)\\
y(1 + y) &< 0 \quad \Leftrightarrow \quad -1 < y < 0\\
\eta(\cos k\Delta x - 1) &\leq 0 \quad \forall k \quad (\eta > 0, -1 \leq \cos k\Delta x \leq 1)\\
\eta(\cos k\Delta x - 1) &> -1\\
\eta &< \frac{1}{1 - \cos k\Delta x}\\
\eta = \frac{K \Delta t}{C \rho \Delta x^2} &< \frac{1}{2} \le \frac{1}{1 - \cos k\Delta x}
\end{align}

Thus, solutions are only stable for $\eta < 1/2$. In particular, decreasing $\Delta t$ will always improve stability, but decreasing $\Delta x$ requires a quadratic *decrease* in $\Delta t$ (and hence quadratically more time steps)!

Note:
* Perform von Neumann stability analysis when possible (it depends on the PDE and the specific discretization).
* Test different combinations of $\Delta t$ and $\Delta x$.
* There is no guarantee that decreasing both will lead to more stable solutions!

Check my inputs: This was stable and it conforms to the stability criterion:
Dt = 2 Dx = 0.02 eta = Kappa * Dt /(CHeat * rho * Dx*Dx) print(eta)
0.4876543209876543
CC-BY-4.0
15_PDEs/15_PDEs.ipynb
ASU-CompMethodsPhysics-PHY494/PHY494-resources-2018
... and this was unstable, despite a seemingly small change:
Dt = 2 Dx = 0.01 eta = Kappa * Dt /(CHeat * rho * Dx*Dx) print(eta)
1.9506172839506173
CC-BY-4.0
15_PDEs/15_PDEs.ipynb
ASU-CompMethodsPhysics-PHY494/PHY494-resources-2018
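To wrap up the stability discussion: the criterion $\eta < 1/2$ can be turned around to give the largest stable time step for a given mesh spacing, $\Delta t_{\max} = \frac{C\rho\,\Delta x^2}{2K}$. A quick sketch (reusing the same constants as above) shows the quadratic scaling explicitly:

```python
# Sketch: largest stable time step Dt_max = C*rho*Dx**2 / (2*K) for a few mesh spacings
Kappa, CHeat, rho = 237, 900, 2700   # same material constants as above

for Dx in (0.04, 0.02, 0.01, 0.005):
    Dt_max = CHeat * rho * Dx**2 / (2 * Kappa)
    print("Dx = {0:6.3f} m  ->  largest stable Dt = {1:8.3f} s".format(Dx, Dt_max))
```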
Build a sklearn Pipeline for an ML contest submission. In the ML_course_train notebook we first analyzed the housing dataset to gain statistical insights and then, for example, added new features, replaced missing values and scaled the columns using pandas DataFrame methods. In the following we will use sklearn [Pipelines](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html) to integrate all these steps into one final *estimator*. The resulting pipeline can be used to save an ML estimator to a file and use it later in production. *Optional:* If you want, you can save your estimator as explained in the last cell at the bottom of this notebook. Based on a hidden dataset, its performance will then be ranked against all other submissions.
# read housing data again import pandas as pd import numpy as np housing = pd.read_csv("datasets/housing/housing.csv") # Try to get header information of the dataframe: housing.head()
_____no_output_____
Apache-2.0
ML_course/ML_Contest_train.ipynb
Riwedieb/handson-ml
One remark: sklearn transformers do **not** return pandas dataframes. Internally they work on (and return) numpy arrays. Now try to [convert](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_numpy.html) a dataframe to a numpy array:
housing.head().to_numpy()
_____no_output_____
Apache-2.0
ML_course/ML_Contest_train.ipynb
Riwedieb/handson-ml
As you can see, the column names are lost now. In a numpy array, columns are indexed using integers and no longer by their names.

Add extra feature columns: At first, we again add some extra columns (e.g. `rooms_per_household, population_per_household, bedrooms_per_household`) which might correlate better with the predicted parameter `median_house_value`. For modifying the dataset, we now use a [FunctionTransformer](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.FunctionTransformer.html), which we can later put into a pipeline. Hints:
* For finding the index number of a given column name, you can use the method [get_loc()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.get_loc.html)
* For concatenating the new columns with the given array, you can use the numpy method [c_](https://docs.scipy.org/doc/numpy/reference/generated/numpy.c_.html)
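One possible way to complete the cell below is sketched here (the column names `total_bedrooms`, `population` and `households` are assumed to be present in the housing dataframe loaded above):

```python
import numpy as np
from sklearn.preprocessing import FunctionTransformer

# Integer positions of the columns we need (assumed column names)
rooms_ix = housing.columns.get_loc("total_rooms")
bedrooms_ix = housing.columns.get_loc("total_bedrooms")
population_ix = housing.columns.get_loc("population")
household_ix = housing.columns.get_loc("households")

def add_extra_features(X):
    # Per-household ratios often correlate better with the target than raw counts
    rooms_per_household = X[:, rooms_ix] / X[:, household_ix]
    population_per_household = X[:, population_ix] / X[:, household_ix]
    bedrooms_per_household = X[:, bedrooms_ix] / X[:, household_ix]
    # np.c_ appends the new columns to the right of the original array
    return np.c_[X, rooms_per_household, population_per_household, bedrooms_per_household]

attr_adder = FunctionTransformer(add_extra_features, validate=False)
```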
from sklearn.preprocessing import FunctionTransformer # At first, get the indexes as integers from the column names: rooms_ix = housing.columns.get_loc("total_rooms") bedrooms_ix = population_ix = household_ix = # Now implement a function which takes a numpy array a argument and adds the new feature columns def add_extra_features(X): rooms_per_household = X[:, rooms_ix] / X[:, household_ix] population_per_household = bedrooms_per_household = # Concatenate the original array X with the new columns return attr_adder = FunctionTransformer(add_extra_features, validate = False) housing_extra_attribs = attr_adder.fit_transform(housing.values) assert housing_extra_attribs.shape == (17999, 13) housing_extra_attribs
_____no_output_____
Apache-2.0
ML_course/ML_Contest_train.ipynb
Riwedieb/handson-ml
Imputing missing elements: For replacing NaN values in the dataset with the mean or median of the column they are in, you can also use a [SimpleImputer](https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html):
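A possible completion of the cell below (dropping the categorical column and imputing column means; `strategy="median"` would work just as well):

```python
import numpy as np
from sklearn.impute import SimpleImputer

# Drop the categorical column so that only numerical columns remain
housing_num = housing.drop("ocean_proximity", axis=1)
print("We have %d nan elements in the numerical columns"
      % np.count_nonzero(np.isnan(housing_num.to_numpy())))

# Replace every NaN by the mean of its column
imp_mean = SimpleImputer(strategy="mean")
housing_num_cleaned = imp_mean.fit_transform(housing_num)
```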
from sklearn.impute import SimpleImputer # Drop the categorial column ocean_proximity housing_num = housing.drop(...) print("We have %d nan elements in the numerical columns" %np.count_nonzero(np.isnan(housing_num.to_numpy()))) imp_mean = ... housing_num_cleaned = imp_mean.fit_transform(housing_num) assert np.count_nonzero(np.isnan(housing_num_cleaned)) == 0 housing_num_cleaned[1,:]
_____no_output_____
Apache-2.0
ML_course/ML_Contest_train.ipynb
Riwedieb/handson-ml
Column scaling: For scaling and normalizing the columns, you can use the class [StandardScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html). Use numpy [mean](https://docs.scipy.org/doc/numpy/reference/generated/numpy.mean.html) and [std](https://docs.scipy.org/doc/numpy/reference/generated/numpy.std.html) to calculate the mean and standard deviation of each column after scaling (hint: columns are axis = 0!).
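One way the cell below could be completed; after standardization every column should have a mean close to 0 and a standard deviation close to 1:

```python
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
scaled = scaler.fit_transform(housing_num_cleaned)

print("mean of the columns is: ", scaled.mean(axis=0))
print("standard deviation of the columns is: ", scaled.std(axis=0))
```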
from sklearn.preprocessing import StandardScaler scaler = ... scaled = scaler.fit_transform(housing_num_cleaned) print("mean of the columns is: " , ...) print("standard deviation of the columns is: " , ...)
_____no_output_____
Apache-2.0
ML_course/ML_Contest_train.ipynb
Riwedieb/handson-ml
Putting all preprocessing steps together: Now let's build a pipeline for preprocessing the **numerical** attributes. The pipeline shall process the data in the following steps:
* [Impute](https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html) median or mean values for elements which are NaN
* Add attributes using the FunctionTransformer with the function add_extra_features()
* Scale the numerical values using the [StandardScaler()](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html)
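A sketch of how the three steps could be named and chained in the cell below (the step names are arbitrary, and `add_extra_features` is the function defined earlier):

```python
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import FunctionTransformer, StandardScaler

num_pipeline = Pipeline([
    ('imputer', SimpleImputer(strategy="median")),                               # replace NaNs
    ('attribs_adder', FunctionTransformer(add_extra_features, validate=False)),  # add ratio columns
    ('std_scaler', StandardScaler()),                                            # scale all columns
])
```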
from sklearn.pipeline import Pipeline num_pipeline = Pipeline([ ('give a name', ...), # Imputer ('give a name', ...), # FunctionTransformer ('give a name', ...), # Scaler ]) # Now test the pipeline on housing_num num_pipeline.fit_transform(housing_num)
_____no_output_____
Apache-2.0
ML_course/ML_Contest_train.ipynb
Riwedieb/handson-ml
Now we have a pipeline for the numerical columns. But we still have a categorical column:
housing['ocean_proximity'].head()
_____no_output_____
Apache-2.0
ML_course/ML_Contest_train.ipynb
Riwedieb/handson-ml
We need one more pipeline for the categorical column. Instead of the "Dummy encoding" we used before, we now use the [OneHotEncoder](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html) from sklearn. Hint: to make things easier, set the sparse option of the OneHotEncoder to False.
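A possible completion of the cell below; note the double brackets, which select the column as a 2D DataFrame, and `sparse=False`, which makes the encoder return a dense array (as suggested by the hint above):

```python
from sklearn.preprocessing import OneHotEncoder

housing_cat = housing[['ocean_proximity']]    # 2D selection (DataFrame, not Series)
cat_encoder = OneHotEncoder(sparse=False)     # dense output instead of a sparse matrix
housing_cat_1hot = cat_encoder.fit_transform(housing_cat)
housing_cat_1hot
```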
from sklearn.preprocessing import OneHotEncoder housing_cat = housing[] #get the right column cat_encoder = housing_cat_1hot = cat_encoder.fit_transform(housing_cat) housing_cat_1hot
_____no_output_____
Apache-2.0
ML_course/ML_Contest_train.ipynb
Riwedieb/handson-ml
We now have everything we need to build a preprocessing pipeline which applies all of the steps above to the columns. Since different columns require different transformations, we use the class [ColumnTransformer](https://scikit-learn.org/stable/modules/generated/sklearn.compose.ColumnTransformer.html).
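One way to fill in the cell below. The list of numerical column names is an assumption based on the standard housing dataset; the label column `median_house_value` is deliberately left out, so the transformer can later also be applied to feature-only dataframes:

```python
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

num_attribs = ["longitude", "latitude", "housing_median_age", "total_rooms",
               "total_bedrooms", "population", "households", "median_income"]
cat_attribs = ["ocean_proximity"]

full_prep_pipeline = ColumnTransformer([
    ("num", num_pipeline, num_attribs),                 # numerical pipeline from above
    ("cat", OneHotEncoder(sparse=False), cat_attribs),  # one-hot encode the categorical column
])
```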
from sklearn.compose import ColumnTransformer # These are the columns with the numerical features: num_attribs = ["longitude", ...] # Here are the columns with categorical features: cat_attribs = [...] full_prep_pipeline = ColumnTransformer([ ("give a name", ..., ...), # Add the numerical pipeline and specify the columns it should work on ("give a name", ..., ...), # Add a OneHotEncoder and specify the columns it should work on ]) full_prep_pipeline.fit_transform(housing)
_____no_output_____
Apache-2.0
ML_course/ML_Contest_train.ipynb
Riwedieb/handson-ml
Train an estimator: Include `full_prep_pipeline` in a further pipeline where it is followed by a RandomForestRegressor. This way, the data is first prepared using `full_prep_pipeline` and the RandomForestRegressor is then trained on it.
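A minimal sketch of the combined pipeline (the step names are arbitrary):

```python
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestRegressor

full_pipeline_with_predictor = Pipeline([
    ("preparation", full_prep_pipeline),         # all preprocessing steps from above
    ("random_forest", RandomForestRegressor()),  # the final estimator
])
```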
from sklearn.ensemble import RandomForestRegressor from sklearn.model_selection import train_test_split full_pipeline_with_predictor = Pipeline([ ("give a name", full_prep_pipeline), # add the full_prep_pipeline ("give a name", RandomForestRegressor()) # Add a RandomForestRegressor ])
_____no_output_____
Apache-2.0
ML_course/ML_Contest_train.ipynb
Riwedieb/handson-ml
For training the regressor, separate the label column (`median_house_value`) from the feature columns (all other columns). Split the data into a training and a testing dataset using train_test_split.
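One possible completion of the cell below (the fixed `random_state` is only added here for reproducibility and is not required):

```python
from sklearn.model_selection import train_test_split

# Features = everything except the label column, labels = median_house_value
housing_features = housing.drop("median_house_value", axis=1)
housing_labels = housing["median_house_value"]

X_train, X_test, y_train, y_test = train_test_split(
    housing_features, housing_labels, test_size=0.20, random_state=42)

full_pipeline_with_predictor.fit(X_train, y_train)
```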
# Create two dataframes, one for the labels one for the features housing_features = housing... housing_labels = housing # Split the two dataframes into a training and a test dataset X_train, X_test, y_train, y_test = train_test_split(housing_features, housing_labels, test_size = 0.20) # Now train the full_pipeline_with_predictor on the training dataset full_pipeline_with_predictor.fit(X_train, y_train)
_____no_output_____
Apache-2.0
ML_course/ML_Contest_train.ipynb
Riwedieb/handson-ml
As usual, calculate some score metrics:
from sklearn.metrics import mean_squared_error

y_pred = full_pipeline_with_predictor.predict(X_test)

# sklearn metrics expect the argument order (y_true, y_pred)
tree_mse = mean_squared_error(y_test, y_pred)
tree_rmse = np.sqrt(tree_mse)
tree_rmse

from sklearn.metrics import r2_score
r2_score(y_test, y_pred)
_____no_output_____
Apache-2.0
ML_course/ML_Contest_train.ipynb
Riwedieb/handson-ml
Use the [pickle serializer](https://docs.python.org/3/library/pickle.html) to save your estimator to a file for contest participation.
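To complete the cell below, `your_regressor` just needs to point at the fitted pipeline from above, e.g.:

```python
your_regressor = full_pipeline_with_predictor   # the fitted Pipeline trained above
```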
import pickle import getpass from sklearn.utils.validation import check_is_fitted your_regressor = ... # Put your regression pipeline here assert isinstance(your_regressor, Pipeline) pickle.dump(your_regressor, open(getpass.getuser() + "s_model.p", "wb" ) )
_____no_output_____
Apache-2.0
ML_course/ML_Contest_train.ipynb
Riwedieb/handson-ml
Running the Direct Fidelity Estimation (DFE) algorithm. This example walks through the steps of running the direct fidelity estimation (DFE) algorithm as described in these two papers:
* Direct Fidelity Estimation from Few Pauli Measurements (https://arxiv.org/abs/1104.4695)
* Practical characterization of quantum devices without tomography (https://arxiv.org/abs/1104.3835)

Optimizations for Clifford circuits are based on a tableau-based simulator:
* Improved Simulation of Stabilizer Circuits (https://arxiv.org/pdf/quant-ph/0406196.pdf)
try: import cirq except ImportError: print("installing cirq...") !pip install --quiet cirq print("installed cirq.") # Import Cirq, DFE, and create a circuit import cirq from cirq.contrib.svg import SVGCircuit import examples.direct_fidelity_estimation as dfe qubits = cirq.LineQubit.range(3) circuit = cirq.Circuit(cirq.CNOT(qubits[0], qubits[2]), cirq.Z(qubits[0]), cirq.H(qubits[2]), cirq.CNOT(qubits[2], qubits[1])) SVGCircuit(circuit) # We then create a sampler. For this example, we use a simulator but the code can accept a hardware sampler. noise = cirq.ConstantQubitNoiseModel(cirq.depolarize(0.1)) sampler = cirq.DensityMatrixSimulator(noise=noise) # We run the DFE: estimated_fidelity, intermediate_results = dfe.direct_fidelity_estimation( circuit, qubits, sampler, n_measured_operators=None, # None=returns all the Pauli strings samples_per_term=0) # 0=use dense matrix simulator print('Estimated fidelity: %.2f' % (estimated_fidelity))
_____no_output_____
Apache-2.0
examples/direct_fidelity_estimation.ipynb
mganahl/Cirq
What is happening under the hood? Now, let's look at the `intermediate_results` and correlate what is happening in the code with the papers. The definition of fidelity is:

$$F = F(\hat{\rho},\hat{\sigma}) = \mathrm{Tr} \left(\hat{\rho} \hat{\sigma}\right)$$

where $\hat{\rho}$ is the theoretical pure state and $\hat{\sigma}$ is the actual state. The idea of DFE is to write fidelity as:

$$F= \sum _i \frac{\rho _i \sigma _i}{d}$$

where $d=4^{\mathit{number-of-qubits}}$, $\rho _i = \mathrm{Tr} \left( \hat{\rho} P_i \right)$, and $\sigma _i = \mathrm{Tr} \left(\hat{\sigma} P_i \right)$. Each of the $P_i$ is a Pauli operator. We can then finally rewrite the fidelity as:

$$F= \sum _i Pr(i) \frac{\sigma _i}{\rho_i}$$

with $Pr(i) = \frac{\rho_i ^2}{d}$, which is a probability-like set of numbers (each between 0.0 and 1.0, adding up to 1.0). One important question is how do we choose these Pauli operators $P_i$? It depends on whether the circuit is Clifford or not. In case it is, we know that there are "only" $2^{\mathit{number-of-qubits}}$ operators for which $Pr(i)$ is non-zero. In fact, we know that they are all equiprobable with $Pr(i) = \frac{1}{2^{\mathit{number-of-qubits}}}$. The code detects Cliffordness automatically and switches to this mode. In case the circuit is not Clifford, the code just uses all the operators. Let's check that, in the case of our example, we do see the Pauli operators with equal probability (i.e. the $\rho_i$):
for pauli_trace in intermediate_results.pauli_traces: print('Probability %.3f\tPauli: %s' % (pauli_trace.Pr_i, pauli_trace.P_i))
_____no_output_____
Apache-2.0
examples/direct_fidelity_estimation.ipynb
mganahl/Cirq
Yay! We do see 8 entries (we have 3 qubits), all with the same 1/8 probability. What if we had a 23-qubit circuit? In that case, there would be quite a lot of them. That is where the parameter `n_measured_operators` becomes useful. If it is set to `None` we return *all* the Pauli strings (regardless of whether the circuit is Clifford or not). If set to an integer, we randomly sample the Pauli strings. Now, let's actually look at the measurements, i.e. $\sigma_i$:
for trial_result in intermediate_results.trial_results: print('rho_i=%.3f\tsigma_i=%.3f\tPauli:%s' % (trial_result.pauli_trace.rho_i, trial_result.sigma_i, trial_result.pauli_trace.P_i))
_____no_output_____
Apache-2.0
examples/direct_fidelity_estimation.ipynb
mganahl/Cirq
Gujarati with CLTK: See how you can analyse your Gujarati texts with CLTK! Let's begin by adding the `USER_PATH`.
import os USER_PATH = os.path.expanduser('~')
_____no_output_____
MIT
languages/south_asia/Gujarati_tutorial.ipynb
glaserti/tutorials
In order to be able to download Gujarati texts from CLTK's Github repo, we will require an importer.
from cltk.corpus.utils.importer import CorpusImporter gujarati_downloader = CorpusImporter('gujarati')
_____no_output_____
MIT
languages/south_asia/Gujarati_tutorial.ipynb
glaserti/tutorials
We can now see the corpora available for download by using the `list_corpora` feature of the importer. Let's go ahead and try it out!
gujarati_downloader.list_corpora
_____no_output_____
MIT
languages/south_asia/Gujarati_tutorial.ipynb
glaserti/tutorials
The corpus gujarati_text_wikisource can be downloaded from the Github repo. It will be downloaded to the directory `cltk_data/gujarati` at the above-mentioned `USER_PATH`.
gujarati_downloader.import_corpus('gujarati_text_wikisource')
_____no_output_____
MIT
languages/south_asia/Gujarati_tutorial.ipynb
glaserti/tutorials
You can see the texts downloaded by doing the following, or checking out the `cltk_data/gujarati/text/gujarati_text_wikisource` directory.
gujarati_corpus_path = os.path.join(USER_PATH,'cltk_data/gujarati/text/gujarati_text_wikisource') list_of_texts = [text for text in os.listdir(gujarati_corpus_path) if '.' not in text] print(list_of_texts)
['narsinh_mehta', 'kabir', 'vallabhacharya']
MIT
languages/south_asia/Gujarati_tutorial.ipynb
glaserti/tutorials
Great, now that we have our texts, let's take a sample from one of them. For this tutorial, we shall be using govinda_khele_holi, a text by the Gujarati poet Narsinh Mehta.
gujarati_text_path = os.path.join(gujarati_corpus_path,'narsinh_mehta/govinda_khele_holi.txt') gujarati_text = open(gujarati_text_path,'r').read() print(gujarati_text)
વૃંદાવન જઈએ, જીહાં ગોવિંદ ખેલે હોળી; નટવર વેશ ધર્યો નંદ નંદન, મળી મહાવન ટોળી... ચાલો સખી ! એક નાચે એક ચંગ વજાડે, છાંટે કેસર ઘોળી; એક અબીરગુલાલ ઉડાડે, એક ગાય ભાંભર ભોળી... ચાલો સખી ! એક એકને કરે છમકલાં, હસી હસી કર લે તાળી; માહોમાહે કરે મરકલાં, મધ્ય ખેલે વનમાળી... ચાલો સખી ! વસંત ઋતુ વૃંદાવન સરી, ફૂલ્યો ફાગણ માસ; ગોવિંદગોપી રમે રંગભર, જુએ નરસૈંયો દાસ... ચાલો સખી !
MIT
languages/south_asia/Gujarati_tutorial.ipynb
glaserti/tutorials
Gujarati Alphabets: There are 13 vowels and 33 consonants, which are grouped as follows:
from cltk.corpus.gujarati.alphabet import * print("Digits:",DIGITS) print("Vowels:",VOWELS) print("Dependent vowels:",DEPENDENT_VOWELS) print("Consonants:",CONSONANTS) print("Velar consonants:",VELAR_CONSONANTS) print("Palatal consonants:",PALATAL_CONSONANTS) print("Retroflex consonants:",RETROFLEX_CONSONANTS) print("Dental consonants:",DENTAL_CONSONANTS) print("Labial consonants:",LABIAL_CONSONANTS) print("Sonorant consonants:",SONORANT_CONSONANTS) print("Sibilant consonants:",SIBILANT_CONSONANTS) print("Guttural consonant:",GUTTURAL_CONSONANT) print("Additional consonants:",ADDITIONAL_CONSONANTS) print("Modifiers:",MODIFIERS)
Digits: ['૦', '૧', '૨', '૩', '૪', '૫', '૬', '૭', '૮', '૯', '૧૦'] Vowels: ['અ', 'આ', 'ઇ', 'ઈ', 'ઉ', 'ઊ', 'ઋ', 'એ', 'ઐ', 'ઓ', 'ઔ', 'અં', 'અઃ'] Dependent vowels: ['ા ', 'િ', 'ી', 'ો', 'ૌ'] Consonants: ['ક', 'ખ', 'ગ', 'ઘ', 'ચ', 'છ', 'જ', 'ઝ', 'ઞ', 'ટ', 'ઠ', 'ડ', 'ઢ', 'ણ', 'ત', 'થ', 'દ', 'ધ', 'ન', 'પ', 'ફ', 'બ', 'ભ', 'મ', 'ય', 'ર', 'લ', 'ળ', 'વ', 'શ', 'ષ', 'સ', 'હ'] Velar consonants: ['ક', 'ખ', 'ગ', 'ઘ', 'ઙ'] Palatal consonants: ['ચ', 'છ', 'જ', 'ઝ', 'ઞ'] Retroflex consonants: ['ટ', 'ઠ', 'ડ', 'ઢ', 'ણ'] Dental consonants: ['ત', 'થ', 'દ', 'ધ', 'ન'] Labial consonants: ['પ', 'ફ', 'બ', 'ભ', 'મ'] Sonorant consonants: ['ય', 'ર', 'લ', 'વ'] Sibilant consonants: ['શ', 'ષ', 'સ'] Guttural consonant: ['હ'] Additional consonants: ['ળ', 'ક્ષ', 'જ્ઞ'] Modifiers: [' ्', ' ॓', ' ॔']
MIT
languages/south_asia/Gujarati_tutorial.ipynb
glaserti/tutorials
Transliterations: We can transliterate the Gujarati script to those of other Indic languages. Let us transliterate `કમળ ભારતનો રાષ્ટ્રીય ફૂલ છે` to Kannada:
gujarati_text_two = 'કમળ ભારતનો રાષ્ટ્રીય ફૂલ છે' from cltk.corpus.sanskrit.itrans.unicode_transliterate import UnicodeIndicTransliterator UnicodeIndicTransliterator.transliterate(gujarati_text_two,"gu","kn")
_____no_output_____
MIT
languages/south_asia/Gujarati_tutorial.ipynb
glaserti/tutorials
We can also romanize the text as shown:
from cltk.corpus.sanskrit.itrans.unicode_transliterate import ItransTransliterator ItransTransliterator.to_itrans(gujarati_text_two,'gu')
_____no_output_____
MIT
languages/south_asia/Gujarati_tutorial.ipynb
glaserti/tutorials
Similarly, we can indicize a text given in its ITRANS transliteration:
gujarati_text_itrans = 'bhaawanaa' ItransTransliterator.from_itrans(gujarati_text_itrans,'gu')
_____no_output_____
MIT
languages/south_asia/Gujarati_tutorial.ipynb
glaserti/tutorials
Syllabifier: We can use the indian_syllabifier to syllabify Gujarati sentences. To do this, we will have to import models as follows; importing `sanskrit_models_cltk` might take some time.
phonetics_model_importer = CorpusImporter('sanskrit') phonetics_model_importer.list_corpora phonetics_model_importer.import_corpus('sanskrit_models_cltk')
_____no_output_____
MIT
languages/south_asia/Gujarati_tutorial.ipynb
glaserti/tutorials
Now we import the syllabifier and syllabify as follows:
%%capture from cltk.stem.sanskrit.indian_syllabifier import Syllabifier gujarati_syllabifier = Syllabifier('gujarati') gujarati_syllables = gujarati_syllabifier.orthographic_syllabify('ભાવના')
_____no_output_____
MIT
languages/south_asia/Gujarati_tutorial.ipynb
glaserti/tutorials