Create output control (OC) data using words
#--oc data spd = {(0,199): ['print budget', 'save head'], (0,200): [], (0,399): ['print budget', 'save head'], (0,400): [], (0,599): ['print budget', 'save head'], (0,600): [], (0,799): ['print budget', 'save head'], (0,800): [], (0,999): ['print budget', 'save he...
original_libraries/flopy-master/examples/Notebooks/swiex4.ipynb
mjasher/gac
gpl-2.0
Create the model with the freshwater well (Simulation 1)
modelname = 'swiex4_s1' ml = mf.Modflow(modelname, version='mf2005', exe_name=exe_name, model_ws=workspace) discret = mf.ModflowDis(ml, nlay=nlay, nrow=nrow, ncol=ncol, laycbd=0, delr=delr, delc=delc, top=botm[0], botm=botm[1:], nper=nper, perlen=perlen, nstp=nstp) bas =...
Create the model with the saltwater well (Simulation 2)
modelname2 = 'swiex4_s2' ml2 = mf.Modflow(modelname2, version='mf2005', exe_name=exe_name, model_ws=workspace) discret = mf.ModflowDis(ml2, nlay=nlay, nrow=nrow, ncol=ncol, laycbd=0, delr=delr, delc=delc, top=botm[0], botm=botm[1:], nper=nper, perlen=perlen, nstp=nstp) b...
Load the simulation 1 ZETA data and ZETA observations.
#--read base model zeta zfile = fu.CellBudgetFile(os.path.join(ml.model_ws, modelname+'.zta')) kstpkper = zfile.get_kstpkper() zeta = [] for kk in kstpkper: zeta.append(zfile.get_data(kstpkper=kk, text='ZETASRF 1')[0]) zeta = np.array(zeta) #--read swi obs zobs = np.genfromtxt(os.path.join(ml.model_ws, modelname+'...
Load the simulation 2 ZETA data and ZETA observations.
#--read saltwater well model zeta zfile2 = fu.CellBudgetFile(os.path.join(ml2.model_ws, modelname2+'.zta')) kstpkper = zfile2.get_kstpkper() zeta2 = [] for kk in kstpkper: zeta2.append(zfile2.get_data(kstpkper=kk, text='ZETASRF 1')[0]) zeta2 = np.array(zeta2) #--read swi obs zobs2 = np.genfromtxt(os.path.join(ml2....
Define figure dimensions and colors used for plotting ZETA surfaces
#--figure dimensions fwid, fhgt = 8.00, 5.50 flft, frgt, fbot, ftop = 0.125, 0.95, 0.125, 0.925 #--line color definition icolor = 5 colormap = plt.cm.jet #winter cc = [] cr = np.linspace(0.9, 0.0, icolor) for idx in cr: cc.append(colormap(idx))
Recreate Figure 9 from the SWI2 documentation (http://pubs.usgs.gov/tm/6a46/).
plt.rcParams.update({'legend.fontsize': 6, 'legend.frameon' : False}) fig = plt.figure(figsize=(fwid, fhgt), facecolor='w') fig.subplots_adjust(wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop) #--first plot ax = fig.add_subplot(2, 2, 1) #--axes limits ax.set_xlim(-1500, 1500) ax.set_ylim(-50, -10...
Initial matchup chart
DataFrame(data = init.data, index = init.row_names, columns = init.col_names)
python/examples/super_street_fighter_2_turbo.ipynb
ajul/zerosum
bsd-3-clause
Matchup chart after balancing with a logistic handicap
DataFrame(data = opt.F, index = init.row_names, columns = init.col_names)
can be used to run a test where the platform is configured to: disable the "sched_is_big_little" flag (if present) and set "sched_migration_cost_ns" to 50 ms. Notice that a value written to a file is verified only if the file path is prefixed with a '/'. Otherwise, the write never fails, e.g. if the file does not exist...
from trace import Trace import json with open('/home/patbel01/Code/lisa/results/LisaInANutshell_Backup/platform.json', 'r') as fh: platform = json.load(fh) trace = Trace('/home/patbel01/Code/lisa/results/LisaInANutshell_Backup/trace.dat', ['sched_switch'], platform) logging.info("%d tasks loaded from trace...
ipynb/deprecated/releases/ReleaseNotes_v16.12.ipynb
ARM-software/lisa
apache-2.0
Now evaluate your learned model using the test set. Measure the total error of your prediction
Y_test = F_Regression(X_test, W) error = Loss_Regression(Y_test, Z_test) print("Evaluation error: ", error)
Intro ML Semcomp/semcomp17_ml/semcomp17_ml_answer.ipynb
marcelomiky/PythonCodes
mit
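`F_Regression` and `Loss_Regression` are defined earlier in the notebook; as a rough stand-in, evaluating a linear model with a mean-squared-error loss might look like this (all names and values below are illustrative, not the notebook's actual definitions):

```python
import numpy as np

def predict_linear(X, W):
    """Hypothetical linear model: each row of X dotted with the weights W."""
    return X.dot(W)

def mse_loss(pred, target):
    """Mean squared error between predictions and targets."""
    return np.mean((pred - target) ** 2)

X_test = np.array([[1.0, 2.0], [3.0, 4.0]])
W = np.array([0.5, 0.5])
Z_test = np.array([1.5, 3.5])

error = mse_loss(predict_linear(X_test, W), Z_test)  # 0.0 for this toy fit
```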
Network Architecture The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide rec...
learning_rate = 0.001 # Input and target placeholders inputs_ = tf.placeholder(tf.float32, [None,28,28,1], name='inputs') targets_ = tf.placeholder(tf.float32, [None,28,28,1], name='labels') ### Encoder conv1 = tf.layers.conv2d(inputs=inputs_, filters=16, kernel_size=(3,3), padding='same', a...
autoencoder/Convolutional_Autoencoder.ipynb
otavio-r-filho/AIND-Deep_Learning_Notebooks
mit
Denoising As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then cl...
learning_rate = 0.001 inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs') targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets') ### Encoder conv1 = tf.layers.conv2d(inputs=inputs_, filters=32, kernel_size=(3,3), padding='same', activation=tf.nn.relu) # No...
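The noise-then-clip step described above can be sketched independently of the network (the `noise_factor` of 0.5 is an assumption, chosen only for illustration):

```python
import numpy as np

def add_noise(images, noise_factor=0.5, seed=0):
    """Corrupt images with Gaussian noise, then clip back into [0, 1]."""
    rng = np.random.default_rng(seed)
    noisy = images + noise_factor * rng.standard_normal(images.shape)
    return np.clip(noisy, 0.0, 1.0)

batch = np.zeros((2, 28, 28, 1))
noisy_batch = add_noise(batch)  # same shape, values clipped to [0, 1]
```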
Similarity Scores Links to information about distance metrics: Implementing the Five Most Popular Similarity Measures in Python Scikit-Learn Distance Metric Python Distance Library Numeric distances are fairly easy, but can be record specific (e.g. phone numbers can compare area codes, city codes, etc. to determine s...
# Typographic Distances print(distance.levenshtein("lenvestein", "levenshtein")) print(distance.hamming("hamming", "hamning")) # Compare glyphs, syllables, or phonemes t1 = ("de", "ci", "si", "ve") t2 = ("de", "ri", "si", "ve") print(distance.levenshtein(t1, t2)) # Sentence Comparison sent1 = "The quick brown fox jum...
Entity Resolution Workshop.ipynb
DistrictDataLabs/entity-resolution
apache-2.0
Preprocessed Text Score Use text preprocessing with NLTK to split long strings into parts, and normalize them using Wordnet.
def tokenize(sent): """ When passed in a sentence, tokenizes and normalizes the string, returning a list of lemmata. """ lemmatizer = nltk.WordNetLemmatizer() for token in nltk.wordpunct_tokenize(sent): token = token.lower() yield lemmatizer.lemmatize(token) def normalized_jacc...
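A normalized Jaccard score over the lemmata produced by `tokenize` might look like this (a sketch; the notebook's own `normalized_jacc...` helper is truncated above, so its exact implementation may differ):

```python
def jaccard(a, b):
    """Jaccard similarity: |intersection| / |union| of two token sets."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 1.0

score = jaccard(["the", "quick", "fox"], ["the", "slow", "fox"])  # 2/4 = 0.5
```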
Similarity Vectors
def similarity(prod1, prod2): """ Returns a similarity vector of match scores: [name_score, description_score, manufacturer_score, price_score] """ pair = (prod1, prod2) names = [r.get('name', None) or r.get('title', None) for r in pair] descr = [r.get('description') for r in pair] manu...
Weighted Pairwise Matching
THRESHOLD = 0.90 WEIGHTS = (0.6, 0.1, 0.2, 0.1) matches = 0 for azprod in amazon.values(): for googprod in google.values(): vector = similarity(azprod, googprod) score = sum(w * v for w, v in zip(WEIGHTS, vector)) if score > THRESHOLD: matches += 1 print "...
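The weighted sum inside the loop above collapses the four-element similarity vector into a single score; in isolation:

```python
WEIGHTS = (0.6, 0.1, 0.2, 0.1)  # name, description, manufacturer, price

def weighted_score(vector, weights=WEIGHTS):
    """Weighted sum of a similarity vector, matching the loop above."""
    return sum(w * v for w, v in zip(weights, vector))

score = weighted_score([1.0, 0.5, 1.0, 0.0])  # 0.6 + 0.05 + 0.2 + 0.0 = 0.85
```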
Download the data from the source website if necessary.
import urllib.request url = 'http://mattmahoney.net/dc/' def maybe_download(filename, expected_bytes): """Download a file if not present, and make sure it's the right size.""" if not os.path.exists(filename): filename, _ = urllib.request.urlretriev...
5_word2vec.ipynb
recepkabatas/Spark
apache-2.0
Read the data into a string.
filename = "text8.zip" def read_data(filename): """Return the words in the first file of the zip archive.""" with zipfile.ZipFile(filename) as f: return f.read(f.namelist()[0]).split() words = read_data(filename) print('Data size', len(words))
Function to generate a training batch for the skip-gram model.
data_index = 0 def generate_batch(batch_size, num_skips, skip_window): global data_index assert batch_size % num_skips == 0 assert num_skips <= 2 * skip_window batch = np.ndarray(shape=(batch_size), dtype=np.int32) labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32) span = 2 * skip_window + 1 # [ sk...
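Stripped of the circular-buffer bookkeeping, the pairs that `generate_batch` produces can be sketched as a sliding window (a simplification: the real function also samples only `num_skips` contexts per center word, at random):

```python
def skipgram_pairs(data, skip_window):
    """Yield (center, context) index pairs from a sliding window."""
    for i in range(skip_window, len(data) - skip_window):
        for j in range(i - skip_window, i + skip_window + 1):
            if j != i:
                yield data[i], data[j]

pairs = list(skipgram_pairs([10, 11, 12, 13, 14], skip_window=1))
# each interior word is paired with both of its neighbours
```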
Train a skip-gram model.
batch_size = 128 embedding_size = 128 # Dimension of the embedding vector. skip_window = 1 # How many words to consider left and right. num_skips = 2 # How many times to reuse an input to generate a label. # We pick a random validation set to sample nearest neighbors. here we limit the # validation samples to the words...
The above algorithm is known as "trial division". Keep track of all primes discovered so far, and test-divide them, in increasing order, into a candidate number, until: (A) either one of the primes divides evenly, in which case move on to the next odd number, or (B) until we know our candidate is a next prime, in which ca...
def gcd(a, b): while b: a, b = b, a % b return a print(gcd(81, 18)) print(gcd(12, 44)) print(gcd(117, 17)) # strangers
Silicon Forest Math Series | RSA.ipynb
4dsolutions/Python5
mit
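The trial-division procedure described above can be sketched as follows (a minimal version; it stops test-dividing once a prime's square exceeds the candidate):

```python
def primes_by_trial_division(limit):
    """Collect primes up to limit, test-dividing by the primes found so far."""
    primes = []
    for candidate in range(2, limit + 1):
        # candidate is prime if no earlier prime up to sqrt(candidate) divides it
        if all(candidate % p != 0 for p in primes if p * p <= candidate):
            primes.append(candidate)
    return primes

primes_by_trial_division(30)  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```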
How does Euclid's Method work? That's a great question and one your teacher should be able to explain. First see if you might figure it out for yourself... Here's one explanation: If a smaller number divides a larger one without remainder then we're done, and that will always happen when that smaller number is 1 if n...
print(81 % 18) # 18 goes into 81 four times, leaving remainder 9 print(18 % 9) # 9 divides 18 evenly, so the gcd is 9
Suppose we had asked for gcd(18, 81) instead? 18 is the remainder (no 81s go into it) whereas b was 81, so the while loop simply flips the two numbers around to give the example above. The gcd function now gives us the means to compute totients and totatives of a number. The totatives of N are the strangers less than ...
def totatives(N): # list comprehension! return [x for x in range(1,N) if gcd(x,N)==1] # strangers only def T(N): """ Returns the number of numbers between (1, N) that have no factors in common with N: called the 'totient of N' (sometimes phi is used in the docs) """ return len(t...
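Using the gcd defined earlier, the truncated cell above can be reconstructed roughly as:

```python
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

def totatives(N):
    """The 'strangers': numbers in 1..N-1 sharing no factor with N."""
    return [x for x in range(1, N) if gcd(x, N) == 1]

def T(N):
    """The totient of N: how many totatives N has."""
    return len(totatives(N))

totatives(12), T(12)  # ([1, 5, 7, 11], 4)
```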
Where to go next is in the direction of Euler's Theorem, a generalization of Fermat's Little Theorem. The built-in pow(m, n, N) function will raise m to the n modulo N in an efficient manner.
def powers(N): totient = T(N) print("Totient of {}:".format(N), totient) for t in totatives(N): values = [pow(t, n, N) for n in range(totient + 1)] cycle = values[:values.index(1, 1)] # first 1 after initial 1 print("{:>2}".format(len(cycle)), cycle) powers(17)
Above we see repeating cycles of numbers, with the lengths of the cycles all dividing 16, the totient of the prime number 17. pow(14, 2, 17) is 9, pow(14, 3, 17) is 7, and so on, coming back around to 14 at pow(14, 17, 17), since 17 is 1 modulo 16. Numbers raised to any kth power modulo N, where k is 1 modulo the totie...
from random import randint def check(N): totient = T(N) for t in totatives(N): n = randint(1, 10) print(t, pow(t, (n * totient) + 1, N)) check(17)
In public key cryptography, RSA in particular, a gigantic composite N is formed from two primes p and q. N's totient will then be (p - 1) * (q - 1). For example if N = 17 * 23 (both primes) then T(N) = 16 * 22.
p = 17 q = 23 T(p*q) == (p-1)*(q-1)
From this totient, we'll be able to find pairs (e, d) such that (e * d) modulo T(N) == 1. We may find d, given e and T(N), by means of the Extended Euclidean Algorithm (xgcd below). Raising some numeric message m to the eth power modulo N will encrypt the message, giving c. Raising the encrypted message c to the dth...
p = 37975227936943673922808872755445627854565536638199 q = 40094690950920881030683735292761468389214899724061 RSA_100 = p * q totient = (p - 1) * (q - 1) # https://en.wikibooks.org/wiki/ # Algorithm_Implementation/Mathematics/ # Extended_Euclidean_algorithm def xgcd(b, n): x0, x1, y0, y1 = 1, 0, 0, 1 while n ...
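Putting the pieces together on small numbers: find d from e and T(N) with xgcd, then round-trip a message. Here e = 3 and m = 42 are illustrative choices, valid because 3 is coprime to the totient and 42 is coprime to N:

```python
def xgcd(b, n):
    """Extended Euclidean algorithm: returns (g, x, y) with b*x + n*y == g."""
    x0, x1, y0, y1 = 1, 0, 0, 1
    while n != 0:
        q, b, n = b // n, n, b % n
        x0, x1 = x1, x0 - q * x1
        y0, y1 = y1, y0 - q * y1
    return b, x0, y0

p, q = 17, 23
N = p * q                    # 391
totient = (p - 1) * (q - 1)  # 352

e = 3                        # must be coprime to the totient
g, d, _ = xgcd(e, totient)
d %= totient                 # d = 235, since 3 * 235 == 705 == 2 * 352 + 1

m = 42                       # the numeric "message"
c = pow(m, e, N)             # encrypt
pow(c, d, N)                 # decrypt: back to 42
```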
The previous solution is valid, but it has the drawback of taking an index as a parameter, an index that must always be set to 0 when the method is called. One solution would be to "encapsulate" the call to this method inside another method with a simpler specification, but...
def rechercheRecursiveBis(chaine, carac): ''' :input: chaine (string) :input: carac (string) :output: present (boolean) :pre-conditions: carac must be a single character; the string may be empty. :post-conditions: the boolean is set to true if the string contains the character and to false otherwise...
2015-12-03 - TD16 - Récursivité et tableaux.ipynb
ameliecordier/iutdoua-info_algo2015
cc0-1.0
Exercise 2. Write a recursive method to compute the sum of the elements of a list. Also write the contract.
def sommeRec(l): ''' :input l: a list of numbers (integers or floats) :output somme: the sum of the elements of the list :pre-conditions: the list may be empty :post-condition: somme contains the sum of the elements of the list, and is therefore of the same type as the elements. >>> sommeRec([1, 2,...
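A minimal recursive sum matching the contract above might read as follows (the identifier is anglicized for clarity; the notebook's own cell is truncated):

```python
def somme_rec(lst):
    """Recursive sum of a list of numbers; the empty list sums to 0."""
    if not lst:          # base case: nothing left to add
        return 0
    return lst[0] + somme_rec(lst[1:])  # head plus sum of the tail

somme_rec([1, 2, 3, 4])  # 10
```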
Once again, the solution above is correct, but it is far from "simple" and easy to read for anyone other than the person who wrote the algorithm. Below, we therefore propose a simpler rewrite.
def sommeRecBis(tab): ''' :input l: a list of numbers (integers or floats) :output somme: the sum of the elements of the list :pre-conditions: the list may be empty :post-condition: somme contains the sum of the elements of the list, and is therefore of the same type as the elements. >>> sommeRecBi...
Exercise 3. Write an algorithm that searches for a number in a sorted array. Propose a recursive solution and a non-recursive solution.
def rechercheTab(tab, a): ''' :input tab: a sorted array of numbers (integers or floats) :input a: the number to search for :output i: the index of the array cell containing the number. :pre-conditions: the array is sorted in increasing order of value. :post-condition: the index ...
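A recursive binary search satisfying the exercise could look like this (a sketch; returning -1 when the number is absent is one possible convention, the notebook's may differ):

```python
def recherche_dichotomique(tab, a, lo=0, hi=None):
    """Recursive binary search in a sorted list; index of a, or -1 if absent."""
    if hi is None:
        hi = len(tab) - 1
    if lo > hi:                 # empty slice: the number is not in the array
        return -1
    mid = (lo + hi) // 2
    if tab[mid] == a:
        return mid
    if tab[mid] < a:            # search the upper half
        return recherche_dichotomique(tab, a, mid + 1, hi)
    return recherche_dichotomique(tab, a, lo, mid - 1)  # lower half

recherche_dichotomique([1, 3, 5, 7, 9], 7)  # 3
```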
From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship: - Survived: Outcome of survival (0 = No; 1 = Yes) - Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class) - Name: Name of passenger - Sex: Sex of the passenger - Age: Age of the pas...
# Store the 'Survived' feature in a new variable and remove it from the dataset outcomes = full_data['Survived'] data = full_data.drop('Survived', axis = 1) # Show the new dataset with 'Survived' removed display(data.head())
Titanic_Survival_Exploration/Titanic_Survival_Exploration-V1.ipynb
pushpajnc/models
mit
The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcomes[i]. To measure the performance of our predi...
def accuracy_score(truth, pred): """ Returns accuracy score for input truth and predictions. """ # Ensure that the number of predictions matches number of outcomes if len(truth) == len(pred): # Calculate and return the accuracy as a percent return "Predictions have an accuracy...
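The core of accuracy_score above is a length check plus a mean over element-wise equality; in isolation (a sketch, not the notebook's exact return string):

```python
import pandas as pd

def accuracy(truth, pred):
    """Percentage of predictions that match the true outcomes."""
    truth, pred = pd.Series(truth), pd.Series(pred)
    if len(truth) != len(pred):
        raise ValueError("Number of predictions does not match number of outcomes.")
    return (truth == pred).mean() * 100

accuracy([0, 1, 1, 0, 1], [0, 1, 0, 0, 1])  # 80.0
```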
Making Predictions If we were asked to make a prediction about any passenger aboard the RMS Titanic whom we knew nothing about, then the best prediction we could make would be that they did not survive. This is because we can assume that a majority of the passengers (more than 50%) did not survive the ship sinking. The...
def predictions_0(data): """ Model with no features. Always predicts a passenger did not survive. """ predictions = [] for _, passenger in data.iterrows(): # Predict the survival of 'passenger' predictions.append(0) # Return our predictions return pd.Series(predictions...
Using the RMS Titanic data, a prediction that none of the passengers survived would be 61.62% accurate. Let's take a look at whether the feature Sex has any indication of survival rates among passengers using the survival_stats function. This function is defined in titanic_visualizations.py. The first two paramete...
survival_stats(data, outcomes, 'Sex')
Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive...
def predictions_1(data): """ Model with one feature: - Predict a passenger survived if they are female. """ predictions = [] for _, passenger in data.iterrows(): # Remove the 'pass' statement below # and write your prediction conditions here if(passenger['...
Therefore, the prediction that all female passengers survived and the remaining passengers did not survive, would be 78.68% accurate. Using just the Sex feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if...
survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger t...
def predictions_2(data): """ Model with two features: - Predict a passenger survived if they are female. - Predict a passenger survived if they are male and younger than 10. """ predictions = [] for _, passenger in data.iterrows(): # Remove the 'pass' statement...
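Based on the rule stated above (female, or male and younger than 10, survives), the truncated cell can be completed roughly as:

```python
import pandas as pd

def predictions_2(data):
    """Predict survival for females and for males younger than 10."""
    predictions = []
    for _, passenger in data.iterrows():
        survived = passenger['Sex'] == 'female' or passenger['Age'] < 10
        predictions.append(int(survived))
    return pd.Series(predictions)

# tiny illustrative frame, not the Titanic data itself
demo = pd.DataFrame({'Sex': ['female', 'male', 'male'], 'Age': [30, 8, 40]})
list(predictions_2(demo))  # [1, 1, 0]
```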
Prediction: all female passengers and all male passengers younger than 10 survived
print(accuracy_score(outcomes, predictions))
Thus, with the prediction above, the accuracy increases to 79.35%. Adding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin over simply using the feature Sex alone.
survival_stats(data, outcomes, 'Sex') survival_stats(data, outcomes, 'Pclass') survival_stats(data, outcomes, 'Pclass',["Sex == 'female'"]) survival_stats(data, outcomes, 'SibSp', ["Sex == 'female'", "Pclass == 3"]) survival_stats(data, outcomes, 'Age', ["Sex == 'male'", "Age < 18"]) survival_stats(data, outcomes, 'Pc...
Exercise 02.1 Split the training set into two sets with 70% and 30% of the data, respectively.
# Insert code here random_sample = np.random.rand(n_obs) X_train, X_test = X[random_sample < 0.7], X[random_sample >= 0.7] Y_train, Y_test = Y[random_sample < 0.7], Y[random_sample >= 0.7] print(Y_train.shape, Y_test.shape)
exercises/02-Churn model-solution.ipynb
MonicaGutierrez/PracticalMachineLearningClass
mit
Exercise 02.2 Train a logistic regression using the 70% set.
# Insert code here from sklearn.linear_model import LogisticRegression clf = LogisticRegression() clf.fit(X_train, Y_train)
Exercise 02.3 a) Create a confusion matrix using the predictions on the 30% set. b) Estimate the accuracy of the model on the 30% set.
# Insert code here y_pred = clf.predict(X_test) from sklearn.metrics import confusion_matrix confusion_matrix(Y_test, y_pred) (Y_test == y_pred).mean()
Below I'm running images through the VGG network in batches. Exercise: Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values).
# Set the batch size higher if you can fit it in your GPU memory batch_size = 30 codes_list = [] labels = [] batch = [] codes = None with tf.Session() as sess: # TODO: Build the vgg network here vgg = vgg16.Vgg16() input_ = tf.placeholder(tf.float32, [None, 224, 224, 3]) with tf.name_scope("conte...
transfer-learning/Transfer_Learning.ipynb
JJINDAHOUSE/deep-learning
mit
Data prep As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels! Exercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.
from sklearn.preprocessing import LabelBinarizer # Your one-hot encoded labels array here lb = LabelBinarizer() lb.fit(labels) labels_vecs = lb.transform(labels)
Training Here, we'll train the network. Exercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the get_batches function I wrote before to ...
epochs = 10 iteration = 0 saver = tf.train.Saver() with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for e in range(epochs): for x, y in get_batches(train_x, train_y): feed = { inputs_: x, labels_: y } loss, _ ...
Getting insights Retrieve your project by its name. Build your first model, on the central table, from the default schema.
prj = PredicSis.project('Outbound Mail Campaign')
23.how_to_build_a_first_model_SDK/Build your first model.ipynb
jeanbaptistepriez/predicsis-ai-faq-tuto
gpl-3.0
Build a model from the default schema
mdl = prj.default_schema().fit('My first model') mdl.auc()
In the cell above, I have loaded two datasets. The first dataset "reviews" is a list of 25,000 movie reviews that people wrote about various movies. The second dataset is a list of whether or not each review is a “positive” review or “negative” review.
reviews[0] labels[0]
tutorials/sentiment_network/.ipynb_checkpoints/Sentiment Classification - How to Best Frame a Problem for a Neural Network-checkpoint.ipynb
xpharry/Udacity-DLFoudation
mit
I want you to pretend that you’re a neural network for a moment. Consider a few examples from the two datasets below. Do you see any correlation between these two datasets?
print("labels.txt \t : \t reviews.txt\n") pretty_print_review_and_label(2137) pretty_print_review_and_label(12816) pretty_print_review_and_label(6267) pretty_print_review_and_label(21934) pretty_print_review_and_label(5297) pretty_print_review_and_label(4998)
Well, let’s consider several different granularities. At the paragraph level, no two paragraphs are the same, so there can be no “correlation” per se. You have to see two things occur together more than once for there to be any “correlation”. What about at the character level? I’m guessing the ...
from collections import Counter positive_counts = Counter() negative_counts = Counter() total_counts = Counter() for i in range(len(reviews)): if(labels[i] == 'POSITIVE'): for word in reviews[i].split(" "): positive_counts[word] += 1 total_counts[word] += 1 else: for wo...
Wow, there’s really something to this theory! As we can see, there are clearly terms in movie reviews that have correlation with our output labels. So, if we think there might be strong correlation between the words present in a particular review and the sentiment of that review, what should our network take as input a...
from IPython.display import Image review = "This was a horrible, terrible movie." Image(filename='sentiment_network.png') review = "The movie was excellent" Image(filename='sentiment_network_pos.png')
The Input Let’s say our entire movie review corpus has 10,000 words. Given a single movie review ("This was a horrible, terrible movie"), we’re going to put a “1” in the input of our neural network for every word that exists in the review, and a 0 everywhere else. So, given our 10,000 words, a movie review with 6 words...
vocab = set(total_counts.keys()) vocab_size = len(vocab) print(vocab_size)
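The 1-or-0 input layer described above can be sketched with a toy vocabulary (the notebook builds word2index over the full corpus vocabulary instead):

```python
import numpy as np

vocab = ["this", "was", "a", "horrible", "terrible", "movie"]
word2index = {word: i for i, word in enumerate(vocab)}

def make_input_layer(review):
    """Binary bag of words: 1 for each vocabulary word present, 0 elsewhere."""
    layer_0 = np.zeros(len(vocab))
    for word in review.lower().split():
        if word in word2index:
            layer_0[word2index[word]] = 1
    return layer_0

make_input_layer("this was a horrible terrible movie")  # six 1s
```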
Creating the Target Data And now we want to do the same thing for our target predictions
def get_target_for_label(label): if(label == 'POSITIVE'): return 1 else: return 0 get_target_for_label(labels[0]) get_target_for_label(labels[1])
Making our Network Train and Run Faster Even though this network is very trainable on a laptop, we can really get a lot more performance out of it, and doing so is all about understanding how the neural network is interacting with our data (again, "modeling the problem"). Let's take a moment to consider how layer_1 is ...
layer_0 = np.zeros(10) layer_0
Now, let's set a few of the inputs to 1s, and create a sample weight matrix
layer_0[4] = 1 layer_0[9] = 1 layer_0 weights_0_1 = np.random.randn(10,5)
So, given these pieces, layer_1 is created in the following way....
layer_1 = layer_0.dot(weights_0_1) layer_1
layer_1 is generated by performing vector->matrix multiplication; however, most of our input neurons are turned off! Thus, there's actually a lot of computation being wasted. Consider the network below.
Image(filename='sentiment_network_sparse.png')
First Inefficiency: "0" neurons waste computation If you recall from previous lessons, each edge from one neuron to another represents a single value in our weights_0_1 matrix. When we forward propagate, we take our input neuron's value, multiply it by each weight attached to that neuron, and then sum all the resulting...
Image(filename='sentiment_network_sparse_2.png')
Second Inefficiency: "1" neurons don't need to multiply! When we're forward propagating, we multiply our input neuron's value by the weights attached to it. However, in this case, when the neuron is turned on, it's always turned on to exactly 1. So there's no need for multiplication; what if we skipped this step? The ...
#inefficient thing we did before layer_1 = layer_0.dot(weights_0_1) layer_1 # new, less expensive lookup table version layer_1 = weights_0_1[4] + weights_0_1[9] layer_1
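The two approaches in the cell above can be checked against each other directly:

```python
import numpy as np

np.random.seed(1)
layer_0 = np.zeros(10)
layer_0[4] = 1
layer_0[9] = 1
weights_0_1 = np.random.randn(10, 5)

dense = layer_0.dot(weights_0_1)          # full vector-matrix multiply
sparse = weights_0_1[4] + weights_0_1[9]  # just sum the two active rows

np.allclose(dense, sparse)  # True
```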
See how they generate exactly the same value? Let's update our new neural network to do this.
import time import sys # Let's tweak our network from before to model these phenomena class SentimentNetwork: def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1): np.random.seed(1) self.pre_process_data(reviews) self.init_network(len(self.review_v...
And voilà! Our network learns 10x faster than before while making exactly the same predictions!
# evaluate our model before training (just to show how horrible it is) mlp.test(reviews[-1000:],labels[-1000:])
Our network even tests over twice as fast as well! Making Learning Faster & Easier by Reducing Noise So at first this might seem like the same thing we did in the previous section. However, while the previous section was about looking for computational waste and trimming it out, this section is about looking for noise i...
# words most frequently seen in a review with a "POSITIVE" label pos_neg_ratios.most_common() # words most frequently seen in a review with a "NEGATIVE" label list(reversed(pos_neg_ratios.most_common()))[0:30] from bokeh.models import ColumnDataSource, LabelSet from bokeh.plotting import figure, show, output_file fro...
In this graph "0" means that a word has no affinity for either positive or negative. As you can see, the vast majority of our words don't have that much direct affinity! So, our network is having to learn about lots of terms that are likely irrelevant to the final prediction. If we remove some of the most irrelevant ...
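One way to sketch this filtering is to drop any word whose log positive-to-negative ratio sits near zero (the counts and the 0.5 cutoff below are made up for illustration):

```python
import numpy as np
from collections import Counter

# Hypothetical counts of how often each word appears in positive vs negative reviews.
positive_counts = Counter({'amazing': 100, 'terrible': 2, 'movie': 500, 'the': 1000})
negative_counts = Counter({'amazing': 3, 'terrible': 120, 'movie': 480, 'the': 990})

pos_neg_ratios = {}
for word in positive_counts:
    ratio = positive_counts[word] / (negative_counts[word] + 1.0)
    pos_neg_ratios[word] = np.log(ratio)  # near 0 means no affinity either way

polarity_cutoff = 0.5
kept = [w for w, r in pos_neg_ratios.items() if abs(r) >= polarity_cutoff]
print(sorted(kept))  # ['amazing', 'terrible']
```

Neutral filler words like 'the' and 'movie' fall below the cutoff and are excluded from the vocabulary.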
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
    frequency_frequency[cnt] += 1

# normed= is a deprecated alias of density= and must not be passed alongside it
hist, edges = np.histogram(list(map(lambda x: x[1], frequency_frequency.most_common())),
                           density=True, bins=100)

p = figure(tools="pan,wheel_zoom,reset,save", toolbar_location="above",...
tutorials/sentiment_network/.ipynb_checkpoints/Sentiment Classification - How to Best Frame a Problem for a Neural Network-checkpoint.ipynb
xpharry/Udacity-DLFoudation
mit
As you can see, the vast majority of words in our corpus only happen once or twice. Unfortunately, this isn't enough for any of those words to be correlated with anything. Correlation requires seeing two things occur at the same time on multiple occasions so that you can identify a pattern. We should eliminate these ve...
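A minimal sketch of that cutoff, using a made-up min_count of 10 and invented word counts:

```python
from collections import Counter

# Hypothetical corpus counts: most words appear only once or twice.
total_counts = Counter({'the': 5000, 'movie': 1200, 'great': 300,
                        'zorblax': 1, 'qwerty': 2})

min_count = 10
vocab = [word for word, cnt in total_counts.items() if cnt >= min_count]
print(sorted(vocab))  # ['great', 'movie', 'the']
```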
import time import sys import numpy as np # Let's tweak our network from before to model these phenomena class SentimentNetwork: def __init__(self, reviews,labels,min_count = 10,polarity_cutoff = 0.1,hidden_nodes = 10, learning_rate = 0.1): np.random.seed(1) self.pre_process_data(revie...
tutorials/sentiment_network/.ipynb_checkpoints/Sentiment Classification - How to Best Frame a Problem for a Neural Network-checkpoint.ipynb
xpharry/Udacity-DLFoudation
mit
So, using these techniques, we are able to achieve a slightly higher testing score while training 2x faster than before. Furthermore, if we really crank up these thresholds, we can get some pretty extreme speed with minimal loss in quality (if, for example, your business use case requires running very fast).
mlp = SentimentNetwork(reviews[:-1000], labels[:-1000], min_count=20, polarity_cutoff=0.8, learning_rate=0.01)
mlp.train(reviews[:-1000], labels[:-1000])
mlp.test(reviews[-1000:], labels[-1000:])
tutorials/sentiment_network/.ipynb_checkpoints/Sentiment Classification - How to Best Frame a Problem for a Neural Network-checkpoint.ipynb
xpharry/Udacity-DLFoudation
mit
What's Going On in the Weights?
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01) mlp.train(reviews[:-1000],labels[:-1000]) import matplotlib.colors as colors words_to_visualize = list() for word, ratio in pos_neg_ratios.most_common(500): if(word in mlp.word2index.keys()): words_to_...
tutorials/sentiment_network/.ipynb_checkpoints/Sentiment Classification - How to Best Frame a Problem for a Neural Network-checkpoint.ipynb
xpharry/Udacity-DLFoudation
mit
Question 1: That plot looks pretty nice but isn't publication-ready. Luckily, matplotlib has a wide array of plot customizations. Skim through the first part of the tutorial at https://www.labri.fr/perso/nrougier/teaching/matplotlib to create the plot below. There is a lot of extra information there which we suggest yo...
plt.plot(xs, ys, label='cosine')
plt.plot(xs, np.sin(xs), label='sine')
plt.xlim(0, 2 * np.pi)
plt.ylim(-1.1, 1.1)
plt.title('Graphs of sin(x) and cos(x)')
plt.legend(loc='lower left', frameon=False)
plt.savefig('q1.png')
sp17/labs/lab03/lab03_solution.ipynb
DS-100/sp17-materials
gpl-3.0
seaborn

Now, we'll learn how to use the seaborn Python library. seaborn is built on top of matplotlib and provides many helpful functions for statistical plotting that matplotlib and pandas don't have. Generally speaking, we'll use seaborn for more complex statistical plots, pandas for simple plots (e.g. line / scatter ...
sns.barplot(x='weekday', y='registered', data=bike_trips)
sp17/labs/lab03/lab03_solution.ipynb
DS-100/sp17-materials
gpl-3.0
Question 3: Now for a fancier plot that seaborn makes really easy to produce. Use the distplot function to plot a histogram of all the total rider counts in the bike_trips dataset.
sns.distplot(bike_trips['cnt'])
sp17/labs/lab03/lab03_solution.ipynb
DS-100/sp17-materials
gpl-3.0
Notice that seaborn will fit a curve to the histogram of the data. Fancy!

Question 4: Discuss this plot with your partner. What shape does the distribution have? What does that imply about the rider counts?

Question 5: Use seaborn to make side-by-side boxplots of the number of casual riders (just checked out a bike for...
ax = sns.boxplot(data=bike_trips[['casual', 'registered']])
ax.set_yscale('log')
plt.savefig('q5.png')
sp17/labs/lab03/lab03_solution.ipynb
DS-100/sp17-materials
gpl-3.0
Question 6: Discuss with your partner what the plot tells you about the distribution of casual vs. the distribution of registered riders.

Question 7: Let's take a closer look at the number of registered vs. casual riders. Use the lmplot function to make a scatterplot. Put the number of casual riders on the x-axis and t...
sns.lmplot('casual', 'registered', bike_trips)
sp17/labs/lab03/lab03_solution.ipynb
DS-100/sp17-materials
gpl-3.0
Question 8: What do you notice about that plot? Discuss with your partner. Notice that seaborn automatically fits a line of best fit to the plot. Does that line seem to be relevant? You should note that lmplot allows you to pass in fit_reg=False to avoid plotting lines of best fit when you feel they are unnecessary ...
sns.lmplot('casual', 'registered', bike_trips, hue='workingday', scatter_kws={'s': 6})
plt.savefig('q9.png')

# Note that the legend for workingday isn't super helpful. 0 in this case
# means "not a working day" and 1 means "working day". Try fixing the legend
# to be more descriptive.
sp17/labs/lab03/lab03_solution.ipynb
DS-100/sp17-materials
gpl-3.0
Question 10: Discuss the plot with your partner. Was splitting the data by working day informative? One of the best-fit lines looks valid but the other doesn't. Why do you suppose that is?

Question 11 (bonus): Eventually, you'll want to be able to pose a question yourself and answer it using a visualization. Here's a q...
riders_by_hour = (bike_trips.groupby('hr')
                            .agg({'casual': 'mean', 'registered': 'mean'}))
riders_by_hour.plot.line()
sp17/labs/lab03/lab03_solution.ipynb
DS-100/sp17-materials
gpl-3.0
Want to learn more? We recommend checking out the seaborn tutorials on your own time. http://seaborn.pydata.org/tutorial.html The matplotlib tutorial we linked in Question 1 is also a great refresher on common matplotlib functions: https://www.labri.fr/perso/nrougier/teaching/matplotlib/ Here's a great blog post about ...
i_definitely_finished = True
_ = ok.grade('qcompleted')
_ = ok.backup()
_ = ok.submit()
sp17/labs/lab03/lab03_solution.ipynb
DS-100/sp17-materials
gpl-3.0
9. Adaptive learning rate

- exponential_decay
- cosine_decay
- linear_cosine_decay
- cosine_decay_restarts
- polynomial_decay
- piecewise_constant_decay
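As a plain-Python sketch of what two of these schedules compute (following the formulas in the TensorFlow docs, with staircase behaviour omitted and all values illustrative):

```python
# Exponential decay: lr shrinks by decay_rate every decay_steps steps.
def exponential_decay(initial_lr, step, decay_steps, decay_rate):
    return initial_lr * decay_rate ** (step / decay_steps)

# Polynomial decay: lr falls from initial_lr to end_lr over decay_steps steps.
def polynomial_decay(initial_lr, step, decay_steps, end_lr=0.0001, power=1.0):
    step = min(step, decay_steps)
    return (initial_lr - end_lr) * (1 - step / decay_steps) ** power + end_lr

print(exponential_decay(0.1, step=1000, decay_steps=1000, decay_rate=0.5))   # 0.05
print(round(polynomial_decay(0.1, step=500, decay_steps=1000), 5))           # 0.05005
```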
def create_estimator(params, run_config): wide_columns, deep_columns = create_feature_columns() def _update_optimizer(initial_learning_rate, decay_steps): # learning_rate = tf.train.exponential_decay( # initial_learning_rate, # global_step=tf.train.get_global_step(), # decay_steps=d...
00_Miscellaneous/tfx/01_tf_estimator_deepdive.ipynb
GoogleCloudPlatform/tf-estimator-tutorials
apache-2.0
Coordinate transformations Coordinates in astronomy often come in equatorial coordinates, specified by right ascension (RA) and declination (DEC).
import astropy.coordinates as coord

c1 = coord.SkyCoord(ra=150*u.degree, dec=-17*u.degree)
c2 = coord.SkyCoord(ra='21:15:32.141', dec=-17*u.degree, unit=(u.hourangle, u.degree))
day3/Astropy-Demo.ipynb
timothydmorton/usrp-sciprog
mit
If we want to transform this coordinate to another system on the celestial sphere, one that is tied to our Galaxy, we can do this:
c1.transform_to(coord.Galactic)
day3/Astropy-Demo.ipynb
timothydmorton/usrp-sciprog
mit
Note: It may take 4-5 minutes to see the results of different batches.

MobileNetV2

These flower photos are much larger than the handwriting recognition images in MNIST. They have about 10 times as many pixels per axis and there are three color channels, making the information here over 200 times larger! How do our current te...
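As a rough back-of-envelope check on that size comparison (assuming MNIST's 28x28x1 greyscale digits versus 224x224x3 photos, the usual MobileNetV2 input size):

```python
# Pixels-times-channels per image, under the assumed sizes above.
mnist_values = 28 * 28 * 1
flower_values = 224 * 224 * 3
print(flower_values / mnist_values)  # 192.0
```

That lands in the ballpark of the figure quoted above; the exact ratio depends on the resolution the flower photos are resized to.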
eval_path = "gs://cloud-ml-data/img/flower_photos/eval_set.csv" nclasses = len(CLASS_NAMES) hidden_layer_1_neurons = 400 hidden_layer_2_neurons = 100 dropout_rate = 0.25 num_filters_1 = 64 kernel_size_1 = 3 pooling_size_1 = 2 num_filters_2 = 32 kernel_size_2 = 3 pooling_size_2 = 2 layers = [ Conv2D(num_filters_1, ...
courses/machine_learning/deepdive2/image_classification/solutions/3_tf_hub_transfer_learning.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
If your model is like mine, it learns a little bit, slightly better than random, but ugh, it's too slow! With a batch size of 32, 5 epochs of 5 steps is only getting through about a quarter of our images. Not to mention, this is a much larger problem than MNIST, so wouldn't we need a larger model? But how big do we nee...
module_selection = "mobilenet_v2_100_224"
module_handle = "https://tfhub.dev/google/imagenet/{}/feature_vector/4" \
    .format(module_selection)

transfer_model = tf.keras.Sequential([
    hub.KerasLayer(module_handle, trainable=False),
    tf.keras.layers.Dropout(rate=0.2),
    tf.keras.layers.Dense(
        nclasses...
courses/machine_learning/deepdive2/image_classification/solutions/3_tf_hub_transfer_learning.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Data Preparation For the first iteration, we'll only use data after 2009. This is when most modern statistics began to be kept (though not all of them did).
model_data = matches[matches['season'] >= 2010]
analysis/machine_learning/model.ipynb
criffy/aflengine
gpl-3.0
To keep the model simple, exclude draws. Mark them as victories for the away team instead.
for idx, row in model_data.iterrows():
    if row['winner'] == 'draw':
        model_data.at[idx, 'winner'] = 'away'
analysis/machine_learning/model.ipynb
criffy/aflengine
gpl-3.0
We want to split the data into test and train in a stratified manner, i.e. we don't want to favour a certain season, or a part of the season. So we'll take a portion (25%) of games from each round.
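A minimal sketch of that per-round split with pandas (the mini fixture below is invented; `groupby(...).sample` needs pandas 1.1+):

```python
import numpy as np
import pandas as pd

# Invented mini-fixture: 2 seasons x 2 rounds, 8 games per round.
rng = np.random.default_rng(0)
games = pd.DataFrame({
    'season': np.repeat([2018, 2019], 16),
    'round': np.tile(np.repeat([1, 2], 8), 2),
    'winner': rng.choice(['home', 'away'], size=32),
})

# Take 25% of the games from every (season, round) group as the test set;
# everything else becomes the training set.
test = games.groupby(['season', 'round'], group_keys=False).sample(frac=0.25, random_state=0)
train = games.drop(test.index)

print(len(test), len(train))  # 8 24
```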
# How many games do we get per round? round_counts = {} curr_round = 1 matches_in_round = 0 for idx,row in model_data.iterrows(): if curr_round != row['round']: if matches_in_round not in round_counts: round_counts[matches_in_round] = 1 else: round_counts[matche...
analysis/machine_learning/model.ipynb
criffy/aflengine
gpl-3.0
Create test and training data
# test set from copy import deepcopy test_data = pd.DataFrame() for season, max_round in rounds_in_season.items(): for rnd in range(1, max_round): round_matches = model_data[(model_data['season']==season) & (model_data['round']==rnd)] num_test = test_sample_size[len(round_matches)] round_te...
analysis/machine_learning/model.ipynb
criffy/aflengine
gpl-3.0
Capture all of the 'diff' columns in the model, too
diff_cols = [col for col in model_data.columns if col[0:4] == 'diff']
analysis/machine_learning/model.ipynb
criffy/aflengine
gpl-3.0
Define features
features = [col for col in ['h_career_' + col for col in player_cols_to_agg] + \ ['h_season_' + col for col in player_cols_to_agg] + \ ['a_career_' + col for col in player_cols_to_agg] + \ ['a_season_' + col for col in player_cols_to_agg] + \ ['h...
analysis/machine_learning/model.ipynb
criffy/aflengine
gpl-3.0
Set up test and train datasets
X_train = training_data[features]
y_train = training_data[target]
X_test = test_data[features]
y_test = test_data[target]
analysis/machine_learning/model.ipynb
criffy/aflengine
gpl-3.0
Fill the NaN values
X_train.fillna(0, inplace=True)
y_train.fillna(0, inplace=True)
X_test.fillna(0, inplace=True)
y_test.fillna(0, inplace=True)
analysis/machine_learning/model.ipynb
criffy/aflengine
gpl-3.0
Modelling Model 1: Logistic regression
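The grid-search pattern used here can be sketched end to end on synthetic data (the toy dataset and the grid below are illustrative; the real run fits on the match features):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Stand-in data; the real X_train/y_train come from the match dataset.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

param_grid = {'tol': [.0001, .001, .01], 'C': [.1, 1, 10]}
grid = GridSearchCV(LogisticRegression(max_iter=1000), param_grid, cv=3)
grid.fit(X, y)

print(sorted(grid.best_params_))  # ['C', 'tol']
```

GridSearchCV refits the best parameter combination on all of the training data, so `grid` can be used directly for prediction afterwards.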
from sklearn.linear_model import LogisticRegression from sklearn.model_selection import cross_val_score from sklearn.model_selection import GridSearchCV import numpy as np log_reg = LogisticRegression() param_grid = { 'tol': [.0001, .001, .01], 'C': [.1, 1, 10], 'max...
analysis/machine_learning/model.ipynb
criffy/aflengine
gpl-3.0
Model 2: using fewer features
diff_cols = [col for col in model_data.columns if col[0:4] == 'diff'] features = diff_cols # REMOVE PERCENTAGE FOR NOW diff_cols.remove('diff_percentage') target = 'winner' X_train_2 = training_data[diff_cols] y_train_2 = training_data[target] X_test_2 = test_data[diff_cols] y_test_2 = test_data[target] #X_train_2...
analysis/machine_learning/model.ipynb
criffy/aflengine
gpl-3.0
Training the model on all of the data

Generating predictions

Now that we have a model, we need to ingest data for that model to make a prediction on. Start by reading in the fixture.
fixture_path = '/Users/t_raver9/Desktop/projects/aflengine/tipengine/fixture2020.csv'
fixture = pd.read_csv(fixture_path)
fixture[fixture['round'] == 2]
analysis/machine_learning/model.ipynb
criffy/aflengine
gpl-3.0
We'll then prepare the data for the round we're interested in. We'll do this by: - getting the team-level data, such as ladder position and form - getting the player-level data and aggregating it up to the team level To get the player-level data, we also need to choose who is playing for each team.
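The aggregation step can be sketched with a groupby mean (the rows and column names below are invented for illustration):

```python
import pandas as pd

# Hypothetical player-level rows, collapsed to one row per team.
player_data = pd.DataFrame({
    'team': ['Richmond', 'Richmond', 'Geelong', 'Geelong'],
    'career_goals': [250, 150, 300, 140],
    'season_disposals': [20.5, 18.0, 25.0, 22.0],
})

team_data = player_data.groupby('team').mean(numeric_only=True)
print(team_data.loc['Richmond', 'career_goals'])  # 200.0
```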
next_round_matches = get_upcoming_matches(matches, fixture, round_num=2)
next_round_matches
analysis/machine_learning/model.ipynb
criffy/aflengine
gpl-3.0
Get the IDs for the players we'll be using
import cv2 import pytesseract custom_config = r'--oem 3 --psm 6' import pathlib names_dir = '/Users/t_raver9/Desktop/projects/aflengine/analysis/machine_learning/src/OCR/images' # Initialise the dictionary player_names_dict = {} for team in matches['hteam'].unique(): player_names_dict[team] = [] # Fill out ...
analysis/machine_learning/model.ipynb
criffy/aflengine
gpl-3.0
Try including Bachar Houli

Now we can collect the data for each player and aggregate it to the team level, as we would with the training data.
from copy import deepcopy players_in_rnd = [] for _, v in player_names_dict.items(): players_in_rnd.extend(v) player_data = get_player_data(players_in_rnd) players_in_rnd aggregate = player_data[player_cols].groupby('team').apply(lambda x: x.mean(skipna=False)) # Factor in any missing players num_players_per_t...
analysis/machine_learning/model.ipynb
criffy/aflengine
gpl-3.0
We can now use this to make predictions.
X = combined[features] X['diff_wins_form'] grid_log_reg.decision_function(X) grid_log_reg.predict_proba(X) grid_log_reg.predict(X) Z = combined[diff_cols] grid_log_reg_2.predict_proba(Z) grid_log_reg_2.predict(Z) combined[['ateam','hteam']] combined[['h_percentage_form','a_percentage_form']] combined[['h_care...
analysis/machine_learning/model.ipynb
criffy/aflengine
gpl-3.0
Glue these together and sort
coef = []
for i in model_coef:
    for j in i:
        coef.append(abs(j))

zipped = list(zip(features, coef))
zipped.sort(key=lambda x: x[1], reverse=True)
zipped
analysis/machine_learning/model.ipynb
criffy/aflengine
gpl-3.0
Training model on all data
features = [col for col in ['h_career_' + col for col in player_cols_to_agg] + \ ['h_season_' + col for col in player_cols_to_agg] + \ ['a_career_' + col for col in player_cols_to_agg] + \ ['a_season_' + col for col in player_cols_to_agg] + \ ['h...
analysis/machine_learning/model.ipynb
criffy/aflengine
gpl-3.0
Visualisation
import numpy as np import matplotlib.pyplot as plt %matplotlib inline category_names = ['Home','Away'] results = { 'Collingwood v Richmond': [50.7,49.3], 'Geelong v Hawthorn': [80.4,19.5], 'Brisbane Lions v Fremantle': [57.3,42.7], 'Carlton v Melbourne': [62.4,37.6], 'Gold Coast v West Coast': [9....
analysis/machine_learning/model.ipynb
criffy/aflengine
gpl-3.0
Metadata and functions
from typing import Dict import numpy as np def get_season_rounds(matches: pd.DataFrame) -> Dict: """ Return a dictionary with seasons as keys and number of games in season as values """ seasons = matches['season'].unique() rounds_in_season = dict.fromkeys(seasons,0) for season in seaso...
analysis/machine_learning/model.ipynb
criffy/aflengine
gpl-3.0