The book says fin is an acceptable name, but I opt for a more descriptive one. There are a number of methods for reading and writing files, including: read( size ), which reads size bytes of data. If size is omitted or negative, the entire file is read and returned. Returns an empty string if the end of the file (EOF) is rea...
for line in input_file:
    word = line.strip()
    print( word )
CSNE2444-Intro-to-CS-I/jupyter-notebooks/ch09-word-play.ipynb
snucsne/CSNE-Course-Source-Code
mit
The strip method removes whitespace at the beginning and end of a string. Search: most of the exercises in this chapter have something in common. They all involve searching a string for specific characters.
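A tiny self-contained illustration of what strip does to a line read from a file (the sample string is made up):

```python
line = "  banana\n"  # a line as read from a file, with whitespace and a trailing newline
word = line.strip()  # removes leading/trailing whitespace, including the newline
print(word)  # banana
```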
def has_no_e( word ):
    result = True
    for letter in word:
        if( 'e' == letter ):
            result = False
    return result

input_file = open( 'data/short-words.txt' )
for line in input_file:
    word = line.strip()
    if( has_no_e( word ) ):
        print( 'No `e`: ', word )
The for loop traverses each letter in the word looking for an e. In fact, if you paid close attention, you will have noticed that the uses_all and uses_only functions in the book are the same. In computer science, we frequently encounter problems that are essentially the same as ones we have already solved, but are just worde...
fruit = 'banana'

# For loop
for i in range( len( fruit ) ):
    print( 'For: [',i,']=[',fruit[i],']' )

# Recursive function (note: it should index word, not the global fruit)
def recurse_through_string( word, i ):
    print( 'Recursive: [',i,']=[',word[i],']' )
    if( (i + 1) < len( word ) ):
        recurse_through_string( word, i + 1 )

recurse_through_string( fr...
test_simulate_LLN
u = coo_payoffs
beta = 1.0
P = np.zeros((2,2))
test_logitdyn.ipynb
oyamad/game_theory_models
bsd-3-clause
I made a probabilistic choice matrix $P$ in a redundant way just in case.
P[0,0] = np.exp(u[0,0] * beta) / (np.exp(u[0,0] * beta) + np.exp(u[1,0] * beta))
P[0,0]
P[1,0] = np.exp(u[1,0] * beta) / (np.exp(u[0,0] * beta) + np.exp(u[1,0] * beta))
P[1,0]
P[0,1] = np.exp(u[0,1] * beta) / (np.exp(u[0,1] * beta) + np.exp(u[1,1] * beta))
P[0,1]
P[1,1] = np.exp(u[1,1] * beta) / (np.exp(u[0,1] * bet...
$P[i,j]$ represents the probability that a player chooses an action $i$ provided that his opponent takes an action $j$.
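Since each column of $P$ is a softmax over the player's two actions, each column should sum to 1. A minimal self-contained sanity check, with a made-up payoff matrix standing in for coo_payoffs (the values here are hypothetical):

```python
import numpy as np

# Example 2x2 payoff matrix standing in for coo_payoffs (hypothetical values).
u = np.array([[4.0, 0.0],
              [3.0, 2.0]])
beta = 1.0

# Logit choice: column j is a softmax over the two actions against opponent action j.
P = np.exp(beta * u) / np.exp(beta * u).sum(axis=0)

# Each column of P must be a probability distribution.
assert np.allclose(P.sum(axis=0), 1.0)
```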
Q = np.zeros((4,4))
Q[0, 0] = P[0, 0]
Q[0, 1] = 0.5 * P[1, 0]
Q[0, 2] = 0.5 * P[1, 0]
Q[0, 3] = 0
Q[1, 0] = 0.5 * P[0, 0]
Q[1, 1] = 0.5 * P[0, 1] + 0.5 * P[1, 0]
Q[1, 2] = 0
Q[1, 3] = 0.5 * P[1, 1]
Q[2, 0] = 0.5 * P[0, 0]
Q[2, 1] = 0
Q[2, 2] = 0.5 * P[1, 0] + 0.5 * P[0, 1]
Q[2, 3] = 0.5 * P[1, 1]
Q[3, 0] = 0
Q[3, 1] = ...
$Q$ is the transition probability matrix. The first row and column represent the state $(0,0)$, which means that player 1 takes action 0 and player 2 also takes action 0. The second ones represent $(0,1)$, the third ones represent $(1,0)$, and the last ones represent $(1,1)$.
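Because each column of $P$ sums to 1, every row of $Q$ should sum to 1 (each row is a probability distribution over next states). A self-contained check with example payoffs, and with the truncated last row of $Q$ filled in symmetrically (my assumption, not the notebook's actual code):

```python
import numpy as np

# Example payoffs standing in for coo_payoffs (hypothetical values).
u = np.array([[4.0, 0.0], [3.0, 2.0]])
beta = 1.0
P = np.exp(beta * u) / np.exp(beta * u).sum(axis=0)

# Transition matrix over states (0,0), (0,1), (1,0), (1,1);
# one randomly chosen player revises per period.
Q = np.zeros((4, 4))
Q[0] = [P[0,0], 0.5*P[1,0], 0.5*P[1,0], 0]
Q[1] = [0.5*P[0,0], 0.5*P[0,1] + 0.5*P[1,0], 0, 0.5*P[1,1]]
Q[2] = [0.5*P[0,0], 0, 0.5*P[1,0] + 0.5*P[0,1], 0.5*P[1,1]]
Q[3] = [0, 0.5*P[0,1], 0.5*P[0,1], P[1,1]]  # assumed completion of the truncated row

# Each row of a transition probability matrix must sum to 1.
assert np.allclose(Q.sum(axis=1), 1.0)
```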
from quantecon.mc_tools import MarkovChain

mc = MarkovChain(Q)
mc.stationary_distributions[0]
I take 0.61029569 as the criterion for the test.
ld = LogitDynamics(g_coo)

# New one (using replicate)
n = 1000
seq = ld.replicate(T=100, num_reps=n)
count = 0
for i in range(n):
    if all(seq[i, :] == [1, 1]):
        count += 1
ratio = count / n
ratio

# Old one
counts = np.zeros(1000)
for i in range(1000):
    seq = ld.simulate(ts_length=100)
    count = 0
    ...
flexx.app
from flexx import app, react

app.init_notebook()

class Greeter(app.Model):

    @react.input
    def name(s):
        return str(s)

    class JS:

        @react.connect('name')
        def _greet(name):
            alert('Hello %s!' % name)

greeter = Greeter()
greeter.name('John')
EuroScipy 2015 demo.ipynb
zoofIO/flexx-notebooks
bsd-3-clause
The Spacetime of Rx. In the examples above, all the events happen at the same moment in time; the events are only separated by ordering. This confuses many newcomers to Rx, since the result of the merge operation above may have several valid results, such as: a1b2c3d4e5, 1a2b3c4d5e, ab12cd34e5, abcde12345. The only guarantee ...
from rx import Observable
from rx.testing import marbles

xs = Observable.from_marbles("a-b-c-|")
xs.to_blocking().to_marbles()
python/libs/rxpy/GettingStarted.ipynb
satishgoda/learning
mit
Laplace approximation from scratch in JAX. As mentioned in book 2 section 7.4.3, using the Laplace approximation, any distribution can be locally approximated as a normal distribution with mean $\hat{\theta}$ (the MAP estimate) and covariance $H^{-1}$, where \begin{align} H = -\nabla^2_{\theta} \log p(\theta|\mathcal{D}) \big|_{\theta = \hat{\theta}} \end{align} ...
def neg_log_prior_likelihood_fn(params, dataset):
    theta = params["theta"]
    likelihood_log_prob = likelihood_dist(theta).log_prob(dataset).sum()  # log probability of likelihood
    prior_log_prob = prior_dist().log_prob(theta)  # log probability of prior
    return -(likelihood_log_prob + prior_log_prob)  # nega...
notebooks/book1/04/laplace_approx_beta_binom_jax.ipynb
probml/pyprobml
mit
loc and scale of approximated normal posterior
loc = theta_map  # loc of approximate posterior
print(f"loc = {loc}")

# scale of approximate posterior
scale = 1 / jnp.sqrt(jax.hessian(neg_log_prior_likelihood_fn)(optimized_params, dataset)["theta"]["theta"])
print(f"scale = {scale}")
True posterior and Laplace approximated posterior
plt.figure()
y = jnp.exp(dist.Normal(loc, scale).log_prob(theta_range))
plt.title("Quadratic approximation")
plt.plot(theta_range, y, label="laplace approximation", color="tab:red")
plt.plot(theta_range, exact_posterior.prob(theta_range), label="true posterior", color="tab:green", linestyle="--")
plt.xlabel("$\\theta$"...
PyMC3
try:
    import pymc3 as pm
except ModuleNotFoundError:
    %pip install -qq pymc3
    import pymc3 as pm

try:
    import scipy.stats as stats
except ModuleNotFoundError:
    %pip install -qq scipy
    import scipy.stats as stats
import scipy.special as sp

try:
    import arviz as az
except ModuleNotFoundError:
    %...
Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation

Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dict...
import numpy as np
import problem_unittests as tests
from collections import Counter

def create_lookup_tables(text):
    """
    Create lookup tables for vocabulary
    :param text: The text of tv scripts split into words
    :return: A tuple of dicts (vocab_to_int, int_to_vocab)
    """
    # TODO: Implement Function
    ...
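One plausible way to fill in the TODO (a sketch, not the course's reference solution): count word frequencies with Counter, then assign ids in frequency order.

```python
from collections import Counter

def create_lookup_tables(text):
    """Create lookup tables for vocabulary.

    :param text: The text of tv scripts split into words
    :return: A tuple of dicts (vocab_to_int, int_to_vocab)
    """
    word_counts = Counter(text)
    # Most frequent words get the smallest ids (any consistent ordering works).
    sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
    vocab_to_int = {word: i for i, word in enumerate(sorted_vocab)}
    int_to_vocab = {i: word for word, i in vocab_to_int.items()}
    return vocab_to_int, int_to_vocab
```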
tv-script-generation/dlnd_tv_script_generation.ipynb
vinitsamel/udacitydeeplearning
mit
Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!". Implement the function token_lookup to return a dict that will be used to token...
def token_lookup():
    """
    Generate a dict to turn punctuation into a token.
    :return: Tokenize dictionary where the key is the punctuation and the value is the token
    """
    # TODO: Implement Function
    token_dict = {'.' : "||Period||", ',' : "||Comma||", '"' : "||Quotation_Mark||",
                  ';...
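The cell above is truncated mid-dict. One plausible completion (the token names beyond those visible above are my assumption, not the original author's):

```python
def token_lookup():
    """Map punctuation to placeholder tokens so they survive whitespace splitting."""
    return {
        '.': '||Period||',
        ',': '||Comma||',
        '"': '||Quotation_Mark||',
        ';': '||Semicolon||',
        '!': '||Exclamation_Mark||',
        '?': '||Question_Mark||',
        '(': '||Left_Parentheses||',
        ')': '||Right_Parentheses||',
        '--': '||Dash||',
        '\n': '||Return||',
    }
```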
Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Inpu...
def get_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate)
    """
    # TODO: Implement Function
    inputs_ = tf.placeholder(tf.int32, shape=[None, None], name='input')
    targets_ = tf.placeholder(tf.int32, shape=[None, None], name=...
Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize the cell state using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the follo...
def get_init_cell(batch_size, rnn_size):
    """
    Create an RNN Cell and initialize it.
    :param batch_size: Size of batches
    :param rnn_size: Size of RNNs
    :return: Tuple (cell, initialize state)
    """
    # TODO: Implement Function
    lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
    cell = tf.contrib.r...
Build RNN
You created an RNN cell in the get_init_cell() function. Time to use the cell to create an RNN.
- Build the RNN using tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final state in the following tuple (Outputs, FinalState)
def build_rnn(cell, inputs):
    """
    Create a RNN using a RNN Cell
    :param cell: RNN Cell
    :param inputs: Input text data
    :return: Tuple (Outputs, Final State)
    """
    # TODO: Implement Function
    outputs, fs = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
    final_state = tf.identity(fs, name=...
Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number...
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
    """
    Build part of the neural network
    :param cell: RNN cell
    :param rnn_size: Size of rnns
    :param input_data: Input data
    :param vocab_size: Vocabulary size
    :param embed_dim: Number of embedding dimensions
    :return: Tuple (Logi...
Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- Th...
def get_batches(int_text, batch_size, seq_length):
    """
    Return batches of input and target
    :param int_text: Text with the words replaced by their ids
    :param batch_size: The size of batch
    :param seq_length: The length of sequence
    :return: Batches as a Numpy array
    """
    # TODO: Implement Fun...
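A sketch of what the TODO might look like (assuming numpy, and assuming targets are inputs shifted one word left with wrap-around at the end; both are my assumptions, not the project's reference solution):

```python
import numpy as np

def get_batches(int_text, batch_size, seq_length):
    """Return batches of shape (n_batches, 2, batch_size, seq_length)."""
    words_per_batch = batch_size * seq_length
    n_batches = len(int_text) // words_per_batch

    # Keep only the words that fill complete batches.
    xdata = np.array(int_text[: n_batches * words_per_batch])
    # Targets are the inputs shifted one word left, wrapping at the end.
    ydata = np.roll(xdata, -1)

    # Lay each sequence out along a row, then cut into seq_length-wide slabs.
    x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, axis=1)
    y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, axis=1)
    return np.array(list(zip(x_batches, y_batches)))
```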
Neural Network Training Hyperparameters Tune the following parameters: Set num_epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set embed_dim to the size of the embedding. Set seq_length to the length of sequence. Set learning_rate to the learning rate. Set show_e...
# Number of Epochs
num_epochs = 20
# Batch Size
batch_size = 100
# RNN Size
rnn_size = 512
# Embedding Dimension Size
embed_dim = 300
# Sequence Length
seq_length = 10
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 10

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT I...
Generate TV Script This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load saved model
    loader = tf.train.import_meta_graph(load_dir + '.meta')
    loa...
By default Prophet fits additive seasonalities, meaning the effect of the seasonality is added to the trend to get the forecast. This time series of the number of air passengers is an example of when additive seasonality does not work:
%%R -w 10 -h 6 -u in
df <- read.csv('../examples/example_air_passengers.csv')
m <- prophet(df)
future <- make_future_dataframe(m, 50, freq = 'm')
forecast <- predict(m, future)
plot(m, forecast)

df = pd.read_csv('../examples/example_air_passengers.csv')
m = Prophet()
m.fit(df)
future = m.make_future_dataframe(50, freq...
notebooks/multiplicative_seasonality.ipynb
facebook/prophet
mit
This time series has a clear yearly cycle, but the seasonality in the forecast is too large at the start of the time series and too small at the end. In this time series, the seasonality is not a constant additive factor as assumed by Prophet, rather it grows with the trend. This is multiplicative seasonality. Prophet ...
%%R -w 10 -h 6 -u in
m <- prophet(df, seasonality.mode = 'multiplicative')
forecast <- predict(m, future)
plot(m, forecast)

m = Prophet(seasonality_mode='multiplicative')
m.fit(df)
forecast = m.predict(future)
fig = m.plot(forecast)
The components figure will now show the seasonality as a percent of the trend:
%%R -w 9 -h 6 -u in
prophet_plot_components(m, forecast)

fig = m.plot_components(forecast)
With seasonality_mode='multiplicative', holiday effects will also be modeled as multiplicative. Any added seasonalities or extra regressors will by default use whatever seasonality_mode is set to, but can be overridden by specifying mode='additive' or mode='multiplicative' as an argument when adding the seasonality or r...
%%R
m <- prophet(seasonality.mode = 'multiplicative')
m <- add_seasonality(m, 'quarterly', period = 91.25, fourier.order = 8, mode = 'additive')
m <- add_regressor(m, 'regressor', mode = 'additive')

m = Prophet(seasonality_mode='multiplicative')
m.add_seasonality('quarterly', period=91.25, fourier_order=8, mode='addit...
The skew result shows a positive (right) or negative (left) skew; values closer to zero show less skew. From the graphs, we can see that radius_mean, perimeter_mean, area_mean, concavity_mean and concave_points_mean are useful in predicting cancer type due to the distinct grouping between malignant and benign cancer ty...
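As a quick illustration of how skew values read (self-contained, with synthetic data standing in for the tumor features):

```python
import pandas as pd

# A right-skewed sample: one large value pulls the tail to the right.
right_skewed = pd.Series([1, 1, 2, 2, 3, 3, 4, 20])
# A symmetric sample: skew should be essentially zero.
symmetric = pd.Series([1, 2, 3, 4, 5, 6, 7, 8])

print(right_skewed.skew())  # clearly positive
print(symmetric.skew())     # approximately 0
```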
data.diagnosis.unique()

# Group by diagnosis and review the output.
diag_gr = data.groupby('diagnosis', axis=0)
pd.DataFrame(diag_gr.size(), columns=['# of observations'])
NB2_ExploratoryDataAnalysis.ipynb
ShiroJean/Breast-cancer-risk-prediction
mit
Check binary encoding from NB1 to confirm the conversion of the diagnosis categorical data into numeric, where
* Malignant = 1 (indicates presence of cancer cells)
* Benign = 0 (indicates absence)
Observation: 357 observations indicate the absence of cancer cells and 212 indicate their presence. Let's confirm thi...
# lets get the frequency of cancer diagnosis
sns.set_style("white")
sns.set_context({"figure.figsize": (10, 8)})
sns.countplot(data['diagnosis'], label='Count', palette="Set3")
2.3.1 Visualise distribution of data via histograms
Histograms are commonly used to visualize numerical variables. A histogram is similar to a bar graph after the values of the variable are grouped (binned) into a finite number of intervals (bins). Histograms group data into bins and provide a count of the number o...
# Break up columns into groups, according to their suffix designation
# (_mean, _se, and _worst) to perform visualisation plots off.

# Join the 'ID' and 'Diagnosis' back on
data_id_diag = data.loc[:,["id","diagnosis"]]
data_diag = data.loc[:,["diagnosis"]]

# For a merge + slice (.ix is deprecated; .iloc does positional slicing):
data_mean = data.iloc[:,1:11]
data_se = data.iloc[...
Histogram of the "_mean" suffix designation
# Plot histograms of CUT1 variables
hist_mean = data_mean.hist(bins=10, figsize=(15, 10), grid=False)

# For any individual histogram, use this:
# df_cut['radius_worst'].hist(bins=100)
Histogram for the "_se" suffix designation
# Plot histograms of _se variables
# hist_se = data_se.hist(bins=10, figsize=(15, 10), grid=False)
Histogram of the "_worst" suffix designation
# Plot histograms of _worst variables
# hist_worst = data_worst.hist(bins=10, figsize=(15, 10), grid=False)
Observation
We can see that perhaps the attributes concavity and concavity_point may have an exponential distribution. We can also see that perhaps the texture, smooth and symmetry attributes may have a Gaussian or nearly Gaussian distribution. This is interesting because many machine learning techniques assu...
# Density Plots (use a name other than plt, to avoid shadowing matplotlib.pyplot)
axes = data_mean.plot(kind='density', subplots=True, layout=(4,3), sharex=False,
                     sharey=False, fontsize=12, figsize=(15,10))
Density plots for the "_se" suffix designation
# Density Plots
# data_se.plot(kind='density', subplots=True, layout=(4,3), sharex=False,
#              sharey=False, fontsize=12, figsize=(15,10))
Density plot for the "_worst" suffix designation
# Density Plots
# data_worst.plot(kind='kde', subplots=True, layout=(4,3), sharex=False, sharey=False,
#                 fontsize=5, figsize=(15,10))
Observation
We can see that perhaps the attributes perimeter, radius, area, concavity and compactness may have an exponential distribution. We can also see that perhaps the texture, smooth and symmetry attributes may have a Gaussian or nearly Gaussian distribution. This is interesting because many machine learning te...
# box and whisker plots
# data_mean.plot(kind='box', subplots=True, layout=(4,4), sharex=False, sharey=False, fontsize=12)
Box plot for the "_se" suffix designation
# box and whisker plots
# data_se.plot(kind='box', subplots=True, layout=(4,4), sharex=False, sharey=False, fontsize=12)
Box plot for the "_worst" suffix designation
# box and whisker plots
# data_worst.plot(kind='box', subplots=True, layout=(4,4), sharex=False, sharey=False, fontsize=12)
Observation
We can see that perhaps the attributes perimeter, radius, area, concavity and compactness may have an exponential distribution. We can also see that perhaps the texture, smooth and symmetry attributes may have a Gaussian or nearly Gaussian distribution. This is interesting because many machine learning te...
# plot correlation matrix
import pandas as pd
import numpy as np
import seaborn as sns
from matplotlib import pyplot as plt

plt.style.use('fivethirtyeight')
sns.set_style("white")

data = pd.read_csv('data/clean-data.csv', index_col=False)
data.drop('Unnamed: 0', axis=1, inplace=True)

# Compute the correlation matrix
c...
Observation: We can see that strong positive relationships (r between 0.75 and 1) exist among the mean value parameters.
* The mean area of the tissue nucleus has a strong positive correlation with the mean values of radius and perimeter;
* Some parameters are moderately positively correlated (r between 0.5 and 0.75): concavity and area, concav...
plt.style.use('fivethirtyeight')
sns.set_style("white")

data = pd.read_csv('data/clean-data.csv', index_col=False)
g = sns.PairGrid(data[[data.columns[1], data.columns[2], data.columns[3],
                     data.columns[4], data.columns[5], data.columns[6]]], hue='diagnosis')
g = g.map_diag(plt.hist)
g = g.map_offdia...
Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.
def solve_lorentz(ic, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """Solve the Lorenz system for a single initial condition.

    Parameters
    ----------
    ic : array, list, tuple
        Initial conditions [x,y,z].
    max_time: float
        The max time to use. Integrate with 250 points per time u...
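The cell above is truncated. One way to complete the body (a sketch using scipy.integrate.odeint, with 250 points per time unit as the docstring suggests; the lorentz_derivs helper is my own name, not necessarily the notebook's):

```python
import numpy as np
from scipy.integrate import odeint

def lorentz_derivs(yvec, t, sigma, rho, beta):
    """Right-hand side of the Lorenz system."""
    x, y, z = yvec
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return [dx, dy, dz]

def solve_lorentz(ic, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """Solve the Lorenz system for a single initial condition.

    Returns a tuple (soln, t) of the solution array and time array.
    """
    t = np.linspace(0.0, max_time, int(250 * max_time))  # 250 points per time unit
    soln = odeint(lorentz_derivs, ic, t, args=(sigma, rho, beta))
    return soln, t
```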
assignments/assignment10/ODEsEx02.ipynb
rsterbentz/phys202-2015-work
mit
Write a function plot_lorentz that:
- Solves the Lorenz system for N different initial conditions. To generate your initial conditions, draw uniform random samples for x, y and z in the range $[-15,15]$.
- Calls np.random.seed(1) a single time at the top of your function to use the same seed each time.
- Plots $[x(t),z(t)]$ u...
N = 5
colors = plt.cm.hot(np.linspace(0,1,N))
for i in range(N):
    # To use these colors with plt.plot, pass them as the color argument
    print(colors[i])

np.random.seed(1)
g = []
h = []
f = []
for i in range(5):
    rnd = np.random.random(size=3)
    a,b,c = 30*rnd - 15
    g.append(a)
    h.append(b)
    f.append(c)
g...
Use interact to explore your plot_lorenz function with:
- max_time: an integer slider over the interval $[1,10]$.
- N: an integer slider over the interval $[1,50]$.
- sigma: a float slider over the interval $[0.0,50.0]$.
- rho: a float slider over the interval $[0.0,50.0]$.
- beta: fixed at a value of $8/3$.
interact(plot_lorentz, N=(1,50), max_time=(1,10), sigma=(0.0,50.0), rho=(0.0,50.0), beta=fixed(8/3));
<table align="left"> <td> <a href="https://colab.research.google.com/github/tensorflow/cloud/blob/master/src/python/tensorflow_cloud/tuner/tests/examples/ai_platform_vizier_tuner.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab ...
! pip install google-cloud
! pip install google-cloud-storage
! pip install requests
! pip install tensorflow_datasets
src/python/tensorflow_cloud/tuner/tests/examples/ai_platform_vizier_tuner.ipynb
tensorflow/cloud
apache-2.0
Install CloudTuner Download and install CloudTuner from tensorflow-cloud.
! pip install tensorflow-cloud
Import libraries and define constants
from tensorflow_cloud import CloudTuner
import keras_tuner

REGION = 'us-central1'
PROJECT_ID = '[your-project-id]' #@param {type:"string"}

! gcloud config set project $PROJECT_ID
Instantiate CloudTuner
Next, we create an instance of CloudTuner. We define our tuning hyperparameters and pass them into the constructor as the hyperparameters parameter. We also set the objective ('accuracy') to measure the performance of each trial, and we keep the number of trials small (5) for ...
# Configure the search space
HPS = keras_tuner.HyperParameters()
HPS.Float('learning_rate', min_value=1e-4, max_value=1e-2, sampling='log')
HPS.Int('num_layers', 2, 10)

tuner = CloudTuner(
    build_model,
    project_id=PROJECT_ID,
    region=REGION,
    objective='accuracy',
    hyperparameters=HPS,
    max_trials=5...
Create and train the model
def build_pipeline_model(hp):
    model = Sequential()
    model.add(Flatten(input_shape=(28, 28, 1)))
    # the number of layers is tunable
    for _ in range(hp.get('num_layers')):
        model.add(Dense(units=64, activation='relu'))
    model.add(Dense(10, activation='softmax'))
    # the learning rate is tunable...
Tutorial: Using a Study Configuration Now, let's repeat this study but this time the search space is passed in as a Vizier study_config. Create the Study Configuration Let's start by constructing the study config for optimizing the accuracy of the model with the hyperparameters number of layers and learning rate, just ...
# Configure the search space
STUDY_CONFIG = {
    'algorithm': 'ALGORITHM_UNSPECIFIED',
    'metrics': [{
        'goal': 'MAXIMIZE',
        'metric': 'accuracy'
    }],
    'parameters': [{
        'discrete_value_spec': {
            'values': [0.0001, 0.001, 0.01]
        },
        'parameter': 'learning_rate',
        ...
Instantiate CloudTuner
Next, we create another instance of CloudTuner. In this instantiation, we replace the hyperparameters and objective parameters with the study_config parameter.
tuner = CloudTuner(
    build_model,
    project_id=PROJECT_ID,
    region=REGION,
    study_config=STUDY_CONFIG,
    max_trials=10,
    directory='tmp_dir/3')
Tutorial: Distributed Tuning Let's run multiple tuning loops concurrently using multiple threads. To run distributed tuning, multiple tuners should share the same study_id, but different tuner_ids.
from multiprocessing.dummy import Pool
# If you are running this tutorial in a notebook locally, you may run multiple
# tuning loops concurrently using multi-processes instead of multi-threads.
# from multiprocessing import Pool
import time
import datetime

STUDY_ID = 'Tuner_study_{}'.format(
    datetime.datetime.now...
Database format. In a different notebook (the document you are looking at is called an IPython Notebook), I have converted the MongoDB database text dump from Planet 4 into HDF format. I saved it in a subformat for very fast read-speed into memory; the 2 GB file currently loads within 20 seconds on my MacBook Pro. By t...
df = pd.read_hdf(get_data.get_current_database_fname(), 'df')
notebooks/P4 stats.ipynb
michaelaye/planet4
isc
So, what did we receive in df (note that type 'object' often means string in our case, but could mean also a different complex datatype):
df = pd.read_hdf("/Users/klay6683/local_data/2018-10-14_planet_four_classifications_queryable_cleaned.h5")
df.info()

from planet4 import stats
obsids = df.image_name.unique()

from tqdm import tqdm_notebook as tqdm

results = []
for obsid in tqdm(obsids):
    sub_df = df[df.image_name==obsid]
    results.append(stat...
Here are the first 5 rows of the dataframe:
pd.Series(df.image_name.unique()).to_csv("image_names.csv", index=False)
Image IDs For a simple first task, let's get a list of unique image ids, to know how many objects have been published.
img_ids = df.image_id.unique()
print(img_ids)
We might have some NaN values in there, depending on how the database dump was created. Let's check if that's true.
df.image_id.notnull().value_counts()
If there's only True as an answer above, you can skip the nan-cleaning section Cleaning NaNs
df[df.image_id.isnull()].T # .T just to have it printed like a column, not a row
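A dump-independent way to do the cleaning (assuming pandas) is to keep only the rows whose image_id is non-null. A self-contained sketch with a toy frame standing in for the real dump (the ids are made up):

```python
import numpy as np
import pandas as pd

# Toy frame standing in for the real classification dump (hypothetical ids).
df = pd.DataFrame({"image_id": ["id1", "id2", np.nan]})

# Keep only rows with a non-null image_id.
df = df[df.image_id.notnull()]
print(len(df))  # 2
```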
In one version of the database dump, I had the last row being completely NaN, so I dropped it with the next command:
#df = df.drop(10718113)
Let's confirm that there's nothing with a NaN image_id now:
df[df.image_id.isnull()]
After NaNs are removed Ok, now we should only get non-NaNs:
img_ids = df.image_id.unique() img_ids
So, how many objects were online:
no_all = len(img_ids) no_all
Classification IDs Now we need to find out how often each image_id has been looked at. For that we have the groupby functionality. Specifically, because we want to know how many citizens have submitted a classification for each image_id, we need to group by the image_id and count the unique classification_ids within ...
df.groupby(df.classification_id, sort=False).size()
Ok, that is the case. Now, group those classification_ids by the image_ids and save the grouping. Switch off sorting for speed, we want to sort by the counts later anyway.
grouping = df.classification_id.groupby(df.image_id, sort=False)
Aggregate each group by finding the size of the unique list of classification_ids.
counts = grouping.agg(lambda x: x.unique().size) counts
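In current pandas, the same aggregation is available directly as nunique, which counts distinct values per group. A minimal self-contained illustration (the toy ids are made up):

```python
import pandas as pd

# Toy data: image "a" was classified twice (c1 counted once), "b" once.
df = pd.DataFrame({
    "image_id": ["a", "a", "a", "b"],
    "classification_id": ["c1", "c1", "c2", "c3"],
})

# nunique is equivalent to agg(lambda x: x.unique().size)
counts = df.classification_id.groupby(df.image_id, sort=False).nunique()
print(counts["a"], counts["b"])  # 2 1
```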
Order the counts by value
# Series.order was removed from pandas; sort_values is the replacement
counts = counts.sort_values(ascending=False)
counts
Note also that the length of this counts data series is 98220, exactly the number of unique image_ids. Percentages: by constraining the previous data series to a given count value and looking at the length of the remaining data, we can determine the status of the finished rate.
counts[counts >= 30].size
Wishing to see higher values, I was for some moments contemplating if one maybe has to sum up the different counts to be correct, but I don't think that's it. The way I see it, one has to decide in what 'phase-space' one works to determine the status of Planet4. Either in the phase space of total subframes or in the to...
from planet4 import helper_functions as hf

hf.define_season_column(df)
hf.unique_image_ids_per_season(df)
no_all = df.season.value_counts()
no_all
MDAP 2014
reload(hf)
season1 = df.loc[df.season==1, :]
inca = season1.loc[season1.image_name.str.endswith('_0985')]
manhattan = season1.loc[season1.image_name.str.endswith('_0935')]
hf.get_status(inca)
hf.get_status(manhattan)
hf.get_status(season1)
inca_images = """PSP_002380_0985,PSP_002868_0985,PSP_003092_0985,PSP_0031...
Ok, so not that big a deal until we require more than 80 classifications to be done. How do the counts for existing users distribute? The value_counts() method basically delivers a histogram on the counts_by_user data series. In other words, it shows how the frequency of classifications distributes over the datase...
counts_by_user.value_counts()
counts_by_user.value_counts().plot(style='*')

users_work = df.classification_id.groupby(df.user_name).agg(lambda x: x.unique().size)
# Series.order was removed from pandas; sort_values is the replacement
users_work.sort_values(ascending=False)[:10]

df[df.user_name=='gwyneth walker'].classification_id.value_counts()

import helper_functions as hf
reload(hf)
hf....
First, let's try translating between SMILES and SELFIES - as an example, we will use benzaldehyde. To translate from SMILES to SELFIES, use the selfies.encoder function, and to translate from SELFIES back to SMILES, use the selfies.decoder function.
original_smiles = "O=Cc1ccccc1" # benzaldehyde try: encoded_selfies = sf.encoder(original_smiles) # SMILES -> SELFIES decoded_smiles = sf.decoder(encoded_selfies) # SELFIES -> SMILES except sf.EncoderError as err: pass # sf.encoder error... except sf.DecoderError as err: pass # sf.de...
docs/source/tutorial.ipynb
aspuru-guzik-group/selfies
apache-2.0
Note that original_smiles and decoded_smiles are different strings, but they both represent benzaldehyde. Thus, when comparing the two SMILES strings, string equality should not be used. Instead, use RDKit to check whether the SMILES strings represent the same molecule.
from rdkit import Chem

Chem.CanonSmiles(original_smiles) == Chem.CanonSmiles(decoded_smiles)
docs/source/tutorial.ipynb
aspuru-guzik-group/selfies
apache-2.0
Customizing SELFIES The SELFIES grammar is derived dynamically from a set of semantic constraints, which assign bonding capacities to various atoms. Let's customize the semantic constraints that selfies operates on. By default, the following constraints are used:
sf.get_preset_constraints("default")
docs/source/tutorial.ipynb
aspuru-guzik-group/selfies
apache-2.0
These constraints map atoms (the keys) to their bonding capacities (the values). The special ? key maps to the bonding capacity for all atoms that are not explicitly listed in the constraints. For example, S and Li are constrained to a maximum of 6 and 8 bonds, respectively. Every SELFIES string can be decoded into a ...
sf.decoder("[Li][=C][C][S][=C][C][#S]")
docs/source/tutorial.ipynb
aspuru-guzik-group/selfies
apache-2.0
But suppose that we instead wanted to constrain S and Li to a maximum of 2 and 1 bond(s), respectively. To do so, we create a new set of constraints, and tell selfies to operate on them using selfies.set_semantic_constraints.
new_constraints = sf.get_preset_constraints("default")
new_constraints['Li'] = 1
new_constraints['S'] = 2
sf.set_semantic_constraints(new_constraints)
docs/source/tutorial.ipynb
aspuru-guzik-group/selfies
apache-2.0
To check that the update was successful, we can use selfies.get_semantic_constraints, which returns the semantic constraints that selfies is currently operating on.
sf.get_semantic_constraints()
docs/source/tutorial.ipynb
aspuru-guzik-group/selfies
apache-2.0
Our previous SELFIES string is now decoded like so. Notice that the specified bonding capacities are met, with every S and Li making only 2 and 1 bonds, respectively.
sf.decoder("[Li][=C][C][S][=C][C][#S]")
docs/source/tutorial.ipynb
aspuru-guzik-group/selfies
apache-2.0
Finally, to revert back to the default constraints, simply call:
sf.set_semantic_constraints()
docs/source/tutorial.ipynb
aspuru-guzik-group/selfies
apache-2.0
Please refer to the API reference for more details and more preset constraints. SELFIES in Practice Let's use a simple example to show how selfies can be used in practice, as well as highlight some convenient utility functions from the library. We start with a toy dataset of SMILES strings. As before, we can use selfie...
smiles_dataset = ["COC", "FCF", "O=O", "O=Cc1ccccc1"]
selfies_dataset = list(map(sf.encoder, smiles_dataset))
selfies_dataset
docs/source/tutorial.ipynb
aspuru-guzik-group/selfies
apache-2.0
The function selfies.len_selfies computes the symbol length of a SELFIES string. We can use it to find the maximum symbol length of the SELFIES strings in the dataset.
max_len = max(sf.len_selfies(s) for s in selfies_dataset)
max_len
docs/source/tutorial.ipynb
aspuru-guzik-group/selfies
apache-2.0
To extract the SELFIES symbols that form the dataset, use selfies.get_alphabet_from_selfies. Here, we add [nop] to the alphabet, which is a special padding character that selfies recognizes.
alphabet = sf.get_alphabet_from_selfies(selfies_dataset)
alphabet.add("[nop]")
alphabet = list(sorted(alphabet))
alphabet
docs/source/tutorial.ipynb
aspuru-guzik-group/selfies
apache-2.0
Then, create a mapping between the alphabet SELFIES symbols and indices.
vocab_stoi = {symbol: idx for idx, symbol in enumerate(alphabet)}
vocab_itos = {idx: symbol for symbol, idx in vocab_stoi.items()}
vocab_stoi
docs/source/tutorial.ipynb
aspuru-guzik-group/selfies
apache-2.0
SELFIES provides some convenience methods to convert between SELFIES strings and label (integer) and one-hot encodings. Using the first entry of the dataset (dimethyl ether) as an example:
dimethyl_ether = selfies_dataset[0]
label, one_hot = sf.selfies_to_encoding(dimethyl_ether, vocab_stoi, pad_to_len=max_len)
label
one_hot
dimethyl_ether = sf.encoding_to_selfies(one_hot, vocab_itos, enc_type="one_hot")
dimethyl_ether
sf.decoder(dimethyl_ether)  # sf.decoder ignores [nop]
docs/source/tutorial.ipynb
aspuru-guzik-group/selfies
apache-2.0
If different encoding strategies are desired, selfies.split_selfies can be used to tokenize a SELFIES string into its individual symbols.
list(sf.split_selfies("[C][O][C]"))
docs/source/tutorial.ipynb
aspuru-guzik-group/selfies
apache-2.0
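When selfies itself is unavailable, the bracketed tokenization that split_selfies performs can be approximated with a regex. This is a simplified sketch of the behavior, not the library's actual implementation, and the vocabulary mapping below is hypothetical:

```python
import re

def split_selfies_like(s):
    """Return bracketed SELFIES symbols, e.g. '[C][O][C]' -> ['[C]', '[O]', '[C]'].
    Simplified: assumes the string is a clean sequence of [...] tokens."""
    return re.findall(r"\[[^\]]*\]", s)

symbols = split_selfies_like("[C][O][C]")
print(symbols)  # ['[C]', '[O]', '[C]']

# Label-encode with a hypothetical vocabulary mapping.
vocab_stoi = {"[C]": 0, "[O]": 1, "[nop]": 2}
labels = [vocab_stoi[sym] for sym in symbols]
print(labels)  # [0, 1, 0]
```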
1
triangle_count = 0
for _ in range(N):
    a, b = sorted((random.random(), random.random()))
    x, y, z = (a, b - a, 1 - b)
    if x < 0.5 and y < 0.5 and z < 0.5:
        triangle_count += 1
triangle_count / N
FiveThirtyEightRiddler/2017-09-15/classic_sticks.ipynb
andrewzwicky/puzzles
mit
2
triangle_count = 0
for _ in range(N):
    sticks = sorted((random.random(), random.random(), random.random()))
    if sticks[2] < sticks[0] + sticks[1]:
        triangle_count += 1
triangle_count / N
FiveThirtyEightRiddler/2017-09-15/classic_sticks.ipynb
andrewzwicky/puzzles
mit
3
triangle_count = 0
for _ in range(N):
    a, b = sorted((random.random(), random.random()))
    x, y, z = (a, b - a, 1 - b)
    if (x**2 + y**2 > z**2) and (x**2 + z**2 > y**2) and (z**2 + y**2 > x**2):
        triangle_count += 1
triangle_count / N
FiveThirtyEightRiddler/2017-09-15/classic_sticks.ipynb
andrewzwicky/puzzles
mit
4
triangle_count = 0
for _ in range(N):
    x, y, z = (random.random(), random.random(), random.random())
    if (x**2 + y**2 > z**2) and (x**2 + z**2 > y**2) and (z**2 + y**2 > x**2):
        triangle_count += 1
triangle_count / N
FiveThirtyEightRiddler/2017-09-15/classic_sticks.ipynb
andrewzwicky/puzzles
mit
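A self-contained, seeded version of approach 1 (break a stick at two uniform points and test the triangle inequality). The analytic answer for this problem is 1/4, so the estimate should land near 0.25:

```python
import random

random.seed(0)  # deterministic run for reproducibility
N = 100_000

triangle_count = 0
for _ in range(N):
    a, b = sorted((random.random(), random.random()))
    x, y, z = a, b - a, 1 - b  # the three pieces
    # A triangle exists iff every piece is shorter than the other two combined,
    # i.e. no piece reaches half the stick.
    if x < 0.5 and y < 0.5 and z < 0.5:
        triangle_count += 1

print(triangle_count / N)  # close to 0.25
```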
Getting the data The SunPy VSO client is used to fetch the data automatically; only the corresponding dates need to be changed. Dates of interest for the project are: * 2012/01/29 to 2012/01/30 * 2013/03/04 to 2013/03/09 * 2014/09/23 to 2014/09/28 * 2015/09/03 to 2015/09/08 * 2016/03/11 to 2016/03/16 * 201...
# defining datetime range and number of samples dates = [] # where the dates pairs are going to be stored date_start = datetime(2012,7,1,0,0,0) date_end = datetime(2012,7,3,23,59,59) date_samples = 35 # Number of samples to take between dates date_delta = (date_end - date_start)/date_samples # How frequent to take ...
rotation/Acquiring_Data.ipynb
ijpulidos/solar-physics-ex
mit
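The sampling scheme above can be sketched with only the stdlib: divide the interval by the sample count and step through it:

```python
from datetime import datetime

date_start = datetime(2012, 7, 1, 0, 0, 0)
date_end = datetime(2012, 7, 3, 23, 59, 59)
date_samples = 35  # number of samples to take between the dates

# Evenly spaced sample times across the interval.
date_delta = (date_end - date_start) / date_samples
sample_times = [date_start + i * date_delta for i in range(date_samples)]

print(sample_times[0])    # 2012-07-01 00:00:00
print(len(sample_times))  # 35
```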
Acquiring data from helioviewer As of this date (04/05/2017) the VSO server for FITS files, docs.virtualsolar.org, is down and has been for some hours, so I chose to use Helioviewer to download data instead, which comes as jpg/png files.
from sunpy.net.helioviewer import HelioviewerClient hv = HelioviewerClient() datasources = hv.get_data_sources() # print a list of datasources and their associated ids for observatory, instruments in datasources.items(): for inst, detectors in instruments.items(): for det, measurements in detectors.items(...
rotation/Acquiring_Data.ipynb
ijpulidos/solar-physics-ex
mit
To analyze the bubble sort, we should note that regardless of how the items are arranged in the initial array, $n-1$ passes will be made to sort an array of size $n$. The table below shows the number of comparisons for each pass. The total number of comparisons is the sum of the first $n-1$ integers. Recall that the sum of t...
def shortBubbleSort(alist): exchanges = True passnum = len(alist)-1 while passnum > 0 and exchanges: exchanges = False for i in range(passnum): # print(i) if alist[i]>alist[i+1]: exchanges = True alist[i], alist[i+1] = alist[i+1], alist[i] ...
lab2-bubble-sort.ipynb
bkimo/discrete-math-with-python
mit
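A quick check of the comparison count: plain bubble sort (no early exit) always performs exactly n(n-1)/2 comparisons, the sum of the first n-1 integers. A small instrumented sketch:

```python
def bubble_sort_counting(alist):
    """Plain bubble sort; returns the number of comparisons made."""
    comparisons = 0
    for passnum in range(len(alist) - 1, 0, -1):
        for i in range(passnum):
            comparisons += 1
            if alist[i] > alist[i + 1]:
                alist[i], alist[i + 1] = alist[i + 1], alist[i]
    return comparisons

n = 10
data = list(range(n, 0, -1))        # worst case: reversed input
print(bubble_sort_counting(data))   # 45 == 10 * 9 // 2
print(data == sorted(data))         # True
```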
Plotting Algorithmic Time Complexity of a Function using Python We can use Python's timer and timeit methods to create a simple plotting scheme with matplotlib. Here is the code. The code is quite simple. Perhaps the only interesting thing here is the use of partial to pass in the function and the ...
from matplotlib import pyplot import numpy as np import timeit from functools import partial import random def fconst(N): """ O(1) function """ x = 1 def flinear(N): """ O(n) function """ x = [i for i in range(N)] def fsquare(N): """ O(n^2) function """ for i in range(...
lab2-bubble-sort.ipynb
bkimo/discrete-math-with-python
mit
Definitions
fileUrl = "../S_lycopersicum_chromosomes.2.50.BspQI_to_EXP_REFINEFINAL1_xmap.txt"
MIN_CONF = 10.0
FULL_FIG_W, FULL_FIG_H = 16, 8
CHROM_FIG_W, CHROM_FIG_H = FULL_FIG_W, 20
opticalmapping/xmap_reader.ipynb
sauloal/ipython
mit
Column type definition
col_type_int = np.int64 col_type_flo = np.float64 col_type_str = np.object col_info =[ [ "XmapEntryID" , col_type_int ], [ "QryContigID" , col_type_int ], [ "RefContigID" , col_type_int ], [ "QryStartPos" , col_type_flo ], [ "QryEndPos" , col_type_flo ], [ "RefStartPos" , col_type_flo ], [...
opticalmapping/xmap_reader.ipynb
sauloal/ipython
mit
Read XMAP http://nbviewer.ipython.org/github/herrfz/dataanalysis/blob/master/week2/getting_data.ipynb
CONVERTERS = {'info': filter_conv}
SKIP_ROWS = 9
NROWS = None

gffData = pd.read_csv(fileUrl, names=col_names, index_col='XmapEntryID', dtype=col_types,
                      header=None, skiprows=SKIP_ROWS, delimiter="\t", comment="#",
                      verbose=True, nrows=NROWS)
gffData.head()
opticalmapping/xmap_reader.ipynb
sauloal/ipython
mit
Add length column
gffData['qry_match_len'] = abs(gffData['QryEndPos'] - gffData['QryStartPos'])
gffData['ref_match_len'] = abs(gffData['RefEndPos'] - gffData['RefStartPos'])
gffData['match_prop'] = gffData['qry_match_len'] / gffData['ref_match_len']
gffData = gffData[gffData['Confidence'] >= MIN_CONF]
del gffData['LabelChannel']
gffD...
opticalmapping/xmap_reader.ipynb
sauloal/ipython
mit
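The abs() in the length columns matters because reverse-strand query alignments have QryStartPos greater than QryEndPos. A library-free sketch with made-up coordinates (the second row mimics a reverse-strand hit):

```python
# Hypothetical XMAP rows (made-up coordinates).
rows = [
    {"QryStartPos": 100.0, "QryEndPos": 900.0, "RefStartPos": 5000.0, "RefEndPos": 5810.0},
    {"QryStartPos": 750.0, "QryEndPos": 50.0,  "RefStartPos": 2000.0, "RefEndPos": 2700.0},
]

for r in rows:
    # abs() makes the lengths positive regardless of alignment orientation.
    r["qry_match_len"] = abs(r["QryEndPos"] - r["QryStartPos"])
    r["ref_match_len"] = abs(r["RefEndPos"] - r["RefStartPos"])
    r["match_prop"] = r["qry_match_len"] / r["ref_match_len"]

print([round(r["match_prop"], 3) for r in rows])  # [0.988, 1.0]
```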
More stats
ref_qry = gffData[['RefContigID','QryContigID']] ref_qry = ref_qry.sort('RefContigID') print ref_qry.head() ref_qry_grpby_ref = ref_qry.groupby('RefContigID', sort=True) ref_qry_grpby_ref.head() qry_ref = gffData[['QryContigID','RefContigID']] qry_ref = qry_ref.sort('QryContigI...
opticalmapping/xmap_reader.ipynb
sauloal/ipython
mit
Global statistics
gffData[['Confidence', 'QryLen', 'qry_match_len', 'ref_match_len', 'match_prop']].describe()
opticalmapping/xmap_reader.ipynb
sauloal/ipython
mit
List of chromosomes
chromosomes = np.unique(gffData['RefContigID'].values)
chromosomes
opticalmapping/xmap_reader.ipynb
sauloal/ipython
mit
Quality distribution
with size_controller(FULL_FIG_W, FULL_FIG_H):
    bq = gffData.boxplot(column='Confidence')
opticalmapping/xmap_reader.ipynb
sauloal/ipython
mit