Now we create a 5 by 5 grid with a spacing (dx and dy) of 1. We also create an elevation field with a value of 1.0 everywhere except at the outlet, where the elevation is 0. In this case the outlet is in the middle of the bottom row, at location (0, 2), and has a node id of 2.
mg1 = RasterModelGrid((5, 5), 1.0)
z1 = mg1.add_ones("topographic__elevation", at="node")
mg1.at_node["topographic__elevation"][2] = 0.0
mg1.at_node["topographic__elevation"]
notebooks/tutorials/boundary_conds/set_watershed_BCs_raster.ipynb
landlab/landlab
mit
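Node ids in a Landlab raster grid run row by row from the bottom-left corner, which is why the outlet at (row 0, column 2) has id 2. A minimal plain-Python sketch of that ordering (not using Landlab itself):

```python
def node_id(row, col, ncols):
    """Row-major node id used by Landlab raster grids (row 0 at the bottom)."""
    return row * ncols + col

# Outlet in the middle of the bottom row of a 5x5 grid:
print(node_id(0, 2, 5))  # -> 2
# The third example's outlet, id 5, is the first node of the second row:
print(node_id(1, 0, 5))  # -> 5
```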
Check that the node statuses were set correctly. By default, imshow does not plot the value of BC_NODE_IS_CLOSED nodes, so we override this below with the color_for_closed option.
mg1.imshow(mg1.status_at_node, color_for_closed="blue")
notebooks/tutorials/boundary_conds/set_watershed_BCs_raster.ipynb
landlab/landlab
mit
The second example uses set_watershed_boundary_condition_outlet_coords. In this case the user knows the coordinates of the outlet node. First instantiate a new grid, with new data values.
mg2 = RasterModelGrid((5, 5), 10.0)
z2 = mg2.add_ones("topographic__elevation", at="node")
mg2.at_node["topographic__elevation"][1] = 0.0
mg2.at_node["topographic__elevation"]
notebooks/tutorials/boundary_conds/set_watershed_BCs_raster.ipynb
landlab/landlab
mit
Plot grid of boundary status information
mg2.imshow(mg2.status_at_node, color_for_closed="blue")
notebooks/tutorials/boundary_conds/set_watershed_BCs_raster.ipynb
landlab/landlab
mit
The third example uses set_watershed_boundary_condition_outlet_id. In this case the user knows the node id value of the outlet node. First instantiate a new grid, with new data values.
mg3 = RasterModelGrid((5, 5), 5.0)
z3 = mg3.add_ones("topographic__elevation", at="node")
mg3.at_node["topographic__elevation"][5] = 0.0
mg3.at_node["topographic__elevation"]
notebooks/tutorials/boundary_conds/set_watershed_BCs_raster.ipynb
landlab/landlab
mit
Another plot to illustrate the results.
mg3.imshow(mg3.status_at_node, color_for_closed="blue")
notebooks/tutorials/boundary_conds/set_watershed_BCs_raster.ipynb
landlab/landlab
mit
The final example uses set_watershed_boundary_condition on a watershed that was exported from Arc. First import read_esri_ascii and then import the DEM data. An optional value of halo=1 is used with read_esri_ascii. This puts a perimeter of nodata values around the DEM. This is done just in case there are data values...
from landlab.io import read_esri_ascii
(grid_bijou, z_bijou) = read_esri_ascii("west_bijou_gully.asc", halo=1)
notebooks/tutorials/boundary_conds/set_watershed_BCs_raster.ipynb
landlab/landlab
mit
Let's plot the data to see what the topography looks like.
grid_bijou.imshow(z_bijou)
notebooks/tutorials/boundary_conds/set_watershed_BCs_raster.ipynb
landlab/landlab
mit
Now we can look at the boundary status of the nodes to see where the outlet was found.
grid_bijou.imshow(grid_bijou.status_at_node, color_for_closed="blue")
notebooks/tutorials/boundary_conds/set_watershed_BCs_raster.ipynb
landlab/landlab
mit
This looks sensible. Now that the boundary conditions are set, we can also look at the topography. imshow will default to show boundaries as black, as illustrated below. But that can be overridden, as we have been doing all along.
grid_bijou.imshow(z_bijou)
notebooks/tutorials/boundary_conds/set_watershed_BCs_raster.ipynb
landlab/landlab
mit
Now we look at the probability that the various configurations of significant and non-significant results will be obtained.
plt.figure(figsize=(7,6))
for k in [0,1,2,3,6,7]:
    plt.plot(Ns/4, cs[:,k])
plt.legend(['nothing','X','B','BX','AB','ABX'],loc=2)
plt.xlabel('N per cell')
plt.ylabel('pattern frequency');
_ipynb/No Way Anova - Interactions need more power.ipynb
simkovic/simkovic.github.io
mit
Load and Augment the Data Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. Augmentation In this cell, we perform some simple data augmentation by randomly flipping and rotating the gi...
from torchvision import datasets import torchvision.transforms as transforms from torch.utils.data.sampler import SubsetRandomSampler # number of subprocesses to use for data loading num_workers = 0 # how many samples per batch to load batch_size = 20 # percentage of training set to use as validation valid_size = 0.2 ...
DEEP LEARNING/Pytorch from scratch/CNN/cifar10_cnn_augm.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
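The flip half of the augmentation can be illustrated without torchvision: a horizontal flip of an H x W x C array is just a reversal along the width axis. A hypothetical numpy sketch of that idea:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_horizontal_flip(img, p=0.5):
    """Flip an H x W x C image left-right with probability p."""
    if rng.random() < p:
        return img[:, ::-1, :]
    return img

img = np.arange(12).reshape(2, 2, 3)
flipped = img[:, ::-1, :]
assert (flipped[:, ::-1, :] == img).all()  # flipping twice restores the image
```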
Train the Network Remember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
# number of epochs to train the model n_epochs = 30 valid_loss_min = np.Inf # track change in validation loss for epoch in range(1, n_epochs+1): # keep track of training and validation loss train_loss = 0.0 valid_loss = 0.0 ################### # train the model # ################### ...
DEEP LEARNING/Pytorch from scratch/CNN/cifar10_cnn_augm.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
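The checkpoint-on-improvement bookkeeping described above can be isolated from the training loop. A small sketch with made-up validation losses:

```python
import math

valid_loss_min = math.inf  # track change in validation loss, as in the cell above
history = [0.9, 0.7, 0.8, 0.6]  # hypothetical validation loss per epoch
saved_at = []
for epoch, valid_loss in enumerate(history, start=1):
    if valid_loss <= valid_loss_min:
        # validation loss decreased: this is where the model would be saved
        valid_loss_min = valid_loss
        saved_at.append(epoch)
print(saved_at)  # -> [1, 2, 4]
```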
Load the Model with the Lowest Validation Loss
model.load_state_dict(torch.load('model_augmented.pt'))
DEEP LEARNING/Pytorch from scratch/CNN/cifar10_cnn_augm.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Test the Trained Network Test your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
# track test loss test_loss = 0.0 class_correct = list(0. for i in range(10)) class_total = list(0. for i in range(10)) model.eval() # iterate over test data for batch_idx, (data, target) in enumerate(test_loader): # move tensors to GPU if CUDA is available if train_on_gpu: data, target = data.cuda(), ...
DEEP LEARNING/Pytorch from scratch/CNN/cifar10_cnn_augm.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
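The per-class accuracy tally in the cell above can be sketched in plain Python with hypothetical labels and predictions:

```python
# Hypothetical targets/predictions standing in for the test-loop tensors
targets = [0, 0, 1, 1, 1, 2]
preds = [0, 1, 1, 1, 0, 2]

n_classes = 3
class_correct = [0] * n_classes
class_total = [0] * n_classes
for t, p in zip(targets, preds):
    class_total[t] += 1
    class_correct[t] += int(t == p)

per_class_acc = [c / n for c, n in zip(class_correct, class_total)]
overall = 100.0 * sum(class_correct) / sum(class_total)
print(per_class_acc)      # -> [0.5, 0.6666666666666666, 1.0]
print(round(overall, 1))  # -> 66.7
```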
Visualize Sample Test Results
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)  # `dataiter.next()` no longer works in Python 3
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
    images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch...
DEEP LEARNING/Pytorch from scratch/CNN/cifar10_cnn_augm.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
First, we load the image and show it.
# Load the image
path_to_image = 'images/graffiti.jpg'
img = cv2.imread(path_to_image)
sr.show_image(img)
Notebooks/DetectorExample.ipynb
NLeSC/SalientDetector-python
apache-2.0
Now we create a SalientDetector object, with some parameters.
det = sr.SalientDetector(SE_size_factor=0.20, lam_factor=4)
Notebooks/DetectorExample.ipynb
NLeSC/SalientDetector-python
apache-2.0
We ask the SalientDetector to detect all types of regions:
regions = det.detect(img,
                     find_holes=True,
                     find_islands=True,
                     find_indentations=True,
                     find_protrusions=True,
                     visualize=True)
print(regions.keys())
Notebooks/DetectorExample.ipynb
NLeSC/SalientDetector-python
apache-2.0
We can also output the regions as ellipses
num_regions, features_standard, features_poly = sr.binary_mask2ellipse_features(regions, min_square=False)
print("number of features per saliency type: ", num_regions)
sr.visualize_ellipses(regions["holes"], features_standard["holes"])
sr.visualize_ellipses(regions["islands"], features_standard["islands"])
sr.visualize...
Notebooks/DetectorExample.ipynb
NLeSC/SalientDetector-python
apache-2.0
We can also save the elliptic parameters in text files. Below is an example of saving the polynomial coefficients of all regions represented as ellipses.
total_num_regions = sr.save_ellipse_features2file(num_regions, features_poly, 'poly_features.txt')
print("total_num_regions", total_num_regions)
Notebooks/DetectorExample.ipynb
NLeSC/SalientDetector-python
apache-2.0
To load the saved features from file, use the loading function:
import sys, os
sys.path.insert(0, os.path.abspath('..'))
import salientregions as sr

total_num_regions, num_regions, features = sr.load_ellipse_features_from_file('poly_features.txt')
print("total_num_regions: ", total_num_regions)
print("number of features per saliency type: ", num_regions)
#print "features: ", featur...
Notebooks/DetectorExample.ipynb
NLeSC/SalientDetector-python
apache-2.0
Process an observation: set up the processing.
# Default input parameters (replaced in next cell)
sequence_id = ''  # e.g. PAN012_358d0f_20191005T112325

# Unused option for now. See below.
# vmag_min = 6
# vmag_max = 14

position_column_x = 'catalog_wcs_x'
position_column_y = 'catalog_wcs_y'

input_bucket = 'panoptes-images-processed'

# JSON string of additional ...
notebooks/ProcessObservation.ipynb
panoptes/PIAA
mit
Fetch all the image documents from the metadata store. We then filter based on image status and measured properties.
unit_id, camera_id, sequence_time = sequence_id.split('_')

# Get sequence information
sequence_doc_path = f'units/{unit_id}/observations/{sequence_id}'
sequence_doc_ref = firestore_db.document(sequence_doc_path)
sequence_info = sequence_doc_ref.get().to_dict()

exptime = sequence_info['total_exptime'] / sequence_info...
notebooks/ProcessObservation.ipynb
panoptes/PIAA
mit
Filter frames: filter some of the frames based on the image properties as a whole.
# Sigma filtering of certain stats
mask_columns = [
    'camera_colortemp',
    'sources_num_detected',
    'sources_photutils_fwhm_mean'
]
for mask_col in mask_columns:
    images_df[f'mask_{mask_col}'] = stats.sigma_clip(images_df[mask_col]).mask
    display(plot.filter_plot(images_df, mask_col, sequence_id))
...
notebooks/ProcessObservation.ipynb
panoptes/PIAA
mit
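For intuition, the sigma-clip mask can be written in a few lines of numpy. The routine used above iterates until convergence; this simplified stand-in does a single pass (and, because the demo has only five points, a lowered threshold of 1.5 sigma is used — with n points the largest possible z-score is about sqrt(n-1)):

```python
import numpy as np

def sigma_clip_mask(values, sigma=3.0):
    """True marks values more than `sigma` standard deviations from the
    mean; a one-pass, simplified stand-in for the iterative routine."""
    values = np.asarray(values, dtype=float)
    dev = np.abs(values - values.mean())
    return dev > sigma * values.std()

data = np.array([10.0, 11.0, 9.0, 10.5, 100.0])
mask = sigma_clip_mask(data, sigma=1.5)
print(mask)  # only the outlier at 100 is masked
```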
Load metadata for images
# Build the joined metadata file. sources = list() for image_id in images_df.uid: blob_path = f'gcs://{input_bucket}/{image_id.replace("_", "/")}/sources.parquet' try: sources.append(pd.read_parquet(blob_path)) except FileNotFoundError: print(f'Error finding {blob_path}, skipping') sources_...
notebooks/ProcessObservation.ipynb
panoptes/PIAA
mit
Filter stars. Now that we have images of sufficient quality, filter the star detections themselves. We take the mean metadata values for each star and use them to filter stellar outliers based on a few properties of the observation as a whole.
# Use the mean value for the observation for each source.
sample_source_df = sources_df.groupby('picid').mean()
num_sources = len(sample_source_df)
print(f'Sources before filtering: {num_sources}')
frame_count = sources_df.groupby('picid').catalog_vmag.count()
exptime = images_df.camera_exptime.mean()
# Mask sources...
notebooks/ProcessObservation.ipynb
panoptes/PIAA
mit
See gini coefficient info here.
# Sigma clip columns.
clip_columns = [
    'catalog_vmag',
    'photutils_gini',
    'photutils_fwhm',
]
# Display in pair plot columns.
pair_columns = [
    'catalog_sep',
    'photutils_eccentricity',
    'photutils_background_mean',
    'catalog_wcs_x_int',
    'catalog_wcs_y_int',
    'is_masked',
]
for mask_col ...
notebooks/ProcessObservation.ipynb
panoptes/PIAA
mit
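The Gini coefficient referenced above measures how concentrated a set of values is (0 for perfectly equal values, approaching 1 when one value dominates). A numpy sketch of the standard sorted-rank formula, assumed comparable to the photutils_gini statistic:

```python
import numpy as np

def gini(values):
    """Gini coefficient via the sorted-rank formula:
    G = 2*sum(i * x_i) / (n * sum(x)) - (n + 1)/n, with x sorted ascending."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * x) / (n * x.sum()) - (n + 1.0) / n

print(gini([1, 1, 1, 1]))  # equal values -> 0.0
print(gini([0, 0, 0, 1]))  # concentrated -> 0.75
```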
Make stamp locations
xy_catalog = pd.read_hdf(observation_store_path,
                         key='sources',
                         columns=[position_column_x, position_column_y]).reset_index().groupby('picid')

# Get max diff in xy positions.
x_catalog_diff = (xy_catalog.catalog_wcs_x.max() - xy_catalog.catalog_wcs_x.min()).max()
y_c...
notebooks/ProcessObservation.ipynb
panoptes/PIAA
mit
Extract stamps
# Get list of FITS file urls
fits_urls = [f'{base_url}/{input_bucket}/{image_id.replace("_", "/")}/image.fits.fz'
             for image_id in images_df.uid]

# Build the joined metadata file.
reference_image = None
diff_image = None
stack_image = None
for image_time, fits_url in zip(images_df.index, fits_urls):
    try:
        da...
notebooks/ProcessObservation.ipynb
panoptes/PIAA
mit
Notebook environment info
!jupyter --version
current_time()
notebooks/ProcessObservation.ipynb
panoptes/PIAA
mit
Cluster analysis
nperm = 1000
T_obs_bin, clusters_bin, clusters_pb_bin, H0_bin = mne.stats.spatio_temporal_cluster_test(
    X_bin, threshold=None, n_permutations=nperm, out_type='mask')
T_obs_ste, clusters_ste, clusters_pb_ste, H0_ste = mne.stats.spatio_temporal_cluster_test(
    X_ste, threshold=None, n_permutations=nperm, out_type='mask')
oddball/ERP_grav_statistics-univariate.ipynb
nicofarr/eeg4sounds
apache-2.0
We retrieve the channels found by the cluster analysis.
def extract_electrodes_times(clusters, clusters_pb, tmin_ind=500, tmax_ind=640,
                             alpha=0.005, evoked=ev_bin_dev):
    ch_list_temp = []
    time_list_temp = []
    for clust, pval in zip(clusters, clusters_pb):
        if pval < alpha:
            for j, curline in enumerate(clust[tmin_ind:tmax_ind]):
                for ...
oddball/ERP_grav_statistics-univariate.ipynb
nicofarr/eeg4sounds
apache-2.0
One-sample t-test, FDR corrected (per electrode)
from scipy.stats import ttest_1samp
from mne.stats import bonferroni_correction, fdr_correction

def ttest_amplitude(X, times_ind, ch_names, times):
    # Select time points and average over time
    amps = X[:, times_ind, :].mean(axis=1)
    T, pval = ttest_1samp(amps, 0)
    alpha = 0.05
    n_samples, n_test...
oddball/ERP_grav_statistics-univariate.ipynb
nicofarr/eeg4sounds
apache-2.0
Tests from 280 to 440 ms, in 20 ms windows with 10 ms overlap
toi = np.arange(0.28, 0.44, 0.001)
toi_index = ev_bin_dev.time_as_index(toi)
wsize = 20
wstep = 10
toi
oddball/ERP_grav_statistics-univariate.ipynb
nicofarr/eeg4sounds
apache-2.0
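The windowing scheme (280-440 ms, 20 ms windows stepping by 10 ms) can be enumerated generically. A sketch, in milliseconds, that yields the same 15 windows built by hand below:

```python
def sliding_windows(t_start, t_stop, width, step):
    """Overlapping (start, stop) windows in ms, advancing by `step`."""
    windows = []
    start = t_start
    while start + width <= t_stop:
        windows.append((start, start + width))
        start += step
    return windows

wins = sliding_windows(280, 440, width=20, step=10)
print(len(wins), wins[0], wins[-1])  # -> 15 (280, 300) (420, 440)
```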
Printing and preparing all time windows
all_toi_indexes = []
for i in range(14):
    print(toi[10*i], toi[10*i + 20])
    cur_toi_ind = range(10*i+1, (10*i+21))
    all_toi_indexes.append(ev_bin_dev.time_as_index(toi[cur_toi_ind]))
print(toi[10*14], toi[10*14 + 19])
cur_toi_ind = range(10*14+1, (10*14+19))
all_toi_indexes.append(ev_bin_dev.time_as_index(to...
oddball/ERP_grav_statistics-univariate.ipynb
nicofarr/eeg4sounds
apache-2.0
Tests on each time window
for cur_timewindow in all_toi_indexes:
    T, pval, pval_fdr, pval_bonferroni = ttest_amplitude(X_diff_ste_bin, cur_timewindow,
                                                         epochs_bin_dev_ch.ch_names,
                                                         times=epochs_bin_dev_ch.times)
oddball/ERP_grav_statistics-univariate.ipynb
nicofarr/eeg4sounds
apache-2.0
On a channel subset (ROI), averaged over channels. Parietal ROI
# Selecting channels
epochs_bin_dev = _matstruc2mne_epochs(mat_bin_dev).crop(tmax=tcrop)
epochs_bin_std = _matstruc2mne_epochs(mat_bin_std).crop(tmax=tcrop)
epochs_ste_dev = _matstruc2mne_epochs(mat_ste_dev).crop(tmax=tcrop)
epochs_ste_std = _matstruc2mne_epochs(mat_ste_std).crop(tmax=tcrop)
mne.equalize_channels([e...
oddball/ERP_grav_statistics-univariate.ipynb
nicofarr/eeg4sounds
apache-2.0
Frontal ROI
# Selecting channels
epochs_bin_dev = _matstruc2mne_epochs(mat_bin_dev).crop(tmax=tcrop)
epochs_bin_std = _matstruc2mne_epochs(mat_bin_std).crop(tmax=tcrop)
epochs_ste_dev = _matstruc2mne_epochs(mat_ste_dev).crop(tmax=tcrop)
epochs_ste_std = _matstruc2mne_epochs(mat_ste_std).crop(tmax=tcrop)
mne.equalize_channels([e...
oddball/ERP_grav_statistics-univariate.ipynb
nicofarr/eeg4sounds
apache-2.0
Exercise: Let's consider a more general version of the Monty Hall problem where Monty is more unpredictable. As before, Monty never opens the door you chose (let's call it A) and never opens the door with the prize. So if you choose the door with the prize, Monty has to decide which door to open. Suppose he opens B ...
from sympy import symbols

p = symbols('p')
pmf = Pmf('ABC')
pmf['A'] *= p
pmf['B'] *= 0
pmf['C'] *= 1
pmf.Normalize()
pmf.Print()
p
pmf['A'].simplify()
pmf['A'].subs(p, 0.5)
code/chap02mine.ipynb
NathanYee/ThinkBayes2
gpl-2.0
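The same posterior can be checked with exact arithmetic instead of sympy. With a prior of 1/3 on each door, the likelihood of Monty opening B is p if the prize is behind your door A, 0 if behind B, and 1 if behind C, which gives P(A) = p/(1+p). A sketch with a hypothetical helper:

```python
from fractions import Fraction

def monty_posterior(p):
    """Posterior over doors after Monty opens B, where p is the chance
    he opens B when the prize is behind your door A."""
    prior = Fraction(1, 3)
    likelihood = {'A': p, 'B': Fraction(0), 'C': Fraction(1)}
    unnorm = {door: prior * lk for door, lk in likelihood.items()}
    total = sum(unnorm.values())
    return {door: v / total for door, v in unnorm.items()}

post = monty_posterior(Fraction(1, 2))
print(post['A'], post['C'])  # -> 1/3 2/3
```

With p = 1 (Monty always opens B when he has the choice) this reduces to the classic answer P(A) = 1/2 for this observation.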
Parameters
num_inputs = 784  # 28*28
neurons_hid1 = 392
neurons_hid2 = 196
neurons_hid3 = neurons_hid1  # Decoder Begins
num_outputs = num_inputs
learning_rate = 0.01
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/06-Autoencoders/02-Stacked-Autoencoder-Example.ipynb
arcyfelix/Courses
apache-2.0
Activation function
actf = tf.nn.relu
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/06-Autoencoders/02-Stacked-Autoencoder-Example.ipynb
arcyfelix/Courses
apache-2.0
Placeholder
X = tf.placeholder(tf.float32, shape = [None, num_inputs])
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/06-Autoencoders/02-Stacked-Autoencoder-Example.ipynb
arcyfelix/Courses
apache-2.0
Weights Initializer capable of adapting its scale to the shape of weights tensors. With distribution="normal", samples are drawn from a truncated normal distribution centered on zero, with stddev = sqrt(scale / n) where n is: - number of input units in the weight tensor, if mode = "fan_in" - number of output units,...
initializer = tf.variance_scaling_initializer()
w1 = tf.Variable(initializer([num_inputs, neurons_hid1]), dtype=tf.float32)
w2 = tf.Variable(initializer([neurons_hid1, neurons_hid2]), dtype=tf.float32)
w3 = tf.Variable(initializer([neurons_hid2, neurons_hid3]), ...
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/06-Autoencoders/02-Stacked-Autoencoder-Example.ipynb
arcyfelix/Courses
apache-2.0
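With the defaults (scale=1.0, mode="fan_in"), the stddev for the first weight matrix works out to sqrt(1/784). A numpy sketch of that calculation, using a plain normal draw as a stand-in for the truncated one:

```python
import math
import numpy as np

def fan_in_stddev(fan_in, scale=1.0):
    """stddev = sqrt(scale / n) with n = number of input units (fan_in)."""
    return math.sqrt(scale / fan_in)

std1 = fan_in_stddev(784)  # first encoder layer: 784 inputs
rng = np.random.default_rng(0)
w1 = rng.normal(0.0, std1, size=(784, 392))  # plain normal stand-in for truncated
print(round(std1, 4))  # -> 0.0357
```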
Biases
b1 = tf.Variable(tf.zeros(neurons_hid1))
b2 = tf.Variable(tf.zeros(neurons_hid2))
b3 = tf.Variable(tf.zeros(neurons_hid3))
b4 = tf.Variable(tf.zeros(num_outputs))
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/06-Autoencoders/02-Stacked-Autoencoder-Example.ipynb
arcyfelix/Courses
apache-2.0
Activation Function and Layers
act_func = tf.nn.relu

hid_layer1 = act_func(tf.matmul(X, w1) + b1)
hid_layer2 = act_func(tf.matmul(hid_layer1, w2) + b2)
hid_layer3 = act_func(tf.matmul(hid_layer2, w3) + b3)
output_layer = tf.matmul(hid_layer3, w4) + b4
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/06-Autoencoders/02-Stacked-Autoencoder-Example.ipynb
arcyfelix/Courses
apache-2.0
Loss Function
loss = tf.reduce_mean(tf.square(output_layer - X))
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/06-Autoencoders/02-Stacked-Autoencoder-Example.ipynb
arcyfelix/Courses
apache-2.0
Optimizer
#tf.train.RMSPropOptimizer
optimizer = tf.train.AdamOptimizer(learning_rate)
train = optimizer.minimize(loss)
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/06-Autoencoders/02-Stacked-Autoencoder-Example.ipynb
arcyfelix/Courses
apache-2.0
Initialize Variables
init = tf.global_variables_initializer()
saver = tf.train.Saver()
num_epochs = 5
batch_size = 150

with tf.Session() as sess:
    sess.run(init)
    # Epoch == Entire Training Set
    for epoch in range(num_epochs):
        num_batches = mnist.train.num_examples // batch_size  # 150 batch size
        ...
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/06-Autoencoders/02-Stacked-Autoencoder-Example.ipynb
arcyfelix/Courses
apache-2.0
Test Autoencoder output on Test Data
num_test_images = 10

with tf.Session() as sess:
    saver.restore(sess, "./checkpoint/stacked_autoencoder.ckpt")
    results = output_layer.eval(feed_dict={X: mnist.test.images[:num_test_images]})
    # Compare original images with their reconstructions
    f, a = plt.subplots(2, 10, figsize=(20, 4))...
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/06-Autoencoders/02-Stacked-Autoencoder-Example.ipynb
arcyfelix/Courses
apache-2.0
Basic 1D non-linear regression with Keras. TODO: see https://stackoverflow.com/questions/44998910/keras-model-to-fit-polynomial. Install Keras (https://keras.io/#installation) and its dependencies: install the TensorFlow backend (https://www.tensorflow.org/install/, pip install tensorflow) and install h5py (required if you plan on sav...
import tensorflow as tf
tf.__version__

import keras
keras.__version__

import h5py
h5py.__version__

import pydot
pydot.__version__
nb_dev_python/python_keras_1d_non-linear_regression.ipynb
jdhp-docs/python_notebooks
mit
Make the dataset
df_train = gen_1d_polynomial_samples(n_samples=100, noise_std=0.05)
x_train = df_train.x.values
y_train = df_train.y.values
plt.plot(x_train, y_train, ".k");

df_test = gen_1d_polynomial_samples(n_samples=100, noise_std=None)
x_test = df_test.x.values
y_test = df_test.y.values
plt.plot(x_test, y_test, ".k");
nb_dev_python/python_keras_1d_non-linear_regression.ipynb
jdhp-docs/python_notebooks
mit
Make the regressor
model = keras.models.Sequential()

#model.add(keras.layers.Dense(units=1000, activation='relu', input_dim=1))
#model.add(keras.layers.Dense(units=1))

#model.add(keras.layers.Dense(units=1000, activation='relu'))
#model.add(keras.layers.Dense(units=1))

model.add(keras.layers.Dense(units=5, activation='relu', input_dim=...
nb_dev_python/python_keras_1d_non-linear_regression.ipynb
jdhp-docs/python_notebooks
mit
Next, let's define a new class to represent each player in the game. I have provided a rough framework of the class definition along with comments along the way to help you complete it. Places where you should write code are denoted by comments inside [] brackets and CAPITAL TEXT.
class Player:
    # create here two local variables to store a unique ID for each player and the player's current 'pot' of money
    # [FILL IN YOUR VARIABLES HERE]

    # in the __init__() function, use the two input variables to initialize the ID and starting pot of each player
    def __init__(self, in...
notebooks/week-2/04 - Lab 2 Assignment.ipynb
tolaoniyangi/dmc
apache-2.0
<span id="alos_land_change_plat_prod">Choose Platform and Product &#9652;</span>
# Select one of the ALOS data cubes from around the world
# Colombia, Vietnam, Samoa Islands

## ALOS Data Summary
# There are 7 time slices (epochs) for the ALOS mosaic data.
# The dates of the mosaics are centered on June 15 of each year (time stamp)
# Bands: RGB (HH-HV-HH/HV), HH, HV, date, incidence angle, mask)
#...
notebooks/land_change/SAR/ALOS_Land_Change.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
<a id="alos_land_change_extents"></a> Get the Extents of the Cube &#9652;
from utils.data_cube_utilities.dc_time import dt_to_str

metadata = dc.load(platform=platform, product=product, measurements=[])
full_lat = metadata.latitude.values[[-1,0]]
full_lon = metadata.longitude.values[[0,-1]]
min_max_dates = list(map(dt_to_str, map(pd.to_datetime, metadata.time.values[[0,-1]])))
# Print the ...
notebooks/land_change/SAR/ALOS_Land_Change.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
<a id="alos_land_change_parameters"></a> Define the Analysis Parameters &#9652;
from datetime import datetime

## Samoa ##
# Apia City
# lat = (-13.7897, -13.8864)
# lon = (-171.8531, -171.7171)
# time_extents = ("2014-01-01", "2014-12-31")
# East Area
# lat = (-13.94, -13.84)
# lon = (-171.96, -171.8)
# time_extents = ("2014-01-01", "2014-12-31")
# Central Area
# lat = (-14.057, -13.884)
# lo...
notebooks/land_change/SAR/ALOS_Land_Change.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
<a id="alos_land_change_load"></a> Load and Clean Data from the Data Cube &#9652;
dataset = dc.load(product=product,
                  platform=platform,
                  latitude=lat,
                  longitude=lon,
                  time=time_extents)
notebooks/land_change/SAR/ALOS_Land_Change.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
View an acquisition in dataset
# Select a baseline and analysis time slice for comparison
# Make the adjustments to the years according to the following scheme
# Time Slice: 0=2007, 1=2008, 2=2009, 3=2010, 4=2015, 5=2016, 6=2017
baseline_slice = dataset.isel(time=0)
analysis_slice = dataset.isel(time=-1)
notebooks/land_change/SAR/ALOS_Land_Change.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
<a id="alos_land_change_rgbs"></a> View RGBs for the Baseline and Analysis Periods &#9652;
%matplotlib inline
from utils.data_cube_utilities.dc_rgb import rgb

# Baseline RGB
rgb_dataset2 = xr.Dataset()
min_ = np.min([
    np.percentile(baseline_slice.hh, 5),
    np.percentile(baseline_slice.hv, 5),
])
max_ = np.max([
    np.percentile(baseline_slice.hh, 95),
    np.percentile(baseline_slice.hv, 95),
])
rgb_dat...
notebooks/land_change/SAR/ALOS_Land_Change.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
<a id="alos_land_change_hh_hv"></a> Plot HH or HV Band for the Baseline and Analysis Periods &#9652; NOTE: The HV band is best for deforestation detection. Typical radar analyses convert the backscatter values at the pixel level to dB scale.<br> The ALOS conversion (from JAXA) is: Backscatter dB = 20 * log10( backscatter...
# Plot the BASELINE and ANALYSIS slice side-by-side
# Change the band (HH or HV) in the code below
plt.figure(figsize=(15, 6))
plt.subplot(1, 2, 1)
(20*np.log10(baseline_slice.hv) - 83).plot(vmax=0, vmin=-30, cmap="Greys_r")
plt.subplot(1, 2, 2)
(20*np.log10(analysis_slice.hv) - 83).plot(vmax=0, vmin=-30, cmap="Greys_r"...
notebooks/land_change/SAR/ALOS_Land_Change.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
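The dB conversion applied inline above can be factored into a standalone helper: 20 times log10 of the digital number, minus the calibration constant of 83 used in these cells.

```python
import numpy as np

def alos_dn_to_db(dn):
    """ALOS digital number to backscatter in dB: 20*log10(DN) - 83 (JAXA)."""
    return 20.0 * np.log10(dn) - 83.0

# A digital number of ~14125 corresponds to 0 dB:
print(float(alos_dn_to_db(10 ** (83 / 20))))
```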
<a id="alos_land_change_custom_rgb"></a> Plot a Custom RGB That Uses Bands from the Baseline and Analysis Periods &#9652; The RGB image below assigns RED to the baseline year HV band and GREEN+BLUE to the analysis year HV band<br> Vegetation loss appears in RED and regrowth in CYAN. Areas of no change appear in differe...
# Clipping the bands uniformly to brighten the image
rgb_dataset2 = xr.Dataset()
min_ = np.min([
    np.percentile(baseline_slice.hv, 5),
    np.percentile(analysis_slice.hv, 5),
])
max_ = np.max([
    np.percentile(baseline_slice.hv, 95),
    np.percentile(analysis_slice.hv, 95),
])
rgb_dataset2['baseline_slice.hv'] = bas...
notebooks/land_change/SAR/ALOS_Land_Change.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
Select one of the plots below and adjust the threshold limits (top and bottom)
plt.figure(figsize=(15, 6))
plt.subplot(1, 2, 1)
baseline_slice.hv.plot(vmin=0, vmax=4000, cmap="Greys")
plt.subplot(1, 2, 2)
analysis_slice.hv.plot(vmin=0, vmax=4000, cmap="Greys")
notebooks/land_change/SAR/ALOS_Land_Change.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
<a id="alos_land_change_change_product"></a> Plot a Change Product to Compare Two Time Periods (Epochs) &#9652;
from matplotlib.ticker import FuncFormatter

def intersection_threshold_plot(first, second, th, mask=None,
                                color_none=np.array([0,0,0]),
                                color_first=np.array([0,255,0]),
                                color_second=np.array([255,0,0]),
                                color_both=np.array([255,255,255]),
                                color_mask=n...
notebooks/land_change/SAR/ALOS_Land_Change.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
Exercises Exercise: Our solution to the differential equations is only approximate because we used a finite step size, dt=2 minutes. If we make the step size smaller, we expect the solution to be more accurate. Run the simulation with dt=1 and compare the results. What is the largest relative error between the two s...
# Solution goes here

# Solution goes here

# Solution goes here

# Solution goes here
notebooks/chap18.ipynb
AllenDowney/ModSimPy
mit
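The convergence check the exercise asks for can be sketched on a toy system: integrate with forward Euler at two step sizes and compare. This uses dy/dt = -y as a stand-in for the book's model, with made-up step sizes:

```python
def euler(f, y0, t_end, dt):
    """Forward Euler from t=0 to t_end with a fixed step dt."""
    y = y0
    for _ in range(round(t_end / dt)):
        y += dt * f(y)
    return y

decay = lambda y: -y  # dy/dt = -y, exact solution y0 * e^(-t)
coarse = euler(decay, 1.0, 2.0, dt=0.2)
fine = euler(decay, 1.0, 2.0, dt=0.1)
rel_err = abs(coarse - fine) / abs(fine)
print(coarse, fine, rel_err)  # halving dt changes the answer measurably
```

Here the smaller step lands closer to the exact value e^(-2), as expected.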
For some reason these can't be run directly in the notebook, so we run them in a subprocess with the %%python cell magic, like so:
%%python
from multiprocessing import Pool

def f(x):
    return x*x

if __name__ == '__main__':
    p = Pool(5)
    print(p.map(f, [1, 2, 3]))

%%python
from multiprocessing import Pool
import numpy as np

def f(x):
    return x*x

if __name__ == '__main__':
    p = Pool(5)
    print(p.map(f, np.array([1, 2, 3])))
notebooks/troubleshooting_and_sysadmin/Iterators with Multiprocessing.ipynb
Neuroglycerin/neukrill-net-work
mit
Now doing this asynchronously:
%%python
from multiprocessing import Pool
import numpy as np

def f(x):
    return x**2

if __name__ == '__main__':
    p = Pool(5)
    r = p.map_async(f, np.array([0,1,2]))
    print(dir(r))
    print(r.get(timeout=1))
notebooks/troubleshooting_and_sysadmin/Iterators with Multiprocessing.ipynb
Neuroglycerin/neukrill-net-work
mit
Now trying to create an iterable that will precompute its output using multiprocessing.
%%python
from multiprocessing import Pool
import numpy as np

def f(x):
    return x**2

class It(object):
    def __init__(self, a):
        # store an array (2D)
        self.a = a
        # initialise pool
        self.p = Pool(4)
        # initialise index
        self.i = 0
        # initialise pre-computed first b...
notebooks/troubleshooting_and_sysadmin/Iterators with Multiprocessing.ipynb
Neuroglycerin/neukrill-net-work
mit
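The precomputing-iterator pattern can be sketched without the notebook's subprocess workaround by using multiprocessing.dummy, whose Pool has the same API but is backed by threads. This is a simplified stand-in, not the notebook's class:

```python
from multiprocessing.dummy import Pool  # thread-backed Pool, same API

def square(x):
    return x * x

class PrefetchIterator:
    """Iterate over batches, kicking off the next batch's computation
    while the caller consumes the current one."""
    def __init__(self, batches, func):
        self.batches = batches
        self.func = func
        self.pool = Pool(2)
        self.i = 0
        # precompute the first batch immediately
        self.pending = self.pool.map_async(self.func, self.batches[0])

    def __iter__(self):
        return self

    def __next__(self):
        if self.pending is None:
            raise StopIteration
        result = self.pending.get()
        self.i += 1
        if self.i < len(self.batches):
            # schedule the next batch before returning the current one
            self.pending = self.pool.map_async(self.func, self.batches[self.i])
        else:
            self.pending = None
            self.pool.close()
        return result

batches = [[1, 2], [3, 4]]
print(list(PrefetchIterator(batches, square)))  # -> [[1, 4], [9, 16]]
```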
Then we try a similar thing using the RandomAugment function. Of the following two cells, one uses multiprocessing and one doesn't. We test them by pretending to ask for a minibatch and then sleeping, applying the RandomAugment function each time.
%%time
%%python
from multiprocessing import Pool
import numpy as np
import neukrill_net.augment
import time

class It(object):
    def __init__(self, a, f):
        # store an array (2D)
        self.a = a
        # store the function
        self.f = f
        # initialise pool
        self.p = Pool(4)
        # initial...
notebooks/troubleshooting_and_sysadmin/Iterators with Multiprocessing.ipynb
Neuroglycerin/neukrill-net-work
mit
We now write the adjusted history back to a new history file and then calculate the updated gravity field:
his_changed.write_history('fold_thrust_changed.his')

# %%timeit
# recompute block model
pynoddy.compute_model('fold_thrust_changed.his', 'fold_thrust_changed_out')

# %%timeit
# recompute geophysical response
pynoddy.compute_model('fold_thrust_changed.his', 'fold_thrust_changed_out', sim_type = ...
docs/notebooks/Paper-Fig3-4-Read-Geophysics.ipynb
flohorovicic/pynoddy
gpl-2.0
This example is a mode choice model built using the Swissmetro example dataset. First we create the Dataset and Model objects:
raw_data = pd.read_csv(lx.example_file('swissmetro.csv.gz')).rename_axis(index='CASEID')
data = lx.Dataset.construct.from_idco(raw_data, alts={1:'Train', 2:'SM', 3:'Car'})
data
book/example/102-swissmetro-weighted.ipynb
jpn--/larch
gpl-3.0
The swissmetro example models exclude some observations. We can use the Dataset.query_cases method to identify the observations we would like to keep.
m = lx.Model(data.dc.query_cases("PURPOSE in (1,3) and CHOICE != 0"))
book/example/102-swissmetro-weighted.ipynb
jpn--/larch
gpl-3.0
We can attach a title to the model. The title does not affect the calculations at all; it is merely used in various output report styles.
m.title = "swissmetro example 02 (weighted logit)"
book/example/102-swissmetro-weighted.ipynb
jpn--/larch
gpl-3.0
This model adds a weighting factor.
m.weight_co_var = "1.0*(GROUP==2)+1.2*(GROUP==3)"
book/example/102-swissmetro-weighted.ipynb
jpn--/larch
gpl-3.0
The swissmetro dataset, as with all Biogeme data, is only in co format.
from larch.roles import P, X

m.utility_co[1] = P("ASC_TRAIN")
m.utility_co[2] = 0
m.utility_co[3] = P("ASC_CAR")
m.utility_co[1] += X("TRAIN_TT") * P("B_TIME")
m.utility_co[2] += X("SM_TT") * P("B_TIME")
m.utility_co[3] += X("CAR_TT") * P("B_TIME")
m.utility_co[1] += X("TRAIN_CO*(GA==0)") * P("B_COST")
m.utility_co[2] +...
book/example/102-swissmetro-weighted.ipynb
jpn--/larch
gpl-3.0
Larch will find all the parameters in the model, but we'd like to output them in a rational order. We can use the ordering method to do this:
m.ordering = [
    ("ASCs", 'ASC.*',),
    ("LOS", 'B_.*',),
]
# TEST
from pytest import approx
assert m.loglike() == approx(-7892.111473285806)
book/example/102-swissmetro-weighted.ipynb
jpn--/larch
gpl-3.0
We can estimate the models and check the results match up with those given by Biogeme:
m.set_cap(15)
m.maximize_loglike(method='SLSQP')
# TEST
r = _
from pytest import approx
assert r.loglike == approx(-5931.557677709527)
m.calculate_parameter_covariance()
m.parameter_summary()
# TEST
assert m.parameter_summary().data.to_markdown() == '''
|        | Value | Std Err | t Stat | Sign...
book/example/102-swissmetro-weighted.ipynb
jpn--/larch
gpl-3.0
Exercise (1st midterm exam, 2018): Solve $\frac{dy}{dx}=\frac{x y^{4}}{3} - \frac{2 y}{3 x} + \frac{1}{3 x^{3} y^{2}}$. We will try the heuristic $$\xi=ax+cy+e$$ and $$\eta=bx+dy+f$$ to find the symmetries
x, y, a, b, c, d, e, f = symbols('x,y,a,b,c,d,e,f', real=True)
# load the function
F = x*y**4/3 - R(2,3)*y/x + R(1,3)/x**3/y**2
F
Teoria_Basica/scripts/GruposLie.ipynb
fdmazzone/Ecuaciones_Diferenciales
gpl-2.0
Sympy does not integrate the logarithm well
s = log(abs(x))
r, s
Teoria_Basica/scripts/GruposLie.ipynb
fdmazzone/Ecuaciones_Diferenciales
gpl-2.0
Let us substitute into the change-of-variables formula $$\frac{ds}{dr}=\left.\frac{s_x+s_y F}{r_x+r_y F}\right|_{x=e^s,y=r^{1/3}e^{-2/3s}}.$$
Ecua = ((s.diff(x) + s.diff(y)*F) / (r.diff(x) + r.diff(y)*F)).simplify()
r, s = symbols('r,s', real=True)
Ecua = Ecua.subs({x: exp(s), y: r**R(1,3)*exp(-R(2,3)*s)})
Ecua
Let us solve $\frac{ds}{dr}=\frac{1}{1+r^2}$. The general solution is $\arctan(r)=s+C$. Let us express the equation in Cartesian coordinates: $$\arctan(x^2y^3)=\log(|x|)+C.$$
C = symbols('C', real=True)
sol = Eq(atan(x**2*y**3), log(abs(x)) + C)
solExpl = solve(sol, y)
solExpl
yg = solExpl[0]
yg
Q = simplify(eta - xi*F)
Q
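As a quick, self-contained sanity check (a hedged numeric sketch, independent of the sympy session above): the derivative of $s(r)=\arctan(r)$ should equal $\frac{1}{1+r^2}$ at any sample point, which confirms it solves the canonical equation.

```python
import math

def ds_dr(r):
    # right-hand side of the canonical equation ds/dr = 1/(1 + r^2)
    return 1.0 / (1.0 + r**2)

def numderiv(f, r, h=1e-6):
    # central finite-difference approximation of f'(r)
    return (f(r + h) - f(r - h)) / (2 * h)

# s(r) = arctan(r) should satisfy the canonical equation at every r
for r in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    assert abs(numderiv(math.atan, r) - ds_dr(r)) < 1e-6
```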
There are no invariant solutions.
p = plot_implicit(sol.subs(C, 0), (x, -5, 5), (y, -10, 10), show=False)
for k in range(-10, 10):
    p.append(plot_implicit(sol.subs(C, k), (x, -5, 5), (y, -10, 10), show=False)[0])
p.show()
Supply user information
# Set the path for data file
flname = "/Users/guy/data/t28/jpole/T28_JPOLE2003_800.nc"
examples/T28_jpole_flight.ipynb
nguy/AWOT
gpl-2.0
- Set up some characteristics for plotting.
- Use the Cylindrical Equidistant Area map projection.
- Set the spacing of the barbs and the X-axis time step for labels.
- Set the start and end times for subsetting.
proj = 'cea'
Wbarb_Spacing = 300  # Spacing of wind barbs along flight path (sec)

# Choose the X-axis time step (in seconds) where major labels will be
XlabStride = 3600

# Should landmarks be plotted? [If yes, then modify the section below]
Lmarks = False

# Optional variables that can be included with AWOT
# Start and e...
Read in the flight data.<br> NOTE: At the time of writing, the time_var argument must be provided to make the read function work properly. This may change in the future, but time variables are not standard even among RAF Nimbus guidelines.
fl = awot.io.read_netcdf(fname=flname, platform='t28', time_var="Time")
fl.keys()
Create the track figure for this flight. There appear to be some bunk data values in lat/lon, so mask them first.
print(fl['latitude']['data'].min(), fl['latitude']['data'].max())
fl['latitude']['data'][:] = np.ma.masked_equal(fl['latitude']['data'][:], 0.)
fl['longitude']['data'][:] = np.ma.masked_equal(fl['longitude']['data'][:], 0.)
print(fl['latitude']['data'].min(), fl['latitude']['data'].max())
print(fl['longitude']['data']....
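The masking step relies on numpy masked arrays; here is a minimal self-contained sketch of the same idea, using made-up coordinates rather than the T-28 data:

```python
import numpy as np

# fake latitude track containing bogus 0.0 fill values
lat = np.ma.array([44.1, 0.0, 44.3, 44.5, 0.0])

# mask every element equal to the fill value 0.0
lat_masked = np.ma.masked_equal(lat, 0.0)

# min/max now ignore the masked entries
print(lat_masked.min(), lat_masked.max())  # 44.1 44.5
```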
Add a custom section with an evoked slider:
# Load the evoked data
evoked = read_evokeds(evoked_fname, condition='Left Auditory',
                      baseline=(None, 0), verbose=False)
evoked.crop(0, .2)
times = evoked.times[::4]

# Create a list of figs for the slider
figs = list()
for t in times:
    figs.append(evoked.plot_topomap(t, vmin=-300, vmax=300, res...
0.19/_downloads/81308ca6ca6807326a79661c989cfcba/plot_make_report.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
SQL statements are written in a way that resembles ordinary English sentences. This deliberate resemblance is meant to make them easier to learn and to read. It is nevertheless important to respect a specific order for the different clauses. In this tutorial, we will write SQL commands via Pyth...
import sqlite3

# connect to an empty SQL database
# SQLite stores the database in a single file
filepath = "./DataBase.db"
open(filepath, 'w').close()  # create an empty file
CreateDataBase = sqlite3.connect(filepath)
QueryCurs = CreateDataBase.cursor()
_unittests/ut_helpgen/notebooks2/td2a_eco_sql.ipynb
sdpython/pyquickhelper
mit
The cursor method is somewhat special: it acts as a kind of intermediate memory buffer, meant to temporarily hold the data being processed, as well as the operations you perform on them, before their final transfer into the database. As long as the commit method has not...
# Define a table-creation function
def CreateTable(nom_bdd):
    QueryCurs.execute('''CREATE TABLE IF NOT EXISTS ''' + nom_bdd + '''
        (id INTEGER PRIMARY KEY, Name TEXT, City TEXT, Country TEXT, Price REAL)''')

# Define a function to add observations to the table
def AddEntry...
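A minimal, self-contained illustration of the cursor/commit pattern described above (using a throwaway in-memory database rather than the DataBase.db file):

```python
import sqlite3

conn = sqlite3.connect(':memory:')  # throwaway in-memory database
cur = conn.cursor()

cur.execute('CREATE TABLE Clients '
            '(id INTEGER PRIMARY KEY, Name TEXT, City TEXT, Country TEXT, Price REAL)')
# parameterized insert: values are bound safely by the driver
cur.execute('INSERT INTO Clients VALUES (?, ?, ?, ?, ?)',
            (1, 'Marc', 'Paris', 'France', 30.0))
conn.commit()  # nothing is definitively written until commit

cur.execute('SELECT Name, Country FROM Clients')
print(cur.fetchall())  # [('Marc', 'France')]
```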
GROUP BY

In pandas, SQL's GROUP BY operation is carried out with a similar method: groupby. groupby groups observations according to the values of certain variables, applying an aggregation function to other variables.
QueryCurs.execute('SELECT Country, count(*) FROM Clients GROUP BY Country')
print(QueryCurs.fetchall())
Careful: in pandas, the count function does not do the same thing as in SQL. count applies to every column and counts all the non-null observations.
df2.groupby('Country').count()
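To see the difference concretely, here is a small sketch with a toy frame (made-up values, not the Clients data): count reports non-null values per column, while size counts rows per group, nulls included, which matches SQL's count(*).

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({'Country': ['France', 'France', 'Germany'],
                    'Price': [30.0, np.nan, 70.0]})

# count: non-null observations, column by column
print(toy.groupby('Country')['Price'].count())  # France 1, Germany 1

# size: number of rows per group, nulls included (like SQL's count(*))
print(toy.groupby('Country').size())            # France 2, Germany 1
```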
To get the same result as in SQL, use the size method.
df2.groupby('Country').size()
Or use lambda functions.
# for example, compute the average price and multiply it by 2
df2.groupby('Country')['Price'].apply(lambda x: 2*x.mean())

QueryCurs.execute('SELECT Country, 2*AVG(Price) FROM Clients GROUP BY Country').fetchall()

QueryCurs.execute('SELECT * FROM Clients WHERE Country == "Germany"')
print(QueryCurs.fetchall())

QueryCurs...
One can also go through a pandas DataFrame and use .to_csv().
QueryCurs.execute('''DROP TABLE Clients''')
QueryCurs.close()
Most of the common options are:

Metropolis
- jump: the starting standard deviation of the proposal distribution
- tuning: the number of iterations to tune the scale of the proposal
- ar_low: the lower bound of the target acceptance rate range
- ar_hi: the upper bound of the target acceptance rate range
- adapt_step: a number (b...
example = spvcm.upper_level.Upper_SMA(Y, X, M=W2, Z=Z,
                                      membership=membership,
                                      n_samples=500,
                                      configs=dict(tuning=250,
                                                   adapt_step=1.01,
                                                   ...
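The tuning/ar_low/ar_hi/adapt_step options implement a standard adaptive-proposal scheme. A rough stand-alone sketch of that idea (not spvcm's actual code) for a 1-d standard-normal target:

```python
import numpy as np

rng = np.random.default_rng(0)

def tuned_metropolis(logp, x0, tuning=250, ar_low=0.2, ar_hi=0.5, adapt_step=1.01):
    """Adapt the proposal std during the first `tuning` iterations."""
    x, scale, accepted = x0, 1.0, 0
    for i in range(1, tuning + 1):
        prop = x + scale * rng.standard_normal()
        # standard Metropolis accept/reject step
        if np.log(rng.random()) < logp(prop) - logp(x):
            x, accepted = prop, accepted + 1
        rate = accepted / i
        if rate < ar_low:      # too few acceptances: shrink the proposal
            scale /= adapt_step
        elif rate > ar_hi:     # too many acceptances: widen the proposal
            scale *= adapt_step
    return x, scale

x, scale = tuned_metropolis(lambda v: -0.5 * v**2, x0=0.0)
print(x, scale)
```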
notebooks/model/spvcm/using_the_sampler.ipynb
weikang9009/pysal
bsd-3-clause
Preparation
# the larger, the longer it takes; be sure to also adapt the input layer size of the vgg network to this value
INPUT_SHAPE = (64, 64)
# INPUT_SHAPE = (128, 128)
# INPUT_SHAPE = (256, 256)

EPOCHS = 50

# Depends on hardware GPU architecture, set as high as possible (this works well on K80)
BATCH_SIZE = 100

!rm -rf ./tf_log
# ...
notebooks/workshops/tss/cnn-imagenet-retrain.ipynb
DJCordhose/ai
mit
First Step: Load VGG pretrained on imagenet and remove the classifier

Hope: Feature Extraction will also work well for Speed Limit Signs.

Imagenet: a collection of labelled images from many categories
- http://image-net.org/
- http://image-net.org/about-stats
from keras import applications

# applications.VGG16?
vgg_model = applications.VGG16(include_top=False, weights='imagenet', input_shape=(64, 64, 3))
# vgg_model = applications.VGG16(include_top=False, weights='imagenet', input_shape=(128, 128, 3))
# vgg_model = applications.VGG16(include_top=False, weights='imagenet', i...
All convolutional blocks are kept fully trained; we just removed the classifier part.
vgg_model.summary()
The next step is to push all our signs through the net just once and record the output of the bottleneck features. Don't get confused: this is no training yet; it just records the predictions so that we do not have to repeat this expensive step over and over when we train the classifier later.
# will take a while, but not really long depending on size and number of input images
%time bottleneck_features_train = vgg_model.predict(X_train)
bottleneck_features_train.shape
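The "predict once, reuse forever" idea can be sketched without keras at all (a toy stand-in extractor, made-up shapes): run the expensive feature extractor a single time and train only on the cached arrays afterwards.

```python
import numpy as np

def expensive_extractor(batch):
    # stand-in for vgg_model.predict: any fixed, costly transformation
    return np.tanh(batch @ np.ones((batch.shape[1], 4)))

X_train = np.random.default_rng(1).normal(size=(8, 16))

# compute the bottleneck features exactly once...
bottleneck = expensive_extractor(X_train)

# ...then every later training epoch reuses the cached array
print(bottleneck.shape)  # (8, 4)
```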
What does this mean?
- 303 predictions for 303 images, or 3335 predictions for 3335 images when using the augmented data set
- 512 bottleneck features per prediction
- each bottleneck feature has a size of 2x2, just a blob more or less
- bottleneck features have a larger size when we increase the size of the input images (might be a good idea)...
first_bottleneck_feature = bottleneck_features_train[0, :, :, 0]
first_bottleneck_feature