Typically, the UWIs are a disaster. Let's ignore this for now. The Project is really just a list-like object, so you can index into it to get a single well. Each well is represented by a welly.Well object.
p[0]
docs/_userguide/Projects.ipynb
agile-geoscience/welly
apache-2.0
Some of the fields of this LAS file are messed up; see the Well notebook for more on how to fix this. Plot curves from several wells The DT log is called DT4P in one of the wells. We can deal with this sort of issue with aliases. Let's set up an alias dictionary, then plot the DT log from each well:
alias = {'Sonic': ['DT', 'DT4P'], 'Caliper': ['HCAL', 'CALI'], } import matplotlib.pyplot as plt fig, axs = plt.subplots(figsize=(7, 14), ncols=len(p), sharey=True, ) for i, (ax, w) in enumerate(zip(axs, p)): log = w.get_cur...
docs/_userguide/Projects.ipynb
agile-geoscience/welly
apache-2.0
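The alias mechanism can be pictured as a first-match lookup: for each requested mnemonic, try its aliases in order and take the first one the well actually has. A minimal sketch of that idea (not welly's actual implementation; `resolve` and its arguments are hypothetical):

```python
# Hypothetical sketch of alias resolution: given an alias dictionary and
# the mnemonics actually present in a well, return the first match.
alias = {'Sonic': ['DT', 'DT4P'], 'Caliper': ['HCAL', 'CALI']}

def resolve(mnemonic, available, alias):
    """Return the first aliased curve name present in `available`, else None."""
    for candidate in alias.get(mnemonic, [mnemonic]):
        if candidate in available:
            return candidate
    return None

print(resolve('Sonic', {'DT4P', 'GR'}, alias))   # DT4P
print(resolve('Caliper', {'CALI'}, alias))       # CALI
```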
Get a pandas.DataFrame The df() method makes a DataFrame using a dual index of UWI and Depth. Before we export our wells, let's give Kennetcook #2 a better UWI:
p[0].uwi = p[0].name p[0]
docs/_userguide/Projects.ipynb
agile-geoscience/welly
apache-2.0
That's better. When creating the DataFrame, you can pass a list of the keys (mnemonics) you want, and use aliases as usual.
alias keys = ['Caliper', 'GR', 'Sonic'] df = p.df(keys=keys, alias=alias, rename_aliased=True) df
docs/_userguide/Projects.ipynb
agile-geoscience/welly
apache-2.0
Quality Welly can run quality tests on the curves in your project. Some of the tests take arguments. You can test for things like this: all_positive: Passes if all the values are greater than zero. all_above(50): Passes if all the values are greater than 50. mean_below(100): Passes if the mean of the log is less than ...
import welly.quality as q tests = { 'All': [q.no_similarities], 'Each': [q.no_gaps, q.no_monotonic, q.no_flat], 'GR': [q.all_positive], 'Sonic': [q.all_positive, q.all_between(50, 200)], }
docs/_userguide/Projects.ipynb
agile-geoscience/welly
apache-2.0
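As a mental model, a parameterized test such as all_between(50, 200) can be read as a factory that returns a predicate over the curve's values. This is only a sketch of that idea, not welly's implementation:

```python
# Sketch: a parameterized quality test as a closure over its bounds.
def all_between(lo, hi):
    def test(values):
        return all(lo <= v <= hi for v in values)
    return test

sonic_ok = all_between(50, 200)
print(sonic_ok([60, 120, 199]))  # True
print(sonic_ok([60, 250]))       # False
```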
Let's add our own test for units:
def has_si_units(curve): return curve.units.lower() in ['mm', 'gapi', 'us/m', 'k/m3'] tests['Each'].append(has_si_units)
docs/_userguide/Projects.ipynb
agile-geoscience/welly
apache-2.0
We'll use the same alias dictionary as before:
alias
docs/_userguide/Projects.ipynb
agile-geoscience/welly
apache-2.0
Now we can run the tests and look at the results, which are in an HTML table:
from IPython.display import HTML HTML(p.curve_table_html(keys=['Caliper', 'GR', 'Sonic', 'SP', 'RHOB'], tests=tests, alias=alias) )
docs/_userguide/Projects.ipynb
agile-geoscience/welly
apache-2.0
Decoding sensor space data with generalization across time and conditions This example runs the analysis described in :footcite:KingDehaene2014. It illustrates how one can fit a linear classifier to identify a discriminatory topography at a given time instant and subsequently assess whether this linear model can accura...
# Authors: Jean-Remi King <jeanremi.king@gmail.com> # Alexandre Gramfort <alexandre.gramfort@inria.fr> # Denis Engemann <denis.engemann@gmail.com> # # License: BSD-3-Clause import matplotlib.pyplot as plt from sklearn.pipeline import make_pipeline from sklearn.preprocessing import StandardScaler fro...
dev/_downloads/00e78bba5d10188fcf003ef05e32a6f7/decoding_time_generalization_conditions.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
We will train the classifier on all left visual vs auditory trials and test on all right visual vs auditory trials.
clf = make_pipeline( StandardScaler(), LogisticRegression(solver='liblinear') # liblinear is faster than lbfgs ) time_gen = GeneralizingEstimator(clf, scoring='roc_auc', n_jobs=None, verbose=True) # Fit classifiers on the epochs where the stimulus was presented to the left. # ...
dev/_downloads/00e78bba5d10188fcf003ef05e32a6f7/decoding_time_generalization_conditions.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
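Conceptually, the generalization matrix is built by fitting one classifier per training time and scoring it at every testing time. A toy stand-in with a nearest-mean classifier (MNE's GeneralizingEstimator vectorizes this loop and uses real sklearn estimators):

```python
# What "generalization across time" computes, sketched by hand: fit a
# classifier at each training time, score it at every testing time.
def fit(X, y):
    # nearest-mean "classifier": remember the class means
    m0 = sum(x for x, t in zip(X, y) if t == 0) / y.count(0)
    m1 = sum(x for x, t in zip(X, y) if t == 1) / y.count(1)
    return m0, m1

def score(model, X, y):
    m0, m1 = model
    pred = [int(abs(x - m1) < abs(x - m0)) for x in X]
    return sum(p == t for p, t in zip(pred, y)) / len(y)

data = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]]  # data[trial][time]
labels = [0, 0, 1, 1]
n_times = 2
scores = [[score(fit([tr[t_tr] for tr in data], labels),
                 [tr[t_te] for tr in data], labels)
           for t_te in range(n_times)]
          for t_tr in range(n_times)]
print(scores)  # high on the diagonal, low off it, for this toy data
```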
Score on the epochs where the stimulus was presented to the right.
scores = time_gen.score(X=epochs['Right'].get_data(), y=epochs['Right'].events[:, 2] > 2)
dev/_downloads/00e78bba5d10188fcf003ef05e32a6f7/decoding_time_generalization_conditions.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Plot
fig, ax = plt.subplots(1) im = ax.matshow(scores, vmin=0, vmax=1., cmap='RdBu_r', origin='lower', extent=epochs.times[[0, -1, 0, -1]]) ax.axhline(0., color='k') ax.axvline(0., color='k') ax.xaxis.set_ticks_position('bottom') ax.set_xlabel('Testing Time (s)') ax.set_ylabel('Training Time (s)') ax.set_tit...
dev/_downloads/00e78bba5d10188fcf003ef05e32a6f7/decoding_time_generalization_conditions.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Load the data
# First we load the file file_location = '../results_database/text_wall_street_big.hdf5' f = h5py.File(file_location, 'r') # Now we need to get the letters and align them text_directory = '../data/wall_street_letters.npy' letters_sequence = np.load(text_directory) Nletters = len(letters_sequence) symbols = set(letter...
presentations/2016-01-21(Wall-Street-Letter-Latency-Prediction).ipynb
h-mayorquin/time_series_basic
bsd-3-clause
Study the Latency of the Data by Accuracy Make prediction with winner-takes-all Make the prediction for each delay. This takes a while.
N = 50000 # Amount of data delays = np.arange(0, 10) accuracy = [] # Make prediction with scikit-learn for delay in delays: X = code_vectors_winner[:(N - delay)] y = letters_sequence[delay:N] X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10) clf = svm.SVC(C=1....
presentations/2016-01-21(Wall-Street-Letter-Latency-Prediction).ipynb
h-mayorquin/time_series_basic
bsd-3-clause
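The indexing in the loop above implements a simple alignment: features at step t are paired with the label `delay` steps later, and both sequences are trimmed to the same length. A stand-alone sketch of just that slicing:

```python
# Pair features with labels `delay` steps in the future by trimming
# both sequences to the same length (stand-in data, not the real vectors).
N = 10
X = list(range(N))                         # stand-in feature sequence
y = [chr(ord('A') + i) for i in range(N)]  # stand-in letter sequence
delay = 3
X_d = X[:N - delay]
y_d = y[delay:N]
print(len(X_d), len(y_d))  # 7 7
print(X_d[0], y_d[0])      # feature 0 is paired with the letter 'D'
```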
Plot it
import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline plt.plot(delays, accuracy, 'o-', lw=2, markersize=10) plt.xlabel('Delays') plt.ylim([0, 105]) plt.xlim([-0.5, 10]) plt.ylabel('Accuracy %') plt.title('Delays vs Accuracy') fig = plt.gcf() fig.set_size_inches((12, 9))
presentations/2016-01-21(Wall-Street-Letter-Latency-Prediction).ipynb
h-mayorquin/time_series_basic
bsd-3-clause
Make predictions with representation standardization
from sklearn import preprocessing N = 50000 # Amount of data delays = np.arange(0, 10) accuracy_std = [] # Make prediction with scikit-learn for delay in delays: X = code_vectors_winner[:(N - delay)] y = letters_sequence[delay:N] X = preprocessing.scale(X) X_train, X_test, y_train, y_test = cross_val...
presentations/2016-01-21(Wall-Street-Letter-Latency-Prediction).ipynb
h-mayorquin/time_series_basic
bsd-3-clause
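preprocessing.scale z-scores each feature column: subtract the mean and divide by the (population) standard deviation. Sketched for one column with the stdlib:

```python
from statistics import mean, pstdev

def scale(column):
    """Z-score one feature column, as sklearn's preprocessing.scale does per column."""
    m, s = mean(column), pstdev(column)
    return [(v - m) / s for v in column]

print([round(z, 3) for z in scale([2.0, 4.0, 6.0])])  # [-1.225, 0.0, 1.225]
```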
Plot it
plt.plot(delays, accuracy, 'o-', lw=2, markersize=10., label='Accuracy') plt.plot(delays, accuracy_std, 'o-', lw=2, markersize=10, label='Standardized Representations') plt.xlabel('Delays') plt.ylim([0, 105]) plt.xlim([-0.5, 10]) plt.ylabel('Accuracy %') plt.title('Delays vs Accuracy') fig = plt.gcf() fig.set_size_inche...
presentations/2016-01-21(Wall-Street-Letter-Latency-Prediction).ipynb
h-mayorquin/time_series_basic
bsd-3-clause
Make prediction with softmax
accuracy_softmax = [] # Make prediction with scikit-learn for delay in delays: X = code_vectors_softmax[:(N - delay)] y = letters_sequence[delay:N] X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10) clf = svm.SVC(C=1.0, cache_size=200, kernel='linear') clf.f...
presentations/2016-01-21(Wall-Street-Letter-Latency-Prediction).ipynb
h-mayorquin/time_series_basic
bsd-3-clause
Standardized predictions with softmax
accuracy_softmax_std = [] # Make prediction with scikit-learn for delay in delays: X = code_vectors_winner[:(N - delay)] y = letters_sequence[delay:N] X = preprocessing.scale(X) X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10) clf = svm.SVC(C=1.0, cache_si...
presentations/2016-01-21(Wall-Street-Letter-Latency-Prediction).ipynb
h-mayorquin/time_series_basic
bsd-3-clause
NOTE: In the output of the above cell you may ignore any WARNINGS or ERRORS related to the following: "apache-beam", "pyarrow", "tensorflow-transform", "tensorflow-model-analysis", "tensorflow-data-validation", "joblib", "google-cloud-storage" etc. If you get any related errors mentioned above please rerun the above c...
import tensorflow as tf import apache_beam as beam import shutil print(tf.__version__)
courses/machine_learning/feateng/feateng.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
<h2> 1. Environment variables for project and bucket </h2> Your project id is the unique string that identifies your project (not the project name). You can find this from the GCP Console dashboard's Home page. My dashboard reads: <b>Project ID:</b> cloud-training-demos Cloud training often involves saving and res...
import os PROJECT = 'cloud-training-demos' # CHANGE THIS BUCKET = 'cloud-training-demos' # REPLACE WITH YOUR BUCKET NAME. Use a regional bucket in the region you selected. REGION = 'us-central1' # Choose an available region for Cloud AI Platform # for bash os.environ['PROJECT'] = PROJECT os.environ['BUCKET'] = BUCK...
courses/machine_learning/feateng/feateng.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
<h2> 2. Specifying query to pull the data </h2> Let's pull out a few extra columns from the timestamp.
def create_query(phase, EVERY_N): if EVERY_N == None: EVERY_N = 4 #use full dataset #select and pre-process fields base_query = """ #legacySQL SELECT (tolls_amount + fare_amount) AS fare_amount, DAYOFWEEK(pickup_datetime) AS dayofweek, HOUR(pickup_datetime) AS hourofday, pickup_longitude AS picku...
courses/machine_learning/feateng/feateng.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
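The EVERY_N parameter implements repeatable sampling: keep roughly one record in EVERY_N by hashing a key and comparing the hash modulo EVERY_N against a phase. A sketch of the idea (the hash function and key choice here are illustrative, not the exact BigQuery expression):

```python
import hashlib

def keep(key, every_n, remainder=0):
    """Deterministically keep ~1/every_n of records; repeatable across runs."""
    h = int(hashlib.md5(str(key).encode()).hexdigest(), 16)
    return h % every_n == remainder

sample = [k for k in range(1000) if keep(k, 4)]
print(len(sample))  # roughly 1000/4 keys survive
```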
Try the query above in https://bigquery.cloud.google.com/table/nyc-tlc:yellow.trips if you want to see what it does (ADD LIMIT 10 to the query!) <h2> 3. Preprocessing Dataflow job from BigQuery </h2> This code reads from BigQuery and saves the data as-is on Google Cloud Storage. We can do additional preprocessing and...
%%bash if gsutil ls | grep -q gs://${BUCKET}/taxifare/ch4/taxi_preproc/; then gsutil -m rm -rf gs://$BUCKET/taxifare/ch4/taxi_preproc/ fi
courses/machine_learning/feateng/feateng.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
First, let's define a function for preprocessing the data
import datetime #### # Arguments: # -rowdict: Dictionary. The beam bigquery reader returns a PCollection in # which each row is represented as a python dictionary # Returns: # -rowstring: a comma separated string representation of the record with dayofweek # converted from int to string (e.g. 3 --> Tue) ##...
courses/machine_learning/feateng/feateng.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
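The conversion the comment describes (3 --> Tue) follows legacy BigQuery's DAYOFWEEK, where 1 is Sunday. A sketch assuming that convention:

```python
# Map legacy-SQL DAYOFWEEK (1 = Sunday ... 7 = Saturday) to a short name.
DAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']

def day_name(dayofweek):
    return DAYS[dayofweek - 1]

print(day_name(3))  # Tue
```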
Now, let's run the pipeline locally. This takes up to <b>5 minutes</b>. You will see the message "Done" when it finishes.
preprocess(50*10000, 'DirectRunner') %%bash gsutil ls gs://$BUCKET/taxifare/ch4/taxi_preproc/
courses/machine_learning/feateng/feateng.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
4. Run Beam pipeline on Cloud Dataflow Run the pipeline on the cloud with a larger sample size.
%%bash if gsutil ls | grep -q gs://${BUCKET}/taxifare/ch4/taxi_preproc/; then gsutil -m rm -rf gs://$BUCKET/taxifare/ch4/taxi_preproc/ fi
courses/machine_learning/feateng/feateng.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
The following step will take <b>10-15 minutes.</b> Monitor job progress on the Cloud Console in the Dataflow section. Note: If you get an error about enabling the Dataflow API, disable and re-enable the Dataflow API, then re-run the cell below.
preprocess(50*100, 'DataflowRunner')
courses/machine_learning/feateng/feateng.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Once the job completes, observe the files created in Google Cloud Storage
%%bash gsutil ls -l gs://$BUCKET/taxifare/ch4/taxi_preproc/ %%bash #print first 10 lines of first shard of train.csv gsutil cat "gs://$BUCKET/taxifare/ch4/taxi_preproc/train.csv-00000-of-*" | head
courses/machine_learning/feateng/feateng.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
5. Develop model with new inputs Download the first shard of the preprocessed data to enable local development.
%%bash if [ -d sample ]; then rm -rf sample fi mkdir sample gsutil cat "gs://$BUCKET/taxifare/ch4/taxi_preproc/train.csv-00000-of-*" > sample/train.csv gsutil cat "gs://$BUCKET/taxifare/ch4/taxi_preproc/valid.csv-00000-of-*" > sample/valid.csv
courses/machine_learning/feateng/feateng.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
We have two new inputs in the INPUT_COLUMNS, three engineered features, and the estimator involves bucketization and feature crosses.
%%bash grep -A 20 "INPUT_COLUMNS =" taxifare/trainer/model.py %%bash grep -A 50 "build_estimator" taxifare/trainer/model.py %%bash grep -A 15 "add_engineered(" taxifare/trainer/model.py
courses/machine_learning/feateng/feateng.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Try out the new model on the local sample (this takes <b>5 minutes</b>) to make sure it works fine.
%%bash rm -rf taxifare.tar.gz taxi_trained export PYTHONPATH=${PYTHONPATH}:${PWD}/taxifare python -m trainer.task \ --train_data_paths=${PWD}/sample/train.csv \ --eval_data_paths=${PWD}/sample/valid.csv \ --output_dir=${PWD}/taxi_trained \ --train_steps=10 \ --job-dir=/tmp %%bash ls taxi_trained/export/expo...
courses/machine_learning/feateng/feateng.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
You can use saved_model_cli to look at the exported signature. Note that the model doesn't need any of the engineered features as inputs. It will compute latdiff, londiff, euclidean from the provided inputs, thanks to the add_engineered call in the serving_input_fn.
%%bash model_dir=$(ls ${PWD}/taxi_trained/export/exporter | tail -1) saved_model_cli show --dir ${PWD}/taxi_trained/export/exporter/${model_dir} --all %%writefile /tmp/test.json {"dayofweek": "Sun", "hourofday": 17, "pickuplon": -73.885262, "pickuplat": 40.773008, "dropofflon": -73.987232, "dropofflat": 40.732403, "pa...
courses/machine_learning/feateng/feateng.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
6. Train on cloud This will take <b> 10-15 minutes </b> even though the prompt immediately returns after the job is submitted. Monitor job progress on the Cloud Console, in the AI Platform section and wait for the training job to complete.
%%bash OUTDIR=gs://${BUCKET}/taxifare/ch4/taxi_trained JOBNAME=lab4a_$(date -u +%y%m%d_%H%M%S) echo $OUTDIR $REGION $JOBNAME gsutil -m rm -rf $OUTDIR gcloud ai-platform jobs submit training $JOBNAME \ --region=$REGION \ --module-name=trainer.task \ --package-path=${PWD}/taxifare/trainer \ --job-dir=$OUTDIR \ ...
courses/machine_learning/feateng/feateng.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Create some plot data The function assumes the data to plot is an array-like object in a single cell per row.
density_func = 78 mean, var, skew, kurt = stats.chi.stats(density_func, moments='mvsk') x_chi = np.linspace(stats.chi.ppf(0.01, density_func), stats.chi.ppf(0.99, density_func), 100) y_chi = stats.chi.pdf(x_chi, density_func) x_expon = np.linspace(stats.expon.ppf(0.01), stats.expon.ppf(0.99), 100) ...
Pandas Sparklines Demo.ipynb
crdietrich/sparklines
mit
Define range of data to make sparklines Note: data must be row-wise
a = df.iloc[:, 0:100]  # .ix was removed from pandas; .iloc selects by position
Pandas Sparklines Demo.ipynb
crdietrich/sparklines
mit
Output to new DataFrame of Sparklines
df_out = pd.DataFrame() df_out['sparkline'] = sparklines.create(data=a) sparklines.show(df_out[['sparkline']])
Pandas Sparklines Demo.ipynb
crdietrich/sparklines
mit
Insert Sparklines into source DataFrame
df['sparkline'] = sparklines.create(data=a) sparklines.show(df[['function', 'sparkline']])
Pandas Sparklines Demo.ipynb
crdietrich/sparklines
mit
Detailed Formatting Return only sparklines, format the line, fill and marker.
df_out = pd.DataFrame() df_out['sparkline'] = sparklines.create(data=a, color='#1b470a', fill_color='#99a894', fill_alpha=0.2, point_color='blue', ...
Pandas Sparklines Demo.ipynb
crdietrich/sparklines
mit
Example Data and Sparklines Layout
df_copy = df[['function', 'sparkline']].copy() df_copy['value'] = df.iloc[:, 100] df_copy['change'] = df.iloc[:, 98] - df.iloc[:, 99] df_copy['change_%'] = df_copy.change / df.iloc[:, 99] sparklines.show(df_copy)
Pandas Sparklines Demo.ipynb
crdietrich/sparklines
mit
Export to HTML Inline Jupyter Notebook
sparklines.to_html(df_copy, 'pandas_sparklines_demo')
Pandas Sparklines Demo.ipynb
crdietrich/sparklines
mit
HTML text for rendering elsewhere
html = sparklines.to_html(df_copy)
Pandas Sparklines Demo.ipynb
crdietrich/sparklines
mit
Examine a single patient
patientunitstayid = 237395 query = query_schema + """ select * from medication where patientunitstayid = {} order by drugorderoffset """.format(patientunitstayid) df = pd.read_sql_query(query, con) df.head() df.columns # Look at a subset of columns cols = ['medicationid','patientunitstayid', 'drugorderoffse...
notebooks/medication.ipynb
mit-eicu/eicu-code
mit
Here we can see that, roughly on ICU admission, the patient had an order for vancomycin, aztreonam, and tobramycin. Identifying patients admitted on a single drug Let's look for patients who have an order for vancomycin using exact text matching.
drug = 'VANCOMYCIN' query = query_schema + """ select distinct patientunitstayid from medication where drugname like '%{}%' """.format(drug) df_drug = pd.read_sql_query(query, con) print('{} unit stays with {}.'.format(df_drug.shape[0], drug))
notebooks/medication.ipynb
mit-eicu/eicu-code
mit
Exact text matching is fairly weak, as there's no systematic reason to prefer upper case or lower case. Let's relax the case matching.
drug = 'VANCOMYCIN' query = query_schema + """ select distinct patientunitstayid from medication where drugname ilike '%{}%' """.format(drug) df_drug = pd.read_sql_query(query, con) print('{} unit stays with {}.'.format(df_drug.shape[0], drug))
notebooks/medication.ipynb
mit-eicu/eicu-code
mit
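The difference between LIKE and ILIKE is easy to reproduce in Python: ILIKE is a case-insensitive substring match, so it also catches lower- and mixed-case spellings (the drug names below are made up for illustration):

```python
names = ['VANCOMYCIN 1 G', 'vancomycin 1.25 g', 'Vancomycin IV', 'AZTREONAM']

like = [n for n in names if 'VANCOMYCIN' in n]           # LIKE: case-sensitive
ilike = [n for n in names if 'vancomycin' in n.lower()]  # ILIKE: case-insensitive

print(len(like), len(ilike))  # 1 3
```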
HICL codes are used to group together drugs which have the same underlying ingredient (i.e. most frequently this is used to group brand name drugs with the generic name drugs). We can see above the HICL for vancomycin is 10093, so let's try grabbing that.
hicl = 10093 query = query_schema + """ select distinct patientunitstayid from medication where drughiclseqno = {} """.format(hicl) df_hicl = pd.read_sql_query(query, con) print('{} unit stays with HICL = {}.'.format(df_hicl.shape[0], hicl))
notebooks/medication.ipynb
mit-eicu/eicu-code
mit
No luck! I wonder what we missed? Let's go back to the original query, this time retaining HICL and the name of the drug.
drug = 'VANCOMYCIN' query = query_schema + """ select drugname, drughiclseqno, count(*) as n from medication where drugname ilike '%{}%' group by drugname, drughiclseqno order by n desc """.format(drug) df_drug = pd.read_sql_query(query, con) df_drug.head()
notebooks/medication.ipynb
mit-eicu/eicu-code
mit
It appears there is more than one HICL; we can group by HICL in this query to get an idea.
df_drug['drughiclseqno'].value_counts()
notebooks/medication.ipynb
mit-eicu/eicu-code
mit
Unfortunately, we can't be sure that these HICLs always identify only vancomycin. For example, let's look at drugnames for HICL = 1403.
hicl = 1403 query = query_schema + """ select drugname, count(*) as n from medication where drughiclseqno = {} group by drugname order by n desc """.format(hicl) df_hicl = pd.read_sql_query(query, con) df_hicl.head()
notebooks/medication.ipynb
mit-eicu/eicu-code
mit
This HICL seems more focused on the use of creams than on vancomycin. Let's instead inspect the top 3.
for hicl in [4042, 10093, 37442]: query = query_schema + """ select drugname, count(*) as n from medication where drughiclseqno = {} group by drugname order by n desc """.format(hicl) df_hicl = pd.read_sql_query(query, con) print('HICL {}'.format(hicl)) print('Number of r...
notebooks/medication.ipynb
mit-eicu/eicu-code
mit
This is fairly convincing evidence that these HICLs only refer to vancomycin. An alternative approach is to acquire the code book for HICL codes and look up vancomycin there. Hospitals with data available
query = query_schema + """ with t as ( select distinct patientunitstayid from medication ) select pt.hospitalid , count(distinct pt.patientunitstayid) as number_of_patients , count(distinct t.patientunitstayid) as number_of_patients_with_tbl from patient pt left join t on pt.patientunitstayid = t.patientunitst...
notebooks/medication.ipynb
mit-eicu/eicu-code
mit
Getting the data ready for work If the data is in GSLIB format you can use the function gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame.
#get the data in gslib format into a pandas Dataframe mydata= pygslib.gslib.read_gslib_file('../data/cluster.dat') #view data in a 2D projection plt.scatter(mydata['Xlocation'],mydata['Ylocation'], c=mydata['Primary']) plt.colorbar() plt.grid(True) plt.show()
pygslib/Ipython_templates/backtr_raw.ipynb
opengeostat/pygslib
mit
The nscore transformation table function
print (pygslib.gslib.__dist_transf.backtr.__doc__)
pygslib/Ipython_templates/backtr_raw.ipynb
opengeostat/pygslib
mit
Get the transformation table
transin, transout, error = pygslib.gslib.__dist_transf.ns_ttable(mydata['Primary'], mydata['Declustering Weight']) print('Was there any error?: ', error != 0)
pygslib/Ipython_templates/backtr_raw.ipynb
opengeostat/pygslib
mit
Get the normal score transformation Note that the declustering is applied to the transformation tables
mydata['NS_Primary'] = pygslib.gslib.__dist_transf.nscore(mydata['Primary'],transin,transout,getrank=False) mydata['NS_Primary'].hist(bins=30)
pygslib/Ipython_templates/backtr_raw.ipynb
opengeostat/pygslib
mit
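The unweighted normal-score idea can be sketched with the stdlib: replace each value by the standard-normal quantile of its (mid-point) rank. This ignores the declustering weights that the real ns_ttable uses; it only shows the shape of the transform:

```python
from statistics import NormalDist

def nscore(values):
    """Map values to standard-normal quantiles of their mid-point ranks."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    out = [0.0] * n
    for rank, i in enumerate(order):
        out[i] = NormalDist().inv_cdf((rank + 0.5) / n)
    return out

print([round(v, 2) for v in nscore([3.0, 1.0, 2.0])])  # [0.97, -0.97, 0.0]
```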
Doing the back transformation
mydata['NS_Primary_BT'],error = pygslib.gslib.__dist_transf.backtr(mydata['NS_Primary'], transin,transout, ltail=1,utail=1,ltpar=0,utpar=60, zmin=0,zmax=60,getrank=False) print ('there was any error?: ', error...
pygslib/Ipython_templates/backtr_raw.ipynb
opengeostat/pygslib
mit
Then define basic constants and functions, and define our neural network
original_dim = 4000 # Our 1D images dimension, each image has 4000 pixel intermediate_dim = 256 # Number of neurone our fully connected neural net has batch_size = 50 epochs = 15 epsilon_std = 1.0 def blackbox_image_generator(pixel, center, sigma): return norm.pdf(pixel, center, sigma) def model_vae(latent_d...
demo_tutorial/VAE/variational_autoencoder_demo.ipynb
henrysky/astroNN
mit
Now we will generate some true latent variables so we can pass them to a blackbox image generator to generate some 1D images. The blackbox image generator (which is deterministic) will take two numbers and generate images in a predictable way. This is important because if the generator generated images in a random way, th...
s_1 = np.random.normal(30, 1.5, 900) s_2 = np.random.normal(15, 1, 900) s_3 = np.random.normal(10, 1, 900) s = np.concatenate([s_1, s_2, s_3]) plt.figure(figsize=(12, 12)) plt.hist(s[:900], 70, density=1, facecolor='green', alpha=0.75, label='Population 1') plt.hist(s[900:1800], 70, density=1, facecolor='red', alpha=...
demo_tutorial/VAE/variational_autoencoder_demo.ipynb
henrysky/astroNN
mit
Now we will pass the true latent variable to the blackbox image generator to generate some images. Below are the example images from the three populations. They may seem to show no difference, but a neural network will usually pick up some subtle features.
# We have some images, each has 4000 pixels x_train = np.zeros((len(s), original_dim)) for counter, S in enumerate(s): xs = np.linspace(0, 40, original_dim) x_train[counter] = blackbox_image_generator(xs, 20, S) # Prevent nan causes error x_train[np.isnan(x_train.astype(float))] = 0 x_train *= 10 # Add some ...
demo_tutorial/VAE/variational_autoencoder_demo.ipynb
henrysky/astroNN
mit
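The "blackbox" generator above is just norm.pdf: a Gaussian bump evaluated on a pixel grid. With the stdlib it reads:

```python
import math

def gauss_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x -- what scipy.stats.norm.pdf computes."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

print(round(gauss_pdf(0.0, 0.0, 1.0), 4))   # 0.3989, the standard-normal peak
print(round(gauss_pdf(20.0, 20.0, 2.0), 4))  # peak of a wider, lower bump
```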
Now we will pass the images to the neural network and train with them.
latent_dim = 1 # Dimension of our latent space vae, encoder = model_vae(latent_dim) vae.compile(optimizer='rmsprop', loss=nll, weighted_metrics=None, loss_weights=None, sample_weight_mode=None) vae.fit(x_train, x_train, shuffle=True, epochs=epochs, batch_size=batch_size, verbose=0...
demo_tutorial/VAE/variational_autoencoder_demo.ipynb
henrysky/astroNN
mit
Yay!! It seems the neural network recovered the three populations successfully. Although the recovered latent variable is not exactly the same as the original one we generated (at the very least the scale isn't the same), you usually can't expect the neural network to learn the real physics. In this case, the latent ...
m_1A = np.random.normal(28, 2, 300) m_1B = np.random.normal(19, 2, 300) m_1C = np.random.normal(12, 1, 300) m_2A = np.random.normal(28, 2, 300) m_2B = np.random.normal(19, 2, 300) m_2C = np.random.normal(12, 1, 300) m_3A = np.random.normal(28, 2, 300) m_3B = np.random.normal(19, 2, 300) m_3C = np.random.normal(12, 1,...
demo_tutorial/VAE/variational_autoencoder_demo.ipynb
henrysky/astroNN
mit
Since we have two independent variables to generate our images, what happens if you still try to force the neural network to explain the images with just one variable? Before we run the training, we should think about what we expect first. Let's denote the first latent variable populations as 1, 2 and 3, while the seco...
latent_dim = 1 # Dimension of our latent space vae, encoder = model_vae(latent_dim) vae.compile(optimizer='rmsprop', loss=nll, weighted_metrics=None, loss_weights=None, sample_weight_mode=None) epochs = 15 vae.fit(x_train, x_train, shuffle=True, epochs=epochs, batch_size=batch_si...
demo_tutorial/VAE/variational_autoencoder_demo.ipynb
henrysky/astroNN
mit
By visual inspection, it seems the neural network only recovered 6 populations :( What will happen if we increase the latent space of the neural network to 2?
latent_dim = 2 # Dimension of our latent space epochs = 40 vae, encoder = model_vae(latent_dim) vae.compile(optimizer='rmsprop', loss=nll, weighted_metrics=None, loss_weights=None, sample_weight_mode=None) vae.fit(x_train, x_train, shuffle=True, epochs=epochs, batch_size=batch_siz...
demo_tutorial/VAE/variational_autoencoder_demo.ipynb
henrysky/astroNN
mit
Plot one of the hurricanes Let's just plot the track of Hurricane MARIA
maria = df[df['name'] == 'MARIA'].sort_values('iso_time') m = Basemap(llcrnrlon=-100.,llcrnrlat=0.,urcrnrlon=-20.,urcrnrlat=57., projection='lcc',lat_1=20.,lat_2=40.,lon_0=-60., resolution ='l',area_thresh=1000.) x, y = m(maria['longitude'].values,maria['latitude'].values) m.plot(x,y,linewidth=...
blogs/goes16/maria/hurricanes2017.ipynb
turbomanage/training-data-analyst
apache-2.0
Plot all the hurricanes Use line thickness based on the maximum category reached by the hurricane
names = df.name.unique() names m = Basemap(llcrnrlon=-100.,llcrnrlat=0.,urcrnrlon=-20.,urcrnrlat=57., projection='lcc',lat_1=20.,lat_2=40.,lon_0=-60., resolution ='l',area_thresh=1000.) for name in names: if name != 'NOT_NAMED': named = df[df['name'] == name].sort_values('iso_time') ...
blogs/goes16/maria/hurricanes2017.ipynb
turbomanage/training-data-analyst
apache-2.0
Data The data records the divorce rate $D$, marriage rate $M$, and average age at marriage $A$ for 50 US states.
# load data and copy url = "https://raw.githubusercontent.com/fehiepsi/rethinking-numpyro/master/data/WaffleDivorce.csv" WaffleDivorce = pd.read_csv(url, sep=";") d = WaffleDivorce # standardize variables d["A"] = d.MedianAgeMarriage.pipe(lambda x: (x - x.mean()) / x.std()) d["D"] = d.Divorce.pipe(lambda x: (x - x.mea...
notebooks/misc/linreg_divorce_numpyro.ipynb
probml/pyprobml
mit
Model (Gaussian likelihood) We predict divorce rate D given marriage rate M and age A.
def model(M, A, D=None): a = numpyro.sample("a", dist.Normal(0, 0.2)) bM = numpyro.sample("bM", dist.Normal(0, 0.5)) bA = numpyro.sample("bA", dist.Normal(0, 0.5)) sigma = numpyro.sample("sigma", dist.Exponential(1)) mu = numpyro.deterministic("mu", a + bM * M + bA * A) numpyro.sample("D", dist....
notebooks/misc/linreg_divorce_numpyro.ipynb
probml/pyprobml
mit
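In notation, the model in the cell above is (numpyro's Normal takes a scale parameter, so the second arguments below are standard deviations):

```latex
\begin{aligned}
a &\sim \mathcal{N}(0,\; 0.2) \\
b_M,\, b_A &\sim \mathcal{N}(0,\; 0.5) \\
\sigma &\sim \mathrm{Exponential}(1) \\
\mu_i &= a + b_M M_i + b_A A_i \\
D_i &\sim \mathcal{N}(\mu_i,\; \sigma)
\end{aligned}
```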
Posterior predicted vs actual
# call predictive without specifying new data # so it uses original data post = m5_3.sample_posterior(random.PRNGKey(1), p5_3, (int(1e4),)) post_pred = Predictive(m5_3.model, post)(random.PRNGKey(2), M=d.M.values, A=d.A.values) mu = post_pred["mu"] # summarize samples across cases mu_mean = jnp.mean(mu, 0) mu_PI = jnp...
notebooks/misc/linreg_divorce_numpyro.ipynb
probml/pyprobml
mit
Per-point LOO scores We compute the predicted probability of each point given the others, following sec 7.5.2 of Statistical Rethinking ed 2. The numpyro code is from Du Phan's site
# post = m5_3.sample_posterior(random.PRNGKey(24071847), p5_3, (1000,)) logprob = log_likelihood(m5_3.model, post, A=d.A.values, M=d.M.values, D=d.D.values)["D"] az5_3 = az.from_dict( posterior={k: v[None, ...] for k, v in post.items()}, log_likelihood={"D": logprob[None, ...]}, ) PSIS_m5_3 = az.loo(az5_3, poi...
notebooks/misc/linreg_divorce_numpyro.ipynb
probml/pyprobml
mit
Student likelihood
def model(M, A, D=None): a = numpyro.sample("a", dist.Normal(0, 0.2)) bM = numpyro.sample("bM", dist.Normal(0, 0.5)) bA = numpyro.sample("bA", dist.Normal(0, 0.5)) sigma = numpyro.sample("sigma", dist.Exponential(1)) # mu = a + bM * M + bA * A mu = numpyro.deterministic("mu", a + bM * M + bA * A...
notebooks/misc/linreg_divorce_numpyro.ipynb
probml/pyprobml
mit
Grading We will create a grader instance below and use it to collect your answers. Note that these outputs will be stored locally inside grader and will be uploaded to the platform only after running submitting function in the last part of this assignment. If you want to make a partial submission, you can run that cell...
grader = MCMCGrader()
python/coursera-BayesianML/04_mcmc_assignment.ipynb
saketkc/notebooks
bsd-2-clause
Task 1. Alice and Bob Alice and Bob are trading on the market. Both of them are selling the Thing and want to get as high a profit as possible. Every hour they check each other's prices and adjust their own prices to compete on the market, although they have different strategies for price setting. Alice: takes Bob's...
def run_simulation(alice_start_price=300.0, bob_start_price=300.0, seed=42, num_hours=10000, burnin=1000): """Simulates an evolution of prices set by Bob and Alice. The function should simulate Alice and Bob behavior for `burnin' hours, then ignore the obtained simulation results, and then simulate it ...
python/coursera-BayesianML/04_mcmc_assignment.ipynb
saketkc/notebooks
bsd-2-clause
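The docstring above describes a burn-in pattern: simulate for `burnin` hours, discard those samples, then keep the rest. A sketch of just that pattern (the actual Alice/Bob pricing rules are elided in the text above, so a placeholder coupled update stands in for them):

```python
import random

def simulate(num_hours=1000, burnin=100, seed=42):
    """Burn-in pattern only: the update rule below is a PLACEHOLDER,
    not the assignment's actual pricing strategies."""
    rng = random.Random(seed)
    alice, bob = 300.0, 300.0
    a_hist, b_hist = [], []
    for _ in range(burnin + num_hours):
        # placeholder coupled update, NOT the real Alice/Bob rules
        alice, bob = bob + rng.gauss(0, 1), alice + rng.gauss(0, 1)
        a_hist.append(alice)
        b_hist.append(bob)
    # discard the burn-in samples before computing any statistics
    return a_hist[burnin:], b_hist[burnin:]

a, b = simulate()
print(len(a), len(b))  # 1000 1000
```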
Task 1.2 What is the average price for Alice and Bob after the burn-in period? Whose prices are higher?
#### YOUR CODE HERE #### alice_prices, bob_prices = run_simulation(alice_start_price=300, bob_start_price=300) average_alice_price = np.mean(alice_prices) average_bob_price = np.mean(bob_prices) ### END OF YOUR CODE ### grader.submit_simulation_mean(average_alice_price, average_bob_price)
python/coursera-BayesianML/04_mcmc_assignment.ipynb
saketkc/notebooks
bsd-2-clause
Task 1.3 Let's look at the 2-d histogram of prices, computed using kernel density estimation.
data = np.array(run_simulation()) sns.jointplot(data[0, :], data[1, :], stat_func=None, kind='kde')
python/coursera-BayesianML/04_mcmc_assignment.ipynb
saketkc/notebooks
bsd-2-clause
Clearly, the prices of Bob and Alice are highly correlated. What is the Pearson correlation coefficient of Alice's and Bob's prices?
#### YOUR CODE HERE #### correlation = np.corrcoef(alice_prices, bob_prices)[0,1] ### END OF YOUR CODE ### grader.submit_simulation_correlation(correlation)
python/coursera-BayesianML/04_mcmc_assignment.ipynb
saketkc/notebooks
bsd-2-clause
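np.corrcoef returns the full 2x2 correlation matrix, and the [0, 1] entry is the Pearson coefficient. Computed by hand it is the covariance divided by the product of the standard deviations:

```python
def pearson(x, y):
    """Pearson correlation coefficient (the off-diagonal of np.corrcoef)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(round(pearson([1, 2, 3], [2, 4, 6]), 6))   # 1.0
print(round(pearson([1, 2, 3], [6, 4, 2]), 6))   # -1.0
```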
Task 1.4 We observe an interesting effect here: it seems the bivariate distribution of Alice and Bob prices converges to a correlated bivariate Gaussian distribution. Let's check whether the results change if we use a different random seed and starting points.
# Pick different starting prices, e.g 10, 1000, 10000 for Bob and Alice. # Does the joint distribution of the two prices depend on these parameters? POSSIBLE_ANSWERS = { 0: 'Depends on random seed and starting prices', 1: 'Depends only on random seed', 2: 'Depends only on starting prices', 3: 'Does no...
Task 2. Logistic regression with PyMC3 Logistic regression is a powerful model that allows you to analyze how a set of features affects some binary target label. Posterior distribution over the weights gives us an estimation of the influence of each particular feature on the probability of the target being equal to one...
data = pd.read_csv("adult_us_postprocessed.csv")
data.head()
Each row of the dataset describes a person by their features. The last column is the target variable $y$; a one indicates that this person's annual salary is more than $50K. First of all, let's set up a Bayesian logistic regression model (i.e. define priors on the parameters $\alpha$ and $\beta$ of the model) that predicts...
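As a reminder of the likelihood such a model assumes, here is a minimal numpy sketch of $P(y=1 \mid x) = \sigma(\alpha + \beta^T x)$. The coefficients and feature rows below are made up purely for illustration, not fitted values:

```python
import numpy as np

def predict_proba(alpha, beta, X):
    """P(y = 1 | x) = sigmoid(alpha + x . beta) for each row of X."""
    z = alpha + X @ beta
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical coefficients and feature rows (sex, age) -- for illustration only.
X = np.array([[1.0, 25.0], [0.0, 40.0], [1.0, 60.0]])
probs = predict_proba(-4.0, np.array([0.5, 0.08]), X)
# Each entry is a valid probability in (0, 1).
```

The sigmoid squashes the linear predictor into (0, 1), which is what lets the normal priors on $\alpha$ and $\beta$ induce a posterior over class probabilities.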
with pm.Model() as manual_logistic_model: # Declare pymc random variables for logistic regression coefficients with uninformative # prior distributions N(0, 100^2) on each weight using pm.Normal. # Don't forget to give each variable a unique name. #### YOUR CODE HERE #### alpha = pm.Normal('...
Submit the MAP estimates of the corresponding coefficients:
with pm.Model() as logistic_model: # There's a simpler interface for generalized linear models in pymc3. # Try to train the same model using pm.glm.GLM.from_formula. # Do not forget to specify that the target variable is binary (and hence follows Binomial distribution). #### YOUR CODE HERE #### ...
Task 2.2 MCMC To find credible regions let's perform MCMC inference.
# You will need the following function to visualize the sampling process. # You don't need to change it. def plot_traces(traces, burnin=200): ''' Convenience function: Plot traces with overlaid means and values ''' ax = pm.traceplot(traces[burnin:], figsize=(12,len(traces.varnames)*1.5), ...
Metropolis-Hastings Let's use the Metropolis-Hastings algorithm to draw samples from the posterior distribution. Once you have written the code, explore the hyperparameters of Metropolis-Hastings, such as the proposal distribution variance, to speed up convergence. You can use the plot_traces function in the next cell t...
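Before running pymc3's sampler, it may help to see what random-walk Metropolis does on a toy target. The sketch below is not the assignment's solution; the target (a standard normal) and all hyperparameters are made up for illustration:

```python
import numpy as np

def metropolis_hastings(log_target, start, n_samples, proposal_sd=1.0, seed=0):
    """Random-walk Metropolis: propose x' ~ N(x, proposal_sd^2) and accept
    with probability min(1, p(x') / p(x)), computed in log space."""
    rng = np.random.default_rng(seed)
    x = start
    samples = np.empty(n_samples)
    for i in range(n_samples):
        proposal = x + rng.normal(0.0, proposal_sd)
        # The proposal is symmetric, so the acceptance ratio reduces
        # to the ratio of target densities.
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal
        samples[i] = x
    return samples

# Toy target: standard normal density, up to an additive log-constant.
samples = metropolis_hastings(lambda x: -0.5 * x ** 2, start=5.0, n_samples=5000)
# After discarding burn-in, the sample mean should be near 0 and the std near 1.
```

Increasing `proposal_sd` lowers the acceptance rate but takes bigger steps; decreasing it does the opposite, which is exactly the trade-off the exercise asks you to explore.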
with pm.Model() as logistic_model: # Since it is unlikely that the dependency between the age and salary is linear, we will include age squared # into features so that we can model dependency that favors certain ages. # Train Bayesian logistic regression model on the following features: sex, age, age^2, edu...
NUTS sampler Use pm.sample without specifying a particular sampling method (pymc3 will choose it automatically). The sampling algorithm used in this case is NUTS, a form of Hamiltonian Monte Carlo in which the parameters are tuned automatically. This is an advanced method that we haven't covered in the ...
with pm.Model() as logistic_model: # Train Bayesian logistic regression model on the following features: sex, age, age_squared, educ, hours # Use pm.sample to run MCMC to train this model. # Train your model for 400 samples. # Training can take a while, so relax and wait :) #### YOUR CODE HERE ...
Estimating the odds ratio Now let's look at the posterior distribution of the odds ratio given the dataset (approximated by MCMC).
# We don't need to use a large burn-in here, since we initialize sampling
# from a good point (from our approximation of the most probable
# point (MAP), to be more precise).
burnin = 100
b = trace['sex[T. Male]'][burnin:]
plt.hist(np.exp(b), bins=20, density=True)
plt.xlabel("Odds Ratio")
plt.show()
Finally, we can find a credible interval (recall that credible intervals are Bayesian and confidence intervals are frequentist) for this quantity. This may be the best part about Bayesian statistics: we get to interpret credible intervals the way we've always wanted to interpret them. We are 95% confident that the ...
lb, ub = np.percentile(b, 2.5), np.percentile(b, 97.5)
print("P(%.3f < Odds Ratio < %.3f) = 0.95" % (np.exp(lb), np.exp(ub)))

# Submit the obtained credible interval.
grader.submit_pymc_odds_ratio_interval(np.exp(lb), np.exp(ub))
Task 2.3 Interpreting the results
# Does the gender affects salary in the provided dataset? # (Note that the data is from 1996 and maybe not representative # of the current situation in the world.) POSSIBLE_ANSWERS = { 0: 'No, there is certainly no discrimination', 1: 'We cannot say for sure', 2: 'Yes, we are 95% sure that a female is *less...
Authorization & Submission To submit assignment parts to the Coursera platform, please enter your e-mail and token into the variables below. You can generate a token on this programming assignment's page. <b>Note:</b> The token expires 30 minutes after generation.
STUDENT_EMAIL = 'saketkc@gmail.com'
STUDENT_TOKEN = '6r463miiML4NWB9M'
grader.status()
If you want to submit these answers, run the cell below.
grader.submit(STUDENT_EMAIL, STUDENT_TOKEN)
(Optional) generating videos of the sampling process In this part you will generate videos showing the sampling process. Setting things up You don't need to modify the code below; it sets up the plotting functions. The code is based on the MCMC visualization tutorial.
from IPython.display import HTML # Number of MCMC iteration to animate. samples = 400 figsize(6, 6) fig = plt.figure() s_width = (0.81, 1.29) a_width = (0.11, 0.39) samples_width = (0, samples) ax1 = fig.add_subplot(221, xlim=s_width, ylim=samples_width) ax2 = fig.add_subplot(224, xlim=samples_width, ylim=a_width) ax...
Animating Metropolis-Hastings
with pm.Model() as logistic_model: # Again define Bayesian logistic regression model on the following features: sex, age, age_squared, educ, hours #### YOUR CODE HERE #### ### END OF YOUR CODE ### step = pm.Metropolis() iter_sample = pm.iter_sample(2 * samples, step, start=map_estimate) an...
Resolving Conflicts Using Precedence Declarations This file shows how shift/reduce and reduce/reduce conflicts can be resolved using operator precedence declarations. The following grammar is ambiguous because it does not specify the precedence of the arithmetical operators: expr : expr '+' expr | expr '-' exp...
import ply.lex as lex tokens = [ 'NUMBER' ] def t_NUMBER(t): r'0|[1-9][0-9]*' t.value = int(t.value) return t literals = ['+', '-', '*', '/', '^', '(', ')'] t_ignore = ' \t' def t_newline(t): r'\n+' t.lexer.lineno += t.value.count('\n') def t_error(t): print(f"Illegal character '{t.value[...
Ply/Conflicts-Resolved.ipynb
karlstroetmann/Formal-Languages
gpl-2.0
Specification of the Parser
import ply.yacc as yacc
The start variable of our grammar is expr, but we don't have to specify that. The default start variable is the first variable that is defined.
start = 'expr'
The following operator precedence declarations state that the operators '+' and '-' have a lower precedence than the operators '*' and '/'. The operator '^' has the highest precedence. Furthermore, the declarations specify that the operators '+', '-', '*', and '/' are left associative, while the operator '^' is decl...
precedence = ( ('left', '+', '-') , # precedence 1 ('left', '*', '/'), # precedence 2 ('right', '^') # precedence 3 ) def p_expr_plus(p): "expr : expr '+' expr" p[0] = ('+', p[1], p[3]) def p_expr_minus(p): "expr : expr '-' expr" p[0] = ('-', p[1], p[3]) def p_expr_mult(...
Setting the optional argument write_tables to False <B style="color:red">is required</B> to prevent an obscure bug where the parser generator tries to read an empty parse table.
parser = yacc.yacc(write_tables=False, debug=True)
As there are no warnings, all conflicts have been resolved by the precedence declarations. Let's look at the action table that is generated.
!type parser.out
!cat parser.out
%run ../ANTLR4-Python/AST-2-Dot.ipynb
The function test(s) takes a string s as its argument and tries to parse this string. If all goes well, an abstract syntax tree is returned. If the string can't be parsed, an error message is printed by the parser.
def test(s):
    t = yacc.parse(s)
    d = tuple2dot(t)
    display(d)
    return t

test('2^3*4+5')
test('1+2*3^4')
test('1 + 2 * -3^4')
Chapter 1
def sigmoid(z): return 1./(1. + np.exp(-z)) def sigmoid_vector(w,x,b): return 1./(1. + np.exp(-1 * np.sum(w * x) - b)) def sigmoid_prime(z): return sigmoid(z) * (1 - sigmoid(z)) # Plot behavior of sigmoid. Continuous symmetric function, # asymptotically bounded by [0,1] in x = [-inf, inf]...
notebooks/neural_networks_and_deep_learning.ipynb
willettk/insight
apache-2.0
Exercises Take all the weights and biases in a network of perceptrons and multiply them by a positive constant $c > 0$. Show that the behavior of the network doesn't change. Input: $[x_1,x_2,\ldots,x_j]$ Old behavior Weights: $[w_1,w_2,\ldots,w_j]$ Bias: $b$ Perceptron output: output = 0 if $w \cdot x + b \leq...
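The claimed invariance can also be checked numerically. The sketch below uses arbitrary random weights and inputs (a perceptron defined as in the text: output 1 iff $w \cdot x + b > 0$, else 0) and verifies that scaling $w$ and $b$ by several values of $c > 0$ never flips the output:

```python
import numpy as np

def perceptron(w, x, b):
    """Perceptron output: 1 if w . x + b > 0, else 0 (as defined in the text)."""
    return int(np.dot(w, x) + b > 0)

rng = np.random.default_rng(1)
for _ in range(100):
    w, b, x = rng.normal(size=4), rng.normal(), rng.normal(size=4)
    for c in (0.5, 3.0, 100.0):
        # c > 0 preserves the sign of w . x + b, so the output is unchanged.
        assert perceptron(c * w, x, c * b) == perceptron(w, x, b)
```

This is exactly the algebraic argument above: $c\,w \cdot x + c\,b = c\,(w \cdot x + b)$, and multiplying by a positive constant cannot change the sign.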
# One set of possible weights and a bias; infinite amount # of legal combinations digits = np.identity(10) * 0.99 + 0.005 weights = np.ones((10,4)) * -1 weights[1::2,0] = 3 weights[2::4,1] = 3 weights[3::4,1] = 3 weights[4:8,2] = 3 weights[8:10,3] = 3 weights[0,1:3] = -2 bias = -2 print "Weights: \n{}".format(weig...
Load the MNIST data
import cPickle as pickle import gzip def load_data(): with gzip.open("neural-networks-and-deep-learning/data/mnist.pkl.gz","rb") as f: training_data,validation_data,test_data = pickle.load(f) return training_data,validation_data,test_data def load_data_wrapper(): tr_d,va_d,te_d = lo...
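Note that cPickle exists only under Python 2. Below is a hedged sketch of two pieces a load_data_wrapper conventionally needs: a one-hot target encoder, plus (in comments) the Python 3 equivalent of the pickle load, assuming the same file path and tuple layout as the original code:

```python
import numpy as np

def vectorized_result(j):
    """One-hot encode a digit label j as a (10, 1) column vector -- the
    target format a 10-unit output layer expects."""
    e = np.zeros((10, 1))
    e[j] = 1.0
    return e

# Python 3 equivalent of the cPickle load above (hedged -- assumes the
# same file path and tuple layout as the original Python 2 code):
#   import pickle, gzip
#   with gzip.open("neural-networks-and-deep-learning/data/mnist.pkl.gz", "rb") as f:
#       training_data, validation_data, test_data = pickle.load(f, encoding="latin1")
```

The `encoding="latin1"` argument is needed because the MNIST pickle was written by Python 2 and contains byte strings that Python 3's pickle would otherwise refuse to decode.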
Run the network
training_data, validation_data, test_data = load_data_wrapper()

net = Network([784, 30, 10])
net.SGD(training_data, 30, 10, 3.0, test_data=test_data)

net100 = Network([784, 100, 10])
net100.SGD(training_data, 30, 10, 3.0, test_data=test_data)

net2 = Network([784, 10])
net2.SGD(training_data, 30, 10, 3.0, test_data=test_data)
As mentioned before, if we want to build a three-dimensional displacement model of the composite plate, we would have six reaction forces that are functions of x and y. Those six reaction forces are related by three equilibrium equations.
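For reference, the three equilibrium equations of classical plate theory that relate these six resultants are commonly written as follows (sign conventions vary by text; this is the usual form with transverse load $q$):

$$\frac{\partial N_x}{\partial x} + \frac{\partial N_{xy}}{\partial y} = 0, \qquad
\frac{\partial N_{xy}}{\partial x} + \frac{\partial N_y}{\partial y} = 0,$$

$$\frac{\partial^2 M_x}{\partial x^2} + 2\,\frac{\partial^2 M_{xy}}{\partial x\,\partial y} + \frac{\partial^2 M_y}{\partial y^2} + q = 0.$$

The first two equations balance in-plane forces; the third balances moments and the transverse load, which is why the moment resultants enter with second derivatives.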
# # hyer page 584 # # Equations of equilibrium # Nxf = Function('N_x')(x,y) # Nyf = Function('N_y')(x,y) # Nxyf = Function('N_xy')(x,y) # Mxf = Function('M_x')(x,y) # Myf = Function('M_y')(x,y) # Mxyf = Function('M_xy')(x,y) # symbols for force and moments Nx,Ny,Nxy,Mx,My,Mxy = symbols('N_x,N_y,N_xy,M_x,M_y,M_xy') Nxf...
tutorials/Composite_Plate_Mechanics_with_Python_Theory.ipynb
nagordon/mechpy
mit