Split dataframes into categorical, continuous, discrete, dummy, and response
catD = df.loc[:, varTypes['categorical']]
contD = df.loc[:, varTypes['continuous']]
disD = df.loc[:, varTypes['discrete']]
dummyD = df.loc[:, varTypes['dummy']]
respD = df.loc[:, ['id', 'Response']]
.ipynb_checkpoints/data-exploration-life-insurance-checkpoint.ipynb
ramabrahma/data-sci-int-capstone
gpl-3.0
Descriptive statistics and scatter plot relating Product_Info_2 and Response
prod_info = ["Product_Info_" + str(i) for i in range(1, 8)]
a = catD.loc[:, prod_info[1]]
stats = catD.groupby(prod_info[1]).describe()
c = gb_PI2.Response.count()
plt.figure(0)
plt.scatter(c[0], c[1])
plt.figure(0)
plt.title("Histogram of " + "Product_Info_" + str(i))
plt.xlabel("Categories " + str((a.describe())['count...
Get Response Spectrum - Nigam & Jennings
# Create an instance of the NigamJennings class
nigam_jennings = rsp.NigamJennings(x_record, x_time_step, periods, damping=0.05, units="cm/s/s")
sax, time_series, acc, vel, dis = nigam_jennings.evaluate()
# Plot Response Spectrum
rsp.plot_response_spectra(sax, axis_type="semilogx", filename="images/response_nigam_jenni...
gmpe-smtk/Ground Motion IMs Short.ipynb
g-weatherill/notebooks
agpl-3.0
Plot Time Series
rsp.plot_time_series(time_series["Acceleration"], x_time_step, time_series["Velocity"], time_series["Displacement"])
Intensity Measures Get PGA, PGV and PGD
pga_x, pgv_x, pgd_x, _, _ = ims.get_peak_measures(0.002, x_record, True, True)
print "PGA = %10.4f cm/s/s, PGV = %10.4f cm/s, PGD = %10.4f cm" % (pga_x, pgv_x, pgd_x)
pga_y, pgv_y, pgd_y, _, _ = ims.get_peak_measures(0.002, y_record, True, True)
print "PGA = %10.4f cm/s/s, PGV = %10.4f cm/s, PGD = %10.4f cm" % (p...
Get Durations: Bracketed, Uniform, Significant
print "Bracketed Duration (> 5 cm/s/s) = %9.3f s" % ims.get_bracketed_duration(x_record, x_time_step, 5.0)
print "Uniform Duration (> 5 cm/s/s) = %9.3f s" % ims.get_uniform_duration(x_record, x_time_step, 5.0)
print "Significant Duration (5 - 95 Arias ) = %9.3f s" % ims.get_significant_duration(x_record, x_time_step, 0...
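As an illustration of one of these measures (not the smtk implementation): the bracketed duration is simply the time between the first and last exceedance of the acceleration threshold. A minimal NumPy sketch:

```python
import numpy as np

def bracketed_duration(acc, dt, threshold):
    """Time between the first and last exceedance of |acc| > threshold."""
    idx = np.where(np.abs(acc) > threshold)[0]
    if idx.size == 0:
        return 0.0
    return (idx[-1] - idx[0]) * dt

# Synthetic record: 4 s at dt = 0.01 s, exceeding 5 cm/s/s from 1 s to 3 s
dt = 0.01
acc = np.zeros(400)
acc[100:301] = 6.0
print(bracketed_duration(acc, dt, 5.0))
```

The uniform duration instead sums all time steps above the threshold, and the significant duration is based on the build-up of Arias intensity (here between 5% and 95%).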
Get Arias Intensity, CAV, CAV5 and rms acceleration
print "Arias Intensity = %12.4f cm-s" % ims.get_arias_intensity(x_record, x_time_step)
print "Arias Intensity (5 - 95) = %12.4f cm-s" % ims.get_arias_intensity(x_record, x_time_step, 0.05, 0.95)
print "CAV = %12.4f cm-s" % ims.get_cav(x_record, x_time_step)
print "CAV5 = %12.4f cm-s" % ims.get_cav(x_record, x_time_step...
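The Arias intensity is defined as $I_a = \frac{\pi}{2g}\int a(t)^2\,dt$. A plain-NumPy sketch of that integral (simple rectangle rule; the smtk implementation may integrate differently):

```python
import numpy as np

def arias_intensity(acc, dt, g=981.0):
    """Arias intensity I_a = pi/(2g) * integral(acc^2 dt).

    acc and g in cm/s/s; simple rectangle-rule integration."""
    return np.pi / (2.0 * g) * np.sum(acc ** 2) * dt

# Constant 100 cm/s/s record sampled at 0.01 s for 10 s
acc = np.full(1000, 100.0)
ia = arias_intensity(acc, 0.01)
print(ia)
```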
Spectrum Intensities: Housner Intensity, Acceleration Spectrum Intensity
# Get response spectrum
sax = ims.get_response_spectrum(x_record, x_time_step, periods)[0]
print "Velocity Spectrum Intensity (cm/s/s) = %12.5f" % ims.get_response_spectrum_intensity(sax)
print "Acceleration Spectrum Intensity (cm-s) = %12.5f" % ims.get_acceleration_spectrum_intensity(sax)
Get the response spectrum pair from two records
sax, say = ims.get_response_spectrum_pair(x_record, x_time_step, y_record, y_time_step, periods, damping=0.05, units="cm/s/s", ...
Get Geometric Mean Spectrum
sa_gm = ims.geometric_mean_spectrum(sax, say)
rsp.plot_response_spectra(sa_gm, "semilogx", filename="images/geometric_mean_spectrum.pdf", filetype="pdf")
Get Envelope Spectrum
sa_env = ims.envelope_spectrum(sax, say)
rsp.plot_response_spectra(sa_env, "semilogx", filename="images/envelope_spectrum.pdf", filetype="pdf")
Rotationally Dependent and Independent IMs GMRotD50 and GMRotI50
gmrotd50 = ims.gmrotdpp(x_record, x_time_step, y_record, y_time_step, periods,
                        percentile=50.0, damping=0.05, units="cm/s/s")
gmroti50 = ims.gmrotipp(x_record, x_time_step, y_record, y_time_step, periods,
                        percentile=50.0, damp...
Fourier Spectra, Smoothing and HVSR Show the Fourier Spectrum
ims.plot_fourier_spectrum(x_record, x_time_step, filename="images/fourier_spectrum.pdf", filetype="pdf")
Smooth the Fourier Spectrum Using the Konno & Ohmachi (1998) Method
from smtk.smoothing.konno_ohmachi import KonnoOhmachi

# Get the original Fourier spectrum
freq, amplitude = ims.get_fourier_spectrum(x_record, x_time_step)

# Configure Smoothing Parameters
smoothing_config = {"bandwidth": 40,  # Size of smoothing window (lower = more smoothing)
                    "count": 1,  # Number ...
Get the HVSR Load in the Time Series
# Load in a three component data set
record_file = "data/record_3component.csv"
record_3comp = np.genfromtxt(record_file, delimiter=",")
time_vector = record_3comp[:, 0]
x_record = record_3comp[:, 1]
y_record = record_3comp[:, 2]
v_record = record_3comp[:, 3]
time_step = 0.002

# Plot the records
fig = plt.figure(figs...
Look at the Fourier Spectra
x_freq, x_four = ims.get_fourier_spectrum(x_record, time_step)
y_freq, y_four = ims.get_fourier_spectrum(y_record, time_step)
v_freq, v_four = ims.get_fourier_spectrum(v_record, time_step)
plt.figure(figsize=(7, 5))
plt.loglog(x_freq, x_four, "k-", lw=1.0, label="EW")
plt.loglog(y_freq, y_four, "b-", lw=1.0, label="NS"...
Calculate the Horizontal To Vertical Spectral Ratio
# Setup parameters
params = {"Function": "KonnoOhmachi",
          "bandwidth": 40.0,
          "count": 1.0,
          "normalize": True}
# Returns
# 1. Horizontal to Vertical Spectral Ratio
# 2. Frequency
# 3. Maximum H/V
# 4. Period of Maximum H/V
hvsr, freq, max_hv, t_0 = ims.get_hvsr(x_record, time_step...
Selecting cell bags A table is also "bag of cells", which just so happens to be a set of all the cells in the table. A "bag of cells" is like a Python set (and looks like one when you print it), but it has extra selection functions that help you navigate around the table. We will learn these as we go along, but you c...
# Preview the table as a table inline
savepreviewhtml(tab)
bb = tab.is_bold()
print("The cells with bold font are", bb)
print("The", len(bb), "cells immediately below these bold font cells are", bb.shift(DOWN))
cc = tab.filter("Cars")
print("The single cell with the text 'Cars' is", cc)
cc.assert_one()  # proves t...
databaker/tutorial/Finding_your_way.ipynb
scraperwiki/databaker
agpl-3.0
Note: As you work through this tutorial, do please feel free to temporarily insert new Jupyter-Cells in order to give yourself a place to experiment with any of the functions that are available. (Remember, the value of the last line in a Jupyter-Cell is always printed out -- in addition to any earlier print-statements...
"All the cells that have an 'o' in them:", tab.regex(".*?o")
Observations and dimensions Let's get on with some actual work. In our terminology, an "Observation" is a numerical measure (eg anything in the 3x4 array of numbers in the example table), and a "Dimension" is one of the headings. Both are made up of a bag of cells, however a Dimension also needs to know how to "look u...
# We get the array of observations by selecting its corner and expanding down and to the right
obs = tab.excel_ref('B4').expand(DOWN).expand(RIGHT)
savepreviewhtml(obs)
# the two main headings are in a row and a column
r1 = tab.excel_ref('B3').expand(RIGHT)
r2 = tab.excel_ref('A3').fill(DOWN)
# here we pass in a list...
Note the value of h1.cellvalobs(ob) is actually a pair composed of the heading cell and its value. This is because we can override its output value without actually rewriting the original table, as we shall see.
# You can change an output value like this:
h1.AddCellValueOverride("Cars", "Horses")
for ob in obs:
    print("Obs", ob, "maps to", h1.cellvalobs(ob))
# Alternatively, you can override by the reference to a single cell to a value
# (This will work even if the cell C3 is empty, which helps with filling in blank head...
Conversion segments and output A ConversionSegment is a collection of Dimensions with an Observation set that is going to be processed and output as a table all at once. You can preview them in HTML (just like the cell bags and dimensions), only this time the observation cells can be clicked on interactively to show ho...
dimensions = [
    HDim(tab.excel_ref('B1'), TIME, CLOSEST, ABOVE),
    HDim(r1, "Vehicles", DIRECTLY, ABOVE),
    HDim(r2, "Name", DIRECTLY, LEFT),
    HDimConst("Category", "Beatles")
]
c1 = ConversionSegment(obs, dimensions, processTIMEUNIT=False)
savepreviewhtml(c1)
# If the table is too big, we can preview...
WDA Technical CSV The ONS uses its own data system, known as WDA, for publishing time-series data. If you need to output to it, then this next section is for you. The function which outputs to the WDA format is writetechnicalCSV(filename, [conversionsegments]) The format is very verbose because it repeats ea...
print(writetechnicalCSV(None, c1))
# This is how to write to a file
writetechnicalCSV("exampleWDA.csv", c1)
# We can read this file back in to a list of pandas dataframes
dfs = readtechnicalCSV("exampleWDA.csv")
print(dfs[0])
Note If you were wondering what the processTIMEUNIT=False was all about in the ConversionSegment constructor, it's a feature to help the WDA output automatically set the TIMEUNIT column according to whether it should be Year, Month, or Quarter. You will note that the TIME column above is 2014.0 when it really should be...
# See that the `2014` no longer ends with `.0`
c1 = ConversionSegment(obs, dimensions, processTIMEUNIT=True)
c1.topandas()
Additive model The first example of conservative estimation considers an additive model $\eta : \mathbb R^d \rightarrow \mathbb R$ with Gaussian margins. The objective is to estimate a quantity of interest $\mathcal C(Y)$ of the model output distribution. Unfortunately, the dependence structure is unknown. In order to...
from depimpact.tests import func_sum help(func_sum)
notebooks/grid-search.ipynb
NazBen/impact-of-dependence
mit
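The point of the additive model can be sketched in plain NumPy (here `func_sum` is assumed to simply sum its inputs, as its name suggests): with standard Gaussian margins and correlation $\rho$ between two inputs, the output variance is $2(1+\rho)$, so any output quantile depends on the (unknown) dependence structure.

```python
import numpy as np

# Plain-numpy sketch: eta(x) = sum of components. With standard Gaussian
# margins and correlation rho between the two inputs, Var(Y) = 2 * (1 + rho),
# so the output distribution -- and any quantile of it -- depends on rho.
def func_sum(x):
    return x.sum(axis=1)

rng = np.random.RandomState(0)
for rho in (-0.9, 0.0, 0.9):
    cov = np.array([[1.0, rho], [rho, 1.0]])
    y = func_sum(rng.multivariate_normal([0.0, 0.0], cov, size=100000))
    print(rho, round(y.var(), 2))  # close to 2 * (1 + rho)
```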
Dimension 2 We consider the problem in dimension $d=2$, with $p=1$ pair of variables and Gaussian margins.
dim = 2
margins = [ot.Normal()] * dim
Copula families We consider a Gaussian copula for this first example.
families = np.zeros((dim, dim), dtype=int)
families[1, 0] = 1
Estimations We create an instance of the main class for a conservative estimate.
from depimpact import ConservativeEstimate

quant_estimate = ConservativeEstimate(model_func=func_sum, margins=margins, families=families)
First, we compute the quantile at independence
n = 1000
indep_result = quant_estimate.independence(n_input_sample=n, random_state=random_state)
We aim to minimize the output quantile. To do that, we create a q_func object from the function quantile_func, which associates a probability $\alpha$ with a function that computes the empirical quantile of a given sample.
from depimpact import quantile_func

alpha = 0.05
q_func = quantile_func(alpha)
indep_result.q_func = q_func
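The idea behind quantile_func can be sketched with plain NumPy (a stand-in for illustration only; the real depimpact signature may differ):

```python
import numpy as np

# Stand-in for depimpact.quantile_func: given alpha, return a function
# computing the empirical alpha-quantile of a sample.
def quantile_func(alpha):
    def q_func(sample, axis=None):
        return np.percentile(sample, alpha * 100.0, axis=axis)
    return q_func

q05 = quantile_func(0.05)
sample = np.arange(1.0, 101.0)  # 1, 2, ..., 100
print(q05(sample))
```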
The computation returns a DependenceResult instance. This object gathers the information from the computation. It also computes the output quantity of interest (which can also be changed).
sns.jointplot(indep_result.input_sample[:, 0], indep_result.input_sample[:, 1]);
h = sns.distplot(indep_result.output_sample_id, axlabel='Model output', label="Output Distribution")
plt.plot([indep_result.quantity]*2, h.get_ylim(), label='Quantile at %d%%' % (alpha*100))
plt.legend(loc=0)
print('Output quantile :', in...
A bootstrap can be done on the output quantity.
indep_result.compute_bootstrap(n_bootstrap=5000)
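What compute_bootstrap does can be sketched generically: resample the output sample with replacement, recompute the quantile each time, and take percentiles of the bootstrap distribution for a confidence interval (a minimal illustration, not the depimpact implementation):

```python
import numpy as np

rng = np.random.RandomState(0)
sample = rng.normal(size=1000)  # stand-in for the model output sample

n_bootstrap = 5000
boot = np.empty(n_bootstrap)
for b in range(n_bootstrap):
    resample = rng.choice(sample, size=sample.size, replace=True)
    boot[b] = np.percentile(resample, 5.0)  # 5% quantile, as in the example

ci = np.percentile(boot, [2.5, 97.5])  # 95% bootstrap confidence interval
print(ci)
```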
And we can plot it
sns.distplot(indep_result.bootstrap_sample, axlabel='Output quantile');
ci = [0.025, 0.975]
quantity_ci = indep_result.compute_quantity_bootstrap_ci(ci)
h = sns.distplot(indep_result.output_sample_id, axlabel='Model output', label="Output Distribution")
plt.plot([indep_result.quantity]*2, h.get_ylim(), 'g-', label='Q...
Grid Search Approach Firstly, we consider a grid search approach in order to compare its performance with the iterative algorithm. The discretization can be done on the parameter space or on another concordance measure such as Kendall's tau. The example below shows a grid search on the parameter space.
K = 20
n = 10000
grid_type = 'lhs'
dep_measure = 'parameter'
grid_result = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type,
                                        dep_measure=dep_measure, random_state=random_state)
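The 'lhs' grid type suggests a Latin hypercube discretization of the dependence parameter. The exact scheme inside gridsearch is an assumption here, but the idea can be sketched with scipy's QMC module: draw K stratified points in [0, 1] and map them onto the copula parameter range.

```python
import numpy as np
from scipy.stats import qmc

# K Latin hypercube points mapped onto the Gaussian-copula correlation
# range (-1, 1); the bounds (-0.99, 0.99) are illustrative choices.
K = 20
sampler = qmc.LatinHypercube(d=1, seed=0)
unit_grid = sampler.random(n=K)                       # K points in [0, 1)
params = qmc.scale(unit_grid, [-0.99], [0.99]).ravel()
print(params.min(), params.max())
```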
The computation returns a ListDependenceResult which is a list of DependenceResult instances and some bonuses.
print('The computation did %d model evaluations.' % (grid_result.n_evals))
Let's set the quantity function and search for the minimum among the grid results.
grid_result.q_func = q_func
min_result = grid_result.min_result
print('Minimum quantile: {} at param: {}'.format(min_result.quantity, min_result.dep_param))
We can plot the grid results. The figure below shows the output quantiles as a function of the dependence parameter.
plt.plot(grid_result.dep_params, grid_result.quantities, '.', label='Quantiles')
plt.plot(min_result.dep_param[0], min_result.quantity, 'ro', label='minimum')
plt.xlabel('Dependence parameter')
plt.ylabel('Quantile value')
plt.legend(loc=0);
As for the individual problem, we can also bootstrap each parameter. Because we have $K$ parameters, we can bootstrap the $K$ samples, compute the $K$ quantiles for each bootstrap replication, and take the minimum quantile of each replication.
grid_result.compute_bootstraps(n_bootstrap=500)
boot_min_quantiles = grid_result.bootstrap_samples.min(axis=0)
boot_argmin_quantiles = grid_result.bootstrap_samples.argmin(axis=0).ravel().tolist()
boot_min_params = [grid_result.dep_params[idx][0] for idx in boot_argmin_quantiles]
fig, axes = plt.subplots(1, 2, figsize...
For the parameter with the most occurrences of the minimum, we compute its bootstrap mean.
# The parameter with the most occurrences
boot_id_min = max(set(boot_argmin_quantiles), key=boot_argmin_quantiles.count)
boot_min_result = grid_result[boot_id_min]
boot_mean = boot_min_result.bootstrap_sample.mean()
boot_std = boot_min_result.bootstrap_sample.std()
print('Worst Quantile: {} at {} with a C.O.V of {} %'.form...
Kendall's Tau An interesting feature is to convert the dependence parameters to Kendall's Tau values.
plt.plot(grid_result.kendalls, grid_result.quantities, '.', label='Quantiles')
plt.plot(min_result.kendall_tau, min_result.quantity, 'ro', label='Minimum quantile')
plt.xlabel("Kendall's tau")
plt.ylabel('Quantile')
plt.legend(loc=0);
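For a Gaussian copula this conversion has a well-known closed form, $\tau = \frac{2}{\pi}\arcsin(\rho)$. A small self-contained sketch (independent of depimpact):

```python
import numpy as np

def gaussian_rho_to_tau(rho):
    """Kendall's tau of a Gaussian copula with correlation rho."""
    return 2.0 / np.pi * np.arcsin(rho)

def gaussian_tau_to_rho(tau):
    """Inverse conversion: rho = sin(pi * tau / 2)."""
    return np.sin(np.pi * tau / 2.0)

print(gaussian_rho_to_tau(0.5))       # approximately 1/3
print(gaussian_tau_to_rho(1.0 / 3.0)) # approximately 0.5
```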
With bounds on the dependencies An interesting option in the ConservativeEstimate class is to bound the dependencies, based on prior information.
bounds_tau = np.asarray([[0., 0.7], [0.1, 0.]])
quant_estimate.bounds_tau = bounds_tau
K = 20
n = 10000
grid_type = 'lhs'
grid_result = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type, random_state=random_state)
grid_result.q_func = q_func
min_result = grid_result.min_result
print('Minim...
Saving the results It is useful to save the result in a file, to load it later and compute other quantities or anything else you need!
filename = './result.hdf'
grid_result.to_hdf(filename)

from depimpact import ListDependenceResult

load_grid_result = ListDependenceResult.from_hdf(filename, q_func=q_func, with_input_sample=False)
np.testing.assert_array_equal(grid_result.output_samples, load_grid_result.output_samples)

import os
os.remove(filename...
Taking the extreme values of the dependence parameter If the output quantity of interest appears to vary monotonically with the dependence parameter, it is better to take the bounds of the dependence problem directly. Obviously, the minimum should then be at the edges of the design space.
K = None
n = 1000
grid_type = 'vertices'
grid_result = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type, random_state=random_state)
grid_result.q_func = q_func
print("Kendall's Tau : {}, Quantile: {}".format(grid_result.kendalls.ravel(), grid_result.quantities))
from depimpact.plots impo...
Higher Dimension We consider the problem in dimension $d=5$.
dim = 5
quant_estimate.margins = [ot.Normal()] * dim
Copula families with one dependent pair We consider a Gaussian copula for this first example, but for the moment only one pair is dependent.
families = np.zeros((dim, dim), dtype=int)
families[2, 0] = 1
quant_estimate.families = families
families
quant_estimate.bounds_tau = None
quant_estimate.bounds_tau
We reset the families and bounds for the current instance. (I don't want to create a new instance, just to check if the setters are good).
quant_estimate.vine_structure
Let's do the grid search to see
K = 20
n = 10000
grid_type = 'vertices'
grid_result = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type, random_state=random_state)
The quantile is lower compared to the lower-dimensional problem. Indeed, there are more variables and more uncertainty, hence a larger deviation of the output.
grid_result.q_func = q_func
min_result = grid_result.min_result
print('Worst Quantile: {} at {}'.format(min_result.quantity, min_result.dep_param))
matrix_plot_input(min_result)
plt.plot(grid_result.dep_params, grid_result.quantities, '.', label='Quantiles')
plt.plot(min_result.dep_param[0], min_result.quantity, 'ro',...
Copula families with all dependent pairs We again consider a Gaussian copula, but now all pairs are dependent.
families = np.zeros((dim, dim), dtype=int)
for i in range(1, dim):
    for j in range(i):
        families[i, j] = 1
quant_estimate.margins = margins
quant_estimate.families = families
quant_estimate.vine_structure = None
quant_estimate.bounds_tau = None
quant_estimate.bounds_tau
K = 100
n = 1000
grid_type = 'lhs'
gr...
With one fixed pair
families[3, 2] = 0
quant_estimate = ConservativeEstimate(model_func=func_sum, margins=margins, families=families)
K = 100
n = 10000
grid_type = 'lhs'
grid_result = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type,
                                        q_func=q_func, random_sta...
Save the used grid and load it again
K = 100
n = 1000
grid_type = 'lhs'
grid_result_1 = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type,
                                          save_grid=True, grid_path='./output')
grid_result_2 = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type,
                                          q_fu...
Then gather the results from the same grid with the same configurations
grid_result_1.n_input_sample, grid_result_2.n_input_sample
grid_result = grid_result_1 + grid_result_2
Because the configurations are the same, we can gather the results from two different runs
grid_result.n_input_sample
Source localization with a custom inverse solver The objective of this example is to show how to plug a custom inverse solver into MNE in order to facilitate empirical comparison with the methods MNE already implements (wMNE, dSPM, sLORETA, eLORETA, LCMV, DICS, (TF-)MxNE etc.). This script is educational and shall be used ...
import numpy as np
from scipy import linalg
import mne
from mne.datasets import sample
from mne.viz import plot_sparse_source_estimates

data_path = sample.data_path()
meg_path = data_path / 'MEG' / 'sample'
fwd_fname = meg_path / 'sample_audvis-meg-eeg-oct-6-fwd.fif'
ave_fname = meg_path / 'sample_audvis-ave.fif'
cov...
stable/_downloads/bcaf3ed1f43ea7377c6c0b00137d728f/custom_inverse_solver.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Auxiliary function to run the solver
def apply_solver(solver, evoked, forward, noise_cov, loose=0.2, depth=0.8):
    """Call a custom solver on evoked data.

    This function does all the necessary computation:

    - to select the channels in the forward given the available ones in the data
    - to take into account the noise covariance and do th...
Define your solver
def solver(M, G, n_orient):
    """Run L2 penalized regression and keep 10 strongest locations.

    Parameters
    ----------
    M : array, shape (n_channels, n_times)
        The whitened data.
    G : array, shape (n_channels, n_dipoles)
        The gain matrix a.k.a. the forward operator. The number of locations ...
Apply your custom solver
# loose, depth = 0.2, 0.8  # corresponds to loose orientation
loose, depth = 1., 0.  # corresponds to free orientation
stc = apply_solver(solver, evoked, forward, noise_cov, loose, depth)
View in 2D and 3D ("glass" brain like 3D plot)
plot_sparse_source_estimates(forward['src'], stc, bgcolor=(1, 1, 1), opacity=0.1)
Latent Variables
dag_with_latent_variables = CausalGraphicalModel(
    nodes=["x", "y", "z"],
    edges=[("x", "z"), ("z", "y")],
    latent_edges=[("x", "y")]
)
dag_with_latent_variables.draw()
# here there are no observed backdoor adjustment sets
dag_with_latent_variables.get_all_backdoor_adjus...
notebooks/cgm-examples.ipynb
ijmbarr/causalgraphicalmodels
mit
StructuralCausalModels For Structural Causal Models (SCM) we need to specify the functional form of each node:
from causalgraphicalmodels import StructuralCausalModel
import numpy as np

scm = StructuralCausalModel({
    "x1": lambda n_samples: np.random.binomial(n=1, p=0.7, size=n_samples),
    "x2": lambda x1, n_samples: np.random.normal(loc=x1, scale=0.1),
    "x3": lambda x2, n_samples: x2 ** 2,
})
The only requirements on the functions are: - variable names are consistent - each function accepts keyword arguments in the form of numpy arrays and outputs numpy arrays of shape [n_samples] - in addition to its parents, each function takes an n_samples variable indicating how many samples to generate ...
ds = scm.sample(n_samples=100)
ds.head()

# and visualise the samples
import seaborn as sns
%matplotlib inline
sns.kdeplot(data=ds.x2, data2=ds.x3)
And to access the implied CGM:
scm.cgm.draw()
And to apply an intervention:
scm_do = scm.do("x1") scm_do.cgm.draw()
And sample from the distribution implied by this intervention:
scm_do.sample(n_samples=5, set_values={"x1": np.arange(5)})
Case Study Data There are a number of different sites that you can utilize to access past model output analyses and even forecasts. The most robust collection is housed at the National Center for Environmental Information (NCEI, formerly NCDC) on a THREDDS server. The general website to begin your search is https://www...
# Case Study Date
year = 1993
month = 3
day = 13
hour = 0
dt = datetime(year, month, day, hour)

# Read NARR Data from THREDDS server
base_url = 'https://www.ncei.noaa.gov/thredds/catalog/narr-a-files/'

# Programmatically generate the URL to the day of data we want
cat = TDSCatalog(f'{base_url}{dt:%Y%m}/{dt:%Y%m%d}/c...
notebooks/MetPy_Case_Study/MetPy_Case_Study.ipynb
Unidata/unidata-python-workshop
mit
Let's see what dimensions are in the file:
data.dimensions
Pulling Data for Calculation/Plotting The object that we get from Siphon is netCDF-like, so we can pull data using familiar calls for all of the variables that are desired for calculations and plotting purposes. NOTE: Due to the curvilinear nature of the NARR grid, there is a need to smooth the data that we import for ...
# Extract data and assign units
tmpk = gaussian_filter(data.variables['Temperature_isobaric'][0], sigma=1.0) * units.K
hght = 0
uwnd = 0
vwnd = 0

# Extract coordinate data for plotting
lat = data.variables['lat'][:]
lon = data.variables['lon'][:]
lev = 0
<button data-toggle="collapse" data-target="#sol1" class='btn btn-primary'>View Solution</button> <div id="sol1" class="collapse"> <code><pre> # Extract data and assign units tmpk = gaussian_filter(data.variables['Temperature_isobaric'][0], sigma=1.0) * units.K hght = gaussian_filter(data.variabl...
time = data.variables['time1']
print(time.units)
vtime = num2date(time[0], units=time.units)
print(vtime)
Finally, we need to calculate the spacing of the grid in distance units instead of degrees using the MetPy helper function lat_lon_grid_deltas.
# Calculate dx and dy for calculations
dx, dy = mpcalc.lat_lon_grid_deltas(lon, lat)
Finding Pressure Level Data A robust way to parse the data for a certain pressure level is to find the index value using the np.where function. Since the NARR pressure data ('levels') is in hPa, then we'll want to search that array for our pressure levels 850, 500, and 300 hPa. <div class="alert alert-success"> <b>...
# Specify 850 hPa data
ilev850 = np.where(lev == 850)[0][0]
hght_850 = hght[ilev850]
tmpk_850 = 0
uwnd_850 = 0
vwnd_850 = 0

# Specify 500 hPa data
ilev500 = 0
hght_500 = 0
uwnd_500 = 0
vwnd_500 = 0

# Specify 300 hPa data
ilev300 = 0
hght_300 = 0
uwnd_300 = 0
vwnd_300 = 0
<button data-toggle="collapse" data-target="#sol2" class='btn btn-primary'>View Solution</button> <div id="sol2" class="collapse"> <code><pre> # Specify 850 hPa data ilev850 = np.where(lev == 850)[0][0] hght_850 = hght[ilev850] tmpk_850 = tmpk[ilev850] uwnd_850 = uwnd[ilev850] vwnd_850 = vwnd[ilev850] \# Specify 500 h...
# Temperature Advection # tmpc_adv_850 = mpcalc.advection(--Fill in this call--).to('degC/s')
<button data-toggle="collapse" data-target="#sol3" class='btn btn-primary'>View Solution</button> <div id="sol3" class="collapse"> <code><pre> # Temperature Advection tmpc_adv_850 = mpcalc.advection(tmpk_850, [uwnd_850, vwnd_850], (dx, dy), dim_order='yx').to('degC/s') </pre></code> </d...
# Vorticity and Absolute Vorticity Calculations

# Planetary Vorticity
# f = mpcalc.coriolis_parameter(-- Fill in here --).to('1/s')

# Relative Vorticity
# vor_500 = mpcalc.vorticity(-- Fill in here --)

# Absolute Vorticity
# avor_500 = vor_500 + f
<button data-toggle="collapse" data-target="#sol4" class='btn btn-primary'>View Solution</button> <div id="sol4" class="collapse"> <code><pre> # Vorticity and Absolute Vorticity Calculations \# Planetary Vorticity f = mpcalc.coriolis_parameter(np.deg2rad(lat)).to('1/s') \# Relative Vorticity vor_500 = mpcalc.vorticit...
# Vorticity Advection
f_adv = mpcalc.advection(f, [uwnd_500, vwnd_500], (dx, dy), dim_order='yx')
relvort_adv = mpcalc.advection(vor_500, [uwnd_500, vwnd_500], (dx, dy), dim_order='yx')
absvort_adv = mpcalc.advection(avor_500, [uwnd_500, vwnd_500], (dx, dy), dim_order='yx')
Divergence and Stretching Vorticity If we want to analyze another component of the vorticity tendency equation other than advection, we might want to assess the stretching vorticity term: -(Abs. Vort.)*(Divergence). We already have absolute vorticity calculated, so now we need to calculate the divergence of the level, w...
# Stretching Vorticity
div_500 = mpcalc.divergence(uwnd_500, vwnd_500, dx, dy, dim_order='yx')
stretch_vort = -1 * avor_500 * div_500
Wind Speed, Geostrophic and Ageostrophic Wind Wind Speed Calculating wind speed is not a difficult calculation, but MetPy offers a function to calculate it easily keeping units so that it is easy to convert units for plotting purposes. wind_speed(uwnd, vwnd) Geostrophic Wind The geostrophic wind can be computed from a ...
# Divergence 300 hPa, Ageostrophic Wind
wspd_300 = mpcalc.wind_speed(uwnd_300, vwnd_300).to('kts')
div_300 = mpcalc.divergence(uwnd_300, vwnd_300, dx, dy, dim_order='yx')
ugeo_300, vgeo_300 = mpcalc.geostrophic_wind(hght_300, f, dx, dy, dim_order='yx')
uageo_300 = uwnd_300 - ugeo_300
vageo_300 = vwnd_300 - vgeo_300
Maps and Projections
# Data projection; NARR Data is Earth Relative
dataproj = ccrs.PlateCarree()

# Plot projection
# The look you want for the view, LambertConformal for mid-latitude view
plotproj = ccrs.LambertConformal(central_longitude=-100., central_latitude=40.,
                                 standard_parallels=[30, 60])

def crea...
notebooks/MetPy_Case_Study/MetPy_Case_Study.ipynb
Unidata/unidata-python-workshop
mit
850-hPa Temperature Advection Add one contour (Temperature in Celsius, with a dotted linestyle) Add one colorfill (Temperature Advection in C/hr) <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>Add one contour (Temperature in Celsius, with a dotted linestyle)</li> <li>Add one filled conto...
fig, ax = create_map_background() # Contour 1 - Temperature, dotted # Your code here! # Contour 2 clev850 = np.arange(0, 4000, 30) cs = ax.contour(lon, lat, hght_850, clev850, colors='k', linewidths=1.0, linestyles='solid', transform=dataproj) plt.clabel(cs, fontsize=10, inline=1, inline_spacing=10, ...
notebooks/MetPy_Case_Study/MetPy_Case_Study.ipynb
Unidata/unidata-python-workshop
mit
<button data-toggle="collapse" data-target="#sol5" class='btn btn-primary'>View Solution</button> <div id="sol5" class="collapse"> <code><pre> fig, ax = create_map_background() \# Contour 1 - Temperature, dotted cs2 = ax.contour(lon, lat, tmpk_850.to('degC'), range(-50, 50, 2), colors='grey', linestyl...
fig, ax = create_map_background() # Contour 1 clev500 = np.arange(0, 7000, 60) cs = ax.contour(lon, lat, hght_500, clev500, colors='k', linewidths=1.0, linestyles='solid', transform=dataproj) plt.clabel(cs, fontsize=10, inline=1, inline_spacing=4, fmt='%i', rightside_up=True, use_clabeltext=...
notebooks/MetPy_Case_Study/MetPy_Case_Study.ipynb
Unidata/unidata-python-workshop
mit
<button data-toggle="collapse" data-target="#sol6" class='btn btn-primary'>View Solution</button> <div id="sol6" class="collapse"> <code><pre> fig, ax = create_map_background() \# Contour 1 clev500 = np.arange(0, 7000, 60) cs = ax.contour(lon, lat, hght_500, clev500, colors='k', linewidths=1.0, linesty...
fig, ax = create_map_background() # Contour 1 clev300 = np.arange(0, 11000, 120) cs2 = ax.contour(lon, lat, div_300 * 10**5, range(-10, 11, 2), colors='grey', transform=dataproj) plt.clabel(cs2, fontsize=10, inline=1, inline_spacing=4, fmt='%i', rightside_up=True, use_clabeltext=True) # Co...
notebooks/MetPy_Case_Study/MetPy_Case_Study.ipynb
Unidata/unidata-python-workshop
mit
<button data-toggle="collapse" data-target="#sol7" class='btn btn-primary'>View Solution</button> <div id="sol7" class="collapse"> <code><pre> fig, ax = create_map_background() \# Contour 1 clev300 = np.arange(0, 11000, 120) cs2 = ax.contour(lon, lat, div_300 * 10**5, range(-10, 11, 2), colors='grey',...
fig=plt.figure(1,figsize=(21.,16.)) # Upper-Left Panel ax=plt.subplot(221,projection=plotproj) ax.set_extent([-125.,-73,25.,50.],ccrs.PlateCarree()) ax.coastlines('50m', linewidth=0.75) ax.add_feature(cfeature.STATES,linewidth=0.5) # Contour #1 clev500 = np.arange(0,7000,60) cs = ax.contour(lon,lat,hght_500,clev500,c...
notebooks/MetPy_Case_Study/MetPy_Case_Study.ipynb
Unidata/unidata-python-workshop
mit
Plotting Data for Hand Calculation Calculating dynamic quantities with a computer is great and can allow for many different educational opportunities, but there are times when we want students to calculate those quantities by hand. So can we plot values of geopotential height, u-component of the wind, and v-component o...
# Set lat/lon bounds for region to plot data LLlon = -104 LLlat = 33 URlon = -94 URlat = 38.1 # Set up mask so that you only plot what you want skip_points = (slice(None, None, 3), slice(None, None, 3)) mask_lon = ((lon[skip_points].ravel() > LLlon + 0.05) & (lon[skip_points].ravel() < URlon + 0.01)) mask_lat = ((lat[...
notebooks/MetPy_Case_Study/MetPy_Case_Study.ipynb
Unidata/unidata-python-workshop
mit
<div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>Plot markers and data around the markers.</li> </ul> </div>
# Set up plot basics and use StationPlot class from MetPy to help with plotting fig = plt.figure(figsize=(14, 8)) ax = plt.subplot(111,projection=ccrs.LambertConformal(central_latitude=50,central_longitude=-107)) ax.set_extent([LLlon,URlon,LLlat,URlat],ccrs.PlateCarree()) ax.coastlines('50m', edgecolor='grey', linewidt...
notebooks/MetPy_Case_Study/MetPy_Case_Study.ipynb
Unidata/unidata-python-workshop
mit
Download and manage data Download the following series from FRED:

FRED series ID | Name | Frequency |
---------------|------|-----------|
GDP | Gross Domestic Product | Q |
PCEC | Personal Consumption Expenditures | Q |
GPDI | Gross Private Domestic Investment | Q |
GCE | Government Consumption Expenditures and Gross I...
# Download data gdp = fp.series('GDP') consumption = fp.series('PCEC') investment = fp.series('GPDI') government = fp.series('GCE') exports = fp.series('EXPGS') imports = fp.series('IMPGS') net_exports = fp.series('NETEXP') hours = fp.series('HOANBS') deflator = fp.series('GDPDEF') pce_deflator = fp.series('PCECTPI') c...
business-cycle-data/python/.ipynb_checkpoints/business_cycle_data-checkpoint.ipynb
letsgoexploring/economicData
mit
Compute capital stock for US using the perpetual inventory method Next, compute the quarterly capital stock series for the US using the perpetual inventory method. The discrete-time Solow growth model is given by: \begin{align} Y_t & = A_tK_t^{\alpha}L_t^{1-\alpha} \tag{1}\ C_t & = (1-s)Y_t \tag{2}\ Y_t & = C_t + I_t \...
# Set the capital share of income alpha = 0.35 # Average saving rate s = np.mean(investment.data/gdp.data) # Average quarterly labor hours growth rate n = (hours.data[-1]/hours.data[0])**(1/(len(hours.data)-1)) - 1 # Average quarterly real GDP growth rate g = ((gdp.data[-1]/gdp.data[0])**(1/(len(gdp.data)-1)) - 1) -...
business-cycle-data/python/.ipynb_checkpoints/business_cycle_data-checkpoint.ipynb
letsgoexploring/economicData
mit
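The perpetual inventory recursion behind the capital-stock calculation above can be sketched directly. The depreciation rate `delta` and initial stock `k0` below are illustrative assumptions, not the notebook's calibrated values:

```python
import numpy as np

def perpetual_inventory(investment, delta, k0):
    """Iterate K_{t+1} = I_t + (1 - delta) * K_t from an initial stock k0.

    `delta` (quarterly depreciation rate) and `k0` are placeholder
    assumptions; the notebook's actual calibration may differ.
    """
    capital = np.empty(len(investment))
    capital[0] = k0
    for t in range(len(investment) - 1):
        capital[t + 1] = investment[t] + (1 - delta) * capital[t]
    return capital

# Toy check: with constant investment I, capital converges toward I / delta
k = perpetual_inventory(np.full(400, 10.0), delta=0.025, k0=100.0)
print(k[-1])  # close to 10 / 0.025 = 400
```

With actual data, `investment` would be the real investment series and `k0` would be pinned down from a steady-state condition.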
Compute total factor productivity Use the Cobb-Douglas production function: \begin{align} Y_t & = A_tK_t^{\alpha}L_t^{1-\alpha} \tag{17} \end{align} and data on GDP, capital, and hours with $\alpha=0.35$ to compute an implied series for $A_t$.
# Compute TFP tfp = gdp.data/capital.data**alpha/hours.data**(1-alpha) tfp = fp.to_fred_series(data = tfp,dates =gdp.data.index,units = gdp.units,title='TFP of the US',frequency='Quarterly')
business-cycle-data/python/.ipynb_checkpoints/business_cycle_data-checkpoint.ipynb
letsgoexploring/economicData
mit
Additional data management Now that we have used the aggregate production data to compute an implied capital stock and TFP, we can scale the production data and M2 by the population.
# Convert real GDP, consumption, investment, government expenditures, net exports and M2 # into thousands of dollars per civilian 16 and over gdp = gdp.per_capita(civ_pop=True).times(1000) consumption = consumption.per_capita(civ_pop=True).times(1000) investment = investment.per_capita(civ_pop=True).times(1000) governm...
business-cycle-data/python/.ipynb_checkpoints/business_cycle_data-checkpoint.ipynb
letsgoexploring/economicData
mit
Plot aggregate data
fig, axes = plt.subplots(3,4,figsize=(6*4,4*3)) axes[0][0].plot(gdp.data) axes[0][0].set_title('GDP') axes[0][0].set_ylabel('Thousands of '+base_year+' $') axes[0][1].plot(consumption.data) axes[0][1].set_title('Consumption') axes[0][1].set_ylabel('Thousands of '+base_year+' $') axes[0][2].plot(investment.data) axes...
business-cycle-data/python/.ipynb_checkpoints/business_cycle_data-checkpoint.ipynb
letsgoexploring/economicData
mit
Compute HP filter of data
# HP filter to isolate trend and cyclical components gdp_log_cycle,gdp_log_trend= gdp.log().hp_filter() consumption_log_cycle,consumption_log_trend= consumption.log().hp_filter() investment_log_cycle,investment_log_trend= investment.log().hp_filter() government_log_cycle,government_log_trend= government.log().hp_filter...
business-cycle-data/python/.ipynb_checkpoints/business_cycle_data-checkpoint.ipynb
letsgoexploring/economicData
mit
Plot aggregate data with trends
fig, axes = plt.subplots(3,4,figsize=(6*4,4*3)) axes[0][0].plot(gdp.data) axes[0][0].plot(np.exp(gdp_log_trend.data),c='r') axes[0][0].set_title('GDP') axes[0][0].set_ylabel('Thousands of '+base_year+' $') axes[0][1].plot(consumption.data) axes[0][1].plot(np.exp(consumption_log_trend.data),c='r') axes[0][1].set_title...
business-cycle-data/python/.ipynb_checkpoints/business_cycle_data-checkpoint.ipynb
letsgoexploring/economicData
mit
Plot cyclical components of the data
fig, axes = plt.subplots(3,4,figsize=(6*4,4*3)) axes[0][0].plot(gdp_log_cycle.data) axes[0][0].set_title('GDP') axes[0][0].set_ylabel('Thousands of '+base_year+' $') axes[0][1].plot(consumption_log_cycle.data) axes[0][1].set_title('Consumption') axes[0][1].set_ylabel('Thousands of '+base_year+' $') axes[0][2].plot(i...
business-cycle-data/python/.ipynb_checkpoints/business_cycle_data-checkpoint.ipynb
letsgoexploring/economicData
mit
Create data files
# Create a DataFrame with actual and trend data data = pd.DataFrame({ 'gdp':gdp.data, 'gdp_trend':np.exp(gdp_log_trend.data), 'gdp_cycle':gdp_log_cycle.data, 'consumption':consumption.data, 'consumption_trend':np.exp(consumption_log_trend.data), 'consumption_cycle':consum...
business-cycle-data/python/.ipynb_checkpoints/business_cycle_data-checkpoint.ipynb
letsgoexploring/economicData
mit
First, we need a custom score function, as described in the task: https://www.kaggle.com/c/bike-sharing-demand/overview/evaluation Why do we need the +1 in the score function?
def rmsle(y_true, y_pred): y_pred_clipped = np.clip(y_pred, 0., None) return mean_squared_error(np.log1p(y_true), np.log1p(y_pred_clipped)) ** .5
BikeSharing-Linear.ipynb
dmittov/misc
apache-2.0
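To see why the +1 matters: a plain log of a zero count is undefined, while `np.log1p` shifts every count by one before taking the log, so zero counts stay finite:

```python
import numpy as np

# log of a zero count blows up:
with np.errstate(divide="ignore"):
    print(np.log(0.0))    # -inf
# log1p computes log(1 + x), so a zero count maps to a finite value:
print(np.log1p(0.0))      # 0.0
```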
What happens without np.clip? Let's start with the existing features and simple linear regression. The feature extractors and grid search will become clearer later.
class SimpleFeatureExtractor(BaseEstimator, TransformerMixin): def fit(self, X, y=None): return self def transform(self, X, y=None): return X[["holiday", "workingday", "season", "weather", "temp", "atemp", "humidity", "windspeed"]].values exctractor = SimpleFeatureExtractor() ...
BikeSharing-Linear.ipynb
dmittov/misc
apache-2.0
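As for what happens without `np.clip`: a linear model can predict a count below −1, and `log1p` of such a value is NaN, which poisons the whole score. Clipping predictions at zero keeps the metric defined:

```python
import numpy as np

y_pred = np.array([-2.0, 10.0])  # linear models happily predict negative counts

# Without clipping, log1p of anything below -1 is NaN:
with np.errstate(invalid="ignore"):
    print(np.log1p(y_pred))                  # first entry is nan
# Clipping at zero keeps the metric defined:
print(np.log1p(np.clip(y_pred, 0.0, None)))
```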
The hyperparameter searcher always maximizes the score function, so when we need to minimize a metric, it simply negates the score.
researcher.best_score_
BikeSharing-Linear.ipynb
dmittov/misc
apache-2.0
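A quick check of that sign flip: `make_scorer(..., greater_is_better=False)` returns the negated loss, so the searcher can still maximize. The constant predictor below is purely illustrative:

```python
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.metrics import make_scorer, mean_squared_error

def rmse(y_true, y_pred):
    return mean_squared_error(y_true, y_pred) ** 0.5

# greater_is_better=False tells make_scorer to negate the loss
scorer = make_scorer(rmse, greater_is_better=False)

X = np.zeros((4, 1))
y = np.ones(4)
est = DummyRegressor(strategy="constant", constant=3.0).fit(X, y)
print(rmse(y, est.predict(X)))  # 2.0
print(scorer(est, X, y))        # -2.0 -- note the flipped sign
```

This is why `best_score_` comes out negative when a loss-type metric is wrapped this way.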
Add regularization and grid search the hyperparameters Now it's clearer why we have the grid searcher ;-)
exctractor = SimpleFeatureExtractor() clf = Pipeline([ ("extractor", exctractor), ("regression", linear_model.ElasticNet()), ]) param_grid = { "regression__alpha": np.logspace(-3, 2, 10), "regression__l1_ratio": np.linspace(0, 1, 10) } scorerer = make_scorer(rmsle, greater_is_better=False) researcher =...
BikeSharing-Linear.ipynb
dmittov/misc
apache-2.0
Try to add some custom features
class FeatureExtractor(BaseEstimator, TransformerMixin): ohe = OneHotEncoder(categories='auto', sparse=False) scaler = StandardScaler() categorical_columns = ["week_day", "hour", "season", "weather"] numerical_columns = ["temp", "atemp", "humidity", "windspeed"] def _add_features(self...
BikeSharing-Linear.ipynb
dmittov/misc
apache-2.0
What we can theoretically get if we optimize RMSE
param_grid = { "regression__alpha": np.logspace(-3, 2, 10), "regression__l1_ratio": np.linspace(0, 1, 10) } pd.options.mode.chained_assignment = None def rmse(y_true, y_pred): return mean_squared_error(y_true, y_pred) ** .5 scorerer = make_scorer(rmse, greater_is_better=False) researcher = GridSearchCV(cl...
BikeSharing-Linear.ipynb
dmittov/misc
apache-2.0
11 min!!! Now we also fit the FeatureExtractor every time, and the pipeline becomes heavier. Why? Can you speed it up? What was the point about Maximum Likelihood The process is better described by a Poisson distribution https://en.wikipedia.org/wiki/Poisson_distribution In probability theory and statistics, the Poisson dis...
df[df["count"] == 0] np.log(0) class PoissonRegression(linear_model.ElasticNet): def __init__(self, alpha=1.0, l1_ratio=0.5, fit_intercept=True, normalize=False, precompute=False, max_iter=1000, copy_X=True, tol=1e-4, warm_start=False, positive=False, random...
BikeSharing-Linear.ipynb
dmittov/misc
apache-2.0
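The notebook's `PoissonRegression` subclasses `ElasticNet`; one simple way to get Poisson-like behavior from a linear model is a log link: fit on `log1p` of the counts and invert predictions with `expm1`. The sketch below assumes that approach, and the class name and synthetic data are illustrative, not taken from the notebook:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

class LogLinkRegression(ElasticNet):
    """Sketch of a count model: fit ElasticNet on log1p(y), predict via expm1.

    This mimics a Poisson-style log link; the notebook's PoissonRegression
    may implement the transform (or the likelihood) differently.
    """
    def fit(self, X, y, **kwargs):
        # log1p handles zero counts without producing -inf
        return super().fit(X, np.log1p(y), **kwargs)

    def predict(self, X):
        # expm1 inverts log1p, so predictions are bounded below by -1
        return np.expm1(super().predict(X))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = rng.poisson(np.exp(0.3 * X[:, 0] + 1.0))
model = LogLinkRegression(alpha=0.01).fit(X, y)
print((model.predict(X) > -1).all())  # no large negative count predictions
```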
In terms of MSE the score is worse. But that doesn't mean MSE is the most relevant metric. At least Poisson regression never predicts large negative values. When would you expect Poisson regression to have a better MSE score?
scorerer = make_scorer(mean_squared_error, greater_is_better=False) scores = cross_val_score(clf, df, df["count"].values, cv=5, n_jobs=4, scoring=scorerer) np.mean((-np.array(scores)) ** .5)
BikeSharing-Linear.ipynb
dmittov/misc
apache-2.0
Skill vs Education When you need to predict counts, try Poisson Regression. You can get good enough results from experience alone, but you can't rely on just your skills when you face a new type of task. The more complicated the task, the less your previous experience can help you. The key to success is to have good enoug...
df_test = pd.read_csv("test.csv") cols = df_test.columns all_data = pd.concat([df[cols], df_test[cols]]) exctractor = FeatureExtractor() exctractor.collect_stats(all_data) clf = Pipeline([ ("extractor", exctractor), ("regression", PoissonRegression(alpha=0.001623776739188721, l1_ratio=0.1111111111111111)), ])...
BikeSharing-Linear.ipynb
dmittov/misc
apache-2.0