Cleaned data:
raw_tog.plot_psd(fmax=30)
0.22/_downloads/f781cba191074d5f4243e5933c1e870d/plot_find_ref_artifacts.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Now try the "separate" algorithm.
raw_sep = raw.copy()

# Do ICA only on the reference channels.
ref_picks = mne.pick_types(raw_sep.info, meg=False, ref_meg=True)
ica_ref = ICA(n_components=2, allow_ref_meg=True, **ica_kwargs)
ica_ref.fit(raw_sep, picks=ref_picks)

# Do ICA on both reference and standard channels. Here, we can just reuse
# ica_tog from...
Cleaned raw data traces:
raw_sep.plot(**plot_kwargs)
Cleaned raw data PSD:
raw_sep.plot_psd(fmax=30)
The code begins by loading a local dataset of gene annotations and extracting their promoter regions (here defined as $\left[gene_{start}-2000;\, gene_{start}+2000\right]$). Note that the start and stop attributes automatically consider the strand of the region.
genes = gl.load_from_path("../data/genes/")
promoters = genes.reg_project(new_field_dict={
    'start': genes.start - 2000,
    'stop': genes.start + 2000})
examples/notebooks/02a_Mixing_Local_Remote_Processing_SIMPLE.ipynb
DEIB-GECO/PyGMQL
apache-2.0
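As a rough illustration of what this projection computes, here is a plain-pandas sketch on an invented two-gene table. PyGMQL handles the strand logic automatically; below it is emulated by hand, and all column names and coordinates are made up for the example.

```python
import pandas as pd

# Hypothetical gene table (coordinates and names invented for illustration).
genes = pd.DataFrame({
    "chrom":  ["chr1", "chr1"],
    "start":  [10000, 50000],
    "stop":   [12000, 53000],
    "strand": ["+", "-"],
})

# The biological gene start is `start` on the + strand and `stop` on the - strand.
tss = genes["start"].where(genes["strand"] == "+", genes["stop"])
promoters = genes.assign(start=tss - 2000, stop=tss + 2000)
print(promoters[["start", "stop"]].values.tolist())  # [[8000, 12000], [51000, 55000]]
```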
The genes and promoters variables are GMQLDatasets; the former is loaded directly, the latter results from a projection operation. Region field names can be accessed directly from the variables to build expressions and predicates (e.g., genes.start + 2000). Next, we load the external ChIP-seq dataset from a remote GMQL...
gl.set_remote_address("http://gmql.eu/gmql-rest/")
gl.login()
In the following snippet we show how to load the ChIP-seq data of the ENCODE dataset from the remote GMQL repository and select only the experiments of interest. First, the user sets the remote execution mode and imports remote datasets with the load_from_remote function; such loading is lazy, therefore no actual data ...
gl.set_mode("remote")
hms = gl.load_from_remote("HG19_ENCODE_BROAD_AUG_2017", owner="public")
hms_ac = hms[hms["experiment_target"] == "H3K9ac-human"]
Next, the PyGMQL map operation is used to compute the average of the signal of hms_ac intersecting each promoter; iteration over all samples is implicit. Finally, the materialize method triggers the execution of the query. Since the mode is set to "remote", the dataset stored at ./genes is sent to the remote s...
mapping = promoters.map(
    hms_ac,
    refName='prom',
    expName='hm',
    new_reg_fields={'avg_signal': gl.AVG('signal')})
mapping = mapping.materialize()
At this point, Python libraries for data manipulation, visualization or analysis can be applied to the GDataframe. The following portion of code provides an example of data manipulation of a query result. The to_matrix method transforms the GDataframe into a Pandas matrix, where each row corresponds to a gene and each ...
import seaborn as sns
import matplotlib.pyplot as plt  # needed for plt below

heatmap = mapping.to_matrix(
    columns_meta=['hm.biosample_term_name'],
    index_regs=['gene_symbol'],
    values_regs=['avg_signal'],
    fill_value=0)
plt.figure(figsize=(10, 10))
sns.heatmap(heatmap, vmax=20)
plt.show()
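The to_matrix call above is essentially a pivot: genes on the rows, cell lines on the columns, average signal as values. A minimal pandas sketch with invented gene symbols and numbers shows the shape of the result:

```python
import pandas as pd

# Hypothetical long-form map result: one row per (gene, sample) pair.
long = pd.DataFrame({
    "gene_symbol": ["TP53", "TP53", "MYC", "MYC"],
    "biosample_term_name": ["K562", "HepG2", "K562", "HepG2"],
    "avg_signal": [3.0, 1.5, 0.0, 2.0],
})

# Pivot into a gene-by-cell-line matrix, filling missing pairs with 0.
matrix = long.pivot_table(index="gene_symbol",
                          columns="biosample_term_name",
                          values="avg_signal",
                          fill_value=0)
print(matrix.shape)  # (2, 2): one row per gene, one column per cell line
```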
Controlling a single qubit

The simulation of a unitary evolution with Processor is defined by the control pulses. Each pulse is represented by a Pulse object consisting of the control Hamiltonian $H_j$, the target qubits, the pulse strength $c_j$ and the time sequence $t$. The evolution is given by \begin{equation} U(...
processor = Processor(N=1)
processor.add_control(0.5 * sigmaz(), targets=0, label="sigmaz")
processor.add_control(0.5 * sigmay(), targets=0, label="sigmay")
examples/qip-noisy-device-simulator.ipynb
ajgpitch/qutip-notebooks
lgpl-3.0
The list of defined pulses is saved in the attribute Processor.pulses. We can inspect the pulses we just defined with:
for pulse in processor.pulses:
    pulse.print_info()
We can see that the pulse strength coeff and time sequence tlist still remain undefined. To fully characterize the evolution, we need to define them both. The pulse strength and the time sequence are both given as NumPy arrays. For discrete pulses, tlist specifies the start and the end time of each pulse coefficient, and thus is o...
processor.pulses[1].coeff = np.array([1.])
processor.pulses[1].tlist = np.array([0., pi])
for pulse in processor.pulses:
    pulse.print_info()
This pulse is a $\pi$ pulse that flips the qubit from $\left|0\right\rangle$ to $\left|1\right\rangle$, equivalent to a rotation around the y-axis by angle $\pi$: $$R_y(\theta) = \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix}$$ We can run the simulation to see the result of...
basis0 = basis(2, 0)
result = processor.run_state(init_state=basis0)
result.states[-1].tidyup(1.e-5)
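As a sanity check independent of QuTiP: driving $H = \tfrac12\sigma_y$ with coefficient 1 for a time $\pi$ gives $U = \exp(-i\frac{\pi}{2}\sigma_y) = R_y(\pi)$, which indeed sends $|0\rangle$ to $|1\rangle$. A plain NumPy/SciPy sketch:

```python
import numpy as np
from scipy.linalg import expm

# The pulse unitary: exp(-1j * (pi/2) * sigma_y), i.e. R_y(pi).
sigma_y = np.array([[0, -1j], [1j, 0]])
theta = np.pi

U = expm(-1j * (theta / 2) * sigma_y)
Ry = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
               [np.sin(theta / 2),  np.cos(theta / 2)]])
assert np.allclose(U, Ry)          # the pulse really is R_y(pi)

ket1 = U @ np.array([1, 0])        # apply to |0>
print(np.round(np.abs(ket1), 6))   # all population ends up in |1>
```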
As an arbitrary single-qubit gate can be decomposed into $R_z(\theta_1) \cdot R_y(\theta_2) \cdot R_z(\theta_3)$, three pulses are enough. For demonstration purposes, we choose $\theta_1=\theta_2=\theta_3=\pi/2$.
processor.pulses[0].coeff = np.array([1., 0., 1.])
processor.pulses[1].coeff = np.array([0., 1., 0.])
processor.pulses[0].tlist = np.array([0., pi/2., 2*pi/2, 3*pi/2])
processor.pulses[1].tlist = np.array([0., pi/2., 2*pi/2, 3*pi/2])
result = processor.run_state(init_state=basis(2, 1))
result.states[-1].tidyup(1.0e-5)...
Pulse with continuous amplitude

If your pulse strength is generated elsewhere as a discretization of a continuous function, you can also tell the Processor to use it with cubic spline interpolation. In this case tlist and coeff must have the same length.
tlist = np.linspace(0., 2*np.pi, 20)
processor = Processor(N=1, spline_kind="step_func")
processor.add_control(sigmaz(), 0)
processor.pulses[0].tlist = tlist
processor.pulses[0].coeff = np.array([np.sin(t) for t in tlist])
processor.plot_pulses();

tlist = np.linspace(0., 2*np.pi, 20)
processor = Processor(N=1, spline_...
Noisy evolution

In real quantum devices, noise affects the perfect execution of gate-based quantum circuits, limiting their depth. In general, we can divide quantum noise into two types: coherent and incoherent noise. The former usually results from deviations of the control pulse; the noisy evolution is still unita...
a = destroy(2)
initial_state = basis(2, 1)
plus_state = (basis(2, 1) + basis(2, 0)).unit()
tlist = np.arange(0.00, 2.02, 0.02)
H_d = 10. * sigmaz()
Decay time $T_1$

The $T_1$ relaxation time describes the strength of amplitude damping and can be described, in a two-level system, by a collapse operator $\frac{1}{\sqrt{T_1}}a$, where $a$ is the annihilation operator. This leads to an exponential decay of the population of excited states proportional to $\exp({-t/T_1...
from qutip.qip.pulse import Pulse

t1 = 1.
processor = Processor(1, t1=t1)
# create a dummy pulse that has no Hamiltonian, but only a tlist.
processor.add_pulse(Pulse(None, None, tlist=tlist, coeff=False))
result = processor.run_state(init_state=initial_state, e_ops=[a.dag()*a])
fig, ax = plt.subplots()
ax.plot(tlist[0...
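The claimed $\exp(-t/T_1)$ decay can be checked without QuTiP by integrating the Lindblad equation for the single collapse operator $a/\sqrt{T_1}$ with a crude Euler scheme. This is a sketch of the physics, not of how Processor integrates internally:

```python
import numpy as np

# Amplitude damping on a two-level system, integrated with a simple Euler step.
# The excited-state population should follow exp(-t/T1).
T1 = 1.0
a = np.array([[0, 1], [0, 0]], dtype=complex)      # annihilation operator
c = a / np.sqrt(T1)                                # collapse operator
cd = c.conj().T
rho = np.diag([0.0, 1.0]).astype(complex)          # start in |1><1|

dt, steps = 1e-4, 20000                            # integrate to t = 2
for _ in range(steps):
    rho = rho + dt * (c @ rho @ cd - 0.5 * (cd @ c @ rho + rho @ cd @ c))

t = dt * steps
print(rho[1, 1].real, np.exp(-t / T1))             # both close to exp(-2) ~ 0.135
```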
Decay time $T_2$

The $T_2$ time describes the dephasing process. Here one has to be careful that the amplitude damping channel characterized by $T_1$ will also lead to a dephasing proportional to $\exp(-t/2T_1)$. To make sure that the overall phase damping is $\exp(-t/T_2)$, the processor (internally) uses a collapse ...
t1 = 1.
t2 = 0.5
processor = Processor(1, t1=t1, t2=t2)
processor.add_control(H_d, 0)
processor.pulses[0].coeff = True
processor.pulses[0].tlist = tlist
Hadamard = hadamard_transform(1)
result = processor.run_state(init_state=plus_state,
                             e_ops=[Hadamard*a.dag()*a*Hadamard])
fig, ax = plt.subplots()
# detail about len...
Random noise in the pulse intensity

Besides single-qubit decoherence, Processor can also simulate coherent control noise. For general types of noise, one can define a noise object and add it to the processor. An example of predefined noise is random amplitude noise, where a random value is added to the pulse every d...
from qutip.qip.noise import RandomNoise

processor = Processor(N=1)
processor.add_control(0.5 * sigmaz(), targets=0, label="sigmaz")
processor.add_control(0.5 * sigmay(), targets=0, label="sigmay")
processor.coeffs = np.array([[1., 0., 1.],
                             [0., 1., 0.]])
processor.set_all_tlist(np.array([0....
We again compare the result of the evolution with and without noise.
result = processor.run_state(init_state=basis(2, 1))
result.states[-1].tidyup(1.0e-5)
result_white = processor_white.run_state(init_state=basis(2, 1))
result_white.states[-1].tidyup(1.0e-4)
fidelity(result.states[-1], result_white.states[-1])
Since the result of this noise is still a pure state, we can visualize it on a Bloch sphere.
from qutip.bloch import Bloch

b = Bloch()
b.add_states([result.states[-1], result_white.states[-1]])
b.make_sphere()
We can print the pulse information to see the noise. The ideal pulses:
for pulse in processor_white.pulses:
    pulse.print_info()
And the noisy pulses:
for pulse in processor_white.get_noisy_pulses():
    pulse.print_info()
Getting a Pulse or QobjEvo representation

If you define a complicated Processor but don't want to run the simulation right away, you can extract an ideal/noisy Pulse representation or a QobjEvo representation. The latter can be fed directly to a QuTiP solver for the evolution.
ideal_pulses = processor_white.pulses
noisy_pulses = processor_white.get_noisy_pulses(device_noise=True, drift=True)
qobjevo = processor_white.get_qobjevo(noisy=False)
noisy_qobjevo, c_ops = processor_white.get_qobjevo(noisy=True)
Structure inside the simulator

The figures below help one understand the workflow inside the simulator. The first figure shows how the noise is processed in the circuit processor. The noise is defined separately in a class object. When called, it takes parameters and the unitary noiseless qutip.QobjEvo from the proc...
from qutip.ipynbtools import version_table
version_table()
The Bandit

Here we define our bandit. For this example we are using a four-armed bandit. The pullBandit function generates a random number from a normal distribution with a mean of 0. The lower the bandit number, the more likely a positive reward will be returned. We want our agent to learn to always choose the arm tha...
# List out our bandit arms.
# Currently arm 4 (index #3) is set to most often provide a positive reward.
bandit_arms = [0.2, 0, -0.2, -2]
num_arms = len(bandit_arms)

def pullBandit(bandit):
    # Get a random number.
    result = np.random.randn(1)
    if result > bandit:
        # return a positive reward.
        return 1
    ...
Simple-Policy.ipynb
awjuliani/DeepRL-Agents
mit
The Agent

The code below establishes our simple neural agent. It consists of a set of values, one for each of the bandit arms. Each value is an estimate of the return from choosing that arm. We use a policy gradient method to update the agent by moving the value for the selected action toward the received rew...
tf.reset_default_graph()

# These two lines establish the feed-forward part of the network.
weights = tf.Variable(tf.ones([num_arms]))
output = tf.nn.softmax(weights)

# The next six lines establish the training procedure. We feed the reward and chosen action
# into the network to compute the loss, and use it to updat...
Training the Agent

We will train our agent by taking actions in our environment and receiving rewards. Using the rewards and actions, we can learn how to properly update our network so as to more often choose actions that yield the highest rewards over time.
total_episodes = 1000  # Set total number of episodes to train agent on.
total_reward = np.zeros(num_arms)  # Set scoreboard for bandit arms to 0.

init = tf.global_variables_initializer()

# Launch the tensorflow graph
with tf.Session() as sess:
    sess.run(init)
    i = 0
    while i < total_episodes:
        # ...
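The graph-based code above uses the old TF1 API. The same policy-gradient bandit can be sketched in plain NumPy; the learning rate and episode count below are illustrative choices, not the notebook's.

```python
import numpy as np

rng = np.random.default_rng(0)
bandit_arms = [0.2, 0.0, -0.2, -2.0]   # arm 4 (index 3) pays off most often
num_arms = len(bandit_arms)

def pull_bandit(arm):
    # Reward +1 when a standard normal draw exceeds the arm's threshold.
    return 1 if rng.standard_normal() > bandit_arms[arm] else -1

weights = np.ones(num_arms)             # per-arm preferences
lr = 0.01
for _ in range(2000):
    probs = np.exp(weights) / np.exp(weights).sum()   # softmax policy
    action = rng.choice(num_arms, p=probs)
    reward = pull_bandit(action)
    # REINFORCE: grad of log pi(action) w.r.t. weights is one_hot(action) - probs.
    grad = -probs
    grad[action] += 1.0
    weights += lr * reward * grad

print(np.argmax(weights))   # the agent should come to prefer index 3
```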
If you have a CSV file, you can do: s = Striplog.from_csv(filename=filename) But we have text, so we do something slightly different, passing the text argument instead. We also pass a stop argument to tell Striplog to make the last unit (E) 50 m thick. (If you don't do this, it will be 1 m thick).
from striplog import Striplog
s = Striplog.from_csv(text=data, stop=650)
docs/tutorial/10_Extract_curves_into_striplogs.ipynb
agile-geoscience/striplog
apache-2.0
Each element of the striplog is an Interval object, which has a top, base and one or more Components, which represent whatever is in the interval (maybe a rock type, or in this case a formation). There is also a data field, which we will use later.
s[0]
We can plot the striplog. By default, it will use a random legend for the colours:
s.plot(aspect=3)
Or we can plot in the 'tops' style:
s.plot(style='tops', field='formation', aspect=1)
Random curve data

Make some fake data:
from welly import Curve
import numpy as np

depth = np.linspace(0, 699, 700)
data = np.sin(depth/10)
curve = Curve(data=data, index=depth)
Plot it:
import matplotlib.pyplot as plt

fig, axs = plt.subplots(ncols=2, sharey=True)
axs[0] = s.plot(ax=axs[0])
axs[1] = curve.plot(ax=axs[1])
Extract data from the curve into the striplog
s = s.extract(curve.values, basis=depth, name='GR')
Now we have the GR data from each unit stored in that unit:
s[1]
So we could plot a segment of curve, say:
plt.plot(s[1].data['GR'])
Extract and reduce data

We don't have to store all the data points. We can optionally pass a function to produce anything we like, and store the result of that:
s = s.extract(curve, basis=depth, name='GRmean', function=np.nanmean)
s[1]
Other helpful reducing functions:

* np.nanmedian: median average (ignoring nans)
* np.product: product
* np.nansum: sum (ignoring nans)
* np.nanmin: minimum (ignoring nans)
* np.nanmax: maximum (ignoring nans)
* scipy.stats.mstats.mode: mode average
* scipy.stats.mstats.hmean: harmon...
s[1].data['foo'] = 'bar'
s[1]
The local outlier factor (LOF) computes a local density for each point, estimated from its distances to its K nearest neighbors, and compares it with the densities of its neighbors to find anomalies: an outlier has a much lower density than its neighbors. To understand LOF, first some definitions:

* K-distance of an object P: the distance between P and its K-th nearest neighbor, where K is a parameter of the algorithm.
* K-distance neighborhood of P: the set Q of all objects whose distance to P is at most P's K-distance.
* Reachability distance from P to Q: the maximum of P's K-distance and the distance between P and Q.
* Local reachability density of P: the ratio of the number of P's K-distance neighbors to the sum of the reachability distances to those neighbors.
* Local outlier factor of P: the ratio of ... P and its K nearest...
# pairwise distances between all points
distance = 'manhattan'
from sklearn.metrics import pairwise_distances
dist = pairwise_distances(instance, metric=distance)
print(dist)

# compute the K-distance, using heapq to get the K nearest neighbours
k = 2
import heapq
from collections import defaultdict
# values of k_distance are tuples
k_distance = defaultdict(tuple)
# for each point
for i in range(instance.shape[0]):
    # get...
Data_Mining/Local_outlier_factor.ipynb
Roc-J/Python_data_science
apache-2.0
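The k-distance / reachability-distance / lrd machinery described above is what scikit-learn's LocalOutlierFactor implements. A tiny sketch with invented coordinates: four corners of a unit square plus one far-away point, using the same manhattan metric as the notebook.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [10, 10]], dtype=float)

lof = LocalOutlierFactor(n_neighbors=2, metric="manhattan")
labels = lof.fit_predict(X)             # -1 flags outliers
scores = -lof.negative_outlier_factor_  # LOF scores; ~1 means "inlier"

print(labels)
print(np.round(scores, 2))   # the far point gets a score far above 1
```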
References

https://class.coursera.org/statistics-003
https://www.udacity.com/course/intro-to-data-science--ud359
http://blog.minitab.com/blog/adventures-in-statistics/multiple-regession-analysis-use-adjusted-r-squared-and-predicted-r-squared-to-include-the-correct-number-of-variables
https://en.wikipedia.org/wiki/Coef...
df.groupby('rain',as_index=False).ENTRIESn_hourly.mean()
p2-introds/nyc_subway/project.ipynb
napjon/ds-nd
mit
In this data, we can see summary statistics of hourly ridership, represented by the ENTRIESn_hourly variable, between rainy and non-rainy days. The independent variable is rain: non-rainy days form the control group, and rainy days the experiment group. How do rainy days affect the number of ridersh...
df.groupby('rain', as_index=False).ENTRIESn_hourly.mean()
sp.mannwhitneyu(df.loc[df.rain == 0, 'ENTRIESn_hourly'],
                df.loc[df.rain == 1, 'ENTRIESn_hourly'])
We're using the Mann-Whitney U test, with an average of 1090 hourly riders on non-rainy days and 1105 on rainy days. Because the p-value of 0.025 is less than the 0.05 critical value, we reject the null hypothesis and conclude that the data provide convincing evidence that the average hourly ridership on rainy days is...
length = df.shape[0]
subset = df.take(np.random.permutation(length)[:int(length*0.1)]).reset_index()
dummy_hours = pd.get_dummies(subset['Hour'], prefix='hour')
dummy_units = pd.get_dummies(subset['UNIT'], prefix='unit')
# features = subset.join(dummy_units).join(dummy_hours)
features = subset
banned = ['ENTRIESn_hou...
R-squared is not a sufficient measure for testing our model, since every time we add a variable, R-squared keeps increasing. We're going to use adjusted R-squared instead, since it incorporates a penalty every time we add a variable.
def test_adjusted_R_squared(col):
    """Testing one variable with already approved predictors"""
    reg = sm.OLS(features['ENTRIESn_hourly'], features[predictors + [col]])
    result = reg.fit()
    return result.rsquared_adj
I'm going to choose forward selection, adding one variable at a time based on the highest adjusted R-squared, and stopping when there is no further increase over the previous adjusted R-squared.
predictors = []
topr2 = 0
for i in range(len(candidates)):
    filtered = [c for c in candidates if c not in predictors]
    list_r2 = [test_adjusted_R_squared(c) for c in filtered]
    highest, curr_topr2 = max(zip(filtered, list_r2), key=lambda x: x[1])
    if curr_topr2 > topr2:
        topr2 = round(curr_topr2, 10...
These are the non-dummy features after performing forward selection:
predictors
To check for collinearity that may occur among my numerical features, I use a scatter matrix.
print('Scatter Matrix of features and predictors to test collinearity');
pd.scatter_matrix(features[numerics], figsize=(10, 10));
I can see that there is no collinearity among the predictors. Next I join the non-dummy and dummy features into features_dummy and create the model.
features_dummy = features[predictors].join(dummy_units).join(dummy_hours)
model = sm.OLS(features['ENTRIESn_hourly'], features_dummy).fit()
filter_cols = lambda col: not col.startswith('unit') and not col.startswith('hour')
model.params[model.params.index.map(filter_cols)]
model.rsquared
R2 is often interpreted as the proportion of response variation "explained" by the regressors in the model. So we can say that 61.67% of the variability in hourly subway ridership can be explained by the model.

Visualization

At the time of this writing, pandas has grown mature, and ggplot for python, which re...
fig, axes = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=True, squeeze=False)
filtered = df.loc[df.ENTRIESn_hourly < 10000]
for i in range(1):
    axes[0][i].set_xlabel('Number of ridership hourly')
    axes[0][i].set_ylabel('Frequency')
filtered.loc[filtered.rain == 0, 'ENTRIESn_hourly'].hist(ax=axes[0][0], bins=50)
...
In this plot, we can see how many people are riding the subway, but we want to know whether the difference is significant, using a hypothesis test. The frequency is indeed higher for non-rainy days compared to rainy days.
(df
 .resample('1D', how='mean')
 .groupby(lambda x: 1 if pd.datetools.isBusinessDay(x) else 0)
 .ENTRIESn_hourly
 .plot(legend=True))
plt.legend(['Not Business Day', 'Business Day'])
plt.xlabel('By day in May 2011')
plt.ylabel('Average number of ridership hourly')
plt.title('Average number of ridership every day at in...
We can see that the difference in ridership by time of day is likely significant. We can create a new variable to turn this into a categorical variable.
df['BusinessDay'] = df.index.map(lambda x: 0 if pd.datetools.isBusinessDay(x) else 1)
df.resample('1D').rain.value_counts()
Conclusion

Since the data are observational and not from a controlled experiment, we can't infer causation. However, there is likely to be no difference in average hourly ridership between non-rainy and rainy days. We know that the dataset is taken from the NYC subway data, but because the data is not randomly sampled in this ...
fig, axes = plt.subplots(nrows=1, ncols=3, sharey=True, squeeze=False)
numerics = ['maxpressurei', 'mintempi', 'precipi']
for i in range(len(numerics)):
    axes[0][i].scatter(x=features[numerics[i]], y=model.resid, alpha=0.1)
    axes[0][i].set_xlabel(numerics[i])
axes[0][0].set_ylabel('final model residuals')
axes[0...
We see that even though maxpressurei and mintempi look categorical, their residuals are randomly scattered. But precipi is not a good candidate for a linear relationship in the model: it does not appear randomly scattered.

Nearly normal residuals with mean 0
fig, axes = plt.subplots(nrows=1, ncols=2, squeeze=False)
sp.probplot(model.resid, plot=axes[0][0])
model.resid.hist(bins=20, ax=axes[0][1]);
axes[0][1].set_title('Histogram of residuals')
axes[0][1].set_xlabel('Residuals')
axes[0][1].set_ylabel('Frequency');
Next, we check with a histogram that the residuals are normally distributed. The histogram shows that they are fairly normal and distributed around zero. The quantile plot checks whether the residuals are randomly scattered around zero. We can see that our model fails this test: the residuals are very skewed, explained by large n...
fig, axes = plt.subplots(nrows=1, ncols=2, squeeze=False)
axes[0][0].scatter(x=model.fittedvalues, y=model.resid, alpha=0.1)
axes[0][1].scatter(x=model.fittedvalues, y=abs(model.resid), alpha=0.1);
axes[0][0].set_xlabel('fitted_values')
axes[0][1].set_xlabel('fitted_values')
axes[0][0].set_ylabel('Abs(residuals)')
axes[...
The model also fails this diagnostic. In the first plot, the fitted values and residuals should be randomly scattered around zero, not forming a fan shape. In the plot on the left, we see some kind of boundary that keeps the points from scattering randomly, and they form a fan s...
resids = pd.DataFrame(model.resid.copy())
resids.columns = ['residuals']
resids.index = pd.to_datetime(features['index'])
resids.sort_index(inplace=True)
plt.plot_date(x=resids.resample('1H', how='mean').index,
              y=resids.resample('1H', how='mean').residuals);
plt.xlabel('Time Series')
plt.ylabel('residuals'...
Great, we have a function and we can reuse it. But to actually reuse it, we have to use the return statement.
def maximo(x, y):
    if x > y:
        return x
    else:
        return y
Python/2016-07-22/aula2-parte1-funcoes.ipynb
rubensfernando/mba-analytics-big-data
mit
There we go! Now we can reuse our function! And how do we do that?
z = maximo(3, 4)
When we call maximo(3, 4) we are defining x = 3 and y = 4. Then the expressions are evaluated until there are no more expressions, in which case None is returned, or until the special keyword return is found, which makes its expression the value of the function call.
print(z)
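A minimal sketch of that point: without return, a call evaluates to None. (maximo_sem_return is a hypothetical name for a return-less variant of maximo.)

```python
# A function with no return statement evaluates its body and hands back None;
# only `return` gives the call a value.
def maximo_sem_return(x, y):
    if x > y:
        x        # evaluated, then discarded
    else:
        y

def maximo(x, y):
    if x > y:
        return x
    return y

print(maximo_sem_return(3, 4))   # None
print(maximo(3, 4))              # 4
```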
We now understand what functions are and how to create them. To test this, let's create a function that performs a calculation.
def economias(dinheiro, conta, gastos):
    total = (dinheiro + conta) - gastos
    return total

eco = economias(10, 20, 10)
print(eco)
We can also define a default value for one or more arguments. Let's rewrite the economias function so that gastos defaults to 150 when no value is passed.
def economias(dinheiro, conta, gastos=150):
    total = (dinheiro + conta) - gastos
    return total

print(economias(100, 60))
print(economias(100, 60, 10))
It is important to note that a variable defined inside a function cannot be used outside of it. In the programming world, this is called scope. Let's try to print the value of the variable dinheiro.
print(dinheiro)
<span style="color:blue;">Why did this happen?</span> This error happens because the variable dinheiro only exists inside the economias function; that is, it exists only in that function's local context. Let's modify the economias function again:
def economias(dinheiro, conta, gastos=150):
    total = (dinheiro + conta) - gastos
    total = total + eco
    return total

print(economias(100, 60))
<span style="color:blue;">Why was there no problem?</span> When we use, inside a function, a variable that was defined outside it, we are using the idea of global variables: in the global context the variable exists and can be used inside the function. <span style="color:red;">This is not recommended...
def conta(valor, multa=7):
    # Your code here
Built-in functions

Python has a number of built-in functions that are always available. A complete list can be found at https://docs.python.org/3/library/functions.html. <span style="color:blue;">We have already used some of them! Which ones?</span>

input

Another quite interesting function is input. This function p...
idade = input('Digite sua idade:')
print(idade)
nome = input('Digite seu nome:')
print(nome)
print(type(idade))
print(type(nome))
Note that both variables are strings. So we need to convert the age to an integer.
idade = int(input("Digite sua idade:"))
print(type(idade))
open

The open function opens a file for reading and writing: open(nome_do_arquivo, modo). Modes:
* r - opens the file for reading.
* w - opens the file for writing.
* a - opens the file for writing, appending the data at the end of the file.
* + - the file can be read and written simul...
import os
os.remove("arquivo.txt")

arq = open("arquivo.txt", "w")
for i in range(1, 5):
    arq.write('{}. Escrevendo em arquivo\n'.format(i))
arq.close()
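A sketch of the idiomatic alternative to explicit close() calls: a with block closes the file automatically, even if an exception occurs. (arquivo2.txt is a made-up filename here, so as not to clash with the arquivo.txt used above.)

```python
# `with` guarantees the file is closed when the block exits.
with open("arquivo2.txt", "w") as arq:
    for i in range(1, 5):
        arq.write("{}. Escrevendo em arquivo\n".format(i))

with open("arquivo2.txt") as arq:
    print(len(arq.readlines()))   # 4 lines were written
```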
Methods

read() - returns a single string with the entire contents of the file.
readlines() - the whole contents of the file are saved into a list, where each line of the file is one element of the list.
f = open("arquivo.txt", "r")
print(f, '\n')
texto = f.read()
print(texto)
f.close()

f = open("arquivo.txt", "r")
texto = f.readlines()
print(texto)
f.close()
# help(f.readlines)
To remove the \n we can use the read method, which yields a single string, and then apply the splitlines method.
f = open("arquivo.txt", "r")
texto = f.read().splitlines()
print(texto)
f.close()
Wavelet reconstruction

We can reconstruct the sequence as $$ \hat y = W \hat \beta. $$ The objective is a likelihood term plus an L1 penalty, $$ \frac 12 \sum_{i=1}^T (y - W \beta)_i^2 + \lambda \sum_{i=1}^T |\beta_i|. $$ The L1 penalty "forces" some $\beta_i = 0$, inducing sparsity.
plt.plot(tse_soft[:, 4])
high_idx = np.where(np.abs(tse_soft[:, 4]) > .0001)[0]
print(high_idx)
fig, axs = plt.subplots(len(high_idx) + 1, 1)
for i, idx in enumerate(high_idx):
    axs[i].plot(W[:, idx])
plt.plot(tse_den['FTSE'], c='r')
lectures/lecture5/lecture5.ipynb
jsharpna/DavisSML
mit
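For an orthonormal wavelet basis $W$ (so $W^\top W = I$), this objective has the closed-form minimizer $\hat\beta_i = \mathrm{sign}(z_i)\,\max(|z_i| - \lambda, 0)$ with $z = W^\top y$, i.e. soft thresholding of the coefficients, which is presumably what tse_soft above holds. A NumPy sketch with made-up coefficients:

```python
import numpy as np

# Soft thresholding: shrink every coefficient toward zero by lam,
# setting those with |z_i| <= lam exactly to zero (hence the sparsity).
def soft_threshold(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

z = np.array([2.5, -0.3, 0.05, -1.2])
print(soft_threshold(z, 0.5))   # large entries shrink, small ones become 0
```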
Non-orthogonal design

The objective is a likelihood term plus an L1 penalty, $$ \frac 12 \sum_{i=1}^T (y - X \beta)_i^2 + \lambda \sum_{i=1}^T |\beta_i|, $$ which does not have a closed form for non-orthogonal $X$. It is convex; it is non-smooth (recall $|x|$); it has a tuning parameter $\lambda$. Compare to best subset selection...
# %load ../standard_import.txt
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import preprocessing, model_selection, linear_model
%matplotlib inline

## Modified from the github repo: https://github.com/JWarmenhoven/ISLR-python
## which is based on the book by James et al. Intro t...
Exercise 5.3

Run the lasso using linear_model.lars_path with the lasso modification (see the docstring with ?linear_model.lars_path). Plot the lasso coefficients that are learned as a function of lambda. You should have a plot with lambda on the x-axis and the coefficient value on the y-axis, with $p=1000$ lines pl...
?linear_model.lars_path

## Answer to exercise 5.3
## Run lars with lasso mod, find active set
larper = linear_model.lars_path(X, y, method="lasso")
S = set(np.where(Sbool)[0])

def plot_it():
    for j in S:
        _ = plt.plot(larper[0], larper[2][j, :], 'r')
    for j in set(range(p)) - S:
        _ = plt.plot(larper[0...
lectures/lecture5/lecture5.ipynb
jsharpna/DavisSML
mit
Exercise 5.4 You should cross-validate to select the lambda just like any other tuning parameter. Sklearn gives you the option of using their fast cross-validation script via linear_model.LassoCV, see the documentation. You can create a leave-one-out cross validator with model_selection.LeaveOneOut then pass this to ...
## Answer to 5.4 ## Fit the lasso and cross-validate, increased max_iter to achieve convergence loo = model_selection.LeaveOneOut() looiter = loo.split(X) hitlasso = linear_model.LassoCV(cv=looiter,max_iter=2000) hitlasso.fit(X,y) print("The selected lambda value is {:.2f}".format(hitlasso.alpha_)) hitlasso.coef_
lectures/lecture5/lecture5.ipynb
jsharpna/DavisSML
mit
We can also compare this to the selected model from forward stagewise regression: [-0.21830515, 0.38154135, 0. , 0. , 0. , 0.16139123, 0. , 0. , 0. , 0. , 0.09994524, 0.56696569, -0.16872682, 0.16924078, 0. , 0. , 0. ...
bforw = [-0.21830515, 0.38154135, 0. , 0. , 0. , 0.16139123, 0. , 0. , 0. , 0. , 0.09994524, 0.56696569, -0.16872682, 0.16924078, 0. , 0. , 0. , -0.19429699, 0. ] print(", ".join(X.columns[(hitlasso.coef_ ...
lectures/lecture5/lecture5.ipynb
jsharpna/DavisSML
mit
Data The data are artificial but simulate what the revenue of a neighborhood store might look like: very busy Saturdays, dull weekdays, a loaded Christmas, a flat summer.
from ensae_teaching_cs.data import generate_sells import pandas df = pandas.DataFrame(generate_sells()) df.head()
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
First plots The series has two seasonalities: weekly and monthly.
import matplotlib.pyplot as plt fig, ax = plt.subplots(1, 2, figsize=(14, 4)) df.iloc[-30:].set_index('date').plot(ax=ax[0]) df.set_index('date').plot(ax=ax[1]) ax[0].set_title("chiffre d'affaire sur le dernier mois") ax[1].set_title("chiffre d'affaire sur deux ans");
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
It has a slight trend; we can compute a trend of order 1, 2, ...
from statsmodels.tsa.tsatools import detrend notrend = detrend(df.value, order=1) df["notrend"] = notrend df["trend"] = df['value'] - notrend ax = df.plot(x="date", y=["value", "trend"], figsize=(14,4)) ax.set_title('tendance');
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
Autocorrelations...
from statsmodels.tsa.stattools import acf cor = acf(df.value) cor fig, ax = plt.subplots(1, 1, figsize=(14,2)) ax.plot(cor) ax.set_title("Autocorrélogramme");
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
The first seasonality appears at lags 7, 14, 21... The partial autocorrelations confirm it: about 7 days.
from statsmodels.tsa.stattools import pacf from statsmodels.graphics.tsaplots import plot_pacf plot_pacf(df.value, lags=50);
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
Since nothing happens on Sundays, it is better to remove them. Keeping the zeros would rule out multiplicative models.
df["weekday"] = df.date.dt.weekday df.head() df_nosunday = df[df.weekday != 6] df_nosunday.head(n=10) fig, ax = plt.subplots(1, 1, figsize=(14,2)) cor = acf(df_nosunday.value) ax.plot(cor) ax.set_title("Autocorrélogramme"); plot_pacf(df_nosunday.value, lags=50);
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
We decompose the series into trend + seasonality. The summers and Christmas stand out.
from statsmodels.tsa.seasonal import seasonal_decompose res = seasonal_decompose(df_nosunday.value, freq=7) res.plot(); plt.plot(res.seasonal[-30:]) plt.title("Saisonnalité"); cor = acf(res.trend[5:-5]); plt.plot(cor);
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
We now look for the seasonality of the series stripped of its weekly component. The monthly seasonality reappears.
res_year = seasonal_decompose(res.trend[5:-5], freq=25) res_year.plot();
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
Stationarity test The KPSS test checks whether a series is stationary.
from statsmodels.tsa.stattools import kpss kpss(res.trend[5:-5])
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
Since the result is not always easy to interpret, we simulate a Gaussian random variable, hence one without a trend.
from numpy.random import randn bruit = randn(1000) kpss(bruit)
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
And then a series with a strong trend.
from numpy.random import randn from numpy import arange bruit = randn(1000) * 100 + arange(1000) / 10 kpss(bruit)
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
A large value indicates a trend, and this series clearly has one. Prediction AR, ARMA and ARIMA models focus on a one-dimensional series. In machine learning, we have the series plus plenty of other information. We build a matrix of lagged series.
from statsmodels.tsa.tsatools import lagmat lag = 8 X = lagmat(df_nosunday["value"], lag) lagged = df_nosunday.copy() for c in range(1,lag+1): lagged["lag%d" % c] = X[:, c-1] lagged.tail()
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
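The same lag matrix can be built with pandas' `shift`; note that `lagmat` pads the first rows with zeros while `shift` uses `NaN`. A toy series (`lagged_toy` is a hypothetical stand-in for `df_nosunday["value"]`):

```python
import pandas as pd

s = pd.Series([10.0, 11.0, 13.0, 12.0, 15.0, 14.0])
lagged_toy = pd.DataFrame({"value": s})
for c in range(1, 4):
    # lag c: the value observed c steps earlier
    lagged_toy[f"lag{c}"] = s.shift(c)
print(lagged_toy)
```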
We add (or overwrite) the day of the week, which we use as an extra feature.
lagged["weekday"] = lagged.date.dt.weekday X = lagged.drop(["date", "value", "notrend", "trend"], axis=1) Y = lagged["value"] X.shape, Y.shape from numpy import corrcoef corrcoef(X, rowvar=False)
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
Strange to see so many large values: it means the trend is too strong for the correlations to be meaningful; it would be better to start over with the differenced series $\Delta Y_t = Y_t - Y_{t-1}$. Anyway, moving on...
X.columns
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
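The differencing mentioned above removes a linear trend; on a toy series with slope 2 (hypothetical data for illustration), the first difference is constant:

```python
import numpy as np
import pandas as pd

y = pd.Series(np.arange(10) * 2.0 + 5.0)   # deterministic trend, slope 2
dy = y.diff().dropna()                     # Delta Y_t = Y_t - Y_{t-1}
print(dy.unique())  # [2.]
```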
A linear regression: linear models are always a good baseline, and since we know the simulated model, we will not do much better anyway.
from sklearn.linear_model import LinearRegression clr = LinearRegression() clr.fit(X, Y) from sklearn.metrics import r2_score r2_score(Y, clr.predict(X)) clr.coef_
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
We recover the seasonality: $Y_t$ and $Y_{t-6}$ go hand in hand.
for i in range(1, X.shape[1]): print("X(t-%d)" % (i), r2_score(Y, X.iloc[:, i]))
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
Previously (last year, in fact), I built the two sets, train and test, like this:
n = X.shape[0] X_train = X.iloc[:n * 2//3] X_test = X.iloc[n * 2//3:] Y_train = Y[:n * 2//3] Y_test = Y[n * 2//3:]
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
Then scikit-learn came along with TimeSeriesSplit.
from sklearn.model_selection import TimeSeriesSplit tscv = TimeSeriesSplit(n_splits=5) for train_index, test_index in tscv.split(lagged): data_train, data_test = lagged.iloc[train_index, :], lagged.iloc[test_index, :] print("TRAIN:", data_train.shape, "TEST:", data_test.shape)
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
And we fit a random forest...
import warnings from sklearn.ensemble import RandomForestRegressor clr = RandomForestRegressor() def train_test(clr, train_index, test_index): data_train = lagged.iloc[train_index, :] data_test = lagged.iloc[test_index, :] clr.fit(data_train.drop(["value", "date", "notrend", "trend"], ...
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
Two years cut into 5, i.e. roughly every 5 months: some folds include Christmas, others the summer, and the scores will be very sensitive to that.
from sklearn.metrics import r2_score r2 = r2_score(data_test.value, clr.predict(data_test.drop(["value", "date", "notrend", "trend"], axis=1).values)) r2
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
We compare this $r_2$ with the $r_2$ obtained by using $Y_{t-1}$, $Y_{t-2}$, ... $Y_{t-d}$ as the prediction.
for i in range(1, 9): print(i, ":", r2_score(data_test.value, data_test["lag%d" % i])) lagged[:5]
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
The day of the week is in fact a categorical variable, so we create one column per day.
from sklearn.compose import ColumnTransformer from sklearn.preprocessing import OneHotEncoder cols = ['lag1', 'lag2', 'lag3', 'lag4', 'lag5', 'lag6', 'lag7', 'lag8'] ct = ColumnTransformer( [('pass', "passthrough", cols), ("dummies", OneHotEncoder(), ["weekday"])]) pred = ct.fit(lagged).transf...
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
We put everything into a pipeline because it is prettier, and more convenient too.
from sklearn.pipeline import make_pipeline from sklearn.decomposition import PCA, TruncatedSVD cols = ['lag1', 'lag2', 'lag3', 'lag4', 'lag5', 'lag6', 'lag7', 'lag8'] model = make_pipeline( make_pipeline( ColumnTransformer( [('pass', "passthrough", cols), ...
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
It is easier to see visually.
from mlinsights.plotting import pipeline2dot dot = pipeline2dot(model, lagged) from jyquickhelper import RenderJsDot RenderJsDot(dot) r2_score(lagged['value'], model.predict(lagged))
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
Templating Completely off topic, but useful.
from jinja2 import Template template = Template('Hello {{ name }}!') template.render(name='John Doe') template = Template(""" {{ name }} {{ "-" * len(name) }} Possède : {% for i in range(len(meubles)) %} - {{meubles[i]}}{% endfor %} """) meubles = ['table', "tabouret"] print(template.render(name='John Doe Doe', len=le...
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
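A complete, minimal version of the template above (assuming the `jinja2` package is installed); note that built-ins such as `len` are not available inside a template unless passed to `render` explicitly:

```python
from jinja2 import Template

template = Template("""{{ name }}
{{ "-" * len(name) }}
Possède :{% for m in meubles %}
- {{ m }}{% endfor %}
""")
# Helpers (here `len`) travel alongside the data in render().
print(template.render(name="John Doe", len=len, meubles=["table", "tabouret"]))
```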
As an example, we set $\alpha = 0.2$ (more like a ridge regression), and give double weights to the latter half of the observations. To avoid too long a display here, we set nlambda to 20. In practice, however, the number of values of $\lambda$ is recommended to be 100 (default) or more. In most cases, it does not come...
# call glmnet fit = glmnet(x = x.copy(), y = y.copy(), family = 'gaussian', \ weights = wts, \ alpha = 0.2, nlambda = 20 )
docs/glmnet_vignette.ipynb
bbalasub1/glmnet_python
gpl-3.0
We can then print the glmnet object.
glmnetPrint(fit)
docs/glmnet_vignette.ipynb
bbalasub1/glmnet_python
gpl-3.0