and the linear kernel (in $\mathbb{R}^d$) $$ K( x, y ) = \langle x, y\rangle \,,$$
## Kernel type: linear kernel
grid_linear_ = grid_search.GridSearchCV(
    svm_clf_,
    param_grid = {
        ## Regularization parameter: C = 0.0001, 0.001, 0.01, 0.1, 1, 10.
        "C" : np.logspace( -4, 1, num = 6 ),
        "kernel" : [ "linear" ],
        "degree" : [ 0 ]
    },
    cv = 5, n_jobs = -1, verbose = 0
).fit( X_train, y_tr...
year_15_16/fall_2015/game theoretic foundations of ml/labs/SVM-lab.ipynb
ivannz/study_notes
mit
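The cells below reference grid_poly_ and grid_rbf_, whose defining cells are not included in this excerpt. A minimal sketch of how they could be set up, mirroring the linear-kernel cell above (the parameter grids are assumptions, not the notebook's actual values):

```python
## Kernel type: polynomial kernel (sketch; defining cell omitted above)
grid_poly_ = grid_search.GridSearchCV(
    svm_clf_,
    param_grid = {
        "C" : np.logspace( -4, 1, num = 6 ),
        "kernel" : [ "poly" ],
        "degree" : [ 2, 3, 5 ]
    },
    cv = 5, n_jobs = -1, verbose = 0
).fit( X_train, y_train )

## Kernel type: Gaussian (RBF) kernel (sketch; defining cell omitted above)
grid_rbf_ = grid_search.GridSearchCV(
    svm_clf_,
    param_grid = {
        "C" : np.logspace( -4, 1, num = 6 ),
        "kernel" : [ "rbf" ],
        "gamma" : np.logspace( -2, 2, num = 5 )
    },
    cv = 5, n_jobs = -1, verbose = 0
).fit( X_train, y_train )
```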
The search results are shown below:
pd.concat( [ df_linear_, df_poly_, df_rbf_ ], axis = 0 ).sort_index( )
year_15_16/fall_2015/game theoretic foundations of ml/labs/SVM-lab.ipynb
ivannz/study_notes
mit
Let's look at the accuracy of the best model in each kernel class on the test set. Linear kernel
print grid_linear_.best_estimator_
print "Accuracy: %0.3f%%" % ( grid_linear_.best_estimator_.score( X_test, y_test ) * 100, )
year_15_16/fall_2015/game theoretic foundations of ml/labs/SVM-lab.ipynb
ivannz/study_notes
mit
Gaussian (RBF) kernel
print grid_rbf_.best_estimator_
print "Accuracy: %0.3f%%" % ( grid_rbf_.best_estimator_.score( X_test, y_test ) * 100, )
year_15_16/fall_2015/game theoretic foundations of ml/labs/SVM-lab.ipynb
ivannz/study_notes
mit
Polynomial kernel
print grid_poly_.best_estimator_
print "Accuracy: %0.3f%%" % ( grid_poly_.best_estimator_.score( X_test, y_test ) * 100, )
year_15_16/fall_2015/game theoretic foundations of ml/labs/SVM-lab.ipynb
ivannz/study_notes
mit
Let's plot the ROC curve for the best model in each kernel class.
result_ = { name_: metrics.roc_curve( y_test, estimator_.predict_proba( X_test )[:,1] )
            for name_, estimator_ in { "Linear": grid_linear_.best_estimator_,
                                       "Polynomial": grid_poly_.best_estimator_,
                                       "RBF": grid_rbf...
year_15_16/fall_2015/game theoretic foundations of ml/labs/SVM-lab.ipynb
ivannz/study_notes
mit
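The cell that actually plots the collected ROC curves is cut off above; a minimal sketch of how result_ could be drawn (the matplotlib calls are an assumption, not the notebook's original cell):

```python
import matplotlib.pyplot as plt

# metrics.roc_curve returns (fpr, tpr, thresholds) for each estimator
for name_, ( fpr_, tpr_, _ ) in result_.items( ):
    plt.plot( fpr_, tpr_, label = name_ )
plt.plot( [ 0, 1 ], [ 0, 1 ], "k--", label = "Chance" )
plt.xlabel( "False positive rate" )
plt.ylabel( "True positive rate" )
plt.legend( loc = "lower right" )
plt.show( )
```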
Load and prepare the data A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
data_path = 'Bike-Sharing-Dataset/hour.csv'

rides = pd.read_csv(data_path)
rides.head()
your-first-network/.ipynb_checkpoints/dlnd-your-first-neural-network-checkpoint.ipynb
xpharry/Udacity-DLFoudation
mit
Checking out the data This dataset has the number of riders for each hour of each day from January 1, 2011 to December 31, 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above. Below is a plot showing the number of bike riders ove...
rides[:24*10].plot(x='dteday', y='cnt')
your-first-network/.ipynb_checkpoints/dlnd-your-first-neural-network-checkpoint.ipynb
xpharry/Udacity-DLFoudation
mit
Dummy variables Here we have some categorical variables like season, weather, and month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
    dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
    rides = pd.concat([rides, dummies], axis=1)

fields_to_drop = ['instant', 'dteday', 'season', 'weathersit', 'weekday', 'atemp', 'mnth...
your-first-network/.ipynb_checkpoints/dlnd-your-first-neural-network-checkpoint.ipynb
xpharry/Udacity-DLFoudation
mit
Scaling target variables To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1. The scaling factors are saved so we can go backwards when we use the network for predictions.
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
    mean, std = data[each].mean(), data[each].std()
    scaled_features[each] = [mean, std]
    data.loc[:, each] = (data[each] - me...
your-first-network/.ipynb_checkpoints/dlnd-your-first-neural-network-checkpoint.ipynb
xpharry/Udacity-DLFoudation
mit
Splitting the data into training, testing, and validation sets We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]

# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[...
your-first-network/.ipynb_checkpoints/dlnd-your-first-neural-network-checkpoint.ipynb
xpharry/Udacity-DLFoudation
mit
We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
your-first-network/.ipynb_checkpoints/dlnd-your-first-neural-network-checkpoint.ipynb
xpharry/Udacity-DLFoudation
mit
Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes. The network has two layers, a hid...
class NeuralNetwork(object):
    def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Set number of nodes in input, hidden and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes

        # Initialize we...
your-first-network/.ipynb_checkpoints/dlnd-your-first-neural-network-checkpoint.ipynb
xpharry/Udacity-DLFoudation
mit
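The forward pass is the part left for you to implement. A minimal sketch of one possible implementation, assuming a sigmoid hidden layer, an identity output unit (this is a regression task), and weight attributes named weights_input_to_hidden / weights_hidden_to_output (the attribute names are assumptions; match whatever the scaffold actually defines):

```python
import numpy as np

def forward_pass(self, inputs):
    # hidden layer: weighted sum followed by a sigmoid activation
    hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
    hidden_outputs = 1.0 / (1.0 + np.exp(-hidden_inputs))

    # output layer: weighted sum with an identity activation, f(x) = x,
    # since the target (rider count) is an unbounded continuous value
    final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
    final_outputs = final_inputs
    return final_outputs
```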
Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training se...
import sys

### Set the hyperparameters here ###
epochs = 1000
learning_rate = 0.05
hidden_nodes = 3
output_nodes = 1

N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)

losses = {'train':[], 'validation':[]}
for e in range(epochs):
    # Go through a random batch of ...
your-first-network/.ipynb_checkpoints/dlnd-your-first-neural-network-checkpoint.ipynb
xpharry/Udacity-DLFoudation
mit
Check out your predictions Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
fig, ax = plt.subplots(figsize=(8,4))

mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()

dates = pd.to_datetime(rides.ix[test_d...
your-first-network/.ipynb_checkpoints/dlnd-your-first-neural-network-checkpoint.ipynb
xpharry/Udacity-DLFoudation
mit
Thinking about your results Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does? Note: you can edit the text in this cell by double-clicking on it. When you want to render the text, press Control + Enter. Your answer below. Unit tests R...
import unittest

inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
                     [-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])

class TestMethods(unittest.TestCase):

    ##########
    # Unit tests for data loading
    ##########

    def test_data_path(self...
your-first-network/.ipynb_checkpoints/dlnd-your-first-neural-network-checkpoint.ipynb
xpharry/Udacity-DLFoudation
mit
Plotting the data First let's make a plot of our data to see how it looks. In order to have a 2D plot, let's ignore the rank.
# Importing matplotlib
import matplotlib.pyplot as plt

# Function to help us plot
def plot_points(data):
    X = np.array(data[["gre","gpa"]])
    y = np.array(data["admit"])
    admitted = X[np.argwhere(y==1)]
    rejected = X[np.argwhere(y==0)]
    plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected]...
student-admissions/StudentAdmissions.ipynb
ktmud/deep-learning
mit
Roughly, it looks like students with high grades and test scores were admitted, while those with low scores weren't, but the data is not as nicely separable as we had hoped. Maybe it would help to take the rank into account? Let's make 4 plots, one for each rank.
# Separating the ranks
data_rank1 = data[data["rank"]==1]
data_rank2 = data[data["rank"]==2]
data_rank3 = data[data["rank"]==3]
data_rank4 = data[data["rank"]==4]

# Plotting the graphs
plot_points(data_rank1)
plt.title("Rank 1")
plt.show()
plot_points(data_rank2)
plt.title("Rank 2")
plt.show()
plot_points(data_rank3)...
student-admissions/StudentAdmissions.ipynb
ktmud/deep-learning
mit
This looks more promising, as it seems that the lower the rank, the higher the acceptance rate. Let's use the rank as one of our inputs. In order to do this, we should one-hot encode it. TODO: One-hot encoding the rank Use the get_dummies function in Pandas in order to one-hot encode the data.
# TODO: Make dummy variables for rank
one_hot_data = pd.concat([data, pd.get_dummies(data['rank'], prefix='rank')], axis=1)

# TODO: Drop the previous rank column
one_hot_data = one_hot_data.drop('rank', axis=1)

# Print the first 10 rows of our data
one_hot_data[:10]
student-admissions/StudentAdmissions.ipynb
ktmud/deep-learning
mit
TODO: Scaling the data The next step is to scale the data. We notice that the range for grades is 1.0-4.0, whereas the range for test scores is roughly 200-800, which is much larger. This means our data is skewed, and that makes it hard for a neural network to handle. Let's fit our two features into a range of 0-1, by ...
# Making a copy of our data
processed_data = one_hot_data[:]

# TODO: Scale the columns (gre ranges up to 800, gpa up to 4.0)
processed_data['gre'] = processed_data['gre'] / 800
processed_data['gpa'] = processed_data['gpa'] / 4.0

# Printing the first 10 rows of our processed data
processed_data[:10]
student-admissions/StudentAdmissions.ipynb
ktmud/deep-learning
mit
Splitting the data into Training and Testing In order to test our algorithm, we'll split the data into a Training and a Testing set. The size of the testing set will be 10% of the total data.
sample = np.random.choice(processed_data.index, size=int(len(processed_data)*0.9), replace=False)
train_data, test_data = processed_data.iloc[sample], processed_data.drop(sample)

print("Number of training samples is", len(train_data))
print("Number of testing samples is", len(test_data))
print(train_data[:10])
print(t...
student-admissions/StudentAdmissions.ipynb
ktmud/deep-learning
mit
Splitting the data into features and targets (labels) Now, as a final step before the training, we'll split the data into features (X) and targets (y).
features = train_data.drop('admit', axis=1)
targets = train_data['admit']
features_test = test_data.drop('admit', axis=1)
targets_test = test_data['admit']

print(features[:10])
print(targets[:10])
student-admissions/StudentAdmissions.ipynb
ktmud/deep-learning
mit
Training the 2-layer Neural Network The following function trains the 2-layer neural network. First, we'll write some helper functions.
# Activation (sigmoid) function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_prime(x):
    return sigmoid(x) * (1-sigmoid(x))

def error_formula(y, output):
    return - y*np.log(output) - (1 - y) * np.log(1-output)
student-admissions/StudentAdmissions.ipynb
ktmud/deep-learning
mit
TODO: Backpropagate the error Now it's your turn to shine. Write the error term. Remember that this is given by the equation $$ -(y-\hat{y}) \sigma'(x) $$
# TODO: Write the error term formula
def error_term_formula(y, output):
    # for a sigmoid output unit, sigma'(x) = output * (1 - output)
    return (y - output) * output * (1 - output)

# Neural Network hyperparameters
epochs = 1000
learnrate = 0.5

# Training function
def train_nn(features, targets, epochs, learnrate):
    # Use the same seed to make debugging easier
    np.random.seed(42)

    n_records, n_features...
student-admissions/StudentAdmissions.ipynb
ktmud/deep-learning
mit
Calculating the Accuracy on the Test Data
# Calculate accuracy on test data
test_out = sigmoid(np.dot(features_test, weights))
predictions = test_out > 0.5
accuracy = np.mean(predictions == targets_test)
print("Prediction accuracy: {:.3f}".format(accuracy))
student-admissions/StudentAdmissions.ipynb
ktmud/deep-learning
mit
Parameters
# number of realizations along which to average the psd estimate
n_real = 100

# modulation scheme and constellation points
constellation = [ -1, 1 ]

# number of symbols
n_symb = 100
t_symb = 1.0

chips_per_symbol = 8
samples_per_chip = 8
samples_per_symbol = samples_per_chip * chips_per_symbol

# parameters for...
nt1/vorlesung/extra/dsss.ipynb
kit-cel/wt
gpl-2.0
Real data-modulated Tx-signal
# define rectangular function responses
rect = np.ones( samples_per_symbol )
rect /= np.linalg.norm( rect )

# number of realizations along which to average the psd estimate
n_real = 10

# initialize two-dimensional field for collecting several realizations along which to average
RECT_PSD = np.zeros( (n_real, N_...
nt1/vorlesung/extra/dsss.ipynb
kit-cel/wt
gpl-2.0
Selecting Asset Data Check out the QuantConnect docs to learn how to select asset data.
spy = qb.AddEquity("SPY")
eur = qb.AddForex("EURUSD")
btc = qb.AddCrypto("BTCUSD")
fxv = qb.AddData[FxcmVolume]("EURUSD_Vol", Resolution.Hour)
Jupyter/KitchenSinkQuantBookTemplate.ipynb
Jay-Jay-D/LeanSTP
apache-2.0
Historical Data Requests We can use the QuantConnect API to make Historical Data Requests. The data will be presented as multi-index pandas.DataFrame where the first index is the Symbol. For more information, please follow the link.
# Gets historical data from the subscribed assets, the last 360 datapoints with daily resolution
h1 = qb.History(qb.Securities.Keys, 360, Resolution.Daily)

# Plot closing prices from "SPY"
h1.loc["SPY"]["close"].plot()

# Gets historical data from the subscribed assets, from the last 30 days with daily resolution
h2 ...
Jupyter/KitchenSinkQuantBookTemplate.ipynb
Jay-Jay-D/LeanSTP
apache-2.0
Historical Options Data Requests
- Select the option data
- Set the filter; otherwise the default, SetFilter(-1, 1, timedelta(0), timedelta(35)), will be used
- Get the OptionHistory, an object that has information about the historical options data
goog = qb.AddOption("GOOG")
goog.SetFilter(-2, 2, timedelta(0), timedelta(180))

option_history = qb.GetOptionHistory(goog.Symbol, datetime(2017, 1, 4))
print (option_history.GetStrikes())
print (option_history.GetExpiryDates())
h7 = option_history.GetAllData()
Jupyter/KitchenSinkQuantBookTemplate.ipynb
Jay-Jay-D/LeanSTP
apache-2.0
Historical Future Data Requests
- Select the future data
- Set the filter; otherwise the default, SetFilter(timedelta(0), timedelta(35)), will be used
- Get the FutureHistory, an object that has information about the historical future data
es = qb.AddFuture("ES")
es.SetFilter(timedelta(0), timedelta(180))

future_history = qb.GetFutureHistory(es.Symbol, datetime(2017, 1, 4))
print (future_history.GetExpiryDates())
h7 = future_history.GetAllData()
Jupyter/KitchenSinkQuantBookTemplate.ipynb
Jay-Jay-D/LeanSTP
apache-2.0
Get Fundamental Data
GetFundamental([symbol], selector, start_date = datetime(1998,1,1), end_date = datetime.now())
We will get a pandas.DataFrame with fundamental data.
data = qb.GetFundamental(["AAPL","AIG","BAC","GOOG","IBM"], "ValuationRatios.PERatio")
data
Jupyter/KitchenSinkQuantBookTemplate.ipynb
Jay-Jay-D/LeanSTP
apache-2.0
Indicators We can easily get the indicator of a given symbol with QuantBook. For all indicators, please check out the QuantConnect Indicators Reference Table.
# Example with BB, it is a datapoint indicator
# Define the indicator
bb = BollingerBands(30, 2)

# Gets historical data of indicator
bbdf = qb.Indicator(bb, "SPY", 360, Resolution.Daily)

# drop undesired fields
bbdf = bbdf.drop('standarddeviation', 1)

# Plot
bbdf.plot()

# For EURUSD
bbdf = qb.Indicator(bb, "EURUSD"...
Jupyter/KitchenSinkQuantBookTemplate.ipynb
Jay-Jay-D/LeanSTP
apache-2.0
Quickstart The easiest way to get started with using emcee is to use it for a project. To get you started, here’s an annotated, fully-functional example that demonstrates a standard usage pattern. How to sample a multi-dimensional Gaussian We’re going to demonstrate how you might draw samples from the multivariate Gaus...
import numpy as np
docs/_static/notebooks/quickstart.ipynb
johnbachman/emcee
mit
Then, we’ll code up a Python function that returns the density $p(\vec{x})$ for specific values of $\vec{x}$, $\vec{\mu}$ and $\Sigma^{-1}$. In fact, emcee actually requires the logarithm of $p$. We’ll call it log_prob:
def log_prob(x, mu, cov):
    diff = x - mu
    return -0.5 * np.dot(diff, np.linalg.solve(cov, diff))
docs/_static/notebooks/quickstart.ipynb
johnbachman/emcee
mit
It is important that the first argument of the probability function is the position of a single "walker" (an N-dimensional numpy array). The following arguments are going to be constant every time the function is called and the values come from the args parameter of our :class:EnsembleSampler that we'll see soon. Now, w...
ndim = 5

np.random.seed(42)
means = np.random.rand(ndim)

cov = 0.5 - np.random.rand(ndim ** 2).reshape((ndim, ndim))
cov = np.triu(cov)
cov += cov.T - np.diag(cov.diagonal())
cov = np.dot(cov, cov)
docs/_static/notebooks/quickstart.ipynb
johnbachman/emcee
mit
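As a quick sanity check (an addition, not part of the original tutorial), the construction above should yield a symmetric positive-definite matrix, which a Cholesky factorization confirms:

```python
assert np.allclose(cov, cov.T)  # symmetric by construction
np.linalg.cholesky(cov)         # raises LinAlgError if cov is not positive definite
```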
and where cov is $\Sigma$. How about we use 32 walkers? Before we go on, we need to guess a starting point for each of the 32 walkers. This position will be a 5-dimensional vector so the initial guess should be a 32-by-5 array. It's not a very good guess but we'll just guess a random number between 0 and 1 for each com...
nwalkers = 32
p0 = np.random.rand(nwalkers, ndim)
docs/_static/notebooks/quickstart.ipynb
johnbachman/emcee
mit
Now that we've gotten past all the bookkeeping stuff, we can move on to the fun stuff. The main interface provided by emcee is the :class:EnsembleSampler object so let's get ourselves one of those:
import emcee

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=[means, cov])
docs/_static/notebooks/quickstart.ipynb
johnbachman/emcee
mit
Remember how our function log_prob required two extra arguments when it was called? By setting up our sampler with the args argument, we're saying that the probability function should be called as:
log_prob(p0[0], means, cov)
docs/_static/notebooks/quickstart.ipynb
johnbachman/emcee
mit
If we didn't provide any args parameter, the calling sequence would be log_prob(p0[0]) instead. It's generally a good idea to run a few "burn-in" steps in your MCMC chain to let the walkers explore the parameter space a bit and get settled into the maximum of the density. We'll run a burn-in of 100 steps (yep, I just m...
state = sampler.run_mcmc(p0, 100)
sampler.reset()
docs/_static/notebooks/quickstart.ipynb
johnbachman/emcee
mit
You'll notice that I saved the final position of the walkers (after the 100 steps) to a variable called state. You can check out what will be contained in the other output variables by looking at the documentation for the :func:EnsembleSampler.run_mcmc function. The call to the :func:EnsembleSampler.reset method clears...
sampler.run_mcmc(state, 10000);
docs/_static/notebooks/quickstart.ipynb
johnbachman/emcee
mit
The samples can be accessed using the :func:EnsembleSampler.get_chain method. This will return an array with the shape (10000, 32, 5) giving the parameter values for each walker at each step in the chain. Take note of that shape and make sure that you know where each of those numbers comes from. You can make histograms ...
import matplotlib.pyplot as plt

samples = sampler.get_chain(flat=True)
plt.hist(samples[:, 0], 100, color="k", histtype="step")
plt.xlabel(r"$\theta_1$")
plt.ylabel(r"$p(\theta_1)$")
plt.gca().set_yticks([]);
docs/_static/notebooks/quickstart.ipynb
johnbachman/emcee
mit
Another good test of whether or not the sampling went well is to check the mean acceptance fraction of the ensemble using the :func:EnsembleSampler.acceptance_fraction property:
print("Mean acceptance fraction: {0:.3f}".format(np.mean(sampler.acceptance_fraction)))
docs/_static/notebooks/quickstart.ipynb
johnbachman/emcee
mit
and the integrated autocorrelation time (see the :ref:autocorr tutorial for more details)
print( "Mean autocorrelation time: {0:.3f} steps".format( np.mean(sampler.get_autocorr_time()) ) )
docs/_static/notebooks/quickstart.ipynb
johnbachman/emcee
mit
Load previously saved data In the previous notebook, we had saved the data in a binary format. Let us try and load the data back.
interactions_ts = gl.TimeSeries("data/user_activity_data.ts/")
users = gl.SFrame("data/users.sf/")
dss-2016/churn_prediction/churn-tutorial.ipynb
turi-code/tutorials
apache-2.0
Training a churn predictor We define churn to be no activity within a period of time (called the churn_period). Hence, a user/customer is said to have churned if a period of activity is followed by no activity for a churn_period (for example, 30 days). <img src="https://dato.com/learn/userguide/churn_prediction/images/...
churn_period_oct = datetime.datetime(year = 2011, month = 10, day = 1)
dss-2016/churn_prediction/churn-tutorial.ipynb
turi-code/tutorials
apache-2.0
Making a train-validation split Next, we perform a train-validation split where we randomly split the data such that one split contains data for a fraction of the users while the second split contains all data for the rest of the users.
(train, valid) = gl.churn_predictor.random_split(interactions_ts, user_id = 'CustomerID', fraction = 0.9, seed = 12)

print "Users in the training dataset : %s" % len(train['CustomerID'].unique())
print "Users in the validation dataset : %s" % len(valid['CustomerID'].unique())
dss-2016/churn_prediction/churn-tutorial.ipynb
turi-code/tutorials
apache-2.0
Training a churn predictor model
model = gl.churn_predictor.create(train, user_id='CustomerID',
                                  user_data = users, time_boundaries = [churn_period_oct])
model
dss-2016/churn_prediction/churn-tutorial.ipynb
turi-code/tutorials
apache-2.0
Consuming predictions made by the model Here the question to ask is: will they churn after a certain period of time? To validate, we can check whether the user was active after that evaluation period. Voila! I was confusing it with expiration time (customer churn, not usage churn).
predictions = model.predict(valid, user_data=users)
predictions

predictions['probability'].show()
dss-2016/churn_prediction/churn-tutorial.ipynb
turi-code/tutorials
apache-2.0
Evaluating the model
metrics = model.evaluate(valid, user_data=users, time_boundary=churn_period_oct)
metrics

model.save('data/churn_model.mdl')
dss-2016/churn_prediction/churn-tutorial.ipynb
turi-code/tutorials
apache-2.0
The :class:`Info <mne.Info>` data structure
The :class:`Info <mne.Info>` data object is typically created when data is imported into MNE-Python and contains details such as:
- date, subject information, and other recording details
- the sampling rate
- information about the data channels (name, type, position, etc.)...
import mne
import os.path as op
0.15/_downloads/plot_info.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
:class:mne.Info behaves as a nested Python dictionary:
# Read the info object from an example recording
info = mne.io.read_info(
    op.join(mne.datasets.sample.data_path(), 'MEG', 'sample',
            'sample_audvis_raw.fif'), verbose=False)
0.15/_downloads/plot_info.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
List all the fields in the info object
print('Keys in info dictionary:\n', info.keys())
0.15/_downloads/plot_info.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Obtain the sampling rate of the data
print(info['sfreq'], 'Hz')
0.15/_downloads/plot_info.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
List all information about the first data channel
print(info['chs'][0])
0.15/_downloads/plot_info.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Obtaining subsets of channels There are a number of convenience functions to obtain channel indices, given an :class:mne.Info object. Get channel indices by name
channel_indices = mne.pick_channels(info['ch_names'], ['MEG 0312', 'EEG 005'])
0.15/_downloads/plot_info.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Get channel indices by regular expression
channel_indices = mne.pick_channels_regexp(info['ch_names'], 'MEG *')
0.15/_downloads/plot_info.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Channel types
MNE supports different channel types:
- eeg : For EEG channels with data stored in Volts (V)
- meg (mag) : For MEG magnetometer channels stored in Tesla (T)
- meg (grad) : For MEG gradiometer channels stored in Tesla/Meter (T/m)
- ecg : For ECG channels stored in Volts (V)
- seeg : For Stereotactic EEG channels ...
channel_indices = mne.pick_types(info, meg=True)  # MEG only
channel_indices = mne.pick_types(info, eeg=True)  # EEG only
0.15/_downloads/plot_info.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
MEG gradiometers and EEG channels
channel_indices = mne.pick_types(info, meg='grad', eeg=True)
0.15/_downloads/plot_info.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Get a dictionary of channel indices, grouped by channel type
channel_indices_by_type = mne.io.pick.channel_indices_by_type(info)
print('The first three magnetometers:', channel_indices_by_type['mag'][:3])
0.15/_downloads/plot_info.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Obtaining information about channels
# Channel type of a specific channel
channel_type = mne.io.pick.channel_type(info, 75)
print('Channel #75 is of type:', channel_type)
0.15/_downloads/plot_info.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Channel types of a collection of channels
meg_channels = mne.pick_types(info, meg=True)[:10]
channel_types = [mne.io.pick.channel_type(info, ch) for ch in meg_channels]
print('First 10 MEG channels are of type:\n', channel_types)
0.15/_downloads/plot_info.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Dropping channels from an info structure It is possible to limit the info structure to only include a subset of channels with the :func:mne.pick_info function:
# Only keep EEG channels
eeg_indices = mne.pick_types(info, meg=False, eeg=True)
reduced_info = mne.pick_info(info, eeg_indices)
print(reduced_info)
0.15/_downloads/plot_info.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Webdriver We mainly use selenium's Webdriver. We can first check which browsers Selenium.Webdriver supports as follows:
from selenium import webdriver

help(webdriver)
code/04.PythonCrawler_selenium.ipynb
computational-class/cjc2016
mit
Downloading and setting up Webdriver For Chrome, the required webdriver can be downloaded from http://chromedriver.storage.googleapis.com/index.html The webdriver must be placed on the system path: - make sure anaconda is on the system path - put the downloaded webdriver in Anaconda's bin folder PhantomJS PhantomJS is a WebKit-based server-side JavaScript API that supports the Web without requiring a browser; it is fast and natively supports various Web standards: DOM handling, CSS selectors, JSON, and so on. PhantomJS can be used for page automation, network monitoring, web page screenshots, and headless testing.
#browser = webdriver.Firefox() # open the Firefox browser
browser = webdriver.Chrome() # open the Chrome browser
code/04.PythonCrawler_selenium.ipynb
computational-class/cjc2016
mit
Visiting a page
from selenium import webdriver

browser = webdriver.Chrome()
browser.get("http://music.163.com")
print(browser.page_source)
#browser.close()
code/04.PythonCrawler_selenium.ipynb
computational-class/cjc2016
mit
Finding elements Finding a single element
from selenium import webdriver

browser = webdriver.Chrome()
browser.get("http://music.163.com")
input_first = browser.find_element_by_id("g_search")
input_second = browser.find_element_by_css_selector("#g_search")
input_third = browser.find_element_by_xpath('//*[@id="g_search"]')
print(input_first)
print(input_second...
code/04.PythonCrawler_selenium.ipynb
computational-class/cjc2016
mit
Here we fetch the same element in three different ways: the first by id, the second by CSS selector, and the third by XPath selector; the results are all identical. Commonly used element-finding methods: find_element_by_name find_element_by_id find_element_by_xpath find_element_by_link_text find_element_by_partial_link_text find_element_by_tag_name find_element_by_class_name find_element_by_css_selector
# The following is a more general approach; note that the By module must be imported
from selenium.webdriver.common.by import By

browser = webdriver.Chrome()
browser.get("http://music.163.com")
input_first = browser.find_element(By.ID,"g_search")
print(input_first)
browser.close()
code/04.PythonCrawler_selenium.ipynb
computational-class/cjc2016
mit
Finding multiple elements The only real difference from single-element lookup is the method name: find_elements instead of find_element; usage is otherwise the same. One example as a demonstration:
browser = webdriver.Chrome()
browser.get("http://music.163.com")
lis = browser.find_elements_by_css_selector('body')
print(lis)
browser.close()
code/04.PythonCrawler_selenium.ipynb
computational-class/cjc2016
mit
Of course, the above can also be done by importing from selenium.webdriver.common.by import By, in the form lis = browser.find_elements(By.CSS_SELECTOR, '.service-bd li') The same methods available for single-element lookup also exist for multiple elements: - find_elements_by_name - find_elements_by_id - find_elements_by_xpath - find_elements_by_link_text - find_elements_by_partial_link_text - find_elements_by_tag_na...
from selenium import webdriver
import time

browser = webdriver.Chrome()
browser.get("https://music.163.com/")
input_str = browser.find_element_by_id('srch')
input_str.send_keys("周杰伦")
time.sleep(3) # sleep to mimic a human searching
input_str.clear()
input_str.send_keys("林俊杰")
code/04.PythonCrawler_selenium.ipynb
computational-class/cjc2016
mit
From the run you can see that the program automatically opens the Chrome browser, opens Taobao, types "ipad", deletes it, types "MacBook pro" instead, and clicks search. All of Selenium's API docs: http://selenium-python.readthedocs.io/api.html#module-selenium.webdriver.common.action_chains Executing JavaScript This is a very useful method: you can call js directly to perform certain operations. The example below opens Zhihu, then uses js to scroll to the bottom of the page and pop up an alert box.
from selenium import webdriver

browser = webdriver.Chrome()
browser.get("https://www.zhihu.com/explore/")
browser.execute_script('window.scrollTo(0, document.body.scrollHeight)')
browser.execute_script('alert("To Bottom")')
code/04.PythonCrawler_selenium.ipynb
computational-class/cjc2016
mit
An example
```python
from selenium import webdriver

browser = webdriver.Chrome()
browser.get("https://www.privco.com/home/login") # the site requires a VPN / proxy to open
username = 'fake_username'
password = 'fake_password'
browser.find_element_by_id("username").clear()
browser.find_element_by_id("username").send_keys(username)
browser.find_element_by_i...
```
# url = "https://www.privco.com/private-company/329463"

def download_excel(url):
    browser.get(url)
    name = url.split('/')[-1]
    title = browser.title
    source = browser.page_source
    with open(name+'.html', 'w') as f:
        f.write(source)
    try:
        soup = BeautifulSoup(source, 'html.parser')
        ...
code/04.PythonCrawler_selenium.ipynb
computational-class/cjc2016
mit
In the code, we assign a lambda function to the variable f. The function specifies that on each possible input it receives, the resulting function that is applied is a multiplication by 2. Hence f(1)=2, f(2)=4, etc. Note that invoking f only works if we provide an argument that can be combined with the * 2 operation. For ...
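The cell that defines f is not included in this excerpt; a minimal sketch of what it presumably looks like:

```python
f = lambda x: x * 2

f(1)  # 2
f(2)  # 4
```

Since * 2 is also defined for strings (repetition), the call below returns 'PetePete'.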
f('Pete')
notebooks/2_event_data_filtering.ipynb
pm4py/pm4py-core
gpl-3.0
Filter and Map Lambda functions allow us to write short, type-independent functions. Given a list of objects, Python provides two core functions that can apply a given lambda function on each element of the given list (in fact, any iterable): filter(f,l) applies the given lambda function f as a filter on the iterable l....
l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
filter(lambda n: n >= 5, l)
notebooks/2_event_data_filtering.ipynb
pm4py/pm4py-core
gpl-3.0
The previous example needs little to no explanation, i.e., the filter retains all numbers in the list greater than or equal to five. What is interesting, however, is that the resulting object is not a list (or an iterable), but rather a filter object. Such an object can easily be transformed into a list by wrapping i...
list(filter(lambda n: n >= 5, l))
notebooks/2_event_data_filtering.ipynb
pm4py/pm4py-core
gpl-3.0
The same holds for the map() function:
map(lambda n: n * 3, l)
list(map(lambda n: n * 3, l))
notebooks/2_event_data_filtering.ipynb
pm4py/pm4py-core
gpl-3.0
Observe that the previous map function simply multiplies each element of list l by three. Lambda-Based Filtering in pm4py In pm4py, event log objects mimic lists of traces, which in turn mimic lists of events. Clearly, lambda functions can therefore be applied to event logs and traces. However, as we have shown in the...
import pm4py

log = pm4py.read_xes('data/running_example.xes')
# inspect the length of each trace using a generic map function
list(map(lambda t: len(t), log))
notebooks/2_event_data_filtering.ipynb
pm4py/pm4py-core
gpl-3.0
As we can see, there are four traces of length 5, one trace of length 9, and one trace of length 13. Let's retain all traces with a length greater than 5.
lf = pm4py.filter_log(lambda t: len(t) > 5, log)
list(map(lambda t: len(t), lf))
notebooks/2_event_data_filtering.ipynb
pm4py/pm4py-core
gpl-3.0
The traces of length 9 and 13 have repeated behavior in them, i.e., the reinitiate request activity has been performed at least once:
list(map(lambda t: (len(t), len(list(filter(lambda e: e['concept:name'] == 'reinitiate request', t)))), log))
notebooks/2_event_data_filtering.ipynb
pm4py/pm4py-core
gpl-3.0
Observe that the map function maps each trace onto a tuple. The first element describes the length of the trace. The second element describes the number of occurrences of the activity reinitiate request. Observe that we obtain said counter by filtering the trace, i.e., by retaining only those events that describe the rei...
print(len(log))
lf = pm4py.filter_log(lambda t: len(t) > 5, log)
print(len(lf))

print(len(log[0]))  # log[0] fetches the 1st trace
tf = pm4py.filter_trace(lambda e: e['concept:name'] in {'register request', 'pay compensation'}, log[0])
print(len(tf))
print(len(log[0]))

ls = pm4py.sort_log(log, lambda t: len(t))
print(...
notebooks/2_event_data_filtering.ipynb
pm4py/pm4py-core
gpl-3.0
Specific Filters There are various pre-built filters in PM4Py, which make commonly needed process mining filtering functionality a lot easier. In the upcoming overview, we briefly present these functions. We describe how to call them, their main input parameters, and their return objects. Note that all of the filt...
pm4py.filter_start_activities(log, {'register request'})
pm4py.filter_start_activities(log, {'register request TYPO!'})

import pandas
ldf = pm4py.format_dataframe(pandas.read_csv('data/running_example.csv', sep=';'),
                            case_id='case_id', activity_key='activity', timestamp_key='timestamp')
...
notebooks/2_event_data_filtering.ipynb
pm4py/pm4py-core
gpl-3.0
End Activities filter_end_activities(log, activities, retain=True) retains (or drops) the traces that contain the given activity as the final event. For example, we can count the cases that end with a "payment of the compensation":
len(pm4py.filter_end_activities(log, 'pay compensation'))
notebooks/2_event_data_filtering.ipynb
pm4py/pm4py-core
gpl-3.0
Event Attribute Values filter_event_attribute_values(log, attribute_key, values, level="case", retain=True) retains (or drops) traces (or events) based on a given collection of values that need to be matched for the given attribute_key. If level=='case', complete traces are matched (or dropped if retain==False) that...
# retain any case that has either Pete or Mike working on it
lf = pm4py.filter_event_attribute_values(log, 'org:resource', {'Pete', 'Mike'})
list(map(lambda t: list(map(lambda e: e['org:resource'], t)), lf))

# retain only those events that have Pete or Mike working on it
lf = pm4py.filter_event_attribute_values(log, '...
notebooks/2_event_data_filtering.ipynb
pm4py/pm4py-core
gpl-3.0
Plot items Lines, Bars, Points and Right yAxis
x = [1, 4, 6, 8, 10]
y = [3, 6, 4, 5, 9]
pp = Plot(title='Bars, Lines, Points and 2nd yAxis',
          xLabel="xLabel",
          yLabel="yLabel",
          legendLayout=LegendLayout.HORIZONTAL,
          legendPosition=LegendPosition(position=LegendPosition.Position.RIGHT),
          omitCheckboxes=True)
pp.add(...
doc/python/ChartingAPI.ipynb
jpallas/beakerx
apache-2.0
Lines, Points with Pandas
plot = Plot(title= "Pandas line")
plot.add(Line(y= tableRows.y1, width= 2, color= Color(216, 154, 54)))
plot.add(Line(y= tableRows.y10, width= 2, color= Color.lightGray))
plot

plot = Plot(title= "Pandas Series")
plot.add(Line(y= pd.Series([0, 6, 1, 5, 2, 4, 3]), width=2))

plot = Plot(title= "Bars")
cs = [Color(255, ...
doc/python/ChartingAPI.ipynb
jpallas/beakerx
apache-2.0
Areas, Stems and Crosshair
ch = Crosshair(color=Color.black, width=2, style=StrokeType.DOT)
plot = Plot(crosshair=ch)
y1 = [4, 8, 16, 20, 32]
base = [2, 4, 8, 10, 16]
cs = [Color.black, Color.orange, Color.gray, Color.yellow, Color.pink]
ss = [StrokeType.SOLID, StrokeType.SOLID, StrokeType.DASH, StrokeType.DOT, Stroke...
doc/python/ChartingAPI.ipynb
jpallas/beakerx
apache-2.0
Constant Lines, Constant Bands
p = Plot()
p.add(Line(y=[-1, 1]))
p.add(ConstantLine(x=0.65, style=StrokeType.DOT, color=Color.blue))
p.add(ConstantLine(y=0.1, style=StrokeType.DASHDOT, color=Color.blue))
p.add(ConstantLine(x=0.3, y=0.4, color=Color.gray, width=5, showLabel=True))

Plot().add(Line(y=[-3, 1, 3, 4, 5])).add(ConstantBand(x=[1, 2], y=[1...
doc/python/ChartingAPI.ipynb
jpallas/beakerx
apache-2.0
TimePlot
import time

millis = current_milli_time()
hour = round(1000 * 60 * 60)
xs = []
ys = []
for i in range(11):
    xs.append(millis + hour * i)
    ys.append(i)

plot = TimePlot(timeZone="America/New_York")
# list of milliseconds
plot.add(Points(x=xs, y=ys, size=10, displayName="milliseconds"))

plot = TimePlot()
plot.ad...
doc/python/ChartingAPI.ipynb
jpallas/beakerx
apache-2.0
numpy datetime64
y = pd.Series([7.5, 7.9, 7, 8.7, 8, 8.5])
dates = [np.datetime64('2015-02-01'),
         np.datetime64('2015-02-02'),
         np.datetime64('2015-02-03'),
         np.datetime64('2015-02-04'),
         np.datetime64('2015-02-05'),
         np.datetime64('2015-02-06')]
plot = TimePlot()
plot.add(Line(x=dates, y=y))
doc/python/ChartingAPI.ipynb
jpallas/beakerx
apache-2.0
Timestamp
y = pd.Series([7.5, 7.9, 7, 8.7, 8, 8.5])
dates = pd.Series(['2015-02-01', '2015-02-02', '2015-02-03',
                   '2015-02-04', '2015-02-05', '2015-02-06'],
                  dtype='datetime64[ns]')
plot = TimePlot()
plot.add(Line(x=da...
doc/python/ChartingAPI.ipynb
jpallas/beakerx
apache-2.0
Datetime and date
import datetime

y = pd.Series([7.5, 7.9, 7, 8.7, 8, 8.5])
dates = [datetime.date(2015, 2, 1),
         datetime.date(2015, 2, 2),
         datetime.date(2015, 2, 3),
         datetime.date(2015, 2, 4),
         datetime.date(2015, 2, 5),
         datetime.date(2015, 2, 6)]
plot = TimePlot()
plot.add(Line(x=dates, y=y)...
doc/python/ChartingAPI.ipynb
jpallas/beakerx
apache-2.0
NanoPlot
millis = current_milli_time()
nanos = millis * 1000 * 1000
xs = []
ys = []
for i in range(11):
    xs.append(nanos + 7 * i)
    ys.append(i)

nanoplot = NanoPlot()
nanoplot.add(Points(x=xs, y=ys))
doc/python/ChartingAPI.ipynb
jpallas/beakerx
apache-2.0
Stacking
y1 = [1,5,3,2,3]
y2 = [7,2,4,1,3]
p = Plot(title='Plot with XYStacker', initHeight=200)
a1 = Area(y=y1, displayName='y1')
a2 = Area(y=y2, displayName='y2')
stacker = XYStacker()
p.add(stacker.stack([a1, a2]))
doc/python/ChartingAPI.ipynb
jpallas/beakerx
apache-2.0
SimpleTimePlot
SimpleTimePlot(tableRows, ["y1", "y10"],  # column names
               timeColumn="time",  # time is default value for a timeColumn
               yLabel="Price",
               displayNames=["1 Year", "10 Year"],
               colors = [[216, 154, 54], Color.lightGray],
               displayLines=True,  # no lines (t...
doc/python/ChartingAPI.ipynb
jpallas/beakerx
apache-2.0
Second Y Axis The plot can have two y-axes. Just add a YAxis to the plot object, and specify its label. Then for data that should be scaled according to this second axis, specify the property yAxis with a value that coincides with the label given. You can use upperMargin and lowerMargin to restrict the range of the dat...
p = TimePlot(xLabel= "Time", yLabel= "Interest Rates")
p.add(YAxis(label= "Spread", upperMargin= 4))
p.add(Area(x= tableRows.time, y= tableRows.spread,
           displayName= "Spread", yAxis= "Spread", color= Color(180, 50, 50, 128)))
p.add(Line(x= tableRows.time, y= tableRows.m3, displayName= "3 Month"))
p.add(Lin...
doc/python/ChartingAPI.ipynb
jpallas/beakerx
apache-2.0
Combined Plot
import math

points = 100
logBase = 10
expys = []
xs = []
for i in range(0, points):
    xs.append(i / 15.0)
    expys.append(math.exp(xs[i]))

cplot = CombinedPlot(xLabel= "Linear")
logYPlot = Plot(title= "Linear x, Log y", yLabel= "Log", logY= True, yLogBase= logBase)
logYPlot.add(Line(x= xs, y= expys, displayName= "f(x)...
doc/python/ChartingAPI.ipynb
jpallas/beakerx
apache-2.0
Hat potential The following potential is often used in Physics and other fields to describe symmetry breaking, and is known as the "hat potential": $$ V(x) = -a x^2 + b x^4 $$ Write a function hat(x,a,b) that returns the value of this function:
def hat(x, a, b):
    v = -a*x**2 + b*x**4
    return v

assert hat(0.0, 1.0, 1.0) == 0.0
assert hat(1.0, 10.0, 1.0) == -9.0
assignments/assignment11/OptimizationEx01.ipynb
LimeeZ/phys292-2015-work
mit
Plot this function over the range $x\in\left[-3,3\right]$ with $b=1.0$ and $a=5.0$:
a = 5.0
b = 1.0

x1 = np.arange(-3, 3, 0.1)
plt.plot(x1, hat(x1, a, b))

assert True # leave this to grade the plot
assignments/assignment11/OptimizationEx01.ipynb
LimeeZ/phys292-2015-work
mit
Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$. Use scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima. Print the x values of the minima. Plot the function as a blue line. On the same axes, show the minima...
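As a sanity check on what minimize should return (this derivation is an addition, not part of the assignment text), set the derivative of the potential to zero: $$ V'(x) = -2 a x + 4 b x^{3} = 0 \quad\Longrightarrow\quad x = 0 \;\text{(local maximum)}, \quad x = \pm\sqrt{\frac{a}{2b}} = \pm\sqrt{2.5} \approx \pm 1.581\,, $$ so for $a = 5.0$ and $b = 1.0$ the two minima should land near $\pm 1.58$.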
def hat(x):
    b = 1
    a = 5
    v = -a*x**2 + b*x**4
    return v

xmin1 = opt.minimize(hat, -1.5)['x'][0]
xmin2 = opt.minimize(hat, 1.5)['x'][0]
xmins = np.array([xmin1, xmin2])
print(xmin1)
print(xmin2)

x1 = np.arange(-3, 3, 0.1)
plt.plot(x1, hat(x1))
plt.scatter(xmins, hat(xmins), c='r', marker='o')
plt.grid(True...
assignments/assignment11/OptimizationEx01.ipynb
LimeeZ/phys292-2015-work
mit
Main program Replace the comments with the necessary commands:
# move forward
# turn
# move forward
# turn
# move forward
# turn
# move forward
# turn
# stop
task/quadrat.ipynb
ecervera/mindstorms-nb
mit
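A minimal sketch of one possible completion, assuming hypothetical helpers forward(), turn() and stop() (the notebook's actual motor commands, defined earlier in the course, may have different, Catalan, names):

```python
# hypothetical helpers; substitute the motor commands defined by the course module
forward()  # drive one side of the square
turn()     # rotate 90 degrees
forward()
turn()
forward()
turn()
forward()
turn()
stop()
```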