One confusing aspect of this loop is range(1,4): why does it loop from 1 to 3 rather than 1 to 4? It has to do with the fact that computers start counting at zero. The easier way to understand it is that if you subtract the two numbers, you get the number of times the loop will run. For example, 4-1 == 3.

1.1 You Code
In th...
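A quick sketch of the rule above: the body of a range(start, stop) loop runs stop - start times.

```python
# range(1, 4) yields 1, 2, 3 -- that is 4 - 1 == 3 iterations.
count = 0
for i in range(1, 4):
    print(i)        # prints 1, then 2, then 3
    count += 1
print(count)        # 3
```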
# TODO Write code here
content/lessons/04-Iterations/LAB-Iterations.ipynb
IST256/learn-python
mit
Indefinite loops With indefinite loops we do not know how many times the program will execute. This is typically based on user action, and therefore our loop is subject to the whims of whoever interacts with it. Most applications like spreadsheets, photo editors, and games use indefinite loops. They'll run on your comp...
name = ""
while name != 'mike':
    name = input("Say my name! : ")
    print(f"Nope, my name is not {name}!")
In the above example, the loop will keep on looping until we enter mike. The value mike is called the sentinel value - a value we look out for, and when it appears we stop the loop. For this reason indefinite loops are also known as sentinel-controlled loops. The classic problem with indefinite/sentinel controlled loops...
while True:
    name = input("Say my name!: ")
    if name == 'mike':
        break
    print("Nope, my name is not %s!" % (name))
1.2 You Code: Debug This loop
This program should count the number of times you input the value ni. As soon as you enter a value other than ni, the program stops looping and prints the count of ni's.

Example Run:
What say you? ni
What say you? ni
What say you? ni
What say you? nay
You said 'ni' 3 times.

The problem ...
#TODO Debug this code
nicount=0
while True:
    say = input "What say you? ")
    if say == 'ni':
        break
    nicount = 1
print(f"You said 'ni' P {nicount} times.")
Multiple exit conditions This indefinite loop pattern makes it easy to add additional exit conditions. For example, here's the program again, but it now stops when you say my name or type in 3 wrong names. Make sure to run this program a couple of times to understand what is happening: First enter mike to exit the pr...
times = 0
while True:
    name = input("Say my name!: ")
    times = times + 1
    if name == 'mike':   # sentinel 1
        print("You got it!")
        break
    if times == 3:       # sentinel 2
        print("Game over. Too many tries!")
        break
    print(f"Nope, my name is not {name}")
Counting Characters in Text
Let's conclude the lab with you writing your own program that uses both definite and indefinite loops. This program should input some text and then a character, counting the number of times that character appears in the text. This process repeats until the text entered is empty. The program should work a...
# TODO Write code here
Next, we surround the code we wrote in 1.4 with a sentinel-controlled indefinite loop. The sentinel (the part that exits the loop) is when the text is empty (text == ""). The algorithm is:

loop
    set count to 0
    input the text
    if text is empty, quit loop
    input the search character
    for ch in text
        if ...
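The inner, definite-loop part of the algorithm above can be sketched as a small helper (count_char is a hypothetical name, not part of the lab):

```python
def count_char(text, ch):
    count = 0
    for c in text:       # definite loop over the characters of the text
        if c == ch:
            count += 1
    return count

print(count_char("banana", "a"))  # 3
```

In the lab, this logic sits inside the sentinel-controlled while loop, which keeps asking for text until the empty string is entered.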
# TODO Write Code here:
Metacognition
Rate your comfort level with this week's material so far.
1 ==> I don't understand this at all yet and need extra help. If you choose this please try to articulate that which you do not understand to the best of your ability in the questions and comments section below.
2 ==> I can do this with help or gui...
# run this code to turn in your work!
from coursetools.submission import Submission
Submission().submit()
H2O init
import h2o

h2o.init(max_mem_size=20)  # uses all cores by default
h2o.remove_all()
zillow2017/H2Opy_v0.ipynb
minesh1291/Practicing-Kaggle
gpl-3.0
import xy_train, x_test
import os

xy_tr = h2o.import_file(path=os.path.realpath("../daielee/xy_tr.csv"))
x_test = h2o.import_file(path=os.path.realpath("../daielee/x_test.csv"))
xy_tr_df = xy_tr.as_data_frame(use_pandas=True)
x_test_df = x_test.as_data_frame(use_pandas=True)
print(xy_tr_df.shape, x_test_df.shape)
27-AUG-2017 dl_model

Model Details
dl_model = H2ODeepLearningEstimator(epochs=1000)
dl_model.train(X, y, xy_tr)
=============
* H2ODeepLearningEstimator : Deep Learning
* Model Key: DeepLearning_model_python_1503841734286_1
ModelMetricsRegression: deeplearning
Reported on train data.
MSE: 0.02257823450695032 ...
X = xy_tr.col_names[0:57]
y = xy_tr.col_names[57]
dl_model = H2ODeepLearningEstimator(epochs=1000)
dl_model.train(X, y, xy_tr)
dl_model.summary
sh = dl_model.score_history()
sh = pd.DataFrame(sh)
print(sh.columns)
sh.plot(x='epochs', y=['training_deviance', 'training_mae'])
dl_model.default_params
dl_model.model...
28-AUG-2017 dl_model_list 1
nuron_cnts = [40, 80, 160]
layer_cnts = [1, 2, 3, 4, 5]
acts = ["Tanh", "Maxout", "Rectifier", "RectifierWithDropout"]
models_list = []
m_names_list = []
i = 0
# N 3 * L 5 * A 4 = 60 models
for act in acts:
    for layer_cnt in layer_cnts:
        for nuron_cnt in nuron_cnts:
            m_names_list.append("N:"+str(nuron_cnt)+"L:"+...
split the data 3 ways:
60% for training
20% for validation (hyperparameter tuning)
20% for final testing

We will train the model on one set and use the others to test its validity by ensuring that it can predict accurately on data the model has not been shown. The second set will be used fo...
train_h2o, valid_h2o, test_h2o = xy_tr.split_frame([0.6, 0.2], seed=1234)
28-AUG-2017 dl_model_list 2
nuron_cnts = [40, 80, 160]
layer_cnts = [1, 2, 3, 4, 5]
acts = ["RectifierWithDropout"]  # "Tanh", "Maxout", "Rectifier",
models_list = []
m_names_list = []
time_tkn_wall = []
time_tkn_clk = []
i = 0
# N 3 * L 5 * A 1 = 15 models
for act in acts:
    for layer_cnt in layer_cnts:
        for nuron_cnt in nuron_cnts:
            m_names_l...
time.time() shows that approximately one second of wall-clock time has passed, while time.clock() shows that the CPU time spent on the current process is less than 1 microsecond. time.clock() has a much higher precision than time.time().
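Note that time.clock() was removed in Python 3.8; a sketch of the same wall-clock vs. CPU-time contrast using the modern replacements time.perf_counter() and time.process_time():

```python
import time

start_wall = time.perf_counter()   # wall-clock timer
start_cpu = time.process_time()    # CPU time of the current process

time.sleep(1)                      # sleeping uses wall time but almost no CPU time

wall = time.perf_counter() - start_wall
cpu = time.process_time() - start_cpu
print(f"wall: {wall:.3f}s, cpu: {cpu:.6f}s")   # wall is ~1s, cpu is close to 0
```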
for i in range(len(models_list)-1):
    try:
        sh = models_list[i].score_history()
        sh = pd.DataFrame(sh)
        perform = sh['validation_deviance'].tolist()[-1]
        print(models_list[i].model_id, end=" ")
        print(" clk "+str(time_tkn_clk[i])+" wall "+str(time_tkn_wall[i]), end=" ")
        print(...
28-AUG-2017 dl_model_list 3: 30,40 neurons, 4,5 layers
nuron_cnts = [30, 40, 50]
layer_cnts = [4, 5]
acts = ["RectifierWithDropout"]  # "Tanh", "Maxout", "Rectifier",
dout = 0.5
models_list = []
m_names_list = []
time_tkn_wall = []
time_tkn_clk = []
i = 0
# N 1 * L 10 * A 1 = 10 models
for act in acts:
    for layer_cnt in layer_cnts:
        for nuron_cnt in nuron_cnts:
            m_nam...
tests
dl_pref = dl_model.model_performance(test_data=test)
dl_model.mean
dl_pref.mae()
train.shape
models_list[0].model_id

for i in range(len(models_list)):
    try:
        sh = models_list[i].score_history()
        sh = pd.DataFrame(sh)
        sh.plot(x='epochs', y=['training_mae', 'validation_mae'])
        tr_p...
Predict test_h2o & combine
Predict x_test & combine
import numpy as np
import pandas as pd
import xgboost as xgb
from sklearn.preprocessing import LabelEncoder
import lightgbm as lgb
import gc
from sklearn.linear_model import LinearRegression
import random
import datetime as dt

np.random.seed(17)
random.seed(17)

train = pd.read_csv("../input/train_2016_v2.csv", parse_...
Writing your own callbacks
import tensorflow as tf from tensorflow import keras
site/zh-cn/guide/keras/custom_callback.ipynb
tensorflow/docs-l10n
apache-2.0
Overview of Keras callbacks
All callbacks subclass the keras.callbacks.Callback class and override a set of methods called at various stages of training, testing, and predicting. Callbacks are useful to get a view on the model's internal states and statistics during training.

You can pass a list of callbacks (as the keyword argument callbacks) to the following model methods:
keras.Model.fit()
keras.Model.evaluate()
keras.Model.predict()

Overview of callback methods
Global methods
on_(train|test|predict)_begin(self, logs=None)
Called at the beginning of fit/evaluate/predict.
on_(train|test...
# Define the Keras model to add callbacks to
def get_model():
    model = keras.Sequential()
    model.add(keras.layers.Dense(1, input_dim=784))
    model.compile(
        optimizer=keras.optimizers.RMSprop(learning_rate=0.1),
        loss="mean_squared_error",
        metrics=["mean_absolute_error"],
    )
    return ...
Then, load the MNIST data for training and testing from the Keras datasets API:
# Load example MNIST data and pre-process it
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# Limit the data to 1000 samples
x_train = x_train[:1000]
y_train = y_train[:10...
Next, define a simple custom callback that logs:
when fit/evaluate/predict starts and ends
when each epoch starts and ends
when each training batch starts and ends
when each evaluation (test) batch starts and ends
when each inference (prediction) batch starts and ends
class CustomCallback(keras.callbacks.Callback):
    def on_train_begin(self, logs=None):
        keys = list(logs.keys())
        print("Starting training; got log keys: {}".format(keys))

    def on_train_end(self, logs=None):
        keys = list(logs.keys())
        print("Stop training; got log keys: {}".format(keys...
Let's try it out:
model = get_model()
model.fit(
    x_train,
    y_train,
    batch_size=128,
    epochs=1,
    verbose=0,
    validation_split=0.5,
    callbacks=[CustomCallback()],
)

res = model.evaluate(
    x_test, y_test, batch_size=128, verbose=0, callbacks=[CustomCallback()]
)

res = model.predict(x_test, batch_size=128, callba...
Usage of the logs dict
The logs dict contains the loss value, plus all the metrics at the end of a batch or epoch. Examples include the loss and mean absolute error.
class LossAndErrorPrintingCallback(keras.callbacks.Callback):
    def on_train_batch_end(self, batch, logs=None):
        print(
            "Up to batch {}, the average loss is {:7.2f}.".format(batch, logs["loss"])
        )

    def on_test_batch_end(self, batch, logs=None):
        print(
            "Up to batch {}...
Usage of the self.model attribute
In addition to receiving log information when one of their methods is called, callbacks have access to the model associated with the current round of training/evaluation/inference: self.model.

Here are a few of the things you can do with self.model in a callback:
Set self.model.stop_training = True to immediately interrupt training.
Mutate hyperparameters of the optimizer (available as self.model.optimizer), such as self.model.optimizer.learning_rate.
Save the model at periodic intervals.
Record the output of model.predict() on a few test samples at the end of each epoch, to use as a sanity check during training.
Extract visualizations of intermediate features at the end of each epoch, to monitor...
import numpy as np


class EarlyStoppingAtMinLoss(keras.callbacks.Callback):
    """Stop training when the loss is at its min, i.e. the loss stops decreasing.

    Arguments:
        patience: Number of epochs to wait after min has been hit. After this
            number of no improvement, training stops.
    """

    def __init__...
Learning rate scheduling
In this example, we show how a custom Callback can be used to dynamically change the learning rate of the optimizer during the course of training. See callbacks.LearningRateScheduler for a more general implementation.
class CustomLearningRateScheduler(keras.callbacks.Callback):
    """Learning rate scheduler which sets the learning rate according to schedule.

    Arguments:
        schedule: a function that takes an epoch index (integer, indexed from 0)
            and current learning rate as inputs and returns a new learning ...
Adding Boundary Pores When performing transport simulations it is often useful to have 'boundary' pores attached to the surface(s) of the network where boundary conditions can be applied. When using the Cubic class, two methods are available for doing this: add_boundaries, which is specific for the Cubic class, and ad...
pn.add_boundary_pores(labels=['top', 'bottom'])
examples/tutorials/Intro to OpenPNM - Advanced.ipynb
TomTranter/OpenPNM
mit
Let's quickly visualize this network with the added boundaries:
#NBVAL_IGNORE_OUTPUT
fig = op.topotools.plot_connections(pn, c='r')
fig = op.topotools.plot_coordinates(pn, c='b', fig=fig)
fig.set_size_inches([10, 10])
Adding and Removing Pores and Throats OpenPNM uses a list-based data storage scheme for all properties, including topological connections. One of the benefits of this approach is that adding and removing pores and throats from the network is essentially as simple as adding or removing rows from the data arrays. The o...
Ts = np.random.rand(pn.Nt) < 0.1           # Create a mask with ~10% of throats labeled True
op.topotools.trim(network=pn, throats=Ts)  # Use mask to indicate which throats to trim
When the trim function is called, it automatically checks the health of the network afterwards, so logger messages might appear on the command line if problems were found such as isolated clusters of pores or pores with no throats. This health check is performed by calling the Network's check_network_health method whi...
a = pn.check_network_health()
print(a)
The HealthDict contains several lists including things like duplicate throats and isolated pores, but also a suggestion of which pores to trim to return the network to a healthy state. Also, the HealthDict has a health attribute that is False if any checks fail.
op.topotools.trim(network=pn, pores=a['trim_pores'])
Let's take another look at the network to see the trimmed pores and throats:
#NBVAL_IGNORE_OUTPUT
fig = op.topotools.plot_connections(pn, c='r')
fig = op.topotools.plot_coordinates(pn, c='b', fig=fig)
fig.set_size_inches([10, 10])
Define Geometry Objects The boundary pores we've added to the network should be treated a little bit differently. Specifically, they should have no volume or length (as they are not physically representative of real pores). To do this, we create two separate Geometry objects, one for internal pores and one for the bo...
Ps = pn.pores('*boundary', mode='not')
Ts = pn.throats('*boundary', mode='not')
geom = op.geometry.StickAndBall(network=pn, pores=Ps, throats=Ts, name='intern')

Ps = pn.pores('*boundary')
Ts = pn.throats('*boundary')
boun = op.geometry.Boundary(network=pn, pores=Ps, throats=Ts, name='boun')
The StickAndBall class is preloaded with the pore-scale models to calculate all the necessary size information (pore.diameter, pore.volume, throat.length, throat.diameter, etc.). The Boundary class is special and is only used for the boundary pores. In this class, geometrical properties are set to small fixed values...
air = op.phases.Air(network=pn)
water = op.phases.Water(network=pn)
water['throat.contact_angle'] = 110
water['throat.surface_tension'] = 0.072
Aside: Creating a Custom Phase Class In many cases you will want to create your own fluid, such as an oil or brine, which may be commonly used in your research. OpenPNM cannot predict all the possible scenarios, but luckily it is easy to create a custom Phase class as follows:
from openpnm.phases import GenericPhase

class Oil(GenericPhase):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.add_model(propname='pore.viscosity',
                       model=op.models.misc.polynomial,
                       prop='pore.temperature',
                       a=[1.820...
Creating a Phase class basically involves placing a series of self.add_model commands within the __init__ section of the class definition. This means that when the class is instantiated, all the models are added to itself (i.e. self). **kwargs is a Python trick that captures all arguments in a dict called kwargs and p...
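The **kwargs pattern itself can be illustrated outside OpenPNM, with a toy Base/Child pair (hypothetical names, not part of the library):

```python
class Base:
    def __init__(self, name="base", size=0):
        self.name = name
        self.size = size

class Child(Base):
    def __init__(self, **kwargs):
        # kwargs captures all keyword arguments in a dict;
        # **kwargs unpacks them again for the parent constructor
        super().__init__(**kwargs)

c = Child(name="oil", size=3)
print(c.name, c.size)  # oil 3
```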
oil = Oil(network=pn)
print(oil)
Define Physics Objects for Each Geometry and Each Phase
In tutorial #2 we created two Physics objects, one for each of the two Geometry objects used to handle the stratified layers. In this tutorial, the internal pores and the boundary pores each have their own Geometry, but there are two Phases, which also each re...
phys_water_internal = op.physics.GenericPhysics(network=pn, phase=water, geometry=geom)
phys_air_internal = op.physics.GenericPhysics(network=pn, phase=air, geometry=geom)
phys_water_boundary = op.physics.GenericPhysics(network=pn, phase=water, geometry=boun)
phys_air_boundary = op.physics.GenericPhysics(network=pn, ph...
To reiterate, one Physics object is required for each Geometry AND each Phase, so the number can grow to become annoying very quickly. Some useful tips for easing this situation are given below.

Create a Custom Pore-Scale Physics Model
Perhaps the most distinguishing feature between pore-network modeling papers is the...
def mason_model(target, diameter='throat.diameter', theta='throat.contact_angle',
                sigma='throat.surface_tension', f=0.6667):
    proj = target.project
    network = proj.network
    phase = proj.find_phase(target)
    Dt = network[diameter]
    theta = phase[theta]
    sigma = phase[sigma]
    Pc = 4*s...
Let's examine the components of above code: The function receives a target object as an argument. This indicates which object the results will be returned to. The f value is a scale factor that is applied to the contact angle. Mason and Morrow suggested a value of 2/3 as a decent fit to the data, but we'll make thi...
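A standalone sketch of the scaled-contact-angle idea described above: a Washburn-type entry-pressure expression with the contact angle multiplied by f. The sign convention and exact form here are assumptions for illustration, not necessarily what mason_model computes:

```python
import numpy as np

def entry_pressure(Dt, theta_deg, sigma, f=0.6667):
    # scale the contact angle by f, as suggested by Mason and Morrow
    theta = np.deg2rad(f * theta_deg)
    return -4 * sigma * np.cos(theta) / Dt

# toy values: 10-micron throat, water-like surface tension
print(entry_pressure(Dt=1e-5, theta_deg=110, sigma=0.072))
```

Note how a wider throat (larger Dt) gives a smaller-magnitude entry pressure, which is the behavior the model relies on.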
mod = op.models.physics.hydraulic_conductance.hagen_poiseuille
phys_water_internal.add_model(propname='throat.hydraulic_conductance', model=mod)
phys_water_internal.add_model(propname='throat.entry_pressure', model=mason_model)
Now make a copy of the models on phys_water_internal and apply it to all the other water Physics objects:
phys_water_boundary.models = phys_water_internal.models
The only 'gotcha' with this approach is that each of the Physics objects must be regenerated in order to place numerical values for all the properties into the data arrays:
phys_water_boundary.regenerate_models()
phys_air_internal.regenerate_models()
phys_air_boundary.regenerate_models()
Adjust Pore-Scale Model Parameters
The pore-scale models are stored in a ModelsDict object that is itself stored under the models attribute of each object. This arrangement is somewhat convoluted, but it enables integrated storage of models on the objects to which they apply. The models on an object can be inspected...
phys_water_internal.models['throat.entry_pressure']['f'] = 0.75  # Change value
phys_water_internal.regenerate_models()  # Regenerate model with new 'f' value
More details about the ModelsDict and ModelWrapper classes can be found in the models section of the documentation.

Perform Multiphase Transport Simulations
Use the Built-In Drainage Algorithm to Generate an Invading Phase Configuration
inv = op.algorithms.Porosimetry(network=pn)
inv.setup(phase=water)
inv.set_inlets(pores=pn.pores(['top', 'bottom']))
inv.run()
The inlet pores were set to both 'top' and 'bottom' using the pn.pores method. The algorithm applies to the entire network so the mapping of network pores to the algorithm pores is 1-to-1. The run method automatically generates a list of 25 capillary pressure points to test, but you can also specify more pores, or whi...
Pi = inv['pore.invasion_pressure'] < 5000
Ti = inv['throat.invasion_pressure'] < 5000
The resulting Boolean masks can be used to manually adjust the hydraulic conductivity of pores and throats based on their phase occupancy. The following lines set the water filled throats to near-zero conductivity for air flow:
Ts = phys_water_internal.map_throats(~Ti, origin=water)
phys_water_internal['throat.hydraulic_conductance'][Ts] = 1e-20
The logic of these statements implicitly assumes that transport between two pores is only blocked if the throat is filled with the other phase, meaning that both pores could be filled and transport is still permitted. Another option would be to set the transport to near-zero if either or both of the pores are filled a...
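The "either pore filled" variant described above can be sketched with plain NumPy on a toy throat-connection array (the conns array and occupancy mask are made-up data, not from the tutorial):

```python
import numpy as np

# conns[t] lists the two pores joined by throat t
conns = np.array([[0, 1], [1, 2], [2, 3]])
pore_filled = np.array([True, False, False, True])  # toy phase occupancy

# a throat is blocked if either of its two pores is filled with the other phase
throat_blocked = pore_filled[conns[:, 0]] | pore_filled[conns[:, 1]]
print(throat_blocked)  # [ True False  True]
```

The resulting mask could then be used to zero out the corresponding conductances, just as the throat-based mask is used above.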
water_flow = op.algorithms.StokesFlow(network=pn, phase=water)
water_flow.set_value_BC(pores=pn.pores('left'), values=200000)
water_flow.set_value_BC(pores=pn.pores('right'), values=100000)
water_flow.run()
Q_partial, = water_flow.rate(pores=pn.pores('right'))
The relative permeability is the ratio of the water flow through the partially water saturated media versus through fully water saturated media; hence we need to find the absolute permeability of water. This can be accomplished by regenerating the phys_water_internal object, which will recalculate the 'throat.hydrauli...
phys_water_internal.regenerate_models()
water_flow.run()
Q_full, = water_flow.rate(pores=pn.pores('right'))
And finally, the relative permeability can be found from:
K_rel = Q_partial/Q_full
print(f"Relative permeability: {K_rel:.5f}")
Data:
# Get pricing data for an energy (XLE) and industrial (XLI) ETF
xle = get_pricing('XLE', fields='price', start_date='2016-01-01', end_date='2017-01-01')
xli = get_pricing('XLI', fields='price', start_date='2016-01-01', end_date='2017-01-01')

# Compute returns
xle_returns = xle.pct_change()[1:]
xli_returns ...
notebooks/lectures/Case_Study_Comparing_ETFs/answers/notebook.ipynb
quantopian/research_public
apache-2.0
Exercise 1: Hypothesis Testing on Variances.
Plot the histogram of the returns of XLE and XLI
Check to see if each return stream is normally distributed
If the assets are normally distributed, use the F-test to perform a hypothesis test and decide whether the two assets have the same variance. If the assets...
xle = plt.hist(xle_returns, bins=30)
xli = plt.hist(xli_returns, bins=30, color='r')
plt.xlabel('returns')
plt.ylabel('Frequency')
plt.title('Histogram of the returns of XLE and XLI')
plt.legend(['XLE returns', 'XLI returns']);

# Checking for normality using function above.
print('XLE')
normal_test(xle_returns)
prin...
Since we find a p-value for the Levene test of less than our $\alpha$ level (0.05), we can reject the null hypothesis that the variability of the two groups is equal, thus implying that the variances are unequal.

Exercise 2: Hypothesis Testing on Means.
Since we know that the variances are not equal, we must use Welch...
# Manually calculating the t-statistic
N1 = len(xle_returns)
N2 = len(xli_returns)
m1 = xle_returns.mean()
m2 = xli_returns.mean()
s1 = xle_returns.std()
s2 = xli_returns.std()
test_statistic = (m1 - m2) / (s1**2 / N1 + s2**2 / N2)**0.5
print('t-test statistic:', test_statistic)

# Alternative form, using the scipy ...
Exercise 3: Skewness
Calculate the mean and median of the two assets
Calculate the skewness using the scipy library
# Calculate the mean and median of xle and xli using the numpy library
xle_mean = np.mean(xle_returns)
xle_median = np.median(xle_returns)
print('Mean of XLE returns = ', xle_mean, '; median = ', xle_median)

xli_mean = np.mean(xli_returns)
xli_median = np.median(xli_returns)
print('Mean of XLI returns = ', xli_mean, '...
The skewness of XLE returns being > 0 means that there is more weight in the right tail of the distribution; the same holds for XLI returns (a skewness < 0 would instead put more weight in the left tail).

Exercise 4: Kurtosis
Check the kurtosis of the two assets, using the scipy library. Usi...
# Print value of Kurtosis for xle and xli returns
print('kurtosis:', stats.kurtosis(xle_returns))
print('kurtosis:', stats.kurtosis(xli_returns))

# Distribution plot of XLE returns in red (for Kurtosis of 1.6).
# Distribution plot of XLI returns in blue (for Kurtosis of 2.0).
xle = sns.distplot(xle_returns, color = ...
Dijkstra's Shortest Path Algorithm
The notebook Set.ipynb implements <em style="color:blue">sets</em> as <a href="https://en.wikipedia.org/wiki/AVL_tree">AVL trees</a>. The Set class provides the following API:
- Set() creates an empty set.
- S.isEmpty() checks whether the set S is empty.
- S.member(x) checks wh...
%run Set.ipynb
Python/Chapter-09/Dijkstra.ipynb
Danghor/Algorithms
gpl-2.0
The function shortest_path takes two arguments.
- source is the start node.
- Edges is a dictionary that encodes the set of edges of the graph. For every node x the value of Edges[x] has the form
$$ \bigl[ (y_1, l_1), \cdots, (y_n, l_n) \bigr]. $$...
def shortest_path(source, Edges):
    Distance = { source: 0 }
    Visited = { source }
    Fringe = Set()
    Fringe.insert( (0, source) )
    while not Fringe.isEmpty():
        d, u = Fringe.pop()  # get and remove smallest element
        for v, l in Edges[u]:
            dv = Distance.get(v, None)
            if...
The version of shortest_path given below provides a graphical animation of the algorithm.
def shortest_path(source, Edges):
    Distance = { source: 0 }
    Visited = { source }  # set only needed for visualization
    Fringe = Set()
    Fringe.insert( (0, source) )
    while not Fringe.isEmpty():
        d, u = Fringe.pop()
        display(toDot(source, u, Edges, Fringe, Distance, Visited))
        prin...
Code to Display the Directed Graph
import graphviz as gv
The function $\texttt{toDot}(\texttt{source}, \texttt{p}, \texttt{Edges}, \texttt{Fringe}, \texttt{Distance}, \texttt{Visited})$ takes a graph represented by its Edges, a set of nodes Fringe, a dictionary Distance that gives the distance of each node from the node source, and a set Visited of nodes that have already been vi...
def toDot(source, p, Edges, Fringe, Distance, Visited):
    V = set()
    for x in Edges.keys():
        V.add(x)
    dot = gv.Digraph(node_attr={'shape': 'record', 'style': 'rounded'})
    dot.attr(rankdir='LR', size='8,5')
    for x in V:
        if x == source:
            dot.node(str(x), color='blue', shape='doubl...
Code for Testing
Edges = { 'a': [ ('c', 2), ('b', 9)],
          'b': [('d', 1)],
          'c': [('e', 5), ('g', 3)],
          'd': [('f', 2), ('e', 4)],
          'e': [('f', 1), ('b', 2)],
          'f': [('h', 5)],
          'g': [('e', 1)],
          'h': []
        }
s = 'a'
sp = shortest_path(s, Edges)
sp
Crossing the Tunnel
Four persons, Alice, Britney, Charly and Daniel, have to cross a tunnel. The tunnel is so narrow that at most two persons can cross it together. In order to cross the tunnel, a torch is needed. Together, they only have a single torch.
1. Alice is the fastest and can cross the tunnel in 1 minute...
All = frozenset({ 'Alice', 'Britney', 'Charly', 'Daniel', 'Torch' })
The timing is modelled by a dictionary.
Time = { 'Alice': 1, 'Britney': 2, 'Charly': 4, 'Daniel': 5, 'Torch': 0 }
The function $\texttt{power}(M)$ defined below computes the power set of the set $M$, i.e. we have:
$$ \texttt{power}(M) = 2^M = \bigl\{ A \mid A \subseteq M \bigr\} $$
def power(M):
    if M == set():
        return { frozenset() }
    else:
        C = set(M)   # C is a copy of M as we don't want to change the set M
        x = C.pop()  # pop removes the element x from the set C
        P1 = power(C)
        P2 = { A | {x} for A in P1 }
        return P1 | P2
If $B$ is a set of persons, then $\texttt{duration}(B)$ is the time that this group needs to cross the tunnel. $B$ also contains 'Torch'.
def duration(B): return max(Time[x] for x in B)
$\texttt{left_right}(S)$ describes a crossing of the tunnel from the entrance at the left side of the tunnel to the exit at the right side.
def left_right(S):
    return [ (S - B, duration(B)) for B in power(S)
                                 if 'Torch' in B and 2 <= len(B) <= 3 ]
$\texttt{right_left}(S)$ describes a crossing of the tunnel from right to left.
def right_left(S):
    return [ (S | B, duration(B)) for B in power(All - S)
                                 if 'Torch' in B and 2 <= len(B) <= 3 ]

Edges = { S: left_right(S) + right_left(S) for S in power(All) }
len(Edges)
The function shortest_path is Dijkstra's algorithm. It returns both a dictionary Parent containing the parent nodes and a dictionary Distance with the distances. The dictionary Parent can be used to compute the shortest path leading from the node source to some other node.
def shortest_path(source, Edges):
    Distance = { source: 0 }
    Parent = {}
    Fringe = Set()
    Fringe.insert( (0, source) )
    while not Fringe.isEmpty():
        d, u = Fringe.pop()
        for v, l in Edges[u]:
            dv = Distance.get(v, None)
            if dv == None or d + l < dv:
                ...
Let us see whether the goal was reachable and how long it takes to reach the goal.
goal = frozenset()
Distance[goal]
Given two nodes source and goal and a dictionary Parent containing the parent of every node, the function find_path returns the path from source to goal.
def find_path(source, goal, Parent):
    p = Parent.get(goal)
    if p == None:
        return [source]
    return find_path(source, p, Parent) + [goal]

Path = find_path(frozenset(All), frozenset(), Parent)

def print_path():
    total = 0
    print("_" * 81);
    for i in range(len(Path)):
        Left = set(Path[i]...
Create transformers
import pyspark.ml.feature as ft

births = births \
    .withColumn('BIRTH_PLACE_INT',
                births['BIRTH_PLACE'].cast(typ.IntegerType()))
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
Having done this, we can now create our first Transformer.
encoder = ft.OneHotEncoder( inputCol='BIRTH_PLACE_INT', outputCol='BIRTH_PLACE_VEC')
Let's now create a single column with all the features collated together.
featuresCreator = ft.VectorAssembler(
    inputCols=[col[0] for col in labels[2:]] +
              [encoder.getOutputCol()],
    outputCol='features'
)
Create an estimator In this example we will (once again) use the Logistic Regression model.
import pyspark.ml.classification as cl
Once loaded, let's create the model.
logistic = cl.LogisticRegression( maxIter=10, regParam=0.01, labelCol='INFANT_ALIVE_AT_REPORT')
Create a pipeline All that is left now is to create a Pipeline and fit the model. First, let's load the Pipeline from the package.
from pyspark.ml import Pipeline

pipeline = Pipeline(stages=[
    encoder,
    featuresCreator,
    logistic
])
Fit the model Conveniently, the DataFrame API has the .randomSplit(...) method.
births_train, births_test = births \ .randomSplit([0.7, 0.3], seed=666)
Now run our pipeline and estimate our model.
model = pipeline.fit(births_train)
test_model = model.transform(births_test)
Here's what the test_model looks like.
test_model.take(1)
Model performance Naturally, we now want to test how well our model did.
import pyspark.ml.evaluation as ev

evaluator = ev.BinaryClassificationEvaluator(
    rawPredictionCol='probability',
    labelCol='INFANT_ALIVE_AT_REPORT')

print(evaluator.evaluate(test_model,
    {evaluator.metricName: 'areaUnderROC'}))
print(evaluator.evaluate(test_model,
    {evaluator.metricName: 'areaUnderPR'}))
Saving the model PySpark allows you to save the Pipeline definition for later use.
pipelinePath = './infant_oneHotEncoder_Logistic_Pipeline'
pipeline.write().overwrite().save(pipelinePath)
So, you can load it up later and use it straight away to .fit(...) and predict.
loadedPipeline = Pipeline.load(pipelinePath)

loadedPipeline \
    .fit(births_train) \
    .transform(births_test) \
    .take(1)
You can also save the whole model.
from pyspark.ml import PipelineModel

modelPath = './infant_oneHotEncoder_Logistic_PipelineModel'
model.write().overwrite().save(modelPath)

loadedPipelineModel = PipelineModel.load(modelPath)
test_loadedModel = loadedPipelineModel.transform(births_test)
Parameter hyper-tuning Grid search Load the .tuning part of the package.
import pyspark.ml.tuning as tune
Next let's specify our model and the list of parameters we want to loop through.
logistic = cl.LogisticRegression(
    labelCol='INFANT_ALIVE_AT_REPORT')

grid = tune.ParamGridBuilder() \
    .addGrid(logistic.maxIter, [2, 10, 50]) \
    .addGrid(logistic.regParam, [0.01, 0.05, 0.3]) \
    .build()
Next, we need some way of comparing the models.
evaluator = ev.BinaryClassificationEvaluator( rawPredictionCol='probability', labelCol='INFANT_ALIVE_AT_REPORT')
Create the logic that will do the validation work for us.
cv = tune.CrossValidator( estimator=logistic, estimatorParamMaps=grid, evaluator=evaluator )
Create a purely transforming Pipeline.
pipeline = Pipeline(stages=[encoder, featuresCreator])
data_transformer = pipeline.fit(births_train)
Having done this, we are ready to find the optimal combination of parameters for our model.
cvModel = cv.fit(data_transformer.transform(births_train))
The cvModel will return the best model estimated. We can now use it to see if it performed better than our previous model.
data_train = data_transformer \
    .transform(births_test)
results = cvModel.transform(data_train)

print(evaluator.evaluate(results,
    {evaluator.metricName: 'areaUnderROC'}))
print(evaluator.evaluate(results,
    {evaluator.metricName: 'areaUnderPR'}))
What parameters does the best model have? The answer is a little bit convoluted, but here's how you can extract it.
results = [
    (
        [
            {key.name: paramValue}
            for key, paramValue
            in zip(params.keys(), params.values())
        ],
        metric
    )
    for params, metric
    in zip(
        cvModel.getEstimatorParamMaps(),
        cvModel.avgMetrics
    )
]
...
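Stripped of the PySpark-specific accessors, the idea is just to pair each parameter map with its average metric and take the best-scoring pair. A hedged pure-Python illustration with made-up parameter maps and metrics:

```python
# Made-up stand-ins for cvModel.getEstimatorParamMaps() and cvModel.avgMetrics;
# the real values come from the fitted CrossValidator.
param_maps = [
    {'maxIter': 2,  'regParam': 0.01},
    {'maxIter': 10, 'regParam': 0.01},
    {'maxIter': 50, 'regParam': 0.3},
]
avg_metrics = [0.71, 0.74, 0.69]

# Zip parameters with their scores and pick the combination with the
# highest average metric.
results = list(zip(param_maps, avg_metrics))
best_params, best_metric = max(results, key=lambda pm: pm[1])
print(best_params)    # {'maxIter': 10, 'regParam': 0.01}
```

The same `max(..., key=...)` step applied to the real zipped lists yields the winning parameter combination.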
Train-Validation splitting Use the ChiSqSelector to select only the top 5 features, thus limiting the complexity of our model.
selector = ft.ChiSqSelector(
    numTopFeatures=5,
    featuresCol=featuresCreator.getOutputCol(),
    outputCol='selectedFeatures',
    labelCol='INFANT_ALIVE_AT_REPORT'
)

logistic = cl.LogisticRegression(
    labelCol='INFANT_ALIVE_AT_REPORT',
    featuresCol='selectedFeatures'
)

pipeline = Pipeline(stages=[encod...
The TrainValidationSplit object gets created in the same fashion as the CrossValidator model.
tvs = tune.TrainValidationSplit( estimator=logistic, estimatorParamMaps=grid, evaluator=evaluator )
As before, we fit our data to the model, and calculate the results.
tvsModel = tvs.fit(
    data_transformer \
        .transform(births_train)
)

data_train = data_transformer \
    .transform(births_test)
results = tvsModel.transform(data_train)

print(evaluator.evaluate(results,
    {evaluator.metricName: 'areaUnderROC'}))
print(evaluator.evaluate(results,
    {evaluator.metricN...
Other features of PySpark ML in action Feature extraction NLP related feature extractors Simple dataset.
text_data = spark.createDataFrame([ ['''Machine learning can be applied to a wide variety of data types, such as vectors, text, images, and structured data. This API adopts the DataFrame from Spark SQL in order to support a variety of data types.'''], ['''DataFrame supports many basic...
First, we need to tokenize this text.
tokenizer = ft.RegexTokenizer(
    inputCol='input',
    outputCol='input_arr',
    pattern='\s+|[,.\"]')
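To see what this pattern does without spinning up Spark, here is a hedged pure-Python sketch of the same splitting rule; `tokenize` is a helper name chosen for illustration (RegexTokenizer also lower-cases tokens by default, which the sketch mimics):

```python
import re

# Same split pattern as the RegexTokenizer above: runs of whitespace,
# commas, periods, or double quotes act as token separators.
pattern = r'\s+|[,.\"]'

def tokenize(text):
    # Drop the empty strings re.split produces between adjacent separators.
    return [tok.lower() for tok in re.split(pattern, text) if tok]

print(tokenize('Machine learning, is "fun".'))
# ['machine', 'learning', 'is', 'fun']
```

Punctuation disappears entirely because it is treated as a delimiter, not as a token.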
The output of the tokenizer looks similar to this.
tok = tokenizer \
    .transform(text_data) \
    .select('input_arr')

tok.take(1)
Use the StopWordsRemover(...).
stopwords = ft.StopWordsRemover( inputCol=tokenizer.getOutputCol(), outputCol='input_stop')
The output of the method looks as follows:
stopwords.transform(tok).select('input_stop').take(1)
Build NGram model and the Pipeline.
ngram = ft.NGram(n=2,
    inputCol=stopwords.getOutputCol(),
    outputCol="nGrams")

pipeline = Pipeline(stages=[tokenizer, stopwords, ngram])
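An n-gram with n=2 is just every pair of adjacent tokens joined by a space. As a hedged pure-Python sketch (the `ngrams` helper name is chosen for illustration):

```python
def ngrams(words, n=2):
    # Slide a window of length n over the token list and join each window.
    return [' '.join(words[i:i + n]) for i in range(len(words) - n + 1)]

print(ngrams(['machine', 'learning', 'is', 'fun']))
# ['machine learning', 'learning is', 'is fun']
```

A list of k tokens therefore yields k - n + 1 n-grams, which is why very short rows can produce an empty n-gram column.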
Now that we have the pipeline, we follow a very similar fashion as before.
data_ngram = pipeline \
    .fit(text_data) \
    .transform(text_data)

data_ngram.select('nGrams').take(1)
That's it. We got our n-grams and we can then use them in further NLP processing. Discretize continuous variables It is sometimes useful to band the values into discrete buckets.
import numpy as np

x = np.arange(0, 100)
x = x / 100.0 * np.pi * 4
y = x * np.sin(x / 1.764) + 20.1234

schema = typ.StructType([
    typ.StructField('continuous_var',
                    typ.DoubleType(),
                    False)
])

data = spark.createDataFrame(
    [[float(e), ] for e in y],
    schema=schema)
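Before handing the column to Spark, it helps to see what bucketing means on plain arrays. A hedged NumPy sketch of equal-width banding (the split points below are made up; Spark's own Bucketizer/QuantileDiscretizer do the equivalent on a DataFrame column):

```python
import numpy as np

# Illustrative continuous values and bucket boundaries.
y = np.array([2.5, 7.1, 13.8, 19.9])
splits = np.array([0.0, 5.0, 10.0, 15.0, 20.0])

# np.digitize returns 1-based bin indices; subtract 1 for 0-based buckets.
buckets = np.digitize(y, splits) - 1
print(buckets)    # [0 1 2 3]
```

Each value is replaced by the index of the band it falls into, which turns a continuous variable into a small categorical one.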