Evaluation

Usually, the model and its predictions alone are not sufficient. In the following we want to evaluate our classifiers. Let's start by computing their error. The sklearn.metrics package contains several error metrics, such as
* Mean squared error
* Mean absolute error
* Mean squared log error
* Median absolute error
# computing the squared error of the first model
print("Mean squared error model 1: %.2f" % mean_squared_error(targetFeature1, targetFeature1_predict))
Mean squared error model 1: 0.56
MIT
Instruction4/Instruction4-RegressionSVM.ipynb
danikhani/ITDS-Instructions-WS20
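The other error metrics listed above live in sklearn.metrics as well and are used in exactly the same way. A minimal sketch, assuming targetFeature1 and targetFeature1_predict are defined as above (note that the mean squared log error is only defined for non-negative values):

from sklearn.metrics import mean_absolute_error, mean_squared_log_error, median_absolute_error

print("Mean absolute error model 1: %.2f" % mean_absolute_error(targetFeature1, targetFeature1_predict))
print("Mean squared log error model 1: %.2f" % mean_squared_log_error(targetFeature1, targetFeature1_predict))
print("Median absolute error model 1: %.2f" % median_absolute_error(targetFeature1, targetFeature1_predict))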
We can also visualize the errors:
plt.scatter(targetFeature1_predict, (targetFeature1 - targetFeature1_predict) ** 2, color = "blue", s = 10,)

## plotting line to visualize zero error
plt.hlines(y = 0, xmin = 0, xmax = 15, linewidth = 2)

## plot title
plt.title("Squared errors Model 1")

## function to show plot
plt.show()
_____no_output_____
MIT
Instruction4/Instruction4-RegressionSVM.ipynb
danikhani/ITDS-Instructions-WS20
Now it is your turn. Compute the mean squared error and visualize the squared errors. Play around with different error metrics.
#Your turn print("Mean squared error model 2: %.2f" % mean_squared_error(targetFeature2,targetFeature2_predict)) print("Mean absolute error model 2: %.2f" % mean_absolute_error(targetFeature2,targetFeature2_predict)) plt.scatter(targetFeature2_predict, (targetFeature2 - targetFeature2_predict) ** 2, color = "blue",) plt.scatter(targetFeature2,abs(targetFeature2 - targetFeature2_predict),color = "red") ## plotting line to visualize zero error plt.hlines(y = 0, xmin = 0, xmax = 80, linewidth = 2) ## plot title plt.title("errors Model 2") ## function to show plot plt.show()
Mean squared error model 2: 8.89 Mean absolute error model 2: 2.32
MIT
Instruction4/Instruction4-RegressionSVM.ipynb
danikhani/ITDS-Instructions-WS20
Handling multiple descriptive features at once - Multiple linear regression

In most cases, we will have more than one descriptive feature. As an example, we use a data set shipped with the scikit-learn package. The dataset describes housing prices in Boston based on several attributes. Note that in this format the data is already split into descriptive features and a target feature.
from sklearn import datasets ## imports datasets from scikit-learn
df3 = datasets.load_boston()

# The sklearn package provides the data split into a set of descriptive features and a target feature.
# We can easily transform this format into the pandas data frame as used above.
descriptiveFeatures3 = pd.DataFrame(df3.data, columns=df3.feature_names)
targetFeature3 = pd.DataFrame(df3.target, columns=['target'])

print('Descriptive features:')
print(descriptiveFeatures3.head())
print('Target feature:')
print(targetFeature3.head())
Descriptive features: CRIM ZN INDUS CHAS NOX RM AGE DIS RAD TAX \ 0 0.00632 18.0 2.31 0.0 0.538 6.575 65.2 4.0900 1.0 296.0 1 0.02731 0.0 7.07 0.0 0.469 6.421 78.9 4.9671 2.0 242.0 2 0.02729 0.0 7.07 0.0 0.469 7.185 61.1 4.9671 2.0 242.0 3 0.03237 0.0 2.18 0.0 0.458 6.998 45.8 6.0622 3.0 222.0 4 0.06905 0.0 2.18 0.0 0.458 7.147 54.2 6.0622 3.0 222.0 PTRATIO B LSTAT 0 15.3 396.90 4.98 1 17.8 396.90 9.14 2 17.8 392.83 4.03 3 18.7 394.63 2.94 4 18.7 396.90 5.33 Target feature: target 0 24.0 1 21.6 2 34.7 3 33.4 4 36.2
MIT
Instruction4/Instruction4-RegressionSVM.ipynb
danikhani/ITDS-Instructions-WS20
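Note: load_boston is deprecated in recent scikit-learn versions and has been removed entirely in scikit-learn 1.2 because of ethical concerns about the data set. If the cell above fails in your environment, a similar pair of descriptive features and a target feature can be built from another bundled regression data set; a sketch (not part of the original instruction):

from sklearn.datasets import fetch_california_housing

housing = fetch_california_housing(as_frame=True)
descriptiveFeatures3 = housing.data
targetFeature3 = housing.target.to_frame(name='target')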
To predict the housing price we will use a Multiple Linear Regression model. In Python this is very straightforward: we use the same function as for simple linear regression, but our set of descriptive features now contains more than one element (see above).
classifier = LinearRegression()
model3 = classifier.fit(descriptiveFeatures3, targetFeature3)
targetFeature3_predict = classifier.predict(descriptiveFeatures3)

print('Coefficients: \n', classifier.coef_)
print('Intercept: \n', classifier.intercept_)
print("Mean squared error: %.2f" % mean_squared_error(targetFeature3, targetFeature3_predict))
Coefficients: [[-1.08011358e-01 4.64204584e-02 2.05586264e-02 2.68673382e+00 -1.77666112e+01 3.80986521e+00 6.92224640e-04 -1.47556685e+00 3.06049479e-01 -1.23345939e-02 -9.52747232e-01 9.31168327e-03 -5.24758378e-01]] Intercept: [36.45948839] Mean squared error: 21.89
MIT
Instruction4/Instruction4-RegressionSVM.ipynb
danikhani/ITDS-Instructions-WS20
As you can see above, we have a coefficient for each descriptive feature.

Handling categorical descriptive features

So far we have always encountered numerical descriptive features, but data sets can also contain categorical attributes. The regression function can only handle numerical input. There are several ways to transform our categorical data to numerical data (for example, using one-hot encoding as explained in the lecture: we introduce a 0/1 feature for every possible value of our categorical attribute). For suitable data, another possibility is to replace each categorical value by a numerical value, thereby imposing an ordering on the values. Popular ways to achieve this transformation are
* the get_dummies function of pandas
* the OneHotEncoder of scikit-learn
* the LabelEncoder of scikit-learn

After encoding the attributes we can apply our regular regression function.
#example using pandas
df4 = pd.DataFrame({'A':['a','b','c'],'B':['c','b','a'] })
one_hot_pd = pd.get_dummies(df4)
one_hot_pd

#example using scikit
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

#apply the one hot encoder
encoder = OneHotEncoder(categories='auto')
encoder.fit(df4)
df4_OneHot = encoder.transform(df4).toarray()
print('Transformed by One-hot Encoding: ')
print(df4_OneHot)

# encode labels with value between 0 and n_classes-1
encoder = LabelEncoder()
df4_LE = df4.apply(encoder.fit_transform)
print('Replacing categories by numerical labels: ')
print(df4_LE.head())
Transformed by One-hot Encoding: [[1. 0. 0. 0. 0. 1.] [0. 1. 0. 0. 1. 0.] [0. 0. 1. 1. 0. 0.]] Replacing categories by numerical labels: A B 0 0 2 1 1 1 2 2 0
MIT
Instruction4/Instruction4-RegressionSVM.ipynb
danikhani/ITDS-Instructions-WS20
Now it is your turn. Perform linear regression using the data set given below. Don't forget to transform your categorical descriptive features. The rental price attribute represents the target variable.
from sklearn.preprocessing import LabelEncoder

df5 = pd.DataFrame({'Size':[500,550,620,630,665],
                    'Floor':[4,7,9,5,8],
                    'Energy rating':['C', 'A', 'A', 'B', 'C'],
                    'Rental price': [320,380,400,390,385] })

#Your turn
# Transform the categorical feature
to_transform = df5[['Energy rating']]
encoder = LabelEncoder()
transformed = to_transform.apply(encoder.fit_transform)
df5_transformed = df5
df5_transformed[['Energy rating']] = transformed

# the feature we would like to predict is called the target feature
df5_target = df5_transformed['Rental price']

# features that we use for the prediction are called the "descriptive" features
df5_descriptive = df5_transformed[['Size','Floor','Energy rating']]

# train the regression model
classifier5 = LinearRegression()
model5 = classifier5.fit(df5_descriptive, df5_target)

# use the classifier to make predictions
targetFeature5_predict = classifier5.predict(df5_descriptive)

print('Coefficients: \n', classifier5.coef_)
print('Intercept: \n', classifier5.intercept_)
print("Mean squared error: %.2f" % mean_squared_error(df5_target, targetFeature5_predict))
Coefficients: [ 0.39008474 -0.54300185 -18.80539593] Intercept: 166.068958800039 Mean squared error: 4.68
MIT
Instruction4/Instruction4-RegressionSVM.ipynb
danikhani/ITDS-Instructions-WS20
Predicting a categorical target value - Logistic regression

We might also encounter data sets where our target feature is categorical. Here we don't transform it into numerical values; instead, we use a logistic regression function. Luckily, sklearn provides us with a suitable function that is similar to the linear equivalent. As with linear regression, we can compute logistic regression on a single descriptive variable as well as on multiple variables.
# Importing the dataset
iris = pd.read_csv('iris.csv')
print('First look at the data set: ')
print(iris.head())

#defining the descriptive and target features
descriptiveFeatures_iris = iris[['sepal_length']] #we only use the attribute 'sepal_length' in this example
targetFeature_iris = iris['species'] #we want to predict the 'species' of iris

from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(solver = 'liblinear', multi_class = 'ovr')
classifier.fit(descriptiveFeatures_iris, targetFeature_iris)
targetFeature_iris_pred = classifier.predict(descriptiveFeatures_iris)

print('Coefficients: \n', classifier.coef_)
print('Intercept: \n', classifier.intercept_)
First look at the data set: sepal_length sepal_width petal_length petal_width species 0 5.1 3.5 1.4 0.2 setosa 1 4.9 3.0 1.4 0.2 setosa 2 4.7 3.2 1.3 0.2 setosa 3 4.6 3.1 1.5 0.2 setosa 4 5.0 3.6 1.4 0.2 setosa Coefficients: [[-0.86959145] [ 0.01223362] [ 0.57972675]] Intercept: [ 4.16186636 -0.74244291 -3.9921824 ]
MIT
Instruction4/Instruction4-RegressionSVM.ipynb
danikhani/ITDS-Instructions-WS20
Now it is your turn. In the example above we only used the first attribute as descriptive variable. Change the example such that all available attributes are used.
#Your turn
# Importing the dataset
iris2 = pd.read_csv('iris.csv')
print('First look at the data set: ')
print(iris2.head())

#defining the descriptive and target features
descriptiveFeatures_iris2 = iris2[['sepal_length','sepal_width','petal_length','petal_width']]
targetFeature_iris2 = iris2['species'] #we want to predict the 'species' of iris

from sklearn.linear_model import LogisticRegression
classifier2 = LogisticRegression(solver = 'liblinear', multi_class = 'ovr')
classifier2.fit(descriptiveFeatures_iris2, targetFeature_iris2)
targetFeature_iris_pred2 = classifier2.predict(descriptiveFeatures_iris2)

print('Coefficients: \n', classifier2.coef_)
print('Intercept: \n', classifier2.intercept_)
First look at the data set: sepal_length sepal_width petal_length petal_width species 0 5.1 3.5 1.4 0.2 setosa 1 4.9 3.0 1.4 0.2 setosa 2 4.7 3.2 1.3 0.2 setosa 3 4.6 3.1 1.5 0.2 setosa 4 5.0 3.6 1.4 0.2 setosa Coefficients: [[ 0.41021713 1.46416217 -2.26003266 -1.02103509] [ 0.4275087 -1.61211605 0.5758173 -1.40617325] [-1.70751526 -1.53427768 2.47096755 2.55537041]] Intercept: [ 0.26421853 1.09392467 -1.21470917]
MIT
Instruction4/Instruction4-RegressionSVM.ipynb
danikhani/ITDS-Instructions-WS20
Note that the regression classifier (both logistic and non-logistic) can be tweaked using several parameters. This includes, but is not limited to, non-linear regression. Check out the documentation for details and feel free to play around!

Support Vector Machines

Aside from regression models, the sklearn package also provides us with a function for training support vector machines. Looking at the example below, we see that they can be trained in a similar way. We still use the iris data set for illustration.
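Before moving on to support vector machines, here is a minimal sketch of one such tweak, namely non-linear regression via polynomial feature expansion. It reuses descriptiveFeatures3 and targetFeature3 from the Boston example above and is only meant as an illustration:

from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

# expand the descriptive features with all degree-2 polynomial terms, then fit an ordinary linear regression
poly_model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
poly_model.fit(descriptiveFeatures3, targetFeature3)
targetFeature3_poly = poly_model.predict(descriptiveFeatures3)
print("Mean squared error (degree 2): %.2f" % mean_squared_error(targetFeature3, targetFeature3_poly))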
from sklearn.svm import SVC

#define descriptive and target features as before
descriptiveFeatures_iris = iris[['sepal_length', 'sepal_width', 'petal_length', 'petal_width']]
targetFeature_iris = iris['species']

#this time, we train an SVM classifier
classifier = SVC(C=1, kernel='linear', gamma = 'auto')
classifier.fit(descriptiveFeatures_iris, targetFeature_iris)
targetFeature_iris_predict = classifier.predict(descriptiveFeatures_iris)
targetFeature_iris_predict[0:5] #show the first 5 predicted values
_____no_output_____
MIT
Instruction4/Instruction4-RegressionSVM.ipynb
danikhani/ITDS-Instructions-WS20
As explained in the lecture, a support vector machine is defined by its support vectors. In the sklearn package we can access them and their properties very easily:
* support_: indices of the support vectors
* support_vectors_: the support vectors
* n_support_: the number of support vectors for each class
print('Indicies of support vectors:')
print(classifier.support_)
print('The support vectors:')
print(classifier.support_vectors_)
print('The number of support vectors for each class:')
print(classifier.n_support_)
Indicies of support vectors: [ 23 24 41 52 56 63 66 68 70 72 76 77 83 84 98 106 110 119 123 126 127 129 133 138 146 147 149] The support vectors: [[5.1 3.3 1.7 0.5] [4.8 3.4 1.9 0.2] [4.5 2.3 1.3 0.3] [6.9 3.1 4.9 1.5] [6.3 3.3 4.7 1.6] [6.1 2.9 4.7 1.4] [5.6 3. 4.5 1.5] [6.2 2.2 4.5 1.5] [5.9 3.2 4.8 1.8] [6.3 2.5 4.9 1.5] [6.8 2.8 4.8 1.4] [6.7 3. 5. 1.7] [6. 2.7 5.1 1.6] [5.4 3. 4.5 1.5] [5.1 2.5 3. 1.1] [4.9 2.5 4.5 1.7] [6.5 3.2 5.1 2. ] [6. 2.2 5. 1.5] [6.3 2.7 4.9 1.8] [6.2 2.8 4.8 1.8] [6.1 3. 4.9 1.8] [7.2 3. 5.8 1.6] [6.3 2.8 5.1 1.5] [6. 3. 4.8 1.8] [6.3 2.5 5. 1.9] [6.5 3. 5.2 2. ] [5.9 3. 5.1 1.8]] The number of support vectors for each class: [ 3 12 12]
MIT
Instruction4/Instruction4-RegressionSVM.ipynb
danikhani/ITDS-Instructions-WS20
We can also calculate the distance of the data points to the separating hyperplane by using the decision_function(X) method. The score(X, y) method calculates the mean accuracy of the classification. The classification report shows metrics such as precision, recall, f1-score and support. You will learn more about these quality metrics in a few lectures.
from sklearn.metrics import classification_report

classifier.decision_function(descriptiveFeatures_iris)
print('Accuracy: \n', classifier.score(descriptiveFeatures_iris, targetFeature_iris))
print('Classification report: \n')
print(classification_report(targetFeature_iris, targetFeature_iris_predict))
Accuracy: 0.9933333333333333 Classification report: precision recall f1-score support setosa 1.00 1.00 1.00 50 versicolor 1.00 0.98 0.99 50 virginica 0.98 1.00 0.99 50 accuracy 0.99 150 macro avg 0.99 0.99 0.99 150 weighted avg 0.99 0.99 0.99 150
MIT
Instruction4/Instruction4-RegressionSVM.ipynb
danikhani/ITDS-Instructions-WS20
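Note that all scores in this notebook are computed on the same data the models were trained on, which is usually overly optimistic. A minimal sketch of a more honest evaluation on a held-out test set (an addition to the original material, using the iris features defined above):

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(descriptiveFeatures_iris, targetFeature_iris, test_size=0.3, random_state=0)
svm = SVC(C=1, kernel='linear', gamma='auto')
svm.fit(X_train, y_train)
print('Accuracy on unseen data:', svm.score(X_test, y_test))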
Prepare final dataset
# organize dataset into a useful structure # create directories dataset_home = train_folder # create label subdirectories labeldirs = ['separate_singleDouble/single/', 'separate_singleDouble/double/'] for labldir in labeldirs: newdir = dataset_home + labldir os.makedirs(newdir, exist_ok=True) # copy training dataset images into subdirectories for file in os.listdir(train_folder): src = train_folder + '/' + file if file.startswith('single'): dst = dataset_home + 'separate_singleDouble/single/' + file shutil.copyfile(src, dst) elif file.startswith('double'): dst = dataset_home + 'separate_singleDouble/double/' + file shutil.copyfile(src, dst)
_____no_output_____
MIT
NASA/Python_codes/ML_Books/01_01_transfer_learning_model_EVI.ipynb
HNoorazar/Kirti
Plot For Fun
# plot dog photos from the dogs vs cats dataset from matplotlib.image import imread # define location of dataset # plot first few images files = os.listdir(train_folder)[2:4] # files = [sorted(os.listdir(train_folder))[2]] + [sorted(os.listdir(train_folder))[-2]] for i in range(2): # define subplot pyplot.subplot(210 + 1 + i) # define filename filename = train_folder + files[i] # load image pixels image = imread(filename) # plot raw pixel data pyplot.imshow(image) # show the figure pyplot.show()
_____no_output_____
MIT
NASA/Python_codes/ML_Books/01_01_transfer_learning_model_EVI.ipynb
HNoorazar/Kirti
Full Code
# define cnn model def define_model(): # load model model = VGG16(include_top=False, input_shape=(224, 224, 3)) # mark loaded layers as not trainable for layer in model.layers: layer.trainable = False # add new classifier layers flat1 = Flatten()(model.layers[-1].output) class1 = Dense(128, activation='relu', kernel_initializer='he_uniform')(flat1) output = Dense(1, activation='sigmoid')(class1) # define new model model = Model(inputs=model.inputs, outputs=output) # compile model opt = SGD(learning_rate=0.001, momentum=0.9) model.compile(optimizer=opt, loss='binary_crossentropy', metrics=['accuracy']) return model # run the test harness for evaluating a model def run_test_harness(): # define model _model = define_model() # create data generator datagen = ImageDataGenerator(featurewise_center=True) # specify imagenet mean values for centering datagen.mean = [123.68, 116.779, 103.939] # prepare iterator train_separate_dir = train_folder + "separate_singleDouble/" train_it = datagen.flow_from_directory(train_separate_dir, class_mode='binary', batch_size=16, target_size=(224, 224)) # fit model _model.fit(train_it, steps_per_epoch=len(train_it), epochs=10, verbose=1) model_dir = "/Users/hn/Documents/01_research_data/NASA/ML_Models/" _model.save(model_dir+'01_TL_SingleDouble.h5') # tf.keras.models.save_model(model=trained_model, filepath=model_dir+'01_TL_SingleDouble.h5') # return(_model) # entry point, run the test harness start_time = time.time() run_test_harness() end_time = time.time() # photo = load_img(train_folder + files[0], target_size=(200, 500)) # photo
_____no_output_____
MIT
NASA/Python_codes/ML_Books/01_01_transfer_learning_model_EVI.ipynb
HNoorazar/Kirti
Spark SQL

Spark SQL is arguably one of the most important and powerful features in Spark. In a nutshell, with Spark SQL you can run SQL queries against views or tables organized into databases. You can also use system functions or define user functions and analyze query plans in order to optimize their workloads. This integrates directly into the DataFrame API, and as we saw in previous classes, you can choose to express some of your data manipulations in SQL and others in DataFrames and they will compile to the same underlying code.

Big Data and SQL: Apache Hive

Before Spark’s rise, Hive was the de facto big data SQL access layer. Originally developed at Facebook, Hive became an incredibly popular tool across industry for performing SQL operations on big data. In many ways it helped propel Hadoop into different industries because analysts could run SQL queries. Although Spark began as a general processing engine with Resilient Distributed Datasets (RDDs), a large cohort of users now use Spark SQL.

Big Data and SQL: Spark SQL

With the release of Spark 2.0, its authors created a superset of Hive’s support, writing a native SQL parser that supports both ANSI-SQL as well as HiveQL queries. This, along with its unique interoperability with DataFrames, makes it a powerful tool for all sorts of companies. For example, in late 2016, Facebook announced that it had begun running Spark workloads and seeing large benefits in doing so. In the words of the blog post’s authors:

>We challenged Spark to replace a pipeline that decomposed to hundreds of Hive jobs into a single Spark job. Through a series of performance and reliability improvements, we were able to scale Spark to handle one of our entity ranking data processing use cases in production…. The Spark-based pipeline produced significant performance improvements (4.5–6x CPU, 3–4x resource reservation, and ~5x latency) compared with the old Hive-based pipeline, and it has been running in production for several months.

The power of Spark SQL derives from several key facts: SQL analysts can now take advantage of Spark’s computation abilities by plugging into the Thrift Server or Spark’s SQL interface, whereas data engineers and scientists can use Spark SQL where appropriate in any data flow. This unifying API allows for data to be extracted with SQL, manipulated as a DataFrame, passed into one of Spark MLlib’s large-scale machine learning algorithms, written out to another data source, and everything in between.

**NOTE:** Spark SQL is intended to operate as an online analytic processing (OLAP) database, not an online transaction processing (OLTP) database. This means that it is not intended to perform extremely low-latency queries. Even though support for in-place modifications is sure to be something that comes up in the future, it’s not something that is currently available.
spark.sql("SELECT 1 + 1").show()
+-------+ |(1 + 1)| +-------+ | 2| +-------+
MIT
docs/Supplementary-Materials/01-Spark-SQL.ipynb
ymei9/Big-Data-Analytics-for-Business
As we have seen before, you can completely interoperate between SQL and DataFrames, as you see fit. For instance, you can create a DataFrame, manipulate it with SQL, and then manipulate it again as a DataFrame. It’s a powerful abstraction that you will likely find yourself using quite a bit:
bucket = spark._jsc.hadoopConfiguration().get("fs.gs.system.bucket")
data = "gs://" + bucket + "/notebooks/data/"

spark.read.json(data + "flight-data/json/2015-summary.json")\
  .createOrReplaceTempView("flights_view") # DF => SQL

spark.sql("""
SELECT DEST_COUNTRY_NAME, sum(count)
FROM flights_view
GROUP BY DEST_COUNTRY_NAME
""")\
  .where("DEST_COUNTRY_NAME like 'S%'").where("`sum(count)` > 10")\
  .count() # SQL => DF
_____no_output_____
MIT
docs/Supplementary-Materials/01-Spark-SQL.ipynb
ymei9/Big-Data-Analytics-for-Business
Creating Tables

You can create tables from a variety of sources. For instance, below we are creating a table from a SELECT statement:
spark.sql('''
CREATE TABLE IF NOT EXISTS flights_from_select USING parquet AS
SELECT * FROM flights_view
''')

spark.sql('SELECT * FROM flights_from_select').show(5)

spark.sql('''
DESCRIBE TABLE flights_from_select
''').show()
+-------------------+---------+-------+ | col_name|data_type|comment| +-------------------+---------+-------+ | DEST_COUNTRY_NAME| string| null| |ORIGIN_COUNTRY_NAME| string| null| | count| bigint| null| +-------------------+---------+-------+
MIT
docs/Supplementary-Materials/01-Spark-SQL.ipynb
ymei9/Big-Data-Analytics-for-Business
Catalog

The highest level abstraction in Spark SQL is the Catalog. The Catalog is an abstraction for the storage of metadata about the data stored in your tables, as well as other helpful things like databases, tables, functions, and views. The catalog is available in the `spark.catalog` package and contains a number of helpful functions for doing things like listing tables, databases, and functions.
Cat = spark.catalog
Cat.listTables()
spark.sql('SHOW TABLES').show(5, False)
Cat.listDatabases()
spark.sql('SHOW DATABASES').show()
Cat.listColumns('flights_from_select')
Cat.listTables()
_____no_output_____
MIT
docs/Supplementary-Materials/01-Spark-SQL.ipynb
ymei9/Big-Data-Analytics-for-Business
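Keep in mind that a notebook only displays the result of the last expression in a cell, so most of the catalog calls above produce no visible output. Wrapping them in print() shows each result, for example:

print(Cat.listTables())
print(Cat.listDatabases())
print(Cat.listColumns('flights_from_select'))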
Caching Tables
spark.sql('''
CACHE TABLE flights_view
''')

spark.sql('''
UNCACHE TABLE flights_view
''')
_____no_output_____
MIT
docs/Supplementary-Materials/01-Spark-SQL.ipynb
ymei9/Big-Data-Analytics-for-Business
Explain
spark.sql('''
EXPLAIN SELECT * FROM just_usa_view
''').show(1, False)
+-----------------------------------------------------------------------------------------------------------------+ |plan | +-----------------------------------------------------------------------------------------------------------------+ |== Physical Plan == org.apache.spark.sql.AnalysisException: Table or view not found: just_usa_view; line 2 pos 22| +-----------------------------------------------------------------------------------------------------------------+
MIT
docs/Supplementary-Materials/01-Spark-SQL.ipynb
ymei9/Big-Data-Analytics-for-Business
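The EXPLAIN above fails with an AnalysisException because just_usa_view has not been created yet; the view is only defined in the next section. Creating the view first and then running EXPLAIN returns an actual physical plan, for example:

spark.sql('''
CREATE VIEW just_usa_view AS
SELECT * FROM flights_from_select WHERE dest_country_name = 'United States'
''')

spark.sql('''
EXPLAIN SELECT * FROM just_usa_view
''').show(1, False)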
VIEWS - create/drop
spark.sql('''
CREATE VIEW just_usa_view AS
SELECT * FROM flights_from_select WHERE dest_country_name = 'United States'
''')

spark.sql('''
DROP VIEW IF EXISTS just_usa_view
''')
_____no_output_____
MIT
docs/Supplementary-Materials/01-Spark-SQL.ipynb
ymei9/Big-Data-Analytics-for-Business
Drop tables
spark.sql('DROP TABLE flights_from_select')
spark.sql('DROP TABLE IF EXISTS flights_from_select')
_____no_output_____
MIT
docs/Supplementary-Materials/01-Spark-SQL.ipynb
ymei9/Big-Data-Analytics-for-Business
How to deploy a bot on HEROKU

*Prepared by Yan Pile*

Let's say right away that what we are deploying to Heroku is an **echo bot for Telegram, written with the [pyTelegramBotAPI](https://github.com/eternnoir/pyTelegramBotAPI) library**. Its interaction with the server is handled with [flask](http://flask.pocoo.org/). In other words, you write something to the bot, and it replies with the same text.

Registration

Go to **@BotFather** in Telegram and follow its instructions to create a new bot with the **/newbot** command. This should end with you receiving your bot's token. For example, the sequence of commands I entered:
* **/newbot**
* **my_echo_bot** (the bot's name)
* **ian_echo_bot** (the bot's username in Telegram)
ended with me being given the token **1403467808:AAEaaLPkIqrhrQ62p7ToJclLtNNINdOopYk** and the link t.me/ian_echo_bot

Registration on HEROKU

Go here: https://signup.heroku.com/login
Create a user (it's free). You end up at https://dashboard.heroku.com/apps, where you create a new application: enter a name and a region (I chose Europe) and create it. Once the application is created, click "Open App" and copy the address from there. For me it is https://ian-echo-bot.herokuapp.com

Install the heroku and git command-line interfaces

Now you need to install the heroku and git command-line interfaces from these links:
* https://devcenter.heroku.com/articles/heroku-cli
* https://git-scm.com/book/en/v2/Getting-Started-Installing-Git

Install the libraries

Now, in your editor (for example PyCharm), install the Telegram library and flask:
* pip install pyTelegramBotAPI
* pip install flask

The code of our echo bot

Here is the code I put into the file main.py
import os
import telebot
from flask import Flask, request

TOKEN = '1403467808:AAEaaLPkIqrhrQ62p7ToJclLtNNINdOopYk' # this is my token

bot = telebot.TeleBot(token=TOKEN)
server = Flask(__name__)

# If the incoming message text is not empty, the bot repeats it
@bot.message_handler(func=lambda msg: msg.text is not None)
def reply_to_message(message):
    bot.send_message(message.chat.id, message.text)

@server.route('/' + TOKEN, methods=['POST'])
def getMessage():
    bot.process_new_updates([telebot.types.Update.de_json(request.stream.read().decode("utf-8"))])
    return "!", 200

@server.route("/")
def webhook():
    bot.remove_webhook()
    bot.set_webhook(url='https://ian-echo-bot.herokuapp.com/' + TOKEN)
    return "!", 200

if __name__ == "__main__":
    server.run(host="0.0.0.0", port=int(os.environ.get('PORT', 5000)))
_____no_output_____
MIT
lect13_NumPy/2021_DPO_13_2_heroku.ipynb
weqrwer/Python_DPO_2021_fall
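The notebook stops right before the actual deployment, so here is a rough outline of the remaining steps (not part of the original text; the exact commands may differ between Heroku versions). Next to main.py you add a requirements.txt listing pyTelegramBotAPI and Flask, and a Procfile containing a single line such as web: python main.py. Then, using the command-line tools installed above, you log in with heroku login, initialize a git repository in the project folder, connect it to the application created earlier with heroku git:remote -a ian-echo-bot, commit the files, and push them with git push heroku master. After the push, opening https://ian-echo-bot.herokuapp.com once sets the webhook, and the bot should start echoing your messages.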
Computing Alpha, Beta, and R Squared in Python

*Suggested Answers follow (usually there are multiple ways to solve a problem in Python).*

*Running a Regression in Python - continued:*
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import matplotlib.pyplot as plt

data = pd.read_excel('D:/Python/Data_Files/IQ_data.xlsx')

X = data['Test 1']
Y = data['IQ']

plt.scatter(X,Y)
plt.axis([0, 120, 0, 150])
plt.ylabel('IQ')
plt.xlabel('Test 1')
plt.show()
_____no_output_____
MIT
Python for Finance - Code Files/83 Computing Alpha, Beta, and R Squared in Python/Python 2/Computing Alpha, Beta, and R Squared in Python - Solution.ipynb
siddharthjain1611/Python_for_Finance_Investment_Fundamentals-and-Data-Analytics
**** Use the statsmodels **.add_constant()** method to reassign the X data to X1. Use OLS with arguments Y and X1 and apply the fit method to obtain univariate regression results. Help yourself with the **.summary()** method.
X1 = sm.add_constant(X)
reg = sm.OLS(Y, X1).fit()
reg.summary()
_____no_output_____
MIT
Python for Finance - Code Files/83 Computing Alpha, Beta, and R Squared in Python/Python 2/Computing Alpha, Beta, and R Squared in Python - Solution.ipynb
siddharthjain1611/Python_for_Finance_Investment_Fundamentals-and-Data-Analytics
By looking at the p-values, would you conclude Test 1 scores are a good predictor? ***** Imagine a kid scored 84 on Test 1. How many points is she expected to get on the IQ test, approximately?
45 + 84*0.76
_____no_output_____
MIT
Python for Finance - Code Files/83 Computing Alpha, Beta, and R Squared in Python/Python 2/Computing Alpha, Beta, and R Squared in Python - Solution.ipynb
siddharthjain1611/Python_for_Finance_Investment_Fundamentals-and-Data-Analytics
****** Alpha, Beta, R^2: Apply the stats module’s **linregress()** to extract the value for the slope, the intercept, the r squared, the p_value, and the standard error.
slope, intercept, r_value, p_value, std_err = stats.linregress(X,Y)
slope
intercept
r_value
r_value ** 2
p_value
std_err
_____no_output_____
MIT
Python for Finance - Code Files/83 Computing Alpha, Beta, and R Squared in Python/Python 2/Computing Alpha, Beta, and R Squared in Python - Solution.ipynb
siddharthjain1611/Python_for_Finance_Investment_Fundamentals-and-Data-Analytics
Use the values of the slope and the intercept to predict the IQ score of a child who obtained 84 points on Test 1. Is the forecasted value different from the one you obtained above?
intercept + 84 * slope
_____no_output_____
MIT
Python for Finance - Code Files/83 Computing Alpha, Beta, and R Squared in Python/Python 2/Computing Alpha, Beta, and R Squared in Python - Solution.ipynb
siddharthjain1611/Python_for_Finance_Investment_Fundamentals-and-Data-Analytics
****** Follow the steps to draw the best fitting line of the provided regression. Define a function that will use the slope and the intercept value to calculate the points of the best fitting line.
def fitline(b): return intercept + slope * b
_____no_output_____
MIT
Python for Finance - Code Files/83 Computing Alpha, Beta, and R Squared in Python/Python 2/Computing Alpha, Beta, and R Squared in Python - Solution.ipynb
siddharthjain1611/Python_for_Finance_Investment_Fundamentals-and-Data-Analytics
Apply it to the data you have stored in the variable X.
line = fitline(X)
_____no_output_____
MIT
Python for Finance - Code Files/83 Computing Alpha, Beta, and R Squared in Python/Python 2/Computing Alpha, Beta, and R Squared in Python - Solution.ipynb
siddharthjain1611/Python_for_Finance_Investment_Fundamentals-and-Data-Analytics
Draw a scatter plot with the X and Y data and then plot X and the obtained fit-line.
plt.scatter(X,Y)
plt.plot(X,line)
plt.show()
_____no_output_____
MIT
Python for Finance - Code Files/83 Computing Alpha, Beta, and R Squared in Python/Python 2/Computing Alpha, Beta, and R Squared in Python - Solution.ipynb
siddharthjain1611/Python_for_Finance_Investment_Fundamentals-and-Data-Analytics
# Installs %%capture !pip install --upgrade category_encoders plotly # Imports import os, sys os.chdir('/content') !git init . !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge.git !git pull origin master !pip install -r requirements.txt os.chdir('module1') # Disable warning import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') # Imports import pandas as pd import numpy as np import math import sklearn sklearn.__version__ # Import the models from sklearn.linear_model import LogisticRegressionCV from sklearn.pipeline import make_pipeline # Import encoder and scaler and imputer import category_encoders as ce from sklearn.preprocessing import StandardScaler from sklearn.impute import SimpleImputer # Import random forest classifier from sklearn.ensemble import RandomForestClassifier # Import, load data and split data into train, validate and test train_features = pd.read_csv('../data/tanzania/train_features.csv') train_labels = pd.read_csv('../data/tanzania/train_labels.csv') test_features = pd.read_csv('../data/tanzania/test_features.csv') sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) # Load initial train features and labels from sklearn.model_selection import train_test_split X_train = train_features y_train = train_labels['status_group'] # Split the initial train features and labels 80% into new train and new validation X_train, X_val, y_train, y_val = train_test_split( X_train, y_train, train_size = 0.80, test_size = 0.20, stratify = y_train, random_state=42 ) X_train.shape, X_val.shape, y_train.shape, y_val.shape # Wrangle train, validate, and test sets def wrangle(X): # Set bins value bins=20 chars = 3 # Prevent SettingWithCopyWarning X = X.copy() X['latitude'] = X['latitude'].replace(-2e-08, 0) # Create missing columns cols_with_zeros = ['longitude', 'latitude', 'construction_year', 'gps_height', 'population'] for col in cols_with_zeros: X[col] = X[col].replace(0, np.nan) X[col+'_missing'] = X[col].isnull() for col in cols_with_zeros: X[col] = X[col].replace(np.nan, 0) # Clean installer X['installer'] = X['installer'].str.lower() X['installer'] = X['installer'].str.replace('danid', 'danida') X['installer'] = X['installer'].str.replace('disti', 'district council') X['installer'] = X['installer'].str.replace('commu', 'community') X['installer'] = X['installer'].str.replace('central government', 'government') X['installer'] = X['installer'].str.replace('kkkt _ konde and dwe', 'kkkt') X['installer'] = X['installer'].str[:chars] X['installer'].value_counts(normalize=True) tops = X['installer'].value_counts()[:5].index X.loc[~X['installer'].isin(tops), 'installer'] = 'Other' # Clean funder and bin X['funder'] = X['funder'].str.lower() X['funder'] = X['funder'].str[:chars] X['funder'].value_counts(normalize=True) tops = X['funder'].value_counts()[:20].index X.loc[~X['funder'].isin(tops), 'funder'] = 'Other' # Use mean for gps_height missing values X.loc[X['gps_height'] == 0, 'gps_height'] = X['gps_height'].mean() # Bin lga tops = X['lga'].value_counts()[:10].index X.loc[~X['lga'].isin(tops), 'lga'] = 'Other' # Bin ward tops = X['ward'].value_counts()[:20].index X.loc[~X['ward'].isin(tops), 'ward'] = 'Other' # Bin subvillage tops = X['subvillage'].value_counts()[:bins].index X.loc[~X['subvillage'].isin(tops), 'subvillage'] = 
'Other' # Clean latitude and longitude avg_lat_ward = X.groupby('ward').latitude.mean() avg_lat_lga = X.groupby('lga').latitude.mean() avg_lat_region = X.groupby('region').latitude.mean() avg_lat_country = X.latitude.mean() avg_long_ward = X.groupby('ward').longitude.mean() avg_long_lga = X.groupby('lga').longitude.mean() avg_long_region = X.groupby('region').longitude.mean() avg_long_country = X.longitude.mean() #cols_with_zeros = ['longitude', 'latitude'] #for col in cols_with_zeros: # X[col] = X[col].replace(0, np.nan) #X.loc[X['latitude'] == 0, 'latitude'] = X['latitude'].median() #X.loc[X['longitude'] == 0, 'longitude'] = X['longitude'].median() #for i in range(0, 9): # X.loc[(X['latitude'] == 0) & (X['ward'] == avg_lat_ward.index[0]), 'latitude'] = avg_lat_ward[i] # X.loc[(X['latitude'] == 0) & (X['lga'] == avg_lat_lga.index[0]), 'latitude'] = avg_lat_lga[i] # X.loc[(X['latitude'] == 0) & (X['region'] == avg_lat_region.index[0]), 'latitude'] = avg_lat_region[i] # X.loc[(X['latitude'] == 0), 'latitude'] = avg_lat_country # X.loc[(X['longitude'] == 0) & (X['ward'] == avg_long_ward.index[0]), 'longitude'] = avg_long_ward[i] # X.loc[(X['longitude'] == 0) & (X['lga'] == avg_long_lga.index[0]), 'longitude'] = avg_long_lga[i] # X.loc[(X['longitude'] == 0) & (X['region'] == avg_long_region.index[0]), 'longitude'] = avg_long_region[i] # X.loc[(X['longitude'] == 0), 'longitude'] = avg_long_country average_lat = X.groupby('region').latitude.mean().reset_index() average_long = X.groupby('region').longitude.mean().reset_index() shinyanga_lat = average_lat.loc[average_lat['region'] == 'Shinyanga', 'latitude'] shinyanga_long = average_long.loc[average_lat['region'] == 'Shinyanga', 'longitude'] X.loc[(X['region'] == 'Shinyanga') & (X['latitude'] > -1), ['latitude']] = shinyanga_lat[17] X.loc[(X['region'] == 'Shinyanga') & (X['longitude'] == 0), ['longitude']] = shinyanga_long[17] mwanza_lat = average_lat.loc[average_lat['region'] == 'Mwanza', 'latitude'] mwanza_long = average_long.loc[average_lat['region'] == 'Mwanza', 'longitude'] X.loc[(X['region'] == 'Mwanza') & (X['latitude'] > -1), ['latitude']] = mwanza_lat[13] X.loc[(X['region'] == 'Mwanza') & (X['longitude'] == 0) , ['longitude']] = mwanza_long[13] # Impute mean for tsh based on mean of source_class/basin/waterpoint_type_group def tsh_calc(tsh, source, base, waterpoint): if tsh == 0: if (source, base, waterpoint) in tsh_dict: new_tsh = tsh_dict[source, base, waterpoint] return new_tsh else: return tsh return tsh temp = X[X['amount_tsh'] != 0].groupby(['source_class', 'basin', 'waterpoint_type_group'])['amount_tsh'].mean() tsh_dict = dict(temp) X['amount_tsh'] = X.apply(lambda x: tsh_calc(x['amount_tsh'], x['source_class'], x['basin'], x['waterpoint_type_group']), axis=1) X.loc[X['amount_tsh'] == 0, 'amount_tsh'] = X['amount_tsh'].median() # Impute mean for construction_year based on mean of source_class/basin/waterpoint_type_group #temp = X[X['construction_year'] != 0].groupby(['source_class', # 'basin', # 'waterpoint_type_group'])['amount_tsh'].mean() #tsh_dict = dict(temp) #X['construction_year'] = X.apply(lambda x: tsh_calc(x['construction_year'], x['source_class'], x['basin'], x['waterpoint_type_group']), axis=1) #X.loc[X['construction_year'] == 0, 'construction_year'] = X['construction_year'].mean() # Impute mean for the feature based on latitude and longitude def latlong_conversion(feature, pop, long, lat): radius = 0.1 radius_increment = 0.3 if pop <= 1: pop_temp = pop while pop_temp <= 1 and radius <= 2: lat_from = lat - radius 
lat_to = lat + radius long_from = long - radius long_to = long + radius df = X[(X['latitude'] >= lat_from) & (X['latitude'] <= lat_to) & (X['longitude'] >= long_from) & (X['longitude'] <= long_to)] pop_temp = df[feature].mean() if math.isnan(pop_temp): pop_temp = pop radius = radius + radius_increment else: pop_temp = pop if pop_temp <= 1: new_pop = X_train[feature].mean() else: new_pop = pop_temp return new_pop # Impute population based on location #X['population'] = X.apply(lambda x: latlong_conversion('population', x['population'], x['longitude'], x['latitude']), axis=1) #X.loc[X['population'] == 0, 'population'] = X['population'].median() # Impute gps_height based on location #X['gps_height'] = X.apply(lambda x: latlong_conversion('gps_height', x['gps_height'], x['longitude'], x['latitude']), axis=1) # Drop recorded_by (never varies) and id (always varies, random) and num_private (empty) unusable_variance = ['recorded_by', 'id', 'num_private','wpt_name', 'extraction_type_class', 'quality_group', 'source_type', 'source_class', 'waterpoint_type_group'] X = X.drop(columns=unusable_variance) # Drop duplicate columns duplicates = ['quantity_group', 'payment_type', 'extraction_type_group'] X = X.drop(columns=duplicates) # return the wrangled dataframe return X # Wrangle the data X_train = wrangle(X_train) X_val = wrangle(X_val) # Feature engineering def feature_engineer(X): # Create new feature pump_age X['pump_age'] = 2013 - X['construction_year'] X.loc[X['pump_age'] == 2013, 'pump_age'] = 0 X.loc[X['pump_age'] == 0, 'pump_age'] = 10 # Convert date_recorded to datetime X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True) # Extract components from date_recorded, then drop the original column X['year_recorded'] = X['date_recorded'].dt.year X['month_recorded'] = X['date_recorded'].dt.month X['day_recorded'] = X['date_recorded'].dt.day # Engineer feature: how many years from construction_year to date_recorded X['years'] = X['year_recorded'] - X['construction_year'] X['years_missing'] = X['years'].isnull() column_list = ['date_recorded'] X = X.drop(columns=column_list) # Create new feature region_district X['region_district'] = X['region_code'].astype(str) + X['district_code'].astype(str) #X['tsh_pop'] = X['amount_tsh']/X['population'] return X # Feature engineer the data X_train = feature_engineer(X_train) X_val = feature_engineer(X_val) X_train.head() # Encode a feature def encode_feature(X, y, str): X['status_group'] = y X.groupby(str)['status_group'].value_counts(normalize=True) X['functional']= (X['status_group'] == 'functional').astype(int) X[['status_group', 'functional']] return X # Encode all the categorical features train = X_train.copy() train = encode_feature(train, y_train, 'quantity') train = encode_feature(train, y_train, 'waterpoint_type') train = encode_feature(train, y_train, 'extraction_type') train = encode_feature(train, y_train, 'installer') train = encode_feature(train, y_train, 'funder') train = encode_feature(train, y_train, 'water_quality') train = encode_feature(train, y_train, 'basin') train = encode_feature(train, y_train, 'region') train = encode_feature(train, y_train, 'payment') train = encode_feature(train, y_train, 'source') train = encode_feature(train, y_train, 'lga') train = encode_feature(train, y_train, 'ward') train = encode_feature(train, y_train, 'scheme_management') train = encode_feature(train, y_train, 'management') train = encode_feature(train, y_train, 'region_district') train = encode_feature(train, y_train, 
'subvillage') # use quantity feature and the numerical features but drop id categorical_features = ['quantity', 'waterpoint_type', 'extraction_type', 'installer', 'basin', 'region', 'payment', 'source', 'lga', 'public_meeting', 'scheme_management', 'permit', 'management', 'region_district', 'subvillage', 'funder', 'water_quality', 'ward', 'years_missing', 'longitude_missing', 'latitude_missing','construction_year_missing', 'gps_height_missing', 'population_missing'] # numeric_features = X_train.select_dtypes('number').columns.tolist() features = categorical_features + numeric_features # make subsets using the quantity feature all numeric features except id X_train = X_train[features] X_val = X_val[features] # Create the logistic regression pipeline pipeline = make_pipeline ( ce.OneHotEncoder(use_cat_names=True), #SimpleImputer(), StandardScaler(), LogisticRegressionCV(random_state=42, n_jobs=-1) ) pipeline.fit(X_train, y_train) print('Validation Accuracy', pipeline.score(X_val, y_val)) features # Create the random forest pipeline pipeline = make_pipeline ( ce.OrdinalEncoder(), SimpleImputer(strategy='mean'), StandardScaler(), RandomForestClassifier(n_estimators=1400, random_state=42, min_samples_split=5, min_samples_leaf=1, max_features='auto', max_depth=30, bootstrap=True, n_jobs=-1, verbose = 1) ) pipeline.fit(X_train, y_train) print('Validation Accuracy', pipeline.score(X_val, y_val)) pd.set_option('display.max_rows', 200) model = pipeline.named_steps['randomforestclassifier'] encoder = pipeline.named_steps['ordinalencoder'] encoded_columns = encoder.transform(X_train).columns importances = pd.Series(model.feature_importances_, encoded_columns) importances.sort_values(ascending=False) # Create missing columns cols_with_zeros = ['longitude', 'latitude', 'construction_year', 'gps_height', 'population'] for col in cols_with_zeros: test_features[col] = test_features[col].replace(0, np.nan) test_features[col+'_missing'] = test_features[col].isnull() for col in cols_with_zeros: test_features[col] = test_features[col].replace(np.nan, 0) test_features['pump_age'] = 2013 - test_features['construction_year'] test_features.loc[test_features['pump_age'] == 2013, 'pump_age'] = 0 test_features.loc[test_features['pump_age'] == 0, 'pump_age'] = 10 # Convert date_recorded to datetime test_features['date_recorded'] = pd.to_datetime(test_features['date_recorded'], infer_datetime_format=True) # Extract components from date_recorded, then drop the original column test_features['year_recorded'] = test_features['date_recorded'].dt.year test_features['month_recorded'] = test_features['date_recorded'].dt.month test_features['day_recorded'] = test_features['date_recorded'].dt.day # Engineer feature: how many years from construction_year to date_recorded test_features['years'] = test_features['year_recorded'] - test_features['construction_year'] test_features['years_missing'] = test_features['years'].isnull() test_features['region_district'] = test_features['region_code'].astype(str) + test_features['district_code'].astype(str) column_list = ['recorded_by', 'id', 'num_private','wpt_name', 'extraction_type_class', 'quality_group', 'source_type', 'source_class', 'waterpoint_type_group', 'quantity_group', 'payment_type', 'extraction_type_group'] test_features = test_features.drop(columns=column_list) X_test = test_features[features] assert all(X_test.columns == X_train.columns) y_pred = pipeline.predict(X_test) submission = sample_submission.copy() submission['status_group'] = y_pred 
submission.to_csv('/content/submission-05.csv', index=False)
_____no_output_____
MIT
Kaggle_Challenge_Assignment_Submission5.ipynb
JimKing100/DS-Unit-2-Kaggle-Challenge
Data generators
@numba.njit def event_series_bernoulli(series_length, event_count): '''Generate an iid Bernoulli distributed event series. series_length: length of the event series event_count: number of events''' event_series = np.zeros(series_length) event_series[np.random.choice(np.arange(0, series_length), event_count, replace=False)] = 1 return event_series @numba.njit def time_series_mean_impact(event_series, order, signal_to_noise): '''Generate a time series with impacts in mean as described in the paper. The impact weights are sampled iid from N(0, signal_to_noise), and additional noise is sampled iid from N(0,1). The detection problem will be harder than in time_series_meanconst_impact for small orders, as for small orders we have a low probability to sample at least one impact weight with a high magnitude. On the other hand, since the impact is different at every lag, we can detect the impacts even if the order is larger than the max_lag value used in the test. event_series: input of shape (T,) with event occurrences order: order of the event impacts signal_to_noise: signal to noise ratio of the event impacts''' series_length = len(event_series) weights = np.random.randn(order)*np.sqrt(signal_to_noise) time_series = np.random.randn(series_length) for t in range(series_length): if event_series[t] == 1: time_series[t+1:t+order+1] += weights[:order-max(0, (t+order+1)-series_length)] return time_series @numba.njit def time_series_meanconst_impact(event_series, order, const): '''Generate a time series with impacts in mean by adding a constant. Better for comparing performance across different impact orders, since the magnitude of the impact will always be the same. event_series: input of shape (T,) with event occurrences order: order of the event impacts const: constant for mean shift''' series_length = len(event_series) time_series = np.random.randn(series_length) for t in range(series_length): if event_series[t] == 1: time_series[t+1:t+order+1] += const return time_series @numba.njit def time_series_var_impact(event_series, order, variance): '''Generate a time series with impacts in variance as described in the paper. event_series: input of shape (T,) with event occurrences order: order of the event impacts variance: variance under event impacts''' series_length = len(event_series) time_series = np.random.randn(series_length) for t in range(series_length): if event_series[t] == 1: for tt in range(t+1, min(series_length, t+order+1)): time_series[tt] = np.random.randn()*np.sqrt(variance) return time_series @numba.njit def time_series_tail_impact(event_series, order, dof): '''Generate a time series with impacts in tails as described in the paper. event_series: input of shape (T,) with event occurrences order: delay of the event impacts dof: degrees of freedom of the t distribution''' series_length = len(event_series) time_series = np.random.randn(series_length)*np.sqrt(dof/(dof-2)) for t in range(series_length): if event_series[t] == 1: for tt in range(t+1, min(series_length, t+order+1)): time_series[tt] = np.random.standard_t(dof) return time_series
_____no_output_____
MIT
simulations.ipynb
diozaka/eitest
Visualization of the impact models
default_T = 8192 default_N = 64 default_q = 4 es = event_series_bernoulli(default_T, default_N) for ts in [ time_series_mean_impact(es, order=default_q, signal_to_noise=10.), time_series_meanconst_impact(es, order=default_q, const=5.), time_series_var_impact(es, order=default_q, variance=4.), time_series_tail_impact(es, order=default_q, dof=3.), ]: fig, (ax1, ax2) = plt.subplots(1, 2, gridspec_kw={'width_ratios': [2, 1]}, figsize=(15, 2)) ax1.plot(ts) ax1.plot(es*np.max(ts), alpha=0.5) ax1.set_xlim(0, len(es)) samples = eitest.obtain_samples(es, ts, method='eager', lag_cutoff=15, instantaneous=True) eitest.plot_samples(samples, ax2) plt.show()
_____no_output_____
MIT
simulations.ipynb
diozaka/eitest
Simulations
def test_simul_pairs(impact_model, param_T, param_N, param_q, param_r, n_pairs, lag_cutoff, instantaneous, sample_method, twosamp_test, multi_test, alpha): true_positive = 0. false_positive = 0. for _ in tqdm(range(n_pairs)): es = event_series_bernoulli(param_T, param_N) if impact_model == 'mean': ts = time_series_mean_impact(es, param_q, param_r) elif impact_model == 'meanconst': ts = time_series_meanconst_impact(es, param_q, param_r) elif impact_model == 'var': ts = time_series_var_impact(es, param_q, param_r) elif impact_model == 'tail': ts = time_series_tail_impact(es, param_q, param_r) else: raise ValueError('impact_model must be "mean", "meanconst", "var" or "tail"') # coupled pair samples = eitest.obtain_samples(es, ts, lag_cutoff=lag_cutoff, method=sample_method, instantaneous=instantaneous, sort=(twosamp_test == 'ks')) # samples need to be sorted for K-S test tstats, pvals = eitest.pairwise_twosample_tests(samples, twosamp_test, min_pts=2) pvals_adj = eitest.multitest(np.sort(pvals[~np.isnan(pvals)]), multi_test) true_positive += (pvals_adj.min() < alpha) # uncoupled pair samples = eitest.obtain_samples(np.random.permutation(es), ts, lag_cutoff=lag_cutoff, method=sample_method, instantaneous=instantaneous, sort=(twosamp_test == 'ks')) tstats, pvals = eitest.pairwise_twosample_tests(samples, twosamp_test, min_pts=2) pvals_adj = eitest.multitest(np.sort(pvals[~np.isnan(pvals)]), multi_test) false_positive += (pvals_adj.min() < alpha) return true_positive/n_pairs, false_positive/n_pairs # global parameters default_T = 8192 n_pairs = 100 alpha = 0.05 twosamp_test = 'ks' multi_test = 'simes' sample_method = 'lazy' lag_cutoff = 32 instantaneous = True
_____no_output_____
MIT
simulations.ipynb
diozaka/eitest
Mean impact model
default_N = 64
default_r = 1.
default_q = 4
_____no_output_____
MIT
simulations.ipynb
diozaka/eitest
... by number of events
vals = [4, 8, 16, 32, 64, 128, 256] tprs = np.empty(len(vals)) fprs = np.empty(len(vals)) for i, val in enumerate(vals): tprs[i], fprs[i] = test_simul_pairs(impact_model='mean', param_T=default_T, param_N=val, param_q=default_q, param_r=default_r, n_pairs=n_pairs, sample_method=sample_method, lag_cutoff=lag_cutoff, instantaneous=instantaneous, twosamp_test=twosamp_test, multi_test=multi_test, alpha=alpha) plt.figure(figsize=(3,3)) plt.axvline(default_N, ls='-', c='gray', lw=1, label='def') plt.axhline(alpha, ls='--', c='black', lw=1, label='alpha') plt.plot(vals, tprs, label='TPR', marker='x') plt.plot(vals, fprs, label='FPR', marker='x') plt.gca().set_xscale('log', base=2) plt.legend() plt.show() print(f'# mean impact model (T={default_T}, q={default_q}, r={default_r}, n_pairs={n_pairs}, cutoff={lag_cutoff}, instantaneous={instantaneous}, alpha={alpha}, {sample_method}-{twosamp_test}-{multi_test})') print(f'# N\ttpr\tfpr') for i, (tpr, fpr) in enumerate(zip(tprs, fprs)): print(f'{vals[i]}\t{tpr}\t{fpr}') print()
100%|██████████| 100/100 [00:05<00:00, 18.73it/s] 100%|██████████| 100/100 [00:00<00:00, 451.99it/s] 100%|██████████| 100/100 [00:00<00:00, 439.85it/s] 100%|██████████| 100/100 [00:00<00:00, 379.15it/s] 100%|██████████| 100/100 [00:00<00:00, 276.60it/s] 100%|██████████| 100/100 [00:00<00:00, 163.88it/s] 100%|██████████| 100/100 [00:01<00:00, 78.51it/s]
MIT
simulations.ipynb
diozaka/eitest
... by impact order
vals = [1, 2, 4, 8, 16, 32] tprs = np.empty(len(vals)) fprs = np.empty(len(vals)) for i, val in enumerate(vals): tprs[i], fprs[i] = test_simul_pairs(impact_model='mean', param_T=default_T, param_N=default_N, param_q=val, param_r=default_r, n_pairs=n_pairs, sample_method=sample_method, lag_cutoff=lag_cutoff, instantaneous=instantaneous, twosamp_test=twosamp_test, multi_test=multi_test, alpha=alpha) plt.figure(figsize=(3,3)) plt.axvline(default_q, ls='-', c='gray', lw=1, label='def') plt.axhline(alpha, ls='--', c='black', lw=1, label='alpha') plt.plot(vals, tprs, label='TPR', marker='x') plt.plot(vals, fprs, label='FPR', marker='x') plt.gca().set_xscale('log', base=2) plt.legend() plt.show() print(f'# mean impact model (T={default_T}, N={default_N}, r={default_r}, n_pairs={n_pairs}, cutoff={lag_cutoff}, instantaneous={instantaneous}, alpha={alpha}, {sample_method}-{twosamp_test}-{multi_test})') print(f'# q\ttpr\tfpr') for i, (tpr, fpr) in enumerate(zip(tprs, fprs)): print(f'{vals[i]}\t{tpr}\t{fpr}') print()
100%|██████████| 100/100 [00:00<00:00, 218.61it/s] 100%|██████████| 100/100 [00:00<00:00, 187.72it/s] 100%|██████████| 100/100 [00:00<00:00, 207.15it/s] 100%|██████████| 100/100 [00:00<00:00, 200.33it/s] 100%|██████████| 100/100 [00:00<00:00, 213.18it/s] 100%|██████████| 100/100 [00:00<00:00, 215.75it/s]
MIT
simulations.ipynb
diozaka/eitest
... by signal-to-noise ratio
vals = [1./32, 1./16, 1./8, 1./4, 1./2, 1., 2., 4.] tprs = np.empty(len(vals)) fprs = np.empty(len(vals)) for i, val in enumerate(vals): tprs[i], fprs[i] = test_simul_pairs(impact_model='mean', param_T=default_T, param_N=default_N, param_q=default_q, param_r=val, n_pairs=n_pairs, sample_method=sample_method, lag_cutoff=lag_cutoff, instantaneous=instantaneous, twosamp_test=twosamp_test, multi_test=multi_test, alpha=alpha) plt.figure(figsize=(3,3)) plt.axvline(default_r, ls='-', c='gray', lw=1, label='def') plt.axhline(alpha, ls='--', c='black', lw=1, label='alpha') plt.plot(vals, tprs, label='TPR', marker='x') plt.plot(vals, fprs, label='FPR', marker='x') plt.gca().set_xscale('log', base=2) plt.legend() plt.show() print(f'# mean impact model (T={default_T}, N={default_N}, q={default_q}, n_pairs={n_pairs}, cutoff={lag_cutoff}, instantaneous={instantaneous}, alpha={alpha}, {sample_method}-{twosamp_test}-{multi_test})') print(f'# r\ttpr\tfpr') for i, (tpr, fpr) in enumerate(zip(tprs, fprs)): print(f'{vals[i]}\t{tpr}\t{fpr}')
100%|██████████| 100/100 [00:00<00:00, 179.47it/s] 100%|██████████| 100/100 [00:00<00:00, 210.34it/s] 100%|██████████| 100/100 [00:00<00:00, 206.91it/s] 100%|██████████| 100/100 [00:00<00:00, 214.85it/s] 100%|██████████| 100/100 [00:00<00:00, 212.98it/s] 100%|██████████| 100/100 [00:00<00:00, 182.82it/s] 100%|██████████| 100/100 [00:00<00:00, 181.18it/s] 100%|██████████| 100/100 [00:00<00:00, 210.13it/s]
MIT
simulations.ipynb
diozaka/eitest
Meanconst impact model
default_N = 64
default_r = 0.5
default_q = 4
_____no_output_____
MIT
simulations.ipynb
diozaka/eitest
... by number of events
vals = [4, 8, 16, 32, 64, 128, 256] tprs = np.empty(len(vals)) fprs = np.empty(len(vals)) for i, val in enumerate(vals): tprs[i], fprs[i] = test_simul_pairs(impact_model='meanconst', param_T=default_T, param_N=val, param_q=default_q, param_r=default_r, n_pairs=n_pairs, sample_method=sample_method, lag_cutoff=lag_cutoff, instantaneous=instantaneous, twosamp_test=twosamp_test, multi_test=multi_test, alpha=alpha) plt.figure(figsize=(3,3)) plt.axvline(default_N, ls='-', c='gray', lw=1, label='def') plt.axhline(alpha, ls='--', c='black', lw=1, label='alpha') plt.plot(vals, tprs, label='TPR', marker='x') plt.plot(vals, fprs, label='FPR', marker='x') plt.gca().set_xscale('log', base=2) plt.legend() plt.show() print(f'# meanconst impact model (T={default_T}, q={default_q}, r={default_r}, n_pairs={n_pairs}, cutoff={lag_cutoff}, instantaneous={instantaneous}, alpha={alpha}, {sample_method}-{twosamp_test}-{multi_test})') print(f'# N\ttpr\tfpr') for i, (tpr, fpr) in enumerate(zip(tprs, fprs)): print(f'{vals[i]}\t{tpr}\t{fpr}') print()
100%|██████████| 100/100 [00:00<00:00, 370.92it/s] 100%|██████████| 100/100 [00:00<00:00, 387.87it/s] 100%|██████████| 100/100 [00:00<00:00, 364.85it/s] 100%|██████████| 100/100 [00:00<00:00, 313.86it/s] 100%|██████████| 100/100 [00:00<00:00, 215.43it/s] 100%|██████████| 100/100 [00:00<00:00, 115.63it/s] 100%|██████████| 100/100 [00:01<00:00, 52.62it/s]
MIT
simulations.ipynb
diozaka/eitest
... by impact order
vals = [1, 2, 4, 8, 16, 32] tprs = np.empty(len(vals)) fprs = np.empty(len(vals)) for i, val in enumerate(vals): tprs[i], fprs[i] = test_simul_pairs(impact_model='meanconst', param_T=default_T, param_N=default_N, param_q=val, param_r=default_r, n_pairs=n_pairs, sample_method=sample_method, lag_cutoff=lag_cutoff, instantaneous=instantaneous, twosamp_test=twosamp_test, multi_test=multi_test, alpha=alpha) plt.figure(figsize=(3,3)) plt.axvline(default_q, ls='-', c='gray', lw=1, label='def') plt.axhline(alpha, ls='--', c='black', lw=1, label='alpha') plt.plot(vals, tprs, label='TPR', marker='x') plt.plot(vals, fprs, label='FPR', marker='x') plt.gca().set_xscale('log', base=2) plt.legend() plt.show() print(f'# meanconst impact model (T={default_T}, N={default_N}, r={default_r}, n_pairs={n_pairs}, cutoff={lag_cutoff}, instantaneous={instantaneous}, alpha={alpha}, {sample_method}-{twosamp_test}-{multi_test})') print(f'# q\ttpr\tfpr') for i, (tpr, fpr) in enumerate(zip(tprs, fprs)): print(f'{vals[i]}\t{tpr}\t{fpr}') print()
100%|██████████| 100/100 [00:00<00:00, 191.97it/s] 100%|██████████| 100/100 [00:00<00:00, 209.09it/s] 100%|██████████| 100/100 [00:00<00:00, 181.51it/s] 100%|██████████| 100/100 [00:00<00:00, 170.74it/s] 100%|██████████| 100/100 [00:00<00:00, 196.70it/s] 100%|██████████| 100/100 [00:00<00:00, 191.42it/s]
MIT
simulations.ipynb
diozaka/eitest
... by mean value
vals = [0.125, 0.25, 0.5, 1, 2] tprs = np.empty(len(vals)) fprs = np.empty(len(vals)) for i, val in enumerate(vals): tprs[i], fprs[i] = test_simul_pairs(impact_model='meanconst', param_T=default_T, param_N=default_N, param_q=default_q, param_r=val, n_pairs=n_pairs, sample_method=sample_method, lag_cutoff=lag_cutoff, instantaneous=instantaneous, twosamp_test=twosamp_test, multi_test=multi_test, alpha=alpha) plt.figure(figsize=(3,3)) plt.axvline(default_r, ls='-', c='gray', lw=1, label='def') plt.axhline(alpha, ls='--', c='black', lw=1, label='alpha') plt.plot(vals, tprs, label='TPR', marker='x') plt.plot(vals, fprs, label='FPR', marker='x') plt.gca().set_xscale('log', base=2) plt.legend() plt.show() print(f'# meanconst impact model (T={default_T}, N={default_N}, q={default_q}, n_pairs={n_pairs}, cutoff={lag_cutoff}, instantaneous={instantaneous}, alpha={alpha}, {sample_method}-{twosamp_test}-{multi_test})') print(f'# r\ttpr\tfpr') for i, (tpr, fpr) in enumerate(zip(tprs, fprs)): print(f'{vals[i]}\t{tpr}\t{fpr}') print()
100%|██████████| 100/100 [00:00<00:00, 172.66it/s] 100%|██████████| 100/100 [00:00<00:00, 212.73it/s] 100%|██████████| 100/100 [00:00<00:00, 210.24it/s] 100%|██████████| 100/100 [00:00<00:00, 153.75it/s] 100%|██████████| 100/100 [00:00<00:00, 211.59it/s]
MIT
simulations.ipynb
diozaka/eitest
Variance impact model

In the paper, we show results with the variance impact model parametrized by the **variance increase**. Here we directly modulate the variance.
default_N = 64 default_r = 8. default_q = 4
_____no_output_____
MIT
simulations.ipynb
diozaka/eitest
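The sweeps below vary r for the variance impact model. The repository's actual simulator is defined elsewhere and is not shown in this section, so the following is only a hedged sketch of what "directly modulating the variance" could mean, as opposed to parametrizing by a variance increase; all names and defaults in it are illustrative.

```python
# Hedged sketch, not the repository's simulator: one reading of the difference
# between parametrizing the variance impact model by a "variance increase"
# (baseline variance + r) and directly modulating the variance (variance = r)
# for the q samples following each event. T, N, q, r are illustrative defaults.
import numpy as np

rng = np.random.default_rng(0)

def simulate_var_impact(T=1024, N=64, q=4, r=8.0, direct=True):
    events = np.sort(rng.choice(T - q - 1, size=N, replace=False))
    variance = np.ones(T)                          # unit baseline variance
    for t in events:
        variance[t + 1:t + q + 1] = r if direct else 1.0 + r
    ts = rng.normal(0.0, np.sqrt(variance))        # heteroscedastic Gaussian noise
    return events, ts

events, ts = simulate_var_impact()
print(len(events), round(ts.var(), 2))
```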
... by number of events
vals = [4, 8, 16, 32, 64, 128, 256] tprs = np.empty(len(vals)) fprs = np.empty(len(vals)) for i, val in enumerate(vals): tprs[i], fprs[i] = test_simul_pairs(impact_model='var', param_T=default_T, param_N=val, param_q=default_q, param_r=default_r, n_pairs=n_pairs, sample_method=sample_method, lag_cutoff=lag_cutoff, instantaneous=instantaneous, twosamp_test=twosamp_test, multi_test=multi_test, alpha=alpha) plt.figure(figsize=(3,3)) plt.axvline(default_N, ls='-', c='gray', lw=1, label='def') plt.axhline(alpha, ls='--', c='black', lw=1, label='alpha') plt.plot(vals, tprs, label='TPR', marker='x') plt.plot(vals, fprs, label='FPR', marker='x') plt.gca().set_xscale('log', base=2) plt.legend() plt.show() print(f'# var impact model (T={default_T}, q={default_q}, r={default_r}, n_pairs={n_pairs}, cutoff={lag_cutoff}, instantaneous={instantaneous}, alpha={alpha}, {sample_method}-{twosamp_test}-{multi_test})') print(f'# N\ttpr\tfpr') for i, (tpr, fpr) in enumerate(zip(tprs, fprs)): print(f'{vals[i]}\t{tpr}\t{fpr}') print()
100%|██████████| 100/100 [00:00<00:00, 379.83it/s] 100%|██████████| 100/100 [00:00<00:00, 399.36it/s] 100%|██████████| 100/100 [00:00<00:00, 372.13it/s] 100%|██████████| 100/100 [00:00<00:00, 319.38it/s] 100%|██████████| 100/100 [00:00<00:00, 216.67it/s] 100%|██████████| 100/100 [00:00<00:00, 121.62it/s] 100%|██████████| 100/100 [00:01<00:00, 58.75it/s]
MIT
simulations.ipynb
diozaka/eitest
... by impact order
vals = [1, 2, 4, 8, 16, 32] tprs = np.empty(len(vals)) fprs = np.empty(len(vals)) for i, val in enumerate(vals): tprs[i], fprs[i] = test_simul_pairs(impact_model='var', param_T=default_T, param_N=default_N, param_q=val, param_r=default_r, n_pairs=n_pairs, sample_method=sample_method, lag_cutoff=lag_cutoff, instantaneous=instantaneous, twosamp_test=twosamp_test, multi_test=multi_test, alpha=alpha) plt.figure(figsize=(3,3)) plt.axvline(default_q, ls='-', c='gray', lw=1, label='def') plt.axhline(alpha, ls='--', c='black', lw=1, label='alpha') plt.plot(vals, tprs, label='TPR', marker='x') plt.plot(vals, fprs, label='FPR', marker='x') plt.gca().set_xscale('log', base=2) plt.legend() plt.show() print(f'# var impact model (T={default_T}, N={default_N}, r={default_r}, n_pairs={n_pairs}, cutoff={lag_cutoff}, instantaneous={instantaneous}, alpha={alpha}, {sample_method}-{twosamp_test}-{multi_test})') print(f'# q\ttpr\tfpr') for i, (tpr, fpr) in enumerate(zip(tprs, fprs)): print(f'{vals[i]}\t{tpr}\t{fpr}') print()
100%|██████████| 100/100 [00:00<00:00, 205.11it/s] 100%|██████████| 100/100 [00:00<00:00, 208.57it/s] 100%|██████████| 100/100 [00:00<00:00, 208.42it/s] 100%|██████████| 100/100 [00:00<00:00, 215.50it/s] 100%|██████████| 100/100 [00:00<00:00, 210.17it/s] 100%|██████████| 100/100 [00:00<00:00, 213.72it/s]
MIT
simulations.ipynb
diozaka/eitest
... by variance
vals = [2., 4., 8., 16., 32.] tprs = np.empty(len(vals)) fprs = np.empty(len(vals)) for i, val in enumerate(vals): tprs[i], fprs[i] = test_simul_pairs(impact_model='var', param_T=default_T, param_N=default_N, param_q=default_q, param_r=val, n_pairs=n_pairs, sample_method=sample_method, lag_cutoff=lag_cutoff, instantaneous=instantaneous, twosamp_test=twosamp_test, multi_test=multi_test, alpha=alpha) plt.figure(figsize=(3,3)) plt.axvline(default_r, ls='-', c='gray', lw=1, label='def') plt.axhline(alpha, ls='--', c='black', lw=1, label='alpha') plt.plot(vals, tprs, label='TPR', marker='x') plt.plot(vals, fprs, label='FPR', marker='x') plt.gca().set_xscale('log', base=2) plt.legend() plt.show() print(f'# var impact model (T={default_T}, N={default_N}, q={default_q}, n_pairs={n_pairs}, cutoff={lag_cutoff}, instantaneous={instantaneous}, alpha={alpha}, {sample_method}-{twosamp_test}-{multi_test})') print(f'# r\ttpr\tfpr') for i, (tpr, fpr) in enumerate(zip(tprs, fprs)): print(f'{vals[i]}\t{tpr}\t{fpr}') print()
100%|██████████| 100/100 [00:00<00:00, 211.99it/s] 100%|██████████| 100/100 [00:00<00:00, 213.48it/s] 100%|██████████| 100/100 [00:00<00:00, 209.49it/s] 100%|██████████| 100/100 [00:00<00:00, 214.06it/s] 100%|██████████| 100/100 [00:00<00:00, 213.53it/s]
MIT
simulations.ipynb
diozaka/eitest
Tail impact model
default_N = 512 default_r = 3. default_q = 4
_____no_output_____
MIT
simulations.ipynb
diozaka/eitest
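Again as a hedged sketch rather than the repository's code: the "... by degrees of freedom" sweep further down suggests that, for the tail impact model, r plays the role of the degrees of freedom of a heavy-tailed distribution. One hypothetical reading is that the q samples after each event are drawn from a Student-t with df = r (heavier tails as r decreases), while the rest of the series stays standard normal.

```python
# Hypothetical tail impact: heavy-tailed (Student-t, df=r) samples after events,
# standard normal elsewhere. This is an illustration of the idea only; the
# actual model used by the sweeps below may differ.
import numpy as np

rng = np.random.default_rng(0)

def simulate_tail_impact(T=8192, N=512, q=4, r=3.0):
    events = np.sort(rng.choice(T - q - 1, size=N, replace=False))
    ts = rng.standard_normal(T)                              # light-tailed baseline
    for t in events:
        ts[t + 1:t + q + 1] = rng.standard_t(df=r, size=q)   # occasional extreme values
    return events, ts

events, ts = simulate_tail_impact()
print(round(ts.min(), 2), round(ts.max(), 2))
```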
... by number of events
vals = [64, 128, 256, 512, 1024] tprs = np.empty(len(vals)) fprs = np.empty(len(vals)) for i, val in enumerate(vals): tprs[i], fprs[i] = test_simul_pairs(impact_model='tail', param_T=default_T, param_N=val, param_q=default_q, param_r=default_r, n_pairs=n_pairs, sample_method=sample_method, lag_cutoff=lag_cutoff, instantaneous=instantaneous, twosamp_test=twosamp_test, multi_test=multi_test, alpha=alpha) plt.figure(figsize=(3,3)) plt.axvline(default_N, ls='-', c='gray', lw=1, label='def') plt.axhline(alpha, ls='--', c='black', lw=1, label='alpha') plt.plot(vals, tprs, label='TPR', marker='x') plt.plot(vals, fprs, label='FPR', marker='x') plt.gca().set_xscale('log', base=2) plt.legend() plt.show() print(f'# tail impact model (T={default_T}, q={default_q}, r={default_r}, n_pairs={n_pairs}, cutoff={lag_cutoff}, instantaneous={instantaneous}, alpha={alpha}, {sample_method}-{twosamp_test}-{multi_test})') print(f'# N\ttpr\tfpr') for i, (tpr, fpr) in enumerate(zip(tprs, fprs)): print(f'{vals[i]}\t{tpr}\t{fpr}') print()
100%|██████████| 100/100 [00:00<00:00, 210.81it/s] 100%|██████████| 100/100 [00:00<00:00, 117.61it/s] 100%|██████████| 100/100 [00:01<00:00, 58.35it/s] 100%|██████████| 100/100 [00:03<00:00, 26.73it/s] 100%|██████████| 100/100 [00:07<00:00, 13.43it/s]
MIT
simulations.ipynb
diozaka/eitest
... by impact order
vals = [1, 2, 4, 8, 16, 32] tprs = np.empty(len(vals)) fprs = np.empty(len(vals)) for i, val in enumerate(vals): tprs[i], fprs[i] = test_simul_pairs(impact_model='tail', param_T=default_T, param_N=default_N, param_q=val, param_r=default_r, n_pairs=n_pairs, sample_method=sample_method, lag_cutoff=lag_cutoff, instantaneous=instantaneous, twosamp_test=twosamp_test, multi_test=multi_test, alpha=alpha) plt.figure(figsize=(3,3)) plt.axvline(default_q, ls='-', c='gray', lw=1, label='def') plt.axhline(alpha, ls='--', c='black', lw=1, label='alpha') plt.plot(vals, tprs, label='TPR', marker='x') plt.plot(vals, fprs, label='FPR', marker='x') plt.gca().set_xscale('log', base=2) plt.legend() plt.show() print(f'# tail impact model (T={default_T}, N={default_N}, r={default_r}, n_pairs={n_pairs}, cutoff={lag_cutoff}, instantaneous={instantaneous}, alpha={alpha}, {sample_method}-{twosamp_test}-{multi_test})') print(f'# q\ttpr\tfpr') for i, (tpr, fpr) in enumerate(zip(tprs, fprs)): print(f'{vals[i]}\t{tpr}\t{fpr}') print()
100%|██████████| 100/100 [00:03<00:00, 28.23it/s] 100%|██████████| 100/100 [00:03<00:00, 27.89it/s] 100%|██████████| 100/100 [00:03<00:00, 28.22it/s] 100%|██████████| 100/100 [00:03<00:00, 27.32it/s] 100%|██████████| 100/100 [00:03<00:00, 27.25it/s] 100%|██████████| 100/100 [00:03<00:00, 26.63it/s]
MIT
simulations.ipynb
diozaka/eitest
... by degrees of freedom
vals = [2.5, 3., 3.5, 4., 4.5, 5., 5.5, 6.] tprs = np.empty(len(vals)) fprs = np.empty(len(vals)) for i, val in enumerate(vals): tprs[i], fprs[i] = test_simul_pairs(impact_model='tail', param_T=default_T, param_N=default_N, param_q=default_q, param_r=val, n_pairs=n_pairs, sample_method=sample_method, lag_cutoff=lag_cutoff, instantaneous=instantaneous, twosamp_test=twosamp_test, multi_test=multi_test, alpha=alpha) plt.figure(figsize=(3,3)) plt.axvline(default_r, ls='-', c='gray', lw=1, label='def') plt.axhline(alpha, ls='--', c='black', lw=1, label='alpha') plt.plot(vals, tprs, label='TPR', marker='x') plt.plot(vals, fprs, label='FPR', marker='x') plt.legend() plt.show() print(f'# tail impact model (T={default_T}, N={default_N}, q={default_q}, n_pairs={n_pairs}, cutoff={lag_cutoff}, instantaneous={instantaneous}, alpha={alpha}, {sample_method}-{twosamp_test}-{multi_test})') print(f'# r\ttpr\tfpr') for i, (tpr, fpr) in enumerate(zip(tprs, fprs)): print(f'{vals[i]}\t{tpr}\t{fpr}') print()
100%|██████████| 100/100 [00:03<00:00, 27.68it/s] 100%|██████████| 100/100 [00:03<00:00, 27.97it/s] 100%|██████████| 100/100 [00:03<00:00, 27.91it/s] 100%|██████████| 100/100 [00:03<00:00, 28.07it/s] 100%|██████████| 100/100 [00:03<00:00, 27.99it/s] 100%|██████████| 100/100 [00:03<00:00, 27.71it/s] 100%|██████████| 100/100 [00:03<00:00, 27.94it/s] 100%|██████████| 100/100 [00:03<00:00, 27.64it/s]
MIT
simulations.ipynb
diozaka/eitest
%%capture %pip install nflfastpy --upgrade import nflfastpy from nflfastpy.utils import convert_to_gsis_id from nflfastpy import default_headshot from matplotlib import pyplot as plt import pandas as pd import seaborn as sns import requests print('Example default player headshot\n') plt.imshow(default_headshot); df = nflfastpy.load_pbp_data(year=2020) roster_df = nflfastpy.load_roster_data() team_logo_df = nflfastpy.load_team_logo_data() roster_df = roster_df.loc[roster_df['team.season'] == 2019] air_yards_df = df.loc[df['pass_attempt'] == 1, ['receiver_player_id', 'receiver_player_name', 'posteam', 'air_yards']] air_yards_df = air_yards_df.loc[air_yards_df['receiver_player_id'].notnull()] air_yards_df['gsis_id'] = air_yards_df['receiver_player_id'].apply(convert_to_gsis_id) #grabbing the top 5 air yards top_25 = air_yards_df.groupby('gsis_id')['air_yards'].sum().sort_values(ascending=False)[:25].index.unique() air_yards_df = air_yards_df.loc[air_yards_df['gsis_id'].isin(top_25)] air_yards_df.head() air_yards_df['receiver_player_name'].unique() fig, axes = plt.subplots(25, 2, figsize=(20, 40)) for i, row in enumerate(axes): ax1, ax2 = row[0], row[1] player_gsis_id = top_25[i] player_df = air_yards_df.loc[air_yards_df['gsis_id'] == player_gsis_id] team_logo_data = team_logo_df.loc[team_logo_df['team_abbr'] == player_df['posteam'].values[0]] team_color_1 = team_logo_data['team_color'].values[0] team_color_2 = team_logo_data['team_color2'].values[0] player_roster_data = roster_df.loc[roster_df['teamPlayers.gsisId'] == player_gsis_id] if player_roster_data.empty: #if the player is a rookie a = default_headshot else: player_headshot = player_roster_data['teamPlayers.headshot_url'].values[0] a = plt.imread(player_headshot) ax1.set_title(player_df['receiver_player_name'].values[0]) ax1.imshow(a) ax1.axis('off') sns.kdeplot(player_df['air_yards'], color=team_color_2, ax=ax2) x = ax2.get_lines()[0].get_xydata()[:, 0] y = ax2.get_lines()[0].get_xydata()[:, 1] ax2.set_xticks(range(-10, 60, 10)) ax2.fill_between(x, y, color=team_color_1, alpha=0.5) plt.show();
_____no_output_____
MIT
examples/Top 25 AY Graph w Roster and Team Logo Data.ipynb
AccidentalGuru/nflfastpy
The Analysis of The Evolution of The Russian Comedy. Part 3.

In this analysis, we will explore the evolution of the Russian five-act comedy in verse based on the following features:
- The coefficient of dialogue vivacity;
- The percentage of scenes with split verse lines;
- The percentage of scenes with split rhymes;
- The percentage of open scenes;
- The percentage of scenes with split verse lines and rhymes.

We will tackle the following questions:
1. We will describe the features.
2. We will explore feature correlations.
3. We will check the features for normality using the Shapiro-Wilk normality test. This will help us determine whether parametric or non-parametric statistical tests are more appropriate. If the features are not normally distributed, we will use non-parametric tests.
4. In our previous analysis of Sperantov's data, we discovered that instead of the four periods of the Russian five-act tragedy in verse proposed by Sperantov, we can only be confident in the existence of two periods, with 1795 as the cut-off year. We therefore propose the following periods for the Russian verse comedy:
   - Period One (from 1775 to 1794)
   - Period Two (from 1795 to 1849)
5. We will run statistical tests to determine whether these two periods are statistically different.
6. We will create visualizations for each feature.
7. We will run descriptive statistics for each feature.
import pandas as pd
import numpy as np
import json
from os import listdir
from scipy.stats import shapiro
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns

def make_plot(feature, title):
    mean, std, median = summary(feature)
    plt.figure(figsize=(10, 7))
    plt.title(title, fontsize=17)
    sns.distplot(feature, kde=False)
    mean_line = plt.axvline(mean, color='black', linestyle='solid', linewidth=2); M1 = 'Mean'
    median_line = plt.axvline(median, color='green', linestyle='dashdot', linewidth=2); M2 = 'Median'
    std_line = plt.axvline(mean + std, color='black', linestyle='dashed', linewidth=2); M3 = 'Standard deviation'
    plt.axvline(mean - std, color='black', linestyle='dashed', linewidth=2)
    plt.legend([mean_line, median_line, std_line], [M1, M2, M3])
    plt.show()

def small_sample_mann_whitney_u_test(series_one, series_two):
    values_one = series_one.sort_values().tolist()
    values_two = series_two.sort_values().tolist()
    # make sure there are no ties - this function only works for no ties
    result_df = pd.DataFrame(values_one + values_two, columns=['combined']).sort_values(by='combined')
    # average for ties
    result_df['ranks'] = result_df['combined'].rank(method='average')
    # make a dictionary where keys are values and values are ranks
    val_to_rank = dict(zip(result_df['combined'].values, result_df['ranks'].values))
    sum_ranks_one = np.sum([val_to_rank[num] for num in values_one])
    sum_ranks_two = np.sum([val_to_rank[num] for num in values_two])
    # number in sample one and two
    n_one = len(values_one)
    n_two = len(values_two)
    # calculate the mann whitney u statistic which is the smaller of u_one and u_two
    u_one = ((n_one * n_two) + (n_one * (n_one + 1) / 2)) - sum_ranks_one
    u_two = ((n_one * n_two) + (n_two * (n_two + 1) / 2)) - sum_ranks_two
    # add a quality check
    assert u_one + u_two == n_one * n_two
    u_statistic = np.min([u_one, u_two])
    return u_statistic

def summary(feature):
    mean = feature.mean()
    std = feature.std()
    median = feature.median()
    return mean, std, median

# updated boundaries
def determine_period(row):
    if row <= 1794:
        period = 1
    else:
        period = 2
    return period
_____no_output_____
MIT
Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb
innawendell/European_Comedy
Part 1. Feature Descriptions

For the Russian corpus of five-act comedies, we generated additional features that were inspired by Iarkho. So far, we have had no understanding of how these features evolved over time and whether they could differentiate literary periods. The features include the following:

1. **The Coefficient of Dialogue Vivacity**, i.e., the number of utterances in a play / the number of verse lines in a play. Since some of the comedies in our corpus were written in iambic hexameter while others were written in free iambs, it is important to clarify how we made sure the number of verse lines was comparable. Because Aleksandr Griboedov's *Woe From Wit* is the only four-act comedy in verse that had an extensive markup, we used it as the basis for our calculation (a short numerical check of this calculation follows below).
   - First, we improved Dracor's markup of the verse lines in *Woe From Wit*.
   - Next, we calculated the number of verse lines in *Woe From Wit*, which was 2220.
   - Then, we calculated the total number of syllables in *Woe From Wit*, which was 22076.
   - We calculated the average number of syllables per verse line: 22076 / 2220 = 9.944144144144143.
   - Finally, we divided the average number of syllables per verse line in *Woe From Wit* by the average number of syllables in a verse line of a comedy written in hexameter, i.e., 12.5: 9.944144144144143 / 12.5 = 0.796.
   - To convert the number of verse lines in a play written in free iambs and make it comparable with the comedies written in hexameter, we used the following formula: rescaled number of verse lines = the number of verse lines in free iambs * 0.796.
   - For example, in *Woe From Wit*, the number of verse lines = 2220, and the rescaled number of verse lines = 2220 * 0.796 = 1767.12. The coefficient of dialogue vivacity = 702 / 1767.12 = 0.397.
2. **The Percentage of Scenes with Split Verse Lines**, i.e., the percentage of scenes where the end of a scene does not correspond with the end of a verse line and the verse line extends into the next scene, e.g., "Не бойся. Онъ блажитъ. ЯВЛЕНІЕ 3. Какъ радъ что вижу васъ."
3. **The Percentage of Scenes with Split Rhymes**, i.e., the percentage of scenes that rhyme with other scenes, e.g., "Надѣюсъ на тебя, Вѣтрана, какъ на стѣну. ЯВЛЕНІЕ 4. И въ ней, какъ ни крѣпка, мы видимЪ перемѣну."
4. **The Percentage of Open Scenes**, i.e., the percentage of scenes with either split verse lines or split rhymes.
5. **The Percentage of Scenes With Split Verse Lines and Rhymes**, i.e., the percentage of scenes that are connected through both means: by sharing a verse line and a rhyme.
comedies = pd.read_csv('../Russian_Comedies/Data/Comedies_Raw_Data.csv')
# sort by creation date
comedies_sorted = comedies.sort_values(by='creation_date').copy()
# select only original comedies and five act
original_comedies = comedies_sorted[(comedies_sorted['translation/adaptation'] == 0) & (comedies_sorted['num_acts'] == 5)].copy()
original_comedies.head()
original_comedies.shape
# rename column names for clarity
original_comedies = original_comedies.rename(columns={'num_scenes_iarkho': 'mobility_coefficient'})
comedies_verse_features = original_comedies[['index', 'title', 'first_name', 'last_name', 'creation_date',
                                             'dialogue_vivacity', 'percentage_scene_split_verse',
                                             'percentage_scene_split_rhymes', 'percentage_open_scenes',
                                             'percentage_scenes_rhymes_split_verse']].copy()
comedies_verse_features.head()
_____no_output_____
MIT
Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb
innawendell/European_Comedy
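The rescaling described above is simple enough to verify directly. The numbers below (2220 verse lines, 22076 syllables, 702 utterances for *Woe From Wit*) are the ones reported in the feature description; only the rounding of the factor to three decimals is assumed.

```python
# Check of the dialogue-vivacity computation for "Woe From Wit",
# using only the figures quoted in the feature description above.
syllables, verse_lines, utterances = 22076, 2220, 702

avg_syllables_per_line = syllables / verse_lines               # ~9.9441
rescaling_factor = round(avg_syllables_per_line / 12.5, 3)     # 12.5 syllables per hexameter line -> 0.796
rescaled_verse_lines = verse_lines * rescaling_factor          # 2220 * 0.796 = 1767.12
dialogue_vivacity = utterances / rescaled_verse_lines          # ~0.397

print(rescaling_factor, round(rescaled_verse_lines, 2), round(dialogue_vivacity, 3))
```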
Part 2. Feature Correlations
comedies_verse_features[['dialogue_vivacity', 'percentage_scene_split_verse', 'percentage_scene_split_rhymes', 'percentage_open_scenes', 'percentage_scenes_rhymes_split_verse']].corr().round(2) original_comedies[['dialogue_vivacity', 'mobility_coefficient']].corr()
_____no_output_____
MIT
Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb
innawendell/European_Comedy
Dialogue vivacity is moderately positively correlated with the percentage of scenes with split verse lines (0.53) and with the percentage of scenes with split rhymes (0.51), and slightly less correlated with the percentage of open scenes (0.45). However, it is strongly positively correlated with the percentage of scenes with both split rhymes and split verse lines (0.73). Scenes with a very fast-paced dialogue are more likely to be interconnected through both rhyme and shared verse lines. One unexpected discovery is that dialogue vivacity is only weakly correlated with the mobility coefficient (0.06): more active movement of dramatic characters on stage does not necessarily entail that their utterances are going to be shorter.

The percentage of scenes with split verse lines is moderately positively correlated with the percentage of scenes with split rhymes (0.66): scenes that are connected by verse are likely, but not always, also connected through rhyme.

Such features as the percentage of open scenes and the percentage of scenes with split rhymes and verse lines are strongly positively correlated with their constituent features (the correlation of the percentage of open scenes with the percentage of scenes with split verse lines is 0.86, and with the percentage of scenes with split rhymes it is 0.92). From this, we can infer that the bulk of the open scenes are connected through rhymes. The percentage of scenes with split rhymes and verse lines is strongly positively correlated both with the percentage of scenes with split verse lines (0.87) and with the percentage of scenes with split rhymes.

Part 3. Feature Distributions and Normality
make_plot(comedies_verse_features['dialogue_vivacity'], 'Distribution of the Dialogue Vivacity Coefficient') mean, std, median = summary(comedies_verse_features['dialogue_vivacity']) print('Mean dialogue vivacity coefficient', round(mean, 2)) print('Standard deviation of the dialogue vivacity coefficient:', round(std, 2)) print('Median dialogue vivacity coefficient:', median)
Mean dialogue vivacity coefficient 0.46 Standard deviation of the dialogue vivacity coefficient: 0.1 Median dialogue vivacity coefficient: 0.4575
MIT
Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb
innawendell/European_Comedy
Shapiro-Wilk Normality Test
print('The p-value of the Shapiro-Wilk normality test:', shapiro(comedies_verse_features['dialogue_vivacity'])[1])
The p-value of the Shapiro-Wilk normality test: 0.2067030817270279
MIT
Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb
innawendell/European_Comedy
The Shapiro-Wilk test showed that the probability of the coefficient of dialogue vivacity being normally distributed was 0.2067030817270279, which is above the 0.05 significance level. We therefore failed to reject the null hypothesis of normal distribution.
make_plot(comedies_verse_features['percentage_scene_split_verse'], 'Distribution of The Percentage of Scenes with Split Verse Lines') mean, std, median = summary(comedies_verse_features['percentage_scene_split_verse']) print('Mean percentage of scenes with split verse lines:', round(mean, 2)) print('Standard deviation of the percentage of scenes with split verse lines:', round(std, 2)) print('Median percentage of scenes with split verse lines:', median) print('The p-value of the Shapiro-Wilk normality test:', shapiro(comedies_verse_features['percentage_scene_split_verse'])[1])
The p-value of the Shapiro-Wilk normality test: 0.8681985139846802
MIT
Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb
innawendell/European_Comedy
The Shapiro-Wilk test showed that the probability of the percentage of scenes with split verse lines being normally distributed was very high (the p-value was 0.8681985139846802). We failed to reject the null hypothesis of normal distribution.
make_plot(comedies_verse_features['percentage_scene_split_rhymes'], 'Distribution of The Percentage of Scenes with Split Rhymes') mean, std, median = summary(comedies_verse_features['percentage_scene_split_rhymes']) print('Mean percentage of scenes with split rhymes:', round(mean, 2)) print('Standard deviation of the percentage of scenes with split rhymes:', round(std, 2)) print('Median percentage of scenes with split rhymes:', median) print('The p-value of the Shapiro-Wilk normality test:', shapiro(comedies_verse_features['percentage_scene_split_rhymes'])[1])
The p-value of the Shapiro-Wilk normality test: 0.5752763152122498
MIT
Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb
innawendell/European_Comedy
The Shapiro-Wilk test showed that the probability of the percentage of scenes with split rhymes being normally distributed was 0.5752763152122498, much higher than the 0.05 significance level. Therefore, we failed to reject the null hypothesis of normal distribution.
make_plot(comedies_verse_features['percentage_open_scenes'], 'Distribution of The Percentage of Open Scenes') mean, std, median = summary(comedies_verse_features['percentage_open_scenes']) print('Mean percentage of open scenes:', round(mean, 2)) print('Standard deviation of the percentage of open scenes:', round(std, 2)) print('Median percentage of open scenes:', median) print('The p-value of the Shapiro-Wilk normality test:', shapiro(comedies_verse_features['percentage_open_scenes'])[1])
The p-value of the Shapiro-Wilk normality test: 0.3018988370895386
MIT
Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb
innawendell/European_Comedy
The Shapiro-Wilk test showed that the probability of the percentage of open scenes being normally distributed was 0.3018988370895386, well above the significance level of 0.05. Therefore, we failed to reject the null hypothesis that the percentage of open scenes is normally distributed.
make_plot(comedies_verse_features['percentage_scenes_rhymes_split_verse'], 'Distribution of The Percentage of Scenes with Split Verse Lines and Rhymes') mean, std, median = summary(comedies_verse_features['percentage_scenes_rhymes_split_verse']) print('Mean percentage of scenes with split rhymes and verse lines:', round(mean, 2)) print('Standard deviation of the percentage of scenes with split rhymes and verse lines:', round(std, 2)) print('Median percentage of scenes with split rhymes and verse lines:', median) print('The p-value of the Shapiro-Wilk normality test:', shapiro(comedies_verse_features['percentage_scenes_rhymes_split_verse'])[1])
The p-value of the Shapiro-Wilk normality test: 0.015218793414533138
MIT
Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb
innawendell/European_Comedy
The Shapiro-Wilk test showed that the probability of the percentage of scenes with split verse lines and rhymes being normally distributed was very low (the p-value was 0.015218793414533138). Therefore, we rejected the hypothesis of normal distribution.

Summary:
1. The majority of the verse features were normally distributed. For them, we could use a parametric statistical test.
2. The only feature that was not normally distributed was the percentage of scenes with split rhymes and verse lines. For this feature, we used a non-parametric test, the Mann-Whitney u-test.

Part 4. Hypothesis Testing

We will run statistical tests to determine whether the two periods that were distinguishable for the Russian five-act verse tragedy are also significantly different for the Russian five-act comedy. The two periods are:
- Period One (from 1747 to 1794)
- Period Two (from 1795 to 1822)

For all features that were normally distributed, we will use the *scipy.stats* Python library to run a **t-test** to check whether there is a difference between Period One and Period Two. The null hypothesis is that there is no difference between the two periods; the alternative hypothesis is that the two periods are different. Our significance level will be set at 0.05. If the p-value produced by the t-test is below 0.05, we will reject the null hypothesis of no difference.

For the percentage of scenes with split rhymes and verse lines, we will run **the Mann-Whitney u-test** to check whether there is a difference between Period One and Period Two. The null hypothesis is that there is no difference between these periods, whereas the alternative hypothesis is that the periods are different. Since both periods contain fewer than 20 comedies, we cannot use scipy's Mann-Whitney u-test, which requires each sample size to be at least 20 because it uses a normal approximation. Instead, we have to run a Mann-Whitney u-test without a normal approximation, for which we wrote a custom function. The details about the test can be found in the following resource: https://sphweb.bumc.bu.edu/otlt/mph-modules/bs/bs704_nonparametric/bs704_nonparametric4.html.

One limitation that we need to mention is the sample size. The first period has only six comedies and the second period has only ten. However, it is impossible to increase the sample size - we cannot ask the Russian playwrights of the eighteenth and nineteenth centuries to produce more five-act verse comedies. If there are other Russian five-act comedies from these periods, they are either unknown or not available to us.
comedies_verse_features['period'] = comedies_verse_features.creation_date.apply(determine_period)
period_one = comedies_verse_features[comedies_verse_features['period'] == 1].copy()
period_two = comedies_verse_features[comedies_verse_features['period'] == 2].copy()
period_one.shape
period_two.shape
_____no_output_____
MIT
Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb
innawendell/European_Comedy
The T-Test

The Coefficient of Dialogue Vivacity
from scipy.stats import ttest_ind ttest_ind(period_one['dialogue_vivacity'], period_two['dialogue_vivacity'], equal_var=False)
_____no_output_____
MIT
Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb
innawendell/European_Comedy
The Percentage of Scenes With Split Verse Lines
ttest_ind(period_one['percentage_scene_split_verse'], period_two['percentage_scene_split_verse'], equal_var=False)
_____no_output_____
MIT
Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb
innawendell/European_Comedy
The Percentage of Scenes With Split Rhymes
ttest_ind(period_one['percentage_scene_split_rhymes'], period_two['percentage_scene_split_rhymes'], equal_var=False)
_____no_output_____
MIT
Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb
innawendell/European_Comedy
The Percentage of Open Scenes
ttest_ind(period_one['percentage_open_scenes'], period_two['percentage_open_scenes'], equal_var=False)
_____no_output_____
MIT
Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb
innawendell/European_Comedy
Summary

|Feature |p-value |Result|
|---|---|---|
|The coefficient of dialogue vivacity|0.92|Not significant|
|The percentage of scenes with split verse lines|0.009|Significant|
|The percentage of scenes with split rhymes|0.44|Not significant|
|The percentage of open scenes|0.10|Not significant|

The Mann-Whitney Test

The Process:
- Our null hypothesis is that there is no difference between the two periods. Our alternative hypothesis is that the periods are different.
- We will set the significance level (alpha) at 0.05.
- We will run the test and calculate the test statistic.
- We will compare the test statistic with the critical value of U for a two-tailed test at alpha = 0.05. Critical values can be found at https://www.real-statistics.com/statistics-tables/mann-whitney-table/.
- If our test statistic is equal to or lower than the critical value of U, we will reject the null hypothesis. Otherwise, we will fail to reject it.

The Percentage of Scenes With Split Verse Lines and Rhymes
small_sample_mann_whitney_u_test(period_one['percentage_scenes_rhymes_split_verse'], period_two['percentage_scenes_rhymes_split_verse'])
_____no_output_____
MIT
Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb
innawendell/European_Comedy
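The verbal decision rule above can be written out explicitly. The function call and column names are the ones used earlier in this notebook; the critical value 11 is the tabulated two-tailed value at alpha = 0.05 for group sizes 6 and 10.

```python
# Explicit decision rule for the small-sample Mann-Whitney test described above.
u_statistic = small_sample_mann_whitney_u_test(
    period_one['percentage_scenes_rhymes_split_verse'],
    period_two['percentage_scenes_rhymes_split_verse'])
critical_value = 11  # two-tailed, alpha = 0.05, n1 = 6, n2 = 10

if u_statistic <= critical_value:
    print(f'U = {u_statistic} <= {critical_value}: reject the null hypothesis')
else:
    print(f'U = {u_statistic} > {critical_value}: fail to reject the null hypothesis')
```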
Critical Value of U

|Periods |Critical Value of U|
|---|---|
|Period One (n=6) and Period Two (n=10)|11|

Summary

|Feature |u-statistic |Result|
|---|---|---|
|The percentage of scenes with split verse lines and rhymes|21|Not significant|

We discovered that the distribution of only one feature, the percentage of scenes with split verse lines, was different between Periods One and Two. The distributions of the other features did not prove to be significantly different.

Part 5. Visualizations
def scatter(df, feature, title, xlabel, text_y):
    sns.jointplot('creation_date', feature, data=df, color='b', height=7).plot_joint(
        sns.kdeplot, zorder=0, n_levels=20)
    plt.axvline(1795, color='grey', linestyle='dashed', linewidth=2)
    plt.text(1795.5, text_y, '1795')
    plt.title(title, fontsize=20, pad=100)
    plt.xlabel('Date', fontsize=14)
    plt.ylabel(xlabel, fontsize=14)
    plt.show()
_____no_output_____
MIT
Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb
innawendell/European_Comedy
The Coefficient of Dialogue Vivacity
scatter(comedies_verse_features, 'dialogue_vivacity', 'The Coefficient of Dialogue Vivacity by Year', 'The Coefficient of Dialogue Vivacity', 0.85)
/opt/anaconda3/envs/text_extraction/lib/python3.7/site-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation. FutureWarning
MIT
Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb
innawendell/European_Comedy
The Percentage of Scenes With Split Verse Lines
scatter(comedies_verse_features, 'percentage_scene_split_verse', 'The Percentage of Scenes With Split Verse Lines by Year', 'Percentage of Scenes With Split Verse Lines', 80)
/opt/anaconda3/envs/text_extraction/lib/python3.7/site-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation. FutureWarning
MIT
Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb
innawendell/European_Comedy
The Percentage of Scenes With Split Rhymes
scatter(comedies_verse_features, 'percentage_scene_split_rhymes', 'The Percentage of Scenes With Split Rhymes by Year', 'The Percentage of Scenes With Split Rhymes', 80)
/opt/anaconda3/envs/text_extraction/lib/python3.7/site-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation. FutureWarning
MIT
Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb
innawendell/European_Comedy
The Percentage of Open Scenes
scatter(comedies_verse_features, 'percentage_open_scenes', 'The Percentage of Open Scenes by Year', 'The Percentage of Open Scenes', 100)
/opt/anaconda3/envs/text_extraction/lib/python3.7/site-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation. FutureWarning
MIT
Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb
innawendell/European_Comedy
The Percentage of Scenes With Split Verse Lines and Rhymes
scatter(comedies_verse_features, 'percentage_scenes_rhymes_split_verse', ' The Percentage of Scenes With Split Verse Lines and Rhymes by Year', ' The Percentage of Scenes With Split Verse Lines and Rhymes', 45)
/opt/anaconda3/envs/text_extraction/lib/python3.7/site-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation. FutureWarning
MIT
Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb
innawendell/European_Comedy
Part 6. Descriptive Statistics For Two Periods and Overall

The Coefficient of Dialogue Vivacity

In Entire Corpus
comedies_verse_features.describe().loc[:, 'dialogue_vivacity'][['mean', 'std', '50%', 'min', 'max']].round(2)
_____no_output_____
MIT
Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb
innawendell/European_Comedy
By Tentative Periods
comedies_verse_features.groupby('period').describe().loc[:, 'dialogue_vivacity'][['mean', 'std', '50%', 'min', 'max']].round(2)
_____no_output_____
MIT
Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb
innawendell/European_Comedy
The Percentage of Scenes With Split Verse Lines

In Entire Corpus
comedies_verse_features.describe().loc[:, 'percentage_scene_split_verse'][['mean', 'std', '50%', 'min', 'max']].round(2)
_____no_output_____
MIT
Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb
innawendell/European_Comedy
By Periods
comedies_verse_features.groupby('period').describe().loc[:, 'percentage_scene_split_verse'][['mean', 'std', '50%', 'min', 'max']].round(2)
_____no_output_____
MIT
Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb
innawendell/European_Comedy
The Percentage of Scenes With Split Rhymes
comedies_verse_features.describe().loc[:, 'percentage_scene_split_rhymes'][['mean', 'std', '50%', 'min', 'max']].round(2)
_____no_output_____
MIT
Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb
innawendell/European_Comedy
By Tentative Periods
comedies_verse_features.groupby('period').describe().loc[:, 'percentage_scene_split_rhymes'][['mean', 'std', '50%', 'min', 'max']].round(2)
_____no_output_____
MIT
Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb
innawendell/European_Comedy
The Percentage of Open Scenes

In Entire Corpus
comedies_verse_features.describe().loc[:, 'percentage_open_scenes'][['mean', 'std', '50%', 'min', 'max']].round(2)
_____no_output_____
MIT
Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb
innawendell/European_Comedy
By Tentative Periods
comedies_verse_features.groupby('period').describe().loc[:, 'percentage_open_scenes'][['mean', 'std', '50%', 'min', 'max']].round(2)
_____no_output_____
MIT
Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb
innawendell/European_Comedy
The Percentage of Scenes With Split Verse Lines and Rhymes
comedies_verse_features.describe().loc[:, 'percentage_scenes_rhymes_split_verse'][['mean', 'std', '50%', 'min', 'max']].round(2) comedies_verse_features.groupby('period').describe().loc[:, 'percentage_scenes_rhymes_split_verse'][['mean', 'std', '50%', 'min', 'max']].round(2)
_____no_output_____
MIT
Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb
innawendell/European_Comedy
Lalonde Pandas API Example
by Adam Kelleher

We'll run through a quick example using the high-level Python API for the DoSampler. The DoSampler is different from most classic causal effect estimators. Instead of estimating statistics under interventions, it aims to provide the generality of Pearlian causal inference. In that context, the joint distribution of the variables under an intervention is the quantity of interest. It's hard to represent a joint distribution nonparametrically, so instead we provide a sample from that distribution, which we call a "do" sample.

Here, when you specify an outcome, that is the variable you're sampling under an intervention. We still have to go through the usual process of making sure the quantity (the conditional interventional distribution of the outcome) is identifiable. We leverage the familiar components of the rest of the package to do that "under the hood". You'll notice some similarity in the kwargs for the DoSampler.

Getting the Data

First, download the data from the LaLonde example.
import os, sys sys.path.append(os.path.abspath("../../../")) from rpy2.robjects import r as R %load_ext rpy2.ipython #%R install.packages("Matching") %R library(Matching) %R data(lalonde) %R -o lalonde lalonde.to_csv("lalonde.csv",index=False) # the data already loaded in the previous cell. we include the import # here you so you don't have to keep re-downloading it. import pandas as pd lalonde=pd.read_csv("lalonde.csv")
_____no_output_____
MIT
Utils/dowhy/docs/source/example_notebooks/lalonde_pandas_api.ipynb
maliha93/Fairness-Analysis-Code
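As a quick reminder of what "identifiable" means here: with a set of observed common causes $Z$ (the `common_causes` passed further below), the conditional interventional distribution that the do sample approximates is given by the standard backdoor adjustment — this is the general formula, not a statement about dowhy's internal implementation:

$$P(Y \mid do(X = x)) \;=\; \sum_{z} P(Y \mid X = x, Z = z)\, P(Z = z)$$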
The `causal` Namespace

We've created a "namespace" for `pandas.DataFrame`s containing causal inference methods. You can access it here with `lalonde.causal`, where `lalonde` is our `pandas.DataFrame`, and `causal` contains all our new methods! These methods are magically loaded into your existing (and future) dataframes when you `import dowhy.api`.
import dowhy.api
_____no_output_____
MIT
Utils/dowhy/docs/source/example_notebooks/lalonde_pandas_api.ipynb
maliha93/Fairness-Analysis-Code
Now that we have the `causal` namespace, let's give it a try!

The `do` Operation

The key feature here is the `do` method, which produces a new dataframe replacing the treatment variable with the values specified, and the outcome with a sample from the interventional distribution of the outcome. If you don't specify a value for the treatment, it leaves the treatment untouched:
do_df = lalonde.causal.do(x='treat', outcome='re78', common_causes=['nodegr', 'black', 'hisp', 'age', 'educ', 'married'], variable_types={'age': 'c', 'educ':'c', 'black': 'd', 'hisp': 'd', 'married': 'd', 'nodegr': 'd','re78': 'c', 'treat': 'b'}, proceed_when_unidentifiable=True)
WARNING:dowhy.causal_model:Causal Graph not provided. DoWhy will construct a graph based on data inputs. INFO:dowhy.causal_graph:If this is observed data (not from a randomized experiment), there might always be missing confounders. Adding a node named "Unobserved Confounders" to reflect this. INFO:dowhy.causal_model:Model to find the causal effect of treatment ['treat'] on outcome ['re78'] WARNING:dowhy.causal_identifier:If this is observed data (not from a randomized experiment), there might always be missing confounders. Causal effect cannot be identified perfectly. INFO:dowhy.causal_identifier:Continuing by ignoring these unobserved confounders because proceed_when_unidentifiable flag is True. INFO:dowhy.causal_identifier:Instrumental variables for treatment and outcome:[] INFO:dowhy.causal_identifier:Frontdoor variables for treatment and outcome:[] INFO:dowhy.do_sampler:Using WeightingSampler for do sampling. INFO:dowhy.do_sampler:Caution: do samplers assume iid data.
MIT
Utils/dowhy/docs/source/example_notebooks/lalonde_pandas_api.ipynb
maliha93/Fairness-Analysis-Code
Notice you get the usual output and prompts about identifiability. This is all `dowhy` under the hood!

We now have an interventional sample in `do_df`. It looks very similar to the original dataframe. Compare them:
lalonde.head() do_df.head()
_____no_output_____
MIT
Utils/dowhy/docs/source/example_notebooks/lalonde_pandas_api.ipynb
maliha93/Fairness-Analysis-Code
Treatment Effect Estimation

From the raw data, we could get a naive estimate of the treatment effect by doing
(lalonde[lalonde['treat'] == 1].mean() - lalonde[lalonde['treat'] == 0].mean())['re78']
_____no_output_____
MIT
Utils/dowhy/docs/source/example_notebooks/lalonde_pandas_api.ipynb
maliha93/Fairness-Analysis-Code
We can do the same with our new sample from the interventional distribution to get a causal effect estimate
(do_df[do_df['treat'] == 1].mean() - do_df[do_df['treat'] == 0].mean())['re78']
_____no_output_____
MIT
Utils/dowhy/docs/source/example_notebooks/lalonde_pandas_api.ipynb
maliha93/Fairness-Analysis-Code
We could get some rough error bars on the outcome using the normal approximation for a 95% confidence interval, like
import numpy as np 1.96*np.sqrt((do_df[do_df['treat'] == 1].var()/len(do_df[do_df['treat'] == 1])) + (do_df[do_df['treat'] == 0].var()/len(do_df[do_df['treat'] == 0])))['re78']
_____no_output_____
MIT
Utils/dowhy/docs/source/example_notebooks/lalonde_pandas_api.ipynb
maliha93/Fairness-Analysis-Code
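For reference, the interval computed in the cell above is the usual two-sample normal approximation, with $s_1^2, s_0^2$ and $n_1, n_0$ the sample variances and sizes of `re78` in the treated and untreated groups of the do sample:

$$\left(\bar{y}_1 - \bar{y}_0\right) \;\pm\; 1.96\,\sqrt{\frac{s_1^2}{n_1} + \frac{s_0^2}{n_0}}$$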
but note that these DO NOT contain propensity score estimation error. For that, a bootstrapping procedure might be more appropriate (a sketch follows after the plots below). This is just one statistic we can compute from the interventional distribution of `'re78'`. We can get all of the interventional moments as well, including functions of `'re78'`. We can leverage the full power of pandas, like
do_df['re78'].describe() lalonde['re78'].describe()
_____no_output_____
MIT
Utils/dowhy/docs/source/example_notebooks/lalonde_pandas_api.ipynb
maliha93/Fairness-Analysis-Code
and even plot aggregations, like
%matplotlib inline import seaborn as sns sns.barplot(data=lalonde, x='treat', y='re78') sns.barplot(data=do_df, x='treat', y='re78')
_____no_output_____
MIT
Utils/dowhy/docs/source/example_notebooks/lalonde_pandas_api.ipynb
maliha93/Fairness-Analysis-Code
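Coming back to the caveat a few cells up: the normal-approximation interval does not reflect the error from estimating the propensity scores themselves. Below is a hedged sketch of the bootstrap alternative mentioned there — resample the original rows with replacement, redo the do-sampling on each replicate, and take percentiles of the resulting effect estimates. The kwargs are the ones used earlier in this notebook; the number of replicates (50) is an arbitrary choice to keep the sketch cheap.

```python
# Hedged bootstrap sketch: the spread of effect estimates across resampled
# datasets also captures propensity-score estimation error, unlike the
# normal-approximation interval above. 50 replicates is arbitrary.
import numpy as np

causes = ['nodegr', 'black', 'hisp', 'age', 'educ', 'married']
types = {'age': 'c', 'educ': 'c', 'black': 'd', 'hisp': 'd',
         'married': 'd', 'nodegr': 'd', 're78': 'c', 'treat': 'b'}

estimates = []
for seed in range(50):
    boot = lalonde.sample(n=len(lalonde), replace=True, random_state=seed)
    boot_do = boot.causal.do(x='treat', outcome='re78',
                             common_causes=causes, variable_types=types,
                             proceed_when_unidentifiable=True)
    effect = (boot_do[boot_do['treat'] == 1]['re78'].mean()
              - boot_do[boot_do['treat'] == 0]['re78'].mean())
    estimates.append(effect)

print(np.percentile(estimates, [2.5, 97.5]))  # rough percentile interval
```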
Specifying Interventions

You can find the distribution of the outcome under an intervention that sets the value of the treatment.
do_df = lalonde.causal.do(x={'treat': 1}, outcome='re78', common_causes=['nodegr', 'black', 'hisp', 'age', 'educ', 'married'], variable_types={'age': 'c', 'educ':'c', 'black': 'd', 'hisp': 'd', 'married': 'd', 'nodegr': 'd','re78': 'c', 'treat': 'b'}, proceed_when_unidentifiable=True) do_df.head()
_____no_output_____
MIT
Utils/dowhy/docs/source/example_notebooks/lalonde_pandas_api.ipynb
maliha93/Fairness-Analysis-Code
This new dataframe gives the distribution of `'re78'` when `'treat'` is set to `1`. For much more detail on how the `do` method works, check the docstring:
help(lalonde.causal.do)
Help on method do in module dowhy.api.causal_data_frame: do(x, method='weighting', num_cores=1, variable_types={}, outcome=None, params=None, dot_graph=None, common_causes=None, estimand_type='nonparametric-ate', proceed_when_unidentifiable=False, stateful=False) method of dowhy.api.causal_data_frame.CausalAccessor instance The do-operation implemented with sampling. This will return a pandas.DataFrame with the outcome variable(s) replaced with samples from P(Y|do(X=x)). If the value of `x` is left unspecified (e.g. as a string or list), then the original values of `x` are left in the DataFrame, and Y is sampled from its respective P(Y|do(x)). If the value of `x` is specified (passed with a `dict`, where variable names are keys, and values are specified) then the new `DataFrame` will contain the specified values of `x`. For some methods, the `variable_types` field must be specified. It should be a `dict`, where the keys are variable names, and values are 'o' for ordered discrete, 'u' for un-ordered discrete, 'd' for discrete, or 'c' for continuous. Inference requires a set of control variables. These can be provided explicitly using `common_causes`, which contains a list of variable names to control for. These can be provided implicitly by specifying a causal graph with `dot_graph`, from which they will be chosen using the default identification method. When the set of control variables can't be identified with the provided assumptions, a prompt will raise to the user asking whether to proceed. To automatically over-ride the prompt, you can set the flag `proceed_when_unidentifiable` to `True`. Some methods build components during inference which are expensive. To retain those components for later inference (e.g. successive calls to `do` with different values of `x`), you can set the `stateful` flag to `True`. Be cautious about using the `do` operation statefully. State is set on the namespace, rather than the method, so can behave unpredictably. To reset the namespace and run statelessly again, you can call the `reset` method. :param x: str, list, dict: The causal state on which to intervene, and (optional) its interventional value(s). :param method: The inference method to use with the sampler. Currently, `'mcmc'`, `'weighting'`, and `'kernel_density'` are supported. The `mcmc` sampler requires `pymc3>=3.7`. :param num_cores: int: if the inference method only supports sampling a point at a time, this will parallelize sampling. :param variable_types: dict: The dictionary containing the variable types. Must contain the union of the causal state, control variables, and the outcome. :param outcome: str: The outcome variable. :param params: dict: extra parameters to set as attributes on the sampler object :param dot_graph: str: A string specifying the causal graph. :param common_causes: list: A list of strings containing the variable names to control for. :param estimand_type: str: 'nonparametric-ate' is the only one currently supported. Others may be added later, to allow for specific, parametric estimands. :param proceed_when_unidentifiable: bool: A flag to over-ride user prompts to proceed when effects aren't identifiable with the assumptions provided. :param stateful: bool: Whether to retain state. By default, the do operation is stateless. :return: pandas.DataFrame: A DataFrame containing the sampled outcome
MIT
Utils/dowhy/docs/source/example_notebooks/lalonde_pandas_api.ipynb
maliha93/Fairness-Analysis-Code
Welcome to the Datenguide Python Package

Within this notebook the functionality of the package will be explained and demonstrated with examples.

Topics
- Import
- Get region IDs
- Get statistic IDs
- Get the data
  - for single regions
  - for multiple regions

1. Import

**Import the helper functions 'get_all_regions' and 'get_statistics'.**

**Import the module Query for the main functionality.**
# ONLY FOR TESTING LOCAL PACKAGE # %cd .. from datenguidepy.query_helper import get_all_regions, get_statistics from datenguidepy import Query
C:\Users\Alexandra\Documents\GitHub\datenguide-python
MIT
use_case/01_intro_tutorial.ipynb
elekt/datenguide-python
**Import pandas and matplotlib for the usual display of data as tables and graphs**
import pandas as pd import matplotlib %matplotlib inline pd.set_option('display.max_colwidth', 150)
_____no_output_____
MIT
use_case/01_intro_tutorial.ipynb
elekt/datenguide-python
2. Get Region IDs

How to get the ID of the region I want to query

Regionalstatistik - the database behind Datenguide - has data at different levels of granularity for Germany.

nuts:
1 – Bundesländer
2 – Regierungsbezirke / statistische Regionen
3 – Kreise / kreisfreie Städte

lau:
1 – Verwaltungsgemeinschaften
2 – Gemeinden

The function `get_all_regions()` returns the IDs of all levels.
# get_all_regions returns all ids get_all_regions()
_____no_output_____
MIT
use_case/01_intro_tutorial.ipynb
elekt/datenguide-python
To get a specific ID, use the common pandas function `query()`
# e.g. get all "Bundesländer get_all_regions().query("level == 'nuts1'") # e.g. get the ID of Havelland get_all_regions().query("name =='Havelland'")
_____no_output_____
MIT
use_case/01_intro_tutorial.ipynb
elekt/datenguide-python
3. Get statistic IDs

How to find statistics
# get all statistics get_statistics()
_____no_output_____
MIT
use_case/01_intro_tutorial.ipynb
elekt/datenguide-python
If you already know the statistic ID you are looking for - perfect. Otherwise, you can use the pandas `query()` function to search, e.g., for specific terms.
# find out the name of the desired statistic about birth get_statistics().query('long_description.str.contains("Statistik der Geburten")', engine='python')
_____no_output_____
MIT
use_case/01_intro_tutorial.ipynb
elekt/datenguide-python
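Once a region ID and a statistic ID are known, they are typically combined with the `Query` class imported at the top ("the main functionality"). The sketch below follows the package's usual `Query.region` / `add_field` / `results` pattern as far as I recall it, so treat the method names, the assumption that the region ID is the index of `get_all_regions()`, and the example statistic `BEV001` as hedged illustrations rather than guaranteed API.

```python
# Hedged sketch: build a query for a single region with one statistic.
# Assumes the region ID is the index of get_all_regions() and that
# Query.region / add_field / results behave as in the package docs.
havelland_id = get_all_regions().query("name == 'Havelland'").index[0]

query = Query.region(havelland_id)   # query for a single region
query.add_field('BEV001')            # e.g. live births ("Statistik der Geburten")
results = query.results()            # returns a pandas DataFrame
results.head()
```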