Now let's add some random noise to create a noisy dataset, and re-plot it:
```python
np.random.seed(42)
noisy = np.random.normal(digits.data, 4)
plot_digits(noisy)
```

Source: `present/mcc2/PythonDataScienceHandbook/05.09-Principal-Component-Analysis.ipynb` (csaladenes/csaladenes.github.io, MIT license)
It's clear by eye that the images are noisy, and contain spurious pixels. Let's train a PCA on the noisy data, requesting that the projection preserve 50% of the variance:
```python
pca = PCA(0.50).fit(noisy)
pca.n_components_
```
Here 50% of the variance amounts to 12 principal components. Now we compute these components, and then use the inverse of the transform to reconstruct the filtered digits:
```python
components = pca.transform(noisy)
filtered = pca.inverse_transform(components)
plot_digits(filtered)
```
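Under the hood, the `transform`/`inverse_transform` round trip is just a projection onto (and back from) the top principal axes. Below is a minimal numpy-only sketch of that round trip on hypothetical toy data, with a hand-picked component count rather than a variance target:

```python
import numpy as np

rng = np.random.RandomState(42)
X = rng.normal(size=(100, 8))      # toy data standing in for the digits
mean = X.mean(axis=0)
X_centered = X - mean

# Principal axes come from the SVD of the centered data
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

k = 3                                        # keep the top 3 components
projected = X_centered @ Vt[:k].T            # like pca.transform(X)
reconstructed = projected @ Vt[:k] + mean    # like pca.inverse_transform(...)

# The reconstruction lives in a k-dimensional affine subspace:
# re-projecting it recovers exactly the same coordinates.
print(np.allclose((reconstructed - mean) @ Vt[:k].T, projected))
```

The noise-filtering effect comes from the discarded directions, which carry the least variance.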
This signal preserving/noise filtering property makes PCA a very useful feature selection routine—for example, rather than training a classifier on very high-dimensional data, you might instead train the classifier on the lower-dimensional representation, which will automatically serve to filter out random noise in the...
```python
from sklearn.datasets import fetch_lfw_people

faces = fetch_lfw_people(min_faces_per_person=60)
print(faces.target_names)
print(faces.images.shape)
```
Let's take a look at the principal axes that span this dataset. Because this is a large dataset, we will use RandomizedPCA—it contains a randomized method to approximate the first $N$ principal components much more quickly than the standard PCA estimator, and thus is very useful for high-dimensional data (here, a dimen...
```python
# from sklearn.decomposition import RandomizedPCA
from sklearn.decomposition import PCA as RandomizedPCA

pca = RandomizedPCA(150)
pca.fit(faces.data)
```
In this case, it can be interesting to visualize the images associated with the first several principal components (these components are technically known as "eigenvectors," so these types of images are often called "eigenfaces"). As you can see in this figure, they are as creepy as they sound:
```python
fig, axes = plt.subplots(3, 8, figsize=(9, 4),
                         subplot_kw={'xticks': [], 'yticks': []},
                         gridspec_kw=dict(hspace=0.1, wspace=0.1))
for i, ax in enumerate(axes.flat):
    ax.imshow(pca.components_[i].reshape(62, 47), cmap='bone')
```
The results are very interesting, and give us insight into how the images vary: for example, the first few eigenfaces (from the top left) seem to be associated with the angle of lighting on the face, and later principal vectors seem to be picking out certain features, such as eyes, noses, and lips. Let's take a look at...
```python
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');
```
We see that these 150 components account for just over 90% of the variance. That would lead us to believe that using these 150 components, we would recover most of the essential characteristics of the data. To make this more concrete, we can compare the input images with the images reconstructed from these 150 componen...
```python
# Compute the components and projected faces
pca = RandomizedPCA(150).fit(faces.data)
components = pca.transform(faces.data)
projected = pca.inverse_transform(components)

# Plot the results
fig, ax = plt.subplots(2, 10, figsize=(10, 2.5),
                       subplot_kw={'xticks': [], 'yticks': []}, ...
```
You will use this helper function to write lists containing article ids, categories, and authors for each article in our database to a local file.
```python
def write_list_to_disk(my_list, filename):
    with open(filename, 'w') as f:
        for item in my_list:
            line = "%s\n" % item
            f.write(line)
```

Source: `courses/machine_learning/deepdive2/recommendation_systems/solutions/content_based_preproc.ipynb` (GoogleCloudPlatform/training-data-analyst, Apache-2.0 license)
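As a quick sanity check, the helper can be exercised with a hypothetical file name and made-up ids (these values are illustrative, not from the BigQuery dataset):

```python
def write_list_to_disk(my_list, filename):
    with open(filename, 'w') as f:
        for item in my_list:
            line = "%s\n" % item
            f.write(line)

# Write three hypothetical content ids, then read them back.
write_list_to_disk(['299912085', '299931241', '299972194'], 'content_ids.txt')
with open('content_ids.txt') as f:
    ids = f.read().splitlines()
print(ids)
```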
Pull data from BigQuery

The cell below creates a local text file containing all the article ids (i.e. 'content ids') in the dataset. Have a look at the original dataset in BigQuery. Then read through the query below and make sure you understand what it is doing.
```python
sql = """
#standardSQL
SELECT
  (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) AS content_id
FROM `cloud-training-demos.GA360_test.ga_sessions_sample`,
  UNNEST(hits) AS hits
WHERE
  # only include hits on pages
  hits.type = "PAGE"
  AND (SELECT MAX(IF(index=10, value, NULL)) FROM UNN...
```
There should be 15,634 articles in the database. Next, you'll create a local file which contains a list of article categories and a list of article authors. Note the change in the index when pulling the article category or author information. Also, you'll use the first author of the article to create the author list...
```python
sql = """
#standardSQL
SELECT
  (SELECT MAX(IF(index=7, value, NULL)) FROM UNNEST(hits.customDimensions)) AS category
FROM `cloud-training-demos.GA360_test.ga_sessions_sample`,
  UNNEST(hits) AS hits
WHERE
  # only include hits on pages
  hits.type = "PAGE"
  AND (SELECT MAX(IF(index=7, value, NULL)) FROM UNNEST(...
```
The categories are 'News', 'Stars & Kultur', and 'Lifestyle'. When creating the author list, you'll only use the first author information for each article.
```python
sql = """
#standardSQL
SELECT
  REGEXP_EXTRACT((SELECT MAX(IF(index=2, value, NULL)) FROM UNNEST(hits.customDimensions)), r"^[^,]+") AS first_author
FROM `cloud-training-demos.GA360_test.ga_sessions_sample`,
  UNNEST(hits) AS hits
WHERE
  # only include hits on pages
  hits.type = "PAGE"
  AND (SELECT MAX(IF(index...
```
There should be 385 authors in the database.

Create train and test sets

In this section, you will create the train/test split of our data for training our model. You use the concatenated values for visitor id and content id to create a farm fingerprint, taking approximately 90% of the data for the training set and 10%...
```python
sql = """
WITH site_history as (
  SELECT
    fullVisitorId as visitor_id,
    (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) AS content_id,
    (SELECT MAX(IF(index=7, value, NULL)) FROM UNNEST(hits.customDimensions)) AS category,
    (SELECT MAX(IF(index=6, value, NULL)) FROM UNNEST...
```
Let's have a look at the two csv files you just created containing the training and test set. You'll also do a line count of both files to confirm that you have achieved an approximate 90/10 train/test split. In the next notebook, Content Based Filtering you will build a model to recommend an article given information ...
```bash
%%bash
wc -l *_set.csv
head *_set.csv
```
OrderedDict is useful when you want to build a mapping that you will later serialize or encode into another format. For example, if you want to precisely control the order of fields appearing in a JSON encoding, you can first build the data with an OrderedDict:
```python
import json
from collections import OrderedDict

# Build the mapping with an OrderedDict first (illustrative keys and values)
d = OrderedDict()
d['foo'] = 1
d['bar'] = 2
d['spam'] = 3

json.dumps(d)
```

Source: `01 data structures and algorithms/01.07 keep dict in order.ipynb` (wuafeing/Python3-Tutorial, GPL-3.0 license)
Data preprocessing
```python
# Define sliding window
def window_time_series(series, n, step=1):
    # print "in window_time_series", series
    if step < 1.0:
        step = max(int(step * n), 1)
    return [series[i:i + n] for i in range(0, len(series) - n + 1, step)]

# PAA function
def paa(series, now, opw):
    if now == None:
        now = ...
```

Source: `phd-thesis/Benchmarking 2 - Identificação de Cargas através de Representação Visual de Séries Temporais-Copy1.ipynb` (diegocavalca/Studies, CC0-1.0 license)
General parameters of the data used in modeling (train and test)
```python
#################################
### Define the parameters here ###
#################################
datafiles = ['dish washer1-1']  # Data file name (TODO: change here)
trains = [250]  # Number of training instances (we assume training and test data are mixed in one file)
size = [32]  # PAA size
GAF_type = ...
```
Generating data

To normalize the benchmarks, the series data from benchmark 1 will be used for the Feature Extraction process (serie2image conversion, benchmark 2).

Feature Extraction
```python
def serie2image(serie, GAF_type='GADF', scaling=False, s=32):
    """Customized function to perform Series to Image conversion.

    Args:
        serie    : original input data (time-serie chunk of appliance/main data - REDD - benchmarking 1)
        GAF_type : GADF / GASF (Benchmarking 2 process)
        ...
```
Training set
```python
print("Processing train dataset (Series to Images)...")

# Train...
train_power_chunks = np.load(
    os.path.join(BENCHMARKING1_RESOURCES_PATH, 'datasets/train_power_chunks.npy')
)
train_labels_binary = np.load(
    os.path.join(BENCHMARKING1_RESOURCES_PATH, 'datasets/train_labels_binary.npy')
)
data_paa_train = []
data_with...
```
Test set
```python
print("Processing test dataset (Series to Images)...")

# Test...
test_power_chunks = np.load(
    os.path.join(BENCHMARKING1_RESOURCES_PATH, 'datasets/test_power_chunks.npy')
)
test_labels_binary = np.load(
    os.path.join(BENCHMARKING1_RESOURCES_PATH, 'datasets/test_labels_binary.npy')
)
data_paa_test = []
data_without_paa...
```
Modeling
```python
def metrics(test, predicted):
    ## CLASSIFICATION METRICS
    acc = accuracy_score(test, predicted)
    prec = precision_score(test, predicted)
    rec = recall_score(test, predicted)
    f1 = f1_score(test, predicted)
    f1m = f1_score(test, predicted, average='macro')
    # print('f1:', f1)
    # print('a...
```
Benchmarking (replicating the study)
```python
# Building dnn model (feature extraction)
vgg16_model = VGG16(
    include_top=False,
    weights='imagenet',
    input_tensor=None,
    input_shape=(100, 100, 3),
    pooling='avg',
    classes=1000
)
```
Embedding the training images
```python
# GADF Images with PAA (Train)
images = sorted(glob(
    os.path.join(BENCHMARKING_RESOURCES_PATH, "GeneratedImages", "*_PAA_GADF_train_*.png")
))
X_train, y_train = embedding_images(images, vgg16_model)

# Data persistence
np.save(
    os.path.join(BENCHMARKING_RESOURCES_PATH, 'datasets/X_...
```
Embedding the test images
```python
# GADF Images with PAA (Test)
images = sorted(glob(
    os.path.join(BENCHMARKING_RESOURCES_PATH, "GeneratedImages", "*_PAA_GADF_test_*.png")
))
X_test, y_test = embedding_images(images, vgg16_model)

# Data persistence
np.save(
    os.path.join(BENCHMARKING_RESOURCES_PATH, 'datasets/X_tes...
```
Training a supervised classifier
```python
# Training supervised classifier
clf = DecisionTreeClassifier(max_depth=15)

# Train classifier
clf.fit(X_train, y_train)

# Save classifier for future use
# joblib.dump(clf, 'Tree' + '-' + device + '-redd-all.joblib')
```
Evaluating the classifier
```python
# Predict test data
y_pred = clf.predict(X_test)

# Print metrics
final_performance = []
y_test = np.array(y_test)
y_pred = np.array(y_pred)
print("")
print("RESULT ANALYSIS\n\n")
print("ON/OFF State Charts")
print("-" * 115)
for i in range(y_test.shape[1]):
    fig = plt.figure(figsize=(15, 2))
    plt.title("A...
```
TF.Text Metrics
```python
!pip install -q "tensorflow-text==2.8.*"

import tensorflow as tf
import tensorflow_text as text
```

Source: `docs/tutorials/text_similarity.ipynb` (tensorflow/text, Apache-2.0 license)
ROUGE-L

The ROUGE-L metric is a score from 0 to 1 indicating how similar two sequences are, based on the length of the longest common subsequence (LCS). In particular, ROUGE-L is the weighted harmonic mean (or f-measure) combining the LCS precision (the percentage of the hypothesis sequence covered by the LCS) and the ...
```python
hypotheses = tf.ragged.constant([['captain', 'of', 'the', 'delta', 'flight'],
                                 ['the', '1990', 'transcript']])
references = tf.ragged.constant([['delta', 'air', 'lines', 'flight'],
                                 ['this', 'concludes', 'the', 'transcript']])
```
The hypotheses and references are expected to be tf.RaggedTensors of tokens. Tokens are required instead of raw sentences because no single tokenization strategy fits all tasks. Now we can call text.metrics.rouge_l and get our result back:
```python
result = text.metrics.rouge_l(hypotheses, references)
print('F-Measure: %s' % result.f_measure)
print('P-Measure: %s' % result.p_measure)
print('R-Measure: %s' % result.r_measure)
```
ROUGE-L has an additional hyperparameter, alpha, which determines the weight of the harmonic mean used for computing the F-Measure. Values closer to 0 treat Recall as more important and values closer to 1 treat Precision as more important. alpha defaults to .5, which corresponds to equal weight for Precision and Recall...
```python
# Compute ROUGE-L with alpha=0
result = text.metrics.rouge_l(hypotheses, references, alpha=0)
print('F-Measure (alpha=0): %s' % result.f_measure)
print('P-Measure (alpha=0): %s' % result.p_measure)
print('R-Measure (alpha=0): %s' % result.r_measure)

# Compute ROUGE-L with alpha=1
result = text.metrics.rouge_l(hypothes...
```
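To make the alpha weighting concrete, here is a plain-Python sketch of the ROUGE-L quantities for a single sentence pair. The LCS and precision/recall definitions follow the description above; the F-measure form used here, F = PR / ((1-alpha)P + alpha*R), is the one consistent with "alpha closer to 0 treats Recall as more important", and may differ in detail from the library's internals:

```python
def lcs_length(a, b):
    # Classic dynamic-programming longest-common-subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

def rouge_l(hypothesis, reference, alpha=0.5):
    lcs = lcs_length(hypothesis, reference)
    p = lcs / len(hypothesis)   # precision: share of the hypothesis covered by the LCS
    r = lcs / len(reference)    # recall: share of the reference covered by the LCS
    f = p * r / ((1 - alpha) * p + alpha * r)
    return p, r, f

p, r, f = rouge_l(['captain', 'of', 'the', 'delta', 'flight'],
                  ['delta', 'air', 'lines', 'flight'])
print(p, r, round(f, 4))   # LCS is ['delta', 'flight'], so p=0.4, r=0.5
```

With alpha=0 the formula reduces to F = R, and with alpha=1 to F = P, matching the behavior described above.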
Load in house sales data

The dataset is from house sales in King County, the region where the city of Seattle, WA is located.
```python
sales = graphlab.SFrame('kc_house_data.gl/')
```

Source: `Regression/examples/week-2-multiple-regression-assignment-1-blank.ipynb` (Weenkus/Machine-Learning-University-of-Washington, MIT license)
Split data into training and testing. We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
```python
train_data, test_data = sales.random_split(.8, seed=0)
```
Learning a multiple regression model

Recall we can use the following code to learn a multiple regression model predicting 'price' based on the features example_features = ['sqft_living', 'bedrooms', 'bathrooms'] on training data: (Aside: We set validation_set = None to ensure that the...
```python
example_features = ['sqft_living', 'bedrooms', 'bathrooms']
example_model = graphlab.linear_regression.create(train_data, target='price',
                                                  features=example_features,
                                                  validation_set=None)
```
Now that we have fitted the model we can extract the regression weights (coefficients) as an SFrame as follows:
```python
example_weight_summary = example_model.get("coefficients")
print(example_weight_summary)
```
Making Predictions

In the gradient descent notebook we used numpy to do our regression. In this notebook we will use existing GraphLab Create functions to analyze multiple regressions. Recall that once a model is built, we can use the .predict() function to find the predicted values for data we pass. For example, using the e...
```python
example_predictions = example_model.predict(train_data)
print(example_predictions[0])  # should be 271789.505878
```
Compute RSS

Now that we can make predictions given the model, let's write a function to compute the RSS of the model. Complete the function below to calculate RSS given the model, data, and the outcome.
```python
def get_residual_sum_of_squares(model, data, outcome):
    # First get the predictions

    # Then compute the residuals/errors

    # Then square and add them up

    return(RSS)
```
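The arithmetic the function must perform can be sketched with toy numbers in place of a fitted GraphLab model (the prices below are hypothetical):

```python
# Hypothetical predictions and observed outcomes
predictions = [250000.0, 310000.0, 180000.0]
outcome = [245000.0, 300000.0, 200000.0]

# residual = observed - predicted; RSS = sum of squared residuals
residuals = [o - p for p, o in zip(predictions, outcome)]
RSS = sum(e ** 2 for e in residuals)
print(RSS)   # 525000000.0
```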
Test your function by computing the RSS on TEST data for the example model:
```python
rss_example_train = get_residual_sum_of_squares(example_model, test_data, test_data['price'])
print(rss_example_train)  # should be 2.7376153833e+14
Create some new features

Although we often think of multiple regression as including multiple different features (e.g. # of bedrooms, squarefeet, and # of bathrooms), we can also consider transformations of existing features, e.g. the log of the squarefeet or even "interaction" features such as the product of bedroom...
```python
from math import log
```
Next, create the following 4 new features as columns in both TEST and TRAIN data:
* bedrooms_squared = bedrooms*bedrooms
* bed_bath_rooms = bedrooms*bathrooms
* log_sqft_living = log(sqft_living)
* lat_plus_long = lat + long

As an example, here's the first one:
```python
train_data['bedrooms_squared'] = train_data['bedrooms'].apply(lambda x: x**2)
test_data['bedrooms_squared'] = test_data['bedrooms'].apply(lambda x: x**2)

# create the remaining 3 features in both TEST and TRAIN data
```
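The three remaining features are simple per-row arithmetic; here is a toy sketch on one hypothetical row (in practice you apply the same expressions to the SFrame columns with .apply()):

```python
from math import log

# One hypothetical row of the sales data (values are illustrative)
row = {'bedrooms': 3, 'bathrooms': 2.0, 'sqft_living': 1180, 'lat': 47.51, 'long': -122.26}

bed_bath_rooms = row['bedrooms'] * row['bathrooms']   # interaction feature
log_sqft_living = log(row['sqft_living'])             # log transform
lat_plus_long = row['lat'] + row['long']              # sum of coordinates
print(bed_bath_rooms, lat_plus_long)
```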
Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this feature will mostly affect houses with many bedrooms. bedrooms times bathrooms gives what's called an "interaction" feature. It is large when both of them are lar...
```python
model_1_features = ['sqft_living', 'bedrooms', 'bathrooms', 'lat', 'long']
model_2_features = model_1_features + ['bed_bath_rooms']
model_3_features = model_2_features + ['bedrooms_squared', 'log_sqft_living', 'lat_plus_long']
```
Now that you have the features, learn the weights for the three different models for predicting target = 'price' using graphlab.linear_regression.create() and look at the value of the weights/coefficients:
```python
# Learn the three models: (don't forget to set validation_set = None)

# Examine/extract each model's coefficients:
```
Quiz Question: What is the sign (positive or negative) for the coefficient/weight for 'bathrooms' in model 1?

Quiz Question: What is the sign (positive or negative) for the coefficient/weight for 'bathrooms' in model 2? Think about what this means.

Comparing multiple models

Now that you've learned three models and extr...
```python
# Compute the RSS on TRAINING data for each of the three models and record the values:
```
Quiz Question: Which model (1, 2 or 3) has lowest RSS on TRAINING Data? Is this what you expected? Now compute the RSS on TEST data for each of the three models.
```python
# Compute the RSS on TESTING data for each of the three models and record the values:
```
4. Provide one or two visualizations that show the distribution of the sample data. Write one or two sentences noting what you observe about the plot or plots.
```python
plt.hist(
    x=[initialData[CONGRUENT], initialData[INCONGRUENT]],
    normed=False,
    range=(min(initialData[CONGRUENT]), max(initialData[INCONGRUENT])),
    bins=10,
    label='Time to name'
)
plt.hist(
    x=initialData[CONGRUENT],
    normed=False,
    range=(min(initialData[CONGRUENT]), max(initialData[CONG...
```

Source: `P1/P1_Cassio.ipynb` (cassiogreco/udacity-data-analyst-nanodegree, MIT license)
From analyzing the histograms of both the Congruent and Incongruent datasets we can visually see that the Incongruent dataset contains a greater number of higher time-to-name values than the Congruent dataset. This is also evident from the mean values of both datasets, previously calculated (14.0511...
```python
degreesOfFreedom = len(initialData[CONGRUENT]) - 1

def standardError(standardDeviation, sampleSize):
    return standardDeviation / math.sqrt(sampleSize)

def getTValue(mean, se):
    return mean / se

se = standardError(standardDeviation(variance(valuesToPower(valuesMinusMean(dataDifference), 2))), len(dataDifference...
```
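The standard-error and t-statistic computation above can be sketched end-to-end on toy paired differences (the values below are hypothetical, not the actual Stroop sample):

```python
import math

# Hypothetical paired differences (Incongruent minus Congruent times)
dataDifference = [8.0, 6.5, 7.5, 9.0, 7.0]
n = len(dataDifference)
degreesOfFreedom = n - 1

meanDifference = sum(dataDifference) / n
# Sample variance with Bessel's correction (n - 1), as in a paired t-test
variance = sum((d - meanDifference) ** 2 for d in dataDifference) / degreesOfFreedom
standardDeviation = math.sqrt(variance)

se = standardDeviation / math.sqrt(n)   # standard error of the mean difference
t = meanDifference / se                 # t statistic with n - 1 degrees of freedom
print(round(t, 2))   # 17.67
```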
Introduction to Tethne: Working with data from the Web of Science Now that we have the basics down, in this notebook we'll begin working with data from the JSTOR Data-for-Research (DfR) portal. The JSTOR DfR portal gives researchers access to bibliographic data and N-grams for the entire JSTOR database. Tethne can use ...
```python
from tethne.readers import dfr
```

Source: `2. Working with data from JSTOR Data-for-Research.ipynb` (diging/tethne-notebooks, GPL-3.0 license)
Once again, read() accepts a string containing a path to either a single DfR dataset, or a directory containing several. Here, "DfR dataset" refers to the folder containing the file "citations.xml", and the contents of that folder. This will take considerably more time than loading a WoS dataset. The reason is that Tet...
```python
dfr_corpus = dfr.read('/Users/erickpeirson/Dropbox/HSS ThatCamp Workshop/sample_data/DfR')
```
Combining DfR and WoS data

We can combine our datasets using the merge() function. First, we load our WoS data in a separate Corpus:
```python
from tethne.readers import wos

wos_corpus = wos.read('/Users/erickpeirson/Dropbox/HSS ThatCamp Workshop/sample_data/wos')
```
Both of these datasets are for the Journal of the History of Biology. But note that the WoS and DfR corpora have different numbers of Papers:
```python
len(dfr_corpus), len(wos_corpus)
```
Then import merge() from tethne.readers:
```python
from tethne.readers import merge
```
We then create a new Corpus by passing both Corpus objects to merge(). If there is conflicting information in the two corpora, the first Corpus gets priority.
```python
corpus = merge(dfr_corpus, wos_corpus)
```
merge() has combined data where possible, and discarded any duplicates in the original datasets.
```python
len(corpus)
```
FeatureSets

Our wordcount data are represented by a FeatureSet. A FeatureSet is a description of how certain sets of elements are distributed across a Corpus. This is kind of like an inversion of an index. For example, we might be interested in which words (elements) are found in which Papers. We can think of authors a...
```python
corpus.features
```
Note that citations and authors are also FeatureSets. In fact, the majority of network-building functions in Tethne operate on FeatureSets -- including the coauthors() and bibliographic_coupling() functions that we used in the WoS notebook. Each FeatureSet has several attributes. The features attribute contains the dis...
```python
# Just show data for the first Paper (wrap in list() so this also works on Python 3).
list(corpus.features['wordcounts'].features.items())[0]
```
The index contains our "vocabulary":
```python
print('There are %i words in the wordcounts featureset' % len(corpus.features['wordcounts'].index))
```
We can use the feature_distribution() method of our Corpus to look at the distribution of words over time. In the example below we use matplotlib to visualize the distribution.
```python
plt.figure(figsize=(10, 5))
plt.bar(*corpus.feature_distribution('wordcounts', 'evolutionary'))  # <-- The action.
plt.ylabel('Frequency of the word ``evolutionary`` in this Corpus')
plt.xlabel('Publication Date')
plt.show()
```
If we add the argument mode='documentCounts', we get the number of documents in which 'evolutionary' occurs.
```python
plt.figure(figsize=(10, 5))
plt.bar(*corpus.feature_distribution('wordcounts', 'evolutionary', mode='documentCounts'))  # <-- The action.
plt.ylabel('Documents containing ``evolutionary``')
plt.xlabel('Publication Date')
plt.show()
```
Note that we can look at how documents themselves are distributed using the distribution() method.
```python
plt.figure(figsize=(10, 5))
plt.bar(*corpus.distribution())  # <-- The action.
plt.ylabel('Number of Documents')
plt.xlabel('Publication Date')
plt.show()
```
So, putting these together, we can normalize our feature_distribution() data to get a sense of the relative use of the word 'evolutionary'.
```python
dates, N_evolution = corpus.feature_distribution('wordcounts', 'evolutionary', mode='documentCounts')
dates, N = corpus.distribution()
# Use float() so this is not integer division under Python 2.
normalized_frequency = [float(f) / N[i] for i, f in enumerate(N_evolution)]

plt.figure(figsize=(10, 5))
plt.bar(dates, normalized_frequency)  # <-- The action.
plt.ylabel('Proportion of docu...
```
Topic Modeling with DfR wordcounts Latent Dirichlet Allocation is a popular approach to discovering latent "topics" in large corpora. Many digital humanists use a software package called MALLET to fit LDA to text data. Tethne uses MALLET to fit LDA topic models. Before we use LDA, however, we need to do some preprocess...
```python
from nltk.corpus import stopwords

stoplist = stopwords.words()
```
We then need to define what elements to keep, and what elements to discard. We will use a function that evaluates whether or not a word is in our stoplist. The function should take four arguments:

f -- the feature itself (the word)
v -- the number of instances of that feature in a specific document
c -- the numbe...
```python
def apply_stoplist(f, v, c, dc):
    if f in stoplist or dc > 500 or dc < 3 or len(f) < 4:
        return None  # Discard the element.
    return v
```
We apply the stoplist using the transform() method. FeatureSets are not modified in place; instead, a new FeatureSet is generated that reflects the specified changes. We'll call the new FeatureSet 'wordcounts_filtered'.
```python
corpus.features['wordcounts_filtered'] = corpus.features['wordcounts'].transform(apply_stoplist)
```
There should be significantly fewer words in our new "wordcounts_filtered" FeatureSet.
```python
print('There are %i words in the wordcounts featureset' % len(corpus.features['wordcounts'].index))
print('There are %i words in the wordcounts_filtered featureset' % len(corpus.features['wordcounts_filtered'].index))
```
The LDA topic model

Tethne provides a class called LDAModel. You should be able to import it directly from the tethne package:
```python
from tethne import LDAModel
```
Now we'll create a new LDAModel for our Corpus. The featureset_name parameter tells the LDAModel which FeatureSet we want to use. We'll use our filtered wordcounts.
```python
model = LDAModel(corpus, featureset_name='wordcounts_filtered')
```
Next we'll fit the model. We need to tell MALLET how many topics to fit (the hyperparameter Z), and how many iterations (max_iter) to perform. This step may take a little while, depending on the size of your corpus.
```python
model.fit(Z=50, max_iter=500)
```
You can inspect the inferred topics using the model's print_topics() method. By default, this will print the top ten words for each topic.
```python
model.print_topics()
```
We can also look at the representation of a topic over time using the topic_over_time() method. In the example below we'll print the first five of the topics on the same plot.
```python
plt.figure(figsize=(15, 5))
for k in range(5):  # Generates numbers k in [0, 4].
    x, y = model.topic_over_time(k)  # Gets topic number k.
    plt.plot(x, y, label='topic {0}'.format(k), lw=2, alpha=0.7)
plt.legend(loc='best')
plt.show()
```
Generating networks from topic models

The topics module in the tethne.networks subpackage contains some useful methods for visualizing topic models as networks. You can import it just like the authors or papers modules.
```python
from tethne.networks import topics
```
The terms function generates a network of words connected on the basis of shared affinity with a topic. If two words i and j are both associated with a topic z with $\Phi(i|z) \geq 0.01$ and $\Phi(j|z) \geq 0.01$, then an edge is drawn between them.
termGraph = topics.terms(model, threshold=0.01)
termGraph.order(), termGraph.size()
termGraph.name = ''

from tethne.writers.graph import to_graphml
to_graphml(termGraph, '/Users/erickpeirson/Desktop/topic_terms.graphml')
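The edge-drawing rule above can be sketched in plain Python with a made-up toy $\Phi$ table; the words and probabilities below are purely illustrative, not values from the fitted model:

```python
import itertools

# Toy topic-word probabilities: phi[word][topic] = P(word | topic).
# These numbers are invented for illustration only.
phi = {
    'gene':      {0: 0.05,  1: 0.001},
    'mutation':  {0: 0.02,  1: 0.002},
    'selection': {0: 0.001, 1: 0.04},
}

def term_edges(phi, topic, threshold=0.01):
    """Link every pair of words whose probability under `topic` passes the threshold."""
    words = [w for w, probs in phi.items() if probs[topic] >= threshold]
    return set(itertools.combinations(sorted(words), 2))

edges = term_edges(phi, topic=0)
# 'gene' and 'mutation' both exceed 0.01 for topic 0, so they are linked;
# topic 1 has only one qualifying word, so it produces no edges.
```

The real terms function works the same way, but over the full vocabulary and every topic at once.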
topicCoupling = topics.topic_coupling(model, threshold=0.2)
print('%i nodes and %i edges' % (topicCoupling.order(), topicCoupling.size()))
to_graphml(topicCoupling, '/Users/erickpeirson/Desktop/lda_topicCoupling.graphml')
Pandas
import pandas as pd

pand_tmp = pd.DataFrame(data, columns=['x{0}'.format(i) for i in range(data.shape[1])])
pand_tmp.head()
exercises/03_aggregation.ipynb
milroy/Spark-Meetup
mit
What is the row sum?
pand_tmp.sum(axis=1)
Column sum?
pand_tmp.sum(axis=0)
pand_tmp.to_csv('numbers.csv', index=False)
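As a quick reminder of the axis convention (using a tiny illustrative frame, not the notebook's data): axis=1 collapses the columns, giving one value per row, while axis=0 collapses the rows, giving one value per column.

```python
import pandas as pd

# A two-row frame to make the axis convention concrete.
df = pd.DataFrame({'x0': [1, 2], 'x1': [10, 20]})

row_sums = df.sum(axis=1)  # one value per row:    11, 22
col_sums = df.sum(axis=0)  # one value per column:  3, 30
```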
Spark
import os

import findspark
findspark.init()  # must be called before importing pyspark
import pyspark

sc = pyspark.SparkContext('local[4]', 'pyspark')
lines = sc.textFile('numbers.csv', 18)
for l in lines.take(3):
    print(l)

lines.take(3)
type(lines.take(1))
How do we skip the header? How about using find()? Careful: find() returns an index, not a Boolean. What does it return when the match is at the very start of the line?
lines = lines.filter(lambda x: x.find('x') != 0)
for l in lines.take(2):
    print(l)

data = lines.map(lambda x: x.split(','))
data.take(3)
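The reason the filter compares against 0 rather than treating find() as a Boolean can be seen without Spark at all:

```python
# str.find() returns the index of the first match, or -1 when absent.
# A header line that starts with 'x' therefore yields 0, which is falsy,
# so the filter must compare the result against 0 explicitly.
header = 'x0,x1,x2'
row = '1,2,3'

print(header.find('x'))  # 0  -> header line, drop it
print(row.find('x'))     # -1 -> data line, keep it

def keep(line):
    return line.find('x') != 0  # the predicate used in the filter above
```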
Row Sum Cast to integer and sum!
def row_sum(x):
    int_x = [int(value) for value in x]
    return sum(int_x)

data_row_sum = data.map(row_sum)
print(data_row_sum.collect())
print(data_row_sum.count())
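Since the mapper is ordinary Python, it can be sanity-checked outside Spark before handing it to map():

```python
# The row_sum mapper in isolation: cast each CSV field to int, then add.
def row_sum(x):
    return sum(int(value) for value in x)

print(row_sum(['1', '2', '3']))  # 6
```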
Column Sum This one's a bit trickier, and portends ill for large, complex data sets (like example 5)... Let's enumerate the list comprising each RDD "line" such that each value is indexed by the corresponding column number.
def col_key(x):
    for i, value in enumerate(x):
        yield (i, int(value))

tmp = data.flatMap(col_key)
tmp.take(15)
Notice how flatMap works here: col_key yields one (index, value) pair per column of each row, so the first element of the emitted tuples cycles through the column indices.
tmp.take(3)
tmp = tmp.groupByKey()
for i in tmp.take(2):
    print(i, type(i))

data_col_sum = tmp.map(lambda x: sum(x[1]))
for i in data_col_sum.take(2):
    print(i)

print(data_col_sum.collect())
print(data_col_sum.count())
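For intuition, the same flatMap, groupByKey, sum pipeline can be mirrored in plain Python on a couple of toy rows; a dict of lists plays the role of the grouped RDD:

```python
from collections import defaultdict

# Toy rows standing in for the CSV data (illustrative, not numbers.csv).
rows = [['1', '2', '3'], ['4', '5', '6']]

grouped = defaultdict(list)
for row in rows:                        # flatMap(col_key): emit (index, value)
    for i, value in enumerate(row):
        grouped[i].append(int(value))   # groupByKey: collect values per index

col_sums = [sum(vals) for _, vals in sorted(grouped.items())]  # map(sum)
# col_sums == [5, 7, 9]
```

In Spark itself, reduceByKey(operator.add) would give the same result more efficiently than groupByKey followed by sum, because it combines values within each partition before shuffling.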
Column sum with pyspark.sql.DataFrame
from pyspark.sql import SQLContext

sqlContext = SQLContext(sc)
sc
pyspark_df = sqlContext.createDataFrame(pand_tmp)
pyspark_df.take(2)
groupBy() without arguments treats the whole DataFrame as a single group, so the aggregation runs over every row
for i in pyspark_df.columns:
    print(pyspark_df.groupBy().sum(i).collect())
Manufacturer import Many report files from various adsorption device manufacturers can be imported directly using pyGAPS. Here are some examples.
cfld = base_path / "commercial"
micromeritics = pgp.isotherm_from_commercial(cfld / "mic" / "Sample_A.xls", 'mic', 'xl')
belsorp_dat = pgp.isotherm_from_commercial(cfld / "bel" / "BF010_DUT-13_CH4_111K_run2.DAT", 'bel', 'dat')
belsorp_xl = pgp.isotherm_from_commercial(cfld / "bel" / "Sample_C.xls", 'bel', 'xl')
threeP_...
docs/examples/parsing.ipynb
pauliacomi/pyGAPS
mit
AIF Parsing AIF Import Adsorption information files are fully supported in pyGAPS, both for import and export. Isotherms can be imported from an .aif as:
# Import all
isotherms = [pgp.isotherm_from_aif(path) for path in aif_file_paths]

# Display an example file
print(isotherms[1])
AIF Export Similarly, an isotherm can be exported as an AIF file or a string, depending on whether a path is passed. For this purpose use either the module-level pygaps.isotherm_to_aif() function or the convenience class method to_aif().
# module function
for isotherm in isotherms:
    filename = f'{isotherm.material} {isotherm.adsorbate} {isotherm.temperature}.aif'
    pgp.isotherm_to_aif(isotherm, base_path / 'aif' / filename)

# save to file with convenience function
isotherms[0].to_aif('isotherm.aif')

# string
isotherm_string = isotherms[0].to_aif...
JSON Parsing JSON Import Isotherms can be imported either from a json file or from a json string. The same function is used in both cases.
# Import them
isotherms = [pgp.isotherm_from_json(path) for path in json_file_paths]

# Display an example file
print(isotherms[1])
JSON Export Exporting to JSON can be done to a file or a string, depending on whether a path is passed. For this purpose use either the module-level pygaps.isotherm_to_json() function or the convenience class method to_json().
# module function
for isotherm in isotherms:
    filename = f'{isotherm.material} {isotherm.adsorbate} {isotherm.temperature}.json'
    pgp.isotherm_to_json(isotherm, base_path / 'json' / filename)

# save to file with convenience function
isotherms[0].to_json('isotherm.json')

# string
isotherm_string = isotherms[0].t...
Excel Parsing Note that Excel itself does not need to be installed on the system in use. Excel Import
# Import them
isotherms = [pgp.isotherm_from_xl(path) for path in xl_file_paths]

# Display an example file
print(isotherms[1])
isotherms[1].plot()
Excel Export
# Export each isotherm in turn
for isotherm in isotherms:
    filename = ' '.join([str(isotherm.material), str(isotherm.adsorbate), str(isotherm.temperature)]) + '.xls'
    pgp.isotherm_to_xl(isotherm, base_path / 'excel' / filename)

# save to file with convenience function
isotherms[0].to_xl('isotherm.xls')
CSV Parsing CSV Import Like JSON, isotherms can be imported either from a CSV file or from a CSV string. The same function is used in both cases.
# Import them
isotherms = [pgp.isotherm_from_csv(path) for path in csv_file_paths]

# Display an example file
print(isotherms[0])
CSV Export
# Export each isotherm in turn
for isotherm in isotherms:
    filename = ' '.join([str(isotherm.material), str(isotherm.adsorbate), str(isotherm.temperature)]) + '.csv'
    pgp.isotherm_to_csv(isotherm, base_path / 'csv' / filename)

# save to file with convenience function
isotherms[0].to_csv('isotherm.csv')

# string...
We compute the $SVD$:
# Compute the singular value decomposition
U, sigma, Vt = np.linalg.svd(imgmatriz)
MNO/proyecto_final/MNO_2017/proyectos/equipos/equipo_6/avance_22_05_2017/code/Clase_SVD_Imagen.ipynb
csampez/analisis-numerico-computo-cientifico
apache-2.0
We print the resulting matrices $U$, $\Sigma$, $V^T$:
print("U:") print(U) print("sigma:") print(sigma) print("Vt:") print(Vt) #TOTAL DE bytes DEL ARREGLO (solo sigma) sigma.nbytes
MNO/proyecto_final/MNO_2017/proyectos/equipos/equipo_6/avance_22_05_2017/code/Clase_SVD_Imagen.ipynb
csampez/analisis-numerico-computo-cientifico
apache-2.0
We place $\Sigma$ on the diagonal of a matrix to visualize it:
S = np.zeros(imgmatriz.shape, "float")
S[:min(imgmatriz.shape), :min(imgmatriz.shape)] = np.diag(sigma)
print(S)
MNO/proyecto_final/MNO_2017/proyectos/equipos/equipo_6/avance_22_05_2017/code/Clase_SVD_Imagen.ipynb
csampez/analisis-numerico-computo-cientifico
apache-2.0
Computation and reconstruction: we compute a rank-1 approximation of the image using the first column of $U$ and the first row of $V^T$; in the reproduced image, each column of pixels is a weighting of the same vector $\vec{u}_1$:
# rank-1 reconstruction
reconstimg = np.matrix(U[:, :1]) * np.diag(sigma[:1]) * np.matrix(Vt[:1, :])
plt.figure(figsize=(6, 6))
plt.imshow(reconstimg, cmap='gray');

# reconstruct with 8 and 9 vectors
for i in range(8, 10):
    reconstimg = np.matrix(U[:, :i]) * np.diag(sigma[:i]) * np.matrix(Vt[:i, :])
    plt.imshow(reconstimg, cmap='gray')
    ...
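The pattern generalizes: keeping the first $k$ singular triplets gives the best rank-$k$ approximation, and the Frobenius-norm reconstruction error shrinks monotonically as $k$ grows. A small self-contained sketch (a random matrix stands in for the image, and the modern @ operator replaces np.matrix multiplication):

```python
import numpy as np

# Illustrative matrix in place of the image data.
np.random.seed(0)
A = np.random.rand(8, 6)

U, sigma, Vt = np.linalg.svd(A, full_matrices=False)

def rank_k(k):
    """Best rank-k approximation: first k columns of U, values, rows of Vt."""
    return U[:, :k] @ np.diag(sigma[:k]) @ Vt[:k, :]

# Frobenius-norm error for k = 1 .. full rank; non-increasing,
# and (numerically) zero at full rank.
errors = [np.linalg.norm(A - rank_k(k)) for k in range(1, 7)]
```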
MNO/proyecto_final/MNO_2017/proyectos/equipos/equipo_6/avance_22_05_2017/code/Clase_SVD_Imagen.ipynb
csampez/analisis-numerico-computo-cientifico
apache-2.0
Reconstruction of the original matrix:
np.dot(U, np.dot(S, Vt))  # note that Vt is used here
imgmatriz
We will use the class TwoLayerNet in the file cs231n/classifiers/neural_net.py to represent instances of our network. The network parameters are stored in the instance variable self.params where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will use t...
# Create a small net and some toy data to check your implementations.
# Note that we set the random seed for repeatable experiments.
input_size = 4
hidden_size = 10
num_classes = 3
num_inputs = 5

def init_toy_model():
    np.random.seed(0)
    return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1)

def init_t...
assignment1/two_layer_net.ipynb
zlpure/CS231n
mit
Forward pass: compute scores Open the file cs231n/classifiers/neural_net.py and look at the method TwoLayerNet.loss. This function is very similar to the loss functions you have written for the SVM and Softmax exercises: It takes the data and weights and computes the class scores, the loss, and the gradients on the par...
scores = net.loss(X)
print('Your scores:')
print(scores)
print()
print('correct scores:')
correct_scores = np.asarray([
    [-0.81233741, -1.27654624, -0.70335995],
    [-0.17129677, -1.18803311, -0.47310444],
    [-0.51590475, -1.01354314, -0.8504215 ],
    [-0.15419291, -0.48629638, -0.52901952],
    [-0.00618733, -0.12435261, -0.1...
Forward pass: compute loss In the same function, implement the second part that computes the data and regularization loss.
loss, _ = net.loss(X, y, reg=0.1)
correct_loss = 1.30378789133

# should be very small, we get < 1e-12
print('Difference between your loss and correct loss:')
print(np.sum(np.abs(loss - correct_loss)))
Backward pass Implement the rest of the function. This will compute the gradient of the loss with respect to the variables W1, b1, W2, and b2. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check:
from cs231n.gradient_check import eval_numerical_gradient

# Use numeric gradient checking to check your implementation of the backward pass.
# If your implementation is correct, the difference between the numeric and
# analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2.
loss, grads = net.loss(...
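The idea behind numeric gradient checking is plain central differences: perturb each parameter by a small $\pm h$ and compare $(f(x+h) - f(x-h)) / 2h$ against the analytic gradient. A minimal self-contained version (the function name and the quadratic test function here are illustrative, not the course code):

```python
import numpy as np

def numerical_gradient(f, x, h=1e-5):
    """Central-difference estimate of df/dx at x, element by element."""
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'])
    while not it.finished:
        idx = it.multi_index
        orig = x[idx]
        x[idx] = orig + h
        fxph = f(x)             # f(x + h) in this coordinate
        x[idx] = orig - h
        fxmh = f(x)             # f(x - h) in this coordinate
        x[idx] = orig           # restore the original value
        grad[idx] = (fxph - fxmh) / (2 * h)
        it.iternext()
    return grad

# Check on f(x) = sum(x**2), whose analytic gradient is 2x.
x = np.random.randn(4, 3)
num_grad = numerical_gradient(lambda x: np.sum(x ** 2), x)
analytic = 2 * x
# the two should agree to well under 1e-6
```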
Train the network To train the network we will use stochastic gradient descent (SGD), similar to the SVM and Softmax classifiers. Look at the function TwoLayerNet.train and fill in the missing sections to implement the training procedure. This should be very similar to the training procedure you used for the SVM and So...
net = init_toy_model()
stats = net.train(X, y, X, y,
                  learning_rate=1e-1, reg=1e-5,
                  num_iters=100, verbose=False)
print('Final training loss: ', stats['loss_history'][-1])

# plot the loss history
plt.plot(stats['loss_history'])
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.title('Train...