markdown stringlengths 0 37k | code stringlengths 1 33.3k | path stringlengths 8 215 | repo_name stringlengths 6 77 | license stringclasses 15
values |
|---|---|---|---|---|
What is the 'relationship to company' field, and what are the most common relationships? | recent['relationshiptocompany']
recent['relationshiptocompany'].describe()
# the most common relationship to company is founder | .ipynb_checkpoints/homework7_billionaire_shengyingzhao-checkpoint.ipynb | sz2472/foundations-homework | mit |
Most common source of wealth? Male vs. female? | recent['sourceofwealth'].describe()
# the most common source of wealth is real estate
recent.groupby('gender')['sourceofwealth'].describe() #describe the content of a given column
# the most common source of wealth for males is real estate, while for females it is diversified | .ipynb_checkpoints/homework7_billionaire_shengyingzhao-checkpoint.ipynb | sz2472/foundations-homework | mit |
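The describe-by-group pattern used above can be seen on a toy DataFrame; this is a minimal sketch with made-up rows, where only the method calls mirror the notebook:

```python
import pandas as pd

# Toy stand-in for the billionaires data (hypothetical rows).
recent = pd.DataFrame({
    "gender": ["male", "male", "male", "female", "female"],
    "sourceofwealth": ["real estate", "real estate", "real estate",
                       "diversified", "diversified"],
})

# On an object column, describe() reports count, unique, top (the mode), and freq.
overall = recent["sourceofwealth"].describe()
print(overall["top"])

# Grouping first yields the same summary per gender.
by_gender = recent.groupby("gender")["sourceofwealth"].describe()
print(by_gender.loc["male", "top"], "/", by_gender.loc["female", "top"])
```

The `top` entry is what the notebook's comments read off when naming the most common source of wealth per gender.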
Given the richest person in a country, what % of the GDP is their wealth? | recent.sort_values(by='networthusbillion', ascending=False).head(10)['gdpcurrentus']
#From the website, I learned that the GDP for USA in 2014 is $17348 billion
#from the previous dataframe, I learned that the richest USA billionaire made $76 billion networth
richest = 76
usa_gdp = 17348
percent = round(richest / usa... | .ipynb_checkpoints/homework7_billionaire_shengyingzhao-checkpoint.ipynb | sz2472/foundations-homework | mit |
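The cell above is truncated; the following is a hedged sketch of the percentage computation it appears to set up, using only the figures quoted in the notebook's own comments:

```python
# Figures quoted in the notebook's comments (2014 values, in billions USD).
richest = 76      # net worth of the richest US billionaire
usa_gdp = 17348   # US GDP

# Wealth of the richest person expressed as a percentage of GDP.
percent = round(richest / usa_gdp * 100, 2)
print(f"{percent}% of US GDP")
```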
Add up the wealth of all of the billionaires in a given country (or a few countries) and then compare it to the GDP of the country, or other billionaires, so like pit the US vs India | recent.groupby('countrycode')['networthusbillion'].sum().sort_values(ascending=False)
# USA is $2322 billion, compared to Russia's $422 billion | .ipynb_checkpoints/homework7_billionaire_shengyingzhao-checkpoint.ipynb | sz2472/foundations-homework | mit |
What are the most common industries for billionaires to come from? What's the total amount of billionaire money from each industry? | recent['sourceofwealth'].describe()
recent.groupby('sourceofwealth')['networthusbillion'].sum().sort_values(ascending=False)
How old are billionaires? How old are billionaires self made vs. non self made? or different industries?
Who are the youngest billionaires? The oldest? Age distribution - maybe make a graph abo... | .ipynb_checkpoints/homework7_billionaire_shengyingzhao-checkpoint.ipynb | sz2472/foundations-homework | mit |
How many self made billionaires vs. others? | recent['selfmade'].value_counts() | .ipynb_checkpoints/homework7_billionaire_shengyingzhao-checkpoint.ipynb | sz2472/foundations-homework | mit |
How old are billionaires? How old are billionaires self made vs. non self made? or different industries? | recent.sort_values(by='age',ascending=False).head()
columns_want = recent[['name', 'age', 'selfmade','industry']] #[[]]:dataframe
columns_want.head() | .ipynb_checkpoints/homework7_billionaire_shengyingzhao-checkpoint.ipynb | sz2472/foundations-homework | mit |
The type of the variable is a tuple. | type(tuple1) | coursera/python_for_data_science/2.1_Tuples.ipynb | mohanprasath/Course-Work | gpl-3.0 |
Each element of a tuple can be accessed via an index. The following table represents the relationship between the index and the items in the tuple. Each element can be obtained by the name of the tuple followed by a square bracket with the index number:
<img src = "https://ibm.box.com/shared/static/83kpang0opwen5e5gbwc... | print( tuple1[0])
print( tuple1[1])
print( tuple1[2]) | coursera/python_for_data_science/2.1_Tuples.ipynb | mohanprasath/Course-Work | gpl-3.0 |
We can print out the type of each value in the tuple: | print( type(tuple1[0]))
print( type(tuple1[1]))
print( type(tuple1[2])) | coursera/python_for_data_science/2.1_Tuples.ipynb | mohanprasath/Course-Work | gpl-3.0 |
We can also use negative indexing. We use the same table above with corresponding negative values:
<img src = "https://ibm.box.com/shared/static/uwlfzo367bekwg0p5s5odxlz7vhpojyj.png" width = 750, align = "center"></a>
We can obtain the last element as follows (this time we will not use the print statement to display th... | tuple1[-1] | coursera/python_for_data_science/2.1_Tuples.ipynb | mohanprasath/Course-Work | gpl-3.0 |
We can display the next two elements as follows: | tuple1[-2]
tuple1[-3] | coursera/python_for_data_science/2.1_Tuples.ipynb | mohanprasath/Course-Work | gpl-3.0 |
We can concatenate or combine tuples by using the + sign: | tuple2=tuple1+("hard rock", 10)
tuple2 | coursera/python_for_data_science/2.1_Tuples.ipynb | mohanprasath/Course-Work | gpl-3.0 |
We can slice tuples obtaining multiple values as demonstrated by the figure below:
<img src = "https://ibm.box.com/shared/static/s9nofy728bcnsgnx3vh159bu16w7frnc.gif" width = 750, align = "center"></a>
We can slice tuples, obtaining new tuples with the corresponding elements: | tuple2[0:3] | coursera/python_for_data_science/2.1_Tuples.ipynb | mohanprasath/Course-Work | gpl-3.0 |
We can obtain the last two elements of the tuple: | tuple2[3:5] | coursera/python_for_data_science/2.1_Tuples.ipynb | mohanprasath/Course-Work | gpl-3.0 |
We can obtain the length of a tuple using the len function: | len(tuple2) | coursera/python_for_data_science/2.1_Tuples.ipynb | mohanprasath/Course-Work | gpl-3.0 |
This figure shows the number of elements:
<img src = "https://ibm.box.com/shared/static/apxe8l3w42f597yjhizg305merlm4ijf.png" width = 750, align = "center"></a>
Consider the following tuple: | Ratings =(0,9,6,5,10,8,9,6,2) | coursera/python_for_data_science/2.1_Tuples.ipynb | mohanprasath/Course-Work | gpl-3.0 |
We can assign the tuple to a second variable: | Ratings1=Ratings
Ratings | coursera/python_for_data_science/2.1_Tuples.ipynb | mohanprasath/Course-Work | gpl-3.0 |
We can sort the values in a tuple and save the result to a new variable (note that sorted returns a list): | RatingsSorted = sorted(Ratings)
RatingsSorted | coursera/python_for_data_science/2.1_Tuples.ipynb | mohanprasath/Course-Work | gpl-3.0 |
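One detail worth making explicit: sorted always returns a list, even when given a tuple, since tuples are immutable and cannot be sorted in place. A short sketch:

```python
Ratings = (0, 9, 6, 5, 10, 8, 9, 6, 2)

# sorted() returns a new list, leaving the original tuple untouched.
RatingsSorted = sorted(Ratings)
print(type(RatingsSorted))   # <class 'list'>

# Wrap the result in tuple() if a tuple is required.
RatingsTuple = tuple(RatingsSorted)
print(RatingsTuple)
```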
A tuple can contain another tuple as well as other more complex data types. This process is called 'nesting'. Consider the following tuple with several elements: | NestedT =(1, 2, ("pop", "rock") ,(3,4),("disco",(1,2))) | coursera/python_for_data_science/2.1_Tuples.ipynb | mohanprasath/Course-Work | gpl-3.0 |
Each element in the tuple including other tuples can be obtained via an index as shown in the figure:
<img src = "https://ibm.box.com/shared/static/estqe2bczv5weocc4ag4mx9dtqy952fp.png" width = 750, align = "center"></a> | print("Element 0 of Tuple: ", NestedT[0])
print("Element 1 of Tuple: ", NestedT[1])
print("Element 2 of Tuple: ", NestedT[2])
print("Element 3 of Tuple: ", NestedT[3])
print("Element 4 of Tuple: ", NestedT[4]) | coursera/python_for_data_science/2.1_Tuples.ipynb | mohanprasath/Course-Work | gpl-3.0 |
We can use the second index to access other tuples as demonstrated in the figure:
<img src = "https://ibm.box.com/shared/static/j1orgjuasaaj3d0feymedrnoqv8trqyo.png" width = 750, align = "center"></a>
We can access the nested tuples: | print("Element 2,0 of Tuple: ", NestedT[2][0])
print("Element 2,1 of Tuple: ", NestedT[2][1])
print("Element 3,0 of Tuple: ", NestedT[3][0])
print("Element 3,1 of Tuple: ", NestedT[3][1])
print("Element 4,0 of Tuple: ", NestedT[4][0])
print("Element 4,1 of Tuple: ", NestedT[4][1]) | coursera/python_for_data_science/2.1_Tuples.ipynb | mohanprasath/Course-Work | gpl-3.0 |
We can access strings in the second nested tuples using a third index: | NestedT[2][1][0]
NestedT[2][1][1] | coursera/python_for_data_science/2.1_Tuples.ipynb | mohanprasath/Course-Work | gpl-3.0 |
We can use a tree to visualise the process. Each new index corresponds to a deeper level in the tree:
<img src ='https://ibm.box.com/shared/static/vjvsygpzpwcr6czsucgno1wukyhk5vxq.gif' width = 750, align = "center"></a>
Similarly, we can access elements nested deeper in the tree with a fourth index: | NestedT[4][1][0]
NestedT[4][1][1] | coursera/python_for_data_science/2.1_Tuples.ipynb | mohanprasath/Course-Work | gpl-3.0 |
The following figure shows the relationship of the tree and the element NestedT[4][1][1]:
<img src ='https://ibm.box.com/shared/static/9y5s7515zwzc9v6i4f67yj3np2fv9evs.gif' width = 750, align = "center"></a>
<a id="ref2"></a>
<h2 align=center> Quiz on Tuples </h2>
Consider the following tuple: | genres_tuple = ("pop", "rock", "soul", "hard rock", "soft rock", \
"R&B", "progressive rock", "disco")
genres_tuple | coursera/python_for_data_science/2.1_Tuples.ipynb | mohanprasath/Course-Work | gpl-3.0 |
Find the length of the tuple, "genres_tuple": | len(genres_tuple) | coursera/python_for_data_science/2.1_Tuples.ipynb | mohanprasath/Course-Work | gpl-3.0 |
<div align="right">
<a href="#String1" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="String1" class="collapse">
"len(genres_tuple)"
<a ><img src = "https://ibm.box.com/shared/static/n4969qbta8hhsycs2dc4n8jqbf062wdw.png" width = 1100, align = "center"></a>
<... | genres_tuple[3] | coursera/python_for_data_science/2.1_Tuples.ipynb | mohanprasath/Course-Work | gpl-3.0 |
<div align="right">
<a href="#2" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="2" class="collapse">
<a ><img src = "https://ibm.box.com/shared/static/s6r8v2uy6wifmaqv53w6adabqci47zme.png" width = 1100, align = "center"></a>
</div>
Use slicing to obtain indexes 3, ... | genres_tuple[3:6] | coursera/python_for_data_science/2.1_Tuples.ipynb | mohanprasath/Course-Work | gpl-3.0 |
<div align="right">
<a href="#3" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="3" class="collapse">
<a ><img src = "https://ibm.box.com/shared/static/nqo84vydw6eixdex0trybuvactcw7ffi.png" width = 1100, align = "center"></a>
</div>
Find the first two elements of th... | genres_tuple[:2] | coursera/python_for_data_science/2.1_Tuples.ipynb | mohanprasath/Course-Work | gpl-3.0 |
<div align="right">
<a href="#q5" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="q5" class="collapse">
```
genres_tuple[0:2]
```
#### Find the first index of 'disco': | genres_tuple.index("disco") | coursera/python_for_data_science/2.1_Tuples.ipynb | mohanprasath/Course-Work | gpl-3.0 |
<div align="right">
<a href="#q6" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="q6" class="collapse">
```
genres_tuple.index("disco")
```
<hr>
#### Generate a sorted List from the Tuple C_tuple=(-5,1,-3): | C_tuple=sorted((-5, 1, -3))
C_tuple | coursera/python_for_data_science/2.1_Tuples.ipynb | mohanprasath/Course-Work | gpl-3.0 |
Another rule is that positional arguments take precedence: | def func(a, b = 1):
pass
func(20, a = "G") # TypeError: got multiple values for argument 'a' | Tips/2016-03-11-Arguments-and-Unpacking.ipynb | rainyear/pytips | mit |
The safest approach is to pass all arguments as keyword arguments.
Arbitrary arguments
Arbitrary-argument syntax accepts any number of arguments: the *a form collects any number of positional arguments, while **d collects any number of keyword arguments: | def concat(*lst, sep = "/"):
return sep.join((str(i) for i in lst))
print(concat("G", 20, "@", "Hz", sep = "")) | Tips/2016-03-11-Arguments-and-Unpacking.ipynb | rainyear/pytips | mit |
The def concat(*lst, sep = "/") syntax above was proposed in PEP 3102 and implemented in Python 3.0. Keyword arguments of this kind must be named explicitly; they cannot be inferred from position: | print(concat("G", 20, "-")) # Not G-20 | Tips/2016-03-11-Arguments-and-Unpacking.ipynb | rainyear/pytips | mit |
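A short runnable sketch of the same point: because sep is keyword-only, a positional "-" is simply absorbed into *lst instead of becoming the separator:

```python
def concat(*lst, sep="/"):
    return sep.join(str(i) for i in lst)

# Without the keyword, "-" becomes one more positional item ...
print(concat("G", 20, "-"))      # G/20/-
# ... so the separator must be named explicitly.
print(concat("G", 20, sep="-"))  # G-20
```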
**d likewise collects any number of keyword arguments: | def dconcat(sep = ":", **dic):
for k in dic.keys():
print("{}{}{}".format(k, sep, dic[k]))
dconcat(hello = "world", python = "rocks", sep = "~") | Tips/2016-03-11-Arguments-and-Unpacking.ipynb | rainyear/pytips | mit |
Unpacking
A feature added in Python 3.5 (PEP 448) allows *a and **d to be used outside function parameters: | print(*range(5))
lst = [0, 1, 2, 3]
print(*lst)
a = *range(3), # the trailing comma cannot be omitted
print(a)
d = {"hello": "world", "python": "rocks"}
print({**d}["python"]) | Tips/2016-03-11-Arguments-and-Unpacking.ipynb | rainyear/pytips | mit |
Unpacking can be thought of as a tuple with its parentheses removed, or a dictionary with its braces removed. This syntax also provides a more Pythonic way to merge dictionaries: | user = {'name': "Trey", 'website': "http://treyhunner.com"}
defaults = {'name': "Anonymous User", 'page_name': "Profile Page"}
print({**defaults, **user}) | Tips/2016-03-11-Arguments-and-Unpacking.ipynb | rainyear/pytips | mit |
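Key collisions in this merge are resolved left to right, with later unpackings winning; a minimal sketch using the same dictionaries as above:

```python
user = {'name': "Trey", 'website': "http://treyhunner.com"}
defaults = {'name': "Anonymous User", 'page_name': "Profile Page"}

# `user` is unpacked last, so its "name" overrides the default,
# while keys missing from `user` fall back to `defaults`.
merged = {**defaults, **user}
print(merged["name"])       # Trey
print(merged["page_name"])  # Profile Page
```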
Using this kind of unpacking in a function call also works in Python 2.7: | print(concat(*"ILovePython")) | Tips/2016-03-11-Arguments-and-Unpacking.ipynb | rainyear/pytips | mit |
Parameters
The next cell initializes the parameters that are used throughout the code. They are listed as:
N: The original sequence length N, which is also the length of the sequences generated by the PFSAs that DCGraM produces;
drange: range of values of D for which D-Markov and DCGraM machines tha... | N = 10000000
drange = range(4,11)
a = 20 | dcgram.ipynb | franchenstein/dcgram | gpl-3.0 |
Original Sequence Analysis
Make sure that the original sequence of length N is stored in the correct directory and run the cell to load it to X. After this, run the cells corresponding to the computation of the subsequence probabilities and the conditional probabilites for the value d_max, which is the last value in dr... | #Open original sequence from yaml file
with open(name + '/sequences/original_len_' + str(N) + '_' + tag + '.yaml', 'r') as f:
    X = yaml.load(f, Loader=yaml.FullLoader)
#Value up to which results are computed
d_max = drange[-1]
#Initialization of variables:
p = None
p_cond = None
#Compute subsequence probabilities of occurrence up to... | dcgram.ipynb | franchenstein/dcgram | gpl-3.0 |
D-Markov Machines
The next step of DCGraM consists of generating D-Markov Machines for each value of D in drange defined above. The values of p_cond for each of these values is then needed, so it is necessary to compute it above. A D-Markov Machine is a PFSA with $|\Sigma|^D$ states, each one labeled with one of the su... | dmark_machines = []
#If the D-Markov machines have not been previously created, generate them with this cell
for D in list(map(str,drange)):
dmark_machines.append(dmarkov.create(p_cond, D))
dmark_machines[-1].to_csv(name + '/pfsa/dmarkov_D' + D + '_' + tag + '.csv')
#On the other hand, if there already are D-... | dcgram.ipynb | franchenstein/dcgram | gpl-3.0 |
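The state-count growth driving this design is easy to see in a short sketch (assuming, as the parameters cell suggests, an alphabet of a = 20 symbols):

```python
# A D-Markov machine has one state per length-D word over the alphabet,
# so the number of states grows as |Sigma|**D. This exponential blow-up
# is what DCGraM's later clustering step is meant to counter.
a = 20
state_counts = {D: a ** D for D in range(4, 11)}
for D, n in state_counts.items():
    print(f"D = {D}: {n} states")
```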
D-Markov Machine Analysis
First of all, sequences should be generated from the D-Markov Machines. The same parameters computed in the analysis of the original sequence should be computed for the D-Markov Machines' sequences. Besides those parameters, the Kullback-Leibler Divergence and Distribution Distance between the... | dmark_seqs = []
#Generate sequences:
count = 0
for machine in dmark_machines:
seq = machine.generate_sequence(N)
with open(name + '/sequences/dmarkov_D' + str(drange[count]) + '_' + tag + '.yaml', 'w') as f:
yaml.dump(seq, f)
dmark_seqs.append(seq)
count += 1
#If the sequences have been previo... | dcgram.ipynb | franchenstein/dcgram | gpl-3.0 |
Clustering
Now that we have obtained the D-Markov Machines, the next step of DCGraM is to cluster the states of these machines. For a given D-Markov Machine G$_D$, its states $q$ are considered points in a $\Sigma$-dimensional space, in which each dimension is labeled with a symbol $\sigma$ from the alphabet and the po... | clustered = []
K = 4
for machine in dmark_machines:
clustered.append(clustering.kmeans_kld(machine, K)) | dcgram.ipynb | franchenstein/dcgram | gpl-3.0 |
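The clustering module itself is not shown in this notebook; the sketch below only illustrates the distance such a KL-based K-means presumably uses, a Kullback-Leibler divergence between two states' next-symbol distributions (kl_divergence is a hypothetical helper, not the notebook's API):

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) between two discrete distributions over the same alphabet.

    eps guards against log(0) for zero-probability symbols.
    """
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# States with similar next-symbol distributions are close ...
print(kl_divergence([0.5, 0.5], [0.5, 0.5]))
# ... and dissimilar ones are far apart, so they fall into different clusters.
print(kl_divergence([0.9, 0.1], [0.1, 0.9]))
```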
Graph Minimization
Once that the states of the D-Markov Machines are clustered, these clusterings are then used as initial partitions of the D-Markov Machines' states. To these machines and initial partitions, a graph minimization algorithm (in the current version, only Moore) is applied in order to obtain a final redu... | dcgram_machines = []
for ini_part in clustered:
dcgram_machines.append(graphmin.moore(ini_part)) | dcgram.ipynb | franchenstein/dcgram | gpl-3.0 |
DCGraM Analysis
Now that the DCGraM machines have been generated, the same analysis done for the D-Markov Machines is used for them. Sequences are generated for each of the DCGraM machines and afterwards all of the analysis is applied to them so the comparison can be made between regular D-Markov and DCGraM. | dcgram_seqs = []
#Generate sequences:
count = 0
for machine in dcgram_machines:
seq = machine.generate_sequence(N)
with open(name + '/sequences/dcgram_D' + str(drange[count]) + '_' + tag + '.yaml', 'w') as f:
yaml.dump(seq, f)
dcgram_seqs.append(seq)
count += 1
#If the sequences have been prev... | dcgram.ipynb | franchenstein/dcgram | gpl-3.0 |
Plots
Once all analyses have been made, plots of each of those parameters are created to visualize performance. The x-axis represents the number of states of each PFSA and the y-axis represents the parameter being observed. There are always two curves: one for the DCGraM machines and one for th... | #initialization
import matplotlib.pyplot as plt
#Labels to be used in the plots' legends
labels = ['D-Markov Machines, D from ' + str(drange[0]) + ' to ' + str(d_max),
'DCGraM Machines, D from ' + str(drange[0]) + ' to ' + str(d_max),
'Original Sequence Baseline']
#Obtaining number of states of th... | dcgram.ipynb | franchenstein/dcgram | gpl-3.0 |
Finetuning and Training | %cd $DATA_HOME_DIR
#Set path to sample/ path if desired
path = DATA_HOME_DIR + '/' #'/sample/'
test_path = DATA_HOME_DIR + '/test/' #We use all the test data
results_path=DATA_HOME_DIR + '/results/'
train_path=path + '/train/'
valid_path=path + '/valid/'
#import Vgg16 helper class
vgg = Vgg16()
#Set constants. You c... | FAI_old/lesson1/dogs_cats_redux.ipynb | WNoxchi/Kaukasos | mit |
In the next piece of code we will cycle through our directory again: first assigning readable names to our files and storing them as a list in the variable filenames; then we will remove the case and punctuation from the text, split the words into a list of tokens, and assign the words in each file to a list in the var... | filenames = []
for files in list_textfiles('../Counting Word Frequencies/data'):
files = get_filename(files)
filenames.append(files)
corpus = []
for filename in list_textfiles('../Counting Word Frequencies/data'):
text = read_file(filename)
words = text.split()
clean = [w.lower() for w in words if ... | Adding Context to Word Frequency Counts.ipynb | mediagestalt/Adding-Context | mit |
Here we recreate our list from the last exercise, counting the instances of the word privacy in each file. | for words, names in zip(corpus, filenames):
print("Instances of the word \'privacy\' in", names, ":", count_in_list("privacy", words)) | Adding Context to Word Frequency Counts.ipynb | mediagestalt/Adding-Context | mit |
Next we use the len function to count the total number of words in each file. | for files, names in zip(corpus, filenames):
print("There are", len(files), "words in", names) | Adding Context to Word Frequency Counts.ipynb | mediagestalt/Adding-Context | mit |
Now we can calculate the ratio of the word privacy to the total number of words in the file. To accomplish this we simply divide the two numbers. | print("Ratio of instances of privacy to total number of words in the corpus:")
for words, names in zip(corpus, filenames):
print('{:.6f}'.format(float(count_in_list("privacy", words))/(float(len(words)))),":",names) | Adding Context to Word Frequency Counts.ipynb | mediagestalt/Adding-Context | mit |
Now our descriptive statistics concerning word frequencies have added value. We can see that there has indeed been a steady increase in the frequency of the use of the word privacy in our corpus. When we investigate the yearly usage, we can see that the frequency almost doubled between 2008 and 2009, as well as dramati... | raw = []
for i in range(len(corpus)):
raw.append(count_in_list("privacy", corpus[i]))
ratio = []
for i in range(len(corpus)):
ratio.append('{:.3f}'.format((float(count_in_list("privacy", corpus[i]))/(float(len(corpus[i])))) * 100))
table = list(zip(filenames, raw, ratio)) # materialize the zip so it can be reused after tabulate | Adding Context to Word Frequency Counts.ipynb | mediagestalt/Adding-Context | mit |
Using the tabulate module, we will display our tuple as a table. | print(tabulate(table, headers = ["Filename", "Raw", "Ratio %"], floatfmt=".3f", numalign="left")) | Adding Context to Word Frequency Counts.ipynb | mediagestalt/Adding-Context | mit |
And finally, we will write the values to a CSV file called privacyFreqTable. | import csv
with open('privacyFreqTable.csv', 'w', newline='') as f:
w = csv.writer(f)
w.writerows(table) | Adding Context to Word Frequency Counts.ipynb | mediagestalt/Adding-Context | mit |
Part 2: Counting the number of transcripts
Another way we can provide context is to process the corpus in a different way. Instead of splitting the data by word, we will split it in larger chunks pertaining to each individual transcript. Each transcript corresponds to a unique debate but starts with exactly the same fo... | corpus_1 = []
for filename in list_textfiles('../Counting Word Frequencies/data'):
text = read_file(filename)
words = text.split(" OFFICIAL REPORT (HANSARD)")
corpus_1.append(words) | Adding Context to Word Frequency Counts.ipynb | mediagestalt/Adding-Context | mit |
Now, we can count the number of files in each dataset. This is also an important activity for error-checking. While it is easy to trust the numerical output of the code when it works successfully, we must always be sure to check that the code is actually performing in exactly the way we want it to. In this case, these n... | for files, names in zip(corpus_1, filenames):
print("There are", len(files), "files in", names) | Adding Context to Word Frequency Counts.ipynb | mediagestalt/Adding-Context | mit |
Here is a screenshot of some of the raw data. We can see that there are <u>97</u> files in 2006, <u>117</u> in 2007 and <u>93</u> in 2008. The rest of the data is also correct.
<img src="filecount.png">
Now we can compare the number of occurrences of privacy with the number of debates occurring in each dataset. | for names, files, words in zip(filenames, corpus_1, corpus):
print("In", names, "there were", len(files), "debates. The word privacy was said", \
count_in_list('privacy', words), "times.") | Adding Context to Word Frequency Counts.ipynb | mediagestalt/Adding-Context | mit |
These numbers confirm our earlier results. There is a clear indication that the usage of the term privacy is increasing, with major changes occurring between the years 2008 and 2009, as well as between 2012 and 2014. This trend is also clearly observable between the 39th and 40th sittings of Parliament.
Part 3: Looking... | corpus_3 = []
for filename in list_textfiles('../Counting Word Frequencies/data2'):
text = read_file(filename)
words = text.split()
clean = [w.lower() for w in words if w.isalpha()]
corpus_3.append(clean) | Adding Context to Word Frequency Counts.ipynb | mediagestalt/Adding-Context | mit |
Now we will combine the three lists into one large list and assign it to the variable large. | large = list(sum(corpus_3, [])) | Adding Context to Word Frequency Counts.ipynb | mediagestalt/Adding-Context | mit |
We can use the same calculations to determine the total number of occurrences of privacy, as well as the total number of words in the corpus. We can also calculate the total ratio of privacy to the total number of words. | print("There are", count_in_list('privacy', large), "occurrences of the word 'privacy' and a total of", \
len(large), "words.")
print("The ratio of instances of privacy to total number of words in the corpus is:", \
'{:.6f}'.format(float(count_in_list("privacy", large))/(float(len(large)))), "or", \
'{:.3f}'.format((fl... | Adding Context to Word Frequency Counts.ipynb | mediagestalt/Adding-Context | mit |
Another type of word frequency statistic we can generate is a type/token ratio. The types are the total number of unique words in the corpus, while the tokens are the total number of words. The type/token ratio is used to determine the variability of the language used in the text. The higher the ratio, the more complex... | print("There are", (len(set(large))), "unique words in the Hansard corpus.") | Adding Context to Word Frequency Counts.ipynb | mediagestalt/Adding-Context | mit |
Now we can divide the types by the tokens to determine the ratio. | print("The type/token ratio is:", ('{:.6f}'.format(len(set(large))/(float(len(large))))), "or",\
'{:.3f}'.format(len(set(large))/(float(len(large)))*100),"%") | Adding Context to Word Frequency Counts.ipynb | mediagestalt/Adding-Context | mit |
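The same computation on a toy word list makes the definition concrete (the sentence below is invented, not from the Hansard corpus):

```python
words = "the cat sat on the mat the end".split()

types = len(set(words))   # unique words
tokens = len(words)       # total words
ttr = types / tokens      # type/token ratio: higher means more varied language
print(f"{types} types / {tokens} tokens = {ttr:.3f}")
```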
Finally, we will use the NLTK module to create a graph that shows the top 50 most frequent words in the Hansard corpus. Although privacy will not appear in the graph, it's always interesting to see what types of words are most common, and what their distribution is. NLTK will be introduced with more detail in the next ... | text = nltk.Text(large)
fd = nltk.FreqDist(text) | Adding Context to Word Frequency Counts.ipynb | mediagestalt/Adding-Context | mit |
Here we will assign the frequency distribution to the plot function to produce a graph. While it's a little hard to read, the most commonly used word in the Hansard corpus is the, with a frequency just over 400,000 occurrences. The next most frequent word is to, which only has a frequency of about 225,000 occurrences, al... | %matplotlib inline
fd.plot(50,cumulative=False) | Adding Context to Word Frequency Counts.ipynb | mediagestalt/Adding-Context | mit |
Another feature of the NLTK frequency distribution function is the generation of a list of hapaxes. These are words that appear only once in the entire corpus. While not meaningful for this study, it's an interesting way to explore the data. | fd.hapaxes() | Adding Context to Word Frequency Counts.ipynb | mediagestalt/Adding-Context | mit |
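NLTK's FreqDist behaves much like the standard library's collections.Counter, which can reproduce both the most-common ranking and the hapax list on a toy example:

```python
from collections import Counter

words = "to be or not to be".split()
fd = Counter(words)

# Top-ranked words with their counts, analogous to FreqDist.most_common().
print(fd.most_common(2))

# Hapaxes are the words that occur exactly once.
hapaxes = sorted(w for w, n in fd.items() if n == 1)
print(hapaxes)
```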
Rating-specialized model
Depending on the weights we assign, the model will encode a different balance of the tasks. Let's start with a model that only considers ratings. | # Here, configuring the model with losses and metrics.
# TODO 1: Your code goes here.
cached_train = train.shuffle(100_000).batch(8192).cache()
cached_test = test.batch(4096).cache()
# Training the ratings model.
model.fit(cached_train, epochs=3)
metrics = model.evaluate(cached_test, return_dict=True)
print(f"Retri... | courses/machine_learning/deepdive2/recommendation_systems/labs/multitask.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
The model does OK on predicting ratings (with an RMSE of around 1.11), but performs poorly at predicting which movies will be watched or not: its top-100 accuracy is almost 4 times worse than that of a model trained solely to predict watches.
Retrieval-specialized model
Let's now try a model that focuses on retrieval only. | # Here, configuring the model with losses and metrics.
# TODO 2: Your code goes here.
# Training the retrieval model.
model.fit(cached_train, epochs=3)
metrics = model.evaluate(cached_test, return_dict=True)
print(f"Retrieval top-100 accuracy: {metrics['factorized_top_k/top_100_categorical_accuracy']:.3f}.")
print(f... | courses/machine_learning/deepdive2/recommendation_systems/labs/multitask.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
We get the opposite result: a model that does well on retrieval, but poorly on predicting ratings.
Joint model
Let's now train a model that assigns positive weights to both tasks. | # Here, configuring the model with losses and metrics.
# TODO 3: Your code goes here.
# Training the joint model.
model.fit(cached_train, epochs=3)
metrics = model.evaluate(cached_test, return_dict=True)
print(f"Retrieval top-100 accuracy: {metrics['factorized_top_k/top_100_categorical_accuracy']:.3f}.")
print(f"Ran... | courses/machine_learning/deepdive2/recommendation_systems/labs/multitask.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Proper use of Matplotlib
We will use interactive plots inline in the notebook. This feature is enabled through: | %matplotlib
import matplotlib.pyplot as plt
import numpy as np
# define a figure which can contains several plots, you can define resolution and so on here...
fig2 = plt.figure()
# add one axis; axes are the actual plots where you put data: add_subplot(nrows, ncols, index)
ax = fig2.add_subplot(1, 1, 1) | notebooks/03-Plotting.ipynb | aboucaud/python-euclid2016 | bsd-3-clause |
Add a curve with a title to the plot | x = np.linspace(0, 2*np.pi)
ax.plot(x, np.sin(x), '+')
ax.set_title('this title')
plt.show()
# is a simpler syntax to add one axis into the figure (we will stick to this)
fig, ax = plt.subplots()
ax.plot(x, np.sin(x), '+')
ax.set_title('simple subplot') | notebooks/03-Plotting.ipynb | aboucaud/python-euclid2016 | bsd-3-clause |
A long list of markers can be found at http://matplotlib.org/api/markers_api.html
As for colors, there is a nice discussion at http://stackoverflow.com/questions/22408237/named-colors-in-matplotlib
All the components of a figure can be accessed throught the 'Figure' object | print(type(fig))
print(dir(fig))
print(fig.axes)
print('This is the x-axis object', fig.axes[0].xaxis)
print('And this is the y-axis object', fig.axes[0].yaxis)
# arrow pointing to the origin of the axes
ax_arrow = ax.annotate('ax = fig.axes[0]',
xy=(0, -1), # tip of the arrow
... | notebooks/03-Plotting.ipynb | aboucaud/python-euclid2016 | bsd-3-clause |
Add a labels to the x and y axes | # add some ascii text label
# this is equivalent to:
# ax.set_xlabel('x')
xax.set_label_text('x')
# add latex rendered text to the y axis
ax.set_ylabel('$sin(x)$', size=20, color='g', rotation=0) | notebooks/03-Plotting.ipynb | aboucaud/python-euclid2016 | bsd-3-clause |
Finally dump the figure to a png file | fig.savefig('myplot.png')
!ls
!eog myplot.png | notebooks/03-Plotting.ipynb | aboucaud/python-euclid2016 | bsd-3-clause |
Let's define a function that creates an empty base plot to which we will add
stuff for each demonstration. The function returns the figure and the axes object. | from matplotlib import pyplot as plt
import numpy as np
def create_base_plot():
fig, ax = plt.subplots()
ax.set_title('sample figure')
return fig, ax
def plot_something():
fig, ax = create_base_plot()
x = np.linspace(0, 2*np.pi)
ax.semilogx(x, np.cos(x)*np.cos(x/2), 'r--.')
plt.show() | notebooks/03-Plotting.ipynb | aboucaud/python-euclid2016 | bsd-3-clause |
Log plots | fig, ax = create_base_plot()
# normal-xlog plots
ax.semilogx(x, np.cos(x)*np.cos(x/2), 'r--.')
# clear the plot and plot a function using the y axis in log scale
ax.clear()
ax.semilogy(x, np.exp(x))
# you can (un)set it, whenever you want
#ax.set_yscale('linear') # change they y axis to linear scale
#ax.set_yscale... | notebooks/03-Plotting.ipynb | aboucaud/python-euclid2016 | bsd-3-clause |
This is equivalent to:
ax.plot(x, np.exp(x)*np.sin(x))
plt.setp(ax, 'yscale', 'log', 'xscale', 'log')
Here we have introduced a new method of setting property values via pyplot.setp.
setp takes a matplotlib object as its first argument. Each pair of positional arguments
after that is treated as a key-value pair for the set... | plt.setp(ax, 'xscale', 'linear', 'xlim', [1, 5], 'ylim', [0.1, 10], 'xlabel', 'x',
'ylabel', 'y', 'title', 'foo',
'xticks', [1, 2, 3, 4, 5],
'yticks', [0.1, 1, 10],
'yticklabels', ['low', 'medium', 'high']) | notebooks/03-Plotting.ipynb | aboucaud/python-euclid2016 | bsd-3-clause |
Histograms | fig1, ax = create_base_plot()
n, bins, patches = ax.hist(np.random.normal(0, 0.1, 10000), bins=50) | notebooks/03-Plotting.ipynb | aboucaud/python-euclid2016 | bsd-3-clause |
Subplots
Making subplots is relatively easy. Just pass the shape of the grid of plots to plt.subplots() that was used in the above examples. | # Create one figure with two plots/axes, with their xaxis shared
fig, (ax1, ax2) = plt.subplots(2, sharex=True)
ax1.plot(x, np.sin(x), '-.', color='r', label='first line')
other = ax2.plot(x, np.cos(x)*np.cos(x/2), 'o-', linewidth=3, label='other')
ax1.legend()
ax2.legend()
# adjust the spacing between the axes
fig.su... | notebooks/03-Plotting.ipynb | aboucaud/python-euclid2016 | bsd-3-clause |
create a 3x3 grid of plots | fig, axs = plt.subplots(3, 3)
print(axs.shape)
# add an index to all the subplots
for ax_index, ax in enumerate(axs.flatten()):
ax.set_title(ax_index)
# remove all ticks
for ax in axs.flatten():
plt.setp(ax, 'xticks', [], 'yticks', [])
fig.subplots_adjust(hspace=0, wspace=0)
# plot a curve in the diagonal ... | notebooks/03-Plotting.ipynb | aboucaud/python-euclid2016 | bsd-3-clause |
Images and contours | xx, yy = np.mgrid[-2:2:100j, -2:2:100j]
img = np.sin(xx) + np.cos(yy)
fig, ax = create_base_plot()
# to have 0,0 in the lower left corner and no interpolation
img_plot = ax.imshow(img, origin='lower', interpolation='None')
# to add a grid to any axis
ax.grid()
img_plot.set_cmap('hot')... | notebooks/03-Plotting.ipynb | aboucaud/python-euclid2016 | bsd-3-clause |
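The np.mgrid indexing trick above deserves a note: a complex "step" like 100j means *number of points* (endpoints included), whereas a real step would mean spacing. A small sketch checking that behaviour:

```python
import numpy as np

# -2:2:100j yields 100 evenly spaced samples from -2 to 2, endpoints included.
xx, yy = np.mgrid[-2:2:100j, -2:2:100j]

print(xx.shape)              # (100, 100)
print(xx[0, 0], xx[-1, 0])   # -2.0 2.0
```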
Animation | from IPython.display import HTML
import matplotlib.animation as animation
def f(x, y):
return np.sin(x) + np.cos(y)
fig, ax = create_base_plot()
im = ax.imshow(f(xx, yy), cmap=plt.get_cmap('viridis'))
def updatefig(*args):
global xx, yy
xx += np.pi / 15.
yy += np.pi / 20.
im.set_array(f(xx, yy))
... | notebooks/03-Plotting.ipynb | aboucaud/python-euclid2016 | bsd-3-clause |
Styles
Configuring matplotlib
Most of the matplotlib code that gets written is usually about styling rather than actual plotting.
One feature that can be of great help in this case is the matplotlib.style module.
In this notebook, we will go through the available matplotlib styles and their corre... | print('\n'.join(plt.style.available))
x = np.arange(0, 10, 0.01)
def f(x, t):
return np.sin(x) * np.exp(1 - x / 10 + t / 2)
def simple_plot(style):
plt.figure()
with plt.style.context(style, after_reset=True):
for t in range(5):
plt.plot(x, f(x, t))
plt.title('Simple plot')
s... | notebooks/03-Plotting.ipynb | aboucaud/python-euclid2016 | bsd-3-clause |
Content of the style files
A matplotlib style file is a simple text file containing the desired matplotlib rcParam configuration, with the .mplstyle extension.
Let's display the content of the 'ggplot' style. | import os
ggplotfile = os.path.join(plt.style.core.BASE_LIBRARY_PATH, 'ggplot.mplstyle')
!cat $ggplotfile | notebooks/03-Plotting.ipynb | aboucaud/python-euclid2016 | bsd-3-clause |
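Rather than reading the file off disk, the parsed style sheets are also exposed programmatically through plt.style.library, a mapping from style name to its rcParams overrides:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend
import matplotlib.pyplot as plt

# plt.style.library maps each available style name to its parsed settings.
ggplot_params = plt.style.library['ggplot']

print('axes.grid' in ggplot_params)   # True
```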
Maybe the most interesting feature of this style file is the redefinition of the color cycle using hexadecimal notation. This allows users to define their own color palette for multi-line plots.
use versus context
There are two ways of using the matplotlib styles.
plt.style.use(style)
plt.style.context(style):
Th... | plt.style.use('ggplot')
plt.figure()
plt.plot(x, f(x, 0)) | notebooks/03-Plotting.ipynb | aboucaud/python-euclid2016 | bsd-3-clause |
The 'ggplot' style has been applied to the current session. One of its features that differs from standard matplotlib configuration is to put the ticks outside the main figure (axes.axisbelow: True) | with plt.style.context('dark_background'):
plt.figure()
plt.plot(x, f(x, 1))
| notebooks/03-Plotting.ipynb | aboucaud/python-euclid2016 | bsd-3-clause |
Now using the 'dark_background' style as a context, we can spot the main changes (background, line color, axis color) and we can also see the outside ticks, although they are not part of this particular style. This is the 'ggplot' axes.axisbelow setup that has not been overwritten by the new style.
Once the with block ... | plt.figure()
plt.plot(x, f(x, 2)) | notebooks/03-Plotting.ipynb | aboucaud/python-euclid2016 | bsd-3-clause |
Custom style file
Starting from these configured files, it is easy to now create our own styles for textbook figures and talk figures and switch from one to another in a single code line plt.style.use('mystyle') at the beginning of the plotting script.
Where to create it?
matplotlib will look for the user style files ... | print(plt.style.core.USER_LIBRARY_PATHS) | notebooks/03-Plotting.ipynb | aboucaud/python-euclid2016 | bsd-3-clause |
Note: The directory corresponding to this path will most probably not exist so one will need to create it. | styledir = plt.style.core.USER_LIBRARY_PATHS[0]
!mkdir -p $styledir | notebooks/03-Plotting.ipynb | aboucaud/python-euclid2016 | bsd-3-clause |
One can now copy an existing style file to serve as a template. | mystylefile = os.path.join(styledir, 'mystyle.mplstyle')
!cp $ggplotfile $mystylefile
%cd $styledir  # use the %cd magic: !cd runs in a subshell and does not change the notebook's working directory
%%file mystyle.mplstyle
font.size: 16.0 # large font
axes.linewidth: 2
axes.grid: True
axes.titlesize: x-large
axes.labelsize: x-large
axes.labelcolor: 555555
axes.axisbelow: True
xtick.color: 555555
xtic... | notebooks/03-Plotting.ipynb | aboucaud/python-euclid2016 | bsd-3-clause |
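Besides installing the file in the user library path, plt.style.use also accepts a path to an .mplstyle file directly, which is handy for trying a style out before installing it. A self-contained sketch using a temporary file:

```python
import os
import tempfile
import matplotlib
matplotlib.use('Agg')  # headless backend
import matplotlib.pyplot as plt

# A minimal style sheet written to a temporary file and applied by path.
style = "axes.grid: True\nfont.size: 16.0\n"
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, 'mystyle.mplstyle')
    with open(path, 'w') as fh:
        fh.write(style)
    plt.style.use(path)  # a path works just like a registered style name

print(plt.rcParams['axes.grid'])  # True
print(plt.rcParams['font.size'])  # 16.0
```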
D3 | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import mpld3
mpld3.enable_notebook()
# Scatter points
fig, ax = plt.subplots(subplot_kw=dict(facecolor='#EEEEEE'))  # 'axisbg' was removed in newer matplotlib; use 'facecolor'
ax.grid(color='white', linestyle='solid')
N = 50
scatter = ax.scatter(np.random.normal(size=N),
np.random.normal(si... | notebooks/03-Plotting.ipynb | aboucaud/python-euclid2016 | bsd-3-clause |
Seaborn | %matplotlib
plot_something()
import seaborn
plot_something() | notebooks/03-Plotting.ipynb | aboucaud/python-euclid2016 | bsd-3-clause |
Packet Forwarding
This category of questions allows you to query how different types of
traffic are forwarded by the network and whether endpoints are able to
communicate. You can analyze these aspects in a few different ways.
Traceroute
Bi-directional Traceroute
Reachability
Bi-directional Reachability
Loop detection
Multi... | bf.set_network('generate_questions')
bf.set_snapshot('generate_questions') | docs/source/notebooks/forwarding.ipynb | batfish/pybatfish | apache-2.0 |
Traceroute
Traces the path(s) for the specified flow.
Performs a virtual traceroute in the network from a starting node. A destination IP and ingress (source) node must be specified. Other IP headers are given default values if unspecified.
Unlike a real traceroute, this traceroute is directional. That is, for it to su... | result = bf.q.traceroute(startLocation='@enter(as2border1[GigabitEthernet2/0])', headers=HeaderConstraints(dstIps='2.34.201.10', srcIps='8.8.8.8')).answer().frame() | docs/source/notebooks/forwarding.ipynb | batfish/pybatfish | apache-2.0 |
Return Value
Name | Description | Type
--- | --- | ---
Flow | The flow | Flow
Traces | The traces for this flow | Set of Trace
TraceCount | The total number of traces for this flow | int
Retrieving the flow definition | result.Flow | docs/source/notebooks/forwarding.ipynb | batfish/pybatfish | apache-2.0 |
Retrieving the detailed Trace information | len(result.Traces)
result.Traces[0] | docs/source/notebooks/forwarding.ipynb | batfish/pybatfish | apache-2.0 |
Evaluating the first Trace | result.Traces[0][0] | docs/source/notebooks/forwarding.ipynb | batfish/pybatfish | apache-2.0 |
Retrieving the disposition of the first Trace | result.Traces[0][0].disposition | docs/source/notebooks/forwarding.ipynb | batfish/pybatfish | apache-2.0 |
Retrieving the first hop of the first Trace | result.Traces[0][0][0] | docs/source/notebooks/forwarding.ipynb | batfish/pybatfish | apache-2.0 |
Retrieving the last hop of the first Trace | result.Traces[0][0][-1]
bf.set_network('generate_questions')
bf.set_snapshot('generate_questions') | docs/source/notebooks/forwarding.ipynb | batfish/pybatfish | apache-2.0 |
Bi-directional Traceroute
Traces the path(s) for the specified flow, along with path(s) for reverse flows.
This question performs a virtual traceroute in the network from a starting node. A destination IP and ingress (source) node must be specified. Other IP headers are given default values if unspecified.
If the trace... | result = bf.q.bidirectionalTraceroute(startLocation='@enter(as2border1[GigabitEthernet2/0])', headers=HeaderConstraints(dstIps='2.34.201.10', srcIps='8.8.8.8')).answer().frame() | docs/source/notebooks/forwarding.ipynb | batfish/pybatfish | apache-2.0 |