Note that the OrbitPlot function chooses reasonable limits for the axes for you. There are various ways to customize the plot. Have a look at the arguments used in the following examples, which are pretty much self-explanatory (if in doubt, check the documentation!).
fig = rebound.OrbitPlot(sim, unitlabel="[AU]", color=True, trails=True, periastron=True) fig = rebound.OrbitPlot(sim, unitlabel="[AU]", periastron=True, lw=2)
ipython_examples/OrbitPlot.ipynb
dchandan/rebound
gpl-3.0
<a id='import_data'></a> Import the data The data consist of a large matrix with r rows and c columns. Rows are labeled with two pieces of information: 1) which disease the row belongs to, and 2) which GO term the row belongs to. The values in each row represent the similarity of the focal (row) datapoint to other da...
# load the dataframe using pandas cluster_focal_df = pd.read_csv('cluster_diff_test_nodes_5d.csv', sep='\t', index_col='index') # drop this column because we don't need it cluster_focal_df = cluster_focal_df.drop('focal_mean', axis=1) # add a column that is the mean of values in each row, and...
notebooks/networkAnalysis/specificity_visualization_high_dimensional_data/Visualizing and scoring labeled high dimensional data.ipynb
ucsd-ccbb/jupyter-genomics
mit
TOC <a id='plot_heatmap'></a> Plot the raw data as a heatmap
# plot the heatmap plt.figure(figsize=(15,15)) plt.matshow(cluster_focal_df,fignum=False,cmap='jet',vmin=0,vmax=1,aspect='auto') #plt.yticks(range(len(cluster_focal_df)),list(cluster_focal_df.index),fontsize=8) plt.xticks(range(len(cluster_focal_df.columns)),list(cluster_focal_df.columns),rotation=90,fontsize=10) plt.g...
notebooks/networkAnalysis/specificity_visualization_high_dimensional_data/Visualizing and scoring labeled high dimensional data.ipynb
ucsd-ccbb/jupyter-genomics
mit
TOC <a id='parse_rlabels'></a> Parse the row labels Here we include two functions that will be useful for parsing row labels from DF indices, and mapping these labels to colors NOTE These functions are specific to the example dataset used here
def build_row_colors(nodes_df,cmap = matplotlib.cm.nipy_spectral,find_col_colors = True): ''' Simple helper function for plotting to return row_colors and col_colors for sns.clustermap. - disease names will be extracted from df indices and columns and used for plotting - cmap defines the desired colorm...
notebooks/networkAnalysis/specificity_visualization_high_dimensional_data/Visualizing and scoring labeled high dimensional data.ipynb
ucsd-ccbb/jupyter-genomics
mit
TOC <a id='dim_reduce'></a> Reduce to two dimensions Methods (scikit-learn implementations used here): - t-SNE: Van der Maaten, Laurens, and Geoffrey Hinton. "Visualizing data using t-SNE." Journal of Machine Learning Research 9.2579-2605 (2008): 85. <img src="screenshots/sklearn_tsne.png" width="600" height="600"> ...
from sklearn.manifold import TSNE from sklearn.decomposition import PCA from sklearn.decomposition import NMF from sklearn.manifold import Isomap # select which dimensionality reduction technique you want here dim_reduct_method = 'TSNE' tsne = TSNE(n_components=2) pca = PCA(n_components=2) isomap = Isomap(n_neighbors...
notebooks/networkAnalysis/specificity_visualization_high_dimensional_data/Visualizing and scoring labeled high dimensional data.ipynb
ucsd-ccbb/jupyter-genomics
mit
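The cell above selects among scikit-learn's TSNE, PCA, NMF, and Isomap implementations. To make the PCA branch concrete, here is a minimal NumPy-only sketch of what a 2-component PCA does (center, SVD, project); it is an illustration of the technique, not the notebook's actual code path.

```python
import numpy as np

def pca_2d(X):
    """Project rows of X onto the top-2 principal components via SVD."""
    Xc = X - X.mean(axis=0)                # center each column
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T                   # scores in the first two PCs

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))              # stand-in for the similarity matrix
Z = pca_2d(X)
print(Z.shape)  # (100, 2)
```

Because SVD returns singular values in descending order, the first output column always carries at least as much variance as the second.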
TOC <a id='plot_transformed'></a> Plot the data in transformed coordinates Left panel: transformed coordinates color-coded by GO term. Looks like there is some grouping happening, where some points labeled by the same GO term appear to be clustered together. Right panel: transformed coordinates color-coded by disea...
plt.figure(figsize=(20,10)) plt.subplot(1,2,1) plt.plot(cluster_transf[:,0],cluster_transf[:,1],'o',color='gray',markersize=4) for i in range(len(idx_to_label_reduced)): reduced_labels = pd.Series(reduced_labels) label_temp = idx_to_label_reduced[i] idx_focal = list(reduced_labels[reduced_labels==labe...
notebooks/networkAnalysis/specificity_visualization_high_dimensional_data/Visualizing and scoring labeled high dimensional data.ipynb
ucsd-ccbb/jupyter-genomics
mit
TOC <a id='scoring_method'></a> Scoring method (Specificity) Our scoring method measures a weighted distance ($S$) between all pairs of points in the dataset, where the weights are determined by the labels. If two nearby points have the same label, they will be rewarded; if they have different labels, they will be ...
def weighted_score(x,y,labels1,labels2,dtype='log_inv'): ''' This function calculates the weighted scores of points in x,y, defined by labels1 and labels2. - Points are scored more highly if they are close to other points with the same label, and are penalized if they are close to points with different...
notebooks/networkAnalysis/specificity_visualization_high_dimensional_data/Visualizing and scoring labeled high dimensional data.ipynb
ucsd-ccbb/jupyter-genomics
mit
TOC <a id='plot_specificity'></a> Plot the average specificities per GO term and per disease name Plot points as label names Left panel: GO term plotted in specificity coordinates. Points are color-coded by the disease which contains the most counts of that term. Points are larger if the GO term has more occurre...
fig = plt.figure(figsize=(15,15)) axes = fig.add_subplot(1,1,1) subpos = [0.7,0.7,0.25,0.25] for GOname in list(sGO_GB_mean.index): msize = np.log(clusters_per_GOterm[GOname])*3*15 # set the marker size # get the text color D_freq_norm = GO_GB_D[GOname]# /clusters_per_disease # normalize by number of c...
notebooks/networkAnalysis/specificity_visualization_high_dimensional_data/Visualizing and scoring labeled high dimensional data.ipynb
ucsd-ccbb/jupyter-genomics
mit
1.8.2. Built-in Collection Data Types 1. Lists Lists are heterogeneous, meaning that the data objects need not all be from the same class, and the collection can be assigned to a variable as below. | Operation Name | Operator | Explanation | | --- | --- | --- | | indexing | [ ] | Access an element of a sequence | | co...
fakeList = ['str', 12, True, 1.232] # heterogeneous print(fakeList) myList = [1,2,3,4] A = [myList] * 3 print(A) myList[2]=45454545 print(A)
algorithms/python_revision.ipynb
AnasFullStack/Awesome-Full-Stack-Web-Developer
mit
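The `A = [myList] * 3` line above is a classic aliasing gotcha: repetition copies references, so all three entries point at the same list object. A small sketch of the behavior, plus one way to get independent copies:

```python
myList = [1, 2, 3, 4]
A = [myList] * 3                        # three references to the SAME list
myList[2] = 45454545
print(A)                                # the change shows up in all three slots

B = [list(myList) for _ in range(3)]    # independent copies, one per slot
B[0][2] = 0
print(B[0][2], B[1][2])                 # 0 45454545
```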
| Method Name | Use | Explanation | | --- | --- | --- | | append | alist.append(item) | Adds a new item to the end of a list | | insert | alist.insert(i,item) | Inserts an item at the ith position in a list | | pop | alist.pop() | Removes and returns the last item in a list | | pop | alist.pop(i) | Removes and returns ...
myList = [1024, 3, True, 6.5] myList.append(False) print(myList) myList.insert(2,4.5) print(myList) print(myList.pop()) print(myList) print(myList.pop(1)) print(myList) myList.pop(2) print(myList) myList.sort() print(myList) myList.reverse() print(myList) print(myList.count(6.5)) print(myList.index(4.5)) myList.remove(...
algorithms/python_revision.ipynb
AnasFullStack/Awesome-Full-Stack-Web-Developer
mit
2. Strings | Method Name | Use | Explanation | | --- | --- | --- | | center | astring.center(w) | Returns a string centered in a field of size w | | count | astring.count(item) | Returns the number of occurrences of item in the string | | ljust | astring.ljust(w) | Returns a string left-justified in a field of size w |...
myName= "David" print(myName[3]) print(myName * 2) print(len(myName)) print(myName.upper()) print('.' + myName.center(10) + '.') print('.' + myName.ljust(10) + '.') print('.' + myName.rjust(10) + '.') print(myName.find('v')) print(myName.split('v'))
algorithms/python_revision.ipynb
AnasFullStack/Awesome-Full-Stack-Web-Developer
mit
A major difference between lists and strings is that lists can be modified while strings cannot. This is referred to as mutability. Lists are mutable; strings are immutable. For example, you can change an item in a list by using indexing and assignment. With a string that change is not allowed. 3. Tuples Tuples are v...
myTuple = (2,True,4.96) print(myTuple) print(len(myTuple))
algorithms/python_revision.ipynb
AnasFullStack/Awesome-Full-Stack-Web-Developer
mit
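To make the mutability contrast above concrete: a list accepts item assignment in place, while the equivalent change to a string must build a new string. A minimal sketch:

```python
lst = ["D", "a", "v", "i", "d"]
lst[0] = "X"                 # lists are mutable: item assignment works
print("".join(lst))          # Xavid

s = "David"
t = "X" + s[1:]              # strings are immutable: build a new one instead
print(t)                     # Xavid
```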
However, if you try to change an item in a tuple, you will get an error. Note that the error message provides the location and reason for the problem.
myTuple[1]=False
algorithms/python_revision.ipynb
AnasFullStack/Awesome-Full-Stack-Web-Developer
mit
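A sketch of the error described above, together with the idiomatic workaround of constructing a new tuple rather than mutating:

```python
myTuple = (2, True, 4.96)
try:
    myTuple[1] = False       # tuples are immutable: item assignment raises
except TypeError as e:
    print("TypeError:", e)

# If a changed value is needed, build a new tuple from slices instead
newTuple = myTuple[:1] + (False,) + myTuple[2:]
print(newTuple)              # (2, False, 4.96)
```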
4. Set A set is an unordered collection of zero or more immutable Python data objects. Sets do not allow duplicates and are written as comma-delimited values enclosed in curly braces. The empty set is represented by set(). Sets are heterogeneous, and the collection can be assigned to a variable as below.
print({3,6,"cat",4.5,False}) mySet = {3,6,"cat",4.5,False} print(mySet)
algorithms/python_revision.ipynb
AnasFullStack/Awesome-Full-Stack-Web-Developer
mit
| Operation Name | Operator | Explanation | | --- | --- | --- | | membership | in | Set membership | | length | len | Returns the cardinality of the set | | &#124; | aset &#124; otherset | Returns a new set with all elements from both sets | | &amp; | aset &amp; otherset | Returns a new set with only those elements com...
mySet = {3,6,"cat",4.5,False} print(mySet) yourSet = {99,3,100} print(yourSet) print( mySet.union(yourSet)) print( mySet | yourSet) print( mySet.intersection(yourSet)) print( mySet & yourSet) print( mySet.difference(yourSet)) print( mySet - yourSet) print( {3,100}.issubset(yourSet)) print( {3,100}<=yourSet) mySet....
algorithms/python_revision.ipynb
AnasFullStack/Awesome-Full-Stack-Web-Developer
mit
5. Dictionary Dictionaries are collections of associated pairs of items where each pair consists of a key and a value. This key-value pair is typically written as key:value. Dictionaries are written as comma-delimited key:value pairs enclosed in curly braces. For example,
capitals = {'Iowa':'DesMoines','Wisconsin':'Madison'} print(capitals) print(capitals['Iowa']) capitals['Utah']='SaltLakeCity' print(capitals) capitals['California']='Sacramento' print(len(capitals)) for k in capitals: print(capitals[k]," is the capital of ", k)
algorithms/python_revision.ipynb
AnasFullStack/Awesome-Full-Stack-Web-Developer
mit
| Operator | Use | Explanation | | --- | --- | --- | | [] | myDict[k] | Returns the value associated with k, otherwise its an error | | in | key in adict | Returns True if key is in the dictionary, False otherwise | | del | del adict[key] | Removes the entry from the dictionary | | Method Name | Use | Explanation | | -...
phoneext={'david':1410,'brad':1137} print(phoneext) print(phoneext.keys()) print(list(phoneext.keys())) print(phoneext.values()) print(list(phoneext.values())) print(phoneext.items()) print(list(phoneext.items())) print(phoneext.get("kent")) print(phoneext.get("kent","NO ENTRY"))
algorithms/python_revision.ipynb
AnasFullStack/Awesome-Full-Stack-Web-Developer
mit
1.9. Input and Output
aName = input("Please enter your name ") print("Your name in all capitals is",aName.upper(), "and has length", len(aName)) sradius = input("Please enter the radius of the circle ") radius = float(sradius) diameter = 2 * radius print(diameter)
algorithms/python_revision.ipynb
AnasFullStack/Awesome-Full-Stack-Web-Developer
mit
1.9.1. String Formatting
print("Hello","World") print("Hello","World", sep="***") print("Hello","World", end="***") aName = "Anas" age = 10 print(aName, "is", age, "years old.") print("%s is %d years old." % (aName, age)) # The % operator is a string operator called the format operator.
algorithms/python_revision.ipynb
AnasFullStack/Awesome-Full-Stack-Web-Developer
mit
| Character | Output Format | | --- | --- | | d, i | Integer | | u | Unsigned integer | | f | Floating point as m.ddddd | | e | Floating point as m.ddddde+/-xx | | E | Floating point as m.dddddE+/-xx | | g | Use %e for exponents less than <span class="math"><span class="MathJax_Preview" style="color: inherit; display: ...
price = 24 item = "banana" print("The %s costs %d cents" % (item, price)) print("The %+10s costs %5.2f cents" % (item, price)) print("The %+10s costs %10.2f cents" % (item, price)) print("The %+10s costs %010.2f cents" % (item, price)) itemdict = {"item":"banana","cost":24} print("The %(item)s costs %(cost)7.1f cents" ...
algorithms/python_revision.ipynb
AnasFullStack/Awesome-Full-Stack-Web-Developer
mit
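The format-operator examples above have direct modern equivalents in str.format and f-strings; a small sketch (the conversions shown are standard Python, not taken from the original notebook):

```python
price = 24
item = "banana"

old = "The %s costs %d cents" % (item, price)      # format operator
new = "The {} costs {} cents".format(item, price)  # str.format
fst = f"The {item} costs {price} cents"            # f-string
print(old)

# Width/precision conversion specs carry over: %10.2f  ->  {:10.2f}
print("%10.2f" % price)
print(f"{price:10.2f}")
```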
1.10. Control Structures Algorithms require two important control structures: iteration and selection. - Iteration 1. While
counter = 1 while counter <= 5: print("Hello, world") counter = counter + 1
algorithms/python_revision.ipynb
AnasFullStack/Awesome-Full-Stack-Web-Developer
mit
2. For
for item in [1,3,6,2,5]: print(item) for item in range(5): print(item**2) wordlist = ['cat','dog','rabbit'] letterlist = [ ] for aword in wordlist: for aletter in aword: if(aletter not in letterlist): letterlist.append(aletter) print(letterlist)
algorithms/python_revision.ipynb
AnasFullStack/Awesome-Full-Stack-Web-Developer
mit
list comprehension
sqlist=[] for x in range(1,11): sqlist.append(x*x) print(sqlist) sqlist2=[x*x for x in range(1,11)] # list comprehension print(sqlist2) sqlist=[x*x for x in range(1,11) if x%2 != 0] print(sqlist) [ch.upper() for ch in 'comprehension' if ch not in 'aeiou'] wordlist = ['cat','dog','rabbit'] uniqueLe...
algorithms/python_revision.ipynb
AnasFullStack/Awesome-Full-Stack-Web-Developer
mit
1.12. Defining Functions Problem: here’s a self-check that covers everything so far. You may have heard of the infinite monkey theorem. The theorem states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will almost surely type a given text, such as the complete works o...
import string import random import time start_time = time.time() def generate_new_sentense(): sentense = [random.choice(string.ascii_lowercase + " ") for x in range(28) ] return "".join(sentense) def compare_sentences(guess): target_sentence = "methinks it is like a weasel" return guess == target_sen...
algorithms/python_revision.ipynb
AnasFullStack/Awesome-Full-Stack-Web-Developer
mit
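The code above is truncated; a common way to finish this self-check (a sketch, not the author's exact code) is to score each random guess by the fraction of matching characters rather than requiring an exact match, and track the best guess so far:

```python
import random
import string

TARGET = "methinks it is like a weasel"

def generate_sentence(rng):
    """One random guess over lowercase letters plus space."""
    alphabet = string.ascii_lowercase + " "
    return "".join(rng.choice(alphabet) for _ in range(len(TARGET)))

def score(guess):
    """Fraction of positions that match the target sentence."""
    return sum(g == t for g, t in zip(guess, TARGET)) / len(TARGET)

rng = random.Random(42)          # seeded for repeatability
best, best_score = "", 0.0
for _ in range(1000):
    guess = generate_sentence(rng)
    s = score(guess)
    if s > best_score:
        best, best_score = guess, s
print(best_score)                # best fraction matched after 1000 tries
```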
1.13. Object-Oriented Programming in Python: Defining Classes 1.13.1. A Fraction Class
class Fraction: def __init__(self, top, bottom): self.num = top self.den = bottom def show(self): print(self.num,"/",self.den) # Overriding the default __str__ function def __str__(self): return str(self.num)+"/"+str(self.den) def __add__(self,otherfracti...
algorithms/python_revision.ipynb
AnasFullStack/Awesome-Full-Stack-Web-Developer
mit
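The Fraction class above is truncated at `__add__`. A minimal self-contained completion is sketched below; it uses math.gcd to reduce results to lowest terms, which the original may instead do with a hand-written gcd helper:

```python
from math import gcd

class Fraction:
    def __init__(self, top, bottom):
        self.num = top
        self.den = bottom

    def __str__(self):
        # Overriding the default __str__ so print() shows "num/den"
        return str(self.num) + "/" + str(self.den)

    def __add__(self, other):
        # a/b + c/d = (a*d + c*b) / (b*d), then reduce by the gcd
        new_num = self.num * other.den + other.num * self.den
        new_den = self.den * other.den
        common = gcd(new_num, new_den)
        return Fraction(new_num // common, new_den // common)

    def __eq__(self, other):
        # Cross-multiply to compare without reducing first
        return self.num * other.den == other.num * self.den

f = Fraction(1, 4) + Fraction(1, 2)
print(f)  # 3/4
```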
With our materials created, we'll now define key dimensions in our model. These dimensions are taken from the example in section 11.1.3 of the Serpent manual.
# Outer radius of fuel and clad r_fuel = 0.6122 r_clad = 0.6540 # Pressure tube and calendria radii pressure_tube_ir = 5.16890 pressure_tube_or = 5.60320 calendria_ir = 6.44780 calendria_or = 6.58750 # Radius to center of each ring of fuel pins ring_radii = np.array([0.0, 1.4885, 2.8755, 4.3305])
examples/jupyter/candu.ipynb
wbinventor/openmc
mit
To begin creating the bundle, we'll first create annular regions completely filled with heavy water and add in the fuel pins later. The radii that we've specified above correspond to the center of each ring. We actually need to create cylindrical surfaces at radii that are half-way between the centers.
# These are the surfaces that will divide each of the rings radial_surf = [openmc.ZCylinder(R=r) for r in (ring_radii[:-1] + ring_radii[1:])/2] water_cells = [] for i in range(ring_radii.size): # Create annular region if i == 0: water_region = -radial_surf[i] elif i == ring_radii.siz...
examples/jupyter/candu.ipynb
wbinventor/openmc
mit
Now we need to create a universe that contains a fuel pin. Note that we don't actually need to put water outside of the cladding in this universe because it will be truncated by a higher universe.
surf_fuel = openmc.ZCylinder(R=r_fuel) fuel_cell = openmc.Cell(fill=fuel, region=-surf_fuel) clad_cell = openmc.Cell(fill=clad, region=+surf_fuel) pin_universe = openmc.Universe(cells=(fuel_cell, clad_cell)) pin_universe.plot(**plot_args)
examples/jupyter/candu.ipynb
wbinventor/openmc
mit
The code below works through each ring to create a cell containing the fuel pin universe. As each fuel pin is created, we modify the region of the water cell to include everything outside the fuel pin.
num_pins = [1, 6, 12, 18] angles = [0, 0, 15, 0] for i, (r, n, a) in enumerate(zip(ring_radii, num_pins, angles)): for j in range(n): # Determine location of center of pin theta = (a + j/n*360.) * pi/180. x = r*cos(theta) y = r*sin(theta) pin_boundary = openmc.ZCyli...
examples/jupyter/candu.ipynb
wbinventor/openmc
mit
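The pin-placement arithmetic above (theta from the ring's angular offset and pin count, then x = r·cos θ, y = r·sin θ) can be checked in isolation. A small sketch using the second ring's values from the earlier cell (radius 1.4885, 6 pins, 0° offset); the helper name is illustrative:

```python
from math import cos, sin, pi

def pin_centers(r, n, a):
    """(x, y) centers of n pins on a ring of radius r, offset by a degrees."""
    centers = []
    for j in range(n):
        theta = (a + j / n * 360.0) * pi / 180.0
        centers.append((r * cos(theta), r * sin(theta)))
    return centers

# Second ring from the example: 6 pins at radius 1.4885, no angular offset
ring = pin_centers(1.4885, 6, 0)
print(ring[0])  # first pin sits on the +x axis: (1.4885, 0.0)
```

Equally spaced pins are symmetric about the origin, so their coordinates sum to (0, 0) up to floating-point error.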
Looking pretty good! Finally, we create cells for the pressure tube and calendria and then put our bundle in the middle of the pressure tube.
pt_inner = openmc.ZCylinder(R=pressure_tube_ir) pt_outer = openmc.ZCylinder(R=pressure_tube_or) calendria_inner = openmc.ZCylinder(R=calendria_ir) calendria_outer = openmc.ZCylinder(R=calendria_or, boundary_type='vacuum') bundle = openmc.Cell(fill=bundle_universe, region=-pt_inner) pressure_tube = openmc.Cell(fill=cla...
examples/jupyter/candu.ipynb
wbinventor/openmc
mit
Let's look at the final product. We'll export our geometry and materials and then use plot_inline() to get a nice-looking plot.
geom = openmc.Geometry(root_universe) geom.export_to_xml() mats = openmc.Materials(geom.get_all_materials().values()) mats.export_to_xml() p = openmc.Plot.from_geometry(geom) p.color_by = 'material' p.colors = { fuel: 'black', clad: 'silver', heavy_water: 'blue' } openmc.plot_inline(p)
examples/jupyter/candu.ipynb
wbinventor/openmc
mit
1 - Generate materialized views Before generating the aline cohort, we require the following materialized views to be already generated: angus (from angus.sql), heightweight (from HeightWeightQuery.sql), and aline_vaso_flag (from aline_vaso_flag.sql). You can generate these by executing the codeblock below. If you haven...
# Load in the query from file query='DROP TABLE IF EXISTS DATABASE.angus_sepsis;' cursor.execute(query.replace("DATABASE", gluedatabase)) f = os.path.join(concepts_path,'sepsis/angus-awsathena.sql') with open(f) as fp: query = ''.join(fp.readlines()) # Execute the query print('Generating table \'angus_sepsis\'...
mimic-iii/notebooks/aline-aws/aline-awsathena.ipynb
MIT-LCP/mimic-code
mit
Now we generate the aline_cohort table using the aline_cohort.sql file. Afterwards, we can generate the remaining 6 materialized views in any order, as they all depend on only aline_cohort and raw MIMIC-III data.
# Load in the query from file query='DROP TABLE IF EXISTS DATABASE.aline_cohort_all;' cursor.execute(query.replace("DATABASE", gluedatabase)) f = os.path.join(aline_path,'aline_cohort-awsathena.sql') with open(f) as fp: query = ''.join(fp.readlines()) # Execute the query print('Generating table \'aline_cohort_...
mimic-iii/notebooks/aline-aws/aline-awsathena.ipynb
MIT-LCP/mimic-code
mit
The following codeblock loads in the SQL from each file in the aline subfolder and executes the query to generate the materialized view. We specifically exclude the aline_cohort.sql file as we have already executed it above. Again, the order of query execution does not matter for these queries. Note also that the filen...
# get a list of all files in the subfolder aline_queries = [f for f in os.listdir(aline_path) # only keep the filename if it is actually a file (and not a directory) if os.path.isfile(os.path.join(aline_path,f)) # and only keep the filename if it is an SQL file ...
mimic-iii/notebooks/aline-aws/aline-awsathena.ipynb
MIT-LCP/mimic-code
mit
Summarize the cohort exclusions before we pull all the data together. 2 - Extract all covariates and outcome measures We now aggregate all the data from the various views into a single dataframe.
# Load in the query from file query = """ --FINAL QUERY select co.subject_id, co.hadm_id, co.icustay_id -- static variables from patient tracking tables , co.age , co.gender -- , co.gender_num -- gender, 0=F, 1=M , co.intime as icustay_intime , co.day_icu_intime -- day of week, text --, co.day_icu_inti...
mimic-iii/notebooks/aline-aws/aline-awsathena.ipynb
MIT-LCP/mimic-code
mit
Networks We give two sets of networks. One of them allows for all parameters. The other is identical except that it only uses essential parameters.
network_strings = [ ["SWI4 : (NDD1)(~YOX1)", "HCM1 : SWI4", "NDD1 : HCM1", "YOX1 : SWI4"], ["SWI4 : (NDD1)(~YOX1)", "HCM1 : SWI4", "NDD1 : HCM1", "YOX1 : (SWI4)(HCM1)"], ["SWI4 : (NDD1)(~YOX1)", "HCM1 : SWI4", "NDD1 : HCM1", "YOX1 : (SWI4)(~HCM1)"], ["SWI4 : (NDD1)(~YOX1)", "HCM1 : SWI4", "NDD1 : HCM1", "YOX1 : (SWI4)...
Tutorials/PatternMatchExperiments.ipynb
shaunharker/DSGRN
mit
Full Networks
networks = [Network() for i in range(0,9)] for i,network in enumerate(networks): network.assign('\n'.join(network_strings[i]))
Tutorials/PatternMatchExperiments.ipynb
shaunharker/DSGRN
mit
Essential Networks
essential_network_strings = [ [ line + " : E" for line in network_string ] for network_string in network_strings] essential_networks = [Network() for i in range(0,9)] for i,network in enumerate(essential_networks): network.assign('\n'.join(essential_network_strings[i]))
Tutorials/PatternMatchExperiments.ipynb
shaunharker/DSGRN
mit
Path match analysis We give two functions for path match analysis. One looks at the entire domain graph. The other only checks for path matches in stable Morse sets. Analysis on entire domain graph
def Analyze(network, events, event_ordering): poe = PosetOfExtrema(network, events, event_ordering ) pattern_graph = PatternGraph(poe) parameter_graph = ParameterGraph(network) result = [] for parameter_index in range(0, parameter_graph.size()): parameter = parameter_graph.parameter(paramete...
Tutorials/PatternMatchExperiments.ipynb
shaunharker/DSGRN
mit
Analysis on stable Morse set only
def AnalyzeOnStable(network, events, event_ordering): poe = PosetOfExtrema(network, events, event_ordering ) pattern_graph = PatternGraph(poe) parameter_graph = ParameterGraph(network) results = [] for parameter_index in range(0, parameter_graph.size()): parameter = parameter_graph.parameter...
Tutorials/PatternMatchExperiments.ipynb
shaunharker/DSGRN
mit
Poset of Extrema We study two posets of extrema. The first comes from looking at times [10,60] and assuming the SWI4 minimum happens before the other minima at the beginning, and thus can be excluded. The other comes from including all extrema. Original Poset of Extrema
original_events = [("HCM1", "min"), ("NDD1", "min"), ("YOX1", "min"), ("SWI4", "max"), ("HCM1", "max"), ("YOX1", "max"), ("NDD1", "max"), ("SWI4","min")] original_event_ordering = [ (i,j) for i in [0,1,2] for j in [3,4,5] ] + \ [ (i,j) for i in [3,4,5] for j in [6] ] + \...
Tutorials/PatternMatchExperiments.ipynb
shaunharker/DSGRN
mit
Alternative Poset of Extrema
all_events = [("SWI4", "min"), ("HCM1", "min"), ("NDD1", "min"), ("YOX1", "min"), ("SWI4", "max"), ("HCM1", "max"), ("YOX1", "max"), ("NDD1", "max"), ("SWI4","min"), ("YOX1", "min"), ("HCM1","min"), ("NDD1", "min"), ("SWI4", "max"), ("HCM1", "max"), ("YOX1",...
Tutorials/PatternMatchExperiments.ipynb
shaunharker/DSGRN
mit
Experiments There are 8 experiments corresponding to 3 binary choices: full networks vs essential networks; path matching in the entire domain graph vs path matching in stable Morse sets; original poset of extrema vs alternative poset of extrema.
def DisplayExperiment(results, title): markdown_string = "# " + title + "\n\n" markdown_string += "| network | # parameters | # parameters with path match |\n" markdown_string += "| ------- |------------ | ---------------------------- |\n" for i, item in enumerate(results): [parameters_with_path...
Tutorials/PatternMatchExperiments.ipynb
shaunharker/DSGRN
mit
Step 2: Determine output variance
CV = 0.05 #Coefficient of variation set to 5% (CV = sigma/mu) var_A = np.power(abs(CV*A_det),2) #Variance of the A-matrix (var =sigma^2) var_B = np.power(abs(CV*B_det),2) #Variance of the B-matrix P = np.concatenate((np.reshape(dgdA, 4), dgdB), axis=1) #P contains partial der...
Code/.ipynb_checkpoints/AUP_LCA_evelynegroen-checkpoint.ipynb
evelynegroen/evelynegroen.github.io
mit
First: load and "featurize" Featurization refers to the process of converting the conformational snapshots from your MD trajectories into vectors in some space $\mathbb{R}^N$ that can be manipulated and modeled by subsequent analyses. The Gaussian HMM, for instance, uses Gaussian emission distributions, so it models th...
print(AlanineDipeptide.description()) dataset = AlanineDipeptide().get() trajectories = dataset.trajectories topology = trajectories[0].topology indices = [atom.index for atom in topology.atoms if atom.element.symbol in ['C', 'O', 'N']] featurizer = SuperposeFeaturizer(indices, trajectories[0][0]) sequences = featuri...
examples/advanced/hmm-and-msm.ipynb
cxhernandez/msmbuilder
lgpl-2.1
Expected output: **gradients["dWaa"][1][2]** = 10.0 | **gradients["dWax"][3][1]** = -10.0 | **gradients["dWya"][1][2]** = 0.29713815361 | ...
# GRADED FUNCTION: sample def sample(parameters, char_to_ix, seed): """ Sample a sequence of characters according to a sequence of probability distributions output of the RNN Arguments: parameters -- python dictionary containing the parameters Waa, Wax, Wya, by, and b. char_to_ix -- python dictio...
deeplearning.ai/C5.SequenceModel/Week1_RNN/assignment/Dinosaur Island -- Character-level language model/Dinosaurus Island -- Character level language model final - v1.ipynb
jinzishuai/learn2deeplearn
gpl-3.0
Class Methods
class Employee: emp_count = 0 # Class Variable company = 'Google' # Class Variable raise_amount = 1.04 def __init__(self, fname, lname): self.fname = fname self.lname = lname self.email = self.fname + '.' + self.lname + '@' + self.company + '.com' Employee.emp_count += 1 ...
ipynb/OOP Concepts.ipynb
sripaladugu/sripaladugu.github.io
mit
Class Methods can be used to create alternate constructors
class Employee: emp_count = 0 # Class Variable company = 'Google' # Class Variable raise_amount = 1.04 def __init__(self, fname, lname, salary): self.fname = fname self.lname = lname self.salary = salary self.email = self.fname + '.' + self.lname + '@' + self.company + '....
ipynb/OOP Concepts.ipynb
sripaladugu/sripaladugu.github.io
mit
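A minimal sketch of the alternate-constructor pattern described above; the from_string name and the 'First-Last-Salary' format are illustrative choices, not necessarily the notebook's:

```python
class Employee:
    company = 'Google'

    def __init__(self, fname, lname, salary):
        self.fname = fname
        self.lname = lname
        self.salary = salary

    @classmethod
    def from_string(cls, emp_str):
        """Alternate constructor: build an Employee from 'First-Last-Salary'."""
        fname, lname, salary = emp_str.split('-')
        return cls(fname, lname, int(salary))

emp = Employee.from_string('John-Doe-70000')
print(emp.fname, emp.salary)  # John 70000
```

Using `cls(...)` rather than `Employee(...)` means subclasses inherit the constructor and get instances of themselves.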
Static Methods Instance methods take self as the first argument. Class methods take cls as the first argument. Static methods don't take an instance or a class as their first argument; we just pass the arguments we want to work with. Static methods don't operate on the instance or the class.
class Employee: emp_count = 0 # Class Variable company = 'Google' # Class Variable raise_amount = 1.04 def __init__(self, fname, lname, salary): self.fname = fname self.lname = lname self.salary = salary self.email = self.fname + '.' + self.lname + '@' + self.company + '....
ipynb/OOP Concepts.ipynb
sripaladugu/sripaladugu.github.io
mit
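A common illustration of a static method (a sketch; the is_workday example is a widely used teaching example and may differ from the notebook's code):

```python
import datetime

class Employee:
    @staticmethod
    def is_workday(day):
        """No self/cls: just a plain function namespaced under the class."""
        return day.weekday() not in (5, 6)   # 5 = Saturday, 6 = Sunday

print(Employee.is_workday(datetime.date(2024, 7, 10)))  # a Wednesday -> True
print(Employee.is_workday(datetime.date(2024, 7, 13)))  # a Saturday -> False
```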
Inheritance - Creating subclasses
class Employee: emp_count = 0 # Class Variable company = 'Google' # Class Variable raise_amount = 1.04 def __init__(self, fname, lname, salary): self.fname = fname self.lname = lname self.salary = salary self.email = self.fname + '.' + self.lname + '@' + self.company + '....
ipynb/OOP Concepts.ipynb
sripaladugu/sripaladugu.github.io
mit
Now what if you want Developer's raise_amount to be 10%?
class Employee: emp_count = 0 # Class Variable company = 'Google' # Class Variable raise_amount = 1.04 def __init__(self, fname, lname, salary): self.fname = fname self.lname = lname self.salary = salary self.email = self.fname + '.' + self.lname + '@' + self.company + '....
ipynb/OOP Concepts.ipynb
sripaladugu/sripaladugu.github.io
mit
Now what if we want the Developer class to have an extra attribute like prog_lang?
class Employee: emp_count = 0 # Class Variable company = 'Google' # Class Variable raise_amount = 1.04 def __init__(self, fname, lname, salary): self.fname = fname self.lname = lname self.salary = salary self.email = self.fname + '.' + self.lname + '@' + self.company + '....
ipynb/OOP Concepts.ipynb
sripaladugu/sripaladugu.github.io
mit
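The truncated Developer example is typically completed by overriding raise_amount on the subclass and passing the shared fields through super().__init__ before adding prog_lang; a self-contained sketch along those lines:

```python
class Employee:
    raise_amount = 1.04

    def __init__(self, fname, lname, salary):
        self.fname = fname
        self.lname = lname
        self.salary = salary

class Developer(Employee):
    raise_amount = 1.10      # overrides the class variable for Developers only

    def __init__(self, fname, lname, salary, prog_lang):
        super().__init__(fname, lname, salary)   # reuse the parent initializer
        self.prog_lang = prog_lang

dev = Developer('Ada', 'Lovelace', 100, 'Python')
print(dev.prog_lang, dev.raise_amount, Employee.raise_amount)  # Python 1.1 1.04
```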
Gotcha - Mutable default arguments * https://pythonconquerstheuniverse.wordpress.com/2012/02/15/mutable-default-arguments/
class Employee: emp_count = 0 # Class Variable company = 'Google' # Class Variable raise_amount = 1.04 def __init__(self, fname, lname, salary): self.fname = fname self.lname = lname self.salary = salary self.email = self.fname + '.' + self.lname + '@' + self.company + '....
ipynb/OOP Concepts.ipynb
sripaladugu/sripaladugu.github.io
mit
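A minimal sketch of the gotcha linked above: a mutable default is evaluated once at function definition, so it is shared across calls; the None-sentinel idiom avoids it.

```python
def append_bad(item, bucket=[]):
    """Gotcha: the default list is created ONCE and shared across calls."""
    bucket.append(item)
    return bucket

print(append_bad(1))  # [1]
print(append_bad(2))  # [1, 2]  <- surprising carry-over from the first call

def append_good(item, bucket=None):
    """Fix: use None as the sentinel and make a fresh list per call."""
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

print(append_good(1))  # [1]
print(append_good(2))  # [2]
```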
Magic or Dunder Methods https://www.youtube.com/watch?v=3ohzBxoFHAY&index=5&list=PL-osiE80TeTsqhIuOqKhwlXsIBIdSeYtc Dunder methods: 1. `__repr__` 2. `__str__`
class Employee: company = 'Google' def __init__(self, fname, lname, salary): self.fname = fname self.lname = lname self.salary = salary self.email = self.fname + '.' + self.lname + '@' + self.company + '.com' def __repr__(self): # For other developers return "Employee...
ipynb/OOP Concepts.ipynb
sripaladugu/sripaladugu.github.io
mit
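A minimal sketch of the two dunder methods listed above: `__repr__` targets developers (unambiguous, ideally valid Python to recreate the object), `__str__` targets users (readable), and print() prefers `__str__` when both are defined.

```python
class Employee:
    company = 'Google'

    def __init__(self, fname, lname, salary):
        self.fname = fname
        self.lname = lname
        self.salary = salary

    def __repr__(self):
        # For other developers: unambiguous representation
        return "Employee('{}', '{}', {})".format(self.fname, self.lname, self.salary)

    def __str__(self):
        # For end users: readable representation
        return '{} {}'.format(self.fname, self.lname)

emp = Employee('John', 'Doe', 70000)
print(repr(emp))  # Employee('John', 'Doe', 70000)
print(str(emp))   # John Doe
```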
`__add__` and `__len__`
# if you do: 1 + 2 internally the interpreter calls the dunder method __add__ print(int.__add__(1,2)) # Similarly # if you do: [2,3] + [4,5] internally the interpreter calls the dunder method __add__ print(list.__add__([2,3],[4,5])) print('Paladugu'.__len__()) # This is same as len('Paladugu') class Employee: com...
ipynb/OOP Concepts.ipynb
sripaladugu/sripaladugu.github.io
mit
Property Decorators
class Employee: company = 'Google' def __init__(self, fname, lname, salary): self.fname = fname self.lname = lname self.salary = salary @property def email(self): return '{}.{}@{}.com'.format(self.fname, self.lname, self.company) @property def fullname(self): ...
ipynb/OOP Concepts.ipynb
sripaladugu/sripaladugu.github.io
mit
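The property example above is truncated; a self-contained sketch showing both a read-only computed property and a setter (the fullname setter is a common companion example and may differ from the notebook's):

```python
class Employee:
    def __init__(self, fname, lname):
        self.fname = fname
        self.lname = lname

    @property
    def fullname(self):
        # Accessed like an attribute, computed on the fly
        return '{} {}'.format(self.fname, self.lname)

    @fullname.setter
    def fullname(self, name):
        # Assigning to the property updates the underlying attributes
        self.fname, self.lname = name.split(' ')

emp = Employee('John', 'Doe')
print(emp.fullname)       # John Doe
emp.fullname = 'Jane Roe'
print(emp.fname)          # Jane
```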
Abstract Base Classes in Python What are Abstract Base Classes good for? A while ago I had a discussion about which pattern to use for implementing a maintainable class hierarchy in Python. More specifically, the goal was to define a simple class hierarchy for a service backend in the most programmer-friendly and maint...
from abc import ABCMeta, abstractmethod class Base(metaclass=ABCMeta): @abstractmethod def foo(self): pass @abstractmethod def bar(self): pass class Concrete(Base): def foo(self): pass # We forget to declare bar() c = Concrete()
ipynb/OOP Concepts.ipynb
sripaladugu/sripaladugu.github.io
mit
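The payoff of the snippet above is that the missing bar() is caught at instantiation time, not when the method is first called; a self-contained sketch confirming the TypeError:

```python
from abc import ABCMeta, abstractmethod

class Base(metaclass=ABCMeta):
    @abstractmethod
    def foo(self): pass

    @abstractmethod
    def bar(self): pass

class Concrete(Base):
    def foo(self): pass
    # bar() deliberately left undeclared

try:
    c = Concrete()               # fails here, before bar() is ever called
except TypeError as e:
    print("TypeError:", e)
```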
Network definitions
from AAE import create_encoder, create_decoder, create_aae_trainer
Notebooks/AAE.ipynb
nimagh/CNN_Implementations
gpl-3.0
Training AAE You can either get the fully trained models from Google Drive or train your own models using the AAE.py script. Experiments Create demo networks and restore weights
iter_num = 18018 best_model = work_dir + "Model_Iter_%.3d.ckpt"%iter_num best_img = work_dir + 'Gen_Iter_%d.jpg'%iter_num Image(filename=best_img) latentD = 2 # of the best model trained batch_size = 128 tf.reset_default_graph() demo_sess = tf.InteractiveSession() is_training = tf.placeholder(tf.bool, [], 'is_train...
Notebooks/AAE.ipynb
nimagh/CNN_Implementations
gpl-3.0
Generate new data Approximate samples from the posterior distribution over the latent variables p(z|x)
Zdemo = np.random.normal(size=[128, latentD], loc=0.0, scale=1.).astype(np.float32) gen_sample = demo_sess.run(Xgen_op, feed_dict={Zph: Zdemo , is_training:False}) vis_square(gen_sample[:121], [11, 11], save_path=work_dir + 'sample.jpg') Image(filename=work_dir + 'sample.jpg')
Notebooks/AAE.ipynb
nimagh/CNN_Implementations
gpl-3.0
After doing a pip install, click on Reset Session so that the Python environment picks up the new package
import os PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1 MODEL_TYPE = 'tpu' # do not change these os.environ['PROJECT'] = PROJECT os.environ['BUCKET'] = BUCKET o...
courses/machine_learning/deepdive/08_image_keras/flowers_fromscratch_tpu.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Breaking a parser down (attach) If we examine the source code for the attach pipeline, we can see that it is in fact a two-step pipeline combining the attach classifier wrapper and a decoder. So let's see what happens when we run the attach classifier by itself.
import numpy as np from attelo.learning import (SklearnAttachClassifier) from attelo.parser.attach import (AttachClassifierWrapper) from sklearn.linear_model import (LogisticRegression) def print_results_verbose(dpack): """Print detailed parse results""" for i, (edu1, edu2) in enumerate(dpack.pairings): ...
doc/tut_parser2.ipynb
kowey/attelo
gpl-3.0
Parsers and weighted datapacks In the output above, we have dug a little bit deeper into our datapacks. Recall above that a parser translates datapacks to datapacks. The output of a parser is always a weighted datapack, i.e. a datapack whose 'graph' attribute is set to a record containing attachment weights label weig...
from attelo.decoding.baseline import (LocalBaseline)

decoder = LocalBaseline(threshold=0.4)
dpack2 = decoder.transform(dpack)
print_results_verbose(dpack2)
doc/tut_parser2.ipynb
kowey/attelo
gpl-3.0
The result above is what we get if we run a decoder on the output of the attach classifier wrapper. This is, in fact, the same thing as running the attachment pipeline. We can define a similar pipeline below.
from attelo.parser.pipeline import (Pipeline)

# this is basically attelo.parser.attach.AttachPipeline
parser1 = Pipeline(steps=[('attach weights', parser1a),
                          ('decoder', decoder)])
parser1.fit(train_dpacks, train_targets)
print_results_verbose(parser1.transform(test_dpack))
doc/tut_parser2.ipynb
kowey/attelo
gpl-3.0
Mixing and matching Being able to break parsing down to this level of granularity lets us experiment with parsing techniques by composing different parsing substeps in different ways. For example, below, we write two slightly different pipelines, one which sets labels separately from decoding, and one which combines a...
from attelo.learning.local import (SklearnLabelClassifier) from attelo.parser.label import (LabelClassifierWrapper, SimpleLabeller) from attelo.parser.full import (AttachTimesBestLabel) learner_l = SklearnLabelClassifier(LogisticRegression()) print("Post-labelling") print("----------...
doc/tut_parser2.ipynb
kowey/attelo
gpl-3.0
3. Enter Storage Bucket Recipe Parameters Specify the name of the bucket and who will have owner permissions. Existing buckets are preserved. Adding a permission to the list will update the permissions but removing them will not. You have to manually remove grants. Modify the values below for your use case, can be done...
FIELDS = {
    'auth_write': 'service',  # Credentials used for writing data.
    'bucket_bucket': '',      # Name of Google Cloud Bucket to create.
    'bucket_emails': '',      # Comma separated emails.
    'bucket_groups': '',      # Comma separated groups.
}

print("Parameters Set To: %s" % FIELDS)
colabs/bucket.ipynb
google/starthinker
apache-2.0
4. Execute Storage Bucket This does NOT need to be modified unless you are changing the recipe, click play.
from starthinker.util.configuration import execute from starthinker.util.recipe import json_set_fields TASKS = [ { 'bucket':{ 'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}}, 'bucket':{'field':{'name':'buc...
colabs/bucket.ipynb
google/starthinker
apache-2.0
Loading data Load the data from disk into memory.
with open('potus_wiki_bios_cleaned.json','r') as f:
    bios = json.load(f)
2018/materials/boulder/day4-text-analysis/Day 4, Lecture 3 - Text networks and word embeddings.ipynb
compsocialscience/summer-institute
mit
Confirm there are 44 presidents (shaking fist at Grover Cleveland) in the dictionary.
print("There are {0} biographies of presidents.".format(len(bios)))
2018/materials/boulder/day4-text-analysis/Day 4, Lecture 3 - Text networks and word embeddings.ipynb
compsocialscience/summer-institute
mit
Get some metadata about the U.S. Presidents.
presidents_df = pd.DataFrame(requests.get('https://raw.githubusercontent.com/hitch17/sample-data/master/presidents.json').json())
presidents_df = presidents_df.set_index('president')
presidents_df['wikibio words'] = pd.Series({bio_name:len(bio_text) for bio_name,bio_text in bios.items()})
presidents_df.head()
2018/materials/boulder/day4-text-analysis/Day 4, Lecture 3 - Text networks and word embeddings.ipynb
compsocialscience/summer-institute
mit
A really basic exploratory scatterplot for the number of words in each President's biography compared to their POTUS index.
presidents_df.plot.scatter(x='number',y='wikibio words')
2018/materials/boulder/day4-text-analysis/Day 4, Lecture 3 - Text networks and word embeddings.ipynb
compsocialscience/summer-institute
mit
TF-IDF We can create a document-term matrix where the rows are our 44 presidential biographies, the columns are the terms (words), and the values in the cells are the word counts: the number of times that document contains that word. This is the "term frequency" (TF) part of TF-IDF. The IDF part of TF-IDF is the "inver...
# Import the libraries from scikit-learn from sklearn.feature_extraction.text import CountVectorizer,TfidfTransformer count_vect = CountVectorizer() # Compute the word counts -- it expects a big string, so join our cleaned words back together bio_counts = count_vect.fit_transform([' '.join(bio) for bio in bios.values...
2018/materials/boulder/day4-text-analysis/Day 4, Lecture 3 - Text networks and word embeddings.ipynb
compsocialscience/summer-institute
mit
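Since the scikit-learn cell above is abbreviated, here is a plain-numpy sketch of the TF-IDF idea using the classic unsmoothed formula tf × log(N/df). Note this is a simplification: scikit-learn's `TfidfTransformer` uses a smoothed IDF and L2 normalization by default, so the exact numbers differ.

```python
import numpy as np

# Toy corpus: 3 documents over a 4-term vocabulary (raw counts)
counts = np.array([
    [3, 0, 1, 0],
    [0, 2, 1, 0],
    [1, 1, 1, 2],
], dtype=float)

n_docs = counts.shape[0]
df = (counts > 0).sum(axis=0)      # document frequency of each term
idf = np.log(n_docs / df)          # rarer terms get a larger IDF
tfidf = counts * idf               # weight counts by inverse document frequency

# Term 2 appears in every document, so its IDF is log(3/3) = 0
# and its TF-IDF weight vanishes everywhere.
print(tfidf[:, 2])
```

This is exactly the "common everywhere means uninformative" intuition: the ubiquitous term contributes nothing to distinguishing documents.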
Make a text similarity network Once we have the TFIDF scores for every word in each president's biography, we can make a text similarity network. Multiplying the document-by-term matrix by its transpose should return the cosine similarities between documents. We can also import cosine_similarity from scikit-learn if yo...
# Compute cosine similarity pres_pres_df = pd.DataFrame(bio_tfidf_dense*bio_tfidf_dense.T) # If you don't believe me that cosine similiarty is the document-term matrix times its transpose from sklearn.metrics.pairwise import cosine_similarity pres_pres_df = pd.DataFrame(cosine_similarity(bio_tfidf_dense)) # Filter fo...
2018/materials/boulder/day4-text-analysis/Day 4, Lecture 3 - Text networks and word embeddings.ipynb
compsocialscience/summer-institute
mit
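The claim that the document-term matrix times its transpose yields cosine similarities holds because `TfidfVectorizer` L2-normalizes each row by default, and for unit-length rows the dot product is the cosine. A quick numpy check of that equivalence on random stand-in data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((5, 8))                       # 5 "documents", 8 "terms"

# Cosine similarity computed directly from the raw rows
norms = np.linalg.norm(X, axis=1)
sims_cos = (X @ X.T) / np.outer(norms, norms)

# Dot products of the L2-normalized rows
# (what multiplying a TfidfVectorizer output by its transpose gives you)
Xn = X / norms[:, np.newaxis]
sims_dot = Xn @ Xn.T

print(np.allclose(sims_cos, sims_dot))  # True
```

The two matrices agree entry by entry, and the diagonal is all ones (every document is perfectly similar to itself).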
We read this pandas edgelist into networkx using from_pandas_edgelist, report out some basic descriptives about the network, and write the graph object to file in case we want to visualize it in a dedicated network visualization package like Gephi.
# Convert from edgelist to a graph object g = nx.from_pandas_edgelist(edgelist_df,source='from',target='to',edge_attr=['weight']) # Report out basic descriptives print("There are {0:,} nodes and {1:,} edges in the network.".format(g.number_of_nodes(),g.number_of_edges())) # Write graph object to disk for visualizatio...
2018/materials/boulder/day4-text-analysis/Day 4, Lecture 3 - Text networks and word embeddings.ipynb
compsocialscience/summer-institute
mit
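Because the cell above is abbreviated, here is a self-contained sketch of the `from_pandas_edgelist` round trip on a tiny, made-up weighted edgelist (the 'from'/'to'/'weight' column names follow the convention used in the text, not real presidential data):

```python
import pandas as pd
import networkx as nx

# A tiny hypothetical weighted edgelist
edgelist_df = pd.DataFrame({
    'from':   ['A', 'A', 'B'],
    'to':     ['B', 'C', 'C'],
    'weight': [0.9, 0.4, 0.7],
})

# Build an undirected graph, carrying the weight column along as an edge attribute
g = nx.from_pandas_edgelist(edgelist_df, source='from', target='to', edge_attr=['weight'])

print("There are {0:,} nodes and {1:,} edges in the network.".format(
    g.number_of_nodes(), g.number_of_edges()))
print(g.edges['A', 'B']['weight'])  # 0.9
```

From here, `nx.write_gexf(g, 'toy.gexf')` (for example) would export the graph for a tool like Gephi.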
Since this is a small and sparse network, we can try to use Matplotlib to visualize it instead. I would only use the nx.draw functionality for small networks like this one.
# Plot the nodes as a spring layout #g_pos = nx.layout.fruchterman_reingold_layout(g, k = 5, iterations=10000) g_pos = nx.layout.kamada_kawai_layout(g) # Draw the graph f,ax = plt.subplots(1,1,figsize=(10,10)) nx.draw(G = g, ax = ax, pos = g_pos, with_labels = True, node_size = [dc*(le...
2018/materials/boulder/day4-text-analysis/Day 4, Lecture 3 - Text networks and word embeddings.ipynb
compsocialscience/summer-institute
mit
Case study: Text similarity network of the S&P 500 companies Step 1: Load and preprocess the content of the articles.
# Load the data with open('sp500_wiki_articles.json','r') as f: sp500_articles = json.load(f) # Bring in the text_preprocessor we wrote from Day 4, Lecture 1 def text_preprocessor(text): """Takes a large string (document) and returns a list of cleaned tokens""" tokens = nltk.wordpunct_tokenize(text) cl...
2018/materials/boulder/day4-text-analysis/Day 4, Lecture 3 - Text networks and word embeddings.ipynb
compsocialscience/summer-institute
mit
Step 2: Compute the TFIDF matrix for the S&P 500 companies.
# Compute the word counts
sp500_counts =

# Compute the TF-IDF for the word counts from each biography
sp500_tfidf =

# Convert from sparse to dense array representation
sp500_tfidf_dense =
2018/materials/boulder/day4-text-analysis/Day 4, Lecture 3 - Text networks and word embeddings.ipynb
compsocialscience/summer-institute
mit
Step 3: Compute the cosine similarities.
# Compute cosine similarity company_company_df = # Filter for edges in the 90th percentile or greater company_company_filtered_df = # Reshape and filter data sp500_edgelist_df = sp500_edgelist_df = # Rename and replace data sp500_edgelist_df.rename(columns={'level_0':'from','level_1':'to',0:'weight'},inplace=Tru...
2018/materials/boulder/day4-text-analysis/Day 4, Lecture 3 - Text networks and word embeddings.ipynb
compsocialscience/summer-institute
mit
Step 4: Visualize the resulting network. Word2Vec We used TF-IDF vectors of documents and cosine similarities between these document vectors as a way of representing the similarity in the networks above. However, TF-IDF scores are simply (normalized) word frequencies: they do not capture semantic information. A vector s...
from gensim.models import Word2Vec

bios_model = Word2Vec(bios.values(), size=100, window=10, min_count=8)
2018/materials/boulder/day4-text-analysis/Day 4, Lecture 3 - Text networks and word embeddings.ipynb
compsocialscience/summer-institute
mit
Each word in the vocabulary exists as an N-dimensional vector, where N is the "size" hyper-parameter set in the model. The "congress" token is located at this position in the 100-dimensional space we trained in bios_model.
bios_model.wv['congress']

bios_model.wv.most_similar('congress')
bios_model.wv.most_similar('court')
bios_model.wv.most_similar('war')
bios_model.wv.most_similar('election')
2018/materials/boulder/day4-text-analysis/Day 4, Lecture 3 - Text networks and word embeddings.ipynb
compsocialscience/summer-institute
mit
There's a doesnt_match method that predicts which word in a list doesn't match the other word senses in the list. Sometimes the results are predictable/trivial.
bios_model.wv.doesnt_match(['democrat','republican','whig','panama'])
2018/materials/boulder/day4-text-analysis/Day 4, Lecture 3 - Text networks and word embeddings.ipynb
compsocialscience/summer-institute
mit
Other times the results are unexpected/interesting.
bios_model.wv.doesnt_match(['canada','mexico','cuba','japan','france'])
2018/materials/boulder/day4-text-analysis/Day 4, Lecture 3 - Text networks and word embeddings.ipynb
compsocialscience/summer-institute
mit
One of the most powerful implications of having these vectorized embeddings of word meanings is the ability to do arithmetic-like operations on them that recover or reveal interesting semantic relationships. The classic example is Man:Woman::King:Queen: What are some examples of these vector similarities from our trained model?...
bios_model.wv.most_similar(positive=['democrat','slavery'], negative=['republican'])
bios_model.wv.most_similar(positive=['republican','labor'], negative=['democrat'])
2018/materials/boulder/day4-text-analysis/Day 4, Lecture 3 - Text networks and word embeddings.ipynb
compsocialscience/summer-institute
mit
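The analogy arithmetic can be sketched with plain numpy on made-up 3-d vectors (the real model's 100-d vectors work the same way): subtract one word vector, add another, and pick the nearest remaining word by cosine similarity. All vectors and words below are illustrative, not taken from the trained model.

```python
import numpy as np

# Hypothetical embeddings: dimension 1 encodes "gender", dimension 2 "royalty"
vecs = {
    'man':   np.array([1.0, 0.0, 0.2]),
    'woman': np.array([1.0, 1.0, 0.2]),
    'king':  np.array([1.0, 0.0, 0.9]),
    'queen': np.array([1.0, 1.0, 0.9]),
    'apple': np.array([0.1, 0.2, 0.0]),
    'river': np.array([0.3, 0.1, 0.1]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# king - man + woman should land nearest to queen
target = vecs['king'] - vecs['man'] + vecs['woman']
candidates = [w for w in vecs if w not in ('king', 'man', 'woman')]
best = max(candidates, key=lambda w: cosine(target, vecs[w]))
print(best)  # queen
```

gensim's `most_similar(positive=..., negative=...)` performs essentially this computation over the whole vocabulary, excluding the query words themselves.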
Finally, you can use the similarity method to return the similarity between two terms. In our trained model, "britain" and "france" are more similar to each other than "mexico" and "canada".
bios_model.wv.similarity('republican','democrat')
bios_model.wv.similarity('mexico','canada')
bios_model.wv.similarity('britain','france')
2018/materials/boulder/day4-text-analysis/Day 4, Lecture 3 - Text networks and word embeddings.ipynb
compsocialscience/summer-institute
mit
Case study: S&P500 company Word2Vec model Step 1: Open the "sp500_wiki_articles_cleaned.json" you previous saved of the cleaned S&P500 company article content or use a text preprocessor on "sp500_wiki_articles.json" to generate a dictionary of cleaned article content. Train a sp500_model using the Word2Vec model on the...
print(bio_tfidf_dense.shape)
bio_tfidf_dense
2018/materials/boulder/day4-text-analysis/Day 4, Lecture 3 - Text networks and word embeddings.ipynb
compsocialscience/summer-institute
mit
Principal component analysis (PCA) is probably one of the most widely-used and efficient methods for dimensionality reduction.
# Step 1: Choose a class of models from sklearn.decomposition import PCA # Step 2: Instantiate the model pca = PCA(n_components=2) # Step 3: Arrange the data into features matrices # Already done # Step 4: Fit the model to the data pca.fit(bio_tfidf_dense) # Step 5: Evaluate the model X_pca = pca.transform(bio_tfid...
2018/materials/boulder/day4-text-analysis/Day 4, Lecture 3 - Text networks and word embeddings.ipynb
compsocialscience/summer-institute
mit
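Because the scikit-learn cell above is abbreviated, here is what PCA does under the hood, sketched with numpy alone: center the matrix, take its SVD, and project onto the top right-singular vectors. The random matrix stands in for the real TF-IDF data.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(44, 10))           # stand-in for 44 biographies x 10 features

# Center the data, then SVD; the rows of Vt are the principal axes
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

X_pca = Xc @ Vt[:2].T                   # project onto the first two components
explained = S**2 / (X.shape[0] - 1)     # variance captured by each component

print(X_pca.shape)                      # (44, 2)
print(explained[0] >= explained[1])     # components come sorted by variance: True
```

This is equivalent (up to sign flips of the axes) to `PCA(n_components=2).fit_transform(X)` in scikit-learn.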
Multi-dimensional scaling is another common technique in the social sciences.
# Step 1: Choose your model class(es) from sklearn.manifold import MDS # Step 2: Instantiate your model class(es) mds = MDS(n_components=2,metric=False,n_jobs=-1) # Step 3: Arrange data into features matrices # Done! # Step 4: Fit the data and transform X_mds = mds.fit_transform(bio_tfidf_dense) # Plot the data f,a...
2018/materials/boulder/day4-text-analysis/Day 4, Lecture 3 - Text networks and word embeddings.ipynb
compsocialscience/summer-institute
mit
Isomap is an extension of MDS.
# Step 1: Choose your model class(es) from sklearn.manifold import Isomap # Step 2: Instantiate your model class(es) iso = Isomap(n_neighbors = 5, n_components = 2) # Step 3: Arrange data into features matrices # Done! # Step 4: Fit the data and transform X_iso = iso.fit_transform(bio_tfidf_dense) # Plot the data ...
2018/materials/boulder/day4-text-analysis/Day 4, Lecture 3 - Text networks and word embeddings.ipynb
compsocialscience/summer-institute
mit
Spectral embedding does interesting things to the eigenvectors of a similarity matrix.
# Step 1: Choose your model class(es) from sklearn.manifold import SpectralEmbedding # Step 2: Instantiate your model class(es) se = SpectralEmbedding(n_components = 2) # Step 3: Arrange data into features matrices # Done! # Step 4: Fit the data and transform X_se = se.fit_transform(bio_tfidf_dense) # Plot the dat...
2018/materials/boulder/day4-text-analysis/Day 4, Lecture 3 - Text networks and word embeddings.ipynb
compsocialscience/summer-institute
mit
Locally Linear Embedding is yet another dimensionality reduction method, but not my favorite to date, given its performance (the output rarely shows meaningful clusters) and its cost (it is expensive to compute).
# Step 1: Choose your model class(es) from sklearn.manifold import LocallyLinearEmbedding # Step 2: Instantiate your model class(es) lle = LocallyLinearEmbedding(n_components = 2,n_jobs=-1) # Step 3: Arrange data into features matrices # Done! # Step 4: Fit the data and transform X_lle = lle.fit_transform(bio_tfidf_...
2018/materials/boulder/day4-text-analysis/Day 4, Lecture 3 - Text networks and word embeddings.ipynb
compsocialscience/summer-institute
mit
t-Distributed Stochastic Neighbor Embedding (t-SNE) is ubiquitous for visualizing word or document embeddings. It can be expensive to run, but it does a great job recovering clusters. There are some hyper-parameters, particularly "perplexity" that you'll need to tune to get things to look interesting. Wattenberg, Viéga...
# Step 1: Choose your model class(es) from sklearn.manifold import TSNE # Step 2: Instantiate your model class(es) tsne = TSNE(n_components = 2, init='pca', random_state=42, perplexity=11) # Step 3: Arrange data into features matrices # Done! # Step 4: Fit the data and transform X_tsne = tsne.fit_transform(bio_tfidf...
2018/materials/boulder/day4-text-analysis/Day 4, Lecture 3 - Text networks and word embeddings.ipynb
compsocialscience/summer-institute
mit
Uniform Manifold Approximation and Projection (UMAP) is a new and particularly fast dimensionality reduction method with some comparatively great documentation. Unfortunately, UMAP is so new that it hasn't been translated into scikit-learn yet, so you'll need to install it separately from the terminal: conda install -c...
# Step 1: Choose your model class(es) from umap import UMAP # Step 2: Instantiate your model class(es) umap_ = UMAP(n_components=2, n_neighbors=10, random_state=42) # Step 3: Arrange data into features matrices # Done! # Step 4: Fit the data and transform X_umap = umap_.fit_transform(bio_tfidf_dense) # Plot the dat...
2018/materials/boulder/day4-text-analysis/Day 4, Lecture 3 - Text networks and word embeddings.ipynb
compsocialscience/summer-institute
mit
Case study: S&P500 company clusters Step 1: Using the sp500_tfidf_dense array/DataFrame, experiment with different dimensionality reduction tools we covered above. Visualize and inspect the distribution of S&P500 companies for interesting dimensions (do X and Y dimensions in this reduced data capture anything meaningfu...
top_words = pd.DataFrame(bio_counts.todense().sum(0).T, index=count_vect.get_feature_names())[0]
top_words = top_words.sort_values(0, ascending=False).head(1000).index.tolist()
2018/materials/boulder/day4-text-analysis/Day 4, Lecture 3 - Text networks and word embeddings.ipynb
compsocialscience/summer-institute
mit
For each word in top_words, we get its vector from bios_model and add it to the top_word_vectors list and cast this list back to a numpy array.
top_word_vectors = []

for word in top_words:
    try:
        vector = bios_model.wv[word]
        top_word_vectors.append(vector)
    except KeyError:
        pass

top_word_vectors = np.array(top_word_vectors)
2018/materials/boulder/day4-text-analysis/Day 4, Lecture 3 - Text networks and word embeddings.ipynb
compsocialscience/summer-institute
mit
We can then use the dimensionality tools we just covered in the previous section to visualize the word similarities. PCA is fast but rarely does a great job with this extremely high-dimensional and sparse data: it's a cloud of points with no discernable structure.
# Step 1: Choose your model class(es) # from sklearn.decomposition import PCA # Step 2: Instantiate the model pca = PCA(n_components=2) # Step 3: Arrange data into features matrices X_w2v = top_word_vectors # Step 4: Fit the data and transform X_w2v_pca = pca.fit_transform(X_w2v) # Plot the data f,ax = plt.subplot...
2018/materials/boulder/day4-text-analysis/Day 4, Lecture 3 - Text networks and word embeddings.ipynb
compsocialscience/summer-institute
mit
t-SNE was more-or-less engineered for precisely the task of visualizing word embeddings. It likely takes on the order of a minute or more for t-SNE to reduce the top_words embeddings to only two dimensions. Assuming our perplexity and other t-SNE hyperparameters are well-behaved, there should be relatively easy-to-disc...
# Step 1: Choose your model class(es) from sklearn.manifold import TSNE # Step 2: Instantiate your model class(es) tsne = TSNE(n_components = 2, init='pca', random_state=42, perplexity=25) # Step 3: Arrange data into features matrices X_w2v = top_word_vectors # Step 4: Fit the data and transform X_w2v_tsne = tsne.fi...
2018/materials/boulder/day4-text-analysis/Day 4, Lecture 3 - Text networks and word embeddings.ipynb
compsocialscience/summer-institute
mit
UMAP is faster and I think better, but you'll need to make sure this is installed on your system since it doesn't come with scikit-learn or Anaconda by default. Words like "nominee" and "campaign" or the names of the months cluster clearly together apart from the rest.
# Step 1: Choose your model class(es) from umap import UMAP # Step 2: Instantiate your model class(es) umap_ = UMAP(n_components=2, n_neighbors=5, random_state=42) # Step 3: Arrange data into features matrices # Done! # Step 4: Fit the data and transform X_w2v_umap = umap_.fit_transform(X_w2v) # Plot the data f,ax ...
2018/materials/boulder/day4-text-analysis/Day 4, Lecture 3 - Text networks and word embeddings.ipynb
compsocialscience/summer-institute
mit
Select the notebook runtime environment devices / settings Set the device to cpu / gpu for the test environment. If you have both CPU and GPU on your machine, you can optionally switch the devices. By default, we choose the best available device.
# Select the right target device when this notebook is being tested:
if 'TEST_DEVICE' in os.environ:
    import cntk
    if os.environ['TEST_DEVICE'] == 'cpu':
        C.device.set_default_device(C.device.cpu())
    else:
        C.device.set_default_device(C.device.gpu(0))

C.device.set_default_device(C.device.gpu(0))
simpleGan/CNTK_206B_DCGAN.ipynb
olgaliak/cntk-cyclegan
mit
Model Creation First we provide a brief recap of the basics of GAN. You may skip this block if you are familiar with CNTK 206A. A GAN network is composed of two sub-networks, one called the Generator ($G$) and the other Discriminator ($D$). - The Generator takes random noise vector ($z$) as input and strives to outpu...
# architectural parameters img_h, img_w = 28, 28 kernel_h, kernel_w = 5, 5 stride_h, stride_w = 2, 2 # Input / Output parameter of Generator and Discriminator g_input_dim = 100 g_output_dim = d_input_dim = img_h * img_w # We expect the kernel shapes to be square in this tutorial and # the strides to be of the same l...
simpleGan/CNTK_206B_DCGAN.ipynb
olgaliak/cntk-cyclegan
mit
Generator The generator takes a 100-dimensional random vector (for starters) as input ($z$) and outputs a 784-dimensional vector, corresponding to a flattened version of a 28 x 28 fake (synthetic) image ($x^*$). In this tutorial, we use fractionally strided convolutions (a.k.a. ConvolutionTranspose) with ReLU activa...
def convolutional_generator(z): with default_options(init=C.normal(scale=0.02)): print('Generator input shape: ', z.shape) s_h2, s_w2 = img_h//2, img_w//2 #Input shape (14,14) s_h4, s_w4 = img_h//4, img_w//4 # Input shape (7,7) gfc_dim = 1024 gf_dim = 64 h0 = Dense(...
simpleGan/CNTK_206B_DCGAN.ipynb
olgaliak/cntk-cyclegan
mit