Dataset columns: markdown, code, path, repo_name, license.
Returning indexes of selected values
from datetime import datetime as py_dtime dt_x_index = DateScale(min=np.datetime64(py_dtime(2006, 6, 1))) lin_y2 = LinearScale() lc2_index = Lines(x=dates_actual, y=prices, scales={"x": dt_x_index, "y": lin_y2}) x_ax1 = Axis(label="Date", scale=dt_x_index) x_ay2 = Axis(label=(symbol + " Price"), scale=lin_y2, orient...
examples/Interactions/Interaction Layer.ipynb
bloomberg/bqplot
apache-2.0
Brush Selector. We can do the same with any type of selector.
## Defining a new Figure dt_x_brush = DateScale(min=np.datetime64(py_dtime(2006, 6, 1))) lin_y2_brush = LinearScale() lc3_brush = Lines(x=dates_actual, y=prices, scales={"x": dt_x_brush, "y": lin_y2_brush}) x_ax_brush = Axis(label="Date", scale=dt_x_brush) x_ay_brush = Axis(label=(symbol + " Price"), scale=lin_y2_bru...
Scatter Chart Selectors: Brush Selector
date_fmt = "%m-%d-%Y" sec2_data = price_data[symbol2].values dates = price_data.index.values sc_x = LinearScale() sc_y = LinearScale() scatt = Scatter(x=prices, y=sec2_data, scales={"x": sc_x, "y": sc_y}) sc_xax = Axis(label=(symbol), scale=sc_x) sc_yax = Axis(label=(symbol2), scale=sc_y, orientation="vertical") b...
Brush Selector with Date Values
sc_brush_dt_x = DateScale(date_format=date_fmt) sc_brush_dt_y = LinearScale() scatt2 = Scatter( x=dates_actual, y=sec2_data, scales={"x": sc_brush_dt_x, "y": sc_brush_dt_y} ) br_sel_dt = BrushSelector(x_scale=sc_brush_dt_x, y_scale=sc_brush_dt_y, marks=[scatt2]) db_brush_dt = HTML(value=str(br_sel_dt.selected)) ...
Histogram Selectors
## call back for selectors def interval_change_callback(name, value): db3.value = str(value) ## call back for the selector def brush_callback(change): if not br_intsel.brushing: db3.value = str(br_intsel.selected) returns = np.log(prices[1:]) - np.log(prices[:-1]) hist_x = LinearScale() hist_y = Line...
Multi Selector. This selector provides the ability to have multiple brush selectors on the same graph. The first brush works like a regular brush. Ctrl + click creates a new brush, which works like the regular brush. The active brush has a green border while all the inactive brushes have a red border. Shift + click dea...
def multi_sel_callback(change): if not multi_sel.brushing: db4.value = str(multi_sel.selected) line_x = LinearScale() line_y = LinearScale() line = Lines( x=np.arange(100), y=np.random.randn(100), scales={"x": line_x, "y": line_y} ) multi_sel = MultiSelector(scale=line_x, marks=[line]) multi_sel.obser...
Multi Selector with Date X
def multi_sel_dt_callback(change): if not multi_sel_dt.brushing: db_multi_dt.value = str(multi_sel_dt.selected) line_dt_x = DateScale(min=np.datetime64(py_dtime(2007, 1, 1))) line_dt_y = LinearScale() line_dt = Lines( x=dates_actual, y=sec2_data, scales={"x": line_dt_x, "y": line_dt_y}, colors=["red"] ...
Lasso Selector
lasso_sel = LassoSelector() xs, ys = LinearScale(), LinearScale() data = np.arange(20) line_lasso = Lines(x=data, y=data, scales={"x": xs, "y": ys}) scatter_lasso = Scatter(x=data, y=data, scales={"x": xs, "y": ys}, colors=["skyblue"]) bar_lasso = Bars(x=data, y=data / 2.0, scales={"x": xs, "y": ys}) xax_lasso, yax_la...
Pan Zoom
xs_pz = DateScale(min=np.datetime64(py_dtime(2007, 1, 1))) ys_pz = LinearScale() line_pz = Lines( x=dates_actual, y=sec2_data, scales={"x": xs_pz, "y": ys_pz}, colors=["red"] ) panzoom = PanZoom(scales={"x": [xs_pz], "y": [ys_pz]}) xax = Axis(scale=xs_pz, label="Date", grids="off") yax = Axis(scale=ys_pz, label="P...
Hand Draw
xs_hd = DateScale(min=np.datetime64(py_dtime(2007, 1, 1))) ys_hd = LinearScale() line_hd = Lines( x=dates_actual, y=sec2_data, scales={"x": xs_hd, "y": ys_hd}, colors=["red"] ) handdraw = HandDraw(lines=line_hd) xax = Axis(scale=xs_hd, label="Date", grid_lines="none") yax = Axis(scale=ys_hd, label="Price", orienta...
Unified Figure with All Interactions
dt_x = DateScale(date_format=date_fmt, min=py_dtime(2007, 1, 1)) lc1_x = LinearScale() lc2_y = LinearScale() lc2 = Lines( x=np.linspace(0.0, 10.0, len(prices)), y=prices * 0.25, scales={"x": lc1_x, "y": lc2_y}, display_legend=True, labels=["Security 1"], ) lc3 = Lines( x=dates_actual, y=se...
An Enum stands for an enumeration; it's a convenient way to define lists of things. Typing:
AccountType.SAVINGS
homeworks/HW6/HW6_finished.ipynb
crystalzhaizhai/cs207_yi_zhai
mit
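The cells above assume an AccountType enum has already been defined. A minimal sketch, assuming the two account types this homework uses:

```python
from enum import Enum

# Hypothetical definition matching the AccountType members used above
class AccountType(Enum):
    SAVINGS = 1
    CHECKING = 2

print(AccountType.SAVINGS)                          # AccountType.SAVINGS
print(AccountType.SAVINGS == AccountType.CHECKING)  # False
```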
returns a Python representation of an enumeration. You can compare these account types:
AccountType.SAVINGS == AccountType.SAVINGS AccountType.SAVINGS == AccountType.CHECKING
To get a string representation of an Enum, you can use:
AccountType.SAVINGS.name
Part 1: Create a BankAccount class with the following specification: Constructor is BankAccount(self, owner, accountType) where owner is a string representing the name of the account owner and accountType is one of the AccountType enums Methods withdraw(self, amount) and deposit(self, amount) to modify the account bala...
class BankAccount(): def __init__(self,owner,accountType): self.owner=owner self.accountType=accountType self.balance=0 def withdraw(self,amount): if amount<0: raise ValueError("amount<0") if self.balance<amount: raise ValueError("withdraw more tha...
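A minimal runnable sketch of the BankAccount specification (the truncated cell above follows the same shape; anything beyond the stated spec is illustrative):

```python
from enum import Enum

class AccountType(Enum):
    SAVINGS = 1
    CHECKING = 2

class BankAccount:
    def __init__(self, owner, accountType):
        self.owner = owner
        self.accountType = accountType
        self.balance = 0

    def withdraw(self, amount):
        # reject negative amounts and overdrafts, per the spec
        if amount < 0:
            raise ValueError("cannot withdraw a negative amount")
        if amount > self.balance:
            raise ValueError("cannot withdraw more than the balance")
        self.balance -= amount

    def deposit(self, amount):
        if amount < 0:
            raise ValueError("cannot deposit a negative amount")
        self.balance += amount

acct = BankAccount("Alice", AccountType.SAVINGS)
acct.deposit(100)
acct.withdraw(30)
print(acct.balance)  # 70
```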
Part 2: Write a class BankUser with the following specification: Constructor BankUser(self, owner) where owner is the name of the account. Method addAccount(self, accountType) - to start, a user will have no accounts when the BankUser object is created. addAccount will add a new account to the user of the accountType ...
class BankUser(): def __init__(self,owner): self.owner=owner self.SavingAccount=None self.CheckingAccount=None def addAccount(self,accountType): if accountType==AccountType.SAVINGS: if self.SavingAccount==None: self.SavingAccount=BankAccount(self.owner...
Write some simple tests to make sure this is working. Think of edge scenarios a user might try. Part 3: ATM Closure Finally, we are going to write a closure that uses our bank account. We will make use of the input function, which takes user input to decide what actions to take. Write a closure called ATMSession(b...
def ATMSession(bankUser): def Interface(): option1=input("Enter Options:\ 1)Exit\ 2)Create Account\ 3)Check Balance\ 4)Deposit\ 5)Withdraw") if option1=="1": Interface()...
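ATMSession returns an inner function that closes over its argument. The closure pattern itself can be sketched without real input() calls (hypothetical: the menu choice is passed as an argument so the sketch stays testable):

```python
def ATMSession(balance):
    # the inner function closes over `balance`; `nonlocal` lets it mutate it
    def interface(option, amount=0):
        nonlocal balance
        if option == "deposit":
            balance += amount
        elif option == "withdraw":
            balance -= amount
        return balance
    return interface

session = ATMSession(100)
session("deposit", 50)          # balance becomes 150
print(session("withdraw", 30))  # 120
```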
Part 4: Put everything in a module Bank.py We will be grading this problem with a test suite. Put the enum, classes, and closure in a single file named Bank.py. It is very important that the class and method specifications we provided are used (with the same capitalization), otherwise you will receive no credit.
%%file Bank.py from enum import Enum class AccountType(Enum): SAVINGS = 1 CHECKING = 2 class BankAccount(): def __init__(self,owner,accountType): self.owner=owner self.accountType=accountType self.balance=0 def withdraw(self,amount): if type(amount)!=int: ...
Problem 2: Linear Regression Class Let's say you want to create Python classes for three related types of linear regression: Ordinary Least Squares Linear Regression, Ridge Regression, and Lasso Regression. Consider the multivariate linear model: $$y = X\beta + \epsilon$$ where $y$ is a length $n$ vector, $X$ is an $...
class Regression(): def __init__(self,X,y): self.X=X self.y=y self.alpha=0.1 def fit(self,X,y): return def get_params(self): return self.beta def predict(self,X): import numpy as np return np.dot(X,self.beta) def score(self,X,y): retur...
Part 2: OLS Linear Regression Write a class called OLSRegression that implements the OLS Regression model described above and inherits the Regression class.
class OLSRegression(Regression): def fit(self): import numpy as np X=self.X y=self.y self.beta=np.dot(np.dot(np.linalg.pinv(np.dot(np.transpose(X),X)),np.transpose(X)),y) ols1=OLSRegression([[2],[3]],[[1],[2]]) ols1.fit() ols1.predict([[2],[3]]) X=[[2],[3]] y=[[1],[2]] beta=np.do...
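The closed-form OLS solution used above, $\beta = (X^TX)^{+}X^Ty$, on a tiny dataset (toy numbers, not the homework's data):

```python
import numpy as np

# Toy data where y = 1 + x exactly, so OLS should recover beta = [1, 1]
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # column of ones = intercept
y = np.array([2.0, 3.0, 4.0])

# beta = (X^T X)^+ X^T y; pinv tolerates a singular X^T X
beta = np.linalg.pinv(X.T @ X) @ X.T @ y
print(beta)  # [1. 1.]
```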
Part 3: Ridge Regression Write a class called RidgeRegression that implements Ridge Regression and inherits the OLSRegression class.
class RidgeRegression(Regression): def fit(self): import numpy as np X=self.X y=self.y self.beta=np.dot(np.dot(np.linalg.pinv(np.dot(np.transpose(X),X)+self.alpha**2*np.eye(np.shape(X)[1])),np.transpose(X)),y) return ridge1=RidgeRegression([[2],[3]],[[1],[2]]) ridge1.fit() ridge1.predict([[2],[3]]...
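Ridge adds a penalty term to the normal equations; note the penalty multiplies an identity matrix, not a bare scalar. A sketch, assuming the homework's $\alpha^2$ parameterization:

```python
import numpy as np

alpha = 0.5
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([2.0, 3.0, 4.0])

p = X.shape[1]
# beta_ridge = (X^T X + alpha^2 I)^-1 X^T y
beta_ridge = np.linalg.pinv(X.T @ X + alpha**2 * np.eye(p)) @ X.T @ y
print(beta_ridge)  # differs from the OLS solution [1., 1.] because of the penalty
```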
Part 3: Lasso Regression Write a class called LassoRegression that implements Lasso Regression and inherits the OLSRegression class. You should only use Lasso(), Lasso.fit(), Lasso.coef_, and Lasso.intercept_ from the sklearn.linear_model.Lasso class.
class LassoRegression(Regression): def fit(self): from sklearn.linear_model import Lasso myLs=Lasso(self.alpha) myLs.fit(self.X,self.y) self.beta=myLs.coef_.reshape((-1,1)) self.beta0=myLs.intercept_ return def predict(self,X): import numpy as np ...
Part 4: Model Scoring You will use the Boston dataset for this part. Instantiate each of the three models above. Using a for loop, fit (on the training data) and score (on the testing data) each model on the Boston dataset. Print out the $R^2$ value for each model and the parameters for the best model using the get_...
from sklearn.datasets import load_boston from sklearn.model_selection import KFold from sklearn.metrics import r2_score import statsmodels.api as sm import numpy as np boston=load_boston() boston_x=boston.data boston_y=boston.target kf=KFold(n_splits=2) kf.get_n_splits(boston) ols1_m=0 ridge1_m=0 lasso1_m=0 for train...
Part 5: Visualize Model Performance We can evaluate how the models perform for various values of $\alpha$. Calculate the $R^2$ scores for each model for $\alpha \in [0.05, 1]$ and plot the three lines on the same graph. To change the parameters, use the set_params() method. Be sure to label each line and add axis labe...
ols_r=[] ridge_r=[] lasso_r=[] alpha_l=[] for alpha_100 in range(5,100,5): alpha=alpha_100/100 alpha_l.append(alpha) for train_index, test_index in kf.split(boston_x): X_train, X_test = boston_x[train_index], boston_x[test_index] y_train, y_test = boston_y[train_index], boston_y[test_in...
Co-authorship network We start by building a mapping from authors to the set of identifiers of papers they authored. We'll be using Python's sets again for that purpose.
papers_of_author = defaultdict(set) for (id, p) in Summaries.items(): for a in p.authors: papers_of_author[a].add(id)
04_analysis.ipynb
VUInformationRetrieval/IR2016_2017
gpl-2.0
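The same inverted-index pattern on toy data, assuming Summaries values expose an authors list (hypothetical stand-in data):

```python
from collections import defaultdict

# Hypothetical stand-in for Summaries: paper id -> list of author names
summaries = {1: ["Clauset A", "Newman M"], 2: ["Clauset A"], 3: ["Newman M"]}

# map each author to the set of ids of papers they authored
papers_of_author = defaultdict(set)
for pid, authors in summaries.items():
    for a in authors:
        papers_of_author[a].add(pid)

print(papers_of_author["Clauset A"])  # {1, 2}
```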
Let's try it out:
papers_of_author['Clauset A'] for id in papers_of_author['Clauset A']: display_summary(id)
We can now build a co-authorship network, that is, a graph linking authors to the set of co-authors they have published with:
coauthors = defaultdict(set) for p in Summaries.values(): for a in p.authors: coauthors[a].update(p.authors) # The code above results in each author being listed as having co-authored with himself/herself. # We remove these self-references here: for (a, ca) in coauthors.items(): ca.remove(a)
And let's try it out again:
print(', '.join( coauthors['Clauset A'] ))
Now we can have a look at some basic statistics about our graph:
print('Number of nodes (authors): ', len(coauthors)) coauthor_rel_count = sum( len(c) for c in coauthors.values() ) print('Number of links (co-authorship relations): ', coauthor_rel_count)
With this data at hand, we can plot the degree distribution by showing the number of collaborators a scientist has published with:
plt.hist( x=[ len(ca) for ca in coauthors.values() ], bins=range(60) ) plt.xlabel('number of co-authors') plt.ylabel('number of researchers') plt.xlim(0,51);
Citations network. Next, we can look at the citation network. We'll start by expanding our data about citations into two mappings: papers_citing[id]: papers citing a given paper; cited_by[id]: papers cited by a given paper (in other words, its list of references). papers_citing will give us the list of a node's inc...
papers_citing = Citations # no changes needed, this is what we are storing already in the Citations dataset cited_by = defaultdict(list) for ref, papers_citing_ref in papers_citing.items(): for id in papers_citing_ref: cited_by[ id ].append( ref ) display_summary(24130474)
As we are dealing with a subset of the data, papers_citing can contain references to papers outside of our subset. On the other hand, the way we created cited_by, it will only contain backward references from within our dataset, meaning that it is incomplete with respect to the whole dataset. Nevertheless, we can use ...
paper_id = 24130474 refs = { id : Summaries[id].title for id in cited_by[paper_id] } print(len(refs), 'references found for paper', paper_id) refs
If we look up the same paper in papers_citing, we now see that some of the citing papers are themselves in our dataset, but others are not (shown below as '??'):
{ id : Summaries.get(id,['??'])[0] for id in papers_citing[paper_id] }
Paper 25122340, for example, is not in our dataset and we do not have any direct information about it, but its repeated occurrence in other papers' citation lists does allow us to reconstruct some of its references. Below is the list of papers in our dataset cited by that paper:
paper_id2 = 25122340 refs2 = { id : Summaries[id].title for id in cited_by[paper_id2] } print(len(refs2), 'references identified for the paper with id', paper_id2) refs2
Now that we have a better understanding about the data we're dealing with, let us obtain again some basic statistics about our graph.
n = len(Ids) print('Number of papers in our subset: %d (%.2f %%)' % (n, 100.0) ) with_citation = [ id for id in Ids if papers_citing[id] != [] ] with_citation_rel = 100. * len(with_citation) / n print('Number of papers cited at least once: %d (%.2f %%)' % (len(with_citation), with_citation_rel) ) isolated = set( id f...
Let us now find which 10 papers are the most cited in our dataset.
citation_count_per_paper = [ (id, len(citations)) for (id,citations) in papers_citing.items() ] sorted_by_citation_count = sorted(citation_count_per_paper, key=lambda i:i[1], reverse=True) for (id, c) in sorted_by_citation_count[:10]: display_summary(id, extra_text = 'Citation count: ' + str(c))
Link Analysis for Search Engines In order to use the citation network, we need to be able to perform some complex graph algorithms on it. To make our lives easier, we will use NetworkX, a Python package for dealing with complex networks. You might have to install the NetworkX package first.
import networkx as nx G = nx.DiGraph(cited_by)
We now have a NetworkX Directed Graph stored in G, where a node represents a paper, and an edge represents a citation. This means we can now apply the algorithms and functions of NetworkX to our graph:
print(nx.info(G)) print('Directed graph:', nx.is_directed(G)) print('Density of graph:', nx.density(G))
As this graph was generated from citations only, we need to add all isolated nodes (nodes that are not cited and do not cite other papers) as well:
G.add_nodes_from(isolated)
And now we get slightly different values:
print(nx.info(G)) print('Directed graph:', nx.is_directed(G)) print('Density of graph:', nx.density(G))
Assignments Your name: ... Task 1 Plot the in-degree distribution (the distribution of the number of incoming links) for the citation network. What can you tell about the shape of this distribution, and what does this tell us about the network?
# Add your code here
Answer: [Write your answer text here] Task 2 Using the Link Analysis algorithms provided by NetworkX, calculate the PageRank score for each node in the citation network, and store them in a variable. Print out the PageRank values for the two example papers given below. You can also use the pagerank_scipy implementation...
# Add your code here # print PageRank for paper 10399593 # print PageRank for paper 23863622
Task 3 Why do the two papers above have such different PageRank values? Write code below to investigate and show the cause of this, and then explain the cause of this difference based on the results generated by your code.
# Add your code here
Answer: [Write your answer text here] Task 4 Copy the scoring function score_ntn from Task 4 of mini-assignment 3. Rename it to score_ntn_pagerank and change its code to incorporate a paper's PageRank score in its final score, in addition to tf-idf. In other words, the new function should return a single value that is...
# Add your code here
Answer: [Write your answer text here] Task 5 Copy the query function query_ntn from Task 4 of mini-assignment 3. Rename it to query_ntn_pagerank and change the code to use our new scoring function score_ntn_pagerank from task 4 above. Demonstrate these functions with an example query that returns our paper 10399593 fro...
# Add your code here
Split this string: s = "Hi there Sam!" into a list.
s = "Hi there Sam!" s.split()
1-Python Crash course/Python-Crash-Course/Python Crash Course Exercises .ipynb
shashank14/Asterix
apache-2.0
Given the variables: planet = "Earth" diameter = 12742 Use .format() to print the following string: The diameter of Earth is 12742 kilometers.
planet = "Earth" diameter = 12742 print("The diameter of {} is {} kilometers.".format(planet,diameter))
Given this nested list, use indexing to grab the word "hello"
lst = [1,2,[3,4],[5,[100,200,['hello']],23,11],1,7] lst[3][1][2]
Given this nested dictionary grab the word "hello". Be prepared, this will be annoying/tricky
d = {'k1':[1,2,3,{'tricky':['oh','man','inception',{'target':[1,2,3,'hello']}]}]} d['k1'][3]['tricky'][3]['target'][3]
What is the main difference between a tuple and a list?
# The main difference is that a tuple is immutable, while a list is mutable. na = "user@domain.com" na.split("@")[1]
Create a function that grabs the email website domain from a string in the form: user@domain.com So for example, passing "user@domain.com" would return: domain.com
def domainGet(name): return name.split("@")[1] domainGet('user@domain.com')
Create a basic function that returns True if the word 'dog' is contained in the input string. Don't worry about edge cases like a punctuation being attached to the word dog, but do account for capitalization.
def findDog(sentence): for item in sentence.split(): if item.lower() == "dog": return True return False findDog('Is there a dog here?')
Create a function that counts the number of times the word "dog" occurs in a string. Again ignore edge cases.
def countDog(sentence): return sum(1 for word in sentence.split() if word.lower() == "dog") countDog('This dog runs faster than the other dog dude!')
Use lambda expressions and the filter() function to filter out words from a list that don't start with the letter 's'. For example: seq = ['soup','dog','salad','cat','great'] should be filtered down to: ['soup','salad']
seq = ['soup','dog','salad','cat','great'] list(filter(lambda word: word.startswith('s'), seq))
Final Problem You are driving a little too fast, and a police officer stops you. Write a function to return one of 3 possible results: "No ticket", "Small ticket", or "Big Ticket". If your speed is 60 or less, the result is "No Ticket". If speed is between 61 and 80 inclusive, the result is "Small Ticket". If s...
def caught_speeding(speed, is_birthday): if is_birthday == False: if speed <= 60: return "No Ticket" elif speed >= 61 and speed <= 80: return "Small Ticket" else: return "Big Ticket" caught_speeding(81,False) caught_speeding(81,False) ...
Great job!
lst type(lst) type(lst[1])
Network Architecture The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide rec...
learning_rate = 0.001 inputs_ = tf.placeholder(tf.float32, shape = (None, 28, 28, 1), name = 'inputs') targets_ = tf.placeholder(tf.float32, shape = (None, 28, 28, 1), name = 'targets') ### Encoder conv1 = tf.layers.conv2d(inputs_, 16, (3, 3), padding = 'same', activation = tf.nn.relu) # Now 28x28x16 maxpool1 = tf.lay...
autoencoder/Convolutional_Autoencoder.ipynb
chusine/dlnd
mit
Training As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
sess = tf.Session() epochs = 20 batch_size = 200 sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) imgs = batch[0].reshape((-1, 28, 28, 1)) batch_cost, _ = sess.run([cost, opt...
Denoising As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then cl...
learning_rate = 0.001 inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs') targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets') ### Encoder conv1 = tf.layers.conv2d(inputs_, 32, (3, 3), padding = 'same', activation = tf.nn.relu) # Now 28x28x32 maxpool1 = tf.layers.max_pooling2d(co...
To test different clustering methods we need sample data. The scikit-learn module has built-in functions to create it. We will use make_classification() to create a dataset of 1000 points with 2 clusters.
# define dataset X, y = make_classification(n_samples=1000, n_features=2, n_informative=2, n_redundant=0, n_clusters_per_class=1, random_state=4) # create scatter plot for samples from each class for class_value in range(2): # get row indexes for samples with this class row_ix = where(y == class_value) # create sca...
english/data_processing/lessons/ml_clustering.ipynb
OSGeoLabBp/tutorials
cc0-1.0
Now let's apply the different clustering algorithms on the dataset! Affinity propagation The method takes as input measures of similarity between pairs of data points. Real-valued messages are exchanged between data points until a high-quality set of exemplars and corresponding clusters gradually emerges.
from sklearn.cluster import AffinityPropagation from numpy import unique # define the model model = AffinityPropagation(damping=0.9) # fit the model model.fit(X) # assign a cluster to each example yhat = model.predict(X) # retrieve unique clusters clusters = unique(yhat) # create scatter plot for samples from each clu...
Agglomerative clustering It is a type of hierarchical clustering, which is a general family of clustering algorithms that build nested clusters by merging or splitting them successively. This hierarchy of clusters is represented as a tree. The root of the tree is the unique cluster that gathers all the samples, the leave...
from sklearn.cluster import AgglomerativeClustering # define the model model = AgglomerativeClustering(n_clusters=2) # fit model and predict clusters yhat = model.fit_predict(X) # retrieve unique clusters clusters = unique(yhat) # create scatter plot for samples from each cluster for cluster in clusters: # get row in...
BIRCH BIRCH clustering (Balanced Iterative Reducing and Clustering using Hierarchies) involves constructing a tree structure from which cluster centroids are extracted. BIRCH incrementally and dynamically clusters incoming multi-dimensional metric data points to try to produce the best quality clustering with the avail...
from sklearn.cluster import Birch model = Birch(threshold=0.01, n_clusters=2) # fit the model model.fit(X) # assign a cluster to each example yhat = model.predict(X) # retrieve unique clusters clusters = unique(yhat) # create scatter plot for samples from each cluster for cluster in clusters: # get row indexes for sa...
DBSCAN DBSCAN clustering (Density-Based Spatial Clustering of Applications with Noise) involves finding high-density areas in the domain and expanding those areas of the feature space around them as clusters. It can be used on large databases with good efficiency. The usage of the DBSCAN is not complicated, it requires...
from sklearn.cluster import DBSCAN from matplotlib import pyplot # define the model model = DBSCAN(eps=0.30, min_samples=9) # fit model and predict clusters yhat = model.fit_predict(X) # retrieve unique clusters clusters = unique(yhat) # create scatter plot for samples from each cluster for cluster in clusters: # get...
k-Means clustering It may be the most widely known clustering method. During the creation of the clusters the algorithm tries to minimize the variance within each cluster. To use it we have to define the number of clusters.
from sklearn.cluster import KMeans # define the model model = KMeans(n_clusters=2) # fit the model model.fit(X) # assign a cluster to each example yhat = model.predict(X) # retrieve unique clusters clusters = unique(yhat) # create scatter plot for samples from each cluster for cluster in clusters: # get row indexes ...
There is a modified version of k-Means called Mini-Batch K-Means clustering. The difference between the two is that the updated version uses mini-batches of samples rather than the entire dataset. This makes it faster for large datasets and more robust to statistical noise. Mean shift clustering The algorithm is findin...
from sklearn.cluster import MeanShift # define the model model = MeanShift() # fit model and predict clusters yhat = model.fit_predict(X) # retrieve unique clusters clusters = unique(yhat) # create scatter plot for samples from each cluster for cluster in clusters: # get row indexes for samples with this cluster row...
The main characteristics of the clustering algorithms Task - Test the different clustering algorithms on different datasets! - Check and use scikit-learn's documentation to compare the algorithms! Applying ML based clustering algorithm on point cloud The presented clustering method can be useful when we would like ...
!wget -q https://github.com/OSGeoLabBp/tutorials/raw/master/english/data_processing/lessons/code/barnag_roofs.ply
Let's install Open3D!
!pip install open3d -q
After the installation import modules and display the point cloud!
import open3d as o3d import numpy as np from numpy import unique from numpy import where from sklearn.datasets import make_classification from sklearn.cluster import DBSCAN from matplotlib import pyplot pc = o3d.io.read_point_cloud('barnag_roofs.ply',format='ply') xyz = np.asarray(pc.points) # display the point clou...
Let's use DBSCAN on the imported point cloud.
# Save clusters as for cluster in clusters: # get row indexes for samples with this cluster row_ix = where(yhat == cluster) # create scatter of these samples pyplot.scatter(xyz[row_ix, 0], xyz[row_ix, 1], label=str(cluster)+' cluster') # export the clusters as a point cloud xyz_cluster = xyz[row_ix] p...
We have to install the autograd module first: pip install autograd
import autograd.numpy as np # Thinly-wrapped version of Numpy from autograd import grad
workshops/w7/Workshop6_ Auto-Differentiation.ipynb
eds-uga/csci4360-fa17
mit
EX1, Normal Numpy
import time def tanh(x): y = np.exp(-2.0 * x) return (1.0 - y) / (1.0 + y) start = time.time() grad_tanh = grad(tanh) print ("Gradient at x = 1.0\n", grad_tanh(1.0)) end = time.time() print("Operation time:\n", end-start)
EX2-1, Taylor approximation to sine function
def taylor_sine(x): ans = currterm = x i = 0 while np.abs(currterm) > 0.001: currterm = -currterm * x**2 / ((2 * i + 3) * (2 * i + 2)) ans = ans + currterm i += 1 return ans start = time.time() grad_sine = grad(taylor_sine) print ("Gradient of sin(pi):\n", grad_sine(np.pi)) ...
workshops/w7/Workshop6_ Auto-Differentiation.ipynb
eds-uga/csci4360-fa17
mit
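As a quick sanity check that needs no autograd at all, the derivative of the Taylor approximation at $\pi$ can be compared against $\cos(\pi) = -1$ using a central finite difference; the step size and tolerance below are illustrative choices:

```python
import math

def taylor_sine(x):
    # Same Taylor series as above, with a tighter stopping tolerance
    ans = currterm = x
    i = 0
    while abs(currterm) > 1e-8:
        currterm = -currterm * x**2 / ((2 * i + 3) * (2 * i + 2))
        ans = ans + currterm
        i += 1
    return ans

# Central finite difference approximates d/dx taylor_sine at x = pi
h = 1e-5
fd = (taylor_sine(math.pi + h) - taylor_sine(math.pi - h)) / (2 * h)
print(fd)  # should be close to cos(pi) = -1
```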
EX2-2, Second-order gradient
start = time.time()
# second-order
ggrad_sine = grad(grad_sine)
print("Gradient of second-order:\n", ggrad_sine(np.pi))
end = time.time()
print("Operation time:\n", end - start)
workshops/w7/Workshop6_ Auto-Differentiation.ipynb
eds-uga/csci4360-fa17
mit
EX3, Logistic Regression A common use case for automatic differentiation is to train a probabilistic model. A simple but complete example of specifying and training a logistic regression model for binary classification:
def sigmoid(x):
    return 0.5 * (np.tanh(x) + 1)

def logistic_predictions(weights, inputs):
    # Outputs probability of a label being true according to logistic model.
    return sigmoid(np.dot(inputs, weights))

def training_loss(weights):
    # Training loss is the negative log-likelihood of the training labels.
    ...
workshops/w7/Workshop6_ Auto-Differentiation.ipynb
eds-uga/csci4360-fa17
mit
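Since the training cell above is cut off, here is a rough, self-contained sketch of the same model in plain NumPy. The gradient of the negative log-likelihood is written out by hand ($\nabla_w = X^\top(\sigma(Xw) - y)$) — with autograd you would simply call `grad(training_loss)` instead. The toy data, learning rate, and step count are all made up for illustration:

```python
import numpy as np

def sigmoid(x):
    return 0.5 * (np.tanh(x) + 1)

def logistic_predictions(weights, inputs):
    # Probability of a label being true according to the logistic model
    return sigmoid(np.dot(inputs, weights))

def training_loss(weights, inputs, targets):
    # Negative log-likelihood of the training labels
    preds = logistic_predictions(weights, inputs)
    label_probs = preds * targets + (1 - preds) * (1 - targets)
    return -np.sum(np.log(label_probs))

# Toy data (illustrative only)
inputs = np.array([[0.52, 1.12, 0.77],
                   [0.88, -1.08, 0.15],
                   [0.52, 0.06, -1.30],
                   [0.74, -2.49, 1.39]])
targets = np.array([1.0, 1.0, 0.0, 1.0])

weights = np.zeros(3)
for _ in range(100):
    # Hand-derived gradient of the loss: X^T (sigmoid(Xw) - y)
    grad_w = inputs.T @ (logistic_predictions(weights, inputs) - targets)
    weights -= 0.1 * grad_w

print("final loss:", training_loss(weights, inputs, targets))
```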
Of the three parts of this app, part 2 should be very familiar by now -- load some taxi dropoff locations, declare a Points object, datashade them, and set some plot options. Step 1 is new: Instead of loading the bokeh extension using hv.extension('bokeh'), we get a direct handle on a bokeh renderer using the hv.render...
# Exercise: Modify the app to display the pickup locations and add a tilesource, then run the app with bokeh serve
# Tip: Refer to the previous notebook
notebooks/08-deploying-bokeh-apps.ipynb
ioam/scipy-2017-holoviews-tutorial
bsd-3-clause
Iteratively building a bokeh app in the notebook The above app script can be built entirely without using Jupyter, though we displayed it here using Jupyter for convenience in the tutorial. Jupyter notebooks are also often helpful when initially developing such apps, allowing you to quickly iterate over visualizations...
import holoviews as hv
import geoviews as gv
import dask.dataframe as dd
from holoviews.operation.datashader import datashade, aggregate, shade
from bokeh.models import WMTSTileSource

hv.extension('bokeh', logo=False)

usecols = ['tpep_pickup_datetime', 'dropoff_x', 'dropoff_y']
ddf = dd.read_csv('../data/nyc_taxi.cs...
notebooks/08-deploying-bokeh-apps.ipynb
ioam/scipy-2017-holoviews-tutorial
bsd-3-clause
Next we define a Counter stream which we will use to select taxi trips by hour.
stream = hv.streams.Counter()
points = hv.Points(ddf, kdims=['dropoff_x', 'dropoff_y'])
dmap = hv.DynamicMap(lambda counter: points.select(hour=counter % 24).relabel('Hour: %s' % (counter % 24)),
                    streams=[stream])
shaded = datashade(dmap)

hv.opts('RGB [width=800, height=600, xaxis=None, yaxis=None]'...
notebooks/08-deploying-bokeh-apps.ipynb
ioam/scipy-2017-holoviews-tutorial
bsd-3-clause
Up to this point, we have a normal HoloViews notebook that we could display using Jupyter's rich display of overlay, as we would with any notebook. But having come up with the objects we want interactively in this way, we can now display the result as a Bokeh app, without leaving the notebook. To do that, first ed...
renderer = hv.renderer('bokeh')
server = renderer.app(overlay, show=True, websocket_origin='localhost:8888')
notebooks/08-deploying-bokeh-apps.ipynb
ioam/scipy-2017-holoviews-tutorial
bsd-3-clause
We could stop here, having launched an app, but so far the app will work just the same as in the normal Jupyter notebook, responding to user inputs as they occur. Having defined a Counter stream above, let's go one step further and add a series of periodic events that will let the visualization play on its own even wi...
dmap.periodic(1)
notebooks/08-deploying-bokeh-apps.ipynb
ioam/scipy-2017-holoviews-tutorial
bsd-3-clause
You can stop this ongoing process by clearing the cell displaying the app. Now let's open the text editor again and make this edit to a separate app, which we can then launch using Bokeh Server separately from this notebook.
# Exercise: Copy the example above into periodic_app.py and modify it so it can be run with bokeh serve
# Hint: Use hv.renderer and renderer.server_doc
# Note that you have to run periodic **after** creating the bokeh document
notebooks/08-deploying-bokeh-apps.ipynb
ioam/scipy-2017-holoviews-tutorial
bsd-3-clause
Combining HoloViews with bokeh models Now for a last hurrah let's put everything we have learned to good use and create a bokeh app with it. This time we will go straight to a Python script containing the app. If you run the app with bokeh serve --show ./apps/player_app.py from your terminal you should see something li...
# Advanced Exercise: Add a histogram to the bokeh layout next to the datashaded plot
# Hint: Declare the histogram like this: hv.operation.histogram(aggregated, bin_range=(0, 20))
#       then use renderer.get_plot and hist_plot.state and add it to the layout
notebooks/08-deploying-bokeh-apps.ipynb
ioam/scipy-2017-holoviews-tutorial
bsd-3-clause
Model Inputs First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks. Exercise: Finish the model_inpu...
def model_inputs(real_dim, z_dim):
    inputs_real = tf.placeholder(tf.float32, [None, real_dim])
    inputs_z = tf.placeholder(tf.float32, [None, z_dim])
    return inputs_real, inputs_z
gan_mnist/Intro_to_GANs_Exercises.ipynb
tanmay987/deepLearning
mit
Generator network Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero ...
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
    ''' Build the generator network.

        Arguments
        ---------
        z : Input tensor for the generator
        out_dim : Shape of the generator output
        n_units : Number of units in hidden layer
        reuse : Reuse the variables...
gan_mnist/Intro_to_GANs_Exercises.ipynb
tanmay987/deepLearning
mit
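Independent of TensorFlow, the leaky ReLU described above can be sketched in plain NumPy — for $\alpha < 1$ it is simply $f(x) = \max(\alpha x, x)$, so negative inputs are scaled down rather than zeroed. The `alpha` default mirrors the generator function's signature; the sample inputs are illustrative:

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # f(x) = x for x > 0, alpha * x otherwise;
    # for alpha < 1 this equals max(alpha * x, x)
    return np.maximum(alpha * x, x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(leaky_relu(x))  # negative inputs are scaled by alpha, not clipped to zero
```

Because the negative side has slope `alpha` instead of zero, gradients can still flow backwards through inactive units.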
Discriminator The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer. Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a...
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
    ''' Build the discriminator network.

        Arguments
        ---------
        x : Input tensor for the discriminator
        n_units: Number of units in hidden layer
        reuse : Reuse the variables with tf.variable_scope
        alpha : leak pa...
gan_mnist/Intro_to_GANs_Exercises.ipynb
tanmay987/deepLearning
mit
Build network Now we're building the network from the functions defined above. First, we get our inputs, input_real and input_z, from model_inputs using the sizes of the input and z. Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes. Th...
tf.reset_default_graph()

# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)

# Build the model
g_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)
# g_model is the generator output

d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size...
gan_mnist/Intro_to_GANs_Exercises.ipynb
tanmay987/deepLearning
mit
Discriminator and Generator Losses Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_wit...
# Calculate losses
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
                                                                     labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
                                                                     labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake

g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
                                                                labels=tf.ones_like(d_logits_fake)))
gan_mnist/Intro_to_GANs_Exercises.ipynb
tanmay987/deepLearning
mit
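To make the d_loss_real term concrete, here is a small NumPy check of what tf.nn.sigmoid_cross_entropy_with_logits computes, using TensorFlow's documented numerically stable form, together with the same one-sided label smoothing (real labels scaled by 1 - smooth). The logits and smoothing value are illustrative:

```python
import numpy as np

def sigmoid_cross_entropy_with_logits(logits, labels):
    # Numerically stable form used by TensorFlow:
    #   max(x, 0) - x*z + log(1 + exp(-|x|))
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

smooth = 0.1
d_logits_real = np.array([2.0, -1.0, 0.5])            # made-up discriminator logits
labels = np.ones_like(d_logits_real) * (1 - smooth)   # smoothed "real" labels
d_loss_real = np.mean(sigmoid_cross_entropy_with_logits(d_logits_real, labels))
print(d_loss_real)
```

The stable form is algebraically equal to the naive cross-entropy $-z\log\sigma(x) - (1-z)\log(1-\sigma(x))$ but avoids overflow for large $|x|$.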
Optimizers We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph. For the gener...
# Optimizers
learning_rate = 0.002

# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]

d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
gan_mnist/Intro_to_GANs_Exercises.ipynb
tanmay987/deepLearning
mit
Training
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for e in range(epochs):
        for ii in range(mnist.train.num_examples // batch_size):
            batch = mnist.train.next_batch(batch_size)
            ...
gan_mnist/Intro_to_GANs_Exercises.ipynb
tanmay987/deepLearning
mit
It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number-like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3. Sampling from the generator We can also get completely new images from the generator by us...
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
    sample_z = np.random.uniform(-1, 1, size=(16, z_size))
    gen_samples = sess.run(
        generator(input_z, input_size, reuse=True),
        feed_dict={input_...
gan_mnist/Intro_to_GANs_Exercises.ipynb
tanmay987/deepLearning
mit
Generate distorted image First, we generate a distorted image from an example line.
image = 1 - ocrolib.read_image_gray("../tests/010030.bin.png")
image = interpolation.affine_transform(image, array([[0.5, 0.015], [-0.015, 0.5]]),
                                       offset=(-30, 0), output_shape=(200, 1400), order=0)
imshow(image, cmap=cm.gray)
print(image.shape)
doc/line-normalization.ipynb
zuphilip/ocropy
apache-2.0
Load Normalizer and measure the image
#reload(lineest)
mv = ocrolib.lineest.CenterNormalizer()
mv.measure(image)
print(mv.r)
plot(mv.center)
plot(mv.center + mv.r)
plot(mv.center - mv.r)
imshow(image, cmap=cm.gray)
doc/line-normalization.ipynb
zuphilip/ocropy
apache-2.0
Dewarp The dewarping of the text line (first image) tries to find the center (blue curve) and then cut out slices with some fixed radius around the center. See this illustration <img width="50%" src="https://cloud.githubusercontent.com/assets/5199995/25406275/6905c7ce-2a06-11e7-89e0-ca740cd8a21c.png"/>
dewarped = mv.dewarp(image)
print(dewarped.shape)
imshow(dewarped, cmap=cm.gray)
imshow(dewarped[:, :320], cmap=cm.gray, interpolation='nearest')
doc/line-normalization.ipynb
zuphilip/ocropy
apache-2.0
Normalize This will also dewarp the image but additionally normalize the image size (default x_height is 48).
normalized = mv.normalize(image, order=0)
print(normalized.shape)
imshow(normalized, cmap=cm.gray)
doc/line-normalization.ipynb
zuphilip/ocropy
apache-2.0
Objects In Python, everything is an object!
# Creating a list
lst_num = ["Data", "Science", "Academy", "Nota", 10, 10]

# The list lst_num is an object, an instance of Python's list class
type(lst_num)

lst_num.count(10)

# We use the type function to check the type of an object
print(type(10))
print(type([]))
print(type(()))
print(type({}))
print(type('a'...
Cap05/Notebooks/DSA-Python-Cap05-02-Objetos.ipynb
dsacademybr/PythonFundamentos
gpl-3.0
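The claim that everything in Python is an object can be checked directly: even integer literals have methods, functions carry attributes and can be passed around, and every value is an instance of object. The attribute name below is purely illustrative:

```python
# Even literals are objects with a class and methods
print(type(10))            # <class 'int'>
print((10).bit_length())   # ints have methods: 10 is 0b1010, so 4 bits

def greet():
    return "hello"

# Functions are objects too: they accept attributes and can be rebound
greet.author = "illustrative attribute"
alias = greet
print(alias(), greet.author)

# Every object is an instance of (a subclass of) object
print(isinstance(10, object), isinstance(greet, object))
```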
Step two This time we will not send the values of $x$ to the kernel, but compute them on the fly, from the formula: $$ x = x_0 + i \frac{\Delta x}{N}$$
import pycuda.driver as cuda
import pycuda.autoinit
from pycuda.compiler import SourceModule

mod = SourceModule("""
__global__ void sin1da(float *y)
{
    int idx = threadIdx.x + blockDim.x*blockIdx.x;
    float x = -3.0f+6.0f*float(idx)/blockDim.x;
    y[idx] = sinf(powf(x,2.0f));
}
""")
Nx = 1...
CUDA/iCSE_PR_map2d.ipynb
marcinofulus/ProgramowanieRownolegle
gpl-3.0
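The index-to-coordinate mapping the kernel uses, $x = -3 + 6\,i/N$, can be checked on the host with NumPy; `Nx` below is an assumed thread count, not taken from the truncated cell:

```python
import numpy as np

Nx = 128  # assumed number of threads
i = np.arange(Nx)
x = -3.0 + 6.0 * i / Nx   # same formula as in the kernel: x = x0 + i * dx / N
y = np.sin(x**2)
print(x[0], x[-1])  # samples [-3, 3) -- the right endpoint is excluded
```

Note that the grid covers the half-open interval: the last sample sits one step short of $x = 3$.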
Step three We will sample a function of two variables using a kernel launch with $N_x$ threads per block and $N_y$ blocks. Please pay particular attention to the lines: int idx = threadIdx.x; int idy = blockIdx.x; which make use of the appropriate CUDA indices, and to the way the global ...
import pycuda.driver as cuda
import pycuda.autoinit
from pycuda.compiler import SourceModule

mod = SourceModule("""
__global__ void sin2d(float *z)
{
    int idx = threadIdx.x;
    int idy = blockIdx.x;
    int gid = idx + blockDim.x*idy;
    float x = -4.0f+6.0f*float(idx)/blockDim.x;
    float y ...
CUDA/iCSE_PR_map2d.ipynb
marcinofulus/ProgramowanieRownolegle
gpl-3.0
Let's compare the results:
plt.contourf(XX, YY, z.reshape(Ny, Nx))
plt.contourf(XX, YY, np.sin(XX**2 + YY**2))
CUDA/iCSE_PR_map2d.ipynb
marcinofulus/ProgramowanieRownolegle
gpl-3.0
Step four This algorithm is not advantageous, because the block size determines the size of the grid on which we sample the function. It would be optimal to perform the operations in blocks of a given size, independently of the number of samples in a given region. The example below uses a two-dimensional structure for both the block and the grid. ...
import pycuda.driver as cuda
import pycuda.autoinit
from pycuda.compiler import SourceModule

mod = SourceModule("""
__global__ void sin2da(float *z)
{
    int ix = threadIdx.x + blockIdx.x * blockDim.x;
    int iy = threadIdx.y + blockIdx.y * blockDim.y;
    int gid = ix + iy * blockDim.x * gridDi...
CUDA/iCSE_PR_map2d.ipynb
marcinofulus/ProgramowanieRownolegle
gpl-3.0
Boat race Given a river (say a sinusoid), find the total length actually rowed over a given interval: $$f(x) = A \sin x$$
import numpy
import matplotlib.pyplot as plt

x = numpy.linspace(0, 4 * numpy.pi)
plt.plot(x, 2.0 * numpy.sin(x))
plt.title("River Sine")
plt.xlabel("x")
plt.ylabel("y")
plt.axis([0, 4 * numpy.pi, -2, 2])
plt.show()
0_intro_numerical_methods.ipynb
btw2111/intro-numerical-methods
mit
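The length actually rowed is the arc length $L = \int_a^b \sqrt{1 + f'(x)^2}\,dx$ with $f'(x) = A\cos x$. A simple numerical sketch for $A = 2$ over $[0, 4\pi]$, using a trapezoidal sum (the sample count is an arbitrary choice):

```python
import numpy as np

A = 2.0
x = np.linspace(0, 4 * np.pi, 10001)
# Integrand of the arc-length formula: sqrt(1 + f'(x)^2), with f'(x) = A*cos(x)
integrand = np.sqrt(1.0 + (A * np.cos(x))**2)
# Trapezoidal rule: sum of interval widths times average endpoint heights
dx = np.diff(x)
L = np.sum(0.5 * (integrand[:-1] + integrand[1:]) * dx)
print("arc length:", L)
print("straight line:", 4 * np.pi)  # the rowed path is noticeably longer
```

Since the integrand lies between $1$ and $\sqrt{1 + A^2}$, the result must fall strictly between $4\pi$ and $\sqrt{5}\cdot 4\pi$.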