DNA processing 1.1 Solution: Random sequence v1
from random import choice

# Function definition
def cadena_al_azar(n):
    adn = ""
    for i in range(n):
        adn += choice("acgt")
    return adn

# Usage examples
print cadena_al_azar(1)
print cadena_al_azar(1)
print cadena_al_azar(1)
print cadena_al_azar(1)
print cadena_al_azar(10)
print cadena_al_azar(10)
pri...
ipynb/23-ProcesamientoDeTexto/Texto.ipynb
usantamaria/iwi131
cc0-1.0
DNA processing 1.1 Solution: Random sequence v2
from random import choice

# Function definition
def cadena_al_azar(n):
    bases = []
    for i in range(n):
        bases.append(choice("acgt"))
    adn = "".join(bases)
    return adn

# Usage examples
print cadena_al_azar(1)
print cadena_al_azar(1)
print cadena_al_azar(1)
print cadena_al_azar(1)
print cadena_al_aza...
ipynb/23-ProcesamientoDeTexto/Texto.ipynb
usantamaria/iwi131
cc0-1.0
Text processing. DNA processing: complementary sequence. Write the function complementaria(s) that returns the complementary strand of s: the complement of "a" is "t" (and vice versa), and the complement of "c" is "g" (and vice versa). Python cadena = 'cagcccatgaggcagggtg' print complementaria(cadena...
# Student solution
def cadena_(n):
    adn = ""
    for i in range(n):
        adn += choice("acgt")
    return adn
ipynb/23-ProcesamientoDeTexto/Texto.ipynb
usantamaria/iwi131
cc0-1.0
Text processing. Solution: Complementary sequence v1
def complementaria(adn):
    rna = ""
    for base in adn:
        if base == "a":
            rna += "t"
        elif base == "t":
            rna += "a"
        elif base == "c":
            rna += "g"
        else:
            rna += "c"
    return rna

adn = cadena_al_azar(20)
print adn
print complementaria(adn)
ipynb/23-ProcesamientoDeTexto/Texto.ipynb
usantamaria/iwi131
cc0-1.0
Text processing. Solution: Complementary sequence v2
def complementaria(adn):
    pares = {"a": "t", "t": "a", "c": "g", "g": "c"}
    rna = ""
    for base in adn:
        rna += pares[base]
    return rna

adn = cadena_al_azar(20)
print adn
print complementaria(adn)
ipynb/23-ProcesamientoDeTexto/Texto.ipynb
usantamaria/iwi131
cc0-1.0
Text processing. Solution: Complementary sequence v3
def complementaria(adn):
    rna = adn.replace("a","T").replace("t","A").replace("c","G").replace("g","C")
    return rna.lower()

adn = cadena_al_azar(20)
print adn
print complementaria(adn)
ipynb/23-ProcesamientoDeTexto/Texto.ipynb
usantamaria/iwi131
cc0-1.0
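As a variation on v3 (not in the original notebook), Python 3's `str.maketrans`/`str.translate` can perform all four substitutions in one pass without the uppercase detour; the function name `complement_translate` is illustrative, and lowercase "acgt" input is assumed:

```python
# Illustrative alternative (not from the notebook): a one-pass translation table.
COMPLEMENT = str.maketrans("acgt", "tgca")

def complement_translate(adn):
    """Return the complementary strand, assuming lowercase acgt input (Python 3)."""
    return adn.translate(COMPLEMENT)

print(complement_translate("cagcccatgaggcagggtg"))
```

Because translation happens in a single pass, there is no risk of one replacement clobbering the result of a previous one, which is what the upper/lowercase trick in v3 guards against.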
I. Creating a NetworkModel
# Create the model from a PSSTCase, optionally passing a sel_bus
m = NetworkModel(case, sel_bus='Bus1')
docs/notebooks/interactive_visuals/Demo.ipynb
kdheepak/psst
mit
In the __init__, the NetworkModel...
display(m.case)     # saves the case
display(m.network)  # creates a PSSTNetwork
display(m.G)        # stores the networkX graph (an attribute of the PSSTNetwork)
display(m.model)    # builds/solves the model

# Creates df of x,y positions for each node (bus, load, gen), based off self.network.positions
m.all_pos.head(...
docs/notebooks/interactive_visuals/Demo.ipynb
kdheepak/psst
mit
The sel_bus and view_buses attributes
# `sel_bus` is a single bus, upon which the visualization is initially centered.
# It can be changed programmatically, or via the dropdown menu.
m.sel_bus

# At first, it is the only bus in view_buses.
# More buses get added to view_buses as they are clicked.
m.view_buses
docs/notebooks/interactive_visuals/Demo.ipynb
kdheepak/psst
mit
II. Creating a NetworkView from the model
# Create the view from the model
# (It can, alternatively, be created from a case.)
v = NetworkView(model=m)
v
docs/notebooks/interactive_visuals/Demo.ipynb
kdheepak/psst
mit
III. Generating the x,y data for the view. Whenever the view_buses list gets changed, it triggers the callback _callback_view_change. This function first calls subset_positions and subset_edges. Then the subsetted DataFrames get segregated into separate ones for bus, gen, and load. Finally, the x,y coordinates are extract...
# The subsetting that occurs is all based on `view_buses`
m.view_buses
docs/notebooks/interactive_visuals/Demo.ipynb
kdheepak/psst
mit
The subset_positions() call
# subset_positions creates self.pos
m.pos
docs/notebooks/interactive_visuals/Demo.ipynb
kdheepak/psst
mit
The function looks like this:

python
def subset_positions(self):
    """Subset self.all_pos to include only nodes adjacent to those in view_buses list."""
    nodes = [list(self.G.adj[item].keys()) for item in self.view_buses]  # get list of nodes adj to selected buses
    nodes = set(itertools.chain.from_iterable(node...
# subset_edges creates self.edges
m.edges
docs/notebooks/interactive_visuals/Demo.ipynb
kdheepak/psst
mit
The function looks like this:

python
def subset_edges(self):
    """Subset all_edges, with G.edges() info, based on view_buses list."""
    edge_list = self.G.edges(nbunch=self.view_buses)  # get edges of view_buses as list of tuples
    edges_fwd = self.all_edges.loc[edge_list]  # query all_pos with edge_list
    edge...
m.view_buses = ['Bus2','Bus3']
edge_list = m.G.edges(nbunch=m.view_buses)  # get edges of view_buses as list of tuples
edge_list
edges_fwd = m.all_edges.loc[edge_list]  # query all_pos with edge_list
edges_fwd
edge_list_rev = [tuple(reversed(tup)) for tup in edge_list]  # reverse order of each tuple
edge_list_rev
e...
docs/notebooks/interactive_visuals/Demo.ipynb
kdheepak/psst
mit
Segregating DataFrames and extracting data. The DataFrames are segregated into bus, gen, and load, using the names in case.bus, case.gen, and case.load. x,y data is extracted, ready to be plotted by NetworkView. Extracting bus data looks like this: python bus_pos = self.pos[self.pos.index.isin(self.case.bus_name)] sel...
print("x_vals: ", m.bus_x_vals)
print("y_vals: ", m.bus_y_vals)
print("names: ", m.bus_names)
docs/notebooks/interactive_visuals/Demo.ipynb
kdheepak/psst
mit
Extracting branch data looks like this:

```python
edges = self.edges.reset_index()
_df = edges.loc[edges.start.isin(self.case.bus_name) & edges.end.isin(self.case.bus_name)]
self.bus_x_edges = [tuple(edge) for edge in _df[['start_x', 'end_x']].values]
self.bus_y_edges = [tuple(edge) for edge in _df[['start_y', 'end_y']...
```
print("bus_x_edges:")
print(m.bus_x_edges)
print("\nbus_y_edges:")
print(m.bus_y_edges)
docs/notebooks/interactive_visuals/Demo.ipynb
kdheepak/psst
mit
Data
tempsC = np.array([26, 27, 29, 31, 33, 35, 37])
voltages = np.array([2,3,6,7,9,11,12.5,14,16,18,20,22,23.5,26,27.5,29,31,32.5,34,36])
# Note: the following line overwrites the voltages array defined above.
voltages = np.array([1.826,3.5652,5.3995,7.2368,9.0761,10.8711,12.7109,14.5508,16.3461,18.1414,19.9816,21.822,23.6174,25.4577,27.253,29.0935,30.889,32.7924,34.5699,35.8716])
measured_ps...
ElectroOptics/MinimizeAttempt.ipynb
JAmarel/LiquidCrystals
mit
Calculate the Boltzmann Factor and the Partition Function $$ \text{Boltz() returns: } e^{\frac{-U}{k_b T}}\,\sin\theta $$
def Boltz(theta,phi,T,p0k,alpha,E):
    """Compute the integrand for the Boltzmann factor.

    Returns
    -------
    A function of theta,phi,T,p0k,alpha,E to be used within dblquad
    """
    return np.exp((1/T)*p0k*E*np.sin(theta)*np.cos(phi)*(1+alpha*E*np.cos(phi)))*np.sin(theta)
ElectroOptics/MinimizeAttempt.ipynb
JAmarel/LiquidCrystals
mit
Calculate the Tilt Angle $\psi$ $$ \text{numerator() returns: } \sin 2\theta\,\cos\phi\; e^{\frac{-U}{k_b T}}\,\sin\theta $$
def numerator(theta,phi,T,p0k,alpha,E):
    boltz = Boltz(theta,phi,T,p0k,alpha,E)
    return np.sin(2*theta)*np.cos(phi)*boltz
ElectroOptics/MinimizeAttempt.ipynb
JAmarel/LiquidCrystals
mit
$$ \text{denominator() returns: } \left(\cos^2\theta - \sin^2\theta\,\cos^2\phi\right) e^{\frac{-U}{k_b T}}\,\sin\theta $$
def denominator(theta,phi,T,p0k,alpha,E):
    boltz = Boltz(theta,phi,T,p0k,alpha,E)
    return ((np.cos(theta)**2) - ((np.sin(theta)**2) * (np.cos(phi)**2)))*boltz
ElectroOptics/MinimizeAttempt.ipynb
JAmarel/LiquidCrystals
mit
$$ \tan(2\psi) = \frac{\int_{\theta_{min}}^{\theta_{max}} \int_0^{2\pi} \sin 2\theta\,\cos\phi\; e^{\frac{-U}{k_b T}}\,\sin\theta\; d\theta\, d\phi}{\int_{\theta_{min}}^{\theta_{max}} \int_0^{2\pi} \left(\cos^2\theta - \sin^2\theta\,\cos^2\phi\right) e^{\frac{-U}{k_b T}}\,\sin\theta\; d\theta\, d\phi} $$
def compute_psi(T,p0k,alpha,E,thetamin,thetamax):
    """Computes the tilt angle (psi) by use of our tan(2psi) equation

    Returns
    -------
    Float: The statistical tilt angle with conditions T,p0k,alpha,E
    """
    avg_numerator, avg_numerator_error = dblquad(numerator, 0, 2*np.pi, lambda theta: thetam...
ElectroOptics/MinimizeAttempt.ipynb
JAmarel/LiquidCrystals
mit
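The tan(2ψ) ratio above can also be approximated without scipy, which makes the structure of the calculation easy to inspect. A minimal stdlib sketch using a nested trapezoidal rule; the names `dbl_trapz` and `compute_psi_sketch` are illustrative, not from the notebook:

```python
import math

def dbl_trapz(f, a, b, c, d, n=200):
    """Approximate the double integral of f(theta, phi) over [a,b] x [c,d]
    with a nested trapezoidal rule on an n x n grid."""
    ht, hp = (b - a) / n, (d - c) / n
    total = 0.0
    for i in range(n + 1):
        wt = 0.5 if i in (0, n) else 1.0          # trapezoid endpoint weight
        theta = a + i * ht
        for j in range(n + 1):
            wp = 0.5 if j in (0, n) else 1.0
            total += wt * wp * f(theta, c + j * hp)
    return total * ht * hp

def compute_psi_sketch(T, p0k, alpha, E, thetamin, thetamax):
    """Tilt angle from the tan(2*psi) ratio of the two integrals above."""
    def boltz(theta, phi):
        # Same integrand shape as Boltz(), written with the stdlib math module.
        U = p0k * E * math.sin(theta) * math.cos(phi) * (1 + alpha * E * math.cos(phi))
        return math.exp(U / T) * math.sin(theta)
    num = dbl_trapz(lambda t, p: math.sin(2*t) * math.cos(p) * boltz(t, p),
                    thetamin, thetamax, 0.0, 2*math.pi)
    den = dbl_trapz(lambda t, p: (math.cos(t)**2 - math.sin(t)**2 * math.cos(p)**2) * boltz(t, p),
                    thetamin, thetamax, 0.0, 2*math.pi)
    return 0.5 * math.atan2(num, den)
```

With E = 0 the numerator's cos φ factor integrates to zero over a full period, so the sketch returns ψ ≈ 0, a useful sanity check before fitting.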
Least-Squares Fitting of $\alpha$ and $\rho_0$
def compute_error(xo,fields,T,thetamin,thetamax,measured_psi):
    """Computes the squared error for a pair of parameters by comparing it
    to all measured tilt angles at one temperature. This will be used with
    the minimization function; xo is a point that the minimization checks.

    Parameters/Conditions
    ...
ElectroOptics/MinimizeAttempt.ipynb
JAmarel/LiquidCrystals
mit
It might be better to use the minimization function individually for each temperature range. The minimization function returns a minimization object, which gives extra information about the results. The two important entries are fun and x: fun is the scalar value of the function that is being minimized. In our case fu...
def minimize_func(guess,fields,T,thetamin,thetamax,measured_psi,bnds):
    """A utility function that will help construct the alpha and p0 arrays later.
    Uses the imported minimize function and compute_error to best fit our
    parameters at a temperature.

    Parameters/Conditions
    ----------
    guess: ...
ElectroOptics/MinimizeAttempt.ipynb
JAmarel/LiquidCrystals
mit
Build an artificial dataset: starting from the string 'abcdefghijklmnopqrstuvwxyz', iteratively generate strings by swapping two characters at random positions. In this way, instances become progressively more dissimilar.
import random

def make_data(size):
    text = ''.join([str(unichr(97+i)) for i in range(26)])
    seqs = []

    def swap_two_characters(seq):
        '''define a function that swaps two characters at random positions in a string '''
        line = list(seq)
        id_i = random.randint(0,len(line)-1)
        id_j = ...
examples/Sequence_example.ipynb
bgruening/EDeN
gpl-3.0
Define a function that builds a graph from a string, i.e. the path graph with the characters as node labels.
import networkx as nx

def sequence_to_graph(seq):
    '''convert a sequence into an EDeN 'compatible' graph,
    i.e. a graph with the attribute 'label' for every node and edge'''
    G = nx.Graph()
    for id, character in enumerate(seq):
        G.add_node(id, label=character)
        if id > 0:
            G.add_ed...
examples/Sequence_example.ipynb
bgruening/EDeN
gpl-3.0
Make a generator that yields graphs: generators are 'good' as they allow functional composition.
def pre_process(iterable):
    for seq in iterable:
        yield sequence_to_graph(seq)
examples/Sequence_example.ipynb
bgruening/EDeN
gpl-3.0
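The lazy-composition idea can be demonstrated with a stdlib-only stand-in for sequence_to_graph (an adjacency-list dict instead of a networkx graph; the name `sequence_to_path_graph` is illustrative):

```python
def sequence_to_path_graph(seq):
    """Represent the path graph of a string as {node_id: (label, neighbors)}."""
    return {i: (ch, [j for j in (i - 1, i + 1) if 0 <= j < len(seq)])
            for i, ch in enumerate(seq)}

def pre_process(iterable):
    # Lazily convert each sequence; nothing is built until the consumer asks.
    for seq in iterable:
        yield sequence_to_path_graph(seq)

graphs = pre_process(["abc", "de"])
first = next(graphs)  # only the first sequence has been converted at this point
```

Because `pre_process` is a generator, it can be chained with further generator stages (filtering, annotation) without materializing the whole dataset in memory.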
initialize the vectorizer object with the desired 'resolution'
%%time
from eden.graph import Vectorizer
vectorizer = Vectorizer(complexity=4)
examples/Sequence_example.ipynb
bgruening/EDeN
gpl-3.0
obtain an iterator over the sequences processed into graphs
%%time
graphs = pre_process(seqs)
examples/Sequence_example.ipynb
bgruening/EDeN
gpl-3.0
compute the vector encoding of each instance in a sparse data matrix
%%time
X = vectorizer.transform(graphs)
print 'Instances: %d ; Features: %d with an avg of %d features per instance' % (X.shape[0], X.shape[1], X.getnnz()/X.shape[0])
examples/Sequence_example.ipynb
bgruening/EDeN
gpl-3.0
compute the pairwise similarity as the dot product between the vector representations of each sequence
from sklearn import metrics
K = metrics.pairwise.pairwise_kernels(X, metric='linear')
print K
examples/Sequence_example.ipynb
bgruening/EDeN
gpl-3.0
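For intuition, the linear kernel computed by pairwise_kernels is simply the matrix of pairwise dot products. A minimal pure-Python sketch on dense vectors (not EDeN's sparse matrix; the name `linear_kernel` is illustrative):

```python
def linear_kernel(X):
    """Gram matrix K[i][j] = dot(X[i], X[j]) for a list of dense vectors."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return [[dot(u, v) for v in X] for u in X]

# Three toy feature vectors: orthogonal pair plus their sum.
K = linear_kernel([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
```

The matrix is symmetric, and its diagonal holds each vector's squared norm, which is why the heatmap below has its brightest values on the diagonal.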
Visualize it: a picture is worth a thousand words...
import pylab as plt
plt.figure(figsize=(8,8))
img = plt.imshow(K, interpolation='none', cmap=plt.get_cmap('YlOrRd'))
plt.show()
examples/Sequence_example.ipynb
bgruening/EDeN
gpl-3.0
2. Print all the numbers from 0 to 4:
for x in range(5):
    print(x)
Kursteilnehmer/Sven Millischer/06 /01 Rückblick For-Loop-Übungen.ipynb
barjacks/pythonrecherche
mit
3. Print the numbers 3, 4, 5:
for x in range(3, 6):
    print(x)
Kursteilnehmer/Sven Millischer/06 /01 Rückblick For-Loop-Übungen.ipynb
barjacks/pythonrecherche
mit
4. Build a for loop in which you print all the even numbers that are lower than 237.
numbers = [ 951, 402, 984, 651, 360, 69, 408, 319, 601, 485, 980, 507, 725, 547, 544, 615, 83, 165, 141, 501, 263, 617, 865, 575, 219, 390, 984, 592, 236, 105, 942, 941, 386, 462, 47, 418, 907, 344, 236, 375, 823, 566, 597, 978, 328, 615, 953, 345, 399, 162, 758, 219, 918, 237, 412, 566, 826, 248, 866, ...
Kursteilnehmer/Sven Millischer/06 /01 Rückblick For-Loop-Übungen.ipynb
barjacks/pythonrecherche
mit
5. Add up all the numbers in the list.
sum(numbers)  # Solution:
Kursteilnehmer/Sven Millischer/06 /01 Rückblick For-Loop-Übungen.ipynb
barjacks/pythonrecherche
mit
6. Add up only the numbers that are even.
numbers = [ 951, 402, 984, 651, 360, 69, 408, 319, 601, 485, 980, 507, 725, 547, 544, 615, 83, 165, 141, 501, 263, 617, 865, 575, 219, 390, 984, 592, 236, 105, 942, 941, 386, 462, 47, 418, 907, 344, 236, 375, 823, 566, 597, 978, 328, 615, 953, 345, 399, 162, 758, 219, 918, 237, 412, 566, 826, 248, 866, ...
Kursteilnehmer/Sven Millischer/06 /01 Rückblick For-Loop-Übungen.ipynb
barjacks/pythonrecherche
mit
7. Use a for loop to print Hello World 5 times in a row.
for x in range(5):
    print("Hello World")  # Solution
Kursteilnehmer/Sven Millischer/06 /01 Rückblick For-Loop-Übungen.ipynb
barjacks/pythonrecherche
mit
8. Develop a program that finds all numbers between 2000 and 3200 that are divisible by 7 but not by 5. The result should be printed on a single line. Tip: have a look at Python's comparison operators.
l = []
for i in range(2000, 3200):
    # Note: the "not divisible by 5" test must be i % 5 != 0
    # (a condition like i % 5 >= 0 is true for every i).
    if (i % 7 == 0) and (i % 5 != 0):
        l.append(str(i))
print(','.join(l))
Kursteilnehmer/Sven Millischer/06 /01 Rückblick For-Loop-Übungen.ipynb
barjacks/pythonrecherche
mit
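The same filter can be written as a single list comprehension; a compact sketch of exercise 8:

```python
# Numbers in [2000, 3200) divisible by 7 but not by 5, as strings.
result = [str(i) for i in range(2000, 3200) if i % 7 == 0 and i % 5 != 0]
print(','.join(result))
```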
9. Write a for loop that converts the numbers in the following list from int to str.
lst = range(45, 99)
new_list = []
for elem in lst:
    new_list.append(str(elem))
print(new_list)
Kursteilnehmer/Sven Millischer/06 /01 Rückblick For-Loop-Übungen.ipynb
barjacks/pythonrecherche
mit
10. Now write a program that replaces every digit 4 with the letter A and every digit 5 with the letter B.
newnewlist = []
for elem in new_list:
    if '4' in elem:
        elem = elem.replace('4', 'A')
    if '5' in elem:
        elem = elem.replace('5', 'B')
    newnewlist.append(elem)
newnewlist
Kursteilnehmer/Sven Millischer/06 /01 Rückblick For-Loop-Übungen.ipynb
barjacks/pythonrecherche
mit
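Exercise 10 can also be solved in a single pass with `str.maketrans`; a sketch that rebuilds the list from exercise 9 so it stands alone:

```python
# Map digit characters to letters in one translation table.
table = str.maketrans({'4': 'A', '5': 'B'})
new_list = [str(n) for n in range(45, 99)]
newnewlist = [s.translate(table) for s in new_list]
print(newnewlist[:3])  # the first entries, e.g. 45 -> 'AB'
```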
The pyradi toolkit is a Python toolkit to perform optical and infrared computational radiometry (flux flow) calculations. Radiometry is the measurement and calculation of electromagnetic flux transfer for systems operating in the spectral region ranging from ultraviolet to microwaves. Indeed, these principles can be a...
display(Image(filename='images/PM236.jpg'))
03-Introduction-to-Radiometry.ipynb
NelisW/ComputationalRadiometry
mpl-2.0
Electromagnetic radiation can be modeled as a number of different phenomena: rays, electromagnetic waves, wavefronts, or particles. All of these models are mathematically related. The appropriate model to use depends on the task at hand. Either the electromagnetic wave model (developed by Maxwell) or the particle model ...
display(Image(filename='images/radiometry03.png'))
03-Introduction-to-Radiometry.ipynb
NelisW/ComputationalRadiometry
mpl-2.0
The photon is a massless elementary particle and acts as the energy carrier for the electromagnetic wave. Photon particles have discrete energy quanta proportional to the frequency of the electromagnetic energy, $Q = h\nu = hc/\lambda$, where $h$ is Planck's constant. Definitions The following figure (expanded from Pin...
display(Image(filename='images/radiometry01.png')) display(Image(filename='images/radiometry02.png'))
03-Introduction-to-Radiometry.ipynb
NelisW/ComputationalRadiometry
mpl-2.0
Spectral quantities See notebook 4 in this series, Introduction to computational radiometry with pyradi, for a detailed description of spectral quantities. Three spectral domains are commonly used: wavelength $\lambda$ in [m], frequency $\nu$ in [Hz], and wavenumber $\tilde{\nu}$ in [cm$^{-1}$] (the number of waves t...
display(Image(filename='images/radiometry04.png'))
03-Introduction-to-Radiometry.ipynb
NelisW/ComputationalRadiometry
mpl-2.0
Lambertian radiators A Lambertian source is, by definition, one whose radiance is completely independent of viewing angle. Many (but not all) rough and natural surfaces produce radiation whose radiance is approximately independent of the angle of observation. These surfaces generally have a rough texture at microscopi...
display(Image(filename='images/radiometry05.png'))
03-Introduction-to-Radiometry.ipynb
NelisW/ComputationalRadiometry
mpl-2.0
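The defining property can be checked numerically: for a Lambertian surface the radiant intensity falls off as cos θ, so radiance (intensity divided by the projected area A cos θ) is constant with viewing angle. A small stdlib sketch under assumed values (I0 and A are made-up numbers for illustration):

```python
import math

I0 = 10.0  # assumed on-axis radiant intensity [W/sr]
A = 2.0    # assumed emitting area [m^2]

def intensity(theta):
    """Lambert's cosine law for radiant intensity."""
    return I0 * math.cos(theta)

def radiance(theta):
    """Intensity divided by projected area: constant for a Lambertian source."""
    return intensity(theta) / (A * math.cos(theta))

angles = [0.0, 0.3, 0.6, 1.0]
values = [radiance(t) for t in angles]
```

The cos θ factors cancel, so every entry in `values` equals I0 / A, which is exactly the "radiance independent of viewing angle" statement above.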
Flux transfer through lossless and lossy mediums. A lossless medium is defined as a medium with no losses between the source and the receiver, such as a complete vacuum. This implies that no absorption, scattering, or any other attenuating mechanism is present in the medium. For a lossless medium the flux that flows bet...
display(Image(filename='images/radiometry06.png'))
03-Introduction-to-Radiometry.ipynb
NelisW/ComputationalRadiometry
mpl-2.0
Multi-spectral flux transfer The optical power leaving a source undergoes a succession of scaling or 'spectral filtering' processes as the flux propagates through the system, as shown below. This filtering varies with wavelength. Examples of such filters are source emissivity, atmospheric transmittance, optical filter...
display(Image(filename='images/radiometry07.png'))
03-Introduction-to-Radiometry.ipynb
NelisW/ComputationalRadiometry
mpl-2.0
Extend the above flux-transfer equation for multi-spectral calculations by noting that over a spectral width $d\lambda$ the radiance is given by $L = L_\lambda d\lambda$: $$ d^3 \Phi_\lambda= \frac{L_{01\lambda}\,dA_0\;\cos\theta_0\,dA_1\;\cos\theta_1 \;\tau_{01}\,d\lambda}{R_{01}^2}, $$ where $d^3\Phi_\lambda$ is the ...
try:
    import pyradi.ryutils as ryutils
    print(ryutils.VersionInformation('matplotlib,numpy,pyradi,scipy,pandas'))
except:
    print("pyradi.ryutils not found")
03-Introduction-to-Radiometry.ipynb
NelisW/ComputationalRadiometry
mpl-2.0
1. Source additional data from public sources This section will provide short examples to demonstrate the use of public data sources in your notebooks. 1.1 World Bank This example demonstrates how to source data from an external source to enrich your existing analyses. You will need to combine the data sources and add ...
# Load the grouped_geocoded dataset from Module 1.
df1 = pd.read_csv('data/grouped_geocoded.csv', index_col=[0])

# Prepare the student location dataset for use in this example.
# We use the geometrical center by obtaining the mean location for all observed coordinates per country.
df2 = df1.groupby('country').agg({'stu...
module_2/M2_NB1_SourcesOfData.ipynb
getsmarter/bda
mit
The column label index has multiple levels. Although this is useful metadata, it would be better to drop multilevel labeling and, instead, rename the columns to capture this information.
df3.columns = df3.columns.droplevel(1)
df3.rename(columns={'lat': "lat_mean", 'long': "long_mean"}, inplace=True)
df3.head()
module_2/M2_NB1_SourcesOfData.ipynb
getsmarter/bda
mit
Get and prepare the external dataset from the World Bank Remember you can use "wb.download?" (without the quotation marks) in a separate code cell to get help on the pandas-datareader method for remote data access of the World Bank Indicators. Refer to the pandas-datareader remote data access documentation for more de...
# After running this cell you can close the help by clicking on the close (`X`) button in the upper right corner
wb.download?

# The selected indicator is the world population, "SP.POP.TOTL", for the years from 2008 to 2016
wb_indicator = 'SP.POP.TOTL'
start_year = 2008
end_year = 2016
df4 = wb.download(indicator = wb_i...
module_2/M2_NB1_SourcesOfData.ipynb
getsmarter/bda
mit
The data set contains entries for multiple years. The focus of this example is the entry corresponding to the latest year of data available for each country.
df5 = df4.reset_index()
idx = df5.groupby(['country'])['year'].transform(max) == df5['year']
module_2/M2_NB1_SourcesOfData.ipynb
getsmarter/bda
mit
You can now extract only the values that correspond to the most recent year available for each country.
# Create a new dataframe where entries correspond to maximum year indexes in previous list.
df6 = df5.loc[idx, :]

# Review the data
df6.head()
module_2/M2_NB1_SourcesOfData.ipynb
getsmarter/bda
mit
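The groupby/transform trick above (keep the rows whose year equals the group maximum) can be mimicked with stdlib dicts, which makes the two passes explicit. Rows and field names here are illustrative, not from the World Bank data:

```python
rows = [
    {'country': 'AU', 'year': 2014, 'pop': 23},
    {'country': 'AU', 'year': 2016, 'pop': 24},
    {'country': 'NZ', 'year': 2015, 'pop': 5},
]

# First pass: maximum year per country (the transform(max) step).
latest = {}
for r in rows:
    latest[r['country']] = max(latest.get(r['country'], r['year']), r['year'])

# Second pass: keep only rows matching that maximum (the boolean-mask step).
most_recent = [r for r in rows if r['year'] == latest[r['country']]]
```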
Now merge your dataset with the World Bank data.
# Combine the student and population datasets.
df7 = pd.merge(df3, df6, on='country', how='left')

# Rename the columns of our merged dataset and assign to a new variable.
df8 = df7.rename(index=str, columns={('SP.POP.TOTL'): "PopulationTotal_Latest_WB"})

# Drop NAN values.
df8 = df8[~df8.PopulationTotal_Latest_WB.isn...
module_2/M2_NB1_SourcesOfData.ipynb
getsmarter/bda
mit
Let's plot the data. Note: The visualization below does not have any meaning. The scaling factors selected are used to demonstrate the difference in population sizes, and number of students on this course, per country.
# Plot the combined dataset

# Set map center and zoom level
mapc = [0, 30]
zoom = 2

# Create map object.
map_osm = folium.Map(location=mapc, tiles='Stamen Toner', zoom_start=zoom)

# Plot each of the locations that we geocoded.
for j in range(len(df8)):
    # Plot a blue circle
    ...
module_2/M2_NB1_SourcesOfData.ipynb
getsmarter/bda
mit
<br> <div class="alert alert-info"> <b>Exercise 1 Start.</b> </div> Instructions Review the available indicators in the World Bank dataset, and select an indicator of your choice (other than the population indicator). Using a copy of the code (from above) in the cells below, replace the population indicator with y...
# Your solution here
# Note: Break your logic using separate cells to break code into units that can be executed
#       should you need to review individual steps.
module_2/M2_NB1_SourcesOfData.ipynb
getsmarter/bda
mit
<br> <div class="alert alert-info"> <b>Exercise 1 End.</b> </div> Exercise complete: This is a good time to "Save and Checkpoint". 1.2 Using Wikipedia as a data source To demonstrate how quickly data can be sourced from public, "untrusted" data sources, you have been supplied with a number of sample scripts below. W...
# Display MIT page summary from Wikipedia
print(wikipedia.summary("MIT"))

# Display a single sentence summary.
wikipedia.summary("MIT", sentences=1)

# Create variable page that contains the wikipedia information.
page = wikipedia.page("List of countries and dependencies by population")

# Display the page title.
pag...
module_2/M2_NB1_SourcesOfData.ipynb
getsmarter/bda
mit
Define path to data: (It's a good idea to put it in a subdirectory of your notebooks folder, and then exclude that directory from git control by adding it to .gitignore.)
#path = "data/dogscats/"
path = "data/dogscats/sample/"
cnn/tw_vgg16.ipynb
sysid/nbs
mit
A few basic libraries that we'll need for the initial exercises:
from __future__ import division, print_function
import os, json
from glob import glob
import numpy as np
np.set_printoptions(precision=4, linewidth=100)
from matplotlib import pyplot as plt
cnn/tw_vgg16.ipynb
sysid/nbs
mit
We have created a file most imaginatively called 'utils.py' to store any little convenience functions we'll want to use. We will discuss these as we use them.
import utils
import importlib
importlib.reload(utils)
from utils import plots
cnn/tw_vgg16.ipynb
sysid/nbs
mit
Use a pretrained VGG model with our Vgg16 class Our first step is simply to use a model that has been fully created for us, which can recognise a wide variety (1,000 categories) of images. We will use 'VGG', which won the 2014 Imagenet competition, and is a very simple model to create and understand. The VGG Imagenet t...
# As large as you can, but no larger than 64 is recommended.
# If you have an older or cheaper GPU, you'll run out of memory, so will have to decrease this.
# batch_size=64
batch_size = 2

# Import our class, and instantiate
import vgg16
from vgg16 import Vgg16

vgg.classes

# %%capture x
# ping bug: disconnect -> recon...
cnn/tw_vgg16.ipynb
sysid/nbs
mit
The code above will work for any image recognition task, with any number of categories! All you have to do is to put your images into one folder per category, and run the code above. Let's take a look at how this works, step by step... Use Vgg16 for basic image recognition Let's start off by using the Vgg16 class to re...
vgg = Vgg16()
cnn/tw_vgg16.ipynb
sysid/nbs
mit
Vgg16 is built on top of Keras (which we will be learning much more about shortly!), a flexible, easy to use deep learning library that sits on top of Theano or Tensorflow. Keras reads groups of images and labels in batches, using a fixed directory structure, where images from each category for training must be placed ...
batches = vgg.get_batches(path+'train', batch_size=4)
cnn/tw_vgg16.ipynb
sysid/nbs
mit
(BTW, when Keras refers to 'classes', it doesn't mean python classes - but rather it refers to the categories of the labels, such as 'pug', or 'tabby'.) Batches is just a regular python iterator. Each iteration returns both the images themselves, as well as the labels.
imgs, labels = next(batches)
imgs[0].shape
labels
cnn/tw_vgg16.ipynb
sysid/nbs
mit
As you can see, the labels for each image are an array, containing a 1 in the first position if it's a cat, and in the second position if it's a dog. This approach to encoding categorical variables, where an array contains just a single 1 in the position corresponding to the category, is very common in deep learning....
plots(imgs, titles=labels)
cnn/tw_vgg16.ipynb
sysid/nbs
mit
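The one-hot encoding described above is easy to reproduce by hand; a stdlib sketch (the helper name `one_hot` is illustrative, not part of the Vgg16 class):

```python
def one_hot(index, num_classes):
    """Array of zeros with a single 1 at the category's position."""
    v = [0] * num_classes
    v[index] = 1
    return v

# For the two-class cats-vs-dogs setup: cat is position 0, dog is position 1.
cat, dog = one_hot(0, 2), one_hot(1, 2)
```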
We can now pass the images to Vgg16's predict() function to get back probabilities, category indexes, and category names for each image's VGG prediction.
vgg.predict(imgs, True)
cnn/tw_vgg16.ipynb
sysid/nbs
mit
The category indexes are based on the ordering of categories used in the VGG model - e.g here are the first four:
vgg.classes[:4]
cnn/tw_vgg16.ipynb
sysid/nbs
mit
(Note that, other than creating the Vgg16 object, none of these steps are necessary to build a model; they are just showing how to use the class to view imagenet predictions.) Use our Vgg16 class to finetune a Dogs vs Cats model To change our model so that it outputs "cat" vs "dog", instead of one of 1,000 very specifi...
batch_size = 64
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size)
cnn/tw_vgg16.ipynb
sysid/nbs
mit
Calling finetune() modifies the model such that it will be trained based on the data in the batches provided - in this case, to predict either 'dog' or 'cat'.
vgg.finetune(batches)
cnn/tw_vgg16.ipynb
sysid/nbs
mit
Finally, we fit() the parameters of the model using the training data, reporting the accuracy on the validation set after every epoch. (An epoch is one full pass through the training data.)
vgg.fit(batches, val_batches, nb_epoch=1)
cnn/tw_vgg16.ipynb
sysid/nbs
mit
Imports Run both of these cells:
#@title All `dm_control` imports required for this tutorial

# The basic mujoco wrapper.
from dm_control import mujoco

# Access to enums and MuJoCo library functions.
from dm_control.mujoco.wrapper.mjbindings import enums
from dm_control.mujoco.wrapper.mjbindings import mjlib

# PyMJCF
from dm_control import mjcf

# C...
tutorial.ipynb
deepmind/dm_control
apache-2.0
Model definition, compilation and rendering We begin by describing some basic concepts of the MuJoCo physics simulation library, but recommend the official documentation for details. Let's define a simple model with two geoms and a light.
#@title A static model {vertical-output: true}

static_model = """
<mujoco>
  <worldbody>
    <light name="top" pos="0 0 1"/>
    <geom name="red_box" type="box" size=".2 .2 .2" rgba="1 0 0 1"/>
    <geom name="green_sphere" pos=".2 .2 .2" size=".1" rgba="0 1 0 1"/>
  </worldbody>
</mujoco>
"""
physics = mujoco.Physics...
tutorial.ipynb
deepmind/dm_control
apache-2.0
static_model is written in MuJoCo's XML-based MJCF modeling language. The from_xml_string() method invokes the model compiler, which instantiates the library's internal data structures. These can be accessed via the physics object, see below. Adding DOFs and simulating, advanced rendering This is a perfectly legitimate...
#@title A child body with a joint { vertical-output: true }

swinging_body = """
<mujoco>
  <worldbody>
    <light name="top" pos="0 0 1"/>
    <body name="box_and_sphere" euler="0 0 -30">
      <joint name="swing" type="hinge" axis="1 -1 0" pos="-.2 -.2 -.2"/>
      <geom name="red_box" type="box" size=".2 .2 .2" rg...
tutorial.ipynb
deepmind/dm_control
apache-2.0
The things that move (and which have inertia) are called bodies. The body's child joint specifies how that body can move with respect to its parent, in this case box_and_sphere w.r.t the worldbody. Note that the body's frame is rotated with an euler directive, and its children, the geoms and the joint, rotate with it....
#@title Making a video {vertical-output: true}

duration = 2    # (seconds)
framerate = 30  # (Hz)

# Visualize the joint axis
scene_option = mujoco.wrapper.core.MjvOption()
scene_option.flags[enums.mjtVisFlag.mjVIS_JOINT] = True

# Simulate and display video.
frames = []
physics.reset()  # Reset state and time
while p...
tutorial.ipynb
deepmind/dm_control
apache-2.0
Note how we collect the video frames. Because physics simulation timesteps are generally much smaller than framerates (the default timestep is 2ms), we don't render after each step. Rendering options Like joint visualisation, additional rendering options are exposed as parameters to the render method.
#@title Enable transparency and frame visualization {vertical-output: true}

scene_option = mujoco.wrapper.core.MjvOption()
scene_option.frame = enums.mjtFrame.mjFRAME_GEOM
scene_option.flags[enums.mjtVisFlag.mjVIS_TRANSPARENT] = True
pixels = physics.render(scene_option=scene_option)
PIL.Image.fromarray(pixels)

#@tit...
tutorial.ipynb
deepmind/dm_control
apache-2.0
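The frame-collection logic can be checked with plain arithmetic: at a 2 ms timestep and a 30 Hz framerate, only about one step in seventeen should produce a frame. A stdlib sketch of the loop (no rendering, just counting; `frames.append(t)` stands in for the physics.render() call):

```python
duration = 2      # seconds
framerate = 30    # Hz
timestep = 0.002  # MuJoCo's default timestep, in seconds

frames = []
n_steps = round(duration / timestep)
for i in range(n_steps):
    t = (i + 1) * timestep            # simulated time after this step
    if len(frames) < t * framerate:   # render only when a new frame is due
        frames.append(t)              # stand-in for physics.render()
```

Two seconds of simulation takes 1000 steps but yields only 60 frames, which is why rendering after every step would be wasted work.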
MuJoCo basics and named indexing mjModel MuJoCo's mjModel, encapsulated in physics.model, contains the model description, including the default initial state and other fixed quantities which are not a function of the state, e.g. the positions of geoms in the frame of their parent body. The (x, y, z) offsets of the box ...
physics.model.geom_pos
tutorial.ipynb
deepmind/dm_control
apache-2.0
The model.opt structure contains global quantities like
print('timestep', physics.model.opt.timestep)
print('gravity', physics.model.opt.gravity)
tutorial.ipynb
deepmind/dm_control
apache-2.0
mjData mjData, encapsulated in physics.data, contains the state and quantities that depend on it. The state is made up of time, generalized positions and generalized velocities. These are, respectively, data.time, data.qpos and data.qvel. Let's print the state of the swinging body where we left it:
print(physics.data.time, physics.data.qpos, physics.data.qvel)
tutorial.ipynb
deepmind/dm_control
apache-2.0
physics.data also contains functions of the state, for example the cartesian positions of objects in the world frame. The (x, y, z) positions of our two geoms are in data.geom_xpos:
print(physics.data.geom_xpos)
tutorial.ipynb
deepmind/dm_control
apache-2.0
Named indexing The semantics of the above arrays are made clearer using the named wrapper, which assigns names to rows and type names to columns.
print(physics.named.data.geom_xpos)
tutorial.ipynb
deepmind/dm_control
apache-2.0
Note how model.geom_pos and data.geom_xpos have similar semantics but very different meanings.
print(physics.named.model.geom_pos)
tutorial.ipynb
deepmind/dm_control
apache-2.0
Name strings can be used to index into the relevant quantities, making code much more readable and robust.
physics.named.data.geom_xpos['green_sphere', 'z']
Joint names can be used to index into quantities in configuration space (beginning with the letter q):
physics.named.data.qpos['swing']
We can mix NumPy slicing operations with named indexing. As an example, we can set the color of the box using its name ("red_box") as an index into the rows of the geom_rgba array.
#@title Changing colors using named indexing{vertical-output: true}

random_rgb = np.random.rand(3)
physics.named.model.geom_rgba['red_box', :3] = random_rgb
pixels = physics.render()
PIL.Image.fromarray(pixels)
Note that while physics.model quantities will not be changed by the engine, we can change them ourselves between steps. However, this is generally not recommended; the preferred approach is to modify the model at the XML level using the PyMJCF library, see below. Setting the state with reset_context() In order for da...
physics.named.data.qpos['swing'] = np.pi
print('Without reset_context, spatial positions are not updated:',
      physics.named.data.geom_xpos['green_sphere', ['z']])
with physics.reset_context():
  physics.named.data.qpos['swing'] = np.pi
print('After reset_context, positions are up-to-date:',
      physics.named.data...
Free bodies: the self-inverting "tippe-top" A free body is a body with a free joint, with 6 movement DOFs: 3 translations and 3 rotations. We could give our box_and_sphere body a free joint and watch it fall, but let's look at something more interesting. A "tippe top" is a spinning toy which flips itself on its head (W...
#@title The "tippe-top" model{vertical-output: true}

tippe_top = """
<mujoco model="tippe top">
  <option integrator="RK4"/>
  <asset>
    <texture name="grid" type="2d" builtin="checker" rgb1=".1 .2 .3"
     rgb2=".2 .3 .4" width="300" height="300"/>
    <material name="grid" texture="grid" texrepeat="8 8" reflectan...
Note several new features of this model definition:

0. The free joint is added with the `<freejoint/>` clause, which is similar to `<joint type="free"/>`, but prohibits unphysical attributes like friction or stiffness.
1. We use the `<option/>` clause to set the integrator to the more accurate Runge Kutta 4...
print('positions', physics.data.qpos) print('velocities', physics.data.qvel)
The velocities are easy to interpret: 6 zeros, one for each DOF. What about the length-7 positions? We can see the initial 2cm height of the body; the subsequent four numbers are the 3D orientation, defined by a unit quaternion. These normalized four-vectors, which preserve the topology of the orientation group, are th...
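As a minimal, NumPy-only sketch of that layout (the numbers here are made up, not read from any model), the length-7 position vector splits into a Cartesian position and a unit quaternion:

```python
import numpy as np

# Hypothetical free-body qpos: 3 Cartesian coordinates followed by a
# (w, x, y, z) orientation quaternion.
qpos = np.array([0.0, 0.0, 0.02, 1.0, 0.0, 0.0, 0.0])
position, quaternion = qpos[:3], qpos[3:]
print(position.shape, quaternion.shape)  # (3,) (4,)
print(np.linalg.norm(quaternion))  # 1.0 -- a valid orientation is unit-norm
```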
#@title Video of the tippe-top {vertical-output: true}

duration = 7    # (seconds)
framerate = 60  # (Hz)

# Simulate and display video.
frames = []
physics.reset(0)  # Reset to keyframe 0 (load a saved state).
while physics.data.time < duration:
  physics.step()
  if len(frames) < (physics.data.time) * framerate: ...
Measuring values from physics.data The physics.data structure contains all of the dynamic variables and intermediate results produced by the simulation. These are expected to change on each timestep. Below we simulate for 2000 timesteps and plot the state and height of the sphere as a function of time.
#@title Measuring values {vertical-output: true}

timevals = []
angular_velocity = []
stem_height = []

# Simulate and save data
physics.reset(0)
while physics.data.time < duration:
  physics.step()
  timevals.append(physics.data.time)
  angular_velocity.append(physics.data.qvel[3:6].copy())
  stem_height.append(physic...
PyMJCF tutorial This library provides a Python object model for MuJoCo's XML-based MJCF physics modeling language. The goal of the library is to allow users to easily interact with and modify MJCF models in Python, similarly to what the JavaScript DOM does for HTML. A key feature of this library is the ability to easil...
class Leg(object):
  """A 2-DoF leg with position actuators."""

  def __init__(self, length, rgba):
    self.model = mjcf.RootElement()

    # Defaults:
    self.model.default.joint.damping = 2
    self.model.default.joint.type = 'hinge'
    self.model.default.geom.type = 'capsule'
    self.model.default.geom.rgba = rg...
The Leg class describes an abstract articulated leg, with two joints and corresponding proportional-derivative actuators. Note that:

- MJCF attributes correspond directly to arguments of the add() method.
- When referencing elements, e.g. when specifying the joint to which an actuator is attached, the MJCF element itself ...
BODY_RADIUS = 0.1
BODY_SIZE = (BODY_RADIUS, BODY_RADIUS, BODY_RADIUS / 2)
random_state = np.random.RandomState(42)

def make_creature(num_legs):
  """Constructs a creature with `num_legs` legs."""
  rgba = random_state.uniform([0, 0, 0, 1], [1, 1, 1, 1])
  model = mjcf.RootElement()
  model.compiler.angle = 'radian'  #...
The make_creature function uses PyMJCF's attach() method to procedurally attach legs to the torso. Note that at this stage both the torso and hip attachment sites are children of the worldbody, since their parent body has yet to be instantiated. We'll now make an arena with a chequered floor and two lights, and place o...
#@title Six Creatures on a floor.{vertical-output: true}

arena = mjcf.RootElement()
chequered = arena.asset.add('texture', type='2d', builtin='checker',
                           width=300, height=300,
                           rgb1=[.2, .3, .4], rgb2=[.3, .4, .5])
grid = arena.asset.add('material', name='grid', texture=chequered,
Multi-legged creatures, ready to roam! Let's inject some controls and watch them move. We'll generate a sinusoidal open-loop control signal of fixed frequency and random phase, recording both video frames and the horizontal positions of the torso geoms, in order to plot the movement trajectories.
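Stripped of the simulation loop, that control law can be sketched on its own. The actuator count, frequency, and time value below are illustrative assumptions, not values taken from the model:

```python
import numpy as np

# Sinusoidal open-loop control: a fixed frequency shared by all actuators,
# with one random phase offset per actuator (all values illustrative).
num_actuators = 12
freq = 5.0  # (Hz), an assumed value
phases = 2 * np.pi * np.random.rand(num_actuators)

def control_signal(t):
  """Control vector at time `t` (seconds): one sinusoid per actuator."""
  return np.sin(2 * np.pi * freq * t + phases)

ctrl = control_signal(0.1)
print(ctrl.shape)  # (12,)
```

On every control step, each creature's actuators would be set from such a signal.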
#@title Video of the movement{vertical-output: true}
#@test {"timeout": 600}

duration = 10   # (Seconds)
framerate = 30  # (Hz)
video = []
pos_x = []
pos_y = []
torsos = []     # List of torso geom elements.
actuators = []  # List of actuator elements.
for creature in creatures:
  torsos.append(creature.find('geom', 'tor...
The plot above shows the movement trajectories of the creatures. Note how physics.bind(torsos) was used to access both xpos and rgba values. Once the Physics has been instantiated by from_mjcf_model(), the bind() method will expose both the associated mjData and mjModel fields of an mjcf element, pro...
#@title The `Creature` class

class Creature(composer.Entity):
  """A multi-legged creature derived from `composer.Entity`."""

  def _build(self, num_legs):
    self._model = make_creature(num_legs)

  def _build_observables(self):
    return CreatureObservables(self)

  @property
  def mjcf_model(self):
    return se...
The Creature Entity includes generic Observables for joint angles and velocities. Because find_all() is called on the Creature's MJCF model, it will only return the creature's leg joints, and not the "free" joint with which it will be attached to the world. Note that Composer Entities should override the _build and _bu...
#@title The `Button` class

NUM_SUBSTEPS = 25  # The number of physics substeps per control timestep.

class Button(composer.Entity):
  """A button Entity which changes colour when pressed with certain force."""

  def _build(self, target_force_range=(5, 10)):
    self._min_force, self._max_force = target_force_range
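A plain-Python sketch of the kind of substep bookkeeping such a button performs (the Composer API is not used here, and the force readings are invented for illustration):

```python
NUM_SUBSTEPS = 25            # physics substeps per control timestep, as above
TARGET_FORCE_RANGE = (5, 10)

# Hypothetical per-substep force readings on the button.
readings = [4.0 + 0.1 * i for i in range(NUM_SUBSTEPS)]

# The force observable reports one value per control step: the mean reading.
observed_force = sum(readings) / NUM_SUBSTEPS

# Count the substeps during which the press force fell in the target range.
low, high = TARGET_FORCE_RANGE
num_activated = sum(low <= f <= high for f in readings)
print(observed_force, num_activated)
```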
Note how the Button counts the number of sub-steps during which it is pressed with the desired force. It also exposes an Observable of the force being applied to the button, whose value is an average of the readings over the physics time-steps. We import some variation modules and an arena factory:
#@title Random initialiser using `composer.variation`

class UniformCircle(variation.Variation):
  """A uniformly sampled horizontal point on a circle of radius `distance`."""

  def __init__(self, distance):
    self._distance = distance
    self._heading = distributions.Uniform(0, 2*np.pi)

  def __call__(self, initi...
The Control Suite The Control Suite is a set of stable, well-tested tasks designed to serve as a benchmark for continuous control learning agents. Tasks are written using the basic MuJoCo wrapper interface. Standardised action, observation and reward structures make suite-wide benchmarking simple and learning curves ea...
#@title Iterating over tasks{vertical-output: true}

max_len = max(len(d) for d, _ in suite.BENCHMARKING)
for domain, task in suite.BENCHMARKING:
  print(f'{domain:<{max_len}}  {task}')

#@title Loading and simulating a `suite` task{vertical-output: true}

# Load the environment
random_state = np.random.RandomState(42)...
Locomotion Humanoid running along corridor with obstacles As an illustrative example of using the Locomotion infrastructure to build an RL environment, consider placing a humanoid in a corridor with walls, and a task specifying that the humanoid will be rewarded for running along this corridor, navigating around the wa...
#@title A position controlled `cmu_humanoid`

walker = cmu_humanoid.CMUHumanoidPositionControlledV2020(
    observable_options={'egocentric_camera': dict(enabled=True)})
Next, we construct a corridor-shaped arena that is obstructed by walls.
#@title A corridor arena with wall obstacles

arena = corridor_arenas.WallsCorridor(
    wall_gap=3.,
    wall_width=distributions.Uniform(2., 3.),
    wall_height=distributions.Uniform(2.5, 3.5),
    corridor_width=4.,
    corridor_length=30.,
)
The task constructor places the walker in the arena.
#@title A task to navigate the arena

task = corridor_tasks.RunThroughCorridor(
    walker=walker,
    arena=arena,
    walker_spawn_position=(0.5, 0, 0),
    target_velocity=3.0,
    physics_timestep=0.005,
    control_timestep=0.03,
)