Model
Let's consider the optimal growth model,
\begin{align}
&\max\int_{0}^{\infty}e^{-\rho t}u(c(t))dt \\
&\text{subject to} \\
&\qquad\dot{k}(t)=f(k(t))-\delta k(t)-c(t), \\
&\qquad k(0):\text{ given.}
\end{align}
We will assume the following specific functional forms when necessary:
\begin{align}
u(c)... | alpha = 0.3
delta = 0.05
rho = 0.1
theta = 1
A = 1
def f(x):
return A * x**alpha
kgrid = np.linspace(0.0, 7.5, 300)
fig, ax = plt.subplots(1,1)
# Locus obtained from (EE)
kstar = ((delta + rho) / (A * alpha)) ** (1/(alpha - 1))
ax.axvline(kstar)
ax.text(kstar*1.01, 0.1, r'$\dot c = 0$', fontsize=16)
# Locus obt... | doc/python/Optimal Growth (Euler).ipynb | kenjisato/intro-macro | mit |
What we want to do is to draw paths on this phase space. It is convenient to have a function that returns this kind of figure. | def phase_space(kmax, gridnum, yamp=1.8, colors=['black', 'black'], labels_on=False):
kgrid = np.linspace(0.0, kmax, gridnum)
fig, ax = plt.subplots(1,1)
# EE locus
ax.plot(kgrid, f(kgrid) - delta * kgrid, color=colors[0])
if labels_on:
ax.text(4, f(4) - delta * 4, r'$\dot k = 0$', fontsiz... | doc/python/Optimal Growth (Euler).ipynb | kenjisato/intro-macro | mit |
You can draw the loci by calling the function as in the following. | fig, ax = phase_space(kmax=7, gridnum=300) | doc/python/Optimal Growth (Euler).ipynb | kenjisato/intro-macro | mit |
The dynamics
Discretize
\begin{align}
\dot{c} &= \theta^{-1} c [f'(k) - \delta - \rho] & \text{(EE)} \\
\dot{k} &= f(k) - \delta k - c. & \text{(CA)}
\end{align}
to get the discretized dynamic equations:
\begin{align}
c(t+\Delta t) &= c(t)\{1 + \theta^{-1} [f'(k(t)) - \delta - \rho] \Delta t\} & \text{(D-EE)} ... | dt = 0.001
def f_deriv(k):
"""derivative of f"""
return A * alpha * k ** (alpha - 1)
def update(k, c):
cnew = c * (1 + (f_deriv(k) - delta - rho) * dt / theta) # D-EE
knew = k + (f(k) - delta * k - c) * dt # D-CA
return knew, cnew
k_initial, c_guess = 0.4, 0.2
# Find a first-order path from the initi... | doc/python/Optimal Growth (Euler).ipynb | kenjisato/intro-macro | mit |
The blue curve shows the dynamic path of the system of differential equations. The solution moves from left to right in this case. This path doesn't seem to satisfy the transversality condition, so it's not the optimal path.
What we do next is to find $c(0)$ that converges to the steady state. I will show you how to ... | def compute_path(k0, c_guess, steps, ax=None, output=True):
"""compute a path starting from (k0, c_guess) that satisfies EE and CA"""
k, c = [k0], [c_guess]
for i in range(steps):
knew, cnew = update(k[-1], c[-1])
# stop if the new values violate nonnegativity constraints
... | doc/python/Optimal Growth (Euler).ipynb | kenjisato/intro-macro | mit |
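The truncated loop above can be fleshed out as follows — a self-contained sketch using the parameter values from earlier cells, with an explicit nonnegativity check standing in for the cut-off part of the cell (the `ax` and `output` arguments are omitted):

```python
import numpy as np

# Parameters and functions from the cells above.
dt, alpha, delta, rho, theta, A = 0.001, 0.3, 0.05, 0.1, 1, 1

def f(k):
    return A * k ** alpha

def f_deriv(k):
    """derivative of f"""
    return A * alpha * k ** (alpha - 1)

def update(k, c):
    cnew = c * (1 + (f_deriv(k) - delta - rho) * dt / theta)  # discretized EE
    knew = k + (f(k) - delta * k - c) * dt                    # discretized CA
    return knew, cnew

def compute_path(k0, c_guess, steps):
    """Iterate the discretized system, stopping if k or c would turn negative."""
    k, c = [k0], [c_guess]
    for _ in range(steps):
        knew, cnew = update(k[-1], c[-1])
        if knew < 0 or cnew < 0:  # nonnegativity constraints violated
            break
        k.append(knew)
        c.append(cnew)
    return np.array(k), np.array(c)

k_path, c_path = compute_path(0.4, 0.2, 1000)
```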
Typical usage: | k_init = 0.4
steps = 30000
fig, ax = phase_space(40, 3000)
for c_init in [0.1, 0.2, 0.3, 0.4, 0.5]:
compute_path(k_init, c_init, steps, ax, output=False) | doc/python/Optimal Growth (Euler).ipynb | kenjisato/intro-macro | mit |
Let's find the optimal path. The following code makes a plot that relates a guess of $c(0)$ to the final $c(t)$ and $k(t)$ for large $t$. | k_init = 0.4
steps = 30000
# set of guesses about c(0)
c_guess = np.linspace(0.40, 0.50, 1000)
k_final = []
c_final = []
for c0 in c_guess:
k, c = compute_path(k_init, c0, steps, output=True)
# Final values
k_final.append(k[-1])
c_final.append(c[-1])
plt.plot(c_guess, k_final, label='lim k')... | doc/python/Optimal Growth (Euler).ipynb | kenjisato/intro-macro | mit |
As you can clearly see, there is a critical value around 0.41. To know the exact value of the threshold, execute the following code. | cdiff = [c1 - c0 for c0, c1 in zip(c_final[:-1], c_final[1:])]
c_optimal = c_guess[cdiff.index(max(cdiff))]
c_optimal
fig, ax = phase_space(7.5, 300)
compute_path(k_init, c_optimal, steps=15000, ax=ax, output=False) | doc/python/Optimal Growth (Euler).ipynb | kenjisato/intro-macro | mit |
Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file. | capacity_curves_file = "../../../../../../rmtk_data/capacity_curves_Sa-Sd.csv"
capacity_curves = utils.read_capacity_curves(capacity_curves_file)
utils.plot_capacity_curves(capacity_curves) | rmtk/vulnerability/derivation_fragility/equivalent_linearization/lin_miranda_2008/lin_miranda_2008.ipynb | ccasotto/rmtk | agpl-3.0 |
Load ground motion records
Please indicate the path to the folder containing the ground motion records to be used in the analysis through the parameter gmrs_folder.
Note: Each accelerogram needs to be in a separate CSV file as described in the RMTK manual.
The parameters minT and maxT are used to define the period boun... | gmrs_folder = "../../../../../../rmtk_data/accelerograms"
gmrs = utils.read_gmrs(gmrs_folder)
minT, maxT = 0.1, 2.0
utils.plot_response_spectra(gmrs, minT, maxT) | rmtk/vulnerability/derivation_fragility/equivalent_linearization/lin_miranda_2008/lin_miranda_2008.ipynb | ccasotto/rmtk | agpl-3.0 |
Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
The damage types currently supported are: capacity curve dependent, spectral displacement and interstorey drift. If the damage model type is interstorey drift the user can provide the ... | damage_model_file = "../../../../../../rmtk_data/damage_model.csv"
damage_model = utils.read_damage_model(damage_model_file) | rmtk/vulnerability/derivation_fragility/equivalent_linearization/lin_miranda_2008/lin_miranda_2008.ipynb | ccasotto/rmtk | agpl-3.0 |
Obtain the damage probability matrix | PDM, Sds = lin_miranda_2008.calculate_fragility(capacity_curves, gmrs, damage_model) | rmtk/vulnerability/derivation_fragility/equivalent_linearization/lin_miranda_2008/lin_miranda_2008.ipynb | ccasotto/rmtk | agpl-3.0 |
Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above:
1. IMT: This parameter specifies the intensity measure type to be used. Currently supported options are "PGA", "Sd" and "Sa".
2... | IMT = "Sd"
period = 2.0
damping_ratio = 0.05
regression_method = "max likelihood"
fragility_model = utils.calculate_mean_fragility(gmrs, PDM, period, damping_ratio,
IMT, damage_model, regression_method) | rmtk/vulnerability/derivation_fragility/equivalent_linearization/lin_miranda_2008/lin_miranda_2008.ipynb | ccasotto/rmtk | agpl-3.0 |
Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above:
* minIML and maxIML: These parameters define the limits of the intensity measure level for plotting the functions | minIML, maxIML = 0.01, 2.00
utils.plot_fragility_model(fragility_model, minIML, maxIML) | rmtk/vulnerability/derivation_fragility/equivalent_linearization/lin_miranda_2008/lin_miranda_2008.ipynb | ccasotto/rmtk | agpl-3.0 |
Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:
1. taxonomy:... | taxonomy = "RC"
minIML, maxIML = 0.01, 2.00
output_type = "nrml"
output_path = "../../../../../../rmtk_data/output/"
utils.save_mean_fragility(taxonomy, fragility_model, minIML, maxIML, output_type, output_path) | rmtk/vulnerability/derivation_fragility/equivalent_linearization/lin_miranda_2008/lin_miranda_2008.ipynb | ccasotto/rmtk | agpl-3.0 |
Obtain vulnerability function
A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribu... | cons_model_file = "../../../../../../rmtk_data/cons_model.csv"
imls = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50,
0.60, 0.70, 0.80, 0.90, 1.00, 1.20, 1.40, 1.60, 1.80, 2.00]
distribution_type = "lognormal"
cons_model = utils.read_consequence_model(cons_model_file)
vulnerability_model = utils.... | rmtk/vulnerability/derivation_fragility/equivalent_linearization/lin_miranda_2008/lin_miranda_2008.ipynb | ccasotto/rmtk | agpl-3.0 |
Plot vulnerability function | utils.plot_vulnerability_model(vulnerability_model) | rmtk/vulnerability/derivation_fragility/equivalent_linearization/lin_miranda_2008/lin_miranda_2008.ipynb | ccasotto/rmtk | agpl-3.0 |
Save vulnerability function
The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the vulnerability function obtai... | taxonomy = "RC"
output_type = "nrml"
output_path = "../../../../../../rmtk_data/output/"
utils.save_vulnerability(taxonomy, vulnerability_model, output_type, output_path) | rmtk/vulnerability/derivation_fragility/equivalent_linearization/lin_miranda_2008/lin_miranda_2008.ipynb | ccasotto/rmtk | agpl-3.0 |
<h3> Extract sample data from BigQuery </h3>
The dataset that we will use is <a href="https://bigquery.cloud.google.com/table/nyc-tlc:yellow.trips">a BigQuery public dataset</a>. Click on the link, and look at the column names. Switch to the Details tab to verify that the number of records is one billion, and then swi... | %%bigquery
SELECT
FORMAT_TIMESTAMP( "%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime,
pickup_longitude,
pickup_latitude,
dropoff_longitude,
dropoff_latitude,
passenger_count,
trip_distance,
tolls_amount,
fare_amount,
total_amount
FROM
`nyc-tlc.yellow.trips`
LIMIT
10 | notebooks/launching_into_ml/solutions/1_explore_data.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Let's increase the number of records so that we can do some neat graphs. There is no guarantee about the order in which records are returned, and so no guarantee about which records get returned if we simply increase the LIMIT. To properly sample the dataset, let's use the HASH of the pickup time and return 1 in 100,0... | %%bigquery trips
SELECT
FORMAT_TIMESTAMP( "%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime,
pickup_longitude,
pickup_latitude,
dropoff_longitude,
dropoff_latitude,
passenger_count,
trip_distance,
tolls_amount,
fare_amount,
total_amount
FROM
`nyc-tlc.yellow.trips`
WHERE
ABS(MOD(FARM_FI... | notebooks/launching_into_ml/solutions/1_explore_data.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
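The idea can be sketched in plain Python (a conceptual analogue only — BigQuery's `FARM_FINGERPRINT` is a different hash function; `hashlib.md5` is used here simply because, unlike Python's built-in `hash()`, it is stable across processes):

```python
import hashlib

def keep_row(pickup_datetime, every_n=100000):
    """Deterministically keep roughly 1 out of every_n rows, keyed on pickup time."""
    digest = hashlib.md5(pickup_datetime.encode("utf-8")).hexdigest()
    return int(digest, 16) % every_n == 1

# The same input rows always produce the same sample -- unlike LIMIT,
# which gives no guarantee about which records are returned.
rows = ["2012-02-27 09:19:10 UTC", "2013-05-01 12:00:00 UTC", "2014-01-15 08:30:00 UTC"]
sample = [r for r in rows if keep_row(r, every_n=2)]
```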
<h3> Exploring data </h3>
Let's explore this dataset and clean it up as necessary. We'll use the Python Seaborn package to visualize graphs and Pandas to do the slicing and filtering. | ax = sns.regplot(
x="trip_distance",
y="fare_amount",
fit_reg=False,
ci=None,
truncate=True,
data=trips,
)
ax.figure.set_size_inches(10, 8) | notebooks/launching_into_ml/solutions/1_explore_data.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Hmm ... do you see something wrong with the data that needs addressing?
It appears that we have a lot of invalid data that is being coded as zero distance and some fare amounts that are definitely illegitimate. Let's remove them from our analysis. We can do this by modifying the BigQuery query to keep only trips longer... | %%bigquery trips
SELECT
FORMAT_TIMESTAMP( "%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime,
pickup_longitude,
pickup_latitude,
dropoff_longitude,
dropoff_latitude,
passenger_count,
trip_distance,
tolls_amount,
fare_amount,
total_amount
FROM
`nyc-tlc.yellow.trips`
WHERE
ABS(MOD(FARM_FI... | notebooks/launching_into_ml/solutions/1_explore_data.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
What's up with the streaks around 45 dollars and 50 dollars? Those are fixed-amount rides from JFK and La Guardia airports into anywhere in Manhattan, so they are to be expected. Let's list the data to make sure the values look reasonable.
Let's also examine whether the toll amount is captured in the total amount. | tollrides = trips[trips["tolls_amount"] > 0]
tollrides[tollrides["pickup_datetime"] == "2012-02-27 09:19:10 UTC"]
notollrides = trips[trips["tolls_amount"] == 0]
notollrides[notollrides["pickup_datetime"] == "2012-02-27 09:19:10 UTC"] | notebooks/launching_into_ml/solutions/1_explore_data.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Looking at a few samples above, it should be clear that the total amount reflects fare amount, toll and tip somewhat arbitrarily -- this is because when customers pay cash, the tip is not known. So, we'll use the sum of fare_amount + tolls_amount as what needs to be predicted. Tips are discretionary and do not have t... | trips.describe() | notebooks/launching_into_ml/solutions/1_explore_data.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Hmm ... The min, max of longitude look strange.
Finally, let's actually look at the start and end of a few of the trips. | def showrides(df, numlines):
lats = []
lons = []
for _, row in df[:numlines].iterrows():
lons.append(row["pickup_longitude"])
lons.append(row["dropoff_longitude"])
lons.append(None)
lats.append(row["pickup_latitude"])
lats.append(row["dropoff_latitude"])
la... | notebooks/launching_into_ml/solutions/1_explore_data.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
As you'd expect, rides that involve a toll are longer than the typical ride.
<h3> Quality control and other preprocessing </h3>
We need to do some clean-up of the data:
<ol>
<li>New York city longitudes are around -74 and latitudes are around 41.</li>
<li>We shouldn't have zero passengers.</li>
<li>Clean up the total_... | def preprocess(trips_in):
trips = trips_in.copy(deep=True)
trips.fare_amount = trips.fare_amount + trips.tolls_amount
del trips["tolls_amount"]
del trips["total_amount"]
del trips["trip_distance"] # we won't know this in advance!
qc = np.all(
[
trips["pickup_longitude"] > -... | notebooks/launching_into_ml/solutions/1_explore_data.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
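Filled out, that filter might look like the following (a minimal sketch assuming the column names above; the longitude/latitude bounds are illustrative, not the notebook's exact values):

```python
import numpy as np
import pandas as pd

def qc_filter(trips):
    """Keep rows that look like plausible NYC taxi rides."""
    ok = np.all(
        [
            trips["pickup_longitude"].between(-78, -70),  # NYC longitudes ~ -74
            trips["pickup_latitude"].between(37, 45),     # NYC latitudes ~ 41
            trips["passenger_count"] > 0,                 # no zero-passenger rides
            trips["fare_amount"] > 0,
        ],
        axis=0,
    )
    return trips[ok]

df = pd.DataFrame({
    "pickup_longitude": [-73.98, 0.0],
    "pickup_latitude": [40.75, 0.0],
    "passenger_count": [1, 0],
    "fare_amount": [12.5, 0.0],
})
print(len(qc_filter(df)))  # the second (invalid) row is dropped
```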
The quality control has removed about 300 rows (11400 - 11101) or about 3% of the data. This seems reasonable.
Let's move on to creating the ML datasets.
<h3> Create ML datasets </h3>
Let's split the QCed data randomly into training, validation and test sets.
Note that this is not the entire data. We have 1 billion ta... | shuffled = tripsqc.sample(frac=1)
trainsize = int(len(shuffled["fare_amount"]) * 0.70)
validsize = int(len(shuffled["fare_amount"]) * 0.15)
df_train = shuffled.iloc[:trainsize, :]
df_valid = shuffled.iloc[trainsize : (trainsize + validsize), :] # noqa: E203
df_test = shuffled.iloc[(trainsize + validsize) :, :] # noq... | notebooks/launching_into_ml/solutions/1_explore_data.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Let's write out the three dataframes to appropriately named csv files. We can use these csv files for local training (recall that these files represent only 1/100,000 of the full dataset) just to verify our code works, before we run it on all the data. | def to_csv(df, filename):
outdf = df.copy(deep=False)
outdf.loc[:, "key"] = np.arange(0, len(outdf)) # rownumber as key
# Reorder columns so that target is first column
cols = outdf.columns.tolist()
cols.remove("fare_amount")
cols.insert(0, "fare_amount")
print(cols) # new order of columns... | notebooks/launching_into_ml/solutions/1_explore_data.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
<h3> Verify that datasets exist </h3> | !ls -l *.csv | notebooks/launching_into_ml/solutions/1_explore_data.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
We have 3 .csv files corresponding to train, valid, test. The ratio of file-sizes correspond to our split of the data. | %%bash
head taxi-train.csv | notebooks/launching_into_ml/solutions/1_explore_data.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Looks good! We now have our ML datasets and are ready to train ML models, validate them and evaluate them.
<h3> Benchmark </h3>
Before we start building complex ML models, it is a good idea to come up with a very simple model and use that as a benchmark.
My model is going to be to simply divide the mean fare_amount by... | def distance_between(lat1, lon1, lat2, lon2):
# Haversine formula to compute distance "as the crow flies".
lat1_r = np.radians(lat1)
lat2_r = np.radians(lat2)
lon_diff_r = np.radians(lon2 - lon1)
sin_prod = np.sin(lat1_r) * np.sin(lat2_r)
cos_prod = np.cos(lat1_r) * np.cos(lat2_r) * np.cos(lon_d... | notebooks/launching_into_ml/solutions/1_explore_data.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
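Completing the idea, a benchmark of this kind — one scalar rate estimated from the training data, applied to the haversine distance — might be sketched like this (a self-contained version with a full haversine formula; the notebook's own `distance_between` is truncated above, so the names and kilometre units here are illustrative):

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance 'as the crow flies', in kilometres."""
    lat1, lon1, lat2, lon2 = map(np.radians, [lat1, lon1, lat2, lon2])
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def benchmark_rate(fares, distances_km):
    """Single scalar rate: mean fare divided by mean distance."""
    return np.mean(fares) / np.mean(distances_km)

def predict_fare(rate, distance_km):
    return rate * distance_km

rate = benchmark_rate(fares=[10.0, 20.0], distances_km=[2.0, 4.0])
print(predict_fare(rate, 3.0))  # 15.0
```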
<h2>Benchmark on same dataset</h2>
The RMSE depends on the dataset, and for comparison, we have to evaluate on the same dataset each time. We'll use this query in later labs: | validation_query = """
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
"unused" AS key
FROM
`nyc-tlc.yellow.trips`
WHERE... | notebooks/launching_into_ml/solutions/1_explore_data.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Relaxation stage
Firstly, all required modules are imported. | import sys
sys.path.append('../')
from sim import Sim
from atlases import BoxAtlas
from meshes import RectangularMesh
from energies.exchange import UniformExchange
from energies.demag import Demag
from energies.zeeman import FixedZeeman | new/notebooks/fmr_standard_problem.ipynb | fangohr/oommf-python | bsd-2-clause |
Now, the simulation object can be created and exchange, demagnetisation, and Zeeman energies are added. | # Create a BoxAtlas object.
atlas = BoxAtlas(cmin, cmax)
# Create a mesh object.
mesh = RectangularMesh(atlas, d)
# Create a simulation object.
sim = Sim(mesh, Ms, name='fmr_standard_problem')
# Add exchange energy.
sim.add(UniformExchange(A))
# Add demagnetisation energy.
sim.add(Demag())
# Add Zeeman energy.
sim... | new/notebooks/fmr_standard_problem.ipynb | fangohr/oommf-python | bsd-2-clause |
At this point, the system is initialised in the out-of-plane direction. As an example, we use a Python function. This initialisation can also be achieved using a tuple or list object. | # Python function for initialising the system's magnetisation.
def m_init(pos):
return (0, 0, 1)
# Initialise the magnetisation.
sim.set_m(m_init)
# The same initialisation can be achieved using:
# sim.set_m((0, 0, 1))
# sim.set_m([0, 0, 1])
# sim.set_m(np.array([0, 0, 1])) | new/notebooks/fmr_standard_problem.ipynb | fangohr/oommf-python | bsd-2-clause |
Finally, the system is relaxed for $5 \,\text{ns}$. | sim.run_until(5e-9) | new/notebooks/fmr_standard_problem.ipynb | fangohr/oommf-python | bsd-2-clause |
We can now load the relaxed state into the Field object and plot the $z$ slice of magnetisation. | %matplotlib inline
sim.m.plot_slice('z', 5e-9) | new/notebooks/fmr_standard_problem.ipynb | fangohr/oommf-python | bsd-2-clause |
Dynamic stage
In the dynamic stage, we use the relaxed state from the relaxation stage. | # Change external magnetic field.
H = 8e4 * np.array([0.81923192051904048, 0.57346234436332832, 0.0])
sim.set_H(H) | new/notebooks/fmr_standard_problem.ipynb | fangohr/oommf-python | bsd-2-clause |
In this stage, the Gilbert damping is reduced. | sim.alpha = 0.008 | new/notebooks/fmr_standard_problem.ipynb | fangohr/oommf-python | bsd-2-clause |
Finally, we run the multiple stage simulation. | total_time = 20e-9
stages = 4000
sim.run_until(total_time, stages) | new/notebooks/fmr_standard_problem.ipynb | fangohr/oommf-python | bsd-2-clause |
Postprocessing
From the obtained vector field samples, we can compute the average of the magnetisation $y$ component and plot its time evolution. | import glob
import matplotlib.pyplot as plt
from field import load_oommf_file
# Compute the <my>
t_list = []
myav = []
for i in range(stages):
omf_filename = glob.glob('fmr_standard_problem/fmr_standard_problem-Oxs_TimeDriver-Spin-%09d-*.omf' % i)[0]
m_field = load_oommf_file(omf_filename)
t_list.append(i*... | new/notebooks/fmr_standard_problem.ipynb | fangohr/oommf-python | bsd-2-clause |
From the $<m_{y}>$ time evolution, we can compute and plot its Fourier transform. | import scipy.fftpack
psd = np.log10(np.abs(scipy.fftpack.fft(myav))**2)
f_axis = scipy.fftpack.fftfreq(stages, d=total_time/stages)
plt.plot(f_axis/1e9, psd)
plt.xlim([0, 12])
plt.ylim([-4.5, 2])
plt.xlabel('f (GHz)')
plt.ylabel('PSD (a.u.)')
plt.grid() | new/notebooks/fmr_standard_problem.ipynb | fangohr/oommf-python | bsd-2-clause |
<hr>
Over-abbreviated Names<a name="abbr"></a>
Since most of the data is uploaded manually, there are a lot of abbreviations in street and locality names.
These are filtered and replaced with full names. | #the city below can be hoodi or bunkyo
for st_type, ways in city_types.iteritems():
for name in ways:
better_name = update_name(name, mapping)
if name != better_name:
print name, "=>", better_name
#few examples
Bunkyo:
Meidai Jr. High Sch. => Meidai Junior High School
St. Mary's Cathed... | P3 wrangle_data/DataWrangling_ganga.ipynb | gangadhara691/gangadhara691.github.io | mit |
<hr>
Merging Both cities<a name="combine_cities"></a>
These two maps were selected because I currently live in Hoodi, Bengaluru, and one day I want to do my master's in robotics in Japan, so I chose the locality of the University of Tokyo, Bunkyo. I really wanted to explore the differences between the regions.
I need to add a... | bangalore.osm -40MB
bangalore.osm.json-51MB
tokyo1.osm- 82MB
tokyo1.osm.json-102.351MB | P3 wrangle_data/DataWrangling_ganga.ipynb | gangadhara691/gangadhara691.github.io | mit |
Number of documents | print "Bunkyo:",mongo_db.cities.find({'city':'bunkyo'}).count()
print "Hoodi:",mongo_db.cities.find({'city':'hoodi'}).count() | P3 wrangle_data/DataWrangling_ganga.ipynb | gangadhara691/gangadhara691.github.io | mit |
Bunkyo: 1268292
Hoodi: 667842
Number of node nodes. | print "Bunkyo:",mongo_db.cities.find({"type":"node",
'city':'bunkyo'}).count()
print "Hoodi:",mongo_db.cities.find({"type":"node",
'city':'hoodi'}).count()
Bunkyo: 1051170
Hoodi: 548862 | P3 wrangle_data/DataWrangling_ganga.ipynb | gangadhara691/gangadhara691.github.io | mit |
Number of way nodes. | print "Bunkyo:",mongo_db.cities.find({'type':'way',
'city':'bunkyo'}).count()
print "Hoodi:",mongo_db.cities.find({'type':'way',
'city':'hoodi'}).count()
Bunkyo: 217122
Hoodi: 118980 | P3 wrangle_data/DataWrangling_ganga.ipynb | gangadhara691/gangadhara691.github.io | mit |
Total Number of contributor. | print "Constributors:", len(mongo_db.cities.distinct("created.user"))
Contributors: 858 | P3 wrangle_data/DataWrangling_ganga.ipynb | gangadhara691/gangadhara691.github.io | mit |
<hr>
3. Additional Data Exploration using MongoDB<a name="exploration"></a>
I am going to use the pipeline function to retrive data from the database | def pipeline(city):
p= [{"$match":{"created.user":{"$exists":1},
"city":city}},
{"$group": {"_id": {"City":"$city",
"User":"$created.user"},
"contribution": {"$sum": 1}}}, ... | P3 wrangle_data/DataWrangling_ganga.ipynb | gangadhara691/gangadhara691.github.io | mit |
The top contributors for Hoodi are nowhere near those for Bunkyo, since Bunkyo, being a more compact region than Hoodi, has more places to contribute.
<hr>
To get the top Amenities in Hoodi and Bunkyo
I will be showing the pipeline stages that go into the "pipeline" function mentioned above. | pipeline=[{"$match":{"Additional Information.amenity":{"$exists":1},
"city":city}},
{"$group": {"_id": {"City":"$city",
"Amenity":"$Additional Information.amenity"},
"count": {"$sum": 1}}},
... | P3 wrangle_data/DataWrangling_ganga.ipynb | gangadhara691/gangadhara691.github.io | mit |
Compared to Hoodi, Bunkyo has a few ATMs, and parking can commonly be found in the Bunkyo locality.
<hr>
popular places of worship | p = [{"$match":{"Additional Information.amenity":{"$exists":1},
"Additional Information.amenity":"place_of_worship",
"city":city}},
{"$group":{"_id": {"City":"$city",
"Religion":"$Add... | P3 wrangle_data/DataWrangling_ganga.ipynb | gangadhara691/gangadhara691.github.io | mit |
As expected, Buddhism is popular in Japan.
But India, being a secular country, has places of worship for most religions, with Hinduism in the majority.
<hr>
popular restaurants | p = [{"$match":{"Additional Information.amenity":{"$exists":1},
"Additional Information.amenity":"restaurant",
"city":city}},
{"$group":{"_id":{"City":"$city",
"Food":"$Additional Information.... | P3 wrangle_data/DataWrangling_ganga.ipynb | gangadhara691/gangadhara691.github.io | mit |
{u'Count': 582, u'City': u'bunkyo'}
{u'Food': u'japanese', u'City': u'bunkyo', u'Count': 192}
{u'Food': u'chinese', u'City': u'bunkyo', u'Count': 126}
{u'Food': u'italian', u'City': u'bunkyo', u'Count': 69}
{u'Food': u'indian', u'City': u'bunkyo', u'Count': 63}
{u'Food': u'sushi', u'City': u'bunkyo', u'Count': 63}
{u'C... | p = [{"$match":{"Additional Information.amenity":{"$exists":1},
"Additional Information.amenity":"fast_food",
"city":city}},
{"$group":{"_id":{"City":"$city",
"Food":"$Additional Information.... | P3 wrangle_data/DataWrangling_ganga.ipynb | gangadhara691/gangadhara691.github.io | mit |
Burgers seem very popular among the Japanese for fast food; I was expecting ramen to be more popular.
In Hoodi, pizza is really common, Bengaluru being a metropolitan city.
<hr>
ATM's near locality | p = [{"$match":{"Additional Information.amenity":{"$exists":1},
"Additional Information.amenity":"atm",
"city":city}},
{"$group":{"_id":{"City":"$city",
"Name":"$Additional Information.nam... | P3 wrangle_data/DataWrangling_ganga.ipynb | gangadhara691/gangadhara691.github.io | mit |
There are quite a few ATMs in Bunkyo compared to Hoodi.
<hr>
Martial arts or Dojo Center near locality | ## Martial arts or Dojo Center near locality
import re
pat = re.compile(r'dojo', re.I)
d=mongo_db.cities.aggregate([{"$match":{ "$or": [ { "Additional Information.name": {'$regex': pat}}
,{"Additional Information.amenity": {'$regex': pat}}]}}
... | P3 wrangle_data/DataWrangling_ganga.ipynb | gangadhara691/gangadhara691.github.io | mit |
I wanted to learn martial arts.
Japan is known for its aikido and other ninjutsu martial arts, some of which I can find in Bunkyo.
Whereas in Hoodi, India, Kalaripayattu is one of the most ancient martial arts in existence.
<hr>
most popular shops. | p = [{"$match":{"Additional Information.shop":{"$exists":1},
"city":city}},
{"$group":{"_id":{"City":"$city",
"Shop":"$Additional Information.shop"},
"count":{"$sum":1}}},
... | P3 wrangle_data/DataWrangling_ganga.ipynb | gangadhara691/gangadhara691.github.io | mit |
most popular supermarkets | p = [{"$match":{"Additional Information.shop":{"$exists":1},
"city":city,
"Additional Information.shop":"supermarket"}},
{"$group":{"_id":{"City":"$city",
"Supermarket":"$Additional Information.name"},
... | P3 wrangle_data/DataWrangling_ganga.ipynb | gangadhara691/gangadhara691.github.io | mit |
Uploaded RH and temp data into Python
First I upload my data set(s). I am working with environmental data from different locations in the church at different dates. Files include: environmental characteristics (CO2, temperature (deg C), and relative humidity (RH) (%) measurements).
I can discard the CO2_2 column valu... | #I import a temp and RH data file
env=pd.read_table('../Data/CO2May.csv', sep=',')
#assigning column names
env.columns = ['test', 'time', 'temp C', 'RH %', 'CO2_1', 'CO2_2']
#I display my dataframe
env
#change data time variable to actual values of time.
env['time']= pd.to_datetime(env['time'])
#print the new tabl... | organ_pitch/Scripts/.ipynb_checkpoints/upload_env_data-checkpoint.ipynb | taliamo/Final_Project | mit |
Next
1. Create a function for expected pitch (frequency of sound waves) from CO2 data
2. Add expected_frequency to dataframe
Calculated pitch from CO2 levels
Here I use Cramer's equation for frequency of sound from CO2 concentration (1992).
freq = a0 + a1*T + ... + (a9 + ...)*xc + ... + a14*xc^2
where xc is the mole frac... | #Here I am trying to create a function for the above equation.
#I want to plug in each CO2_ave value for a time stamp (row) from the "env" data frame above.
#define coefficients (Cramer, 1992)
a0 = 331.5024
#a1 = 0.603055
#a2 = -0.000528
a9 = -(-85.20931) #need to account for negative values
#a10 = -0.228525
a14 = 2... | organ_pitch/Scripts/.ipynb_checkpoints/upload_env_data-checkpoint.ipynb | taliamo/Final_Project | mit |
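A runnable sketch of the function being built above (only `a0`, `a1`, and `a9` are taken from the cell; `A14` below is a placeholder because the real `a14` value is cut off, and the remaining Cramer coefficients are omitted):

```python
A0 = 331.5024
A1 = 0.603055
A9 = 85.20931   # the source writes -(-85.20931) to account for the negative value
A14 = 1.0       # PLACEHOLDER -- not Cramer's actual coefficient

def calc_freq_sketch(temp_c, xc):
    """Truncated form of Cramer's (1992) equation:
    a0 + a1*T + ... + (a9 + ...)*xc + ... + a14*xc**2,
    with the omitted terms (a2..a8, a10..a13) dropped."""
    return A0 + A1 * temp_c + A9 * xc + A14 * xc ** 2

# A higher CO2 mole fraction raises the value, as the positive a9 suggests.
v = calc_freq_sketch(20.0, 0.0004)
```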
Visualizing the expected pitch values by time
1. Plot calculated frequency, CO2 (ppm), and measured frequency values | print(calc_freq)
#define variables from dataframe columns
CO2_1 = env[['CO2_1']]
calc_freq=env[['calc_freq']]
#measured_pitch = output_from_'pitch_data.py'
#want to set x-axis as date_time
#how do I format the ax2 y axis scale
def make_plot(variable_1, variable_2):
'''Make a three variable plot with two axes'... | organ_pitch/Scripts/.ipynb_checkpoints/upload_env_data-checkpoint.ipynb | taliamo/Final_Project | mit |
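A generic two-axis plot of this kind can be sketched with matplotlib's `twinx()` (a sketch, not the notebook's exact `make_plot`; the `Agg` backend keeps it headless):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, safe without a display
import matplotlib.pyplot as plt

def two_axis_plot(x, y_left, y_right,
                  left_label="calc_freq", right_label="CO2 (ppm)"):
    """Plot two series against the same x-axis on two y-axes."""
    fig, ax1 = plt.subplots()
    ax1.plot(x, y_left, color="tab:blue")
    ax1.set_ylabel(left_label, color="tab:blue")
    ax2 = ax1.twinx()  # second y-axis sharing the same x-axis
    ax2.plot(x, y_right, color="tab:red")
    ax2.set_ylabel(right_label, color="tab:red")
    ax1.set_xlabel("time")
    return fig

fig = two_axis_plot([0, 1, 2], [343.5, 343.6, 343.4], [400, 450, 420])
```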
That's everything we need for a working function! Let's walk through it:
- def keyword: required before writing any function, to tell Python "hey! this is a function!"
- Function name: one word (can "fake" spaces with underscores), which is the name of the function and how we'll refer to it later
- Arguments: a comma-separa... | from numpy import array | lectures/L9.ipynb | eds-uga/csci1360e-su16 | mit |
Now the array() method can be called directly without prepending the package name numpy in front. USE THIS CAUTIOUSLY: if you accidentally name a variable array later in your code, you will get some very strange errors!
Part 2: Function Arguments
Arguments (or parameters), as stated before, are the function's input; th... | def one_arg(arg1):
pass
def two_args(arg1, arg2):
pass
def three_args(arg1, arg2, arg3):
pass
# And so on... | lectures/L9.ipynb | eds-uga/csci1360e-su16 | mit |
Like functions, you can name the arguments anything you want, though also like functions you'll probably want to give them more meaningful names besides arg1, arg2, and arg3. When these become just three functions among hundreds in a massive codebase written by dozens of different people, it's helpful when the code its... | try:
one_arg("some arg")
except Exception as e:
print("one_arg FAILED: {}".format(e))
else:
print("one_arg SUCCEEDED")
try:
two_args("only1arg")
except Exception as e:
print("two_args FAILED: {}".format(e))
else:
print("two_args SUCCEEDED") | lectures/L9.ipynb | eds-uga/csci1360e-su16 | mit |
To be fair, it's a pretty easy error to diagnose, but still something to keep in mind--especially as we move beyond basic "positional" arguments (as they are so called in the previous error message) into optional arguments.
Default arguments
"Positional" arguments--the only kind we've seen so far--are required. If the ... | def func_with_default_arg(positional, default = 10):
print("'{}' with default arg {}".format(positional, default))
func_with_default_arg("Input string")
func_with_default_arg("Input string", default = 999) | lectures/L9.ipynb | eds-uga/csci1360e-su16 | mit |
If you look through the NumPy online documentation, you'll find most of its functions have entire books' worth of default arguments.
The numpy.array function we've been using has quite a few; the only positional (required) argument for that function is some kind of list/array structure to wrap a NumPy array around. Eve... | import numpy as np
x = np.array([1, 2, 3])
y = np.array([1, 2, 3], dtype = float) # Specifying the data type of the array, using "dtype"
print(x)
print(y) | lectures/L9.ipynb | eds-uga/csci1360e-su16 | mit |
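If you already have an array with the inferred dtype, you can also convert it after the fact with the standard NumPy astype() method. This snippet is an added illustration, not part of the original lecture:

```python
import numpy as np

x = np.array([1, 2, 3])    # NumPy infers an integer dtype from the list
y = x.astype(float)        # explicit conversion after the fact
print(x.dtype)             # an integer dtype (exact name is platform-dependent)
print(y.dtype)             # float64
```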
Notice the decimal points that follow the values in the second array! This is NumPy's way of showing that these numbers are floats, not integers!
In this example, NumPy detected that our initial list contained integers, and we see in the first example that it left the integer type alone. But, in the second example, we ... | def games_in_library(username, library):
print("User '{}' owns: ".format(username))
for game in library:
print(game)
print()
games_in_library('fps123', ['DOTA 2', 'Left 4 Dead', 'Doom', 'Counterstrike', 'Team Fortress 2'])
games_in_library('rts456', ['Civilization V', 'Cities: Skylines', 'Sins of a... | lectures/L9.ipynb | eds-uga/csci1360e-su16 | mit |
In this example, our function games_in_library has two positional arguments: username, which is the Steam username of the person, and library, which is a list of video game titles. The function simply prints out the username and the titles they own.
Part 3: Return Values
Just as functions [can] take input, they also [c... | def identity_function(in_arg):
return in_arg
x = "this is the function input"
return_value = identity_function(x)
print(return_value) | lectures/L9.ipynb | eds-uga/csci1360e-su16 | mit |
This is pretty basic: the function returns back to the programmer as output whatever was passed into the function as input. Hence, "identity function."
Anything you can pass in as function parameters, you can return as function output, including lists: | def explode_string(some_string):
list_of_characters = []
for index in range(len(some_string)):
list_of_characters.append(some_string[index])
return list_of_characters
words = "Blahblahblah"
output = explode_string(words)
print(output) | lectures/L9.ipynb | eds-uga/csci1360e-su16 | mit |
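As an aside (an editor's note, not part of the lecture): iterating over a string already yields its characters, so the same behavior can be written in one line with the built-in list(). The name explode_string_v2 is just an illustrative alternative:

```python
def explode_string_v2(some_string):
    # Iterating over a string yields its characters, so list() does the same job.
    return list(some_string)

print(explode_string_v2("Blahblahblah"))
```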
PCA
We start by performing PCA (principal component analysis), which finds patterns that capture most of the variance in the data. First load toy example data, and cache it to speed up repeated queries. | rawdata = tsc.loadExample('fish-series')
data = rawdata.toTimeSeries().normalize()
data.cache()
data.dims; | worker/notebooks/thunder/tutorials/factorization.ipynb | CodeNeuro/notebooks | mit |
Run PCA with two components | from thunder import PCA
model = PCA(k=2).fit(data) | worker/notebooks/thunder/tutorials/factorization.ipynb | CodeNeuro/notebooks | mit |
Fitting PCA adds two attributes to model: comps, which are the principal components, and scores, which are the data represented in principal component space. In this case, the input data were space-by-time, so the components are temporal basis functions, and the scores are spatial basis functions. Look at the results f... | plt.plot(model.comps.T); | worker/notebooks/thunder/tutorials/factorization.ipynb | CodeNeuro/notebooks | mit |
The scores are spatial basis functions. We can pack them into a local array and look at them as images one by one. | imgs = model.scores.pack()
imgs.shape
image(imgs[0,:,:,0], clim=(-0.05,0.05))
image(imgs[1,:,:,0], clim=(-0.05,0.05)) | worker/notebooks/thunder/tutorials/factorization.ipynb | CodeNeuro/notebooks | mit |
Clearly there is some spatial structure to each component, but looking at them one by one can be difficult. A useful trick is to look at two components at once via a color code that converts the scores into polar coordinates. The color (hue) shows the relative amount of the two components, and the brightness shows the ... | maps = Colorize(cmap='polar', scale=4).transform(imgs)
from numpy import amax
image(amax(maps,2)) | worker/notebooks/thunder/tutorials/factorization.ipynb | CodeNeuro/notebooks | mit |
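Conceptually, the polar color code maps each pixel's two scores to an angle (hue) and a magnitude (brightness). The sketch below is a rough NumPy illustration of that idea, not Thunder's actual Colorize implementation:

```python
import numpy as np

def polar_color(s1, s2, scale=4.0):
    """Map two score components to (hue, brightness) pairs.

    hue encodes the angle between the two components, brightness the magnitude.
    """
    hue = (np.arctan2(s2, s1) + np.pi) / (2 * np.pi)      # angle mapped into [0, 1]
    brightness = np.clip(scale * np.hypot(s1, s2), 0, 1)  # magnitude, capped at 1
    return hue, brightness

h, b = polar_color(np.array([0.01, -0.02]), np.array([0.02, 0.0]))
print(h, b)
```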
To get more intuition for these colors, we can get the scores from a random subset of pixels. This will return two numbers per pixel, the projection onto the first and second principal component, and we threshold based on the norm so we are sure to retrieve pixels with at least some structure. Then we make a scatter pl... | pts = model.scores.subset(500, thresh=0.01, stat='norm')
from numpy import newaxis, squeeze
clrs = Colorize(cmap='polar', scale=4).transform([pts[:,0][:,newaxis], pts[:,1][:,newaxis]]).squeeze()
plt.scatter(pts[:,0],pts[:,1], c=clrs, s=75, alpha=0.7); | worker/notebooks/thunder/tutorials/factorization.ipynb | CodeNeuro/notebooks | mit |
Recall that each of these points represents a single pixel. Another way to better understand the PCA space is to plot the time series corresponding to each of these pixels, reconstructed using the first two principal components. | from numpy import asarray
recon = asarray([x[0] * model.comps[0, :] + x[1] * model.comps[1, :] for x in pts])  # list comprehension: map() returns a lazy iterator in Python 3, which asarray cannot consume directly
plt.gca().set_color_cycle(clrs)
plt.plot(recon.T); | worker/notebooks/thunder/tutorials/factorization.ipynb | CodeNeuro/notebooks | mit |
NMF
Non-negative matrix factorization is an alternative decomposition. It is meant to be applied to data that are strictly positive, which is often approximately true of neural responses. Like PCA, it also returns a set of temporal and spatial basis functions, but unlike PCA, it tends to return basis functions that do ... | from thunder import NMF
model = NMF(k=3, maxIter=20).fit(data) | worker/notebooks/thunder/tutorials/factorization.ipynb | CodeNeuro/notebooks | mit |
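As background, NMF factors a non-negative matrix V into non-negative factors W and H with V ≈ WH. The snippet below sketches the classic Lee-Seung multiplicative updates on random data to illustrate the idea; it is not the solver Thunder uses:

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((20, 30))            # strictly non-negative "data" matrix
k = 3
W = rng.random((20, k))
H = rng.random((k, 30))

# Lee-Seung multiplicative updates: both factors stay non-negative by construction.
for _ in range(50):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)

err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(err)  # relative reconstruction error shrinks as the updates proceed
```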
After fitting, model will have two attributes, h and w. For these data, h contains the temporal basis functions, and w contains the spatial basis functions. Let's look at both. | plt.plot(model.h.T);
imgs = model.w.pack()
image(imgs[0][:,:,0])
image(imgs[1][:,:,0])
image(imgs[2][:,:,0]) | worker/notebooks/thunder/tutorials/factorization.ipynb | CodeNeuro/notebooks | mit |
For NMF, a useful way to look at the basis functions is to encode each one as a separate color channel. We can do that using colorization with an rgb conversion, which simply maps the spatial basis functions directly to red, green, and blue values, and applies a global scaling factor which controls overall brightness. | maps = Colorize(cmap='rgb', scale=1.0).transform(imgs)
image(maps[:,:,0,:]) | worker/notebooks/thunder/tutorials/factorization.ipynb | CodeNeuro/notebooks | mit |
One problem with this way to look at NMF components is that the scale of the different components can cause some to dominate others. We also might like more control over color assignments. The indexed colorization option lets you specify one color per channel, and automatically normalizes the amplitude of each one. | maps = Colorize(cmap='indexed', colors=["hotpink", "cornflowerblue", "mediumseagreen"], scale=1).transform(imgs)
image(maps[:,:,0,:]) | worker/notebooks/thunder/tutorials/factorization.ipynb | CodeNeuro/notebooks | mit |
With these plots, it can be useful to add in a background image (for example, the mean). In this case, we also show how to select and colorize just two of the three map components against a background. | ref = rawdata.seriesMean().pack()
maps = Colorize(cmap='indexed', colors=['red', 'blue'], scale=1).transform(imgs[[0,2]], background=ref, mixing=0.5)
image(maps[:,:,0,:]) | worker/notebooks/thunder/tutorials/factorization.ipynb | CodeNeuro/notebooks | mit |
ICA
Independent component analysis is a final factorization approach. Unlike NMF, it does not require non-negative signals, but whereas PCA finds basis functions that maximize explained variance, ICA finds basis functions that maximize the non-Gaussianity of the recovered signals, and in practice, they tend to be both ... | from thunder import ICA
model = ICA(k=10,c=3).fit(data)
sns.set_style('darkgrid')
plt.plot(model.a); | worker/notebooks/thunder/tutorials/factorization.ipynb | CodeNeuro/notebooks | mit |
Some signals will be positive and others negative. This is expected because sign is arbitrary in ICA. It is useful to look at absolute value when making maps. | imgs = model.sigs.pack()
maps = Colorize(cmap='indexed', colors=['red','green', 'blue'], scale=3).transform(abs(imgs))
image(maps[:,:,0,:]) | worker/notebooks/thunder/tutorials/factorization.ipynb | CodeNeuro/notebooks | mit |
Overview
Use linked DMA channels to perform "scan" across multiple ADC input channels.
After each scan, use a DMA scatter chain to write the converted ADC values to a
separate output array for each ADC channel. The length of the output array to
allocate for each ADC channel is determined by the sample_count in the
examp... | import arduino_helpers.hardware.teensy as teensy
from arduino_rpc.protobuf import resolve_field_values
from teensy_minimal_rpc import SerialProxy
import teensy_minimal_rpc.DMA as DMA
import teensy_minimal_rpc.ADC as ADC
import teensy_minimal_rpc.SIM as SIM
import teensy_minimal_rpc.PIT as PIT
# Disconnect from existi... | teensy_minimal_rpc/notebooks/dma-examples/Example - [BROKEN] Periodic multi-channel ADC multiple samples using DMA and PIT.ipynb | wheeler-microfluidics/teensy-minimal-rpc | gpl-3.0 |
Configure ADC sample rate, etc. |
# Set ADC parameters
proxy.setAveraging(16, teensy.ADC_0)
proxy.setResolution(16, teensy.ADC_0)
proxy.setConversionSpeed(teensy.ADC_MED_SPEED, teensy.ADC_0)
proxy.setSamplingSpeed(teensy.ADC_MED_SPEED, teensy.ADC_0)
proxy.update_adc_registers(
teensy.ADC_0,
ADC.Registers(CFG2=ADC.R_CFG2(MUXSEL=ADC.R_CFG2.B))) | teensy_minimal_rpc/notebooks/dma-examples/Example - [BROKEN] Periodic multi-channel ADC multiple samples using DMA and PIT.ipynb | wheeler-microfluidics/teensy-minimal-rpc | gpl-3.0 |
Pseudo-code to set DMA channel $i$ to be triggered by ADC0 conversion complete.
DMAMUX0_CFGi[SOURCE] = DMAMUX_SOURCE_ADC0 // Route ADC0 as DMA channel source.
DMAMUX0_CFGi[TRIG] = 0 // Disable periodic trigger.
DMAMUX0_CFGi[ENBL] = 1 // Enable the DMAMUX configuration for channel.
DMA_ERQ[i] = 1 // DMA request inp... | DMAMUX_SOURCE_ADC0 = 40 # from `kinetis.h`
DMAMUX_SOURCE_ADC1 = 41 # from `kinetis.h`
# DMAMUX0_CFGi[SOURCE] = DMAMUX_SOURCE_ADC0 // Route ADC0 as DMA channel source.
# DMAMUX0_CFGi[TRIG] = 0 // Disable periodic trigger.
# DMAMUX0_CFGi[ENBL] = 1 // Enable the DMAMUX configuration for channel.
proxy.updat... | teensy_minimal_rpc/notebooks/dma-examples/Example - [BROKEN] Periodic multi-channel ADC multiple samples using DMA and PIT.ipynb | wheeler-microfluidics/teensy-minimal-rpc | gpl-3.0 |
Analog channel list
List of channels to sample.
Map channels from Teensy references (e.g., A0, A1, etc.) to the Kinetis analog
pin numbers using the adc.CHANNEL_TO_SC1A_ADC0 mapping. | import re
import numpy as np
import pandas as pd
import arduino_helpers.hardware.teensy.adc as adc
# The number of samples to record for each ADC channel.
sample_count = 10
teensy_analog_channels = ['A0', 'A1', 'A0', 'A3', 'A0']
sc1a_pins = pd.Series(dict([(v, adc.CHANNEL_TO_SC1A_ADC0[getattr(teensy, v)])
... | teensy_minimal_rpc/notebooks/dma-examples/Example - [BROKEN] Periodic multi-channel ADC multiple samples using DMA and PIT.ipynb | wheeler-microfluidics/teensy-minimal-rpc | gpl-3.0 |
Allocate and initialize device arrays
SC1A register configuration for each ADC channel in the channel_sc1as list.
Copy channel_sc1as list to device.
ADC result array
Initialize to zero. | proxy.free_all()
N = np.dtype('uint16').itemsize * channel_sc1as.size
# Allocate source array
adc_result_addr = proxy.mem_alloc(N)
# Fill result array with zeros
proxy.mem_fill_uint8(adc_result_addr, 0, N)
# Copy channel SC1A configurations to device memory
adc_sda1s_addr = proxy.mem_aligned_alloc_and_set(4, channe... | teensy_minimal_rpc/notebooks/dma-examples/Example - [BROKEN] Periodic multi-channel ADC multiple samples using DMA and PIT.ipynb | wheeler-microfluidics/teensy-minimal-rpc | gpl-3.0 |
Configure DMA channel $i$ | ADC0_SC1A = 0x4003B000 # ADC status and control registers 1
sda1_tcd_msg = DMA.TCD(CITER_ELINKNO=DMA.R_TCD_ITER_ELINKNO(ELINK=False, ITER=channel_sc1as.size),
BITER_ELINKNO=DMA.R_TCD_ITER_ELINKNO(ELINK=False, ITER=channel_sc1as.size),
ATTR=DMA.R_TCD_ATTR(SSIZE=DMA.R_TCD_A... | teensy_minimal_rpc/notebooks/dma-examples/Example - [BROKEN] Periodic multi-channel ADC multiple samples using DMA and PIT.ipynb | wheeler-microfluidics/teensy-minimal-rpc | gpl-3.0 |
Configure DMA channel $ii$ | ADC0_RA = 0x4003B010 # ADC data result register
ADC0_RB = 0x4003B014 # ADC data result register
tcd_msg = DMA.TCD(CITER_ELINKYES=DMA.R_TCD_ITER_ELINKYES(ELINK=True, LINKCH=1, ITER=channel_sc1as.size),
BITER_ELINKYES=DMA.R_TCD_ITER_ELINKYES(ELINK=True, LINKCH=1, ITER=channel_sc1as.size),
... | teensy_minimal_rpc/notebooks/dma-examples/Example - [BROKEN] Periodic multi-channel ADC multiple samples using DMA and PIT.ipynb | wheeler-microfluidics/teensy-minimal-rpc | gpl-3.0 |
Trigger sample scan across selected ADC channels | # Clear output array to zero.
proxy.mem_fill_uint8(adc_result_addr, 0, N)
proxy.mem_fill_uint8(samples_addr, 0, sample_count * N)
# Software trigger channel $i$ to copy *first* SC1A configuration, which
# starts ADC conversion for the first channel.
#
# Conversions for subsequent ADC channels are triggered through min... | teensy_minimal_rpc/notebooks/dma-examples/Example - [BROKEN] Periodic multi-channel ADC multiple samples using DMA and PIT.ipynb | wheeler-microfluidics/teensy-minimal-rpc | gpl-3.0 |
The import here was very simple, because this notebook is in the same folder as the hello_quantum.py file. If this is not the case, you'll have to change the path. See the Hello_Qiskit notebook for an example of this.
Once the import has been done, you can set up and display the visualization. | grid = hello_quantum.pauli_grid()
grid.update_grid() | community/games/game_engines/Making_your_own_hello_quantum.ipynb | antoniomezzacapo/qiskit-tutorial | apache-2.0 |
This has attributes and methods which create and run quantum circuits with Qiskit. | for gate in [['x','1'],['h','0'],['z','0'],['h','1'],['z','1']]:
command = 'grid.qc.'+gate[0]+'(grid.qr['+gate[1]+'])'
eval(command)
grid.update_grid() | community/games/game_engines/Making_your_own_hello_quantum.ipynb | antoniomezzacapo/qiskit-tutorial | apache-2.0 |
There is also an alternative visualization, which can be used to better represent non-Clifford gates. | grid = hello_quantum.pauli_grid(mode='line')
grid.update_grid() | community/games/game_engines/Making_your_own_hello_quantum.ipynb | antoniomezzacapo/qiskit-tutorial | apache-2.0 |
The run_game function can also be used to implement custom 'Hello Quantum' games within a notebook. It is called with
hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
where the arguments set up the puzzle by specifying the following information.
initialize
* List of gates applied... | initialize = [['x', '0'],['cx', '1']]
success_condition = {'IZ': 1.0}
allowed_gates = {'0': {'h':0}, '1': {'h':0}, 'both': {'cz': 1}}
vi = [[], True, True]
qubit_names = {'0':'qubit 0', '1':'qubit 1'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names) | community/games/game_engines/Making_your_own_hello_quantum.ipynb | antoniomezzacapo/qiskit-tutorial | apache-2.0 |
Creating a Series
You can convert a list, NumPy array, or dictionary to a Series: | labels = ['a', 'b', 'c']
my_list = [10, 20, 30]
arr = np.array([10, 20, 30])
d = {'a': 10,'b': 20,'c': 30} | 17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/03-General Pandas/02-Series.ipynb | arcyfelix/Courses | apache-2.0 |
Using Lists | pd.Series(data = my_list)
pd.Series(data = my_list,
index = labels)
pd.Series(my_list, labels) | 17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/03-General Pandas/02-Series.ipynb | arcyfelix/Courses | apache-2.0 |
NumPy Arrays | pd.Series(arr)
pd.Series(arr, labels) | 17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/03-General Pandas/02-Series.ipynb | arcyfelix/Courses | apache-2.0 |
Dictionary | pd.Series(d) | 17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/03-General Pandas/02-Series.ipynb | arcyfelix/Courses | apache-2.0 |
Data in a Series
A pandas Series can hold a variety of object types: | pd.Series(data = labels)
# Even functions (although unlikely that you will use this)
pd.Series([sum, print, len]) | 17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/03-General Pandas/02-Series.ipynb | arcyfelix/Courses | apache-2.0 |
Using an Index
The key to using a Series is understanding its index. Pandas makes use of these index names or numbers to allow for fast lookups of information (it works like a hash table or dictionary).
Let's see some examples of how to grab information from a Series. Let us create two series, ser1 and ser2: | ser1 = pd.Series([1, 2, 3, 4],
index = ['USA', 'Germany', 'USSR', 'Japan'])
ser1
ser2 = pd.Series([1, 2, 5, 4],
index = ['USA', 'Germany', 'Italy', 'Japan'])
ser2
ser1['USA'] | 17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/03-General Pandas/02-Series.ipynb | arcyfelix/Courses | apache-2.0 |
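Index labels also matter for operations between Series: pandas aligns on the index, and labels present in only one Series produce NaN. A quick added illustration (not part of the original notebook):

```python
import pandas as pd

ser1 = pd.Series([1, 2, 3, 4], index = ['USA', 'Germany', 'USSR', 'Japan'])
ser2 = pd.Series([1, 2, 5, 4], index = ['USA', 'Germany', 'Italy', 'Japan'])

# Labels are aligned before adding; labels missing from either side become NaN.
print(ser1 + ser2)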