Return Value

Name | Description | Type
--- | --- | ---
Forward_Flow | The forward flow. | Flow
Forward_Traces | The forward traces. | List of Trace
New_Sessions | Sessions initialized by the forward trace. | List of str
Reverse_Flow | The reverse flow. | Flow
Reverse_Traces | The reverse traces. | List of Trace

Retriev...
result.Forward_Flow
docs/source/notebooks/forwarding.ipynb
batfish/pybatfish
apache-2.0
Retrieving the detailed Forward Trace information
len(result.Forward_Traces)
result.Forward_Traces[0]
Evaluating the first Forward Trace
result.Forward_Traces[0][0]
Retrieving the disposition of the first Forward Trace
result.Forward_Traces[0][0].disposition
Retrieving the first hop of the first Forward Trace
result.Forward_Traces[0][0][0]
Retrieving the last hop of the first Forward Trace
result.Forward_Traces[0][0][-1]
Retrieving the Return flow definition
result.Reverse_Flow
Retrieving the detailed Return Trace information
len(result.Reverse_Traces)
result.Reverse_Traces[0]
Evaluating the first Reverse Trace
result.Reverse_Traces[0][0]
Retrieving the disposition of the first Reverse Trace
result.Reverse_Traces[0][0].disposition
Retrieving the first hop of the first Reverse Trace
result.Reverse_Traces[0][0][0]
Retrieving the last hop of the first Reverse Trace
result.Reverse_Traces[0][0][-1]

bf.set_network('generate_questions')
bf.set_snapshot('generate_questions')
Reachability

Finds flows that match the specified path and header space conditions. Searches across all flows that match the specified conditions and returns examples of such flows. This question can be used to ensure that certain services are globally accessible and parts of the network are perfectly isolated from eac...
result = bf.q.reachability(
    pathConstraints=PathConstraints(startLocation='/as2/'),
    headers=HeaderConstraints(dstIps='host1', srcIps='0.0.0.0/0', applications='DNS'),
    actions='SUCCESS').answer().frame()
Bi-directional Reachability

Searches for successfully delivered flows that can successfully receive a response. Performs two reachability analyses, first originating from specified sources, then returning back to those sources. After the first (forward) pass, sets up sessions in the network and creates returning flows ...
result = bf.q.bidirectionalReachability(
    pathConstraints=PathConstraints(startLocation='/as2dist1/'),
    headers=HeaderConstraints(dstIps='host1', srcIps='0.0.0.0/0', applications='DNS'),
    returnFlowType='SUCCESS').answer().frame()
Loop detection

Detects forwarding loops. Searches across all possible flows in the network and returns example flows that will experience forwarding loops.

Inputs

Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
maxTraces | Limit the number of traces returned. | int | True |

Invocation
result = bf.q.detectLoops().answer().frame()
Return Value

Name | Description | Type
--- | --- | ---
Flow | The flow | Flow
Traces | The traces for this flow | Set of Trace
TraceCount | The total number of traces for this flow | int

Print the first 5 rows of the returned Dataframe
result.head(5)

bf.set_network('generate_questions')
bf.set_snapshot('generate_questions')
Multipath Consistency for host-subnets

Validates multipath consistency between all pairs of subnets. Searches across all flows between subnets that are treated differently (i.e., dropped versus forwarded) by different paths in the network and returns example flows.

Inputs

Name | Description | Type | Optional | Default ...
result = bf.q.subnetMultipathConsistency().answer().frame()
Multipath Consistency for router loopbacks

Validates multipath consistency between all pairs of loopbacks. Finds flows between loopbacks that are treated differently (i.e., dropped versus forwarded) by different paths in the presence of multipath routing.

Inputs

Name | Description | Type | Optional | Default Value --- ...
result = bf.q.loopbackMultipathConsistency().answer().frame()
Retrieving the last hop of the first Trace
result.Traces[0][0][-1]
He picked one of these cards and kept it in his mind. Next, the 9 playing cards would flash one-by-one in a random order across the screen. Each card was presented a total of 30 times. The subject would mentally count the number of times his card would appear on the screen (which was 30 if he was paying attention, we a...
import scipy.io

m = scipy.io.loadmat('data/tutorial1-01.mat')
print(m.keys())
eeg-bci/1. Load EEG data and plot ERP.ipynb
wmvanvliet/neuroscience_tutorials
bsd-2-clause
The scipy.io.loadmat function returns a dictionary containing the variables stored in the matlab file. Two of them are of interest to us, the actual EEG and the labels which indicate at which point in time which card was presented to the subject.
EEG = m['EEG']
labels = m['labels'].flatten()

print('EEG dimensions:', EEG.shape)
print('Label dimensions:', labels.shape)
All channels are drawn on top of each other, which is not convenient. Usually, EEG data is plotted with the channels vertically stacked, an artefact stemming from the days when EEG machines drew on large rolls of paper. Let's add a constant value to each EEG channel before plotting them and some decoration like a meani...
from matplotlib.collections import LineCollection

def plot_eeg(EEG, vspace=100, color='k'):
    '''
    Plot the EEG data, stacking the channels horizontally on top of each other.

    Parameters
    ----------
    EEG : array (channels x samples)
        The EEG data
    vspace : float (default 100)
        Amount of...
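The function above is cut off; a minimal sketch of the vertical-stacking idea described in the text (the function name here is hypothetical, and the 2048 Hz sample rate is taken from the surrounding cells) could look like:

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # render off-screen; safe on headless machines
import matplotlib.pyplot as plt

def plot_eeg_sketch(EEG, vspace=100, color='k'):
    """Plot EEG channels stacked vertically by offsetting channel i by i * vspace."""
    nchannels, nsamples = EEG.shape
    offsets = np.arange(nchannels)[:, np.newaxis] * vspace  # per-channel shift
    time = np.arange(nsamples) / 2048.0                      # assumed 2048 Hz rate
    plt.plot(time, (EEG + offsets).T, color=color)
    plt.xlabel('Time (s)')

plot_eeg_sketch(np.random.randn(7, 256))
```

Adding the offset before plotting is what keeps the channels from overlapping, exactly as the text describes.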
And to top it off, let's add vertical lines whenever a card was shown to the subject:
figure(figsize=(15, 4))
plot_eeg(EEG)
for onset in flatnonzero(labels):
    axvline(onset / 2048., color='r')
As you can see, cards were shown at a rate of 2 per second. We are interested in the response generated whenever a card was shown, so we cut one-second-long pieces of EEG signal that start from the moment a card was shown. These pieces will be named 'trials'. A useful function here is flatnonzero which returns all the...
onsets = flatnonzero(labels)
print(onsets[:10])  # Print the first 10 onsets
print('Number of onsets:', len(onsets))
classes = labels[onsets]
print('Card shown at each onset:', classes[:10])
Let's create a 3-dimensional array containing all the trials:
nchannels = 7            # 7 EEG channels
sample_rate = 2048.      # The sample rate of the EEG recording device was 2048Hz
nsamples = int(1.0 * sample_rate)  # one second's worth of data samples
ntrials = len(onsets)
trials = zeros((ntrials, nchannels, nsamples))
for i, onset in enumerate(onsets):
    trials[i, :, :] = EEG[:, onset:o...
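The slicing above is cut off; a self-contained sketch of the same epoching step, using fake data in place of the recording (the 7-channel, 2048 Hz, one-second-trial assumptions come from the text), might read:

```python
import numpy as np

rng = np.random.default_rng(0)
sample_rate = 2048
EEG = rng.standard_normal((7, 10 * sample_rate))  # fake 10-second recording
onsets = np.array([100, 3000, 6000])              # fake stimulus onsets

nsamples = int(1.0 * sample_rate)                 # one second per trial
trials = np.zeros((len(onsets), 7, nsamples))
for i, onset in enumerate(onsets):
    trials[i, :, :] = EEG[:, onset:onset + nsamples]

print(trials.shape)  # (3, 7, 2048)
```

Each trial is simply a one-second window of all channels starting at a stimulus onset.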
All that's left is to calculate this score for each card, and pick the card with the highest score:
nclasses = len(cards)
scores = [pearsonr(classes == i+1, p300_amplitudes)[0] for i in range(nclasses)]

# Plot the scores
figure(figsize=(4,3))
bar(arange(nclasses)+1, scores, align='center')
xticks(arange(nclasses)+1, cards, rotation=-90)
ylabel('score')

# Pick the card with the highest score
winning_card = argmax(sc...
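The cell above is truncated; a self-contained sketch of the correlation-based scoring on synthetic data (the labels, amplitudes, and the boosted card 4 are all fabricated for illustration) could be:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical stand-ins: 9 cards, per-trial class labels and P300 amplitudes
rng = np.random.default_rng(0)
nclasses, ntrials = 9, 270
classes = rng.integers(1, nclasses + 1, size=ntrials)
p300_amplitudes = rng.standard_normal(ntrials)
p300_amplitudes[classes == 4] += 2.0  # pretend card 4 evokes a strong P300

# Correlate "this trial showed card i" with the P300 amplitude per trial
scores = [pearsonr((classes == i + 1).astype(float), p300_amplitudes)[0]
          for i in range(nclasses)]
winning_card = np.argmax(scores) + 1
print(winning_card)
```

The card whose presentations correlate most strongly with large P300 amplitudes wins.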
To determine whether an optical transition between two states is possible, we first import some libraries to make this easier.
# Importing necessary extensions
import numpy as np
import itertools
import functools
import operator

# The use of type annotations requires Python 3.6 or newer
from typing import List
Apendix.ipynb
ivergara/science_notebooks
gpl-3.0
The question is whether there is a transition matrix element between the aforementioned initial and final states. We can easily answer that with a yes, since the receiving level $\beta,+$ is empty in the initial state and no spin flip is involved when moving the particle from $\alpha,+$ to $\beta,+$. Thus, the question...
# looking for the positions/levels with different occupation
changes = np.logical_xor(initial, final)
# obtaining the indexes of those positions
np.nonzero(changes)[0].tolist()
We can see that we get a change in positions $0$ and $6$, which correspond to $\alpha,+$ and $\beta,+$ in sites $i$ and $j$, respectively. Now we apply modulo 2, which maps even positions to $0$ and odd positions to $1$, allowing us to check whether the changes are in even or odd positions. Thus, if both are even or o...
modulo = 2
np.unique(np.remainder(np.nonzero(changes), modulo)).size == 1
Thus, for the chosen initial and final states, the transition is allowed since both changes are at even positions. We can wrap all of this logic in a function.
def is_allowed(initial: List[int], final: List[int]) -> bool:
    """
    Given initial and final states represented as binary lists,
    returns whether the transition is allowed considering spin conservation.
    """
    return np.unique(
        np.remainder(
            np.nonzero(
                np.logical_xor(initial, final))...
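The return expression above is truncated; a complete version of the parity check, under the same 0/1 occupation encoding (the function name and example states here are illustrative), might read:

```python
import numpy as np
from typing import List

def is_allowed_sketch(initial: List[int], final: List[int]) -> bool:
    """Allowed iff every changed position has the same parity (no spin flip)."""
    changes = np.logical_xor(initial, final)
    parities = np.remainder(np.nonzero(changes)[0], 2)
    return np.unique(parities).size == 1

# Changes at positions 0 and 6 (both even): spin is conserved
print(is_allowed_sketch([1, 0, 0, 0, 0, 0, 0, 0],
                        [0, 0, 0, 0, 0, 0, 1, 0]))  # True
# Changes at positions 0 and 5 (even and odd): spin flip, not allowed
print(is_allowed_sketch([1, 0, 0, 0, 0, 0, 0, 0],
                        [0, 0, 0, 0, 0, 1, 0, 0]))  # False
```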
Now we have a function that tells us whether an optical transition between two states is possible. To recapitulate, we can recompute our previous case and then a different final state that is not allowed since it involves a spin flip, e.g., [0 0 0 0 0 1 1 0].
is_allowed(initial, final)
is_allowed(initial, [0, 0, 0, 0, 0, 1, 1, 0])
With this preamble, we are equipped to handle more complex cases. Given the chosen computational representation for the states, the normalization coefficients of the states are left out. Thus, one has to take care to keep track of them when properly constructing the transition matrix element in question later on. Ca$_2$...
def generate_states(electrons: int, levels: int) -> List[List[int]]:
    """
    Generates the list representation of a given number of electrons
    and levels (degeneracy not considered).
    """
    # create an array of length equal to the amount of levels
    # with an amount of 1's equal to the number of electro...
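The body above is truncated; a minimal sketch of such a generator, assuming states are encoded as 0/1 occupation lists (the function name here is hypothetical, and `itertools.combinations` is one of several ways to enumerate the occupied-level choices):

```python
from itertools import combinations
from typing import List

def generate_states_sketch(electrons: int, levels: int) -> List[List[int]]:
    """Enumerate all 0/1 occupation lists with `electrons` ones among `levels` slots."""
    states = []
    for occupied in combinations(range(levels), electrons):
        states.append([1 if i in occupied else 0 for i in range(levels)])
    return states

print(len(generate_states_sketch(3, 6)))  # C(6,3) = 20 states
```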
With this, we can generate states of 3, 4, and 5 electrons in a 3-level system with degeneracy 2, i.e. 6 levels in total.
states_d3 = generate_states(3, 6)
states_d4 = generate_states(4, 6)
states_d5 = generate_states(5, 6)
We can consider first the $d^4$ states and take a look at them.
states_d4
It is quite a list of generated states. But from this whole list, not all states are relevant for the problem at hand. This means that we can reduce the number of states beforehand by applying the physical constraints we have. From all the $d^4$ states, we consider only those with a full $d_{xy}$ orbital and those whic...
possible_states_d4 = [
    # select states that fulfill
    list(state) for state in states_d4
    # dxy orbital double occupancy
    if state[0] == 1 and state[1] == 1
    # dzx/dyz orbital single occupancy
    and ...
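The comprehension above is cut off. A self-contained sketch of the filtering idea follows; the first condition (positions 0 and 1 both occupied, i.e. a full $d_{xy}$) is visible above, while the second condition is truncated in the source, so single occupancy of each remaining orbital is an *assumed* reading (it does reproduce the 4 states the text reports):

```python
from itertools import combinations

# Enumerate 4-electron states over 6 levels, then filter
states_d4 = [[1 if i in occ else 0 for i in range(6)]
             for occ in combinations(range(6), 4)]
possible = [s for s in states_d4
            if s[0] == 1 and s[1] == 1              # dxy doubly occupied
            and s[2] + s[3] == 1 and s[4] + s[5] == 1]  # assumed: one electron per remaining orbital
print(len(possible))  # 4 states, matching the text
```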
We obtain 4 different $d^4$ states that fulfill the conditions previously indicated. From the previous list, the first and last elements correspond to states with $S_z=\pm1$, whereas the ones in the middle correspond to the two superimposed states for the $S=0$ state, namely, a magnon. These four states, could have bee...
possible_states_d3 = [list(state) for state in states_d3
                      if state[0] == 1   # xy up occupied
                      or state[1] == 1]  # xy down occupied
possible_states_d3
In the case of the $d^5$ states, since our ground state has a doubly occupied $d_{xy}$ orbital, it has to stay occupied.
possible_states_d5 = [list(state) for state in states_d5
                      # xy up down occupied
                      if state[0] == 1 and state[1] == 1]
possible_states_d5
We could generate all $d^3d^5$ combinations and check how many of them there are.
def combine_states(first: List[List[int]], second: List[List[int]]) -> List[List[int]]:
    """
    Takes two lists of list representations of states and returns
    the list representation of a two-site state.
    """
    # Producing all the possible final states.
    # This has to be read from b...
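The body above is truncated; a minimal sketch of the combination step, assuming a two-site state is just the concatenation of the two single-site occupation lists (the function name is illustrative), could be:

```python
from itertools import product
from typing import List

def combine_states_sketch(first: List[List[int]],
                          second: List[List[int]]) -> List[List[int]]:
    """Concatenate every state of the first site with every state of the second."""
    return [a + b for a, b in product(first, second)]

pairs = combine_states_sketch([[1, 0], [0, 1]], [[1, 1]])
print(pairs)  # [[1, 0, 1, 1], [0, 1, 1, 1]]
```

The number of combined states is simply `len(first) * len(second)`.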
We already saw in the previous section how we can check if a transition is allowed in our list codification of the states. Here we will make the function slightly more complex to help us deal with generating final states.
def label(initial, final, levels, mapping):
    """Helper function to label the levels/orbitals involved."""
    changes = np.nonzero(np.logical_xor(initial, final))
    positions = np.remainder(changes, levels)//2
    return f"{mapping[positions[0][0]]} and {mapping[positions[0][1]]}"

def transition(initial: List[int...
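The `transition` body is cut off above; a stripped-down sketch of the idea (check each candidate final state with the parity rule and return the allowed ones; the two-site states below are hypothetical examples, not states from the text):

```python
import numpy as np
from typing import List

def is_allowed(initial: List[int], final: List[int]) -> bool:
    changes = np.nonzero(np.logical_xor(initial, final))[0]
    return np.unique(changes % 2).size == 1

def transition_sketch(initial: List[int],
                      finals: List[List[int]]) -> List[List[int]]:
    """Return the candidate final states reachable without a spin flip."""
    return [f for f in finals if is_allowed(initial, f)]

start = [1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0]     # toy two-site state
hop_ok = [1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0]    # changes at odd positions 1 and 9
hop_flip = [1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0]  # even position 2 to odd 9: spin flip
print(transition_sketch(start, [hop_ok, hop_flip]))  # only hop_ok survives
```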
With this, we can now explore the transitions between the different initial states and final states ($^4A_2$, $^2E$, and $^2T_1$ multiplets for the $d^3$ sector). Concerning the $d^4$ states, as explained in chapter 5, there is the possibility to be in the $S_z=\pm1$ or $S_z=0$. We will cover each one of them in the fo...
A2_32 = [[1,0,1,0,1,0]]      # 4A2 Sz=3/2
A2_neg_32 = [[0,1,0,1,0,1]]  # 4A2 Sz=-3/2
whereas the ones for the $|^4A_2,\pm1/2>$ states are
A2_12 = [[0,1,1,0,1,0], [1,0,0,1,1,0], [1,0,1,0,0,1]]      # 4A2 Sz=1/2
A2_neg_12 = [[1,0,0,1,0,1], [0,1,1,0,0,1], [0,1,0,1,1,0]]  # 4A2 Sz=-1/2
Notice that the prefactors and signs are missing from this representation, and have to be taken into account when combining all the pieces into the end result.

$S_z=\pm1$

Starting with the pure $S_z=\pm1$ as initial states, meaning $d_{\uparrow}^4d_{\uparrow}^4$ (FM) and $d_{\uparrow}^4d_{\downarrow}^4$ (AFM), we have ...
FM = [1,1,1,0,1,0,1,1,1,0,1,0]
AFM_up = [1,1,1,0,1,0,1,1,0,1,0,1]
AFM_down = [1,1,0,1,0,1,1,1,1,0,1,0]
Handling the ferromagnetic ordering first, the allowed transitions from the initial state into the $|^4A_2,3/2>$ state are
transition(FM, A2_32)
Comparing the initial and final states representations and considering the $|^4A_2,3/2>$ prefactor, we obtain that there are two possible transitions with matrix element $t_{xy,xz}$ and $t_{xy,yz}$. Each one is allowed twice from swapping the positions between $d^3$ and $d^5$. Then, for the $|^4A_2,\pm1/2>$ states
transition(FM, A2_12)
transition(FM, A2_neg_12)
Thus, for the $|^4A_2,\pm1/2>$ states, there is no allowed transition starting from the FM initial ground state. Repeating for both $^4A_2$ states, but now starting from the antiferromagnetic ($d^4_\uparrow d^4_\downarrow$) initial state, we get
transition(AFM_up, A2_32)
transition(AFM_up, A2_12)
transition(AFM_up, A2_neg_12)
We see that the AFM initial ground state has no transition matrix element for the $|^4A_2,3/2>$ state, whereas transitions involving the $|^4A_2,\pm1/2>$ states are allowed. Once again, checking the prefactors for the multiplet and the initial ground state we get a transition matrix element of $t_{xy,xz}/\sqrt{3}$ and $...
S0_1 = [1, 1, 1, 0, 0, 1]  # |A>
S0_2 = [1, 1, 0, 1, 1, 0]  # |B>
d_zero_down = [1, 1, 0, 1, 0, 1]
d_zero_up = [1, 1, 1, 0, 1, 0]
Thus, we append the $d^4_\uparrow$ representation to each part of the $d^4_0$ states. Then, checking for the transitions into the $|^4A_2,\pm3/2>$ $d^3$ state we get
transition(S0_1 + d_zero_up, A2_32)
transition(S0_2 + d_zero_up, A2_32)
print("\n\n")
transition(S0_1 + d_zero_up, A2_neg_32)
transition(S0_2 + d_zero_up, A2_neg_32)
Collecting the terms, we get that for $|^4A_2, 3/2>$ there are no transitions into a $|d^3>|d^5>$ final state, but there are transitions into two different $|d^5>|d^3>$ final states, one for each of the $|A>$ and $|B>$ parts. Thus, considering the numerical factors of the involved states, the amplitude in this case is $\f...
transition(S0_1 + d_zero_down, A2_32)
transition(S0_2 + d_zero_down, A2_32)
print("\n\n")
transition(S0_1 + d_zero_down, A2_neg_32)
transition(S0_2 + d_zero_down, A2_neg_32)
Here, we observe the same situation as before, but with the roles of the $|^4A_2,\pm3/2>$ states swapped. This means that the contribution of the $d^0 d^4_\uparrow$ is the same as the $d^0 d^4_\downarrow$ one. Similarly, we can start from the $d^4_\uparrow d^0$ or the $d^4_\downarrow d^0$ which will also swap from tr...
transition(d_zero_up + S0_1, A2_32)
transition(d_zero_up + S0_2, A2_32)
print("\n\n")
transition(d_zero_up + S0_1, A2_neg_32)
transition(d_zero_up + S0_2, A2_neg_32)
print("\n\n")
transition(d_zero_down + S0_1, A2_32)
transition(d_zero_down + S0_2, A2_32)
print("\n\n")
transition(d_zero_down + S0_1, A2_neg_32)
transiti...
Following the same procedure for the $|^4A_2, 1/2>$ states and $d^4_0d^4_\uparrow$ ground state
transition(S0_1 + d_zero_up, A2_12)
transition(S0_2 + d_zero_up, A2_12)
Here we get some possible transitions to final states of interest. We have to remember that the "receiving" $d^3$ multiplet has three terms, which have to be added if present. For the $|d^3>|d^5>$ case there are two allowed transitions into $d^5$ states involving $t_{xy,xz}$ and $t_{xy,yz}$ for $|A>$ and $|B>$. Fr...
transition(S0_1 + d_zero_up, A2_neg_12)
transition(S0_2 + d_zero_up, A2_neg_12)
there is no transition found. We repeat for $|d^4_\uparrow d^4_0>$
transition(d_zero_up + S0_1, A2_12)
transition(d_zero_up + S0_2, A2_12)
print("\n\n")
transition(d_zero_up + S0_1, A2_neg_12)
transition(d_zero_up + S0_2, A2_neg_12)
This is the same situation as before, but with the positions of the contributions swapped, as we already saw for the $|^4A_2, 3/2>$ case. For completeness we show the situation with $d^4_\downarrow$ as follows.
transition(S0_1 + d_zero_down, A2_12)
transition(S0_2 + d_zero_down, A2_12)
print("\n\n")
transition(d_zero_down + S0_1, A2_12)
transition(d_zero_down + S0_2, A2_12)
print("\n\n")
transition(S0_1 + d_zero_down, A2_neg_12)
transition(S0_2 + d_zero_down, A2_neg_12)
print("\n\n")
transition(d_zero_down + S0_1, A2_neg_12)
...
Continuing with the $d^4_0d^4_0$ case, the situation gets more complicated since $<f|\hat{t}|d^4_0>|d^4_0>$ can be split as follows: $<f|\hat{t}(|A>+|B>)(|A>+|B>)$, which gives 4 terms labeled $F$ to $I$. Thus, we construct the four combinations for the initial state and calculate each one of them to later sum them up.
F = S0_1 + S0_1
G = S0_1 + S0_2
H = S0_2 + S0_1
I = S0_2 + S0_2
First dealing with the $|^4A_2,\pm 3/2>$ states for the $d^3$ sector.
transition(F, A2_32)
transition(G, A2_32)
transition(H, A2_32)
transition(I, A2_32)
transition(F, A2_neg_32)
transition(G, A2_neg_32)
transition(H, A2_neg_32)
transition(I, A2_neg_32)
There are no transitions from the $d^4_0d^4_0$ state to $|^4A_2,\pm3/2>$. Now we repeat the same strategy for the $|^4A_2,1/2>$ state
transition(F, A2_12)
transition(G, A2_12)
transition(H, A2_12)
transition(I, A2_12)
Here we have terms for both $|d^3>|d^5>$ and $|d^5>|d^3>$ and for each component of the initial state, which can be grouped by which $d^5$ state they transition into. Term pairs $F$-$H$ and $G$-$I$ belong together, involving the $d^5_{xz\downarrow}$ and $d^5_{yz\downarrow}$ states, respectively. Adding terms corresponding...
transition(F, A2_neg_12)
transition(G, A2_neg_12)
transition(H, A2_neg_12)
transition(I, A2_neg_12)
For the $|^4A_2,-1/2>$ states we obtain the same values as for $|^4A_2,1/2>$, but involving the other spin state. Now we have all the amplitudes corresponding to transitions into the $^4A_2$ multiplet enabled by the initial states involving $S_z=0$, namely, $\uparrow 0+ 0\uparrow+ \downarrow 0 + 0\downarrow + 00$. $|^2E,a...
Ea = [[0,1,1,0,1,0], [1,0,0,1,1,0], [1,0,1,0,0,1]]
transition(AFM_down, Ea)
transition(AFM_up, Ea)
transition(FM, Ea)
For the $|^2E,a>$ multiplet, only transitions from the AFM ground state are possible. Collecting the prefactors, we get that the transition matrix element is $-\sqrt{2/3}t_{xy,xz}$ and $-\sqrt{2/3}t_{xy,yz}$, as can easily be checked by hand. Then, for the $|^2E,b>$ multiplet
Eb = [[1,0,1,0,0,1], [1,0,0,1,1,0]]
transition(AFM_down, Eb)
transition(AFM_up, Eb)
transition(FM, Eb)
From the $S=\pm1$ initial states, no transitions to $|^2E,b>$ are possible. We continue with the situation when considering $S=0$. In this case, each initial state is decomposed into two parts, resulting in 4 terms.
transition(S0_1 + S0_1, Ea)
transition(S0_1 + S0_2, Ea)
transition(S0_2 + S0_1, Ea)
transition(S0_2 + S0_2, Ea)
Each one of the combinations is allowed; thus, considering the prefactors of the $S_0$ and $|^2E,a>$ states, we obtain $\sqrt{\frac{2}{3}}t_{xy,xz}$ and $\sqrt{\frac{2}{3}}t_{xy,yz}$. Doing the same for $|^2E,b>$
transition(S0_1 + S0_1, Eb)
transition(S0_1 + S0_2, Eb)
transition(S0_2 + S0_1, Eb)
transition(S0_2 + S0_2, Eb)
Adding all the contributions of the allowed terms, we obtain that, due to the minus sign in the $|^2E,b>$ multiplet, the contribution is 0. We still have to cover the ground state of the kind $d_0^4d_\uparrow^4$. As done previously, we again split the $d_0^4$ into its two parts.
S0_1 = [1, 1, 1, 0, 0, 1]
S0_2 = [1, 1, 0, 1, 1, 0]
and then we add the $d^4_\uparrow$ representation to each one. Thus, for the $|^2E, Ea>$ $d^3$ multiplet we get
transition(S0_1 + d_zero_up, Ea)
transition(S0_2 + d_zero_up, Ea)
print("\n\n")
transition(d_zero_up + S0_1, Ea)
transition(d_zero_up + S0_2, Ea)
Here, both parts of the $S_z=0$ state contribute. Checking the prefactors for $S_z=0$ ($1/\sqrt{2}$) and $|^2E, Ea>$ ($1/\sqrt{6}$), we get a matrix element $\sqrt{\frac{2}{3}}t_{xy,xz}$. Following with transitions into the $|^2E, Eb>$
transition(S0_1 + d_zero_up, Eb)
transition(S0_2 + d_zero_up, Eb)
print("\n\n")
transition(d_zero_up + S0_1, Eb)
transition(d_zero_up + S0_2, Eb)
$|^2T_1,+/->$

This multiplet has 6 possible forms: $\textit{xy}$, $\textit{xz}$, or $\textit{yz}$ singly occupied, with spin $+$ or $-$. First we encode the $|^2T_1,+>$ multiplet with singly occupied $\textit{xy}$.
T1_p_xy = [[1,0,1,1,0,0], [1,0,0,0,1,1]]
transition(AFM_down, T1_p_xy)
transition(AFM_up, T1_p_xy)
transition(FM, T1_p_xy)
And for the $|^2T_1,->$
T1_n_xy = [[0,1,1,1,0,0], [0,1,0,0,1,1]]
transition(AFM_down, T1_n_xy)
transition(AFM_up, T1_n_xy)
transition(FM, T1_n_xy)
In this case, there is no possible transition to states with a singly occupied $\textit{xy}$ orbital from the $\textit{xy}$ ordered ground state.
T1_p_xz = [[1,1,1,0,0,0], [0,0,1,0,1,1]]
transition(AFM_up, T1_p_xz)
transition(FM, T1_p_xz)

T1_p_yz = [[1,1,0,0,1,0], [0,0,1,1,1,0]]
transition(AFM_up, T1_p_yz)
transition(FM, T1_p_yz)
We can see that the transitions from the ferromagnetic state are forbidden for the $xy$ orbitally ordered ground state for both $|^2T_1, xz\uparrow>$ and $|^2T_1, yz\uparrow>$ while allowing for transitions with amplitudes: $t_{yz,xz}/\sqrt{2}$, $t_{xz,xz}/\sqrt{2}$, $t_{xz,yz}/\sqrt{2}$, and $t_{yz,yz}/\sqrt{2}$. For ...
T1_n_xz = [[1,1,0,1,0,0], [0,0,0,1,1,1]]
transition(AFM_up, T1_n_xz)
transition(FM, T1_n_xz)

T1_n_yz = [[1,1,0,0,0,1], [0,0,1,1,0,1]]
transition(AFM_up, T1_n_yz)
transition(FM, T1_n_yz)
$S=0$

Now we address this multiplet when considering the $S=0$ component in the ground state.
S0_1 = [1, 1, 1, 0, 0, 1]
S0_2 = [1, 1, 0, 1, 1, 0]
T1_p_xz = [[1,1,1,0,0,0], [0,0,1,0,1,1]]
T1_p_yz = [[1,1,0,0,1,0], [0,0,1,1,1,0]]
First, we calculate for the $d^4_0d^4_\uparrow$ ground state. Again, the $d^4_0$ state is split into two parts.
transition(S0_1 + d_zero_up, T1_p_xz)
transition(S0_2 + d_zero_up, T1_p_xz)
print("\n\n")
transition(S0_1 + d_zero_up, T1_p_yz)
transition(S0_2 + d_zero_up, T1_p_yz)
And for $d^4_0d^4_\downarrow$
transition(S0_1 + d_zero_down, T1_p_xz)
transition(S0_2 + d_zero_down, T1_p_xz)
print("\n\n")
transition(S0_1 + d_zero_down, T1_p_yz)
transition(S0_2 + d_zero_down, T1_p_yz)
Thus, for final states with singly occupied $\textit{xz}$ multiplet, we obtain transitions involving $t_{yz,xz}/2$, $t_{yz,yz}/2$, $t_{xz,xz}/2$ and $t_{xz,yz}/2$ when accounting for the prefactors of the states. For completeness, repeating for the cases $d^4_\uparrow d^4_0$ and $d^4_\downarrow d^4_0$
transition(d_zero_up + S0_1, T1_p_xz)
transition(d_zero_up + S0_2, T1_p_xz)
print("\n\n")
transition(d_zero_up + S0_1, T1_p_yz)
transition(d_zero_up + S0_2, T1_p_yz)
print("\n\n")
transition(d_zero_down + S0_1, T1_p_xz)
transition(d_zero_down + S0_2, T1_p_xz)
print("\n\n")
transition(d_zero_down + S0_1, T...
In this case, considering the prefactors of the states involved, we obtain contributions $t_{yz,xz}/2$, $t_{yz,yz}/2$, $t_{xz,xz}/2$, and $t_{xz,yz}/2$. And at last, $d^4_0d^4_0$
transition(S0_1 + S0_1, T1_p_xz)
transition(S0_1 + S0_2, T1_p_xz)
transition(S0_2 + S0_1, T1_p_xz)
transition(S0_2 + S0_2, T1_p_xz)
print("------------------------")
transition(S0_1 + S0_1, T1_p_yz)
transition(S0_1 + S0_2, T1_p_yz)
transition(S0_2 + S0_1, T1_p_yz)
transition(S0_2 + S0_2, T1_p_yz)
<b>Restart the kernel</b> after you do a pip install (click on the reload button above).
%%bash
pip freeze | grep -e 'flow\|beam'

import tensorflow as tf
import tensorflow_transform as tft
import shutil
print(tf.__version__)

# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'

import os
os.environ['BUCKET'] = BUCKET
os.environ...
courses/machine_learning/feateng/tftransform.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Input source: BigQuery

Get data from BigQuery, but defer filtering etc. to Beam. Note that the dayofweek column now contains strings.
from google.cloud import bigquery

def create_query(phase, EVERY_N):
    """
    phase: 1=train 2=valid
    """
    base_query = """
WITH daynames AS
  (SELECT ['Sun', 'Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat'] AS daysofweek)
SELECT
  (tolls_amount + fare_amount) AS fare_amount,
  daysofweek[ORDINAL(EXTRACT(DAYOFWEEK FROM pic...
Create ML dataset using tf.transform and Dataflow

Let's use Cloud Dataflow to read in the BigQuery data and write it out as CSV files. Along the way, let's use tf.transform to do scaling and transforming. Using tf.transform allows us to save the metadata to ensure that the appropriate transformations get carried out du...
%%writefile requirements.txt
tensorflow-transform==0.8.0
Test that transform_data is of type PCollection. Test whether the `_ =` assignment is necessary.
import datetime
import tensorflow as tf
import apache_beam as beam
import tensorflow_transform as tft
from tensorflow_transform.beam import impl as beam_impl

def is_valid(inputs):
    try:
        pickup_longitude = inputs['pickuplon']
        dropoff_longitude = inputs['dropofflon']
        pickup_latitude = inputs['pickuplat']
        ...
<h2> Train off preprocessed data </h2>
%%bash
rm -rf taxifare_tft.tar.gz taxi_trained
export PYTHONPATH=${PYTHONPATH}:$PWD/taxifare_tft
python -m trainer.task \
    --train_data_paths="gs://${BUCKET}/taxifare/preproc_tft/train*" \
    --eval_data_paths="gs://${BUCKET}/taxifare/preproc_tft/eval*" \
    --output_dir=./taxi_trained \
    --train_steps=10 --job-di...
If we don't indicate that we want a simple 2-level list of lists with lolviz(), we get a generic object graph:
objviz(table)

courses = [ ['msan501', 51], ['msan502', 32], ['msan692', 101] ]
mycourses = courses
print(id(mycourses), id(courses))
objviz(courses)
examples.ipynb
parrt/lolviz
bsd-3-clause
You can also display strings as arrays in isolation (but not in other data structures as I figured it's not that useful in most cases):
strviz('New York')

class Tree:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

root = Tree('parrt', Tree('mary', Tree('jim', Tree('srinivasan'), Tree('a...
If you'd like to save an image from jupyter, use render():
def f(x):
    thestack = callsviz(varnames=['table','x','tree','head','courses'])
    print(thestack.source[:100])  # show first 100 char of graphviz syntax
    thestack.render("/tmp/t")      # save as PDF
f(99)
Numpy viz
import numpy as np

A = np.array([[1,2,8,9],[3,4,22,1]])
objviz(A)

B = np.ones((100,100))
for i in range(100):
    for j in range(100):
        B[i,j] = i+j
B

matrixviz(A)
matrixviz(B)

A = np.array(np.arange(-5.0,5.0,2.1))
B = A.reshape(-1,1)
matrices = [A,B]

def f():
    w,h = 20,20
    C = np.ones((w,h), dtype...
Pandas dataframes, series
import pandas as pd

df = pd.DataFrame()
df["sqfeet"] = [750, 800, 850, 900, 950]
df["rent"] = [1160, 1200, 1280, 1450, 2000]
objviz(df)
objviz(df.rent)
examples.ipynb
parrt/lolviz
bsd-3-clause
Model 3: More sophisticated models. What if we try a more sophisticated model? Let's try deep neural networks (DNNs) in BigQuery. To create a DNN, simply specify dnn_regressor for the model_type and add your hidden layers.
%%bigquery -- This model type is in alpha, so it may not work for you yet. -- This training takes on the order of 15 minutes. CREATE OR REPLACE MODEL serverlessml.model3b_dnn OPTIONS(input_label_cols=['fare_amount'], model_type='dnn_regressor', hidden_units=[32, 8]) AS SELECT * FROM serverlessml.c...
courses/machine_learning/deepdive2/launching_into_ml/solutions/first_model.ipynb
turbomanage/training-data-analyst
apache-2.0
Behavioural parameters
gam_pars = { 'Control': dict(Freq=(2.8, 0.0, 1), Rare=(2.8, 0.75, 1)), 'Patient': dict(Freq=(3.0, 0.0, 1.2), Rare=(3.0, 1., 1.2))} subs_per_group = 20 n_trials = 1280 probs = dict(Rare=0.2, Freq=0.8) accuracy = dict(Control=dict(Freq=0.96, Rare=0.886), Patient=dict(Freq=0.945, Rare=0.847)) log...
src/Logfile_generator.ipynb
MadsJensen/intro_to_scientific_computing
bsd-3-clause
Plot RT distributions for sanity checking
fig, axs = plt.subplots(2, 1) # These are chosen empirically to generate sane RTs x_shift = 220 x_mult = 100 cols = dict(Patient='g', Control='r') lins = dict(Freq='-', Rare='--') # For plotting x = np.linspace(gamma.ppf(0.01, *gam_pars['Control']['Freq']), gamma.ppf(0.99, *gam_pars['Patient']['Rare']...
src/Logfile_generator.ipynb
MadsJensen/intro_to_scientific_computing
bsd-3-clause
Create logfile data
# calculate time in 100 us steps # 1-3 sec start delay start_time = np.random.randint(1e4, 3e4) # Modify ISI a little from paper: accommodate slightly longer tails # of the simulated distributions (up to about 1500 ms) ISI_ran = (1.5e4, 1.9e4) freq_stims = string.ascii_lowercase rare_stims = string.digits
src/Logfile_generator.ipynb
MadsJensen/intro_to_scientific_computing
bsd-3-clause
Create subject IDs
# ctrl_NUMs = list(np.random.randint(10, 60, size=2 * subs_per_group)) ctrl_NUMs = list(random.sample(range(10, 60), 2 * subs_per_group)) pat_NUMs = sorted(random.sample(ctrl_NUMs, subs_per_group)) ctrl_NUMs = sorted([c for c in ctrl_NUMs if c not in pat_NUMs]) IDs = dict(Control=['{:04d}_{:s}'.format(n, ''.join(rando...
src/Logfile_generator.ipynb
MadsJensen/intro_to_scientific_computing
bsd-3-clause
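random.sample draws without replacement, which is why the patient IDs form a strict subset of the initial pool. A small self-contained sketch (seeded for reproducibility; group sizes are illustrative):

```python
import random

random.seed(0)
subs_per_group = 20
pool = random.sample(range(10, 60), 2 * subs_per_group)   # 40 unique numbers
pat_NUMs = sorted(random.sample(pool, subs_per_group))    # 20 of them become patients
ctrl_NUMs = sorted(c for c in pool if c not in pat_NUMs)  # the rest are controls

assert len(set(pool)) == 40               # no duplicates: sampling w/o replacement
assert set(pat_NUMs) <= set(pool)
assert len(ctrl_NUMs) == subs_per_group
```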
Write subject ID codes to a CSV file
with open(os.path.join(logs_autogen, 'subj_codes.csv'), 'wt') as fp: csvw = csv.writer(fp, delimiter=';') for stype in IDs.keys(): for sid in IDs[stype]: csvw.writerow([sid, stype])
src/Logfile_generator.ipynb
MadsJensen/intro_to_scientific_computing
bsd-3-clause
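The csv.writer loop above can be checked end-to-end without touching the filesystem by writing into an in-memory buffer instead of a file; a minimal sketch with hypothetical subject codes:

```python
import csv
import io

IDs = {'Control': ['0012_ab'], 'Patient': ['0034_cd']}   # hypothetical codes

buf = io.StringIO()
csvw = csv.writer(buf, delimiter=';')
for stype in IDs:
    for sid in IDs[stype]:
        csvw.writerow([sid, stype])

# Read it back to confirm the id;group layout round-trips.
rows = list(csv.reader(io.StringIO(buf.getvalue()), delimiter=';'))
assert rows == [['0012_ab', 'Control'], ['0034_cd', 'Patient']]
```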
Function for generating individualised RTs
def indiv_RT(sub_type, cond): # globals: gam_pars, probs, n_trials, x_mult, x_shift return(gamma.rvs(*gam_pars[sub_type][cond], size=int(probs[cond] * n_trials)) * x_mult + x_shift)
src/Logfile_generator.ipynb
MadsJensen/intro_to_scientific_computing
bsd-3-clause
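indiv_RT scales and shifts gamma draws into a plausible millisecond range. A stdlib stand-in (random.gammavariate instead of scipy's gamma.rvs, with illustrative shape/scale values) shows the shift-and-scale logic:

```python
import random

random.seed(1)
X_MULT, X_SHIFT = 100, 220    # empirical scaling, as in the plotting cell

def indiv_rt(shape, scale, n):
    """Gamma-distributed reaction times, scaled to ms and shifted."""
    return [random.gammavariate(shape, scale) * X_MULT + X_SHIFT
            for _ in range(n)]

rts = indiv_rt(2.8, 1.0, 256)
assert len(rts) == 256
assert all(rt > X_SHIFT for rt in rts)   # gamma draws are strictly positive
```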
Write logfiles
# Write to empty logs dir if not os.path.exists(logs_autogen): os.makedirs(logs_autogen) for f in glob.glob(os.path.join(logs_autogen, '*.log')): os.remove(f) for stype in ['Control', 'Patient']: for sid in IDs[stype]: log_date = random_date(*logfile_date_range) log_fname = '{:s}_{:s}.log'....
src/Logfile_generator.ipynb
MadsJensen/intro_to_scientific_computing
bsd-3-clause
Notice this is a typical dataframe, possibly with more columns as strings than numbers. The text is contained in the column 'text'. Notice also that there are missing texts. For now, we will drop these texts so we can move forward with text analysis. In your own work, you should justify dropping missing texts when possible.
df = df.dropna(subset=["text"]) df ##Ex: Print the first text in the dataframe (starts with "A DOG WITH A BAD NAME"). ###Hint: Remind yourself about the syntax for slicing a dataframe
03-Pandas_and_DTM/00-PandasAndTextAnalysis.ipynb
lknelson/text-analysis-2017
bsd-3-clause
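dropna(subset=['text']) removes only rows whose 'text' is missing, leaving the other columns untouched. A minimal sketch with toy data (assuming pandas is installed):

```python
import pandas as pd

df = pd.DataFrame({
    'title': ['t1', 't2', 't3'],
    'text':  ['A DOG WITH A BAD NAME ...', None, 'another novel'],
})
clean = df.dropna(subset=['text'])

assert len(df) == 3                        # original is unchanged
assert len(clean) == 2                     # only the missing-text row is gone
assert list(clean['title']) == ['t1', 't3']
```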
<a id='stats'></a> 1. Descriptive Statistics and Visualization The first thing we probably want to do is describe our data, to make sure everything is in order. We can use the describe function for the numerical data, and the value_counts function for categorical data.
print(df.describe()) #get descriptive statistics for all numerical columns print() print(df['author gender'].value_counts()) #frequency counts for categorical data print() print(df['year'].value_counts()) #treat year as a categorical variable print() print(df['year'].mode()) #find the year in which the most novels were...
03-Pandas_and_DTM/00-PandasAndTextAnalysis.ipynb
lknelson/text-analysis-2017
bsd-3-clause
We can do a few things by just using the metadata already present. For example, we can use the groupby and the count() function to graph the number of books by male and female authors. This is similar to the value_counts() function, but allows us to plot the output.
#create a pandas object that is a groupby dataframe, grouped on author gender grouped_gender = df.groupby("author gender") print(grouped_gender['text'].count())
03-Pandas_and_DTM/00-PandasAndTextAnalysis.ipynb
lknelson/text-analysis-2017
bsd-3-clause
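groupby followed by count() tallies the non-null entries per group, which is what gets plotted next. A toy sketch of the same pattern (pandas assumed; the data is invented):

```python
import pandas as pd

df = pd.DataFrame({
    'author gender': ['female', 'male', 'female'],
    'text': ['novel a', 'novel b', 'novel c'],
})
grouped_gender = df.groupby('author gender')
counts = grouped_gender['text'].count()   # non-null texts per gender

assert counts['female'] == 2
assert counts['male'] == 1
```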
Let's graph the number of texts by gender of author.
grouped_gender['text'].count().plot(kind = 'bar') plt.show() #Ex: Create a variable called 'grouped_year' that groups the dataframe by year. ## Print the number of texts per year.
03-Pandas_and_DTM/00-PandasAndTextAnalysis.ipynb
lknelson/text-analysis-2017
bsd-3-clause
We can graph this via a line graph.
grouped_year['text'].count().plot(kind = 'line') plt.show()
03-Pandas_and_DTM/00-PandasAndTextAnalysis.ipynb
lknelson/text-analysis-2017
bsd-3-clause
Oops! That doesn't look right! matplotlib automatically converted the year tick labels to an offset (scientific-style) notation. We can set that option to False.
plt.ticklabel_format(useOffset=False) #forces Python to not convert numbers grouped_year['text'].count().plot(kind = 'line') plt.show()
03-Pandas_and_DTM/00-PandasAndTextAnalysis.ipynb
lknelson/text-analysis-2017
bsd-3-clause
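The offset behaviour can be reproduced and disabled headlessly; a minimal sketch using matplotlib's Agg backend (matplotlib assumed installed, years are illustrative):

```python
import matplotlib
matplotlib.use('Agg')                  # headless backend, no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([2000, 2001, 2002], [3, 5, 4])
ax.ticklabel_format(useOffset=False)   # keep years as 2000, 2001, ... not an offset

assert ax.xaxis.get_major_formatter().get_useOffset() is False
```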
We haven't done any text analysis yet. Let's apply some of our text analysis techniques to the text, add columns with the output, and analyze/visualize the output. <a id='str'></a> 2. The str attribute Luckily for us, pandas has an attribute called 'str' which allows us to access Python's built-in string functions. For...
df['text_lc'] = df['text'].str.lower() df ##Ex: create a new column, 'text_split', that contains the lower case text split into list. ####HINT: split on white space, don't tokenize it.
03-Pandas_and_DTM/00-PandasAndTextAnalysis.ipynb
lknelson/text-analysis-2017
bsd-3-clause
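The .str accessor applies a string method to every row at once, so lower-casing and whitespace-splitting a whole column is one line each. A toy sketch (pandas assumed; texts invented):

```python
import pandas as pd

df = pd.DataFrame({'text': ['A DOG WITH A BAD NAME', 'Two Words Here']})
df['text_lc'] = df['text'].str.lower()        # vectorised lower() over the column
df['text_split'] = df['text_lc'].str.split()  # split on whitespace, not tokenised

assert df['text_lc'][0] == 'a dog with a bad name'
assert df['text_split'][1] == ['two', 'words', 'here']
```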
<a id='apply'></a> 3. The apply function We can also apply a function to each row. To get a word count of a text file we would take the length of the split string like this: len(text_split) If we want to do this on every row in our dataframe, we can use the apply() function.
df['word_count'] = df['text_split'].apply(len) df
03-Pandas_and_DTM/00-PandasAndTextAnalysis.ipynb
lknelson/text-analysis-2017
bsd-3-clause
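apply(len) calls len() on each row's word list, giving a per-text word count that standard Series methods like mean() then summarise. A toy sketch (pandas assumed):

```python
import pandas as pd

df = pd.DataFrame({'text_split': [['a', 'dog'], ['two', 'words', 'here']]})
df['word_count'] = df['text_split'].apply(len)   # len() of each row's list

assert list(df['word_count']) == [2, 3]
assert df['word_count'].mean() == 2.5            # average length across texts
```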
What is the average length of each novel in our data? With pandas, this is easy!
df['word_count'].mean()
03-Pandas_and_DTM/00-PandasAndTextAnalysis.ipynb
lknelson/text-analysis-2017
bsd-3-clause