## Divide and Conquer

This is an algorithm design method based on *subdividing* the problem into sub-problems, solving them *recursively*, and then *combining* the solutions of the sub-problems to build the solution of the original problem. The sub-problems must have the same structure as the original problem, so that recursion can be applied.

## Example: Finding the maximum-sum subsequence

Given a sequence of integers

$$ a_{1}, a_{2}, \ldots, a_{n}$$

find and identify the maximum value of the sum of a consecutive portion of the sequence. When all the integers are negative, we take the maximum-sum subsequence to be the empty one, whose sum is zero.

Examples:

1. { -2, 11, -4, 13, -5, 2}
2. {1, -3, 4, -2, -1, 6}

If we examine the problem intuitively, for example with the help of diagrams, the maximum-sum subsequences for 1 and 2 are marked in bold:

1. { -2, **11, -4, 13**, -5, 2}, whose sum is 20.
2. {1, -3, **4, -2, -1, 6**}, whose sum is 7.

This intuitive solution uses on the order of $n^2$ comparisons: $\Theta(n^2)$.

## Using divide and conquer

Suppose now that the given sequence is {4, -3, 5, -2, -1, 2, 6, -2}. We split this sequence into two equal halves, as shown in the figure below. The maximum-sum subsequence can then appear in one of three forms:

* *Case 1*: it is entirely contained in the first half.
* *Case 2*: it is entirely contained in the second half.
* *Case 3*: it starts in the first half but ends in the second.

The figure shows that, for each element of the first half, we can compute the sum of the contiguous subsequence that ends at the rightmost element. We do this with a right-to-left pass, starting from the element located between the two halves. Analogously, we can compute the sum of every contiguous subsequence that starts with the first element of the second half. These two subsequences can then be combined to form the maximum-sum subsequence that crosses the dividing line. In the example of the figure, the resulting sequence runs from the first element of the first half to the next-to-last element of the second half. The total sum is the sum of the two subsequences, 4 + 7 = 11.

This shows that case 3 can be solved in linear time. For both case 1 and case 2 we have the same problem as the original, but on a sequence of size $n/2$, which is still $\Theta(n^2)$ (remember that constant factors are ignored). However, we can apply the same halving strategy to cases 1 and 2, and keep dividing until it is impossible to divide any further. More concretely, this amounts to solving cases 1 and 2 recursively. It can be shown that this brings the running time below quadratic, since the savings accumulate throughout the execution of the algorithm: the running time satisfies the recurrence $T(n) = 2\,T(n/2) + \Theta(n)$, which solves to $\Theta(n \log n)$.

An outline of the algorithm:

1. Recursively compute the maximum-sum subsequence entirely contained in the first half.
2. Recursively compute the maximum-sum subsequence entirely contained in the second half.
3. Using two consecutive loops, compute the maximum-sum subsequence that starts in the first half but ends in the second.
4. Choose the largest of the three sums.

The resulting method appears below. A recursive algorithm requires us to define a base case.
Naturally, when the input is a single element, we do not use recursion. The recursive call receives the input array together with the left and right bounds, which delimit the portion of the array being processed. A one-line driver routine initializes the bound parameters to 0 and N - 1.

```python
def max3(a, b, c):
    # Return the largest of the three values.
    best = float('-inf')
    for e in (a, b, c):
        if e > best:
            best = e
    return best


def maxSumaRec(a, izq, der):
    # A generic sub-problem works on the slice a[izq..der].
    #
    # Initial sub-problem for the example a = [4, -3, 5, -2, -1, 2, 6, -2]:
    #   a: [ ][ ][ ][ ] | [ ][ ][ ][ ]
    #      izq = 0, centro = 3, der = 7
    maxSumIzqBorde = 0; maxSumDerBorde = 0
    sumIzqBorde = 0; sumDerBorde = 0
    centro = int((izq + der) / 2)

    # BASE CASE: a single element.
    if izq == der:
        if a[izq] > 0:
            return a[izq]
        else:
            return 0

    # Recursively find the maximum sum in the left section
    maxSumIzq = maxSumaRec(a, izq, centro)
    # Recursively find the maximum sum in the right section
    maxSumDer = maxSumaRec(a, centro + 1, der)

    for i in range(centro, izq - 1, -1):  # at the top level: 3, 2, 1, 0 for the example
        sumIzqBorde += a[i]
        if sumIzqBorde > maxSumIzqBorde:
            maxSumIzqBorde = sumIzqBorde

    for j in range(centro + 1, der + 1):  # at the top level: 4, 5, 6, 7
        sumDerBorde += a[j]
        if sumDerBorde > maxSumDerBorde:
            maxSumDerBorde = sumDerBorde

    return max3(maxSumIzq, maxSumDer, maxSumIzqBorde + maxSumDerBorde)


a = [4, -3, 5, -2, -1, 2, 6, -2]
print(maxSumaRec(a, 0, len(a) - 1))

b = [-2, 11, -4, 13, -5, 2]
print(maxSumaRec(b, 0, len(b) - 1))

c = [1, -3, 4, -2, -1, 6]
print(maxSumaRec(c, 0, len(c) - 1))
```

    11
    20
    7

### Divide-and-conquer exercise

If we have two complex numbers

$$
\begin{align}
u&=a+bi\\
v&=c+di
\end{align}
$$

we can compute their product

$$
uv=(ac-bd)+(ad+bc)i
$$

using 4 multiplications of real numbers. Find a way to perform this computation using only 3 multiplications of real numbers.

## References

1. Weiss, M. A., & Marroquín, O. (2000). *Estructuras de datos en Java*. Addison-Wesley.
2. Lecture notes by Patricio Poblete (U. de Chile), available at: https://github.com/ivansipiran/AED-Apuntes#Algoritmos-y-Estructuras-de-Datos (visited May 2021)
# SLU11 - Tree-based models: Learning notebook

## Imports

```python
from math import log2

import pandas as pd
import numpy as np

from sklearn.datasets import load_boston
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import mean_squared_error
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

from IPython.display import Image

from utils.utils import (
    make_data,
    separate_target_variable,
    process_categorical_features,
    visualize_tree
)
```

# Table of Contents

1. [Decision trees](#decision-trees)
2. [Tree-based ensembles](#tree-based-ensembles)
3. [Bagging with random forests](#bagging-with-random-forests)
4. [Boosting with gradient boosting](#boosting-with-gradient-boosting)

# Decision trees <a class="anchor" id="decision-trees"></a>

A **decision tree** is a decision support tool with a tree-like structure that can be used for classification or regression (it is most often used for classification). Each node is associated with a test on a feature, each branch represents an outcome of the test, and each leaf represents the final decision.

To better explain how **decision trees** work, let's use an example in which we want to decide, based on the weather, whether we should go hiking. A simple **decision tree** for this can be represented as:

*Fig. 1: A simple decision-tree used for classification (Quinlan, 1986).*

This **decision tree** displays a flow of conditional statements (i.e., `if Condition then Outcome`) where:

* Each **node** represents a test on a feature (e.g., is it windy?)
* Each **branch** represents an outcome of the test (e.g., true or false)
* Each **leaf** represents an outcome or decision (represented with N and P).

The paths from root to leaf represent the rules, e.g., `if Outlook is Sunny and Humidity is Normal then Go Hiking`.

## Learning sets of rules as decision trees

Typically, a decision tree is built from the top down (known as top-down induction), and there are several algorithms that can be used to build one, including:

* **Iterative Dichotomiser 3 (ID3)**, for classification using categorical features (uses entropy and information gain as the metric)
* **C4.5**, the successor to ID3, which adds support for non-categorical features
* **Classification and Regression Trees (CART)**, which generalizes C4.5 to support regression (uses Gini impurity as the metric).

We follow ID3 ([Quinlan, 1986][1]), for binary classification using categorical features with a small set of possible values (i.e., low cardinality).

[1]: http://hunch.net/~coms-4771/quinlan.pdf "Quinlan, J. 1986. Induction of Decision Trees. Machine Learning 1: 81-106."

```python
data = make_data()
```

### Algorithm overview

Take an arbitrary training set $C = \{\textbf{x}_i, y_i\}_{i=1}^m$, a collection of labeled observations, with $y_i \in \{L_1, \dots, L_v\}$. The feature vector is represented as $\textbf{x}_i = \{x_i^{(1)}, \dots, x_i^{(n)}\}$, where $x_i^{(j)}$ is the value of the $j$-th feature for the $i$-th observation. In ID3, $x_i^{(j)}$ can take on one of a fixed number of possible values $x_i^{(j)} \in \{A_1^{(j)}, \dots, A_w^{(j)}\}$.
$$X = \begin{bmatrix} x_1^{(1)} & x_1^{(2)} & \dots & x_1^{(n)} \\ x_2^{(1)} & x_2^{(2)} & \dots & x_2^{(n)} \\ \dots & \dots & \dots & \dots \\ x_m^{(1)} & x_m^{(2)} & \dots & x_m^{(n)} \end{bmatrix}$$ ```python X, y = separate_target_variable(data) X ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>Outlook</th> <th>Temperature</th> <th>Humidity</th> <th>Windy</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>sunny</td> <td>hot</td> <td>high</td> <td>false</td> </tr> <tr> <th>1</th> <td>sunny</td> <td>hot</td> <td>high</td> <td>true</td> </tr> <tr> <th>2</th> <td>overcast</td> <td>hot</td> <td>high</td> <td>false</td> </tr> <tr> <th>3</th> <td>rain</td> <td>mild</td> <td>high</td> <td>false</td> </tr> <tr> <th>4</th> <td>rain</td> <td>cool</td> <td>normal</td> <td>false</td> </tr> <tr> <th>5</th> <td>rain</td> <td>cool</td> <td>normal</td> <td>false</td> </tr> <tr> <th>6</th> <td>overcast</td> <td>cool</td> <td>normal</td> <td>false</td> </tr> <tr> <th>7</th> <td>sunny</td> <td>mild</td> <td>high</td> <td>false</td> </tr> <tr> <th>8</th> <td>sunny</td> <td>cool</td> <td>normal</td> <td>false</td> </tr> <tr> <th>9</th> <td>rain</td> <td>mild</td> <td>normal</td> <td>false</td> </tr> <tr> <th>10</th> <td>sunny</td> <td>mild</td> <td>normal</td> <td>true</td> </tr> <tr> <th>11</th> <td>overcast</td> <td>mild</td> <td>high</td> <td>true</td> </tr> <tr> <th>12</th> <td>overcast</td> <td>hot</td> <td>normal</td> <td>false</td> </tr> <tr> <th>13</th> <td>rain</td> <td>mild</td> <td>high</td> <td>true</td> </tr> </tbody> </table> </div> This is a binary classification problem, where $y_i \in \{0, 1\}.$ $$y = \begin{bmatrix} y_1 \\ y_2 \\ \dots \\ y_3 \end{bmatrix}$$ ```python y ``` 0 0 1 0 2 1 3 1 4 1 5 0 6 1 7 0 8 1 9 1 10 1 11 1 12 1 13 0 Name: Class, dtype: int64 Now, imagine a test, $T$, on $x_i$, with possible outcomes $O_1, O_2, \dots, O_w$. It can be a condition on the features, for example. $T$ produces a partition $\{C_1, C_2, \dots, C_w\}$ of $C$, where $C_k$ contains those observations having outcome $O_k$. *Fig. 2: Partitioning of the objects in $C$ with a test $T$ (Quinlan, 1986).* We can recursively apply different tests to each of these partitions, creating smaller and smaller sub-partitions, until we get homogenous (i.e., single-class) leaves. The result of this process is a decision tree for all of $C$. This tree-building algorithm can be described in the following way: ``` ID3 (Data, Target, Attributes) If all examples are positive, Return the single-node tree Root, with label = 1. If all examples are negative, Return the single-node tree Root, with label = 0. Otherwise Begin A <- Pick the Attribute that best classifies examples. Decision tree for Root = A. For each possible value, O_i, of A, Add a new tree branch, corresponding to the test A = O_i. Let Data(v_i) be the subset of examples that have the value O_i for A. Below this new branch add the subtree ID3 (Data(O_i), Target, Attributes – {A}). End Return Root ``` But what is the **attribute that best classifies examples** at each step? As we will soon see, this is related with the concepts of **information gain** and **entropy**. 
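To make the ID3 pseudocode above more concrete, here is a minimal, hedged Python sketch of the same recursion. It is not the implementation used later in this notebook: `best_attribute` is a hypothetical placeholder for the information-gain-based selection developed in the next section, and the tree is returned as nested dictionaries.

```python
def id3(data, target, attributes):
    # All remaining examples share one label: return a leaf with that label.
    if data[target].nunique() == 1:
        return data[target].iloc[0]
    # No attributes left to test: return the majority class as a leaf.
    if not attributes:
        return data[target].mode()[0]
    # best_attribute is a placeholder for "pick the attribute that best
    # classifies examples" (information gain, defined in the next section).
    best = best_attribute(data, attributes)
    tree = {best: {}}
    for value in data[best].unique():
        subset = data[data[best] == value]
        remaining = [a for a in attributes if a != best]
        tree[best][value] = id3(subset, target, remaining)
    return tree
```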
## Attribute selection A test will have a high **information gain** if it generates partitions with low [entropy](https://en.wikipedia.org/wiki/Entropy_(information_theory)). In general: the greater the homogeneity of a partition, the lower its entropy. In this case, a partition is said to be very homogeneous if the majority of its elements belong to a single class. Consider the problem of binary classification (it can be extended to support multi-class settings). Take $p$ as the probability of the positive class, i.e., the proportion of positive cases in the set. In this case, the entropy is given by: $$I(p) = - p \log_2 p - (1 - p) \log_2 (1-p)$$ It ranges between 0 and 1: * 0, when $p=1$ or $p=0$. We are in a situation of low entropy/high purity, because there is no "surprise" in the outcome. We say the information is minimum, because observing the class of an instance will not give us any new information - we already know what it will be. * 1, when $p=\frac{1}{2}$. We are in a situation of high entropy/low purity, because the "uncertainty" of the outcome is maximum. This is equivalent to saying that the information is maximum. Out of curiosity, entropy is measured in **bits** (hence the $log_2$ in the formula, since it tells you the number of digits you would need to represent a number in base 2). ```python def entropy(p): if 0 < p < 1: return -p * log2(p) - (1 - p) * log2(1 - p) else: return 0 ``` Let's compute the entropy for the entire data set: ```python def compute_probability(data): n = data.shape[0] f = (data['Class'] == 1).sum() return f / n p = compute_probability(data) ``` ```python entropy(p) ``` 0.9402859586706311 We have a fairly high entropy for the entire data set. This means that substantial proportions of the data set belong to each of the classes, as confirmed by the value of $p$ - the data is not homogenous. ```python p ``` 0.6428571428571429 Starting from a set with high entropy, we would like to select a test that divides it into partitions that are as homogeneous as possible. In the ideal case, each partition would be a leaf node, containing only instances from a single class. A test is restricted to branching on an attribute $A$ with values $\{A_1, A_2, \dots, A_v\}$, thus partitioning $C$ into $\{C_1, C_2, \dots, C_v\}$. The expected entropy of the test is obtained as a weighted average of the entropy of the resulting groupings: $$E(A) = \sum_{i=1}^v \frac{\|C_i\|}{\|C\|}I(p_i)$$ In short, and as expected, we are measuring an attribute's ability to generate homogeneous groupings. Let's calculate the expected information of the `Outlook` feature: ```python def mean_entropy(data, attribute): c_norm = data.shape[0] information = 0 values = data[attribute].unique() for value in values: group = data[data[attribute] == value] c_i_norm = group.shape[0] w = c_i_norm / c_norm p = compute_probability(group) e = entropy(p) information += w * e return information mean_entropy(data, 'Outlook') ``` 0.6935361388961918 Given this result, the `Outlook` feature yields partitions that have, on average, a much lower entropy than the entire data set. It is a good candidate for the first test (first node that is the root!) We call this loss of entropy the **information gain** of branching on an attribute $A$, and it is given by: $$IG(A) = I(p) - E(A)$$ where $I(p)$ is the entropy on the node over which the test is being performed, and $E(A)$ is the expected entropy of the test. Let's compute it. 
```python def information_gain(data, attribute): p = compute_probability(data) i = entropy(p) e = mean_entropy(data, attribute) return i - e information_gain(data, 'Outlook') ``` 0.24674981977443933 To generate the simplest possible tree, it is a good choice to branch, at each step, on the attribute with the highest information gain. Let's find out which: ```python def examine_candidate_attributes(data, attributes): return { attribute:information_gain(data, attribute) for attribute in attributes } attributes = [c for c in data.columns if c is not 'Class'] examine_candidate_attributes(data, attributes) ``` {'Outlook': 0.24674981977443933, 'Temperature': 0.02922256565895487, 'Humidity': 0.15183550136234159, 'Windy': 0.02507817350585062} So, we would start by branching on the values of the `Outlook` attribute. ```python outlook_values = data['Outlook'].unique() outlook_values ``` array(['sunny', 'overcast', 'rain'], dtype=object) We repeat the process in each partition generated by the `Outlook` attribute: ```python for i, value in enumerate(outlook_values): partition = data[data['Outlook'] == value] results = examine_candidate_attributes(partition, attributes) print("\n---\n") print("Partition: Outlook == {}. \nResults:\n{}.".format(value, results)) ``` --- Partition: Outlook == sunny. Results: {'Outlook': 0.0, 'Temperature': 0.5709505944546686, 'Humidity': 0.9709505944546686, 'Windy': 0.01997309402197489}. --- Partition: Outlook == overcast. Results: {'Outlook': 0.0, 'Temperature': 0.0, 'Humidity': 0.0, 'Windy': 0.0}. --- Partition: Outlook == rain. Results: {'Outlook': 0.0, 'Temperature': 0.01997309402197489, 'Humidity': 0.01997309402197489, 'Windy': 0.3219280948873623}. Interestingly enough, none of the attributes provides any information gain in the `Outlook == overcast` case. Let's try to understand why: ```python data[data['Outlook'] == 'overcast'] ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>Outlook</th> <th>Temperature</th> <th>Humidity</th> <th>Windy</th> <th>Class</th> </tr> </thead> <tbody> <tr> <th>2</th> <td>overcast</td> <td>hot</td> <td>high</td> <td>false</td> <td>1</td> </tr> <tr> <th>6</th> <td>overcast</td> <td>cool</td> <td>normal</td> <td>false</td> <td>1</td> </tr> <tr> <th>11</th> <td>overcast</td> <td>mild</td> <td>high</td> <td>true</td> <td>1</td> </tr> <tr> <th>12</th> <td>overcast</td> <td>hot</td> <td>normal</td> <td>false</td> <td>1</td> </tr> </tbody> </table> </div> As it turns out, this grouping is already homogeneous (all instances are positive) - it is already a leaf. We figured out our first rule: ``` if Outlook is Overcast then Go Hiking ``` Finally, we would continue partitioning recursively until all the resulting groupings are homogeneous (we obtain pure leaves), or until we have exhausted our features (in which case, we assign to the final leaves the majority class in their partition). ## Using decision trees The `sklearn` implementation uses an optimized version of the CART algorithm. This means that the resulting decision tree might be different than if you had used ID3. We won't go into this in detail, but don't worry - the principles behind it are very similar to what you've learned. 
```python X_ = process_categorical_features(X) clf = DecisionTreeClassifier(criterion='entropy', random_state=101) clf.fit(X_, y) ``` DecisionTreeClassifier(criterion='entropy', random_state=101) Above, we do a trick to convert categorical variables into something more `sklearn`-friendly. (ignore it for now - you will learn all about it in due time.) Then, we train a `DecisionTreeClassifier` (refer to the [docs](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html) for more information). Let's visualize the resulting tree: ```python class_names = ["Don't go hiking!", "Go hiking!"] t = visualize_tree(clf, X_.columns, class_names) Image(t) ``` On each node, we can see: * two different paths, indicated by each result of the test `(feature_value <= 0.5)`, applied to the current node. `True` means **all instances in this node for which `feature` IS NOT equal to `value`**. The left path is always True, and the right path is always False. * the entropy in that node. * the `value` array, which indicates how many samples reaching that node belong to each class (the classes are in ascending numerical order) * the name of the majority class in the node (in case of a tie, the first class in numerical order is taken) The CART algorithm can also be used for regression problems. In that case, you should use the `DecisionTreeRegressor` (refer to the [docs](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html)). To visualize a simple example, check out [this page](https://scikit-learn.org/stable/auto_examples/tree/plot_tree_regression.html). ## Feature importance A great characteristic of decision trees is that they allow us to compute **feature importance**: a score for each feature, based on how useful they are at predicting the target variable. The importance of a feature is given by the decrease in node impurity (according to the criterion being used - in this case, the entropy), weighted by the proportion of samples reaching the node. We can calculate the feature importance in the following way: ```python feature_importances = pd.Series(data=clf.feature_importances_, index=X_.columns) feature_importances.sort_values(ascending=False) ``` Outlook_overcast 0.283411 Humidity_high 0.249079 Outlook_sunny 0.211799 Windy_true 0.179147 Temperature_mild 0.076563 Outlook_rain 0.000000 Temperature_cool 0.000000 Temperature_hot 0.000000 Humidity_normal 0.000000 Windy_false 0.000000 dtype: float64 By inspecting the visual representation of the tree, we can see that the `Windy_true` attribute has the highest information gain (from 1 to 0 on both leaves it generate), and therefore the highest decrease in node impurity. However, since not many samples reach that node (only 2 in the training set), this is not the most important feature, since it is not as relevant to predicting the class in the overall dataset. Compare it with the feature with the highest importance, `Outlook_overcast`, placed right at the root of the tree: despite not having as high of an information gain, it immediately allows us to classify 4 samples as "Go Hiking". 
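As a rough cross-check of the numbers above, the impurity-based importances can be recomputed by hand from the fitted tree. The sketch below approximates what `sklearn` does internally and assumes the `clf` and `X_` objects from the previous cells: for every split, accumulate the entropy decrease weighted by the fraction of samples reaching that node, then normalize.

```python
import numpy as np

tree = clf.tree_
total = tree.weighted_n_node_samples[0]
importances = np.zeros(X_.shape[1])

for node in range(tree.node_count):
    left, right = tree.children_left[node], tree.children_right[node]
    if left == -1:  # leaf node: no split, nothing to attribute
        continue
    w = tree.weighted_n_node_samples[node]
    w_l = tree.weighted_n_node_samples[left]
    w_r = tree.weighted_n_node_samples[right]
    # Weighted decrease in impurity produced by this split.
    decrease = (w * tree.impurity[node]
                - w_l * tree.impurity[left]
                - w_r * tree.impurity[right]) / total
    importances[tree.feature[node]] += decrease

importances /= importances.sum()  # normalize so the importances sum to 1
pd.Series(importances, index=X_.columns).sort_values(ascending=False)
```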
## Pros and cons

### Pros

Decision trees are a straightforward way to represent rules. They:

* are simple to understand, interpret and visualize
* require little to no data preparation (for example, they don't require data scaling or normalization)
* are able to handle numerical and categorical variables
* are not greatly affected by missing values in the data when building the tree
* are a white-box model: all decisions are replicable and easily explainable.

### Cons

Decision trees are extremely flexible and prone to growing over-complex trees. In particular:

* small changes in the data can cause a large change in the structure of the decision tree, causing instability
* training usually takes longer (and becomes more expensive as complexity and training time increase)
* Overfitting
* Overfitting
* Overfitting. (Repetition makes perfect.)

Mechanisms such as pruning (removing sections of the tree) and setting the maximum depth of the tree help with overfitting. Let's see how we can configure the maximum tree depth with `sklearn`:

```python
clf = DecisionTreeClassifier(
    criterion='entropy',
    max_depth=2,
    random_state=101
)
clf.fit(X_, y)
```

    DecisionTreeClassifier(criterion='entropy', max_depth=2, random_state=101)

```python
t = visualize_tree(clf, X_.columns, class_names)
Image(t)
```

We can also set the minimum number of samples required to split a node, in order to avoid fully-grown trees.

```python
clf = DecisionTreeClassifier(
    criterion='entropy',
    min_samples_split=5,
    random_state=101
)
clf.fit(X_, y)
```

    DecisionTreeClassifier(criterion='entropy', min_samples_split=5, random_state=101)

```python
t = visualize_tree(clf, X_.columns, class_names)
Image(t)
```

These all help with controlling overfitting, but they might not be enough; in fact, sometimes this approach is too heavy-handed and might lead to underfitting. We will need to find a smarter way to control overfitting, while still allowing our trees to represent complex rules.

Some final notes:

* if the attributes are adequate, it is always possible to construct a decision tree that correctly classifies every training instance
* attributes are **inadequate** if the data contains two objects that have identical values for each attribute and yet belong to different classes
* the key takeaway: **decision trees overfit like hell**.

## Tree-based ensembles <a class="anchor" id="tree-based-ensembles"></a>

**Ensemble methods** combine the predictions of several models, known as **base learners** or **base estimators**. Each individual estimator in the ensemble is often a "weak learner" (i.e., only slightly more accurate than random guessing). However, by combining the predictions of the entire ensemble, we often get better results that are less prone to overfitting, compared to using a single, larger model.

*Fig. 3: A simple ensemble model, using many trained models to generate a single prediction.*

There are homogeneous and heterogeneous ensembles, depending on whether the base learners are all of the same type or not. We focus on homogeneous ensembles of decision trees, in particular:

* Building several independent trees and then averaging their predictions, so that variance is reduced (i.e., **bagging**)
* Building trees sequentially so as to reduce the bias of the combined estimator (i.e., **boosting**).

# Bagging with random forests <a class="anchor" id="bagging-with-random-forests"></a>

Bootstrap aggregating, also known as bagging, consists of:

1. Creating several independent data sets
2. Training a model on each data set
3.
Aggregating individual predictions. Bagging can be seen as training several independent models in parallel and averaging the predictions. Imagine the following example: you are experiencing a strange feeling in your nose, and want to get a diagnosis. You have the option of going to a single, extremely-specialized podiatrist (that's a foot doctor, in case you didn't know, and in this analogy, it represents a single large model), or you could get your diagnosis from an association of 500 3rd year university students of medicine (which in this analogy represent a bag of simpler, shallower models). Now, the podiatrist is very specialized: one might even say that he might have **overfit** to foot diseases. He might be able to extend his knowledge to your particular affliction and give a good diagnosis, but odds are he will diagnose you with something totally unrelated. The group of 3rd year medicine students, on the other hand, might lack individual expertise; but if each of them has at least attended *some* classes, and *if they haven't all attended the same classes*, it's very likely that their collective knowledge surpasses the podiatrist's, and that their majority vote will be the right diagnosis. This example illustrates the main strength of bagging, which is that several weak models can cancel each other's weaknesses, provided that they are highly independent and were exposed to different information. But how does this relate to decision trees, you might ask? Well, decision-trees don't simply overfit, they are also highly unstable: small variations in the data result in wildly different trees. This makes them particularly suited for bagging. ## Bagging Suppose you have a sequence of data sets $\{D_1, D_2, \dots, D_k\}$, with observations from the same underlying distribution $\mathcal{D}$. We can obtain an **ensemble of models**, $\{h_1, h_2, \dots, h_k\}$, by training a model in each data set. *Fig. 4: In bagging different data sets are generated from the main one, models are trained in parallel and predictions averaged.* By running the models in parallel, we get $\{\hat{y}_1\, \hat{y}_2\, \dots, \hat{y}_k\}$, a list of predictions from our ensemble. To obtain a single prediction we can then evaluate all models and aggregate the results by: * Averaging the $k$ predictions (regression) * Using majority voting to predict a class (classification). More often than not, however, we don't have multiple data sets. In this situation, we can do bootstrapping. ## Bootstrapping We can take repeated random samples with replacement $\{C^1, C^2, \dots, C^b\}$ from $C$, our full dataset: ```python def make_bootstrap_data(data, b): n = data.shape[0] return [data.sample(n=n, replace=True) for i in range(b)] bootstrap_data = make_bootstrap_data(data, 2) ``` Typically, the bootstrapped data sets are the same size as the original data set. 
```python bootstrap_data[0] ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>Outlook</th> <th>Temperature</th> <th>Humidity</th> <th>Windy</th> <th>Class</th> </tr> </thead> <tbody> <tr> <th>7</th> <td>sunny</td> <td>mild</td> <td>high</td> <td>false</td> <td>0</td> </tr> <tr> <th>8</th> <td>sunny</td> <td>cool</td> <td>normal</td> <td>false</td> <td>1</td> </tr> <tr> <th>4</th> <td>rain</td> <td>cool</td> <td>normal</td> <td>false</td> <td>1</td> </tr> <tr> <th>12</th> <td>overcast</td> <td>hot</td> <td>normal</td> <td>false</td> <td>1</td> </tr> <tr> <th>12</th> <td>overcast</td> <td>hot</td> <td>normal</td> <td>false</td> <td>1</td> </tr> <tr> <th>12</th> <td>overcast</td> <td>hot</td> <td>normal</td> <td>false</td> <td>1</td> </tr> <tr> <th>10</th> <td>sunny</td> <td>mild</td> <td>normal</td> <td>true</td> <td>1</td> </tr> <tr> <th>12</th> <td>overcast</td> <td>hot</td> <td>normal</td> <td>false</td> <td>1</td> </tr> <tr> <th>0</th> <td>sunny</td> <td>hot</td> <td>high</td> <td>false</td> <td>0</td> </tr> <tr> <th>6</th> <td>overcast</td> <td>cool</td> <td>normal</td> <td>false</td> <td>1</td> </tr> <tr> <th>4</th> <td>rain</td> <td>cool</td> <td>normal</td> <td>false</td> <td>1</td> </tr> <tr> <th>0</th> <td>sunny</td> <td>hot</td> <td>high</td> <td>false</td> <td>0</td> </tr> <tr> <th>2</th> <td>overcast</td> <td>hot</td> <td>high</td> <td>false</td> <td>1</td> </tr> <tr> <th>5</th> <td>rain</td> <td>cool</td> <td>normal</td> <td>false</td> <td>0</td> </tr> </tbody> </table> </div> ```python bootstrap_data[1] ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>Outlook</th> <th>Temperature</th> <th>Humidity</th> <th>Windy</th> <th>Class</th> </tr> </thead> <tbody> <tr> <th>2</th> <td>overcast</td> <td>hot</td> <td>high</td> <td>false</td> <td>1</td> </tr> <tr> <th>12</th> <td>overcast</td> <td>hot</td> <td>normal</td> <td>false</td> <td>1</td> </tr> <tr> <th>3</th> <td>rain</td> <td>mild</td> <td>high</td> <td>false</td> <td>1</td> </tr> <tr> <th>9</th> <td>rain</td> <td>mild</td> <td>normal</td> <td>false</td> <td>1</td> </tr> <tr> <th>9</th> <td>rain</td> <td>mild</td> <td>normal</td> <td>false</td> <td>1</td> </tr> <tr> <th>2</th> <td>overcast</td> <td>hot</td> <td>high</td> <td>false</td> <td>1</td> </tr> <tr> <th>4</th> <td>rain</td> <td>cool</td> <td>normal</td> <td>false</td> <td>1</td> </tr> <tr> <th>8</th> <td>sunny</td> <td>cool</td> <td>normal</td> <td>false</td> <td>1</td> </tr> <tr> <th>11</th> <td>overcast</td> <td>mild</td> <td>high</td> <td>true</td> <td>1</td> </tr> <tr> <th>1</th> <td>sunny</td> <td>hot</td> <td>high</td> <td>true</td> <td>0</td> </tr> <tr> <th>4</th> <td>rain</td> <td>cool</td> <td>normal</td> <td>false</td> <td>1</td> </tr> <tr> <th>3</th> <td>rain</td> <td>mild</td> <td>high</td> <td>false</td> <td>1</td> </tr> <tr> <th>10</th> <td>sunny</td> <td>mild</td> <td>normal</td> <td>true</td> <td>1</td> </tr> <tr> <th>7</th> <td>sunny</td> <td>mild</td> <td>high</td> <td>false</td> <td>0</td> </tr> </tbody> </table> </div> We would now train a different decision tree in each data set and use voting to 
predict whether or not to go hiking. Bagging works very well in controlling overfitting and the generalization error in unstable models. If one of the models in the bag overfits to a certain noisy observation, it's very likely that its vote will be drowned out by the rest of the models in the ensemble. ## Random forests A **random forest** is an ensemble learning method created by bagging multiple decision trees. In random forests, bagging is used in tandem with random feature selection: * Datasets are generated from the original data, using bootstrapping (row sampling) * Then a tree is grown on each bootstrapped data set using random feature selection (column sampling). Random feature selection means that only a random subset of the features is available at each split. Randomizing features acts as a kind of regularization, further mitigating overfitting, because it forces each individual classifier to be as good as possible, while having access to limited information. This increases diversity inside the ensemble, which is often very beneficial. Fortunately, `sklearn` implements all this intricate logic for us. ```python rf = RandomForestClassifier( n_estimators=10, criterion='entropy', max_features=2, bootstrap=True ) rf.fit(X_, y) ``` RandomForestClassifier(criterion='entropy', max_features=2, n_estimators=10) For detailed information about the models, check the documentation: * RandomForestClassifier ([docs](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html)) * RandomForstsRegressor ([docs](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html)). Bagging is a great technique to reduce overfitting, and as such, to reduce the variance of our final predictions. This comes at the cost of interpretability - it is much easier to interpret the rules contained within a single decision tree, than it is to analyze the driving forces in a majority vote of hundreds or even thousands of models. # Boosting with gradient boosting <a class="anchor" id="boosting-with-gradient-boosting"></a> **Boosting** is a different ensemble learning technique. In this case, instead of training several base learners in parallel and averaging their predictions, we train them sequentially; and the input of each model is the residual error of the previous model. The general idea is to build strong ensembles by combining base learners sequentially, each of them correcting previous errors (and thus reducing bias). **Note:** this section can be pretty intimidating at first, but don't despair! It will be worth it in the end. ## Boosting Trees are grown sequentially, and each tree is created using information derived from the previous tree. In each iteration: 1. Errors and misclassifications of the past model are given increased weight in a new training data set 2. The current model is trained on the new training set, fitting the residual errors of the previous model. As such, each model specializes in correcting past mistakes and misclassifications. Intuitively, this produces an ensemble of models that are good in different "parts" of the training data, as they sequentially correct each other's mistakes. The final prediction is obtained by summing the predictions of each model in the ensemble. Boosting is less robust than bagging against overfitting, and as such, it is recommended to control the number of estimators used, the strength of each individual estimator, the learning rate, and use other regularization techniques. 
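As a hedged illustration of those controls (the parameter values below are arbitrary choices, not recommendations), the corresponding knobs in `sklearn`'s gradient boosting estimators look like this:

```python
from sklearn.ensemble import GradientBoostingRegressor

# A deliberately constrained boosting ensemble: few, shallow trees,
# a small learning rate (shrinkage) and row subsampling all act as regularizers.
gb_reg = GradientBoostingRegressor(
    n_estimators=100,    # number of boosting stages
    max_depth=2,         # strength of each individual estimator
    learning_rate=0.05,  # contribution of each new tree
    subsample=0.8,       # fraction of samples used per stage
)
```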
In this example we will use the [boston](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) house-prices data set (regression). ```python def prepare_boston(): boston = load_boston() X = pd.DataFrame(data=boston.data, columns=boston.feature_names) y = pd.Series(data=boston.target, name='price') return X, y X_boston, y_boston = prepare_boston() ``` ## What do we mean by "gradient" boosting To understand the meaning of **gradient boosting**, we need to review a couple of concepts. ### Loss function A loss function quantifies how bad our predictions are. An example is the squared error, seen below: $$L_i = (y_i - h(x_i))^2$$ As we can see, the square error increases with the square of the difference between each of our predictions, and the real value. The worse this error across all data samples, the worse will the overall model performance be. The training of an individual machine learning model is driven by the minimization the total loss, over all observations in the training set, with respect to the model's parameters $\theta$: $$\min_{\theta} J = \sum_{i=1}^N L(y_i, h(x_i, \theta))$$ $h(x_i, \theta)$ represents our model's predictions, or $\hat{y}$. ### Gradient descent The gradient of a loss function ($\nabla$), at each point, is its inclination or slope. It's the fancy term for the multi-variable generalization of the derivative, i.e., the partial derivatives of function $f$ at a given point. When training a model by gradient descent, we calculate, at each training step, the derivative of the loss $L$ with respect to the model's parameters, and then update the parameters by giving a small step in the opposite direction of this gradient - the direction in which the loss is decreasing: $$\theta^{n+1} = \theta^{n} - \eta \nabla_{\theta} L(y, \hat{y})$$ As such, at each step, we are trying to gradually move our model to a location in parameter space where the loss function has a lower value. ## Gradient boosting in detail We now have all pieces required to understand gradient boosting. Let's consider the case of a numerical target variable. We have our first model in the boosting ensemble, $F^{(1)}(x, \theta)$. This model achieves a certain loss, given by $L(\textbf{y}, F^{(1)}(x, \theta))$. We now want to add a second model, $h$, to improve our first model in a way that $$ F^{(1)}(x_1) + h(x_1) = y_1 \\ F^{(1)}(x_2) + h(x_2) = y_2 \\ ... $$ or equivalently, $$ h(x_1) = y_1 - F^{(1)}(x_1) \\ h(x_2) = y_2 - F^{(1)}(x_2) \\ ... $$ Maybe there isn't a single model that can achieve this. But perhaps some regression tree can do it approximately. We can try to fit it to the following data points: $$ (x_1, y_1 - F^{(1)}(x_1)) \\ (x_2, y_2 - F^{(1)}(x_2)) \\ ... $$ $y_i - F^{(1)}(x_i)$ are called **residuals**. The role of $h$ is to be strong where the initial model was weak. And if the new model $F^{(2)} = F^{(1)} + h$ still isn't good enough, we can fit an additional regression tree to its residuals, and so on and so forth. Taking a step back to our first model, imagine we are using the squared error loss function: $$ L(\textbf{y},F_1(x)) = (\textbf{y} - F^{(1)}(x))^2 $$ We want to minimize $J = \sum_i{L(y_i, F^{(1)}(x_i))}$ by adjusting $F^{(1)}(x_1)$, $F^{(1)}(x_2)$, ... Now, what we are truly adjusting during learning are the parameters $\theta$ of $F^{(1)}$. 
But if we take a functional perspective, we can observe that $F^{(1)}(x_i)$ are simply numbers - we can treat them as parameters and take derivatives: $$ \begin{align} \frac{\partial J}{\partial F^{(1)}(x_i)} & = \frac{\partial \sum_{j=0}^{n}{L(y_j, F^{(1)}(x_j))}}{\partial F^{(1)}(x_i)} \\ & = \frac{\partial L(y_i, F^{(1)}(x_i))}{\partial F^{(1)}(x_i)} \\ & = -2 (y_i - F^{(1)}(x_i)) \\ & = -2*residuals \end{align} $$ As you may have noticed, when the loss function is the squared error, the residuals are proportional to the negative gradients of the loss function! Thus arises the similarity with gradient descent: $$ \begin{align} F^{(2)}(x_i) & = F^{(1)}(x_i) + h(x_i) \\ & = F^{(1)}(x_i) + y_i - F^{(1)}(x_i) \\ & = F^{(1)}(x_i) - \frac{1}{2}\frac{\partial J}{\partial F^{(1)}(x_i)} \\ \end{align} $$ and $$ \theta^{n+1} = \theta^{n} - \eta \frac{\partial J}{\partial F(\theta_i)} $$ To summarize: for regression with **square loss**, * the residual <=> negative gradient * fitting h to the residual <=> fitting h to the negative gradient * adding a new estimator based on the residual <=> adding a new estimator based on the negative gradient When adding a new estimator to our boosting ensemble, we are actually minimizing a global loss function by using gradient descent, in function space! The final thing to understand here is that the concept of gradients is more useful than the concept of residuals, because it allows us to generalize to loss functions other than the squared error. As long as we fit each additional model to the negative gradients of the current global model, we will be minimizing our desired global loss function. ## In practice Let's implement one step of a boosting algorithm, for when the loss function is the mean squared error, based on what we have discussed so far! ### Initialization We initialize all $F^{(1)}(x_i)$ to a sensible constant, $F^{(1)}(x_i) = \gamma$. Since our goal is to minimize the mean squared error, we will use the mean value of the target variable in the training set. ```python y_boston_h0_pred = np.repeat(y_boston.mean(), y_boston.size) ``` We compute the mean squared error of this initial set of predictions. ```python mean_squared_error(y_boston, y_boston_h0_pred) ``` 84.41955615616556 ### Iterations #### Generating a new data set We build a new training set at the start of each iteration, by recomputing the target variable to be the gradient of the previous model. More concretely, at the $j$-th iteration we compute a new target variable $r^{(j)}_i$, corresponding to the gradient of the past iteration: $$r^{(j)}_{i} = \frac{\partial L(y_i, F^{(j-1)}(x_i))}{\partial F^{(j-1)}(x_i)} = -2(r^{(j-1)}_{i} - F^{(j-1)}(x_i))$$ For the first iteration: * $r^{(j-1)}_i = y_i$, the original targets * $F^{(j-1)}(x_i)) = F^{(1)}(x_i) = \gamma$ ```python def compute_gradient(y, y_pred): return -2 * (y - y_pred) y_boston_h0_residual = compute_gradient(y_boston, y_boston_h0_pred) ``` #### Training a decision-tree We now fit a new decision-tree to the negative gradient that we just calculated, $r^{(j)}_i$: ```python dt = DecisionTreeRegressor(max_depth=1) dt.fit(X_boston, y_boston_h0_residual) y_boston_h0_residual_pred = dt.predict(X_boston) ``` Typically, we will want to train the simplest, shallowest decision-trees as base learners, to avoid overfitting (which is why we are setting the maximum depth of the tree to one.) #### Updating the global model We now want to add the predictions made by this model (over the residuals) to the predictions made by the base model. 
The update rule goes as follows, where $\eta$ is a **learning rate** that controls the magnitude of the update and acts as a regularizer:

$$F^{(j)}(x_i) = F^{(j-1)}(x_i) - \eta \cdot \frac{\partial L(y_i, F^{(j-1)}(x_i))}{\partial F^{(j-1)}(x_i)}$$

The learning rate is also known as the *shrinkage factor*, as it shrinks the impact of the corrections when $\eta$ is between 0 and 1. Since we don't know the true value of $y_i$ at prediction time, we plug in the prediction of the decision-tree we fit in the previous step:

$$F^{(j)}(x_i) \approx F^{(j-1)}(x_i) - \eta \cdot \hat{r}^{(j)}_i(x_i)$$

If everything goes well, with each step we are moving closer and closer to the function that minimizes the squared error.

```python
def update_predictions(previous_prediction, residual_prediction, lr):
    return previous_prediction - lr * residual_prediction

y_boston_h1_pred = update_predictions(y_boston_h0_pred, y_boston_h0_residual_pred, lr=0.1)
```

Now, we compute the mean squared error for the updated predictions.

```python
mean_squared_error(y_boston, y_boston_h1_pred)
```

    70.660188943705

Hurrah! The error decreased. We would continue the process until the error stopped decreasing, or until we detected signs of overfitting.

### Putting it all together

Suppose we have $m$ individual decision-trees (boosting stages). The final prediction will be given by the initial constant combined with all the gradient-based corrections:

$$F^{(m)}(x_i) = \gamma - \sum_{j=1}^m \eta \cdot \hat{r}^{(j)}_i(x_i)$$

where $\hat{r}^{(j)}_i(x_i)$ is the output of the decision-tree fitted to the gradient at iteration $j$, so each term takes a step in the direction of the negative gradient.

A final note: don't worry if you didn't understand all the details of the previous section - it was quite heavy on the mathematical side. Just make sure you understand the basics of boosting and the idea behind it, and `sklearn` has got you covered!

## With sklearn

Enough with theory: `sklearn` to the rescue! Let's create a `GradientBoostingClassifier`, which can be used for classification problems:

```python
gb = GradientBoostingClassifier(
    learning_rate=.1,
    n_estimators=10
)
gb.fit(X_, y)
```

    GradientBoostingClassifier(n_estimators=10)

#### Include sampling

Some implementations provide the ability to sample observations and features at each iteration, similarly to what happens with Random Forests. Unsurprisingly, this reduces overfitting at the expense of increased bias.

```python
gb = GradientBoostingClassifier(
    learning_rate=.1,
    n_estimators=10,
    subsample=.5
)
gb.fit(X_, y)
```

    GradientBoostingClassifier(n_estimators=10, subsample=0.5)

# Conclusion

You are now armed with the power of bagging and boosting, two techniques you can easily use to obtain models with a lot of expressive power that also generalize better! Hopefully, you also got some insight into the world of more advanced machine learning techniques.

Make sure to review this notebook well, and when you're ready, go solve the exercises. Good luck!
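As an optional recap of the manual walk-through above, here is a minimal sketch that chains several boosting iterations by reusing the `compute_gradient` and `update_predictions` helpers and the Boston data from the earlier cells (the number of iterations and the learning rate are arbitrary choices for illustration):

```python
# Start from the constant prediction (the mean), then repeatedly fit a
# shallow tree to the gradient and update the running predictions.
preds = np.repeat(y_boston.mean(), y_boston.size)
lr = 0.1

for j in range(10):
    gradient = compute_gradient(y_boston, preds)      # -2 * (y - current prediction)
    stump = DecisionTreeRegressor(max_depth=1)
    stump.fit(X_boston, gradient)
    preds = update_predictions(preds, stump.predict(X_boston), lr=lr)
    print(j, mean_squared_error(y_boston, preds))     # track the training error
```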
```python import numpy as np import matplotlib.pyplot as plt from sympy import * f, d, D, z, kx = symbols('f,d,D,Z, K_x') z = symbols('Z') Delta = Symbol("Delta") eq_right = Delta * z eq_left = Abs(1/d-1/(d+Delta*d))*f*D*kx Eq(eq_right, eq_left) ``` $\displaystyle \Delta Z = D K_{x} f \left|{\frac{1}{\Delta d + d} - \frac{1}{d}}\right|$ ```python # intrinsics: [1137.952358814274, 1139.2455706836256, 1040.2907098789112, 764.4019425366827] # resolution: [2048, 1536] # assuming that first entry of intrinsics is the kx parameter [pixels/inches] # error for given measured diameter d on image plane in meters def error_function(d, delta_d): delta_Z = D*f*kx*abs(1/(d+delta_d) - 1/d)*10**(-3) # factor at the end converts it to meters from cm return delta_Z kx = 1137.952358814274/2.54 # to turn [pixels/inches] to [pixels/cm] ky = 1139.2455706836256/2.54 # to turn [pixels/inches] to [pixels/cm] num_of_samples = 2048 d_d = 3 f = 1 D = 10 # d is diameter of ball in pixels (only integer numbers) d = np.linspace(1, 2048, num_of_samples, endpoint=True) delta_Z = error_function(d, d_d) print(delta_Z.shape) pprint(delta_Z) fig, ax = plt.subplots(nrows=1, ncols=3) fig.set_size_inches(h=14, w=21) fig.subplots_adjust(left=0.05, bottom=0.05, right=0.95, top=0.9, wspace=0.25, hspace=0.25) distance = np.linspace(0,100, num_of_samples, endpoint=True) # different distances (camera frame) to scale x_error and y_error # delta_x is in pixels for delta_x in range(1,10): delta_X = distance*delta_x/(kx*f) ax[0].plot(distance, delta_X, label='with delta_x: {}'.format(delta_x)) for delta_y in range(1,10): delta_Y = distance*delta_y/(ky*f) ax[1].plot(distance, delta_Y, label='with delta_y: {}'.format(delta_y)) for delta_d in range(1,10): delta_Z = error_function(d[:30], delta_d) ax[2].plot(d[:30], delta_Z, label='with delta_d: {}'.format(delta_d)) ax[0].set_xlabel('distance in Z-direction in meters') ax[0].set_ylabel('error in X-direction in meters') ax[0].legend() ax[1].set_xlabel('distance in Z-direction in meters') ax[1].set_ylabel('error in Y-direction in meters') ax[1].legend() ax[2].set_xlabel('diameter d in pixels') ax[2].set_ylabel('error in Z-direction in meters') ax[2].legend() plt.show() ``` ```python ```
# Poisson Distribution > ***GitHub***: https://github.com/czs108 ## Definition \begin{equation} P(X = r) = \frac{e^{-\lambda} \cdot {\lambda}^{r}}{r!} \end{equation} \begin{equation} \lambda = \text{The mean number of occurrences in the interval or the rate of occurrence.} \end{equation} If a variable $X$ follows a *Poisson Distribution* where the *mean* number of occurrences in the interval or the rate of occurrence is $\lambda$. This can be written as \begin{equation} X \sim Po(\lambda) \end{equation} ## Expectation \begin{equation} E(X) = \lambda \end{equation} ## Variance \begin{equation} Var(X) = \lambda \end{equation} ## Approximation A *Binomial Distribution* $X \sim B(n,\, p)$ can be approximated by $X \sim Po(np)$ if $n$ is *large* and $p$ is *small*. Because both the *expectation* and *variance* of $X \sim Po(np)$ are $np$. When $n$ is large and $p$ is small, $(1 - p) \approx 1$ and for $X \sim B(n,\, p)$: \begin{equation} E(X) = np \end{equation} \begin{equation} Var(X) \approx np \end{equation} The approximation is typically very close if $n > 50$ and $p < 0.1$.
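As a quick numerical illustration of this approximation (a minimal sketch, assuming `scipy` is available), we can compare the two probability mass functions for an $n$ and $p$ in the suggested range:

```python
from scipy.stats import binom, poisson

n, p = 100, 0.05   # n > 50 and p < 0.1, so the approximation should be close
lam = n * p        # the Poisson rate used for the approximation

for r in range(6):
    print(r, round(binom.pmf(r, n, p), 4), round(poisson.pmf(r, lam), 4))
```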
<h1>SymPy: Open Source Symbolic Mathematics</h1>

```
%load_ext sympyprinting
```

```
from __future__ import division
from sympy import *
x, y, z = symbols("x y z")
k, m, n = symbols("k m n", integer=True)
f, g, h = map(Function, 'fgh')
```

<h2>Elementary operations</h2>

```
Rational(3,2)*pi + exp(I*x) / (x**2 + y)
```

```
exp(I*x).subs(x,pi).evalf()
```

```
e = x + 2*y
```

```
srepr(e)
```

    Add(Symbol('x'), Mul(Integer(2), Symbol('y')))

```
exp(pi * sqrt(163)).evalf(50)
```

<h2>Algebra</h2>

```
((x+y)**2 * (x+1)).expand()
```

```
a = 1/x + (x*sin(x) - 1)/x
display(a)
simplify(a)
```

```
solve(Eq(x**3 + 2*x**2 + 4*x + 8, 0), x)
```

    [-2⋅ⅈ, 2⋅ⅈ, -2]

```
a, b = symbols('a b')
Sum(6*n**2 + 2**n, (n, a, b))
```

<h2>Calculus</h2>

```
limit((sin(x)-x)/x**3, x, 0)
```

```
(1/cos(x)).series(x, 0, 6)
```

```
diff(cos(x**2)**2 / (1+x), x)
```

```
integrate(x**2 * cos(x), (x, 0, pi/2))
```

```
eqn = Eq(Derivative(f(x),x,x) + 9*f(x), 1)
display(eqn)
dsolve(eqn, f(x))
```
## Multiple Linear Regression We will see how multiple input variables together influence the output variable, while also learning how the calculations differ from that of Simple LR model. We will also build a regression model using Python. At last, we will go deeper into Linear Regression and will learn things like Multicollinearity, Hypothesis Testing, Feature Selection, and much more. ```python # Importing required libraries import pandas as pd import numpy as np %matplotlib inline import matplotlib.pyplot as plt from mpl_toolkits import mplot3d from mpl_toolkits.mplot3d import Axes3D ``` ### Advertising Data We are going to use Advertising data which is available on the site of USC Marshall School of Business. The advertising data set consists of the sales of a product in 200 different markets, along with advertising budgets for three different media: TV, radio, and newspaper. Loading and plotting the Data ```python data = pd.read_csv("Advertising.csv") data.drop("Unnamed: 0", axis =1,inplace=True) data ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>TV</th> <th>Radio</th> <th>Newspaper</th> <th>Sales</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>230.1</td> <td>37.8</td> <td>69.2</td> <td>22.1</td> </tr> <tr> <th>1</th> <td>44.5</td> <td>39.3</td> <td>45.1</td> <td>10.4</td> </tr> <tr> <th>2</th> <td>17.2</td> <td>45.9</td> <td>69.3</td> <td>9.3</td> </tr> <tr> <th>3</th> <td>151.5</td> <td>41.3</td> <td>58.5</td> <td>18.5</td> </tr> <tr> <th>4</th> <td>180.8</td> <td>10.8</td> <td>58.4</td> <td>12.9</td> </tr> <tr> <th>...</th> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr> <tr> <th>195</th> <td>38.2</td> <td>3.7</td> <td>13.8</td> <td>7.6</td> </tr> <tr> <th>196</th> <td>94.2</td> <td>4.9</td> <td>8.1</td> <td>9.7</td> </tr> <tr> <th>197</th> <td>177.0</td> <td>9.3</td> <td>6.4</td> <td>12.8</td> </tr> <tr> <th>198</th> <td>283.6</td> <td>42.0</td> <td>66.2</td> <td>25.5</td> </tr> <tr> <th>199</th> <td>232.1</td> <td>8.6</td> <td>8.7</td> <td>13.4</td> </tr> </tbody> </table> <p>200 rows × 4 columns</p> </div> ```python plt.scatter(data.TV, data.Sales, color='blue', label='TV', alpha=0.5) plt.scatter(data.Radio, data.Sales, color='red', label='radio', alpha=0.5) plt.scatter(data.Newspaper, data.Sales, color='green', label='newspaper', alpha=0.5) plt.legend(loc="lower right") plt.title("Sales vs. Advertising") plt.xlabel("Advertising [1000 $]") plt.ylabel("Sales [Thousands of units]") plt.grid() plt.show() ``` ---- #### Mathematically, general Linear Regression model can be expressed as: \begin{align} Y= β_0+ β_1 X_1+ β_2 X_2+⋯+ β_p X_p \end{align} Here, Y is the output variable, and X terms are the corresponding input variables.The first β term (βo) is the intercept constant and is the value of Y in absence of all predictors (i.e when all X terms are 0). Finding the values of these constants(β) is what regression model does by minimizing the error function and fitting the best line or hyperplane (depending on the number of input variables). This is done by minimizing the Residual Sum of Squares (RSS), which is obtained by squaring the differences between actual and predicted outcomes. 
\begin{align} RSS = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 \end{align}

Because this method finds the least sum of squares, it is also known as the **Ordinary Least Squares** (OLS) method.

There are two primary ways to implement the OLS algorithm: **Scikit-Learn** and **Statsmodels**.

#### Scikit-Learn:

Just import the Linear Regression module from the Sklearn package and fit the model on the data. This method is pretty straightforward and you can see how to use it below.

```python
from sklearn.linear_model import LinearRegression

sk_model = LinearRegression()
sk_model.fit(data.drop('Sales', axis=1), data.Sales)
```

    LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=False)

```python
print("Intercept: ", sk_model.intercept_)
print("Coefficients: ", sk_model.coef_)
```

    Intercept:  2.938889369459412
    Coefficients:  [ 0.04576465  0.18853002 -0.00103749]

#### Statsmodels

Another way is to use the Statsmodels package to implement OLS. Statsmodels is a Python package that allows performing various statistical tests on the data. We will use it here because it will be helpful for us later as well.

```python
# Importing statsmodels
import statsmodels.formula.api as sm

# Fitting the OLS on data
model = sm.ols('Sales ~ TV + Radio + Newspaper', data).fit()
print(model.params)
```

    Intercept    2.938889
    TV           0.045765
    Radio        0.188530
    Newspaper   -0.001037
    dtype: float64

    /usr/local/lib/python3.7/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
      import pandas.util.testing as tm

These results can be interpreted as follows:

* If we fix the budget for TV & newspaper, then increasing the radio budget by 1000 USD will lead to an increase in sales by around **189 units** (0.189 * 1000).
* Similarly, by fixing the radio & newspaper budgets, we infer an approximate rise of **46 units** of product per 1000 USD increase in the TV budget.
* However, for the newspaper budget, since the coefficient is quite negligible (close to zero), it's evident that the newspaper is not affecting the sales. In fact, it's on the negative side of zero (-0.001), which, if the magnitude were big enough, could have meant that this agent is rather causing the sales to fall. But we cannot make that kind of inference with such a negligible value.

If sales is regressed solely on newspaper (as shown below), the slope coefficient comes out to be 0.055, which is significantly large compared to what we saw above.
```python
# Simple Linear regression for sales vs newspaper
model_npaper = sm.ols('Sales ~ Newspaper', data).fit()
print(model_npaper.params)
```

    Intercept    12.351407
    Newspaper     0.054693
    dtype: float64

#### This is explained by **Multicollinearity**

Let's plot and observe the correlations among the variables.

```python
data.corr()
```

|           |       TV |    Radio | Newspaper |    Sales |
|-----------|---------:|---------:|----------:|---------:|
| TV        | 1.000000 | 0.054809 |  0.056648 | 0.782224 |
| Radio     | 0.054809 | 1.000000 |  0.354104 | 0.576223 |
| Newspaper | 0.056648 | 0.354104 |  1.000000 | 0.228299 |
| Sales     | 0.782224 | 0.576223 |  0.228299 | 1.000000 |

#### Plotting the heatmap to visualize the Correlation

```python
# Plotting correlation heatmap
plt.ylim(-.5, 3.5)
plt.imshow(data.corr(), cmap=plt.cm.GnBu, interpolation='nearest', data=True)
plt.colorbar()
tick_marks = [i for i in range(len(data.columns))]
plt.xticks(tick_marks, data.columns, rotation=45)
plt.yticks(tick_marks, data.columns, rotation=45)

# Putting annotations
for i in range(len(data.columns)):
    for j in range(len(data.columns)):
        text = '%.2f' % (data.corr().iloc[i, j])
        plt.text(i - 0.2, j - 0.1, text)
```

The correlation between newspaper and radio is 0.35. This indicates a fair relationship between newspaper and radio budgets. Hence, it can be inferred that when the radio budget is increased for a product, there's a tendency to spend more on newspapers as well.

This is called Multicollinearity, and it refers to a situation in which two or more input variables are linearly related.

Hence, even though the Multiple Regression model shows no impact on sales by the newspaper, the Simple Regression model still does, due to this multicollinearity and the absence of other input variables.

* Sales & Radio → probable causation
* Newspaper & Radio → multicollinearity
* Sales & Newspaper → transitive correlation

---

### Hypothesis Testing

One of the fundamental questions that should be answered while running Multiple Linear Regression is whether or not at least one of the predictors is useful in predicting the output.

We saw that the three predictors TV, radio and newspaper had different degrees of linear relationship with the sales. But what if the relationship is just by chance and there is no actual impact on sales due to any of the predictors?

The model can only give us numbers to establish a close enough linear relationship between the response variable and the predictors. However, it cannot prove the credibility of these relationships.

To have some confidence, we take help from statistics and do something known as a Hypothesis Test. We start by forming a Null Hypothesis and a corresponding Alternative Hypothesis.

<center> NULL HYPOTHESIS </center>

\begin{align} H_0 : \beta_1 = \beta_2 = ... = \beta_p = 0 \end{align}

\begin{align} H_0 : \beta_{TV} = \beta_{radio} = \beta_{newspaper} = 0 \end{align}

<center> ALTERNATIVE HYPOTHESIS </center>

\begin{align} H_a : \text{At least one } \beta_i \text{ is non-zero} \end{align}

The hypothesis test is performed by using the F-statistic.
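For reference (the original notebook does not write the formula out explicitly), the F-statistic in its standard textbook form compares the variance explained by the model against the residual variance:

\begin{align} F = \frac{(TSS - RSS)/p}{RSS/(n - p - 1)} \end{align}

where $TSS = \sum_i (y_i - \bar{y})^2$ is the Total Sum of Squares, $p$ is the number of predictors and $n$ is the number of observations.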
The formula for this statistic contains Residual Sum of Squares (RSS) and the Total Sum of Squares (TSS), which we can calculate using the Statsmodels. ```python print(model.summary2()) ``` Results: Ordinary least squares ================================================================= Model: OLS Adj. R-squared: 0.896 Dependent Variable: Sales AIC: 780.3622 Date: 2021-03-09 18:29 BIC: 793.5555 No. Observations: 200 Log-Likelihood: -386.18 Df Model: 3 F-statistic: 570.3 Df Residuals: 196 Prob (F-statistic): 1.58e-96 R-squared: 0.897 Scale: 2.8409 ------------------------------------------------------------------ Coef. Std.Err. t P>|t| [0.025 0.975] ------------------------------------------------------------------ Intercept 2.9389 0.3119 9.4223 0.0000 2.3238 3.5540 TV 0.0458 0.0014 32.8086 0.0000 0.0430 0.0485 Radio 0.1885 0.0086 21.8935 0.0000 0.1715 0.2055 Newspaper -0.0010 0.0059 -0.1767 0.8599 -0.0126 0.0105 ----------------------------------------------------------------- Omnibus: 60.414 Durbin-Watson: 2.084 Prob(Omnibus): 0.000 Jarque-Bera (JB): 151.241 Skew: -1.327 Prob(JB): 0.000 Kurtosis: 6.332 Condition No.: 454 ================================================================= If the value of F-statistic is equal to or very close to 1, then the results are in favor of the Null Hypothesis and we fail to reject it. But as we can see that the F-statistic is many folds larger than 1, thus providing strong evidence against the Null Hypothesis (that all coefficients are zero). Hence, we **reject the Null Hypothesis** and are confident that at least one predictor is useful in predicting the output. *Note that F-statistic is not suitable when the number of predictors(p) is large, or if p is greater than the number of data samples (n).* Hence, we can say that at least one of the three advertising agents is useful in predicting sales. But to find out, which predictor or predictors are useful and which one are not, we do **Feature Selection** Two of the ways of doing Feature Selection are **Forward Selecton** & **Backward Selection** Let's proceed with forward selection. We start with a model without any predictor and just the intercept term. We then perform simple linear regression for each predictor to find the best performer(lowest RSS). We then add another variable to it and check for the best 2-variable combination again by calculating the lowest RSS(Residual Sum of Squares). After that the best 3-variable combination is checked, and so on. The approach is stopped when some stopping rule is satisfied. ```python # Defining the function to evaluate amodel def evaluateModel (model): print("RSS = ", ((data.sales - model.predict())**2).sum()) print("R2 = ", model.rsquared) ``` Single Predictor Models ```python # For TV model_TV = sm.ols('Sales ~ TV', data=data).fit() print("model_TV") print(model_TV.summary()) print("------------") # For radio model_radio = sm.ols('Sales ~ Radio', data=data).fit() print("model_radio") print(model_radio.summary()) print("------------") # For newspaper model_newspaper = sm.ols('Sales ~ Newspaper', data=data).fit() print("model_newspaper") print(model_newspaper.summary()) print("------------") ``` model_TV OLS Regression Results ============================================================================== Dep. Variable: Sales R-squared: 0.612 Model: OLS Adj. R-squared: 0.610 Method: Least Squares F-statistic: 312.1 Date: Tue, 09 Mar 2021 Prob (F-statistic): 1.47e-42 Time: 18:29:01 Log-Likelihood: -519.05 No. Observations: 200 AIC: 1042. 
Df Residuals: 198 BIC: 1049. Df Model: 1 Covariance Type: nonrobust ============================================================================== coef std err t P>|t| [0.025 0.975] ------------------------------------------------------------------------------ Intercept 7.0326 0.458 15.360 0.000 6.130 7.935 TV 0.0475 0.003 17.668 0.000 0.042 0.053 ============================================================================== Omnibus: 0.531 Durbin-Watson: 1.935 Prob(Omnibus): 0.767 Jarque-Bera (JB): 0.669 Skew: -0.089 Prob(JB): 0.716 Kurtosis: 2.779 Cond. No. 338. ============================================================================== Warnings: [1] Standard Errors assume that the covariance matrix of the errors is correctly specified. ------------ model_radio OLS Regression Results ============================================================================== Dep. Variable: Sales R-squared: 0.332 Model: OLS Adj. R-squared: 0.329 Method: Least Squares F-statistic: 98.42 Date: Tue, 09 Mar 2021 Prob (F-statistic): 4.35e-19 Time: 18:29:01 Log-Likelihood: -573.34 No. Observations: 200 AIC: 1151. Df Residuals: 198 BIC: 1157. Df Model: 1 Covariance Type: nonrobust ============================================================================== coef std err t P>|t| [0.025 0.975] ------------------------------------------------------------------------------ Intercept 9.3116 0.563 16.542 0.000 8.202 10.422 Radio 0.2025 0.020 9.921 0.000 0.162 0.243 ============================================================================== Omnibus: 19.358 Durbin-Watson: 1.946 Prob(Omnibus): 0.000 Jarque-Bera (JB): 21.910 Skew: -0.764 Prob(JB): 1.75e-05 Kurtosis: 3.544 Cond. No. 51.4 ============================================================================== Warnings: [1] Standard Errors assume that the covariance matrix of the errors is correctly specified. ------------ model_newspaper OLS Regression Results ============================================================================== Dep. Variable: Sales R-squared: 0.052 Model: OLS Adj. R-squared: 0.047 Method: Least Squares F-statistic: 10.89 Date: Tue, 09 Mar 2021 Prob (F-statistic): 0.00115 Time: 18:29:01 Log-Likelihood: -608.34 No. Observations: 200 AIC: 1221. Df Residuals: 198 BIC: 1227. Df Model: 1 Covariance Type: nonrobust ============================================================================== coef std err t P>|t| [0.025 0.975] ------------------------------------------------------------------------------ Intercept 12.3514 0.621 19.876 0.000 11.126 13.577 Newspaper 0.0547 0.017 3.300 0.001 0.022 0.087 ============================================================================== Omnibus: 6.231 Durbin-Watson: 1.983 Prob(Omnibus): 0.044 Jarque-Bera (JB): 5.483 Skew: 0.330 Prob(JB): 0.0645 Kurtosis: 2.527 Cond. No. 64.7 ============================================================================== Warnings: [1] Standard Errors assume that the covariance matrix of the errors is correctly specified. ------------ We observe that for model_TV, the RSS is least and R² value is the most among all the models. Hence we select model_TV as our base model to move forward. Now, we will add the radio and newspaper one by one and check the new values. 
```python # For TV & radio model_TV_radio = sm.ols('Sales ~ TV + Radio', data=data).fit() print("model_TV_radio") print(model_TV_radio.summary()) print("------------") # For TV & newspaper model_TV_newspaper = sm.ols('Sales ~ TV + Newspaper', data=data).fit() print("model_TV_newspaper") print(model_TV_newspaper.summary()) print("------------") ``` model_TV_radio OLS Regression Results ============================================================================== Dep. Variable: Sales R-squared: 0.897 Model: OLS Adj. R-squared: 0.896 Method: Least Squares F-statistic: 859.6 Date: Tue, 09 Mar 2021 Prob (F-statistic): 4.83e-98 Time: 18:29:01 Log-Likelihood: -386.20 No. Observations: 200 AIC: 778.4 Df Residuals: 197 BIC: 788.3 Df Model: 2 Covariance Type: nonrobust ============================================================================== coef std err t P>|t| [0.025 0.975] ------------------------------------------------------------------------------ Intercept 2.9211 0.294 9.919 0.000 2.340 3.502 TV 0.0458 0.001 32.909 0.000 0.043 0.048 Radio 0.1880 0.008 23.382 0.000 0.172 0.204 ============================================================================== Omnibus: 60.022 Durbin-Watson: 2.081 Prob(Omnibus): 0.000 Jarque-Bera (JB): 148.679 Skew: -1.323 Prob(JB): 5.19e-33 Kurtosis: 6.292 Cond. No. 425. ============================================================================== Warnings: [1] Standard Errors assume that the covariance matrix of the errors is correctly specified. ------------ model_TV_newspaper OLS Regression Results ============================================================================== Dep. Variable: Sales R-squared: 0.646 Model: OLS Adj. R-squared: 0.642 Method: Least Squares F-statistic: 179.6 Date: Tue, 09 Mar 2021 Prob (F-statistic): 3.95e-45 Time: 18:29:01 Log-Likelihood: -509.89 No. Observations: 200 AIC: 1026. Df Residuals: 197 BIC: 1036. Df Model: 2 Covariance Type: nonrobust ============================================================================== coef std err t P>|t| [0.025 0.975] ------------------------------------------------------------------------------ Intercept 5.7749 0.525 10.993 0.000 4.739 6.811 TV 0.0469 0.003 18.173 0.000 0.042 0.052 Newspaper 0.0442 0.010 4.346 0.000 0.024 0.064 ============================================================================== Omnibus: 0.658 Durbin-Watson: 1.969 Prob(Omnibus): 0.720 Jarque-Bera (JB): 0.415 Skew: -0.093 Prob(JB): 0.813 Kurtosis: 3.122 Cond. No. 410. ============================================================================== Warnings: [1] Standard Errors assume that the covariance matrix of the errors is correctly specified. ------------ As we can see that our values have improved tremendously in TV & radio model. RSS has increased and R² has decreased further, as compared to model_TV. It’s a good sign. On the other hand, it's not the same for TV and newspaper. The values have improved slightly by adding newspaper too, but not as significantly as with the radio. Hence, at this step, we will proceed with the TV & radio model and will observe the difference when we add newspaper to this model. ```python # For TV, radio & newspaper model_all = sm.ols('Sales ~ TV + Radio + Newspaper', data=data).fit() print("model_all") print(model_all.summary()) print("------------") ``` model_all OLS Regression Results ============================================================================== Dep. Variable: Sales R-squared: 0.897 Model: OLS Adj. 
R-squared: 0.896 Method: Least Squares F-statistic: 570.3 Date: Tue, 09 Mar 2021 Prob (F-statistic): 1.58e-96 Time: 18:29:02 Log-Likelihood: -386.18 No. Observations: 200 AIC: 780.4 Df Residuals: 196 BIC: 793.6 Df Model: 3 Covariance Type: nonrobust ============================================================================== coef std err t P>|t| [0.025 0.975] ------------------------------------------------------------------------------ Intercept 2.9389 0.312 9.422 0.000 2.324 3.554 TV 0.0458 0.001 32.809 0.000 0.043 0.049 Radio 0.1885 0.009 21.893 0.000 0.172 0.206 Newspaper -0.0010 0.006 -0.177 0.860 -0.013 0.011 ============================================================================== Omnibus: 60.414 Durbin-Watson: 2.084 Prob(Omnibus): 0.000 Jarque-Bera (JB): 151.241 Skew: -1.327 Prob(JB): 1.44e-33 Kurtosis: 6.332 Cond. No. 454. ============================================================================== Warnings: [1] Standard Errors assume that the covariance matrix of the errors is correctly specified. ------------ The values have not improved with any significance. Hence, it’s imperative to not add newspaper and finalize the model with TV and radio as selected features. So our final model can be expressed as below: \begin{align} sales = 2.92 + 0.046*TV + 0.188*radio \end{align} Plotting the variables TV, radio, and sales in the 3D graph, we can visualize how our model has fit a regression plane to the data. ```python fig = plt.figure(figsize=(12,10)) ax = fig.add_subplot(211, projection='3d') fig.suptitle('Regression: Sales ~ TV & radio Advertising') # Defining z function (or sales in terms of TV and radio) def z_function(x,y): return (2.938889 + (0.045765*y) + (0.188530*x)) X, Y = np.meshgrid(range(0,50,2),range(0,300,10)) Z = z_function(X, Y) ## Creating Wireframe ax.plot_wireframe(X, Y, Z, color='black') ax.set_xlabel('x') ax.set_ylabel('y') ax.set_zlabel('z') ## Creating Surface plot ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap='winter', edgecolor='black', alpha=0.5) ## Adding Scatter Plot ax.scatter(data.Radio, data.TV, data.Sales, c='red', s=25) ## Adding labels ax.set_xlabel('Radio') ax.set_ylabel('TV') ax.set_zlabel('Sales') ax.text(0,150,1, '@DataScienceWithSan') ## Rotating for better view ax.view_init(10,30) ```
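As an optional follow-up (not part of the original notebook), the finalized TV-and-radio model can be used to predict sales for new budgets. The sketch below assumes the `model_TV_radio` results object and the `pandas` import from earlier cells; the budget figures are hypothetical, chosen only for illustration.

```python
# Hypothetical advertising budgets, in thousands of dollars (illustrative values only).
new_budgets = pd.DataFrame({"TV": [100.0, 250.0], "Radio": [20.0, 40.0]})

# Results objects from the statsmodels formula API accept a DataFrame whose
# columns match the variable names used in the model formula.
predicted_sales = model_TV_radio.predict(new_budgets)
print(predicted_sales)
```

The predictions come back in the same units as the Sales column (thousands of units, per the scatter-plot labels at the top of the notebook).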
---

*Source: `05_Multi Linear Regression/03_Advertisment_Solution.ipynb`, from the `kunalk3/eR_task` repository (MIT license).*
# hw 6: estimators learning objectives: * solidify "what is an estimator" * evaluate the effectiveness of an estimator computationally (through simulation) * understand the notion of unbiasedness, consistency, and efficiency of an estimator and evaluate these qualities computationally (through simulation) ```julia using StatsBase using Random using Statistics using PyPlot using PyCall sns = pyimport("seaborn") sns.set_context("talk"); ``` ┌ Info: Recompiling stale cache file C:\Users\cartemic\.julia\compiled\v1.2\StatsBase\EZjIG.ji for StatsBase [2913bbd2-ae8a-5f71-8c99-4fb6c76f3a91] └ @ Base loading.jl:1240 ### what is the most important problem in your field? (0) It is important to effectively communicate with people from different fields. Introduce yourself to someone in the class from a different field of study (someone you haven't met). Ask them, "what is the most important problem in your field of study?". Then, argue/explain what you think is the most important problem in *your* field of study. Write the name of the student you spoke with and their field of study. I'd also be interested to read about what you think is the most important problem in your field (a few sentences), but this is not required. ("field of study" can be as general as "chemistry" or as specific as "self-assembly of nanoparticles".) I spoke with Heather Miller, whose field is humanitarian engineering ### counting (1) A biologist takes a random sample of six fish from a lake. The lake has ten distinct species of fish. It is possible for the biologist to select more than one fish of any given species. e.g., one outcome is: one fish of species $X$, two fish of species $Y$, and four fish of species $Z$. How many different samples could the biologist draw from the lake? * the sampling is done without replacement, but this is irrelevant since the lake contains more than six fish of each species * the order in which the fish are sampled does not matter * fish of a given species are indistinguishable Essentially what this question comes down to is that we have 10 distinct bins (i.e. species), and we want to know how many ways we can split up 6 particles (samples) between them. Pretty neat how partitioning just kind of... works for multisets like that. Anyway, from our class notes there are ${n + k - 1 \choose k - 1}$ ways to partition $n$ particles into $k$ bins. Interestingly, there are also ${n + k - 1 \choose n}$ ways to partition the same $n$ particles into the same $k$ bins because $${{n + k - 1} \choose k - 1} = \frac{(n + k - 1)!}{(k-1)!(n+k-1-k+1)!} = \frac{(n+k-1)!}{(k-1)!n!}$$ is equivalent to $${n + k - 1 \choose n} = \frac{(n + k - 1)!}{n!(n+k-1-n)!} = \frac{(n+k-1)!}{n!(k-1)!}$$ which means that, generally, $${m + n \choose m} = {m + n \choose n}$$ which is pretty cool. ```julia function n_choose_k(n::Int, k::Int) # using multiple divide operations to avoid overflow for larger values # of n, k return Int(factorial(n) / factorial(k) / factorial(n - k)) end @assert n_choose_k(7, 5) == 21 sample_size = 6 num_species = 10 n = sample_size + num_species - 1 k = sample_size fish_partitions = n_choose_k(n, k) println("There are $fish_partitions samples of fish that the biologist could collect.") ``` There are 5005 samples of fish that the biologist could collect. ### capture, mark, release, recapture In ecology, one wishes to estimate the size of a population (e.g. turtles). It is too costly and impractical to count *every* member of a population. One strategy is to: 1. 
capture a random sample from the population, without replacement 2. mark/tag each member of this random sample 3. release the marked sample back into the population 4. after sufficient time has passed, recapture another sample 5. count the number of marked members from the recaptured sample We assume that: * when the captured and marked sample is released back into the wild, they randomly (homogenously) mix with the rest of the (unmarked) population before we recapture * marking a member of the population does not change its likelihood of being recaptured * the time between capture/mark/release and recapture/count is short enough to neglect deaths, births, and migration out of the population define the variables: * $n$: the total number of turtles in the population (unknown) * $k$: number of turtles captured, marked, and released in the first phase * $k_r$: number of turtles recaptured * $m$: the number of recaptured turtles that had marks on them ##### Lincoln-Petersen estimator A very intuitive estimator is found by imposing that the proportion found marked in the recaptured sample is equal to the proportion of the population that was captured/marked/released in the first phase. \begin{equation} \frac{k}{n}=\frac{m}{k_r} \end{equation} giving the Lincoln-Petersen estimator for the population size $n$: \begin{equation} \hat{n} = \frac{k k_r}{m} \end{equation} ##### Chapmen estimator Chapmen derived a different estimator that we will compare to the Lincoln-Petersen estimator below. \begin{equation} \hat{n} = \frac{(k + 1) (k_r +1)}{m+1} - 1 \end{equation} #### estimating the population of turtles on a small island We wish to estimate the population of turtles on a small island by a capture, mark, release, and recapture strategy. (2) create a mutable data structure `Turtle` that represents a turtle in the population. It should have a single attribute, `marked`, that indicates whether it has been marked or not. ```julia mutable struct Turtle marked::Bool function Turtle() # new turtles aren't marked new(false) end end @assert !Turtle().marked ``` (3) write a function `create_population(nb_turtles::Int)` that creates a population of `nb_turtles` unmarked `Turtle`s. return the turtles as an `Array{Turtle}`. ```julia function create_population(nb_turtles::Int) return [Turtle() for _ in 1:nb_turtles] end @assert create_population(4) isa Array{Turtle} ``` (4) write a function `count_marked(turtles::Array{Turtle})` that takes in a population of turtles and returns the number of these turtles that are marked. ```julia turtles = create_population(500) count_marked(turtles) # should return zero ``` ```julia function count_marked(turtles::Array{Turtle}) sum([t.marked for t in turtles]) end turtles = create_population(500) @assert count_marked(turtles) == 0 [turtles[i].marked = true for i in 1:5] @assert count_marked(turtles) == 5 # put the test turtles back in the pond [t.marked = false for t in turtles]; ``` (5) write a function `capture_mark_release!` that takes in two arguments: * `turtles::Array{Turtle}` the population of turtles * `nb_capture_mark_release::Int` the number of turtles to randomly capture (select without replacement) and mark and modifies the `marked` attribute of `nb_capture_mark_release` randomly selected turtles in `turtles` to denote that they have been marked. think about why the function has an `!`. 
```julia turtles = create_population(500) capture_mark_release!(turtles, 45) count_marked(turtles) # should return 45 ``` ```julia function capture_mark_release!( turtles::Array{Turtle}, nb_capture_mark_release ) selected = sample(turtles, nb_capture_mark_release, replace=false) for t in selected t.marked = true end end turtles = create_population(100) capture_mark_release!(turtles, 30) @assert count_marked(turtles) == 30 ``` (6) write a function `recapture(turtles::Array{Turtle}, nb_recapture::Int)` that returns a random sample (without replacement) of `nb_recapture` turtles from `turtles` in the form of an `Array{Turtle}`. ```julia turtles = create_population(500) capture_mark_release!(turtles, 45) recaptured_turtles = recapture(turtles, 50) # should return Array{Turtle} with 50 elements ``` ```julia function recapture(turtles::Array{Turtle}, nb_recapture::Int) return sample(turtles, nb_recapture, replace=false) end turtles = create_population(500) capture_mark_release!(turtles, 45) @assert length(recapture(turtles, 50)) == 50 ``` (7) write two functions, one for each estimator: * `chapman_estimator(nb_capture_mark_release::Int, nb_recapture::Int, nb_marked_in_recaptured::Int)` * `lincoln_petersen_estimator(nb_capture_mark_release::Int, nb_recapture::Int, nb_marked_in_recaptured::Int)` that each take in the entire population of turtles (marked and unmarked), `turtles`, and the array of recaptured turtles, `recaptured_turtles`, and returns the respective estimate $\hat{n}$ of the number of turtles. here, `nb_capture_mark_release` is $k$, `nb_recapture` is $k_r$, and `nb_marked_in_recaptured` is $m$. ```julia function chapman_estimator( nb_capture_mark_release::Int, nb_recapture::Int, nb_marked_in_recaptured::Int ) return (nb_capture_mark_release + 1) * (nb_recapture + 1) / (nb_marked_in_recaptured + 1) - 1 end ``` chapman_estimator (generic function with 1 method) ```julia function lincoln_petersen_estimator( nb_capture_mark_release::Int, nb_recapture::Int, nb_marked_in_recaptured::Int ) return nb_capture_mark_release * nb_recapture / nb_marked_in_recaptured end ``` lincoln_petersen_estimator (generic function with 1 method) (8) write a function `sim_capture_mark_release_recapture` that takes in: * `nb_turtles::Int` $=n$ * `nb_capture_mark_release::Int` $=k$ * `nb_recapture::Int` $=k_r$ * `estimator::Function` either `chapman_estimator` or `lincoln_peterson_estimator` that you wrote above and returns $\hat{n}$, the estimate of the number of turtles in this simulation. use all of the functions you wrote above. ```julia nb_turtles = 200 nb_capture_mark_release = 50 nb_recapture = 42 estimator = lincoln_petersen_estimator n̂ = sim_capture_mark_release_recapture( nb_turtles, nb_capture_mark_release, nb_recapture, estimator ) ``` ```julia function sim_capture_mark_release_recapture( nb_turtles::Int, nb_capture_mark_release::Int, nb_recapture::Int, estimator::Function ) turtles = create_population(nb_turtles) capture_mark_release!(turtles, nb_capture_mark_release) new_turtles = recapture(turtles, nb_recapture) nb_marked_in_recaptured = count_marked(new_turtles) estimator(nb_capture_mark_release, nb_recapture, nb_marked_in_recaptured) end ``` sim_capture_mark_release_recapture (generic function with 1 method) (9) evaluate/compare the biasedness of the Chapman and Lincoln-Peterson estimators by plotting the distribution of $\hat{n}$ over many capture, mark, release, recapture simulations. 
Label which histogram corresponds to which estimator (in a legend if in the same histogram panel or in a title if in two subplots). plot as a vertical line the true number of turtles for comparison. print off the average and standard deviation of $\hat{n}$ over the simulations. ```julia nb_turtles = 200 nb_capture_mark_release = 50 nb_recapture = 42 sims_range = 1:5000 # number of simulations face_color = "#fcf8eb" estimators = Dict( "Chapman" => Dict( "function" => chapman_estimator, "̂n" => [NaN for _ in sims_range] ), "Lincoln-Petersen" => Dict( "function" => lincoln_petersen_estimator, "̂n" => [NaN for _ in sims_range] ) ) fig, ax = subplots(2, 1, figsize=(18, 8), sharey=true) bins = range(0, stop=2*nb_turtles, length=16) est_keys = collect(keys(estimators)) num_keys = length(est_keys) avgs = [NaN for _ in 1:num_keys] stds = [NaN for _ in 1:num_keys] for (plot, name) in enumerate(est_keys) for i in sims_range estimators[name]["̂n"][i] = sim_capture_mark_release_recapture( nb_turtles, nb_capture_mark_release, nb_recapture, estimators[name]["function"] ) end ax[plot].hist( estimators[name]["̂n"], alpha=0.7, color="C$plot", label=name, bins=bins, density=true ) ax[plot].axvline(nb_turtles, color="k") ax[plot].set_xlabel("Estimated # Turtles") ax[plot].set_ylabel("Simulation Density") ax[plot].set_title("$name Estimator", weight="bold") ax[plot].set_xlim([0, 2*nb_turtles]) ax[plot].set_facecolor(face_color) avgs[plot] = mean(estimators[name]["̂n"]) stds[plot] = std(estimators[name]["̂n"]) println(name) println("=" ^ length(name)) println("avg ̂n:\t", avgs[plot]) println("std ̂n:\t", stds[plot]) println() end fig.tight_layout() sns.despine() println() println("least biased:\t", est_keys[argmin(avgs .- nb_turtles)]) println("most efficient:\t", est_keys[argmin(stds)]) ``` (10) comment on which estimator, Chapman or Lincoln-Petersen, that appears to be most unbiased. Based on the above simulations, the Chapman estimator appears unbiased, as well as more efficient. (11) evaluate the consistency of the Lincoln-Petersen estimator by plotting the simulated distribution of $\hat{n}$ for several different values of `nb_recapture`. Use 10000 simulations for each value of `nb_recapture`. ```julia recapture_range = 50:25:150 fig, ax = subplots(1, 1, figsize=(18, 8)) bins = range(0, stop=2*nb_turtles, length=16) sims_range = 1:10000 color_norm = PyPlot.matplotlib.colors.Normalize(vmin=50, vmax=150) kr_to_color = PyPlot.cm.ScalarMappable( norm=color_norm, cmap=get_cmap("viridis") ).to_rgba for nb_recapture in recapture_range n = [sim_capture_mark_release_recapture( nb_turtles, nb_capture_mark_release, nb_recapture, chapman_estimator ) for _ in sims_range] ax.hist( n, label=nb_recapture, alpha=0.4, density=true, color=kr_to_color(nb_recapture) ) end ax.axvline(nb_turtles, color="k") ax.set_xlabel("Estimated # Turtles") ax.set_ylabel("Simulation Density") ax.set_title("Chapman Consistency Evaluation", weight="bold") ax.legend(title="# Recaptured", frameon=false) ax.set_xlim([0, 2*nb_turtles]) ax.set_facecolor(face_color) sns.despine() ```
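A short theoretical aside, not part of the original assignment: the upward bias of the Lincoln-Petersen estimator seen in the simulations has a classical explanation. The recapture count $m$ is hypergeometric with mean $\mathbb{E}[m] = k_r k / n$, and because $1/m$ is convex in $m$, Jensen's inequality gives (setting aside the possibility $m = 0$)

\begin{equation} \mathbb{E}\left[\hat{n}_{LP}\right] = k \, k_r \, \mathbb{E}\left[\frac{1}{m}\right] \geq \frac{k \, k_r}{\mathbb{E}[m]} = n, \end{equation}

with equality only when $m$ is non-random, so on average the Lincoln-Petersen estimate overshoots the true population size. The $+1$ terms in the Chapman estimator are designed to correct for this effect, which is consistent with the histograms above.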
---

*Source: `Homework/Homework 6/Homework 6.ipynb`, from the `cartemic/CHE-599-intro-to-data-science` repository (MIT license).*
# Simple Linear Regression with NumPy In school, students are taught to draw lines like the following. $$ y = 2 x + 1$$ They're taught to pick two values for $x$ and calculate the corresponding values for $y$ using the equation. Then they draw a set of axes, plot the points, and then draw a line extending through the two dots on their axes. ```python # Import matplotlib. import matplotlib.pyplot as plt # Draw some axes. plt.plot([0, 10], [0, 0], 'k-') plt.plot([0, 0], [0, 10], 'k-') # Plot the red, blue and green lines. plt.plot([1, 1], [-1, 3], 'b:') plt.plot([-1, 1], [3, 3], 'r:') ``` [<matplotlib.lines.Line2D at 0x764f6b0>] ```python # Import matplotlib. import matplotlib.pyplot as plt # Draw some axes. plt.plot([-1, 10], [0, 0], 'k-') plt.plot([0, 0], [-1, 10], 'k-') # Plot the red, blue and green lines. plt.plot([1, 1], [-1, 3], 'b:') plt.plot([-1, 1], [3, 3], 'r:') # Plot the two points (1,3) and (2,5). plt.plot([1, 2], [3, 5], 'ko') # Join them with an (extending) green lines. plt.plot([-1, 10], [-1, 21], 'g-') # Set some reasonable plot limits. plt.xlim([-1, 10]) plt.ylim([-1, 10]) # Show the plot. plt.show() ``` Simple linear regression is about the opposite problem - what if you have some points and are looking for the equation? It's easy when the points are perfectly on a line already, but usually real-world data has some noise. The data might still look roughly linear, but aren't exactly so. *** ## Example (contrived and simulated) #### Scenario Suppose you are trying to weigh your suitcase to avoid an airline's extra charges. You don't have a weighing scales, but you do have a spring and some gym-style weights of masses 7KG, 14KG and 21KG. You attach the spring to the wall hook, and mark where the bottom of it hangs. You then hang the 7KG weight on the end and mark where the bottom of the spring is. You repeat this with the 14KG weight and the 21KG weight. Finally, you place your case hanging on the spring, and the spring hangs down halfway between the 7KG mark and the 14KG mark. Is your case over the 10KG limit set by the airline? #### Hypothesis When you look at the marks on the wall, it seems that the 0KG, 7KG, 14KG and 21KG marks are evenly spaced. You wonder if that means your case weighs 10.5KG. That is, you wonder if there is a *linear* relationship between the distance the spring's hook is from its resting position, and the mass on the end of it. #### Experiment You decide to experiment. You buy some new weights - a 1KG, a 2KG, a 3Kg, all the way up to 20KG. You place them each in turn on the spring and measure the distance the spring moves from the resting position. You tabulate the data and plot them. #### Analysis Here we'll import the Python libraries we need for or investigations below. ```python # Make matplotlib show interactive plots in the notebook. %matplotlib inline ``` ```python # numpy efficiently deals with numerical multi-dimensional arrays. import numpy as np # matplotlib is a plotting library, and pyplot is its easy-to-use module. import matplotlib.pyplot as plt # This just sets the default plot size to be bigger. plt.rcParams['figure.figsize'] = (8, 6) ``` Ignore the next couple of lines where I fake up some data. I'll use the fact that I faked the data to explain some results later. Just pretend that w is an array containing the weight values and d are the corresponding distance measurements. 
```python w = np.arange(0.0, 21.0, 1.0) d = 5.0 * w + 10.0 + np.random.normal(0.0, 5.0, w.size) ``` ```python np.random.normal(0.0, 5.0, w.size) ``` array([ 7.76864124, 4.85760018, -0.68109271, 0.31446679, -3.56635431, -2.88521706, 0.41894594, -6.38556658, -0.82708525, 6.32974283, -6.88184383, -14.09388547, -2.83531469, -4.51188494, 2.3780392 , 3.63282701, -2.92077605, 4.65775214, -3.35318035, -2.84539091, -1.14529207]) ```python # Let's have a look at w. w ``` array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15., 16., 17., 18., 19., 20.]) ```python # Let's have a look at d. d ``` array([ 13.19872125, 17.74325632, 12.51455027, 18.84140841, 30.70536085, 37.39378596, 28.8508251 , 46.49083068, 49.42916434, 63.70874352, 61.62885712, 60.0283102 , 75.08469992, 81.9319978 , 79.69408215, 81.73271907, 84.66041043, 96.89873735, 106.22136569, 102.47469234, 113.93805149]) Let's have a look at the data from our experiment. ```python # Create the plot. plt.plot(w, d, 'k.') # Set some properties for the plot. plt.xlabel('Weight (KG)') plt.ylabel('Distance (CM)') # Show the plot. plt.show() ``` #### Model It looks like the data might indeed be linear. The points don't exactly fit on a straight line, but they are not far off it. We might put that down to some other factors, such as the air density, or errors, such as in our tape measure. Then we can go ahead and see what would be the best line to fit the data. #### Straight lines All straight lines can be expressed in the form $y = mx + c$. The number $m$ is the slope of the line. The slope is how much $y$ increases by when $x$ is increased by 1.0. The number $c$ is the y-intercept of the line. It's the value of $y$ when $x$ is 0. #### Fitting the model To fit a straight line to the data, we just must pick values for $m$ and $c$. These are called the parameters of the model, and we want to pick the best values possible for the parameters. That is, the best parameter values *given* the data observed. Below we show various lines plotted over the data, with different values for $m$ and $c$. ```python # Plot w versus d with black dots. plt.plot(w, d, 'k.', label="Data") # Overlay some lines on the plot. x = np.arange(0.0, 21.0, 1.0) plt.plot(x, 5.0 * x + 10.0, 'r-', label=r"$5x + 10$") plt.plot(x, 6.0 * x + 5.0, 'g-', label=r"$6x + 5$") plt.plot(x, 5.0 * x + 15.0, 'b-', label=r"$5x + 15$") # Add a legend. plt.legend() # Add axis labels. plt.xlabel('Weight (KG)') plt.ylabel('Distance (CM)') # Show the plot. plt.show() ``` #### Calculating the cost You can see that each of these lines roughly fits the data. Which one is best, and is there another line that is better than all three? Is there a "best" line? It depends how you define the word best. Luckily, everyone seems to have settled on what the best means. The best line is the one that minimises the following calculated value. $$ \sum_i (y_i - mx_i - c)^2 $$ Here $(x_i, y_i)$ is the $i^{th}$ point in the data set and $\sum_i$ means to sum over all points. The values of $m$ and $c$ are to be determined. We usually denote the above as $Cost(m, c)$. Where does the above calculation come from? It's easy to explain the part in the brackets $(y_i - mx_i - c)$. The corresponding value to $x_i$ in the dataset is $y_i$. These are the measured values. The value $m x_i + c$ is what the model says $y_i$ should have been. The difference between the value that was observed ($y_i$) and the value that the model gives ($m x_i + c$), is $y_i - mx_i - c$. Why square that value? 
Well note that the value could be positive or negative, and you sum over all of these values. If we allow the values to be positive or negative, then the positives could cancel the negatives. So, the natural thing to do is to take the absolute value $\mid y_i - m x_i - c \mid$. Well it turns out that absolute values are a pain to deal with, and instead it was decided to just square the quantity, as the square of a number is always positive. There are pros and cons to using the square instead of the absolute value, but the square is used. This is usually called *least squares* fitting.

```python
# Calculate the cost of the lines above for the data above.
cost = lambda m,c: np.sum([(d[i] - m * w[i] - c)**2 for i in range(w.size)])

print("Cost with m = %5.2f and c = %5.2f: %8.2f" % (5.0, 10.0, cost(5.0, 10.0)))
print("Cost with m = %5.2f and c = %5.2f: %8.2f" % (6.0, 5.0, cost(6.0, 5.0)))
print("Cost with m = %5.2f and c = %5.2f: %8.2f" % (5.0, 15.0, cost(5.0, 15.0)))
```

    Cost with m =  5.00 and c = 10.00:   525.39
    Cost with m =  6.00 and c =  5.00:  1551.12
    Cost with m =  5.00 and c = 15.00:  1018.69

#### Minimising the cost

We want to calculate values of $m$ and $c$ that give the lowest value for the cost value above. For our given data set we can plot the cost value/function. Recall that the cost is:

$$ Cost(m, c) = \sum_i (y_i - mx_i - c)^2 $$

This is a function of two variables, $m$ and $c$, so a plot of it is three dimensional. See the **Advanced** section below for the plot.

In the case of fitting a two-dimensional line to a few data points, we can easily calculate exactly the best values of $m$ and $c$. Some of the details are discussed in the **Advanced** section, as they involve calculus, but the resulting code is straight-forward. We first calculate the mean (average) values of our $x$ values and that of our $y$ values. Then we subtract the mean of $x$ from each of the $x$ values, and the mean of $y$ from each of the $y$ values. Then we take the *dot product* of the new $x$ values and the new $y$ values and divide it by the dot product of the new $x$ values with themselves. That gives us $m$, and we use $m$ to calculate $c$.

Remember that in our dataset $x$ is called $w$ (for weight) and $y$ is called $d$ (for distance). We calculate $m$ and $c$ below.

```python
# Calculate the best values for m and c.

# First calculate the means (a.k.a. averages) of w and d.
w_avg = np.mean(w)
d_avg = np.mean(d)

# Subtract means from w and d.
w_zero = w - w_avg
d_zero = d - d_avg

# The best m is found by the following calculation.
m = np.sum(w_zero * d_zero) / np.sum(w_zero * w_zero)

# Use m from above to calculate the best c.
c = d_avg - m * w_avg

print("m is %8.6f and c is %6.6f." % (m, c))
```

    m is 5.154265 and c is 8.608328.

Note that numpy has a function that will perform this calculation for us, called polyfit. It can be used to fit lines in many dimensions.

```python
np.polyfit(w, d, 1)
```

    array([5.15426517, 8.60832781])

```python
# Fit with polyfit and plot the resulting line over the data
# (polyfit returns the slope first, then the intercept).
z = np.polyfit(w, d, 1)
plt.plot(w, d, 'k.', label='Original data')
plt.plot(w, z[0] * w + z[1], 'b-', label='Best fit line')

# Add axis labels and a legend.
plt.xlabel('Weight (KG)')
plt.ylabel('Distance (CM)')
plt.legend()

# Show the plot.
plt.show()
```

#### Best fit line

So, the best values for $m$ and $c$ given our data and using least squares fitting are about $5.15$ for $m$ and about $8.61$ for $c$, matching the values calculated above. We plot this line on top of the data below.

```python
# Plot the best fit line.
plt.plot(w, d, 'k.', label='Original data')
plt.plot(w, m * w + c, 'b-', label='Best fit line')

# Add axis labels and a legend.
plt.xlabel('Weight (KG)')
plt.ylabel('Distance (CM)')
plt.legend()

# Show the plot.
plt.show()
```

Note that the $Cost$ of the best $m$ and best $c$ is not zero in this case.

```python
print("Cost with m = %5.2f and c = %5.2f: %8.2f" % (m, c, cost(m, c)))
```

    Cost with m =  5.15 and c =  8.61:   506.59

### Summary

In this notebook we:

1. Investigated the data.
2. Picked a model.
3. Picked a cost function.
4. Estimated the model parameter values that minimised our cost function.

### Advanced

In the following sections we cover some of the more advanced concepts involved in fitting the line.

#### Simulating data

Earlier in the notebook we glossed over something important: we didn't actually do the weighing and measuring - we faked the data. A better term for this is *simulation*, which is an important tool in research, especially when testing methods such as simple linear regression.

We ran the following two commands to do this:

```python
w = np.arange(0.0, 21.0, 1.0)
d = 5.0 * w + 10.0 + np.random.normal(0.0, 5.0, w.size)
```

The first command creates a numpy array containing all values between 0.0 and 21.0 (including 0.0 but not including 21.0) in steps of 1.0.

```python
np.arange(0.0, 21.0, 1.0)
```

    array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15., 16., 17., 18., 19., 20.])

The second command is more complex. First it takes the values in the `w` array, multiplies each by 5.0 and then adds 10.0.

```python
5.0 * w + 10.0
```

    array([ 10., 15., 20., 25., 30., 35., 40., 45., 50., 55., 60., 65., 70., 75., 80., 85., 90., 95., 100., 105., 110.])

It then adds an array of the same length containing random values. The values are taken from what is called the normal distribution with mean 0.0 and standard deviation 5.0.

```python
np.random.normal(0.0, 5.0, w.size)
```

    array([-1.46317757,  5.30481846, -3.49158247, -2.52540294, -1.37674985,
           -2.78091921, -7.5305471 , -8.2341408 ,  1.44809467, -3.12508821,
           -0.06114925,  1.96890369, -2.53676148,  4.33296745,  2.43633524,
           -4.09099253,  1.74922495,  2.59597544, -0.68926184, -4.92585686,
           -2.78279611])

The normal distribution follows a bell shaped curve. The curve is centred on the mean (0.0 in this case) and its general width is determined by the standard deviation (5.0 in this case).

```python
# Plot the normal distribution.
# (The normalisation uses a square root so that the density integrates to 1.)
normpdf = lambda mu, s, x: (1.0 / np.sqrt(2.0 * np.pi * s**2)) * np.exp(-((x - mu)**2)/(2 * s**2))

x = np.linspace(-20.0, 20.0, 100)
y = normpdf(0.0, 5.0, x)
plt.plot(x, y)
plt.show()
```

The idea here is to add a little bit of randomness to the measurements of the distance. The random values are centred around 0.0, with a greater than 99% chance they're within the range -15.0 to 15.0. The normal distribution is used because of the [Central Limit Theorem](https://en.wikipedia.org/wiki/Central_limit_theorem) which basically states that when a bunch of random effects happen together the outcome looks roughly like the normal distribution. (Don't quote me on that!)

#### Plotting the cost function

We can plot the cost function for a given set of data points. Recall that the cost function involves two variables: $m$ and $c$, and that it looks like this:

$$ Cost(m,c) = \sum_i (y_i - mx_i - c)^2 $$

To plot a function of two variables we need a 3D plot.
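A brief aside that is not in the original notebook: it is worth seeing why this surface has a single lowest point. Expanding the sum shows that the cost is a quadratic function of $m$ and $c$,

$$ Cost(m, c) = m^2 \sum_i x_i^2 + n c^2 + 2mc \sum_i x_i - 2m \sum_i x_i y_i - 2c \sum_i y_i + \sum_i y_i^2, $$

where $n$ is the number of data points. This is a convex, bowl-shaped quadratic in $(m, c)$, so as long as the $x_i$ are not all identical it has exactly one minimum, which is the point the derivation at the end of this notebook solves for.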
It can be difficult to get the viewing angle right in 3D plots, but below you can just about make out that there is a low point on the graph around the $(m, c) = (\approx 5.0, \approx 10.0)$ point. ```python # This code is a little bit involved - don't worry about it. # Just look at the plot below. from mpl_toolkits.mplot3d import Axes3D # Ask pyplot a 3D set of axes. ax = plt.figure().gca(projection='3d') # Make data. mvals = np.linspace(4.5, 5.5, 100) cvals = np.linspace(0.0, 20.0, 100) # Fill the grid. mvals, cvals = np.meshgrid(mvals, cvals) # Flatten the meshes for convenience. mflat = np.ravel(mvals) cflat = np.ravel(cvals) # Calculate the cost of each point on the grid. C = [np.sum([(d[i] - m * w[i] - c)**2 for i in range(w.size)]) for m, c in zip(mflat, cflat)] C = np.array(C).reshape(mvals.shape) # Plot the surface. surf = ax.plot_surface(mvals, cvals, C) # Set the axis labels. ax.set_xlabel('$m$', fontsize=16) ax.set_ylabel('$c$', fontsize=16) ax.set_zlabel('$Cost$', fontsize=16) # Show the plot. plt.show() ``` #### Coefficient of determination Earlier we used a cost function to determine the best line to fit the data. Usually the data do not perfectly fit on the best fit line, and so the cost is greater than 0. A quantity closely related to the cost is the *coefficient of determination*, also known as the *R-squared* value. The purpose of the R-squared value is to measure how much of the variance in $y$ is determined by $x$. For instance, in our example the main thing that affects the distance the spring is hanging down is the weight on the end. It's not the only thing that affects it though. The room temperature and density of the air at the time of measurment probably affect it a little. The age of the spring, and how many times it has been stretched previously probably also have a small affect. There are probably lots of unknown factors affecting the measurment. The R-squared value estimates how much of the changes in the $y$ value is due to the changes in the $x$ value compared to all of the other factors affecting the $y$ value. It is calculated as follows: $$ R^2 = 1 - \frac{\sum_i (y_i - m x_i - c)^2}{\sum_i (y_i - \bar{y})^2} $$ Note that sometimes the [*Pearson correlation coefficient*](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient) is used instead of the R-squared value. You can just square the Pearson coefficient to get the R-squred value. ```python # Calculate the R-squared value for our data set. rsq = 1.0 - (np.sum((d - m * w - c)**2)/np.sum((d - d_avg)**2)) print("The R-squared value is %6.4f" % rsq) ``` The R-squared value is 0.9758 ```python # The same value using numpy. np.corrcoef(w, d)[0][1]**2 ``` 0.9758337791380363 #### The minimisation calculations Earlier we used the following calculation to calculate $m$ and $c$ for the line of best fit. The code was: ```python w_zero = w - np.mean(w) d_zero = d - np.mean(d) m = np.sum(w_zero * d_zero) / np.sum(w_zero * w_zero) c = np.mean(d) - m * np.mean(w) ``` In mathematical notation we write this as: $$ m = \frac{\sum_i (x_i - \bar{x}) (y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2} \qquad \textrm{and} \qquad c = \bar{y} - m \bar{x} $$ where $\bar{x}$ is the mean of $x$ and $\bar{y}$ that of $y$. Where did these equations come from? They were derived using calculus. We'll give a brief overview of it here, but feel free to gloss over this section if it's not for you. If you can understand the first part, where we calculate the partial derivatives, then great! 
The calculations look complex, but if you know basic differentiation, including the chain rule, you can easily derive them. First, we differentiate the cost function with respect to $m$ while treating $c$ as a constant, called a partial derivative. We write this as $\frac{\partial m}{ \partial Cost}$, using $\delta$ as opposed to $d$ to signify that we are treating the other variable as a constant. We then do the same with respect to $c$ while treating $m$ as a constant. We set both equal to zero, and then solve them as two simultaneous equations in two variables. ###### Calculate the partial derivatives $$ \begin{align} Cost(m, c) &= \sum_i (y_i - mx_i - c)^2 \\[1cm] \frac{\partial Cost}{\partial m} &= \sum 2(y_i - m x_i -c)(-x_i) \\ &= -2 \sum x_i (y_i - m x_i -c) \\[0.5cm] \frac{\partial Cost}{\partial c} & = \sum 2(y_i - m x_i -c)(-1) \\ & = -2 \sum (y_i - m x_i -c) \\ \end{align} $$ ###### Set to zero $$ \begin{align} & \frac{\partial Cost}{\partial m} = 0 \\[0.2cm] & \Rightarrow -2 \sum x_i (y_i - m x_i -c) = 0 \\ & \Rightarrow \sum (x_i y_i - m x_i x_i - x_i c) = 0 \\ & \Rightarrow \sum x_i y_i - \sum_i m x_i x_i - \sum x_i c = 0 \\ & \Rightarrow m \sum x_i x_i = \sum x_i y_i - c \sum x_i \\[0.2cm] & \Rightarrow m = \frac{\sum x_i y_i - c \sum x_i}{\sum x_i x_i} \\[0.5cm] & \frac{\partial Cost}{\partial c} = 0 \\[0.2cm] & \Rightarrow -2 \sum (y_i - m x_i - c) = 0 \\ & \Rightarrow \sum y_i - \sum_i m x_i - \sum c = 0 \\ & \Rightarrow \sum y_i - m \sum_i x_i = c \sum 1 \\ & \Rightarrow c = \frac{\sum y_i - m \sum x_i}{\sum 1} \\ & \Rightarrow c = \frac{\sum y_i}{\sum 1} - m \frac{\sum x_i}{\sum 1} \\[0.2cm] & \Rightarrow c = \bar{y} - m \bar{x} \\ \end{align} $$ ###### Solve the simultaneous equations Here we let $n$ be the length of $x$, which is also the length of $y$. $$ \begin{align} & m = \frac{\sum_i x_i y_i - c \sum_i x_i}{\sum_i x_i x_i} \\[0.2cm] & \Rightarrow m = \frac{\sum x_i y_i - (\bar{y} - m \bar{x}) \sum x_i}{\sum x_i x_i} \\ & \Rightarrow m \sum x_i x_i = \sum x_i y_i - \bar{y} \sum x_i + m \bar{x} \sum x_i \\ & \Rightarrow m \sum x_i x_i - m \bar{x} \sum x_i = \sum x_i y_i - \bar{y} \sum x_i \\[0.3cm] & \Rightarrow m = \frac{\sum x_i y_i - \bar{y} \sum x_i}{\sum x_i x_i - \bar{x} \sum x_i} \\[0.2cm] & \Rightarrow m = \frac{\sum (x_i y_i) - n \bar{y} \bar{x}}{\sum (x_i x_i) - n \bar{x} \bar{x}} \\ & \Rightarrow m = \frac{\sum (x_i y_i) - n \bar{y} \bar{x} - n \bar{y} \bar{x} + n \bar{y} \bar{x}}{\sum (x_i x_i) - n \bar{x} \bar{x} - n \bar{x} \bar{x} + n \bar{x} \bar{x}} \\ & \Rightarrow m = \frac{\sum (x_i y_i) - \sum y_i \bar{x} - \sum \bar{y} x_i + n \bar{y} \bar{x}}{\sum (x_i x_i) - \sum x_i \bar{x} - \sum \bar{x} x_i + n \bar{x} \bar{x}} \\ & \Rightarrow m = \frac{\sum_i (x_i - \bar{x}) (y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2} \\ \end{align} $$ #### End ```python ```
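Appended as a final sanity check, not part of the original notebook: the closed-form result of the derivation above can be verified numerically against `np.polyfit` on freshly simulated data. The variable names below mirror the ones used earlier; the exact numbers will differ from run to run because of the random noise.

```python
import numpy as np

# Simulate data the same way as earlier in the notebook.
w = np.arange(0.0, 21.0, 1.0)
d = 5.0 * w + 10.0 + np.random.normal(0.0, 5.0, w.size)

# Closed-form least-squares estimates from the derivation above.
m_hat = np.sum((w - w.mean()) * (d - d.mean())) / np.sum((w - w.mean())**2)
c_hat = d.mean() - m_hat * w.mean()

# numpy's degree-1 polynomial fit should give the same slope and intercept.
m_np, c_np = np.polyfit(w, d, 1)

print("derived:", m_hat, c_hat)
print("polyfit:", m_np, c_np)
assert np.isclose(m_hat, m_np) and np.isclose(c_hat, c_np)
```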
---

*Source: `simple-linear-regression.ipynb`, from the `mikequaid/numpy` repository (Apache-2.0 license).*
# ADM Quantities in terms of BSSN Quantities ## Author: Zach Etienne ### Formatting improvements courtesy Brandon Clark [comment]: <> (Abstract: TODO) **Module Status:** <font color='orange'><b> Self-Validated </b></font> **Validation Notes:** This tutorial module has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](#code_validation). **Additional validation tests may have been performed, but are as yet, undocumented. (TODO)** ### NRPy+ Source Code for this module: [ADM_in_terms_of_BSSN.py](../edit/BSSN/ADM_in_terms_of_BSSN.py) ## Introduction: This tutorial module constructs all quantities in the [ADM formalism](https://en.wikipedia.org/wiki/ADM_formalism) (see also Chapter 2 in Baumgarte & Shapiro's book *Numerical Relativity*) in terms of quantities in our adopted (covariant, tensor-rescaled) BSSN formalism. That is to say, we will write the ADM quantities $\left\{\gamma_{ij},K_{ij},\alpha,\beta^i\right\}$ and their derivatives in terms of the BSSN quantities $\left\{\bar{\gamma}_{ij},\text{cf},\bar{A}_{ij},\text{tr}K,\alpha,\beta^i\right\}$ and their derivatives. ### A Note on Notation: As is standard in NRPy+, * Greek indices refer to four-dimensional quantities where the zeroth component indicates temporal (time) component. * Latin indices refer to three-dimensional quantities. This is somewhat counterintuitive since Python always indexes its lists starting from 0. As a result, the zeroth component of three-dimensional quantities will necessarily indicate the first *spatial* direction. As a corollary, any expressions in NRPy+ involving mixed Greek and Latin indices will need to offset one set of indices by one; a Latin index in a four-vector will be incremented and a Greek index in a three-vector will be decremented (however, the latter case does not occur in this tutorial module). <a id='toc'></a> # Table of Contents $$\label{toc}$$ This module is organized as follows 1. [Step 1](#initializenrpy): Initialize core Python/NRPy+ modules 1. [Step 2](#threemetric): The ADM three-metric $\gamma_{ij}$ and its derivatives in terms of rescaled BSSN quantities 1. [Step 2.a](#derivatives_e4phi): Derivatives of $e^{4\phi}$ 1. [Step 2.b](#derivatives_adm_3metric): Derivatives of the ADM three-metric: $\gamma_{ij,k}$ and $\gamma_{ij,kl}$ 1. [Step 2.c](#christoffel): Christoffel symbols $\Gamma^i_{jk}$ associated with the ADM 3-metric $\gamma_{ij}$ 1. [Step 3](#extrinsiccurvature): The ADM extrinsic curvature $K_{ij}$ and its derivatives in terms of rescaled BSSN quantities 1. [Step 4](#code_validation): Code Validation against `BSSN.ADM_in_terms_of_BSSN` NRPy+ module 1. [Step 5](#latex_pdf_output): Output this module to $\LaTeX$-formatted PDF file <a id='initializenrpy'></a> # Step 1: Initialize core Python/NRPy+ modules \[Back to [top](#toc)\] $$\label{initializenrpy}$$ Let's start by importing all the needed modules from Python/NRPy+: ```python # Step 1.a: Import all needed modules from NRPy+ import sympy as sp import NRPy_param_funcs as par import indexedexp as ixp import grid as gri import finite_difference as fin import reference_metric as rfm # Step 1.b: Set the coordinate system for the numerical grid par.set_parval_from_str("reference_metric::CoordSystem","Spherical") # Step 1.c: Given the chosen coordinate system, set up # corresponding reference metric and needed # reference metric quantities # The following function call sets up the reference metric # and related quantities, including rescaling matrices ReDD, # ReU, and hatted quantities. 
rfm.reference_metric() # Step 1.d: Set spatial dimension (must be 3 for BSSN, as BSSN is # a 3+1-dimensional decomposition of the general # relativistic field equations) DIM = 3 # Step 1.e: Import all basic (unrescaled) BSSN scalars & tensors import BSSN.BSSN_quantities as Bq Bq.BSSN_basic_tensors() gammabarDD = Bq.gammabarDD cf = Bq.cf AbarDD = Bq.AbarDD trK = Bq.trK Bq.gammabar__inverse_and_derivs() gammabarDD_dD = Bq.gammabarDD_dD gammabarDD_dDD = Bq.gammabarDD_dDD Bq.AbarUU_AbarUD_trAbar_AbarDD_dD() AbarDD_dD = Bq.AbarDD_dD ``` <a id='threemetric'></a> # Step 2: The ADM three-metric $\gamma_{ij}$ and its derivatives in terms of rescaled BSSN quantities. \[Back to [top](#toc)\] $$\label{threemetric}$$ The ADM three-metric is written in terms of the covariant BSSN three-metric tensor as (Eqs. 2 and 3 of [Ruchlin *et al.*](https://arxiv.org/pdf/1712.07658.pdf)): $$ \gamma_{ij} = \left(\frac{\gamma}{\bar{\gamma}}\right)^{1/3} \bar{\gamma}_{i j}, $$ where $\gamma=\det{\gamma_{ij}}$ and $\bar{\gamma}=\det{\bar{\gamma}_{ij}}$. The "standard" BSSN conformal factor $\phi$ is given by (Eq. 3 of [Ruchlin *et al.*](https://arxiv.org/pdf/1712.07658.pdf)): \begin{align} \phi &= \frac{1}{12} \log\left(\frac{\gamma}{\bar{\gamma}}\right) \\ \implies e^{\phi} &= \left(\frac{\gamma}{\bar{\gamma}}\right)^{1/12} \\ \implies e^{4 \phi} &= \left(\frac{\gamma}{\bar{\gamma}}\right)^{1/3} \end{align} Thus the ADM three-metric may be written in terms of the BSSN three-metric and conformal factor $\phi$ as $$ \gamma_{ij} = e^{4 \phi} \bar{\gamma}_{i j}. $$ NRPy+'s implementation of BSSN allows for $\phi$ and two other alternative conformal factors to be defined: \begin{align} \chi &= e^{-4\phi} \\ W &= e^{-2\phi}, \end{align} Thus if `"BSSN_quantities::EvolvedConformalFactor_cf"` is set to `"chi"`, then \begin{align} \gamma_{ij} &= \frac{1}{\chi} \bar{\gamma}_{i j} \\ &= \frac{1}{\text{cf}} \bar{\gamma}_{i j}, \end{align} and if `"BSSN_quantities::EvolvedConformalFactor_cf"` is set to `"W"`, then \begin{align} \gamma_{ij} &= \frac{1}{W^2} \bar{\gamma}_{i j} \\ &= \frac{1}{\text{cf}^2} \bar{\gamma}_{i j}. \end{align} ```python # Step 2: The ADM three-metric gammaDD and its # derivatives in terms of BSSN quantities. gammaDD = ixp.zerorank2() exp4phi = sp.sympify(0) if par.parval_from_str("EvolvedConformalFactor_cf") == "phi": exp4phi = sp.exp(4*cf) elif par.parval_from_str("EvolvedConformalFactor_cf") == "chi": exp4phi = (1 / cf) elif par.parval_from_str("EvolvedConformalFactor_cf") == "W": exp4phi = (1 / cf**2) else: print("Error EvolvedConformalFactor_cf type = \""+par.parval_from_str("EvolvedConformalFactor_cf")+"\" unknown.") exit(1) for i in range(DIM): for j in range(DIM): gammaDD[i][j] = exp4phi*gammabarDD[i][j] ``` <a id='derivatives_e4phi'></a> ## Step 2.a: Derivatives of $e^{4\phi}$ \[Back to [top](#toc)\] $$\label{derivatives_e4phi}$$ To compute derivatives of $\gamma_{ij}$ in terms of BSSN variables and their derivatives, we will first need derivatives of $e^{4\phi}$ in terms of the conformal BSSN variable `cf`. 
\begin{align} \frac{\partial}{\partial x^i} e^{4\phi} &= 4 e^{4\phi} \phi_{,i} \\ \implies \frac{\partial}{\partial x^j} \frac{\partial}{\partial x^i} e^{4\phi} &= \frac{\partial}{\partial x^j} \left(4 e^{4\phi} \phi_{,i}\right) \\ &= 16 e^{4\phi} \phi_{,i} \phi_{,j} + 4 e^{4\phi} \phi_{,ij} \end{align} Thus computing first and second derivatives of $e^{4\phi}$ in terms of the BSSN quantity `cf` requires only that we evaluate $\phi_{,i}$ and $\phi_{,ij}$ in terms of $e^{4\phi}$ (computed above in terms of `cf`) and derivatives of `cf`: If `"BSSN_quantities::EvolvedConformalFactor_cf"` is set to `"phi"`, then \begin{align} \phi_{,i} &= \text{cf}_{,i} \\ \phi_{,ij} &= \text{cf}_{,ij} \end{align} If `"BSSN_quantities::EvolvedConformalFactor_cf"` is set to `"chi"`, then \begin{align} \text{cf} = e^{-4\phi} \implies \text{cf}_{,i} &= -4 e^{-4\phi} \phi_{,i} \\ \implies \phi_{,i} &= -\frac{e^{4\phi}}{4} \text{cf}_{,i} \\ \implies \phi_{,ij} &= -e^{4\phi} \phi_{,j} \text{cf}_{,i} -\frac{e^{4\phi}}{4} \text{cf}_{,ij}\\ &= -e^{4\phi} \left(-\frac{e^{4\phi}}{4} \text{cf}_{,j}\right) \text{cf}_{,i} -\frac{e^{4\phi}}{4} \text{cf}_{,ij} \\ &= \frac{1}{4} \left[\left(e^{4\phi}\right)^2 \text{cf}_{,i} \text{cf}_{,j} -e^{4\phi} \text{cf}_{,ij}\right] \\ \end{align} If `"BSSN_quantities::EvolvedConformalFactor_cf"` is set to `"W"`, then \begin{align} \text{cf} = e^{-2\phi} \implies \text{cf}_{,i} &= -2 e^{-2\phi} \phi_{,i} \\ \implies \phi_{,i} &= -\frac{e^{2\phi}}{2} \text{cf}_{,i} \\ \implies \phi_{,ij} &= -e^{2\phi} \phi_{,j} \text{cf}_{,i} -\frac{e^{2\phi}}{2} \text{cf}_{,ij}\\ &= -e^{2\phi} \left(-\frac{e^{2\phi}}{2} \text{cf}_{,j}\right) \text{cf}_{,i} -\frac{e^{2\phi}}{2} \text{cf}_{,ij} \\ &= \frac{1}{2} \left[e^{4\phi} \text{cf}_{,i} \text{cf}_{,j} -e^{2\phi} \text{cf}_{,ij}\right] \\ \end{align} ```python # Step 2.a: Derivatives of $e^{4\phi}$ phidD = ixp.zerorank1() phidDD = ixp.zerorank2() cf_dD = ixp.declarerank1("cf_dD") cf_dDD = ixp.declarerank2("cf_dDD","sym01") if par.parval_from_str("EvolvedConformalFactor_cf") == "phi": for i in range(DIM): phidD[i] = cf_dD[i] for j in range(DIM): phidDD[i][j] = cf_dDD[i][j] elif par.parval_from_str("EvolvedConformalFactor_cf") == "chi": for i in range(DIM): phidD[i] = -sp.Rational(1,4)*exp4phi*cf_dD[i] for j in range(DIM): phidDD[i][j] = sp.Rational(1,4)*( exp4phi**2*cf_dD[i]*cf_dD[j] - exp4phi*cf_dDD[i][j] ) elif par.parval_from_str("EvolvedConformalFactor_cf") == "W": exp2phi = (1 / cf) for i in range(DIM): phidD[i] = -sp.Rational(1,2)*exp2phi*cf_dD[i] for j in range(DIM): phidDD[i][j] = sp.Rational(1,2)*( exp4phi*cf_dD[i]*cf_dD[j] - exp2phi*cf_dDD[i][j] ) else: print("Error EvolvedConformalFactor_cf type = \""+par.parval_from_str("EvolvedConformalFactor_cf")+"\" unknown.") exit(1) exp4phidD = ixp.zerorank1() exp4phidDD = ixp.zerorank2() for i in range(DIM): exp4phidD[i] = 4*exp4phi*phidD[i] for j in range(DIM): exp4phidDD[i][j] = 16*exp4phi*phidD[i]*phidD[j] + 4*exp4phi*phidDD[i][j] ``` <a id='derivatives_adm_3metric'></a> ## Step 2.b: Derivatives of the ADM three-metric: $\gamma_{ij,k}$ and $\gamma_{ij,kl}$ \[Back to [top](#toc)\] $$\label{derivatives_adm_3metric}$$ Recall the relation between the ADM three-metric $\gamma_{ij}$, the BSSN conformal three-metric $\bar{\gamma}_{i j}$, and the BSSN conformal factor $\phi$: $$ \gamma_{ij} = e^{4 \phi} \bar{\gamma}_{i j}. 
$$ Now that we have constructed derivatives of $e^{4 \phi}$ in terms of the chosen BSSN conformal factor `cf`, and the [BSSN.BSSN_quantities module](../edit/BSSN/BSSN_quantities.py) ([**tutorial**](Tutorial-BSSN_quantities.ipynb)) defines derivatives of $\bar{\gamma}_{ij}$ in terms of rescaled BSSN variables, derivatives of $\gamma_{ij}$ can be immediately constructed using the product rule: \begin{align} \gamma_{ij,k} &= \left(e^{4 \phi}\right)_{,k} \bar{\gamma}_{i j} + e^{4 \phi} \bar{\gamma}_{ij,k} \\ \gamma_{ij,kl} &= \left(e^{4 \phi}\right)_{,kl} \bar{\gamma}_{i j} + \left(e^{4 \phi}\right)_{,k} \bar{\gamma}_{i j,l} + \left(e^{4 \phi}\right)_{,l} \bar{\gamma}_{ij,k} + e^{4 \phi} \bar{\gamma}_{ij,kl} \end{align} ```python # Step 2.b: Derivatives of gammaDD, the ADM three-metric gammaDDdD = ixp.zerorank3() gammaDDdDD = ixp.zerorank4() for i in range(DIM): for j in range(DIM): for k in range(DIM): gammaDDdD[i][j][k] = exp4phidD[k]*gammabarDD[i][j] + exp4phi*gammabarDD_dD[i][j][k] for l in range(DIM): gammaDDdDD[i][j][k][l] = exp4phidDD[k][l]*gammabarDD[i][j] + \ exp4phidD[k]*gammabarDD_dD[i][j][l] + \ exp4phidD[l]*gammabarDD_dD[i][j][k] + \ exp4phi*gammabarDD_dDD[i][j][k][l] ``` <a id='christoffel'></a> ## Step 2.c: Christoffel symbols $\Gamma^i_{jk}$ associated with the ADM 3-metric $\gamma_{ij}$ \[Back to [top](#toc)\] $$\label{christoffel}$$ The 3-metric analog to the definition of Christoffel symbol (Eq. 1.18) in Baumgarte & Shapiro's *Numerical Relativity* is given by $$ \Gamma^i_{jk} = \frac{1}{2} \gamma^{il} \left(\gamma_{lj,k} + \gamma_{lk,j} - \gamma_{jk,l} \right), $$ which we implement here: ```python # Step 2.c: 3-Christoffel symbols associated with ADM 3-metric gammaDD # Step 2.c.i: First compute the inverse 3-metric gammaUU: gammaUU, detgamma = ixp.symm_matrix_inverter3x3(gammaDD) GammaUDD = ixp.zerorank3() for i in range(DIM): for j in range(DIM): for k in range(DIM): for l in range(DIM): GammaUDD[i][j][k] += sp.Rational(1,2)*gammaUU[i][l]* \ (gammaDDdD[l][j][k] + gammaDDdD[l][k][j] - gammaDDdD[j][k][l]) ``` <a id='extrinsiccurvature'></a> # Step 3: The ADM extrinsic curvature $K_{ij}$ and its derivatives in terms of rescaled BSSN quantities. \[Back to [top](#toc)\] $$\label{extrinsiccurvature}$$ The ADM extrinsic curvature may be written in terms of the BSSN trace-free extrinsic curvature tensor $\bar{A}_{ij}$ and the trace of the ADM extrinsic curvature $K$: \begin{align} K_{ij} &= \left(\frac{\gamma}{\bar{\gamma}}\right)^{1/3} \bar{A}_{ij} + \frac{1}{3} \gamma_{ij} K \\ &= e^{4\phi} \bar{A}_{ij} + \frac{1}{3} \gamma_{ij} K \\ \end{align} We only compute first spatial derivatives of $K_{ij}$, as higher-derivatives are generally not needed: $$ K_{ij,k} = \left(e^{4\phi}\right)_{,k} \bar{A}_{ij} + e^{4\phi} \bar{A}_{ij,k} + \frac{1}{3} \left(\gamma_{ij,k} K + \gamma_{ij} K_{,k}\right) $$ which is expressed in terms of quantities already defined. 
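Before assembling $K_{ij}$ in the next cell, a quick consistency check of the Step 2 objects can catch index-ordering mistakes early. The cell below is an optional sketch added for this write-up (it is not part of the original NRPy+ module); it only reuses `sp`, `ixp`, and quantities already defined above, and the names ending in `_check` are introduced here purely for illustration.

```python
# Optional sanity checks on the Step 2 quantities (sketch; not in the NRPy+ module).
# 1) Since gamma_{ij} = e^{4 phi} gammabar_{ij}, its determinant must scale as (e^{4 phi})^3.
gammabarUU_check, detgammabar_check = ixp.symm_matrix_inverter3x3(gammabarDD)
assert sp.simplify(detgamma - exp4phi**3 * detgammabar_check) == 0

# 2) Christoffel symbols built from the symmetric 3-metric must be symmetric
#    in their lower two indices: Gamma^i_{jk} = Gamma^i_{kj}.
for i in range(DIM):
    for j in range(DIM):
        for k in range(DIM):
            assert sp.simplify(GammaUDD[i][j][k] - GammaUDD[i][k][j]) == 0
```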
```python # Step 3: Define ADM extrinsic curvature KDD and # its first spatial derivatives KDDdD # in terms of BSSN quantities KDD = ixp.zerorank2() for i in range(DIM): for j in range(DIM): KDD[i][j] = exp4phi*AbarDD[i][j] + sp.Rational(1,3)*gammaDD[i][j]*trK KDDdD = ixp.zerorank3() trK_dD = ixp.declarerank1("trK_dD") for i in range(DIM): for j in range(DIM): for k in range(DIM): KDDdD[i][j][k] = exp4phidD[k]*AbarDD[i][j] + exp4phi*AbarDD_dD[i][j][k] + \ sp.Rational(1,3)*(gammaDDdD[i][j][k]*trK + gammaDD[i][j]*trK_dD[k]) ``` <a id='code_validation'></a> # Step 4: Code Validation against `BSSN.ADM_in_terms_of_BSSN` NRPy+ module \[Back to [top](#toc)\] $$\label{code_validation}$$ Here, as a code validation check, we verify agreement in the SymPy expressions between 1. this tutorial and 2. the NRPy+ [BSSN.ADM_in_terms_of_BSSN](../edit/BSSN/ADM_in_terms_of_BSSN.py) module. ```python all_passed=True def comp_func(expr1,expr2,basename,prefixname2="Bq."): global all_passed if str(expr1-expr2)!="0": print(basename+" - "+prefixname2+basename+" = "+ str(expr1-expr2)) all_passed=False def gfnm(basename,idx1,idx2=None,idx3=None,idx4=None): if idx2==None: return basename+"["+str(idx1)+"]" if idx3==None: return basename+"["+str(idx1)+"]["+str(idx2)+"]" if idx4==None: return basename+"["+str(idx1)+"]["+str(idx2)+"]["+str(idx3)+"]" return basename+"["+str(idx1)+"]["+str(idx2)+"]["+str(idx3)+"]["+str(idx4)+"]" expr_list = [] exprcheck_list = [] namecheck_list = [] import BSSN.ADM_in_terms_of_BSSN as AB AB.ADM_in_terms_of_BSSN() namecheck_list.extend(["detgamma"]) exprcheck_list.extend([AB.detgamma]) expr_list.extend([detgamma]) for i in range(DIM): for j in range(DIM): namecheck_list.extend([gfnm("gammaDD",i,j),gfnm("gammaUU",i,j),gfnm("KDD",i,j)]) exprcheck_list.extend([AB.gammaDD[i][j],AB.gammaUU[i][j],AB.KDD[i][j]]) expr_list.extend([gammaDD[i][j],gammaUU[i][j],KDD[i][j]]) for k in range(DIM): namecheck_list.extend([gfnm("gammaDDdD",i,j,k),gfnm("GammaUDD",i,j,k),gfnm("KDDdD",i,j,k)]) exprcheck_list.extend([AB.gammaDDdD[i][j][k],AB.GammaUDD[i][j][k],AB.KDDdD[i][j][k]]) expr_list.extend([gammaDDdD[i][j][k],GammaUDD[i][j][k],KDDdD[i][j][k]]) for l in range(DIM): namecheck_list.extend([gfnm("gammaDDdDD",i,j,k,l)]) exprcheck_list.extend([AB.gammaDDdDD[i][j][k][l]]) expr_list.extend([gammaDDdDD[i][j][k][l]]) for i in range(len(expr_list)): comp_func(expr_list[i],exprcheck_list[i],namecheck_list[i]) if all_passed: print("ALL TESTS PASSED!") ``` ALL TESTS PASSED! <a id='latex_pdf_output'></a> # Step 5: Output this module to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\] $$\label{latex_pdf_output}$$ The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename [Tutorial-ADM_in_terms_of_BSSN.pdf](Tutorial-ADM_in_terms_of_BSSN.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.) 
```python !jupyter nbconvert --to latex --template latex_nrpy_style.tplx Tutorial-ADM_in_terms_of_BSSN.ipynb !pdflatex -interaction=batchmode Tutorial-ADM_in_terms_of_BSSN.tex !pdflatex -interaction=batchmode Tutorial-ADM_in_terms_of_BSSN.tex !pdflatex -interaction=batchmode Tutorial-ADM_in_terms_of_BSSN.tex !rm -f Tut*.out Tut*.aux Tut*.log ``` [NbConvertApp] Converting notebook Tutorial-ADM_in_terms_of_BSSN.ipynb to latex [NbConvertApp] Writing 60742 bytes to Tutorial-ADM_in_terms_of_BSSN.tex This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex) restricted \write18 enabled. entering extended mode This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex) restricted \write18 enabled. entering extended mode This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex) restricted \write18 enabled. entering extended mode
538ba5f82d979edff77ab41d18c5a071e6f451c7
24,195
ipynb
Jupyter Notebook
Tutorial-ADM_in_terms_of_BSSN.ipynb
dinatraykova/nrpytutorial
74d1bab0c45380727975568ba956b69c082e2293
[ "BSD-2-Clause" ]
null
null
null
Tutorial-ADM_in_terms_of_BSSN.ipynb
dinatraykova/nrpytutorial
74d1bab0c45380727975568ba956b69c082e2293
[ "BSD-2-Clause" ]
null
null
null
Tutorial-ADM_in_terms_of_BSSN.ipynb
dinatraykova/nrpytutorial
74d1bab0c45380727975568ba956b69c082e2293
[ "BSD-2-Clause" ]
2
2019-11-14T03:31:18.000Z
2019-12-12T13:42:52.000Z
43.75226
559
0.553296
true
6,040
Qwen/Qwen-72B
1. YES 2. YES
0.805632
0.709019
0.571209
__label__eng_Latn
0.550014
0.165439
# The positive predictive value ## Let's see this notebook in a better format : ### [HERE](http://www.reproducibleimaging.org/module-stats/05-PPV/) ## Some Definitions * $H_0$ : null hypothesis: The hypothesis that the effect we are testing for is null * $H_A$ : alternative hypothesis : Not $H_0$, so there is some signal * $T$ : The random variable that takes value "significant" or "not significant" * $T_S$ : Value of T when test is significant (e.g., $T = T_S$) - or, the event "the test is significant" * $T_N$ : Value of T when test is not significant (e.g., $T = T_N$) - or, the event "the test is not significant" * $\alpha$ : false positive rate - probability of rejecting $H_0$ when $H_0$ is true ($H_A$ is false) * $\beta$ : false negative rate - probability of accepting $H_0$ when $H_A$ is true ($H_0$ is false) power = $1-\beta$, where $\beta$ is the risk of *false negative*. So, to compute power, *we need to know the risk of false negative*, i.e., the risk of not showing a significant effect when there is some signal (the null is false). ## Some standard python imports ```python import matplotlib.pyplot as plt %matplotlib inline import numpy as np import scipy.stats as sst import matplotlib.pyplot as plt from __future__ import division #python 2.x legacy ``` ## A function to nicely plot some tables of probability ```python from sympy import symbols, Eq, solve, simplify, lambdify, init_printing, latex init_printing(use_latex=True, order='old') from sympy.abc import alpha, beta # get alpha, beta symbolic variables from IPython.display import HTML # Code to make HTML for a probability table def association_table(assocs, title): latexed = {'title': title} for key, value in assocs.items(): latexed[key] = latex(value) latexed['s_total'] = latex(assocs['t_s'] + assocs['f_s']) latexed['ns_total'] = latex(assocs['t_ns'] + assocs['f_ns']) return """<h3>{title}</h3> <TABLE><TR><TH>$H/T$<TH>$T_S$<TH>$T_N$ <TR><TH>$H_A$<TD>${t_s}$<TD>${t_ns}$ <TR><TH>$H_0$<TD>${f_s}$<TD>${f_ns}$ <TR><TH>Total<TD>${s_total}$<TD>${ns_total}$ </TABLE>""".format(**latexed) assoc = dict(t_s = 1 - beta, # H_A true, test significant = true positives t_ns = beta, # true, not significant = false negatives f_s = alpha, # false, significant = false positives f_ns = 1 - alpha) # false, not significant = true negatives HTML(association_table(assoc, 'Not considering prior')) ``` <h3>Not considering prior</h3> <TABLE><TR><TH>$H/T$<TH>$T_S$<TH>$T_N$ <TR><TH>$H_A$<TD>$1 - \beta$<TD>$\beta$ <TR><TH>$H_0$<TD>$\alpha$<TD>$1 - \alpha$ <TR><TH>Total<TD>$1 + \alpha - \beta$<TD>$1 + \beta - \alpha$ </TABLE> ```python ``` ## Derivation of the Ioannidis / Button positive predictive value: PPV ### Recall some important statistical concepts: Marginalization and Bayes theorem #### Marginalization $\newcommand{Frac}[2]{\frac{\displaystyle #1}{\displaystyle #2}}$ We now consider that the hypotheses are *random events*, so we have a probability associated with these events. Let's define some new terms: * $P(H_A)$ - prior probability of $H_A$ - probability of $H_A$ before the experiment. * $P(H_0)$ - prior probability of $H_0$ = $1 - Pr(H_A)$ - probability of the null hypothesis before the experiment We are interested in updating the probability of $H_A$ and $H_0$ as a result of a test on some collected data. This updated probability is $P(H_A | T)$ - the probability of $H_A$ given the test result $T$. $P(H_A | T)$ is called the *posterior* probability because it is the probability after the test result is known. 
Let's imagine that the event $A$ occurs under the events $b_1, b_2, \ldots, b_n$; these events $b_i$ are mutually exclusive and together they cover all possibilities. For instance, the event "the test is significant" occurs under $H_0$ and $H_A$. The marginalization theorem is simply that $$ P(A) = \sum_{b_i} P(A,B=b_i) $$ In our previous example, $$ P(T_S) = \sum_{h=H_0, H_1} P(T_S, h) = P(T_S, H_0) + P(T_S, H_1) $$ Throughout $P(A, B)$ reads "Probability of A AND B". To simplify the notation, we note $P(B=b)$ as $P(b)$. #### Bayes theorem Remembering [Bayes theorem](http://en.wikipedia.org/wiki/Bayes'_theorem#Derivation): $$P(A, B) = P(A | B) P(B)$$ and therefore $$P(A | B) = \Frac{P(B, A)}{P(B)} = \Frac{P(B | A) P(A)}{P(B)}$$ Putting marginalization and Bayes together we have: $$P(A) = \sum_{b_i} P(A|B=b_i) P(B=b_i)$$ Now, apply this to the probability of the test results $T$. The test takes a value either under $H_A$ or $H_0$. The probability of a *significant* result of the test $T=T_S$ is: $Pr(T=T_S) = P(T_S) = Pr(T_S | H_A) Pr(H_A) + Pr(T_S | H_0) Pr(H_0)$ What is the posterior probability of $H_A$ given that the test is significant? $P(H_A | T_S) = \Frac{P(T_S | H_A) P(H_A)}{P(T_S)} = \Frac{P(T_S | H_A) P(H_A)}{P(T_S | H_A) Pr(H_A) + Pr(T_S | H_0) Pr(H_0)}$ We have $P(T_S | H_A)$, $P(T_S | H_0)$ from the first column of the table above. Substituting into the equation: $P(H_A | T_S) = \Frac{(1 - \beta) P(H_A)}{(1 - \beta) P(H_A) + \alpha P(H_0)}$ Defining: $\pi := Pr(H_A)$, hence: $1 - \pi = Pr(H_0)$ we have: $P(H_A | T_S) = \Frac{(1 - \beta) \pi}{(1 - \beta) \pi + \alpha (1 - \pi)}$ ```python from sympy.abc import pi # get symbolic variable pi post_prob = (1 - beta) * pi / ((1 - beta) * pi + alpha * (1 - pi)) post_prob ``` ```python assoc = dict(t_s = pi * (1 - beta), t_ns = pi * beta, f_s = (1 - pi) * alpha, f_ns = (1 - pi) * (1 - alpha)) HTML(association_table(assoc, r'Considering prior $\pi := P(H_A)$')) ``` <h3>Considering prior $\pi := P(H_A)$</h3> <TABLE><TR><TH>$H/T$<TH>$T_S$<TH>$T_N$ <TR><TH>$H_A$<TD>$\pi \left(1 - \beta\right)$<TD>$\beta \pi$ <TR><TH>$H_0$<TD>$\alpha \left(1 - \pi\right)$<TD>$\left(1 - \alpha\right) \left(1 - \pi\right)$ <TR><TH>Total<TD>$\alpha \left(1 - \pi\right) + \pi \left(1 - \beta\right)$<TD>$\beta \pi + \left(1 - \alpha\right) \left(1 - \pi\right)$ </TABLE> ## Retrieving the Ioannidis / Button et al formula Following Ioannidis, we do the derivation starting from odds ratios. From Button et al., we have the positive predictive value PPV defined as: $$ PPV = \frac{(1-\beta)R}{(1-\beta)R + \alpha},\textrm{ with } R = P(H_1)/P(H_0) = P_1/P_0 = \pi / (1-\pi) $$ Hence, $$ PPV = \frac{(1-\beta)P_1}{P_0}\frac{P_0}{(1-\beta)P_1 + \alpha P_0} $$ $$ = \frac{(1-\beta)P_1}{(1-\beta)P_1 + \alpha P_0} $$ $$ = P(H_1, T_S) / P(T_S) = P(H_1 | T_S) $$ If we have 4 chances over 5 that $H_0$ is true, and one over five that $H_1$ is true, then $R = (1/5)/(4/5) = 0.25$. With the conventional $\alpha = 0.05$, a power of 20% gives PPV = 50%: only a 50% chance that a significant result is indeed true. A power of 80% raises the PPV to 80%, i.e., an 80% chance that $H_1$ is true, knowing that we have detected an effect at the $\alpha$ risk of error. 
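The numbers just quoted are quick to verify directly. The small check below is a sketch added for this write-up (plain Python only), using $R = 0.25$ and $\alpha = 0.05$:

```python
# Quick numerical check of the PPV values quoted above (sketch).
# alpha_ has a trailing underscore so it does not shadow the sympy symbol `alpha` imported earlier.
R_check, alpha_ = 0.25, 0.05
for power in (0.2, 0.8):
    ppv = (power * R_check) / (power * R_check + alpha_)
    print(f"power = {power:.1f} -> PPV = {ppv:.2f}")
# Expected: power = 0.2 -> PPV = 0.50, power = 0.8 -> PPV = 0.80
```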
### A small function to compute PPV ```python def PPV_OR(odd_ratio, power, alpha, verbose=True): """ Returns PPV from odd_ratio, power and alpha Parameters: ----------- odd_ratio: float P(H_A)/(1-P(H_A)) power: float Power for this study alpha: float type I risk of error Returns: ---------- float The positive predictive value """ ppv = (power*odd_ratio)/(power*odd_ratio + alpha) if verbose: print("With odd ratio=%3.2f, " "Power=%3.2f, alpha=%3.2f, " "We have PPV=%3.2f" %(odd_ratio,power,alpha,ppv)) return ppv ``` ```python one4sure = PPV_OR(1, 1, 0, verbose=False) assert one4sure == 1 zero4sure = PPV_OR(0, 1, 0.05, verbose=False) assert zero4sure == 0 weird2think = PPV_OR(1, 1, 1, verbose=False) assert weird2think == 0.5 ``` ### A small function for display ```python def plot_ppv(xvalues, yvalues, xlabel, ylabel, title): ''' simply plot yvalues against xvalues, with labels and title Parameters: ----------- xvalues, yvalues : iterables of numbers labels and title : string ''' fig = plt.figure(); axis = fig.add_subplot(1, 1, 1) axis.plot(xvalues, yvalues, color='red', marker='o', linestyle='dashed', linewidth=2, markersize=14); axis.set_xlabel(xlabel,fontsize=20); axis.set_ylabel(ylabel,fontsize=20); axis.set_title(title, fontsize=20); return fig, axis ``` ### Example from Button et al, 2013 ```python # example from Button et al: P1 = 1/5, P0 = 4/5, so R = P1/P0 = 1/4; the cell below uses R = 1/5 R = 1./5. Pw = .4 alph = .05 ppv = PPV_OR(R, Pw, alph) ``` With odd ratio=0.20, Power=0.40, alpha=0.05, We have PPV=0.62 ### Vary power ```python #----------------------------------------------------------------- # Vary power: R = .2 Pw = np.arange(.1,.80001,.1) alph = .20 ppvs = [PPV_OR(R, pw, alph, verbose = False) for pw in Pw] xlabel = 'Power' ylabel = 'PPV' figure_title = 'With an odd ratio H1/H0 = {odd_ratio}'.format(odd_ratio=R) #----------------------------------------------------------------- # print plot_ppv(Pw, ppvs, xlabel, ylabel, figure_title); ``` ### Vary odd ratio ```python #----------------------------------------------------------------- # Vary odd ratio: Pw = .4 alph = .05 odd_ratios = np.arange(.05,.5,.05) ppvs = [PPV_OR(R, Pw, alph, verbose = False) for R in odd_ratios] xlabel = 'odd_ratios' ylabel = 'PPV' figure_title = 'With a power of {power}'.format(power=Pw) #----------------------------------------------------------------- # print plot_ppv(odd_ratios, ppvs, xlabel, ylabel, figure_title); ``` ### Vary alpha ```python #----------------------------------------------------------------- # Vary alpha: Pw = .5 R = 1/5 alphas = np.arange(0, .2, 0.01)# [0.001, .005, 0.01, 0.05, 0.1] #, 0.2, 0.3, 0.4, 0.5] ppvs = [PPV_OR(R, Pw, alph, verbose = False) for alph in alphas] #----------------------------------------------------------------- # print xlabel = 'alpha' ylabel = 'PPV' figure_title = 'With a power of {power} and odd ratio of {odd_ratio}'.format( power=Pw, odd_ratio=R) plot_ppv(alphas, ppvs, xlabel, ylabel, figure_title); ``` # End of the PPV section ```python ``` ```python ```
c88ee41bd33e81eddda97893e5047d54154ae8f9
77,097
ipynb
Jupyter Notebook
notebooks/Positive-Predictive-Value.ipynb
ReproNim/stat-repronim-module
ccef0fa1d23d0023db4cbbfc1e3091037df77f3b
[ "CC-BY-4.0" ]
3
2020-02-27T19:04:46.000Z
2020-02-27T19:13:30.000Z
notebooks/Positive-Predictive-Value.ipynb
ReproNim/stat-repronim-module
ccef0fa1d23d0023db4cbbfc1e3091037df77f3b
[ "CC-BY-4.0" ]
6
2016-11-12T02:07:16.000Z
2020-06-11T10:47:46.000Z
notebooks/Positive-Predictive-Value.ipynb
ReproNim/stat-repronim-module
ccef0fa1d23d0023db4cbbfc1e3091037df77f3b
[ "CC-BY-4.0" ]
12
2016-11-03T18:03:01.000Z
2021-06-04T06:53:07.000Z
105.32377
19,614
0.835558
true
3,274
Qwen/Qwen-72B
1. YES 2. YES
0.853913
0.857768
0.732459
__label__eng_Latn
0.87698
0.54008
```python import numpy as np import dapy.filters as filters from dapy.models import NettoGimenoMendesModel import matplotlib.pyplot as plt %matplotlib inline plt.style.use('seaborn-white') plt.rcParams['figure.dpi'] = 100 ``` ## Model One-dimensional stochastic dynamical system due to Netto et al. [1] with state dynamics defined by the discrete time map \begin{equation} x_{t+1} = \alpha x_t + \beta \frac{x_t}{1 + x_t^2} + \gamma \cos(\delta t) + \sigma_x u_t \end{equation} with $u_t \sim \mathcal{N}(0, 1) ~\forall t$ and $x_0 \sim \mathcal{N}(m, s^2)$. The observed process is defined by \begin{equation} y_{t} = \epsilon x_t^2 + \sigma_y v_t \end{equation} with $v_t \sim \mathcal{N}(0, 1)$. Standard parameter values assumed here are $\alpha = 0.5$, $\beta = 25$, $\gamma = 8$, $\delta = 1.2$, $\epsilon = 0.05$, $m=10$, $s=5$, $\sigma_x^2 = 1$, $\sigma_y^2 = 1$ and $T = 200$ simulated time steps. ### References 1. M. L. A. Netto, L. Gimeno, and M. J. Mendes. A new spline algorithm for non-linear filtering of discrete time systems. *Proceedings of the 7th Triennial World Congress*, 1979. ```python model_params = { 'initial_state_mean': 10., 'initial_state_std': 5., 'state_noise_std': 1., 'observation_noise_std': 1., 'alpha': 0.5, 'beta': 25., 'gamma': 8, 'delta': 1.2, 'epsilon': 0.05, } model = NettoGimenoMendesModel(**model_params) ``` ## Generate data from model ```python num_observation_time = 100 observation_time_indices = np.arange(num_observation_time) seed = 20171027 rng = np.random.default_rng(seed) state_sequence, observation_sequence = model.sample_state_and_observation_sequences( rng, observation_time_indices) ``` Sampling: 100% | 100/100 [00:00<00:00, 21597.18time-steps/s] ```python fig, ax = plt.subplots(figsize=(12, 4)) ax.plot(observation_time_indices, state_sequence) ax.plot(observation_time_indices, observation_sequence, '.') ax.set_xlabel('Time index $t$') ax.set_ylabel('State') _ = ax.set_xlim(0, num_observation_time - 1) ax.legend(['$x_t$', '$y_t$'], ncol=4) fig.tight_layout() ``` ## Infer state from observations ```python 
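# Helper for the sections below: plots the filter's estimated state mean (and,
# optionally, individual particles and a +/- 3 standard deviation band) against
# the true state sequence for this one-dimensional state.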
def plot_results(results, observation_time_indices, state_sequence=None, plot_particles=True, plot_region=False, particle_skip=5, trace_alpha=0.25): fig, ax = plt.subplots(sharex=True, figsize=(12, 4)) ax.plot(results['state_mean_sequence'][:, 0], 'g-', lw=1, label='Est. mean') if plot_region: ax.fill_between( observation_time_indices, results['state_mean_sequence'][:, 0] - 3 * results['state_std_sequence'][:, 0], results['state_mean_sequence'][:, 0] + 3 * results['state_std_sequence'][:, 0], alpha=0.25, color='g', label='Est. mean ± 3 standard deviation' ) if plot_particles: lines = ax.plot( observation_time_indices, results['state_particles_sequence'][:, ::particle_skip, 0], 'r-', lw=0.25, alpha=trace_alpha) lines[0].set_label('Particles') if state_sequence is not None: ax.plot(observation_time_indices, state_sequence[:, 0], 'k--', label='Truth') ax.set_ylabel('$x_t$') ax.legend(loc='upper center', ncol=4) ax.set_xlabel('Time index $t$') fig.tight_layout() return fig, ax ``` ### Ensemble Kalman filter (perturbed observations) ```python enkf = filters.EnsembleKalmanFilter() ``` ```python results_enkf = enkf.filter( model, observation_sequence, observation_time_indices, num_particle=500, rng=rng, return_particles=True) ``` Filtering: 100% | 100/100 [00:00<00:00, 427.09time-steps/s] ```python fig, axes = plot_results(results_enkf, observation_time_indices, state_sequence) ``` ### Ensemble Transform Kalman filter (deterministic square root) ```python etkf = filters.EnsembleTransformKalmanFilter() ``` ```python results_etkf = etkf.filter( model, observation_sequence, observation_time_indices, num_particle=500, rng=rng, return_particles=True) ``` Filtering: 100% | 100/100 [00:00<00:00, 143.24time-steps/s] ```python fig, axes = plot_results(results_etkf, observation_time_indices, state_sequence) ``` ### Bootstrap particle filter ```python bspf = 
filters.BootstrapParticleFilter() ``` ```python results_bspf = bspf.filter( model, observation_sequence, observation_time_indices, num_particle=500, rng=rng, return_particles=True) ``` Filtering: 100% | 100/100 [00:00<00:00, 3062.47time-steps/s] ```python fig, axes = plot_results(results_bspf, observation_time_indices, state_sequence) ``` ### Ensemble transform particle filter ```python etpf = filters.EnsembleTransformParticleFilter() ``` ```python results_etpf = etpf.filter( model, observation_sequence, observation_time_indices, num_particle=500, rng=rng, return_particles=True) ``` Filtering: 100% | 100/100 [00:02<00:00, 41.33time-steps/s] ```python fig, axes = plot_results(results_etpf, observation_time_indices, state_sequence) ```
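As a follow-up, the four filters can also be compared quantitatively rather than only visually. The cell below is a sketch added for this write-up (not part of the original notebook): it computes the root-mean-square error of each filter's estimated state mean against the true state sequence, reusing the `state_mean_sequence` key that `plot_results` already relies on.

```python
# Quantitative comparison of the filters run above (sketch).
# Assumes each results dictionary exposes 'state_mean_sequence' as used in plot_results.
filter_results = {
    'EnKF (perturbed obs.)': results_enkf,
    'ETKF (square root)': results_etkf,
    'Bootstrap PF': results_bspf,
    'ETPF': results_etpf,
}
for name, res in filter_results.items():
    rmse = np.sqrt(np.mean((res['state_mean_sequence'][:, 0] - state_sequence[:, 0])**2))
    print(f"{name}: RMSE = {rmse:.3f}")
```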
13ef3d9f741f183a0f37eacdc1ff4688d954795b
988,058
ipynb
Jupyter Notebook
notebooks/Netto_Gimeno_Mendes_1979_non_linear_model.ipynb
hassaniqbal209/data-assimilation
ec52d655395dbed547edf4b4f3df29f017633f1b
[ "MIT" ]
11
2020-07-29T07:46:39.000Z
2022-03-17T01:28:07.000Z
notebooks/Netto_Gimeno_Mendes_1979_non_linear_model.ipynb
hassaniqbal209/data-assimilation
ec52d655395dbed547edf4b4f3df29f017633f1b
[ "MIT" ]
1
2020-07-14T11:49:17.000Z
2020-07-29T07:43:22.000Z
notebooks/Netto_Gimeno_Mendes_1979_non_linear_model.ipynb
hassaniqbal209/data-assimilation
ec52d655395dbed547edf4b4f3df29f017633f1b
[ "MIT" ]
10
2020-07-14T11:34:24.000Z
2022-03-07T09:08:12.000Z
1,709.442907
232,228
0.958537
true
3,078
Qwen/Qwen-72B
1. YES 2. YES
0.79053
0.841826
0.665489
__label__eng_Latn
0.195089
0.384484
# Calculate near surface RH% using ERA-interim fields * 2-m dew point * 2-m temperature * surface pressure Once the yearly RH files are made, merge these data into a single file and put it into the ERA merged-time directory; then regrid that file and place it in the common-grid directory, so that this newly created variable gets the same handling as variables processed by Python/format_raw_era_data.py NOTE: RH was calculated from raw, native (0.75 x 0.75 deg grid) ERA-Interim nc data. These data live in a different project directory (metSpreadData). Link to documentation and instructions on how to calculate near surface relative humidity using ECMWF ERA-Interim: https://www.ecmwf.int/en/faq/do-era-datasets-contain-parameters-near-surface-humidity \begin{align} RH=100\frac{es(T_{d})}{es(T)} \end{align} ### Saturation Specific Humidity Saturation specific humidity is expressed as a function of saturation water vapor pressure as: \begin{align} q_{sat} & = \frac{e_{sat}(T)\frac{R_{dry}}{R_{vapor}}}{p-(1-\frac{R_{dry}}{R_{vapor}})e_{sat}(T)} \end{align} where the saturation water vapor pressure ($e_{sat}$) is expressed with the Tetens formula: \begin{align} e_{sat}(T) & = a_{1}\exp\left(a_{3}\frac{T - T_{0}}{T-a_{4}}\right)\\ \\ a_{1} & = 611.21\ \mathrm{Pa} \\ a_{3} & = 17.502 \\ a_{4} & = 32.19\ \mathrm{K} \\ T_{0} & = 273.16\ \mathrm{K} \end{align} ```python import numpy as np def calculat_svp(T, P): """Calculates and returns saturation specific humidity (q_sat) from temperature T [K] and pressure P [Pa], using the Tetens formula for the saturation vapor pressure""" # constants for saturation over water a1 = 611.21 # Pa a3 = 17.502 a4 = 32.19 # K To = 273.16 # K R_dry = 287. # J/kg/K gas constant of dry air R_vap = 461. # J/kg/K gas constant for water vapor e_sat = a1 * np.exp( a3 * (T-To)/(T-a4) ) # Saturation specific humidity (eqn 7.4) top = e_sat * R_dry / R_vap bottom = P - (1. - R_dry / R_vap) * e_sat q_sat = top / bottom return q_sat ``` ### Create 3D RH arrays ```python from netCDF4 import Dataset import os from matplotlib import pylab as plt dataDir = os.path.join("..","..","metSpreadData","ERA-Interim") def get_nc(var, year): """ Loads an era-interim netcdf file. Very simple. """ loadFile = os.path.join(dataDir, var + "_" + str(year) + ".nc") nc = Dataset(loadFile) vals = nc.variables[var][:] t = nc.variables["time"][:] lon = nc.variables["longitude"][:] lat = nc.variables["latitude"][:] nc.close() return vals, t, lon, lat def write_RH_nc(RH, t, x, y, year, dataDir): """ Writes a relative humidity netcdf file. """ outputFile = os.path.join(dataDir, "RH_" + str(year) + ".nc") ncFile = Dataset(outputFile, 'w', format='NETCDF4') ncFile.description = 'Relative Humidity (saturation pressure relative to water, Tetens eq.)' ncFile.location = 'Global' ncFile.createDimension('time', len(t) ) ncFile.createDimension('latitude', len(y) ) ncFile.createDimension('longitude', len(x) ) VAR_ = ncFile.createVariable("RH", 'f4',('time', 'latitude','longitude')) VAR_.long_name = "Relative Humidity" VAR_.units = "%" # Create time variable time_ = ncFile.createVariable('time', 'i4', ('time',)) time_.units = "hours since 1900-01-01 00:00:0.0" time_.calendar = "gregorian" # create lat variable latitude_ = ncFile.createVariable('latitude', 'f4', ('latitude',)) latitude_.units = 'degrees_north' # create longitude variable longitude_ = ncFile.createVariable('longitude', 'f4', ('longitude',)) longitude_.units = 'degrees_east' # Write the actual data to these dimensions VAR_[:] = RH time_[:] = t latitude_[:] = y longitude_[:] = x ncFile.close() ``` Generate the RH yearly files! 
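Before running the full loop below, a quick spot check of `calculat_svp` (a sketch added for this write-up) helps confirm the Tetens constants were typed correctly: at $T = T_0 = 273.16$ K and $p \approx 1013$ hPa the saturation specific humidity should be roughly 3.8 g/kg, and since $q_{sat}$ increases with temperature, the RH ratio computed below must stay under 100% whenever the dew point is below the temperature.

```python
# Spot check of calculat_svp before processing all years (sketch).
T0 = 273.16    # K
p0 = 101325.0  # Pa
q_sat_0C = calculat_svp(T0, p0)
print("q_sat at 0 C and 1013 hPa: %.4f kg/kg" % q_sat_0C)  # expect roughly 0.0038
assert np.isclose(q_sat_0C, 3.8e-3, rtol=0.05)

# q_sat increases with temperature, so RH = q_sat(Td)/q_sat(T)*100 < 100% when Td < T
assert calculat_svp(T0 + 10., p0) < calculat_svp(T0 + 20., p0)
```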
```python years = range(1983, 2017+1) for year in years : print("Making RH file for %i " % year) # Get the grids needed to calculate specific humidity t2m,t,x,y = get_nc("t2m", year) # 2 meter temperature d2m,t,x,y = get_nc("d2m", year) # 2 meter dew point sp,t,x,y = get_nc("sp", year) # Surface pressure RH = calculat_svp(d2m, sp) / calculat_svp(t2m, sp) * 100. write_RH_nc(RH, t, x, y, year, dataDir) ``` Making RH file for 1983 Making RH file for 1984 Making RH file for 1985 Making RH file for 1986 Making RH file for 1987 Making RH file for 1988 Making RH file for 1989 Making RH file for 1990 Making RH file for 1991 Making RH file for 1992 Making RH file for 1993 Making RH file for 1994 Making RH file for 1995 Making RH file for 1996 Making RH file for 1997 Making RH file for 1998 Making RH file for 1999 Making RH file for 2000 Making RH file for 2001 Making RH file for 2002 Making RH file for 2003 Making RH file for 2004 Making RH file for 2005 Making RH file for 2006 Making RH file for 2007 Making RH file for 2008 Making RH file for 2009 Making RH file for 2010 Making RH file for 2011 Making RH file for 2012 Making RH file for 2013 Making RH file for 2014 Making RH file for 2015 Making RH file for 2016 Making RH file for 2017 Show the first month of the last year of output as an example of what these values look like. Make sure the dry places in the world have lower RH values. ```python plt.figure(dpi=150) plt.pcolor(x,y,RH[0,:,:]) plt.title("Example of RH% output for January 2017") plt.colorbar(extend="both", label="RH%") plt.xlabel("Longitude") plt.ylabel("Latitude") plt.show() ``` ### Merge yearly files and regrid * TODO: the code below is not working in a notebook, something strange with the call to Cdo() * These commands were issued at the command line using cdo on 11/24/2018. ```python import glob from cdo import * # python version cdo = Cdo() common_grid_txt = os.path.join(dataDir, 'COMMON_GRID.txt') files_to_merge = glob.glob(os.path.join(dataDir, "RH_*")) merged_time_file = os.path.join(dataDir, 'merged_time', 'RH_1983-2017.nc') common_grid_out = os.path.join(dataDir, 'merged_t_COMMON_GRID', 'RH_1983-2017.nc') cdo.mergetime(input=" ".join(files_to_merge), output=merged_time_file, options="-b F64") cdo.remapbil(common_grid_txt, input=merged_time_file, output=common_grid_out, options="-b F64") ```
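Since the python-cdo calls above did not work from the notebook, one workaround (a sketch, not tested here) is to shell out to the same `cdo` operators with `subprocess`, mirroring the `mergetime` and `remapbil` commands described above; the path variables are reused from the snippet above.

```python
# Sketch: call the cdo command-line tool directly instead of the python-cdo bindings.
# Assumes cdo is installed on the system and the path variables above are defined.
import subprocess

subprocess.run(["cdo", "-b", "F64", "mergetime"] + files_to_merge + [merged_time_file],
               check=True)
subprocess.run(["cdo", "-b", "F64", "remapbil," + common_grid_txt,
                merged_time_file, common_grid_out],
               check=True)
```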
bbb1577ca7810c11a711957f0d29f8707540a0d4
256,025
ipynb
Jupyter Notebook
Python/calculate_RH_from_d2m.ipynb
stevenjoelbrey/metSpread
38d667f0e2f58563345fed14132bb5a6eb7022af
[ "MIT" ]
null
null
null
Python/calculate_RH_from_d2m.ipynb
stevenjoelbrey/metSpread
38d667f0e2f58563345fed14132bb5a6eb7022af
[ "MIT" ]
null
null
null
Python/calculate_RH_from_d2m.ipynb
stevenjoelbrey/metSpread
38d667f0e2f58563345fed14132bb5a6eb7022af
[ "MIT" ]
null
null
null
795.108696
246,192
0.948536
true
1,890
Qwen/Qwen-72B
1. YES 2. YES
0.904651
0.79053
0.715154
__label__eng_Latn
0.775865
0.499873
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W2D1_DeepLearning/W2D1_Tutorial1.ipynb" target="_parent"></a> &nbsp; <a href="https://kaggle.com/kernels/welcome?src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W2D1_DeepLearning/W2D1_Tutorial1.ipynb" target="_parent"></a> # Tutorial 1: Decoding Neural Responses **Week 2, Day 1: Deep Learning** **By Neuromatch Academy** **Content creators**: Jorge A. Menendez, Carsen Stringer **Content reviewers**: Roozbeh Farhoodi, Madineh Sarvestani, Kshitij Dwivedi, Spiros Chavlis, Ella Batty, Michael Waskom **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** <p align='center'></p> --- # Tutorial Objectives *Estimated timing of tutorial: 1 hr, 20 minutes* In this tutorial, we'll use deep learning to decode stimulus information from the responses of sensory neurons. Specifically, we'll look at the activity of ~20,000 neurons in mouse primary visual cortex responding to oriented gratings recorded in [this study](https://www.biorxiv.org/content/10.1101/679324v2.abstract). Our task will be to decode the orientation of the presented stimulus from the responses of the whole population of neurons. We could do this in a number of ways, but here we'll use deep learning. Deep learning is particularly well-suited to this problem for a number of reasons: * The data are very high-dimensional: the neural response to a stimulus is a ~20,000 dimensional vector. Many machine learning techniques fail in such high dimensions, but deep learning actually thrives in this regime, as long as you have enough data (which we do here!). * As you'll be able to see below, different neurons can respond quite differently to stimuli. This complex pattern of responses will, therefore, require non-linear methods to be decoded, which we can easily do with non-linear activation functions in deep networks. * Deep learning architectures are highly flexible, meaning we can easily adapt the architecture of our decoding model to optimize decoding. Here, we'll focus on a single architecture, but you'll see that it can easily be modified with few changes to the code. More concretely, our goal will be learn how to: * Build a deep feed-forward network using PyTorch * Evaluate the network's outputs using PyTorch built-in loss functions * Compute gradients of the loss with respect to each parameter of the network using automatic differentiation * Implement gradient descent to optimize the network's parameters ```python # @title Tutorial slides # @markdown These are the slides for all videos in this tutorial. 
from IPython.display import IFrame IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/vb7c4/?direct%26mode=render%26action=download%26mode=render", width=854, height=480) ``` ```python # @title Video 1: Decoding from neural data using feed-forward networks in pytorch from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV1Xa4y1a7Jz", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="SlrbMvvBOzM", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ``` This video covers the decoding task we will use in these tutorials, a linear network with one hidden layer, and how to build this in Pytorch. Generalized linear models were used as decoding and encoding models in W1D4 Machine Learning. A model that decodes a variable from neural activity can tell us *how much information* a brain area contains about that variable. An encoding model is a model from an input variable, like visual stimulus, to neural activity. The encoding model is meant to approximate the same transformation that the brain performs on input variables and therefore help us understand *how the brain represents information*. Today we will use deep neural networks to build these models because deep neural networks can approximate a wide range of non-linear functions and can be easily fit. --- # Setup ```python # Imports import os import numpy as np import torch from torch import nn from torch import optim import matplotlib as mpl from matplotlib import pyplot as plt ``` ```python #@title Data retrieval and loading import hashlib import requests fname = "W3D4_stringer_oribinned1.npz" url = "https://osf.io/683xc/download" expected_md5 = "436599dfd8ebe6019f066c38aed20580" if not os.path.isfile(fname): try: r = requests.get(url) except requests.ConnectionError: print("!!! Failed to download data !!!") else: if r.status_code != requests.codes.ok: print("!!! Failed to download data !!!") elif hashlib.md5(r.content).hexdigest() != expected_md5: print("!!! 
Data download appears corrupted !!!") else: with open(fname, "wb") as fid: fid.write(r.content) ``` ```python #@title Figure Settings %config InlineBackend.figure_format = 'retina' plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle") ``` ```python # @title Plotting Functions def plot_data_matrix(X, ax): """Visualize data matrix of neural responses using a heatmap Args: X (torch.Tensor or np.ndarray): matrix of neural responses to visualize with a heatmap ax (matplotlib axes): where to plot """ cax = ax.imshow(X, cmap=mpl.cm.pink, vmin=np.percentile(X, 1), vmax=np.percentile(X, 99)) cbar = plt.colorbar(cax, ax=ax, label='normalized neural response') ax.set_aspect('auto') ax.set_xticks([]) ax.set_yticks([]) def plot_decoded_results(train_loss, test_loss, test_labels, predicted_test_labels): """ Plot decoding results in the form of network training loss and test predictions Args: train_loss (list): training error over iterations test_labels (torch.Tensor): n_test x 1 tensor with orientations of the stimuli corresponding to each row of train_data, in radians predicted_test_labels (torch.Tensor): n_test x 1 tensor with predicted orientations of the stimuli from decoding neural network """ # Plot results fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 6)) # Plot the training loss over iterations of GD ax1.plot(train_loss) # Plot the testing loss over iterations of GD ax1.plot(test_loss) ax1.legend(['train loss', 'test loss']) # Plot true stimulus orientation vs. predicted class ax2.plot(stimuli_test.squeeze(), predicted_test_labels, '.') ax1.set_xlim([0, None]) ax1.set_ylim([0, None]) ax1.set_xlabel('iterations of gradient descent') ax1.set_ylabel('negative log likelihood') ax2.set_xlabel('true stimulus orientation ($^o$)') ax2.set_ylabel('decoded orientation bin') ax2.set_xticks(np.linspace(0, 360, n_classes + 1)) ax2.set_yticks(np.arange(n_classes)) class_bins = [f'{i * 360 / n_classes: .0f}$^o$ - {(i + 1) * 360 / n_classes: .0f}$^o$' for i in range(n_classes)] ax2.set_yticklabels(class_bins); # Draw bin edges as vertical lines ax2.set_ylim(ax2.get_ylim()) # fix y-axis limits for i in range(n_classes): lower = i * 360 / n_classes upper = (i + 1) * 360 / n_classes ax2.plot([lower, lower], ax2.get_ylim(), '-', color="0.7", linewidth=1, zorder=-1) ax2.plot([upper, upper], ax2.get_ylim(), '-', color="0.7", linewidth=1, zorder=-1) plt.tight_layout() def plot_train_loss(train_loss): plt.plot(train_loss) plt.xlim([0, None]) plt.ylim([0, None]) plt.xlabel('iterations of gradient descent') plt.ylabel('mean squared error') plt.show() ``` ```python # @title Helper Functions def load_data(data_name=fname, bin_width=1): """Load mouse V1 data from Stringer et al. (2019) Data from study reported in this preprint: https://www.biorxiv.org/content/10.1101/679324v2.abstract These data comprise time-averaged responses of ~20,000 neurons to ~4,000 stimulus gratings of different orientations, recorded through Calcium imaging. The responses have been normalized by spontaneous levels of activity and then z-scored over stimuli, so expect negative numbers. They have also been binned and averaged to each degree of orientation. This function returns the relevant data (neural responses and stimulus orientations) in a torch.Tensor of data type torch.float32 in order to match the default data type for nn.Parameters in Google Colab. This function will actually average responses to stimuli with orientations falling within bins specified by the bin_width argument. 
This helps produce individual neural "responses" with smoother and more interpretable tuning curves. Args: bin_width (float): size of stimulus bins over which to average neural responses Returns: resp (torch.Tensor): n_stimuli x n_neurons matrix of neural responses, each row contains the responses of each neuron to a given stimulus. As mentioned above, neural "response" is actually an average over responses to stimuli with similar angles falling within specified bins. stimuli: (torch.Tensor): n_stimuli x 1 column vector with orientation of each stimulus, in degrees. This is actually the mean orientation of all stimuli in each bin. """ with np.load(data_name) as dobj: data = dict(**dobj) resp = data['resp'] stimuli = data['stimuli'] if bin_width > 1: # Bin neural responses and stimuli bins = np.digitize(stimuli, np.arange(0, 360 + bin_width, bin_width)) stimuli_binned = np.array([stimuli[bins == i].mean() for i in np.unique(bins)]) resp_binned = np.array([resp[bins == i, :].mean(0) for i in np.unique(bins)]) else: resp_binned = resp stimuli_binned = stimuli # Return as torch.Tensor resp_tensor = torch.tensor(resp_binned, dtype=torch.float32) stimuli_tensor = torch.tensor(stimuli_binned, dtype=torch.float32).unsqueeze(1) # add singleton dimension to make a column vector return resp_tensor, stimuli_tensor def identityLine(): """ Plot the identity line y=x """ ax = plt.gca() lims = np.array([ax.get_xlim(), ax.get_ylim()]) minval = lims[:, 0].min() maxval = lims[:, 1].max() equal_lims = [minval, maxval] ax.set_xlim(equal_lims) ax.set_ylim(equal_lims) line = ax.plot([minval, maxval], [minval, maxval], color="0.7") line[0].set_zorder(-1) def get_data(n_stim, train_data, train_labels): """ Return n_stim randomly drawn stimuli/resp pairs Args: n_stim (scalar): number of stimuli to draw resp (torch.Tensor): train_data (torch.Tensor): n_train x n_neurons tensor with neural responses to train on train_labels (torch.Tensor): n_train x 1 tensor with orientations of the stimuli corresponding to each row of train_data, in radians Returns: (torch.Tensor, torch.Tensor): n_stim x n_neurons tensor of neural responses and n_stim x 1 of orientations respectively """ n_stimuli = train_labels.shape[0] istim = np.random.choice(n_stimuli, n_stim) r = train_data[istim] # neural responses to this stimulus ori = train_labels[istim] # true stimulus orientation return r, ori def stimulus_class(ori, n_classes): """Get stimulus class from stimulus orientation Args: ori (torch.Tensor): orientations of stimuli to return classes for n_classes (int): total number of classes Returns: torch.Tensor: 1D tensor with the classes for each stimulus """ bins = np.linspace(0, 360, n_classes + 1) return torch.tensor(np.digitize(ori.squeeze(), bins)) - 1 # minus 1 to accomodate Python indexing ``` --- # Section 1: Load and visualize data <details> <summary> <font color='blue'>Click here for text recap of relevant part of video </font></summary> We will be exploring neural activity in mice while the mice is viewing oriented grating stimuli on a screen in front of it. We record neural activity using a technique called two-photon calcium imaging, which allows us to record many thousands of neurons simultanously. The neurons light up when they fire. We then convert this imaging data to a matrix of neural responses by stimuli presented. For the purposes of this tutorial we are going to bin the neural responses and compute each neuron’s tuning curve. We used bins of 1 degree. 
We will use the response of all neurons in a single bin to try to predict which stimulus was shown. So we are going to be using the responses of 24000 neurons to try to predict 360 different possible stimulus conditions corresponding to each degree of orientation - which means we're in the regime of big data! </details> In the next cell, we have provided code to load the data and plot the matrix of neural responses. Next to it, we plot the tuning curves of three randomly selected neurons. These tuning curves are the averaged response of each neuron to oriented stimuli within 1$^\circ$, and since there are 360$^\circ$ in total, we have 360 responses. In the recording, there were actually thousands of stimuli shown, but in practice we often create these tuning curves because we want to visualize averaged responses with respect to the variable we varied in the experiment, in this case stimulus orientation. ```python #@title #@markdown Execute this cell to load and visualize data # Load data resp_all, stimuli_all = load_data() # argument to this function specifies bin width n_stimuli, n_neurons = resp_all.shape print(f'{n_neurons} neurons in response to {n_stimuli} stimuli') fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(2 * 6, 5)) # Visualize data matrix plot_data_matrix(resp_all[:, :100].T, ax1) # plot responses of first 100 neurons ax1.set_xlabel('stimulus') ax1.set_ylabel('neuron') # Plot tuning curves of three random neurons ineurons = np.random.choice(n_neurons, 3, replace=False) # pick three random neurons ax2.plot(stimuli_all, resp_all[:, ineurons]) ax2.set_xlabel('stimulus orientation ($^o$)') ax2.set_ylabel('neural response') ax2.set_xticks(np.linspace(0, 360, 5)) plt.tight_layout() ``` We will split our data into a training set and test set. In particular, we will have a training set of orientations (`stimuli_train`) and the corresponding responses (`resp_train`). Our testing set will have held-out orientations (`stimuli_test`) and the corresponding responses (`resp_test`). ```python #@title #@markdown Execute this cell to split into training and test sets # Set random seeds for reproducibility np.random.seed(4) torch.manual_seed(4) # Split data into training set and testing set n_train = int(0.6 * n_stimuli) # use 60% of all data for training set ishuffle = torch.randperm(n_stimuli) itrain = ishuffle[:n_train] # indices of data samples to include in training set itest = ishuffle[n_train:] # indices of data samples to include in testing set stimuli_test = stimuli_all[itest] resp_test = resp_all[itest] stimuli_train = stimuli_all[itrain] resp_train = resp_all[itrain] ``` --- # Section 2: Deep feed-forward networks in *pytorch* <details> <summary> <font color='blue'>Click here for text recap of relevant part of video </font></summary> We can build a linear network with no hidden layers, where the stimulus prediction $y$ is a product of weights $\mathbf{W}_{out}$ and neural responses $\mathbf{r}$ with an added term $\mathbf{b}$ which is called the bias term. When you fit a linear model such as this you minimize the squared error between the predicted stimulus $y$ and the true stimulus $\tilde{y}$, this is the “loss function”. \begin{align} L &= (y - \tilde{y})^2 \\ &= ((\mathbf{W}^{out} \mathbf{r} + \mathbf{b}) - \tilde{y})^2 \end{align} The solution to minimizing this loss function in a linear model can be found in closed form, and you learned how to solve this linear regression problem in the first week if you remember. 
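As a concrete reference point, that closed-form solution takes only a few lines. The cell below is a sketch added for this write-up (not part of the original tutorial): it fits the linear decoder by least squares on `resp_train`/`stimuli_train`, which are defined in Section 1 above.

```python
# Sketch: closed-form (least-squares) linear decoder, for comparison with the deep network.
def fit_linear_decoder(r, ori):
  # Append a column of ones so the bias term is fit together with the weights
  r1 = torch.cat([r, torch.ones(r.shape[0], 1)], dim=1)
  # Minimum-norm least-squares solution; with ~20,000 neurons and only a few hundred
  # training stimuli the problem is underdetermined, so in practice one would add a
  # ridge penalty rather than use plain least squares.
  return torch.linalg.lstsq(r1, ori).solution

w = fit_linear_decoder(resp_train, stimuli_train)
pred_test = torch.cat([resp_test, torch.ones(resp_test.shape[0], 1)], dim=1) @ w
```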
If we use a simple linear model for this data we are able to predict the stimulus within 2-3 degrees. Let’s see if we can predict the neural activity better with a deep network. Let’s add a hidden layer with $M$ units to this linear model, where now the output $y$ is as follows: \begin{align} \mathbf{h} &= \mathbf{W}^{in} \mathbf{r} + \mathbf{b}^{in}, && [\mathbf{W}^{in}: M \times N,\, \mathbf{b}^{in}: M \times 1], \\ y &= \mathbf{W}^{out} \mathbf{h} + \mathbf{b}^{out}, && [\mathbf{W}^{out}: 1 \times M,\, \mathbf{b}^{out}: 1 \times 1], \end{align} Note that this linear network with one hidden layer, where the number of hidden units $M$ is less than the number of inputs $N$, is equivalent to performing [reduced rank regression](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3444519/), a technique that is useful for regularizing your regression model. Adding this hidden layer means the model now has a depth of $1$. The number of units $M$ is termed the width of the network. Increasing the depth and the width of the network can increase the expressivity of the model -- in other words how well it can fit complex non-linear functions. Many state-of-the-art models now have close to 100 layers! But for now let’s start with a model with a depth of $1$ and see if we can improve our prediction of the stimulus. See [bonus section 1](#b1) for a deeper discussion of what this choice entails, and when one might want to use deeper/shallower and wider/narrower architectures. The $M$-dimensional vector $\mathbf{h}$ denotes the activations of the **hidden layer** of the network. The blue components of this diagram denote the **parameters** of the network, which we will later optimize with gradient descent. These include all the weights and biases $\mathbf{W}^{in}, \mathbf{b}^{in}, \mathbf{W}^{out}, \mathbf{b}^{out}$. The **weights** are matrices of size (# of outputs, # of inputs) that are multiplied by the input of each layer, like the regression coefficients in linear regression. The **biases** are vectors of size (# of outputs, 1), like the intercept term in linear regression (see W1D3 for more details on multivariate linear regression). <p align="center"> </p> </details> We'll now build a simple deep neural network that takes as input a vector of neural responses and outputs a single number representing the decoded stimulus orientation. Let $\mathbf{r}^{(n)} = \begin{bmatrix} r_1^{(n)} & r_2^{(n)} & \ldots & r_N^{(n)} \end{bmatrix}^T$ denote the vector of neural responses (of neurons $1, \ldots, N$) to the $n$th stimulus. The network we will use is described by the following set of equations: \begin{align} \mathbf{h}^{(n)} &= \mathbf{W}^{in} \mathbf{r}^{(n)} + \mathbf{b}^{in}, && [\mathbf{W}^{in}: M \times N,\, \mathbf{b}^{in}: M \times 1], \\ y^{(n)} &= \mathbf{W}^{out} \mathbf{h}^{(n)} + \mathbf{b}^{out}, && [\mathbf{W}^{out}: 1 \times M,\, \mathbf{b}^{out}: 1 \times 1], \end{align} where $y^{(n)}$ denotes the scalar output of the network: the decoded orientation of the $n$-th stimulus. ### Section 2.1: Introduction to PyTorch *Estimated timing to here from start of tutorial: 16 min* Here, we'll use the **PyTorch** package to build, run, and train deep networks of this form in Python. PyTorch uses a data type called a `torch.Tensor`. `torch.Tensor`s are effectively just like `numpy` arrays, except that they have some important attributes and methods needed for automatic differentiation (to be discussed below). 
They also come along with infrastructure for easily storing and computing with them on GPU's, a capability we won't touch on here but which can be really useful in practice. <details> <summary> <font color='blue'>Click here for text recap of relevant part of video </font></summary> First we import the pytorch library called `torch` and its neural network module `nn`. Next we will create a class for the deep network called DeepNet. A class has functions which are called methods. A class in python is initialized using a method called `__init__`. In this case the init method is declared to takes two inputs (other than the `self` input which represents the class itself), which are `n_inputs` and `n_hidden`. In our case `n_inputs` is the number of neurons we are using to do the prediction, and `n_hidden` is the number of hidden units. We first call the super function to invoke the `nn.Module`’s init function. Next we add the hidden layer `in_layer` as an attribute of the class. It is a linear layer called `nn.Linear` with size `n_inputs` by `n_hidden`. Then we add a second linear layer `out_layer` of size `n_hidden` by `1`, because we are predicting one output - the orientation of the stimulus. PyTorch will initialize all weights and biases randomly. Note the number of hidden units `n_hidden` is a parameter that we are free to vary in deciding how to build our network. See [Bonus Section 1](#b1) for a discussion of how this architectural choice affects the computations the network can perform. Next we add another method to the class called `forward`. This is the method that runs when you call the class as a function. It takes as input `r` which is the neural responses. Then `r` is sent through the linear layers `in_layer` and `out_layer` and returns our prediction `y`. Let’s create an instantiation of this class called `net` with 200 hidden units with `net = DeepNet(n_neurons, 200)`. Now we can run the neural response through the network to predict the stimulus (`net(r)`); running the “net” this way calls the forward method. </details> The next cell contains code for building the deep network we defined above and in the video using the `nn.Module` base class for deep neural network models (documentation [here](https://pytorch.org/docs/stable/generated/torch.nn.Module.html?highlight=nn%20module#torch.nn.Module)). ```python class DeepNet(nn.Module): """Deep Network with one hidden layer Args: n_inputs (int): number of input units n_hidden (int): number of units in hidden layer Attributes: in_layer (nn.Linear): weights and biases of input layer out_layer (nn.Linear): weights and biases of output layer """ def __init__(self, n_inputs, n_hidden): super().__init__() # needed to invoke the properties of the parent class nn.Module self.in_layer = nn.Linear(n_inputs, n_hidden) # neural activity --> hidden units self.out_layer = nn.Linear(n_hidden, 1) # hidden units --> output def forward(self, r): """Decode stimulus orientation from neural responses Args: r (torch.Tensor): vector of neural responses to decode, must be of length n_inputs. Can also be a tensor of shape n_stimuli x n_inputs, containing n_stimuli vectors of neural responses Returns: torch.Tensor: network outputs for each input provided in r. If r is a vector, then y is a 1D tensor of length 1. If r is a 2D tensor then y is a 2D tensor of shape n_stimuli x 1. 
""" h = self.in_layer(r) # hidden representation y = self.out_layer(h) return y ``` ## Section 2.2: Activation functions *Estimated timing to here from start of tutorial: 25 min* ```python # @title Video 2: Nonlinear activation functions from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV1m5411h7V5", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="JAdukDCQALA", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ``` This video covers adding a nonlinear activation funciton, specifically a Rectified Linear Unit (ReLU), to the linear network. <details> <summary> <font color='blue'>Click here for text recap of video </font></summary> Note that the deep network we constructed above comprises solely **linear** operations on each layer: each layer is just a weighted sum of all the elements in the previous layer. It turns out that linear hidden layers like this aren't particularly useful, since a sequence of linear transformations is actually essentially the same as a single linear transformation. We can see this from the above equations by plugging in the first one into the second one to obtain \begin{equation} y^{(n)} = \mathbf{W}^{out} \left( \mathbf{W}^{in} \mathbf{r}^{(n)} + \mathbf{b}^{in} \right) + \mathbf{b}^{out} = \mathbf{W}^{out}\mathbf{W}^{in} \mathbf{r}^{(n)} + \left( \mathbf{W}^{out}\mathbf{b}^{in} + \mathbf{b}^{out} \right) \end{equation} In other words, the output is still just a weighted sum of elements in the input -- the hidden layer has done nothing to change this. To extend the set of computable input/output transformations to more than just weighted sums, we'll incorporate a **non-linear activation function** in the hidden units. This is done by simply modifying the equation for the hidden layer activations to be \begin{equation} \mathbf{h}^{(n)} = \phi(\mathbf{W}^{in} \mathbf{r}^{(n)} + \mathbf{b}^{in}) \end{equation} where $\phi$ is referred to as the activation function. Using a non-linear activation function will ensure that the hidden layer performs a non-linear transformation of the input, which will make our network much more powerful (or *expressive*, see [Bonus Section 1](#b1)). In practice, deep networks *always* use non-linear activation functions. The most common non-linearity used is the rectified linear unit (or ReLU), which is a max(0, x) function. At the beginning of neural network development, researchers experimented with different non-linearities such as sigmoid and tanh functions, but in the end they found that RELU activation functions worked the best. It works well because the gradient is able to back-propagate through the network as long as the input is positive - the gradient is 1 for all values of x greater than 0. If you use a saturating non-linearity then the gradients will be very small in the saturating regimes, reducing the effective computing regime of the unit. 
</details> #### Coding Exercise 2.2: Nonlinear Activations Create a new class `DeepNetReLU` by modifying our above deep network model to add a **non-linear activation** function $\phi$: \begin{equation} \mathbf{h}^{(n)} = \phi(\mathbf{W}^{in} \mathbf{r}^{(n)} + \mathbf{b}^{in}) \end{equation} We'll use the linear rectification function: \begin{equation} \phi(x) = \begin{cases} x & \text{if } x > 0 \\ 0 & \text{else} \end{cases} \end{equation} which can be implemented in PyTorch using `torch.relu()`. Hidden layers with this activation function are typically referred to as "**Re**ctified **L**inear **U**nits", or **ReLU**'s. Initialize this network with 10 hidden units and run on an example stimulus. **Hint**: you only need to modify the `forward()` method of the above `DeepNet()` class to include `torch.relu()`. We then initialize and run this network. We use it to decode stimulus orientation (true stimulus given by `ori`) from a vector of neural responses `r` to the very first stimulus. Note that when the initialized network class is called as a function on an input (e.g. `net(r)`), its `.forward()` method is called. This is a special property of the `nn.Module` class. Note that the decoded orientations at this point will be nonsense, since the network has been initialized with random weights. Below, we'll learn how to optimize these weights for good stimulus decoding. ```python class DeepNetReLU(nn.Module): """ network with a single hidden layer h with a RELU """ def __init__(self, n_inputs, n_hidden): super().__init__() # needed to invoke the properties of the parent class nn.Module self.in_layer = nn.Linear(n_inputs, n_hidden) # neural activity --> hidden units self.out_layer = nn.Linear(n_hidden, 1) # hidden units --> output def forward(self, r): ############################################################################ ## TO DO for students: write code for computing network output using a ## rectified linear activation function for the hidden units # Fill out function and remove raise NotImplementedError("Student exercise: complete DeepNetReLU forward") ############################################################################ h = ... # h is size (n_inputs, n_hidden) y = ... 
# y is size (n_inputs, 1) return y # Set random seeds for reproducibility np.random.seed(1) torch.manual_seed(1) # Initialize a deep network with M=200 hidden units net = DeepNetReLU(n_neurons, 200) # Get neural responses (r) to and orientation (ori) to one stimulus in dataset r, ori = get_data(1, resp_train, stimuli_train) # using helper function get_data # Decode orientation from these neural responses using initialized network out = net(r) # compute output from network, equivalent to net.forward(r) print('decoded orientation: %.2f degrees' % out) print('true orientation: %.2f degrees' % ori) ``` ```python # to_remove solution class DeepNetReLU(nn.Module): """ network with a single hidden layer h with a RELU """ def __init__(self, n_inputs, n_hidden): super().__init__() # needed to invoke the properties of the parent class nn.Module self.in_layer = nn.Linear(n_inputs, n_hidden) # neural activity --> hidden units self.out_layer = nn.Linear(n_hidden, 1) # hidden units --> output def forward(self, r): h = torch.relu(self.in_layer(r)) # h is size (n_inputs, n_hidden) y = self.out_layer(h) # y is size (n_inputs, 1) return y # Set random seeds for reproducibility np.random.seed(1) torch.manual_seed(1) # Initialize a deep network with M=200 hidden units net = DeepNetReLU(n_neurons, 200) # Get neural responses (r) to and orientation (ori) to one stimulus in dataset r, ori = get_data(1, resp_train, stimuli_train) # using helper function get_data # Decode orientation from these neural responses using initialized network out = net(r) # compute output from network, equivalent to net.forward(r) print('decoded orientation: %.2f degrees' % out) print('true orientation: %.2f degrees' % ori) ``` You should see that the decoded orientation is 0.17 $^{\circ}$ while the true orientation is 139.00 $^{\circ}$. --- # Section 3: Loss functions and gradient descent ```python # @title Video 3: Loss functions & gradient descent from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV19k4y1271n", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="aEtKpzEuviw", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ``` This video covers loss functions, gradient descent, and how to implement these in Pytorch. ### Section 3.1: Loss functions *Estimated timing to here from start of tutorial: 40 min* Because the weights of the network are currently randomly chosen, the outputs of the network are nonsense: the decoded stimulus orientation is nowhere close to the true stimulus orientation. We'll shortly write some code to change these weights so that the network does a better job of decoding. But to do so, we first need to define what we mean by "better". One simple way of defining this is to use the squared error \begin{equation} L = (y - \tilde{y})^2 \end{equation} where $y$ is the network output and $\tilde{y}$ is the true stimulus orientation. 
When the decoded stimulus orientation is far from the true stimulus orientation, $L$ will be large. We thus refer to $L$ as the **loss function**, as it quantifies how *bad* the network is at decoding stimulus orientation. <details> <summary> <font color='blue'>Click here for text recap of relevant part of video </font></summary> First we run the neural responses through the network `net` to get the output `out`. Then we declare our loss function, we will use the built in `nn.MSELoss` function for this purpose: `loss_fn = nn.MSELoss()`. This loss function takes two inputs, the network output `out` and the true stimulus orientations `ori` and finds the mean squared error: `loss = loss_fn(out, ori)`. Specifically, it will take as arguments a **batch** of network outputs $y_1, y_2, \ldots, y_P$ and corresponding target outputs $\tilde{y}_1, \tilde{y}_2, \ldots, \tilde{y}_P$, and compute the **mean squared error (MSE)** \begin{equation} L = \frac{1}{P}\sum_{n=1}^P \left(y^{(n)} - \tilde{y}^{(n)}\right)^2 \end{equation} where $P$ is the number of different stimuli in a batch, called the *batch size*. **Computing MSE** Evaluate the mean squared error for a deep network with $M=10$ rectified linear units, on the decoded orientations from neural responses to 20 random stimuli. ```python # Set random seeds for reproducibility np.random.seed(1) torch.manual_seed(1) # Initialize a deep network with M=10 hidden units net = DeepNetReLU(n_neurons, 10) # Get neural responses to first 20 stimuli in the data set r, ori = get_data(20, resp_train, stimuli_train) # Decode orientation from these neural responses out = net(r) # Initialize PyTorch mean squared error loss function (Hint: look at nn.MSELoss) loss_fn = nn.MSELoss() # Evaluate mean squared error loss = loss_fn(out, ori) print('mean squared error: %.2f' % loss) ``` You should see a mean squared error of 42949.14. ### Section 3.2: Optimization with gradient descent *Estimated timing to here from start of tutorial: 50 min* <details> <summary> <font color='blue'>Click here for text recap of relevant part of video </font></summary> Next we minimize this loss function using gradient descent. In **gradient descent** we compute the gradient of the loss function with respect to each parameter (all W’s and b’s). We then update the parameters by subtracting the learning rate times the gradient. Let’s visualize this loss function $L$ with respect to a weight $w$. If the gradient is positive (the slope $\frac{dL}{dw}$ > 0) as in this case then we want to move in the opposite direction which is negative. So we update the $w$ accordingly in the negative direction on each iteration. Once the iterations complete the weight will ideally be at a value that minimizes the cost function. In reality these cost functions are not convex like this one and depend on hundreds of thousands of parameters. There are tricks to help navigate this rocky cost landscape such as adding momentum or changing the optimizer but we won’t have time to get into that today. There are also ways to change the architecture of the network to improve optimization, such as including skip connections. These skip connections are used in residual networks and allow for the optimization of many layer networks. 
</details> ```python #@markdown Execute this cell to view gradient descent gif from IPython.display import Image Image(url='https://github.com/NeuromatchAcademy/course-content/blob/master/tutorials/static/grad_descent.gif?raw=true') ``` We'll use the **gradient descent (GD)** algorithm to modify our weights to reduce the loss function, which consists of iterating three steps. 1. **Evaluate the loss** on the training data, ``` out = net(train_data) loss = loss_fn(out, train_labels) ``` where `train_data` are the network inputs in the training data (in our case, neural responses), and `train_labels` are the target outputs for each input (in our case, true stimulus orientations). 2. **Compute the gradient of the loss** with respect to each of the network weights. In PyTorch, we can do this with the `.backward()` method of the loss `loss`. Note that the gradients of each parameter need to be cleared before calling `.backward()`, or else PyTorch will try to accumulate gradients across iterations. This can again be done using built-in optimizers via the method `.zero_grad()`. Putting these together we have ``` optimizer.zero_grad() loss.backward() ``` 3. **Update the network weights** by descending the gradient. In Pytorch, we can do this using built-in optimizers. We'll use the `optim.SGD` optimizer (documentation [here](https://pytorch.org/docs/stable/optim.html#torch.optim.SGD)) which updates parameters along the negative gradient, scaled by a learning rate. To initialize this optimizer, we have to tell it * which parameters to update, and * what learning rate to use For example, to optimize *all* the parameters of a network `net` using a learning rate of .001, the optimizer would be initialized as follows ``` optimizer = optim.SGD(net.parameters(), lr=.001) ``` where `.parameters()` is a method of the `nn.Module` class that returns a [Python generator object](https://wiki.python.org/moin/Generators) over all the parameters of that `nn.Module` class (in our case, $\mathbf{W}^{in}, \mathbf{b}^{in}, \mathbf{W}^{out}, \mathbf{b}^{out}$). After computing all the parameter gradients in step 2, we can then update each of these parameters using the `.step()` method of this optimizer, ``` optimizer.step() ``` In the next exercise, we'll give you a code skeleton for implementing the GD algorithm. Your job will be to fill in the blanks. For the mathematical details of the GD algorithm, see [bonus section 2.1](#b21). In this case we are using gradient descent (not *stochastic* gradient descent) because we are computing the gradient over ALL training data at once. Normally there is too much training data to do this in practice, and for instance the neural responses may be divided into sets of 20 stimuli. An **epoch** in deep learning is defined as the forward and backward pass of all the training data through the network. We will run the forward and backward pass of the network here for 20 **epochs**, in practice training may require thousands of epochs. See [bonus section 2.2](#b22) for a more detailed discussion of stochastic gradient descent. #### Coding Exercise 3.2: Gradient descent in PyTorch Complete the function `train()` that uses the gradient descent algorithm to optimize the weights of a given network. This function takes as input arguments * `net`: the PyTorch network whose weights to optimize * `loss_fn`: the PyTorch loss function to use to evaluate the loss * `train_data`: the training data to evaluate the loss on (i.e. 
neural responses to decode) * `train_labels`: the target outputs for each data point in `train_data` (i.e. true stimulus orientations) We will then train a neural network on our data and plot the loss (mean squared error) over time. When we run this function, behind the scenes PyTorch is actually changing the parameters inside this network to make the network better at decoding, so its weights will now be different than they were at initialization. ```python def train(net, loss_fn, train_data, train_labels, n_epochs=50, learning_rate=1e-4): """Run gradient descent to opimize parameters of a given network Args: net (nn.Module): PyTorch network whose parameters to optimize loss_fn: built-in PyTorch loss function to minimize train_data (torch.Tensor): n_train x n_neurons tensor with neural responses to train on train_labels (torch.Tensor): n_train x 1 tensor with orientations of the stimuli corresponding to each row of train_data n_epochs (int, optional): number of epochs of gradient descent to run learning_rate (float, optional): learning rate to use for gradient descent Returns: (list): training loss over iterations """ # Initialize PyTorch SGD optimizer optimizer = optim.SGD(net.parameters(), lr=learning_rate) # Placeholder to save the loss at each iteration train_loss = [] # Loop over epochs for i in range(n_epochs): ###################################################################### ## TO DO for students: fill in missing code for GD iteration raise NotImplementedError("Student exercise: write code for GD iterations") ###################################################################### # compute network output from inputs in train_data out = ... # compute network output from inputs in train_data # evaluate loss function loss = loss_fn(out, train_labels) # Clear previous gradients ... # Compute gradients ... # Update weights ... 
# Store current value of loss train_loss.append(loss.item()) # .item() needed to transform the tensor output of loss_fn to a scalar # Track progress if (i + 1) % (n_epochs // 5) == 0: print(f'iteration {i + 1}/{n_epochs} | loss: {loss.item():.3f}') return train_loss # Set random seeds for reproducibility np.random.seed(1) torch.manual_seed(1) # Initialize network with 10 hidden units net = DeepNetReLU(n_neurons, 10) # Initialize built-in PyTorch MSE loss function loss_fn = nn.MSELoss() # Run gradient descent on data train_loss = train(net, loss_fn, resp_train, stimuli_train) # Plot the training loss over iterations of GD plot_train_loss(train_loss) ``` ```python # to_remove solution def train(net, loss_fn, train_data, train_labels, n_epochs=50, learning_rate=1e-4): """Run gradient descent to opimize parameters of a given network Args: net (nn.Module): PyTorch network whose parameters to optimize loss_fn: built-in PyTorch loss function to minimize train_data (torch.Tensor): n_train x n_neurons tensor with neural responses to train on train_labels (torch.Tensor): n_train x 1 tensor with orientations of the stimuli corresponding to each row of train_data n_epochs (int, optional): number of epochs of gradient descent to run learning_rate (float, optional): learning rate to use for gradient descent Returns: (list): training loss over iterations """ # Initialize PyTorch SGD optimizer optimizer = optim.SGD(net.parameters(), lr=learning_rate) # Placeholder to save the loss at each iteration train_loss = [] # Loop over epochs for i in range(n_epochs): # compute network output from inputs in train_data out = net(train_data) # compute network output from inputs in train_data # evaluate loss function loss = loss_fn(out, train_labels) # Clear previous gradients optimizer.zero_grad() # Compute gradients loss.backward() # Update weights optimizer.step() # Store current value of loss train_loss.append(loss.item()) # .item() needed to transform the tensor output of loss_fn to a scalar # Track progress if (i + 1) % (n_epochs // 5) == 0: print(f'iteration {i + 1}/{n_epochs} | loss: {loss.item():.3f}') return train_loss # Set random seeds for reproducibility np.random.seed(1) torch.manual_seed(1) # Initialize network with 10 hidden units net = DeepNetReLU(n_neurons, 10) # Initialize built-in PyTorch MSE loss function loss_fn = nn.MSELoss() # Run gradient descent on data train_loss = train(net, loss_fn, resp_train, stimuli_train) # Plot the training loss over iterations of GD with plt.xkcd(): plot_train_loss(train_loss) ``` **We can further improve our model - please see the Bonus Tutorial when you have time to dive deeper into this model by evaluating and improving its performance by visualizing the weights, looking at performance on test data, switching to a new loss function and adding regularization.** --- # Summary *Estimated timing of tutorial: 1 hour, 20 minutes* We have now covered a number of common and powerful techniques for applying deep learning to decoding from neural data, some of which are common to almost any machine learning problem: * Building and training deep networks using the **PyTorch** `nn.Module` class and built-in **optimizers** * Choosing **loss functions** An important aspect of this tutorial was the `train()` function we wrote in coding exercise 3.2. Note that it can be used to train *any* network to minimize *any* loss function on *any* training data. This is the power of using PyTorch to train neural networks and, for that matter, **any other model**! 
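To make that point concrete, here is a hedged sketch (not from the tutorial) applying the same three gradient-descent steps to a completely different, toy model and loss; the synthetic data, model, and numbers here are made up purely for illustration:

```python
# Hedged sketch: the same evaluate-loss / backward / step recipe, applied to a toy
# one-feature linear model with an L1 loss on synthetic data.
import torch
from torch import nn, optim

torch.manual_seed(0)
x = torch.linspace(-1, 1, 100).unsqueeze(1)   # toy inputs, shape (100, 1)
y_true = 3.0 * x + 0.1 * torch.randn_like(x)  # noisy targets from y = 3x

model = nn.Linear(1, 1)                       # any nn.Module would do here
loss_fn = nn.L1Loss()                         # ... and any differentiable loss
optimizer = optim.SGD(model.parameters(), lr=0.1)

for epoch in range(200):
    loss = loss_fn(model(x), y_true)          # 1. evaluate the loss
    optimizer.zero_grad()                     # 2. compute gradients
    loss.backward()
    optimizer.step()                          # 3. update the parameters

print(model.weight.item())                    # should end up close to 3
```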
There is nothing in the `nn.Module` class that forces us to use `nn.Linear` layers that implement neural network operations. You can actually put anything you want inside the `.__init__()` and `.forward()` methods of this class. As long as its parameters and computations involve only `torch.Tensor`'s, and the model is differentiable, you'll then be able to optimize the parameters of this model in exactly the same way we optimized the deep networks here. What kinds of conclusions can we draw from these sorts of analyses? If we can decode the stimulus well from visual cortex activity, that means that there is information about this stimulus available in visual cortex. Whether or not the animal uses that information to make decisions is not determined from an analysis like this. In fact mice perform poorly in orientation discrimination tasks compared to monkeys and humans, even though they have information about these stimuli in their visual cortex. Why do you think they perform poorly in orientation discrimination tasks? See [this paper](https://www.biorxiv.org/content/10.1101/679324v2) for some potential hypotheses, but this is totally an open question! --- # Bonus <a name='b1'></a> ## Bonus Section 1: Neural network *depth*, *width* and *expressivity* Two important architectural choices that always have to be made when constructing deep feed-forward networks like those used here are * the number of hidden layers, or the network's *depth* * the number of units in each layer, or the layer *widths* Here, we restricted ourselves to networks with a single hidden layer with a width of $M$ units, but it is easy to see how this code could be adapted to arbitrary depths. Adding another hidden layer simply requires adding another `nn.Linear` module to the `__init__()` method and incorporating it into the `.forward()` method. The depth and width of a network determine the set of input/output transormations that it can perform, often referred to as its *expressivity*. The deeper and wider the network, the more *expressive* it is; that is, the larger the class of input/output transformations it can compute. In fact, it turns out that an infinitely wide *or* infinitely deep networks can in principle [compute (almost) *any* input/output transformation](https://en.wikipedia.org/wiki/Universal_approximation_theorem). A classic mathematical demonstration of the power of depth is given by the so-called [XOR problem](https://medium.com/@jayeshbahire/the-xor-problem-in-neural-networks-50006411840b#:~:text=The%20XOr%2C%20or%20%E2%80%9Cexclusive%20or,value%20if%20they%20are%20equal.). This toy problem demonstrates how even a single hidden layer can drastically expand the set of input/output transformations a network can perform, relative to a shallow network with no hidden layers. The key intuition is that the hidden layer allows you to represent the input in a new format, which can then allow you to do almost anything you want with it. The *wider* this hidden layer, the more flexibility you have in this representation. In particular, if you have more hidden units than input units, then the hidden layer representation of the input is higher-dimensional than the raw data representation. This higher dimensionality effectively gives you more "room" to perform arbitrary computations in. It turns out that even with just this one hidden layer, if you make it wide enough you can actually approximate any input/output transformation you want. 
See [here](http://neuralnetworksanddeeplearning.com/chap4.html) for a neat visual demonstration of this. In practice, however, it turns out that increasing depth seems to grant more expressivity with fewer units than increasing width does (for reasons that are not well understood). It is for this reason that truly *deep* networks are almost always used in machine learning, which is why this set of techniques is often referred to as *deep* learning. That said, there is a cost to making networks deeper and wider. The bigger your network, the more parameters (i.e. weights and biases) it has, which need to be optimized! The extra expressivity afforded by higher width and/or depth thus carries with it (at least) two problems: * optimizing more parameters usually requires more data * a more highly parameterized network is more prone to overfit to the training data, so requires more sophisticated optimization algorithms to ensure generalization ## Bonus Section 2: Gradient descent <a name='b21'></a> ### Bonus Section 2.1: Gradient descent equations Here we provide the equations for the three steps of the gradient descent algorithm, as applied to our decoding problem: 1. **Evaluate the loss** on the training data. For a mean squared error loss, this is given by \begin{equation} L = \frac{1}{P}\sum_{n=1}^P (y^{(n)} - \tilde{y}^{(n)})^2 \end{equation} where $y^{(n)}$ denotes the stimulus orientation decoded from the population response $\mathbf{r}^{(n)}$ to the $n$th stimulus in the training data, and $\tilde{y}^{(n)}$ is the true orientation of that stimulus. $P$ denotes the total number of data samples in the training set. In the syntax of our `train()` function above, $\mathbf{r}^{(n)}$ is given by `train_data[n, :]` and $\tilde{y}^{(n)}$ by `train_labels[n]`. 2. **Compute the gradient of the loss** with respect to each of the network weights. In our case, this entails computing the quantities \begin{equation} \frac{\partial L}{\partial \mathbf{W}^{in}}, \frac{\partial L}{\partial \mathbf{b}^{in}}, \frac{\partial L}{\partial \mathbf{W}^{out}}, \frac{\partial L}{\partial \mathbf{b}^{out}} \end{equation} Usually, we would require lots of math in order to derive each of these gradients, and lots of code to compute them. But this is where PyTorch comes to the rescue! Using a cool technique called [automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation), PyTorch automatically calculates these gradients when the `.backward()` function is called. More specifically, when this function is called on a particular variable (e.g. `loss`, as above), PyTorch will compute the gradients with respect to each network parameter. These are computed and stored behind the scenes, and can be accessed through the `.grad` attribute of each of the network's parameters. As we saw above, however, we actually never need to look at or call these gradients when implementing gradient descent, as this can be taken care of by PyTorch's built-in optimizers, like `optim.SGD`. 3. **Update the network weights** by descending the gradient: \begin{align} \mathbf{W}^{in} &\leftarrow \mathbf{W}^{in} - \alpha \frac{\partial L}{\partial \mathbf{W}^{in}} \\ \mathbf{b}^{in} &\leftarrow \mathbf{b}^{in} - \alpha \frac{\partial L}{\partial \mathbf{b}^{in}} \\ \mathbf{W}^{out} &\leftarrow \mathbf{W}^{out} - \alpha \frac{\partial L}{\partial \mathbf{W}^{out}} \\ \mathbf{b}^{out} &\leftarrow \mathbf{b}^{out} - \alpha \frac{\partial L}{\partial \mathbf{b}^{out}} \end{align} where $\alpha$ is called the **learning rate**. 
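As a hedged sketch of what `optim.SGD` does for us in step 3 (ours, not tutorial code), the update with learning rate $\alpha$ can be written out by hand; the toy network and data below are placeholders:

```python
# Hedged sketch: one manual gradient-descent update, p <- p - alpha * dL/dp.
import torch
from torch import nn

torch.manual_seed(0)
net = nn.Linear(5, 1)                     # stand-in for the decoding network
r = torch.randn(10, 5)                    # fake "neural responses"
y_tilde = torch.randn(10, 1)              # fake "true orientations"

loss = ((net(r) - y_tilde) ** 2).mean()   # step 1: evaluate the (MSE) loss
loss.backward()                           # step 2: compute gradients with autograd

alpha = 1e-4                              # learning rate
with torch.no_grad():                     # step 3: descend the gradient by hand
    for p in net.parameters():
        p -= alpha * p.grad               # the update plain optim.SGD's .step() performs
        p.grad.zero_()                    # clear gradients before the next iteration
```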
This **hyperparameter** (the learning rate $\alpha$) controls how far we descend the gradient on each iteration. It should be as large as possible so that fewer iterations are needed, but not so large that parameter updates skip over minima in the loss landscape.

While the equations written down here are specific to the network and loss function considered in this tutorial, the code provided above for implementing these three steps is completely general: no matter what loss function or network you are using, exactly the same commands can be used to implement these three steps.

The way that the gradients are calculated is called **backpropagation**. We have a loss function (omitting the output bias for simplicity):

\begin{align}
L &= (y - \tilde{y})^2 \\
&= (\mathbf{W}^{out} \mathbf{h} - \tilde{y})^2
\end{align}

where $\mathbf{h} = \phi(\mathbf{W}^{in} \mathbf{r} + \mathbf{b}^{in})$.

You may see that $\frac{\partial L}{\partial \mathbf{W}^{out}}$ is simple to calculate, as $\mathbf{W}^{out}$ sits on the outside of the equation (it is also a $1 \times M$ row vector in this case, not a matrix, so the derivative is standard):

\begin{equation}
\frac{\partial L}{\partial \mathbf{W}^{out}} = 2 \, (y - \tilde{y}) \, \mathbf{h}^\top
\end{equation}

Now let's compute the derivative with respect to $\mathbf{W}^{in}$ using the chain rule. Note that each entry is non-zero only when the corresponding hidden unit is active (positive), due to the ReLU $\phi$:

\begin{align}
\frac{\partial L}{\partial W^{in}_{ij}} &= \frac{\partial L}{\partial y} \frac{\partial y}{\partial h_i} \frac{\partial h_i}{\partial W^{in}_{ij}} \\
&= \begin{cases} 2 \, (y - \tilde{y}) \, W^{out}_{i} \, r_j & \text{if } h_i > 0 \\ 0 & \text{else} \end{cases}
\end{align}

It is most efficient to compute the derivative once for the last layer, then once for the next layer and multiply by the previous layer's derivative and so on using the chain rule. Each of these operations is relatively fast, making training of deep networks feasible.

The command `loss.backward()` computes these gradients for the defined `loss` with respect to each network parameter. The computation is done using [automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation), which implements backpropagation. Note that this works no matter how big/small the network is, allowing us to perform gradient descent for any deep network model built using PyTorch.

<a name='b22'></a>
### Bonus Section 2.2: *Stochastic* gradient descent (SGD) vs. gradient descent (GD)

In this tutorial, we used the gradient descent algorithm, which differs in a subtle yet very important way from the more commonly used **stochastic gradient descent (SGD)** algorithm. The key difference is in the very first step of each iteration, where in the GD algorithm we evaluate the loss *at every data sample in the training set*. In SGD, on the other hand, we evaluate the loss only at a random subset of data samples from the full training set, called a **mini-batch**. At each iteration, we randomly sample a mini-batch to perform steps 1-3 on. All the above equations still hold, but now the $P$ data samples $\mathbf{r}^{(n)}, \tilde{y}^{(n)}$ denote a mini-batch of $P$ random samples from the training set, rather than the whole training set.

There are several reasons why one might want to use SGD instead of GD. The first is that the training set might be too big, so that we can't actually evaluate the loss on every single data sample in it.
In this case, GD is simply infeasible, so we have no choice but to turn to SGD, which bypasses the restrictive memory demands of GD by sub-sampling the training set into smaller mini-batches.

But, even when GD is feasible, SGD often turns out to be better. The stochasticity induced by the extra random sampling step in SGD effectively adds some noise to the search for minima of the loss function. This can be really useful for escaping poor local minima, and helps ensure that whatever minimum we converge to is a good one. This is particularly important when networks are wider and/or deeper, in which case the large number of parameters can lead to overfitting.

Here, we used only GD because (1) it is simpler, and (2) it suffices for the problem being considered here. Because we have so many neurons in our data set, decoding is not too challenging and doesn't require a particularly deep or wide network. The relatively small number of parameters in our deep network can therefore be optimized without a problem using GD. A minimal sketch of how the `train()` function above could be adapted to mini-batch SGD follows below.
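Below is a hedged sketch (not part of the tutorial) of how the `train()` function from Coding Exercise 3.2 could be adapted to mini-batch SGD; the function name `train_sgd` and the `batch_size` default are ours:

```python
# Hedged sketch: mini-batch SGD variant of the tutorial's full-batch train() loop.
import torch
from torch import optim

def train_sgd(net, loss_fn, train_data, train_labels,
              n_epochs=50, batch_size=20, learning_rate=1e-4):
    """Illustrative mini-batch SGD; assumes inputs shaped like those used above."""
    optimizer = optim.SGD(net.parameters(), lr=learning_rate)
    train_loss = []
    n_samples = train_data.shape[0]
    for _ in range(n_epochs):
        perm = torch.randperm(n_samples)              # new random order every epoch
        for start in range(0, n_samples, batch_size):
            idx = perm[start:start + batch_size]      # indices of this mini-batch
            loss = loss_fn(net(train_data[idx]),      # step 1: loss on the mini-batch
                           train_labels[idx])
            optimizer.zero_grad()                     # step 2: compute gradients
            loss.backward()
            optimizer.step()                          # step 3: update the parameters
            train_loss.append(loss.item())
    return train_loss
```

Calling `train_sgd(net, loss_fn, resp_train, stimuli_train)` would then perform one parameter update per mini-batch rather than one per epoch.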
1d92fc3d10dd29e692b0535d6bc23aa0620b7bfa
72,759
ipynb
Jupyter Notebook
tutorials/W2D1_DeepLearning/W2D1_Tutorial1.ipynb
Beilinson/course-content
b74c630bec7002abe2f827ff8e0707f9bbb43f82
[ "CC-BY-4.0" ]
null
null
null
tutorials/W2D1_DeepLearning/W2D1_Tutorial1.ipynb
Beilinson/course-content
b74c630bec7002abe2f827ff8e0707f9bbb43f82
[ "CC-BY-4.0" ]
null
null
null
tutorials/W2D1_DeepLearning/W2D1_Tutorial1.ipynb
Beilinson/course-content
b74c630bec7002abe2f827ff8e0707f9bbb43f82
[ "CC-BY-4.0" ]
null
null
null
50.492019
1,248
0.633818
true
14,130
Qwen/Qwen-72B
1. YES 2. YES
0.795658
0.774583
0.616304
__label__eng_Latn
0.993485
0.27021
# 亜臨界ホップ分岐の標準形 \begin{equation} \begin{aligned} \dot{x}_0 = \lambda x_0 - \omega x_1 + x_0 \left[ c_1 (x_0^2 + x_1^2) - (x_0^2 + x_1^2)^2 \right],\\ \dot{x}_1 = \omega x_0 + \lambda x_1 + x_1 \left[ c_1 (x_0^2 + x_1^2) - (x_0^2 + x_1^2)^2 \right],\\ \end{aligned} \end{equation} ```python import numpy as np import pathfollowing as pf from scipy.integrate import ode, solve_ivp from scipy.linalg import solve import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline sns.set('poster', 'whitegrid', 'dark', rc={"lines.linewidth": 2, 'grid.linestyle': '-'}) ``` ベクトル場とその微分 ```python c_lyap = 0.25 def f(x, a): A = np.array([[a[0], -1.0],[1.0, a[0]]]) r = x@x return A @ x + r*(c_lyap - r)*x def fx(x, a): r = x @ x a00 = a[0] + r*(c_lyap - r) + 2*(c_lyap - 2*r)*x[0]**2 a01 = -1.0 + 2*x[0]*x[1]*(c_lyap - 2*r) a10 = 1.0 + 2*x[0]*x[1]*(c_lyap - 2*r) a11 = a[0] + r*(c_lyap-r)+2*(c_lyap-2*r)*x[1]**2 return np.array([[a00, a01],[a10, a11]]) def fa(x, a): return np.array([x[0], x[1]]) ``` 一段射撃法の定式化 ```python def func(x, a): T = x[-1] def f2(t, y): return T * f(y, a) r = ode(f2).set_integrator('dop853') y0 = np.copy(x[:-1]) h = 1.0 r.set_initial_value(y0, 0.0) y1 = r.integrate(r.t+h) x1 = np.zeros(len(x)) x1[:-1] = y1 - y0 x1[-1] = y0[0] return x1 def dfdx(x, a): def df2(t, y, n): z = np.zeros((n+1)*(n+2)) z[:n] = y[n] * f(y[:n], a) z[n] = 0.0 J = np.zeros((n+1, n+1)) J[:n, :n] = y[n] * fx(y[:n], a) J[:n, n] = f(y[:n], a) for m in range(n+1): z[(n+1)*(m+1):(n+1)*(m+2)] = J @ y[(n+1)*(m+1):(n+1)*(m+2)] return z r = ode(df2).set_integrator('dop853') n = len(x)-1 y0 = np.zeros((n+1)*(n+2)) I = np.identity(n+1) y0[:n+1] = np.copy(x) for m in range(n+1): y0[(n+1)*(m+1):(n+1)*(m+2)] = I[:,m] h = 1.0 r.set_initial_value(y0, 0.0).set_f_params(n) y1 = r.integrate(r.t+h) A = -np.identity(n+1) for m in range(n+1): A[:-1,m] += y1[(n+1)*(m+1):(n+1)*(m+2)-1] A[-1,:] = 0.0 A[-1,0] = 1.0 return A def dfda(x, a): T = x[-1] def df2(t, y, n): z = np.zeros(2*(n+1)) z[:n] = T * f(y[:n], np.array([y[n]])) z[n] = 0.0 J = np.zeros((n+1, n+1)) J[:n, :n] = fx(y[:n], np.array([y[n]])) J[:n, n] = fa(y[:n], np.array([y[n]])) z[n+1:] = T * J @ y[n+1:] return z r = ode(df2).set_integrator('dop853') n = len(x)-1 y0 = np.zeros(2*(n+1)) y0[:n] = np.copy(x[:-1]) y0[n] = a[0] y0[-1] = 1.0 h = 1.0 r.set_initial_value(y0, 0.0).set_f_params(n) y1 = r.integrate(r.t+h) y1[-1] = 0.0 return y1[n+1:] ``` 一段射撃法のニュートン法 ```python x = np.array([0.0, 0.5, 2*np.pi]) a = np.array([0.00]) y = np.copy(x) for m in range(10): b = func(y, a) A = dfdx(y, a) y -= solve(A, b) print(y, np.linalg.norm(b)) ``` [0. 0.50000006 6.28318561] 1.534154170863662e-07 [0. 0.50000006 6.28318561] 4.9608493090846714e-14 [0. 0.50000006 6.28318561] 1.047382306668854e-15 [0. 0.50000006 6.28318561] 2.159233940399377e-15 [0. 0.50000006 6.28318561] 1.790180836524724e-15 [0. 0.50000006 6.28318561] 3.6691989157368576e-15 [0. 0.50000006 6.28318561] 2.5823128856601336e-15 [0. 0.50000006 6.28318561] 4.742874840267547e-16 [0. 0.50000006 6.28318561] 5.900916318210353e-16 [0. 
0.50000006 6.28318561] 3.0967063759893227e-15 一段射撃法の追跡 ```python x=np.array([0.0, 0.5, 2*np.pi]) a=np.array([0.0]) bd,bp,lp=pf.pathfollow(x, a, func, dfdx, dfda,nmax=20, h=0.05, epsr=1.0e-10, epsb=1.0e-10, quiet=False) ``` # TY a x R 0.015023155570483651 0.0 -0.4014886326913046 -1.1210151741032572 R 0.035917260293237226 0.0 -0.5137134389482804 -1.0830209867548288 R 0.06238882678998029 0.0 -0.598635550276071 -1.0336654249654065 R 0.09366115556610179 0.0 -0.6596456895451372 -0.9906320993827148 R 0.12878337264553655 0.0 -0.7049271175618399 -0.9606197790481326 R 0.16687167400969033 0.0 -0.7411210689365288 -0.9433512479878107 R 0.20720964186011206 0.0 -0.7720442328849861 -0.9359801983313732 R 0.249254084532369 0.0 -0.7995859087257013 -0.9355073217026986 R 0.2926045087542456 0.0 -0.8246703700354263 -0.939585814511053 R 0.3369675354791696 0.0 -0.8477989761336331 -0.9465799839662211 R 0.3821271995562121 0.0 -0.8692906175761658 -0.9554041805435147 R 0.4279228184124911 0.0 -0.8893765913715667 -0.9653493846126342 R 0.47423331226694254 0.0 -0.9082387879660252 -0.9759536058460752 R 0.5209660539735923 0.0 -0.9260254818232135 -0.9869139548929227 R 0.5680492881279677 0.0 -0.9428602121660424 -0.9980308609630433 R 0.6154266565926063 0.0 -0.958846830250254 -1.0091720497059906 R 0.6630533271597377 0.0 -0.9740734149750584 -1.0202498422828012 R 0.7108932242619 0.0 -0.9886152742495733 -1.0312065207193601 R 0.7589170001148506 0.0 -1.002537020127154 -1.0420044666419417 R 0.8071005408438864 0.0 -1.0158947050854057 -1.052620089102601 一段射撃法では不安定なリミットサイクルを上手く追跡できない ```python bd2,bp2,lp2=pf.pathfollow(x, a, func, dfdx, dfda,nmax=160, h=-0.01, epsr=1.0e-10, epsb=1.0e-10, quiet=True) ``` 2段射撃法の定式化 ```python Npts = 2 def func(x, a): T = x[-1] def f2(t, y): return T * f(y, a) r = ode(f2).set_integrator('dop853', atol=1.0e-14, rtol=1.0e-14) n = (len(x) - 1) // Npts h = 1.0 / Npts x1 = np.zeros(len(x)) y0 = np.copy(x[:n]) r.set_initial_value(y0, 0.0) y1 = r.integrate(r.t+h) x1[:n] = y1 - x[n:2*n] y0 = np.copy(x[n:2*n]) r.set_initial_value(y0, 0.0) y1 = r.integrate(r.t+h) x1[n:2*n] = y1 - x[:n] x1[-1] = x[0] return x1 def dfdx(x, a): def df2(t, y, n): z = np.zeros((n+1)*(n+2)) z[:n] = y[n] * f(y[:n], a) z[n] = 0.0 J = np.zeros((n+1, n+1)) J[:n, :n] = y[n] * fx(y[:n], a) J[:n, n] = f(y[:n], a) for m in range(n+1): z[(n+1)*(m+1):(n+1)*(m+2)] = J @ y[(n+1)*(m+1):(n+1)*(m+2)] return z r = ode(df2).set_integrator('dop853', atol=1.0e-14, rtol=1.0e-14) n = (len(x)-1) // Npts h = 1.0 / Npts A = np.zeros((len(x), len(x))) y0 = np.zeros((n+1)*(n+2)) I = np.identity(n+1) y0[:n] = x[:n] y0[n] = x[-1] for m in range(n+1): y0[(n+1)*(m+1):(n+1)*(m+2)] = I[:,m] r.set_initial_value(y0, 0.0).set_f_params(n) y1 = r.integrate(r.t+h) for m in range(n): A[:n,m] = y1[(n+1)*(m+1):(n+1)*(m+1)+n] A[:n, n:2*n] = -np.identity(n) A[:n, -1] = y1[-(n+1):-1] y0 = np.zeros((n+1)*(n+2)) y0[:n] = x[n:2*n] y0[n] = x[-1] for m in range(n+1): y0[(n+1)*(m+1):(n+1)*(m+2)] = I[:,m] r.set_initial_value(y0, 0.0).set_f_params(n) y1 = r.integrate(r.t+h) for m in range(n): A[n:2*n,n+m] = y1[(n+1)*(m+1):(n+1)*(m+1)+n] A[n:2*n, :n] = -np.identity(n) A[n:2*n, -1] = y1[-(n+1):-1] A[-1,0] = 1.0 return A def dfda(x, a): T = x[-1] def df2(t, y, n): z = np.zeros(2*(n+1)) z[:n] = T * f(y[:n], np.array([y[n]])) z[n] = 0.0 J = np.zeros((n+1, n+1)) J[:n, :n] = fx(y[:n], np.array([y[n]])) J[:n, n] = fa(y[:n], np.array([y[n]])) z[n+1:] = T * J @ y[n+1:] return z n = (len(x) - 1) // Npts h = 1.0 / Npts r = ode(df2).set_integrator('dop853', atol=1e-14, rtol=1e-14) b = np.zeros(len(x)) 
y0 = np.zeros(2*(n+1)) y0[:n] = np.copy(x[:n]) y0[n] = a[0] y0[-1] = 1.0 r.set_initial_value(y0, 0.0).set_f_params(n) y1 = r.integrate(r.t+h) b[:n] = y1[n+1:2*n+1] y0[:n] = np.copy(x[n:2*n]) y0[n] = a[0] y0[-1] = 1.0 r.set_initial_value(y0, 0.0).set_f_params(n) y1 = r.integrate(r.t+h) b[n:2*n] = y1[n+1:2*n+1] return b ``` 2段射撃法のニュートン法 ```python x = np.array([0.0, 0.5, 0.0, -0.5, 2*np.pi]) a = np.array([0.0]) y = np.copy(x) for m in range(10): b = func(y, a) A = dfdx(y, a) y -= solve(A, b) print(y[:2], y[-1], np.linalg.norm(b)) ``` [1.03152808e-30 5.00000000e-01] 6.283185307179601 5.9170423507905885e-15 [-3.85185989e-33 5.00000000e-01] 6.283185307179601 3.4577177419348924e-16 [-3.85185989e-33 5.00000000e-01] 6.283185307179601 1.422896290682169e-16 [-3.85185989e-33 5.00000000e-01] 6.283185307179601 1.42289629068217e-16 [-3.85185989e-33 5.00000000e-01] 6.283185307179601 1.4228962906821706e-16 [-3.85185989e-33 5.00000000e-01] 6.283185307179601 1.422896290682171e-16 [-3.85185989e-33 5.00000000e-01] 6.283185307179601 1.422896290682171e-16 [-3.85185989e-33 5.00000000e-01] 6.283185307179601 1.422896290682171e-16 [-3.85185989e-33 5.00000000e-01] 6.283185307179601 1.422896290682171e-16 [-3.85185989e-33 5.00000000e-01] 6.283185307179601 1.422896290682171e-16 2段射撃法の追跡 ```python x=np.array([0.0, 0.5, 0.0, -0.5, 2*np.pi]) a=np.array([0.0]) bd,bp,lp=pf.pathfollow(x, a, func, dfdx, dfda,nmax=20, h=0.05, epsr=1.0e-10, epsb=1.0e-10, quiet=False) ``` # TY a x R 0.010238862505482822 -5.403755707824863e-32 -0.3662316500243911 -1.5498606621213413 R 0.023739873764932994 -4.733750501935393e-33 -0.4553158497308608 -1.4976143607705639 R 0.04071774492605943 -4.733750501935393e-33 -0.5326683322583992 -1.4215795060654648 R 0.06124544544111728 -4.733750501935393e-33 -0.595783057204346 -1.33786000086712 R 0.08524514750751452 -4.733750501935393e-33 -0.645843059581797 -1.258955671257655 R 0.11250697657464735 -4.733750501935393e-33 -0.6859340014747413 -1.1920208867688635 R 0.14272871566530332 -4.733750501935393e-33 -0.719272411828372 -1.1394278395780593 R 0.17556223958176542 -4.733750501935393e-33 -0.7482786043200086 -1.1004645408132008 R 0.2106536541412975 -4.733750501935393e-33 -0.7744508738274067 -1.0730099622800326 R 0.24767068404422057 -4.733750501935393e-33 -0.7986132498030031 -1.0546474955082767 R 0.28631720026645063 -4.733750501935393e-33 -0.8212010575834192 -1.043208091386597 R 0.3263380778023389 -4.733750501935393e-33 -0.8424572605441736 -1.0369422447846928 R 0.367518134342238 -4.733750501935393e-33 -0.8625378665466908 -1.034511606951831 R 0.40967811560807316 -4.733750501935393e-33 -0.8815600443684318 -1.0349175874854633 R 0.45266963470701993 -4.733750501935393e-33 -0.8996216456621391 -1.0374220910433045 R 0.49637011577806034 -4.733750501935393e-33 -0.9168085306160499 -1.0414800539483653 R 0.5406782309452516 -4.733750501935393e-33 -0.9331973200173574 -1.0466875369358697 R 0.5855099985558622 -4.733750501935393e-33 -0.9488566373477703 -1.0527435057737162 R 0.6307955473406922 -4.733750501935393e-33 -0.963847921027768 -1.0594221907336139 R 0.6764764793878222 -4.733750501935393e-33 -0.9782261306327295 -1.0665531933909338 ```python x=np.array([0.0, 0.5, 0.0, -0.5, 2*np.pi]) a=np.array([0.0]) bd2,bp2,lp2=pf.pathfollow(x, a, func, dfdx, dfda,nmax=70, h=-0.01, epsr=1.0e-10, epsb=1.0e-10, quiet=False) ``` # TY a x R -0.0016825386140757964 7.02246187503377e-32 -0.253373901911225 -1.5594923177185995 R -0.003250526669919375 -9.500955706734343e-33 -0.23495225480671061 -1.554225015837006 R -0.004707045164395174 
-4.1717537362564895e-32 -0.21684026359514308 -1.5468666346052207 R -0.006055255185025018 -4.745611209927768e-33 -0.19910931589742567 -1.5374020664495056 R -0.007298384840448311 6.937419267494025e-33 -0.18182701631679232 -1.5258313322500723 R -0.008439716743882195 2.1403554471442047e-32 -0.16505638049320132 -1.512169384074778 R -0.009482576120891142 -3.182652306984614e-32 -0.14885513870366288 -1.4964457011495438 R -0.010430319595285854 -1.9254836635551235e-32 -0.1332751587127739 -1.4787036956990314 R -0.011286324690703826 -2.3660363989649884e-33 -0.11836199364344027 -1.4589999486902951 R -0.012053980070587788 -8.15659325687763e-32 -0.10415455686249707 -1.4374032980901197 R -0.012736676526062455 1.2532475427875193e-31 -0.09068492238716055 -1.413993803993903 R -0.01333779870965182 -1.6902129538851887e-32 -0.07797824621488383 -1.3888616159242642 R -0.013860717602919558 2.6857825741020185e-32 -0.06605280133070679 -1.362105767782887 R -0.014308783697889401 -1.394509914872856e-32 -0.054920116993636824 -1.3338329254439085 R -0.014685320865443788 -2.9327019841978325e-34 -0.04458521126164959 -1.3041561108907922 R -0.014993620878689641 1.4120792175793725e-32 -0.03504690457576111 -1.2731934252215826 R -0.015236938555380698 -2.560586211890893e-32 -0.026298201560356352 -1.241066790883206 R -0.015418487480769441 -2.0065881480611696e-32 -0.018326727968370305 -1.2079007312479293 R -0.015541436270575885 2.2033632912675172e-33 -0.011115209854372206 -1.1738212032129862 R -0.015608905332941564 1.7828107967090242e-32 -0.004641982538141977 -1.1389544959791036 L -0.0156250000111336 4.593689218450005e-32 R -0.015623964088195335 -4.399922198324964e-32 0.0011184823347377155 -1.103426206626107 R -0.015589628605790393 3.202952976089426e-32 0.006195042381084744 -1.0673603006244607 R -0.01550885961884735 1.486508627666432e-32 0.01061935236929509 -1.030878263057281 R -0.015384560878159328 -1.2433754754900076e-33 0.014425342875801725 -0.9940983441247259 R -0.015219577809270248 -8.002791040535006e-33 0.01764871694459497 -0.9571349004944583 R -0.015016696438177901 -4.801393121692155e-33 0.02032647075184356 -0.92009783227188 R -0.014778642553314594 1.4581306494120055e-32 0.022496443075133994 -0.8830921138044552 R -0.01450808107363924 1.2399363924074191e-33 0.024196897223117287 -0.8462174152105282 R -0.014207615594889327 3.3847388567702675e-32 0.025466138017625466 -0.8095678104316425 R -0.013879788088257948 -9.544221565995105e-33 0.026342165451563405 -0.7732315667399483 R -0.013527078727935607 3.4788174538783335e-32 0.02686236578172633 -0.737291009975498 R -0.013151905826081452 6.0372022905579326e-33 0.027063240061055163 -0.7018224593255217 R -0.01275662585582855 -1.3691208466656678e-32 0.02698016947061885 -0.6668962251702814 R -0.012343533544876328 4.380385838663099e-33 0.02664721627547096 -0.6325766633876211 R -0.011914862024080257 -9.663232139807833e-32 0.026096958795552194 -0.5989222795100413 R -0.011472782947019975 1.975291754548149e-33 0.02536035857838516 -0.5659858766452528 R -0.011019407010201982 1.975291754548149e-33 0.0244666567454681 -0.5338147383355227 R -0.010556783702487468 9.863896643218951e-33 0.023443299348405482 -0.5024508470879117 R -0.010086901902417945 9.863896643218951e-33 0.022315886177793884 -0.47193112340875404 R -0.009611690030057338 1.9752891526190395e-33 0.021108142978135572 -0.4422876876554451 R -0.009133016288377569 1.9752891526190395e-33 0.019841913542047705 -0.4135481366208599 R -0.008652688893396631 -5.207157985932626e-34 0.01853716943421786 -0.3857358309986061 R -0.008172456288500477 
-2.092009165413256e-33 0.01721203500158411 -0.35887018973805873 R -0.007694007339320203 4.943245187978106e-32 0.015882825442390388 -0.33296698772980665 R -0.007218971506382985 3.98410225662768e-32 0.014564095843033737 -0.30803865368339045 R -0.006748918993544735 -9.281342647265258e-33 0.013268699241951051 -0.28409456546403644 R -0.006285360870901335 1.0054099562571345e-32 0.012007851939089972 -0.2611413405391832 R -0.005829749171520191 -5.033408734445277e-32 0.010791204433264303 -0.23918311954546376 R -0.005383476961893109 -2.967228418416021e-33 0.009626916534295201 -0.2182218413202301 R -0.004947878386515336 3.97635009002737e-33 0.008521735359177319 -0.19825750804726577 R -0.004524228687443472 -3.8615639129132616e-33 0.007481075079187902 -0.1792884394432921 R -0.00411374420006816 1.7393139021441016e-32 0.006509097436007451 -0.16131151516028142 R -0.0037175823266748795 2.6781642490740776e-34 0.0056087921881079845 -0.14432240479882824 R -0.0033368414896467248 4.361491661919753e-35 0.00478205678293946 -0.1283157851208176 R -0.0029725610664046343 -2.0631705852600443e-32 0.004029774675169233 -0.1132855442165176 R -0.0026257213083543747 -7.227249565150428e-33 0.0033518918261041947 -0.09922497252345197 R -0.002297243246276104 -3.057875973042373e-33 0.0027474910243355296 -0.08612694071346937 R -0.0019879885846666606 2.0300780705111897e-33 0.002214863762799272 -0.07398406456210362 R -0.0016987595876376352 1.2618591781704367e-32 0.0017515794930291011 -0.06278885699217213 R -0.0014302989589630704 1.3220683442614323e-32 0.0013545521539083819 -0.05253386754346646 R -0.0011832897188841205 -1.3609665858538619e-33 0.0010201039401040912 -0.0432118095639032 R -0.0009583550802136594 -1.2838213124605414e-32 0.0007440263351997581 -0.03481567544637829 R -0.0007560583262075292 2.9926311315617922e-33 0.0005216384868919923 -0.027338840251318962 R -0.0005769026925587485 1.2084793488618262e-32 0.00034784304707438115 -0.02077515405908025 R -0.00042133125572308094 6.599873378810254e-33 0.00021717963879318026 -0.015119023390295569 R -0.00028972682961939285 4.7552921664299147e-32 0.00012387614548361973 -0.010365482017378391 R -0.0001824118725240042 2.4217752809700947e-32 6.189804611899412e-05 -0.006510251467824137 R -9.964840567176564e-05 2.8478822538315235e-32 2.4996043417941123e-05 -0.0035497914909131197 R -4.163794423291495e-05 -6.99758518339629e-32 6.752251495610047e-06 -0.001481340724932748 R -8.521433803888265e-06 -1.335534449656945e-31 6.252247051094989e-07 -0.00030294776310889576 ```python bd_r = np.array([bd[m]['a'][0] for m in range(len(bd))]) bd_x = np.array([bd[m]['x'][1] for m in range(len(bd))]) bd2_r = np.array([bd2[m]['a'][0] for m in range(len(bd2))]) bd2_x = np.array([bd2[m]['x'][1] for m in range(len(bd2))]) ``` ```python fig = plt.figure(figsize=(8, 5)) ax = fig.add_subplot(111) ax.set_xlim(-0.1,0.1) ax.set_ylim(0, 0.8) ax.set_xlabel(r"$\lambda$") ax.set_ylabel("$x_1$") ax.plot(bd_r, bd_x, '-k') ax.plot(bd2_r, bd2_x, '-k') # plt.savefig("bd_hopf_sub.pdf", bbox_inches='tight') ``` N段射撃法の定式化 ```python Npts = 4 def func(x, a): T = x[-1] def f2(t, y): return T * f(y, a) r = ode(f2).set_integrator('dop853', atol=1.0e-14, rtol=1.0e-14) n = (len(x) - 1) // Npts h = 1.0 / Npts x1 = np.zeros(len(x)) for k in range(Npts-1): y0 = np.copy(x[k*n:(k+1)*n]) r.set_initial_value(y0, 0.0) y1 = r.integrate(r.t+h) x1[k*n:(k+1)*n] = y1 - x[(k+1)*n:(k+2)*n] y0 = np.copy(x[-(n+1):-1]) r.set_initial_value(y0, 0.0) y1 = r.integrate(r.t+h) x1[-(n+1):-1] = y1 - x[:n] x1[-1] = x[0] return x1 def dfdx(x, a): def df2(t, y, n): z 
= np.zeros((n+1)*(n+2)) z[:n] = y[n] * f(y[:n], a) z[n] = 0.0 J = np.zeros((n+1, n+1)) J[:n, :n] = y[n] * fx(y[:n], a) J[:n, n] = f(y[:n], a) for m in range(n+1): z[(n+1)*(m+1):(n+1)*(m+2)] = J @ y[(n+1)*(m+1):(n+1)*(m+2)] return z r = ode(df2).set_integrator('dop853', atol=1.0e-14, rtol=1.0e-14) n = (len(x)-1) // Npts h = 1.0 / Npts A = np.zeros((len(x), len(x))) I = np.identity(n+1) for k in range(Npts-1): y0 = np.zeros((n+1)*(n+2)) y0[:n] = x[k*n:(k+1)*n] y0[n] = x[-1] for m in range(n+1): y0[(n+1)*(m+1):(n+1)*(m+2)] = I[:,m] r.set_initial_value(y0, 0.0).set_f_params(n) y1 = r.integrate(r.t+h) for m in range(n): A[k*n:(k+1)*n,k*n+m] = y1[(n+1)*(m+1):(n+1)*(m+1)+n] A[k*n:(k+1)*n, (k+1)*n:(k+2)*n] = -np.identity(n) A[k*n:(k+1)*n, -1] = y1[-(n+1):-1] y0 = np.zeros((n+1)*(n+2)) y0[:n] = x[-(n+1):-1] y0[n] = x[-1] for m in range(n+1): y0[(n+1)*(m+1):(n+1)*(m+2)] = I[:,m] r.set_initial_value(y0, 0.0).set_f_params(n) y1 = r.integrate(r.t+h) for m in range(n): A[-(n+1):-1,-(n+1)+m] = y1[(n+1)*(m+1):(n+1)*(m+1)+n] A[-(n+1):-1, :n] = -np.identity(n) A[-(n+1):-1, -1] = y1[-(n+1):-1] A[-1,0] = 1.0 return A def dfda(x, a): T = x[-1] def df2(t, y, n): z = np.zeros(2*(n+1)) z[:n] = T * f(y[:n], np.array([y[n]])) z[n] = 0.0 J = np.zeros((n+1, n+1)) J[:n, :n] = fx(y[:n], np.array([y[n]])) J[:n, n] = fa(y[:n], np.array([y[n]])) z[n+1:] = T * J @ y[n+1:] return z n = (len(x) - 1) // Npts h = 1.0 / Npts r = ode(df2).set_integrator('dop853', atol=1e-14, rtol=1e-14) b = np.zeros(len(x)) for k in range(Npts-1): y0 = np.zeros(2*(n+1)) y0[:n] = np.copy(x[k*n:(k+1)*n]) y0[n] = a[0] y0[-1] = 1.0 r.set_initial_value(y0, 0.0).set_f_params(n) y1 = r.integrate(r.t+h) b[k*n:(k+1)*n] = y1[n+1:2*n+1] y0[:n] = np.copy(x[-(n+1):-1]) y0[n] = a[0] y0[-1] = 1.0 r.set_initial_value(y0, 0.0).set_f_params(n) y1 = r.integrate(r.t+h) b[-(n+1):-1] = y1[n+1:2*n+1] return b ``` ```python x = np.array([0.0, 0.5, -0.5, 0.0, 0.0, -0.5, 0.5, 0.0, 2*np.pi]) a = np.array([0.0]) y = np.copy(x) for m in range(10): b = func(y, a) A = dfdx(y, a) y -= solve(A, b) print(y[:2], y[-1], np.linalg.norm(b)) ``` [7.00143703e-31 5.00000000e-01] 6.283185307179599 3.660606886031751e-15 [6.4594719e-34 5.0000000e-01] 6.283185307179599 5.5511151231256336e-17 [6.4594719e-34 5.0000000e-01] 6.283185307179599 5.551115123125776e-17 [6.4594719e-34 5.0000000e-01] 6.283185307179599 5.5511151231257796e-17 [6.4594719e-34 5.0000000e-01] 6.283185307179599 5.5511151231257815e-17 [6.4594719e-34 5.0000000e-01] 6.283185307179599 5.551115123125783e-17 [6.4594719e-34 5.0000000e-01] 6.283185307179599 5.551115123125783e-17 [6.4594719e-34 5.0000000e-01] 6.283185307179599 5.551115123125783e-17 [6.4594719e-34 5.0000000e-01] 6.283185307179599 5.5511151231257815e-17 [6.4594719e-34 5.0000000e-01] 6.283185307179599 5.5511151231257815e-17 ```python x = np.array([0.0, 0.5, -0.5, 0.0, 0.0, -0.5, 0.5, 0.0, 2*np.pi]) a=np.array([0.0]) bd2,bp2,lp2=pf.pathfollow(x, a, func, dfdx, dfda,nmax=70, h=-0.01, epsr=1.0e-10, epsb=1.0e-10, quiet=False) ``` # TY a x R -0.001210268696060905 7.198167504054491e-32 -0.2587215678218521 -2.1918394978718214 R -0.0023612070190184293 -7.21711024583157e-32 -0.24552966875223967 -2.188928139255222 R -0.00345408333781922 -8.814414118241897e-33 -0.23248150764202807 -2.184435318491585 R -0.004490171124609288 -1.8836903516904824e-31 -0.21960270477940474 -2.1783524761134347 R -0.005470747503757936 7.587450033747564e-33 -0.20691807158734546 -2.1706766196547833 R -0.006397091866208182 1.218589692112546e-32 -0.19445146066600277 -2.161410283164076 R -0.007270484549517834 
5.966658182570574e-32 -0.18222562760231445 -2.1505614493633103 R -0.008092205583572721 2.527208452493161e-33 -0.17026210563868416 -2.138143436274803 R -0.008863533501607799 3.4767697796386317e-31 -0.1585810940735847 -2.12417475045875 R -0.009585744215846688 -6.254627325329278e-32 -0.14720136104662054 -2.1086789092818727 R -0.010260109956806077 6.568973293615825e-32 -0.13614016114321376 -2.091684234867552 R -0.010887898275042022 -7.37911432883456e-33 -0.12541316804322206 -2.073223622561223 R -0.011470371103924827 2.6897639083149237e-32 -0.11503442223685624 -2.0533342868804954 R -0.012008783881813432 4.5094954688537154e-32 -0.10501629364293395 -2.032057488010041 R -0.012504384731871241 4.4044516986361624e-32 -0.09536945879114277 -2.0094382419466923 R -0.012958413697611647 -1.8013976989168722e-32 -0.08610289207343917 -1.985525017404777 R -0.013372102032167852 -8.057160079642488e-33 -0.07722387043117984 -1.9603694225568327 R -0.013746671539196217 3.591256326304304e-32 -0.06873799072507682 -1.934025884614749 R -0.014083333963255493 -2.2369431992181358e-32 -0.060649198934828244 -1.9065513251547643 R -0.014383290427469855 1.15808779063889e-31 -0.05295983025446071 -1.878004833959784 R -0.014647730916249378 5.589821366067934e-32 -0.04567065908753618 -1.8484473439992686 R -0.014877833800841966 4.67864545963793e-33 -0.038780957902853 -1.8179413099935147 R -0.015074765405485647 -7.872565909864388e-33 -0.03228856388506306 -1.7865503928201891 R -0.015239679611956656 4.771081554658003e-32 -0.02618995230471244 -1.7543391518201017 R -0.015373717500330621 7.089405839613484e-32 -0.02048031553706693 -1.7213727468501026 R -0.01547800702381367 -2.94317736596072e-32 -0.015153646677448116 -1.6877166517173021 R -0.015553662715546762 1.49905132741321e-32 -0.010202826730966983 -1.6534363804136183 R -0.015601785425338808 6.455549030347572e-32 -0.005619714395063983 -1.618597227355915 R -0.015623462084343755 7.581441856016526e-32 -0.0013952375024328264 -1.5832640226273966 L -0.015624999996993403 -2.63228903061851e-32 R -0.015619765495758407 -1.9510485983096633e-32 0.002480514751760856 -1.5475009030127767 R -0.015591754149684547 -1.197455159271e-32 0.006018199612507215 -1.5113710994247993 R -0.015540472060374497 -1.4979111444372854e-32 0.009229129345413008 -1.4749367411349137 R -0.015466948532945649 1.107177705805614e-32 0.012125181464125485 -1.4382586780564164 R -0.015372198412488495 -4.8646439332687734e-33 0.014718710058294211 -1.401396315003434 R -0.015257221408811679 1.0759128892130986e-32 0.01702246168260961 -1.3644074735020226 R -0.015123002394409972 6.708355767788752e-33 0.01904949232825777 -1.3273482597343906 R -0.01497051123049537 -6.25985072474833e-33 0.02081308886988497 -1.2902729544744174 R -0.01480070270515486 -1.2593205181380003e-32 0.022326694106522722 -1.2532339179992278 R -0.014614516483455256 -8.624459643725233e-33 0.023603835867839713 -1.2162815105693687 R -0.014412877068305048 -4.3883776714409886e-33 0.02465806039186443 -1.1794640278742656 R -0.014196693770944185 -6.505417479332634e-34 0.02550287012338774 -1.1428276507734112 R -0.013966860690010966 -5.4069674959442654e-33 0.026151666029330105 -1.106416408609622 R -0.013724256698189849 2.5626020839866466e-33 0.02661769447876687 -1.0702721553284724 R -0.0134697454355274 1.8020379419666742e-34 0.02691399869103761 -1.0344345576065397 R -0.013204175308541026 1.6375931021328905e-32 0.027053374715691886 -0.9989410941701473 R -0.012928379494330415 1.5353178967911281e-33 0.02704833187260692 -0.9638270654741721 R -0.012643175948949334 5.914909028621731e-34 
0.026911057549703788 -0.9291256129072064 R -0.012349367419347565 -7.421523867591449e-33 0.026653386228965858 -0.8948677466936453 R -0.012047741458253014 1.8629519679389478e-32 0.026286772588768968 -0.8610823816741386 R -0.011739070441416399 9.262296157643818e-32 0.025822268511741398 -0.827796380162724 R -0.011424111586686124 -1.8188678061306044e-32 0.025270503812238027 -0.7950346011010523 R -0.011103606974430504 2.744298127360088e-32 0.024641670485688556 -0.7628199547563483 R -0.010778283568866 -5.214762502343715e-33 0.023945510273508244 -0.7311734622398115 R -0.010448853239901116 8.115620501449142e-33 0.02319130533141326 -0.7001143191548607 R -0.010116012785135707 -6.26792995076934e-33 0.022387871785872183 -0.669659962719967 R -0.009780443951700272 -9.816435376251078e-33 0.02154355596253014 -0.6398261417474992 R -0.009442813457655765 -6.272002525047159e-33 0.020666233071698133 -0.6106269888980855 R -0.009103773012706201 -8.248277365433435e-33 0.019763308139014297 -0.5820750946686282 R -0.008763959338011008 3.371553482219906e-33 0.018841718974002195 -0.5541815826110104 R -0.008423994184912879 -3.03490866095637e-32 0.01790794097519896 -0.5269561853172688 R -0.008084484352429866 -6.376702031941655e-33 0.01696799357758452 -0.5004073207452406 R -0.007746021703378907 -3.70464754869927e-32 0.016027448156055026 -0.4745421684962132 R -0.0074091831790322905 -5.677314539924261e-32 0.015091437207383552 -0.44936674569244794 R -0.0070745308122285334 -1.1998506857394412e-32 0.014164664642421327 -0.42488598213770623 R -0.006742611738876729 5.822282226640375e-32 0.013251417029970262 -0.4011037944776565 R -0.006413958207823184 -2.162844721481795e-33 0.012355575643730528 -0.37802315910921935 R -0.006089087589058139 2.1181091615919165e-32 0.011480629173833308 -0.3556461836185353 R -0.005768502380266706 9.778225375746458e-33 0.010629686974595308 -0.33397417655598655 R -0.005452690211734032 3.1168339841328094e-32 0.009805492730238896 -0.31300771538381444 R -0.005142123849644269 -2.2825041836206972e-32 0.009010438430213584 -0.29274671245701456
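Since the Jacobian returned by `dfdx` above is assembled from the variational equations, a quick consistency check against finite differences can catch indexing mistakes in the block structure. This is a hedged, generic sketch: it assumes only that `func(x, a)` returns the residual vector and `dfdx(x, a)` its Jacobian, as in the Newton iteration above; the step size `eps` is an arbitrary choice.

```python
import numpy as np

def check_jacobian(func, dfdx, x, a, eps=1.0e-6):
    """Compare an analytic Jacobian with a central finite-difference estimate.

    Returns the largest entry-wise absolute difference; a value comparable to
    the integration tolerances suggests the analytic Jacobian is consistent
    with the residual function.
    """
    A = np.asarray(dfdx(x, a))
    A_fd = np.zeros_like(A)
    for j in range(len(x)):
        dx = np.zeros_like(x)
        dx[j] = eps
        A_fd[:, j] = (func(x + dx, a) - func(x - dx, a)) / (2.0 * eps)
    return np.max(np.abs(A - A_fd))

# Example usage, reusing x, a, func and dfdx from the cells above:
# print(check_jacobian(func, dfdx, x, a))
```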
578d019d016e5c34da29309d1a28e9c9ae10eed6
59,725
ipynb
Jupyter Notebook
notebooks/continuation/subhopf.ipynb
tmiyaji/sgc164
660f61b72a3898f8e287feb464134f5c48f9383e
[ "BSD-3-Clause" ]
3
2021-02-01T15:29:43.000Z
2021-10-01T13:20:21.000Z
notebooks/continuation/subhopf.ipynb
tmiyaji/sgc164
660f61b72a3898f8e287feb464134f5c48f9383e
[ "BSD-3-Clause" ]
null
null
null
notebooks/continuation/subhopf.ipynb
tmiyaji/sgc164
660f61b72a3898f8e287feb464134f5c48f9383e
[ "BSD-3-Clause" ]
1
2020-12-20T07:46:22.000Z
2020-12-20T07:46:22.000Z
70.264706
22,460
0.728221
true
13,386
Qwen/Qwen-72B
1. YES 2. YES
0.810479
0.63341
0.513366
__label__yue_Hant
0.135212
0.03105
# One-dimensional Lagrange Interpolation

The problem of interpolation, or finding the value of a function at an arbitrary point $X$ inside a given domain provided we have discrete known values of the function inside the same domain, is at the heart of the finite element method. In this notebook we use Lagrange interpolation, where the approximation $\hat f(x)$ to the function $f(x)$ is built as:

\begin{equation} \hat f(x)={L^I}(x)f^I \end{equation}

In the expression above $L^I$ represents the $I$-th Lagrange polynomial of order $n-1$ and $f^1, f^2, \ldots, f^n$ are the $n$ known values of the function. Here we are using the summation convention over the repeated superscripts.

The $I$-th Lagrange polynomial is given by the product:

\begin{equation} {L^I}(x)=\prod_{J=1, J \ne I}^{n}{\frac{{\left( {x - {x^J}} \right)}}{{\left( {{x^I} - {x^J}} \right)}}} \end{equation}

in the domain $x\in[-1.0,1.0]$.

We wish to interpolate the function $ f(x)=x^3+4x^2-10 $ assuming we know its value at the points $x=-1.0$, $x=1.0$ and $x=0.0$.

```python
from __future__ import division
import numpy as np
from scipy import interpolate
import sympy as sym
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
```

```python
%matplotlib notebook
sym.init_printing()
```

First we use a function to generate the Lagrange polynomial of the required order at point $i$:

```python
def basis_lagrange(x_data, var, cont):
    """Find the basis for the Lagrange interpolant"""
    prod = sym.prod((var - x_data[i])/(x_data[cont] - x_data[i])
                    for i in range(len(x_data)) if i != cont)
    return sym.simplify(prod)
```

We now define the function $ f(x)=x^3+4x^2-10 $:

```python
fun = lambda x: x**3 + 4*x**2 - 10
```

```python
x = sym.symbols('x')
x_data = np.array([-1, 1, 0])
f_data = fun(x_data)
```

And obtain the Lagrange polynomials using:

```python
basis = []
for cont in range(len(x_data)):
    basis.append(basis_lagrange(x_data, x, cont))
    sym.pprint(basis[cont])
```

x⋅(x - 1) ───────── 2 x⋅(x + 1) ───────── 2 2 - x + 1

which are shown in the following plots:

```python
npts = 101
x_eval = np.linspace(-1, 1, npts)
basis_num = sym.lambdify((x), basis, "numpy")  # Create a lambda function for the polynomials
```

```python
plt.figure(figsize=(6, 4))
for k in range(3):
    y_eval = basis_num(x_eval)[k]
    plt.plot(x_eval, y_eval)
```

<IPython.core.display.Javascript object>

```python
y_interp = sym.simplify(sum(f_data[k]*basis[k] for k in range(3)))
y_interp
```

Now we plot the complete approximating polynomial, the actual function and the points where the function was known.

```python
y_interp = sum(f_data[k]*basis_num(x_eval)[k] for k in range(3))
y_original = fun(x_eval)

plt.figure(figsize=(6, 4))
plt.plot(x_eval, y_original)
plt.plot(x_eval, y_interp)
plt.plot([-1, 1, 0], f_data, 'ko')
plt.show()
```

<IPython.core.display.Javascript object>

The next cell changes the format of the notebook.

```python
from IPython.core.display import HTML
def css_styling():
    styles = open('../styles/custom_barba.css', 'r').read()
    return HTML(styles)
css_styling()
```
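Because the degree-2 interpolant through three points is unique, the symbolic construction above can be cross-checked against `scipy.interpolate.lagrange`, which is already imported but never used. A small numerical sketch (the naive product-form evaluator below is written only for this comparison and is not part of the original notebook):

```python
import numpy as np
from scipy import interpolate

fun = lambda x: x**3 + 4*x**2 - 10
x_data = np.array([-1.0, 1.0, 0.0])
f_data = fun(x_data)

# SciPy's reference interpolant (a numpy.poly1d object)
poly_ref = interpolate.lagrange(x_data, f_data)

def lagrange_eval(x, x_data, f_data):
    """Evaluate the Lagrange interpolant in its product form."""
    total = np.zeros_like(x)
    for i, xi in enumerate(x_data):
        Li = np.ones_like(x)
        for j, xj in enumerate(x_data):
            if j != i:
                Li *= (x - xj) / (xi - xj)
        total += f_data[i] * Li
    return total

x_eval = np.linspace(-1.0, 1.0, 101)
print(np.allclose(poly_ref(x_eval), lagrange_eval(x_eval, x_data, f_data)))  # expect True
```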
58a534a611940b799937c611a4ccef212bf5295a
151,725
ipynb
Jupyter Notebook
notebooks/.ipynb_checkpoints/LAGRANGE1D-checkpoint.ipynb
jomorlier/FEM-Notes
3b81053aee79dc59965c3622bc0d0eb6cfc7e8ae
[ "MIT" ]
1
2020-04-15T01:53:14.000Z
2020-04-15T01:53:14.000Z
notebooks/.ipynb_checkpoints/LAGRANGE1D-checkpoint.ipynb
jomorlier/FEM-Notes
3b81053aee79dc59965c3622bc0d0eb6cfc7e8ae
[ "MIT" ]
null
null
null
notebooks/.ipynb_checkpoints/LAGRANGE1D-checkpoint.ipynb
jomorlier/FEM-Notes
3b81053aee79dc59965c3622bc0d0eb6cfc7e8ae
[ "MIT" ]
1
2020-05-25T17:19:53.000Z
2020-05-25T17:19:53.000Z
81.969206
42,435
0.718853
true
967
Qwen/Qwen-72B
1. YES 2. YES
0.957912
0.931463
0.892259
__label__eng_Latn
0.938223
0.911351
# Density Operator and Matrix ## Imports ```python from IPython.display import display ``` ```python # TODO: there is a bug in density.py that is preventing this from working, uncomment to reproduce # from sympy import init_printing # init_printing(use_latex=True) ``` ```python from sympy import * from sympy.core.trace import Tr from sympy.physics.quantum import * from sympy.physics.quantum.density import * from sympy.physics.quantum.spin import ( Jx, Jy, Jz, Jplus, Jminus, J2, JxBra, JyBra, JzBra, JxKet, JyKet, JzKet, ) ``` ## Basic density operator Create a density matrix using symbolic states: ```python psi = Ket('psi') phi = Ket('phi') ``` ```python d = Density((psi,0.5),(phi,0.5)); d ``` Density((|psi>, 0.5),(|phi>, 0.5)) ```python d.states() ``` (|psi>, |phi>) ```python d.probs() ``` (0.5, 0.5) ```python d.doit() ``` 0.5*|phi><phi| + 0.5*|psi><psi| ```python Dagger(d) ``` Density((|psi>, 0.5),(|phi>, 0.5)) ```python A = Operator('A') ``` ```python d.apply_op(A) ``` Density((A*|psi>, 0.5),(A*|phi>, 0.5)) ## Density operator for spin states Now create a density operator using spin states: ```python up = JzKet(S(1)/2,S(1)/2) down = JzKet(S(1)/2,-S(1)/2) ``` ```python d2 = Density((up,0.5),(down,0.5)); d2 ``` Density((|1/2,1/2>, 0.5),(|1/2,-1/2>, 0.5)) ```python represent(d2) ``` Matrix([ [0.5, 0], [ 0, 0.5]]) ```python d2.apply_op(Jz) ``` Density((Jz*|1/2,1/2>, 0.5),(Jz*|1/2,-1/2>, 0.5)) ```python qapply(_) ``` Density((hbar*|1/2,1/2>/2, 0.5),(-hbar*|1/2,-1/2>/2, 0.5)) ```python qapply((Jy*d2).doit()) ``` 0.5*Jy*|1/2,-1/2><1/2,-1/2| + 0.5*Jy*|1/2,1/2><1/2,1/2| ## Evaluate entropy of the density matrices ```python entropy(d2) ``` log(2)/2 ```python entropy(represent(d2)) ``` log(2)/2 ```python entropy(represent(d2,format="numpy")) ``` (0.69314718055994529-0j) ```python entropy(represent(d2,format="scipy.sparse")) ``` (0.69314718055994529-0j) ## Density operators with tensor products ```python A, B, C, D = symbols('A B C D',commutative=False) t1 = TensorProduct(A,B,C) d = Density([t1, 1.0]) d.doit() t2 = TensorProduct(A,B) t3 = TensorProduct(C,D) d = Density([t2, 0.5], [t3, 0.5]) d.doit() ``` 0.5*(A*Dagger(A))x(B*Dagger(B)) + 0.5*(C*Dagger(C))x(D*Dagger(D)) ```python d = Density([t2+t3, 1.0]) d.doit() ``` 1.0*(A*Dagger(A))x(B*Dagger(B)) + 1.0*(A*Dagger(C))x(B*Dagger(D)) + 1.0*(C*Dagger(A))x(D*Dagger(B)) + 1.0*(C*Dagger(C))x(D*Dagger(D)) ## Trace operators on density operators with spin states ```python d = Density([JzKet(1,1),0.5],[JzKet(1,-1),0.5]); t = Tr(d); t ``` Tr(Density((|1,1>, 0.5),(|1,-1>, 0.5))) ```python t.doit() ``` 1.00000000000000 ## Partial Trace on density operators with mixed state ```python A, B, C, D = symbols('A B C D',commutative=False) t1 = TensorProduct(A,B,C) d = Density([t1, 1.0]) d.doit() t2 = TensorProduct(A,B) t3 = TensorProduct(C,D) d = Density([t2, 0.5], [t3, 0.5]) d ``` Density((AxB, 0.5),(CxD, 0.5)) ```python tr = Tr(d,[1]) tr.doit() ``` 0.5*A*Dagger(A)*Tr(B*Dagger(B)) + 0.5*C*Dagger(C)*Tr(D*Dagger(D)) ## Partial trace on density operators with spin states ```python tp1 = TensorProduct(JzKet(1,1), JzKet(1,-1)) ``` Trace out the `0` index: ```python d = Density([tp1,1]); t = Tr(d,[0]) t ``` Tr((|1,1>x|1,-1>, 1)) ```python t.doit() ``` |1,-1><1,-1| Trace out the `1` index: ```python t = Tr(d,[1]) t ``` Tr((|1,1>x|1,-1>, 1)) ```python t.doit() ``` |1,1><1,1| ## Examples of `qapply()` on density matrices with spin states ```python psi = Ket('psi') phi = Ket('phi') u = UnitaryOperator() d = Density((psi,0.5),(phi,0.5)); d qapply(u*d) ``` O*Density((|psi>, 
0.5),(|phi>, 0.5)) ```python up = JzKet(S(1)/2, S(1)/2) down = JzKet(S(1)/2, -S(1)/2) d = Density((up,0.5),(down,0.5)) uMat = Matrix([[0,1],[1,0]]) qapply(uMat*d) ``` Matrix([ [ 0, Density((|1/2,1/2>, 0.5),(|1/2,-1/2>, 0.5))], [Density((|1/2,1/2>, 0.5),(|1/2,-1/2>, 0.5)), 0]]) ## Example of `qapply()` on density matrices with qubits ```python from sympy.physics.quantum.gate import UGate from sympy.physics.quantum.qubit import Qubit uMat = UGate((0,), Matrix([[0,1],[1,0]])) d = Density([Qubit('0'),0.5],[Qubit('1'), 0.5]) d ``` Density((|0>, 0.5),(|1>, 0.5)) ```python #after applying Not gate qapply(uMat*d) ``` U((0,),Matrix([ [0, 1], [1, 0]]))*Density((|0>, 0.5),(|1>, 0.5))
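Independently of SymPy's symbolic machinery, the von Neumann entropy can be evaluated numerically from the eigenvalues of the $2\times 2$ matrix `represent(d2)` shown above. The sketch below uses plain NumPy and the natural-logarithm convention, so the expected value is $\ln 2 \approx 0.693$, matching the numpy/scipy results printed earlier.

```python
import numpy as np

def von_neumann_entropy(rho):
    """-sum_i l_i ln(l_i) over the nonzero eigenvalues of a density matrix."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerically zero eigenvalues
    return float(-np.sum(evals * np.log(evals)))

rho = np.array([[0.5, 0.0],
                [0.0, 0.5]])              # the matrix found for d2 above
print(von_neumann_entropy(rho))           # ~0.6931 = ln(2): maximally mixed qubit
```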
bd0b0a21c066c3979424793b744f88d6214a8f24
16,217
ipynb
Jupyter Notebook
notebooks/density.ipynb
gvvynplaine/quantum_notebooks
58783823596465fe2d6c494c2cc3a53ae69a9752
[ "BSD-3-Clause" ]
42
2017-10-17T22:44:27.000Z
2022-03-28T06:26:46.000Z
notebooks/density.ipynb
gvvynplaine/quantum_notebooks
58783823596465fe2d6c494c2cc3a53ae69a9752
[ "BSD-3-Clause" ]
2
2017-10-09T05:16:41.000Z
2018-09-22T03:08:29.000Z
notebooks/density.ipynb
gvvynplaine/quantum_notebooks
58783823596465fe2d6c494c2cc3a53ae69a9752
[ "BSD-3-Clause" ]
12
2017-10-09T04:22:19.000Z
2022-03-28T06:25:21.000Z
17.801317
142
0.446075
true
1,854
Qwen/Qwen-72B
1. YES 2. YES
0.907312
0.839734
0.761901
__label__yue_Hant
0.317466
0.608483
# The Beta Distribution

Unlike most other probability distributions, the beta distribution is used less to describe data found in nature and more to describe the result of Bayesian estimation. Bayesian estimation describes the parameter being estimated not as a single number but as a distribution.

The probability density function of the beta distribution has two parameters, $a$ and $b$, and is defined mathematically as

$$ \begin{align} \text{Beta}(x;a,b) & = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\, x^{a-1}(1-x)^{b-1} \end{align} $$

where

$$ \Gamma(a) = \int_0^\infty x^{a-1} e^{-x}\, dx $$

is the gamma function. As the figure shows, the probability density function of the beta distribution is nonzero only for values between 0 and 1; such a function is said to have finite support.

```python
xx = np.linspace(0, 1, 1000)

plt.subplot(221)
plt.fill(xx, sp.stats.beta(1.0001, 1.0001).pdf(xx));
plt.ylim(0, 6)
plt.title("(A) a=1, b=1")

plt.subplot(222)
plt.fill(xx, sp.stats.beta(4, 2).pdf(xx));
plt.ylim(0, 6)
plt.title("(B) a=4, b=2, mode={0}".format((4-1)/(4+2-2)))

plt.subplot(223)
plt.fill(xx, sp.stats.beta(8, 4).pdf(xx));
plt.ylim(0, 6)
plt.title("(C) a=8, b=4, mode={0}".format((8-1)/(8+4-2)))

plt.subplot(224)
plt.fill(xx, sp.stats.beta(30, 12).pdf(xx));
plt.ylim(0, 6)
plt.title("(D) a=30, b=12, mode={0}".format((30-1)/(30+12-2)))

plt.tight_layout()
plt.show()
```

If the plots above are read as the results of Bayesian estimation, each panel corresponds to the following statement about the parameter:

* (A): nothing can be estimated (no information).
* (B): the parameter value is most likely 0.75 (low precision).
* (C): the parameter value is most likely 0.70 (medium precision).
* (D): the parameter value is most likely 0.725 (high precision).

The expected value, mode, and variance of the beta distribution are:

* Expected value $$E[x] = \dfrac{a}{a+b}$$
* Mode $$\dfrac{a - 1}{a+b - 2}$$
* Variance $$\text{Var}[x] = \dfrac{ab}{(a+b)^2(a+b+1)}$$
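A quick numerical check of the three formulas above, using `scipy.stats`; the parameter choice $a=8$, $b=4$ matches panel (C), for which the mode should be $7/10 = 0.7$.

```python
import numpy as np
import scipy.stats as st

a, b = 8, 4
dist = st.beta(a, b)

print(a / (a + b), dist.mean())                        # expected value, both 2/3
print(a * b / ((a + b)**2 * (a + b + 1)), dist.var())  # variance, both ~0.01709
xx = np.linspace(0, 1, 100001)
print((a - 1) / (a + b - 2), xx[np.argmax(dist.pdf(xx))])  # mode, both ~0.7
```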
6ed1612186260640ac73e5737fbb551c9c74097d
245,850
ipynb
Jupyter Notebook
10. 기초 확률론3 - 확률 분포 모형/08. 베타 분포 (파이썬 버전).ipynb
zzsza/Datascience_School
da27ac760ca8ad1a563a0803a08b332d560cbdc0
[ "MIT" ]
39
2017-04-30T06:17:21.000Z
2022-01-07T07:50:11.000Z
10. 기초 확률론3 - 확률 분포 모형/08. 베타 분포 (파이썬 버전).ipynb
yeajunseok/Datascience_School
da27ac760ca8ad1a563a0803a08b332d560cbdc0
[ "MIT" ]
null
null
null
10. 기초 확률론3 - 확률 분포 모형/08. 베타 분포 (파이썬 버전).ipynb
yeajunseok/Datascience_School
da27ac760ca8ad1a563a0803a08b332d560cbdc0
[ "MIT" ]
32
2017-04-09T16:51:49.000Z
2022-01-23T20:30:48.000Z
39.405353
183
0.499174
true
874
Qwen/Qwen-72B
1. YES 2. YES
0.874077
0.798187
0.697677
__label__kor_Hang
0.999961
0.459269
# Equilibrium concentrations in an acid–base equilibrium

Here we will work through an acid–base equilibrium, taking as our starting point example 16.8 (page 562) from the textbook, where we are asked to find the pH of 0.036 M HNO$_2$:

$$\text{HNO}_2 \rightleftharpoons \text{NO}_2^{-} + \text{H}^{+},$$

$$K_{a} = 4.5 \times 10^{-4}.$$

We will solve this exercise using Python. To do the calculation symbolically we will use a library called [SymPy](https://www.sympy.org/).

```python
import sympy as sym  # Import SymPy
```

```python
# Define the quantities we know
START_KONSENTRASJON = 0.036
KA = 4.5e-4
```

Above we have listed what we know. Let us also list the unknowns we want to determine:

- $[\text{HNO}_2]$ at equilibrium.
- $[\text{NO}_2^{-}]$ at equilibrium.
- $[\text{H}^{+}]$ at equilibrium.

So we have three unknowns. Let us define them as quantities (specifically as [SymPy symbols](https://docs.sympy.org/latest/tutorial/intro.html#a-more-interesting-example)) so that we can calculate with them (this is a bit like introducing $x$ and so on for the unknowns in equations we solve by hand):

```python
# We define the unknown quantities. To save some typing we use
# - HA for the acid HNO2
# - A for the corresponding base NO_2^-
# - H for H^+
c_HA, c_A, c_H = sym.symbols('c_HA c_A c_H')
```

We have now defined the concentrations, but they are still unknown. To determine them we need some equations that relate them to each other. Possible such equations are:

- the acid dissociation constant
- electroneutrality
- mass balances

```python
# Let us start with the acid dissociation constant.
# Here we require that (c_A * c_H)/c_HA equals ("sym.Eq") KA:
ligning1 = sym.Eq((c_A * c_H)/c_HA, KA)
```

We can ask SymPy to print the equation, to check that it looks the way we intended:

```python
ligning1
```

The next equation we can use is that there must be equal amounts of negative and positive charge. Here there are only two charged species, and they have opposite signs, which means that $[\text{A}]^- = [\text{H}]^+$. Let us write that as an equation:

```python
# Electroneutrality:
ligning2 = sym.Eq(c_A, c_H)
```

```python
# Print this equation as well, to double-check:
ligning2
```

In chemical reactions mass is conserved (unless nuclear reactions occur that change the elements themselves). Working directly with mass balances is a bit cumbersome, so here we write a mole balance instead. The reasoning is as follows: we are given the initial concentration of HNO$_2$, so if we know the volume we also know how many moles of HNO$_2$ we have at the start. But then we also know how many moles of the elements H, N and O the system contains. These mole numbers do not change, while the amount of HNO$_2$ can change. In general, chemical reactions do not change the elements, only how they are connected. We can therefore set up three balances here:

- one for hydrogen,
- one for nitrogen,
- one for oxygen.

Let us use nitrogen:

- The total number of moles of nitrogen equals the number of moles of HNO$_2$ we started with.
- At equilibrium, the total number of moles of nitrogen in the system equals the number of moles of HNO$_2$ plus the number of moles of NO$_2^-$ (these are the only two species that contain nitrogen, and each of them contains one nitrogen atom).

In symbols (we divide by the volume to convert to concentrations):

$$[\text{HNO}_2]_{\text{start}} = [\text{HNO}_2]_{\text{equilibrium}} + [\text{NO}_2^-]_{\text{equilibrium}} $$

Let us formulate that as an equation:

```python
ligning3 = sym.Eq(START_KONSENTRASJON, c_HA + c_A)
```

```python
# Print ligning3 to double-check:
ligning3
```

We now have three equations and three unknowns. This we (or in this case, SymPy) can solve:

```python
løsninger = sym.solve([ligning1, ligning2, ligning3], [c_HA, c_A, c_H], dict=True)
```

Here we get two solutions:

```python
løsninger
```

One of the solutions SymPy found gives negative concentrations. That solution is not physically valid, and we keep only the one in which all values are positive:

```python
# Show valid solutions:
gyldige = []
for løsning in løsninger:
    if all(i > 0 for i in løsning.values()):
        gyldige.append(løsning)
        print('Valid solution:')
        print(f'- [HA] = {løsning.get(c_HA)}')
        print(f'- [A^-] = {løsning.get(c_A)}')
        print(f'- [H^+] = {løsning.get(c_H)}')
```

```python
# Let us find the pH:
ph = -sym.log(gyldige[0].get(c_H), 10)
```

```python
# To get a numerical value, we ask SymPy to evaluate the expression:
print(f'pH = {ph.evalf()}')
```

For comparison, the textbook gives $\text{pH} = 2.42$.
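For a single weak acid the same equilibrium can be reduced by hand to a quadratic in $x = [\mathrm{H^+}]$, which gives a SymPy-free numerical cross-check. Like the symbolic treatment above, this sketch assumes ideal activities and neglects the autoionization of water.

```python
import numpy as np

KA = 4.5e-4
C0 = 0.036

# K_a = x^2 / (C0 - x) with x = [H+] = [NO2-], rearranged to
# x^2 + K_a*x - K_a*C0 = 0.
x = max(np.roots([1.0, KA, -KA * C0]))  # keep the positive (physical) root
print('[H+] =', x)                      # ~3.8e-3 M
print('pH  =', -np.log10(x))            # ~2.42, matching the textbook value
```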
ff1c29ec23b2753bd8589738623ceb70c76c0631
8,389
ipynb
Jupyter Notebook
jupyter/syrebase/syrebase.ipynb
andersle/kj1000
9d68e9810c5541ebbe2e4559df8d066a85780129
[ "CC-BY-4.0" ]
null
null
null
jupyter/syrebase/syrebase.ipynb
andersle/kj1000
9d68e9810c5541ebbe2e4559df8d066a85780129
[ "CC-BY-4.0" ]
5
2021-06-21T15:04:15.000Z
2021-11-10T10:58:07.000Z
jupyter/syrebase/syrebase.ipynb
andersle/kj1000
9d68e9810c5541ebbe2e4559df8d066a85780129
[ "CC-BY-4.0" ]
null
null
null
28.056856
418
0.571582
true
1,626
Qwen/Qwen-72B
1. YES 2. YES
0.913677
0.843895
0.771047
__label__nob_Latn
0.996098
0.629733
```python from galgebra.ga import Ga from sympy import symbols from galgebra.printer import Format Format() coords = (et,ex,ey,ez) = symbols('t,x,y,z',real=True) base=Ga('e*t|x|y|z',g=[1,-1,-1,-1],coords=symbols('t,x,y,z',real=True),wedge=False) potential=base.mv('phi','vector',f=True) potential ``` \begin{equation*}phi = \phi ^{t} \boldsymbol{e}_{t} + \phi ^{x} \boldsymbol{e}_{x} + \phi ^{y} \boldsymbol{e}_{y} + \phi ^{z} \boldsymbol{e}_{z}\end{equation*} ```python field=base.grad*potential field ``` \begin{equation*}\left ( \partial_{t} \phi ^{t} + \partial_{x} \phi ^{x} + \partial_{y} \phi ^{y} + \partial_{z} \phi ^{z} \right ) + \left ( \partial_{x} \phi ^{t} + \partial_{t} \phi ^{x} \right ) \boldsymbol{e}_{tx} + \left ( \partial_{y} \phi ^{t} + \partial_{t} \phi ^{y} \right ) \boldsymbol{e}_{ty} + \left ( \partial_{z} \phi ^{t} + \partial_{t} \phi ^{z} \right ) \boldsymbol{e}_{tz} + \left ( \partial_{y} \phi ^{x} - \partial_{x} \phi ^{y} \right ) \boldsymbol{e}_{xy} + \left ( \partial_{z} \phi ^{x} - \partial_{x} \phi ^{z} \right ) \boldsymbol{e}_{xz} + \left ( \partial_{z} \phi ^{y} - \partial_{y} \phi ^{z} \right ) \boldsymbol{e}_{yz}\end{equation*} ```python grad_field = base.grad*field grad_field ``` \begin{equation*}\left ( \partial^{2}_{t} \phi ^{t} - \partial^{2}_{x} \phi ^{t} - \partial^{2}_{y} \phi ^{t} - \partial^{2}_{z} \phi ^{t} \right ) \boldsymbol{e}_{t} + \left ( \partial^{2}_{t} \phi ^{x} - \partial^{2}_{x} \phi ^{x} - \partial^{2}_{y} \phi ^{x} - \partial^{2}_{z} \phi ^{x} \right ) \boldsymbol{e}_{x} + \left ( \partial^{2}_{t} \phi ^{y} - \partial^{2}_{x} \phi ^{y} - \partial^{2}_{y} \phi ^{y} - \partial^{2}_{z} \phi ^{y} \right ) \boldsymbol{e}_{y} + \left ( \partial^{2}_{t} \phi ^{z} - \partial^{2}_{x} \phi ^{z} - \partial^{2}_{y} \phi ^{z} - \partial^{2}_{z} \phi ^{z} \right ) \boldsymbol{e}_{z}\end{equation*} ```python part=field.proj([base.mv()[0]^base.mv()[1]]) part ``` \begin{equation*}\left ( \partial_{x} \phi ^{t} + \partial_{t} \phi ^{x} \right ) \boldsymbol{e}_{tx}\end{equation*} ```python dpart = base.grad*part dpart ``` \begin{equation*}\left ( - \partial^{2}_{x} \phi ^{t} - \partial_{t}\partial_{x} \phi ^{x} \right ) \boldsymbol{e}_{t} + \left ( \partial^{2}_{t} \phi ^{x} + \partial_{t}\partial_{x} \phi ^{t} \right ) \boldsymbol{e}_{x} + \left ( - \partial_{x}\partial_{y} \phi ^{t} - \partial_{t}\partial_{y} \phi ^{x} \right ) \boldsymbol{e}_{txy} + \left ( - \partial_{x}\partial_{z} \phi ^{t} - \partial_{t}\partial_{z} \phi ^{x} \right ) \boldsymbol{e}_{txz}\end{equation*}
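Each vector component of `base.grad*field` above is the scalar d'Alembertian $\partial_t^2 - \partial_x^2 - \partial_y^2 - \partial_z^2$ acting on the corresponding component of the potential. As a small cross-check independent of `galgebra`, the plain-SymPy sketch below (with its own symbols, not taken from the notebook) verifies that this operator annihilates a plane wave satisfying the dispersion relation $\omega^2 = k_1^2 + k_2^2 + k_3^2$, which is the assumed on-shell condition.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
k1, k2, k3 = sp.symbols('k1 k2 k3', real=True)
omega = sp.sqrt(k1**2 + k2**2 + k3**2)              # on-shell frequency (assumption)

phi = sp.exp(sp.I*(omega*t - k1*x - k2*y - k3*z))   # plane-wave ansatz

# d'Alembertian for signature (+,-,-,-), matching g=[1,-1,-1,-1] above
box = lambda f: (sp.diff(f, t, 2) - sp.diff(f, x, 2)
                 - sp.diff(f, y, 2) - sp.diff(f, z, 2))
print(sp.simplify(box(phi)))                        # 0
```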
80b34e5cf6b2d4173d98d854c8435e03f70c3487
5,598
ipynb
Jupyter Notebook
examples/ipython/second_derivative.ipynb
pygae/galgebra
3a53b29fb141be1ae47d8df8fc7005c10869cded
[ "BSD-3-Clause" ]
151
2018-09-18T12:30:14.000Z
2022-03-16T08:02:48.000Z
examples/ipython/second_derivative.ipynb
caiomrcs/galgebra
3a53b29fb141be1ae47d8df8fc7005c10869cded
[ "BSD-3-Clause" ]
454
2018-09-19T01:42:30.000Z
2022-01-18T14:02:00.000Z
examples/ipython/second_derivative.ipynb
caiomrcs/galgebra
3a53b29fb141be1ae47d8df8fc7005c10869cded
[ "BSD-3-Clause" ]
30
2019-02-22T08:25:50.000Z
2022-01-15T05:20:22.000Z
36.350649
741
0.49732
true
1,107
Qwen/Qwen-72B
1. YES 2. YES
0.908618
0.805632
0.732012
__label__eng_Latn
0.188474
0.539041
# Homework 2

**For exercises in the week 20-25.11.19**

**Points: 6 + 1b**

Please solve the problems at home and bring to class a [declaration form](http://ii.uni.wroc.pl/~jmi/Dydaktyka/misc/kupony-klasyczne.pdf) to indicate which problems you are willing to present on the blackboard.

## Problem 1 [1p]

Let $(x^{(i)},y^{(i)})$ be a data sample with $x^{(i)}\in\mathbb{R}^D$, $y^{(i)}\in\mathbb{R}$. Let $\Theta \in\mathbb{R}^D$ be a parameter vector. Find the closed form solution $\Theta^*$ to

$$ \min_\Theta \left(\frac{1}{2}\sum_i (\Theta^Tx^{(i)} - y^{(i)})^2 + \frac{\lambda}{2}\sum_{d=1}^D \Theta_d^2\right). $$

## Problem 2 [1p]

Let $v\in\mathbb{R}^D$ be a vector. Define the gradient of $f(v)\in\mathbb{R}$ with respect to $v$ to be $\frac{\partial f}{\partial v} = \left[\frac{\partial f(v)}{\partial v_1}, \frac{\partial f(v)}{\partial v_2}, ..., \frac{\partial f(v)}{\partial v_D}\right]$

Find the following functions' gradients with respect to the vector $[x, y, z]^T$:

1. $f_1([x, y, z]^T) = x + y$
2. $f_2([x, y, z]^T) = xy$
3. $f_3([x, y, z]^T) = x^2y^2$
4. $f_4([x, y, z]^T) = (x + y)^2$
5. $f_5([x, y, z]^T) = x^4 + x^2 y z + x y^2 z + z^4$
6. $f_6([x, y, z]^T) = e^{x + 2y}$
7. $f_7([x, y, z]^T) = \frac{1}{x y^2}$
8. $f_8([x, y, z]^T) = ax + by + c$
9. $f_9([x, y, z]^T) = \tanh(ax + by + c)$

## Problem 3 [0.5p]

Find the following functions' gradients or Jacobians with respect to the vector $\mathbf{x}$, where $\mathbf{x}, \mathbf{b} \in \mathbb{R}^{n}$, $\mathbf{W} \in \mathbb{R}^{n \times n}$:

1. $\mathbf{W} \mathbf{x} + \mathbf{b}$
2. $\mathbf{x}^T \mathbf{W} \mathbf{x}$

## Problem 4 [1p]

Find the derivative of $-\log(S(\mathbf{x})_j)$, where $S$ is the softmax function (https://en.wikipedia.org/wiki/Softmax_function) and we are interested in the derivative over the $j$-th output of the softmax.

## Problem 5 [0.5p]

Consider a dataset with 400 examples of class C1 and 400 of class C2. Let tree A have 2 leaves with class distributions:

| Tree A | C1 | C2 |
|----------|-------|-----|
| Leaf 1 | 100 | 300 |
| Leaf 2 | 300 | 100 |

and let tree B have 2 leaves with class distribution:

| Tree B | C1 | C2 |
|----------|-------|-----|
| Leaf 1 | 200 | 400 |
| Leaf 2 | 200 | 0 |

What is the misclassification rate for both trees? Which tree is more pure according to Gini or Infogain?

## Problem 6 [1p]

Consider a regression problem with $M$ predictors $h_m(x)$ trained to approximate a target $y$. Define the error to be $\epsilon_m(x) = h_m(x) - y$. Suppose you train $M$ independent classifiers with average least squares error

$$ E_{AV} = \frac{1}{M}\sum_{m=1}^M \mathbb{E}_{x}[\epsilon_m(x)^2]. $$

Further assume that the errors have zero mean and are uncorrelated:

$$ \mathbb{E}_{x}[\epsilon_m(x)] = 0\qquad\text{ and }\qquad\mathbb{E}_{x}[\epsilon_m(x)\epsilon_l(x)] = 0\text{ for } m \neq l $$

Let the mean predictor be

$$ h_M(x) = \frac{1}{M}\sum_{m=1}^M h_m(x). $$

What is the average error of $h_M(x)$?

## Problem 7 [1p]

Suppose you work on a binary classification problem and train 3 weak classifiers. You combine their predictions by voting. Can the training error rate of the voting ensemble be smaller than the error rate of the individual weak predictors? Can it be larger? Show an example or prove infeasibility.

## Problem 8 [1 bonus point]

While on a walk, you notice that a locomotive has the serial number 50. Assuming that all locomotives used by PKP (the Polish railroad operator) are numbered using consecutive natural numbers, what is your estimate of $N$, the total number of locomotives operated by PKP?
Tell why the Maximum Likelihood principle may not yield satisfactory results. Use the Bayesian approach to find the posterior distribution over the number of locomotives. Then compute the expected count of locomotives. For the prior use the power law: \begin{equation} p(N) = \frac{1}{N^\alpha}\frac{1}{\zeta(\alpha,1)}, \end{equation} where the $\zeta(s,q)=\sum_{n=0}^{\infty}\frac{1}{(q+n)^s}$ is the Hurwitz Zeta function (https://en.wikipedia.org/wiki/Hurwitz_zeta_function) available in Python as `scipy.special.zeta`. The use of the power law is motivated by the observation that the frequency of occurrence of a company is inversely proportional to its size (see also: R.L. Axtell, Zipf distribution of US firm sizes https://www.sciencemag.org/content/293/5536/1818). How would your estimate change after seeing 5 locomotives, with the biggest serial number among them being 50? **Note**: During the Second World War, a similar problem was encountered while trying to estimate the total German tank production from the serial numbers of captured machines. The statistical estimates were the most precise!
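Problem 8 can be explored numerically along the lines the statement suggests. The sketch below is one possible reading, not a provided solution: it assumes serial numbers are uniform on $1,\ldots,N$, so the likelihood of the observations is proportional to $N^{-d}$ for $N$ at least the largest observed serial (and zero otherwise); it uses the power-law prior with the illustrative choice $\alpha=2$; and it truncates the support at an arbitrary `N_max` for the numerical sum. The prior's $\zeta(\alpha,1)$ normalization cancels in the posterior, so `scipy.special.zeta` is not actually needed to compute the posterior mean.

```python
import numpy as np

def posterior_mean_N(max_serial, n_obs, alpha=2.0, N_max=200000):
    """Posterior expected fleet size given the largest observed serial number."""
    N = np.arange(max_serial, N_max + 1, dtype=float)
    unnorm = N**(-alpha) * N**(-float(n_obs))   # prior * likelihood (unnormalized)
    post = unnorm / unnorm.sum()
    return float(np.sum(N * post))

print(posterior_mean_N(50, 1))   # one sighting, serial number 50
print(posterior_mean_N(50, 5))   # five sightings, largest serial number 50
```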
1c50bf310106f232668dd6294f9c9ba1199121f1
8,010
ipynb
Jupyter Notebook
ML/Homework2/Homework2.ipynb
TheFebrin/DataScience
3e58b89315960e7d4896e44075a8105fcb78f0c0
[ "MIT" ]
null
null
null
ML/Homework2/Homework2.ipynb
TheFebrin/DataScience
3e58b89315960e7d4896e44075a8105fcb78f0c0
[ "MIT" ]
null
null
null
ML/Homework2/Homework2.ipynb
TheFebrin/DataScience
3e58b89315960e7d4896e44075a8105fcb78f0c0
[ "MIT" ]
null
null
null
37.605634
294
0.492135
true
1,600
Qwen/Qwen-72B
1. YES 2. YES
0.861538
0.822189
0.708347
__label__eng_Latn
0.978818
0.48406
# Maximum Likelihood Estimation (MLE)

based on http://python-for-signal-processing.blogspot.com/2012/10/maximum-likelihood-estimation-maximum.html

## Simulate coin flipping

- [Bernoulli distribution](https://en.wikipedia.org/wiki/Bernoulli_distribution) is the probability distribution of a random variable which takes the value 1 with probability $p$ and the value 0 with probability $q = 1 - p$
- [scipy.stats.bernoulli](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.bernoulli.html)

```python
import numpy as np
from scipy.stats import bernoulli

np.random.seed(123456789)

p_true = 1/2  # this is the value we will try to estimate from the observed data
fp = bernoulli(p_true)

def sample(n=10):
    """ simulate coin flipping """
    return fp.rvs(n)  # flip it n times

xs = sample(100)  # generate some samples
```

## Find the maximum of the Bernoulli likelihood

Single experiment:
$$\phi(x) = p^{x}\, (1 - p)^{1 - x}$$

Series of experiments:
$$\mathcal{L}(p|x) = \prod_{i=1}^{n} p^{x_{i}}\,(1-p)^{1-x_{i}}$$

### Hints

- [sympy.diff()](http://docs.sympy.org/dev/modules/core.html#sympy.core.function.diff)
- [sympy.expand()](http://docs.sympy.org/dev/modules/core.html#sympy.core.function.expand)
- [sympy.expand_log()](http://docs.sympy.org/dev/modules/core.html#sympy.core.function.expand_log)
- [sympy.solve()](http://docs.sympy.org/dev/modules/core.html#sympy.core.function.solve)
- [sympy.symbols()](http://docs.sympy.org/dev/modules/core.html#symbols)
- [sympy gotchas](http://docs.sympy.org/dev/tutorial/gotchas.html)

```python
import sympy
from sympy.abc import x

p = sympy.symbols('p', positive=True)
phi = p ** x * (1 - p) ** (1 - x)

L = np.prod([phi.subs(x, i) for i in xs])  # objective function to maximize
log_L = sympy.expand_log(sympy.log(L))
sol = sympy.solve(sympy.diff(log_L, p), p)[0]
```

```python
import matplotlib.pyplot as plt

x_space = np.linspace(1/100, 1, 100, endpoint=False)

plt.plot(x_space, list(map(sympy.lambdify(p, log_L, 'numpy'), x_space)),
         sol, log_L.subs(p, sol), 'o',
         p_true, log_L.subs(p, p_true), 's',
         )
plt.xlabel('$p$', fontsize=18)
plt.ylabel('Likelihood', fontsize=18)
plt.title('Estimate not equal to true value', fontsize=18)
plt.grid(True)
plt.show()
```

## Empirically examine the behavior of the maximum likelihood estimator

- [evalf()](http://docs.sympy.org/dev/modules/core.html#module-sympy.core.evalf)

```python
def estimator_gen(niter=10, ns=100):
    """ generate data to estimate the distribution of the maximum likelihood estimator """
    x = sympy.symbols('x', real=True)
    phi = p**x*(1-p)**(1-x)
    for i in range(niter):
        xs = sample(ns)  # generate some samples from the experiment
        L = np.prod([phi.subs(x,i) for i in xs])  # objective function to maximize
        log_L = sympy.expand_log(sympy.log(L))
        sol = sympy.solve(sympy.diff(log_L, p), p)[0]
        yield float(sol.evalf())

entries = list(estimator_gen(100))  # this may take a while, depending on how much data you want to generate

plt.hist(entries)  # histogram of maximum likelihood estimator
plt.title('$\mu={:3.3f},\sigma={:3.3f}$'.format(np.mean(entries), np.std(entries)), fontsize=18)
plt.show()
```

## Dynamics of the MLE as the sample size grows

```python
def estimator_dynamics(ns_space, num_tries = 20):
    for ns in ns_space:
        estimations = list(estimator_gen(num_tries, ns))
        yield np.mean(estimations), np.std(estimations)

ns_space = list(range(10, 100, 5))
entries = list(estimator_dynamics(ns_space))
entries_mean = list(map(lambda e: e[0], entries))
entries_std = list(map(lambda e: e[1], entries))

plt.errorbar(ns_space, entries_mean, entries_std, fmt='-o')
plt.show() ``` ```python ```
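For Bernoulli data the maximizer has a closed form: setting the derivative of $\sum_i \left[x_i \log p + (1-x_i)\log(1-p)\right]$ to zero gives $\hat p = \bar x$, the sample mean. The snippet below regenerates a sample with the same recipe and seed as above; when run in the original session, `float(sol)` should agree with `xs.mean()` to numerical precision.

```python
import numpy as np
from scipy.stats import bernoulli

np.random.seed(123456789)
xs_check = bernoulli(0.5).rvs(100)   # same recipe as the sample above

# The MLE of the Bernoulli parameter is simply the sample mean.
print(xs_check.mean())
```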
4940682496d40fe493ef3f22d66b75e23e264263
46,821
ipynb
Jupyter Notebook
mle.ipynb
hyzhak/mle
257d8046a950b7381052cc56d9931cf98aeb0a5c
[ "MIT" ]
1
2017-10-22T09:29:36.000Z
2017-10-22T09:29:36.000Z
mle.ipynb
hyzhak/mle
257d8046a950b7381052cc56d9931cf98aeb0a5c
[ "MIT" ]
null
null
null
mle.ipynb
hyzhak/mle
257d8046a950b7381052cc56d9931cf98aeb0a5c
[ "MIT" ]
3
2019-01-23T04:46:01.000Z
2020-04-21T18:38:49.000Z
199.238298
21,420
0.894876
true
1,098
Qwen/Qwen-72B
1. YES 2. YES
0.946597
0.874077
0.827399
__label__eng_Latn
0.443614
0.760657
# From transfer function to difference equation

In approximately the middle of Peter Corke's lecture [Introduction to digital control](https://youtu.be/XuR3QKVtx-g?t=34m56s), he explains how to go from a transfer function description of a controller (or compensator) to a difference equation that can be implemented on a microcontroller.

The idea is to recognize that the term $$ sX(s) $$ in a transfer function is the Laplace transform of the derivative of $x(t)$,
\begin{equation} sX(s) - x(0) \quad \overset{\mathcal{L}}{\longleftrightarrow} \quad \frac{d}{dt} x(t), \end{equation}
where the initial value $x(0)$ is often taken to be zero. We then make use of a discrete approximation of the derivative
$$ \frac{d}{dt}x(t) \approx \frac{x(t) - x(t-h)}{h}, $$
where $h$ is the time between the samples in the sampled version of the signal $x(t)$.

The steps to convert the system in transfer function form
$$ Y(s) = F(s)U(s) = \frac{s+b}{s+a}U(s) $$
are to write
$$ (s+a)Y(s) = (s+b)U(s) $$
$$ sY(s) + aY(s) = sU(s) + bU(s), $$
take the inverse Laplace transform
$$ \frac{d}{dt} y + ay = \frac{d}{dt} u + bu$$
and use the discrete approximation of the derivative
$$ \frac{y_k - y_{k-1}}{h} + ay_k = \frac{u_k - u_{k-1}}{h} + bu_k $$
which can be written
$$ (1+ah) y_k = y_{k-1} + u_k - u_{k-1} + bh u_k,$$
or
$$ y_k = \frac{1}{1+ah} y_{k-1} + \frac{1+bh}{1+ah}u_k - \frac{1}{1+ah}u_{k-1}. $$

## Example

With the system $$ F(s) = \frac{s+1}{s+2} $$ and the sampling time $$ h=0.1 $$ we get the difference equation $$ y_k = \frac{1}{1.2}y_{k-1} + \frac{1.1}{1.2}u_k - \frac{1}{1.2} u_{k-1}. $$

Let's implement the system and see how the discrete approximation compares to the continuous-time system for the case of a step response.

```python
import numpy as np
import scipy.signal as signal
import matplotlib.pyplot as plt
%matplotlib inline

# Define the continuous-time linear time invariant system F
a = 2
b = 1
num = [1, b]
den = [1, a]
F = signal.lti(num, den)

# Plot a step response
(t, y) = signal.step(F)
plt.figure(figsize=(14,6))
plt.plot(t, y, linewidth=2)

# Solve the difference equation y_k = c y_{k-1} + d_0 u_k + d_1 u_{k-1}
h = 0.1 # The sampling time
c = 1.0/(1 + a*h)
d0 = (1 + b*h) / (1 + a*h)
d1 = -c

td = np.arange(35)* h # The sampling time instants
ud = np.ones(35) # The input signal is a step, limited in time to 3.5 seconds
yd = np.zeros(35) # A vector to hold the discrete output signal

yd[0] = c*0 + d0*ud[0] - d1*0 # The first sample of the output signal
for k in range(1,35): # And then the rest
    yd[k] = c*yd[k-1] + d0*ud[k] + d1*ud[k-1]

plt.plot(td, yd, 'o', markersize=8)
plt.xlim([0, 3.5])
plt.ylim([0, 1])
plt.legend(('Continuous-time system', 'Discrete approximation'))
```

## Exercise

1. Why is the error in the discrete approximation larger in the beginning than at the end of the step response?
2. Make a discrete approximation of the transfer function $$ F(s) = \frac{3}{s+3} $$ using the sampling time $$ h=0.2 $$ Then simulate and plot a step response for the continuous and discrete system, following the example above. *Hint*: Copy the python code for the example above into the code cell below and modify it for the exercise.

```python
## Your python code goes here
```

# Recursively computing values of a polynomial using difference equations

In the lecture by Peter Corke, he talks about the historical importance of difference equations for computing values of a polynomial. Let's look at this in some more detail.

## A first order polynomial

Consider the polynomial
$$ p(x) = 4x + 2. $$
The first difference is
$$ \Delta p(x) = p(x) - p(x-h) = 4x + 2 - \big( 4(x-h) + 2 \big) = 4h, $$
and the second order difference is zero (as are all higher order differences):
$$ \Delta^2 p(x) = \Delta p(x) - \Delta p(x-h) = 4h - 4h = 0. $$

Using the first order difference, we can also write the second order difference $ \Delta p(x) - \Delta p(x-h) = \Delta^2 p(x) $ as
$$ p(x) - p(x-h) - \Delta p(x-h) = \Delta^2p(x) $$
or
$$ p(x) = p(x-h) + \Delta p(x-h) + \Delta^2 p(x)$$
which for the first order polynomial above becomes
$$ p(x) = p(x-h) + \Delta p(x-h) = p(x-h) + 4h. $$

```python
def p1(x):
    return 4*x + 2 # Our first-order polynomial

# Compute values for x=[0,0.2, 0.4, ... 2] recursively using the difference equation
h = 0.2
x = h*np.arange(11) # Gives the array [0,0.2, 0.4, ... 2]
pd = np.zeros(11)
d1 = 4*h

# Need to compute the first value as the initial value for the difference equation,
pd[0] = p1(x[0])
for k in range(1,11): # Solve difference equation for all remaining points
    pd[k] = pd[k-1] + d1

plt.figure(figsize=(14,6))
plt.plot(x, p1(x), linewidth=2)
plt.plot(x, pd, 'ro')
```

## Second order polynomial

For a second order polynomial
$$ p(x) = a_0x^2 + a_1x + a_2 $$
we have
$$ p''(x) = 2a_0, $$
and the differences
$$ \Delta p(x) = p(x) - p(x-h) = a_0x^2 + a_1x + a_2 - \big( a_0(x-h)^2 + a_1(x-h) + a_2 \big) = h(2a_0x + a_1) -a_0h^2, $$
$$ \Delta^2 p(x) = \Delta p(x) - \Delta p(x-h) = h(2a_0x+a_1) - a_0h^2 - \big( h(2a_0(x-h) + a_1) - a_0 h^2 \big) = h^22a_0 $$

Recall the difference equation using the second order difference
$$ p(x) = p(x-h) + \Delta p(x-h) + \Delta^2 p(x)$$
We now get
$$ p(x) = p(x-h) + \Delta p(x-h) + \Delta^2 p(x) = p(x-h) + \Delta p(x-h) + h^2 2 a_0,$$
or, using the definition of the first-order difference $\Delta p(x-h)$,
$$ p(x) = 2p(x-h) - p(x-2h) + h^2 2 a_0.$$

Consider the second order polynomial
$$ p(x) = 2x^2 - 3x + 2, $$
and compute values using the difference equation.

```python
a0 = 2
a1 = -3
a2 = 2
def p2(x):
    return a0*x**2 + a1*x + a2 # Our second-order polynomial

# Compute values for x=[0,0.2, 0.4, ... 8] recursively using the difference equation
h = 0.2
x = h*np.arange(41) # Gives the array [0,0.2, 0.4, ... 8]
d1 = np.zeros(41) # The first differences
pd = np.zeros(41)
d2 = h**2*2*a0 # The constant, second difference

# Need to compute the first two values to get the initial values for the difference equation,
pd[0] = p2(x[0])
pd[1] = p2(x[1])

for k in range(2,41): # Solve difference equation
    pd[k] = 2*pd[k-1] - pd[k-2] + d2

plt.figure(figsize=(14,6))
plt.plot(x, p2(x), linewidth=2) # Evaluating the polynomial
plt.plot(x, pd, 'ro') # The solution using the difference equation
```

## Exercise

What order would the difference equation be for computing values of a third-order polynomial? What is the difference equation?

```python

```
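The recursion derived at the top of this notebook can also be written as a discrete transfer function $H(z) = (d_0 z + d_1)/(z - c)$ and simulated with SciPy, which gives an independent check of the hand-rolled loop. A sketch using the same $a$, $b$ and $h$ as in the example above:

```python
import numpy as np
from scipy import signal

a, b, h = 2.0, 1.0, 0.1
c  = 1.0 / (1.0 + a*h)
d0 = (1.0 + b*h) / (1.0 + a*h)
d1 = -c

# y_k = c*y_{k-1} + d0*u_k + d1*u_{k-1}  <=>  H(z) = (d0*z + d1) / (z - c)
Fd = signal.dlti([d0, d1], [1.0, -c], dt=h)
td, yd = signal.dstep(Fd, n=35)
yd = np.squeeze(yd)

# Hand-rolled recursion for a unit step input, as in the example above
u = np.ones(35)
y = np.zeros(35)
y[0] = d0*u[0]
for k in range(1, 35):
    y[k] = c*y[k-1] + d0*u[k] + d1*u[k-1]

print(np.allclose(yd, y))   # expect True
```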
b951561490a92e552738afa4fd29e0043a64cb6c
81,521
ipynb
Jupyter Notebook
discrete-time-systems/notebooks/Simple-approximation.ipynb
kjartan-at-tec/mr2007-computerized-control
16e35f5007f53870eaf344eea1165507505ab4aa
[ "MIT" ]
2
2020-11-07T05:20:37.000Z
2020-12-22T09:46:13.000Z
discrete-time-systems/notebooks/Simple-approximation.ipynb
alfkjartan/control-computarizado
5b9a3ae67602d131adf0b306f3ffce7a4914bf8e
[ "MIT" ]
4
2020-06-12T20:44:41.000Z
2020-06-12T20:49:00.000Z
discrete-time-systems/notebooks/Simple-approximation.ipynb
kjartan-at-tec/mr2007-computerized-control
16e35f5007f53870eaf344eea1165507505ab4aa
[ "MIT" ]
1
2021-03-14T03:55:27.000Z
2021-03-14T03:55:27.000Z
254.753125
25,800
0.905619
true
2,237
Qwen/Qwen-72B
1. YES 2. YES
0.877477
0.843895
0.740498
__label__eng_Latn
0.967305
0.558758
```julia using CSV using DataFrames using PyPlot using ScikitLearn # machine learning package using StatsBase using Random using LaTeXStrings # for L"$x$" to work instead of needing to do "\$x\$" using Printf using PyCall sns = pyimport("seaborn") # (optional)change settings for all plots at once, e.g. font size rcParams = PyPlot.PyDict(PyPlot.matplotlib."rcParams") rcParams["font.size"] = 16 # (optional) change the style. see styles here: https://matplotlib.org/3.1.1/gallery/style_sheets/style_sheets_reference.html PyPlot.matplotlib.style.use("seaborn-white") ``` ## classifying breast tumors as malignant or benign source: [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic)) > Features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe characteristics of the cell nuclei present in the image. The mean radius and smoothness of the cell nuclei (the two features) and the outcome (M = malignant, B = benign) of the tumor are in the `breast_cancer_data.csv`. ```julia df = CSV.read("breast_cancer_data.csv") df[!, :class] = map(row -> row == "B" ? 0 : 1, df[:, :outcome]) first(df, 5) ``` <table class="data-frame"><thead><tr><th></th><th>mean_radius</th><th>mean_smoothness</th><th>outcome</th><th>class</th></tr><tr><th></th><th>Float64</th><th>Float64</th><th>String</th><th>Int64</th></tr></thead><tbody><p>5 rows × 4 columns</p><tr><th>1</th><td>13.85</td><td>1.495</td><td>B</td><td>0</td></tr><tr><th>2</th><td>9.668</td><td>2.275</td><td>B</td><td>0</td></tr><tr><th>3</th><td>9.295</td><td>2.388</td><td>B</td><td>0</td></tr><tr><th>4</th><td>19.69</td><td>4.585</td><td>M</td><td>1</td></tr><tr><th>5</th><td>9.755</td><td>1.243</td><td>B</td><td>0</td></tr></tbody></table> ## visualize the two classes distributed in feature space Where SVM just computes a dividing plane, logistic regression calculates a probability that each point is in a partaicular class ```julia markers = Dict("M" => "x", "B" => "o") fig, ax = subplots(figsize=(8, 8)) ax.set_xlabel("mean radius") ax.set_ylabel("mean smoothness") ax.set_facecolor("#efefef") for df_c in groupby(df, :outcome) outcome = df_c[1, :outcome] ax.scatter(df_c[:, :mean_radius], df_c[:, :mean_smoothness], label="$outcome", marker=markers[outcome], alpha=0.5) end legend() axis("equal") sns.despine() ``` ## get data ready for classifiation in scikitlearn scikitlearn takes as input: * a feature matrix `X`, which must be `n_samples` by `n_features` * a target vector `y`, which must be `n_samples` long (of course) ```julia n_tumors = nrow(df) X = zeros(n_tumors, 2) y = zeros(n_tumors) for (i, tumor) in enumerate(eachrow(df)) X[i, 1] = tumor[:mean_radius] X[i, 2] = tumor[:mean_smoothness] y[i] = tumor[:class] end X # look at y too! ``` 300×2 Array{Float64,2}: 13.85 1.495 9.668 2.275 9.295 2.388 19.69 4.585 9.755 1.243 16.11 4.533 14.78 2.45 15.78 3.598 15.71 1.972 14.68 3.195 13.71 3.856 21.09 4.414 11.31 1.831 ⋮ 11.08 1.719 18.94 5.486 15.32 4.061 14.25 5.373 20.6 5.772 8.671 1.435 11.64 2.155 12.06 1.171 13.88 1.709 14.9 3.466 19.59 2.916 14.81 1.677 ## logistic regression let $\mathbf{x} \in \mathbb{R}^2$ be the feature vector describing a tumor. let $T$ be the random variable that denotes whether the tumor is benign (0) or malignant (1). 
the logistic model is a probabilistic model for the probability that a tumor is malignant given its feature vector: \begin{equation} \log \frac{Pr(T=1 | \mathbf{x})}{1-Pr(T=1 | \mathbf{x})} = \beta_0 + \boldsymbol \beta^\intercal \mathbf{x} \end{equation} where $\beta_0$ is the intercept and $\boldsymbol \beta \in \mathbb{R}$ are the weights for the features. we will use scikitlearn to learn the $\beta_0$ and $\boldsymbol \beta$ that maximize the likelihood. ```julia @sk_import linear_model : LogisticRegression ``` PyObject <class 'sklearn.linear_model.logistic.LogisticRegression'> $$\vec{\nabla}_{\vec{B}}\ell = \vec{0}$$ ```julia # default LR in sklearn has an L1 regularization, so we have to set penalty to none to fit this model # solver minimizes grad_b l = 0 lr = LogisticRegression(penalty="none", solver="newton-cg") lr.fit(X, y) println("β = ", lr.coef_) println("β₀ = ", lr.intercept_) ``` β = [1.168660778217552 0.9420681231447384] β₀ = [-19.387890643955803] prediction of the probability that a new tumor is 0 (benign) or 1 (malignant) ```julia # x = [20.0 5.0] x = [15.0 2.5] lr.predict(x) # should be malignant for x 0 lr.predict_proba(x) # [Pr(y=0|x) PR(y-1|x)] ``` 1×2 Array{Float64,2}: 0.378201 0.621799 ## visualize the learned model $Pr(T=1|\mathbf{x})$ ```julia radius = 5:0.25:30 smoothness = 0.0:0.25:20.0 lr_prediction = zeros(length(smoothness), length(radius)) for i = 1:length(radius) for j = 1:length(smoothness) # consider this feature vector x = [radius[i] smoothness[j]] # use logistic regression to predict P(y=1|x) lr_prediction[j, i] = lr.predict_proba(x)[2] # second elem bc want y=1 end end ``` ```julia fig, ax = subplots(figsize=(8, 8)) ax.set_xlabel("mean radius") ax.set_ylabel("mean smoothness") asdf = ax.pcolor(radius, smoothness, lr_prediction, cmap="viridis", vmin=0.0, vmax=1.0) colorbar(asdf, label="Pr(y=1|x)") sns.despine() ``` ## making decisions: the ROC curve this depends on the cost of a false positive versus false negative. (here, "positive" is defined as testing positive for "malignant") > "I equally value minimizing (1) false positives and (2) false negatives." $\implies$ choose $Pr(T=1|\mathbf{x})=0.5$ as the decision boundary. > "I'd rather predict that a benign tumor is malignant (false positive) than predict that a malignant tumor is benign (false negative)." $\implies$ choose $Pr(T=1|\mathbf{x})=0.2$ as the decision boundary. Even if there is a relatively small chance that the tumor is malignant, we still take action and classify it as malignant... the receiver operator characteristic (ROC) curve is a way we can evaluate a classification algorithm without imposing our values and specifying where the decision boundary should be. ```julia @sk_import metrics : roc_curve @sk_import metrics : auc ``` WARNING: redefining constant roc_curve PyObject <function auc at 0x7fe80b1fb7b8> ```julia ``` ```julia ``` ```julia ``` tradeoff: * threshold too small: classify all of the tumors as malignant, false positive rate very high * threshold too large: classify all of the tumors as benign, false negative rate very high somewhere in the middle (but still depending on the cost of a false positive versus false negative) is where we should operate. 
the `auc`, area under the curve, has a probabilistic interpretation: > the area under the curve (often referred to as simply the AUC) is equal to the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one (assuming 'positive' ranks higher than 'negative') -[Wikipedia](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) **warning**: always split your data into test or train or do cross-validation to assess model performance. we trained on all data here to see the mechanics of fitting a logistic regression model to data, visualizing the model, and creating an ROC curve.
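The empty cells above were left for the ROC computation. The notebook itself is written in Julia (calling scikit-learn through ScikitLearn.jl); the sketch below shows the equivalent calls directly in Python's scikit-learn. Since the Wisconsin CSV is not reproduced here, the two Gaussian blobs and the default (L2-regularized) fit are stand-in assumptions, not the notebook's actual data or model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, auc

# Synthetic stand-in for (mean radius, mean smoothness): benign vs malignant blobs
rng = np.random.default_rng(0)
X0 = rng.normal([12.0, 2.0], 1.5, size=(150, 2))
X1 = rng.normal([17.0, 4.0], 2.0, size=(150, 2))
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(150), np.ones(150)])

lr = LogisticRegression(max_iter=1000).fit(X, y)
scores = lr.predict_proba(X)[:, 1]            # Pr(y = 1 | x)

fpr, tpr, thresholds = roc_curve(y, scores)   # sweeps the decision threshold
print("AUC =", auc(fpr, tpr))
```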
e7737278b9c9cc4d8c6a6510dd6d77d372dbb9d0
142,366
ipynb
Jupyter Notebook
In-Class Notes/Logistic Regression/.ipynb_checkpoints/logistic regression_sparse-checkpoint.ipynb
cartemic/CHE-599-intro-to-data-science
a2afe72b51a3b9e844de94d59961bedc3534a405
[ "MIT" ]
null
null
null
In-Class Notes/Logistic Regression/.ipynb_checkpoints/logistic regression_sparse-checkpoint.ipynb
cartemic/CHE-599-intro-to-data-science
a2afe72b51a3b9e844de94d59961bedc3534a405
[ "MIT" ]
null
null
null
In-Class Notes/Logistic Regression/.ipynb_checkpoints/logistic regression_sparse-checkpoint.ipynb
cartemic/CHE-599-intro-to-data-science
a2afe72b51a3b9e844de94d59961bedc3534a405
[ "MIT" ]
2
2019-10-02T16:11:36.000Z
2019-10-15T20:10:40.000Z
283.59761
75,962
0.919117
true
2,372
Qwen/Qwen-72B
1. YES 2. YES
0.909907
0.890294
0.810085
__label__eng_Latn
0.915945
0.720432
```python from sympy import symbols, cos, sin, pi, simplify, pprint, tan, expand_trig, sqrt, trigsimp, atan2 from sympy.matrices import Matrix ``` ```python # rotation matrices in x, y, z axes def rotx(q): sq, cq = sin(q), cos(q) r = Matrix([ [1., 0., 0.], [0., cq,-sq], [0., sq, cq] ]) return r def roty(q): sq, cq = sin(q), cos(q) r = Matrix([ [ cq, 0., sq], [ 0., 1., 0.], [-sq, 0., cq] ]) return r def rotz(q): sq, cq = sin(q), cos(q) r = Matrix([ [cq,-sq, 0.], [sq, cq, 0.], [0., 0., 1.] ]) return r ``` ```python def pose(theta, alpha, a, d): # returns the pose T of one joint frame i with respect to the previous joint frame (i - 1) # given the parameters: # theta: theta[i] # alpha: alpha[i-1] # a: a[i-1] # d: d[i] r11, r12 = cos(theta), -sin(theta) r23, r33 = -sin(alpha), cos(alpha) r21 = sin(theta) * cos(alpha) r22 = cos(theta) * cos(alpha) r31 = sin(theta) * sin(alpha) r32 = cos(theta) * sin(alpha) y = -d * sin(alpha) z = d * cos(alpha) T = Matrix([ [r11, r12, 0.0, a], [r21, r22, r23, y], [r31, r32, r33, z], [0.0, 0.0, 0.0, 1] ]) T = simplify(T) return T ``` ```python # get the pose (homogenous transforms) of each joint wrt to previous joint q1, q2, q3, q4, q5, q6= symbols('q1:7') d90 = pi / 2 T01 = pose(q1, 0, 0, 0.75) T12 = pose(q2 - d90, -d90, 0.35, 0) T23 = pose(q3, 0, 1.25, 0) T34 = pose(q4, -d90, -0.054, 1.5) T45 = pose(q5, d90, 0, 0) T56 = pose(q6, -d90, 0, 0) T6g = pose(0, 0, 0, 0.303) T0g_a = simplify(T01 * T12 * T23 * T34 * T45 * T56 * T6g) ``` ```python # Total transform wrt gripper given # yaw (alpha), pitch (beta), roll (beta) # position px, py, pz px, py, pz = symbols('px py pz', real = True) alpha, beta, gamma = symbols('alpha beta gamma', real = True) R = rotz(alpha) * roty(beta) * rotx(gamma) * (rotz(pi) * roty(-pi/2)).T T0g_b = Matrix([ [R[0, 0], R[0, 1], R[0, 2], px], [R[1, 0], R[1, 1], R[1, 2], py], [R[2, 0], R[2, 1], R[2, 2], pz], [0, 0, 0, 1] ]) T0g_b = simplify(trigsimp(T0g_b)) print(T0g_b) ``` Matrix([ [1.0*sin(alpha)*sin(gamma) + sin(beta)*cos(alpha)*cos(gamma), 1.0*sin(alpha)*cos(gamma) - 1.0*sin(beta)*sin(gamma)*cos(alpha), 1.0*cos(alpha)*cos(beta), px], [sin(alpha)*sin(beta)*cos(gamma) - 1.0*sin(gamma)*cos(alpha), -1.0*sin(alpha)*sin(beta)*sin(gamma) - 1.0*cos(alpha)*cos(gamma), 1.0*sin(alpha)*cos(beta), py], [ 1.0*cos(beta)*cos(gamma), -1.0*sin(gamma)*cos(beta), -1.0*sin(beta), pz], [ 0, 0, 0, 1]]) ```python ''' px, py, pz = 0.49792, 1.3673, 2.4988 roll, pitch, yaw = 0.366, -0.078, 2.561 q1: 1.01249809363771 q2: -0.275800363737724 q3: -0.115686651053751 q4: 1.63446527240323 q5: 1.52050002599430 q6: -0.815781306199679 ''' Tb = T0g_b.evalf(subs = { gamma: 0.366, #roll beta: -0.078, #pitch alpha: 2.561, #yaw px: 0.49792, py: 1.3673, pz: 2.4988 }) print() pprint(Tb) print() print(T0g_b) ``` ⎡0.257143295038827 0.48887208255965 -0.833595473062543 0.49792⎤ ⎢ ⎥ ⎢0.259329420712765 0.796053601157403 0.54685182237706 1.3673 ⎥ ⎢ ⎥ ⎢0.93092726749696 -0.356795110642117 0.0779209320563015 2.4988 ⎥ ⎢ ⎥ ⎣ 0 0 0 1.0 ⎦ Matrix([ [1.0*sin(alpha)*sin(gamma) + sin(beta)*cos(alpha)*cos(gamma), 1.0*sin(alpha)*cos(gamma) - 1.0*sin(beta)*sin(gamma)*cos(alpha), 1.0*cos(alpha)*cos(beta), px], [sin(alpha)*sin(beta)*cos(gamma) - 1.0*sin(gamma)*cos(alpha), -1.0*sin(alpha)*sin(beta)*sin(gamma) - 1.0*cos(alpha)*cos(gamma), 1.0*sin(alpha)*cos(beta), py], [ 1.0*cos(beta)*cos(gamma), -1.0*sin(gamma)*cos(beta), -1.0*sin(beta), pz], [ 0, 0, 0, 1]]) ```python Ta = T0g_a.evalf(subs = { q1: 1.01249809363771, q2: -0.275800363737724, q3: -0.115686651053751, q4: 1.63446527240323, q5: 1.52050002599430, 
q6: -0.815781306199679 }) print() pprint(Ta) print() print(T0g_a) ``` ⎡0.257143295038831 0.488872082559654 -0.83359547306254 0.497920000000004⎤ ⎢ ⎥ ⎢0.259329420712762 0.796053601157401 0.546851822377065 1.36729999999999 ⎥ ⎢ ⎥ ⎢0.93092726749696 -0.356795110642117 0.0779209320563043 2.49880000000001 ⎥ ⎢ ⎥ ⎣ 0 0 0 1.0 ⎦ Matrix([ [((sin(q1)*sin(q4) + sin(q2 + q3)*cos(q1)*cos(q4))*cos(q5) + sin(q5)*cos(q1)*cos(q2 + q3))*cos(q6) - (-sin(q1)*cos(q4) + sin(q4)*sin(q2 + q3)*cos(q1))*sin(q6), -((sin(q1)*sin(q4) + sin(q2 + q3)*cos(q1)*cos(q4))*cos(q5) + sin(q5)*cos(q1)*cos(q2 + q3))*sin(q6) + (sin(q1)*cos(q4) - sin(q4)*sin(q2 + q3)*cos(q1))*cos(q6), -(sin(q1)*sin(q4) + sin(q2 + q3)*cos(q1)*cos(q4))*sin(q5) + cos(q1)*cos(q5)*cos(q2 + q3), -0.303*sin(q1)*sin(q4)*sin(q5) + 1.25*sin(q2)*cos(q1) - 0.303*sin(q5)*sin(q2 + q3)*cos(q1)*cos(q4) - 0.054*sin(q2 + q3)*cos(q1) + 0.303*cos(q1)*cos(q5)*cos(q2 + q3) + 1.5*cos(q1)*cos(q2 + q3) + 0.35*cos(q1)], [ ((sin(q1)*sin(q2 + q3)*cos(q4) - sin(q4)*cos(q1))*cos(q5) + sin(q1)*sin(q5)*cos(q2 + q3))*cos(q6) - (sin(q1)*sin(q4)*sin(q2 + q3) + cos(q1)*cos(q4))*sin(q6), -((sin(q1)*sin(q2 + q3)*cos(q4) - sin(q4)*cos(q1))*cos(q5) + sin(q1)*sin(q5)*cos(q2 + q3))*sin(q6) - (sin(q1)*sin(q4)*sin(q2 + q3) + cos(q1)*cos(q4))*cos(q6), -(sin(q1)*sin(q2 + q3)*cos(q4) - sin(q4)*cos(q1))*sin(q5) + sin(q1)*cos(q5)*cos(q2 + q3), 1.25*sin(q1)*sin(q2) - 0.303*sin(q1)*sin(q5)*sin(q2 + q3)*cos(q4) - 0.054*sin(q1)*sin(q2 + q3) + 0.303*sin(q1)*cos(q5)*cos(q2 + q3) + 1.5*sin(q1)*cos(q2 + q3) + 0.35*sin(q1) + 0.303*sin(q4)*sin(q5)*cos(q1)], [ -(sin(q5)*sin(q2 + q3) - cos(q4)*cos(q5)*cos(q2 + q3))*cos(q6) - sin(q4)*sin(q6)*cos(q2 + q3), (sin(q5)*sin(q2 + q3) - cos(q4)*cos(q5)*cos(q2 + q3))*sin(q6) - sin(q4)*cos(q6)*cos(q2 + q3), -sin(q5)*cos(q4)*cos(q2 + q3) - sin(q2 + q3)*cos(q5), -0.303*sin(q5)*cos(q4)*cos(q2 + q3) - 0.303*sin(q2 + q3)*cos(q5) - 1.5*sin(q2 + q3) + 1.25*cos(q2) - 0.054*cos(q2 + q3) + 0.75], [ 0, 0, 0, 1]])
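The notebook imports `atan2` but never uses it; the natural companion computation is the inverse problem of recovering roll, pitch and yaw from a rotation matrix. Below is a NumPy-only sketch using the same $R_z(\alpha)R_y(\beta)R_x(\gamma)$ composition as `T0g_b`; the fixed gripper correction factor is left out for simplicity, so this checks only the angle-extraction formulas.

```python
import numpy as np

def rotx(q):
    c, s = np.cos(q), np.sin(q)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def roty(q):
    c, s = np.cos(q), np.sin(q)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rotz(q):
    c, s = np.cos(q), np.sin(q)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Forward: the roll, pitch, yaw used in the example above
roll, pitch, yaw = 0.366, -0.078, 2.561
R = rotz(yaw) @ roty(pitch) @ rotx(roll)

# Inverse: extract the angles again with atan2
pitch_r = np.arctan2(-R[2, 0], np.hypot(R[0, 0], R[1, 0]))
yaw_r   = np.arctan2(R[1, 0], R[0, 0])
roll_r  = np.arctan2(R[2, 1], R[2, 2])
print(roll_r, pitch_r, yaw_r)   # should reproduce 0.366, -0.078, 2.561
```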
7f2212b18becef5505df731d3b029b917e679d6e
10,729
ipynb
Jupyter Notebook
notebooks/total_transform.ipynb
mithi/arm-ik
e7e87ef0e43b04278d2300f67d863c3f7eafb77e
[ "MIT" ]
39
2017-07-29T11:40:03.000Z
2022-02-28T14:49:48.000Z
notebooks/total_transform.ipynb
mithi/arm-ik
e7e87ef0e43b04278d2300f67d863c3f7eafb77e
[ "MIT" ]
null
null
null
notebooks/total_transform.ipynb
mithi/arm-ik
e7e87ef0e43b04278d2300f67d863c3f7eafb77e
[ "MIT" ]
16
2017-10-27T13:30:21.000Z
2022-02-10T10:08:42.000Z
37.645614
628
0.388759
true
2,816
Qwen/Qwen-72B
1. YES 2. YES
0.930458
0.79053
0.735555
__label__kor_Hang
0.057433
0.547274
## Ainsley Works on Problem Sets Ainsley sits down on Sunday night to finish S problem sets, where S is a random variable that is equally likely to be 1, 2, 3, or 4. She learns C concepts from the problem sets and drinks D energy drinks to stay awake, where C and D are random and depend on how many problem sets she does. You know that $p_{C|S}(c|s) = 1/(2s+1)$ for $c \in \{ 0,1,\ldots ,2s\}.$ For each problem set she completes, regardless of concepts learned, she independently decides to have an energy drink with probability $q.$ That is, the number of energy drinks she has is binomial with parameters $q$ and $S:$ $$\begin{eqnarray} p_{D\mid S}(d\mid s) &= \begin{cases} {s \choose d}\, q^d\, (1-q)^{s-d} & d \in \{0,\ldots,s\} \\ 0 & \text{otherwise} \end{cases} \end{eqnarray}$$ where ${n \choose k} = \frac{n!}{k!\, (n-k)!}.$ **Question:** Does the conditional entropy $H(C\mid S=s)$ decrease, stay the same, or increase as $s$ increases from $1$ to $4?$ [$\times $] It decreases. <br> [$\times $] It stays the same<br> [$\checkmark$] It increases. **Solution:** Conditioned on $S=s, C$ is uniform from $0$ to $2s.$ $$\begin{align} H(C|S=s)&= \sum _{c=0}^{2s} p_{C|S}(c|s) \log \frac{1}{p_{C|S}(c|s)}\\ &= \sum _{c=0}^{2s} \frac{1}{2s+1} \log \frac{1}{\frac{1}{2s+1}}\\ &= \log \frac{1}{\frac{1}{2s+1}}\\ &= \log (2s+1) \end{align}$$ As $s$ increases, so does $\log (2s+1).$ We can also see this intuitively: as $s$ increases, $c$ is uniform over a broader range of possibilities, which implies a higher entropy. ```python %matplotlib inline from numpy import log2, arange import matplotlib.pyplot as plt f = lambda x: - x * log2(x) g = lambda s: (2*s + 1) * f(1/(2*s+1)) s = arange(1, 5, 1) plt.plot(s, g(s), '-', s, g(s), 'ro') plt.xlabel("$s$") plt.ylabel("$H(C\mid S=s)$") plt.show() ``` **Question:** The next morning, her roommate notices that Ainsley drank $d$ energy drinks. What is the expected number of concepts that she learned? You should derive a general expression for this although in the answer boxes below we only ask you to evaluate the expression for specific choices of $d$ and $q.$ If you're general expression is correct, your answers to these should also be correct. (Please be precise with at least 3 decimal places, unless of course the answer doesn't need that many decimal places. You could also put a fraction.) 1. When $q=0.2, \mathbb {E}[C | D = 1] =$ {{ans1}} 2. When $q=0.5, \mathbb {E}[C | D = 2] =$ {{ans2}} 3. When $q=0.7, \mathbb {E}[C | D = 3] =$ {{ans3}} **Solution:** We are interested in $\mathbb {E}[C|D=d].$ Since we are given information about $C$ conditioned on $S,$ we will condition on $S$ and use total expectation. 
We will also use the fact that $C$ and $D$ are conditionally independent given $S:$ $$\begin{align} \mathbb {E}[C|D=d] &= \sum _{s=1}^4 \mathbb {E}[C|D=d, S=s] \mathbb {P}(S=s | D=d)\\ \text {(by conditional independence)} &= \sum _{s=1}^4 \mathbb {E}[C|S=s] p_{S|D}(s|d)\\ \text {(by Bayes' rule)} &= \sum _{s=1}^4 \mathbb {E}[C|S=s] \frac{p_{D|S}(d|s) p_ S(s)}{p_ D(d)}\\ &= \frac{\sum _{s=1}^4 \left(\sum _{c=0}^{2s} c \frac{1}{2s+1}\right) p_{D|S}(d|s) p_ S(s)}{\sum _{s=1}^4 p_{D|S}(d|s) p_ S(s)} \\ &=\frac{\sum _{s=1}^4 s \cdot p_{D|S}(d|s) }{\sum _{s=1}^4 p_{D|S}(d|s)}\\ \text {(since $p_{D|S}(d|s) = 0$ for $s < d$)}&= \frac{\sum _{s=d}^4 s \cdot p_{D|S}(d|s) }{\sum _{s=d}^4 p_{D|S}(d|s)} \\ &=\frac{\sum _{s=d}^4 s {s \choose d} q^ d (1-q)^{s-d}}{\sum _{s=d}^4 {s \choose d} q^ d (1-q)^{s-d}} \end{align}$$ Another solution that works is to compute $p_{C|D}(\cdot \mid d)$ and compute the expectation with respect to this distribution. This leads to a very similar set of steps as above. ```python from scipy.misc import comb f = lambda s, d, q : comb(s, d) * (q ** d) * ((1 - q) ** (s - d)) ED = lambda d, q : sum([s * f(s, d, q) for s in range(d, 5)]) / \ sum([f(s, d, q) for s in range(d, 5)]) ans1 = "{0:.3f}".format(ED(1, 0.2)) ans2 = "{0:.3f}".format(ED(2, 0.5)) ans3 = "{0:.3f}".format(ED(3, 0.7)) ``` **Question:** Is the mutual information $I(C ; D)$ greater than, less than, or equal to zero? You should assume that $q$ lies in the range $0 < q < 1.$ [$\checkmark$] Greater than 0<br> [$\times $] Less than 0 <br> [$\times $] Equal to 0 **Solution:** $\boxed {\text {Greater than zero}}.$ Since the conditional expectation in the previous part depends on $d,$ we can infer that they are not independent. We can also justify this intuitively: for example, knowing that $D=4$ guarantees that $S=4,$ and therefore changes our belief about $C$ (i.e., $C$ is more likely to take on higher values). ## Consecutive Sixes **Question:** On average, how many times do you have to roll a fair six-sided die before getting two consecutive sixes? Hint: Use total expectation. **Solution:** Let $\mu = \mathbb {E}[\# \text { rolls until we get two consecutive 6's}].$ The problem can be broken up into two events (that forms a partition of the sample space): - Event 1: The very first time we roll a 6, the roll right afterward is also a 6. The probability of this first event is $1/6.$ You can think of it as we will, with probability $1,$ roll a 6 in a finite amount of time, and then it's just the next roll that we are looking at the probability for, and rolling a 6 in this next roll happens with probability $1/6.$ (Note that the probability that we never see a 6 is $\lim _{n \rightarrow \infty } (5/6)^ n = 0.$) Conditioned on this first event, let's compute the expected number of rolls until we get two consecutive 6's: The expected number of rolls until the first 6 is the expectation of a $\text {Geo}(1/6)$ random variable, which is $1/(1/6) = 6.$ The event we are conditioning on says that the next roll is a 6, so there the conditional expectation here is just $6 + 1 = 7$ rolls. - Event 2: The very first time we roll a 6, the roll right afterward is not a 6. The probability for this second event is $5/6,$ i.e., the roll right after getting the first 6 is not a 6. 
Conditioned on this second event, let's compute the expected number of rolls until we get two consecutive 6's: The expected number of rolls until the first 6 is 6 rolls (again, this is the expectation of a $\text {Geo}(1/6)$ random variable), and then the 7th roll is not a 6. And then we restart the whole process over. So the conditional expectation for this case is $7 + \mu.$ Now using the law of total expectation, $$\begin{align} \mu &= 7 \cdot \frac16 + (7 + \mu ) \frac56 \\ &= \frac76 + \frac{35}6 + \frac{5}{6}\mu\\ &= \frac{42}6 + \frac{5}{6}\mu , \end{align}$$ so $$\frac16 \mu = \frac{42}6,\qquad {\text {i.e.,}}\qquad \mu = \boxed {42}.$$ ```python ```
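As a quick numerical check of the $\mu = 42$ result (this simulation is our addition, not part of the original solution; the helper name `simulate_two_sixes`, the trial count, and the seed are arbitrary choices), a short Monte Carlo run should land close to 42:

```python
import random

def simulate_two_sixes(trials=200_000, seed=0):
    """Estimate the expected number of rolls of a fair die until two consecutive sixes appear."""
    rng = random.Random(seed)
    total_rolls = 0
    for _ in range(trials):
        rolls, consecutive = 0, 0
        while consecutive < 2:
            rolls += 1
            if rng.randint(1, 6) == 6:
                consecutive += 1
            else:
                consecutive = 0
        total_rolls += rolls
    return total_rolls / trials

print(simulate_two_sixes())  # should be close to the analytical answer of 42
```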
0015f8de6c6d57b341e26dce75a57c961cf09b16
29,398
ipynb
Jupyter Notebook
week04/06 Homework.ipynb
infimath/Computational-Probability-and-Inference
e48cd52c45ffd9458383ba0f77468d31f781dc77
[ "MIT" ]
1
2019-04-04T03:07:47.000Z
2019-04-04T03:07:47.000Z
week04/06 Homework.ipynb
infimath/Computational-Probability-and-Inference
e48cd52c45ffd9458383ba0f77468d31f781dc77
[ "MIT" ]
null
null
null
week04/06 Homework.ipynb
infimath/Computational-Probability-and-Inference
e48cd52c45ffd9458383ba0f77468d31f781dc77
[ "MIT" ]
1
2021-02-27T05:33:49.000Z
2021-02-27T05:33:49.000Z
131.241071
19,852
0.825192
true
2,333
Qwen/Qwen-72B
1. YES 2. YES
0.943348
0.908618
0.857142
__label__eng_Latn
0.993791
0.829763
# Efficiency Analysis ## Objective and Prerequisites How can mathematical optimization be used to measure the efficiency of an organization? Find out in this example, where you’ll learn how to formulate an Efficiency Analysis model as a linear programming problem using the Gurobi Python API and then generate an optimal solution with the Gurobi Optimizer. This model is example 22 from the fifth edition of Model Building in Mathematical Programming by H. Paul Williams on pages 278-280 and 335-336. This example is at the intermediate level, where we assume that you know Python and the Gurobi Python API and that you have some knowledge of building mathematical optimization models. **Download the Repository** <br /> You can download the repository containing this and other examples by clicking [here](https://github.com/Gurobi/modeling-examples/archive/master.zip). **Gurobi License** <br /> In order to run this Jupyter Notebook properly, you must have a Gurobi license. If you do not have one, you can request an [evaluation license](https://www.gurobi.com/downloads/request-an-evaluation-license/?utm_source=3PW&utm_medium=OT&utm_campaign=WW-MU-MUI-OR-O_LEA-PR_NO-Q3_FY20_WW_JPME_Efficiency_Analysis_COM_EVAL_GitHub&utm_term=Efficiency%20Analysis&utm_content=C_JPM) as a *commercial user*, or download a [free license](https://www.gurobi.com/academia/academic-program-and-licenses/?utm_source=3PW&utm_medium=OT&utm_campaign=WW-MU-EDU-OR-O_LEA-PR_NO-Q3_FY20_WW_JPME_Efficiency_Analysis_COM_EVAL_GitHub&utm_term=Efficiency%20Analysis&utm_content=C_JPM) as an *academic user*. ## Background The Data Envelopment Analysis (DEA) is a nonparametric problem in operations research and economics whose solution is an estimation of production frontiers. It is used to empirically measure the productive efficiency of decision making units (DMUs). There are a number of linear programming formulations of the DEA problem. Fuller coverage of the subject can be found in Farrell (1957), Charnes et al. (1978) and Thanassoulis et al. (1987). The formulation given by H.P. Williams is described in Land (1991). This formulation is the dual model of a model commonly used that relies on finding weighted ratios of outputs to inputs. We will use the formulation that is commonly used and can be found in Cooper et al. (2007). The Data Envelopment Analysis has been used to evaluate the performance of many different kinds of entities engaged in many different activities, and in many different contexts in many different countries. Examples include the maintenance activities of U.S. Air Force bases in different geographic locations, or police forces in England and Wales as well as the performance of branch banks in Cyprus and Canada and the efficiency of universities in performing their education and research functions in the U.S., England and France. The DEA approach is concerned with evaluations of *efficiency*. The most common measure of efficiency takes the form of a ratio like the following one: $$ \text{efficiency} = \frac{\text{output}}{\text{input}} $$ ## Model Formulation Assume there is a set of DMUs. Some common input and output items for each of these DMUs are selected as follows: 1. Numerical data are available for each input and output, with the data assumed to be positive, for all DMUs. 2. The items (inputs, outputs and choice of DMUs) should reflect an analyst's or a manager's interest in the components that will enter into the relative efficiency evaluations of the DMUs. 3. 
In principle, smaller input amounts are preferable and larger output amounts are preferable so the efficiency scores should reflect these principles. 4. The measurement units of the different inputs and outputs do not need to be congruent. Some may involve a number of persons, or areas of floor space, money expended, etc. ### Fractional problem formulation The proposed measure of the efficiency of a target DMU $k$ is obtained as the maximum of a ratio of weighted outputs to weighted inputs, subject to the condition that the similar ratios for every DMU be less than or equal to one. ### Sets and indices $j,k \in \text{DMUS}$: Indices and set of DMUs, where $k$ represents the target DMU. $i \in \text{Inputs}$: Index and set of inputs. $r \in \text{Outputs}$: Index and set of outputs. ### Parameters $\text{invalue}_{i,j} > 0$: Value of input $i$ for DMU $j$. $\text{outvalue}_{r,j} > 0$: Value of output $r$ for DMU $j$. ### Decision Variables $u_{r} \geq 0$: Weight of output $r$. $v_{i} \geq 0$: Weight of input $i$. ### Objective function **Target DMU Efficiency**: Maximize efficiency at the target DMU $k$. $$ \text{Maximize} \quad E_k = \frac{\sum_{r \in \text{Outputs}} \text{outvalue}_{r,k}*u_{r}}{\sum_{i \in \text{Inputs}} \text{invalue}_{i,k}*v_{i}} \tag{FP0} $$ ### Constraints **Efficiency ratios**: The efficiency of a DMU is a number between $[0,1]$. \begin{equation} \frac{\sum_{r \in \text{Outputs}} \text{outvalue}_{r,j}*u_{r}}{\sum_{i \in \text{Inputs}} \text{invalue}_{i,j}*v_{i}} \leq 1 \quad \forall j \in \text{DMUS} \tag{FP1} \end{equation} ### Linear programming problem formulation This linear programming formulation can be found in the book by Cooper et al. (2007). ### Objective function **Target DMU Efficiency**: Maximize efficiency at the target DMU $k$. $$ \text{Maximize} \quad E_k = \sum_{r \in \text{Outputs}} \text{outvalue}_{r,k}*u_{r} \tag{LP0} $$ ### Constraints **Efficiency ratio**: The efficiency of a DMU is a number between $[0,1]$. \begin{equation} \sum_{r \in \text{Outputs}} \text{outvalue}_{r,j}*u_{r} - \sum_{i \in \text{Inputs}} \text{invalue}_{i,j}*v_{i} \leq 0 \quad \forall j \in \text{DMUS} \tag{LP1} \end{equation} **Normalization**: This constraint ensures that the denominator of the objective function of the fractional problem is equal to one. \begin{equation} \sum_{i \in \text{Inputs}} \text{invalue}_{i,k}*v_{i} = 1 \tag{LP2} \end{equation} It is easy to verify that the fractional problem and the linear programming problem are equivalent. Assuming that the denominator of the efficiency ratio constraints of the fractional problem is positive for all DMUs, we can obtain the constraints $LP1$ by multiplying both sides of the constraints $FP1$ by the denominator. Next, we set the denominator of $FP0$ equal to 1 and define constraint $LP2$, and then maximize the numerator, resulting in the objective function $LP0$. ### Definition of efficiency 1. $DMU_k$ is efficient if the optimal objective function value $E_{k}^{*} = 1$. 2. Otherwise, $DMU_k$ is inefficient. ## Problem Description A car manufacturer wants to evaluate the efficiencies of different garages that have been granted a license to sell its cars. 
Each garage has a certain number of measurable ‘inputs’: * Staff * Showroom Space * Population in category 1 * Population in category 2 * Enquiries Alpha model * Enquiries Beta model Each garage also has a certain number of measurable ‘outputs’: * Number Sold of different brands of car * annual Profit The following table gives the inputs and outputs for each of the 28 franchised garages. The goal is to identify efficient and inefficient garages and their input-output weights. In order to solve this problem, it is necessary to solve the LP model for each garage. --- ## Python Implementation We import the Gurobi Python Module and other Python libraries. ### Helper Functions * `solve_DEA` builds and solves the LP model. ```python import numpy as np import pandas as pd from itertools import product import gurobipy as gp from gurobipy import GRB # tested with Python 3.7.0 & Gurobi 9.1.0 ``` ```python def solve_DEA(target, verbose=True): # input-output values for the garages inattr = ['staff', 'showRoom', 'Population1', 'Population2', 'alphaEnquiries', 'betaEnquiries'] outattr = ['alphaSales', 'BetaSales', 'profit'] dmus, inputs, outputs = gp.multidict({ 'Winchester': [{'staff': 7, 'showRoom': 8, 'Population1': 10, 'Population2': 12, 'alphaEnquiries': 8.5, 'betaEnquiries': 4}, {'alphaSales': 2, 'BetaSales': 0.6, 'profit': 1.5}], 'Andover': [{'staff': 6, 'showRoom': 6, 'Population1': 20, 'Population2': 30, 'alphaEnquiries': 9, 'betaEnquiries': 4.5}, {'alphaSales': 2.3, 'BetaSales': 0.7, 'profit': 1.6}], 'Basingstoke': [{'staff': 2, 'showRoom': 3, 'Population1': 40, 'Population2': 40, 'alphaEnquiries': 2, 'betaEnquiries': 1.5}, {'alphaSales': 0.8, 'BetaSales': 0.25, 'profit': 0.5}], 'Poole': [{'staff': 14, 'showRoom': 9, 'Population1': 20, 'Population2': 25, 'alphaEnquiries': 10, 'betaEnquiries': 6}, {'alphaSales': 2.6, 'BetaSales': 0.86, 'profit': 1.9}], 'Woking': [{'staff': 10, 'showRoom': 9, 'Population1': 10, 'Population2': 10, 'alphaEnquiries': 11, 'betaEnquiries': 5}, {'alphaSales': 2.4, 'BetaSales': 1, 'profit': 2}], 'Newbury': [{'staff': 24, 'showRoom': 15, 'Population1': 15, 'Population2': 13, 'alphaEnquiries': 25, 'betaEnquiries': 1.9}, {'alphaSales': 8, 'BetaSales': 2.6, 'profit': 4.5}], 'Portsmouth': [{'staff': 6, 'showRoom': 7, 'Population1': 50, 'Population2': 40, 'alphaEnquiries': 8.5, 'betaEnquiries': 3}, {'alphaSales': 2.5, 'BetaSales': 0.9, 'profit': 1.6}], 'Alresford': [{'staff': 8, 'showRoom': 7.5, 'Population1': 5, 'Population2': 8, 'alphaEnquiries': 9, 'betaEnquiries': 4}, {'alphaSales': 2.1, 'BetaSales': 0.85, 'profit': 2}], 'Salisbury': [{'staff': 5, 'showRoom': 5, 'Population1': 10, 'Population2': 10, 'alphaEnquiries': 5, 'betaEnquiries': 2.5}, {'alphaSales': 2, 'BetaSales': 0.65, 'profit': 0.9}], 'Guildford': [{'staff': 8, 'showRoom': 10, 'Population1': 30, 'Population2': 35, 'alphaEnquiries': 9.5, 'betaEnquiries': 4.5}, {'alphaSales': 2.05, 'BetaSales': 0.75, 'profit': 1.7}], 'Alton': [{'staff': 7, 'showRoom': 8, 'Population1': 7, 'Population2': 8, 'alphaEnquiries': 3, 'betaEnquiries': 2}, {'alphaSales': 1.9, 'BetaSales': 0.70, 'profit': 0.5}], 'Weybridge': [{'staff': 5, 'showRoom': 6.5, 'Population1': 9, 'Population2': 12, 'alphaEnquiries': 8, 'betaEnquiries': 4.5}, {'alphaSales': 1.8, 'BetaSales': 0.63, 'profit': 1.4}], 'Dorchester': [{'staff': 6, 'showRoom': 7.5, 'Population1': 10, 'Population2': 10, 'alphaEnquiries': 7.5, 'betaEnquiries': 4}, {'alphaSales': 1.5, 'BetaSales': 0.45, 'profit': 1.45}], 'Bridport': [{'staff': 11, 'showRoom': 8, 'Population1': 8, 
'Population2': 10, 'alphaEnquiries': 10, 'betaEnquiries': 6}, {'alphaSales': 2.2, 'BetaSales': 0.65, 'profit': 2.2}], 'Weymouth': [{'staff': 4, 'showRoom': 5, 'Population1': 10, 'Population2': 10, 'alphaEnquiries': 7.5, 'betaEnquiries': 3.5}, {'alphaSales': 1.8, 'BetaSales': 0.62, 'profit': 1.6}], 'Portland': [{'staff': 3, 'showRoom': 3.5, 'Population1': 3, 'Population2': 20, 'alphaEnquiries': 2, 'betaEnquiries': 1.5}, {'alphaSales': 0.9, 'BetaSales': 0.35, 'profit': 0.5}], 'Chichester': [{'staff': 5, 'showRoom': 5.5, 'Population1': 8, 'Population2': 10, 'alphaEnquiries': 7, 'betaEnquiries': 3.5}, {'alphaSales': 1.2, 'BetaSales': 0.45, 'profit': 1.3}], 'Petersfield': [{'staff': 21, 'showRoom': 12, 'Population1': 6, 'Population2': 6, 'alphaEnquiries': 15, 'betaEnquiries': 8}, {'alphaSales': 6, 'BetaSales': 0.25, 'profit': 2.9}], 'Petworth': [{'staff': 6, 'showRoom': 5.5, 'Population1': 2, 'Population2': 2, 'alphaEnquiries': 8, 'betaEnquiries': 5}, {'alphaSales': 1.5, 'BetaSales': 0.55, 'profit': 1.55}], 'Midhurst': [{'staff': 3, 'showRoom': 3.6, 'Population1': 3, 'Population2': 3, 'alphaEnquiries': 2.5, 'betaEnquiries': 1.5}, {'alphaSales': 0.8, 'BetaSales': 0.20, 'profit': 0.45}], 'Reading': [{'staff': 30, 'showRoom': 29, 'Population1': 120, 'Population2': 80, 'alphaEnquiries': 35, 'betaEnquiries': 20}, {'alphaSales': 7, 'BetaSales': 2.5, 'profit': 8}], 'Southampton': [{'staff': 25, 'showRoom': 16, 'Population1': 110, 'Population2': 80, 'alphaEnquiries': 27, 'betaEnquiries': 12}, {'alphaSales': 6.5, 'BetaSales': 3.5, 'profit': 5.4}], 'Bournemouth': [{'staff': 19, 'showRoom': 10, 'Population1': 90, 'Population2': 22, 'alphaEnquiries': 25, 'betaEnquiries': 13}, {'alphaSales': 5.5, 'BetaSales': 3.1, 'profit': 4.5}], 'Henley': [{'staff': 7, 'showRoom': 6, 'Population1': 5, 'Population2': 7, 'alphaEnquiries': 8.5, 'betaEnquiries': 4.5}, {'alphaSales': 1.2, 'BetaSales': 0.48, 'profit': 2}], 'Maidenhead': [{'staff': 12, 'showRoom': 8, 'Population1': 7, 'Population2': 10, 'alphaEnquiries': 12, 'betaEnquiries': 7}, {'alphaSales': 4.5, 'BetaSales': 2, 'profit': 2.3}], 'Fareham': [{'staff': 4, 'showRoom': 6, 'Population1': 1, 'Population2': 1, 'alphaEnquiries': 7.5, 'betaEnquiries': 3.5}, {'alphaSales': 1.1, 'BetaSales': 0.48, 'profit': 1.7}], 'Romsey': [{'staff': 2, 'showRoom': 2.5, 'Population1': 1, 'Population2': 1, 'alphaEnquiries': 2.5, 'betaEnquiries': 1}, {'alphaSales': 0.4, 'BetaSales': 0.1, 'profit': 0.55}], 'Ringwood': [{'staff': 2, 'showRoom': 3.5, 'Population1': 2, 'Population2': 2, 'alphaEnquiries': 1.9, 'betaEnquiries': 1.2}, {'alphaSales': 0.3, 'BetaSales': 0.09, 'profit': 0.4}] }) ### Create LP model model = gp.Model('DEA') # Decision variables wout = model.addVars(outattr, name="outputWeight") win = model.addVars(inattr, name="inputWeight") # Constraints ratios = model.addConstrs( ( gp.quicksum(outputs[h][r]*wout[r] for r in outattr ) - gp.quicksum(inputs[h][i]*win[i] for i in inattr ) <= 0 for h in dmus ), name='ratios' ) normalization = model.addConstr((gp.quicksum(inputs[target][i]*win[i] for i in inattr ) == 1 ), name='normalization') # Objective function model.setObjective( gp.quicksum(outputs[target][r]*wout[r] for r in outattr ), GRB.MAXIMIZE) # Run optimization engine if not verbose: model.params.OutputFlag = 0 model.optimize() # Print results print(f"\nThe efficiency of target DMU {target} is {round(model.objVal,3)}") print("__________________________________________________________________") print(f"The weights for the inputs are:") for i in inattr: print(f"For {i}: 
{round(win[i].x,3)} ") print("__________________________________________________________________") print(f"The weights for the outputs are") for r in outattr: print(f"For {r} is: {round(wout[r].x,3)} ") print("__________________________________________________________________\n\n") return model.objVal ``` ## Input Data We define the list of garages. ```python dmus = ['Winchester','Andover','Basingstoke', 'Poole', 'Woking','Newbury','Portsmouth','Alresford','Salisbury','Guildford','Alton','Weybridge', 'Dorchester', 'Bridport', 'Weymouth', 'Portland', 'Chichester', 'Petersfield', 'Petworth', 'Midhurst', 'Reading', 'Southampton', 'Bournemouth', 'Henley', 'Maidenhead', 'Fareham', 'Romsey', 'Ringwood'] ``` --- ## Output Report We print out the efficiency score of each garage and its associated input and output weights. ```python # Solving DEA model for each DMU performance = {} for h in dmus: performance[h] = solve_DEA(h, verbose=False) ``` Using license file c:\gurobi\gurobi.lic The efficiency of target DMU Winchester is 0.835 __________________________________________________________________ The weights for the inputs are: For staff: 0.012 For showRoom: 0.0 For Population1: 0.0 For Population2: 0.002 For alphaEnquiries: 0.095 For betaEnquiries: 0.02 __________________________________________________________________ The weights for the outputs are For alphaSales is: 0.113 For BetaSales is: 0.004 For profit is: 0.404 __________________________________________________________________ The efficiency of target DMU Andover is 0.917 __________________________________________________________________ The weights for the inputs are: For staff: 0.115 For showRoom: 0.03 For Population1: 0.0 For Population2: 0.0 For alphaEnquiries: 0.014 For betaEnquiries: 0.0 __________________________________________________________________ The weights for the outputs are For alphaSales is: 0.399 For BetaSales is: 0.0 For profit is: 0.0 __________________________________________________________________ The efficiency of target DMU Basingstoke is 1.0 __________________________________________________________________ The weights for the inputs are: For staff: 0.429 For showRoom: 0.0 For Population1: 0.0 For Population2: 0.0 For alphaEnquiries: 0.071 For betaEnquiries: 0.0 __________________________________________________________________ The weights for the outputs are For alphaSales is: 1.25 For BetaSales is: 0.0 For profit is: 0.0 __________________________________________________________________ The efficiency of target DMU Poole is 0.864 __________________________________________________________________ The weights for the inputs are: For staff: 0.0 For showRoom: 0.019 For Population1: 0.0 For Population2: 0.001 For alphaEnquiries: 0.081 For betaEnquiries: 0.0 __________________________________________________________________ The weights for the outputs are For alphaSales is: 0.065 For BetaSales is: 0.0 For profit is: 0.366 __________________________________________________________________ The efficiency of target DMU Woking is 0.845 __________________________________________________________________ The weights for the inputs are: For staff: 0.015 For showRoom: 0.0 For Population1: 0.0 For Population2: 0.001 For alphaEnquiries: 0.062 For betaEnquiries: 0.032 __________________________________________________________________ The weights for the outputs are For alphaSales is: 0.0 For BetaSales is: 0.218 For profit is: 0.314 __________________________________________________________________ The efficiency of target DMU 
Newbury is 1.0 __________________________________________________________________ The weights for the inputs are: For staff: 0.0 For showRoom: 0.0 For Population1: 0.0 For Population2: 0.0 For alphaEnquiries: 0.0 For betaEnquiries: 0.526 __________________________________________________________________ The weights for the outputs are For alphaSales is: 0.0 For BetaSales is: 0.0 For profit is: 0.222 __________________________________________________________________ The efficiency of target DMU Portsmouth is 1.0 __________________________________________________________________ The weights for the inputs are: For staff: 0.149 For showRoom: 0.0 For Population1: 0.0 For Population2: 0.0 For alphaEnquiries: 0.0 For betaEnquiries: 0.036 __________________________________________________________________ The weights for the outputs are For alphaSales is: 0.4 For BetaSales is: 0.0 For profit is: 0.0 __________________________________________________________________ The efficiency of target DMU Alresford is 1.0 __________________________________________________________________ The weights for the inputs are: For staff: 0.0 For showRoom: 0.0 For Population1: 0.0 For Population2: 0.002 For alphaEnquiries: 0.098 For betaEnquiries: 0.026 __________________________________________________________________ The weights for the outputs are For alphaSales is: 0.081 For BetaSales is: 0.005 For profit is: 0.413 __________________________________________________________________ The efficiency of target DMU Salisbury is 1.0 __________________________________________________________________ The weights for the inputs are: For staff: 0.087 For showRoom: 0.0 For Population1: 0.0 For Population2: 0.0 For alphaEnquiries: 0.113 For betaEnquiries: 0.0 __________________________________________________________________ The weights for the outputs are For alphaSales is: 0.5 For BetaSales is: 0.0 For profit is: 0.0 __________________________________________________________________ The efficiency of target DMU Guildford is 0.802 __________________________________________________________________ The weights for the inputs are: For staff: 0.014 For showRoom: 0.0 For Population1: 0.0 For Population2: 0.0 For alphaEnquiries: 0.075 For betaEnquiries: 0.036 __________________________________________________________________ The weights for the outputs are For alphaSales is: 0.062 For BetaSales is: 0.0 For profit is: 0.397 __________________________________________________________________ The efficiency of target DMU Alton is 1.0 __________________________________________________________________ The weights for the inputs are: For staff: 0.0 For showRoom: 0.0 For Population1: 0.0 For Population2: 0.0 For alphaEnquiries: 0.333 For betaEnquiries: 0.0 __________________________________________________________________ The weights for the outputs are For alphaSales is: 0.0 For BetaSales is: 1.429 For profit is: 0.0 __________________________________________________________________ The efficiency of target DMU Weybridge is 0.854 __________________________________________________________________ The weights for the inputs are: For staff: 0.149 For showRoom: 0.0 For Population1: 0.028 For Population2: 0.0 For alphaEnquiries: 0.0 For betaEnquiries: 0.0 __________________________________________________________________ The weights for the outputs are For alphaSales is: 0.379 For BetaSales is: 0.0 For profit is: 0.122 __________________________________________________________________ The efficiency of target DMU Dorchester is 0.867 
__________________________________________________________________ The weights for the inputs are: For staff: 0.014 For showRoom: 0.0 For Population1: 0.0 For Population2: 0.003 For alphaEnquiries: 0.118 For betaEnquiries: 0.0 __________________________________________________________________ The weights for the outputs are For alphaSales is: 0.11 For BetaSales is: 0.0 For profit is: 0.484 __________________________________________________________________ The efficiency of target DMU Bridport is 0.982 __________________________________________________________________ The weights for the inputs are: For staff: 0.0 For showRoom: 0.002 For Population1: 0.0 For Population2: 0.002 For alphaEnquiries: 0.096 For betaEnquiries: 0.0 __________________________________________________________________ The weights for the outputs are For alphaSales is: 0.06 For BetaSales is: 0.0 For profit is: 0.387 __________________________________________________________________ The efficiency of target DMU Weymouth is 1.0 __________________________________________________________________ The weights for the inputs are: For staff: 0.25 For showRoom: 0.0 For Population1: 0.0 For Population2: 0.0 For alphaEnquiries: 0.0 For betaEnquiries: 0.0 __________________________________________________________________ The weights for the outputs are For alphaSales is: 0.556 For BetaSales is: 0.0 For profit is: 0.0 __________________________________________________________________ The efficiency of target DMU Portland is 1.0 __________________________________________________________________ The weights for the inputs are: For staff: 0.0 For showRoom: 0.0 For Population1: 0.032 For Population2: 0.0 For alphaEnquiries: 0.452 For betaEnquiries: 0.0 __________________________________________________________________ The weights for the outputs are For alphaSales is: 0.0 For BetaSales is: 0.0 For profit is: 2.0 __________________________________________________________________ The efficiency of target DMU Chichester is 0.825 __________________________________________________________________ The weights for the inputs are: For staff: 0.011 For showRoom: 0.008 For Population1: 0.0 For Population2: 0.002 For alphaEnquiries: 0.125 For betaEnquiries: 0.0 __________________________________________________________________ The weights for the outputs are For alphaSales is: 0.098 For BetaSales is: 0.0 For profit is: 0.545 __________________________________________________________________ The efficiency of target DMU Petersfield is 1.0 __________________________________________________________________ The weights for the inputs are: For staff: 0.0 For showRoom: 0.019 For Population1: 0.0 For Population2: 0.0 For alphaEnquiries: 0.04 For betaEnquiries: 0.021 __________________________________________________________________ The weights for the outputs are For alphaSales is: 0.167 For BetaSales is: 0.0 For profit is: 0.0 __________________________________________________________________ The efficiency of target DMU Petworth is 0.988 __________________________________________________________________ The weights for the inputs are: For staff: 0.0 For showRoom: 0.156 For Population1: 0.058 For Population2: 0.013 For alphaEnquiries: 0.0 For betaEnquiries: 0.0 __________________________________________________________________ The weights for the outputs are For alphaSales is: 0.141 For BetaSales is: 0.0 For profit is: 0.501 __________________________________________________________________ The efficiency of target DMU Midhurst is 0.889 
__________________________________________________________________ The weights for the inputs are: For staff: 0.018 For showRoom: 0.0 For Population1: 0.0 For Population2: 0.011 For alphaEnquiries: 0.366 For betaEnquiries: 0.0 __________________________________________________________________ The weights for the outputs are For alphaSales is: 0.294 For BetaSales is: 0.047 For profit is: 1.431 __________________________________________________________________ The efficiency of target DMU Reading is 0.984 __________________________________________________________________ The weights for the inputs are: For staff: 0.001 For showRoom: 0.006 For Population1: 0.0 For Population2: 0.0 For alphaEnquiries: 0.023 For betaEnquiries: 0.0 __________________________________________________________________ The weights for the outputs are For alphaSales is: 0.019 For BetaSales is: 0.0 For profit is: 0.106 __________________________________________________________________ The efficiency of target DMU Southampton is 1.0 __________________________________________________________________ The weights for the inputs are: For staff: 0.0 For showRoom: 0.005 For Population1: 0.0 For Population2: 0.0 For alphaEnquiries: 0.028 For betaEnquiries: 0.012 __________________________________________________________________ The weights for the outputs are For alphaSales is: 0.0 For BetaSales is: 0.057 For profit is: 0.149 __________________________________________________________________ The efficiency of target DMU Bournemouth is 1.0 __________________________________________________________________ The weights for the inputs are: For staff: 0.0 For showRoom: 0.1 For Population1: 0.0 For Population2: 0.0 For alphaEnquiries: 0.0 For betaEnquiries: 0.0 __________________________________________________________________ The weights for the outputs are For alphaSales is: 0.0 For BetaSales is: 0.323 For profit is: 0.0 __________________________________________________________________ The efficiency of target DMU Henley is 1.0 __________________________________________________________________ The weights for the inputs are: For staff: 0.0 For showRoom: 0.161 For Population1: 0.007 For Population2: 0.0 For alphaEnquiries: 0.0 For betaEnquiries: 0.0 __________________________________________________________________ The weights for the outputs are For alphaSales is: 0.0 For BetaSales is: 0.0 For profit is: 0.5 __________________________________________________________________ The efficiency of target DMU Maidenhead is 1.0 __________________________________________________________________ The weights for the inputs are: For staff: 0.0 For showRoom: 0.122 For Population1: 0.004 For Population2: 0.0 For alphaEnquiries: 0.0 For betaEnquiries: 0.0 __________________________________________________________________ The weights for the outputs are For alphaSales is: 0.0 For BetaSales is: 0.5 For profit is: 0.0 __________________________________________________________________ The efficiency of target DMU Fareham is 1.0 __________________________________________________________________ The weights for the inputs are: For staff: 0.0 For showRoom: 0.0 For Population1: 0.0 For Population2: 1.0 For alphaEnquiries: 0.0 For betaEnquiries: 0.0 __________________________________________________________________ The weights for the outputs are For alphaSales is: 0.0 For BetaSales is: 2.083 For profit is: 0.0 __________________________________________________________________ The efficiency of target DMU Romsey is 1.0 
__________________________________________________________________ The weights for the inputs are: For staff: 0.0 For showRoom: 0.0 For Population1: 0.0 For Population2: 0.0 For alphaEnquiries: 0.31 For betaEnquiries: 0.224 __________________________________________________________________ The weights for the outputs are For alphaSales is: 0.0 For BetaSales is: 0.0 For profit is: 1.818 __________________________________________________________________ The efficiency of target DMU Ringwood is 0.908 __________________________________________________________________ The weights for the inputs are: For staff: 0.0 For showRoom: 0.0 For Population1: 0.0 For Population2: 0.013 For alphaEnquiries: 0.512 For betaEnquiries: 0.0 __________________________________________________________________ The weights for the outputs are For alphaSales is: 0.299 For BetaSales is: 0.0 For profit is: 2.044 __________________________________________________________________ --- ## Analysis We identify which garages are efficient and which ones are inefficient, and provide the efficiency scores for each garage. ```python # Identifying efficient and inefficient DMUs # Sorting garages in descending efficiency number sorted_performance = {k: v for k, v in sorted(performance.items(), key=lambda item: item[1], reverse = True)} efficient = [] inefficient = [] for h in sorted_performance.keys(): if sorted_performance[h] >= 0.9999999: efficient.append(h) if sorted_performance[h] < 0.9999999: inefficient.append(h) print('____________________________________________') print(f"The efficient DMUs are:") for eff in efficient: print(f"The performance value of DMU {eff} is: {round(performance[eff],3)}") print('____________________________________________') print(f"The inefficient DMUs are:") for ine in inefficient: print(f"The performance value of DMU {ine} is: {round(performance[ine],3)}") ``` ____________________________________________ The efficient DMUs are: The performance value of DMU Newbury is: 1.0 The performance value of DMU Alresford is: 1.0 The performance value of DMU Salisbury is: 1.0 The performance value of DMU Alton is: 1.0 The performance value of DMU Weymouth is: 1.0 The performance value of DMU Petersfield is: 1.0 The performance value of DMU Southampton is: 1.0 The performance value of DMU Bournemouth is: 1.0 The performance value of DMU Maidenhead is: 1.0 The performance value of DMU Fareham is: 1.0 The performance value of DMU Romsey is: 1.0 The performance value of DMU Basingstoke is: 1.0 The performance value of DMU Portsmouth is: 1.0 The performance value of DMU Portland is: 1.0 The performance value of DMU Henley is: 1.0 ____________________________________________ The inefficient DMUs are: The performance value of DMU Petworth is: 0.988 The performance value of DMU Reading is: 0.984 The performance value of DMU Bridport is: 0.982 The performance value of DMU Andover is: 0.917 The performance value of DMU Ringwood is: 0.908 The performance value of DMU Midhurst is: 0.889 The performance value of DMU Dorchester is: 0.867 The performance value of DMU Poole is: 0.864 The performance value of DMU Weybridge is: 0.854 The performance value of DMU Woking is: 0.845 The performance value of DMU Winchester is: 0.835 The performance value of DMU Chichester is: 0.825 The performance value of DMU Guildford is: 0.802 ## References H. Paul Williams, Model Building in Mathematical Programming, fifth edition. Cooper, W. W, L. M. Seiford, K. Tone. 
(2007) Data Envelopment Analysis: A Comprehensive Text with Models, Applications, References and DEA-Solver Software. Second edition. Springer-Verlag US. Land, A. (1991) Data envelopment analysis, Chapter 5, in Operations Research in Management (eds S.C. Littlechild and M.F. Shutler), Prentice Hall, London. Farrell, M.J. (1957) The measurement of productive efficiency. Journal of the Royal Statistical Society, Series A, 120, 253–290. Charnes, A., Cooper, W.W. and Rhodes, E. (1978) Measuring the efficiency of decision making units. European Journal of Operational Research, 2, 429–444. Thanassoulis, E., Dyson, R.G. and Foster, M.J. (1987) Relative efficiency assessments using data envelopment analysis: an application to data on rates departments. Journal of the Operational Research Society, 5, 397–411. Copyright © 2020 Gurobi Optimization, LLC
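As a footnote to the formulation section above, the (LP0)–(LP2) model can also be checked outside of Gurobi. The sketch below is our addition, not part of the original notebook: it solves the same LP with `scipy.optimize.linprog` for an invented three-garage toy instance with a single input and a single output, so the efficiency scores can be verified by hand as each garage's output/input ratio divided by the best such ratio.

```python
import numpy as np
from scipy.optimize import linprog

# Invented toy data: one input and one output per DMU
inputs = np.array([2.0, 4.0, 3.0])   # e.g. staff
outputs = np.array([1.0, 3.0, 2.0])  # e.g. profit

for k, name in enumerate(["A", "B", "C"]):
    # Decision vector z = [u, v]: one output weight and one input weight
    c = np.array([-outputs[k], 0.0])            # maximize outvalue_k * u                     (LP0)
    A_ub = np.column_stack([outputs, -inputs])  # outvalue_j*u - invalue_j*v <= 0 for all j   (LP1)
    b_ub = np.zeros(len(inputs))
    A_eq = np.array([[0.0, inputs[k]]])         # invalue_k * v == 1                          (LP2)
    b_eq = np.array([1.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None), (0, None)])
    print(f"DMU {name}: efficiency = {-res.fun:.3f}")
```

With this toy data the best output/input ratio is garage B's 0.75, so the expected efficiencies are 0.667, 1.0 and 0.889.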
e4279e7528ba062671618e83ce5ac32405490f98
43,763
ipynb
Jupyter Notebook
efficiency_analysis/efficiency_analysis.ipynb
Maninaa/modeling-examples
51575a453d28e1e9435abd865432955b182ba577
[ "Apache-2.0" ]
1
2021-11-29T07:42:12.000Z
2021-11-29T07:42:12.000Z
efficiency_analysis/efficiency_analysis.ipynb
Maninaa/modeling-examples
51575a453d28e1e9435abd865432955b182ba577
[ "Apache-2.0" ]
null
null
null
efficiency_analysis/efficiency_analysis.ipynb
Maninaa/modeling-examples
51575a453d28e1e9435abd865432955b182ba577
[ "Apache-2.0" ]
1
2021-11-29T07:41:53.000Z
2021-11-29T07:41:53.000Z
45.586458
732
0.616754
true
9,480
Qwen/Qwen-72B
1. YES 2. YES
0.907312
0.831143
0.754106
__label__yue_Hant
0.926135
0.590374
# Bayes by Backprop An implementation of the algorithm described in https://arxiv.org/abs/1505.05424. This notebook accompanies the article at https://www.nitarshan.com/bayes-by-backprop. ```python %matplotlib inline import math import matplotlib.pyplot as plt import numpy as np import seaborn as sns import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from tensorboardX import SummaryWriter from torchvision import datasets, transforms from torchvision.utils import make_grid from tqdm import tqdm, trange from copy import deepcopy writer = SummaryWriter() sns.set() sns.set_style("dark") sns.set_palette("muted") sns.set_color_codes("muted") ``` ```python DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu") LOADER_KWARGS = {'num_workers': 1, 'pin_memory': True} if torch.cuda.is_available() else {} print(torch.cuda.is_available()) ``` ## Data Preparation ```python BATCH_SIZE = 100 TEST_BATCH_SIZE = 5 train_loader = torch.utils.data.DataLoader( datasets.FashionMNIST( './fmnist', train=True, download=True, transform=transforms.ToTensor()), batch_size=BATCH_SIZE, shuffle=True, **LOADER_KWARGS) test_loader = torch.utils.data.DataLoader( datasets.FashionMNIST( './fmnist', train=False, download=True, transform=transforms.ToTensor()), batch_size=TEST_BATCH_SIZE, shuffle=False, **LOADER_KWARGS) # train_loader = torch.utils.data.DataLoader( # datasets.CIFAR10( # '../gitignored/data', train=True, download=True, # transform=transforms.ToTensor()), # batch_size=BATCH_SIZE, shuffle=True, **LOADER_KWARGS) # test_loader = torch.utils.data.DataLoader( # datasets.CIFAR10( # '../gitignored/data', train=False, download=True, # transform=transforms.ToTensor()), # batch_size=TEST_BATCH_SIZE, shuffle=False, **LOADER_KWARGS) TRAIN_SIZE = len(train_loader.dataset) TEST_SIZE = len(test_loader.dataset) NUM_BATCHES = len(train_loader) NUM_TEST_BATCHES = len(test_loader) CLASSES = 10 TRAIN_EPOCHS = 20 SAMPLES = 2 TEST_SAMPLES = 10 assert (TRAIN_SIZE % BATCH_SIZE) == 0 assert (TEST_SIZE % TEST_BATCH_SIZE) == 0 ``` ## Modelling $$\underline{\text{Reparameterized Gaussian}}$$ $$\begin{aligned} \theta &= (\mu, \rho)\\ \sigma &= \ln{(1+e^\rho)}\\ \mathcal{N}(x\vert \mu, \sigma) &= \frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}}\\ \ln{\mathcal{N}(x\vert \mu, \sigma)} &= -\ln{\sqrt{2\pi}} -\ln{\sigma} -\frac{(x-\mu)^2}{2\sigma^2}\\ P(\mathbf{w}) &= \prod_j{\mathcal{N}(\mathbf{w}_j \vert 0, \sigma^2)}\\ \ln{P(\mathbf{w})} &= \sum_j{\ln{\mathcal{N}(\mathbf{w}_j \vert 0, \sigma^2)}}\\ \end{aligned}$$ ```python class Gaussian(object): def __init__(self, mu, rho): super().__init__() self.mu = mu self.rho = rho self.normal = torch.distributions.Normal(0,1) self.mask = torch.ones_like(self.mu) @property def sigma(self): return torch.log1p(torch.exp(self.rho)) def sample(self): epsilon = self.normal.sample(self.rho.size()).to(DEVICE) return self.mu + (self.sigma * epsilon) * self.mask.to(self.mu.device) def log_prob(self, input): return (-math.log(math.sqrt(2 * math.pi)) - torch.log(self.sigma) - ((input - self.mu) ** 2) / (2 * self.sigma ** 2)).sum() ``` $$\underline{\text{Scale Mixture Gaussian}}$$ $$\begin{align} P(\mathbf{w}) &= \prod_j{\pi \mathcal{N}(\mathbf{w}_j \vert 0, \sigma_1^2) + (1-\pi) \mathcal{N}(\mathbf{w}_j \vert 0, \sigma_2^2)}\\ \ln{P(\mathbf{w})} &= \sum_j{\ln{(\pi \mathcal{N}(\mathbf{w}_j \vert 0, \sigma_1^2) + (1-\pi) \mathcal{N}(\mathbf{w}_j \vert 0, \sigma_2^2))}}\\ \end{align}$$ ```python class ScaleMixtureGaussian(object): def __init__(self, 
pi, sigma1, sigma2): super().__init__() self.pi = pi self.sigma1 = sigma1 self.sigma2 = sigma2 self.gaussian1 = torch.distributions.Normal(0,sigma1) self.gaussian2 = torch.distributions.Normal(0,sigma2) def log_prob(self, input): prob1 = torch.exp(self.gaussian1.log_prob(input)) prob2 = torch.exp(self.gaussian2.log_prob(input)) return (torch.log(self.pi * prob1 + (1-self.pi) * prob2)).sum() ``` $$\pi = \frac{1}{2}$$ $$-\ln{\sigma_1} = 0$$ $$-\ln{\sigma_2} = 6$$ ```python PI = 0.5 SIGMA_1 = torch.cuda.FloatTensor([math.exp(-0)]) SIGMA_2 = torch.cuda.FloatTensor([math.exp(-6)]) # def visualize_scale_mixture_components(): # def show_lines(): # pass # mix = ScaleMixtureGaussian(PI, SIGMA_1, SIGMA_2) # normal_1 = torch.distributions.Normal(0, SIGMA_1) # normal_2 = torch.distributions.Normal(0, SIGMA_2) # x_points = np.linspace(-5,5,10000) # d1 = np.array([torch.exp(normal_1.log_prob(float(c))) for c in x_points]) # d2 = np.array([torch.exp(normal_2.log_prob(float(c))) for c in x_points]) # d3 = np.array([torch.exp(mix.log_prob(float(c))) for c in x_points]) # plt.subplots(1,3,figsize=(14,4)) # plt.subplot(1,3,1) # plt.plot(x_points,d2,color="g") # plt.plot(x_points,d3,color="r") # plt.plot(x_points,d1,color="b") # plt.legend(["sigma2", "mix", "sigma1"]) # plt.ylim(0,0.5) # plt.subplot(1,3,2) # plt.plot(x_points,d1,color="b") # plt.plot(x_points,d2,color="g") # plt.plot(x_points,d3,color="r") # plt.legend(["sigma1", "sigma2", "mix"]) # plt.ylim(0,160) # plt.subplot(1,3,3) # plt.plot(x_points,d2,color="g") # plt.plot(x_points,d3,color="r") # plt.plot(x_points,d1,color="b") # plt.legend(["sigma2", "mix", "sigma1"]) # plt.ylim(0,80) # visualize_scale_mixture_components() ``` ```python class BayesianLinear(nn.Module): def __init__(self, in_features, out_features): super().__init__() self.in_features = in_features self.out_features = out_features # Weight parameters self.weight_mu = nn.Parameter(torch.Tensor(out_features, in_features).uniform_(-0.2, 0.2)) self.weight_rho = nn.Parameter(torch.Tensor(out_features, in_features).uniform_(-5,-4)) self.weight = Gaussian(self.weight_mu, self.weight_rho) # Bias parameters self.bias_mu = nn.Parameter(torch.Tensor(out_features).uniform_(-0.2, 0.2)) self.bias_rho = nn.Parameter(torch.Tensor(out_features).uniform_(-5,-4)) self.bias = Gaussian(self.bias_mu, self.bias_rho) # Prior distributions self.weight_prior = ScaleMixtureGaussian(PI, SIGMA_1, SIGMA_2) self.bias_prior = ScaleMixtureGaussian(PI, SIGMA_1, SIGMA_2) self.log_prior = 0 self.log_variational_posterior = 0 def forward(self, input, sample=False, calculate_log_probs=False): if self.training or sample: weight = self.weight.sample() bias = self.bias.sample() else: weight = self.weight.mu bias = self.bias.mu if self.training or calculate_log_probs: self.log_prior = self.weight_prior.log_prob(weight) + self.bias_prior.log_prob(bias) self.log_variational_posterior = self.weight.log_prob(weight) + self.bias.log_prob(bias) else: self.log_prior, self.log_variational_posterior = 0, 0 return F.linear(input, weight, bias) ``` ```python class BayesianNetwork(nn.Module): def __init__(self): super().__init__() self.l1 = BayesianLinear(28*28, 1200) # self.l1 = BayesianLinear(3*32*32, 1200) self.l2 = BayesianLinear(1200, 1200) self.l3 = BayesianLinear(1200, 10) def forward(self, x, sample=False): x = x.view(-1, 28*28) # x = x.view(-1, 3*32*32) x = F.relu(self.l1(x, sample)) x = F.relu(self.l2(x, sample)) x = F.log_softmax(self.l3(x, sample), dim=1) return x def log_prior(self): return self.l1.log_prior \ + 
self.l2.log_prior \ + self.l3.log_prior def log_variational_posterior(self): return self.l1.log_variational_posterior \ + self.l2.log_variational_posterior \ + self.l3.log_variational_posterior def sample_elbo(self, input, target, samples=SAMPLES): outputs = torch.zeros(samples, BATCH_SIZE, CLASSES).to(DEVICE) log_priors = torch.zeros(samples).to(DEVICE) log_variational_posteriors = torch.zeros(samples).to(DEVICE) for i in range(samples): outputs[i] = self(input, sample=True) log_priors[i] = self.log_prior() log_variational_posteriors[i] = self.log_variational_posterior() log_prior = log_priors.mean() log_variational_posterior = log_variational_posteriors.mean() negative_log_likelihood = F.nll_loss(outputs.mean(0), target, size_average=False) loss = (log_variational_posterior - log_prior)/NUM_BATCHES + negative_log_likelihood return loss, log_prior, log_variational_posterior, negative_log_likelihood net = BayesianNetwork().to(DEVICE) ``` ## Training ```python def write_weight_histograms(epoch): writer.add_histogram('histogram/w1_mu', net.l1.weight_mu,epoch) writer.add_histogram('histogram/w1_rho', net.l1.weight_rho,epoch) writer.add_histogram('histogram/w2_mu', net.l2.weight_mu,epoch) writer.add_histogram('histogram/w2_rho', net.l2.weight_rho,epoch) writer.add_histogram('histogram/w3_mu', net.l3.weight_mu,epoch) writer.add_histogram('histogram/w3_rho', net.l3.weight_rho,epoch) writer.add_histogram('histogram/b1_mu', net.l1.bias_mu,epoch) writer.add_histogram('histogram/b1_rho', net.l1.bias_rho,epoch) writer.add_histogram('histogram/b2_mu', net.l2.bias_mu,epoch) writer.add_histogram('histogram/b2_rho', net.l2.bias_rho,epoch) writer.add_histogram('histogram/b3_mu', net.l3.bias_mu,epoch) writer.add_histogram('histogram/b3_rho', net.l3.bias_rho,epoch) def write_loss_scalars(epoch, batch_idx, loss, log_prior, log_variational_posterior, negative_log_likelihood): writer.add_scalar('logs/loss', loss, epoch*NUM_BATCHES+batch_idx) writer.add_scalar('logs/complexity_cost', log_variational_posterior-log_prior, epoch*NUM_BATCHES+batch_idx) writer.add_scalar('logs/log_prior', log_prior, epoch*NUM_BATCHES+batch_idx) writer.add_scalar('logs/log_variational_posterior', log_variational_posterior, epoch*NUM_BATCHES+batch_idx) writer.add_scalar('logs/negative_log_likelihood', negative_log_likelihood, epoch*NUM_BATCHES+batch_idx) ``` ```python def train(net, optimizer, epoch): net.train() if epoch == 0: # write initial distributions write_weight_histograms(epoch) for batch_idx, (data, target) in enumerate(tqdm(train_loader)): data, target = data.to(DEVICE), target.to(DEVICE) net.zero_grad() loss, log_prior, log_variational_posterior, negative_log_likelihood = net.sample_elbo(data, target) loss.backward() optimizer.step() write_loss_scalars(epoch, batch_idx, loss, log_prior, log_variational_posterior, negative_log_likelihood) write_weight_histograms(epoch+1) ``` ```python optimizer = optim.Adam(net.parameters()) for epoch in range(TRAIN_EPOCHS): train(net, optimizer, epoch) ``` ```python backup_net = deepcopy(net) ``` ```python net = deepcopy(backup_net) ``` ```python # for name, module in net.named_modules(): # if name in ['l1', 'l2']: # breakpoint() ``` ```python all_scores = [] for name, module in net.named_modules(): if name in ['l1', 'l2']: # scores = - module.weight.sigma scores = torch.abs(module.weight.mu) / module.weight.sigma all_scores.append(scores.flatten()) ``` ```python all_scores = torch.cat([x for x in all_scores]) ``` ```python # all_scores ``` ```python # torch.histc(all_scores, 10) ``` 
```python threshold, _ = torch.topk(all_scores, int(len(all_scores)*0.25), sorted=True) ``` ```python acceptable_score = threshold[-1] ``` ```python acceptable_score ``` ```python for name, module in net.named_modules(): if name in ['l1', 'l2']: mask = (torch.abs(module.weight.mu) / module.weight.sigma > acceptable_score) # mask = (- module.weight.sigma) > acceptable_score module.weight.mu = mask * module.weight.mu module.weight.mask = mask print(mask.sum().float() / torch.numel(mask)) ``` ## Evaluation ### Model Ensemble ```python def test_ensemble(): net.eval() correct = 0 corrects = np.zeros(TEST_SAMPLES+1, dtype=int) with torch.no_grad(): for data, target in test_loader: data, target = data.to(DEVICE), target.to(DEVICE) outputs = torch.zeros(TEST_SAMPLES+1, TEST_BATCH_SIZE, CLASSES).to(DEVICE) for i in range(TEST_SAMPLES): outputs[i] = net(data, sample=True) outputs[TEST_SAMPLES] = net(data, sample=False) output = outputs.mean(0) preds = preds = outputs.max(2, keepdim=True)[1] pred = output.max(1, keepdim=True)[1] # index of max log-probability corrects += preds.eq(target.view_as(pred)).sum(dim=1).squeeze().cpu().numpy() correct += pred.eq(target.view_as(pred)).sum().item() for index, num in enumerate(corrects): if index < TEST_SAMPLES: print('Component {} Accuracy: {}/{}'.format(index, num, TEST_SIZE)) else: print('Posterior Mean Accuracy: {}/{}'.format(num, TEST_SIZE)) print('Ensemble Accuracy: {}/{}'.format(correct, TEST_SIZE)) test_ensemble() ``` ### Model Uncertainty #### In-Domain Uncertainty ```python def show(img): npimg = img.numpy() plt.imshow(np.transpose(npimg, (1,2,0)), interpolation='nearest') ``` ```python fmnist_sample = iter(test_loader).next() fmnist_sample[0] = fmnist_sample[0].to(DEVICE) print(fmnist_sample[1]) sns.set_style("dark") show(make_grid(fmnist_sample[0].cpu())) ``` ```python net.eval() fmnist_outputs = net(fmnist_sample[0], True).max(1, keepdim=True)[1].detach().cpu().numpy() for _ in range(99): fmnist_outputs = np.append(fmnist_outputs, net(fmnist_sample[0], True).max(1, keepdim=True)[1].detach().cpu().numpy(), axis=1) sns.set_style("darkgrid") plt.subplots(5,1,figsize=(10,4)) for i in range(5): plt.subplot(5,1,i+1) plt.ylim(0,100) plt.xlabel("Categories") plt.xticks(range(10), ["Top", "Trouser", "Pullover", "Dress", "Coat", "Sandal", "Shirt", "Sneaker", "Bag", "Ankle Boot"]) plt.ylabel("Count") plt.yticks(range(50,101,50)) plt.hist(fmnist_outputs[i], np.arange(-0.5, 10, 1)) ``` #### Out-of-Domain Uncertainty ```python mnist_loader = torch.utils.data.DataLoader( datasets.MNIST('../gitignored/data', train=False, download=True, transform=transforms.ToTensor()), batch_size=5, shuffle=False) # mnist_loader = torch.utils.data.DataLoader( # datasets.SVHN('../gitignored/data', split='test', download=True, transform=transforms.ToTensor()), batch_size=5, shuffle=False) ``` ```python from sklearn import metrics import numpy as np def calculate_auroc(correct, predictions): fpr, tpr, thresholds = metrics.roc_curve(correct, predictions) auroc = metrics.auc(fpr, tpr) return auroc def calculate_aupr(correct, predictions): aupr = metrics.average_precision_score(correct, predictions) return aupr ``` ```python ood_labels = [] ood_scores = [] with torch.no_grad(): for data, _ in test_loader: data = data.to(DEVICE) out = net(data, True) probs = F.softmax(out, 1) max_probs, _ = probs.max(1) max_probs = max_probs.detach().cpu().numpy() ood_labels.append(np.ones_like(max_probs)) ood_scores.append(max_probs) for data, _ in mnist_loader: data = data.to(DEVICE) out = net(data, 
True) probs = F.softmax(out, 1) max_probs, _ = probs.max(1) max_probs = max_probs.detach().cpu().numpy() ood_labels.append(np.zeros_like(max_probs)) ood_scores.append(max_probs) ood_labels = np.concatenate(ood_labels) ood_scores = np.concatenate(ood_scores) ``` ```python ood_scores[10000:].mean() ``` ```python print(calculate_auroc(ood_labels, ood_scores)) print(calculate_aupr(ood_labels, ood_scores)) ``` ```python # 0.7348483900000001 # 0.7189197489131008 # 0.6768591982944069 # 0.495362640026969 # 0.657779435 # 0.6424149416360756 ``` ```python mnist_sample = iter(mnist_loader).next() mnist_sample[0] = mnist_sample[0].to(DEVICE) print(mnist_sample[1]) sns.set_style("dark") show(make_grid(mnist_sample[0].cpu())) ``` ```python net.eval() mnist_outputs = net(mnist_sample[0], True).max(1, keepdim=True)[1].detach().cpu().numpy() for _ in range(99): mnist_outputs = np.append(mnist_outputs, net(mnist_sample[0], True).max(1, keepdim=True)[1].detach().cpu().numpy(), axis=1) sns.set_style("darkgrid") plt.subplots(5,1,figsize=(10,4)) for i in range(5): plt.subplot(5,1,i+1) plt.ylim(0,100) plt.xlabel("Categories") plt.xticks(range(10), ["Top", "Trouser", "Pullover", "Dress", "Coat", "Sandal", "Shirt", "Sneaker", "Bag", "Ankle Boot"]) plt.ylabel("Count") plt.yticks(range(50,101,50)) plt.hist(mnist_outputs[i], np.arange(-0.5, 10, 1)) ``` ```python %load_ext watermark %watermark --updated --datename --python --machine --watermark -p torch,numpy,matplotlib,tensorboardX,torchvision,seaborn ```
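One small sanity check worth keeping around (our addition, with arbitrary test values): the hand-written log-density used by the `Gaussian` class above should agree with PyTorch's built-in `Normal` distribution.

```python
import math
import torch

mu = torch.tensor([0.1, -0.3, 0.7])
rho = torch.tensor([-5.0, -4.5, -4.0])
sigma = torch.log1p(torch.exp(rho))   # sigma = ln(1 + exp(rho)), as in the Gaussian class
x = mu + sigma * torch.randn(3)       # one reparameterized sample

# Same expression as Gaussian.log_prob
manual = (-math.log(math.sqrt(2 * math.pi))
          - torch.log(sigma)
          - ((x - mu) ** 2) / (2 * sigma ** 2)).sum()
reference = torch.distributions.Normal(mu, sigma).log_prob(x).sum()
print(torch.allclose(manual, reference))  # expected: True
```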
61073acbd5cc447c0a6dcc612edf8628509f757e
34,069
ipynb
Jupyter Notebook
notebooks/Weight Uncertainty in Neural Networks.ipynb
MorganeAyle/SNIP-it
df2bf44d6d3f7e4ea7733242a79c916735a7b49e
[ "MIT" ]
null
null
null
notebooks/Weight Uncertainty in Neural Networks.ipynb
MorganeAyle/SNIP-it
df2bf44d6d3f7e4ea7733242a79c916735a7b49e
[ "MIT" ]
null
null
null
notebooks/Weight Uncertainty in Neural Networks.ipynb
MorganeAyle/SNIP-it
df2bf44d6d3f7e4ea7733242a79c916735a7b49e
[ "MIT" ]
null
null
null
30.229814
169
0.540638
true
4,874
Qwen/Qwen-72B
1. YES 2. YES
0.887205
0.841826
0.746872
__label__eng_Latn
0.240271
0.573565
# Controlled oscillator The controlled oscillator is an oscillator with an extra input that controls the frequency of the oscillation. To implement a basic oscillator, we would use a neural ensemble of two dimensions that has the following dynamics: $$ \dot{x} = \begin{bmatrix} 0 & - \omega \\ \omega & 0 \end{bmatrix} x $$ where the frequency of oscillation is $\omega \over {2 \pi}$ Hz. We need the neurons to represent three variables, $x_0$, $x_1$, and $\omega$. According to the dynamics principle of the NEF, in order to implement some particular dynamics, we need to convert this dynamics equation into a feedback function: $$ \begin{align} \dot{x} &= f(x) \\ &\implies f_{feedback}(x) = x + \tau f(x) \end{align} $$ where $\tau$ is the post-synaptic time constant of the feedback connection. In this case, the feedback function to be computed is $$ \begin{align} f_{feedback}(x) &= x + \tau \begin{bmatrix} 0 & - \omega \\ \omega & 0 \end{bmatrix} x \\ &= \begin{bmatrix} x_0 - \tau \cdot \omega \cdot x_1 \\ x_1 + \tau \cdot \omega \cdot x_0 \end{bmatrix} \end{align} $$ Since the neural ensemble represents all three variables but the dynamics only affects the first two ($x_0$, $x_1$), we need the feedback function to leave that last variable unchanged. We do this by adding a zero to the feedback function. $$ f_{feedback}(x) = \begin{bmatrix} x_0 - \tau \cdot \omega \cdot x_1 \\ x_1 + \tau \cdot \omega \cdot x_0 \\ 0 \end{bmatrix} $$ We also generally want to keep the ranges of the variables represented within an ensemble approximately the same. In this case, if $x_0$ and $x_1$ are between -1 and 1, $\omega$ will also be between -1 and 1, giving a frequency range of $-1 \over {2 \pi}$ to $1 \over {2 \pi}$. To increase this range, we introduce a scaling factor for $\omega$ called $\omega_{max}$. 
$$ f_{feedback}(x) = \begin{bmatrix} x_0 - \tau \cdot \omega \cdot \omega_{max} \cdot x_1 \\ x_1 + \tau \cdot \omega \cdot \omega_{max} \cdot x_0 \\ 0 \end{bmatrix} $$ ```python %matplotlib inline import matplotlib.pyplot as plt import nengo from nengo.processes import Piecewise ``` ## Step 1: Create the network ```python tau = 0.1 # Post-synaptic time constant for feedback w_max = 10 # Maximum frequency is w_max/(2*pi) model = nengo.Network(label='Controlled Oscillator') with model: # The ensemble for the oscillator oscillator = nengo.Ensemble(500, dimensions=3, radius=1.7) # The feedback connection def feedback(x): x0, x1, w = x # These are the three variables stored in the ensemble return x0 + w * w_max * tau * x1, x1 - w * w_max * tau * x0, 0 nengo.Connection(oscillator, oscillator, function=feedback, synapse=tau) # The ensemble for controlling the speed of oscillation frequency = nengo.Ensemble(100, dimensions=1) nengo.Connection(frequency, oscillator[2]) ``` ## Step 2: Create the input ```python with model: # We need a quick input at the beginning to start the oscillator initial = nengo.Node(Piecewise({0: [1, 0, 0], 0.15: [0, 0, 0]})) nengo.Connection(initial, oscillator) # Vary the speed over time input_frequency = nengo.Node( Piecewise({ 0: 1, 1: 0.5, 2: 0, 3: -0.5, 4: -1 })) nengo.Connection(input_frequency, frequency) ``` ## Step 3: Add Probes ```python with model: # Indicate which values to record oscillator_probe = nengo.Probe(oscillator, synapse=0.03) ``` ## Step 4: Run the Model ```python with nengo.Simulator(model) as sim: sim.run(5) ``` ## Step 5: Plot the Results ```python plt.figure() plt.plot(sim.trange(), sim.data[oscillator_probe]) plt.xlabel('Time (s)') plt.legend(['$x_0$', '$x_1$', r'$\omega$']); ```
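For reference, and independent of Nengo, the ideal dynamics that the ensemble is approximating can be Euler-integrated directly in NumPy. This sketch is our addition; the step size, initial state, and the piecewise control signal are arbitrary choices mirroring the inputs above, and simple Euler integration slightly inflates the amplitude over time.

```python
import numpy as np
import matplotlib.pyplot as plt

dt = 0.001
t = np.arange(0, 5, dt)
w_max = 10
# Piecewise-constant control input, scaled by w_max (mirrors input_frequency above)
w = w_max * np.piecewise(t, [t < 1, (t >= 1) & (t < 2), (t >= 2) & (t < 3),
                             (t >= 3) & (t < 4), t >= 4], [1, 0.5, 0, -0.5, -1])

# Euler integration of dx/dt = [[0, -w], [w, 0]] @ x, following the sign
# convention of the dynamics equation derived above
x = np.zeros((len(t), 2))
x[0] = [1.0, 0.0]
for i in range(1, len(t)):
    x0, x1 = x[i - 1]
    x[i] = [x0 - dt * w[i - 1] * x1, x1 + dt * w[i - 1] * x0]

plt.plot(t, x)
plt.xlabel('Time (s)')
plt.legend(['$x_0$', '$x_1$'])
plt.show()
```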
8e57ed466a9d3feef61f5f036573451f75943f6d
6,242
ipynb
Jupyter Notebook
docs/examples/dynamics/controlled_oscillator.ipynb
pedrombmachado/nengo
abc85e1a75ce2f980e19eef195d98081f95efd28
[ "BSD-2-Clause" ]
null
null
null
docs/examples/dynamics/controlled_oscillator.ipynb
pedrombmachado/nengo
abc85e1a75ce2f980e19eef195d98081f95efd28
[ "BSD-2-Clause" ]
null
null
null
docs/examples/dynamics/controlled_oscillator.ipynb
pedrombmachado/nengo
abc85e1a75ce2f980e19eef195d98081f95efd28
[ "BSD-2-Clause" ]
null
null
null
27.619469
86
0.512816
true
1,143
Qwen/Qwen-72B
1. YES 2. YES
0.90053
0.833325
0.750434
__label__eng_Latn
0.967176
0.581841
--- # What are the marginal and the conditional probabilities? --- In this script, we show the 1-D marginal and 1-D conditional probability density functions (PDF) for a 2-D gaussian PDF. In matrix form, the equation for the 2-D gaussian PDF reads like this: <blockquote> $P(\bf{x}) = \frac{1}{2\pi |\Sigma|^{0.5}} \exp{[-\frac{1}{2}(\bf{x}-\bf{\mu})^\top \Sigma^{-1} (\bf{x}-\bf{\mu})]}$ </blockquote> where <blockquote> $ \begin{align} \bf{x} &= [x_{1} x_{2}]^\top \\ \bf{\mu} &= [\mu_{1} \mu_{2}]^\top \\ \Sigma &= \begin{pmatrix} \sigma_{x_{1}}^2 & \rho\sigma_{x_{1}}\sigma_{x_{2}} \\ \rho\sigma_{x_{1}}\sigma_{x_{2}} & \sigma_{x_{2}}^2 \end{pmatrix} \end{align} $ </blockquote> where $\rho$ is the correlation factor between the $x_{1}$ and $x_{2}$ data. ## A useful trick to generate a covariance matrix $\Sigma$ with desired features Instead of guessing the values of $\sigma_{x_{1}}$, $\sigma_{x_{2}}$ and $\rho$ to create a covariance matrix $\Sigma$ for visualization purposes, we can design it with some desired characteristics. Those are the principal axis variances $\sigma_{1}^2$ and $\sigma_{2}^2$, and the rotation angle $\theta$. First, we generate the covariance matrix of an uncorrelated PDF whose principal axes are oriented along the $x_{1}$ and $x_{2}$ axes. <blockquote> $\Sigma_{PA} = \begin{pmatrix} \sigma_{1}^2 & 0 \\ 0 & \sigma_{2}^2 \end{pmatrix}$ </blockquote> Next, we generate the rotation matrix for the angle $\theta$: <blockquote> $R = \begin{pmatrix} \cos{\theta} & -\sin{\theta} \\ \sin{\theta} & \cos{\theta} \end{pmatrix}$ </blockquote> The covariance matrix we are looking for is <blockquote> $\Sigma = R \Sigma_{PA} R^\top $ </blockquote> Hence, we only have to specify the values of $\sigma_{1}$ and $\sigma_{2}$ along the principal axes and the rotation angle $\theta$. The correlation coefficient $\rho$ depends on the value of $\theta$ (assuming $\sigma_{1} > \sigma_{2}$): <blockquote> $\rho>0$ when $\theta>0$ </blockquote> <blockquote> $\rho<0$ when $\theta<0$ </blockquote> <br> N.B. For ease of reading, we use the variables x and y below instead of x1 and x2. The final results are shown with x1 and x2. ```python print(__doc__) # Author: Pierre Gravel <pierre.gravel@iid.ulaval.ca> # License: BSD %matplotlib inline import numpy as np import matplotlib.pyplot as plt from matplotlib import cm from scipy import stats from scipy.stats import multivariate_normal import seaborn as sns sns.set(color_codes=True) # Used for reproducibility of the results np.random.seed(43) ``` Automatically created module for IPython interactive environment Here are the characteristics of the 2-D PDF we want to generate: ```python # Origin of the PDF Mu = np.array([0.5, 0.5]) # Individual standard deviations along the principal axes sigma = np.array([0.25, 0.05]) # Rotation angle for a negative correlation theta = -45. ``` Generate the covariance matrix $\Sigma$: ```python theta = np.radians(theta) c, s = np.cos(theta), np.sin(theta) # Rotation matrix R = np.array(((c, -s), (s, c))) # Covariance matrix for a PDF with its principal axes oriented along the x and y directions Sigma = np.array([[sigma[0]**2, 0.],[0., sigma[1]**2]]) # Covariance matrix after rotation Sigma = R.dot( Sigma.dot(R.T) ) ``` Generate a spatial grid where the various PDFs will be evaluated locally. ```python x_min, x_max = 0., 1. y_min, y_max = 0., 1. nx, ny = 60, 60 x = np.linspace(x_min, x_max, nx) y = np.linspace(y_min, y_max, ny) xx, yy = np.meshgrid(x,y) pos = np.dstack((xx, yy)) ``` Get the marginal distributions $P(x)$ and $P(y)$. 
Each one is the probability of an event irrespective of the outcome of the other variable. ```python # Generator for the 2-D gaussian PDF model = multivariate_normal(Mu, Sigma) pdf = model.pdf(pos) # Project P(x,y) on the x axis to get the marginal 1-D distribution P(x) pdf_x = multivariate_normal.pdf(x, mean=Mu[0], cov=Sigma[0,0]) # Project P(x,y) on the y axis to get the marginal 1-D distribution P(y) pdf_y = multivariate_normal.pdf(y, mean=Mu[1], cov=Sigma[1,1]) ``` Get the conditional 1-D distributions $P(x|y=yc)$ and $P(y|x=xc)$. Each one is the probability of one event occurring in the presence of a second event. ```python # Vertical slice position xc = 0.3 # Horizontal slice position yc = 0.7 # Make a vertical slice of P(x,y) at x=xc to get the conditional 1-D distribution P(y|x=xc) P = np.empty([ny,2]) P[:,0] = xc P[:,1] = y pdf_xc = multivariate_normal.pdf(P, mean=Mu, cov=Sigma) # Make a horizontal slice of P(x,y) at y=yc to get the conditional 1-D distribution P(x|y=yc) P = np.empty([nx,2]) P[:,0] = x P[:,1] = yc pdf_yc = multivariate_normal.pdf(P, mean=Mu, cov=Sigma) ``` Show the various PDFs. The color of each conditional 1-D distribution corresponds to its slice through the 2-D PDF. ```python fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2,2,figsize=(10,10)) # Display the 2-D PDF P(x,y) cset = ax1.contourf(xx, yy, pdf, zdir='z', cmap=cm.viridis, levels=7) ax1.plot([x_min, x_max], [yc, yc], linewidth=2.0, color='red') ax1.text(0.79, 0.72,'$x_{2} = 0.7$', fontsize=16, color='white') ax1.plot([xc, xc], [y_min, y_max], linewidth=2.0, color='green') ax1.text(0.31, 0.02,'$x_{1} = 0.3$', fontsize=16, color='white') ax1.set_xlabel('$x_{1}$',fontsize=18) ax1.set_ylabel('$x_{2}$',rotation=0,fontsize=18) ax1.xaxis.set_label_coords(0.5, -0.08) ax1.yaxis.set_label_coords(-0.08, 0.5) # Display the 1-D conditional PDF P(y|x=xc) ax2.plot(pdf_y, y, label='$P(x_{2})$', linewidth=3.0, color='black') ax2.plot(pdf_xc, y, label='$P(x_{2}|x_{1}=0.3)$', linewidth=2.0, color='green') ax2.set_ylabel('$x_{2}$',rotation=0,fontsize=18) ax2.set_xlabel('Probability Density',fontsize=18) ax2.xaxis.set_label_coords(0.5, -0.08) ax2.yaxis.set_label_coords(-0.08, 0.5) ax2.legend(loc='best',fontsize=14) # Display the 1-D conditional PDF P(x|y=yc) ax3.plot(x, pdf_x, label='$P(x_{1})$', linewidth=3.0, color='black') ax3.plot(x, pdf_yc, label='$P(x_{1}|x_{2}=0.7)$', linewidth=2.0, color='red') ax3.set_xlabel('$x_{1}$',fontsize=18) ax3.set_ylabel('Probability Density',fontsize=18) ax3.xaxis.set_label_coords(0.5, -0.08) ax3.yaxis.set_label_coords(-0.08, 0.5) ax3.legend(loc='best',fontsize=14) # Hide the unused fourth panel ax4.axis('off') fig.tight_layout() plt.savefig('Example_of_2D_PDF_with_conditional_1D.png') plt.savefig('Example_of_2D_PDF_with_conditional_1D.pdf') plt.show() ``` ```python ```
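One caveat worth adding (this note and the snippet below are not part of the original script): the slices `pdf_xc` and `pdf_yc` computed above are values of the *joint* density along a line, so they are only proportional to the conditional densities. Dividing by the marginal evaluated at the slice position turns them into properly normalized conditionals:

```python
# P(x2 | x1 = xc) = P(x1 = xc, x2) / P(x1 = xc), and similarly for the other slice.
# The denominators are the 1-D marginals evaluated at the slice positions.
p_x1_at_xc = multivariate_normal.pdf(xc, mean=Mu[0], cov=Sigma[0, 0])
p_x2_at_yc = multivariate_normal.pdf(yc, mean=Mu[1], cov=Sigma[1, 1])

pdf_y_given_xc = pdf_xc / p_x1_at_xc   # true conditional P(x2 | x1 = xc) on the y grid
pdf_x_given_yc = pdf_yc / p_x2_at_yc   # true conditional P(x1 | x2 = yc) on the x grid

# Each conditional should integrate to approximately 1 over its grid.
print(np.trapz(pdf_y_given_xc, y), np.trapz(pdf_x_given_yc, x))
```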
6392f9b66e7d0733bbc0d33d40004d0b38bfaa97
85,517
ipynb
Jupyter Notebook
generate_example_of_2D_PDF_with_conditional_1D_PDF.ipynb
AstroPierre/Scripts-for-figures-courses-GIF-4101-GIF-7005
a38ad6f960cc6b8155fad00e4c4562f5e459f248
[ "BSD-2-Clause" ]
null
null
null
generate_example_of_2D_PDF_with_conditional_1D_PDF.ipynb
AstroPierre/Scripts-for-figures-courses-GIF-4101-GIF-7005
a38ad6f960cc6b8155fad00e4c4562f5e459f248
[ "BSD-2-Clause" ]
null
null
null
generate_example_of_2D_PDF_with_conditional_1D_PDF.ipynb
AstroPierre/Scripts-for-figures-courses-GIF-4101-GIF-7005
a38ad6f960cc6b8155fad00e4c4562f5e459f248
[ "BSD-2-Clause" ]
null
null
null
261.519878
75,168
0.916239
true
2,060
Qwen/Qwen-72B
1. YES 2. YES
0.951142
0.885631
0.842361
__label__eng_Latn
0.754884
0.795421
# BASIC CONTROLLERS This notebook describes the proportional, integral, and differential controllers. # Preliminaries ```python !pip install -q control !pip install -q tellurium !pip install -q controlSBML import control import controlSBML as ctl from IPython.display import HTML, Math import numpy as np import pandas as pd import matplotlib.pyplot as plt import sympy import tellurium as te ``` ```python TIMES = ctl.makeSimulationTimes(0, 5, 500) ``` # Antimony Model ```python # Constants CONSTANT_DCT = {"k1": 1, "k2": 2, "k3": 3, "k4": 4} s = sympy.Symbol("s") REF = 10 ``` ```python MODEL = """ $S1 -> S2; k1*$S1 J1: S2 -> S3; k2*S2 J2: S3 -> S2; k3*S3 J3: S2 -> ; k4*S2 k1 = 1 k2 = 2 k3 = 3 k4 = 4 $S1 = 10 S2 = 0 S3 = 0 S4 = 0 """ RR = te.loada(MODEL) RR.simulate() RR.plot() ``` # PID Controllers The controllers considered here are systems that input the control error and produce a control signal used to regulate the system under control. The input signal is $e(t)$ and the output signal is $u(t)$. A **proportional controller** has the parameter $k_P$. This controller outputs a signal that is proportional to the control error. That is, $u(t) = k_P e(t)$. The transfer function for this controller is $H_{P} (s) = k_P$. An **integral controller** has the parameter $k_I$. This controller outputs a signal that is proportional to the *integral* of the control error. That is, $u(t) = k_I \int_0^{t} e(\tau) d \tau$. The transfer function for this controller is $H_{I} (s) = \frac{k_I}{s}$. A **differential controller** has the parameter $k_D$. This controller outputs a signal that is proportional to the derivative of the control error. That is, $u(t) = k_D \frac{de(t)}{d t}$. The transfer function for this controller is $H_{D} (s) = s k_D$. These controllers can be used in combination. For example, a PID controller produces $u(t) = k_P e(t) + k_I \int_0^{t} e(\tau) d \tau + k_D \frac{d e(t)}{d t}$, and its transfer function is $H_{PID}(s) = k_P + \frac{k_I}{s} + s k_D$. # Analysis in Discrete Time To improve our understanding of the different types of control, we'll do an analysis in discrete time. Let $n$ index the time instants. Consider an input signal $e_n$ and an output signal $u_n$. Here's how the controllers work: * **Proportional control**. $u_n = k_P e_n$ * **Integral control**. $u_n = k_I \sum_{i=0}^n e_i$ * **Differential control**. $u_n = k_D (e_n - e_{n-1})$ Now let's create an evaluation environment. ## Analysis Codes ```python def evaluateController(e_vec, kp=0, ki=0, kd=0, is_plot=True, **kwargs): """ Plots the output of the controller for the signal. 
Parameters ---------- e_vec: list-float kp: float ki: float kd: float is_plot: bool kwargs: dict plot options Returns ------- np.array """ u_vec = [] e_last = e_vec[0] e_sum = e_vec[0] for e_val in e_vec[1:]: e_sum += e_val # u_val = kp*e_val u_val += ki*e_sum u_val += kd*(e_val - e_last) u_vec.append(u_val) # e_last = e_val # if is_plot: times = range(len(u_vec)) y_vec = e_vec[0:len(times)] df = pd.DataFrame({ "time": times, "e(t)": e_vec[0:len(times)], "u(t)": u_vec }) ts = ctl.Timeseries(df) title = "kp=%2.2f ki=%2.2f kd=%2.2f" % (kp, ki, kd) kwargs["title"] = title ctl.plotOneTS(ts, **kwargs) if False: if ax is None: _, ax = plt.subplots(1) ax.plot(range(len(u_vec)), u_vec, color="blue") ax.plot(times, y_vec, linestyle="--", color="black") ax.legend(["u(t)", "e(t)"]) ax.set_xlabel("time") ax.set_ylim([-1.5, 1.5]) title = "kp=%2.2f ki=%2.2f kd=%2.2f" % (kp, ki, kd) ax.set_title(title) return np.array(u_vec) # Tests e_vec = np.repeat(1, 10) u_vec = evaluateController(e_vec, kp=0, ki=0, kd=1, is_plot=False) assert(u_vec[0] == 0) print("OK!") ``` OK! ## Analysis What signals should we consider for $e(t)$? ```python step_input = np.repeat(1, 10) step_input = np.sin(0.1*np.array(range(100))) _ = evaluateController(step_input, kp=2, ylim=[-2,2], figsize=(5,5)) _ = evaluateController(step_input, ki=1, ylim=[-2,2], figsize=(5,5)) _ = evaluateController(step_input, kd=1, ylim=[-2,2], figsize=(5,5)) ``` # Analysis in Continuous Time We can analyze controllers by looking at their transfer functions by considering poles, DC gain, and step respoinse. $H_{PID} (s) = H_P(s) + H_I(s) + H_D(s) = k_P + \frac{k_I}{s} + s k_D$ ## Analysis Codes ```python def plotPIDStepResponse(kp=1, ki=0, kd=0, is_plot=True): """ Plots the step response of the PID controller. Parameters ---------- kp: float ki: float kd: float is_plot: bool Returns ------- control.TransferFunction """ tf = control.TransferFunction([kp], [1]) \ + control.TransferFunction([ki], [1, 0]) \ + control.TransferFunction([kd, 0], [1]) if is_plot: if len(tf.num[0][0]) > len(tf.den[0][0]): print("Improper transfer function. Cannot simulate.") else: result = control.step_response(tf) plt.plot(result.t.flatten(), result.y.flatten()) return tf # Tests tf = plotPIDStepResponse(kp=1, ki=0, kd=0, is_plot=False) assert(tf.dcgain() == 1) print("OK!") ``` OK! ```python # Find the DC gains for different variations of controllers print("kp=1: %2.2f" % plotPIDStepResponse(kp=1, ki=0, kd=0, is_plot=False).dcgain()) print("ki=1: %2.2f" % plotPIDStepResponse(kp=0, ki=1, kd=0, is_plot=False).dcgain()) ``` kp=1: 1.00 ki=1: inf ## Controller in Closed Loop ### Extending ``plotTF`` to PID Controllers ```python def plotTFs(Gs, kp=0, ki=0, kd=0, times=TIMES, ylim=None, title=None, is_plot=True): """ Constructs the transfer functions for the proportional controller, and filter. Calculates the transfer functions HRYs, HREs, HNYs, HDYs and plots them. 
Parameters ---------- Gs: control.TransferFunction kp: float ki: float kd: float times: list-float ylim: (float, float) limits of y-values title: str Returns ------- dct key: name of transfer function value: control.TransferFunction """ Cs = control.TransferFunction([kp], [1]) + control.TransferFunction([ki], [1, 0]) \ + control.TransferFunction([kd, 0], [1]) Fs = 1 denom = 1 + Cs*Gs*Fs # Construct the transfer functions tf_dct = { "HRYs": Cs*Gs/denom, "HREs": 1/denom, "HNYs": -Fs/denom, "HDYs": Cs/denom, } # Construct the plots _, ax = plt.subplots(1) for tf in tf_dct.values(): result = control.forced_response(tf, T=times, U=1) plt.plot(result.t.flatten(), result.y.flatten()) # Refine plots plt.legend(list(tf_dct.keys())) xmax = max(result.t.flatten()) plt.plot([0, xmax], [0, 0], linestyle="--", color="black") plt.plot([0, xmax], [1, 1], linestyle="--", color="grey") plt.ylim([-5, 5]) title = "kp=%2.2f ki=%2.2f kd=%2.2f" % (kp, ki, kd) plt.title(title) if not is_plot: plt.close() return tf_dct # Tests Gs = control.TransferFunction([2], [1, 3]) dct = plotTFs(Gs, kp=10, ylim=[0, 3], title="Example", is_plot=False) assert(len(dct) == 4) assert("TransferFunction" in str(type(dct["HRYs"]))) print("OK!") ``` OK! ```python dct["HRYs"] ``` $$\frac{20 s + 60}{s^2 + 26 s + 69}$$ ### Plots ```python for kp in [1, 5, 10, 20]: for ki in [1, 5, 10]: title = "kp: %2.1f ki: %2.1f" % (kp, ki) _ = plotTFs(Gs, kp=kp, ki=ki, title=title, ylim=[-1, 3]) ``` # Testbed ## Controller Factory **We need to extend ``makeController`` to PID** ```python # Extend to PID def makeController(name, kp=0, ki=0, kd=0): """ Creates a proportional controller as a NonlinearIOSystem with input "in" and output "out". Parameters ---------- name: str Name of the system kp: float ki: float kd: float Returns ------- control.NonlinearIOSystem """ def updfcn(time, x_vec, u_vec, params): """ Calculates the derivative of state. Parameters ---------- x_vec: array of dimension 2 0: last time 1: last input 2: sum of values u_vec: error signal input Returns ------- dtime: float derivative of time dinput: float derivative of input dsum: float derivative of sum """ try: u_val = u_vec[0] except: u_val = u_vec # dtime = time - x_vec[0] if np.isclose(dtime, 0): dinput = 0 dsum = 0 else: dinput = (u_val - x_vec[1])/dtime dsum = u_val/dtime return dtime, dinput, dsum def outfcn(_, x_vec, u_val, ___): # u: float (error signal) try: u_val = u_vec[0] except: u_val = u_vec return kp*(u_val) + ki*x_vec[2] + kd*(u_val - x_vec[1]) # return control.NonlinearIOSystem( updfcn, outfcn, inputs=['in'], outputs=['out'], states=["dtime", "dinput", "dsum"], name=name) # Tests controller = makeController("controller", kp=1, ki=1, kd=1) times = ctl.makeSimulationTimes() U = np.repeat(1, len(times)) U[0] = 1 result = control.input_output_response(controller, T=times, U=U) #trues = [r == kp*( t) for t, r in zip(result.t, result.outputs)] #assert(all(trues)) print("OK!") ``` OK! 
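Before wiring the controller into a closed loop, a quick cross-check is possible (this cell is an addition, not part of the original lecture material): with $k_D = 0$ the PID reduces to a proper PI transfer function, whose response to a unit-step error has the closed form $u(t) = k_P + k_I t$, so the linear model can be compared against that directly.

```python
import numpy as np
import control

kp, ki = 1, 1
pi_tf = control.TransferFunction([kp, ki], [1, 0])   # kp + ki/s

times = np.linspace(0, 1, 101)
result = control.forced_response(pi_tf, T=times, U=np.ones_like(times))

# For a unit-step error, the ideal PI output is kp + ki*t.
assert np.allclose(result.y.flatten(), kp + ki * times, atol=1e-2)
print("OK!")
```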
## Closed Loop System ```python # Elements of the system factory = ctl.IOSystemFactory() kp = 100 ki = 0 kd = 0 # Create the elements of the feedback loop noise = factory.makeSinusoid("noise", 0, 20) disturbance = factory.makeSinusoid("disturbance", 0, 2) ctlsb = ctl.ControlSBML(MODEL, input_names=["S2"], output_names=["S3"]) system = ctlsb.makeNonlinearIOSystem("system") controller = factory.makePIDController("controller", kp=kp, ki=ki, kd=kd) fltr = factory.makePassthru("fltr") sum_Y_N = factory.makeAdder("sum_Y_N") sum_U_D = factory.makeAdder("sum_U_D") sum_R_F = factory.makeAdder("sum_R_F") ``` ```python # Create the closed loop system closed_loop = control.interconnect( [noise, disturbance, sum_Y_N, sum_R_F, sum_U_D, system, fltr, controller ], connections=[ ['controller.in', 'sum_R_F.out'], # e(t) ['sum_U_D.in1', 'controller.out'], # u(t) ['sum_U_D.in2', 'disturbance.out'], # d(t) ['system.S2', 'sum_U_D.out'], ['sum_Y_N.in1', 'system.S3'], # y(t) ['sum_Y_N.in2', 'noise.out'], # n(t) ['fltr.in', 'sum_Y_N.out'], ['sum_R_F.in1', '-fltr.out'], ], inplist=["sum_R_F.in2"], outlist=["sum_R_F.in2", "sum_Y_N.out", 'system.S2', 'system.S3'], ) ``` ## Plots We can use the transfer functions to guide our choice of parameters. Note that we it may be necessary to divide by $s$ to calculate the DC gain. ```python times = ctl.makeSimulationTimes(0, 50, 100) result = control.input_output_response(closed_loop, T=times, U=10) plt.plot(result.t, result.outputs[0].flatten()) plt.plot(result.t, result.outputs[1].flatten()) #plt.plot(result.t, result.outputs[2].flatten()) #plt.plot(result.t, result.outputs[3].flatten()) plt.ylim([5, 15]) legends = ["input", "output"] plt.legend(legends) ``` ## Transfer Function Analysis ```python # Linearized analysis print(kp, ki, kd) Gs = ctlsb.makeTransferFunction() dct = plotTFs(Gs, kp=kp, ki=ki, kd=kd, title=title, ylim=[-1, 3], is_plot=False) dct["HRYs"] ``` 100 0 0 $$\frac{200 s + 600}{s^2 + 206 s + 609}$$ ## Evaluation Environment We want to repeatedly evaluate different controller parameters. This is cumbersome to do if we have to rerun multiple cells and reset various parameters. It's much more efficient to create an evaluation function. ```python def runTestbed(model=MODEL, input_name="S2", output_name="S3", kp=0, ki=0, kd=0, noise_amp=0, disturbance_amp=0): """ Run the testbed and plot the results. Parameters ---------- model: str System under control input_name: str output_name: str kp: float ki: float kd: float noise_amp: float disturbance_amp: float Results ------- control.InterconnectedSystem """ ``` ```python def makeHRY(model=MODEL, input_name="S2", output_name="S3", time=0, kp=0, ki=0, kd=0): """ Calculates the transfer function from the reference input to the output. Parameters ---------- model: str input_name: str output_name: str time: float kp: float ki: float kd: float Returns ------- control.TransferFunction """ ctlsb = ctl.ControlSBML(MODEL, input_names=[input_name], output_names=[output_name]) Gs = ctlsb.makeTransferFunction(time=time) dct = plotTFs(Gs, kp=kp, ki=ki, kd=kd, is_plot=False) return dct["HRYs"] # TESTS tf = makeHRY() assert(tf.dcgain() == 0) ``` # More On Filters How do filters help with sinusoidal noise and disturbances? 
```python # Create the testbed elements factory = ctl.IOSystemFactory() fltr = factory.makeFilter("fltr", -50) noise = factory.makeSinusoid("noise", 1, 20) ``` ```python # Create the testbed test_bed = control.interconnect( [noise, fltr ], connections=[ ['fltr.in', 'noise.out'], # the filter input is the noise signal ], outlist=["fltr.out"], ) ``` ```python # Simulate it times = ctl.makeSimulationTimes(0, 20, 500) result = control.input_output_response(test_bed, T=times) plt.plot(result.t, result.outputs.flatten()) plt.ylim([-1, 1]) ```
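As a rough closing calculation (added here, and it rests on assumptions: that `makeFilter("fltr", -50)` behaves like a unity-DC-gain first-order low-pass with its pole at $s = -50$, and that the sinusoid's rate argument is in rad/s), the expected steady-state amplitude of the filtered noise is $|H(j\omega)| = 50/\sqrt{\omega^2 + 50^2}$:

```python
import numpy as np

pole = 50.0     # assumed pole magnitude of the filter created above
w_noise = 20.0  # assumed angular frequency of the "noise" sinusoid
gain = pole / np.sqrt(w_noise**2 + pole**2)
print("expected amplitude of the filtered unit sinusoid:", round(gain, 3))  # about 0.93
```

In other words, a pole this far above the noise frequency barely attenuates it; a lower cutoff (or a higher-order filter) would be needed for meaningful suppression.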
a5b5b2661d68197a9bb5aa8266269420fea20084
359,444
ipynb
Jupyter Notebook
Lecture_19_20-Basic-Controllers/Basic-Controllers.ipynb
joseph-hellerstein/advanced-controls-lectures
dc43f6c3517616da3b0ea7c93192d911414ee202
[ "MIT" ]
null
null
null
Lecture_19_20-Basic-Controllers/Basic-Controllers.ipynb
joseph-hellerstein/advanced-controls-lectures
dc43f6c3517616da3b0ea7c93192d911414ee202
[ "MIT" ]
null
null
null
Lecture_19_20-Basic-Controllers/Basic-Controllers.ipynb
joseph-hellerstein/advanced-controls-lectures
dc43f6c3517616da3b0ea7c93192d911414ee202
[ "MIT" ]
null
null
null
294.867925
57,168
0.923256
true
4,221
Qwen/Qwen-72B
1. YES 2. YES
0.824462
0.819893
0.675971
__label__eng_Latn
0.535017
0.408838
# Discovering new Chinese words with word frequency, mutual information, and information entropy **New word discovery** is an important step in Chinese natural language processing. A **new word** is only "new" relative to something "old": it is a relative concept, and new words exist relative to a domain (finance, medicine) and relative to a time (past, present). [Text mining](https://zh.wikipedia.org/wiki/文本挖掘) starts by [segmenting](https://zh.wikipedia.org/wiki/中文自动分词) the text into words, and because general-purpose segmenters are not accurate enough, a **custom dictionary** usually has to be added to recover the lost precision; discovering new words and adding them to that dictionary is therefore an important part of text mining. The definition of a [**word**](https://zh.wikipedia.org/wiki/單詞), as given by Wikipedia, is the following: >In linguistics, a **word** (also called 词, 词语 or 单字; the corresponding English term is "word") is the smallest unit that can be used independently and that carries semantic or pragmatic content (i.e. a literal or a practical meaning). A collection of words is called a vocabulary or a terminology; for example, all Chinese words together form the "Chinese vocabulary", and the specialized words of medicine form the "medical terminology". A dictionary is a reference book that provides the pronunciation, definitions, example sentences, usage and so on of words; some dictionaries only record the vocabulary of a special domain. From a purely semantic point of view, the French for "apple" is "pomme", while the French for "potato" is "pomme de terre"; under the definition above, "pomme de terre" would be split beyond recognition, yet it is the smallest unit that expresses the meaning "potato". The problem appears even more often in institution names: "Paris 3" is the short name of the University of Paris III, and if "Paris" and "3" are treated separately as a place name and a number, the pair can no longer express that meaning. Chinese has similar examples: in "北京大学" (Peking University), both "北京" (Beijing) and "大学" (university) can be used on their own as minimal units, meaning "a place name" and "a university", but segmented that way the phrase reads as "a university in Beijing", so "北京大学" as a whole is the smallest unit that expresses its meaning. A few years ago there was a film called 《夏洛特烦恼》; should the title be read as "夏洛特 烦恼" ("Charlotte's trouble") or as "夏洛 特 烦恼" ("Xia Luo is really troubled")? That is a classic segmentation problem. From a pragmatic point of view, however, these problems can apparently be resolved: we know that in everyday life "pomme de terre" is the potato rather than "an apple in the ground", everyone studying in Paris knows "Paris 3", and when we mention "北京大学" we mean that famous university; likewise, viewers who have seen the film can easily tell that its title should be read as "夏洛 特 烦恼". As for how to discover new words, the article 《[互联网时代的社会语言学:基于SNS的文本数据挖掘](http://www.matrix67.com/blog/archives/5044)》 ("Sociolinguistics in the Internet age: text mining based on SNS data") proposes computing, for every text string, the **cohesion** of its **text fragments** and the **freedom** with which the string is used with respect to its context, and classifying strings into words and non-words with thresholds on these two scores. The original article explains cohesion and freedom with very accessible examples, so here we only give the computations. The method still has many aspects that need optimization, to be tuned gradually in later practice. ## Text fragments The most common way to obtain **text fragments** is the [n-gram](https://zh.wikipedia.org/wiki/N元语法) approach, which splits the text into fragments of length n. As the data structure we use a Trie tree: it is simple to implement, and using Python dictionaries as hash indexes makes the implementation rather elegant. Its one drawback is that all the data is kept in memory, which makes the memory footprint very large; to use this in production another approach, such as disk-based retrieval, would be needed. <a href="https://upload.wikimedia.org/wikipedia/commons/b/be/Trie_example.svg " target="_blank"></a> ```python class TrieNode(object): def __init__(self, frequence=0, children_frequence=0, parent=None): self.parent = parent self.frequence = frequence self.children = {} self.children_frequence = children_frequence def insert(self, char): self.children_frequence += 1 self.children[char] = self.children.get(char, TrieNode(parent=self)) self.children[char].frequence += 1 return self.children[char] def fetch(self, char): return self.children[char] class TrieTree(object): def __init__(self, size=6): self._root = TrieNode() self.size = size def get_root(self): return self._root def insert(self, chunk): node = self._root for char in chunk: node = node.insert(char) if len(chunk) < self.size: # add the symbol "EOS" at the end of a short chunk node.insert("EOS") def fetch(self, chunk): node = self._root for char in chunk: node = node.fetch(char) return node ``` On top of the Trie structure I added a few fields, parent, frequence and children_frequence; they are: - parent, the parent of the current node; for the root of the tree this parent is empty; - frequence, the number of times the current node occurs, which on the Trie is also the frequency of a text fragment; for example, for "中国", if the frequence of the node "国" (reached through "中") is 100, then the fragment "中国" also occurred 100 times, and this can be used for the final frequency filtering; - children_frequence, the sum of the "frequence" values of the children of the current node; continuing the example, if "中间" also occurred 99 times, then the children_frequence of the node "中" is 199. This construction makes the computation in the second part more convenient. For this task we need to build two Trie trees, representing the forward and the backward sets of character fragments. ## Freedom **Freedom** is measured with the [information entropy](https://zh.wikipedia.org/wiki/熵_(信息论)) of the characters adjacent to a text fragment on its left and on its right, formula [1]. The larger the entropy, the more unstable the relation between the fragment and its neighbouring characters, and therefore the more likely it is that the fragment is used as an independent unit. In formula [1], the $\mathrm{I}(x)$ after the first equality sign denotes the self-information of $x$. \begin{align} H(X) = \sum_{i} {\mathrm{P}(x_i)\,\mathrm{I}(x_i)} = -\sum_{i} {\mathrm{P}(x_i) \log \mathrm{P}(x_i)} [1] \end{align} ```python import math # needed by the entropy computation below (the import was missing in the original) def calc_entropy(chunks, ngram): """Compute the information entropy. Args: chunks: all text fragments of the data. ngram: the Trie tree. Return: word2entropy: a dict mapping each chunk to its entropy. """ def entropy(sample, total): """Entropy""" s = float(sample) t = float(total) result = - s/t * math.log(s/t) return result def parse(chunk, ngram): node = ngram.fetch(chunk) total = node.children_frequence return sum([entropy(sub_node.frequence, total) for sub_node in node.children.values()]) word2entropy = {} for chunk in chunks: word2entropy[chunk] = parse(chunk, ngram) return word2entropy ```
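A tiny worked instance of formula [1] (added for illustration; the counts are invented): suppose the fragment "中国" is followed twice by "人" and once by "的" in the corpus. Its right entropy is then

\begin{align} H = -\tfrac{2}{3}\log\tfrac{2}{3} - \tfrac{1}{3}\log\tfrac{1}{3} \approx 0.64 \ \text{nats}, \end{align}

which is exactly what `calc_entropy` computes from the child `frequence` counts and the `children_frequence` of the node for "中国" in the forward Trie; the left entropy is obtained the same way from the backward Trie.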
## Cohesion **Cohesion** is expressed with the **mutual information** from information theory, formula [2]. In probability theory, if x and y are unrelated then p(x,y)=p(x)p(y); the more related they are, the larger p(x,y) is compared with p(x)p(y). The later forms of the expression may be easier to understand: the conditional probability p(x|y) of x given that y occurred, divided by the probability p(x) of x on its own, naturally measures how strongly x and y are related. \begin{align} I(x;y) = \log\frac{p(x,y)}{p(x)p(y)} = \log\frac{p(x|y)}{p(x)} = \log\frac{p(y|x)}{p(y)} [2] \end{align} A conceptual confusion easily arises here: Wikipedia defines expression [2] as the [pointwise mutual information](https://en.wikipedia.org/wiki/Pointwise_mutual_information), while the [mutual information](https://zh.wikipedia.org/wiki/互信息) is defined as follows: \begin{align} I(X;Y) = \sum_{y \in Y} \sum_{x \in X} p(x,y) \log{ \left(\frac{p(x,y)}{p(x)\,p(y)} \right) }\ [3] \end{align} In the introduction of 《信息论——基础理论与应用(第4版)》 (Information Theory: Fundamentals and Applications, 4th edition), edited by Fu Zuyun, expression [2] is defined as the mutual information and expression [3] as the average mutual information, just as the information entropy is the **average self-information**. ```python def calc_mutualinfo(chunks, ngram): """Compute the mutual information. Args: chunks: all text fragments of the data. ngram: the Trie tree. Return: word2mutualinfo: a dict mapping each chunk to its mutual information and frequency. """ def parse(chunk, root): sub_node_y_x = ngram.fetch(chunk) node = sub_node_y_x.parent sub_node_y = root.children[chunk[-1]] # here the mutual information is computed as log(p(y|x)/p(y)) prob_y_x = float(sub_node_y_x.frequence) / node.children_frequence prob_y = float(sub_node_y.frequence) / root.children_frequence mutualinfo = math.log(prob_y_x / prob_y) return mutualinfo, sub_node_y_x.frequence word2mutualinfo = {} root = ngram.get_root() for chunk in chunks: word2mutualinfo[chunk] = parse(chunk, root) return word2mutualinfo ``` ## Filtering Having computed the mutual information and the information entropy (and, along the way, the word frequencies), the last step is to filter the candidate words with thresholds. ```python def _fetch_final(fw_entropy, bw_entropy, fw_mi, bw_mi, entropy_threshold=0.8, mutualinfo_threshold=7, freq_threshold=10): final = {} for k, v in fw_entropy.items(): if k[::-1] in bw_mi and k in fw_mi: mi_min = min(fw_mi[k][0], bw_mi[k[::-1]][0]) word_prob = min(fw_mi[k][1], bw_mi[k[::-1]][1]) if mi_min < mutualinfo_threshold: continue else: continue if word_prob < freq_threshold: continue if k[::-1] in bw_entropy: en_min = min(v, bw_entropy[k[::-1]]) if en_min < entropy_threshold: continue else: continue final[k] = (word_prob, mi_min, en_min) return final ``` ## Results Finally, the method was applied to the opening speech of the 19th National Congress (十九大), with n=10 for the n-grams; the discovered words, sorted by frequency, are listed below and show that the speech covered many topics, which we will not go through one by one. The results still have quite a few problems, for example "二〇": the thresholds are not yet set accurately enough, and one could try to obtain them with machine-learning methods. 经济|70 改革|69 我们|64 必须|61 领导|60 完善|57 历史|44 不断|43 群众|43 教育|43 战略|42 思想|40 世界|39 问题|37 提高|37 组织|36 监督|35 加快|35 依法|34 精神|33 团结|33 复兴|32 保障|31 奋斗|30 根本|29 环境|29 军队|29 开放|27 服务|27 理论|26 干部|26 创造|26 基础|25 意识|25 维护|25 协商|24 解决|24 贯彻|23 斗争|23 目标|21 统筹|20 始终|19 方式|19 水平|19 科学|19 利益|19 市场|19 基层|19 积极|18 马克思|18 反对|18 道路|18 自然|18 增长|17 科技|17 稳定|17 原则|17 两岸|17 取得|16 质量|16 农村|16 矛盾|16 协调|15 巩固|15 收入|15 绿色|15 自觉|15 方针|15 纪律|15 长期|15 保证|15 同胞|15 命运|14 美好生活|14 五年|14 传统|14 繁荣|14 没有|14 使命|13 广泛|13 日益|13 价值|13 健康|13 资源|13 参与|13 突出|13 腐败|13 充分|13 梦想|13 任何|13 二〇|13 代表|12 阶段|12 深刻|12 布局|12 区域|12 贸易|12 核心|12 城乡|12 生态文明|12 工程|12 任务|12 地区|12 责任|12 认识|12 胜利|11 贡献|11 覆盖|11 生态环境|11 具有|11 面临|11 各种|11 培育|11 企业|11 继续|10 团结带领|10 提升|10 明显|10 弘扬|10 脱贫|10 贫困|10 标准|10 注重|10 基本实现|10 培养|10 青年|10 ## Code download git clone https://github.com/Ushiao/new-word-discovery.git
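To make the pieces above concrete, here is a minimal toy run (my addition; the corpus is a made-up ASCII string and is far too small for the thresholds in `_fetch_final` to be meaningful):

```python
import math

corpus = "abcabcabd"
n = 3

forward = TrieTree(size=n)
backward = TrieTree(size=n)
for k in range(len(corpus)):
    forward.insert(corpus[k:k + n])           # forward n-grams
    backward.insert(corpus[::-1][k:k + n])    # reversed n-grams, for the left side

chunks = ["ab"]
print(calc_entropy(chunks, forward))      # right entropy of "ab"
print(calc_mutualinfo(chunks, forward))   # cohesion of "ab" and its frequency
```

On this corpus, "ab" is followed by "c" twice and by "d" once, so its right entropy is about 0.64 and its cohesion is log(p(b|a)/p(b)) = log(3) ≈ 1.10.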
8a7ccc78cd873593fa2d60702f23b2f60ce972a4
11,336
ipynb
Jupyter Notebook
docs/wordiscovery.ipynb
KunFly/new-word-discovery
ac9c15ea3b899cc279c721c1f45eaccc37cc9fb7
[ "MIT" ]
45
2018-01-04T02:43:53.000Z
2021-12-02T11:57:55.000Z
docs/wordiscovery.ipynb
KunFly/new-word-discovery
ac9c15ea3b899cc279c721c1f45eaccc37cc9fb7
[ "MIT" ]
4
2018-01-08T03:15:27.000Z
2020-07-24T05:48:41.000Z
docs/wordiscovery.ipynb
KunFly/new-word-discovery
ac9c15ea3b899cc279c721c1f45eaccc37cc9fb7
[ "MIT" ]
13
2018-01-04T02:43:53.000Z
2019-12-25T09:00:17.000Z
34.455927
772
0.533257
true
4,017
Qwen/Qwen-72B
1. YES 2. YES
0.835484
0.749087
0.62585
__label__yue_Hant
0.340731
0.29239
This notebook is part of the `clifford` documentation: https://clifford.readthedocs.io/. # Application to Robotic Manipulators This notebook is intended to expand upon the ideas in part of the presentation [Robots, Ganja & Screw Theory](https://slides.com/hugohadfield/game2020) ## Serial manipulator [(slides)](https://slides.com/hugohadfield/game2020#/serial) Let's consider a 2-link 3 DOF arm. We'll model the links within the robot with rotors, which transform to the coordinate frame of the end of each link. This is very similar to the approach that would classically be taken with 4&times;4 matrices. We're going to define our class piecewise as we go along here. To aid that, we'll write a simple base class to let us do just that. In your own code, there's no need to do this. ```python class AddMethodsAsWeGo: @classmethod def _add_method(cls, m): if isinstance(m, property): name = (m.fget or m.fset).__name__ else: name = m.__name__ setattr(cls, name, m) ``` Let's start by defining some names for the links, and a place to store our parameters: ```python from enum import Enum class Links(Enum): BASE = 'b' SHOULDER = 's' UPPER = 'u' ELBOW = 'e' FOREARM = 'f' ENDPOINT = 'n' class SerialRobot(AddMethodsAsWeGo): def __init__(self, rho, l): self.l = l self.rho = rho self._thetas = (0, 0, 0) @property def thetas(self): return self._thetas ``` ### Forward kinematics [(slides)](https://slides.com/hugohadfield/game2020#/serial-forward-rotors) As a reminder, we can construct rotation and translation motors as: $$ \begin{align} T(a) &= \exp \left(\frac{1}{2} n_{\infty} \wedge a \right) \\ &= 1 + \frac{1}{2}n_{\infty} \wedge a \\ R(\theta, \hat B) &= \exp (\frac{1}{2} \theta \hat B) \\ &= \cos \frac{\theta}{2} + \sin \frac{\theta}{2} \hat B \end{align} $$ Applying these to our geometry, we get $$ \begin{align} R_{\text{base} \gets \text{shoulder}} &= R(\theta_0, e_1 \wedge e_3) \\ R_{\text{shoulder} \gets \text{upper arm}} &= R(\theta_1, e_1 \wedge e_2) \\ R_{\text{upper arm} \gets \text{elbow}} &= T(\rho e_1) \\ R_{\text{elbow} \gets \text{forearm}} &= R(\theta_2, e_1 \wedge e_2) \\ R_{\text{forearm} \gets \text{endpoint}} &= T(-l e_1)\\ \end{align} $$ From which we can get the overall rotor to the frame of the endpoint, and the positions $X$ and $Y$: $$ \begin{align} R_{\text{base} \gets \text{elbow}} &= R_{\text{base} \gets \text{shoulder}} R_{\text{shoulder} \gets \text{upper arm}} R_{\text{upper arm} \gets \text{elbow}} \\ X &= R_{\text{base} \gets \text{elbow}} n_0 \tilde R_{\text{base} \gets \text{elbow}} \\ R_{\text{base} \gets \text{endpoint}} &= R_{\text{base} \gets \text{shoulder}} R_{\text{shoulder} \gets \text{upper arm}} R_{\text{upper arm} \gets \text{elbow}} R_{\text{elbow} \gets \text{forearm}} R_{\text{forearm} \gets \text{endpoint}} \\ Y &= R_{\text{base} \gets \text{endpoint}} n_0 \tilde R_{\text{base} \gets \text{endpoint}} \\ \end{align} $$ We can write this as: ```python from clifford.g3c import * from clifford.tools.g3c import generate_translation_rotor, apply_rotor from clifford.tools.g3 import generate_rotation_rotor def _update_chain(rotors, a, b, c): rotors[a, c] = rotors[a, b] * rotors[b, c] @SerialRobot._add_method @SerialRobot.thetas.setter def thetas(self, value): theta0, theta1, theta2 = self._thetas = value # shorthands for brevity R = generate_rotation_rotor T = generate_translation_rotor rotors = {} rotors[Links.BASE, Links.SHOULDER] = R(theta0, e1, e3) rotors[Links.SHOULDER, Links.UPPER] = R(theta1, e1, e2) rotors[Links.UPPER, Links.ELBOW] = T(self.rho * e1) 
rotors[Links.ELBOW, Links.FOREARM] = R(theta2, e1, e2) rotors[Links.FOREARM, Links.ENDPOINT] = T(-self.l * e1) _update_chain(rotors, Links.BASE, Links.SHOULDER, Links.UPPER) _update_chain(rotors, Links.BASE, Links.UPPER, Links.ELBOW) _update_chain(rotors, Links.BASE, Links.ELBOW, Links.FOREARM) _update_chain(rotors, Links.BASE, Links.FOREARM, Links.ENDPOINT) self.rotors = rotors @SerialRobot._add_method @property def y_pos(self): return apply_rotor(eo, self.rotors[Links.BASE, Links.ENDPOINT]) @SerialRobot._add_method @property def x_pos(self): return apply_rotor(eo, self.rotors[Links.BASE, Links.ELBOW]) ``` Let's write a renderer so we can check this all works ```python from pyganja import GanjaScene def add_rotor(sc: GanjaScene, r, *, label=None, color=None, scale=0.1): """ show how a rotor transforms the axes at the origin """ y = apply_rotor(eo, r) y_frame = [ apply_rotor(d, r) for d in [up(scale*e1), up(scale*e2), up(scale*e3)] ] sc.add_object(y, label=label, color=color) sc.add_facet([y, y_frame[0]], color=(255, 0, 0)) sc.add_facet([y, y_frame[1]], color=(0, 255, 0)) sc.add_facet([y, y_frame[2]], color=(0, 0, 255)) @SerialRobot._add_method def to_scene(self): sc = GanjaScene() axis_scale = 0.1 link_scale = 0.05 arm_color = (192, 192, 192) base_obj = (up(0.2*e1)^up(0.2*e3)^up(-0.2*e1)).normal() sc.add_object(base_obj, color=0) shoulder_axis = [ apply_rotor(p, self.rotors[Links.BASE, Links.UPPER]) for p in [up(axis_scale*e3), up(-axis_scale*e3)] ] sc.add_facet(shoulder_axis, color=(0, 0, 128)) shoulder_angle = [ apply_rotor(eo, self.rotors[Links.BASE, Links.SHOULDER]), apply_rotor(up(axis_scale*e1), self.rotors[Links.BASE, Links.SHOULDER]), apply_rotor(up(axis_scale*e1), self.rotors[Links.BASE, Links.UPPER]), ] sc.add_facet(shoulder_angle, color=(0, 0, 128)) upper_arm_points = [ apply_rotor(up(link_scale*e3), self.rotors[Links.BASE, Links.UPPER]), apply_rotor(up(-link_scale*e3), self.rotors[Links.BASE, Links.UPPER]), apply_rotor(up(link_scale*e3), self.rotors[Links.BASE, Links.ELBOW]), apply_rotor(up(-link_scale*e3), self.rotors[Links.BASE, Links.ELBOW]) ] sc.add_facet(upper_arm_points[:3], color=arm_color) sc.add_facet(upper_arm_points[1:], color=arm_color) elbow_axis = [ apply_rotor(p, self.rotors[Links.BASE, Links.ELBOW]) for p in [up(axis_scale*e3), up(-axis_scale*e3)] ] sc.add_facet(elbow_axis, color=(0, 0, 128)) forearm_points = [ apply_rotor(up(link_scale*e3), self.rotors[Links.BASE, Links.FOREARM]), apply_rotor(up(-link_scale*e3), self.rotors[Links.BASE, Links.FOREARM]), apply_rotor(up(link_scale*e3), self.rotors[Links.BASE, Links.ENDPOINT]), apply_rotor(up(-link_scale*e3), self.rotors[Links.BASE, Links.ENDPOINT]) ] sc.add_facet(forearm_points[:3], color=arm_color) sc.add_facet(forearm_points[1:], color=arm_color) add_rotor(sc, self.rotors[Links.BASE, Links.ELBOW], label='x', color=(128, 128, 128)) add_rotor(sc, self.rotors[Links.BASE, Links.ENDPOINT], label='y', color=(128, 128, 128)) return sc ``` We can now instantiate our robot ```python serial_robot = SerialRobot(rho=1, l=0.5) ``` Choose a trajectory ```python import math theta_traj = [ (math.pi/6 + i*math.pi/12, math.pi/3 - math.pi/12*i, 3*math.pi/4) for i in range(3) ] ``` And plot the robot in each state, using `ipywidgets` ([docs](https://ipywidgets.readthedocs.io/)) to let us plot ganja side-by-side. Unfortunately, `pyganja` provides no mechanism to animate these plots from python. <div class="alert alert-info"> This will not render side-by-side in the online clifford documentation, but will in a local notebook. 
</div> ```python import ipywidgets from IPython.display import Latex, display from pyganja import draw outputs = [ ipywidgets.Output(layout=ipywidgets.Layout(flex='1')) for i in range(len(theta_traj)) ] for output, thetas in zip(outputs, theta_traj): with output: # interesting part here - run the forward kinematics, print the angles we used serial_robot.thetas = thetas display(Latex(r"$\theta_i = {:.2f}, {:.2f}, {:.2f}$".format(*thetas))) draw(serial_robot.to_scene(), scale=1.5) ipywidgets.HBox(outputs) ``` ### Inverse kinematics [(slides)](https://slides.com/hugohadfield/game2020#/serial-reverse) For the forward kinematics, we didn't actually need conformal geometric algebra at all&mdash;PGA would have done just fine, as all we needed were rotations and translations. The inverse kinematics of a serial manipulator is where CGA provides some nice tricks. There are three facts we know about the position $X$, each of which describes a constraint surface * $X$ must lie on a sphere with radius $l$ centered at $Y$, which can be written $$S^* = Y - \frac{1}{2}l^2n_\infty$$ * $X$ must lie on a sphere with radius $\rho$ centered at $n_o$, which can be written $$S_\text{base}^* = n_0 - \frac{1}{2}\rho^2n_\infty$$ * $X$ must lie on a plane through $n_o$, $\operatorname{up}(e_2)$, and $Y$, which can be written $$\Pi = n_0\wedge \operatorname{up}(e_2)\wedge Y\wedge n_\infty$$ Note that $\Pi = 0$ is possible iff $Y = \operatorname{up}(ke_2)$. For $X$ to satisfy all three constraints, we have \begin{align} S \wedge X = S_\text{base} \wedge X = \Pi \wedge X &= 0 \\ X \wedge (\underbrace{S \vee S_\text{base} \vee \Pi}_P) &= 0 \quad\text{If $\Pi \ne 0$} \\ X \wedge (\underbrace{S \vee S_\text{base}}_C) &= 0 \quad\text{otherwise} \\ \end{align} By looking at the grade of the term labelled $P$, we conclude it must be a point-pair&mdash;which tells us $X$ must lie in one of two locations. Similarly, $C$ must be a circle. ```python @SerialRobot._add_method def _get_x_constraints_for(self, Y): """ Get the constraint surfaces that the elbow position must lie on """ # strictly should be undual, but we don't have that in clifford S = (Y - 0.5*self.l**2*einf).dual() S_base = (eo - 0.5*self.rho**2*einf).dual() Pi = eo ^ up(e2) ^ Y ^ einf return S, S_base, Pi @SerialRobot._add_method def _get_x_positions_for(self, Y): """ Get the space containing all possible elbow positions """ S, S_base, Pi = self._get_x_constraints_for(Y) if Pi == 0: # any solution on the circle is OK return S & S_base else: # there are just two solutions return S & S_base & Pi ``` From the point-pair $P$ we can extract the two possible $X$ locations with: $$ X = \left[1 \pm \frac{P}{\sqrt{P\tilde{P}}}\right](P\cdot n_\infty) $$ To be considered a full solution to the inverse kinematics problem, we need to produce the angles $\theta_0, \theta_1, \theta_2$. 
We can do this as follows ```python @SerialRobot._add_method @SerialRobot.y_pos.setter def y_pos(self, Y): R = generate_rotation_rotor T = generate_translation_rotor rotors = {} rotors[Links.UPPER, Links.ELBOW] = T(self.rho * e1) rotors[Links.FOREARM, Links.ENDPOINT] = T(-self.l * e1) x_options = self._get_x_positions_for(Y) if x_options.grades == {3}: # no need to adjust the base angle theta_0 = self.thetas[0] rotors[Links.BASE, Links.SHOULDER] = self.rotors[Links.BASE, Links.SHOULDER] # remove the rotation from x, intersect it with the plane of the links x_options = x_options & (eo ^ up(e3) ^ up(e1) ^ einf) else: y_down = down(Y) theta0 = math.atan2(y_down[(3,)], y_down[(1,)]) rotors[Links.BASE, Links.SHOULDER] = R(theta0, e1, e3) # remove the first rotor from x x_options = apply_rotor(x_options, ~rotors[Links.BASE, Links.SHOULDER]) # project out one end of the point-pair x = (1 - x_options.normal()) * (x_options | einf) x_down = down(x) theta1 = math.atan2(x_down[(2,)], x_down[(1,)]) rotors[Links.SHOULDER, Links.UPPER] = R(theta1, e1, e2) _update_chain(rotors, Links.BASE, Links.SHOULDER, Links.UPPER) _update_chain(rotors, Links.BASE, Links.UPPER, Links.ELBOW) # remove the second rotor Y = apply_rotor(Y, ~rotors[Links.BASE, Links.ELBOW]) y_down = down(Y) theta2 = math.atan2(-y_down[(2,)], -y_down[(1,)]) rotors[Links.ELBOW, Links.FOREARM] = R(theta2, e1, e2) _update_chain(rotors, Links.BASE, Links.ELBOW, Links.FOREARM) _update_chain(rotors, Links.BASE, Links.FOREARM, Links.ENDPOINT) self._thetas = (theta0, theta1, theta2) self.rotors = rotors ``` Define a trajectory again, this time with a scene to render it: ```python y_traj = [ up(0.3*e3 + 0.8*e2 - 0.25*e1), up(0.6*e3 + 0.8*e2), up(0.9*e3 + 0.8*e2 + 0.25*e1) ] expected_scene = GanjaScene() expected_scene.add_facet(y_traj[0:2], color=(255, 128, 128)) expected_scene.add_facet(y_traj[1:3], color=(255, 128, 128)) ``` And we can run the inverse kinematics by setting `serial_robot.y_pos`: ```python outputs = [ ipywidgets.Output(layout=ipywidgets.Layout(flex='1')) for i in range(len(y_traj)) ] first = True for output, y in zip(outputs, y_traj): with output: # interesting part here - run the reverse kinematics, print the angles we used serial_robot.y_pos = y display(Latex(r"$\theta_i = {:.2f}, {:.2f}, {:.2f}$".format(*serial_robot.thetas))) sc = serial_robot.to_scene() # Show the spheres we used to construct the solution sc += expected_scene if first: extra_scene = GanjaScene() S, S_base, Pi = serial_robot._get_x_constraints_for(y) extra_scene.add_object(S_base, label='S_base', color=(255, 255, 128)) extra_scene.add_object(S, label='S', color=(255, 128, 128)) extra_scene.add_object(Pi, label='Pi', color=(128, 255, 192, 128)) sc += extra_scene draw(sc, scale=1.5) first = False ipywidgets.HBox(outputs) ``` ## Parallel manipulators For now, refer to the presentation [(slides)](https://slides.com/hugohadfield/game2020#/parallel) ### Inverse kinematics [(slides)](https://slides.com/hugohadfield/game2020#/agile-3dof-inverse) For now, refer to the presentation ### Forward kinematics [(slides)](https://slides.com/hugohadfield/game2020#/agile-2dof-forward) For now, refer to the presentation
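A small round-trip sanity check for the serial arm (my addition; the test angles and tolerance are arbitrary, and the target must stay inside the reachable workspace): running the forward kinematics to obtain an endpoint, perturbing the joint angles, and then running the inverse kinematics should bring the endpoint back onto the same target.

```python
import numpy as np

serial_robot.thetas = (math.pi / 6, math.pi / 4, 2 * math.pi / 3)
Y_target = serial_robot.y_pos

serial_robot.thetas = (0.0, 0.0, 0.0)   # forget the original joint angles
serial_robot.y_pos = Y_target           # run the inverse kinematics

error = down(serial_robot.y_pos) - down(Y_target)
assert np.allclose(error.value, 0, atol=1e-6), error
```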
972efbcd990bfccd5465fbb65f31e4556a341b6f
20,506
ipynb
Jupyter Notebook
docs/tutorials/cga/robotic-manipulators.ipynb
hugohadfield/clifford
3e15da3ba429c69a5a5a641f2103d7bcca42617d
[ "BSD-3-Clause" ]
642
2017-11-17T09:49:48.000Z
2022-03-21T22:02:25.000Z
docs/tutorials/cga/robotic-manipulators.ipynb
hugohadfield/clifford
3e15da3ba429c69a5a5a641f2103d7bcca42617d
[ "BSD-3-Clause" ]
347
2017-11-17T13:57:43.000Z
2022-01-20T09:40:15.000Z
docs/tutorials/cga/robotic-manipulators.ipynb
hugohadfield/clifford
3e15da3ba429c69a5a5a641f2103d7bcca42617d
[ "BSD-3-Clause" ]
61
2017-11-19T17:15:26.000Z
2022-01-15T05:18:27.000Z
35.416235
254
0.54111
true
4,445
Qwen/Qwen-72B
1. YES 2. YES
0.841826
0.727975
0.612828
__label__eng_Latn
0.581948
0.262136
# Modular Strided Intervals Fix $N \in \{1, \ldots, 2^{23} - 1\}$. The LLVM type $\texttt{i}N$ represents $N$-bit tuples: $\texttt{i}N := \{0, 1\}^N$ These tuples can be interpreted as elements of $\mathbb{Z}/{2^N}$ using the isomorphism $\phi_N$ together with an appropriate map of operations: $\phi_N \colon \texttt{i}N \rightarrow \mathbb{Z}/{2^N}, (b_0, \ldots, b_{N-1}) \mapsto \left(\sum_{k=0}^{N-1}b_k 2^k\right) + 2^N \mathbb{Z}$ An abstraction of $\mathbb{Z}/{2^N}$ and therefore also of $\texttt{i}N$ can be obtained by a generalization of intervals over $\mathbb{Z}$, represented by the type $\mathrm{MSI}_N$ of _modular strided intervals (MSI)_: $\mathrm{MSI}_N := \{s[a, b]_N \mid a, b, s \in \mathbb{Z}/2^N\}$ The semantics of an MSI is given by the concretization function $\gamma_N$: $\gamma_N \colon \mathrm{MSI}_N \rightarrow \mathcal{P}(\mathbb{Z}/{2^N}), s[a, b]_N \mapsto \{k + 2^N \mathbb{Z} \mid k \in \mathbb{Z}, a \leq k, k \leq \min \{l \in \mathbb{Z} \mid a \leq l, l \equiv b \mod 2^N\}, k \equiv a \mod s\}$ ```python from itertools import count, takewhile from random import randint from sympy import gcd, lcm ``` ```python class MSI(object): """ Modular strided interval """ def __init__(self, bit_width, begin, end, stride=1): self.bit_width = bit_width self.begin = begin self.end = end self.stride = stride def __eq__(self, other): return (self.bit_width == other.bit_width and self.stride == other.stride and self.begin == other.begin and self.end == other.end) def __repr__(self): return f'{self.stride}[{self.begin}, {self.end}]_{{{self.bit_width}}}' def __hash__(self): return (self.begin+23) * (self.end+29) * (self.stride+31) % 16777216 def _tuple_repr(self): return (self.bit_width, self.begin, self.end, self.stride) ``` ## Defining Functions and Predicates ```python # This predicate has no tests; it's an axiom. def valid(i): n, a, b, s = i._tuple_repr() if n <= 0: return False if a < 0 or 2**n <= a: return False if b < 0 or 2**n <= b: return False if s < 0 or 2**n <= s: return False return True ``` ```python def gamma(i): n = i.bit_width s = i.stride a = i.begin b = i.end if a <= i.end else i.end + 2**n return {k % 2**n for k in takewhile( lambda k: k <= b, (a+l*s for l in count()) if s > 0 else [a] )} ``` $\gamma_N$ is not injective, therefore normalization of MSIs is needed s.t. $\gamma_N$ restricted to $\{i \in \mathrm{MSI}_N \mid \mathrm{normal}(i)\}$ is injective. All other operations on MSIs assume that their operands are normal and are expected to return a normal MSI. Explanation of $\textrm{normal}$: Fix $s[a, b]_N \in \textrm{MSI}_N$. Case 1: Assume $s = 0$. 
```python def normal(i): n, a, b, s = i._tuple_repr() if s == 0 and a != b: return False if a == b and s != 0: return False if not b in gamma(i): return False a_ = a - s if a_ != a and a_ >= 0 and gamma(i) == gamma(MSI(n, a_, (b-s) % 2**n, s)): return False if b < a and gamma(i) == gamma(MSI(n, b, a, 2**n - s)): return False return True ``` ## Test sets of MSIs with theire respective concretizations ```python test_MSIs_handpicked_gamma = [ # normalized # no wraparound # strid = 0 # begin = 0 (MSI(4, 0, 0, 0), {0}), # begin > 0 (MSI(4, 3, 3, 0), {3}), # strid = 1 # begin = 0 # end < 2**N-1 (MSI(4, 0, 2, 1), {0, 1, 2}), # end = 2**N-1 (MSI(3, 0, 7, 1), {0, 1, 2, 3, 4, 5, 6, 7}), # begin > 0 (MSI(4, 3, 4, 1), {3, 4}), # stride > 1 # begin = 0 (MSI(4, 0, 4, 2), {0, 2, 4}), # begin > 0 (MSI(3, 1, 7, 3), {1, 4, 7}), (MSI(6, 6, 26, 10), {6, 16, 26}), # wraparound # stride = 1 (MSI(4, 14, 2, 1), {14, 15, 0, 1, 2}), # stride > 1 (MSI(4, 11, 4, 3), {1, 4, 11, 14})] test_MSIs_handpicked_gamma_unnormalized = [ # unnormalized # no wraparound # stride = 0 # begin = 0 (MSI(4, 0, 3, 0), {0}), # begin > 0 (MSI(4, 3, 8, 0), {3}), # stride = 1 # begin = 0 # end = begin (MSI(4, 0, 0, 1), {0}), # end != begin (MSI(2, 2, 1, 1), {0, 1, 2, 3}), # begin > 0 # end = begin (MSI(4, 3, 3, 1), {3}), # end != begin (MSI(3, 5, 4, 1), {0, 1, 2, 3, 4, 5, 6, 7}), # stride > 1 # begin = 0 (MSI(4, 0, 5, 2), {0, 2, 4}), (MSI(4, 0, 3, 5), {0}), # begin > 0 # end = begin - stride mod 2**N (MSI(4, 11, 7, 4), {3, 7, 11, 15}), # end != begin - stride mod 2**N (MSI(6, 6, 35, 10), {6, 16, 26}), (MSI(4, 3, 7, 5), {3}), # wraparound # stride = 0 (MSI(4, 5, 3, 0), {5}), # stride = 1 (MSI(3, 5, 4, 1), {0, 1, 2, 3, 4, 5, 6, 7}), (MSI(4, 15, 0, 1), {15, 0}), # stride > 1 # end = begin - stride mod 2**N (MSI(4, 10, 6, 4), {2, 6, 10, 14}), (MSI(4, 12, 2, 6), {2, 12}), # end != begin and != begin - stride mod 2**N (MSI(4, 13, 2, 8), {13}), (MSI(4, 11, 6, 3), {11, 14, 1, 4}), (MSI(4, 10, 9, 4), {2, 6, 10, 14}), (MSI(4, 12, 7, 6), {2, 12}) ] ``` ```python test_MSIs_handpicked = {} for i, _ in test_MSIs_handpicked_gamma: n = i.bit_width if n not in test_MSIs_handpicked: test_MSIs_handpicked[n] = [i] else: test_MSIs_handpicked[n].append(i) print('size: ' + ', '.join(f'{n}: {len(js)}' for n, js in test_MSIs_handpicked.items())) test_MSIs_handpicked_unnormalized = {} for i, _ in test_MSIs_handpicked_gamma_unnormalized: n = i.bit_width if n not in test_MSIs_handpicked_unnormalized: test_MSIs_handpicked_unnormalized[n] = [i] else: test_MSIs_handpicked_unnormalized[n].append(i) print('size: ' + ', '.join(f'{n}: {len(js)}' for n, js in test_MSIs_handpicked_unnormalized.items())) ``` size: 4: 7, 3: 2, 6: 1 size: 4: 16, 2: 1, 3: 2, 6: 1 ## Tests for gamma ```python def test_gamma(): failed = False for i, ks in test_MSIs_handpicked_gamma: if not gamma(i) == ks: failed = True print(f'{i}: {gamma(i)}, {ks}') if not failed: print('succeeded') def test_gamma_unnormalized(): failed = False for i, ks in test_MSIs_handpicked_gamma_unnormalized: if not gamma(i) == ks: failed = True print(f'{i}: {gamma(i)}, {ks}') if not failed: print('succeeded') ``` ```python test_gamma() test_gamma_unnormalized() ``` succeeded succeeded ## Normalization function ```python def normalize(i): n, a, b, s = i._tuple_repr() if s == 0: b = a else: b_ = b if a <= b else b+2**n b = (b_ - (b_-a) % s) % 2**n if a == b: s = 0 else: if 2**n % s == 0 and (a-b) % 2**n == s: a = a % s b = (a-s) % 2**n elif b == (a+s) % 2**n and b < a: a, b = b, a s = b-a return MSI(n, a, b, s) ``` ## Test sets 
and utility functions for testing Warning: `normal` is used in `unary_function_test` if the `unnormalized` parameter is `True`, but tested later. Therefore this parameter should not be set before `normal` is tested. ```python def test_set(bit_widths, begins, ends, strides, only_normal=True, print_stats=False): MSIs = {} for n in bit_widths: js = set() bs = begins(n) for b in bs: es = ends(n) for e in es: ss = strides(n) for s in ss: if only_normal: js.add(normalize(MSI(n, b, e, s))) else: js.add(MSI(n, b, e, s)) MSIs[n] = list(js) if print_stats: print('size: ' + ', '.join(f'{n}: {len(js)}' for n, js in MSIs.items())) if not only_normal: print('unnormalized: ' + ', '.join(f'{n}: {len(list(0 for j in js if not normal(j)))}' for n, js in MSIs.items())) return MSIs ``` ```python f = lambda n: list(range(2**n)) g = lambda n: list(range(2**n)) print('test_MSIs_4_exhaustive') test_MSIs_4_exhaustive = test_set(range(1, 4+1), f, g, f, print_stats=True) print('test_MSIs_4_exhaustive') test_MSIs_4_exhaustive_unnormalized = test_set(range(1, 4+1), f, g, f, only_normal=False, print_stats=True) ``` test_MSIs_4_exhaustive size: 1: 3, 2: 15, 3: 95, 4: 575 test_MSIs_4_exhaustive size: 1: 8, 2: 64, 3: 512, 4: 4096 unnormalized: 1: 5, 2: 49, 3: 417, 4: 3521 ```python f = lambda n: list(range(2**n)) g = lambda n: list(range(2**n)) print('test_MSIs_5_6_exhaustive') test_MSIs_5_6_exhaustive = test_set(range(5, 6+1), f, g, f, print_stats=True) print('test_MSIs_5_6_exhaustive') test_MSIs_5_6_exhaustive_unnormalized = test_set(range(5, 6+1), f, g, f, only_normal=False, print_stats=True) ``` test_MSIs_5_6_exhaustive size: 5: 3039, 6: 15231 test_MSIs_5_6_exhaustive size: 5: 32768, 6: 262144 unnormalized: 5: 29729, 6: 246913 ```python test_MSIs_6_exhaustive = { **test_MSIs_4_exhaustive, **test_MSIs_5_6_exhaustive } test_MSIs_6_exhaustive_unnormalized = { **test_MSIs_4_exhaustive_unnormalized, **test_MSIs_5_6_exhaustive_unnormalized } ``` ```python ks = [a+b for a in [0, 30] for b in [0, 1, 2, 3, 4, 5, 6, 7, 8, 10, 15]] ls = [30, 31, 32, 33, 35, 36, 40, 45] f = lambda _: ks print('test_MSIs_6_partial') test_MSIs_6_partial = test_set([6], f, g, f, print_stats=True) print('\ntest_MSIs_6_partial_unnormalized') test_MSIs_6_partial_unnormalized = test_set([6], f, f, f, only_normal=False, print_stats=True) ``` test_MSIs_6_partial size: 6: 4111 test_MSIs_6_partial_unnormalized size: 6: 10648 unnormalized: 6: 9309 ```python ks = [a+b for a in [0, 30, 60] for b in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 25]] ls = [0, 2, 3, 5, 6, 10, 15] f = lambda n: takewhile(lambda k: k < 2**n, ks) g = lambda n: (((15 if 2**n < 30 else 45) + 15 + a) % 2**n for a in ls) print('test_MSIs_8_partial') test_MSIs_8_partial = test_set([8], f, g, f, print_stats=True) print('\ntest_MSIs_8_partial_unnormalized') test_MSIs_8_partial_unnormalized = test_set([8], f, g, f, only_normal=False, print_stats=True) ``` test_MSIs_8_partial size: 8: 2438 test_MSIs_8_partial_unnormalized size: 8: 10647 unnormalized: 8: 9705 ```python f = lambda n: set(randint(0, 2**n-1) for _ in range(8)) g = lambda n: set(randint(0, 2**(n-1)-1) for _ in range(8)) print('test_MSIs_random') test_MSIs_random = test_set(range(5, 8+1), f, f, g, print_stats=True) print('\ntest_MSIs_random_unnormalized') test_MSIs_random_unnormalized = test_set(range(5, 8+1), f, f, g, only_normal=False, print_stats=True) ``` test_MSIs_random size: 5: 181, 6: 288, 7: 339, 8: 316 test_MSIs_random_unnormalized size: 5: 253, 6: 412, 7: 462, 8: 487 unnormalized: 5: 218, 6: 366, 7: 432, 8: 468 ```python def 
_unary_function_test(f, p, test_MSIs, test_count=0, fail_count=0, fail_lim=8): for n, js in test_MSIs.items(): print(f' testing bit width: {n}') for i in js: test_count += 1 x = f(i) if not p(n, i, x): fail_count += 1 print(f' {i}: {x}') if fail_count == fail_lim: return test_count, fail_count if test_count % 25000 == 0: print(f'- tested {test_count} arguments') return test_count, fail_count def unary_function_test(f, p, big=False, unnormalized=False): fail_lim = 16 if big else 8 test_count = fail_count = 0 print('testing MSIs with bit width up to 4 exhaustively') MSIs = test_MSIs_4_exhaustive_unnormalized if unnormalized else test_MSIs_4_exhaustive test_count, fail_count = _unary_function_test(f, p, MSIs, test_count, fail_count, fail_lim) if fail_count == fail_lim: return print('testing random MSIs with bit width from 5 to 8') MSIs = test_MSIs_random_unnormalized if unnormalized else test_MSIs_random test_count, fail_count = _unary_function_test(f, p, MSIs, test_count, fail_count, fail_lim) if fail_count == fail_lim: return if big: print('testing some MSIs with bit width 6') MSIs = test_MSIs_6_partial_unnormalized if unnormalized else test_MSIs_6_partial test_count, fail_count = _unary_function_test(f, p, MSIs, test_count, fail_count, fail_lim) if fail_count == fail_lim: return print('testing some MSIs with bit width 8') MSIs = test_MSIs_8_partial_unnormalized if unnormalized else test_MSIs_8_partial test_count, fail_count = _unary_function_test(f, p, MSIs, test_count, fail_count, fail_lim) if fail_count == fail_lim: return if fail_count == 0: print(f'succeeded (tested {test_count} arguments in total)') ``` ```python def _bin_fun_test(f, p, test_MSIs, test_count=0, fail_count=0, fail_lim=8): for n, js in test_MSIs.items(): print(f' testing bit width: {n}') for i in js: for j in js: test_count += 1 x = f(i, j) if not p(n, i, j, x): fail_count += 1 print(f' f {i} {j}: {x}') if fail_count == fail_lim: return test_count, fail_count if test_count % 25000 == 0: print(f'- tested {test_count} arguments') return test_count, fail_count def bin_fun_test(f, p, big=False, non_zero=False): fail_lim = 16 if big else 8 test_count = fail_count = 0 print('testing MSIs with bit width up to 4 exhaustively') test_count, fail_count = _bin_fun_test(f, p, test_MSIs_4_exhaustive, test_count, fail_count, fail_lim) if fail_count == fail_lim: return print('testing random MSIs with bit width from 5 to 8') test_count, fail_count = _bin_fun_test(f, p, test_MSIs_random, test_count, fail_count, fail_lim) if fail_count == fail_lim: return if big: print('testing some MSIs with bit width 6') test_count, fail_count = _bin_fun_test(f, p, test_MSIs_6_partial, test_count, fail_count, fail_lim) if fail_count == fail_lim: return print('testing some MSIs with bit width 8') test_count, fail_count = _bin_fun_test(f, p, test_MSIs_8_partial, test_count, fail_count, fail_lim) if fail_count == fail_lim: return if fail_count == 0: print(f'succeeded (tested {test_count} arguments in total)') ``` ```python def _bin_op_test(op_MSI, op, test_MSIs, test_count=0, fail_count=0, fail_lim=8, bad_args={}, bad_lim=8, non_zero=False): bad_precision = max(bad_args.values()) if len(bad_args) > 0 else 1 for n, js in test_MSIs.items(): print(f' testing bit width: {n}') for i in js: vals_i = gamma(i) for j in js: test_count += 1 vals_j = gamma(j) if non_zero and 0 in vals_j: vals_op = {op(n, k, l) for k in vals_i for l in vals_j if not l == 0} else: vals_op = {op(n, k, l) for k in vals_i for l in vals_j} vals_op_MSI = gamma(op_MSI(i, j)) if not vals_op 
<= vals_op_MSI:
                    fail_count += 1
                    print(f' {i} op {j}: {op_MSI(i, j)}, {vals_op}, {vals_i}, {vals_j}')
                    if fail_count == fail_lim:
                        return test_count, fail_count, bad_args
                elif not len(vals_op) == 0:
                    precision = len(vals_op) / (len(vals_op_MSI) * 2**n)
                    if precision < bad_precision:
                        if len(bad_args) == bad_lim:
                            bad_args.pop(list(bad_args.keys())[list(bad_args.values()).index(bad_precision)])
                        bad_args[(i, j)] = precision
                        bad_precision = max(bad_args.values())
                if test_count % 25000 == 0:
                    print(f'- tested {test_count} arguments')
    return test_count, fail_count, bad_args

def bin_op_test(op_MSI, op, big=False, non_zero=False):
    fail_lim = bad_lim = 16 if big else 8
    test_count = fail_count = 0
    bad_args = {}
    print('testing MSIs with bit width up to 4 exhaustively')
    test_count, fail_count, bad_args = _bin_op_test(op_MSI, op, test_MSIs_4_exhaustive, test_count, fail_count, fail_lim, bad_args, bad_lim, non_zero=non_zero)
    if fail_count == fail_lim:
        return
    print('testing random MSIs with bit width from 5 to 8')
    test_count, fail_count, bad_args = _bin_op_test(op_MSI, op, test_MSIs_random, test_count, fail_count, fail_lim, bad_args, bad_lim, non_zero=non_zero)
    if fail_count == fail_lim:
        return
    if big:
        print('testing some MSIs with bit width 6')
        test_count, fail_count, bad_args = _bin_op_test(op_MSI, op, test_MSIs_6_partial, test_count, fail_count, fail_lim, bad_args, bad_lim, non_zero=non_zero)
        if fail_count == fail_lim:
            return
        print('testing some MSIs with bit width 8')
        test_count, fail_count, bad_args = _bin_op_test(op_MSI, op, test_MSIs_8_partial, test_count, fail_count, fail_lim, bad_args, bad_lim, non_zero=non_zero)
        if fail_count == fail_lim:
            return
    if fail_count == 0:
        print(f'succeeded (tested {test_count} arguments in total)')
    print('arguments with least precise results:')
    for (i, j), r in bad_args.items():
        print(f'{i}, {j}: {r}')
```

```python
def _bin_rel_test(rel_MSI, rel, test_MSIs, test_count=0, fail_count=0, fail_lim=8):
    for n, js in test_MSIs.items():
        print(f' testing bit width: {n}')
        for i in js:
            for j in js:
                test_count += 1
                if not (rel_MSI(i, j) == rel(i, j)):
                    fail_count += 1
                    print(f' {i} rel {j}: {rel_MSI(i, j)}')
                    if fail_count == fail_lim:
                        return test_count, fail_count
                if test_count % 25000 == 0:
                    print(f'- tested {test_count} arguments')
    return test_count, fail_count

def bin_rel_test(rel_MSI, rel, big=False):
    fail_lim = 16 if big else 8
    test_count = fail_count = 0
    print('testing MSIs with bit width up to 4 exhaustively')
    test_count, fail_count = _bin_rel_test(rel_MSI, rel, test_MSIs_4_exhaustive, test_count, fail_count, fail_lim)
    if fail_count == fail_lim:
        return
    print('testing random MSIs with bit width from 5 to 8')
    test_count, fail_count = _bin_rel_test(rel_MSI, rel, test_MSIs_random, test_count, fail_count, fail_lim)
    if fail_count == fail_lim:
        return
    if big:
        print('testing some MSIs with bit width 6')
        test_count, fail_count = _bin_rel_test(rel_MSI, rel, test_MSIs_6_partial, test_count, fail_count, fail_lim)
        if fail_count == fail_lim:
            return
        print('testing some MSIs with bit width 8')
        test_count, fail_count = _bin_rel_test(rel_MSI, rel, test_MSIs_8_partial, test_count, fail_count, fail_lim)
        if fail_count == fail_lim:
            return
    if fail_count == 0:
        print(f'succeeded (tested {test_count} arguments in total)')
```

## Test for normal

```python
def test_normal():
    failed = False
    test_count = fail_count = 0
    for n, js in test_MSIs_6_exhaustive.items():
        equiv_classes = {}
        for i in js:
            a = frozenset(gamma(i))
            if a in equiv_classes:
                equiv_classes[a].add(i)
            else:
                equiv_classes[a] = {i}
        for equiv_class in equiv_classes.values():
            norm_forms = list(filter(normal, equiv_class))
            test_count += 1
            if len(norm_forms) != 1:
                failed = True
                fail_count += 1
                if len(norm_forms) == 0:
                    print(f'no normal form for {equiv_class}')
                else:
                    print(f'multiple normal forms {norm_forms}')
                if fail_count > 8:
                    return
    print(f'succeeded (tested {test_count} equivalence classes in total)')
```

```python
test_normal()
```

    succeeded (tested 18958 equivalence classes in total)

## Helper functions

```python
def bounds(i):
    n, a, b, _ = i._tuple_repr()
    if a <= b:
        return a, b, False
    else:
        return a, b + 2**n, True
```

```python
def contains(i, k):
    n, a, b, s = i._tuple_repr()
    if s == 0:
        return a == k
    elif a <= b:
        return a <= k and k <= b and (k - a) % s == 0
    else:
        if k >= a:
            return (k - a) % s == 0
        elif k <= b:
            return (k - b) % s == 0
        else:
            return False
```

```python
def test_contains():
    failed = False
    test_count = fail_count = 0
    for n, js in test_MSIs_6_exhaustive.items():
        for i in js:
            test_count += 1
            a = gamma(i)
            for k in range(2**n):
                if k in a and not contains(i, k):
                    failed = True
                    fail_count += 1
                    print(f'{k} in gamma({i})')
                if k not in a and contains(i, k):
                    failed = True
                    fail_count += 1
                    print(f'{k} not in gamma({i})')
                if fail_count > 8:
                    return
    print(f'succeeded (tested {test_count} arguments)')
```

```python
test_contains()
```

    succeeded (tested 18958 arguments)
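As a small added illustration (not one of the original cells), the two helpers can be exercised on a wrapping interval; `MSI` takes `(bit width, begin, end, stride)` as elsewhere in this notebook, and the interval below is chosen only for this example.

```python
# Added example, illustrative only: a wrapping strided interval over 3 bits.
i = MSI(3, 6, 2, 2)                                  # concretizes to {0, 2, 6}
print(bounds(i))                                     # (6, 10, True): end unwrapped past 2**n, wrap flag set
print([k for k in range(2**3) if contains(i, k)])    # [0, 2, 6]
print(sorted(gamma(i)))                              # reference: the concrete values
```
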
```python
def leq_MSI(i, j, debug=False):
    n, a, b, s = i._tuple_repr()
    m, c, d, t = j._tuple_repr()
    assert n == m, 'bit widths must be equal'
    if s == 0:
        # i contains exactly 1 value
        return contains(j, a)
    elif t == 0:
        # j contains exactly 1 value
        return False
    elif b == (a+s) % 2**n:
        # i contains exactly 2 values
        return contains(j, a) and contains(j, b)
    elif s % t == 0:
        if 2**n % t == 0 and (c-d) % 2**n == t:
            # j represents a residue class of Z/t (=> t | 2**n)
            return (a-c) % t == 0
        else:
            b_ = (b-a) % 2**n
            c_, d_ = (c-a) % 2**n, (d-a) % 2**n
            if d_ < c_ and c_ <= b_:
                # this branch may not return, but continue below [a...d_...c_...b_...]
                e_ = s * (d_ // s)
                f_ = (b_ - s * ((b_-c_) // s)) % s**n
                if (f_-e_) == s:
                    if e_ < s:
                        if contains(j, a) and c_ % t == 0:
                            return True
                    elif contains(j, b) and d_ % t == 0:
                        return True
            if c_ <= d_:
                return c_ == 0 and b_ <= d_
            else:
                return b_ <= d_ and (d_-b_) % t == 0
    else:
        return False
```

```python
bin_rel_test(leq_MSI, lambda i, j: gamma(i) <= gamma(j))
```

    testing MSIs with bit width up to 4 exhaustively
    testing bit width: 1
    testing bit width: 2
    testing bit width: 3
    testing bit width: 4
    - tested 25000 arguments
    - tested 50000 arguments
    - tested 75000 arguments
    - tested 100000 arguments
    - tested 125000 arguments
    - tested 150000 arguments
    - tested 175000 arguments
    - tested 200000 arguments
    - tested 225000 arguments
    - tested 250000 arguments
    - tested 275000 arguments
    - tested 300000 arguments
    - tested 325000 arguments
    testing random MSIs with bit width from 5 to 8
    testing bit width: 5
    - tested 350000 arguments
    testing bit width: 6
    - tested 375000 arguments
    - tested 400000 arguments
    - tested 425000 arguments
    - tested 450000 arguments
    testing bit width: 7
    - tested 475000 arguments
    - tested 500000 arguments
    - tested 525000 arguments
    - tested 550000 arguments
    testing bit width: 8
    - tested 575000 arguments
    - tested 600000 arguments
    - tested 625000 arguments
    succeeded (tested 645567 arguments in total)

```python
lhs = MSI(3, 2, 0, 3)
rhs = MSI(3, 5, 3, 3)
res = leq_MSI(lhs, rhs)
print(f'{lhs} leq {rhs} = {res}')
print(f'{gamma(lhs)} leq {gamma(rhs)} = {gamma(lhs) <= gamma(rhs)}')
leq_MSI(lhs, rhs, debug=True)
```

    3[2, 0]_{3} leq 3[5, 3]_{3} = False
    {0, 2, 5} leq {0, 3, 5} = False

    False

```python
def size(i):
    n, a, b, s = i._tuple_repr()
    if s == 0:
        return 1
    else:
        return ((b-a) % 2**n) // s + 1
```

```python
unary_function_test(size, lambda n, i, s: s == len(gamma(i)), big=True)
```

    testing MSIs with bit width up to 4 exhaustively
    testing bit width: 1
    testing bit width: 2
    testing bit width: 3
    testing bit width: 4
    testing random MSIs with bit width from 5 to 8
    testing bit width: 5
    testing bit width: 6
    testing bit width: 7
    testing bit width: 8
    testing some MSIs with bit width 6
    testing bit width: 6
    testing some MSIs with bit width 8
    testing bit width: 8
    succeeded (tested 8386 arguments in total)

```python
def lub(i, j, debug=False):
    n, a, b, s = i._tuple_repr()
    m, c, d, t = j._tuple_repr()
    assert n == m, 'bit widths must be equal'
    if b == (a+s) % 2**n and s >= 2**(n-1):
        if debug: print('correction 1')
        a, b = b, a
        s = 2**n - s
    if d == (c+t) % 2**n and t >= 2**(n-1):
        if debug: print('correction 2')
        c, d = d, c
        t = 2**n - t
    b_ = (b-a) % 2**n
    c_, d_ = (c-a) % 2**n, (d-a) % 2**n
    if debug:
        print(f'a: {a}, b: {b}, c: {c}, d: {d}')
        print(f'b_: {b_}, c_: {c_}, d_: {d_}')
    if (b_ < c_ and c_ < d_):
        # no overlapping regions
        if debug: print(f'case 0: b_: {b_}, c_: {c_}, d_: {d_}')
        u1 = int(gcd(gcd(s, t), (c-b) % 2**n))
        e1, f1 = a, d
        u2 = int(gcd(gcd(s, t), (a-d) % 2**n))
        e2, f2 = c, b
        opt1 = normalize(MSI(n, e1, f1, u1))
        opt2 = normalize(MSI(n, e2, f2, u2))
        if debug: print(f'opt2: {opt2}, opt1: {opt1}')
        if (size(opt1) < size(opt2)):
            return opt1
        else:
            return opt2
    elif d_ < c_ and c_ <= b_:
        # two overlapping regions
        if debug: print(f'case 1: b_: {b_}, c_: {c_}, d_: {d_}')
        u = int(gcd(gcd(s, t), gcd(c_ if c_ <= d_ else d_, 2**(n-1))))
        e = a % u
        f = (e - u) % 2**n
        return normalize(MSI(n, e, f, u))
    else:
        # one overlapping region
        if debug: print(f'case 2: b_: {b_}, c_: {c_}, d_: {d_}')
        e = a if c_ <= d_ else c
        f = b if d_ < b_ else d
        u = int(gcd(gcd(s, t), (c_ if c_ <= d_ else d_)))
        return normalize(MSI(n, e, f, u))
```
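The next cell is an added sanity check, not part of the original test run: on a single pair of intervals it checks that the join covers the union of both concretizations, using `gamma` and `lub` exactly as defined above (the chosen operands are arbitrary).

```python
# Added spot check: the join must be an upper bound of both arguments.
a = MSI(3, 1, 5, 2)   # {1, 3, 5}
b = MSI(3, 6, 0, 2)   # {0, 6}, wraps around
j = lub(a, b)
print(f'{a} lub {b} = {j}')
print(f'union covered: {gamma(a) | gamma(b) <= gamma(j)}')
```
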
```python
lhs, rhs = MSI(2, 0, 1, 1), MSI(2, 3, 3, 0)
print(f'{lhs}, {rhs}: {gamma(lhs)}, {gamma(rhs)}')
res = lub(lhs, rhs, debug=True)
print(f'{res}: {gamma(res)}')
print()
res = lub(rhs, lhs, debug=True)
print(f'{res}: {gamma(res)}')
```

    1[0, 1]_{2}, 0[3, 3]_{2}: {0, 1}, {3}
    a: 0, b: 1, c: 3, d: 3
    b_: 1, c_: 3, d_: 3
    case 2: b_: 1, c_: 3, d_: 3
    1[0, 3]_{2}: {0, 1, 2, 3}

    a: 3, b: 3, c: 0, d: 1
    b_: 0, c_: 1, d_: 2
    case 0: b_: 0, c_: 1, d_: 2
    opt2: 1[0, 3]_{2}, opt1: 1[3, 1]_{2}
    1[3, 1]_{2}: {0, 1, 3}

```python
bin_fun_test(lub, lambda n, i, j, x: gamma(i) | gamma(j) <= gamma(x) and lub(i, j) == lub(j, i))
```

    testing MSIs with bit width up to 4 exhaustively
    testing bit width: 1
    testing bit width: 2
    f 1[0, 1]_{2} 0[3, 3]_{2}: 1[0, 3]_{2}
    f 3[0, 3]_{2} 0[2, 2]_{2}: 1[0, 3]_{2}
    f 0[0, 0]_{2} 1[1, 2]_{2}: 1[0, 2]_{2}
    f 1[1, 2]_{2} 0[0, 0]_{2}: 1[0, 3]_{2}
    f 1[2, 3]_{2} 0[1, 1]_{2}: 1[0, 3]_{2}
    f 0[3, 3]_{2} 1[0, 1]_{2}: 1[3, 1]_{2}
    f 0[1, 1]_{2} 1[2, 3]_{2}: 1[1, 3]_{2}
    f 0[2, 2]_{2} 3[0, 3]_{2}: 1[2, 0]_{2}

```python
lhs, rhs = MSI(3, 2, 0, 3), MSI(3, 4, 2, 3)
print(f'{lhs}, {rhs}: {gamma(lhs)}, {gamma(rhs)}')
res = lub(lhs, rhs, debug=True)
print(f'{res}: {gamma(res)}')
```

    3[2, 0]_{3}, 3[4, 2]_{3}: {0, 2, 5}, {2, 4, 7}
    a: 2, b: 0, c: 4, d: 2
    b_: 6, c_: 2, d_: 0
    case 1: b_: 6, c_: 2, d_: 0
    1[0, 7]_{3}: {0, 1, 2, 3, 4, 5, 6, 7}

```python
lub(MSI(6, 1, 37, 6), MSI(6, 31, 7, 6))
```

    2[1, 63]_{6}

```python
def as_signed_int(n, k):
    return k if k < 2**(n-1) else k - 2**n
```

```python
def umax_MSI(i):
    n, a, b, s = i._tuple_repr()
    if a <= b:
        return b
    else:
        return 2**n - 1 - ((2**n - 1 - a) % s)
```

```python
def umin_MSI(i):
    n, a, b, s = i._tuple_repr()
    if a <= b:
        return a
    else:
        return b % s
```

```python
def smax_MSI(i):
    n, a, b, s = i._tuple_repr()
    a, b = as_signed_int(n, a), as_signed_int(n, b)
    if a <= b:
        return b % 2**n
    else:
        return 2**(n-1) - 1 - ((2**(n-1) - 1 - a) % s)
```

```python
def smin_MSI(i):
    n, a, b, s = i._tuple_repr()
    a, b = as_signed_int(n, a), as_signed_int(n, b)
    if a <= b:
        return a % 2**n
    else:
        b = b % 2**n
        return (((b + 2**(n-1)) % 2**n % s) - 2**(n-1)) % 2**n
```

```python
smin_MSI(MSI(2, 0, 3, 3))
```

    3

```python
def ustride(i):
    n, a, b, s = i._tuple_repr()
    if a <= b:
        return s
    else:
        return int(gcd(s, a-b))
```

```python
def sstride(i):
    n, a, b, s = i._tuple_repr()
    if as_signed_int(n, a) <= as_signed_int(n, b):
        return s
    else:
        return int(gcd(s, as_signed_int(n, a)-as_signed_int(n, b)))
```

```python
def pos_min(i):
    n = i.bit_width
    m = umin_MSI(i)
    if m < 2**(n-1):
        return m
    else:
        return None
```

```python
def neg_max(i):
    n = i.bit_width
    m = umax_MSI(i)
    if m < 2**(n-1):
        return None
    else:
        return m
```

```python
def sabsmin(i):
    n, a, b, s = i._tuple_repr()
    a_, b_ = as_signed_int(n, a), as_signed_int(n, b)
    if s == 0:
        return a
    elif b_ < 0:
        return b
    elif 0 < a_:
        return a
    else:
        x = a % s
        y = x - s
        return x if x <= y else y
```

```python
def absmax(i):
    n = i.bit_width
    a, b = as_signed_int(n, i.begin), as_signed_int(n, i.end)
    return b if abs(a) <= abs(b) else a
```

```python
unary_function_test(umax_MSI, lambda n, i, k: max(gamma(i)) == k, big=True)
```

    testing MSIs with bit width up to 4 exhaustively
    testing bit width: 1
    testing bit width: 2
    testing bit width: 3
    testing bit width: 4
    testing random MSIs with bit width from 5 to 8
    testing bit width: 5
    testing bit width: 6
    testing bit width: 7
    testing bit width: 8
    testing some MSIs with bit width 6
    testing bit width: 6
    testing some MSIs with bit width 8
    testing bit width: 8
    succeeded (tested 8386 arguments in total)

```python
unary_function_test(umin_MSI, lambda n, i, k: min(gamma(i)) == k, big=True)
```

    testing MSIs with bit width up to 4 exhaustively
    testing bit width: 1
    testing bit width: 2
    testing bit width: 3
    testing bit width: 4
    testing random MSIs with bit width from 5 to 8
    testing bit width: 5
    testing bit width: 6
    testing bit width: 7
    testing bit width: 8
    testing some MSIs with bit width 6
    testing bit width: 6
    testing some MSIs with bit width 8
    testing bit width: 8
    succeeded (tested 8386 arguments in total)
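Added illustration (not from the original notebook): on a wrapping interval the unsigned and signed bound helpers give different views, which is exactly the distinction they are meant to capture; the interval below is chosen only for this example.

```python
# Added example: unsigned vs. signed bounds of a wrapping 4-bit interval.
i = MSI(4, 12, 4, 4)   # {0, 4, 12}, i.e. {-4, 0, 4} when read as signed
print(f'unsigned bounds: [{umin_MSI(i)}, {umax_MSI(i)}]')                                    # [0, 12]
print(f'signed bounds: [{as_signed_int(4, smin_MSI(i))}, {as_signed_int(4, smax_MSI(i))}]')  # [-4, 4]
```
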
```python
unary_function_test(smax_MSI, lambda n, i, k: max(map(lambda k: as_signed_int(n, k), gamma(i))) % 2**n == k, big=True)
```

    testing MSIs with bit width up to 4 exhaustively
    testing bit width: 1
    testing bit width: 2
    testing bit width: 3
    testing bit width: 4
    testing random MSIs with bit width from 5 to 8
    testing bit width: 5
    testing bit width: 6
    testing bit width: 7
    testing bit width: 8
    testing some MSIs with bit width 6
    testing bit width: 6
    testing some MSIs with bit width 8
    testing bit width: 8
    succeeded (tested 8386 arguments in total)

```python
unary_function_test(smin_MSI, lambda n, i, k: min(map(lambda k: as_signed_int(n, k), gamma(i))) % 2**n == k, big=True)
```

    testing MSIs with bit width up to 4 exhaustively
    testing bit width: 1
    testing bit width: 2
    testing bit width: 3
    testing bit width: 4
    testing random MSIs with bit width from 5 to 8
    testing bit width: 5
    testing bit width: 6
    testing bit width: 7
    testing bit width: 8
    testing some MSIs with bit width 6
    testing bit width: 6
    testing some MSIs with bit width 8
    testing bit width: 8
    succeeded (tested 8386 arguments in total)

```python
def as_unsigned(i):
    n, a, b, s = i._tuple_repr()
    if a <= b:
        return MSI(n, a, b, s)
    else:
        t = int(gcd(s, (a-b) & 2**n))
        c = a % t
        d = (c-t) % 2**n
        return MSI(n, c, d, t)
```

## Implementation of Operations

```python
def add(i, j):
    n, a, b, s = i._tuple_repr()
    m, c, d, t = j._tuple_repr()
    assert n == m, 'bit widths must be equal'
    u = int(gcd(s, t))
    b_ = b if a <= b else b + 2**n
    d_ = d if c <= d else d + 2**n
    e, f = a+c, b_+d_
    if f-e < 2**n:
        u_ = u
        e_, f_ = e % 2**n, f % 2**n
    else:
        u_ = int(gcd(u, 2**n))
        e_ = e % 2**n
        f_ = (e_-u_) % 2**n
    return normalize(MSI(n, e_, f_, u_))
```

```python
bin_op_test(add, lambda n, a, b: (a+b) % 2**n)
```

    testing MSIs with bit width up to 4 exhaustively
    testing bit width: 1
    testing bit width: 2
    testing bit width: 3
    testing bit width: 4
    - tested 25000 arguments
    - tested 50000 arguments
    - tested 75000 arguments
    - tested 100000 arguments
    - tested 125000 arguments
    - tested 150000 arguments
    - tested 175000 arguments
    - tested 200000 arguments
    - tested 225000 arguments
    - tested 250000 arguments
    - tested 275000 arguments
    - tested 300000 arguments
    - tested 325000 arguments
    testing random MSIs with bit width from 5 to 8
    testing bit width: 5
    - tested 350000 arguments
    testing bit width: 6
    - tested 375000 arguments
    - tested 400000 arguments
    - tested 425000 arguments
    - tested 450000 arguments
    testing bit width: 7
    - tested 475000 arguments
    - tested 500000 arguments
    - tested 525000 arguments
    testing bit width: 8
    - tested 550000 arguments
    - tested 575000 arguments
    - tested 600000 arguments
    - tested 625000 arguments
    - tested 650000 arguments
    - tested 675000 arguments
    succeeded (tested 684701 arguments in total)
    arguments with least precise results:
    133[8, 141]_{8}, 133[8, 141]_{8}: 4.57763671875e-05
    133[8, 141]_{8}, 123[24, 147]_{8}: 4.57763671875e-05
    133[8, 141]_{8}, 123[48, 171]_{8}: 4.57763671875e-05
    123[24, 147]_{8}, 133[8, 141]_{8}: 4.57763671875e-05
    187[53, 240]_{8}, 187[53, 240]_{8}: 4.57763671875e-05
    117[56, 173]_{8}, 139[115, 254]_{8}: 4.57763671875e-05
    117[56, 173]_{8}, 139[101, 240]_{8}: 4.57763671875e-05
    17[101, 118]_{8}, 239[15, 254]_{8}: 4.57763671875e-05

```python
def sub(i, j):
    n, a, b, s = i._tuple_repr()
    m, c, d, t = j._tuple_repr()
    assert n == m, 'bit widths must be equal'
    u = int(gcd(s, t))
    b_ = b if a <= b else b + 2**n
    d_ = d if c <= d else d + 2**n
    e, f = a-d_, b_-c
    if f-e < 2**n:
        u_ = u
        e_, f_ = e % 2**n, f % 2**n
    else:
        u_ = int(gcd(u, 2**n))
        e_ = e % 2**n
        f_ = (e_-u_) % 2**n
    return normalize(MSI(n, e_, f_, u_))
```

```python
bin_op_test(sub, lambda n, a, b: (a-b) % 2**n)
```

    testing MSIs with bit width up to 4 exhaustively
    testing bit width: 1
    testing bit width: 2
    testing bit width: 3
    testing bit width: 4
    - tested 25000 arguments
    - tested 50000 arguments
    - tested 75000 arguments
    - tested 100000 arguments
    - tested 125000 arguments
    - tested 150000 arguments
    - tested 175000 arguments
    - tested 200000 arguments
    - tested 225000 arguments
    - tested 250000 arguments
    - tested 275000 arguments
    - tested 300000 arguments
    - tested 325000 arguments
    testing random MSIs with bit width from 5 to 8
    testing bit width: 5
    - tested 350000 arguments
    testing bit width: 6
    - tested 375000 arguments
    - tested 400000 arguments
    - tested 425000 arguments
    - tested 450000 arguments
    testing bit width: 7
    - tested 475000 arguments
    - tested 500000 arguments
    - tested 525000 arguments
    testing bit width: 8
    - tested 550000 arguments
    - tested 575000 arguments
    - tested 600000 arguments
    - tested 625000 arguments
    - tested 650000 arguments
    - tested 675000 arguments
    succeeded (tested 684701 arguments in total)
    arguments with least precise results:
    133[8, 141]_{8}, 133[8, 141]_{8}: 4.57763671875e-05
    133[8, 141]_{8}, 123[24, 147]_{8}: 4.57763671875e-05
    133[8, 141]_{8}, 123[48, 171]_{8}: 4.57763671875e-05
    123[24, 147]_{8}, 133[8, 141]_{8}: 4.57763671875e-05
    187[53, 240]_{8}, 187[53, 240]_{8}: 4.57763671875e-05
    117[56, 173]_{8}, 139[115, 254]_{8}: 4.57763671875e-05
    117[56, 173]_{8}, 139[101, 240]_{8}: 4.57763671875e-05
    17[101, 118]_{8}, 239[15, 254]_{8}: 4.57763671875e-05

```python
def mul(i, j, debug=False):
    n, a, b, s = i._tuple_repr()
    m, c, d, t = j._tuple_repr()
    assert n == m, 'bit widths must be equal'
    m = 2**n
    u = int(gcd(a, s)) * int(gcd(c, t))
    b_ = b if a <= b else b + m
    d_ = d if c <= d else d + m
    e, f = a*c, b_*d_
    if f-e < m:
        u_ = u
        e_, f_ = e % m, f % m
    else:
        u_ = int(gcd(u, m))
        e_ = e % m
        f_ = (e_-u_) % m
    if debug:
        print(f'u: {u}, e: {e}, f: {f}, u_: {u_}, e_: {e_}, f_: {f_}')
    return normalize(MSI(n, e_, f_, u_))
```

```python
def urem(i, j, debug=False):
    n, _, _, s = i._tuple_repr()
    m, _, _, t = j._tuple_repr()
    assert n == m, 'bit widths must be equal'
    a, b = umin_MSI(i), umax_MSI(i)
    c, d = umin_MSI(j), umax_MSI(j)
    s, t = ustride(i), ustride(j)
    if c == 0:
        if t == 0:
            if debug: print('case 1')
            return MSI(n, 0, (-1) % 2**n, 1)
        else:
            c = t
    if b < c:
        if debug: print('case 2')
        return i
    elif t == 0:
        if a//c == b//c:
            if debug: print('case 3.1')
            return normalize(MSI(n, a % c, b % c, s))
        else:
            if debug: print('case 3.2')
            u = int(gcd(s, c))
            return normalize(MSI(n, a % u, c-1, u))
    else:
        if debug: print('case 4')
        u = int(gcd(gcd(c, t), s))
        return normalize(MSI(n, a % u, min(b, d-1), u))
```

```python
bin_op_test(urem, lambda n, a, b: (a % b) % 2**n, big=False, non_zero=True)
```
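The following spot check is an addition (not one of the original cells): it compares `urem` on one pair of intervals against the full set of concrete remainders; the operands are arbitrary and the divisor interval avoids 0.

```python
# Added spot check: the abstract urem should cover every concrete remainder.
i = MSI(4, 2, 14, 4)   # {2, 6, 10, 14}
j = MSI(4, 3, 5, 2)    # {3, 5}
r = urem(i, j)
conc = {k % l for k in gamma(i) for l in gamma(j)}
print(f'{i} urem {j} = {r}: {gamma(r)}')
print(f'concrete remainders {conc} covered: {conc <= gamma(r)}')
```
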
```python
def udiv(i, j, debug=False):
    n, _, _, _ = i._tuple_repr()
    m, _, _, t = j._tuple_repr()
    assert n == m, 'bit widths must be equal'
    a, b = umin_MSI(i), umax_MSI(i)
    c, d = umin_MSI(j), umax_MSI(j)
    s = ustride(i)
    m = 2**n
    if c == 0:
        if t == 0:
            return MSI(n, 0, (-1) % 2**n, 1)
        else:
            c = ustride(j)
    s_ = int(gcd(a, s))
    if t == 0:
        u = s_ // c
        u = u if u*c == s_ else 1
        return normalize(MSI(n, a//c, b//c, u))
    else:
        e, f = a//d, b//c
        return normalize(MSI(n, e, f, 1))
```

```python
lhs, rhs = MSI(3, 1, 5, 1), MSI(3, 2, 0, 3)
print(f'{lhs}, {rhs}: {gamma(lhs)}, {gamma(rhs)}')
res = udiv(lhs, rhs, debug=True)
print(f'{res}: {gamma(res)}')
```

    1[1, 5]_{3}, 3[2, 0]_{3}: {1, 2, 3, 4, 5}, {0, 2, 5}
    1[0, 5]_{3}: {0, 1, 2, 3, 4, 5}

```python
bin_op_test(udiv, lambda n, a, b: (a // b), big=False, non_zero=True)
```

    testing MSIs with bit width up to 4 exhaustively
    testing bit width: 1
    testing bit width: 2
    testing bit width: 3
    testing bit width: 4
    - tested 25000 arguments
    - tested 50000 arguments
    - tested 75000 arguments
    - tested 100000 arguments
    - tested 125000 arguments
    - tested 150000 arguments
    - tested 175000 arguments
    - tested 200000 arguments
    - tested 225000 arguments
    - tested 250000 arguments
    - tested 275000 arguments
    - tested 300000 arguments
    - tested 325000 arguments
    testing random MSIs with bit width from 5 to 8
    testing bit width: 5
    - tested 350000 arguments
    testing bit width: 6
    - tested 375000 arguments
    - tested 400000 arguments
    - tested 425000 arguments
    testing bit width: 7
    - tested 450000 arguments
    - tested 475000 arguments
    - tested 500000 arguments
    - tested 525000 arguments
    testing bit width: 8
    - tested 550000 arguments
    - tested 575000 arguments
    - tested 600000 arguments
    - tested 625000 arguments
    succeeded (tested 647793 arguments in total)
    arguments with least precise results:
    74[178, 252]_{8}, 173[1, 174]_{8}: 4.650297619047619e-05
    0[178, 178]_{8}, 173[1, 174]_{8}: 4.3890449438202246e-05
    0[178, 178]_{8}, 220[1, 221]_{8}: 4.3645251396648046e-05
    0[174, 174]_{8}, 173[1, 174]_{8}: 4.489942528735632e-05
    0[174, 174]_{8}, 220[1, 221]_{8}: 4.464285714285714e-05
    0[221, 221]_{8}, 173[1, 174]_{8}: 3.535067873303168e-05
    0[221, 221]_{8}, 220[1, 221]_{8}: 3.535067873303168e-05
    174[0, 174]_{8}, 220[1, 221]_{8}: 4.464285714285714e-05

```python
def rem(k, n):
    assert not n == 0, 'remainder by 0'
    if k > 0:
        return k % abs(n)
    else:
        return -(abs(k) % abs(n))
```

```python
def div(k, n):
    assert not n == 0, 'division by 0'
    if k > 0:
        return k // n
    else:
        return -(abs(k) // n)
```

```python
def smin(n, k, l):
    k_, l_ = as_signed_int(n, k), as_signed_int(n, l)
    if k_ <= l_:
        return k_
    else:
        return l_

def smax(n, k, l):
    k_, l_ = as_signed_int(n, k), as_signed_int(n, l)
    if k_ >= l_:
        return k_
    else:
        return l_
```

```python
def srem(i, j, debug=False):
    n, m = i.bit_width, j.bit_width
    assert n == m, 'bit widths must be equal'
    a, b = as_signed_int(n, smin_MSI(i)), as_signed_int(n, smax_MSI(i))
    c, d = as_signed_int(n, smin_MSI(j)), as_signed_int(n, smax_MSI(j))
    s, t = sstride(i), sstride(j)
    if debug: print(f'a: {a}, b: {b}, c: {c}, d: {d}, s: {s}, t: {t}')
    if d < 0:
        if debug: print('all negative')
        c, d = -d, -c
    elif c < 0:
        if debug: print('some negative')
        t_ = (d+c) % t
        c, d = min(-c % t, d % t), max(-c, d)
        t = gcd(t, t_)
    if debug: print(f'a: {a}, b: {b}, c: {c}, d: {d}, s: {s}, t: {t}')
    if c == 0:
        # remainder by bound not possible
        if t == 0:
            # definite remainder by 0
            if debug: print('case 1')
            return MSI(n, 0, (-1) % 2**n, 1)
        else:
            # correct bound to avoid remainder by 0
            if debug: print('avoid 0')
            c = c+t
            # renormalize
            if c == d:
                if debug: print('renormalize')
                t = 0
    if debug: print(f'a: {a}, b: {b}, c: {c}, d: {d}, s: {s}, t: {t}')
    absMaxI = max(abs(a), abs(b))
    if absMaxI < c:
        # remainder has no effect
        if debug: print('case 2')
        return i
    elif t == 0:
        # remainder by constant
        if div(a, c) == div(b, c):
            # E x. x*c <= a <= b < (x+1)*c
            if debug: print('case 3')
            return normalize(MSI(n, rem(a, c) % 2**n, rem(b, c) % 2**n, s))
    if debug: print(f'case 5')
    u = int(gcd(gcd(c, t), s))
    e = a % u if 0 < a else max(a, 1-d + (a+d-1) % u)
    f = min(b, d-1) if 0 < b else (e-1) % u + 1 - u
    if debug: print(f'u: {u}, e: {e}, f: {f}, {a % u if 0 < a else 1-d + (a-d+1) % u}, {d-1 if 0 < b else (e-1) % u + 1 - u}')
    return normalize(MSI(n, e % 2**n, f % 2**n, u))
```

```python
srem(MSI(3, 5, 6, 1), MSI(3, 2, 2, 0), debug=True)
```

    a: -3, b: -2, c: 2, d: 2, s: 1, t: 0
    a: -3, b: -2, c: 2, d: 2, s: 1, t: 0
    a: -3, b: -2, c: 2, d: 2, s: 1, t: 0
    case 3

    7[0, 7]_{3}

```python
def srem_cases(i, j):
    p = []
    n, m = i.bit_width, j.bit_width
    assert n == m, 'bit widths must be equal'
    a, b = as_signed_int(n, smin_MSI(i)), as_signed_int(n, smax_MSI(i))
    c, d = as_signed_int(n, smin_MSI(j)), as_signed_int(n, smax_MSI(j))
    s, t = sstride(i), sstride(j)
    if append(p, d < 0) and d < 0:
        c, d = -d, -c
    elif append(p, c < 0) and c < 0:
        t_ = (d+c) % t
        append(p, -c % t <= d % t)
        append(p, -c <= d)
        c, d = min(-c % t, d % t), max(-c, d)
        t = gcd(t, t_)
    if append(p, c == 0) and c == 0:
        # remainder by bound not possible
        if append(p, t == 0) and t == 0:
            # definite remainder by 0
            return MSI(n, 0, (-1) % 2**n, 1), p
        else:
            # correct bound to avoid remainder by 0
            c = c+t
            # renormalize
            if append(p, c == d) and c == d:
                t = 0
    append(p, a >= 0)
    append(p, a >= b)
    append(p, abs(a) >= abs(b))
    absMaxI = max(abs(a), abs(b))
    if append(p, absMaxI < c) and absMaxI < c:
        # remainder has no effect
        return i, p
    elif append(p, t == 0) and t == 0:
        # remainder by constant
        if append(p, div(a, c) == div(b, c)) and div(a, c) == div(b, c):
            # E x. x*c <= a <= b < (x+1)*c
            return normalize(MSI(n, rem(a, c) % 2**n, rem(b, c) % 2**n, s)), p
    u = int(gcd(gcd(c, t), s))
    if append(p, 0 < a) and 0 < a:
        e = a % u
    else:
        append(p, a >= 1-d + (a+d-1) % u)
        e = max(a, 1-d + (a+d-1) % u)
    if append(p, 0 < b) and 0 < b:
        append(p, b >= d-1)
        f = min(b, d-1)
    else:
        f = (e-1) % u + 1 - u
    return normalize(MSI(n, e % 2**n, f % 2**n, u)), p
```

```python
srem_cases(MSI(4, 2, 5, 3), MSI(4, 15, 3, 2))
```

    (1[0, 2]_{4}, [False, True, True, True, False, True, False, False, False, False, True, True, True])

```python
def append(xs, x):
    xs.append(x)
    return True
```

```python
def add_case(cases, path, ex):
    if cases is None:
        if len(path) == 0:
            return (True, True, ex)
        else:
            return (False, False, {path[0]: add_case(None, path[1:], ex), not path[0]: None})
    else:
        fin0, a0, cs0 = cases
        if fin0:
            return cases
        else:
            b = path[0]
            fin1, a1, cs1 = add_case(cs0[b], path[1:], ex)
            fin = cs0[not b] is not None and cs0[not b][0] and fin1
            return (fin, a0, {b: (fin1, a1, cs1), not b: cs0[not b]})
```

```python
def gen_test_cases(f):
    cases = None
    for n in range(1, 4+1):
        for i in test_MSIs_6_exhaustive[n]:
            for j in test_MSIs_6_exhaustive[n]:
                _, path = f(i, j)
                cases = add_case(cases, path, (i, j))
                if cases[0]:
                    return cases
    return cases
```

```python
def get_cases(cases):
    if cases is None:
        return []
    elif cases[1]:
        return [cases[2]]
    else:
        return get_cases(cases[2][True]) + get_cases(cases[2][False])
```

```python
def find_unreachable_pathes(cases):
    if cases is None:
        return [[]]
    elif cases[0]:
        return []
    r = []
    for b in [True, False]:
        if cases[2][b] is None:
            r += [[b]]
        else:
            r += [[b]+p for p in find_unreachable_pathes(cases[2][b])]
    return r
```

```python
cases = gen_test_cases(srem_cases)
```

```python
cs = get_cases(cases)
```

```python
len(cs)
```

    267

```python
cs[0]
```

    (0[0, 0]_{1}, 0[1, 1]_{1})

```python
srem(MSI(2, 2, 2, 0), MSI(2, 2, 3, 
1)) ``` 3[0, 3]_{2} ```python for lhs, rhs in cs: n, a, b, s = lhs._tuple_repr() _, c, d, t = rhs._tuple_repr() ref = srem(lhs, rhs) _, e, f, u = ref._tuple_repr() print(f'lhs = {{{n}, {a}, {b}, {s}}}; rhs = {{{n}, {c}, {d}, {t}}}; ref = {{{n}, {e}, {f}, {u}}};') print(f'res_p = lhs.srem({n}, rhs);'); print(f'res = *(static_cast<StridedInterval *>(res_p.get()));') print(f'if (res != ref) {{') print(f' errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\\n";') print(f'}}') ``` lhs = {1, 0, 0, 0}; rhs = {1, 1, 1, 0}; ref = {1, 0, 0, 0}; res_p = lhs.srem(1, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 1, 1, 0}; rhs = {2, 3, 3, 0}; ref = {2, 0, 0, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 1, 1, 0}; rhs = {2, 2, 3, 1}; ref = {2, 0, 1, 1}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 1, 0}; rhs = {3, 5, 7, 1}; ref = {3, 0, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 0, 1, 1}; rhs = {2, 2, 2, 0}; ref = {2, 0, 1, 1}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 2, 3, 1}; rhs = {3, 6, 6, 0}; ref = {3, 0, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 3, 1}; rhs = {3, 7, 7, 0}; ref = {3, 0, 0, 0}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 0, 1, 1}; rhs = {2, 3, 3, 0}; ref = {2, 0, 0, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 3, 1}; rhs = {3, 5, 7, 1}; ref = {3, 0, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 2, 1}; rhs = {3, 4, 6, 1}; ref = {3, 0, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 0, 1, 1}; rhs = {2, 2, 3, 1}; ref = {2, 0, 1, 1}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << 
rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 0, 1, 1}; rhs = {3, 5, 7, 1}; ref = {3, 0, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 3, 3, 0}; rhs = {2, 2, 2, 0}; ref = {2, 3, 3, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {1, 1, 1, 0}; rhs = {1, 1, 1, 0}; ref = {1, 0, 0, 0}; res_p = lhs.srem(1, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 3, 3, 0}; rhs = {2, 2, 3, 1}; ref = {2, 0, 3, 3}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 2, 2, 0}; rhs = {2, 2, 3, 1}; ref = {2, 0, 3, 3}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 0, 3, 3}; rhs = {2, 2, 2, 0}; ref = {2, 0, 3, 3}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 5, 6, 1}; rhs = {3, 6, 6, 0}; ref = {3, 0, 7, 7}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 0, 2, 1}; rhs = {2, 3, 3, 0}; ref = {2, 0, 0, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 5, 1, 2}; rhs = {3, 5, 5, 0}; ref = {3, 6, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {1, 0, 1, 1}; rhs = {1, 1, 1, 0}; ref = {1, 0, 0, 0}; res_p = lhs.srem(1, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 1, 3, 2}; rhs = {2, 2, 3, 1}; ref = {2, 3, 1, 1}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 6, 5}; rhs = {3, 5, 7, 1}; ref = {3, 6, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 0, 3, 3}; rhs = {2, 2, 3, 1}; ref = {2, 0, 3, 3}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res 
!= ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 0, 2, 1}; rhs = {2, 2, 3, 1}; ref = {2, 3, 1, 1}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 5, 1, 2}; rhs = {3, 5, 7, 1}; ref = {3, 6, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 2, 0, 1}; rhs = {2, 2, 3, 1}; ref = {2, 0, 3, 3}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 3, 1}; rhs = {3, 4, 4, 0}; ref = {3, 6, 3, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 3, 1}; rhs = {3, 5, 5, 0}; ref = {3, 6, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 3, 1}; rhs = {3, 7, 7, 0}; ref = {3, 0, 0, 0}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 3, 1}; rhs = {3, 5, 7, 1}; ref = {3, 6, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 7, 2, 1}; rhs = {3, 4, 6, 1}; ref = {3, 7, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 3, 1}; rhs = {3, 6, 7, 1}; ref = {3, 7, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 0, 0, 0}; rhs = {2, 3, 1, 1}; ref = {2, 0, 0, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 1, 1, 0}; rhs = {2, 3, 1, 1}; ref = {2, 0, 0, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 0, 1, 1}; rhs = {3, 5, 3, 3}; ref = {3, 0, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 2, 3, 1}; rhs = {3, 6, 2, 2}; ref = {3, 0, 1, 1}; res_p = 
lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 3, 1}; rhs = {3, 7, 1, 1}; ref = {3, 0, 0, 0}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 0, 1, 1}; rhs = {2, 3, 1, 1}; ref = {2, 0, 0, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 7, 7, 0}; rhs = {3, 5, 3, 3}; ref = {3, 7, 7, 0}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 3, 3, 0}; rhs = {2, 3, 1, 1}; ref = {2, 0, 0, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 6, 5}; rhs = {3, 5, 3, 3}; ref = {3, 1, 6, 5}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 5, 6, 1}; rhs = {3, 6, 2, 2}; ref = {3, 0, 7, 7}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 0, 2, 1}; rhs = {2, 3, 1, 1}; ref = {2, 0, 0, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 5, 1, 2}; rhs = {3, 5, 3, 3}; ref = {3, 6, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 0, 3, 3}; rhs = {2, 3, 1, 1}; ref = {2, 0, 0, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 7, 2, 1}; rhs = {3, 5, 3, 3}; ref = {3, 7, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 3, 1}; rhs = {3, 5, 3, 3}; ref = {3, 6, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 3, 1}; rhs = {3, 7, 1, 1}; ref = {3, 0, 0, 0}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << 
"\n"; } lhs = {3, 0, 0, 0}; rhs = {3, 5, 3, 1}; ref = {3, 0, 0, 0}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 3, 3, 0}; rhs = {3, 5, 3, 1}; ref = {3, 0, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 1, 0}; rhs = {3, 5, 3, 1}; ref = {3, 0, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 0, 2, 1}; rhs = {4, 10, 6, 3}; ref = {4, 0, 2, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 3, 1}; rhs = {3, 5, 3, 1}; ref = {3, 0, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 1, 3, 1}; rhs = {4, 9, 7, 1}; ref = {4, 0, 3, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 0, 3, 1}; rhs = {3, 5, 3, 1}; ref = {3, 0, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 0, 1, 1}; rhs = {3, 5, 3, 1}; ref = {3, 0, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 14, 14, 0}; rhs = {4, 10, 6, 3}; ref = {4, 14, 14, 0}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 7, 7, 0}; rhs = {3, 5, 3, 1}; ref = {3, 0, 7, 7}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 6, 0}; rhs = {3, 7, 2, 1}; ref = {3, 0, 7, 7}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 1, 14, 13}; rhs = {4, 10, 6, 3}; ref = {4, 1, 14, 13}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 2, 1}; rhs = {3, 5, 3, 1}; ref = {3, 6, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed 
with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 6, 5}; rhs = {3, 5, 3, 1}; ref = {3, 6, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 0, 6, 6}; rhs = {3, 5, 3, 1}; ref = {3, 6, 0, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 5, 1}; rhs = {3, 5, 3, 1}; ref = {3, 6, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 5, 1, 2}; rhs = {3, 5, 3, 1}; ref = {3, 6, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 5, 7, 1}; rhs = {3, 5, 3, 1}; ref = {3, 6, 0, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 15, 2, 1}; rhs = {4, 10, 6, 3}; ref = {4, 15, 2, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 3, 1}; rhs = {3, 5, 3, 1}; ref = {3, 6, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 3, 15, 12}; rhs = {4, 9, 7, 1}; ref = {4, 15, 3, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 3, 1}; rhs = {3, 7, 2, 1}; ref = {3, 7, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 0, 0, 0}; rhs = {2, 1, 3, 2}; ref = {2, 0, 0, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 1, 1, 0}; rhs = {2, 1, 3, 2}; ref = {2, 0, 0, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 1, 0}; rhs = {3, 3, 7, 2}; ref = {3, 0, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 0, 1, 1}; rhs = {3, 2, 6, 4}; ref = {3, 0, 1, 1}; res_p = lhs.srem(3, rhs); res = 
*(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 3, 1}; rhs = {3, 3, 7, 2}; ref = {3, 0, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 1, 3, 1}; rhs = {4, 1, 15, 2}; ref = {4, 0, 3, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 0, 1, 1}; rhs = {2, 1, 3, 2}; ref = {2, 0, 0, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 0, 1, 1}; rhs = {3, 3, 7, 2}; ref = {3, 0, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 7, 7, 0}; rhs = {3, 2, 6, 4}; ref = {3, 7, 7, 0}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 7, 7, 0}; rhs = {3, 3, 7, 2}; ref = {3, 0, 7, 7}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 3, 3, 0}; rhs = {2, 1, 3, 2}; ref = {2, 0, 0, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 7, 1, 1}; rhs = {3, 2, 6, 4}; ref = {3, 7, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 2, 1}; rhs = {3, 3, 7, 2}; ref = {3, 6, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 6, 5}; rhs = {3, 3, 7, 2}; ref = {3, 6, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 0, 6, 6}; rhs = {3, 3, 7, 2}; ref = {3, 6, 0, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 0, 2, 1}; rhs = {2, 1, 3, 2}; ref = {2, 0, 0, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 5, 1, 
2}; rhs = {3, 3, 7, 2}; ref = {3, 6, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 0, 3, 3}; rhs = {2, 1, 3, 2}; ref = {2, 0, 0, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 3, 15, 12}; rhs = {4, 4, 12, 8}; ref = {4, 3, 15, 12}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 3, 1}; rhs = {3, 3, 7, 2}; ref = {3, 6, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 3, 15, 12}; rhs = {4, 1, 15, 2}; ref = {4, 15, 3, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 3, 1}; rhs = {3, 1, 7, 6}; ref = {3, 0, 0, 0}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {1, 0, 0, 0}; rhs = {1, 0, 1, 1}; ref = {1, 0, 0, 0}; res_p = lhs.srem(1, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 1, 1, 0}; rhs = {2, 0, 3, 3}; ref = {2, 0, 0, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 0, 1, 1}; rhs = {2, 0, 2, 2}; ref = {2, 0, 1, 1}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 2, 3, 1}; rhs = {3, 0, 6, 6}; ref = {3, 0, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 3, 1}; rhs = {3, 0, 6, 6}; ref = {3, 0, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 0, 1, 1}; rhs = {2, 0, 3, 3}; ref = {2, 0, 0, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 3, 3, 0}; rhs = {2, 0, 2, 2}; ref = {2, 3, 3, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << 
", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {1, 1, 1, 0}; rhs = {1, 0, 1, 1}; ref = {1, 0, 0, 0}; res_p = lhs.srem(1, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 0, 3, 3}; rhs = {2, 0, 2, 2}; ref = {2, 0, 3, 3}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 5, 6, 1}; rhs = {3, 0, 6, 6}; ref = {3, 0, 7, 7}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 0, 2, 1}; rhs = {2, 0, 3, 3}; ref = {2, 0, 0, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 2, 4, 2}; rhs = {3, 0, 4, 4}; ref = {3, 6, 2, 2}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {1, 0, 1, 1}; rhs = {1, 0, 1, 1}; ref = {1, 0, 0, 0}; res_p = lhs.srem(1, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 3, 1}; rhs = {3, 0, 4, 4}; ref = {3, 6, 3, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 7, 3, 1}; rhs = {3, 0, 6, 6}; ref = {3, 7, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 3, 1}; rhs = {3, 0, 6, 6}; ref = {3, 7, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 0, 0, 0}; rhs = {2, 0, 2, 1}; ref = {2, 0, 0, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 1, 1, 0}; rhs = {2, 0, 2, 1}; ref = {2, 0, 1, 1}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 1, 0}; rhs = {3, 1, 5, 1}; ref = {3, 0, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 0, 1, 1}; rhs = {3, 2, 4, 2}; ref = {3, 0, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); 
if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 3, 1}; rhs = {3, 1, 5, 1}; ref = {3, 0, 3, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 2, 1}; rhs = {3, 1, 5, 1}; ref = {3, 0, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 0, 1, 1}; rhs = {2, 0, 2, 1}; ref = {2, 0, 1, 1}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 0, 1, 1}; rhs = {3, 1, 5, 1}; ref = {3, 0, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 7, 7, 0}; rhs = {3, 2, 4, 2}; ref = {3, 7, 7, 0}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 3, 3, 0}; rhs = {2, 0, 2, 1}; ref = {2, 0, 3, 3}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 2, 2, 0}; rhs = {2, 0, 2, 1}; ref = {2, 0, 3, 3}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 7, 1, 1}; rhs = {3, 2, 4, 2}; ref = {3, 7, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 1, 3, 2}; rhs = {2, 0, 2, 1}; ref = {2, 3, 1, 1}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 6, 5}; rhs = {3, 1, 5, 1}; ref = {3, 6, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 0, 3, 3}; rhs = {2, 0, 2, 1}; ref = {2, 0, 3, 3}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 0, 2, 1}; rhs = {2, 0, 2, 1}; ref = {2, 3, 1, 1}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 5, 1, 2}; rhs = {3, 2, 0, 3}; ref = {3, 6, 1, 1}; res_p = 
lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 2, 0, 1}; rhs = {2, 0, 2, 1}; ref = {2, 0, 3, 3}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 3, 15, 12}; rhs = {4, 4, 8, 4}; ref = {4, 3, 15, 12}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 3, 1}; rhs = {3, 1, 5, 1}; ref = {3, 6, 3, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 7, 2, 1}; rhs = {3, 1, 5, 1}; ref = {3, 7, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 3, 1}; rhs = {3, 1, 6, 5}; ref = {3, 7, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 0, 0, 0}; rhs = {3, 5, 1, 2}; ref = {3, 0, 0, 0}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 3, 3, 0}; rhs = {3, 5, 1, 2}; ref = {3, 0, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 1, 0}; rhs = {3, 5, 1, 2}; ref = {3, 0, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 0, 1, 1}; rhs = {4, 9, 3, 5}; ref = {4, 0, 1, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 3, 1}; rhs = {3, 5, 1, 2}; ref = {3, 0, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 2, 1}; rhs = {3, 4, 2, 3}; ref = {3, 0, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 0, 3, 1}; rhs = {3, 5, 1, 2}; ref = {3, 0, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << 
"\n"; } lhs = {3, 0, 1, 1}; rhs = {3, 5, 1, 2}; ref = {3, 0, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 15, 15, 0}; rhs = {4, 9, 3, 5}; ref = {4, 15, 15, 0}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 7, 7, 0}; rhs = {3, 5, 1, 2}; ref = {3, 0, 7, 7}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 5, 5, 0}; rhs = {3, 5, 1, 2}; ref = {3, 6, 0, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 1, 15, 14}; rhs = {4, 9, 3, 5}; ref = {4, 1, 15, 14}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 5, 3, 1}; rhs = {3, 4, 2, 3}; ref = {3, 5, 3, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 6, 5}; rhs = {3, 5, 1, 2}; ref = {3, 6, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 5, 7, 1}; rhs = {3, 4, 2, 3}; ref = {3, 5, 0, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 5, 1}; rhs = {3, 5, 1, 2}; ref = {3, 6, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 5, 1, 2}; rhs = {3, 5, 1, 2}; ref = {3, 6, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 5, 7, 1}; rhs = {3, 5, 1, 2}; ref = {3, 6, 0, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 3, 1}; rhs = {3, 5, 1, 2}; ref = {3, 6, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 7, 2, 1}; rhs = {3, 4, 2, 3}; ref = {3, 7, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed 
with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 11, 10, 3}; rhs = {4, 12, 2, 3}; ref = {4, 13, 3, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 0, 0, 0}; rhs = {4, 11, 7, 6}; ref = {4, 0, 0, 0}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 7, 7, 0}; rhs = {4, 11, 7, 6}; ref = {4, 0, 6, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 5, 5, 0}; rhs = {4, 11, 7, 6}; ref = {4, 0, 5, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 0, 1, 1}; rhs = {4, 13, 7, 5}; ref = {4, 0, 1, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 1, 3, 1}; rhs = {4, 14, 4, 3}; ref = {4, 0, 3, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 1, 3, 1}; rhs = {4, 11, 7, 6}; ref = {4, 0, 3, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 0, 3, 3}; rhs = {4, 14, 4, 3}; ref = {4, 0, 3, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 0, 2, 1}; rhs = {4, 11, 7, 6}; ref = {4, 0, 2, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 15, 15, 0}; rhs = {4, 13, 7, 5}; ref = {4, 15, 15, 0}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 14, 14, 0}; rhs = {4, 11, 7, 6}; ref = {4, 14, 0, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 8, 8, 0}; rhs = {4, 11, 7, 6}; ref = {4, 10, 0, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 1, 15, 14}; rhs = {4, 13, 7, 5}; ref = {4, 1, 15, 14}; res_p = lhs.srem(4, rhs); 
res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 10, 6, 1}; rhs = {4, 11, 7, 6}; ref = {4, 10, 6, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 12, 2, 3}; rhs = {4, 11, 7, 6}; ref = {4, 12, 2, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 10, 13, 3}; rhs = {4, 11, 7, 6}; ref = {4, 10, 0, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 1, 11, 1}; rhs = {4, 11, 7, 6}; ref = {4, 10, 6, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 9, 1, 1}; rhs = {4, 11, 7, 6}; ref = {4, 10, 1, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 9, 11, 1}; rhs = {4, 11, 7, 6}; ref = {4, 10, 0, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 11, 10, 3}; rhs = {4, 11, 7, 6}; ref = {4, 10, 6, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 3, 15, 12}; rhs = {4, 11, 7, 6}; ref = {4, 15, 3, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 11, 10, 3}; rhs = {4, 14, 4, 3}; ref = {4, 13, 3, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 0, 0, 0}; rhs = {3, 1, 5, 4}; ref = {3, 0, 0, 0}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 3, 3, 0}; rhs = {3, 1, 5, 4}; ref = {3, 0, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 1, 0}; rhs = {3, 1, 5, 4}; ref = {3, 0, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " 
<< ref << "\n"; } lhs = {4, 0, 2, 1}; rhs = {4, 3, 11, 8}; ref = {4, 0, 2, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 3, 1}; rhs = {3, 1, 5, 4}; ref = {3, 0, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 1, 3, 1}; rhs = {4, 5, 13, 4}; ref = {4, 0, 3, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 0, 3, 1}; rhs = {3, 1, 5, 4}; ref = {3, 0, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 0, 1, 1}; rhs = {3, 1, 5, 4}; ref = {3, 0, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 14, 14, 0}; rhs = {4, 3, 11, 8}; ref = {4, 14, 14, 0}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 7, 7, 0}; rhs = {3, 1, 5, 4}; ref = {3, 0, 7, 7}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 5, 5, 0}; rhs = {3, 1, 5, 4}; ref = {3, 6, 0, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 1, 14, 13}; rhs = {4, 3, 11, 8}; ref = {4, 1, 14, 13}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 2, 1}; rhs = {3, 1, 5, 4}; ref = {3, 6, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 6, 5}; rhs = {3, 1, 5, 4}; ref = {3, 6, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 0, 6, 6}; rhs = {3, 1, 5, 4}; ref = {3, 6, 0, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 5, 1}; rhs = {3, 1, 5, 4}; ref = {3, 6, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << 
"[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 5, 1, 2}; rhs = {3, 1, 5, 4}; ref = {3, 6, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 5, 7, 1}; rhs = {3, 1, 5, 4}; ref = {3, 6, 0, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 15, 2, 1}; rhs = {4, 3, 11, 8}; ref = {4, 15, 2, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 3, 1}; rhs = {3, 1, 5, 4}; ref = {3, 6, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 3, 15, 12}; rhs = {4, 5, 13, 4}; ref = {4, 15, 3, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 11, 10, 3}; rhs = {4, 1, 13, 12}; ref = {4, 14, 2, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {1, 0, 1, 1}; rhs = {1, 0, 0, 0}; ref = {1, 0, 1, 1}; res_p = lhs.srem(1, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 0, 0, 0}; rhs = {2, 0, 1, 1}; ref = {2, 0, 0, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 1, 1, 0}; rhs = {2, 0, 1, 1}; ref = {2, 0, 0, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 0, 1, 1}; rhs = {3, 0, 3, 3}; ref = {3, 0, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 2, 3, 1}; rhs = {3, 0, 2, 2}; ref = {3, 0, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 3, 1}; rhs = {3, 0, 1, 1}; ref = {3, 0, 0, 0}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 0, 1, 1}; rhs = {2, 0, 1, 1}; ref = {2, 0, 0, 0}; res_p = lhs.srem(2, 
rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 7, 7, 0}; rhs = {3, 0, 3, 3}; ref = {3, 7, 7, 0}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 3, 3, 0}; rhs = {2, 0, 1, 1}; ref = {2, 0, 0, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 6, 5}; rhs = {3, 0, 3, 3}; ref = {3, 1, 6, 5}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 5, 6, 1}; rhs = {3, 0, 2, 2}; ref = {3, 0, 7, 7}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 0, 2, 1}; rhs = {2, 0, 1, 1}; ref = {2, 0, 0, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 5, 1, 2}; rhs = {3, 0, 3, 3}; ref = {3, 6, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 0, 3, 3}; rhs = {2, 0, 1, 1}; ref = {2, 0, 0, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 7, 2, 1}; rhs = {3, 0, 3, 3}; ref = {3, 7, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 3, 1}; rhs = {3, 0, 3, 3}; ref = {3, 6, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 3, 1}; rhs = {3, 0, 1, 1}; ref = {3, 0, 0, 0}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 0, 0, 0}; rhs = {3, 0, 3, 1}; ref = {3, 0, 0, 0}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 3, 3, 0}; rhs = {3, 0, 3, 1}; ref = {3, 0, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = 
{3, 1, 1, 0}; rhs = {3, 0, 3, 1}; ref = {3, 0, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 0, 2, 1}; rhs = {4, 0, 6, 3}; ref = {4, 0, 2, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 3, 1}; rhs = {3, 0, 3, 1}; ref = {3, 0, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 1, 3, 1}; rhs = {4, 0, 5, 1}; ref = {4, 0, 3, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 0, 3, 1}; rhs = {3, 0, 3, 1}; ref = {3, 0, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 0, 1, 1}; rhs = {3, 0, 3, 1}; ref = {3, 0, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 14, 14, 0}; rhs = {4, 0, 6, 3}; ref = {4, 14, 14, 0}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 7, 7, 0}; rhs = {3, 0, 3, 1}; ref = {3, 0, 7, 7}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 6, 0}; rhs = {3, 0, 2, 1}; ref = {3, 0, 7, 7}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 1, 14, 13}; rhs = {4, 0, 6, 3}; ref = {4, 1, 14, 13}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 2, 1}; rhs = {3, 0, 3, 1}; ref = {3, 6, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 6, 5}; rhs = {3, 0, 3, 1}; ref = {3, 6, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 0, 6, 6}; rhs = {3, 0, 3, 1}; ref = {3, 6, 0, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " 
<< lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 5, 1}; rhs = {3, 0, 3, 1}; ref = {3, 6, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 5, 1, 2}; rhs = {3, 0, 3, 1}; ref = {3, 6, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 5, 7, 1}; rhs = {3, 0, 3, 1}; ref = {3, 6, 0, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 15, 2, 1}; rhs = {4, 0, 6, 3}; ref = {4, 15, 2, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 3, 1}; rhs = {3, 0, 3, 1}; ref = {3, 6, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 3, 15, 12}; rhs = {4, 0, 5, 1}; ref = {4, 15, 3, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 3, 1}; rhs = {3, 0, 2, 1}; ref = {3, 7, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 0, 0, 0}; rhs = {2, 1, 1, 0}; ref = {2, 0, 0, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 1, 1, 0}; rhs = {2, 1, 1, 0}; ref = {2, 0, 0, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 3, 3, 0}; rhs = {3, 1, 3, 1}; ref = {3, 0, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 1, 0}; rhs = {3, 1, 3, 1}; ref = {3, 0, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 0, 1, 1}; rhs = {3, 2, 3, 1}; ref = {3, 0, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 2, 3, 1}; rhs = {3, 2, 2, 0}; ref = {3, 0, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval 
*>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 3, 1}; rhs = {3, 3, 3, 0}; ref = {3, 0, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 0, 1, 1}; rhs = {2, 1, 1, 0}; ref = {2, 0, 0, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 3, 1}; rhs = {3, 1, 3, 1}; ref = {3, 0, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {4, 1, 3, 1}; rhs = {4, 2, 7, 1}; ref = {4, 0, 3, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 0, 3, 1}; rhs = {3, 1, 3, 1}; ref = {3, 0, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 0, 1, 1}; rhs = {3, 1, 3, 1}; ref = {3, 0, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 7, 7, 0}; rhs = {3, 2, 3, 1}; ref = {3, 7, 7, 0}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 3, 3, 0}; rhs = {2, 1, 1, 0}; ref = {2, 0, 0, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 7, 7, 0}; rhs = {3, 1, 3, 1}; ref = {3, 0, 7, 7}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 6, 0}; rhs = {3, 1, 2, 1}; ref = {3, 0, 7, 7}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 6, 5}; rhs = {3, 3, 3, 0}; ref = {3, 1, 6, 5}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 5, 6, 1}; rhs = {3, 2, 2, 0}; ref = {3, 0, 7, 7}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 0, 2, 1}; rhs = {2, 1, 1, 0}; ref = {2, 
0, 0, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 5, 1, 2}; rhs = {3, 3, 3, 0}; ref = {3, 6, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {2, 0, 3, 3}; rhs = {2, 1, 1, 0}; ref = {2, 0, 0, 0}; res_p = lhs.srem(2, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 2, 1}; rhs = {3, 1, 3, 1}; ref = {3, 6, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 6, 5}; rhs = {3, 1, 3, 1}; ref = {3, 6, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 0, 6, 6}; rhs = {3, 1, 3, 1}; ref = {3, 6, 0, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 1, 5, 1}; rhs = {3, 1, 3, 1}; ref = {3, 6, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 5, 1, 2}; rhs = {3, 1, 3, 1}; ref = {3, 6, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 5, 7, 1}; rhs = {3, 1, 3, 1}; ref = {3, 6, 0, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 7, 2, 1}; rhs = {3, 3, 3, 0}; ref = {3, 7, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 3, 1}; rhs = {3, 3, 3, 0}; ref = {3, 6, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 3, 1}; rhs = {3, 1, 1, 0}; ref = {3, 0, 0, 0}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 3, 1}; rhs = {3, 1, 3, 1}; ref = {3, 6, 2, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", 
expected " << ref << "\n"; } lhs = {4, 3, 15, 12}; rhs = {4, 2, 7, 1}; ref = {4, 15, 3, 1}; res_p = lhs.srem(4, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } lhs = {3, 6, 3, 1}; rhs = {3, 1, 2, 1}; ref = {3, 7, 1, 1}; res_p = lhs.srem(3, rhs); res = *(static_cast<StridedInterval *>(res_p.get())); if (res != ref) { errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\n"; } ```python n = 4 lhs, rhs = MSI(n, 10, 13, 3), MSI(n, 10, 13, 3) print(f'{lhs}, {rhs}: {set(map(lambda k: as_signed_int(n, k), gamma(lhs)))}, {set(map(lambda k: as_signed_int(n, k), gamma(rhs)))}') res = srem(lhs, rhs, debug=True) print(f'{res}: {set(map(lambda k: as_signed_int(n, k), gamma(res)))}') ``` 3[10, 13]_{4}, 3[10, 13]_{4}: {-6, -3}, {-6, -3} a: -6, b: -3, c: -6, d: -3, s: 3, t: 3 all negative a: -6, b: -3, c: 3, d: 6, s: 3, t: 3 a: -6, b: -3, c: 3, d: 6, s: 3, t: 3 case 5 u: 3, e: -3, f: 0, -4, 0 13[0, 13]_{4}: {0, -3} ```python bin_op_test(srem, lambda n, a, b: rem(as_signed_int(n, a), as_signed_int(n, b)) % 2**n, big=False, non_zero=True) ```
5cf8125e795f1a3c3209d5cdf11cb261a1b1c334
200,813
ipynb
Jupyter Notebook
spec/msi.ipynb
peterrum/po-lab-2018
e4547288c582f36bd73d94157ea157b0a631c4ae
[ "MIT" ]
3
2018-06-05T08:07:52.000Z
2018-11-04T19:18:40.000Z
spec/msi.ipynb
peterrum/po-lab-2018
e4547288c582f36bd73d94157ea157b0a631c4ae
[ "MIT" ]
60
2018-06-05T15:14:39.000Z
2018-11-24T07:47:28.000Z
spec/msi.ipynb
peterrum/po-lab-2018
e4547288c582f36bd73d94157ea157b0a631c4ae
[ "MIT" ]
4
2018-11-05T10:04:30.000Z
2019-04-16T14:26:24.000Z
48.018412
1,989
0.464895
true
48,487
Qwen/Qwen-72B
1. YES 2. YES
0.891811
0.805632
0.718472
__label__eng_Latn
0.4877
0.507582
## Exploring reference frame

```python
import sympy as sp
import sympy.physics.mechanics as me
```

```python
psi = me.dynamicsymbols('psi')
x0,y0 = me.dynamicsymbols('x0 y0')
x01d,y01d = me.dynamicsymbols('x0 y0',1)
u,v = me.dynamicsymbols('u v')
```

```python
N = me.ReferenceFrame('N')
B = N.orientnew('B','Axis',[psi,N.z])
```

```python
B.dcm(N)
```

$\displaystyle \left[\begin{matrix}\cos{\left(\psi{\left(t \right)} \right)} & \sin{\left(\psi{\left(t \right)} \right)} & 0\\- \sin{\left(\psi{\left(t \right)} \right)} & \cos{\left(\psi{\left(t \right)} \right)} & 0\\0 & 0 & 1\end{matrix}\right]$

### V2pt_theory

```python
O = me.Point('O')
P = O.locatenew('P',10*B.x)
```

```python
O.set_vel(N,x0*N.x + y0*N.y)
```

```python
P.v2pt_theory(O,N,B)
```

$\displaystyle 10 \dot{\psi}\mathbf{\hat{b}_y} + x_{0}\mathbf{\hat{n}_x} + y_{0}\mathbf{\hat{n}_y}$

```python
P.vel(N).to_matrix(N)
```

$\displaystyle \left[\begin{matrix}\operatorname{x_{0}}{\left(t \right)} - 10 \sin{\left(\psi{\left(t \right)} \right)} \frac{d}{d t} \psi{\left(t \right)}\\\operatorname{y_{0}}{\left(t \right)} + 10 \cos{\left(\psi{\left(t \right)} \right)} \frac{d}{d t} \psi{\left(t \right)}\\0\end{matrix}\right]$

### V1pt_theory

```python
O = me.Point('O')
P = O.locatenew('P',0)
```

```python
O.set_vel(N,0)
P.set_vel(B,u*B.x + v*B.y)
```

```python
P.v1pt_theory(O,N,B)
```

$\displaystyle u\mathbf{\hat{b}_x} + v\mathbf{\hat{b}_y}$

```python
P.vel(N).to_matrix(N)
```

$\displaystyle \left[\begin{matrix}u{\left(t \right)} \cos{\left(\psi{\left(t \right)} \right)} - v{\left(t \right)} \sin{\left(\psi{\left(t \right)} \right)}\\u{\left(t \right)} \sin{\left(\psi{\left(t \right)} \right)} + v{\left(t \right)} \cos{\left(\psi{\left(t \right)} \right)}\\0\end{matrix}\right]$

## Blunder

```python
O = me.Point('O')
O.set_vel(N,0)
P = me.Point('P')
P.set_pos(O,x0*N.x + y0*N.y)
P.set_vel(B,u*B.x + v*B.y)
P.v1pt_theory(O,N,B)
```

$\displaystyle u\mathbf{\hat{b}_x} + v\mathbf{\hat{b}_y} - y_{0} \dot{\psi}\mathbf{\hat{n}_x} + x_{0} \dot{\psi}\mathbf{\hat{n}_y}$

```python
P.vel(N).to_matrix(N)
```

$\displaystyle \left[\begin{matrix}u{\left(t \right)} \cos{\left(\psi{\left(t \right)} \right)} - v{\left(t \right)} \sin{\left(\psi{\left(t \right)} \right)} - \operatorname{y_{0}}{\left(t \right)} \frac{d}{d t} \psi{\left(t \right)}\\u{\left(t \right)} \sin{\left(\psi{\left(t \right)} \right)} + v{\left(t \right)} \cos{\left(\psi{\left(t \right)} \right)} + \operatorname{x_{0}}{\left(t \right)} \frac{d}{d t} \psi{\left(t \right)}\\0\end{matrix}\right]$
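As an addendum (not in the original notebook), the `v2pt_theory` result from the section above can be cross-checked by hand from the two-point formula $v_P = v_O + \omega_{B/N} \times r_{OP}$. The sketch below rebuilds the points under fresh, hypothetical names (`O2`, `P2`) so it does not disturb the redefinitions made later in the notebook.

```python
# Hypothetical cross-check of the two-point velocity theorem:
#   v_P = v_O + omega_{B/N} x r_{O->P}
O2 = me.Point('O2')
O2.set_vel(N, x0*N.x + y0*N.y)      # same base-point velocity as above
P2 = O2.locatenew('P2', 10*B.x)     # same offset, fixed in frame B
v_check = O2.vel(N) + B.ang_vel_in(N).cross(P2.pos_from(O2))
v_check.to_matrix(N)                # matches the v2pt_theory output above
```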
08136f05095ce02f5cb2d8e95aac43bc854248e5
7,522
ipynb
Jupyter Notebook
reference_frame.ipynb
axelande/rigidbodysimulator
a87c3eb3b7978ef01efca15e66a6de6518870cd8
[ "MIT" ]
null
null
null
reference_frame.ipynb
axelande/rigidbodysimulator
a87c3eb3b7978ef01efca15e66a6de6518870cd8
[ "MIT" ]
1
2020-10-26T19:47:02.000Z
2020-10-26T19:47:02.000Z
reference_frame.ipynb
axelande/rigidbodysimulator
a87c3eb3b7978ef01efca15e66a6de6518870cd8
[ "MIT" ]
1
2020-10-26T09:17:00.000Z
2020-10-26T09:17:00.000Z
24.031949
522
0.447089
true
999
Qwen/Qwen-72B
1. YES 2. YES
0.941654
0.831143
0.782649
__label__yue_Hant
0.094596
0.656689
# Lecture 18: Numerical Solutions to the Diffusion Equation

## (Implicit Methods)

### Sections

* [Introduction](#Introduction)
* [Learning Goals](#Learning-Goals)
* [On Your Own](#On-Your-Own)
* [In Class](#In-Class)
* [Revisiting the Discrete Version of Fick's Law](#Revisiting-the-Discrete-Version-of-Fick's-Law)
* [A Linear System for Diffusion](#A-Linear-System-for-Diffusion)
* [An Implicit Numerical Solution](#An-Implicit-Numerical-Solution)
* [Deconstruction of the Solution Scheme](#Deconstruction-of-the-Solution-Scheme)
* [Homework](#Homework)
* [Summary](#Summary)
* [Looking Ahead](#Looking-Ahead)
* [Reading Assignments and Practice](#Reading-Assignments-and-Practice)

Possible future improvement: Crank-Nicolson.

### Introduction
----

This lecture introduces the implicit scheme for solving the diffusion equation. Examine the discretization for our explicit scheme that we covered in the previous lecture:

$$
\frac{u_{i,\, j+1} - u_{i,\, j}}{\Delta t} = D \frac{u_{i - 1,\, j} - 2 u_{i,\, j} + u_{i + 1,\, j}}{\Delta x^2}
$$

This expression uses a _forward difference_ in time where we are subtracting the value of our dependent variable at time index $j$ from the value of our dependent variable at time index $j+1$. If, instead, we perform a backward difference (replace $j$ with $j-1$) we will be subtracting our dependent variable at $j-1$ from the value at the index $j$. For example:

$$
\frac{u_{i,\, j} - u_{i,\, j-1}}{\Delta t} = D \frac{u_{i - 1,\, j} - 2 u_{i,\, j} + u_{i + 1,\, j}}{\Delta x^2}
$$

Attempting to repeat our previous algebraic manipulations we find that the solution to this equation is in terms of three unknown quantities at the index $j$. These quantities depend on indices $i-1$, $i$ and $i+1$, and our solution is only known to the index $j-1$. This seems like a complication that cannot be resolved; however, examination of all the resulting equations in our grid will reveal that this is a linear system that can be solved with the inclusion of the boundary conditions. The point of this lecture is to develop your understanding of how matrices and linear algebra can be used to solve this problem. The system of equations and the method for solving the equations is known as an "implicit method". There is a good discussion in _Numerical Recipes_ by Press, Teukolsky, et al. that provides a foundation for these methods.

[Top of Page](#Sections)

### Learning Goals
----

* Re-develop the discretization of Fick's law such that the solution scheme is implicit
* Write the method as a linear system
* Incorporate boundary conditions
* Develop a solution strategy using linear algebra and NumPy or SciPy as appropriate.

[Top of Page](#Sections)

### On Your Own
----

Suggestions for improvement of this section:

* Develop NumPy methods for linear algebra (e.g. creating matrices efficiently)
* Matrix operations relevant to linear algebra.
* Solve a simple linear system.

[Top of Page](#Sections)

### In Class
----

* Re-derive the discrete form of Fick's law.
* Examine the structure of the resulting matrix.
* Write a solver.

### Revisiting the Discrete Version of Fick's Law

We start with a re-statement of Fick's second law in finite difference form that uses a FORWARD difference in time:

$$
\frac{u_{i,\, j+1} - u_{i,\, j}}{\Delta t} = D \frac{u_{i - 1,\, j} - 2 u_{i,\, j} + u_{i + 1,\, j}}{\Delta x^2}
$$

This choice of time differencing led to the EXPLICIT scheme.
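For contrast with what follows, here is a minimal sketch (not part of the original lecture) of the update rule that the forward difference produces, assuming a 1D NumPy array `u` holding the solution at time index $j$ and $\beta = D \Delta t / \Delta x^2$. Recall that this explicit update is only stable for $\beta \le 1/2$, which is exactly the restriction the implicit scheme below removes.

```python
import numpy as np

def explicit_step(u, beta):
    """One explicit (forward difference in time) update of the interior points.

    Boundary values u[0] and u[-1] are assumed to be handled separately.
    """
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] + beta * (u[:-2] - 2.0 * u[1:-1] + u[2:])
    return u_new
```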
This time, we choose a BACKWARD difference in time as follows:

$$
\frac{u_{i,\, j} - u_{i,\, j-1}}{\Delta t} = D \frac{u_{i - 1,\, j} - 2 u_{i,\, j} + u_{i + 1,\, j}}{\Delta x^2}
$$

This choice leads to a set of linear equations. To illustrate how this develops we will write the equation above for a grid of eight points that represent the quantity of diffusing substance. See the next figure.

In the above figure we represent a grid of two dimensions; this grid is identical to the explicit finite difference grid. The main difference between the explicit and implicit methods is the way we fill the grid to arrive at our solution. In the spatial dimension we have 8 columns (the "$i$" index) and in the time dimension we show only three rows (the "$j$" index). The sizes of the grid in the two dimensions are arbitrary. Keep the following in mind:

* The solution is known up to the $j-1$ index.
* The unknowns are at the $j$ indices.

Algebraically rearranging this differencing scheme, we can write down:

$$
u_{i,\, j-1} = \frac{\Delta t D}{\Delta x^2} \left( - u_{i - 1,\, j} + 2 u_{i,\, j} - u_{i + 1,\, j} \right) + u_{i,\, j}
$$

One additional re-arrangement (substituting $\beta$ for the factor containing the diffusion coefficient) and we get:

$$
- \beta u_{i - 1,\, j} + (1 + 2 \beta) u_{i,\, j} - \beta u_{i + 1,\, j} = u_{i,\, j-1}
$$

We include "ghost cells" in grey above to enforce the boundary conditions. We can use fixed value (setting the ghost cells to a particular number) or fixed flux (setting the value of the ghost cell based on a pair of interior cells) to produce a linear system with an equal number of unknowns and equations.

### A Linear System for Diffusion

We begin as usual by importing SymPy into the namespace and using `init_session` to define some helpful symbols. We also define a pair of symbols $U_{LHS}$ and $U_{RHS}$ that will be used to define values in the ghost cells.

```python
import sympy as sp
sp.init_session(quiet=True)
var('U_LHS U_RHS')
```

We define the symbols we want to use in our linear system. For this demonstration, I don't add the time index but I keep my subscripts consistent with the figure above.

```python
var('dt dx beta u1:7 b1:7')
```

In this cell we create the square matrix holding the coefficients that multiply the unknown quantities. Note the structure of the matrix. It is a _tridiagonal_ matrix. The function in NumPy is very compact, in SymPy not so much. So I apologize for the syntax in the SymPy/Python code below; a more compact construction is possible but can be harder to read (one such alternative is sketched a little further down):

```python
hpad = ones(0, 1); vpad = ones(1, 0)
mainDiag = 2*beta+1; offDiag = -beta
M = (sp.diag(vpad, offDiag, offDiag, offDiag, offDiag, offDiag, hpad)+ \
     sp.diag(hpad, offDiag, offDiag, offDiag, offDiag, offDiag, vpad)+ \
     sp.diag(mainDiag,mainDiag,mainDiag,mainDiag,mainDiag,mainDiag))
M
```

Here is our vector of unknown quantities. We know the solution to the $j-1$ time step. All of these symbols represent the value of our field (e.g. concentration, temperature, etc.) at the $j$'th time step.

```python
xmatrix = sp.Matrix([u1,u2,u3,u4,u5,u6])
xmatrix
```

If we've got everything correct, this matrix product will reproduce the discrete diffusion equation outlined above. You'll note that the boundary equations are not formed correctly.
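Before fixing those boundary rows, here is the compact construction promised above. This is a hypothetical equivalent of the `sp.diag` code, using SymPy's `Matrix(rows, cols, entry_function)` constructor and the `beta` symbol already defined; it should reproduce `M` exactly.

```python
# Compact alternative: entry (i, j) is 2*beta + 1 on the diagonal,
# -beta on the first off-diagonals, and 0 elsewhere.
M_compact = sp.Matrix(6, 6, lambda i, j: 2*beta + 1 if i == j
                      else (-beta if abs(i - j) == 1 else 0))
M_compact == M   # expected to be True
```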
For reference, here is the discrete form:

$$
- \beta u_{i - 1,\, j} + (1 + 2 \beta) u_{i,\, j} - \beta u_{i + 1,\, j} = u_{i,\, j-1}
$$

```python
M*xmatrix
```

It should start to become clear that we can write this linear system (of a tridiagonal matrix and a column vector of unknowns) as a matrix equation:

$$
M \cdot \overline{x} = \overline{b}
$$

where $M$ is the square matrix, $x$ is the vector of unknown quantities and $b$ is the last known value of the system variables (the $u_{i,j}$ are the unknowns, the $j-1$ are the last known values). There is still some work to be done before we can use linear algebra to get the solution. We need to implement the boundary conditions.

### Fixed Value Boundary Conditions

Start with the form at the interior of the grid:

$$
- \beta u_{i - 1,\, j} + (1 + 2 \beta) u_{i,\, j} - \beta u_{i + 1,\, j} = u_{i,\, j-1}
$$

To get the form correct at the top and bottom of this solution vector we need to imagine adding "ghost cells" to the boundaries of our domain at $i=0$ and $i=7$. Using the above expression, let $i = 1$:

$$
- \beta u_{0,\, j} + (1 + 2 \beta) u_{1,\, j} - \beta u_{2,\, j} = u_{1,\, j-1}
$$

If we have fixed value boundary conditions, we then know the value of $u_0$. This is the boundary condition of our simulation. We will call this value $U_{LHS}$, substitute $U_{LHS} = u_0$ and move the known quantities to the RHS of the equation:

$$
(1 + 2 \beta) u_{1,\, j} - \beta u_{2,\, j} = u_{1,\, j-1} + \beta U_{LHS}
$$

### Fixed Flux Boundary Conditions

If we have fixed flux boundary conditions we can write the flux as a central difference on the cell $u_1$ that uses the "ghost" point at $u_0$:

$$
\frac{u_{2,\, j} - u_{0,\, j}}{2 \Delta x} = F
$$

Proceeding as before with $i=1$:

$$
- \beta u_{0,\, j} + (1 + 2 \beta) u_{1,\, j} - \beta u_{2,\, j} = u_{1,\, j-1}
$$

This time we know the relationship of $u_0$ to the other unknowns due to the specification of the defined flux boundary condition. Solving for $u_0$ we get:

$$
u_{0,\, j} = u_{2,\, j} - {2 \Delta x} F
$$

Substituting this into our expression that includes the ghost cell gives us:

$$
- \beta (u_{2,\, j} - {2 \Delta x} F) + (1 + 2 \beta) u_{1,\, j} - \beta u_{2,\, j} = u_{1,\, j-1}
$$

Simplifying:

$$
(1 + 2 \beta) u_{1,\, j} - 2 \beta u_{2,\, j} = u_{1,\, j-1} - 2 \beta \Delta x F
$$

So in this case we have to modify the matrix $M$ entries AND the solution vector $b$, recalling that the $j-1$ index is the known solution. We have now recovered the form of the equation in the dot product $M \cdot x$, and the form of this equation is telling us that we need to modify the solution vector $b$ with information about the boundary conditions before we find the inverse of the matrix and compute the new solution vector.

Modifying the $b$ matrix with the known ghost cell values for the fixed value boundary conditions we get:

```python
bmatrix = sp.Matrix([(b1+beta*U_LHS),b2,b3,b4,b5,(b6+beta*U_RHS)])
bmatrix
```

So the full form of our system is therefore:

$$
\left[\begin{matrix}2 \beta + 1 & - \beta & 0 & 0 & 0 & 0\\- \beta & 2 \beta + 1 & - \beta & 0 & 0 & 0\\0 & - \beta & 2 \beta + 1 & - \beta & 0 & 0\\0 & 0 & - \beta & 2 \beta + 1 & - \beta & 0\\0 & 0 & 0 & - \beta & 2 \beta + 1 & - \beta\\0 & 0 & 0 & 0 & - \beta & 2 \beta + 1\end{matrix}\right] \cdot \left[\begin{matrix}u_{1}\\u_{2}\\u_{3}\\u_{4}\\u_{5}\\u_{6}\end{matrix}\right] = \left[\begin{matrix}U_{LHS} \beta + b_{1}\\b_{2}\\b_{3}\\b_{4}\\b_{5}\\U_{RHS} \beta + b_{6}\end{matrix}\right]
$$

`SymPy` can evaluate the LHS for us.
```python
sp.Eq(M*xmatrix,bmatrix)
```

All that remains is to solve the above linear system. Instead of using `SymPy`, we will use some tools in a different Python library.

[Top of Page](#Sections)

### An Implicit Numerical Solution

General setup in this section:

```python
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
```

Simulation parameters:

```python
numberOfPoints = 100
lengthOfDomain = 1.0
dx = lengthOfDomain/numberOfPoints
xPoints = np.linspace(0.0, lengthOfDomain, numberOfPoints)
initialCondition = np.sin(xPoints*np.pi/lengthOfDomain)
```

A simple function to plot the initial condition:

```python
def plotIC():
    fig = plt.figure()
    axes = fig.add_axes([0.1, 0.1, 0.8, 0.8])
    axes.plot(xPoints, initialCondition, 'ro')
    axes.set_xlabel('Distance $x$')
    axes.set_ylabel('Concentration of Stuff $c(x,t)$')
    axes.set_title('Initial Conditions');
```

```python
plotIC()
```

It is worth noting that this implicit scheme is unconditionally stable, so any choice of time step will produce a solution. The accuracy of the solution does depend on this choice, though.

```python
diffusionCoefficient = 10.0
dt = dx**2/(diffusionCoefficient)
numberOfIterations = 1000
```

We create two solution vectors rather than one whole array to hold all of our solution. This is not particular to the implicit method, but it demonstrates another technique for saving memory and speeding up the calculation. We will fill these arrays and swap them (move data from `new` into `old` and overwrite `new`) at each time step.

```python
newConcentration = np.zeros((numberOfPoints), dtype='float32')
oldConcentration = np.zeros((numberOfPoints), dtype='float32')
```

First, a bit of Python syntax (list repetition) that we will use to build the diagonals:

```python
['h','h','h']*3
```

The matrix has to be square. It should have the same dimensions as the number of points in the system. The following code snippet was inspired by [this](http://stackoverflow.com/questions/5842903/block-tridiagonal-matrix-python) post.

```python
def tridiag(a, b, c, k1=-1, k2=0, k3=1):
    # Here we use Numpy addition to make the job easier.
    return np.diag(a, k1) + np.diag(b, k2) + np.diag(c, k3)

a = [-dt*diffusionCoefficient/dx/dx]*(numberOfPoints-1)
b = [2*dt*diffusionCoefficient/dx/dx+1]*(numberOfPoints)
c = [-dt*diffusionCoefficient/dx/dx]*(numberOfPoints-1)
A = tridiag(a, b, c)
```

```python
A
```

We first need to prime the arrays by copying the initial condition into `oldConcentration`. Afterwards it will be enough to swap pointers (a variable that points to a memory location).

```python
np.copyto(oldConcentration,initialCondition)
```

[Top of Page](#Sections)

### Deconstruction of the Solution Scheme

In spite of the small chunk of code a few cells below, there is a lot going on. Let us dissect it. In bullet points:

* Before the first solution step we enforce the boundary conditions. Our choice of matrix means that we are using "fixed value" boundary conditions. So we need to modify the `b` vector accordingly. The indexing notation of NumPy that permits us to find the first (`[0]`) and last cell (`[-1]`) of an array is very helpful here.
```python oldConcentration[0] = oldConcentration[0] + uLHS*dt*diffusionCoefficient/dx/dx oldConcentration[-1] = oldConcentration[-1] + uRHS*dt*diffusionCoefficient/dx/dx ``` Recall: $$ \left[\begin{matrix}2 \beta + 1 & - \beta & 0 & 0 & 0 & 0\\- \beta & 2 \beta + 1 & - \beta & 0 & 0 & 0\\0 & - \beta & 2 \beta + 1 & - \beta & 0 & 0\\0 & 0 & - \beta & 2 \beta + 1 & - \beta & 0\\0 & 0 & 0 & - \beta & 2 \beta + 1 & - \beta\\0 & 0 & 0 & 0 & - \beta & 2 \beta + 1\end{matrix}\right] \cdot \left[\begin{matrix}u_{1}\\u_{2}\\u_{3}\\u_{4}\\u_{5}\\u_{6}\end{matrix}\right] = \left[\begin{matrix}U_{LHS} \beta + b_{1}\\b_{2}\\b_{3}\\b_{4}\\b_{5}\\U_{RHS} \beta + b_{6}\end{matrix}\right] $$ * Solving the system involves using the built in `NumPy` functions to invert the matrix. What is returned is the solution vector. Please note that I'm using an internal `Numpy` (an optimized function!) function to COPY the results of the linear algebra solution into the `newConcentration` vector. ```python np.copyto(newConcentration,np.linalg.solve(A,oldConcentration)) ``` * Rather than storing ALL the data, we instead store just the current and the old concentrations. There are efficiencies in doing this, but if we want the older values, we need to store them on disk or in memory. * Tuple unpacking in Python leads to the `A,B=B,A` syntax below. This switches the references to the arrays. This is important for efficiency - you don't want to move any data if you don't have to. If you are running big calculations then moving that data around is a waste of time/resources. Better to just swap references. ```python oldConcentration, newConcentration = newConcentration, oldConcentration ``` * Repeat the process and after a specified number of iterations, plot the results. ```python uLHS = 0.0 uRHS = 0.0 numIterations = 200 for i in range(numIterations): # enforce boundary conditions oldConcentration[0] = oldConcentration[0] + uLHS*dt*diffusionCoefficient/dx/dx oldConcentration[-1] = oldConcentration[-1] + uRHS*dt*diffusionCoefficient/dx/dx # solve the system np.copyto(newConcentration,np.linalg.solve(A,oldConcentration)) # swap pointers oldConcentration, newConcentration = newConcentration, oldConcentration # plot the results fig2 = plt.figure() axes = fig2.add_axes([0.1, 0.1, 0.8, 0.8]) axes.plot(xPoints, newConcentration, 'ro') axes.set_ylim(0,1) axes.set_xlabel('Distance $x$') axes.set_ylabel('Concentration of Stuff $c(x,t)$') axes.set_title('Solution'); ``` [Top of Page](#Sections) ### Homework ---- * Solve the diffusion couple problem * Compare to the analytical solution * Describe the differences between them (in words and with some plots, maybe) * Examine the error in terms of truncation versus roundoff error. [Top of Page](#Sections) ### Looking Ahead ---- TBA [Top of Page](#Sections) ### Reading Assignments and Practice ---- TBA [Top of Page](#Sections)
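For the homework item on comparing with an analytical solution, note that the sine initial condition used above with zero fixed-value boundaries has the separation-of-variables solution $c(x,t) = \sin(\pi x/L)\, e^{-D \pi^2 t / L^2}$.  The sketch below is not part of the original notebook; it assumes the variables from the cells above (`xPoints`, `oldConcentration`, `numIterations`, `dt`, `diffusionCoefficient`, `lengthOfDomain`) are still in scope, and that after the final swap `oldConcentration` holds the most recent time step.

```python
# Rough comparison of the implicit solution with the analytical solution
# c(x,t) = sin(pi*x/L) * exp(-D*pi^2*t/L^2) for zero-value boundaries.
elapsedTime = numIterations*dt
analytical = np.sin(np.pi*xPoints/lengthOfDomain) * \
    np.exp(-diffusionCoefficient*np.pi**2*elapsedTime/lengthOfDomain**2)

fig3 = plt.figure()
axes = fig3.add_axes([0.1, 0.1, 0.8, 0.8])
axes.plot(xPoints, oldConcentration, 'ro', label='implicit scheme')
axes.plot(xPoints, analytical, 'b-', label='analytical')
axes.set_xlabel('Distance $x$')
axes.set_ylabel('Concentration of Stuff $c(x,t)$')
axes.legend();
```

The two curves should lie close to each other; the remaining gap is a mix of truncation error from the time and space discretization and the approximate treatment of the boundary rows.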
```python from sympy import * init_printing() ``` ```python eye(3) ``` ```python Matrix([[1, 2], [3, 4]]) * Matrix([[1, 2], [3, 4]]) ``` ```python x = symbols('x') (2 * x**2 + x + 10).as_poly().all_coeffs() ```
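A few more examples in the same spirit.  These assume the same `from sympy import *` session as above and use only standard SymPy calls:

```python
# Solve a quadratic equation symbolically
solve(x**2 - 5*x + 6, x)
```

```python
# Differentiate and integrate elementary expressions
diff(sin(x)*exp(x), x), integrate(exp(x)*cos(x), x)
```

```python
# Invert a small matrix
Matrix([[1, 2], [3, 4]]).inv()
```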
# Practice 4 - Railway buffer stop 2021.03.01

## Problem:

```python
from IPython.display import Image
Image(filename='gyak4_1.png',width=900)
```

As shown in the attached figure, a railway train of mass $m$ runs into a buffer stop with initial speed $v_0$. We assume that the buffer stop remains stationary throughout the process. To model the elasticity and energy dissipation of the buffer stop, we use the mechanical model shown in the figure above. While the train's buffer is in contact with the buffer of the stop, their spring stiffnesses and damping coefficients can be combined into a single resultant spring stiffness $k$ and a resultant damping coefficient of magnitude $2c$.

### Data:

| | |
|:-----------------------|-----------------------|
| $m$ = 5$\cdot$10$^4$ kg | $v_0$ = 1 m/s |
| $k$ = 10$^6$ N/m | $c$ = 10$^5$ Ns/m |

### Subtasks:

1. Compute the maximum force arising in the spring during the impact!
2. Determine the energy absorbed during the impact!

## Solution:

### Task 1

Without further derivation, the equation of motion is:

$$m\ddot{x} + 2c\dot{x} + kx = 0.$$

```python
import sympy as sp
from IPython.display import Math # so that we can print LaTeX text
sp.init_printing()

m,c,k,ζ,ω_n,ω_d,v_0,t = sp.symbols('m,c,k,ζ,ω_n,ω_d,v_0,t')
x = sp.Function('x')(t)

# Build a substitution list from the data, in SI units
adatok = [(m, 5*10**4), (v_0, 1), (k, 10**6), (c, 10**5)]

mozgasegy = m*sp.diff(x,t,2) + 2*c*sp.diff(x,t,1) + k*x

# Divide by the leading coefficient:
foegyutthato = mozgasegy.coeff(sp.diff(x,t,2))
mozgasegy = (mozgasegy/foegyutthato).expand()
mozgasegy
```

```python
mozgasegy = mozgasegy.subs([(2*c/m,2*ζ*ω_n),(k/m,ω_n**2)])
mozgasegy
```

```python
ω_n_num = sp.sqrt(k/m).subs(adatok)
display(Math('\omega_n = {}'.format(sp.latex(ω_n_num)))) # rad/s
```

$\displaystyle \omega_n = 2 \sqrt{5}$

```python
ζ_num = (c/m/ω_n).subs(adatok).subs(ω_n,ω_n_num)
display(Math('\zeta = {}'.format(sp.latex(ζ_num)))) # 1
```

$\displaystyle \zeta = \frac{\sqrt{5}}{5}$

```python
ω_d_num = (ω_n*sp.sqrt(1-ζ**2)).subs(adatok).subs(ω_n,ω_n_num).subs(ζ,ζ_num)
display(Math('\omega_d = {}'.format(sp.latex(ω_d_num)))) # rad/s
```

$\displaystyle \omega_d = 4$

```python
T_d_num = ((2*sp.pi)/ω_d).subs(adatok).subs(ω_d,ω_d_num)
display(Math('T_d = {}'.format(sp.latex(T_d_num)))) # s
```

$\displaystyle T_d = \frac{\pi}{2}$

```python
mozgasegy_num = mozgasegy.subs([(ζ,ζ_num),(ω_n,ω_n_num)])
mozgasegy_num # this is of course the left-hand side, with the right-hand side equal to 0
```

```python
# First solve the equation of motion by determining
# the integration constants by hand.

# General solution:
alt_meg = sp.dsolve(mozgasegy_num,x).rhs # right hand side
alt_meg # note: in the original worked solution C1 and C2 appear the other way around!
```

```python
# Derivative of the general solution:
d_alt_meg = sp.diff(alt_meg,t)
d_alt_meg
```

```python
# Initial values: x(0) = 0, v(0) = v0.
# Expressing the constants C1, C2:
v0, C1, C2 = sp.symbols("v0, C1, C2")

# Solve the system of equations for C1 and C2.
konst = sp.solve([alt_meg.subs(t,0),d_alt_meg.subs(t,0)-v0],C1, C2)
C1_num = konst[C1]
C2_num = konst[C2]
display(Math('C_1 = {},'.format(sp.latex(konst[C1]))))
display(Math('C_2 = {}.'.format(sp.latex(konst[C2]))))
```

$\displaystyle C_1 = \frac{v_{0}}{4},$

$\displaystyle C_2 = 0.$

```python
alt_meg.subs([(C1,C1_num),(C2,C2_num)])
```

```python
# Now let us appreciate that sympy can also do the above
# automatically:
kezdeti_ert = {x.subs(t,0): 0, x.diff(t).subs(t,0): v_0}
display(kezdeti_ert)
mozg_torv = sp.dsolve(mozgasegy_num,x,ics=kezdeti_ert)
mozg_torv
```

The maximum spring force occurs at the maximum displacement: we look for the time $t^*$ of the maximum displacement:

```python
# The derivative is zero at the maximum.
# Here we can already use the fact that C2 = 0
d_alt_meg.subs(C2,0)
```

```python
# Solving the equation for t:
meg = sp.solve(d_alt_meg.subs(C2,0),t)
t_csillag = meg[0]
display(Math('t^* = {}'.format(t_csillag.evalf(4)))) # s
```

$\displaystyle t^* = 0.2768$

```python
# Substituting back into the law of motion:
x_max = mozg_torv.rhs.subs(t,t_csillag).subs(v_0,1)
display(Math('x_{{max}} = {}'.format(x_max.evalf(4)))) # m
```

$\displaystyle x_{max} = 0.1285$

```python
# From which the maximum spring force:
F_max = (k*x_max).subs(adatok)
display(Math('F_{{r,max}} = {}'.format(F_max.evalf(5)/1000))) # kN
```

$\displaystyle F_{r,max} = 128.55$

## Task 2

```python
# From Newton's second law, the force exerted by the train on the buffer
kontakt_egy = sp.Eq(k*alt_meg+2*c*d_alt_meg,0)
kontakt_egy = kontakt_egy.subs(C2,0).simplify()
kontakt_egy
```

```python
# Solve for t, using that C2 = 0 and C1 ≠ 0:
kontakt_megold = sp.solve(kontakt_egy.subs(adatok),t)
display(kontakt_megold)

# of the 2 solutions we need the smallest one that is still positive.
# Method: `ternary operator`, which is easy to read and interpret.
t_cscs = kontakt_megold[0] if 0 < kontakt_megold[0] < kontakt_megold[1] else kontakt_megold[1]
display(Math('t^{{**}} = {}'.format(t_cscs.evalf(4)))) # s
```

```python
# The velocity of the train at the moment of separation (at time t**)
v_t = mozg_torv.rhs.diff(t) # derivative of the law of motion -> velocity
v_tcscs = v_t.subs(t,t_cscs).subs(adatok).evalf(4)
display(Math('v_{{t^{{**}}}} = {}'.format(v_tcscs.evalf(4)))) # m/s
```

$\displaystyle v_{t^{**}} = -0.3305$

```python
# The change in kinetic energy equals the work done:
W_diss = (1/2*m*v_tcscs**2-1/2*m*v_0**2).subs(adatok)
# The dissipated energy is minus one times this
display(Math('E^\mathrm{{diss}} = {}'.format(-W_diss.evalf(4)/1000))) # kJ
```

$\displaystyle E^\mathrm{diss} = 22.27$

Prepared by: Juhos-Kiss Álmos (Alkalmazott Mechanika Szakosztály) and Csuzdi Domonkos (Alkalmazott Mechanika Szakosztály), based on the figures of Takács Dénes (BME MM) and the worked solution of Vörös Illés (BME MM).

Errors, suggestions:
amsz.bme@gmail.com
csuzdi02@gmail.com
almosjuhoskiss@gmail.com

2021.03.01
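A quick numerical cross-check of the modal parameters obtained symbolically above, using plain Python floats (the trailing underscores avoid clashing with the SymPy symbols; the expected values follow directly from the given data):

```python
import math

m_, k_, c_ = 5e4, 1e6, 1e5

omega_n = math.sqrt(k_/m_)                 # natural angular frequency [rad/s]
zeta = c_/(m_*omega_n)                     # damping ratio [-]
omega_d = omega_n*math.sqrt(1 - zeta**2)   # damped angular frequency [rad/s]
T_d = 2*math.pi/omega_d                    # damped period [s]

print(omega_n, zeta, omega_d, T_d)
# expected: 2*sqrt(5) ≈ 4.472, sqrt(5)/5 ≈ 0.447, 4.0, pi/2 ≈ 1.571
```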
# Calculus

## Integration

### Indefinite Integrals

A function whose derivative is $f(x)$ is called an **indefinite integral** (or **antiderivative**) of $f(x)$, and is denoted by

$$ \int{f(x)dx} $$

If $F(x)$ is one indefinite integral of $f(x)$, then since the derivative of a constant is 0, $F(x)+C$ is also an indefinite integral of $f(x)$ for any constant $C$.

Therefore, in general it is written as

$$ \int{f(x)dx} = F(x) + C $$

Such a $C$ is called the **constant of integration**.

From the properties of differentiation, the following hold for indefinite integrals:

$$ \begin{align} \int(f(x) + g(x))dx &= \int f(x)dx + \int g(x)dx \\ \int(kf(x))dx &= k\int f(x)dx \end{align} $$

Note that because an indefinite integral is only determined up to a constant of integration, the first identity $\int(f(x) + g(x))dx = \int f(x)dx + \int g(x)dx$ should be read as "if $F(x)$ is one indefinite integral of $f(x)$ and $G(x)$ is one indefinite integral of $g(x)$, then an indefinite integral of $(f(x) + g(x))$ is given by $F(x)+G(x)+(\text{constant})$"; some care is needed here.

The same applies to $\int(kf(x))dx = k\int f(x)dx$.

As a concrete example, consider the indefinite integral of a polynomial; for instance, the following holds:

$$ \int(x^3 + 2x)dx = \frac{1}{4}x^4 + x^2 + C \quad \text{where } C \text{ is the constant of integration} $$

Indeed, differentiating $\frac{1}{4}x^4 + x^2$ gives $x^3 + 2x$.

Also, from the differentiation formulas, the following formulas hold:

$$ \begin{align} \int x^pdx &= \frac{1}{p+1}x^{p+1} + C \\ \int\frac{1}{x}dx &= \ln|x| + C \\ \int e^xdx &= e^x + C \end{align} $$

#### Exercise 1

Find the following indefinite integrals.

1. $$ \int(x^3 + 2x^2 + 1)dx $$
2. $$ \int(x^2 + \frac{1}{x})dx $$
3. $$ \int(\frac{1}{\sqrt x} + e^x)dx $$

#### Exercise 2

Verify that the following identities hold.

1. $$ \int(x^2 + 3x + 1)e^xdx = (x^2 + x)e^x + C $$
2. $$ \int\frac{1}{1-x^2}dx = \frac{1}{2}(\ln|1+x| - \ln|1-x|) + C $$

#### Answer 1

1. $$ \begin{align} \int(x^3 + 2x^2 + 1)dx &= \int x^3dx + 2\int x^2dx + \int1dx \\ &= \frac{1}{4}x^4 + \frac{2}{3}x^3 + x + C \end{align} $$
2. $$ \begin{align} \int(x^2 + \frac{1}{x})dx &= \int x^2dx + \int\frac{1}{x}dx \\ &= \frac{1}{3}x^3 + \ln|x| + C \end{align} $$
3. $$ \begin{align} \int(\frac{1}{\sqrt x} + e^x)dx &= \int x^{-\frac{1}{2}}dx + \int e^xdx \\ &= 2x^\frac{1}{2} + e^x + C \\ &= 2\sqrt{x} + e^x + C \end{align} $$

#### Answer 2

Verify by differentiating each right-hand side.

1. $$ \begin{align} \frac{d}{dx}\{(x^2 + x)e^x + C\} &= (x^2 + x)'\cdot e^x + (x^2 + x)\cdot(e^x)' \\ &= (2x + 1)e^x + (x^2 + x)e^x \\ &= (x^2 + 3x + 1)e^x \end{align} $$
2. $$ \begin{align} \frac{d}{dx}\left\{\frac{1}{2}(\ln|1+x| - \ln|1-x|) + C\right\} &= \frac{1}{2}\{(\ln|1+x|)' - (\ln|1-x|)'\} \\ &= \frac{1}{2}\left(\frac{1}{1+x} - \left(-\frac{1}{1-x}\right)\right) \\ &= \frac{1}{1-x^2} \end{align} $$

```julia
using SymPy
@vars x

f(x) = x^3 + 2x^2 + 1

# indefinite integral
integrate(f(x), x)
```

$\begin{equation*}\frac{x^{4}}{4} + \frac{2 x^{3}}{3} + x\end{equation*}$

```julia
f(x) = x^2 + 1/x
integrate(f(x), x)
```

$\begin{equation*}\frac{x^{3}}{3} + \log{\left(x \right)}\end{equation*}$

```julia
f(x) = 1/sqrt(x) + exp(x)
integrate(f(x), x)
```

$\begin{equation*}2 \sqrt{x} + e^{x}\end{equation*}$

### Definite Integrals

If $F(x)$ is one indefinite integral of the function $f(x)$, then for given constants $a$, $b$ the **definite integral** of $f(x)$ over that interval is defined by

$$ \begin{align} \int_a^bf(x)dx &= F(b) - F(a) \\ &= \left[F(x)\right]_a^b \end{align} $$

In particular, if $f(x)\geq0$ on the interval $[a, b]$, then the definite integral $\int_a^bf(x)dx = F(b) - F(a)$ is known to equal the area of the region $a\leq x\leq b$, $0\leq y\leq f(x)$.

From the properties of indefinite integrals, the following hold:

$$ \begin{align} \int_a^b(f(x) + g(x))dx &= \int_a^bf(x)dx + \int_a^bg(x)dx \\ \int_a^bkf(x)dx &= k\int_a^bf(x)dx \end{align} $$

#### Exercise

Compute the following.

1. $$ \int_0^2(x^3 + x) dx $$
2. $$ \int_1^4(x + \sqrt x)dx $$

#### Answer

1. $$ \begin{align} \int_0^2(x^3 + x)dx &= \left[\frac{1}{4}x^4 + \frac{1}{2}x^2\right]_0^2 \\ &= \frac{1}{4}\times2^4 + \frac{1}{2}\times2^2 \\ &= 6 \end{align} $$
2. $$ \begin{align} \int_1^4(x + \sqrt x)dx &= \left[\frac{1}{2}x^2 + \frac{2}{3}x^\frac{3}{2}\right]_1^4 \\ &= (\frac{1}{2}\times4^2 + \frac{2}{3}\times4^\frac{3}{2}) - (\frac{1}{2}\times1^2 + \frac{2}{3}\times1^\frac{3}{2}) \\ &= \frac{73}{6} \end{align} $$

```julia
f(x) = x^3 + x
integrate(f(x), (x, 0, 2))
```

$\begin{equation*}6\end{equation*}$

```julia
f(x) = x + sqrt(x)
integrate(f(x), (x, 1, 4))
```

$\begin{equation*}\frac{73}{6}\end{equation*}$

## Partial Derivatives and the Gradient

Consider a function of two variables, $z=f(x,y)$.

This is a map that assigns one value of $z$ to each pair of values $(x,y)$.

The set of $(x,y,z)$ satisfying $z=f(x,y)$ can be viewed as a surface in three-dimensional space.

Consider the following concrete example:

$$ f(x,y) = x^2 + xy + y^2 + x $$

If we regard $f(x,y)$ as a function of $x$ (treating $y$ as a constant) and differentiate, we obtain

$$ 2x + y + 1 $$

This is called the **partial derivative** of $f(x,y)$ with respect to $x$, written $\frac\partial{\partial x}f(x,y)$ or $f_x(x,y)$.

Computing a partial derivative is called **partial differentiation**.

The partial derivative is the derivative obtained when the other variables are held fixed.

In the example above, if we fix $y=0$ and consider the function $f(x,0)=x^2+x$, its derivative with respect to $x$, namely $2x+1$, agrees with the partial derivative evaluated at $y=0$: $\frac\partial{\partial x}f(x,0) = 2x + 1$.

Since this relation holds for every $y$, the derivative along $y=y_0$ is the partial derivative $\frac\partial{\partial x}f(x,y)$ with $y=y_0$ substituted:

$$ \frac{\partial f}{\partial x}(x,y_0) = 2x + y_0 + 1 $$

In $xyz$ space, $z=f(x,y)$ represents a surface, and the curve obtained by cutting it with the plane $y=y_0$ is $z=f(x,y_0)$.

The derivative of that cross-section curve is the partial derivative with $y=y_0$ substituted, $z=\frac\partial{\partial x}f(x,y_0)$.

We can also consider extrema of a two-variable function $z=f(x,y)$.

If a two-variable function $z=f(x,y)$ has an extremum at the point $(x,y)=(x_0,y_0)$, then the cross-section of $z=f(x,y)$ with the plane $x=x_0$ must have an extremum at $y=y_0$, and the cross-section with $y=y_0$ must have an extremum at $x=x_0$.

That is, a necessary condition is

$$ \frac{\partial f}{\partial x}(x_0, y_0) = 0,\ \frac{\partial f}{\partial y}(x_0, y_0) = 0 $$

Here, if we set

$$ \nabla f = \begin{pmatrix}\frac{\partial f}{\partial x} \\ \frac{\partial f}{\partial y}\end{pmatrix} $$

then this $\nabla f$ (nabla $f$) can be regarded as a map from $\mathbb R^2$ to $\mathbb R^2$.

That is, for each vector $\begin{pmatrix}x \\ y\end{pmatrix} \in \mathbb R^2$, one vector $\nabla f(x,y)$ is determined.

Therefore, to find the extrema of $f(x,y)$ it suffices to find $x$, $y$ such that

$$ \nabla f(x,y) = 0 $$

Computing with the concrete example, we need to solve

$$ \nabla f(x,y) = \begin{pmatrix}2x+y+1 \\ x+2y\end{pmatrix} = 0 $$

which gives

$$ x=-\frac{2}{3},\ y=\frac{1}{3} $$

```julia
using Plots
@vars x y

f(x, y) = x^2 + x*y + y^2 + x

# Plot a 3D graph of the two-variable function
## ref: https://github.com/JuliaPy/SymPy.jl/blob/master/src/plot_recipes.jl
surface(-10:10, -10:10, f(x, y))
```

```julia
# Plot the vector field showing how z increases/decreases over (x,y)
plot(VectorField(f))
```

We now extend this discussion to general functions of $n$ variables.

A function of $n$ variables is written as

$$ y=f(x_1, x_2, \cdots, x_n) $$

or, instead of listing the variables separated by commas, using the vector $\vec{x}=\begin{pmatrix}x_1 \\ x_2 \\ \vdots \\ x_n\end{pmatrix}$ one can also write

$$ y=f(\vec{x}) $$

Here, "$f$ is partially differentiable" means that $f$ is partially differentiable with respect to every $x_i$.

In particular, if $f$ can be partially differentiated infinitely many times with respect to every $x_i$, then $f$ is said to be **smooth**.

### Properties of smooth functions of $n$ variables

The **gradient** $\nabla f$ of a smooth function $f$ of $n$ variables is defined by

$$ \nabla f = \begin{pmatrix}\frac{\partial f}{\partial x_1} \\ \frac{\partial f}{\partial x_2} \\ \vdots \\ \frac{\partial f}{\partial x_n} \end{pmatrix} $$

Since this assigns one $n$-dimensional vector to each $n$-dimensional vector $\vec{x}\in\mathbb{R}^n$, it is a map from $\mathbb{R}^n$ to $\mathbb{R}^n$.

For the function $f$ to have an extremum at $\vec{x}$, the following condition is necessary:

$$ \nabla f(\vec x) = \boldsymbol0 $$

This is only a necessary condition; note that $f$ may fail to have an extremum even when this condition is satisfied.

To discuss a necessary and sufficient condition for an extremum, we need to consider second-order partial derivatives.

Before that, consider partially differentiating a function several times.

Partially differentiating $f(\vec{x})$ with respect to $x_i$ and then with respect to $x_j$ is written $\frac{\partial^2f}{\partial x_j\partial x_i}(\vec x)$ or $f_{x_ix_j}(\vec x)$.

When $f(\vec x)$ is smooth, it is known that $f_{x_ix_j}(\vec x) = f_{x_jx_i}(\vec x)$, so we need not worry about the order of partial differentiation.

The **Hessian matrix** of $f$ is then defined by

$$ \nabla^2f = \begin{pmatrix} \frac{\partial^2f}{\partial x_1 \partial x_1} & \frac{\partial^2f}{\partial x_1 \partial x_2} & \ldots & \frac{\partial^2f}{\partial x_1 \partial x_n} \\ \frac{\partial^2f}{\partial x_2 \partial x_1} & \frac{\partial^2f}{\partial x_2 \partial x_2} & \ldots & \frac{\partial^2f}{\partial x_2 \partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial^2f}{\partial x_n \partial x_1} & \frac{\partial^2f}{\partial x_n \partial x_2} & \ldots & \frac{\partial^2f}{\partial x_n \partial x_n} \end{pmatrix} $$

The Hessian matrix of a function $f$ is written $\nabla^2f$ or $\boldsymbol H_f$.

Since $\frac{\partial^2 f}{\partial x_i\partial x_j}(\vec x) = \frac{\partial^2 f}{\partial x_j\partial x_i}(\vec x)$, the Hessian matrix is symmetric.

Whether a multivariable function $f$ has an extremum at a point where $\nabla f(\vec x) = \boldsymbol0$ can be determined by examining the Hessian matrix:

- Condition for a local maximum: the Hessian matrix is negative definite
- Condition for a local minimum: the Hessian matrix is positive definite

As a computational example, consider the gradient of a quadratic form.

Here, a **quadratic form** is defined, for an $n\times n$ symmetric matrix $\boldsymbol A$, by

$$ f(\vec x) = \vec x^T \boldsymbol A \vec x $$

Let us compute its gradient.

Writing $\boldsymbol A = (a_{ij})$, we have

$$ f(\vec x) = \sum_{i=1}^n\sum_{j=1}^n a_{ij}x_ix_j $$

Focusing on one index $k$ and asking where $x_k$ appears on the right-hand side, it appears when $i=k$ and when $j=k$; in particular, when $i=j=k$ there is a term $a_{kk}x_k^2$.

When partially differentiating $f$ with respect to $x_k$, only the terms containing $x_k$ matter; the others become 0.

Splitting the terms containing $x_k$ into the three cases $i=j=k$, $i=k$ with $j\neq k$, and $i\neq k$ with $j=k$, the computation goes as follows:

$$ \begin{align} \frac{\partial f}{\partial x_k}(\vec x) &= \frac{\partial}{\partial x_k}\left[a_{kk}x_k^2 + \sum_{j\neq k}a_{kj}x_kx_j + \sum_{i\neq k}a_{ik}x_ix_k\right] \\ &= 2a_{kk}x_k + \sum_{j\neq k}a_{kj}x_j + \sum_{i\neq k}a_{ik}x_i \\ &= 2a_{kk}x_k + 2\sum_{j\neq k}a_{kj}x_j \\ &= 2\sum_{j=1}^n a_{kj}x_j \end{align} $$

Here the notation $\sum_{i\neq k}$ means "sum over $i$ from 1 to $n$, skipping only the case $i=k$".

We have also used the fact that $\boldsymbol A$ is symmetric, so $a_{ik} = a_{ki}$.

From the above, the gradient is given by

$$ \nabla f(\vec x) = \begin{pmatrix}2\sum_{j=1}^na_{1j}x_j \\ 2\sum_{j=1}^na_{2j}x_j \\ \vdots \\ 2\sum_{j=1}^na_{nj}x_j\end{pmatrix} = 2\boldsymbol A \vec x $$

This gradient of a quadratic form is relatively easy to remember because it resembles the fact that the derivative of the quadratic function $y=ax^2$ is $y'=2ax$.

Next we find the Hessian matrix of this $f$.

If the $(k,l)$ entry of the Hessian matrix is $h_{kl}$, then $h_{kl}$ is the partial derivative of the $k$-th component of $\nabla f$ with respect to $x_l$, so

$$ h_{kl} = \frac{\partial}{\partial x_l}\left(2\sum_{j=1}^n a_{kj}x_j\right) = 2a_{kl} $$

Therefore the Hessian matrix is

$$ \nabla^2f(\vec x) = 2\boldsymbol A $$

From the above, the conditions for $f$ to have an extremum are:

- Condition for a local minimum at $\vec x$: $\boldsymbol A\vec x = \boldsymbol0$ and $\boldsymbol A$ is positive definite
- Condition for a local maximum at $\vec x$: $\boldsymbol A\vec x = \boldsymbol0$ and $\boldsymbol A$ is negative definite

#### Exercise

Find the extrema of the following two-variable function and the values of $x$, $y$ at which they occur.

$$ f(x,y) = x^3 + 2xy + y^2 - x $$

#### Answer

First take the partial derivatives with respect to $x$ and $y$:

$$ \begin{align} \frac{\partial}{\partial x}f(x,y) &= 3x^2 + 2y - 1 \\ \frac{\partial}{\partial y}f(x,y) &= 2x + 2y \end{align} $$

Setting these to 0, i.e. solving the system

$$ \begin{cases} 3x^2 + 2y - 1 = 0 \\ 2x + 2y = 0 \end{cases} $$

we obtain

$$ (x,y) = \left(-\frac{1}{3}, \frac{1}{3}\right),\ \left(1, -1\right) $$

These are the candidates for points where $f$ takes an extremum; to check whether they actually are extrema, we compute the second-order partial derivatives:

$$ \begin{align} \frac{\partial^2}{\partial x \partial x}f(x,y) &= 6x \\ \frac{\partial^2}{\partial x \partial y}f(x,y) &= 2 \\ \frac{\partial^2}{\partial y \partial y}f(x,y) &= 2 \end{align} $$

Hence

$$ \nabla^2f(x,y) = \begin{pmatrix}6x & 2 \\ 2 & 2\end{pmatrix} $$

At $(x,y) = \left(-\frac{1}{3}, \frac{1}{3}\right)$ we have $\nabla^2f\left(-\frac{1}{3}, \frac{1}{3}\right) = \begin{pmatrix}-2 & 2 \\ 2 & 2\end{pmatrix}$, which is neither positive definite nor negative definite, so there is no extremum at this point.

On the other hand, at $(x,y) = (1,-1)$ we have $\nabla^2f(1,-1) = \begin{pmatrix}6 & 2 \\ 2 & 2\end{pmatrix}$, which is positive definite, so

$$ f \text{ has a local minimum of } -1 \text{ at } (x,y) = (1,-1) $$

```julia

```
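As a short check of the positive-definiteness claim at $(x,y)=(1,-1)$, the leading principal minors of the Hessian are

$$ \Delta_1 = 6 > 0, \qquad \Delta_2 = \det\begin{pmatrix}6 & 2 \\ 2 & 2\end{pmatrix} = 6\cdot 2 - 2\cdot 2 = 8 > 0, $$

so the Hessian is indeed positive definite, and the minimum value is $f(1,-1) = 1 - 2 + 1 - 1 = -1$.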
# Playing with Sets and Probability

In this chapter, we’ll start by learning how we can make our programs understand and manipulate sets of numbers. We’ll then see how sets can help us understand basic concepts in probability. Finally, we’ll learn about generating random numbers to simulate random events. Let’s get started!

---

## Set Construction

In mathematical notation, you represent a set by writing the set members enclosed in curly brackets. For example, {2, 4, 6} represents a set with 2, 4, and 6 as its members. To create a set in Python, we can use the FiniteSet class from the sympy package, as follows:

```python
from sympy import FiniteSet
s = FiniteSet(2,4,6)
s
```

$\displaystyle \left\{2, 4, 6\right\}$

```python
from sympy import FiniteSet
from fractions import Fraction
s = FiniteSet(1, 1.5, Fraction(1, 5))
print("Set is:" + str(s) + " and its length is:" + str(len(s)))
```

    Set is:FiniteSet(1/5, 1, 1.5) and its length is:3
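Since the chapter will go on to use sets to talk about probability, here is a small preview sketch; it relies only on `FiniteSet` behaviour that sympy provides (`len()`, the union operator `|`, and the intersection operator `&`):

```python
from sympy import FiniteSet

# Sample space of a die roll and the event "roll an even number"
space = FiniteSet(1, 2, 3, 4, 5, 6)
event = FiniteSet(2, 4, 6)

# Probability of the event = |event| / |sample space|
print(len(event) / len(space))   # 0.5

# Union and intersection of two sets
a = FiniteSet(1, 2, 3)
b = FiniteSet(2, 3, 4)
print(a | b)   # union: {1, 2, 3, 4}
print(a & b)   # intersection: {2, 3}
```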
# Matrix Formalism of the Equations of Movement

> Renato Naville Watanabe

> [Laboratory of Biomechanics and Motor Control](http://pesquisa.ufabc.edu.br/bmclab)

> Federal University of ABC, Brazil

<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Forward-Dynamics" data-toc-modified-id="Forward-Dynamics-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Forward Dynamics</a></span></li><li><span><a href="#Problems" data-toc-modified-id="Problems-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Problems</a></span></li><li><span><a href="#References" data-toc-modified-id="References-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>References</a></span></li></ul></div>

This notebook shows two examples of how to use a matrix formalism to perform forward dynamics. It is not a comprehensive treatise on the subject, but rather an introduction based on examples. Nevertheless, the reader of this notebook will have sufficient knowledge to read recent texts on biomechanics and other multibody dynamics analyses.

## Forward Dynamics

For the forward dynamics analysis, we will consider that we know the torques and find the angular accelerations. Naturally, we could begin our analysis from the muscle forces, from the muscle activations, or even from the neural commands from the motor cortex.

*(Figure adapted from Erdemir et al., 2007.)*

As an introduction to the matrix formalism used in multibody analysis, we will consider the double-pendulum system. This system could be used as a model of the arm and forearm of a subject, for example, and is an example of a chaotic system.

In the notebook about [free-body diagram](FreeBodyDiagramForRigidBodies.ipynb#doublependulum), we found the following equations of motion for the double-pendulum with actuators.

<span class="notranslate">
\begin{align}
\begin{split}
\left(\frac{m_1l_1^2}{3} +m_2l_1^2\right)\frac{d^2\theta_1}{dt^2} + \frac{m_2l_1l_2}{2} \cos(\theta_1-\theta_2)\frac{d^2\theta_2}{dt^2} &= -\frac{m_1gl_1}{2}\sin(\theta_1)- m_2l_1g \sin(\theta_1) - \frac{m_2l_1l_2}{2}\left(\frac{d\theta_2}{dt}\right)^2 \sin(\theta_1-\theta_2) + M_1 - M_{12} \\
\frac{m_2l_1l_2}{2}\cos(\theta_1-\theta_2)\frac{d^2\theta_1}{dt^2} + \frac{m_2l_2^2}{3}\frac{d^2\theta_2}{dt^2} &= -\frac{m_2gl_2}{2}\sin(\theta_2) + \frac{m_2l_1l_2}{2}\left(\frac{d\theta_1}{dt}\right)^2 \sin(\theta_1-\theta_2)+ M_{12}
\end{split}
\end{align}
</span>

If we want to simulate this double pendulum, we still need to isolate the angular accelerations of each of the bars. As can be easily noted, this would be too laborious to do by hand. Luckily, numerical integration is performed by computers.

The easiest way to isolate these angular accelerations is to note that we can write the angular accelerations and the numbers multiplying them as a matrix and a vector.
<span class="notranslate"> \begin{equation} \left[\begin{array}{cc}\frac{m_1l_1^2}{3} +m_2l_1^2&\frac{m_2l_1l_2}{2} \cos(\theta_1-\theta_2)\\\frac{m_2l_1l_2}{2}\cos(\theta_1-\theta_2)&\frac{m_2l_2^2}{3}\end{array}\right]\cdot\left[\begin{array}{c}\frac{d^2\theta_1}{dt^2}\\\frac{d^2\theta_2}{dt^2} \end{array}\right] = \left[\begin{array}{c}- \frac{m_2l_1l_2}{2}\left(\frac{d\theta_2}{dt}\right)^2 \sin(\theta_1-\theta_2)-\frac{m_1gl_1}{2}\sin(\theta_1)- m_2l_1g \sin(\theta_1) + M_1 - M_{12}\\ \frac{m_2l_1l_2}{2}\left(\frac{d\theta_1}{dt}\right)^2 \sin(\theta_1-\theta_2)-\frac{m_2gl_2}{2}\sin(\theta_2) + M_{12} \end{array}\right] \end{equation} </span> Typically the equations of motion are divided into a matrix corresponding to the terms involving the angular velocities (centrifugal and Coriolis forces), a matrix corresponding to gravitational force and another matrix corresponding to the forces and torques being applied to the system. <span class="notranslate"> \begin{equation} M(q)\ddot{q} = C(q,\dot{q}) + G(q) + Q + E \end{equation} </span> where - <span class="notranslate">$q$</span> is the vector of the generalized coordinates, like angles and positions; - <span class="notranslate">$M(q)$</span> is the matrix containing the inertia elements like mass and moments of inertia; - <span class="notranslate">$C(q,\dot{q})$</span> is the vector with the forces and moments dependent from the velocities and angular velocities, like centrifugal and Coriolis forces; - <span class="notranslate">$G(q)$</span> is the vector with the forces and torques caused by the gravitational force; - <span class="notranslate">$Q$</span> is the vector with forces and torques being applied to the body, like muscular torques and forces due to the constraints. - <span class="notranslate">$E$</span> is the vector with the forces and torques due to some external element, like springs or the Ground reaction Force. We can divide the equation describing the behavior of the double-pendulum in the matrices above: <span class="notranslate"> \begin{equation} \underbrace{\left[\begin{array}{cc}\frac{m_1l_1^2}{3} +m_2l_1^2&\frac{m_2l_1l_2}{2} \cos(\theta_1-\theta_2)\\\frac{m_2l_1l_2}{2}\cos(\theta_1-\theta_2)&\frac{m_2l_2^2}{3}\end{array}\right]}_{M}\cdot\underbrace{\left[\begin{array}{c}\frac{d^2\theta_1}{dt^2}\\\frac{d^2\theta_2}{dt^2} \end{array}\right]}_{\ddot{q}} = \underbrace{\left[\begin{array}{c}- \frac{m_2l_1l_2}{2}\left(\frac{d\theta_2}{dt}\right)^2 \sin(\theta_1-\theta_2)\\ \frac{m_2l_1l_2}{2}\left(\frac{d\theta_1}{dt}\right)^2 \sin(\theta_1-\theta_2)\end{array}\right]}_{C} + \underbrace{\left[\begin{array}{c}-\frac{m_1gl_1}{2}\sin(\theta_1)- m_2l_1g \sin(\theta_1)\\ -\frac{m_2gl_2}{2}\sin(\theta_2) \end{array}\right]}_{G} + \underbrace{\left[\begin{array}{c} M_1 - M_{12}\\M_{12} \end{array}\right]}_{Q} \end{equation} </span> To solve this differential equation numerically, we must obtain the expression of the angular accelerations. We can perform this by multiplying both sides by the inverse of the matrix $M$. 
<span class="notranslate"> \begin{equation} \left[\begin{array}{c}\frac{d^2\theta_1}{dt^2}\\\frac{d^2\theta_2}{dt^2} \end{array}\right] = \left[\begin{array}{cc}\frac{m_1l_1^2}{3} +m_2l_1^2&\frac{m_2l_1l_2}{2} \cos(\theta_1-\theta_2)\\\frac{m_2l_1l_2}{2}\cos(\theta_1-\theta_2)&\frac{m_2l_2^2}{3}\end{array}\right]^{-1}\cdot\left(\left[\begin{array}{c}- \frac{m_2l_1l_2}{2}\left(\frac{d\theta_2}{dt}\right)^2 \sin(\theta_1-\theta_2)\\ \frac{m_2l_1l_2}{2}\left(\frac{d\theta_1}{dt}\right)^2 \sin(\theta_1-\theta_2)\end{array}\right] + \left[\begin{array}{c}-\frac{m_1gl_1}{2}\sin(\theta_1)- m_2l_1g \sin(\theta_1)\\ -\frac{m_2gl_2}{2}\sin(\theta_2) \end{array}\right] + \left[\begin{array}{c} M_1 - M_{12}\\M_{12} \end{array}\right]\right) \end{equation} </span> Generically, having the differential equations in the format: <span class="notranslate"> \begin{equation} M(q)\ddot{q} = C(q,\dot{q}) + G(q) + Q + E \end{equation} </span> we can obtain the equation to perform the forward dynamics by: <span class="notranslate"> \begin{equation} \ddot{q} = M(q)^{-1}\left[C(q,\dot{q}) + G(q) + Q + E\right] \end{equation} </span> Now that we have the angular accelerations, to solve the equation numerically we must transform the second-order differential equations in first-order differential equations: <span class="notranslate"> \begin{equation} \left[\begin{array}{c}\frac{d\omega_1}{dt}\\\frac{d\omega_2}{dt}\\\frac{d\theta_1}{dt}\\\frac{d\theta_2}{dt} \end{array}\right] = \left[\begin{array}{c}\left[\begin{array}{cc}\frac{m_1l_1^2}{3} +m_2l_1^2&\frac{m_2l_1l_2}{2} \cos(\theta_1-\theta_2)\\\frac{m_2l_1l_2}{2}\cos(\theta_1-\theta_2)&\frac{m_2l_2^2}{3}\end{array}\right]^{-1}\left(\left[\begin{array}{c}- \frac{m_2l_1l_2}{2}\omega_2^2 \sin(\theta_1-\theta_2)\\ \frac{m_2l_1l_2}{2}\omega_1^2 \sin(\theta_1-\theta_2)\end{array}\right]+\left[\begin{array}{c} -\frac{m_1gl_1}{2}\sin(\theta_1)- m_2l_1g \sin(\theta_1) \\-\frac{m_2gl_2}{2}\sin(\theta_2) \end{array}\right] + \left[ \begin{array}{c}M_1 - M_{12}\\M_{12}\end{array}\right]\right)\\ \omega_1\\ \omega_2\end{array}\right] \end{equation} </span> Below is the numerical solution of a double-pendulum with each bar having length equal to 1 m and mass equal to 1 kg. 
```python import numpy as np g = 9.81 m1 = 1 m2 = 1 l1 = 1 l2 = 0.5 theta10 = np.pi/10 theta20 = np.pi/3 omega10 = 2*np.pi omega20 = -6*np.pi dt = 0.0001 t = np.arange(0, 20, dt) state = np.zeros((4, len(t))) state[:,0] = np.array([omega10, omega20, theta10, theta20]) #print(state[0,0]) M1 = 0 M12 = 0 for i in range(1,len(t)): thetaDiff = state[2,i-1] - state[3,i-1] M = np.array([[m1*l1**2/3 + m2*l1**2, m2*l1*l2*np.cos(thetaDiff)/2], [m2*l1*l2*np.cos(thetaDiff)/2, m2*l2**2/3]]) C = np.array([[-m2*l1*l2*np.sin(thetaDiff)*state[1,i-1]**2/2], [m2*l1*l2*np.sin(thetaDiff)*state[0,i-1]**2/2]]) G = np.array([[-m1*g*l1/2*np.sin(state[2,i-1]) - m2*g*l2*np.sin(state[2,i-1])], [-m2*g*l2/2*np.sin(state[3,i-1])]]) #PID control #r = np.pi/3 #M12 = 30*(r-state[3,i-1])- 2*state[1,i-1] + 3*np.trapz(r-state[3,0:i])*dt Q = np.array([[M1 - M12],[M12]]) dstatedt = np.vstack((np.linalg.inv(M)@(C+G+Q),state[0,[i-1]],state[1,[i-1]])) state[:,[i]] = state[:,[i-1]] + dt*dstatedt ``` ```python import matplotlib.pyplot as plt %matplotlib notebook plt.figure() plt.plot(t[0::10], state[3,0::10].T) #plt.plot(t[0::10], r*np.ones_like(t[0::10])) plt.show() ``` <IPython.core.display.Javascript object> ```python plt.figure() step = 3000 for i in range(len(state[2,0::step])): plt.plot([0, l1*np.sin(state[2,i*step])], [0, -l1*np.cos(state[2,i*step])]) plt.plot([l1*np.sin(state[2,i*step]), l1*np.sin(state[2,i*step])+l2*np.sin(state[3,i*step])], [-l1*np.cos(state[2,i*step]), -l1*np.cos(state[2,i*step])-l2*np.cos(state[3,i*step])]) plt.show() ``` <IPython.core.display.Javascript object> ## Problems 1) Solve the problems 19.3.20 and 19.3.24 of the Ruina and Rudra's book by using the Lagrangian formalism (it is much easier than use the Newton-Euler formalism) and then use the matrix formalism to obtain the expressions of the angular accelerations. ## References - YAMAGUCHI, G. T. Dynamic modeling of musculoskeletal motion: a vectorized approach for biomechanical analysis in three dimensions., 2001 - CRAIG, J. Introduction to robotics. , 1989 - JAIN, A. Robot and multibody dynamics. , 2011 - SPONG, M. W.; HUTCHINSON, S.; VIDYASAGAR, M. Robot modeling and control., 2006 - ERDEMIR, A. et al. Model-based estimation of muscle forces exerted during movements. Clinical Biomechanics, v. 22, n. 2, p. 131–154, 2007. - STANEV, D.; MOUSTAKAS, K. Simulation of constrained musculoskeletal systems in task space. IEEE Transactions on Biomedical Engineering, v. 65, n. 2, p. 307–318, 2018. - ZAJAC FE, GORDON ME , [Determining muscle's force and action in multi-articular movement](https://drive.google.com/open?id=0BxbW72zV7WmUcC1zSGpEOUxhWXM&authuser=0). Exercise and Sport Sciences Reviews, 17, 187-230. , 1989 - RUINA A, RUDRA P. [Introduction to Statics and Dynamics](http://ruina.tam.cornell.edu/Book/index.html). Oxford University Press. , 2015
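The explicit Euler loop above can also be handed to a general-purpose integrator. Below is a minimal sketch (not part of the original notebook) that builds the $M$, $C$, $G$ and $Q$ terms exactly as written in the matrix equation in the text and passes them to SciPy's `solve_ivp`; parameter values mirror the simulation cell above.

```python
# Forward dynamics of the double pendulum via scipy.integrate.solve_ivp.
# M, C, G, Q follow the matrix equation in the text above.
import numpy as np
from scipy.integrate import solve_ivp

g, m1, m2, l1, l2 = 9.81, 1.0, 1.0, 1.0, 0.5
M1 = M12 = 0.0

def dstatedt(t, state):
    w1, w2, th1, th2 = state
    d = th1 - th2
    M = np.array([[m1*l1**2/3 + m2*l1**2, m2*l1*l2*np.cos(d)/2],
                  [m2*l1*l2*np.cos(d)/2,  m2*l2**2/3]])
    C = np.array([-m2*l1*l2*np.sin(d)*w2**2/2,
                   m2*l1*l2*np.sin(d)*w1**2/2])
    G = np.array([-m1*g*l1/2*np.sin(th1) - m2*l1*g*np.sin(th1),
                  -m2*g*l2/2*np.sin(th2)])
    Q = np.array([M1 - M12, M12])
    acc = np.linalg.solve(M, C + G + Q)   # angular accelerations
    return [acc[0], acc[1], w1, w2]

state0 = [2*np.pi, -6*np.pi, np.pi/10, np.pi/3]   # [omega1, omega2, theta1, theta2]
sol = solve_ivp(dstatedt, [0, 20], state0,
                t_eval=np.linspace(0, 20, 2000), rtol=1e-8)
```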
$$ \LaTeX \text{ command declarations here.} \newcommand{\N}{\mathcal{N}} \newcommand{\R}{\mathbb{R}} \renewcommand{\vec}[1]{\mathbf{#1}} \newcommand{\norm}[1]{\|#1\|_2} \newcommand{\d}{\mathop{}\!\mathrm{d}} \newcommand{\qed}{\qquad \mathbf{Q.E.D.}} \newcommand{\vx}{\mathbf{x}} \newcommand{\vy}{\mathbf{y}} \newcommand{\vt}{\mathbf{t}} \newcommand{\vb}{\mathbf{b}} \newcommand{\vw}{\mathbf{w}} $$ ```python %matplotlib inline from Lec07 import * ``` # EECS 445: Introduction to Machine Learning ## Lecture 06: Logistic Regression * Instructor: **Jacob Abernethy** and **Jia Deng** * Date: September 26, 2016 *Lecture Exposition Credit:* Benjamin Bray, Saket Dewangan ## Outline - Concept of Classification - Logistic Regression - Intuition, Motivation - Newton's Method ## Reading List - Suggested: - **[MLAPP]**, Chapter 8: Logistic Regression > In this lecture, we will move from regression to classification.Unlike of predicting some value for data in regression, we predict what category data belongs to in classification. And we will introduce logistic regression. In logistic regression, we will show how to find the optimal coefficients $\vw$ using Newton's method. ## Review: Supervised Learning - Goal - Given data $X$ in feature sapce and the labels $Y$ - Learn to predict $Y$ from $X$ - Labels could be discrete or continuous - Discrete-valued labels: Classification - Continuous-valued labels: Regression <center> </center> ## Classification Problem ### Classification Problem: Basics - Given an input vector $\vx$, assign it to one of $K$ distinct classes $C_k$, where $k = 1,\dots,K$. - The case $K=2$ is **Binary Classification** - Label $t=1$ means $x \in C_1$ - Label $t=0$ means $x \in C_2$ (or sometimes $t=-1$) - **Training:** Learn a classifier $y(\vx)$ from data, $$ \text{Training Data} \quad \{ (\vx_1, t_1), \dots, (\vx_N, t_N) \} \implies \text{Classifier} \ y(\vx) $$ - **Prediction:** Predict labels of new data, $$ \text{New Data} \quad \{ (\vx^{new}_1, t^{new}_1), \dots, (\vx^{new}_m, t^{new}_m) \} \stackrel{h}{\implies} \{ y(\vx^{new}_1), \dots, y(x^{new}_m) \} $$ - **Performance Evaluation:** Evaluate learned classifier on test data, $$ \text{Test Data} \quad \{ (\vx^{test}_1, t^{test}_1), \dots, (\vx^{test}_m, t^{test}_m) \} \stackrel{y}{\implies} \{ y(\vx^{test}_1), \dots, y(\vx^{test}_m) \} \implies \text{Error Estimate} $$ - To estimate **classification error**, we could use e.g. *zero-one loss*: $$ E = \frac{1}{m} \sum_{j=1}^m \mathbb{1} [ y(\vx^{test}_j) \neq t^{test}_j) ] $$ i.e. number of misclassified data. ### Classification Problems: Strategies - **Nearest-Neighbors:** Given query data $\vx$, find closest training points and do a majority vote. - **Discriminant Functions:** Learn a function $y(\vx)$ mapping $\vx$ to some class $C_k$. - **Probabilistic Model:** Learn the distributions $P(C_k | \vx)$ - *Discriminative Models* directly model $P(C_k | \vx)$ and learn parameters from the training set. - *Generative Models* learn class-conditional densities $P(\vx | C_k)$ and priors $P(C_k)$ ## Logistic Regression > - Logistic Regression is a technique for **classification**! > - We will focus on *binary* classification ### Logistic Regression: Preliminary—Logistic Sigmoid Function - The **logistic sigmoid function** is $$ \sigma(a) = \frac{1}{1 + \exp(-a)} = \frac{\exp(a)}{1 + \exp(a)} $$ - Sigmoid function $\sigma(a)$ maps $(-\infty, +\infty) \to (0,1)$ <center> </center> ### Logistic Regression: Why use Logistic Sigmoid Function? 
- Prediction is picking the larger of $P(y=1 | \vx)$ and $P(y=0 | \vx)$.
- This can be implemented by evaluating the **log odds**
$$
a = \ln \frac{P(y=1 | \vx)}{P(y=0 | \vx)}
$$
- So the prediction is
$$
y = \left\{\begin{matrix}
1& a\geq 0\\ 
0& a< 0
\end{matrix}\right.
$$
- Since $P(y=1 | \vx) + P(y=0 | \vx)=1$, we can solve for
$$
P(y=1 | \vx) = \frac{\exp(a)}{1+\exp(a)} = \sigma(a)
$$
*Logistic Function* appears!
- A heuristic choice for the log odds is a separating **hyperplane** $a = \vw^T \phi(\vx)$. So the criterion becomes
$$
\boxed{
y = \left\{\begin{matrix}
1& \vw^T \phi(\vx)\geq 0, \quad \text{i.e.} \quad \sigma(\vw^T\phi(\vx)) \geq 0.5\\ 
0& \vw^T \phi(\vx)< 0 , \quad \text{i.e.} \quad \sigma(\vw^T\phi(\vx)) < 0.5
\end{matrix}\right.}
$$
- In this case, $P(y=1 | \vx) = \sigma(\vw^T \phi(\vx))$ and $P(y=0 | \vx) = 1-\sigma(\vw^T \phi(\vx))$.

### Logistic Regression: Underlying Model

- We already have
$$
\begin{align}
P(y=1 | \vx, \vw) &= \sigma(\vw^T \phi(\vx)) \\
P(y=0 | \vx, \vw) &= 1-\sigma(\vw^T \phi(\vx))
\end{align}
$$
- So we can model the **class posterior** using a Bernoulli random variable
$$
y | \vx ,\vw \sim \mathrm{Bernoulli}( \sigma(\vw^T \phi(\vx)) )
$$
- We can obtain the best parameter $\vw$ by maximizing the likelihood of the training data (shown later).
- Logistic regression is the simplest discriminative model that is **linear** in the parameters.

### Logistic Regression: Example

```python
plt.figure(figsize=(10,6)); plot_linear_boundary();
```

- We can clearly see the linear boundary corresponding to $\vw^T \phi(\vx)$

### Logistic Regression: Likelihood

- We saw before that the **likelihood** for each binary label is:
$$
\begin{align}
P(y = 1 | \vx,\vw) &= \sigma(\vw^T \phi(\vx)) \\
P(y = 0 | \vx,\vw) &= 1 - \sigma(\vw^T \phi(\vx))
\end{align}
$$
- With a clever trick, these can be combined into a single expression:
$$
P(y | \vx,\vw) = \sigma(\vw^T \phi(\vx))^y \cdot (1 - \sigma(\vw^T \phi(\vx)))^{1-y}
$$
- For a data set $\{(\vx_n, t_n) \}_{n=1}^N$ where $t_n \in \{ 0,1 \}$, the **likelihood function** is
$$
P(\vy = \vt| \mathcal{X}, \vw) = \prod_{n=1}^N P(y=t_n | \vx_n, \vw) =\prod_{n=1}^N \sigma(\vw^T \phi(\vx_n))^{t_n} [1 - \sigma(\vw^T \phi(\vx_n))] ^{1-t_n}
$$
- where $\mathcal{X} = \{\vx_n \}_{n=1}^N$
- The optimal $\vw$ can be obtained by maximizing this likelihood.
- The maximum likelihood estimate $\vw_{ML}$ makes sense because $\vw_{ML}$ is the coefficient vector that is most likely to produce $\{t_n \}_{n=1}^N$ given $\mathcal{X}$.
- Define **negative log-likelihood** as the **loss** $$ E(\vw) \triangleq -\ln P(\vy = \vt| \mathcal{X}, \vw) $$ - Maximizing **likelihood** is equivalent to minimizing **loss** $E(\vw)$ ### Logistic Regression: Gradient of Loss - Loss function $E(\vw)$ can be transformed: $$ \begin{align} E(\vw) &= -\ln P(\vy = \vt| \mathcal{X}, \vw) \\ &= -\ln \prod \nolimits_{n=1}^N \sigma(\vw^T \phi(\vx_n))^{t_n} [1 - \sigma(\vw^T \phi(\vx_n))] ^{1-t_n} \\ &= -\sum \nolimits_{n=1}^N \left[ t_n \ln \sigma(\vw^T \phi(\vx_n)) + (1-t_n) \ln(1-\sigma(\vw^T \phi(\vx_n))) \right] \\ &= -\sum \nolimits_{n=1}^N \left[ t_n \ln \frac{\exp(\vw^T\phi(\vx_n))}{1+\exp(\vw^T\phi(\vx_n))} + (1-t_n) \ln(\frac{1}{1+\exp(\vw^T\phi(\vx_n))}) \right] \\ &= -\sum \nolimits_{n=1}^N \left[ t_n \ln \frac{1}{1+\exp(-\vw^T\phi(\vx_n))} + (1-t_n) \ln(\frac{1}{1+\exp(\vw^T\phi(\vx_n))}) \right] \\ &= \boxed{\sum \nolimits_{n=1}^N \left[ t_n \ln (1+\exp(-\vw^T\phi(\vx_n))) + (1-t_n) \ln(1+\exp(\vw^T\phi(\vx_n))) \right] }\\ \end{align} $$ - Gradient of loss $\nabla_\vw E(\vw)$ $$ \begin{align} \nabla_\vw E(\vw) &= \nabla_\vw \sum \nolimits_{n=1}^N \left[ t_n \ln (1+\exp(-\vw^T\phi(\vx_n))) + (1-t_n) \ln(1+\exp(\vw^T\phi(\vx_n))) \right] \\ &= \sum \nolimits_{n=1}^N \left[-t_n \frac{\exp(-\vw^T\phi(\vx_n))}{1+\exp(-\vw^T\phi(\vx_n))} \phi(\vx_n)+ (1-t_n) \frac{\exp(\vw^T\phi(\vx_n))}{1+\exp(\vw^T\phi(\vx_n))} \phi(\vx_n) \right] \\ &= \sum \nolimits_{n=1}^N \left[-t_n (1-\sigma(\vw^T\phi(\vx_n)))+ (1-t_n) \sigma(\vw^T\phi(\vx_n)) \right] \phi(\vx_n) \\ &= \sum \nolimits_{n=1}^N \left[\sigma(\vw^T\phi(\vx_n)) - t_n \right] \phi(\vx_n) \\ &= \boxed{ \Phi^T \left( \sigma(\Phi \vw) - \vt \right) } \end{align} $$ of which $$ \Phi = \begin{bmatrix} - & \phi(\vx_1)^T & -\\ & \vdots & \\ - & \phi(\vx_N)^T & - \end{bmatrix}_{N \times M} \qquad \sigma(\Phi \vw)=\begin{bmatrix} \sigma(\vw^T\phi(\vx_1))\\ \vdots\\ \sigma(\vw^T\phi(\vx_N)) \end{bmatrix}_{N \times 1} \qquad \vt = \begin{bmatrix} t_1\\ \vdots\\ t_N \end{bmatrix}_{N \times 1} $$ - With the gradient of loss, we could perform *gradient descent* to find $\vw_{ML}$. - But we will use a new method by finding roots of first order derivative! > Remark > - Note that this gradient resembles the gradient in linear regression with least squares (Check Lecture 4) $$ \begin{align} \text{Logistic Regression} \quad & \nabla_\vw E(\vw) = \Phi^T \left( \sigma(\Phi \vw) - \vt \right) \\ \text{Linear Regression} \quad & \nabla_\vw E(\vw) = \Phi^T \left( \Phi \vw - \vt \right) \end{align} $$ ### Newton's Method: Overview - First let's consider one dimension case. - **Goal:** Finding *root* of a general function $f(x)$, i.e. solve for $x$ such that $$f(x)=0$$ - **Newton's Method:** Repeat until convergence: $$ x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} $$ ### Newton's Method: Geometric Intuition - Find the roots of $f(x)$ by following its **tangent lines**. The tangent line of $f(x)$ at $x_n$ has equation $$ \ell(x) = f(x_n) + (x-x_n) f'(x_n) $$ - Set next iterate $x_{n+1}$ to be **root** of tangent line: $$ \begin{gather} f(x_n) + (x-x_n) f'(x_n) = 0 \\ \implies \boxed{ x_{n+1}= x_n - \frac{f(x_n)}{f'(x_n)} } \end{gather} $$ <center> </center> ### Newton's Method: Example ```python def fn(x): return (x-2)**3; def d1(x): return 3*(x-2)**2; newton_example(fn,d1) ``` > Remark > - Here is how Newton's method works > - Given some initial point $x_1$, we first find the tangent line $\ell_1(x)$ of $f(x)$ at $x_1$. > - Let $x_2$ denote the root of $\ell_1(x)$, i.e. 
$\ell_1(x_2)=0$ > - Find the tangent line $\ell_2(x)$ of $f(x)$ at $x_2$. > - Let $x_3$ denote the root of $\ell_2(x)$, i.e. $\ell_1(x_3)=0$ ### Newton's Method: Finding Stationary Point - We have shown how to use Newton's method to find the root for $f(x)$ $$ x_{n+1}= x_n - \frac{f(x_n)}{f'(x_n)} $$ - **Note** that Stationary point of $f(x)$ is equivalent to root of $f'(x)$ - So, we could find stationary point of $f(x)$ by finding root of $f'(x)$ using Newton's method. - The iteration steps becomes $$ x_{n+1}= x_n - \frac{f'(x_n)}{f''(x_n)} $$ - For *multi-dimension* case, this iteration turns into $$ \vx_{n+1}= \vx_n - \left(\nabla^2 f(\vx_n)\right)^{-1} \nabla_\vx f(\vx_n) $$ of which $\nabla^2 f(\vx_n)$ is **Hessian matrix** which is the *second order derivative* $$ \nabla^2 f = \begin{bmatrix} \frac{\partial f}{\partial x_1\partial x_1} & \cdots & \frac{\partial f}{\partial x_1\partial x_n}\\ \vdots & \ddots & \vdots\\ \frac{\partial f}{\partial x_n\partial x_1} & \cdots & \frac{\partial f}{\partial x_n\partial x_n} \end{bmatrix} $$ ### Logistic Regression: Applying Newton's Method - Back to logistic regression! - Recall our goal to minimize $E(\vw)$ and we already have its gradient $$ \nabla_\vw E(\vw) = \sum \nolimits_{n=1}^N \left[\sigma(\vw^T\phi(\vx_n)) - t_n \right] \phi(\vx_n) = \Phi^T \left( \sigma(\Phi \vw) - \vt \right) $$ - To minimize of $E(\vw)$, we could use Newton's method to find its *stationary point*! - To use Newton's method, we need the *Hessian matrix*. ### Logistic Regression: Hessian Matrix $$ \begin{align} \nabla^2 E(\vw) &= \nabla_\vw \nabla_\vw E(\vw) \\ &= \nabla_\vw \sum \nolimits_{n=1}^N \left[\sigma(\vw^T\phi(\vx_n)) - t_n \right] \phi(\vx_n) \\ &= \sum \nolimits_{n=1}^N \nabla_\vw \sigma(\vw^T\phi(\vx_n)) \phi(\vx_n) \\ &= \sum \nolimits_{n=1}^N \nabla_\vw \frac{1}{1 + \exp(-\vw^T \phi(\vx_n))} \phi(\vx_n) \\ &= \sum \nolimits_{n=1}^N \phi(\vx_n) \frac{\exp(-\vw^T \phi(\vx_n))}{(1 + \exp(-\vw^T \phi(\vx_n)))^2} \phi(\vx_n)^T \\ &= \sum \nolimits_{n=1}^N \phi(\vx_n) \frac{1}{1 + \exp(-\vw^T \phi(\vx_n))} \frac{\exp(-\vw^T \phi(\vx_n))}{1 + \exp(-\vw^T \phi(\vx_n))} \phi(\vx_n)^T \\ &= \sum \nolimits_{n=1}^N \phi(\vx_n) [\sigma(\vw^T \phi(\vx_n)) \cdot ( 1 - \sigma(\vw^T \phi(\vx_n)) )] \phi(\vx_n)^T \\ &= \sum \nolimits_{n=1}^N \phi(\vx_n) r_n(\vw) \phi(\vx_n)^T \end{align} $$ - of which $r_n(\vw) = \sigma(\vw^T \phi(\vx_n)) \cdot ( 1 - \sigma(\vw^T \phi(\vx_n)) )$ $$ \begin{align} H_\vw E(\vw) &= \sum \nolimits_{n=1}^N \phi(\vx_n) r_n(\vw) \phi(\vx_n)^T \\ &= \begin{bmatrix} | & & | \\ \phi(\vx_1) & \cdots & \phi(\vx_N)\\ | & & | \end{bmatrix} \begin{bmatrix} r_1(\vw) & & \\ & \ddots & \\ & & r_N(\vw) \end{bmatrix} \begin{bmatrix} - & \phi(\vx_1)^T & -\\ & \vdots & \\ - & \phi(\vx_N)^T & - \end{bmatrix} \\ &= \boxed{\Phi^T R(\vw) \Phi} \end{align} $$ - of which $$ R(\vw) = \begin{bmatrix} r_1(\vw) & & & \\ & r_2(\vw) & & \\ & & \ddots & \\ & & & r_N(\vw) \end{bmatrix} $$ ### Logistic Regression: Applying Newton's Method - We already have $$ \begin{gather} \nabla_\vw E(\vw) = \Phi^T \left( \sigma(\Phi \vw) - \vt \right) \\ H_\vw E(\vw) = \Phi^T R(\vw) \Phi \end{gather} $$ - So the iteration step is $$ \begin{align} \vw_{n+1} &= \vw_n - \left(H_\vw E(\vw_n)\right)^{-1} \nabla_\vw f(\vw_n) \\ &= \boxed{\vw_n - \left(\Phi^T R(\vw_n) \Phi \right)^{-1} \Phi^T \left( \sigma(\Phi \vw_n) - \vt \right)} \end{align} $$ - Repeat until convergence and we could get maximum likelihood estimate $\vw_{ML}$ which minimizes the loss function $E(\vw)$ and maximizes 
likelihood function $ P(\vy = \vt| \mathcal{X}, \vw)$ ### Logistic Regression: Do we have closed-form solution? - Recall for **ordinary least squares** and **regularized least squares**, we have closed-form solution: | | Ordinary Least Squares | Regularized Least Squares | | ------------- | :-------------: | :-------------: | | **Derivate of Loss Function** | $\Phi^T\Phi \vec{w} - \Phi^T \vec{t}$ | $(\Phi^T \Phi + \lambda I)\vec{w} - \Phi^T \vec{t}$ | | **Closed-form Solution** | $(\Phi^T \Phi)^{-1} \Phi^T \vec{t}$ | $(\Phi^T \Phi + \lambda I)^{-1} \Phi^T \vec{t}$ | - They are obtained by finding the closed-form root of derivative of loss function。 - For logistic regression, we have $$ \begin{gather} \nabla_\vw E(\vw) = 0 \\ \Downarrow \\ \Phi^T \left( \sigma(\Phi \vw) - \vt \right) = 0 \end{gather} $$ - Existence of sigmoid function makes $\nabla_\vw E(\vw)$ **nonlinear** and no closed-form solution exists. - So we must **iterate**! ### Appendix: Multi-class Classification using Logistic Regression - We have seen sigmoid function enables us to do binary classification with logistic regression. - What if we want have multiple classes? - We will resort to **softmax** aka **normalized exponential** function - **Softmax Function** $$ p_k = \frac{\exp(q_k)}{\sum_j \exp(q_j)} $$ Given any real numbers $q_1, \ldots, q_n$, we can generate a distribution on them using softmax function. - Recall in binary case, we have $$ P(y = 1 | \vx,\vw) = \sigma(\vw^T \phi(\vx)) $$ - For K-class classification, we define $\mathcal{W} = {\vw}_{k=1}^K$. - The probablity data $\vx$ belongs to class $j$ is $$ P(y = j | \vx,\mathcal{W}) = \frac{\exp(\vw_j^T \phi(\vw))}{\sum_{k=1}^{K} \exp(\vw_k^T \phi(\vw))} $$ - We classify using $$ y = \underset{j \in \{1,\dots, K\}}{\arg \max} P(y = j | \vx,\mathcal{W}) $$ - Similarly, $\mathcal{W} = {\vw}_{k=1}^K$ is learned by maximizing likelihood function. - For details, please refer to [this](http://ufldl.stanford.edu/wiki/index.php/Softmax_Regression)
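To make the Newton/IRLS update above concrete, here is a minimal NumPy sketch on synthetic data. All names are illustrative, and a tiny ridge term is added to the Hessian purely for numerical stability (an assumption, not part of the derivation above):

```python
# Newton's method (IRLS) for logistic regression:
# w <- w - (Phi^T R Phi)^{-1} Phi^T (sigma(Phi w) - t)
import numpy as np

rng = np.random.RandomState(0)
N = 200
X = rng.randn(N, 2)
true_w = np.array([1.0, 2.0, -1.0])              # illustrative bias + weights
Phi = np.hstack([np.ones((N, 1)), X])            # design matrix with bias column

def sigmoid(a):
    return 1/(1 + np.exp(-a))

t = (rng.rand(N) < sigmoid(Phi @ true_w)).astype(float)

w = np.zeros(Phi.shape[1])
for it in range(10):
    p = sigmoid(Phi @ w)                         # sigma(Phi w)
    grad = Phi.T @ (p - t)                       # gradient of the loss
    R = np.diag(p*(1 - p))                       # R(w), with r_n = sigma_n (1 - sigma_n)
    H = Phi.T @ R @ Phi + 1e-8*np.eye(len(w))    # Hessian (+ tiny ridge, an assumption)
    w = w - np.linalg.solve(H, grad)             # Newton step

print(w)   # should land near true_w
```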
## Week 3 MA544
---
**Objectives and Plan**
1. Pseudo-inverse (Moore–Penrose inverse) properties
1. Linear Systems of Equations and Gaussian Elimination with pivoting
1. LU Decomposition of A
1. QR Decomposition of a matrix
1. Iterative solution of Linear Systems

```python
#IMPORT
import sys   # used by GE_rsp below for error exits
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
## Set a seed for the random number generator
np.random.seed(100)
```

## Linear System of Equations
---
Consider the following system of $m$ linear equations in $n$ variables.
\begin{align}
a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n &= b_1 \\
a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n &= b_2 \\
\vdots \qquad \qquad & \ \\
a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n &= b_m,
\end{align}

- The solution of a linear system represents the **point of intersection of hyperplanes**.
- The solution also represents the **linear coding of the columns** of a matrix $A$ to get a vector $b$ in the column space ($\mathcal{R}(A)$):
$$
x_1 \begin{pmatrix}a_{11}\\a_{21}\\ \vdots \\a_{m1}\end{pmatrix} + x_2 \begin{pmatrix}a_{12}\\a_{22}\\ \vdots \\a_{m2}\end{pmatrix} + \cdots + x_n \begin{pmatrix}a_{1n}\\a_{2n}\\ \vdots \\a_{mn}\end{pmatrix} = \begin{pmatrix}b_1\\b_2\\ \vdots \\b_m\end{pmatrix}
$$
- The system could be represented in a compact form as $Ax = b$, where
$$
A= \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{pmatrix},\quad
\mathbf{x}= \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix},\quad
\mathbf{b}= \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix}
$$

>- The system is called...
>>- overdetermined when $m > n$.
>>- underdetermined when $ m < n$.

## Gaussian elimination with scaled-row partial pivoting
---
>- What is the need for pivoting?
>- How can you find a factorization of the matrix by using the following code?
>- How can you modify the code to find the determinant of a matrix?
>- How can you modify the following code to solve a linear system?
>- How can you modify the code to find the inverse of a matrix?

```python
## Gaussian Elimination: Scaled Row Pivoting
## This function is based on the pseudo-code on page-148 in the Text by Kincaid and Cheney
def GE_rsp(A):
    '''
    This function returns the P'LU factorization of a square matrix A
    by scaled row partial pivoting.
    In place of returning L and U, elements of the modified A are used to hold the values of L and U.
    '''
    m,n = A.shape
    L = np.eye(n)        # Not being used
    U = np.zeros_like(A) # Not being used
    if m != n:
        sys.exit("This function needs a square matrix as an input.")

    # The initial ordering of rows
    p = list(range(n))

    # Scaling vector: absolute maximum elements of each row
    s = np.max(np.abs(A), axis=1)
    print("Scaling Vector: ", s)

    # Start the n-1 passes of Gaussian Elimination on A
    for k in range(n-1):
        print("\n PASS {}: \n".format(k+1), A)

        # Find the pivot element and interchange the rows
        pivot_index = k + np.argmax(np.abs(A[p[k:], k])/s[p[k:]])

        # Interchange elements in the permutation vector
        if pivot_index != k:
            temp = p[k]
            p[k] = p[pivot_index]
            p[pivot_index] = temp
            print("permutation vector: ", p)

        print("\n Pivot Element: {0:.2f} \n".format(A[p[k],k]))

        if np.abs(A[p[k],k]) < 10**(-20):
            sys.exit("ERROR!! Provided matrix is singular.")

        # For the k-th pivot row, perform the Gaussian elimination on the following rows
        for i in range(k+1, n):
            # Find the multiplier
            z = A[p[i],k]/A[p[k],k]
            # Save z in A itself. You can save this in L also
            A[p[i],k] = z
            # Elimination operation: changes all trailing elements of the row simultaneously
            A[p[i],k+1:] -= z*A[p[k],k+1:]

    return A, p
```

```python
## Example on page number 146 (Kincaid Cheney).
## Example solved manually in class
A = np.array([[2, 3, -6], [1,-6,8], [3, -2, 1]], dtype=float)
print("\n Given A: \n ", A)
A, p = GE_rsp(A)
print("\n After Gaussian Elimination with RSPP: \n", A)
print("\n The permutation Vector is: \n", p)
```

     Given A: 
      [[ 2.  3. -6.]
     [ 1. -6.  8.]
     [ 3. -2.  1.]]
    Scaling Vector:  [6. 8. 3.]
    
     PASS 1: 
     [[ 2.  3. -6.]
     [ 1. -6.  8.]
     [ 3. -2.  1.]]
    permutation vector:  [2, 1, 0]
    
     Pivot Element: 3.00 
    
     PASS 2: 
     [[ 0.66666667  4.33333333 -6.66666667]
     [ 0.33333333 -5.33333333  7.66666667]
     [ 3.         -2.          1.        ]]
    permutation vector:  [2, 0, 1]
    
     Pivot Element: 4.33 
    
     After Gaussian Elimination with RSPP: 
     [[ 0.66666667  4.33333333 -6.66666667]
     [ 0.33333333 -1.23076923 -0.53846154]
     [ 3.         -2.          1.        ]]
    
     The permutation Vector is: 
     [2, 0, 1]

### LU Factorization
---
One can use the Gaussian elimination to decompose the matrix A as follows
$$
P A = L U,\text{ or }\ A = P^T L U;
$$
where $P$ is a permutation matrix, $L$ is a unit lower-triangular matrix and $U$ is an upper triangular matrix.

```python
print("\n Upper triangular, U:\n ", np.triu(A[p,:]))
print("\n Lower triangular, L:\n", np.tril(A[p,:], -1)+np.eye(3))
print("The Permutation Matrix, P:\n", np.eye(3)[p,:])
```

     Upper triangular, U:
      [[ 3.         -2.          1.        ]
     [ 0.          4.33333333 -6.66666667]
     [ 0.          0.         -0.53846154]]
    
     Lower triangular, L:
     [[ 1.          0.          0.        ]
     [ 0.66666667  1.          0.        ]
     [ 0.33333333 -1.23076923  1.        ]]
    The Permutation Matrix, P:
     [[ 0.  0.  1.]
     [ 1.  0.  0.]
     [ 0.  1.  0.]]

## Iterative Methods for Linear Systems
---
### Jacobi Method

```python
# You can modify this code to answer the following
def jacobi(A, b, p0, tol, maxIter=100):
    '''
    Jacobi's iteration method for solving the system of equations Ax=b.
    p0 is the initialization for the iteration.
    '''
    n = len(A)
    p = p0

    for k in range(maxIter):
        p_old = p.copy() # In python assignment is not the same as copy

        # Update every component of iterant p
        for i in range(n):
            sumi = b[i]
            for j in range(n):
                if i == j: # Diagonal elements are not included in Jacobi
                    continue
                sumi = sumi - A[i,j]*p_old[j]
            p[i] = sumi/A[i,i]

        rel_error = np.linalg.norm(p-p_old)/n
        # print("Relative error in iteration", k+1, ":", rel_error)
        if rel_error < tol:
            print("TOLERANCE MET BEFORE MAX-ITERATION")
            break
    return p
```

```python
# Example System
A = np.array([[10, -1, 2, 0],
              [-1, 11, -1, 3],
              [2, -1, 10, -1],
              [0, 3, -1, 8]], dtype=float)

b = np.array([6, 25, -11, 15], dtype=float)
```

```python

```
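A usage sketch for the cells above (the initial guess and tolerance are illustrative; for this diagonally dominant system the iteration converges to $[1, 2, -1, 1]$):

```python
p0 = np.zeros_like(b)                 # initial guess
x = jacobi(A, b, p0, tol=1e-10, maxIter=200)
print(x)                              # approximately [ 1.  2. -1.  1.]
print(A @ x - b)                      # residual, close to zero
```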
```python from preamble import * %matplotlib notebook import matplotlib as mpl mpl.rcParams['legend.numpoints'] = 1 ``` ## Evaluation Metrics and scoring ### Metrics for binary classification ```python from sklearn.model_selection import train_test_split data = pd.read_csv("data/bank-campaign.csv") X = data.drop("target", axis=1).values y = data.target.values X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0) ``` ```python from sklearn.dummy import DummyClassifier dummy_majority = DummyClassifier(strategy='most_frequent').fit(X_train, y_train) pred_most_frequent = dummy_majority.predict(X_test) print("predicted labels: %s" % np.unique(pred_most_frequent)) print("score: %f" % dummy_majority.score(X_test, y_test)) ``` predicted labels: ['no'] score: 0.887540 ```python from sklearn.tree import DecisionTreeClassifier tree = DecisionTreeClassifier(max_depth=2).fit(X_train, y_train) pred_tree = tree.predict(X_test) tree.score(X_test, y_test) ``` 0.90278721957851804 ```python from sklearn.linear_model import LogisticRegression dummy = DummyClassifier().fit(X_train, y_train) pred_dummy = dummy.predict(X_test) print("dummy score: %f" % dummy.score(X_test, y_test)) logreg = LogisticRegression(C=0.1).fit(X_train, y_train) pred_logreg = logreg.predict(X_test) print("logreg score: %f" % logreg.score(X_test, y_test)) ``` dummy score: 0.803729 logreg score: 0.912013 #### Confusion matrices ```python from sklearn.metrics import confusion_matrix confusion = confusion_matrix(y_test, pred_logreg) print(confusion) ``` [[8911 228] [ 678 480]] ```python mglearn.plots.plot_binary_confusion_matrix() ``` <IPython.core.display.Javascript object> ```python print("Most frequent class:") print(confusion_matrix(y_test, pred_most_frequent)) print("\nDummy model:") print(confusion_matrix(y_test, pred_dummy)) print("\nDecision tree:") print(confusion_matrix(y_test, pred_tree)) print("\nLogistic Regression") print(confusion_matrix(y_test, pred_logreg)) ``` Most frequent class: [[9139 0] [1158 0]] Dummy model: [[8111 1028] [1024 134]] Decision tree: [[8809 330] [ 671 487]] Logistic Regression [[8911 228] [ 678 480]] ##### Relation to accuracy \begin{equation} \text{Accuracy} = \frac{\text{TP} + \text{TN}}{\text{TP} + \text{TN} + \text{FP} + \text{FN}} \end{equation} #### Precision, recall and f-score \begin{equation} \text{Precision} = \frac{\text{TP}}{\text{TP} + \text{FP}} \end{equation} \begin{equation} \text{Recall} = \frac{\text{TP}}{\text{TP} + \text{FN}} \end{equation} \begin{equation} \text{F} = 2 \cdot \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}} \end{equation} ```python from sklearn.metrics import f1_score print("f1 score most frequent: %.2f" % f1_score(y_test, pred_most_frequent, pos_label="yes")) print("f1 score dummy: %.2f" % f1_score(y_test, pred_dummy, pos_label="yes")) print("f1 score tree: %.2f" % f1_score(y_test, pred_tree, pos_label="yes")) print("f1 score: %.2f" % f1_score(y_test, pred_logreg, pos_label="yes")) ``` f1 score most frequent: 0.00 f1 score dummy: 0.12 f1 score tree: 0.49 f1 score: 0.51 /home/andy/checkout/scikit-learn/sklearn/metrics/classification.py:1117: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 due to no predicted samples. 
'precision', 'predicted', average, warn_for) ```python from sklearn.metrics import classification_report print(classification_report(y_test, pred_most_frequent, target_names=["no", "yes"])) ``` precision recall f1-score support no 0.89 1.00 0.94 9139 yes 0.00 0.00 0.00 1158 avg / total 0.79 0.89 0.83 10297 /home/andy/checkout/scikit-learn/sklearn/metrics/classification.py:1117: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. 'precision', 'predicted', average, warn_for) ```python print(classification_report(y_test, pred_tree, target_names=["no", "yes"])) ``` precision recall f1-score support no 0.93 0.96 0.95 9139 yes 0.60 0.42 0.49 1158 avg / total 0.89 0.90 0.90 10297 ```python print(classification_report(y_test, pred_logreg, target_names=["no", "yes"])) ``` precision recall f1-score support no 0.93 0.98 0.95 9139 yes 0.68 0.41 0.51 1158 avg / total 0.90 0.91 0.90 10297 # Taking uncertainty into account ```python from mglearn.datasets import make_blobs from sklearn.svm import SVC X, y = make_blobs(n_samples=(400, 50), centers=2, cluster_std=[7.0, 2], random_state=22) X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0) svc = SVC(gamma=.05).fit(X_train, y_train) ``` ```python mglearn.plots.plot_decision_threshold() ``` <IPython.core.display.Javascript object> ```python print(classification_report(y_test, svc.predict(X_test))) ``` precision recall f1-score support 0 0.97 0.89 0.93 104 1 0.35 0.67 0.46 9 avg / total 0.92 0.88 0.89 113 ```python y_pred_lower_threshold = svc.decision_function(X_test) > -.8 ``` ```python print(classification_report(y_test, y_pred_lower_threshold)) ``` precision recall f1-score support 0 1.00 0.82 0.90 104 1 0.32 1.00 0.49 9 avg / total 0.95 0.83 0.87 113 ## Precision-Recall curves and ROC curves ```python from sklearn.metrics import precision_recall_curve precision, recall, thresholds = precision_recall_curve(y_test, svc.decision_function(X_test)) ``` ```python # create a similar dataset as before, but with more samples to get a smoother curve X, y = make_blobs(n_samples=(4000, 500), centers=2, cluster_std=[7.0, 2], random_state=22) X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0) svc = SVC(gamma=.05).fit(X_train, y_train) precision, recall, thresholds = precision_recall_curve( y_test, svc.decision_function(X_test)) # find threshold closest to zero: close_zero = np.argmin(np.abs(thresholds)) plt.figure() plt.plot(precision[close_zero], recall[close_zero], 'o', markersize=10, label="threshold zero", fillstyle="none", c='k', mew=2) plt.plot(precision, recall, label="precision recall curve") plt.xlabel("precision") plt.ylabel("recall") plt.title("precision_recall_curve"); plt.legend(loc="best") ``` <IPython.core.display.Javascript object> <matplotlib.legend.Legend at 0x7fe27528b940> ```python from sklearn.ensemble import RandomForestClassifier rf = RandomForestClassifier(n_estimators=100, random_state=0, max_features=2) rf.fit(X_train, y_train) # RandomForestClassifier has predict_proba, but not decision_function precision_rf, recall_rf, thresholds_rf = precision_recall_curve( y_test, rf.predict_proba(X_test)[:, 1]) plt.figure() plt.plot(precision, recall, label="svc") plt.plot(precision[close_zero], recall[close_zero], 'o', markersize=10, label="threshold zero svc", fillstyle="none", c='k', mew=2) plt.plot(precision_rf, recall_rf, label="rf") close_default_rf = np.argmin(np.abs(thresholds_rf - 0.5)) plt.plot(precision_rf[close_default_rf], 
recall_rf[close_default_rf], '^', markersize=10, label="threshold 0.5 rf", fillstyle="none", c='k', mew=2) plt.xlabel("precision") plt.ylabel("recall") plt.legend(loc="best") plt.title("precision_recall_comparison"); ``` <IPython.core.display.Javascript object> ```python print("f1_score of random forest: %f" % f1_score(y_test, rf.predict(X_test))) print("f1_score of svc: %f" % f1_score(y_test, svc.predict(X_test))) ``` f1_score of random forest: 0.609756 f1_score of svc: 0.655870 ```python from sklearn.metrics import average_precision_score ap_rf = average_precision_score(y_test, rf.predict_proba(X_test)[:, 1]) ap_svc = average_precision_score(y_test, svc.decision_function(X_test)) print("average precision of random forest: %f" % ap_rf) print("average precision of svc: %f" % ap_svc) ``` average precision of random forest: 0.665737 average precision of svc: 0.662636 #### Receiver Operating Characteristics (ROC) and AUC \begin{equation} \text{FPR} = \frac{\text{FP}}{\text{FP} + \text{TN}} \end{equation} ```python from sklearn.metrics import roc_curve fpr, tpr, thresholds = roc_curve(y_test, svc.decision_function(X_test)) plt.figure() plt.plot(fpr, tpr, label="ROC Curve") plt.xlabel("FPR") plt.ylabel("TPR (recall)") plt.title("roc_curve"); # find threshold closest to zero: close_zero = np.argmin(np.abs(thresholds)) plt.plot(fpr[close_zero], tpr[close_zero], 'o', markersize=10, label="threshold zero", fillstyle="none", c='k', mew=2) plt.legend(loc=4) ``` <IPython.core.display.Javascript object> <matplotlib.legend.Legend at 0x7fe273a24be0> ```python from sklearn.metrics import roc_curve fpr_rf, tpr_rf, thresholds_rf = roc_curve(y_test, rf.predict_proba(X_test)[:, 1]) plt.figure() plt.plot(fpr, tpr, label="ROC Curve SVC") plt.plot(fpr_rf, tpr_rf, label="ROC Curve RF") plt.xlabel("FPR") plt.ylabel("TPR (recall)") plt.title("roc_curve_comparison"); plt.plot(fpr[close_zero], tpr[close_zero], 'o', markersize=10, label="threshold zero SVC", fillstyle="none", c='k', mew=2) close_default_rf = np.argmin(np.abs(thresholds_rf - 0.5)) plt.plot(fpr_rf[close_default_rf], tpr[close_default_rf], '^', markersize=10, label="threshold 0.5 RF", fillstyle="none", c='k', mew=2) plt.legend(loc=4) ``` <IPython.core.display.Javascript object> <matplotlib.legend.Legend at 0x7fe27399a5f8> ```python from sklearn.metrics import roc_auc_score rf_auc = roc_auc_score(y_test, rf.predict_proba(X_test)[:, 1]) svc_auc = roc_auc_score(y_test, svc.decision_function(X_test)) print("AUC for Random Forest: %f" % rf_auc) print("AUC for SVC: %f" % svc_auc) ``` AUC for Random Forest: 0.936695 AUC for SVC: 0.916294 ```python X = data.drop("target", axis=1).values y = data.target.values X.shape ``` (41188, 63) ```python X_train, X_test, y_train, y_test = train_test_split( X, y, random_state=0, train_size=.1, test_size=.1) plt.figure() for gamma in [1, 0.01, 0.001]: svc = SVC(gamma=gamma).fit(X_train, y_train) accuracy = svc.score(X_test, y_test) auc = roc_auc_score(y_test == "yes", svc.decision_function(X_test)) fpr, tpr, _ = roc_curve(y_test , svc.decision_function(X_test), pos_label="yes") print("gamma = %.03f accuracy = %.02f AUC = %.02f" % (gamma, accuracy, auc)) plt.plot(fpr, tpr, label="gamma=%.03f" % gamma, linewidth=4) plt.xlabel("FPR") plt.ylabel("TPR") plt.xlim(-0.01, 1) plt.ylim(0, 1.02) plt.legend(loc="best") ``` <IPython.core.display.Javascript object> gamma = 1.000 accuracy = 0.89 AUC = 0.62 gamma = 0.010 accuracy = 0.89 AUC = 0.89 gamma = 0.001 accuracy = 0.91 AUC = 0.88 <matplotlib.legend.Legend at 0x7ff697fe0208> ### 
Multi-class classification ```python from sklearn.metrics import accuracy_score from sklearn.datasets import load_digits digits = load_digits() X_train, X_test, y_train, y_test = train_test_split( digits.data, digits.target, random_state=0) lr = LogisticRegression().fit(X_train, y_train) pred = lr.predict(X_test) print("accuracy: %0.3f" % accuracy_score(y_test, pred)) print("confusion matrix:") print(confusion_matrix(y_test, pred)) ``` accuracy: 0.953 confusion matrix: [[37 0 0 0 0 0 0 0 0 0] [ 0 39 0 0 0 0 2 0 2 0] [ 0 0 41 3 0 0 0 0 0 0] [ 0 0 1 43 0 0 0 0 0 1] [ 0 0 0 0 38 0 0 0 0 0] [ 0 1 0 0 0 47 0 0 0 0] [ 0 0 0 0 0 0 52 0 0 0] [ 0 1 0 1 1 0 0 45 0 0] [ 0 3 1 0 0 0 0 0 43 1] [ 0 0 0 1 0 1 0 0 1 44]] ```python plt.figure() scores_image = mglearn.tools.heatmap(confusion_matrix(y_test, pred), xlabel='Predicted label', ylabel='True label', xticklabels=digits.target_names, yticklabels=digits.target_names, cmap=plt.cm.gray_r, fmt="%d") plt.title("Confusion matrix") plt.gca().invert_yaxis() ``` <IPython.core.display.Javascript object> ```python print(classification_report(y_test, pred)) ``` precision recall f1-score support 0 1.00 1.00 1.00 37 1 0.89 0.91 0.90 43 2 0.95 0.93 0.94 44 3 0.90 0.96 0.92 45 4 0.97 1.00 0.99 38 5 0.98 0.98 0.98 48 6 0.96 1.00 0.98 52 7 1.00 0.94 0.97 48 8 0.93 0.90 0.91 48 9 0.96 0.94 0.95 47 avg / total 0.95 0.95 0.95 450 ```python print("micro average f1 score: %f" % f1_score(y_test, pred, average="micro")) print("macro average f1 score: %f" % f1_score(y_test, pred, average="macro")) ``` micro average f1 score: 0.953333 macro average f1 score: 0.954000 ## Using evaluation metrics in model selection ```python from sklearn.cross_validation import cross_val_score # default scoring for classification is accuracy print("default scoring ", cross_val_score(SVC(), X, y)) # providing scoring="accuracy" doesn't change the results explicit_accuracy = cross_val_score(SVC(), digits.data, digits.target == 9, scoring="accuracy") print("explicit accuracy scoring ", explicit_accuracy) roc_auc = cross_val_score(SVC(), digits.data, digits.target == 9, scoring="roc_auc") print("AUC scoring ", roc_auc) ``` default scoring [ 0.9 0.9 0.9] explicit accuracy scoring [ 0.9 0.9 0.9] AUC scoring [ 0.994 0.99 0.996] ```python from sklearn.model_selection import GridSearchCV # back to the bank campaign X = data.drop("target", axis=1).values y = data.target.values X_train, X_test, y_train, y_test = train_test_split( X, y, train_size=.1, test_size=.1, random_state=0) # we provide a somewhat bad grid to illustrate the point: param_grid = {'gamma': [0.0001, 0.01, 0.1, 1, 10]} # using the default scoring of accuracy: grid = GridSearchCV(SVC(), param_grid=param_grid) grid.fit(X_train, y_train) print("Grid-Search with accuracy") print("Best parameters:", grid.best_params_) print("Best cross-validation score (accuracy)):", grid.best_score_) print("Test set AUC: %.3f" % roc_auc_score(y_test, grid.decision_function(X_test))) print("Test set accuracy %.3f: " % grid.score(X_test, y_test)) # using AUC scoring instead: grid = GridSearchCV(SVC(), param_grid=param_grid, scoring="roc_auc") grid.fit(X_train, y_train) print("\nGrid-Search with AUC") print("Best parameters:", grid.best_params_) print("Best cross-validation score (AUC):", grid.best_score_) print("Test set AUC: %.3f" % roc_auc_score(y_test, grid.decision_function(X_test))) print("Test set accuracy %.3f: " % grid.score(X_test, y_test)) ``` Grid-Search with accuracy Best parameters: {'gamma': 0.0001} Best cross-validation score (accuracy)): 
0.970304380104 Test set AUC: 0.992 Test set accuracy 0.973: Grid-Search with AUC Best parameters: {'gamma': 0.01} Best cross-validation score (AUC): 0.997467845028 Test set AUC: 1.000 Test set accuracy 1.000: /home/andy/anaconda3/lib/python3.5/site-packages/sklearn/grid_search.py:418: ChangedBehaviorWarning: The long-standing behavior to use the estimator's score function in GridSearchCV.score has changed. The scoring parameter is now used. ChangedBehaviorWarning) ```python from sklearn.metrics.scorer import SCORERS print(sorted(SCORERS.keys())) ``` ['accuracy', 'adjusted_rand_score', 'average_precision', 'f1', 'f1_macro', 'f1_micro', 'f1_samples', 'f1_weighted', 'log_loss', 'mean_absolute_error', 'mean_squared_error', 'median_absolute_error', 'precision', 'precision_macro', 'precision_micro', 'precision_samples', 'precision_weighted', 'r2', 'recall', 'recall_macro', 'recall_micro', 'recall_samples', 'recall_weighted', 'roc_auc'] ```python def my_scoring(fitted_estimator, X_test, y_test): return (fitted_estimator.predict(X_test) == y_test).mean() GridSearchCV(SVC(), param_grid, scoring=my_scoring) ``` # Exercises Load the adult dataset from ``data/adult.csv``, and split it into training and test set. Apply grid-search to the training set, searching for the best C for Logistic Regression, also search over L1 penalty vs L2 penalty. Plot the ROC curve of the best model on the test set. ```python # get dummy variables, needed for scikit-learn models on categorical data: X = pd.get_dummies(data.drop("income", axis=1)) y = data.income == " >50K" ```
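One possible way to finish this exercise, continuing from the `X` and `y` defined in the starter cell above (after reading `data/adult.csv` into `data`, as the exercise statement asks), is sketched below. The parameter grid is only an illustration, and depending on the scikit-learn version you may need `solver="liblinear"` so that the L1 penalty is supported:

```python
# Sketch of a solution to the exercise (grid values are illustrative, not prescribed).
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

X_train, X_test, y_train, y_test = train_test_split(X.values, y.values, random_state=0)

# liblinear supports both L1 and L2 penalties; newer scikit-learn defaults do not.
param_grid = {"C": [0.01, 0.1, 1, 10, 100], "penalty": ["l1", "l2"]}
grid = GridSearchCV(LogisticRegression(solver="liblinear"), param_grid=param_grid,
                    scoring="roc_auc")
grid.fit(X_train, y_train)
print("Best parameters:", grid.best_params_)
print("Best cross-validation AUC: %.3f" % grid.best_score_)

# ROC curve of the best model on the test set
fpr, tpr, _ = roc_curve(y_test, grid.decision_function(X_test))
plt.figure()
plt.plot(fpr, tpr,
         label="best model (AUC = %.3f)" % roc_auc_score(y_test, grid.decision_function(X_test)))
plt.xlabel("FPR")
plt.ylabel("TPR (recall)")
plt.legend(loc="best")
```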
# Numerical Integration #### Preliminaries We have to import the array library `numpy` and the plotting library `matplotlib.pyplot`, note that we define shorter aliases for these. Next we import from `numpy` some of the functions that we will use more frequently and from an utility library functions to format conveniently our results. Eventually we tell to the plotting library how we want to format our plots (`fivethirtyeight` is a predefined style that mimics a popular site of statistical analysis and yes, statistical analysis and popular apply both to a single item... while `00_mplrc` refers to some tweaks I prefer to use) ```python %matplotlib inline import numpy as np import matplotlib.pyplot as plt from numpy import cos, exp, pi, sin, sqrt from IPython.display import Latex, display plt.style.use(['fivethirtyeight', './00_mplrc']) ``` ## Plot of the load We start analyzing the loading process. Substituting $t_1=1.2$s in the expression of $p(t)$ we have $$p(t) = \left({15625 \over 27} {t^3 \over s^3} - {8750 \over 9} {t^2 \over s^2} + {1100 \over 3} {t \over s} + 400 \right)\text{N}$$ ```python N = 6000 def it(t): return int((t+1)*N/3) t = np.linspace(-1,2,N+1) p = np.where(t<0, 0, np.where( t<=1.2, 15625./27*t**3 - 8750./9*t**2 + 1100./3*t + 400, 0)) ``` We will use the above expression everywhere in the following. Let's visualize this loading with the help of a plot: ```python plt.plot(t,p) plt.ylim((-20,520)) plt.xticks((-1,0,1,1.2,2)) plt.xlabel('t [second]') plt.ylabel('p(t) [Newton]') plt.title(r'The load $p(t)$') plt.show() ``` From the plot it is apparent that the loading can be approximated by a constant rectangle plus a sine, of relatively small amplitude, with a period of about 1 second. I expect that the forced response will be a damped oscillation about the static displacement $400/k$, followed by a free response in which we will observe a damped oscillation about the zero displacement. 
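As a quick, purely illustrative check of this reading of the load, one can subtract the constant 400 N part over the loading interval and plot what remains; the residual indeed resembles roughly one period of a sine wave of modest amplitude. The snippet below simply reuses the `t` and `p` arrays built above:

```python
# Illustrative check: the load minus its constant 400 N part on 0 <= t <= 1.2 s
# should resemble a single sine-like oscillation of modest amplitude.
mask = (t >= 0) & (t <= 1.2)
plt.plot(t[mask], p[mask] - 400.0)
plt.xlabel('t [second]')
plt.ylabel('p(t) - 400 N [Newton]')
plt.title('Load minus its constant part')
plt.show()
```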
## The Particular Integral Writing the particuar integral as $$\xi(t) = \left(A {t^3 \over s^3} + B {t^2 \over s^2} + C {t \over s} + D \right)$$ where $A, B, C, D$ all have the dimension of a length, deriving with respect to time and substituting in the equation of motion we have $$\left( 6A\frac{t}{s} + 2 B\right)\frac{m}{s^2} + \left( 3A\frac{t^2}{s^2} + 2B\frac{t}{s} + C \right) \frac{c}{s} + \left( A\frac{t^3}{s^3} + B\frac{t^2}{s^2} + C\frac{t}{s} + D \right) k=\left({15625 \over 27} {t^3 \over s^3} - {8750 \over 9} {t^2 \over s^2} + {1100 \over 3} {t \over s} + 400 \right)\text{N}$$ Collecting the powers of $t/s$ in the left member we have $$A k \frac{t^3}{s^3} + \left( B k + \frac{3 A c}{s} \right) \frac{t^2}{s^2} + \left( C k + \frac{2 B c}{s} + \frac{6 A m}{s^2} \right) \frac{t}{s} + \left(D k + \frac{C c}{s} + \frac{2 B m}{s^2}\right)=\left({15625 \over 27} {t^3 \over s^3} - {8750 \over 9} {t^2 \over s^2} + {1100 \over 3} {t \over s} + 400 \right)\text{N}$$ and equating the coefficients of the different powers, starting from the higher to the lower ones, we have a sequence of equations of a single unknown, solving this sequence of equations gives, upon substitution of the numerical values of the system parameters, \begin{align*} A& = \frac{625}{14256}\text{m}=0.0438411896745 \text{m}\\ B& = \left(\frac{-175}{2376} - \frac{19}{209088} \sqrt{11}\right)\text{m}=-0.0739545830991 \text{m}\\ C& = \left(\frac{133}{1306800} \sqrt{11} + \frac{218117}{7920000}\right)\text{m}=0.0278775758319\text{m}\\ D& = \left(\frac{3013181}{99000000} - \frac{135571859}{7187400000000} \sqrt{11}\right)\text{m}=0.0303736121006\text{m}\\ \end{align*} Substituting in $\xi(t)$ and taking the time derivative we have \begin{align*} \xi(t) &= \frac{625}{14256} t^3 - \left( \frac{19}{209088} \sqrt{11} + \frac{175}{2376} \right) t^2 + \left( \frac{133}{1306800} \sqrt{11} + \frac{218117}{7920000} \right) t - \frac{135571859}{7187400000000} \sqrt{11} + \frac{3013181}{99000000},\\ \dot\xi(t) &= \frac{625}{4752} t^2 - 2 \left( \frac{19}{209088} \sqrt{11} + \frac{175}{2376} \right) t + \frac{133}{1306800} \sqrt{11} + \frac{218117}{7920000} \end{align*} ```python xi = np.where(t<0, 0, np.where(t<=1.2001, 625./14256*t**3 - (19./209088*sqrt(11) + 175./2376)*t**2 + t*(133./1306800*sqrt(11) + 218117./7920000) - 135571859./7187400000000*sqrt(11) + 3013181./99000000, 0)) dot_xi = np.where(t<0, 0, np.where(t<=1.2001, 625./4752*t**2 - 2*(t*(19./209088*sqrt(11) + 175./2376)) + 133./1306800*sqrt(11) + 218117./7920000, 0)) xi_0 = - 135571859./7187400000000*sqrt(11) + 3013181./99000000 dot_xi_0 = + 133./1306800*sqrt(11) + 218117./7920000 ``` ### Plot of the particular integral ```python plt.plot(t,xi) plt.title('Particular integral') plt.xlabel('s') plt.ylabel('m') plt.show() plt.plot(t, dot_xi) plt.title('Time derivative of particular integral') plt.xlabel('s') plt.ylabel('m/s') plt.show() ``` ## System Response ### Forced Response $0\le t\le t_1$ The response in terms of displacement and velocity is \begin{align} x(t) &= \exp(-\zeta\omega_nt) \left( R\cos(\omega_Dt) + S\sin(\omega_Dt)\right) + \xi(t),\\ v(t) &= \exp(-\zeta\omega_nt) \left( \left(S\cos(\omega_Dt)-R\sin(\omega_Dt)\right)\omega_D - \left(R\cos(\omega_Dt)+S\sin(\omega_Dt)\right)\zeta\omega_n \right) + \dot\xi(t) \end{align} and we can write the following initial conditions, taking into account that at time $t=0$ the system is at rest, \begin{align} x(0) &= 1 \cdot \left( R\cdot1 + S\cdot0\right) + \xi_0\\&=R+\xi_0=0,\\ \dot x(0) &= 1 \cdot \left( 
\left(S\cdot1-R\cdot0\right)\omega_D - \left(R\cdot1+S\cdot0\right)\zeta\omega_n \right) + \dot\xi_0\\&=S\omega_D-R\zeta\omega_n+\dot\xi_0=0 \end{align} The constants of integration are \begin{align} R &= -\xi_0,\\ S &= \frac{R\zeta\omega_n-\dot\xi_0}{\omega_D} \end{align} ```python mass = 12.0 stif = 13200.0 zeta = 0.038 w2_n = stif/mass w1_n = sqrt(w2_n) w1_D = w1_n*sqrt(1-zeta**2) damp = 2*zeta*w1_n*mass R = -xi_0 S = (R*zeta*w1_n-dot_xi_0)/w1_D display(HTML("<center><h3>Forced Displacements</h3></center>")) display(Latex(r""" $x(t) = \exp(-%g\cdot%g\,t) \left( %+g\cos(%g\,t)%+g\sin(%g\,t) \right)+\xi(t)$ """%(zeta,w1_n,R,w1_D,S,w1_D))) t1 = 1.2 it1 = it(t1) x_t1 = exp(-zeta*w1_n*t1)*(R*cos(w1_D*t1)+S*sin(w1_D*t1))+xi[it1] v_t1 =(exp(-zeta*w1_n*t1)*(R*cos(w1_D*t1)+S*sin(w1_D*t1))*(-zeta)*w1_n +exp(-zeta*w1_n*t1)*(S*cos(w1_D*t1)-R*sin(w1_D*t1))*w1_D +dot_xi[it1]) display(Latex( r'$$x(t_1)=x_1=%+g\,\text{m},\qquad v(t_1)=v_1=%+g\,\text{m/s}$$'% (x_t1,v_t1))) ``` <center><h3>Forced Displacements</h3></center> $x(t) = \exp(-0.038\cdot33.1662\,t) \left( -0.0303736\cos(33.1423\,t)-0.00199618\sin(33.1423\,t) \right)+\xi(t)$ $$x(t_1)=x_1=+0.035918\,\text{m},\qquad v(t_1)=v_1=+0.237819\,\text{m/s}$$ ### Free Response For $t\ge t_1$ the response (no external force) is given by \begin{align} x^*(t) &= \exp(-\zeta\omega_nt) \left(R^*\cos(\omega_Dt) + S^*\sin(\omega_Dt)\right),\\ v^*(t) &= \exp(-\zeta\omega_nt) \left( \left(S^*\cos(\omega_Dt)-R^*\sin(\omega_Dt)\right)\omega_D - \left(R^*\cos(\omega_Dt)+S^*\sin(\omega_Dt)\right)\zeta\omega_n \right). \end{align} By the new initial conditions, $$x^*(t_1) = x(t_1) = x_1, \qquad v^*(t_1) = v(t_1) = v_1,$$ we have, with $e_1 = \exp(-\zeta\omega_{n}t_1)$, $c_1 = \cos(\omega_Dt_1)$ and $s_1 = \sin(\omega_Dt_1)$ $$ e_1 \begin{bmatrix} c_1 & s_1 \\ -\omega_D s_1 - \zeta\omega_n c_1 & \omega_D c_1 - \zeta\omega_n s_1 \end{bmatrix} \, \begin{Bmatrix}R^*\\S^*\end{Bmatrix} = \begin{Bmatrix}x_1\\v_1\end{Bmatrix} $$ that gives $$ \begin{Bmatrix}R^*\\S^*\end{Bmatrix} = \frac{1}{\omega_D\,e_1}\, \begin{bmatrix} \omega_D c_1 - \zeta\omega_n s_1 & -s_1 \\ \zeta\omega_n c_1 + \omega_D s_1 & c_1 \end{bmatrix}\, \begin{Bmatrix}x_1\\v_1\end{Bmatrix}. 
$$ ```python e_t1 = exp(-zeta*w1_n*t1) c_t1 = cos(w1_D*t1) s_t1 = sin(w1_D*t1) Rs = (w1_D*c_t1*x_t1 - zeta*w1_n*s_t1*x_t1 - s_t1*v_t1) /w1_D /e_t1 Ss = (w1_D*s_t1*x_t1 + zeta*w1_n*c_t1*x_t1 + c_t1*v_t1) /w1_D /e_t1 display(HTML("<center><h3>Free Displacements</h3></center>")) display(Latex(r""" $$x^*(t) = \exp(-%g\cdot%g\,t) \left( %+g\cos(%g\,t)%+g\sin(%g\,t) \right)$$ """%(zeta,w1_n,Rs,w1_D,Ss,w1_D))) xs_t1 = exp(-zeta*w1_n*t1)*(Rs*cos(w1_D*t1)+Ss*sin(w1_D*t1)) vs_t1 = ((exp(-zeta*w1_n*t1)*(Rs*cos(w1_D*t1)+Ss*sin(w1_D*t1))*(-zeta)*w1_n +exp(-zeta*w1_n*t1)*(Ss*cos(w1_D*t1)-Rs*sin(w1_D*t1))*w1_D)) display(Latex(r'$$x^*(t_1)=%+g\,\text{m},\qquad v^*(t_1) = %+g\,\text{m/s}$$'% (xs_t1,vs_t1))) ``` <center><h3>Free Displacements</h3></center> $$x^*(t) = \exp(-0.038\cdot33.1662\,t) \left( -0.112254\cos(33.1423\,t)+0.124351\sin(33.1423\,t) \right)$$ $$x^*(t_1)=+0.035918\,\text{m},\qquad v^*(t_1) = +0.237819\,\text{m/s}$$ #### Putting it all together First, the homogeneous response ```python x_hom = np.where(t<0, 0, np.where(t<1.2001, exp(-zeta*w1_n*t) * (R *cos(w1_D*t) + S *sin(w1_D*t)), exp(-zeta*w1_n*t) * (Rs*cos(w1_D*t) + Ss*sin(w1_D*t)))) v_hom = np.where(t<0, 0, np.where(t<1.2001, exp(-t*zeta*w1_n) * ( ( S*w1_D- R*zeta*w1_n)*cos(t*w1_D) - ( S*zeta*w1_n+ R*w1_D)*sin(t*w1_D)), exp(-t*zeta*w1_n) * ( (Ss*w1_D-Rs*zeta*w1_n)*cos(t*w1_D) - (Ss*zeta*w1_n+Rs*w1_D)*sin(t*w1_D)))) ``` then, we put together the homogeneous response and the particular integral, in the different intervals ```python x = x_hom+xi v = v_hom+dot_xi ``` ### Plot of the response ```python plt.plot(t,x) plt.title('Displacement') plt.show() plt.plot(t,v) plt.title('Velocity') plt.show() plt.plot(t,x) plt.title('Displacement, zoom around t_1') plt.xlim((1.16,1.24)) plt.show() plt.plot(t,v) plt.title('Velocity, zoom around t_1') plt.xlim((1.16,1.24)) plt.show() ``` ## Numerical Integration The time step we are going to use is specified in terms of the natural period of vibration, $h=T_n/12$: ```python t_n = 2*pi/w1_n h = t_n/12.0 ``` We need a function that returns the value of the load, ```python def load(t): return 0 if t > 1.20001 else ( + 578.703703703703705*t**3 - 972.22222222222222*t**2 + 366.666666666666667*t + 400.) ``` ### Initialization The final time for the computation, the factors that modify the increment of the load to take into account the initial conditions, the modified stiffness, the containers for the results, the initial values of the results. 
```python stop = 2.0 + h/2 a0fac = 3.0*mass + damp*h/2.0 v0fac = 6.0*mass/h + 3.0*damp ks = stif + 3.0*damp/h + 6.0*mass/h**2 T, X, V, A = [], [], [], [] x0 = 0 v0 = 0 t0 = 0 p0 = load(t0) ``` ### Computational Loop ```python while t0 < stop: a0 = (p0-stif*x0-damp*v0)/mass for current, vec in zip((t0,x0,v0,a0),(T,X,V,A)): vec.append(current) t1 = t0+h p1 = load(t1) dp = p1-p0 dx = (dp+a0fac*a0+v0fac*v0)/ks dv = 3*dx/h -a0*h/2 -3*v0 t0 = t1 ; p0 = p1 ; x0 = x0+dx ; v0 = v0+dv ``` ### The Plot ```python plt.plot(T,X,label='numerical') plt.plot(t,x,label='analytical') plt.xlim((0,2)) plt.legend(loc=3); ``` ```python plt.plot(T,X,'-o',label='numerical') plt.plot(t,x,'-x',label='analytical') plt.title('Displacement, zoom around t_1') plt.xlim((1.16,1.24)) plt.ylim((0,0.04)) plt.legend(loc=3) plt.show() ``` ```python # an IPython incantation that properly formats this notebook from IPython.display import HTML HTML(open("00_custom.sav.css", "r").read()) ``` <style> @font-face { font-family: 'Charis SIL'; src: url('fonts/CharisSILEur-R.woff') format('woff'); } @font-face { font-family: 'Architects Daughter'; font-style: normal; src: url(https://fonts.gstatic.com/s/architectsdaughter/v6/RXTgOOQ9AAtaVOHxx0IUBM3t7GjCYufj5TXV5VnA2p8.woff2) format('woff2'); unicode-range: U+0000-00FF, U+0131, U+0152-0153, U+02C6, U+02DA, U+02DC, U+2000-206F, U+2074, U+20AC, U+2212, U+2215, U+E0FF, U+EFFD, U+F000; } .prompt{display: None;} div.cell{margin: auto;width:900px;} div #notebook { /* centre the content */ background: #fffaf0; margin: auto; padding: 0em; padding-top: 1em; } div #notebook_container {width: 960px!important;} #notebook li { /* More space between bullet points */ margin-top:0.2em; } div.text_cell_render{ font-family: "Charis SIL", Cambria, serif; line-height: 155%; font-size: 130%; } .CodeMirror{ font-family: Consolas, monospace; font-size: 140%; background-color:#fcffff!important; } .output_area { font-family: Consolas,monospace;font-size: 120%; background-color:#fcffff!important; margin-top:0.8em; } div.output_latex{overflow:hidden} h1 { font-family: 'Architects Daughter', serif; font-size: 48pt!important; text-align: center; text-shadow: 4px 4px 4px #aaa; padding-bottom: 48pt; } .warning{color: rgb( 240, 20, 20 )} </style>
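Beyond the visual comparison above, a quick quantitative check of the step-by-step integration is to resample the analytical displacement on the numerical time grid and report the largest deviation. This is only a sketch reusing the arrays already computed (`T`, `X` from the integration loop and `t`, `x` from the analytical solution):

```python
# Illustrative accuracy check: interpolate the analytical displacement x(t) at the
# numerical time stations T and measure the worst-case absolute difference.
T_arr, X_arr = np.array(T), np.array(X)
x_ref = np.interp(T_arr, t, x)
err = np.abs(X_arr - x_ref)
print("max |x_num - x_exact| = {:.3e} m at t = {:.3f} s".format(err.max(), T_arr[err.argmax()]))
```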
$\newcommand{\Normal}{\mathcal{N}} \newcommand{\lp}{\left(} \newcommand{\rp}{\right)} \newcommand{\lf}{\left\{} \newcommand{\rf}{\right\}} \newcommand{\ls}{\left[} \newcommand{\rs}{\right]} \newcommand{\lv}{\left|} \newcommand{\rv}{\right|} \newcommand{\state}{x} \newcommand{\State}{\boldx} \newcommand{\StateR}{\boldX} \newcommand{\Covariance}{\Sigma} \newcommand{\CovX}{\Covariance_{\boldX}} \newcommand{\CovY}{\Covariance_{\boldY}} \newcommand{\CovZ}{\Covariance_{\boldZ}} \newcommand{\CovXY}{\Covariance_{\boldX\boldY}} \newcommand{\hatState}{\hat{\State}} \newcommand{\StateNum}{N} \newcommand{\StateDim}{K} \newcommand{\NumStates}{N} \newcommand{\StateToState}{A} \newcommand{\stateToState}{A} \newcommand{\StateCov}{\Sigma} \newcommand{\StateJac}{A} \newcommand{\hatStateCov}{\hat{\StateCov}} \newcommand{\StateMean}{\boldmu} \newcommand{\hatStateMean}{\hat{\StateMean}} \newcommand{\StateToStateHistory}{\boldA} \newcommand{\StateNoise}{\boldr} \newcommand{\StateNoiseCov}{R} \newcommand{\StateHistory}{\boldX} \newcommand{\StatesHistory}{\StateHistory} \newcommand{\StateToObserv}{C} \newcommand{\StateToobserv}{\boldc} \newcommand{\StateToObservHistory}{\boldC} \newcommand{\DState}{\bolddelta} \newcommand{\hatDState}{\hat{\DState}} \newcommand{\DStateMean}{\boldlambda} \newcommand{\hatDStateMean}{\hat{\DStateMean}} \newcommand{\DStateCov}{\Lambda} \newcommand{\hatDStateCov}{\hat{\DStateCov}} \newcommand{\DObserv}{\boldgamma} \newcommand{\hatDObserv}{\hat{\DObserv}} \newcommand{\observ}{z} \newcommand{\Observ}{\boldsymbol{\observ}} \newcommand{\ObservCov}{\Lambda} \newcommand{\observMean}{\lambda} \newcommand{\ObservMean}{\boldlambda} \newcommand{\hatobserv}{\hat{\observ}} \newcommand{\hatObserv}{\hat{\Observ}} \newcommand{\hatObservCov}{\hat{\ObservCov}} \newcommand{\hatobservMean}{\hat{\observMean}} \newcommand{\hatObservMean}{\hat{\ObservMean}} \newcommand{\ObservSet}{\ZZ} \newcommand{\ObservNum}{N} \newcommand{\ObservDim}{D} \newcommand{\ObservSourceNum}{M} \newcommand{\ObservHistory}{\boldZ} \newcommand{\ObservsHistory}{\ObservHistory} \newcommand{\Timestamps}{\boldT} \newcommand{\ObservJac}{H} \newcommand{\observNoise}{q} \newcommand{\ObservNoise}{\boldq} \newcommand{\ObservNoiseCov}{Q} \newcommand{\ObservNoiseCovHistory}{\boldQ} \newcommand{\Jacobian}{\boldJ} \newcommand{\Kalman}{K} \newcommand{\kalman}{\boldk} \newcommand{\CC}{\mathbb{C}} \newcommand{\NN}{\mathbb{N}} \newcommand{\RR}{\mathbb{R}} \newcommand{\XX}{\mathbb{X}} \newcommand{\ZZ}{\mathbb{Z}} \renewcommand{\AA}{\mathbb{A}} \newcommand{\boldzero}{\boldsymbol{0}} \newcommand{\boldone}{\boldsymbol{1}} \newcommand{\bolda}{\boldsymbol{a}} \newcommand{\boldb}{\boldsymbol{b}} \newcommand{\boldc}{\boldsymbol{c}} \newcommand{\boldd}{\boldsymbol{d}} \newcommand{\bolde}{\boldsymbol{e}} \newcommand{\boldf}{\boldsymbol{f}} \newcommand{\boldg}{\boldsymbol{g}} \newcommand{\boldh}{\boldsymbol{h}} \newcommand{\boldi}{\boldsymbol{i}} \newcommand{\boldj}{\boldsymbol{j}} \newcommand{\boldk}{\boldsymbol{k}} \newcommand{\boldl}{\boldsymbol{l}} \newcommand{\boldm}{\boldsymbol{m}} \newcommand{\boldn}{\boldsymbol{n}} \newcommand{\boldo}{\boldsymbol{o}} \newcommand{\boldp}{\boldsymbol{p}} \newcommand{\boldq}{\boldsymbol{q}} \newcommand{\boldr}{\boldsymbol{r}} \newcommand{\bolds}{\boldsymbol{s}} \newcommand{\boldt}{\boldsymbol{t}} \newcommand{\boldu}{\boldsymbol{u}} \newcommand{\boldv}{\boldsymbol{v}} \newcommand{\boldw}{\boldsymbol{w}} \newcommand{\boldx}{\boldsymbol{x}} \newcommand{\boldy}{\boldsymbol{y}} \newcommand{\boldz}{\boldsymbol{z}} 
\newcommand{\boldA}{\boldsymbol{A}} \newcommand{\boldB}{\boldsymbol{B}} \newcommand{\boldC}{\boldsymbol{C}} \newcommand{\boldD}{\boldsymbol{D}} \newcommand{\boldE}{\boldsymbol{E}} \newcommand{\boldF}{\boldsymbol{F}} \newcommand{\boldH}{\boldsymbol{H}} \newcommand{\boldJ}{\boldsymbol{J}} \newcommand{\boldK}{\boldsymbol{K}} \newcommand{\boldL}{\boldsymbol{L}} \newcommand{\boldM}{\boldsymbol{M}} \newcommand{\boldI}{\boldsymbol{I}} \newcommand{\boldP}{\boldsymbol{P}} \newcommand{\boldQ}{\boldsymbol{Q}} \newcommand{\boldR}{\boldsymbol{R}} \newcommand{\boldS}{\boldsymbol{S}} \newcommand{\boldT}{\boldsymbol{T}} \newcommand{\boldO}{\boldsymbol{O}} \newcommand{\boldU}{\boldsymbol{U}} \newcommand{\boldV}{\boldsymbol{V}} \newcommand{\boldW}{\boldsymbol{W}} \newcommand{\boldX}{\boldsymbol{X}} \newcommand{\boldY}{\boldsymbol{Y}} \newcommand{\boldZ}{\boldsymbol{Z}} \newcommand{\boldXY}{\boldsymbol{XY}} \newcommand{\boldmu}{\boldsymbol{\mu}}$ ### Состояние системы Теперь рассмотрим чуть более близкую к реальности модель движения материальной точки. В данной модели состояние движущейся описывается следующим вектором: $$ \State = \begin{pmatrix} x\\ y\\ \gamma\\ v\\ \omega\\ \end{pmatrix}, $$ где * $x$ &mdash; координата точки по оси $X$; * $y$ &mdash; координата точки по оси $Y$; * $\gamma$ &mdash; угол рыскания; далее просто yaw; * $v$ &mdash; проекция скорости на ось $X$; * $\omega$ &mdash; угловая скорость материальной точки. ### Модель эволюции системы #### Переход из момента времени $t$ к моменту $t + \Delta t$<sup>[toc](#toc)</sup> $$ \State(t + \Delta t) = \boldf(\State(t)) \Rightarrow \begin{pmatrix} x(t + \Delta t)\\ y(t + \Delta t)\\ \gamma(t + \Delta t)\\ v(t + \Delta t)\\ \omega(t + \Delta t) \end{pmatrix} \approx \begin{pmatrix} x(t) + v(t)\cos(\gamma(t))\Delta t\\ y(t) + v(t)\sin(\gamma(t))\Delta t\\ \gamma(t) + \omega(t) \Delta t\\ v(t)\\ \omega(t)\\ \end{pmatrix}. $$ Заметим, что зависимость между состояниям нелинейная. Т.е. уже нельзя представить переход из момента $t$ в момент $t + \Delta t$ в виде: $$ \State(t + \Delta t) \approx A(\Delta t) \State(t) $$ Поэтому в данном случае мы будем использовать **EKF-фильтр**. #### Якобиан функции перехода $f(x(t))$<sup>[toc](#toc)</sup> Итак, мы знаем, что $\State(t) \sim \Normal(\mu(t), \Sigma(t))$. Также мы знаем, что состояние в момент $\State(t + \Delta t)$ выражается через состояние в момент $\State(t)$ через некоторую нелинейную функцию $f(\cdot)$: $$ \State(t + \Delta t) \approx f(\State(t), \Delta t) = \begin{pmatrix} x(t) + v(t)\cos(\gamma(t))\Delta t\\ y(t) + v(t)\sin(\gamma(t))\Delta t\\ \gamma(t) + \omega(t) \Delta t\\ v(t)\\ \omega(t)\\ \end{pmatrix} $$ Поэтому $$ \State(t + \Delta t) \approx f(\mu(t)) + J_f(\mu(t)) (\State(t) - \mu(t)) + \text{H.O.D.}, $$ где * $f(\mu(t), \Delta t)$ &mdash; состояние в момент времени $t + \Delta t$, при условии, что в момент $t$ мы детерминированно находились в состоянии $\State(t) = \mu(t)$. * $J_f(\mu(t))$ &mdash; значение Якобиана $f(\State)$ по $\State$ в точке $\mu(t)$. * $\State(t) - \mu(t)$ &mdash; отклонение истинного значения состояния $\State(t)$ от ожидаемого $\mu(t)$; имеет распределение $\Normal(\boldzero, \Sigma(t))$, чем и обуславливает нормальное распределение для $\State(t + \Delta t)$. * H.O.D. &mdash; компоненты разложения высшего порядка, которыми пренебрегаем (Higher Order Degress). 
Якобиан имеет вид: $$ J_f(\State(t)) = \begin{pmatrix} 1 &0 &-v(t)\sin(\gamma(t))\Delta t &\cos(\gamma(t))\Delta t &0\\ 0 &1 &v(t)\cos(\gamma(t))\Delta t &\sin(\gamma(t))\Delta t &0\\ 0 &0 &1 &0 &\Delta t\\ 0 &0 &0 &1 &0\\ 0 &0 &0 &0 &1\\ \end{pmatrix} $$ #### Модель эволюции системы<sup>[toc](#toc)</sup> В результате получаем \begin{align} &\mu(t + \Delta t) = f(\mu(t), \Delta t),\\ &\Sigma(t + \Delta t) = J_f(\mu(t)) \Sigma(t)J_f(\mu(t))^T + R(t) \Delta t, \end{align} где * $\mu(t)$ &mdash; текущее среднее состояния; * $\Sigma(t)$ &mdash; текущая дисперсия состояния; * $\Delta t$ &mdash; шаг по времени; * $R(t)$ &mdash; матрица плотности шума (показывает, как быстро дисперсия состояния наростает со временем). ### Модель наблюдений системы По сути достаточно лишь указать матрицы наблюдений. Так как в них нет нелинейностей, то тут можно применять обычную обработку показаний (без появления Якобианов и т.п.). #### GPS $$ C_{gps} = \begin{pmatrix} 1 &0 &0 &0 &0\\ 0 &1 &0 &0 &0\\ \end{pmatrix} $$ #### CAN $$ C_{can} = \begin{pmatrix} 0 &0 &0 &1 &0\\ \end{pmatrix} $$ #### IMU $$ C_{imu} = \begin{pmatrix} 0 &0 &0 &0 &1\\ \end{pmatrix} $$ ```python import numpy as np from matplotlib import pyplot as plt from matplotlib.patches import Ellipse, Rectangle from IPython import display import time from matplotlib.lines import Line2D %matplotlib inline from sdc.timestamp import Timestamp from sdc.car import Car from sdc.linear_movement_model import LinearMovementModel from sdc.cycloid_movement_model import CycloidMovementModel from sdc.can_sensor import CanSensor from sdc.gps_sensor import GpsSensor from sdc.imu_sensor import ImuSensor ``` <a id='program_car_model_create'></a> ### Создание автомобиля, настрока движения и установка необходимых датчиков<sup>[toc](#toc)</sup> Функция для удобного создания полностью настроенной модели машины: ```python def create_car( initial_position=[5, 5], initial_velocity=5, initial_omega=0.0, initial_yaw=np.pi / 4, can_noise_variances=[0.25], # Стандартное отклонение - 0.5м gps_noise_variances=[1, 1], # Стандартное отклонение - 1м imu_noise_variances=None, # По умолчанию IMU не используется random_state=0, ): # Начальное состояние автомобиля car = Car( initial_position=initial_position, initial_velocity=initial_velocity, initial_yaw=initial_yaw, initial_omega=initial_omega) # Создание сенсоров if can_noise_variances is not None: car.add_sensor(CanSensor(noise_variances=can_noise_variances, random_state=random_state)) random_state += 1 if gps_noise_variances is not None: car.add_sensor(GpsSensor(noise_variances=gps_noise_variances, random_state=random_state)) random_state += 1 if imu_noise_variances is not None: car.add_sensor(ImuSensor(noise_variances=imu_noise_variances, random_state=random_state)) random_state += 1 # Последовательное движение movement_model = LinearMovementModel() car.set_movement_model(movement_model) return car ``` ```python # Калмановская локализация from sdc.car_plotter import CarPlotter from sdc.kalman_car import KalmanCar from sdc.kalman_can_sensor import KalmanCanSensor from sdc.kalman_gps_sensor import KalmanGpsSensor from sdc.kalman_imu_sensor import KalmanImuSensor from sdc.kalman_movement_model import KalmanMovementModel from sdc.kalman_filter import ( kalman_transit_covariance, kalman_process_observation, ) ``` <a id='program_model_kalman_car_from_car'></a> ### Создание калмановской модели машины из обычной модели<sup>[toc](#toc)</sup> ```python def create_kalman_car(car, gps_variances=None, can_variances=None, imu_variances=None): 
"""Создает калмановскую модель движения автомобиля на основе уже настроенной модели самого автомобиля""" # Скорость нарастания дисперсии в секунду noise_covariance_density = np.diag([ 0.1, 0.1, 0.1, # Дисперсия yaw 0.1, # Дисперсия скорости 0.1 # Дисперсия угловой скорости ]) # Формирование состояние калмановской локализации kalman_car = KalmanCar( initial_position=car.initial_position, initial_velocity=car.initial_velocity, initial_yaw=car.initial_yaw, initial_omega=car.initial_omega) # Начальная матрица ковариации kalman_car.covariance_matrix = noise_covariance_density # Модель движения kalman_movement_model = KalmanMovementModel(noise_covariance_density=noise_covariance_density) kalman_car.set_movement_model(kalman_movement_model) for sensor in car.sensors: noise_variances = sensor._noise_variances if isinstance(sensor, GpsSensor): noise_variances = noise_variances if gps_variances is None else gps_variances kalman_sensor = KalmanGpsSensor(noise_variances=noise_variances) elif isinstance(sensor, CanSensor): noise_variances = noise_variances if can_variances is None else can_variances kalman_sensor = KalmanCanSensor(noise_variances=noise_variances) elif isinstance(sensor, ImuSensor): noise_variances = noise_variances if imu_variances is None else imu_variances kalman_sensor = KalmanImuSensor(noise_variances=noise_variances) else: assert False kalman_car.add_sensor(kalman_sensor) return kalman_car ``` <a id='visualization_trajectory'></a> ### Визуализация траектории автомобиля<sup>[toc](#toc)</sup> ```python initial_position = [20, 20] initial_velocity = 10 initial_yaw = 0.5 initial_omega = 0.02 car = Car(initial_position=initial_position, initial_velocity=initial_velocity, initial_yaw=initial_yaw, initial_omega=initial_omega) print(car) # Тестирование последовательного движения movement_model = LinearMovementModel() car.set_movement_model(movement_model) dt = Timestamp.seconds(0.1) duration = Timestamp.seconds(40) final_time = car.time + duration while car.time < final_time: car.move(dt) print(car) # Отрисовка траектории fig = plt.figure(figsize=(15, 15)) ax = plt.subplot(111, aspect='equal') ax.grid(which='both', linestyle='--', alpha=0.5) car_plotter = CarPlotter(car_width=3, car_height=1.5) car_plotter.plot_car(ax, car) car_plotter.plot_trajectory(ax, car, traj_color='k') # Установка корректных пределов x_limits, y_limits = car_plotter.get_limits(car) ax.set_xlim(x_limits) ax.set_ylim(y_limits); ``` <a id='test_kalman_localization'></a> ## Тестирование калмановской фильтрации<sup>[toc](#toc)</sup> ```python car = create_car(initial_omega=0.05) kalman_car = create_kalman_car(car) # Создаем полотно fig = plt.figure(figsize=(15, 15)) ax = plt.subplot(111, aspect='equal') # ax.grid(which='both', linestyle='--', alpha=0.5) real_color = 'green' obs_color = 'blue' est_color = 'red' legend_lines = {'Kalman estimate': Line2D([0], [0], color=est_color, lw=4), 'Measurement': Line2D([0], [0], color=obs_color, lw=4), 'Real position': Line2D([0], [0], color=real_color, lw=4)} # Шаг интегрирования dt = Timestamp.seconds(0.1) # Длительность проезда duration = Timestamp.seconds(40) final_time = car.time + duration # Отрисовщик автомобиля и траектории car_plotter = CarPlotter( car_width=3, car_height=1.5, real_color=real_color, obs_color=obs_color, pred_color=est_color) car_plotter.plot_car(ax, car) # Отрисовка начальной позиции car_plotter.plot_car(ax, car) display.clear_output(wait=True) time.sleep(0.25) ax.set_xlim(-10, 100) ax.set_ylim(-10, 100) while car.time < final_time: ax.clear() 
ax.legend([legend_lines['Kalman estimate'], legend_lines['Measurement'], legend_lines['Real position']], ['Kalman estimate', 'Measurement', 'Real position']) # Делаем реальный переход к моменту времени t + dt car.move(dt) # Делаем предсказание на момент времени t + dt kalman_car.move(dt) # Теперь обработаем наблюдения в момент t + dt for sensor, kalman_sensor in zip(car.sensors, kalman_car.sensors): observation = sensor.observe() kalman_sensor.process_observation(observation) car_plotter.plot_car(ax, car) car_plotter.plot_kalman_car(ax, kalman_car) car_plotter.plot_trajectory(ax, car, traj_color=real_color) car_plotter.plot_trajectory(ax, kalman_car, traj_color=est_color) car_plotter.plot_observations(ax, car.gps_sensor.history[:, 0], car.gps_sensor.history[:, 1], color=obs_color) ax.set_xlim(-5, 100) ax.set_ylim(-5, 100) display.clear_output(wait=True) display.display(plt.gcf()) time.sleep(0.05) display.clear_output(wait=True) ```
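For reference, the prediction step described above (the nonlinear transition $f$ and its Jacobian $J_f$) can be written directly with NumPy. This is a self-contained illustrative sketch of the same equations, not the actual implementation inside the `sdc` package used by the demo; the example state and noise values at the bottom mirror the demo's settings but are only for illustration:

```python
import numpy as np

def ekf_predict(mu, sigma, dt, noise_covariance_density):
    """One EKF prediction step for the state [x, y, yaw, v, omega]."""
    x, y, yaw, v, omega = mu
    # Nonlinear state transition f(mu, dt)
    mu_next = np.array([
        x + v * np.cos(yaw) * dt,
        y + v * np.sin(yaw) * dt,
        yaw + omega * dt,
        v,
        omega,
    ])
    # Jacobian of f with respect to the state, evaluated at mu
    J = np.array([
        [1.0, 0.0, -v * np.sin(yaw) * dt, np.cos(yaw) * dt, 0.0],
        [0.0, 1.0,  v * np.cos(yaw) * dt, np.sin(yaw) * dt, 0.0],
        [0.0, 0.0, 1.0, 0.0, dt],
        [0.0, 0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 0.0, 1.0],
    ])
    # Covariance propagation: Sigma' = J Sigma J^T + R * dt
    sigma_next = J @ sigma @ J.T + noise_covariance_density * dt
    return mu_next, sigma_next

# Example: one 0.1 s prediction from an illustrative state
mu0 = np.array([20.0, 20.0, 0.5, 10.0, 0.02])
sigma0 = np.diag([0.1, 0.1, 0.1, 0.1, 0.1])
mu1, sigma1 = ekf_predict(mu0, sigma0, 0.1, np.diag([0.1] * 5))
```

This implements exactly the update equations $\mu(t+\Delta t) = f(\mu(t), \Delta t)$ and $\Sigma(t+\Delta t) = J_f \Sigma J_f^T + R\,\Delta t$ given above.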
# Tarea 1 _Tarea 1_ de _Benjamín Rivera_ para el curso de __Métodos Numéricos__ impartido por _Joaquín Peña Acevedo_. Fecha limite de entrega __6 de Septiembre de 2020__. ## Como ejecutar #### Requerimientos Este programa se ejecuto en mi computadora con la version de __Python 3.8.2__ y con estos [requerimientos](https://github.com/BenchHPZ/UG-Compu/blob/master/MN/requerimientos.txt) ### Jupyter En caso de tener acceso a un _servidor jupyter_ ,con los requerimientos antes mencionados, unicamente basta con ejecutar todas las celdas de este _notebook_. Probablemente no todas las celdas de _markdown_ produzcan el mismo resultado por las [_Nbextensions_](jupyter-contrib-nbextensions.readthedocs.io). ### Consola Habrá archivos e instrucciones para poder ejecutar cada uno de los ejercicios desde la consola. ### Si todo sale mal En caso de que todo salga mal, tratare de dejar una copia disponible en __GoogleColab__ que se pueda ejecutar con la versión de __Python__ de _GoogleColab_ ## Ejercicio 1 Supongamos que una computadora tiene 8 dígitos para representar la parte fraccionaria de un número de punto flotante. ### Calcular el valor del épsilon de la máquina En clase vimos que si conocemos la cantidad de bits $p$ para representar la parte fraccionaria de la mantisa, entonces \begin{equation} \epsilon_m = (1.0)_2 \times 2^{-p} \label{eq:epsilon}\end{equation} Dado que sabemos que la máquina de este ejercicio usara $8bits$ para representar la parte fraccionaria entonces, por \ref{eq:epsilon}, el $\epsilon_m$ para este ejercicio es \begin{equation*} \epsilon_m = (1.0)_2 \times 2^{-8} = 3.90625 \times 10^{-3} \label{eq:epsilon maquina} \end{equation*} ```python print(2**(-8)) ``` 0.00390625 ### Dar la representación en notación cientifica (mantisa base 2, multiplicada por 2 elevda al exponente correspondiente) del número 5. Sabemos que el número $5$ se representa en binario como $(101)_2$. Como los números se prefieren normalizados entonces debemos representar $(1.01)_2$ en la notación solicitada. Por lo que \begin{equation*} 5 = (101)_2 \times 2^0 = (1.01)_2 \times 2^2 = 1.25 \times 2^2 \end{equation*} ### Dar la representación científica del número consecutico a 5 en la computadora. Escribir la distancia $d_c$ entre 5 y su consecutivo. Expresar $d_c$ en términos del épslon de la máquina. Sabemos que el número consecutivo $(5_c)$ a $5$ en esta computadora es $5+\epsilon_m$. Si extendemos toda la mantisa de $5$ este se ve $(1.01000000)$. Y sumar $\epsilon_m$ implica sumar $1$ a la mantisa, enspecificamente en esta computadora, al $bit 8$ de la mantisa. De manera que \begin{eqnarray*} 5_c &=& 5 + \epsilon_m = (1.01)_2 \times 2^2 + (1.0)_2 \times 2^{2-8} \\ &=& ((1.01000000)_2 + (0.00000001)_2) \times 2^2 \\ &=& (1.01000001)_2 \times 2^2 \\ &=& 5.015625 \end{eqnarray*} por lo que la distancia entre $5$ y su consecutivo $5+\epsilon_m$ es \begin{equation*} d_c = (5+\epsilon_m)-5 = \epsilon_m \times 2^2 = (0.00000001)_2 \times 2^2 = 0.015625 \end{equation*} ### Tenemos que el consecutivo de $5$ es expresable como $5 + d_c$. Si tenemos un $x$ real tal que $x \in (5, 5+d_c)$ entonces la computadora representara a $x$ como $fl(x)=5$ o $fl(x)=5+d_c$. Escribir una cota para el error relativo para las dos posibles represetnaciones de x > Error relativo \begin{equation*} \left| \frac{fl(x) - x}{x} \right| \end{equation*} Sabemos que las dos posibles representaciones son el _truncamiento_ y el _redondeo_ ,para dar la cota se deben calcular los valores minimos y maximos del error relativo. 
En general trabajaremos con \begin{equation} \label{eq:error relativo} \left| \frac{fl(x) - x}{x} \right| = \left| \frac{fl(x)}{x} - 1 \right| \end{equation} Antes de continuar es importante notar lo siguiente. Sabemos que, para ambos casos, el dominio de la función sera $[5, 5+d_c]$. Además, notemos que el rango de $\frac{fl(x)}{x}$, para nuestro dominio y los dos valores que puede tomar $fl(x)$, es $\left[ \frac{5}{5+d_c}, \frac{5+d_c}{5} \right]$. Por ultimo, sabemos que $\frac{fl(x)}{x}$ es descendente para $x>0$. __Truncamiento__ Para el truncamiento $fl(x) = 5$ por lo que debemos calcular la cota para \begin{equation*} Er_\_ = \left| \frac{5}{x} - 1 \right| \end{equation*} En este caso, el rango de $\frac{fl(x)}{x} = \frac{5}{x}$ es $\left[ \frac{5}{5+d_c}, \frac{5}{5}=1 \right]$ por lo que el rango de $Er_\_$ es $\left[ 0, |\frac{5}{5+d_c}-1| \right]$. Por iluminación divina, suponemos que esta función es ascendente, y por lo tanto $Er_\_(5) < Er_\_(5+dc)$. Primero evaluamos la función en estos puntos \begin{equation*} Er_\_(5) = \left|\frac{5}{5} -1\right| = 0 \quad\quad Er_\_(5+d_c) = \left|\frac{5}{5+d_c} -1 \right| \sim 0.00311 \end{equation*} Por lo que, usando el truncamiento, esta __<ins>función esta acotada</ins>__ por $\left[ Er_\_(5), Er_\_(5+d_c) \right]$ __Redondeo__ Procedemos de manera similar al anterior. Para el redondeo tenemos que $fl(x) = 5+d_c$, esto nos da la expresion \begin{equation*} Er_+ = \left| \frac{5+d_c}{x} - 1 \right| \end{equation*} Y de manera similar al anterior podemos ver que esta función es descendente, por lo que $Er_+(5+d_c) < Er_+(d_c)$ \begin{equation*} Er_+(5+d_c) = \left| \frac{5+d_c}{5+d_c} -1 \right| = 0 \quad\quad Er_+(5) = \left| \frac{5+d_c}{5} -1 \right| \sim 0.00312 \end{equation*} De manera que esta función queda acotada por $\left[ Er_+(5+d_c),Er_+(5) \right]$ ### Explique si los siguientes números tienen respresentación exacta en la computadora, es decir, si $fl(a_i)=a_i$ - $a_1 = \epsilon/2$ \par Sabemos que en el sistema de este ejercicio ($8bits$ parte flotante) $\epsilon_m = (1.0)_2 \times 2^{-8}$, de manera que, \begin{equation*} \epsilon_m/2 = \frac{(1.0)_2 \times 2^{-8}}{2} = (1.0)_2 \times 2^{-9} \end{equation*} por lo que, mientras que el rango del exponente sea suficiente (dado que el ejercicio solo da información del tamaño de la \textit{matisa}), este número __es representable__ en el sistema. - $a_2 = 1+ \epsilon/2$ \par De manera que $a_2$ se expresa como \begin{equation*} a_2 = (1.0)_2 + \epsilon/2 = (1.0)_2 + (1.0)_2 \times 2^{-9} = (1.000000001)_2 \end{equation*} pero como este sistema solo usa $8bits$ para la mantisa entonces, para este sistema \begin{equation*} a_2 = (1.0)_2 + \epsilon/2 = (1.000000001)_2 = (1.0)_2 \end{equation*} por lo que este número __no tiene representación__ en este sistema. - $a_3 = 1- \epsilon$ \par Este número se escribe como \begin{eqnarray*} a_3 = 1- \epsilon &=& (1.0)_2 - (1.0)_2 \times 2^{-8} \\ &=& (1.00000000)_2 - (0.00000001)_2 \\ &=& (0.11111111)_2 \\ &=& (1.1111111)_2 \times 2^{-1} \end{eqnarray*} por lo que este número __si tiene una representación__ en este sistema. - $a_4 = 1- \epsilon/2$ \par Como con los anteriores, esta operación se expresa como \begin{eqnarray*} a_4 = 1- \epsilon/2 &=& (1.0)_2 - (1.0)_2 \times 2^{-9} \\ &=& (1.00000000)_2 - (0.000000001)_2 \\ &=& (0.111111111)_2 \\ &=& (1.11111111)_2 \times 2^{-1} \end{eqnarray*} el cual __si tiene respresentación__ en el sistema de este ejercicio. 
- $a_5 = 1- \epsilon/4$ \par En este inciso primero calcularemos $\epsilon/4$, para el cual expandimos lo siguiente \begin{equation*} \epsilon/4 = \frac{(1.0)_2 \times 2^{-8}}{2^2} = (1.0)_2 \times 2^{-10} \end{equation*} el cual, mientras el exponente alcance, _si es representable_ en el sistema. Por otro lado, el numero de este inciso nos da \begin{eqnarray*} a_5 = 1 - \epsilon/4 &=& (1.0)_2 - (1.0)_2 \times 2^{-10} \\ &=& (1.00000000)_2 - (0.0000000001)_2 \\ &=& (0.1111111111)_2 \\ &=& (1.111111111)_2 \times 2^{-1} \end{eqnarray*} pero este número tiene una mantisa de $9bits$, por lo cual, __no tiene representación__ en el sistema. - $a_6 = \epsilon^2$ \par Para este inciso \begin{equation} a_6 = \epsilon^2 = ((1.0)_2 \times 2^{-8})_2^2 = (1.0)_2 \times 2^{-16} \end{equation} el cual __si tiene represetación__ en el sistema. - $a_7 = 0.125$ \par Primero pasamos de el número de decimal a binario, de manera que $(0.125)_{10} = (0.001)_{2}$. Luego hay que normalizarlo, por lo que $(0.001)_{2} = (1.0)_{2} \times 2^{-3}$. Por lo que este numero __si tiene respresentacion__ en el sistema. - $a_8 = 2^{-10}$ \par Y por \'ultimo, tenemos que \begin{equation*} a_8 = 2^{-10} = ((10.0)_2)^{-10} = ((1.0)_2 \times 2^{1})^{-10} = (1.0)_2 \times 2^{-10} \end{equation*} ### De una cota para el error relativo de las restas \ref{eq:resta 1} y \ref{eq:resta 2} respecto al verdader valor. Suponga que $fl(x)$ se obtiene por redondeo hacia abajo _(truncamiento)_ \begin{equation} fl(0.9) - fl(0.5) \label{eq:resta 1} \end{equation} \begin{equation} fl(0.9) - fl(0.895) \label{eq:resta 2} \end{equation} Para este ejercicio, dado que nos pide usar el \textit{truncamiento}, definimos a la \textbf{unidad de redondeo} como $u = \epsilon_m/2$. Esto tambien nos define a $fl(x)=x(1+\delta)$ y $|\delta| \leq u$. Adem\'as, siguiendo las notas, podemos usar la cota que ah\'i se proporciona y sustituir $u$. \begin{equation} |\delta_{x-y}| \leq \frac{\epsilon}{2}\frac{|x|+|y|}{|x-y|} \label{eq:cota} \end{equation} Y antes de continuar calculamos $\epsilon_m/2$ para poder usarlo mas adelante. De manera que \begin{equation*} \epsilon_m/2 = \frac{(1.0)_2 \times 2^{-8}}{2} = (1.0)_2 \times 2^{-9} \end{equation*} Y, como tambien lo usaremos, es bueno tener en cuenta. \begin{equation*} \delta_{x-y} = \frac{fl(x)-fl(y) - (x-y)}{x-y} \end{equation*} Empezamos por la operaci\'on \ref{eq:resta 1}, de donde obtenemos $x=0.9$ y $y=0.5$. Primero calculamos los valores de maquina de estos. \begin{eqnarray*} fl(x) = fl(0.9) &=& (1.1100(1100))_2\times 2^{-1} \\ &=& (1.11001100)_2\times 2^{-1} \quad\quad\quad \text{(Truncamiento (no cabe))} \\ &\sim& 0.8984375 &&\\&&\\ fl(y) = fl(0.5) &=& (1.0)_2\times 2^{-1} \\ &=& (1.00000000)_2\times 2^{-1} \quad\quad\quad \text{(Truncamiento (si cabe))} \\ &=& 0.5 \end{eqnarray*} De manera que, como ya conocemos $\epsilon_m$, los valores de la operaci\'on y sus representaciones en el sistema, procedemos a encontrar la cota. Dado que usamos la cota de las notas, unicamente queda sustituir, de esto obtenemos que: \begin{eqnarray*} \delta_{x-y} \sim -0.00390625 \\ \frac{\epsilon}{2}\frac{|x|+|y|}{|x-y|} = 0.00683593 \end{eqnarray*} Por lo que, siguiendo la cota de la ecuaci\'on \ref{eq:cota}, podemos decir que esta operaci\'on esta acotada por \begin{equation*} |\delta_{x-y}| \sim 0.00390625 \leq 0.00683593 \sim \frac{\epsilon}{2}\frac{|x|+|y|}{|x-y|} \end{equation*} De manera similar, para la operaci\'on \ref{eq:resta 2}, tenemos que $x=0.9$ y $y=0.895$. Calculamos los redondondeos de la computadora. 
\begin{eqnarray*} fl(x) = fl(0.9) &=& (1.1100(1100))_2\times 2^{-1} \\ &=& (1.11001100)_2\times 2^{-1} \quad\quad\quad \text{(Truncamiento (no cabe))} \\ &\sim& 0.8984375 &&\\&&\\ fl(y) = fl(0.895) &=& (1.1100101000111101)_2\times 2^{-1} \\ &=& (1.11001010)_2\times 2^{-1} \quad\quad\quad \text{(Truncamiento (no cabe))} \\ &=& 0.89453125 \end{eqnarray*} Ahora procedemos a calcular los limites de la cota, por lo que \begin{eqnarray*} \delta_{x-y} \sim -0.21875\\ \frac{\epsilon}{2}\frac{|x|+|y|}{|x-y|} = 0.70117187 \end{eqnarray*} Y por \'ultimo, seg\'un la cota \ref{eq:cota}, acotamos esta operaci\'on por \begin{equation*} |\delta_{x-y}| \sim -0.21875 \leq 0.70117187 \sim \frac{\epsilon}{2}\frac{|x|+|y|}{|x-y|} \end{equation*} ```python # Funciones auxiliares para el Ejercicio 2.3 def deltaXY(fx, fy, x, y): return ((fx-fy)-(x-y))/(x-y) def cotaSup(x, y): ep2 = 0.001953125 return ep2*(abs(x) + abs(y))/abs(x - y) #print(deltaXY(0.8984375, 0.89453125, 0.9, 0.895)) #print(cotaSup(0.9, 0.895)) ``` ## Ejercicio 2 Programe la función `epsilonFloat` que devuelve el épsilon de la máquina $\epsilon_m$ para números de simple precisión y la función `epsilonDouble` para números de doble precisión. ``` epsilon = 1.0 unidad = 1.0 valor = unidad + epsilon while valor > unidad epsilon = epsilon/2 valor = end + epsilon end while epsilon = epsilon*2 ``` Usar el algoritmo visto en clase. ```python # Primera parte del ejercicio 1 import numpy as np def epsilonMaquina(tipoDato): """ Funcion que trata de calcular el epsilon de la maquina mediante el algoritmo antes pre_ sentado. Input: tipoDato := esta pensado para ser uno de los tipos proporcionados por la li_ breria numpy. Output: Regresa el epsilon de la maquina calcula_ do con el tipo de dato especificado. """ epsilon = tipoDato(1.0) unidad = tipoDato(1.0) valor = unidad + epsilon while valor > unidad: epsilon = epsilon/tipoDato(2.0) valor = unidad + epsilon return epsilon*2 def epsilonFloat(): """ Calculamos el epsilon de la maquina con precision de 32bits """ return epsilonMaquina(np.float32) def epsilonDouble(): """ Calculamos el epsilon de la maquina con precision de 64 bits A pesar de que el flotante de python ya tiene esta precision, creo que es convenien_ te especificarlo. """ return epsilonMaquina(np.float64) # Epsilons calculados eF = np.float32(epsilonFloat()) eD = np.float64(epsilonDouble()) # Imprimir en pantalla print(f'Se calculalron los epsilons \n\t{eF=} y \n\t{eD=} \n' 'para 32 y 64 bits correspondientemente.\n') ``` Se calculalron los epsilons eF=1.1920929e-07 y eD=2.220446049250313e-16 para 32 y 64 bits correspondientemente. ```python # Segunda parte del ejercicio 1 def respuesta(res): """ Funcion para formatear la respuesta. """ return "iguales" if res else "diferentes" def comparacion(epsilon): """ Esta funcion resivira el epsilon a eva_ luar y el tipo de dato al que este correspon_ de para hacer las comparaciones solicitadas en el ejercicio. 
Input: En caso de que Output: Las respuestas son procesadas por la funcion respuesta para obtener el for_ mato solicitado True implica que son iguales False implica que son diferentes """ tD = type(epsilon) # Para escribir menos print(f'Con {epsilon=} y tipo de dato = {str(tD)} se da que') # Comprobaciones print(f'{respuesta( tD(1 + epsilon ) == 1 ) =}') print(f'{respuesta( tD( epsilon/2 ) == 0 ) =}') print(f'{respuesta( tD(1 + epsilon/2 ) == 1 ) =}') print(f'{respuesta( tD(1 - epsilon/2 ) == 1 ) =}') print(f'{respuesta( tD(1 - epsilon/4 ) == 1 ) =}') print(f'{respuesta( tD( epsilon**2 ) == 0 ) =}') print(f'{respuesta(epsilon + tD(epsilon**2) == epsilon) =}') print(f'{respuesta(epsilon - tD(epsilon**2) == epsilon) =}') print('...\n') # Hacemos la comparacion para 32bits comparacion(eF) # y para 64 comparacion(eD) ``` Con epsilon=1.1920929e-07 y tipo de dato = <class 'numpy.float32'> se da que respuesta( tD(1 + epsilon ) == 1 ) ='diferentes' respuesta( tD( epsilon/2 ) == 0 ) ='diferentes' respuesta( tD(1 + epsilon/2 ) == 1 ) ='iguales' respuesta( tD(1 - epsilon/2 ) == 1 ) ='diferentes' respuesta( tD(1 - epsilon/4 ) == 1 ) ='iguales' respuesta( tD( epsilon**2 ) == 0 ) ='diferentes' respuesta(epsilon + tD(epsilon**2) == epsilon) ='diferentes' respuesta(epsilon - tD(epsilon**2) == epsilon) ='diferentes' ... Con epsilon=2.220446049250313e-16 y tipo de dato = <class 'numpy.float64'> se da que respuesta( tD(1 + epsilon ) == 1 ) ='diferentes' respuesta( tD( epsilon/2 ) == 0 ) ='diferentes' respuesta( tD(1 + epsilon/2 ) == 1 ) ='iguales' respuesta( tD(1 - epsilon/2 ) == 1 ) ='diferentes' respuesta( tD(1 - epsilon/4 ) == 1 ) ='iguales' respuesta( tD( epsilon**2 ) == 0 ) ='diferentes' respuesta(epsilon + tD(epsilon**2) == epsilon) ='diferentes' respuesta(epsilon - tD(epsilon**2) == epsilon) ='diferentes' ... De manera que el programa calculo que el _epsilon de la maquina_ es {{eF}} y {{eD}} para las precisiones de _32_ y _64 bits_ correspondientemente, y repecto a las comparaciones unicamente se encontraron __dos igualdades__ cuando se uso preciosion de _64bits_. #### Como ejecutar [GoogleColab](https://colab.research.google.com/gist/BenchHPZ/cd5bc176f3e3841fa1e84924feca9ec2/tarea1-benjamin_rivera-metodos_numericos.ipynb) <a href="https://colab.research.google.com/gist/BenchHPZ/cd5bc176f3e3841fa1e84924feca9ec2/tarea1-benjamin_rivera-metodos_numericos.ipynb"> </a> Para ejecutar este ejercicio en __consola__ es importante ubicarse en la misma carpeta del [archivo `T1.py`](https://github.com/BenchHPZ/UG-Compu/blob/master/MN/Tareas/T1/T1.py) y ejecutar el siguiente comando en consola ```console python3 T1.py ``` Este programa no espera recibir argumento alguno. La salida debe ser similar a la siguiente imagen ```python ```
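Como apéndice ilustrativo, un pequeño fragmento de Python que imita el redondeo por truncamiento $fl(x)$ usado en los cálculos anteriores, suponiendo una mantisa con 8 bits fraccionarios y sin restricción sobre el exponente (el nombre `fl_truncado` es solo ilustrativo y no forma parte del enunciado):

```python
import math

def fl_truncado(x, bits_fraccionarios=8):
    """Trunca x a la forma 1.f x 2^e con `bits_fraccionarios` bits en f (suposicion: exponente sin limite)."""
    if x == 0:
        return 0.0
    e = math.floor(math.log2(abs(x)))        # exponente de la forma normalizada
    m = abs(x) / 2**e                        # mantisa en [1, 2)
    m_trunc = math.floor(m * 2**bits_fraccionarios) / 2**bits_fraccionarios
    return math.copysign(m_trunc * 2**e, x)

# Deberia reproducir los valores usados arriba:
# fl(0.9) ~ 0.8984375, fl(0.5) = 0.5, fl(0.895) ~ 0.89453125
print(fl_truncado(0.9), fl_truncado(0.5), fl_truncado(0.895))
```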
a028084c370652fc96246bfbb566de2081d01990
27,373
ipynb
Jupyter Notebook
MN/Tareas/T1/Tarea1.ipynb
BenchHPZ/UG-Compu
fa3551a862ee04b59a5ba97a791f39a77ce2df60
[ "MIT" ]
null
null
null
MN/Tareas/T1/Tarea1.ipynb
BenchHPZ/UG-Compu
fa3551a862ee04b59a5ba97a791f39a77ce2df60
[ "MIT" ]
null
null
null
MN/Tareas/T1/Tarea1.ipynb
BenchHPZ/UG-Compu
fa3551a862ee04b59a5ba97a791f39a77ce2df60
[ "MIT" ]
null
null
null
34.693283
377
0.542725
true
6,039
Qwen/Qwen-72B
1. YES 2. YES
0.705785
0.890294
0.628356
__label__spa_Latn
0.887242
0.298213
# Demo - LISA Horizon Distance This demo shows how to use ``LEGWORK`` to compute the horizon distance for a collection of sources. ```python %matplotlib inline ``` ```python import legwork as lw import numpy as np import astropy.units as u import matplotlib.pyplot as plt ``` ```python %config InlineBackend.figure_format = 'retina' plt.rc('font', family='serif') plt.rcParams['text.usetex'] = False fs = 24 # update various fontsizes to match params = {'figure.figsize': (12, 8), 'legend.fontsize': fs, 'axes.labelsize': fs, 'xtick.labelsize': 0.9 * fs, 'ytick.labelsize': 0.9 * fs, 'axes.linewidth': 1.1, 'xtick.major.size': 7, 'xtick.minor.size': 4, 'ytick.major.size': 7, 'ytick.minor.size': 4} plt.rcParams.update(params) ``` ## Horizon distance of circular binaries The horizon distance for a source is the maximum distance at which the SNR of a source is still above some detectable threshold. The horizon distance can be computed from the SNR as follows since it is inversely proportional to the distance. \begin{equation} D_{\rm hor} = \frac{\rho(D)}{\rho_{\rm detect}} \cdot D, \label{eq:snr_to_hor_dist} \end{equation} Where $\rho(D)$ is the SNR at some distance $D$ and $\rho_{\rm detect}$ is the SNR above which we consider a source detectable. Let's start doing this by creating a grid of chirp masses and orbital frequencies and creating a Source class from them. ```python # create a list of masses and frequencies m_c_grid = np.logspace(-1, np.log10(50), 500) * u.Msun f_orb_grid = np.logspace(np.log10(4e-5), np.log10(3e-1), 400) * u.Hz # turn the two lists into grids MC, FORB = np.meshgrid(m_c_grid, f_orb_grid) # flatten grids m_c, f_orb = MC.flatten(), FORB.flatten() # convert chirp mass to individual masses for source class q = 1.0 m_1 = m_c / q**(3/5) * (1 + q)**(1/5) m_2 = m_1 * q # use a fixed distance and circular binaries dist = np.repeat(1, len(m_c)) * u.kpc ecc = np.zeros(len(m_c)) # create the source class sources = lw.source.Source(m_1=m_1, m_2=m_2, dist=dist, f_orb=f_orb, ecc=ecc, gw_lum_tol=1e-3) ``` Next, we can use `LEGWORK` to compute their merger times and SNRs for the contours. ```python # calculate merger times and then SNR sources.get_merger_time() sources.get_snr(verbose=True) ``` Calculating SNR for 200000 sources 0 sources have already merged 125169 sources are stationary 125169 sources are stationary and circular 74831 sources are evolving 74831 sources are evolving and circular array([5.37039831e-04, 5.48303595e-04, 5.59803603e-04, ..., 1.06420933e+04, 1.07531750e+04, 1.08654183e+04]) We flattened the grid to fit into the Source class but now we can reshape the output to match the original grid. ```python # reshape the output into grids t_merge_grid = sources.t_merge.reshape(MC.shape) snr_grid = sources.snr.reshape(MC.shape) ``` Now we can define a couple of functions for formatting the time, distance and galaxy name contours. 
```python def fmt_time(x): if x == 4: return r"$t_{\rm merge} = T_{\rm obs}$" elif x >= 1e9: return "{0:1.0f} Gyr".format(x / 1e9) elif x >= 1e6: return "{0:1.0f} Myr".format(x / 1e6) elif x >= 1e3: return "{0:1.0f} kyr".format(x / 1e3) elif x >= 1: return "{0:1.0f} yr".format(x) elif x >= 1/12: return "{0:1.0f} month".format(x * 12) else: return "{0:1.0f} week".format(x * 52) def fmt_dist(x): if x >= 1e9: return "{0:1.0f} Gpc".format(x / 1e9) elif x >= 1e6: return "{0:1.0f} Mpc".format(x / 1e6) elif x >= 1e3: return "{0:1.0f} kpc".format(x / 1e3) else: return "{0:1.0f} pc".format(x) def fmt_name(x): if x == np.log10(8): return "MW Centre" elif x == np.log10(50): return "SMC/LMC" elif x == np.log10(800): return "Andromeda" elif x == np.log10(40000): return "GW170817" ``` Finally, we put it all together to create a contour plot with all of the information. ```python # create a square figure plus some space for a colourbar size = 12 cbar_space = 2 fig, ax = plt.subplots(figsize=(size + cbar_space, size)) # set up scales early so contour labels show up nicely ax.set_xscale("log") ax.set_yscale("log") # set axes labels and lims ax.set_xlabel(r"Orbital Frequency, $f_{\rm orb} \, [\rm Hz]$") ax.set_ylabel(r"Chirp Mass, $\mathcal{M}_c \, [\rm M_{\odot}]$") ax.set_xlim(4e-5, 3e-1) # calculate the horizon distance snr_threshold = 7 horizon_distance = (snr_grid / snr_threshold * 1 * u.kpc).to(u.kpc) # set up the contour levels distance_levels = np.arange(-3, 6 + 0.5, 0.5) distance_tick_levels = distance_levels[::2] # plot the contours for horizon distance distance_cont = ax.contourf(FORB, MC, np.log10(horizon_distance.value), levels=distance_levels) # hide edges that show up in rendered PDFs for c in distance_cont.collections: c.set_edgecolor("face") # create a colour with custom formatted labels cbar = fig.colorbar(distance_cont, ax=ax, pad=0.02, ticks=distance_tick_levels, fraction=cbar_space / (size + cbar_space)) cbar.ax.set_yticklabels([fmt_dist(np.power(10, distance_tick_levels + 3)[i]) for i in range(len(distance_tick_levels))]) cbar.set_label(r"Horizon Distance", fontsize=fs) cbar.ax.tick_params(axis="both", which="major", labelsize=0.7 * fs) # annotate the colourbar with some named distances named_distances = np.log10([8, 50, 800, 40000]) for name in named_distances: cbar.ax.axhline(name, color="white", linestyle="dotted") # plot the same names as contours named_cont = ax.contour(FORB, MC, np.log10(horizon_distance.value), levels=named_distances, colors="white", alpha=0.8, linestyles="dotted") ax.clabel(named_cont, named_cont.levels, fmt=fmt_name, use_clabeltext=True, fontsize=0.7*fs, manual=[(1.1e-3, 2e-1), (4e-3, 2.2e-1), (4e-3,1e0), (3e-3, 1.2e1)]) # add a line for when the merger time becomes less than the inspiral time time_cont = ax.contour(FORB, MC, t_merge_grid.to(u.yr).value, levels=[4], colors="black", linewidths=2, linestyles="dotted") #[1/52, 1/12, 4, 1e2, 1e3, 1e4, 1e5, 1e6, 1e7, 1e8, 1e9, 1e10] ax.clabel(time_cont, time_cont.levels, fmt=fmt_time, fontsize=0.7*fs, use_clabeltext=True, manual=[(2.5e-2, 5e0)]) # plot a series of lines and annotations for average DCO masses for m_1, m_2, dco in [(0.6, 0.6, "WDWD"), (1.4, 1.4, "NSNS"), (10, 1.4, "BHNS"), (10, 10, "BHBH"), (30, 30, "BHBH")]: # find chirp mass m_c_val = lw.utils.chirp_mass(m_1, m_2) # plot lines before and after bbox ax.plot([4e-5, 4.7e-5], [m_c_val, m_c_val], color="black", lw=0.75, zorder=1, linestyle="--") ax.plot([1.2e-4, 1e0], [m_c_val, m_c_val], color="black", lw=0.75, zorder=1, linestyle="--") # plot name and 
bbox, then masses below in smaller font ax.annotate(dco + "\n", xy=(7.5e-5, m_c_val), ha="center", va="center", fontsize=0.7*fs, bbox=dict(boxstyle="round", fc="white", ec="white", alpha=0.25)) ax.annotate(r"${{{}}} + {{{}}}$".format(m_1, m_2), xy=(7.5e-5, m_c_val * 0.95), ha="center", va="top", fontsize=0.5*fs) # ensure that everyone knows this only applies for circular sources ax.annotate(r"$e = 0.0$", xy=(0.03, 0.04), xycoords="axes fraction", fontsize=0.8*fs, bbox=dict(boxstyle="round", fc="white", ec="white", alpha=0.25)) ax.set_facecolor(plt.get_cmap("viridis")(0.0)) plt.show() ```
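As a small aside, the horizon-distance grid computed above can also be queried for a single source. The helper below is only an illustrative sketch: it assumes `m_c_grid`, `f_orb_grid`, `snr_grid` and `snr_threshold` from the earlier cells are still in scope, and it picks the nearest grid point rather than interpolating.

```python
def horizon_at(m_c_query, f_orb_query):
    # nearest-neighbour lookup on the (frequency, chirp mass) grid
    i = np.argmin(np.abs(f_orb_grid - f_orb_query))   # row index: orbital frequency
    j = np.argmin(np.abs(m_c_grid - m_c_query))       # column index: chirp mass
    # rescale the SNR at 1 kpc to the distance at which it drops to the threshold
    return (snr_grid[i, j] / snr_threshold) * 1 * u.kpc

# roughly the chirp mass of a 0.6 + 0.6 Msun double white dwarf
print(horizon_at(0.52 * u.Msun, 3e-3 * u.Hz))
```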
81209efd8039bee79ea9424a0e98810a5ee18235
503,543
ipynb
Jupyter Notebook
docs/demos/HorizonDistance.ipynb
arfon/LEGWORK
91ca299d00ed6892acdf5980f33826421fa348ef
[ "MIT" ]
14
2021-09-28T21:53:24.000Z
2022-02-05T14:29:44.000Z
docs/demos/HorizonDistance.ipynb
arfon/LEGWORK
91ca299d00ed6892acdf5980f33826421fa348ef
[ "MIT" ]
44
2021-10-31T15:04:26.000Z
2022-03-15T19:01:40.000Z
docs/demos/HorizonDistance.ipynb
arfon/LEGWORK
91ca299d00ed6892acdf5980f33826421fa348ef
[ "MIT" ]
4
2021-11-18T09:20:53.000Z
2022-03-16T11:30:44.000Z
1,379.569863
491,724
0.955205
true
2,478
Qwen/Qwen-72B
1. YES 2. YES
0.824462
0.721743
0.59505
__label__eng_Latn
0.807482
0.22083
# Week 5 worksheet 1: Introduction to numerical integration This notebook is modified from one created by Charlotte Desvages. This week, we investigate numerical methods to estimate integrals. The best way to learn programming is to write code. Don't hesitate to edit the code in the example cells, or add your own code, to test your understanding. You will find practice exercises throughout the notebook, denoted by 🚩 ***Exercise $x$:***. #### Displaying solutions Solutions will be released one week after the worksheets are released, as a new `.txt` file in the same GitHub repository. After pulling the file to your workspace, run the following cell to create clickable buttons under each exercise, which will allow you to reveal the solutions. ```python %run scripts/create_widgets.py W05-W1 ``` ## Numerical integration 🚩 *Recommended reading:* Section 3.3 in **ASC** Numerical integration is the process of computing an approximation of a definite integral, using a particular *scheme*. There are many different ways we could go about this, but in general, we want to approximate an integral using a **weighted sum** which is easy to compute: $$ \int_a^b f(x) \ dx \approx \sum_{k=0}^{N-1} w_k f(x_k), $$ where - $x_k \in [a, b]$ are **nodes**, i.e. a finite number of points chosen in the integration interval, - $w_k \in \mathbb{R}$ are **weights** (coefficients) chosen appropriately. The choice of nodes and weights differentiates one numerical integration method from another, and different choices lead to different *degrees of precision*. We will see more about this next week. ### Riemann sums You probably already know a numerical integration method -- the Riemann sum. Run the code cell below to display a figure (it uses [`matplotlib.patches.Rectangle()`](https://matplotlib.org/api/_as_gen/matplotlib.patches.Rectangle.html)): *Remark:* The first command `%matplotlib notebook` is a notebook-wide setting, which allows to generate **dynamic plots** inside the Jupyter notebook, which we can e.g. zoom into or further modify. (We can toggle back to the default behaviour using the command `%matplotlib inline`, where all plots are "printed" for good when they are created, and cannot be further modified.) Once you are finished with the plot you, you should click on the 'Stop interaction' blue button in the plot above. ```python %matplotlib notebook import matplotlib.pyplot as plt import matplotlib.patches as patches import numpy as np def f(x): return np.exp(-x**2) # Create an x-axis with 100 points and estimate the function a, b = 0, 3 x_plot = np.linspace(a, b, 100) f_plot = f(x_plot) # Create the nodes N = 10 x_node = np.linspace(a, b, N+1) f_node = f(x_node) # Plot the function fig, ax = plt.subplots(1, 2, figsize=(9, 3)) ax[0].plot(x_plot, f_plot, 'k-') ax[1].plot(x_plot, f_plot, 'k-') # Plot the rectangles for left and right sums h = (b - a) / N for k in range(N): rect = patches.Rectangle((x_node[k], 0), h, f_node[k], edgecolor='k') ax[0].add_patch(rect) rect = patches.Rectangle((x_node[k], 0), h, f_node[k+1], edgecolor='k') ax[1].add_patch(rect) # Plot the nodes ax[0].plot(x_node[:-1], f_node[:-1], 'rx') ax[1].plot(x_node[1:], f_node[1:], 'rx') # Label the plots ax[0].set_xlabel(r'$x$') ax[1].set_xlabel(r'$x$') ax[0].set_ylabel(r'$f(x)$') ax[0].set_title('Left Riemann sum') ax[1].set_title('Right Riemann sum') plt.show() ``` We can estimate the integral of $f(x)$ by calculating the area shaded in blue. 
Here, we subdivide the interval $[a, b]$ into $N$ partitions of equal width $h$: $$ h = \frac{b-a}{N} $$ The **nodes** are the end points of these sub-intervals, and here the **weights** are simply $h$, the width of each interval. The integral of $f(x)$ between $a$ and $b$ can then be estimated as: $$ \begin{align} \int_a^b f(x) \ dx &\approx \sum_{k=0}^{N-1} h \ f(x_k), \quad & \text{left Riemann sum} \\ \int_a^b f(x) \ dx &\approx \sum_{k=1}^N h \ f(x_k), \quad & \text{right Riemann sum} \end{align} $$ where the $N+1$ nodes $x_k$ are given by $x_k = a + kh$, with $k = 0, 1, \dots, N$. With this choice of nodes and weights, each element of the sum is simply the area of one blue rectangle. ```python from math import erf import numpy as np # Estimate the integral left_I = np.sum(h * f_node[:-1]) right_I = np.sum(h * f_node[1:]) # Exact value I_exact = np.sqrt(np.pi) / 2 * (erf(b) - erf(a)) print(f'The exact integral is {I_exact:.3f}.\n') print(f'The left Riemann sum is {left_I:.3f}.\n') print(f'The right Riemann sum is {right_I:.3f}.\n') ``` --- 🚩 ***Exercise 1:*** Using the Riemann sum methods above, estimate the value of the integral using different values of $N$. How does the accuracy change with $N$? *Hint:* plot $\log(N)$ vs. $\log(\text{error})$. You may wish to use e.g. [`np.polyfit()`](https://numpy.org/doc/stable/reference/generated/numpy.polyfit.html) or [`scipy.stats.linregress()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.linregress.html). ```python ``` ```python %run scripts/show_solutions.py W05-W1_ex1 ``` --- ### The midpoint rule The midpoint rule is similar to the Riemann sums, but the nodes are taken as the **midpoint** of each partition instead of one of the extremities: $$ \int_a^b f(x) \ dx \approx \sum_{k=0}^{N-1} h \ f(x_k), $$ where the nodes $x_k$ are given by $x_k = a + \left(k + \frac{1}{2}\right)h$, with $k = 0, 1, \dots, N-1$. ```python def f(x): return np.exp(-x**2) # Create an x-axis with 100 points and estimate the function a, b = 0, 3 x_plot = np.linspace(a, b, 100) f_plot = f(x_plot) # Create the nodes (midpoints run from a + h/2 to b - h/2) N = 10 h = (b - a) / N x_node = np.linspace(a + 0.5*h, b - 0.5*h, N) f_node = f(x_node) # Plot the function fig, ax = plt.subplots(figsize=(5, 3)) ax.plot(x_plot, f_plot, 'k-') # Plot the rectangles for k in range(N): rect = patches.Rectangle((x_node[k] - 0.5*h, 0), h, f_node[k], edgecolor='k') ax.add_patch(rect) # Plot the nodes ax.plot(x_node, f_node, 'rx') # Label the plots ax.set_xlabel(r'$x$') ax.set_ylabel(r'$f(x)$') ax.set_title('Midpoint rule') plt.show() # Estimate the integral midpoint_I = np.sum(h * f_node) # Exact value I_exact = np.sqrt(np.pi) / 2 * (erf(b) - erf(a)) print(f'The exact integral is {I_exact:.3f}.\n') print(f'The estimated integral using the midpoint rule is {midpoint_I:.3f}.\n') ``` --- 🚩 ***Exercise 2:*** Using the midpoint rule method above, estimate the value of the integral using different values of $N$. How does the accuracy change with $N$? ```python ``` ```python %run scripts/show_solutions.py W05-W1_ex2 ``` --- ### The trapezoid rule The trapezoid rule also uses partitions of equal width, but instead of approximating the integral as the area of rectangles, it uses trapezoids -- the function is interpolated linearly between the nodes. $$ \int_a^b f(x) \ dx \approx \sum_{k=0}^{N-1} h\frac{\left(f(x_k) + f(x_{k+1})\right)}{2} , $$ where the nodes $x_k$ are given by $x_k = a + kh$, with $k = 0, 1, \dots, N$.
```python def f(x): return np.exp(-x**2) # Create an x-axis with 100 points and estimate the function a, b = 0, 3 x_plot = np.linspace(a, b, 100) f_plot = f(x_plot) # Create the nodes N = 6 h = (b - a) / N x_node = np.linspace(a, b, N + 1) f_node = f(x_node) # Plot the function fig, ax = plt.subplots(figsize=(5, 3)) ax.plot(x_plot, f_plot, 'k-') # Plot the trapezoids for k in range(N): verts = [[x_node[k], 0], [x_node[k+1], 0], [x_node[k+1], f_node[k+1]], [x_node[k], f_node[k]]] trapz = patches.Polygon(verts, closed=True, edgecolor='k') ax.add_patch(trapz) # Plot the nodes ax.plot(x_node, f_node, 'rx') # Label the plots ax.set_xlabel(r'$x$') ax.set_ylabel(r'$f(x)$') ax.set_title('Trapezoid rule') plt.show() # Estimate the integral trapezoid_I = np.sum(0.5 * h * (f_node[:-1] + f_node[1:])) # Exact value I_exact = np.sqrt(np.pi) / 2 * (erf(b) - erf(a)) print(f'The exact integral is {I_exact:.3f}.\n') print(f'The estimated integral using the trapezoid rule is {trapezoid_I:.3f}.\n') ``` --- 🚩 ***Exercise 3:*** Using the trapezoid rule method above, estimate the value of the integral using different values of $N$. How does the accuracy change with $N$? ```python ``` ```python %run scripts/show_solutions.py W05-W1_ex3 ``` ```python ```
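To check your own answer to the exercises, here is one possible sketch (not the released solution) of a convergence study for the trapezoid rule; it reuses `f`, `a`, `b` and `I_exact` from the cells above.

```python
# Convergence of the trapezoid rule: the error should scale roughly like 1/N^2
Ns = np.array([4, 8, 16, 32, 64, 128, 256])
errors = []
for N in Ns:
    h = (b - a) / N
    x_node = np.linspace(a, b, N + 1)
    f_node = f(x_node)
    I_trap = np.sum(0.5 * h * (f_node[:-1] + f_node[1:]))
    errors.append(abs(I_trap - I_exact))

# slope of log(error) vs log(N) estimates the order of accuracy (expect about -2)
slope, intercept = np.polyfit(np.log(Ns), np.log(errors), 1)
print(f'Estimated slope: {slope:.2f}')
```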
63449f7a6ab230cb1d6d7c60bf9fe3fa6c9897cb
12,753
ipynb
Jupyter Notebook
Workshops/W05-W1_NMfCE_Numerical_integration.ipynb
DrFriedrich/nmfce-2021-22
2ccee5a97b24bd5c1e80e531957240ffb7163897
[ "MIT" ]
null
null
null
Workshops/W05-W1_NMfCE_Numerical_integration.ipynb
DrFriedrich/nmfce-2021-22
2ccee5a97b24bd5c1e80e531957240ffb7163897
[ "MIT" ]
null
null
null
Workshops/W05-W1_NMfCE_Numerical_integration.ipynb
DrFriedrich/nmfce-2021-22
2ccee5a97b24bd5c1e80e531957240ffb7163897
[ "MIT" ]
null
null
null
31.488889
388
0.549204
true
2,506
Qwen/Qwen-72B
1. YES 2. YES
0.803174
0.839734
0.674452
__label__eng_Latn
0.973284
0.40531
# A demonstration of SuSiE's motivations This document explains with toy example illustration the unique type of inference SuSiE is interested in. ## The inference problem We assume our audience are familiar or interested in large scale regression. Similar to eg LASSO, SuSiE is a method for variable selection in large scale regression. Yet the type of inference SuSiE attempts to accomplish is different from other large scale regression methods, and is worth motivating in this document for those not familiar with this particular type of inference. The type of inference is motivated by the so-called "genetic fine-mapping" study. Consider fitting the regression model $$y = \sum_{j=1}^p x_j\beta_j + \epsilon \quad \epsilon \sim N(0, \sigma^2I_n)$$ where $x_1 = x_2, x_3 = x_4$ and $\beta_1 \ne 0, \beta_4 \ne 0, \beta_{j \notin \{1,4\}} = 0$. Our goal is to make a statement that $$\beta_1 \ne 0 \text{ or } \beta_2 \ne 0, \text{ and } \beta_3 \ne 0 \text{ or } \beta_4 \ne 0,$$ that is, in SuSiE variable selection we want to capture the fact that variables $x_1$ and $x_2$ (likewise $x_3$ and $x_4$) are too similar to distinguish. This is an inference that to our knowledge few other large scale regression can make; at least not in an intuitive and interpretable fashion. To illustrate here I set $p=1000$ and simulate a data-set: ```sos set.seed(1) n = 500 p = 1000 b = rep(0,p) b[200] = 1 b[800] = 1 X = matrix(rnorm(n*p),nrow=n,ncol=p) X[,200] = X[,400] X[,600] = X[,800] y = X %*% b + rnorm(n) ``` The "true" effects are: ```sos pdf('truth.pdf', width =5, height = 5, pointsize=16) plot(b, col="black", pch=16, main = 'True effect size') pos = 1:length(b) points(pos[b!=0],b[b!=0],col=2,pch=16) dev.off() ``` <strong>png:</strong> 2 ```sos %preview truth.pdf -s png --dpi 100 ``` ## LASSO inference LASSO will simply choose one of: - $x_1, x_3$ - $x_1, x_4$ - $x_2, x_3$ - $x_2, x_4$ This is not what we want. For instance on the simulated data, ```sos alpha = 1 y.fit = glmnet::glmnet(X,y,alpha = alpha,intercept = FALSE) y.cv = glmnet::cv.glmnet(X,y,alpha = alpha,intercept = FALSE, lambda = y.fit$lambda) bhat = glmnet::predict.glmnet(y.fit,type ="coefficients", s = y.cv$lambda.min)[-1,1] ``` ```sos pdf('lasso.pdf', width =5, height = 5, pointsize=16) plot(bhat, col="black", pch=16, main = 'Lasso') pos = 1:length(bhat) points(pos[b!=0],bhat[b!=0],col=2,pch=16) dev.off() ``` <strong>png:</strong> 2 ```sos %preview lasso.pdf -s png --dpi 100 ``` LASSO randomly picked $x_1$ and $x_3$ even though the true non-zero effects are $\beta_1$ and $\beta_4$. ## Existing Bayesian methods for sparse regression For sparse enough problems, most other Bayesian methods provide posterior probabilities on "models" by enumerating (CAVIAR), schochastic search (FINEMAP) or sampling (BIMBAM) from all possible combinations of variables. In our motivating example, the "true" posterior for models are \begin{align} p(\beta_1 \ne 0, \beta_2 = 0, \beta_3 \ne 0, \beta_4 = 0, \beta_{5:p} = 0) &= 0.25 \\ p(\beta_1 = 0, \beta_2 \ne 0, \beta_3 \ne 0, \beta_4 = 0, \beta_{5:p} = 0) &= 0.25 \\ p(\beta_1 \ne 0, \beta_2 = 0, \beta_3 = 0, \beta_4 \ne 0, \beta_{5:p} = 0) &= 0.25 \\ p(\beta_1 = 0, \beta_2 \ne 0, \beta_3 = 0, \beta_4 \ne 0, \beta_{5:p} = 0) &= 0.25. \end{align} Although compared to LASSO the model posterior contains the information to make statements about $\beta_1 \ne 0 \text{ or } \beta_2 \ne 0, \text{ and } \beta_3 \ne 0 \text{ or } \beta_4 \ne 0$, it is not explicitly provided. 
One has to post-process these posteriors in order to make the statement. Such pre-processing is non-trivial, see eg Stephens and Balding 2009 for an example in the context of genetic fine-mapping. In addition, marginalized probabilities are often provided by these Bayesian methods, calcualted as \begin{align} p(\beta_1) &= p(\beta_1 \ne 0, \beta_2 = 0, \beta_3 \ne 0, \beta_4 = 0, \beta_{5:p} = 0) \\ & + p(\beta_1 \ne 0, \beta_2 = 0, \beta_3 = 0, \beta_4 \ne 0, \beta_{5:p} = 0) \\ & = 0.5, \end{align} but this does not provide the type of inference we are interested in. We illustrate Bayesian variable selection using `FINEMAP`. First we compute "summary statistics" that `FINEMAP` requires, ```sos library(abind) mm_regression = function(X, Y, Z=NULL) { if (!is.null(Z)) { Z = as.matrix(Z) } reg = lapply(seq_len(ncol(Y)), function (i) simplify2array(susieR:::univariate_regression(X, Y[,i], Z))) reg = do.call(abind, c(reg, list(along=0))) # return array: out[1,,] is betahat, out[2,,] is shat return(aperm(reg, c(3,2,1))) } sumstats = mm_regression(as.matrix(X), as.matrix(y)) dat = list(X=X,Y=as.matrix(y)) saveRDS(list(data=dat, sumstats=sumstats), '/tmp/Toy.with_sumstats.rds') ``` we then set up the path containing the `finemap` executable (here set to `~/GIT/github/mvarbvs/dsc/modules/linux`) and run a wrapper script provided in the `susieR` package under `inst/code` folder (the package source can be [downloaded here](https://github.com/stephenslab/susieR)). ```sos export PATH=~/GIT/github/mvarbvs/dsc/modules/linux:$PATH Rscript ~/GIT/susieR/inst/code/finemap.R input=\"/tmp/Toy.with_sumstats.rds\" output=\"N2finemapping.FINEMAP\" args=\"--n-causal-max\ 2\" 2> /dev/null ``` |---------------------------------| | Welcome to FINEMAP v1.1 | | | | (c) 2015 University of Helsinki | | | | Help / Documentation: | | - ./finemap --help | | - www.christianbenner.com | | | | Contact: | | - christian.benner@helsinki.fi | | - matti.pirinen@helsinki.fi | |---------------------------------| -------- SETTINGS -------- - regions : 1 - n-causal-max : 2 - n-configs-top : 50000 - n-iterations : 100000 - n-convergence : 1000 - prob-tol : 0.001 - corr-config : 0.95 - prior-std : 0.05 - prior-k0 : 0 --------------- RUNNING FINEMAP (1/1) --------------- - GWAS z-scores : /tmp/RtmpRHQEch/file7d275e5895af.finemap_condition_1.z - SNP correlations : /tmp/RtmpRHQEch/file7d272073d9a6.ld - Causal SNP configurations : /tmp/RtmpRHQEch/file7d275e5895af.finemap_condition_1.config - Single-SNP statistics : /tmp/RtmpRHQEch/file7d275e5895af.finemap_condition_1.snp - Log file : /tmp/RtmpRHQEch/file7d275e5895af.finemap_condition_1.log - Reading input : done! - Number of SNPs in region : 1000 - Number of individuals in GWAS : 500 - Prior-Pr(# of causal SNPs is k) : (0 -> 0) 1 -> 0.666667 2 -> 0.333333 - 6979 configurations evaluated (1.004/100%) : converged after 1004 iterations - log10(Evidence of at least 1 causal SNP) : 60.0667 - Post-Pr(# of causal SNPs is k) : 0 -> 0 1 -> 8.6199e-22 2 -> 1 - Writing output : done! 
- Run time : 0 hours, 0 minutes, 4 seconds Posterior probabilities for "models" are: ```sos finemap = readRDS("N2finemapping.FINEMAP.rds")[[1]] head(finemap$set) ``` <table> <thead><tr><th scope=col>rank</th><th scope=col>config</th><th scope=col>config_prob</th><th scope=col>config_log10bf</th></tr></thead> <tbody> <tr><td>1 </td><td>400,800 </td><td>0.25 </td><td>65.64032</td></tr> <tr><td>2 </td><td>400,600 </td><td>0.25 </td><td>65.64032</td></tr> <tr><td>3 </td><td>200,800 </td><td>0.25 </td><td>65.64032</td></tr> <tr><td>4 </td><td>200,600 </td><td>0.25 </td><td>65.64032</td></tr> <tr><td>5 </td><td>200 </td><td>0.00 </td><td>41.57626</td></tr> <tr><td>6 </td><td>400 </td><td>0.00 </td><td>41.57626</td></tr> </tbody> </table> `FINEMAP` correctly determined 4 "model configurations" as expected, each with probability 0.25. Simply marginalizing across all models we obtain posterior inclusion probability per feature, ```sos snp = finemap$snp pip = snp[order(as.numeric(snp$snp)),]$snp_prob ``` ```sos pdf('finemap.pdf', width =5, height = 5, pointsize=16) susieR::susie_plot(pip, y='PIP', b=b, main = 'Bayesian sparse regression') dev.off() ``` <strong>png:</strong> 2 ```sos %preview finemap.pdf -s png --dpi 100 ``` From these marginalized posterior we can only make a statement that each of the 4 identified variables have 0.5 probability of being non-zero. That is, they all have equal contributions. It does not reflect the fact that the result comes from identical feature pairs $x_1, x_2$ and $x_3, x_4$. Existing variational inference methods (`varbvs`) do not provide accurate calculation of marginal posterior probabilities, although like LASSO, existing variational methods are good for prediction purposes. ## SuSiE SuSiE uses a variational inference algorithm which is not only computational convenient, but also directly obtains posterior statements of the desired form. ```sos fitted = susieR::susie(X, y, L=5, estimate_residual_variance=TRUE, scaled_prior_variance=0.2, tol=1e-3, track_fit=TRUE, min_abs_corr=0.1) ``` ```sos pdf('susie.pdf', width =5, height = 5, pointsize=16) susieR::susie_plot(fitted, y='PIP', b=b, max_cs=0.4, main = paste('SuSiE, ', length(fitted$sets$cs), 'CS identified')) dev.off() ``` <strong>png:</strong> 2 ```sos %preview susie.pdf -s png --dpi 100 ``` Here SuSiE identifies 2 sets of variables -- 2 confidence sets each containing one causal variable (blue and green circles). The marginal posterior is the same as from existing Bayesian methods (FINEMAP result), but with the two sets identified one can easily state that the causal variables are ($x_1$ or $x_2$), and ($x_3$ or $x_4$). Within each set, the contributions of variables are equal because there is no information to distinguish between them. Notice that SuSiE does not directly provide posterior of model configuration, that is, inference on ($x_1$ and $x_3$), or ($x_1$ and $x_4$), or ($x_2$ and $x_3$), or ($x_2$ and $x_4$). Effect size estimate by SuSiE, ```sos bhat = coef(fitted)[-1] pdf('susie_eff.pdf', width =5, height = 5, pointsize=16) susieR::susie_plot(bhat, y='bhat', b=b, main = 'SuSiE, effect size estimate') dev.off() ``` <strong>png:</strong> 2 ```sos %preview susie_eff.pdf -s png --dpi 100 ``` ```sos convert -density 120 \( truth.pdf lasso.pdf +append \) \( finemap.pdf susie.pdf +append \) -append Motivating_Example.png ```
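As a closing illustration (in Python for brevity), the marginal inclusion probabilities reported by `FINEMAP` follow directly from summing the configuration probabilities listed above, which is why each of the four candidate variables ends up with probability 0.5; the labels `x1`..`x4` below are just placeholders for the four correlated columns in the toy data.

```python
configs = {("x1", "x3"): 0.25, ("x1", "x4"): 0.25,
           ("x2", "x3"): 0.25, ("x2", "x4"): 0.25}
pip = {}
for config, prob in configs.items():
    for variable in config:
        pip[variable] = pip.get(variable, 0.0) + prob
print(pip)  # each of x1, x2, x3, x4 gets 0.5
```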
688d3f536259d7265d26318da7a5622644bb2bb7
57,029
ipynb
Jupyter Notebook
manuscript_results/motivating_example.ipynb
llgai508/susie-paper
7633d734b5c02ae9f102d7d7c0e4d249a007afd9
[ "MIT" ]
null
null
null
manuscript_results/motivating_example.ipynb
llgai508/susie-paper
7633d734b5c02ae9f102d7d7c0e4d249a007afd9
[ "MIT" ]
null
null
null
manuscript_results/motivating_example.ipynb
llgai508/susie-paper
7633d734b5c02ae9f102d7d7c0e4d249a007afd9
[ "MIT" ]
null
null
null
66.467366
7,393
0.769258
true
3,452
Qwen/Qwen-72B
1. YES 2. YES
0.795658
0.746139
0.593672
__label__eng_Latn
0.871123
0.217628
<p> <div align="right"> Massimo Nocentini<br> <small> <br>March and April 2018: cleanup <br>November 2016: splitting from "big" notebook </small> </div> </p> <br> <br> <div align="center"> <b>Abstract</b><br> Theory of matrix functions, with applications to Pascal array $\mathcal{P}$. </div> ```python from sympy import * from sympy.abc import n, i, N, x, lamda, phi, z, j, r, k, a, alpha, beta init_printing() ``` ```python import functions_catalog ``` ```python from matrix_functions import * from commons import * from sequences import * ``` ```python %run ../../src/commons.py %run ../../src/matrix_functions.py %run ../../src/functions_catalog.py ``` # Pascal array $\mathcal{P}$ ```python m=8 ``` ```python P_ = Matrix(m,m, lambda n,k: binomial(n, k, evaluate=k >= n or not k)) P_ # not usable in the framework because not a definition in equality style ``` ```python P = define(Symbol(r'\mathcal{{P}}_{{ {} }}'.format(m)), Matrix(m,m,binomial)) P ``` ```python eigendata = spectrum(P) eigendata ``` ```python data, eigenvals, multiplicities = eigendata.rhs # unpacking to use `eigenvals` in `subs` ``` ```python Phi_polynomials = component_polynomials(eigendata, early_eigenvals_subs=True) Phi_polynomials ``` ```python cmatrices = component_matrices(P, Phi_polynomials) list(cmatrices.values()) ``` ## `power` function ```python f_power, g_power = functions_catalog.power(eigendata, Phi_polynomials) ``` ```python P_power = g_power(P) P_power ``` ```python production_matrix(P_power.rhs) ``` ```python assert P_power.rhs == (P.rhs**r).applyfunc(simplify) ``` ## `inverse` function ```python f_inverse, g_inverse = functions_catalog.inverse(eigendata, Phi_polynomials) ``` ```python P_inverse = g_inverse(P) P_inverse, g_inverse(P_inverse) ``` ```python production_matrix(P_inverse.rhs) ``` ```python assert (P_inverse.rhs * P.rhs) == Matrix(m, m, identity_matrix()) assert P_inverse.rhs == P.rhs**(-1) assert P_inverse.rhs == P_power.rhs.subs({r:-1}) ``` ## `sqrt` function ```python f_sqrt, g_sqrt = functions_catalog.square_root(eigendata, Phi_polynomials) ``` ```python P_sqrt = g_sqrt(P) P_sqrt ``` ```python production_matrix(P_sqrt.rhs) ``` ```python assert P_sqrt.rhs == P.rhs**(S(1)/2) assert P_sqrt.rhs * P_sqrt.rhs == P.rhs assert P_sqrt.rhs == P_power.rhs.subs({r:S(1)/2}) ``` ```python P_power.rhs.subs({r:S(1)/3}) ``` ```python P_power.rhs.subs({r:2}) ``` ```python inspect(_) ``` nature(is_ordinary=True, is_exponential=True) ## `expt` function ```python f_exp, g_exp = functions_catalog.exp(eigendata, Phi_polynomials) ``` ```python P_exp = g_exp(P) P_exp ``` ```python inspect(P_exp.rhs) ``` nature(is_ordinary=False, is_exponential=True) ```python production_matrix(P_exp.rhs.subs({alpha:1})) # faster than `production_matrix(P_exp.rhs).subs({alpha:1})` ``` ```python simp_P_expt = define(P_exp.lhs, Mul(exp(alpha), P_exp.rhs.applyfunc(lambda c: (c/exp(alpha)).expand()), evaluate=False)) simp_P_expt ``` ```python from sympy.functions.combinatorial.numbers import stirling ``` ```python S = Matrix(m, m, lambda n,k: stirling(n,k, kind=2)) define(Symbol('S'), S), production_matrix(S), production_matrix(S, exp=True) ``` ```python S*P.rhs*S**(-1), S**(-1)*P.rhs*S ``` ## `log` function ```python f_log, g_log, = functions_catalog.log(eigendata, Phi_polynomials) ``` ```python P_log = g_log(P) P_log ``` ```python inspect(P_log.rhs[1:,:-1]) ``` nature(is_ordinary=False, is_exponential=True) ```python production_matrix(P_log.rhs[1:,:-1]) ``` ```python P_exp_dirty = define(P_exp.lhs, P_exp.rhs.subs({alpha:1})) P_exp_dirty ``` 
```python P_exp_eigendata = spectrum(P_exp_dirty) P_exp_eigendata ``` ```python P_exp_Phi_polynomials = component_polynomials(P_exp_eigendata, early_eigenvals_subs=True) P_exp_Phi_polynomials ``` ```python f_log_dirty, g_log_dirty, = functions_catalog.log(P_exp_eigendata, P_exp_Phi_polynomials) ``` ```python g_log_dirty ``` ```python g_log_dirty(P_exp_dirty) ``` ```python P_log_eigendata = spectrum(P_log) P_log_eigendata ``` ```python P_log_Phi_polynomials = component_polynomials(P_log_eigendata, early_eigenvals_subs=True) P_log_Phi_polynomials ``` ```python f_exp_dirty, g_exp_dirty, = functions_catalog.exp(P_log_eigendata, P_log_Phi_polynomials) ``` ```python g_exp_dirty ``` ```python g_exp_dirty(P_log) ``` ```python _.rhs.subs({alpha:1}) ``` ## `sin` function ```python f_sin, g_sin, G_sin = functions_catalog.sin(eigendata, Phi_polynomials) ``` ```python P_sin = G_sin(P) P_sin ``` ```python production_matrix(P_sin.rhs).applyfunc(simplify) # takes long to evaluate ``` ## `cos` function ```python f_cos, g_cos, G_cos = functions_catalog.cos(eigendata, Phi_polynomials) ``` ```python P_cos = G_cos(P) P_cos ``` ```python production_matrix(P_sin).applyfunc(simplify) # takes long to evaluate ``` ```python assert (P_sin.rhs**2 + P_cos.rhs**2).applyfunc(trigsimp) == Matrix(m,m, identity_matrix()) # sin^2 + cos^2 = 1 ``` ## `sin` function ```python P2 = define(Symbol(r'2\,\mathcal{P}'), 2*P.rhs) P2 ``` ```python eigendata_P2 = spectrum(P2) Phi_polynomials_P2 = component_polynomials(eigendata_P2, early_eigenvals_subs=True) ``` ```python f_sin2, g_sin2, G_sin2 = functions_catalog.sin(eigendata_P2, Phi_polynomials_P2) ``` ```python g_sin2 ``` ```python P_sin2 = G_sin2(P2) P_sin2 ``` ```python PP = P_sin2.rhs.applyfunc(lambda i: i.subs({alpha:1})) assert PP == (2*P_sin.rhs*P_cos.rhs).applyfunc(trigsimp) ``` ```python PP ``` --- <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.
078383e36e9b9326334bfa1553e6ef758e140bfb
514,402
ipynb
Jupyter Notebook
notes/matrices-functions/pascal-riordan-array.ipynb
massimo-nocentini/simulation-methods
f7578a9719b1a22e5a25a8de85cc229aef4c259d
[ "MIT" ]
null
null
null
notes/matrices-functions/pascal-riordan-array.ipynb
massimo-nocentini/simulation-methods
f7578a9719b1a22e5a25a8de85cc229aef4c259d
[ "MIT" ]
null
null
null
notes/matrices-functions/pascal-riordan-array.ipynb
massimo-nocentini/simulation-methods
f7578a9719b1a22e5a25a8de85cc229aef4c259d
[ "MIT" ]
null
null
null
196.561712
37,800
0.774643
true
1,801
Qwen/Qwen-72B
1. YES 2. YES
0.843895
0.76908
0.649023
__label__eng_Latn
0.251207
0.346229
.. meta:: :description: A guide which introduces the most important steps to get started with pymoo, an open-source multi-objective optimization framework in Python. .. meta:: :keywords: Multi-Criteria Decision Making, Multi-objective Optimization, Python, Evolutionary Computation, Optimization Test Problem ```python %%capture %run ./part_2.ipynb ``` # Part III: Multi-Criteria Decision Making Having now obtained a set of non-dominated solutions, one can ask how a decision-maker can nail down the set to only a few or even a single solution. This decision-making process for multi-objective problems is also known as Multi-Criteria Decision Making (MCDM). You should know that the main focus of *pymoo* lies in the optimization, not the MCDM part. However, the framework offers some rudimentary tools to find an appropriate solution. The Pareto-optimal solutions obtained from the optimization procedure are given by: ```python F = res.F xl, xu = problem.bounds() plt.figure(figsize=(7, 5)) plt.scatter(F[:, 0], F[:, 1], s=30, facecolors='none', edgecolors='blue') plt.title("Objective Space") plt.show() ``` Before we start using any technique, we should note that the objectives have a different scale. What has not been a problem for single-objective optimization because not more than one dimension existed now becomes fundamentally important to consider. ```python fl = F.min(axis=0) fu = F.max(axis=0) print(f"Scale f1: [{fl[0]}, {fu[0]}]") print(f"Scale f2: [{fl[1]}, {fu[1]}]") ``` Scale f1: [1.3377795039158837, 74.97223429467643] Scale f2: [0.01809179532919018, 0.7831767823138299] As one can observe, the lower and upper bounds of the objectives $f_1$ and $f_2$ are very different, and such normalization is required. A common way is normalizing using the so-called ideal and nadir point. However, for the decision-making purpose here and the sake of generalization, we assume the ideal and nadir points (also referred to as boundary points) and the Pareto-front) are not known. Thus the points can be approximated by: ```python approx_ideal = F.min(axis=0) approx_nadir = F.max(axis=0) ``` ```python plt.figure(figsize=(7, 5)) plt.scatter(F[:, 0], F[:, 1], s=30, facecolors='none', edgecolors='blue') plt.scatter(approx_ideal[0], approx_ideal[1], facecolors='none', edgecolors='red', marker="*", s=100, label="Ideal Point (Approx)") plt.scatter(approx_nadir[0], approx_nadir[1], facecolors='none', edgecolors='black', marker="p", s=100, label="Nadir Point (Approx)") plt.title("Objective Space") plt.legend() plt.show() ``` Normalizing the obtained objective values regarding the boundary points is relatively simple by: ```python nF = (F - approx_ideal) / (approx_nadir - approx_ideal) fl = nF.min(axis=0) fu = nF.max(axis=0) print(f"Scale f1: [{fl[0]}, {fu[0]}]") print(f"Scale f2: [{fl[1]}, {fu[1]}]") plt.figure(figsize=(7, 5)) plt.scatter(nF[:, 0], nF[:, 1], s=30, facecolors='none', edgecolors='blue') plt.title("Objective Space") plt.show() ``` ### Compromise Programming Without going into too much detail in this getting started guide, one way for decision-making is using decomposition functions. They require the definition of weights that reflect the user's wishes. A vector gives the weights with only positive float numbers summing up to one and a length equal to the number of objectives. 
Here for a bi-objective problem, let us assume the first objective is a bit less important than the second objective by setting the weights to ```python weights = np.array([0.2, 0.8]) ``` Next, we choose the decomposition method called Augmented Scalarization Function (ASF), a well-known metric in the multi-objective optimization literature. ```python from pymoo.decomposition.asf import ASF decomp = ASF() ``` Now let us obtain the best solution regarding the ASF. Because ASF is supposed to be minimized, we choose the minimum ASF values calculated from all solutions. You might be wondering why the weights are not passed directly, but `1/weights`. For ASF, different formulations exist, one where the values are divided and one where they are multiplied. In *pymoo*, we divide, which does not reflect the idea of the user's criteria. Thus, the inverse needs to be applied. No worries if this is too much detail for now; however, decision-making about decomposition techniques is vital. ```python i = decomp.do(nF, 1/weights).argmin() ``` After having found a solution ($i$) we can operate on the original scale to represent the results: ```python print("Best regarding ASF: Point \ni = %s\nF = %s" % (i, F[i])) plt.figure(figsize=(7, 5)) plt.scatter(F[:, 0], F[:, 1], s=30, facecolors='none', edgecolors='blue') plt.scatter(F[i, 0], F[i, 1], marker="x", color="red", s=200) plt.title("Objective Space") plt.show() ``` ### Pseudo-Weights A simple way to choose a solution out of a solution set in the context of multi-objective optimization is the pseudo-weight vector approach proposed in <cite data-cite="multi_objective_book"></cite>. Respectively, the pseudo weight $w_i$ for the $i$-th objective function can be calculated by: \begin{equation} w_i = \frac{(f_i^{max} - f_i {(x)}) \, /\, (f_i^{max} - f_i^{min})}{\sum_{m=1}^M (f_m^{max} - f_m (x)) \, /\, (f_m^{max} - f_m^{min})} \end{equation} This equation calculates the normalized distance to the worst solution regarding each objective $i$. Please note that for non-convex Pareto fronts, the pseudo weight does not correspond to the result of an optimization using the weighted sum. However, for convex Pareto-fronts, the pseudo weights indicate the location in the objective space. ```python from pymoo.mcdm.pseudo_weights import PseudoWeights i = PseudoWeights(weights).do(nF) ``` ```python print("Best regarding Pseudo Weights: Point \ni = %s\nF = %s" % (i, F[i])) plt.figure(figsize=(7, 5)) plt.scatter(F[:, 0], F[:, 1], s=30, facecolors='none', edgecolors='blue') plt.scatter(F[i, 0], F[i, 1], marker="x", color="red", s=200) plt.title("Objective Space") plt.show() ```
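To connect the pseudo-weight formula with the library call, the sketch below recomputes the pseudo weights directly with numpy and picks the point whose pseudo weights are closest (in absolute difference) to `weights`. Whether this exactly mirrors what `PseudoWeights` does internally is an assumption; the library may differ in details.

```python
f_min, f_max = F.min(axis=0), F.max(axis=0)
w = (f_max - F) / (f_max - f_min)                  # normalized distance to the worst value, per objective
pseudo_weights = w / w.sum(axis=1, keepdims=True)  # rows sum to one
i_manual = np.abs(pseudo_weights - weights).sum(axis=1).argmin()
print(i_manual, pseudo_weights[i_manual])
```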
a05d8e43642e0f7dcb039450379f305c832f4e33
221,749
ipynb
Jupyter Notebook
source/getting_started/part_3.ipynb
SunTzunami/pymoo-doc
f82d8908fe60792d49a7684c4bfba4a6c1339daf
[ "Apache-2.0" ]
2
2021-09-11T06:43:49.000Z
2021-11-10T13:36:09.000Z
source/getting_started/part_3.ipynb
SunTzunami/pymoo-doc
f82d8908fe60792d49a7684c4bfba4a6c1339daf
[ "Apache-2.0" ]
3
2021-09-21T14:04:47.000Z
2022-03-07T13:46:09.000Z
source/getting_started/part_3.ipynb
SunTzunami/pymoo-doc
f82d8908fe60792d49a7684c4bfba4a6c1339daf
[ "Apache-2.0" ]
3
2021-10-09T02:47:26.000Z
2022-02-10T07:02:37.000Z
416.0394
52,936
0.935319
true
1,588
Qwen/Qwen-72B
1. YES 2. YES
0.888759
0.851953
0.757181
__label__eng_Latn
0.989204
0.597516
# SIRD: A Epidemic Model with Social Distancing **Prof. Tony Saad (<a href='www.tsaad.net'>www.tsaad.net</a>) <br/>Department of Chemical Engineering <br/>University of Utah** <hr/> ```python #HIDDEN from routines import plot_sird_model import ipywidgets as widgets from ipywidgets import interact, interact_manual %matplotlib inline %config InlineBackend.figure_formats = ['svg'] continuousUpdate = False style = {'description_width': 'initial'} beta = widgets.BoundedFloatText(value=0.5,min=0,max=1,step=0.01,description='Infection rate ($\\beta$):',continuous_update=continuousUpdate, style=style) days = widgets.BoundedFloatText(value=14,min=1,max=20,step=1, description='Recovery (days):' ,style=style,continuous_update=continuousUpdate) δ1 = widgets.BoundedFloatText(value=0.1,min=0,max=20.0,step=0.01,description='D to S ($\delta_1$):',readout_format='.3f',style=style,continuous_update=continuousUpdate) δ2 = widgets.BoundedFloatText(value=0.15,min=0,max=20.0,step=0.01,description='S to D ($\delta_2$):',readout_format='.3f',style=style,continuous_update=continuousUpdate) months = widgets.BoundedFloatText(value=12,min=1,max=60,step=1, description='Simulation (months):',style=style,continuous_update=continuousUpdate) vaccinateAfter = widgets.BoundedFloatText(value=8,min=1,description='Vaccine found (months):',max=64,style=style,continuous_update=continuousUpdate) minI = widgets.BoundedFloatText(value=1.0,min=0.0,max=50.0,step=0.001,description='Min Infected (%):',readout_format='.3f',style=style,continuous_update=continuousUpdate, disabled=True) maxI = widgets.BoundedFloatText(value=10.0,min=0,max=50.0,step=0.001,description='Max Infected (%):',readout_format='.3f',style=style,continuous_update=continuousUpdate, disabled=True) semilogy = widgets.Checkbox(value=False,description='semilogy plot:',style=style) # widgets.link((maxI, 'value'), (minI, 'max')) def on_value_change(change): minI.max = maxI.value*0.99 maxI.observe(on_value_change) distanceModel = widgets.Dropdown(options=[('Constant'), ('Reactive')],description='Social Distancing Model:',style=style) def show_max_min(change): if distanceModel.value == 'Constant': maxI.disabled=True minI.disabled=True else: maxI.disabled=False minI.disabled=False distanceModel.observe(show_max_min) ui1 = widgets.HBox([beta,months]) ui2 = widgets.HBox([days,vaccinateAfter]) ui3 = widgets.HBox([δ1,δ2]) ui4 = widgets.HBox([minI,maxI]) ui5 = widgets.HBox([distanceModel, semilogy]) out = widgets.interactive_output(plot_sird_model, {'infection_rate': beta, 'incubation_period': days, 'D_to_S': δ1, 'S_to_D': δ2, 'tend_months':months, 'vaccinateAfter': vaccinateAfter,'minIpercent':minI, 'maxIpercent':maxI,'distanceModel':distanceModel,'semilogy':semilogy}) display(ui1,ui2,ui3,ui4, ui5, out) ``` HBox(children=(BoundedFloatText(value=0.5, description='Infection rate ($\\beta$):', max=1.0, step=0.01, style… HBox(children=(BoundedFloatText(value=14.0, description='Recovery (days):', max=20.0, min=1.0, step=1.0, style… HBox(children=(BoundedFloatText(value=0.1, description='D to S ($\\delta_1$):', max=20.0, step=0.01, style=Des… HBox(children=(BoundedFloatText(value=1.0, description='Min Infected (%):', disabled=True, max=50.0, step=0.00… HBox(children=(Dropdown(description='Social Distancing Model:', options=('Constant', 'Reactive'), style=Descri… Output() # Description SIRD is an epidemic model for infectious disease and is a variation of the SIR model with the addition of social distancing. 
A compartmental diagram is shown below: The governing equations are: \begin{equation} \frac{\text{d}S}{\text{d}t} = -\beta SI - \delta_1 S + \delta_2 D \end{equation} \begin{equation} \frac{\text{d}I}{\text{d}t} = \beta SI - \gamma I \end{equation} \begin{equation} \frac{\text{d}R}{\text{d}t} =\gamma I \end{equation} \begin{equation} \frac{\text{d}D}{\text{d}t} =\delta_1 S - \delta_2 D \end{equation} where: * $\beta$ is the average infection rate * $\gamma$ is the fraction of people recovering, and is equal to $1/d$ where $d$ is the average number of days for recovery * $\delta_1$ is the fraction at which people move from being socially distanced back to the general population * $\delta_2$ is the average fraction at which people become socially distant The goal of this model is to better understanding the effectiveness of social distancing on "flattening" the curve of an infectious disease. For example, is full lock-down better than a step-by-step implementation of social distancing? ## Social Distancing Models Two social distancing models are supported on this page, Constant and Reactive. ### Constant Social Distancing Assumes that both $\delta_1$ and $\delta_2$ are constant ### Reactive Social Distancing This Model triggers social distancing when the # of infected reaches a certain percentage of the population (designated as Max Infected in the GUI, $I_\text{max}$) and relaxes social distancing when it reaches a minimum # of infected (designated as Min Infected in the GUI, $I_\text{min}$). This produces very interesting dynamics especially as $I_\text{max} \to I_\text{min}$ ## Slides You can see a more detailed description in the following <a href='https://github.com/saadtony/SIRD/blob/master/SIRD.pdf'>slides</a>.
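For readers curious about the mechanics: the interactive figure above is produced by `plot_sird_model` from `routines.py`, whose internals are not shown here. The cell below is only a minimal forward-Euler sketch of the constant-distancing equations listed above, using population fractions and illustrative initial conditions; it is not a copy of that routine.

```python
import numpy as np

def sird_euler(beta, gamma, d1, d2, S0=0.99, I0=0.01, R0=0.0, D0=0.0, days=360, dt=0.1):
    """Forward-Euler integration of the constant social-distancing SIRD equations (fractions of N)."""
    n = int(days / dt)
    S, I, R, D = (np.empty(n) for _ in range(4))
    S[0], I[0], R[0], D[0] = S0, I0, R0, D0
    for k in range(1, n):
        dS = -beta * S[k-1] * I[k-1] - d1 * S[k-1] + d2 * D[k-1]
        dI = beta * S[k-1] * I[k-1] - gamma * I[k-1]
        dR = gamma * I[k-1]
        dD = d1 * S[k-1] - d2 * D[k-1]
        S[k], I[k], R[k], D[k] = S[k-1] + dS*dt, I[k-1] + dI*dt, R[k-1] + dR*dt, D[k-1] + dD*dt
    return S, I, R, D

# default GUI values: beta = 0.5, 14-day recovery, delta_1 = 0.1, delta_2 = 0.15
S, I, R, D = sird_euler(beta=0.5, gamma=1/14, d1=0.1, d2=0.15)
```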
f0cb7e3eeb8df2424cb013c86eed846fec54f5e4
8,660
ipynb
Jupyter Notebook
SIRD.ipynb
saadtony/SIRD
87888445826aa9db0e386cf23f0f262bb4f9ab55
[ "MIT" ]
null
null
null
SIRD.ipynb
saadtony/SIRD
87888445826aa9db0e386cf23f0f262bb4f9ab55
[ "MIT" ]
null
null
null
SIRD.ipynb
saadtony/SIRD
87888445826aa9db0e386cf23f0f262bb4f9ab55
[ "MIT" ]
null
null
null
39.907834
390
0.618245
true
1,514
Qwen/Qwen-72B
1. YES 2. YES
0.888759
0.815232
0.724545
__label__eng_Latn
0.758744
0.521693
# Eq. (4.17) and (4.18) Equation (4.17) is \begin{equation} \overline{\phi}^{(k,\alpha,\beta)}_m = \gamma^{(k,\alpha,\beta)}_{m} \phi^{(k/2, \alpha+{k}/{2}, \beta+{k}/{2})}_{m}, \label{eq:phiover} \end{equation} where \begin{equation} \gamma^{(k,\alpha,\beta)}_{n} = \frac{\psi^{(k/2,\alpha,\beta)}_{n+k} g^{(\alpha, \beta)}_{n+k}h^{({k}/{2},\alpha+{k}/{2}, \beta+{k}/{2})}_{n+{k}/{2}}}{g^{(\alpha+{k}/{2},\beta+{k}/{2})}_{n+{k}/{2}}h^{(k,\alpha,\beta)}_{n+k}}. \end{equation} Equation (4.17) is derived using \begin{equation} \partial^k P^{(\alpha, \beta)}_n = \psi^{(k,\alpha,\beta)}_{n} P^{(\alpha+k,\beta+k)}_{n-k}, \label{eq:derP} \end{equation} where $\psi^{(k,\alpha,\beta)}_{n}$ is given in Eq. (2.8). Using \begin{equation} Q^{(\alpha,\beta)}_n(x) = g_n^{(\alpha,\beta)} P^{(\alpha,\beta)}_n(x), \label{eq:Qspec} \end{equation} we get \begin{equation} \partial^k Q^{(\alpha, \beta)}_n = \frac{g^{(\alpha,\beta)}_n }{g^{(\alpha+k,\beta+k)}_{n-k}} \psi^{(k,\alpha,\beta)}_{n} Q^{(\alpha+k,\beta+k)}_{n-k}. \label{eq:derQ} \end{equation} The same equation for $k/2$ is \begin{equation} \partial^{k/2} Q^{(\alpha, \beta)}_{n} = \frac{g^{(\alpha,\beta)}_n }{g^{(\alpha+k/2,\beta+k/2)}_{n-k/2}} \psi^{(k/2,\alpha,\beta)}_{n} Q^{(\alpha+k/2,\beta+k/2)}_{n-k/2}, \label{eq:derQ2} \end{equation} Take $\partial^{k/2}$ of this equation to obtain \begin{equation} \partial^{k} Q^{(\alpha, \beta)}_{n} = \frac{g^{(\alpha,\beta)}_n }{g^{(\alpha+k/2,\beta+k/2)}_{n-k/2}} \psi^{(k/2,\alpha,\beta)}_{n} \partial^{k/2}Q^{(\alpha+k/2,\beta+k/2)}_{n-k/2}, \label{eq:derQ3} \end{equation} and shift the index to $n+k$ to obtain \begin{equation} \partial^{k} Q^{(\alpha, \beta)}_{n+k} = \frac{g^{(\alpha,\beta)}_{n+k} }{g^{(\alpha+k/2,\beta+k/2)}_{n+k/2}} \psi^{(k/2,\alpha,\beta)}_{n+k} \partial^{k/2}Q^{(\alpha+k/2,\beta+k/2)}_{n+k/2}, \label{eq:derQ4} \end{equation} Now we want to find $\gamma^{(k,\alpha,\beta)}$ such that \begin{equation} \overline{\phi}^{(k,\alpha,\beta)}_m = \gamma^{(k,\alpha,\beta)}_{m} \phi^{(k/2, \alpha+{k}/{2}, \beta+{k}/{2})}_{m}, \end{equation} where \begin{equation} \overline{\phi}^{(k,\alpha,\beta)}_m = \frac{(1-x^2)^{k/2}}{h^{(k,\alpha,\beta)}_{n+k}}\partial^k Q^{(\alpha,\beta)}_{n+k} \end{equation} and \begin{equation} \phi^{(k/2,\alpha+k/2,\beta+k/2)}_n = \frac{(1-x^2)^{k/2}}{h^{(k/2,\alpha+k/2,\beta+k/2)}_{n+k/2}}\partial^{k/2} Q^{(\alpha+k/2,\beta+k/2)}_{n+k/2} \end{equation} We get \begin{equation} \gamma^{(k,\alpha,\beta)}_n = \frac{\frac{(1-x^2)^{k/2}}{h^{(k,\alpha,\beta)}_{n+k}}\partial^k Q^{(\alpha,\beta)}_{n+k}}{\frac{(1-x^2)^{k/2}}{h^{(k/2,\alpha+k/2,\beta+k/2)}_{n+k/2}}\partial^{k/2} Q^{(\alpha,\beta)}_{n+k/2}} \end{equation} The $(1-x^2)^{k/2}$ cancel out \begin{equation} \gamma^{(k,\alpha,\beta)}_n = \frac{{h^{(k/2,\alpha+k/2,\beta+k/2)}_{n+k/2}}}{h^{(k,\alpha,\beta)}_{n+k}} \frac{ \partial^k Q^{(\alpha,\beta)}_{n+k} }{\partial^{k/2}Q^{(\alpha+k/2,\beta+k/2)}_{n+k/2}} \end{equation} and we insert for $\frac{\partial^k Q^{(\alpha,\beta)}_{n+k}}{\partial^{k/2}Q^{(\alpha+k/2,\beta+k/2)}_{n+k/2}}$ to get \begin{equation} \gamma^{(k,\alpha,\beta)}_{n} = \frac{\psi^{(k/2,\alpha,\beta)}_{n+k} g^{(\alpha, \beta)}_{n+k}h^{({k}/{2},\alpha+{k}/{2}, \beta+{k}/{2})}_{n+{k}/{2}}}{g^{(\alpha+{k}/{2},\beta+{k}/{2})}_{n+{k}/{2}}h^{(k,\alpha,\beta)}_{n+k}}. 
\end{equation} Note that the two scaling functions $g^{(\alpha, \beta)}_{n+k}$ and $g^{(\alpha+{k}/{2},\beta+{k}/{2})}_{n+{k}/{2}}$ represent the specialized polynomials $Q^{(\alpha,\beta)}_{n+k}$ and $Q^{(\alpha+k/2,\beta+k/2)}_{n+k/2}$, which do not necessarily use the same scaling function $g$. For example, Chebyshev polynomials of first and second kind use slightly different scaling functions. ```python from shenfun.jacobi.recursions import half, alfa, beta, psi, cn, un, h, n, sp def gamma(k, alf, bet, gna, gnb): """Return Eq. (12) Parameters ---------- k : int alf : number bet : number gna : The scaling function of the trial space gnb : The scaling function of the test space """ return sp.simplify(psi(alf, bet, n + k, k//2) *gna(alf, bet, n + k) *h(alf + k//2, bet + k//2, n + k//2, k//2, gnb) /(gnb(alf + k//2, bet + k//2, n + k//2) * h(alf, bet, n + k, k, gna))) ``` Let the trial space be based on Chebyshev polynomials of the first kind, whereas the test space makes use of Chebyshev polynomials of the second kind. For these two bases the scaling functions are $$ g_n^{(-1/2,-1/2)} = \frac{1}{P^{(-1/2,-1/2)}_{n}(1)} $$ and $$ g_n^{(1/2,1/2)} = \frac{n+1}{P^{(1/2,1/2)}_{n}(1)} $$ respectively. These two functions are imported under the names `cn` and `un`, respectively. See [jacobi.recursions.py](https://github.com/spectralDNS/shenfun/blob/master/shenfun/jacobi/recursions.py). We get $\gamma^{(2,-1/2,-1/2)}_n$ to be ```python gam = gamma(2, -half, -half, cn, un) gam ``` $\displaystyle \frac{1}{n + 2}$
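As a quick sanity check, the first few values of $n$ can be substituted into the symbolic result:

```python
# gam equals 1/(n + 2), so the first few values should be 1/2, 1/3, 1/4, 1/5
[gam.subs(n, i) for i in range(4)]
```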
d63fca795a73562fb2a6c02372de916232f1996e
7,823
ipynb
Jupyter Notebook
binder/Equations (4.17-4.18).ipynb
spectralDNS/PG-paper-2022
0bfa82e1ff77a5bb9a6ec5be930b4c00b449a1e0
[ "BSD-2-Clause" ]
null
null
null
binder/Equations (4.17-4.18).ipynb
spectralDNS/PG-paper-2022
0bfa82e1ff77a5bb9a6ec5be930b4c00b449a1e0
[ "BSD-2-Clause" ]
null
null
null
binder/Equations (4.17-4.18).ipynb
spectralDNS/PG-paper-2022
0bfa82e1ff77a5bb9a6ec5be930b4c00b449a1e0
[ "BSD-2-Clause" ]
null
null
null
36.556075
396
0.471303
true
2,104
Qwen/Qwen-72B
1. YES 2. YES
0.79053
0.76908
0.607981
__label__eng_Latn
0.281736
0.250874
# Workshop 12: Introduction to Numerical ODE Solutions *Source: http://phys.csuchico.edu/ayars/312 * **Submit this notebook to bCourses to receive a grade for this Workshop.** Please complete workshop activities in code cells in this iPython notebook. The activities titled **Practice** are purely for you to explore Python. Some of them may have some code written, and you should try to modify it in different ways to understand how it works. Although no particular output is expected at submission time, it is _highly_ recommended that you read and work through the practice activities before or alongside the exercises. However, the activities titled **Exercise** have specific tasks and specific outputs expected. Include comments in your code when necessary. Enter your name in the cell at the top of the notebook. The workshop should be submitted on bCourses under the Assignments tab. $\color{red}{\large\text{Comments}}$ Ex 1: 4 pts Ex 2: 3 pts Ex 3: 3 pts ```python # Run this cell before preceding %matplotlib inline import numpy as np import matplotlib.pyplot as plt ``` ### Exercise 1: The damped harmonic oscillator (DHO) satisfies the following differential equation: $$\frac{d^2x}{dt^2}+\frac{c}{m}\frac{dx}{dt}+\frac{k}{m}x = 0$$ It differs from the previous example by the addition of the $(c/m) dx/dt$ term. Like we did above, we can unwrap this second-order ODE into two first-order ODEs using two separate variables $x(t)$ and $v(t)$ \begin{align} x' &= v \\ v' &= -\frac{c}{m}v - \frac{k}{m}x \end{align} 1. Like in the example above, write down the update rules for $x_i$ and $v_i$. 1. Then write some code to implement your rules to estimate a numerical solution for $x(t)$ and $v(t)$ for a given initial condition $x_0$ and $v_0$ (you can assume $t_0 = 0$ like above). 1. Plot your results for $x(t)$ and $v(t)$ and make sure that they make sense. You may use the code in the example as a template. *Hint*: Recall that the qualitative behavior of the oscillator is different depending on the (dimensionless) value of the ratio $$\frac{(c/m)^2}{k/m}$$ So you should be able to see the effect of this by trying out different values for $c/m$ and $k/m$. 
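Written out explicitly, the forward-Euler update rules for this system, which the solution code below implements, are

\begin{align}
x_i &= x_{i-1} + v_{i-1}\, \Delta t \\
v_i &= v_{i-1} + \left(-\frac{c}{m} v_{i-1} - \frac{k}{m} x_{i-1}\right) \Delta t
\end{align}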
```python # Use Euler method to solve coupled first order ODE import numpy as np %matplotlib inline import matplotlib.pyplot as plt # Initial conditions x_0 = 1.0 v_0 = 0.0 # Number of timesteps T = 1000 dt = 0.1 #size of time step (Delta t) def Euler(t0, x0, v0, T, dt, cm, km): x_data = np.zeros(T) v_data = np.zeros(T) t_data = np.arange(T) * dt + t0 x_data[0] = x0 v_data[0] = v0 for i in range(1,T): x_data[i] = x_data[i-1] + v_data[i-1] * dt v_data[i] = v_data[i-1] + (-cm * v_data[i - 1] - km * x_data[i-1]) * dt return t_data, x_data, v_data km = 0.3 # value of k / m cms = [.3, 1.095] # values of c/m labels = ['underdamped', 'critically damped'] plt.figure(figsize=(8,8)) x_0 = 1.0 plt.subplot(211) plt.ylabel("Position") plt.title("Position of the mass on damped spring") plt.subplot(212) plt.ylabel("Velocity") plt.xlabel("Time") plt.title("Velocity of the mass on damped spring") for i, cm in enumerate(cms): # Analytical solutions for (x(t), v(t)) assuming x_0 = 1.0, v_0 = 0, t_0 = 0 D = x_0/np.sqrt(1 - cm**2/(4*km)) y = cm/2 w = (km - 1/4 * cm**2)**(1/2) phi = np.arctan2(-y, w) t_data, x_data, v_data = Euler(0.0, x_0, v_0, T, dt, cm, km) analytical_x = D * np.exp(-y * t_data) * np.cos(w*t_data + phi) analytical_v = -y * D * np.exp(-y * t_data) * np.cos(w*t_data + phi) \ - w * D * np.exp(-y*t_data) * np.sin(w*t_data + phi) plt.subplot(211) plt.plot(t_data, x_data, label="numerical " + labels[i]) plt.plot(t_data, analytical_x,label="analytical " + labels[i]) plt.legend() plt.subplot(212) plt.plot(t_data, v_data, label="numerical " + labels[i]) plt.plot(t_data, analytical_v, label="analytical " + labels[i]) plt.legend() overdamped_cm = 2 t_data, x_data, v_data = Euler(0.0, x_0, v_0, T, dt, overdamped_cm, km) plt.subplot(211) plt.plot(t_data, x_data, label="numerical overdamped") plt.legend() plt.subplot(212) plt.plot(t_data, v_data, label="numerical overdamped") plt.legend() ``` The ratio $\frac{cm^2}{km}$ determines whether the oscillator is underdamped, critically damped, or overdamped ## But wait... But you know that for a closed system, like the SHO, we actually have a special constraint on the system--the total energy (kinetic + potential) must be constant! So at every point of our solution, we should check whether this is true. How do we evaluate the total energy? $$E = T + U = \frac{1}{2}mv^2 + \frac{1}{2}kx^2$$ Let's define a rescaled energy $\tilde{E}$ as $(1/m)E$: $$\tilde{E} = \frac{1}{2}v^2 + \frac{1}{2}\frac{k}{m} x^2$$ ### Exercise 2: 1. Copy the code from the example using the SHO above, in which we solved the SHO using the Euler Method. Add code to calculate the rescaled energy $\tilde{E}_i$ for each time step. 1. Plot $\tilde{E}(t)$ vs. the time. Does the energy stay constant, fluctuate around some constant value, or does it diverge/decay? 
```python
# Use Euler method to solve coupled first order ODE
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import scipy.optimize as sp

plt.rcParams['font.size'] = 14

km = 0.3  # value of k / m

# Initial conditions
x_0 = 1.0
v_0 = 0.0

# Number of timesteps
T = 1000
dt = 0.1  # size of time step (Delta t)

def Euler(t0, x0, v0, T, dt):
    x_data = np.zeros(T)
    v_data = np.zeros(T)
    E_data = np.zeros(T)
    t_data = np.arange(T) * dt + t0
    x_data[0] = x0
    v_data[0] = v0
    E_data[0] = 1/2 * v0**2 + 1/2 * km * x0**2
    for i in range(1,T):
        x_data[i] = x_data[i-1] + v_data[i-1] * dt
        v_data[i] = v_data[i-1] + (-km * x_data[i-1]) * dt
        E_data[i] = 1/2 * v_data[i]**2 + 1/2 * km * x_data[i]**2
    return t_data, x_data, v_data, E_data

t_data, x_data, v_data, E_data = Euler(0.0, x_0, v_0, T, dt)

def linear(x, a, b):
    return a * x + b

par, _ = sp.curve_fit(linear, t_data, np.log(E_data))

plt.plot(t_data, E_data, 'b', linewidth = 8, label = 'Euler method energy')
plt.plot(t_data, np.exp(linear(t_data, *par)), 'r', linewidth = 3,
         label = '$E = {:.2f}e^{{{:.2f}t}}$'.format(np.exp(par[1]), par[0]))
plt.xlabel('time')
plt.ylabel('energy')
plt.legend()
```

* the Euler method yields an exponentially increasing energy

## Euler-Cromer/Symplectic Euler Method

There exists a way to keep the energy fluctuations from growing, using just a slight variant of the update rules described above. This update rule is called the **Euler-Cromer** (or **symplectic Euler**) method:

\begin{align}
v_i &= v_{i-1} + \left(-\frac{k}{m}x_{i-1}\right)\Delta t \\
x_i &= x_{i-1} + v_{i} \Delta t
\end{align}

In this version, you use the approximate velocity at time $t_i$ instead of the velocity at time $t_{i-1}$ to calculate $x_i$.

### Exercise 3:

1. Modify the code from Exercise 2 to instead implement the update rule in the Euler-Cromer method. You can either modify it in place or copy it to the cell below and modify it.
1. Now run your code to calculate and plot $x(t)$, $v(t)$, and $\tilde{E}(t)$. Does the energy stay constant, fluctuate around some constant value, or does it diverge/decay?

```python
# Use the Euler-Cromer method to solve the coupled first order ODEs
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt

plt.rcParams['font.size'] = 14

km = 0.3  # value of k / m

# Initial conditions
x_0 = 1.0
v_0 = 0.0

# Number of timesteps
T = 1000
dt = 0.1  # size of time step (Delta t)

def Euler(t0, x0, v0, T, dt):
    x_data = np.zeros(T)
    v_data = np.zeros(T)
    E_data = np.zeros(T)
    t_data = np.arange(T) * dt + t0
    x_data[0] = x0
    v_data[0] = v0
    E_data[0] = 1/2 * v0**2 + 1/2 * km * x0**2
    for i in range(1,T):
        v_data[i] = v_data[i-1] + (-km * x_data[i-1]) * dt
        x_data[i] = x_data[i-1] + v_data[i] * dt
        E_data[i] = 1/2 * v_data[i]**2 + 1/2 * km * x_data[i]**2
    return t_data, x_data, v_data, E_data

t_data, x_data, v_data, E_data = Euler(0.0, x_0, v_0, T, dt)

print('Amplitude = {:.3f}'.format((np.max(E_data) - E_data[0])))
print('Maximum fractional error = {:.2f}'.format((np.max(E_data) - E_data[0])/E_data[0]))

plt.plot(t_data, E_data)
plt.xlabel('time')
plt.ylabel('energy')
```

* the energy oscillates about an equilibrium value of 0.15 with an amplitude of 0.004
* the maximum fractional error in the energy is 0.03
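As a closing aside (an editorial note, not part of the original workshop), a one-line calculation explains the exponential energy growth seen in Exercise 2. With $\omega^2 = k/m$, one step of the plain Euler update applied to the SHO gives

\begin{align}
\tilde{E}_i &= \tfrac{1}{2}\left(v_{i-1} - \omega^2 x_{i-1}\Delta t\right)^2 + \tfrac{1}{2}\omega^2\left(x_{i-1} + v_{i-1}\Delta t\right)^2 \\
&= \tilde{E}_{i-1}\left(1 + \omega^2 \Delta t^2\right),
\end{align}

so $\tilde{E}_i = \tilde{E}_0\,(1+\omega^2\Delta t^2)^i \approx \tilde{E}_0\, e^{\omega^2 \Delta t\, t_i}$ for small $\Delta t$. With $k/m = 0.3$ and $\Delta t = 0.1$ this predicts a growth rate of roughly $0.03$ per unit time, which can be compared directly against the exponential fit in Exercise 2. The Euler-Cromer (symplectic) update does not share this uniform growth factor, consistent with the bounded oscillation of the energy observed in Exercise 3.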
cd093ba0a69615acb938450dcdf89979fa8eb845
142,995
ipynb
Jupyter Notebook
Fall2020_DeCal_Material/Resources/Workshop12_solutions.ipynb
emilyma53/Python_DeCal
1b98351ecd16f93a5357c9e00af18dde82c813b1
[ "MIT" ]
2
2021-02-01T22:53:16.000Z
2022-02-18T19:04:52.000Z
Fall_2020_DeCal_Material/Resources/Workshop12_solutions.ipynb
James11222/Python_DeCal_2020
7e7d28bce2248812446ef2e2e141230308b318c4
[ "MIT" ]
null
null
null
Fall_2020_DeCal_Material/Resources/Workshop12_solutions.ipynb
James11222/Python_DeCal_2020
7e7d28bce2248812446ef2e2e141230308b318c4
[ "MIT" ]
1
2021-09-30T23:10:25.000Z
2021-09-30T23:10:25.000Z
328.724138
64,980
0.92483
true
2,720
Qwen/Qwen-72B
1. YES 2. YES
0.859664
0.859664
0.739022
__label__eng_Latn
0.945979
0.555327
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('classic')
%matplotlib inline
```

# Class 11: Introduction to Dynamic Optimization: A Two-Period Cake-Eating Problem

Dynamic optimization, the optimal choice over elements in a time series, is at the heart of macroeconomic theory. People and firms routinely make decisions that affect their future opportunities. For example, a household chooses how much of its income to save and therefore allocates some of its current consumption to the future.

In this Notebook, we consider a simple two-period dynamic optimization model. A person has some initial quantity of cake and chooses the optimal way to eat it over the next two periods. We solve for the optimal consumption path.

## Preferences

Suppose that a person has only two periods to eat cake. Let $C_0$ denote the quantity of cake consumed in period 0 and let $C_1$ denote the quantity consumed in period 1. The person's utility in period 0 from consuming $C_0$ and $C_1$ is given by:

\begin{align}
U(C_0,C_1) & = \log C_0 + \beta \log C_1,
\end{align}

where $0\leqslant\beta\leqslant 1$ is the person's *subjective discount factor*. $U$ is an *intertemporal utility function* and $\log C$ is the *flow* of utility generated each period. The presence of $\beta$ indicates that the person places lower weight on consumption in the future.

The utility function implies indifference curves of the form:

\begin{align}
C_1 & = e^{ \frac{\bar{U} - \log C_0}{\beta}},
\end{align}

where $\bar{U}$ is some given level of utility.

### Example: Plot Indifference Curves for Different Utility Levels

For $C_0$ between 0.01 and 10 and $\beta=0.95$, plot indifference curves for $\bar{U}=1, 2, 3, 4$. Set x- and y-axis limits to [0,10].

```python
# Define a function for computing c1 given a beta value and an array of c0 values

# Create a variable called 'c0' that stores c0 values from 0.01 to 10

# Create a variable called 'beta' that stores the value of beta

# Plot indifference curves

# Place legend to right of figure. PROVIDED
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
```

### Example: Plot Indifference Curves for Different $\beta$ Values

For $C_0$ between 0.01 and 30 and $\bar{U} = 3$, plot indifference curves for $\beta=1, 0.95, 0.5, 0.25$. Set x-axis limits to [0,30] and y-axis limits to [0,10].

```python
# Create a variable called u that stores the value of utility for the indifference curves

# Create variable called 'c0' that stores c0 values from 0.01 to 30

# Plot indifference curves

# Place legend to right of figure. PROVIDED
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
```

## Budget Constraints and the Boundary Condition

Let $K_t$ denote the quantity of cake available in period $t$ and let $K_0$ denote the initial quantity of cake. The cake neither grows nor depreciates over time so the person faces the following budget constraints each period:

\begin{align}
C_0 & = K_0 - K_1\\
C_1 & = K_1 - K_2.
\end{align}

Finally, the person's problem is subject to a *boundary condition* that imposes a limit on the endpoint of the problem. Since the person only consumes cake in periods 0 and 1, there is no value in leaving leftover cake at the end of period 1. This reasoning implies the boundary condition:

\begin{align}
K_2 & = 0.
\end{align}

The boundary condition allows us to eliminate $K_2$ from the period 1 budget constraint.
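Before moving on, a possible completion of the first indifference-curve cell above (an editorial sketch only; the function name `c1_indifference` and the plotting choices are assumptions, while the formula comes directly from the expression for $C_1$ given above):

```python
import numpy as np
import matplotlib.pyplot as plt

# Indifference curve implied by U(C0, C1) = log C0 + beta * log C1:
#   C1 = exp((Ubar - log C0) / beta)
def c1_indifference(c0, ubar, beta):
    return np.exp((ubar - np.log(c0)) / beta)

c0 = np.linspace(0.01, 10, 500)
beta = 0.95

for ubar in [1, 2, 3, 4]:
    plt.plot(c0, c1_indifference(c0, ubar, beta), label=r'$\bar{U}=$' + str(ubar))

plt.xlim([0, 10])
plt.ylim([0, 10])
plt.xlabel('$C_0$')
plt.ylabel('$C_1$')

# Place legend to right of figure. PROVIDED
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()
```

The second cell follows the same pattern with $\bar{U}$ fixed at 3 and $\beta$ varied.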
Combining the two budget constraints to eliminate $K_1$ implies an *intertemporal budget constraint* that links consumption in period 0 with consumption in period 1:

\begin{align}
C_0 & = K_0 - C_1.
\end{align}

Notice that the implied price of $C_0$ in terms of $C_1$ is 1.

### Example: Plot the Intertemporal Budget Constraint

For $C_0$ between 0.01 and 20, plot the budget constraint for $K_0=5, 10, 15, 20$. Set x-axis limits to [0,10] and y-axis limits to [0,30].

```python
# Define a function for computing c1 given a k0 value and an array of c0 values

# Create variable called 'c0' that stores c0 values from 0.01 to 10

# Plot budget constraints

# Place legend to right of figure. PROVIDED
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
```

## Optimization

The person's problem is to choose $C_0$, $C_1$, $K_1$, and $K_2$ to maximize his utility subject to the two budget constraints and the boundary condition. However, the problem can be simplified considerably. Use the boundary condition to eliminate $K_2$ from the budget constraints. Then use the budget constraints to substitute $C_0$ and $C_1$ out of the utility function. The result is an unconstrained optimization problem in $K_1$:

\begin{align}
\max_{K_1} \; \log (K_0 - K_1) + \beta \log K_1,
\end{align}

Take the derivative with respect to $K_1$ and set the derivative equal to zero to obtain the *first-order* or *optimality* condition:

\begin{align}
-\frac{1}{K_0 - K_1} + \frac{\beta}{K_1} & = 0.
\end{align}

Solve for $K_1$ to find the optimal cake holding for period 1:

\begin{align}
\boxed{K_1 = \frac{\beta}{1+\beta}K_0}
\end{align}

Then use the budget constraints to infer optimal cake consumption in period 0:

\begin{align}
\boxed{C_0 = \frac{1}{1+\beta}K_0}
\end{align}

and period 1:

\begin{align}
\boxed{C_1 = \frac{\beta}{1+\beta}K_0}.
\end{align}

Notice that $C_0 + C_1 = K_0$ so all of the cake is eaten by the end.

### Example: Solve for the Optimal Consumption Path

Compute $K_1, C_0, C_1$ for $\beta = 0.9$ and $K_0 = 20$.

```python
# Define a function that returns the solution to the two-period cake-eating problem

    # Initialize arrays for cake and consumption

    # Assign values to cake array elements

    # Assign values to consumption array elements as implied by the budget constraints

# Create a variable called 'cake0' that stores initial cake value

# Create a variable called 'beta' that stores beta value

# Compute the optimal path
```

$\beta < 1$ means that the person places greater weight on consumption in period 0 and therefore the person chooses to consume a greater share of the cake in period 0.
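A possible completion of the final example cell (an editorial sketch that simply evaluates the boxed formulas above; the function name `cake_eating_solution` and the array layout are assumptions):

```python
import numpy as np

def cake_eating_solution(cake0, beta):
    """Return (K, C): cake holdings K0, K1, K2 and consumption C0, C1."""
    K = np.zeros(3)
    C = np.zeros(2)
    K[0] = cake0
    K[1] = beta / (1 + beta) * cake0   # optimal K1 from the first-order condition
    K[2] = 0.0                         # boundary condition
    C[0] = K[0] - K[1]                 # period 0 budget constraint
    C[1] = K[1] - K[2]                 # period 1 budget constraint
    return K, C

cake0 = 20
beta = 0.9
K, C = cake_eating_solution(cake0, beta)
print('K1 =', K[1])
print('C0 =', C[0])
print('C1 =', C[1])
```

With $\beta = 0.9$ and $K_0 = 20$, the boxed formulas give $K_1 = C_1 \approx 9.47$ and $C_0 \approx 10.53$.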
f429bfb77666f943eca34902a5d478a3cdb68524
12,830
ipynb
Jupyter Notebook
Lecture Notebooks/Econ126_Class_11_blank.ipynb
t-hdd/econ126
17029937bd6c40e606d145f8d530728585c30a1d
[ "MIT" ]
null
null
null
Lecture Notebooks/Econ126_Class_11_blank.ipynb
t-hdd/econ126
17029937bd6c40e606d145f8d530728585c30a1d
[ "MIT" ]
null
null
null
Lecture Notebooks/Econ126_Class_11_blank.ipynb
t-hdd/econ126
17029937bd6c40e606d145f8d530728585c30a1d
[ "MIT" ]
null
null
null
40.473186
580
0.413952
true
1,700
Qwen/Qwen-72B
1. YES 2. YES
0.917303
0.83762
0.768351
__label__eng_Latn
0.993864
0.623469
# Review of Basics of Linear Algebra --- **Agenda** >1. Matrix Vector Operations using NumPy >1. Vector Spaces and Matrices: Four fundamental fubspaces >1. Motivating Examples: Image and text manipulations >1. Eigen-decomposition, determinant and trace >1. Special Matrices: Orthogonal Matrices >1. Norms > ```python # Following lines are for Python 2.x to 3.xx compatibility from __future__ import print_function from __future__ import division ``` ```python #IMPORT import numpy as np import matplotlib.pyplot as plt import matplotlib.image as mpimg %matplotlib inline ## Set a seed for the random number generator np.random.seed(100) ``` ## NumPy Arrays: Vectors, Matrices and Tensors ### Creating some special vectors and matrices *** >- Fixed ones: having a set of given elements >- Random ones >- Reshaping vectors to get matrices >- Zero matrix >- One matrix >- Identity matrix >- Permutation matrix ```python ## Create a vectors of length 4 v1 = np.array([1, 2, 3, 4]) v2 = np.array([3, 2,1,-1]) print("v1:", v1) print ("v2:", v2) ``` v1: [1 2 3 4] v2: [ 3 2 1 -1] ```python # Create a random vector of Integers v3 = np.random.randint(0, high=15, size=(4,)) print ("v3: ",v3) ``` v3: [8 8 3 7] ```python # Create a random COLUMN vector of Integers v31 = np.random.randint(0, high=8, size=(4,1)) print ("v31: \n",v31) ``` v31: [[7] [7] [0] [2]] ```python v4 = np.arange(5,9) print ("v4: ",v4) ``` v4: [5 6 7 8] ```python ## Create a matrix of order 4x3 A = np.array([[1, 2, 3], [2, 1, 4], [2, 4, 7], [1, 2, 3]], dtype=float) ## Following is no longer recommeded matA = np.matrix([[1, 2, 3], [2, 1, 4], [2, 4, 7], [1, 2, 3]]) matA2 = np.matrix("1, 2; 3, 4") ## Create a random matrix of order 4x3 whose elements are chosen uniformly randomly ## CHANGES NEEDED WITH Python 3.x for compatibilty B = np.random.rand(4,3) ## Create a matrix of order 4x3 made of all zeros zero_43 = np.zeros((4,3), dtype=float) ## Create a matrix of all ones of order 3x5 ones_35 = np.ones((3,5), dtype=float) ## Create an identity matrix of order 4x4 # eye_4 = np.identity(4) eye_4 = np.eye(4) # This is more general. See the documentation. ``` ```python print("The random matrix B is:\n", B) print ("\n The identity matrix of order 4: ",eye_4) ``` The random matrix B is: [[ 0.81168315 0.17194101 0.81622475] [ 0.27407375 0.43170418 0.94002982] [ 0.81764938 0.33611195 0.17541045] [ 0.37283205 0.00568851 0.25242635]] The identity matrix of order 4: [[ 1. 0. 0. 0.] [ 0. 1. 0. 0.] [ 0. 0. 1. 0.] [ 0. 0. 0. 1.]] ```python ## Create a vector of order 12 (such as [3,5,7,...])and ## Rearrange its elements to create a matrix of order 4-by-3. v12 = np.arange(3,26,2) A43 = v12.reshape(4,3) # or v12.reshape(4,-1) print( "An array v12=",v12, "\n Reshaped into a matrix:\n", A43) ``` An array v12= [ 3 5 7 9 11 13 15 17 19 21 23 25] Reshaped into a matrix: [[ 3 5 7] [ 9 11 13] [15 17 19] [21 23 25]] ```python ## Create a vector of order 12 (such as [3,5,7,...])and ## Rearrange its elements to create a matrix of order 4-by-3. v12 = np.arange(3,42,2) A43 = v12.reshape(4,-1) # or v12.reshape(4,-1) print( "An array v12=",v12, "\n Reshaped into a matrix:\n", A43) ``` An array v12= [ 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41] Reshaped into a matrix: [[ 3 5 7 9 11] [13 15 17 19 21] [23 25 27 29 31] [33 35 37 39 41]] ```python v12.shape ``` (20,) ```python ``` ```python ``` ```python ``` ### Basic array operations --- Revise the following operation on matrices. Study the properties. 
>- Transpose of a matrix >- Addition of two matrices >- Elementwise product of two matrices and other elementwise operations >- Multiplication of two matrices (dot product) >- Finding Submatrices >- Broadcasting in NumPy ```python ## Add the two vectors above v3 = v1 + v2 ## np.add(v1,v2) ## Multiply the two vectors (element-wise) v4=v1*v2 #Dot product dotp = np.dot(v1,v2) print("The sum of the vectors", v1,"+", v2, "=",v3) print("The elementwise product of the vectors is:", v4) print("The dot product of the vectors is A NUMBER: ", dotp) ``` The sum of the vectors [1 2 3 4] + [ 3 2 1 -1] = [4 4 4 3] The elementwise product of the vectors is: [ 3 4 3 -4] The dot product of the vectors is A NUMBER: 6 ```python ## Adding two matrices A_plus_B = A + B #np.add(A,B) print ("A:\n", A) print ("\n B:\n", B) print ("\n The sum is: \n", A_plus_B) ## Can you multiply the two matrices, A and B? How? multAB = A*B print ("\n The element wise product is: \n", multAB) ``` A: [[ 1. 2. 3.] [ 2. 1. 4.] [ 2. 4. 7.] [ 1. 2. 3.]] B: [[ 0.81168315 0.17194101 0.81622475] [ 0.27407375 0.43170418 0.94002982] [ 0.81764938 0.33611195 0.17541045] [ 0.37283205 0.00568851 0.25242635]] The sum is: [[ 1.81168315 2.17194101 3.81622475] [ 2.27407375 1.43170418 4.94002982] [ 2.81764938 4.33611195 7.17541045] [ 1.37283205 2.00568851 3.25242635]] The element wise product is: [[ 0.81168315 0.34388203 2.44867425] [ 0.54814749 0.43170418 3.76011928] [ 1.63529876 1.3444478 1.22787318] [ 0.37283205 0.01137701 0.75727906]] ```python print ('A: \n',A) print ('Transpose of A, A.T \n ', A.T) # Let us create a matrix C = np.random.randint(0, high=2, size=(3,5)) print ("\n Random matrix C:\n",C) ## MULTIPLICATION (4,3) and (3,5): Find the transpose of B and Multiply to A in appropriate order AtimesC = np.dot(A, C) print ('\n Product of A and C in the sense of dot product : \n', AtimesC) ``` A: [[ 1. 2. 3.] [ 2. 1. 4.] [ 2. 4. 7.] [ 1. 2. 3.]] Transpose of A, A.T [[ 1. 2. 2. 1.] [ 2. 1. 4. 2.] [ 3. 4. 7. 3.]] Random matrix C: [[0 1 1 1 0] [0 0 0 0 0] [1 0 1 0 1]] Product of A and C in the sense of dot product : [[ 3. 1. 4. 1. 3.] [ 4. 2. 6. 2. 4.] [ 7. 2. 9. 2. 7.] [ 3. 1. 4. 1. 3.]] ```python # This has been depricated. No longer advised to be used. mat1 = np.matrix('1 0; 0 2') mat2 = np.matrix('1 2 3; -1 0 1') print ("Product of two matrices of data type matrix:\n",mat1*mat2) # no need of np.dot(mat1, mat2) ``` Product of two matrices of data type matrix: [[ 1 2 3] [-2 0 2]] ### Submatrices by slicing ```python # Random Example Matrix S = np.random.rand(4,4) print("S:\n",S) print ("S[:,[1,3]]") print (S[:,[1,3]]) print ("S[[1,3],:]") print (S[[1,3],:]) E = S[:,[1,3]][0:2] # Submatrix by slicing print ('A submatrix is E: ----------------------------') print (E) print ("S[:,[1,2]][0:2]") print (S[:,[1,2]][0:2]) ``` S: [[0.00607005 0.82080849 0.26320965 0.50106643] [0.71096644 0.02219788 0.53582866 0.24812327] [0.55302567 0.81867102 0.04103166 0.56728301] [0.2774666 0.75213614 0.62362505 0.81281704]] S[:,[1,3]] [[0.82080849 0.50106643] [0.02219788 0.24812327] [0.81867102 0.56728301] [0.75213614 0.81281704]] S[[1,3],:] [[0.71096644 0.02219788 0.53582866 0.24812327] [0.2774666 0.75213614 0.62362505 0.81281704]] A submatrix is E: ---------------------------- [[0.82080849 0.50106643] [0.02219788 0.24812327]] S[:,[1,2]][0:2] [[0.82080849 0.26320965] [0.02219788 0.53582866]] ### Broadcasting <hr> Can you add the following? 
$$ A = \left( \begin{array}{ccc} 1 & 1 & 2 \\ 1 & -1 & 3 \end{array} \right) + \left( \begin{array}{cc} 5 \\ -5 \end{array} \right) $$ OR $$ B = \left( \begin{array}{ccc} 1 & 1 & 2 \\ 1 & -1 & 3 \end{array} \right) + \left( \begin{array}{ccc} 1 & 1 &1\\ \end{array} \right) $$ ```python A = np.array([[1, 1, 2],[1,-1,3]]) + np.array([[5],[-5]]) B = np.array([[1, 1, 2],[1,-1,3]]) + np.array([1,1,1]) print("A is\n",A) print("B is\n",B) ``` A is [[ 6 6 7] [-4 -6 -2]] B is [[2 2 3] [2 0 4]] ## Vector Spaces ### Independence, Orthogonality and Subspaces --- <h5 style="margin-bottom:10px"> What does it mean for a set of vectors $\{ v_1, v_2, \cdots, v_r\}$ to be linearly independent?</h5> <div class='eqnbox2'> $$ \alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_r v_r \implies \alpha_i = 0,\ i=1,\cdots,r.$$ </div> In other words these vectors can not be combined to form a zero vector, or, one of these vectors can not be given as a linear combination of the remaining.<br> <h5 style="margin-bottom:10px"> What conditions two vectors that are perpendicular (orthogonal) to each other satisfy?</h5> <div class='eqnbox2'> $$ u \perp v \iff \langle u, v\rangle = v^Tu = 0$$ </div> <h5 style="margin-bottom:10px"> Provide the conditions so that two subspaces of a vector space are mutually orthogonal.</h5> <div class='eqnbox2'> $$ U \perp V \iff \langle u, v\rangle = v^Tu = 0,\ \forall (u \in U, v \in V).$$ </div> --- ## The Four Fundamental Subspaces --- The matrix $A \in \mathbb{R}^{m \times n}$ represents a linear transformation, $T_A(x): \mathbb{R}^n \to \mathbb{R}^m$ given by $T_A(x) = A x$. We can define the following four space with respect to matrix $A$. >1. Row space of $A$ is a subspace of $\mathbb{R}^n$: $$ \mathcal{R}(A) =\mathcal{C}(A^T) = \left\{ x \in \mathbb{R}^n : x = A^T y \textrm{ for some }y \in \mathbb{R}^n \right\}.$$ >1. Null space of $A$ is a subspace of $\mathbb{R}^n$: $$ \mathcal{N}(A) = \left\{ x \in \mathbb{R}^n : Ax = 0 \right\}.$$ >1. Column space of $A$ is a subspace of $\mathbb{R}^m$: $$ \mathcal{C}(A) = \left\{ y \in \mathbb{R}^m : y = Ax \textrm{ for some }x \in \mathbb{R}^n \right\}.$$ >1. Left null space of $A$, or null space of $A^T$, is a subspace of $\mathbb{R}^m$: $$ \mathcal{N}(A^T) = \left\{ y \in \mathbb{R}^m : A^T y = 0 \right\}.$$ <br> <br> The above four subspaces satisfy the following properties: >1. $ \mathcal{R}(A) \perp \mathcal{N}(A) $, or $ \mathcal{C}(A^T) \perp \mathcal{N}(A) $. >1. $ \mathcal{C}(A) \perp \mathcal{N}(A^T) $. >1. $ \mathcal{R}(A) \oplus \mathcal{N}(A) =\mathbb{R}^n $. >1. $ \mathcal{C}(A) \oplus \mathcal{N}(A^T) =\mathbb{R}^m $. Here $\perp $ stands for perpendicular, and $\oplus$ means that every element $x \in \mathbb{R}^n$ could be written as $x = x_n+x_r$, where $x_n \in \mathcal{N}(A)$ and $x_r \in \mathcal{R}(A)$. If rank$(A)$ = r, then >- dim $\mathcal{R}(A) = $ dim $\mathcal{C}(A) = r$, >- dim $\mathcal{N}(A)=n-r$, >- and dim$\mathcal{N}(A^T)=m-r$. **Group-work**: Consider matrix A and its echelon form to answer the following $$ A = \begin{bmatrix} 1 & 2 & 0 & 1\\ 0 & 1 & 1 & 0\\ 1 & 2 & 0 & 1 \end{bmatrix} \Rightarrow U = \begin{bmatrix} 1 & 2 & 0 & 1\\ 0 & 1 & 1 & 0\\ 0 & 0 & 0 & 0 \end{bmatrix} $$ >1. Find all its four subspaces as spans of vectors. ### Motivating Example: An image as a tensor, a matrix, and a vector *** Do the following 1. Read an image from your computer (I downloaded a 'Baby Yoda') 1. Verify the datatype and check the dimensions of the data. 1. 
Convert the image to a monochrome image (a matrix) by using weighted addition of the RGB values. 1. Put a frame around the image. 1. Add some Gaussian-noise to the image. ```python pix = mpimg.imread("./images/BabyYodaDoll.jpg") # OR BabyYodaDoll.jpg, BabyYodaXmas plt.axis('off') plt.imshow(pix) plt.show() ``` ```python type(pix) ``` numpy.ndarray ```python dim = pix.shape print ("Original order of the image tensor:", dim) X = pix.reshape(-1,3)/255.0 print("After vectorizing the image, the dimensions are:", X.shape ) ``` Original order of the image tensor: (368, 700, 3) After vectorizing the image, the dimensions are: (257600, 3) #### Conversion to a black & white image [Check this out](https://www.prasannakumarr.in/journal/color-to-grayscale-python-image-processing ) ```python ## Converting a color image to black-n-white ### Read this blog: https://www.prasannakumarr.in/journal/color-to-grayscale-python-image-processing color_weight = [0.2125, 0.7154, 0.0721]; # LUMA-REC.709 #color_weight = [1, 0, 0] pix_gray = np.dot(pix[..., 0:3], color_weight) # Note conversion to gray-scale is not unique print ("Order of the gray-scale image matrix:", pix_gray.shape) plt.axis('off') plt.imshow(pix_gray, cmap='gray') plt.show() ``` #### Submatrices by Slicing *** ```python # Create a frame around the image pix_framed = np.zeros(np.add(pix_gray.shape, tuple([20,20]))) print ("Order of the framed gray-scale image matrix:", pix_framed.shape) pix_framed[10:pix_gray.shape[0]+10, 10:pix_gray.shape[1]+10] = pix_gray plt.axis('off') plt.imshow(pix_framed, cmap='gray') plt.show() ``` #### Adding Gaussian noise to a black-&-white image. ```python #classwork: Make the frame lighter and wider. # Add gaussian noise to the image #size = pix_gray.shape gauss_noise = np.random.normal(0.0,10.0, (368,700)) noisy_pix= pix_gray + gauss_noise plt.axis('off') plt.imshow(pix) #plt.imshow(gauss_noise.reshape(368,700)) plt.imshow(noisy_pix, cmap='gray') plt.show() ``` #### Normal (Gaussian) Distribution $$\large p(x) = \frac{1}{\sqrt{ 2 \pi \sigma^2 }} e^{ - \frac{ (x - \mu)^2 } {2 \sigma^2} }, $$ where $\mu$ is the mean and $\sigma$ is the standard deviation of the distribution. ### Motivating Example: Term-document Matrix --- AIM: Load some text data in a term-document matrix. You can try removing the stop-words. ```python from sklearn.feature_extraction.text import CountVectorizer,ENGLISH_STOP_WORDS ``` ```python vectorizer = CountVectorizer() ``` ```python document1 = " I bought this game as a gift for my 8 year old daughter who loves games. I was expecting lots of gross foods--but I was surprised at the inappropriate cards--eyeball, human burger, blood salsa, and fresh brains. Those are not foods that typical people find in their refrigerators. We do not practice cannibalism. She was very upset when I suggested that we just take out those cards. I seriously wonder who thinks that those cards are appropriate for kids. The rest of the game is funny, but I wish I would have looked through the cards before I gave it to her." document2 = "Absolutely love Taco vs burrito. I️ bought it as a kickstarter. I️ originally bought this game because my husband and I️ love to play games with friends but most of them are not targeted to children so I️ got this to add to our collection so we had options when our friends with kids came. I’m not gonna lie I️ did No have high expectations for this to be a game for adult but I️ was Sooooo wrong!!!!!! We have now played with several different groups of friends and it’s a hit!!!!! 
With adults it becomes a major strategy game. I️ have Now bought it as a Christmas present bc it was so well received!!!!" document3 = " Unlike several of the reviewers here, I didn't purchase this originally for when kids are around. I bought it because of the reviews that said the adults all loved it too! I'm always on the lookout for games playable by 2 people and this was a great one. It's incredibly simple, but brings a lot of laughs with the competition and sabotage. I'm really glad I gave this game a chance." doc_list = [document1, document2, document3] ``` ```python # Fit a bag of words bow = vectorizer.fit_transform(doc_list) print(type(bow)) print ("Feature (terms) Names: \n",vectorizer.get_feature_names()) ``` <class 'scipy.sparse.csr.csr_matrix'> Feature (terms) Names: ['absolutely', 'add', 'adult', 'adults', 'all', 'always', 'and', 'appropriate', 'are', 'around', 'as', 'at', 'bc', 'be', 'because', 'becomes', 'before', 'blood', 'bought', 'brains', 'brings', 'burger', 'burrito', 'but', 'by', 'came', 'cannibalism', 'cards', 'chance', 'children', 'christmas', 'collection', 'competition', 'daughter', 'did', 'didn', 'different', 'do', 'expectations', 'expecting', 'eyeball', 'find', 'foods', 'for', 'fresh', 'friends', 'funny', 'game', 'games', 'gave', 'gift', 'glad', 'gonna', 'got', 'great', 'gross', 'groups', 'had', 'have', 'her', 'here', 'high', 'hit', 'human', 'husband', 'in', 'inappropriate', 'incredibly', 'is', 'it', 'just', 'kickstarter', 'kids', 'laughs', 'lie', 'looked', 'lookout', 'lot', 'lots', 'love', 'loved', 'loves', 'major', 'most', 'my', 'no', 'not', 'now', 'of', 'old', 'on', 'one', 'options', 'originally', 'our', 'out', 'people', 'play', 'playable', 'played', 'practice', 'present', 'purchase', 'really', 'received', 'refrigerators', 'rest', 'reviewers', 'reviews', 'sabotage', 'said', 'salsa', 'seriously', 'several', 'she', 'simple', 'so', 'sooooo', 'strategy', 'suggested', 'surprised', 'taco', 'take', 'targeted', 'that', 'the', 'their', 'them', 'thinks', 'this', 'those', 'through', 'to', 'too', 'typical', 'unlike', 'upset', 'very', 'vs', 'was', 'we', 'well', 'when', 'who', 'wish', 'with', 'wonder', 'would', 'wrong', 'year'] ```python # Check the matrix print("Bag of words sparse matrix (data structure CSR-compressed sparse row):\n",bow, "\n To an array: \n", bow.toarray()) ``` Bag of words sparse matrix (data structure CSR-compressed sparse row): (0, 18) 1 (0, 129) 1 (0, 47) 2 (0, 10) 1 (0, 50) 1 (0, 43) 2 (0, 84) 1 (0, 149) 1 (0, 89) 1 (0, 33) 1 (0, 143) 2 (0, 81) 1 (0, 48) 1 (0, 139) 3 (0, 39) 1 (0, 78) 1 (0, 88) 2 (0, 55) 1 (0, 42) 2 (0, 23) 2 (0, 120) 1 (0, 11) 1 (0, 125) 4 (0, 66) 1 (0, 27) 4 : : (2, 35) 1 (2, 102) 1 (2, 9) 1 (2, 108) 1 (2, 110) 1 (2, 4) 1 (2, 80) 1 (2, 133) 1 (2, 5) 1 (2, 90) 1 (2, 76) 1 (2, 98) 1 (2, 24) 1 (2, 54) 1 (2, 91) 1 (2, 67) 1 (2, 115) 1 (2, 20) 1 (2, 77) 1 (2, 73) 1 (2, 32) 1 (2, 109) 1 (2, 103) 1 (2, 51) 1 (2, 28) 1 To an array: [[0 0 0 0 0 0 1 1 2 0 1 1 0 0 0 0 1 1 1 1 0 1 0 2 0 0 1 4 0 0 0 0 0 1 0 0 0 1 0 1 1 1 2 2 1 0 1 2 1 1 1 0 0 0 0 1 0 0 1 1 0 0 0 1 0 1 1 0 1 1 1 0 1 0 0 1 0 0 1 0 0 1 0 0 1 0 2 0 2 1 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 1 1 0 0 0 0 1 1 0 1 0 0 0 0 1 1 0 1 0 3 4 1 0 1 1 3 1 1 0 1 0 1 1 0 3 2 0 1 2 1 0 1 1 0 1] [1 1 1 1 0 0 2 0 1 0 2 0 1 1 1 1 0 0 3 0 0 0 1 2 0 1 0 0 0 1 1 1 0 0 1 0 1 0 1 0 0 0 0 2 0 3 0 3 1 0 0 0 1 1 0 0 1 1 3 0 0 1 1 0 1 0 0 0 0 5 0 1 1 0 1 0 0 0 0 2 0 0 1 1 1 1 2 2 2 0 0 0 1 1 2 0 0 1 0 1 0 1 0 0 1 0 0 0 0 0 0 0 0 1 0 0 3 1 1 0 0 1 0 1 0 0 0 1 0 3 0 0 5 0 0 0 0 0 1 2 2 1 1 0 0 4 0 0 1 0] [0 0 0 1 1 1 
2 0 1 1 0 0 0 0 1 0 0 0 1 0 1 0 0 1 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 0 0 2 0 0 0 1 1 1 0 1 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 1 0 3 0 0 1 1 0 0 1 1 0 0 1 0 0 0 0 0 0 0 3 0 1 1 0 1 0 0 1 0 1 0 0 0 1 1 0 0 0 1 1 1 1 0 0 1 0 1 0 0 0 0 0 0 0 0 1 5 0 0 0 3 0 0 0 1 0 1 0 0 0 1 0 0 1 0 0 1 0 0 0 0]] ```python print ("Data Type of variable bow:",type(bow)) # Find the index of specific words print (vectorizer.vocabulary_.get("children")) print (vectorizer.vocabulary_.get("salsa")) print (bow.getcol(vectorizer.vocabulary_.get("great"))) ``` Data Type of variable bow: <class 'scipy.sparse.csr.csr_matrix'> 29 111 (2, 0) 1 ### Inverse and genralized inverse *** A square matrix is said to have an inverse if all its rows or all its columns are linearly independent. Inverse of square matrix $A$ is denoted by $A^{-1}$. It has the following property: $$ A^{-1} A = A A^{-1} = I$$ **Review Gauss-Jordan Elimination** for finding inverse of a matrix. Ref: __[WikiPedia](https://en.wikipedia.org/wiki/Gaussian_elimination)__ ```python # Using linear algebra module for finding inverse # Random Example Matrix S = np.random.rand(4,4) inv_S = np.linalg.inv(S) print ("Inverse of S: \n",inv_S) print ("Verify inverse:\n",np.dot(S,inv_S)) ``` Inverse of S: [[ 6.29223122e-02 4.65377320e-01 1.11496530e+00 -1.38531775e+00] [ 5.39687908e-01 -9.77003966e-01 1.14702973e+00 2.37923981e-01] [-6.96882874e-01 4.52175993e-01 -9.25863481e-01 1.51010216e+00] [ 9.12693835e-01 3.45607950e-01 -5.85756832e-01 -3.29197827e-04]] Verify inverse: [[ 1.00000000e+00 -4.16333634e-17 4.16333634e-17 -1.38777878e-17] [ 1.11022302e-16 1.00000000e+00 -1.11022302e-16 2.77555756e-17] [ 5.55111512e-17 0.00000000e+00 1.00000000e+00 0.00000000e+00] [ 0.00000000e+00 5.55111512e-17 0.00000000e+00 1.00000000e+00]] ### Eigenvalues and eigenvectors <hr> An eigenvector of a square matrix $A$ is a special non-zero vector such that $$ A v = \lambda v$$ where $\lambda$ is called the associated eigenvalue. **Example**: Find the eigenvalues and eigenvectors of $ A = \left( \begin{array}{cc} 2 & 4 \\ 1 & -1 \end{array} \right) $ ### Eigen-decomposition If the square matrix $A\in \mathbb{C}^{n\times n}$ has $n$ linearly independent eigenvectors then $A$ can be given in the following factorized form as $$ A = Q \Lambda Q^{-1}, $$ where $\Lambda = $ diag$(\lambda_1, \cdots, \lambda_n)$ and columns of matrix $Q$ are made of the eigenvector $q_i$ of $A$ $(i=1,\cdots,n)$, arranged in the same order as the eigenvalues in $\Lambda$. >- When $A$ is a real and symmetric matrix: $A = Q \Lambda Q^T$, where $Q$ is orthogonal $(Q^TQ = I = QQ^T)$ and $\Lambda$ is made of real diagonal entries. >- If a function $f(x)$ has power series expansion in $x$, then $f(A) = Q f(\Lambda) Q^{-1}$. ### Daterminant --- Laplace's Formula for determinant of a square matrix $$\det(A) = \sum_{j=1}^n (-1)^{i+j} a_{ij} M_{ij}\ \textrm{ for a fixed row } i,$$ $$\det(A) = \sum_{i=1}^n (-1)^{i+j} a_{ij} M_{ij} \textrm{ for a fixed column } j .$$ where minor $M_{ij}$ si the determinant of the submatrix without the $i$-th row and $j$-th column. <br> >- ##### Determinant of a matrix is the product of all its eigenvalues. >- ##### $\det\left(I_n\right) = 1$, where $I_n$ is the $n\times n$ identity matrix. >- ##### $\det\left(A^\textsf{T}\right) = \det(A)$, where $A^\textsf{T}$ denotes the transpose of $A$. 
>- ##### $\det\left(A^{-1}\right) = \frac{1}{\det(A)} = [\det(A)]^{-1}.$ >- ##### For square matrices $A$ and $B$ of equal size, $$\det(AB) = \det(A) \det(B).$$ >- ##### $\det(cA) = c^n\det(A)$, for an $n\times n$ matrix $A$. ### Trace --- Trace of a square matrix is the sum of its diagonal elements $$ \operatorname{tr}(\mathbf{A}) = \sum_{i=1}^n a_{ii} = a_{11} + a_{22} + \dots + a_{nn} $$ >- Trace is the sum of the eigenvalues of a matrix. >- For any nonsingular matrix $\mathbf{S}$: $\operatorname{tr}(\mathbf{S} \mathbf{A} \mathbf{S}^{-1}) = \operatorname{tr}(\mathbf{A}) $ (i.e., invariance under change of base) >- Following holds $$ \begin{align} \operatorname{tr}(\mathbf{A}^T) &= \operatorname{tr}(\mathbf{A}),\\\\ \operatorname{tr}(\mathbf{A} + \mathbf{B}) &= \operatorname{tr}(\mathbf{A}) + \operatorname{tr}(\mathbf{B}), \\\\ \operatorname{tr}(c\mathbf{A}) &= c \operatorname{tr}(\mathbf{A}), \\\\ \operatorname{tr}(\mathbf{A}\mathbf{B}) &= \operatorname{tr}(\mathbf{B}\mathbf{A}),\\\\ \operatorname{tr}(\mathbf{A}\mathbf{B}) &\ne \operatorname{tr}(\mathbf{A})\operatorname{tr}(\mathbf{B}) \end{align} $$ >- $$ \operatorname{tr}\left(\mathbf{A}^\mathsf{T}\mathbf{B}\right) = \operatorname{tr}\left(\mathbf{A}\mathbf{B}^\mathsf{T}\right) = \operatorname{tr}\left(\mathbf{B}^\mathsf{T}\mathbf{A}\right) = \operatorname{tr}\left(\mathbf{B}\mathbf{A}^\mathsf{T}\right) = \sum_{i,j}A_{ij}B_{ij}. $$ --- # DISREGARD THE REMAINING --- ## The Review continues in Notebook 2. ### Some Special Matrices *** - Symmetric ans Skew-symmetric Matrics - Upper and Lower Triangular Matrices - Banded Matrices - Orthogonal and Unitary Matrices - Positive definite, positive semidefinite matrices - Negative definite, negative semidefinite matrices - Indefinite Matrices - Permutation Matrix - Diagonally Dominant Matrices - Nonnegative Matrices ### Derivatives of Matrices of functions of some variables *** #### Review the basics of derivatives, partial derivatives and gradients. <div class="alert alert-block alert-info"> <b> Definition of derivative of a matrix of functions.</b> $$ \frac{d}{d \alpha} C(\alpha) = \dot{C}(\alpha) = [\dot{c}_{i,j}(\alpha)]. $$ <b>Product Rule</b> <div class='eqnbox'> $$ \frac{d}{d \alpha} [ A(\alpha)\, B(\alpha)] = \dot{A}(\alpha)\,B(\alpha) + A(\alpha)\,\dot{B}(\alpha). $$ </div> </div> [See Matrix Calculus on WikiPedia](https://en.wikipedia.org/wiki/Matrix_calculus) <br> **Exercise: ** If $\phi(x) = \frac{1}{2} x^T A x - b^T x$, show that the gradient is given by $$ \nabla \phi(x) = \frac{1}{2} (A^T + A)x - b. $$ This result will be quite useful as we move along. <div class='eqnbox'> $$\large \nabla_x \left( \frac{1}{2} x^T A x - b^T x \right) = \frac{1}{2} (A^T + A)x - b. $$ </div> ### Norms --- Please see Text-5 for a comprehensive review. >- What is the norms of a vector? >- $\|\cdot\|_p$ norms: Euclidean norm, 1-norms, Manhattan distance. >- Matrix Norms, subordinate matrix norms >- Unit circles in different norms. [Image Source: WikiMedia](https://upload.wikimedia.org/wikipedia/commons/f/f8/L1_and_L2_balls.svg) ```python # NORM : Euclidean, Frobenius D1 = np.array([[1,1],[1,-1]]) print D1 print np.linalg.norm(D1) D2 = np.array([[1,2, -1],[3,4, -6]]) print D2 print np.linalg.norm(D2) #print np.linalg.norm(D2, ord=np.inf) ``` ```python ``` ```python ``` ```python ``` ### Derivatives of Matrices of functions of some variables *** #### Review the basics of derivatives, partial derivatives and gradients. 
<div class="alert alert-block alert-info"> <b> Definition of derivative of a matrix of functions.</b> $$ \frac{d}{d \alpha} C(\alpha) = \dot{C}(\alpha) = [\dot{c}_{i,j}(\alpha)]. $$ <b>Product Rule</b> <div class='eqnbox'> $$ \frac{d}{d \alpha} [ A(\alpha)\, B(\alpha)] = \dot{A}(\alpha)\,B(\alpha) + A(\alpha)\,\dot{B}(\alpha). $$ </div> </div> [See Matrix Calculus on WikiPedia](https://en.wikipedia.org/wiki/Matrix_calculus) <br> **Exercise: ** If $\phi(x) = \frac{1}{2} x^T A x - b^T x$, show that the gradient is given by $$ \nabla \phi(x) = \frac{1}{2} (A^T + A)x - b. $$ This result will be quite useful as we move along. <div class='eqnbox'> $$\large \nabla_x \left( \frac{1}{2} x^T A x - b^T x \right) = \frac{1}{2} (A^T + A)x - b. $$ </div> ### Linear Systems of Equations --- Example: Solve the following system by using inversion of the coefficient matrix. > $2x+3y-z = 4$ > $-x+y+2z = 2$ > $3+2x-4z = 1$ ```python # FIND the solution using coding A1 = np.array([[2, 3, -1], [-1, 1, 2], [3, 2, -4]], dtype=float) b = np.array([4, 2, 1]).reshape(3, -1) # X = inverse(A) b X = np.dot(np.linalg.inv(A1), b) print X ``` ```python %%html <style> .eqnbox{ margin:auto;width:500px;padding:20px; border: 3px solid green; border-radius:15px;margin-top:20px;margin-bottom:20px; } .eqnbox2{ margin:auto;width:500px;padding:20px; border: 1px solid green; border-radius:15px;margin-top:20px;margin-bottom:20px; } </style> ``` <style> .eqnbox{ margin:auto;width:500px;padding:20px; border: 3px solid green; border-radius:15px;margin-top:20px;margin-bottom:20px; } .eqnbox2{ margin:auto;width:500px;padding:20px; border: 1px solid green; border-radius:15px;margin-top:20px;margin-bottom:20px; } </style> ```python ```
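The linear-system cell above uses Python 2 print statements, so it will not run as written in Python 3. Below is an editorial Python 3 sketch of the same computation (assuming the third equation is meant to read $3x + 2y - 4z = 1$, which is what the coefficient matrix `A1` in the cell encodes):

```python
import numpy as np

# Coefficient matrix and right-hand side for
#   2x + 3y -  z = 4
#  -x +  y + 2z = 2
#   3x + 2y - 4z = 1   (as encoded by A1 above)
A1 = np.array([[2, 3, -1],
               [-1, 1, 2],
               [3, 2, -4]], dtype=float)
b = np.array([4, 2, 1], dtype=float).reshape(3, 1)

# Solution via the inverse, as in the original cell ...
X_inv = np.dot(np.linalg.inv(A1), b)
# ... and via np.linalg.solve, the usual better-conditioned choice
X_solve = np.linalg.solve(A1, b)

print(X_inv)
print(np.allclose(X_inv, X_solve))
```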
8be5d78b0f2ba1a9f5853cfbcf145f7f7fedb4f2
430,902
ipynb
Jupyter Notebook
course_notes/MA544 Share/NB1 MA544 updated.ipynb
jschmidtnj/ma544-final-project
61fb57d344ad4f693eb697015ed926988402186f
[ "MIT" ]
2
2021-03-23T01:48:51.000Z
2022-02-01T22:49:47.000Z
course_notes/MA544 Share/NB1 MA544 updated.ipynb
jschmidtnj/ma544-final-project
61fb57d344ad4f693eb697015ed926988402186f
[ "MIT" ]
null
null
null
course_notes/MA544 Share/NB1 MA544 updated.ipynb
jschmidtnj/ma544-final-project
61fb57d344ad4f693eb697015ed926988402186f
[ "MIT" ]
1
2021-05-05T01:35:11.000Z
2021-05-05T01:35:11.000Z
287.45964
128,512
0.91483
true
10,689
Qwen/Qwen-72B
1. YES 2. YES
0.909907
0.885631
0.805842
__label__eng_Latn
0.800774
0.710574
# Programovanie Letná škola FKS 2018 Maťo Gažo, Fero Dráček (& vykradnuté materiály od Mateja Badina, Feriho Hermana, Kuba, Peťa, Jarných škôl FX a kade-tade po internete) V tomto kurze si ukážeme základy programovania a naučíme sa programovať matematiku a fyziku. Takéto vedomosti sú skvelé a budete vďaka nim: * vedieť efektívnejšie robiť domáce úlohy * kvalitnejšie riešiť seminárové a olympiádové príklady * lepšie rozumieť svetu (IT je dnes na trhu najrýchlejšie rozvíjajúcim sa odvetvím) Počítač je blbý a treba mu všetko povedať a vysvetliť. Komunikovať sa s ním dá na viacerých úrovniach, my budeme používať Python. Python (názov odvodený z Monty Python's Flying Circus) je všeobecný programovací jazyk, ktorým sa dajú vytvárať webové stránky ako aj robiť seriózne vedecké výpočty. To znamená, že naučiť sa ho nie je na škodu a možno vás raz bude živiť. Rozhranie, v ktorom píšeme kód, sa volá Jupyter Notebook. Je to prostredie navrhnuté tak, aby sa dalo programovať doslova v prehliadači a aby sa kód dal kúskovať. Pre zbehnutie kúskov programu stačí stlačiť Shift+Enter. # Dátové typy a operátory ### Čísla podľa očakávaní, vracia trojku ```python 3 ``` 3 ```python 2+3 # scitanie ``` 5 ```python 6-2 # odcitanie ``` 4 ```python 10*2 # nasobenie ``` 20 ```python 35/5 # delenie ``` 7.0 ```python 5//3 # celociselne delenie TODO je toto treba? ``` 1 ```python 7%3 # modulo ``` 1 ```python 2**3 # umocnovanie ``` 8 ```python 4 * (2 + 3) # poradie dodrzane ``` 20 ### Logické výrazy ```python 1 == 1 # logicka rovnost ``` True ```python 2 != 3 # logicka nerovnost ``` True ```python 1 < 10 ``` True ```python 1 > 10 ``` False ```python 2 <= 2 ``` True # Premenné Toto je premenná. Po stlačení Shift+Enter program v okienku zbehne a premenná sa uloží do pamäte (RAMky, všetko sa deje na RAMke). ```python a = 2 ``` Teraz s ňou možno pracovať ako s bežným číslom. ```python 2 * a ``` 4 ```python a + a ``` 4 ```python a + a*a ``` 6 Možno ju aj umocniť. ```python a**3 ``` 8 Pridajme druhú premennú. ```python b = 5 ``` Nasledovné výpočty dopadnú podľa očakávaní. ```python a + b ``` 7 ```python a * b ``` 10 ```python b**a ``` 25 Reálne čísla môžeme zobrazovať aj vo vedeckej forme: $2.3\times 10^{-3}$. ```python d = 2.3e-3 ``` ### Priklad [0] FERO DAJ SEM NIECO # --------------------------------------------------------------------------- Teraz sa môžeme posunúť k niečomu viac zmysluplnejšiemu. Môžeme začať počítaču zadávať úlohy, ktoré by sme už sami nezvládli ! # Funkcie Spravme si jednoduchú funkciu, ktorá za nás sčíta dve čísla, aby sme sa s tým už nemuseli trápiť my: ```python def scitaj(a, b): print("Číslo a je {} a číslo b je {}".format(a, b)) return a + b ``` ```python scitaj(10, 12) # vypise vetu a vrati sucet ``` Číslo a je 10 a číslo b je 12 22 Funkcia funguje na celých aj reálnych číslach. Naša sčítacia funkcia má __štyri podstatné veci__: 1. `def`: toto slovo definuje funkciu. 2. dvojbodka na konci prvého riadku, odtiaľ začína definícia. 3. Odsadenie kódu vnútri funkcie o štyri medzery. 4. Samotný kód. V ňom sa môže diať čokoľvek, Python ho postupne prechádza. 5. `return`: kľúčová vec. Za toto slovo sa píše, čo je output funkcie. ### Úloha 1 Napíšte funkciu `priemer`, ktorá zoberie dve čísla (výšky dvoch chlapcov) a vypočíta ich priemernú výšku. Ak máš úlohu hotovú, prihlás sa vedúcemu. 
```python # Tvoje riesenie: def priemer(prvy, druhy): return ((prvy+druhy)/2) priemer(90,20) ``` 55.0 # Poďme na fyziku V tomto momente môžeme začať používať Python ako sofistikovanejšiu kalkulačku a počítať ňoz základné fyzikálne problémy. Predstavme si napríklad, že potrebujeme zistiť, koľko mólov atómov je v dvoch litroch vody. ```python rho = 1000.0 # hustota V = 2.0 * 1e-3 # treba premeniť na metre kubické m = rho * V # hmotnosť vody Mm = (16 + 1 + 1) * 1e-3 # kg/mol n = m / Mm print(n) # vypiseme ``` 111.1111111111111 A koľko molekúl je v jednom litri vody? ```python NA = 6.022e23 # Avogadrova konštanta V = 1e-3 m = rho * V N = m / Mm * NA # zamyslite sa nad poradím násobenia a delenia print(N) ``` 3.345555555555555e+25 Vcelku dosť... ## Úloha 2 Spočítajte objem, ktorý v priemere zaberá jedna molekula ľubovoľnej kvapaliny. Vyjadrite ho v nanometroch kubických. Urobte to tak, že napíšete funkciu, ktorá bude ako vstup brať: * objem kvapaliny * hustotu kvapaliny * molárnu hmotnosť kvapaliny a ako výstup to dá objem jednej molekuly kvapaliny. Spočítajte to pre metanol, etanol a benzén a výsledky potom ukážte vedúcemu. Neváhajte používať Google. ```python # Tvoje riesenie: def ObjemMolekuly(objem, hustota, molarna_hmotnost): pocet_molekul = (objem*hustota/molarna_hmostnost)*6.022e23 return (objem/pocet_molekul) ``` # Zoznamy Zatiaľ sme sa zoznámili s číslami (celé, reálne), stringami a trochu aj logickými hodnotami. Zo všetkých týchto prvkov vieme vytvárať množiny, v informatickom jazyku `zoznamy`. Na úvod sa teda pozrieme, ako s vytvára zoznam (po anglicky `list`). Takúto vec všeobecne nazývame dátová štruktúra. ```python li = [] # prazdny list ``` ```python v = [4, 2, 3] # list s cislami ``` ```python v ``` [4, 2, 3] ```python v[0] # indexovat zaciname nulou! ``` 4 ```python v[1] ``` 2 ```python type(v) ``` list ```python w = [5, 'ahoj', True] ``` ```python type(w[0]) ``` int Čo sa sa stane, ak zoznamy sčítame? Spoja sa. ```python v + w ``` [4, 2, 3, 5, 'ahoj', True] Môžeme ich násobiť? ```python v * v ``` Smola, nemôžeme. Ale všimnime si, aká užitočná je chybová hláška. Jasne nám hovorí, že nemožno násobiť `list`y. So zoznamami môžeme robiť rôzne iné užitočné veci. Napríklad ich sčítať. ```python sum(v) ``` 9 Alebo zistiť dĺžku: ```python len(v) ``` 3 Alebo ich utriediť: ```python sorted(v) ``` [2, 3, 4] Alebo na koniec pridať nový prvok: ```python v.append(10) v ``` [4, 2, 3, 10] Alebo odobrať: ```python v.pop() v ``` [4, 2, 3] ### Interval v zozname sa dá hľadať pomocou intervalov, ktoré sú $\langle x,y)$, čiže uzavretý - otvorený. ```python li = [2, 5, 7, 8, 10, 11, 14, 18, 20, 25] len(li) ``` 10 ```python li[1:5] # indexovanie na zaciatku nulou + polouzavrety interval ``` [5, 7, 8, 10] ```python li[:3] # prve tri prvky ``` [2, 5, 7] ```python li[6:] # prvky zacinajuce na 6tom mieste ``` [14, 18, 20, 25] ```python li[2:9:2] # zaciatok:koniec:krok ``` [7, 10, 14, 20] ```python del li[2] li ``` [2, 5, 8, 10, 11, 14, 18, 20, 25] Zoznam možno zadefinovať aj cez rozsah: ```python range(10) type(range(10)) ``` range ```python list(range(10)) ``` [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] ```python list(range(3, 9)) ``` [3, 4, 5, 6, 7, 8] ## Úloha 3 Spočítajte: * súčet všetkých čísel od 1 do 1000. Vytvorte zoznam `letnaskola`, ktorý bude obsahovať vašich 5 obľúbených celých čísel. * Pridajte na koniec zoznamu číslo 100 * Vymažte druhé číslo zo zoznamu. * Prepíšte prvé číslo v zozname tak, aby sa rovnalo poslednému v zozname. * Vypočítajte súčet prvého čísla, posledného čísla a dĺžky zoznamu. 
```python # Tvoje riesenie: zoznam = list(range(1,1001)) print(sum(zoznam)) letnaskola = [1,1995,12,6,42] print(letnaskola) letnaskola.append(100) print(letnaskola) del letnaskola[1] print(letnaskola) letnaskola[0] = letnaskola[len(letnaskola)-1] print(letnaskola) print(letnaskola[0]+letnaskola[len(letnaskola)-1], len(letnaskola)) ``` 500500 [1, 1995, 12, 6, 42] [1, 1995, 12, 6, 42, 100] [1, 12, 6, 42, 100] [100, 12, 6, 42, 100] 200 5 # For cyklus Indexy zoznamu môžeme postupne prechádzať. For cyklus je tzv. `iterátor`, ktorý iteruje cez zoznam. ```python for i in li: print(i) ``` 2 5 8 10 11 14 18 20 25 ```python for i in li: print(i**2) ``` 4 25 64 100 121 196 324 400 625 Ako úspešne vytvoriť For cyklus? Podobne, ako pri funkciách: * `for`: toto slovo je na začiatku. * `i`: iterovana velicina * `in`: pred zoznamom, cez ktorý prechádzame (iterujeme). * dvojbodka na konci prvého riadku. * kod, ktory sa cykli sa odsadzuje o štyri medzery. Za pomoci for cyklu môžeme takisto sčítať čísla. Napr. čísla od 0 do 100: ```python sum = 0 for i in range(101): # uvedomme si, preco tam je 101 a nie 100 sum = sum + i # skratene sum += i print(sum) ``` 5050 ## Úloha 4 Spočítajte súčet druhých mocnín všetkých nepárnych čísel od 1 do 100 s využitím for cyklu. ```python # Tvoje riesenie: sum = 0 for i in range(101): if (i%2 == 1): sum = sum + i**2 print(sum) ``` 166650 # Podmienky Pochopíme ich na príklade. Zmeňte `a` a zistite, čo to spraví. ```python a = 5 if a == 3: print("cislo a je rovne trom.") elif a == 5: print("cislo a je rovne piatim") else: print("cislo a nie je rovne trom ani piatim.") ``` cislo a je rovne piatim Za pomoci podmienky teraz môžeme z for cyklu vypísať napr. len párne čísla. Párne číslo identifikujeme ako také, ktoré po delení dvomi dáva zvyšok nula. Pre zvyšok po delení sa používa percento: ```python for i in range(10): if i % 2 == 0: print(i) ``` 0 2 4 6 8 Cyklus mozeme zastavit, ak sa porusi nejaka podmienka ```python for i in range(20): print(i) if i>10: print('Koniec.') break ``` 0 1 2 3 4 5 6 7 8 9 10 11 Koniec. ## Úloha 5 Spočítajte: * súčet všetkých čísel od 1 do 1000 deliteľné jedenástimi. * súčet tretích mocnín čísel od 1 do 1000 deliteľných dvanástimi. ```python # Tvoje riesenie: sum = 0 for i in range(1001): if (i%11 == 0): sum = sum + i print(sum) sum = 0 for i in range(1001): if (i%12 == 0): sum = sum + i**3 print(sum) ``` 45045 20998994688 ## Úloha 6 Teraz, keď už vieme, ako sa zisťuje deliteľnosť, môžeme tiež zistiť, či je zadané číslo prvočíslom. Vymyslite algoritmus, ktorý overí prvočíselnú vlastnosť nejakého čísla. Výstup by mal byť nasledovný: ```Python >>> prvocislo(10) nie >>> prvocislo(13) ano ``` ```python # Tvoje riesenie: from math import * def prvocislo(cislo): vysledok = "ano" for i in range(2,int(floor(sqrt(cislo)))+1): if (cislo%i == 0): vysledok = "nie" return vysledok print(prvocislo(10)) print(prvocislo(13)) ``` nie ano ## Úloha 7 Predstavme si, že chceme sčítať nekonečný počet čísel: $$ \sum_{n=1}^\infty \frac{1}{n^2}.$$ Analytický výsledok takejto sumy je $\pi^2/6$. Koľko členov potrebujeme sčítať, aby presnosť s analytický výsledkov bola určená na tri desatinné miesta? ```python # Tvoje riesenie: from math import * print('Presne ',round(pi**2/6,3)) sum = 0 for i in range(1,2005): sum = sum + 1.0/i**2 print(i,' ',sum) ``` Presne 1.645 2004 1.6444351893330087 # Konečne poďme na niečo zaujímavé! 
V predchádzajúcej časti sme sa zoznámili so základnou syntaxou Pythonu, zoznámili sme sa s Jupyterom a naučili sme sa niečo málo o tom akoby sme mohli sčítať nejaké rady. Nastal však čas pustiť sa do niečo zaujímavejšieho. V nasledujúcej časti sa pozrieme ako sa dajú pomocou numerickej matematiky a nejakých tých šikovných matematických vzťahov dajú vypočítať konštanty, s ktorými ste sa už stretli. ## Hľadanie hodnoty zlatého rezu $\varphi$ Jednoduché cvičenie na oboznámenie sa s tzv. selfkonzistentným problémom a for cyklom ```python x = 1; for i in range (0,20): x = 1+1/x print (x) ``` ## Hľadanie hodnoty Eulerovho čísla $e$ Hoci je to málo známe, Eulerove čislo $e$ sa dá nájsť ako odpoveď na nasledujúcu úlohu (rozdeľuj a ponásob): Na aké veľké časti treba rozdeliť hocaké číslo tak, aby súčin týchto častí bol maximálny. ```python import matplotlib.pyplot as plt %matplotlib inline import numpy as np ``` ```python Num0 = 25 delitele = np.arange(1, 20 , 1) casti = [] suciny = [] for i in range (0, 19): casti.append(Num0/delitele[i]) suciny.append(casti[i]**delitele[i]) ``` ```python plt.plot(casti, suciny, "ro-") plt.show() ``` ```python MinDel = delitele[np.argmax(suciny)-1] MaxDel = delitele[np.argmax(suciny)+1] ``` ```python delitele = np.arange(MinDel, MaxDel, (MaxDel - MinDel)/10) casti = [] suciny = [] for i in range (0, 9): casti.append(Num0/delitele[i]) suciny.append(casti[i]**delitele[i]) ``` ```python plt.plot(casti, suciny, "ro-") plt.show() ``` ```python casti[np.argmax(suciny)] ``` # Ako sa numericky derivuje, integruje a riešia difky? ## Derivovanie Na to, aby sme numericky zderivovali nejakú funkciu si musíme najprv uvedomiť, čo tá derivácia vlastne intuitívne robí. Predstavme si obrázok a v ňom nakreslenú dotyčnicu a malý trojuholníček. Buď sme sa učili alebo sme sa práve teraz dozvedeli, že derivácia funkcie je smernica jej dotyčnice v danom bode. Odtiaľ by už mohlo byť (po nejakých tých obrázkoch) jasné, že to bude nejako takto: $$ \frac{df}{dx} = \frac{f(x+h) - f(x-h)}{2h} $$ Zvolíme si malé $h$ a derivujeme. ```python def func(x): return cos(x) def deriv(f, x, h=0.01): return (f(x+h)-f(x-h))/(2*h) x = np.linspace(0, 2*pi, 101) #print(x) y = [func(i) for i in x] dydx = [deriv(func, i) for i in x] plt.plot(x, y, label="$f(x)$") plt.plot(x, dydx, label="$\\frac{df}{dx}$") plt.xlim([0, 2*pi]) plt.xticks([0, pi, 2*pi]) plt.legend(loc="best") plt.show() ``` ### Išlo by to však aj s lepšou presnosťou? Odkiaľ spadol magický vzorček vyššie? Tí čo už na podobné akcie chodia dlhšie sa už možno stretli s pojmom Taylorov rozvoj, intuitívne v princípe každá slušná a poslušná funkcia, s ktorou sa stretneme sa dá rozviť v istom okolí bodu $x_0$ do súčtu mocninových funkcií nasledovne: $$ f(x) \approx f(x_0) + \left.\frac{df}{dx}\right|_{x=x_0}(x-x_0) + \frac{1}{2!}{\left(\left.\frac{df}{dx}\right|_{x=x_0}\right)}^2{(x-x_0)}^2 + \frac{1}{3!}{\left(\left.\frac{df}{dx}\right|_{x=x_0}\right)}^3{(x-x_0)}^3 + ...$$ Ak takto rozvinieme funkcie nielen v okolí bodu $x_0$, ale aj $x_0 + h$ a $x_0 - h$, a následne rozvejo vhodne medzi sebou odčítame, tak sme schopný nájsť hodnotu derivácie funkcie $\frac{df}{dx}$ v bode $x_0$, teda $$ \left.\frac{df}{dx}\right|_{x=x_0} $$ Skúste rozviť funkciu aj v bodoch $x_0+2h$ a $x_0-2h$ a získať vzorček, ktorý nám umožní vypočítať deriváciu s lepšou presnosťou. 
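(Editorial note, in English.) A sketch of the higher-accuracy formula that the paragraph above asks the reader to derive: combining the Taylor expansions at $x_0\pm h$ and $x_0\pm 2h$ gives the standard fourth-order central difference $f'(x) \approx \bigl(-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)\bigr)/(12h)$. A quick self-contained comparison against the two-point formula used above:

```python
import numpy as np

def deriv2(f, x, h=0.01):
    # two-point central difference, error O(h^2) (same formula as above)
    return (f(x + h) - f(x - h)) / (2 * h)

def deriv4(f, x, h=0.01):
    # fourth-order central difference (five-point stencil), error O(h^4)
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

x0 = 1.0
exact = -np.sin(x0)  # d/dx cos(x) = -sin(x)
print(abs(deriv2(np.cos, x0) - exact))
print(abs(deriv4(np.cos, x0) - exact))
```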
## Integrovanie Dve základné metódy ako sa dá niečo numericky integrovať sú: * Kvadratúra * Monte Carlo Dva spôsoby: * Kvadratúra (dobrá v 1D, ale so zvyšovaním rozmerov presnosť klesá) * Monte Carlo (presnosť vždy $O(N^{-1/2})$ (N je počet bodov, ktoré použijeme na výpočet inegr), použiť pri viac ako troch rozmeroch) Pomôžu nám knižnice. Teraz si ukážeme kvadratúru. ```python #from scipy.integrate import quad import scipy.integrate as sp def func2(x): return exp(-x**2) sp.quad(func2, -20, 20) # vysledok je 1/4 ``` ```python sqrt(pi) ``` ## Hľadanie hodnoty Ludolfovho čísla $\pi$ Pomocou Monte Carlo metódy integrovania sa naučíme ako napríklad vypočítať $\pi$. ```python import random as rnd NOP = 50000 CoordXList = []; CoordYList = []; for j in range (NOP): CoordXList.append(rnd.random()) CoordYList.append(rnd.random()) ``` ```python CircPhi = np.arange(0,np.pi/2,0.01) ``` ```python plt.figure(figsize=(7,7)) plt.plot( CoordXList, CoordYList, color = "red", linestyle= "none", marker = "," ) plt.plot(np.cos(CircPhi),np.sin(CircPhi)) #plt.axis([0, 1, 0, 1]) #plt.axes().set_aspect('equal', 'datalim') plt.show() ``` NumIn = 0 for j in range (NOP): #if (CoordXList[j] - 0.5)*(CoordXList[j] - 0.5) + (CoordYList[j] - 0.5)*(CoordYList[j] - 0.5) < 0.25: if CoordXList[j]*CoordXList[j] + CoordYList[j]*CoordYList[j] <= 1: NumIn = NumIn + 1; NumIn/NOP*4 ## Diferenciálne rovnice Príklad neriešiteľnej difky: $$ y'(x) = \sqrt{1+xy} $$ Príklad Eulerovej (najjednoduchšej) metódy. ```python N = 101 x = np.linspace(0, 1, N) dx = x[1] - x[0] y_eu = np.zeros(N) y_eu[0] = 1 # pociatocna podmienka def func(x, y): return sqrt(1.0 + x*y) # return sin(x) for i in range(1, N): y_eu[i] = y_eu[i-1] + func(x[i-1], y_eu[i-1])*dx plt.plot(x, y_eu) plt.show() ``` ## Difky za pomoci knižníc Použijeme funkciu `ode` z knižnice `scipy`. Znova riešime $$y'(x) = \sqrt{1+xy} .$$ ```python from scipy.integrate import odeint N = 101 def func(y, x): return sqrt(1 + x*y) x = np.linspace(0, 1, N) y0 = 1.0 y = odeint(func, y0, x) ### Pozri Google Scipy.integrate.odeint plt.plot(x, (y_eu-y.T[0])/y.T[0]) plt.title("Rozdiel medzi odeint a Eulerom") plt.show() ``` ## Iný príklad na Difky: Exponenciálny rozpad Ešte sa kukneme na numerické riešenie jednoduchých diferenciálnych rovníc. Metódu demonštrujeme na príklade exponenciálneho rozpadu. Znovu pomocou najjednoduchšej - Eulerovej metódy. $$\frac{d n}{dt}=-\lambda n$$ $$\frac{n(t+dt)-n(t)}{dt}=-\lambda n(t)$$ $$n(t+dt)=n(t)(1 -\lambda dt), n(0)=N_0$$ ```python N0 = 500 #τ=λdt# τ = 0.5 ``` ```python tauka=[] NumOfPart=[] for i in range (15): if i == 0: n = N0 else: n = n*(1 - τ) tauka.append(i*τ) NumOfPart.append(n) ``` ```python plt.plot(tauka, NumOfPart, "ro-") plt.show() ``` # Obiehanie Zeme okolo Slnka Fyziku (dúfam!) všetci poznáme. 
* gravitačná sila: $$ \mathbf F(\mathbf r) = -\frac{G m M}{r^3} \mathbf r $$ ### Eulerov algoritmus (zlý) $$\begin{align} a(t) &= F(t)/m \\ v(t+dt) &= v(t) + a(t) dt \\ x(t+dt) &= x(t) + v(t) dt \\ \end{align}$$ ### Verletov algoritmus (dobrý) $$ x(t+dt) = 2 x(t) - x(t-dt) + a(t) dt^2 $$ ```python from numpy.linalg import norm G = 6.67e-11 Ms = 2e30 Mz = 6e24 dt = 86400.0 N = int(365*86400.0/dt) #print(N) R0 = 1.5e11 r_list = np.zeros((N, 2)) r_list[0] = [R0, 0.0] # mozno miesat listy s ndarray v0 = 29.7e3 v_list = np.zeros((N, 2)) v_list[0] = [0.0, v0] # sila medzi planetami def force(A, r): return -A / norm(r)**3 * r # Verletova integracia def verlet_step(r_n, r_nm1, a, dt): # r_nm1 -- r n minus 1 return 2*r_n - r_nm1 + a*dt**2 # prvy krok je specialny a = force(G*Ms, r_list[0]) r_list[1] = r_list[0] + v_list[0]*dt + a*dt**2/2 # riesenie pohybovych rovnic for i in range(2, N): a = force(G*Ms, r_list[i-1]) r_list[i] = verlet_step(r_list[i-1], r_list[i-2], a, dt) plt.plot(r_list[:, 0], r_list[:, 1]) plt.xlim([-2e11, 2e11]) plt.ylim([-2e11, 2e11]) plt.xlabel("$x$", fontsize=20) plt.ylabel("$y$", fontsize=20) plt.gca().set_aspect('equal', adjustable='box') #plt.axis("equal") plt.show() ``` ## Pridajme Mesiac ```python Mm = 7.3e22 R0m = R0 + 384e6 v0m = v0 + 1e3 rm_list = np.zeros((N, 2)) rm_list[0] = [R0m, 0.0] vm_list = np.zeros((N, 2)) vm_list[0] = [0.0, v0m] # prvy Verletov krok am = force(G*Ms, rm_list[0]) + force(G*Mz, rm_list[0] - r_list[0]) rm_list[1] = rm_list[0] + vm_list[0]*dt + am*dt**2/2 # riesenie pohybovych rovnic for i in range(2, N): a = force(G*Ms, r_list[i-1]) - force(G*Mm, rm_list[i-1]-r_list[i-1]) am = force(G*Ms, rm_list[i-1]) + force(G*Mz, rm_list[i-1]-r_list[i-1]) r_list[i] = verlet_step(r_list[i-1], r_list[i-2], a, dt) rm_list[i] = verlet_step(rm_list[i-1], rm_list[i-2], am, dt) plt.plot(r_list[:, 0], r_list[:, 1]) plt.plot(rm_list[:, 0], rm_list[:, 1]) plt.xlabel("$x$", fontsize=20) plt.ylabel("$y$", fontsize=20) plt.gca().set_aspect('equal', adjustable='box') plt.xlim([-2e11, 2e11]) plt.ylim([-2e11, 2e11]) plt.show() # mesiac moc nevidno, ale vieme, ze tam je ``` ## Úloha pre Vás: Treba pridať Mars :) Pridajte Mars! ## Matematické kyvadlo s odporom Nasimulujte matematické kyvadlo s odporom $\gamma$, $$ \ddot \theta = -\frac g l \sin\theta -\gamma \theta^2,$$ za pomoci metódy `odeint`. 
Alebo pád telesa v odporovom prostredí: $$ a = -g - kv^2.$$ ```python from scipy.integrate import odeint def F(y, t, g, k): return [y[1], g -k*y[1]**2] N = 101 k = 1.0 g = 10.0 t = np.linspace(0, 1, N) y0 = [0.0, 0.0] y = odeint(F, y0, t, args=(g, k)) plt.plot(t, y[:, 1]) plt.xlabel("$t$", fontsize=20) plt.ylabel("$v(t)$", fontsize=20) plt.show() ``` ## Harmonický oscilátor pomocou metódy Leapfrog (modifikácia Verletovho algoritmu) ```python N = 10000 t = linspace(0,100,N) dt = t[1] - t[0] # Funkcie def integrate(F,x0,v0,gamma): x = zeros(N) v = zeros(N) E = zeros(N) # Počiatočné podmienky x[0] = x0 v[0] = v0 # Integrovanie rovníc pomocou metódy Leapfrog (wiki) fac1 = 1.0 - 0.5*gamma*dt fac2 = 1.0/(1.0 + 0.5*gamma*dt) for i in range(N-1): v[i + 1] = fac1*fac2*v[i] - fac2*dt*x[i] + fac2*dt*F[i] x[i + 1] = x[i] + dt*v[i + 1] E[i] += 0.5*(x[i]**2 + ((v[i] + v[i+1])/2.0)**2) E[-1] = 0.5*(x[-1]**2 + v[-1]**2) # Vrátime riešenie return x,v,E ``` ```python # Pozrime sa na tri rôzne počiatočné podmienky F = zeros(N) x1,v1,E1 = integrate(F,0.0,1.0,0.0) # x0 = 0.0, v0 = 1.0, gamma = 0.0 x2,v2,E2 = integrate(F,0.0,1.0,0.05) # x0 = 0.0, v0 = 1.0, gamma = 0.01 x3,v3,E3 = integrate(F,0.0,1.0,0.4) # x0 = 0.0, v0 = 1.0, gamma = 0.5 # Nakreslime si grafy plt.rcParams["axes.grid"] = True plt.rcParams['font.size'] = 14 plt.rcParams['axes.labelsize'] = 18 plt.figure() plt.subplot(211) plt.plot(t,x1) plt.plot(t,x2) plt.plot(t,x3) plt.ylabel("x(t)") plt.subplot(212) plt.plot(t,E1,label=r"$\gamma = 0.0$") plt.plot(t,E2,label=r"$\gamma = 0.01$") plt.plot(t,E3,label=r"$\gamma = 0.5$") plt.ylim(0,0.55) plt.ylabel("E(t)") plt.xlabel("Čas") plt.legend(loc="center right") plt.tight_layout() ``` A čo ak bude oscilátor aj tlmenný? ```python def force(f0,t,w,T): return f0*cos(w*t)*exp(-t**2/T**2) F1 = zeros(N) F2 = zeros(N) F3 = zeros(N) for i in range(N-1): F1[i] = force(1.0,t[i] - 20.0,1.0,10.0) F2[i] = force(1.0,t[i] - 20.0,0.9,10.0) F3[i] = force(1.0,t[i] - 20.0,0.8,10.0) ``` ```python x1,v1,E1 = integrate(F1,0.0,0.0,0.0) x2,v2,E2 = integrate(F1,0.0,0.0,0.01) x3,v3,E3 = integrate(F1,0.0,0.0,0.1) plt.figure() plt.subplot(211) plt.plot(t,x1) plt.plot(t,x2) plt.plot(t,x3) plt.ylabel("x(t)") plt.subplot(212) plt.plot(t,E1,label=r"$\gamma = 0$") plt.plot(t,E2,label=r"$\gamma = 0.01$") plt.plot(t,E3,label=r"$\gamma = 0.1$") pt.ylabel("E(t)") plt.xlabel("Time") plt.rcParams['legend.fontsize'] = 14.0 plt.legend(loc="upper left") plt.show() ``` # Lineárna algebra ## Program na dnes * Matematické operácie s vektormi a maticami. Hľadanie vlastných čísel. * Zoznámenia sa s knižnicou `numpy`. ## Vektory ```python a = np.array([1, 2, 3]) print(a) print(type(a)) b = np.array([2, 3, 4]) print(a + b) # spravne scitanie! ``` np.dot(a, b) # skalarny sucin a.dot(b) ```python np.cross(a, b) # vektorovy sucin ``` np.outer(a, b) # outer product (jak sa to povie po slovensky?), premyslite si ## Matice A = np.array([[0, 1], [1, 0]]) print(A) type(A)AA = np.matrix(A) # premena na iny datovy typ type(AA)B = np.array([[1, 0], [0, -1]]) print(B)np.dot(A, B)# pozor! A*B# Vlastnosti numpy matic len(A) # pocet riadkovA.shape # rozmery# uzitocne vektory/matice, konvencia z Matlabu N = 3 np.ones(N) # konstantny vektornp.ones((N, N)) # jednotky np.eye(N) # identita np.zeros((N, N+1)) # nulova matica NxN ```python # spajanie matic A = np.ones((3, 3)) B = np.eye(3) print(A, "\n") print(B) np.hstack((A, B)) ``` ### Prístup k prvkom # Najprv si pripravime nase vektory A = np.arange(9) print(A) A = A.reshape((3, 3)) A# Pri poliach by sme urobili A[1][1]. 
Pri vektoroch mozme pouzit aj A[1,1].

```python
# Pri maticiach musime pouzit A[1,1], lebo A[1][1] vrati prekvapivy vysledok
A[1, 1]
```

```python
A[1][1]
```

```python
A[-1, -1] = 10  # znovu vieme indexovat aj od zadu
A
```

```python
A[0, 0]
```

```python
A[-3, -2]
```

## Slicing

Ako vyberať jednotlivé stĺpce a riadky matíc?

```python
A = np.arange(9)
A = A.reshape((3, 3))
A
```

```python
A[:, 0]  # Ak nechceme vyberat nejaky usek, pouzijeme dvojbodku.
```

```python
A[1, :]
```

```python
A[[0, 2], :]  # Mozme pouzit aj pole na indexovanie
```

```python
# zmena riadku
A[:, 1] = 4
A
```

```python
# pripocitanie cisla k stlpcu
for i in range(len(A)):
    A[i, 2] += 100
A
```

```python
A = np.arange(25).reshape((5, 5))
A
```

```python
A[2:, 3]
```

## Generovanie (pseudo)náhodných čísel

```python
# cisla sa menia, pokial nefixneme seed!
np.random.seed(12)
N = 5
a = 2*np.random.rand(N)  # rand() generuje cisla v intervale [0,1)
a
```

```python
A = np.random.rand(N, N)
A
```

```python
a = np.random.randn(5)
a
```

```python
mu, sigma = 1.0, 2.0
a = np.random.randn(1000)*sigma + mu
plt.hist(a, bins=20)
plt.show()
```

```python
a = np.random.randint(10, size=1000)  # cele cisla, pozor na size!
u = plt.hist(a)
#plt.plot(u[1][1:], u[0])
```

## Vlastné čísla

$$ A v = \lambda v$$

Použijeme funkciu `eig`. Okrem nej existujú ešte funkcie na vlastné čísla symetrických alebo hermitovských matíc `eigs` a `eigh`.

```python
from numpy.linalg import eig
```

```python
N = 20
A = np.random.rand(N, N)
vals, vecs = eig(A)
print(vals.real)
```

## Vlastné čísla náhodných matíc

Vygenerujte (100x100) maticu náhodných čísel, získajte jej vlastné čísla a spravte z nich histogram.

```python
N = 100
A = np.random.rand(N, N)
vals, vecs = eig(A)
vals = np.real(vals)
#print(vals)
plt.hist(np.real(vals))
plt.show()
```
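Since `eigh` is mentioned above for symmetric/Hermitian matrices, here is a small illustrative check (not part of the original notebook): symmetrise a random matrix and confirm that `eigh` returns real, ascending eigenvalues that agree with `eig`.

```python
from numpy.linalg import eig, eigh

N = 5
B = np.random.rand(N, N)
A = (B + B.T)/2           # symmetric matrix

vals, vecs = eig(A)       # general solver: possibly complex dtype, unsorted
vals_h, vecs_h = eigh(A)  # symmetric solver: real, ascending order

print(np.sort(vals.real))
print(vals_h)             # the two should agree up to rounding
```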
9da35c83665e7c910b8dde4550a4c612a5e68ca1
62,337
ipynb
Jupyter Notebook
Programko.ipynb
matoga/LetnaSkolaFKS_notebooks
26faa2d30ee942e18246fe466d9bf42f16cc1433
[ "MIT" ]
null
null
null
Programko.ipynb
matoga/LetnaSkolaFKS_notebooks
26faa2d30ee942e18246fe466d9bf42f16cc1433
[ "MIT" ]
null
null
null
Programko.ipynb
matoga/LetnaSkolaFKS_notebooks
26faa2d30ee942e18246fe466d9bf42f16cc1433
[ "MIT" ]
null
null
null
20.40491
376
0.474181
true
10,834
Qwen/Qwen-72B
1. YES 2. YES
0.928409
0.907312
0.842357
__label__slk_Latn
0.989819
0.79541
## 高斯判别模型(GDA) 高斯判别模型作分类,假设有两类,则: \begin{equation} y \sim Bernouli(\phi) \\ x|y=0 \sim N(\mu_0, \Sigma) \\ x|y=1 \sim N(\mu_1, \Sigma) \end{equation} 令$\theta = (\phi, \mu_0, \mu_1, \Sigma)$,则参数的似然估计为: \begin{align*} L(\theta) &= log(\prod_{i=1}^m P(x^{(i)},y^{(i)})) \\ &= log(\prod_{i=1}^m P(x^{(i)}|y^{(i)})P(y^{(i)})) \\ &= \sum_{i=1}^m log(P(x^{(i)}|y^{(i)})) + \sum_{i=1}^m log (P(y^{(i)})) \\ &= \sum_{i=1}^m (1-y^{(i)})log (P(x^{(i)}|y^{(i)} = 0)) + \sum_{i=1}^m y^{(i)}log(P(x^{(i)}|y^{(i)} = 1)) + \sum_{i=1}^m log(P(y^{(i)})) \\ \end{align*} 求参数的最大似然估计,$argmax_{\theta} L(\theta)$. 参数$\phi$: \begin{align*} \frac{\partial L(\theta)}{\partial \phi} &= \frac{\partial \sum_{i=1}^m log(P(y^{(i)}))}{\partial \phi} \\ &= \frac{\partial \sum_{i=1}^m log(\phi^{y^{(i)}}(1-\phi)^{1-y^{(i)}})}{\partial \phi} \\ &= \frac{\partial \sum_{i=1}^m y^{(i)}log(\phi) + (1-y^{(i)})log(1-\phi)}{\partial \phi} \\ &= \sum_{i=1}^m y^{(i)} \frac{1}{\phi} - (1-y^{(i)}) \frac{1}{1-\phi} \\ &= \sum_{i=1}^m I(y^{(i)}=1) \frac{1}{\phi} - I(y^{(i)}=0) \frac{1}{1-\phi} \\ &= a \frac{1}{\phi} + b \frac{1}{1-\phi} \\ &= 0 \end{align*} $I$为指示函数,$a$为$(y^{(i)}=1)$的样本数量,$b$为$(y^{(i)}=0)$的样本数量,$a+b=m$,$m$为样本总数,令导数为0,则求得: \begin{equation} \phi = \frac{I(y^{(i)}=1)}{m} = \frac{a}{m} \end{equation} 求参数$\mu_0$: \begin{align*} \frac{\partial L(\theta)}{\partial \mu_0} &= \frac{\partial \sum_{i=1}^m (1-y^{(i)})log(P(x^{(i)}|y^{(i)} = 0)) }{\partial \mu_0} \\ &= \frac{\partial \sum_{i=1}^m (1-y^{(i)})(log(\frac{1}{\sqrt{(2\pi)^l|\Sigma|}}) - \frac{1}{2}(x^{(i)} - \mu_0)|\Sigma|^{-1}(x^{(i)} - \mu_0)^T)}{\partial \mu_0} \\ &= \sum_{i=1}^m (1 - y^{(i)})(\Sigma^{-1}(x^{(i)} - \mu_0)) \\ &= \sum_{i=1}^m I(y^{(i)} = 0)(\Sigma^{-1}(x^{(i)} - \mu_0)) \\ &= 0 \end{align*} 令导数为0,从而求得: \begin{equation} \mu_0 = \frac{\sum_{i=1}^m I(y^{(i)}=0) x^{(i)}}{\sum_{i=1}^m I(y^{(i)}=0)} \end{equation} 同理可求得参数$\mu_1$: \begin{equation} \mu_1 = \frac{\sum_{i=1}^m I(y^{(i)}=1) x^{(i)}}{\sum_{i=1}^m I(y^{(i)}=1)} \end{equation} 协方差矩阵$\Sigma$,$x$的维度为2,$P(x)$是一个2维高斯分布, $x-\mu = [x_0,x_1] - [\mu_0,\mu_1]$ $\Sigma = E(x-\mu)^T(x-\mu)$: \begin{align*} \Sigma &= \frac{1}{m} \sum_{i=1}^m (x^{(i)} - \mu^{i})^T(x^{(i)} - \mu^{i}) \end{align*} ```python import numpy as np import matplotlib.pyplot as plt m_samples = 500 t_samples = 200 np.random.seed(0) # 平移从正态分布中采样的高斯样本,正态分布以0为中心 # 这是一个二维高斯,每一维加上其对应的均值,作为平移偏差 shifted_gaussian = np.random.randn(m_samples, 2) + np.array([6,13]) test_sample_1 = np.random.randn(t_samples, 2) + np.array([3,20]) y_label1 = np.zeros(m_samples) test_label_1 = np.zeros(t_samples) # 从正态分布采样数据,乘以协方差矩阵C,用以伸缩正态分布 C1 = np.array([[5, 0], [0.3, 2]]) C2 = np.array([[0.8, -1.68], [3.3, 3.2]]) stretched_gaussian = np.dot(np.random.randn(m_samples * 2, 2), C1) test_sample_2 = np.dot(np.random.randn(t_samples * 2, 2), C2) y_label2 = np.ones(m_samples * 2) test_label_2 = np.ones(t_samples * 2) X_train = np.vstack([shifted_gaussian, stretched_gaussian]) X_label = np.concatenate([y_label1, y_label2]) X_train_label = np.hstack([X_train, X_label.reshape(X_label.shape[0],1)]) idxs = xrange(X_train_label.shape[0]) idxs = np.random.choice(idxs, m_samples, replace=False) X_train_label = X_train_label[idxs] X_test = np.vstack([test_sample_1, test_sample_2]) X_test_label = np.concatenate([test_label_1, test_label_2]) plt.figure(1) plt.scatter(X_train_label[:,0],X_train_label[:,1]) plt.title('X_train') plt.figure(2) plt.scatter(X_test[:,0], X_test[:,1]) plt.title('X_test') ``` ```python def trainGDA(X): m,D= X.shape D = D-1 pos = X[X[:,2] == 1][:,:2] nag = X[X[:,2] 
== 0][:,:2] a = pos.shape[0] b = nag.shape[0] phi = float(a) / float(m) pos_sum = np.sum(pos, axis=0) nag_sum = np.sum(nag, axis=0) # [ 5.92345515 13.02799222] mu0 = nag_sum / float(b) # [0.18769247 0.10985849] mu1 = pos_sum / float(a) Sigma = np.zeros((D,D)) for i in range(m): x = X[i,:2].reshape(1,D) if X[i,2] == 1.0 : Sigma += np.dot((x - mu1).T, (x - mu1)) else: Sigma += np.dot((x - mu0).T, (x - mu0)) Sigma = Sigma / m return (phi, mu0, mu1, Sigma) phi,mu0,mu1,Sigma = trainGDA(X_train_label) print('phi:',phi) print('mu0:', mu0) print('mu1:', mu1) print('Sigma:', Sigma) samples_num = 500 samples_num_1 = int(samples_num * phi) samples_num_2 = int(samples_num * (1-phi)) sample_data_1 = np.random.randn(samples_num_1,2) + mu1 sample_data_1 = sample_data_1.dot(Sigma) plt.figure(1) plt.scatter(sample_data_1[:,0],sample_data_1[:,1]) sample_data_2 = np.random.randn(samples_num_2,2) + mu0 sample_data_2 = sample_data_2.dot(Sigma) plt.figure(2) plt.scatter(sample_data_2[:,0],sample_data_2[:,1]) sample_data = np.vstack([sample_data_1, sample_data_2]) plt.figure(3) plt.scatter(sample_data[:,0],sample_data[:,1]) ``` ```python def prediction(X,phi,mu0,mu1,Sigma): m,D = X.shape # 求协方差矩阵的行列式 sig_det = np.linalg.det(Sigma) # 求协方差矩阵的逆 sig_inv = np.linalg.inv(Sigma) ce = X[1].reshape(1,D) div_val = 2 * np.pi * np.sqrt(sig_det) pre_label = [] for i in range(m): x = X[i].reshape(1,D) time_inv_1 = np.dot(x - mu1, sig_inv) temp_val_1 = np.dot(time_inv_1, (x - mu1).T) time_inv_2 = np.dot(x - mu0, sig_inv) temp_val_2 = np.dot(time_inv_2, (x - mu0).T) pro_pos = (phi * np.exp(-0.5 * temp_val_1) / div_val).flatten() pro_nag = ((1-phi) * np.exp(-0.5 * temp_val_2) / div_val).flatten() if pro_pos > pro_nag: pre_label.append(1) else: pre_label.append(0) return pre_label pre_label = prediction(X_test,phi,mu0,mu1,Sigma) total_num = len(pre_label) hit_num = X_test_label[X_test_label == pre_label].shape[0] accuracy = float(hit_num) / float(total_num) print('accuracy is %d %%' % (accuracy * 100)) ``` accuracy is 97 %
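As a cross-check of the hand-written GDA above (an optional sketch, assuming scikit-learn is available): GDA with a shared covariance matrix is exactly linear discriminant analysis, so `LinearDiscriminantAnalysis` fitted on the same training data should give a very similar test accuracy. `X_train_label`, `X_test` and `X_test_label` are the arrays built in the cells above.

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

lda = LinearDiscriminantAnalysis()
lda.fit(X_train_label[:, :2], X_train_label[:, 2])   # features, labels

lda_acc = lda.score(X_test, X_test_label)
print('sklearn LDA accuracy is %d %%' % (lda_acc * 100))
```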
0d0df6fbc2b8cff7b3dbe99f4a0bc1949ce35279
84,817
ipynb
Jupyter Notebook
ML/GMM/GDA.ipynb
tianqichongzhen/ProgramPrac
5e575f394179709a4964483308b91796c341e45f
[ "Apache-2.0" ]
2
2019-01-12T13:54:52.000Z
2021-09-13T12:47:25.000Z
ML/GMM/GDA.ipynb
Johnwei386/Warehouse
5e575f394179709a4964483308b91796c341e45f
[ "Apache-2.0" ]
null
null
null
ML/GMM/GDA.ipynb
Johnwei386/Warehouse
5e575f394179709a4964483308b91796c341e45f
[ "Apache-2.0" ]
null
null
null
253.943114
18,120
0.9018
true
2,574
Qwen/Qwen-72B
1. YES 2. YES
0.927363
0.7773
0.720839
__label__eng_Latn
0.070863
0.513083
# Upper envelope This notebook shows how to use the **upperenvelope** module from the **consav** package. # Model Consider a **standard consumption-saving** model \begin{align} v_{t}(m_{t})&=\max_{c_{t}}\frac{c_{t}^{1-\rho}}{1-\rho}+\beta v_{t+1}(m_{t+1}) \end{align} where \begin{align} a_{t} &=m_{t}-c_{t} \\ m_{t+1} &=Ra_{t}+y \\ \end{align} The **Euler equation** is \begin{align} c_{t}^{-\rho} &=\beta Rc_{t+1}^{-\rho} \end{align} Assume that the **t+1 consumption and value functions** are given by \begin{align} c_{t+1}(m_{t}) &= \sqrt{m_{t}}-\eta_{c} \cdot 1\{m_{t}\geq\underline{m}\} \\ v_{t+1}(m_{t}) &= \sqrt{m_{t}}+\eta_{v}\sqrt{m_{t}-\underline{m}} \cdot 1\{m_{t}\geq\underline{m}\} \end{align} This **notebook** shows how to find the **t consumption and value function** using an **upper envelope** code despite the **kink** in the next-period value function. # Algorithm 1. Specify an increasing grid of $m_t$ indexed by $j$, such as {${m_1,m_2,...,m_{\#_m}}$} <br> 2. Specify an increasing grid of $a_t $ indexed by $i$, such as {${a^1,a^2,...,a^{\#_a}}$} <br> 3. For each $i$ compute (using linear interpolation):<br> a. Post-decision value function: $w^i = \beta \breve{v}_{t+1}(Ra^i+y)$ <br> b. Post-decision marginal value of cash: $q^i = \beta R\breve{c}_{t+1}(Ra^i+y)^{-\rho}$ <br> c. Consumption: $c_i = (q^i)^{-1/\rho}$ <br> d. Cash-on-hand: $m^i = a^i + c^i$ <br> 4. For each $j$: <br> a. Constraint: If $m_j < m^1$ then set $c_j = m_j$ <br> b. Find best segment: If $m_j \geq m^1$ then set $c_j =c_j^{i^{\star}(j)} $ where <br> $$ \begin{align} c_j^i=c_j^i+\frac{c^{i+1}-c^i}{m^{i+1}-m^i}(m_j-m^i) \end{align} $$ and $$ \begin{align} i^{\star}(j)=\arg\max_{i\in\{1,\dots\#_{A}-1\}}\frac{(c_{j}^{i})^{1-\rho}}{1-\rho}+\beta w_{j}^{i} \\ \end{align} $$ subject to $$ \begin{align} m_{j} &\in [m^{i},m^{i+1}] \\ a_{j}^{i} &= m_{j}-c_{j}^{i} \\ w_{j}^{i} &= w^{i}+\frac{w^{i+1}-w^{i}}{a^{i+1}-a^{i}}(a_{j}^{i}-a^{i}) \end{align} $$ # Setup ```python import numpy as np from numba import njit import matplotlib.pyplot as plt plt.style.use('seaborn-whitegrid') prop_cycle = plt.rcParams["axes.prop_cycle"] colors = prop_cycle.by_key()["color"] import ipywidgets as widgets ``` Choose parameters and create grids: ```python def setup(): par = dict() # a. model parameters par['beta'] = 0.96 par['rho'] = 2 par['R'] = 1.02 par['y'] = 1 # b. cash-on-hand (exogenous grid) par['Nm'] = 10000 par['m_max'] = 10 # c. end-of-period assets (exogenous grid) par['Na'] = 1000 par['a_max'] = 10 # d. next-period consumption and value function par['eta_v'] = 0.5 par['eta_c'] = 0.5 par['x_ubar'] = 5 return par def create_grids(par): par['grid_a'] = np.linspace(0,par['a_max'],par['Na']) par['grid_m'] = np.linspace(1e-8,par['m_max'],par['Nm']) return par par = setup() par = create_grids(par) ``` # Next-period functions Calculate the next-period consumption and value functions: ```python sol = dict() # a. consumption function sol['c_next'] = np.sqrt(par['grid_m']) - par['eta_c']*(par['grid_m'] >= par['x_ubar']); # b. value function sol['v_next'] = np.sqrt(par['grid_m']) + par['eta_v']*np.sqrt(np.fmax(par['grid_m']-par['x_ubar'],0))*(par['grid_m'] >= par['x_ubar']) ``` ## Figures Plot them to see the jump in consumption and the kink in the value function. ```python # a. next-period consumption function fig = plt.figure() ax = fig.add_subplot(1,1,1) ax.plot(par['grid_m'],sol['c_next'],'o',MarkerSize=0.5) ax.set_title('next-period consumption function') ax.set_xlabel('$m_t$') ax.set_ylabel('$c_t$') # b. 
next-period value function fig = plt.figure() ax = fig.add_subplot(1,1,1) ax.plot(par['grid_m'],sol['v_next'],'o',MarkerSize=0.5) ax.set_title('next-period value function') ax.set_xlabel('$m_t$') ax.set_ylabel('$v_t$'); ``` # EGM ```python from consav import linear_interp # linear interpolation ``` Apply the EGM algorithm. ```python @njit def u(c,rho): return c**(1-2)/(1-2) def marg_u(c,par): return c**(-par['rho']) def inv_marg_u(u,par): return u**(-1.0/par['rho']) def EGM(par,sol): # a. next-period cash-on-hand m_plus = par['R']*par['grid_a'] + par['y'] # b. post-decision value function sol['w_vec'] = np.empty(m_plus.size) linear_interp.interp_1d_vec(par['grid_m'],sol['v_next'],m_plus,sol['w_vec']) # c. post-decision marginal value of cash c_next_interp = np.empty(m_plus.size) linear_interp.interp_1d_vec(par['grid_m'],sol['c_next'],m_plus,c_next_interp) q = par['beta']*par['R']*marg_u(c_next_interp,par) # d. EGM sol['c_vec'] = inv_marg_u(q,par) sol['m_vec'] = par['grid_a'] + sol['c_vec'] return sol sol = EGM(par,sol) ``` ## Figures Plot the result of the EGM algorithm to see that the its does not define a consumption function. ```python # a. raw consumption function fig = plt.figure() ax = fig.add_subplot(1,1,1) ax.plot(sol['m_vec'],sol['c_vec'],'o',MarkerSize=0.5) ax.set_title('raw consumption points') ax.set_xlabel('$m_t$') ax.set_ylabel('$c_t$') # b. raw value function fig = plt.figure() ax = fig.add_subplot(1,1,1) ax.plot(sol['m_vec'],sol['w_vec'],'o',MarkerSize=0.5) ax.set_title('raw value function points') ax.set_xlabel('$m_t$') ax.set_ylabel('$w_t$'); ``` # Upper envelope ```python from consav import upperenvelope # a. create myupperenvelope = upperenvelope.create(u) # where is the utility function # b. apply c_ast_vec = np.empty(par['grid_m'].size) # output v_ast_vec = np.empty(par['grid_m'].size) # output myupperenvelope(par['grid_a'],sol['m_vec'],sol['c_vec'],sol['w_vec'],par['grid_m'],c_ast_vec,v_ast_vec,par['rho']) ``` ## Figures ```python # a. consumption function fig = plt.figure() ax = fig.add_subplot(1,1,1) ax.plot(par['grid_m'],c_ast_vec,'o',MarkerSize=0.5) ax.set_title('consumption function') ax.set_xlabel('$m_t$') ax.set_ylabel('$c_t$') # b. value function fig = plt.figure() ax = fig.add_subplot(1,1,1) ax.plot(par['grid_m'],v_ast_vec,'o',MarkerSize=0.5) ax.set_title('value function') ax.set_xlabel('$m_t$') ax.set_ylabel('$v_t$') ax.set_ylim((-5,5)); ```
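A simple sanity check of the upper envelope (not in the original notebook) is a brute-force grid search: for each $m_t$, maximise $u(c)+\beta \breve{v}_{t+1}(R(m_t-c)+y)$ over a dense grid of consumption choices and compare with `v_ast_vec`. Plain `np.interp` is used here instead of the consav interpolator to keep the check independent; it clips outside the grid, so small deviations near the upper end of `grid_m` are expected.

```python
# brute-force value function as a cross-check of EGM + upper envelope
c_share = np.linspace(1e-6, 1.0, 2000)   # consumption as a share of m

v_brute = np.empty(par['grid_m'].size)
for j, m in enumerate(par['grid_m']):
    c = c_share*m                                        # candidate consumption levels
    m_plus = par['R']*(m - c) + par['y']                 # implied next-period cash-on-hand
    w = np.interp(m_plus, par['grid_m'], sol['v_next'])  # next-period value (linear interp)
    v_brute[j] = np.max(c**(1-par['rho'])/(1-par['rho']) + par['beta']*w)

print('max abs. deviation from upper envelope:', np.max(np.abs(v_brute - v_ast_vec)))
```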
d149f393fcbb87ec686d6fceac19abdd9f862507
81,699
ipynb
Jupyter Notebook
Tools/Upper envelope.ipynb
ThomasHJorgensen/ConsumptionSavingNotebooks
badbdfb1da226d5494026de2adcfec171c7f40ea
[ "MIT" ]
1
2021-11-07T23:37:25.000Z
2021-11-07T23:37:25.000Z
Tools/Upper envelope.ipynb
ThomasHJorgensen/ConsumptionSavingNotebooks
badbdfb1da226d5494026de2adcfec171c7f40ea
[ "MIT" ]
null
null
null
Tools/Upper envelope.ipynb
ThomasHJorgensen/ConsumptionSavingNotebooks
badbdfb1da226d5494026de2adcfec171c7f40ea
[ "MIT" ]
null
null
null
151.575139
13,028
0.886192
true
2,218
Qwen/Qwen-72B
1. YES 2. YES
0.851953
0.824462
0.702403
__label__eng_Latn
0.404823
0.470248
**Problem 1** (7 pts) In the finite space all norms are equivalent. This means that given any two norms $\|\cdot\|_*$ and $\|\cdot\|_{**}$ over $\mathbb{C}^{n\times 1}$, inequality $$ c_1 \Vert x \Vert_* \leq \Vert x \Vert_{**} \leq c_2 \Vert x \Vert_* $$ holds for every $x\in \mathbb{C}^{n\times 1}$ for some constants $c_1, c_2$ which in general depend on the vector size $n$: $c_1 \equiv c_1(n)$, $c_2 \equiv c_2(n)$. Norms equivalence means that for a certain process convergence in $\Vert \cdot \Vert_{**}$ is followed by $\Vert \cdot \Vert_{*}$ and vice versa. Note that practically convergence in a certain norm may be better than in another due to the strong dependence on $n$. Consider \begin{equation} \begin{split} c_1(n) \Vert x \Vert_\infty &\leqslant \Vert x \Vert_1 \leqslant c_2(n) \Vert x \Vert_\infty \\ c_1(n) \Vert x \Vert_\infty &\leqslant \Vert x \Vert_2 \leqslant c_2(n)\Vert x \Vert_\infty \\ c_1(n) \Vert x \Vert_2\ &\leqslant \Vert x \Vert_1 \leqslant c_2(n) \Vert x \Vert_2 \end{split} \end{equation} - Generate random vectors and plot optimal constants $c_1$ and $c_2$ from inequalities above as a function of $n$. - Find these optimal constants analytically and plot them together with constants found numerically ``` ``` **Problem 2** (7 pts) Given $A = [a_{ij}] \in\mathbb{C}^{n\times m}$ - prove that for operator matrix norms $\Vert \cdot \Vert_{1}$, $\Vert \cdot \Vert_{\infty}$ hold $$ \Vert A \Vert_{1} = \max_{1\leqslant j \leqslant m} \sum_{i=1}^n |a_{ij}|, \quad \Vert A \Vert_{\infty} = \max_{1\leqslant i \leqslant n} \sum_{j=1}^m |a_{ij}|. $$ **Hint**: show that $$ \Vert Ax\Vert_{1} \leqslant \left(\max_{1\leqslant j \leqslant m} \sum_{i=1}^n |a_{ij}|\right) \Vert x\Vert_1 $$ and find such $x$ that this inequality becomes equality (almost the same hint is for $ \Vert A \Vert_\infty$). - check that for randomly generated $x$ and for given analytical expressions for $\Vert \cdot \Vert_{1}$, $\Vert \cdot \Vert_{\infty}$ always hold $\|A\| \geqslant \|Ax\|/\|x\|$ (choose matrix $A$ randomly) - prove that $ \Vert A \Vert_F = \sqrt{\text{trace}(A^{*} A)}$ ``` ``` **Problem 3** (6 pts) - Prove Cauchy-Schwarz (Cauchy-Bunyakovsky) inequality $(x, y) \leqslant \Vert x \Vert \Vert y \Vert $, where $(\cdot, \cdot)$ is a dot product that induces norm $ \Vert x \Vert = (x,x)^{1/2}$. - Show that vector norm $\|\cdot \|_2$ is unitary ivariant: $\|Ux\|_2\equiv \|x\|_2$, where $U$ is unitary - Prove that matrix norms $\|\cdot \|_2$ and $\|\cdot \|_F$ are unitary invariant: $\|UAV\|_2 = \|A\|_2$ and $\|UAV\|_F = \|A\|_F$, where $U$ and $V$ are unitary ``` ``` **Problem 4** (5 pts) - Download [Lenna image](http://www.ece.rice.edu/~wakin/images/lenaTest3.jpg) and import it in Python as a 2D real array - Find its SVD and plot singular values (use logarithmic scale) - Plot compressed images for several accuracies (use $\verb|plt.subplots|$). Specify their compression rates ``` ``` **Problem 5** (bonus tasks) - The norm is called absolute if $\|x\|=\| \lvert x \lvert \|$ for any vector $x$, where $x=(x_1,\dots,x_n)^T$ and $\lvert x \lvert = (\lvert x_1 \lvert,\dots, \lvert x_n \lvert)^T$. Give an example of a norm which is not absolute. - Prove that Frobenius norm is not an operator norm ``` ```
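A possible starting point for the numerical part of Problem 1 (a sketch under the assumption that Gaussian random vectors are acceptable; sampling only gives a lower bound on the optimal $c_2$ and an upper bound on $c_1$, the true optima being attained by special vectors such as constant or canonical ones):

```python
import numpy as np
import matplotlib.pyplot as plt

ns = np.arange(2, 200, 5)
c2_emp = []                       # empirical c_2(n) for ||x||_1 <= c_2 ||x||_inf
for n in ns:
    X = np.random.randn(10000, n)
    ratios = np.linalg.norm(X, 1, axis=1) / np.linalg.norm(X, np.inf, axis=1)
    c2_emp.append(ratios.max())   # c_1 would be ratios.min()

plt.plot(ns, c2_emp, label='empirical $c_2(n)$')
plt.plot(ns, ns, '--', label='analytical $c_2(n)=n$')
plt.xlabel('$n$')
plt.ylabel('$c_2$')
plt.legend()
plt.show()
```

The analytical constants to compare against are $\Vert x \Vert_\infty \leqslant \Vert x\Vert_1 \leqslant n \Vert x \Vert_\infty$, $\Vert x \Vert_\infty \leqslant \Vert x\Vert_2 \leqslant \sqrt{n}\,\Vert x \Vert_\infty$ and $\Vert x \Vert_2 \leqslant \Vert x\Vert_1 \leqslant \sqrt{n}\,\Vert x \Vert_2$.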
6057a7498fa78bf10bd942e04e6f5246623f2db3
5,309
ipynb
Jupyter Notebook
problems/Pset2.ipynb
oseledets/NLA
d16d47bc8e20df478d98b724a591d33d734ec74b
[ "MIT" ]
14
2015-01-20T13:24:38.000Z
2022-02-03T05:54:09.000Z
problems/Pset2.ipynb
oseledets/NLA
d16d47bc8e20df478d98b724a591d33d734ec74b
[ "MIT" ]
null
null
null
problems/Pset2.ipynb
oseledets/NLA
d16d47bc8e20df478d98b724a591d33d734ec74b
[ "MIT" ]
4
2015-09-10T09:14:10.000Z
2019-10-09T04:36:07.000Z
40.219697
281
0.530797
true
1,183
Qwen/Qwen-72B
1. YES 2. YES
0.92944
0.912436
0.848055
__label__eng_Latn
0.923686
0.808649
# AutoDiff by Symboic Representation in Julia ```julia using Symbolics ``` ```julia i(x) = x f(x) = 3x^2 g(x) = 2x^2 h(x) = x^2 w_vec = [i, h, g, f] @variables x ``` \begin{equation} \left[ \begin{array}{c} x \\ \end{array} \right] \end{equation} ```julia function forward_fn(w_vec, x, i::Int) y = w_vec[i](x) i == size(w_vec)[1] ? y : [y; forward_fn(w_vec,y,i+1)] end ``` forward_fn (generic function with 1 method) ```julia x_vec = forward_fn(w_vec, x, 1) display(x_vec) ``` \begin{equation} \left[ \begin{array}{c} x \\ x^{2} \\ 2 x^{4} \\ 12 x^{8} \\ \end{array} \right] \end{equation} ```julia function gradient(w_i, x_i_1) @variables x dy = expand_derivatives(Differential(x)(w_i(x))) (substitute(dy, (Dict(x=>x_i_1,))),) end function reverse_autodiff(w_vec, x_vec, i::Int) i == 1 ? 1 : gradient(w_vec[i], x_vec[i-1])[1] * reverse_autodiff(w_vec, x_vec, i-1) end ``` reverse_autodiff (generic function with 1 method) ```julia y_ad = x_vec[end] display(y_ad) dy_ad = reverse_autodiff(w_vec, x_vec, size(w_vec)[1]) display(dy_ad) ``` \begin{equation} 12 x^{8} \end{equation} \begin{equation} 96 x^{7} \end{equation} ## Check by theory ```julia y_th = f(g(h(x))) display(y_th) dy_th = expand_derivatives(Differential(x)(y_th)) display(dy_th) ``` \begin{equation} 12 x^{8} \end{equation} \begin{equation} 96 x^{7} \end{equation} ## Check by Zygote ```julia using Symbolics using Zygote f(x) = 3x^2 g(x) = 2x^2 h(x) = x^2 y(x) = f(g(h(x))) display(y(x)) dy(x) = Zygote.gradient(y,x)[1] display(dy(x)) ``` \begin{equation} 12 x^{8} \end{equation} \begin{equation} 96 x^{7} \end{equation} ```julia function y(x) N = 5 y = 1 for i=1:N y *= x end y end display(y(x)) dy(x) = Zygote.gradient(y,x)[1] dy(x) ``` \begin{equation} x^{5} \end{equation} \begin{equation} 5 x^{4} \end{equation} ```julia function y(x, N) # N = 5 y = 1 for i=1:N y *= x end y end display(y(x, 5)) dy(x,N) = Zygote.gradient(y,x,N)[1] dy(x,5) ``` \begin{equation} x^{5} \end{equation} \begin{equation} 5 x^{4} \end{equation} ## All Codes ```julia function gradient(w_i, x_i_1) # 1) Newly added @variables x dy = expand_derivatives(Differential(x)(w_i(x))) (substitute(dy, (Dict(x=>x_i_1,))),) end function main(w_vec) @variables x # 2) Replaced from x = 2.0 x_vec = forward_fn(w_vec, x, 1) y_ad = x_vec[end] dy_ad = reverse_autodiff(w_vec, x_vec, size(w_vec)[1]) return x_vec, y_ad, dy_ad end i(x) = x f(x) = 3x^2 g(x) = 2x^2 h(x) = x^2 w_vec = [i, h, g, f] x_vec, y_ad, dy_ad = main(w_vec) display(x_vec) display(y_ad) display(dy_ad) # 3) Verification code @variables x y_th = f(g(h(x))) display(y_th) dy_th = expand_derivatives(Differential(x)(y_th)) display(dy_th) ``` \begin{equation} \left[ \begin{array}{c} x \\ x^{2} \\ 2 x^{4} \\ 12 x^{8} \\ \end{array} \right] \end{equation} \begin{equation} 12 x^{8} \end{equation} \begin{equation} 96 x^{7} \end{equation} \begin{equation} 12 x^{8} \end{equation} \begin{equation} 96 x^{7} \end{equation} ```julia ```
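As a quick hand check of the $96x^{7}$ obtained above by reverse-mode accumulation, the chain rule gives the same result:

$$
\begin{aligned}
f(g(h(x))) &= 3\left(2\,(x^{2})^{2}\right)^{2} = 12x^{8},\\
\frac{d}{dx}\,f(g(h(x))) &= f'(g(h(x)))\, g'(h(x))\, h'(x)
= 6\,(2x^{4}) \cdot 4\,(x^{2}) \cdot 2x = 96x^{7}.
\end{aligned}
$$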
3a1b7ff4c7fffa8397f2aaa76972e1f16fb89f52
10,755
ipynb
Jupyter Notebook
diffprog/julia_dp/autodiff_chain_rule-symb.ipynb
jskDr/keraspp_2021
dc46ebb4f4dea48612135136c9837da7c246534a
[ "MIT" ]
4
2021-09-21T15:35:04.000Z
2021-12-14T12:14:44.000Z
diffprog/julia_dp/autodiff_chain_rule-symb.ipynb
jskDr/keraspp_2021
dc46ebb4f4dea48612135136c9837da7c246534a
[ "MIT" ]
null
null
null
diffprog/julia_dp/autodiff_chain_rule-symb.ipynb
jskDr/keraspp_2021
dc46ebb4f4dea48612135136c9837da7c246534a
[ "MIT" ]
null
null
null
19.378378
68
0.406509
true
1,211
Qwen/Qwen-72B
1. YES 2. YES
0.938124
0.885631
0.830832
__label__yue_Hant
0.239012
0.768634
``` from sympy import * from ga import Ga from printer import Format, Fmt Format() ``` ``` xyz_coords = (x, y, z) = symbols('x y z', real=True) (o3d, ex, ey, ez) = Ga.build('e', g=[1, 1, 1], coords=xyz_coords, norm=True) ``` ``` f = o3d.mv('f', 'scalar', f=True) F = o3d.mv('F', 'vector', f=True) lap = o3d.grad*o3d.grad ``` ``` lap.Fmt(1,r'\nabla^{2}') ``` \begin{equation*} \nabla^{2} = \frac{\partial^{2}}{\partial x^{2}} + \frac{\partial^{2}}{\partial y^{2}} + \frac{\partial^{2}}{\partial z^{2}} \end{equation*} ``` (lap*f).Fmt(1,r'\nabla^{2}f') ``` \begin{equation*} \nabla^{2}f = \partial^{2}_{x} f + \partial^{2}_{y} f + \partial^{2}_{z} f \end{equation*} ``` o3d.grad | (o3d.grad * f) ``` \begin{equation*} \partial^{2}_{x} f + \partial^{2}_{y} f + \partial^{2}_{z} f \end{equation*} ``` o3d.grad|F ``` \begin{equation*} \partial_{x} F^{x} + \partial_{y} F^{y} + \partial_{z} F^{z} \end{equation*} ``` o3d.grad * F ``` \begin{equation*} \left ( \partial_{x} F^{x} + \partial_{y} F^{y} + \partial_{z} F^{z} \right ) + \left ( - \partial_{y} F^{x} + \partial_{x} F^{y} \right ) e_{x}\wedge e_{y} + \left ( - \partial_{z} F^{x} + \partial_{x} F^{z} \right ) e_{x}\wedge e_{z} + \left ( - \partial_{z} F^{y} + \partial_{y} F^{z} \right ) e_{y}\wedge e_{z} \end{equation*} ``` sph_coords = (r, th, phi) = symbols('r theta phi', real=True) (sp3d, er, eth, ephi) = Ga.build('e', g=[1, r**2, r**2 * sin(th)**2], coords=sph_coords, norm=True) ``` ``` f = sp3d.mv('f', 'scalar', f=True) F = sp3d.mv('F', 'vector', f=True) B = sp3d.mv('B', 'bivector', f=True) lap = sp3d.grad*sp3d.grad lap.Fmt(1,r'\nabla^{2}') ``` \begin{equation*} \nabla^{2} = \frac{2}{r} \frac{\partial}{\partial r} + \frac{\cos{\left (\theta \right )}}{r^{2} \sin{\left (\theta \right )}} \frac{\partial}{\partial \theta } + \frac{\partial^{2}}{\partial r^{2}} + r^{-2} \frac{\partial^{2}}{\partial \theta ^{2}} + \frac{1}{r^{2} \sin^{2}{\left (\theta \right )}} \frac{\partial^{2}}{\partial \phi ^{2}} \end{equation*} ``` lap*f ``` \begin{equation*} \frac{1}{r^{2}} \left(r^{2} \partial^{2}_{r} f + 2 r \partial_{r} f + \partial^{2}_{\theta } f + \frac{\partial_{\theta } f }{\tan{\left (\theta \right )}} + \frac{\partial^{2}_{\phi } f }{\sin^{2}{\left (\theta \right )}}\right) \end{equation*} ``` sp3d.grad | (sp3d.grad * f) ``` \begin{equation*} \frac{1}{r^{2}} \left(r^{2} \partial^{2}_{r} f + 2 r \partial_{r} f + \partial^{2}_{\theta } f + \frac{\partial_{\theta } f }{\tan{\left (\theta \right )}} + \frac{\partial^{2}_{\phi } f }{\sin^{2}{\left (\theta \right )}}\right) \end{equation*} ``` sp3d.grad | F ``` \begin{equation*} \frac{1}{r} \left(r \partial_{r} F^{r} + 2 F^{r} + \frac{F^{\theta } }{\tan{\left (\theta \right )}} + \partial_{\theta } F^{\theta } + \frac{\partial_{\phi } F^{\phi } }{\sin{\left (\theta \right )}}\right) \end{equation*} ``` sp3d.grad ^ F ``` \begin{equation*} \frac{1}{r} \left(r \partial_{r} F^{\theta } + F^{\theta } - \partial_{\theta } F^{r} \right) e_{r}\wedge e_{\theta } + \frac{1}{r} \left(r \partial_{r} F^{\phi } + F^{\phi } - \frac{\partial_{\phi } F^{r} }{\sin{\left (\theta \right )}}\right) e_{r}\wedge e_{\phi } + \frac{1}{r} \left(\frac{F^{\phi } }{\tan{\left (\theta \right )}} + \partial_{\theta } F^{\phi } - \frac{\partial_{\phi } F^{\theta } }{\sin{\left (\theta \right )}}\right) e_{\theta }\wedge e_{\phi } \end{equation*} ``` (sp3d.grad | B).Fmt(3) ``` \begin{align*} & - \frac{1}{r} \left(\frac{B^{r\theta } }{\tan{\left (\theta \right )}} + \partial_{\theta } B^{r\theta } + \frac{\partial_{\phi } B^{r\phi } 
}{\sin{\left (\theta \right )}}\right) e_{r} \\ & + \frac{1}{r} \left(r \partial_{r} B^{r\theta } + B^{r\theta } - \frac{\partial_{\phi } B^{\phi \phi } }{\sin{\left (\theta \right )}}\right) e_{\theta } \\ & + \frac{1}{r} \left(r \partial_{r} B^{r\phi } + B^{r\phi } + \partial_{\theta } B^{\phi \phi } \right) e_{\phi } \end{align*} ``` Fmt([o3d.grad,o3d.grad],1) ``` \begin{equation*} \left [ \begin{array}{cc} e_{x} \frac{\partial}{\partial x} + e_{y} \frac{\partial}{\partial y} + e_{z} \frac{\partial}{\partial z}, & e_{x} \frac{\partial}{\partial x} + e_{y} \frac{\partial}{\partial y} + e_{z} \frac{\partial}{\partial z}\\ \end{array} \right ] \end{equation*} ``` F.Fmt(1) ``` \begin{equation*} F = F^{r} e_{r} + F^{\theta } e_{\theta } + F^{\phi } e_{\phi } \end{equation*} ``` Fmt((F,F)) ``` \begin{equation*} \begin{array}{c} \left ( F^{r} e_{r} + F^{\theta } e_{\theta } + F^{\phi } e_{\phi }, \right. \\ \left. F^{r} e_{r} + F^{\theta } e_{\theta } + F^{\phi } e_{\phi }\right ) \\ \end{array}\end{equation*} ``` ```
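For reference (added here, not part of the original output), the textbook spherical-coordinate formulas that the results above can be checked against, written in terms of the orthonormal (physical) components used by the normalised frame, are

$$
\begin{aligned}
\nabla^{2} f &= \frac{1}{r^{2}}\frac{\partial}{\partial r}\!\left(r^{2}\frac{\partial f}{\partial r}\right)
+\frac{1}{r^{2}\sin\theta}\frac{\partial}{\partial\theta}\!\left(\sin\theta\,\frac{\partial f}{\partial\theta}\right)
+\frac{1}{r^{2}\sin^{2}\theta}\frac{\partial^{2} f}{\partial\phi^{2}},\\
\nabla\cdot F &= \frac{1}{r^{2}}\frac{\partial\!\left(r^{2}F^{r}\right)}{\partial r}
+\frac{1}{r\sin\theta}\frac{\partial\!\left(\sin\theta\,F^{\theta}\right)}{\partial\theta}
+\frac{1}{r\sin\theta}\frac{\partial F^{\phi}}{\partial\phi},
\end{aligned}
$$

which expand to exactly the expressions returned by `lap*f` and `sp3d.grad | F` above.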
fab731e58e557fc7f87f54d3d2fc968aef5f7e12
14,158
ipynb
Jupyter Notebook
examples/ipython/dop.ipynb
moble/galgebra
f77305eae1366eb2bb6e5e5c47b9788e22bd46e8
[ "BSD-3-Clause" ]
1
2018-03-06T15:00:36.000Z
2018-03-06T15:00:36.000Z
examples/ipython/dop.ipynb
rschwiebert/galgebra
852f6d2574780718ef327f172cd1b8fa7d0a9879
[ "BSD-3-Clause" ]
null
null
null
examples/ipython/dop.ipynb
rschwiebert/galgebra
852f6d2574780718ef327f172cd1b8fa7d0a9879
[ "BSD-3-Clause" ]
1
2018-12-04T02:06:14.000Z
2018-12-04T02:06:14.000Z
35.306733
573
0.439116
true
2,017
Qwen/Qwen-72B
1. YES 2. YES
0.939913
0.831143
0.781202
__label__eng_Latn
0.159955
0.653327
## Graphical Models ```python import graphviz as gz import numpy as np ``` GMs are depictions of independence/dependence relationships for distributions. All GMs have their strengths and weaknesses. Belief networks is one type of a GM, they are useful to represent ancestral conditional independence; however, they cannot always represent dependence relationships. In this chapter, we will mention Markov networks, chain graphs, and factor graphs. The framework used for graphical models can be summarized into two parts: 1. **Modelling** 1. Identify variables in the problem environment. 2. Describe how these variables interact. This is achieved by making structural assumptions about the joint probability distribution. **Each class of GM corresponds to a factorization property of the joint distribution**. 2. **Inference** 1. Use algorithms to make inference on GMs. ### Markov Networks \begin{definition} A potential $\phi(x)$ is a non-negative function of $x$. A distribution is a special type of potential satisfying $\sum_x \phi(x) = 1$. \end{definition} For a set of variables $X = \{x_1, \dots, x_n\}$, a **Markov Network** is defined as a product of potentials on subsets of the variables $X_c \subseteq X$: $$ p(x_1, \dots, x_n) = \frac{1}{Z}\prod_{c = 1}^C \phi_c(X_c) $$ where $Z$ is a constant which ensures normalisation, called the partition function. Graphically, this is represented by an undirected graph G with $X_c, c = 1, \dots, C$ being the **maximal cliques** of G. If clique potentials are all strictly positive, then the network is called a Gibbs distribution. * A **pairwise Markov network** corresponds to an undirected graph $G$ having maximal cliques of size $2$ only. #### Examples ##### $p(A, B, C, D) = \frac{1}{Z}\phi(A, B)\phi(B, C)\phi(C, D)\phi(A, D)$ ```python G = gz.Graph() G.engine = 'circo' G.edges(['AB', 'BC', 'CD', 'AD']) G ``` ##### $p(A, B, C, D, E, F) = \frac{1}{Z}\phi(A, B, C)\phi(B, C, D)\phi(D, E)\phi(D, F)$ ```python G.clear(keep_attrs=True) G.edges(['AB', 'BC', 'AC', 'CD', 'BD', 'DE', 'DF']) G ``` #### Properties \begin{definition} A subset $S$ **separates** a subset $A$ from a subset $B$ (for disjoint $A$ and $B$) if every path from $A$ to $B$ passes through $S$. If there is no path from $A$ to $B$, then $A$ and $B$ are separated even if $S = \varnothing$. \end{definition} ##### Marginalizing and Conditioning Let $p(A, B, C) = \phi(A, C)\phi(B, C)/Z$. * Initially $A$ and $B$ are not independent because of the path from $A$ to $B$. 1. **Marginalizing** over $C$ means we don't have any information on $C$. Not knowing $C$ doesn't get rid of the fact that $A$ and $B$ are not independent. Hence, marginalizing $C$ makes $A$ and $B$ graphically dependent, and in general $p(A, B) \neq p(A)p(B)$. 2. **Conditioning** on $C$ means we know $C$. In this case, $A$ can be explained without knowing $B$ due to the Markov property, and vice versa. Hence, $A \ci B \mid C$. * **Markov Property**: If we know the parents of a variable, then the ancestors of the parents doesn't tell us anything new about that variable. ##### Global Markov Property For disjoint sets of variables $(A, B, S)$ where $S$ separates $A$ and $B$ in G, $A \ci B \mid S$. ##### Algorithm for Independence Due to the separation property, $A \ci B \mid S$ can be easily checked in Markov networs with the following procedure 1. Remove all links that neighbour the set of variables $S$ 2. If there is no path between $A$ and $B$, then $A \ci B \mid S$. 
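A minimal sketch of this separation check for $A \ci B \mid S$ in code (using `networkx`, which is not otherwise used in these notes; the graph and node names are illustrative):

```python
import networkx as nx

def is_independent(G, A, B, S):
    """Check A _|_ B | S in a Markov network G by graph separation."""
    H = G.copy()
    H.remove_nodes_from(S)                    # step 1: remove S and all links touching it
    return not any(nx.has_path(H, a, b)       # step 2: look for any remaining path
                   for a in A for b in B)

# Example: the pairwise network A - C - B
G = nx.Graph([('A', 'C'), ('C', 'B')])
print(is_independent(G, ['A'], ['B'], []))    # False: the path A-C-B is open
print(is_independent(G, ['A'], ['B'], ['C'])) # True: conditioning on C separates A and B
```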
##### Local Markov Property * **Valid for positive potentials** When conditioned on its neighbours, $x$ is independent of the remaining variables in the graph, i.e. $p(x \mid X \setminus x) = p(x \mid ne(x))$ where $ne(x)$ is the set of neighbours of $x$. ##### Pairwise Markov Property For any **non-adjacent** vertices $x$ and $y$, $x \ci y \mid X \setminus \{x, y\}$ #### Markov Random Fields As MRF is defined by a set of distributions $p(x_i \mid ne(x_i))$ where $i$ indexes the distributions. A distribution is an MRF with respect to an undirected graph $G$ if $p(x_i \mid x_{\backslash i}) = p(x_i \mid ne(x_i))$ where $x_{\backslash i} = X \setminus x_i$. #### Hammersley-Clifford Theorem Let $G$ be an undirected graph and $F$ be the factorisation as the product of clique potentials defined on $G$. 1. ($G \implies F$): Local Markov property implies that if we know $ne(x)$, we don't need the other variables. If we do this for each variable, then we naturally get a product of maximal clique potentials. 2. ($F \implies G$): Given a factorisation $F$, local Markov properties on $G$ are implied. See BRML 4.2.3 for a detailed discussion. #### Conditional Independence in Belief and Markov Networks For sets of variables $X, Y, Z$, determine whether $X \ci Y \mid Z$. The following procedure applies to both belief (Bayes) and Markov networks. For Markov networks, only the final separation step is applied. 1. **Ancestral Graph** 1. Identify the ancestors $A$ of $(X \cup Y \cup Z)$. 2. For all nodes $n$, if $n \notin (A \cup X \cup Y \cup Z)$, remove $n$ and all edges in/out of $n$. 2. **Moralisation** 1. For all immoralities $(A, B, C)$ where $C$ is child of both $A$ and $B$, 1. add a link between $A$ and $B$, 2. remove link between $A$ and $C$, 3. remove link between $B$ and $C$. 3. **Separation** 1. Remove links neighbouring $Z$. 2. Look for a path from $X$ to $Y$. If there is no such path, then $X \ci Y \mid Z$. #### Lattice Model for Image Cleaning An example Lattice model is: ```python def lattice_graph(n): """ Construct an nxn lattice graph as graphviz.Graph object. Parameters ---------- n : int Number of nodes per row and column. Returns ------- graph : graphviz.Graph Corresponding lattice graph with nodes indexed from 0 to n-1. """ G = gz.Graph() for i in range(n): with G.subgraph() as S: S.attr(rank='same') for j in range(n): S.node(str(n*i + j)) edge_set = set() for index in range(n*n): i, j = divmod(index, n) if i > 0: edge_set.add(frozenset((index, n*(i - 1) + j))) if i < n - 1: edge_set.add(frozenset((index, n*(i + 1) + j))) if j > 0: edge_set.add(frozenset((index, n*i + j - 1))) if j < n - 1: edge_set.add(frozenset((index, n*i + j + 1))) sorted_edges = (sorted(pair) for pair in edge_set) str_edges = ((str(first), str(second)) for (first, second) in sorted_edges) G.edges(str_edges) return G lattice_graph(4) ``` and the corresponding Markov network model corresponds to the following joint distribution $$ p(x_1, \dots, x_9) = \frac{1}{Z}\prod_{i \sim j} \phi_{ij}(x_i, x_j) $$ where $i \sim j$ if $i$ and $j$ are neighbours. This model can be thought as representing a locality principle where if we know the neighbours of a node, we know everything we need to know about that node. We can use this idea for images since pixels tend to be similar to their neighbours. ### Chain Graphical Models Chain Graphs are a mixture of directed models (Bayes Nets) and undirected models (Markov Networks). 
#### Example ```python G = gz.Digraph() with G.subgraph() as S: S.attr(rank='same') S.node('a') S.node('b') G.edges(['ac', 'bd']) with G.subgraph() as S: S.attr(rank='same') S.edge('c', 'd', dir='none') G ``` corresponds to $p(a, b, c, d) = p(c, d \mid a, b)p(a)p(b)$ with $$ p(c, d \mid a, b) = \frac{\phi(c, d)p(c \mid a)p(d \mid b)}{\sum_{c, d} \phi(c, d)p(c \mid a)p(d \mid b)} $$ A chain graph is a DAG over the chain components. Hence, chain components can be thought of as Markov networks whereas the whole DAG is a belief network over these separate Markov networks. \begin{definition} The chain components of a graph $G$ are obtained by 1. Forming a graph $G\prime$ with directed edges removed from $G$ 2. Each connected component in $G\prime$ constitutes a chain component \end{definition} * A Bayesian Network is a Chain Graph where each connected component is a singleton. * A Markov Network is a Chain Graph with a single connected component. --- * Chain graphs can represent independence statements that no Markov or Bayes network can represent alone. 1. Markov networks can represent cyclic dependencies while Bayes networks cannot. 2. Bayes networks can represent certain marginal independencies which Markov networks cannot (3 node network with a collider). 3. A chain graph can represent both of these independence assumptions in a single model. #### Chain Graph Distribution The distribution associated with a chain graph $G$ is found by first identifying the chain components, $\tau$ and their associated variables $X_{\tau}$. Then $$ p(x) = \prod_{\tau} p(X_{\tau} \mid pa(X_{\tau})) $$ where $$ p(X_{\tau} \mid pa(X_{\tau})) \propto \prod_{d \in D_{\tau}} p(x_d \mid pa(x_d))\prod_{c \in C_{\tau}} \phi(X_c) $$ where $C_{\tau}$ denotes the union of the cliques in the component $\tau$, $D_{\tau}$ is the set of variables in component $\tau$ that correspond to directed terms $p(x_d \mid pa(x_d))$. The proportionality factor is determined implicitly by the constraint that the distribution sums to $1$. ##### Example CG Distribution ```python G = gz.Digraph() for level, rank in [('bgc', 'min'), ('ade', 'same'), ('hf', 'max')]: with G.subgraph() as S: S.attr(rank=rank) for node in level: S.node(node) directed_edges = ['ah', 'ab', 'cd', 'cg'] undirected_edges = ['df', 'ad', 'bg', 'eh', 'ef', 'ae'] G.edges(directed_edges) for edge in undirected_edges: G.edge(edge[0], edge[1], dir='none') G ``` When the directed edges are removed, the connected components are $b, cg, aedhf$. Hence, this CG corresponds to the following Bayes Net ```python G.clear() G.edges([('aedhf', 'bg'), ('c', 'bg'), ('c', 'aedhf')]) G ``` where each node of this Bayes Net is itself a Markov Network. Hence, the overall distribution of the CG can be written as $$ p(\cdot) \propto p(b \mid a)p(g \mid c)\phi(b, g) \cdot p(d \mid c)\phi(a, e)\phi(e, h)\phi(e, f)\phi(d, f)\phi(a, d) \cdot p(c) $$ ### Factor Graphs Given a function $f(x_1, \dots, x_n) = \prod_i \psi_i(X_i)$, the factor graph has a node (a square) for each factor $\psi_i$ , and a variable node (circle) for each variable $x_j$. For each $x_j \in X_i$ an undirected link is made between factor $\psi_i$ and variable $x_j$. When used to represent a distribution, we need to normalize: $$ p(x_1, \dots, x_n) = \frac{1}{Z}\prod_i \psi_i(X_i) $$ where $Z = \sum_X \prod_i \psi_i(X_i)$ is the normalisation constant. 
If a factor $\psi_i(X_i)$ is a conditional distribution in the form $p(x_i \mid pa(x_i))$, then we can use directed edges from the parents of the factor to the factor, and from the factor to the children of the factor. Directed edges allows us to preserve the information that the factor is a probability distribution. #### Example $$ p(a, b, c) = p(a \mid b, c)p(b)p(c). $$ ##### Corresponding Bayes Net ```python G = gz.Digraph() G.edges(['ba', 'ca']) G ``` ##### Corresponding Factor Graph $$ p(a, b, c) \propto \psi_1(a, b, c)\psi_2(b)\psi_2(c) $$ ```python G = gz.Digraph() G.node('fa', shape='square', style='filled', color='black', width='0.1') G.node('fb', shape='square', style='filled', color='black', width='0.1') G.node('fc', shape='square', style='filled', color='black', width='0.1') G.edges([('fc', 'c'), ('fb', 'b'), ('fa', 'a'), ('b', 'fa'), ('c', 'fa')]) G ``` #### Factor Graph Conditional Independence Two disjoint sets of variables $A$ and $B$ are conditionally independent given a third set $C$ if all paths from $A$ to $B$ are blocked. A path is blocked if at least one of the following is satisfied: 1. One of the variables in the path is in the conditioning set 2. One of the variables **or factors** in the path is a collider, and neither the variable/factor nor any of its descendants are in the conditioning set. ### Expressiveness of Graphical Models #### Bayes Network to Markov Network Each directed distribution can be represented as an undirected distribution by associating a potential for each factor of the directed distribution. However, transformation from a bayes net to a markov net **may cause information loss!** \begin{remark} Markov network associated with a Bayes network is the **moralised** version of the Bayes network. One can moralise a Bayes network by drawing an undirected edge between each pair of unconnected nodes which have a common child. \end{remark} ##### Moralisation Causes Loss of Independence Information Consider ```python G = gz.Digraph() G.edges(['ac', 'bc']) G ``` In this Bayes net, $a \ci b$. However, if we write the corresponding Markov network ```python G = gz.Graph() G.engine = 'neato' G.edges(['ab', 'bc', 'ac']) G ``` we cannot infer from the graph that $a \ci b$. During the transformation process, $a \ci b$ information is lost. #### Markov Network to Bayes Network Representing cyclic dependencies is not possible with a Bayes Network #### Distribution Maps \begin{definition} A graph $G$ is an **independence map** of a distribution $P$ if every conditional independence statement that one can derive from $G$ is true in $P$. * By contraposition, every dependency statement one can derive from $P$ is true in $G$. Conversely, a graph $G$ is a **dependence map** of a distribution $P$ if every conditional independence statement that one can derive from $P$ is true in $G$. * By contraposition, every dependency statement one can derive from $G$ is true in $P$. A graph $G$ which is both an I-map and a D-map of distribution $P$ is a **perfect map** of $P$. \end{definition} Let $L_P$ be the set of all conditional independence statements one can derive from $P$. Define $L_G$ similarly. Then, according to these definitions, $ \begin{align} \text{G is a D-map} &\implies L_P \subseteq L_G, \\ \text{G is an I-map} &\implies L_G \subseteq L_P, \\ \text{G is a perfect map} &\implies L_G = L_P. \end{align} $ ##### Maps of Distribution Classes In this case, all numerical instances of the distribution class must obey the requirements. 
To do this, $L_P = \cup_i L_{P_i}$ where $P_i$ is a numerical instance of distribution $P$.
9baab462b46ce5f64a6a805251de67f205235287
70,344
ipynb
Jupyter Notebook
BRML/notebooks/chapter4.ipynb
eozd/brml-notes
46a14ae7ea22e9786a750b99293bea70c8a11af9
[ "MIT" ]
null
null
null
BRML/notebooks/chapter4.ipynb
eozd/brml-notes
46a14ae7ea22e9786a750b99293bea70c8a11af9
[ "MIT" ]
null
null
null
BRML/notebooks/chapter4.ipynb
eozd/brml-notes
46a14ae7ea22e9786a750b99293bea70c8a11af9
[ "MIT" ]
1
2020-03-23T00:44:06.000Z
2020-03-23T00:44:06.000Z
45.94644
329
0.499673
true
4,125
Qwen/Qwen-72B
1. YES 2. YES
0.824462
0.894789
0.73772
__label__eng_Latn
0.991746
0.552302
# Quantum phase estimation Your task in this notebook is to implement the quantum Fourier transform on a set of 3 qubits in Qiskit, and then use it to estimate the eigenvalue of a simple Hamiltonian. ### Part A: Implementing the QFT ```python import numpy as np from qiskit import QuantumRegister, ClassicalRegister from qiskit import QuantumCircuit from qiskit import execute, BasicAer import qiskit.tools.visualization as qvis import warnings warnings.filterwarnings("ignore", category=DeprecationWarning) ``` **Task 1.** In the lecture slides I showed the QFT in general; using this template, write out the circuit for the 3-qubit case. Don't forget the SWAP at the end! ** A. ** _Here's a graphic of the circuit._ **Task 2.** Recall that the rotations have the form \begin{equation} R_k = \begin{pmatrix} 1 & 0 \\ 0 & e^{2\pi i / 2^k} \end{pmatrix} \end{equation} Evaluate the unitary operations for the rotations present in Task 1. Do these gates look familiar? ** A. ** _These are the $S$ and $T$ gates that I showed in the slides!_ ** Task 3.** Now it's time to implement the circuit. I've set up a 3-qubit quantum register, so all you have to do is implement the gates. For reference, [this page](https://qiskit.org/documentation/terra/summary_of_quantum_operations.html#controlled-operations-on-qubits) contains information on the gates you can implement natively in Qiskit. Hint: look at the `cu1` gates. ```python # Empty register q = QuantumRegister(3) # This object will be our circuit qft = QuantumCircuit(q) # Apply gates to the first qubit qft.h(q[0]) qft.cu1(np.pi/2, q[1], q[0]) qft.cu1(np.pi/4, q[2], q[0]) # Apply gates to the second qubit qft.h(q[1]) qft.cu1(np.pi/2, q[2], q[1]) # Apply remaining gates qft.h(q[2]) qft.swap(q[0], q[2]) ``` ### Part B: Eigenvalue estimation In this part, we're going to implement the phase estimation algorithm to get the eigenvalue of the following unitary: \begin{equation} U = \begin{pmatrix} 1 & 0 \\ 0 & e^{5\pi i / 4} \\ \end{pmatrix} \end{equation} ** Task 4. ** What is the eigenvalue $\varphi$ of this matrix as expressed in the phase estimation problem, i.e. $e^{2\pi i \varphi}$? What is its expansion in the notation $0.\varphi_1 \cdots \varphi_n$? ** A. ** The eigenvalue here is $\varphi = 0.625$. In decimal notation this is $0.101$, i.e. $1 \cdot (2^{-1}) + 0 \cdot 2^{-2} + 1 \cdot 2^{-3}$. This fits perfectly in a 3-qubit state, and we should expect to measure $|101\rangle$ as our output state after running the algorithm. Now let's set up the two quantum registers: one for the 3 qubits to represent phase, and 2 for the eigenvector since it is a 4-dimensional vector. We'll also need 3 classical bits to hold the measurement outcomes of the phase register. ```python ph_reg = QuantumRegister(3) eig_reg = QuantumRegister(1) meas_reg = ClassicalRegister(3) qpe = QuantumCircuit(ph_reg, eig_reg, meas_reg) ``` ** Task 5. ** Apply the phase estimation algorithm to your registers. Recall that you'll also need to initialize the eigenvector register into the proper eigenvector. ```python # Initialize the eigenvector qpe.x(eig_reg) # Perform the phase estimation; we're applying a controlled U varying amounts of times qpe.h(ph_reg) U_phase = 2 * np.pi * 0.625 qpe.cu1(U_phase, ph_reg[2], eig_reg[0]) qpe.cu1(U_phase, ph_reg[1], eig_reg[0]) qpe.cu1(U_phase, ph_reg[1], eig_reg[0]) qpe.cu1(U_phase, ph_reg[0], eig_reg[0]) qpe.cu1(U_phase, ph_reg[0], eig_reg[0]) qpe.cu1(U_phase, ph_reg[0], eig_reg[0]) qpe.cu1(U_phase, ph_reg[0], eig_reg[0]) ``` ** Task 6. 
** Apply the inverse QFT to your phase register. Note: unfortunately, Qiskit does not yet support a simple circuit reversal operation; so you'll have to manually list the inverses of each gate. Don't forget to take the adjoint! ```python qpe.swap(ph_reg[0], ph_reg[2]) qpe.h(ph_reg[2]) qpe.cu1(-np.pi/2, ph_reg[2], ph_reg[1]) qpe.h(ph_reg[1]) qpe.cu1(-np.pi/4, ph_reg[2], ph_reg[0]) qpe.cu1(-np.pi/2, ph_reg[1], ph_reg[0]) qpe.h(ph_reg[0]) ``` ** Final task. ** Let's simulate the circuit and measure to get the vector corresponding to our phase! ```python qpe.measure(ph_reg, meas_reg) backend = BasicAer.get_backend('qasm_simulator') # the device to run on result = execute(qpe, backend, shots=1000).result() counts = result.get_counts(qpe) # Extract the counts and compute the value eigenvalue = 0 # Reverse - in this case it is symmetric, but in general you'll have to be careful binary_eigenvalue = list(counts.keys())[0] binary_eigenvalue = binary_eigenvalue[::-1] for idx, b in enumerate(list(binary_eigenvalue)): if b == '1': eigenvalue += 2 ** (-(idx + 1)) print(f"Phase estimation routine returned the eigenvalue {eigenvalue}.") ```
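Independently of Qiskit, the outcome distribution of ideal $t$-qubit phase estimation can be checked classically from $P(k)=\bigl|\tfrac{1}{2^{t}}\sum_{j=0}^{2^{t}-1}e^{2\pi i j(\varphi-k/2^{t})}\bigr|^{2}$; for $\varphi=0.625$ and $t=3$ all the mass sits on $k=5$, i.e. the bitstring $101$ obtained above. A small numpy sketch of this check (assuming a noiseless circuit):

```python
import numpy as np

phi = 0.625            # eigenvalue phase of U
t = 3                  # number of counting qubits
M = 2**t

js = np.arange(M)
probs = [np.abs(np.sum(np.exp(2j*np.pi*js*(phi - k/M)))/M)**2 for k in range(M)]

for k, p in enumerate(probs):
    print(format(k, '03b'), round(p, 3))   # all probability on k = 5 -> '101'
```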
84ff879c13ccd4909b689b4e7092630655f5a77f
7,870
ipynb
Jupyter Notebook
02-gate-model-applications/notebooks/Solved-Quantum-Phase-Estimation.ipynb
a-capra/Intro-QC-TRIUMF
9738e6a49f226367247cf7bc05a00751f7bf2fe5
[ "MIT" ]
27
2019-05-09T17:40:20.000Z
2021-12-15T12:23:17.000Z
02-gate-model-applications/notebooks/Solved-Quantum-Phase-Estimation.ipynb
a-capra/Intro-QC-TRIUMF
9738e6a49f226367247cf7bc05a00751f7bf2fe5
[ "MIT" ]
1
2021-09-29T07:34:09.000Z
2021-09-29T21:01:29.000Z
02-gate-model-applications/notebooks/Solved-Quantum-Phase-Estimation.ipynb
a-capra/Intro-QC-TRIUMF
9738e6a49f226367247cf7bc05a00751f7bf2fe5
[ "MIT" ]
14
2019-05-09T18:45:49.000Z
2021-12-15T12:23:21.000Z
29.365672
381
0.57014
true
1,435
Qwen/Qwen-72B
1. YES 2. YES
0.938124
0.853913
0.801076
__label__eng_Latn
0.952179
0.699501
# Subsampling approaches to MCMC for tall data Last modified on 11th May 2015 This notebook illustrates various approaches to subsampling MCMC, see (Bardenet, Doucet, and Holmes, ICML'14 and a 2015 arxiv preprint entitled "On MCMC for tall data" by the same authors. By default, executing cells from top to bottom will reproduce the running examples in the latter paper. If you want to jump to a particular method, you should at least evaluate the first two sections beforehand ("Generate..." and "Vanilla MH"), as they contain functions and data that is used throughout the notebook. Please report any issue (or interesting discovery!) to the paper's corresponding author. **Table of contents** [Generate toy data](#Generate-toy-data) [Vanilla MH](#Vanilla-isotropic-Gaussian-random-walk-Metropolis) [Austerity MH](#Austerity-MH) [Confidence sampler without proxy](#Vanilla-confidence-sampler) [Poisson estimator](#Pseudo-marginal-MH-with-Poisson-estimator) [Confidence sampler with proxy](#Confidence-MH-with-2nd-order-Taylor-likelihood-proxy) [Firefly MH](#Firefly-MH-with-2nd-order-Taylor-lower-bound) [SGLD](#Stochastic-gradient-Langevin-dynamics) ```python %pylab inline figsize(10,10) # in the global namespace when inline backend is in use. ``` Populating the interactive namespace from numpy and matplotlib ```python import numpy as np import numpy.random as npr import scipy.stats as sps import scipy.special as spsp import scipy.misc as spm import scipy.optimize as spo import numpy.linalg as npl import matplotlib as mpl import matplotlib.pyplot as plt import matplotlib.colors as colors import matplotlib.cm as cmx import sympy as sym import time import seaborn as sns import seaborn.distributions as snsd import math as math sns.set(style="ticks"); plt.ioff() # turn off interactive plotting plt.ticklabel_format(style='sci', axis='x', scilimits=(0,0)) mpl.rcParams['xtick.labelsize'] = 22 mpl.rcParams['ytick.labelsize'] = 22 plt.rc('axes', labelsize=22) plt.rc('legend', fontsize=22) mpl.rcParams['ps.useafm'] = True mpl.rcParams['pdf.use14corefonts'] = True mpl.rcParams['text.usetex'] = True npr.seed(1) ``` //anaconda/envs/py27/lib/python2.7/site-packages/IPython/html.py:14: ShimWarning: The `IPython.html` package has been deprecated. You should import from `notebook` instead. `IPython.html.widgets` has moved to `ipywidgets`. "`IPython.html.widgets` has moved to `ipywidgets`.", ShimWarning) We will save Figures in the following directory, by default it is the current directory. ```python saveDirStr = "" ``` ## Generate toy data First, let us generate some data. Change variable "dataType" to switch between the Gaussian and the lognormal examples from the paper. ```python # Generate data npr.seed(1) N = 100000 dataType = "Gaussian" #dataType = "logNormal" if dataType == "Gaussian": x = npr.randn(N) elif dataType == "logNormal": x = npr.lognormal(0,1,size=N) plt.clf() plt.hist(x, 30, normed=True) plt.show() # We store the mean and std deviation for later reference, they are also the MAP and MLE estimates in this case. realMean = np.mean(x) realStd = np.std(x) print "Mean of x =", realMean print "Std of x =", realStd ``` We are going to estimate the mean and std deviation of a Gaussian model, applied to the toy dataset generated above. 
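For reference, the per-datum log-likelihood used throughout (up to an additive constant) and the parametrisation of the chains are

$$
\log p(x\mid\mu,\sigma) = -\frac{(x-\mu)^{2}}{2\sigma^{2}} - \log\sigma + \mathrm{const},
\qquad \theta = (\mu,\log\sigma),
$$

so `getLogLhd` below drops the constant $-\tfrac{1}{2}\log(2\pi)$, and the samplers move in $(\mu,\log\sigma)$, which is why `np.exp(theta[1])` appears whenever a likelihood is evaluated.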
```python def getLogLhd(x, mu, sigma): """ return an array of Gaussian log likelihoods up to a constant """ return -(x-mu)**2/(2*sigma**2) - np.log(sigma) ``` ```python def combineMeansAndSSQs(N1, mu1, ssq1, N2, mu2, ssq2): """ combine means and sum of squares of two sets """ dd = mu2 - mu1 mu = mu1 ssq = ssq1 N = N1+N2 mu += dd*N2/N ssq += ssq2 ssq += (dd**2) * N1 * N2 / N return N, mu, ssq ``` The following function plots the results as in the paper. It is a bit messy, but you can safely skip this cell without missing anything on the algorithms. ```python def plotResults(S, ns, algoName="doesNotMatter", weights="doesNotMatter", boolSave=0, figId="basic"): """ plot results """ # Plot joint sample with seaborn m = np.min(S[:,0]) # Precompute limits for x and y plots M = np.max(S[:,0]) m_ref = np.min(S_ref[:,0]) # Precompute limits for x and y plots M_ref = np.max(S_ref[:,0]) xlimInf = min(m, m_ref)# - (M-m)/10 xlimSup = max(M, M_ref)# +(M-m)/10 print "xlims =", xlimInf, xlimSup # +(M-m)/10 xPlot = np.linspace(xlimInf, xlimSup, 1000) m = np.min(np.exp(S[:,1])) M = np.max(np.exp(S[:,1])) m_ref = np.min(np.exp(S_ref[:,1])) # Precompute limits for x and y plots M_ref = np.max(np.exp(S_ref[:,1])) ylimInf = min(m, m_ref)# - (M-m)/10 ylimSup = max(M, M_ref) yPlot = np.linspace(ylimInf, ylimSup, 1000) if algoName == "sgld": # Need to convert a weighted sample into a unweighted sample sumWeights = np.sum(weights) normalizedWeights = weights/sumWeights T = S.shape[0] inds = npr.choice(np.arange(T), T, p=normalizedWeights) S = S[inds,:] g = sns.jointplot(S[:,0], np.exp(S[:,1]), kind="hex", space=0,size=10, xlim=(xlimInf,xlimSup), ylim=(ylimInf,ylimSup), stat_func=None, marginal_kws={"norm_hist":True}) # plt.sca(g.ax_joint) plt.xlabel("$\mu$",) plt.ylabel("$\sigma$") # Add Reference long MH draw # ... to the joint plot sns.kdeplot(S_ref[:,0], np.exp(S_ref[:,1]), ax=g.ax_joint, bw="silverman", cmap="BuGn_r", linewidth=5) # ... to the marginal plots g.ax_marg_x.plot(xPlot, marg0(xPlot), 'g', linewidth=6, label="Ref") g.ax_marg_y.plot(marg1(yPlot), yPlot, 'g', linewidth=6) # Add Bernstein von Mises approximations # ... to the joint plot X, Y = np.meshgrid(xPlot, yPlot) minusFisher = np.array([[1./realStd**2, 0],[0, 2./realStd**2]]) SS = 1./N*npl.inv(minusFisher) Z = plt.mlab.bivariate_normal(X, Y, sigmax=np.sqrt(SS[0,0]), mux=realMean, muy=realStd, sigmay=np.sqrt(SS[1,1]), sigmaxy=np.sqrt(SS[0,1])) # Plot BvM approximation g.ax_joint.contour(X, Y, -Z, 1, colors="r", label="BvM", linestyle='--',linewidths=(6)) # ... 
to the marginal plots g.ax_marg_x.plot(xPlot, sps.norm(realMean, np.sqrt(SS[0,0])).pdf(xPlot), color="red", linewidth=6, linestyle='--', label="BvM") g.ax_marg_y.plot(sps.norm(realStd, np.sqrt(SS[1,1])).pdf(yPlot), yPlot, color="red", linewidth=6, linestyle='--') # Print legend and save g.ax_marg_x.legend() print saveDirStr+"chain_"+dataType+"_"+algoName+"_"+figId+".pdf" plt.savefig(saveDirStr+"chain_"+dataType+"_"+algoName+"_"+figId+".pdf") if boolSave: plt.savefig(saveDirStr+"chain_"+dataType+"_"+algoName+"_"+figId+".pdf") plt.savefig(saveDirStr+"chain_"+dataType+"_"+algoName+"_"+figId+".eps") plt.show() # Plot autocorr of second component c = plt.acorr(np.exp(S[:,1]), maxlags=50, detrend=detrend_mean, normed=True) plt.clf() c = c[1][c[0]>=0] plt.plot(c, linewidth=3) plt.plot(c_ref, label="Ref", linewidth=3, color="g") plt.grid(True) plt.legend(loc=1) if boolSave: plt.savefig(saveDirStr+"autocorr_"+dataType+"_"+algoName+"_"+figId+".pdf") plt.savefig(saveDirStr+"autocorr_"+dataType+"_"+algoName+"_"+figId+".eps") plt.show() # Plot average number of likelihoods computed if not algoName =="vanillaMH": plt.hist(ns, histtype="stepfilled", alpha=.3) labStr = "mean="+str(np.around(1.0*np.mean(ns)/N*100,1))+"\%" plt.axvline(np.mean(ns), linewidth = 4, color="blue", label=labStr) labStr = "median="+str(np.around(1.0*np.median(ns)/N*100,1))+"\%" print "Median=", np.median(ns) plt.axvline(np.median(ns), linewidth = 4, color="blue",linestyle='--', label=labStr) plt.axvline(N, linewidth = 4, color="k", label="n") plt.xlim([0, 2*N+1]) plt.legend() if boolSave: plt.savefig(saveDirStr+"numLhdEvals_"+dataType+"_"+algoName+"_"+figId+".pdf") plt.savefig(saveDirStr+"numLhdEvals_"+dataType+"_"+algoName+"_"+figId+".eps") print "Plots saved" plt.show() ``` ```python # Concentration bounds def ctBernsteinSerfling(N,n,a,b,sigma,delta): """ Bernstein-type bound without replacement, from (Bardenet and Maillard, to appear in Bernoulli) """ l5 = np.log(5/delta) kappa = 7.0/3+3/np.sqrt(2) if n<=N/2: rho = 1-1.0*(n-1)/N else: rho = (1-1.0*n/N)*(1+1.0/n) return sigma*np.sqrt(2*rho*l5/n) + kappa*(b-a)*l5/n def ctHoeffdingSerfling(N,n,a,b,delta): """ Classical Hoeffding-type bound without replacement, from (Serfling, Annals of Stats 1974) """ l2 = np.log(2/delta) if n<=N/2: rho = 1-1.0*(n-1)/N else: rho = (1-1.0*n/N)*(1+1.0/n) return (b-a)*np.sqrt(rho*l2/2/n) def ctBernstein(N,n,a,b,sigma,delta): """ Classical Bernstein bound, see e.g. the book by Boucheron, Lugosi, and Massart, 2014. """ l3 = np.log(3/delta) return sigma*np.sqrt(2*l3/n) + 3*(b-a)*l3/n ``` The proxies we use are Taylor expansions. We will need derivatives of the log likelihood up to order 3. ```python # Differential functions for proxies, # Define vectorized evaluation of gradient and Hessian myGradientVect = lambda x_float, mu_float, sigma_float:np.array([-(2*mu_float - 2*x_float)/(2*sigma_float**2), -1/sigma_float + (-mu_float + x_float)**2/sigma_float**3]).T myHessianVect = lambda x_float, mu_float, sigma_float:[[-1/sigma_float**2*np.ones(x_float.shape), 2*(mu_float - x_float)/sigma_float**3], [2*(mu_float - x_float)/sigma_float**3, (1 - 3*(mu_float - x_float)**2/sigma_float**2)/sigma_float**2]] # Compute third order derivatives to bound the Taylor remainder. 
Symbolic differentiation is not really necessary in this simple case, but # it may be useful in later applications def thirdDerivatives(): x, mu, sigma = sym.symbols('x, mu, sigma') L = [] for i in range(4): for j in range(4): if i+j == 3: args = tuple([-(x-mu)**2/(2*sigma**2) -sym.log(sigma)] + [mu for cpt in range(i)] + [sigma for cpt in range(j)]) L.append( sym.diff(*args) ) return L def evalThirdDerivatives(x_float, mu_float, logSigma_float): tt = thirdDerivatives() return [tt[i].subs('x',x_float).subs('mu',mu_float).subs('sigma',np.exp(logSigma_float)).evalf() for i in range(4)] # Find the MAP (not really necessary here since the MAP are the mean and std deviation of the data) f = lambda theta: -np.mean(getLogLhd(x, theta[0], np.exp(theta[1]))) thetaMAP = spo.minimize(f, np.array([realMean, np.log(realStd)])).x print "MAP is", thetaMAP, "Real values are", realMean, np.log(realStd) tt = thirdDerivatives() print tt ``` MAP is [ 0.00525303 -0.00167212] Real values are 0.00525302848968 -0.00167212367818 [2*(-1 + 6*(mu - x)**2/sigma**2)/sigma**3, -6*(mu - x)/sigma**4, 2/sigma**3, 0] ## Vanilla isotropic Gaussian random walk Metropolis ```python def vanillaMH(T): """ perform traditional isotropic random walk Metropolis """ theta = np.array([realMean,np.log(realStd)]) stepsize = .5/np.sqrt(N) S = np.zeros((T, 2)) acceptance = 0.0 for i in range(T): accepted = 0 done = 0 thetaNew = theta thetaP = theta + stepsize*npr.randn(2) u = npr.rand() lhds = getLogLhd(x, thetaP[0], np.exp(thetaP[1])) - getLogLhd(x, theta[0], np.exp(theta[1])) Lambda = np.mean(lhds) psi = 1./N*np.log(u) if Lambda>psi: thetaNew = thetaP theta = thetaP accepted = 1 S[i,:] = thetaNew else: S[i,:] = theta if i<T/10: # Perform some adaptation of the stepsize in the early iterations stepsize *= np.exp(1./(i+1)**0.6*(accepted-0.5)) acceptance*=i acceptance+=accepted acceptance/=(i+1) if np.mod(i,T/10)==0: print "Iteration", i, "Acceptance", acceptance return S ``` ```python # Compute good marginals using a long run for later comparisons S_ref = vanillaMH(50000) marg0 = sps.gaussian_kde(S_ref[:,0]) marg1 = sps.gaussian_kde(np.exp(S_ref[:,1])) c = plt.acorr(np.exp(S_ref[:,1]), maxlags=50, detrend=detrend_mean, normed=True); c_ref = c[1][c[0]>=0] plt.show() ``` ```python S = vanillaMH(10000) ``` Iteration 0 Acceptance 1.0 Iteration 1000 Acceptance 0.497502497502 Iteration 2000 Acceptance 0.519740129935 Iteration 3000 Acceptance 0.526157947351 Iteration 4000 Acceptance 0.530617345664 Iteration 5000 Acceptance 0.534893021396 Iteration 6000 Acceptance 0.536910514914 Iteration 7000 Acceptance 0.536637623197 Iteration 8000 Acceptance 0.537057867767 Iteration 9000 Acceptance 0.537495833796 ```python plotResults(S, [], algoName="vanillaMH", boolSave=1) ``` ## Austerity MH ```python # Look at the distribution of Student statistics for a given value of (theta,theta') # It depends on the data distribution, but also on the size n of the subsample and (theta, theta') npr.seed(3) theta = np.array([realMean,np.log(realStd)]) # Pick theta to be the MAP thetaP = theta+1./np.sqrt(N)*npr.randn(2) numRepeats = 1000 students = np.zeros((numRepeats,)) n = 100 Lambda_N = np.mean(getLogLhd(x, thetaP[0], np.exp(thetaP[1])) - getLogLhd(x, theta[0], np.exp(theta[1]))) for j in range(numRepeats): npr.shuffle(x) lhds = getLogLhd(x[:n], thetaP[0], np.exp(thetaP[1])) - getLogLhd(x[:n], theta[0], np.exp(theta[1])) Lambda = np.mean(lhds) s = np.std(lhds)/np.sqrt(n)*np.sqrt(1-1.0*n/N) t = (Lambda-Lambda_N)/s students[j] = t 
plt.hist(students,30,normed=True,alpha=0.3) m = np.min(students) M = np.max(students) xplot = np.linspace(m-(M-m)/10, M+(M-m)/10, 100) plt.plot(xplot, sps.t(n-1).pdf(xplot),'r', label="Student pdf") plt.legend(loc=2) plt.savefig(saveDirStr+"student_"+str(n)+"_"+dataType+"_"+"austerityMH"+".pdf") plt.savefig(saveDirStr+"student_"+str(n)+"_"+dataType+"_"+"austerityMH"+".eps") plt.show() ``` ```python def austerityMH(T): """ perform Korattikara, Chen & Welling's austerity MH (ICML'14) """ theta = np.array([realMean,np.log(realStd)]) stepsize = 0.005 S_K = np.zeros((T,2)) eps = .05 acceptance = 0.0 gamma = 1.5 ns_K = [] for i in range(T): npr.shuffle(x) accepted = 0 done = 0 thetaNew = theta thetaP = theta + stepsize*npr.randn(2) u = npr.rand() psi = np.log(u)/N n = 100 # Size of first subsample used: this parameter is crucial but hard to set while not done and n<N: lhds = getLogLhd(x[:n], thetaP[0], np.exp(thetaP[1])) - getLogLhd(x[:n], theta[0], np.exp(theta[1])) Lambda = np.mean(lhds) s = np.std(lhds)/np.sqrt(n)*np.sqrt(1-1.0*n/N) t = (Lambda-psi)/s if 1 - sps.t(n-1).cdf(np.abs(t)) < eps: done = 1 n = min(N,np.floor(gamma*n)) if not done: # the test never rejected H_0 lhds = getLogLhd(x[:N], thetaP[0], np.exp(thetaP[1])) - getLogLhd(x[:N], theta[0], np.exp(theta[1])) Lambda = np.mean(lhds) n = N if i>1 and ns_K[-1] == 2*N: ns_K.append(n) # Half of the likelihoods were computed at the previous stage else: ns_K.append(2*n) if Lambda>psi: theta = thetaP accepted = 1 S_K[i] = thetaP else: S_K[i] = theta if i<T/10: # Perform some adaptation of the stepsize in the early iterations stepsize *= np.exp(1./(i+1)**0.6*(accepted-0.5)) acceptance*=i acceptance+=accepted acceptance/=(i+1) if np.mod(i,T/10)==0: print "Iteration", i, "Acceptance", acceptance, "Avg. num. 
evals", np.mean(ns_K) return S_K, ns_K ``` ```python S_K, ns_K = austerityMH(10000) ``` ```python plotResults(S_K, ns_K, algoName="austerityMH", boolSave=1) ``` ## Vanilla confidence sampler ```python # Confidence MCMC (Bardenet, Doucet, and Holmes, ICML'14) def confidenceMCMC(T): # Initialize theta = np.array([realMean,np.log(realStd)]) stepsize = .01 S_B = np.zeros((T,2)) delta = .1 acceptance = 0.0 gamma = 1.5 ns_B = [] for i in range(T): npr.shuffle(x) accepted = 0 done = 0 thetaNew = theta thetaP = theta + stepsize*npr.randn(2) u = npr.rand() n = N/10 cpt = 0 lhds = getLogLhd(x, thetaP[0], np.exp(thetaP[1])) - getLogLhd(x, theta[0], np.exp(theta[1])) a = np.min(lhds) b = np.max(lhds) while not done and n<N: n = min(N,np.floor(gamma*n)) cpt+=1 deltaP = delta/2/cpt**2 # The following step should be done differently to avoid recomputing previous likelihoods, but for the toy examples we keep it short lhds = getLogLhd(x[:n], thetaP[0], np.exp(thetaP[1])) - getLogLhd(x[:n], theta[0], np.exp(theta[1])) Lambda = np.mean(lhds) sigma = np.std(lhds) psi = np.log(u)/N if np.abs(Lambda-psi) > ctBernstein(N,n,a,b,sigma,deltaP): done = 1 if i>1 and ns_B[-1] == 2*N: ns_B.append(n) # Half of the likelihoods were computed at the previous stage else: ns_B.append(2*n) # The algorithm required all likelihoods for theta and theta', next iteration we can reuse half of them if Lambda>psi: # Accept theta = thetaP accepted = 1 S_B[i] = thetaP else: # Reject S_B[i] = theta if i<T/10: # Perform some adaptation of the stepsize in the early iterations stepsize *= np.exp(1./(i+1)**0.6*(accepted-0.5)) acceptance*=i acceptance+=accepted acceptance/=(i+1) if np.mod(i,T/10)==0: # Monitor acceptance and average number of samples used print "Iteration", i, "Acceptance", acceptance, "Avg. num evals", np.mean(ns_B), "sigma/sqrt(n)", sigma/np.sqrt(n), "R/n", (b-a)/n return S_B, ns_B ``` ```python S_B, ns_B = confidenceMCMC(10000) ``` ```python plotResults(S_B, ns_B, boolSave=1, algoName="vanillaConfidence") ``` ## Pseudo-marginal MH with Poisson estimator ```python # define unbiased estimator def logOfUnbiasedEstimate(batch, a, eps): """ return log of unbiased estimator of [Rhee and Glynn, tech report 2013], as stated in [Jacob and Thiéry, to appear in AoS] """ N = npr.geometric(eps/(1+eps)) # Draw geometric truncation level logw = np.arange(N)*np.log(1+eps) # Compute everything in the log domain to avoid big products logfacts = np.array([math.lgamma(k) for k in range(2,N+2,1)]) logprods = ( np.array([np.sum(np.log(batch[:k]-a)) for k in range(1,N+1,1)]) ) logY = a + spm.logsumexp([0, spm.logsumexp(logw - logfacts + logprods)]) return N, logY ``` We first test the above estimator by estimating $e^\mu$ with a sample from a Gaussian $\mathcal{N}(\mu,\sigma^2)$ or a gamma $\Gamma(\mu/1.5,1.5)$. Note that everything is computed in the log domain, so we actually compare the log of the average of the unbiased estimates and compare it to $\mu$. ```python # test unbiased estimator on a simple example eps = 0.001 L = [] mu = 1 sigma = 1 numRepeats = 1000 batchSize = 100 for i in range(numRepeats): #batch = mu + sigma*npr.randn(batchSize) batch = npr.gamma(mu/1.5,1.5,size=batchSize) # Try a gamma with mean mu a = np.min(batch)-1 # Control how loose the bound is L.append( logOfUnbiasedEstimate(batch, a, eps)[1] ) plt.hist(L, alpha=.3, histtype="stepfilled") plt.axvline(mu, linewidth = 8, color="r", alpha = .3, label="mu") plt.axvline(np.log(1./numRepeats)+spm.logsumexp(L), linestyle='--', linewidth = 4, color="g", label="log of avg. 
of estimates") print "Mean", np.mean(L), "Var", np.var(L), "Exp(sigma2)", np.exp(sigma**2), "Exp(mu-a)2", np.exp((mu-a)**2) plt.legend(loc=2) plt.show() ``` ```python def poissonPseudoMarginalMH(T): """ perform psudo-marginal MH with Poisson estimator of the likelihood """ eps = .01 batchSize = 1000 # Expected number of samples used is roughly batchSize/eps for small eps theta = np.array([realMean,np.log(realStd)]) lhds = getLogLhd(x, theta[0], np.exp(theta[1])) batchLogLhds = np.array([np.sum(npr.choice(lhds, batchSize)) for j in range(20000)]) a = np.min(batchLogLhds) # Set optimal bound, unrealistic numBatches, logPiHat = logOfUnbiasedEstimate(N*1./batchSize*batchLogLhds, N*1./batchSize*a, eps) ns_P = [batchSize*numBatches] stepsize = .1/np.sqrt(N) S_P = np.zeros((T, 2)) #ns_P = [] acceptance = 0.0 for i in range(T): accepted = 0 done = 0 thetaNew = theta thetaP = theta + stepsize*npr.randn(2) u = npr.rand() tmp = getLogLhd(x, thetaP[0], np.exp(thetaP[1])) # likelihoods to choose from lhdsP = np.array([np.sum(npr.choice(tmp, batchSize)) for j in range(100)]) # compute log likelihoods of batches aP = np.min(lhdsP) # Set optimal bound, unrealistic numBatchesP, logPiHatP = logOfUnbiasedEstimate(N*1./batchSize*lhdsP, N*1./batchSize*aP, eps) # note the renormalization N/batchSize Lambda = 1./N*(logPiHatP - logPiHat) psi = 1./N*np.log(u) #print "Lambda, Psi", Lambda, psi if Lambda>psi: # Accept S_P[i,:] = thetaP theta = thetaP logPiHat = logPiHatP accepted = 1 else: # Reject S_P[i,:] = theta if i<T/10: # Perform some adaptation of the stepsize in the early iterations stepsize *= np.exp(1./(i+1)**0.6*(accepted-0.5)) acceptance*=i acceptance+=accepted acceptance/=(i+1) ns_P.append(batchSize*numBatchesP) if np.mod(i,T/10)==0: realLogLhd = np.sum(getLogLhd(x, theta[0], np.exp(theta[1]))) print "Iteration", i, "Acceptance", acceptance, "logPiHat", logPiHat, "logPiHatP", logPiHatP, "real logLhd", realLogLhd return ns_P, S_P ``` ```python ns_P, S_P = poissonPseudoMarginalMH(1000) ``` ```python # Plot the chain plt.plot(S_P[:,0]) plt.show() plt.plot(S_P[:,1]) plt.show() ``` ```python plotResults(S_P, ns_P) # Not useful as we couldn't make the chain mix well enough ``` ## Confidence MH with 2nd order Taylor likelihood proxy We will need to bound the absolute value of the third derivatives. ```python tt = thirdDerivatives() print tt ``` To apply the two Taylor expansions, it is enough to bound them on the union of the two segments $$\{(1-t)\theta^\star+t\theta, t\in[0,1]\}\cup\{(1-t)\theta^\star+t\theta', t\in[0,1]\},$$ where $\theta=(\mu,\sigma)$. Given their form, it is enough to bound them by taking the max of their absolute values when $\vert x-\mu\vert$ is maximal and $\sigma$ minimal in this union. Since the code is a bit hard to read, we have added a simple check of the bound in the algorithm to convince the reader the bound is correct. ```python # Confidence MCMC with proxy (Bardenet, Doucet, and Holmes, this submission) def confidenceMCMCWithProxy(T): npr.seed(1) # Initialize theta = np.array([realMean,np.log(realStd)]) stepsize = .01 S_B = np.zeros((T,2)) delta = .1 acceptance = 0.0 gamma = 2. 
ns_B = [] # Compute some statistics of the data that will be useful for bounding the error and averaging the proxies minx = np.min(x) maxx = np.max(x) meanx = np.mean(x) meanxSquared = np.mean(x**2) # Prepare total sum of Taylor proxys muMAP = thetaMAP[0] sigmaMAP = np.exp(thetaMAP[1]) meanGradMAP = np.array( [(meanx - muMAP)/sigmaMAP**2, (meanxSquared-2*muMAP*meanx+muMAP**2)/sigmaMAP**3 - 1./sigmaMAP] ) meanHessMAP = np.array( [[-1./sigmaMAP**2, -2*(meanx-muMAP)/sigmaMAP**3], [-2*(meanx-muMAP)/sigmaMAP**3, -3*(meanxSquared-2*muMAP*meanx+muMAP**2)/sigmaMAP**4 + 1/sigmaMAP**2]] ) for i in range(T): npr.shuffle(x) accepted = 0 done = 0 thetaNew = theta thetaP = theta + stepsize*npr.randn(2) u = npr.rand() n = 2 t0 = 0 cpt = 0 Lambda = 0 ssq = 0 # Sum of squares # Prepare Taylor bounds xMinusMuMax = np.max(np.abs([1, minx-theta[0], maxx-theta[0], minx-thetaMAP[0], maxx-thetaMAP[0], minx-thetaP[0], maxx-thetaP[0]])) sigmaMin = np.min(np.exp([theta[1], thetaMAP[1], thetaP[1]])) R = float(np.max(np.abs(evalThirdDerivatives(xMinusMuMax, 0, np.log(sigmaMin))))) h = np.array([theta[0]-thetaMAP[0], np.exp(theta[1])-np.exp(thetaMAP[1])]) hP = np.array([thetaP[0]-thetaMAP[0], np.exp(thetaP[1])-np.exp(thetaMAP[1])]) R *= 2*1./6 * max(np.sum(np.abs(h)), np.sum(np.abs(hP)))**3 a = -R b = R # We can already compute the average proxy log likelihood ratio avgTotalProxy = np.dot(meanGradMAP, hP-h) + .5*np.dot( hP-h, np.dot(meanHessMAP, h+hP) ) while not done and n<N: n = min(N,np.floor(gamma*n)) cpt+=1 deltaP = delta/2/cpt**2 batch = x[t0:n] lhds = getLogLhd(batch, thetaP[0], np.exp(thetaP[1])) - getLogLhd(batch, theta[0], np.exp(theta[1])) proxys = np.dot(myGradientVect(batch, muMAP, sigmaMAP), hP-h) + 0.5*np.dot(np.dot(hP-h, myHessianVect(batch,muMAP,sigmaMAP)).T, h+hP) if np.any(np.abs(lhds-proxys)>R): # Just a check that our error is correctly bounded print "Taylor remainder is underestimated" tmp, Lambda, ssq = combineMeansAndSSQs(t0, Lambda, ssq, n-t0, np.mean(lhds-proxys), (n-t0)*np.var(lhds-proxys)) sigma = np.sqrt(1./n*ssq) psi = np.log(u)/N t0 = n if np.abs(Lambda-psi + avgTotalProxy) > ctBernstein(N,n,a,b,sigma,deltaP): done = 1 if i>1 and ns_B[-1] == 2*N: ns_B.append(n) # Half of the likelihoods were computed at the previous stage else: ns_B.append(2*n) if Lambda+avgTotalProxy>psi: # Accept theta = thetaP accepted = 1 S_B[i] = thetaP else: # Reject S_B[i] = theta if i<T/10: # Perform some adaptation of the stepsize in the early iterations stepsize *= np.exp(1./(i+1)**0.6*(accepted-0.5)) acceptance*=i acceptance+=accepted acceptance/=(i+1) if np.mod(i,T/10)==0: # Monitor acceptance and average number of samples used print "Iteration", i, "Acceptance", acceptance, "Avg. num samples", np.mean(ns_B), "Dist. to MAP", np.sum( np.abs(theta-thetaMAP) ), "sigma/sqrt(n)", sigma/np.sqrt(n), "R/n", R/n return S_B, ns_B ``` ```python S_BP, ns_BP = confidenceMCMCWithProxy(10000) ``` ```python plotResults(S_BP, ns_BP, algoName="confidenceProxy", boolSave=1) ``` ## Confidence MCMC with proxys dropped along the way This is a version of the confidence sampler with proxy that drops a proxy every 20 iterations. 
```python def dropProxy(thetaStar, meanx, minx, maxx, meanxSquared): """ compute all quantities necessary to the evaluation of a proxy at thetaStar """ muStar = thetaStar[0] sigmaStar = np.exp(thetaStar[1]) meanGradStar = np.array( [(meanx - muStar)/sigmaStar**2, (meanxSquared-2*muStar*meanx+muStar**2)/sigmaStar**3 - 1./sigmaStar] ) meanHessStar = np.array( [[-1./sigmaStar**2, -2*(meanx-muStar)/sigmaStar**3], [-2*(meanx-muStar)/sigmaStar**3, -3*(meanxSquared-2*muStar*meanx+muStar**2)/sigmaStar**4 + 1/sigmaStar**2]] ) return meanGradStar, meanHessStar def confidenceMCMCWithProxyDroppedAlong(T): """ perform confidence MCMC with proxy dropped every 20 iterations """ npr.seed(1) # Initialize theta = np.array([realMean,np.log(realStd)]) stepsize = .1 S_B = np.zeros((T,2)) delta = .1 acceptance = 0.0 gamma = 2. ns_B = [] # Compute min and max of data minx = np.min(x) maxx = np.max(x) meanx = np.mean(x) meanxSquared = np.mean(x**2) # Prepare Taylor proxys thetaStar = thetaMAP muStar = thetaStar[0] sigmaStar = np.exp(thetaStar[1]) meanGradStar, meanHessStar = dropProxy(thetaStar, meanx, minx, maxx, meanxSquared) for i in range(T): npr.shuffle(x) accepted = 0 done = 0 thetaNew = theta thetaP = theta + stepsize*npr.randn(2) u = npr.rand() n = 2 t0 = 0 cpt = 0 Lambda = 0 ssq = 0 # Prepare Taylor bounds xMinusMuMax = np.max(np.abs([1, minx-theta[0], maxx-theta[0], minx-thetaStar[0], maxx-thetaStar[0], minx-thetaP[0], maxx-thetaP[0]])) sigmaMin = np.min(np.exp([theta[1], thetaStar[1], thetaP[1]])) R = float(np.max(np.abs(evalThirdDerivatives(xMinusMuMax, 0, np.log(sigmaMin))))) h = np.array([theta[0]-thetaStar[0], np.exp(theta[1])-np.exp(thetaStar[1])]) hP = np.array([thetaP[0]-thetaStar[0], np.exp(thetaP[1])-np.exp(thetaStar[1])]) R *= 2*1./6 * max(np.sum(np.abs(h)), np.sum(np.abs(hP)))**3 a = -R b = R avgTotalProxy = np.dot(meanGradStar, hP-h) + .5*np.dot( hP-h, np.dot(meanHessStar, h+hP) ) while not done and n<N: n = min(N,np.floor(gamma*n)) if not np.mod(i,20): # Loop over whole dataset and recompute proxys when finished n = N cpt+=1 deltaP = delta/2/cpt**2 batch = x[t0:n] lhds = getLogLhd(batch, thetaP[0], np.exp(thetaP[1])) - getLogLhd(batch, theta[0], np.exp(theta[1])) proxys = np.dot(myGradientVect(batch, muStar, sigmaStar), hP-h) + 0.5*np.dot(np.dot(hP-h, myHessianVect(batch,muStar,sigmaStar)).T, h+hP) if np.any(np.abs(lhds-proxys)>R): print "Taylor remainder is underestimated" tmp, Lambda, ssq = combineMeansAndSSQs(t0, Lambda, ssq, n-t0, np.mean(lhds-proxys), (n-t0)*np.var(lhds-proxys)) sigma = np.sqrt(1./n*ssq) psi = np.log(u)/N t0 = n #print "n, abs(L-psi), bound, sigma/sqrt(n), R/n", n, np.abs(Lambda-psi), ctBernstein(N,n,a,b,sigma,deltaP), sigma/np.sqrt(n), R/n if np.abs(Lambda-psi + avgTotalProxy) > ctBernstein(N,n,a,b,sigma,deltaP): done = 1 if i>1 and ns_B[-1] == 2*N: ns_B.append(n) # Half of the likelihoods were computed at the previous stage else: ns_B.append(2*n) if not np.mod(i,20): # Recompute proxys every 20 iterations thetaStar = theta muStar = thetaStar[0] sigmaStar = np.exp(thetaStar[1]) meanGradStar, meanHessStar = dropProxy(thetaStar, meanx, minx, maxx, meanxSquared) if Lambda+avgTotalProxy>psi: # Accept theta = thetaP accepted = 1 S_B[i] = thetaP else: # Reject S_B[i] = theta if i<T/10: # Perform some adaptation of the stepsize in the early iterations stepsize *= np.exp(1./(i+1)**0.6*(accepted-0.5)) acceptance*=i acceptance+=accepted acceptance/=(i+1) if np.mod(i,T/10)==0: # Monitor acceptance and average number of samples used print "Iteration", i, "Acceptance", 
acceptance, "Avg. num samples", np.mean(ns_B), "Dist. to MAP", np.sum( np.abs(theta-thetaMAP) ), "sigma/sqrt(n)", sigma/np.sqrt(n), "R/n", R/n return S_B, ns_B ``` ```python S_BPD, ns_BPD = confidenceMCMCWithProxyDroppedAlong(10000) ``` ```python plotResults(S_BPD, ns_BPD) ``` ## Firefly MH with 2nd order Taylor lower bound ```python # Firefly MH with same Taylor bound as the confidence sampler with proxy def fireflyMHWithTaylorBound(T): # Initialize theta = np.array([realMean, np.log(realStd)]) z = np.zeros((N,)) z[npr.randint(0, N, N/10)] = 1 # Start with 10% bright points stepsize = 1./np.sqrt(N) S = np.zeros((T,2)) acceptance = 0.0 ns = [] nsBright = [] resampleFraction = .1 # Try resampling this fraction of the z's at each iteration numResampledZs = int(np.ceil(N*resampleFraction)) # Compute min and max of data minx = np.min(x) maxx = np.max(x) meanx = np.mean(x) meanxSquared = np.mean(x**2) print "min and max computed" # Prepare total sum of Taylor proxys muMAP = thetaMAP[0] sigmaMAP = np.exp(thetaMAP[1]) logLhdMAP = getLogLhd(x, muMAP, sigmaMAP) meanGradMAP = np.array( [(meanx - muMAP)/sigmaMAP**2, (meanxSquared-2*muMAP*meanx+muMAP**2)/sigmaMAP**3 - 1./sigmaMAP] ) meanHessMAP = np.array( [[-1./sigmaMAP**2, -2*(meanx-muMAP)/sigmaMAP**3], [-2*(meanx-muMAP)/sigmaMAP**3, -3*(meanxSquared-2*muMAP*meanx+muMAP**2)/sigmaMAP**4 + 1/sigmaMAP**2]] ) print "Taylor preprocessing done" for i in range(T): accepted = 0 done = 0 #----------------------------------------------------- # Prepare proposal on theta and MH's uniform draw #----------------------------------------------------- thetaNew = theta thetaP = theta + stepsize*npr.randn(2) u = npr.rand() psi = 1./N*np.log(u) #----------------------------------------------------- # Prepare Taylor bounds #----------------------------------------------------- xMinusMuMax = np.max(np.abs([1, minx-theta[0], maxx-theta[0], minx-thetaMAP[0], maxx-thetaMAP[0], minx-thetaP[0], maxx-thetaP[0]])) sigmaMin = np.min(np.exp([theta[1], thetaMAP[1], thetaP[1]])) R = float(np.max(np.abs(evalThirdDerivatives(xMinusMuMax, 0, np.log(sigmaMin))))) RP = R # We could tighten the bounds by considering theta and thetaP separately. 
Given the results, this is unnecessary h = np.array([theta[0]-thetaMAP[0], np.exp(theta[1])-np.exp(thetaMAP[1])]) hP = np.array([thetaP[0]-thetaMAP[0], np.exp(thetaP[1])-np.exp(thetaMAP[1])]) R *= 1./6 * np.sum(np.abs(h))**3 # no multiplication by 2 since only one point RP *= 1./6 * np.sum(np.abs(hP))**3 # no multiplication by 2 since only one point avgLogBound = np.mean(logLhdMAP) + np.dot(meanGradMAP, h) + .5*np.dot( h, np.dot(meanHessMAP, h) ) - R avgLogBoundP = np.mean(logLhdMAP) + np.dot(meanGradMAP, hP) + .5*np.dot( hP, np.dot(meanHessMAP, hP) ) - RP #----------------------------------------------------- # Resample z's #----------------------------------------------------- resampledInds = npr.randint(0, N, size=numResampledZs) L = np.exp(getLogLhd(x[resampledInds], theta[0], np.exp(theta[1]))) logB = logLhdMAP[resampledInds] + np.dot(myGradientVect(x[resampledInds], muMAP, sigmaMAP), h) + 0.5*np.dot(np.dot(h, myHessianVect(x[resampledInds], muMAP, sigmaMAP)).T, h) - R B = np.exp(logB) z[resampledInds] = npr.binomial(1, 1-B/L) #----------------------------------------------------- # Compute posterior for acceptance of theta move #----------------------------------------------------- indsBright = z==1 numBright = np.sum(indsBright) logB = logLhdMAP[indsBright] + np.dot(myGradientVect(x[indsBright], muMAP, sigmaMAP), h) + 0.5*np.dot(np.dot(h, myHessianVect(x[indsBright], muMAP, sigmaMAP)).T, h) - R B = np.exp(logB) L = np.exp(getLogLhd(x[indsBright], theta[0], np.exp(theta[1]))) logBP = logLhdMAP[indsBright] + np.dot(myGradientVect(x[indsBright], muMAP, sigmaMAP), hP) + 0.5*np.dot(np.dot(hP, myHessianVect(x[indsBright], muMAP, sigmaMAP)).T, hP) - RP BP = np.exp(logBP) LP = np.exp(getLogLhd(x[indsBright], thetaP[0], np.exp(thetaP[1]))) #----------------------------------------------------- # Accept/reject step #----------------------------------------------------- Lambda = (avgLogBoundP-avgLogBound) # sum of log of bounds Lambda += 1./N*( np.sum(np.log(LP/BP - 1)) - np.sum(np.log(L/B - 1)) ) # if the lower bound is wrong, this will raise a flag if Lambda > psi: # Accept theta = thetaP accepted = 1 S[i] = thetaP else: # Reject S[i] = theta #----------------------------------------------------- # Save number of evaluations of the likelihood during this iteration #----------------------------------------------------- ns.append(numBright + numResampledZs) # This is an upper estimate of the number of cost units nsBright.append(numBright) #----------------------------------------------------- # Update stepsize, acceptance and print status #----------------------------------------------------- if i<T/10: # Perform some adaptation of the stepsize in the early iterations stepsize *= np.exp(1./(i+1)**0.6*(accepted-0.5)) acceptance*=i acceptance+=accepted acceptance/=(i+1) #print "t4=", time.time() - tic if np.mod(i,T/10)==0: # Monitor acceptance and average number of samples used print "Iteration", i, "Acceptance", int(100*acceptance), "% Avg. num lhds", np.mean(ns), "Avg num bright samples", np.mean(nsBright), "Dist. 
to MAP", np.sum( np.abs(theta-thetaMAP) ) return S, ns, nsBright ``` ```python S_F, ns_F, nsBright_F = fireflyMHWithTaylorBound(10000) ``` ```python plotResults(S_F[1000:,], ns_F[1000:], algoName="fireflyMH", boolSave=0, figId="resampling10p") plt.show() plt.plot(nsBright_F) ``` ## Stochastic gradient Langevin dynamics ```python def sgld(T): """ perform SGLD using constant or decreasing stepsizes """ theta = np.array([realMean, np.log(realStd)]) S = np.zeros((T,2)) acceptance = 0.0 ns = [] M = N/10. # Size of the subsample weights = np.zeros((T,)) for i in range(T): stepsize = .1/N/((i+1)**.33) weights[i] = stepsize inds = npr.randint(0,N,size=M) gradEstimate = N/M*np.sum(myGradientVect(x[inds], theta[0], np.exp(theta[1])), 0) theta[0] = theta[0] + stepsize*gradEstimate[0] + np.sqrt(stepsize)*npr.randn() theta[1] = np.log(np.exp(theta[1]) + stepsize*gradEstimate[1] + np.sqrt(stepsize)*npr.randn()) ns.append(M) S[i,:] = theta if np.mod(i,T/10)==0: print "Iteration", i return S, ns, weights # SGLD returns a weighted sample, unlike other methods in this notebook ``` ```python S_SGLD, ns_SGLD, weights = sgld(10000) ``` ```python plotResults(S_SGLD, ns_SGLD, algoName="sgld", weights=weights, boolSave=1, figId="batchsize10p_length10k") ``` ```python ``` ```python ``` ```python ```
52d6666ca08c614505d605d694006c87fa7e95cd
368,581
ipynb
Jupyter Notebook
.ipynb_checkpoints/examples-checkpoint.ipynb
rbardenet/rbardenet.github.io
48168e60ffa12c05c4866db3691b6b7a968841ad
[ "MIT" ]
null
null
null
.ipynb_checkpoints/examples-checkpoint.ipynb
rbardenet/rbardenet.github.io
48168e60ffa12c05c4866db3691b6b7a968841ad
[ "MIT" ]
1
2020-04-29T22:46:33.000Z
2020-04-29T22:46:33.000Z
.ipynb_checkpoints/examples-checkpoint.ipynb
rbardenet/rbardenet.github.io
48168e60ffa12c05c4866db3691b6b7a968841ad
[ "MIT" ]
null
null
null
227.941249
230,972
0.881432
true
11,898
Qwen/Qwen-72B
1. YES 2. YES
0.675765
0.782662
0.528896
__label__eng_Latn
0.545017
0.067131
## Simple qubit rotation (old version of TFQ, with manual GD)

In this Jupyter notebook we define a variational quantum circuit $V(\theta)$ that rotates an initial state $|0000\rangle$ into the target state with equal superposition $\sum_{\sigma_i} | \sigma_i \rangle$. The aim is that $|\langle \psi_\text{target} | V(\theta) | 0000\rangle| = 1$, where $|\psi_\text{target}\rangle$ denotes this equal superposition.

```python
import tensorflow as tf
import tensorflow_quantum as tfq

import cirq
import sympy
import numpy as np

# visualization tools
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
```

### Generate symbols

I did not figure out how to compute the gradients in TFQ without using symbols; this seems to be mandatory for TFQ. I don't really see the advantage so far. Especially the evaluation with the resolver function seems a bit odd and unnecessary.

The definition of the circuit is almost the same as in Pennylane. There is no option to define a projection operator to calculate the overlap with a target state, because projectors are not unitary. This gives a bit less room to play with TFQ. I assume the idea is that such operations are not really feasible on a real quantum device.

Instead of defining a Hermitian matrix that gives the overlap with the target state, we can simply measure the operator $M = \tfrac{1}{4}(X_1 + X_2 + X_3 + X_4)$ and minimize the loss $1-\langle M \rangle$.

```python
def generate_circuit(nr_of_qubits, layers):
    qubits = cirq.GridQubit.rect(1, nr_of_qubits) # Define qubit grid. In this case
    nr_parameters = 3*nr_of_qubits*layers # 3 params for each qubit and layer
    symb = sympy.symbols('theta0:'+str(nr_parameters))
    symbols = np.array(symb)
    symbols = symbols.reshape(layers, nr_of_qubits, 3)
    circuit = cirq.Circuit()
    for l in range(layers):
        # Add a series of single qubit rotations.
        for i, qubit in enumerate(qubits):
            circuit += cirq.rz(symbols[l][i][0])(qubit)
            circuit += cirq.rx(symbols[l][i][1])(qubit)
            circuit += cirq.rz(symbols[l][i][2])(qubit)
        circuit += cirq.CZ(qubits[0], qubits[1])
        circuit += cirq.CZ(qubits[2], qubits[3])
        circuit += cirq.CZ(qubits[1], qubits[2])
    op = 1/4*(cirq.X(qubits[0]) + cirq.X(qubits[1]) + cirq.X(qubits[2]) + cirq.X(qubits[3]))
    return circuit, op, list(symb)

nr_of_qubits = 4
layers = 2
tf_circuit, op, (symbols) = generate_circuit(nr_of_qubits, layers)
SVGCircuit(tf_circuit)
```

### Training

This part took me very long to figure out, because the TFQ documentation is mostly focused on training with data. I tried to use the keras.model.fit() functions, but I did not manage to make them work without input data. There is probably some way to do it, but after a few hours I gave up and do the gradient update manually.

The key in the following part is the function `tfq.layers.Expectation()`. We pass it our circuit, which has to be converted to a TF tensor, and the operator whose expectation value we want to optimize (in our case this is $1-M$). We also need to feed the list of symbols and their current values. TF then computes the gradient of this expectation with respect to the parameters in `symbols`.
The gradient descent update rule is $\theta_i^{t+1} = \theta_i^t - \eta\, \partial_{\theta_i} f(\theta)$.

```python
circuit_tensor = tfq.convert_to_tensor([tf_circuit])
expectation = tfq.layers.Expectation()
values_tensor = tf.convert_to_tensor(np.random.uniform(0, 2 * np.pi, (1, layers* nr_of_qubits*3 )).astype(np.float32))
eta = 0.1

for i in range(200):
    with tf.GradientTape() as g:
        g.watch(values_tensor)
        forward = expectation(circuit_tensor,
                              operators=1-op,
                              symbol_names=symbols,
                              symbol_values=values_tensor)
    if i%10==0:
        print(forward.numpy()[0][0])
    # Gradient of the expectation with respect to the circuit parameters.
    grads = g.gradient(forward, values_tensor)
    values_tensor -= eta*grads
    del grads
```

1.1165314
0.8423661
0.5715227

Like in the Hello World example, we can extract the wave function. We see that we get the equal superposition state up to a global phase.

```python
simulator = cirq.Simulator()
dictionary = {}
for i in range(len(symbols)):
    symb = symbols[i]
    dictionary[symb] = values_tensor.numpy()[0][i]
resolver = cirq.ParamResolver(dictionary)
resolved_circuit = cirq.resolve_parameters(tf_circuit, resolver)
output_state_vector = simulator.simulate(tf_circuit, resolver).final_state
output_state_vector
```

```python

```
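As a quick sanity check on the claim that we reach the equal superposition up to a global phase, one can compute the fidelity of the simulated state with the target state. This snippet is not part of the original notebook; `output_state_vector` is assumed to come from the simulation cell above.

```python
import numpy as np

def fidelity_with_plus_state(state, nr_of_qubits=4):
    """Return |<+...+|state>|^2, which is insensitive to a global phase."""
    dim = 2**nr_of_qubits
    target = np.ones(dim, dtype=complex) / np.sqrt(dim)   # equal superposition state
    return np.abs(np.vdot(target, np.asarray(state)))**2

# Hypothetical usage with the simulation result above:
# print(fidelity_with_plus_state(output_state_vector))
```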
dd75b978161b0a1d3bc4f8f7277843e923bd5c84
17,354
ipynb
Jupyter Notebook
Simple_qubit_rotation_TFQ_with_manual_GD.ipynb
PatrickHuembeli/Pennaylane_and_TFQ
bbee7f2ddaa0f5d4c7eb1768164663bc7c985327
[ "Apache-2.0" ]
5
2020-03-18T05:31:51.000Z
2020-09-03T22:43:36.000Z
Simple_qubit_rotation_TFQ_with_manual_GD.ipynb
PatrickHuembeli/Pennaylane_and_TFQ
bbee7f2ddaa0f5d4c7eb1768164663bc7c985327
[ "Apache-2.0" ]
null
null
null
Simple_qubit_rotation_TFQ_with_manual_GD.ipynb
PatrickHuembeli/Pennaylane_and_TFQ
bbee7f2ddaa0f5d4c7eb1768164663bc7c985327
[ "Apache-2.0" ]
2
2020-03-17T09:10:21.000Z
2020-07-30T16:08:47.000Z
65.486792
8,993
0.599285
true
1,200
Qwen/Qwen-72B
1. YES 2. YES
0.845942
0.798187
0.67522
__label__eng_Latn
0.975151
0.407094
```python from typing import List, Dict from sympy import Point, Point2D, Segment import matplotlib.pyplot as plt from collections import defaultdict from itertools import combinations import numpy as np # Load data txt_lines = open("input.txt").read().splitlines() ``` # Visualization Horizontal and vertical lines: ```python def str_to_point(p_str) -> Point: parts = p_str.split(",") return Point(int(parts[0]), int(parts[1])) lines = [Segment(str_to_point(l.split(" -> ")[0]), str_to_point(l.split(" -> ")[1])) for l in txt_lines] for l in [l for l in lines if l.p1.x == l.p2.x or l.p1.y == l.p2.y]: plt.plot([p.x for p in l.points], [p.y for p in l.points]) plt.xticks(range(0,1000,100)) plt.yticks(range(0,1000,100)) plt.grid() ``` ```python for l in [l for l in lines]: plt.plot([p.x for p in l.points], [p.y for p in l.points]) plt.xticks(range(0,1000,100)) plt.yticks(range(0,1000,100)) plt.grid() ``` All lines: ## Part 1 (slow) > Consider only horizontal and vertical lines. At how many points do at least two lines overlap? **Attention: Incredible slow!** ```python intersections: Dict[Point2D, int] = defaultdict(int) filtered_lines = [l for l in lines if l.p1.x == l.p2.x or l.p1.y == l.p2.y] combs = list(combinations(filtered_lines, r=2)) i = 0 for l_pair in combs: i += 1 if i % int(len(combs) / 20) == 0: print (f"{(i/(len(combs) / 100))} %") p = next(iter(l_pair[0].intersection(l_pair[1])), None) if p: if issubclass(type(p), Point): intersections[p] += 1 elif issubclass(type(p), Segment): if p.p1.x == p.p2.x: for y in range(min(p.p1.y, p.p2.y), max(p.p1.y, p.p2.y)+1, 1): intersections[Point(p.p1.x, y)] += 1 else: for x in range(min(p.p1.x, p.p2.x), max(p.p1.x, p.p2.x)+1, 1): intersections[Point(x, p.p1.y)] += 1 result = len(intersections) print(result) ``` 4.999454009391038 % 9.998908018782076 % 14.998362028173114 % 19.997816037564153 % 24.99727004695519 % 29.996724056346228 % 34.99617806573727 % 39.995632075128306 % 44.99508608451934 % 49.99454009391038 % 54.99399410330142 % 59.993448112692455 % 64.99290212208349 % 69.99235613147454 % 74.99181014086557 % 79.99126415025661 % 84.99071815964764 % 89.99017216903869 % 94.98962617842973 % 99.98908018782076 % 4421 ## Part 1 & 2 (fast) > Consider only horizontal and vertical lines. At how many points do at least two lines overlap? ```python lines = [[int(z) for z in x.replace(" -> ", ",").split(",")] for x in txt_lines] mat_1 = np.zeros((1000,1000)) mat_2 = np.zeros((1000,1000)) for x1, y1, x2, y2 in lines: if x1 == x2: mat_1[min(y1,y2):max(y1,y2)+1,x1] += 1 elif y1 == y2: mat_1[y1,min(x1,x2):max(x1,x2)+1] += 1 else: l = [x1,y1,x2,y2] if x1 < x2 else [x2,y2,x1,y1] modi = 1 if l[1] < l[3] else -1 for x in range(l[0],l[2]+1): y = modi * (x-l[0]) + l[1] mat_2[y,x] += 1 result_1 = np.count_nonzero(mat_1 > 1) result_2 = np.count_nonzero(mat_1+mat_2 > 1) print(f"Part 1: {result_1}, Part 2: {result_2}") ``` Part 1: 4421, Part 2: 18674
f3213c7aa014ae9d8f1c0341b017399b7addac04
250,465
ipynb
Jupyter Notebook
day_05/task.ipynb
codebude/aoc-2021
8d0867ee5d710bfb7b8f4cccf5de59f8889bbade
[ "MIT" ]
null
null
null
day_05/task.ipynb
codebude/aoc-2021
8d0867ee5d710bfb7b8f4cccf5de59f8889bbade
[ "MIT" ]
null
null
null
day_05/task.ipynb
codebude/aoc-2021
8d0867ee5d710bfb7b8f4cccf5de59f8889bbade
[ "MIT" ]
null
null
null
1,098.530702
187,126
0.955551
true
1,133
Qwen/Qwen-72B
1. YES 2. YES
0.879147
0.727975
0.639997
__label__eng_Latn
0.44332
0.325259
# Iterative Solvers 4 - Preconditioning

## The basic idea

For both the GMRES method and CG we have seen that the eigenvalue distribution is crucial for fast convergence. In both cases we would like the eigenvalues of the matrix to be clustered close together and well separated from zero. Unfortunately, in many applications the matrices that arise naturally are badly behaved in this respect. Preconditioning is a strategy to modify a linear system so that it becomes more amenable to iterative solvers. Preconditioning is one of the most active research areas and is crucial for the solution of linear systems of equations with millions or even billions of unknowns.

Consider the linear system of equations

$$
Ax = b
$$

In its basic form the idea of preconditioning is to multiply the system with a matrix $P^{-1}$ that is some kind of approximation to $A^{-1}$, that is $P^{-1}\approx A^{-1}$ in some sense (making this precise is the difficult bit). We obtain either the left-preconditioned system

$$
P^{-1}Ax = P^{-1}b
$$

or the right-preconditioned system

$$
AP^{-1}y = b,
$$

where in the latter case we then additionally need to solve $Px=y$.

Classes of preconditioners include

* SPAI (Sparse Approximate Inverses)
* Incomplete LU Decomposition
* Incomplete Cholesky Decomposition
* Splitting Preconditioners
* Algebraic Multigrid Methods

These are also known as **algebraic preconditioners**. They consider the matrix $A$ and try to find an approximation to $A$ that is easily invertible. A different class of preconditioners are **analytic preconditioners**. These are preconditioners that are often constructed from problems that are easier to solve than the original PDE but still approximate in some sense the underlying physics of the problem.

## Sparse Approximate Inverse

As an example of the class of algebraic preconditioners we consider here the Sparse Approximate Inverse (SPAI). Incomplete LU decompositions will be discussed later on. We note that SPAI is a technique that works well in certain cases, but is not suitable in others. **There is no general preconditioning technique that always works well.**

We denote by $\|A\|_F$ the Frobenius norm of a matrix $A$, defined by

$$
\|A\|_F^2 := \sum_{i, j}|a_{ij}|^2.
$$

The idea of SPAI is now to try to find a matrix $M := P^{-1}$ such that

$$
F(M) := \|I - AM\|_F
$$

is small. The minimum of this function is obviously reached for $M = A^{-1}$, but this is usually not practical. Instead, we try to find a sequence of matrices $M_k$ that approach the minimum of the function $F$.

There are many ways to define an approximate minimization procedure to minimize $F$. The following is a global minimum residual algorithm, described by Saad in [Iterative Methods for Sparse Linear Systems](https://www-users.cs.umn.edu/~saad/IterMethBook_2ndEd.pdf).

$$
\begin{align}
C_k &= A M_k\nonumber\\
G_k &= I - C_k\nonumber\\
\alpha_k &=\text{tr}(G_k^TAG_k) / \|AG_k\|_F^2\nonumber\\
M_{k+1} &= M_k + \alpha_k G_k\nonumber\\
\end{align}
$$

In each step of the algorithm the matrix $M_k$ becomes slightly denser. Hence, in practice this is often combined with a numerical drop strategy for entries of $M$.

The following code implements this method. As starting matrix $M_0$ we choose $M_0 = \frac{2}{\|AA^T\|_1}A$, which was recommended by Chow and Saad in [Approximate Inverse Preconditioners via Sparse-Sparse Iterations](https://dl.acm.org/doi/10.1137/S1064827594270415). Note that in the following code we did not implement dropping of values.
For practical purposes this is essential, and strategies are discussed in the paper by Chow and Saad.

```python
def spai(A, m):
    """Perform m steps of the SPAI iteration."""
    from scipy.sparse import identity
    from scipy.sparse import diags
    from scipy.sparse.linalg import onenormest

    n = A.shape[0]

    ident = identity(n, format='csr')
    alpha = 2 / onenormest(A @ A.T)
    M = alpha * A

    for index in range(m):
        C = A @ M
        G = ident - C
        AG = A @ G
        trace = (G.T @ AG).diagonal().sum()
        alpha = trace / np.linalg.norm(AG.data)**2
        M = M + alpha * G

    return M
```

We run the code with the following matrix, which is a slightly shifted version of the discrete 3-point second order differential operator.

```python
import numpy as np
from scipy.sparse import diags

n = 1000

data = [2.001 * np.ones(n), -1. * np.ones(n - 1), -1. * np.ones(n - 1)]
offsets = [0, 1, -1]

A = diags(data, offsets=offsets, shape=(n, n), format='csr')
```

The condition number without the preconditioner is

```python
np.linalg.cond(A.todense())
```

3961.9652414689454

Let us now generate the preconditioner.

```python
M = spai(A, 50)
```

```python
%matplotlib inline
from matplotlib import pyplot as plt

fig = plt.figure(figsize=(8 ,8))
ax = fig.add_subplot(111)
ax.spy(M, markersize=1)
```

Let us check the condition number of the right-preconditioned system.

```python
np.linalg.cond(A.todense() @ M.todense())
```

40.18659718436073

It has been reduced by a factor of roughly 100. This is a good sign. We now run CG for the preconditioned and the non-preconditioned system.

```python
%matplotlib inline
from matplotlib import pyplot as plt
from scipy.sparse.linalg import cg

n = A.shape[0]
b = np.ones(n)

residuals = []
callback = lambda x: residuals.append(np.linalg.norm(A @ x - b) / np.linalg.norm(b))
x, _ = cg(A, b, callback=callback)

residuals_preconditioned = []
callback = lambda x: residuals_preconditioned.append(np.linalg.norm(A @ x - b) / np.linalg.norm(b))
x, _ = cg(A, b, M=M, callback=callback)

fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111)
ax.semilogy(residuals, 'k--')
ax.semilogy(residuals_preconditioned, 'b--')
ax.set_ylabel('relative residual')
fig.legend(['no preconditioning', 'SPAI'], loc='lower center', fancybox=True, shadow=True)
```

We can see a significant improvement in the number of iterations. However, the comparison is not quite fair. Setting up the preconditioner took time, which should be taken into account. Just considering the number of iterations is not sufficient; in practice, the overall computation time is the more important measure. We have implemented here a very primitive variant of SPAI that should not be used in practice. For practical alternatives see the cited literature. For the type of matrix that $A$ is, there are also much better preconditioners available, some of which we will encounter later.

## A note on preconditioned Conjugate Gradient

We have passed the preconditioner as a matrix into the CG algorithm. This is only possible if the preconditioner is itself symmetric positive definite, which is the case in our example. We are not going to discuss the details of preconditioned conjugate gradients here and instead refer to the book by Saad.
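For completeness, here is a minimal sketch of the preconditioned CG recursion itself, to illustrate where the approximate inverse $M \approx A^{-1}$ enters and why it has to be symmetric positive definite (so that $r^Tz$ defines an inner product). This is an illustration only, not how SciPy implements `cg` internally; `A`, `b` and `M` are assumed to be the objects constructed above.

```python
import numpy as np

def pcg(A, b, M, maxiter=500, tol=1e-8):
    """Preconditioned CG for SPD A, with M an approximation to the inverse of A."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M @ r                  # apply the preconditioner to the residual
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = M @ r
        rz_new = r @ z
        beta = rz_new / rz
        p = z + beta * p
        rz = rz_new
    return x

# Hypothetical usage with A, b and M from the cells above:
# x_pcg = pcg(A, b, M)
```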
1c4c69858aadf50d15b5f81d4fb2ced09ee2d9ed
45,692
ipynb
Jupyter Notebook
hpc_lecture_notes/it_solvers4.ipynb
tbetcke/hpc_lecture_notes
f061401a54ef467c8f8d0fb90294d63d83e3a9e1
[ "BSD-3-Clause" ]
3
2020-10-02T11:11:58.000Z
2022-03-14T10:40:51.000Z
hpc_lecture_notes/it_solvers4.ipynb
tbetcke/hpc_lecture_notes
f061401a54ef467c8f8d0fb90294d63d83e3a9e1
[ "BSD-3-Clause" ]
null
null
null
hpc_lecture_notes/it_solvers4.ipynb
tbetcke/hpc_lecture_notes
f061401a54ef467c8f8d0fb90294d63d83e3a9e1
[ "BSD-3-Clause" ]
3
2020-11-18T15:21:30.000Z
2022-01-26T12:38:25.000Z
122.827957
24,612
0.871203
true
1,762
Qwen/Qwen-72B
1. YES 2. YES
0.872347
0.812867
0.709103
__label__eng_Latn
0.997646
0.485815
``` %load_ext autoreload ``` ``` autoreload 2 ``` ``` %matplotlib inline ``` ``` import matplotlib.pyplot as plt import numpy as np import sympy as sym import inputs import models import solvers ``` # Example: ## Worker skill and firm productivity are $\sim U[a, b]$... ``` # define some workers skill x, a, b = sym.var('x, a, b') skill_cdf = (x - a) / (b - a) skill_params = {'a': 1.0, 'b': 2.0} skill_bounds = [skill_params['a'], skill_params['b']] workers = inputs.Input(var=x, cdf=skill_cdf, params=skill_params, bounds=skill_bounds, ) # define some firms y = sym.var('y') productivity_cdf = (y - a) / (b - a) productivity_params = skill_params productivity_bounds = skill_bounds firms = inputs.Input(var=y, cdf=productivity_cdf, params=productivity_params, bounds=productivity_bounds, ) ``` ## ...and production function is multiplicatively separable. A particularly attractive case arises under multiplicative separability of the form $$F(x, y, l, r) = A(x, y)B(l, r)$$ In this case the condition for positive assortative matching can be written as $$\frac{AA_{xy}}{A_xA_y}\frac{BB_{lr}}{B_lB_r} \ge 1$$ If $B$ has constant elasticity of substitution form, then we obtain an even simpler condition $$\frac{AA_{xy}}{A_xA_y} \frac{1}{\sigma_{lr}}\ge 1$$ where $\sigma_{lr}$ is the elasticity of substitution between $l$ and $r$. Finally, if $A$ also has constant elasticity of substitution form, then we obtain an even simpler condition $$\frac{1}{\sigma_{xy}\sigma_{lr}}\ge 1 \iff \sigma_{xy}\sigma_{lr} \le 1$$ where $\sigma_{xy}$ is the elasticity of substitution between $x$ and $y$. ``` # define symbolic expression for CES between x and y omega_A, sigma_A = sym.var('omega_A, sigma_A') A = ((omega_A * x**((sigma_A - 1) / sigma_A) + (1 - omega_A) * y**((sigma_A - 1) / sigma_A))**(sigma_A / (sigma_A - 1))) # define symbolic expression for Cobb-Douglas between l and r l, r, omega_B, sigma_B = sym.var('l, r, omega_B, sigma_B') B = l**omega_B * r**(1 - omega_B) F = A * B ``` Create an instance of the `models.Model` class and an instance of the `solvers.ShootingSolver` class... ### Positive assortative matching ``` F_params = {'omega_A':0.5, 'omega_B':0.45, 'sigma_A':0.95, 'sigma_B':1.0} model = models.Model(assortativity='positive', workers=workers, firms=firms, production=F, params=F_params) solver = solvers.ShootingSolver(model=model) ``` ``` solver.solve(1e1, tol=1e-6, number_knots=100, atol=1e-15, rtol=1e-12) ``` Success! 
All workers and firms are matched ``` # examine the solution attribute solver.solution ``` <div style="max-height:1000px;max-width:1500px;overflow:auto;"> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>firm productivity</th> <th>firm size</th> <th>wage</th> <th>profit</th> </tr> <tr> <th>x</th> <th></th> <th></th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th>1.000000</th> <td> 0.999999</td> <td> 1.089348</td> <td> 0.429310</td> <td> 0.571594</td> </tr> <tr> <th>1.010101</th> <td> 1.009285</td> <td> 1.086340</td> <td> 0.434131</td> <td> 0.576417</td> </tr> <tr> <th>1.020202</th> <td> 1.018595</td> <td> 1.083403</td> <td> 0.438957</td> <td> 0.581249</td> </tr> <tr> <th>1.030303</th> <td> 1.027931</td> <td> 1.080534</td> <td> 0.443788</td> <td> 0.586090</td> </tr> <tr> <th>1.040404</th> <td> 1.037292</td> <td> 1.077730</td> <td> 0.448625</td> <td> 0.590940</td> </tr> <tr> <th>1.050505</th> <td> 1.046676</td> <td> 1.074990</td> <td> 0.453467</td> <td> 0.595799</td> </tr> <tr> <th>1.060606</th> <td> 1.056084</td> <td> 1.072310</td> <td> 0.458313</td> <td> 0.600666</td> </tr> <tr> <th>1.070707</th> <td> 1.065516</td> <td> 1.069689</td> <td> 0.463165</td> <td> 0.605541</td> </tr> <tr> <th>1.080808</th> <td> 1.074970</td> <td> 1.067126</td> <td> 0.468022</td> <td> 0.610425</td> </tr> <tr> <th>1.090909</th> <td> 1.084447</td> <td> 1.064617</td> <td> 0.472884</td> <td> 0.615316</td> </tr> <tr> <th>1.101010</th> <td> 1.093946</td> <td> 1.062161</td> <td> 0.477751</td> <td> 0.620214</td> </tr> <tr> <th>1.111111</th> <td> 1.103466</td> <td> 1.059756</td> <td> 0.482622</td> <td> 0.625120</td> </tr> <tr> <th>1.121212</th> <td> 1.113009</td> <td> 1.057401</td> <td> 0.487499</td> <td> 0.630033</td> </tr> <tr> <th>1.131313</th> <td> 1.122572</td> <td> 1.055094</td> <td> 0.492380</td> <td> 0.634954</td> </tr> <tr> <th>1.141414</th> <td> 1.132156</td> <td> 1.052834</td> <td> 0.497266</td> <td> 0.639881</td> </tr> <tr> <th>1.151515</th> <td> 1.141760</td> <td> 1.050619</td> <td> 0.502157</td> <td> 0.644815</td> </tr> <tr> <th>1.161616</th> <td> 1.151384</td> <td> 1.048447</td> <td> 0.507053</td> <td> 0.649755</td> </tr> <tr> <th>1.171717</th> <td> 1.161028</td> <td> 1.046318</td> <td> 0.511953</td> <td> 0.654702</td> </tr> <tr> <th>1.181818</th> <td> 1.170692</td> <td> 1.044229</td> <td> 0.516858</td> <td> 0.659656</td> </tr> <tr> <th>1.191919</th> <td> 1.180374</td> <td> 1.042180</td> <td> 0.521767</td> <td> 0.664615</td> </tr> <tr> <th>1.202020</th> <td> 1.190076</td> <td> 1.040170</td> <td> 0.526681</td> <td> 0.669580</td> </tr> <tr> <th>1.212121</th> <td> 1.199796</td> <td> 1.038197</td> <td> 0.531600</td> <td> 0.674551</td> </tr> <tr> <th>1.222222</th> <td> 1.209535</td> <td> 1.036260</td> <td> 0.536523</td> <td> 0.679528</td> </tr> <tr> <th>1.232323</th> <td> 1.219291</td> <td> 1.034359</td> <td> 0.541451</td> <td> 0.684511</td> </tr> <tr> <th>1.242424</th> <td> 1.229066</td> <td> 1.032491</td> <td> 0.546383</td> <td> 0.689499</td> </tr> <tr> <th>1.252525</th> <td> 1.238857</td> <td> 1.030657</td> <td> 0.551320</td> <td> 0.694493</td> </tr> <tr> <th>1.262626</th> <td> 1.248667</td> <td> 1.028854</td> <td> 0.556260</td> <td> 0.699491</td> </tr> <tr> <th>1.272727</th> <td> 1.258493</td> <td> 1.027083</td> <td> 0.561206</td> <td> 0.704495</td> </tr> <tr> <th>1.282828</th> <td> 1.268336</td> <td> 1.025343</td> <td> 0.566155</td> <td> 0.709504</td> </tr> <tr> <th>1.292929</th> <td> 1.278195</td> <td> 1.023632</td> <td> 0.571109</td> <td> 0.714518</td> </tr> 
<tr> <th>...</th> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr> <tr> <th>1.707071</th> <td> 1.694467</td> <td> 0.971475</td> <td> 0.777619</td> <td> 0.923313</td> </tr> <tr> <th>1.717172</th> <td> 1.704869</td> <td> 0.970515</td> <td> 0.782733</td> <td> 0.928466</td> </tr> <tr> <th>1.727273</th> <td> 1.715282</td> <td> 0.969566</td> <td> 0.787849</td> <td> 0.933621</td> </tr> <tr> <th>1.737374</th> <td> 1.725706</td> <td> 0.968626</td> <td> 0.792969</td> <td> 0.938778</td> </tr> <tr> <th>1.747475</th> <td> 1.736139</td> <td> 0.967697</td> <td> 0.798093</td> <td> 0.943937</td> </tr> <tr> <th>1.757576</th> <td> 1.746582</td> <td> 0.966778</td> <td> 0.803219</td> <td> 0.949098</td> </tr> <tr> <th>1.767677</th> <td> 1.757035</td> <td> 0.965869</td> <td> 0.808349</td> <td> 0.954262</td> </tr> <tr> <th>1.777778</th> <td> 1.767498</td> <td> 0.964970</td> <td> 0.813482</td> <td> 0.959427</td> </tr> <tr> <th>1.787879</th> <td> 1.777970</td> <td> 0.964079</td> <td> 0.818619</td> <td> 0.964594</td> </tr> <tr> <th>1.797980</th> <td> 1.788453</td> <td> 0.963198</td> <td> 0.823759</td> <td> 0.969763</td> </tr> <tr> <th>1.808081</th> <td> 1.798944</td> <td> 0.962326</td> <td> 0.828901</td> <td> 0.974934</td> </tr> <tr> <th>1.818182</th> <td> 1.809445</td> <td> 0.961463</td> <td> 0.834048</td> <td> 0.980108</td> </tr> <tr> <th>1.828283</th> <td> 1.819956</td> <td> 0.960609</td> <td> 0.839197</td> <td> 0.985282</td> </tr> <tr> <th>1.838384</th> <td> 1.830476</td> <td> 0.959763</td> <td> 0.844350</td> <td> 0.990459</td> </tr> <tr> <th>1.848485</th> <td> 1.841005</td> <td> 0.958926</td> <td> 0.849505</td> <td> 0.995638</td> </tr> <tr> <th>1.858586</th> <td> 1.851543</td> <td> 0.958097</td> <td> 0.854664</td> <td> 1.000818</td> </tr> <tr> <th>1.868687</th> <td> 1.862090</td> <td> 0.957276</td> <td> 0.859826</td> <td> 1.006000</td> </tr> <tr> <th>1.878788</th> <td> 1.872647</td> <td> 0.956463</td> <td> 0.864992</td> <td> 1.011184</td> </tr> <tr> <th>1.888889</th> <td> 1.883212</td> <td> 0.955657</td> <td> 0.870160</td> <td> 1.016369</td> </tr> <tr> <th>1.898990</th> <td> 1.893786</td> <td> 0.954860</td> <td> 0.875331</td> <td> 1.021556</td> </tr> <tr> <th>1.909091</th> <td> 1.904369</td> <td> 0.954070</td> <td> 0.880506</td> <td> 1.026745</td> </tr> <tr> <th>1.919192</th> <td> 1.914961</td> <td> 0.953287</td> <td> 0.885683</td> <td> 1.031935</td> </tr> <tr> <th>1.929293</th> <td> 1.925561</td> <td> 0.952512</td> <td> 0.890864</td> <td> 1.037127</td> </tr> <tr> <th>1.939394</th> <td> 1.936170</td> <td> 0.951744</td> <td> 0.896048</td> <td> 1.042321</td> </tr> <tr> <th>1.949495</th> <td> 1.946787</td> <td> 0.950983</td> <td> 0.901235</td> <td> 1.047516</td> </tr> <tr> <th>1.959596</th> <td> 1.957413</td> <td> 0.950229</td> <td> 0.906424</td> <td> 1.052713</td> </tr> <tr> <th>1.969697</th> <td> 1.968047</td> <td> 0.949481</td> <td> 0.911617</td> <td> 1.057911</td> </tr> <tr> <th>1.979798</th> <td> 1.978690</td> <td> 0.948740</td> <td> 0.916813</td> <td> 1.063110</td> </tr> <tr> <th>1.989899</th> <td> 1.989341</td> <td> 0.948006</td> <td> 0.922012</td> <td> 1.068311</td> </tr> <tr> <th>2.000000</th> <td> 2.000000</td> <td> 0.947279</td> <td> 0.927213</td> <td> 1.073514</td> </tr> </tbody> </table> <p>100 rows × 4 columns</p> </div> ``` def plot_residual_mu(ax, x, deg, normed=False): residual = solver.evaluate_residual(x, k=deg) residual_mu = np.abs(residual[:,0]) residual_mu_line, = ax.plot(x, residual_mu) ax.set_xlim(solver.model.workers.lower, solver.model.workers.upper) ax.set_yscale('log') 
ax.grid('on') return [residual_mu_line] def plot_residual_theta(ax, x, deg, normed=False): residual = solver.evaluate_residual(x, k=deg) residual_theta, = ax.plot(x, np.abs(residual[:,1])) ax.set_xlim(solver.model.workers.lower, solver.model.workers.upper) ax.set_yscale('log') ax.grid('on') return [residual_theta] ``` ``` fig, ax = plt.subplots(1,1, figsize=(8,6)) plot_residual_mu(ax, np.linspace(1, 2, 1000), deg=5) plt.show() ``` ``` fig, ax = plt.subplots(1,1, figsize=(8,6)) plot_residual_theta(ax, np.linspace(1, 2, 1000), deg=5) plt.show() ``` ``` solver.evaluate_rhs(solver.solution.index.values, solver.solution[['firm productivity', 'firm size']].values.T) ``` array([ 0.91798057, 0.92052183, 0.92301727, 0.92546817, 0.92787578, 0.93024131, 0.93256592, 0.93485074, 0.93709686, 0.93930532, 0.94147715, 0.94361332, 0.94571478, 0.94778246, 0.94981723, 0.95181997, 0.95379149, 0.9557326 , 0.95764408, 0.95952669, 0.96138115, 0.96320816, 0.96500842, 0.96678258, 0.96853128, 0.97025515, 0.97195478, 0.97363076, 0.97528365, 0.97691401, 0.97852236, 0.98010922, 0.98167509, 0.98322045, 0.98474577, 0.98625151, 0.98773812, 0.98920602, 0.99065562, 0.99208735, 0.99350158, 0.9948987 , 0.99627908, 0.99764309, 0.99899107, 1.00032337, 1.00164031, 1.00294222, 1.00422941, 1.00550219, 1.00676084, 1.00800567, 1.00923695, 1.01045496, 1.01165995, 1.01285219, 1.01403193, 1.01519941, 1.01635488, 1.01749856, 1.01863069, 1.01975147, 1.02086114, 1.02195989, 1.02304793, 1.02412547, 1.02519269, 1.02624978, 1.02729693, 1.02833431, 1.02936211, 1.0303805 , 1.03138964, 1.03238969, 1.03338082, 1.03436317, 1.03533691, 1.03630218, 1.03725912, 1.03820788, 1.03914859, 1.04008138, 1.0410064 , 1.04192377, 1.0428336 , 1.04373604, 1.04463119, 1.04551918, 1.04640012, 1.04727412, 1.04814129, 1.04900173, 1.04985557, 1.05070288, 1.05154379, 1.05237838, 1.05320675, 1.054029 , 1.05484522, 1.05565549, -0.30129466, -0.2942018 , -0.28736374, -0.28076888, -0.27440625, -0.26826547, -0.26233671, -0.25661066, -0.2510785 , -0.24573187, -0.24056282, -0.23556384, -0.23072776, -0.2260478 , -0.2215175 , -0.21713074, -0.21288167, -0.20876475, -0.20477469, -0.20090646, -0.19715528, -0.19351657, -0.18998598, -0.18655937, -0.18323277, -0.18000241, -0.17686467, -0.17381612, -0.17085346, -0.16797355, -0.16517338, -0.16245008, -0.15980089, -0.15722319, -0.15471446, -0.15227228, -0.14989434, -0.14757843, -0.14532243, -0.14312431, -0.14098211, -0.13889397, -0.13685808, -0.13487273, -0.13293626, -0.13104708, -0.12920367, -0.12740456, -0.12564834, -0.12393365, -0.1222592 , -0.12062372, -0.11902602, -0.11746493, -0.11593934, -0.11444817, -0.11299038, -0.11156498, -0.110171 , -0.10880752, -0.10747364, -0.1061685 , -0.10489128, -0.10364116, -0.10241738, -0.10121919, -0.10004587, -0.09889673, -0.09777108, -0.09666829, -0.09558772, -0.09452877, -0.09349086, -0.09247341, -0.09147589, -0.09049775, -0.08953849, -0.08859762, -0.08767465, -0.08676911, -0.08588057, -0.08500858, -0.08415272, -0.08331259, -0.08248779, -0.08167794, -0.08088266, -0.0801016 , -0.0793344 , -0.07858073, -0.07784026, -0.07711267, -0.07639766, -0.07569492, -0.07500416, -0.0743251 , -0.07365747, -0.073001 , -0.07235543, -0.07172051]) ``` fig, ax = plt.subplots(1, 1, figsize=(8,6)) solver.plot_equilibrium_firm_size(ax, 'firm_productivity') plt.show() ``` ### Negative assortative matching ``` # negative assortativity requires that sigma_A * sigma_B > 1 F_params = {'omega_A':0.25, 'omega_B':0.5, 'sigma_A':1.5, 'sigma_B':1.0} ``` ``` model = models.Model(assortativity='negative', workers=workers, 
firms=firms, production=F, params=F_params) solver = solvers.ShootingSolver(model=model) ``` ``` solver.solve(2e0, tol=1e-6, number_knots=100, atol=1e-15, rtol=1e-12) ``` Exhausted firms: initial guess of 1.0 for firm size is too low. Exhausted firms: initial guess of 1.5 for firm size is too low. Exhausted firms: initial guess of 1.75 for firm size is too low. Exhausted workers: initial guess of 1.875 for firm size is too high! Exhausted firms: Initial guess of 1.8125 for firm size was too low! Exhausted workers: initial guess of 1.84375 for firm size is too high! Exhausted firms: Initial guess of 1.828125 for firm size was too low! Exhausted workers: initial guess of 1.8359375 for firm size is too high! Exhausted workers: initial guess of 1.83203125 for firm size is too high! Exhausted workers: initial guess of 1.830078125 for firm size is too high! Exhausted firms: Initial guess of 1.8291015625 for firm size was too low! Exhausted firms: Initial guess of 1.82958984375 for firm size was too low! Exhausted workers: initial guess of 1.82983398438 for firm size is too high! Exhausted workers: initial guess of 1.82971191406 for firm size is too high! Exhausted workers: initial guess of 1.82965087891 for firm size is too high! Exhausted firms: Initial guess of 1.82962036133 for firm size was too low! Exhausted firms: Initial guess of 1.82963562012 for firm size was too low! Exhausted workers: initial guess of 1.82964324951 for firm size is too high! Success! All workers and firms are matched ``` solver.solution ``` <div style="max-height:1000px;max-width:1500px;overflow:auto;"> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>firm productivity</th> <th>firm size</th> <th>wage</th> <th>profit</th> </tr> <tr> <th>x</th> <th></th> <th></th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th>1.000000</th> <td> 2.000000</td> <td> 1.829639</td> <td> 0.630705</td> <td> 1.153963</td> </tr> <tr> <th>1.010101</th> <td> 1.994455</td> <td> 1.813970</td> <td> 0.633368</td> <td> 1.148910</td> </tr> <tr> <th>1.020202</th> <td> 1.988863</td> <td> 1.798401</td> <td> 0.636024</td> <td> 1.143826</td> </tr> <tr> <th>1.030303</th> <td> 1.983222</td> <td> 1.782931</td> <td> 0.638673</td> <td> 1.138710</td> </tr> <tr> <th>1.040404</th> <td> 1.977532</td> <td> 1.767557</td> <td> 0.641317</td> <td> 1.133564</td> </tr> <tr> <th>1.050505</th> <td> 1.971792</td> <td> 1.752276</td> <td> 0.643954</td> <td> 1.128386</td> </tr> <tr> <th>1.060606</th> <td> 1.966003</td> <td> 1.737087</td> <td> 0.646586</td> <td> 1.123175</td> </tr> <tr> <th>1.070707</th> <td> 1.960162</td> <td> 1.721986</td> <td> 0.649212</td> <td> 1.117933</td> </tr> <tr> <th>1.080808</th> <td> 1.954271</td> <td> 1.706973</td> <td> 0.651832</td> <td> 1.112659</td> </tr> <tr> <th>1.090909</th> <td> 1.948327</td> <td> 1.692044</td> <td> 0.654446</td> <td> 1.107352</td> </tr> <tr> <th>1.101010</th> <td> 1.942331</td> <td> 1.677197</td> <td> 0.657056</td> <td> 1.102012</td> </tr> <tr> <th>1.111111</th> <td> 1.936282</td> <td> 1.662431</td> <td> 0.659660</td> <td> 1.096639</td> </tr> <tr> <th>1.121212</th> <td> 1.930179</td> <td> 1.647743</td> <td> 0.662259</td> <td> 1.091233</td> </tr> <tr> <th>1.131313</th> <td> 1.924021</td> <td> 1.633132</td> <td> 0.664853</td> <td> 1.085793</td> </tr> <tr> <th>1.141414</th> <td> 1.917808</td> <td> 1.618594</td> <td> 0.667443</td> <td> 1.080319</td> </tr> <tr> <th>1.151515</th> <td> 1.911540</td> <td> 1.604130</td> <td> 0.670027</td> <td> 1.074810</td> </tr> <tr> <th>1.161616</th> 
<td> 1.905214</td> <td> 1.589736</td> <td> 0.672607</td> <td> 1.069268</td> </tr> <tr> <th>1.171717</th> <td> 1.898832</td> <td> 1.575410</td> <td> 0.675183</td> <td> 1.063690</td> </tr> <tr> <th>1.181818</th> <td> 1.892391</td> <td> 1.561152</td> <td> 0.677754</td> <td> 1.058076</td> </tr> <tr> <th>1.191919</th> <td> 1.885891</td> <td> 1.546958</td> <td> 0.680321</td> <td> 1.052428</td> </tr> <tr> <th>1.202020</th> <td> 1.879331</td> <td> 1.532828</td> <td> 0.682883</td> <td> 1.046743</td> </tr> <tr> <th>1.212121</th> <td> 1.872711</td> <td> 1.518759</td> <td> 0.685442</td> <td> 1.041022</td> </tr> <tr> <th>1.222222</th> <td> 1.866029</td> <td> 1.504751</td> <td> 0.687997</td> <td> 1.035264</td> </tr> <tr> <th>1.232323</th> <td> 1.859285</td> <td> 1.490800</td> <td> 0.690548</td> <td> 1.029469</td> </tr> <tr> <th>1.242424</th> <td> 1.852478</td> <td> 1.476906</td> <td> 0.693095</td> <td> 1.023637</td> </tr> <tr> <th>1.252525</th> <td> 1.845606</td> <td> 1.463067</td> <td> 0.695639</td> <td> 1.017767</td> </tr> <tr> <th>1.262626</th> <td> 1.838670</td> <td> 1.449281</td> <td> 0.698180</td> <td> 1.011858</td> </tr> <tr> <th>1.272727</th> <td> 1.831667</td> <td> 1.435546</td> <td> 0.700716</td> <td> 1.005911</td> </tr> <tr> <th>1.282828</th> <td> 1.824597</td> <td> 1.421862</td> <td> 0.703250</td> <td> 0.999924</td> </tr> <tr> <th>1.292929</th> <td> 1.817458</td> <td> 1.408225</td> <td> 0.705781</td> <td> 0.993898</td> </tr> <tr> <th>...</th> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr> <tr> <th>1.707071</th> <td> 1.446606</td> <td> 0.871852</td> <td> 0.808071</td> <td> 0.704518</td> </tr> <tr> <th>1.717172</th> <td> 1.434933</td> <td> 0.858825</td> <td> 0.810563</td> <td> 0.696132</td> </tr> <tr> <th>1.727273</th> <td> 1.423081</td> <td> 0.845771</td> <td> 0.813056</td> <td> 0.687660</td> </tr> <tr> <th>1.737374</th> <td> 1.411045</td> <td> 0.832690</td> <td> 0.815552</td> <td> 0.679102</td> </tr> <tr> <th>1.747475</th> <td> 1.398818</td> <td> 0.819576</td> <td> 0.818049</td> <td> 0.670454</td> </tr> <tr> <th>1.757576</th> <td> 1.386393</td> <td> 0.806429</td> <td> 0.820548</td> <td> 0.661714</td> </tr> <tr> <th>1.767677</th> <td> 1.373764</td> <td> 0.793244</td> <td> 0.823050</td> <td> 0.652879</td> </tr> <tr> <th>1.777778</th> <td> 1.360923</td> <td> 0.780018</td> <td> 0.825554</td> <td> 0.643947</td> </tr> <tr> <th>1.787879</th> <td> 1.347862</td> <td> 0.766748</td> <td> 0.828061</td> <td> 0.634914</td> </tr> <tr> <th>1.797980</th> <td> 1.334573</td> <td> 0.753431</td> <td> 0.830571</td> <td> 0.625778</td> </tr> <tr> <th>1.808081</th> <td> 1.321046</td> <td> 0.740063</td> <td> 0.833083</td> <td> 0.616534</td> </tr> <tr> <th>1.818182</th> <td> 1.307271</td> <td> 0.726639</td> <td> 0.835599</td> <td> 0.607179</td> </tr> <tr> <th>1.828283</th> <td> 1.293240</td> <td> 0.713157</td> <td> 0.838119</td> <td> 0.597710</td> </tr> <tr> <th>1.838384</th> <td> 1.278940</td> <td> 0.699610</td> <td> 0.840642</td> <td> 0.588122</td> </tr> <tr> <th>1.848485</th> <td> 1.264360</td> <td> 0.685996</td> <td> 0.843170</td> <td> 0.578411</td> </tr> <tr> <th>1.858586</th> <td> 1.249486</td> <td> 0.672308</td> <td> 0.845701</td> <td> 0.568572</td> </tr> <tr> <th>1.868687</th> <td> 1.234306</td> <td> 0.658542</td> <td> 0.848237</td> <td> 0.558600</td> </tr> <tr> <th>1.878788</th> <td> 1.218804</td> <td> 0.644692</td> <td> 0.850778</td> <td> 0.548489</td> </tr> <tr> <th>1.888889</th> <td> 1.202965</td> <td> 0.630751</td> <td> 0.853324</td> <td> 0.538235</td> </tr> <tr> <th>1.898990</th> <td> 
1.186770</td> <td> 0.616714</td> <td> 0.855875</td> <td> 0.527830</td> </tr> <tr> <th>1.909091</th> <td> 1.170201</td> <td> 0.602573</td> <td> 0.858432</td> <td> 0.517268</td> </tr> <tr> <th>1.919192</th> <td> 1.153236</td> <td> 0.588320</td> <td> 0.860995</td> <td> 0.506541</td> </tr> <tr> <th>1.929293</th> <td> 1.135854</td> <td> 0.573948</td> <td> 0.863565</td> <td> 0.495641</td> </tr> <tr> <th>1.939394</th> <td> 1.118029</td> <td> 0.559447</td> <td> 0.866141</td> <td> 0.484560</td> </tr> <tr> <th>1.949495</th> <td> 1.099734</td> <td> 0.544807</td> <td> 0.868725</td> <td> 0.473288</td> </tr> <tr> <th>1.959596</th> <td> 1.080937</td> <td> 0.530019</td> <td> 0.871317</td> <td> 0.461814</td> </tr> <tr> <th>1.969697</th> <td> 1.061606</td> <td> 0.515068</td> <td> 0.873917</td> <td> 0.450127</td> </tr> <tr> <th>1.979798</th> <td> 1.041702</td> <td> 0.499944</td> <td> 0.876526</td> <td> 0.438214</td> </tr> <tr> <th>1.989899</th> <td> 1.021183</td> <td> 0.484630</td> <td> 0.879145</td> <td> 0.426060</td> </tr> <tr> <th>2.000000</th> <td> 1.000000</td> <td> 0.469110</td> <td> 0.881774</td> <td> 0.413649</td> </tr> </tbody> </table> <p>100 rows × 4 columns</p> </div> Now we can do all the cool things that one can do with Pandas... ``` solver.solution['firm size'].plot() ``` ``` ``` # Example: Multiplicative separability with log-normal worker skill and firm productivity distributions... ``` # define some workers skill mu1, sigma1 = sym.var('mu1, sigma1') skill_cdf = 0.5 + 0.5 * sym.erf((sym.log(x) - mu1) / sym.sqrt(2 * sigma1**2)) skill_params = {'mu1': 0.0, 'sigma1': 1.0} skill_bounds = [1e-3, 5e1] workers = inputs.Input(var=x, cdf=skill_cdf, params=skill_params, bounds=skill_bounds, ) # define some firms mu2, sigma2 = sym.var('mu2, sigma2') productivity_cdf = 0.5 + 0.5 * sym.erf((sym.log(y) - mu2) / sym.sqrt(2 * sigma2**2)) productivity_params = {'mu2': 0.0, 'sigma2': 1.0} productivity_bounds = [1e-3, 5e1] firms = inputs.Input(var=y, cdf=productivity_cdf, params=productivity_params, bounds=productivity_bounds, ) ``` ### Positive assortative matching ``` # define symbolic expression for CES between x and y omega_A, sigma_A = sym.var('omega_A, sigma_A') A = ((omega_A * x**((sigma_A - 1) / sigma_A) + (1 - omega_A) * y**((sigma_A - 1) / sigma_A))**(sigma_A / (sigma_A - 1))) # define symbolic expression for Cobb-Douglas between l and r l, r, omega_B, sigma_B = sym.var('l, r, omega_B, sigma_B') B = l**omega_B * r**(1 - omega_B) F = A * B F_params = {'omega_A':0.5, 'omega_B':0.45, 'sigma_A':0.5, 'sigma_B':1.0} model = models.Model('positive', workers=workers, firms=firms, production=F, params=F_params) solver = solvers.ShootingSolver(model=model) ``` ``` solver.solve(5e-1, tol=1e-6, number_knots=5000, integrator='lsoda', with_jacobian=True, atol=1e-12, rtol=1e-9) ``` Exhausted firms: initial guess of 0.25 for firm size is too low. Exhausted firms: initial guess of 0.375 for firm size is too low. /Users/drpugh/anaconda/lib/python2.7/site-packages/scipy/integrate/_ode.py:1127: UserWarning: lsoda: Excess work done on this call (perhaps wrong Dfun type). 
'Unexpected istate=%s' % istate)) ``` solver.solution ``` <div style="max-height:1000px;max-width:1500px;overflow:auto;"> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>firm productivity</th> <th>firm size</th> <th>wage</th> <th>profit</th> </tr> <tr> <th>x</th> <th></th> <th></th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th>0.001000 </th> <td> 0.001067</td> <td> 3.549875</td> <td> 0.000232</td> <td> 0.001005</td> </tr> <tr> <th>0.051049 </th> <td> 0.043892</td> <td> 1.585344</td> <td> 0.016516</td> <td> 0.032003</td> </tr> <tr> <th>0.101098 </th> <td> 0.086925</td> <td> 1.429779</td> <td> 0.034621</td> <td> 0.060501</td> </tr> <tr> <th>0.151147 </th> <td> 0.129960</td> <td> 1.345459</td> <td> 0.053521</td> <td> 0.088013</td> </tr> <tr> <th>0.201196 </th> <td> 0.172995</td> <td> 1.288533</td> <td> 0.072958</td> <td> 0.114900</td> </tr> <tr> <th>0.251245 </th> <td> 0.216032</td> <td> 1.245988</td> <td> 0.092806</td> <td> 0.141332</td> </tr> <tr> <th>0.301294 </th> <td> 0.259069</td> <td> 1.212244</td> <td> 0.112987</td> <td> 0.167405</td> </tr> <tr> <th>0.351343 </th> <td> 0.302107</td> <td> 1.184414</td> <td> 0.133450</td> <td> 0.193185</td> </tr> <tr> <th>0.401392 </th> <td> 0.345146</td> <td> 1.160814</td> <td> 0.154158</td> <td> 0.218716</td> </tr> <tr> <th>0.451441 </th> <td> 0.388186</td> <td> 1.140382</td> <td> 0.175083</td> <td> 0.244030</td> </tr> <tr> <th>0.501490 </th> <td> 0.431227</td> <td> 1.122406</td> <td> 0.196201</td> <td> 0.269155</td> </tr> <tr> <th>0.551540 </th> <td> 0.474268</td> <td> 1.106387</td> <td> 0.217496</td> <td> 0.294109</td> </tr> <tr> <th>0.601589 </th> <td> 0.517310</td> <td> 1.091959</td> <td> 0.238953</td> <td> 0.318910</td> </tr> <tr> <th>0.651638 </th> <td> 0.560353</td> <td> 1.078853</td> <td> 0.260558</td> <td> 0.343572</td> </tr> <tr> <th>0.701687 </th> <td> 0.603397</td> <td> 1.066857</td> <td> 0.282303</td> <td> 0.368105</td> </tr> <tr> <th>0.751736 </th> <td> 0.646441</td> <td> 1.055808</td> <td> 0.304177</td> <td> 0.392519</td> </tr> <tr> <th>0.801785 </th> <td> 0.689487</td> <td> 1.045576</td> <td> 0.326172</td> <td> 0.416824</td> </tr> <tr> <th>0.851834 </th> <td> 0.732533</td> <td> 1.036054</td> <td> 0.348282</td> <td> 0.441026</td> </tr> <tr> <th>0.901883 </th> <td> 0.775580</td> <td> 1.027156</td> <td> 0.370501</td> <td> 0.465131</td> </tr> <tr> <th>0.951932 </th> <td> 0.818627</td> <td> 1.018809</td> <td> 0.392822</td> <td> 0.489146</td> </tr> <tr> <th>1.001981 </th> <td> 0.861676</td> <td> 1.010952</td> <td> 0.415241</td> <td> 0.513076</td> </tr> <tr> <th>1.052030 </th> <td> 0.904725</td> <td> 1.003536</td> <td> 0.437754</td> <td> 0.536924</td> </tr> <tr> <th>1.102079 </th> <td> 0.947775</td> <td> 0.996515</td> <td> 0.460356</td> <td> 0.560696</td> </tr> <tr> <th>1.152128 </th> <td> 0.990826</td> <td> 0.989852</td> <td> 0.483043</td> <td> 0.584395</td> </tr> <tr> <th>1.202177 </th> <td> 1.033878</td> <td> 0.983515</td> <td> 0.505813</td> <td> 0.608025</td> </tr> <tr> <th>1.252226 </th> <td> 1.076930</td> <td> 0.977474</td> <td> 0.528662</td> <td> 0.631588</td> </tr> <tr> <th>1.302275 </th> <td> 1.119983</td> <td> 0.971705</td> <td> 0.551587</td> <td> 0.655087</td> </tr> <tr> <th>1.352324 </th> <td> 1.163038</td> <td> 0.966186</td> <td> 0.574586</td> <td> 0.678525</td> </tr> <tr> <th>1.402373 </th> <td> 1.206093</td> <td> 0.960898</td> <td> 0.597655</td> <td> 0.701905</td> </tr> <tr> <th>1.452422 </th> <td> 1.249148</td> <td> 0.955823</td> <td> 0.620793</td> <td> 0.725228</td> </tr> <tr> 
<th>...</th> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr> <tr> <th>48.548578</th> <td> 47.782881</td> <td> 0.626701</td> <td> 28.025228</td> <td> 21.466424</td> </tr> <tr> <th>48.598627</th> <td> 47.856867</td> <td> 0.626869</td> <td> 28.057248</td> <td> 21.496718</td> </tr> <tr> <th>48.648676</th> <td> 47.931017</td> <td> 0.627039</td> <td> 28.089273</td> <td> 21.527071</td> </tr> <tr> <th>48.698725</th> <td> 48.005333</td> <td> 0.627209</td> <td> 28.121305</td> <td> 21.557486</td> </tr> <tr> <th>48.748774</th> <td> 48.079815</td> <td> 0.627381</td> <td> 28.153343</td> <td> 21.587963</td> </tr> <tr> <th>48.798823</th> <td> 48.154466</td> <td> 0.627554</td> <td> 28.185387</td> <td> 21.618502</td> </tr> <tr> <th>48.848872</th> <td> 48.229287</td> <td> 0.627729</td> <td> 28.217437</td> <td> 21.649103</td> </tr> <tr> <th>48.898921</th> <td> 48.304279</td> <td> 0.627905</td> <td> 28.249494</td> <td> 21.679767</td> </tr> <tr> <th>48.948970</th> <td> 48.379445</td> <td> 0.628082</td> <td> 28.281558</td> <td> 21.710495</td> </tr> <tr> <th>48.999019</th> <td> 48.454784</td> <td> 0.628260</td> <td> 28.313628</td> <td> 21.741288</td> </tr> <tr> <th>49.049068</th> <td> 48.530300</td> <td> 0.628440</td> <td> 28.345705</td> <td> 21.772145</td> </tr> <tr> <th>49.099117</th> <td> 48.605993</td> <td> 0.628621</td> <td> 28.377788</td> <td> 21.803067</td> </tr> <tr> <th>49.149166</th> <td> 48.681864</td> <td> 0.628803</td> <td> 28.409878</td> <td> 21.834055</td> </tr> <tr> <th>49.199215</th> <td> 48.757917</td> <td> 0.628987</td> <td> 28.441975</td> <td> 21.865110</td> </tr> <tr> <th>49.249264</th> <td> 48.834153</td> <td> 0.629172</td> <td> 28.474079</td> <td> 21.896232</td> </tr> <tr> <th>49.299313</th> <td> 48.910573</td> <td> 0.629359</td> <td> 28.506189</td> <td> 21.927422</td> </tr> <tr> <th>49.349362</th> <td> 48.987179</td> <td> 0.629547</td> <td> 28.538307</td> <td> 21.958680</td> </tr> <tr> <th>49.399411</th> <td> 49.063971</td> <td> 0.629736</td> <td> 28.570432</td> <td> 21.990006</td> </tr> <tr> <th>49.449460</th> <td> 49.140903</td> <td> 0.629926</td> <td> 28.602557</td> <td> 22.021386</td> </tr> <tr> <th>49.499510</th> <td> 49.218053</td> <td> 0.630118</td> <td> 28.634693</td> <td> 22.052845</td> </tr> <tr> <th>49.549559</th> <td> 49.295391</td> <td> 0.630311</td> <td> 28.666836</td> <td> 22.084373</td> </tr> <tr> <th>49.599608</th> <td> 49.372918</td> <td> 0.630506</td> <td> 28.698986</td> <td> 22.115971</td> </tr> <tr> <th>49.649657</th> <td> 49.450632</td> <td> 0.630702</td> <td> 28.731141</td> <td> 22.147637</td> </tr> <tr> <th>49.699706</th> <td> 49.528536</td> <td> 0.630900</td> <td> 28.763304</td> <td> 22.179374</td> </tr> <tr> <th>49.749755</th> <td> 49.606634</td> <td> 0.631099</td> <td> 28.795473</td> <td> 22.211182</td> </tr> <tr> <th>49.799804</th> <td> 49.684927</td> <td> 0.631299</td> <td> 28.827648</td> <td> 22.243062</td> </tr> <tr> <th>49.849853</th> <td> 49.763416</td> <td> 0.631501</td> <td> 28.859831</td> <td> 22.275013</td> </tr> <tr> <th>49.899902</th> <td> 49.842100</td> <td> 0.631704</td> <td> 28.892020</td> <td> 22.307037</td> </tr> <tr> <th>49.949951</th> <td> 49.920980</td> <td> 0.631909</td> <td> 28.924216</td> <td> 22.339133</td> </tr> <tr> <th>50.000000</th> <td> 50.000000</td> <td> 0.632115</td> <td> 28.956412</td> <td> 22.371282</td> </tr> </tbody> </table> <p>1000 rows × 4 columns</p> </div> ``` solver.solution['firm size'].plot() ``` ### Negative assortative matching ``` F_params = {'omega_A':0.5, 'omega_B':0.45, 'sigma_A':1.5, 'sigma_B':1.0} 
model = models.Model('negative', workers=workers, firms=firms, production=F, params=F_params) solver = solvers.ShootingSolver(model=model) ``` ``` solver.solve(1e3, tol=1e-6, number_knots=1000, atol=1e-12, rtol=1e-9) ``` Exhausted firms: initial guess of 500.0 for firm size is too low. Exhausted workers: initial guess of 750.0 for firm size is too high! Exhausted firms: initial guess of 625.0 for firm size is too low. Exhausted firms: initial guess of 687.5 for firm size is too low. Exhausted workers: initial guess of 718.75 for firm size is too high! Exhausted workers: initial guess of 703.125 for firm size is too high! Exhausted firms: initial guess of 695.3125 for firm size is too low. Exhausted workers: initial guess of 699.21875 for firm size is too high! Exhausted firms: initial guess of 697.265625 for firm size is too low. Exhausted workers: initial guess of 698.2421875 for firm size is too high! Exhausted workers: initial guess of 697.75390625 for firm size is too high! Exhausted firms: initial guess of 697.509765625 for firm size is too low. Exhausted workers: initial guess of 697.631835938 for firm size is too high! Exhausted workers: initial guess of 697.570800781 for firm size is too high! Exhausted workers: initial guess of 697.540283203 for firm size is too high! Exhausted workers: initial guess of 697.525024414 for firm size is too high! Exhausted firms: initial guess of 697.51739502 for firm size is too low. Exhausted workers: initial guess of 697.521209717 for firm size is too high! Exhausted firms: initial guess of 697.519302368 for firm size is too low. Exhausted firms: initial guess of 697.520256042 for firm size is too low. Exhausted firms: initial guess of 697.52073288 for firm size is too low. Exhausted workers: initial guess of 697.520971298 for firm size is too high! Exhausted firms: initial guess of 697.520852089 for firm size is too low. Exhausted workers: initial guess of 697.520911694 for firm size is too high! Exhausted workers: initial guess of 697.520881891 for firm size is too high! Exhausted firms: initial guess of 697.52086699 for firm size is too low. Exhausted workers: initial guess of 697.520874441 for firm size is too high! Exhausted firms: initial guess of 697.520870715 for firm size is too low. Exhausted firms: initial guess of 697.520872578 for firm size is too low. Exhausted workers: initial guess of 697.520873509 for firm size is too high! Exhausted firms: initial guess of 697.520873044 for firm size is too low. Exhausted workers: initial guess of 697.520873277 for firm size is too high! Exhausted firms: initial guess of 697.52087316 for firm size is too low. Exhausted firms: initial guess of 697.520873218 for firm size is too low. Exhausted firms: initial guess of 697.520873247 for firm size is too low. Exhausted firms: Initial guess of 697.520873262 for firm size was too low! Exhausted firms: Initial guess of 697.520873269 for firm size was too low! Exhausted workers: initial guess of 697.520873273 for firm size is too high! Exhausted firms: Initial guess of 697.520873271 for firm size was too low! Exhausted firms: Initial guess of 697.520873272 for firm size was too low! Exhausted firms: Initial guess of 697.520873272 for firm size was too low! Exhausted workers: initial guess of 697.520873273 for firm size is too high! Exhausted firms: Initial guess of 697.520873273 for firm size was too low! Exhausted firms: Initial guess of 697.520873273 for firm size was too low! Success! 
All workers and firms are matched ``` solver.solution ``` array([[ 1.00000000e-04, 5.00000000e+01, 6.97520873e+02, 7.96982563e-02, 6.79447968e+01], [ 1.00200000e-01, 4.38421936e+01, 2.76462641e+02, 1.62305818e-01, 5.48429383e+01], [ 2.00300000e-01, 3.01915720e+01, 1.25086665e+02, 1.99932868e-01, 3.05664771e+01], ..., [ 9.97998000e+01, 2.00562063e-02, 4.40863126e-04, 4.66712075e+02, 2.51479732e-01], [ 9.98999000e+01, 1.70657087e-02, 4.33023425e-04, 4.67696609e+02, 2.47528829e-01], [ 1.00000000e+02, 9.99753364e-04, 3.61574332e-04, 4.68687387e+02, 2.07124291e-01]]) ``` plt.plot(solver.solution[:, 0], solver.solution[:, 2]) plt.xscale('log') plt.yscale('log') plt.show() ``` # Example: More sophisticated production function that is not multiplicatively separable... ``` # define some workers skill x, mu1, sigma1 = sym.var('x, mu1, sigma1') skill_cdf = 0.5 + 0.5 * sym.erf((sym.log(x) - mu1) / sym.sqrt(2 * sigma1**2)) skill_params = {'mu1': 0.0, 'sigma1': 1e0} skill_bounds = [1e-3, 50.0] workers = inputs.Input(var=x, cdf=skill_cdf, params=skill_params, bounds=skill_bounds, ) # define some firms y, mu2, sigma2 = sym.var('y, mu2, sigma2') productivity_cdf = 0.5 + 0.5 * sym.erf((sym.log(y) - mu2) / sym.sqrt(2 * sigma2**2)) productivity_params = {'mu2': 0.0, 'sigma2': 1e0} productivity_bounds = [1e-3, 50.0] firms = inputs.Input(var=y, cdf=productivity_cdf, params=productivity_params, bounds=productivity_bounds, ) # define some valid model params F_params = {'eta': 0.89, 'kappa': 1.0, 'gamma': 0.54, 'rho': 0.24, 'A': 1.0, 'k': 1.0} # define a valid production function A, k, kappa, eta, rho, l, gamma, r = sym.var('A, k, kappa, eta, rho, l, gamma, r') F = r * A * kappa * eta * ((k * x)**rho + (1 - eta) * (y * (l / r))**rho)**(gamma / rho) model = models.Model('negative', workers=workers, firms=firms, production=F, params=F_params) ``` ``` solver = solvers.ShootingSolver(model=model) ``` ``` solver.solve(1e0, tol=1e-4, number_knots=10000, integrator='vode', atol=1e-9, rtol=1e-6, check=True) ``` Exhausted firms: initial guess of 0.5 for firm size is too low. ``` solver.integrator.successful() ``` False ``` solver._solution ``` array([[ 1.00000000e-03, 5.00000000e+01, 7.50000000e-01, 6.25292214e-02, 1.02985470e-01], [ 6.00040004e-03, 4.91355141e+01, 2.93698995e-02, 6.50895813e-01, 1.02614135e-01], [ 1.10008001e-02, 1.87728995e+01, 6.10881218e-04, 8.73207163e+00, 9.34543423e-02], [ 1.34286568e-02, 2.61227847e+06, 1.98653860e+04, 1.04151668e-01, 1.79568793e+03]]) ``` integrat ``` 50.0 ``` ```
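The last call above did not converge: `solver.integrator.successful()` is `False` and only a short partial trajectory is stored in `solver._solution`. A quick way to see where the integration blew up is to plot that array directly. The sketch below is an addition of mine; it assumes the columns follow the same layout as the array printed for the negative-matching case above (worker skill, firm productivity, firm size, wage, profit), which is an assumption rather than documented behaviour.

```
import numpy as np
import matplotlib.pyplot as plt

# Partial trajectory left behind by the failed solve (assumed column layout:
# 0 = worker skill, 1 = firm productivity, 2 = firm size, 3 = wage, 4 = profit).
partial = np.asarray(solver._solution)

fig, axes = plt.subplots(1, 2, figsize=(10, 4))

# Firm size along the partial path: the blow-up in the last row suggests the
# initial guess and/or the integrator settings need to change.
axes[0].semilogy(partial[:, 0], partial[:, 2], 'o-')
axes[0].set_xlabel('worker skill $x$')
axes[0].set_ylabel('firm size (assumed column 2)')

# Wage along the same path, for comparison.
axes[1].semilogy(partial[:, 0], partial[:, 3], 's-')
axes[1].set_xlabel('worker skill $x$')
axes[1].set_ylabel('wage (assumed column 3)')

plt.tight_layout()
plt.show()
```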
83cce094b47b38a24865f13041a97d92f94f8071
234,416
ipynb
Jupyter Notebook
examples.ipynb
davidrpugh/assortative-matching-large-firms
e475dbc04e59ea066fae681b830fecdb8981c1d6
[ "MIT" ]
2
2019-07-31T06:34:01.000Z
2020-07-29T10:32:37.000Z
examples.ipynb
davidrpugh/assortative-matching-large-firms
e475dbc04e59ea066fae681b830fecdb8981c1d6
[ "MIT" ]
null
null
null
examples.ipynb
davidrpugh/assortative-matching-large-firms
e475dbc04e59ea066fae681b830fecdb8981c1d6
[ "MIT" ]
8
2016-11-13T19:55:54.000Z
2021-09-17T07:20:22.000Z
80.777395
43,755
0.715612
true
18,865
Qwen/Qwen-72B
1. YES 2. YES
0.891811
0.771843
0.688339
__label__eng_Latn
0.188054
0.437572
# Mass Matrix

The 2 DOF dynamical system in the figure is composed of two massless rigid bodies and a massive one. Compute the mass matrix of the system with reference to the degrees of freedom indicated in the figure, in the hypothesis of small displacements.

## Solution

We are going to use symbols for the relevant quantities.

```python
m, L, x1, x2 = symbols('m L x_1 x_2')
```

#### Contribution of $x_1$ to the displacements of the massive bar

We constrain $x_2$ to zero (i.e., the roller becomes a hinge) and impose a unit displacement $x_1=1$. The Centre of Instantaneous Rotation (CIR) of the massive bar, at the intersection of the dashed lines in the figure, coincides with the CIR of the left bar, hence the rotations of the two bars are the same. Because the two rotations are $\phi_1=1/(2L)$, the displacements of the centre of mass are $u_{G1} = -\phi_1\times L/2 = -1/4$ and $v_{G1} = +\phi_1\times L=1/2$.

```python
ug1, vg1, 𝜙1 = -x1/4, +x1/2, +x1/(2*L)
```

#### Contribution of $x_2$ to the displacements of the massive bar

We constrain $x_1$ to zero (i.e., we introduce a roller) and impose a unit displacement $x_2=1$. The left beam can't move, hence the CIR of the massive bar is the top internal hinge. The CIR of the bottom bar is at an infinite distance in the vertical direction (the bottom bar undergoes a horizontal motion) and by continuity we have $\phi_2=1/L$, $u_{G2}=-\phi_2\times(-L/2)=+1/2$ and $v_{G2}=0$.

```python
ug2, vg2, 𝜙2 = +x2/2, 0, +x2/L
```

#### Total Displacements and Velocities

The total displacement components are the sums of the two contributions, and so is the total rotation. The velocities are obtained by differentiating with respect to time.

```python
ug, vg, 𝜙 = ug1+ug2, vg1+vg2, 𝜙1+𝜙2
dot_u, dot_v, ω = diff_t(ug), diff_t(vg), diff_t(𝜙)
```

#### Kinetic Energy

```python
T = m * (dot_u**2 + dot_v**2 + ω**2*L**2/12) / 2
display(Latex('$$T=' + latex(T.expand()) + '.$$'))
```

$$T=\frac{m x_{1}^{2}}{6} - \frac{m x_{1} x_{2}}{12} + \frac{m x_{2}^{2}}{6}.$$

#### Mass Matrix Coefficients

The coefficients can be computed as

$$m_{ij} = \frac{\partial^2 T}{\partial x_i \partial x_j}.$$

```python
for i, xi in enumerate((x1, x2), 1):
    for j, xj in enumerate((x1, x2), 1):
        display(Latex('$$m_{%d%d}='%(i,j)+latex(T.diff(xi,xj))+'.$$'))
```

$$m_{11}=\frac{m}{3}.$$

$$m_{12}=- \frac{m}{12}.$$

$$m_{21}=- \frac{m}{12}.$$

$$m_{22}=\frac{m}{3}.$$

## Initialization

```python
from sympy import symbols, init_printing, latex
init_printing(use_latex=1)

from IPython.display import HTML, Latex
display(HTML(open('01.css').read()))

def diff_t(expr):
    return expr
```
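Since every coefficient is a second derivative of $T$, the whole matrix can also be assembled in a single call with `sympy.hessian`. The cell below is an addition of mine, not part of the original notebook; it only reuses the symbols already defined above and cross-checks the coefficients computed in the loop.

```python
from sympy import hessian, simplify

# Assemble the mass matrix in one shot as the Hessian of the kinetic energy
# with respect to the chosen degrees of freedom (x_1, x_2).
M = hessian(T, (x1, x2))
display(Latex('$$M=' + latex(M) + '.$$'))

# Cross-check against the coefficients obtained above: the matrix must be
# symmetric, with m/3 on the diagonal and -m/12 off the diagonal.
assert simplify(M[0, 1] - M[1, 0]) == 0
assert simplify(M[0, 0] - m/3) == 0
assert simplify(M[1, 1] - m/3) == 0
assert simplify(M[0, 1] + m/12) == 0
```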
4ebd5ec998a238f010338dc7413db904656667d5
8,830
ipynb
Jupyter Notebook
dati_2017/wt05/MassMatrix.ipynb
shishitao/boffi_dynamics
365f16d047fb2dbfc21a2874790f8bef563e0947
[ "MIT" ]
null
null
null
dati_2017/wt05/MassMatrix.ipynb
shishitao/boffi_dynamics
365f16d047fb2dbfc21a2874790f8bef563e0947
[ "MIT" ]
null
null
null
dati_2017/wt05/MassMatrix.ipynb
shishitao/boffi_dynamics
365f16d047fb2dbfc21a2874790f8bef563e0947
[ "MIT" ]
2
2019-06-23T12:32:39.000Z
2021-08-15T18:33:55.000Z
26.596386
225
0.503171
true
1,438
Qwen/Qwen-72B
1. YES 2. YES
0.779993
0.803174
0.62647
__label__eng_Latn
0.821245
0.29383
# Debiasing with Orthogonalization Previously, we saw how to evaluate a causal model. By itself, that's a huge deed. Causal models estimates the elasticity $\frac{\delta y}{\delta t}$, which is an unseen quantity. Hence, since we can't see the ground truth of what our model is estimating, we had to be very creative in how we would go about evaluating them. The technique shown on the previous chapter relied heavily on data where the treatment was randomly assigned. The idea was to estimate the elasticity $\frac{\delta y}{\delta t}$ as the coefficient of a single variable linear regression of `y ~ t`. However, this only works if the treatment is randomly assigned. If it isn't, we get into trouble due to omitted variable bias. To workaround this, we need to make the data look as if the treatment is randomly assigned. I would say there are two main techniques to do this. One is using propensity score and the other using orthogonalization. We will cover the latter in this chapter. One final word of caution before we continue. I would argue that probably the safest way out of non random data is to go out and do some sort of experiment to gather random data. I myself don't trust very much on debiasing techniques because you can never know if you've accounted for every confounder. Having said that, orthogonalization is still very much worth learning. It's an incredibly powerful technique that will be the foundation of many causal models to come. ## Linear Regression Reborn The idea of orthogonalization is based on a theorem designed by three econometricians in 1933, Ragnar Frisch, Frederick V. Waugh, and Michael C. Lovell. Simply put, it states that you can decompose any multivariable linear regression model into three stages or models. Let's say that your features are in an $X$ matrix. Now, you partition that matrix in such a way that you get one part, $X_1$, with some of the features and another part, $X_2$, with the rest of the features. In the first stage, we take the first set of features and estimate the following linear regression model $$ y_i = \theta_0 + \pmb{\theta_1 X}_{1i} + e_i $$ where $\pmb{\theta_1}$ is a vector of parameters. We then take the residuals of that model $$ y^* = y_i - (\hat{\theta}_0 + \pmb{\hat{\theta}_1 X}_{1i}) $$ On the second stage, we take the first set of features again, but now we run a model where we estimate the second set of features $$ \pmb{X}_{2i} = \gamma_0 + \pmb{\gamma_1 X}_{1i} + e_i $$ Here, we are using the first set of features to predict the second set of features. Finally, we also take the residuals for this second stage. $$ \pmb{X}^*_{2i} = \pmb{X}_{2i} - (\hat{\gamma}_0 + \pmb{\hat{\gamma}_1 X}_{1i}) $$ Lastly, we take the residuals from the first and second stage, and estimate the following model $$ y_i^* = \beta_0 + \pmb{\beta_2 X}^*_{2i} + e_i $$ The Frisch–Waugh–Lovell theorem states that the parameter estimate $\pmb{\hat{\beta}_2}$ from estimating this model is equivalent to the one we get by running the full regression, with all the features: $$ y_i = \beta_0 + \pmb{\beta_1 X}_{1i} + \pmb{\beta_2 X}_{2i} + e_i $$ OK. Let's unpack this a bit further. We know that regression is a very special model. Each of its parameters has the interpretation of a partial derivative: how much would $Y$ increase if I increase one feature **while holding all the others fixed**. 
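To make the theorem concrete, here is a small synthetic check; it is an addition of mine and not part of the original chapter, and every variable name and number in it is made up for illustration. We generate data where $X_2$ is confounded by $X_1$, run the full regression, run the three-stage residual regression, and confirm that the two coefficients on $X_2$ coincide.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

np.random.seed(42)
n = 10_000

# x1 plays the role of the X_1 block, x2 the role of the X_2 block.
x1 = np.random.normal(size=n)
x2 = 0.5 * x1 + np.random.normal(size=n)
y = 1.0 + 2.0 * x1 - 3.0 * x2 + np.random.normal(size=n)
df = pd.DataFrame(dict(y=y, x1=x1, x2=x2))

# Full regression: coefficient on x2 while controlling for x1.
full_coef = smf.ols("y ~ x1 + x2", data=df).fit().params["x2"]

# Three-stage version: residualize y on x1, residualize x2 on x1,
# then regress the first residual on the second.
df_res = pd.DataFrame({
    "y_star": smf.ols("y ~ x1", data=df).fit().resid,
    "x2_star": smf.ols("x2 ~ x1", data=df).fit().resid,
})
fwl_coef = smf.ols("y_star ~ x2_star", data=df_res).fit().params["x2_star"]

print(full_coef, fwl_coef)  # the two estimates are identical up to round-off
```

The two printed numbers match exactly, which is the theorem in action. Now, back to the partial-derivative reading of the coefficients and why it matters.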
This is very nice for causal inference, because it means we can control for variables in the analysis, even if those same variables have not been held fixed during the collection of the data. We also know that if we omit variables from the regression, we get bias. Specifically, omitted variable bias (or confounding bias). Still, the Frisch–Waugh–Lovell is saying that I can break my regression model into two parts, neither of them containing the full feature set, and still get the same estimate I would get by running the entire regression. Not only that, this theorem also provides some insight into what linear regression is doing. To get the coefficient of one variable $X_k$, regression first uses all the other variables to predict $X_k$ and takes the residuals. This "cleans" $X_k$ of any influence from those variables. That way, when we try to understand $X_k$'s impact on $Y$, it will be free from omitted variable bias. Second, regression uses all the other variables to predict $Y$ and takes the residuals. This "cleans" $Y$ from any influence from those variables, reducing the variance of $Y$ so that it is easier to see how $X_k$ impacts $Y$. I know it can be hard to appreciate how awesome this is. But remember what linear regression is doing. It's estimating the impact of $X_2$ on $y$ while accounting for $X_1$. This is incredibly powerful for causal inference. It says that I can build a model that predicts my treatment $t$ using my features $X$, a model that predicts the outcome $y$ using the same features, take the residuals from both models and run a model that estimates how the residual of $t$ affects the residual of $y$. This last model will tell me how $t$ affects $y$ while controlling for $X$. In other words, the first two models are controlling for the confounding variables. They are generating data which is as good as random. This is debiasing my data. That's what we use in the final model to estimate the elasticity. There is a (not so complicated) mathematical proof for why that is the case, but I think the intuition behind this theorem is so straightforward we can go directly into it. ## The Intuition Behind Orthogonalization ```python import pandas as pd import numpy as np from matplotlib import pyplot as plt import seaborn as sns from sklearn.model_selection import train_test_split import statsmodels.formula.api as smf import statsmodels.api as sm from nb21 import cumulative_elast_curve_ci, elast, cumulative_gain_ci ``` Let's take our price data once again. But now, we will only take the sample where prices where **not** randomly assigned. Once again, we separate them into a training and a test set. Since we will use the test set to evaluate our causal model, let's see how we can use orthogonalization to debias it. ```python prices = pd.read_csv("./data/ice_cream_sales.csv") train, test = train_test_split(prices, test_size=0.5) train.shape, test.shape ``` ((5000, 5), (5000, 5)) If we show the correlations on the test set, we can see that price is positively correlated with sales, meaning that sales should go up as we increase prices. This is obviously nonsense. People don't buy more if ice cream is expensive. We probably have some sort of bias here. 
```python test.corr() ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>temp</th> <th>weekday</th> <th>cost</th> <th>price</th> <th>sales</th> </tr> </thead> <tbody> <tr> <th>temp</th> <td>1.000000</td> <td>0.003630</td> <td>0.006605</td> <td>-0.011977</td> <td>0.379108</td> </tr> <tr> <th>weekday</th> <td>0.003630</td> <td>1.000000</td> <td>0.011889</td> <td>0.002610</td> <td>0.004589</td> </tr> <tr> <th>cost</th> <td>0.006605</td> <td>0.011889</td> <td>1.000000</td> <td>0.388046</td> <td>-0.009410</td> </tr> <tr> <th>price</th> <td>-0.011977</td> <td>0.002610</td> <td>0.388046</td> <td>1.000000</td> <td>0.080040</td> </tr> <tr> <th>sales</th> <td>0.379108</td> <td>0.004589</td> <td>-0.009410</td> <td>0.080040</td> <td>1.000000</td> </tr> </tbody> </table> </div> If we plot our data, we can see why this is happening. Weekends (Saturday and Sunday) have higher price but also higher sales. We can see that this is the case because the weekend cloud of points seems to be to the upper right part of the plot. Weekend is probably playing an important role in the bias here. On the weekends, there are more ice cream sales because there is more demand. In response to that demand, prices go up. So it is not that the increase in price causes sales to go up. It is just that both sales and prices are high on weekends. ```python np.random.seed(123) sns.scatterplot(data=test.sample(1000), x="price", y="sales", hue="weekday"); ``` To debias this dataset we will need two models. The first model, let's call it $M_t(X)$, predicts the treatment (price, in our case) using the confounders. It's the one of the stages we've seen above, on the Frisch–Waugh–Lovell theorem. ```python m_t = smf.ols("price ~ cost + C(weekday) + temp", data=test).fit() debiased_test = test.assign(**{"price-Mt(X)":test["price"] - m_t.predict(test)}) ``` Once we have this model, we will construct the residuals $$ \hat{t}_i = t_i - M_t(X_i) $$ You can think of this residual as a version of the treatment that is unbiased or, better yet, that is impossible to predict from the confounders $X$. Since the confounders were already used to predict $t$, the residual is by definition, unpredictable with $X$. Another way of saying this is that the bias has been explained away by the model $M_t(X_i)$, prudicing $\hat{t}_i$ which is as good as randomly assigned. Of course this only works if we have in $X$ all the confounders that cause both $T$ and $Y$. We can also plot this data to see what it looks like. ```python np.random.seed(123) sns.scatterplot(data=debiased_test.sample(1000), x="price-Mt(X)", y="sales", hue="weekday") plt.vlines(0, debiased_test["sales"].min(), debiased_test["sales"].max(), linestyles='--', color="black"); ``` We can see that the weekends are no longer to the upper right corner. They got pushed to the center. Moreover, we can no longer differentiate between different price levels (the treatment) using the weekdays. We can say that the residual $price-M_t(X)$, plotted on the x-axis, is a "random" or debiased version of the original treatment. This alone is sufficient to debias the dataset. This new treatment we've created is as good as randomly assigned. But we can still do one other thing to make the debiased dataset even better. Namely, we can also construct residuals for the outcome. 
$$ \hat{y}_i = y_i - M_y(X_i) $$

This is another stage from the Frisch–Waugh–Lovell theorem. It doesn't make the set less biased, but it makes it easier to estimate the elasticity by reducing the variance in $y$. Once again, you can think about $\hat{y}_i$ as a version of $y_i$ that is unpredictable from $X$ or that had all its variance due to $X$ explained away. Think about it. We've already used $X$ to predict $y$ with $M_y(X_i)$. And $\hat{y}_i$ is the error of this prediction. So, by definition, it's not possible to predict it from $X$. All the information in $X$ to predict $y$ has already been used. If that is the case, the only thing left to explain $\hat{y}_i$ is something we didn't use to construct it (not included in $X$), which is only the treatment (again, assuming no unmeasured confounders).

```python
m_y = smf.ols("sales ~ cost + C(weekday) + temp", data=test).fit()

debiased_test = test.assign(**{"price-Mt(X)":test["price"] - m_t.predict(test),
                               "sales-My(X)":test["sales"] - m_y.predict(test)})
```

Once we do both transformations, not only do weekdays fail to predict the price residuals, they also can't predict the residual of sales $\hat{y}$. The only thing left to predict these residuals is the treatment.

Also, notice something interesting. In the plot above, it was hard to know the direction of the price elasticity. It looked like sales decreased as prices went up, but there was such a large variance in sales that it was hard to say that for sure. Now, when we plot the two residuals, it becomes much clearer that higher prices indeed cause sales to go down.

```python
np.random.seed(123)
sns.scatterplot(data=debiased_test.sample(1000), x="price-Mt(X)", y="sales-My(X)", hue="weekday")
plt.vlines(0, debiased_test["sales-My(X)"].min(), debiased_test["sales-My(X)"].max(), linestyles='--', color="black");
```

One small disadvantage of this debiased data is that the residuals have been shifted to a different scale. As a result, it's hard to interpret what they mean (what is a price residual of -3?). Still, I think this is a small price to pay for the convenience of building random data from data that was not initially random.

To summarize, by predicting the treatment, we've constructed $\hat{t}$, which works as an unbiased version of the treatment; by predicting the outcome, we've constructed $\hat{y}$, which is a version of the outcome that can only be further explained if we use the treatment. This data, where we replace $y$ by $\hat{y}$ and $t$ by $\hat{t}$, is the debiased data we wanted. We can use it to evaluate our causal model just like we did previously using random data.

To see this, let's once again build a causal model for price elasticity using the training data.

```python
m3 = smf.ols(f"sales ~ price*cost + price*C(weekday) + price*temp", data=train).fit()
```

Then, we'll make elasticity predictions on the debiased test set.
```python def predict_elast(model, price_df, h=0.01): return (model.predict(price_df.assign(price=price_df["price"]+h)) - model.predict(price_df)) / h debiased_test_pred = debiased_test.assign(**{ "m3_pred": predict_elast(m3, debiased_test), }) debiased_test_pred.head() ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>temp</th> <th>weekday</th> <th>cost</th> <th>price</th> <th>sales</th> <th>price-Mt(X)</th> <th>sales-My(X)</th> <th>m3_pred</th> </tr> </thead> <tbody> <tr> <th>7791</th> <td>20.8</td> <td>3</td> <td>1.5</td> <td>6.3</td> <td>187</td> <td>-0.201769</td> <td>1.441373</td> <td>-0.073317</td> </tr> <tr> <th>1764</th> <td>26.6</td> <td>3</td> <td>1.5</td> <td>6.3</td> <td>201</td> <td>-0.179506</td> <td>4.737748</td> <td>-2.139611</td> </tr> <tr> <th>5785</th> <td>24.0</td> <td>4</td> <td>1.0</td> <td>5.8</td> <td>186</td> <td>-0.215107</td> <td>-5.855171</td> <td>-0.549798</td> </tr> <tr> <th>3542</th> <td>20.9</td> <td>3</td> <td>1.5</td> <td>5.1</td> <td>180</td> <td>-1.401386</td> <td>-5.743172</td> <td>-0.108943</td> </tr> <tr> <th>9250</th> <td>26.7</td> <td>5</td> <td>1.0</td> <td>7.0</td> <td>201</td> <td>0.978382</td> <td>4.384885</td> <td>-1.427230</td> </tr> </tbody> </table> </div> Now, when it comes to plotting the cumulative elasticity, we still order the dataset by the predictive elasticity, but now we use the debiased versions of the treatment and outcome to get this elasticity. This is equivalent to estimating $\beta_1$ in the following regression model $$ \hat{y}_i = \beta_0 + \beta_1 \hat{t}_i + e_i $$ where the residuals are like we've described before. ```python plt.figure(figsize=(10,6)) cumm_elast = cumulative_elast_curve_ci(debiased_test_pred, "m3_pred", "sales-My(X)", "price-Mt(X)", min_periods=50, steps=200) x = np.array(range(len(cumm_elast))) plt.plot(x/x.max(), cumm_elast, color="C0") plt.hlines(elast(debiased_test_pred, "sales-My(X)", "price-Mt(X)"), 0, 1, linestyles="--", color="black", label="Avg. Elast.") plt.xlabel("% of Top Elast. Customers") plt.ylabel("Elasticity of Top %") plt.title("Cumulative Elasticity") plt.legend(); ``` We can do the same thing for the cumulative gain curve, of course. ```python plt.figure(figsize=(10,6)) cumm_gain = cumulative_gain_ci(debiased_test_pred, "m3_pred", "sales-My(X)", "price-Mt(X)", min_periods=50, steps=200) x = np.array(range(len(cumm_gain))) plt.plot(x/x.max(), cumm_gain, color="C1") plt.plot([0, 1], [0, elast(debiased_test_pred, "sales-My(X)", "price-Mt(X)")], linestyle="--", label="Random Model", color="black") plt.xlabel("% of Top Elast. Customers") plt.ylabel("Cumulative Gain") plt.title("Cumulative Gain on Debiased Sample") plt.legend(); ``` Notice how similar these plots are to the ones in the previous chapter. This is some indication that the debiasing worked wonders here. In contrast, let's see what the cumulative gain plot would look like if we used the original, biased data. ```python plt.figure(figsize=(10,6)) cumm_gain = cumulative_gain_ci(debiased_test_pred, "m3_pred", "sales", "price", min_periods=50, steps=200) x = np.array(range(len(cumm_gain))) plt.plot(x/x.max(), cumm_gain, color="C1") plt.plot([0, 1], [0, elast(debiased_test_pred, "sales", "price")], linestyle="--", label="Random Model", color="black") plt.xlabel("% of Top Elast. 
Customers") plt.title("Cumulative Gains on Biased Sample") plt.ylabel("Cumulative Gains") plt.legend(); ``` First thing you should notice is that the average elasticity goes up, instead of down. We've seen this before. In the biased data, it looks like sales goes up as price increases. As a result, the final point in the cumulative gain plot is positive. This makes little sense, since we now people don't buy more as we increase ice cream prices. If the average price elasticity is already messed up, any ordering in it also makes little sense. The bottom line being that this data should not be used for model evaluation. ## Orthogonalization with Machine Learning In a 2016 paper, Victor Chernozhukov *et all* showed that you can also do orthogonalization with machine learning models. This is obviously very recent science and we still have much to discover on what we can and can't do with ML models. Still, it's a very interesting idea to know about. The nuts and bolts are pretty much the same to what we've already covered. The only difference is that now, we use machine learning models for the debiasing. $$ \begin{align} \hat{y}_i &= y_i - M_y(X_i) \\ \hat{t}_i &= t_i - M_t(X_i) \end{align} $$ There is a catch, though. As we know very well, machine learning models are so powerful that they can fit the data perfectly, or rather, overfit. Just by looking at the equations above, we can know what will happen in that case. If $M_y$ somehow overfitts, the residuals will all be very close to zero. If that happens, it will be hard to find how $t$ affects it. Similarly, if $M_t$ somehow overfitts, its residuals will also be close to zero. Hence, there won't be variation in the treatment residual to see how it can impact the outcome. To account for that, we need to do sample splitting. That is, we estimate the model with one part of the dataset and we make predictions in the other part. The simplest way to do this is to split the test sample in half, make two models in such a way that each one is estimated in one half of the dataset and makes predictions in the other half. A slightly more elegant implementation uses K-fold cross validation. The advantage being that we can train all the models on a sample which is bigger than half the test set. Fortunately, this sort of cross prediction is very easy to implement using Sklearn's `cross_val_predict` function. ```python from sklearn.model_selection import cross_val_predict from sklearn.ensemble import RandomForestRegressor X = ["cost", "weekday", "temp"] t = "price" y = "sales" folds = 5 np.random.seed(123) m_t = RandomForestRegressor(n_estimators=100) t_res = test[t] - cross_val_predict(m_t, test[X], test[t], cv=folds) m_y = RandomForestRegressor(n_estimators=100) y_res = test[y] - cross_val_predict(m_y, test[X], test[y], cv=folds) ``` Now that we have the residuals, let's store them as columns on a new dataset. 
```python ml_debiased_test = test.assign(**{ "sales-ML_y(X)": y_res, "price-ML_t(X)": t_res, }) ml_debiased_test.head() ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>temp</th> <th>weekday</th> <th>cost</th> <th>price</th> <th>sales</th> <th>sales-ML_y(X)</th> <th>price-ML_t(X)</th> </tr> </thead> <tbody> <tr> <th>7791</th> <td>20.8</td> <td>3</td> <td>1.5</td> <td>6.3</td> <td>187</td> <td>-3.150833</td> <td>-0.869267</td> </tr> <tr> <th>1764</th> <td>26.6</td> <td>3</td> <td>1.5</td> <td>6.3</td> <td>201</td> <td>-0.418857</td> <td>-0.192867</td> </tr> <tr> <th>5785</th> <td>24.0</td> <td>4</td> <td>1.0</td> <td>5.8</td> <td>186</td> <td>-2.515667</td> <td>0.790429</td> </tr> <tr> <th>3542</th> <td>20.9</td> <td>3</td> <td>1.5</td> <td>5.1</td> <td>180</td> <td>-11.718500</td> <td>-1.280460</td> </tr> <tr> <th>9250</th> <td>26.7</td> <td>5</td> <td>1.0</td> <td>7.0</td> <td>201</td> <td>-1.214167</td> <td>1.715117</td> </tr> </tbody> </table> </div> Finally, we can plot the debiased dataset. ```python np.random.seed(123) sns.scatterplot(data=ml_debiased_test.sample(1000), x="price-ML_t(X)", y="sales-ML_y(X)", hue="weekday"); ``` Once again, we've uncovered a negative price elasticity on sales. Actually, the plot is incredibly similar to the one we've got when using simple linear regression. But that's probably because this is a very simple dataset. The advantages of machine learning orthogonalization is that it can estimate more complicated functions. It can learn interactions and non linearities in a way that it's hard to encode into linear regression. Also, there is the advantage that some machine learning models (those bases on decision trees) are much simpler to run than linear regression. They can handle categorical data, outliers and even missing data, stuff that would require some attention if you are just using linear regression. Finally, before we close, I just need to cover one final common mistake that data scientists often make when they are introduced to this idea (been there, done that). If the treatment or the outcome is binary, one might think it is better to replace the machine learning regression models for their classification versions. However, this does not work. The theory of orthogonalization only functions under regression models, similarly with what we've seen a long time ago when talking about Instrumental Variables. To be honest, it is not that the model will fail miserably if you replace regression by classification, but I would advise against it. If the theory doesn't justify it, why run the risk? ## Key Ideas We've started the chapter by highlighting the necessity of random treatment assignment in order for our causal evaluation methods to work. This poses a problem in the case where random data is not available. To be clear, the safest solution in this case is to go and do some experiments in order to get random data. If that is out of the question, only then, we can rely on a clever alternative: transform our data to look as if the treatment has been randomly assigned. Here, we've covered how to do that using the principles of orthogonalization. First, we've built a model that uses our features $X$ to predict the treatment $t$ and get it's residuals. 
The idea being that the treatment residuals are, by definition, independent of the features used to construct them. In other words, the treatment residuals are orthogonal to the features. We can see these residuals as a version of the treatment where all the confounding bias due to $X$ has been removed. That alone is enough to make our data look as good as random.

But we can go one step further. We can build a model that predicts the outcome $y$ using the features $X$ but not the treatment, and also get its residuals. Again, the intuition is very similar. These outcome residuals are a version of the outcome where all the variance due to the features has been explained away. That model will hopefully explain a lot of the variance, making it easier to see the treatment effect.

Here we are using orthogonalization with the goal of debiasing our data for model evaluation. However, this technique is also used for other purposes. Namely, lots of causal inference models use orthogonalization as a first pre-processing step to ease the task of the causal inference model. We can say that orthogonalization forms the foundation of many modern causal inference algorithms.

## References

The things I've written here are mostly stuff from my head. I've learned them through experience. This means that they have **not** passed the academic scrutiny that good science often goes through. Instead, notice how I'm talking about things that work in practice, but I don't spend too much time explaining why that is the case. It's a sort of science from the streets, if you will. However, I am putting this up for public scrutiny, so, by all means, if you find something preposterous, open an issue and I'll address it to the best of my efforts.

This chapter is based on Chernozhukov, V. et al. (2016), Double/Debiased Machine Learning for Treatment and Causal Parameters. You can also check the original article by Frisch, R. and Waugh, F. V. (1933), Partial Time Regressions as Compared with Individual Trends.

## Contribute

Causal Inference for the Brave and True is an open-source material on causal inference, the statistics of science. It uses only free software, based on Python. Its goal is to be accessible monetarily and intellectually. If you found this book valuable and you want to support it, please go to [Patreon](https://www.patreon.com/causal_inference_for_the_brave_and_true). If you are not ready to contribute financially, you can also help by fixing typos, suggesting edits or giving feedback on passages you didn't understand. Just go to the book's repository and [open an issue](https://github.com/matheusfacure/python-causality-handbook/issues). Finally, if you liked this content, please share it with others who might find it useful and give it a [star on GitHub](https://github.com/matheusfacure/python-causality-handbook/stargazers).
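A final footnote, an addition of mine and not part of the original chapter: the last-stage regression described above, outcome residual on treatment residual, can be run directly on the ML-debiased columns built earlier to get a point estimate of the average price elasticity, together with a confidence interval.

```python
import statsmodels.formula.api as smf

# Final stage of the orthogonalization recipe: regress the outcome residual
# on the treatment residual. The slope is the average price elasticity with
# the confounders in X already partialled out by the two ML models.
final_stage_df = ml_debiased_test.rename(columns={
    "sales-ML_y(X)": "outcome_res",    # renamed because patsy formulas
    "price-ML_t(X)": "treatment_res",  # cannot handle these column names
})
final_stage = smf.ols("outcome_res ~ treatment_res", data=final_stage_df).fit()

print(final_stage.params["treatment_res"])          # expect a negative slope
print(final_stage.conf_int().loc["treatment_res"])  # 95% confidence interval
```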
e6ac047c68944f9609772ef0094434a97fee9926
455,086
ipynb
Jupyter Notebook
causal-inference-for-the-brave-and-true/Debiasing-with-Orthogonalization.ipynb
keesterbrugge/python-causality-handbook
4075476ee99422ed04ef3b2f8cabc982698f96b5
[ "MIT" ]
1
2021-12-21T12:59:17.000Z
2021-12-21T12:59:17.000Z
causal-inference-for-the-brave-and-true/Debiasing-with-Orthogonalization.ipynb
HAlicia/python-causality-handbook
d2614cb1fbf8ae621d08be0e71df39b7a0d9e524
[ "MIT" ]
null
null
null
causal-inference-for-the-brave-and-true/Debiasing-with-Orthogonalization.ipynb
HAlicia/python-causality-handbook
d2614cb1fbf8ae621d08be0e71df39b7a0d9e524
[ "MIT" ]
null
null
null
479.037895
87,180
0.931536
true
7,287
Qwen/Qwen-72B
1. YES 2. YES
0.899121
0.870597
0.782773
__label__eng_Latn
0.997291
0.656975
# Asymptotic solutions in short-times Projectile motion in a linear potential field with images is described by the equation $$y_{\tau \tau} + \alpha \frac{1}{(1 + \epsilon y)^2} + 1= 0,$$ with $y(0) = \epsilon$ and $y_{\tau}(0)=1$, and where $\epsilon \ll 1$ is expected. ```python import sympy as sym from sympy import init_printing init_printing(order='rev-lex') ``` ```python y, eps, a, b, t, alpha = sym.symbols('y, epsilon, a, b, t, alpha') y0 = sym.Function('y0')(t) y1 = sym.Function('y1')(t) y2 = sym.Function('y2')(t) y3 = sym.Function('y3')(t) y4 = sym.Function('y4')(t) ``` ```python y = sym.Eq(y0 + eps*y1 + eps**2*y2 + eps**3*y3 + eps**4*y4) # naive expansion class f(sym.Function): @classmethod def eval(cls, y): return y.lhs.diff(t,t) + alpha*1/(1 + eps*y.lhs)**2 + 1 #return y.lhs.diff(tau, tau) + eps/y.lhs**2 ``` ```python the_series = sym.series(f(y), eps, x0=0, n=5) by_order = sym.collect(the_series, eps, evaluate=False) the_series ``` ### $\mathcal{O} \left( 1 \right) \mbox{Solution}$ ```python sym.Eq(by_order[1].removeO()) ``` ```python eqn = sym.Eq(by_order[1].removeO()) #1 + y0(tau).diff(tau, tau)) soln0 = sym.dsolve(eqn, y0) constants = sym.solve([soln0.rhs.subs(t,0) - 0, \ soln0.rhs.diff(t).subs(t,0) - 1]) C1, C2 = sym.symbols('C1 C2') soln0 = soln0.subs(constants) print(sym.latex(soln0)) soln0 ``` ### $\mathcal{O} \left( \epsilon \right) \mbox{Solution}$ ```python by_order[eps] ``` ```python try: eqn = sym.Eq(by_order[eps].replace(y0, soln0.rhs)) except NameError: eqn = sym.Eq(by_order[eps]) soln1 = sym.dsolve(eqn, y1) constants = sym.solve([soln1.rhs.subs(t,0) - 0, \ soln1.rhs.diff(t,1).subs(t,0) - 0]) C1, C2 = sym.symbols('C1 C2') soln1 = soln1.subs(constants) soln1 ``` ### $\mathcal{O} \left( \epsilon^2 \right) \mbox{Solution}$ ```python by_order[eps**2] ``` ```python try: eqn = sym.Eq(by_order[eps**2].replace(y1, soln1.rhs).replace(y0, soln0.rhs)) except NameError: eqn = sym.Eq(by_order[eps**2].replace(y1, soln1.rhs)) soln2 = sym.dsolve(eqn, y2) constants = sym.solve([soln2.rhs.subs(t,0) - 0, \ soln2.rhs.diff(t,1).subs(t,0) - 0]) C1, C2 = sym.symbols('C1 C2') soln2 = soln2.subs(constants) sym.factor(soln2) ``` ### $\mathcal{O} \left( \epsilon^3 \right) \mbox{Solution}$ ```python by_order[eps**3] ``` ```python try: eqn = sym.Eq(by_order[eps**3].replace(y2, soln2.rhs).replace(y1, soln1.rhs).replace(y0, soln0.rhs)) except NameError: eqn = sym.Eq(by_order[eps**3].replace(y2, soln2.rhs)) soln3 = sym.dsolve(eqn, y3) constants = sym.solve([soln3.rhs.subs(t,0) - 0, \ soln3.rhs.diff(t,1).subs(t,0) - 0]) C1, C2 = sym.symbols('C1 C2') soln3 = soln3.subs(constants) sym.factor(soln3) ``` ### $\mathcal{O} \left( \epsilon^4 \right) \mbox{Solution}$ ```python by_order[eps**4] ``` ```python try: eqn = sym.Eq(by_order[eps**4].replace(y3, soln3.rhs).replace( y2, soln2.rhs).replace(y1, soln1.rhs).replace(y0, soln0.rhs)) except NameError: eqn = sym.Eq(by_order[eps**4].replace(y3, soln3.rhs)) soln4 = sym.dsolve(eqn, y4) constants = sym.solve([soln4.rhs.subs(t,0) - 0, \ soln4.rhs.diff(t,1).subs(t,0) - 0]) C1, C2 = sym.symbols('C1 C2') soln4 = soln4.subs(constants) sym.factor(soln4) ``` ### $\mbox{Composite Solution}$ ```python y_comp = sym.symbols('y_{comp}', cls=sym.Function) try: y_comp = sym.Eq(y_comp, soln0.rhs + eps*soln1.rhs + eps**2*soln2.rhs + eps**3*soln3.rhs + eps**4*soln4.rhs) # + eps**2*soln2.rhs) except NameError: y_comp = sym.Eq(y_comp, eps*soln1.rhs + eps**2*soln2.rhs + eps**3*soln3.rhs + eps**4*soln4.rhs) # + eps**2*soln2.rhs) #print(sym.latex(y_comp)) y_comp 
print(str(y_comp.rhs.subs(t, 1))) ``` -alpha/2 + epsilon**4*(alpha*(159*alpha + 100)/420 + alpha*(-305*alpha**2 - 441*alpha - 150)/1120 + alpha*(3548*alpha**3 + 8424*alpha**2 + 6453*alpha + 1575)/45360 + alpha*(-3548*alpha**4 - 11972*alpha**3 - 14877*alpha**2 - 8028*alpha - 1575)/453600 - alpha/6) + epsilon**3*(alpha*(-17*alpha - 12)/60 + alpha*(73*alpha**2 + 117*alpha + 45)/630 + alpha*(-73*alpha**3 - 190*alpha**2 - 162*alpha - 45)/5040 + alpha/5) + epsilon**2*(alpha*(11*alpha + 9)/60 + alpha*(-11*alpha**2 - 20*alpha - 9)/360 - alpha/4) + epsilon*(alpha*(-alpha - 1)/12 + alpha/3) + 1/2 ### $\mbox{The Trajectory}$ ```python def savefig(filename, pics): if pics == True: plt.savefig('../doc/figures/{}.pgf'.format(filename), bbox_inches='tight', dpi=400) else: pass pics = True ``` ```python import matplotlib.pyplot as plt import matplotlib import numpy as np import scipy as sp %config InlineBackend.figure_format = 'retina' #plt.rc('text', usetex=True) #plt.rc('font', family='serif') #plt.rcParams['figure.dpi'] = 300 %matplotlib inline matplotlib.rcParams.update( { 'text.color': 'k', 'xtick.color': 'k', 'ytick.color': 'k', 'axes.labelcolor': 'k' }) plt.rc('font', size=14) eps_val = [.1, .5, 1.][::-1] linestyle = ['rs--', 'bo-', 'cv-.', 'k+:', 'm'] tt = sp.arange(0,1.2,0.001) al = [2, 1., .5, .01] fig, axs = plt.subplots(2,2, figsize=(10, 8), sharex='col', sharey='row') fig.subplots_adjust(hspace = .2, wspace=.2) axs = axs.ravel() i = 0 for aas in al: yc = y_comp.rhs.subs(alpha, aas) #plt.figure(figsize=(6, 4), dpi=100) for keys, vals in enumerate(eps_val): y_compP = sym.lambdify(t, yc.subs(eps, vals), 'numpy') if aas == 2: label='$\mathbf{E}\mbox{u}=$'+ ' {}'.format(vals).rstrip('0').rstrip('.') else: label=None axs[i].plot(tt, y_compP(tt), linestyle[keys],label=label, markevery=100) axs[i].set_ylim(ymin=0., ymax=0.5) axs[i].set_xlim(xmax=1.05) axs[i].tick_params(axis='both', which='major', labelsize=16) leg = axs[i].legend(title = r'$\mathbf{I}\mbox{g} = $' + ' {:1.2f}'.format(aas).rstrip('0').rstrip('.'), loc=2) leg.get_frame().set_linewidth(0.0) i += 1 fig.text(0.5, -0.01, r'$t^*$', ha='center', fontsize=20) fig.text(-0.03, 0.5, r'$y^*$', va='center', rotation='vertical', fontsize=20) fig.tight_layout() savefig('short_times', pics) plt.show() ``` ```python eps_val = [.01, .1, 1.][::-1] linestyle = ['rs--', 'bo-', 'cv-.', 'k+:', 'm'] tt = sp.arange(0,2.5,0.001) yc = y_comp.rhs.subs(alpha, eps*0.0121 + 0.2121) plt.figure(figsize=(6, 4))#, dpi=100) for keys, vals in enumerate(eps_val): y_compP = sym.lambdify(t, yc.subs(eps, vals), 'numpy') plt.plot(tt, y_compP(tt), linestyle[keys], label='$\mathbf{E}\mbox{u} =$'+ ' {}'.format(vals).rstrip('0').rstrip('.'), markevery=100) plt.ylim(ymin=0., ymax=0.5) plt.xlim(xmax=2.05) plt.legend() plt.xlabel(r'$t^*$') plt.ylabel(r'$y^*$') #savefig('short_times_better', pics) plt.show() ``` ## Time aloft ```python y2 = sym.symbols('y2', cls=sym.Function) y2 = sym.Function('y2')(t) try: y2 = sym.Eq(y2, soln0.rhs + eps*soln1.rhs + eps**2*soln2.rhs) # + eps**2*soln2.rhs) except NameError: y2 = sym.Eq(y2, eps*soln1.rhs + eps**2*soln2.rhs) y2.rhs #y2.diff(t) ``` ```python tau0, tau1, tau2 = sym.symbols('tau0 tau1 tau2') tau = sym.Eq(tau0 + eps*tau1 + eps**2*tau2) y3 = y2.rhs.subs(t, tau.lhs).series(eps) col = sym.collect(y3, eps, evaluate=False) ``` ### $\mathcal{O} \left( 1 \right) \mbox{Solution}$ ```python #tau0 = 2 sym.Eq(col[1].removeO()) ``` ### $\mathcal{O} \left( \epsilon \right) \mbox{Solution}$ ```python order_eps = col[eps].subs(tau0, 2) order_eps 
soln_eps = sym.solve(order_eps, tau1) ``` ### $\mathcal{O} \left( \epsilon^2 \right) \mbox{Solution}$ ```python order_eps2 = col[eps**2].subs(tau0, 2).subs(tau1, soln_eps[0]) order_eps2 soln_eps2 = sym.solve(order_eps2, tau2) ``` ### Composite Solution Using the linear regression for Im. ```python tau0, tau1, tau2 = sym.symbols('tau0 tau1 tau2') tau = sym.Eq(tau0 + eps*tau1 + eps**2*tau2) tau = tau.subs(tau0, 2).subs(tau1, soln_eps[0]).subs(tau2, soln_eps2[0]) print(str(tau.subs(alpha, eps*0.0121 + 0.2121))) tau.subs(alpha, eps*0.0121 + 0.2121) ``` ```python ttt = np.arange(0.01, 2.,0.001) #betas = [bet] linestyle = ['k','rs--', 'bo-', 'cv-.', 'k+:', 'm'] plt.figure(figsize=(6, 4), dpi=100) #taun = tau.subs(beta, vals) tau_soln = sym.lambdify(eps, tau.lhs.subs(alpha, eps*0.0121 + 0.2121), 'numpy') plt.semilogx(ttt, tau_soln(ttt), 'k', markevery=100) plt.xlabel(r'$\mathbf{E}\mbox{u}$') plt.ylabel(r'$t_f$') #plt.legend() #savefig('drag', pics) plt.show(); ``` ```python ```
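A useful sanity check that is not part of the original notebook: integrate the full equation $y_{\tau \tau} + \alpha/(1 + \epsilon y)^2 + 1 = 0$ numerically and overlay the composite expansion. The initial conditions used below, $y(0)=0$ and $y'(0)=1$, are the ones actually imposed on the solutions above; the parameter values are picked only for illustration.

```python
from scipy.integrate import solve_ivp
import numpy as np
import matplotlib.pyplot as plt
import sympy as sym

# Representative parameter values, chosen only for illustration.
eps_val, alpha_val = 0.5, 1.0

# The full equation y'' = -alpha/(1 + eps*y)**2 - 1, written as a first-order system.
def rhs(time, state):
    y_val, v_val = state
    return [v_val, -alpha_val / (1.0 + eps_val * y_val)**2 - 1.0]

numerical = solve_ivp(rhs, (0.0, 1.2), [0.0, 1.0], dense_output=True,
                      rtol=1e-10, atol=1e-12)

# Composite perturbation solution evaluated at the same parameter values.
y_pert = sym.lambdify(t, y_comp.rhs.subs({alpha: alpha_val, eps: eps_val}), 'numpy')

t_grid = np.linspace(0.0, 1.2, 200)
plt.plot(t_grid, numerical.sol(t_grid)[0], 'k-', label='numerical')
plt.plot(t_grid, y_pert(t_grid), 'r--', label='composite expansion')
plt.xlabel(r'$t^*$')
plt.ylabel(r'$y^*$')
plt.legend()
plt.show()
```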
0fbe3edc396fd2af8b4abdfe50105d4276d5e4f9
315,280
ipynb
Jupyter Notebook
src/asymptotic-short.ipynb
7deeptide/Thesis_scratch
d776d57f642de4df718c1f655f080c8fe402e092
[ "MIT" ]
null
null
null
src/asymptotic-short.ipynb
7deeptide/Thesis_scratch
d776d57f642de4df718c1f655f080c8fe402e092
[ "MIT" ]
null
null
null
src/asymptotic-short.ipynb
7deeptide/Thesis_scratch
d776d57f642de4df718c1f655f080c8fe402e092
[ "MIT" ]
null
null
null
335.404255
139,820
0.911215
true
3,245
Qwen/Qwen-72B
1. YES 2. YES
0.817574
0.727975
0.595174
__label__kor_Hang
0.13463
0.221119
# Introduction to the Harmonic Oscillator

*Note:* Much of this is adapted/copied from https://flothesof.github.io/harmonic-oscillator-three-methods-solution.html

This week we are going to begin studying molecular dynamics, which uses classical mechanics to study molecular systems. Our "hydrogen atom" in this section will be the 1D harmonic oscillator.

The harmonic oscillator is a system that, when displaced from its equilibrium position, experiences a restoring force F proportional to the displacement x:

$$F=-kx$$

The potential energy of this system is

$$V = {1 \over 2}k{x^2}$$

These are sometimes rewritten as

$$ F=- \omega_0^2 m x, \text{ } V(x) = {1 \over 2} \omega_0^2 m {x^2}$$

where $\omega_0 = \sqrt {{k \over m}}$.

In classical mechanics, our goal is to determine the equations of motion, $x(t),y(t)$, that describe our system.

In this notebook we will use sympy to solve a second-order ordinary differential equation.

## 1. Solving differential equations with sympy

Solving differential equations can be tough, and there is not always a set plan on how to proceed. Luckily for us, the harmonic oscillator is the classic second-order differential equation.

Consider the following second order differential equation

$$ay(t)''+by(t)'=c$$

where $y(t)'' = {{{d^2}y} \over {dt^2}}$, and $y(t)' = {{{d}y} \over {dt}}$.

We can rewrite this as a linear differential equation with everything collected on one side:

$$ay(t)''+by(t)'-c=0$$

The goal here is to find $y(t)$, similar to our classical mechanics problems. Let's use sympy to solve this equation.

### Second order ordinary differential equation

First we import the sympy library

```python
import sympy as sym
```

Next we initialize pretty printing

```python
sym.init_printing()
```

Next we will set our symbols

```python
t,a,b,c=sym.symbols("t,a,b,c")
```

Now for something new. We can define functions using `sym.Function("f")`

```python
y=sym.Function("y")
y(t)
```

Now, if I want to define a first or second derivative, I can use `sym.diff`

```python
sym.diff(y(t),(t,1)),sym.diff(y(t),(t,2))
```

My differential equation can be written as follows

```python
dfeq=a*sym.diff(y(t),(t,2))+b*sym.diff(y(t),(t,1))-c
dfeq
```

```python
sol = sym.dsolve(dfeq)
sol
```

The two constants $C_1$ and $C_2$ can be determined by setting boundary conditions. First, we can set the condition $y(t=0)=y_0$. The next initial condition we will set is $y'(t=0)=v_0$.

To set up the equality we want to solve, we are using `sym.Eq`. This function sets up an equality between the lhs and rhs of an equation.

```python
# sym.Eq example
alpha,beta=sym.symbols("alpha,beta")
sym.Eq(alpha+2,beta)
```

Back to the actual problem

```python
y0,v0=sym.symbols("y_0,v_0")
ics=[sym.Eq(sol.args[1].subs(t, 0), y0),
     sym.Eq(sol.args[1].diff(t).subs(t, 0), v0)]
ics
```

We can use this result to first solve for $C_2$ and then solve for $C_1$. Or we can use sympy to solve this for us.

```python
solved_ics=sym.solve(ics)
solved_ics
```

Substitute the result back into $y(t)$

```python
full_sol = sol.subs(solved_ics[0])
full_sol
```

We can plot this result too.
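Before plotting, a quick consistency check (this cell is an addition, not in the original notebook): `sym.checkodesol` substitutes a candidate solution back into the ODE and returns `(True, 0)` when the residual vanishes, so it should accept both the general solution and the one with the initial conditions substituted in.

```python
# Verify that the solutions actually satisfy the differential equation dfeq = 0.
print(sym.checkodesol(dfeq, sol))       # expected: (True, 0)
print(sym.checkodesol(dfeq, full_sol))  # expected: (True, 0)
```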
Assume that $a,b,c=1$ and that the starting conditions are $y_0=0,v_0=0$. We will use two sample problems: * case 1 : initial position is nonzero and initial velocity is zero * case 2 : initial position is zero and initial velocity is nonzero ```python # Print plots %matplotlib inline ``` #### Initial velocity set to zero ```python case1 = sym.simplify(full_sol.subs({y0:0, v0:0, a:1, b:1, c:1})) case1 ``` ```python sym.plot(case1.rhs) sym.plot(case1.rhs,(t,-2,2)) ``` #### Initial velocity set to one ```python case2 = sym.simplify(full_sol.subs({y0:0, v0:1, a:1, b:1, c:1})) case2 ``` ```python sym.plot(case2.rhs,(t,-2,2)) ``` ## Calculate the phase space As we will see in lecture, the state of our classical systems is defined as a point in phase space, a hyperspace defined by ${{\bf{r}}^N},{{\bf{p}}^N}$. We will convert our sympy expression into a numerical function so that we can plot the path of $y(t)$ in phase space $y,y'$. ```python case1 ``` ```python # Import numpy library import numpy as np # Make numerical functions out of symbolic expressions yfunc=sym.lambdify(t,case1.rhs,'numpy') vfunc=sym.lambdify(t,case1.rhs.diff(t),'numpy') # Make list of numbers tlst=np.linspace(-2,2,100) # Import pyplot import matplotlib import matplotlib.pyplot as plt # Make plot plt.plot(yfunc(tlst),vfunc(tlst)) plt.xlabel('$y$') plt.ylabel("$y'$") plt.show() ``` ### Exercise 1.1 Change the starting conditions and see how that changes the plots. Make three different plots with different starting conditions ```python # Import numpy library import numpy as np # new cases case2 = sym.simplify(full_sol.subs({y0:2, v0:10, a:1, b:1, c:1})) case3 = sym.simplify(full_sol.subs({y0:3, v0:2, a:1, b:1, c:1})) case4 = sym.simplify(full_sol.subs({y0:1, v0:5, a:1, b:1, c:1})) # Make numerical functions out of symbolic expressions yfunc2=sym.lambdify(t,case2.rhs,'numpy') vfunc2=sym.lambdify(t,case2.rhs.diff(t),'numpy') yfunc3=sym.lambdify(t,case3.rhs,'numpy') vfunc3=sym.lambdify(t,case3.rhs.diff(t),'numpy') yfunc4=sym.lambdify(t,case4.rhs,'numpy') vfunc4=sym.lambdify(t,case4.rhs.diff(t),'numpy') # Make list of numbers tlst=np.linspace(-2,2,100) # Import pyplot import matplotlib import matplotlib.pyplot as plt # Make plot plt.plot(yfunc2(tlst),vfunc2(tlst)) plt.plot(yfunc3(tlst),vfunc3(tlst)) plt.plot(yfunc4(tlst),vfunc4(tlst)) plt.xlabel('$y$') plt.ylabel("$y'$") plt.show() ``` ## 2. Harmonic oscillator Applying the harmonic oscillator force to Newton's second law leads to the following second order differential equation $$ F = m a $$ $$ F= - \omega_0^2 m x $$ $$ a = - \omega_0^2 x $$ $$ x(t)'' = - \omega_0^2 x $$ The final expression can be rearranged into a second-order homogeneous differential equation, and can be solved using the methods we used above. Your goal is to determine and plot the equations of motion of a 1D harmonic oscillator. ### Exercise 2.1 1. Use the methodology above to determine the equations of motion $x(t), v(t)$ for a harmonic oscillator 1. Solve for any constants by using the following initial conditions: $x(0)=x_0, v(0)=v_0$ 1. Show expressions for and plot the equations of motion for the following cases: 1. $x(0)=0, v(0)=0$ 1. $x(0)=0, v(0)>0$ 1. $x(0)>0, v(0)=0$ 1. $x(0)<0, v(0)=0$ 1.
Plot the phasespace diagram for the harmonic oscillator ```python import sympy as sym sym.init_printing() F,t,a,m,omega0 = sym.symbols("F,t,a,m,omega_0") x = sym.Function("x") a = sym.diff(x(t),(t,2))+ omega0**2*x(t) sola = sym.dsolve(a) sola x0,v0 = sym.symbols("x_0,v_0") con = [sym.Eq(sola.args[1].subs(t, 0), x0), sym.Eq(sola.args[1].diff(t).subs(t, 0), v0)] con consolve = sym.solve(con) consolve full_sol = sola.subs(consolve[0]) full_sol ``` ```python import numpy as np import sympy as sym import matplotlib import matplotlib.pyplot as plt case1 = sym.simplify(full_sol.subs({x0:0, v0:0, omega0:1})) case2 = sym.simplify(full_sol.subs({x0:0, v0:1, omega0:1})) case3 = sym.simplify(full_sol.subs({x0:1, v0:0, omega0:1})) case4 = sym.simplify(full_sol.subs({x0:-1, v0:0, omega0:1})) sym.plot(case1.rhs,(t,-10,10)) sym.plot(case2.rhs,(t,-10,10)) sym.plot(case3.rhs,(t,-10,10)) sym.plot(case4.rhs,(t,-10,10)) ``` ```python import numpy as np import sympy as sym import matplotlib import matplotlib.pyplot as plt case1 = sym.simplify(full_sol.subs({x0:0, v0:0, omega0:1})) case2 = sym.simplify(full_sol.subs({x0:0, v0:1, omega0:1})) case3 = sym.simplify(full_sol.subs({x0:1, v0:0, omega0:1})) case4 = sym.simplify(full_sol.subs({x0:-1, v0:0, omega0:1})) xfunc1 = sym.lambdify(t,case1.rhs,'numpy') vfunc1 = sym.lambdify(t,case1.rhs.diff(t),'numpy') xfunc2 = sym.lambdify(t,case2.rhs,'numpy') vfunc2 = sym.lambdify(t,case2.rhs.diff(t),'numpy') xfunc3 = sym.lambdify(t,case3.rhs,'numpy') vfunc3 = sym.lambdify(t,case3.rhs.diff(t),'numpy') xfunc4 = sym.lambdify(t,case4.rhs,'numpy') vfunc4 = sym.lambdify(t,case4.rhs.diff(t),'numpy') plt.plot(xfunc1(tlst),vfunc1(tlst)) plt.xlabel('$x$') plt.ylabel("$x'$") plt.show() plt.plot(xfunc2(tlst),vfunc2(tlst)) plt.xlabel('$x$') plt.ylabel("$x'$") plt.show() plt.plot(xfunc3(tlst),vfunc3(tlst)) plt.xlabel('$x$') plt.ylabel("$x'$") plt.show() plt.plot(xfunc4(tlst),vfunc4(tlst)) plt.xlabel('$x$') plt.ylabel("$x'$") plt.show() ```
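As a quick, optional sanity check (a minimal sketch, self-contained and independent of the exercise code above), sympy's `checkodesol` can confirm that the general solution returned by `dsolve` really satisfies the oscillator equation $x'' + \omega_0^2 x = 0$:

```python
import sympy as sym
from sympy.solvers.ode import checkodesol

t, omega0 = sym.symbols("t omega_0", positive=True)
x = sym.Function("x")

# The 1D harmonic oscillator ODE and its general solution
ode = sym.Eq(x(t).diff(t, 2) + omega0**2 * x(t), 0)
sol = sym.dsolve(ode)

# checkodesol substitutes the solution back into the ODE;
# (True, 0) means the residual simplifies to zero.
print(checkodesol(ode, sol))
```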
a0d6cd6f0c288eb69d25ad4db8b32c8f916abb06
262,903
ipynb
Jupyter Notebook
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-jonathanyuan123
a52d4d8d63d8dc11feb307649c721768c5c0c005
[ "MIT" ]
null
null
null
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-jonathanyuan123
a52d4d8d63d8dc11feb307649c721768c5c0c005
[ "MIT" ]
null
null
null
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-jonathanyuan123
a52d4d8d63d8dc11feb307649c721768c5c0c005
[ "MIT" ]
null
null
null
254.258221
28,316
0.921184
true
2,726
Qwen/Qwen-72B
1. YES 2. YES
0.847968
0.868827
0.736737
__label__eng_Latn
0.935202
0.550019
```python import numpy as np import matplotlib.pyplot as plt import sympy as sp import control as co s = sp.Symbol('s', real=True) k = sp.Symbol('k', real=True) ``` ```python Ka = 1/((s+1)*(s+2)*(s+3)) Ka ``` $\displaystyle \frac{1}{\left(s + 1\right) \left(s + 2\right) \left(s + 3\right)}$ ```python Ka = sp.expand(Ka) Ka ``` $\displaystyle \frac{1}{s^{3} + 6 s^{2} + 11 s + 6}$ ```python Kr = 1 Kr ``` 1 ```python CLS = (Ka*Kr)/(1+Ka*Kr) CLS = sp.expand(CLS) CLS ``` $\displaystyle \frac{1}{s^{3} + \frac{s^{3}}{s^{3} + 6 s^{2} + 11 s + 6} + 6 s^{2} + \frac{6 s^{2}}{s^{3} + 6 s^{2} + 11 s + 6} + 11 s + \frac{11 s}{s^{3} + 6 s^{2} + 11 s + 6} + 6 + \frac{6}{s^{3} + 6 s^{2} + 11 s + 6}}$ ```python ```
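The expanded closed-loop expression above is hard to read because the nested fraction is left in place. A minimal follow-up sketch (assuming the same `Ka` and `Kr` as in the cells above) showing how sympy's `cancel` collapses it into a single rational transfer function:

```python
import sympy as sp

s = sp.Symbol('s', real=True)

# Open-loop plant and unity controller gain, as in the notebook above
Ka = 1 / ((s + 1) * (s + 2) * (s + 3))
Kr = 1

# cancel() rewrites the nested fraction as a single ratio of polynomials
CLS = sp.cancel(Ka * Kr / (1 + Ka * Kr))
print(CLS)  # expected: 1/(s**3 + 6*s**2 + 11*s + 7)
```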
f51b01a5cc7421012cc45e5345baacba3b5ae3fc
2,818
ipynb
Jupyter Notebook
CW/CW8/a.ipynb
John15321/TR
82dd73425dde3a5f3f50411b2ee03be88e8bf65e
[ "MIT" ]
8
2021-02-04T10:39:41.000Z
2021-04-15T13:32:46.000Z
CW/CW8/a.ipynb
John15321/TR
82dd73425dde3a5f3f50411b2ee03be88e8bf65e
[ "MIT" ]
null
null
null
CW/CW8/a.ipynb
John15321/TR
82dd73425dde3a5f3f50411b2ee03be88e8bf65e
[ "MIT" ]
null
null
null
22.365079
249
0.456352
true
334
Qwen/Qwen-72B
1. YES 2. YES
0.94079
0.731059
0.687772
__label__kor_Hang
0.259864
0.436257
# Generative Learning Algorithms ## Discriminative & Generative Discriminative: try to learn $p(y|x)$ directly, such as logistic regression,<br> or try to learn mappings from the space of inputs to the labels $\left \{ 0, 1\right \}$ directly, such as the perceptron. Generative: algorithms that try to model $p(x|y)$ and $p(y)$ (called the class prior), such as Gaussian discriminant analysis and naive Bayes. When predicting, use Bayes' rule: $$p(y|x)=\frac{p(x|y)p(y)}{p(x)}$$ then: $$\hat{y}=\underset{y}{argmax}\ \frac{p(x|y)p(y)}{p(x)}= \underset{y}{argmax}\ {p(x|y)p(y)}$$ ## Gaussian Discriminant Analysis (GDA) Multivariate normal distribution: the Gaussian distribution is parameterized by a mean vector $\mu \in \mathbb{R}^{d}$ and a covariance matrix $\Sigma \in \mathbb{R}^{d \times d}$, where $\Sigma \succeq 0$ is symmetric and positive semi-definite. Also written $\mathcal{N}(\mu,\Sigma)$, its density is given by: $$p(x;\mu,\Sigma)=\frac{1}{(2\pi)^{d/2}\left | \Sigma \right |^{1/2} }exp\left ( -\frac{1}{2}(x-\mu)^{T}{\Sigma}^{-1}(x-\mu)\right )$$ Unsurprisingly, for a random variable $X \sim \mathcal{N}(\mu,\Sigma)$: $$E[X] = \int_{x}xp(x;\mu, \Sigma)dx = \mu$$ $$Cov(X) = E[(X - E(X))(X - E(X))^{T}] = \Sigma$$ GDA: when we have a classification problem in which the input features $x$ are continuous, we can use GDA, which models $p(x|y)$ using a multivariate normal distribution: $$y\sim Bernoulli(\phi)$$ $$x | y=0 \sim \mathcal{N}(\mu_{0},\Sigma) $$ $$x | y=1 \sim \mathcal{N}(\mu_{1},\Sigma) $$ Writing out the distributions: $$p(y) = \phi^{y}(1 - \phi)^{1 - y}$$ $$p(x| y=0)=\frac{1}{(2\pi)^{d/2}\left | \Sigma \right |^{1/2} }exp\left (-\frac{1}{2}(x-\mu_{0})^{T}{\Sigma}^{-1}(x-\mu_{0})\right )$$ $$p(x| y=1)=\frac{1}{(2\pi)^{d/2}\left | \Sigma \right |^{1/2} }exp\left (-\frac{1}{2}(x-\mu_{1})^{T}{\Sigma}^{-1}(x-\mu_{1})\right )$$ The log-likelihood of the data is given by: $$ \begin{equation} \begin{split} l(\phi,\mu_{0},\mu_{1},\Sigma) &= log\prod_{i=1}^{n}p(x^{(i)},y^{(i)};\phi,\mu_{0},\mu_{1},\Sigma) \\ &= log\prod_{i=1}^{n}p(x^{(i)}|y^{(i)};\mu_{0},\mu_{1},\Sigma)p(y^{(i)};\phi) \end{split} \end{equation} $$ We find that the maximum likelihood estimates of the parameters are: $$\phi = \frac{1}{n}\sum_{i=1}^{n}1\left \{ y^{(i)}=1 \right \}$$ $$\mu_{0} = \frac{\sum_{i=1}^{n}1\left \{ y^{(i)}=0 \right \}x^{(i)} }{\sum_{i=1}^{n}1\left \{ y^{(i)}=0 \right \}} $$ $$\mu_{1} = \frac{\sum_{i=1}^{n}1\left \{ y^{(i)}=1 \right \}x^{(i)} }{\sum_{i=1}^{n}1\left \{ y^{(i)}=1 \right \}} $$ $$\Sigma=\frac{1}{n}\sum_{i=1}^{n}(x^{(i)} - \mu_{y^{(i)}})(x^{(i)} - \mu_{y^{(i)}})^{T}$$ as we wanted. ## GDA and logistic regression The GDA model has an interesting relationship to logistic regression.
If we view the quantity $p(y=1|x; \phi,\Sigma,\mu_{0},\mu_{1})$ as a function of $x$, we'll find that it can be expressed in the form: $$ p(y=1|x; \phi,\Sigma,\mu_{0},\mu_{1}) = \frac{1}{1 + exp(-\theta^{T}x)} $$ where $\theta$ is some appropriate function of $\phi,\Sigma,\mu_{0},\mu_{1}$.<br> Proof: $$ \begin{equation} \begin{split} p(y=1|x; \phi,\Sigma,\mu_{0},\mu_{1}) &= \frac{p(y=1, x)}{p(x)} \\ &=\frac{p(x|y=1)p(y=1)}{p(x|y=1)p(y=1) + p(x|y=0)p(y=0)} \\ &=\frac{\phi\cdot exp\left (-\frac{1}{2}(x-\mu_{1})^{T}{\Sigma}^{-1}(x-\mu_{1})\right )}{\phi\cdot exp\left (-\frac{1}{2}(x-\mu_{1})^{T}{\Sigma}^{-1}(x-\mu_{1})\right ) + (1 - \phi)\cdot exp\left (-\frac{1}{2}(x-\mu_{0})^{T}{\Sigma}^{-1}(x-\mu_{0})\right )} \\ &=\frac{1}{1 + \frac{1 - \phi}{\phi}exp\left(-\frac{1}{2}\left((x-\mu_{0})^{T}{\Sigma}^{-1}(x-\mu_{0}) - (x-\mu_{1})^{T}{\Sigma}^{-1}(x-\mu_{1})\right)\right)} \\ &=\frac{1}{1 + \frac{1 - \phi}{\phi}exp\left(-\frac{1}{2}\left((\mu_{1}^T -\mu_{0}^T)\Sigma^{-1}x + x^{T}\Sigma^{-1}(\mu_{1}-\mu_{0}) + (\mu_{0}^{T}\Sigma^{-1}\mu_{0} - \mu_{1}^{T}\Sigma^{-1}\mu_{1})\right)\right)} \end{split} \end{equation} $$ Because $x^{T}a=a^{T}x$, and after absorbing the constant terms (including $log\frac{1 - \phi}{\phi}$) into an intercept term, i.e. augmenting $x$ with a constant 1, we can express the above as: $$\frac{1}{1 + exp(-\theta^{T}x)}$$ So the separating surface of GDA is $\theta^{T}x=0$. Like logistic regression, GDA can thus be interpreted through a logistic model, but with a different estimation strategy. ## Naive Bayes In GDA, $x$ is continuous; when $x$ is discrete, we can use naive Bayes. Suppose $x=(x_{1}, x_{2},..., x_{d})$; for simplicity, we assume each $x_{j}$ is binary here. We of course have: $$ \begin{equation} \begin{split} p(x|y) &= p(x_{1},x_{2},...,x_{d}|y) \\ &= p(x_{1}|y)p(x_{2}|y,x_{1})...p(x_{d}|y,x_{1},...,x_{d-1}) \end{split} \end{equation} $$ We will assume that the $x_{j}$'s are conditionally independent given $y$. This assumption is called the naive Bayes assumption, and the resulting algorithm is called the naive Bayes classifier: $$ \begin{equation} \begin{split} p(x|y) &= p(x_{1}|y)p(x_{2}|y,x_{1})...p(x_{d}|y,x_{1},...,x_{d-1}) \\ &= p(x_{1}|y)p(x_{2}|y)...p(x_{d}|y) \end{split} \end{equation} $$ Our model is parameterized by: $$y\sim Bernoulli(\phi)$$ $$x_{j}|y=1 \sim Bernoulli(\phi_{j|y=1})$$ $$x_{j}|y=0 \sim Bernoulli(\phi_{j|y=0})$$ Note: $$k = \sum_{i=1}^{n}1\left\{y^{(i)}=1\right\}$$ $$s_{j} = \sum_{i=1}^{n}1\left\{x_{j}^{(i)}=1 \wedge y^{(i)}=1\right\}$$ $$t_{j} = \sum_{i=1}^{n}1\left\{x_{j}^{(i)}=1 \wedge y^{(i)}=0\right\}$$ We can write down the joint log-likelihood of the data: $$ \begin{equation} \begin{split} \mathcal{L}(\phi, \phi_{j|y=1}, \phi_{j|y=0}) &= log\prod_{i=1}^{n}p(x^{(i)},y^{(i)})\\ &=\sum_{i=1}^{n}log(p(x^{(i)}, y^{(i)})) \\ &=\sum_{i=1}^{n}log(p(y^{(i)})\prod_{j=1}^{d}p(x_{j}^{(i)}|y^{(i)})) \\ &=\sum_{y^{(i)}=1}(log(\phi) + \sum_{j=1}^{d}log(p(x_{j}^{(i)}|y=1))) + \sum_{y^{(i)}=0}(log(1 - \phi) + \sum_{j=1}^{d}log(p(x_{j}^{(i)}|y=0))) \\ &=k\ log(\phi) + (n-k)log(1 - \phi) + \sum_{j=1}^{d}(s_{j}\ log(\phi_{j|y=1}) + (k-s_{j})log(1 - \phi_{j|y=1}) + t_{j}\ log(\phi_{j|y=0}) + (n -k - t_{j})log(1 - \phi_{j|y=0})) \end{split} \end{equation} $$ Maximizing this is equivalent to maximizing each part separately, and we derive: $$\phi=\frac{k}{n}$$ $$\phi_{j|y=1} = \frac{s_{j}}{k}$$ $$\phi_{j|y=0} = \frac{t_{j}}{n-k}$$ as expected. ## Laplace Smoothing There is a problem with naive Bayes in its current form: if $x_{j}=1$ never occurs in the training set, then $p(x_{j}=1|y=1)=0,p(x_{j}=1|y=0)=0$.
Hence, when predicting a sample $x$ with $x_{j}=1$, we get: $$ \begin{equation} \begin{split} p(y=1|x) &= \frac{\prod_{k=1}^{d}p(x_{k}|y=1)p(y=1)}{\prod_{k=1}^{d}p(x_{k}|y=1)p(y=1) + \prod_{k=1}^{d}p(x_{k}|y=0)p(y=0)} \\ &= \frac{0}{0} \end{split} \end{equation} $$ so the classifier doesn't know how to predict. To fix this problem, instead of: $$\phi_{j|y=1}=\frac{\sum_{i=1}^{n}1\left\{x_{j}^{(i)}=1 \wedge y^{(i)}=1\right\}}{\sum_{i=1}^{n}1\left\{y^{(i)}=1\right\}}$$ $$\phi_{j|y=0}=\frac{\sum_{i=1}^{n}1\left\{x_{j}^{(i)}=1 \wedge y^{(i)}=0\right\}}{\sum_{i=1}^{n}1\left\{y^{(i)}=0\right\}}$$ we add 1 to the numerator and 2 to the denominator so that: 1. no parameter is exactly 0 2. $\phi_{j|y=1}$ and $\phi_{j|y=0}$ still define valid probability distributions. That is: $$\phi_{j|y=1}=\frac{1 + \sum_{i=1}^{n}1\left\{x_{j}^{(i)}=1 \wedge y^{(i)}=1\right\}}{2 + \sum_{i=1}^{n}1\left\{y^{(i)}=1\right\}}$$ $$\phi_{j|y=0}=\frac{1 + \sum_{i=1}^{n}1\left\{x_{j}^{(i)}=1 \wedge y^{(i)}=0\right\}}{2 + \sum_{i=1}^{n}1\left\{y^{(i)}=0\right\}}$$ ## Text Classification Bernoulli event model:<br> first randomly determine whether the sender is a spammer or non-spammer,<br> then run through the dictionary, deciding whether to include each word j. Multinomial event model:<br> first randomly determine whether the sender is a spammer or non-spammer,<br> then each word in the email is generated independently from the same multinomial distribution. Using the multinomial event model, if we have a training set $\left \{ (x^{(1)},y^{(1)}),...,(x^{(n)},y^{(n)}) \right \}$ where $x^{(i)}=(x_{1}^{(i)},...,x_{d_{i}}^{(i)})\ $ (here $d_{i}$ is the number of words in the i-th training example), the maximum likelihood estimates of the parameters are, as above: $$\phi = \frac{1}{n}\sum_{i=1}^{n}1\left \{ y^{(i)}=1 \right \}$$ $$\phi_{k|y=1}=\frac{\sum_{i=1}^{n}\sum_{j=1}^{d_{i}}1\left \{x_{j}^{(i)}=k\wedge y^{(i)}=1 \right \}}{\sum_{i=1}^{n}1\left \{ y^{(i)}=1 \right \}d_{i}}$$ $$\phi_{k|y=0}=\frac{\sum_{i=1}^{n}\sum_{j=1}^{d_{i}}1\left \{x_{j}^{(i)}=k\wedge y^{(i)}=0 \right \}}{\sum_{i=1}^{n}1\left \{ y^{(i)}=0 \right \}d_{i}}$$ Adding Laplace smoothing to the multinomial event model: $$\phi_{k|y=1}=\frac{1 + \sum_{i=1}^{n}\sum_{j=1}^{d_{i}}1\left \{x_{j}^{(i)}=k\wedge y^{(i)}=1 \right \}}{\left | V \right | + \sum_{i=1}^{n}1\left \{ y^{(i)}=1 \right \}d_{i}}$$ $$\phi_{k|y=0}=\frac{1 + \sum_{i=1}^{n}\sum_{j=1}^{d_{i}}1\left \{x_{j}^{(i)}=k\wedge y^{(i)}=0 \right \}}{\left | V \right | + \sum_{i=1}^{n}1\left \{ y^{(i)}=0 \right \}d_{i}}$$ where $\left | V \right |$ is the size of the vocabulary. In short: $$\phi_{k|y=1}=\frac{1 + number\ of\ times\ word\ k\ occurs\ in\ spam}{\left | V \right | + total\ number\ of\ words\ in\ spam}$$ ```python ```
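A minimal numpy sketch of the binary-feature (Bernoulli) naive Bayes estimates with Laplace smoothing discussed above; the function names and the toy data are purely illustrative:

```python
import numpy as np

def bernoulli_nb_fit(X, y):
    """MLE with Laplace smoothing for binary features X of shape (n, d) and labels y in {0, 1}."""
    n, d = X.shape
    k = np.sum(y == 1)
    phi = k / n
    # Add 1 to each count and 2 to each denominator, so no estimate is exactly 0 or 1
    phi_j_y1 = (1 + X[y == 1].sum(axis=0)) / (2 + k)
    phi_j_y0 = (1 + X[y == 0].sum(axis=0)) / (2 + (n - k))
    return phi, phi_j_y1, phi_j_y0

def bernoulli_nb_posterior(x, phi, phi_j_y1, phi_j_y0):
    """p(y=1|x) for a single binary feature vector x, computed with log-probabilities."""
    log1 = np.log(phi) + np.sum(x * np.log(phi_j_y1) + (1 - x) * np.log(1 - phi_j_y1))
    log0 = np.log(1 - phi) + np.sum(x * np.log(phi_j_y0) + (1 - x) * np.log(1 - phi_j_y0))
    return 1.0 / (1.0 + np.exp(log0 - log1))

# Tiny illustrative example: 6 documents, 3 binary word-indicator features
X = np.array([[1, 0, 1], [1, 1, 0], [0, 1, 1], [0, 0, 1], [1, 0, 0], [0, 1, 0]])
y = np.array([1, 1, 1, 0, 0, 0])
phi, p1, p0 = bernoulli_nb_fit(X, y)
print(bernoulli_nb_posterior(np.array([1, 1, 0]), phi, p1, p0))
```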
59b547f5630df49258a1321e5bed3f25cdab49ef
12,386
ipynb
Jupyter Notebook
_build/html/_sources/04_generative_learning_algorithms.ipynb
newfacade/machine-learning-notes
1e59fe7f9b21e16151654dee888ceccc726274d3
[ "MIT" ]
null
null
null
_build/html/_sources/04_generative_learning_algorithms.ipynb
newfacade/machine-learning-notes
1e59fe7f9b21e16151654dee888ceccc726274d3
[ "MIT" ]
null
null
null
_build/html/_sources/04_generative_learning_algorithms.ipynb
newfacade/machine-learning-notes
1e59fe7f9b21e16151654dee888ceccc726274d3
[ "MIT" ]
null
null
null
41.563758
296
0.471016
true
3,906
Qwen/Qwen-72B
1. YES 2. YES
0.91848
0.857768
0.787843
__label__eng_Latn
0.62705
0.668756
```python from kanren import run, var, fact import kanren.assoccomm as la # Addition (add) and multiplication (mul) # add and mul are just names for the rules add = 'addition' mul = 'multiplication' # Use fact to declare that addition and multiplication are commutative and associative # Commutative: swapping the operands does not change the result; this holds for both addition and multiplication # Associative: moving the parentheses does not change the result; an expression of only additions (or only multiplications) gives the same answer for any grouping # Commutativity and associativity can also be used in rules about how to handle strings, and so on fact(la.commutative, mul) fact(la.commutative, add) fact(la.associative, mul) fact(la.associative, add) a, b, c = var('a'), var('b'), var('c') expression_orig = (add, (mul, 3, -2), (mul, (add, 1, (mul, 2, 3)), -1)) expression1 = (add, (mul, (add, 1, (mul, 2, a)), b), (mul, 3, c)) expression2 = (add, (mul, 3, c), (mul, b, (add, (mul, 2, a), 1))) expression3 = (add, (add, (mul, (mul, 2, a), b), b), (mul, 3, c)) ``` Taking this expression as the example $$ expression\_orig = 3 \times (-2) + (1 + 2 \times 3) \times (-1) $$ we match it against the expressions rewritten with variables $$ \begin{align} &expression1 = (1 + 2 \times a) \times b + 3 \times c\\ &expression2 = c \times 3 + b \times (2 \times a + 1)\\ &expression3 = (((2 \times a) \times b) + b) + 3 \times c \end{align} $$ expression1, expression2 and expression3 are mathematically the same expression. ```python print(run(0, (a, b, c), la.eq_assoccomm(expression1, expression_orig))) print(run(0, (a, b, c), la.eq_assoccomm(expression2, expression_orig))) print(run(0, (a, b, c), la.eq_assoccomm(expression3, expression_orig))) ``` ((3, -1, -2),) ((3, -1, -2),) () The first and second expressions match; the third is structurally different (apparently). run(number of answers, (variables holding the desired answer), ...). Perhaps because the distributive law has not been declared?? `eq_assoccomm` is still not entirely clear to me...
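One plausible reading of that last result, consistent with the guess above and offered here only as an illustration: turning expression3 into the shape of expression1 needs the distributive law, which was never declared as a fact, so a match that only uses associativity and commutativity cannot succeed:

$$ (1 + 2 \times a) \times b = b + (2 \times a) \times b $$

The left-hand side is the shape used in expression1 and expression2, while the right-hand side is the shape used in expression3; moving between them distributes $b$ over the sum, which `eq_assoccomm` does not attempt.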
c5f093799ad88b181cfde6094677648d6bb7a20f
3,470
ipynb
Jupyter Notebook
Ex/Chapter6/Chapter6-5.ipynb
tryoutlab/python-ai-oreilly
111a0db4a9d5bf7ec4c07b1e9e357ed4fa225f28
[ "Unlicense" ]
null
null
null
Ex/Chapter6/Chapter6-5.ipynb
tryoutlab/python-ai-oreilly
111a0db4a9d5bf7ec4c07b1e9e357ed4fa225f28
[ "Unlicense" ]
null
null
null
Ex/Chapter6/Chapter6-5.ipynb
tryoutlab/python-ai-oreilly
111a0db4a9d5bf7ec4c07b1e9e357ed4fa225f28
[ "Unlicense" ]
null
null
null
22.679739
80
0.491643
true
805
Qwen/Qwen-72B
1. YES 2. YES
0.839734
0.685949
0.576015
__label__eng_Latn
0.130228
0.176606
```python import networkx as nx import matplotlib.pyplot as plt import numpy as np import random import sympy ``` Bad key "text.kerning_factor" on line 4 in /home/sc/anaconda3/envs/old_nx/lib/python3.6/site-packages/matplotlib/mpl-data/stylelib/_classic_test_patch.mplstyle. You probably need to get an updated matplotlibrc file from http://github.com/matplotlib/matplotlib/blob/master/matplotlibrc.template or from the matplotlib source distribution ```python max_B = 10000 # maximum budget no_runs = 100 # no. of runs to average K = 4 # controls number of restarts np.random.seed(0) random.seed(42) ``` ```python edges = np.loadtxt('facebook_combined.txt',dtype=int) G = nx.Graph() G.add_edges_from(edges) G_no_edges=G.number_of_edges() G_no_nodes=G.number_of_nodes() print(G_no_nodes,G_no_edges) ``` 4039 88234 ```python from louvain import detect_communities, modularity ''' def draw_communities(G, node_size=90, alpha=1, k=None, randomized=False): partition = detect_communities(G, randomized=randomized) print("Modularity for best partition:", modularity(G, partition)) community_map = {} for community, nodes in enumerate(partition): for node in nodes: community_map[node] = community cmap = plt.get_cmap("jet") plt.figure(figsize=(15, 15)) pos = nx.spring_layout(G, k=k) indexed = [community_map.get(node) for node in G] plt.axis("off") nx.draw_networkx_nodes(G, pos=pos, cmap=cmap, node_color=indexed, node_size=node_size, alpha=alpha) nx.draw_networkx_edges(G, pos=pos, alpha=0.2) for x in range(len(pos)): pos[x] = pos[x]+np.array([0.02,0]) labels = dict([(n, n) for n in G.nodes()]) #add labels #_ = nx.draw_networkx_labels(G, pos, labels=labels, font_color='#000000', font_size=15) #draw labels draw_communities(G) ''' ``` '\ndef draw_communities(G, node_size=90, alpha=1, k=None, randomized=False):\n partition = detect_communities(G, randomized=randomized)\n print("Modularity for best partition:", modularity(G, partition))\n community_map = {}\n for community, nodes in enumerate(partition):\n for node in nodes:\n community_map[node] = community\n \n cmap = plt.get_cmap("jet")\n plt.figure(figsize=(15, 15))\n pos = nx.spring_layout(G, k=k)\n indexed = [community_map.get(node) for node in G]\n plt.axis("off")\n nx.draw_networkx_nodes(G, pos=pos, cmap=cmap, node_color=indexed, node_size=node_size, alpha=alpha)\n nx.draw_networkx_edges(G, pos=pos, alpha=0.2)\n for x in range(len(pos)):\n pos[x] = pos[x]+np.array([0.02,0])\n labels = dict([(n, n) for n in G.nodes()]) #add labels\n #_ = nx.draw_networkx_labels(G, pos, labels=labels, font_color=\'#000000\', font_size=15) #draw labels\ndraw_communities(G)\n' ```python partition = detect_communities(G, randomized=False) community_map = {} node_map = {} for community, nodes in enumerate(partition): for node in nodes: node_map[node] = community community_map[community] = nodes ``` ```python def RDSRR_sampling(G,B,U=None): restart_ind = [10] t=20 while restart_ind[-1]<B: restart_ind.append(restart_ind[-1]+int(80*np.log(t))) t+=1 est_RW = [] est_RW_t1 = 0 est_RW_t2 = 0 sample = np.random.choice(G.nodes()) deg_pr_sent = G.degree(sample) est_RW_t1 += node_fn(sample)/deg_pr_sent est_RW_t2 += 1/deg_pr_sent est_RW.append(est_RW_t1/est_RW_t2) for ii in range(2,B+1): if ii in restart_ind: if U is None: sample = np.random.choice(community_map[np.random.choice([x for x in list(community_map.keys()) if x!=node_map[sample]])]) else: sample = np.random.choice(U) #z = [node_map[x] for x in U].index(node_map[sample]) #sample = np.random.choice(U[:z]+U[z+1:]) else: neighbors = 
list(nx.neighbors(G,sample)) sample = np.random.choice(neighbors) deg_pr_sent = G.degree(sample) est_RW_t1 += node_fn(sample)/deg_pr_sent est_RW_t2 += 1/deg_pr_sent est_RW.append(est_RW_t1/est_RW_t2) return np.array(est_RW) def MHRR_sampling(G,B,U=None): restart_ind = [10] t=20 while restart_ind[-1]<B: restart_ind.append(restart_ind[-1]+int(K*np.log(t))) t+=1 est_MH= [] est_MH_t = 0 sample = np.random.choice(G.nodes()) est_MH_t += node_fn(sample) est_MH.append(est_MH_t) for ii in range(2,B+1): if ii in restart_ind: if U is None: sample = np.random.choice(community_map[np.random.choice([x for x in list(community_map.keys()) if x!=node_map[sample]])]) else: sample = np.random.choice(U) #z = [node_map[x] for x in U].index(node_map[sample]) #sample = np.random.choice(U[:z]+U[z+1:]) else: neighbors = list(nx.neighbors(G,sample)) sample_t = np.random.choice(neighbors) if np.random.rand() <= (G.degree(sample)/G.degree(sample_t)): sample = sample_t est_MH_t += node_fn(sample) est_MH.append(est_MH_t/ii) return np.array(est_MH) ``` ## Finding High Degree Nodes in Clusters ```python U = [] for com in list(community_map.keys()): nodes = community_map[com] maxd = -1 maxn = -1 for node in nodes: if G.degree(node)>maxd: maxd = G.degree(node) maxn = node U.append(maxn) ``` ## F(v) = int(node_map[node] == 1) ```python def node_fn(node): return int(node_map[node]==1) F_org = sum([node_fn(i) for i in G.nodes()])/G_no_nodes print(F_org) ``` 0.10670958157959891 ```python MSE_MH_t = 0 for ii in range(1,no_runs+1): MSE_MH_t += (MHRR_sampling(G,max_B)-F_org)**2 MSE_MH = MSE_MH_t/(no_runs) MSE_MH = np.sqrt(MSE_MH)/F_org MSE_rds_t = 0 for ii in range(1,no_runs+1): MSE_rds_t += (RDSRR_sampling(G,max_B)-F_org)**2 MSE_rds = MSE_rds_t/(no_runs) MSE_rds = np.sqrt(MSE_rds)/F_org MSE_rdsrr_t = 0 for ii in range(1,no_runs+1): MSE_rdsrr_t += (RDSRR_sampling(G,max_B,U)-F_org)**2 MSE_rdsrr = MSE_rdsrr_t/(no_runs) MSE_rdsrr = np.sqrt(MSE_rdsrr)/F_org MSE_mhrr_t = 0 for ii in range(1,no_runs+1): MSE_mhrr_t += (MHRR_sampling(G,max_B,U)-F_org)**2 MSE_mhrr = MSE_mhrr_t/(no_runs) MSE_mhrr = np.sqrt(MSE_mhrr)/F_org plt.figure(figsize=(10,8)) plt.plot(np.array(list(range(len(MSE_MH)))),MSE_MH,color='red',linewidth=1.5,label='MHRR') plt.plot(np.array(list(range(len(MSE_mhrr)))),MSE_mhrr,color='blue',linewidth=1.5,label='MHRR-T') plt.plot(np.array(list(range(len(MSE_rds)))),MSE_rds,color='black',linewidth=1.5,label='RDSRR') plt.plot(np.array(list(range(len(MSE_rdsrr)))),MSE_rdsrr,color='purple',linewidth=1.5,label='RDSRR-T') legend = plt.legend(loc='best', shadow=True, fontsize='xx-large') legend.get_frame().set_facecolor('0.90') for legobj in legend.legendHandles: legobj.set_linewidth(2.5) plt.ylim(top=1) plt.grid() ``` ## F(v) = int(G.degree(node)>100) ```python def node_fn(node): return int(G.degree(node)>100) F_org = sum([node_fn(i) for i in G.nodes()])/G_no_nodes print(F_org) ``` 0.11908888338697697 ```python MSE_MH_t = 0 for ii in range(1,no_runs+1): MSE_MH_t += (MHRR_sampling(G,max_B)-F_org)**2 MSE_MH = MSE_MH_t/(no_runs) MSE_MH = np.sqrt(MSE_MH)/F_org MSE_rds_t = 0 for ii in range(1,no_runs+1): MSE_rds_t += (RDSRR_sampling(G,max_B)-F_org)**2 MSE_rds = MSE_rds_t/(no_runs) MSE_rds = np.sqrt(MSE_rds)/F_org MSE_rdsrr_t = 0 for ii in range(1,no_runs+1): MSE_rdsrr_t += (RDSRR_sampling(G,max_B,U)-F_org)**2 MSE_rdsrr = MSE_rdsrr_t/(no_runs) MSE_rdsrr = np.sqrt(MSE_rdsrr)/F_org MSE_mhrr_t = 0 for ii in range(1,no_runs+1): MSE_mhrr_t += (MHRR_sampling(G,max_B,U)-F_org)**2 MSE_mhrr = MSE_mhrr_t/(no_runs) MSE_mhrr = np.sqrt(MSE_mhrr)/F_org 
plt.figure(figsize=(10,8)) plt.plot(np.array(list(range(len(MSE_MH)))),MSE_MH,color='red',linewidth=1.5,label='MHRR') plt.plot(np.array(list(range(len(MSE_mhrr)))),MSE_mhrr,color='blue',linewidth=1.5,label='MHRR-T') plt.plot(np.array(list(range(len(MSE_rds)))),MSE_rds,color='black',linewidth=1.5,label='RDSRR') plt.plot(np.array(list(range(len(MSE_rdsrr)))),MSE_rdsrr,color='purple',linewidth=1.5,label='RDSRR-T') legend = plt.legend(loc='best', shadow=True, fontsize='xx-large') legend.get_frame().set_facecolor('0.90') for legobj in legend.legendHandles: legobj.set_linewidth(2.5) plt.ylim(top=1) plt.grid() ``` ## F(v) = isprime(v) ```python def node_fn(node): return int(sympy.isprime(G.degree(node))) F_org = sum([node_fn(i) for i in G.nodes()])/G_no_nodes print(F_org) ``` 0.2904184204010894 ```python MSE_MH_t = 0 for ii in range(1,no_runs+1): MSE_MH_t += (MHRR_sampling(G,max_B)-F_org)**2 MSE_MH = MSE_MH_t/(no_runs) MSE_MH = np.sqrt(MSE_MH)/F_org MSE_rds_t = 0 for ii in range(1,no_runs+1): MSE_rds_t += (RDSRR_sampling(G,max_B)-F_org)**2 MSE_rds = MSE_rds_t/(no_runs) MSE_rds = np.sqrt(MSE_rds)/F_org MSE_rdsrr_t = 0 for ii in range(1,no_runs+1): MSE_rdsrr_t += (RDSRR_sampling(G,max_B,U)-F_org)**2 MSE_rdsrr = MSE_rdsrr_t/(no_runs) MSE_rdsrr = np.sqrt(MSE_rdsrr)/F_org MSE_mhrr_t = 0 for ii in range(1,no_runs+1): MSE_mhrr_t += (MHRR_sampling(G,max_B,U)-F_org)**2 MSE_mhrr = MSE_mhrr_t/(no_runs) MSE_mhrr = np.sqrt(MSE_mhrr)/F_org plt.figure(figsize=(10,8)) plt.plot(np.array(list(range(len(MSE_MH)))),MSE_MH,color='red',linewidth=1.5,label='MHRR') plt.plot(np.array(list(range(len(MSE_mhrr)))),MSE_mhrr,color='blue',linewidth=1.5,label='MHRR-T') plt.plot(np.array(list(range(len(MSE_rds)))),MSE_rds,color='black',linewidth=1.5,label='RDSRR') plt.plot(np.array(list(range(len(MSE_rdsrr)))),MSE_rdsrr,color='purple',linewidth=1.5,label='RDSRR-T') plt.ylim(0,0.3) legend = plt.legend(loc='best', shadow=True, fontsize='xx-large') legend.get_frame().set_facecolor('0.90') for legobj in legend.legendHandles: legobj.set_linewidth(2.5) plt.grid() ``` ## F(v) = random ```python fn_mapping = np.random.exponential(1,size=(G_no_nodes)) def node_fn(node): return fn_mapping[node] F_org = sum([node_fn(i) for i in G.nodes()])/G_no_nodes print(F_org) ``` 0.9856442941970124 ```python MSE_MH_t = 0 for ii in range(1,no_runs+1): MSE_MH_t += (MHRR_sampling(G,max_B)-F_org)**2 MSE_MH = MSE_MH_t/(no_runs) MSE_MH = np.sqrt(MSE_MH)/F_org MSE_rds_t = 0 for ii in range(1,no_runs+1): MSE_rds_t += (RDSRR_sampling(G,max_B)-F_org)**2 MSE_rds = MSE_rds_t/(no_runs) MSE_rds = np.sqrt(MSE_rds)/F_org MSE_rdsrr_t = 0 for ii in range(1,no_runs+1): MSE_rdsrr_t += (RDSRR_sampling(G,max_B,U)-F_org)**2 MSE_rdsrr = MSE_rdsrr_t/(no_runs) MSE_rdsrr = np.sqrt(MSE_rdsrr)/F_org MSE_mhrr_t = 0 for ii in range(1,no_runs+1): MSE_mhrr_t += (MHRR_sampling(G,max_B,U)-F_org)**2 MSE_mhrr = MSE_mhrr_t/(no_runs) MSE_mhrr = np.sqrt(MSE_mhrr)/F_org plt.figure(figsize=(10,8)) plt.plot(np.array(list(range(len(MSE_MH)))),MSE_MH,color='red',linewidth=1.5,label='MHRR') plt.plot(np.array(list(range(len(MSE_mhrr)))),MSE_mhrr,color='blue',linewidth=1.5,label='MHRR-T') plt.plot(np.array(list(range(len(MSE_rds)))),MSE_rds,color='black',linewidth=1.5,label='RDSRR') plt.plot(np.array(list(range(len(MSE_rdsrr)))),MSE_rdsrr,color='purple',linewidth=1.5,label='RDSRR-T') legend = plt.legend(loc='best', shadow=True, fontsize='xx-large') legend.get_frame().set_facecolor('0.90') for legobj in legend.legendHandles: legobj.set_linewidth(2.5) plt.ylim(0,0.15) plt.grid() ```
d4567b7d8e35d2f5bd9c65695a1121b500602256
218,821
ipynb
Jupyter Notebook
FormalExperiments/Experiment-Facebook-diffT.ipynb
CyanideBoy/Accelerated-MCMC
aeca24ff4ac03d3a62c39b9041d004135b5beca6
[ "MIT" ]
null
null
null
FormalExperiments/Experiment-Facebook-diffT.ipynb
CyanideBoy/Accelerated-MCMC
aeca24ff4ac03d3a62c39b9041d004135b5beca6
[ "MIT" ]
null
null
null
FormalExperiments/Experiment-Facebook-diffT.ipynb
CyanideBoy/Accelerated-MCMC
aeca24ff4ac03d3a62c39b9041d004135b5beca6
[ "MIT" ]
null
null
null
370.883051
54,640
0.928247
true
3,738
Qwen/Qwen-72B
1. YES 2. YES
0.835484
0.79053
0.660475
__label__kor_Hang
0.134617
0.372836
# M10. Amdahl's Law The most useful corollaries to what is now known as *Amdahl's Law* are hardly profound. The notion of prioritizing the improvements *with the greatest bearing on the overall result* is almost common sense (which is maybe to suggest that it isn't common at all). Violations of the Law are prevalent (though not ubiquitous): - Behavior characterized by the Britishism: "Penny wise, pound foolish"&mdash;people who are thrifty when it comes to small expenses but who are indulgent or forgetful of larger ones. - Spending legislative energy on banning plastic straws when they constitute 0.02 *percent* (by mass) of annual plastic waste deposited in the oceans. The original version of the Law, advanced by Gene Amdahl in 1967, was nowhere near as ambitious in its scope (nor did it pretend to be). It merely provided an upper bound on speedups afforded by one type of improvement, parallelization, and it was formulated as follows: Suppose a fixed sequential workload of latency $W$ can be partitioned into a parallelizable fraction $p$ and a non-parallelizable fraction $1-p$. Then, the speedup in latency offered by running the workload on $n$ processors is given by \begin{align} S(n) &= \frac{W}{\frac{pW}{n} + (1-p)W}\\ &= \frac{1}{\frac{p}{n}+(1-p)} \end{align} and hence we obtain the upper bound $$\lim_{n\to\infty} S(n) = \frac{1}{1-p}.$$ Amdahl's Law is a 'real law' (much as Moore's Law is *not*), but it is somewhat naive and primitive in the sense that its operative assumption is that workloads can be partitioned into different regimes, which is often easier said than done. In fact, Amdahl's Law is sometimes seen being used to *back-calculate* the fraction $p$ of a workload exposed to a given improvement. One word of caution: the fraction $p$ referred to has nothing to do with the *frequency* of the improvable portion of a workload and everything to do with its *latency*. Things which occur infrequently may still be catastrophically time-intensive when they do occur. On the other hand, there are many things which *do* happen often and yet contribute little to overall execution time.
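As a quick numerical illustration (a minimal Python sketch; the 90% figure is arbitrary), the speedup saturates well before the processor count gets large:

```python
def amdahl_speedup(p, n):
    """Speedup S(n) for a workload whose parallelizable latency fraction is p."""
    return 1.0 / (p / n + (1.0 - p))

# A 90%-parallelizable workload is bounded by 1/(1-0.9) = 10x, no matter how many processors
for n in (2, 8, 64, 1024):
    print(f"n={n:5d}  S={amdahl_speedup(0.9, n):.2f}")
print("upper bound:", 1 / (1 - 0.9))
```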
76328778b3295307b089cf6516bb36c8abe806c0
2,960
ipynb
Jupyter Notebook
M10_Amdahl's_Law.ipynb
brekekekex/computer_organization_memoranda
d16dc251075d3da49aaf01f148676f857d05dc4b
[ "Unlicense" ]
2
2020-01-17T16:34:17.000Z
2020-02-23T22:06:07.000Z
M10_Amdahl's_Law.ipynb
brekekekex/computer_organization_memoranda
d16dc251075d3da49aaf01f148676f857d05dc4b
[ "Unlicense" ]
null
null
null
M10_Amdahl's_Law.ipynb
brekekekex/computer_organization_memoranda
d16dc251075d3da49aaf01f148676f857d05dc4b
[ "Unlicense" ]
null
null
null
47.741935
395
0.664189
true
528
Qwen/Qwen-72B
1. YES 2. YES
0.904651
0.897695
0.812101
__label__eng_Latn
0.999896
0.725114
```python %run header.ipynb ``` ```python from sympy import pi from sympy.physics.units import meter, foot ``` ```python # dictionary that holds all values. # if already defined (such as by another notebook) then don't override if "values" not in vars(): values={"d": 12.1 * meter} Formula.set_global_values(values) ``` ```python d = Formula("d") area_formula = Formula(value_name="A", formula="pi*(d**2/4)", sigfigs=4) ``` The area of a circle with diameter {{ d.get_inline() }} is: {{ area_formula.get_display(oneline=False) }} ```python ```
a81067a5180c536fec5ca95882cec8dad3a79409
2,035
ipynb
Jupyter Notebook
notebooks/Circle multifile/Circle Area.ipynb
alugowski/jupyter-forchaps
c46904286df8b60a8e5200e0c8b6bafca3379c9d
[ "BSD-2-Clause" ]
1
2020-02-12T11:25:37.000Z
2020-02-12T11:25:37.000Z
notebooks/Circle multifile/Circle Area.ipynb
alugowski/jupyter-forchaps
c46904286df8b60a8e5200e0c8b6bafca3379c9d
[ "BSD-2-Clause" ]
null
null
null
notebooks/Circle multifile/Circle Area.ipynb
alugowski/jupyter-forchaps
c46904286df8b60a8e5200e0c8b6bafca3379c9d
[ "BSD-2-Clause" ]
null
null
null
22.362637
233
0.50516
true
154
Qwen/Qwen-72B
1. YES 2. YES
0.874077
0.709019
0.619738
__label__eng_Latn
0.88786
0.278188
## Optimal Power Flow _**[Power Systems Optimization](https://github.com/east-winds/power-systems-optimization)**_ _by Michael R. Davidson, Jesse D. Jenkins, and Sambuddha Chakrabarti_ This notebook consists of an introductory glimpse of, and a few hands-on activities and demonstrations of, the Optimal Power Flow (OPF) problem, which minimizes the short-run production costs of meeting electricity demand from a given set of generators subject to various technical and flow limit constraints. We will consider a single time period, with simple generator and line flow limit constraints (while modeling the network flows as dictated by the laws of physics). This adds a layer of complexity and sophistication on top of the Economic Dispatch (ED) problem. Since we will only discuss the single-time-period version of the problem, we will not be considering inter-temporal constraints, like ramp-rate limits. However, this model can easily be extended to allow for such constraints. We will start off with some simple systems, whose solutions can be worked out manually without resorting to any mathematical optimization model or software. But eventually we will solve a larger system, thereby emphasizing the importance of such software and mathematical models. ## Introduction to OPF Optimal Power Flow (OPF) is a power system optimal scheduling problem which fully captures the physics of electricity flows; this adds a layer of complexity and gives a more realistic version of the Economic Dispatch (ED) problem. It usually attempts to capture the entire network topology by representing the interconnections between the different nodes through transmission lines and also representing the electrical parameters, such as the resistance, series reactance, shunt admittance, etc. of the lines. However, the full-blown "AC" OPF turns out to be an extremely hard problem to solve (usually NP-hard). Hence, system operators and power marketers usually go about solving a linearized version of it, called the DC-OPF. The DC-OPF approximation works satisfactorily for bulk power transmission networks as long as such networks are not operated at the brink of instability or under very heavily loaded conditions. ## Single-time period, simple generator constraints We will first examine the case where we are optimizing dispatch for a single snapshot in time, with only very simple constraints on the generators.
$$ \begin{align} \mathbf{Objective\;Function:}\min_{P_g}\sum_{g\in{G}}C_{g}(P_{g})\longleftarrow\mathbf{power\;generation\; cost}\\ \mathbf{Subject\;to:\:}{\underline{P}_{g}}\leqslant{P_{g}}\leqslant{{\overline{P}_{g}}},\;\forall{g\in{G}}\longleftarrow\mathbf{MW\; generation\; limits}\\ P_{g(i)}-P_{d(i)}=\sum_{j\in J(i)}B_{ij}(\theta_j-\theta_i),\;\forall{{i}\in\mathcal{N}}\longleftarrow\mathbf{real\; power\; balance}\\ |P_{ij}|\leqslant{\overline{P}_{ij}},\;\forall{ij}\in{T}\longleftarrow\mathbf{MW\; line\; limit}\\ \end{align} $$ The **decision variables** in the above problem are: - $P_{g}$, the generation (in MW) produced by each generator, $g$ - $\theta_i$, $\theta_j$, the voltage phase angles of the buses/nodes, $i,j$ The **parameters** are: - ${\underline{P}_{g}}$, the minimum operating bound for the generator (based on engineering or natural resource constraints) - ${\overline{P}_{g}}$, the maximum operating bound for the generator (based on engineering or natural resource constraints) - $P_{d(i)}$, the demand (in MW) at node $i$ - ${\overline{P}_{ij}}$, the line-flow limit for the line connecting buses $i$ and $j$ - $B_{ij}$, the susceptance of the line connecting buses $i$ and $j$ Just like the ED problem, here too we can safely ignore fixed costs for the purposes of finding the optimal dispatch. With that, let's implement OPF. # 1. Load packages ```julia # New packages introduced in this tutorial (uncomment to download the first time) import Pkg; Pkg.add("PlotlyBase") using JuMP, GLPK using Plots; plotly(); using VegaLite # to make some nice plots using DataFrames, CSV, PrettyTables ENV["COLUMNS"]=120; # Set so all columns of DataFrames and Matrices are displayed ``` Updating registry at `~/.julia/registries/General` Updating git-repo `https://github.com/JuliaRegistries/General.git` Resolving package versions... Installed LaTeXStrings ──────── v1.2.0 Installed PlotlyBase ────────── v0.4.1 Installed DocStringExtensions ─ v0.8.3 Updating `~/.julia/environments/v1.3/Project.toml` [a03496cd] + PlotlyBase v0.4.1 Updating `~/.julia/environments/v1.3/Manifest.toml` [ffbed154] + DocStringExtensions v0.8.3 [b964fa9f] + LaTeXStrings v1.2.0 [a03496cd] + PlotlyBase v0.4.1 ┌ Info: Precompiling PlotlyBase [a03496cd-edff-5a9b-9e67-9cda94a718b5] └ @ Base loading.jl:1273 ### 2. Load and format data We will use data for the IEEE 118-bus test case and two other test cases for a 3-bus and a 2-bus system: - generator cost curve, power limit data, and connection node - load demand data with MW demand and connection node - transmission line data with resistance, reactance, line MW capacity, from, and to nodes ```julia datadir = joinpath("OPF_data") # Note: joinpath is a good way to create path reference that is agnostic # to what file system you are using (e.g. whether directories are denoted # with a forward or backwards slash).
gen_info = CSV.read(joinpath(datadir,"Gen118.csv"), DataFrame); line_info = CSV.read(joinpath(datadir,"Tran118.csv"), DataFrame); loads = CSV.read(joinpath(datadir,"Load118.csv"), DataFrame); # Rename all columns to lowercase (by convention) for f in [gen_info, line_info, loads] rename!(f,lowercase.(names(f))) end ``` ```julia #= Function to solve the Optimal Power Flow (OPF) problem (single time period) Inputs: gen_df -- dataframe with generator info line_info -- dataframe with transmission lines info loads -- dataframe with load info Note: it is always a good idea to include a comment block describing your function's inputs clearly! NOTE: this starting version reuses the economic dispatch formulation (a single system-wide demand balance); the voltage-angle and line-flow constraints of the OPF formulation above are not added yet. It also assumes a `gen_variable` capacity-factor table is available in scope. =# function OPF_single(gen_df, line_info, loads) OPF = Model(GLPK.Optimizer) # You could use Clp as well, with Clp.Optimizer # Define sets based on data # A set of all variable generators G_var = gen_df[gen_df[!,:is_variable] .== 1,:r_id] # A set of all non-variable generators G_nonvar = gen_df[gen_df[!,:is_variable] .== 0,:r_id] # Set of all generators G = gen_df.r_id # Extract some parameters given the input data # Generator capacity factor time series for variable generators gen_var_cf = innerjoin(gen_variable, gen_df[gen_df.is_variable .== 1 , [:r_id, :gen_full, :existing_cap_mw]], on = :gen_full) # Decision variables @variables(OPF, begin GEN[G] >= 0 # generation # Note: we assume Pmin = 0 for all resources for simplicity here end) # Objective function @objective(OPF, Min, sum( (gen_df[i,:heat_rate_mmbtu_per_mwh] * gen_df[i,:fuel_cost] + gen_df[i,:var_om_cost_per_mwh]) * GEN[i] for i in G_nonvar) + sum(gen_df[i,:var_om_cost_per_mwh] * GEN[i] for i in G_var) ) # Demand constraint @constraint(OPF, cDemand, sum(GEN[i] for i in G) == loads[1,:demand]) # Capacity constraint (non-variable generation) for i in G_nonvar @constraint(OPF, GEN[i] <= gen_df[i,:existing_cap_mw]) end # Variable generation capacity constraint for i in 1:nrow(gen_var_cf) @constraint(OPF, GEN[gen_var_cf[i,:r_id] ] <= gen_var_cf[i,:cf] * gen_var_cf[i,:existing_cap_mw]) end # Solve statement (! indicates the function mutates its argument) optimize!(OPF) # Dataframe of optimal decision variables solution = DataFrame( r_id = gen_df.r_id, resource = gen_df.resource, gen = value.(GEN).data ) # Return the solution and objective as named tuple return ( solution = solution, cost = objective_value(OPF), ) end ``` OPF_single (generic function with 1 method) ```julia ```
a14411ae2d618522ce56b738a289f41f944a5d5f
12,600
ipynb
Jupyter Notebook
Notebooks/06-OPF-problem_other.ipynb
sambuddhac/power-systems-optimization
f65ec4b718807703452cf6723105926dac73c649
[ "CC-BY-4.0", "MIT" ]
null
null
null
Notebooks/06-OPF-problem_other.ipynb
sambuddhac/power-systems-optimization
f65ec4b718807703452cf6723105926dac73c649
[ "CC-BY-4.0", "MIT" ]
null
null
null
Notebooks/06-OPF-problem_other.ipynb
sambuddhac/power-systems-optimization
f65ec4b718807703452cf6723105926dac73c649
[ "CC-BY-4.0", "MIT" ]
null
null
null
45.818182
940
0.59619
true
2,411
Qwen/Qwen-72B
1. YES 2. YES
0.891811
0.851953
0.759781
__label__eng_Latn
0.946954
0.603558
<a href="https://colab.research.google.com/github/gherbin/ComputerVisionKUL/blob/master/CV_Group9_assignment.ipynb" target="_parent"></a> # Hi there! > *\[14 Apr 2020] A notebook written by Geoffroy Herbin, group9, r0426473, in the context of the Computer Vision course [H02A5](https://p.cygnus.cc.kuleuven.be/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_891702_1&handle=announcements_entry&mode=view), Master of Artificial Intelligence, KULeuven.* Welcome to this Colab where we'll dig into some Computer Vision fancy stuff! The goals of this notebook is to perform faces classification and identification. To reach that goals we will first: - retrieve training and test images - build two "features". At this point, we simply say that a "feature" is another way to represent the input data (images, in our case). 1. handcrafted feature: Histogram of Oriented Gradients 2. feature learnt from the data: Principal Component Analysis Then, we will train different models based on the two features and compare the classification and identification results. --- * Several optimized libraries (ex: `sklearn`) will be extensively used. However, at first, some of the key functionalities will be coded as to provide a better view of what really happens behind the calls to library functions. * The notebook is compatible with grayscale and colormode, depending on a parameter defined a bit later. The text, static content, is written based on analysis made in `color = False` mode. Results may vary a little. ## Import (most of) the required packages Almost all the required packages are imported first. > a few, used very locally, will be imported on the code snippet. ``` import os import cv2 import tarfile import zipfile import shutil import numpy as np from numpy.testing import assert_array_almost_equal import random import logging from urllib import request from socket import timeout from urllib.error import HTTPError, URLError from google.colab import drive from google.colab.patches import cv2_imshow from distutils.dir_util import copy_tree from skimage.feature import hog as skimage_feature_hog from skimage import exposure from sklearn.decomposition import PCA as sklearn_decomposition_PCA from sklearn.model_selection import GridSearchCV from sklearn import svm from sklearn.base import BaseEstimator, TransformerMixin from sklearn.linear_model import SGDClassifier from sklearn.model_selection import cross_val_predict from sklearn.preprocessing import StandardScaler from sklearn import metrics from sklearn.metrics import mean_squared_error from sklearn.pipeline import Pipeline from sklearn.neighbors import KNeighborsClassifier from sklearn.preprocessing import StandardScaler import sklearn.manifold from math import sqrt from matplotlib import pyplot as plt from matplotlib.gridspec import GridSpec from matplotlib.offsetbox import OffsetImage, AnnotationBbox %matplotlib inline import seaborn as sns from scipy.interpolate import RectBivariateSpline from scipy.linalg import svd as scipy_linalg_svd from scipy import ndimage, misc logging.basicConfig(level=logging.INFO) mpl_logger = logging.getLogger("matplotlib") mpl_logger.setLevel(logging.WARNING) ``` ## Parameters As all computerized systems, several parameters help in defining how the system should react. Those parameters are centralized here. 
``` base_path = "/content/sample_data/CV__Group_assignment" path_datasets = r"/content/datasets/" path_discard = r"/content/discard/" path_database = r"/content/DATABASE/" ''' Parameters handling the database build up ''' need_vgg_download = False confirmation_db_renewal = False to_drive_confirmation = False to_drive_confirmation_vgg = False load_from_local_drive = True # allows downloading the source images directly from the archive in github repository (see "important note") show = True # similar as global verbose parameter, for images (when custom functions allows it) sq_size = 64 # square size used -> shall be smaller than the output of get_min_size(faces_cropped) # assert sq_size <= min(get_min_size(faces_cropped)) color = False # if False, tasks are run in Grayscale. if True, tasls are run in full colormode if color: my_color_map = plt.cm.viridis else: my_color_map = plt.cm.gray ``` ## Several utils functions Several *utils* functions are used to: - pretty plot a dictionnary content, - retrieve minimal size of a batch of images, - reshape in the appropriate way an input, considering the `color` and `sq_size` parameters defined, - retrieve the data in the appropriate format from the datasets initially built. You can find more info on the functions and the codes below, or when we'll use them later in the tutorial. ``` def pretty_return_dict_size(my_dict): ''' returns a string containing the different size of the elements of a dict ''' output_list = ["\n"] for k in my_dict.keys(): output_list.append(str(k)) output_list.append(":") output_list.append(str(len(my_dict[k]))) output_list.append("\n") return ''.join(output_list) def show_images_from_dict(my_dict, show_index = False): ''' shows the images contained in a dictionary, going through all keys ''' for k in my_dict.keys(): logging.debug("@------------------- Images of " + str(k) + " -------------------@") index = 0 for img in my_dict[k]: if show_index: logging.debug("Image index: " + str(index)) index+=1 cv2_imshow(img) logging.debug("Shape = " + str(img.shape)) def get_min_size(images_dict): ''' returns the minimum size of images contained in the images_dict input ''' min_rows, min_cols = float("inf"), float("inf") max_rows, max_cols = 0, 0 for person in persons: for src in images_dict[person]: r, c = src.shape[0], src.shape[1] min_rows = min(min_rows, r) max_rows = max(max_rows, r) min_cols = min(min_cols, c) max_cols = max(max_cols, c) # logging.info("smallest px numbers (row, cols) = " + str((min_rows,min_cols))) return min_rows, min_cols def my_reshape(image_vector, sq_size, color): ''' returns a reshape version of an image represented as an image array, depending of the color parameter. If color is True, it returns a colored RGB format image of size (sq_size x sq_size) (useable as is by matplotlib) If color is False, it returns a grayscale image (sq_size x sq_size) ''' flattened = image_vector.ndim == 1 if flattened: if color: img_reshaped = (np.reshape(image_vector, (sq_size, sq_size, 3))).astype('uint8') return cv2.cvtColor(img_reshaped, cv2.COLOR_BGR2RGB) else: return np.reshape(image_vector, (sq_size, sq_size)) else: if color: img_reshaped = (np.reshape(image_vector, (sq_size, sq_size, 3))).astype('uint8') return cv2.cvtColor(img_reshaped, cv2.COLOR_BGR2RGB) else: return image_vector def get_matrix_from_set(images_set, color, sq_size = 64, flatten = True): ''' from images_set (training_set or test_set), create and fill in matrix so that it contains the input data. 
if flatten, then the matrix contains images represented in 1D ''' # init output matrix = None nb_faces = sum([len(images_set[x]) for x in images_set if isinstance(images_set[x], list)]) # depending on mode, select appropriate size items. N if color and flatten: matrix = np.empty((nb_faces, sq_size*sq_size*3)) # *3 => color images elif color and (not flatten): matrix = np.empty((nb_faces, sq_size, sq_size, 3)) elif (not color) and flatten: matrix = np.empty((nb_faces, sq_size*sq_size)) elif (not color) and (not flatten): matrix = np.empty((nb_faces, sq_size,sq_size )) else: raise RuntimeError i = 0 for person in persons: for src in images_set[person]: src_rescaled = cv2.resize(src, (sq_size,sq_size)) if color and flatten: matrix[i,:] = src_rescaled.flatten() elif color and (not flatten): matrix[i,:,:,:] = src_rescaled elif (not color) and flatten: matrix[i,:] = cv2.cvtColor(src_rescaled, cv2.COLOR_BGR2GRAY).flatten() elif (not color) and (not flatten): matrix[i,:,:] = cv2.cvtColor(src_rescaled, cv2.COLOR_BGR2GRAY) i +=1 return matrix def plot_matrix(images_matrix, color, my_color_map, h=8, w=5, transpose = False, return_figure = False): ''' plots the images contained in a matrix of data, reshaping and coloring them ''' fig = plt.figure(figsize=(w,h)) fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05) # plot the faces, each image is 64 by 64 pixels if transpose: images_matrix_used = images_matrix.T.copy() else: images_matrix_used = images_matrix.copy() i=0 for img_vector in images_matrix_used: ax = fig.add_subplot(h, w, i+1, xticks=[], yticks=[]) ax.imshow(my_reshape(img_vector, sq_size, color), cmap = my_color_map, interpolation='nearest') i+=1 plt.show() if return_figure: return fig ``` ## Inputs * The very first input of the system is an archive containing several text files containing each 1000 weblinks to images. This archive is downloaded from [here](http://www.robots.ox.ac.uk/~vgg/data/vgg_face) and extracted locally in `/content/sample_data/CV__Group_assignment` folder. We download and extract it only if needed. * To extract faces in the images, we download the Haarcascade model ``` if not os.path.isdir(base_path): os.makedirs(base_path) if need_vgg_download: vgg_face_dataset_url = "http://www.robots.ox.ac.uk/~vgg/data/vgg_face/vgg_face_dataset.tar.gz" with request.urlopen(vgg_face_dataset_url) as r, open(os.path.join(base_path, "vgg_face_dataset.tar.gz"), 'wb') as f: f.write(r.read()) with tarfile.open(os.path.join(base_path, "vgg_face_dataset.tar.gz")) as f: f.extractall(os.path.join(base_path)) trained_haarcascade_url = "https://raw.githubusercontent.com/opencv/opencv/master/data/haarcascades/haarcascade_frontalface_default.xml" with request.urlopen(trained_haarcascade_url) as r, open(os.path.join(base_path, "haarcascade_frontalface_default.xml"), 'wb') as f: f.write(r.read()) logging.info("Downloaded haarcascade_frontalface_default") ``` # Data Retrieval This tutorial will extensively use images from four different actors. The images are selected (pseudo-)randomly. The movie stars are (chosen quite randomly as well): 1. personA: Emma Stone 2. personB: Bradley Cooper 3. personC: Jane Levy 4. personD: Marc Blucas Process to get datasets images: 1. Randomly pick N images (60 for persons A and B, and 30 for persons C and D) images from the list of 1000 images provided in the textfile. 3. Reject some "i" images that are not appropriated (see rejection step later) for each person. it may of course be different "i" for all actors. 2. 
Select M images randomly out of the (N-i) images obtained for each persons: - M=30 for personA and personB (Training and Test), - M=10 for personC and personD (Test only) --- **[IMPORTANT NOTE]** If nothing else is set up, getting the N images require to perform an url request on websites we do not control. This is risky, as for any reason, the target website could be modified, not responding, responding too slowly, have removed the picture of interest, ... To prevent such issue, this tutorial provide the code to do things differently. - the first time (with some parameters below properly set), the source images are downloaded from the website (retrieving errors, skipping too slow website, etc.) - the images, downloaded, are then saved and zipped with a logfile - this zip archive is then uploaded on my personal Github account, as a public file It leads to a controlled database containing the source images, and ensure reproducibility during the different test run. *Three remarks* 1. only the original files are stored in the archive in the ZIP. Those files were selected randomly, using a random number generator. 2. the curation of the source files, the face cropping, and selection between training and test sets is still done at every run of this notebook. 3. the code dedicated to the archiving and saving part will not be detailed (while yet provided) in this notebook, but surely, you are welcome to contact me for more details using geoffroy.herbin@student.kuleuven.be. --- Start from clean sheet ``` try: shutil.rmtree(path_database) shutil.rmtree(path_datasets) shutil.rmtree(path_discard) except: pass ``` Create required folders ``` file_info = path_database+ r"info_retrieved.txt" try: os.mkdir(path_database) os.mkdir(path_datasets) os.mkdir(path_discard) except OSError as error: logging.error(error) ``` Instead of randomly download from web, download images from a "clean" and controlled repository (in [github](https://raw.githubusercontent.com/gherbin/cv_group9_database_replica/master/DATABASE-20200318T142918Z-001.zip) ), dedicated for this notebook. It ensures reproducibility and accessibility to the 180 input images. ``` if load_from_local_drive: !wget https://raw.githubusercontent.com/gherbin/cv_group9_database_replica/master/DATABASE-20200318T142918Z-001.zip with zipfile.ZipFile("DATABASE-20200318T142918Z-001.zip", 'r') as zip_ref: zip_ref.extractall() !rm -r "DATABASE-20200318T142918Z-001.zip" path_, dirs_, files = next(os.walk(path_database)) if len(files) == 180+1: logging.info("Successful database retrieval") elif load_from_local_drive: logging.error("Most Likely problem with database retrieval, number of files = " + str(len(files))) else: logging.info("No database images retrieved yet") ``` ###Definition of several data structures `images_size` : dictionary containing the number of images to first get from the web `persons` : list containing the names of the four persons (the names actually are the name of the text file in original database) `images`: dictionary containing the source images. 
The keys of the dictionary are the names of the four persons of interest ``` personA = "Emma_Stone.txt" personC = "Jane_Levy.txt" personB = "Bradley_Cooper.txt" personD = "Marc_Blucas.txt" persons = [personA, personB, personC, personD] datasets_dict = {} images_size = {} images_size[personA] = 60 images_size[personB] = 60 images_size[personC] = 30 images_size[personD] = 30 total_images_size = sum(images_size.values()) # Dictionary containing the ids of the pictures downloaded from internet vgg_ids = {} for p in persons: vgg_ids[p] = [] ``` If `confirmation_db_renewal` is `True`, the following code picks randomly (based on a seed being the name of the person) the images from the web. For a normal run, if the user does not want to change the original sourced data, `confirmation_db_renewal` should remain `False` (aka *change at your own risk* ;-) ) ``` if confirmation_db_renewal: try: shutil.rmtree(path_database) except: pass try: os.mkdir(path_database) except: pass fo = open(file_info, "w+") # images = {} # images_nominal_indices = {} for person in persons: logging.debug("Taking care of: " + str(person)) random.seed(person) # print(hash(person)) images_ = [] # images_nominal_indices_ = [] prev_index = [] with open(os.path.join(base_path, "vgg_face_dataset", "files", person), 'r') as f: lines = f.readlines() while len(images_) < images_size[person]: index = random.randrange(0, 1000) logging.debug("Index = " + str(index)) if index in prev_index: logging.debug("Index = " + str(index) + " => already there") continue else: prev_index.append(index) line = lines[index] # only curated data if int(line.split(" ")[8]) == 1: url = line[line.find("http://"): line.find(".jpg") + 4] logging.debug("URL > \"" + str(url)) try: res = request.urlopen(url, timeout = 1) img = np.asarray(bytearray(res.read()), dtype="uint8") img = cv2.imdecode(img, cv2.IMREAD_COLOR) h, w = img.shape[:2] cv2_imshow(cv2.resize(img, (w//4, h//4))) # images_nominal_indices_.append(index) filename = path_database + str(index) + "_" + str(person.split(".")[0]) + ".jpg" value = cv2.imwrite(filename, img) # logging.debug("saved in DB: " + str(filename)) images_.append(img) fo.write(line) except ValueError as e: logging.error("Value Error >" + str(e)) except (HTTPError, URLError) as e: logging.error('ERROR RETRIEVING URL >' + str(e)) except timeout: logging.error('socket timed out - URL %s', str(url)) except cv2.error as e: logging.error("ERROR WRITING FILE IN DB >" + str(e)) except: logging.error("Weird exception : " + str(line)) else: logging.debug("File not curated => rejected (id = " + str(index) + " )") # images[person] = images_ # images_nominal_indices[person] = images_nominal_indices_ fo.close() else: logging.warning("If you really want to erase and renew the database, please change first the \"confirmation\" boolean variable, at the beginning of this cell") ``` From the logfile in the archive, extract the information and fill the dictionary containing all the images `images`. 
```
with open(file_info, 'r') as f:
    lines = f.readlines()

assert len(lines) == total_images_size, "amount of lines in file incompatible"

images = {}
for p in persons:
    images[p] = []

images_index = {}
for running_index in range(len(lines)):
    if running_index in range(0, images_size[personA]):
        p = personA
    elif running_index in range(images_size[personA], images_size[personA] + images_size[personB]):
        p = personB
    elif running_index in range(images_size[personA] + images_size[personB], images_size[personA] + images_size[personB] + images_size[personC]):
        p = personC
    elif running_index in range(images_size[personA] + images_size[personB] + images_size[personC], total_images_size):
        p = personD
    ind = str(int(lines[running_index].split(" ")[0]) - 1)
    vgg_ids[p].append(ind)
    filename = ind + "_" + str(p.split(".")[0]) + ".jpg"
    images[p].append(cv2.imread(path_database + filename, cv2.IMREAD_COLOR))
```

###Rejection step

Although the downloaded images come from curated data, several of them still need to be removed before they can be used in this *educative* tutorial. The main reasons are:

* too different from the usual appearance of the person (make-up, masks, ...)
* very poor image quality
* irrelevance and/or error in the dataset
* duplicate of an image already in the dataset
* cropped image

Given the tight selection of images used to train our model (20 from personA and 20 from personB) and the relatively large pool of candidates (1000 per person), it is acceptable to reject the images we already know will not help. From the initially retrieved images, we therefore remove the undesired ones and copy them into the discard folder for tracking purposes. We/you may want to use them later.

---

`datasets_size`: dictionary of the required size per person (keys = person names)

`to_remove`: dictionary containing the indices to remove, per person (keys = person names)

`print_images = False` indicates that the remaining images in the `images` dictionary will not be printed.

---

From the remaining images (after rejection), we can randomly select the images that are part of the final sets (training and test sets are not split yet). A minimal sketch of this seeded selection is shown right below, before the full selection cell.
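Before running the full selection cell below, here is a minimal sketch of the underlying idea: a deterministic, seeded draw of distinct indices. It uses `random.sample` on a dedicated `random.Random` generator instead of the notebook's `randrange`-with-retry loop; the helper name `select_indices` is ours, and the sizes in the usage line are made up for illustration.

```
import random

def select_indices(person_name, pool_size, n_wanted):
    """Reproducibly pick n_wanted distinct indices out of range(pool_size).

    Seeding with the person's name makes the draw deterministic across runs,
    which is the same idea as the seeded selection in the cell below.
    """
    rng = random.Random(person_name)      # independent generator, seeded by the name
    return rng.sample(range(pool_size), n_wanted)

# toy usage (hypothetical pool size): pick 30 of 43 remaining images
print(select_indices("Emma_Stone.txt", 43, 30))
```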
``` # Dictionary of the size required (see section 3) datasets_size = {} datasets_size[personA] = 30 datasets_size[personB] = 30 datasets_size[personC] = 10 datasets_size[personD] = 10 # manually remove images that are not relevant or considered not good enough to be part of the dataset to_remove = {} to_remove[personA] = [0,1,4,8,12,13,16,23,28,34,36,42,44,47,48,49,54] to_remove[personB] = [4,7,11,12,13,16,21,22,23,24,25,26,27,32,36,39,41,46,49,53,55,58] to_remove[personC] = [0,1,6,7,8,11,14,16,17,19,20,21,24] to_remove[personD] = [0,3,5,6,8,10,12,15,16,17,24] # goal is to sort in descending to remove elements from lists without modifying the indexes for p in persons: to_remove[p].sort(reverse = True) # retrieve images candidates # -------------------------- if len(os.listdir(path_datasets) ) == 0 or True: logging.debug("datasets empty - need to retrieve all !") # removing images to discard for person in persons: for index in to_remove[person]: img = images[person].pop(index) logging.debug("Removing item " + str(index) + " from list " + str(person)) try: filename = path_discard + str(index) +"_discarded_" + str(person.split(".")[0]) + ".jpg" cv2.imwrite(filename, img) except: logging.error("Error while writing discarded image " + str(filename)) # randomly select among remaining images for person in persons: # build list of indices from remaining images logging.debug("Phase 2 (part 1) -> random indices selection for " + str(person)) images_ = [] indices = [] new_ids = [] # prev_index = [] random.seed(person) while len(indices) < datasets_size[person]: index = random.randrange(0, len(images[person])) if index in indices: logging.debug("Index among remaining = " + str(index) + " => already there") continue else: # prev_index.append(index) indices.append(index) logging.debug("Phase 2 (part 2) -> image selection based on indices") for index in indices: img = images[person][index] images_.append(img) filename = path_datasets + str(vgg_ids[person][index]) + "_" + str(person.split(".")[0]) + ".jpg" logging.debug("saved: " + str(filename)) cv2.imwrite(filename, img) new_ids.append(vgg_ids[person][index]) images[person] = images_ vgg_ids[person] = new_ids else: logging.debug("folders not empty => can build directly images dictionnary") # logging.debug("Number of images keys=" + len(images.keys)) # logging.debug("Number of images values=" + len(images.values)) logging.info(pretty_return_dict_size(images)) ''' print images to get to_remove indices ''' print_images = False if print_images: for person in persons: counter = 0 for img in images[person]: h = 0 w = 0 img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) faces = faceCascade.detectMultiScale( img_gray, scaleFactor=1.13, minNeighbors=10, minSize=(30, 30), flags=cv2.CASCADE_SCALE_IMAGE ) for (x,y,w,h) in faces: # faces_cropped[person].append(img[y:y+h, x:x+w]) cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2) logging.debug("------------------------------------------------------") logging.debug("Photo ID = " + str(counter)) logging.debug("size = " + str((h,w))) cv2_imshow(img) counter += 1 ``` ###Save images on drive (only if required) Mount drive and save images, according to the parameter `to_drive_confirmation` value. 
*Change at your own risk ;-)* ``` # Save to drive folders path_drive_DB = r"/content/drive/My Drive/ComputerVision/DATABASE" path_drive_Datasets = r"/content/drive/My Drive/ComputerVision/DATASETS" # drive folders should be properly set up if to_drive_confirmation: logging.warning("You need to have a drive mounted for this snippet to run successfully") try: drive.mount('/content/drive') except: pass try: shutil.rmtree(path_drive_DB) shutil.rmtree(path_drive_Datasets) except: logging.error("Error in rmtree") try: os.mkdir(path_drive_DB) os.mkdir(path_drive_Datasets) except OSError as error: logging.error(error) logging.debug("Saving database in drive : start") fromDirectory = path_database toDirectory = path_drive_DB copy_tree(fromDirectory, toDirectory) logging.debug("Saving datasets in drive : start") fromDirectory = path_datasets toDirectory = path_drive_Datasets copy_tree(fromDirectory, toDirectory) logging.debug("Saving: done !") ``` ``` path_vgg = r"/content/sample_data/CV__Group_assignment" path_drive_vgg = r"/content/drive/My Drive/ComputerVision/CV__Group_assignment" # drive folders should be properly set up if to_drive_confirmation_vgg: logging.warning("You need to have a drive mounted for this snippet to run successfully") try: drive.mount('/content/drive') except: pass try: shutil.rmtree(path_drive_vgg) except: logging.error("Error in rmtree") try: os.mkdir(path_drive_vgg) except OSError as error: logging.error(error) logging.debug("Saving database in drive : start") fromDirectory = path_vgg toDirectory = path_drive_vgg copy_tree(fromDirectory, toDirectory) logging.debug("Saving: done !") ``` ##Face detection using Haar Cascade From the raw images saved in the `images` dictionary, the faces are extracted using the *HaarCascade* method. The following code is based on the tutorial: [How to detect faces using Haar Cascade](https://www.digitalocean.com/community/tutorials/how-to-detect-and-extract-faces-from-an-image-with-opencv-and-python) The faces are saved in a new dictionary: `faces_cropped` ``` faceCascade = cv2.CascadeClassifier(os.path.join(base_path, "haarcascade_frontalface_default.xml")) faces_cropped = {} with open(file_info, 'r') as f: lines = f.readlines() for person in persons: faces_cropped[person] = [] for img in images[person]: img_ = img.copy() img_gray = cv2.cvtColor(img_, cv2.COLOR_BGR2GRAY) faces = faceCascade.detectMultiScale( img_gray, scaleFactor=1.13, minNeighbors=10, minSize=(30, 30), flags=cv2.CASCADE_SCALE_IMAGE ) for (x,y,w,h) in faces: faces_cropped[person].append(img[y:y+h, x:x+w]) cv2.rectangle(img_, (x, y), (x+w, y+h), (0, 255, 0), 2) # h, w = img_.shape[:2] # draw_box(lines, int(vgg_ids[person][running_index])+1, img_, person) # cv2_imshow(cv2.resize(img_, (w // 2, h // 2))) logging.info("Faces extracted and saved in dictionnary faces_cropped") logging.info(pretty_return_dict_size(faces_cropped)) ``` At this point, `faces_cropped` dictionary contains 30 cropped faces for personA and B, and 10 images for personC and personD. The following code selects randomly (based on a seed) the 20 images part of the training set for personA and personB. The other faces (10 for each person) are then part of the test set. --- * `training_set`: dictionary containing faces cropped (original size) part of the training set * `test_set`: dictionary containing faces cropped (original size) part of the test set --- At this point, there is not (yet) dedicated validation sets. It is discussed later on. 
All the training will be done on the training set faces, without any tailoring or dedicated fitting on the test set images. Indeed, the metrics on the test set faces indicate how well our model will generalize; it is therefore important not to let the test data influence our model.

```
training_sets_size = {}
training_sets_size[personA] = 20
training_sets_size[personB] = 20
training_sets_size[personC] = 0
training_sets_size[personD] = 0

test_sets_size = {}
test_sets_size[personA] = 10
test_sets_size[personB] = 10
test_sets_size[personC] = 10
test_sets_size[personD] = 10

training_set = {}
test_set = {}

for person in persons:
    image_ = faces_cropped[person]
    training_set_ = []
    random.seed(person)
    init_set = set(range(0, len(image_)))
    indices_training = random.sample(init_set, training_sets_size[person])
    indices_test = list(init_set - set(indices_training))
    training_set[person] = [faces_cropped[person][i] for i in indices_training]
    test_set[person] = [faces_cropped[person][i] for i in indices_test]

logging.info("Faces saved in dictionary training_set: ")
logging.info(pretty_return_dict_size(training_set))
logging.info("Faces saved in dictionary test_set: ")
logging.info(pretty_return_dict_size(test_set))
```

```
# show_images_from_dict(training_set)
```

```
# show_images_from_dict(test_set)
```

### Faces of the training set

- 20 faces of Emma Stone, personA
- 20 faces of Bradley Cooper, personB

PersonA and PersonB are quite different, A being female and B male. Furthermore, the images within each class are fairly dissimilar as well:
- different viewpoints (front, left, right)
- different lighting conditions
- different hair color
- beard/no beard (personB)
- different (limited) background

A shared characteristic, however, is that both actors are smiling on most of the extracted faces, sometimes showing their teeth, sometimes not.

```
# Visualization
training_set_matrix = get_matrix_from_set(training_set, color = True, sq_size = sq_size, flatten = True)
plot_matrix(training_set_matrix, color = True, my_color_map = plt.cm.viridis, h=4, w=10, transpose = False)
```

### Faces of the test set

Test images are needed for four persons:
- 10 faces of Emma Stone, personA
- 10 faces of Bradley Cooper, personB
- 10 faces of Jane Levy, personC
- 10 faces of Marc Blucas, personD

(A - C) and (B - D) respectively share some characteristics:
* both female / both male
* similar skin tone
* visually quite similar (especially A and C)

Within each group, as for the training set, the faces are taken from different viewpoints, lighting conditions, ...

```
test_set_matrix = get_matrix_from_set(test_set, color = True, sq_size=sq_size, flatten = True)
plot_matrix(test_set_matrix, color = True, my_color_map = plt.cm.viridis, h=4, w=10, transpose = False)
```

# Feature Representations

A feature representation of an object is, intuitively, a piece of *information* of reduced dimension with respect to the object that still captures it: it tells what defines the object and allows differentiating it from other objects. In the context of an image, a good feature needs to be:
- **robust**: the same feature extracted from the same object should stay *close*, even if the lighting conditions change, the viewpoint changes, ...
- **discriminative**: different images, representing different objects, should lead to different representations in the feature space.
As a toy example, the size of an image is not a good feature to detect a person, as several person can be represented in images of the same size. We will look at two features: 1. **HOG**: Histogram of Oriented Gradients. This is a handcrafted feature, extracted using a specific algorithm 2. **PCA**: Principal Component Analysis. This is a feature learnt from the data. ## Histogram of Oriented Gradients - HOG As this is a tutorial of Computer Vision, let's look first at what is visually / intuitively the HOG on a real image - one of the Emma Stone (personA) faces. The execution of the following code snippet shows on the left the input face, and on the right, the results. Then, we'll see the details of the algorithm, and its specificities (parameters) This section is extensively inspired by [this course](https://www.learnopencv.com/histogram-of-oriented-gradients/), while the technique has been introduced by [this paper](http://lear.inrialpes.fr/people/triggs/pubs/Dalal-cvpr05.pdf), which I strongly advise to read! ``` ''' First example - Emma stone first image ''' src = faces_cropped[personA][0] #1 resizing resized_img = cv2.resize(src, (sq_size, sq_size)) #2 computing HOG fd, hog_image = skimage_feature_hog(resized_img, orientations=9, pixels_per_cell=(8,8), cells_per_block=(2, 2), block_norm = "L2", visualize=True, transform_sqrt = True, multichannel=True) ''' Plotting results ''' logging.info("Resized image and its Histogram of Oriented Gradients (its visual representation)") fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20, 8), sharex=True, sharey=True) ax1.imshow(cv2.cvtColor(resized_img, cv2.COLOR_BGR2RGB)) ax1.set_title('Input image') # Rescale histogram for better display hog_image_rescaled = exposure.rescale_intensity(hog_image, in_range=(0, 10)) ax2.imshow(hog_image_rescaled, cmap=plt.cm.gray) ax2.set_title('Histogram of Oriented Gradients - rescaled') logging.debug("HOG Rescaled: " + str(hog_image_rescaled.min()) + " -> " + str(hog_image_rescaled.max()) ) # ax3.imshow(hog_image, cmap=plt.cm.gray) # ax3.set_title('Histogram of Oriented Gradients') # logging.debug("HOG: " + str(hog_image.min()) + " -> " + str(hog_image.max()) ) plt.show() # logging.debug(hog_image_rescaled.shape) logging.debug("fd shape = " + str(fd.shape)) ``` ### HOG - What is that ? HOG is a feature descriptor that extracts information from an image (or more precisely, a patch) based on the gradients in this image. More specifically, it builds a vector representing the weighted distribution of the gradients orientation across the images. #### Why is it interesting ? Let's remember that our goal is to perform image classification and identification. A face can be recognized through the inherent shapes: circular of face, shapes of the eyes, the nose, potentially the glasses, etc. The *edge* information is therefore useful! It is even more useful than the colors... Intuitively, you can think about recognizing someone familiar with only some contours of one face. It is easy to recognize the former US President, while no color information is given. Intuitively, HOG gives the same information: - magnitude of gradient is large around the edges and corners - orientation gives the shape #### How to compute the HOG of an image ? In a nutshell: - The gradients are first computed on each pixel. - The gradients orientation and magnitude are used to build an histogram for a cell. The size of a cell is typically 8 x 8 pixels. 
- Those histogram are normalized - All the histograms computed on the images are then concatenated in a *long* vector, yet much smaller than original image. In the upcoming sections, we will detail the process, as well as the code and parameters required at each step. The following code snippet is a homemade class required to compute the HOG. The results obtained with this code will be compared with the infamous skimage library optimized for the HOG descriptor. You can simply run the snippet and come back later on to see the details of the implementation. ``` class MyHog(): def __init__(self, img): self.img = img # image of the size 64x64; 64x128; 128x128; ... => resized image of the original self.mag_max, self.orn_max = self.compute_gradients() def compute_gradients(self): gx = cv2.Sobel(self.img, cv2.CV_32F, 1, 0, ksize = 1) gy = cv2.Sobel(self.img, cv2.CV_32F, 0, 1, ksize = 1) mag, angle = cv2.cartToPolar(gx, gy, angleInDegrees=True) orn = angle.copy() logging.debug("mag shape :" + str(mag.shape)) logging.debug("orn shape :" + str(orn.shape)) # constructing matrices of max dimension mag_max = np.zeros((mag.shape[0], mag.shape[1])) orn_max = np.zeros((orn.shape[0], orn.shape[1])) for i in range(mag.shape[0]): for j in range(mag.shape[1]): mag_max[i,j] = mag[i,j].max() idx = np.argmax(mag[i,j]) orn_max[i,j] = orn[i,j,idx] # mag_max = mag_max.T # orn_max = orn_max.T return mag_max, orn_max def get_cells_mag_orn(self, y_start, x_start, cell_h, cell_w): ''' returns the cell magnitude, orientation and "clipped" orientation ( where 0 -> 360 is mapped into 0 -> 180) ''' cell_mag = np.zeros((cell_h,cell_w)) cell_orn = np.zeros((cell_h,cell_w)) for i in range(cell_h): for j in range(cell_w): cell_mag[i,j] = self.mag_max[y_start+i, x_start+j] cell_orn[i,j] = round(self.orn_max[y_start+i,x_start+j]) cell_orn_clipped = cell_orn.copy() cell_orn_clipped = ((cell_orn_clipped) + 90 ) % 360 for i in range(cell_h): for j in range(cell_w): if 0 <= cell_orn_clipped[i,j] < 180: cell_orn_clipped[i,j] = 180 - cell_orn_clipped[i,j] elif 180 <= cell_orn_clipped[i,j] <=360: cell_orn_clipped[i,j] = 180 - cell_orn_clipped[i,j] % 180 return cell_mag.T, cell_orn.T, cell_orn_clipped.T def fill_bins_one_pixel(self, mag, orn, bin_list, implementation_type = "skimage"): ''' # mag: magnitude of the gradient of 1 px # orn: orientation of the gradient of 1 px bin_list: reference, list of bins that is incremented ''' N_BUCKETS = len(bin_list) assert N_BUCKETS == 9, "N_BUCKETS is not 9!!!" size_bin = 20. 
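        # (added note) Two voting strategies are sketched below for filling the 9 bins:
        #  - "learnopencv": the magnitude is split proportionally between the two
        #    nearest 20-degree bins (bilinear vote), as described in the learnopencv post;
        #  - "skimage": the whole magnitude is assigned to the single bin whose
        #    20-degree range contains the (unsigned) orientation, mimicking skimage's output.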
if implementation_type == "learnopencv": if orn >= 160: left_bin = 8 right_bin = 9 left_val= mag * (right_bin * 20 - orn) / 20 right_val = mag * (orn - left_bin * 20) / 20 left_bin = 8 right_bin = 0 else: left_bin = int(orn / size_bin) right_bin = (int(orn / size_bin) + 1) % N_BUCKETS left_val= mag * (right_bin * 20 - orn) / 20 right_val = mag * (orn - left_bin * 20) / 20 assert left_val >= 0, "leftval = " + str(left_val) + ", " + str("mag = ") + str(mag) + " & orn = " + str(orn) assert right_val >= 0, "rightval = " + str(right_val) + ", " + str("mag = ") + str(mag) + " & orn = " + str(orn) # print(left_val) # print(right_val) bin_list[left_bin] += left_val bin_list[right_bin] += right_val elif implementation_type == "skimage": # easiest ''' this implementation mimics the one from skimage ''' if 0 <= orn <= 10: bin_list[4] += mag elif 10 < orn <= 30: bin_list[3] += mag elif 30 < orn <= 50: bin_list[2] += mag elif 50 < orn <= 70: bin_list[1] += mag elif 70 < orn <= 90: bin_list[0] += mag elif 90 < orn <= 110: bin_list[8] += mag elif 110 < orn <= 130: bin_list[7] += mag elif 130 < orn <= 150: bin_list[6] += mag elif 150 < orn <= 170: bin_list[5] += mag elif 170 < orn <= 180: bin_list[4] += mag else: raise RuntimeError("Impossible ! > " + str(orn)) else: raise NotImplementedError def compute_hog_bins(self, y_start, x_start, cell_h, cell_w, show_src = True, show_results=True, figsize = (12,4)): ''' y_start: y value of the top left pixel x_start: x value of the top left pixel cell_h : height of the cell in which HOG is computed cell_w : width of the cell in which HOG is computed ''' cell_img = self.img[y_start:y_start + cell_h, x_start:x_start+cell_w] if show_src: tmp = self.img.copy() cv2.rectangle(tmp, (x_start-1, y_start-1), (x_start+cell_w+1, y_start+cell_h+1), (0,255,0)) fig, ax = plt.subplots(1,1, figsize = (figsize[1],figsize[1])) ax.imshow(cv2.cvtColor(tmp, cv2.COLOR_BGR2RGB)) ax.set_title("Selection of a cell") plt.show() # construction of the magnitude and orn matrices cell_mag, cell_orn, cell_orn_clipped = self.get_cells_mag_orn(y_start, x_start, cell_h, cell_w) number_of_bins = 9 bin_list = np.zeros(number_of_bins) for i in range(cell_h): for j in range(cell_w): # m = round(mag_normalized[y_start+j,x_start+i].max()) m = cell_mag[i,j] d = cell_orn_clipped[i,j] # print("m,d =" + str((m,d))) self.fill_bins_one_pixel(m,d,bin_list) # logging.debug("Bins computed:" + str(bin_list)) n = np.linalg.norm(bin_list) bin_norms = bin_list/n # logging.debug("Bins normalized:" + str(bin_norms)) if show_results: fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=figsize, sharex=True, sharey=True) # ax1 => just to visually represent the arrows for i in range(cell_h): for j in range(cell_w): radius = cell_mag[i,j] / (cell_mag.max() - cell_mag.min()) angle_ = cell_orn[i,j] orn_value_clipped = cell_orn_clipped[i,j] mag_value = round(cell_mag[i,j]) ax1.arrow(i, j, radius*np.cos(np.deg2rad(angle_)), radius*np.sin(np.deg2rad(angle_)), head_width=0.15, head_length=0.15, fc='b', ec='b') ax2.text(i, j, str(orn_value_clipped.astype(np.int64)), fontsize=10,va='center', ha='center') ax3.text(i, j, str(mag_value.astype(np.int64)), fontsize=10,va='center', ha='center') ax1.imshow(cv2.cvtColor(cell_img, cv2.COLOR_BGR2RGB)) ax1.set_title('Input image') ax2.matshow(cell_orn_clipped, alpha=0) ax2.set_title('Orientation values') ax3.set_title('Magnitude values') intersection_matrix = np.ones(cell_mag.shape) ax3.matshow(cell_mag, alpha = 0) plt.show() return bin_list, bin_norms ``` ###HOG How-to, Step1: Preprocessing the 
image Usually, the size of an image is not appropriate to perform the HOG computation. The easiest thing is just to resize the image to an appropriate size. In this tutorial, we use a multiple of 8 and a square image. Considering the smallest face cropped, we select 64 x 64 pixels. Furthermore, as often in image processing, the largest the image, the more resource consuming it is. Keeping a reasonably small image helps in having a decent computation time. ``` logging.info("Toy Example") img = faces_cropped[personA][0].copy() resized_img = cv2.resize(img, (64,64)) logging.info("Shape of source face: " + str(img.shape)) logging.info("Shape of resized face: " + str(resized_img.shape)) fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4), sharex=False, sharey=False) ax1.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)) ax1.set_title('Input image') ax2.imshow(cv2.cvtColor(resized_img, cv2.COLOR_BGR2RGB)) ax2.set_title('Resized image') plt.show() ``` ###HOG How-to, Step2: Computing the gradient for all pixels Computing the gradient in horizontal ($x$) and vertical ($y$) directions can be done using a pass of the Sobel Filter, part of the *openCV* library. This is implemented in the `MyHog.compute_gradients()` function (see implementation of `MyHog` class, above). From these gradients in $x$ and $y$ we can derive the magnitudes and orientations in all pixels using the formulas: > $ \begin{align} mag &= \sqrt{g_x^2 + g_y^2} \\ orn &= atan(\frac{g_y}{g_x}) \end{align} $ This is implemented using *openCV* library with `cartToPolar`. #####*Grayscale or Colored image* > For grayscale image, every pixel has one value so that this computation is straightforward. For colored image, a pixel has three values (one for Red, one for Green, one for Blue). In the HOG algorithm, the gradients are computed for all three channels, and the final magnitude is the maximum of the three, and the orientation is the one corresponding to the magnitude. ``` ''' Create an object MyHog, which takes a resized image as input, and compute its gradients. ''' myhog = MyHog(resized_img) fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4), sharex=True, sharey=True) ax1.imshow(myhog.mag_max, cmap = plt.cm.jet) ax1.set_title('Max of magnitude') ax2.imshow(myhog.orn_max, cmap = plt.cm.jet) ax2.set_title('Max of Orientation') plt.show() ``` As visible on the magnitude and orientation plots above, only essential information regarding the edges is kept. ###HOG How-to, Step3: Compute histograms for cells The histograms are first computed for small cells. It has two major benefits 1. the representation is more compact: Suppose we take cells of $8 \times 8$ pixels. The gradient of each pixel is described using 2 numbers (magnitude and orientation), leading to 128 numbers. Considering an histogram applied on such a cell allows to represent those 128 numbers by a tiny array of typically 9 numbers. In total a colored image of $64 \times 64$ pixels is represented using $9*8*8$ vector. 2. the representation is less sensitive to noise, as applying a cell is equivalent to a low-pass filter. Higher frequency outliers are therefore of less importance. The choice of the cell size is a design choice that can be modified. In a later section, we will modify this parameter to see how it can affect the results. In the paper that first presented the technique, a cell of $8 \times 8$ pixels was used - we will continue with this (hyper-)parameter. 
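To make the compression claim above concrete, here is a tiny arithmetic check, a sketch in plain Python with no notebook variables assumed: the number of values needed before and after binning, for one $8 \times 8$ cell and for the whole $64 \times 64$ image (before block normalization).

```
cell = 8        # cell side, in pixels
img = 64        # resized image side, in pixels
n_bins = 9      # orientation bins per cell

per_cell_raw = cell * cell * 2        # magnitude + orientation per pixel = 128 numbers
per_cell_hist = n_bins                # one 9-bin histogram per cell
cells = (img // cell) ** 2            # 8 * 8 = 64 cells in the image
whole_img_hist = cells * n_bins       # 9 * 8 * 8 = 576 numbers before block normalization

print(per_cell_raw, per_cell_hist, cells, whole_img_hist)   # 128 9 64 576
```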
> *You may try yourself to modify the cell size or the x_start or y_start values, to see the influence on the histogram computed for that particular cell* ``` ''' Size of a cell ''' cell_h = 8 cell_w = 8 ''' starting point of the cell on the image ''' y_start = 3 * cell_h - 1 x_start = 2 * cell_w - 1 hog_val, hog_val_normalized = myhog.compute_hog_bins(y_start, x_start, cell_h, cell_w, show_results=True, figsize=(16,5)) fig, ax = plt.subplots(1,1, figsize = (2*5, 5)) ax.bar(["]70-90]","]50-70]", "]30-50]", "]10-30]","]10-\n180]","]150-\n170]","]130-\n150]","]110-\n130]", "]90-\n110]"], hog_val_normalized) ax.set_title("Histogram computed with MyHog (homemade)") plt.show() logging.info("MyHog bins normalized : {}, {}, {}, {}, {}, {}, {}, {}, {}".format(*np.round(hog_val_normalized,2))) ``` **Legend of the above images** 1. source image with *cell* visible in flashy green, 2. details of the gradients computations - *leftmost*: cell enlarged with an arrow indicating the gradient: length of the arrow represent the magnitude, and orientation is the gradient orientation computed on that pixel - *middle*: matrix (shape == cell) containing the orientations computed (unsigned) - *rightmost*: matrix (shape == cell) containung the magnitude computed 3. histogram computed (`keyword = "skimage"`) ####In details The details of building the histogram for a cell is not complicated: - consider N bins. N is a design parameter, and each of the bins represent a range of gradient orientations. I have chosen N = 9, following the [introducing paper](http://lear.inrialpes.fr/people/triggs/pubs/Dalal-cvpr05.pdf), as it induces a granularity fine enough to observe change in the picture. > The HOG is usually applied using **unsigned gradients**. The numbers on the orientation matrix are between 0 and 180 instead of 0 and 360 degrees. Concretely, an angle $\alpha [deg] $ and $(180 + \alpha) [deg]$ contribute to the same bin. Empirically, it has been observed that it wasn't decreasing performance in the detection. Of course, nothing forbids to use signed gradients. - for a particular pixel: - the bin is selected according to the orientation of the gradient; - the value that goes in the bin is based on the magnitude. Different methods were proposed by the [introducing paper](http://lear.inrialpes.fr/people/triggs/pubs/Dalal-cvpr05.pdf), and are also explained in details for instance in [vidha blog](https://www.analyticsvidhya.com/blog/2019/09/feature-engineering-images-introduction-hog-feature-descriptor/). The class `MyHog` above contains two implementation: either the magnitude is split proportionnaly between two bins (as described in [learnopencv](https://www.learnopencv.com/histogram-of-oriented-gradients/), or -- as done in *openCV* library --, the whole magnitude is assigned to the closest bin. While this step is not difficult, let's realize in image how it's done! ####Creation of dedicated images The following function allows creating images "on demand", in order to better understand the histogram computation. The parameter "special" indicate what type of image is required. ``` ''' CREATE_IMAGE returns an image created based on a keyword. 
''' def create_image(height, width, special=None): mat0 = np.ones((height, width), dtype=np.uint8)*255 mat1 = np.ones((height, width), dtype=np.uint8)*255 mat2 = np.ones((height, width), dtype=np.uint8)*255 if special == "center_black": mat0[height//2-1:height//2+1, width//2-1 : width//2+1 ] = 0 mat1[height//2-1:height//2+1, width//2-1 : width//2+1 ] = 0 mat2[height//2-1:height//2+1, width//2-1 : width//2+1 ] = 0 elif special == "center_gray": mat0[height//2-3:height//2+3, width//2-3 : width//2+3 ] = 125 mat1[height//2-3:height//2+3, width//2-3 : width//2+3 ] = 125 mat2[height//2-3:height//2+3, width//2-3 : width//2+3 ] = 125 mat0[height//2-1:height//2+1, width//2-1 : width//2+1 ] = 0 mat1[height//2-1:height//2+1, width//2-1 : width//2+1 ] = 0 mat2[height//2-1:height//2+1, width//2-1 : width//2+1 ] = 0 elif special == "90": mat0[0:height, width//2 : width ] = 0 mat1[0:height, width//2 : width ] = 0 mat2[0:height, width//2 : width ] = 0 elif special == "180": mat0[height//2:height, 0 : width ] = 0 mat1[height//2:height, 0 : width ] = 0 mat2[height//2:height, 0 : width ] = 0 elif special == "135": for i in range(height): for j in range(i,width): mat0[i,j] = mat1[i,j] = mat2[i,j] = 0 elif special == "45": for i in range(height): for j in range(width-i-1,width): mat0[i,j] = mat1[i,j] = mat2[i,j] = 0 elif special == "28_34_37": mat0[4, 6:8] = 200 mat0[5, 4:8] = 150 mat0[6, 2:8] = 100 mat0[7, 0:8] = 50 mat1 = mat0.copy() mat2 = mat0.copy() elif special == "up_01": mat0[4, 4:8] = 250 mat0[5, 0:8] = 50 mat0[6, 0:8] = 50 mat0[7, 0:8] = 50 mat1 = mat0.copy() mat2 = mat0.copy() elif special == "up_10": mat0[4, 4:8] = 220 mat0[5, 0:8] = 50 mat0[6, 0:8] = 50 mat0[7, 0:8] = 50 mat1 = mat0.copy() mat2 = mat0.copy() elif special == "up_11": mat0[4, 4:8] = 225 mat0[5, 0:8] = 50 mat0[6, 0:8] = 50 mat0[7, 0:8] = 50 mat1 = mat0.copy() mat2 = mat0.copy() elif special == "up_15": mat0[4, 4:8] = 200 mat0[5, 0:8] = 50 mat0[6, 0:8] = 50 mat0[7, 0:8] = 50 mat1 = mat0.copy() mat2 = mat0.copy() elif special == "up_27": mat0[4, 4:8] = 150 mat0[5, 0:8] = 50 mat0[6, 0:8] = 50 mat0[7, 0:8] = 50 mat1 = mat0.copy() mat2 = mat0.copy() elif special == "up_152": mat0 = np.ones((height, width), dtype=np.uint8)*0 mat1 = np.ones((height, width), dtype=np.uint8)*0 mat2 = np.ones((height, width), dtype=np.uint8)*0 mat0[0,6:7] = mat1[0,6:7] = mat2[0,6:7] = 255 mat0[1,7] = mat1[1,7] = mat2[1,7] = 135 elif special == "down_153": mat0 = np.ones((height, width), dtype=np.uint8)*255 mat1 = np.ones((height, width), dtype=np.uint8)*255 mat2 = np.ones((height, width), dtype=np.uint8)*255 mat0[0,6:7] = mat1[0,6:7] = mat2[0,6:7] = 0 mat0[1,7] = mat1[1,7] = mat2[1,7] = 125 elif special == "up_141": mat0 = np.ones((height, width), dtype=np.uint8)*0 mat1 = np.ones((height, width), dtype=np.uint8)*0 mat2 = np.ones((height, width), dtype=np.uint8)*0 mat0[0,6:7] = mat1[0,6:7] = mat2[0,6:7] = 255 mat0[1,7] = mat1[1,7] = mat2[1,7] = 210 elif special == "up_111": mat0 = np.ones((height, width), dtype=np.uint8)*0 mat1 = np.ones((height, width), dtype=np.uint8)*0 mat2 = np.ones((height, width), dtype=np.uint8)*0 mat0[0,0:7] = mat1[0,0:7] = mat2[0,0:7] = 100 mat0[1,7] = mat1[1,7] = mat2[1,7] = 255 elif special == "personA": mat0 = np.ones((height, width), dtype=np.uint8)*122 mat1 = np.ones((height, width), dtype=np.uint8)*122 mat2 = np.ones((height, width), dtype=np.uint8)*122 for i in range(24): for j in range(40, width): if j >= (i+40): mat0[i,j] = mat1[i,j] = mat2[i,j] = 10 elif special == "personB": mat0 = np.ones((height, width), 
dtype=np.uint8)*122 mat1 = np.ones((height, width), dtype=np.uint8)*122 mat2 = np.ones((height, width), dtype=np.uint8)*122 mat0[:,50:width] = 10 mat1[:,50:width] = 10 mat2[:,50:width] = 10 image = np.dstack((mat0, mat1, mat2)) return image ``` ####Examples of histogram computed on created images In order to understand how the bins are fulled, let's look at a few of simple images. Those images are $8 \times 8$, meaning 1 cell == 1 image * pure 90° gradient * pure 180° gradient * diagonal: 45° * diagonal: 135° For each case, we plot: 1. the arrows representing the gradients, 2. the matrices of the magnitude and orientation values, 3. the resulting histogram **and** we log: - the raw values of the histogram bins - the normalized values of the histogram bins, using *L2-Normalization*: $\begin{align} bins\_values &= [x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8, x_9] \\ \|bins\_values\| &= \sqrt{x_1^2 + x_2^2 + x_3^2 + x_4^2 + x_5^2 + x_6^2 + x_7^2 + x_8^2 + x_9^2 } \\ bins\_values_{normalized} &= \frac{v}{\|v\|} \\ &= \Bigg[ \frac{x_1}{\|v\|}, \frac{x_2}{\|v\|}, \frac{x_3}{\|v\|}, \frac{x_4}{\|v\|}, \frac{x_5}{\|v\|}, \frac{x_6}{\|v\|}, \frac{x_7}{\|v\|}, \frac{x_8}{\|v\|}, \frac{x_9}{\|v\|} \Bigg] \end{align}$ #### Validation of intuition To prove ourselves our implementation and understanding is correct, we will confront the results with the infamous library `skimage`, using `skimage.feature.hog` with the same parameters as the handmade function: 9 bins, an $8\times 8$ cell, and 1 cell per *block* (we discuss the *blocks* in the next section). Two parameters are still unknown: transform_sqrt and multichannel - `transform_sqrt`: if True, then the sqrt operator is applied to the global image first. It can give better results. We can safely leave it to False for the purpose of this exercices with the HOG bins... - `multichannel`: simply indicates if the image is grayscale (`multichannel = False`) or in color (`multichannel = True`) <!--Note: we briefly discussed the `block_norm` parameter, but more to come in the next step.--> ####Finally... Coming back to the very first example of the HOG, we saw the Emma Stone picture with weird white-ish pictograms describing her face... Well, thanks to the `skimage` library, it's very easy to get this image, and we show it for the toy example we are studying now. It allows grabbing the full overview of how, eventually, the complete histogram and visualization is computed. > *Of course, don't hesitate to change yourself the list of images that are analyzed, considering the list implemented. 
You find the keywords accepted in the special list* ``` ''' Creation of the images ''' special = ["up_01","45","90", "135","180","up_10","up_11","up_15", "up_27", "28_34_37", "up_111", "up_141", "up_152", "down_153","center_black", "center_gray"] list_as_example = ["90", "180", "45", "135", "28_34_37"] # created_img = create_image(cell_h,cell_w, "45") ''' Homemade implementation of the histogram and Validation with an optimized library ''' for keyword in list_as_example: logging.info("Considering image: " + str(keyword)) # creation of the simple image created_img = create_image(cell_h, cell_w, keyword) # creation of MyHog object toyhog=MyHog(created_img) # compute bins using MyHog y_start_loc = 0 x_start_loc = 0 bins, bins_normalized = toyhog.compute_hog_bins(y_start_loc, x_start_loc, cell_h, cell_w, show_src=False, show_results=True, figsize = (14,4)) # compute bins using Skimage fd, hog_image = skimage_feature_hog(created_img, orientations=9, pixels_per_cell=(8,8), cells_per_block=(1, 1), block_norm = "L2", visualize=True, transform_sqrt = False, multichannel=True) # plot results from Skimage plt.figure() fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 4), sharex=False, sharey=False) # Rescale hog for better display hog_image_rescaled = exposure.rescale_intensity(hog_image, in_range=(0, 10)) xlabels = ["]70-90]","]50-70]", "]30-50]", "]10-30]","]10-\n180]","]150-\n170]","]130-\n150]","]110-\n130]", "]90-\n110]"] x = np.arange(9) w = 0.2 ax1.bar( x-w, bins_normalized, width=2*w, align='center',color="b") ax1.bar( x+w, fd, width=2*w, align='center',color="r") ax1.set_title("Histogram computed by Skimage.feature.hog") ax1.legend(["MyHog (homemade)", "Skimage.feature.hog"]) # start, end = ax.get_xlim() # ax.xaxis.set_ticks(np.arange(start, end, 1)) # ax1.set_xticklabels(xlabels) ax1.set_xticks(x) ax1.set_xticklabels(xlabels) ax2.imshow(hog_image_rescaled, cmap=plt.cm.gray) ax2.set_title('Histogram of Oriented Gradients - visual') plt.show() logging.info("MyHog bins computed : {}, {}, {}, {}, {}, {}, {}, {}, {}".format(*np.round(bins,2))) logging.info("MyHog bins normalized : {}, {}, {}, {}, {}, {}, {}, {}, {}".format(*np.round(bins_normalized,2))) logging.info("Skimage bins normalized : {}, {}, {}, {}, {}, {}, {}, {}, {}".format(*np.round(fd,2))) logging.info("***"*30) ``` > Notice: as a reminder, the purpose of the `MyHog` class (or any of the other class from this tutorial) is definitely not to reproduce exactly the behavior of a well-known and optimized library, but solely to break the magic behind using a library function without understanding the algorithm behind. As a result, the HOG computed may differ in several ways. ####Coming back to our initial cell... Emma Stone cell defined above can now be shown in terms of HOG, helped by the `skimage` library. 
``` ''' Retrieving the cell defined above ''' cell_img=resized_img[y_start:y_start + cell_h, x_start:x_start+cell_w] ''' computing HOG of the cell using same parameters ''' fd, hog_image = skimage_feature_hog(cell_img, orientations=9, pixels_per_cell=(8,8), cells_per_block=(1, 1), block_norm = "L2", visualize=True, transform_sqrt = False, multichannel=True) plt.figure() fig, (ax0, ax1, ax2) = plt.subplots(1, 3, figsize=(12, 4), sharex=False, sharey=False) # hog_cropped = hog_image[y_start:y_start + cell_h, x_start:x_start+cell_w] ax0.imshow(cv2.cvtColor(resized_img, cv2.COLOR_BGR2RGB)) ax0.set_title('Input image') ax1.imshow(cv2.cvtColor(cell_img, cv2.COLOR_BGR2RGB)) ax1.set_title('Extracted cell') # Rescale histogram for better display hog_image_rescaled = exposure.rescale_intensity(hog_image, in_range=(0, 10)) ax2.imshow(hog_image_rescaled, cmap=plt.cm.gray) ax2.set_title('Histogram of Oriented Gradients - rescaled') # print(hog_image_rescaled) plt.show() logging.info("Skimage bins normalized : {}, {}, {}, {}, {}, {}, {}, {}, {}".format(*np.round(fd,2))) ``` This is the end of the Step3: the computation of the histogram for one cell! A careful eye will have seen the parameters `cells_per_block` and `block_norm` of the library method. This is linked to Step4: Block Normalization! ###HOG How-to, Step4: Block normalization In the Step3, we have extensively seen how to compute the histogram of gradients for a cell. We are almost at the end of the feature representation build up, but there are yet one normalization step. > In the previous step, we actually already normalized the histogram values using *L2-Normalization*. This is a simple case of the Block normalization where there is 1 cell per block. In general, we can define to perform Block normalization on more than 1 block. A common value is 4, as discussed in the [introducing paper](http://lear.inrialpes.fr/people/triggs/pubs/Dalal-cvpr05.pdf). ####<u>Why do we need normalization ?</u> When normalized, a histogram becomes independant from the lighting variation. Indeed, illumination has the impact of increasing/decreasing the values of the pixels in a cell. Using normalization, a change on the pixel value has no impact if all the pixels in a cell are subject to the same change. > let's say a low illumination make the pixel values divided by two. Having a normalized histogram on the cell will not be affected by such a change, as in the end, the absolute value is not important: only the relative value of one pixel to others matter. This is the very essence of the normalization. As a result, normalizing the histogram makes it quite independant of the lighting condition (provided that in a cell, all the pixels have the same lighting condition, which seems a sensible assumption). ####<u>Normalizing by block</u> A nice visualization of the normalization by block of multiple cells is given in [learnopencv](https://www.learnopencv.com/wp-content/uploads/2016/12/hog-16x16-block-normalization.gif) where we see in green the different cells, and in blue a block of 4 cells. Using a block normalization - so, normalizing multiple cells at ones, and slide the normalization window across the image - is introduced in the [introducing paper](http://lear.inrialpes.fr/people/triggs/pubs/Dalal-cvpr05.pdf) which shows some benefits in terms of missing rate. Typically, 4 cells per blocks is often used. In a later section (see Classification), a grid search tends to try out other block sizes. 
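Before looking at the different norms, here is a minimal NumPy sketch of the block step itself, under the parameters used so far: the four 9-bin cell histograms of a $2 \times 2$ block are concatenated into a 36-dimensional vector and divided by its L2 norm. The helper name `l2_block_normalize` and the small `eps` guard against empty blocks are our own additions, not part of any library.

```
import numpy as np

def l2_block_normalize(cell_hists, eps=1e-12):
    """cell_hists: array of shape (2, 2, 9) -> flattened, L2-normalized (36,) vector."""
    block = np.asarray(cell_hists, dtype=float).ravel()   # concatenate the 4 cell histograms
    return block / (np.linalg.norm(block) + eps)          # L2 normalization

# toy usage: 4 random cell histograms
rng = np.random.default_rng(0)
v = l2_block_normalize(rng.random((2, 2, 9)))
print(v.shape, round(float(np.linalg.norm(v)), 3))        # (36,) 1.0
```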
####<u>What normalization ?</u> Several normalization can be considered: *L1*, *L2*, *L2-Hys*, ... ``` ''' Definition of the block size (in number of px) 1 cell = 8 x 8 1 block => 16 x 16 => 4 cells ''' block_w = 16 block_h = 16 x_cells = np.arange(0,64,8) y_cells = np.arange(0,64,8) # credit: https://stackoverflow.com/questions/44816682/drawing-grid-lines-across-the-image-uisng-opencv-python def draw_grid(img, line_color=(0, 255, 0), thickness=1, type_=8, pxstep=8): '''(ndarray, 3-tuple, int, int) -> void draw gridlines on img line_color: BGR representation of colour thickness: line thickness type: 8, 4 or cv2.LINE_AA pxstep: grid line frequency in pixels ''' x = pxstep y = pxstep while x < img.shape[1]: cv2.line(img, (x, 0), (x, img.shape[0]), color=line_color, lineType=type_, thickness=thickness) x += pxstep while y < img.shape[0]: cv2.line(img, (0, y), (img.shape[1], y), color=line_color, lineType=type_, thickness=thickness) y += pxstep def draw_one_block(img, origin=(0,0), line_color=(255,0, 0), block_size = 16, thickness=1, type_=8): cv2.rectangle(img, origin, (origin[0]+block_size, origin[1]+block_size), line_color, thickness=thickness, lineType =type_) def draw_all_blocks(img, thickness): color_list = [(255,0,0), (255,255,0), (255,0,255)] x = 0 y = 0 counter = 0 while x < img.shape[1]-8: # cv2.line(img, (x, 0), (x, img.shape[0]), color=line_color, lineType=type_, thickness=thickness) # draw_one_block(img, (x,y)) while y < img.shape[0]-8: # cv2.line(img, (0, y), (img.shape[1], y), color=line_color, lineType=type_, thickness=thickness) lc = color_list[counter%3] draw_one_block(img, (x,y),line_color=lc, thickness=thickness) counter += 1 y += 8 y=0 x += 8 return counter ''' creating a green grid covering the resized image ''' grid_cells_img = resized_img.copy() draw_grid(grid_cells_img, type_=8) ''' Creating the three first blocks ''' first_block_img = grid_cells_img.copy() second_block_img = grid_cells_img.copy() third_block_img = grid_cells_img.copy() draw_one_block(first_block_img, origin=(0,0), line_color=(255,0,0), thickness=2) draw_one_block(second_block_img, origin=(8,0), line_color=(255,255,0),thickness=2) draw_one_block(third_block_img, origin=(16,0), line_color=(255,0,255),thickness=2) ''' Creating all the blocks on top of the cells ''' blocks_img = grid_cells_img.copy() counter = draw_all_blocks(blocks_img, thickness=1) ''' Vizualization ''' fig, (ax0, ax1, ax2, ax3) = plt.subplots(1, 4, figsize=(16, 4), sharex=False, sharey=False) ax0.imshow(cv2.cvtColor(grid_cells_img, cv2.COLOR_BGR2RGB)) ax0.set_title('Cells') ax1.imshow(cv2.cvtColor(first_block_img, cv2.COLOR_BGR2RGB)) ax1.set_title('first block') ax2.imshow(cv2.cvtColor(second_block_img, cv2.COLOR_BGR2RGB)) ax2.set_title('second block') ax3.imshow(cv2.cvtColor(third_block_img, cv2.COLOR_BGR2RGB)) ax3.set_title('third block') fig, ax = plt.subplots(1,1,figsize=(4,4)) ax.imshow(cv2.cvtColor(blocks_img, cv2.COLOR_BGR2RGB)) ax.set_title("All Blocks") plt.show() logging.info("In total, there are: " + str(counter) + " blocks possible in the picture") ``` As shown in the previous example, on the image chosen, there are $49$ blocks possible of size $(16 \times 16)$ pixels. ###HOG How-to, Step5: concatenation After the normalization, the only step remaining is the concatenation of the computed vectors into a larger one, that represent the input image. This will be the feature representation of the image, based on the *oriented* gradients in that image. ###What size is this feature representation ? 
- one cell is represented by $9$ numbers (its histogram),
- four histograms are normalized together, leading to a $(36,1)$ vector,
- there are $49$ such vectors representing the image.

If the image has a width of $(w*8)$ pixels and a height of $(h*8)$ pixels, i.e. a dimension of $(8*h \times 8*w)$, then the image contains:
* $h$ cells vertically and $w$ cells horizontally,
* $(h-1)$ blocks vertically and $(w-1)$ blocks horizontally.

The length of the final vector is then $36 \cdot 49$ numbers, or a $(1764,1)$ vector. Of course, this is still large... but much more compact than our initial $(64,64,3)$ array of $12288$ numbers.

```
fd, hog_image = skimage_feature_hog(resized_img, orientations=9, pixels_per_cell=(8,8),
                                    cells_per_block=(2, 2), block_norm = "L2",
                                    visualize=True, transform_sqrt = True, multichannel=True)
'''
Plotting results
'''
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20, 8), sharex=True, sharey=True)

ax1.imshow(cv2.cvtColor(resized_img, cv2.COLOR_BGR2RGB))
ax1.set_title('Input image')

# Rescale histogram for better display
hog_image_rescaled = exposure.rescale_intensity(hog_image, in_range=(0, 10))
ax2.imshow(hog_image_rescaled, cmap=plt.cm.gray)
ax2.set_title('Histogram of Oriented Gradients - rescaled')
logging.debug("HOG Rescaled: " + str(hog_image_rescaled.min()) + " -> " + str(hog_image_rescaled.max()) )
plt.show()
# logging.debug(hog_image_rescaled.shape)
logging.info("Shape of the HOG feature : " + str(fd.shape))
```

<!--histogram construction is based on the gradient computed - both magnitude and orientation (as defined)
- the bin is selected according to the orientation (direction) of the gradient;
- the value that goes in the bin is based on the magnitude
For instance on the toy example: first pixel has:
* mag = 6; orn = 45°. So, the vote of this pixel goes for 75% in the bin of 40°, and 25% in the bin of 60°, as closer to 40°. As a result, we add 4.5 to bin nb 3, and 1.5 to bin nb 4.
-->

###HOG: Exec summary

* one cell (typically 8 x 8 pixels) of an image is represented by a histogram
* the orientation and magnitude of the gradient are computed for each pixel
* the orientation of the gradient selects a bin
* the magnitude gives the amount added to that bin
* the histogram is a vector of size 9 (as there are 9 bins)
* one block (16 x 16 pixels) is represented by the concatenation of the 4 histograms of its cells, hence a (36 x 1) vector
* the final HOG feature vector is the concatenation of all block vectors

For an image of 64 x 64 pixels, it follows:
* 8 cells along the width
* 8 cells along the height
* number of cells = 64
* number of blocks = 7 * 7 = 49

Each block has a representative vector of dimension $(36 \times 1)$, so the resulting vector has dimension $(49*36 \times 1)$, or $(1764,)$. A small sanity check of this arithmetic follows at the end of this section.

This ends the first part on the Histogram of Oriented Gradients, where I showed in detail how to compute such a feature representation and the meaning of the different parameters. As with many other parameters, the "hyper-parameters" of a HOG method should always be assessed for the specific problem at hand.
- So far, the parameters have mainly been kept equal to the values suggested by the literature, in this [HOG paper](http://lear.inrialpes.fr/people/triggs/pubs/Dalal-cvpr05.pdf)
- Later in this tutorial, we will optimize a classifier using a systematic method and a cross-validation set - stay tuned!

Recall that we shall **never** use the test set to fine tune our model parameters.
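As promised above, a short sanity check of this arithmetic. The helper below is a sketch of ours (not a library function) and only relies on the geometry described in this section: square image, $8 \times 8$ cells, $2 \times 2$-cell blocks sliding one cell at a time, 9 bins.

```
def hog_feature_length(img_size=64, cell=8, cells_per_block=2, n_bins=9):
    """Length of the flattened HOG vector for a square image (our own helper)."""
    cells_per_side = img_size // cell                        # 64 // 8 = 8
    blocks_per_side = cells_per_side - cells_per_block + 1   # 8 - 2 + 1 = 7
    numbers_per_block = cells_per_block**2 * n_bins          # 4 * 9 = 36
    return blocks_per_side**2 * numbers_per_block            # 49 * 36 = 1764

print(hog_feature_length())       # 1764, matching fd.shape logged above
print(hog_feature_length(128))    # a hypothetical 128x128 crop would give 8100
```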
<!--Still to do : These [hog] parameters were obtained by experimentation and examining the accuracy of the classifier — you should expect to do this as well whenever you use the HOG descriptor. Running experiments and tuning the HOG parameters based on these parameters is a critical component in obtaining an accurate classifier.--> ### Detecting an object of interest in a new image In this part, the goal is to use the HOG feature descriptor in order to detect if an object is present or not in a new image. > we are not building a classifier or identificator (yet!) The goal is *just* to detect which region of an image have structures that correspond well to the ones of the feature descriptor. To do so, the steps are: 1. Select an object 2. Compute its HOG feature description 3. Select a new image 4. pre-process this image in terms of dimensions 5. compute the image HOG at every location 6. Assess the matching between the descriptor and the image ####Select an object and compute its hog As we have built a training set and a test set, let's just pick randomly one image of the training set. We can do even better and compute the HOG for all images in training and test sets using the parameters already seen. We store the results in the dictionary `hog_training` and `hog_test`. As *usual*, the keys are the persons names. Later, we can select any of those as the `image_of_interest` ``` hog_training = {} hog_test = {} for person in persons: hog_training[person] = [] for src_img in training_set[person]: if not color: src_img = cv2.cvtColor(src_img, cv2.COLOR_BGR2GRAY) resized_img = cv2.resize(src_img, (sq_size,sq_size)) fd, hog_image = skimage_feature_hog(resized_img, orientations=9, pixels_per_cell=(8,8), cells_per_block=(2,2), block_norm = "L2", visualize=True, transform_sqrt = True, multichannel=color) hog_training[person].append((fd, hog_image, resized_img)) logging.debug(pretty_return_dict_size(hog_training)) logging.info("hog_training dictionary contains the HOG descriptors (resized) for all faces of the training set") for person in persons: hog_test[person] = [] for src_img in test_set[person]: if not color: src_img = cv2.cvtColor(src_img, cv2.COLOR_BGR2GRAY) resized_img = cv2.resize(src_img, (sq_size,sq_size)) fd, hog_image = skimage_feature_hog(resized_img, orientations=9, pixels_per_cell=(8,8), cells_per_block=(2,2), block_norm = "L2", visualize=True, transform_sqrt = True, multichannel=color) hog_test[person].append((fd, hog_image, resized_img)) logging.debug(pretty_return_dict_size(hog_test)) logging.info("hog_test dictionary contains the HOG descriptors (resized) for all faces of the test set") ``` ``` ''' Let's pick an image of interest, by its index [0 -> 19] ''' idx_of_interest = 2 image_of_interest = training_set[personA][idx_of_interest] hog_of_interest = hog_training[personA][idx_of_interest] ''' Visualization of the image and its hog selected as image_of_interest ''' fig, (ax0, ax1) = plt.subplots(1,2,figsize = (8,4), sharex=False, sharey=False) ax0.imshow(cv2.cvtColor(image_of_interest, cv2.COLOR_BGR2RGB)) ax0.set_title("Image of interest from training set") ax1.imshow(hog_of_interest[1]) ax1.set_title("Visualization of the HOG of interest") logging.info("Shape of the descriptor : " + str(hog_of_interest[0].shape)) logging.info("Shape of the descriptor (visu): " + str(hog_of_interest[1].shape)) logging.info("Shape of the image of interest: " + str(image_of_interest.shape)) ``` ####Select a new image This is the image we want to apply the descriptor matching. 
Put another way, we want to verify, using a distance metric such as the Euclidean distance, whether the HOG descriptor of our *image_of_interest* is similar to the descriptor of some region in a new image. Let's pick a few images from our **original** downloaded images, the ones that are neither cropped nor resized yet.

```
person_ = personA  # can be {personA, personB, personC, personD}
number_of_candidates = 3
random.seed("7/4/2020")
images_candidates = random.sample(images[person_], number_of_candidates)

fig = plt.figure(figsize=(12,4))
i = 0
for img in images_candidates:
    ax = fig.add_subplot(1, number_of_candidates, i+1)
    ax.imshow( cv2.cvtColor(img, cv2.COLOR_BGR2RGB) )
    i += 1
plt.show()
```

The selected images do not have the same size.

```
logging.info("Shapes of image on which to look for the image_of_interest: ")
for img in images_candidates:
    logging.info("Image shape: " + str(img.shape))
```

####Matching

In this larger step, we want to see if there is a match between the descriptor of interest and an image candidate. Several problems arise:
- the descriptor of interest has a fixed (designed) size of $(1764,1)$,
- the image candidates have different sizes,
- the image candidates contain more information than just a face.

*Why not just crop the face in the image and resize it?* We could of course do that - but that is not really the purpose :-) Rather, we want to assess, at **every location** of the image candidate, whether a pattern matching the descriptor of the image of interest is present.

*Simpler case*

Let's first assume that, if the image candidate contains a face (=object), this face has approximately the same size as the original face of interest. One way to proceed is to slide, across the image candidate, a window of the size of the object to detect:
- at *each* location, crop the part of the image candidate inside the sliding window
> sliding over every single location is cumbersome and time consuming. The parameter `step` (called `win_stride` in the code below) defines the number of pixels skipped in both directions during the sliding; a rough cost estimate is sketched just below.
- compute the HOG of this crop
- compare (using the Euclidean distance, for instance) this descriptor with the descriptor of the object to find
- go to the next location

At the end, the result is a score attributed to every location, indicating the correspondence between the object to detect and that area of the image.
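To make the cost remark above concrete, here is a small back-of-the-envelope helper, a sketch of ours that is not used elsewhere in the notebook: it counts how many window positions are evaluated for a given image, window and stride, knowing that each evaluated position costs one crop, one resize and one HOG computation. The image and window sizes in the usage lines are hypothetical.

```
def count_windows(img_h, img_w, win_h, win_w, stride=(16, 16)):
    """Number of sliding-window positions whose window fits entirely in the image."""
    steps_h = (img_h - win_h) // stride[0] + 1
    steps_w = (img_w - win_w) // stride[1] + 1
    return max(steps_h, 0) * max(steps_w, 0)

# hypothetical 400x600 candidate image, 170x170 face window
print(count_windows(400, 600, 170, 170, stride=(16, 16)))   # 15 * 27 = 405 windows
print(count_windows(400, 600, 170, 170, stride=(4, 4)))     # 58 * 108 = 6264 windows
```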
*Helper functions* - `get_local_crop`: realize the cropping before computing the HOG, ensuring the appropriate size to compute HOG - `match_hog`: homemade matching between an object of interest and an image candidate, implementing this "sliding" accross the image - `fill_matrix_min_neighbours`: compensate the effect of the `step`parameter in the plot ``` def get_local_crop(src_img, center_pixel, crop_shape, show = False): ''' src_img: center_pixel: crop_shape: ''' crop_height = crop_shape[0] crop_width = crop_shape[1] top_= center_pixel[0] - crop_height // 2 bottom_ = center_pixel[0] + crop_height // 2 left_ = center_pixel[1] - crop_width // 2 right_ = center_pixel[1] + crop_width // 2 if len(src_img.shape)> 2: crop = src_img[ top_ : bottom_, left_:right_, :] else: crop = src_img[ top_ : bottom_, left_:right_] if show: cv2_imshow(crop) return crop def match_hog(src_img, hog_desc, original_face_shape, win_stride = (8,8), show = False): ''' src_img : image to analyse hog_desc: (fd, hog_image) of the corresponding faces_cropped original_face_shape: shape of the face used for hog_desc computation ''' height = src_img.shape[0] # height of the image to analyze width = src_img.shape[1] # width of the image to analyze height_face = original_face_shape[0] width_face = original_face_shape[1] res_shape = (src_img.shape[0], src_img.shape[1]) res = np.ones(res_shape)*-1 if show: tmp_image = src_img.copy() cv2.rectangle(tmp_image, (width_face//2, height_face//2), (width - width_face//2, height - height_face//2), (0, 255, 0)) cv2_imshow(tmp_image) running_h_idx = range(height_face //2, height - height_face//2+1, win_stride[0]) running_w_idx = range(width_face //2, width - width_face//2+1, win_stride[1]) for h_idx in running_h_idx: for w_idx in running_w_idx: local_crop = get_local_crop(src_img, (h_idx, w_idx), original_face_shape, False) local_resized_img = cv2.resize(local_crop, (64,64)) local_fd = skimage_feature_hog(local_resized_img, orientations=9, pixels_per_cell=(8,8), cells_per_block=(2, 2), block_norm = "L2", visualize=False, transform_sqrt = True, multichannel=True) # computing euclidean distance here !! res[h_idx,w_idx]= np.linalg.norm(local_fd-hog_desc[0]) if show: cv2_imshow(local_resized_img) return res def fill_matrix_min_neighbours(matrx, win_stride = (16,16), margin = 0): ''' fill the gaps in the computation by taking the min values from closest neighbours that were computed. 
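    (added note) Positions skipped because of win_stride keep the sentinel value -1 in
    matrx; each of them is replaced by the smallest computed distance found in a
    win_stride-sized neighbourhood, so that the match map can be displayed densely.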
''' l = np.argwhere(matrx != -1) res = matrx.copy() if len(l)<2: idx_to_change = res == -1 logging.warning("len < 2 --> len(idx_to_change) = " + str(len(idx_to_change))) res[idx_to_change] = res.max() + margin return res top_left_corner = l[0] bottom_right_corner = l[-1] for i in range(top_left_corner[0], bottom_right_corner[0]+win_stride[0]//2): for j in range(top_left_corner[1],bottom_right_corner[1]+win_stride[1]//2): if matrx[i,j] == -1: local_roi = get_local_crop(matrx, (i,j),win_stride) cand = local_roi[ local_roi != -1 ] res[i,j] = cand.min() idx_to_change = res == -1 res[idx_to_change] = res.max() + margin return res ``` *Details* For all the image in images_candidates list, - compute the matching on every required location (point spaced by win_stride) - fill the non computed point with min neighbour - normalized (L2) to get values between 0 -> 1 - show the results ``` for img in images_candidates: win_stride=(16,16) res = match_hog(img, hog_of_interest, image_of_interest.shape, win_stride) new_res = fill_matrix_min_neighbours(res, win_stride) logging.debug("worst match hog results: " + str(new_res.max())) logging.debug("best match hog results: " + str(new_res.min())) b = new_res.copy() bmax, bmin = b.max(), b.min() if bmax == bmin and bmax == 0: logging.info("Perfect match!") b = (b - bmin)/(bmax - bmin) # b = (b - bmin)/(bmax) logging.debug("worst match hog results normalized [expexted 1]: " + str(b.max())) logging.debug("best match hog results [expected 0]: " + str(b.min())) fig, (ax1, ax2) = plt.subplots(1,2 , figsize=(16, 8), sharex=True, sharey=True) ax1.imshow(cv2.cvtColor(img,cv2.COLOR_BGR2RGB)) ax1.set_title("Image where to find base face") # ax2.imshow(new_res, cmap="jet") # ax2.set_title("Results gaps filled with min neighbour") ax2.imshow(b, cmap="jet") ax2.set_title("Normalized results of Matching") plt.show() logging.debug("=====================================================") ``` ####Matching - part2 As showed in the results above, it works ! Several observations nonetheless: 1. the borders are red (indicating large distance): this is because of the sliding that considers the full object of interest is required in the image 2. It works even if not the exact same shape! * provided that a threshold is chosen, one could use this to recognize an object * only *LOCAL* description is given: only local object shape and appearances (to some extend) can be represented. * Because of locality and limit of expressiveness, the face of Bradley Cooper and Emma Stone may look alike, in terms of HOG descriptors. 3. Only *ONE* size of the object is verified. If the object to find is of the same size as the object in the image, the descriptors will be very similar, and the distance small. However, if the object to find is smaller or larger in the image candidate, the HOG descriptor won't match at all. To solve this problem, one can perform multiscale detection, where the object is scaled several times to ensure to really detect the object, if it is present. A nice overview of this multiscaling can be found in [pyimagesearch](https://www.pyimagesearch.com/2015/11/16/hog-detectmultiscale-parameters-explained/) 4. The detection of object with this sliding window takes time, and is even more time / resource consuming if used with multiscale analysis. 
Several techniques should be set in place (also discussed in [pyimagesearch](https://www.pyimagesearch.com/2015/11/16/hog-detectmultiscale-parameters-explained/)) such as: - reducing the size of the image candidate, without losing too much information - adjusting the HOG parameters so that the computation time is optimized for a **specific application** ###Wrap up the feature representation We can now wrap up the results in a matrix of dimension $(n \times p)$, where $n$ is the number of training images, and $p$ is the number of features (the length of the feature representation) We call this matrix `X_HOG_train` ``` ''' get dimensions from previously built data structure ''' local_n = sum([len(hog_training[person]) for person in persons]) local_p = len(hog_training[personA][0][0]) ''' create data structure X_HOG_train ''' X_HOG_train = np.empty((local_n, local_p)) ''' Fill data structure with feature representation ''' i=0 for person in hog_training.keys(): for item in hog_training[person]: X_HOG_train[i,:] = item[0] i+=1 ''' log the shape of the data structure ''' logging.info("X_HOG_train shape: " + str(X_HOG_train.shape)) ``` and let's do the same with the test set... ``` ''' get dimensions from previously built data structure ''' local_t_n = sum([len(hog_test[person]) for person in persons]) local_t_p = len(hog_test[personA][0][0]) ''' create data structure X_HOG_train ''' X_HOG_test = np.empty((local_t_n, local_t_p)) ''' Fill data structure with feature representation ''' i=0 for person in hog_test.keys(): for item in hog_test[person]: X_HOG_test[i,:] = item[0] i+=1 ''' log the shape of the data structure ''' logging.info("X_HOG_test shape: " + str(X_HOG_test.shape)) ``` ###HOG Conclusion In this first feature representation building, we studied in details how the descriptor is built and computed, and the different parameters that comes in play, and in particular - the cell size - the block size - the normalization method We have then used the HOG descriptor to try and detect if an object is in an image, by computing on (almost) every position of the image the descriptor and assessing the distance (as similarity metric) with the face of interest descriptor. Doing so, we have then seen that the techniques works well to find region of a similar shape in the image, hence the locality of the descriptor. We also covered some of the challenges related to the images'size, locality of the descriptors, and complexity of the computations. ## Principal Component Analysis - **PCA** We have extensively covered HOG. Similarly, we will go through the PCA technique. First, we will give some intuition; then we will go through the maths, as this technique rely heavily on the computing, then we will apply PCA on the training set and try and observe some results. When discussing PCA, we will mainly focus on the technique applied on images. There are plenty of blogs out there precisely defining and tailoring the techniques for all kinds of application. This tutorial is just an example of PCA applied to face images. --- As for the previous HOG technique, homemade code is fully provided. This gives the details on the implementation and insight about how things are calculated. For this part of the tutorial, most of the examples are done with this homemade code. A section is dedicated to a demo of a library tool as well. For the classification and identification parts, however, library optimized code will be used as it's the purpose of those libraries. 
Don't hesitate to try out different parameters in the provided function, and please report any bug to geoffroy.herbin@student.kuleuven.be --- ``` class MyPCA: ''' homemade class to perform PCA using several methods and compare Three methods are implemented - method = "svd": singular value decomposition technique - method = "eigen": nominal eigenvalue decomposition technique is implemented - method = "eigen_fast": eigenvalue decomposition using dimensionality reduction is implemented ''' def __init__(self, method = "svd"): self.method = method self.eigenvalues = None self.eigenvectors = None self.X_mean = None def fit(self, data): X = data.copy() n, m = X.shape assert n < m, "n is not smaller than m -> you most likely need " + \ "to transpose your input data" self.X_mean = np.mean(X, axis = 0) X -= self.X_mean self.eigenvalues = None self.eigenvectors = None if self.method == "svd": # singular value decomposition U, S, Vt = np.linalg.svd(X) self.eigenvalues = S**2 / (n - 1) self.eigenvectors = Vt.T[:,0:n] elif self.method == "eigen_fast": # compute small covariance matrix D = np.dot(X, X.T) / (n - 1) # eigen decomposition LD, W = np.linalg.eig(D) order_D = np.argsort(LD)[::-1] LD_sorted = LD[order_D] W_sorted = W[:,order_D] eigenVectors_sorted_tmp = np.dot(X.T, W_sorted) eigenVectors_sorted = np.empty(eigenVectors_sorted_tmp.shape) for i in range(n): v = eigenVectors_sorted_tmp[:,i] eigenVectors_sorted[:,i] = v / np.linalg.norm(v) self.eigenvalues = LD_sorted self.eigenvectors = eigenVectors_sorted elif self.method =="eigen": # compute covariance matrix Cov = np.dot(X.T, X) / (n - 1) # eigen decomposition LC, V =np.linalg.eig(Cov) # sort in appropriate order and keep only relevant component order = np.argsort(LC)[::-1] LC_sorted = LC[order][0:n] V_sorted = V[:,order][:,0:n] self.eigenvalues = LC_sorted self.eigenvectors = V_sorted.real else: raise RuntimeError("method value unknown") def projectPC(self, X, k): Vk = self.eigenvectors[:, 0:k] # logging.debug("Reduce X " + str(X.shape) + "using k = " + str(k) + " components") # logging.debug("Vk (4096 x k)= " + str(Vk.shape)) X_reduced = np.dot(X, Vk) # logging.debug("X_reduced (n x k) = " + str(X_reduced.shape)) return X_reduced def reconstruct(self, X_reduced, k, show = False): if len(X_reduced.shape) > 1: X_reduced = X_reduced[:,0:k] else: X_reduced = X_reduced[0:k] Vkt = self.eigenvectors[:, 0:k].T # logging.debug("Reduce using X_reduced = " + str(X_reduced.shape) ) # logging.debug("Reconstruct using k = " + str(k) + " components") # logging.debug("Vkt (k x 4096)= " + str(Vkt.shape)) X_hat_centered = np.dot(X_reduced, Vkt) # logging.debug("X_hat_centered.shape (20,4096): " + str(X_hat_centered.shape)) if show: self.show_data(X_hat_centered, add_mean = True) return X_hat_centered def compute_error(self, X, X_hat): return sqrt(mean_squared_error(X, X_hat)) def show_principal_components(self,k, figsize=(10,4)): w = figsize[0] h = figsize[1] fig = plt.figure(figsize=(w,h)) fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05) logging.debug("self.eigenvectors.shape = " + str(self.eigenvectors.shape) ) for i in range(k): pc = self.eigenvectors[:,i] assert pc.shape[0]==(sq_size**2)*1 or pc.shape[0]==(sq_size**2)*3, "Not proper shape (expected (sq_size**2) (*3)) " + str(pc.shape) ax = fig.add_subplot(h, w, i+1, xticks=[], yticks=[]) if color: pc_img = (np.reshape(pc.real, (sq_size, sq_size, 3))*255).astype("uint8") ax.imshow(pc_img, interpolation='nearest') else: pc_img = np.reshape(pc.real, (sq_size, sq_size)) 
ax.imshow(pc_img , cmap=plt.cm.gray, interpolation='nearest') plt.show() def compute_explained_variance(self, show=True): sum_all_eigenValues = sum(self.eigenvalues) logging.debug("\nSum of all eigenValues: " + str(sum_all_eigenValues)) explained_variance = [(value / sum_all_eigenValues)*100 for value in self.eigenvalues] cum_explained_variance = np.cumsum(explained_variance) logging.debug("Cum explained variance : \n" + str(cum_explained_variance)) if show: fig = plt.figure(figsize=(12, 6)) ax1 = fig.add_subplot(121) ax1.bar(range(len(self.eigenvalues)), self.eigenvalues) ax1.set_xlabel('eigenvalues') ax1.set_ylabel('values') ax2 = fig.add_subplot(122) ax2.bar(range(len(explained_variance)), explained_variance) ax2.plot(range(len(cum_explained_variance)), cum_explained_variance, color='green', linestyle='dashed', marker='o', markersize=5) ax2.set_xlabel('eigenvalues') ax2.set_ylabel('% information ') ax2.legend( labels = ["Cumulative Expl. Var.", "Explained Variance"]) ax2.grid() plt.show() return explained_variance, cum_explained_variance def show_data(self, X, add_mean = False): # copy so that adding the mean does not modify the original centered # data X X_ = X.copy() if len(X.shape) > 1: self._show_data(X_, add_mean) else: fig = plt.figure(figsize=(3,3)) if add_mean: X_ += self.X_mean # img = np.reshape(X_, (sq_size, sq_size)) img = my_reshape(X_, sq_size, color) ax = fig.add_subplot(1, 1, 1, xticks=[], yticks=[]) ax.imshow(img, cmap = my_color_map, interpolation='nearest') plt.show() def _show_data(self, X, add_mean = False): fig = plt.figure(figsize=(8,8)) fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05) i=0 for x in X: if add_mean: x += self.X_mean # assert x.shape[0]==(sq_size**2)*3, "Not proper shape (expected (sq_size**2)*3) " + str(x.shape) ax = fig.add_subplot(8, 5, i+1, xticks=[], yticks=[]) img = my_reshape(x, sq_size, color) ax.imshow(img, cmap = my_color_map, interpolation='nearest') i+=1 plt.show() ``` ### Basic idea If you are completely unfamiliar with the principal components analysis, the thread in [PCA intuition](https://stats.stackexchange.com/questions/2691/making-sense-of-principal-component-analysis-eigenvectors-eigenvalues) contains a wonderful layered explanation of what the PCA is, and what can its use be. Another very nice introduction is given in [medium](https://medium.com/@aptrishu/understanding-principle-component-analysis-e32be0253ef0) Essentially, the PCA technique extracts the information from the data to get the main directions intrinsically present in the data. To realize that, PCA uses the covariance, which is a *measure of the extent to which corresponding elements from two sets of ordered data move in the same direction* (definition extracted from [medium website](https://medium.com/@aptrishu/understanding-principle-component-analysis-e32be0253ef0)) Based on this covariance information, the PCA *\"finds a new set of dimensions (or a set of basis of views) such that all the dimensions are orthogonal (and hence linearly independent) and ranked according to the variance of data along them. It means more important principle axis occurs first\"* (source: [medium website](https://medium.com/@aptrishu/understanding-principle-component-analysis-e32be0253ef0)) If you're more of a visual person, the plot [here](https://i.stack.imgur.com/lNHqt.gif) shows the first principal component of a cloud of points, with an animation that shows how the variance is minimized. 
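To make this intuition a bit more concrete, here is a minimal standalone sketch (synthetic 2D points, not the tutorial's face data) that computes the covariance matrix of a small point cloud and extracts its principal directions; `numpy` is assumed to be available as in the rest of the notebook.

```
import numpy as np

# Minimal, standalone 2D illustration with synthetic points:
# most of the variance of this cloud lies roughly along the line y = 0.5 * x
rng = np.random.RandomState(0)
x = rng.randn(200) * 5.0
y = 0.5 * x + rng.randn(200)
points = np.column_stack([x, y])

# Center the cloud, then compute its (2 x 2) covariance matrix
points_centered = points - points.mean(axis=0)
cov = points_centered.T @ points_centered / (len(points) - 1)

# Eigenvectors of the covariance = principal directions; eigenvalues = variance along them
eigenvalues, eigenvectors = np.linalg.eigh(cov)
order = np.argsort(eigenvalues)[::-1]
print("variance along the principal directions:", eigenvalues[order])
print("first principal direction (unit vector):", eigenvectors[:, order[0]])
```

The first printed direction is the axis along which the cloud is most spread out; this is exactly the kind of direction PCA will extract from the face images, only in a much higher-dimensional space.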
It is really an extraction of information from the data, hence an unsupervised technique, that is used to reduce the dimensionality of the problem. Applying PCA on images can therefore be used as a feature representation!

### PCA How-to, Step1: Pre-processing

The PCA is applied on *hyper-points*: points in our high-dimensional space. We represent each of those points by a vector. Applied to images, we therefore need to convert our sets of images into a usable format.

The representation chosen is a $(n \times p)$ matrix where
- n is the number of images
- p is the dimension of the vector

The matrix containing the training data is then a $(40 \times p)$ matrix, and -- in our specific problem -- the matrix containing the test data is also a $(40 \times p)$ matrix.

#### Resizing

All the images (*hyper-points*) must have the same dimensions to start with. There is therefore a first step of resizing all the images to a common, appropriate size, keeping in mind that the faces are square images. The parameter indicating the length of this square side is defined as `sq_size` in this notebook. `sq_size = 64` indicates that the images are resized to a $(64 \times 64)$ matrix, containing 1 or 3 channels depending on their colorscale.

#### Color or Grayscale ?

In the literature, we find both implementations, and I decide not to choose at this point. A grayscale image contains only one channel, a colored image contains three.

> to change from grayscale to color, just change the boolean parameter `color` at the beginning of this notebook.

The grayscale image is then simply converted to a vector by flattening the matrix, concatenating the rows one after the other. The colored image is converted by applying the same technique to the three channels, then concatenating the three resulting vectors into one, longer, vector.

####Resulting Dimensionality

From a training set colored image of dimensions $( h, w )$
- resize to $(64,64)$
- convert to vector:
  * if grayscale, convert to $(1, 4096)$
  * if color, convert to $(1, 12288)$

The resulting matrix representing the training set has a shape $(40,4096)$ or $(40,12288)$, depending on whether grayscale or color is used.

####Process the training set as a data matrix

```
'''
Build usable training set from (hyper-)parameters
'''
# row 0 = first image
# row 1 = second image
# ...
m_src = get_matrix_from_set(training_set, color, sq_size = sq_size, flatten = True)

logging.debug(" m_src: original matrix")
logging.debug(" m_src: shape = " + str(m_src.shape))

plot_matrix(m_src, color, my_color_map, h=4, w=10, transpose = False)
```

#### Process the test set as a data matrix

```
'''
Build usable test set from (hyper-)parameters
'''
m_test_src = get_matrix_from_set(test_set, color, sq_size = sq_size, flatten = True)

logging.debug(" m_test_src: original matrix")
logging.debug(" m_test_src: shape = " + str(m_test_src.shape))

plot_matrix(m_test_src, color, my_color_map, h=4, w=10, transpose = False)
```

###PCA How-to, Step2: Centering the Data

Centering the data is essential: only then do the eigenvectors, sorted according to their eigenvalues, actually mean what we want them to, i.e. the directions of maximum variance in the data.

The following code shows:
1. (left) a plot of the data (training) matrix
2. (middle) the mean image
3. 
(right) a plot of the data where the mean image is substracted: the data are now centered ``` X = m_src.copy() X_mean = np.mean(X, axis = 0) Xc = X - X_mean fig = plt.figure(figsize=(18, 5)) Xs = np.arange(0, X.shape[0]) Ys = np.arange(0, X.shape[1]) Xs, Ys = np.meshgrid(Xs, Ys) ax1 = fig.add_subplot(131,projection='3d') surf1 = ax1.plot_surface(Xs, Ys, X.T, cmap=plt.cm.jet, antialiased=True) ax1.set_xlabel('image index') ax1.set_ylabel('vector index') ax1.set_zlabel('value') ax1.set_title('Original data') ax2 = fig.add_subplot(132) ax2.imshow(my_reshape(X_mean, sq_size, color), cmap = my_color_map, interpolation='nearest') ax2.set_title("Mean image") ax3 = fig.add_subplot(133,projection='3d') surf3 = ax3.plot_surface(Xs, Ys, Xc.T, cmap=plt.cm.jet, antialiased=True) ax3.set_xlabel('image index') ax3.set_ylabel('vector index') ax3.set_zlabel('value') ax3.set_title("Centered data") plt.show() logging.info("Visualization of the images - centered:") plot_matrix(Xc, color, my_color_map, h=4, w=10,transpose = False) plt.show() ``` ###PCA How-to, Step3: Decomposition ####*Canonical - EigenDecomposition* ####0. <u>Data </u> Let's take our initial data matrix $X$, a $(n \times p)$ matrix of data where n is the number of images, and p is the number of variables. In our case, the number of variables is the number of pixels of one image (or three times this number, if color image). First, we have to center the data, hence substracting the mean image. This is done in the previous step. In the following text, $X$ is assumed centered. ####1. <u>Covariance Matrix </u> We compute the covariance matrix, that indicates how a variable (= pixel intensity) varies with respect to other pixels. The Covariance matrix indicates how the variables evolve with respect to each others. $C = \frac{X^T \cdot X}{n-1}$ and has dimension $(p \times p)$. This is a pretty large matrix already. ####2. <u>Eigenvalues and EigenVectors </u> Having computed the covariance matrix C, we compute its eigenvectors and eigenvalues, indicating the main directions and their strength of how the data evolve. The eigendecomposition is expressed as: $C = V L V^T$ where - $L$ is a diagonal matrix of eigenvalues - $V$ is the $(p \times p)$ matrix of eigenvectors <u>*Mathematical Trick: Exploiting the dimensionality of the matrix*</u> Computing the eigenvalues and eigenvectors of $C$ is pretty cumbersome, as $C$ is a large matrix $(p \times p)$. Recalling our algebra skills, given the dimension of $X$, we know that only a limited amount of eigenvalues are non zero: there are only $(n-1)$ non zero eigenvalues. There is no need to compute the $p$ eigenvalues and related $(p \times p)$ eigenvectors matrix as (all) the information is contained in only $(n-1)$ eigenvalues. As a result, to speed up the computation and take advantage of this property, instead of computing the eigenvalues and eigenvectors of $C = \frac{X^T \cdot X}{n-1}$ of size $(p \times p)$, let's rather compute the $n$ eigenvalues and corresponding eigenvectors of the matrix $D = \frac{X \cdot X^T}{n-1}$ of size $(n \times n)$, such that $D = W L W^T$ - the eigenvalues computed are the same as $C$'s - the corresponding eigenvectors of C, in matrix $V$, are related such that $V = X^T \cdot W$ This way, it takes advantage of the dimensions of the problem. ####3. <u>Principal Components</u> The principal components are defined as the eigenvectors $V$. The eigenvectors can be sorted according to the value of their associated eigenvalue, in decreasing order. 
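As a sanity check of the dimensionality trick above, here is a small standalone numpy sketch (synthetic sizes n = 5, p = 50, not the tutorial's data) verifying that the eigenpairs of the small matrix $D$ indeed give back the non-zero eigenpairs of the large covariance matrix $C$:

```
import numpy as np

# Tiny synthetic, centered data matrix: n = 5 "images", p = 50 "pixels" (hypothetical sizes)
rng = np.random.RandomState(1)
X = rng.randn(5, 50)
X -= X.mean(axis=0)
n = X.shape[0]

C = X.T @ X / (n - 1)   # large covariance matrix, (p x p)
D = X @ X.T / (n - 1)   # small matrix, (n x n)

# Eigendecomposition of the small matrix, keeping the (n - 1) non-zero eigenpairs
LD, W = np.linalg.eigh(D)
order = np.argsort(LD)[::-1][: n - 1]
LD_top, W_top = LD[order], W[:, order]

# Eigenvectors of C recovered as X^T W, then normalized to unit length
V = X.T @ W_top
V /= np.linalg.norm(V, axis=0)

print(np.allclose(C @ V, V * LD_top))                                      # C v = lambda v
print(np.allclose(np.sort(np.linalg.eigvalsh(C))[::-1][: n - 1], LD_top))  # same non-zero eigenvalues
```

Both checks should print `True`.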
The eigenvector that has the largest eigenvalue associated is called "first principal component", the second largest is called "second principal component", and so on. In the context of faces analysis, the eigenvectors are often called **eigenfaces**, as they can be reshaped as an image, and displayed (provided the appropriate number conversion so that it's in a range visible) By projecting the original data $X$ on the new directions, the eigenfaces, we get *new coordinates* that yet *fully describe the original data*. Furthermore, the number of eigenvectors on which we project the data is reduced with respect to original problem dimensionality. ####*Singular Value Decomposition* The idea behind the SVD is essentially mathematical, and help in computing the eigenvectors and eigenvalues in a different way. From a matrix X, centered, (as in previous section), one can compute its decomposition $X = U \cdot S \cdot V^T$ where $U$ is a unitary matrix, $S$ is diagonal, containing what's called the singular values $s_i$. One can see that $V$, right singular vectors, are related to eigenvectors of the covariance matrix from previous section. Indeed, computing this covariance matrix gives: \begin{align} Cov &= \frac{X^T \cdot X}{n-1} \\ &= \frac{V \cdot S \cdot U^T \cdot U \cdot S \cdot V^T}{n-1} \\ &= V \cdot \frac{S^2}{n-1} V^T \end{align} There is therefore a link between the singular values ($s_i$) and the eigenvalues ($\lambda_i$): $$\lambda_i = \frac{s_i^2}{n-1}$$ and the right singular vectors are the eigenvectors $V$. Note that in practice, most of the implementation of the PCA algorithm uses singular value decomposition, and starts by centering the data - such as the library function `sklearn.decomposition.PCA` that we will extensively use later on. ####Decomposition and Eigenfaces The following lines of codes create an object of type `MyPCA`, and compute the decompositions following two methods: "svd" and "eigen_fast". ``` nb_training_faces = sum(training_sets_size.values()) ''' Ensuring reset of object if cell is rerun ''' my_pca = None my_pca2 = None ''' Getting the source matrix ''' X = m_src.copy() ''' Creating the MyPCA objects -> solving according to SVD -> solving according to EigenDecomposition (with Math Trick) ''' my_pca = MyPCA("svd") my_pca.fit(X) print("All principal components, using SVD") my_pca.show_principal_components(k=nb_training_faces) my_pca2 = MyPCA("eigen_fast") my_pca2.fit(X) print("All principal components, using eigendecomposition") my_pca2.show_principal_components(k=nb_training_faces) # my_pca3 = MyPCA("eigen") # my_pca3.fit(X) # print("All principal components, using nominal eigendecomposition") # my_pca3.show_principal_components(k=nb_training_faces) ``` The two plots above show the same information: the eigenfaces, but computed in two different ways: using SVD and using eigendecomposition. Several things are important to be noticed: - the eigenfaces are very much alike. Actually, they are exactly the same (despite the last one, see next point) considering the well-known sign ambiguity related to decomposition, see [Standord course, sect. 5.3, Properties of eigenvectors](https://graphics.stanford.edu/courses/cs205a-13-fall/assets/notes/chapter5.pdf). Long story short, white and black may be reversed without any issue in the eigenfaces, on each image independantly. * In the commonly used library implementation, there is often an `sign_flip`function implemented to ensure repeatability of the results of the decomposition. 
See [documentation](https://kite.com/python/docs/sklearn.utils.extmath.svd_flip) - the last eigenface seems to differ... Definitely, this isn't an issue. * recall that there are only (n - 1) non-zero eigenvalue. The eigenface associated to this eigenvalue is therefore multiplied by 0, and doesn't play a role. If interested, you may uncomment the last lines in order to create a PCA with parameter `method = "eigen"`, and see the result of the true eigendecomposition, without the mathematical trick. Or you can trust me that the result is the same - with the actual noisy 40th component (time to run ~50 seconds) ####Reconstructing on k components ``` ''' Selecting X as the input image we want to reconstruct ''' X = m_src.copy()[9,:] logging.debug("Shape of input X = " + str((X.shape))) ''' Centering the data ''' X_mean = np.mean(m_src.copy(), axis=0) Xc = X - X_mean ''' Computing X_reduced, projection of Xc on the principal components space ''' X_reduced = my_pca.projectPC(Xc, k=nb_training_faces) logging.debug("Shape of X_reduced = " + str(X_reduced.shape)) ''' Reconstructing progressively Based on k first components only ''' k=0 fig = plt.figure(figsize=(3,3)) img = my_reshape(X_mean, sq_size, color) ax = fig.add_subplot(1, 1, 1, xticks=[], yticks=[]) ax.imshow(img, cmap = my_color_map, interpolation='nearest') plt.show() logging.info("Principal components used = " + str(k) + ";\nReconstruction error = " + str(np.round(my_pca.compute_error(Xc+X_mean, X_mean),2))) for k in [1,2,3,5,8,10,12,15,20,25,30,40]: X_hat_centered = my_pca.reconstruct(X_reduced, k, show=True) logging.info("\nAbove: \nPrincipal components used = " + str(k) + ";\nReconstruction error = " + str(np.round(my_pca.compute_error(Xc, X_hat_centered),2)) + "\n"+"___"*30) ``` ``` expl_var, cum_expl_var = my_pca.compute_explained_variance(show = True) ``` *Discussions* On the above results and images, several things can be observed: 1. First the different reconstructions of one selected face. As expected, the reconstruction error decreases as the number of components used (k) increases. This also matches the intuition behind the explained variance. 2. Second, the ultimate error remaining is 0, indicating no information was lost when considering the 40 components. That confirms the mathematical theory. 3. The last two graphs show on the left, the eigenvalues, and on the right, the cumulative explained variance. * the eigenvalues are sorted from the most important to the least important, confirming the right curve of the cumulative expl. var. However, the descent of the values is not fast (not exponential). Furthermore, there is no big drop off in the values. * this indicates that the choice of an **optimal number p** of components such that the dimensionality of the feature space is reduced but still informative is not obvious. ####Choice of optimal $p$ $p$ is defined as the optimal number of components used in the reduced space so that: 1. the dimension is reduced (lower than n) 2. the reconstructed information is *close* to input data. It means the reconstruction is still informative. Selecting only the first $p$ components has the effect of removing small variances. This can be important for some application, if there are little variance between different classes. As said above: - there is no clear drop-off in the eigenvalues, - there is no exponential decrease if the eigenvalues. It makes the choice of an optimal $p$ complicated. To choose, we will do: 1. 
compute the reconstruction loss (error): * for all training examples * for all possible choice of p 2. plot this RMSE, in absolute value, 3. plot, in percentage, the ratio between the reconstruction loss using p-components and the RMSE between mean image and input image. This gives the notion of percentage of reconstruction error -- that can actually be related to the cumulative explained variance ! 2. define a threshold of 95% of the information kept (5% of error tolerated) That will lead to a sensible choice of $p$. This is however purely arbitrary. ``` X_train = m_src.copy() X_train_mean = np.mean(X_train, axis = 0) X_train_centered = X_train - X_train_mean my_pca = None my_pca = MyPCA("svd") my_pca.fit(X_train) n = X_train.shape[0] ''' np array containing all the rmse computed if n = 40, max number of components, then rmse has a size 40x40 (0 -> 39) > a row matches the rmse of one image wrt the dimension reconstructed. Last column should be 0 (or close, ~e-14) ''' rmse = np.empty((n,n+1)) rmse_pc = np.empty((n,n+1)) # rmse in percentage rmse_base = np.empty((n,)) for i in range(n): rmse_base[i]=my_pca.compute_error(X_train[i], X_train_mean) index_image = 0 for img_center_vector in X_train_centered: # logging.debug("projecting on " + str(n) + " principal components") X_centered_reduced = my_pca.projectPC(img_center_vector, n) for k in range(n+1): # from 1 to n, included if k == 0: rmse[index_image, k] = rmse_base[index_image] rmse_pc[index_image, k] = 100 * rmse[index_image,k] / rmse_base[index_image] else: # logging.debug("reconstructing using " + str(p) + " principal components") X_hat_centered = my_pca.reconstruct(X_centered_reduced, k) rmse[index_image,k] = my_pca.compute_error(img_center_vector, X_hat_centered) rmse_pc[index_image, k] = 100 * rmse[index_image,k] / rmse_base[index_image] index_image += 1 fig = plt.figure(figsize = (16,16)) ax1 = fig.add_subplot(2,2,1) for idx in range(n): rmse_ = rmse[idx,:] ax1.plot([i for i in range(0,n+1)],rmse_, '-') ax1.set_title("reconstruction errors (RMSE) for all images") ax1.set_xlabel("reconstruction dimension(s) \'p\' ") ax1.set_ylabel("RMSE") rmse_mean = np.mean(rmse, axis = 0) ax2 = fig.add_subplot(2,2,2) ax2.plot([i for i in range(0, n+1)], rmse_mean, "ro-") ax2.set_title("Mean of RMSE for all images") ax2.set_xlabel("reconstruction dimension(s) \'p\' ") ax2.set_ylabel("mean of RMSEs") ax1.set_ylim((0,80)) ax2.set_ylim((0,80)) ax1.grid() ax2.grid() ax3 = fig.add_subplot(2,2,3) for idx in range(n): rmse_ = rmse_pc[idx,:] ax3.plot([i for i in range(0,n+1)],rmse_, '-') ax3.set_title("reconstruction errors (RMSE) for all images, in %") ax3.set_xlabel("reconstruction dimension(s) \'p\' ") ax3.set_ylabel("RMSE") rmse_pc_mean = np.mean(rmse_pc, axis = 0) ax4 = fig.add_subplot(2, 2,4) ax4.plot([i for i in range(0, n+1)], rmse_pc_mean, "ro-") ax4.set_title("Mean of RMSE for all images, in %") ax4.set_xlabel("reconstruction dimension(s) \'p\' ") ax4.set_ylabel("mean of RMSEs") ax3.set_ylim((0,110)) ax4.set_ylim((0,110)) ax3.grid() ax4.grid() # print(rmse_pc_mean) ``` Following the explained process of defining $p$, the threshold is set at $p = 35$. This is definitely not an extraordinary result, but yet constitutes somehow a reduction, and based on a reasonable choice. We can now visualize the training image reconstructed based on the first 35 components. 
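(Side note on the relative error curves above: pooling all centered training images, the squared reconstruction residual with $p$ components sums to $(n-1)\sum_{j>p}\lambda_j$, while the squared distances to the mean image sum to $(n-1)\sum_{j}\lambda_j$. In aggregate, the relative RMSE therefore satisfies

$$\frac{\text{RMSE}_{reconstruction}}{\text{RMSE}_{base}} \approx \sqrt{1 - \frac{\sum_{j \le p}\lambda_j}{\sum_{j}\lambda_j}},$$

i.e. the square root of one minus the cumulative explained variance fraction; the per-image curves plotted above scatter around this aggregate value.)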
```
'''
Define optimal p
'''
p = 35

'''
Selecting X as the input images we want to reconstruct (the whole training matrix)
'''
X = m_src.copy()
logging.debug("Shape of input X = " + str((X.shape)))

'''
Centering the data
'''
X_mean = np.mean(m_src.copy(), axis=0)
Xc = X - X_mean

'''
Creating data structure for outputs
'''
X_hat=np.empty(X.shape)

'''
Computing X_reduced, projection of Xc on the principal components space
'''
X_reduced = my_pca.projectPC(Xc, k=p)
logging.debug("Shape of X_reduced = " + str(X_reduced.shape))

'''
Reconstructing based on p first components only
'''
X_hat_centered = my_pca.reconstruct(X_reduced, p, show=False)
X_hat = X_hat_centered + X_mean
logging.info("\nReconstructed images:")
plot_matrix(X_hat, color, my_color_map, h=4, w=10, transpose=False)

'''
Visualize original input, for comparison purpose
'''
logging.info("\nOriginal images:")
plot_matrix(X, color, my_color_map, h=4, w=10, transpose=False)

# for axis in ['top','bottom','left','right']:
#   fig2.axes[1].spines[axis].set_linewidth(2)
#   fig2.axes[1].spines[axis].set_color('white')
```

In the two series of images above, the top one shows the reconstructions using 35 components, and the bottom one plots the original data.
- overall, the quality is good enough to recognize all the faces, confirming that most of the variance is kept and that the remaining reconstruction loss is small.
- several images, however, clearly show this loss (analysis in grayscale):
  * image[0,6] visibly contains some extra ripple-like artifacts on the bottom left of the face
  * image[2,1] appears deformed
  * image[3,6] shows a reminiscence of hair on Bradley Cooper's forehead.
  * ...

Reassuringly, the reconstruction does still show some loss, which is consistent with the results established previously. Nonetheless, the quality is considered good enough to go on with the selected $p=35$.

####Plot on first two Principal Components

PCA is often used as a dimensionality reduction technique to represent high-dimensional data. Let's show the reconstructed faces in the basis of the first two principal components.

*We do that first with the training images, reconstructed using p=35, for information. 
Later, we will apply the same thing on some test images* ``` my_pca = None X = m_src.copy() my_pca = MyPCA("svd") my_pca.fit(X) data = m_src.copy() data_mean = np.mean(m_src.copy(), axis = 0) data_centered = data - data_mean # my_pca_svd.show_data(X) # reduce the image to the principal components (k=2) data_projected = my_pca.projectPC(data_centered, k=2) logging.debug("shape of data_projected = " + str(data_projected.shape)) plt.figure() fig, ax = plt.subplots(1, 1, figsize=(14, 14), sharex=True, sharey=True) eig1 = data_projected[:,0] eig2 = data_projected[:,1] ax.plot(eig1, eig2, 'bo') ax.set_xlabel("First Component") ax.set_ylabel("Second Component") for (x_, y_), img_vector_ in zip(data_projected, X_hat): img = my_reshape(img_vector_, sq_size, color) ab = AnnotationBbox(OffsetImage(img, cmap = my_color_map), (x_, y_), frameon=False) ax.add_artist(ab) ax.grid() plt.show() logging.info("\nIn order to better visualize the plot on top, here is the same view\nwere personA is in red, and personB is in green") plt.figure() fig, ax = plt.subplots(1, 1, figsize=(8, 8), sharex=True, sharey=True) ax.plot(eig1[0:20], eig2[0:20], 'ro') ax.plot(eig1[20:40], eig2[20:40], 'go') ax.legend(["personA", "personB"]) labels=[i for i in range(20)]*2 for i, txt in enumerate(labels): ax.annotate(txt, (eig1[i], eig2[i])) plt.show() ``` ###Demo scikit learn Of course, everything that has been done so far regarding PCA can be achieved using dedicated - and optimized - libraries. For that purpose, we can use the `sklearn.decomposition` package that, among other things, implement the PCA using the *SVD* decomposition that we've looked at. A difference to note is the use, internally, of the `svd_flip(u, vt)`, a function that ensures the vectors to be deterministic, hence solving the sign ambiguity inherent to matrix decomposition, as already discussed above. The following part first performs the same operation as we've implemented before to get familiarized. ``` ''' Let's start --again-- from the training set, processed as a matrix ''' logging.info("shape of input matrix (n x p) = " + str(m_src.shape)) plot_matrix(m_src, color, my_color_map, h=4, w=10, transpose = False) ``` ``` ''' Create pca sklearn object, and compute decomposition ''' X = m_src.copy() n_components = 40 logging.info("Shape input data: "+str(X.shape)) pca_ = sklearn_decomposition_PCA(n_components=n_components) pca_.fit(X) ''' Get the eigenfaces ''' eigenfaces = pca_.components_ logging.debug("eigenfaces shape = " + str(eigenfaces.shape)) ''' Visualize the eigenfaces, just as we did before ''' if color: eigenfaces_cvt = (eigenfaces*255).astype(np.uint).copy() else: eigenfaces_cvt = eigenfaces.copy() plot_matrix(eigenfaces_cvt, color, my_color_map, h=4, w=10, transpose=False) ``` Without any surprise, we find again the same principal components as before. Continuing with sklearn library, we can reconstruct the original data based on the first $p$ components. Hopefully, the results are the same as the ones obtained with the homemade PCA. 
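Before reconstructing, one more quick cross-check (a hedged sketch, assuming `pca_` fitted above with 40 components and `my_pca` are still in scope): the explained variance ratios of the library implementation should coincide with those derived from the homemade eigenvalues.

```
# Hedged sketch: compare explained variance ratios of sklearn's PCA and MyPCA
ratio_sklearn = pca_.explained_variance_ratio_
ratio_homemade = my_pca.eigenvalues / np.sum(my_pca.eigenvalues)

logging.info("Max abs difference between explained variance ratios: "
             + str(np.max(np.abs(ratio_sklearn - ratio_homemade[:len(ratio_sklearn)]))))
```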
``` p=35 pca_ = sklearn_decomposition_PCA(n_components=p) pca_.fit(X) X_reduced_sk = pca_.transform(X) X_reconstructed_sk = pca_.inverse_transform(X_reduced_sk) print(X_reconstructed_sk.shape) plot_matrix(X_reconstructed_sk, color, my_color_map, h=4, w=10) logging.info("Difference between Homemade PCA and Scikit-learn PCA: " + str(np.linalg.norm(X_hat - X_reconstructed_sk))) logging.info(" ==> OK!") ``` We find the exact same images as the one reconstructed using the homemade code `MyPCA`, which is the expected but nonetheless relieving and self-rewarding conclusion! Let's continue with the PCA then... ### Projection of Test Faces reconstructed Using the same kind of plot as before, we can reconstruct, **in the same eigenfaces base** the test set images using $p$ first components. We need to apply the sames steps that were applied to the training set, to the test set. that is: 1. centering the data: substracting the mean **of the training set** 2. projecting the resulting centered data on the p first principal components computed by applying PCA on the training data. We won't fit the PCA to the test images, we *just* project the test data using the already-found principal components 3. reconstructing the original data based on those p first principal components, and plot the result of the 40 images on the 2-first components space. ``` scatter_plot_3D = False scatter_plot_3times = False ''' Once again, start by locally copying the data structure > data_test as new data > data_train_mean for centering ''' data_test = m_test_src.copy() data_train_mean = np.mean(m_src.copy(), axis = 0) data_test_centered = data_test - data_train_mean ''' Perojecting the data onto the p components ''' data_test_projected = my_pca.projectPC(data_test_centered, k=p) logging.debug("shape of data_projected = " + str(data_test_projected.shape)) ''' scatter plot in 3D - 3 first components ''' if scatter_plot_3D: fig = plt.figure(figsize=(8,8)) ax = fig.add_subplot(111, projection='3d') eig1 = data_test_projected[:,0] eig2 = data_test_projected[:,1] eig3 = data_test_projected[:,2] ax.scatter(eig1[0:10], eig2[0:10], eig3[0:10], 'b') ax.scatter(eig1[10:20], eig2[10:20], eig3[10:20], 'r') ax.scatter(eig1[20:30], eig2[20:30], eig3[20:30], 'g') ax.scatter(eig1[30:40], eig2[30:40], eig3[30:40], 'y') plt.show() if scatter_plot_3times: fig = plt.figure(figsize=(24,8)) eig1 = data_test_projected[:,0] eig2 = data_test_projected[:,1] eig3 = data_test_projected[:,2] ax1 = fig.add_subplot(1,3,1) eig1 = data_test_projected[:,0] eig2 = data_test_projected[:,1] eig3 = data_test_projected[:,2] ax1.plot(eig1[0:10], eig2[0:10], 'bo') ax1.plot(eig1[10:20], eig2[10:20], 'ro') ax1.plot(eig1[20:30], eig2[20:30], 'go') ax1.plot(eig1[30:40], eig2[30:40], 'yo') ax2 = fig.add_subplot(1,3,2) ax2.plot(eig1[0:10], eig3[0:10], 'bo') ax2.plot(eig1[10:20], eig3[10:20], 'ro') ax2.plot(eig1[20:30], eig3[20:30], 'go') ax2.plot(eig1[30:40], eig3[30:40], 'yo') ax3 = fig.add_subplot(1,3,3) ax3.plot(eig2[0:10], eig3[0:10], 'bo') ax3.plot(eig2[10:20], eig3[10:20], 'ro') ax3.plot(eig2[20:30], eig3[20:30], 'go') ax3.plot(eig2[30:40], eig3[30:40], 'yo') labels=[0,1,2,3,4,5,6,7,8,9]*4 for i, txt in enumerate(labels): ax1.annotate(txt, (eig1[i], eig2[i])) ax2.annotate(txt, (eig1[i], eig3[i])) ax3.annotate(txt, (eig2[i], eig3[i])) plt.show() ``` ``` eig1 = data_test_projected[:,0] eig2 = data_test_projected[:,1] ''' Reconstructing the data, using p first principal components ''' data_test_reconstructed = my_pca.reconstruct(data_test_projected, p, show=False) ''' 
Visualization of the reconstructed data
'''
plt.figure()
fig, ax = plt.subplots(1, 1, figsize=(16, 16), sharex=True, sharey=True)
ax.grid()
ax.plot(eig1, eig2, 'bo')
ax.set_title("Visualization of the reconstructed data using p components onto the 2 first PC")
for x_, y_, img_vector_ in zip(eig1, eig2, data_test_reconstructed):
    img = my_reshape(img_vector_ + data_train_mean, sq_size, color)
    ab = AnnotationBbox(OffsetImage(img, cmap = my_color_map), (x_, y_), frameon=False)
    ax.add_artist(ab)

plt.figure()
fig, ax = plt.subplots(1, 1, figsize=(8, 8), sharex=True, sharey=True)
eig1 = data_test_projected[:,0]
eig2 = data_test_projected[:,1]
ax.plot(eig1[0:10], eig2[0:10], 'bo')
ax.plot(eig1[10:20], eig2[10:20], 'ro')
ax.plot(eig1[20:30], eig2[20:30], 'go')
ax.plot(eig1[30:40], eig2[30:40], 'yo')
ax.legend(["personA", "personB", "personC", "personD"])
ax.set_title("Visualization of the reconstructed data using p components onto the 2 first PC")
labels=[0,1,2,3,4,5,6,7,8,9]*4
for i, txt in enumerate(labels):
    ax.annotate(txt, (eig1[i], eig2[i]))
plt.show()
```

The plots of the test reconstructions confirm the intuition behind the eigenfaces:
- Emma Stone, in blue, is mainly at the top, while Bradley Cooper, in red, is mainly at the bottom. They are well separated along the 2nd eigenface, which seems related to the shape of the face, with a very dark part on the bottom right.
- Jane Levy, in green, a woman who, to a human, resembles Emma Stone, is mainly at the top of the plot,
- Marc Blucas, in yellow, is a white man similar to Bradley Cooper. While half of his points are located in the lower left half, where most of the Bradley Cooper images also are, the rest of the Marc Blucas images lies between the blue and green points.

The two first eigenfaces, or principal components, already give us some of the important information present in the data, even if their cumulative explained variance - fitted on the training set inputs - was actually not that high!

Although we will certainly not modify any hyper-parameters based on the test set images, it is still interesting to reproduce the metrics that we built for the training set and the choice of an optimal $p$. It is interesting to answer such questions as:
- What is the final relative reconstruction error?
- How does it evolve with p?
- Was $p$ a sensible choice for the test sets?

```
'''
np array containing all the rmse computed
if n = 40, max number of components, then rmse has a size 40x41 (0 -> 40 included)
> a row matches the rmse of one image wrt the dimension reconstructed. 
Last column should NOT be 0 as we compute construction of test images (= with loss) ''' n = m_test_src.shape[0] rmse_test = np.empty((n,n+1)) rmse_test_pc = np.empty((n,n+1)) # rmse in percentage rmse_test_base = np.empty((n,)) for i in range(n): rmse_test_base[i]=my_pca.compute_error(data_test[i], X_train_mean) index_image = 0 for img_center_vector in data_test_centered: X_test_centered_reduced = my_pca.projectPC(img_center_vector, n) for k in range(n+1): # from 1 to n, included if k == 0: rmse_test[index_image, k] = rmse_test_base[index_image] rmse_test_pc[index_image, k] = 100 * rmse_test[index_image,k] / rmse_test_base[index_image] else: # logging.debug("reconstructing using " + str(p) + " principal components") X_test_hat_centered = my_pca.reconstruct(X_test_centered_reduced, k) rmse_test[index_image,k] = my_pca.compute_error(img_center_vector, X_test_hat_centered) rmse_test_pc[index_image, k] = 100 * rmse_test[index_image,k] / rmse_test_base[index_image] index_image += 1 ''' Visualization ! ''' fig = plt.figure(figsize = (16,16)) ax1 = fig.add_subplot(2,2,1) for idx in range(n): rmse_ = rmse_test[idx,:] ax1.plot([i for i in range(0,n+1)],rmse_, '-') ax1.set_title("reconstruction errors (RMSE) for all test images") ax1.set_xlabel("reconstruction dimension(s) \'p\' ") ax1.set_ylabel("RMSE") rmse_test_mean = np.mean(rmse_test, axis = 0) ax2 = fig.add_subplot(2,2,2) ax2.plot([i for i in range(0, n+1)], rmse_test_mean, "ro-") ax2.set_title("Mean of RMSE for all test images") ax2.set_xlabel("reconstruction dimension(s) \'p\' ") ax2.set_ylabel("mean of RMSEs") ax1.set_ylim((0,80)) ax2.set_ylim((0,80)) ax1.grid() ax2.grid() ax3 = fig.add_subplot(2,2,3) for idx in range(n): rmse_ = rmse_test_pc[idx,:] ax3.plot([i for i in range(0,n+1)],rmse_, '-') ax3.set_title("reconstruction errors (RMSE) for all test images, in %") ax3.set_xlabel("reconstruction dimension(s) \'p\' ") ax3.set_ylabel("RMSE") rmse_test_pc_mean = np.mean(rmse_test_pc, axis = 0) ax4 = fig.add_subplot(2, 2,4) ax4.plot([i for i in range(0, n+1)], rmse_test_pc_mean, "ro-") ax4.set_title("Mean of RMSE for all test images, in %") ax4.set_xlabel("reconstruction dimension(s) \'p\' ") ax4.set_ylabel("mean of RMSEs") ax3.set_ylim((0,110)) ax4.set_ylim((0,110)) ax3.grid() ax4.grid() ``` *Observations* - the reconstruction loss, computed in the same fashion as for the training set images, decreases also as $p$ increases - the slope seems to become near 0 as $p$ is ~35. It comforts us with the choice of $p=35$. However, assessing this parameter on a validation set, or even better performing cross-validation (or leave-one-out cross-validation) would be preferable. On the test set, one could also argue that not much info is gained for the component after the 20th. - using all the components, the remaining error in the reconstruction is still 60% of the error of the base error. there is "no way" to do better. > as a reminder, the "base" error is the RMSE between the input image, and the mean image of the training set. ###PCA Conclusion In this section, we performed a lot :-)! We tried to give the basic idea of the technique, and explained the pre-processing steps required in order to obtain genuine results using PCA (resizing, centering). Then, we have covered some of the math behind the technique, and discussed about the nominal *eigendecomposition*, the mathematical *trick* associated, and the *singular value decomposition*. Those three methods have been fully implemented in a class `MyPCA`and tested against each other. 
Using `MyPCA`, we have furthermore detailed and visualized what the eigenfaces are, and discussed about the reconstruction to find back our original data. This involves the choice of an *optimal* $p$, number of components used, which is a trade-off between information loss and dimensionality reduction. We discussed abundantly this topic and showed one way to choose $p$. Using this number, we finally plotted the train images **and** the test images onto the 2 first components space, and discussed those results. Finally, we also repeated some steps about eigenfaces generation and reconstruction using a well-known and optimized library `sklearn`, which confirmed all the results obtained using the homemade implementation. If you've reached this line: Congrat's! I know it's dense, but it's worth it! More to come... ##Transfer Learning *Doing this project alone as a working student, this part can be skipped* ##Features 2D Visualizations t-SNE is a quite nice technique for dimensionality reduction used in order to visualize high dimensional data into the 2D (or 3D) space. Other dimensionality reduction techniques often make use of the variance only in order to complete this dimensionality reduction, while t_SNE uses probabilities of being similar (or not). t-SNE stands for *t-Distributed Stochastic Neighbor Embedding* We won't go through the details of the technique, but you can surely find the theory in the [original paper](http://www.jmlr.org/papers/volume9/vandermaaten08a/vandermaaten08a.pdf), and some other complete tutorial in [towardsdatascience](https://towardsdatascience.com/t-sne-python-example-1ded9953f26) for instance. Using this dimensionality reduction technique, and its `sklearn` implementation, we can try and visualize our feature representation built. This technique is not trivial, and some interesting remarks and insights are explained in [How to Use t-SNE Effectively](https://distill.pub/2016/misread-tsne/). In particular, it's important to note the hyperparameters: - `perplexity`, which intuitively is *a guess of the number of close neighbors each point has* - `learning_rate`, very common in iterative methods. Those two parameters are tailored for our application. Let's specify what the labels are on the training set: - personA, Emma Stone: 0 - personB, Bradley Cooper: 1 ``` y_train = np.ones((40,)) y_train[0:20] = 0 logging.info("y_train shape : " + str(y_train)) logging.info("y_train sum [20]: " + str(sum(y_train))) ``` Util function to generate a colored and scattered plot, inspired by [datacamp](https://www.datacamp.com/community/tutorials/introduction-t-sne) ``` # Utility function to visualize the outputs t-SNE # inspired by https://www.datacamp.com/community/tutorials/introduction-t-sne def tsne_scatter(x, colors, title): # choose a color palette with seaborn. num_classes = len(np.unique(colors)) palette = np.array(sns.color_palette("bright", num_classes)) # palette = sns.color_palette("bright", num_classes) # create a scatter plot. f = plt.figure(figsize=(8, 8)) ax = f.add_subplot(111) sc = ax.scatter(x[:,0], x[:,1], lw=0, s=40, c=palette[colors.astype(np.int)]) ax.set_title(title) ax.grid() # # add the labels for each digit corresponding to the label txts = [] for i in range(num_classes): # Position of each label at median of data points. 
        xtext, ytext = np.median(x[colors == i, :], axis=0)
        txt = ax.text(xtext, ytext, str(i), fontsize=24)
        txts.append(txt)

    plt.show()
```

Let's generate a random number based on a seed, for the sake of reproducibility

```
random.seed(8042020)
rand_nb = random.randrange(0,1000)
logging.info("Random Number generated is: " + str(rand_nb))
```

Now, we can call the t-SNE `sklearn` implementation, using two tailored parameters:
- `perplexity` is set to the theoretical number of neighbours, which we know at this point,
- `learning_rate` is set to a small value, which seems a good balance.

```
tsne_hog = sklearn.manifold.TSNE(random_state=rand_nb, perplexity=20, learning_rate=50.0)
X_HOG_embedded = tsne_hog.fit_transform(X_HOG_train)

tsne_scatter(X_HOG_embedded, y_train, "Projection of HOG features of the training set using t-SNE")
```

```
X_train = get_matrix_from_set(training_set, color, sq_size,flatten=True)

p = 35
pca_ = sklearn_decomposition_PCA(n_components=p)
pca_.fit(X_train)
X_PCA_train = pca_.transform(X_train)
logging.info("X_PCA_train shape: " + str(X_PCA_train.shape))
```

```
tsne_pca = sklearn.manifold.TSNE(random_state=rand_nb, perplexity=20, learning_rate=50.0)
X_PCA_embedded = tsne_pca.fit_transform(X_PCA_train)

tsne_scatter(X_PCA_embedded, y_train, "Projection of PCA features of the training set using t-SNE")
```

###*Comments on the t-SNE plots*

Before going into the comments, let's first recall three of the key messages taken from the deep analysis in [How to Use t-SNE Effectively](https://distill.pub/2016/misread-tsne/).
1. Hyper-parameters really matter,
2. Cluster sizes in a t-SNE plot mean nothing,
3. Distances between well-separated clusters in a t-SNE plot may mean nothing

> This analysis was performed with `color = False`, meaning the PCA was computed in grayscale. The results, and specifically the t-SNE reproduction, are of course influenced by this choice.

---

The two plots, one per feature representation, are built with the hyperparameter `perplexity` set to the theoretical number of neighbours. The hyperparameter `learning_rate` was also adjusted for this problem. Changing those values changes the results; as is, they seem to give pretty *interesting* results.

###*Interesting* result ?

Those two graphs are based on the feature representations we built: HOG and PCA. Those are high-dimensional features, and the t-SNE technique is applied to project them, in a non-linear fashion, onto a 2D figure. The two plots show what we *hope* to find: some notion of distance/similarity between points of the same class.

- the plots indicate that in both cases, the features seem to be (mostly) separable between the classes, even if not linearly in 2D after projection. This is a nice and promising result to build upon.
- Considering the sizes of and distances between the clusters: as recalled above, they should not really matter in the analysis.
- As announced by the literature, the `perplexity` hyperparameter matters a lot.

Intuitively, one could say -- based on the plots above -- that several classifiers may work better than others for certain points.
> for instance, 3-NN may not work well if an image has a HOG feature representation projected onto [2, 4.5] or a PCA feature projected onto [2, 4] by the t-SNE transformation.

This kind of intuition may be erroneous, due to the high non-linearity inherent to this transformation. Let's be cautious then, and verify those feelings/ideas in the next steps (see Identification part). 
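Since `perplexity` drives the picture so strongly, a quick way to build (or lose) confidence in a given embedding is to re-run it for a few values and compare the plots. A minimal sketch, re-using `X_HOG_train`, `y_train`, `rand_nb` and `tsne_scatter` from above (the exact values tried here are arbitrary):

```
# Hedged sketch: visual sensitivity check of the perplexity hyperparameter
for perplexity_value in [5, 20, 35]:
    tsne_tmp = sklearn.manifold.TSNE(random_state=rand_nb,
                                     perplexity=perplexity_value,
                                     learning_rate=50.0)
    embedded_tmp = tsne_tmp.fit_transform(X_HOG_train)
    tsne_scatter(embedded_tmp, y_train,
                 "HOG features, t-SNE with perplexity = " + str(perplexity_value))
```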
While t-SNE is a great technique for visualization purpose, one shall remain careful regarding the conclusion drawn based on the plots. ``` # tsne of the test set for the HOG feature # tsne_hog_test = sklearn.manifold.TSNE(random_state=rand_nb, perplexity=10, learning_rate=50.0) # X_HOG_embedded = tsne_hog.fit_transform(X_HOG_test) # y_test = np.ones((40,)) # y_test[0:10]=0 # y_test[20:30]=2 # y_test[30:40]=3 # tsne_scatter(X_HOG_embedded, y_test, "Projection of HOG features of the training set using t-SNE") ``` # Exploit Feature Representations In this part, we will use the representations learnt before in order to build a classifier and identification system. > From an academic stand point, I was exempted of performing the *Classification* topic, being alone on this project as a working student. I appreciate the flexibility. However, as part of the *Impress your TA's*, and as this ought to be a *fun* part, I have decided to cover this topic as well. In the next two parts I will construct a classification and an identification system for each of the feature representations, HOG then PCA, and I will qualitatively and quantitatively compare the results obtained. > In the analysis, unless specified otherwise, we assume `color=False` and `sq_size=64`. Results with other parameters may differ. ``` def show_missed(X_test, y_test, y_predict): missed = np.where( np.array(y_test != y_predict)) correct = np.where( np.array(y_test == y_predict)) if len(missed[0]) > 0: logging.info("Mis-classified images: ") logging.info("Index: " + str(missed[0])) plot_matrix(X_test[missed[0],:], color, my_color_map, h=1, w=len(missed[0])) logging.info("Properly classifier images: ") logging.info("Index: " + str(correct[0])) plot_matrix(X_test[correct[0],:], color, my_color_map, h=2, w=1+len(correct[0])//2) ``` ## Classification - howto While the following sections may appear quite dense, the overall idea is the following: 1. pre-process data 1. Resizing to appropriate size, as already discussed in the first parts of this tutorial. Typically, the size is smaller than all images in the trainging and test set. 2. color or grayscale, as this notebook is fully compatible with both. 3. shuffle the (ordered) training set 2. compute feature representation, as described in the previous part 3. train the corresponding classifier using training sets 4. apply the classifier on unseen images (test) * apply same preprocessing as for training (except shuffling) * predict * compute metrics * observations and discussions In case you're lost, don't hesitate to go back a few steps - keeping an eye on the table of content. ### Preprocess the data - Let's just check the "meta-hyper-parameters": color and size. ``` logging.info("smallest px shall be less than: " + str(min(min(get_min_size(training_set)), min(get_min_size(test_set))))) logging.info("hyper-parameter sq_size : " + str(sq_size)) logging.info("hyper-parameter color : " + str(color)) ``` The smallest face crop that we have in training set and test set is (70,70), so we can without issue rescale all our face crops to (64,64) as part of the data pre-processing. > In this notebook, this resizing is handled with the parameter `sq_size = 64`, set at the very beginning - Get raw training data. Those are the raw face cropped from personA and personB. In order to be sure not to be disturbed by previous execution or previous code snipper, let's just get those data again. 
> this is not a performance issue to repeat this step, considering the relatively small amount of data we actually deal with.

- Resize the raw images to a common image size

The image size is defined by `sq_size`, and we need to resize the raw training images accordingly. Those two actions are performed in a single function. Code is of course provided.

```
X_train = get_matrix_from_set(training_set, color, sq_size = sq_size, flatten = False)
X_test = get_matrix_from_set(test_set, color, sq_size = sq_size, flatten = False)

y_train = np.zeros((40,))
y_train[20:40] = 1

'''
For now, set up "0" for personC; "1" for personD !
see discussion in a later cell.
'''
y_test = np.zeros((40,))
y_test[10:20] = 1
y_test[30:40] = 1
```

#### Shuffling

The training methods are numerical methods, for which the ordering of the training inputs may have an influence. To limit this bias as much as possible, the classifier's `fit` method automatically shuffles the training data between epochs, unless specified otherwise. We leave the default parameter to ensure this shuffling.

Nonetheless, as the training data are currently completely ordered, we introduce a pre-shuffling at this stage, before starting the very first run. For the sake of repeatability, we seed the RNG, as before in this tutorial.

```
def shuffle_training(X_train, y_train):
    np.random.seed(0)
    np.random.shuffle(X_train)
    np.random.seed(0)
    np.random.shuffle(y_train)
    np.random.seed()

shuffle_training(X_train, y_train)
```

In order to keep those data, as we may need them in a future (optimization ;-) ) step, let's create some "backup" variables. It is ok to do that as the amount of data remains small.

```
X_train_HOG_shuffled_back = X_train.copy()
y_train_HOG_shuffled_back = y_train.copy()

X_test_HOG_back = X_test.copy()
y_test_HOG_back = y_test.copy()
```

###Visualization of the training images (shuffled)

```
logging.info("Training set :")
logging.info("Training set shuffled [0 -> 19]:")
plot_matrix(X_train[0:20,:], color, my_color_map, h=1, w=20, transpose = False)
logging.info("Training set shuffled [20 -> 39]:")
plot_matrix(X_train[20:40,:], color, my_color_map, h=1, w=20, transpose = False)
```

Obviously, we see that the images of Emma Stone and Bradley Cooper are now interleaved.

###Visualization of the test images

```
logging.info("Test set :")
logging.info("Test set PersonA [0 -> 9]:")
plot_matrix(X_test[0:10, :], color, my_color_map, h=1, w=10, transpose = False)
logging.info("Test set PersonB [10 -> 19]:")
plot_matrix(X_test[10:20, :], color, my_color_map, h=1, w=10, transpose = False)
logging.info("Test set PersonC [20 -> 29]:")
plot_matrix(X_test[20:30, :], color, my_color_map, h=1, w=10, transpose = False)
logging.info("Test set PersonD [30 -> 39]:")
plot_matrix(X_test[30:40, :], color, my_color_map, h=1, w=10, transpose = False)
```

###Standard scaler

It is often a question of "should we scale our features or not?" -- scaling as in bringing each feature to zero mean and unit variance. While one could argue it is always better, in the context of this tutorial, we will not. The inputs are essentially pixel intensities: they already live on the same scale, and there are no order-of-magnitude differences between them. Of course, this does not mean the resulting feature representations are scaled... Yet, it does not seem to matter very much in our problem. 
As it does not strictly participate to the educative goal of this tutorial, I dediced not to include the scaler step in the different systems (or Pipeline, as we will call them). Nonetheless, if one wanted to try out, a common scaler is `sklearn.preprocessing.StandardScaler()`. If the data are not scaled, they however **do need** to be centered for the PCA technique, as discussed previously. This does not change. ## HOG Classification In this section, we focus on the classifier based on the HOG feature representation. Util function to plot side by side an color face, and its HOG descriptor, for educative purpose ``` def show_one_image_hog(idx_of_interest, person=personA, set="training"): if set == "training": source_set = training_set hog_set = hog_training elif set == "test": source_set = test_set hog_set = hog_test image_of_interest = source_set[person][idx_of_interest] hog_of_interest = hog_set[person][idx_of_interest] ''' Visualization of the image and its hog selected as image_of_interest ''' fig, (ax0, ax1) = plt.subplots(1,2,figsize = (8,4), sharex=False, sharey=False) ax0.imshow(cv2.cvtColor(image_of_interest, cv2.COLOR_BGR2RGB)) ax0.set_title("Face \'barely\' properly classified") ax1.imshow(hog_of_interest[1]) ax1.set_title("Visualization of the HOG of interest") logging.info("Shape of the descriptor : " + str(hog_of_interest[0].shape)) logging.info("Shape of the descriptor (visu): " + str(hog_of_interest[1].shape)) logging.info("Shape of the image of interest: " + str(image_of_interest.shape)) plt.show() ``` ### Construction of HOG Transformer Following the method and advices of [Kapernikov](https://kapernikov.com/tutorial-image-classification-with-scikit-learn/) ####[Kézako](https://forum.wordreference.com/threads/k%C3%A9zako.245210/)? We won't go into the details of the computer science design pattern leading to the Transformer building by `sklearn`, but in very essence, a transformer is a class that takes some input and perform some transformation on that, depending (possibly) on extra parameters. We actually already used such a transformer during the PCA demo using `sklearn`, and more specifically `sklearn.decomposition.PCA`. This `PCA` class is a `Transformer` because it inherits from `BaseEstimator` and `TransformerMixin`. ``` import inspect tuple_ancestor = inspect.getmro(sklearn_decomposition_PCA) for ancestor_class in tuple_ancestor: print("child of " + str(ancestor_class)) ``` In particular, `sklearn.decomposition.PCA` implements -among others- the methods `fit` and `transform`, which are required to be called a `Transformer`. A Transformer helps in defining a systematic way of performing some actions on the data. We can build our own transformer, called `HogTransformer`, make it inheriting of `BaseEstimator` and `TransformerMixin` and benefit from the same capabilities. 
*Don't worry if it's still fuzzy, it'll become clearer when we will use it.* ``` class HogTransformer(BaseEstimator, TransformerMixin): ''' Expects an array of 2D arrays (1 channel images) Calculates hog features for each image ''' def __init__(self, y=None, orientations = 9, pixels_per_cell = (8,8), cells_per_block = (2,2), block_norm = "L2-Hys", transform_sqrt = False, multichannel = False): self.y = y self.orientations = orientations self.pixels_per_cell = pixels_per_cell self.cells_per_block = cells_per_block self.block_norm = block_norm self.transform_sqrt = transform_sqrt self.multichannel = multichannel # default is grayscale def fit(self, X, y=None): logging.debug("[HOGTransformer.fit] X.Shape " + str(X.shape)) return self def transform(self, X, y=None): logging.debug("[HOGTransformer.transform] X.Shape " + str(X.shape)) def local_hog(X): if self.multichannel: X_ = X.copy() #.T # logging.debug("TO CHECK IF TRANSPOSE STILL NEEDED ?") # logging.debug("[HOGTransformer.transform] (1) X.Shape " + str(X.shape)) # logging.debug("[HOGTransformer.transform] (2) X_.Shape " + str(X_.shape)) else: X_ = X.copy() # logging.debug("[HOGTransformer.transform.local_HOG]" ) # cv2_imshow(X_) return skimage_feature_hog(X_, orientations = self.orientations, pixels_per_cell = self.pixels_per_cell, cells_per_block = self.cells_per_block, block_norm = self.block_norm, visualize = False, transform_sqrt = self.transform_sqrt, feature_vector = True, multichannel = self.multichannel) try: # tmp = [str(image.shape) for image in X] # logging.debug( str(tmp) ) return np.array([local_hog(image) for image in X]) except ValueError as ve: logging.error(str(ve)) except NameError as ne: logging.error(str(ne)) ``` /!\ A careful reader could recommend to also build a Transformer for the color conversion according to `color` attribute, as well as the resizing according to the `sq_size` attribute. ==> That is completely True ! Nonetheless, this part of the code has been covered much later that the pre-processing steps, and it is not mandatory to cover the full scope of this tutorial. That being said, a future version could indeed replace the utility functions handling those `color` and `sq_size` parameters as Transformers. From the previous section, we have ready the training data `X_train` and `y_train`. Let's apply (= fit, then transform) the `HogTransformer`. We log the final shape of the training data hog representations. ``` # scalify = StandardScaler() hogify = HogTransformer( orientations = 9, pixels_per_cell = (8,8), cells_per_block = (2,2), block_norm = "L2", transform_sqrt = True, multichannel = color) X_train_hog = hogify.fit_transform(X_train) # X_train_prepared = scalify.fit_transform(X_train_hog) X_train_hog_prepared = X_train_hog logging.info("X_train prepared for HOG classification. Shape: " + str(X_train_hog_prepared.shape)) ``` Now that we have the training data ready to train the classifier, let's go! Many possibilities exist, and let's try out a stochastic gradient descent classifier, from `sklearn`. In a first step, we can leave most of the parameters as is. Considering the loss function and penalty parameters, it leads to a linear SVM classifier, see [sklearn SGDClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html) for more details. When created, we can call the function `fit` to train the classifier. 
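As a quick, optional sanity check of that "linear SVM" claim (a tiny sketch, not needed for the rest of the notebook), we can print the defaults of `SGDClassifier`: hinge loss with an L2 penalty, which is the linear SVM objective optimized by stochastic gradient descent.

```
from sklearn.linear_model import SGDClassifier

# Default loss/penalty: hinge + l2, i.e. a linear SVM trained by SGD.
clf_defaults = SGDClassifier()
print(clf_defaults.loss, clf_defaults.penalty, clf_defaults.alpha)  # hinge l2 0.0001
```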
####Training

```
sgd_clf = SGDClassifier(random_state=42, max_iter=1000, tol=1e-6, verbose = 1, shuffle=True)
sgd_clf.fit(X_train_hog_prepared, y_train)
```

The classifier is now trained. We can test the result on the test sets.

> Important note: there was no validation set dedicated to parameter evaluation. While having one is good practice, it is not crucial for the moment as the goal is to demonstrate the methodology. In a few sections, when we discuss optimization of the classifier, we will perform cross-validation using dedicated tools from `sklearn`.

We need to apply the same pre-processing steps as we did on the training set. As we already resized and set up the color according to the global hyperparameters, we just need to apply the `transform` method of the `HogTransformer`.

> Why not fit? The `fit` method will never be applied on the test data, as it implies tuning the Transformer for the data - which we don't want with the test set. For the present HOG case, it does not change anything as nothing is done in that method; for other cases such as PCA, this is highly important as we don't want the Principal Components to be modified by the test set.

###Test on personA and personB

####Gathering and processing the test data

Those two persons correspond to the persons of the training set. Their indices in the test set are in [0, 19], so we can create `X_test_ab` that contains the test input data for those 2 persons only. Similarly, we extract the labels in `y_test_ab`.

```
X_test_ab = X_test[0:20,:]
y_test_ab = y_test[0:20]

print("X_test shape -> " + str(X_test.shape))
print("X_test_ab shape -> " + str(X_test_ab.shape))
print("sum y_test_ab [10] -> " + str(sum(y_test_ab)))

'''
Application of the hog transform
'''
X_test_hog_ab = hogify.transform(X_test_ab)
X_test_hog_ab_prepared = X_test_hog_ab
```

As before, let's save those as backup variables

```
X_test_HOG_ab_back = X_test_ab.copy()
y_test_HOG_ab_back = y_test_ab.copy()
```

####Tests and metrics

We use our classifier to predict the labels of the test inputs.

```
y_pred_hog_ab = sgd_clf.predict(X_test_hog_ab_prepared)
```

Now, we can compute the accuracy of the classifier

```
accuracy = np.sum(y_pred_hog_ab == y_test_ab)/len(y_test_ab)
logging.info("Accuracy: " + str(accuracy) )
```

Note that if we are really not sure about how to compute the accuracy, or if we prefer to use library functions, we can!

```
logging.info("Accuracy [SKlearn]: " + str(metrics.accuracy_score(y_test_ab, y_pred_hog_ab )))
```

Luckily, both accuracies give the same answer ;-)!

#####Summary
- we built a Transformer that modifies the input (= raw images) into HOG features
- we created a stochastic Gradient descent classifier
- we trained this classifier using the HOG features
- we transformed the test set raw images into HOG features
- we tested the classifier against those test HOG features, for personA (= class 0) and personB (= class 1)
- we computed the accuracy

It is not 100%, and it may be interesting to observe the failures.

####Visualization: Confusion Matrix

Let's create a (very simple) confusion matrix. The goal is to see where the faulty classifications are. In other words, in this simple classification run, what were the failing images?
``` from sklearn.metrics import confusion_matrix from sklearn.metrics import plot_confusion_matrix ''' Confusion matrix ''' def my_plot_confusion_matrix(classifier, X_test, y_test, true_labels, predicted_labels=None): if predicted_labels == None: predicted_labels = true_labels titles_options = [("Confusion matrix, without normalization", None), ("Normalized confusion matrix", 'true')] for title, normalize in titles_options: disp = plot_confusion_matrix(classifier, X_test, y_test, display_labels=None, cmap=plt.cm.viridis, normalize=normalize, include_values=True, xticks_rotation='horizontal', values_format=None) disp.ax_.set_title(title) disp.ax_.set_xticklabels(predicted_labels) disp.ax_.set_yticklabels(true_labels) plt.show() ``` ``` true_labels= ["Emma Stone", "Bradley Cooper"] my_plot_confusion_matrix(sgd_clf, X_test_hog_ab_prepared, y_test_ab, true_labels = true_labels, predicted_labels = true_labels) ``` The result is pretty clear: all the images that were misclassified (4) are Emma Stone faces that were classified as Bradley Cooper faces. Let's dig into the analysis to see what precisely are those images. - let's find which is index of the misclassification, - let's show the test images, separated from all the others. ``` mis_idx = np.where( np.array(y_test_ab - y_pred_hog_ab) != 0) logging.info("Indices misclassified: " + str(mis_idx[0])) images_misclassified = X_test_ab[mis_idx[0]].copy() logging.info("Images misclassified: ") plot_matrix(images_misclassified, color, my_color_map, h=1, w=mis_idx[0].shape[0]) logging.info("Images correctly classified: ") correct_idx = np.where(np.array(y_test_ab - y_pred_hog_ab) == 0) remaining_images = X_test_ab[correct_idx[0]].copy() plot_matrix(remaining_images, color, my_color_map, h=2, w=1+correct_idx[0].shape[0]//2) logging.info("Again - Overview of the training data to ease the understanding...") plot_matrix(X_train, color, my_color_map, h=4, w=10) ``` ####Observations and Discussions on test results > those observations are done with `color = False` and `sq_size = 64` The good thing with working with images is that we may get a better understanding by simply looking at the high dimensional input data: the raw images. Images #1, #2 and #7 and #9 are misclassified. They are all personA images, and we will focus more on those images first. #####personA (mis)classification - Images #2, #7, #9 appear visually really different from all the others: hair color and haircut are dramatically different than most of Emma Stone faces. To be more convinced, let's come back to the HOG feature descriptor, for an image we've already seen. ``` show_one_image_hog(2, person=personA, set="training") ``` On this descriptor, it appears clear that the haircut plays a key role in the description of the face, as visible in the top right corner where a clear oblique line is visible. This line cannot be present in the descriptor of the test set images #2 nor #7 nor #9, leading to a harder classification. - Image #1 is also misclassified. It may be more difficult to understand why there was a mistake there. Some intuition however: - it is the only image from the person A training and test sets that has this rotation - the haircut is - besides rotated - not as sharp as most of the other images - there is a strong dark background on the left, leading to a large gradient magnitude at this side, which is not present for other personA image. To confirm all those intuitions, let's look at the real descriptor used for those misclassified image. 
We are used to these image from the first part of this tutorial. ``` ''' computation of the HOG descriptor for the misclassified images ''' hog_missed = [] for index in mis_idx[0]: hog_image = hog_test[personA][index][1] hog_missed.append(hog_image) fig, ax_ = plt.subplots(1,mis_idx[0].shape[0],figsize = (16,4), sharex=True, sharey=True) for count in range(len(ax_)): ax_[count].imshow(hog_missed[count]) ``` Our assumptions seem to cope with the descriptors: 1. Image #1 is rotated, the haircut doesn't lead to a clear difference, and the background on the left is captured by the descriptor 2. Images #2, #7 and #9 indeed do not show the haircut. ##### Decision Boundaries Another information given by the classifier confirms the intuition that - image #1 seems more alike the others, but the rotation makes it harder to be classified, - image #2, #7, #9 don't have a key element of the descriptor, hence are more easily classified wrongly. This is confirmed by the *decision function* from the classifier, which returns the distance with respect to the boundary line. As stated in the documentation, it predicts the confidence scores for samples which is the signed distance of that sample to the hyperplane, see [sklearn source](https://github.com/scikit-learn/scikit-learn/blob/95d4f0841/sklearn/linear_model/_base.py#L247) > $ distance \gt 0 \Rightarrow class = 1$ ``` ''' Printing classifier scores (= distance to decision plane) ''' scores = sgd_clf.decision_function(X_test_hog_ab_prepared) logging.info("scores (decision functions): " + str(scores)) ''' Visualization ''' # plt.figure(figsize=(8,8)) fig, ax = plt.subplots(1, 1, figsize=(8,8)) scores_a = scores[0:10] scores_b = scores[10:20] ax.scatter(range(len(scores_a)), scores_a) ax.scatter(range(len(scores_a), len(scores_a)+len(scores_b)), scores_b) ax.plot(range(len(scores)), np.zeros((len(scores),)),"r-") ax.set_title("Visualization of distance to classification boundary") ax.legend(["Boundary", "personA", "personB"]) ax.set_xlabel("test image index",fontsize=12) ax.set_ylabel("Distance to boundary decision",fontsize=12) if not color: ax.set_xlim(left=-0.5, right=20.5) ax.set_ylim(bottom=-275.0, top=275.0) ax.text(2,150,"Classified as \nPersonB",fontsize=12) ax.text(12,-150,"Classified as \nPersonA",fontsize=12) plt.show() ``` On the above plot, personA is in blue, on the left part (indices 0 -> 9 included) and personB is on the right side, in orange, (indices 10 -> 19 included). From this graph, everything that is said is confirmed: * 4 images from personA are in the wrong side of the line, hence misclassified as personB instead of personA, * misclassified images #1 are nonetheless close to the boundary line. Image #1, in particular, is the rotated image: although visually, the image looks well personA's face, the rotation makes it harder for our classifier. Besides, it is close from the boundary, indicating the classifier *is not so confident* about its choice. * Images #2 and #7, two of the other three misclassified images -- that don't show the nominal personA haircut -- are further away from the boundary line. This corresponds to the visual hints that those test images actually do not look alike the training image, because of the haircut. * Image #9, the last test image of personA is not properly classified, but barely! It indeed shows the same characteristics as misclassified #2 and #7 (blond hair, different haircut, looks younger, ...) 
"Thanks to" the background and the viewpoint, however, the HOG descriptor differences do not lead to such a misclassification as for the two other similar images #2 and #7. The descriptor is reminded here below. * On the contrary, personB is classified, with a high confidence, properly for all test images. ``` show_one_image_hog(index, person=personA, set="test") ``` #####personB classification Bradley Cooper, personB, was properly classified 100% of the time already, indicating a nice resemblance between personB test set and training set HOG descriptors. As visible on the distance to boundary decision plot above, the classifier is really condifent about its choice, specifically in comparison with results from personA tests images. ###One More Thing... Pipelining! Until now, the training and test have been quite *manual* - we define data structure between the transformers - we specify the transformation manually one after the others The use of the `Transformer`'s that we have intriduced already gives the opportunity to do (much) better and take advantage of a dedicated *Architecture Style* of software programs called [Pipes and Filters](https://medium.com/@syedhasan010/pipe-and-filter-architecture-bd7babdb908-). The very good thing is that it's already provided by `sklearn`, and fully applicable to `Transformer`'s, which are in fact just `Filter`'s. This makes the classification much less of a manual process: - no more care about the follow-up of action for each run, - no more intermediate data structure creation, - **very** easy to modify and add/remove steps, withou messing with the data structures. Concretely, we need to define a `Pipeline` object that we will call to `fit` and `predict`. This `Pipeline`will make use of our `Transformer` objects and automatically connects the output of one to the input of the next one. We are now ready to reproduce the results using the `Pipeline` architecture! ``` ''' Definition of a pipeline ''' HOG_pipeline = Pipeline([('hogify', HogTransformer( orientations = 9, pixels_per_cell = (8,8), cells_per_block = (2,2), block_norm='L2', transform_sqrt=True, multichannel = color) ), ('classify', SGDClassifier( random_state = 42, max_iter = 1000, tol=1e-3) ) ]) ''' Training ''' # we set the X_train before the hogify of course... classifier = HOG_pipeline.fit(X_train, y_train) # logging.info("Percentage correct: " + str( 100* np.sum(y_pred == y_test_ab)/len(y_test_ab) ) + " %") ''' Predicting ''' y_pred_= classifier.predict(X_test_ab) ''' Computing and Showing accuracy ''' logging.info("Percentage correct: " + str( 100*np.sum(y_pred_ == y_test_ab)/len(y_test_ab)) + " %") misclassifier_index = np.where( np.array(y_test_ab != y_pred_)) logging.info("Indices misclassified: " + str(misclassifier_index[0])) ``` At the end of the pipeline, we have reproduced exactly the same results as before. ***From now on, I will use Pipelining instead of regular manual scripting*** ###Tests on personC and personD We have a *not great* accuracy so far for personA and personB, and we can wonder how the current classifier, based on HOG descriptors which are local, performs when personC and personD come into play. - we create `X_test_cd` containing the test data for personC and personD - we create `y_test_cd` containing the 0/1 labels for personC and personD ####Wait - What? Why? Hum... Indeed, assigning a label between 0 and 1 means we expect personC to be classified as personA, and personD as personB. This is disputable. 
Let's see this step as just a way to assess how the classification based on our feature works, and how the metric evolves when the inputs are "so" different, keeping in mind the results should not be as high as previously. Another way to see it is to understand persons A-C and B-D from the same classes, but only a biased training set is available. We want to assess how the final classifier perform on images never seen and quite different from training set, yet part of the classes (say, *white_young_female*, *white_40s_male*). ``` X_test_cd = X_test[20:40, :] y_test_cd = y_test[20:40] logging.info("X_test_cd shape: " + str(X_test_cd.shape)) logging.info("y_test_cd shape: " + str(y_test_cd.shape)) X_test_HOG_cd_back = X_test_cd.copy() y_test_HOG_cd_back = y_test_cd.copy() ``` Based on that, we can simply reuse the classifier already created ####Prediction ``` ''' Predicting ''' # note that as we use the pipeline, we can directly set as input the data matrix. # the hogify step is included. y_pred_cd= classifier.predict(X_test_cd) ``` ####Confusion Matrix ``` ''' Computing and Showing accuracy ''' logging.info("Percentage correct: " + str( 100*np.sum(y_pred_cd == y_test_cd)/len(y_test_cd)) + " %") misclassifier_index = np.where( np.array(y_test_cd != y_pred_cd)) properclassifier_index = np.where( np.array(y_test_cd == y_pred_cd)) logging.info("Indices misclassified: " + str(misclassifier_index[0])) true_labels=["Jane Levy", "Marc Blucas"] predicted_labels = ["Emma Stone", "Bradley Cooper"] my_plot_confusion_matrix(classifier, X_test_cd, y_test_cd, true_labels = true_labels, predicted_labels = predicted_labels) ``` Besides the *accuracy* of 55%, this gives a very interesting results: - All the images of personD were *correctly* classified - All the images of personC, except index 0, were *wrongly* classified > Again, I emphasize the fact that *correctly* and *wrongly* may not be appropriate considering previous remark. Let's have a closer look at the images. ``` logging.info("Mis-classified images: ") plot_matrix(X_test_cd[misclassifier_index[0],:], color, my_color_map, h=1, w=len(misclassifier_index[0])) logging.info("Properly classifier images: ") plot_matrix(X_test_cd[properclassifier_index[0],:], color, my_color_map, h=2, w=1+len(properclassifier_index[0])//2) ``` #### Decisions boundary -- confidence score In order to visually understand current classification results, we can plot the decision boundary. As indicated in the documentation, "the confidence score for a sample is the signed distance of that sample to the hyperplane", see [sklearn](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html#sklearn.linear_model.SGDClassifier.decision_function) It gives an idea of how far the sample is from the hyperplane, hence an idea of the confidence the classifier has. This is great to observe what samples are easy / tricky to classify. 
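As a side note, for a linear model this score is simply $w \cdot x + b$ (the signed distance to the hyperplane, up to the norm of $w$). A minimal sketch, assuming the `sgd_clf` and `X_test_hog_ab_prepared` variables from the earlier manual run are still in scope:

```
import numpy as np

# Recompute the decision scores by hand: X @ w + b.
manual_scores = X_test_hog_ab_prepared @ sgd_clf.coef_.ravel() + sgd_clf.intercept_[0]
print(np.allclose(manual_scores, sgd_clf.decision_function(X_test_hog_ab_prepared)))  # True
```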
``` ''' Printing classifier scores (= distance to decision plane) ''' scores = classifier.decision_function(X_test_cd) logging.info("scores (decision functions): " + str(scores)) ''' Visualization ''' # plt.figure(figsize=(8,8)) fig, ax = plt.subplots(1, 1, figsize=(8,4)) scores_a = scores[0:10] scores_b = scores[10:20] ax.scatter(range(len(scores_a)), scores_a) ax.scatter(range(len(scores_a), len(scores_a)+len(scores_b)), scores_b) ax.plot(range(len(scores)), np.zeros((len(scores),)),"r-") ax.set_title("Visualization of distance to classification boundary") ax.legend(["Boundary", "personA", "personB"]) ax.set_xlabel("test image index",fontsize=12) ax.set_ylabel("Distance to boundary decision",fontsize=12) if not color: ax.set_xlim(left=-0.5, right=20.5) ax.set_ylim(bottom=-275.0, top=275.0) ax.text(2,150,"Classified as \nPersonB",fontsize=12) ax.text(12,-150,"Classified as \nPersonA",fontsize=12) plt.show() ``` Looking at the above plots of the distance to boundary decision line: - there is no clear difference in classification of personC and personD - personD was always properly classified as personB (class = "1"). We can however say that the average distance to boundary line (hence confidence of the classifier) is lower than for previous personB test images (see previous section). This is perfectly normal: the classifier is correct, but less confident, for personD (absent from training) than for personB. - personC is almost always misclassified, except for one image, that we plot after. - personC is similar to personA **but** does not have this same haircut, characteristics to personA - the rest of the descriptor cannot make it for the haircut, and it leads to a misclassification. In particular, the lateral sides are mostly vertical lines, more characteristics to personB than personA. ####Better understanding ``` show_one_image_hog(0,personC, "test") ``` Above: this is the only image of personC classified as personA so far. We recognize, due to the view point and rotation of the picture, the oblique part on the upper right corner. Other images do not show this, and are then classified as personB. Example given, Image#1 of personC, and Image#0 of personB are shown here after. ``` show_one_image_hog(1,personC, "test") show_one_image_hog(1,personB, "training") ``` ### HOG Classifier conclusion In this section, we used our HOG feature to train a simple classifier (stochastic Gradient Descent), a linear model, that performed with an accuracy of 80% on the test set of personA and personB, and 55% on the test set of personC and personD. These numbers need to be taken with caution considering: - the small amount of images (both in training and test sets) - there was currently no optimization -- using a cross validation technique -- on the model hyperparameters. In particular: * `orientations` * `pixels_per_cell` * `cells_per_block` * `block_norm` * `transform_sqrt` The results have been assessed, and in particular the (relative) poor performance on PersonA test images, and personC test images. This emphasize the inherent "quality" of the representation, its locality, and fine orientation *granularity* (changes in the orientation can be perceived quite well), spatial *granularity*,...all that depending of key parametres listed above. Surely, some optimizations are possible and will be treated in a later stage. ## PCA Classification Applying the same routine as described for the HOG feature representation, we will build a classifier based on the PCA feature representation. 
###Gathering and processing the data As already discussed in the HOG section, we retrieve the original data, resize, convert them according to `color` attribute, and reshape them in a useable matrix. Those could also be done using a `Transformer`. ####Training data Code Subtlety: because of the `get_matrix_from_set` function and current organization of the code, we need to reapply the shuffling as we re-use the import from the beginning according to the different parameters. In case of a production code, this would need to be improved. ``` ''' get_matrix_from_set ''' X_train_PCA = get_matrix_from_set(training_set, color, sq_size, flatten=True) y_train_PCA = np.zeros((40,)) y_train_PCA[20:40] = 1 ''' Shuffling ''' logging.info("Labels before shuffling:\n" + str(", ".join([str(i.astype(np.uint8)) for i in y_train_PCA]))) shuffle_training(X_train_PCA, y_train_PCA) logging.info("Labels after shuffling:\n" + str(", ".join([str(i.astype(np.uint8)) for i in y_train_PCA]))) ``` ####Test data As an anticipation, we already pre-process the data we will use for the test. we split the dataset to our convenience between: - test on personA and personB - test on personC and personD ``` ''' get_matrix_from_set ''' X_test_PCA= get_matrix_from_set(test_set, color, sq_size, flatten = True) y_test_PCA = y_test.copy() logging.info("X_test_PCA Shape = " + str(X_test_PCA.shape)) logging.info("y_test_PCA Shape = " + str(y_test_PCA.shape)) ''' AB ''' X_test_PCA_ab = X_test_PCA[0:20, :].copy() y_test_PCA_ab = y_test_PCA[0:20].copy() logging.info("X_test_PCA_ab Shape = " + str(X_test_PCA_ab.shape)) logging.info("y_test_PCA_ab Shape = " + str(y_test_PCA_ab.shape)) logging.info("Sum y_test_PCA_ab [10] = " + str(sum(y_test_PCA_ab))) ''' CD ''' X_test_PCA_cd = X_test_PCA[20:40, :].copy() y_test_PCA_cd = y_test_PCA[20:40].copy() logging.info("X_test_PCA_cd Shape = " + str(X_test_PCA_cd.shape)) logging.info("y_test_PCA_cd Shape = " + str(y_test_PCA_cd.shape)) logging.info("Sum y_test_PCA_cd [10] = " + str(sum(y_test_PCA_cd))) ``` ``` ''' Define backup variables ''' X_train_PCA_shuffle_back = X_train_PCA.copy() y_train_PCA_shuffle_back = y_train_PCA.copy() X_test_PCA_back = X_test_PCA.copy() y_test_PCA_back = y_test_PCA.copy() X_test_PCA_ab_back = X_test_PCA_ab.copy() y_test_PCA_ab_back = y_test_PCA_ab.copy() X_test_PCA_cd_back = X_test_PCA_cd.copy() y_test_PCA_cd_back = y_test_PCA_cd.copy() ``` ###PCA pipeline Using Scikit-Learn library, that we already introduced in the previous part, we create the PCA Transformer. As input, we give first the `n_components` equal to the optimal $p$ that we found in previous section. This does not represent much of a dimensionality reduction, as already discussed. ``` ''' Below commented code create and train a classifier \"manually\"" ''' # p = 35 # pcaify = sklearn_decomposition_PCA(n_components = p) # X_train_PCA = pcaify.fit_transform(X_train) # X_train_prepared = X_train_PCA # logging.info("PCA fit_transform result:\nshape:" + str(X_train_prepared.shape)) # sgd_clf = SGDClassifier(random_state=42, max_iter = 1000, tol=1e-4, verbose = 1) # sgd_clf.fit(X_train_prepared, y_train) ``` The pipeline is created, and trained using input data directly. 
``` ''' Definition of a PCA pipeline ''' pcaify = sklearn_decomposition_PCA(n_components = 35) classify = SGDClassifier( random_state = 42, max_iter = 10000, tol=0.0001) PCA_pipeline = Pipeline([('pcaify', pcaify), ('classify', classify)]) ''' Training ''' clf_pca = PCA_pipeline.fit(X_train_PCA, y_train_PCA) ``` ###Test on personA and personB ####Predictions Using the pipeline, we can predict the results for the tests images of personA and personB. We compute the accuracy ``` ''' Predicting A and B ''' y_pred_PCA_ab = clf_pca.predict(X_test_PCA_ab) logging.info("Percentage correct: " + str( 100*np.sum(y_pred_PCA_ab == y_test_PCA_ab)/len(y_test_PCA_ab)) + " %") ``` For the sake of completion, we can - one extra time - get the eigenfaces used, according to the `n_components = 35` parameter. ``` efaces = pcaify.components_ logging.debug("eigenfaces shape = " + str(efaces.shape)) ''' Visualize the eigenfaces, just as we did before ''' if color: efaces_cvt = (efaces*255).astype(np.uint).copy() else: efaces_cvt = efaces.copy() plot_matrix(efaces_cvt, color, my_color_map, h=4, w=10, transpose=False) ``` ####Confusion Matrix As previously, it's interesting to get the confusion matrix view, even if it is quite simple in our problem, considering only two classes. ``` true_labels = ["Emma Stone", "Bradley Cooper"] my_plot_confusion_matrix(clf_pca, X_test_PCA_ab, y_test_PCA_ab, true_labels = true_labels) ``` The confusion matrix directly shows that on test images for A and B, there is an accuracy of 100% and there is no misclassified images. ####Observations and Discussions on test results The first thing to note is the absolute result 100% accuracy with the first attempt. This is pretty good. As a reminder, we had "only" 80% using HOG feature representation. However, we should be careful to draw any conclusion at this point as none of the classifier have been optimized yet in terms of hyperparameters (specifically not the HOG classifier - see later) Similarly to what we did for the HOG feature representation, let's analyze deeper the results of the classification on test images of personA and personB using the PCA feature representation, with hyperparameter $p=35$. This analysis is performed with `sq_scale = 64` and `color = False`. ``` ''' Printing classifier scores (= distance to decision plane) ''' scores = clf_pca.decision_function(X_test_PCA_ab) logging.info("scores (decision functions): " + str(scores)) ''' Visualization ''' # plt.figure(figsize=(8,8)) fig, ax = plt.subplots(1, 1, figsize=(8,4)) scores_a = scores[0:10] scores_b = scores[10:20] ax.scatter(range(len(scores_a)), scores_a) ax.scatter(range(len(scores_a), len(scores_a)+len(scores_b)), scores_b) ax.plot(range(len(scores)), np.zeros((len(scores),)),"r-") ax.set_title("Visualization of distance to classification boundary") ax.legend(["Boundary", "personA", "personB"]) ax.set_xlabel("test image index",fontsize=12) ax.set_ylabel("Distance to boundary decision",fontsize=12) if not color: ax.text(2,0.5e8,"Classified as \nPersonB",fontsize=12) ax.text(12,-0.5e8,"Classified as \nPersonA",fontsize=12) plt.show() ``` Visualizing the decision boundaries score, it seems the classifier is *pretty certain* about the personB classification, and slightly *less certain* for personA. ###Tests on personC and personD Similarly for what we did with the HOG feature representation, we can use our classifier on person C and person D test data. The remarks we made before regarding the intrinsic meaning of this test stay applicable. 
We use `X_test_PCA_cd` and `y_pred_PCA_cd` as data structures. We first use the classifer to predict, then print the indices and plot the misclassified images, the confusion matrix and the score, as we did before already. ####Predictions ``` ''' Predictions on C an D ''' y_pred_PCA_cd = clf_pca.predict(X_test_PCA_cd) logging.info("Percentage correct: " + str( 100*np.sum(y_pred_PCA_cd == y_test_PCA_cd)/len(y_test_PCA_cd)) + " %") ''' get misclassified and correctly classified images index show related images ''' show_missed(X_test_PCA_cd, y_test_PCA_cd, y_pred_PCA_cd) ``` ####Confusion Matrix ``` true_labels=["Jane Levy", "Marc Blucas"] predicted_labels = ["Emma Stone", "Bradley Cooper"] my_plot_confusion_matrix(clf_pca, X_test_PCA_cd, y_test_PCA_cd, true_labels = true_labels, predicted_labels = predicted_labels) ``` ####Decision Boundaries ``` scores = clf_pca.decision_function(X_test_PCA_cd) logging.info("scores (decision functions): " + str(scores)) ''' Visualization ''' # plt.figure(figsize=(8,8)) fig, ax = plt.subplots(1, 1, figsize=(8,4)) scores_a = scores[0:10] scores_b = scores[10:20] ax.scatter(range(len(scores_a)), scores_a) ax.scatter(range(len(scores_a), len(scores_a)+len(scores_b)), scores_b) ax.plot(range(len(scores)), np.zeros((len(scores),)),"r-") ax.set_title("Visualization of distance to classification boundary") ax.legend(["Boundary", "personA", "personB"]) ax.set_xlabel("test image index",fontsize=12) ax.set_ylabel("Distance to boundary decision",fontsize=12) # if not color: # ax.text(4,4e7,"Classified as \nPersonB",fontsize=12) # ax.text(12,-2.5e7,"Classified as \nPersonA",fontsize=12) plt.show() ``` ####Observations and Discussions on the test results The results of the prediction on personC and D are interesting as they differ a lot from the ones of the HOG feature. In the case of the PCA feature, both person C and person D have several correct and wrong predictions. Results are slighlty better (by one image) for person D, but considering the few images on the test sets, this is most likely not statistically representative. Some hints to better understand the results: as a reminder, PCA feature representation is a projection of the images into a vector of coefficients - weights - of the eigenfaces (Principal Components) found during the training phase. If an image is not classified properly, it means its feature representation *differs too much* from the class feature representation. Else, the classifier most likely would have found the correct class. Furthermore, the principal components are the directions of maximal variance of the training images. It follows that a wrongly classified image is not explained best by a linear combination of the $p$ principal components (direction of max variance of the training phase). Looking above at the misclassified images, one can wonder how it comes that an image, looking just like another - is misclassified. There are several possible explanations: - too much influence from the background - a too different lighting conditions - different scale w.r.t training images - different orientation (pose and view points) w.r.t. training images ####Better understanding > Note: the results of this section run in `color=True` mode may be slightly different In order to better understand how the classifier works, let's choose a test image and try to improve the results. To do so, we will modify the image input itself, to see what can lead to a good classification with the system we have. 
> this may not be a good practice, as usually, one would rather work on the training and validation sets, and **not** modify the test set. However, the goal of this sub section is limited to give a bit more insight about PCA and classification based on PCA information, so that modifying a chosen image lead to a change in classification results. The goal is not to improve the classifier, but rather understand the modifications in input images that makes it delivering the results. Let's choose image of person C, index #6. ``` logging.info("Image we work on: image personC, index 6") img = X_test_PCA_cd[6,:].copy() fig = plt.figure(figsize=(4,4 )) ax= fig.subplots(1,1) ax.imshow(my_reshape(img, sq_size, color), cmap = my_color_map, interpolation='nearest') ax.set_axis_off() ``` This image is misclassified as class "1" instead of "0". - Mean value of the image, after *training_mean* substraction ``` img_centered = img - pcaify.mean_ fig = plt.figure(figsize=(4,4)) ax= fig.subplots(1,1) ax.imshow(my_reshape(img_centered, sq_size, color), cmap = my_color_map, interpolation='nearest') ax.set_axis_off() plt.show() logging.info("Remaining mean value test ab : " + str(np.round(np.mean(X_test_PCA_ab-pcaify.mean_),2))) logging.info("Remaining mean value test cd : " + str(np.round(np.mean(X_test_PCA_cd-pcaify.mean_),2))) logging.info("Remaining mean value expexted : " + str(np.round(np.mean(X_train_PCA - pcaify.mean_),2))) logging.info("Remaining mean value image#6 : " + str(np.round(np.mean(img_centered),2))) logging.info("Nominal mean value image#6 : " + str(np.round(np.mean(img),2))) ``` We see that the test images of personC and personD have a resulting mean lower than the training set (=0) and also lower than personA and personB test images. > Intuitively, the average is a bit "darker". We also see that the image is **rotated** wrt mean image. This is visible to whitish areas around the eyes and mouth, and slightly darker around the supposed chin. An intuitive PCA-related conclusion is that the variance induced by the rotation is not well explained by personA images. What if we would modify this test image so that we try to rotate it "back" to a regular front face ? Doing so, we hope to decrease this unexplained variance. - we use the `scipy.ndimage` library - the angle to rotate is experimentally $-22 [deg]$ - as rotated, some pixels are missing values to fill the shape. We set those pixels at the average pixel value of the image before modification. - Rotation is (experimentally) enough; no need of extra shift to fit better. 
``` ''' Rotation of the image ''' img_rotated = ndimage.rotate(my_reshape(img, sq_size, color), -22, reshape=False, mode = "constant", cval=np.mean(img)) img_rotated_flatten = img_rotated.flatten() img_rotated_centered = img_rotated_flatten - pcaify.mean_ ''' Visualization ''' fig = plt.figure(figsize=(12,4)) ax1, ax2, ax3 = fig.subplots(1,3) ax1.imshow(my_reshape(img, sq_size, color), cmap = my_color_map, interpolation='nearest') ax2.imshow(img_rotated, cmap = my_color_map, interpolation='nearest') ax3.imshow(my_reshape(img_rotated_centered, sq_size, color), cmap = my_color_map, interpolation='nearest') ax1.set_title("Original #6") ax2.set_title("Rotated #6") ax3.set_title("Rotated #6 - training_mean ") ax1.set_axis_off() ax2.set_axis_off() ax3.set_axis_off() logging.info("New mean value image#6 modified : " + str(np.round(np.mean(img_rotated_centered),2))) ``` Now, we can replace the image #6 that was misclassified by the newly rotated image, for which missing values added are the mean of the original image. Successively, we show the test set for person C and D originally, and the modified one (image #6 changed by its rotated version) ``` ''' Replacement of the original image #6 by its rotated version ''' X_test_PCA_cd_new = X_test_PCA_cd.copy() X_test_PCA_cd_new[6,:] = img_rotated_flatten logging.info("Usual and nominal test set for personC and personD") plot_matrix(X_test_PCA_cd, color, my_color_map, h=2, w=10) logging.info("Modified test set for personC and personD, image#6 replaced") plot_matrix(X_test_PCA_cd_new, color, my_color_map, h=2, w=10) ``` Let's try to predict again the class for this test "new" test set, which is only different by image#6. ``` ''' Prediction on new test set ''' y_pred_PCA_cd_new = clf_pca.predict(X_test_PCA_cd_new) logging.info("Percentage correct: " + str( 100*np.sum(y_pred_PCA_cd_new == y_test_PCA_cd)/len(y_test_PCA_cd)) + " %") misclassifier_index = np.where( np.array(y_test_PCA_cd != y_pred_PCA_cd_new)) properclassifier_index = np.where( np.array(y_test_PCA_cd == y_pred_PCA_cd_new)) logging.info("Indices misclassified: " + str(misclassifier_index[0])) scores = clf_pca.decision_function(X_test_PCA_cd_new) # Visualization true_labels=["Jane Levy", "Marc Blucas"] predicted_labels = ["Emma Stone", "Bradley Cooper"] my_plot_confusion_matrix(clf_pca, X_test_PCA_cd_new, y_test_PCA_cd, true_labels = true_labels, predicted_labels = predicted_labels) ``` Now, the image#6_rotated is properly classified, as hoped, thanks to the rotation. Concretely, the rotation had mostly the following impacts: - decrease of the influence of a dark area because of the hair * influence of background * lighting conditions - better match with eigenfaces * variance better explained by personA-related eigenfaces This is not a rigorous method to determine the exact behavior of the classifier. Rather, it's an intuitive reasoning showing how an image can be (mis-)classified by PCA repesentation based classifier. It also shows that rotation of an image can also matter in case of the PCA feature representation, as in the HOG representation. ###HOG vs PCA classification Based on the same training samples, and the same classifier technique used, PCA does a better job at classifying the test set images. The test set contains some images that are just alike the training set, but also some with different view points, and visual differences in terms of person haircut, color, ... 
PCA-based classification seems less dramatically confused by those aspects, and seems to have successfully captured the representation associated with personA and personB (the two classes). Yet, we have seen and analyzed that rotation may have a large impact on classification results.

HOG-based classification, on the other hand, suffers more from these training-test set differences and, as we saw, is also more disturbed by the local haircut change between personA and personC.

However, we should remain careful at this point:
- the classifiers have not been optimized in any way,
- the training set is pretty small.

## Identification

In an identification setup the goal is to **compute similarity scores** between pairs of data examples and use them to identify new images.

In this section, we will:
1. describe the visualizations used throughout the section
2. compute the feature representations HOG / PCA
3. compute the pairwise distances between the test set images and the training set images.
4. discuss those distance results, macroscopically and at image level,
5. use k-NN to label an image based on its nearest neighbours.

In particular, we will:
- discuss the choice of the $k$ parameter, in different ways
- discuss the results of the labeling
- analyze the images having the closest and furthest nearest neighbor

First, we will need to import several pairwise metrics functions from the `sklearn` library.

```
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.metrics.pairwise import cosine_distances
from sklearn.metrics.pairwise import euclidean_distances
from sklearn.metrics.pairwise import manhattan_distances
```

```
# Reminder: row-wise norms of an arbitrary feature matrix X can be computed with numpy
from numpy.linalg import norm
norm(X, axis=1, ord=1)       # L-1 norm
norm(X, axis=1, ord=2)       # L-2 norm
norm(X, axis=1, ord=np.inf)  # L-∞ norm
```

We also create a util function that allows plotting the similarity matrix (or distance matrix), with several parameters. This will be used extensively, while the code itself isn't that important.

> Note that by default, the colormap "jet" is used. This is a personal preference and I'm used to working with it. Should that **not** suit you, you can of course change this colormap, either locally as the argument of the plot_similarity_matrix function, or globally as the default value of this parameter.
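Before the plotting helper, here is a tiny self-contained check (on made-up 2D vectors, not our features) of what these pairwise functions return: a matrix of shape `(len(A), len(B))`, which is exactly the (test, training) layout used below.

```
import numpy as np
from sklearn.metrics.pairwise import euclidean_distances, manhattan_distances, cosine_distances

# Two "test" vectors and three "training" vectors, purely illustrative.
A = np.array([[1.0, 1.0], [2.0, 0.0]])
B = np.array([[4.0, 5.0], [1.0, 1.0], [0.0, 2.0]])

print(euclidean_distances(A, B))  # entry (i,j) = ||A[i] - B[j]||_2, e.g. (0,0) -> 5.0
print(manhattan_distances(A, B))  # entry (i,j) = ||A[i] - B[j]||_1, e.g. (0,0) -> 7.0
print(cosine_distances(A, B))     # entry (i,j) = 1 - cosine similarity of A[i] and B[j]
```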
``` from mpl_toolkits.axes_grid1 import make_axes_locatable def plot_similarity_matrix(similarity_matrix, show_numbers = False, vmax1 = None, vmax2 = None, norm_only=False, width=16, height = 8, fontsize = 10, return_normed = False, cmap=plt.cm.jet): similarity_matrix_norm = 100*similarity_matrix / np.linalg.norm(similarity_matrix, axis = 1, ord=1, keepdims=True) # for i in range(similarity_matrix_norm.shape[0]): # print(sum(similarity_matrix_norm[0,:])) if norm_only: fig = plt.figure() ax = fig.add_subplot(111) fig.set_size_inches(width, height) im = ax.imshow(similarity_matrix_norm, vmax=vmax2, cmap=cmap) ax.set_title('%') divider = make_axes_locatable(ax) cax = divider.append_axes("right", size="5%", pad=0.1) fig.colorbar(im, cax=cax) if show_numbers: for i in range(similarity_matrix.shape[0]): for j in range(similarity_matrix.shape[1]): ax.text(j, i, str(round(similarity_matrix_norm[i,j],0)).split(".")[0],fontsize=fontsize,va='center', ha='center') else: fig, ax = plt.subplots(ncols=2) fig.set_size_inches(width, height) im1 = ax[0].imshow(similarity_matrix, vmax=vmax1, cmap=cmap) ax[0].set_title('as is') im2 = ax[1].imshow(similarity_matrix_norm, vmax=vmax2, cmap=cmap) ax[1].set_title('%') dividers = [make_axes_locatable(a) for a in ax] cax1, cax2 = [divider.append_axes("right", size="5%", pad=0.1) for divider in dividers] fig.colorbar(im1, cax=cax1) fig.colorbar(im2, cax=cax2) if show_numbers: for i in range(similarity_matrix.shape[0]): for j in range(similarity_matrix.shape[1]): ax[0].text(j, i, str(similarity_matrix[i,j]), fontsize=fontsize,va='center', ha='center') ax[1].text(j, i, str(round(similarity_matrix_norm[i,j],0)).split(".")[0],fontsize=fontsize,va='center', ha='center') fig.tight_layout() plt.show() if return_normed: return similarity_matrix_norm ``` #### Illustration of distance measures In order to better understand the Visualization used, a tool example is given hereunder. The input is a matrix handmade. Later, it will be the matrix of shape (#test_image, #train_image) containing the distances computed pairwise. Regarding the input matrix: - on each line, the ratio between number is kept the same - the five rows contain three different scale of the numbers: ``` [[ 100, 20, 30, 40, 50], [ 2, 10, 3, 4, 5], [ 20, 30, 100, 40, 50], [ 2, 3, 4, 10, 5], [ 200, 300, 400, 500,1000]] ``` The numbers written indicate the value of the cell. **Left side**: The "As is" matrix colors the cell as the numbers are set. The colorscale is therefore really large, and it can be useful to observe the disparity of measures across all tests. If the represented matrix is a distance matrix, for instance, one can observe that the 5th test image is *very far* from all the training images, as the 5th row as larger numbers than any other row. The drawback is that if some numbers are much higher than others, we lose in granularity to represent the differences between those numbers. For instance, a distance of "2" and a distance of "20" are very much alike. **Right side**: The figure shows the same input matrix but normalized by row. That means that the sum of all numbers in a row is equal to 100 %. It is helpful to analyze *locally* the distance/similarity values for 1 specific test image with respect to all training images. 
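As a small standalone check of that row normalization (mirroring the L1 normalization done inside `plot_similarity_matrix`):

```
import numpy as np

row = np.array([100.0, 20.0, 30.0, 40.0, 50.0])   # first row of the handmade matrix below
row_pct = 100 * row / np.linalg.norm(row, ord=1)  # divide by the L1 norm of the row
print(np.round(row_pct, 2))  # [41.67  8.33 12.5  16.67 20.83]
print(row_pct.sum())         # 100.0
```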
```
test_dist_mtx = np.array([[100,20,30,40,50], [2,10,3,4,5], [20,30,100,40,50],[2,3,4,10,5], [200,300,400,500,1000]])
plot_similarity_matrix(test_dist_mtx, show_numbers=True, width=8, height=4, fontsize=14)
```

In the next sections of this tutorial, these visualizations will be used to detail the similarity/distance measures between our different images.

The vertical axis corresponds to test images, with the following mapping
- 0 -> 9: images of PersonA, Emma Stone
- 10 -> 19: images of PersonB, Bradley Cooper
- 20 -> 29: images of PersonC, Jane Levy
- 30 -> 39: images of PersonD, Marc Blucas

The horizontal axis corresponds to the training images, with the following mapping:
- 0 -> 19: images of PersonA (training set)
- 20 -> 39: images of PersonB (training set)

If the feature descriptions are appropriate, the feature distance measurements should lead to
- a very small distance (= large similarity measure) between test and training images of PersonA, and similarly for personB;
- a very high distance (= small similarity measure) between test images of PersonA and training images of PersonB, and vice versa.

Put another way, persons from the same class should share feature descriptions, and not share the feature descriptions of the other class. This is exactly in line with how we defined what a *good* feature is, at the beginning of this notebook.

Intuitively, the feature descriptions should satisfy:
- test personA $\simeq$ training personA $\&$ test personA $\neq$ training personB;
- test personB $\neq$ training personA $\&$ test personB $\simeq$ training personB;
- test personC $\sim$ training personA $\&$ test personC $\neq$ training personB;
- test personD $\neq$ training personA $\&$ test personD $\sim$ training personB;

Visually plotted as a matrix, it could hence look like the following image.

```
test_dist_mtx = np.array([[5,20], [20,5], [10,20], [20,10]])
plot_similarity_matrix(test_dist_mtx, show_numbers=True, width=5, height=5, fontsize=14)
```

To ease later computation, we also write a simple `get_distances` function (naming is not great...) that returns the sum of the distances in each of the eight blocks of the global pairwise distance matrix (4 test groups x 2 training groups). If it is not clear yet, it will become so soon enough when we use it.

```
def get_distances(matrix_distances):
    '''
    res:[ [dist(A,A), dist(A,B)]
          [dist(B,A), dist(B,B)]
          [dist(C,A), dist(C,B)]
          [dist(D,A), dist(D,B)] ]
    '''
    res = np.empty((4,2))
    res[0,0] = sum(sum(matrix_distances[0:10,0:20]))
    res[0,1] = sum(sum(matrix_distances[0:10,20:40]))
    res[1,0] = sum(sum(matrix_distances[10:20,0:20]))
    res[1,1] = sum(sum(matrix_distances[10:20,20:40]))
    res[2,0] = sum(sum(matrix_distances[20:30,0:20]))
    res[2,1] = sum(sum(matrix_distances[20:30,20:40]))
    res[3,0] = sum(sum(matrix_distances[30:40,0:20]))
    res[3,1] = sum(sum(matrix_distances[30:40,20:40]))
    return res
```

### HOG feature descriptors

#### Pre-process data

Similarly to classification, the very first step is to pre-process the data. We use the `get_matrix_from_set` function that allows us to work with different sizes and colors. For the identification, it is not required to shuffle the training set, as the distance to all samples is computed.

```
X_train = get_matrix_from_set(training_set, color, sq_size = sq_size, flatten = False)
X_test = get_matrix_from_set(test_set, color, sq_size = sq_size, flatten = False)

y_train = np.zeros((40,))
y_train[20:40] = 1

'''
For now, set up "0" for personC; "1" for personD !
''' y_test = np.zeros((40,)) y_test[10:20] = 1 y_test[30:40] = 1 ``` #####Visualization ``` logging.info("Training set (horizontal matrix axis):") logging.info("Training set PersonA [0 -> 19]:") plot_matrix(X_train[0:20,:], color, my_color_map, h=1, w=20, transpose = False) logging.info("Training set PersonB [20 -> 39]:") plot_matrix(X_train[20:40,:], color, my_color_map, h=1, w=20, transpose = False) logging.info("Test set (vertical matrix axis):") logging.info("Test set PersonA [0 -> 9]:") plot_matrix(X_test[0:10, :], color, my_color_map, h=1, w=10, transpose = False) logging.info("Test set PersonB [10 -> 19]:") plot_matrix(X_test[10:20, :], color, my_color_map, h=1, w=10, transpose = False) logging.info("Test set PersonC [20 -> 29]:") plot_matrix(X_test[20:30, :], color, my_color_map, h=1, w=10, transpose = False) logging.info("Test set PersonD [30 -> 39]:") plot_matrix(X_test[30:40, :], color, my_color_map, h=1, w=10, transpose = False) ``` #####Creation of Transformer As usual now, let's create the transformer that computes the HOG for the images. The parameters may sound new to you: don't panic. Those parameters are actually found to be an optimization - discussed later -, and the effect of the parameters on the identification part is also discussed a bit later. ``` hogify = None hogify = HogTransformer( orientations = 9, pixels_per_cell = (16,16), cells_per_block = (2,2), block_norm = "L2-Hys", transform_sqrt = False, multichannel = color) ``` #####Transformation of Inputs - **X_train**: application of the fit then transform method from the transformer - **X_test**: application of the transform method from the transformer As discussed already, it doesn't really change a thing for the HOG computation as the fit doesn't do much (recall, it is a homemade Transformer we created in the previous task). ``` X_train_hog = hogify.fit_transform(X_train) X_test_hog = hogify.transform(X_test) logging.debug("X_train_hog Shape: " + str(X_train_hog.shape)) logging.debug("X_test_hog Shape: " + str(X_test_hog.shape)) ``` #### Compute pairwise distances After the computation of the feature representations for both data sets (training and test), we can compute the distances pairwise -- between each pair. Three common distance formula are used: 1. Euclidean distance 2. Manhattan distance 3. Cosine distance While the Euclidean distance is intuitive up to 3 dimensions, it is known to behave not as good in high dimensions where "everything is far away". Comparing the results of the three methods will give a better insight if there is an issue with the distance measurements or not. Let's recall the formalism we use: - horizontal axis: training images [20 personA; 20 personB] - vertical axis: test images [10 personA; 10 personB, 10 personC, 10 personD] ``` hog_distances_eucl = euclidean_distances(X_test_hog, X_train_hog) hog_distances_man = manhattan_distances(X_test_hog, X_train_hog) hog_distances_cos = cosine_distances(X_test_hog, X_train_hog) # logging.debug("DISTANCE BASED ON COSINE - LOG10") # plot_similarity_matrix(np.log10(hog_distances_cos)) logging.info("DISTANCE BASED ON EUCL") plot_similarity_matrix((hog_distances_eucl)) logging.info("DISTANCE BASED ON MANHATTAN") plot_similarity_matrix((hog_distances_man)) logging.info("DISTANCE BASED ON COSINE") plot_similarity_matrix((hog_distances_cos)) ``` #### Analysis on the distance computed From a macroscopic view, the first thing to note is that all the distances give *similar* results. 
By macroscopic, I mean the matrix seen as divided into the eight blocks presented above. This is particularly true when comparing normed distances, on the right side.

> Remember that on the right side, the sum of all values of a line is equal to 100. It helps to figure out, in relative terms, which training images are closest to / furthest from a specific test image, which is what we are actually interested in.

Of course there are differences between the colors represented, but they don't change drastically, and the Euclidean distance seems to give enough granularity to continue using it.

As a lot of matrices have been shown already, let's just plot below only the Euclidean distance, normed per test sample (so, the Euclidean distance as a percentage of the sum of the distances, per test sample).

```
logging.info("Expected Look-alike matrix coloration")
plot_similarity_matrix(test_dist_mtx, show_numbers=True, norm_only=True, width=3, height=3, fontsize=14)

logging.info("Global results of pairwise distances")
hog_eucl_normed = plot_similarity_matrix(hog_distances_eucl, show_numbers = False, norm_only=True, width=10, height = 10, fontsize = 10, return_normed=True)

res=get_distances(hog_distances_eucl)
logging.info("Macroscopic view")
plot_similarity_matrix(res, show_numbers=True, norm_only=True, width=4, height=4)
```

> The last plot (the macroscopic view) is a really macro view of the large matrix. It is read as *"56% of the sum of the distances of all personB test images with respect to all training images regards personA training images, while only 44% regards personB training images. It seems reasonable to assume that personB test images are closer to personB training images than to personA training images"*.

Continuing the analysis of the pairwise distance results, focusing on this normed matrix:
- From the macroscopic view, it complies with expectations for all blocks (test personX - training personY) except those of Jane Levy, personC.
- Test images of personC are overall closer to personB than personA, which is not what we could expect considering personC is a young white female, with a hair color similar to personA's.
- PersonB test images are clearly close to training personB images; personA test images are globally further from their corresponding training images. However, this is fully in line with the classification results we obtained in the previous section for the HOG feature, and the digging we made into the HOG representation. Because of the haircut, a large part of the histogram of personC is actually much more similar to the one of Bradley Cooper (personB).
- At the image level, it is quite obvious that some training images seem (very) far from all others. Visually, it is the case for training set images 0, 18 and 19. This is confirmed when looking at the sum of the (positive) distances over all test examples -- see next plot. It seems those images are not very useful for identification purposes, at least. A careful eye confirms that these are three images belonging to the personA dataset. A conclusion is then: *Based on the pairwise distances between all HOG representations, three images of the personA dataset seem to be of little help in identification tasks.* Note that the threshold on the below graph is chosen purely arbitrarily.
``` if not color: threshold = 65 else: threshold = 65 eucl_dist_vert_sum = np.sum(np.abs(hog_distances_eucl), axis = 0) # Visualization fig=plt.figure(figsize=(8,4)) ax = fig.add_subplot(111) ax.scatter([i for i in range(len(eucl_dist_vert_sum))],eucl_dist_vert_sum) ax.plot([-1,41],[threshold,threshold], '-r') ax.set_title("Sum of the distances to test images, per training images") plt.show() indices = np.where(eucl_dist_vert_sum > threshold)[0] logging.info("Indices of training images > Threshold: " + str(indices)) plot_matrix(X_train[indices,:], color, my_color_map, h=1, w=len(indices)) ``` - The results obtained for Jane Levy are *completely* inline with the results obtained in previous classification task! The reasons behind this *bad* results, as discussed already - and in details in the previous section, mainly rely on the haircut which makes personC's HOG similar to personB's. - Nothing much special about Marc Blucas, personD, despite that the results are as expected: less similar than mean to personA, more similar to personB. ####Hog Transformer parameters As for the classification, the results on the distance are of course dependant on the HOG transformer parameters, and specifically the `pixels_per_cell` or the `cells_per_block` which somehow define the spatial granularity of the representation. We expect that, with a smaller number of pixels per cells as defined, while the real value will change, the overall behavior should remain (macroscopic view). We can try that out, with `pixels_per_cell = (4,4)` for instance, hence *16 times finer* ``` ''' Hog Transformer ''' hogify = HogTransformer( orientations = 9, pixels_per_cell = (4,4), cells_per_block = (2,2), block_norm = "L2-Hys", transform_sqrt = False, multichannel = color) ''' Preparation of the data ''' X_train_hog = hogify.fit_transform(X_train) X_test_hog = hogify.transform(X_test) ''' Pairwise distance computing ''' hog_distances_eucl = euclidean_distances(X_test_hog, X_train_hog) ''' Show the Matrix ''' logging.info("DISTANCE BASED ON EUCL") plot_similarity_matrix((hog_distances_eucl)) ''' Setup the new threshold value (trial-error) ''' if not color: threshold = 570 else: threshold = 570 ''' Compute sum in vertical axis ''' eucl_dist_vert_sum = np.sum(np.abs(hog_distances_eucl), axis = 0) ''' extract indices ''' indices = np.where(eucl_dist_vert_sum > threshold)[0] ''' Plot results ''' # print(eucl_dist_vert_sum) fig=plt.figure(figsize=(8,4)) ax = fig.add_subplot(111) ax.scatter([i for i in range(len(eucl_dist_vert_sum))],eucl_dist_vert_sum) ax.plot([-1,41],[threshold,threshold], '-r') ax.set_title("Sum of the distances to test images, per training images") plt.show() logging.info("Indices of training images > Threshold: " + str(indices)) ''' Show images ''' plot_matrix(X_train[indices,:], color, my_color_map, h=1, w=len(indices)) ``` With such a small number of pixels per cells (details of what happens in the HOG can be found in part 1 of this tutorial), the overal results remain: as expected for all eights but for PersonC's! * The most distant training images are not all the same as before: - 2 Emma Stone images remain the furthest from all test images, - 1 Bradly Cooper image becomes the third furthest. This parameter doesn't actually change a lot: overall, the distribution of the distances doesn't change much. However, it, once more, indicates the locality of the HOG representation, and the impact of the feature representation parameters on the subsequent tasks. 
* Another interesting consequence of this finer HOG is that personC and personD seem "more distant", globally, from the training samples. This is indicated by the reddish lower part of the left-side matrix, above. The results comply with our intuition that personA and personB test images should be "closer" to the training set, as they represent visually the same person.
* Finally, it becomes much clearer, with this finer HOG, that some test images of personA appear further away from the training examples. For instance, images #1 and #9, Emma Stone (vertical axis), don't seem to have any blue parts. This is interesting to put in perspective with the classification results obtained with the HOG representation before, where images 1 and 9 were among the misclassified ones, at least at the first attempt (prior to any optimization).

####Identification of test images using k-NN

In this step, using k-nearest neighbor, the goal is to label the test images according to their nearest neighbours. In a very intuitive way, we could say that the images on the vertical axis of the matrix above should get as label the one of the bluest set of neighbors (either personA (=0) or personB (=1)).

As we are familiar with the pipeline formalism from the previous section, we will implement a pipeline using the k-NN classifier in the HOG data space.

#####Choosing k

**What k-value to take** ? This is often a critical question. We could start with k=1, assuming that, given the small number of samples in the training set, the closest should give the most appropriate label.

**Can we do better ?** Definitely, one of the best ways to go is to assess different values for k using a validation set, performing cross-validation. Although it is a spoiler to the "Impress your TA's" section next, we will use a [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html#sklearn.model_selection.GridSearchCV) technique, focused on the k parameter (together with two `pixels_per_cell` settings), trying out different values.

```
k=1
''' Definition of a pipeline '''
HOG_knn_pipeline = Pipeline([
    ('hogify', HogTransformer(
        orientations = 9,
        pixels_per_cell = (16,16),
        cells_per_block = (2,2),
        block_norm='L2-Hys',
        transform_sqrt=True,
        multichannel = color)
    ),
    ('classify', KNeighborsClassifier(
        n_neighbors = k,
        metric='euclidean')
    )])

param_grid_knn = [ { 'hogify__pixels_per_cell': [(4,4),(16,16)],
                     'classify__n_neighbors':[1,2,3]} ]

grid_search = GridSearchCV(HOG_knn_pipeline,
                           param_grid_knn,
                           cv = 3,
                           n_jobs = -1,
                           scoring = "accuracy",
                           verbose = 1,
                           return_train_score = True)

grid_res = grid_search.fit(X_train, y_train)
```

`GridSearchCV` helps in performing a systematic choice among different (hyper-)parameter values by automatically running the pipeline with those different values, and outputting the best result according to a (specified) metric. Here, I choose the accuracy to indicate the best system. The accuracy is computed using cross validation, and not (!) using the test set.
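To make the comparison between the tested combinations explicit, we can also inspect the full grid search report. This is a small sketch, not part of the original notebook, assuming `grid_res` from the cell above; the column names follow scikit-learn's `cv_results_` convention.

```
''' Hypothetical inspection of the grid search results (assumes grid_res from the cell above) '''
import pandas as pd

cv_results = pd.DataFrame(grid_res.cv_results_)
# Keep only the two hyper-parameters of interest and the cross-validated accuracy
summary = cv_results[["param_hogify__pixels_per_cell",
                      "param_classify__n_neighbors",
                      "mean_test_score",
                      "std_test_score"]]
print(summary.sort_values("mean_test_score", ascending=False).to_string(index=False))
```

Sorting by `mean_test_score` makes it easy to see how much of the gain comes from k and how much from the HOG granularity, instead of only looking at `best_params_`.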
#####Prediction and metrics

```
logging.info("Best parameters: " + str(grid_res.best_params_))
logging.info("Best scores: " + str(100* grid_res.best_score_ ) + "% ")

best_prediction_ab = grid_res.predict(X_test_ab)
logging.info('Percentage correct persons A, B: ' + str(100*np.sum(best_prediction_ab == y_test_ab)/len(y_test_ab)))
best_prediction_cd = grid_res.predict(X_test_cd)
logging.info('Percentage correct persons C, D: ' + str( 100*np.sum(best_prediction_cd == y_test_cd)/len(y_test_cd)))
```

```
show_missed(X_test_ab, y_test_ab, best_prediction_ab)
show_missed(X_test_cd, y_test_cd, best_prediction_cd)
```

As foreseen by the distance matrix shown, the accuracy is very high for A and B, and personC is the most difficult to identify. A key message from here is that the results also vary with the hyperparameters.

#####Closest and Furthest nearest neighbors

After this optimization step on the hog transformer and k-NN hyperparameter selection, we can:
- recompute the pairwise distances and plot the resulting matrix, similarly as before,
- show the two closest images,
- show the neighbors with the largest distance in-between.

```
''' get hog transformed for X_test and X_train '''
X_test_hog = grid_res.best_estimator_['hogify'].transform(X_test)
X_train_hog = grid_res.best_estimator_['hogify'].transform(X_train)

''' Compute pairwise distances '''
hog_distances_eucl = euclidean_distances(X_test_hog, X_train_hog)

''' plot matrix '''
plot_similarity_matrix(hog_distances_eucl)

''' Using classifier kNN, get closest neighbours list '''
closest_neighbors = grid_res.best_estimator_['classify'].kneighbors(X_test_hog, 10)

logging.info("\nImage # => Image Training idx @ distance \n")
for i in range(closest_neighbors[1].shape[0]):
    print("Image Test " + str(i) + " => closest neighbor: Image Training " + str(closest_neighbors[1][i,0]) + " @ " + str(np.round(closest_neighbors[0][i,0],3)))
```

From the results printed above, where the closest image is shown for every test image, an interesting point to note is the min and max values of the closest-neighbor distances:
- Image Test #17 => very close to #38, 0.776
- Image Test #35 => (not so) close to #19, 1.591

We clearly see that the distance information gives a hint on the certainty of the identification: the distance is very low for test image #17, and high for test image #35. This indicates that this personD test image is actually pretty far from everything (considering the euclidean distance). Looking at the images, it appears clear for a human why those are considered the closest, and why the images appear much closer for personB (Bradley Cooper, on top) than for Marc Blucas, personD (bottom line).

```
''' Closest nearest neighbors of all '''
plot_matrix([X_test[17,:],X_train[38,:]], color, my_color_map, h=1, w=2)
show_one_image_hog(17-10, personB, 'test')
show_one_image_hog(38-20, personB, 'training')
```

- personB: same pose, same scale, same view point, same face expression... The HOG will be similar!

```
''' Furthest nearest neighbors of all '''
plot_matrix([X_test[35,:],X_train[19,:]], color, my_color_map, h=1, w=2)
show_one_image_hog(35-30, personD, 'test')
show_one_image_hog(19, personA, 'training')
```

- The personD image has a non-nominal face pose, and almost a quarter of the image on the right side is the background, leading to a vertical edge "in the middle".
Without surprise, this makes personD's image far from everything else in terms of HOG representation, and the closest image is determined by this face pose: the right eye and the vertical line are the most important elements of the representation. As a result, while the distance is larger than in the previous case (analysis of personB), an Emma Stone image becomes the closest to Marc Blucas' image.

###PCA feature descriptors

We can repeat the steps performed for the HOG feature representation, for the PCA feature representation. The same formalism is used.

####Pre-process data

As usual now, and because of our current architecture, it's necessary to re-import the data using the `flatten = True` parameter.

```
X_train = get_matrix_from_set(training_set, color, sq_size = sq_size, flatten = True)
X_test = get_matrix_from_set(test_set, color, sq_size = sq_size, flatten = True)

y_train = np.zeros((40,))
y_train[20:40] = 1

''' For now, set up "0" for personC; "1" for personD ! '''
y_test = np.zeros((40,))
y_test[10:20] = 1
y_test[30:40] = 1

''' AB >< CD '''
X_test_ab = X_test[0:20,:]
y_test_ab = y_test[0:20]
X_test_cd = X_test[20:40,:]
y_test_cd = y_test[20:40]
```

We can visualize - once more - the different sets.

```
logging.info("Training set (horizontal axis):")
logging.info("Training set PersonA [0 -> 19]:")
plot_matrix(X_train[0:20,:], color, my_color_map, h=1, w=20, transpose = False)
logging.info("Training set PersonB [20 -> 39]:")
plot_matrix(X_train[20:40,:], color, my_color_map, h=1, w=20, transpose = False)

logging.info("Test set (vertical axis):")
logging.info("Test set PersonA [0 -> 9]:")
plot_matrix(X_test[0:10, :], color, my_color_map, h=1, w=10, transpose = False)
logging.info("Test set PersonB [10 -> 19]:")
plot_matrix(X_test[10:20, :], color, my_color_map, h=1, w=10, transpose = False)
logging.info("Test set PersonC [20 -> 29]:")
plot_matrix(X_test[20:30, :], color, my_color_map, h=1, w=10, transpose = False)
logging.info("Test set PersonD [30 -> 39]:")
plot_matrix(X_test[30:40, :], color, my_color_map, h=1, w=10, transpose = False)
```

####Compute pairwise distances

```
pcaify = None
pcaify = sklearn_decomposition_PCA(n_components = 35)
X_train_PCA = pcaify.fit_transform(X_train)
X_test_PCA = pcaify.transform(X_test)

# pca_distances_cos = cosine_distances(X_test_PCA, X_train_PCA)
pca_distances_eucl = euclidean_distances(X_test_PCA, X_train_PCA)
# plot_similarity_matrix((pca_distances_cos))
plot_similarity_matrix((pca_distances_eucl))
```

####Analysis on the distances computed

```
logging.info("Expected look-alike macroscopic coloration")
plot_similarity_matrix(test_dist_mtx, show_numbers=True, norm_only=True, width=3, height=3, fontsize=14)

res=get_distances(pca_distances_eucl)
logging.info("Macroscopic view of the pairwise distances coloration")
plot_similarity_matrix(res, show_numbers=True, norm_only=True, width=4, height=4)
```

From a macroscopic view, personA and personB seem to confirm the expected behavior, specifically using the normed plot (on the right), or even better, the tiny macro representation just above. In this case of the PCA feature representation, it indicates that the variance in the personA and personB test images is explained in a similar fashion to the variance of (one or more) training images.

> The formalism behind the different plots was already explained above and is not repeated here. Please read the previous sections again if needed.
Similarly to the analysis performed on the classification task, the results differ notably for personC and personD with respect to the HOG feature representation:
- Here, some personC images seem quite similar to personA, and some personD images seem quite similar to personB, without a clear indication that personC is globally much further from personA than personD is from personB. Said differently, there may be some personC images close to personA. This wasn't really the case in the previous feature representation using HOG.
- personD test images don't look that close to personB training images "anymore", as was the case for the HOG representation. Considering the remarks made already, and the care needed to interpret this "average distance", it does not directly follow that the results of the identification using k-NN will be worse for personD than with the HOG feature.

---

Another point, maybe more surprising -- but enlightening regarding the differences between the two feature representations -- is the training images that are the most dissimilar to the test images. As we did before, let's sum the distances vertically, and show which image is "globally" the furthest.

> Of course, this metric has its limitations: the sum of the distances may suffer from one very, very high distance (or conversely, benefit from one very, very close one...). But this is quite precisely what we wish to show, and even if there are some pitfalls, it's a convenient way to illustrate our point.

```
if color:
    threshold = 355000
else:
    threshold = 200000

eucl_dist_vert_sum = np.sum(np.abs(pca_distances_eucl), axis = 0)
# print(eucl_dist_vert_sum)
fig = plt.figure(figsize=(8,4))
ax = fig.add_subplot(111)
ax.scatter([i for i in range(len(eucl_dist_vert_sum))], eucl_dist_vert_sum)
ax.plot([-1,41], [threshold,threshold], '-r')
ax.set_title("Sum of the distances to test images, per training images")
plt.show()

indices = np.where(eucl_dist_vert_sum > threshold)[0]
logging.info("Indices of training images > Threshold: " + str(indices))
plot_matrix(X_train[indices,:], color, my_color_map, h=1, w=len(indices))
```

- The image that is the furthest from all test images is different from the one found with the previous feature representation. Note that the threshold is arbitrarily chosen. It means that its combination of eigenface weights is the most distant from all the combinations of the test images. The way the variance is explained in this #20 training image is different from the way it is explained in any test image.

####Identification of test images using k-NN

In this step, using k-nearest neighbor, the goal is to label the test images according to their nearest neighbours. In a very intuitive way, we could say that the images on the vertical axis of the matrix above should get as label the one of the bluest set of neighbors (either personA (=0) or personB (=1)).

#####Choosing k

As discussed in the HOG-based identification, choosing a right number for k usually implies different tests on a validation set. That's what we did previously. In this PCA-based identification task, we will see another way, more intuitive yet still sensible.

When plotting the matrix of distances, it appears with a lot of colors and it's not easy to determine what k would really be most appropriate. The question is "what number k is a good trade-off so that most of the images seem to be identified appropriately". Some thoughts:
- k should not lead to a too noisy identification,
- k should be kept small,
- k should be such that only distances that **matter** are taken into account.

######What does it mean ?
Described differently, we should select k such that only what seems to be really relevant drives the choice. Let's say we consider a sample and its neighbours. The first 2 neighbours are really close and indicate "class 0", and the three next neighbours are actually all much further - yet closer than the remaining samples - and indicate "class 1". k should not take into account (too much at least) neighbours 3, 4 and 5, or the sample could be identified as "class 1" while *obviously* it should have been "class 0" thanks to the two closest.

######How to do that ?

Intuitively, we could look at the matrix above and evaluate "by eye" the number of "bluest" neighbours, on average. A little subtlety, however: what we are really interested in is the order of magnitude of the distance, hence we take the logarithms of the distances to better represent this order of magnitude visually.

######On what data set should we do that ?

Well, definitely not on the test set. As we don't have many input samples, and as this mostly constitutes an intuition and not a rigorous approach, we decide to follow this approach on the full training set, instead of a validation set.

```
dist_training_training = euclidean_distances(X_train_PCA, X_train_PCA)
for i in range(dist_training_training.shape[0]):
    dist_training_training[i,i] = max(dist_training_training[i,:])

logging.info("Pairwise distance between training images\n(diagonal = max distances, for visualization)")
plot_similarity_matrix((dist_training_training), False, norm_only=True, cmap=plt.cm.jet)

logging.info("Pairwise distance in log scale between training images\n(diagonal = max distances, for visualization)")
plot_similarity_matrix(np.log(dist_training_training), False, norm_only=True, cmap=plt.cm.jet)
```

The first matrix shows the distances in regular scale, the second the distances in log scale. It gives a better intuition about the scale of the different distances. Following this last plot, a parameter $k=1$ or $k=2$ seems an acceptable choice: most of the images seem to be closest (darkest blue) to at least one other corresponding image.

Of course, it could be argued that this doesn't seem mathematically rigorous and reliable. I agree, and this is why cross validation was used in the previous section. Nonetheless, I have presented here another, efficient way of selecting this parameter.
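To put a rough number on this visual intuition, we can check, for a few candidate values of k, how often the k nearest training neighbours of a training image actually share its label. This is a hypothetical sanity check, not in the original notebook, assuming `X_train_PCA` and `y_train` from the cells above.

```
''' Hypothetical sanity check for the choice of k (assumes X_train_PCA and y_train) '''
from sklearn.neighbors import NearestNeighbors

# n_neighbors=3: the first neighbour of a training sample is itself, so we drop column 0 below
nn = NearestNeighbors(n_neighbors=3, metric="euclidean").fit(X_train_PCA)
_, idx = nn.kneighbors(X_train_PCA)

for k in [1, 2]:
    same_label = (y_train[idx[:, 1:k+1]] == y_train[:, None]).all(axis=1)
    print("k =", k, "->", np.round(100 * same_label.mean(), 1),
          "% of training images have only same-label neighbours")
```

If the percentage stays high for k=1 and k=2 and drops afterwards, that supports the "by eye" choice made above.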
#####Prediction

First, we use the gridsearch method again to try out some combinations, and then we predict the labels of the test images.

```
k=2
''' Definition of a pipeline '''
PCA_knn_pipeline = Pipeline([
    ('pcaify', sklearn_decomposition_PCA(n_components=35)
    ),
    ('classify', KNeighborsClassifier(
        n_neighbors = k,
        metric='euclidean')
    )])

param_grid_knn = [ { 'pcaify__n_components': range(15,40,1),
                     'classify__n_neighbors':[1,2,3]} ]

grid_search = GridSearchCV(PCA_knn_pipeline,
                           param_grid_knn,
                           cv = 2,
                           n_jobs = -1,
                           scoring = "accuracy",
                           verbose = 1,
                           return_train_score = True)

''' "Training" '''
grid_res = grid_search.fit(X_train, y_train)

logging.info("Best parameters : " + str(grid_res.best_params_))
logging.info("Best scores (CV): " + str(100* grid_res.best_score_ ) + "% ")
```

Now, we are ready to use the considered "best" pipeline in order to predict the labels:
- $p=18$, number of components for the PCA transformer,
- $k=1$, number of neighbors to label the test sample.

```
''' prediction '''
best_prediction_ab = grid_res.predict(X_test_ab)
logging.info('Percentage correct persons A, B: ' + str(100*np.sum(best_prediction_ab == y_test_ab)/len(y_test_ab)))
best_prediction_cd = grid_res.predict(X_test_cd)
logging.info('Percentage correct persons C, D: ' + str( 100*np.sum(best_prediction_cd == y_test_cd)/len(y_test_cd)))
```

```
''' Visualization of the images '''
logging.info("Tests - Identification of personA and personB")
show_missed(X_test_ab, y_test_ab, best_prediction_ab)
logging.info("\n"*2)
logging.info("Tests - Identification of personC and personD")
show_missed(X_test_cd, y_test_cd, best_prediction_cd)
```

We have reused the `GridSearchCV` technique, for the number of neighbours $k$, but also for the number of components $p$ used in the PCA representation.

* For instance, good accuracy results can already be achieved with other parameters:
  * $p=5$, $k=1$: test AB: 90%, test CD: 60% (cv = 2)
  * $p=20$, $k=1$: test AB: 100%, test CD: 65% (cv = 2)

Those parameters are found with GridSearch on larger batches. The considered best is the couple $(p,k) = (18,1)$, leading to test AB: 100% and test CD: 65%. As a reminder, "considered best" is **not** with respect to the accuracy on the test results, but during the cross validation step. It is important to note (again :) ) that, the number of training samples being limited, cross validation may not deliver its full potential.

With parameters $k=1$ and $p=18$, corresponding to the distances shown before, the identification results correspond to the expectation:
- high accuracy on personA and personB,
- much lower accuracy on personC and personD, which are further from everything, in our case. Nonetheless, the result is better than guessing - luckily!

Comparing the results to the classification ones, for personA and B there is no difference. For personC and D, the misclassified images were indices [2, 3, 6, 7, 15, 16, 19]. In this identification step, the errors in labeling (based on 1 (!) neighbour and 18 components) are [0, 3, 5, 6, 7, 12, 15]. They don't fully match, but we could still say that what is hard to identify tends to be hard to classify as well.

#####Closest and Furthest nearest neighbors

As for the previous section, we can show with this feature representation the closest and furthest nearest neighbors.
``` ''' get PCA transformed for X_test and X_train ''' X_test_PCA = grid_res.best_estimator_['pcaify'].transform(X_test) X_train_PCA = grid_res.best_estimator_['pcaify'].transform(X_train) ''' Compute pairwise distances ''' pca_distances_eucl = euclidean_distances(X_test_PCA, X_train_PCA) ''' plot matrix ''' plot_similarity_matrix(pca_distances_eucl) ''' Using classifier kNN, get n closest neighbours list ''' n=10 closest_neighbors = grid_res.best_estimator_['classify'].kneighbors(X_test_PCA, n) logging.info("\nImage # => Image Training idx @ distance \n") for i in range(closest_neighbors[1].shape[0]): print("Image Test " + str(i) + " => closest neighbor: Image Training " + str(closest_neighbors[1][i,0]) + " @ " + str(np.round(closest_neighbors[0][i,0],3))) distance_max = np.max(closest_neighbors[0][:,0]) distance_max_index_test = np.where(closest_neighbors[0][:,0] == distance_max)[0][0] distance_max_index_train = closest_neighbors[1][distance_max_index_test,0] distance_min = np.min(closest_neighbors[0][:,0]) distance_min_index_test = np.where(closest_neighbors[0][:,0] == distance_min)[0][0] distance_min_index_train = closest_neighbors[1][distance_min_index_test,0] print("Largest \"closest distance\" is: " + str(np.round(distance_max,0)) + " between (test image,training image) = (" + str(distance_max_index_test) + ", "+str(distance_max_index_train) + ")" ) print("Smallest \"closest distance\" is: " + str(np.round(distance_min,0)) + " between (test image,training image) = (" + str(distance_min_index_test) + ", "+str(distance_min_index_train) + ")" ) ``` ``` ''' Closest nearest neighbors of all ''' plot_matrix([X_test[distance_min_index_test,:],X_train[distance_min_index_train,:]], color, my_color_map, h=1, w=2) ``` ``` ''' Furthest nearest neighbors of all ''' plot_matrix([X_test[distance_max_index_test,:],X_train[distance_max_index_train,:]], color, my_color_map, h=1, w=2) ``` This constitutes a surprising result: - the nearest neighbours with the shortest distance, using the PCA input, is definitely not what a human eye would have guessed: two different persons. It means that the variance from the mean image is explained in both images in a similar fashion. This result is confirmed by the darkest blue point of the normed distance matrix. - the nearest neighbors having the largest distance is -- without surprise now -- between Jane Levy and Emma Stone. "Luckily", it yet gives the correct labeling result, but it also tends to indicate that the test image is quite different, PCA-feature wise, from the others. We could go on and look for other funny things out from those numbers, such as the test image which is the furthest from any training image. ###Identification - Conclusion In this identification task, we computed a similarity score -- the euclidean distance -- pairwise. Using the distance from a test image to a training image, we labeled the test images according to $k$ neighbors. We confirmed the role of different parameters for both methods, and the results and trends on those results that we could already observed in the previous tasks. We also saw that the labeling using k-NN was at least "as easy" as the classification task (in terms of performance reached), specifically for personC and personD. 
Of course, this is not the end of the story, and many more things could be achieved:
- analyzing in more depth the influence of the parameters of the feature representations,
- for each test image, assessing the distribution of the distances to all other images,
- within a class, assessing the distance between each pair of images
  * this would be linked to the notion of clusters,
- assessing if there is a "predominant" image, an image that is close to many others
  * this would actually be the reverse of what we did when we observed which images were the furthest from all,
- ...

There is no exhaustive list of what can be done.

## Impress your TA's

> As said at the beginning of the classification section, I was normally exempted from performing the classification part but decided to do it anyway, out of personal interest and eagerness to learn. I hope it will also help in impressing my TA's.

During the sections on classification and identification, a lot of work has been done to define a clean way to perform the tasks, using the `Transformer` and `Pipeline`. Also, the notion of `GridSearchCV` was introduced and used in the identification task to help choose the $k$ parameter with a reliable method.

However, until now, the performance of the classification is "behind" that of the identification, specifically regarding the HOG representation. In this section, we will try to improve the performance of the HOG-based classification and of the PCA-based classification, using the *gridsearch*. We will also try to better understand the classification results, specifically those of the HOG classifier, using two template images that we create on purpose.

> This part has been mostly done using `color = False` and `sq_size = 64`. There can be differences in the results if those parameters are changed.

####HOG Classification - Optimization

The results obtained above in [this cell](https://colab.research.google.com/drive/1OYq1-SZZURJ5uujmf3PTqdEx3SDGAqZQ#scrollTo=mkYFiPjIKHan&line=2&uniqifier=1) are not so bad for a first attempt, but it's worth assessing whether we can do better!

> better: obtain better **accuracy** results on the test set. Of course, we don't want to tailor the parameters **for** the test set images.

In order to optimize the results of the classification, we will first implement a **systematic** way of searching for optimal parameters on the training set, using [`GridSearchCV`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html):
- intrinsic cross-validation (the training set is successively split into different subsets to allow cross validation),
- automatic parameter testing, according to the given sequence of possibilities to test.

This is therefore much less of a "manual" process: the system automatically goes through all possible parameter combinations, and establishes the metrics of the tested models using the cross validation sets.

To perform this *gridsearch*, we reimplement the same steps as before.
We speedup a little the process by retrieving the backup variables we set in the previous steps ``` ''' Make sure of the inputs ''' X_train = X_train_HOG_shuffled_back.copy() y_train = y_train_HOG_shuffled_back.copy() X_test_ab = X_test_HOG_ab_back.copy() y_test_ab = y_test_HOG_ab_back.copy() X_test_cd = X_test_HOG_cd_back.copy() y_test_cd = y_test_HOG_cd_back.copy() ''' Definition of a pipeline ''' HOG_pipeline = Pipeline([ ('hogify', HogTransformer( orientations = 9, pixels_per_cell = (8,8), cells_per_block = (2,2), block_norm='L2', transform_sqrt=True, multichannel = color) ), ('classify', SGDClassifier( random_state = 42, max_iter = 1000, tol=1e-3) )]) ''' Definition of the parameters grid ''' param_grid_HOG = [ { 'hogify__orientations': [9], 'hogify__cells_per_block': [(2, 2),(3, 3)], 'hogify__pixels_per_cell': [(4,4),(8, 8),(16, 16)], 'hogify__block_norm': ['L2-Hys','L1', 'L2'], 'hogify__transform_sqrt': [False, True], 'hogify__multichannel':[color], 'classify': [ SGDClassifier(random_state=42, max_iter=1000, tol=1e-5), svm.SVC(kernel='linear', C=0.1), svm.SVC(kernel='linear', C=1)]} ] ''' Creation of the object GridSearch ''' grid_search_HOG = GridSearchCV( HOG_pipeline, param_grid_HOG, cv = 4, n_jobs = -1, scoring = "accuracy", verbose = 1, return_train_score = True) ''' Train the model, trying out the combination ''' grid_res_HOG = grid_search_HOG.fit(X_train, y_train) # print(grid_res_HOG.best_estimator_) print("\n"*3) print("=="*40) print("Best accuracy score (cross validation): " + str(100*grid_res_HOG.best_score_) + " %") print("Summary of the search best parameters:") print("orientations = ", grid_res_HOG.best_params_['hogify__orientations']) print("cells_per_block = ", grid_res_HOG.best_params_['hogify__cells_per_block']) print("pixels_per_cell = ", grid_res_HOG.best_params_['hogify__pixels_per_cell']) print("block_norm = ", grid_res_HOG.best_params_['hogify__block_norm']) print("transform_sqrt = ", grid_res_HOG.best_params_['hogify__transform_sqrt']) print("classifier = ", grid_res_HOG.best_params_['classify']) print("=="*40) ``` The best parameters found according to our *gridsearch* lead to a major change in: - `pixels_per_cell`: leading to a less finer cell definition. Spacially, it leads to a wider low-pass filtering. This was described in the first part of this notebook, - `transform_sqrt`: which, according to the authors of the base paper, needs to be tried out experimentally to "decide", - `block_norm`: which, similarly, needs to be tested on cross validation set before being adopted. The best classifier, among the ones tested, remain the SGD used already in previous sections. We can now simply used the *best* estimator found by the *gridsearch* to perform prediction, and visualize the results (both accuracy and images themselves). 
```
''' predict AB '''
best_prediction_ab = grid_res_HOG.predict(X_test_ab)
logging.info("Percentage correct : " + str(100*np.sum(best_prediction_ab == y_test_ab)/len(y_test_ab)))
show_missed(X_test_ab, y_test_ab, best_prediction_ab)
my_plot_confusion_matrix(grid_res_HOG, X_test_ab, y_test_ab, ["Emma Stone", "Bradley Cooper"])

''' predict CD '''
best_prediction_cd = grid_res_HOG.predict(X_test_cd)
logging.info("Percentage correct : " + str(100*np.sum(best_prediction_cd == y_test_cd)/len(y_test_cd)))
show_missed(X_test_cd, y_test_cd, best_prediction_cd)
my_plot_confusion_matrix(grid_res_HOG, X_test_cd, y_test_cd, ["Jane Levy", "Marc Blucas"], ["Emma Stone", "Bradley Cooper"])
```

After this optimization pass, we clearly see that the best classification result is 100% for personA and personB. This is much better than what we had originally (80%). This confirms that, after optimization, we cannot (at least without deeper analysis) state that one feature representation is intrinsically better than the other. Most likely, it depends on other metrics as well.

Because of the new hyper-parameters, the decision boundary changes such that more personC and personD images are misclassified. This is actually not surprising, as the hyperparameters are tailored against cross-validation sets, which contain only personA and personB images. There is no cross-validation using personC or personD images. Their classification results are therefore not expected to improve. In our specific case, we even see that the accuracy in classifying C and D is basically that of a random classifier.

Looking at the confusion matrix, it clearly appears that personC is the one (mainly) responsible for such a bad score. This behavior is exactly the one already discussed in the previous section, related to HOG locality and the *particular* descriptor of personA, with the oblique hair.

These optimization results also open up the question of parameter tuning with regard to the ultimate goal of the system being designed: how do we want the system to perform on a test set containing personC and personD images?

####PCA Classification - Optimization

While the results obtained earlier using the PCA feature representation are already very good, with 100% accuracy on the test sets of personA and personB, we can use the *gridsearch* technique to test different numbers of principal components taken into account, or even to try out different classifiers.
``` ''' Make sure of the inputs ''' X_train_PCA = X_train_PCA_shuffle_back.copy() y_train_PCA = y_train_PCA_shuffle_back.copy() X_test_PCA_ab = X_test_PCA_ab_back.copy() X_test_PCA_cd = X_test_PCA_cd_back.copy() y_test_PCA_ab = y_test_PCA_ab_back.copy() y_test_PCA_cd = y_test_PCA_cd_back.copy() ''' Set up a grid of parameters to test ''' param_grid_PCA = [ { 'pcaify__n_components': range(10,41,1), 'classify': [ SGDClassifier(random_state=42, max_iter=10000, tol=1e-5), svm.SVC(kernel='linear', C=1e-5, tol=1e-5 ), svm.SVC(kernel='linear', C=0.1, tol=1e-5 ) ]} ] ''' create the GridSearchCV object ''' grid_search_PCA = GridSearchCV(PCA_pipeline, param_grid_PCA, cv = 5, n_jobs = -1, scoring = "accuracy", verbose = 1, return_train_score = True) grid_res_PCA = grid_search_PCA.fit(X_train_PCA, y_train_PCA) # logging.info("Best Score :" + str(grid_res_PCA.best_score_)) # logging.info("Best Parameters found :") # logging.info(grid_res_PCA.best_params_) print("\n"*3) print("=="*40) print("Best accuracy score (cross validation): " + str(100*grid_res_PCA.best_score_) + " %") print("Summary of the search best parameters:") print("n_components = ", grid_res_PCA.best_params_['pcaify__n_components']) print("classifier = ", grid_res_PCA.best_params_['classify']) print("=="*40) ``` Based on the 5-fold-cross-validation, the best accuracy score is 100%, and more interestingly, the number of components is now only 15, when using another classifier, a linear support vector machine with a very small regularization parameter (leading actually to a really strong regularization, see [doc](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html). We can now use the "best found" pipeline to predict the class of our persons. ``` ''' predict AB ''' best_prediction_ab = grid_res_PCA.predict(X_test_PCA_ab) logging.info("Percentage correct : " + str(100*np.sum(best_prediction_ab == y_test_PCA_ab)/len(y_test_PCA_ab))) show_missed(X_test_PCA_ab, y_test_PCA_ab, best_prediction_ab) my_plot_confusion_matrix(grid_res_PCA, X_test_PCA_ab, y_test_PCA_ab,["Emma Stone", "Bradley Cooper"]) ''' predict CD ''' best_prediction_cd = grid_res_PCA.predict(X_test_PCA_cd) logging.info("Percentage correct : " + str(100*np.sum(best_prediction_cd == y_test_PCA_cd)/len(y_test_PCA_cd))) show_missed(X_test_PCA_cd, y_test_PCA_cd, best_prediction_cd) my_plot_confusion_matrix(grid_res_PCA, X_test_PCA_cd, y_test_PCA_cd,["Jane Levy", "Marc Blucas"] ,["Emma Stone", "Bradley Cooper"]) ``` After this optimization pass, we clearly see that the best classification result remain 100%, without surprise, for personA and personB. Unlike the HOG-based classifier, the accuracy result on personC and D hasn't changed, and what's more, the samples misclassified have not changed. It tends to indicate that the eigenfaces after 15 (excluded) are not that useable for the classifier to differentiate the classes. Similarly to what was said for the HOG classifier optimization, the fact the accuracy does not improve for personC and personD is expected: the hyper-parameters are selected against crossvalidation, hence containing only personA and personB images. When looking closer, it appears that - once again - it's the performance on personC that penalized overall accuracy on C and D: the classifier acts as a random classifier for personC. Also there, we discussed already largely this behavior in the previous section. With this optimized pipeline, we reach the same accuracy on personA and personB test set with only 15 components, instead of 35 before. 
This is a nice result, as it means less computing in the end.

####Understanding our classifiers

Without going much deeper into the analysis, it may be useful to illustrate once more the differences between the classifiers (HOG-based vs PCA-based).

- HOG: For an image to be classified as personA, it is much easier if it has, in HOG representation, the oblique line characteristic of personA's haircut. This indicates at least two things:
  * the sensitivity of the classifier to "edge details" of the image. In this case, it is the haircut. It could also be, for instance, a shirt collar, a beard, ... These are visual characteristics and are deeply embedded in the HOG. This is useful sometimes... but it also may not be highly reliable, as we will see in the example below: a simple image with an oblique line.
  * the lack of variability of the training set images. That is, recognizing Emma Stone is almost reduced to recognizing this oblique line. Similarly, classifying as Bradley Cooper is reduced to recognizing a vertical line on the right side, thanks to his hair shape, forehead and face shape.

To illustrate this, let's just create two images, using the function created in the first part of this notebook.

```
''' Construction of a new set, with two template images '''
personA_template = create_image(64, 64, special="personA")
personB_template = create_image(64, 64, special="personB")

new_set={}
new_set[personA]=[personA_template]
new_set[personB]=[personB_template]
new_set[personC]=[]
new_set[personD]=[]

''' pre-processing '''
new_X_HOG = get_matrix_from_set(new_set, color, sq_size, flatten=False)

''' visualization '''
plot_matrix(new_X_HOG, color, my_color_map, h=2, w=2)

''' Prediction using best gridsearch output '''
res_HOG = grid_res_HOG.predict(new_X_HOG)
logging.info("First image labeled as class: " + str(res_HOG[0]).split(".")[0])
logging.info("Second image labeled as class: " + str(res_HOG[1]).split(".")[0])
```

We see that -- as expected -- the first image with the oblique line is classified as "0", Emma Stone, and the second is classified as "1", Bradley Cooper.

- PCA: To better understand the PCA, we need to look at the eigenfaces, and at the variance that each of the eigenfaces can explain. In the classification task, we covered in depth the rotation of an image of personC. The variance introduced by the rotation could not be well explained, leading to a classifier actually assigning the wrong class. We saw that manually reducing this variance (= rotating the image back) led to a correct (= expected) result from the classifier. As already discussed, rotation isn't the only thing affecting the PCA-based classifier results (background, lighting, ...).

For the sake of completeness, it is interesting to show the results of the PCA-based classification on the two *template* images created above for the HOG feature.

```
''' pre-processing '''
new_X_PCA = get_matrix_from_set(new_set, color, sq_size, flatten=True)

''' Visualization '''
plot_matrix(new_X_PCA, color, my_color_map, h=2, w=2)

''' Prediction '''
res_PCA = grid_res_PCA.predict(new_X_PCA)
logging.info("First image labeled as class: " + str(res_PCA[0]).split(".")[0])
logging.info("Second image labeled as class: " + str(res_PCA[1]).split(".")[0])
```

Clearly, this is proof that both classifiers don't work the same way. Classifying from a linear combination of the eigenfaces, the **same** class is given to both "template" images, while they are visually really different.
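We can push this a step further and look at the two templates directly in the optimized PCA space. This is a hypothetical follow-up, not in the original notebook, assuming `new_X_PCA` from the cell above and the fitted `grid_res_PCA` pipeline.

```
''' Hypothetical follow-up: inspect the two templates in the optimized PCA space '''
templates_in_pca_space = grid_res_PCA.best_estimator_['pcaify'].transform(new_X_PCA)

# Distance between the two templates once projected on the retained eigenfaces
logging.info("Distance between the two templates in PCA space: "
             + str(np.round(euclidean_distances(templates_in_pca_space)[0, 1], 1)))

# Weights of the first few eigenfaces for each template: similar weight patterns help explain
# why a linear classifier in this space can end up giving them the same label
logging.info("First 5 PCA weights, template 1: " + str(np.round(templates_in_pca_space[0, :5], 1)))
logging.info("First 5 PCA weights, template 2: " + str(np.round(templates_in_pca_space[1, :5], 1)))
```

If the two templates sit close together in this space, the eigenface decomposition simply does not "see" the oblique line that drives the HOG classifier.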
While PCA-based and HOG-based classifiers don't work the same way, the interpretation of misclassification may be easier (at least in my opinion, based mostly on this project) with the HOG feature representation, which is essentially based on the edges in an image.

####Other optimizations

The results in classification and identification are not that bad, with 100% on the personA and personB test sets once optimized. However, we may want to robustify our predictions and include much more data. It is possible to "artificially" generate new data without too much effort. Indeed, we saw an example of a rotation (when understanding better the classification based on PCA).

What we may be interested in is populating our current data set simply by generating new images:
- rotating every image several times with different angular values,
  * using different rotation centers,
  * filling the rotated images with background pixels of different colors (color levels),
- translating every image by several amounts of pixels, in different directions,
- adding some energy to the image, particularly for PCA which is not robust to lighting,
- hiding some parts of the images.

This will quickly increase the amount of data, and robustify the classification results (even if the result is already very good). One shall however make sure not to fall into overfitting when training the classifier.

Last point: we explained previously why we would not scale the data - that is, we would not rescale the variance to the [0, 1] range. In the context of getting the very best out of our data, we should repeat the experiment with the scaling implemented, and confirm the resulting behavior.

##Discussion

CONGRAT's !! You are at the end of this tutorial and if you've reached this step, you most likely understand everything related to HOG, PCA, Classification and Identification.

A lot has already been said overall in this tutorial - we'll try to wrap this up.

###Summary of the activities

This tutorial was quite long, and it's even possible we forgot what we did... So let's refresh our memory!

1. we retrieved the data, carefully and really randomly. This leads to a training set of faces from personA (20) and personB (20), and a test set from personA (10) and personB (10), but also personC (10) and personD (10).
2. we spent quite some time on feature representation constructions:
 * HOG, where we detailed precisely how to compute the gradients and the histogram (we even built several toy images to understand the very essence of the Histogram). We learnt what a cell is, what a block is, how to compute the normalization, ... and the influence of all these (hyper-)parameters. A homemade class was coded to reproduce the library results.
 * PCA, where we also went through the maths and saw three different ways to compute the principal components. We also compared results on examples with library results, and we detailed the effect of the number of components chosen in terms of error. In particular, we spent some time on analyzing the **explained variance**, **cumulative energy**, and **optimal number of principal components**. To give a bit of insight, we represented those quite high-dimensional features in 2D space using the t-SNE tool.

All of that was done first, with the ultimate goal of building systems for classification and for identification.

3. we built a classification system, first based on the HOG feature representation, then on the PCA feature representation.
- we learnt progressively about:
  * the required pre-processing steps, including the building of the features,
  * the pipeline architecture, using Transformers,
  * how to deal with python libraries, and specifically `sklearn` functions,
- the results were deeply analyzed:
  * HOG: we dug into the HOG representation of misclassified images,
  * PCA: we succeeded in modifying (according to a well-established plan :-) ) an input image such that it could pass the classification tests.

In particular, we saw that it appears easier to interpret the results from HOG classification than from PCA classification. Also, without any optimization per se, the PCA-based classifier reached higher accuracy scores than the HOG-based classifier on the test set.

4. we built an identification system, first based on the HOG feature representation, then on the PCA feature representation.
- before this, we detailed the formalism to study the metric: the euclidean distance between the representations,
- we saw that, comparing different distances (manhattan, cosine, ...), the euclidean distance was giving appropriate results,
- we showed, using a colorful matrix representation, that indeed, from a macroscopic perspective, personA test images were closer to personA training images, and respectively for personB. However, we saw funny things for personC and personD.
- with regard to the classification results obtained before, the distances computed confirmed what we had: personC is very far from personA in terms of the HOG feature, while it's more "fuzzy" for the PCA feature. This is a very nice take-home message: all the representations don't lead to the same results, so there really is "engineering" behind such systems.
- using a k-NN classifier, we assigned a label to personA, personB, personC and personD test images, and discussed the results.
  * To assign this label, we needed to define the k parameter: the number of neighbors that would participate in the labeling decision. We decided not to go for a fancy weighted model, but rather to use
    * a gridsearch technique, which is already an optimization of that kind, using cross validation,
    * an intuitive technique, where the distances are shown in colors and, using a logarithmic scale, only a few neighbors seem of interest to label the image.
  * We confirmed the classification results in the sense that it's often "easier" for personA and personB than for personC and personD.
- for both representations, we showed the closest and furthest nearest neighbors of our dataset (between test image and training image).
  * This system/model can of course show the three closest neighbors, the three furthest, ... many improvements and extra work are suggested.

5. Beyond the optimization and improvement techniques used along the way, we went on to present the Grid Search a little more deeply, and improved the original results, in particular:
 * in terms of accuracy for the classification using the HOG feature,
 * in terms of the number of useful components using the PCA feature.

We then showed the difference in essence between the two classifiers by showing the results on two template images created specifically for this purpose. Finally, we suggested different ways of populating our training set in order to robustify the classification and identification results, without requiring additional face downloads.

###Main differences between PCA and HOG

In this section, we come back to a few key differences between the two features we used in our tutorial.

- HOG is based on gradients, their magnitude and orientation.
Intuitively, it corresponds to the edges (cf. the Obama pictures). It's therefore heavily impacted by rotation, translation, ... which makes the interpretation easy when looking at the visual representation of the HOG (without this image of "bars", it's not always easy: see for instance the personC classification results). Many parameters allow tuning the robustness to noise or lighting conditions, or the "granularity" of the representation.
- PCA is based on finding the intrinsic directions of maximal variance in the images. It is a very famous and useful technique for dimensionality reduction, by selecting only the $p$ most important components. It can also be used for denoising. This is however undesirable when only a little variance is needed to differentiate between classes: reducing the dimensionality may decrease the performance. The PCA technique is sensitive to lighting conditions, rotation, scale, background, ... While the eigenfaces are interpretable in qualitative terms ('this eigenface tends to emphasize the contrast between this and this'), it's (as far as I'm concerned) less intuitive to fully interpret the results without analyzing the numbers more deeply.

In terms of accuracy (all results given after optimization), with `sq_size = 64` and `color = False`:
- A,B:
  * HOG:
    - classification: 100%
    - identification: 100%
  * PCA:
    - classification: 100%
    - identification: 100%
- C,D:
  * HOG:
    - classification: 50%
    - identification: 70%
  * PCA:
    - classification: 65%
    - identification: 65%

During all our classification/identification tasks, we have observed and explained why the results in terms of accuracy were worse for personC and D than for A and B. This of course matches the intuition: even if the faces are visually similar for a human with respect to some physical aspects, those aspects may not be the ones captured by the features and classifiers/identification systems. In particular, we saw that the HOG transform would consider Marc Blucas really similar to Bradley Cooper... as well as Jane Levy! At least in comparison with Emma Stone, who has a remarkable oblique haircut.

To go even further, we could definitely go along the road of the "other optimizations" suggested (see [other optimizations](https://colab.research.google.com/drive/1OYq1-SZZURJ5uujmf3PTqdEx3SDGAqZQ#scrollTo=k-fU8twAA2aM&line=11&uniqifier=1) ).

A lesson learnt seems to be that HOG is particularly well suited to recognizing a specific pattern in an image, just like an obvious oblique haircut. Parameters of course allow deviating from that pattern, but in essence, that's what it is. PCA, on the other hand, may better grasp overall information from the data themselves.

Depending on the goal of the application, both features could lead to different results - or a different ease of reaching the desired performance. This leads to the question: on a real system, what is the lesson learnt from such results?

###Classification or Identification

As is, there is no clear answer :-)

In the context of this tutorial, we reached pretty good results with both systems. Conceptually, an identification does not really learn anything - it "simply" computes a **relevant** metric between the features, and gives the most appropriate label based on that. On the contrary, the classifier tries to find a boundary, with a "clear" separation between the classes.
Both systems can perform many things, but the challenges are different:
- Identification is hard if the number of pairwise computations is immense,
- the metric to use as a similarity measure may not be easy to find, especially in high dimensions (see the curse of dimensionality),
- For the classification, there is moreover the challenge of finding the appropriate classifier, and its adequate hyper-parameters.

What we saw in particular along this tutorial is that the results of the systems differ because of intrinsic characteristics (biases) of the methods. As a perfect example, the identification with $k=1$: only one very close training image is needed for a test image to be properly labeled. However, this may be subject to "noise", such as the background/viewpoint (see [HOG](https://colab.research.google.com/drive/1OYq1-SZZURJ5uujmf3PTqdEx3SDGAqZQ#scrollTo=1esG7zSkqgX4&line=1&uniqifier=1) or [PCA](https://colab.research.google.com/drive/1OYq1-SZZURJ5uujmf3PTqdEx3SDGAqZQ#scrollTo=zMfCvQRhRx6g&line=3&uniqifier=1)).

### Authentication system

Let's imagine a company wants to purchase our face detection system to perform authentication of its employees... What would be our advice?

The questions raised by an authentication system are much more complicated than they could seem. This answer will be centered on the computer vision topic.

First, obviously, it is currently very easy to fool the different systems, see the examples of the template images. Linked to the same idea, we shall avoid that someone just prints out a 1:1 scale picture of one of the employees in order to get access somewhere. A solution for that is to have an extra system verifying that there is movement, or some distance measurement so that it's not a flat picture, ... This is outside the scope of the current discussion.

Regarding the authentication system, the first thing to understand is the need, in particular in terms of penalties and scores.
> what is worse: false positives (authenticating X as Y, giving X Y's access rights), or false negatives (not authenticating X as X)?

We saw both systems (classification and identification) can reach an accuracy of 100% quite "easily", while the results on "unknown persons" (personC and D) differ.

My succinct advice would then be:
- make sure to have a training set large enough for each of the employees,
- use a threshold quite high in terms of system confidence on the output,
- use a combination of both feature representations
  * the combination was not used in this notebook.

#####Classification or Identification ?

We can imagine a k-NN identification, as implemented currently, with a (low) threshold on the distance computed, so that only a face that is really close to another gets labelled. The "drawback" of such a system is that the training set needs to be large enough to decrease the amount of false positives: there needs to be an image "really close" to the test image.

A priori, good results are also obtained in this prototype with the classifiers. This is most likely what we wish for:
- high accuracy on personA and personB,
- "low accuracy" (= close to random) on personC and personD.
  * this is actually not entirely true if we are looking at the results on C and D separately.

Definitely, for both classifiers and identification systems, those cases of personC and personD need to be well thought out, and it seems a threshold on the system confidence score needs to be established (a minimal sketch is given below). Again, this is to be weighed against the False Positive / False Negative penalty scores.
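A minimal sketch of such a rejection threshold is given below. It is not part of the original pipelines: it assumes the PCA-based k-NN grid search of the identification section (the variable `grid_res` there) and the flattened test images `X_test`; the threshold value is purely illustrative and would have to be calibrated on a validation set against the false positive / false negative costs.

```
''' Hypothetical sketch: reject a test face when even its nearest training image is too far away '''
distance_threshold = 2500  # illustrative value for the PCA feature space, to be calibrated

pca_step = grid_res.best_estimator_['pcaify']
knn_step = grid_res.best_estimator_['classify']

# Distance from each test image to its single nearest training image, in the PCA space
dist_to_nearest, _ = knn_step.kneighbors(pca_step.transform(X_test), n_neighbors=1)

# Accept the k-NN label only when this nearest distance is small enough, otherwise reject
labels = grid_res.predict(X_test).astype(object)
labels[dist_to_nearest[:, 0] > distance_threshold] = "rejected (unknown person)"
print(labels)
```

Raising the threshold trades false negatives (employees wrongly rejected) against false positives (unknown persons wrongly authenticated), which is exactly the penalty question raised above.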
Another possibility to try out would be to organize a vote between several classifiers/identification systems. Depending on the metrics to reach, this could lead to a very **robust** and reliable system!

#####Dataset sizing

Currently, the size of the different sets is of course too small to guarantee any production-grade performance: globally, it tends to lower the confidence in the results.

In all cases, increasing the training sets (and test sets) would allow:
- performing extensive and reliable cross-validation of the model parameters,
- covering more poses, viewpoints, scales, lighting conditions, ... in order to avoid misclassification/misidentification,
- potentially making the threshold even higher in terms of confidence level,

provided that some regularization mechanisms are implemented as well (or accuracy may drop in real life).

#####What feature to be based on ?

*Simpler* feature representations such as HOG already give very good results, provided some fine tuning. Besides, taking into account the environment (lighting conditions, pose, ...), some features may be more robust.
- Can there be an LED indicating where the employee's gaze should point?
- Can they keep their glasses or not?
  * is it supposed to work with / without glasses?
- Is the lighting purely artificial (and controlled), or is the system to be installed in a hall where natural light is abundant, leading to changing lighting conditions?

All those questions may lead to the use of different features: HOG is more robust to lighting if proper normalization is done, as already discussed.

---

Thanks!

Geoffroy Herbin, R0426473
# One layer model Here we show how to run our two-layer model as a single-layer model. There are two different ways to do this, which we present below. ## Imports and loading data ```python # NBVAL_IGNORE_OUTPUT import os.path import numpy as np import pandas as pd from openscm_units import unit_registry as ur import tqdm.autonotebook as tqdman from scmdata import ScmRun, run_append from openscm_twolayermodel import TwoLayerModel import matplotlib.pyplot as plt ``` /Users/znicholls/Documents/AGCEC/MCastle/openscm-twolayermodel/venv/lib/python3.7/site-packages/ipykernel_launcher.py:7: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console) import sys For this we use an idealised scenario which is a reasonable representation of the forcing which occurs in response to an abrupt doubling in atmospheric CO$_2$ concentrations (often referred to as an abrupt-2xCO2 experiment). ```python run_length = 2000 data = np.zeros(run_length) data[10 : ] = 4.0 driver = ScmRun( data=data, index=1850 + np.arange(run_length), columns={ "unit": "W/m^2", "model": "idealised", "scenario": "1pctCO2", "region": "World", "variable": "Effective Radiative Forcing", }, ) driver ``` <scmdata.ScmRun (timeseries: 1, timepoints: 2000)> Time: Start: 1850-01-01T00:00:00 End: 3849-01-01T00:00:00 Meta: model region scenario unit variable 0 idealised World 1pctCO2 W/m^2 Effective Radiative Forcing ```python # NBVAL_IGNORE_OUTPUT fig = plt.figure(figsize=(12, 8)) ax = fig.add_subplot(111) driver.filter(variable="Effective Radiative Forcing").lineplot() ``` ## No second layer The first, and arguably the simplest way, to make a single layer model is to simply remove the connection between the top and second layers. Recalling the equations which define the two-layer model below, \begin{align} C \frac{dT}{dt} & = F - (\lambda_0 - a T) T - \epsilon \eta (T - T_D) \\ C_D \frac{dT_D}{dt} & = \eta (T - T_D) \end{align} We see that we can effectively remove the second layer by setting $\eta = 0$. 
```python TwoLayerModel().eta ``` 0.8 watt/(delta_degree_Celsius meter<sup>2</sup>) ```python # NBVAL_IGNORE_OUTPUT eta_values = np.array([0, 0.8]) * ur("W/m^2/K") eta_values ``` <table><tbody><tr><th>Magnitude</th><td style='text-align:left;'><pre>[0.0 0.8]</pre></td></tr><tr><th>Units</th><td style='text-align:left;'>watt/(kelvin meter<sup>2</sup>)</td></tr></tbody></table> ```python # NBVAL_IGNORE_OUTPUT du_values = np.array([50, 250, 500]) * ur("m") du_values ``` <table><tbody><tr><th>Magnitude</th><td style='text-align:left;'><pre>[ 50 250 500]</pre></td></tr><tr><th>Units</th><td style='text-align:left;'>meter</td></tr></tbody></table> ```python # NBVAL_IGNORE_OUTPUT runner = TwoLayerModel() output = [] equivalent_parameters = [] for eta in tqdman.tqdm(eta_values, desc="eta values", leave=False): runner.eta = eta for du in tqdman.tqdm(du_values, desc="du values", leave=False): runner.du = du output.append(runner.run_scenarios(driver)) output = run_append(output) output.head() ``` HBox(children=(HTML(value='eta values'), FloatProgress(value=0.0, max=2.0), HTML(value=''))) HBox(children=(HTML(value='du values'), FloatProgress(value=0.0, max=3.0), HTML(value=''))) HBox(children=(HTML(value='scenarios'), FloatProgress(value=1.0, bar_style='info', layout=Layout(width='20px')… HBox(children=(HTML(value='scenarios'), FloatProgress(value=1.0, bar_style='info', layout=Layout(width='20px')… HBox(children=(HTML(value='scenarios'), FloatProgress(value=1.0, bar_style='info', layout=Layout(width='20px')… HBox(children=(HTML(value='du values'), FloatProgress(value=0.0, max=3.0), HTML(value=''))) HBox(children=(HTML(value='scenarios'), FloatProgress(value=1.0, bar_style='info', layout=Layout(width='20px')… HBox(children=(HTML(value='scenarios'), FloatProgress(value=1.0, bar_style='info', layout=Layout(width='20px')… HBox(children=(HTML(value='scenarios'), FloatProgress(value=1.0, bar_style='info', layout=Layout(width='20px')… <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th>time</th> <th>1850-01-01 00:00:00</th> <th>1851-01-01 00:00:00</th> <th>1852-01-01 00:00:00</th> <th>1853-01-01 00:00:00</th> <th>1854-01-01 00:00:00</th> <th>1855-01-01 00:00:00</th> <th>1856-01-01 00:00:00</th> <th>1857-01-01 00:00:00</th> <th>1858-01-01 00:00:00</th> <th>1859-01-01 00:00:00</th> <th>...</th> <th>3840-01-01 00:00:00</th> <th>3841-01-01 00:00:00</th> <th>3842-01-01 00:00:00</th> <th>3843-01-01 00:00:00</th> <th>3844-01-01 00:00:00</th> <th>3845-01-01 00:00:00</th> <th>3846-01-01 00:00:00</th> <th>3847-01-01 00:00:00</th> <th>3848-01-01 00:00:00</th> <th>3849-01-01 00:00:00</th> </tr> <tr> <th>a (watt / delta_degree_Celsius ** 2 / meter ** 2)</th> <th>climate_model</th> <th>dl (meter)</th> <th>du (meter)</th> <th>efficacy (dimensionless)</th> <th>eta (watt / kelvin / meter ** 2)</th> <th>lambda0 (watt / delta_degree_Celsius / meter ** 2)</th> <th>model</th> <th>region</th> <th>run_idx</th> <th>scenario</th> <th>unit</th> <th>variable</th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th 
rowspan="5" valign="top">0.0</th> <th rowspan="5" valign="top">two_layer</th> <th rowspan="5" valign="top">1200</th> <th rowspan="4" valign="top">50</th> <th rowspan="4" valign="top">1.0</th> <th rowspan="4" valign="top">0.0</th> <th rowspan="4" valign="top">1.246667</th> <th rowspan="4" valign="top">idealised</th> <th rowspan="4" valign="top">World</th> <th rowspan="4" valign="top">0</th> <th rowspan="4" valign="top">1pctCO2</th> <th>W/m^2</th> <th>Effective Radiative Forcing</th> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>...</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> </tr> <tr> <th rowspan="2" valign="top">delta_degC</th> <th>Surface Temperature|Upper</th> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>...</td> <td>3.208556</td> <td>3.208556</td> <td>3.208556</td> <td>3.208556</td> <td>3.208556</td> <td>3.208556</td> <td>3.208556</td> <td>3.208556</td> <td>3.208556</td> <td>3.208556</td> </tr> <tr> <th>Surface Temperature|Lower</th> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>...</td> <td>0.000000</td> <td>0.000000</td> <td>0.000000</td> <td>0.000000</td> <td>0.000000</td> <td>0.000000</td> <td>0.000000</td> <td>0.000000</td> <td>0.000000</td> <td>0.000000</td> </tr> <tr> <th>W/m^2</th> <th>Heat Uptake</th> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>...</td> <td>0.000000</td> <td>0.000000</td> <td>0.000000</td> <td>0.000000</td> <td>0.000000</td> <td>0.000000</td> <td>0.000000</td> <td>0.000000</td> <td>0.000000</td> <td>0.000000</td> </tr> <tr> <th>250</th> <th>1.0</th> <th>0.0</th> <th>1.246667</th> <th>idealised</th> <th>World</th> <th>0</th> <th>1pctCO2</th> <th>W/m^2</th> <th>Effective Radiative Forcing</th> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>...</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> </tr> </tbody> </table> <p>5 rows × 2000 columns</p> </div> As we can see in the plots below, the runs with $\eta = 0$ only have a single timescale in their response. In contrast, the runs with $\eta \neq 0$ have two clear, distinct timescales. Notably, because equilibrium warming is independent of ocean heat uptake, the equilibrium warming is the same in all cases. As expected, we see that the depth of the mixed-layer affects the response time of the mixed-layer (the only response time in the case of $\eta = 0$) whilst having a much smaller effect on the response time of the deep ocean. 
```python # NBVAL_IGNORE_OUTPUT scenario_to_plot = "1pctCO2" xlim = [1850, 3500] pkwargs = dict( hue="du (meter)", style="eta (watt / kelvin / meter ** 2)", time_axis="year" ) fig = plt.figure(figsize=(9, 9)) ax = fig.add_subplot(211) output.filter(scenario=scenario_to_plot, variable="Surface Temperature|Upper").lineplot(**pkwargs, ax=ax) ax.set_title("Surface Temperature|Upper") ax = fig.add_subplot(212, sharex=ax) output.filter(scenario=scenario_to_plot, variable="Heat Uptake").lineplot(**pkwargs, ax=ax) ax.set_title("Heat Uptake") ax.set_xlim(xlim) plt.tight_layout() ``` ## Infinite reservoir second layer If we make the deep ocean component of the two-layer model infinitely deep, then we also have a single layer model. The concept is described by Equation 4 of [Geoffroy et al. 2013, Part 1](https://journals.ametsoc.org/doi/10.1175/JCLI-D-12-00195.1). \begin{align} C \frac{dT}{dt} & = F - (\lambda_0 - a T) T - \epsilon \eta (T - T_D) \\ C_D \frac{dT_D}{dt} & = \eta (T - T_D) \end{align} In short, if $C_D \rightarrow \infty$, then $T_D = 0$ and the equation governing the mixed layer response becomes \begin{align} C \frac{dT}{dt} & = F - (\lambda_0 - a T) T - \epsilon \eta T \end{align} In effect, we alter the climate feedback factor from $\lambda_0 - a T$ to $\lambda_0 - a T + \epsilon \eta$ we increase the climate feedback factor and hence lower the equilibrium climate sensitivity. ```python # NBVAL_IGNORE_OUTPUT dl_values = np.array([10 ** 3, 10 ** 4, 10 ** 5, 10 ** 15]) * ur("m") dl_values ``` <table><tbody><tr><th>Magnitude</th><td style='text-align:left;'><pre>[ 1000 10000 100000 1000000000000000]</pre></td></tr><tr><th>Units</th><td style='text-align:left;'>meter</td></tr></tbody></table> ```python # NBVAL_IGNORE_OUTPUT runner = TwoLayerModel() output = [] equivalent_parameters = [] for dl in tqdman.tqdm(dl_values, desc="dl values", leave=False): runner.dl = dl output.append(runner.run_scenarios(driver)) equivalent_parameters.append(({"two-layer deep ocean depth": runner.dl}, runner.get_impulse_response_parameters())) output = run_append(output) output.head() ``` HBox(children=(HTML(value='dl values'), FloatProgress(value=0.0, max=4.0), HTML(value=''))) HBox(children=(HTML(value='scenarios'), FloatProgress(value=1.0, bar_style='info', layout=Layout(width='20px')… HBox(children=(HTML(value='scenarios'), FloatProgress(value=1.0, bar_style='info', layout=Layout(width='20px')… HBox(children=(HTML(value='scenarios'), FloatProgress(value=1.0, bar_style='info', layout=Layout(width='20px')… HBox(children=(HTML(value='scenarios'), FloatProgress(value=1.0, bar_style='info', layout=Layout(width='20px')… <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th>time</th> <th>1850-01-01 00:00:00</th> <th>1851-01-01 00:00:00</th> <th>1852-01-01 00:00:00</th> <th>1853-01-01 00:00:00</th> <th>1854-01-01 00:00:00</th> <th>1855-01-01 00:00:00</th> <th>1856-01-01 00:00:00</th> <th>1857-01-01 00:00:00</th> <th>1858-01-01 00:00:00</th> <th>1859-01-01 00:00:00</th> <th>...</th> <th>3840-01-01 00:00:00</th> <th>3841-01-01 00:00:00</th> <th>3842-01-01 00:00:00</th> <th>3843-01-01 00:00:00</th> <th>3844-01-01 00:00:00</th> <th>3845-01-01 00:00:00</th> <th>3846-01-01 00:00:00</th> 
<th>3847-01-01 00:00:00</th> <th>3848-01-01 00:00:00</th> <th>3849-01-01 00:00:00</th> </tr> <tr> <th>a (watt / delta_degree_Celsius ** 2 / meter ** 2)</th> <th>climate_model</th> <th>dl (meter)</th> <th>du (meter)</th> <th>efficacy (dimensionless)</th> <th>eta (watt / delta_degree_Celsius / meter ** 2)</th> <th>lambda0 (watt / delta_degree_Celsius / meter ** 2)</th> <th>model</th> <th>region</th> <th>run_idx</th> <th>scenario</th> <th>unit</th> <th>variable</th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th rowspan="5" valign="top">0.0</th> <th rowspan="5" valign="top">two_layer</th> <th rowspan="4" valign="top">1000</th> <th rowspan="4" valign="top">50</th> <th rowspan="4" valign="top">1.0</th> <th rowspan="4" valign="top">0.8</th> <th rowspan="4" valign="top">1.246667</th> <th rowspan="4" valign="top">idealised</th> <th rowspan="4" valign="top">World</th> <th rowspan="4" valign="top">0</th> <th rowspan="4" valign="top">1pctCO2</th> <th>W/m^2</th> <th>Effective Radiative Forcing</th> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>...</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> </tr> <tr> <th rowspan="2" valign="top">delta_degC</th> <th>Surface Temperature|Upper</th> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>...</td> <td>3.207635</td> <td>3.207638</td> <td>3.207642</td> <td>3.207645</td> <td>3.207648</td> <td>3.207652</td> <td>3.207655</td> <td>3.207658</td> <td>3.207661</td> <td>3.207665</td> </tr> <tr> <th>Surface Temperature|Lower</th> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>...</td> <td>3.206227</td> <td>3.206236</td> <td>3.206244</td> <td>3.206252</td> <td>3.206261</td> <td>3.206269</td> <td>3.206278</td> <td>3.206286</td> <td>3.206294</td> <td>3.206302</td> </tr> <tr> <th>W/m^2</th> <th>Heat Uptake</th> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>...</td> <td>0.001153</td> <td>0.001149</td> <td>0.001144</td> <td>0.001140</td> <td>0.001136</td> <td>0.001132</td> <td>0.001128</td> <td>0.001124</td> <td>0.001120</td> <td>0.001115</td> </tr> <tr> <th>10000</th> <th>50</th> <th>1.0</th> <th>0.8</th> <th>1.246667</th> <th>idealised</th> <th>World</th> <th>0</th> <th>1pctCO2</th> <th>W/m^2</th> <th>Effective Radiative Forcing</th> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>...</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> <td>4.000000</td> </tr> </tbody> </table> <p>5 rows × 2000 columns</p> </div> As we can see below, as the deep ocean becomes deeper and deeper, its equivalent timescale increases. This demonstrates that the deep ocean is becoming increasingly close to being an infinite reservoir. 
```python for v in equivalent_parameters: v[1]["d1"] = v[1]["d1"].to("yr") v[1]["d2"] = v[1]["d2"].to("yr") equivalent_parameters ``` [({'two-layer deep ocean depth': 1000 <Unit('meter')>}, {'d1': 3.211845269334279 <Unit('a')>, 'd2': 273.9854219906419 <Unit('a')>, 'q1': 0.4810875417166762 <Unit('delta_degree_Celsius * meter ** 2 / watt')>, 'q2': 0.32105149571648217 <Unit('delta_degree_Celsius * meter ** 2 / watt')>, 'efficacy': 1.0 <Unit('dimensionless')>}), ({'two-layer deep ocean depth': 10000 <Unit('meter')>}, {'d1': 3.2342013046041056 <Unit('a')>, 'd2': 2720.915300585768 <Unit('a')>, 'q1': 0.4878523568580023 <Unit('delta_degree_Celsius * meter ** 2 / watt')>, 'q2': 0.3142866805751838 <Unit('delta_degree_Celsius * meter ** 2 / watt')>, 'efficacy': 1.0 <Unit('dimensionless')>}), ({'two-layer deep ocean depth': 100000 <Unit('meter')>}, {'d1': 3.2364276918618766 <Unit('a')>, 'd2': 27190.435420502465 <Unit('a')>, 'q1': 0.4885246922441889 <Unit('delta_degree_Celsius * meter ** 2 / watt')>, 'q2': 0.31361434518885845 <Unit('delta_degree_Celsius * meter ** 2 / watt')>, 'efficacy': 1.0 <Unit('dimensionless')>}), ({'two-layer deep ocean depth': 1000000000000000 <Unit('meter')>}, {'d1': 3.238959349323281 <Unit('a')>, 'd2': 271883581625601.66 <Unit('a')>, 'q1': 0.4889441930744013 <Unit('delta_degree_Celsius * meter ** 2 / watt')>, 'q2': 0.31555972744518723 <Unit('delta_degree_Celsius * meter ** 2 / watt')>, 'efficacy': 1.0 <Unit('dimensionless')>})] As shown in the plots below, as the deep ocean becomes bigger, it can uptake more heat and hence mixed-layer warming is reduced. However, whilst the deep ocean is finite, the mixed-layer warming does eventually reach the same equilibrium (independent of deep ocean depth), it just takes longer to do so. Once the deep ocean becomes infinite, as discussed above, we effectively have a single-layer model with an increased climate feedback factor (lower equilibrium climate sensitivity) which can uptake heat forever. ```python # NBVAL_IGNORE_OUTPUT scenario_to_plot = "1pctCO2" xlim = [1850, 3500] pkwargs = dict( hue="dl (meter)", style="variable", time_axis="year" ) fig = plt.figure(figsize=(9, 9)) ax = fig.add_subplot(211) output.filter(scenario=scenario_to_plot, variable="Surface Temperature|Upper").lineplot(**pkwargs, ax=ax) ax = fig.add_subplot(212, sharex=ax) output.filter(scenario=scenario_to_plot, variable="Heat Uptake").lineplot(**pkwargs, ax=ax) ax.set_xlim(xlim) ```
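To put a rough number on the feedback argument above (a sketch using the same default parameters as in the parameter table: $\lambda_0 \approx 1.2467$ W/m$^2$/K, $\epsilon = 1$, $\eta = 0.8$ W/m$^2$/K, $a = 0$ and $F = 4$ W/m$^2$): in the infinite-reservoir limit the equilibrium of $C \, dT/dt = F - \lambda_0 T - \epsilon \eta T$ is \begin{align} T_{eq} = \frac{F}{\lambda_0 + \epsilon \eta} \approx \frac{4}{2.05} \approx 1.95 \text{ K}, \end{align} roughly 1.3 K below the $\approx 3.21$ K equilibrium reached whenever the deep ocean is finite.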
cdd6043b49bd3ac6e59dafea50c75cefbc1018cb
277,048
ipynb
Jupyter Notebook
docs/source/usage/one-layer-model.ipynb
sadielbartholomew/openscm-twolayermodel
19b030571892a3238082765671e161ddd4c2ab97
[ "BSD-3-Clause" ]
6
2020-10-13T00:34:04.000Z
2022-02-16T23:33:48.000Z
docs/source/usage/one-layer-model.ipynb
sadielbartholomew/openscm-twolayermodel
19b030571892a3238082765671e161ddd4c2ab97
[ "BSD-3-Clause" ]
16
2020-04-16T11:17:05.000Z
2021-06-15T00:58:09.000Z
docs/source/usage/one-layer-model.ipynb
sadielbartholomew/openscm-twolayermodel
19b030571892a3238082765671e161ddd4c2ab97
[ "BSD-3-Clause" ]
6
2020-10-12T13:24:28.000Z
2021-06-22T12:54:15.000Z
180.604954
87,816
0.639109
true
7,804
Qwen/Qwen-72B
1. YES 2. YES
0.83762
0.774583
0.648806
__label__eng_Latn
0.358364
0.345726
# Reduced Helmholtz equation of state: carbon dioxide **Water equation of state:** You can see the full, state-of-the-art equation of state for water, which also uses a reduced Helmholtz approach: the IAPWS 1995 formulation {cite}`Wagner2002`. This equation of state is available using CoolProp with the `Water` fluid. One modern approach for calculating thermodynamic properties of real fluids uses a reduced Helmholtz equation of state, using the reduced Helmholtz free energy function $\alpha$: \begin{equation} \alpha (\tau, \delta) = \frac{a}{RT} = \frac{u - Ts}{RT} \end{equation} which is a function of reduced density $\delta$ and reduced temperature $\tau$: \begin{equation} \delta = \frac{\rho}{\rho_{\text{crit}}} \quad \text{and} \quad \tau = \frac{T_{\text{crit}}}{T} \end{equation} The reduced Helmholtz free energy function, $\alpha(\tau, \delta)$, is given as the sum of ideal gas and residual components: \begin{equation} \alpha(\tau, \delta) = \alpha_{IG} (\tau, \delta) + \alpha_{\text{res}} (\tau, \delta) \;, \end{equation} which are both given as high-order fits using many coefficients: ```python import matplotlib.pyplot as plt %matplotlib inline # these are mostly for making the saved figures nicer import matplotlib_inline.backend_inline matplotlib_inline.backend_inline.set_matplotlib_formats('pdf', 'png') plt.rcParams['figure.dpi']= 150 plt.rcParams['savefig.dpi'] = 150 import numpy as np import cantera as ct from scipy import integrate, optimize from pint import UnitRegistry ureg = UnitRegistry() Q_ = ureg.Quantity ``` ```python import sympy sympy.init_printing(use_latex='mathjax') T, R, tau, delta = sympy.symbols('T, R, tau, delta', real=True) a_vars = sympy.symbols('a0, a1, a2, a3, a4, a5, a6, a7', real=True) theta_vars = sympy.symbols('theta3, theta4, theta5, theta6, theta7', real=True) n_vars = sympy.symbols('n0, n1, n2, n3, n4, n5, n6, n7, n8, n9, n10, n11', real=True) alpha_ideal = sympy.log(delta) + a_vars[0] + a_vars[1]*tau + a_vars[2]*sympy.log(tau) for i in range(3, 8): alpha_ideal += a_vars[i] * sympy.log(1.0 - sympy.exp(-tau * theta_vars[i-3])) display(sympy.Eq(sympy.symbols('alpha_IG'), alpha_ideal)) alpha_res = ( n_vars[0] * delta * tau**0.25 + n_vars[1] * delta * tau**1.25 + n_vars[2] * delta * tau**1.50 + n_vars[3] * delta**3 * tau**0.25 + n_vars[4] * delta**7 * tau**0.875 + n_vars[5] * delta * tau**2.375 * sympy.exp(-delta) + n_vars[6] * delta**2 * tau**2 * sympy.exp(-delta) + n_vars[7] * delta**5 * tau**2.125 * sympy.exp(-delta) + n_vars[8] * delta * tau**3.5 * sympy.exp(-delta**2) + n_vars[9] * delta * tau**6.5 * sympy.exp(-delta**2) + n_vars[10] * delta**4 * tau**4.75 * sympy.exp(-delta**2) + n_vars[11] * delta**2 * tau**12.5 * sympy.exp(-delta**3) ) display(sympy.Eq(sympy.symbols('alpha_res'), alpha_res)) ``` $\displaystyle \alpha_{IG} = a_{0} + a_{1} \tau + a_{2} \log{\left(\tau \right)} + a_{3} \log{\left(1.0 - e^{- \tau \theta_{3}} \right)} + a_{4} \log{\left(1.0 - e^{- \tau \theta_{4}} \right)} + a_{5} \log{\left(1.0 - e^{- \tau \theta_{5}} \right)} + a_{6} \log{\left(1.0 - e^{- \tau \theta_{6}} \right)} + a_{7} \log{\left(1.0 - e^{- \tau \theta_{7}} \right)} + \log{\left(\delta \right)}$ $\displaystyle \alpha_{res} = \delta^{7} n_{4} \tau^{0.875} + \delta^{5} n_{7} \tau^{2.125} e^{- \delta} + \delta^{4} n_{10} \tau^{4.75} e^{- \delta^{2}} + \delta^{3} n_{3} \tau^{0.25} + \delta^{2} n_{11} \tau^{12.5} e^{- \delta^{3}} + \delta^{2} n_{6} \tau^{2} e^{- \delta} + \delta n_{0} \tau^{0.25} + \delta n_{1} \tau^{1.25} + \delta n_{2} \tau^{1.5} + \delta n_{5}
\tau^{2.375} e^{- \delta} + \delta n_{8} \tau^{3.5} e^{- \delta^{2}} + \delta n_{9} \tau^{6.5} e^{- \delta^{2}}$ ## Carbon dioxide equation of state The coefficients $a_i$, $\theta_i$, and $n_i$ are given for carbon dioxide: ```python # actual coefficients coeffs_a = [ 8.37304456, -3.70454304, 2.500000, 1.99427042, 0.62105248, 0.41195293, 1.04028922, 8.327678e-2 ] coeffs_theta = [ 3.151630, 6.111900, 6.777080, 11.32384, 27.08792 ] coeffs_n = [ 0.89875108, -0.21281985e1, -0.68190320e-1, 0.76355306e-1, 0.22053253e-3, 0.41541823, 0.71335657, 0.30354234e-3, -0.36643143, -0.14407781e-2, -0.89166707e-1, -0.23699887e-1 ] ``` Through some math, we can find an expression for pressure: \begin{equation} P = R T \rho \left[ 1 + \delta \left(\frac{\partial \alpha_{\text{res}}}{\partial \delta} \right)_{\tau} \right] \end{equation} Use this expression to estimate the pressure at $T$ = 350 K and $v$ = 0.01 m$^3$/kg, and compare against that obtained from Cantera. We can use our symbolic expression for $\alpha_{\text{res}} (\tau, \delta)$ and take the partial derivative: ```python # use Cantera fluid to get specific gas constant and critical properties f = ct.CarbonDioxide() gas_constant = ct.gas_constant / f.mean_molecular_weight temp_crit = f.critical_temperature density_crit = f.critical_density # conditions of interest temp = 350 specific_volume = 0.01 density = 1.0 / specific_volume # take the partial derivative of alpha_res with respect to delta derivative_alpha_delta = sympy.diff(alpha_res, delta) # substitute all coefficients derivative_alpha_delta = derivative_alpha_delta.subs( [(n, n_val) for n, n_val in zip(n_vars, coeffs_n)] ) def get_pressure( temp, specific_vol, fluid, derivative_alpha_delta, tau, delta ): '''Calculates pressure for reduced Helmholtz equation of state''' red_density = (1.0 / specific_vol) / fluid.critical_density red_temp_inv = fluid.critical_temperature / temp gas_constant = ct.gas_constant / fluid.mean_molecular_weight dalpha_ddelta = derivative_alpha_delta.subs( [(delta, red_density), (tau, red_temp_inv)] ) pres = ( gas_constant * temp * (1.0 / specific_vol) * (1.0 + red_density * dalpha_ddelta) ) return pres pres = get_pressure(temp, specific_volume, f, derivative_alpha_delta, tau, delta) print(f'Pressure: {pres / 1e6: .3f} MPa') f.TV = temp, specific_volume print(f'Cantera pressure: {f.P / 1e6: .3f} MPa') ``` Pressure: 5.464 MPa Cantera pressure: 5.475 MPa Our calculation and that from Cantera agree fairly well! They are not exactly the same because Cantera uses a slightly different formulation for the equation of state. 
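For reference, here is a sketch of the "some math" behind that pressure expression, assuming the standard identity $P = \rho^2 \left( \partial a / \partial \rho \right)_T$ for the specific Helmholtz free energy $a(T, \rho) = R T \, \alpha(\tau, \delta)$: \begin{equation} P = \rho^2 \left(\frac{\partial a}{\partial \rho}\right)_T = \rho^2 \frac{R T}{\rho_{\text{crit}}} \left(\frac{\partial \alpha}{\partial \delta}\right)_\tau = R T \rho \, \delta \left(\frac{\partial \alpha}{\partial \delta}\right)_\tau \end{equation} and, since only the $\log \delta$ term of $\alpha_{IG}$ depends on $\delta$, the ideal-gas part contributes $\delta \, \partial \alpha_{IG} / \partial \delta = 1$, leaving $P = R T \rho \left[ 1 + \delta \left( \partial \alpha_{\text{res}} / \partial \delta \right)_\tau \right]$ as used above.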
Let's compare the calculations now for a range of specific volumes and multiple temperatures: ```python fig, ax = plt.subplots(figsize=(8, 4)) specific_volumes = np.geomspace(0.001, 0.01, num=20) temperatures = [300, 400, 500] for temp in temperatures: pressures = np.zeros(len(specific_volumes)) pressures_cantera = np.zeros(len(specific_volumes)) for idx, spec_vol in enumerate(specific_volumes): pressures[idx] = get_pressure( temp, spec_vol, f, derivative_alpha_delta, tau, delta ) f.TV = temp, spec_vol pressures_cantera[idx] = f.P ax.loglog(specific_volumes, pressures/1000., 'o', color='blue') ax.loglog(specific_volumes, pressures_cantera/1000., color='blue') bbox_props = dict(boxstyle='round', fc='w', ec='0.3', alpha=0.9) ax.text(2e-3, 4e3, '300 K', ha='center', va='center', bbox=bbox_props) ax.text(2e-3, 1.6e4, '400 K', ha='center', va='center', bbox=bbox_props) ax.text(2e-3, 7e4, '500 K', ha='center', va='center', bbox=bbox_props) ax.legend(['Reduced Helmholtz', 'Cantera']) plt.grid(True, which='both') plt.xlabel('Specific volume (m^3/kg)') plt.ylabel('Pressure (kPa)') fig.tight_layout() plt.show() ``` We can see that the pressure calculated using the reduced Helmholtz equation of state matches closely with that from Cantera, which uses a different but similarly advanced equation of state.
8ebaf4a95839c234dfcf936f837bdef8958296e1
123,089
ipynb
Jupyter Notebook
book/content/properties-pure/reduced-helmholtz.ipynb
kyleniemeyer/computational-thermo
3f0d1d4a6d4247ac3bf3b74867411f2090c70cbd
[ "CC-BY-4.0", "BSD-3-Clause" ]
13
2020-04-01T05:52:06.000Z
2022-03-27T20:25:59.000Z
book/content/properties-pure/reduced-helmholtz.ipynb
kyleniemeyer/computational-thermo
3f0d1d4a6d4247ac3bf3b74867411f2090c70cbd
[ "CC-BY-4.0", "BSD-3-Clause" ]
1
2020-04-28T04:02:05.000Z
2020-04-29T17:49:52.000Z
book/content/properties-pure/reduced-helmholtz.ipynb
kyleniemeyer/computational-thermo
3f0d1d4a6d4247ac3bf3b74867411f2090c70cbd
[ "CC-BY-4.0", "BSD-3-Clause" ]
6
2020-04-03T14:52:24.000Z
2022-03-29T02:29:43.000Z
369.636637
86,980
0.921707
true
2,587
Qwen/Qwen-72B
1. YES 2. YES
0.879147
0.73412
0.645399
__label__eng_Latn
0.633737
0.337808
# All together now We have now discretized the two first order equations over a single cell. What is left is to assemble and solve the DC system over the entire mesh. To implement the divergence on the full mesh, the stencil of $\pm 1$'s must index into $\mathbf{j}$ on the entire mesh (instead of four elements). Although this can be done in a \texttt{for-loop}, it is conceptually, and often computationally, easier to create this stencil using nested kronecker products (see notebook). The volume and area terms in the divergence get expanded to diagonal matrices, and we multiply them together to get the discrete divergence operator. The discretization of the \emph{face} inner product can be abstracted to a function, $\mathbf{M}_f(\sigma^{-1})$, that completes the inner product on the entire mesh at once. The main difference when implementing this is the $\mathbf{P}$ matrices, which must index into the entire mesh. With the necessary operators defined for both equations on the entire mesh, we are left with two discrete equations: \begin{equation} \text{diag}(\mathbf{v}) \mathbf{D}\mathbf{j} = \mathbf{q} \\ \mathbf{M}_f(\sigma^{-1}) \mathbf{j} = \mathbf{D}^\top \text{diag}(\mathbf{v}) \boldsymbol{\phi}. \end{equation} Note that now all variables are defined over the entire mesh. We could solve this coupled system or we could eliminate $\mathbf{j}$ and solve for $\phi$ directly (a smaller, second-order system). \begin{equation} \text{diag}(\mathbf{v}) \mathbf{D} \mathbf{M}_f(\sigma^{-1})^{-1} \mathbf{D}^\top \text{diag}(\mathbf{v}) \boldsymbol{\phi} = \mathbf{q}. \end{equation} By solving this system matrix, we obtain a solution for the electric potential $\phi$ everywhere in the domain. Creating predicted data from this requires an interpolation to the electrode locations and subtraction to obtain potential differences! Below we have wrapped up the functions in [index.ipynb](index.ipynb) in [dc_interact.py](dc_interact.py) so that we can use them with IPython widgets. ```python %matplotlib inline from dc_interact import dc_resistivity from ipywidgets import interact interact( dc_resistivity, log_sigma_background=(-4,4), log_sigma_block=(-4,4), plot_type=['potential','conductivity','current'] ); ``` Moving from continuous equations to their discrete analogues is fundamental in geophysical simulations. In this tutorial, we have started from a continuous description of the governing equations for the DC resistivity problem, selected locations on the mesh to discretize the continuous functions, constructed differential operators by considering one cell at a time, assembled and solved the discrete DC equations. Composing the finite volume system in this way allows us to move to different meshes and incorporate various types of boundary conditions that are often necessary when solving these equations in practice. # Next up ... To learn more about SimPEG you can go to our website http://simpeg.xyz where you can find code, documentation, presentations, more tutorials and papers! For another DC example using a curvi mesh - start at the [SimPEG Examples](http://docs.simpeg.xyz/content/examples/Mesh_Basic_ForwardDC.html)
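As an aside, here is a small, self-contained sketch of the nested-Kronecker construction described above, on a uniform 2D tensor mesh with made-up dimensions; it follows the spirit of the text but is not the SimPEG/discretize implementation, and the area/volume scaling shown is only one common convention:

```python
# Sketch: build a 2D face divergence from 1D +/-1 stencils via Kronecker products.
import numpy as np
import scipy.sparse as sp

def ddx(n):
    """1D +/-1 difference stencil mapping n+1 faces to n cells."""
    return sp.diags([-np.ones(n), np.ones(n)], [0, 1], shape=(n, n + 1))

nx, ny = 4, 3        # number of cells in x and y (illustrative sizes)
hx, hy = 1.0, 1.0    # uniform cell widths

# +/-1 stencils acting on all x-faces and y-faces of the mesh
Dx = sp.kron(sp.identity(ny), ddx(nx))
Dy = sp.kron(ddx(ny), sp.identity(nx))
stencil = sp.hstack([Dx, Dy])                 # shape (n_cells, n_faces)

# expand face areas and cell volumes to diagonal matrices and combine
areas = np.r_[hy * np.ones(Dx.shape[1]), hx * np.ones(Dy.shape[1])]
vol = hx * hy * np.ones(nx * ny)
DIV = sp.diags(1.0 / vol) @ stencil @ sp.diags(areas)

print(DIV.shape)   # (n_cells, n_faces) = (12, 31)
```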
a79e69c48767cc564038f85213177cb089a0d6e6
28,565
ipynb
Jupyter Notebook
notebooks/fundamentals/pixels_and_neighbors/all_together_now.ipynb
ahartikainen/computation
2b7f0fd2fe2d9f1fc494cb52f57764a09ba0617e
[ "MIT" ]
13
2017-03-09T06:01:04.000Z
2021-12-15T07:40:40.000Z
notebooks/fundamentals/pixels_and_neighbors/all_together_now.ipynb
ahartikainen/computation
2b7f0fd2fe2d9f1fc494cb52f57764a09ba0617e
[ "MIT" ]
14
2016-03-29T18:08:09.000Z
2017-03-07T16:34:22.000Z
notebooks/fundamentals/pixels_and_neighbors/all_together_now.ipynb
ahartikainen/computation
2b7f0fd2fe2d9f1fc494cb52f57764a09ba0617e
[ "MIT" ]
6
2017-06-19T15:42:02.000Z
2020-03-02T03:29:21.000Z
213.171642
23,562
0.901873
true
767
Qwen/Qwen-72B
1. YES 2. YES
0.835484
0.907312
0.758044
__label__eng_Latn
0.996644
0.599523
# Causal Impact Will Fuks https://github.com/WillianFuks/tfcausalimpact [LinkedIn](https://www.linkedin.com/in/willian-fuks-62622217/) https://github.com/WillianFuks/pyDataSP-tfcausalimpact ```sh git clone git@github.com:WillianFuks/pyDataSP-tfcausalimpact.git cd pyDataSP-tfcausalimpact/ python3.9 -m venv .env source .env/bin/activate pip install -r requirements.txt .env/bin/jupyter notebook ``` ```python import daft import os import collections os. environ['TF_CPP_MIN_LOG_LEVEL'] = '3' import matplotlib.pyplot as plt from matplotlib import rc from IPython.display import HTML import tensorflow as tf import tensorflow_probability as tfp import seaborn as sns import pandas as pd import numpy as np # Attempts to disable TF warnings tf.get_logger().setLevel('ERROR') tf.autograph.set_verbosity(tf.compat.v1.logging.ERROR) import logging tf.get_logger().setLevel(logging.ERROR) # TFP namespaces tfd = tfp.distributions tfb = tfp.bijectors # Remove prompt from notebook css HTML(open('styles/custom.css').read()) ``` ## Here's our final destination: <center></center> ## Our Journey: 1. Causality 2. Bayesian Time Series 3. Causal Impact ## Causality Is Simple! ```python rc("font", family="serif", size=12) rc("text", usetex=False) pgm = daft.PGM(grid_unit=4.0, node_unit=2.5) rect_params = {"lw": 2} edge_params = { 'linewidth': 1, 'head_width': .8 } pgm.add_node("rain", r"$Rain$", 0.5, 1.5, scale=1.5, fontsize=24) pgm.add_node("wet", r"$Wet$", 2.5 + 0.2, 1.5, scale=1.5, fontsize=24) pgm.add_edge("rain", "wet", plot_params=edge_params) pgm.render(); ``` ## Until It's Not... ```python pgm = daft.PGM(grid_unit=4.0, node_unit=4.5) pgm.add_node("fatigue", r"$Fatigue Train$", 0.5, 1.5, scale=1.5, fontsize=24) pgm.add_node("perf", r"$Performance$", 2.5 + 0.2, 1.5, scale=1.5, fontsize=24) pgm.add_edge("fatigue", "perf", plot_params=edge_params) pgm.render(); ``` ```python pgm = daft.PGM(grid_unit=4.0, node_unit=2.5) pgm.add_node("diet", r"$Diet$", 0.5, 3, scale=1.5, fontsize=24) pgm.add_node("rest", r"$Rest$", 0.5, 1.5, scale=1.5, fontsize=24) pgm.add_node("volume", r"$Volume$", 0.5, 0, scale=1.5, fontsize=24) pgm.add_node("fatigue", r"$Fatigue$", 0.5, -1.5, scale=1.5, fontsize=24) pgm.add_node("perf", r"$Performance$", 2.5 + 0.2, 1.5, scale=2.5, fontsize=24) pgm.add_edge("diet", "perf", plot_params=edge_params) pgm.add_edge("volume", "perf", plot_params=edge_params) pgm.add_edge("rest", "perf", plot_params=edge_params) pgm.add_edge("fatigue", "perf", plot_params=edge_params) # pgm.add_edge("diet", "rest", plot_params=edge_params) # pgm.add_edge("rest", "diet", plot_params=edge_params) # pgm.add_edge("diet", "volume", plot_params=edge_params) # pgm.add_edge("volume", "diet", plot_params=edge_params) # pgm.add_edge("diet", "fatigue", plot_params=edge_params) # pgm.add_edge("fatigue", "diet", plot_params=edge_params) pgm.render(); ``` ## So How To Compute Causality?! ### Correlations? ## Let's Explore The Idea ### Tensorflow Probability (Random Variables) ```python import os import tensorflow as tf import tensorflow_probability as tfp # tf.get_logger().setLevel('INFO') # os. 
environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # tf.autograph.set_verbosity(1) tfd = tfp.distributions tfb = tfp.bijectors ``` ```python X = tfd.Normal(loc=2, scale=1) sns.displot(X.sample(1000), kde=True); ``` ```python b_X = tfd.TransformedDistribution( tfd.Normal(loc=2, scale=1), bijector=tfb.Exp() ) sns.displot(b_X.sample(1000), kde=True); ``` ```python ``` ## Suppose This (Simplified) Structure ```python pgm = daft.PGM(grid_unit=4.0, node_unit=2.5) pgm.add_node("diet", r"$Diet$", 0., 0., scale=1.5, fontsize=24) pgm.add_node("volume", r"$Volume$", 1.25, 0, scale=1.5, fontsize=24) pgm.add_node("perf", r"$Performance$", 3., 0., scale=2.5, fontsize=24) pgm.add_edge("diet", "volume", plot_params=edge_params) pgm.add_edge("volume", "perf", plot_params=edge_params) pgm.render(); ``` ## And Respective Data ```python dist = tfd.JointDistributionNamed( { 'diet': tfd.Normal(loc=3, scale=1), 'volume': lambda diet: tfd.Normal(diet * 2, scale=0.5), 'performance': lambda volume: tfd.Normal(volume * 1.3, scale=0.3) } ) data = dist.sample(3000) data = pd.DataFrame(data, columns=['performance', 'diet', 'volume']) # data.set_index(pd.date_range(start='20200101', periods=len(data)), inplace=True) # data ``` ## Linear Regression Keras $$ performance = W\cdot[x_{diet}, x_{volume}]^T$$ ```python import tensorflow as tf linear_model = tf.keras.Sequential([ tf.keras.layers.Dense(units=1, use_bias=False) ]) linear_model.compile( optimizer=tf.optimizers.Adam(learning_rate=0.1), loss=tf.keras.losses.MeanSquaredError() ) linear_model.fit( data[['diet']], data['performance'], epochs=100, verbose=0, ) w = linear_model.get_weights() print(f"Linear Relationship is: {w[0][0][0]:.2f}") ``` ```python linear_model = tf.keras.Sequential([ tf.keras.layers.Dense(units=1, use_bias=False) ]) linear_model.compile( optimizer=tf.optimizers.Adam(learning_rate=0.1), loss=tf.keras.losses.MeanSquaredError() ) linear_model.fit( data[['volume']], data['performance'], epochs=100, verbose=0, ) w = linear_model.get_weights() print(f"Linear Relationship is: {w[0][0][0]:.2f}") ``` ```python linear_model = tf.keras.Sequential([ tf.keras.layers.Dense(units=1, use_bias=False) ]) linear_model.compile( optimizer=tf.optimizers.Adam(learning_rate=0.1), loss=tf.keras.losses.MeanSquaredError() ) linear_model.fit( data[['diet', 'volume']], data['performance'], epochs=100, verbose=0, ) w = linear_model.get_weights() print(f"Linear Relationship is: {w[0]}") ``` ## Bayesian Linear Regression ### Recipe! 
$$\begin{equation} \label{eq1} \begin{split} P(A|B) & = \frac{P(B|A)P(A)}{P(B)} = \frac{P(A, B)}{P(B)} \end{split} \end{equation} $$ $$\begin{equation} \label{eq1} \begin{split} P(\theta|D) & = \frac{P(\theta, D)}{P(D)} \end{split} \end{equation} $$ ## Step 1: Priors ```python pgm = daft.PGM(grid_unit=4.0, node_unit=2.5) pgm.add_node("diet", r"$w_{diet}$", 0., 0., scale=1.5, fontsize=24) pgm.add_node("volume", r"$w_{volume}$", 1.25, 0, scale=1.5, fontsize=24) pgm.add_node("sigma", r"$\sigma^2$", 2.5, 0, scale=1.5, fontsize=24) pgm.add_node("perf", r"$Performance$", 1.25, -2, scale=2.25, fontsize=24) pgm.add_edge("diet", "perf", plot_params=edge_params) pgm.add_edge("volume", "perf", plot_params=edge_params) pgm.add_edge("sigma", "perf", plot_params=edge_params) pgm.render(); ``` \begin{equation} \label{eq1} \begin{split} w_{diet} & \sim N(3, 5) \\ w_{volume} & \sim N(3, 5) \\ \sigma^2 & \sim Exp(2) \\ performance & \sim N(w_{diet} \cdot x_{diet} + w_{volume} \cdot x_{volume}, \sigma^2) \end{split} \end{equation} ```python joint_dist = tfd.JointDistributionNamedAutoBatched(dict( sigma=tfd.Exponential(2), w_diet=tfd.Normal(loc=3, scale=5), w_volume=tfd.Normal(loc=3, scale=5), performance=lambda sigma, w_diet, w_volume: tfd.Normal(loc=data['diet'].values * w_diet + data['volume'].values * w_volume, scale=sigma) )) ``` ```python prior_samples = joint_dist.sample(500) nrows = 3 labels = ['sigma', 'w_diet', 'w_volume'] fig, axes = plt.subplots(nrows=nrows, ncols=1, figsize=(10, 8)) for i in range(nrows): sns.histplot(prior_samples[labels[i]], kde=True, ax=axes[i], label=labels[i]); axes[i].legend(); ``` # Step 2: Joint Distribution $$P(\theta, X)$$ ```python def target_log_prob_fn(sigma, w_diet, w_volume): return joint_dist.log_prob(sigma=sigma, w_diet=w_diet, w_volume=w_volume, performance=data['performance'].values) ``` # Step 3: MCMC ```python num_results = int(1e4) num_burnin_steps = int(1e3) kernel = tfp.mcmc.HamiltonianMonteCarlo( target_log_prob_fn=target_log_prob_fn, step_size=0.3, num_leapfrog_steps=3 ) kernel = tfp.mcmc.TransformedTransitionKernel( inner_kernel=kernel, bijector=[tfb.Exp(), tfb.Identity(), tfb.Identity()] ) kernel = tfp.mcmc.DualAveragingStepSizeAdaptation( inner_kernel=kernel, num_adaptation_steps=int(num_burnin_steps * 0.8) ) ``` ```python @tf.function(autograph=False) def sample_chain(): return tfp.mcmc.sample_chain( num_results=num_results, num_burnin_steps=num_burnin_steps, kernel=kernel, current_state=[ tf.constant(0.5, dtype=tf.float32), tf.constant(0.3, dtype=tf.float32), tf.constant(0.2, dtype=tf.float32) ], trace_fn=lambda _, pkr: [pkr.inner_results.inner_results.accepted_results.step_size, pkr.inner_results.inner_results.log_accept_ratio] ) ``` ```python samples, [step_size, log_accept_ratio] = sample_chain() ``` ```python p_accept = tf.reduce_mean(tf.exp(tf.minimum(log_accept_ratio, 0.))) p_accept ``` ```python nrows = 3 labels = ['sigma', 'w_diet', 'w_volume'] fig, axes = plt.subplots(nrows=nrows, ncols=2, figsize=(10, 8)) axes[0][0].set_title('Prior') axes[0][1].set_title('Posterior') for i in range(nrows): sns.histplot(prior_samples[labels[i]], kde=True, ax=axes[i][0], label=labels[i]); sns.histplot(samples[i], kde=True, ax=axes[i][1], label=labels[i]); axes[i][0].legend(); axes[i][1].legend(); plt.tight_layout() ``` ```python sigmas, w_diets, w_volumes = samples ``` ```python performance_estimated = ( tf.linalg.matmul(data['diet'].values[..., tf.newaxis], w_diets[..., tf.newaxis], transpose_b=True) + tf.linalg.matmul(data['volume'].values[..., 
tf.newaxis], w_volumes[..., tf.newaxis], transpose_b=True) ) ``` ```python quanties_performance = tf.transpose(tfp.stats.percentile( performance_estimated, [2.5, 97.5], axis=1, interpolation=None, keepdims=False, )) ``` ```python mean_y = tf.math.reduce_mean(performance_estimated, axis=1) mean_y ``` ```python std_y = tf.math.reduce_std(performance_estimated, axis=1) std_y ``` ```python fig, ax = plt.subplots(figsize=(9, 8)) ax.errorbar( x=data['performance'], y=mean_y, yerr=2*std_y, fmt='o', capsize=2, label='predictions +/- CI' ) sns.regplot( x=data['performance'], y=mean_y, scatter=False, line_kws=dict(alpha=0.5), label='performance / predicted performance', truncate=False, ax=ax ); ax.set(ylabel='predicted performance'); plt.legend(); ``` When we fit `diet` and `volume` together `diet` seems to lose causality!! ## Interesting Problem ```python pgm = daft.PGM(grid_unit=4.0, node_unit=2.5) pgm.add_node("diet", r"$Diet$", 0.5, 3, scale=1.5, fontsize=24) pgm.add_node("rest", r"$Rest$", 0.5, 1.5, scale=1.5, fontsize=24) pgm.add_node("volume", r"$Volume$", 0.5, 0, scale=1.5, fontsize=24) pgm.add_node("fatigue", r"$Fatigue$", 0.5, -1.5, scale=1.5, fontsize=24) pgm.add_node("question", r"$?$", 2.5 + 0.2, 1.5, scale=1.5, fontsize=24) pgm.add_node("perf", r"$Performance$", 5, 1.5, scale=2.5, fontsize=24) pgm.add_edge("diet", "question", plot_params=edge_params) pgm.add_edge("volume", "question", plot_params=edge_params) pgm.add_edge("rest", "question", plot_params=edge_params) pgm.add_edge("fatigue", "question", plot_params=edge_params) pgm.add_edge("question", "perf", plot_params=edge_params) pgm.render(); ``` ```python ``` ## Correlations won't work ## Better Solution? ## A/B Test! <center></center> ## Not so fast... <center></center> ## Control Group Fail... 
## Solution: Quasi Experiments <center></center> ## Structural Time Series <center></center> <center></center> ## Important thing is: Structures - [AutoRegressive](https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/sts/autoregressive.py#L258) - [DynamicRegression](https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/sts/dynamic_regression.py#L230) - [LocalLevel](https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/sts/local_level.py#L254) - [Seasonal](https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/sts/seasonal.py#L688) - [LocalLinearTrend](https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/sts/local_linear_trend.py#L222) - [SemiLocalLinearTrend](https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/sts/semilocal_linear_trend.py#L294) - [SmoothSeasonal](https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/sts/smooth_seasonal.py#L321) - [Regression](https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/sts/regression.py#L51) - [SparseLinearRegression](https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/sts/regression.py#L264) ## Local Level $$\mu_t = \mu_{t-1} + Normal(0, \sigma^2_{\mu})$$ ```python local_level_model = tfp.sts.LocalLevelStateSpaceModel( num_timesteps=20, level_scale=.1, initial_state_prior=tfd.MultivariateNormalDiag(scale_diag=[1.]) ) s = local_level_model.sample(1) plt.plot(tf.squeeze(s)); ``` ```python local_level_model.log_prob(s) ``` ## Local And Regression (Model Fit) ```python data = pd.read_csv('https://raw.githubusercontent.com/WillianFuks/tfcausalimpact/master/tests/fixtures/arma_data.csv', dtype=np.float32)[['y', 'X']] data.iloc[70:, 0] += 5 data.plot() plt.axvline(70, 0, 130, linestyle='--', color='r', linewidth=0.85); y = tf.cast(data['y'].values[:70], tf.float32) ``` ```python local_level = tfp.sts.LocalLevel( observed_time_series=y ) ``` ```python regression = tfp.sts.LinearRegression( design_matrix=tf.cast(data['X'].values[..., tf.newaxis], tf.float32) ) ``` ```python model = tfp.sts.Sum([local_level, regression], observed_time_series=y) ``` ```python samples, _ = tfp.sts.fit_with_hmc(model, y) ``` ```python one_step_predictive_dist = tfp.sts.one_step_predictive(model, observed_time_series=y, parameter_samples=samples) ``` ```python predictive_means = one_step_predictive_dist.mean() predictive_means ``` ```python predictive_scales = one_step_predictive_dist.stddev() predictive_scales ``` ```python plt.figure(figsize=(10, 9)) color = (1.0, 0.4981, 0.0549) plt.plot(y, label='y', color='k') # plt.plot(predictive_means[1:], color=color, label='predictive mean') plt.fill_between( np.arange(1, 70), predictive_means[1:70] - predictive_scales[1:70], predictive_means[1:70] + predictive_scales[1:70], alpha=0.4, color=color, label='predictive std' ) plt.legend(); ``` ```python forecast_dist = tfp.sts.forecast(model, observed_time_series=y, parameter_samples=samples, num_steps_forecast=30) ``` ```python forecast_means = tf.squeeze(forecast_dist.mean()) forecast_scales = tf.squeeze(forecast_dist.stddev()) ``` ```python plt.figure(figsize=(10, 9)) plt.plot(data['y'], label='y', color='k') plt.fill_between( np.arange(1, 70), predictive_means[1:] - predictive_scales[1:], predictive_means[1:] + predictive_scales[1:], alpha=0.4, color=color, label='predictive std' ) 
plt.plot(np.arange(70, 100), forecast_means, color='r', label='mean forecast') plt.fill_between( np.arange(70, 100), forecast_means - forecast_scales, forecast_means + forecast_scales, alpha=0.4, color='red', label='forecast std' ) plt.legend(); ``` ## How to obtain causal impact? ### tfcausalimpact ```python !pip install tfcausalimpact > /dev/null ``` ```python from causalimpact import CausalImpact data = pd.read_csv('https://raw.githubusercontent.com/WillianFuks/tfcausalimpact/master/tests/fixtures/arma_data.csv')[['y', 'X']] data.iloc[70:, 0] += 5 pre_period = [0, 69] post_period = [70, 99] ci = CausalImpact(data, pre_period, post_period) ``` ```python print(ci.summary()) ``` ```python print(ci.summary(output='report')) ``` ```python ci.plot(figsize=(15, 15)) ``` ```python ci.model.components_by_name ``` ```python ci.model_samples ``` ```python # https://www.tensorflow.org/probability/examples/Structural_Time_Series_Modeling_Case_Studies_Atmospheric_CO2_and_Electricity_Demand component_dists = tfp.sts.decompose_by_component( ci.model, observed_time_series=y, parameter_samples=ci.model_samples ) component_means, component_stddevs = ( {k.name: c.mean() for k, c in component_dists.items()}, {k.name: c.stddev() for k, c in component_dists.items()} ) def plot_components(dates, component_means_dict, component_stddevs_dict): x_locator, x_formatter = None, None colors = sns.color_palette() c1, c2 = colors[0], colors[1] axes_dict = collections.OrderedDict() num_components = len(component_means_dict) fig = plt.figure(figsize=(12, 2.5 * num_components)) for i, component_name in enumerate(component_means_dict.keys()): component_mean = component_means_dict[component_name] component_stddev = component_stddevs_dict[component_name] ax = fig.add_subplot(num_components, 1, 1 + i) ax.plot(dates, component_mean, lw=2) ax.fill_between(dates, component_mean-2*component_stddev, component_mean+2*component_stddev, color=c2, alpha=0.5) ax.set_title(component_name) if x_locator is not None: ax.xaxis.set_major_locator(x_locator) ax.xaxis.set_major_formatter(x_formatter) axes_dict[component_name] = ax # fig.autofmt_xdate() fig.tight_layout() plot_components(np.arange(0, 70), component_means, component_stddevs); ``` ## Real Example: Bitcoin ```python ! pip install pandas-datareader > /dev/null ``` ```python import datetime import pandas_datareader as pdr btc_data = pdr.get_data_yahoo(['BTC-USD'], start=datetime.datetime(2018, 1, 1), end=datetime.datetime(2020, 12, 3))['Close'] btc_data = btc_data.reset_index().drop_duplicates(subset='Date', keep='last').set_index('Date').sort_index() btc_data = btc_data.resample('D').fillna('nearest') X_data = pdr.get_data_yahoo(['TWTR', 'GOOGL', 'AAPL', 'MSFT', 'AMZN', 'FB', 'GOLD'], start=datetime.datetime(2018, 1, 1), end=datetime.datetime(2020, 12, 2))['Close'] X_data = X_data.reset_index().drop_duplicates(subset='Date', keep='last').set_index('Date').sort_index() X_data = X_data.resample('D').fillna('nearest') data = pd.concat([btc_data, X_data], axis=1) data.dropna(inplace=True) data = data.resample('W-Wed').last() # Weekly is easier to process. We select Wednesday so 2020-10-21 is available. 
data = data.astype(np.float32) np.log(data).plot(figsize=(15, 12)) plt.axvline('2020-10-14', 0, np.max(data['BTC-USD']), lw=2, ls='--', c='red', label='PayPal Impact') plt.legend(loc='upper left'); ``` ```python pre_period=['20180103', '20201014'] post_period=['20201021', '20201125'] ci = CausalImpact(data, pre_period, post_period, model_args={'fit_method': 'vi'}) ``` ```python print(ci.summary()) ``` ```python ci.plot(figsize=(15, 15)) ``` ## Tips ## Q - How select covariates? ## A - Yes! tfp.sts.SparseLinearRegression ## Decompose by [`statsmodels`](https://github.com/statsmodels/statsmodels) ```python !pip install statsmodels > /dev/null ``` ```python from statsmodels.tsa.seasonal import seasonal_decompose fig, axes = plt.subplots(4, 1, figsize=(15, 15)) fig.tight_layout() res = seasonal_decompose(data['BTC-USD']) axes[0].plot(res.observed) axes[0].set_title('Observed') axes[1].plot(res.trend) axes[1].set_title('Trend') axes[2].plot(res.seasonal) axes[2].set_title('Seasonal') axes[3].plot(res.resid) axes[3].set_title('Residuals'); ``` # Fourier FTW ```python # https://colab.research.google.com/drive/10VADEg8F5t_FuryEf_ObFfeIFwX-CxII?usp=sharing#scrollTo=UyA6K6GTyJqF from numpy.fft import rfft, irfft, rfftfreq def annot_max(x, y, ax=None): xmax = x[np.argmax(y)] ymax = y.max() text= "x={:.3f}, y={:.3f}".format(xmax, ymax) #text = f"{xmax=}, {ymax=}, (period: {1./xmax} days)" #Eh, Colab has Python 3.6 ... text = f"x={xmax:.3f}, y={ymax:.3f}, (period: {(1./xmax):.2f} weeks)" if not ax: ax=plt.gca() bbox_props = dict(boxstyle="square,pad=0.3", fc="w", ec="k", lw=0.72) arrowprops=dict(arrowstyle="->",connectionstyle="angle,angleA=0,angleB=60") kw = dict(xycoords='data',textcoords="axes fraction", arrowprops=arrowprops, bbox=bbox_props, ha="right", va="top") ax.annotate(text, xy=(xmax, ymax), xytext=(0.94, 0.96), **kw) y = data['BTC-USD'] nobs = len(y) btc_ft = np.abs(rfft(y)) btc_freq = rfftfreq(nobs) plt.plot(btc_freq[2:], btc_ft[2:]) annot_max(btc_freq[2:], btc_ft[2: ]); ``` # Cross Validation ```python plt.figure(figsize=(15, 10)) plt.plot(y) plt.axvline(pd.to_datetime('2018-09-01'), 0, 19000, c='r') plt.axvline(pd.to_datetime('2019-09-01'), 0, 19000, c='g') plt.text(pd.to_datetime('2018-03-01'), 18000, 'train', bbox=dict(fill=False, edgecolor='k', linewidth=0.5), fontdict={'fontsize': 20}) plt.text(pd.to_datetime('2019-01-01'), 18000, 'cross-validate', bbox=dict(fill=False, edgecolor='k', linewidth=0.5), fontdict={'fontsize': 20}) plt.text(pd.to_datetime('2020-02-01'), 18000, 'causal impact', bbox=dict(fill=False, edgecolor='k', linewidth=0.5), fontdict={'fontsize': 20}); ``` ## And that's pretty much it ;)! ## Thanks!
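As a brief postscript to the covariate-selection question above, a minimal sketch (assuming the weekly `data` DataFrame built in the Bitcoin example; the `weights_prior_scale` value is illustrative) of swapping the plain regression component for `tfp.sts.SparseLinearRegression`, whose shrinkage prior pulls the weights of uninformative covariates towards zero:

```python
# Sketch only: a sparsity-inducing regression component for covariate selection.
y_btc = tf.cast(data['BTC-USD'].values, tf.float32)
design = tf.cast(data.drop(columns=['BTC-USD']).values, tf.float32)

sparse_regression = tfp.sts.SparseLinearRegression(
    design_matrix=design,
    weights_prior_scale=0.1,   # illustrative; controls how strongly weights shrink
)
sparse_model = tfp.sts.Sum(
    [tfp.sts.LocalLevel(observed_time_series=y_btc), sparse_regression],
    observed_time_series=y_btc,
)
```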
81d9c36a9b07531f942d04616fdf50cc824a018d
47,109
ipynb
Jupyter Notebook
pyDataSP - tfcausalimpact.ipynb
WillianFuks/pyDataSP-tfcausalimpact
612a9e6326e278cf21fbee118a204fa5c2e95b92
[ "MIT" ]
1
2021-08-22T09:59:34.000Z
2021-08-22T09:59:34.000Z
pyDataSP - tfcausalimpact.ipynb
WillianFuks/pyDataSP-tfcausalimpact
612a9e6326e278cf21fbee118a204fa5c2e95b92
[ "MIT" ]
null
null
null
pyDataSP - tfcausalimpact.ipynb
WillianFuks/pyDataSP-tfcausalimpact
612a9e6326e278cf21fbee118a204fa5c2e95b92
[ "MIT" ]
null
null
null
23.126657
157
0.517332
true
6,515
Qwen/Qwen-72B
1. YES 2. YES
0.760651
0.709019
0.539316
__label__eng_Latn
0.159494
0.091341
# 823 HW2 ## https://yiyangzhang2020.github.io/yz628-823-blog/ ## Number theory and a Google recruitment puzzle ### Find the first 10-digit prime in the decimal expansion of 17π. ### The first 5 digits in the decimal expansion of π are 14159. The first 4-digit prime in the decimal expansion of π are 4159. You are asked to find the first 10-digit prime in the decimal expansion of 17π. First solve sub-problems (divide and conquer): ### ------------------------------------------------------------------------------------------------------- ### Write a function to generate an arbitrary large expansion of a mathematical expression like π. #### (Hint: You can use the standard library `decimal` or the 3rd party library `sympy` to do this) ### For the first function, I used the sympy library (mpmath). Since the function wasn't outputting enough decimal places, I set the mp.dps as 1000. ### This function has two input conditions to check, either pi(π) or e(euler number). Next, I set a multiplier and multiply them by it. Then I set the precision criterion to get the number of digits after the decimal wanted and turn it into a string. This creates the decimal expansion of a given number. ```python import math try: from sympy.mpmath import mp except ImportError: from mpmath import mp mp.dps=1000 def create_expansion(precision,number,multiplier): ''' this function takes a number with a multiplier and outputs its decimal expansion with a specific number of decimals input: precision, number of decimals wanted number, number to be expanded('pi' or 'e') multiplier, multiplier of the number to be expanded returns: an string of decimal expansion of the input number ''' #check if number is pi if number =='pi': #create a string of the number(multiplied by the multiplier) expansion with precision as number of decimals str_pi = str(( multiplier*mp.pi)).replace('.','')[0:precision] return(str_pi) #check if number is e elif number =='e': str_e = str(( multiplier*mp.e)).replace('.','')[0:precision] return(str_e) #type either 'pi' or 'e' for number in create_expansion function #print(create_expansion(50,'pi',17)) #print(create_expansion(25,'e',1)) ``` ### Unit test of create_expansion function: ```python import unittest class TestNotebook(unittest.TestCase): def test_create_expansion(self): """test the expansion of the number we want""" self.assertEqual(create_expansion(5,'pi',1),str(31415)) unittest.main(argv=[''], verbosity=2, exit=False) ``` test_create_expansion (__main__.TestNotebook) test the expansion of the number we want ... ok ---------------------------------------------------------------------- Ran 1 test in 0.001s OK <unittest.main.TestProgram at 0x7f8427f6b950> ### ------------------------------------------------------------------------------------------------------- ### - Write a function to check if a number is prime. Hint: See Sieve of Eratosthenes ### For this function, the first criterion I set is to check if the given number is 1 or not. If it is 1, then it is not a prime number. ### Next we check if the given number is 2 or not. If it is 2, then it is a prime number. ### Then we check if the given number can be divided by 2, if so, it is an even number, thus it is not a prime number. ### Lastly, we check from 3 to the positive square root of x so that it only iterate a portion of X values. The step is 2 so no even number other than 2 will participate in this iteration. ### This function reduces the run time complexity dramatically from a function without the above steps. 
```python import math def IsPrimeNumber(x): ''' this function takes an input number and tests whether it is a prime number or not and outputs an answer x: int, input to be tested returns: an answer (True or False) ''' # exclude 1 which is not prime if x == 1: return False # take out 2 as a base case elif x == 2: return True elif x % 2 == 0: return False else: # iterate through 3 to the positive square root of x to see if x can be divided by any, step is 2 which excludes all even numbers. for y in range(3, int(math.sqrt(x) + 1), 2): # if x can be divided, then x is not prime if x % y == 0: return False # if x can not be divided, then x is a prime number return True ``` ### Unit test of IsPrimeNumber function: ```python class TestNotebook(unittest.TestCase): def test_IsPrimeNumber(self): """test IsPrimeNumber""" self.assertFalse(IsPrimeNumber(1)) self.assertTrue(IsPrimeNumber(2)) self.assertFalse(IsPrimeNumber(51)) self.assertTrue(IsPrimeNumber(1373)) self.assertFalse(IsPrimeNumber(33333)) unittest.main(argv=[''], verbosity=2, exit=False) ``` test_IsPrimeNumber (__main__.TestNotebook) test IsPrimeNumber ... ok ---------------------------------------------------------------------- Ran 1 test in 0.001s OK <unittest.main.TestProgram at 0x7f84390c1d10> ### ------------------------------------------------------------------------------------------------------- ### - Write a function to generate sliding windows of a specified width from a long iterable (e.g. a string representation of a number) ### Then we have the window function, which generates sliding windows of a specified width from a long iterable. This one is pretty straightforward: it returns a list of sliding windows (substrings of the input string). One interesting part of this function is that I added a list called 'seen' which records every sliding window (substring) we have seen, so we will not have repeated sliding windows (substrings) in the output list (non_repeated). This reduces the run time, since a lot of redundant values would otherwise be checked later if the specified width is too small. ```python def window(seq, width): ''' this function takes an input string and returns all the substrings with the length wanted seq: str, input string to be sliced into 'windows' width: length of windows wanted returns: all the substrings(windows) with the length wanted(width) ''' #exclude the number before the decimal seq=seq[1:] #create two lists, seen and non_repeated seen, non_repeated =[], [] #iterate through the input string for i in range(0,len(seq)-(width-1)): #create windows of given width t = seq[i:i+width] #exclude repeated windows if t not in seen: #collect non-repeated windows non_repeated.append(t) #record this window as seen seen.append(t) #return a list of non-repeated windows return list(non_repeated) ``` ```python print(window(str(12345678), 4)) ``` ['2345', '3456', '4567', '5678'] ### Unit test of window function: ```python class TestNotebook(unittest.TestCase): def test_window(self): """test window.""" self.assertEqual(window(str(12345678), 4), ['2345', '3456', '4567', '5678']) unittest.main(argv=[''], verbosity=2, exit=False) ``` test_window (__main__.TestNotebook) test window. ... ok ---------------------------------------------------------------------- Ran 1 test in 0.001s OK <unittest.main.TestProgram at 0x7f8427f0f710> ### ------------------------------------------------------------------------------------------------------- ### Now use these helper functions to write the function that you need.
### Write a unit test for this final function, given that the first 10-digit prime in the expansion e is 7427466391. ### Finally, solve the given problem. ### This function uses all of the helper functions I wrote above. ### It iterates through the list of numbers generated by the window function from the string of the expansion of the given number generated by the create_expansion function, and then checks whether every number of this list is a prime number using the IsPrimeNumber function. ### Lastly, it returns the first prime number of the expansion with the wanted length of digits. ```python def give_prime_expansion(digits_of_number, input_number, multiplier, length_of_prime): ''' this function takes the digits of number, an input number('pi' or 'e'), a multiplier and the length of prime number we want, returns the first prime number in the expansion. digits_of_number: the number of decimal digits the input_number: pi or e multiplier: a multiplier of the input number length_of_prime: the length of the prime number we want returns: the first prime number in the decimal expansion of the input number ''' #iterte through the list of windows of numbers for numbers in window(create_expansion(digits_of_number,input_number,multiplier),length_of_prime): #check if the number is prime if IsPrimeNumber(int(numbers)): #output the number we want print(f"The first {length_of_prime}-digit prime number in the decimal expansion of {multiplier} {input_number} is: ") return(int(numbers)) break ``` ### Thus we can find the first 10-digit prime in the decimal expansion of 17π. ```python #example print(give_prime_expansion(99,'pi',17,10)) ``` The first 10-digit prime number in the decimal expansion of 17 pi is: 8649375157 ### Unit test of give_prime_expansion function: ```python class TestNotebook(unittest.TestCase): def test_give_prime_expansion(self): """test test_give_prime_expansion .""" self.assertEqual(give_prime_expansion(120,'e',1,10), 7427466391) self.assertEqual(give_prime_expansion(99,'pi',1,5), 14159) unittest.main(argv=[''], verbosity=2, exit=False) ``` test_give_prime_expansion (__main__.TestNotebook) test test_give_prime_expansion . ... The first 10-digit prime number in the decimal expansion of 1 e is: The first 5-digit prime number in the decimal expansion of 1 pi is: ok ---------------------------------------------------------------------- Ran 1 test in 0.008s OK <unittest.main.TestProgram at 0x7f8427f19d90> ```python ``` ```python ```
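As a small aside (not part of the original assignment), the trial-division check can be cross-checked against `sympy.isprime`, assuming `sympy` is installed:

```python
# Cross-check: sympy's isprime should agree with IsPrimeNumber on the
# two 10-digit answers quoted above (for e and for 17*pi).
from sympy import isprime

for candidate in (7427466391, 8649375157):
    print(candidate, IsPrimeNumber(candidate), isprime(candidate))
```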
8039d14c6774d3a80550c307fbbf2bc4bf618ace
16,685
ipynb
Jupyter Notebook
_notebooks/2021-09-17-Yiyang-Zhang-823-HW2.ipynb
yiyangzhang2020/yz628-823-blog
12ea3947c3e2f0fb0eb5d5acc4c1baf8e4954aec
[ "Apache-2.0" ]
null
null
null
_notebooks/2021-09-17-Yiyang-Zhang-823-HW2.ipynb
yiyangzhang2020/yz628-823-blog
12ea3947c3e2f0fb0eb5d5acc4c1baf8e4954aec
[ "Apache-2.0" ]
null
null
null
_notebooks/2021-09-17-Yiyang-Zhang-823-HW2.ipynb
yiyangzhang2020/yz628-823-blog
12ea3947c3e2f0fb0eb5d5acc4c1baf8e4954aec
[ "Apache-2.0" ]
null
null
null
32.587891
592
0.538388
true
2,392
Qwen/Qwen-72B
1. YES 2. YES
0.877477
0.851953
0.747569
__label__eng_Latn
0.990043
0.575185
# Source-free parallel RLC circuit Jupyter Notebook developed by [Gustavo S.S.](https://github.com/GSimas) Parallel RLC circuits have many applications, such as filter design and communication networks. Suppose the initial current I0 in the inductor and the initial voltage V0 on the capacitor are: \begin{align} {\Large i(0) = I_0 = \frac{1}{L} \int_{-\infty}^{0} v(t) dt} \end{align} \begin{align} {\Large v(0) = V_0} \end{align} Therefore, applying KCL at the top node gives: \begin{align} {\Large \frac{v}{R} + \frac{1}{L} \int_{-\infty}^{t} v(\tau) d\tau + C \frac{dv}{dt} = 0} \end{align} Taking the derivative with respect to t and dividing by C results in: \begin{align} {\Large \frac{d^2v}{dt^2} + \frac{1}{RC} \frac{dv}{dt} + \frac{1}{LC} v = 0} \end{align} We obtain the characteristic equation by replacing the first derivative with s and the second derivative with s^2: \begin{align} {\Large s^2 + \frac{1}{RC} s + \frac{1}{LC} = 0} \end{align} Thus, the roots of the characteristic equation are: \begin{align} {\Large s_{1,2} = -\alpha \pm \sqrt{\alpha^2 - \omega_0^2}} \end{align} where: \begin{align} {\Large \alpha = \frac{1}{2RC}, \space \space \space \omega_0 = \frac{1}{\sqrt{LC}} } \end{align} ## Supercritical Damping / Overdamping (α > ω0) When α > ω0, the roots of the characteristic equation are real and negative. The response is: \begin{align} {\Large v(t) = A_1 e^{s_1 t} + A_2 e^{s_2 t} } \end{align} ## Critical Damping (α = ω0) When α = ω0, the roots of the characteristic equation are real and equal, so the response is: \begin{align} {\Large v(t) = (A_1 + A_2t)e^{-\alpha t}} \end{align} ## Underdamping (α < ω0) When α < ω0, the roots are complex and can be expressed as: \begin{align} {\Large s_{1,2} = -\alpha \pm j\omega_d} \\ \\{\Large \omega_d = \sqrt{\omega_0^2 - \alpha^2}} \end{align} \begin{align} {\Large v(t) = e^{-\alpha t}(A_1 cos(\omega_d t) + A_2 sin(\omega_d t))} \end{align} The constants A1 and A2 in each case can be determined from the initial conditions: we need v(0) and dv(0)/dt. In particular, applying KCL at t = 0 gives C dv(0)/dt + i(0) + v(0)/R = 0, that is, dv(0)/dt = -(V_0 + R I_0)/(RC), which is how dv(0)/dt is obtained in the code below. **Exemplo 8.5** In the parallel circuit of Figure 8.13, determine v(t) for t > 0, assuming v(0) = 5 V, i(0) = 0, L = 1 H and C = 10 mF. Consider the following cases: R = 1.923 Ω, R = 5 Ω and R = 6.25 Ω.
```python print("Exemplo 8.5") from sympy import * m = 10**(-3) #definicao de mili L = 1 C = 10*m v0 = 5 i0 = 0 A1 = symbols('A1') A2 = symbols('A2') t = symbols('t') def sqrt(x, root = 2): #definir funcao para raiz y = x**(1/root) return y print("\n--------------\n") ## PARA R = 1.923 R = 1.923 print("Para R = ", R) def resolve_rlc(R,L,C): alpha = 1/(2*R*C) omega = 1/sqrt(L*C) print("Alpha:",alpha) print("Omega:",omega) s1 = -alpha + sqrt(alpha**2 - omega**2) s2 = -alpha - sqrt(alpha**2 - omega**2) def rlc(alpha,omega): #funcao para verificar tipo de amortecimento resposta = "" if alpha > omega: resposta = "superamortecimento" v = A1*exp(s1*t) + A2*exp(s2*t) elif alpha == omega: resposta = "amortecimento critico" v = (A1 + A2*t)*exp(-alpha*t) else: resposta = "subamortecimento" v = exp(-alpha*t)*(A1*cos(omega_d*t) + A2*sin(omega_d*t)) return resposta,v resposta,v = rlc(alpha,omega) print("Tipo de resposta:",resposta) print("Resposta v(t):",v) print("v(0):",v.subs(t,0)) print("dv(0)/dt:",v.diff(t).subs(t,0)) return alpha,omega,s1,s2,resposta,v alpha,omega,s1,s2,resposta,v = resolve_rlc(R,L,C) #v(0) = 5 = A1 + A2 -> A2 = 5 - A1 #dv(0)/dt = -2A1 - 50A2 #C*dv(0)/dt + i(0) + v(0)/R = 0 #0.01*(-2A1 - 50A2) + 0 + 5/1.923 = 0 #(-2A1 -50(5 - A1)) = -5/(1.923*0.01) #48A1 = 250 - 5/(1.923*0.01) A1 = (250 - 5/(1.923*0.01))/48 print("Constante A1:",A1) A2 = 5 - A1 print("Constante A2:",A2) v = A1*exp(s1*t) + A2*exp(s2*t) print("Resposta v(t):",v,"V") print("\n--------------\n") ## PARA R = 5 R = 5 A1 = symbols('A1') A2 = symbols('A2') print("Para R = ", R) alpha,omega,s1,s2,resposta,v = resolve_rlc(R,L,C) #v(t) = (A1 + A2t)e^(-alpha*t) #v(0) = A1 = 5 A1 = 5 #C*dv(0)/dt + i(0) + v(0)/R = 0 #0.01(-10A1 + A2) + 0 + 5/5 = 0 #0.01A2 = -1 + 0.5 A2 = (-1 + 0.5)/0.01 print("Constante A1:",A1) print("Constante A2:",A2) v = (A1 + A2*t)*exp(-alpha*t) print("Resposta v(t):",v,"V") print("\n--------------\n") ## PARA R = 6.25 R = 6.25 A1 = symbols('A1') A2 = symbols('A2') print("Para R = ", R) omega_d = sqrt(omega**2 - alpha**2) alpha,omega,s1,s2,resposta,v = resolve_rlc(R,L,C) #v(t) = e^-(alpha*t)*(A1cos(wd*t) + A2sen(wd*t)) #v(0) = A1 = 5 A1 = 5 #C*dv(0)/dt + i(0) + v(0)/R = 0 #0.01*(-8A1 + 6A2) + 0 + 5/6.25 = 0 #-0.4 + 0.06A2 = -5/6.25 A2 = (-5/6.25 + 0.4)/0.06 print("Constante A1:",A1) print("Constante A2:",A2) v = exp(-alpha*t)*(A1*cos(omega_d*t) + A2*sin(omega_d*t)) print("Resposta v(t):",v,"V") ``` Exemplo 8.5 -------------- Para R = 1.923 Alpha: 26.001040041601662 Omega: 10.0 Tipo de resposta: superamortecimento Resposta v(t): A1*exp(-1.9999133337787*t) + A2*exp(-50.0021667494246*t) v(0): A1 + A2 dv(0)/dt: -1.9999133337787*A1 - 50.0021667494246*A2 Constante A1: -0.20855000866701326 Constante A2: 5.208550008667014 Resposta v(t): 5.20855000866701*exp(-50.0021667494246*t) - 0.208550008667013*exp(-1.9999133337787*t) V -------------- Para R = 5 Alpha: 10.0 Omega: 10.0 Tipo de resposta: amortecimento critico Resposta v(t): (A1 + A2*t)*exp(-10.0*t) v(0): A1 dv(0)/dt: -10.0*A1 + A2 Constante A1: 5 Constante A2: -50.0 Resposta v(t): (-50.0*t + 5)*exp(-10.0*t) V -------------- Para R = 6.25 Alpha: 8.0 Omega: 10.0 Tipo de resposta: subamortecimento Resposta v(t): A1*exp(-8.0*t) v(0): A1 dv(0)/dt: -8.0*A1 Constante A1: 5 Constante A2: -6.666666666666667 Resposta v(t): 5*exp(-8.0*t) V **Problema Prático 8.5** Na Figura 8.13, seja R = 2 Ω, L = 0,4 H, C = 25 mF, v(0) = 0, e i(0) = 50 mA. Determine v(t) para t > 0. 
```python print("Problema Prático 8.5") R = 2 L = 0.4 C = 25*m v0 = 0 i0 = 50*m A1 = symbols('A1') A2 = symbols('A2') alpha,omega,s1,s2,resposta,v = resolve_rlc(R,L,C) #C*dv(0)/dt + i(0) + v(0)/R = 0 #C*(-10A1 + A2) + i0 + v(0)/2 = 0 #v(0) = 0 = A1 #C*A2 = -i0 A2 = -i0/C A1 = 0 print("Constante A1:",A1) print("Constante A2:",A2) v = (A1 + A2*t)*exp(-10.0*t) print("Resposta v(t):",v,"V") ``` Problema Prático 8.5 Alpha: 10.0 Omega: 10.0 Tipo de resposta: amortecimento critico Resposta v(t): (A1 + A2*t)*exp(-10.0*t) v(0): A1 dv(0)/dt: -10.0*A1 + A2 Constante A1: 0 Constante A2: -2.0 Resposta v(t): -2.0*t*exp(-10.0*t) V **Exemplo 8.6** Determine v(t) para t > 0 no circuito RLC da Figura 8.15. ```python print("Exemplo 8.6") u = 10**(-6) #definicao de micro Vs = 40 L = 0.4 C = 20*u A1 = symbols('A1') A2 = symbols('A2') #Para t < 0 v0 = Vs*50/(50 + 30) i0 = -Vs/(50 + 30) print("V0:",v0,"V") print("i0:",i0,"A") #Para t > 0 #C*dv(0)/dt + i(0) + v(0)/50 = 0 #20u*dv(0)/dt - 0.5 + 0.5 = 0 #dv(0)/dt = 0 R = 50 alpha,omega,s1,s2,resposta,v = resolve_rlc(R,L,C) #v(0) = 25 = A1 + A2 #A1 = 25 - A2 #dv(0)/dt = -146A1 - 854A2 = 0 #-146(25 - A2) - 854A2 = 0 #146A2 - 854A2 = 3650 #-708A2 = 3650 A2 = -3650/708 A1 = 25 - A2 print("Constante A1:",A1) print("Constante A2:",A2) v = A1*exp(s1*t) + A2*exp(s2*t) print("Resposta v(t):",v,"V") ``` Exemplo 8.6 V0: 25.0 V i0: -0.5 A Alpha: 500.0 Omega: 353.5533905932738 Tipo de resposta: superamortecimento Resposta v(t): A1*exp(-146.446609406726*t) + A2*exp(-853.553390593274*t) v(0): A1 + A2 dv(0)/dt: -146.446609406726*A1 - 853.553390593274*A2 Constante A1: 30.15536723163842 Constante A2: -5.155367231638418 Resposta v(t): -5.15536723163842*exp(-853.553390593274*t) + 30.1553672316384*exp(-146.446609406726*t) V
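As a complementary visual check (not part of the original example), the sketch below plots the three responses found in Example 8.5 with numpy and matplotlib. The overdamped and critically damped expressions are copied (rounded) from the printed results above; the underdamped expression is rebuilt from $\alpha = 8$, $\omega_d = \sqrt{\omega_0^2 - \alpha^2} = 6$ and the constants $A_1 = 5$, $A_2 \approx -6.67$ worked out in the hand-calculation comments, since in the cell above `omega_d` is evaluated before `alpha` and `omega` are updated for R = 6.25, which makes the printed underdamped response drop its sine and cosine factors.

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 1.0, 500)

# R = 1.923 ohm (overdamped), rounded from the printed result above
v_over = 5.2086*np.exp(-50.002*t) - 0.2086*np.exp(-2.0*t)

# R = 5 ohm (critically damped), as printed above
v_crit = (5 - 50.0*t)*np.exp(-10.0*t)

# R = 6.25 ohm (underdamped), rebuilt with alpha = 8, omega_d = 6, A1 = 5, A2 = -6.67
v_under = np.exp(-8.0*t)*(5*np.cos(6.0*t) - 6.67*np.sin(6.0*t))

plt.plot(t, v_over, label='R = 1.923 Ω (overdamped)')
plt.plot(t, v_crit, label='R = 5 Ω (critically damped)')
plt.plot(t, v_under, label='R = 6.25 Ω (underdamped)')
plt.xlabel('t (s)'); plt.ylabel('v(t) (V)')
plt.legend(); plt.grid(True)
plt.show()
```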
324c716e4d05506b0ae48282565bf5f02765f9c9
12,760
ipynb
Jupyter Notebook
Aula 14 - Circuito RLC paralelo.ipynb
ofgod2/Circuitos-electricos-Boylestad-12ed-Portugues
60e815f6904858f3cda8b5c7ead8ea77aa09c7fd
[ "MIT" ]
7
2019-08-13T13:33:15.000Z
2021-11-16T16:46:06.000Z
Aula 14 - Circuito RLC paralelo.ipynb
ofgod2/Circuitos-electricos-Boylestad-12ed-Portugues
60e815f6904858f3cda8b5c7ead8ea77aa09c7fd
[ "MIT" ]
1
2017-08-24T17:36:15.000Z
2017-08-24T17:36:15.000Z
Aula 14 - Circuito RLC paralelo.ipynb
ofgod2/Circuitos-electricos-Boylestad-12ed-Portugues
60e815f6904858f3cda8b5c7ead8ea77aa09c7fd
[ "MIT" ]
8
2019-03-29T14:31:49.000Z
2021-12-30T17:59:23.000Z
28.482143
119
0.447571
true
3,520
Qwen/Qwen-72B
1. YES 2. YES
0.826712
0.843895
0.697658
__label__por_Latn
0.459219
0.459225
## Interactive Hypothesis Testing Demonstration

### Bootstrap and Analytical Methods for Hypothesis Testing, Difference in Means

* we calculate the hypothesis test for the difference in means with the bootstrap and compare to the analytical expression
* **Welch's t-test**: we assume the features are Gaussian distributed and the variances are unequal

#### Michael Pyrcz, Associate Professor, University of Texas at Austin

##### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1) | [GeostatsPy](https://github.com/GeostatsGuy/GeostatsPy)

#### Hypothesis Testing

Powerful methodology for spatial data analytics:

1. we extracted sample sets 1 and 2; the means look different, but are they?
2. should we suspect that the samples are in fact from 2 different populations?

Now, let's try the t-test, the hypothesis test for the difference in means. This test assumes that the data are Gaussian distributed; since we use Welch's version, the variances are not assumed to be equal (see the course notes for more on this). This is our test:

\begin{equation}
H_0: \mu_{X1} = \mu_{X2}
\end{equation}

\begin{equation}
H_1: \mu_{X1} \ne \mu_{X2}
\end{equation}

To test this we will calculate the t statistic with the bootstrap and analytical approaches.

#### The Welch's t-test for Difference in Means by Analytical and Empirical Methods

We work with the following test statistic, the *t statistic*, from the two sample sets.

\begin{equation}
\hat{t} = \frac{\overline{x}_1 - \overline{x}_2}{\sqrt{\frac{s^2_1}{n_1} + \frac{s^2_2}{n_2}}}
\end{equation}

where $\overline{x}_1$ and $\overline{x}_2$ are the sample means, $s^2_1$ and $s^2_2$ are the sample variances and $n_1$ and $n_2$ are the number of samples from the two datasets.

The critical value, $t_{critical}$, is calculated from the analytical expression:

\begin{equation}
t_{critical} = \left|t(\frac{\alpha}{2},\nu)\right|
\end{equation}

The degrees of freedom, $\nu$, are calculated as follows, with $\mu = s^2_2 / s^2_1$:

\begin{equation}
\nu = \frac{\left(\frac{1}{n_1} + \frac{\mu}{n_2}\right)^2}{\frac{1}{n_1^2(n_1-1)} + \frac{\mu^2}{n_2^2(n_2-1)}}
\end{equation}

Alternatively, the sampling distribution of the $t_{statistic}$ and $t_{critical}$ may be calculated empirically with the bootstrap. The workflow proceeds as:

* shift both sample sets to have the mean of the combined data, $x_1$ → $x^*_1$, $x_2$ → $x^*_2$
* for each bootstrap realization, $\ell=1,\ldots,L$
    * perform $n_1$ Monte Carlo simulations, draws with replacement, from sample set $x^*_1$
    * perform $n_2$ Monte Carlo simulations, draws with replacement, from sample set $x^*_2$
    * calculate the $t_{statistic}$ realization, $\hat{t}^{\ell}$, given the resulting sample means $\overline{x}^{*,\ell}_1$ and $\overline{x}^{*,\ell}_2$ and the sample variances $s^{*,2,\ell}_1$ and $s^{*,2,\ell}_2$
* pool the results to assemble the $t_{statistic}$ sampling distribution
* calculate the cumulative probability of the observed $t_{statistic}$, $\hat{t}$, from the bootstrap distribution based on $\hat{t}^{\ell}$, $\ell = 1,\ldots,L$.

Here's some prerequisite information on the bootstrap.

#### Bootstrap

Uncertainty in the sample statistics:

* one source of uncertainty is the paucity of data.
* do 200 or even fewer wells provide a precise (and accurate) estimate of the mean? standard deviation? skew? P13? Would it be useful to know the uncertainty in these statistics due to limited sampling?
* what is the impact of uncertainty in the mean porosity, e.g. 20% +/- 2%?

**Bootstrap** is a method to assess the uncertainty in a sample statistic by repeated random sampling with replacement.

Assumptions

* sufficient, representative sampling; identical, independent samples

Limitations

1. assumes the samples are representative
2. assumes stationarity
3. only accounts for uncertainty due to too few samples, e.g. no uncertainty due to changes away from data
4. does not account for boundary of area of interest
5. assumes the samples are independent
6. does not account for other local information sources

The Bootstrap Approach (Efron, 1982)

Statistical resampling procedure to calculate uncertainty in a calculated statistic from the data itself.

* Does this work? Prove it to yourself; for uncertainty in the mean the solution is the standard error:

\begin{equation}
\sigma^2_\overline{x} = \frac{\sigma^2_s}{n}
\end{equation}

Extremely powerful - could calculate uncertainty in any statistic! e.g. P13, skew etc.

* It would not be possible to access general uncertainty in any statistic without the bootstrap.
* Advanced forms account for spatial information and sampling strategy (game theory and Journel's spatial bootstrap, 1993).

Steps:

1. assemble a sample set; it must be representative and it is reasonable to assume independence between samples
2. optional: build a cumulative distribution function (CDF)
    * may account for declustering weights, tail extrapolation
    * could use analogous data to support
3. For $\ell = 1, \ldots, L$ realizations, do the following:
    * For $i = 1, \ldots, n$ data, do the following:
        * Draw a random sample with replacement from the sample set or Monte Carlo simulate from the CDF (if available).
4. Calculate a realization of the summary statistic of interest from the $n$ samples, e.g. $m^\ell$, $\sigma^2_{\ell}$. Return to 3 for another realization.
5. Compile and summarize the $L$ realizations of the statistic of interest.

This is a very powerful method. Let's try it out and compare the result to the analytical form of the confidence interval for the sample mean.

#### Objective

Provide an example and demonstration of:

1. interactive plotting in Jupyter Notebooks with the Python packages matplotlib and ipywidgets
2. an intuitive hands-on example of confidence intervals, compared to the statistical bootstrap

#### Getting Started

Here are the steps to get set up in Python with the GeostatsPy package:

1. Install Anaconda 3 on your machine (https://www.anaconda.com/download/).
2. Open Jupyter and, in the top block, get started by copying and pasting the code block below from this Jupyter Notebook to start using the geostatspy functionality.

#### Load the Required Libraries

The following code loads the required libraries.
```python %matplotlib inline from ipywidgets import interactive # widgets and interactivity from ipywidgets import widgets from ipywidgets import Layout from ipywidgets import Label from ipywidgets import VBox, HBox import matplotlib.pyplot as plt # plotting import numpy as np # working with arrays import pandas as pd # working with DataFrames from scipy import stats # statistical calculations import random # random drawing / bootstrap realizations of the data ``` #### Make a Synthetic Dataset This is an interactive method to: * select a parametric distribution * select the distribution parameters * select the number of samples and visualize the synthetic dataset distribution ```python # interactive calculation of the sample set (control of source parametric distribution and number of samples) l = widgets.Text(value=' Interactive Hypothesis Testing, Difference in Means, Analytical & Bootstrap Methods, Michael Pyrcz, Associate Professor, The University of Texas at Austin',layout=Layout(width='950px', height='30px')) n1 = widgets.IntSlider(min=0, max = 100, value = 10, step = 1, description = '$n_{1}$',orientation='horizontal',layout=Layout(width='300px', height='30px'),continuous_update=False) n1.style.handle_color = 'red' m1 = widgets.FloatSlider(min=0, max = 50, value = 3, step = 1.0, description = '$\overline{x}_{1}$',orientation='horizontal',layout=Layout(width='300px', height='30px'),continuous_update=False) m1.style.handle_color = 'red' s1 = widgets.FloatSlider(min=0, max = 10, value = 3, step = 0.25, description = '$s_1$',orientation='horizontal',layout=Layout(width='300px', height='30px'),continuous_update=False) s1.style.handle_color = 'red' ui1 = widgets.VBox([n1,m1,s1],) # basic widget formatting n2 = widgets.IntSlider(min=0, max = 100, value = 10, step = 1, description = '$n_{2}$',orientation='horizontal',layout=Layout(width='300px', height='30px'),continuous_update=False) n2.style.handle_color = 'yellow' m2 = widgets.FloatSlider(min=0, max = 50, value = 3, step = 1.0, description = '$\overline{x}_{2}$',orientation='horizontal',layout=Layout(width='300px', height='30px'),continuous_update=False) m2.style.handle_color = 'yellow' s2 = widgets.FloatSlider(min=0, max = 10, value = 3, step = 0.25, description = '$s_2$',orientation='horizontal',layout=Layout(width='300px', height='30px'),continuous_update=False) s2.style.handle_color = 'yellow' ui2 = widgets.VBox([n2,m2,s2],) # basic widget formatting L = widgets.IntSlider(min=10, max = 1000, value = 100, step = 1, description = '$L$',orientation='horizontal',layout=Layout(width='300px', height='30px'),continuous_update=False) L.style.handle_color = 'gray' alpha = widgets.FloatSlider(min=0, max = 50, value = 3, step = 1.0, description = '$α$',orientation='horizontal',layout=Layout(width='300px', height='30px'),continuous_update=False) alpha.style.handle_color = 'gray' ui3 = widgets.VBox([L,alpha],) # basic widget formatting ui4 = widgets.HBox([ui1,ui2,ui3],) # basic widget formatting ui2 = widgets.VBox([l,ui4],) def f_make(n1, m1, s1, n2, m2, s2, L, alpha): # function to take parameters, make sample and plot np.random.seed(73073) x1 = np.random.normal(loc=m1,scale=s1,size=n1) np.random.seed(73074) x2 = np.random.normal(loc=m2,scale=s2,size=n2) mu = (s2*s2)/(s1*s1) nu = ((1/n1 + mu/n2)*(1/n1 + mu/n2))/(1/(n1*n1*(n1-1)) + ((mu*mu)/(n2*n2*(n2-1)))) prop_values = np.linspace(-8.0,8.0,100) analytical_distribution = stats.t.pdf(prop_values,df = nu) analytical_tcrit = stats.t.ppf(1.0-alpha*0.005,df = nu) # Analytical Method with SciPy 
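    # Note: the observed Welch t statistic below is computed from the raw (unshifted) samples,
    # while the bootstrap loop that follows resamples mean-shifted copies (x1s, x2s) so that
    # the null hypothesis of equal means holds in the resampled world.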
t_stat_observed, p_value_analytical = stats.ttest_ind(x1,x2,equal_var=False) # Bootstrap Method global_average = np.average(np.concatenate([x1,x2])) # shift the means to be equal to the globla mean x1s = x1 - np.average(x1) + global_average x2s = x2 - np.average(x2) + global_average t_stat = np.zeros(L); p_value = np.zeros(L) random.seed(73075) for l in range(0, L): # loop over realizations samples1 = random.choices(x1s, weights=None, cum_weights=None, k=len(x1s)) #print(samples1) samples2 = random.choices(x2s, weights=None, cum_weights=None, k=len(x2s)) #print(samples2) t_stat[l], p_value[l] = stats.ttest_ind(samples1,samples2,equal_var=False) bootstrap_lower = np.percentile(t_stat,alpha * 0.5) bootstrap_upper = np.percentile(t_stat,100.0 - alpha * 0.5) plt.subplot(121) #print(t_stat) plt.hist(x1,cumulative = False, density = True, alpha=0.4,color="red",edgecolor="black", bins = np.linspace(0,50,50), label = '$x_1$') plt.hist(x2,cumulative = False, density = True, alpha=0.4,color="yellow",edgecolor="black", bins = np.linspace(0,50,50), label = '$x_2$') plt.ylim([0,0.4]); plt.xlim([0.0,30.0]) plt.title('Sample Distributions'); plt.xlabel('Value'); plt.ylabel('Density') plt.legend() #plt.hist(x2) plt.subplot(122) plt.ylim([0,0.6]); plt.xlim([-8.0,8.0]) plt.title('Bootstrap and Analytical $t_{statistic}$ Sampling Distributions'); plt.xlabel('$t_{statistic}$'); plt.ylabel('Density') plt.plot([t_stat_observed,t_stat_observed],[0.0,0.6],color = 'black',label='observed $t_{statistic}$') plt.plot([bootstrap_lower,bootstrap_lower],[0.0,0.6],color = 'blue',linestyle='dashed',label = 'bootstrap interval') plt.plot([bootstrap_upper,bootstrap_upper],[0.0,0.6],color = 'blue',linestyle='dashed') plt.plot(prop_values,analytical_distribution, color = 'red',label='analytical $t_{statistic}$') plt.hist(t_stat,cumulative = False, density = True, alpha=0.2,color="blue",edgecolor="black", bins = np.linspace(-8.0,8.0,50), label = 'bootstrap $t_{statistic}$') plt.fill_between(prop_values, 0, analytical_distribution, where = prop_values <= -1*analytical_tcrit, facecolor='red', interpolate=True, alpha = 0.2) plt.fill_between(prop_values, 0, analytical_distribution, where = prop_values >= analytical_tcrit, facecolor='red', interpolate=True, alpha = 0.2) ax = plt.gca() handles,labels = ax.get_legend_handles_labels() handles = [handles[0], handles[2], handles[3], handles[1]] labels = [labels[0], labels[2], labels[3], labels[1]] plt.legend(handles,labels,loc=1) plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=1.2, wspace=0.2, hspace=0.2) plt.show() # connect the function to make the samples and plot to the widgets interactive_plot = widgets.interactive_output(f_make, {'n1': n1, 'm1': m1, 's1': s1, 'n2': n2, 'm2': m2, 's2': s2, 'L': L, 'alpha': alpha}) interactive_plot.clear_output(wait = True) # reduce flickering by delaying plot updating ``` ### Boostrap and Analytical Methods for Hypothesis Testing, Difference in Means * including the analytical and bootstrap methods for testing the difference in means * interactive plot demonstration with ipywidget, matplotlib packages #### Michael Pyrcz, Associate Professor, University of Texas at Austin ##### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | 
[YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1) | [GeostatsPy](https://github.com/GeostatsGuy/GeostatsPy) ### The Problem Let's simulate bootstrap, resampling with replacement from a hat with $n_{red}$ and $n_{green}$ balls * **$n_1$**, **$n_2$** number of samples, **$\overline{x}_1$**, **$\overline{x}_2$** means and **$s_1$**, **$s_2$** standard deviation of the 2 sample sets * **$L$**: number of bootstrap realizations * **$\alpha$**: alpha level ```python display(ui2, interactive_plot) # display the interactive plot ``` VBox(children=(Text(value=' Interactive Hypothesis Testing, Difference in Means, Analytical & Bootstrap Method… Output() #### Observations Some observations: * lower dispersion and higher difference in means increases the absolute magnitude of the observed $t_{statistic}$ * the bootstrap distribution closely matches the analytical distribution if $L$ is large enough * it is possible to use bootstrap to calculate the sampling distribution instead of relying on the theoretical express distribution, in this case the Student's t distribution. #### Comments This was a demonstration of interactive hypothesis testing for the significance in difference in means aboserved between 2 sample sets in Jupyter Notebook Python with the ipywidgets and matplotlib packages. I have many other demonstrations on data analytics and machine learning, e.g. on the basics of working with DataFrames, ndarrays, univariate statistics, plotting data, declustering, data transformations, trend modeling and many other workflows available at https://github.com/GeostatsGuy/PythonNumericalDemos and https://github.com/GeostatsGuy/GeostatsPy. I hope this was helpful, *Michael* #### The Author: ### Michael Pyrcz, Associate Professor, University of Texas at Austin *Novel Data Analytics, Geostatistics and Machine Learning Subsurface Solutions* With over 17 years of experience in subsurface consulting, research and development, Michael has returned to academia driven by his passion for teaching and enthusiasm for enhancing engineers' and geoscientists' impact in subsurface resource development. For more about Michael check out these links: #### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1) #### Want to Work Together? I hope this content is helpful to those that want to learn more about subsurface modeling, data analytics and machine learning. Students and working professionals are welcome to participate. * Want to invite me to visit your company for training, mentoring, project review, workflow design and / or consulting? I'd be happy to drop by and work with you! * Interested in partnering, supporting my graduate student research or my Subsurface Data Analytics and Machine Learning consortium (co-PIs including Profs. Foster, Torres-Verdin and van Oort)? My research combines data analytics, stochastic modeling and machine learning theory with practice to develop novel methods and workflows to add value. We are solving challenging subsurface problems! * I can be reached at mpyrcz@austin.utexas.edu. 
I'm always happy to discuss, *Michael* Michael Pyrcz, Ph.D., P.Eng. Associate Professor The Hildebrand Department of Petroleum and Geosystems Engineering, Bureau of Economic Geology, The Jackson School of Geosciences, The University of Texas at Austin #### More Resources Available at: [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1) ```python ```
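For readers who want the core calculation without the widget scaffolding, here is a minimal non-interactive sketch of the same workflow. It is not part of the original demonstration: the synthetic sample parameters, seed and $L$ are arbitrary choices, and only standard numpy/scipy calls are used.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(73073)
x1 = rng.normal(loc=3.0, scale=3.0, size=10)   # synthetic sample set 1
x2 = rng.normal(loc=5.0, scale=3.0, size=10)   # synthetic sample set 2

# Analytical Welch's t-test (unequal variances)
t_obs, p_analytical = stats.ttest_ind(x1, x2, equal_var=False)

# Bootstrap sampling distribution of the t statistic under H0 (equal means):
# shift both samples to the pooled mean, then resample with replacement.
pooled_mean = np.mean(np.concatenate([x1, x2]))
x1s = x1 - x1.mean() + pooled_mean
x2s = x2 - x2.mean() + pooled_mean

L = 1000
t_boot = np.empty(L)
for ell in range(L):
    r1 = rng.choice(x1s, size=x1s.size, replace=True)
    r2 = rng.choice(x2s, size=x2s.size, replace=True)
    t_boot[ell] = stats.ttest_ind(r1, r2, equal_var=False)[0]

# Two-sided bootstrap p-value: fraction of resampled |t| at least as extreme as observed
p_bootstrap = np.mean(np.abs(t_boot) >= abs(t_obs))
print(t_obs, p_analytical, p_bootstrap)
```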
21d100e460e2365a9d25d1e9d2c96f4f62cf0800
23,805
ipynb
Jupyter Notebook
Interactive_Hypothesis_Testing.ipynb
Skipper6931/PythonNumericalDemos
9822f8252ed8d714e029163eaede75a5d1232bc9
[ "MIT" ]
403
2017-10-15T02:07:38.000Z
2022-03-30T15:27:14.000Z
Interactive_Hypothesis_Testing.ipynb
ahmeduncc/PythonNumericalDemos
c4702799196f549c72a2b2378ffe8ee4e7d66c8c
[ "MIT" ]
4
2019-08-21T10:35:09.000Z
2021-02-04T04:57:13.000Z
Interactive_Hypothesis_Testing.ipynb
ahmeduncc/PythonNumericalDemos
c4702799196f549c72a2b2378ffe8ee4e7d66c8c
[ "MIT" ]
276
2018-06-27T11:20:30.000Z
2022-03-25T16:04:24.000Z
51.75
512
0.619492
true
4,966
Qwen/Qwen-72B
1. YES 2. YES
0.841826
0.859664
0.723687
__label__eng_Latn
0.895487
0.519699
(c) Juan Gomez 2019. Thanks to Universidad EAFIT for support. This material is part of the course Introduction to Finite Element Analysis

# Elasticity in a notebook

## Introduction

This notebook summarizes the boundary value problem (BVP) for the linearized theory of elasticity. It is assumed that the student is familiar with the fundamental concepts of stress and strain. After presenting the BVP in its stress and displacement forms, the notebook focuses on the principle of virtual work as an alternative representation of equilibrium. The notebook concludes with a proposed homework or in-class activity which particularizes the general equations to a two-dimensional idealization.

**After completing this notebook you should be able to:**

* Recognize the equations of equilibrium for a material point in an elastic medium in its stress and displacement forms.
* Recognize the equations of equilibrium for a material point in an elastic medium in its displacement form, together with conditions of the displacement field at the surface, as an elliptic boundary value problem.
* Recognize the principle of virtual displacements as an alternative formulation of the equilibrium for a material point in an elastic medium.
* Recognize two-dimensional idealizations in the forms of plane strain and plane stress models.

## Equilibrium equations.

Consider the schematic representation of an elastic solid occupying a volume $V$ bounded by the surface $S$. The volume and the surface are termed the **domain** and the **boundary** respectively. The outward vector normal to the boundary is $\widehat n$.

<center></center>

The solid is subjected to external actions in the form of **surface tractions $t_i^{(\widehat n)}$** applied directly through the boundary and distant actions or **body forces $f_i$**. The prescribed surface tractions are applied over the part of the surface labeled $S_t$, while over the remaining part of the surface, $S_u = S - S_t$, there are prescribed displacements ${\overline u}_i$. The prescribed tractions and displacements are termed the **boundary conditions (BCs)** of the problem, and for the resulting boundary value problem to be well posed $S_t$ and $S_u$ must satisfy:

\begin{align*}
S & =S_t\cup S_u\\
\varnothing & =S_t\cap S_u.
\end{align*}

**Questions:**

**For the problem shown in the figure below identify the traction and displacement boundary conditions and specify the regions $S_t$ and $S_u$. Indicate the normal vector in each part of the surface.**

**According to the specified BCs, is the problem well posed?**

<center></center>

### Governing equations

To find governing equations for the internal stresses $\sigma_{ij}$ appearing at a field point of position vector $\overrightarrow x$ we apply the laws of conservation of linear momentum and of moment of momentum over an arbitrary region of the solid. From the arbitrariness of the computational region it follows that the equilibrium equations for a field point $\overrightarrow x$ are:

\begin{align*}
& \sigma_{ij,j}+f_i=0\\
& \sigma_{ij}=\sigma_{ji}.
\end{align*}

for $\overrightarrow x$ in $V$.

There are two issues with the above equilibrium equations:

* They have infinitely many solutions of the form $\sigma_{ij}=\sigma_{ij}(\overrightarrow x)$.
* There are 9 unknowns but only 6 equations, making the system indeterminate.

To destroy the indeterminacy we must consider the kinematic changes in the problem and their connection to the stresses.
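Before moving on to kinematics, the point-wise equilibrium equations above can be checked symbolically for any candidate stress field. The sketch below is only an illustration (the function name and test fields are chosen here for demonstration); it uses the two-dimensional form of $\sigma_{ij,j}+f_i=0$ written out later in this notebook.

```python
import sympy as sp

x, y = sp.symbols('x y')

def check_equilibrium_2d(sxx, syy, txy, fx=0, fy=0):
    """Residuals of the 2D point-wise equilibrium equations (both zero if satisfied)."""
    r1 = sp.diff(sxx, x) + sp.diff(txy, y) + fx
    r2 = sp.diff(txy, x) + sp.diff(syy, y) + fy
    return sp.simplify(r1), sp.simplify(r2)

print(check_equilibrium_2d(y**2, x**2, 0))   # (0, 0): equilibrium satisfied with zero body force
print(check_equilibrium_2d(x**2, 0, 0))      # (2*x, 0): not in equilibrium without a body force
```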
**Questions:**

**If the stress tensor at a point in a given reference system $x-y$ is given by the matrix**

$$
\sigma=\begin{bmatrix}\sigma_{xx}&\tau_{xy}\\\tau_{xy}&\sigma_{yy}\end{bmatrix}
$$

**find the tensor at the same point but expressed in a second coordinate system $x' - y'$ rotated by an angle $\theta$ with respect to the first one.**

## Kinematic description.

To describe the change in configuration at the **material point** level, and the total change of configuration of the complete domain, we represent local changes at a point in terms of the relative displacement $du_i$ as a linear transformation of differential fiber elements like:

$$
du_i=\varepsilon_{ij}dx_j+\omega_{ij}dx_j
$$

where $\varepsilon_{ij}$ and $\omega_{ij}$ are the strain tensor and the local rigid-body rotation vector respectively, given by:

$$
\varepsilon_{ij}=\frac12\left(\frac{\partial u_i}{\partial x_j}+\frac{\partial u_j}{\partial x_i}\right)
$$

and

$$
\omega_{ij}=\frac12\left(\frac{\partial u_i}{\partial x_j}-\frac{\partial u_j}{\partial x_i}\right).
$$

**Questions:**

**Using explicit notation write the particular forms of the relative displacements vector $du_i$, the strain tensor $\epsilon_{ij}$ and the rotation vector $\omega_{ij}$ for a two-dimensional idealization.**

## Stress-strain relationship (Hooke's law).

For a linear elastic material the strains can be related to the stresses in terms of material parameters $\mu$ and $\lambda$ as given by the following general form of Hooke's law:

$$
\sigma_{ij}=2\mu\varepsilon_{ij}+\lambda\varepsilon_{kk}\delta_{ij}.
$$

In this form $\mu$ and $\lambda$ are called Lamé parameters; they are material constants that completely define the material response and change from one material to another. In most engineering applications it is more common to describe the material response in terms of Young's modulus ($E$) and the shear modulus ($G\equiv\mu$). There are two alternative parameters that can be used in the description of a material, namely Poisson's ratio ($\nu$), relating direct normal strains to secondary transverse strains, and the compressibility parameter ($\kappa$), describing the material response to purely volumetric changes. In fact, of $E, \mu , \nu , \lambda$ and $\kappa$ only two material parameters are linearly independent, leaving the others as combinations of the two taken as the basis. See for instance Shames and Cozzarelli (1997).

## Equations of motion in terms of displacements.

To arrive at a solvable boundary value problem we substitute the strain-displacement relationship, i.e. the definition of $\varepsilon_{ij}$ in terms of $u_i$, into Hooke's law, giving:

$$
\sigma_{ij}=\mu(u_{i,j}+u_{j,i})+\lambda u_{k,k}\delta_{ij}
$$

thus allowing us to write:

$$
\sigma_{ij,j}=\mu(u_{i,jj}+u_{j,ij})+\lambda u_{k,ki}.
$$

Substituting the above into the stress equilibrium equations yields:

$$
\left(\lambda+\mu\right)u_{j,ij}+\mu u_{i,jj}+f_i=0
$$

which are the Navier equations governing the displacement field in an elastic solid. Note that the Navier equations involve equilibrium, kinematic relationships and Hooke's law. The boundary value problem is completed after considering the conditions at the boundary $S$, namely:

$$
t_i^{(\widehat n)}=\sigma_{ij}{\widehat n}_j
$$

for $\overset\rightharpoonup x\in S_t$ and

$$
u_i=\overline{u_i}
$$

for $\overset\rightharpoonup x\in S_u$.

**Questions:**

**Use explicit notation to write the particular form of the Navier equations in a two-dimensional idealization.**

## The principle of virtual work (or virtual displacements).
### Theorem

An elastic state $(\sigma_{ij},\varepsilon_{ij},u_i)$ is the unique solution to the boundary value problem given by:

$$
\sigma_{ij,j}+f_i=0
$$

for $\overset\rightharpoonup x\in V$ and the boundary conditions

$$
t_i^{(\widehat n)}=\sigma_{ij}{\widehat n}_j
$$

for $\overset\rightharpoonup x\in S_t$ and

$$
u_i=\overline{u_i}
$$

for $\overset\rightharpoonup x\in S_u$, where $S_t$ and $S_u$ are such that

\begin{align*}
S & =S_t\cup S_u\\
\varnothing & =S_t\cap S_u
\end{align*}

if

$$
\int_{V(\overset\rightharpoonup x)}\sigma_{ij}\delta\varepsilon_{ij}\operatorname dV(\overset\rightharpoonup x)-\int_{V(\overset\rightharpoonup x)}f_i\delta u_i\operatorname d{V(\overset\rightharpoonup x)}-\int_{S_t}t_i^{(\widehat n)}\delta u_i\operatorname dS=0
$$

for every arbitrary displacement field $\delta u_i$ such that $\delta u_i=0$ on $S_u$.

### Proof:

To show the validity of the PVW write

$$
\int_{V(\overset\rightharpoonup x)}\left(\sigma_{ij,j}+f_i\right)\delta u_i\operatorname dV(\overset\rightharpoonup x)=0
$$

which is valid since $\delta u_i$ is arbitrary. Then expand and use integration by parts in the first integral to get:

$$
-\int_{V(\overset\rightharpoonup x)}\sigma_{ij}\delta u_{i,j}\operatorname dV(\overset\rightharpoonup x)+\int_{S_t}t_i^{(\widehat n)}\delta u_i\operatorname dS+\int_{V(\overset\rightharpoonup x)}f_i\delta u_i\operatorname d{V(\overset\rightharpoonup x)}=0
$$

and use the symmetry condition $\sigma_{ij} = \sigma_{ji}$ to arrive at:

$$
\int_{V(\overset\rightharpoonup x)}\sigma_{ij}\delta\varepsilon_{ij}\operatorname dV(\overset\rightharpoonup x)-\int_{S_t}t_i^{(\widehat n)}\delta u_i\operatorname dS-\int_{V(\overset\rightharpoonup x)}f_i\delta u_i\operatorname d{V(\overset\rightharpoonup x)}=0.
$$

**Questions:**

* **If the stress solution for the wedge shown previously is given by:**

\begin{align*}
\sigma_{xx} & =SCot(\phi)\\
\sigma_{yy} & =-STan(\phi)\\
\tau_{xy} & =0.
\end{align*}

**write the particular form of the principle of virtual work.**

**Note: Propose an arbitrary function $\delta u_i$ such that $\delta u_i = 0$ on $S_u$ and verify that the principle holds for the given stress solution.**

## Two-dimensional idealization

In two-dimensional idealizations (e.g., plane strain and plane stress) the elastic state is defined in terms of the displacement vector $u^T=\begin{bmatrix}u&v\end{bmatrix}$, where $u$ and $v$ are the horizontal and vertical scalar components of the displacement. The kinematic relationships defining the strains

$$
\varepsilon^T=\begin{bmatrix}\varepsilon_{xx}&\varepsilon_{yy}&\gamma_{xy}\end{bmatrix}
$$

in terms of displacements follow:

\begin{align*}
\varepsilon_{xx} & =\left(\frac{\partial u}{\partial x}\right)\\
\varepsilon_{yy} & =\left(\frac{\partial v}{\partial y}\right)\\
\gamma_{xy} & =\left(\frac{\partial u}{\partial y}+\frac{\partial v}{\partial x}\right)
\end{align*}

Similarly the stress tensor

$$
\sigma^T=\begin{bmatrix}\sigma_{xx}&\sigma_{yy}&\tau_{xy}\end{bmatrix}
$$

satisfies the equilibrium equations

\begin{align*}
& \frac{\partial\sigma_{xx}}{\partial x}+\frac{\displaystyle\partial\tau_{xy}}{\displaystyle\partial y}+f_x=0\\
& \frac{\displaystyle\partial\tau_{xy}}{\displaystyle\partial x}+\frac{\partial\sigma_{yy}}{\partial y}+f_y=0.\\
\end{align*}

On the other hand, the constitutive relationship for plane stress can be written as

$$\left\{\sigma\right\}=\left[C\right]\left\{\varepsilon\right\}$$

where the constitutive matrix reads: $C=\frac E{1-\nu^2}\begin{bmatrix}1&\nu&0\\\nu&1&0\\0&0&\frac{1-\nu}2\end{bmatrix}$.
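To make the plane-stress relation $\left\{\sigma\right\}=\left[C\right]\left\{\varepsilon\right\}$ concrete, here is a minimal numerical illustration; the material values and strain vector below are arbitrary placeholders, not data from the class activity.

```python
import numpy as np

E, nu = 1.0, 1.0/3.0   # placeholder material properties

# Plane-stress constitutive matrix C
C = (E/(1.0 - nu**2))*np.array([[1.0, nu,  0.0],
                                [nu,  1.0, 0.0],
                                [0.0, 0.0, (1.0 - nu)/2.0]])

eps = np.array([1.0e-3, -0.5e-3, 2.0e-3])   # [eps_xx, eps_yy, gamma_xy]
sigma = C @ eps                             # [sigma_xx, sigma_yy, tau_xy]
print(sigma)
```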
## Class activity The symmetric wedge of semi-angle $\phi$ and side $\ell$ <center></center> is loaded by surface tractions of magnitude $S$ as shown in the figure. A very sensitive engineer assumes that the stress solution for this problem is given by: \begin{align*} \sigma_{xx} & =SCot(\phi)\\ \sigma_{yy} & =-STan(\phi)\\ \tau_{xy} & =0. \end{align*} * Verify that this is in fact the solution to the problem. * Find the corresponding strain field using the inverse form of the stress-strain relationship: \begin{align*} \varepsilon_{xx} & =\frac1E\left(\sigma_{xx}-\nu\sigma_{yy}\right)\\ \varepsilon_{yy} & =\frac1E\left(\sigma_{yy}-\nu\sigma_{xx}\right)\\ \gamma_{xy} & =\frac{\tau_{xy}}\mu \end{align*} * Find the displacement field after integrating the displacement increment: \begin{align*} du & =\varepsilon_{xx}dx+\frac12\gamma_{xy}dy\\ dv & =\frac12\gamma_{xy}dx+\varepsilon_{yy}dy \end{align*} * For the case of a wedge with $\phi=45^0$ and $\ell = 1.0$ complete the missing parts in the code provided below and plot the displacement field if $S= 1.0$, $E=1.0$ and $\nu = 1/3$. * Complete the missing parts in the code provided below and plot the internal strain energy $W=\int_{V(\overrightarrow x)}\frac12\sigma_{ij}\varepsilon_{ij}\operatorname dV(\overrightarrow x)$ ## Notes: * **For the stress field to be the solution it must satisfy the equations of equlibrium and the boundary conditions.** * **To plot the displacement field use the plotter coded in previous notebooks.** ## Python solution To visualize the solution use intrinsic Python interpolation tools like in previously developed examples. For that purpose create a mesh of triangular linear elements as shown in the figure. In this case the mesh is represented by the nodal and elemental files **Wnodes.txt** and **Weles.txt.** (These files must recide in memory.). Care must be taken with the location of the reference system in the mesh and that of the analytic solution. <center></center> ### Import modules ```python %matplotlib inline import matplotlib.pyplot as plt from matplotlib.tri import Triangulation, CubicTriInterpolator import numpy as np import sympy as sym ``` ### Wedge solution In the following function code the displacement and stress solution for a material point of coordinates $(x , y)$ in the wedge reference system: ```python def cunia(x,y): """Computes the solution for self-equilibated wedge at a point (x , y). """ # ux = 1.0 uy = 1.0 sigx = 1.0 sigy = 1.0 return ux , uy , sigx , sigy ``` ### Interpolation and visualization subroutines Use the plotting functions **plot_disp()** and **plot_stress()** from **SolidsPy** to visualize the displacement and stress solution. Recall that these subroutines also use two auxiliary functions to handle the conversion from the mesh to Pyhton Triangularization objects. ```python def plot_disp(UC, nodes, elements, Ngra , plt_type="contourf", levels=12, savefigs=False, title="Solution:" ): """Plots a 2D nodal displacement field using a triangulation. Parameters ---------- UC : ndarray (float) Array with the displacements. nodes : ndarray (float) Array with number and nodes coordinates: `number coordX coordY BCX BCY` elements : ndarray (int) Array with the node number for the nodes that correspond to each element. 
""" tri = mesh2tri(nodes, elements) tri_plot(tri, UC[:, 0] , Ngra, title=r'$u_x$', figtitle=title + "Horizontal displacement", levels=levels, plt_type=plt_type, savefigs=savefigs, filename="ux_sol.pdf" ) tri_plot(tri, UC[:, 1], Ngra , title=r'$u_y$', figtitle=title + "Vertical displacement", levels=levels, plt_type=plt_type, savefigs=savefigs, filename="uy_sol.pdf") ``` ```python def plot_stress(S_nodes, nodes, elements, Ngra , plt_type="contourf", levels=12, savefigs=False ): """Plots a 2 component stresses field using a triangulation. The stresses need to be computed at nodes first. Parameters ---------- S_nodes : ndarray (float) Array with the nodal stresses. nodes : ndarray (float) Array with number and nodes coordinates: `number coordX coordY` elements : ndarray (int) Array with the node number for the nodes that correspond to each element. """ tri = mesh2tri(nodes, elements) tri_plot(tri, S_nodes[:, 0], Ngra , title=r'$\sigma_{11}$', figtitle="Solution: sigma-xx stress", levels=levels, plt_type=plt_type, savefigs=savefigs, filename="sigmaxx_sol.pdf") tri_plot(tri, S_nodes[:, 1], Ngra , title=r'$\sigma_{22}$', figtitle="Solution: sigma-xy stress", levels=levels, plt_type=plt_type, savefigs=savefigs, filename="sigmaxy_sol.pdf") ``` ```python def mesh2tri(nodes, elements): """Generates a matplotlib.tri.Triangulation object from the mesh Parameters ---------- nodes : ndarray (float) Array with number and nodes coordinates: `number coordX coordY BCX BCY` elements : ndarray (int) Array with the node number for the nodes that correspond to each element. Returns ------- tri : Triangulation An unstructured triangular grid consisting of npoints points and ntri triangles. """ x = nodes[:, 1] y = nodes[:, 2] triangs = [] for el in elements: if el[1]==3: triangs.append(el[[3, 4, 5]]) triangs.append(el[[5, 6, 3]]) if el[1]==9: triangs.append(el[[3, 6, 8]]) triangs.append(el[[6, 7, 8]]) triangs.append(el[[6, 4, 7]]) triangs.append(el[[7, 5, 8]]) if el[1]==2: triangs.append(el[3:]) tri = Triangulation(x, y, np.array(triangs)) # return tri ``` ```python def tri_plot(tri, field, Ngra , title="", figtitle="", levels=12, savefigs=False, plt_type="contourf" , filename="solution_plot.pdf" ): plt.figure(Ngra) if plt_type=="pcolor": disp_plot = plt.tripcolor elif plt_type=="contourf": disp_plot = plt.tricontourf plt.figure(figtitle) disp_plot(tri, field, levels, shading="gouraud") plt.title(title) plt.colorbar(orientation='vertical') plt.axis("image") plt.grid() ``` ### Main program Complete the main program accordingly to read the mesh files and evaluate the solution at every nodal point from each element. ```python nodes = np.loadtxt('files/' + 'Wnodes.txt') elements = np.loadtxt('files/' +'Weles.txt') nn =len(nodes[:,0]) # coords=np.zeros([nn,2]) coords[:,0]=nodes[:,1] coords[:,1]=nodes[:,2] # U = np.zeros([nn , 2]) Sig = np.zeros([nn , 2]) ``` ```python for i in range(0,nn): x = coords[i,0] y = coords[i,1] ux , uy , sx , sy = cunia(x , y) U[i , 0] = ux U[i , 1] = uy Sig[i , 0] = sx Sig[i , 1] = sy ``` ### Plot the solution ```python plot_disp(U, nodes, elements , 1 , plt_type="contourf", levels=12 ) #plot_stress(Sig, nodes, elements , 2 , savefigs = True) ``` ### Glossary of terms **Boundary value problem:** A set of partial differential equations specified over a given domain $V$ bounded by a surface or boundary $S$ where bundary conditions or prescribed characteritics of the solution are available. 
**Material point:** Fundamental mathemtical abstraction in the continuum model and representing the equivalent of a particle in classical mechanics. This material point has no shape nor volume yet it experiences mechanical interactions. **Tractions vector:** This is the fundamental description of forces introduced by Cauchy. In fact tractions represent forces per unit surface at a material point. **Stress tensor:** The complete set of traction vectors associatted to three (two in plane problems) non co-lineal directions and completely defining the state of forces at the material point. **Strain tensor:** This second order tensor describes the local changes in shape along the infinite directions emanating from a material point. **Constitutive tensor:** Set of material parameters, transforming like a tensor, and fully describing the stress-strain response for a given material. ### References * Timoshenko, S.P., and Goodier, J.N. (1976). Theory of Elasticity. International Student Edition. McGraw-Hill International. * Love, A. E. H. (2013). A treatise on the mathematical theory of elasticity. Cambridge university press. * Shames, I.H and Cozzarelli, F.A. (1997). Elastic and inelastic stress analysis. Taylor and Francis. ```python from IPython.core.display import HTML def css_styling(): styles = open('./nb_style.css', 'r').read() return HTML(styles) css_styling() ``` <link href='http://fonts.googleapis.com/css?family=Fenix' rel='stylesheet' type='text/css'> <link href='http://fonts.googleapis.com/css?family=Alegreya+Sans:100,300,400,500,700,800,900,100italic,300italic,400italic,500italic,700italic,800italic,900italic' rel='stylesheet' type='text/css'> <link href='http://fonts.googleapis.com/css?family=Source+Code+Pro:300,400' rel='stylesheet' type='text/css'> <style> /* Template for Notebooks for Modelación computacional. Based on Lorena Barba template available at: https://github.com/barbagroup/AeroPython/blob/master/styles/custom.css */ /* Fonts */ @font-face { font-family: "Computer Modern"; src: url('http://mirrors.ctan.org/fonts/cm-unicode/fonts/otf/cmunss.otf'); } /* Text */ div.cell{ width:800px; margin-left:16% !important; margin-right:auto; } h1 { font-family: 'Alegreya Sans', sans-serif; } h2 { font-family: 'Fenix', serif; } h3{ font-family: 'Fenix', serif; margin-top:12px; margin-bottom: 3px; } h4{ font-family: 'Fenix', serif; } h5 { font-family: 'Alegreya Sans', sans-serif; } div.text_cell_render{ font-family: 'Alegreya Sans',Computer Modern, "Helvetica Neue", Arial, Helvetica, Geneva, sans-serif; line-height: 135%; font-size: 120%; width:600px; margin-left:auto; margin-right:auto; } .CodeMirror{ font-family: "Source Code Pro"; font-size: 90%; } /* .prompt{ display: None; }*/ .text_cell_render h1 { font-weight: 200; font-size: 50pt; line-height: 100%; color:#CD2305; margin-bottom: 0.5em; margin-top: 0.5em; display: block; } .text_cell_render h5 { font-weight: 300; font-size: 16pt; color: #CD2305; font-style: italic; margin-bottom: .5em; margin-top: 0.5em; display: block; } .warning{ color: rgb( 240, 20, 20 ) } </style> ```python ```
02b5e79d9d16899070949b88d438856ba590fda1
54,469
ipynb
Jupyter Notebook
notebooks/07_elasticity.ipynb
AppliedMechanics-EAFIT/Introductory-Finite-Elements
a4b44d8bf29bcd40185e51ee036f38102f9c6a72
[ "MIT" ]
39
2019-11-26T13:28:30.000Z
2022-02-16T17:57:11.000Z
notebooks/07_elasticity.ipynb
jgomezc1/Introductory-Finite-Elements
a4b44d8bf29bcd40185e51ee036f38102f9c6a72
[ "MIT" ]
null
null
null
notebooks/07_elasticity.ipynb
jgomezc1/Introductory-Finite-Elements
a4b44d8bf29bcd40185e51ee036f38102f9c6a72
[ "MIT" ]
18
2020-02-17T07:24:59.000Z
2022-03-02T07:54:28.000Z
57.822718
10,221
0.703299
true
5,750
Qwen/Qwen-72B
1. YES 2. YES
0.800692
0.766294
0.613565
__label__eng_Latn
0.964321
0.263848
<h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#Introduction" data-toc-modified-id="Introduction-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Introduction</a></span><ul class="toc-item"><li><span><a href="#Simulation" data-toc-modified-id="Simulation-1.1"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Simulation</a></span><ul class="toc-item"><li><span><a href="#The-Mesh" data-toc-modified-id="The-Mesh-1.1.1"><span class="toc-item-num">1.1.1&nbsp;&nbsp;</span>The Mesh</a></span></li><li><span><a href="#The-Elements-and-DofHandlers" data-toc-modified-id="The-Elements-and-DofHandlers-1.1.2"><span class="toc-item-num">1.1.2&nbsp;&nbsp;</span>The Elements and DofHandlers</a></span></li><li><span><a href="#The-Gaussian-Field" data-toc-modified-id="The-Gaussian-Field-1.1.3"><span class="toc-item-num">1.1.3&nbsp;&nbsp;</span>The Gaussian Field</a></span></li><li><span><a href="#Assembly" data-toc-modified-id="Assembly-1.1.4"><span class="toc-item-num">1.1.4&nbsp;&nbsp;</span>Assembly</a></span></li><li><span><a href="#Solver" data-toc-modified-id="Solver-1.1.5"><span class="toc-item-num">1.1.5&nbsp;&nbsp;</span>Solver</a></span></li><li><span><a href="#Quantity-of-Interest" data-toc-modified-id="Quantity-of-Interest-1.1.6"><span class="toc-item-num">1.1.6&nbsp;&nbsp;</span>Quantity of Interest</a></span></li></ul></li></ul></li><li><span><a href="#Multiresolution-Approximation" data-toc-modified-id="Multiresolution-Approximation-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Multiresolution Approximation</a></span></li></ul></div> # Optimal Upscaling ```python # Add src folder to path import os import sys sys.path.insert(0,'../../src/') ``` ## Introduction Suppose $u$ satisfies the elliptic partial differential equation \begin{align*} -\nabla \cdot (\exp(q) \nabla u) = 1, \ \ &x \in D,\\ u = 0, \ \ &x \in \partial D \end{align*} where $D=[0,1]^2$ is a physical domain and $q(x,\omega)$ is a Gaussian random field. Moreover, let \begin{equation} J = \iint_{R} u(x,\omega) dx, \ \ \text{where } R = [0.75,1]^2, \end{equation} be a physical quantity of interest for which we want to compute statistics using Monte Carlo sampling or whatever. ### Simulation First, we show how to generate a single sample of $J$. Specifically, we will 1. Construct the computational mesh and define the elements 2. Generate samples of the random parameter $q$ 3. Assemble the finite element system 4. Solve the system for $u$ 5. Compute the associated $J$ We will use the ```quadmesh``` package, obtainable [here](https://github.com/hvanwyk/quadmesh). #### The Mesh The ```mesh``` module contains methods for generating and working with meshes. In this work, we use quadrilateral meshes, since they can be readily refined. On our mesh, we need to mark the boundary $\partial D$ and the region of integration $R$. To do this, we use marking functions that check whether a point lies in the given region. 
```python from mesh import QuadMesh from plot import Plot plot = Plot() # Define the Quadrilateral Mesh mesh = QuadMesh(resolution=(16,16)) # Mark boundary bnd_fn = lambda x,y: abs(x)<1e-6 or abs(1-x)<1e-6 or abs(y)<1e-6 or abs(1-y)<1e-6 mesh.mark_region('bnd', bnd_fn, entity_type='half_edge', on_boundary=True) # Mark averaging region dmn_fn = lambda x,y: x>=0.75 and x<=1 and y>=0.75 and y<=1 mesh.mark_region('dmn', dmn_fn, entity_type='cell', strict_containment=True, on_boundary=False) #cells = mesh.get_region(flag='dmn', entity_type='cell', on_boundary=False, subforest_flag=None) plot.mesh(mesh, regions=[('bnd','edge'),('dmn','cell')]) ``` #### The Elements and DofHandlers Next we need to define the approximation space, i.e. the elements. This is done using the ```QuadFE``` class. - Here we use piecewise constant elements (```DQ0```) to represent the parameter and piecewise linear elements (```Q1```) for the solution. - Since we would like to have multiple different elements associated with the same mesh, we handle the element degrees of freedom by means of a **degrees-of-freedom-handler** or ```DofHandler```, which contains information of the ```mesh``` and the ```element```. - For the assembly and function definitions, we may also require ```Basis``` functions. These also encode derivative information. ```python from fem import QuadFE, DofHandler, Basis # # Elements # Q0 = QuadFE(mesh.dim(), 'DQ0') # "Discontinuous Quadratic of Degree 0", for parameter Q1 = QuadFE(mesh.dim(), 'Q1') # "Continuous Linear" for output # # DofHandlers # dQ0 = DofHandler(mesh,Q0) dQ1 = DofHandler(mesh,Q1) # # Distribute DOFs, i.e. store the dof-indices in the dofhandler # dQ0.distribute_dofs() dQ1.distribute_dofs() # # Basis functions for assembly and definition of Nodal functions # phi_0 = Basis(dQ0) # piecewise constant phi_1 = Basis(dQ1) # piecewise linear phix_1 = Basis(dQ1,'ux') # piecewise linear first partial w.r.t. x phiy_1 = Basis(dQ1,'uy') # piecewise linear first partial w.r.t. y ``` #### The Gaussian Field The ```gmrf``` module contains routines to generate a variety of Gaussian random fields. 1. First, the covariance matrix is assembled from a bivariate kernel function using the ```Covariance``` class. The assembly is dependent on the mesh and approximation space (element) associated with the field. 2. Next we compute its eigendecomposition. 3. Finally we define a Gaussian random field using the ```GaussianField``` class, which allows us to sample, condition, etc. 4. Samples from the Gaussian field are given as vectors but we can use these to define ```Nodal``` interpolants on the given mesh. ```python from function import Nodal from gmrf import Covariance from gmrf import GaussianField import numpy as np # # Approximate Covariance Matrix # cov = Covariance(dQ0, name='gaussian', parameters={'l':0.01}) cov.compute_eig_decomp() # # Define the Gaussian field # q = GaussianField(dQ0.n_dofs(), K=cov) # Sample Random field and store all samples in a Nodal DQ0 finite element function n_samples = 100 eq = Nodal(basis=phi_0, data=np.exp(q.sample(n_samples))) # # Plot a single sample # plot.contour(eq, n_sample=20) ``` #### Assembly The weak form of the elliptic PDE is given by \begin{equation}\label{eq:weak_form} \iint_D \exp(q) \nabla u \cdot \nabla \phi dx = \iint_D 1 \phi dx, \end{equation} for any test function $\phi$. 
Approximating $u = \sum_j c_j \phi_j$ in the Galerkin framework, we get $A c = b $, where \begin{equation}\label{eq:bilinear} A_{ij} = \iint_D \exp(q) \nabla \phi_j \cdot \nabla \phi_i dx = \iint_D \exp(q) \frac{\partial \phi_j}{\partial x} \frac{\partial \phi_i}{\partial x} + \exp(q) \frac{\partial \phi_j}{\partial u}\frac{\partial \phi_i}{\partial y} dx \end{equation} and \begin{equation}\label{eq:linear} b_i = \iint_D \phi_i dx. \end{equation} The assembly therefore requires a bilinear form $A$ and a linear form $b$. We define each form with the ```Form``` class in the ```assembler``` module. Each form requires a ```Kernel``` function, possibly a ```test``` function, and possibly a ```trial``` function (both of which should be ```Basis``` functions). > For example, the linear form \ref{eq:linear} is defined by ```Form(1, test=phi_1)```, while the bilinear form $\iint_D \exp(q) \frac{\partial \phi_j}{\partial x} \frac{\partial \phi_i}{\partial x}dx$ is stored as ```Form(eq, test=phix_1, trial=phix_1)```. 1. The object ```Form(kernel)``` is assembled into a scalar. 2. The linear form ```Form(kernel, test=phi_1)``` is assembled into a vector. 3. The bilinear form ```Form(kernel, test=phix_1, trial=phix_1)``` is assembled into a sparse matrix. Some forms, such as $\iint_R \phi_i dx$ should only be assembled over a subdomain of the mesh. To do this, use ```Form(1, test=phi_1, flag='dmn')```, where ```dmn``` was the flag marking the region $R$ from earlier. > __REMARK:__ When the kernel is a function with multiple samples (such as our function ```eq```), then the relevant form is assembled for all samples. The ```Assembler``` class handles finite element assembly. For each problem you want to assemble, e.g. the elliptic problem, or the problem of computing $J$, you list all relevant bilinear, linear, and constant forms you need. The assembler will automatically add up the contributions of all bilinear forms into a sparse matrix, those of all the linear forms into a vector, and those of all the constant forms into a scalar. To assemble multiple problems, simply form a ```list``` of all relevant problems. ```python from assembler import Assembler from assembler import Kernel from assembler import Form # Define weak form of elliptic problem problem1 = [Form(eq, test=phix_1, trial=phix_1), Form(eq, test=phiy_1, trial=phiy_1), Form(1, test=phi_1)] # Define integration operator problem2 = [Form(1,test=phi_1,flag='dmn')] # Put them together in a list state = [problem1,problem2] # Assemble system assembler = Assembler(state) # initialize the assembler assembler.add_dirichlet('bnd', dir_fn=0, i_problem=0) # add homogeneous Dirichlet conditions for problem 0 assembler.assemble() # assemble ``` The vector representing the integration operator $\iint_R \phi_i dx$ can be obtained by ```python J = assembler.get_vector(i_problem=1) ``` #### Solver To solve the elliptic equation, you simply specify the problem (default=0), the matrix sample (default=0), and the vector sample for the right hand side (default=0) you want to use, i.e. ```python # Solve system u_vec = assembler.solve(i_problem=0, i_matrix=20) # Define a finite element function using the solution vector u = Nodal(basis=phi_1, data=u_vec) # Plot plot.contour(u) ``` #### Quantity of Interest We can now compute $J = \iint_R u dx$ by taking dot product ```python J_sample = J.dot(u_vec) print(J_sample) ``` 0.0006180484835314365 ## Multiresolution Approximation ```python ```
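Before turning to the multiresolution approximation (left incomplete above), the pieces of the simulation workflow can be combined into a Monte Carlo loop over the stored samples. This is only a sketch, not part of the original notebook; it reuses the `assembler`, `J`, `n_samples` and `np` objects defined earlier, together with the `assembler.solve(i_problem, i_matrix)` call shown in the Solver step.

```python
# Estimate statistics of J over all sampled realizations of the random field exp(q)
J_samples = np.zeros(n_samples)
for i in range(n_samples):
    u_vec = assembler.solve(i_problem=0, i_matrix=i)   # solve with the i-th sampled field
    J_samples[i] = J.dot(u_vec)                        # quantity of interest for sample i

print('mean(J) =', J_samples.mean(), ' std(J) =', J_samples.std())
```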
974181cdda2591045b211f4b2164c40eb51b5b4e
62,983
ipynb
Jupyter Notebook
experiments/multiscale_gmrf/optimal_upscaling_01.ipynb
hvanwyk/quadmesh
12561f1fae1ce4cd27aa70c71041c3fdb417267c
[ "MIT" ]
6
2019-06-04T09:26:15.000Z
2021-10-21T05:00:23.000Z
experiments/multiscale_gmrf/optimal_upscaling_01.ipynb
hvanwyk/quadmesh
12561f1fae1ce4cd27aa70c71041c3fdb417267c
[ "MIT" ]
9
2018-02-28T22:04:43.000Z
2022-02-18T17:14:30.000Z
experiments/multiscale_gmrf/optimal_upscaling_01.ipynb
hvanwyk/quadmesh
12561f1fae1ce4cd27aa70c71041c3fdb417267c
[ "MIT" ]
null
null
null
141.217489
33,192
0.872267
true
2,874
Qwen/Qwen-72B
1. YES 2. YES
0.833325
0.721743
0.601446
__label__eng_Latn
0.924616
0.235692
(page_topic1)= Errors in Numerical Differentiation ======================= In order for the finite difference formulas derived in the previous section to be useful, we need to have some idea of the errors involved in using these formulas. As mentioned, we are replacing $f(x)$ with its polynomial interpolant $p_n(x)$ and then taking the derivatives, so that $$ f'(x) = p_n'(x) + \epsilon'(x),\qquad f''(x) = p_n''(x) + \epsilon''(x).$$ We need to $\epsilon'(x)$ and \epsilon''(x) to determine the errors in the formulas we derived. First recall [the polynomial interpolation error](../InterpFit/InterpErrors) is $$\begin{equation} \epsilon(x)= f(x)-p_n(x) = \frac{f^{n+1}(\xi)}{(n+1)!}\prod_{j=0}^n (x-x_j). \end{equation}$$ (inter_err) Taking the derivative of this using the product rule yields $$ \begin{equation} \epsilon'(x) = \frac{f^{n+1}(\xi)}{(n+1)!}\left[\sum_{k=0}^n \prod_{j=0,j\neq k}^n (x-x_j)\right] + \frac{f^{n+2}(\xi)\xi'}{(n+1)!}\prod_{j=0}^n (x-x_j), \end{equation}$$ (errprime) where the second term arises from the fact that $\xi$ may depend on $x$. Evaluating this at one of the gridpoints, say $x_i$, simplifies the expression considerably as all but one of the terms contain the factor $(x_i-x_i)$, $$ \begin{equation} \epsilon'(x_i) = \frac{f^{n+1}(\xi)}{(n+1)!}\prod_{j=0,j\neq i}^n (x_i-x_j). \end{equation}$$ (pprimeerr) We can now use this expression for the errors in our first derivative finite difference formulas. Before we do, let's derive the error formula for the second derivatives. Taking the derivative {eq}`errprime` yields $$ \begin{align} \epsilon''(x) = & \frac{f^{n+1}(\xi)}{(n+1)!}\left[ \sum_{k=0}^n \sum_{m=0, m\neq k}^n \prod_{j=0,j\neq k,m}^n (x-x_j)\right]\\ & + \frac{f^{n+2}(\xi)\xi''+f^{n+3}(\xi)\xi'}{(n+1)!}\prod_{j=0}^n (x-x_j) \\ &+ \frac{f^{n+2}(\xi)\xi'}{(n+1)!}\left[\sum_{k=0}^n \prod_{j=0,j\neq k}^n (x-x_j)\right],\end{align}$$ Evaluating this at one of the gridpoints, say $x_i$, simplifies the expression to, $$ \begin{align} \epsilon''(x_i) &=\frac{f^{n+1}(\xi)}{(n+1)!}\left[\sum_{m=0,m\neq i}^n \prod_{j=0, j\neq i,m} (x_i-x_j) + \sum_{k=0,k\neq i}^n \prod_{j=0, j\neq i,k} (x_i-x_j) \right] +\frac{f^{n+2}(\xi)\xi'}{(n+1)!}\prod_{j=0,j\neq i}^n (x_i-x_j),\\ &=\frac{2f^{n+1}(\xi)}{(n+1)!}\left[\sum_{k=0,k\neq i}^n \prod_{j=0, j\neq i,k} (x_i-x_j) \right] +\frac{f^{n+2}(\xi)\xi'}{(n+1)!}\prod_{j=0,j\neq i}^n (x_i-x_j). \end{align}$$ (ppperr) This is a bit messier than the error for the first derivative {eq}`pprimeerr`. In most cases one can argue that the second term is small compared to the first as it contains one additional factor of the difference of grid points and one expects $\xi'$ to be small. **Example: Error in the two point formulas** In the previous section, for two points $x_0$ and $x_1=x_0+h$ we derived the forward/backwards difference formulae and we would now like to know the error in using these formulae. Using {eq}`pprimeerr` with $n=1$ we straightforwardly get $$ \epsilon'(x_0) = \frac{f''(\xi)}{2}(-h),\quad \text{and}\quad \epsilon'(x_1) = \frac{f''(\xi)}{2}(h),$$ noting that $\xi$ is not likely to be the same number in the two cases. So, in summary, we now have $$ \begin{equation} \left[{\begin{array}{c} f'(x_0) \\ f'(x_1) \\ \end{array}} \right] = \left[{\begin{array}{c} \frac{f(x_1)-f(x_0)}{h} \\ \frac{f(x_1)-f(x_0)}{h} \\ \end{array}} \right] + \left[{\begin{array}{c} -\frac{h}{2}f''(\xi) \\ \frac{h}{2}f''(\xi) \\ \end{array}} \right]. 
\end{equation} $$ (twopoint) **Example: Error in the three point formulae** In the previous section, for three points $x_0,\,x_1=x_0+h,$ and $x_2=x_0+2h$ we derived finite difference formulae for first and second derivatives and we would now like to know the error in using these formulae. Using {eq}`pprimeerr` with $n=2$ we straightforwardly get the result for the first derivatives: $$ \begin{equation} \left[{\begin{array}{c} f'(x_0) \\ f'(x_1) \\ f'(x_2) \end{array}} \right] = \left[{\begin{array}{c} \frac{-3f(x_0)+4f(x_1)-f(x_2)}{2h} \\ \frac{f(x_2)-f(x_0)}{2h} \\ \frac{3f(x_2)-4f(x_1)+f(x_0)}{2h} \end{array}} \right] + \left[{\begin{array}{c} \frac{h^2}{3}f^{(3)}(\xi) \\ -\frac{h^2}{6}f^{(3)}(\xi) \\ \frac{h^2}{3}f^{(3)}(\xi) \\ \end{array}} \right]. \end{equation} $$ First note that the errors here are $\mathcal{O}(h^2)$ compared to {eq}`twopoint` for the two-point formulae which are $\mathcal{O}(h)$. So, assuming $h$ is small, these do indeed have a higher order accuracy. Further, the error in the *central* difference formula is *half* that of the backward/forward differences. As a result, the central difference formula is the one most widely used. We now turn to the second derivative errors. At $x_0$ {eq}`ppperr` gives $$ \begin{align} \epsilon''(x_0) &= 2\frac{f^{(3)}(\xi)}{3!}\left[(x_0-x_1)+(x_0-x_2)\right]+\frac{f^{(4)}(\xi)\xi'}{3!}(x_0-x_1)(x_0-x_2)\\ &= -f^{(3)}(\xi) h + \frac{f^{(4)}(\xi)\xi'}{3}h^2. \end{align} $$ At $x_1$ we get $$ \begin{align} \epsilon''(x_1) &= 2\frac{f^{(3)}(\xi)}{3!}\left[(x_1-x_0)+(x_1-x_2)\right]+\frac{f^{(4)}(\xi)\xi'}{3!}(x_1-x_0)(x_1-x_2)\\ &= -\frac{f^{(4)}(\xi)\xi'}{6} h^2. \end{align} $$ These, together with the result at $x_2$, give $$ \begin{equation} \left[{\begin{array}{c} f''(x_0) \\ f''(x_1) \\ f''(x_2) \end{array}} \right] = \left[{\begin{array}{c} \frac{f(x_0)-2f(x_1)+f(x_2)}{h^2} \\ \frac{f(x_0)-2f(x_1)+f(x_2)}{h^2} \\ \frac{f(x_0)-2f(x_1)+f(x_2)}{h^2} \end{array}} \right] + \left[{\begin{array}{c} -h f^{(3)}(\xi) \\ -\frac{h^2}{6}f^{(4)}(\xi)\xi' \\ h f^{(3)}(\xi) \\ \end{array}} \right], \end{equation} $$ where we have dropped the higher order error terms. Note that even though the approximation formula is the same for all 3 points, the error in this formula is much worse at the endpoints ($\mathcal{O}(h)$ versus $\mathcal{O}(h^2)$). In homework you will see that you can also derive the central difference formula using a Taylor series expansion (including errors) about $x_1$ and then using the mean value theorem to combine the errors at the points $x_1\pm h$ at a new, also unknown, point $\eta$ to get $$ f''(x_1) \approx \frac{f(x_0)-2f(x_1)+f(x_2)}{h^2} -\frac{h^2}{12}f^{(4)}(\eta). $$ ### Stability The central difference formula we derived $$ f'(x_1)=\frac{f(x_2)-f(x_0)}{2h}-\frac{h^2}{6}f^{(3)}(\xi), $$ involves two things we warned about when we talked about [roundoff errors](../ErrorsModule/RoundoffAmplification). Namely, subtraction of two similar numbers, the $f(x_2)-f(x_0)$, and division by a small number $h$. As a result, attempting to obtain the "exact" value of $f'(x_1)$ by taking ever smaller $h$ is **not** a numerically stable operation as we will now demonstrate. As in our discussion of [roundoff errors](../ErrorsModule/HornersAlgorithm) let $\mathcal{fl}(.)$ denote the finite precision arithmetic on the computer, i.e. $$ \mathcal{fl}(f(x))=f(x) + e_r(x) $$ where $e_r$ is the roundoff error.
The error in evaluation of the central difference formula on a computer is then $$ \begin{align} E(h) &= \left|f'(x_1)-\mathcal{fl}\left(\frac{f(x_2)-f(x_0)}{2h} \right)\right|, \\ &= \left|-\frac{h^2}{6}f^{(3)}(\xi) + \frac{e_r(x_2)-e_r(x_0)}{2h}\right|, \end{align} $$ where the second term comes from the fact that we cannot evaluate the central difference exactly using finite precision arithmetic. The roundoff errors here are unlikely to be correlated and, in particular, are as likely to be positive as negative. We do expect them to be bounded, so let's assume $|e_r(x)|<\beta$. Putting this together, along with use of the triangle inequality gives $$ E(h) \leq \frac{\beta}{h} + \frac{h^2}{6}|f^{(3)}(\xi)|. $$ Note that while the truncation error (the second term) decreases as $h$ gets smaller, the roundoff error *increases* as $h$ gets smaller. As a result, there is actually an *optimal* $h$ to minimize the error. To find this we take the derivative of $E(h)$ and set it to zero to get $$ \frac{dE}{dh}\approx -\frac{\beta}{h^2}+\frac{h}{3}|f^{(3)}(\xi)| = 0. $$ Solving this for the optimal $h_{op}$ gives $$ h_{op}= \sqrt[3]{\frac{3 \beta}{|f^{(3)}(\xi)|}}. $$ If $f(x)\sim 1$ we would expect the roundoff to be of order of the machine epsilon $\epsilon_r$. For $f(x)$ not of order unity, we would expect $\beta \approx \epsilon_r |f(x)|$. So around a given $x$ we expect roughly $$ h_{op}\approx \sqrt[3]{\frac{3 \epsilon_r |f(x)|}{|f^{(3)}(x)|}}. $$ (hop) **Example** Suppose we use the central difference formula to evaluate the derivative of $f(x)=e^x$. What is the error and what is the optimal $h_{op}$? First note that we can work out the analytic derivative ($f'(x)=e^x$) and the third derivative here ($f^{(3)}(x)=e^x$). In this case, {eq}`hop` then gives, $$ h_{op}=6.7\times 10^{-6}. $$ We can test this assertion with a short python script: ```python from matplotlib import pyplot as plt import numpy as np x = np.arange(2, 12, 0.1) h = pow(10,-x) err = abs(np.exp(1.)-(np.exp(1.+h)-np.exp(1.-h))/(2.*h)) plt.loglog(h,err) plt.xlabel(r'$h$') plt.ylabel(r'absolute error') plt.show() ``` Indeed, we do see the minimum in the error plot is around our calculated $h_{op}$. Interestingly, the error using $h=10^{-12}$ is comparable to the error using $h=10^{-2}$. Also, the part of the curve dominated by roundoff errors ($h<h_{op}$) is quite noisy. This is simply a reflection of the somewhat unpredictable nature of roundoff errors. Importantly, our arguments correctly predict the trends here. In practice, though, one should keep in mind that roundoff errors may fluctuate an order of magnitude around these trends. Note that similar problems arise when attempting to apply numerical differentiation to noisy data (data with stochastic noise). A similar construction, with the errors being from stochastic noise rather than roundoff, limits how small $h$ can be before the results for the derivative become less accurate. In this case, a presmoothing operation to reduce the noise amplitude should be applied before attempting to construct a finite difference for the derivative.
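A further added check (not in the original notebook): the predicted $h_{op}$ from {eq}`hop` can be compared directly with the step size that minimizes the measured central difference error, here using NumPy's machine epsilon for $\epsilon_r$.

```python
import numpy as np

# Predicted optimum from h_op ~ (3*eps_r*|f(x)|/|f'''(x)|)**(1/3) with f = exp evaluated at x = 1
eps_r = np.finfo(float).eps
x0 = 1.0
h_pred = (3*eps_r*np.exp(x0)/np.exp(x0))**(1.0/3.0)

# Empirical optimum: scan h and find where the central difference error is smallest
h = 10.0**(-np.arange(2, 12, 0.1))
err = np.abs(np.exp(x0) - (np.exp(x0 + h) - np.exp(x0 - h))/(2*h))
h_best = h[np.argmin(err)]

print("predicted h_op ~ %.1e, empirical minimum ~ %.1e" % (h_pred, h_best))
```

The exact numbers depend on the value used for $\epsilon_r$, but both estimates should land in the same decade, consistent with the plot above.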
38f9e419837c1ef2e9d38db098660d8e2e66d61a
36,538
ipynb
Jupyter Notebook
class/NDiffInt/NumDiffInt_Errors.ipynb
CDenniston/NumericalAnalysis
8f4ccaa864461c36e269824a0e9038bc14ef10b1
[ "MIT" ]
null
null
null
class/NDiffInt/NumDiffInt_Errors.ipynb
CDenniston/NumericalAnalysis
8f4ccaa864461c36e269824a0e9038bc14ef10b1
[ "MIT" ]
null
null
null
class/NDiffInt/NumDiffInt_Errors.ipynb
CDenniston/NumericalAnalysis
8f4ccaa864461c36e269824a0e9038bc14ef10b1
[ "MIT" ]
null
null
null
132.865455
23,100
0.816793
true
3,613
Qwen/Qwen-72B
1. YES 2. YES
0.826712
0.857768
0.709127
__label__eng_Latn
0.978302
0.485871
```python
import numpy as np
import sympy as sp
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import CurrentDistribution as current

# Build a solenoid current path using the custom CurrentDistribution module
currentPath = current.Solenoid(10, 10, 1, sp.pi/64)

# Grid of test points surrounding the current path, padded by `offset` on each side
offset = 5
testPointNum = 16
xPoints = np.linspace(int(currentPath.xmin-offset), int(currentPath.xmax+offset), num=testPointNum)
yPoints = np.linspace(int(currentPath.ymin-offset), int(currentPath.ymax+offset), num=testPointNum)
zPoints = np.linspace(int(currentPath.zmin-offset), int(currentPath.zmax+offset), num=testPointNum)
xx, yy, zz = np.meshgrid(xPoints, yPoints, zPoints)

# Plot the discretized current path segments together with the test-point grid (red)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(currentPath.segments[:, 0], currentPath.segments[:, 1], currentPath.segments[:, 2], s=1)
ax.scatter(xx, yy, zz, s=1, c="Red")
plt.show()
```

```python
```
eb85ef7969c289f87635bde61d655936e3075afc
99,908
ipynb
Jupyter Notebook
Magnetic Field Distributions/example_3d_plot.ipynb
phys2331/EM-Notebooks
a62d0ded01ce5c7228d85b39964d17c1a98839af
[ "Unlicense" ]
null
null
null
Magnetic Field Distributions/example_3d_plot.ipynb
phys2331/EM-Notebooks
a62d0ded01ce5c7228d85b39964d17c1a98839af
[ "Unlicense" ]
1
2020-10-12T20:37:11.000Z
2020-10-12T20:37:11.000Z
Magnetic Field Distributions/example_3d_plot.ipynb
phys2331/EM-Notebooks
a62d0ded01ce5c7228d85b39964d17c1a98839af
[ "Unlicense" ]
1
2020-10-10T00:53:03.000Z
2020-10-10T00:53:03.000Z
1,314.578947
98,044
0.958091
true
229
Qwen/Qwen-72B
1. YES 2. YES
0.875787
0.705785
0.618117
__label__eng_Latn
0.282724
0.274424
# Basic Neural Network from Scratch >## Objective: To understand and build a basic one hidden layer Neutral Network from scratch > ## Approach : - **Getting the Dataset** - **Logistic Regression** - **Neural Network : Understanding** - Neural Network structure - Activation functions : Softmax and Tanh and their necessity - Weight factors and Bias - Forward Propagation - Back Propagation - Shape of Vectors - Differentiation - **Implementation** - Model Function - Predict Function - Loss Function - **Varying hidden layer nodes** - **Conclusion** **Importing Libraries** ```python import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import sklearn.linear_model import matplotlib %matplotlib inline matplotlib.rcParams['figure.figsize'] = (10.0, 8.0) ``` **`Plot_decision_boundary` function is defined to conveniently see the decision boundary** ```python # Helper function to plot a decision boundary. # If you don't fully understand this function don't worry, it just generates the contour plot below. def plot_decision_boundary(pred_func): # Set min and max values and give it some padding x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5 y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5 h = 0.01 # Generate a grid of points with distance h between them xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) # Predict the function value for the whole gid Z = pred_func(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) # Plot the contour and training examples plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral) plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Spectral) ``` ## `1.` Getting the Dataset Dataset is generated using sklearn's built in methods ```python # Generate a dataset and plot it import sklearn from sklearn import datasets np.random.seed(0) X, y = sklearn.datasets.make_moons(200, noise=0.20) plt.scatter(X[:,0], X[:,1], s=40, c=y, cmap=plt.cm.Spectral) ``` The dataset generated has two classes, plotted as red and blue points. You can think of the blue dots as male patients and the red dots as female patients, with the x- and y- axis being medical measurements. Your goal is to train a Machine Learning classifier that predicts the correct class (male or female) given the x- and y- coordinates. Note that the data is not linearly separable, a straight line cant be drawn that separates the two classes. This means that linear classifiers, such as Logistic Regression, won't be able to fit the data unless you hand-engineer non-linear features (such as polynomials) that work well for the given dataset. In fact, that's one of the major advantages of Neural Networks. You don't need to worry about feature engineering. The hidden layer of a neural network will learn features for you. 
## `2.` Logistic Regression: To understand the problem better , try to implement a logistic regression first on the dataset and plot the decision boundary using the earlier defined function **TASK : Build a `logistic regression` model and fit it on the dataset** ```python ### START CODE HERE (~ 2 Lines of code) ### END CODE ``` **TASK : Plot the `Decision boundary` using the earlier defined function** ```python # For the argument of lambda x: use the predict function of the earlier defined classifier that was fit on dataset ### START CODE HERE (~2 Lines of code) #(Write code where '#' is given) plot_decision_boundary(lambda x: '# ') plt.title("Logistic Regression") ### END CODE ``` The graph shows the decision boundary learned by our Logistic Regression classifier. It separates the data as good as it can using a straight line, but it's unable to capture the "moon shape" of our data. ## `3.` Neural Networks : ### `3.1` Neural Networks Understanding Let's now build a 3-layer neural network with one input layer, one hidden layer, and one output layer. The number of nodes in the input layer is determined by the dimensionality of our data, 2. Similarly, the number of nodes in the output layer is determined by the number of classes we have, also 2. (Because we only have 2 classes we could actually get away with only one output node predicting 0 or 1, but having 2 makes it easier to extend the network to more classes later on). The input to the network will be x- and y- coordinates and its output will be two probabilities, one for class 0 ("female") and one for class 1 ("male"). It looks something like this: You can choose the dimensionality (the number of nodes) of the hidden layer. The more nodes we put into the hidden layer the more complex functions we will be able fit. But higher dimensionality comes at a cost. First, more computation is required to make predictions and learn the network parameters. A bigger number of parameters also means that the model become more prone to overfitting the data. How to choose the size of the hidden layer? While there are some general guidelines and recommendations, it always depends on your specific problem and is more of an art than a science. Its best to play with the number of nodes in the hidden layer and see how it affects the output. You also need to pick an activation function for the hidden layer. The activation function transforms the inputs of the layer into its outputs. Common chocies for activation functions are tanh, the sigmoid function, or ReLUs. Here you'll use tanh, which performs quite well in many scenarios. A nice property of these functions is that their derivate can be computed using the original function value. For example, the derivative of $\tanh x$ is $1-\tanh^2 x$. This is useful because it allows to compute $\tanh x$ once and re-use its value later on to get the derivative. The network is desired to output probabilities thus the activation function for the output layer will be the softmax, which is simply a way to convert raw scores to probabilities. If you're familiar with the logistic function you can think of softmax as its generalization to multiple classes. 
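Here is a small added sketch (not part of the original template) that checks the two properties just mentioned numerically: the softmax turns raw scores into probabilities that sum to one, and the derivative of $\tanh x$ matches $1-\tanh^2 x$.

```python
import numpy as np

def softmax(z):
    # Subtract the row-wise max before exponentiating; this is only for numerical
    # stability and does not change the resulting probabilities
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

z = np.array([[2.0, -1.0], [0.5, 0.5]])
print(softmax(z))                      # each row sums to 1

# Check d/dz tanh(z) = 1 - tanh(z)^2 with a central finite difference
z0, h = 0.3, 1e-6
fd = (np.tanh(z0 + h) - np.tanh(z0 - h)) / (2*h)
print(fd, 1 - np.tanh(z0)**2)          # the two numbers agree closely
```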
### `3.2` Activation Functions **Softmax :** Where the function is defined as : **Tanh :** **Why are Activation Functions necessary?** If there were no activation function or just linear activation function then it would work same as a linear classifier/regressor and the models applications would be very limited and thus a nonlinear activation function is what really allows to fit nonlinear hypotheses. **Understanding the network** The input $x$ contains two attributes namely $x1$ and $x2$ and has $m$ number of training examples , in this dataset $m$ = 200. Neural networks are similar to logistic regression just that the latter is used for binary classification/regression and the former is used for multiclass classification/regression. Think of it this way that In logistic regression , attributes or inputs are given to a box or a node which multiplies the inputs with weight factors , adds bias and outputs the resultant similar to linear regression which is then transformed by a non linear activtion function , sigmoid in logistic regression to give the final output. Similarly , In neural networks this whole process happens alot of times , in this notebook with just one hidden layer the whole process occurs as many times as the number of nodes in the hidden layer , so each node's input-output can be visualised as logistic regression and many such singluar logistic regression units give birth to neural networks which shows generally why neural networks perform better than its singular counterpart. This whole process can be explained better with the following image : ### `3.3` Weight factors and Bias **Bias** : The term bias is used to adjust the final output matrix as the y-intercept does. For instance, in the classic equation, $y = mx + c$, if $c$ = 0, then the line will always pass through 0. Adding the bias term provides more flexibility and better generalisation to our Neural Network model. **Weight Factors** : Weight factors gives out the best linear combinations of attributes which would give the closest ouput Weight factors and Bias values are first randomly given and then adjusted through `Back Propaagtion` and `Gradient Descent` to find the optimum combination ### `3.4` Forward Propagation : How the network makes predictions The network makes predictions using forward propagation, which is just a bunch of matrix multiplications and the application of the activation function(s) as defined above. If $x$ is the 2-dimensional input to the network then predictions $\hat{y}$ (also two-dimensional) can be calculated as follows: $$ \begin{aligned} z_1 = xW_1 + b_1 \\ a_1 = \tanh(z_1) \\ z_2 = a_1W_2 + b_2 \\ a_2 = \hat{y} = \mathrm{softmax}(z_2) \end{aligned} $$ $z_i$ is the weighted sum of inputs of layer $i$ (bias included) so $z_1$ contains all the outputs from all the nodes in the first hidden layer so $z_1[1]$ will be the output of first node of hidden layer , $z_1[2]$ the output of second node of hidden layer and so on. $a_i$ is the output of layer $i$ after applying the activation function, it forms a vector similar to $z_i$ and the values are tranformed from $z_i$ using an activation function.\ $W_1, b_1, W_2, b_2$ are parameters of the network, which are needed to learn from the training data. You can think of them as matrices transforming data between layers of the network. Looking at the matrix multiplications above you can figure out the dimensionality of these matrices. 
If 500 nodes are used for the hidden layer then $W_1 \in \mathbb{R}^{2\times500}$, $b_1 \in \mathbb{R}^{500}$, $W_2 \in \mathbb{R}^{500\times2}$, $b_2 \in \mathbb{R}^{2}$. Now you see why you'll have more parameters if we increase the size of the hidden layer. ### `3.5` Shapes of all the Vectors Understand that all the variables like weight factors, bias, input $x$ etc. are vectors or matrices. Vectorization is done to speed up operations such as element-wise multiplication followed by summation, which would otherwise require a for loop; with vectorization a simple dot product suffices, which is faster and also more convenient and shorter to code. With neural networks, understanding the shapes of all the parameters is very important, so let's look at the shapes. Here the number of training examples is $m=200$ and **n_hid** is the number of nodes in the hidden layer. 1. $X$ has a shape of `(m,2)` : 2 is for the number of attributes i.e $x_1$ and $x_2$ 2. $W_1$ has a shape of `(2,n_hid)` connecting each node of the input layer and hidden layer and giving them weights : 2 is for the number of attributes i.e $x_1$ and $x_2$ 3. $b_1$ has a shape of `(1,n_hid)` as there is one bias for each node in the hidden layer 4. $W_2$ has a shape of `(n_hid,2)` connecting each node from the hidden layer to the output layer of 2 : 2 is for the output layer giving probabilities of `male` and `female` 5. $b_2$ has a shape of `(1,2)` as there is one bias for each node in the output layer, thus 2 for 2 nodes (`male` and `female` probabilities) 6. $z_1$ has a shape of `(1,n_hid)` per training example, as it gives one output for each hidden node, and it does that for each of the $m$ instances 7. $a_1$ has a shape of `(1,n_hid)`, the same as $z_1$, as only the values inside are changed and not the shape 8. $\hat{y}$ has a shape of `(1,2)` as it is the final output and it has 2 probability values for each of the **m** instances ### `3.6` Back Propagation : Learning the Parameters Learning the parameters for the network means finding parameters ($W_1, b_1, W_2, b_2$) that minimize the error on the training data. But how do we define the error? For that we need a function that measures our error, called the loss function. The negative log likelihood (cross-entropy) loss will be used here. If you have $N$ training examples and $C$ classes then the loss for our prediction $\hat{y}$ with respect to the true labels $y$ is given by: $$ \begin{aligned} H(y,p) = - \sum_i y_i log(p_i) \end{aligned} $$ The formula looks complicated, but all it really does is sum over our training examples, and the $p_{i}$ are the probabilities of the correct classes. So, the further away $y$ (the correct labels) and $\hat{y}$ (our predictions) are, the greater the loss will be. **Understand that since $p_{i}$ is the probability of the correct class, the corresponding label $y_i$ is always equal to one. So this is just the summation of the negative log of the probabilities of the correct classes.** Remember that the goal is to find the parameters that minimize the loss function. You can use gradient descent to find its minimum. Here the most basic version of gradient descent, also called batch gradient descent with a fixed learning rate, is implemented. Variations such as SGD (stochastic gradient descent) or minibatch gradient descent typically perform better in practice. So if you are serious you'll want to use one of these, and ideally you would also decay the learning rate over time, but for now you can use this.
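As a standalone illustration of this loss (an added sketch; `probs` and `y` below are made-up values, not the template's variables), the cross-entropy reduces to averaging the negative log of the probabilities assigned to the correct classes.

```python
import numpy as np

probs = np.array([[0.9, 0.1],    # predicted class probabilities, one row per example
                  [0.2, 0.8],
                  [0.6, 0.4]])
y = np.array([0, 1, 1])          # true class labels (integer encoded)

# Pick the probability of the correct class for each example and average -log
correct_logprobs = -np.log(probs[np.arange(len(y)), y])
loss = np.mean(correct_logprobs)
print(loss)
```

With one-hot labels this is exactly $H(y,p)$ averaged over the training examples.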
As an input, gradient descent needs the gradients (vector of derivatives) of the loss function with respect to the parameters: $\frac{\partial{L}}{\partial{W_1}}$, $\frac{\partial{L}}{\partial{b_1}}$, $\frac{\partial{L}}{\partial{W_2}}$, $\frac{\partial{L}}{\partial{b_2}}$. To calculate these gradients following formulas can be used, Applying the backpropagation formula ,the following results are obtained : $$ \begin{aligned} \delta_3 = \hat{y} - y \\ \delta_2 = (1 - \tanh^2z_1) \circ \delta_3W_2^T \\ \frac{\partial{L}}{\partial{W_2}} = a_1^T \delta_3 \\ \frac{\partial{L}}{\partial{b_2}} = \delta_3\\ \frac{\partial{L}}{\partial{W_1}} = x^T \delta_2\\ \frac{\partial{L}}{\partial{b_1}} = \delta_2 \\ \end{aligned} $$ $T$ signifies transpose of that matrix/vector. **NOTE : in $\delta_3$ , $y$ would always be equal to 1 if $\hat{y}$ are the probabilities of the correct classes. So in the code its better to find the probabilities of the right classes and then subtract it by 1** A More in-depth and better understanding can be given [here](https://eli.thegreenplace.net/2016/the-softmax-function-and-its-derivative/) ### `3.7` Differentiations : The Softmax differentiation with the cross entropy loss can be given as : $$ \begin{align} L &= - \sum_i y_i log(p_i) \\ \frac{\partial L}{\partial o_i} &= - \sum_k y_k \frac{\partial log(p_k)}{\partial o_i } \\ &= - \sum_k y_k \frac{\partial log(p_k)}{\partial p_k} \times \frac{\partial p_k}{ \partial o_i} \\ &= - \sum y_k \frac{1}{p_k} \times \frac{\partial p_k}{\partial o_i} \\ \end{align} $$ With the softmax derivative : $$ \begin{align} \frac{\partial L}{\partial o_i} &= -y_i(1-p_i) - \sum_{k\neq i} y_k \frac{1}{p_k}(-p_k.p_i) \\ &= -y_i(1-p_i) + \sum_{k \neq 1} y_k.p_i \\ &= - y_i + y_ip_i + \sum_{k \neq 1} y_k.p_i \\ &= p_i\left( y_i + \sum_{k \neq 1} y_k\right) - y_i \\ &= p_i\left( y_i + \sum_{k \neq 1} y_k\right) - y_i \end{align} $$ $y$ is a one hot encoded vector for the labels, so $\sum_k y_k = 1$ and $y_i + \sum_{k \neq 1} y_k = 1$ , Thus simplifying it into :\ $\frac{\partial L}{\partial o_i} = p_i - y_i$ which is the same as $\delta_3 = p_i - y_i$ A much more intensive and indepth explanation can be provided [here](https://deepnotes.io/softmax-crossentropy) and [here](https://eli.thegreenplace.net/2016/the-softmax-function-and-its-derivative/) **Forward and backward propagation can be understood with the help of following schematics** **For single node :** **For multiple :** **These derivatives are calculated , multiplied by learning rate and then added to the weights so as these gradients tend to zero that is error is minimum, the final optimum weights are obtained** **IMPORTANT** : In the following section the terms $dW1$ , $dW2$ , $db1$ ... etc are short forms of $\frac{\partial{L}}{\partial{W_1}}$ , $\frac{\partial{L}}{\partial{W_2}}$ , $\frac{\partial{L}}{\partial{b_1}}$ ... 
respectively and always assume the same for other gradient parameters also ## `4.` Implementation : Now , that a proper understanding of neural networks has been obtained , its time to make a model implementing it **Pre defining constant parameters** ```python num_examples = len(X) # training set size nn_input_dim = 2 # input layer dimensionality nn_output_dim = 2 # output layer dimensionality # Gradient descent parameters (I picked these by hand) epsilon = 0.01 # learning rate for gradient descent reg_lambda = 0.01 # regularization strength ``` ### `4.1` Model function **TASK : Build a function to implement the Neural Network** ```python # This function learns parameters for the neural network and returns the model. # - nn_hdim: Number of nodes in the hidden layer # - num_passes: Number of passes through the training data for gradient descent # - print_loss: If True, print the loss every 1000 iterations # One completion of forward and backward pass happens in one pass and you'll be using 20000 such passes ### START CODE HERE : (Write code where '#' is given) def build_model(nn_hdim, num_passes=20000, print_loss=False): # Initialize the parameters to random values. # With succesive gradient descents , the model will learn and will keep updating the parameters np.random.seed(0) W1 = np.random.randn(nn_input_dim, nn_hdim) / np.sqrt(nn_input_dim) b1 = np.zeros((1, nn_hdim)) # Weights are divided by square root of dimensions to somehow normalise the weights and not have very high values # Seeing the earlier definition , define 'W2' and 'b2' in the same way # Take help of the earlier section where shapes are defined W2 = '#' # Understand that while weight factors cant be randomly assigned to zeros , however biases can. b2 = '#' # This is the model to return at the end model = {} # Gradient descent. For each batch... for i in range(0, num_passes): ## FORWARD PROPAGATION z1 = X.dot(W1) + b1 a1 = np.tanh(z1) ## Seeing the earlier definition , define 'z2' and find 'probs' using softmax activation instead of tanh ## Take help of the section where forward propagation formulas are defined z2 = '#' exp_scores = '#' probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True) ## BACK PROPAGATION delta3 = probs delta3[range(num_examples), y] -= 1 dW2 = (a1.T).dot(delta3) db2 = np.sum(delta3, axis=0, keepdims=True) #keepdims =True keeps the earlier vectors dimensions which is very important ## Seeing the earlier definitions , define 'delta2' ,'dW1' ,'db1' ## Take help of the backward propagation formulas defined in the earlier sections delta2 = '#' dW1 = '#' db1 = '#' # Regularisation terms are added (b1 and b2 don't have regularization terms) dW2 += reg_lambda * W2 dW1 += reg_lambda * W1 # Gradient descent parameter update W1 += -epsilon * dW1 b1 += -epsilon * db1 ## Seeing the earlier definitions , define 'W2' , 'b2' ## Understand how parameters are updated , epsilon is the contant learning rate W2 = '#' b2 = '#' # Assigning new parameters to the model model = { 'W1': W1, 'b1': b1, 'W2': W2, 'b2': b2} # Optionally print the loss. # This is expensive because it uses the whole dataset, so don't want to do it too often. if print_loss and i % 1000 == 0: print("Loss after iteration %i: %f" %(i, calculate_loss(model))) return model ``` Now , The neural network is made which takes in input `nn_hdim` which is the number of nodes in hidden layer and returns the model keeping in mind the input and output parameters. 
The approach of this was to first randomly assign the parameters and with gradient descent , update it until it reaches the minimum. Once model is made , its essential to make a `predict` function to predict the models outputs and then a `calculate_loss` function that calculates the loss between $\hat{y}$ and $y$ ### `4.2` Predict function : **TASK : Make a function `Predict` that takes in model and X as arguments and returns the maximum probabilities** ```python # Helper function to predict an output (0 or 1) def predict(model, x): W1, b1, W2, b2 = model['W1'], model['b1'], model['W2'], model['b2'] ### START CODE HERE (Write code where '#' is given) # FORWARD PROPAGATION ## Write the full steps of forward propagration as before using the updated parameters z1 = '#' a1 = '#' z2 = '#' exp_scores = '#' probs = '#' ### END CODE return np.argmax(probs, axis=1) # The maximum probabilites are returned ``` A good schematic to understand why the function returns the maximum probability is given below : Now , that the output probabilites are calculated , The last step remaining is to calculate the loss between $\hat{y}$ and $y$ using `cross-entropy` function **Remember it can also be written as summation of negative log of the correct class probabilities which will be implemented in the code** ### `4.3` Loss function: **TASK : Make function to calculate loss between $\hat{y}$ and $y$** ```python # Helper function to evaluate the total loss on the dataset def calculate_loss(model): ### START CODE HERE : (Write code where '#' is given) ## Take parameters 'W1','b1','W2','b2' from 'model' ## Take help from the previously defined functions to understand how it can be done W1, b1, W2, b2 = '#' # Forward propagation to calculate our predictions ## Note : you can also use previously defined function predict instead # Write code to get all the parameters using forward propagation formulas z1 = '#' a1 = '#' z2 = '#' exp_scores = '#' probs = '#' ## Calculating the loss corect_logprobs = -np.log(probs['#', '#']) ## Remember the arguments are such that probs[,] gives out probabilities of correct classes similar to p_i in delta3 formula data_loss = np.sum(corect_logprobs) # Adding regulatization term to loss (optional) data_loss += reg_lambda/2 * (np.sum(np.square(W1)) + np.sum(np.square(W2))) return 1./num_examples * data_loss ### END CODE ``` **Now that all the helper functions and model functions are made its time to implement it** **TASK : Using `build_model` function, take `nn_hdim=3` and `print_loss=True` and get the model in a variable called `model`** ```python # Build a model with a 3-dimensional hidden layer ### START CODE HERE (~ 1 Line of code) ### END CODE HERE ``` **TASK : Plot the decision boundary using `plot_decision_boundary` function** ```python # Plot the decision boundary ### START CODE HERE (~ 2 Lines of code) ## See the plot_decision_boundary implementation in logistic regression and write the arguments similarly ## Use predict function in lambda x : argument ## Give title to the plot ### END CODE HERE ``` ## `5.` Varying the hidden layer size In the example above you made a hidden layer size of 3. Let's now get a sense of how varying the hidden layer size affects the result. **TASK : Using a `for_loop` and the functions build above. 
make models and plot decision boundaries of varying hidden layer sizes** ```python plt.figure(figsize=(16, 32)) hidden_layer_dimensions = [1, 2, 3, 4, 5, 20, 50] for i, nn_hdim in enumerate(hidden_layer_dimensions): ### START CODE HERE (FULL CODE) ## Use plt.subplots and plt.title to form subplots of different nn_hdim and give them title accordingly ## Make model using build_model and make plot decision boundary using plot_decision_boundary and predict functions plt.show() ### END CODE ``` ## `6.` Conclusion After trying out different activation functions with different sizes of hidden layer ,get an idea of why alot of nodes cant be used in a single hidden layer and which activation functions work best . Based on this , make the adjustments and choose the best final model
7273c9247943f40caf1bbd39f1327a1d33942ec4
77,923
ipynb
Jupyter Notebook
Basic Neural Network from Scratch/Basic Neural Network from Scratch (To-Do Template).ipynb
abhisngh/Data-Science
c7fa9e4d81c427382fb9a9d3b97912ef2b21f3ae
[ "MIT" ]
1
2020-05-29T20:07:49.000Z
2020-05-29T20:07:49.000Z
Basic Neural Network from Scratch/Basic Neural Network from Scratch (To-Do Template).ipynb
abhisngh/Data-Science
c7fa9e4d81c427382fb9a9d3b97912ef2b21f3ae
[ "MIT" ]
null
null
null
Basic Neural Network from Scratch/Basic Neural Network from Scratch (To-Do Template).ipynb
abhisngh/Data-Science
c7fa9e4d81c427382fb9a9d3b97912ef2b21f3ae
[ "MIT" ]
null
null
null
91.244731
43,360
0.80491
true
6,074
Qwen/Qwen-72B
1. YES 2. YES
0.861538
0.841826
0.725265
__label__eng_Latn
0.99788
0.523365
```
from pylab import plot, semilogy, loglog, figure, grid, title, xlabel, ylabel
from numpy import loadtxt, linspace, array
from numpy.fft import fft, fftfreq
from sympy import exp, sin, pi, var, Integral

var("x")
L = 2  # domain [0, L]
rho = exp(sin(2*pi*x/L))
#rho = sin(2*pi*x/L)
integ = Integral(rho, (x, 0, L)).n()
rho -= integ / L
print("rho(x) =", rho)
print("Integral of rho =", Integral(rho, (x, 0, L)).n())
```

    rho(x) = exp(sin(pi*x)) - 1.26606587775201
    Integral of rho = -1.41156085310944e-16

```
D = []
for m in range(1, 15):
    # 2m+1 terms -Gmax, ..., 0, ..., Gmax
    n = 2*m+1
    xj = array([L*(j-1.)/n for j in range(1, n+1)])
    rhoj = array([float(rho.subs(x, val).n(20)) for val in xj])
    nr = fft(rhoj) / n
    # re-order -> -Gmax, ... Gmax
    f = fftfreq(n)
    idx = f.argsort()
    f = f[idx]
    nr = nr[idx]
    nr[m] = 0  # set G=0 term = 0
    G = array([float((2*pi*j/L)) for j in range(-m, m+1)])
    G[m] = 1
    tmp = array([complex(2*pi*abs(nr[j])**2/G[j]**2) for j in range(n)])
    E = sum(tmp).real * L
    print(n, E)
    D.append((n, E))
D = array(D)
```

    3 0.857623650249
    5 0.825420490259
    7 0.825230159883
    9 0.825229196177
    11 0.825229191862
    13 0.825229191846
    15 0.825229191846
    17 0.825229191846
    19 0.825229191846
    21 0.825229191846
    23 0.825229191846
    25 0.825229191846
    27 0.825229191846
    29 0.825229191846

```
N = D[:, 0]
E = D[:, 1]
E_exact = E[-1]
figure(figsize=(8, 6), dpi=80)
semilogy(N, abs(E-E_exact), "bo-")
grid()
title("Convergence of an FFT Poisson solver (semilogy)")
xlabel("N (number of PW in each direction)")
ylabel("E - E_exact [a.u.]")
figure(figsize=(8, 6), dpi=80)
loglog(N, abs(E-E_exact), "bo-", label="calculated")
grid()
title("Convergence of an FFT Poisson solver (loglog)")
xlabel("N (number of PW in each direction)")
ylabel("E - E_exact [a.u.]")
```

```
```
c82e213f618336f1e2d924e1b0632607c4523985
61,319
ipynb
Jupyter Notebook
src/tests/fem/plots/fft.ipynb
certik/hfsolver
b4c50c1979fb7e468b1852b144ba756f5a51788d
[ "BSD-2-Clause" ]
20
2015-03-24T13:06:39.000Z
2022-03-29T00:14:02.000Z
src/tests/fem/plots/fft.ipynb
certik/hfsolver
b4c50c1979fb7e468b1852b144ba756f5a51788d
[ "BSD-2-Clause" ]
6
2015-03-25T04:59:43.000Z
2017-06-06T23:00:09.000Z
src/tests/fem/plots/fft.ipynb
certik/hfsolver
b4c50c1979fb7e468b1852b144ba756f5a51788d
[ "BSD-2-Clause" ]
5
2016-01-20T13:38:22.000Z
2020-11-24T15:35:43.000Z
352.408046
30,588
0.920612
true
733
Qwen/Qwen-72B
1. YES 2. YES
0.912436
0.731059
0.667044
__label__eng_Latn
0.291208
0.388098
<a href="https://colab.research.google.com/github/GalinaZh/Appl_alg2021/blob/main/Applied_Alg_Lab_1.ipynb" target="_parent"></a>

# Lab 1
# Applied Algebra and Numerical Methods
## Pseudoinverse matrix, Gram-Schmidt orthogonalization, LU, QR, least squares, Lagrange and Chebyshev polynomials, splines, Bezier curves, norms.

```
import numpy as np
import pandas as pd
import sympy
from sympy import S, latex, Matrix, I, simplify, expand
import scipy.linalg
import numpy.linalg
from google.colab import files
from copy import deepcopy
```

## Task 1
Find the pseudoinverse of the matrix from the file XXX.xlsx using sympy, numpy and scipy. Write the resulting pseudoinverse matrices to the file YYY.csv. Each matrix element must be in a separate cell. You may use pandas, or you may write your own function that converts a matrix into a string suitable for writing to a .csv file.

```
```

### Task 2
Construct the LU decomposition of the matrix from Task 1 using sympy and scipy. Verify it by comparing with the original matrix.

```
```

## Task 3
Using the LU decomposition from Task 2, construct the pseudoinverse matrix. Verify it using the definition of the pseudoinverse (two equalities must hold).

```
```

## Task 4
Read the data from the file ZZZ.xlsx. Find the coefficients $a$, $b$, $c$, $d$ of the linear regression $Y = at + bu + cv + d$, where $Y$ is column 1, $t$ is column 2, $u$ is column 3, $v$ is column 4. Proceed as follows: assemble the matrix from the problem data in the file, build the $QR$ decomposition using the Gram-Schmidt orthogonalization process, construct the pseudoinverse from it, and use the pseudoinverse to find the coefficients.

```
```

## Task 5
Find as many different norms as possible of the difference between the data vector $Y_{data}$ of Task 4 (column 1 of the file ZZZ.xlsx) and the vector of values obtained from the linear regression formula $Y = at + bu + cv + d$ with the parameters found in Task 4.

```
```

## Task 6
Construct the SVD decomposition with numpy for the matrix XXX. Using the resulting decomposition, build the pseudoinverse matrix.

```
```

## Task 7
For the data from the file PPP.xlsx construct the Lagrange polynomial.

```
```

## Task 8
Construct a cubic spline from the data of Task 7.

```
```

## Task 9
Construct a Chebyshev polynomial of degree at most 4 approximating $f(x) = 3x^6 - 4x^3 +x^2 - x + 2$.

```
```

## Task 10
Connect points A and B with a Bezier curve such that the tangents at the endpoints coincide with the tangents to the function at points A and B. A and B are points on the graph of the function $f(x) = 3\tan(x/6)$ with horizontal coordinates 1 and 4.

```
```
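A possible starting point for Task 1 is sketched below (added for illustration only; the matrix `A` is a made-up placeholder, since the actual matrix must be read from XXX.xlsx).

```python
import numpy as np
import scipy.linalg
from sympy import Matrix

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])        # placeholder; load the real matrix from XXX.xlsx instead

A_pinv_np = np.linalg.pinv(A)        # numpy pseudoinverse (SVD based)
A_pinv_sp = scipy.linalg.pinv(A)     # scipy pseudoinverse
A_pinv_sym = Matrix(A).pinv()        # sympy pseudoinverse

# Check the Moore-Penrose conditions A A+ A = A and A+ A A+ = A+
print(np.allclose(A @ A_pinv_np @ A, A))
print(np.allclose(A_pinv_np @ A @ A_pinv_np, A_pinv_np))
```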
009711f0ec02beda39a02c55fa95f59b8a3388ae
7,122
ipynb
Jupyter Notebook
Applied_Alg_Lab_1.ipynb
GalinaZh/Appl_alg2021
09761b56eb2bdfee4cd5f12cd96562ca146fcb15
[ "MIT" ]
null
null
null
Applied_Alg_Lab_1.ipynb
GalinaZh/Appl_alg2021
09761b56eb2bdfee4cd5f12cd96562ca146fcb15
[ "MIT" ]
null
null
null
Applied_Alg_Lab_1.ipynb
GalinaZh/Appl_alg2021
09761b56eb2bdfee4cd5f12cd96562ca146fcb15
[ "MIT" ]
null
null
null
25.255319
265
0.48975
true
1,119
Qwen/Qwen-72B
1. YES 2. YES
0.872347
0.76908
0.670905
__label__rus_Cyrl
0.972141
0.397068
# Model The generalized Roy model is characterized by the following set of equations. **Potential Outcomes** \begin{align} Y_1 &= X\beta_1 + U_1 \\ Y_0 &= X\beta_0 + U_0 \end{align} **Cost** \begin{align} C = Z\gamma + U_C \end{align} **Choice** \begin{align} S &= Y_1 - Y_0 - C\\ D &= I[S > 0] \end{align} Collecting unobservables: \begin{align} V &= U_C - (U_1 - U_0) \end{align} Rewriting the choice equation: \begin{align} P(X,Z) &= Pr(D = 1 | X,Z) = F_V(X(\beta_1 - \beta_0) - Z\gamma) \\ U_S & = F_V (V)\\ D & = \mathbb{1}\{P(X,Z) > U_S\} \end{align} **Observed Outcome** \begin{align} Y = D Y_1 + (1 - D)Y_0 \end{align} $(Y_1, Y_0)$ are objective outcomes associated with each potential treatment state $D$ and realized after the treatment decision. $Y_1$ refers to the outcome in the treated state and $Y_0$ in the untreated state. $C$ denotes the subjective cost of treatment participation. Any subjective benefits,e.g. job amenities, are included (as a negative contribution) in the subjective cost of treatment. Agents take up treatment $D$ if they expect the objective benefit to outweigh the subjective cost. In that case, their subjective evaluation, i.e. the expected surplus from participation $S$, is positive. If agents take up treatment, then the observed outcome $Y$ corresponds to the outcome in the presence of treatment $Y_1$. Otherwise, $Y_0$ is observed. The unobserved potential outcome is referred to as the counterfactual outcome. ## Objects of Interest **Individual-specific Treatment Effect** \begin{align} B = Y_1 - Y_0 = X(\beta_1 - \beta_0) + (U_1 - U_0) \end{align} * Heterogeneity * Observed * Unobserved **Average Treatment Effect** \begin{align} ATE & = E\left[Y_1 - Y_0 \right]\\ TT & = E\left[Y_1 - Y_0 \mid D = 1\right]\\ TUT & = E\left[Y_1 - Y_0 \mid D = 0\right] \end{align} **Marginal Treatment Effect** The marginal effect parameters condition on the unobserved desire to select into treatment. The *Marginal Benefit*, *Marginal Cost*, and *Marginal Surplus* are defined as: \begin{align} B^{MTE}(x,u_S) & = E[Y_1 - Y_0|X = x,U_S = u_S]\\ C^{MTE}(z,u_S) & = E[C|Z = z,U_S = u_S]\\ S^{MTE}(x,z,u_S) & = E[S|X = x,Z = z, U_S = u_S] \end{align} **Distribution of Potential Outcomes** \begin{align} F_{Y_1,Y_0} \end{align} * Distribution of Benefits * Heterogeneity * Population Shares ## Additional References Aakvik, A., Heckman, J. J., and Vytlacil, E. J. (2005). Treatment Effects for Discrete Outcomes When Responses to Treatment Vary Among Observationally Identical Persons: An Application to Norwegian Vocational Rehabilitation Programs. *Journal of Econometrics*, 125(1-2):15–51. Abbring, J. and Heckman, J. J. (2007). Econometric Evaluation of Social Programs, Part III: Distributional Treatment Effects, Dynamic Treatment Effects, Dynamic Discrete Choice, and General Equilibrium Policy Evaluation. In Heckman, J. J. and Leamer, E. E., editors, *Handbook of Econometrics*, volume 6B, pages 5145–5303. Elsevier Science, Amsterdam, Netherlands Browning, M., Heckman, J. J., and Hansen, L. P. (1999). Micro Data and General Equilibrium Models. In Taylor, J. B. and Woodford, M., editors, *Handbook of Macroeconomics*, volume 1A, pages 543–633. Elsevier Science, Amsterdam, Netherlands. Carneiro, P., Hansen, K., and Heckman, J. J. (2003). Estimating Distributions of Treatment Effects with an Application to the Returns to Schooling and Measurement of the Effects of Uncertainty on College Choice. *International Economic Review*, 44(2):361–422. Eisenhauer, P. (2012). 
Issues in the Economics and Econometrics of Policy Evaluation: Explorations Using a Factor Structure Model. Unpublished Manuscript. Heckman, J. J. (2001). Micro Data, Heterogeneity, and the Evaluation of Public Policy: Nobel Lecture. *Journal of Political Economy*, 109(4):673–748. Heckman, J. J., Smith, J., and Clements, N. (1997). Making the Most Out of Programme Evaluations and Social Experiments: Accounting for Heterogeneity in Programme Impacts. *The Review of Economic Studies*, 64(4):487–535. Heckman, J. J. and Vytlacil, E. J. (2007a). Econometric Evaluation of Social Programs, Part I: Causal Effects, Structural Models and Econometric Policy Evaluation. In Heckman, J. J. and Leamer, E. E., editors, *Handbook of Econometrics*, volume 6B, pages 4779–4874. Elsevier Science, Amsterdam, Netherlands. Heckman, J. J. and Vytlacil, E. J. (2007b). Econometric Evaluation of Social Programs, Part II: Using the Marginal Treatment Effect to Organize Alternative Economic Estimators to Evaluate Social Programs and to Forecast their Effects in New Environments. In Heckman, J. J. and Leamer, E. E., editors, *Handbook of Econometrics*, volume 6B, pages 4875–5144. Elsevier Science, Amsterdam, Netherlands. Quandt, R. E. (1958). The Estimation of the Parameters of a Linear Regression System Obeying two Separate Regimes. *Journal of the American Statistical Association*, 53(284):873–880. Quandt, R. E. (1958). The Estimation of the Parameters of a Linear Regression System Obeying two Separate Regimes. *Journal of the American Statistical Association*, 53(284):873–880. **Formatting** ```python import urllib; from IPython.core.display import HTML HTML(urllib.urlopen('http://bit.ly/1OKmNHN').read()) ``` <link href='http://fonts.googleapis.com/css?family=Alegreya+Sans:100,300,400,500,700,800,900,100italic,300italic,400italic,500italic,700italic,800italic,900italic' rel='stylesheet' type='text/css'> <link href='http://fonts.googleapis.com/css?family=Arvo:400,700,400italic' rel='stylesheet' type='text/css'> <link href='http://fonts.googleapis.com/css?family=PT+Mono' rel='stylesheet' type='text/css'> <link href='http://fonts.googleapis.com/css?family=Shadows+Into+Light' rel='stylesheet' type='text/css'> <link href='http://fonts.googleapis.com/css?family=Philosopher:400,700,400italic,700italic' rel='stylesheet' type='text/css'> <style> @font-face { font-family: "Computer Modern"; src: url('http://mirrors.ctan.org/fonts/cm-unicode/fonts/otf/cmunss.otf'); } #notebook_panel { /* main background */ background: #888; color: #f6f6f6; } div.cell { /* set cell width to about 80 chars */ width: 1000px; } div #notebook { /* centre the content */ background: #fff; /* white background for content */ width: 1200px; margin: auto; padding-left: 1em; } #notebook li { /* More space between bullet points */ margin-top:0.8em; } /* draw border around running cells */ div.cell.border-box-sizing.code_cell.running { border: 3px solid #111; } /* Put a solid color box around each cell and its output, visually linking them together */ div.cell.code_cell { background-color: rgba(171,165,131,0.3); border-radius: 10px; /* rounded borders */ padding: 1em; margin-top: 1em; } div.text_cell_render{ font-family: 'Arvo' sans-serif; line-height: 130%; font-size: 150%; width:900px; margin-left:auto; margin-right:auto; } /* Formatting for header cells */ .text_cell_render h1 { font-family: 'Philosopher', sans-serif; font-weight: 400; font-size: 32pt; line-height: 100%; color: rgb(12,85,97); margin-bottom: 0.1em; margin-top: 0.1em; display: block; } 
.text_cell_render h2 { font-family: 'Philosopher', serif; font-weight: 700; font-size: 24pt; line-height: 100%; color: rgb(171,165,131); margin-bottom: 0.1em; margin-top: 0.1em; display: block; } .text_cell_render h3 { font-family: 'Philosopher', serif; margin-top:12px; margin-bottom: 3px; font-style: italic; color: rgb(95,92,72); } .text_cell_render h4 { font-family: 'Philosopher', serif; } .text_cell_render h5 { font-family: 'Alegreya Sans', sans-serif; font-weight: 300; font-size: 16pt; color: grey; font-style: italic; margin-bottom: .1em; margin-top: 0.1em; display: block; } .text_cell_render h6 { font-family: 'PT Mono', sans-serif; font-weight: 300; font-size: 10pt; color: grey; margin-bottom: 1px; margin-top: 1px; } .CodeMirror{ font-family: "PT Mono"; font-size: 120%; } </style>
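To make the treatment-effect definitions from the model section above concrete, here is a small added simulation sketch of the generalized Roy model; it uses a deliberately scalar specification and all parameter values are made up for illustration, not taken from any of the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical parametrization of Y1 = X*b1 + U1, Y0 = X*b0 + U0, C = Z*g + Uc
X = rng.normal(1.0, 1.0, n)
Z = rng.normal(0.0, 1.0, n)
U1, U0, UC = rng.normal(0.0, 1.0, (3, n))

Y1 = 1.5 * X + U1
Y0 = 1.0 * X + U0
C = 0.5 * Z + UC

S = Y1 - Y0 - C           # surplus from treatment participation
D = (S > 0).astype(int)   # take up treatment if the surplus is positive
Y = D * Y1 + (1 - D) * Y0 # observed outcome

B = Y1 - Y0               # individual-specific treatment effect
print("ATE:", B.mean())
print("TT :", B[D == 1].mean())
print("TUT:", B[D == 0].mean())
```

Because agents select into treatment on their (unobserved) gains, the simulated TT exceeds the ATE, which in turn exceeds the TUT, illustrating the heterogeneity emphasized above.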
db1ae1e9abfb02e4649a4446f72a28e080300631
11,943
ipynb
Jupyter Notebook
lectures/economic_models/generalized_roy/model/lecture.ipynb
snowdj/course
7caff2b0fd9958c315168791810a05521153d2e7
[ "MIT" ]
null
null
null
lectures/economic_models/generalized_roy/model/lecture.ipynb
snowdj/course
7caff2b0fd9958c315168791810a05521153d2e7
[ "MIT" ]
null
null
null
lectures/economic_models/generalized_roy/model/lecture.ipynb
snowdj/course
7caff2b0fd9958c315168791810a05521153d2e7
[ "MIT" ]
null
null
null
38.775974
840
0.557314
true
2,383
Qwen/Qwen-72B
1. YES 2. YES
0.843895
0.727975
0.614335
__label__eng_Latn
0.618922
0.265636
# Periodic homogenization of linear elastic materials ## Introduction This tour will show how to perform periodic homogenization of linear elastic materials. The considered 2D plane strain problem deals with a skewed unit cell of dimensions $1\times \sqrt{3}/2$ consisting of circular inclusions (numbered $1$) of radius $R$ with elastic properties $(E_r, \nu_r)$ and embedded in a matrix material (numbered $0$) of properties $(E_m, \nu_m)$ following an hexagonal pattern. A classical result of homogenization theory ensures that the resulting overall behavior will be isotropic, a property that will be numerically verified later. ```python from __future__ import print_function from dolfin import * import numpy as np import matplotlib.pyplot as plt # %matplotlib notebook %matplotlib inline ``` ```python a = 1. # unit cell width b = sqrt(3.)/2. # unit cell height c = 0.5 # horizontal offset of top boundary R = 0.2 # inclusion radius vol = a*b # unit cell volume # we define the unit cell vertices coordinates for later use vertices = np.array([[0, 0.], [a, 0.], [a+c, b], [c, b]]) fname = "hexag_incl" mesh = Mesh(fname + ".xml") subdomains = MeshFunction("size_t", mesh, fname + "_physical_region.xml") facets = MeshFunction("size_t", mesh, fname + "_facet_region.xml") plt.figure(dpi=80) plt.subplot(1,2,1) plot(subdomains) plt.subplot(1,2,2) plot(mesh) ``` **Remark**: `mshr` does not allow to generate a meshed domain with perfectly matching vertices on opposite boundaries as would be required when imposing periodic boundary conditions. For this reason, we used a `Gmsh`-generated mesh. ## Periodic homogenization framework The goal of homogenization theory consists in computing the apparent elastic moduli of the homogenized medium associated with a given microstructure. In a linear elastic setting, this amounts to solving the following auxiliary problem defined on the unit cell $\mathcal{A}$: $$\begin{equation}\begin{cases}\operatorname{div} \boldsymbol{\sigma} = \boldsymbol{0} & \text{in } \mathcal{A} \\ \boldsymbol{\sigma} = \mathbb{C}(\boldsymbol{y}):\boldsymbol{\varepsilon} & \text{for }\boldsymbol{y}\in\mathcal{A} \\ \boldsymbol{\varepsilon} = \boldsymbol{E} + \nabla^s \boldsymbol{v} & \text{in } \mathcal{A} \\ \boldsymbol{v} & \text{is } \mathcal{A}\text{-periodic} \\ \boldsymbol{T}=\boldsymbol{\sigma}\cdot\boldsymbol{n} & \text{is } \mathcal{A}\text{-antiperiodic} \end{cases} \label{auxiliary-problem} \end{equation}$$ where $\boldsymbol{E}$ is the **given** macroscopic strain, $\boldsymbol{v}$ a periodic fluctuation and $\mathbb{C}(\boldsymbol{y})$ is the heterogeneous elasticity tensor depending on the microscopic space variable $\boldsymbol{y}\in\mathcal{A}$. By construction, the local microscopic strain is equal on average to the macroscopic strain: $\langle \boldsymbol{\varepsilon} \rangle = \boldsymbol{E}$. Upon defining the macroscopic stress $\boldsymbol{\Sigma}$ as the microscopic stress average: $\langle \boldsymbol{\sigma} \rangle = \boldsymbol{\Sigma}$, there will be a linear relationship between the auxiliary problem loading parameters $\boldsymbol{E}$ and the resulting average stress: $$\boldsymbol{\Sigma} = \mathbb{C}^{hom}:\boldsymbol{E}$$ where $\mathbb{C}^{hom}$ represents the apparent elastic moduli of the homogenized medium. Hence, its components can be computed by solving elementary load cases corresponding to the different components of $\boldsymbol{E}$ and performing a unit cell average of the resulting microscopic stress components. 
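In Voigt notation this procedure can be stated compactly (added remark, using the same $(xx, yy, xy)$ ordering as the code below): imposing the elementary macroscopic strain $\boldsymbol{E}=\boldsymbol{e}^{(j)}$ and averaging the resulting microscopic stress yields one column (equivalently, by symmetry, one row) of the homogenized stiffness,

$$C^{hom}_{ij} = \langle \sigma_i \rangle_{\mathcal{A}} \quad \text{when } \boldsymbol{E} = \boldsymbol{e}^{(j)}, \qquad i,j \in \{xx,\, yy,\, xy\}.$$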
### Total displacement as the main unknown The previous problem can also be reformulated by using the total displacement $\boldsymbol{u} = \boldsymbol{E}\cdot\boldsymbol{y} + \boldsymbol{v}$ as the main unknown with now $\boldsymbol{\varepsilon} = \nabla^s \boldsymbol{u}$. The periodicity condition is therefore equivalent to the following constraint: $$\boldsymbol{u}(\boldsymbol{y}^+)-\boldsymbol{u}(\boldsymbol{y}^-) = \boldsymbol{E}\cdot(\boldsymbol{y}^+-\boldsymbol{y}^-)$$ where $\boldsymbol{y}^{\pm}$ are opposite points on the unit cell boundary related by the periodicity condition. This formulation is widely used in solid mechanics FE software as it does not require specific change of the problem formulation but just adding tying constraints between some degrees of freedom. This formulation is however not easy to translate in FEniCS. It would indeed require introducing Lagrange multipliers defined on some part of the border only, a feature which does not seem to be available at the moment. ### Periodic fluctuation as the main unknown Instead, we will keep the initial formulation and consider the periodic fluctuation $\boldsymbol{v}$ as the main unknown. The periodicity constraint on $\boldsymbol{v}$ will be imposed in the definition of the associated FunctionSpace using the ``constrained_domain`` optional keyword. To do so, one must define the periodic map linking the different unit cell boundaries. Here the unit cell is 2D and its boundary is represented by a parallelogram of vertices ``vertices`` and the corresponding base vectors `a1` and `a2` are computed. The right part is then mapped onto the left part, the top part onto the bottom part and the top-right corner onto the bottom-left one. ```python # class used to define the periodic boundary map class PeriodicBoundary(SubDomain): def __init__(self, vertices, tolerance=DOLFIN_EPS): """ vertices stores the coordinates of the 4 unit cell corners""" SubDomain.__init__(self, tolerance) self.tol = tolerance self.vv = vertices self.a1 = self.vv[1,:]-self.vv[0,:] # first vector generating periodicity self.a2 = self.vv[3,:]-self.vv[0,:] # second vector generating periodicity # check if UC vertices form indeed a parallelogram assert np.linalg.norm(self.vv[2, :]-self.vv[3, :] - self.a1) <= self.tol assert np.linalg.norm(self.vv[2, :]-self.vv[1, :] - self.a2) <= self.tol def inside(self, x, on_boundary): # return True if on left or bottom boundary AND NOT on one of the # bottom-right or top-left vertices return bool((near(x[0], self.vv[0,0] + x[1]*self.a2[0]/self.vv[3,1], self.tol) or near(x[1], self.vv[0,1] + x[0]*self.a1[1]/self.vv[1,0], self.tol)) and (not ((near(x[0], self.vv[1,0], self.tol) and near(x[1], self.vv[1,1], self.tol)) or (near(x[0], self.vv[3,0], self.tol) and near(x[1], self.vv[3,1], self.tol)))) and on_boundary) def map(self, x, y): if near(x[0], self.vv[2,0], self.tol) and near(x[1], self.vv[2,1], self.tol): # if on top-right corner y[0] = x[0] - (self.a1[0]+self.a2[0]) y[1] = x[1] - (self.a1[1]+self.a2[1]) elif near(x[0], self.vv[1,0] + x[1]*self.a2[0]/self.vv[2,1], self.tol): # if on right boundary y[0] = x[0] - self.a1[0] y[1] = x[1] - self.a1[1] else: # should be on top boundary y[0] = x[0] - self.a2[0] y[1] = x[1] - self.a2[1] ``` We now define the constitutive law for both phases: ```python Em = 50e3 num = 0.2 Er = 210e3 nur = 0.3 material_parameters = [(Em, num), (Er, nur)] nphases = len(material_parameters) def eps(v): return sym(grad(v)) def sigma(v, i, Eps): E, nu = material_parameters[i] lmbda = 
E*nu/(1+nu)/(1-2*nu) mu = E/2/(1+nu) return lmbda*tr(eps(v) + Eps)*Identity(2) + 2*mu*(eps(v)+Eps) ``` ## Variational formulation The previous problem is very similar to a standard linear elasticity problem, except for the periodicity constraint which has now been included in the FunctionSpace definition and for the presence of an eigenstrain term $\boldsymbol{E}$. It can easily be shown that the variational formulation of the previous problem reads as: Find $\boldsymbol{v}\in V$ such that: $$\begin{equation} F(\boldsymbol{v},\widehat{\boldsymbol{v}}) = \int_{\mathcal{A}} (\boldsymbol{E}+\nabla^s\boldsymbol{v}):\mathbb{C}(\boldsymbol{y}):\nabla^s\widehat{\boldsymbol{v}}\text{ d} \Omega = 0 \quad \forall \widehat{\boldsymbol{v}}\in V \end{equation}$$ The above problem is not well-posed because of the existence of rigid body translations. One way to circumvent this issue would be to fix one point but instead we will add an additional constraint of zero-average of the fluctuation field $v$ as is classically done in homogenization theory. This is done by considering an additional vectorial Lagrange multiplier $\lambda$ and considering the following variational problem (see the [pure Neumann boundary conditions FEniCS demo](https://fenicsproject.org/docs/dolfin/2019.1.0/python/demos/neumann-poisson/demo_neumann-poisson.py.html) for a similar formulation): Find $(\boldsymbol{v},\boldsymbol{\lambda})\in V\times \mathbb{R}^2$ such that: $$\begin{equation} \int_{\mathcal{A}} (\boldsymbol{E}+\nabla^s\boldsymbol{v}):\mathbb{C}(\boldsymbol{y}):\nabla^s\widehat{\boldsymbol{v}}\text{ d} \Omega + \int_{\mathcal{A}} \boldsymbol{\lambda}\cdot\widehat{\boldsymbol{v}} \text{ d} \Omega + \int_{\mathcal{A}} \widehat{\boldsymbol{\lambda}}\cdot\boldsymbol{v} \text{ d} \Omega = 0 \quad \forall (\widehat{\boldsymbol{v}}, \widehat{\boldsymbol{\lambda}})\in V\times\mathbb{R}^2 \end{equation}$$ Which can be summarized as: $$\begin{equation} a(\boldsymbol{v},\widehat{\boldsymbol{v}}) + b(\boldsymbol{\lambda},\widehat{\boldsymbol{v}}) + b(\widehat{\boldsymbol{\lambda}}, \boldsymbol{v}) = L(\widehat{\boldsymbol{v}}) \quad \forall (\widehat{\boldsymbol{v}}, \widehat{\boldsymbol{\lambda}})\in V\times\mathbb{R}^2 \end{equation}$$ This readily translates into the following FEniCS code: ```python Ve = VectorElement("CG", mesh.ufl_cell(), 2) Re = VectorElement("R", mesh.ufl_cell(), 0) W = FunctionSpace(mesh, MixedElement([Ve, Re]), constrained_domain=PeriodicBoundary(vertices, tolerance=1e-10)) V = FunctionSpace(mesh, Ve) v_,lamb_ = TestFunctions(W) dv, dlamb = TrialFunctions(W) w = Function(W) dx = Measure('dx')(subdomain_data=subdomains) Eps = Constant(((0, 0), (0, 0))) F = sum([inner(sigma(dv, i, Eps), eps(v_))*dx(i) for i in range(nphases)]) a, L = lhs(F), rhs(F) a += dot(lamb_,dv)*dx + dot(dlamb,v_)*dx ``` We have used a general implementation using a sum over the different phases for the functional `F`. We then used the `lhs` and `rhs` functions to respectively extract the corresponding bilinear $a$ and linear $L$ forms. ## Resolution The resolution of the auxiliary problem is performed for elementary load cases consisting of uniaxial strain and pure shear sollicitations by assigning unit values of the correspnonding $E_{ij}$ components. For each load case, the average stress $\boldsymbol{\Sigma}$ is computed components by components and the macroscopic stiffness components $\mathbb{C}^{hom}$ are then printed. 
```python
def macro_strain(i):
    """returns the macroscopic strain for the 3 elementary load cases"""
    Eps_Voigt = np.zeros((3,))
    Eps_Voigt[i] = 1
    return np.array([[Eps_Voigt[0], Eps_Voigt[2]/2.],
                     [Eps_Voigt[2]/2., Eps_Voigt[1]]])

def stress2Voigt(s):
    return as_vector([s[0,0], s[1,1], s[0,1]])

Chom = np.zeros((3, 3))
for (j, case) in enumerate(["Exx", "Eyy", "Exy"]):
    print("Solving {} case...".format(case))
    Eps.assign(Constant(macro_strain(j)))
    F = sum([inner(sigma(dv, i, Constant(macro_strain(j))), eps(v_))*dx(i)
             for i in range(nphases)]) + dot(lamb_, dv)*dx + dot(dlamb, v_)*dx
    a, L = lhs(F), rhs(F)
    solve(a == L, w, [])
    (v, lamb) = split(w)
    Sigma = np.zeros((3,))
    for k in range(3):
        Sigma[k] = assemble(sum([stress2Voigt(sigma(v, i, Eps))[k]*dx(i)
                                 for i in range(nphases)]))/vol
    Chom[j, :] = Sigma

print(np.array_str(Chom, precision=2))
```

    Solving Exx case...
    Solving Eyy case...
    Solving Exy case...
    [[ 6.56e+04  1.74e+04 -2.43e-02]
     [ 1.74e+04  6.56e+04 -4.10e-02]
     [-2.43e-02 -4.10e-02  2.41e+04]]

```python
for (j, case) in enumerate(["Exx", "Eyy", "Exy"]):
    print("Solving {} case...".format(case))
    print(j)
```

It can first be verified that the obtained macroscopic stiffness is indeed symmetric and that the corresponding behaviour is quasi-isotropic (up to the finite element discretization error). Indeed, if $\lambda^{hom} = \mathbb{C}_{xxyy}$ and $\mu^{hom} = \mathbb{C}_{xyxy}$ we have that $\mathbb{C}_{xxxx}\approx\mathbb{C}_{yyyy}\approx \mathbb{C}_{xxyy}+2\mathbb{C}_{xyxy} = \lambda^{hom}+2\mu^{hom}$.

> **Note:** The macroscopic stiffness is not exactly symmetric because we computed it from the average stress, which does not strictly satisfy local equilibrium on the unit cell due to the FE discretization. A truly symmetric version can be obtained from the computation of the bilinear form for a pair of solutions to the elementary load cases.

```python
lmbda_hom = Chom[0, 1]
mu_hom = Chom[2, 2]
print(Chom[0, 0], lmbda_hom + 2*mu_hom)
```

We thus deduce that $E^{hom} = \mu^{hom}\dfrac{3\lambda^{hom}+2\mu^{hom}}{\lambda^{hom}+\mu^{hom}}$ and $\nu^{hom} = \dfrac{\lambda^{hom}}{2(\lambda^{hom}+\mu^{hom})}$, that is:

```python
E_hom = mu_hom*(3*lmbda_hom + 2*mu_hom)/(lmbda_hom + mu_hom)
nu_hom = lmbda_hom/(lmbda_hom + mu_hom)/2
print("Apparent Young modulus:", E_hom)
print("Apparent Poisson ratio:", nu_hom)
```

```python
# plotting deformed unit cell with total displacement u = Eps*y + v
y = SpatialCoordinate(mesh)
plt.figure()
p = plot(0.5*(dot(Eps, y)+v), mode="displacement", title=case)
plt.colorbar(p)
plt.show()
```
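As mentioned in the note above, a symmetric estimate of $\mathbb{C}^{hom}$ can also be obtained from the energy bilinear form evaluated on pairs of elementary solutions. The following sketch is only an illustration under an additional assumption: it supposes that the three fluctuation fields have been stored in a list `solutions` (which the loop above does not do) and reuses `sigma`, `eps`, `macro_strain`, `dx`, `nphases` and `vol` defined previously.

```python
# Hypothetical post-processing sketch: energy-based (symmetric) estimate of Chom.
# Assumes `solutions[j]` holds the fluctuation Function of elementary load case j,
# which is NOT collected in the resolution loop above.
Chom_sym = np.zeros((3, 3))
for j in range(3):
    Ej = Constant(macro_strain(j))
    vj = solutions[j]
    for k in range(3):
        Ek = Constant(macro_strain(k))
        vk = solutions[k]
        # cell average of sigma(E_j + eps(v_j)) : (E_k + eps(v_k))
        Chom_sym[j, k] = assemble(sum([inner(sigma(vj, i, Ej), eps(vk) + Ek)*dx(i)
                                       for i in range(nphases)]))/vol
print(np.array_str(Chom_sym, precision=2))
```

By construction this matrix is symmetric, so it can serve as a cross-check of the stress-averaged `Chom` computed above.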
f438e69c97bc9de7b803684a6f86d24a7645c8f3
24,836
ipynb
Jupyter Notebook
periodic_homog_elas.ipynb
alhermann/FEniCS-Code
4d39b89de66f4f9d8cf818f40f31c83092579a8c
[ "MIT" ]
null
null
null
periodic_homog_elas.ipynb
alhermann/FEniCS-Code
4d39b89de66f4f9d8cf818f40f31c83092579a8c
[ "MIT" ]
null
null
null
periodic_homog_elas.ipynb
alhermann/FEniCS-Code
4d39b89de66f4f9d8cf818f40f31c83092579a8c
[ "MIT" ]
null
null
null
63.845758
6,492
0.694234
true
4,034
Qwen/Qwen-72B
1. YES 2. YES
0.896251
0.793106
0.710822
__label__eng_Latn
0.94617
0.48981
```python
import numpy as np
import sympy as sy
from sympy.utilities.codegen import codegen
import control.matlab as cm
import re
import matplotlib.pyplot as plt
from scipy import signal
```

# Designing an RST controller for the harmonic oscillator

## The plant model

```python
z = sy.symbols('z', real=False)

# The plant model
wh = np.pi/6
cwh = np.cos(wh)
b1 = 1 - cwh
b2 = b1
a1 = -2*cwh
a2 = 1
Bp = sy.poly(b1*z + b2, z)
Ap = sy.poly(z**2 + a1*z + a2, z)

# Poles and zeros
H = cm.tf([0, b1, b2], [1, a1, a2], 1)
pz = cm.pzmap(H)
np.abs(pz)
```

## Desired poles

```python
p1 = 0.6
p2 = p1
Ac = sy.poly((z-p1)*(z-p2), z)
Ao = sy.poly(z, z)
```

## Define and solve Diophantine equation

```python
r1, s0, s1, s2 = sy.symbols('r1, s0, s1, s2', real=True)

# Right hand side
Acl = Ac*Ao

# Left hand side
Rp = sy.poly(z + r1, z)
Sp = sy.poly(s0*z + s1, z)

dioph = (Ap*Rp + Bp*Sp - Acl).all_coeffs()
print(dioph)
print(Acl)
print(Ap*Rp)
print(Ac)
print(Ap*Rp)
print(Ap*Rp + Bp*Sp)
```

    [1.0*r1 + 0.133974596215561*s0 - 0.532050807568877, -1.73205080756888*r1 + 0.133974596215561*s0 + 0.133974596215561*s1 + 0.64, 1.0*r1 + 0.133974596215561*s1]
    Poly(1.0*z**3 - 1.2*z**2 + 0.36*z, z, domain='RR')
    Poly(1.0*z**3 + (1.0*r1 - 1.73205080756888)*z**2 + (-1.73205080756888*r1 + 1.0)*z + 1.0*r1, z, domain='RR[r1]')
    Poly(1.0*z**2 - 1.2*z + 0.36, z, domain='RR')
    Poly(1.0*z**3 + (1.0*r1 - 1.73205080756888)*z**2 + (-1.73205080756888*r1 + 1.0)*z + 1.0*r1, z, domain='RR[r1]')
    Poly(1.0*z**3 + (1.0*r1 + 0.133974596215561*s0 - 1.73205080756888)*z**2 + (-1.73205080756888*r1 + 0.133974596215561*s0 + 0.133974596215561*s1 + 1.0)*z + 1.0*r1 + 0.133974596215561*s1, z, domain='RR[r1,s0,s1]')

```python
sol = sy.solve(dioph, (r1, s0, s1))
print('r_1 = %f' % sol[r1])
print('s_0 = %f' % sol[s0])
print('s_1 = %f' % sol[s1])

t0 = Ac.evalf(subs={z: 1})/Bp.evalf(subs={z: 1})
print('t_0 = %f' % t0)

R = Rp.subs(sol)
S = Sp.subs(sol)
T = t0*Ao

Hc = T*Bp/(Ac*Ao)
Hcc = t0*0.8/Ac
sy.pretty_print(sy.expand(Hc))
sy.pretty_print(sy.expand(Hcc))
sy.pretty_print(Hc.evalf(subs={z: 1}))
sy.pretty_print(sy.simplify(Ap*R + Bp*S))
```

    r_1 = 0.314050
    s_0 = 1.627180
    s_1 = -2.344102
    t_0 = 0.597128
    0.597128129211021*Poly(z, z, domain='ZZ')*Poly(0.133974596215561*z + 0.133974596215561, z, domain='RR') / Poly(1.0*z**3 - 1.2*z**2 + 0.36*z, z, domain='RR')
    0.477702503368817 / Poly(1.0*z**2 - 1.2*z + 0.36, z, domain='RR')
    1.00000000000000
    Poly(1.0*z**3 - 1.2*z**2 + 0.360000000000001*z, z, domain='RR')

```python
1.0/0.3125
```

    3.2

```python
num = sy.list2numpy((Ap*R).all_coeffs(), dtype=np.float64)
den = sy.list2numpy((Ac*Ao).all_coeffs(), dtype=np.float64)
print(num)
print(den)
print(type(num[0]))
Hd = cm.tf(num[:-1], den[:-1], -1)
print(Hd)
ystep, t = cm.step(Hd)
plt.figure()
plt.plot(t, ystep)
plt.show()
```

## Design with incremental controller

```python
r1, s0, s1, s2 = sy.symbols('r1, s0, s1, s2', real=True)

# Right hand side
Ao = sy.poly(z**2, z)
Acl = Ac*Ao

# Left hand side
Rp = sy.poly((z-1)*(z+r1), z)
Sp = sy.poly(s0*z**2 + s1*z + s2, z)

dioph = (Ap*Rp + Bp*Sp - Acl).all_coeffs()
print(dioph)
print(Acl)
print(Ap*Rp)
print(Ac)
print(Ap*Rp)
print(Ap*Rp + Bp*Sp)
```

    [1.0*r1 + 0.133974596215561*s0 - 1.53205080756888, -2.73205080756888*r1 + 0.133974596215561*s0 + 0.133974596215561*s1 + 2.37205080756888, 2.73205080756888*r1 + 0.133974596215561*s1 + 0.133974596215561*s2 - 1.0, -1.0*r1 + 0.133974596215561*s2]
    Poly(1.0*z**4 - 1.2*z**3 + 0.36*z**2, z, domain='RR')
    Poly(1.0*z**4 + (1.0*r1 - 2.73205080756888)*z**3 + (-2.73205080756888*r1 + 2.73205080756888)*z**2 + (2.73205080756888*r1 - 1.0)*z - 1.0*r1, z, domain='RR[r1]')
    Poly(1.0*z**2 - 1.2*z + 0.36, z, domain='RR')
    Poly(1.0*z**4 + (1.0*r1 - 2.73205080756888)*z**3 + (-2.73205080756888*r1 + 2.73205080756888)*z**2 + (2.73205080756888*r1 - 1.0)*z - 1.0*r1, z, domain='RR[r1]')
    Poly(1.0*z**4 + (1.0*r1 + 0.133974596215561*s0 - 2.73205080756888)*z**3 + (-2.73205080756888*r1 + 0.133974596215561*s0 + 0.133974596215561*s1 + 2.73205080756888)*z**2 + (2.73205080756888*r1 + 0.133974596215561*s1 + 0.133974596215561*s2 - 1.0)*z - 1.0*r1 + 0.133974596215561*s2, z, domain='RR[r1,s0,s1,s2]')

```python
sol = sy.solve(dioph, (r1, s0, s1, s2))
sol
```

    {r1: 0.657025033688163,
     s0: 6.53128129211024,
     s1: -10.8382547780370,
     s2: 4.90410161513777}

```python
print('r_1 = %f' % sol[r1])
print('s_0 = %f' % sol[s0])
print('s_1 = %f' % sol[s1])
print('s_2 = %f' % sol[s2])

t0 = Ac.evalf(subs={z: 1})/Bp.evalf(subs={z: 1})
print('t_0 = %f' % t0)

R = Rp.subs(sol)
S = Sp.subs(sol)
T = t0*Ao

Hc = T*Bp/(Ac*Ao)
Hcc = t0*0.8/Ac
sy.pretty_print(sy.expand(Hc))
sy.pretty_print(sy.expand(Hcc))
sy.pretty_print(Hc.evalf(subs={z: 1}))
sy.pretty_print(sy.simplify(Ap*R + Bp*S))
```

    r_1 = 0.657025
    s_0 = 6.531281
    s_1 = -10.838255
    s_2 = 4.904102
    t_0 = 0.597128
    0.597128129211021*Poly(z**2, z, domain='ZZ')*Poly(0.133974596215561*z + 0.133974596215561, z, domain='RR') / Poly(1.0*z**4 - 1.2*z**3 + 0.36*z**2, z, domain='RR')
    0.477702503368817 / Poly(1.0*z**2 - 1.2*z + 0.36, z, domain='RR')
    1.00000000000000
    Poly(1.0*z**4 - 1.2*z**3 + 0.359999999999998*z**2, z, domain='RR')

```python
# Reorganize solution expression for matlab code generation
# NOTE: A2p, h and po1 are not defined above, so this cell will not run as-is.
sol_expr = ('RST_DC_lab', [Bp.all_coeffs()[0], Bp.all_coeffs()[1],
                           Ap.all_coeffs()[1], Ap.all_coeffs()[2],
                           sol[r1], sol[s0], sol[s1],
                           A2p.subs(z, 1)/Bp.subs(z, 1), h, np.exp(h*po1)])
```

```python
# Export to matlab code
[(m_name, m_code)] = codegen(sol_expr, 'octave')
```

```python
m_code = m_code.replace("out1", "b0").replace("out2", "b1").replace("out3", "a1").replace("out4", "a2")
m_code = m_code.replace("out5", "r1").replace("out6", "s0").replace("out7", "s1").replace("out8", "t0")
m_code = m_code.replace("out9", "h").replace("out10", "obsPole")
m_code = m_code.replace("function ", "% function ")
m_code = m_code.replace("end", "")
print(m_code)

with open("/home/kjartan/Dropbox/undervisning/tec/MR2007/labs/dc_rst_design.m", "w") as text_file:
    text_file.write(m_code)
```

```python
cm.step?
```

```python
# NOTE: Km, tau, hpt and h are not defined above, so this cell will not run as-is.
G = Km * cm.tf([1], [tau, 1, 0])
Gd = Km * cm.tf([tau*(hpt-1+np.exp(-hpt)), tau*(1-(1+hpt)*np.exp(-hpt))],
                [1, -(1+np.exp(-hpt)), np.exp(-hpt)], h)
Gd2 = cm.c2d(G, h)
print(Gd)
print(Gd2)
```

```python
print(A2p)
print(A2p.evalf(subs={z: 1}))
print(Bp)
print(Bp.evalf(subs={z: 1}))
```

```python
0.3/(5*np.sqrt(2))
```

```python
np.exp(-0.21)*np.sin(0.21)
```

```python
np.exp(0.03*(-14))
```

```python
0.746*41.8
```
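Since the whole design hinges on the identity $A_pR + B_pS = A_cA_o$, a quick numerical cross-check with plain NumPy polynomial arithmetic can catch sign or coefficient errors. The sketch below is independent of the sympy objects and simply reuses the numerical values printed above for the incremental controller (assumed accurate to the digits shown).

```python
import numpy as np

# Plant polynomials for the harmonic oscillator
cwh = np.cos(np.pi/6)
A = np.array([1.0, -2*cwh, 1.0])        # z**2 + a1*z + a2
B = np.array([1 - cwh, 1 - cwh])        # b1*z + b2

# Incremental controller polynomials from the solution above
r1 = 0.657025033688163
R = np.convolve([1.0, -1.0], [1.0, r1]) # (z - 1)(z + r1)
S = np.array([6.53128129211024, -10.8382547780370, 4.90410161513777])

# Closed-loop characteristic polynomial A*R + B*S (pad B*S to the same degree)
AR = np.convolve(A, R)
BS = np.convolve(B, S)
BS = np.concatenate([np.zeros(len(AR) - len(BS)), BS])
Acl = AR + BS

print(Acl)            # should be close to z**4 - 1.2*z**3 + 0.36*z**2
print(np.roots(Acl))  # expected poles: 0.6 (double) and 0 (double)
```

The roots should reproduce the desired double pole at 0.6 together with the two observer poles at the origin, confirming the Diophantine solution numerically.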
728704eb87c5ce9bcbea4ae21ead64bff59cce5f
27,443
ipynb
Jupyter Notebook
polynomial-design/notebooks/.ipynb_checkpoints/hw3-ht2018-harmonic-oscillator-checkpoint.ipynb
kjartan-at-tec/mr2007-computerized-control
16e35f5007f53870eaf344eea1165507505ab4aa
[ "MIT" ]
2
2020-11-07T05:20:37.000Z
2020-12-22T09:46:13.000Z
polynomial-design/notebooks/.ipynb_checkpoints/hw3-ht2018-harmonic-oscillator-checkpoint.ipynb
kjartan-at-tec/mr2007-computerized-control
16e35f5007f53870eaf344eea1165507505ab4aa
[ "MIT" ]
4
2020-06-12T20:44:41.000Z
2020-06-12T20:49:00.000Z
polynomial-design/notebooks/.ipynb_checkpoints/hw3-ht2018-harmonic-oscillator-checkpoint.ipynb
kjartan-at-tec/mr2007-computerized-control
16e35f5007f53870eaf344eea1165507505ab4aa
[ "MIT" ]
1
2021-03-14T03:55:27.000Z
2021-03-14T03:55:27.000Z
52.775
7,940
0.665088
true
3,029
Qwen/Qwen-72B
1. YES 2. YES
0.822189
0.675765
0.555606
__label__kor_Hang
0.069262
0.129189
## SurfinPy

#### Tutorial 2 - Introducing temperature dependence

In tutorial 1 we generated a phase diagram at 0 K. However, this is not representative of normal conditions. Temperature is an important consideration for materials chemists and we may wish to evaluate the state of a solid electrolyte at the operating temperature or synthesis conditions. In order to overcome this and introduce temperature we need to modify the equation for the surface energy used in tutorial 1. This explanation will again use $TiO_2$.

\begin{align}
\gamma_{Surf} & = \frac{1}{2A} \Bigg( E_{TiO_2}^{slab} - \frac{nTi_{Slab}}{nTi_{Bulk}} E_{TiO_2}^{Bulk} \Bigg) - \Gamma_O \mu_O - \Gamma_{H_2O} \mu_{H_2O} - n_O \mu_O (T) - n_{H_2O} \mu_{H_2O} (T)
\end{align}

where A is the surface area, $E_{TiO_2}^{slab}$ is the DFT energy of the slab, $nTi_{Slab}$ is the number of cations in the slab, $nTi_{Bulk}$ is the number of cations in the bulk unit cell, $E_{TiO_2}^{Bulk}$ is the DFT energy of the bulk unit cell and

\begin{align}
\Gamma_O & = \frac{1}{2A} \Bigg( nO_{Slab} - \frac{nO_{Bulk}}{nTi_{Bulk}}nTi_{Slab} \Bigg) ,
\end{align}

\begin{align}
\Gamma_{H_2O} & = \frac{nH_2O}{2A} ,
\end{align}

$nO_{Slab}$ is the number of anions in the slab, $nO_{Bulk}$ is the number of anions in the bulk, $nH_2O$ is the number of adsorbing water molecules and $n_O$ is the number of defective oxygen. $\Gamma_O$ / $\Gamma_{H_2O}$ is the excess oxygen / water at the surface and $\mu_O$ / $\mu_{H_2O}$ is the oxygen / water chemical potential. $\mu_{H_2O}(T)$ / $\mu_O(T)$ are terms that correct the chemical potential of oxygen and water based on thermochemical data from the NIST-JANAF database, making the chemical potential a temperature dependent term:

\begin{align}
\mu_O (T) & = \frac{1}{2} \mu_{O_2} (0 K , DFT) + \frac{1}{2} \mu_{O_2} (0 K , EXP) + \frac{1}{2} \Delta G_{O_2} ( \Delta T, Exp)
\end{align}

where $\mu_{O_2}$ (0 K, DFT) is the 0 K free energy of an isolated oxygen molecule evaluated with DFT, $\mu_{O_2}$ (0 K, EXP) is the 0 K experimental Gibbs energy for oxygen gas and $\Delta G_{O_2}$ ($\Delta$ T, Exp) is the Gibbs energy defined at temperature T as

\begin{align}
\Delta G_{O_2} ( \Delta T, Exp) & = \frac{1}{2} [H(T, O_2) - H(0 K, O_2)] - \frac{1}{2} T[S(T, O_2)]
\end{align}

```python
import matplotlib.pyplot as plt
from surfinpy import mu_vs_mu
from surfinpy import utils as ut
from surfinpy import data
```

In order to calculate our $\Delta G_{O_2}$ ($\Delta$ T, Exp) values we need to use experimental data from the NIST-JANAF database. As a user you need to download the tables for the species you are interested in (in our case oxygen and water). SurfinPy has a function that can read this data, assuming it is in the correct format, and calculate the temperature correction for you. Provide the path to the file and the temperature that you want as an index.

```python
Oxygen_exp = ut.fit_nist("O2.txt")[298]
Water_exp = ut.fit_nist("H2O.txt")[298]
```

-9.08 is the DFT energy of an oxygen molecule, -0.86 is the zero point energy and Oxygen_exp is the experimental free energy at 298 K.

```python
Oxygen_corrected = (-9.08 + -0.86 + Oxygen_exp) / 2
Water_corrected = -14.84 + 0.55 + Water_exp
print(Oxygen_corrected)
print(Water_corrected)
```

    -5.2427609629166
    -14.77234481554258

Oxygen_corrected and Water_corrected are now temperature dependent terms corresponding to a temperature of 298 K. The resulting phase diagram will therefore be at a temperature of 298 K.
```python
bulk = data.ReferenceDataSet(cation=1, anion=2, energy=-780.0, funits=4)

pure = data.DataSet(cation=24, x=48, y=0, area=60.0, energy=-575.0,
                    label="0.00 $TiO_2$", nspecies=1)
H2O = data.DataSet(cation=24, x=48, y=2, area=60.0, energy=-612.0,
                   label="0.16 $TiO_2$", nspecies=1)
H2O_2 = data.DataSet(cation=24, x=48, y=4, area=60.0, energy=-640.0,
                     label="0.32 $TiO_2$", nspecies=1)
H2O_3 = data.DataSet(cation=24, x=48, y=8, area=60.0, energy=-676.0,
                     label="0.64 $TiO_2$", nspecies=1)
Vo = data.DataSet(cation=24, x=46, y=0, area=60.0, energy=-558.0,
                  label="0.00 $TiO_1.9$", nspecies=1)
H2O_Vo_1 = data.DataSet(cation=24, x=46, y=2, area=60.0, energy=-594.0,
                        label="0.00 $TiO_1.9$", nspecies=1)
H2O_Vo_2 = data.DataSet(cation=24, x=46, y=4, area=60.0, energy=-624.0,
                        label="0.16 $TiO_1.9$", nspecies=1)
H2O_Vo_3 = data.DataSet(cation=24, x=46, y=6, area=60.0, energy=-640.0,
                        label="0.32 $TiO_1.9$", nspecies=1)
H2O_Vo_4 = data.DataSet(cation=24, x=46, y=8, area=60.0, energy=-670.0,
                        label="0.64 $TiO_1.9$", nspecies=1)

data = [pure, Vo, H2O, H2O_Vo_1, H2O_2, H2O_Vo_2, H2O_3, H2O_Vo_3, H2O_Vo_4]
```

```python
deltaX = {'Range': [-12, -6], 'Label': 'O'}
deltaY = {'Range': [-19, -12], 'Label': 'H_2O'}
```

```python
system = mu_vs_mu.calculate(data, bulk, deltaX, deltaY,
                            x_energy=Oxygen_corrected, y_energy=Water_corrected)
```

```python
ax = system.plot_phase(temperature=298, set_style="fast", colourmap="RdBu",
                       cbar_title="$H_2O$ $/$ $nm^2$", figsize=(6, 5))
plt.savefig("../../../docs/source/Figures/Surfaces_3.png", dpi=600)
plt.show()
```
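To connect the `DataSet` inputs to the $\Gamma_O$ and $\Gamma_{H_2O}$ terms defined above, the following standalone sketch evaluates the surface excesses for two of the slabs defined in this tutorial. The numbers and formulas come directly from the equations above; the helper function itself is only illustrative (SurfinPy performs this bookkeeping internally) and is not part of its API.

```python
# Illustrative helper (not part of the SurfinPy API): surface excesses from the
# slab composition, following the Gamma_O and Gamma_H2O definitions above.
# Areas are in Angstrom^2, so the excesses come out per Angstrom^2.
def surface_excess(n_cat_slab, n_an_slab, n_H2O, area, n_cat_bulk=1, n_an_bulk=2):
    gamma_O = (n_an_slab - n_an_bulk/n_cat_bulk*n_cat_slab) / (2*area)
    gamma_H2O = n_H2O / (2*area)
    return gamma_O, gamma_H2O

# Stoichiometric, dry surface (pure): zero excess oxygen and zero excess water
print(surface_excess(24, 48, 0, 60.0))   # -> (0.0, 0.0)

# Reduced surface with two adsorbed waters (H2O_Vo_1): oxygen deficit, water excess
print(surface_excess(24, 46, 2, 60.0))   # -> approximately (-0.0167, 0.0167)
```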
75491feb8c605e884fd406ad47b4b238a234beb8
24,731
ipynb
Jupyter Notebook
examples/Notebooks/Surfaces/Tutorial_2.ipynb
jstse/SurfinPy
ff3a79f9415c170885e109ab881368271f3dcc19
[ "MIT" ]
30
2019-01-28T17:47:24.000Z
2022-03-22T03:26:00.000Z
examples/Notebooks/Surfaces/Tutorial_2.ipynb
jstse/SurfinPy
ff3a79f9415c170885e109ab881368271f3dcc19
[ "MIT" ]
14
2018-09-03T15:49:06.000Z
2022-02-08T22:09:51.000Z
examples/Notebooks/Surfaces/Tutorial_2.ipynb
jstse/SurfinPy
ff3a79f9415c170885e109ab881368271f3dcc19
[ "MIT" ]
19
2019-02-11T09:11:29.000Z
2022-03-11T08:47:24.000Z
97.750988
16,356
0.835146
true
1,904
Qwen/Qwen-72B
1. YES 2. YES
0.815232
0.682574
0.556456
__label__eng_Latn
0.927251
0.131164
```python
import cirq
```

We have already seen that quantum circuits can be used to transfer information efficiently. Now we will see for the first time how a quantum circuit can solve a problem more efficiently than is possible with a classical probabilistic Turing machine. While the problem introduced below is admittedly a bit artificial, it demonstrates some essential concepts.

Reading: Ch. 6, rspa.1992.0167

Before we state the problem and show the algorithm we want to briefly describe a process called phase kick-back, which quite nicely demonstrates the power of working with superposed states.

To demonstrate the basic idea behind the phase kick-back we imagine a two-qubit system, where the first qubit is in some arbitrary superposition $a_0|0\rangle +a_1|1\rangle$ and the second qubit is in the state $\frac{1}{\sqrt{2}}(|0\rangle -|1\rangle) = H|1\rangle$. Hence the system is in the following state

\begin{equation}
|\psi\rangle = \left(a_0|0\rangle +a_1|1\rangle\right)\frac{1}{\sqrt{2}}(|0\rangle -|1\rangle)
\end{equation}

We now look at the effect of a general unitary transformation $cU_{f(x)}$ where the first qubit controls the transformation, i.e. $cU=|0\rangle\langle0| \otimes U_{f(0)} + |1\rangle\langle1| \otimes U_{f(1)}$, with $f:\{0,1\}\rightarrow \{0,1\}$ and $U_{f(x)}|y\rangle = |y\oplus f(x)\rangle$. It is a simple exercise to apply $cU_{f(x)}$ to the initial state, and we find

\begin{equation}
cU_{f(x)}|\psi\rangle = \left((-1)^{f(0)}a_0|0\rangle +(-1)^{f(1)}a_1|1\rangle\right)\frac{1}{\sqrt{2}}(|0\rangle -|1\rangle)
\end{equation}

Surprisingly, we have in fact not modified the target state but the control state: the amplitude of each control basis state $|x\rangle$ picks up a phase $\pi f(x)$, hence the name phase kick-back.

We are now going to formulate the Deutsch problem, and from there it should become clear how one can use the phase kick-back idea to solve it.

**The Deutsch problem.** Let $f: \{0,1\} \rightarrow \{0,1\}$ be an unknown function. Determine $f(0) \oplus f(1)$.

It is evident that with a classical computer this problem requires two queries of $f$. The Deutsch algorithm solves it with a quantum circuit using a **single** query.

This can be done using the phase kick-back. Imagine we start out in the state $|\psi\rangle = \frac{1}{2}\left(|0\rangle + |1\rangle\right)(|0\rangle -|1\rangle)$. If we now apply $cU_{f(x)}$ we find

\begin{equation}
|\psi_1\rangle = cU|\psi\rangle = \frac{(-1)^{f(0)}}{2}\left(|0\rangle +(-1)^{f(1)\oplus f(0)}|1\rangle\right)(|0\rangle -|1\rangle)
\end{equation}

One can now see that the state in the first bracket is, up to normalization, the outcome of applying the Hadamard gate to $|0\rangle$ when $f(1)\oplus f(0) = 0$, or to $|1\rangle$ when $f(1)\oplus f(0) = 1$. Using that the Hadamard gate is self-inverse and applying it to the first qubit, we find

\begin{equation}
(H\otimes \mathbb{1})|\psi_1\rangle = \frac{(-1)^{f(0)}}{\sqrt{2}}\,|f(0)\oplus f(1)\rangle (|0\rangle -|1\rangle),
\end{equation}

so measuring the first qubit tells us the value of $f(0) \oplus f(1)$.

Before we implement the Deutsch circuit we have to construct a circuit that implements $cU_{f(x)}$.

```python
def f_cirquit(q0, q1, f0, f1):
    if f0:
        yield [cirq.X(q0), cirq.CNOT(q0, q1), cirq.X(q0)]
    if f1:
        yield cirq.CNOT(q0, q1)
```

```python
def deutsch_circuit(f0, f1):
    c = cirq.Circuit()
    q0, q1 = cirq.LineQubit.range(2)
    # create initial state
    c.append([cirq.H(q0), cirq.X(q1), cirq.H(q1)])
    # apply cU_f
    c.append(f_cirquit(q0, q1, f0, f1))
    # apply final H gate
    c.append(cirq.H(q0))
    # measure first qubit
    c.append(cirq.measure(q0))
    return c
```

```python
s = cirq.Simulator()
c = deutsch_circuit(1, 0)
print(c)
```

    0: ───H───X───@───X───H───M───
                  │
    1: ───X───H───X───────────────

```python
x = s.simulate(c)
```

```python
x
```

    measurements: 0=1
    output vector: -0.707|10⟩ + 0.707|11⟩

```python
for f in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    s = cirq.Simulator()
    c = deutsch_circuit(*f)
    x = s.simulate(c)
    assert (f[0] + f[1]) % 2 == x.measurements['0'][0]
```
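As an additional sanity check, one can verify directly that `f_cirquit` acts as the oracle $U_f|x\rangle|y\rangle = |x\rangle|y\oplus f(x)\rangle$ on all computational basis states. The sketch below reuses the notebook's simulator pattern; it assumes cirq's default measurement key for `q1` is the string `'1'` (as the `'0'` key used above suggests), and is meant only as an illustrative test.

```python
import itertools

q0, q1 = cirq.LineQubit.range(2)
sim = cirq.Simulator()

for f0, f1 in itertools.product([0, 1], repeat=2):
    for x in (0, 1):
        for y in (0, 1):
            # prepare the basis state |x>|y>
            c = cirq.Circuit()
            if x:
                c.append(cirq.X(q0))
            if y:
                c.append(cirq.X(q1))
            # apply the oracle and read out the target qubit
            c.append(f_cirquit(q0, q1, f0, f1))
            c.append(cirq.measure(q1))
            res = sim.simulate(c)
            fx = f1 if x else f0
            assert res.measurements['1'][0] == y ^ fx
```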
3d52956a869085d58f9460e3c74b3b261424f735
7,136
ipynb
Jupyter Notebook
src/3. The Deutsch Algorithm.ipynb
phyjonas/QC
bbb3ace33dc7c5e64ba051c2908ea1fd2f88f4ee
[ "MIT" ]
null
null
null
src/3. The Deutsch Algorithm.ipynb
phyjonas/QC
bbb3ace33dc7c5e64ba051c2908ea1fd2f88f4ee
[ "MIT" ]
null
null
null
src/3. The Deutsch Algorithm.ipynb
phyjonas/QC
bbb3ace33dc7c5e64ba051c2908ea1fd2f88f4ee
[ "MIT" ]
null
null
null
30.891775
401
0.557455
true
1,367
Qwen/Qwen-72B
1. YES 2. YES
0.901921
0.888759
0.80159
__label__eng_Latn
0.973545
0.700695