# Introduction to zfit

In this notebook, we will walk through the main components of zfit and their features. The extensive model building part, especially, will be discussed separately. zfit consists of 5 mostly independent parts. Other libraries can rely on these parts to do plotting or statistical inference, as hepstats does. We will therefore discuss two libraries in this tutorial: zfit, to build models, data and a loss, minimize it and get a fit result; and hepstats, to use the loss we built here and do inference.

## Data

This component in general plays a minor role in zfit: it mostly provides a unified interface for data. Preprocessing is therefore not part of zfit and should be done beforehand. Python offers many great possibilities to do so (e.g. Pandas).

zfit `Data` can load data from various sources, most notably from Numpy, Pandas DataFrame, TensorFlow Tensor and ROOT (using uproot). It is also possible, for convenience, to convert it directly `to_pandas`. The constructors are named `from_numpy`, `from_root` etc.

```python
import zfit
from zfit import z
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
```

A `Data` needs not only the data itself but also the observables: the human-readable string identifiers of the axes (corresponding to "columns" of a Pandas DataFrame). It is convenient to define the `Space` not only with the observable but also with a limit: this can directly be re-used as the normalization range in the PDF.

First, let's define our observables:

```python
obs = zfit.Space('obs1', (-5, 10))
```

This `Space` has limits. Beyond handling the observables, we can also play with the limits: multiple `Spaces` can be added to provide disconnected ranges. More importantly, a `Space` offers functionality:
- `limit1d`: return the lower and upper limit in the 1-dimensional case (raises an error otherwise)
- `rect_limits`: return the n-dimensional limits
- `area()`: calculate the area (e.g. the distance between upper and lower)
- `inside()`: return a boolean Tensor corresponding to whether the value is _inside_ the `Space`
- `filter()`: filter the input values to return only the ones inside

```python
size_normal = 10000
data_normal_np = np.random.normal(size=size_normal, scale=2)

data_normal = zfit.Data.from_numpy(obs=obs, array=data_normal_np)
```

The main functionality is:
- `nevents`: attribute that returns the number of events in the object
- `data_range`: a `Space` that defines the limits of the data; events outside will be cut
- `n_obs`: the number of dimensions in the dataset
- `with_obs`: returns a subset of the dataset with only the given obs
- `weights`: event-based weights

Furthermore, `value` returns a Tensor with shape `(nevents, n_obs)`. To retrieve values, in general `z.unstack_x(data)` should be used; this returns a single Tensor with shape `(nevents,)`, or a list of tensors if `n_obs` is larger than 1.

```python
print(f"We have {data_normal.nevents} events in our dataset with the minimum of {np.min(data_normal.unstack_x())}")  # remember! The obs cut out some of the data
```

    We have 9950 events in our dataset with the minimum of -4.979805501079585

```python
data_normal.n_obs
```

    1
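For convenience, the `to_pandas` conversion mentioned at the beginning works at any point (a minimal sketch):

```python
# a small sketch: convert the dataset to a pandas DataFrame,
# with one column per observable
df = data_normal.to_pandas()
df.head()
```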
## Model

Building models is by far the largest part of zfit. We will therefore cover an essential part, the possibility to build custom models, in an extra chapter. Let's start out with the idea that you define your parameters and your observable space; the latter is the expected input data.

There are two types of models in zfit:
- functions, which are rather simple and "underdeveloped"; their usage is often not required.
- PDFs, which are functions normalized over a specified range; this is the main model and is what we are going to use throughout the tutorials.

A PDF is defined by

\begin{align}
\mathrm{PDF}_{f(x)}(x; \theta) = \frac{f(x; \theta)}{\int_{a}^{b} f(x; \theta)\,\mathrm{d}x}
\end{align}

where a and b define the normalization range (`norm_range`), over which (by inserting into the above definition) the integral of the PDF is unity.

zfit has a modular approach to things, and this is also true for models. While the normalization itself (e.g. what are parameters, what is normalized data) is already pre-defined in the model, models are composed of functions that are transparently called inside. For example, a Gaussian would usually be implemented by writing a Python function `def gauss(x, mu, sigma)`, which does not care about the normalization, and would then be wrapped in a PDF, where the normalization and what is a parameter are defined.

In principle, we can get far simply by using functions (e.g. [TensorFlowAnalysis/AmpliTF](https://github.com/apoluekt/AmpliTF) by Anton Poluektov uses this approach quite successfully for Amplitude Analysis), but this design has limitations for a more general fitting library such as zfit (or even [TensorWaves](https://github.com/ComPWA/tensorwaves), being built on top of AmpliTF). The main difficulty is to keep track of the different orderings of the data and parameters, especially the dependencies.

Let's create a simple Gaussian PDF. We already defined the `Space` for the data before; now we only need the parameters. These are objects of a different kind than a `Space`.

### Parameter

A `Parameter` (there are actually different kinds, more on that later) takes the following arguments as input:
`Parameter(human-readable name, initial value[, lower limit, upper limit])` where the limits are recommended but not mandatory. Furthermore, a `step_size` can be given (it is useful for it to be around the expected uncertainty; e.g. for large yields or small values, setting this can help a lot). Also, a `floating` argument is supported, indicating whether the parameter is allowed to float in the fit or not (just omitting the limits does _not_ make a parameter constant).

Parameters have a unique name, which serves as the identifier for e.g. fit results. However, a parameter _cannot_ be retrieved by its string identifier (its name); the object itself has to be used. In places where a parameter maps to something, the object itself is needed, not its name.

```python
mu = zfit.Parameter('mu', 1, -3, 3, step_size=0.2)
sigma_num = zfit.Parameter('sigma42', 1, 0.1, 10, floating=False)
```

These attributes can be changed:

```python
print(f"sigma is float: {sigma_num.floating}")
sigma_num.floating = True
print(f"sigma is float: {sigma_num.floating}")
```

    sigma is float: False
    sigma is float: True

*PITFALL NOTEBOOKS: since parameters have a unique name, a second parameter with the same name cannot be created; the behavior is undefined and therefore it raises an error. While this does not pose a problem in a normal Python script, it does in a Jupyter-like notebook, since it is common practice to "rerun" a cell as an attempt to "reset" things. Bear in mind that this does not make sense from a logical point of view: the parameter already exists. Best practice: write a small wrapper (see the sketch below), do not rerun the parameter creation cell, or simply rerun the whole notebook (restart kernel & run all). For further details, have a look at the discussion and arguments [here](https://github.com/zfit/zfit/issues/186).*
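Such a wrapper could look like the following (a minimal sketch, not part of zfit; it simply caches created parameters by name so that a cell can be rerun safely):

```python
# hypothetical helper: reuse an existing Parameter instead of recreating it
_param_cache = {}

def get_param(name, *args, **kwargs):
    if name not in _param_cache:
        _param_cache[name] = zfit.Parameter(name, *args, **kwargs)
    return _param_cache[name]

mu_safe = get_param('mu_safe', 1., -3, 3)  # rerunning this cell is fine
```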
Now we have everything to create a Gaussian PDF:

```python
gauss = zfit.pdf.Gauss(obs=obs, mu=mu, sigma=sigma_num)
```

Since this holds all the parameters and the observables are well defined, we can retrieve them:

```python
gauss.n_obs  # dimensions
```

    1

```python
gauss.obs
```

    ('obs1',)

```python
gauss.space
```

    <zfit Space obs=('obs1',), axes=(0,), limits=(array([[-5.]]), array([[10.]]))>

```python
gauss.norm_range
```

    <zfit Space obs=('obs1',), axes=(0,), limits=(array([[-5.]]), array([[10.]]))>

As we've seen, the `obs` we defined is the `space` of Gauss: it acts as the default limits whenever needed (e.g. for sampling). `gauss` also has a `norm_range`, which by default also equals the `obs` given; however, we can explicitly change it with `set_norm_range`.

We can also access the parameters of the PDF in two ways, depending on our intention: either by _name_ (the parameterization name, e.g. `mu` and `sigma`, as defined in the `Gauss`), which is useful if we are interested in the parameters that _describe_ the shape:

```python
gauss.params
```

    OrderedDict([('mu', <zfit.Parameter 'mu' floating=True value=1>), ('sigma', <zfit.Parameter 'sigma42' floating=True value=1>)])

or by retrieving all the parameters that the PDF depends on. While this may sound trivial now, we will see later that models can depend on other models (e.g. sums) and parameters on other parameters. There is one function that automatically retrieves _all_ dependencies, `get_params`. It takes three arguments to filter:
- `floating`: whether to keep only floating parameters, only non-floating ones, or not to discriminate
- `is_yield`: whether the parameter is a yield, not a yield, or both
- `extract_independent`: whether to recursively collect all parameters. This, and the explanation for why independent, can be found later on in the `Simultaneous` tutorial.

Usually, the default is exactly what we want if we look for _all free parameters that this PDF depends on_.

```python
gauss.get_params()
```

    OrderedSet([<zfit.Parameter 'mu' floating=True value=1>, <zfit.Parameter 'sigma42' floating=True value=1>])

The difference also becomes clear if we e.g. use the same parameter twice:

```python
gauss_only_mu = zfit.pdf.Gauss(obs=obs, mu=mu, sigma=mu)
print(f"params={gauss_only_mu.params}")
print(f"get_params={gauss_only_mu.get_params()}")
```

    params=OrderedDict([('mu', <zfit.Parameter 'mu' floating=True value=1>), ('sigma', <zfit.Parameter 'mu' floating=True value=1>)])
    get_params=OrderedSet([<zfit.Parameter 'mu' floating=True value=1>])

## Functionality

PDFs provide a few useful methods. The main features of a zfit PDF are:

- `pdf`: the normalized value of the PDF. It takes an argument `norm_range` that can be set to `False`, in which case we retrieve the unnormalized value
- `integrate`: given a certain range, the PDF is integrated. Like `pdf`, it takes a `norm_range` argument that integrates over the unnormalized `pdf` if set to `False`
- `sample`: samples from the pdf and returns a `Data` object

```python
integral = gauss.integrate(limits=(-1, 3))  # corresponds to 2 sigma integral
integral
```

    <tf.Tensor: shape=(1,), dtype=float64, numpy=array([0.95449974])>

### Tensors

As we see, many zfit functions return Tensors. This is, however, nothing magical! If we're outside of models, then we can always safely convert them to a numpy array by calling `zfit.run(...)` on them (or on any structure containing potentially multiple Tensors).
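For example (a short sketch using the integral from above):

```python
# convert the Tensor to a plain numpy array
integral_np = zfit.run(integral)
print(integral_np)  # array([0.95449974])
```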
However, this may often not even be required! Tensors can be added just like numpy arrays and interact well with Python and Numpy:

```python
np.sqrt(integral)
```

    array([0.97698502])

They also have shapes and dtypes, can be sliced, etc., so do not convert them unless you need to. More on this can be seen in the later talk about zfit and TensorFlow 2.0.

```python
sample = gauss.sample(n=1000)  # default space taken as limits
sample
```

    <zfit.core.data.SampleData at 0x7f1790089d00>

```python
sample.unstack_x()[:10]
```

    <tf.Tensor: shape=(10,), dtype=float64, numpy=
    array([-0.09952247,  1.49016828,  2.12148953,  0.39491123,  1.08061772,
            0.49297195,  0.57305784,  1.98737622,  1.46084697,  0.30197322])>

```python
sample.n_obs
```

    1

```python
sample.obs
```

    ('obs1',)

We see that `sample` also returns a zfit `Data` object, with the same space it was sampled in. It can directly be used, e.g.:

```python
probs = gauss.pdf(sample)
probs[:10]
```

    <tf.Tensor: shape=(10,), dtype=float64, numpy=
    array([0.21796662, 0.35378319, 0.21271375, 0.33220443, 0.39764798,
           0.35082167, 0.36419045, 0.24502515, 0.35875036, 0.31268492])>

**NOTE**: In case you want to do this repeatedly (e.g. for toy studies), there is a much more efficient way (see later on).

## Plotting

So far, we have a dataset and a PDF. Before we go for fitting, we can make a plot. This functionality is not _directly_ provided in zfit (but can be added to [zfit-physics](https://github.com/zfit/zfit-physics)). It is, however, simple enough to do:

```python
def plot_model(model, data, scale=1, plot_data=True):  # we will use scale later on
    nbins = 50

    lower, upper = data.data_range.limit1d
    x = tf.linspace(lower, upper, num=1000)  # np.linspace also works
    y = model.pdf(x) * size_normal / nbins * data.data_range.area()
    y *= scale
    plt.plot(x, y)
    data_plot = zfit.run(z.unstack_x(data))  # we could also use the `to_pandas` method
    if plot_data:
        plt.hist(data_plot, bins=nbins)
```

```python
plot_model(gauss, data_normal)
```

We can of course do better (and will, later on, continuously improve the plots), but this is quite simple and gives us the full power of matplotlib.

### Different models

zfit offers a selection of predefined models (extended by models from zfit-physics that contain physics-specific models such as ARGUS-shaped ones).
```python
print(zfit.pdf.__all__)
```

    ['BasePDF', 'BaseFunctor', 'Exponential', 'CrystalBall', 'DoubleCB', 'Gauss', 'Uniform', 'TruncatedGauss', 'WrapDistribution', 'Cauchy', 'Chebyshev', 'Legendre', 'Chebyshev2', 'Hermite', 'Laguerre', 'RecursivePolynomial', 'ProductPDF', 'SumPDF', 'GaussianKDE1DimV1', 'ZPDF', 'SimplePDF', 'SimpleFunctorPDF']

To create a more realistic model, we can build some components for a mass fit with:
- a signal component: CrystalBall
- a combinatorial background: Exponential
- a partially reconstructed background on the left: Kernel Density Estimation

```python
mass_obs = zfit.Space('mass', (0, 1000))
```

```python
# Signal component

mu_sig = zfit.Parameter('mu_sig', 400, 100, 600)
sigma_sig = zfit.Parameter('sigma_sig', 50, 1, 100)
alpha_sig = zfit.Parameter('alpha_sig', 300, 100, 400)
n_sig = zfit.Parameter('n sig', 4, 0.1, 30)
signal = zfit.pdf.CrystalBall(obs=mass_obs, mu=mu_sig, sigma=sigma_sig, alpha=alpha_sig, n=n_sig)
```

```python
# combinatorial background

lam = zfit.Parameter('lambda', -0.01, -0.05, -0.001)
comb_bkg = zfit.pdf.Exponential(lam, obs=mass_obs)
```

```python
part_reco_data = np.random.normal(loc=200, scale=150, size=700)
part_reco_data = zfit.Data.from_numpy(obs=mass_obs, array=part_reco_data)  # we don't need to do this, but now we're sure it's inside the limits

part_reco = zfit.pdf.GaussianKDE1DimV1(obs=mass_obs, data=part_reco_data, bandwidth='adaptive')
```

## Composing models

We can also compose multiple models. Here we'll stick to one-dimensional models; the extension to multiple dimensions is explained in the "custom models tutorial". We will use a `SumPDF`, which takes pdfs and fractions. If we provide n pdfs and:
- n - 1 fracs: the nth fraction will be 1 - sum(fracs)
- n fracs: no normalization attempt is made by `SumPDF`. If the fracs are not implicitly normalized, this can lead to bad fitting behavior, since there is one degree of freedom too many

```python
sig_frac = zfit.Parameter('sig_frac', 0.3, 0, 1)
comb_bkg_frac = zfit.Parameter('comb_bkg_frac', 0.25, 0, 1)
model = zfit.pdf.SumPDF([signal, comb_bkg, part_reco], [sig_frac, comb_bkg_frac])
```

In order to have a corresponding data sample, we can just create one. Since we want to fit to this dataset later on, we will create it with slightly different values. For this, we can use the ability of a parameter to be set temporarily to a certain value:

```python
print(f"before: {sig_frac}")
with sig_frac.set_value(0.25):
    print(f"new value: {sig_frac}")
print(f"after 'with': {sig_frac}")
```

    before: <zfit.Parameter 'sig_frac' floating=True value=0.3>
    new value: <zfit.Parameter 'sig_frac' floating=True value=0.25>
    after 'with': <zfit.Parameter 'sig_frac' floating=True value=0.3>

While this is useful, it does not fully scale up. Therefore, we can use the `zfit.param.set_values` helper.
(_Sidenote: instead of a list of values, we can also pass a `FitResult`; the given parameters then take their values from the result._)

```python
with zfit.param.set_values([mu_sig, sigma_sig, sig_frac, comb_bkg_frac, lam], [370, 34, 0.18, 0.15, -0.006]):
    data = model.sample(n=10000)
```

```python
plot_model(model, data);
```

Plotting the components is not difficult now: we can either plot the pdfs separately (as we can still access them) or, in a generalized manner, by accessing the `pdfs` attribute:

```python
def plot_comp_model(model, data):
    for mod, frac in zip(model.pdfs, model.params.values()):
        plot_model(mod, data, scale=frac, plot_data=False)
    plot_model(model, data)
```

```python
plot_comp_model(model, data)
```

Now we can add legends etc. By the way, did you notice that the `frac` params are actually zfit `Parameter`s? We just used them as if they were Python scalars, and it works.

```python
print(model.params)
```

    OrderedDict([('frac_0', <zfit.Parameter 'sig_frac' floating=True value=0.3>), ('frac_1', <zfit.Parameter 'comb_bkg_frac' floating=True value=0.25>), ('frac_2', <zfit.ComposedParameter 'Composed_autoparam_2' params=OrderedDict([('param_0', <zfit.Parameter 'sig_frac' floating=True value=0.3>), ('param_1', <zfit.Parameter 'comb_bkg_frac' floating=True value=0.25>)]) value=0.45>)])

### Extended PDFs

So far, we have only looked at normalized PDFs, which contain information about the shape but not about the _absolute_ scale. We can make a PDF extended by adding a yield to it.

The behavior of the new, extended PDF does **NOT** change: any methods we called before will act the same and return the same values. The only exception: some may require one argument _less_ now. What changes is that the flag `model.is_extended` now returns `True`. Furthermore, we now have a few more methods that we can use, which would have raised an error before:
- `get_yield`: returns the yield parameter (notice that the yield is _not_ added to the shape parameters `params`)
- `ext_{pdf,integrate}`: these methods return the same as the versions used before, however multiplied by the yield
- `sample` is still the same, but does not _require_ the argument `n` anymore. By default, this now equals a _Poisson-sampled_ n around the yield.

The `SumPDF` now does not strictly need `fracs` anymore: if _all_ input PDFs are extended, the sum will be as well and will use the (normalized) yields as fracs.

The preferred way to create an extended PDF is `PDF.create_extended(yield)`. However, since this relies on copying the PDF (which may not work for various reasons), there is also a `set_yield(yield)` method that sets the yield in place. This won't lead to ambiguities, as everything is supposed to work the same.

```python
yield_model = zfit.Parameter('yield_model', 10000, 0, 20000, step_size=10)
model_ext = model.create_extended(yield_model)
```
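As a quick check of the `ext_*` methods listed above (a sketch; it assumes that `ext_integrate` shares the signature of `integrate`):

```python
# integrating the extended model over the full range returns the integral
# multiplied by the yield, i.e. roughly the yield itself here
n_expected = model_ext.ext_integrate(limits=(0, 1000))
print(zfit.run(n_expected))  # ~ 10000, the value of yield_model
```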
Alternatively, we can create the models as extended and sum them up:

```python
sig_yield = zfit.Parameter('sig_yield', 2000, 0, 10000, step_size=1)
sig_ext = signal.create_extended(sig_yield)

comb_bkg_yield = zfit.Parameter('comb_bkg_yield', 6000, 0, 10000, step_size=1)
comb_bkg_ext = comb_bkg.create_extended(comb_bkg_yield)

part_reco_yield = zfit.Parameter('part_reco_yield', 2000, 0, 10000, step_size=1)
part_reco.set_yield(part_reco_yield)  # unfortunately, `create_extended` does not work here. But no problem, it won't change anything.
part_reco_ext = part_reco
```

```python
model_ext_sum = zfit.pdf.SumPDF([sig_ext, comb_bkg_ext, part_reco_ext])
```

# Loss

A loss combines the model and the data, for example to build a likelihood. Furthermore, it can contain constraints, i.e. additions to the likelihood. Currently, if the `Data` has weights, these are automatically taken into account.

```python
nll_gauss = zfit.loss.UnbinnedNLL(gauss, data_normal)
```

The loss has several attributes in order to be transparent to higher-level libraries. We can calculate its value using `value`.

```python
nll_gauss.value()
```

    <tf.Tensor: shape=(), dtype=float64, numpy=32625.929833399066>

Notice that, due to graph building, this takes significantly longer on the first run. Rerun the cell above and it will be way faster.

Furthermore, the loss also provides the possibility to calculate the gradients or, often used, the value and the gradients together.

We can access the data and models (and possible constraints):

```python
nll_gauss.model
```

    [<zfit.Gauss  params=[mu, sigma42] dtype=float64>0]

```python
nll_gauss.data
```

    [<zfit.core.data.Data at 0x7f17900ed820>]

```python
nll_gauss.constraints
```

    []

Similar to the models, we can also get the parameters via `get_params`:

```python
nll_gauss.get_params()
```

    OrderedSet([<zfit.Parameter 'mu' floating=True value=1>, <zfit.Parameter 'sigma42' floating=True value=1>])

### Extended loss

More interestingly, we can now build a loss for our composite sum model using the sampled data. Since we created an extended model, we can also create an extended likelihood, which takes a Poisson term into account to match the yield to the number of events.

```python
nll = zfit.loss.ExtendedUnbinnedNLL(model_ext_sum, data)
```

```python
nll.get_params()
```

    OrderedSet([<zfit.Parameter 'sig_yield' floating=True value=2000>, <zfit.Parameter 'comb_bkg_yield' floating=True value=6000>, <zfit.Parameter 'part_reco_yield' floating=True value=2000>, <zfit.Parameter 'alpha_sig' floating=True value=300>, <zfit.Parameter 'mu_sig' floating=True value=400>, <zfit.Parameter 'n sig' floating=True value=4>, <zfit.Parameter 'sigma_sig' floating=True value=50>, <zfit.Parameter 'lambda' floating=True value=-0.01>])

# Minimization

While a loss is interesting in itself, we usually want to minimize it. For this, we can use the minimizers in zfit, most notably `Minuit`, a wrapper around the [iminuit minimizer](https://github.com/scikit-hep/iminuit).

The philosophy is to create a minimizer instance that is mostly _stateless_, e.g. it does not remember the position. (There are considerations to make it possible to have a state; in case you are interested, [contact us](https://github.com/zfit/zfit#contact).)

Given that iminuit provides us with a very reliable and stable minimizer, it is usually recommended. Others are implemented as well and more could easily be wrapped; however, their convergence is usually not as stable.

Minuit has a few options:
- `tolerance`: the Estimated Distance to Minimum (EDM) criterion for convergence (default 1e-3)
- `verbosity`: between 0 and 10; 5 is normal, 7 is verbose, 10 is maximum
- `use_minuit_grad`: if True, uses the Minuit numerical gradient instead of the TensorFlow gradient. This is usually more stable for smaller fits; furthermore, the TensorFlow gradient _can_ (based on experience) sometimes be wrong.
```python
minimizer = zfit.minimize.Minuit(use_minuit_grad=True)
```

For the minimization, we can call `minimize`, which takes:
- a loss, as we created above
- optionally: the parameters to minimize

By default, `minimize` uses all the free floating parameters (obtained with `get_params`). We can also explicitly specify which ones to use by giving them (or better, objects that depend on them) to `minimize`; note, however, that non-floating parameters won't be minimized even if given explicitly, as sketched below.
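For instance (a minimal sketch; `params` is assumed to be the keyword for the parameter subset in this zfit version):

```python
# minimize only mu of the Gaussian; sigma42 would stay at its current value
result_mu = minimizer.minimize(nll_gauss, params=[mu])
```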
## Pre-fit parts of the PDF

Before we fit the whole PDF, it can be useful to pre-fit parts of it. One way is to fix the combinatorial background by fitting the exponential to the right tail. Therefore, we create a new data object with an additional cut and, furthermore, set the normalization range of the background pdf to the range we are interested in.

```python
values = z.unstack_x(data)
obs_right_tail = zfit.Space('mass', (700, 1000))
data_tail = zfit.Data.from_tensor(obs=obs_right_tail, tensor=values)
with comb_bkg.set_norm_range(obs_right_tail):
    nll_tail = zfit.loss.UnbinnedNLL(comb_bkg, data_tail)
    minimizer.minimize(nll_tail)
```

    ------------------------------------------------------------------
    | FCN = 328                     |      Ncalls=19 (19 total)      |
    | EDM = 9.18e-10 (Goal: 0.001)  |            up = 0.5            |
    ------------------------------------------------------------------
    |  Valid Min.   | Valid Param.  | Above EDM | Reached call limit |
    ------------------------------------------------------------------
    |     True      |     True      |   False   |       False        |
    ------------------------------------------------------------------
    | Hesse failed  |   Has cov.   | Accurate  | Pos. def. | Forced |
    ------------------------------------------------------------------
    |    False      |     True     |   True    |   True    | False  |
    ------------------------------------------------------------------

Now that the lambda parameter of the exponential is fitted, we can fix it.

```python
lam.floating = False
lam
```

    <zfit.Parameter 'lambda' floating=False value=-0.008587>

```python
result = minimizer.minimize(nll)
```

    ------------------------------------------------------------------
    | FCN = -1.93e+04               |     Ncalls=185 (185 total)     |
    | EDM = 6.35e-06 (Goal: 0.001)  |            up = 0.5            |
    ------------------------------------------------------------------
    |  Valid Min.   | Valid Param.  | Above EDM | Reached call limit |
    ------------------------------------------------------------------
    |     True      |     True      |   False   |       False        |
    ------------------------------------------------------------------
    | Hesse failed  |   Has cov.   | Accurate  | Pos. def. | Forced |
    ------------------------------------------------------------------
    |    False      |     True     |   True    |   True    | False  |
    ------------------------------------------------------------------

```python
plot_comp_model(model_ext_sum, data)
```

# Fit result

The result of every minimization is stored in a `FitResult`. This is the last stage of the zfit workflow and serves as the interface to other libraries. Its main purpose is to store the values of the fit, to reference the objects that were used, and to perform (simple) uncertainty estimation.

```python
print(result)
```

    FitResult of
    <ExtendedUnbinnedNLL model=[<zfit.SumPDF  params=[Composed_autoparam_5, Composed_autoparam_6, Composed_autoparam_7] dtype=float64>0] data=[<zfit.core.data.SampleData object at 0x7f176002fdc0>] constraints=[]>
    with
    <Minuit strategy=PushbackStrategy tolerance=0.001>

    ╒═════════╤═════════════╤══════════════════╤═════════╤═════════════╕
    │  valid  │  converged  │  param at limit  │   edm   │  min value  │
    ╞═════════╪═════════════╪══════════════════╪═════════╪═════════════╡
    │  True   │    True     │      False       │ 6.4e-06 │  -1.93e+04  │
    ╘═════════╧═════════════╧══════════════════╧═════════╧═════════════╛

    Parameters
    name               value  at limit
    ---------------  -------  ----------
    sig_yield           1804  False
    comb_bkg_yield      1095  False
    part_reco_yield     7101  False
    alpha_sig            300  False
    mu_sig             370.8  False
    n sig                  4  False
    sigma_sig          33.87  False

This gives an overview of the whole result. Often we're mostly interested in the parameters and their values, which we can access with the `params` attribute:

```python
print(result.params)
```

    name               value  at limit
    ---------------  -------  ----------
    sig_yield           1804  False
    comb_bkg_yield      1095  False
    part_reco_yield     7101  False
    alpha_sig            300  False
    mu_sig             370.8  False
    n sig                  4  False
    sigma_sig          33.87  False

This is a `dict` that stores any knowledge about the parameters and can be accessed by the parameter (object) itself:

```python
result.params[mu_sig]
```

    {'value': 370.7878667059073}

'value' is the value at the minimum. To obtain other information about the minimization process, `result` contains more attributes:
- `fmin`: the function minimum
- `edm`: the estimated distance to the minimum
- `info`: contains a lot of information, especially the original information returned by the specific minimizer
- `converged`: whether the fit converged

```python
result.fmin
```

    -19300.779346305906

## Estimating uncertainties

The `FitResult` has mainly two methods to estimate the uncertainty:
- a profile likelihood method (like MINOS)
- a Hessian approximation of the likelihood (like HESSE)

When using `Minuit`, this (currently) uses its own implementation. However, zfit has its own implementation as well, which is likely to become the standard and can be invoked by changing the method name. Hesse is also [on the way to implementing](https://github.com/zfit/zfit/pull/244) the [corrections for weights](https://inspirehep.net/literature/1762842).

We can explicitly specify which parameters to calculate; by default, it does so for all of them.

```python
result.hesse()
```

    OrderedDict([(<zfit.Parameter 'sig_yield' floating=True value=1804>, {'error': 70.15087267737408}),
                 (<zfit.Parameter 'comb_bkg_yield' floating=True value=1095>, {'error': 70.35114369167526}),
                 (<zfit.Parameter 'part_reco_yield' floating=True value=7101>, {'error': 131.80924912505543}),
                 (<zfit.Parameter 'alpha_sig' floating=True value=300>, {'error': 141.4213562373095}),
                 (<zfit.Parameter 'mu_sig' floating=True value=370.8>, {'error': 1.3661545484142485}),
                 (<zfit.Parameter 'n sig' floating=True value=4>, {'error': 10.069756698215553}),
                 (<zfit.Parameter 'sigma_sig' floating=True value=33.87>, {'error': 1.2650183734125646})])

```python
# result.hesse(method='hesse_np')
```

We get the result directly returned.
The errors are also added to `result.params` for each parameter and are nicely displayed in an added column:

```python
print(result.params)
```

    name               value  minuit_hesse    at limit
    ---------------  -------  --------------  ----------
    sig_yield           1804  +/-       70    False
    comb_bkg_yield      1095  +/-       70    False
    part_reco_yield     7101  +/-  1.3e+02    False
    alpha_sig            300  +/-  1.4e+02    False
    mu_sig             370.8  +/-      1.4    False
    n sig                  4  +/-       10    False
    sigma_sig          33.87  +/-      1.3    False

```python
errors, new_result = result.errors(params=[sig_yield, part_reco_yield, mu_sig])  # just using three for speed reasons
```

    /home/jonas/Documents/physics/software/zfit_project/zfit_repo/zfit/minimizers/fitresult.py:360: FutureWarning: 'minuit_minos' will be changed as the default errors method to a custom implementationwith the same functionality. If you want to make sure that 'minuit_minos' will be used in the future, add it explicitly as in `errors(method='minuit_minos')`
      warnings.warn("'minuit_minos' will be changed as the default errors method to a custom implementation"

```python
# errors, new_result = result.errors(params=[yield_model, sig_frac, mu_sig], method='zfit_error')
```

```python
print(errors)
```

    OrderedDict([(<zfit.Parameter 'sig_yield' floating=True value=1804>, MError(name='sig_yield', is_valid=True, lower=-69.66325485797651, upper=70.75759128186598, lower_valid=True, upper_valid=True, at_lower_limit=False, at_upper_limit=False, at_lower_max_fcn=False, at_upper_max_fcn=False, lower_new_min=False, upper_new_min=False, nfcn=138, min=1803.8532804234746)),
                 (<zfit.Parameter 'part_reco_yield' floating=True value=7101>, MError(name='part_reco_yield', is_valid=True, lower=-131.88637854089905, upper=132.34447403753458, lower_valid=True, upper_valid=True, at_lower_limit=False, at_upper_limit=False, at_lower_max_fcn=False, at_upper_max_fcn=False, lower_new_min=False, upper_new_min=False, nfcn=60, min=7101.093509366213)),
                 (<zfit.Parameter 'mu_sig' floating=True value=370.8>, MError(name='mu_sig', is_valid=True, lower=-1.36717243612375, upper=1.356060293846917, lower_valid=True, upper_valid=True, at_lower_limit=False, at_upper_limit=False, at_lower_max_fcn=False, at_upper_max_fcn=False, lower_new_min=False, upper_new_min=False, nfcn=106, min=370.7878667059073))])

```python
print(result.params)
```

    name               value  minuit_hesse    minuit_minos          at limit
    ---------------  -------  --------------  --------------------  ----------
    sig_yield           1804  +/-       70    -  70       +  71     False
    comb_bkg_yield      1095  +/-       70                          False
    part_reco_yield     7101  +/-  1.3e+02    -1.3e+02    +1.3e+02  False
    alpha_sig            300  +/-  1.4e+02                          False
    mu_sig             370.8  +/-      1.4    -  1.4      +  1.4    False
    n sig                  4  +/-       10                          False
    sigma_sig          33.87  +/-      1.3                          False

#### What is 'new_result'?

When profiling a likelihood, as done in the algorithm used by `errors`, a new minimum can be found. If this is the case, the new minimum is returned; otherwise `new_result` is `None`. Furthermore, the current `result` would then be rendered invalid by setting the flag `valid` to `False`. _Note_: this behavior only applies to the zfit-internal error estimator.

### A simple profile

There is no default function (yet) for a simple profile plot. However, again, we're in Python and it's simple enough to do it for a parameter.
Let's do it for `sig_yield`:

```python
x = np.linspace(1600, 2000, num=50)
y = []
sig_yield.floating = False
for val in x:
    sig_yield.set_value(val)
    y.append(nll.value())

sig_yield.floating = True
zfit.param.set_values(nll.get_params(), result)
```

    <zfit.util.temporary.TemporarilySet at 0x7f16cc8d7550>

```python
plt.plot(x, y)
```

We can also access the covariance matrix of the parameters:

```python
result.covariance()
```

    array([[ 4.92114494e+03,  1.14332473e+03, -4.33133015e+03,
             0.00000000e+00, -1.99905281e+01,  0.00000000e+00,
             4.04511667e+01],
           [ 1.14332473e+03,  4.94928342e+03, -5.67290390e+03,
             0.00000000e+00, -6.88067541e+00,  0.00000000e+00,
             1.48550756e+01],
           [-4.33133015e+03, -5.67290390e+03,  1.73736782e+04,
             0.00000000e+00,  2.77291911e+01,  0.00000000e+00,
            -5.58205907e+01],
           [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
             2.00000000e+04,  0.00000000e+00,  0.00000000e+00,
             0.00000000e+00],
           [-1.99905281e+01, -6.88067541e+00,  2.77291911e+01,
             0.00000000e+00,  1.86637825e+00,  0.00000000e+00,
            -3.46142640e-01],
           [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
             0.00000000e+00,  0.00000000e+00,  1.01400000e+02,
             0.00000000e+00],
           [ 4.04511667e+01,  1.48550756e+01, -5.58205907e+01,
             0.00000000e+00, -3.46142640e-01,  0.00000000e+00,
             1.60027149e+00]])

# End of zfit

This is where zfit finishes and other libraries take over.

# Beginning of hepstats

`hepstats` is a library containing statistical tools and utilities for high energy physics. In particular, you can do statistical inference using the models and likelihood functions constructed in `zfit`.

Short example: let's compute, for instance, a confidence interval at 68% confidence level on the mean of the Gaussian defined above.

```python
from hepstats.hypotests.parameters import POIarray
from hepstats.hypotests.calculators import AsymptoticCalculator
from hepstats.hypotests import ConfidenceInterval
```

```python
calculator = AsymptoticCalculator(input=result, minimizer=minimizer)
```

```python
value = result.params[mu_sig]["value"]
error = result.params[mu_sig]["minuit_hesse"]["error"]

mean_scan = POIarray(mu_sig, np.linspace(value - 1.5*error, value + 1.5*error, 10))
```

```python
ci = ConfidenceInterval(calculator, mean_scan)
```

```python
ci.interval()
```

    Confidence interval on mu_sig:
        369.42424650773955 < mu_sig < 372.1404588905455 at 68.0% C.L.

    {'observed': 370.7878667059073,
     'upper': 372.1404588905455,
     'lower': 369.42424650773955}

```python
from utils import one_minus_cl_plot

ax = one_minus_cl_plot(ci)
ax.set_xlabel("mean")
```

There will be more of `hepstats` later.
# Computing and classifying critical points

With all the tools we have already reviewed in the previous labs, computing critical points and classifying them via the criterion involving the Hessian matrix of differentiable functions of two variables is very simple using the **Sympy** module. For the critical points, it suffices to compute the gradient of the function and solve a (usually nonlinear) system of two equations; for the classification of the critical points, one must inspect the eigenvalues of the Hessian matrix (a computation also available in **Sympy**). As an application of computing and identifying relative maxima and minima, we will review how polynomial least-squares fitting of a one-dimensional set of points can be interpreted as an optimization problem.

### Objectives:

- Computing critical points
- Classifying critical points: the Hessian matrix
- An optimization problem: polynomial least-squares fitting

## Computing critical points

In this lab we will use the **Sympy** module as well as **Numpy** and **Matplotlib**, so we must import them for the rest of the script:

```python
import sympy as sp
import numpy as np
import matplotlib.pyplot as plt
```

As in previous labs, we must provide our own implementation to compute the gradient of a scalar function $f$. For that, we use the familiar relation between the Jacobian matrix $Df$ of a scalar function and the (column) gradient vector $\nabla f$, namely $\nabla f = Df^{t}$:

```python
gradient = lambda f, v: sp.transpose(sp.Matrix([f]).jacobian(v))
```

As studied in the lectures, assuming the two-variable function $f$ is differentiable, critical points are computed by noting that the tangent planes to the surface defined by the function are horizontal at the relative extrema, i.e. at the points where the partial derivatives of $f$ vanish. Let us see this with an example where $f(x,y)=-x^3 +4xy-2y^2+1$:

```python
x, y = sp.symbols('x y', real=True)  # define the symbolic variables x and y
f = sp.Lambda((x,y), -x**3 +4*x*y-2*y**2+1)

# Compute the critical points
grad_f = gradient(f(x,y),(x,y))
sol = sp.solve((sp.Eq(grad_f[0],0),sp.Eq(grad_f[1],0)),(x,y))
display('Critical points for x and y:', sol)
```

    'Critical points for x and y:'
    [(0, 0), (4/3, 4/3)]

To visually check the kind of critical points this function has, we can plot it:

```python
p = sp.plotting.plot3d(f(x,y), (x, -2, 2), (y, -2, 2), show=False)
p.xlabel='x'
p.ylabel='y'
p.zlabel='z'
p.show()
```

### **Exercise 8.1**

Compute the critical points and plot the function
$$
f(x,y) = \left(\frac12-x^2+y^2\right)e^{1-x^2-y^2}
$$
on the region $(x,y)\in[-4,4]\times[-4,4]$.

```python
# YOUR CODE HERE
```

## Classifying critical points: the Hessian matrix

Computing the Hessian matrix with the **Sympy** module is immediate, since it suffices to use the `sp.hessian` command.
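For reference, this is the standard second-derivative test that the eigenvalue inspection below applies at a critical point $(x_0,y_0)$ of a twice-differentiable $f$, with $\lambda_1,\lambda_2$ the eigenvalues of the Hessian matrix:

$$
\lambda_1,\lambda_2>0 \Rightarrow \text{relative minimum},\qquad
\lambda_1,\lambda_2<0 \Rightarrow \text{relative maximum},\qquad
\lambda_1\lambda_2<0 \Rightarrow \text{saddle point}.
$$

If some eigenvalue is zero, the test is inconclusive.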
Once this is done, to compute the eigenvalues of this matrix and decide, according to their values, whether the critical points are relative maxima, relative minima or saddle points, we only need the `eigenvals` method, which is available on objects of the `sp.Matrix` class:

```python
H = sp.Lambda((x,y), sp.hessian(f(x,y), (x,y)))
display('Hessian matrix', H(x,y))

# Classification of the first critical point: (0,0)
eigs = H(*sol[0]).eigenvals()
display('Eigenvalues for point (0,0)', np.double([*eigs]))

# Classification of the second critical point: (4/3,4/3)
eigs = H(*sol[1]).eigenvals()
display('Eigenvalues for point (4/3,4/3)', np.double([*eigs]))
```

    'Hessian matrix'
    $\displaystyle \left[\begin{matrix}- 6 x & 4\\4 & -4\end{matrix}\right]$

    'Eigenvalues for point (0,0)'
    array([-6.47213595,  2.47213595])

    'Eigenvalues for point (4/3,4/3)'
    array([-10.47213595,  -1.52786405])

Since the eigenvalues at $(0,0)$ have opposite signs, it is a saddle point; at $(4/3,4/3)$ both are negative, so it is a relative maximum.

### **Exercise 8.2**

Classify the critical points obtained in Exercise 8.1, which correspond to the function
$$
f(x,y) = \left(\frac12-x^2+y^2\right)e^{1-x^2-y^2}
$$
on the region $(x,y)\in[-4,4]\times[-4,4]$.

```python
# YOUR CODE HERE
```

## Polynomial least-squares fitting

Given a set of points in the plane $(x_1,y_1), (x_2,y_2),\ldots, (x_{m},y_{m})$, one often looks for the polynomial of degree $N$ that minimizes the mean squared error between the given data and the values of the fitted polynomial. In the case of a degree-$1$ polynomial, the polynomial is written as $p(x)=ax+b$ and the above problem reduces to finding $(a^*,b^*)$ that minimizes the error function:
$$
\mathrm{error}(a^*,b^*)=\min_{(a,b)\in\mathbb{R}^{2}}\mathrm{error}(a,b)
$$
where
$$
\mathrm{error}(a,b)=\sum_{i=1}^{m}(ax_i+b-y_i)^2.
$$
As with any other unconstrained minimization problem, to solve it one must compute the critical points of the error function and then check that they give a relative minimum (which is absolute, since the error function tends to infinity as $a$ or $b$ tends to $\pm\infty$). Let us carry out this computation in a concrete example. First we enter the data:

```python
# Data
xdata = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
ydata = np.array([0.0, 0.8, 0.9, 0.1, -0.8, -1.0])
```

Next, we define the error function, compute the critical points, and use the Hessian matrix to confirm that they give a relative minimum.
```python
# Error to minimize in the least-squares method
a, b = sp.symbols('a b', real=True)
error = sp.Lambda((a,b), sum((a*xi + b - yi)**2 for xi,yi in zip(xdata, ydata)))
display('Error function', error(a,b))

# Compute the critical points
grad_error = gradient(error(a,b),(a,b))
sol = sp.solve((sp.Eq(grad_error[0],0),sp.Eq(grad_error[1],0)),(a,b))
display('Critical points for a and b:', sol)

# Classify the critical points: eigenvalues of the Hessian matrix
H = sp.hessian(error(a,b), (a,b))
display('Hessian matrix', H)
eigs = H.eigenvals()
display('Eigenvalues', [*eigs])
display('Eigenvalues', np.double([*eigs]))
```

    'Error function'
    $\displaystyle b^{2} + \left(1.0 a + b - 0.8\right)^{2} + \left(2.0 a + b - 0.9\right)^{2} + \left(3.0 a + b - 0.1\right)^{2} + \left(4.0 a + b + 0.8\right)^{2} + \left(5.0 a + b + 1.0\right)^{2}$

    'Critical points for a and b:'
    {a: -0.302857142857143, b: 0.757142857142857}

    'Hessian matrix'
    $\displaystyle \left[\begin{matrix}110.0 & 30.0\\30.0 & 12\end{matrix}\right]$

    'Eigenvalues'
    [61 - sqrt(3301), sqrt(3301) + 61]

    'Eigenvalues'
    array([  3.54567031, 118.45432969])

Since polynomial fitting of data is such a recurrent task, there are dedicated numerical methods for it (both in one and in several dimensions). In particular, the **Numpy** module also has a direct tool for this fit, the `np.polyfit` command. Let us check that the coefficients it computes are the same as those obtained with the **Sympy** computations:

```python
# Degree-1 least-squares polynomial fit
z = np.polyfit(xdata, ydata, 1)
display('Values for a and b', z)
```

    'Values for a and b'
    array([-0.30285714,  0.75714286])

Additionally, the `np.polyfit` command allows fitting with a polynomial of any degree. In what follows, the degree-$3$ polynomial fit is computed and plotted:

```python
# Degree-3 least-squares polynomial fit
z = np.polyfit(xdata, ydata, 3)

# Define a Sympy polynomial from its coefficients
x = sp.symbols('x', real=True)
P = sp.Lambda(x, sum((a*x**i for i,a in enumerate(z[::-1]))))

# Plot
pol = sp.lambdify(x, P(x), "numpy")
plt.plot(xdata, ydata, '.', label='Data')
xp = np.linspace(-1., 6., 100)
plt.plot(xp, pol(xp), '-', label='Fitting')
plt.xlim(-1, 6)
plt.ylim(-2, 2)
plt.legend()
plt.show()
```

### **Exercise 8.3**

About the data used above, it is known that at the point $x=4.5$ the value is $-1.01$. Use polynomials of different degrees to fit the data, for example $N=3, 5, 10, 20, 30$, and compute the error made by the fit at the point $x=4.5$:

- Does increasing the polynomial degree improve the error?
- For which value of $N$ is the fit error at $x=4.5$ smallest?
- When the new data point at $x=4.5$ is introduced and $N<4$ is used: do you notice any difference in the fitted curve?

```python
# YOUR CODE HERE
```
# Constrained optimization

Now we will move on to studying constrained optimization problems, i.e., the full problem

$$
\begin{align}
\ \min \quad &f(x)\\
\text{s.t.} \quad & g_j(x) \geq 0\text{ for all }j=1,\ldots,J\\
& h_k(x) = 0\text{ for all }k=1,\ldots,K\\
&a_i\leq x_i\leq b_i\text{ for all } i=1,\ldots,n\\
&x\in \mathbb R^n,
\end{align}
$$

where for all $i=1,\ldots,n$ it holds that $a_i,b_i\in \mathbb R$, or they may also be $-\infty$ or $\infty$.

For example, we can have the optimization problem

$$
\begin{align}
\ \min \quad &x_1^2+x_2^2\\
\text{s.t.} \quad & x_1+x_2-1\geq 0\\
&-1\leq x_1\leq 1, x_2\leq 3.\\
\end{align}
$$

In order to optimize that problem, we can define the following Python function:

```python
import numpy as np

def f_constrained(x):
    return np.linalg.norm(x)**2, [x[0] + x[1] - 1], []
```

Now, we can call the function:

```python
(f_val, ieq, eq) = f_constrained([1, 0])
print("Value of f is " + str(f_val))
if len(ieq) > 0:
    print("The values of inequality constraints are:")
    for ieq_j in ieq:
        print(str(ieq_j) + ", ")
if len(eq) > 0:
    print("The values of the equality constraints are:")
    for eq_k in eq:
        print(str(eq_k) + ", ")
```

    Value of f is 1.0
    The values of inequality constraints are:
    0,

Is this solution feasible?

```python
if all([ieq_j >= 0 for ieq_j in ieq]) and all([eq_k == 0 for eq_k in eq]):
    print("Solution is feasible")
else:
    print("Solution is infeasible")
```

    Solution is feasible

# Indirect and direct methods for constrained optimization

There are two categories of methods for constrained optimization: indirect and direct methods. The main difference is that

1. Indirect methods convert the constrained optimization problem into a single unconstrained optimization problem, or a sequence of them, which are then solved. Often, the intermediate solutions do not need to be feasible; the sequence of solutions converges to a solution that is optimal (and, thus, feasible).
2. Direct methods deal with the constrained optimization problem directly. In this case, the intermediate solutions are feasible.

# Indirect methods

## Penalty function methods

**IDEA:** Include the constraints in the objective function with the help of penalty functions that penalize constraint violations.

Let $\alpha(x):\mathbb R^n\to\mathbb R$ be a function such that
* $\alpha(x)=0$ for all feasible $x$, and
* $\alpha(x)>0$ for all infeasible $x$.

Define the optimization problems

$$
\begin{align}
\ \min \qquad &f(x)+r\alpha(x)\\
\text{s.t.} \qquad &x\in \mathbb R^n
\end{align}
$$

for $r>0$, and let $x_r$ be the optimal solutions of these problems. Then the optimal solutions $x_r$ converge to the optimal solution of the constrained problem when $r\to\infty$, if such a solution exists.

For example, good choices of penalty functions are
* $h_k(x)^2$ for equality constraints, and
* $\left(\min\{0,g_j(x)\}\right)^2$ for inequality constraints.

```python
def alpha(x, f):
    (_, ieq, eq) = f(x)
    return sum([min([0, ieq_j])**2 for ieq_j in ieq]) + sum([eq_k**2 for eq_k in eq])
```

```python
alpha([1, 0], f_constrained)
```

    0

```python
def penalized_function(x, f, r):
    return f(x)[0] + r * alpha(x, f)
```

```python
penalized_function([-1, 0], f_constrained, 10000)
```

    40001.0

```python
from scipy.optimize import minimize
res = minimize(lambda x: penalized_function(x, f_constrained, 100000),
               [0, 0], method='Nelder-Mead', options={'disp': True})
print(res.x)
```

    Optimization terminated successfully.
             Current function value: 0.499998
             Iterations: 57
             Function evaluations: 96
    [ 0.49994305  0.50005243]

```python
(f_val, ieq, eq) = f_constrained(res.x)
print("Value of f is " + str(f_val))
if len(ieq) > 0:
    print("The values of inequality constraints are:")
    for ieq_j in ieq:
        print(str(ieq_j) + ", ")
if len(eq) > 0:
    print("The values of the equality constraints are:")
    for eq_k in eq:
        print(str(eq_k) + ", ")

if all([ieq_j >= 0 for ieq_j in ieq]) and all([eq_k == 0 for eq_k in eq]):
    print("Solution is feasible")
else:
    print("Solution is infeasible")
```

    Value of f is 0.49999548939
    The values of inequality constraints are:
    -4.51660156242e-06,
    Solution is infeasible

### How to set the penalty term $r$?

The penalty term should
* be large enough for the solutions to be close enough to the feasible region, but
* not be so large that it
  * causes numerical problems, or
  * causes premature convergence to non-optimal solutions because of relative tolerances.

Usually, the penalty term is either
* set as big as possible without causing problems (hard to know), or
* updated iteratively (see the sketch at the end of this lecture).

# Barrier function methods

**IDEA:** Prevent leaving the feasible region, so that the value of the objective is $\infty$ outside the feasible set.

This method is only applicable to problems with inequality constraints for which the set

$$\{x\in \mathbb R^n: g_j(x)>0\text{ for all }j=1,\ldots,J\}$$

is non-empty.

Let $\beta:\{x\in \mathbb R^n: g_j(x)>0\text{ for all }j=1,\ldots,J\}\to \mathbb R$ be a function such that $\beta(x)\to \infty$ when $x\to\partial\{x\in \mathbb R^n: g_j(x)>0\text{ for all }j=1,\ldots,J\}$, where $\partial A$ is the boundary of the set $A$. Now, define the optimization problem

$$
\begin{align}
\min \qquad & f(x) + r\beta(x)\\
\text{s.t. } \qquad & x\in \{x\in \mathbb R^n: g_j(x)>0\text{ for all }j=1,\ldots,J\}
\end{align}
$$

and let $x_r$ be its optimal solution (which we assume to exist for all $r>0$). Then $x_r$ converges to the optimal solution of the problem (if it exists) when $r\to 0^+$ (i.e., as $r$ converges to zero from the right).

A good choice of barrier function is $\frac{1}{g_j(x)}$.

```python
def beta(x, f):
    _, ieq, _ = f(x)
    try:
        value = sum([1 / max([0, ieq_j]) for ieq_j in ieq])
    except ZeroDivisionError:
        value = float("inf")
    return value
```

```python
def function_with_barrier(x, f, r):
    return f(x)[0] + r * beta(x, f)
```

```python
from scipy.optimize import minimize
res = minimize(lambda x: function_with_barrier(x, f_constrained, 0.00000000000001),
               [1, 1], method='Nelder-Mead', options={'disp': True})
print(res.x)
```

    Optimization terminated successfully.
             Current function value: 0.500000
             Iterations: 78
             Function evaluations: 136
    [ 0.49998927  0.50001085]

```python
(f_val, ieq, eq) = f_constrained(res.x)
print("Value of f is " + str(f_val))
if len(ieq) > 0:
    print("The values of inequality constraints are:")
    for ieq_j in ieq:
        print(str(ieq_j) + ", ")
if len(eq) > 0:
    print("The values of the equality constraints are:")
    for eq_k in eq:
        print(str(eq_k) + ", ")
if all([ieq_j >= 0 for ieq_j in ieq]) and all([eq_k == 0 for eq_k in eq]):
    print("Solution is feasible")
else:
    print("Solution is infeasible")
```

    Value of f is 0.500000122097
    The values of inequality constraints are:
    1.21864303093e-07,
    Solution is feasible

## Other notes about using penalty and barrier function methods

* It is worthwhile to consider whether feasibility can be compromised. If the constraints do not have any tolerances, then barrier function methods should be considered.
* The barrier method's parameter can also be set iteratively.
* Penalty and barrier functions should be chosen so that they are differentiable (hence the squares above).
* In both methods, the minimum is attained in the limit.
* Different penalty and barrier parameters can be used for different constraints, even within the same problem.
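As a sketch of the iterative update mentioned above (not from the original lecture; the starting point, the factor of 10 per round, and the stopping value of $r$ are arbitrary choices):

```python
# Increase the penalty parameter r step by step, warm-starting each
# minimization from the previous solution (a SUMT-style loop).
x0 = [0, 0]
for r in [1, 10, 100, 1000, 10000, 100000]:
    res = minimize(lambda x, r=r: penalized_function(x, f_constrained, r),
                   x0, method='Nelder-Mead')
    x0 = res.x
print(x0)  # approaches the constrained optimum near [0.5, 0.5] as r grows
```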
# Physics lab with Python

## Contents

* Simulation of ODEs (first order for now; higher orders coming soon!)
* Data analysis
  - Data transformation, filtering
  - Model fitting
  - Integration
  - Differentiation
* Data acquisition
* Plots

We import the libraries: _numpy_ for numerical analysis, _scipy_ for integration and fitting functions, and _matplotlib_ for plotting. While we are at it, we define the plotting settings.

```python
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
from matplotlib import animation
import scipy.optimize as opt

%matplotlib inline

# Styles for the matplotlib plots.
plt.rcParams["figure.figsize"] = (5 * (1 + np.sqrt(5)) / 2, 5)
plt.rcParams["lines.linewidth"] = 2.5
plt.rcParams["ytick.labelsize"] = 12
plt.rcParams["xtick.labelsize"] = 12
plt.rcParams["axes.labelsize"] = 20
plt.rcParams["axes.grid"] = True
```

## Explanation of the circuit

In this workshop we will analyze the following circuit, which corresponds to an RC circuit. To find the voltage across the capacitor we use Kirchhoff's second law, which tells us that $v_{in} = v_{R} + v_{c}$:

$$ v_{in}(t) = R\,i(t) + \frac{1}{C} \int_{0}^{t} i(\tau)\, d\tau $$

Differentiating with respect to time and multiplying by $C$, we finally arrive at the differential equation

$$ RC \; \frac{d i(t)}{dt} + i(t) = C \frac{d v_{in}(t)}{dt} $$

For a step signal, which is simulated with a square wave, it can be shown (see https://en.wikipedia.org/wiki/Heaviside_step_function) that the derivative is zero for $t>0$, which is the time range we care about, so we will end up solving the equation

$$ RC \; \frac{d i(t)}{dt} + i(t) = 0 $$

$$ \frac{d i(t)}{dt} = - \frac{1}{RC}\, i(t) $$

Let us move on to the simulation itself, for which we define the parameter

$$\tau = RC$$

## Data analysis

With the data acquired (or already in hand), let us analyze it.
```python
data = np.loadtxt("RC.csv")  # Load previously acquired data
```

```python
print(type(data))
#print(dir(data))  # Lists what the ndarray can do
print(data)
```

<class 'numpy.ndarray'>
[[ 8.00000000e+00 2.70000000e+01]
 [ 1.28000000e+02 9.40000000e+01]
 [ 2.48000000e+02 1.58000000e+02]
 [ 3.68000000e+02 2.16000000e+02]
 [ 4.88000000e+02 2.71000000e+02]
 [ 6.08000000e+02 3.22000000e+02]
 [ 7.28000000e+02 3.69000000e+02]
 [ 8.48000000e+02 4.14000000e+02]
 [ 9.68000000e+02 4.55000000e+02]
 [ 1.08800000e+03 4.93000000e+02]
 [ 1.20800000e+03 5.29000000e+02]
 [ 1.32800000e+03 5.63000000e+02]
 [ 1.44800000e+03 5.94000000e+02]
 [ 1.56800000e+03 6.23000000e+02]
 [ 1.68800000e+03 6.50000000e+02]
 [ 1.81600000e+03 6.77000000e+02]
 [ 1.93600000e+03 7.00000000e+02]
 [ 2.05600000e+03 7.22000000e+02]
 [ 2.17600000e+03 7.43000000e+02]
 [ 2.29600000e+03 7.62000000e+02]
 [ 2.41600000e+03 7.79000000e+02]
 [ 2.53600000e+03 7.96000000e+02]
 [ 2.65600000e+03 8.11000000e+02]
 [ 2.77600000e+03 8.25000000e+02]
 [ 2.89600000e+03 8.39000000e+02]
 [ 3.01600000e+03 8.51000000e+02]
 [ 3.13600000e+03 8.63000000e+02]
 [ 3.25600000e+03 8.74000000e+02]
 [ 3.37600000e+03 8.84000000e+02]
 [ 3.49600000e+03 8.93000000e+02]
 [ 3.61600000e+03 9.02000000e+02]
 [ 3.73600000e+03 9.11000000e+02]
 [ 3.85600000e+03 9.18000000e+02]
 [ 3.97600000e+03 9.25000000e+02]
 [ 4.09600000e+03 9.32000000e+02]
 [ 4.21600000e+03 9.38000000e+02]
 [ 4.33600000e+03 9.44000000e+02]
 [ 4.45600000e+03 9.49000000e+02]
 [ 4.57600000e+03 9.54000000e+02]
 [ 4.69600000e+03 9.59000000e+02]
 [ 4.81600000e+03 9.63000000e+02]
 [ 4.93600000e+03 9.67000000e+02]
 [ 5.05600000e+03 9.71000000e+02]
 [ 5.17600000e+03 9.74000000e+02]
 [ 5.29600000e+03 9.78000000e+02]
 [ 5.41600000e+03 9.81000000e+02]
 [ 5.53600000e+03 9.84000000e+02]
 [ 5.65600000e+03 9.86000000e+02]
 [ 5.77600000e+03 9.89000000e+02]
 [ 5.89600000e+03 9.91000000e+02]
 [ 6.01600000e+03 9.93000000e+02]
 [ 6.13600000e+03 9.95000000e+02]
 [ 6.25600000e+03 9.97000000e+02]
 [ 6.37600000e+03 9.99000000e+02]
 [ 6.49600000e+03 1.00100000e+03]
 [ 6.61600000e+03 1.00200000e+03]
 [ 6.73600000e+03 1.00400000e+03]
 [ 6.85600000e+03 1.00500000e+03]
 [ 6.97600000e+03 1.00600000e+03]
 [ 7.09600000e+03 1.00700000e+03]
 [ 7.21600000e+03 1.00800000e+03]
 [ 7.33600000e+03 1.00900000e+03]
 [ 7.45600000e+03 1.01000000e+03]
 [ 7.57600000e+03 1.01100000e+03]
 [ 7.70400000e+03 9.94000000e+02]
 [ 7.82400000e+03 9.26000000e+02]
 [ 7.94400000e+03 8.63000000e+02]
 [ 8.06400000e+03 8.04000000e+02]
 [ 8.18400000e+03 7.49000000e+02]
 [ 8.30400000e+03 6.98000000e+02]
 [ 8.42400000e+03 6.51000000e+02]
 [ 8.54400000e+03 6.07000000e+02]
 [ 8.66400000e+03 5.66000000e+02]
 [ 8.78400000e+03 5.27000000e+02]
 [ 8.90400000e+03 4.92000000e+02]
 [ 9.02400000e+03 4.58000000e+02]
 [ 9.14400000e+03 4.27000000e+02]
 [ 9.26400000e+03 3.98000000e+02]
 [ 9.38400000e+03 3.71000000e+02]
 [ 9.50400000e+03 3.46000000e+02]
 [ 9.62400000e+03 3.22000000e+02]
 [ 9.74400000e+03 3.00000000e+02]
 [ 9.86400000e+03 2.80000000e+02]
 [ 9.98400000e+03 2.61000000e+02]
 [ 1.01040000e+04 2.43000000e+02]
 [ 1.02240000e+04 2.26000000e+02]
 [ 1.03440000e+04 2.11000000e+02]
 [ 1.04640000e+04 1.97000000e+02]
 [ 1.05840000e+04 1.83000000e+02]
 [ 1.07040000e+04 1.71000000e+02]
 [ 1.08240000e+04 1.54000000e+02]
 [ 1.10000000e+04 1.44000000e+02]
 [ 1.11400000e+04 1.33000000e+02]
 [ 1.12560000e+04 1.24000000e+02]
 [ 1.13760000e+04 1.16000000e+02]
 [ 1.14960000e+04 1.08000000e+02]
 [ 1.16160000e+04 1.00000000e+02]
 [ 1.17360000e+04 9.30000000e+01]
 [ 1.18560000e+04 8.70000000e+01]
 [ 1.19760000e+04 8.10000000e+01]
 [ 1.21120000e+04 7.50000000e+01]
 [ 1.22320000e+04 7.00000000e+01]
 [ 1.23520000e+04 6.50000000e+01]
 [ 1.24720000e+04 6.00000000e+01]
 [ 1.25920000e+04 5.60000000e+01]
 [ 1.27120000e+04 5.20000000e+01]
 [ 1.28320000e+04 4.90000000e+01]
 [ 1.29520000e+04 4.50000000e+01]
 [ 1.30720000e+04 4.20000000e+01]
 [ 1.31920000e+04 3.90000000e+01]
 [ 1.33120000e+04 3.60000000e+01]
 [ 1.34320000e+04 3.40000000e+01]
 [ 1.35520000e+04 3.10000000e+01]
 [ 1.36720000e+04 2.90000000e+01]
 [ 1.37920000e+04 2.70000000e+01]
 [ 1.39120000e+04 2.50000000e+01]
 [ 1.40320000e+04 2.30000000e+01]
 [ 1.41520000e+04 2.20000000e+01]
 [ 1.42720000e+04 2.00000000e+01]
 [ 1.43920000e+04 1.90000000e+01]
 [ 1.45120000e+04 1.70000000e+01]
 [ 1.46320000e+04 1.60000000e+01]
 [ 1.47520000e+04 1.50000000e+01]
 [ 1.48720000e+04 1.40000000e+01]
 [ 1.49920000e+04 1.30000000e+01]
 [ 1.51120000e+04 1.20000000e+01]
 [ 1.52320000e+04 1.10000000e+01]
 [ 1.53520000e+04 1.00000000e+01]]

Now let's plot the data. Since there are two columns of data, we index them separately:

```python
plt.plot(data[:,0], data[:,1], "ro-");
```

Let's do a bit of data analysis; for that, let's take only the charging portion of the curve. Index slicing to the rescue! We plot to check the result:

```python
# There are several ways to pull out elements
#fitData = data[0:64]
fitData = data[data[:, 0] < 7700]

# Plot the result
plt.plot(fitData[:,0], fitData[:,1], 'bo-')
```

To fit the model, which we know is

$ V = V_0 ( 1 - e^{-B t}) $

we first build the fitting function; note that $A e^{-B t} + C$ is the same model with $A = -V_0$ and $C = V_0$:

```python
f = lambda x, A, B, C: A * np.exp(- B * x) + C
```

Then we use scipy's curve_fit function:

```python
T = (fitData[:,0] - fitData[:,0].min()) * 1e-6  # time converted from microseconds to seconds
V = fitData[:,1]
ErrV = 20  # This error accounts for the instrument only

p0 = (-1023, 1000, 1024)
p, cov = opt.curve_fit(f, T, V, p0)

# Build an auxiliary variable spanning T
t = np.linspace(T.min(), T.max(), 1000)

plt.errorbar(T, V, yerr = ErrV, fmt = 'go')
#plt.plot(T, V, 'go')
plt.plot(t, f(t, *p), 'r-')  # Plot the fit

# Present the fit "nicely"
sigma = np.sqrt(np.diag(cov))  # The diagonal of the covariance matrix is the variance, the "square of the error"
print("f = A exp(-B t) + C")
print("A = {:.2f} +- {:.2f}".format(p[0], sigma[0]))  # Keep only two digits after the decimal point
print("B = {:.2f} +- {:.2f}".format(p[1], sigma[1]))
print("C = {:.2f} +- {:.2f}".format(p[2], sigma[2]))
```

The components are $R = (18.0 \pm 0.2)\,\text{k}\Omega$ and $C = (0.10 \pm 0.01)\,\mu\text{F}$, so the decay rate of the circuit is $\dfrac{1}{RC} = (555 \pm 62)\ \text{s}^{-1}$, in agreement with the fitted value of $B$; this ties the proposed model to the experiment.

We can also look at the "goodness of fit", for which we need the chi-squared. The curve_fit function does not return it, but we can compute it quickly:

```python
chi2Red = (np.power((V - f(T, *p)) / ErrV, 2)).sum()/(len(V) - 3)  # divide by the error, and by the degrees of freedom (3 fitted parameters)
chi2Red
```

This is the reduced chi-squared, which should be close to 1. If it is much less than 1, the errors are overestimated; if it is much greater than 1, the fit can be rejected. To quantify the word "much" we can use the p-value of the chi-squared test (https://en.wikipedia.org/wiki/Goodness_of_fit).
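Here is a sketch of that check with `scipy.stats` (the use of `scipy.stats` is our addition, not part of the original analysis):

```python
from scipy import stats

# Non-reduced chi-squared statistic and degrees of freedom from the fit above
chi2 = (np.power((V - f(T, *p)) / ErrV, 2)).sum()
dof = len(V) - 3  # three fitted parameters: A, B, C

# p-value: the probability of a chi-squared at least this large arising by chance
p_value = stats.chi2.sf(chi2, dof)
print("chi2 = {:.2f}, dof = {}, p-value = {:.3f}".format(chi2, dof, p_value))
```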
## Differentiation and integration

Now let's take data from an integrator and a differentiator. If you have the device, run the data-acquisition section and capture the data; if not, import it as we did before:

```python
data = np.loadtxt("RC_int.csv")
plt.plot(data[:,0], data[:,1], "ro-");
```

Let's differentiate the data to see what we get, using the _numpy.diff_ function. Note that differencing leaves one data point fewer, so one element must be dropped from the "time vector" data[:,0]:

```python
plt.plot(data[:-1,0], np.diff(data[:,1]), 'go');
```

It "looks like" a square wave. Let's build data representing a square wave, integrate it, and compare the two. For this we use the _scipy.integrate.cumtrapz_ function, then rescale the integrated signal to compare it with the square wave:

```python
from scipy.integrate import cumtrapz

T = data[:,0]
N = int(data[:,1].shape[0] / 2)
V = np.concatenate((np.full(N, 1), np.zeros(N) - 1))
plt.plot(T, V, 'go')

V_int = cumtrapz(V, initial = 0)
V_int /= V_int.max()
plt.plot(T, V_int, 'ro')
```

Now let's compare the acquired data against the numerical integration, rescaled so they are comparable:

```python
plt.plot(data[:,0], data[:,1] / data[:,1].max(), 'bo', T, V_int, 'go')
```

Notice that the RC circuit comes close to the numerical integration, but because of what is known as the gain of the RC filter and its time constant $\tau$, a true integrator is never quite achieved. Improving on this generally requires active circuits (see https://en.wikipedia.org/wiki/Active_filter), which take more design experience.

## Circuit simulation

### Numerical simulation

Knowing the behavior of the circuit and the functional form of its derivative, we can use the functions we already know to simulate it. Recall that the differential equation was

$ \dfrac{d i(t)}{dt} = - \dfrac{1}{\tau} i(t)$

We can write this as

```python
from scipy.integrate import odeint

rate = 10  # This is the rate 1/tau = 1/(RC); vary the parameter and watch the result

def f(y, t, rate):
    return (-y * rate)

t = np.linspace(0, 1, 1000)
y0 = [1]

y = odeint(f, y0, t, args = (rate,))

plt.plot(t, y, 'r-');
```

Remember that the differential equation we are solving is for the current in the circuit. If instead we want the capacitor voltage, we must integrate this result, since

$$ v_{c}(t) = \frac{1}{C} \int_{0}^{t} i(\tau) d\tau $$

With the _scipy.integrate_ library and its _cumtrapz_ function we can do this, since it applies the trapezoid rule cumulatively.
```python
from scipy.integrate import cumtrapz

y_int = cumtrapz(y.ravel(), t)
plt.plot(t[1:], y_int.ravel(), 'g-')
```

### Analytic simulation

To round things off, let's find the analytic expression for the current and the capacitor voltage, which lets us justify the model used for the fit.

```python
# We use sympy, which has a very complete tutorial at
# http://docs.sympy.org/latest/tutorial/
# We do little more than repeat it here
from sympy import symbols, Function, Eq, dsolve, integrate, collect_const

# Create the symbols, i.e. variables with symbolic meaning
t, tau = symbols('t, tau')
i = symbols("i", cls=Function)

diffEq = Eq(i(t).diff(t), -i(t)/tau)
sol = dsolve(diffEq)
sol
```

i(t) == C1*exp(-t/tau)

This expression is the current; for the capacitor voltage, we must integrate this result:

```python
C1, u, A = symbols('C1 u A')  # An integration variable, the integration constant and a variable A

g = sol.rhs.subs(t, u)  # Take the right-hand side of the equality, swapping t for u in order to integrate
I = integrate(g, (u, 0, t))
collect_const(I.subs(tau*C1, A), A)  # This function just reorders the result, after replacing tau*C1 -> A
```

A*(1 - exp(-t/tau))

So the model used earlier is consistent with the solution of the differential equation. This completes an overview of the simulation, analytic and numerical, that can be done in Python. Not bad!

## Data acquisition

Here are the data-acquisition functions. **You do not need to run them**, but if you have a serial device that returns a tab-separated list of ASCII data (as some oscilloscopes do, for example), this will let you save it! Meanwhile, it is designed to be used with an [Arduino](http://www.arduino.cc) that we programmed ourselves (and [here is the code]() in Processing, to see what it does).

The _pandas_ library has very powerful data-analysis and filtering tools, but it is more than we need in general, so we mostly use _numpy_ alone.

```python
import io
import time
import serial
import pandas as pd
from serial.tools import list_ports

def inputPort():
    '''Gets the list of serial ports, shows it on screen and asks for
    one, returning the string needed to connect
    '''
    ports = list(list_ports.comports())
    for i, p in enumerate(ports):
        print("[{}]: Port {}".format(i + 1, p[1]))

    port = ""
    if len(ports) > 0:
        port = ports[int(input("Choose a serial port: ")) - 1][0]
    return port

def updateData(s):
    '''Gets the data sent by the Arduino/Teensy over UART USB'''
    A = []
    s.flushInput()
    s.write(b"1")
    time.sleep(2)
    while True:
        if s.inWaiting() == 0:
            break
        A.append(s.read().decode())

    A = "".join(A)
    data = pd.read_csv(io.StringIO(A), sep="\t", names=["t", "v"]).dropna(axis=0)
    return data

port = inputPort()
ser = serial.Serial(port, "9600")
data = updateData(ser).values
ser.close()

np.savetxt("data.csv", data)
```

Let's plot the data to make sure we got the result!

```python
plt.plot(data[:,0], data[:,1], "ro-");
```

To wrap up, we reimplement the acquisition, but inside a loop, displaying the data in real time with a refresh interval of roughly 2 s:

```python
from IPython import display  # Lets us clear the output and redraw it; works in any IPython instance
port = inputPort()
ser = serial.Serial(port, "9600")

for i in range(50):
    plt.clf()
    data = updateData(ser)
    plt.plot(data.t, data.v, "ro")
    display.clear_output(wait=True)
    display.display(plt.gcf())
    #time.sleep(0.5)  # This delay is not needed; it is already built into updateData

ser.close()
plt.close()
```
# Binet's Formula

## Formula

Explicit formula to find the nth term of the Fibonacci sequence.

$\displaystyle F_n = \frac{1}{\sqrt{5}} \Bigg(\Bigg( \frac{1 + \sqrt{5}}{2} \Bigg)^n - \Bigg( \frac{1 - \sqrt{5}}{2} \Bigg)^n \Bigg)$

*Derived by Jacques Philippe Marie Binet, already known to Abraham de Moivre*

----

## Fibonacci Sequence

The Fibonacci sequence iterates with the next value being the sum of the previous two:

$F_{n+1} = F_n + F_{n-1}$

```python
def fib(n):
    a = b = 1
    for _ in range(n):
        yield a
        a, b = b, a + b

", ".join([str(x) for x in fib(20)])
```

'1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765'

----

## Proof

### Fibonacci Ratios

The ratio of consecutive Fibonacci terms converges to the golden ratio: $\displaystyle \lim_{n \rightarrow \infty} \frac{F_{n+1}}{F_n} = \varphi$

<br/>

The code below computes the reciprocal ratio $\displaystyle \frac{F_n}{F_{n+1}}$, which converges to the limit $1/\varphi = \varphi - 1 \approx 0.618$

```python
def fib_ratio(n):
    a = b = 1
    for _ in range(n):
        yield a/b
        a, b = b, a + b

", ".join(["{0:.6f}".format(x) for x in fib_ratio(20)])
```

'1.000000, 0.500000, 0.666667, 0.600000, 0.625000, 0.615385, 0.619048, 0.617647, 0.618182, 0.617978, 0.618056, 0.618026, 0.618037, 0.618033, 0.618034, 0.618034, 0.618034, 0.618034, 0.618034, 0.618034'

### Compose as a Geometric Sequence

This sequence resembles a geometric sequence. Geometric sequences have terms in the form of $G_n = a \cdot r^n$. Therefore $F_{n+1} = F_n + F_{n-1} \implies a \cdot r^{n+1} = a \cdot r^n + a \cdot r^{n-1} \implies r^2 = r + 1$.

### Resolve Quadratic

Using the quadratic formula we find the two roots $r = \displaystyle\frac{1 \pm \sqrt{5}}{2}$, that is, $\varphi$ and $1 - \varphi$.

Let's declare $G_n = \Bigg(\displaystyle\frac{1 + \sqrt{5}}{2}\Bigg)^{n}$, and $H_n = \Bigg(\displaystyle\frac{1 - \sqrt{5}}{2}\Bigg)^{n}$

```python
# x^2−x−1=0
from sympy import *
from sympy.plotting import plot
from sympy.solvers import solve

init_printing()

x = symbols('x')
exp = x**2 - x -1
plot(exp, (x, -2, 2))
answers = solve(x**2 -x -1, x)
[ratsimp(a) for a in answers]
```

### Conclusion

Although neither $G_n$ nor $H_n$ on its own matches the Fibonacci sequence, by induction the combination $a(G_n - H_n)$ satisfies the same recurrence. To find $a$, we can see that $F_0 = a(G_0 - H_0) = 0$ for any $a$, while $F_1 = a(G_1 - H_1) = a\sqrt{5} = 1 \implies a = \frac{1}{\sqrt{5}}$

----

## References

- [Art of Problem Solving - Binet's Formula][3]
- [Art of Problem Solving - Geometric Sequence][2]
- [Art of Problem Solving - Fibonacci Sequence][1]

[1]: https://artofproblemsolving.com/wiki/index.php?title=Fibonacci_sequence
[2]: https://artofproblemsolving.com/wiki/index.php?title=Geometric_sequence
[3]: https://artofproblemsolving.com/wiki/index.php?title=Binet%27s_Formula
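As a final sanity check, the closed form can be compared numerically against the iterative `fib` generator defined above (this check is an addition to the original derivation):

```python
from math import sqrt

def binet(n):
    """Closed-form nth Fibonacci number (1-indexed, so binet(1) == 1)."""
    phi = (1 + sqrt(5)) / 2
    psi = (1 - sqrt(5)) / 2
    return round((phi**n - psi**n) / sqrt(5))

# Every term of the iterative sequence should match the closed form
print(all(binet(n + 1) == f for n, f in enumerate(fib(20))))  # expect True
```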
# Exercise 1 ## JIT the pressure poisson equation The equation we need to unroll is given by \begin{equation} p_{i,j}^{n} = \frac{1}{4}\left(p_{i+1,j}^{n}+p_{i-1,j}^{n}+p_{i,j+1}^{n}+p_{i,j-1}^{n}\right) - b \end{equation} and recall that `b` is already computed, so no need to worry about unrolling that. We've also filled in the boundary conditions, so don't worry about those. (don't forget to decorate your function!) ```python import numpy from numba import jit ``` ```python def pressure_poisson(p, b, l2_target=1e-4): I, J = b.shape iter_diff = l2_target + 1 n = 0 while iter_diff > l2_target and n <= 500: pn = p.copy() #Your code here #boundary conditions for i in range(I): p[i, 0] = p[i, 1] p[i, -1] = 0 for j in range(J): p[0, j] = p[1, j] p[-1, j] = p[-2, j] if n % 10 == 0: iter_diff = numpy.sqrt(numpy.sum((p - pn)**2)/numpy.sum(pn**2)) n += 1 return p ``` ```python import pickle from snippets.ns_helper import cavity_flow, velocity_term, quiver_plot ``` ```python def run_cavity(): nx = 41 with open('../IC.pickle', 'rb') as f: u, v, p, b = pickle.load(f) dx = 2 / (nx - 1) dt = .005 nt = 1000 u, v, p = cavity_flow(u, v, p, nt, dt, dx, velocity_term, pressure_poisson, rtol=1e-4) return u, v, p ``` ```python un, vn, pn = run_cavity() ``` ```python %timeit run_cavity() ``` ```python with open('../numpy_ans.pickle', 'rb') as f: u, v, p = pickle.load(f) ``` ```python assert numpy.allclose(u, un) assert numpy.allclose(v, vn) assert numpy.allclose(p, pn) ``` # Exercise 2 (optional) Finish early? Just want to try more stuff? This line is not super efficient: ```python iter_diff = numpy.sqrt(numpy.sum((p - pn)**2)/numpy.sum(pn**2)) ``` Try rewriting it using a jitted function and see what kind of performance gain you can get. ```python ```
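If you get stuck (or want something to compare against), here is one possible solution sketch, not the only valid one: the interior update unrolled into explicit loops and compiled with `@jit`, plus a jitted convergence check for Exercise 2. It assumes, as stated above, that `b` already carries its prefactor:

```python
import numpy
from numba import jit

@jit(nopython=True)
def iter_diff_jit(p, pn):
    # L2 norm of the change between iterations, relative to pn
    num = 0.0
    den = 0.0
    for i in range(p.shape[0]):
        for j in range(p.shape[1]):
            num += (p[i, j] - pn[i, j])**2
            den += pn[i, j]**2
    return numpy.sqrt(num / den)

@jit(nopython=True)
def pressure_poisson(p, b, l2_target=1e-4):
    I, J = b.shape

    iter_diff = l2_target + 1
    n = 0
    while iter_diff > l2_target and n <= 500:
        pn = p.copy()

        # unrolled interior update
        for i in range(1, I - 1):
            for j in range(1, J - 1):
                p[i, j] = .25 * (pn[i + 1, j] + pn[i - 1, j] +
                                 pn[i, j + 1] + pn[i, j - 1]) - b[i, j]

        # boundary conditions
        for i in range(I):
            p[i, 0] = p[i, 1]
            p[i, -1] = 0

        for j in range(J):
            p[0, j] = p[1, j]
            p[-1, j] = p[-2, j]

        if n % 10 == 0:
            iter_diff = iter_diff_jit(p, pn)

        n += 1

    return p
```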
```python
from sympy import *
from IPython.display import display, Latex, HTML, Markdown
init_printing()
from eqn_manip import *
from codegen_extras import *
import codegen_extras
from importlib import reload
from sympy.codegen.ast import Assignment, For, CodeBlock, real, Variable, Pointer, Declaration
from sympy.codegen.cnodes import void
```

## Cubic Spline solver - derivation and code generation

### Tridiagonal Solver

From Wikipedia: https://en.wikipedia.org/wiki/Tridiagonal_matrix_algorithm

In the future it would be good to derive these equations from Gaussian elimination (as on the Wikipedia page), but for now they are simply given.

```python
n = Symbol('n', integer=True)
i = Symbol('i', integer=True)
x = IndexedBase('x',shape=(n,))
dp = IndexedBase("d'",shape=(n,))
cp = IndexedBase("c'",shape=(n,))
a = IndexedBase("a",shape=(n,))
b = IndexedBase("b",shape=(n,))
c = IndexedBase("c",shape=(n,))
d = IndexedBase("d",shape=(n,))
```

```python
# forward sweep

# start/end using the natural range for math notation
#start = 1
#end = n

# Use the C++ range 0,n-1
start = 0
end = n-1

teq1 = Eq(cp[start], c[start]/b[start])
display(teq1)
teq2 = Eq(dp[start], d[start]/b[start])
display(teq2)

teq3 = Eq(dp[i],(d[i] - dp[i-1]*a[i])/ (b[i] - cp[i-1]*a[i]))
display(teq3)
teq4 = Eq(cp[i],c[i]/(b[i] - cp[i-1]*a[i]))
display(teq4)
```

$${c'}_{0} = \frac{{c}_{0}}{{b}_{0}}$$

$${d'}_{0} = \frac{{d}_{0}}{{b}_{0}}$$

$${d'}_{i} = \frac{- {a}_{i} {d'}_{i - 1} + {d}_{i}}{- {a}_{i} {c'}_{i - 1} + {b}_{i}}$$

$${c'}_{i} = \frac{{c}_{i}}{- {a}_{i} {c'}_{i - 1} + {b}_{i}}$$

```python
# backward sweep

teq5 = Eq(x[end],dp[end])
display(teq5)
teq6 = Eq(x[i],dp[i] - cp[i]*x[i+1])
display(teq6)
```

$${x}_{n - 1} = {d'}_{n - 1}$$

$${x}_{i} = - {c'}_{i} {x}_{i + 1} + {d'}_{i}$$

### Cubic Spline equations

Start with uniform knot spacing. The derivation is easier to see than in the case with general knot spacing.

```python
# Distance from the previous knot, for the case of uniform knot spacing
t = Symbol('t')

# Number of knots
n = Symbol('n', integer=True)
i = Symbol('i', integer=True)

# Function values to be interpolated at the knots
y = IndexedBase('y',shape=(n,))

# Coefficients of the spline function
a,b,c,d = [IndexedBase(s, shape=(n,)) for s in 'a b c d'.split()]

# Cubic spline function
s = a + b*t + c*t*t + d*t**3
display(Eq(y,s))

# With indexed variables
si = a[i] + b[i]*t + c[i]*t*t + d[i]*t**3
display(Eq(y[i],si))
```

$$y = t^{3} d + t^{2} c + t b + a$$

$${y}_{i} = t^{3} {d}_{i} + t^{2} {c}_{i} + t {b}_{i} + {a}_{i}$$

### Strategy

To eventually reduce the equations to a tridiagonal form, express the equations in terms of the second derivative ($E$). See the MathWorld page for cubic splines, which derives the equations in terms of the first derivative ($D$).
http://mathworld.wolfram.com/CubicSpline.html

```python
# Value at knots (t=0)
sp1 = Eq(si.subs(t,0), y[i])
sp1
```

$${a}_{i} = {y}_{i}$$

```python
# Value at knots (t=1)
sp2 = Eq(si.subs(t,1), y[i+1])
sp2
```

$${a}_{i} + {b}_{i} + {c}_{i} + {d}_{i} = {y}_{i + 1}$$

```python
# Express the second derivative at the beginning of the interval in terms of E
E = IndexedBase('E',shape=(n,))
sp3 = Eq(E[i], diff(si,t,2).subs(t,0))
sp3
```

$${E}_{i} = 2 {c}_{i}$$

```python
# Express the second derivative at the end of the interval in terms of E
sp4 = Eq(E[i+1], diff(si,t,2).subs(t,1))
sp4
```

$${E}_{i + 1} = 2 {c}_{i} + 6 {d}_{i}$$

```python
# Continuity of the first derivative
sp5 = Eq(diff(si,t).subs(t,1), diff(si,t).subs(t,0).subs(i,i+1))
sp5
```

$${b}_{i} + 2 {c}_{i} + 3 {d}_{i} = {b}_{i + 1}$$

### For general spacing of the knots

```python
L = IndexedBase('L',shape=(n,))   # L[i] = x[i+1] - x[i]
t = Symbol('t')
x = IndexedBase('x',shape=(n,))
si = a[i] + b[i]*t + c[i]*t*t + d[i]*t**3
```

```python
# Value at knots (t=0)
sp1 = Eq(si.subs(t,0), y[i])
sp1
```

$${a}_{i} = {y}_{i}$$

```python
# Value at next knot
sp2 = Eq(si.subs(t,L[i]), y[i+1])
sp2
```

$${L}_{i}^{3} {d}_{i} + {L}_{i}^{2} {c}_{i} + {L}_{i} {b}_{i} + {a}_{i} = {y}_{i + 1}$$

```python
# Express the second derivative at the beginning of the interval in terms of E
E = IndexedBase('E',shape=(n,))
sp3 = Eq(E[i], diff(si,t,2).subs(t,0))
sp3
```

$${E}_{i} = 2 {c}_{i}$$

```python
# Express the second derivative at the end of the interval in terms of E
sp4 = Eq(E[i+1], diff(si,t,2).subs(t,L[i]))
sp4
```

$${E}_{i + 1} = 6 {L}_{i} {d}_{i} + 2 {c}_{i}$$

```python
# Solve for spline coefficients in terms of E's
sln = solve([sp1,sp2,sp3,sp4], [a[i],b[i],c[i],d[i]])
sln
```

$$\left \{ {a}_{i} : {y}_{i}, \quad {b}_{i} : \frac{- \frac{\left({E}_{i + 1} + 2 {E}_{i}\right) {L}_{i}^{2}}{6} + {y}_{i + 1} - {y}_{i}}{{L}_{i}}, \quad {c}_{i} : \frac{{E}_{i}}{2}, \quad {d}_{i} : \frac{{E}_{i + 1} - {E}_{i}}{6 {L}_{i}}\right \}$$

```python
# also for i+1
sln1 = {k.subs(i,i+1):v.subs(i,i+1) for k,v in sln.items()}
sln1
```

$$\left \{ {a}_{i + 1} : {y}_{i + 1}, \quad {b}_{i + 1} : \frac{- \frac{\left(2 {E}_{i + 1} + {E}_{i + 2}\right) {L}_{i + 1}^{2}}{6} - {y}_{i + 1} + {y}_{i + 2}}{{L}_{i + 1}}, \quad {c}_{i + 1} : \frac{{E}_{i + 1}}{2}, \quad {d}_{i + 1} : \frac{- {E}_{i + 1} + {E}_{i + 2}}{6 {L}_{i + 1}}\right \}$$

```python
# Continuity of first derivatives at knots
# This will define the tridiagonal system to be solved
sp5 = Eq(diff(si,t).subs(t,L[i]), diff(si,t).subs(i, i+1).subs(t,0))
sp5
```

$$3 {L}_{i}^{2} {d}_{i} + 2 {L}_{i} {c}_{i} + {b}_{i} = {b}_{i + 1}$$

```python
sp6 = sp5.subs(sln).subs(sln1)
sp7 = expand(sp6)
sp7
```

$$\frac{{E}_{i + 1} {L}_{i}}{3} + \frac{{E}_{i} {L}_{i}}{6} + \frac{{y}_{i + 1}}{{L}_{i}} - \frac{{y}_{i}}{{L}_{i}} = - \frac{{E}_{i + 1} {L}_{i + 1}}{3} - \frac{{E}_{i + 2} {L}_{i + 1}}{6} - \frac{{y}_{i + 1}}{{L}_{i + 1}} + \frac{{y}_{i + 2}}{{L}_{i + 1}}$$

```python
sp8 = divide_terms(sp7, [E[i],E[i+1],E[i+2]], [y[i],y[i+1],y[i+2]])
display(sp8)
sp9 = mult_eqn(sp8,6)
display(sp9)

# The index 'i' used in the cubic spline equations is not the same 'i' used
# in the tridiagonal solver. Here we need to make them match.
# The first boundary condition will be the equation at index 0.
# Adjust the indexing on this equation so i=1 is the index of the first continuity interval
sp9 = sp9.subs(i,i-1)
```

$$\frac{{E}_{i + 1} {L}_{i + 1}}{3} + \frac{{E}_{i + 1} {L}_{i}}{3} + \frac{{E}_{i + 2} {L}_{i + 1}}{6} + \frac{{E}_{i} {L}_{i}}{6} = - \frac{{y}_{i + 1}}{{L}_{i}} + \frac{{y}_{i}}{{L}_{i}} - \frac{{y}_{i + 1}}{{L}_{i + 1}} + \frac{{y}_{i + 2}}{{L}_{i + 1}}$$

$$2 {E}_{i + 1} {L}_{i + 1} + 2 {E}_{i + 1} {L}_{i} + {E}_{i + 2} {L}_{i + 1} + {E}_{i} {L}_{i} = - \frac{6 {y}_{i + 1}}{{L}_{i}} + \frac{6 {y}_{i}}{{L}_{i}} - \frac{6 {y}_{i + 1}}{{L}_{i + 1}} + \frac{6 {y}_{i + 2}}{{L}_{i + 1}}$$

```python
# Extract the three coefficients in each row for the general case
symlist = [E[i-1],E[i],E[i+1],E[i+2]]
coeff1 = get_coeff_for(sp9.lhs, E[i-1], symlist)
display(coeff1)
coeff2 = get_coeff_for(sp9.lhs, E[i], symlist)
display(coeff2)
coeff3 = get_coeff_for(sp9.lhs, E[i+1], symlist)
display(coeff3)
```

$${L}_{i - 1}$$

$$2 {L}_{i - 1} + 2 {L}_{i}$$

$${L}_{i}$$

```python
# Now get the coefficients for the boundary conditions (first row and last row)

# Natural BC
bc_natural_start = Eq(E[i].subs(i,0),0)
display(bc_natural_start)
bc_natural_end = Eq(E[i].subs(i,end),0)
display(bc_natural_end)

# The coefficients and RHS for this BC are pretty simple, but we will follow
# a deterministic path for derivation anyway.
bc_natural_start_coeff1 = get_coeff_for(bc_natural_start.lhs, E[start],[E[start]])
display(bc_natural_start_coeff1)
bc_natural_start_coeff2 = get_coeff_for(bc_natural_start.lhs, E[start+1],[E[start],E[start+1]])
display(bc_natural_start_coeff2)
bc_natural_end_coeff1 = get_coeff_for(bc_natural_end.lhs, E[end-1],[E[end]])
display(bc_natural_end_coeff1)
bc_natural_end_coeff2 = get_coeff_for(bc_natural_end.lhs, E[end],[E[end]])
bc_natural_end_coeff2
```

$${E}_{0} = 0$$

$${E}_{n - 1} = 0$$

$$1$$

$$0$$

$$0$$

$$1$$

```python
# BC - first derivative specified at the beginning of the range
yp0 = Symbol('yp0')
eqbc1=Eq(diff(si,t).subs(t,0).subs(sln).subs(i,0), yp0)
display(eqbc1)
eqbc1b = divide_terms(expand(eqbc1),[E[0],E[1]],[y[0],y[1],yp0])
eqbc1c = mult_eqn(eqbc1b, 6)
display(eqbc1c)
bc_firstd_start_coeff1 = get_coeff_for(eqbc1c.lhs, E[0], [E[0],E[1]])
display(bc_firstd_start_coeff1)
bc_firstd_start_coeff2 = get_coeff_for(eqbc1c.lhs, E[1], [E[0],E[1]])
display(bc_firstd_start_coeff2)
```

$$\frac{- \frac{\left(2 {E}_{0} + {E}_{1}\right) {L}_{0}^{2}}{6} - {y}_{0} + {y}_{1}}{{L}_{0}} = yp_{0}$$

$$- 2 {E}_{0} {L}_{0} - {E}_{1} {L}_{0} = 6 yp_{0} + \frac{6 {y}_{0}}{{L}_{0}} - \frac{6 {y}_{1}}{{L}_{0}}$$

$$- 2 {L}_{0}$$

$$- {L}_{0}$$

```python
# For the general algorithm, the input parameters for the boundary conditions are
#  - first derivative, if value is less than cutoff
#  - second derivative is zero, if value is greater than cutoff

bc_cutoff = 0.99e30
tbc_start_coeff1 = Piecewise((bc_firstd_start_coeff1, yp0 < bc_cutoff),(bc_natural_start_coeff1,True))
display(tbc_start_coeff1)
tbc_start_coeff2 = Piecewise((bc_firstd_start_coeff2, yp0 < bc_cutoff),(bc_natural_start_coeff2,True))
display(tbc_start_coeff2)
sym_bc_start_coeff1 = Symbol('bc_start1')
sym_bc_start_coeff2 = Symbol('bc_start2')
bc_eqs = [Eq(sym_bc_start_coeff1, tbc_start_coeff1)]
bc_eqs.append(Eq(sym_bc_start_coeff2, tbc_start_coeff2))
```

$$\begin{cases} - 2 {L}_{0} & \text{for}\: yp_{0} < 9.9 \cdot 10^{29} \\1 & \text{otherwise} \end{cases}$$

$$\begin{cases} - {L}_{0} & \text{for}\: yp_{0} < 9.9 \cdot 10^{29} \\0 & \text{otherwise} \end{cases}$$

```python
# BC - first derivative specified at the end of the range
ypn =
Symbol('ypn') eqbc2=Eq(diff(si,t).subs(t,L[end-1]).subs(sln).subs(i,end-1),ypn) display(eqbc2) eqbc2b = divide_terms(expand(eqbc2),[E[end-1],E[end]],[y[end-1],y[end],ypn]) display(eqbc2b) eqbc2c = mult_eqn(eqbc2b, 6) display(eqbc2c) bc_firstd_end_coeff1 = get_coeff_for(eqbc2c.lhs, E[end-1],[E[end-1],E[end]]) display(bc_firstd_end_coeff1) bc_firstd_end_coeff2 = get_coeff_for(eqbc2c.lhs, E[end],[E[end-1],E[end]]) display(bc_firstd_end_coeff2) ``` $$\frac{\left({E}_{n - 1} - {E}_{n - 2}\right) {L}_{n - 2}}{2} + \frac{- \frac{\left({E}_{n - 1} + 2 {E}_{n - 2}\right) {L}_{n - 2}^{2}}{6} + {y}_{n - 1} - {y}_{n - 2}}{{L}_{n - 2}} + {E}_{n - 2} {L}_{n - 2} = ypn$$ $$\frac{{E}_{n - 1} {L}_{n - 2}}{3} + \frac{{E}_{n - 2} {L}_{n - 2}}{6} = ypn - \frac{{y}_{n - 1}}{{L}_{n - 2}} + \frac{{y}_{n - 2}}{{L}_{n - 2}}$$ $$2 {E}_{n - 1} {L}_{n - 2} + {E}_{n - 2} {L}_{n - 2} = 6 ypn - \frac{6 {y}_{n - 1}}{{L}_{n - 2}} + \frac{6 {y}_{n - 2}}{{L}_{n - 2}}$$ $${L}_{n - 2}$$ $$2 {L}_{n - 2}$$ ```python # Create the conditional expression for the end BC tbc_end_coeff1 = Piecewise((bc_firstd_end_coeff1, ypn < bc_cutoff),(bc_natural_end_coeff1, True)) display(tbc_end_coeff1) sym_bc_end_coeff1 = Symbol('bc_end1') bc_eqs.append(Eq(sym_bc_end_coeff1, tbc_end_coeff1)) tbc_end_coeff2 = Piecewise((bc_firstd_end_coeff2, ypn < bc_cutoff),(bc_natural_end_coeff2, True)) tbc_end_coeff2 display(tbc_end_coeff2) sym_bc_end_coeff2 = Symbol('bc_end2') bc_eqs.append(Eq(sym_bc_end_coeff2, tbc_end_coeff2)) ``` $$\begin{cases} {L}_{n - 2} & \text{for}\: ypn < 9.9 \cdot 10^{29} \\0 & \text{otherwise} \end{cases}$$ $$\begin{cases} 2 {L}_{n - 2} & \text{for}\: ypn < 9.9 \cdot 10^{29} \\1 & \text{otherwise} \end{cases}$$ ```python # conditional expressions for RHS for boundary conditions rhs_start = Piecewise((eqbc1c.rhs,yp0 < bc_cutoff),(bc_natural_start.rhs,True)) display(rhs_start) rhs_end = Piecewise((eqbc2c.rhs, ypn < bc_cutoff), (bc_natural_end.rhs, True)) display(rhs_end) sym_rhs_start = Symbol('rhs_start') sym_rhs_end = Symbol('rhs_end') bc_eqs.append(Eq(sym_rhs_start, rhs_start)) bc_eqs.append(Eq(sym_rhs_end, rhs_end)) bc_eqs ``` $$\begin{cases} 6 yp_{0} + \frac{6 {y}_{0}}{{L}_{0}} - \frac{6 {y}_{1}}{{L}_{0}} & \text{for}\: yp_{0} < 9.9 \cdot 10^{29} \\0 & \text{otherwise} \end{cases}$$ $$\begin{cases} 6 ypn - \frac{6 {y}_{n - 1}}{{L}_{n - 2}} + \frac{6 {y}_{n - 2}}{{L}_{n - 2}} & \text{for}\: ypn < 9.9 \cdot 10^{29} \\0 & \text{otherwise} \end{cases}$$ $$\left [ bc_{start1} = \begin{cases} - 2 {L}_{0} & \text{for}\: yp_{0} < 9.9 \cdot 10^{29} \\1 & \text{otherwise} \end{cases}, \quad bc_{start2} = \begin{cases} - {L}_{0} & \text{for}\: yp_{0} < 9.9 \cdot 10^{29} \\0 & \text{otherwise} \end{cases}, \quad bc_{end1} = \begin{cases} {L}_{n - 2} & \text{for}\: ypn < 9.9 \cdot 10^{29} \\0 & \text{otherwise} \end{cases}, \quad bc_{end2} = \begin{cases} 2 {L}_{n - 2} & \text{for}\: ypn < 9.9 \cdot 10^{29} \\1 & \text{otherwise} \end{cases}, \quad rhs_{start} = \begin{cases} 6 yp_{0} + \frac{6 {y}_{0}}{{L}_{0}} - \frac{6 {y}_{1}}{{L}_{0}} & \text{for}\: yp_{0} < 9.9 \cdot 10^{29} \\0 & \text{otherwise} \end{cases}, \quad rhs_{end} = \begin{cases} 6 ypn - \frac{6 {y}_{n - 1}}{{L}_{n - 2}} + \frac{6 {y}_{n - 2}}{{L}_{n - 2}} & \text{for}\: ypn < 9.9 \cdot 10^{29} \\0 & \text{otherwise} \end{cases}\right ]$$ ### Substitute cubic spline equations into tridiagonal solver ```python subslist = { a[start] : 0, a[i] : coeff1, a[end] : sym_bc_end_coeff1, b[start] : sym_bc_start_coeff1, b[i] : coeff2, b[end] : sym_bc_end_coeff2, c[start] : 
sym_bc_start_coeff2,
    c[i] : coeff3,
    c[end] : 0,
    d[start] : sym_rhs_start,
    d[i] : sp9.rhs,
    d[end] : sym_rhs_end,
}

# Replace knot spacing with differences between knot locations
subsL = {
    L[i] : x[i+1] - x[i],
    L[i+1] : x[i+2] - x[i+1],
    L[i-1] : x[i] - x[i-1],
    L[start] : x[start+1]-x[start],
    L[start+1] : x[start+2]-x[start+1],
    L[end-1] : x[end] - x[end-1],
}
subslist
```

$$\left \{ {a}_{0} : 0, \quad {a}_{i} : {L}_{i - 1}, \quad {a}_{n - 1} : bc_{end1}, \quad {b}_{0} : bc_{start1}, \quad {b}_{i} : 2 {L}_{i - 1} + 2 {L}_{i}, \quad {b}_{n - 1} : bc_{end2}, \quad {c}_{0} : bc_{start2}, \quad {c}_{i} : {L}_{i}, \quad {c}_{n - 1} : 0, \quad {d}_{0} : rhs_{start}, \quad {d}_{i} : \frac{6 {y}_{i + 1}}{{L}_{i}} - \frac{6 {y}_{i}}{{L}_{i}} + \frac{6 {y}_{i - 1}}{{L}_{i - 1}} - \frac{6 {y}_{i}}{{L}_{i - 1}}, \quad {d}_{n - 1} : rhs_{end}\right \}$$

```python
# Substitute into the tridiagonal solver
display(teq1.subs(subslist))
teq2b = teq2.subs(subslist).subs(subsL)
display(teq2b)
teq3b = simplify(teq3.subs(subslist).subs(subsL))
display(teq3b)
teq4b = teq4.subs(subslist).subs(subsL)
display(teq4b)
teq5b = Eq(teq5.lhs,teq5.rhs.subs(dp[end],teq3.rhs).subs(i,end).subs(subslist))
display(teq5b)
display(teq6.subs(subslist))
```

$${c'}_{0} = \frac{bc_{start2}}{bc_{start1}}$$

$${d'}_{0} = \frac{rhs_{start}}{bc_{start1}}$$

$${d'}_{i} = \frac{- \left({x}_{i + 1} - {x}_{i}\right) \left({x}_{i - 1} - {x}_{i}\right)^{2} {d'}_{i - 1} + 6 \left({x}_{i + 1} - {x}_{i}\right) \left({y}_{i - 1} - {y}_{i}\right) + 6 \left({x}_{i - 1} - {x}_{i}\right) \left(- {y}_{i + 1} + {y}_{i}\right)}{\left({x}_{i + 1} - {x}_{i}\right) \left({x}_{i - 1} - {x}_{i}\right) \left(- \left({x}_{i - 1} - {x}_{i}\right) {c'}_{i - 1} - 2 {x}_{i + 1} + 2 {x}_{i - 1}\right)}$$

$${c'}_{i} = \frac{{x}_{i + 1} - {x}_{i}}{- \left(- {x}_{i - 1} + {x}_{i}\right) {c'}_{i - 1} + 2 {x}_{i + 1} - 2 {x}_{i - 1}}$$

$${x}_{n - 1} = \frac{- bc_{end1} {d'}_{n - 2} + rhs_{end}}{- bc_{end1} {c'}_{n - 2} + bc_{end2}}$$

$${x}_{i} = - {c'}_{i} {x}_{i + 1} + {d'}_{i}$$

```python
# Extract sub-expressions
subexpr, final_expr = cse([simplify(teq3b),simplify(teq4b)],symbols=numbered_symbols('z'))
display(subexpr)
display(final_expr)
```

$$\left [ \left ( z_{0}, \quad - {x}_{i}\right ), \quad \left ( z_{1}, \quad z_{0} + {x}_{i + 1}\right ), \quad \left ( z_{2}, \quad z_{0} + {x}_{i - 1}\right ), \quad \left ( z_{3}, \quad 2 {x}_{i + 1}\right ), \quad \left ( z_{4}, \quad 2 {x}_{i - 1}\right ), \quad \left ( z_{5}, \quad z_{2} {c'}_{i - 1}\right ), \quad \left ( z_{6}, \quad - {y}_{i}\right )\right ]$$

$$\left [ {d'}_{i} = \frac{z_{1} z_{2}^{2} {d'}_{i - 1} - 6 z_{1} \left(z_{6} + {y}_{i - 1}\right) + 6 z_{2} \left(z_{6} + {y}_{i + 1}\right)}{z_{1} z_{2} \left(z_{3} - z_{4} + z_{5}\right)}, \quad {c'}_{i} = \frac{- {x}_{i + 1} + {x}_{i}}{- z_{3} + z_{4} - z_{5}}\right ]$$

```python
# Substitute knot spacing into the boundary conditions
bc_eqs2 = [eq.subs(subsL) for eq in bc_eqs]
bc_eqs2
```

$$\left [ bc_{start1} = \begin{cases} 2 {x}_{0} - 2 {x}_{1} & \text{for}\: yp_{0} < 9.9 \cdot 10^{29} \\1 & \text{otherwise} \end{cases}, \quad bc_{start2} = \begin{cases} {x}_{0} - {x}_{1} & \text{for}\: yp_{0} < 9.9 \cdot 10^{29} \\0 & \text{otherwise} \end{cases}, \quad bc_{end1} = \begin{cases} {x}_{n - 1} - {x}_{n - 2} & \text{for}\: ypn < 9.9 \cdot 10^{29} \\0 & \text{otherwise} \end{cases}, \quad bc_{end2} = \begin{cases} 2 {x}_{n - 1} - 2 {x}_{n - 2} & \text{for}\: ypn < 9.9 \cdot 10^{29} \\1 & \text{otherwise} \end{cases}, \quad rhs_{start} = \begin{cases} 6 yp_{0} +
\frac{6 {y}_{0}}{- {x}_{0} + {x}_{1}} - \frac{6 {y}_{1}}{- {x}_{0} + {x}_{1}} & \text{for}\: yp_{0} < 9.9 \cdot 10^{29} \\0 & \text{otherwise} \end{cases}, \quad rhs_{end} = \begin{cases} 6 ypn - \frac{6 {y}_{n - 1}}{{x}_{n - 1} - {x}_{n - 2}} + \frac{6 {y}_{n - 2}}{{x}_{n - 1} - {x}_{n - 2}} & \text{for}\: ypn < 9.9 \cdot 10^{29} \\0 & \text{otherwise} \end{cases}\right ]$$ ```python # Use temporary storage for cp, and reuse output vector for dp # In the future there should be some dependency analysis to verify this is a legal transformation tmp = IndexedBase('u',shape=(n,)) y2 = IndexedBase('y2',shape=(n,)) storage_subs = {cp:y2, dp:tmp} #storage_subs = {} teq1c = teq1.subs(subslist).subs(storage_subs) display(teq1c) teq2c = teq2b.subs(subslist).subs(storage_subs) display(teq2c) teq3c = final_expr[0].subs(storage_subs) display(teq3c) teq4c = final_expr[1].subs(storage_subs) display(teq4c) teq5c = teq5b.subs(storage_subs).subs(x,y2) display(teq5c) teq6c = teq6.subs(storage_subs).subs(x,y2) display(teq6c) ``` $${y_{2}}_{0} = \frac{bc_{start2}}{bc_{start1}}$$ $${u}_{0} = \frac{rhs_{start}}{bc_{start1}}$$ $${u}_{i} = \frac{z_{1} z_{2}^{2} {u}_{i - 1} - 6 z_{1} \left(z_{6} + {y}_{i - 1}\right) + 6 z_{2} \left(z_{6} + {y}_{i + 1}\right)}{z_{1} z_{2} \left(z_{3} - z_{4} + z_{5}\right)}$$ $${y_{2}}_{i} = \frac{- {x}_{i + 1} + {x}_{i}}{- z_{3} + z_{4} - z_{5}}$$ $${y_{2}}_{n - 1} = \frac{- bc_{end1} {u}_{n - 2} + rhs_{end}}{- bc_{end1} {y_{2}}_{n - 2} + bc_{end2}}$$ $${y_{2}}_{i} = {u}_{i} - {y_{2}}_{i + 1} {y_{2}}_{i}$$ ```python # Now for some code generation #reload(codegen_more) #from codegen_more import * ``` ```python templateT = Type('T') ``` ```python # forward sweep fr = ARange(start+1,end,1) body = [] for e in subexpr: body.append(Variable(e[0],type=templateT).as_Declaration(value=e[1].subs(storage_subs))) body.append(convert_eq_to_assignment(teq3c)) body.append(convert_eq_to_assignment(teq4c)) loop1 = For(i,fr,body) ``` ```python # backward sweep br = ARangeClosedEnd(end-1,start,-1) loop2 = For(i,br,[convert_eq_to_assignment(teq6c)]) ``` ```python tmp_init = VariableWithInit("n",tmp,type=Type("std::vector<T>")).as_Declaration() bc_tmps = [] for e in bc_eqs2: bc_tmps.append(Variable(e.lhs, type=templateT).as_Declaration(value=e.rhs)) algo = CodeBlock(tmp_init, *bc_tmps, convert_eq_to_assignment(teq1c), convert_eq_to_assignment(teq2c), loop1, convert_eq_to_assignment(teq5c), loop2) ``` ```python # Generate the inner part of the algorithm to check it ACP = ACodePrinter() s = ACP.doprint(algo) print(s) ``` // Not supported in C++: // IndexedBase std::vector<T> u(n); T bc_start1 = ((yp0 < 9.9000000000000002e+29) ? ( 2*x[0] - 2*x[1] ) : ( 1 )); T bc_start2 = ((yp0 < 9.9000000000000002e+29) ? ( x[0] - x[1] ) : ( 0 )); T bc_end1 = ((ypn < 9.9000000000000002e+29) ? ( x[n - 1] - x[n - 2] ) : ( 0 )); T bc_end2 = ((ypn < 9.9000000000000002e+29) ? ( 2*x[n - 1] - 2*x[n - 2] ) : ( 1 )); T rhs_start = ((yp0 < 9.9000000000000002e+29) ? ( 6*yp0 + 6*y[0]/(-x[0] + x[1]) - 6*y[1]/(-x[0] + x[1]) ) : ( 0 )); T rhs_end = ((ypn < 9.9000000000000002e+29) ? 
( 6*ypn - 6*y[n - 1]/(x[n - 1] - x[n - 2]) + 6*y[n - 2]/(x[n - 1] - x[n - 2]) ) : ( 0 )); y2[0] = bc_start2/bc_start1; u[0] = rhs_start/bc_start1; for (auto i = 1; i < n - 1; i += 1) { T z0 = -x[i]; T z1 = z0 + x[i + 1]; T z2 = z0 + x[i - 1]; T z3 = 2*x[i + 1]; T z4 = 2*x[i - 1]; T z5 = z2*y2[i - 1]; T z6 = -y[i]; u[i] = (z1*z2*z2*u[i - 1] - 6*z1*(z6 + y[i - 1]) + 6*z2*(z6 + y[i + 1]))/(z1*z2*(z3 - z4 + z5)); y2[i] = (-x[i + 1] + x[i])/(-z3 + z4 - z5); }; y2[n - 1] = (-bc_end1*u[n - 2] + rhs_end)/(-bc_end1*y2[n - 2] + bc_end2); for (auto i = n - 2; i >= 0; i += -1) { y2[i] = u[i] - y2[i + 1]*y2[i]; }; ```python # Set up to create a template function tx = Pointer(x,type=templateT) ty = Pointer(y,type=templateT) ty2 = Pointer(y2,type=templateT) yp0_var = Variable('yp0',type=templateT) ypn_var = Variable('ypn',type=templateT) tf = TemplateFunctionDefinition(void, "cubic_spline_solve",[tx,ty,n,yp0_var,ypn_var,ty2],[templateT],algo) ``` ```python ACP = ACodePrinter() s = ACP.doprint(tf) print(s) ``` // Not supported in C++: // IndexedBase // IndexedBase // IndexedBase // IndexedBase template<typename T> void cubic_spline_solve(T * x, T * y, int n, T yp0, T ypn, T * y2){ std::vector<T> u(n); T bc_start1 = ((yp0 < 9.9000000000000002e+29) ? ( 2*x[0] - 2*x[1] ) : ( 1 )); T bc_start2 = ((yp0 < 9.9000000000000002e+29) ? ( x[0] - x[1] ) : ( 0 )); T bc_end1 = ((ypn < 9.9000000000000002e+29) ? ( x[n - 1] - x[n - 2] ) : ( 0 )); T bc_end2 = ((ypn < 9.9000000000000002e+29) ? ( 2*x[n - 1] - 2*x[n - 2] ) : ( 1 )); T rhs_start = ((yp0 < 9.9000000000000002e+29) ? ( 6*yp0 + 6*y[0]/(-x[0] + x[1]) - 6*y[1]/(-x[0] + x[1]) ) : ( 0 )); T rhs_end = ((ypn < 9.9000000000000002e+29) ? ( 6*ypn - 6*y[n - 1]/(x[n - 1] - x[n - 2]) + 6*y[n - 2]/(x[n - 1] - x[n - 2]) ) : ( 0 )); y2[0] = bc_start2/bc_start1; u[0] = rhs_start/bc_start1; for (auto i = 1; i < n - 1; i += 1) { T z0 = -x[i]; T z1 = z0 + x[i + 1]; T z2 = z0 + x[i - 1]; T z3 = 2*x[i + 1]; T z4 = 2*x[i - 1]; T z5 = z2*y2[i - 1]; T z6 = -y[i]; u[i] = (z1*z2*z2*u[i - 1] - 6*z1*(z6 + y[i - 1]) + 6*z2*(z6 + y[i + 1]))/(z1*z2*(z3 - z4 + z5)); y2[i] = (-x[i + 1] + x[i])/(-z3 + z4 - z5); }; y2[n - 1] = (-bc_end1*u[n - 2] + rhs_end)/(-bc_end1*y2[n - 2] + bc_end2); for (auto i = n - 2; i >= 0; i += -1) { y2[i] = u[i] - y2[i + 1]*y2[i]; }; } ```python ``` ```python ```
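As a cross-check of the derivation (this verification is our addition, not part of the original notebook), the same forward and backward sweeps can be re-implemented directly in NumPy for natural boundary conditions and compared against `scipy.interpolate.CubicSpline`:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def spline_second_derivs(x, y):
    """Solve the tridiagonal system for E with natural BCs (E[0] = E[-1] = 0)."""
    n = len(x)
    L = np.diff(x)
    a = np.zeros(n); b = np.ones(n); c = np.zeros(n); d = np.zeros(n)
    for k in range(1, n - 1):
        a[k] = L[k-1]
        b[k] = 2*(L[k-1] + L[k])
        c[k] = L[k]
        d[k] = 6*((y[k+1] - y[k])/L[k] - (y[k] - y[k-1])/L[k-1])
    # forward sweep
    cp = np.zeros(n); dp = np.zeros(n)
    cp[0] = c[0]/b[0]
    dp[0] = d[0]/b[0]
    for k in range(1, n):
        m = b[k] - cp[k-1]*a[k]
        cp[k] = c[k]/m
        dp[k] = (d[k] - dp[k-1]*a[k])/m
    # backward sweep
    E = np.zeros(n)
    E[-1] = dp[-1]
    for k in range(n - 2, -1, -1):
        E[k] = dp[k] - cp[k]*E[k+1]
    return E

xk = np.linspace(0, 2*np.pi, 9)
yk = np.sin(xk)
E = spline_second_derivs(xk, yk)
cs = CubicSpline(xk, yk, bc_type='natural')
print(np.allclose(E, cs(xk, 2)))   # expect True
```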
```python from decodes.core import * from decodes.io.jupyter_out import JupyterOut import math out = JupyterOut.unit_square( ) ``` # Transformation Mathematics We are familiar with a set of operations in CAD designated by verbs, such as "Move”, “Mirror”, “Rotate”, and “Scale”, and that ***act upon a geometric object to produce the same kind of object, only transformed***. Operations such as these are termed ***transformations or transforms***. After an object has undergone a transformation, we can observe that certain properties of the object are altered while others are preserved. Mathematicians employ a number of terms (such as ***congruency, isometry, similarity, and affinity***) to classify transformations by the features they preserve and those they distort. Consider, for example, the axonometric projection transformation, which projects geometry onto a plane in such a way that parallel lines are mapped onto parallel lines, thereby maintaining parallelism. Mathematically speaking, we say that ***a transformation of a space onto itself is a rule which assigns to every point $P$ in the space another point $P^*$ in the space***. The simple reflection that was constructed by compass and straightedge is an example of a transformation of the plane (or any point on the plane) onto itself (in that all points end up somewhere else on the plane). Notice that, in the case of the mirror reflection, ***the kinds of things that come out after being reflected are the same kinds of things that go in***; namely, a reflected line is still a line with the same length, a reflected circle is still a circle with the same radius, and a reflected curve is still the same curve with all of the same geometric properties. There are other transformations that do not preserve geometric features in the same way. Consider the ***circle inversion transformation***, which can be expressed as the function below. \begin{align} T(x,y) = (\frac{R^2x}{x^2+y^2},\frac{R^2y}{x^2+y^2}) \end{align} The geometric properties that are preserved here may be more difficult to discern. Knowing that the graphic shows the inversion of points on a hexagonal grid, we can understand that while the linearity of lines are not preserved, the angle between two lines or curves is preserved. How and under what circumstances different classes of geometric features are preserved is an important and distinguishing property of transformations. Even though a transformation is formally defined as any function that takes a point and gives a point back in return, we will find it beneficial to narrow this definition to include only those transformations that may be represented by a particularly useful mathematical construct: ***the matrix***. A matrix is ***a structure for organizing sets of values in rows and columns, such that these values may be operated upon by a set of algebraic rules***. Matrix algebra underlies much of geometric computation. The expenditure of just a bit of effort in mastering the fundamentals of this potentially imposing mathematical construct will yield a wealth of insight in return. ## Matrix Fundamentals A brief account of the matrix will serve to ground our understanding of how transformations work in computer graphics in general, and will offer a basis for the implementation of transformations in the Decod.es library in particular. For this we will need a grasp of the basic notation for writing matrices, and a working understanding of how they are used to perform operations. 
In this section, we: * Detail the relevant notational conventions * Present the algebra of matrices * Demonstrate their basic operation on a simple example of transforming 2d vectors ### Matrix Notation A mathematical matrix is much like its namesake in code: a two-dimensional array that organizes values into regular rows and columns. An ***m x n matrix*** (read as “*m by n*”) and denoted throughout this chapter as $(m \times n)$, is an arrangement of elements into ***m rows*** and ***n columns***. Any matrix for which the number of rows and the number of columns are the same may be termed a square matrix. By convention, the notation for a generic element contained within a matrix is $c_{ij}$, with the subscript index $i$ indicating the containing row, and the index $j$ the containing column. Note that the conventional ordering of the indices of a matrix is the reverse of the `(x,y)` convention that we are accustomed to in describing horizontal and vertical positions. Also, positions are numbered starting at the top left, and the indexing starts with `(1,1)`, not with `(0,0)` as we have become accustomed to in code. \begin{bmatrix} c_{11} & c_{12} & c_{13} & c_{14} \\ c_{21} & c_{22} & c_{23} & c_{24} \\ c_{31} & c_{32} & c_{33} & c_{34} \end{bmatrix} ### Matrix Algebra With a grasp of the notation conventionally used to describe matrices, we are ready to review the rules by which they may be combined and manipulated. Three of the basic operations we are able to perform on matrices - addition, subtraction, and scalar multiplication - work exactly the same for matrices as they do for vectors, proceeding by operating on one set of matching components at a time. #### Matrix Addition Matrix addition and subtraction works ***component-wise***, matching components at the same indices of each matrix. This procedure requires that each matrix exhibits the same number of rows and columns. \begin{align} \begin{bmatrix} 2 & -1 \\ 3 & 0 \end{bmatrix} + \begin{bmatrix} -1 & 5 \\ 0 & 10 \end{bmatrix} = \begin{bmatrix} 1 & 4 \\ 3 & 10 \end{bmatrix} \end{align} #### Matrix-Scalar Multiplication Scalar multiplication matches the given scalar to each of the components of the matrix. \begin{align} 3 \begin{bmatrix} 1 & -1 \\ -2 & 1 \\ 0 & 2 \end{bmatrix} = \begin{bmatrix} 3 & -3 \\ -6 & 3 \\ 0 & 6 \end{bmatrix} \end{align} #### Matrix-Matrix Multiplication Matrices may be multiplied together to form another matrix, but the convention for doing so is more involved. Here, the components are formed by pairing rows of the first matrix with columns of the second and performing a “dot product” of the components. This convention imposes a rule on the shapes of the two matrices being multiplied: the number of columns in the first must match the number of rows of the second, such that a $(m \times p)$ matrix can only multiply a $(p \times n)$ matrix. The result of this multiplication is a $(m \times n)$ matrix that takes its number of rows from the first matrix, and its number of columns from the second. In summary: \begin{align} (m \times p)(p \times n) = (m \times n) \end{align} Each entry ***matches a row from the first matrix with a column from the second***, and is calculated by a dot product operation, as shown below. ### Matrices and Vectors Points and Vecs may be represented by matrices, such that a two-dimensional vector may be expressed as a $(1 \times 2)$ or, more often ***a $(2 \times 1)$ matrix***. Seen in this way, we can multiply a matrix by a vector only so long as the dimensions are compatible. 
A square matrix $M$ can then multiply a vector $\vec{x} = (x,y)$ in the following way:

\begin{align}
M\vec{x} = 
\begin{bmatrix}
c_{11} & c_{12} \\
c_{21} & c_{22}
\end{bmatrix}
\begin{bmatrix}
x \\
y
\end{bmatrix} = 
\begin{bmatrix}
c_{11}x + c_{12}y \\
c_{21}x + c_{22}y
\end{bmatrix}
\end{align}

A square matrix multiplied by a vector in $\mathbb{R}^2$ yields another vector in $\mathbb{R}^2$. We can legitimately say that ***$M$ maps one set of points onto a corresponding set of points***. This is the very definition of a transformation.

It is no exaggeration to say that the implications of this are profound. Consider that we have demonstrated that ***a compact and versatile mathematical form is capable of describing a high-level operation***. So armed, we need not think of any geometric operation, such as the rotation of a set of objects about an axis, merely as a command in software. Instead, we now have ***a mathematical instrument that captures this action precisely, compactly, and in a format that is completely independent*** from any software platform. The ramifications of this discovery are indeed far-reaching, and extend well beyond the two-dimensional planar transformations captured by the square matrix demonstrated above.

In summary: when a matrix $M$ multiplies a vector $\vec{x}$, it has the effect of transforming this vector into a new vector $M\vec{x}$. Substituting points for vectors, any $(2 \times 2)$ matrix can then be seen as a ***planar transformation*** that maps any point in the plane to another point in the plane. Similarly, a $(3 \times 3)$ matrix specifies a ***spatial transformation*** which maps a point from one location in space to another.

We require two more insights before we are in a good position for implementation.

* We need a deeper understanding of the nature of ***a special class of transformations*** that represents the basic building blocks critical to many operations relevant to visual design.
* To aggregate these basic elements into more complex operations requires ***a method for expressing transformations into coherent sequences***.

#### Examples of Matrix-Vector Multiplication

Before moving on, it will be worth our time to consider the specific cases outlined below, which demonstrate what happens to a generic vector when multiplied by a variety of fixed square matrices. These examples will help us to associate some familiar actions with the matrices that produce them.

##### Scaling Matrix

$ M = \begin{bmatrix} s & 0 \\ 0 & s \end{bmatrix} $

This matrix scales vectors by a uniform scaling factor, as can be seen by multiplying the matrix by a vector, expanded out below. The vector is stretched for values of $s$ greater than one, and contracts for values less than one. For negative values of $s$, the transformed vector is both scaled and flipped across the origin.

\begin{align}
\begin{bmatrix}
s & 0 \\
0 & s
\end{bmatrix}
\begin{bmatrix}
x \\
y
\end{bmatrix} = 
\begin{bmatrix}
sx \\
sy
\end{bmatrix} = 
s\begin{bmatrix}
x \\
y
\end{bmatrix}
\end{align}

##### Rotation Matrix

$ M = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} $

We can see what transformation this matrix represents by looking at how it acts on specific vectors. Multiplying this matrix by a vector rotates the vector by ninety degrees counterclockwise about the origin:

* $(1,0)$ is transformed to $(0,1)$
* $(0,1)$ is transformed to $(-1,0)$
* $(- 1,0)$ is transformed to $(0, -1)$.
\begin{align}
\begin{bmatrix}
0 & -1 \\
1 & 0
\end{bmatrix}
\begin{bmatrix}
x \\
y
\end{bmatrix} = 
\begin{bmatrix}
(0x) + (-1y) \\
(1x) + (0y)
\end{bmatrix} = 
\begin{bmatrix}
-y \\
x
\end{bmatrix}
\end{align}

##### Mirror Matrix

$ M = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} $

This matrix transforms the vector $(x,y)$ to $(y,x)$, which is the vector mirrored across the line $y = x$.

\begin{align}
\begin{bmatrix}
0 & 1 \\
1 & 0
\end{bmatrix}
\begin{bmatrix}
x \\
y
\end{bmatrix} = 
\begin{bmatrix}
(0x) + (1y) \\
(1x) + (0y)
\end{bmatrix} = 
\begin{bmatrix}
y \\
x
\end{bmatrix}
\end{align}

##### Projection Matrix

$ M = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} $

This matrix maps $(x,y)$ to $(x,0)$, its projection onto the x-axis.

\begin{align}
\begin{bmatrix}
1 & 0 \\
0 & 0
\end{bmatrix}
\begin{bmatrix}
x \\
y
\end{bmatrix} = 
\begin{bmatrix}
(1x) + (0y) \\
(0x) + (0y)
\end{bmatrix} = 
\begin{bmatrix}
x \\
0
\end{bmatrix}
\end{align}

## Matrix Transformations

The kinds of transformations that can be described by matrices are very special, and just three categories of matrix transformations are prevalent in computer graphics: ***linear, affine, and projective transformations***.

***Linear transformations*** represent the most constrained category, and include elemental transforms such as rotation, scaling, shearing, and reflection. A discussion of these will comprise the bulk of this section. A closely related category comprises the ***affine transformations***. These are ***"almost" linear***, as they can be expressed as the combination of a linear transformation and a translation vector. While pairing a matrix with a vector can be useful, an even more compact representation of an affine transformation as a matrix can be achieved by ***elevating the dimension*** of the matrix. Finally, we have the ***projective transformations*** that include orthographic projection and perspectival projection.

### Linear Transformations

To discuss the unique features of linear transformations, we will first establish the relationship between linear transformations and matrix transformations. To do so, we denote transformations that act on a vector by multiplication of a matrix as $T(\vec{x}) = M\vec{x}$. Matrices such as this share a number of properties in common for any choice of matrix $M$. Crucially, the following two properties hold true:

* The transformation of the sum of any two vectors is equal to the sum of their individual transformations. In other words, $T(\vec{x} + \vec{y}) = T(\vec{x}) + T(\vec{y})$ for any vectors $\vec{x}$ and $\vec{y}$.
* The transformation of the product of a scalar and a vector is equal to the product of the scalar and the transformation of the vector. In other words, $T(c\vec{x}) = cT(\vec{x})$ for any vector $\vec{x}$ and scalar $c$.

Any transformation that satisfies these two properties is called a ***linear transformation***. Linearity yields a remarkable number of useful consequences. Among these, three are particularly relevant for our purposes: two that concern the preservation of geometric features, and one that allows us to predict the action of a transformation simply by examining the values held by particular components of it.

* Linear transformations map straight lines to straight lines.
* Linear transformations preserve parallelism.
* If we know how a linear transformation acts for ***each vector in a basis***, then we can predict how it will transform ***every point and vector in that space***.
\begin{align}
T(\vec{x}) = T(x,y) = xT(1,0) + yT(0,1) = xT(\vec{e_{1}}) + yT(\vec{e_{2}})
\end{align}

This last property of linear transformations allows us to quickly read off the action of any given matrix, and enables us to write matrices with properties that we can easily control. Take, for example, the following matrix: an examination of the components here reveals how the standard basis vectors are transformed, and from this, we are able to extrapolate a pattern of behavior that can be applied more generally.

\begin{align}
\begin{bmatrix}
1 & 0 \\
0 & 2
\end{bmatrix}
\end{align}

The basis vector $\vec{e_{1}} = (1,0)$ is unchanged by the transformation, but $\vec{e_{2}} = (0,1)$ is stretched to twice its length. This is a one-dimensional scaling.

\begin{align}
\begin{bmatrix}
1 & 1 \\
0 & 2
\end{bmatrix}
\end{align}

The vector $\vec{e_{1}} = (1,0)$ is again fixed, so the x-axis remains unchanged, but $\vec{e_{2}} = (0,1)$ is shifted to the line $y = 2x$. This is a shear.

\begin{align}
\begin{bmatrix}
1 & 0 \\
0 & 1
\end{bmatrix}
\end{align}

This does precisely nothing. Mathematicians gave this one a compelling name: the identity transformation or $I$.

\begin{align}
T(\vec{x}) = xT(\vec{e_{1}}) + yT(\vec{e_{2}}) = \begin{bmatrix} T(\vec{e_{1}}) T(\vec{e_{2}}) \end{bmatrix} \vec{x}
\end{align}

Not only is every matrix transformation a linear one, but every linear transformation can be represented by a matrix. With this in mind, we can now assemble a library of useful linear transformations.

#### Selected Linear Transformations in the Plane

##### Rotation

$ \begin{bmatrix} cos\theta & -sin\theta \\ sin\theta & cos\theta \end{bmatrix} $

Building upon the earlier example representing a rotation by ninety degrees, the above matrix shows a transformation that rotates a vector by an arbitrary angle counter-clockwise about the origin. We've seen that all we need in order to construct this matrix is to understand how basis vectors are transformed. Working with the standard basis, we can show that rotating $\vec{e_{1}} = (1,0)$ by $\theta$ counterclockwise will result in the vector $(cos\theta, sin\theta)$. Similarly, $\vec{e_{2}} = (0,1)$ transforms to $(-sin\theta, cos\theta)$. Putting these transformed basis vectors in as columns, we arrive at the matrix above.

##### Orthogonal Projection

$ \begin{bmatrix} cos^2\theta & cos\theta \ sin\theta \\ cos\theta \ sin\theta & sin^2\theta \end{bmatrix} $

Given a line through the origin rotated at an angle $\theta$ counterclockwise from the horizontal, we may construct a matrix representing the transformation of the normal projection onto this line. The orthogonal projection of a point onto this line is equivalent to the nearest point on the line. To see how the standard basis vectors are transformed, we will make use of the formula for the projected vector derived using the dot product. Since a unit vector along the projection line is given by $\vec{u} = (cos\theta, sin\theta)$, the projected vector for $\vec{e_{1}}$ onto this line is given by

\begin{align}
(\vec{e_{1}} \bullet \vec{u}) \ \vec{u} = cos\theta \ (cos\theta, sin\theta) = (\ cos^2\theta, \ cos\theta \ sin\theta \ )
\end{align}

Similarly, the projected vector for $\vec{e_{2}} = (0,1)$ is $( \ cos\theta \ sin\theta, \ sin^2\theta \ )$.
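Before moving on to the mirror, it is worth verifying these matrices numerically. The following sketch uses NumPy (an assumption on our part; it is not otherwise part of the Decod.es setup) to check the rotation and orthogonal projection matrices against the basis-vector reasoning above:

```python
import numpy as np

theta = np.radians(30)
c, s = np.cos(theta), np.sin(theta)

rotate = np.array([[c, -s],
                   [s,  c]])
project = np.array([[c*c, c*s],
                    [c*s, s*s]])

e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

print(rotate @ e1)   # (cos30, sin30): e1 rotated 30 degrees counterclockwise
print(project @ e2)  # (cos30*sin30, sin30^2): e2 projected onto the line
```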
##### Mirror

$ \begin{bmatrix} 2 \ cos^2\theta-1 & 2 \ cos\theta \ sin\theta \\ 2 \ cos\theta \ sin\theta & 2 \ sin^2\theta-1 \end{bmatrix} $

Given a line as constructed above, we may express a general mirror transformation across this line in terms of the projection vectors by simple vector subtraction, as given by

\begin{align}
\vec{p_{mirror}} = \vec{p_{near}} + ( \vec{p_{near}} - \vec{p}) = 2\vec{p_{near}} - \vec{p}
\end{align}

The reflection across this line of $\vec{e_{1}} = (1,0)$ is thus given by $2(cos^2 \theta, \ cos\theta \ sin\theta) - (1,0)$ and the mirror of $\vec{e_{2}} = (0,1)$ is given by $2(cos\theta \ sin\theta, \ sin^2\theta) - (0,1)$. From these, we arrive at the general mirror transformation above.

***What's missing?***

Examining what we have covered thus far, we may note the conspicuous absence of what is perhaps the most basic of all transformations: translation. Although basic, this transformation is actually not a linear transformation. Expressing translation as displacement by a fixed vector, $T(\vec{x}) = \vec{x} + \vec{b}$, we see that the first condition of linearity is violated by any nonzero translation vector $\vec{b}$.

\begin{align}
T(\vec{x} + \vec{y}) = \vec{x}+\vec{y}+\vec{b} \neq T(\vec{x}) + T(\vec{y})
\end{align}

It appears, then, that ***translation is not able to be represented using a square matrix***. To account for a wider range of transformations that include translations using matrices requires elevating the size of the matrices employed. We'll get to that in a bit.

### The Algebra of Transformations in Sequence

Some transformations are better described as a sequence of operations, broken down into an ordered list of more basic transformations. The order of operations at work here matters. One great advantage of the matrix form is that ***the cumulative effect of the application of a sequence of transformations is equivalent to the application of the ordered product of this sequence***. In other words, we can capture a series of transformations in a single matrix. Of critical importance here is the order in which this multiplication is done: successive application of transformations represented by matrices translates to multiplying matrices in ***right-to-left order***.

\begin{align}
\begin{bmatrix}
\frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{2}} \\
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}
\end{bmatrix}
\begin{bmatrix}
-1 & 0 \\
0 & 1
\end{bmatrix} = 
\begin{bmatrix}
\frac{-1}{\sqrt{2}} & \frac{-1}{\sqrt{2}} \\
\frac{-1}{\sqrt{2}} & \frac{1}{\sqrt{2}}
\end{bmatrix}
\end{align}

\begin{align}
\begin{bmatrix}
-1 & 0 \\
0 & 1
\end{bmatrix}
\begin{bmatrix}
\frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{2}} \\
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}
\end{bmatrix} = 
\begin{bmatrix}
\frac{-1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}
\end{bmatrix}
\end{align}
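The right-to-left convention is easy to confirm numerically. A small sketch, again assuming NumPy rather than the Decod.es types, reproduces the two products above by applying a mirror across the y-axis and a 45-degree rotation in both orders:

```python
import numpy as np

rot45 = np.array([[1, -1],
                  [1,  1]]) / np.sqrt(2)   # rotate 45 degrees counterclockwise
mirror_y = np.array([[-1, 0],
                     [ 0, 1]])             # mirror across the y-axis

v = np.array([1.0, 0.0])

# Mirror first, then rotate: matrices are applied in right-to-left order
print(rot45 @ mirror_y @ v)   # [-0.707..., -0.707...]

# Rotate first, then mirror: a different result
print(mirror_y @ rot45 @ v)   # [-0.707...,  0.707...]
```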
Tacking a vector onto a matrix to deal with affine transformations works, but it ruins the purity of the matrix format. This is no good when considering how to move from math to code. There is an alternative. We can employ a system of coordinates called ***homogeneous coordinates***. By employing these, it is possible to use a $(4 \times 4)$ matrix to describe not only affine transformations in a three-dimensional space, but also many other useful transforms.

### The Elevated Matrix

The dominant technique in computer graphics is to elevate the square matrix to have an added dimension on each side. Therefore, $(3 \times 3)$ matrices are used for transformations in two dimensions and $(4 \times 4)$ are used for transformations in three dimensions. This unified representation both accounts for ***translation***, and accommodates the larger class of ***projective transformations***, which includes perspective projection.

Since matrix multiplication only works if the two matrices involved have compatible shapes, this technique also requires vectors and points that exhibit a modified structure. To be compatible with elevated matrices, our points and vectors must be granted an extra coordinate. Points in homogeneous coordinates are interchangeable with Cartesian points so long as $w = 1$, while vectors in homogeneous coordinates maintain a $w = 0$.

We can learn a lot by simply familiarizing ourselves with the relationship between certain patterns of component values in a $(4 \times 4)$ matrix and the spatial transformations that result. First, the long-awaited ***translation transformation***.

\begin{align} \begin{bmatrix} 1 & 0 & 0 & b_{x} \\ 0 & 1 & 0 & b_{y} \\ 0 & 0 & 1 & b_{z} \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} = \begin{bmatrix} x + b_{x} \\ y + b_{y} \\ z + b_{z} \\ 1 \end{bmatrix} \end{align}

This translation matrix applied to points moves them; but applied to a vector in homogeneous coordinates, $(x,y,z,0)$, it leaves the vector unchanged.

```python

```
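To make the distinction between points and vectors concrete, here is a small sketch (NumPy; this code is not part of the original text) applying the translation matrix above to a point ($w=1$) and to a vector ($w=0$):

```python
import numpy as np

bx, by, bz = 2.0, 3.0, 4.0
T = np.eye(4)
T[:3, 3] = [bx, by, bz]   # translation amounts go in the last column

point  = np.array([1.0, 1.0, 1.0, 1.0])   # w = 1: a point
vector = np.array([1.0, 1.0, 1.0, 0.0])   # w = 0: a vector

print(T @ point)    # [3. 4. 5. 1.] -- the point is moved by (bx, by, bz)
print(T @ vector)   # [1. 1. 1. 0.] -- the vector is unchanged
```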
# Announcements

- No Problem Set this week; Problem Set 4 will be posted on 9/28.
- Stay on at the end of lecture if you want to ask questions about Problem Set 3.

<style> @import url(https://www.numfys.net/static/css/nbstyle.css); </style> <a href="https://www.numfys.net"></a>

# Ordinary Differential Equations - higher order methods

<section class="post-meta"> Based on notes and notebooks by Niels Henrik Aase, Thorvald Ballestad, Vasilis Paschalidis and Jon Andreas Støvneng </section>

## Algorithms for initial value problem ODEs

Assume we have a first-order differential equation which can be expressed in the form

$$ \frac{dy}{dt} = g(y,t) $$

We will solve this on a constant-interval mesh of the independent variable $t$ defined by

$$ t_n = t_0 + n h $$

### Forward-Euler method

In Lecture 10 we derived Euler's method, which simply solves the first-order forward difference approximation to $dy/dt$

$$ \frac{y_{i+1}-y_i}{h} = g(y_i,t_i)$$

as

$$ y_{i+1} = y_i + h g(y_i,t_i) \label{Euler_fwd}\tag{3}$$

```python
# Importing the necessary libraries
import numpy as np               # NumPy is used to generate arrays and to perform some mathematical operations
import matplotlib.pyplot as plt  # Used for plotting results
```

```python
def forwardEuler_step(t, y, h, g, *P):
    """
    Implements a single step of the forward-Euler finite-difference scheme

    Parameters:
        t: time t
        y: Numerical approximation of y at time t
        h: Step size
        g: RHS of our ODE (RHS = Right hand side). Can be any function with signature g(t,y,*P).
        *P: tuple of parameters, arguments to g

    Returns:
        next_y: Numerical approximation of y at time t+h
    """
    next_y = y + h*g(t, y, *P)
    return next_y
```

We now need some sort of framework which will take this function and do the integration for us. Let's rewrite `full_Euler` from Lecture 10 to be more general:

```python
def odeSolve(t0, y0, tmax, h, g, method, *P):
    """
    A full numerical approximation of an ODE in a set time interval. Performs consecutive steps of `method`
    with step size h from the start time until the end time. Also takes into account the initial values of the ODE.

    Parameters:
        t0: start time
        y0: initial condition for y at t = t0
        tmax: the end of the interval where `method` is integrated, t_N
        h: step size
        g: RHS of our ODE (RHS = Right hand side). Can be any function with signature g(t,y,*P).
        *P: tuple of parameters, arguments to g

    Returns:
        t_list: evenly spaced discrete list of times with spacing h, from t0 to tmax
        y_list: numerical approximation of y at the times in t_list
    """
    # make the t-mesh; guarantees we stop precisely at tmax
    t_list = np.arange(t0,tmax+h,h)
    # allocate space for the solution
    y_list = np.zeros_like(t_list)
    # set the initial condition
    y_list[0] = y0

    # find out the size of the t-mesh, and then integrate forward one meshpoint per iteration of the loop
    n, = t_list.shape
    for i in range(0,n-1):
        y_list[i+1] = method(t_list[i], y_list[i], h, g, *P)

    # return the solution
    return t_list,y_list
```

Armed with this machinery, let's set up another simple problem and try it out.
Last time we looked at exponential growth; this time, let's solve exponential decay:

$$ \frac{dy}{dt} = - c y, \quad y[0] = 1 $$

First, we provide a function to implement the RHS:

```python
def expRHS(t, y, c):
    """
    Implements the RHS (y'(x)) of the DE
    """
    return -c*y
```

Next we will set up the problem, compute the result, and plot it along with the magnitude of the fractional error; this comparison is carried out in the code below, together with the Runge-Kutta schemes introduced next.

### Runge-Kutta Schemes

The idea of the Runge-Kutta schemes is to take advantage of derivative information at times between $t_i$ and $t_{i+1}$ to increase the order of accuracy. For example, in the midpoint method, the derivative at the initial time is used to approximate the derivative at the midpoint of the interval, $f(y_i+\frac{1}{2}hf(y_i,t_i), t_i+\frac{1}{2}h)$. The derivative at the midpoint is then used to advance the solution to the next step. The method can be written in two stages $k_i$,

$$ \begin{aligned} \begin{array}{l} k_1 = h f(y_i,t_i)\\ k_2 = h f(y_i+\frac{1}{2}k_1, t_i+\frac{1}{2}h)\\ y_{i+1} = y_i + k_2 \end{array} \end{aligned}\label{RK2}\tag{4} $$

The midpoint method is known as a __2nd-order Runge-Kutta__ formula. In general, an explicit 2-stage Runge-Kutta method can be written as,

$$ \begin{array}{l} k_1 = h f(y_n,t_n)\\ k_2 = h f(y_n+b_{21}k_1, t_n+a_2h) \\ y_{n+1} = y_n + c_1k_1 +c_2k_2 \label{explicitrk2}\tag{5}\end{array} $$

The scheme is said to be *explicit* since a given stage does not depend *implicitly* on itself, as in the backward Euler method, or on a later stage. Other explicit second-order schemes can be derived by comparing Eq. (\ref{explicitrk2}) to other second-order expansions and matching terms to determine the coefficients $a_2$, $b_{21}$, $c_1$ and $c_2$.

### Explicit Fourth-Order Runge-Kutta Method

Explicit Runge-Kutta methods are popular, as each stage can be calculated with one function evaluation. In contrast, implicit Runge-Kutta methods usually involve solving a non-linear system of equations in order to evaluate the stages. As a result, explicit schemes are much less expensive to implement than implicit schemes.

The higher-order Runge-Kutta methods can be derived in a manner similar to the midpoint formula: an s-stage method is compared to a Taylor method and the terms are matched up to the desired order.

As it happens, the <strong>fourth-order Runge-Kutta method</strong> uses three such test-points and is the most widely used Runge-Kutta method. You might ask why we don't use five, ten or even more test-points, and the answer is quite simple: it is not computationally free to calculate all these test-points, and the gain in accuracy rapidly decreases beyond the fourth order of the method. That is, if high precision is of such importance that you would require a tenth-order Runge-Kutta, then you're better off reducing the step size $h$ than increasing the order of the method. Also, there exist other, more sophisticated methods which can be both faster and more accurate for equivalent choices of $h$, but which may, obviously, be a lot more complicated to implement. See for instance <i>Richardson Extrapolation</i>, <i>the Bulirsch-Stoer method</i>, <i>Multistep methods, Multivalue methods</i> and <i>Predictor-Corrector methods</i>.
The classic fourth-order Runge-Kutta formula is:

$$ \begin{array}{l} k_1 = h f(y_n,t_n)\\ k_2 = h f(y_n+\frac{k_1}{2}, t_n+\frac{h}{2})\\ k_3 = h f(y_n+\frac{k_2}{2}, t_n+\frac{h}{2})\\ k_4 = h f(y_n+k_3, t_n+h)\\ y_{n+1} = y_n + \frac{k_1}{6}+ \frac{k_2}{3}+ \frac{k_3}{3} + \frac{k_4}{6} \label{RK4}\tag{6}\end{array} $$

```python
def RK2_step(t, y, h, g, *P):
    """
    Implements a single step of the second-order, explicit midpoint method
    """
    thalf = t + 0.5*h
    k1 = h * g(t, y, *P)
    k2 = h * g(thalf, y + 0.5*k1, *P)
    return y + k2
```

```python
def RK4_step(t, y, h, g, *P):
    """
    Implements a single step of a fourth-order, explicit Runge-Kutta scheme
    """
    thalf = t + 0.5*h
    k1 = h * g(t, y, *P)
    k2 = h * g(thalf, y + 0.5*k1, *P)
    k3 = h * g(thalf, y + 0.5*k2, *P)
    k4 = h * g(t + h, y + k3, *P)
    return y + (k1 + 2*k2 + 2*k3 + k4)/6
```

```python
# set up problem
c = 1.0
h = 0.5
t0 = 0.0
y0 = 1.0
tmax = 5.0

# call the solver for RK2
t, y2 = odeSolve(t0, y0, tmax, h, expRHS, RK2_step, c)

# plot the result, along with the exact answer
fig,ax = plt.subplots(1,2)
ans = np.exp(-c*t)
ax[0].plot(t,ans,'r')
ax[0].set_xlabel('t')
ax[0].set_ylabel('y')
ax[0].plot(t,y2,'o',label='RK2')
err_RK2 = np.abs((ans-y2)/ans)

# call the solver for forward-Euler
t, y = odeSolve(t0, y0, tmax, h, expRHS, forwardEuler_step, c)
ax[0].plot(t,y,'o',label='Euler')
err = np.abs((ans-y)/ans)

# call the solver for RK4
t, y4 = odeSolve(t0, y0, tmax, h, expRHS, RK4_step, c)
ax[0].plot(t,y4,'o',label='RK4')
err_RK4 = np.abs((ans-y4)/ans)
ax[0].legend()

# plot the fractional errors of all three methods
ax[1].plot(t, err_RK2, 'o',label = "RK2")
ax[1].plot(t, err_RK4, 'o',label = "RK4")
ax[1].plot(t, err, 'o',label = "Euler")
ax[1].set_xlabel('t')
ax[1].set_ylabel('fractional error')
ax[1].legend()

# this gives better spacing between axes
plt.tight_layout()
plt.show()
```

### Systems of First-Order ODEs

Next, we turn to systems of ODEs. We'll take as our example the Lotka-Volterra equations, a simple model of population dynamics in an ecosystem (with many other uses as well).

Imagine a population of rabbits and of foxes on a small island. The rabbits eat a plentiful supply of grass and would breed like, well, rabbits, with their population increasing exponentially with time in the absence of predators. The foxes eat the rabbits, and would die out exponentially in time with no food supply. The rate at which foxes eat rabbits depends upon the product of the fox and rabbit populations. The equations for the populations of the rabbits $R$ and foxes $F$ in this simple model are then

\begin{eqnarray*} \frac{dR}{dt} &= \alpha R - \beta R F \\ \frac{dF}{dt} &= \delta R F - \gamma F \end{eqnarray*}

Without the cross terms in $RF$, these are just two decay equations of the form we have used as an example above. A random set of parameters (I am not a biologist!) might be that a rabbit lives four years, so $\alpha=1/4$, and a fox lives 10 years, so $\gamma=1/10$; we could then pick the other parameters as $\beta = 1$ and $\delta = 1/4$ (the code below uses a different set of values that gives nice-looking oscillations).

We can express the unknown populations as a vector of length two: $y = (R, F)$. The rate of change of the populations can then also be expressed as a vector $dy/dt = (dR/dt, dF/dt)$.
With such a definition, we can write the RHS function of our system as

```python
def lvRHS(t, y, *P):
    # Lotka-Volterra system RHS

    # unpack the parameters from the array P
    alpha, beta, gamma, delta = P

    # make temporary variables with rabbit and fox populations
    R = y[0]
    F = y[1]

    # LV system
    dRdt = alpha * R - beta * R * F
    dFdt = delta * R * F - gamma * F

    # return an array of derivatives with same order as input vector
    return np.array([ dRdt, dFdt ])
```

We now have to generalize our odeSolve function to allow more than one equation

```python
def odeSolve(t0, y0, tmax, h, RHS, method, *P):
    """
    ODE driver with constant step-size, allowing systems of ODE's
    """
    # make array of times and find length of array
    t = np.arange(t0,tmax+h,h)
    ntimes, = t.shape

    # find out if we are solving a scalar ODE or a system of ODEs, and allocate space accordingly
    if type(y0) in [int, float]:  # check if primitive type -- means only one eqn
        neqn = 1
        y = np.zeros( ntimes )
    else:                         # otherwise assume a numpy array -- a system of more than one eqn
        neqn, = y0.shape
        y = np.zeros( (ntimes, neqn) )

    # set first element of solution to initial conditions (possibly a vector)
    y[0] = y0

    # march on...
    for i in range(0,ntimes-1):
        y[i+1] = method(t[i], y[i], h, RHS, *P)

    return t,y
```

Now we can solve our system of two coupled ODEs. Note that the solution is now a vector of 2D vectors... the first index is the solution time, the second the variable:

```python
alpha = 1.0
beta = 0.025
gamma = 0.4
delta = 0.01

h = 0.2
t0 = 0.0
y0 = np.array([ 30, 10 ])
tmax = 50

# call the solver
t, y = odeSolve(t0, y0, tmax, h, lvRHS, RK4_step, alpha, beta, gamma, delta)

fig,ax = plt.subplots()
ax.plot(t,y[:,0],'b', label='prey')
ax.plot(t,y[:,1],'r', label='predator')
ax.set_xlabel('time')
ax.set_ylabel('population')
ax.legend()
plt.tight_layout()
plt.show()
```

### Higher Order Derivatives and Sets of 1st order ODEs

The trick to solving ODEs with higher derivatives is turning them into systems of first-order ODEs. As a simple example, consider the second-order differential equation describing the van der Pol oscillator

$$ \frac{d^2 x}{dt^2} - a (1-x^2) \frac{dx}{dt} + x = 0 $$

We turn this into a pair of first-order ODEs by defining an auxiliary function $v(t) = dx/dt$ and writing the system as

\begin{align} \begin{split} \frac{dv}{dt} &= a (1-x^2) v - x\\ \frac{dx}{dt} &= v \end{split} \end{align}

Note that there are only functions (and the independent variable) on the RHS; all "differentials" are on the LHS. Now that we have a system of first-order equations, we can proceed as above. A function describing the RHS of this system is

```python
def vdpRHS(t, y, a):
    # we store our function as the array [x, x']
    return np.array([
        y[1],                       # dx/dt = v
        a*(1-y[0]**2)*y[1] - y[0]   # dv/dt = a*(1-x**2)*v - x
    ])
```

```python
a = 15   # parameter

h = 0.01
t0 = 0.0
y0 = np.array([ 0, 1])
tmax = 50

# call the solver
t, y = odeSolve(t0, y0, tmax, h, vdpRHS, RK4_step, a)

fig,ax = plt.subplots()
ax.plot(t,y[:,0],'b', label='x')
ax.plot(t,y[:,1],'r--', label='v')
ax.set_xlabel('time')
ax.legend()
ax.set_title(f"van der Pol Oscillator for a={a}")
plt.tight_layout()
plt.show()
```

A somewhat more complex example is the Lane-Emden equation, which is really just Poisson's equation in spherical symmetry for the gravitational potential of a self-gravitating fluid whose pressure is related to its density as $P\propto\rho^\gamma$.
Such a system is called a _polytrope_, and is often used in astrophysics as a simple model for the structure of a system, such as a star, in which outward pressure and inward gravity are in equilibrium.

Let $\xi$ be the dimensionless radius of the system, and let $\theta$ be related to the density as $\rho = \rho_c \theta^n$, where $\rho_c$ is the density at the origin and $n = 1/(\gamma-1)$. We then have the dimensionless second-order differential equation

$$ \frac{1}{\xi^2}\frac{d}{d\xi}\left(\xi^2\frac{d\theta}{d\xi}\right) + \theta^n = 0 $$

Note that the first term is just the Laplacian $\nabla^2\theta$ in spherical symmetry. If we expand out the first term, we have

$$ \frac{d^2\theta}{d\xi^2} + \frac{2}{\xi}\frac{d\theta}{d\xi} + \theta^n = 0 $$

Defining an auxiliary function $v(\xi) = d\theta/d\xi$, we can then convert this into a system of two first-order ODEs:

\begin{align} \begin{split} \frac{dv}{d\xi} &= -\frac{2}{\xi} v - \theta^n \\ \frac{d\theta}{d\xi} &= v \end{split} \end{align}

Again, we have "derivatives" only on the LHS and no derivatives on the RHS of our system.

Looking at this expression, one can see right away that at the origin $\xi=0$ we will have a numerical problem; we are dividing by zero. Analytically, this is not a problem, since $v/\xi\rightarrow0$ as $\xi\rightarrow0$, but here we need to address this numerically. The first approach is to take care of the problem in our RHS function:

```python
def leRHS(x, y, n):
    dthetadx = y[1]
    if x==0:
        # limiting value at the origin: since (2/x)*v -> 2*theta''(0),
        # the equation gives 3*theta''(0) = -theta**n there
        dvdx = -y[0]**n/3
    else:
        dvdx = -2/x*y[1] - y[0]**n
    return np.array([ dthetadx, dvdx ])
```

This is somewhat clunky, however, and you would first have to convince yourself that in fact $v(\xi)\rightarrow0$ faster than $\xi$ (don't just take my word for it!). Instead, we could use a more direct RHS function

```python
def leRHS(x, y, n):
    dthetadx = y[1]
    dvdx = -2/x*y[1] - y[0]**n
    return np.array([ dthetadx, dvdx ])
```

and expand the solution in a Taylor series about the origin to get a starting value for our numerical integration at a small distance away from the origin. To do this, write

$$\theta(\xi) = a_0 + a_1 \xi + a_2 \xi^2 + \dots$$

The first thing to notice is that, by symmetry, only even powers of $\xi$ will appear in the solution. Thus we will have

$$ \theta(\xi) = a_0 + a_2 \xi^2 + a_4 \xi^4 + \dots$$

By the boundary condition $\theta(0) = 1$, we have immediately that $a_0 = 1$.

Next, substitute $\theta(\xi) = 1 + a_2 \xi^2 + a_4 \xi ^4 + O(\xi^6)$ into the Lane-Emden equation. $\theta$ and its first two derivatives are

\begin{align} \begin{split} \theta(\xi) &= 1 + a_2 \xi^2 + a_4 \xi^4 + O(\xi^6)\\ \theta'(\xi) &= 2 a_2 \xi + 4 a_4 \xi^3 + O(\xi^5) \\ \theta''(\xi) &= 2 a_2 + 12 a_4 \xi^2 + O(\xi^4) \end{split} \end{align}

Putting these into the Lane-Emden equation, we have

\begin{align} \begin{split} 2 a_2 + 12 a_4 \xi^2 + O(\xi^4) + \frac{2}{\xi} (2 a_2 \xi + 4 a_4 \xi^3 + O(\xi^5)) &= -\theta^n \\ 6 a_2 + 20 a_4 \xi^2 + O(\xi^4) &= -\theta^n \end{split} \end{align}

Evaluating this at $\xi = 0$, where $\theta = 1$ by the boundary condition, gives $6 a_2 = -1$, and thus we have $a_2 = -1/6$. Away from zero, then, we have

\begin{align} \begin{split} -1 + 20 a_4 \xi^2 + O(\xi^4) &= -\left(1 - \xi^2/6 + a_4 \xi^4 + O(\xi^6)\right)^n \end{split} \end{align}

The RHS expands to $-1 + n \xi^2/6 + O(\xi^4)$, so matching the $\xi^2$ terms requires $20 a_4 = n/6$, and we must have $a_4 = n/120$.
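We can also let sympy confirm this coefficient (a quick sketch, not part of the original notes; it fixes a concrete index $n=3$ for illustration):

```python
# verify the xi**2-matching condition 20*a4 - n/6 = 0 symbolically
import sympy as sp

xi, a4 = sp.symbols('xi a_4')
n_val = 3   # concrete polytropic index, chosen here for illustration
theta_series = 1 - xi**2/6 + a4*xi**4

# left-hand side of the expanded Lane-Emden equation with the series inserted
lhs = sp.diff(theta_series, xi, 2) + 2/xi*sp.diff(theta_series, xi) + theta_series**n_val

# the xi**2 coefficient must vanish; solving for a4 gives n/120 (= 1/40 for n = 3)
coeff2 = sp.expand(lhs).coeff(xi, 2)
print(sp.solve(sp.Eq(coeff2, 0), a4))
```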
Thus, the series expansion of the solution around the origin is

$$ \theta(\xi) = 1 - \frac{1}{6}\xi^2 + \frac{n}{120} \xi^4 + \dots $$

We can now use this expansion to take a first step slightly away from the origin before beginning our numerical integration, thus avoiding the divide by zero. Note that this series solution near the origin is $O(h^5)$ and so is a good match for RK4 if we take the same (or smaller) step-size.

```python
n = 3

xi0 = 0.01                            # starting value of xi for our numerical integration
theta0 = 1 - xi0**2/6 + n*xi0**4/120  # Taylor series solution to the DE near zero derived above
theta0p = -xi0/3 + n*xi0**3/30

y0 = np.array([ theta0, theta0p])     # set IC's for numerical integration
print(f"IC at {xi0:10.5e}: {y0[0]:10.5e}, {y0[1]:10.5e}")

h = 0.1
tmax = 8

# call the solver
t, y = odeSolve(xi0, y0, tmax, h, leRHS, RK4_step, n)

fig,ax = plt.subplots()
ax.plot(t,y[:,0],'b', label=r'$\theta(\xi)$')
ax.plot(t,y[:,1],'r--', label=r'$\frac{d\theta}{d\xi}$')
ax.plot([0,tmax],[0,0],'k')
ax.set_xlabel(r'$\xi$')
ax.set_title(f"Lane Emden Equation for n={n}")
ax.legend()
plt.tight_layout()
plt.show()
```

For values of $n\le5$, the solutions of the Lane-Emden equation (the so-called Lane-Emden functions of index $n$) decrease to zero at finite $\xi$. Since this is the radius at which the density goes to zero, we can interpret it as the surface of the self-gravitating body (for example, the radius of the star). Knowing this value $\xi_1$ is thus interesting... Let us see how to determine it numerically.

Clearly, we are looking for the solution to $\theta(\xi_1)=0$; this is just root-finding, which we already know how to do. Instead of using some closed-form function, however, the value of the function $\theta(\xi)$ must in this case be determined numerically. But we have just figured out how to do this!

Let's use the bisection method for our root-finding algorithm; here is a quick version (no error checking!)

```python
def bisection(func, low, high, eps, *P):
    flow = func(low, *P)
    fhigh = func(high, *P)
    mid = 0.5*(low+high)
    fmid = func(mid,*P)

    while (high-low)> eps:
        if fmid*flow < 0:
            high = mid
            fhigh = fmid
        else:
            low = mid
            flow = fmid   # copy the function value, not the coordinate
        mid = 0.5*(low+high)
        fmid = func(mid,*P)

    return low
```

Now let us make a function which returns $\theta(\xi)$, the solution to the Lane-Emden equation at $\xi$

```python
def theta(xi, n):
    h = 1e-4
    xi0 = 1e-4
    theta0 = 1 - xi0**2/6 + n*xi0**4/120
    theta0p = -xi0/3 + n*xi0**3/30
    y0 = np.array([ theta0, theta0p])
    t, y = odeSolve(xi0, y0, xi, h, leRHS, RK4_step, n)
    return y[-1,0]
```

Using these, we can compute the surface radius of the polytrope

```python
n = 3
xi1 = bisection(theta, 6, 8, 1e-5, n)
print(f"xi_1 = {xi1:7.5f}")
```

A more careful treatment gives a value $\xi_1 = 6.89685...$, so we are doing pretty well...

```python

```
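As an independent cross-check (a sketch using SciPy, which is not used in the notes above), we can integrate the same system with `solve_ivp` and let an event function locate the zero crossing of $\theta$ directly:

```python
# cross-check of xi_1 with scipy's adaptive integrator and event detection
import numpy as np
from scipy.integrate import solve_ivp

n = 3
xi0 = 1e-4
y0 = [1 - xi0**2/6 + n*xi0**4/120, -xi0/3 + n*xi0**3/30]   # series ICs from above

def rhs(x, y):
    return [y[1], -2/x*y[1] - y[0]**n]

def surface(x, y):   # fires when theta crosses zero
    return y[0]
surface.terminal = True

sol = solve_ivp(rhs, (xi0, 10), y0, events=surface, rtol=1e-10, atol=1e-12)
print(f"xi_1 = {sol.t_events[0][0]:.5f}")   # should be close to 6.89685
```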
# definition

Numerical definition: in an N-bit two's complement number system, the highest (N-th) bit is the sign bit — 0 for positive, 1 for negative. For any non-negative integer, its negation is its complement with respect to $2^N$.

# properties

- A number's two's complement can be obtained by:
  1. taking its ones' complement and adding one. This works because the sum of a number and its ones' complement is -0, i.e., all '1' bits, or $2^N-1$; and by definition, the sum of a number and its two's complement is $2^N$.
  2. computing it directly via the unsigned binary arithmetic in the definition.

- The value of an N-bit binary number in the two's complement numeral system can be computed by $$w = -a_{N-1} 2^{N-1} + \sum_{i=0}^{N-2} a_i 2^i$$

  Derivation: for a pair of numbers $A$ and $B$ that are negations of each other, we know: $$ \begin{align} A + B &= 0 \\ A &= \sim B + 1 \end{align} $$ The second equation is the relation between the two's complement and the ones' complement. From it we get $$ A=1+\sum_{i=0}^{N-1}(1-b_i)2^i $$ In an N-bit number system, $A+B=0$ means $$(A+B) \operatorname{mod} 2^N = 0$$ There are two ways to design a $B$ that satisfies this: $$ \begin{align} B &= b_{N-1}2^{N-1} + \sum_{i=0}^{N-2}b_i 2^i \\ B &= -b_{N-1}2^{N-1} + \sum_{i=0}^{N-2}b_i 2^i \end{align} $$ which correspond to the value formulas for unsigned numbers and signed numbers, respectively.

- The most negative number, `INT_MIN`, has no negation expressible within the system, because the positive and negative ranges are asymmetric. Computing the two's complement of `INT_MIN` by the definition yields `INT_MIN` itself, which is clearly wrong.

- Addition and subtraction: ordinary binary addition and subtraction work as-is, with no special handling.

- Overflow detection for addition and subtraction: an XOR operation on the leftmost two carry/borrow bits can quickly determine if an overflow condition exists.

```python

```
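These properties are easy to play with in Python; the following sketch (not part of the original note) illustrates the ones'-complement-plus-one rule, the signed value formula, and the carry-XOR overflow test for an 8-bit system:

```python
N = 8
MASK = (1 << N) - 1          # 0xFF

def twos_complement(x):
    """Ones' complement plus one, reduced mod 2**N."""
    return ((x ^ MASK) + 1) & MASK

def signed_value(x):
    """Value of an N-bit pattern: w = -a_{N-1}*2**(N-1) + sum of the low bits."""
    return x - (1 << N) if x & (1 << (N - 1)) else x

def add_overflows(a, b):
    """XOR of the carry into and the carry out of the sign bit."""
    carry_in  = ((a & (MASK >> 1)) + (b & (MASK >> 1))) >> (N - 1)
    carry_out = (a + b) >> N
    return bool(carry_in ^ carry_out)

print(twos_complement(5), signed_value(twos_complement(5)))   # 251 -5
print(add_overflows(0x7F, 0x01))   # 127 + 1 overflows in 8 bits -> True
print(add_overflows(0x05, 0x03))   # 5 + 3 -> False
```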
```python
from sympy import *
from sympy.abc import m,M,l,b,c,g,t
from sympy.physics.mechanics import dynamicsymbols, init_vprinting

th = dynamicsymbols('theta')
x = dynamicsymbols('x')
dth = diff(th)
dx = diff(x)
ddth = diff(dth)
ddx = diff(dx)
init_vprinting()
```

```python

```

```python
# accelerations of the cart-pendulum system, reconstructed from the original
# (garbled) cell: '*' operators and '**' powers restored; the self-referencing
# ddth term and the denominator (m/12)*(3*l + l**2) are kept as written
ddth = (-Rational(1,2)*m*l*cos(th)*ddth - b*dx + Rational(1,2)*m*l*sin(th)*dth*dx)/((m/12)*(3*l + l**2))
ddx = (-Rational(1,2)*m*l*cos(th)*ddth - b*dx + Rational(1,2)*m*l*sin(th)*dth**2)/(M + m)
```
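Since each acceleration appears in the other equation, one way to make the coupling explicit (a sketch, not part of the original notebook; it assumes the self-referencing `ddth` term in the first equation was meant to be the coupled `ddx` term, and reuses the symbols from the first cell) is to introduce stand-in symbols for the two accelerations and let sympy solve the linear system:

```python
import sympy as sp

dd_th, dd_x = sp.symbols('ddtheta ddx')   # stand-ins for the two accelerations

eq1 = sp.Eq(dd_th, (-sp.Rational(1,2)*m*l*sp.cos(th)*dd_x - b*dx
                    + sp.Rational(1,2)*m*l*sp.sin(th)*dth*dx)/((m/12)*(3*l + l**2)))
eq2 = sp.Eq(dd_x, (-sp.Rational(1,2)*m*l*sp.cos(th)*dd_th - b*dx
                   + sp.Rational(1,2)*m*l*sp.sin(th)*dth**2)/(M + m))

# solve the coupled linear system for both accelerations at once
sol = sp.solve([eq1, eq2], [dd_th, dd_x], dict=True)[0]
sol[dd_th], sol[dd_x]
```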
# The Harmonic Oscillator Strikes Back

*Note:* Much of this is adapted/copied from https://flothesof.github.io/harmonic-oscillator-three-methods-solution.html

This week we continue our adventures with the harmonic oscillator.

The harmonic oscillator is a system that, when displaced from its equilibrium position, experiences a restoring force F proportional to the displacement x:

$$F=-kx$$

The potential energy of this system is

$$V = {1 \over 2}k{x^2}$$

These are sometimes rewritten as

$$ F=- \omega_0^2 m x, \text{ } V(x) = {1 \over 2} m \omega_0^2 {x^2}$$

where $\omega_0 = \sqrt {{k \over m}} $

If the equilibrium value of the harmonic oscillator is not zero, then

$$ F=- \omega_0^2 m (x-x_{eq}), \text{ } V(x) = {1 \over 2} m \omega_0^2 (x-x_{eq})^2$$

## 1. Harmonic oscillator from last time (with some better defined conditions)

Applying the harmonic oscillator force to Newton's second law leads to the following second order differential equation

$$ F = m a $$

$$ F= -m \omega_0^2 (x-x_{eq}) $$

$$ a = - \omega_0^2 (x-x_{eq}) $$

$$ x(t)'' = - \omega_0^2 (x-x_{eq}) $$

The final expression can be rearranged into a second order homogeneous differential equation, and can be solved using the methods we have used before. It is solved again here to remind you how we found these values.

```python
import sympy as sym
sym.init_printing()
```

**Note** that this time we define some of the properties of the symbols. Namely, that the frequency is always positive and real and that the positions are always real

```python
omega0,t=sym.symbols("omega_0,t",positive=True,nonnegative=True,real=True)
xeq=sym.symbols("x_{eq}",real=True)
x=sym.Function("x",real=True)
x(t),omega0
```

```python
dfeq=sym.Derivative(x(t),t,2)+omega0**2*(x(t)-xeq)
dfeq
```

```python
sol = sym.dsolve(dfeq)
sol
```

```python
sol,sol.args[0],sol.args[1]
```

**Note** this time we define the initial positions and velocities as real

```python
x0,v0=sym.symbols("x_0,v_0",real=True)
ics=[sym.Eq(sol.args[1].subs(t, 0), x0),
     sym.Eq(sol.args[1].diff(t).subs(t, 0), v0)]
ics
```

```python
solved_ics=sym.solve(ics)
solved_ics
```

### 1.1 Equation of motion for $x(t)$

```python
full_sol = sol.subs(solved_ics[0])
full_sol
```

### 1.2 Equation of motion for $p(t)$

```python
m=sym.symbols("m",positive=True,nonnegative=True,real=True)
p=sym.Function("p")
sym.Eq(p(t),m*sol.args[1].subs(solved_ics[0]).diff(t))
```

## 2. Time average values for a harmonic oscillator

If we want to understand the average value of a time dependent observable, we need to solve the following integral

$${\left\langle {A(t)} \right\rangle}_t = \lim_{\tau \to \infty} \frac{1}{\tau }\int\limits_0^\tau {A(t)dt} $$

### 2.1 Average position ${\left\langle {x} \right\rangle}_t$ for a harmonic oscillator

```python
tau=sym.symbols("tau",nonnegative=True,real=True)
xfunc=full_sol.args[1]
xavet=(xfunc.integrate((t,0,tau))/tau).limit(tau,sym.oo)
xavet
```

The computer does not always make the best choices the first time. If you treat each sum individually this is not a hard limit to do by hand. The computer is not smart.
We can help it by inserting an `expand()` function in the statement

```python
xavet=(xfunc.integrate((t,0,tau))/tau).expand().limit(tau,sym.oo)
xavet
```

### 2.2 Exercise: Calculate the average momenta ${\left\langle {p} \right\rangle}_t$ for a harmonic oscillator

```python
# Your code here
m=sym.symbols("m",positive=True,nonnegative=True,real=True)
p=sym.Function("p")
sym.Eq(p(t),m*sol.args[1].subs(solved_ics[0]).diff(t))

tau=sym.symbols("tau",nonnegative=True,real=True)
pfunc=sym.Eq(p(t),m*sol.args[1].subs(solved_ics[0]).diff(t)).args[1]
pavet=(pfunc.integrate((t,0,tau))/tau).limit(tau,sym.oo)
pavet
```

### 2.3 Exercise: Calculate the average kinetic energy of a harmonic oscillator

```python
# Your code here
m=sym.symbols("m",positive=True,nonnegative=True,real=True)
KE=sym.Function("KE")
sym.Eq(KE(t),(m*sol.args[1].subs(solved_ics[0]).diff(t))**2/(2*m))
```

```python
tau=sym.symbols("tau",nonnegative=True,real=True)
KEfunc=sym.Eq(KE(t),(m*sol.args[1].subs(solved_ics[0]).diff(t))**2/(2*m)).args[1]
KEavet=(KEfunc.integrate((t,0,tau))/tau).limit(tau,sym.oo)
KEavet
```

```python
KEavet=(KEfunc.integrate((t,0,tau))/tau).expand().limit(tau,sym.oo)
KEavet
```

## 3. Ensemble (Thermodynamic) Average values for a harmonic oscillator

If we want to understand the thermodynamic ensemble average value of an observable, we need to solve the following integral.

$${\left\langle {A(t)} \right\rangle}_{T} = \frac{\int{A e^{-\beta H}dqdp}}{\int{e^{-\beta H}dqdp} } $$

You can think of this as a temperature average instead of a time average. Here $\beta=\frac{1}{k_B T}$ and the classical Hamiltonian, $H$, is

$$ H = \frac{p^2}{2 m} + V(q)$$

**Note** that the factors of $1/h$ found in the classical partition function cancel out when calculating average values

### 3.1 Average position ${\left\langle {x} \right\rangle}_T$ for a harmonic oscillator

For a harmonic oscillator with equilibrium value $x_{eq}$, the Hamiltonian is

$$ H = \frac{p^2}{2 m} + \frac{1}{2} m \omega_0^2 (x-x_{eq})^2 $$

First we will calculate the partition function $\int{e^{-\beta H}dqdp}$

```python
k,T=sym.symbols("k,T",positive=True,nonnegative=True,real=True)
xT,pT=sym.symbols("x_T,p_T",real=True)
ham=sym.Rational(1,2)*(pT)**2/m + sym.Rational(1,2)*m*omega0**2*(xT-xeq)**2
beta=1/(k*T)
bolz=sym.exp(-beta*ham)
z=sym.integrate(bolz,(xT,-sym.oo,sym.oo),(pT,-sym.oo,sym.oo))
z
```

Then we can calculate the numerator $\int{A e^{-\beta H}dqdp}$

```python
numx=sym.integrate(xT*bolz,(xT,-sym.oo,sym.oo),(pT,-sym.oo,sym.oo))
numx
```

And now the average value

```python
xaveT=numx/z
xaveT
```

### 3.2 Exercise: Calculate the average momenta ${\left\langle {p} \right\rangle}_T$ for a harmonic oscillator

After calculating the value, explain why you think you got this number

```python
# your code here
nump=sym.integrate(pT*bolz,(xT,-sym.oo,sym.oo),(pT,-sym.oo,sym.oo))
nump
```

```python
paveT=nump/z
paveT
```

### 3.3 Exercise: Calculate the average kinetic energy

The answer you get here is a well-known result related to the energy equipartition theorem

```python
# Your code here
numKE=sym.integrate((pT**2)/(2*m)*bolz,(xT,-sym.oo,sym.oo),(pT,-sym.oo,sym.oo))
numKE
```

```python
KEaveT=numKE/z
KEaveT
```

# Back to the lecture

## 4. Exercise Verlet integrators

In this exercise we will write a routine to solve for the equations of motion for a harmonic oscillator. Plot the positions and momenta (separate plots) of the harmonic oscillator as functions of time. Calculate trajectories using the following methods:

1. Exact solution
2. Simple Taylor series expansion
3. Predictor-corrector method
4. Verlet algorithm
5. Leapfrog algorithm
6. Velocity Verlet algorithm

To get you started, a sketch of the velocity Verlet case appears below.

```python
# Your code here
```
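Here is a minimal velocity Verlet sketch for the harmonic oscillator (the parameter values are chosen here for illustration and are not given in the assignment):

```python
import numpy as np
import matplotlib.pyplot as plt

# velocity Verlet for x'' = -omega0**2 * (x - xeq)
omega0, xeq = 1.0, 0.0    # example parameters (assumed values)
h, nsteps = 0.05, 1000
x, v = 1.0, 0.0           # initial position and velocity

xs = [x]
a = -omega0**2 * (x - xeq)
for _ in range(nsteps):
    x = x + h*v + 0.5*h*h*a           # position update
    a_new = -omega0**2 * (x - xeq)    # acceleration at the new position
    v = v + 0.5*h*(a + a_new)         # velocity update with averaged acceleration
    a = a_new
    xs.append(x)

t = np.arange(nsteps + 1) * h
plt.plot(t, xs, label='velocity Verlet')
plt.plot(t, np.cos(omega0*t), 'k--', label='exact')   # exact solution for these ICs
plt.xlabel('t')
plt.legend()
plt.show()
```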
# Sizing a mosfet using gm/Id method

This is an example you can use to calculate mosfet size in Sky130 for given design parameters. You can change the parameters below and recalculate.

```python
%pylab inline
import numpy as np
from scipy.interpolate import interp1d
import pint
ureg = pint.UnitRegistry()   # convenient unit conversions
```

    Populating the interactive namespace from numpy and matplotlib

First we'll set up the design parameters. The mosfet length and width will need to be one of the bin values for the selected mosfet model.

```python
A_v = np.abs(-2)      # voltage gain at DC
I_d = 0.5 * ureg.mA   # maximum drain current
f_c = 500 * ureg.MHz  # corner (3dB) frequency
C_L = 1 * ureg.pF     # load capacitance

# simulation parameters
sim_L = 0.15 * ureg.um  # target mosfet length
sim_W = 1 * ureg.um     # calculations are independent of width but we need a matching bin value for the initial simulations
sim_Vdd = 1.8 * ureg.V
```

First we calculate the load resistance.

\begin{align} R_L &= \frac{1}{2 \pi f_c C_L} \end{align}

```python
R_L = 1 / (2 * 3.1415 * f_c * C_L)
R_L = R_L.to(ureg.ohms)
print(R_L)
```

    318.3192742320548 ohm

Next we calculate the transconductance.

\begin{align} g_m &= \frac{A_v}{R_L} \end{align}

```python
g_m = A_v / R_L
g_m = g_m.to(ureg.mS)
print(f'gm={g_m}')
```

    gm=6.2829999999999995 millisiemens

Now we need to generate the gm/Id graphs we'll need to determine the remaining values. These can be pre-generated and loaded, or calculated here. We'll load them from an hdf5 file generated with _gen_gm_id_plots.py_.

```python
import h5py
f = h5py.File('gm_id_01v8/sky130_fd_pr__nfet_01v8__data.h5', 'r')
bin_idx = 4   # index of the W=1 L=0.15 bin in the repo data
assert(f['bins'][bin_idx][1] - sim_L.magnitude < 0.00001)
vsweep=f['vsweep'][bin_idx] * ureg.V
gm_id = (f['gm'][bin_idx] * ureg.mS) / (f['id'][bin_idx] * ureg.A)
id_W = (f['id'][bin_idx] * ureg.A / sim_W)
```

We could just look for the $\frac{I_d}{W}$ on the graph, but we've got the data and data interpolation tools, so we can calculate it exactly. We'll figure out the value and plot it on the graph as a visual validation.

```python
i_id_w__gm_id = interp1d(gm_id.magnitude, id_W.magnitude)
id_interp = i_id_w__gm_id(g_m.magnitude) * id_W.units
print(f'Id={id_interp.to(ureg.uA / ureg.um)}')
```

    Id=63.30221617728767 microampere / micrometer

```python
fig = figure()
id_w__gm_id = fig.subplots(1, 1)
id_w__gm_id.plot(gm_id.magnitude, id_W.magnitude)
id_w__gm_id.axes.set_xlabel(f'gm/Id ({gm_id.units})')
id_w__gm_id.axes.set_ylabel(f'Id/W ({id_W.units})')
id_w__gm_id.plot(g_m.magnitude, id_interp.magnitude, 'o', markersize=8)
fig.tight_layout()
```

This allows us to calculate the transistor width.

\begin{align} W &= \frac{I_d}{\frac{I_d}{W}} \end{align}

```python
W = I_d / id_interp
W = W.to(ureg.um)
print(f'W={W}')
```

    W=7.898617618689249 micrometer

Next we determine the gate bias using the same interpolation technique as above.

```python
i_vgg__gm_id = interp1d(gm_id.magnitude, vsweep.magnitude)
vbias_interp = i_vgg__gm_id(g_m.magnitude) * vsweep.units
print(f'Vbias={vbias_interp}')

fig = figure()
gm_id__vgg = fig.subplots(1, 1)
gm_id__vgg.plot(vsweep.magnitude, gm_id.magnitude)
gm_id__vgg.axes.set_xlabel(f'Vgg ({vsweep.units})')
gm_id__vgg.axes.set_ylabel(f'gm/Id ({gm_id.units})')
gm_id__vgg.plot(vbias_interp.magnitude, g_m.magnitude, 'o', markersize=8)
fig.tight_layout()
```
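As a final step (a small addition, not in the original flow), we can gather the computed design point in one place; all names below come from the cells above:

```python
# summary of the computed design point (all quantities defined in earlier cells)
gm_over_id = (g_m / I_d).to(ureg.S / ureg.A)   # the gm/Id operating point implied by the specs
print(f"target gm/Id = {gm_over_id}")
print(f"R_L   = {R_L}")
print(f"W     = {W}")
print(f"Vbias = {vbias_interp}")
```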
# CHEM 1000 - Spring 2022

Prof. Geoffrey Hutchison, University of Pittsburgh

## 9 Probability

Chapter 9 in [*Mathematical Methods for Chemists*](http://sites.bu.edu/straub/mathematical-methods-for-molecular-science/)

(These lecture notes on probability and statistics will include substantial material not found in the text.)

By the end of this session, you should be able to:
- Understand the binomial and multinomial processes
- Compute cumulative chances (e.g., lottery)
- Understand calculating moments from probability distributions
  - Mean, variance, skew, and kurtosis

### Randomness, Probability and Chemistry

A common technique in simulating chemistry and physical behavior is called [Monte Carlo](https://en.wikipedia.org/wiki/Monte_Carlo_method) - essentially using random processes to solve complicated problems. For example, one can randomly sample many possible shapes of a polymer or sample from multiple possible arrangements in a nanoparticle, etc.

I will assume you have some background in probability and statistics and focus only on some key areas.

### Coin Flips and Binomial Distribution

We're generally familiar with *discrete* random numbers, like flipping a coin heads or tails. If we flip a coin once, there's a 1/2 chance of heads or tails. Over multiple events - each one is independent - the probability of a particular number of heads (n) in a total of N flips is:

$$ p(n, N) =\frac{1}{2^{N}} \frac{N !}{n !(N-n) !} $$

For 6 flips, this looks like:

Image from [*Mathematical Methods for Chemists*](http://sites.bu.edu/straub/mathematical-methods-for-molecular-science/)

We can calculate the counts either using the formula above, or via [Pascal's Triangle](https://en.wikipedia.org/wiki/Pascal%27s_triangle)

We can relate coin flips to electron spin or nuclear spin (for NMR), etc. For example, given 3 unpaired electrons, how many arrangements are there?
- Up, Up, Up
- Up, Up, Down
- .. (etc)

### Multinomial Probability

Obviously we don't just flip coins (or electrons), so we also need to consider a multinomial distribution (i.e., rolling a 6-sided die, or a system that can be in multiple equivalent states):

For example, if we roll a six-sided die 5 times, how many ways can we get 2 ones and 3 sixes (1 1 6 6 6):

$$ W(2,0,0,0,0,3 ; 5)=\frac{5 !}{2 ! 0 ! 0 ! 0 ! 0 ! 3 !}=10 $$

(Important to remember that 0! = 1)

### Cumulative Probability

One common real-world probability question is about cumulative chances. My son came to me asking about an iPad game where he can win a prize every time he opens a gift. Each gift has a chance of winning the prize.

So he asks me: if he buys 30 gifts, what's the chance he'll win the super-awesome dragon? That's a cumulative probability - he doesn't care *which* gift gives him the dragon, only that one of the thirty gifts works.

Here's the catch - the game only gives dragons with 1% chance - you're more likely to get other prizes.

While there's a formula, it's really, really easy to compute this with a for() loop:
- what's the total cumulative chance?
- what's the chance we didn't get the prize on the last round

e.g.
- first time through, there's a 1% chance of a win
- second time, there was a 99% chance we didn't win, times the 1% chance he wins on this round = 0.99% chance on this gift
- third time, there's a 98.01% chance we didn't win in the first two rounds, times the 1% chance on the 3rd round
- etc.
```python
# help danny
total = 0.0    # start out with no chance to win the prize
missed = 1.0   # i.e., he doesn't have the prize yet
chance = 0.01  # chance of winning each time he opens the gift

for egg in range(1,31): # remember the loop will go from start to end - 1 = 30
    total = total + chance * missed
    missed = missed * (1.0 - chance)
    print(egg, round(total, 4))
```

Notice that even though there are 30 gifts, his cumulative probability is **not** 30 * 0.01, but lower...

Not surprisingly, he decides a 26% chance of getting a dragon isn't very good and he picks a different game. No dragons, but this new game has a 4% chance of winning each time he plays. How long does he have to go for a good chance of winning a prize?

```python
# is this a better game?
total = 0.0    # start out with no chance to win the prize
missed = 1.0   # i.e., he doesn't have the prize yet
chance = 0.04  # chance of winning each time he opens the gift

for egg in range(1,21): # remember the loop will go from start to end - 1 = 20
    total = total + chance * missed
    missed = missed * (1.0 - chance)
    print(egg, round(total, 4))
```

So it's 17 rounds before we break 50%, and thus clearly better than the first game. Still, the cumulative chance is not N * 0.04...

(Danny decided neither game was worth the money, incidentally.)

### Moments from Distributions

Sometimes people will discuss "[*moments*](https://en.wikipedia.org/wiki/Moment_(mathematics))" of probability distributions or statistical distributions. These are related to the shape of the distribution.

- the "zeroth" moment is the total (e.g., for a probability it should be 1 = 100%)
- the first moment is the [mean](https://en.wikipedia.org/wiki/Expected_value) $\mu$ (i.e., the center or "expected value")
- the second moment is the [variance](https://en.wikipedia.org/wiki/Variance) $\sigma^2$ (i.e., the width)
  - you're probably more familiar with the standard deviation $\sigma$
- the third moment is the [skewness](https://en.wikipedia.org/wiki/Skewness) (i.e., the asymmetry of the distribution)
- the fourth moment, the [kurtosis](https://en.wikipedia.org/wiki/Kurtosis) (i.e., how thin or thick the "tail" of the distribution)

In general, the mean, variance (or standard deviation, which is the square root of the variance) and the skewness are the most useful measures of a distribution.

#### Skewness

Not all distributions are "normal" or symmetric. For example, the number of people waiting for a bus is never negative. Even if I tell you the average is 5 people in the morning, sometimes it's zero (when the bus just arrived) and it's sometimes much higher (right before the bus comes and someone runs to catch it).

Image from Wikipedia: https://commons.wikimedia.org/wiki/File:Negative_and_positive_skew_diagrams_(English).svg

#### Kurtosis

Kurtosis is a measure of whether the data are heavy-tailed or light-tailed relative to a normal distribution. That is, distributions with high kurtosis tend to have many outliers.
This is probably easier to plot:

```python
# let's plot this
import numpy as np
import matplotlib.pyplot as plt

%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.style.use('../lectures/chem1000.mplstyle')
```

```python
# generate a "normal" distribution
mu, sigma = 0, 0.1                          # mean and standard deviation
points = np.random.normal(mu, sigma, 1000)  # 1,000 points from a normal distribution
```

```python
import scipy.stats

print('mean', np.mean(points))
print('variance', np.var(points))
print('skewness', scipy.stats.skew(points))
print('kurtosis', scipy.stats.kurtosis(points))
```

```python
count, bins, ignored = plt.hist(points, 30, density=True)

# add a red line with the perfect curve from a Gaussian distribution
plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) * np.exp( - (bins - mu)**2 / (2 * sigma**2) ),
         linewidth=2, color='r')
plt.show()
```

```python
# here's a different distribution (Cauchy - the same as a Lorentzian peak in spectra)
from scipy.stats import cauchy

mu, sigma = 0, 0.1  # mean and standard deviation
points = cauchy.rvs(mu, sigma, 1000)

print('mean', np.mean(points))
print('variance', np.var(points))
print('skew', scipy.stats.skew(points))
print('kurtosis', scipy.stats.kurtosis(points))
```

```python
count, bins, ignored = plt.hist(points, 1000, density=True)

# add a red line with the perfect curve from a Gaussian distribution
plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) * np.exp( - (bins - mu)**2 / (2 * sigma**2) ),
         linewidth=2, color='r')
plt.xlim(-0.5, 0.5)
plt.show()
```

Notice that there's less in the middle and more in the tails? That's **kurtosis**.

Can we generate some skew? Yes, there are many kinds of distributions, including intentionally skewed distributions

```python
from scipy.stats import skewnorm

mu, sigma = 0, 0.1  # mean and standard deviation
asym = 4            # skew parameter
points = skewnorm.rvs(asym, mu, sigma, 1000)

print('mean', np.mean(points))
print('variance', np.var(points))
print('skew', scipy.stats.skew(points))
print('kurtosis', scipy.stats.kurtosis(points))
```

```python
count, bins, ignored = plt.hist(points, 30, density=True)

# add a red line with the perfect curve from a Gaussian distribution
plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) * np.exp( - (bins - mu)**2 / (2 * sigma**2) ),
         linewidth=2, color='r')
plt.show()
```

### Common Probability Distributions

The following are all continuous distributions:
- Uniform random (`scipy.stats.uniform`)
- Gaussian / normal (`scipy.stats.norm`)
- Cauchy (Lorentzian) (`scipy.stats.cauchy`)
- Exponential (`scipy.stats.expon`)
  - example: exponential decay of radioactive elements

The [Poisson distribution](https://en.wikipedia.org/wiki/Poisson_distribution) and [binomial distribution](https://en.wikipedia.org/wiki/Binomial_distribution) are discrete probability distributions (e.g., 5 people at the bus stop, or 6 heads when flipping coins).

### Calculating Mean, Variance, etc. for a Probability Distribution

When we have discrete data, it's easy to calculate a mean. We add up the values and divide by the count. However, it's not so hard when we have a continuous probability distribution either.

Consider an equivalent way to calculate the mean - it's the value times the probability (e.g., a weighted mean):

$$ \bar{x}=\sum_{k=1}^{n} x_{k} p_{k} $$

In other words, we add up the values times the probability of occurring. With a continuous probability distribution, we "just" change the sum to an integral.
$$ \sum_{k} p_{k} \rightarrow \int p(x) d x $$

So now if we want to calculate the mean of a probability distribution, we need:

$$ \int x p(x) dx $$

For example:

$$ \bar{x}=\int_{a}^{b} x p(x) d x $$

Similarly, if we want the average of $x^2$ we can use:

$$ \overline{x^{2}}=\int_{a}^{b} x^{2} p(x) d x $$

To calculate the variance, we need:

$$ \sigma_{x}^{2}=\overline{(x-\bar{x})^{2}}=\overline{x^{2}}-(\bar{x})^{2} $$

That might look confusing, but it just means we want the difference between:
- the average of $x^2$, and
- the average of $x$, squared.

Below I've taken the integrals for the particle in a box, e.g.:

$$ p(x) = \psi^*\psi = \frac{2}{L} \sin^2 (\frac{n \pi x}{L}) $$

```python
from sympy import init_session
init_session()
```

```python
L = symbols('L')

f = 2*x * sin(n*pi*x/L)**2/L
simplify(integrate(f, (x, 0, L)))
```

```python
x_sq = 2*x**2*sin(n*pi*x/L)**2 / L
simplify(integrate(x_sq, (x, 0, L)))
```

```python
# variance = average(x**2) - average(x)**2
var = integrate(x_sq, (x, 0, L)) - integrate(f, (x, 0, L))**2
simplify(var)
```

-------

This notebook is from Prof. Geoffrey Hutchison, University of Pittsburgh https://github.com/ghutchis/chem1000

<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"></a>
# **[HW6] DCGAN**

1. DataLoader
2. Model
3. Inception Score
4. Trainer
5. Train

In this exercise we will implement a convolution-based Generative Adversarial Network and use it to generate images ourselves.

- dataset: CIFAR-10 (https://www.cs.toronto.edu/~kriz/cifar.html)
- model: DCGAN (https://arxiv.org/abs/1511.06434)
- evaluation: Inception Score (https://arxiv.org/abs/1801.01973)

## Import packages

Change the runtime type: in the top menu, select [Runtime] -> [Change runtime type] -> [Hardware accelerator] -> [GPU]. After switching, running the cell below should print True for torch.cuda.is_available().

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torch.optim as optim

print(torch.__version__)
print(torch.cuda.is_available())
```

```python
import matplotlib.pyplot as plt
import numpy as np
import scipy as sp
import tqdm
import os
import random
import time
import datetime

from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split

# for reproducibility
np.set_printoptions(precision=3)
np.set_printoptions(suppress=True)

random.seed(1234)
torch.manual_seed(1234)
np.random.seed(1234)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
```

# 1. DataLoader

As in the previous exercises, we will build dataloaders from the pre-defined CIFAR-10 dataset.

```python
from PIL import Image
from torch.utils import data
import torchvision
import torchvision.transforms as transforms

def create_dataloader(batch_size=64, num_workers=1):
    transform = transforms.Compose([transforms.ToTensor(),
                                    transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))])

    trainset = torchvision.datasets.CIFAR10(root='./data/', train=True, transform=transform, download=True)
    testset = torchvision.datasets.CIFAR10(root='./data/', train=False, transform=transform, download=True)

    trainloader = data.DataLoader(dataset=trainset, batch_size=batch_size, shuffle=True, num_workers=num_workers)
    testloader = data.DataLoader(dataset=testset, batch_size=batch_size, shuffle=False, num_workers=num_workers)

    return trainloader, testloader
```

# 2. Model

In this section we will implement the DCGAN architecture ourselves.

Before building the model, let's review the overall structure of a GAN. A GAN consists of a Generator and a Discriminator: the Generator takes a random latent vector and produces fake images intended to fool the Discriminator, while the Discriminator is trained to distinguish real images from fake ones.

DCGAN modifies the Generator and Discriminator to use convolution layers, which are effective at processing image data. The structures of the DCGAN Generator and Discriminator are as follows.

Since the Generator must perform convolutions that grow the width and height of its output, it increases the output size using the operation known as deconvolution, or transposed convolution, rather than the standard convolution operation. Conversely, the Discriminator uses an architecture symmetric to the Generator, with standard convolutions, to perform classification.

Transpose Convolution: (https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html)

## Convolution Block

First, to make the models easy to implement, we define the convolution and deconvolution blocks that the Generator and Discriminator will use repeatedly.

```python
def conv(c_in, c_out, k_size, stride=2, pad=1, bias=False, norm='bn', activation=None):
    layers = []

    # Conv.
    layers.append(nn.Conv2d(c_in, c_out, k_size, stride, pad, bias=bias))

    # Normalization
    if norm == 'bn':
        layers.append(nn.BatchNorm2d(c_out))
    elif norm == None:
        pass

    # Activation
    if activation == 'lrelu':
        layers.append(nn.LeakyReLU(0.2))
    elif activation == 'relu':
        layers.append(nn.ReLU())
    elif activation == 'tanh':
        layers.append(nn.Tanh())
    elif activation == 'sigmoid':
        layers.append(nn.Sigmoid())
    elif activation == None:
        pass

    return nn.Sequential(*layers)

def deconv(c_in, c_out, k_size, stride=2, pad=1, output_padding=0, bias=False, norm='bn', activation=None):
    layers = []

    # Deconv.
    layers.append(nn.ConvTranspose2d(c_in, c_out, k_size, stride, pad, output_padding, bias=bias))

    # Normalization
    if norm == 'bn':
        layers.append(nn.BatchNorm2d(c_out))
    elif norm == None:
        pass

    # Activation
    if activation == 'lrelu':
        layers.append(nn.LeakyReLU(0.2))
    elif activation == 'relu':
        layers.append(nn.ReLU())
    elif activation == 'tanh':
        layers.append(nn.Tanh())
    elif activation == 'sigmoid':
        layers.append(nn.Sigmoid())
    elif activation == None:
        pass

    return nn.Sequential(*layers)
```

## Generator

Now let's implement the DCGAN Generator using the deconv block defined above.

```python
class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()
        model = []

        ### DCGAN Generator
        # You have to implement 4-layers generator.
        # Note: Recommend to use 'deconv' function
        ### YOUR CODE HERE (~ 4 lines)

        ### END YOUR CODE

        self.model = nn.Sequential(*model)

    def forward(self, z):
        # Input (z) size : [Batch, 256, 1, 1]
        # Output (Image) size : [Batch, 3, 32, 32]
        z = z.view(z.size(0), z.size(1), 1, 1)
        output = self.model(z)
        return output
```

## Discriminator

Now let's implement the DCGAN Discriminator using the conv block defined above.

```python
class Discriminator(nn.Module):
    def __init__(self):
        super(Discriminator, self).__init__()
        model = []

        ### DCGAN Discriminator
        # You have to implement 4-layers discriminator.
        # Note: Recommend to use 'conv' function
        ### YOUR CODE HERE (~ 4 lines)

        ### END YOUR CODE

        self.model = nn.Sequential(*model)

    def forward(self, x: torch.Tensor):
        # Input (x) size : [Batch, 3, 32, 32]
        # Output (probability) size : [Batch, 1]
        output = self.model(x).squeeze()
        return output
```

## Implementation Test

Now let's test whether the Generator and Discriminator are implemented correctly. Upload the two files provided with this code

- sanity_check_dcgan_netG.pth
- sanity_check_dcgan_netD.pth

via [Files] -> [Upload to session storage] in the top-left menu, then run the code below. If it passes, you're done.

```python
def test_model():
    print("=====Model Initializer Test Case======")
    netG = Generator()

    # the first test
    try:
        netG.load_state_dict(torch.load("sanity_check_dcgan_netG.pth", map_location='cpu'))
    except Exception as e:
        print("Your DCGAN generator initializer is wrong. Check the comments in details and implement the model precisely.")
        raise e
    print("The first test passed!")

    # the second test
    netD = Discriminator()
    try:
        netD.load_state_dict(torch.load("sanity_check_dcgan_netD.pth", map_location='cpu'))
    except Exception as e:
        print("Your DCGAN discriminator initializer is wrong. Check the comments in details and implement the model precisely.")
        raise e
    print("The second test passed!")

    print("All 2 tests passed!")

test_model()
```

# 3. Inception Score

Although we now have the dataloader and the model, one more ingredient is needed before training, unlike in supervised learning. In supervised settings we could monitor training through the loss or validation accuracy; in a GAN, however, even when the generator fools the discriminator well (i.e., achieves a low loss), it will produce low-quality images if the discriminator itself has not been trained enough.

There are two main axes along which image quality is measured:

1. **Fidelity**: how high-quality are the generated images?
2. **Diversity**: how varied are the generated images (e.g., the model does not generate only cats)?

Fidelity is usually measured with the **Frechet Inception Distance** metric, while Diversity is measured with the **Inception Score** evaluation metric. In this exercise we will use the Inception Score, which measures the diversity of the generated images, to monitor whether training is proceeding well.

The Inception Score is computed as follows.

1. Generate N images with the Generator.
2. Pass the generated images through a pre-trained inception network (= GoogLeNet).
3. Measure how diverse the average of the label probabilities that the inception network predicts for the generated images is.

If you are curious about the details of the Inception Score, see:
- https://arxiv.org/abs/1801.01973
- https://cyc1am3n.github.io/2020/03/01/is_fid.html

```python
from torchvision.models.inception import inception_v3
from scipy.stats import entropy

class Inception_Score():
    def __init__(self, dataset):
        self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

        # Dataset & DataLoader
        self.N = len(dataset)
        self.batch_size = 64
        self.dataset = dataset
        self.dataloader = data.DataLoader(dataset=dataset, batch_size=self.batch_size, num_workers=1)
        self.transform = nn.Upsample(size=(299, 299), mode='bilinear').to(self.device)

        # Inception Model
        self.inception_model = inception_v3(pretrained=True, transform_input=False).to(self.device)
        self.inception_model.eval()

    def get_pred(self, x):
        with torch.no_grad():
            x = self.transform(x)
            x = self.inception_model(x)
        return F.softmax(x, dim=1).data.cpu().numpy()

    def compute_score(self, splits=1):
        preds = np.zeros((self.N, 1000))
        for i, batch in tqdm.tqdm(enumerate(self.dataloader)):
            batch = batch.to(self.device)
            batch_size_i = batch.size(0)
            preds[i * self.batch_size : i * self.batch_size + batch_size_i] = self.get_pred(batch)

        # Compute the mean KL-divergence
        # You have to calculate the inception score.
        # The logit values from inception model are already stored in 'preds'.
        inception_score = 0.0

        split_scores = []
        for k in tqdm.tqdm(range(splits)):
            part = preds[k * (self.N // splits): (k + 1) * (self.N // splits), :]
            py = np.mean(part, axis=0)
            scores = []
            for i in range(part.shape[0]):
                pyx = part[i, :]
                scores.append(entropy(pyx, py))
            split_scores.append(np.exp(np.mean(scores)))

        inception_score = np.mean(split_scores)
        return inception_score
```

```python
def test_inception_score():
    print("======Inception Score Test Case======")

    # CIFAR10 Dataset without Label
    class CIFAR10woLabel():
        def __init__(self):
            transform = transforms.Compose([transforms.ToTensor(),
                                            transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))])
            self.dataset = torchvision.datasets.CIFAR10(root='./data/', download=True, transform=transform)

        def __getitem__(self, index):
            return self.dataset[index][0]

        def __len__(self):
            return len(self.dataset)

    print("Calculating Inception Score...")
    Inception = Inception_Score(CIFAR10woLabel())
    score = Inception.compute_score(splits=1)

    assert np.allclose(score, 9.719672, atol=1e-3), \
        "Your inception score does not match expected result."
    print("All test passed!")

test_inception_score()
```

# 4. Trainer

Now let's implement a Trainer that uses the dataloader, model, and evaluator declared above to train the GAN.

## Preliminary

\begin{equation} D_{\theta}: \text{Discriminator network}\\ G_{\phi}: \text{Generator network}\\ x: \text{real_image} \\ z: \text{latent_vector} \\ \end{equation}

## Discriminator Loss

\begin{equation} \mathcal{L}_{D_{\theta}} = -E_{x \sim p_{data}}[\log D_{\theta}(x)] - E_{z}[\log(1 - D_{\theta}(G_{\phi}(z)))] \end{equation}

With this loss, as shown above, the discriminator is trained to classify real images as 1 and generated images as 0.
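As a sketch of how this maps onto PyTorch (one possible form, not the graded solution), the discriminator update can be written with `nn.BCELoss`, since the discriminator ends in a sigmoid:

```python
# a minimal sketch of the discriminator loss: -E[log D(x)] - E[log(1 - D(G(z)))]
bce = nn.BCELoss()

def discriminator_loss(D, G, real_img, z, real_label, fake_label):
    loss_real = bce(D(real_img), real_label)        # push D(x) toward 1
    loss_fake = bce(D(G(z).detach()), fake_label)   # push D(G(z)) toward 0; detach() keeps G fixed
    return loss_real + loss_fake
```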
## Generator Loss

In theory, the generator loss can be obtained by taking the part of the discriminator loss in which the generator appears and multiplying it by $-1$:

\begin{equation}
\mathcal{L}_{G_{\phi}} = E_{z}[\log(1-D_{\theta}(G_{\phi}(z)))] \tag{1}
\end{equation}

However, when training with the equation above, the generator does not learn well.

```python
plt.title('log(1-D(G(z)))')
x = np.arange(0, 1.0, 0.01)
y = np.log(1-x)
plt.xlabel('D(G(z))')
plt.plot(x,y)
```

As the loss plot above shows, the generator attains a lower loss the more successfully it fools the discriminator, i.e. as $D_{\theta}(G_{\phi}(z)) \approx 1$. However, given how difficult image generation is, it is obvious that early in training the generator performs worse than the discriminator. Since the slope of the function near $D_{\theta}(G_{\phi}(z)) \approx 0$ is very small, the generator fails to receive a sufficient learning signal early in training. We therefore define a different loss function with an intuitively similar meaning:

\begin{equation}
\mathcal{L}_{G_{\phi}} = -E_{z}[\log(D_{\theta}(G_{\phi}(z)))] \tag{2}
\end{equation}

```python
plt.title('-log(D(G(z)))')
x = np.arange(0, 1.0, 0.01)
y = -np.log(x)
plt.xlabel('D(G(z))')
plt.plot(x,y)
```

As this loss plot shows, the generator still attains a lower loss the more successfully it fools the discriminator ($D_{\theta}(G_{\phi}(z)) \approx 1$). Unlike before, however, the gradient near $D_{\theta}(G_{\phi}(z)) \approx 0$ is large, so the generator receives a sufficient learning signal precisely when it fails to generate convincing images early in training. In this assignment we will therefore use the second equation as the generator loss.

```python
# Utility Functions
def denorm(x):
    out = (x + 1) / 2
    return out.clamp(0, 1)

def save_checkpoint(model, save_path, device):
    if not os.path.exists(os.path.dirname(save_path)):
        os.makedirs(os.path.dirname(save_path))
    torch.save(model.cpu().state_dict(), save_path)
    model.to(device)

def load_checkpoint(model, checkpoint_path, device):
    if not os.path.exists(checkpoint_path):
        print("Invalid path!")
        return
    model.load_state_dict(torch.load(checkpoint_path))
    model.to(device)

class FolderDataset(data.Dataset):
    def __init__(self, folder):
        self.folder = folder
        self.image_list = os.listdir(folder)
        self.transform = transforms.Compose([transforms.ToTensor(),
                                             transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))])

    def __getitem__(self, index):
        image = Image.open(os.path.join(self.folder, self.image_list[index]))
        return self.transform(image)

    def __len__(self):
        return len(self.image_list)

# Trainer
class Trainer():
    def __init__(self, trainloader, testloader, generator, discriminator, criterion, g_optimizer, d_optimizer, device):
        """
        trainloader: train data's loader
        testloader: test data's loader
        generator: generator
        discriminator: discriminator
        criterion: loss function to evaluate the model (e.g., BCE Loss)
        g_optimizer: optimizer for the generator
        d_optimizer: optimizer for the discriminator
        """
        self.trainloader = trainloader
        self.testloader = testloader
        self.G = generator
        self.D = discriminator
        self.criterion = criterion
        self.g_optimizer = g_optimizer
        self.d_optimizer = d_optimizer
        self.device = device

        # Make directories to save the images & models for a specific checkpoint
        os.makedirs(os.path.join('./results/', 'images'), exist_ok=True)
        os.makedirs(os.path.join('./results/', 'checkpoints'), exist_ok=True)
        os.makedirs(os.path.join('./results/', 'evaluation'), exist_ok=True)

    def train(self, epochs = 1):
        self.G.to(self.device)
        self.D.to(self.device)

        start_time = time.time()
        for epoch in range(epochs):
            for iter, (real_img, _) in enumerate(self.trainloader):
                self.G.train()
                self.D.train()

                batch_size = real_img.size(0)
                real_label = torch.ones(batch_size).to(self.device)
                fake_label = torch.zeros(batch_size).to(self.device)

                # get a real CIFAR-10 image batch
                real_img = real_img.to(self.device)

                # initialize a latent vector to feed into
                # the Generator
                z = torch.randn(real_img.size(0), 256).to(self.device)

                ##########################################################################################
                # Implement the Discriminator loss.                                                      #
                # Note: the discriminator loss must not affect the parameters of the generator network.  #
                # See the detach() function:                                                             #
                # https://pytorch.org/docs/stable/generated/torch.Tensor.detach.html                     #
                ##########################################################################################
                D_loss: torch.Tensor = None
                ### YOUR CODE HERE (~ 4 lines)

                ### END YOUR CODE

                # TEST CODE
                # (Passing the test does not guarantee a correct implementation.
                #  Typically the loss should be between 1.38 and 1.45.)
                if epoch == 0 and iter == 0:
                    assert D_loss.detach().allclose(torch.tensor(1.4000), atol=2e-1), \
                        f"Discriminator Loss of the model does not match expected result."
                    print("==Discriminator loss function test passed!==")

                self.D.zero_grad()
                D_loss.backward()
                self.d_optimizer.step()

                #######################################################
                # Implement the Generator loss.                       #
                # Note: use the second equation defined above.        #
                #######################################################
                G_loss: torch.Tensor = None
                ### YOUR CODE HERE (~ 3 lines)

                ### END YOUR CODE

                # TEST CODE
                # (Passing the test does not guarantee a correct implementation.
                #  Typically the loss should be between 1.35 and 1.52.)
                if epoch == 0 and iter == 0:
                    assert G_loss.detach().allclose(torch.tensor(1.5), atol=2e-1), \
                        f"Generator Loss of the model does not match expected result."
                    print("==Generator loss function test passed!==")

                self.G.zero_grad()
                G_loss.backward()
                self.g_optimizer.step()

            # verbose
            end_time = time.time() - start_time
            end_time = str(datetime.timedelta(seconds=end_time))[:-7]
            print('Time [%s], Epoch [%d/%d], lossD: %.4f, lossG: %.4f'
                  % (end_time, epoch+1, epochs, D_loss.item(), G_loss.item()))

            # Save Images
            # (fake_img is expected to be created inside your generator-loss code above)
            fake_img = fake_img.reshape(fake_img.size(0), 3, 32, 32)
            torchvision.utils.save_image(denorm(fake_img), os.path.join('./results/', 'images', 'fake_image-{:03d}.png'.format(epoch+1)))

            if epoch % 10 == 0:
                self.test()

        # Save Checkpoints
        save_checkpoint(self.G, os.path.join('./results', 'checkpoints', 'G_final.pth'), self.device)
        save_checkpoint(self.D, os.path.join('./results', 'checkpoints', 'D_final.pth'), self.device)

    def test(self):
        print('Start computing Inception Score')
        self.G.eval()
        with torch.no_grad():
            for iter in tqdm.tqdm(range(5000)):
                z = torch.randn(1, 256).to(self.device)
                fake_img = self.G(z)
                torchvision.utils.save_image(denorm(fake_img), os.path.join('./results/', 'evaluation', 'fake_image-{:03d}.png'.format(iter)))

        # Compute the Inception score
        dataset = FolderDataset(folder = os.path.join('./results/', 'evaluation'))
        Inception = Inception_Score(dataset)
        score = Inception.compute_score(splits=1)
        print('Inception Score : ', score)
```

### Train

Now let us run the training. As training progresses, you can inspect the images produced by the generator for each epoch under [File] -> [results] -> [images].

```python
lr = 2e-4

trainloader, testloader = create_dataloader()
G = Generator()
D = Discriminator()
criterion = nn.BCELoss()
g_optimizer = optim.Adam(G.parameters(), lr=lr, betas=(0.5, 0.999))
d_optimizer = optim.Adam(D.parameters(), lr=lr, betas=(0.5, 0.999))
device = torch.device('cuda')

trainer = Trainer(trainloader=trainloader, testloader=testloader,
                  generator=G, discriminator=D, criterion=criterion,
                  g_optimizer=g_optimizer, d_optimizer=d_optimizer, device=device)
trainer.train(epochs=50)
```

```python

```
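For reference, here is a minimal, self-contained sketch of the non-saturating generator loss from Eq. (2) in isolation (illustrative made-up numbers only; this is not presented as the solution to the assignment blank above):

```python
import torch
from torch import nn

criterion = nn.BCELoss()

# Hypothetical discriminator scores on a batch of generated images, D(G(z)).
d_fake = torch.tensor([0.1, 0.2, 0.3, 0.05])
real_label = torch.ones(4)

# Eq. (2): L_G = -E[log D(G(z))], i.e. BCE of D(G(z)) against the *real* label.
g_loss = criterion(d_fake, real_label)
print(g_loss)  # large here, because this discriminator currently rejects the fakes
```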
43bb04f0c8dd862a6d5548827c328e58e74d3d02
25,331
ipynb
Jupyter Notebook
Curriculum/03_Machine Learning/[HW6]DCGAN.ipynb
ohikendoit/Goorm-KAIST-NLP
83b13a8599fd1588e99ef8513255a7058c482f0b
[ "MIT" ]
null
null
null
Curriculum/03_Machine Learning/[HW6]DCGAN.ipynb
ohikendoit/Goorm-KAIST-NLP
83b13a8599fd1588e99ef8513255a7058c482f0b
[ "MIT" ]
null
null
null
Curriculum/03_Machine Learning/[HW6]DCGAN.ipynb
ohikendoit/Goorm-KAIST-NLP
83b13a8599fd1588e99ef8513255a7058c482f0b
[ "MIT" ]
null
null
null
25,331
25,331
0.591962
true
5,966
Qwen/Qwen-72B
1. YES 2. YES
0.763484
0.654895
0.500001
__label__kor_Hang
0.955005
0
# Exercise 7) Learning and Planning

In this exercise, we will again investigate the inverted pendulum from the `gym` environment. We want to check which benefits the implementation of planning offers. Please note that the parameter $n$ has a different meaning in the context of planning (number of planning steps per actual step) than in the context of n-step learning.

```python
import numpy as np
import gym
gym.logger.set_level(40)
import random
import time
from tqdm.notebook import tqdm
import matplotlib.pyplot as plt
plt.style.use('seaborn-talk')
```

We will reuse the discretization routine from the previous exercise:

```python
d_T = 15
d_theta = 15
d_omega = 15

def discretize_state(states):
    limits = [1, 1, 8]
    nb_disc_intervals = [d_theta, d_theta, d_omega]

    # bring to value range [-1, 1]
    norm_states = [state / limit for state, limit in zip(states, limits)]
    interval_lengths = [2 / d for d in nb_disc_intervals]
    disc_state = [(norm_state + 1) // interval_length for norm_state, interval_length in zip(norm_states, interval_lengths)]
    disc_state = [(state - 1) if state == d else state for state, d in zip(disc_state, nb_disc_intervals)] # ensure that disc_state < d
    return np.array(disc_state)

def continualize_action(disc_action):
    limit = 2
    interval_length = 2 / (d_T-1)
    norm_action = disc_action * interval_length
    cont_action = (norm_action - 1) * limit
    return np.array(cont_action).flatten()
```

## 1) Dyna-Q

Write a Dyna-Q algorithm to solve the inverted pendulum. Check the quality of the result for different numbers of episodes, numbers of steps per episode, and numbers of planning steps per interaction.

Make sure that the total number of learning steps stays the same for different $n$, such that comparisons are fair:

$\text{episodes} \cdot \text{steps} \cdot (1+n) = \text{const.}$

Interesting metrics for a comparison could be, e.g., the execution time (the tqdm loading bar shows the execution time of loops; alternatively, you can use the time.time() command to get the current system time in seconds) and training stability.

## Solution 1)

The solution code is given below, after a quick sanity check of the discretization helpers.
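This sanity check is an added illustration, not part of the original exercise; with all discretization constants set to 15, the boundary values map as follows:

```python
# Discretize a state lying on the limits of the observation ranges.
print(discretize_state([1.0, -1.0, 8.0]))               # -> [14.  0. 14.]
# Map the extreme discrete actions back to torques in [-2, 2] Nm.
print(continualize_action(0), continualize_action(14))  # -> [-2.] [2.]
```

With that out of the way, here is the Dyna-Q implementation: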
```python
def pendulumDynaQ(alpha, gamma, epsilon, n, nb_episodes, nb_steps):

    env = gym.make('Pendulum-v0')
    env = env.unwrapped

    action_values = np.zeros([d_theta, d_theta, d_omega, d_T])
    pi = np.zeros([d_theta, d_theta, d_omega])
    model = {} # dictionary

    cumulative_reward_history = [] # we can use this to figure out how well the learning worked

    for j in tqdm(range(nb_episodes), position=0, leave=True):

        ### BEGIN SOLUTION
        rewards = [] # this time, this list is only for monitoring
        state = env.reset() # initialize x_0
        state = tuple(discretize_state(state).astype(int))

        for k in range(nb_steps):
            # sample experiences for the model
            if np.random.uniform(0, 1) < epsilon:
                action = np.random.choice(d_T) # explorative action
            else:
                action = pi[state].astype(int) # exploitative action

            cont_action = continualize_action(action)
            next_state, reward, done, _ = env.step(cont_action)
            next_state = tuple(discretize_state(next_state).astype(int))

            # learn from the momentary experience
            action_values[state][action] += alpha * (reward
                                                     + gamma * np.max(action_values[next_state])
                                                     - action_values[state][action])
            pi[state] = np.argmax(action_values[state])

            model[state, action] = (next_state, reward) # update the model according to the latest experience

            state = next_state
            rewards.append(reward)

            # learn from the model, IMPORTANT: USE DIFFERENT VARIABLES HERE
            for i in range(n):
                # sample a random key from the dict: this state-action combination has surely been taken in the past
                # (list() keeps this working on newer Python versions, where sampling from a dict view is not allowed)
                x, u = random.sample(list(model.keys()), 1)[0]
                x_1, r = model[x, u]

                action_values[x][u] += alpha * (r
                                                + gamma * np.max(action_values[x_1])
                                                - action_values[x][u])
                pi[x] = np.argmax(action_values[x])

            if done:
                break

        cumulative_reward_history.append(np.sum(rewards))

    env.close()
    ### END SOLUTION

    return cumulative_reward_history, pi
```

Function to evaluate and render the measurement using the gym environment:

```python
def experiment(pi, nb_steps = 300):
    # Runs the inverted pendulum experiment using policy pi for nb_steps steps

    env = gym.make('Pendulum-v0')
    env = env.unwrapped

    state = env.reset() # initialize x_0
    disc_state = tuple(discretize_state(state).astype(int)) # only tuples of integers can be used as index
    disc_action = pi[disc_state].astype(int)

    for k in range(nb_steps):
        cont_action = continualize_action(disc_action)
        env.render() # comment out for faster execution
        state, reward, done, _ = env.step(cont_action)
        disc_state = tuple(discretize_state(state).astype(int))

        if done:
            break

        disc_action = pi[disc_state].astype(int) # exploitative action

    env.close()
```

Let's use nb_episodes = 5000, nb_steps = 500, n = 0 as a first try. This is effectively Q-learning.

\begin{align}
5000 \cdot 500 \cdot (0+1) &= 2.5 \cdot 10^6
\end{align}

The resulting policy is satisfactory.

```python
### train
print("Run without planning")
no_planning_history, no_planning_pi = pendulumDynaQ(alpha = 0.1, gamma = 0.9, epsilon = 0.1, n = 0, nb_episodes = 5000, nb_steps = 500)
```

    Run without planning

```python
### run and render the experiment
experiment(no_planning_pi, nb_steps = 300)
```

Now let's try $n=9$ with the same nb_steps = 500:

\begin{align}
\text{nb\_episodes} &= \frac{2.5 \cdot 10^6}{500 \cdot (9+1)} = 500
\end{align}

The resulting policy also looks good.
```python ### train print("Run with planning") with_planning_history, with_planning_pi = pendulumDynaQ(alpha = 0.1, gamma = 0.9, epsilon = 0.1, n = 9, nb_episodes = 500, nb_steps = 500) ``` Run with planning HBox(children=(FloatProgress(value=0.0, max=500.0), HTML(value=''))) ```python ### run and render the experiment experiment(with_planning_pi,nb_steps = 300) ``` Now lets compare the cumulative rewards: ```python plt.plot(no_planning_history) plt.plot(with_planning_history) plt.xlabel("episode") plt.ylabel(r"$\sum R$") plt.show() ``` The cumulative reward over the episodes both seems to have high variance. So why should we prefer the planning method? ### Planning leads to the agent interacting less often with the "real" environment, such that in the end fewer interaction time is needed. ## 2) Simulation-based planning Although it can be useful for small state spaces, building a system model by storing large amounts of state transitions like in task (1) is rarely feasible in engineering. As engineers, we are capable of a more efficient way of system modeling that we can utilize here: differential equations. Using a state-space model allows to efficiently integrate existing pre-knowledge into the Dyna-Q algorithm we already used. To do so, write a class `pendulum_model` that implements a model of the pendulum. This class should work similar to `gym`: it should at least have a `step` and a `reset` method. In the step method, make use of forward Euler integration to simulate the system dynamics. In the reset method, allow to pass an optional initial state to the model, such that we can easily compare model and environment. If no initial state is passed to the `reset` function, a random initial state should be determined. Integrate this model into a Dyna-Q algorithm. Model of the pendulum in differential-equation form for change of the angular frequency $\omega$ and the angle $\theta$ depending on the torque $T_\mathrm{u}$: \begin{align} \dot{\omega} &= -\frac{3 g}{2 l} \text{sin}(\theta +\pi) + \frac{1}{J} T_\mathrm{u} \\ \dot{\theta} &= \omega \end{align} Parameters (gravity constant $g$, mass $m$, length $l$ and intertia $J$ of the pendulum): \begin{align} g&=10 \, \frac{\text{m}}{\text{s}^2} & m&=1 \, \text{kg} & l&=1 \, \text{m} & J&=\frac{1}{3} m l^2 \end{align} Forward Euler integration: \begin{align} \dot{x}(k T_S) \approx \frac{x[k+1] - x[k]}{T_S} \end{align} with sampling time $T_S = 0.05 \, \text{s}$ Reward function: \begin{align} r_{k+1} = -(\theta^2[k] + 0.1 \, \text{s}^2 \cdot \omega^2[k] + 0.001 \frac{1}{(\text{N}\text{m})^2} \cdot T_\mathrm{u}^2[k]) \end{align} Limitations of state and action space: \begin{align} \theta &\in [-\pi, \pi] & \omega &\in [-8 \, \frac{1}{\text{s}}, 8 \, \frac{1}{\text{s}}] & T_\mathrm{u} &\in [-2 \, \text{N}\text{m}, 2 \, \text{N}\text{m}] \end{align} And of course input and output space: \begin{align} \text{action}&=T_\mathrm{u} & \text{state}&= \begin{bmatrix} \text{cos}(\theta)\\ \text{sin}(\theta)\\ \omega \end{bmatrix} \end{align} ## Solution 2) Model-based planning does not necessarily run faster than experience-based planning. However, experience-based planning fails to cover the whole state space especially in the earlier episodes when there are too few experiences. On the other hand, model-based planning can, of course, only be performed if a state-space model with accurate parametrization is available. 
In order to overcome parametric deviations between the state-space model and the environment, one could even use measurements from the actual environment to identify the parameters of the model at runtime.

```python
class pendulum_model:
    def __init__(self, dt=0.05, m=1, g=10, l=1):
        ### BEGIN SOLUTION
        self.max_speed = 8
        self.max_torque = 2
        self.dt = dt  # sampling time in s
        self.g = g    # gravity in m / s^2
        self.m = m    # mass in kg
        self.l = l    # length in m
        self.J = 1 / 3 * m * l ** 2  # the pendulum's moment of inertia in kg * m^2
        ### END SOLUTION

    def reset(self, state=None):
        ### BEGIN SOLUTION
        # if no state is given, set a random one
        # (an identity check avoids numpy's elementwise comparison of arrays against None)
        if state is None:
            self.theta = np.random.uniform(-np.pi, +np.pi)
            self.omega = np.random.uniform(-self.max_speed, +self.max_speed)
        # else set the initial state as given
        else:
            self.theta = np.arctan2(state[1], state[0])
            self.omega = state[2]

        state = np.array([np.cos(self.theta), np.sin(self.theta), self.omega])
        ### END SOLUTION
        return state

    def step(self, T_u):
        ### BEGIN SOLUTION
        T_u = np.clip(T_u, -self.max_torque, +self.max_torque)[0]
        reward = -(self.angle_normalize(self.theta) ** 2 + 0.1 * self.omega ** 2 + 0.001 * (T_u ** 2))

        # differential equations for the state values
        self.omega = self.omega + self.dt * (-3 * self.g / (2 * self.l) * np.sin(self.theta + np.pi) + 1 / self.J * T_u)
        self.theta = self.theta + self.dt * self.omega
        self.omega = np.clip(self.omega, -self.max_speed, +self.max_speed)

        state = np.array([np.cos(self.theta), np.sin(self.theta), self.omega])
        ### END SOLUTION
        return state, reward

    def angle_normalize(self, theta):
        # usage of this helper function is optional
        return (((theta + np.pi) % (2 * np.pi)) - np.pi)
```

The following cell is for debugging the `pendulum_model` class.

```python
env = gym.make('Pendulum-v0')
env = env.unwrapped # removes a built-in time limit of k_T = 200, we want to determine the time limit ourselves
model = pendulum_model()

state = env.reset()
print(state)
m_state = model.reset(state) # model is set to the state of env

for _ in range(10000):
    print(f"state: {state}")
    print(f"model: {m_state}")
    print()

    action = env.action_space.sample()

    state, reward, done, _ = env.step(action) # take action on env
    m_state, m_reward = model.step(action)    # take the same action on the model

    print(f"reward difference: {reward - m_reward}, env_reward: {reward}, model_reward: {m_reward}")

env.close()
```

Using $-\mathrm{sin}(\theta)$ instead of $\mathrm{sin}(\theta +\pi)$ makes no difference assuming analytical precision, but due to numeric errors these formulations will still yield different results in numpy, mainly because $\pi$ is represented with finite (float) precision. In order to yield the same numbers as in `gym`, we will still make use of the (more cumbersome) $\mathrm{sin}(\theta +\pi)$.
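This can be checked numerically (an added illustration):

```python
theta = 1.0
a = np.sin(theta + np.pi)  # formulation used by gym's Pendulum-v0
b = -np.sin(theta)         # analytically identical formulation
print(a, b, a - b)         # the difference is tiny but typically nonzero (~1e-16)
```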
Write a function for the Dyna-Q algorithm which uses the model we defined above:

```python
def pendulumModelDynaQ(alpha, gamma, epsilon, n, nb_episodes, nb_steps):

    env = gym.make('Pendulum-v0')
    env = env.unwrapped
    model = pendulum_model()

    action_values = np.zeros([d_theta, d_theta, d_omega, d_T])
    pi = np.zeros([d_theta, d_theta, d_omega])

    cumulative_reward_history = [] # we can use this to figure out how well the learning worked

    for j in tqdm(range(nb_episodes), position=0, leave=True):

        ### BEGIN SOLUTION
        rewards = [] # this time, this list is only for monitoring
        state = env.reset() # initialize x_0
        state = tuple(discretize_state(state).astype(int))

        for k in range(nb_steps):
            # sample experiences for the model
            if np.random.uniform(0, 1) < epsilon:
                action = np.random.choice(d_T) # explorative action
            else:
                action = pi[state].astype(int) # exploitative action

            cont_action = continualize_action(action)
            next_state, reward, done, _ = env.step(cont_action)
            next_state = tuple(discretize_state(next_state).astype(int))

            # learn from the momentary experience
            action_values[state][action] += alpha * (reward
                                                     + gamma * np.max(action_values[next_state])
                                                     - action_values[state][action])
            pi[state] = np.argmax(action_values[state])

            # no model update is needed

            state = next_state
            rewards.append(reward)

            # learn from the model, IMPORTANT: USE DIFFERENT VARIABLES HERE
            for i in range(n):
                x = model.reset() # if no state is passed to the model, the state is initialized randomly
                u_d = np.random.choice(d_T)
                x_d = tuple(discretize_state(x).astype(int))
                u = continualize_action(u_d)
                x_1, r = model.step(u)
                x_1_d = tuple(discretize_state(x_1).astype(int))

                action_values[x_d][u_d] += alpha * (r
                                                    + gamma * np.max(action_values[x_1_d])
                                                    - action_values[x_d][u_d])
                pi[x_d] = np.argmax(action_values[x_d])

            if done:
                break

        cumulative_reward_history.append(np.sum(rewards))

    env.close()
    ### END SOLUTION

    return cumulative_reward_history, pi
```

Use the following cell to compare learning from experience as in 1) with learning using the defined model (beware, nb_steps = 10000 can take some time):

```python
### train both setups once
print("Run with planning from experience")
exp_planning_history, exp_planning_pi = pendulumDynaQ(alpha = 0.1, gamma = 0.9, epsilon = 0.1, n = 19, nb_episodes = 30, nb_steps = 10000)

print("Run with planning from model")
model_planning_history, model_planning_pi = pendulumModelDynaQ(alpha = 0.1, gamma = 0.9, epsilon = 0.1, n = 19, nb_episodes = 30, nb_steps = 10000)

plt.plot(exp_planning_history)
plt.plot(model_planning_history)
plt.xlabel("episode")
plt.ylabel(r"$\sum R$")
plt.show()
```

Use the following cell to execute the policy we got using the model:

```python
experiment(model_planning_pi, nb_steps = 300)
```

### Extra task: Change the model parameters (e.g. $g$, $m$, $l$) so that our model differs from the "real world" (which we got from gym). What do you observe?

By changing the parameters, our model differs from the "real world". Depending on the amount of difference, the learning curve looks worse than the one with the correct values. The experiment result also depends on the random starting position. Depending on the parameter difference, the experiment may no longer execute successfully. Try changing the parameters on your own.
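For example (an illustrative snippet using the `pendulum_model` signature from above; to actually train with it, replace the line `model = pendulum_model()` inside `pendulumModelDynaQ` accordingly):

```python
# A deliberately mismatched model of the "real world"
wrong_model = pendulum_model(m=5, g=20, l=2)

# One step from the upright position with zero torque
state = wrong_model.reset(np.array([1.0, 0.0, 0.0]))
m_state, m_reward = wrong_model.step(np.array([0.0]))
print(m_state, m_reward)
```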
The following learning curve results from a parameter change using $g = 20 \, \frac{\text{m}}{\text{s}^2}$, $m = 5 \, \text{kg}$ and $l = 2 \, \text{m}$:

```python
plt.plot(exp_planning_history, label="exp")
plt.plot(model_planning_history, label="model")
plt.xlabel("episode")
plt.ylabel(r"$\sum R$")
plt.legend(loc="upper left")
plt.show()
```
7ee2361cd20cb9def9247e5cb233dc12da93334a
856,697
ipynb
Jupyter Notebook
exercises/solutions/ex07/LearningAndPlanning.ipynb
adilsheraz/reinforcement_learning_course_materials
e086ae7dcee2a0c1dbb329c2b25cf583c339c75a
[ "MIT" ]
557
2020-07-20T08:38:15.000Z
2022-03-31T19:30:35.000Z
exercises/solutions/ex07/LearningAndPlanning.ipynb
speedhunter001/reinforcement_learning_course_materials
09a211da5707ba61cd653ab9f2a899b08357d6a3
[ "MIT" ]
7
2020-07-22T07:27:55.000Z
2021-05-12T14:37:08.000Z
exercises/solutions/ex07/LearningAndPlanning.ipynb
speedhunter001/reinforcement_learning_course_materials
09a211da5707ba61cd653ab9f2a899b08357d6a3
[ "MIT" ]
115
2020-09-08T17:12:25.000Z
2022-03-31T18:13:08.000Z
67.749862
53,556
0.696195
true
4,373
Qwen/Qwen-72B
1. YES 2. YES
0.817574
0.847968
0.693277
__label__eng_Latn
0.918956
0.449046
```python import sympy as sym x, L, C, D, c_0, c_1, = sym.symbols('x L C D c_0 c_1') def model1(f, L, D): """Solve -u'' = f(x), u(0)=0, u(L)=D.""" # Integrate twice u_x = - sym.integrate(f, (x, 0, x)) + c_0 u = sym.integrate(u_x, (x, 0, x)) + c_1 # Set up 2 equations from the 2 boundary conditions and solve # with respect to the integration constants c_0, c_1 r = sym.solve([u.subs(x, 0)-0, # x=0 condition u.subs(x,L)-D], # x=L condition [c_0, c_1]) # unknowns # Substitute the integration constants in the solution u = u.subs(c_0, r[c_0]).subs(c_1, r[c_1]) u = sym.simplify(sym.expand(u)) return u def model2(f, L, C, D): """Solve -u'' = f(x), u'(0)=C, u(L)=D.""" u_x = - sym.integrate(f, (x, 0, x)) + c_0 u = sym.integrate(u_x, (x, 0, x)) + c_1 r = sym.solve([sym.diff(u,x).subs(x, 0)-C, # x=0 cond. u.subs(x,L)-D], # x=L cond. [c_0, c_1]) u = u.subs(c_0, r[c_0]).subs(c_1, r[c_1]) u = sym.simplify(sym.expand(u)) return u def model3(f, a, L, C, D): """Solve -(a*u')' = f(x), u(0)=C, u(L)=D.""" au_x = - sym.integrate(f, (x, 0, x)) + c_0 u = sym.integrate(au_x/a, (x, 0, x)) + c_1 r = sym.solve([u.subs(x, 0)-C, u.subs(x,L)-D], [c_0, c_1]) u = u.subs(c_0, r[c_0]).subs(c_1, r[c_1]) u = sym.simplify(sym.expand(u)) return u def demo(): f = 2 u = model1(f, L, D) print(('model1:', u, u.subs(x, 0), u.subs(x, L))) print((sym.latex(u, mode='plain'))) u = model2(f, L, C, D) #f = x #u = model2(f, L, C, D) print(('model2:', u, sym.diff(u, x).subs(x, 0), u.subs(x, L))) print((sym.latex(u, mode='plain'))) u = model3(0, 1+x**2, L, C, D) print(('model3:', u, u.subs(x, 0), u.subs(x, L))) print((sym.latex(u, mode='plain'))) if __name__ == '__main__': demo() ``` ('model1:', x*(D + L*(L - x))/L, 0, D) \frac{x \left(D + L \left(L - x\right)\right)}{L} ('model2:', -C*L + C*x + D + L**2 - x**2, C, D) - C L + C x + D + L^{2} - x^{2} ('model3:', (C*atan(L) - C*atan(x) + D*atan(x))/atan(L), C, D) \frac{C \operatorname{atan}{\left(L \right)} - C \operatorname{atan}{\left(x \right)} + D \operatorname{atan}{\left(x \right)}}{\operatorname{atan}{\left(L \right)}} ```python ```
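As a quick consistency check (added here, not part of the original script), sympy can confirm that the solution returned by `model1` satisfies both the ODE and the boundary conditions:

```python
f = 2
u = model1(f, L, D)
print(sym.simplify(-sym.diff(u, x, 2) - f))          # 0 -> the ODE -u'' = f holds
print(u.subs(x, 0), sym.simplify(u.subs(x, L) - D))  # 0 0 -> both boundary conditions hold
```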
066c129840a97eaf5f8341a7a803d54af933e636
3,729
ipynb
Jupyter Notebook
Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/EXAMPLES/FINITE_ELEMENTS/INTRO/SRC/39_U_XX_F_SYMPY.ipynb
okara83/Becoming-a-Data-Scientist
f09a15f7f239b96b77a2f080c403b2f3e95c9650
[ "MIT" ]
null
null
null
Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/EXAMPLES/FINITE_ELEMENTS/INTRO/SRC/39_U_XX_F_SYMPY.ipynb
okara83/Becoming-a-Data-Scientist
f09a15f7f239b96b77a2f080c403b2f3e95c9650
[ "MIT" ]
null
null
null
Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/EXAMPLES/FINITE_ELEMENTS/INTRO/SRC/39_U_XX_F_SYMPY.ipynb
okara83/Becoming-a-Data-Scientist
f09a15f7f239b96b77a2f080c403b2f3e95c9650
[ "MIT" ]
2
2022-02-09T15:41:33.000Z
2022-02-11T07:47:40.000Z
33
188
0.428533
true
961
Qwen/Qwen-72B
1. YES 2. YES
0.907312
0.83762
0.759983
__label__eng_Latn
0.154743
0.604027
```python
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import pandas as pd
import scipy
import math
from collections import Counter
from warnings import filterwarnings
filterwarnings('ignore')
```

# Essential background from higher mathematics (linear algebra, calculus, probability theory, statistics)

<p>Course "Models and Methods of Intelligent Data Analysis"</p>
<p>Yuri Chernyshov</p>
<p>Cand. Sc. (Phys.-Math.), Associate Professor, Department of Information Technologies and Information Security, USURT (Yekaterinburg)</p>
<p>yuchernyshov@usurt.ru</p>

# Table of Contents <a name=toc>

<ol>
<li><a href='#algebra'>Linear algebra</a>
<ol>
<li><a href='#algebra_vectors'>Vectors</a></li>
<li><a href='#algebra_matrixes'>Matrices</a></li>
</ol>
</li>
<li><a href='#matan'>Calculus</a>
<ol>
<li><a href='#matan_func'>Functions of one variable</a></li>
<li><a href='#matan_deriv'>Derivatives</a></li>
<li><a href='#matan_fmp'>Functions of several variables</a></li>
<li><a href='#matan_partial'>Partial derivatives</a></li>
<li><a href='#matan_gradient'>Gradient. Gradient methods</a></li>
</ol>
</li>
<li><a href='#probability'>Probability theory</a>
<ol>
<li><a href='#prob_def'>Definition of probability</a></li>
<li><a href='#prob_bayes'>Bayes' formula</a></li>
</ol>
</li>
<li><a href='#statistics'>Mathematical statistics</a>
<ol>
<li><a href='#central'>Central limit theorem</a></li>
<li><a href='#pvalue'>Significance levels, p-value</a></li>
</ol>
</li>
<li><a href='#tasks'>Exercises</a>
<ol>
<li><a href='#tasks_linal'>Linear algebra</a></li>
<li><a href='#tasks_matan'>Calculus</a></li>
<li><a href='#tasks_terver'>Probability theory and mathematical statistics</a></li>
</ol>
</li>
<li><a href='#lit'>References</a>
</li>
</ol>

Mathematical objects and methods play a crucial role in data analysis. Below, some basic concepts and methods from linear algebra, calculus, and probability theory that are used when working with data are described. The mathematical concepts are discussed in the context of their use in Python (the numpy, pandas, and scipy libraries). For a deeper study, the reader is referred to the corresponding books in the reference list.

# Linear algebra <a name='algebra'></a>

Algebra is a branch of higher mathematics that studies various spaces, the operations in these spaces, and their properties.

Examples of spaces:
- the space of real numbers $R=(-\infty, \infty)$,
- the space of vectors,
- the space of matrices.

Linear algebra studies vector spaces. The numpy library has a dedicated linear algebra module, linalg.
```python
np.info(np.linalg)
```

    Core Linear Algebra Tools
    -------------------------
    Linear algebra basics:

    - norm            Vector or matrix norm
    - inv             Inverse of a square matrix
    - solve           Solve a linear system of equations
    - det             Determinant of a square matrix
    - lstsq           Solve linear least-squares problem
    - pinv            Pseudo-inverse (Moore-Penrose) calculated using a singular
                      value decomposition
    - matrix_power    Integer power of a square matrix

    Eigenvalues and decompositions:

    - eig             Eigenvalues and vectors of a square matrix
    - eigh            Eigenvalues and eigenvectors of a Hermitian matrix
    - eigvals         Eigenvalues of a square matrix
    - eigvalsh        Eigenvalues of a Hermitian matrix
    - qr              QR decomposition of a matrix
    - svd             Singular value decomposition of a matrix
    - cholesky        Cholesky decomposition of a matrix

    Tensor operations:

    - tensorsolve     Solve a linear tensor equation
    - tensorinv       Calculate an inverse of a tensor

    Exceptions:

    - LinAlgError     Indicates a failed linear algebra operation

As the descriptions show, the numpy.linalg module contains functions for working with vectors and matrices, solving equations, working with tensors, and more.

<a href='#toc'>Back to Table of Contents</a>

## Vectors <a name='algebra_vectors'></a>

A scalar is a number. A vector is a collection of numbers. Vectors can be added and subtracted, and multiplied by a number (a scalar). Two kinds of vector multiplication are also defined: the dot (scalar) product ($\vec{a} \cdot \vec{b}$) and the cross (vector) product ($\vec{a} \times \vec{b}$).

$$ \vec{a}=(a_1, a_2, a_3), \quad \vec{b}=(b_1, b_2, b_3) $$
$$ \vec{a}\pm\vec{b} = (a_1 \pm b_1, a_2 \pm b_2, a_3 \pm b_3) $$
$$ k \vec{a} = (ka_1, ka_2, ka_3) $$
$$ \vec{a} \cdot \vec{b} = a_1 b_1 + a_2 b_2 + a_3 b_3 $$
$$ \vec{a} \cdot \vec{b} = \left| \vec{a} \right| \left| \vec{b} \right| \cos (\alpha) $$
$$ \left| \vec{a} \times \vec{b} \right| = \left| \vec{a} \right| \left| \vec{b} \right| \sin (\alpha) $$

where $\alpha$ is the angle between the vectors $\vec{a}$ and $\vec{b}$.

To work with vectors in Python, one can use ordinary lists or numpy arrays (in which operations on large vectors are optimized). The pandas library also provides the special type pandas.Series for defining a vector.

#### Python list

```python
рост_вес = [180, 80]  # height and weight
рост_вес[0]*рост_вес[1]
```

    14400

#### numpy.array

```python
a = np.array([180,80], dtype='int32')
print(a**2)
```

    [32400  6400]

When working with large numbers it is important to make sure that enough memory is available to store the array elements. In particular, for very large element values the 'int32' element type is not sufficient.

```python
print(a**5)
a = np.array([180,80], dtype='int64')
print(a**5)
```

    [  -21761024 -1018167296]
    [188956800000   3276800000]

#### Dot product in numpy

```python
a = np.array([1, 2])
b = np.array([3, 4])
a.dot(b)
```

    11

#### The pandas.Series object

```python
ds = pd.Series(np.array([1,2,3]))
display(ds)
print("Sum of elements: {}".format(ds.sum()))
```

    0    1
    1    2
    2    3
    dtype: int32
    Sum of elements: 6

Geometrically, a vector is a directed line segment with a start point $A$ and an end point $B$. Notation: $\vec{AB}$, or simply $\vec{a}$. A vector is defined by the coordinates of its start $(x_0, y_0)$ and end $(x_1, y_1)$, together with its direction. The start of any vector can be translated to the point $(0,0)$; the coordinates of the end of the vector then become $(x_1-x_0, y_1-y_0)$.
```python
plt.xlim(-1, 5); plt.ylim(-1, 6)
a_begin = np.array ([1, 3]); a_end = np.array ([4, 5])
b_begin = np.array ([a_begin[0]-a_begin[0], a_begin[1]-a_begin[1]]); b_end = np.array ([a_end[0]-a_begin[0], a_end[1]-a_begin[1]])
plt.annotate (u'$\\vec{a}$', xy=a_begin, xytext=a_end, arrowprops={'arrowstyle': '<|-',})
plt.annotate (u'$\\vec{b}$', xy=b_begin, xytext=b_end, arrowprops={'arrowstyle': '<|-',})
plt.grid(True)
plt.xticks(range(6))
plt.yticks(range(7))
plt.show()
```

#### Vector addition and subtraction

Addition and subtraction of two vectors $\vec{a}=(a_1,a_2)$ and $\vec{b}=(b_1,b_2)$:

$$ \vec{a} \pm \vec{b}=\vec{c} $$
$$ \vec{c}=(a_1 \pm b_1, a_2 \pm b_2) $$

```python
plt.xlim(-1, 7)
plt.ylim(-1, 6)
a = np.array ([1, 3])
b = np.array ([5, 2])
c = a + b
plt.annotate ('', xy=(0, 0), xytext=a, arrowprops={'arrowstyle': '<|-', 'color':'b'})
plt.annotate ('', xy=(0, 0), xytext=b, arrowprops={'arrowstyle': '<|-', 'color':'r'})
plt.annotate ('', xy=a, xytext=a+b, arrowprops={'arrowstyle': '<|-', 'color':'r'})
plt.annotate ('', xy=(0, 0), xytext=c, arrowprops={'arrowstyle': '<|-', 'color':'g'})
plt.annotate("$\\vec{a}$", xy=(0,2), xytext=(0,2), arrowprops={'color':'b'})
plt.annotate("$\\vec{b}$", xy=(3,0), xytext=(3,0), arrowprops={'color':'r'})
plt.annotate("$\\vec{c}=\\vec{a}+\\vec{b}$", xy=(4,3), xytext=(4,3), arrowprops={'color':'g'})
plt.grid(True)
plt.xticks(range(7))
plt.yticks(range(7))
plt.show()
```

#### Multiplying a vector by a number

Multiplication of the vector $\vec{a}=(x_1,y_1)$ by a number $k$:

$$ k \vec{a} = (kx_1, ky_1) $$

```python
plt.xlim(-1, 7)
plt.ylim(-1, 3)
a = np.array ([3, 1])
b = a * 2
plt.annotate ('', xy=a, xytext=(0, 0), arrowprops={'arrowstyle': 'fancy', 'color':'b'})
plt.annotate ('', xy=(0, 0), xytext=b, arrowprops={'arrowstyle': '<|-', 'color':'r'})
plt.annotate("$\\vec{a}$", xy=(0,1), xytext=(0,1), arrowprops={'color':'b'})
plt.annotate("$\\vec{b} = 2\\vec{a}$", xy=(6,1.5), xytext=(6,1.5), arrowprops={'color':'r'})
plt.grid(True)
plt.xticks(range(7))
plt.yticks(range(4))
plt.show()
```

#### Dot product of vectors

The dot product of vectors is given by the formula $\vec{a}\cdot\vec{b}=|\vec{a}||\vec{b}|\cos \alpha$, where $\alpha$ is the angle between the vectors $\vec{a}$ and $\vec{b}$. Moreover, if the Cartesian coordinates of the vectors $\vec{a}=(a_1, a_2, a_3)$ and $\vec{b}=(b_1, b_2, b_3)$ are given, the dot product can be written as $\vec{a}\cdot\vec{b}=a_1 b_1 + a_2 b_2 + a_3 b_3$. Geometrically, in the plane, the dot product of the vector $\vec{a}=(a_1, a_2)$ with the vector $\vec{b}=(b_1, b_2)$ equals the length of the projection of $\vec{a}$ onto $\vec{b}$ multiplied by the length of $\vec{b}$ (and, by the commutative property of the dot product, likewise with the roles of $\vec{a}$ and $\vec{b}$ exchanged).

```python
a = np.array ([3, 3])
b = np.array ([4, 1])
plt.xlim(-1, 7)
plt.ylim(-1, 4)
plt.annotate ('', xy=a, xytext=(0, 0), arrowprops={'arrowstyle': '-|>', 'color':'b'})
plt.annotate ('', xy=b, xytext=(0, 0), arrowprops={'arrowstyle': '-|>', 'color':'r'})
plt.annotate("$\\vec{a}$", xy=(0,1), xytext=(0,1))
plt.annotate("$\\vec{b}$", xy=(4,0), xytext=(4,0))
N = a*np.dot(a,b)/np.linalg.norm(a)/np.linalg.norm(b); plt.plot([N[0], b[0]],[N[1], b[1]], c='g'); plt.scatter(N[0],N[1], c='g')
M = b*np.dot(a,b)/np.linalg.norm(a)/np.linalg.norm(b); plt.plot([M[0], a[0]],[M[1], a[1]], c='b'); plt.scatter(M[0],M[1], c='b')
plt.grid(True)
plt.xticks(range(7))
plt.yticks(range(4))
plt.show()
```

<a href='#toc'>Back to Table of Contents</a>

## Matrices <a name='algebra_matrixes'></a>

A matrix is a two-dimensional table.
In Python, a matrix can be implemented as a list of lists using built-in Python lists, or with the built-in facilities of the numpy or pandas libraries.

An example of creating a matrix as a list of lists:

```python
A = [[int(i+j) for i in range(3)] for j in range(4)]
print(A)
for row in A:
    print(" ".join([str(elem) for elem in row]))
print(A[1][1])
```

    [[0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4, 5]]
    0 1 2
    1 2 3
    2 3 4
    3 4 5
    2

An example of creating a two-dimensional array with the numpy.array function. First a one-dimensional array of numbers is created; a matrix is then built from it with the reshape() function.

```python
B = np.array([i for i in range(1,13)])
print(B, B[4])
B = B.reshape(3,4)
print(B, B[1,0])
type(B)
```

    [ 1  2  3  4  5  6  7  8  9 10 11 12] 5
    [[ 1  2  3  4]
     [ 5  6  7  8]
     [ 9 10 11 12]] 5

    numpy.ndarray

An example of creating a matrix with the numpy.matrix function:

```python
C = np.mat([[1,2],[3,4]])
print(C, type(C))
```

    [[1 2]
     [3 4]] <class 'numpy.matrixlib.defmatrix.matrix'>

An example of creating a pandas.DataFrame object:

```python
df = pd.DataFrame(np.array([1,2,3,4]).reshape(2,2))
display(df)
```

       0  1
    0  1  2
    1  3  4

#### Computing the determinant of a matrix

```python
A = np.array([[1,2,3],[0,2,1],[5,4,2]])
np.linalg.det(A)
```

    -19.999999999999996

#### Matrix operations: addition, subtraction, multiplication, transposition

Let the matrices

$$
A = \left( \begin{array}{lll} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{array} \right), ~~
B = \left( \begin{array}{lll} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{array} \right)
$$

be given.

Multiplication of the matrix A by a number k:

$$
k A = \left( \begin{array}{lll} ka_{11} & ka_{12} & ka_{13} \\ ka_{21} & ka_{22} & ka_{23} \\ ka_{31} & ka_{32} & ka_{33} \end{array} \right)
$$

Sum and difference of the matrices A and B:

$$
A \pm B = \left( \begin{array}{lll} a_{11} \pm b_{11} & a_{12} \pm b_{12} & a_{13} \pm b_{13} \\ a_{21} \pm b_{21} & a_{22} \pm b_{22} & a_{23} \pm b_{23} \\ a_{31} \pm b_{31} & a_{32} \pm b_{32} & a_{33} \pm b_{33} \end{array} \right)
$$

Product of the matrices A and B:

$$
AB = \left( \begin{array}{lll} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{array} \right) \left( \begin{array}{lll} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{array} \right) =
$$
$$
= \left( \begin{array}{lll} a_{11}b_{11}+a_{12}b_{21}+a_{13}b_{31} & a_{11}b_{12}+a_{12}b_{22}+a_{13}b_{32} & a_{11}b_{13}+a_{12}b_{23}+a_{13}b_{33} \\ a_{21}b_{11}+a_{22}b_{21}+a_{23}b_{31} & a_{21}b_{12}+a_{22}b_{22}+a_{23}b_{32} & a_{21}b_{13}+a_{22}b_{23}+a_{23}b_{33} \\ a_{31}b_{11}+a_{32}b_{21}+a_{33}b_{31} & a_{31}b_{12}+a_{32}b_{22}+a_{33}b_{32} & a_{31}b_{13}+a_{32}b_{23}+a_{33}b_{33} \end{array} \right)
$$

The transposed matrix $A^T$ is defined as follows:

$$
A^T = \left( \begin{array}{lll} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{array} \right)^T = \left( \begin{array}{lll} a_{11} & a_{21} & a_{31} \\ a_{12} & a_{22} & a_{32} \\ a_{13} & a_{23} & a_{33} \end{array} \right)
$$

```python
A = np.array([[1,2],[3,4]])
B = np.array([[1,2],[3,4]])
print("Matrix addition")
print(np.add(A,B))
print("Matrix subtraction")
print(np.subtract(A,B))
print("Matrix product")
print(np.dot(A,B))
print("Transpose")
print(A.T)
```

    Matrix addition
    [[2 4]
     [6 8]]
    Matrix subtraction
    [[0 0]
     [0 0]]
    Matrix product
    [[ 7 10]
     [15 22]]
    Transpose
    [[1 3]
     [2 4]]

The matrix inverse to a matrix $A$ is denoted $A^{-1}$ and has the following property: $A A^{-1} = E$, where $E$ is the identity matrix (the elements on the main diagonal equal 1, the remaining elements equal zero).

Computing the inverse matrix:

```python
A = np.array([np.random.choice([0,1,2,3]) for _ in range(25)]).reshape(-1,5)
print(A)
invA = np.linalg.inv(A)
print(invA)
I = np.dot(A, invA)
print("I=")
for row in I:
    print(" ".join([str(round(i,2)) for i in row]))
```

    [[0 3 3 2 2]
     [0 1 0 2 1]
     [2 1 0 1 0]
     [1 1 2 3 1]
     [3 2 1 1 2]]
    [[-0.14893617 -0.21276596  0.14893617  0.12765957  0.19148936]
     [ 0.40425532  0.14893617  0.59574468 -0.4893617  -0.23404255]
     [ 0.12765957 -0.53191489 -0.12765957  0.31914894 -0.0212766 ]
     [-0.10638298  0.27659574  0.10638298  0.23404255 -0.14893617]
     [-0.19148936  0.29787234 -0.80851064  0.0212766   0.53191489]]
    I=
    1.0 0.0 0.0 0.0 0.0
    0.0 1.0 0.0 0.0 0.0
    -0.0 0.0 1.0 0.0 0.0
    0.0 0.0 0.0 1.0 0.0
    0.0 0.0 0.0 0.0 1.0

#### Rank of a matrix

The rank of a matrix is the order of the largest minor of the matrix whose determinant is nonzero.

```python
A = np.array([[1,2,3],[0,2,1],[5,4,2]])
np.linalg.matrix_rank(A)
```

    3

#### Eigenvectors and eigenvalues of a matrix

The eigenvectors of a matrix A are the vectors that, under the linear transformation given by A, change only in scale, not in direction:

$$ A \vec{v} = \lambda \vec{v} $$

```python
A = np.array([[1,2],[3,4]])
```

```python
np.linalg.eig(A)
```

    (array([-0.37228132,  5.37228132]),
     array([[-0.82456484, -0.41597356],
            [ 0.56576746, -0.90937671]]))

#### Trace of a matrix

The trace of a matrix is the sum of the elements on its main diagonal.

```python
A = np.array(range(1,10)).reshape(-1,3)
print(A)
print(np.diagonal(A))
print(np.trace(A))
```

    [[1 2 3]
     [4 5 6]
     [7 8 9]]
    [1 5 9]
    15

#### Practical problems involving matrices

Matrices are used for linear transformations of a space. For example, if a vector is given, multiplying it by a matrix changes this vector — stretching it or rotating it about the origin.

```python
plt.xlim(-1, 6)
plt.ylim(-1, 6)
M1 = np.array([1, 2]); M2 = np.array([3, 1])
A = np.array([[1,0],[0,2]])
M3 = M1.dot(A); M4 = M2.dot(A)
plt.plot([M1[0], M2[0]],[M1[1], M2[1]], c='r')
plt.annotate(u'', xy=(0, 0), xytext = M1, arrowprops = {'arrowstyle': '<|-', 'color': 'r'})
plt.annotate(u'', xy=(0, 0), xytext = M2, arrowprops = {'arrowstyle': '<|-', 'color': 'r'})
plt.plot([M3[0], M4[0]],[M3[1], M4[1]], c='g')
plt.annotate(u'', xy=(0, 0), xytext = M3, arrowprops = {'arrowstyle': '<|-', 'color': 'g'})
plt.annotate(u'', xy=(0, 0), xytext = M4, arrowprops = {'arrowstyle': '<|-', 'color': 'g'})
plt.grid(True)
plt.show()
```

```python
print(M1, M2, M3, M4)
print(A)
```

    [1 2] [3 1] [1 4] [3 2]
    [[1 0]
     [0 2]]

Matrices are also used to solve systems of linear algebraic equations (by Cramer's rule or by the inverse-matrix method). For example, let us solve the following system of linear equations by Cramer's rule:

$$
\left\{
\begin{array}{l}
x+2y=5,\\
3x+4y=6
\end{array}
\right.
$$

This system of equations can be written in matrix form:

$$
A\vec{X}=\vec{B}, ~~
A= \left( \begin{array}{cc} 1 & 2 \\ 3 & 4 \end{array} \right), ~~
\vec{X}=\left( \begin{array}{c} x\\ y \end{array} \right), ~~
\vec{B}=\left( \begin{array}{c} 5\\ 6 \end{array} \right).
$$

```python
A = np.array([1,2,3,4]).reshape(2,2)
B = np.array([5,6])
D1 = np.linalg.det(np.array([5,2,6,4]).reshape(2,2))
D2 = np.linalg.det(np.array([1,5,3,6]).reshape(2,2))
D = np.linalg.det(np.array([1,2,3,4]).reshape(2,2))
x = D1/D; y = D2/D
print("Roots of the system: {} and {}".format(x,y))
```

    Roots of the system: -4.0 and 4.5

Let us solve the same system by the inverse-matrix method, using the formula

$$
\vec{X} = A^{-1} \vec{B}.
$$

```python
X = np.linalg.inv(A).dot(B)
print(X)
```

    [-4.   4.5]

<a href='#toc'>Back to Table of Contents</a>

# Calculus <a name='matan'></a>

Calculus (mathematical analysis) studies functions of one and several variables and their properties, using the notion of continuity, which is defined through sequences. For continuous functions, the most important mathematical tool — the derivative — is introduced. Calculus also studies integrals and series.

## Functions <a name='matan_func'></a>

The main objects of study in calculus are functions. The simplest functions are the elementary ones: the power function $y=x^n$, the exponential function $y=a^x$ (an important special case of which is $y=e^x$, where $e$ is Euler's number), the trigonometric functions, and the logarithmic functions.

```python
plt.rcParams['figure.figsize']=(15,10)
xs = np.linspace(-2, 2, 150)
titles = ["$y=x^2$", "$y=\sqrt{x}$", "$y=e^x$",
          "y=sin(x)", "y=cos(x)", "y=tan(x)",
          "y=ln(x)", "y=arcsin(x)", "y=sinh(x)"]
functions = [xs**2, xs**(1/2), np.exp(xs),
             np.sin(xs), np.cos(xs), np.tan(xs),
             np.log(xs), np.arcsin(xs), np.sinh(xs)]
fig, ax = plt.subplots(3,3)
for pos in range(9):
    ax[pos%3][pos//3].plot(xs, functions[pos]); ax[pos%3][pos//3].set_title(titles[pos])
plt.show()
```

## Derivatives <a name='matan_deriv'></a>

The derivative of a function is the limit of the ratio of the increment of the function, $\Delta f = f(x + \Delta x)-f(x)$, to the increment of its argument, $\Delta x$, as $\Delta x \to 0$:

$$f'(x)=\frac{df}{dx}=\lim_{\Delta x \to 0}\frac{f(x+\Delta x)-f(x)}{\Delta x}$$

The derivative $f'(x)$ characterizes the rate of change of the function.

```python
plt.rcParams['figure.figsize']=(10,5)
fig, ax = plt.subplots(2)
ax[0].plot([x for x in range(-5,6)], [x**2 for x in range(-5,6)], color='b')
ax[1].plot([x for x in range(-5,6)], [2*x for x in range(-5,6)], color='r')
for i in [0,1]:
    ax[i].grid(True)
plt.show()
```

We see in the figure that for those $x$ where $f'(x)<0$ the function $f(x)$ decreases (and the smaller $f'(x)$ is, the faster $f(x)$ decreases), while for $x$ where $f'(x)>0$ the function $f(x)$ increases (and the larger $f'(x)$ is, the faster $f(x)$ increases). At the point $x_*$ where $f'(x_*)=0$ there is an extremum — a minimum of $f(x)$.

The geometric meaning of the derivative: the derivative $y'(x_0)$ of a function $y(x)$ at a point $x_0$ equals the tangent of the slope angle of the tangent line to the graph of $y(x)$ at the point $x_0$.

```python
xs = np.linspace(-2,2,10)
ys = xs**2
plt.plot(xs,ys)
plt.grid(True)
plt.scatter(2,4,c='r')
plt.plot([2,1],[4,0])
plt.show()
```

## Functions of several variables <a name='matan_fmp'></a>

Functions of several variables depend on several variables. For example, the function $F(x,y,z)=x^2+y^2+z^2$ depends on the variables $x$, $y$, $z$. A derivative can be taken with respect to each of these variables; such derivatives are called partial derivatives.
$$F(x,y)=x^3+y^3$$

```python
from sympy import *
x, y = symbols('x y')
r1 = diff(x**3 + y**3, x)
r2 = diff(x**3 + y**3, y)
print(type(r1))
print(r1,r2)
```

    <class 'sympy.core.mul.Mul'>
    3*x**2 3*y**2

<a href='#toc'>Back to Table of Contents</a>

## Gradient. Gradient methods <a name='matan_gradient'></a>

The vector composed of the partial derivatives of the function $F(x,y)$ is called the gradient. An important property of the gradient vector is that it points in the direction of the steepest ascent of the function.

$$\nabla F(x,y)=(F_x,F_y)$$

Methods for finding an extremum (a maximum or a minimum) of a function of several variables are based on the gradient. The idea of the method:
- choose a starting point;
- from this point, take a step of length $h$ in the direction of the gradient (if a maximum is sought) or of the antigradient (if a minimum is sought);
- repeat the procedure from the new point.

```python
def grad_step(f, xp, yp, h):
    fx = (f(xp + h, yp) - f(xp, yp))/h
    fy = (f(xp, yp + h) - f(xp, yp))/h
    L = math.sqrt(fx**2 + fy**2)
    xn = xp - h*fx/L
    yn = yp - h*fy/L
    return xn, yn
```

Let us define the function $f(x,y)=(x-1)^2+2(y-2)^2$. This is a paraboloid opening upward. It obviously has a minimum of $0$ at the point $(x=1, y=2)$.

```python
def f(x,y):
    return (x-1)**2 + 2*(y-2)**2
```

```python
u, v = np.mgrid[-10:10, -1:10]
x = u
y = v
z = (u-1)**2 + 2*(v-2)**2
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_wireframe(x, y, z)
#ax.scatter(1, 2, 0, c='r', s=100)
plt.show()
```

```python
s = []
xt, yt = 5, 5
for _ in range(100):
    xt, yt = grad_step(f, xt, yt, 0.2)
    s.append((xt, yt))
plt.scatter([i[0] for i in s], [i[1] for i in s])
plt.grid(True)
plt.xticks(range(-1, 10))
plt.show()
```

```python
s[-5:]
```

    [(0.8999999999999999, 1.758377005960798),
     (0.9, 1.958377005960798),
     (0.8999999999999999, 1.758377005960798),
     (0.9, 1.958377005960798),
     (0.8999999999999999, 1.758377005960798)]

We see that, because of the step length $h=0.2$, the method starts to oscillate around the sought point without being able to approach it any further. Let us repeat the procedure from a different starting point.

```python
s = []
xt, yt = -3, -4
for _ in range(100):
    xt, yt = grad_step(f, xt, yt, 0.2)
    s.append((xt, yt))
plt.scatter([i[0] for i in s], [i[1] for i in s])
plt.grid(True)
plt.show()
```

```python
s[-5:]
```

    [(0.9000000001520045, 1.7408429736519178),
     (0.9000000000564985, 1.9408429736519177),
     (0.8999999999181675, 1.7408429736519178),
     (0.8999999999695837, 1.9408429736519177),
     (0.900000000044055, 1.7408429736519178)]

We see that the method again ends up oscillating around the minimum with an error on the order of the step length. To obtain a more accurate solution (e.g. with accuracy $\epsilon=10^{-3}$), the step length would have to be decreased (e.g. to $h=0.01$) as the iterates approach the minimum.

```python
x = np.linspace(-5,5,100)
y = np.linspace(-5,5,100)
z = [[f(i,j) for i in x] for j in y]
plt.xticks([i for i in range(100) if not i%10]+[len(x)-1],
           [round(x[i]) for i in range(100) if not i%10]+[x[-1]], rotation=20)
plt.yticks([i for i in range(100) if not i%10]+[len(y)-1],
           [round(y[i]) for i in range(100) if not i%10]+[y[-1]], rotation=20)
plt.contour(z)
plt.grid(True)
plt.scatter([(i[0]+5)*10 for i in s], [(i[1]+5)*10 for i in s])
plt.show()
```

On the contour plot it can be seen that the gradient method, starting from the point $(-3, -4)$ and moving in the direction of the antigradient with constant step $h=0.2$, approaches the minimum point $(1, 2)$.
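As a quick check (an added illustration), the finite-difference gradient used in `grad_step` can be compared with the analytic gradient of $f(x,y)=(x-1)^2+2(y-2)^2$:

```python
xp, yp, h = -3.0, -4.0, 1e-6
fx_num = (f(xp + h, yp) - f(xp, yp)) / h
fy_num = (f(xp, yp + h) - f(xp, yp)) / h
fx_exact = 2 * (xp - 1)   # dF/dx = 2(x-1)
fy_exact = 4 * (yp - 2)   # dF/dy = 4(y-2)
print(fx_num, fx_exact)   # both close to -8
print(fy_num, fy_exact)   # both close to -24
```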
<a href='#toc'>Back to Table of Contents</a>

## Stochastic gradient descent <a name='matan-stochastic-gradient'></a>

The stochastic gradient descent method differs from the batch gradient descent considered above in that, when computing a step, a single arbitrary direction (one randomly chosen coordinate) is used. On large data sets this leads to substantial savings in computational resources.

```python
def stohastic_grad_step(f, xp, yp, h):
    i = np.random.choice([0,1])
    if i==0:
        f_step = (f(xp + h, yp) - f(xp, yp))/h
    else:
        f_step = (f(xp, yp + h) - f(xp, yp))/h
    L = math.sqrt(f_step**2 + f_step**2)
    xn = xp - h*f_step/L
    yn = yp - h*f_step/L
    return xn, yn
```

Let us define the function $f(x,y)=(x-1)^2+2(y-2)^2$. This is a paraboloid opening upward. It obviously has a minimum of $0$ at the point $(x=1, y=2)$.

```python
def f(x,y):
    return (x-1)**2 + 2*(y-2)**2
```

```python
u, v = np.mgrid[-10:10, -1:10]
x = u
y = v
z = (u-1)**2 + 2*(v-2)**2
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_wireframe(x, y, z)
ax.scatter(1, 2, 0, c='r', s=100)
plt.show()
```

```python
s = []
xt, yt = -3, -4
for _ in range(300):
    xt, yt = stohastic_grad_step(f, xt, yt, 0.2)
    s.append((xt, yt))
plt.scatter([i[0] for i in s], [i[1] for i in s])
plt.grid(True)
plt.show()
```

```python
x = np.linspace(-5,5,100)
y = np.linspace(-5,5,100)
z = [[f(i,j) for i in x] for j in y]
plt.xticks([i for i in range(100) if not i%10]+[len(x)-1],
           [round(x[i]) for i in range(100) if not i%10]+[x[-1]], rotation=20)
plt.yticks([i for i in range(100) if not i%10]+[len(y)-1],
           [round(y[i]) for i in range(100) if not i%10]+[y[-1]], rotation=20)
plt.contour(z)
plt.grid(True)
plt.scatter([(i[0]+5)*10 for i in s], [(i[1]+5)*10 for i in s])
plt.show()
```

<a href='#toc'>Back to Table of Contents</a>

```python
from matplotlib import animation, rc
from IPython.display import HTML, display_html

num = 10

# nice figure settings
fig, ax = plt.subplots()
y_true_value = np.array([[1]*num, [2]*num])
level_x = np.linspace(0, 2, num)
level_y = np.linspace(0, 3, num)
X, Y = np.meshgrid(level_x, level_y)
Z = (X - y_true_value[0])**2 + (Y - y_true_value[1])**2
ax.set_xlim(-0.02, 2)
ax.set_ylim(-0.02, 3)
ax.scatter(*y_true_value, c='red')
contour = ax.contour(X, Y, Z, 10)
ax.clabel(contour, inline=1, fontsize=10)
plt.show()
```

```python
xs = [0]
ys = [0]
trajectory = []
line, = ax.plot([], [], lw=2)

# start animation with empty trajectory
def init():
    line.set_data([], [])
    return (line,)

# one animation step (make one GD step)
def animate(i):
    xt = xs[-1] - 0.1*1
    yt = ys[-1] - 0.1*1
    # plain lists are used here, since np.append does not modify its argument in place
    xs.append(xt)
    ys.append(yt)
    trajectory.append([xt, yt])
    line.set_data(*zip(*trajectory))
    return (line,)

anim = animation.FuncAnimation(fig, animate, init_func=init,
                               frames=100, interval=20, blit=True)
```

```python
try:
    display_html(HTML(anim.to_html5_video()))
except (RuntimeError, KeyError):
    # In case the built-in renderers are unavailable, fall back to
    # a custom one that doesn't require external libraries
    anim.save('test.gif', writer='pillow')
```

```python

```

<a href='#toc'>Back to Table of Contents</a>

# Probability theory <a name='probability'></a>

## Definition of probability

In simplified form, the probability of an event is given by the ratio $P=\frac{m}{n}$, where $m$ is the number of favorable outcomes and $n$ is the total number of outcomes. For example, the probability of rolling an even number with a six-sided die is $P=\frac{3}{6}=\frac{1}{2}$, since $m=3$ (rolling a 2, 4, or 6) and $n=6$.
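This value can be verified with a quick Monte-Carlo simulation (an added illustration):

```python
rolls = np.random.randint(1, 7, size=100_000)  # 100,000 simulated die rolls
print((rolls % 2 == 0).mean())                 # approximately 0.5
```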
$P$ takes values from 0 to 1. The value $1$ corresponds to a certain event, $0$ to an impossible event.

## Dependent and independent events. Conditional probability.

Let $A$ and $B$ be events, and $P(A)$ and $P(B)$ the probabilities of these events. The events $A$ and $B$ are called independent if $P(AB)=P(A)P(B)$. $P(A|B)$ is the probability that event $A$ occurs given that event $B$ has occurred. For dependent events $A$ and $B$, we have $P(AB)=P(A)P(B|A)$.

## Bayes' theorem <a name='prob_bayes'></a><a name='bayes'></a>

Let $H_i,~i\in [1,n]$ be independent events forming a complete group of events (i.e., $\sum_{i=1}^{n}{P(H_i)}=1$). The $H_i$ are also called hypotheses. The following Bayes formula holds:

$$
P(H_i|A) = \frac{P(H_i) P (A|H_i)}{\sum_{i=1}^{n}{P(H_i) P (A|H_i)}}.
$$

```python

```

For example, suppose there are two events: $S$ — a message is spam, and $R$ — a message contains the word Rolex. Then the conditional probabilities are:

- $P(S|R)$ — the probability that a message is spam given that it contains the word Rolex
- $P(R|S)$ — the fraction of spam messages that contain the word Rolex

Bayes' formula for computing the probability that a message is spam given that it contains the word Rolex:

$$
P(S|R) = \frac{P(S) P(R|S)}{P(\overline{S}) P(R|\overline{S})+P(S) P(R|S)}
$$

```python

```

<a href='#toc'>Back to Table of Contents</a>

## Random variables

A random variable takes a definite value as the result of a trial (which value is unknown in advance). Examples: rolling a die can yield a number from 1 to 6; a shot at a target can deviate from the target's center. One distinguishes discrete and continuous random variables. For a discrete random variable all possible values can be listed in advance (as in the die example). For a continuous random variable the values cannot be listed in advance; they continuously fill some interval (as in the target-shooting example).

## Numerical characteristics of random variables

- Expected value. Discrete random variable: $M[X]=\sum_{i=1}^{n}{p_i x_i}$. Continuous random variable: $M[X]=\int_{-\infty}^{\infty}{x f(x)\, dx}$
- Variance. Discrete random variable: $D[X]=\sum_{i=1}^{n}{p_i (x_i-M[X])^2}$. Continuous random variable: $D[X]=\int_{-\infty}^{\infty}{(x-M[X])^2 f(x)\, dx}$
- Covariance of two random variables: $K_{xy} = M[(X-M[X])(Y-M[Y])]$
- Correlation of two random variables: $r_{xy} = \frac{K_{xy}}{\sqrt{D[X]D[Y]}}$

The numpy library has corresponding functions for computing the characteristics of random variables.

```python
x = np.random.randn(1,10)
y = np.random.randn(1,10)
```

```python
print(x.mean())
print(np.var(x))
print(np.median(x))
print(x.std())
print(np.corrcoef(x,y))
```

    0.05050911370569235
    1.7270207966730493
    -0.2004098768967763
    1.3141616326285932
    [[1.         0.05765124]
     [0.05765124 1.        ]]

<a href='#toc'>Back to Table of Contents</a>

# Mathematical statistics <a name='statistics'></a>

The task of mathematical statistics is to describe a random variable (its main properties) from available samples of values. Such samples may be obtained, for example, as the result of an experiment.

Describing sets of numbers. A numerical series characterizes itself; for it, one can compute the basic characteristics — the minimum and maximum values, the arithmetic mean, the mode (the most frequently occurring value), and the median (the value that splits the series in half).
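For example (an added illustration), the mode and the median of a small sample can be computed as follows:

```python
sample = np.array([3, 1, 4, 1, 5, 9, 2, 6, 1])
mode = Counter(sample).most_common(1)[0][0]  # most frequent value
print(mode)               # 1
print(np.median(sample))  # 3.0
```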
```python
m = np.array([np.random.randint(1,100) for _ in range(10)])
```

```python
print(m)
print(m.min())
print(m.max())
print(m.mean())
print(m.var())
print(m.std())
```

    [92 80 88 43 46 50 57 55 91 65]
    43
    92
    66.7
    336.40999999999997
    18.34148303709381

When the sample is large, working with all the numbers at once becomes difficult, so statistics (statistical indicators) are used to convey the essential characteristics of the series.

```python
from collections import Counter
m = np.array([np.random.randint(1,10) for _ in range(1000)])
c = Counter(m)
plt.bar(c.keys(), c.values())
plt.show()
```

## Central limit theorem <a name='central'></a>

As the number of trials grows without bound, the distribution of the sum (or of the mean) of independent identically distributed (i.i.d.) random variables converges to the normal distribution.

```python
def f_bernulli(n,p):
    # number of successes in n Bernoulli trials (a binomial sample)
    result = 0
    for i in range(n):
        if np.random.random()<p:
            result += 1
    return result
```

```python
cols = 10
p = 0.5
n = 10
fig, ax = plt.subplots(1, cols, figsize=(20,3))
for i in range(cols):
    num = (i+1)*10
    probs = [f_bernulli(n,p) for _ in range(num)]
    c = Counter(probs)
    ax[i].bar(c.keys(), c.values())
    ax[i].set_xticks(list(c.keys()))
plt.show()
```

<a href='#toc'>Back to Table of Contents</a>

## Significance levels, p-value <a name='pvalue'></a>

```python

```

```python

```

```python

```

<a href='#toc'>Back to Table of Contents</a>

# Exercises <a name='tasks'></a>

## Linear algebra <a name='tasks_linal'></a>

<a href='#toc'>Back to Table of Contents</a>

## Calculus <a name='tasks_matan'></a>

<a href='#toc'>Back to Table of Contents</a>

## Probability theory and mathematical statistics <a name='tasks_terver'></a>

#### Problem (conditional probability)

Suppose that in some population 50% live to age 60 and 20% live to age 80. What is the probability (from 0 to 1) that a randomly chosen sixty-year-old member of the population will live to eighty? Write the answer to one decimal place.

```python
0.2/0.5
```

    0.4

#### Problem (law of total probability)

In a supermarket, 60% of the apples are from Turkey and 40% are from India. 10% of the Turkish apples and 15% of the Indian apples are wormy. What is the probability that an apple bought in this shop turns out to be wormy? Write the answer from 0 to 1 with two decimal places.

```python
0.6*0.1+0.4*0.15
```

    0.12

#### Problem (Bayes' formula)

Imagine you bought an apple in the shop from the previous problem, and it turned out to be wormy. What is the probability that it is from Turkey rather than from India?

```python
0.6*0.1/(0.6*0.1+0.4*0.15)
```

    0.5

#### Problem (Bayes' formula)

1% of women have breast cancer. For 80% of women with breast cancer, a mammogram correctly detects the disease; in addition, it gives a false positive result (i.e., incorrectly indicates cancer) for 9.6% of healthy women. What percentage of women whose mammogram came back positive actually have breast cancer? Write the answer to one decimal place (no percent sign needed).

```python
0.01*0.8/(0.01*0.8+0.99*0.096)
```

    0.07763975155279504

#### Problem (conditional probability)

What is the probability that, when two fair six-sided dice are thrown independently, at least one of them shows more than three points? Write the exact answer as a decimal fraction.

```python
1/2+1/4
```

    0.75

The following cell computes the Poisson probability $P(X=5)$ for $\lambda=3$:

```python
3**5 * np.exp(-3) / np.math.factorial(5)
```

    0.10081881344492448

A weather station located in the Sydney botanic gardens has been recording rainfall since 1885.
The mean annual precipitation over the period from 1885 through 2015 inclusive is 1197.69 mm, and the sample variance is 116182.2. Treating annual precipitation as a random variable that does not change over time and has a normal distribution, construct the interval that will contain the amount of precipitation falling in 2016 with probability 99.7%. What is its upper bound? Round the answer to two decimal places.

```python
m = 1197.69
d = 116182.2
np.sqrt(d)
```

    340.8551011793721

```python
m+3*np.sqrt(d)
```

    2220.2553035381166

Estimate the value of the parameter $\sigma_n$ — the standard deviation of the normal distribution which, by the central limit theorem, approximates the distribution of the mean annual precipitation from the previous problem. Round the answer to two decimal places.

```python
n = 2015-1885+1
n
```

    131

```python
np.sqrt(d/n)
```

    29.780648463402596

Construct an approximate 99.7% confidence interval for the mean annual precipitation. What is the upper confidence limit? Round the answer to two decimal places.

```python
m+3*np.sqrt(d/n)
```

    1287.0319453902077

To promote your product, the advertising department proposes using a new video clip. In a focus group you show 40 subjects the new and the old clips and ask which one they like more; 62.5% of the subjects choose the new one. Using the central limit theorem and the two-sigma rule, construct a 95% confidence interval for the share of the target audience preferring the new clip. Choose the conclusion that corresponds to the constructed interval.

```python
40*0.625
```

    25.0

```python
25/40
```

    0.625

```python
np.sqrt(1/39*(25*(1-0.625)**2 + 15*(-0.625)**2))
```

    0.49029033784546006

With the two-sigma rule this gives the interval $0.625 \pm 2 \cdot 0.4903/\sqrt{40} \approx 0.625 \pm 0.155$, i.e., approximately $(0.47,\ 0.78)$.

<a href='#toc'>Back to Contents</a>

# References <a name='lit'></a>

1. Fichtenholz G.M. Fundamentals of Mathematical Analysis, Vols. 1–3, Moscow, 1968
2. Wentzel E.S. Probability Theory, 4th stereotype ed., Moscow: Nauka, Fizmatgiz, 1969, 576 pp.
3. Kurosh A.G. A Course in Higher Algebra, Moscow: Nauka, 1966

<a href='#toc'>Back to Contents</a>
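As a cross-check of the focus-group problem above, a minimal sketch of the two-sigma interval using the plug-in estimate $\hat p(1-\hat p)/n$ for the variance of the proportion (the numbers are recomputed here, not taken from a recorded run):

```python
import numpy as np

n = 40
p_hat = 25 / n                         # share preferring the new clip, 0.625
se = np.sqrt(p_hat * (1 - p_hat) / n)  # standard error of the proportion
print(p_hat - 2 * se, p_hat + 2 * se)  # roughly (0.47, 0.78)
```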
cafe74aa5771d986e882eb1516a7c50e78c32ace
704,625
ipynb
Jupyter Notebook
lessons/lec-DA-Maths.ipynb
yurichernyshov/Data-Science-Course-USURT
6a9d87ff7dd88fc48b73f3250b8a37953811dc0e
[ "CC0-1.0" ]
4
2020-10-02T10:46:12.000Z
2022-02-14T14:11:04.000Z
lessons/lec-DA-Maths.ipynb
yurichernyshov/Data-Science-Course-USURT
6a9d87ff7dd88fc48b73f3250b8a37953811dc0e
[ "CC0-1.0" ]
null
null
null
lessons/lec-DA-Maths.ipynb
yurichernyshov/Data-Science-Course-USURT
6a9d87ff7dd88fc48b73f3250b8a37953811dc0e
[ "CC0-1.0" ]
4
2021-01-27T08:39:25.000Z
2022-02-14T14:11:01.000Z
252.192198
127,180
0.921476
true
15,886
Qwen/Qwen-72B
1. YES 2. YES
0.70253
0.787931
0.553545
__label__rus_Cyrl
0.554714
0.124401
We start by bringing in the Python libraries that we will be using for this code. We will be using Torch, which will enable us to run the code on the GPU for faster processing compared to the CPU. Torch also assists us during backpropagation through autograd, which does the differentiation and stores the gradients so they can be retrieved when required. PyTorch also enables us to create neural networks very easily, and PyTorch together with the Numpy, Pandas and Scipy libraries makes it very powerful for neural-network calculations.

## Neural Networks

Neural networks are the building blocks for deep learning. They are made up of neurons or units. Each weighted unit sums up the units of the previous layers and passes the sum through an activation function to get the neuron's output. Some activation functions used in this code are sigmoid, ReLU (Rectified Linear Units), softmax and tanh (hyperbolic tangent), to name a few. A simple neural network would be depicted as shown below:

$$
\begin{align}
y &= f(w_1 x_1 + w_2 x_2 + w_3 x_3 + b) \\
y &= f\left(\sum_i w_i x_i +b \right)
\end{align}
$$

The linear transformation done on each unit is given by the dot product of the input vector and the weight matrix:

$$
h = \begin{bmatrix} x_1 \, x_2 \cdots x_n \end{bmatrix} \cdot
\begin{bmatrix}
w_{11} & w_{21}\\
w_{12} & w_{22}\\
\vdots & \vdots\\
w_{1n} & w_{2n}
\end{bmatrix}
$$

We begin by importing the libraries that we will be using in our code:
* torch - used to bring in the torch package, which will be used in creating the neural network layers, weights and biases, as well as the backpropagation using autograd.
* nn - this module will be used for easy neural network creation.
* optim - used to optimize the neural network, adjusting the weights toward the optimal weights that give us the intended labels.

```python
import torch
from torch import nn, optim
import numpy as np
```

We're going to build a small neural network with only one example as shown below. Our goal is to build a network that will give the correct output `[0.0,1.0,0.0]`

```python
x = torch.tensor([0.1,0.2,0.7])
x = x.reshape(1,3)
print(x.shape)
print(x)
```

    torch.Size([1, 3])
    tensor([[0.1000, 0.2000, 0.7000]])

Our output will be given by the vector below:

```python
labels = torch.tensor([0,1,0])
labels = labels.reshape(1,3)
print(labels.shape)
print(labels)
```

    torch.Size([1, 3])
    tensor([[0, 1, 0]])

Here is where the neural network is made and all the fun begins! I will go through the code line by line.

```python
class Network(nn.Module):
```

Using `nn.Module` combined with `super().__init__()` creates a class that will assist us with tracking of the neural network methods and attributes. The class name can be changed to anything, although it is mandatory to inherit from `nn.Module`.

```python
self.hidden = nn.Linear(3,3)
```

This line creates a 3 X 3 matrix for the first hidden layer. `nn.Linear(3,3)` creates a linear transformation $x\mathbf{W} + b$, with 3 inputs and 3 outputs, and assigns it to `self.hidden`. This module also creates the weights and biases which will be used in the feedforward via the `forward` method.

The subsequent line, `self.hidden2 = nn.Linear(3,3)`, performs the same function as the first hidden layer. Here we generate a 3 X 3 tensor for the second hidden layer. A linear transformation creates the weight and bias tensors and assigns them to `self.hidden2`. This will later be used in the `forward` method.
```python
self.output = nn.Linear(3,3)
```

This creates a linear transformation of size 3 X 3 used to generate the output of the neural network. Its result feeds the softmax activation function in the output layer.

```python
self.hidden.weight = torch.nn.Parameter(torch.tensor([[0.1,0.3,0.4],[0.2,0.2,0.3],[0.3,0.7,0.9]]))
```

This line of code sets the exact weights that will be used in the code. This was done to enable comparison between the code and the hand calculations already performed. The `self.hidden2.weight` and `self.output.weight` tensors are also created to explicitly define the tensors for the second hidden layer and the output layer of the neural network.

```python
self.sigmoid = nn.Sigmoid()
self.relu = nn.ReLU()
self.softmax = nn.Softmax(dim=1)
```

These operations are the sigmoid, Rectified Linear Unit (ReLU) and softmax activation functions. The softmax is used in the output, where we will be generating 3 classes for the 3 output units. `dim=1` is used to ensure that the summation is done across the columns, as opposed to across the rows. The softmax outputs should always add to 1, as they form a probability distribution. Some things to note:

* The softmax activation function can ONLY be used in the output layer.
* The softmax function and the logistic/sigmoid function yield the same results when the number of classes = 2; therefore softmax is a generalization of the sigmoid function.

```python
def forward (self,x):
```

PyTorch neural networks require a `forward` method to be defined in order to perform the feedforward step. This is achieved by taking a tensor `x` and passing it through all the operations defined in the `__init__` method.

```python
x = self.hidden(x)
x = self.relu(x)
x = self.hidden2(x)
x = self.sigmoid(x)
x = self.output(x)
x = self.softmax(x)
```

Here we pass the input tensor `x` through the first hidden layer, the ReLU, the second hidden layer, the sigmoid, the output layer and finally the softmax function. The operations must be sequenced correctly in the `forward` method so that they are applied in the intended order.

```python
class Network(nn.Module):
    def __init__(self):
        super().__init__()

        # Inputs to hidden layer linear transformation
        self.hidden = nn.Linear(3, 3)
        self.hidden2 = nn.Linear(3, 3)
        self.output = nn.Linear(3, 3)

        self.hidden.weight = torch.nn.Parameter(torch.tensor([[0.1,0.3,0.4],[0.2,0.2,0.3],[0.3,0.7,0.9]]))
        self.hidden2.weight = torch.nn.Parameter(torch.tensor([[0.2,0.3,0.6],[0.3,0.5,0.4],[0.5,0.7,0.8]]))
        self.output.weight = torch.nn.Parameter(torch.tensor([[0.1,0.3,0.5],[0.4,0.7,0.2],[0.8,0.2,0.9]]))

        # Define sigmoid activation, ReLU and softmax output
        self.sigmoid = nn.Sigmoid()
        self.relu = nn.ReLU()
        self.softmax = nn.Softmax(dim=1)

    def forward(self, x):
        # Pass the input tensor through each of our operations
        x = self.hidden(x)
        x = self.relu(x)
        x = self.hidden2(x)
        x = self.sigmoid(x)
        x = self.output(x)
        x = self.softmax(x)

        return x
```

We can now create a `Network` object with `model = Network()`.

An epoch in machine learning is one pass through the same data; the number of epochs is how many such passes we make to optimize the weights and reduce the loss. For this neural network, we will use 50 epochs.

The next thing after the feedforward is the calculation of the loss of the neural network. We will be using the `nn.CrossEntropyLoss()` criterion. This criterion is a combination of `nn.LogSoftmax()` and `nn.NLLLoss`. NLLLoss stands for the Negative Log Likelihood Loss.
It is used when the softmax function is used in the output layer of the neural network. This loss maximizes the likelihood of the correct label while pushing the probabilities of the other classes toward 0.

`loss.backward()` performs the backpropagation and calculates the gradients for each weight and bias. PyTorch has a module, autograd, that enables us to automatically calculate the gradients of tensors. We can use this to calculate the gradients of the loss with respect to all of our parameters. Autograd tracks the operations performed during the feedforward, then calculates the gradients during backpropagation. Autograd essentially does differentiation using the chain rule from calculus.

During backpropagation, it is important to ensure that gradient calculations are turned on by setting `requires_grad = True` on a tensor. This can be done during tensor creation with the `requires_grad` keyword, or at any time using `y.requires_grad_(True)`. Gradients can be turned off in a block of code using `torch.no_grad()` or globally using `torch.set_grad_enabled(True|False)`.

To train the network, we use the [`optim` package](https://pytorch.org/docs/stable/optim.html). For this network, we use the `optim.Adam` optimizer. There are many others we could use, such as `optim.SGD`, which optimizes using stochastic gradient descent. It is critical to zero out the gradients in each epoch using `optimizer.zero_grad()` so that gradients are not accumulated across training steps. For this network, the learning rate `lr=0.09` was adequate to reach a minimum as fast as possible.

```python
model = Network()
epochs = 50
for t in range (1,epochs):
    criterion = nn.CrossEntropyLoss()
    logps = model(x)
    print(labels)
    loss = criterion(logps, torch.max(labels, 1)[1])
    loss.backward()
    optimizer = optim.Adam(model.parameters(), lr=0.09)
    optimizer.zero_grad()
    optimizer.step()
    print(loss)
    print(model.forward(x))
```

    tensor([[0, 1, 0]])
    tensor(1.2714, grad_fn=<NllLossBackward>)
    tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>)
    tensor([[0, 1, 0]])
    tensor(1.2714, grad_fn=<NllLossBackward>)
    tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>)
    tensor([[0, 1, 0]])
    tensor(1.2714, grad_fn=<NllLossBackward>)
    tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>)
    tensor([[0, 1, 0]])
    tensor(1.2714, grad_fn=<NllLossBackward>)
    tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>)
    tensor([[0, 1, 0]])
    tensor(1.2714, grad_fn=<NllLossBackward>)
    tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>)
    tensor([[0, 1, 0]])
    tensor(1.2714, grad_fn=<NllLossBackward>)
    tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>)
    tensor([[0, 1, 0]])
    tensor(1.2714, grad_fn=<NllLossBackward>)
    tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>)
    tensor([[0, 1, 0]])
    tensor(1.2714, grad_fn=<NllLossBackward>)
    tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>)
    tensor([[0, 1, 0]])
    tensor(1.2714, grad_fn=<NllLossBackward>)
    tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>)
    tensor([[0, 1, 0]])
    tensor(1.2714, grad_fn=<NllLossBackward>)
    tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>)
    tensor([[0, 1, 0]])
    tensor(1.2714, grad_fn=<NllLossBackward>)
    tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>)
    tensor([[0, 1, 0]])
    tensor(1.2714, grad_fn=<NllLossBackward>)
    tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>)
    tensor([[0, 1, 0]])
    tensor(1.2714, grad_fn=<NllLossBackward>)
tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>) tensor([[0, 1, 0]]) tensor(1.2714, grad_fn=<NllLossBackward>) tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>) tensor([[0, 1, 0]]) tensor(1.2714, grad_fn=<NllLossBackward>) tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>) tensor([[0, 1, 0]]) tensor(1.2714, grad_fn=<NllLossBackward>) tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>) tensor([[0, 1, 0]]) tensor(1.2714, grad_fn=<NllLossBackward>) tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>) tensor([[0, 1, 0]]) tensor(1.2714, grad_fn=<NllLossBackward>) tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>) tensor([[0, 1, 0]]) tensor(1.2714, grad_fn=<NllLossBackward>) tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>) tensor([[0, 1, 0]]) tensor(1.2714, grad_fn=<NllLossBackward>) tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>) tensor([[0, 1, 0]]) tensor(1.2714, grad_fn=<NllLossBackward>) tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>) tensor([[0, 1, 0]]) tensor(1.2714, grad_fn=<NllLossBackward>) tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>) tensor([[0, 1, 0]]) tensor(1.2714, grad_fn=<NllLossBackward>) tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>) tensor([[0, 1, 0]]) tensor(1.2714, grad_fn=<NllLossBackward>) tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>) tensor([[0, 1, 0]]) tensor(1.2714, grad_fn=<NllLossBackward>) tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>) tensor([[0, 1, 0]]) tensor(1.2714, grad_fn=<NllLossBackward>) tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>) tensor([[0, 1, 0]]) tensor(1.2714, grad_fn=<NllLossBackward>) tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>) tensor([[0, 1, 0]]) tensor(1.2714, grad_fn=<NllLossBackward>) tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>) tensor([[0, 1, 0]]) tensor(1.2714, grad_fn=<NllLossBackward>) tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>) tensor([[0, 1, 0]]) tensor(1.2714, grad_fn=<NllLossBackward>) tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>) tensor([[0, 1, 0]]) tensor(1.2714, grad_fn=<NllLossBackward>) tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>) tensor([[0, 1, 0]]) tensor(1.2714, grad_fn=<NllLossBackward>) tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>) tensor([[0, 1, 0]]) tensor(1.2714, grad_fn=<NllLossBackward>) tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>) tensor([[0, 1, 0]]) tensor(1.2714, grad_fn=<NllLossBackward>) tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>) tensor([[0, 1, 0]]) tensor(1.2714, grad_fn=<NllLossBackward>) tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>) tensor([[0, 1, 0]]) tensor(1.2714, grad_fn=<NllLossBackward>) tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>) tensor([[0, 1, 0]]) tensor(1.2714, grad_fn=<NllLossBackward>) tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>) tensor([[0, 1, 0]]) tensor(1.2714, grad_fn=<NllLossBackward>) tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>) tensor([[0, 1, 0]]) tensor(1.2714, grad_fn=<NllLossBackward>) tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>) tensor([[0, 1, 0]]) tensor(1.2714, grad_fn=<NllLossBackward>) tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>) tensor([[0, 1, 0]]) tensor(1.2714, grad_fn=<NllLossBackward>) tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>) tensor([[0, 1, 0]]) 
    tensor(1.2714, grad_fn=<NllLossBackward>)
    tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>)
    tensor([[0, 1, 0]])
    tensor(1.2714, grad_fn=<NllLossBackward>)
    tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>)
    tensor([[0, 1, 0]])
    tensor(1.2714, grad_fn=<NllLossBackward>)
    tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>)
    tensor([[0, 1, 0]])
    tensor(1.2714, grad_fn=<NllLossBackward>)
    tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>)
    tensor([[0, 1, 0]])
    tensor(1.2714, grad_fn=<NllLossBackward>)
    tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>)
    tensor([[0, 1, 0]])
    tensor(1.2714, grad_fn=<NllLossBackward>)
    tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>)
    tensor([[0, 1, 0]])
    tensor(1.2714, grad_fn=<NllLossBackward>)
    tensor([[0.2725, 0.1738, 0.5537]], grad_fn=<SoftmaxBackward>)

Below I take you through a quick example showing how autograd calculates and performs backpropagation.

```python
a = torch.randn(2,2, requires_grad=True)
print(a)
```

    tensor([[-0.3680, -1.1311],
            [ 0.1723, -1.1959]], requires_grad=True)

```python
y = a**3
print(y)
```

    tensor([[-0.0498, -1.4470],
            [ 0.0051, -1.7103]], grad_fn=<PowBackward0>)

Here, we see that `y` is created with a power operation, `PowBackward0`.

```python
m = y**2
print(m)
```

    tensor([[2.4846e-03, 2.0937e+00],
            [2.6132e-05, 2.9250e+00]], grad_fn=<PowBackward0>)

The `y` then goes through another power operation where it is squared.

```python
print(m.grad_fn)
```

    <PowBackward0 object at 0x12027d990>

Autograd tracks all the operations and calculates the gradients for each operation during backpropagation. Next we take a mean operation to get a scalar value from `m`.

```python
z = m.mean()
print(z)
```

    tensor(0.1634, grad_fn=<MeanBackward0>)

```python
print(a.grad)
#print((a**2)/3)
```

    None

We can see that there are no gradients, since no backpropagation has been done at this point. The gradient after `z.backward()` agrees exactly with the mathematical differentiation with respect to $a$ (here $n=4$ elements):

$$
\frac{\partial z}{\partial a} = \frac{\partial}{\partial a}\left[\frac{1}{n}\sum_i^n (a_i^3)^2\right] = \frac{3a^5}{2}
$$

```python
z.backward()
print(a.grad)
print((3*(a**5))/2)
```

    tensor([[ 1.6507e-04,  4.4403e-01],
            [-3.4622e-03,  7.2903e-01]])
    tensor([[ 1.6507e-04,  4.4403e-01],
            [-3.4622e-03,  7.2903e-01]], grad_fn=<DivBackward0>)

The result from autograd's `z.backward()` is the same as the differentiation using the chain rule from calculus.
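Note that in the training run above the printed loss never changes: the optimizer is re-created inside the loop and `optimizer.zero_grad()` is called between `loss.backward()` and `optimizer.step()`, so the freshly computed gradients are wiped before every update. A minimal reordering sketch of the same loop (not re-run here, so no outputs are shown):

```python
model = Network()
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.09)  # create the optimizer once

for t in range(50):
    optimizer.zero_grad()                            # clear old gradients first
    logps = model(x)
    loss = criterion(logps, torch.max(labels, 1)[1])
    loss.backward()                                  # compute fresh gradients
    optimizer.step()                                 # then update the weights
```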
8f75aac42786164895b657d5cc0df8ddb09d39e9
24,432
ipynb
Jupyter Notebook
Code for paper.ipynb
brianAsimba/Moon
8bc6a745c5ada85f4636a3de54bb12bc043e923a
[ "MIT" ]
null
null
null
Code for paper.ipynb
brianAsimba/Moon
8bc6a745c5ada85f4636a3de54bb12bc043e923a
[ "MIT" ]
null
null
null
Code for paper.ipynb
brianAsimba/Moon
8bc6a745c5ada85f4636a3de54bb12bc043e923a
[ "MIT" ]
1
2019-04-26T08:35:17.000Z
2019-04-26T08:35:17.000Z
37.99689
677
0.574411
true
5,643
Qwen/Qwen-72B
1. YES 2. YES
0.92079
0.76908
0.708161
__label__eng_Latn
0.952628
0.483627
# **Basic Python & Jupyter** ## **Load YT Video** ```python #load yt video from IPython.display import YouTubeVideo YouTubeVideo("HW29067qVWk",560,315,rel=0) ``` ## **Interactive Widget** ```python from ipywidgets import * import ipywidgets as widgets import numpy as np def f(x): return x interact(f, x=10); ``` interactive(children=(IntSlider(value=10, description='x', max=30, min=-10), Output()), _dom_classes=('widget-… ```python def say_something(x): """ Print the current widget value in short sentence """ print(f'Widget says: {x}') widgets.interact(say_something, x=[0, 1, 2, 3]) widgets.interact(say_something, x=(0, 10, 1)) widgets.interact(say_something, x=(0, 10, .5)) _ = widgets.interact(say_something, x=True) ``` interactive(children=(Dropdown(description='x', options=(0, 1, 2, 3), value=0), Output()), _dom_classes=('widg… interactive(children=(IntSlider(value=5, description='x', max=10), Output()), _dom_classes=('widget-interact',… interactive(children=(FloatSlider(value=5.0, description='x', max=10.0, step=0.5), Output()), _dom_classes=('w… interactive(children=(Checkbox(value=True, description='x'), Output()), _dom_classes=('widget-interact',)) ## **Plot** ```python %matplotlib inline import matplotlib.pyplot as plt x = np.linspace(0,10) y = np.sin(x) z = np.cos(x) plt.plot(x,y,'b',x,z,'r') plt.xlabel('Radians'); plt.ylabel('Value'); plt.title('Plotting Demonstration') plt.legend(['Sin','Cos']) plt.grid() ``` ## **Interactive Plot** ```python x = np.linspace(0, 2 * np.pi) def update(w = 1.0): fig = plt.figure() ax = fig.add_subplot(1, 1, 1) ax.plot(x, np.sin(w * x)) fig.canvas.draw() interact(update); ``` interactive(children=(FloatSlider(value=1.0, description='w', max=3.0, min=-1.0), Output()), _dom_classes=('wi… ## **Symbolic on Python** ```python import sympy as sym sym.var('P V n R T'); # Gas constant R = 8.314 # J/K/gmol R = R * 1000 # J/K/kgmol # Moles of air mAir = 1 # kg mwAir = 28.97 # kg/kg-mol n = mAir/mwAir # kg-mol # Temperature T = 298 # Equation eqn = sym.Eq(P*V,n*R*T) # Solve for P f = sym.solve(eqn,P) print(f[0]) # Use the sympy plot function to plot sym.plot(f[0],(V,1,10),xlabel='Volume m**3',ylabel='Pressure Pa') ``` ```python ```
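As a quick numerical check of the symbolic solution above, a sketch that turns `f[0]` into a plain function of `V` with `sym.lambdify` and evaluates it; for the given `n`, `R` and `T`, the pressure at V = 1 m**3 should come out around 8.6e4 Pa:

```python
p_of_v = sym.lambdify(V, f[0], 'numpy')  # compile the symbolic expression to a function
print(p_of_v(1.0))                       # pressure in Pa at V = 1 m**3
```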
21fcf85a76fe8f1435a6ba704c5fdc375a5e92be
154,504
ipynb
Jupyter Notebook
sistem_kendali_pertemuan_1.ipynb
2black0/python-control-laboratory
005c15b6750c807c69a625b321ee04624acce8d9
[ "MIT" ]
null
null
null
sistem_kendali_pertemuan_1.ipynb
2black0/python-control-laboratory
005c15b6750c807c69a625b321ee04624acce8d9
[ "MIT" ]
null
null
null
sistem_kendali_pertemuan_1.ipynb
2black0/python-control-laboratory
005c15b6750c807c69a625b321ee04624acce8d9
[ "MIT" ]
null
null
null
79.847028
30,430
0.761715
true
731
Qwen/Qwen-72B
1. YES 2. YES
0.757794
0.83762
0.634744
__label__eng_Latn
0.264048
0.313053
# Basic Image Standards

In this notebook we are going to explore some of the different standards that are used to store photographs and similar images. Specifically, we will explore the following standards:

* [TGA](https://en.wikipedia.org/wiki/Truevision_TGA)
* [PNG](https://www.w3.org/TR/2003/REC-PNG-20031110/)
* [TIFF](https://www.adobe.io/open/standards/TIFF.html)
* [JPEG](https://jpeg.org/jpeg/)

```python
!python -m pip install -U git+https://github.com/chapmanbe/dminteract#egg=dminteract
```

```python
from dminteract.modules.m4c import *
import warnings
warnings.filterwarnings('ignore')
```

We are going to start with a sample photograph of my daughter (used with her permission!).

```python
display(question_banks["qbank1"]["photo1, qbank1"])
```

### Once you have typed your thoughts about the metadata...

Let's look at the metadata that is stored for our simplest image format, TGA.

```python
print(view_img_metadata("./data/daughter.tga"))
```

```python
display(question_banks["qbank1"]["photo2, qbank1"])
```

### Now let's look at the TIFF representation

```python
print(view_img_metadata("./data/daughter.tiff"))
```

```python
display(question_banks["qbank1"]["photo3, qbank1"])
```

## Now let's look at PNG

```python
print(view_img_metadata("./data/daughter.png"))
```

### That is a lot more information!

* Now we have lots of information about how the photograph was created
    * Camera make and model
    * Camera settings (e.g. PhotographicSensitivity=200)
* Note that there are values that obviously need interpretation
    * What in the world does `MeteringMode=3` mean? Or `LightSource=9`?
* Notice that there is some sense of file history:
    * DateTimeOriginal=2005:07:08 19:51:47
    * DateTime=2008:12:23 12:50:21
    * modify=2017-02-06T03:47:30+00:00
    * create=2020-03-25T23:34:30+00:00
* Endianness has changed!
* There is now explicit information about how color is represented: `Pixel format: RGB`.
    * Did TGA and TIFF only have one choice?

### Now let's look at the original JPEG image

```python
print(view_img_metadata("./data/daughter.jpg"))
```

```python
display(question_banks["qbank1"]["photo4, qbank1"])
```

### What are the respective file sizes?

```python
!ls -ltra data/daughter.*
```

The TIFF and TGA images are uncompressed, so they only differ by the size of the header (metadata). PNG uses **lossless** compression, so it is substantially smaller than the TIFF/TGA but larger than the JPEG image, which uses **lossy** compression.

### Here is a newer image

```python
print(view_img_metadata("./data/skiing.jpg"))
```

```python
display(question_banks["qbank1"]["photo5, qbank1"])
```

## Exercise

Now let's try to reverse engineer an image standard. You have been given 3 "images" in a proprietary format and software to render the images. You are trying to reverse engineer the standard being used to represent the images. You can "peek" at the raw data in the files on the computer system and see that the files are the following lists of numbers for each image respectively:

\begin{eqnarray}
1, 5, 2, 10 , 5 , 3 , 7 , -6 , 4 , 2 , 0 , 21 , 11 , -2 , 17\\
2 , 3 , 2 , 1 , 0 , 5 , -1 , 17 , 11 , -5 , 6 \\
2, 2, 3, 2, 3, 2, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144
\end{eqnarray}
When the "images" are "rendered" to the screen, this is what you see: \begin{equation} \begin{array}{ccccc} (-7,2) & (-16, -1) & (-8,-5) & (11,1) & (-12,12) \end{array} \end{equation} --------------------- \begin{equation} \begin{array}{cc} 5 & -1 \\ 17 & 11\\ -5 & 6 \end{array} \end{equation} ------------------ \begin{equation} \begin{array}{ccc} (-2,-1) & (-1, 1) & (2, 6) \\ (10, 19) & (31, 53) & (86,142) \end{array} \end{equation}. ------------------ The you know the image data consists of a **header** followed by **values,** but you don't know where the **header** ends and the **values** begin. The values make up the elements (e.g. pixel--"picture element") of the picture. The header describes the structure and nature of the values. Based on your analysis, answer the following questions about the header and values * The standard has a fixed header size (that is, the header size is the same regardless of the nature of the values) (T/F) (F) * The values are stored by rows. T * Image values can be multidimensional. T * The rendered values are modified by multiplying baseline value(s) defined in the header. F. They are subtracted. ```python for q in question_banks["qbank2"].values(): display(q) ``` ### [Move onto the next notebook](./dicom_intro.ipynb)
a3fd3aef1da40fb4dc053b18601afe5e69936bd1
8,539
ipynb
Jupyter Notebook
m4c_imgs/basic_image_standards.ipynb
chapmanbe/isys90069_2020_exploration
e73249d391acf195c4779955e3cb84f6562d42f6
[ "MIT" ]
null
null
null
m4c_imgs/basic_image_standards.ipynb
chapmanbe/isys90069_2020_exploration
e73249d391acf195c4779955e3cb84f6562d42f6
[ "MIT" ]
null
null
null
m4c_imgs/basic_image_standards.ipynb
chapmanbe/isys90069_2020_exploration
e73249d391acf195c4779955e3cb84f6562d42f6
[ "MIT" ]
null
null
null
26.684375
383
0.544092
true
1,324
Qwen/Qwen-72B
1. YES 2. YES
0.654895
0.857768
0.561748
__label__eng_Latn
0.977062
0.143458
``` %matplotlib inline from sympy import * init_printing() x = symbols('x') ``` ``` f = 1 - sqrt((1+x)/x) f ``` ``` plot(f, (x, 0, 1)) ``` ``` f_strich = simplify(diff(f, x)) f_strich ``` ``` krel = Abs(simplify(f_strich * x / f)) krel ``` $$\frac{1}{2} \left\lvert{\frac{\sqrt{\frac{1}{x} \left(x + 1\right)}}{\left(x + 1\right) \left(\sqrt{\frac{1}{x} \left(x + 1\right)} - 1\right)}}\right\rvert$$ ``` plot(krel, (x, 0, 10)) ``` ``` krel_strich = simplify(diff(krel,x)) krel_strich ``` /usr/lib/python3.3/site-packages/IPython/core/formatters.py:239: FormatterWarning: Exception in image/png formatter: \frac{1}{2 \left\lvert{\frac{\sqrt{\left(x + 1\right) / x}}{\left(x + 1\right) \left(\sqrt{\left(x + 1\right) / x} - 1\right)}}\right\rvert} \left(\Re {\left (\frac{\sqrt{\left(x + 1\right) / x}}{\left(x + 1\right) \left(\sqrt{\left(x + 1\right) / x} - 1\right)} \right )} \frac{d}{d x} \Re {\left (\frac{\sqrt{\left(x + 1\right) / x}}{\left(x + 1\right) \left(\sqrt{\left(x + 1\right) / x} - 1\right)} \right )} + \Im {\left ( \frac{\sqrt{\left(x + 1\right) / x}}{\left(x + 1\right) \left(\sqrt{\left(x + 1\right) / x} - 1\right)} \right )} \frac{d}{d x} \Im {\left ( \frac{\sqrt{\left(x + 1\right) / x}}{\left(x + 1\right) \left(\sqrt{\left(x + 1\right) / x} - 1\right)} \right )}\right) ^ Expected a delimiter (at char 16), (line:1, col:17) FormatterWarning, $$\frac{1}{2 \left\lvert{\frac{\sqrt{\frac{1}{x} \left(x + 1\right)}}{\left(x + 1\right) \left(\sqrt{\frac{1}{x} \left(x + 1\right)} - 1\right)}}\right\rvert} \left(\Re {\left (\frac{\sqrt{\frac{1}{x} \left(x + 1\right)}}{\left(x + 1\right) \left(\sqrt{\frac{1}{x} \left(x + 1\right)} - 1\right)} \right )} \frac{d}{d x} \Re {\left (\frac{\sqrt{\frac{1}{x} \left(x + 1\right)}}{\left(x + 1\right) \left(\sqrt{\frac{1}{x} \left(x + 1\right)} - 1\right)} \right )} + \Im {\left ( \frac{\sqrt{\frac{1}{x} \left(x + 1\right)}}{\left(x + 1\right) \left(\sqrt{\frac{1}{x} \left(x + 1\right)} - 1\right)} \right )} \frac{d}{d x} \Im {\left ( \frac{\sqrt{\frac{1}{x} \left(x + 1\right)}}{\left(x + 1\right) \left(\sqrt{\frac{1}{x} \left(x + 1\right)} - 1\right)} \right )}\right)$$ ``` ``` ``` ```
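A numerical spot-check of the relative condition number is possible with `lambdify` (already in scope via `from sympy import *`); a sketch, with the printed values not recorded here:

```
krel_num = lambdify(x, krel, 'numpy')  # compile krel(x) into a numpy function
for xv in [0.1, 1.0, 10.0, 100.0]:
    print(xv, krel_num(xv))
```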
5b0e51177c23a0436f5d67a9e30b8e2330e52ab1
47,130
ipynb
Jupyter Notebook
Aufgabe 5b).ipynb
bschwb/Numerik
dcd178847104c382474142eae3365b6df76d8dbf
[ "MIT" ]
null
null
null
Aufgabe 5b).ipynb
bschwb/Numerik
dcd178847104c382474142eae3365b6df76d8dbf
[ "MIT" ]
null
null
null
Aufgabe 5b).ipynb
bschwb/Numerik
dcd178847104c382474142eae3365b6df76d8dbf
[ "MIT" ]
null
null
null
160.306122
9,057
0.770592
true
926
Qwen/Qwen-72B
1. YES 2. YES
0.879147
0.654895
0.575749
__label__ltz_Latn
0.102476
0.175987
## Theorem 0.0.2 (Approximate Caratheodory's Theorem) In this worksheet we run through the proof of approximate Caratheodory, keeping an example to work with as we go. Please fill in code where indicated. Here is the theorem (slightly generalized) for reference: **Theorem 0.0.2** (Generalized)**.** *Consider a set $T \subset \mathbb{R}^n$. Then, for every point $x \in \text{conv}(T)$ and every integer $k$, one can find points $x_1,\ldots,x_k \in T$ such that* $$ \left\lVert x - \frac{1}{k}\sum_{j=1}^k x_j \right\rVert \leq \frac{\text{diam}(T)}{\sqrt k}$$ ```python # Some useful imports import matplotlib.pyplot as plt import numpy as np import random as rand import math ``` For the purposes of the code, let us fix a set $T \subset \mathbb R^n$ and an integer $k$ to work with. ```python # Choose a dimension to work in n = 3 # Take some set of points with the points listed as np arrays of length n. T = [np.zeros(3),np.zeros(3)] # TODO: write this! # Pick a k, any k k = 5 ``` Let $m = |T|$. ```python m = 2 # TODO: Set m to the size of T (ideally, don't hard-code this in) ``` Our task is to find some $x \in \text{conv}(T)$ and express it as a convex combination of vectors in $T$. Recalling the definition of the convex hull, this amounts to finding an assignment of coefficients $\lambda_1, \ldots, \lambda_m$, so that $\lambda_1 + \cdots + \lambda_m = 1$ and $\lambda_1,\ldots,\lambda_m \geq 0$. Below we present one way of doing this, but it would be nice to be able to sample more uniformly from $\text{conv}(T)$. See if you can improve it! ```python # Find random coefficients lambda_1,...,lambda_m as necessary. # To recap, these satisfy: # 1. lambda_1 + ... + lambda_m = 1 # 2. lambda_1, ... , lambda_m >= 0 coefficients = [0]*m # initialize list of coefficients perm = np.random.permutation(m) # get a random permutation of 0,...,m-1 accum = 1.0 for i in perm[:-1]: # sample from viable range coefficients[i] = np.random.uniform(0,accum) accum = accum - coefficients[i] coefficients[perm[-1]] = accum coefficients ``` [0.01639505649315731, 0.9836049435068427] We can now get $x \in \text{conv}(T)$ with $x = \lambda_1 z_1 + \cdots + \lambda_m z_m$ where $z_1, \ldots, z_m$ are the elements of $T$. ```python # Construct x x = np.zeros(n) # TODO: Write this! x ``` array([0., 0., 0.]) Now, we interpret the definition of convex combination probabilistically, with $\lambda_i$ taking the roles of probabilities. Specifically, we can define a random vector $Z$ that takes values $z_i$ with probabilities $\lambda_i$: $$ \mathbb P \{Z = z_i\} = \lambda_i, \ \ \ i = 1, \ldots, m. $$ ```python # We can sample from the distribution of this random variable as follows: Z_sample = rand.choices(T, weights = coefficients) print(Z_sample) ``` [array([0., 0., 0.])] Consider independent copies $Z_1, \ldots, Z_k$ of $Z$. Then we are interested in the random variable $$\left\lVert x - \frac{1}{k} \sum_{j=1}^k Z_j \right\rVert_2^2$$. 
```python
# Sample from the distribution of the random variable specified above, and return
# the sampled value along with the assignments to Z_1,...,Z_k
def sample_from_distribution():
    vectors = rand.choices(T, weights = coefficients, k = k)
    vec_sum = np.zeros(len(T[0]))
    for vec in vectors:
        vec_sum = vec_sum + vec
    distance = np.sum((x - vec_sum/k)**2)
    return distance, vectors
```

In the proof of the theorem it is shown that

\begin{equation}
\mathbb E \left\lVert x - \frac{1}{k} \sum_{j=1}^k Z_j \right\rVert_2^2 \leq \frac{\text{diam}^2(T)}{k} \tag{1}
\end{equation}

Prove that this generalization to an arbitrary set is true.

We now find $\text{diam}(T)$. Recall $\text{diam}(T) = \sup_{x,y \in T} \lVert x - y \rVert$

```python
# Compute diam(T)
diamT = 0 # TODO: Write this!
```

We can now verify inequality $(1)$ for our set $T$.

```python
numSamples = 100000
samples = []
bestVal = float("inf")
best = None
for i in range(numSamples):
    sample, vectors = sample_from_distribution()
    if sample < bestVal:
        best = vectors
        bestVal = sample
    samples.append(sample)
```

```python
# plotting the example
num_bins = 10
plt.hist(samples, num_bins, facecolor='blue', alpha=0.5)
plt.axvline((diamT**2)/k, color='k', linestyle='dashed', linewidth=1.5, label = "$diam^2(T)/k$")
plt.axvline(np.mean(samples), color='r', linestyle='dashed', linewidth=1.5, label = "mean" )
plt.legend()
plt.show()
print("Best assignment (with value",bestVal, "): ")
print(best)
```

Copyright (c) 2020 TRIPODS/GradStemForAll 2020 Team
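For reference, one possible completion of the worksheet's TODO cells, written as a sketch; these assignments would replace the corresponding cells above (before the coefficients are sampled), and the choice of $T$ here — the standard basis vectors — is just one example:

```python
n = 3
T = [np.eye(n)[i] for i in range(n)]   # example set: the standard basis vectors
m = len(T)                             # size of T, without hard-coding

# x in conv(T): the convex combination with the sampled coefficients
x = np.zeros(n)
for lam, z in zip(coefficients, T):
    x = x + lam * z

# diam(T): the largest pairwise distance between points of T
diamT = max(np.linalg.norm(zi - zj) for zi in T for zj in T)
```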
1e90e553da75dcb6bd38052b94bf7b749891dbd0
18,936
ipynb
Jupyter Notebook
2. Tuesday/Theorem0.0.2.ipynb
ArianNadjim/Tripods
f3c973251870e2e64af798f802798704d2f0249e
[ "MIT" ]
3
2020-08-10T02:19:44.000Z
2020-08-13T23:33:38.000Z
2. Tuesday/Theorem0.0.2.ipynb
ArianNadjim/Tripods
f3c973251870e2e64af798f802798704d2f0249e
[ "MIT" ]
null
null
null
2. Tuesday/Theorem0.0.2.ipynb
ArianNadjim/Tripods
f3c973251870e2e64af798f802798704d2f0249e
[ "MIT" ]
3
2020-08-10T17:38:32.000Z
2020-08-12T15:29:08.000Z
59.54717
10,332
0.768166
true
1,433
Qwen/Qwen-72B
1. YES 2. YES
0.835484
0.709019
0.592374
__label__eng_Latn
0.944514
0.214613
# Implementing FIR filters

In real-time filtering applications, filters are implemented by using some variation or other of their constant-coefficient difference equation (CCDE), so that one new output sample is generated for each new input sample. If all input data is available in advance, as in non-real-time (aka "offline") applications, then the CCDE-based algorithm is iteratively applied to all samples in the buffer. In the case of FIR filters, the CCDE coefficients correspond to the impulse response and implementing the CCDE is equivalent to performing a convolution sum. In this notebook we will look at different ways to implement FIR filters.

```python
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
```

## Online implementation

The classic way to implement a filter is the one-in one-out approach. We will need to implement a persistent delay line. In Python we can either define a class or use function attributes; classes are tidier and reusable:

```python
class FIR_loop():
    def __init__(self, h):
        self.h = h
        self.ix = 0
        self.M = len(h)
        self.buf = np.zeros(self.M)

    def filter(self, x):
        y = 0
        self.buf[self.ix] = x
        for n in range(0, self.M):
            y += self.h[n] * self.buf[(self.ix+self.M-n) % self.M]
        self.ix = (self.ix + 1) % self.M
        return y
```

```python
# simple moving average:
h = np.ones(5)/5

f = FIR_loop(h)
for n in range(0, 10):
    print(round(f.filter(n), 3), end=' ')
```

    0.0 0.2 0.6 1.2 2.0 3.0 4.0 5.0 6.0 7.0

While there's nothing wrong with the above implementation, when the data to be filtered is known in advance, it makes no sense to explicitly iterate over its elements and it's better to use higher-level commands to perform the convolution. In Numpy, the command is `convolve`; before we use it, though, we need to take border effects into consideration.

## Offline implementations: border effects

When filtering a finite-length data vector with a finite-length impulse response, we need to decide what to do with the "invalid" shifts appearing in the terms of the convolution sum. Remember that, in the infinite-length case, the output is defined as

$$ y[n] = \sum_{k=-\infty}^{\infty} h[k]x[n-k] $$

Let's say that the impulse response is $M$ points long, so that $h[n]$ is nonzero only between $0$ and $M-1$; this means that the sum is reduced to

$$ y[n] = \sum_{k=0}^{M-1} h[k]x[n-k] $$

Now assume that $x[n]$ is a length-$N$ signal, so it is defined only for $0 \leq n \leq N-1$ (we can safely consider $N > M$, otherwise exchange the roles of $x$ and $h$). In this case, the above sum is properly defined only for $M - 1 \le n \le N-1$; for any other value of $n$, the sum will contain an element $x[n-k]$ outside of the valid range of indices for the input. So, if we start with an $N$-point input, we can only formally compute $N-M+1$ output samples. While this may not be a problem in some applications, it certainly is troublesome if repeated filtering operations end up "chipping away" at the signal little by little.

The solution is to "embed" the finite-length input data signal into an infinite-length sequence and, as always, the result will depend on the method we choose: finite support or periodization. (Note that the impulse response is already an infinite sequence since it's the response of the filter to the infinite sequence $\delta[n]$). However, the embedding will create "artificial" data points that are dependent on the chosen embedding: these data points are said to suffer from **border effects**.
Let's build a simple signal and a simple FIR filter:

```python
# let's use a simple moving average:
M = 5
h = np.ones(M)/float(M)

# let's build a signal with a ramp and a plateau
x = np.concatenate((np.arange(1, 9), np.ones(5) * 8, np.arange(8,0,-1)))
plt.stem(x, use_line_collection=True);
print(f'signal length: {len(x)}')
```

### 1) No border effects

We may choose to accept the loss of data points and use only the $N-M+1$ output samples that correspond to a full overlap between the input data and the impulse response. This can be achieved by selecting `mode='valid'` in `convolve`:

```python
y = np.convolve(x, h, mode='valid')
print(f'signal length: {len(y)}')
plt.stem(y, use_line_collection=True);
```

### 2) Finite-support extension

By embedding the input into a finite-support signal, the convolution sum is now well defined for all values of $n$, which creates a new problem: the output will be nonzero for all values of $n$ for which $x[n-k]$ is nonzero, that is for $0 \le n \le N+M-2$: we end up with a *longer* support for the output sequence. This is the default in `convolve` and corresponds to `mode='full'`:

```python
y = np.convolve(x, h, mode='full')
print(f'signal length: {len(y)}')
plt.stem(y, use_line_collection=True);
```

If we want to preserve the same length for input and output, we need to truncate the result. You can keep the *first* $N$ samples and discard the tail; this corresponds to the online implementation of the FIR filter. Alternatively, you can discard half the extra samples from the beginning and half from the end of the output and distribute the border effect evenly; this is achieved in `convolve` by setting `mode='same'`:

```python
y = np.convolve(x, h, mode='same')
print(f'signal length: {len(y)}')
plt.stem(y, use_line_collection=True);
```

### 3) Periodic extension

As we know, the other way of embedding a finite-length signal is to build a periodic extension. The convolution in this case will return an $N$-periodic output:

$$ \tilde{y}[n] = \sum_{k=0}^{M-1} h[k]\tilde{x}[n-k] $$

We can easily implement a circular convolution using `convolve` like so: since the overlap between time-reversed impulse response and input is already good for the last $N-M$ points in the output, we just need to consider two periods of the input to compute the first $M$:

```python
def cconv(x, h):
    # as before, we assume len(h) < len(x)
    L = len(x)
    xp = np.concatenate((x,x))
    # full convolution
    y = np.convolve(xp, h)
    return y[L:2*L]
```

```python
y = cconv(x, h)
print(f'signal length: {len(y)}')
plt.stem(y, use_line_collection=True);
```

OK, clearly the result is not necessarily what we expected; note however that in both circular and "normal" convolution, you still have $M-1$ output samples "touched" by border effects, it's just that the border effects act differently in the two cases.

Interestingly, you can still obtain a "normal" convolution using a circular convolution if you zero-pad the input signal with $M-1$ zeros:

```python
y = cconv(np.concatenate((x, np.zeros(M-1))), h)
print(f'signal length: {len(y)}')
plt.stem(y, use_line_collection=True);
# plot in red the *difference* with the standard conv
plt.stem(y - np.convolve(x, h, mode='full'), markerfmt='ro', use_line_collection=True);
```

Why is this interesting? Because of the DFT....

## Offline implementations using the DFT

The convolution theorem states that, for infinite sequences,

$$ (x\ast y)[n] = \mbox{IDTFT}\{X(e^{j\omega})Y(e^{j\omega})\}[n] $$

Can we apply this result to the finite-length case?
In other words, what is the inverse DFT of the product of two DFTs? Let's see: \begin{align} \sum_{k=0}^{N-1}X[k]Y[k]e^{j\frac{2\pi}{N}nk} &= \sum_{k=0}^{N-1}\sum_{p=0}^{N-1}x[p]e^{-j\frac{2\pi}{N}pk}\sum_{q=0}^{N-1}y[q]e^{-j\frac{2\pi}{N}qk} \,e^{j\frac{2\pi}{N}nk} \\ &= \sum_{p=0}^{N-1}\sum_{q=0}^{N-1}x[p]y[q]\sum_{k=0}^{N-1}e^{j\frac{2\pi}{N}(n-p-q)k} \\ &= N\sum_{p=0}^{N-1}x[p]y[(n-p) \mod N] \end{align} The results follows from the fact that $\sum_{k=0}^{N-1}e^{j\frac{2\pi}{N}(n-p-q)k}$ is nonzero only for $n-p-q$ multiple of $N$; as $p$ varies from $0$ to $N-1$, the corresponding value of $q$ between $0$ and $N$ that makes $n-p-q$ multiple of $N$ is $(n-p) \mod N$. So the fundamental result is: **the inverse DFT of the product of two DFTs is the circular convolution of the underlying time-domain sequences!** To apply this result to FIR filtering, the first step is to choose the space for the DFTs. In our case we have a finite-length data vector of length $N$ and a finite-support impulse response of length $M$ with $M<N$ so let's operate in $\mathbb{C}^N$ by zero-padding the impulse response to size $N$. Also, we most likely want the normal convolution, so let's zero-pad both signals by an additional $M-1$ samples ```python def DFTconv(x, h, mode='full'): # we want the compute the full convolution N = len(x) M = len(h) X = np.fft.fft(x, n=N+M-1) H = np.fft.fft(h, n=N+M-1) # we're using real-valued signals, so drop the imaginary part y = np.real(np.fft.ifft(X * H)) if mode == 'valid': # only N-M+1 points, starting at M-1 return y[M-1:N] elif mode == 'same': return y[int((M-1)/2):int((M-1)/2+N)] else: return y ``` Let's verify that the results are the same ```python y = np.convolve(x, h, mode='valid') print(f'signal length: {len(y)}') plt.stem(y, use_line_collection=True); y = DFTconv(x, h, mode='valid') print(f'signal length: {len(y)}') plt.stem(y, markerfmt='ro', use_line_collection=True); ``` ```python y = np.convolve(x, h, mode='same') print(f'signal length: {len(y)}') plt.stem(y, use_line_collection=True); y = DFTconv(x, h, mode='same') print(f'signal length: {len(y)}') plt.stem(y, markerfmt='ro', use_line_collection=True); ``` Of course the question at this point is: why go through the trouble of taking DFTs if all we want is the standard convolution? The answer is: **computational efficiency.** If you look at the convolution sum, each output sample requires $M$ multiplications (and $M-1$ additions but let's just consider multiplications). In order to filter an $N$-point signal we will need $NM$ multiplications. Assume $N \approx M$ and you can see that the computational requirements are on the order of $M^2$. If we go the DFT route using an efficient FFT implementation we have approximately: * $M\log_2 M$ multiplication to compute $H[k]$ * $M\log_2 M$ multiplication to compute $X[k]$ * $M\log_2 M$ multiplication to compute $X[k]H[k]$ * $M\log_2 M$ multiplication to compute the inverse DFT Even considering that we now have to use complex multiplications (which will cost twice as much), we can estimate the cost of the DFT based convolution at around $8M\log_2M$, which is smaller than $M^2$ as soon as $M>44$. In practice, the data vector is much longer than the impulse response so that filtering via standard convolution requires on the order of $MN$ operations. 
Two techniques, called [Overlap Add](https://en.wikipedia.org/wiki/Overlap%E2%80%93add_method) and [Overlap Save](https://en.wikipedia.org/wiki/Overlap%E2%80%93save_method) can be used to divide the convolution into $N/M$ independent convolutions between $h[n]$ and an $M$-sized piece of $x[n]$; FFT-based convolution can then be used on each piece. While the exact cost per sample of each technique is a bit complicated to estimate, as a rule of thumb **as soon as the impulse response is longer than 50 samples, it's more convenient to use DFT-based filtering.** ```python ```
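As a sketch of the block idea (overlap-add, without the exact bookkeeping of the linked articles): split $x$ into blocks, filter each block with the DFT-based convolution defined above, and let the $M-1$-sample tails add up in the output buffer:

```python
def overlap_add(x, h, B=8):
    """FFT filtering of x in blocks of length B; block tails overlap and add."""
    N, M = len(x), len(h)
    y = np.zeros(N + M - 1)
    for start in range(0, N, B):
        yb = DFTconv(x[start:start + B], h, mode='full')  # one block's convolution
        y[start:start + len(yb)] += yb                    # accumulate the tail
    return y

# should match the direct linear convolution
print(np.allclose(overlap_add(x, h), np.convolve(x, h, mode='full')))
```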
f70e9b9cc4638ee5b700d21c698c697a12f62d4d
71,905
ipynb
Jupyter Notebook
FIRimplementation/FIRImplementation.ipynb
hellgheast/COM303
48cfaf2ee2826662dd8f47f7aed8d7caf69ac489
[ "MIT" ]
null
null
null
FIRimplementation/FIRImplementation.ipynb
hellgheast/COM303
48cfaf2ee2826662dd8f47f7aed8d7caf69ac489
[ "MIT" ]
null
null
null
FIRimplementation/FIRImplementation.ipynb
hellgheast/COM303
48cfaf2ee2826662dd8f47f7aed8d7caf69ac489
[ "MIT" ]
null
null
null
126.149123
7,120
0.865976
true
3,142
Qwen/Qwen-72B
1. YES 2. YES
0.839734
0.877477
0.736847
__label__eng_Latn
0.995584
0.550274
# Coherent dark states and polarization switching

Studying the effect of polarization switching on coherent dark states in a 9-level system. The system is made of two ground states and one excited state, each with J = 1, for a total of nine levels. Basically just 3x the 3-level system studied in "Coherent dark states in a 3-level system.ipynb". Note that the system actually has two polarization dark states, which significantly complicates things.

## Imports

Start by importing the necessary packages

```python
%load_ext autoreload
%autoreload 2

import joblib
import matplotlib.pyplot as plt
plt.style.use("ggplot")
import numpy as np
import scipy
import qutip
from sympy import Symbol

from toy_systems.couplings import FirstRankCouplingJ, ToyEnergy
from toy_systems.dark_states import get_dark_states
from toy_systems.decays import CouplingDecay, ToyDecay
from toy_systems.hamiltonian import Hamiltonian
from toy_systems.operators import JRotation
from toy_systems.quantum_system import QuantumSystem
from toy_systems.states import Basis, BasisState, JQuantumNumbers, ToyQuantumNumbers
from toy_systems.utils import generate_P_op
```

    The autoreload extension is already loaded. To reload it, use:
      %reload_ext autoreload

## Set up states and basis

We start by defining the three levels of the system: the ground states $|g0\rangle$ and $|g1\rangle$, and the excited state $|e\rangle$.

```python
g0s = [BasisState(qn=JQuantumNumbers(label="g0", J = 1, mJ = mJ)) for mJ in np.arange(-1,2)]
g1s = [BasisState(qn=JQuantumNumbers(label="g1", J = 1, mJ = mJ)) for mJ in np.arange(-1,2)]
es = [BasisState(qn=JQuantumNumbers(label="e", J = 1, mJ = mJ)) for mJ in np.arange(-1,2)]
dump = [BasisState(qn = ToyQuantumNumbers(label="dump"))] # A state where the excited state can decay if desired

# Define basis
basis = Basis(g0s+g1s+es+dump)
basis.print()
```

    |0> = JQuantumNumbers(J=1, mJ=-1, label='g0')
    |1> = JQuantumNumbers(J=1, mJ=0, label='g0')
    |2> = JQuantumNumbers(J=1, mJ=1, label='g0')
    |3> = JQuantumNumbers(J=1, mJ=-1, label='g1')
    |4> = JQuantumNumbers(J=1, mJ=0, label='g1')
    |5> = JQuantumNumbers(J=1, mJ=1, label='g1')
    |6> = JQuantumNumbers(J=1, mJ=-1, label='e')
    |7> = JQuantumNumbers(J=1, mJ=0, label='e')
    |8> = JQuantumNumbers(J=1, mJ=1, label='e')
    |9> = |dump>

## Define energies, couplings, decays and quantum system

I'm going to define the system in the rotating frame as usual.

### Energies

```python
δ = Symbol('delta') # Energy splitting between |g0> and |g1>
Δ = Symbol('Delta') # Detuning of drive field from 0

E0 = ToyEnergy(g0s, -δ/2)
E1 = ToyEnergy(g1s, +δ/2)
Ee = ToyEnergy(es, Δ)
```

### Couplings

We will treat the problem as if it has two time-dependent laser fields: one polarized along z and the other along x.
The polarization will rotate back and forth between the two directions ```python Ωz = Symbol('Omega_z') # Drive field Rabi rate for z-polarization Ωx = Symbol('Omega_x') # Drive field Rabi rate for x-polarization # A condition to make sure the ground states don't get coupled to each other def both_not_ground(state1, state2): return (state1.qn.label == 'e') or (state2.qn.label == 'e') def both_not_excited(state1, state2): return not ((state1.qn.label == 'e') and (state2.qn.label == 'e')) def not_dump(state1, state2): return (state1.qn.label != "dump") and (state2.qn.label != "dump") coupling_z = FirstRankCouplingJ(Ωz, p_car=np.array((0,0,1)), other_conds = [both_not_ground, both_not_excited, not_dump], time_dep = "(t<0)") coupling_x = FirstRankCouplingJ(Ωx, p_car=np.array((1,0,0)), other_conds = [both_not_ground, both_not_excited, not_dump], time_dep = "(t>0)") ``` ### Decays Defining a decay from all $|e\rangle$ to all $|g0\rangle$ and $|g1\rangle$ as permitted by angular momentum: ```python # Define dipole couplings that connect excited state to ground states decay_couplings = [ FirstRankCouplingJ(1, p_car=np.array((1,0,0)), other_conds=[both_not_ground, both_not_excited, not_dump]), FirstRankCouplingJ(1, p_car=np.array((0,1,0)), other_conds=[both_not_ground, both_not_excited, not_dump]), FirstRankCouplingJ(1, p_car=np.array((0,0,1)), other_conds=[both_not_ground, both_not_excited, not_dump]), ] decays = [CouplingDecay(e, Symbol("Gamma")/2, decay_couplings) for e in es] + [ToyDecay(e, ground = dump[0], gamma = Symbol("Gamma_d")) for e in es] ``` ### Define a QuantumSystem The QuantumSystem object combines the basis, Hamiltonian and decays to make setting parameters for time evolution using QuTiP more convenient. ```python # Define the system system = QuantumSystem( basis=basis, couplings=[E0, E1, Ee, coupling_z, coupling_x], decays=decays, ) # Get representations of the Hamiltonian and the decays that will be accepted by qutip Hqobj, c_qobj = system.get_qobjs() ``` ## Time evolution No matter what state the system starts in, it should always end up in the dark state, from which it will slowly evolve out since the dark state is not an eigenstate of the Hamiltonian. 
```python # Get a pointer to the time-evolution arguments args = Hqobj.args print("Keys for setting arguments:") print(f"args = {args}") ``` Keys for setting arguments: args = {'delta': 1, 'Delta': 1, 'Omega_z': 1, 'Omega_x': 1, 'Gamma': 1, 'Gamma_d': 1} ```python test_coupling_z = FirstRankCouplingJ(Ωz, p_car=np.array((0,0,1)), other_conds = [both_not_ground, both_not_excited]) bright_state, dark_states, pol_dark_states = get_dark_states([g0s[0], g1s[0]], es[0], [test_coupling_z]) print(f"|B_z> =\n{bright_state}") if len(dark_states) != 0: print(f"\n|D_z> =\n{dark_states[0]}") if len(pol_dark_states) != 0: print(f"\n|D_z> =\n{pol_dark_states[0]}") ``` |B_z> = [-0.71+0.00j x JQuantumNumbers(J=1, mJ=-1, label='g0') -0.71+0.00j x JQuantumNumbers(J=1, mJ=-1, label='g1')] |D_z> = -0.71+0.00j x JQuantumNumbers(J=1, mJ=-1, label='g0') 0.71-0.00j x JQuantumNumbers(J=1, mJ=-1, label='g1') ```python # Set the parameters for the system args['delta'] = 0.1 args['Delta'] = 0 args['Omega_z'] = 1 args['Omega_x'] = 1 args['Gamma'] = 1 args['Gamma_d'] = 0 # Find the dark and bright states for each mJ and each polarization bright_states_z = [] dark_states_z = [] pol_dark_states_z = [] bright_states_x = [] dark_states_x = [] pol_dark_states_x = [] bright_states_y = [] dark_states_y = [] pol_dark_states_y = [] for g0, g1, e in zip(g0s, g1s, es): test_coupling_z = FirstRankCouplingJ(Ωz, p_car=np.array((0,0,1)), other_conds = [both_not_ground, both_not_excited]) # Find the bright and dark states for z-polarization bright_states, dark_states, pol_dark_states = get_dark_states([g0, g1], e, [test_coupling_z]) bright_states_z += bright_states dark_states_z += dark_states pol_dark_states_z += pol_dark_states # Find the dark and bright states for x- and y-polarization by rotating the basis # X-polarized rotation_x = JRotation(np.pi/2, np.array((0,1,0))) for bs in bright_states_z: bright_states_x.append(bs.apply_operator(basis, rotation_x)) for ds in dark_states_z: dark_states_x.append(ds.apply_operator(basis, rotation_x)) for ds in pol_dark_states_z: pol_dark_states_x.append(ds.apply_operator(basis, rotation_x)) # Y-polarized rotation_y = JRotation(np.pi/2, np.array((1,0,0))) for bs in bright_states_z: bright_states_y.append(bs.apply_operator(basis, rotation_y)) for ds in dark_states_z: dark_states_y.append(ds.apply_operator(basis, rotation_y)) for ds in pol_dark_states_z: pol_dark_states_y.append(ds.apply_operator(basis, rotation_y)) # Print the bright and dark states for each polarization: print("X-polarization:\n") for i, state in enumerate(bright_states_x): print(f"\n|Bx{i}> = ") state.print_state() for i, state in enumerate(dark_states_x): print(f"\n|Dx{i}> = ") state.print_state() print("\n\nY-polarization:\n") for i, state in enumerate(bright_states_y): print(f"\n|By{i}> = ") state.print_state() for i, state in enumerate(dark_states_y): print(f"\n|Dy{i}> = ") state.print_state() print("\n\nZ-polarization:\n") for i, state in enumerate(bright_states_z): print(f"\n|Bz{i}> = ") state.print_state() for i, state in enumerate(dark_states_z): print(f"\n|Dz{i}> = ") state.print_state() ``` X-polarization: |Bx0> = -0.3536+0.0000j x JQuantumNumbers(J=1, mJ=-1, label='g0') +0.5000+0.0000j x JQuantumNumbers(J=1, mJ=0, label='g0') -0.3536+0.0000j x JQuantumNumbers(J=1, mJ=1, label='g0') -0.3536+0.0000j x JQuantumNumbers(J=1, mJ=-1, label='g1') +0.5000+0.0000j x JQuantumNumbers(J=1, mJ=0, label='g1') -0.3536+0.0000j x JQuantumNumbers(J=1, mJ=1, label='g1') |Bx1> = +0.3536+0.0000j x JQuantumNumbers(J=1, mJ=-1, label='g0') 
+0.5000+0.0000j x JQuantumNumbers(J=1, mJ=0, label='g0') +0.3536+0.0000j x JQuantumNumbers(J=1, mJ=1, label='g0') +0.3536+0.0000j x JQuantumNumbers(J=1, mJ=-1, label='g1') +0.5000+0.0000j x JQuantumNumbers(J=1, mJ=0, label='g1') +0.3536+0.0000j x JQuantumNumbers(J=1, mJ=1, label='g1') |Dx0> = -0.3536+0.0000j x JQuantumNumbers(J=1, mJ=-1, label='g0') +0.5000+0.0000j x JQuantumNumbers(J=1, mJ=0, label='g0') -0.3536+0.0000j x JQuantumNumbers(J=1, mJ=1, label='g0') +0.3536+0.0000j x JQuantumNumbers(J=1, mJ=-1, label='g1') -0.5000+0.0000j x JQuantumNumbers(J=1, mJ=0, label='g1') +0.3536+0.0000j x JQuantumNumbers(J=1, mJ=1, label='g1') |Dx1> = +0.3536+0.0000j x JQuantumNumbers(J=1, mJ=-1, label='g0') +0.5000+0.0000j x JQuantumNumbers(J=1, mJ=0, label='g0') +0.3536+0.0000j x JQuantumNumbers(J=1, mJ=1, label='g0') -0.3536+0.0000j x JQuantumNumbers(J=1, mJ=-1, label='g1') -0.5000+0.0000j x JQuantumNumbers(J=1, mJ=0, label='g1') -0.3536+0.0000j x JQuantumNumbers(J=1, mJ=1, label='g1') Y-polarization: |By0> = -0.3536+0.0000j x JQuantumNumbers(J=1, mJ=-1, label='g0') 0.0000+0.5000j x JQuantumNumbers(J=1, mJ=0, label='g0') +0.3536+0.0000j x JQuantumNumbers(J=1, mJ=1, label='g0') -0.3536+0.0000j x JQuantumNumbers(J=1, mJ=-1, label='g1') 0.0000+0.5000j x JQuantumNumbers(J=1, mJ=0, label='g1') +0.3536+0.0000j x JQuantumNumbers(J=1, mJ=1, label='g1') |By1> = -0.3536+0.0000j x JQuantumNumbers(J=1, mJ=-1, label='g0') 0.0000-0.5000j x JQuantumNumbers(J=1, mJ=0, label='g0') +0.3536+0.0000j x JQuantumNumbers(J=1, mJ=1, label='g0') -0.3536+0.0000j x JQuantumNumbers(J=1, mJ=-1, label='g1') 0.0000-0.5000j x JQuantumNumbers(J=1, mJ=0, label='g1') +0.3536+0.0000j x JQuantumNumbers(J=1, mJ=1, label='g1') |Dy0> = -0.3536+0.0000j x JQuantumNumbers(J=1, mJ=-1, label='g0') 0.0000+0.5000j x JQuantumNumbers(J=1, mJ=0, label='g0') +0.3536+0.0000j x JQuantumNumbers(J=1, mJ=1, label='g0') +0.3536+0.0000j x JQuantumNumbers(J=1, mJ=-1, label='g1') 0.0000-0.5000j x JQuantumNumbers(J=1, mJ=0, label='g1') -0.3536+0.0000j x JQuantumNumbers(J=1, mJ=1, label='g1') |Dy1> = -0.3536+0.0000j x JQuantumNumbers(J=1, mJ=-1, label='g0') 0.0000-0.5000j x JQuantumNumbers(J=1, mJ=0, label='g0') +0.3536+0.0000j x JQuantumNumbers(J=1, mJ=1, label='g0') +0.3536+0.0000j x JQuantumNumbers(J=1, mJ=-1, label='g1') 0.0000+0.5000j x JQuantumNumbers(J=1, mJ=0, label='g1') -0.3536+0.0000j x JQuantumNumbers(J=1, mJ=1, label='g1') Z-polarization: |Bz0> = -0.7071+0.0000j x JQuantumNumbers(J=1, mJ=-1, label='g0') -0.7071+0.0000j x JQuantumNumbers(J=1, mJ=-1, label='g1') |Bz1> = +0.7071+0.0000j x JQuantumNumbers(J=1, mJ=1, label='g0') +0.7071+0.0000j x JQuantumNumbers(J=1, mJ=1, label='g1') |Dz0> = -0.7071+0.0000j x JQuantumNumbers(J=1, mJ=-1, label='g0') +0.7071-0.0000j x JQuantumNumbers(J=1, mJ=-1, label='g1') |Dz1> = +0.7071+0.0000j x JQuantumNumbers(J=1, mJ=1, label='g0') -0.7071+0.0000j x JQuantumNumbers(J=1, mJ=1, label='g1') ```python # Generate a Qobj representing the initial state # psi0 = (1*g0s[0]).qobj(basis) psi0 = (1*dark_states_z[0]).qobj(basis) # Operators for getting probability of being in each state as a function of time P_g0 = generate_P_op(g0s, basis) P_g1 = generate_P_op(g1s, basis) P_e = generate_P_op(es, basis) P_B_z = generate_P_op(bright_states_z, basis) P_D_z = generate_P_op(dark_states_z, basis) P_D_pol_z = generate_P_op(pol_dark_states_z, basis) P_B_x = generate_P_op(bright_states_x, basis) P_D_x = generate_P_op(dark_states_x, basis) P_D_pol_x = generate_P_op(pol_dark_states_x, basis) P_B_y = generate_P_op(bright_states_y, basis) 
P_D_y = generate_P_op(dark_states_y, basis)
P_dump = generate_P_op(dump, basis)
P_mJm1 = generate_P_op([g0s[0], g1s[0]], basis)
P_mJ0 = generate_P_op([g0s[1], g1s[1]], basis)
P_mJp1 = generate_P_op([g0s[2], g1s[2]], basis)
P_ops = [P_g0, P_g1, P_e, P_B_z, P_D_z, P_B_x, P_D_x, P_B_y, P_D_y,
         P_mJm1, P_mJ0, P_mJp1, P_D_pol_z, P_D_pol_x]

# Times at which result is requested
times = np.linspace(-10, 10, 1001)/args["delta"]

# Setting the max_step is sometimes necessary
options = qutip.solver.Options(method='adams', nsteps=10000, max_step=1e0,
                               rhs_reuse=False)

# Set up a progress bar
pb = qutip.ui.progressbar.EnhancedTextProgressBar()

# Run the time-evolution
result = qutip.mesolve(Hqobj, psi0, times, c_ops=c_qobj, e_ops=P_ops,
                       progress_bar=pb, options=options)
```

    Total run time: 0.74s

Plot the results

```python
fig, ax = plt.subplots(3, 1, figsize=(16, 18))
ax[0].plot(times, result.expect[0], label="P_g0")
ax[0].plot(times, result.expect[1], label="P_g1", ls='--')
ax[0].plot(times, result.expect[2], label="P_e")
ax[0].legend()
ax[0].set_ylabel("Population in each state")
ax[0].set_title("Energy eigenstate basis")

ln = []
ln += ax[1].plot(times, result.expect[3], label="P_B_z")
ln += ax[1].plot(times, result.expect[4], label="P_D_z")
ln += ax[1].plot(times, result.expect[12], label="P_D_pol_z")
ln += ax[1].plot(times, result.expect[2], label="P_e")
ax[1].set_ylabel("Population in each state")
ax[1].set_title("Dark and bright state basis for z-polarization")
ax1c = ax[1].twinx()
ax1c.grid(False)
ln += coupling_z.plot_time_dep(times, args, ax=ax1c, c='k', ls='--',
                               label="z-pol mag")
ax1c.set_ylabel("Magnitude of z-polarization")
ax[1].legend(ln, [l.get_label() for l in ln])

ln2 = []
ln2 += ax[2].plot(times, result.expect[5], label="P_B_x")
ln2 += ax[2].plot(times, result.expect[6], label="P_D_x")
ln2 += ax[2].plot(times, result.expect[13], label="P_D_pol_x")
ln2 += ax[2].plot(times, result.expect[2], label="P_e")
ax[2].set_ylabel("Population in each state")
ax[2].set_title("Dark and bright state basis for x-polarization")
ax2c = ax[2].twinx()
ax2c.grid(False)
ln2 += coupling_x.plot_time_dep(times, args, ax=ax2c, c='k', ls='--',
                                label="x-pol mag")
ax2c.set_ylabel("Magnitude of x-polarization")
ax[2].legend(ln2, [l.get_label() for l in ln2])

print(f"\nPopulation in excited state at the end: {result.expect[2][-1]*100:.1e} %")
print(f"Photons per unit time: {scipy.integrate.trapezoid(result.expect[2], x = times)/(times[-1]-times[0]):.2e}")
```

So what is happening here?

- The system starts in the z-polarization coherent dark state with mJ = -1.
- For x-polarization the initial state is an even mixture of a coherent and a polarization dark state.
- Population slowly gets transferred to the z-polarization polarization dark state with mJ = 0, which is an even mixture of bright and dark for x-polarized light.
- Once the polarization is flipped from z to x at t = 0, the bright-state component for x-polarization allows some transitions to proceed quickly until the bright state is depleted.
- After the bright state is depleted, the coherent dark state again starts to deplete at a rate proportional to $\delta$, and the population accumulates in the polarization dark state.
- This would be clearer in a system that doesn't have any polarization dark states, so try J = 2 for the excited state (separate notebook for that).
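The bright/dark mechanism above can be reproduced in a stripped-down Λ system without the custom classes used in this notebook. The sketch below is purely illustrative (the state names and Rabi rates `Omega1`, `Omega2` are assumptions, not objects from this notebook): two ground states couple to one excited state, and the antisymmetric ground-state superposition has a vanishing coupling matrix element, i.e. it is a coherent dark state.

```python
import numpy as np
import qutip

# Minimal Lambda system: |g0>, |g1>, |e> (illustrative, not from this notebook)
g0, g1, e = (qutip.basis(3, i) for i in range(3))
Omega1, Omega2 = 1.0, 1.0  # Rabi couplings g0<->e and g1<->e

# Coupling Hamiltonian in the rotating frame with zero detunings
H = 0.5*(Omega1*e*g0.dag() + Omega2*e*g1.dag())
H = H + H.dag()

# Bright and dark superpositions of the two ground states
N = np.sqrt(Omega1**2 + Omega2**2)
bright = (Omega1*g0 + Omega2*g1)/N
dark = (Omega2*g0 - Omega1*g1)/N

print(abs(H.matrix_element(e.dag(), dark)))    # ~ 0: the dark state does not couple
print(abs(H.matrix_element(e.dag(), bright)))  # N/2: the bright state carries all coupling
```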
```python
# Make SymPy available to this program:
import sympy
from sympy import *

# Make GAlgebra available to this program:
from galgebra.ga import *
from galgebra.mv import *
from galgebra.printer import Fmt, GaPrinter, Format
    # Fmt:       sets the way that a multivector's basis expansion is output.
    # GaPrinter: makes GA output a little more readable.
    # Format:    turns on latex printer.
from galgebra.gprinter import gFormat, gprint
gFormat()
```

```python
# Set up standard G^3 geometric algebra
g3coords = (x, y, z) = symbols('x y z', real=True)  # Without real=True, symbols are complex
g3 = Ga(r'\mathbf{e}', g=[1, 1, 1], coords=g3coords)  # raw string avoids escape warnings
(ex, ey, ez) = g3.mv()      # Program names of basis vectors.
(exr, eyr, ezr) = g3.mvr()  # Program names of reciprocal basis vectors.
```

```python
gprint(r'word word\ word \cdot ex<ey', r'\\ \text{word word\ word \cdot ex<ey)}')
# \\ gives a new line. The second string encloses the first in \text{}.
```

```python
B = g3.mv('B', 'bivector')
Fmt(1)  # Set Fmt globally
gprint(r'\mathbf{B} =', B)         # B will be bold.
gprint(r'\mathbf{B} =', B.Fmt(3))  # Fmt(3) here only.
gprint(r'\mathbf{B} =', B)         # Global Fmt remembered.
```

```python
gprint(r'\mathbf{B}^2 =', B*B)
```

```python
M = g3.mv('M', 'mv')
gprint(r'\langle \mathbf{M} \rangle_2 =', M.grade(2))
# grade(2) could be replaced by, e.g., odd(), or omitted altogether.
```

```python
gprint(r'\alpha_1\mathbf{X}/\gamma_r^3')
```

```python
# Program name and output are different
theta = symbols('theta', real=True)
th = symbols('theta', real=True)  # This will save typing if theta is used a lot.
gprint(theta, ', ', th)
```

```python
grad = g3.grad
```

```python
grad
```

```python
gprint(r'{\nabla}')
```
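As a further illustration of the output machinery, the hypothetical extra cell below (not part of the original notebook) prints a geometric product and an inner product of the basis vectors; it reuses only `gprint`, `ex`, and `ey` defined above.

```python
# Illustrative extra cell: display products of the basis vectors.
# ex*ey is the geometric product (a bivector for orthogonal unit vectors);
# ex|ex is GAlgebra's inner product (a scalar, here 1).
gprint(r'\mathbf{e}_x \mathbf{e}_y =', ex*ey)
gprint(r'\mathbf{e}_x \cdot \mathbf{e}_x =', ex|ex)
```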
# Generalization: reflecting boundaries <div id="wave:pde2:Neumann"></div> The boundary condition $u=0$ in a wave equation reflects the wave, but $u$ changes sign at the boundary, while the condition $u_x=0$ reflects the wave as a mirror and preserves the sign, see a [web page](mov-wave/demo_BC_gaussian/index.html) or a [movie file](mov-wave/demo_BC_gaussian/movie.flv) for demonstration. Our next task is to explain how to implement the boundary condition $u_x=0$, which is more complicated to express numerically and also to implement than a given value of $u$. We shall present two methods for implementing $u_x=0$ in a finite difference scheme, one based on deriving a modified stencil at the boundary, and another one based on extending the mesh with ghost cells and ghost points. ## Neumann boundary condition <div id="wave:pde2:Neumann:bc"></div> When a wave hits a boundary and is to be reflected back, one applies the condition <!-- Equation labels as ordinary links --> <div id="wave:pde1:Neumann:0"></div> $$ \begin{equation} \frac{\partial u}{\partial n} \equiv \boldsymbol{n}\cdot\nabla u = 0 \label{wave:pde1:Neumann:0} \tag{1} \thinspace . \end{equation} $$ The derivative $\partial /\partial n$ is in the outward normal direction from a general boundary. For a 1D domain $[0,L]$, we have that $$ \left.\frac{\partial}{\partial n}\right\vert_{x=L} = \left.\frac{\partial}{\partial x}\right\vert_{x=L},\quad \left.\frac{\partial}{\partial n}\right\vert_{x=0} = - \left.\frac{\partial}{\partial x}\right\vert_{x=0}\thinspace . $$ **Boundary condition terminology.** Boundary conditions that specify the value of $\partial u/\partial n$ (or shorter $u_n$) are known as [Neumann](http://en.wikipedia.org/wiki/Neumann_boundary_condition) conditions, while [Dirichlet conditions](http://en.wikipedia.org/wiki/Dirichlet_conditions) refer to specifications of $u$. When the values are zero ($\partial u/\partial n=0$ or $u=0$) we speak about *homogeneous* Neumann or Dirichlet conditions. ## Discretization of derivatives at the boundary <div id="wave:pde2:Neumann:discr"></div> How can we incorporate the condition ([1](#wave:pde1:Neumann:0)) in the finite difference scheme? Since we have used central differences in all the other approximations to derivatives in the scheme, it is tempting to implement ([1](#wave:pde1:Neumann:0)) at $x=0$ and $t=t_n$ by the difference <!-- Equation labels as ordinary links --> <div id="wave:pde1:Neumann:0:cd"></div> $$ \begin{equation} [D_{2x} u]^n_0 = \frac{u_{-1}^n - u_1^n}{2\Delta x} = 0 \thinspace . \label{wave:pde1:Neumann:0:cd} \tag{2} \end{equation} $$ The problem is that $u_{-1}^n$ is not a $u$ value that is being computed since the point is outside the mesh. However, if we combine ([2](#wave:pde1:Neumann:0:cd)) with the scheme <!-- ([wave:pde1:step4](#wave:pde1:step4)) --> <!-- Equation labels as ordinary links --> <div id="wave:pde1:Neumann:0:scheme"></div> $$ \begin{equation} u^{n+1}_i = -u^{n-1}_i + 2u^n_i + C^2 \left(u^{n}_{i+1}-2u^{n}_{i} + u^{n}_{i-1}\right), \label{wave:pde1:Neumann:0:scheme} \tag{3} \end{equation} $$ for $i=0$, we can eliminate the fictitious value $u_{-1}^n$. We see that $u_{-1}^n=u_1^n$ from ([2](#wave:pde1:Neumann:0:cd)), which can be used in ([3](#wave:pde1:Neumann:0:scheme)) to arrive at a modified scheme for the boundary point $u_0^{n+1}$: <!-- Equation labels as ordinary links --> <div id="_auto1"></div> $$ \begin{equation} u^{n+1}_i = -u^{n-1}_i + 2u^n_i + 2C^2 \left(u^{n}_{i+1}-u^{n}_{i}\right),\quad i=0 \thinspace . 
\label{_auto1} \tag{4}
\end{equation}
$$

[Figure](#wave:pde1:fig:Neumann:stencil) visualizes this equation for computing $u^3_0$ in terms of $u^2_0$, $u^1_0$, and $u^2_1$.

<!-- dom:FIGURE: [mov-wave/N_stencil_gpl/stencil_n_left.png, width=500] Modified stencil at a boundary with a Neumann condition. <div id="wave:pde1:fig:Neumann:stencil"></div> -->
<!-- begin figure -->
<div id="wave:pde1:fig:Neumann:stencil"></div>
<p>Modified stencil at a boundary with a Neumann condition.</p>
<!-- end figure -->

Similarly, ([1](#wave:pde1:Neumann:0)) applied at $x=L$ is discretized by a central difference

<!-- Equation labels as ordinary links -->
<div id="wave:pde1:Neumann:0:cd2"></div>

$$
\begin{equation}
\frac{u_{N_x+1}^n - u_{N_x-1}^n}{2\Delta x} = 0
\thinspace .
\label{wave:pde1:Neumann:0:cd2} \tag{5}
\end{equation}
$$

Combined with the scheme for $i=N_x$ we get a modified scheme for the boundary value $u_{N_x}^{n+1}$:

<!-- Equation labels as ordinary links -->
<div id="_auto2"></div>

$$
\begin{equation}
u^{n+1}_i = -u^{n-1}_i + 2u^n_i + 2C^2
\left(u^{n}_{i-1}-u^{n}_{i}\right),\quad i=N_x
\thinspace .
\label{_auto2} \tag{6}
\end{equation}
$$

The modification of the scheme at the boundary is also required for the special formula for the first time step. How the stencil moves through the mesh and is modified at the boundary can be illustrated by an animation in a [web page](${doc_notes}/book/html/mov-wave/N_stencil_gpl/index.html) or a [movie file](${docraw}/mov-wave/N_stencil_gpl/movie.ogg).

## Implementation of Neumann conditions
<div id="wave:pde2:Neumann:impl"></div>

We have seen in the preceding section that the special formulas for the boundary points arise from replacing $u_{i-1}^n$ by $u_{i+1}^n$ when computing $u_i^{n+1}$ from the stencil formula for $i=0$. Similarly, we replace $u_{i+1}^n$ by $u_{i-1}^n$ in the stencil formula for $i=N_x$. This observation can conveniently be used in the coding: we just work with the general stencil formula, but write the code such that it is easy to replace `u[i-1]` by `u[i+1]` and vice versa. This is achieved by having the indices `i+1` and `i-1` as variables `ip1` (`i` plus 1) and `im1` (`i` minus 1), respectively. At the boundary we can easily define `im1=i+1` while we use `im1=i-1` in the internal parts of the mesh. Here are the details of the implementation (note that the updating formula for `u[i]` is the general stencil formula):

```python
i = 0
ip1 = i+1
im1 = ip1  # i-1 -> i+1
u[i] = - u_nm1[i] + 2*u_n[i] + C2*(u_n[im1] - 2*u_n[i] + u_n[ip1])

i = Nx
im1 = i-1
ip1 = im1  # i+1 -> i-1
u[i] = - u_nm1[i] + 2*u_n[i] + C2*(u_n[im1] - 2*u_n[i] + u_n[ip1])
```

We can in fact create one loop over both the internal and boundary points and use only one updating formula:

```python
for i in range(0, Nx+1):
    ip1 = i+1 if i < Nx else i-1
    im1 = i-1 if i > 0 else i+1
    u[i] = - u_nm1[i] + 2*u_n[i] + C2*(u_n[im1] - 2*u_n[i] + u_n[ip1])
```

The program [`wave1D_n0.py`](${src_wave}/wave1D/wave1D_n0.py) contains a complete implementation of the 1D wave equation with boundary conditions $u_x = 0$ at $x=0$ and $x=L$.

It would be nice to modify the `test_quadratic` test case from the `wave1D_u0.py` program with Dirichlet conditions, described in the section [wave:pde1:impl:vec:verify:quadratic](#wave:pde1:impl:vec:verify:quadratic). However, the Neumann conditions require the polynomial variation in the $x$ direction to be of third degree, which causes challenging problems when designing a test where the numerical solution is known exactly.
[Exercise 9: Verification by a cubic polynomial in space](#wave:fd2:exer:verify:cubic) outlines ideas and code for this purpose. The only test in `wave1D_n0.py` is to start with a plug wave at rest and see that the initial condition is reached again perfectly after one period of motion, but such a test requires $C=1$ (so the numerical solution coincides with the exact solution of the PDE, see the section [Numerical dispersion relation](wave_analysis.ipynb#wave:pde1:num:dispersion)).

## Index set notation
<div id="wave:indexset"></div>

To improve our mathematical writing and our implementations, it is wise to introduce a special notation for index sets. This means that we write $x_i$, followed by $i\in\mathcal{I}_x$, instead of $i=0,\ldots,N_x$. Obviously, $\mathcal{I}_x$ must be the index set $\mathcal{I}_x =\{0,\ldots,N_x\}$, but it is often advantageous to have a symbol for this set rather than specifying all its elements (all the time, as we have done up to now). This new notation saves writing and makes specifications of algorithms and their implementation as computer code simpler.

The first index in the set will be denoted $\mathcal{I}_x^0$ and the last $\mathcal{I}_x^{-1}$. When we need to skip the first element of the set, we use $\mathcal{I}_x^{+}$ for the remaining subset $\mathcal{I}_x^{+}=\{1,\ldots,N_x\}$. Similarly, if the last element is to be dropped, we write $\mathcal{I}_x^{-}=\{0,\ldots,N_x-1\}$ for the remaining indices. All the indices corresponding to inner grid points are specified by $\mathcal{I}_x^i=\{1,\ldots,N_x-1\}$. For the time domain we find it natural to explicitly use 0 as the first index, so we will usually write $n=0$ and $t_0$ rather than $n=\mathcal{I}_t^0$. We also avoid notation like $x_{\mathcal{I}_x^{-1}}$ and will instead use $x_i$, $i={\mathcal{I}_x^{-1}}$.

The Python code associated with index sets applies the following conventions:

<table border="1">
<thead>
<tr><th align="center"> Notation </th> <th align="center"> Python </th> </tr>
</thead>
<tbody>
<tr><td align="left"> $\mathcal{I}_x$ </td> <td align="left"> <code>Ix</code> </td> </tr>
<tr><td align="left"> $\mathcal{I}_x^0$ </td> <td align="left"> <code>Ix[0]</code> </td> </tr>
<tr><td align="left"> $\mathcal{I}_x^{-1}$ </td> <td align="left"> <code>Ix[-1]</code> </td> </tr>
<tr><td align="left"> $\mathcal{I}_x^{-}$ </td> <td align="left"> <code>Ix[:-1]</code> </td> </tr>
<tr><td align="left"> $\mathcal{I}_x^{+}$ </td> <td align="left"> <code>Ix[1:]</code> </td> </tr>
<tr><td align="left"> $\mathcal{I}_x^i$ </td> <td align="left"> <code>Ix[1:-1]</code> </td> </tr>
</tbody>
</table>

**Why index sets are useful.**

An important feature of the index set notation is that it keeps our formulas and code independent of how we count mesh points. For example, the notation $i\in\mathcal{I}_x$ or $i=\mathcal{I}_x^0$ remains the same whether $\mathcal{I}_x$ is defined as above or as starting at 1, i.e., $\mathcal{I}_x=\{1,\ldots,Q\}$. Similarly, we can in the code define `Ix=range(Nx+1)` or `Ix=range(1,Q+1)`, and expressions like `Ix[0]` and `Ix[1:-1]` remain correct. One application where the index set notation is convenient is conversion of code from a language where arrays have base index 0 (e.g., Python and C) to languages where the base index is 1 (e.g., MATLAB and Fortran). Another important application is implementation of Neumann conditions via ghost points (see next section).
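A quick sketch (illustrative only, not from the program files) makes this base-index independence concrete: the same slicing expressions pick out the first, last, and inner points no matter how the index set is defined.

```python
import numpy as np

Nx = 5
u = np.zeros(Nx+1)
Ix = range(0, Nx+1)   # could equally well be range(1, Nx+2) with a longer array
for i in Ix[1:-1]:    # inner points, written independently of the base index
    u[i] = 1.0
print(Ix[0], Ix[-1])  # first and last index: 0 5
print(u)              # boundary entries remain zero
```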
For the current problem setting in the $x,t$ plane, we work with the index sets

<!-- Equation labels as ordinary links -->
<div id="_auto3"></div>

$$
\begin{equation}
\mathcal{I}_x = \{0,\ldots,N_x\},\quad \mathcal{I}_t = \{0,\ldots,N_t\},
\label{_auto3} \tag{7}
\end{equation}
$$

defined in Python as

```python
Ix = range(0, Nx+1)
It = range(0, Nt+1)
```

A finite difference scheme can with the index set notation be specified as

$$
\begin{align*}
u_i^{n+1} &= u^n_i + \frac{1}{2} C^2\left(u^{n}_{i+1}-2u^{n}_{i} + u^{n}_{i-1}\right),\quad i\in\mathcal{I}_x^i,\ n=0,\\
u^{n+1}_i &= -u^{n-1}_i + 2u^n_i + C^2 \left(u^{n}_{i+1}-2u^{n}_{i}+u^{n}_{i-1}\right), \quad i\in\mathcal{I}_x^i,\ n\in\mathcal{I}_t^i,\\
u_i^{n+1} &= 0, \quad i=\mathcal{I}_x^0,\ n\in\mathcal{I}_t^{-},\\
u_i^{n+1} &= 0, \quad i=\mathcal{I}_x^{-1},\ n\in\mathcal{I}_t^{-}\thinspace .
\end{align*}
$$

The corresponding implementation becomes

```python
# Initial condition
for i in Ix[1:-1]:
    u[i] = u_n[i] + 0.5*C2*(u_n[i-1] - 2*u_n[i] + u_n[i+1])

# Time loop
for n in It[1:-1]:
    # Compute internal points
    for i in Ix[1:-1]:
        u[i] = - u_nm1[i] + 2*u_n[i] + \
               C2*(u_n[i-1] - 2*u_n[i] + u_n[i+1])
    # Compute boundary conditions
    i = Ix[0];  u[i] = 0
    i = Ix[-1]; u[i] = 0
```

**Notice.**

The program [`wave1D_dn.py`](src-wave/wave1D/python/wave1D_dn.py) applies the index set notation and solves the 1D wave equation $u_{tt}=c^2u_{xx}+f(x,t)$ with quite general boundary and initial conditions:

* $x=0$: $u=U_0(t)$ or $u_x=0$
* $x=L$: $u=U_L(t)$ or $u_x=0$
* $t=0$: $u=I(x)$
* $t=0$: $u_t=V(x)$

The program combines Dirichlet and Neumann conditions, scalar and vectorized implementation of schemes, and the index set notation into one piece of code. A lot of test examples are also included in the program:

* A rectangular plug-shaped initial condition. (For $C=1$ the solution will be a rectangle that jumps one cell per time step, making the case well suited for verification.)
* A Gaussian function as initial condition.
* A triangular profile as initial condition, which resembles the typical initial shape of a guitar string.
* A sinusoidal variation of $u$ at $x=0$ and either $u=0$ or $u_x=0$ at $x=L$.
* An analytical solution $u(x,t)=\cos(m\pi t/L)\sin({\frac{1}{2}}m\pi x/L)$, which can be used for convergence rate tests.

## Verifying the implementation of Neumann conditions
<div id="wave:pde1:verify"></div>

How can we test that the Neumann conditions are correctly implemented? The `solver` function in the `wave1D_dn.py` program described in the box above accepts Dirichlet or Neumann conditions at $x=0$ and $x=L$. It is tempting to apply a quadratic solution as described in the sections [wave:pde2:fd](#wave:pde2:fd) and [wave:pde1:impl:verify:quadratic](#wave:pde1:impl:verify:quadratic), but it turns out that this solution is no longer an exact solution of the discrete equations if a Neumann condition is implemented on the boundary. A linear solution does not help since we only have homogeneous Neumann conditions in `wave1D_dn.py`, and we are consequently left with testing just a constant solution: $u=\hbox{const}$.

```python
def test_constant():
    """
    Check the scalar and vectorized versions for
    a constant u(x,t). We simulate in [0, L] and apply
    Neumann and Dirichlet conditions at both ends.
""" u_const = 0.45 u_exact = lambda x, t: u_const I = lambda x: u_exact(x, 0) V = lambda x: 0 f = lambda x, t: 0 def assert_no_error(u, x, t, n): u_e = u_exact(x, t[n]) diff = np.abs(u - u_e).max() msg = 'diff=%E, t_%d=%g' % (diff, n, t[n]) tol = 1E-13 assert diff < tol, msg for U_0 in (None, lambda t: u_const): for U_L in (None, lambda t: u_const): L = 2.5 c = 1.5 C = 0.75 Nx = 3 # Very coarse mesh for this exact test dt = C*(L/Nx)/c T = 18 # long time integration solver(I, V, f, c, U_0, U_L, L, dt, C, T, user_action=assert_no_error, version='scalar') solver(I, V, f, c, U_0, U_L, L, dt, C, T, user_action=assert_no_error, version='vectorized') print U_0, U_L ``` The quadratic solution is very useful for testing, but it requires Dirichlet conditions at both ends. Another test may utilize the fact that the approximation error vanishes when the Courant number is unity. We can, for example, start with a plug profile as initial condition, let this wave split into two plug waves, one in each direction, and check that the two plug waves come back and form the initial condition again after "one period" of the solution process. Neumann conditions can be applied at both ends. A proper test function reads ```python def test_plug(): """Check that an initial plug is correct back after one period.""" L = 1.0 c = 0.5 dt = (L/10)/c # Nx=10 I = lambda x: 0 if abs(x-L/2.0) > 0.1 else 1 u_s, x, t, cpu = solver( I=I, V=None, f=None, c=0.5, U_0=None, U_L=None, L=L, dt=dt, C=1, T=4, user_action=None, version='scalar') u_v, x, t, cpu = solver( I=I, V=None, f=None, c=0.5, U_0=None, U_L=None, L=L, dt=dt, C=1, T=4, user_action=None, version='vectorized') tol = 1E-13 diff = abs(u_s - u_v).max() assert diff < tol u_0 = np.array([I(x_) for x_ in x]) diff = np.abs(u_s - u_0).max() assert diff < tol ``` Other tests must rely on an unknown approximation error, so effectively we are left with tests on the convergence rate. ## Alternative implementation via ghost cells <div id="wave:pde1:Neumann:ghost"></div> ### Idea Instead of modifying the scheme at the boundary, we can introduce extra points outside the domain such that the fictitious values $u_{-1}^n$ and $u_{N_x+1}^n$ are defined in the mesh. Adding the intervals $[-\Delta x,0]$ and $[L, L+\Delta x]$, known as *ghost cells*, to the mesh gives us all the needed mesh points, corresponding to $i=-1,0,\ldots,N_x,N_x+1$. The extra points with $i=-1$ and $i=N_x+1$ are known as *ghost points*, and values at these points, $u_{-1}^n$ and $u_{N_x+1}^n$, are called *ghost values*. The important idea is to ensure that we always have $$ u_{-1}^n = u_{1}^n\hbox{ and } u_{N_x+1}^n = u_{N_x-1}^n, $$ because then the application of the standard scheme at a boundary point $i=0$ or $i=N_x$ will be correct and guarantee that the solution is compatible with the boundary condition $u_x=0$. Some readers may find it strange to just extend the domain with ghost cells as a general technique, because in some problems there is a completely different medium with different physics and equations right outside of a boundary. Nevertheless, one should view the ghost cell technique as a purely mathematical technique, which is valid in the limit $\Delta x \rightarrow 0$ and helps us to implement derivatives. ### Implementation The `u` array now needs extra elements corresponding to the ghost points. Two new point values are needed: ```python u = zeros(Nx+3) ``` The arrays `u_n` and `u_nm1` must be defined accordingly. Unfortunately, a major indexing problem arises with ghost cells. 
The reason is that Python indices *must* start at 0 and `u[-1]` will always mean the last element in `u`. This fact gives, apparently, a mismatch between the mathematical indices $i=-1,0,\ldots,N_x+1$ and the Python indices running over `u`: `0,..,Nx+2`. One remedy is to change the mathematical indexing of $i$ in the scheme and write

$$ u^{n+1}_i = \cdots,\quad i=1,\ldots,N_x+1, $$

instead of $i=0,\ldots,N_x$ as we have previously used. The ghost points now correspond to $i=0$ and $i=N_x+1$. A better solution is to use the ideas of the section [Index set notation](#wave:indexset): we hide the specific index value in an index set and operate with inner and boundary points using the index set notation. To this end, we define `u` with proper length and `Ix` to be the corresponding indices for the real physical mesh points ($1,2,\ldots,N_x+1$):

```python
u = zeros(Nx+3)
Ix = range(1, u.shape[0]-1)
```

That is, the boundary points have indices `Ix[0]` and `Ix[-1]` (as before). We first update the solution at all physical mesh points (i.e., interior points in the mesh):

```python
for i in Ix:
    u[i] = - u_nm1[i] + 2*u_n[i] + \
           C2*(u_n[i-1] - 2*u_n[i] + u_n[i+1])
```

The indexing becomes a bit more complicated when we call functions like `V(x)` and `f(x, t)`, as we must remember that the appropriate $x$ coordinate is given as `x[i-Ix[0]]`:

```python
for i in Ix:
    u[i] = u_n[i] + dt*V(x[i-Ix[0]]) + \
           0.5*C2*(u_n[i-1] - 2*u_n[i] + u_n[i+1]) + \
           0.5*dt2*f(x[i-Ix[0]], t[0])
```

It remains to update the solution at ghost points, i.e., `u[0]` and `u[-1]` (or `u[Nx+2]`). For a boundary condition $u_x=0$, the ghost value must equal the value at the associated inner mesh point. Computer code makes this statement precise:

```python
i = Ix[0]   # x=0 boundary
u[i-1] = u[i+1]
i = Ix[-1]  # x=L boundary
u[i+1] = u[i-1]
```

The physical solution to be plotted is now in `u[1:-1]`, or equivalently `u[Ix[0]:Ix[-1]+1]`, so this slice is the quantity to be returned from a solver function. A complete implementation appears in the program [`wave1D_n0_ghost.py`](${src_wave}/wave1D/wave1D_n0_ghost.py).

**Warning.**

We have to be careful with how the spatial and temporal mesh points are stored. Say we let `x` be the physical mesh points,

```python
x = linspace(0, L, Nx+1)
```

"Standard coding" of the initial condition,

```python
for i in Ix:
    u_n[i] = I(x[i])
```

becomes wrong, since `u_n` and `x` have different lengths and the index `i` corresponds to two different mesh points. In fact, `x[i]` corresponds to `u[1+i]`. A correct implementation is

```python
for i in Ix:
    u_n[i] = I(x[i-Ix[0]])
```

Similarly, a source term usually coded as `f(x[i], t[n])` is incorrect if `x` is defined to be the physical points, so `x[i]` must be replaced by `x[i-Ix[0]]`. An alternative remedy is to let `x` also cover the ghost points such that `u[i]` is the value at `x[i]`.

The ghost cell is only added to the boundary where we have a Neumann condition. Suppose we have a Dirichlet condition at $x=L$ and a homogeneous Neumann condition at $x=0$. One ghost cell $[-\Delta x,0]$ is added to the mesh, so the index set for the physical points becomes $\{1,\ldots,N_x+1\}$. A relevant implementation is

```python
u = zeros(Nx+2)
Ix = range(1, u.shape[0])
...
for i in Ix[:-1]:
    u[i] = - u_nm1[i] + 2*u_n[i] + \
           C2*(u_n[i-1] - 2*u_n[i] + u_n[i+1]) + \
           dt2*f(x[i-Ix[0]], t[n])
i = Ix[-1]
u[i] = U_L        # set Dirichlet value at x=L
i = Ix[0]
u[i-1] = u[i+1]   # update ghost value
```

The physical solution to be plotted is now in `u[1:]` or (as always) `u[Ix[0]:Ix[-1]+1]`.

# Generalization: variable wave velocity
<div id="wave:pde2:var:c"></div>

Our next generalization of the 1D wave equation ([wave:pde1](#wave:pde1)) or ([wave:pde2](#wave:pde2)) is to allow for a variable wave velocity $c$: $c=c(x)$, usually motivated by wave motion in a domain composed of different physical media. When the media differ in physical properties like density or porosity, the wave velocity $c$ is affected and will depend on the position in space. [Figure](#wave:pde1:fig:pulse1:two:media) shows a wave propagating in one medium $[0, 0.7]\cup [0.9,1]$ with wave velocity $c_1$ (left) before it enters a second medium $(0.7,0.9)$ with wave velocity $c_2$ (right). When the wave meets the boundary where $c$ jumps from $c_1$ to $c_2$, a part of the wave is reflected back into the first medium (the *reflected* wave), while one part is transmitted through the second medium (the *transmitted* wave).

<!-- dom:FIGURE: [fig-wave/pulse1_in_two_media.png, width=800] Left: wave entering another medium; right: transmitted and reflected wave. <div id="wave:pde1:fig:pulse1:two:media"></div> -->
<!-- begin figure -->
<div id="wave:pde1:fig:pulse1:two:media"></div>
<p>Left: wave entering another medium; right: transmitted and reflected wave.</p>
<!-- end figure -->

## The model PDE with a variable coefficient

Instead of working with the squared quantity $c^2(x)$, we shall for notational convenience introduce $q(x) = c^2(x)$. A 1D wave equation with variable wave velocity often takes the form

<!-- Equation labels as ordinary links -->
<div id="wave:pde2:var:c:pde"></div>

$$
\begin{equation}
\frac{\partial^2 u}{\partial t^2} =
\frac{\partial}{\partial x}\left( q(x)
\frac{\partial u}{\partial x}\right) + f(x,t)
\label{wave:pde2:var:c:pde} \tag{8}
\thinspace .
\end{equation}
$$

This is the most frequent form of a wave equation with variable wave velocity, but other forms also appear, see the section [wave:app:string](#wave:app:string) and equation ([wave:app:string:model2](#wave:app:string:model2)).

As usual, we sample ([8](#wave:pde2:var:c:pde)) at a mesh point,

$$ \frac{\partial^2 }{\partial t^2} u(x_i,t_n) = \frac{\partial}{\partial x}\left( q(x_i) \frac{\partial}{\partial x} u(x_i,t_n)\right) + f(x_i,t_n), $$

where the only new term to discretize is

$$ \frac{\partial}{\partial x}\left( q(x_i) \frac{\partial}{\partial x} u(x_i,t_n)\right) = \left[ \frac{\partial}{\partial x}\left( q(x) \frac{\partial u}{\partial x}\right)\right]^n_i \thinspace . $$

## Discretizing the variable coefficient
<div id="wave:pde2:var:c:ideas"></div>

The principal idea is to first discretize the outer derivative. Define

$$ \phi = q(x) \frac{\partial u}{\partial x}, $$

and use a centered derivative around $x=x_i$ for the derivative of $\phi$:

$$ \left[\frac{\partial\phi}{\partial x}\right]^n_i \approx \frac{\phi_{i+\frac{1}{2}} - \phi_{i-\frac{1}{2}}}{\Delta x} = [D_x\phi]^n_i \thinspace . $$

Then discretize

$$ \phi_{i+\frac{1}{2}} = q_{i+\frac{1}{2}} \left[\frac{\partial u}{\partial x}\right]^n_{i+\frac{1}{2}} \approx q_{i+\frac{1}{2}} \frac{u^n_{i+1} - u^n_{i}}{\Delta x} = [q D_x u]_{i+\frac{1}{2}}^n \thinspace . $$

Similarly,

$$ \phi_{i-\frac{1}{2}} = q_{i-\frac{1}{2}} \left[\frac{\partial u}{\partial x}\right]^n_{i-\frac{1}{2}} \approx q_{i-\frac{1}{2}} \frac{u^n_{i} - u^n_{i-1}}{\Delta x} = [q D_x u]_{i-\frac{1}{2}}^n \thinspace . $$

These intermediate results are now combined to

<!-- Equation labels as ordinary links -->
<div id="wave:pde2:var:c:formula"></div>

$$
\begin{equation}
\left[ \frac{\partial}{\partial x}\left( q(x) \frac{\partial u}{\partial x}\right)\right]^n_i \approx \frac{1}{\Delta x^2} \left( q_{i+\frac{1}{2}} \left({u^n_{i+1} - u^n_{i}}\right) - q_{i-\frac{1}{2}} \left({u^n_{i} - u^n_{i-1}}\right)\right)
\label{wave:pde2:var:c:formula} \tag{9}
\thinspace .
\end{equation}
$$

With operator notation we can write the discretization as

<!-- Equation labels as ordinary links -->
<div id="wave:pde2:var:c:formula:op"></div>

$$
\begin{equation}
\left[ \frac{\partial}{\partial x}\left( q(x) \frac{\partial u}{\partial x}\right)\right]^n_i \approx [D_x (\overline{q}^{x} D_x u)]^n_i
\label{wave:pde2:var:c:formula:op} \tag{10}
\thinspace .
\end{equation}
$$

**Do not use the chain rule on the spatial derivative term!**

Many are tempted to use the chain rule on the term $\frac{\partial}{\partial x}\left( q(x) \frac{\partial u}{\partial x}\right)$, but this is not a good idea when discretizing such a term.

The term with a variable coefficient expresses the net flux $qu_x$ into a small volume (i.e., interval in 1D):

$$ \frac{\partial}{\partial x}\left( q(x) \frac{\partial u}{\partial x}\right) \approx \frac{1}{\Delta x}(q(x+\Delta x)u_x(x+\Delta x) - q(x)u_x(x))\thinspace . $$

Our discretization reflects this principle directly: $qu_x$ at the right end of the cell minus $qu_x$ at the left end, because this follows from the formula ([9](#wave:pde2:var:c:formula)) or $[D_x(q D_x u)]^n_i$.

When using the chain rule, we get two terms $qu_{xx} + q_xu_x$. The typical discretization is

<!-- Equation labels as ordinary links -->
<div id="wave:pde2:var:c:chainrule_scheme"></div>

$$
\begin{equation}
[q D_xD_x u + D_{2x}q\, D_{2x} u]_i^n,
\label{wave:pde2:var:c:chainrule_scheme} \tag{11}
\end{equation}
$$

Writing this out shows that it is different from $[D_x(q D_x u)]^n_i$ and lacks the physical interpretation of net flux into a cell. With a smooth and slowly varying $q(x)$ the differences between the two discretizations are not substantial. However, when $q$ exhibits (potentially large) jumps, $[D_x(q D_x u)]^n_i$ with harmonic averaging of $q$ yields a better solution than arithmetic averaging or ([11](#wave:pde2:var:c:chainrule_scheme)). In the literature, the discretization $[D_x(q D_x u)]^n_i$ totally dominates and very few mention the alternative in ([11](#wave:pde2:var:c:chainrule_scheme)).

## Computing the coefficient between mesh points
<div id="wave:pde2:var:c:means"></div>

If $q$ is a known function of $x$, we can easily evaluate $q_{i+\frac{1}{2}}$ simply as $q(x_{i+\frac{1}{2}})$ with $x_{i+\frac{1}{2}} = x_i + \frac{1}{2}\Delta x$. However, in many cases $c$, and hence $q$, is only known as a discrete function, often at the mesh points $x_i$.
Evaluating $q$ between two mesh points $x_i$ and $x_{i+1}$ must then be done by *interpolation* techniques, of which three are of particular interest in this context:

<!-- Equation labels as ordinary links -->
<div id="wave:pde2:var:c:mean:arithmetic"></div>

$$
\begin{equation}
q_{i+\frac{1}{2}} \approx
\frac{1}{2}\left( q_{i} + q_{i+1}\right) =
[\overline{q}^{x}]_i
\quad \hbox{(arithmetic mean)}
\label{wave:pde2:var:c:mean:arithmetic} \tag{12}
\end{equation}
$$

<!-- Equation labels as ordinary links -->
<div id="wave:pde2:var:c:mean:harmonic"></div>

$$
\begin{equation}
q_{i+\frac{1}{2}} \approx
2\left( \frac{1}{q_{i}} + \frac{1}{q_{i+1}}\right)^{-1}
\quad \hbox{(harmonic mean)}
\label{wave:pde2:var:c:mean:harmonic} \tag{13}
\end{equation}
$$

<!-- Equation labels as ordinary links -->
<div id="wave:pde2:var:c:mean:geometric"></div>

$$
\begin{equation}
q_{i+\frac{1}{2}} \approx \left(q_{i}q_{i+1}\right)^{1/2}
\quad \hbox{(geometric mean)}
\label{wave:pde2:var:c:mean:geometric} \tag{14}
\end{equation}
$$

The arithmetic mean in ([12](#wave:pde2:var:c:mean:arithmetic)) is by far the most commonly used averaging technique and is well suited for smooth $q(x)$ functions. The harmonic mean is often preferred when $q(x)$ exhibits large jumps (which is typical for geological media). The geometric mean is less used, but popular in discretizations to linearize quadratic nonlinearities (see the section [vib:ode2:fdm:fquad](#vib:ode2:fdm:fquad) for an example).

With the operator notation from ([12](#wave:pde2:var:c:mean:arithmetic)) we can specify the discretization of the complete variable-coefficient wave equation in a compact way:

<!-- Equation labels as ordinary links -->
<div id="wave:pde2:var:c:scheme:op"></div>

$$
\begin{equation}
\lbrack D_tD_t u = D_x\overline{q}^{x}D_x u + f\rbrack^{n}_i
\thinspace .
\label{wave:pde2:var:c:scheme:op} \tag{15}
\end{equation}
$$

Strictly speaking, $\lbrack D_x\overline{q}^{x}D_x u\rbrack^{n}_i = \lbrack D_x (\overline{q}^{x}D_x u)\rbrack^{n}_i$. From the compact difference notation we immediately see what kind of differences that each term is approximated with. The notation $\overline{q}^{x}$ also specifies that the variable coefficient is approximated by an arithmetic mean, the definition being $[\overline{q}^{x}]_{i+\frac{1}{2}}=(q_i+q_{i+1})/2$.

Before implementing, it remains to solve ([15](#wave:pde2:var:c:scheme:op)) with respect to $u_i^{n+1}$:

$$ u^{n+1}_i = - u_i^{n-1} + 2u_i^n + \nonumber $$

$$ \quad \left(\frac{\Delta t}{\Delta x}\right)^2 \left( \frac{1}{2}(q_{i} + q_{i+1})(u_{i+1}^n - u_{i}^n) - \frac{1}{2}(q_{i} + q_{i-1})(u_{i}^n - u_{i-1}^n)\right) + \nonumber $$

<!-- Equation labels as ordinary links -->
<div id="wave:pde2:var:c:scheme:impl"></div>

$$
\begin{equation}
\quad \Delta t^2 f^n_i
\thinspace .
\label{wave:pde2:var:c:scheme:impl} \tag{16}
\end{equation}
$$

## How a variable coefficient affects the stability
<div id="wave:pde2:var:c:stability"></div>

The stability criterion derived later (the section [wave:pde1:stability](#wave:pde1:stability)) reads $\Delta t\leq \Delta x/c$. If $c=c(x)$, the criterion will depend on the spatial location. We must therefore choose a $\Delta t$ that is small enough such that no mesh cell has $\Delta t > \Delta x/c(x)$. That is, we must use the largest $c$ value in the criterion:

<!-- Equation labels as ordinary links -->
<div id="_auto4"></div>

$$
\begin{equation}
\Delta t \leq \beta \frac{\Delta x}{\max_{x\in [0,L]}c(x)}
\thinspace .
\label{_auto4} \tag{17}
\end{equation}
$$

The parameter $\beta$ is included as a safety factor: in some problems with a significantly varying $c$ it turns out that one must choose $\beta <1$ to have stable solutions ($\beta =0.9$ may act as an all-round value).

A different strategy to handle the stability criterion with variable wave velocity is to use a spatially varying $\Delta t$. While the idea is mathematically attractive at first sight, the implementation quickly becomes very complicated, so we stick to a constant $\Delta t$ and a worst case value of $c(x)$ (with a safety factor $\beta$).

## Neumann condition and a variable coefficient
<div id="wave:pde2:var:c:Neumann"></div>

Consider a Neumann condition $\partial u/\partial x=0$ at $x=L=N_x\Delta x$, discretized as

$$ [D_{2x} u]^n_i = \frac{u_{i+1}^{n} - u_{i-1}^n}{2\Delta x} = 0\quad\Rightarrow\quad u_{i+1}^n = u_{i-1}^n, $$

for $i=N_x$. Using the scheme ([16](#wave:pde2:var:c:scheme:impl)) at the end point $i=N_x$ with $u_{i+1}^n=u_{i-1}^n$ results in

$$ u^{n+1}_i = - u_i^{n-1} + 2u_i^n + \nonumber $$

<!-- Equation labels as ordinary links -->
<div id="_auto5"></div>

$$
\begin{equation}
\quad \left(\frac{\Delta t}{\Delta x}\right)^2 \left( q_{i+\frac{1}{2}}(u_{i-1}^n - u_{i}^n) - q_{i-\frac{1}{2}}(u_{i}^n - u_{i-1}^n)\right) + \Delta t^2 f^n_i
\label{_auto5} \tag{18}
\end{equation}
$$

<!-- Equation labels as ordinary links -->
<div id="wave:pde2:var:c:scheme:impl:Neumann0"></div>

$$
\begin{equation}
= - u_i^{n-1} + 2u_i^n + \left(\frac{\Delta t}{\Delta x}\right)^2 (q_{i+\frac{1}{2}} + q_{i-\frac{1}{2}})(u_{i-1}^n - u_{i}^n) + \Delta t^2 f^n_i
\label{wave:pde2:var:c:scheme:impl:Neumann0} \tag{19}
\end{equation}
$$

<!-- Equation labels as ordinary links -->
<div id="wave:pde2:var:c:scheme:impl:Neumann"></div>

$$
\begin{equation}
\approx - u_i^{n-1} + 2u_i^n + \left(\frac{\Delta t}{\Delta x}\right)^2 2q_{i}(u_{i-1}^n - u_{i}^n) + \Delta t^2 f^n_i
\thinspace .
\label{wave:pde2:var:c:scheme:impl:Neumann} \tag{20}
\end{equation}
$$

Here we used the Taylor expansions of $q_{i\pm\frac{1}{2}}$ around $x_i$ (a step of $\pm\frac{1}{2}\Delta x$):

$$ q_{i+\frac{1}{2}} + q_{i-\frac{1}{2}} = q_i + \frac{1}{2}\left(\frac{dq}{dx}\right)_i \Delta x + \frac{1}{8}\left(\frac{d^2q}{dx^2}\right)_i \Delta x^2 + \cdots +\nonumber $$

$$ \quad q_i - \frac{1}{2}\left(\frac{dq}{dx}\right)_i \Delta x + \frac{1}{8}\left(\frac{d^2q}{dx^2}\right)_i \Delta x^2 + \cdots\nonumber $$

$$ = 2q_i + \frac{1}{4}\left(\frac{d^2q}{dx^2}\right)_i \Delta x^2 + {\cal O}(\Delta x^4) \nonumber $$

<!-- Equation labels as ordinary links -->
<div id="_auto6"></div>

$$
\begin{equation}
\approx 2q_i
\thinspace .
\label{_auto6} \tag{21}
\end{equation}
$$

An alternative derivation may apply the arithmetic mean for $q_{i-\frac{1}{2}}$ and $q_{i+\frac{1}{2}}$ in ([19](#wave:pde2:var:c:scheme:impl:Neumann0)), leading to the term

$$ (q_i + \frac{1}{2}(q_{i+1}+q_{i-1}))(u_{i-1}^n-u_i^n)\thinspace . $$

Since $\frac{1}{2}(q_{i+1}+q_{i-1}) = q_i + {\cal O}(\Delta x^2)$, we can approximate with $2q_i(u_{i-1}^n-u_i^n)$ for $i=N_x$ and get the same term as we did above.

A common technique when implementing $\partial u/\partial x=0$ boundary conditions is to assume $dq/dx=0$ as well. This implies $q_{i+1}=q_{i-1}$ and $q_{i+1/2}=q_{i-1/2}$ for $i=N_x$.
The implications for the scheme are $$ u^{n+1}_i = - u_i^{n-1} + 2u_i^n + \nonumber $$ $$ \quad \left(\frac{\Delta t}{\Delta x}\right)^2 \left( q_{i+\frac{1}{2}}(u_{i-1}^n - u_{i}^n) - q_{i-\frac{1}{2}}(u_{i}^n - u_{i-1}^n)\right) + \nonumber $$ <!-- Equation labels as ordinary links --> <div id="_auto7"></div> $$ \begin{equation} \quad \Delta t^2 f^n_i \label{_auto7} \tag{22} \end{equation} $$ <!-- Equation labels as ordinary links --> <div id="wave:pde2:var:c:scheme:impl:Neumann2"></div> $$ \begin{equation} = - u_i^{n-1} + 2u_i^n + \left(\frac{\Delta t}{\Delta x}\right)^2 2q_{i-\frac{1}{2}}(u_{i-1}^n - u_{i}^n) + \Delta t^2 f^n_i \thinspace . \label{wave:pde2:var:c:scheme:impl:Neumann2} \tag{23} \end{equation} $$ ## Implementation of variable coefficients <div id="wave:pde2:var:c:impl"></div> The implementation of the scheme with a variable wave velocity $q(x)=c^2(x)$ may assume that $q$ is available as an array `q[i]` at the spatial mesh points. The following loop is a straightforward implementation of the scheme ([16](#wave:pde2:var:c:scheme:impl)): ```python for i in range(1, Nx): u[i] = - u_nm1[i] + 2*u_n[i] + \ C2*(0.5*(q[i] + q[i+1])*(u_n[i+1] - u_n[i]) - \ 0.5*(q[i] + q[i-1])*(u_n[i] - u_n[i-1])) + \ dt2*f(x[i], t[n]) ``` The coefficient `C2` is now defined as `(dt/dx)**2`, i.e., *not* as the squared Courant number, since the wave velocity is variable and appears inside the parenthesis. With Neumann conditions $u_x=0$ at the boundary, we need to combine this scheme with the discrete version of the boundary condition, as shown in the section [Neumann condition and a variable coefficient](#wave:pde2:var:c:Neumann). Nevertheless, it would be convenient to reuse the formula for the interior points and just modify the indices `ip1=i+1` and `im1=i-1` as we did in the section [Implementation of Neumann conditions](#wave:pde2:Neumann:impl). Assuming $dq/dx=0$ at the boundaries, we can implement the scheme at the boundary with the following code. ```python i = 0 ip1 = i+1 im1 = ip1 u[i] = - u_nm1[i] + 2*u_n[i] + \ C2*(0.5*(q[i] + q[ip1])*(u_n[ip1] - u_n[i]) - \ 0.5*(q[i] + q[im1])*(u_n[i] - u_n[im1])) + \ dt2*f(x[i], t[n]) ``` With ghost cells we can just reuse the formula for the interior points also at the boundary, provided that the ghost values of both $u$ and $q$ are correctly updated to ensure $u_x=0$ and $q_x=0$. A vectorized version of the scheme with a variable coefficient at internal mesh points becomes ```python u[1:-1] = - u_nm1[1:-1] + 2*u_n[1:-1] + \ C2*(0.5*(q[1:-1] + q[2:])*(u_n[2:] - u_n[1:-1]) - 0.5*(q[1:-1] + q[:-2])*(u_n[1:-1] - u_n[:-2])) + \ dt2*f(x[1:-1], t[n]) ``` ## A more general PDE model with variable coefficients Sometimes a wave PDE has a variable coefficient in front of the time-derivative term: <!-- Equation labels as ordinary links --> <div id="wave:pde2:var:c:pde2"></div> $$ \begin{equation} \varrho(x)\frac{\partial^2 u}{\partial t^2} = \frac{\partial}{\partial x}\left( q(x) \frac{\partial u}{\partial x}\right) + f(x,t) \label{wave:pde2:var:c:pde2} \tag{24} \thinspace . \end{equation} $$ One example appears when modeling elastic waves in a rod with varying density, cf. ([wave:app:string](#wave:app:string)) with $\varrho (x)$. A natural scheme for ([24](#wave:pde2:var:c:pde2)) is <!-- Equation labels as ordinary links --> <div id="_auto8"></div> $$ \begin{equation} [\varrho D_tD_t u = D_x\overline{q}^xD_x u + f]^n_i \thinspace . 
\label{_auto8} \tag{25} \end{equation} $$ We realize that the $\varrho$ coefficient poses no particular difficulty, since $\varrho$ enters the formula just as a simple factor in front of a derivative. There is hence no need for any averaging of $\varrho$. Often, $\varrho$ will be moved to the right-hand side, also without any difficulty: <!-- Equation labels as ordinary links --> <div id="_auto9"></div> $$ \begin{equation} [D_tD_t u = \varrho^{-1}D_x\overline{q}^xD_x u + f]^n_i \thinspace . \label{_auto9} \tag{26} \end{equation} $$ ## Generalization: damping Waves die out by two mechanisms. In 2D and 3D the energy of the wave spreads out in space, and energy conservation then requires the amplitude to decrease. This effect is not present in 1D. Damping is another cause of amplitude reduction. For example, the vibrations of a string die out because of damping due to air resistance and non-elastic effects in the string. The simplest way of including damping is to add a first-order derivative to the equation (in the same way as friction forces enter a vibrating mechanical system): <!-- Equation labels as ordinary links --> <div id="wave:pde3"></div> $$ \begin{equation} \frac{\partial^2 u}{\partial t^2} + b\frac{\partial u}{\partial t} = c^2\frac{\partial^2 u}{\partial x^2} + f(x,t), \label{wave:pde3} \tag{27} \end{equation} $$ where $b \geq 0$ is a prescribed damping coefficient. A typical discretization of ([27](#wave:pde3)) in terms of centered differences reads <!-- Equation labels as ordinary links --> <div id="wave:pde3:fd"></div> $$ \begin{equation} [D_tD_t u + bD_{2t}u = c^2D_xD_x u + f]^n_i \thinspace . \label{wave:pde3:fd} \tag{28} \end{equation} $$ Writing out the equation and solving for the unknown $u^{n+1}_i$ gives the scheme <!-- Equation labels as ordinary links --> <div id="wave:pde3:fd2"></div> $$ \begin{equation} u^{n+1}_i = (1 + {\frac{1}{2}}b\Delta t)^{-1}(({\frac{1}{2}}b\Delta t -1) u^{n-1}_i + 2u^n_i + C^2 \left(u^{n}_{i+1}-2u^{n}_{i} + u^{n}_{i-1}\right) + \Delta t^2 f^n_i), \label{wave:pde3:fd2} \tag{29} \end{equation} $$ for $i\in\mathcal{I}_x^i$ and $n\geq 1$. New equations must be derived for $u^1_i$, and for boundary points in case of Neumann conditions. The damping is very small in many wave phenomena and thus only evident for very long time simulations. This makes the standard wave equation without damping relevant for a lot of applications. 
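A direct transcription of the damped scheme ([29](#wave:pde3:fd2)) for the interior points may look like the sketch below. It assumes the same naming conventions as the earlier snippets (`u`, `u_n`, `u_nm1`, `C2 = C**2`, `dt`, `x`, `t`, the damping coefficient `b`, and the time-loop index `n` are set up beforehand), so it is an illustration rather than a complete program.

```python
# Sketch: one time step of the damped scheme (29) at the interior points,
# assuming u, u_n, u_nm1, C2 = C**2, dt, b, f, x, t, Nx and n are defined
# as in the earlier code snippets.
for i in range(1, Nx):
    u[i] = ((0.5*b*dt - 1)*u_nm1[i] + 2*u_n[i] +
            C2*(u_n[i+1] - 2*u_n[i] + u_n[i-1]) +
            dt**2*f(x[i], t[n])) / (1 + 0.5*b*dt)
```

For $b=0$ the expression reduces to the standard stencil ([3](#wave:pde1:Neumann:0:scheme)), as expected.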
# Building a general 1D wave equation solver
<div id="wave:pde2:software"></div>

The program [`wave1D_dn_vc.py`](${src_wave}/wave1D/wave1D_dn_vc.py) is a fairly general code for 1D wave propagation problems that targets the following initial-boundary value problem

<!-- Equation labels as ordinary links -->
<div id="wave:pde2:software:ueq"></div>

$$
\begin{equation}
u_{tt} = (c^2(x)u_x)_x + f(x,t),\quad x\in (0,L),\ t\in (0,T]
\label{wave:pde2:software:ueq} \tag{30}
\end{equation}
$$

<!-- Equation labels as ordinary links -->
<div id="_auto10"></div>

$$
\begin{equation}
u(x,0) = I(x),\quad x\in [0,L]
\label{_auto10} \tag{31}
\end{equation}
$$

<!-- Equation labels as ordinary links -->
<div id="_auto11"></div>

$$
\begin{equation}
u_t(x,0) = V(x),\quad x\in [0,L]
\label{_auto11} \tag{32}
\end{equation}
$$

<!-- Equation labels as ordinary links -->
<div id="_auto12"></div>

$$
\begin{equation}
u(0,t) = U_0(t)\hbox{ or } u_x(0,t)=0,\quad t\in (0,T]
\label{_auto12} \tag{33}
\end{equation}
$$

<!-- Equation labels as ordinary links -->
<div id="wave:pde2:software:bcL"></div>

$$
\begin{equation}
u(L,t) = U_L(t)\hbox{ or } u_x(L,t)=0,\quad t\in (0,T]
\label{wave:pde2:software:bcL} \tag{34}
\end{equation}
$$

The only new feature here is the time-dependent Dirichlet conditions, but they are trivial to implement:

```python
i = Ix[0]   # x=0
u[i] = U_0(t[n+1])

i = Ix[-1]  # x=L
u[i] = U_L(t[n+1])
```

The `solver` function is a natural extension of the simplest `solver` function in the initial `wave1D_u0.py` program, extended with Neumann boundary conditions ($u_x=0$), time-varying Dirichlet conditions, as well as a variable wave velocity. The different code segments needed to make these extensions have been shown and commented upon in the preceding text. We refer to the `solver` function in the `wave1D_dn_vc.py` file for all the details. Note, however, that in that `solver` function the technique of "hashing" is used to check whether a certain simulation has been run before, or not. This technique is further explained in the section [softeng2:wave1D:filestorage:hash](#softeng2:wave1D:filestorage:hash).

The vectorization is only applied inside the time loop, not for the initial condition or the first time steps, since this initial work is negligible for long time simulations in 1D problems.

The following sections explain various more advanced programming techniques applied in the general 1D wave equation solver.

## User action function as a class

A useful feature in the `wave1D_dn_vc.py` program is the specification of the `user_action` function as a class. This part of the program may need some motivation and explanation. Although the `plot_u_st` function (and the `PlotMatplotlib` class) in the `wave1D_u0.viz` function remembers the local variables in the `viz` function, it is a cleaner solution to store the needed variables together with the function, which is exactly what a class offers.

### The code

A class for flexible plotting, cleaning up files, making movie files, like the function `wave1D_u0.viz` did, can be coded as follows:

```python
%matplotlib inline

import glob
import os
import time

import numpy as np


class PlotAndStoreSolution:
    """
    Class for the user_action function in solver.
    Visualizes the solution only.
    """
    def __init__(
        self,
        casename='tmp',    # Prefix in filenames
        umin=-1, umax=1,   # Fixed range of y axis
        pause_between_frames=None,  # Movie speed
        backend='matplotlib',       # or 'gnuplot' or None
        screen_movie=True, # Show movie on screen?
        title='',          # Extra message in title
        skip_frame=1,      # Skip every skip_frame frame
        filename=None):    # Name of file with solutions
        self.casename = casename
        self.yaxis = [umin, umax]
        self.pause = pause_between_frames
        self.backend = backend
        if backend is None:
            # Use native matplotlib
            import matplotlib.pyplot as plt
        elif backend in ('matplotlib', 'gnuplot'):
            # Import the scitools.easyviz backend dynamically.
            # (An exec statement does not create a local variable in
            # Python 3, so importlib is used instead.)
            import importlib
            plt = importlib.import_module(
                'scitools.easyviz.' + backend + '_')
        self.plt = plt
        self.screen_movie = screen_movie
        self.title = title
        self.skip_frame = skip_frame
        self.filename = filename
        if filename is not None:
            # Store time points when u is written to file
            self.t = []
            filenames = glob.glob('.' + self.filename + '*.dat.npz')
            for filename in filenames:
                os.remove(filename)

        # Clean up old movie frames
        for filename in glob.glob('frame_*.png'):
            os.remove(filename)

    def __call__(self, u, x, t, n):
        """
        Callback function user_action, called by solver:
        Store solution, plot on screen and save to file.
        """
        # Save solution u to a file using numpy.savez
        if self.filename is not None:
            name = 'u%04d' % n  # array name
            kwargs = {name: u}
            fname = '.' + self.filename + '_' + name + '.dat'
            np.savez(fname, **kwargs)
            self.t.append(t[n])  # store corresponding time value
            if n == 0:           # save x once
                np.savez('.' + self.filename + '_x.dat', x=x)

        # Animate
        if n % self.skip_frame != 0:
            return
        title = 't=%.3f' % t[n]
        if self.title:
            title = self.title + ' ' + title
        if self.backend is None:
            # native matplotlib animation
            if n == 0:
                self.plt.ion()
                self.lines = self.plt.plot(x, u, 'r-')
                self.plt.axis([x[0], x[-1],
                               self.yaxis[0], self.yaxis[1]])
                self.plt.xlabel('x')
                self.plt.ylabel('u')
                self.plt.title(title)
                self.plt.legend(['t=%.3f' % t[n]])
            else:
                # Update new solution
                self.lines[0].set_ydata(u)
                self.plt.legend(['t=%.3f' % t[n]])
                self.plt.draw()
        else:
            # scitools.easyviz animation
            self.plt.plot(x, u, 'r-',
                          xlabel='x', ylabel='u',
                          axis=[x[0], x[-1],
                                self.yaxis[0], self.yaxis[1]],
                          title=title,
                          show=self.screen_movie)
        # pause
        if t[n] == 0:
            time.sleep(2)  # let initial condition stay 2 s
        else:
            # (bug fix: the original left pause undefined when
            # pause_between_frames was given explicitly)
            pause = self.pause
            if pause is None:
                pause = 0.2 if u.size < 100 else 0
            time.sleep(pause)

        self.plt.savefig('frame_%04d.png' % (n))
```

### Dissection

Understanding this class requires quite some familiarity with Python in general and class programming in particular. The class supports plotting with Matplotlib (`backend=None`) or SciTools (`backend=matplotlib` or `backend=gnuplot`) for maximum flexibility.

The constructor shows how we can flexibly import the plotting engine as (typically) `scitools.easyviz.gnuplot_` or `scitools.easyviz.matplotlib_` (note the trailing underscore - it is required). With the `screen_movie` parameter we can suppress displaying each movie frame on the screen. Alternatively, for slow movies associated with fine meshes, one can set `skip_frame=10`, causing every 10th frame to be shown.

The `__call__` method makes `PlotAndStoreSolution` instances behave like functions, so we can just pass an instance, say `p`, as the `user_action` argument in the `solver` function, and any call to `user_action` will be a call to `p.__call__`.
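A minimal usage sketch (the argument values are illustrative, and the `solver` call follows the signature used earlier in this document):

```python
# Illustrative only: use an instance as the user_action callback
p = PlotAndStoreSolution(casename='demo', umin=-1, umax=1,
                         backend=None, filename='demo_data')
# solver(I, V, f, c, U_0, U_L, L, dt, C, T,
#        user_action=p, version='vectorized')
```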
The `__call__` method plots the solution on the screen, saves the plot to file, and stores the solution in a file for later retrieval. More details on storing the solution in files appear in in the document [Scientific software engineering; wave equation case](http://tinyurl.com/k3sdbuv/pub/softeng2) [[Langtangen_deqbook_softeng2]](#Langtangen_deqbook_softeng2). ## Pulse propagation in two media The function `pulse` in `wave1D_dn_vc.py` demonstrates wave motion in heterogeneous media where $c$ varies. One can specify an interval where the wave velocity is decreased by a factor `slowness_factor` (or increased by making this factor less than one). [Figure](#wave:pde1:fig:pulse1:two:media) shows a typical simulation scenario. Four types of initial conditions are available: 1. a rectangular pulse (`plug`), 2. a Gaussian function (`gaussian`), 3. a "cosine hat" consisting of one period of the cosine function (`cosinehat`), 4. frac{1}{2} a period of a "cosine hat" (`frac{1}{2}-cosinehat`) These peak-shaped initial conditions can be placed in the middle (`loc='center'`) or at the left end (`loc='left'`) of the domain. With the pulse in the middle, it splits in two parts, each with frac{1}{2} the initial amplitude, traveling in opposite directions. With the pulse at the left end, centered at $x=0$, and using the symmetry condition $\partial u/\partial x=0$, only a right-going pulse is generated. There is also a left-going pulse, but it travels from $x=0$ in negative $x$ direction and is not visible in the domain $[0,L]$. The `pulse` function is a flexible tool for playing around with various wave shapes and jumps in the wave velocity (i.e., discontinuous media). The code is shown to demonstrate how easy it is to reach this flexibility with the building blocks we have already developed: ```python def pulse( C=1, # Maximum Courant number Nx=200, # spatial resolution animate=True, version='vectorized', T=2, # end time loc='left', # location of initial condition pulse_thinspace .='gaussian', # pulse/init.cond. type slowness_factor=2, # inverse of wave vel. in right medium medium=[0.7, 0.9], # interval for right medium skip_frame=1, # skip frames in animations sigma=0.05 # width measure of the pulse ): """ Various peaked-shaped initial conditions on [0,1]. Wave velocity is decreased by the slowness_factor inside medium. The loc parameter can be 'center' or 'left', depending on where the initial pulse is to be located. The sigma parameter governs the width of the pulse. """ # Use scaled parameters: L=1 for domain length, c_0=1 # for wave velocity outside the domain. L = 1.0 c_0 = 1.0 if loc == 'center': xc = L/2 elif loc == 'left': xc = 0 if pulse_thinspace . in ('gaussian','Gaussian'): def I(x): return np.exp(-0.5*((x-xc)/sigma)**2) elif pulse_thinspace . == 'plug': def I(x): return 0 if abs(x-xc) > sigma else 1 elif pulse_thinspace . == 'cosinehat': def I(x): # One period of a cosine w = 2 a = w*sigma return 0.5*(1 + np.cos(np.pi*(x-xc)/a)) \ if xc - a <= x <= xc + a else 0 elif pulse_thinspace . == 'frac{1}{2}-cosinehat': def I(x): # Half a period of a cosine w = 4 a = w*sigma return np.cos(np.pi*(x-xc)/a) \ if xc - 0.5*a <= x <= xc + 0.5*a else 0 else: raise ValueError('Wrong pulse_thinspace .="%s"' % pulse_thinspace .) 
    def c(x):
        return c_0/slowness_factor \
               if medium[0] <= x <= medium[1] else c_0

    umin = -0.5
    umax = 1.5*I(xc)
    casename = '%s_Nx%s_sf%s' % \
               (pulse_tp, Nx, slowness_factor)
    action = PlotMediumAndSolution(
        medium, casename=casename, umin=umin, umax=umax,
        skip_frame=skip_frame, screen_movie=animate,
        backend=None, filename='tmpdata')

    # Choose the stability limit with given Nx, worst case c
    # (lower C will then use this dt, but smaller Nx)
    dt = (L/Nx)/c_0
    cpu, hashed_input = solver(
        I=I, V=None, f=None, c=c,
        U_0=None, U_L=None,
        L=L, dt=dt, C=C, T=T,
        user_action=action,
        version=version,
        stability_safety_factor=1)

    if cpu > 0:  # did we generate new data?
        action.close_file(hashed_input)
        action.make_movie_file()
    print('cpu (-1 means no new data generated):', cpu)

def convergence_rates(
    u_exact, I, V, f, c, U_0, U_L, L,
    dt0, num_meshes,
    C, T, version='scalar',
    stability_safety_factor=1.0):
    """
    Halve the time step and estimate convergence rates
    for num_meshes simulations.
    """
    class ComputeError:
        def __init__(self, norm_type):
            self.error = 0

        def __call__(self, u, x, t, n):
            """Store the norm of the error in self.error."""
            error = np.abs(u - u_exact(x, t[n])).max()
            self.error = max(self.error, error)

    E = []
    h = []  # dt, solver adjusts dx such that C=dt*c/dx
    dt = dt0
    for i in range(num_meshes):
        error_calculator = ComputeError('Linf')
        solver(I, V, f, c, U_0, U_L, L, dt, C, T,
               user_action=error_calculator,
               version='scalar',
               stability_safety_factor=1.0)
        E.append(error_calculator.error)
        h.append(dt)
        dt /= 2  # halve the time step for next simulation
    print('E:', E)
    print('h:', h)
    r = [np.log(E[i]/E[i-1])/np.log(h[i]/h[i-1])
         for i in range(1, num_meshes)]
    return r

def test_convrate_sincos():
    n = m = 2
    L = 1.0
    u_exact = lambda x, t: np.cos(m*np.pi/L*t)*np.sin(m*np.pi/L*x)

    r = convergence_rates(
        u_exact=u_exact,
        I=lambda x: u_exact(x, 0),
        V=lambda x: 0,
        f=0,
        c=1,
        U_0=0,
        U_L=0,
        L=L,
        dt0=0.1,
        num_meshes=6,
        C=0.9,
        T=1,
        version='scalar',
        stability_safety_factor=1.0)
    print('rates sin(x)*cos(t) solution:',
          [round(r_, 2) for r_ in r])
    assert abs(r[-1] - 2) < 0.002
```

The `PlotMediumAndSolution` class used here is a subclass of `PlotAndStoreSolution` where the medium with reduced $c$ value, as specified by the `medium` interval, is visualized in the plots.

**Comment on the choices of discretization parameters.**
The argument $N_x$ in the `pulse` function does not correspond to the actual spatial resolution if $C<1$, since the `solver` function takes a fixed $\Delta t$ and $C$, and adjusts $\Delta x$ accordingly. As seen in the `pulse` function, the specified $\Delta t$ is chosen according to the limit $C=1$, so if $C<1$, $\Delta t$ remains the same, but the `solver` function operates with a larger $\Delta x$ and smaller $N_x$ than was specified in the call to `pulse`. The practical reason is that we always want to keep $\Delta t$ fixed such that plot frames and movies are synchronized in time regardless of the value of $C$ (i.e., $\Delta x$ is varied when the Courant number varies).

The reader is encouraged to play around with the `pulse` function. To easily kill the graphics by Ctrl-C and restart a new simulation it might be easier to run the above two statements from the command line with

        Terminal> python -c 'import wave1D_dn_vc as w; w.pulse(...)'

# Exercises

<!-- --- begin exercise --- -->

## Exercise 1: Find the analytical solution to a damped wave equation
<div id="wave:exer:standingwave:damped:uex"></div>

Consider the wave equation with damping ([27](#wave:pde3)).
The goal is to find an exact solution to a wave problem with damping and zero source term. A starting point is the standing wave solution from [wave:exer:standingwave](#wave:exer:standingwave). It becomes necessary to include a damping term $e^{-\beta t}$ and also have both a sine and cosine component in time:

$$
\uex(x,t) = e^{-\beta t}
\sin kx \left( A\cos\omega t + B\sin\omega t\right)
\thinspace .
$$

Find $k$ from the boundary conditions $u(0,t)=u(L,t)=0$. Then use the PDE to find constraints on $\beta$, $\omega$, $A$, and $B$. Set up a complete initial-boundary value problem and its solution.

<!-- --- begin solution of exercise --- -->
**Solution.**
Mathematical model:

$$
\frac{\partial^2 u}{\partial t^2} + b\frac{\partial u}{\partial t} =
c^2\frac{\partial^2 u}{\partial x^2}, \nonumber
$$

where $b \geq 0$ is a prescribed damping coefficient.

Ansatz:

$$
u(x,t) = e^{-\beta t} \sin kx \left( A\cos\omega t + B\sin\omega t\right)
$$

Boundary condition: $u=0$ for $x=0,L$. Fulfilled for $x=0$. Requirement at $x=L$ gives

$$
kL = m\pi,
$$

for an arbitrary integer $m$. Hence, $k=m\pi/L$.

Inserting the ansatz in the PDE and dividing by $e^{-\beta t}$ results in

$$
\begin{align*}
(\beta^2 \sin kx -\omega^2 \sin kx - b\beta \sin kx)
(A\cos\omega t + B\sin\omega t) &+ \nonumber \\
(b\omega \sin kx - 2\beta\omega \sin kx)
(-A\sin\omega t + B\cos\omega t) &=
-(A\cos\omega t + B\sin\omega t)k^2c^2 \nonumber
\end{align*}
$$

This gives us two requirements:

$$
\beta^2 - \omega^2 - b\beta + k^2c^2 = 0
$$

and

$$
-2\beta\omega + b\omega = 0
$$

Since $b$, $c$ and $k$ are to be given in advance, we may solve these two equations to get

$$
\begin{align*}
\beta &= \frac{b}{2} \nonumber \\
\omega &= \sqrt{c^2k^2 - \frac{b^2}{4}} \nonumber
\end{align*}
$$

From the initial condition on the derivative, i.e. $\frac{\partial u_e}{\partial t}(x,0) = 0$, we find that

$$
B\omega = \beta A
$$

Inserting the expression for $\omega$, we find that

$$
B = \frac{b}{2\sqrt{c^2k^2 - \frac{b^2}{4}}} A
$$

for $A$ prescribed. Using $t = 0$ in the expression for $u_e$ gives us the initial condition as

$$
I(x) = A\sin kx
$$

Summarizing, the PDE problem can then be stated as

$$
\frac{\partial^2 u}{\partial t^2} + b\frac{\partial u}{\partial t} =
c^2 \frac{\partial^2 u}{\partial x^2}, \quad x\in (0,L),\ t\in (0,T] \nonumber
$$

$$
u(x,0) = I(x), \quad x\in [0,L] \nonumber
$$

$$
\frac{\partial}{\partial t}u(x,0) = 0, \quad x\in [0,L] \nonumber
$$

$$
u(0,t) = 0, \quad t\in (0,T] \nonumber
$$

$$
u(L,t) = 0, \quad t\in (0,T] \nonumber
$$

where constants $c$, $A$, $b$ and $k$, as well as $I(x)$, are prescribed. The solution to the problem is then given as

$$
\uex(x,t) = e^{-\beta t} \sin kx \left( A\cos\omega t + B\sin\omega t\right)
\thinspace .
$$

with $k=m\pi/L$ for an arbitrary integer $m$, $\beta = \frac{b}{2}$, $\omega = \sqrt{c^2k^2 - \frac{b^2}{4}}$, $B = \frac{b}{2\sqrt{c^2k^2 - \frac{b^2}{4}}} A$, and $I(x) = A\sin kx$.

<!-- --- end solution of exercise --- -->
Filename: `damped_waves`.

<!-- --- end exercise --- -->

<!-- --- begin exercise --- -->

## Problem 2: Explore symmetry boundary conditions
<div id="wave:exer:symmetry:bc"></div>

Consider the simple "plug" wave where $\Omega = [-L,L]$ and

$$
I(x) = \left\lbrace\begin{array}{ll}
1, & x\in [-\delta, \delta],\\
0, & \hbox{otherwise}
\end{array}\right.
$$

for some number $0 < \delta < L$. The other initial condition is $u_t(x,0)=0$ and there is no source term $f$. The boundary conditions can be set to $u=0$. The solution to this problem is symmetric around $x=0$.
This means that we can simulate the wave process in only half of the domain $[0,L]$.

**a)** Argue why the symmetry boundary condition is $u_x=0$ at $x=0$.

<!-- --- begin hint in exercise --- -->
**Hint.**
Symmetry of a function about $x=x_0$ means that $f(x_0+h) = f(x_0-h)$.

<!-- --- end hint in exercise --- -->

<!-- --- begin solution of exercise --- -->
**Solution.**
A symmetric $u$ around $x=0$ means that $u(-x,t)=u(x,t)$. Let $x_0=0$ and $x=x_0+h$. Then we can use a *centered* finite difference definition of the derivative:

$$
\frac{\partial}{\partial x}u(x_0,t) =
\lim_{h\rightarrow 0}\frac{u(x_0+h,t)- u(x_0-h,t)}{2h} =
\lim_{h\rightarrow 0}\frac{u(h,t)- u(-h,t)}{2h} = 0,
$$

since $u(h,t)=u(-h,t)$ for any $h$. Symmetry around a point $x=x_0$ therefore always implies $u_x(x_0,t)=0$.

<!-- --- end solution of exercise --- -->

**b)** Perform simulations of the complete wave problem on $[-L,L]$. Thereafter, utilize the symmetry of the solution and run a simulation in half of the domain $[0,L]$, using the symmetry boundary condition at $x=0$. Compare plots from the two solutions and confirm that they are the same.

<!-- --- begin solution of exercise --- -->
**Solution.**
We can utilize the `wave1D_dn.py` code which allows Dirichlet and Neumann conditions. The `solver` and `viz` functions must take $x_0$ and $x_L$ as parameters instead of just $L$ such that we can solve the wave equation in $[x_0, x_L]$. Then we can call `solver` for the two problems on $[-L,L]$ and $[0,L]$ with boundary conditions $u(-L,t)=u(L,t)=0$ and $u_x(0,t)=u(L,t)=0$, respectively.

The original `wave1D_dn.py` code makes a movie by playing all the `.png` files in a browser. It can then be wise to let the `viz` function create a movie directory and place all the frames and HTML player file in that directory. Alternatively, one can just make some ordinary movie file (Ogg, WebM, MP4, Flash) with `ffmpeg` and give it a name. It is a point that the name is transferred to `viz` so it is easy to call `viz` twice and get two separate movie files or movie directories.

The plots produced by the code (below) show that the solutions indeed are the same.

<!-- --- end solution of exercise --- -->

**c)** Prove the symmetry property of the solution by setting up the complete initial-boundary value problem and showing that if $u(x,t)$ is a solution, then also $u(-x,t)$ is a solution.

<!-- --- begin solution of exercise --- -->
**Solution.**
The plan in this proof is to introduce $v(x,t)=u(-x,t)$ and show that $v$ fulfills the same initial-boundary value problem as $u$. If the problem has a unique solution, then $v=u$. Or, in other words, the solution is symmetric: $u(-x,t)=u(x,t)$.
We can work with a general initial-boundary value problem on the form

<!-- Equation labels as ordinary links -->
<div id="_auto13"></div>

$$
\begin{equation}
u_{tt}(x,t) = c^2u_{xx}(x,t) + f(x,t)
\label{_auto13} \tag{35}
\end{equation}
$$

<!-- Equation labels as ordinary links -->
<div id="_auto14"></div>

$$
\begin{equation}
u(x,0) = I(x)
\label{_auto14} \tag{36}
\end{equation}
$$

<!-- Equation labels as ordinary links -->
<div id="_auto15"></div>

$$
\begin{equation}
u_t(x,0) = V(x)
\label{_auto15} \tag{37}
\end{equation}
$$

<!-- Equation labels as ordinary links -->
<div id="_auto16"></div>

$$
\begin{equation}
u(-L,t) = 0
\label{_auto16} \tag{38}
\end{equation}
$$

<!-- Equation labels as ordinary links -->
<div id="_auto17"></div>

$$
\begin{equation}
u(L,t) = 0
\label{_auto17} \tag{39}
\end{equation}
$$

Introduce a new coordinate $\bar x = -x$. We have that

$$
\frac{\partial^2 u}{\partial x^2} =
\frac{\partial}{\partial x}
\left( \frac{\partial u}{\partial\bar x}
\frac{\partial\bar x}{\partial x} \right) =
\frac{\partial}{\partial x}
\left( \frac{\partial u}{\partial\bar x} (-1)\right)
= (-1)^2 \frac{\partial^2 u}{\partial \bar x^2}
$$

The derivatives in time are unchanged. Substituting $x$ by $-\bar x$ leads to

<!-- Equation labels as ordinary links -->
<div id="_auto18"></div>

$$
\begin{equation}
u_{tt}(-\bar x,t) = c^2u_{\bar x\bar x}(-\bar x,t) + f(-\bar x,t)
\label{_auto18} \tag{40}
\end{equation}
$$

<!-- Equation labels as ordinary links -->
<div id="_auto19"></div>

$$
\begin{equation}
u(-\bar x,0) = I(-\bar x)
\label{_auto19} \tag{41}
\end{equation}
$$

<!-- Equation labels as ordinary links -->
<div id="_auto20"></div>

$$
\begin{equation}
u_t(-\bar x,0) = V(-\bar x)
\label{_auto20} \tag{42}
\end{equation}
$$

<!-- Equation labels as ordinary links -->
<div id="_auto21"></div>

$$
\begin{equation}
u(L,t) = 0
\label{_auto21} \tag{43}
\end{equation}
$$

<!-- Equation labels as ordinary links -->
<div id="_auto22"></div>

$$
\begin{equation}
u(-L,t) = 0
\label{_auto22} \tag{44}
\end{equation}
$$

Now, dropping the bars and introducing $v(x,t)=u(-x,t)$, we find that

<!-- Equation labels as ordinary links -->
<div id="_auto23"></div>

$$
\begin{equation}
v_{tt}(x,t) = c^2v_{xx}(x,t) + f(-x,t)
\label{_auto23} \tag{45}
\end{equation}
$$

<!-- Equation labels as ordinary links -->
<div id="_auto24"></div>

$$
\begin{equation}
v(x,0) = I(-x)
\label{_auto24} \tag{46}
\end{equation}
$$

<!-- Equation labels as ordinary links -->
<div id="_auto25"></div>

$$
\begin{equation}
v_t(x,0) = V(-x)
\label{_auto25} \tag{47}
\end{equation}
$$

<!-- Equation labels as ordinary links -->
<div id="_auto26"></div>

$$
\begin{equation}
v(-L,t) = 0
\label{_auto26} \tag{48}
\end{equation}
$$

<!-- Equation labels as ordinary links -->
<div id="_auto27"></div>

$$
\begin{equation}
v(L,t) = 0
\label{_auto27} \tag{49}
\end{equation}
$$

*Provided that $I$, $f$, and $V$ are all symmetric* around $x=0$ such that $I(x)=I(-x)$, $V(x)=V(-x)$, and $f(x,t)=f(-x,t)$, we can express the initial-boundary value problem as

<!-- Equation labels as ordinary links -->
<div id="_auto28"></div>

$$
\begin{equation}
v_{tt}(x,t) = c^2v_{xx}(x,t) + f(x,t)
\label{_auto28} \tag{50}
\end{equation}
$$

<!-- Equation labels as ordinary links -->
<div id="_auto29"></div>

$$
\begin{equation}
v(x,0) = I(x)
\label{_auto29} \tag{51}
\end{equation}
$$

<!-- Equation labels as ordinary links -->
<div id="_auto30"></div>

$$
\begin{equation}
v_t(x,0) = V(x)
\label{_auto30} \tag{52}
\end{equation}
$$

<!-- Equation labels as ordinary links -->
id="_auto31"></div> $$ \begin{equation} v(-L,0) = 0 \label{_auto31} \tag{53} \end{equation} $$ <!-- Equation labels as ordinary links --> <div id="_auto32"></div> $$ \begin{equation} v(L,0) = 0 \label{_auto32} \tag{54} \end{equation} $$ This is the same problem as the one that $u$ fulfills. If the solution is unique, which can be proven, then $v=u$, and $u(-x,t)=u(x,t)$. To summarize, the necessary conditions for symmetry are that * all involved functions $I$, $V$, and $f$ must be symmetric, and * the boundary conditions are symmetric in the sense that they can be flipped (the condition at $x=-L$ can be applied at $x=L$ and vice versa). <!-- --- end solution of exercise --- --> **d)** If the code works correctly, the solution $u(x,t) = x(L-x)(1+\frac{t}{2})$ should be reproduced exactly. Write a test function `test_quadratic` that checks whether this is the case. Simulate for $x$ in $[0, \frac{L}{2}]$ with a symmetry condition at the end $x = \frac{L}{2}$. <!-- --- begin solution of exercise --- --> **Solution.** Running the code below, shows that the test case indeed is reproduced exactly. ```python #!/usr/bin/env python from scitools.std import * # Add an x0 coordinate for solving the wave equation on [x0, xL] def solver(I, V, f, c, U_0, U_L, x0, xL, Nx, C, T, user_action=None, version='scalar'): """ Solve u_tt=c^2*u_xx + f on (0,L)x(0,T]. u(0,t)=U_0(t) or du/dn=0 (U_0=None), u(L,t)=U_L(t) or du/dn=0 (u_L=None). """ x = linspace(x0, xL, Nx+1) # Mesh points in space dx = x[1] - x[0] dt = C*dx/c Nt = int(round(T/dt)) t = linspace(0, Nt*dt, Nt+1) # Mesh points in time C2 = C**2; dt2 = dt*dt # Help variables in the scheme # Wrap user-given f, V, U_0, U_L if f is None or f == 0: f = (lambda x, t: 0) if version == 'scalar' else \ lambda x, t: zeros(x.shape) if V is None or V == 0: V = (lambda x: 0) if version == 'scalar' else \ lambda x: zeros(x.shape) if U_0 is not None: if isinstance(U_0, (float,int)) and U_0 == 0: U_0 = lambda t: 0 if U_L is not None: if isinstance(U_L, (float,int)) and U_L == 0: U_L = lambda t: 0 u = zeros(Nx+1) # Solution array at new time level u_1 = zeros(Nx+1) # Solution at 1 time level back u_2 = zeros(Nx+1) # Solution at 2 time levels back mathcal{I}_x = range(0, Nx+1) mathcal{I}_t = range(0, Nt+1) import time; t0 = time.clock() # CPU time measurement # Load initial condition into u_1 for i in mathcal{I}_x: u_1[i] = I(x[i]) if user_action is not None: user_action(u_1, x, t, 0) # Special formula for the first step for i in mathcal{I}_x[1:-1]: u[i] = u_1[i] + dt*V(x[i]) + \ 0.5*C2*(u_1[i-1] - 2*u_1[i] + u_1[i+1]) + \ 0.5*dt2*f(x[i], t[0]) i = mathcal{I}_x[0] if U_0 is None: # Set boundary values du/dn = 0 # x=0: i-1 -> i+1 since u[i-1]=u[i+1] # x=L: i+1 -> i-1 since u[i+1]=u[i-1]) ip1 = i+1 im1 = ip1 # i-1 -> i+1 u[i] = u_1[i] + dt*V(x[i]) + \ 0.5*C2*(u_1[im1] - 2*u_1[i] + u_1[ip1]) + \ 0.5*dt2*f(x[i], t[0]) else: u[0] = U_0(dt) i = mathcal{I}_x[-1] if U_L is None: im1 = i-1 ip1 = im1 # i+1 -> i-1 u[i] = u_1[i] + dt*V(x[i]) + \ 0.5*C2*(u_1[im1] - 2*u_1[i] + u_1[ip1]) + \ 0.5*dt2*f(x[i], t[0]) else: u[i] = U_L(dt) if user_action is not None: user_action(u, x, t, 1) # Update data structures for next step u_2[:], u_1[:] = u_1, u for n in mathcal{I}_t[1:-1]: # Update all inner points if version == 'scalar': for i in mathcal{I}_x[1:-1]: u[i] = - u_2[i] + 2*u_1[i] + \ C2*(u_1[i-1] - 2*u_1[i] + u_1[i+1]) + \ dt2*f(x[i], t[n]) elif version == 'vectorized': u[1:-1] = - u_2[1:-1] + 2*u_1[1:-1] + \ C2*(u_1[0:-2] - 2*u_1[1:-1] + u_1[2:]) + \ dt2*f(x[1:-1], t[n]) else: raise 
        # Insert boundary conditions
        i = Ix[0]
        if U_0 is None:
            # Set boundary values
            # x=0: i-1 -> i+1 since u[i-1]=u[i+1] when du/dn=0
            # x=L: i+1 -> i-1 since u[i+1]=u[i-1] when du/dn=0
            ip1 = i+1
            im1 = ip1
            u[i] = - u_2[i] + 2*u_1[i] + \
                   C2*(u_1[im1] - 2*u_1[i] + u_1[ip1]) + \
                   dt2*f(x[i], t[n])
        else:
            u[0] = U_0(t[n+1])

        i = Ix[-1]
        if U_L is None:
            im1 = i-1
            ip1 = im1
            u[i] = - u_2[i] + 2*u_1[i] + \
                   C2*(u_1[im1] - 2*u_1[i] + u_1[ip1]) + \
                   dt2*f(x[i], t[n])
        else:
            u[i] = U_L(t[n+1])

        if user_action is not None:
            if user_action(u, x, t, n+1):
                break

        # Update data structures for next step
        u_2[:], u_1[:] = u_1, u

    cpu_time = time.time() - t0
    return u, x, t, cpu_time

def viz(I, V, f, c, U_0, U_L, x0, xL, Nx, C, T, umin, umax,
        version='scalar', animate=True, movie_dir='tmp'):
    """Run solver and visualize u at each time level."""
    import scitools.std as plt
    import time, glob, os, shutil

    def plot_u(u, x, t, n):
        """user_action function for solver."""
        plt.plot(x, u, 'r-',
                 xlabel='x', ylabel='u',
                 axis=[x0, xL, umin, umax],
                 title='t=%f' % t[n])
        # Let the initial condition stay on the screen for 2
        # seconds, else insert a pause of 0.2 s between each plot
        time.sleep(2) if t[n] == 0 else time.sleep(0.2)
        plt.savefig('frame_%04d.png' % n)  # for movie making

    # Clean up old movie frames
    for filename in glob.glob('frame_*.png'):
        os.remove(filename)

    user_action = plot_u if animate else None
    u, x, t, cpu = solver(I, V, f, c, U_0, U_L, x0, xL, Nx, C, T,
                          user_action, version)
    if animate:
        # Make a directory with the frames
        if os.path.isdir(movie_dir):
            shutil.rmtree(movie_dir)
        os.mkdir(movie_dir)
        os.chdir(movie_dir)
        # Move all frame_*.png files to this subdirectory
        for filename in glob.glob(os.path.join(os.pardir, 'frame_*.png')):
            os.rename(filename, os.path.basename(filename))
        plt.movie('frame_*.png', encoder='html', fps=4,
                  output_file='movie.html')
        # Invoke movie.html in a browser to steer the movie
    return cpu

import nose.tools as nt

def test_quadratic():
    """
    Check the scalar and vectorized versions work for a
    quadratic u(x,t)=x(L-x)(1+t/2) that is exactly reproduced.
    We simulate in [0, L/2] and apply a symmetry condition
    at the end x=L/2.
    """
    exact_solution = lambda x, t: x*(L-x)*(1+0.5*t)
    I = lambda x: exact_solution(x, 0)
    V = lambda x: 0.5*exact_solution(x, 0)
    f = lambda x, t: 2*(1+0.5*t)*c**2
    U_0 = lambda t: exact_solution(0, t)
    U_L = None
    L = 2.5
    c = 1.5
    Nx = 3  # very coarse mesh
    C = 1
    T = 18  # long time integration

    def assert_no_error(u, x, t, n):
        u_e = exact_solution(x, t[n])
        diff = abs(u - u_e).max()
        nt.assert_almost_equal(diff, 0, places=13)

    solver(I, V, f, c, U_0, U_L, 0, L/2, Nx, C, T,
           user_action=assert_no_error, version='scalar')
    solver(I, V, f, c, U_0, U_L, 0, L/2, Nx, C, T,
           user_action=assert_no_error, version='vectorized')

def plug(C=1, Nx=50, animate=True, version='scalar', T=2):
    """Plug profile as initial condition."""
    L = 1.
    c = 1
    delta = 0.1

    def I(x):
        if abs(x) > delta:
            return 0
        else:
            return 1

    # Solution on [-L,L]
    cpu = viz(I=I, V=0, f=0, c=c, U_0=0, U_L=0,
              x0=-L, xL=L, Nx=2*Nx, C=C, T=T,
              umin=-1.1, umax=1.1,
              version=version, animate=animate, movie_dir='full')

    # Solution on [0,L]
    cpu = viz(I=I, V=0, f=0, c=c, U_0=None, U_L=0,
              x0=0, xL=L, Nx=Nx, C=C, T=T,
              umin=-1.1, umax=1.1,
              version=version, animate=animate, movie_dir='half')

if __name__ == '__main__':
    plug()
```

<!-- --- end solution of exercise --- -->
Filename: `wave1D_symmetric`.
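The essence of the symmetry condition in the code above is the mirroring of the fictitious point (`im1 = ip1` at the left end, `ip1 = im1` at the right end). Stripped of everything else, that boundary update can be written as follows (a sketch; the array names follow the solver above):

```python
import numpy as np

def symmetry_update(u, u_1, u_2, C2, dt2, f, x, t, n, Nx):
    """Apply du/dn=0 at i=0 and i=Nx by mirroring the fictitious point."""
    for i, mirror in ((0, 1), (Nx, Nx - 1)):
        # u_1[i-1] and u_1[i+1] are both replaced by u_1[mirror], so the
        # stencil collapses to 2*(u_1[mirror] - u_1[i])
        u[i] = -u_2[i] + 2*u_1[i] + \
               2*C2*(u_1[mirror] - u_1[i]) + dt2*f(x[i], t[n])
    return u
```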
<!-- --- end exercise --- -->

<!-- --- begin exercise --- -->

## Exercise 3: Send pulse waves through a layered medium
<div id="wave:app:exer:pulse1D"></div>

Use the `pulse` function in `wave1D_dn_vc.py` to investigate sending a pulse, located with its peak at $x=0$, through two media with different wave velocities. The (scaled) velocity in the left medium is 1 while it is $\frac{1}{s_f}$ in the right medium. Report what happens with a Gaussian pulse, a "cosine hat" pulse, half a "cosine hat" pulse, and a plug pulse for resolutions $N_x=40,80,160$ and $s_f=2,4$. Simulate until $T=2$.

<!-- --- begin solution of exercise --- -->
**Solution.**
In all cases, the change in velocity causes some of the wave to be reflected back (while the rest is let through). When the waves go from higher to lower velocity, the amplitude builds, and vice versa.

```python
import wave1D_dn_vc as wave

for pulse_tp in 'gaussian', 'cosinehat', 'half-cosinehat', 'plug':
    for Nx in 40, 80, 160:
        for sf in 2, 4:
            if sf == 1 and Nx > 40:
                continue  # homogeneous medium with C=1: Nx=40 enough
            print('wave1D.pulse:', pulse_tp, Nx, sf)
            wave.pulse(C=1, Nx=Nx,
                       animate=False,  # just hardcopies
                       version='vectorized',
                       T=2, loc='left',
                       pulse_tp=pulse_tp,
                       slowness_factor=sf,
                       medium=[0.7, 0.9],
                       skip_frame=1,
                       sigma=0.05)
```

<!-- --- end solution of exercise --- -->
Filename: `pulse1D`.

<!-- --- end exercise --- -->

<!-- --- begin exercise --- -->

## Exercise 4: Explain why numerical noise occurs
<div id="wave:app:exer:pulse1D:analysis"></div>

The experiments performed in [Exercise 3: Send pulse waves through a layered medium](#wave:app:exer:pulse1D) show considerable numerical noise in the form of non-physical waves, especially for $s_f=4$ and the plug pulse or the half "cosine hat" pulse. The noise is much less visible for a Gaussian pulse. Run the case with the plug and the half "cosine hat" pulse for $s_f=1$, $C=0.9, 0.25$, and $N_x=40,80,160$. Use the numerical dispersion relation to explain the observations.
Filename: `pulse1D_analysis`.

<!-- --- end exercise --- -->

<!-- --- begin exercise --- -->

## Exercise 5: Investigate harmonic averaging in a 1D model
<div id="wave:app:exer:pulse1D:harmonic"></div>

Harmonic means are often used if the wave velocity is non-smooth or discontinuous. Will harmonic averaging of the wave velocity give less numerical noise for the case $s_f=4$ in [Exercise 3: Send pulse waves through a layered medium](#wave:app:exer:pulse1D)?
Filename: `pulse1D_harmonic`.

<!-- --- end exercise --- -->

<!-- --- begin exercise --- -->

## Problem 6: Implement open boundary conditions
<div id="wave:app:exer:radiationBC"></div>

<!-- Solution file is actually periodic.py from Exer [Exercise 7: Implement periodic boundary conditions](#wave:exer:periodic), -->
<!-- just remove the periodic stuff ;-) -->

To enable a wave to leave the computational domain and travel undisturbed through the boundary $x=L$, one can in a one-dimensional problem impose the following condition, called a *radiation condition* or *open boundary condition*:

<!-- Equation labels as ordinary links -->
<div id="wave:app:exer:radiationBC:eq"></div>

$$
\begin{equation}
\frac{\partial u}{\partial t} + c\frac{\partial u}{\partial x} = 0\thinspace .
\label{wave:app:exer:radiationBC:eq} \tag{55}
\end{equation}
$$

The parameter $c$ is the wave velocity.
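For later reference, the claim in the first task below can be checked symbolically. A quick sketch with `sympy` (the function $g$ is arbitrary):

```python
import sympy as sm

x, t, c = sm.symbols('x t c')
g = sm.Function('g')

u_right = g(x - c*t)                       # right-going wave
u_left  = g(x + c*t)                       # left-going wave
radiation = lambda u: sm.diff(u, t) + c*sm.diff(u, x)
print(sm.simplify(radiation(u_right)))     # 0: passes undisturbed
print(sm.simplify(radiation(u_left)))      # 2*c*g'(x + c*t): not a solution
```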
Show that ([55](#wave:app:exer:radiationBC:eq)) accepts a solution $u = g_R(x-ct)$ (right-going wave), but not $u = g_L(x+ct)$ (left-going wave). This means that ([55](#wave:app:exer:radiationBC:eq)) will allow any right-going wave $g_R(x-ct)$ to pass through the boundary undisturbed. A corresponding open boundary condition for a left-going wave through $x=0$ is <!-- Equation labels as ordinary links --> <div id="wave:app:exer:radiationBC:eqL"></div> $$ \begin{equation} \frac{\partial u}{\partial t} - c\frac{\partial u}{\partial x} = 0\thinspace . \label{wave:app:exer:radiationBC:eqL} \tag{56} \end{equation} $$ **a)** A natural idea for discretizing the condition ([55](#wave:app:exer:radiationBC:eq)) at the spatial end point $i=N_x$ is to apply centered differences in time and space: <!-- Equation labels as ordinary links --> <div id="wave:app:exer:radiationBC:eq:op"></div> $$ \begin{equation} [D_{2t}u + cD_{2x}u =0]^n_{i},\quad i=N_x\thinspace . \label{wave:app:exer:radiationBC:eq:op} \tag{57} \end{equation} $$ Eliminate the fictitious value $u_{N_x+1}^n$ by using the discrete equation at the same point. The equation for the first step, $u_i^1$, is in principle also affected, but we can then use the condition $u_{N_x}=0$ since the wave has not yet reached the right boundary. **b)** A much more convenient implementation of the open boundary condition at $x=L$ can be based on an explicit discretization <!-- Equation labels as ordinary links --> <div id="wave:app:exer:radiationBC:eq:op:1storder"></div> $$ \begin{equation} [D^+_tu + cD_x^- u = 0]_i^n,\quad i=N_x\thinspace . \label{wave:app:exer:radiationBC:eq:op:1storder} \tag{58} \end{equation} $$ From this equation, one can solve for $u^{n+1}_{N_x}$ and apply the formula as a Dirichlet condition at the boundary point. However, the finite difference approximations involved are of first order. Implement this scheme for a wave equation $u_{tt}=c^2u_{xx}$ in a domain $[0,L]$, where you have $u_x=0$ at $x=0$, the condition ([55](#wave:app:exer:radiationBC:eq)) at $x=L$, and an initial disturbance in the middle of the domain, e.g., a plug profile like $$ u(x,0) = \left\lbrace\begin{array}{ll} 1,& L/2-\ell \leq x \leq L/2+\ell,\\ 0,\hbox{otherwise}\end{array}\right. $$ Observe that the initial wave is split in two, the left-going wave is reflected at $x=0$, and both waves travel out of $x=L$, leaving the solution as $u=0$ in $[0,L]$. Use a unit Courant number such that the numerical solution is exact. Make a movie to illustrate what happens. Because this simplified implementation of the open boundary condition works, there is no need to pursue the more complicated discretization in a). <!-- --- begin hint in exercise --- --> **Hint.** Modify the solver function in [`wave1D_dn.py`](${src_wave}/wave1D/wave1D_dn.py). <!-- --- end hint in exercise --- --> **c)** Add the possibility to have either $u_x=0$ or an open boundary condition at the left boundary. The latter condition is discretized as <!-- Equation labels as ordinary links --> <div id="wave:app:exer:radiationBC:eq:op:1storder2"></div> $$ \begin{equation} [D^+_tu - cD_x^+ u = 0]_i^n,\quad i=0, \label{wave:app:exer:radiationBC:eq:op:1storder2} \tag{59} \end{equation} $$ leading to an explicit update of the boundary value $u^{n+1}_0$. The implementation can be tested with a Gaussian function as initial condition: $$ g(x;m,s) = \frac{1}{\sqrt{2\pi}s}e^{-\frac{(x-m)^2}{2s^2}}\thinspace . $$ Run two tests: 1. 
Disturbance in the middle of the domain, $I(x)=g(x;L/2,s)$, and open boundary condition at the left end.

2. Disturbance at the left end, $I(x)=g(x;0,s)$, and $u_x=0$ as symmetry boundary condition at this end.

Make test functions for both cases, testing that the solution is zero after the waves have left the domain.

**d)** In 2D and 3D it is difficult to compute the correct wave velocity normal to the boundary, which is needed in generalizations of the open boundary conditions in higher dimensions. Test the effect of having a slightly wrong wave velocity in ([58](#wave:app:exer:radiationBC:eq:op:1storder)). Make movies to illustrate what happens.
Filename: `wave1D_open_BC`.

<!-- Closing remarks for this Problem -->

### Remarks

The condition ([55](#wave:app:exer:radiationBC:eq)) works perfectly in 1D when $c$ is known. In 2D and 3D, however, the condition reads $u_t + c_x u_x + c_y u_y=0$, where $c_x$ and $c_y$ are the wave speeds in the $x$ and $y$ directions. Estimating these components (i.e., the direction of the wave) is often challenging. Other methods are normally used in 2D and 3D to let waves move out of a computational domain.

<!-- --- end exercise --- -->

<!-- --- begin exercise --- -->

## Exercise 7: Implement periodic boundary conditions
<div id="wave:exer:periodic"></div>

It is frequently of interest to follow wave motion over large distances and long times. A straightforward approach is to work with a very large domain, but that might lead to a lot of computations in areas of the domain where the waves cannot be noticed. A more efficient approach is to let a right-going wave out of the domain and at the same time let it enter the domain on the left. This is called a *periodic boundary condition*.

The boundary condition at the right end $x=L$ is an open boundary condition (see [Problem 6: Implement open boundary conditions](#wave:app:exer:radiationBC)) to let a right-going wave out of the domain. At the left end, $x=0$, we apply, in the beginning of the simulation, either a symmetry boundary condition (see [Problem 2: Explore symmetry boundary conditions](#wave:exer:symmetry:bc)) $u_x=0$, or an open boundary condition. This initial wave will split in two and either be reflected or transported out of the domain at $x=0$. The purpose of the exercise is to follow the right-going wave.

We can do that with a *periodic boundary condition*. This means that when the right-going wave hits the boundary $x=L$, the open boundary condition lets the wave out of the domain, but at the same time we use a boundary condition on the left end $x=0$ that feeds the outgoing wave into the domain again. This periodic condition is simply $u(0)=u(L)$. The switch from $u_x=0$ or an open boundary condition at the left end to a periodic condition can happen when $u(L,t)>\epsilon$, where $\epsilon =10^{-4}$ might be an appropriate value for determining when the right-going wave hits the boundary $x=L$.

The open boundary conditions can conveniently be discretized as explained in [Problem 6: Implement open boundary conditions](#wave:app:exer:radiationBC). Implement the described type of boundary conditions and test them on two different initial shapes: a plug $u(x,0)=1$ for $x\leq 0.1$, $u(x,0)=0$ for $x>0.1$, and a Gaussian function in the middle of the domain: $u(x,0)=\exp{(-\frac{1}{2}(x-0.5)^2/0.05)}$. The domain is the unit interval $[0,1]$. Run these two shapes for Courant numbers 1 and 0.5. Assume constant wave velocity. Make movies of the four cases. Reason why the solutions are correct.
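A minimal sketch of the switching logic just described, as it could look inside the time loop of a solver (all names here are assumptions, not code from `wave1D_dn.py`):

```python
def apply_left_bc(u, u_1, periodic, c, dt, dx, Nx, eps=1e-4):
    """One left-boundary update, with the open -> periodic switch."""
    if not periodic and u_1[Nx] > eps:
        periodic = True                     # right-going wave reached x=L
    if periodic:
        u[0] = u[Nx]                        # periodic condition u(0)=u(L)
    else:
        # explicit open condition (59): u_t - c*u_x = 0 at x=0
        u[0] = u_1[0] + c*dt/dx*(u_1[1] - u_1[0])
    return periodic
```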
Filename: `periodic`.

<!-- --- end exercise --- -->

<!-- --- begin exercise --- -->

## Exercise 8: Compare discretizations of a Neumann condition

We have a 1D wave equation with variable wave velocity: $u_{tt}=(qu_x)_x$. A Neumann condition $u_x=0$ at $x=0, L$ can be discretized as shown in ([20](#wave:pde2:var:c:scheme:impl:Neumann)) and ([23](#wave:pde2:var:c:scheme:impl:Neumann2)).

The aim of this exercise is to examine the rate of the numerical error when using different ways of discretizing the Neumann condition.

**a)** As a test problem, $q=1+(x-L/2)^4$ can be used, with $f(x,t)$ adapted such that the solution has a simple form, say $u(x,t)=\cos (\pi x/L)\cos (\omega t)$ for, e.g., $\omega = 1$. Perform numerical experiments and find the convergence rate of the error using the approximation ([20](#wave:pde2:var:c:scheme:impl:Neumann)).

**b)** Switch to $q(x)=1+\cos(\pi x/L)$, which is symmetric at $x=0,L$, and check the convergence rate of the scheme ([23](#wave:pde2:var:c:scheme:impl:Neumann2)). Now, $q_{i-1/2}$ is a 2nd-order approximation to $q_i$, $q_{i-1/2}=q_i + \frac{1}{8}q_i''\Delta x^2 + \cdots$, because $q_i'=0$ for $i=N_x$ (a similar argument can be applied to the case $i=0$).

**c)** A third discretization can be based on a simple and convenient, but less accurate, one-sided difference: $u_{i}-u_{i-1}=0$ at $i=N_x$ and $u_{i+1}-u_i=0$ at $i=0$. Derive the resulting scheme in detail and implement it. Run experiments with $q$ from a) or b) to establish the rate of convergence of the scheme.

**d)** A fourth technique is to view the scheme as

$$
[D_tD_tu]^n_i = \frac{1}{\Delta x}\left(
[qD_xu]_{i+\frac{1}{2}}^n - [qD_xu]_{i-\frac{1}{2}}^n\right)
+ [f]_i^n,
$$

and place the boundary at $x_{i+\frac{1}{2}}$, $i=N_x$, instead of exactly at the physical boundary. With this idea of approximating (moving) the boundary, we can just set $[qD_xu]_{i+\frac{1}{2}}^n=0$. Derive the complete scheme using this technique. The implementation of the boundary condition at $L-\Delta x/2$ is $\Oof{\Delta x^2}$ accurate, but the interesting question is what impact the movement of the boundary has on the convergence rate. Compute the errors as usual over the entire mesh and use $q$ from a) or b).
Filename: `Neumann_discr`.

<!-- --- end exercise --- -->

<!-- --- begin exercise --- -->

## Exercise 9: Verification by a cubic polynomial in space
<div id="wave:fd2:exer:verify:cubic"></div>

The purpose of this exercise is to verify the implementation of the `solver` function in the program [`wave1D_n0.py`](${src_wave}/wave1D/wave1D_n0.py) by using an exact numerical solution for the wave equation $u_{tt}=c^2u_{xx} + f$ with Neumann boundary conditions $u_x(0,t)=u_x(L,t)=0$.

A similar verification is used in the file [`wave1D_u0.py`](${src_wave}/wave1D/wave1D_u0.py), which solves the same PDE, but with Dirichlet boundary conditions $u(0,t)=u(L,t)=0$. The idea of the verification test in function `test_quadratic` in `wave1D_u0.py` is to produce a solution that is a lower-order polynomial such that both the PDE problem, the boundary conditions, and all the discrete equations are exactly fulfilled. Then the `solver` function should reproduce this exact solution to machine precision. More precisely, we seek $u=X(x)T(t)$, with $T(t)$ as a linear function and $X(x)$ as a parabola that fulfills the boundary conditions. Inserting this $u$ in the PDE determines $f$.
It turns out that $u$ also fulfills the discrete equations, because the truncation error of the discretized PDE has derivatives in $x$ and $t$ of order four and higher. These derivatives all vanish for a quadratic $X(x)$ and linear $T(t)$.

It would be attractive to use a similar approach in the case of Neumann conditions. We set $u=X(x)T(t)$ and seek lower-order polynomials $X$ and $T$. To force $u_x$ to vanish at the boundary, we let $X_x$ be a parabola. Then $X$ is a cubic polynomial. The fourth-order derivative of a cubic polynomial vanishes, so $u=X(x)T(t)$ will fulfill the discretized PDE also in this case, if $f$ is adjusted such that $u$ fulfills the PDE.

However, the discrete boundary condition is not exactly fulfilled by this choice of $u$. The reason is that

<!-- Equation labels as ordinary links -->
<div id="wave:fd2:exer:verify:cubic:D2x"></div>

$$
\begin{equation}
[D_{2x}u]^n_i = u_{x}(x_i,t_n) +
\frac{1}{6}u_{xxx}(x_i,t_n)\Delta x^2 + \Oof{\Delta x^4}\thinspace .
\label{wave:fd2:exer:verify:cubic:D2x} \tag{60}
\end{equation}
$$

At the two boundary points, we must demand that the derivative $X_x(x)=0$ such that $u_x=0$. However, $u_{xxx}$ is a constant and not zero when $X(x)$ is a cubic polynomial. Therefore, our $u=X(x)T(t)$ fulfills

$$
[D_{2x}u]^n_i = \frac{1}{6}u_{xxx}(x_i,t_n)\Delta x^2,
$$

and not

$$
[D_{2x}u]^n_i =0, \quad i=0,N_x,
$$

as it should. (Note that all the higher-order terms $\Oof{\Delta x^4}$ also have higher-order derivatives that vanish for a cubic polynomial.) So to summarize, the fundamental problem is that $u$ as a product of a cubic polynomial and a linear or quadratic polynomial in time is not an exact solution of the discrete boundary conditions.

To make progress, we assume that $u=X(x)T(t)$, where $T$ for simplicity is taken as a prescribed linear function $1+\frac{1}{2}t$, and $X(x)$ is taken as an *unknown* cubic polynomial $\sum_{j=0}^3 a_jx^j$. There are two different ways of determining the coefficients $a_0,\ldots,a_3$ such that both the discretized PDE and the discretized boundary conditions are fulfilled, under the constraint that we can specify a function $f(x,t)$ for the PDE to feed to the `solver` function in `wave1D_n0.py`. Both approaches are explained in the subexercises.

<!-- {wave:fd2:exer:verify:cubic:D2x} -->

**a)** One can insert $u$ in the discretized PDE and find the corresponding $f$. Then one can insert $u$ in the discretized boundary conditions. This yields two equations for the four coefficients $a_0,\ldots,a_3$. To find the coefficients, one can set $a_0=0$ and $a_1=1$ for simplicity and then determine $a_2$ and $a_3$. This approach will make $a_2$ and $a_3$ depend on $\Delta x$ and $f$ will depend on both $\Delta x$ and $\Delta t$.

Use `sympy` to perform analytical computations. A starting point is to define $u$ as follows:

```python
def test_cubic1():
    import sympy as sm
    x, t, c, L, dx, dt = sm.symbols('x t c L dx dt')
    i, n = sm.symbols('i n', integer=True)

    # Assume discrete solution is a polynomial of degree 3 in x
    T = lambda t: 1 + sm.Rational(1,2)*t            # Temporal term
    a = sm.symbols('a_0 a_1 a_2 a_3')
    X = lambda x: sum(a[q]*x**q for q in range(4))  # Spatial term
    u = lambda x, t: X(x)*T(t)
```

The symbolic expression for $u$ is reached by calling `u(x,t)` with `x` and `t` as `sympy` symbols.

Define `DxDx(u, i, n)`, `DtDt(u, i, n)`, and `D2x(u, i, n)` as Python functions for returning the difference approximations $[D_xD_x u]^n_i$, $[D_tD_t u]^n_i$, and $[D_{2x}u]^n_i$.
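A sketch of how these three operators can be written with `sympy`, consistent with the symbols `x`, `t`, `dx`, `dt`, `i`, `n` introduced in `test_cubic1` above (evaluation at the mesh point is done by inserting $x=i\Delta x$ and $t=n\Delta t$):

```python
# Difference operators acting on the symbolic u(x, t) (sketch).
DxDx = lambda u, i, n: (u((i+1)*dx, n*dt) - 2*u(i*dx, n*dt)
                        + u((i-1)*dx, n*dt))/dx**2
DtDt = lambda u, i, n: (u(i*dx, (n+1)*dt) - 2*u(i*dx, n*dt)
                        + u(i*dx, (n-1)*dt))/dt**2
D2x  = lambda u, i, n: (u((i+1)*dx, n*dt) - u((i-1)*dx, n*dt))/(2*dx)
```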
The next step is to set up the residuals for the equations $[D_{2x}u]^n_0=0$ and $[D_{2x}u]^n_{N_x}=0$, where $N_x=L/\Delta x$. Call the residuals `R_0` and `R_L`. Substitute $a_0$ and $a_1$ by 0 and 1, respectively, in `R_0`, `R_L`, and `a`:

```python
    R_0 = R_0.subs(a[0], 0).subs(a[1], 1)
    R_L = R_L.subs(a[0], 0).subs(a[1], 1)
    a = list(a)  # enable in-place assignment
    a[0:2] = 0, 1
```

Determining $a_2$ and $a_3$ from the discretized boundary conditions is then about solving two equations with respect to $a_2$ and $a_3$, i.e., `a[2:]`:

```python
    s = sm.solve([R_0, R_L], a[2:])
    # s is a dictionary with the unknowns a[2] and a[3] as keys
    a[2:] = s[a[2]], s[a[3]]
```

Now, `a` contains computed values and `u` will automatically use these new values since `X` accesses `a`.

Compute the source term $f$ from the discretized PDE: $f^n_i = [D_tD_t u - c^2D_xD_x u]^n_i$. Turn $u$, the time derivative $u_t$ (needed for the initial condition $V(x)$), and $f$ into Python functions. Set numerical values for $L$, $N_x$, $C$, and $c$. Prescribe the time step as $\Delta t = CL/(N_xc)$, which implies $\Delta x = c\Delta t/C = L/N_x$. Define new functions `I(x)`, `V(x)`, and `f(x,t)` as wrappers of the ones made above, where fixed values of $L$, $c$, $\Delta x$, and $\Delta t$ are inserted, such that `I`, `V`, and `f` can be passed on to the `solver` function. Finally, call `solver` with a `user_action` function that compares the numerical solution to this exact solution $u$ of the discrete PDE problem.

<!-- --- begin hint in exercise --- -->
**Hint.**
To turn a `sympy` expression `e`, depending on a series of symbols, say `x`, `t`, `dx`, `dt`, `L`, and `c`, into a plain Python function `e_exact(x,t,L,dx,dt,c)`, one can write

```python
e_exact = sm.lambdify([x, t, L, dx, dt, c], e, 'numpy')
```

The `'numpy'` argument is a good habit as the `e_exact` function will then work with array arguments if it contains mathematical functions (but here we only do plain arithmetic, which automatically works with arrays).

<!-- --- end hint in exercise --- -->

**b)** An alternative way of determining $a_0,\ldots,a_3$ is to reason as follows. We first construct $X(x)$ such that the boundary conditions are fulfilled: $X=x(L-x)$. However, to compensate for the fact that this choice of $X$ does not fulfill the discrete boundary condition, we seek $u$ such that

$$
u_x = \frac{\partial}{\partial x}x(L-x)T(t) - \frac{1}{6}u_{xxx}\Delta x^2,
$$

since this $u$ will fit the discrete boundary condition. Assuming $u=T(t)\sum_{j=0}^3a_jx^j$, we can use the above equation to determine the coefficients $a_1,a_2,a_3$. A value, e.g., 1 can be used for $a_0$. The following `sympy` code computes this $u$:

```python
def test_cubic2():
    import sympy as sm
    x, t, c, L, dx = sm.symbols('x t c L dx')
    T = lambda t: 1 + sm.Rational(1,2)*t  # Temporal term
    # Set u as a 3rd-degree polynomial in space
    a = sm.symbols('a_0 a_1 a_2 a_3')
    X = lambda x: sum(a[i]*x**i for i in range(4))
    u = lambda x, t: X(x)*T(t)
    # Force the discrete boundary condition to be zero by adding
    # a correction term to the analytical suggestion x*(L-x)*T:
    # u_x = x*(L-x)*T(t) - 1/6*u_xxx*dx**2
    R = sm.diff(u(x,t), x) - (
        x*(L-x) - sm.Rational(1,6)*sm.diff(u(x,t), x, x, x)*dx**2)
    # R is a polynomial: force all coefficients to vanish.
    # Turn R into a Poly to extract the coefficients:
    R = sm.poly(R, x)
    coeff = R.all_coeffs()
    s = sm.solve(coeff, a[1:])  # a[0] is not present in R
    # s is a dictionary with a[i] as keys
    # Fix a[0] as 1
    s[a[0]] = 1
    X = lambda x: sm.simplify(sum(s[a[i]]*x**i for i in range(4)))
    u = lambda x, t: X(x)*T(t)
    print('u:', u(x, t))
```

The next step is to find the source term `f_e` by inserting `u_e` in the PDE. Thereafter, turn `u`, `f`, and the time derivative of `u` into plain Python functions as in a), and then wrap these functions in new functions `I`, `V`, and `f`, with the right signature as required by the `solver` function. Set parameters as in a) and check that the solution is exact to machine precision at each time level using an appropriate `user_action` function.

Filename: `wave1D_n0_test_cubic`.

<!-- --- end exercise --- -->
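To illustrate the final wrapping step described in a) and b), here is a sketch of how the symbolic results can be fed to `solver` (the names `u` and `f_expr` and all the numerical values are assumptions made for the sake of the example):

```python
# Sketch: from sympy expressions to solver-ready Python functions.
u_e = sm.lambdify([x, t, L, dx, dt, c], u(x, t), 'numpy')
f_e = sm.lambdify([x, t, L, dx, dt, c], f_expr, 'numpy')  # f_expr: source term
V_e = sm.lambdify([x, t, L, dx, dt, c], sm.diff(u(x, t), t), 'numpy')

L_v, Nx, C, c_v = 2.0, 4, 0.75, 1.5   # illustrative numbers
dt_v = C*L_v/(Nx*c_v)
dx_v = c_v*dt_v/C                     # equals L_v/Nx

I = lambda xv: u_e(xv, 0, L_v, dx_v, dt_v, c_v)
V = lambda xv: V_e(xv, 0, L_v, dx_v, dt_v, c_v)
f = lambda xv, tv: f_e(xv, tv, L_v, dx_v, dt_v, c_v)
# I, V, and f now have the signatures expected by solver() in wave1D_n0.py
```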
091094a1b3a0099b1fc1fc31edaf8d7424be1ef1
142,673
ipynb
Jupyter Notebook
fdm-devito-notebooks/02_wave/wave1D_fd2.ipynb
devitocodes/devito_book
30405c3d440a1f89df69594fd0704f69650c1ded
[ "CC-BY-4.0" ]
7
2020-07-17T13:19:15.000Z
2021-03-27T05:21:09.000Z
fdm-devito-notebooks/02_wave/wave1D_fd2.ipynb
devitocodes/devito_book
30405c3d440a1f89df69594fd0704f69650c1ded
[ "CC-BY-4.0" ]
73
2020-07-14T15:38:52.000Z
2020-09-25T11:54:59.000Z
fdm-devito-notebooks/02_wave/wave1D_fd2.ipynb
devitocodes/devito_book
30405c3d440a1f89df69594fd0704f69650c1ded
[ "CC-BY-4.0" ]
1
2021-03-27T05:21:14.000Z
2021-03-27T05:21:14.000Z
33.156635
200
0.522201
true
31,166
Qwen/Qwen-72B
1. YES 2. YES
0.815232
0.822189
0.670275
__label__eng_Latn
0.97874
0.395605
```julia
using DifferentialEquations
using Plots
```

Consider the simple reaction:

\begin{align}
A &\longleftrightarrow B\\
B &\longleftrightarrow C\\
\end{align}

Both are elementary steps that occur in the liquid phase, and we will consider them in a few different solvent environments.

```julia
gammaA(XA, XS, A12A, A21A) = exp.(XS.^2 .*(A12A .+ 2*(A21A - A12A)*XA))
gammaB(XB, XS, A12B, A21B) = exp.(XS.^2 .*(A12B .+ 2*(A21B - A12B)*XB))
gammaC(XC, XS, A12C, A21C) = exp.(XS.^2 .*(A12C .+ 2*(A21C - A12C)*XC))
gammaTS1(XTS1, XS, A12TS1, A21TS1) = exp.(XS.^2 .*(A12TS1 .+ 2*(A21TS1 - A12TS1)*XTS1))
gammaTS2(XTS2, XS, A12TS2, A21TS2) = exp.(XS.^2 .*(A12TS2 .+ 2*(A21TS2 - A12TS2)*XTS2))

z1(XA, XB, XS, A12A, A21A, A12B, A21B) =
    1/K10*gammaB(XB, XS, A12B, A21B)./gammaA(XA, XS, A12A, A21A).*XB./XA
z2(XB, XC, XS, A12B, A21B, A12C, A21C) =
    1/K20*gammaC(XC, XS, A12C, A21C)./gammaB(XB, XS, A12B, A21B).*XC./XB

rate1(XA, XB, XTS1, XS, A12A, A21A, A12B, A21B, A12TS1, A21TS1) =
    k10*gammaA(XA, XS, A12A, A21A)./gammaTS1(XTS1, XS, A12TS1, A21TS1).*XA.*
    (1 .- z1(XA, XB, XS, A12A, A21A, A12B, A21B))
rate2(XB, XC, XTS2, XS, A12B, A21B, A12C, A21C, A12TS2, A21TS2) =
    k20*gammaB(XB, XS, A12B, A21B)./gammaTS2(XTS2, XS, A12TS2, A21TS2).*XB.*
    (1 .- z2(XB, XC, XS, A12B, A21B, A12C, A21C))
```

    rate2 (generic function with 1 method)

```julia
function batch(du, u, p, t)
    MAR = p["MAR"]
    PAR = p["PAR"]
    k10, k20, K10, K20, V, NS = PAR

    NA = u[:,1]
    NB = u[:,2]
    NC = u[:,3]
    NT = NA + NB + NC .+ NS
    XA = NA./NT
    XB = NB./NT
    XC = NC./NT
    XTS1 = XA
    XTS2 = XB
    XS = NS./NT

    # For A in solvent
    A12A = MAR[1]
    A21A = MAR[2]
    # For B in solvent
    A12B = MAR[3]
    A21B = MAR[4]
    # For C in solvent
    A12C = MAR[5]
    A21C = MAR[6]
    # For Transition State 1 in solvent
    A12TS1 = MAR[7]
    A21TS1 = MAR[8]
    # For Transition State 2 in solvent
    A12TS2 = MAR[9]
    A21TS2 = MAR[10]

    gammaA = exp.(XS.^2 .*(A12A .+ 2*(A21A - A12A)*XA))
    gammaB = exp.(XS.^2 .*(A12B .+ 2*(A21B - A12B)*XB))
    gammaC = exp.(XS.^2 .*(A12C .+ 2*(A21C - A12C)*XC))
    gammaTS1 = exp.(XS.^2 .*(A12TS1 .+ 2*(A21TS1 - A12TS1)*XTS1))
    gammaTS2 = exp.(XS.^2 .*(A12TS2 .+ 2*(A21TS2 - A12TS2)*XTS2))

    z1 = 1/K10*gammaB./gammaA.*XB./XA
    z2 = 1/K20*gammaC./gammaB.*XC./XB
    z2[isnan.(z2)] .= 0

    r1 = k10*gammaA./gammaTS1.*XA.*(1 .- z1).*NT/V
    r2 = k20*gammaB./gammaTS2.*XB.*(1 .- z2).*NT/V

    RA = -r1[1]
    RB = r1[1] - r2[1]
    RC = r2[1]

    du[1] = RA*V
    du[2] = RB*V
    du[3] = RC*V
    return du, r1, r2, z1, z2
end
```

    batch (generic function with 1 method)

```julia
k10 = 1
k20 = 1
K10 = 1
K20 = 1
V = 1
NTOT = 100
NA0 = 0.1
NB0 = 0.0
NC0 = 0.0
NS = NTOT - NA0 - NB0 - NC0
var0 = [NA0 NB0 NC0]
span = (0.0, 20.0);
```

```julia
# Solvate transition state relative to reactants
MARSET1 = zeros(10,3)
MARSET1[:,1] = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]  # no solvation
MARSET1[:,2] = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]  # destabilize TS1
MARSET1[:,3] = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]  # stabilize TS1

tfine = range(0.0, stop = maximum(span), length = 1000)
e1out = zeros(length(tfine), size(MARSET1, 2))
e2out = zeros(length(tfine), size(MARSET1, 2))
r1out = zeros(length(tfine), size(MARSET1, 2))
r2out = zeros(length(tfine), size(MARSET1, 2))
z1out = zeros(length(tfine), size(MARSET1, 2))
z2out = zeros(length(tfine), size(MARSET1, 2))

for i = 1:size(MARSET1, 2)
    p0 = Dict("MAR" => MARSET1[:,i], "PAR" => [k10, k20, K10, K20, V, NS])
    prob = ODEProblem(batch, var0, span, p0)
    sol = solve(prob, Rodas5(), abstol = 1e-10, reltol = 1e-10)
    solf = sol(tfine)
    NA = solf[1,:]
    NB = solf[2,:]
    NC = solf[3,:]
    NT = NA + NB + NC .+ NS
    ex1 = (NA0 .- NA)/NA0
    ex2 = (NC)/NA0
    dut, rt1, rt2, zt1, zt2 = batch([0., 0., 0.], [NA NB NC], p0, tfine)
    e1out[:,i] = ex1
    e2out[:,i] = ex2
    r1out[:,i] = rt1
    r2out[:,i] = rt2
    z1out[:,i] = zt1
    z2out[:,i] = zt2
end

plt1 = plot(tfine, e1out, xlabel = "time", ylabel = "extent",
            labels = ["e1" nothing nothing], legend = :bottomright)
plt1 = plot!(plt1, tfine, e2out, ls = :dash, labels = ["e2" nothing nothing])
plt2 = plot(tfine, r1out, xlabel = "time", ylabel = "rate",
            labels = ["r1" nothing nothing], legend = :topright)
plt2 = plot!(tfine, r2out, ls = :dash, labels = ["r2" nothing nothing])
plt3 = plot(e1out, r1out, xlabel = "extent", ylabel = "rate",
            labels = ["r1" nothing nothing], legend = :topright)
plt3 = plot!(e1out, r2out, ls = :dash, labels = ["r2" nothing nothing])
plt4 = plot(e1out, z1out, xlabel = "extent", ylabel = "z",
            labels = ["z1" nothing nothing], legend = :topright)
plt4 = plot!(e1out, z2out, ls = :dash, labels = ["z2" nothing nothing])
plt5 = plot(tfine, z1out, xlabel = "time", ylabel = "z",
            labels = ["z1" nothing nothing], legend = :topright)
plt5 = plot!(tfine, z2out, ls = :dash, labels = ["z2" nothing nothing])

display(plt1)
display(plt2)
display(plt3)
display(plt4)
display(plt5)
```
d0e383c262941cb3bae976cf83a378ee1a70408c
694,451
ipynb
Jupyter Notebook
2021_JCAT_DeDonder_Solvents/Case Study 2.ipynb
jqbond/Research_Public
a6eb581e4e3e72f40fd6c7e900b6f4b30311076f
[ "MIT" ]
null
null
null
2021_JCAT_DeDonder_Solvents/Case Study 2.ipynb
jqbond/Research_Public
a6eb581e4e3e72f40fd6c7e900b6f4b30311076f
[ "MIT" ]
null
null
null
2021_JCAT_DeDonder_Solvents/Case Study 2.ipynb
jqbond/Research_Public
a6eb581e4e3e72f40fd6c7e900b6f4b30311076f
[ "MIT" ]
null
null
null
177.472783
16,447
0.684044
true
2,386
Qwen/Qwen-72B
1. YES 2. YES
0.865224
0.651355
0.563568
__label__yue_Hant
0.206957
0.147687
# Monte Carlo - Double Well

\begin{equation}
V(x)=E_{0}\left[ \left(\frac{x}{a}\right)^4 -2\left(\frac{x}{a}\right)^2 \right]-\frac{b}{a}x
\end{equation}

```python
import openmm as mm
from openmm import app
from openmm import unit
from openmmtools.constants import kB
import numpy as np
from tqdm import tqdm
import matplotlib.pyplot as plt
```

```python
from numpy.random import default_rng
rng = default_rng()
```

```python
# System definition.
n_particles = 1
mass = 100 * unit.amu
```

```python
# Creating the system.
system = mm.System()
for ii in range(n_particles):
    system.addParticle(mass)
```

```python
# Adding the external potential to the system
Eo = 3.0 * unit.kilocalories_per_mole
a = 0.5 * unit.nanometers
b = 0.0 * unit.kilocalories_per_mole
k = 1.0*unit.kilocalories_per_mole/unit.angstrom**2

A = Eo/(a**4)
B = -2.0*Eo/(a**2)
C = -b/a
D = k/2.0

force = mm.CustomExternalForce('A*x^4+B*x^2+C*x + D*(y^2+z^2)')
force.addGlobalParameter('A', A)
force.addGlobalParameter('B', B)
force.addGlobalParameter('C', C)
force.addGlobalParameter('D', D)
for ii in range(n_particles):
    force.addParticle(ii, [])
_ = system.addForce(force)
```

```python
# Defining the thermodynamic state and the integrator.
step_size = 0.01*unit.picoseconds
temperature = 300*unit.kelvin
friction = 1.0/unit.picosecond  # damping for the Langevin dynamics

integrator = mm.LangevinIntegrator(temperature, friction, step_size)
```

**Please try replacing 'CPU' with 'CUDA' in the following cell to see whether it runs**

```python
# Creating the platform.
platform_name = 'CUDA'
platform = mm.Platform.getPlatformByName(platform_name)
```

```python
# Creating the context.
context = mm.Context(system, integrator, platform)
```

```python
def movement(lmax):
    return lmax * rng.uniform(-1, 1)
```

```python
def decide(Ui, Uf, temperature):
    kBT = kB * temperature
    accept = False
    if Uf <= Ui:
        accept = True
    else:
        weight = np.exp(-(Uf - Ui)/kBT)
        random = rng.uniform(0, 1)
        if weight >= random:
            accept = True
        else:
            accept = False
    return accept
```

```python
# Initial conditions
initial_positions = np.zeros([n_particles, 3], np.float32) * unit.angstroms
initial_positions[0,0] = 5.0 * unit.angstroms

mc_steps = 50000
num_trues = 0
mc_traj = np.zeros([mc_steps+1], np.float32) * unit.angstroms
mc_traj[0] = initial_positions[0,0]

for ii in tqdm(range(mc_steps)):
    context.setPositions(initial_positions)
    state_initial = context.getState(getEnergy=True)
    Ui = state_initial.getPotentialEnergy()
    final_positions = np.zeros([n_particles, 3], np.float32) * unit.angstroms
    final_positions[0,0] = initial_positions[0,0] + movement(4.0*unit.angstroms)
    context.setPositions(final_positions)
    state_final = context.getState(getEnergy=True)
    Uf = state_final.getPotentialEnergy()
    accept = decide(Ui, Uf, temperature)
    if accept == True:
        initial_positions = final_positions
        num_trues += 1
    mc_traj[ii+1] = initial_positions[0,0]

acceptance_rate = num_trues/mc_steps
```

    100%|██████████| 50000/50000 [00:16<00:00, 3019.51it/s]

```python
acceptance_rate
```

    0.34022

```python
plt.scatter(range(mc_steps+1), mc_traj)
```

```python
mc_traj.mean()
```

    Quantity(value=0.705989, unit=angstrom)

```python

```
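To check that the sampled trajectory populates the double well with the expected Boltzmann weight, a quick comparison can be appended (a sketch; it reuses `mc_traj`, `Eo`, `a`, `b`, `temperature`, and `kB` from the cells above, and the plotting choices are illustrative):

```python
# Compare the MC histogram against the analytic Boltzmann weight of V(x).
kBT = (kB * temperature).value_in_unit(unit.kilocalories_per_mole)
xs = np.linspace(-10.0, 10.0, 400)  # positions in angstroms
a_v = a.value_in_unit(unit.angstroms)
Eo_v = Eo.value_in_unit(unit.kilocalories_per_mole)
b_v = b.value_in_unit(unit.kilocalories_per_mole)

V_x = Eo_v*((xs/a_v)**4 - 2*(xs/a_v)**2) - (b_v/a_v)*xs
w = np.exp(-V_x/kBT)
w /= np.trapz(w, xs)  # normalize to a probability density

plt.hist(mc_traj.value_in_unit(unit.angstroms), bins=80, density=True,
         alpha=0.5, label='MC samples')
plt.plot(xs, w, label='Boltzmann weight')
plt.xlabel('x (angstrom)')
plt.legend()
plt.show()
```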
0f7586853f6527893dc68e9e83421e754fd02559
34,815
ipynb
Jupyter Notebook
Tarea5/Example.ipynb
dprada/DIY_MD
6fb4f880616a558a03d67f1cbb8426ccda6cd4e2
[ "MIT" ]
null
null
null
Tarea5/Example.ipynb
dprada/DIY_MD
6fb4f880616a558a03d67f1cbb8426ccda6cd4e2
[ "MIT" ]
null
null
null
Tarea5/Example.ipynb
dprada/DIY_MD
6fb4f880616a558a03d67f1cbb8426ccda6cd4e2
[ "MIT" ]
1
2022-02-15T21:10:09.000Z
2022-02-15T21:10:09.000Z
101.501458
26,664
0.859687
true
1,070
Qwen/Qwen-72B
1. YES 2. YES
0.831143
0.672332
0.558804
__label__kor_Hang
0.154386
0.136618
# Sample Coding Exercise : Interpolation - https://www.hackerrank.com/contests/intro-to-statistics/challenges/temperature-predictions/problem - Take care with 2-D: you may need to use the correlation in the variables to improve the fit!\ ```python %matplotlib inline from IPython.core.display import display, HTML import matplotlib.pyplot as plt import matplotlib.dates as mdates from pylab import rcParams import matplotlib.pyplot as plt import matplotlib.dates as mdates import pandas_profiling from pylab import rcParams rcParams['figure.figsize'] = 10, 6 plt.rc("font", size=14) ``` /anaconda3/lib/python3.6/site-packages/pandas_profiling/plot.py:15: UserWarning: This call to matplotlib.use() has no effect because the backend has already been chosen; matplotlib.use() must be called *before* pylab, matplotlib.pyplot, or matplotlib.backends is imported for the first time. The backend was *originally* set to 'module://ipykernel.pylab.backend_inline' by the following code: File "/anaconda3/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/anaconda3/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py", line 16, in <module> app.launch_new_instance() File "/anaconda3/lib/python3.6/site-packages/traitlets/config/application.py", line 658, in launch_instance app.start() File "/anaconda3/lib/python3.6/site-packages/ipykernel/kernelapp.py", line 486, in start self.io_loop.start() File "/anaconda3/lib/python3.6/site-packages/tornado/platform/asyncio.py", line 127, in start self.asyncio_loop.run_forever() File "/anaconda3/lib/python3.6/asyncio/base_events.py", line 422, in run_forever self._run_once() File "/anaconda3/lib/python3.6/asyncio/base_events.py", line 1432, in _run_once handle._run() File "/anaconda3/lib/python3.6/asyncio/events.py", line 145, in _run self._callback(*self._args) File "/anaconda3/lib/python3.6/site-packages/tornado/platform/asyncio.py", line 117, in _handle_events handler_func(fileobj, events) File "/anaconda3/lib/python3.6/site-packages/tornado/stack_context.py", line 276, in null_wrapper return fn(*args, **kwargs) File "/anaconda3/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py", line 450, in _handle_events self._handle_recv() File "/anaconda3/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py", line 480, in _handle_recv self._run_callback(callback, msg) File "/anaconda3/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py", line 432, in _run_callback callback(*args, **kwargs) File "/anaconda3/lib/python3.6/site-packages/tornado/stack_context.py", line 276, in null_wrapper return fn(*args, **kwargs) File "/anaconda3/lib/python3.6/site-packages/ipykernel/kernelbase.py", line 283, in dispatcher return self.dispatch_shell(stream, msg) File "/anaconda3/lib/python3.6/site-packages/ipykernel/kernelbase.py", line 233, in dispatch_shell handler(stream, idents, msg) File "/anaconda3/lib/python3.6/site-packages/ipykernel/kernelbase.py", line 399, in execute_request user_expressions, allow_stdin) File "/anaconda3/lib/python3.6/site-packages/ipykernel/ipkernel.py", line 208, in do_execute res = shell.run_cell(code, store_history=store_history, silent=silent) File "/anaconda3/lib/python3.6/site-packages/ipykernel/zmqshell.py", line 537, in run_cell return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs) File "/anaconda3/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2662, in run_cell raw_cell, store_history, silent, 
shell_futures) File "/anaconda3/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2785, in _run_cell interactivity=interactivity, compiler=compiler, result=result) File "/anaconda3/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2903, in run_ast_nodes if self.run_code(code, result): File "/anaconda3/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2963, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-1-183fefe6eb41>", line 1, in <module> get_ipython().run_line_magic('matplotlib', 'inline') File "/anaconda3/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2131, in run_line_magic result = fn(*args,**kwargs) File "<decorator-gen-107>", line 2, in matplotlib File "/anaconda3/lib/python3.6/site-packages/IPython/core/magic.py", line 187, in <lambda> call = lambda f, *a, **k: f(*a, **k) File "/anaconda3/lib/python3.6/site-packages/IPython/core/magics/pylab.py", line 99, in matplotlib gui, backend = self.shell.enable_matplotlib(args.gui) File "/anaconda3/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3051, in enable_matplotlib pt.activate_matplotlib(backend) File "/anaconda3/lib/python3.6/site-packages/IPython/core/pylabtools.py", line 311, in activate_matplotlib matplotlib.pyplot.switch_backend(backend) File "/anaconda3/lib/python3.6/site-packages/matplotlib/pyplot.py", line 231, in switch_backend matplotlib.use(newbackend, warn=False, force=True) File "/anaconda3/lib/python3.6/site-packages/matplotlib/__init__.py", line 1410, in use reload(sys.modules['matplotlib.backends']) File "/anaconda3/lib/python3.6/importlib/__init__.py", line 166, in reload _bootstrap._exec(spec, module) File "/anaconda3/lib/python3.6/site-packages/matplotlib/backends/__init__.py", line 16, in <module> line for line in traceback.format_stack() matplotlib.use(BACKEND) ```python import os, sys, re import calendar import collections from collections import defaultdict, OrderedDict from scipy.stats import linregress from datetime import datetime from dateutil.relativedelta import * import itertools from dateutil import parser import pandas as pd pd.set_option('display.max_columns', 100) import numpy as np import scipy import statsmodels import statsmodels.api as sm import statsmodels.formula.api as smf import statsmodels.tsa.api as smt import sympy import requests from bs4 import BeautifulSoup from scipy.stats import mode from scipy import interp from sklearn import preprocessing, linear_model, metrics from sklearn.linear_model import LogisticRegression from sklearn.model_selection import train_test_split from sklearn.cross_validation import cross_val_score from sklearn.model_selection import StratifiedKFold from sklearn.linear_model import LinearRegression from sklearn.model_selection import GridSearchCV from sklearn.multiclass import OneVsRestClassifier from sklearn.linear_model import LogisticRegression from sklearn.metrics import roc_auc_score, f1_score, classification_report, roc_curve, auc from sklearn.pipeline import Pipeline, FeatureUnion ``` /anaconda3/lib/python3.6/site-packages/sklearn/cross_validation.py:41: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20. 
"This module will be removed in 0.20.", DeprecationWarning) ```python ## Data I/O Test Data Provided - Input: I am reading in the data from copy paste from the website - Output: ordered list printed to terminal #! note copy from keyboard will turn columns to strings ``` ```python df = pd.read_clipboard(header = 0) display(df) ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>yyyy</th> <th>month</th> <th>tmax</th> <th>tmin</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>1908</td> <td>January</td> <td>5.0</td> <td>-1.4</td> </tr> <tr> <th>1</th> <td>1908</td> <td>February</td> <td>7.3</td> <td>1.9</td> </tr> <tr> <th>2</th> <td>1908</td> <td>March</td> <td>6.2</td> <td>0.3</td> </tr> <tr> <th>3</th> <td>1908</td> <td>April</td> <td>Missing_1</td> <td>2.1</td> </tr> <tr> <th>4</th> <td>1908</td> <td>May</td> <td>Missing_2</td> <td>7.7</td> </tr> <tr> <th>5</th> <td>1908</td> <td>June</td> <td>17.7</td> <td>8.7</td> </tr> <tr> <th>6</th> <td>1908</td> <td>July</td> <td>Missing_3</td> <td>11.0</td> </tr> <tr> <th>7</th> <td>1908</td> <td>August</td> <td>17.5</td> <td>9.7</td> </tr> <tr> <th>8</th> <td>1908</td> <td>September</td> <td>16.3</td> <td>8.4</td> </tr> <tr> <th>9</th> <td>1908</td> <td>October</td> <td>14.6</td> <td>8.0</td> </tr> <tr> <th>10</th> <td>1908</td> <td>November</td> <td>9.6</td> <td>3.4</td> </tr> <tr> <th>11</th> <td>1908</td> <td>December</td> <td>5.8</td> <td>Missing_4</td> </tr> <tr> <th>12</th> <td>1909</td> <td>January</td> <td>5.0</td> <td>0.1</td> </tr> <tr> <th>13</th> <td>1909</td> <td>February</td> <td>5.5</td> <td>-0.3</td> </tr> <tr> <th>14</th> <td>1909</td> <td>March</td> <td>5.6</td> <td>-0.3</td> </tr> <tr> <th>15</th> <td>1909</td> <td>April</td> <td>12.2</td> <td>3.3</td> </tr> <tr> <th>16</th> <td>1909</td> <td>May</td> <td>14.7</td> <td>4.8</td> </tr> <tr> <th>17</th> <td>1909</td> <td>June</td> <td>15.0</td> <td>7.5</td> </tr> <tr> <th>18</th> <td>1909</td> <td>July</td> <td>17.3</td> <td>10.8</td> </tr> <tr> <th>19</th> <td>1909</td> <td>August</td> <td>18.8</td> <td>10.7</td> </tr> </tbody> </table> </div> ```python df_answer = pd.read_clipboard(header = None) df_answer = pd.to_numeric(df_answer[0]) df_answer = df_answer.to_frame("truth") display(df_answer) ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>truth</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>8.6</td> </tr> <tr> <th>1</th> <td>15.8</td> </tr> <tr> <th>2</th> <td>18.9</td> </tr> <tr> <th>3</th> <td>0.0</td> </tr> </tbody> </table> </div> ## Treat missing values in a standard way ```python df2 = df.copy(deep = True) df2[["tmax_clean", "tmin_clean"]] = df2[["tmax", "tmin"]].replace(to_replace= r'(?i)missing', value=np.nan, regex= True) df2["tmax_clean"] = df2["tmax_clean"].apply(pd.to_numeric) df2["tmin_clean"] = df2["tmin_clean"].apply(pd.to_numeric) df2.head(15) ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> 
<th></th> <th>yyyy</th> <th>month</th> <th>tmax</th> <th>tmin</th> <th>tmax_clean</th> <th>tmin_clean</th>
</tr>
</thead>
<tbody>
<tr><th>0</th><td>1908</td><td>January</td><td>5.0</td><td>-1.4</td><td>5.0</td><td>-1.4</td></tr>
<tr><th>1</th><td>1908</td><td>February</td><td>7.3</td><td>1.9</td><td>7.3</td><td>1.9</td></tr>
<tr><th>2</th><td>1908</td><td>March</td><td>6.2</td><td>0.3</td><td>6.2</td><td>0.3</td></tr>
<tr><th>3</th><td>1908</td><td>April</td><td>Missing_1</td><td>2.1</td><td>NaN</td><td>2.1</td></tr>
<tr><th>4</th><td>1908</td><td>May</td><td>Missing_2</td><td>7.7</td><td>NaN</td><td>7.7</td></tr>
<tr><th>5</th><td>1908</td><td>June</td><td>17.7</td><td>8.7</td><td>17.7</td><td>8.7</td></tr>
<tr><th>6</th><td>1908</td><td>July</td><td>Missing_3</td><td>11.0</td><td>NaN</td><td>11.0</td></tr>
<tr><th>7</th><td>1908</td><td>August</td><td>17.5</td><td>9.7</td><td>17.5</td><td>9.7</td></tr>
<tr><th>8</th><td>1908</td><td>September</td><td>16.3</td><td>8.4</td><td>16.3</td><td>8.4</td></tr>
<tr><th>9</th><td>1908</td><td>October</td><td>14.6</td><td>8.0</td><td>14.6</td><td>8.0</td></tr>
<tr><th>10</th><td>1908</td><td>November</td><td>9.6</td><td>3.4</td><td>9.6</td><td>3.4</td></tr>
<tr><th>11</th><td>1908</td><td>December</td><td>5.8</td><td>Missing_4</td><td>5.8</td><td>NaN</td></tr>
<tr><th>12</th><td>1909</td><td>January</td><td>5.0</td><td>0.1</td><td>5.0</td><td>0.1</td></tr>
<tr><th>13</th><td>1909</td><td>February</td><td>5.5</td><td>-0.3</td><td>5.5</td><td>-0.3</td></tr>
<tr><th>14</th><td>1909</td><td>March</td><td>5.6</td><td>-0.3</td><td>5.6</td><td>-0.3</td></tr>
</tbody>
</table>
</div>

### Convert to datetime index

```python
d = dict(zip(pd.date_range('2000-01-01', freq='M', periods=12).strftime('%B'), range(1, 13)))
# df2["month_number"] = df2["month"].replace(d)  # Does not work in pandas 0.19
for idx, row in df2.iterrows():
    df2.loc[idx, "month_number"] = d[row["month"]]

df2["yyyy"] = df2["yyyy"].map(str)
df2["date_time"] = df2['month'] + "-" + df2["yyyy"]
df2["date_time"] = df2["date_time"].apply(lambda x: pd.to_datetime(x, format='%B-%Y'))
df2.set_index("date_time", inplace=True)
# pandas_profiling.ProfileReport(df2[["tmax_clean", "tmin_clean", "month_number"]])
```

# Correlation among the Temperature Min and Max Values

```python
df2.plot(x='tmin_clean', y='tmax_clean', style='o')
```

# Perform Linear interpolation [tmin,tmax] - leverage the correlation in the data

```python
df_answer = df_answer["truth"]
```

```python
# Fit tmax = m*tmin + b on the rows where both temperatures are known
x = df2.dropna(how='any', subset=["tmin_clean", "tmax_clean"]).tmin_clean.values
y = df2.dropna(how='any', subset=["tmin_clean", "tmax_clean"]).tmax_clean.values
stats = linregress(x, y)
m = stats.slope
b = stats.intercept
print(m, b)

fig2, ax2 = plt.subplots(figsize=(10, 6))
plt.scatter(x, y)
plt.plot(x, m * x + b, color="red")  # I've added a color argument here
ax2.set_title("Temperature Correlation (Dropouts Removed)")
ax2.set_ylabel("Temp_Max")
ax2.set_xlabel("Temp_Min")
plt.tight_layout()
plt.savefig("TempCorrelation.png")
plt.show()

# Fill each missing value from the regression line (inverted when tmin is missing)
my_dict = OrderedDict()
for idx, row in df2.iterrows():
    if ("Missing" in row["tmin"]) & ("Missing" not in row["tmax"]):
        my_dict[row["tmin"]] = 1/float(m)*(row["tmax_clean"] - b)
    if ("Missing" in row["tmax"]) & ("Missing" not in row["tmin"]):
        my_dict[row["tmax"]] = m * row["tmin_clean"] + b
print(my_dict)

my_list = list(my_dict.values())
print()
for elem in my_list:
    print(elem)
df_answer = pd.concat([df_answer, pd.DataFrame(my_list, columns= ["answer_lreg",])], axis = 1) df_answer["delta_lreg2"] = df_answer["truth"] - df_answer["answer_lreg"] df_answer ``` ## SciKit Learn Fit based on [month_number, tmin, tmax] ignoring the year. - Use data without Nan's as the training set - Use the tmin = nan as those to predict based on [month_number, tmax] - Use the tmax = nan as those to predict based on [month_number, tmin] ```python df_train = df2.dropna(how='any',subset= ["tmin_clean", "tmax_clean"]) df_train = df_train[["month_number", "tmax_clean", "tmin_clean"]] df_test = df2[df2[["tmin_clean", "tmax_clean"]].isnull().any(axis=1)] df_test = df_test[["month_number", "tmax_clean", "tmin_clean"]] X_train = df_train[["month_number", "tmax_clean"]].values Y_train = df_train["tmin_clean"].values X_mintest = df_test[df_test["tmin_clean"].isnull()][["month_number", "tmax_clean"]].values reg = LinearRegression() model = reg.fit(X_train, Y_train) tmin_predict = model.predict(X_mintest) X_train = df_train[["month_number", "tmin_clean"]].values Y_train = df_train["tmax_clean"].values X_maxtest = df_test[df_test["tmax_clean"].isnull()][["month_number", "tmin_clean"]].values reg = LinearRegression() model = reg.fit(X_train, Y_train) tmax_predict = model.predict(X_maxtest) df_sklearn = df2.copy(deep = True) df_sklearn["tmax_hat"] = df_sklearn["tmax_clean"] df_sklearn["tmin_hat"] = df_sklearn["tmin_clean"] df_sklearn.loc[df_sklearn["tmax_clean"].isnull(),"tmax_hat"] = tmax_predict df_sklearn.loc[df_sklearn["tmin_clean"].isnull(),"tmin_hat"] = tmin_predict my_dict = OrderedDict() for idx, row in df_sklearn.iterrows(): if "Missing" in row["tmin"]: my_dict[row["tmin"]] = row["tmin_hat"] if "Missing" in row["tmax"]: my_dict[row["tmax"]] = row["tmax_hat"] my_list = list(my_dict.values()) print() for elem in my_list: print(elem) df_answer = pd.concat([df_answer, pd.DataFrame(my_list, columns= ["answer_scikitreg",])], axis = 1) df_answer["delta_scikitreg"] = df_answer["truth"] - df_answer["answer_scikitreg"] df_answer ``` 8.662087950728417 15.401943733483362 19.291694067282894 1.3625249040725382 # Apply Pandas built in interpolation methods - https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.interpolate.html Types of missing data: - if upsampling is required: upsampled = df.series.resample('D') - if the dates are missing df = df.reindex(pd.date_range("2011-01-01", "2011-10-31"), fill_value="NaN") - if the data contains duplicates: df.drop_duplicates(keep = 'first', inplace = True) - forward fill copies values forward. Limit will impact how big a gap you will fill https://chrisalbon.com/machine_learning/preprocessing_dates_and_times/handling_missing_values_in_time_series/ https://chrisalbon.com/python/data_wrangling/pandas_missing_data/ - methods: {‘linear’, ‘time’, ‘index’, ‘values’, ‘nearest’, ‘zero’, 'slinear’, ‘quadratic’, ‘cubic’, ‘barycentric’, ‘krogh’, ‘polynomial’, ‘spline’, ‘piecewise_polynomial’, ‘from_derivatives’, ‘pchip’, ‘akima’} - method='quadratic' if you are dealing with a time series that is growing at an increasing rate. - method='pchip' if you have values approximating a cumulative distribution function. - method='akima': to fill missing values with goal of smooth plotting. 
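A tiny, self-contained illustration of the idioms listed above — a toy series, not this dataset, so treat it as a hedged sketch:

```python
# Toy monthly series with gaps, to try the interpolation idioms above
s = pd.Series([1.0, np.nan, 3.0, np.nan, 5.0],
              index=pd.date_range("1908-01-31", freq="M", periods=5))
print(s.interpolate(method="time"))                     # linear in elapsed time
print(s.interpolate(method="polynomial", order=2))      # smooth, curvature-aware
print(s.ffill(limit=1))                                 # copy forward, gap <= 1 step
print(s.resample("D").interpolate("linear").head())     # upsample to daily, then fill
```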
```python
df_interp = df2.copy(deep=True)
df_interp["tmin_hat"] = df_interp["tmin_clean"].interpolate(axis=0, method='time',
    limit=None, inplace=False, limit_direction='forward', limit_area=None,
    downcast=None).ffill().bfill()
df_interp["tmax_hat"] = df_interp["tmax_clean"].interpolate(axis=0, method='time',
    limit=None, inplace=False, limit_direction='forward', limit_area=None,
    downcast=None).ffill().bfill()

# Print the missing values
df_pandas = df_interp[df_interp['tmin'].str.startswith("Missing") |
                      df_interp['tmax'].str.startswith("Missing")]
my_dict = OrderedDict()
for idx, row in df_pandas.iterrows():
    if "Missing" in row["tmin"]:
        my_dict[row["tmin"]] = row["tmin_hat"]
    if "Missing" in row["tmax"]:
        my_dict[row["tmax"]] = row["tmax_hat"]
# print(my_dict)
my_list = list(my_dict.values())
print()
for elem in my_list:
    print(elem)

df_answer = pd.concat([df_answer, pd.DataFrame(my_list, columns=["answer_pandasreg"])], axis=1)
df_answer["delta_pandasreg"] = df_answer["truth"] - df_answer["answer_pandasreg"]
df_answer
```

    10.075
    13.825
    17.601639344262296
    1.777049180327869

<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th><th>truth</th><th>answer_lreg</th><th>delta_lreg2</th><th>answer_scikitreg</th><th>delta_scikitreg</th><th>answer_pandasreg</th><th>delta_pandasreg</th>
</tr>
</thead>
<tbody>
<tr><th>0</th><td>8.6</td><td>8.668500</td><td>-0.068500</td><td>8.662088</td><td>-0.062088</td><td>10.075000</td><td>-1.475000</td></tr>
<tr><th>1</th><td>15.8</td><td>15.282367</td><td>0.517633</td><td>15.401944</td><td>0.398056</td><td>13.825000</td><td>1.975000</td></tr>
<tr><th>2</th><td>18.9</td><td>19.179825</td><td>-0.279825</td><td>19.291694</td><td>-0.391694</td><td>17.601639</td><td>1.298361</td></tr>
<tr><th>3</th><td>0.0</td><td>-0.328775</td><td>0.328775</td><td>1.362525</td><td>-1.362525</td><td>1.777049</td><td>-1.777049</td></tr>
</tbody>
</table>
</div>

### Variation on Pandas interpolation method

```python
df_interp2 = df2.copy(deep=True)
df_interp2["tmin_hat"] = df_interp2["tmin_clean"].interpolate(method='polynomial', order=2).ffill().bfill()
df_interp2["tmax_hat"] = df_interp2["tmax_clean"].interpolate(method='polynomial', order=2).ffill().bfill()

# Print the missing values
df_pandas2 = df_interp2[df_interp2['tmin'].str.startswith("Missing") |
                        df_interp2['tmax'].str.startswith("Missing")]
my_dict = OrderedDict()
for idx, row in df_pandas2.iterrows():
    if "Missing" in row["tmin"]:
        my_dict[row["tmin"]] = row["tmin_hat"]
    if "Missing" in row["tmax"]:
        my_dict[row["tmax"]] = row["tmax_hat"]
# print(my_dict)
my_list = list(my_dict.values())
print()
for elem in my_list:
    print(elem)

df_answer = pd.concat([df_answer, pd.DataFrame(my_list, columns=["answer_pdPolyreg"])], axis=1)
df_answer["delta_pdPolyreg"] = df_answer["truth"] - df_answer["answer_pdPolyreg"]
df_answer
```

    8.381624067909202
    13.975349515190798
    18.423150274632224
    0.8619823925615726

<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th><th>truth</th><th>answer_lreg</th><th>delta_lreg2</th><th>answer_scikitreg</th><th>delta_scikitreg</th><th>answer_pandasreg</th><th>delta_pandasreg</th><th>answer_pdPolyreg</th><th>delta_pdPolyreg</th>
</tr>
</thead>
<tbody>
<tr><th>0</th><td>8.6</td><td>8.668500</td><td>-0.068500</td><td>8.662088</td><td>-0.062088</td><td>10.075000</td><td>-1.475000</td><td>8.381624</td><td>0.218376</td></tr>
<tr><th>1</th><td>15.8</td><td>15.282367</td><td>0.517633</td><td>15.401944</td><td>0.398056</td><td>13.825000</td><td>1.975000</td><td>13.975350</td><td>1.824650</td></tr>
<tr><th>2</th><td>18.9</td><td>19.179825</td><td>-0.279825</td><td>19.291694</td><td>-0.391694</td><td>17.601639</td><td>1.298361</td><td>18.423150</td><td>0.476850</td></tr>
<tr><th>3</th><td>0.0</td><td>-0.328775</td><td>0.328775</td><td>1.362525</td><td>-1.362525</td><td>1.777049</td><td>-1.777049</td><td>0.861982</td><td>-0.861982</td></tr>
</tbody>
</table>
</div>

# SCIKIT Learn is the Winner!

## Look at the Fit Constraints
- 1908 <= time <= 2013
- -75 <= Tmax, Tmin <= 75

```python
# The constraint is on the temperature values themselves: |T| <= 75
df_sklearn["temp_constraint_v"] = df_sklearn[["tmax_hat", "tmin_hat"]].abs().max(axis=1)
df_sklearn[df_sklearn["temp_constraint_v"] > 75]
```

<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th><th>yyyy</th><th>month</th><th>tmax</th><th>tmin</th><th>tmax_clean</th><th>tmin_clean</th><th>month_number</th><th>tmax_hat</th><th>tmin_hat</th><th>temp_constraint_v</th>
</tr>
<tr>
<th>date_time</th><th></th><th></th><th></th><th></th><th></th><th></th><th></th><th></th><th></th><th></th>
</tr>
</thead>
<tbody>
</tbody>
</table>
</div>

# Check the Output by making some Residual Plots

```python
df_sklearn[['tmin', 'tmin_hat']].plot(figsize=(12, 8))
plt.show()
df_sklearn[['tmax', 'tmax_hat']].plot(figsize=(12, 8))
plt.show()

df_sklearn["min_resid"] = df_sklearn['tmin_clean'] - df_sklearn['tmin_hat']
df_sklearn["min_resid"].plot(figsize=(12, 8))
plt.show()

df_sklearn["max_resid"] = df_sklearn['tmax_clean'] - df_sklearn['tmax_hat']
df_sklearn["max_resid"].plot(figsize=(12, 8))
plt.show()
```

# SUBMITTED CODE

```python
import os, sys, re
import calendar
import collections
from collections import defaultdict, OrderedDict
from scipy.stats import linregress
from datetime import datetime
from dateutil.relativedelta import *
import itertools
from dateutil import parser

import pandas as pd
pd.set_option('display.max_columns', 100)
import numpy as np
import scipy
import statsmodels
import statsmodels.api as sm
import statsmodels.formula.api as smf
import statsmodels.tsa.api as smt
import sympy
import requests
from bs4 import BeautifulSoup

from scipy.stats import mode
from scipy import interp
from sklearn import preprocessing, linear_model, metrics
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.cross_validation import cross_val_score
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import GridSearchCV
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import roc_auc_score, f1_score, classification_report, roc_curve, auc
from sklearn.pipeline import Pipeline, FeatureUnion

if __name__ == "__main__":
    # Read Data from STDIN
    t = int(sys.stdin.readline())
    my_header = sys.stdin.readline().split()
    data = sys.stdin.read().splitlines()
    data = [re.split(r'\t', l) for l in data]
    df = pd.DataFrame(data, columns=my_header)

    # PreProcess data
    df2 = df.copy(deep=True)
    df2[["tmax_clean", "tmin_clean"]] = df2[["tmax", "tmin"]].\
        replace(to_replace=r'(?i)missing', value=np.nan, regex=True)
    # df2["tmax_clean"] = df["tmax"].replace(to_replace=r'(?i)missing', value=np.nan, regex=True)
    # df2["tmin_clean"] = df["tmin"].replace(to_replace=r'(?i)missing', value=np.nan, regex=True)
    df2["tmax_clean"] = df2["tmax_clean"].apply(pd.to_numeric)
    df2["tmin_clean"] = df2["tmin_clean"].apply(pd.to_numeric)

    # Convert to datetime index
    d = dict(zip(pd.date_range('2000-01-01', freq='M', periods=12).strftime('%B'), range(1, 13)))
    for idx, row in df2.iterrows():
        df2.loc[idx, "month_number"] = d[row["month"]]
    df2["yyyy"] = df2["yyyy"].map(str)
    df2["date_time"] = df2['month'] + "-" + df2["yyyy"]
    df2["date_time"] = df2["date_time"].apply(lambda x: pd.to_datetime(x, format='%B-%Y'))
    df2.set_index("date_time", inplace=True)

    # SciKit Learn interpolation methods
    df_train = df2.dropna(how='any', subset=["tmin_clean", "tmax_clean"])
    df_train = df_train[["month_number", "tmax_clean", "tmin_clean"]]
    df_test = df2[df2[["tmin_clean", "tmax_clean"]].isnull().any(axis=1)]
    df_test = df_test[["month_number", "tmax_clean", "tmin_clean"]]

    X_train = df_train[["month_number", "tmax_clean"]].values
    Y_train = df_train["tmin_clean"].values
    X_mintest = df_test[(df_test["tmin_clean"].isnull()) &
                        (df_test["tmax_clean"].notnull())][["month_number", "tmax_clean"]].values
    reg = LinearRegression()
    model = reg.fit(X_train, Y_train)
    tmin_predict = model.predict(X_mintest)

    X_train = df_train[["month_number", "tmin_clean"]].values
    Y_train = df_train["tmax_clean"].values
    X_maxtest = df_test[(df_test["tmax_clean"].isnull()) &
                        (df_test["tmin_clean"].notnull())][["month_number", "tmin_clean"]].values
    reg = LinearRegression()
    model = reg.fit(X_train, Y_train)
    tmax_predict = model.predict(X_maxtest)

    df_sklearn = df2.copy(deep=True)
    df_sklearn["tmax_hat"] = df_sklearn["tmax_clean"]
    df_sklearn["tmin_hat"] = df_sklearn["tmin_clean"]
    df_sklearn.loc[((df_sklearn["tmax_clean"].isnull()) &
                    (df_sklearn["tmin_clean"].notnull())), "tmax_hat"] = tmax_predict
    df_sklearn.loc[((df_sklearn["tmin_clean"].isnull()) &
                    (df_sklearn["tmax_clean"].notnull())), "tmin_hat"] = tmin_predict

    my_dict = OrderedDict()
    for idx, row in df_sklearn.iterrows():
        if "Missing" in row["tmin"]:
            my_dict[row["tmin"]] = row["tmin_hat"]
        if "Missing" in row["tmax"]:
            my_dict[row["tmax"]] = row["tmax_hat"]
    my_list = list(my_dict.values())
    print()
    for elem in my_list:
        print(elem)
```

# PostMortem

- The second test failed initially b/c I did not know how to get it to read the STD input correctly.
- Below, I downloaded that file and ran it locally here ```python df_case2 = pd.read_csv("test_case#2.tsv", sep = '\t', header = 1) df_case2answer= pd.read_csv("answer_case#2", sep = '\t', header = None, names = ["truth"]) ``` ```python if __name__ == "__main__": df2 = df_case2.copy(deep = True) df2[["tmax_clean", "tmin_clean"]] = df2[["tmax", "tmin"]].\ replace(to_replace= r'(?i)missing', value=np.nan, regex= True) #df2["tmax_clean"] = = df["tmax"].replace(to_replace= r'(?i)missing', value=np.nan, regex= True) #df2["tmin_clean"] = df["tmin"].replace(to_replace= r'(?i)missing', value=np.nan, regex= True) df2["tmax_clean"] = df2["tmax_clean"].apply(pd.to_numeric) df2["tmin_clean"] = df2["tmin_clean"].apply(pd.to_numeric) # ### Convert to datetime index d = dict(zip(pd.date_range('2000-01-01', freq='M', periods=12).strftime('%B'), range(1,13))) for idx,row in df2.iterrows(): df2.loc[idx, "month_number"] = d[row["month"]] df2["yyyy"] = df2["yyyy"].map(str) df2["date_time"] = df2['month'] + "-" + df2["yyyy"] df2["date_time"] = df2["date_time"].apply(lambda x: pd.to_datetime(x,format = '%B-%Y')) df2.set_index("date_time", inplace = True) # # SciKit Learn interpolation methods df_train = df2.dropna(how='any',subset= ["tmin_clean", "tmax_clean"]) df_train = df_train[["month_number", "tmax_clean", "tmin_clean"]] df_test = df2[df2[["tmin_clean", "tmax_clean"]].isnull().any(axis=1)] df_test = df_test[["month_number", "tmax_clean", "tmin_clean"]] X_train = df_train[["month_number", "tmax_clean"]].values Y_train = df_train["tmin_clean"].values X_mintest = df_test[(df_test["tmin_clean"].isnull()) &\ (df_test["tmax_clean"].notnull())][["month_number", "tmax_clean"]].values reg = LinearRegression() model = reg.fit(X_train, Y_train) tmin_predict = model.predict(X_mintest) X_train = df_train[["month_number", "tmin_clean"]].values Y_train = df_train["tmax_clean"].values X_maxtest = df_test[(df_test["tmax_clean"].isnull()) &\ (df_test["tmin_clean"].notnull())][["month_number", "tmin_clean"]].values reg = LinearRegression() model = reg.fit(X_train, Y_train) tmax_predict = model.predict(X_maxtest) df_sklearn = df2.copy(deep = True) df_sklearn["tmax_hat"] = df_sklearn["tmax_clean"] df_sklearn["tmin_hat"] = df_sklearn["tmin_clean"] df_sklearn.loc[((df_sklearn["tmax_clean"].isnull()) &\ (df_sklearn["tmin_clean"].notnull())), "tmax_hat"] = tmax_predict df_sklearn.loc[((df_sklearn["tmin_clean"].isnull()) &\ (df_sklearn["tmax_clean"].notnull())), "tmin_hat"] = tmin_predict my_dict = OrderedDict() for idx, row in df_sklearn.iterrows(): if "Missing" in row["tmin"]: my_dict[row["tmin"]] = row["tmin_hat"] if "Missing" in row["tmax"]: my_dict[row["tmax"]] = row["tmax_hat"] my_list = list(my_dict.values()) print() for elem in my_list: print(elem) ``` 8.023567576598913 14.956071972588784 18.876006747611 1.4060334342946659 7.502582149144462 2.940294910002235 7.074080962002098 0.2394289909028684 12.897675794501763 5.857332293815556 0.9488187331023301 9.90315408171065 13.571314830987102 17.608417930900735 3.1853997123960625 7.124921110525822 10.559753472400955 6.397686123397952 8.770170965854728 13.740795283767309 3.7887345051962704 17.99479765750084 11.097168013436823 5.953237861611667 -0.6042969976989871 19.00189376048388 15.998042827497684 12.356038142165625 0.22528834240129836 16.978982866536334 4.165818750696401 9.030663679581954 8.914723440701952 18.2465716832466 19.262386474211105 14.990946724514643 5.857332293815556 0.1498714933012728 3.8005183504017506 9.473279484701361 15.242720750260407 1.4460985765032852 
2.2309051678027734 3.543629702802444 4.856354237802115 19.388273487083985 -0.514739500097392 8.657834793102644 17.11358856739068 10.845393987691065 15.594225724934653 11.58495125950209 8.177052401499349 5.072028771501506 -0.10701715429803382 3.4988509540016457 17.634573994845127 11.0535745735295 11.735784957702142 9.609972534399844 14.461242609078731 3.813158642772704 6.864428396798598 7.706937353850524 6.755978759888647 9.699530032001439 10.635170321500981 3.3362334105961144 10.163646795437874 17.374081281117903 5.942828225501708 3.109982863296035 6.8731470847800615 10.967475818200311 6.096018726997849 9.217581508325086 19.262386474211105 3.4870671087961655 0.4256143959943275 8.023567576598913 11.238505114301187 2.8837323159959585 9.291807686202082 19.640047512829746 1.7477659729033883 10.541307834056514 7.601998905702279 3.600084750085178 -0.4841013997981638 6.630091747015767 5.565743980001578 10.107252377800798 13.740795283767309 -2.4449394763988406 6.864428396798598 6.369599033288542 3.260816561496088 -1.3443248401976784 4.750299288402861 10.861420868801057 6.063023823402531 19.631328824848282 11.60943475290981 -1.4197416892977044 11.170742898420915 12.438331715131183 18.884725435592465 9.156550692454834 16.375703866116325 12.985473206530026 -0.10701715429803382 18.381177384100944 2.6127030198950827 17.23075689228209 6.881865772761527 8.778889653836194 9.72984824779807 -1.3136867398984502 7.888961875744568 15.368607763133284 6.096018726997849 9.383721987099767 8.965807482579326 5.0071879360021665 3.109982863296035 1.8585345288948218 0.1498714933012728 10.4536985230017 1.099652431302383 3.1665454573023144 17.986078969519372 8.89605797872761 8.73325164220267 11.78056370650294 3.4870671087961655 9.744308780802237 0.9181806328031028 0.12394699959422262 -3.531413464098433 9.895142479002288 15.872155814624808 7.502582149144462 15.585507036953185 17.23075689228209 16.753364904734966 6.834446951721827 -1.1628530416983982 3.0157117591022615 18.62423272186524 17.508686981972247 9.469355534070846 8.653002640963313 3.1971835576015426 18.363740008138013 1.250486129502435 8.140735901490329 0.11923339300204461 18.381177384100944 7.086220977467589 19.37955479910252 11.087671416101136 8.509678252127504 10.724727819102576 6.322269274297927 1.2952648783032323 10.163646795437874 7.3003315093021754 13.199343190901866 8.478719797899453 13.740795283767309 3.109982863296035 3.24196230640234 9.85036373020149 9.956418679600745 7.679772558098396 17.104869879409215 7.1518546143982125 5.227897229451155 5.6105227288023745 11.163088265201162 2.940294910002235 17.104869879409215 12.596008398101658 17.608417930900735 13.992569309513069 15.468338712061769 18.758838422719585 -3.576192212899231 10.121393026302366 10.529115372101726 13.363134245148668 6.213857521602584 3.109982863296035 9.774946881101467 14.244343335258826 8.73325164220267 3.7698802501025224 6.6969967165019675 14.838903647697371 20.89019895357708 6.486767358179957 8.887339290746144 3.1665454573023144 19.25366778622964 2.46186932169503 -0.026886698605829373 11.841839907101397 20.26076388921268 17.508686981972247 9.390887342237665 10.163646795437874 1.3259029786024605 12.037452354102248 9.847016572689487 6.171435576097876 8.280750547602514 9.139113316491905 6.5485198215980045 3.2160378126952898 9.508055667129081 10.65847615894793 7.752832603902332 18.138122046336647 4.014985052496348 20.27820126517561 9.60396123492519 13.562596143005639 9.685389383499869 10.263377744366363 14.830184959715904 11.615589359801318 7.378105161698292 21.393747005068604 
18.884725435592465 6.699353519798057 0.31484584000289484 2.3817388660028254 16.853095853663454 11.20786701400196 5.579884628503146 ```python df_case2answer = pd.concat([df_case2answer, pd.DataFrame(my_list, columns= ["answer_scikit2",])], axis = 1) df_case2answer["delta_scikit2"] = df_case2answer["truth"] - df_case2answer["answer_scikit2"] df_case2answer ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>truth</th> <th>answer_scikit2</th> <th>delta_scikit2</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>8.6</td> <td>8.023568</td> <td>0.576432</td> </tr> <tr> <th>1</th> <td>15.8</td> <td>14.956072</td> <td>0.843928</td> </tr> <tr> <th>2</th> <td>18.9</td> <td>18.876007</td> <td>0.023993</td> </tr> <tr> <th>3</th> <td>0.0</td> <td>1.406033</td> <td>-1.406033</td> </tr> <tr> <th>4</th> <td>7.0</td> <td>7.502582</td> <td>-0.502582</td> </tr> <tr> <th>5</th> <td>2.2</td> <td>2.940295</td> <td>-0.740295</td> </tr> <tr> <th>6</th> <td>6.0</td> <td>7.074081</td> <td>-1.074081</td> </tr> <tr> <th>7</th> <td>1.8</td> <td>0.239429</td> <td>1.560571</td> </tr> <tr> <th>8</th> <td>12.5</td> <td>12.897676</td> <td>-0.397676</td> </tr> <tr> <th>9</th> <td>4.8</td> <td>5.857332</td> <td>-1.057332</td> </tr> <tr> <th>10</th> <td>1.3</td> <td>0.948819</td> <td>0.351181</td> </tr> <tr> <th>11</th> <td>9.0</td> <td>9.903154</td> <td>-0.903154</td> </tr> <tr> <th>12</th> <td>14.9</td> <td>13.571315</td> <td>1.328685</td> </tr> <tr> <th>13</th> <td>16.7</td> <td>17.608418</td> <td>-0.908418</td> </tr> <tr> <th>14</th> <td>3.8</td> <td>3.185400</td> <td>0.614600</td> </tr> <tr> <th>15</th> <td>7.2</td> <td>7.124921</td> <td>0.075079</td> </tr> <tr> <th>16</th> <td>10.1</td> <td>10.559753</td> <td>-0.459753</td> </tr> <tr> <th>17</th> <td>7.3</td> <td>6.397686</td> <td>0.902314</td> </tr> <tr> <th>18</th> <td>8.5</td> <td>8.770171</td> <td>-0.270171</td> </tr> <tr> <th>19</th> <td>12.5</td> <td>13.740795</td> <td>-1.240795</td> </tr> <tr> <th>20</th> <td>3.5</td> <td>3.788735</td> <td>-0.288735</td> </tr> <tr> <th>21</th> <td>17.5</td> <td>17.994798</td> <td>-0.494798</td> </tr> <tr> <th>22</th> <td>11.2</td> <td>11.097168</td> <td>0.102832</td> </tr> <tr> <th>23</th> <td>6.6</td> <td>5.953238</td> <td>0.646762</td> </tr> <tr> <th>24</th> <td>0.3</td> <td>-0.604297</td> <td>0.904297</td> </tr> <tr> <th>25</th> <td>18.4</td> <td>19.001894</td> <td>-0.601894</td> </tr> <tr> <th>26</th> <td>15.8</td> <td>15.998043</td> <td>-0.198043</td> </tr> <tr> <th>27</th> <td>12.5</td> <td>12.356038</td> <td>0.143962</td> </tr> <tr> <th>28</th> <td>-0.9</td> <td>0.225288</td> <td>-1.125288</td> </tr> <tr> <th>29</th> <td>18.7</td> <td>16.978983</td> <td>1.721017</td> </tr> <tr> <th>...</th> <td>...</td> <td>...</td> <td>...</td> </tr> <tr> <th>210</th> <td>11.3</td> <td>10.163647</td> <td>1.136353</td> </tr> <tr> <th>211</th> <td>0.1</td> <td>1.325903</td> <td>-1.225903</td> </tr> <tr> <th>212</th> <td>12.4</td> <td>12.037452</td> <td>0.362548</td> </tr> <tr> <th>213</th> <td>10.5</td> <td>9.847017</td> <td>0.652983</td> </tr> <tr> <th>214</th> <td>5.9</td> <td>6.171436</td> <td>-0.271436</td> </tr> <tr> <th>215</th> <td>8.1</td> <td>8.280751</td> <td>-0.180751</td> </tr> <tr> <th>216</th> <td>7.9</td> <td>9.139113</td> <td>-1.239113</td> </tr> <tr> <th>217</th> <td>7.1</td> <td>6.548520</td> 
<td>0.551480</td> </tr> <tr> <th>218</th> <td>2.6</td> <td>3.216038</td> <td>-0.616038</td> </tr> <tr> <th>219</th> <td>8.8</td> <td>9.508056</td> <td>-0.708056</td> </tr> <tr> <th>220</th> <td>9.7</td> <td>10.658476</td> <td>-0.958476</td> </tr> <tr> <th>221</th> <td>6.6</td> <td>7.752833</td> <td>-1.152833</td> </tr> <tr> <th>222</th> <td>18.3</td> <td>18.138122</td> <td>0.161878</td> </tr> <tr> <th>223</th> <td>2.9</td> <td>4.014985</td> <td>-1.114985</td> </tr> <tr> <th>224</th> <td>19.6</td> <td>20.278201</td> <td>-0.678201</td> </tr> <tr> <th>225</th> <td>8.5</td> <td>9.603961</td> <td>-1.103961</td> </tr> <tr> <th>226</th> <td>15.1</td> <td>13.562596</td> <td>1.537404</td> </tr> <tr> <th>227</th> <td>9.8</td> <td>9.685389</td> <td>0.114611</td> </tr> <tr> <th>228</th> <td>8.9</td> <td>10.263378</td> <td>-1.363378</td> </tr> <tr> <th>229</th> <td>15.4</td> <td>14.830185</td> <td>0.569815</td> </tr> <tr> <th>230</th> <td>12.8</td> <td>11.615589</td> <td>1.184411</td> </tr> <tr> <th>231</th> <td>7.6</td> <td>7.378105</td> <td>0.221895</td> </tr> <tr> <th>232</th> <td>20.0</td> <td>21.393747</td> <td>-1.393747</td> </tr> <tr> <th>233</th> <td>18.8</td> <td>18.884725</td> <td>-0.084725</td> </tr> <tr> <th>234</th> <td>6.9</td> <td>6.699354</td> <td>0.200646</td> </tr> <tr> <th>235</th> <td>1.0</td> <td>0.314846</td> <td>0.685154</td> </tr> <tr> <th>236</th> <td>3.6</td> <td>2.381739</td> <td>1.218261</td> </tr> <tr> <th>237</th> <td>18.2</td> <td>16.853096</td> <td>1.346904</td> </tr> <tr> <th>238</th> <td>10.5</td> <td>11.207867</td> <td>-0.707867</td> </tr> <tr> <th>239</th> <td>4.6</td> <td>5.579885</td> <td>-0.979885</td> </tr> </tbody> </table> <p>240 rows × 3 columns</p> </div>
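Following up on the PostMortem note above, the stdin-reading pattern that the submitted code settled on can be isolated into a few lines (a sketch of the HackerRank input format: a count line, a tab-separated header line, then the rows):

```python
import sys, re
import pandas as pd

t = int(sys.stdin.readline())            # first line: number of rows (unused once splitlines() is applied)
header = sys.stdin.readline().split()    # second line: column names
rows = [re.split(r'\t', line) for line in sys.stdin.read().splitlines()]
df = pd.DataFrame(rows, columns=header)  # note: every column arrives as strings
```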
6d6fd1595ecd5456da6df23693002ffd5880f1ac
288,799
ipynb
Jupyter Notebook
Tutorials/Interpolation/.ipynb_checkpoints/Practice_interp-checkpoint.ipynb
rlbellaire/MyPythonTools
3816e735aa24b2f317b083f010e5c138dcc7b56c
[ "MIT" ]
null
null
null
Tutorials/Interpolation/.ipynb_checkpoints/Practice_interp-checkpoint.ipynb
rlbellaire/MyPythonTools
3816e735aa24b2f317b083f010e5c138dcc7b56c
[ "MIT" ]
null
null
null
Tutorials/Interpolation/.ipynb_checkpoints/Practice_interp-checkpoint.ipynb
rlbellaire/MyPythonTools
3816e735aa24b2f317b083f010e5c138dcc7b56c
[ "MIT" ]
1
2021-04-27T02:31:27.000Z
2021-04-27T02:31:27.000Z
118.408774
55,128
0.81645
true
16,546
Qwen/Qwen-72B
1. YES 2. YES
0.828939
0.705785
0.585053
__label__eng_Latn
0.169704
0.197603
Content under Creative Commons BY 4.0 license and code under MIT license. © Juan Gómez and Nicolás Guarín-Zapata 2020. This material is part of the course Modelación Computacional in the Civil Engineering program at Universidad EAFIT.

# Interpolation in 2D

## Introduction

Here we extend the one-dimensional interpolation scheme studied previously to the more general case of a two-dimensional domain. From the geometric point of view, we will also see how a **finite element** is just a canonical spatial domain described by nodal points and the corresponding set of interpolation (or **shape**) functions.

**After completing this notebook you should be able to:**

* Recognize the interpolation problem over two-dimensional domains as an application of one-dimensional schemes.
* Formalize the concept of a finite element as a canonical interpolation space with predefined interpolation functions.
* Propose interpolation schemes for arbitrary two-dimensional domains.

## Two-dimensional domain

Consider the square domain shown in the figure, over which we want to approximate, by interpolation, a scalar (or vector) function $f=f(x,y)$. For that purpose the black dots in the figure represent nodal points where the function is assumed to be known. In this case the interpolating polynomial, denoted $p(x,y)$, is built as:

$$p(x,y) = \sum_{Q=1}^N H_Q(x,y)f_Q$$

where $Q = 1,...,N$ for a domain with *N* nodal points, and where $H_Q(x,y)$ are the interpolation (or shape) functions. As detailed next, to build the two-dimensional interpolation functions $H_Q(x,y)$ we actually apply a process of iterated one-dimensional interpolations.

Let $x_A$ and $x_B$ denote the coordinates of points A and B in the quadrilateral domain shown in the figure, and suppose we want to find the value of the function at point A.

Point A has an arbitrary $y$ coordinate but a constant $x$ coordinate, $x = x_A$, so for an arbitrary point A along direction 1-4 (see figure) the interpolation scheme is still one-dimensional, depending only on $y$, and expressed as $f(y, x = A)$ in the figure. Using one-dimensional Lagrange interpolation polynomials, the dependence on $y$ can be captured by:

$$f(x_A , y) = L_1(y) f_1 + L_4(y) f_4$$

Proceeding similarly for an arbitrary point $B$ along direction 2-3:

$$f(x_B, y) = L_2(y) f_2 + L_3(y) f_3\, .$$

With $f_A$ and $f_B$ known, the dependence on $x$ can be captured as:

$$f(x, y) = L_A(x) f(x_A, y) + L_B(x)f(x_B, y)\, .$$

To arrive at the final form of the two-dimensional shape functions, we compute the polynomials $L_2(y)$, $L_3(y)$, $L_A(x)$ and $L_B(x)$ and substitute them in the expressions above. For an element of side $2.0$ the functions are:

\begin{align*}
H_1(x,y) & = L_1(x)L_1(y) \equiv \frac{1}{4}(1-x)(1-y)\, ,\\
H_2(x,y) & = L_2(x)L_1(y) \equiv \frac{1}{4}(1+x)(1-y)\, ,\\
H_3(x,y) & = L_2(x)L_2(y) \equiv \frac{1}{4}(1+x)(1+y)\, ,\\
H_4(x,y) & = L_1(x)L_2(y) \equiv \frac{1}{4}(1-x)(1+y)\, .
\end{align*}

### The canonical finite element

In the following subroutine we code the final form $H_Q(x,y)$ of the shape functions, instead of directly computing the fundamental one-dimensional polynomials of the form $L_I(y)$ and then carrying out the iterated interpolation. The subroutine, named ``sha4``, stores the functions in a matrix structure that depends on $x$ and $y$. We assume the element is a perfect square of side $\mathcal{l}=2.0$ with nodal points at the corners, corresponding to linear interpolation along each face.

```python
import numpy as np
import matplotlib.pyplot as plt
import sympy as sym
```

```python
%matplotlib notebook
sym.init_printing()
```

```python
def sha4(x, y):
    """
    Compute the shape functions for a bi-linear square element of size 2.0.
    """
    sh = sym.Matrix([[
        (1 - x)*(1 - y),
        (1 + x)*(1 - y),
        (1 + x)*(1 + y),
        (1 - x)*(1 + y)]])/4
    return sh
```

This square element is a **canonical** (or reference) element, over which interpolation operations are easy to perform. In an actual finite element mesh, the elements are expected to be distorted with respect to this canonical element. In those cases interpolation is still performed in the space of the canonical element, but now both the geometry and the functions are transformed using mathematical operations. Those details, however, are not discussed here.

The shape functions stored in the subroutine correspond to:

$$H = \frac{1}{4}\begin{bmatrix}(1-x)(1-y)&(1+x)(1-y)&(1+x)(1+y)&(1-x)(1+y)\end{bmatrix}$$

<div class="alert alert-warning">

**Questions**

- Write the shape functions assuming that the sub-domain is the same square discussed so far, but that in addition to the corner nodes it also includes nodes at the middle of the faces, for a total of 8 nodal points.

- Make a copy of the subroutine `sha4` and modify it so that it computes the shape functions for the 8-node element.

</div>

```python
x, y = sym.symbols('x y')
H = sha4(x, y)
display(H)
```
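As a quick sanity check (a small addition, not part of the original exercise): bilinear shape functions must add up to one everywhere in the element — they form a partition of unity — and each $H_Q$ must equal 1 at its own node and 0 at the others. A minimal check with `sympy`:

```python
# Partition of unity: sum_Q H_Q(x, y) should simplify to 1
print(sym.simplify(sum(H)))  # expected: 1

# Kronecker-delta property at the corner nodes (numbered counterclockwise)
nodes = [(-1, -1), (1, -1), (1, 1), (-1, 1)]
for Q, (xn, yn) in enumerate(nodes):
    print(Q, H[Q].subs({x: xn, y: yn}))  # expected: 1 at node Q
```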
## Interpolation over a square domain

In this step we consider a square element with 4 nodal points located at the corners, where the values of the function are assumed to be known. We use these values, together with the shape functions, to find an interpolating polynomial. The resulting polynomial is then used to generate approximate values of the function over a grid of points, which is used to visualize the solution.

The grid of observation points is generated with the `mgrid` function from `numpy`. Note that the reference system is located at the center of the element, therefore $x \in [-1, 1]$ and $y \in [-1, 1]$. The array `u_interp` stores the interpolated values at each grid point.

To carry out the interpolation we assume nodal values of the function, so at a given point $(x, y)$ the interpolated value is obtained as:

$$u(x,y)\;=\;\left[H(x,y)\right]\left\{u\right\}$$

```python
# Grid parameters: lower limit, upper limit and spacing
li = -1.0
ls = 1.1
dl = 0.05
npts = int((ls - li)/dl)
u_interp = np.zeros((npts, npts))
# Observation grid over the element
xx, yy = np.mgrid[li:ls:npts*1j, li:ls:npts*1j]
# Try different nodal values
u = sym.Matrix(4, 1, [-0.2, 0.2, -0.2, 0.2])
# Evaluate u(x, y) = [H(x, y)]{u} at every grid point
for i in range(npts):
    for j in range(npts):
        NS = H.subs([(x, xx[i, j]), (y, yy[i, j])])
        up = NS*u
        u_interp[i, j] = up[0]
plt.figure()
plt.contourf(xx, yy, u_interp, cmap="RdYlBu")
plt.axis("image")
```

### Glossary of terms

**Canonical finite element:** Undistorted sub-domain of constant size and with unique shape functions. In a practical case the elements differ in size and level of distortion; however, all of them are transformed to the canonical element.

**Shape functions:** Interpolation functions formulated over a canonical element.

**Mesh:** Set of finite elements covering a given computational domain. A mesh is said to have been refined when the characteristic size of its elements is reduced, producing a larger number of elements to cover the same computational domain.

## In-class activities

### Problem 1

Extend the 2D interpolation scheme discussed above to the case of a vector function, in the context of elasticity theory, with the following considerations:

* Assume that the displacement vector, with horizontal and vertical components $u$ and $v$ respectively, is known at each node of the square domain.

* Using the nodal values of the displacement vector, compute the horizontal and vertical components in the interior of the element.

* Using the nodal values, compute the strain field defined by:

$$\varepsilon_{xx}=\frac{\partial u}{\partial x}$$

$$\varepsilon_{yy}=\frac{\partial v}{\partial y}$$

$$\gamma_{xy}=\frac{\partial u}{\partial y}+\frac{\partial v}{\partial x}$$

* Store the derivatives of the shape functions in a separate matrix $B$ (one possible layout is sketched below).
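For the last item, a minimal sketch of how such a $B$ matrix could be assembled with `sympy`. This is an illustration of the idea, not the course's reference solution; the ordering of the nodal unknowns as $[u_1, v_1, \dots, u_4, v_4]$ is an assumption:

```python
# Sketch: shape-function derivatives arranged in a strain-displacement matrix B,
# so that [eps_xx, eps_yy, gamma_xy]^T = B * [u1, v1, ..., u4, v4]^T
B = sym.zeros(3, 8)
for Q in range(4):
    dHdx = sym.diff(H[Q], x)
    dHdy = sym.diff(H[Q], y)
    B[0, 2*Q] = dHdx        # eps_xx row acts on the u components
    B[1, 2*Q + 1] = dHdy    # eps_yy row acts on the v components
    B[2, 2*Q] = dHdy        # gamma_xy row: du/dy ...
    B[2, 2*Q + 1] = dHdx    # ... plus dv/dx
display(B)
```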
### Problem 2

In the activity described next we will apply some of Python's built-in interpolation capabilities to solve a problem of interest in Civil Engineering. We will use some functions from the `geopandas` module to create a `GeoDataFrame` from information stored in a file in [shp format](https://en.wikipedia.org/wiki/Shapefile) containing the geometry of the municipalities of the Valle de Aburrá. We will also use the coordinates, altitudes and accelerations of the stations of the accelerographic network, which are available in csv format (comma separated value files). Later on, we will use the information from the accelerographic stations to visualize the altitudes and peak accelerations. For this, Python uses an object called `Triangulation` that allows interpolation and plotting operations to be carried out in a simple way.

<div class="alert alert-warning">

**Before starting the activity**

Look up the meaning of the following terms:

* `geopandas` module:
* GeoDataFrame:
* Geospatial analysis:
* Shapefile format:
* `Triangulation`:

</div>

One of the fundamental inputs for the design of earthquake-resistant structures is the maximum ground acceleration that can be expected during an earthquake. There is ample theoretical and experimental evidence that the wave train incident from the source, and hence the resulting ground motions, are strongly affected by the surface topography. This phenomenon is known as topographic effects. In the case of the Valle de Aburrá, on which the city of Medellín rests, it is known that the valley consists of a relatively flat and uniform zone at its center, surrounded by a topographic profile with slopes of considerable steepness that may generate topographic effects. To try to identify whether such effects are indeed important in the city of Medellín, we propose to use the accelerographic records of the 1999 Armenia earthquake, captured by the instruments of the Red Acelerográfica de Medellín (RAM), and to look for a correlation between the peak accelerations of those records and the topography of the valley.

To study the problem the following files are available:

* A comma separated file (csv) named `estaciones_siata.csv`, containing latitude (column 2), longitude (column 3) and altitude (column 6) for the different RAM stations. Additionally, columns 8, 9 and 10 contain the North-South, East-West and vertical components of the peak ground acceleration (cm/s²) recorded during the Armenia, Colombia, 1999 earthquake.

* A file with `shp` extension named "medellin_colombia_osm_admin.shp", containing a map of Medellín.

* Comma separated files containing the accelerographic records for the RAM stations.

Using these files, you are required to:

* Read the `shp` file containing the map of Medellín and visualize it (using the `geopandas` and `matplotlib` modules).
* Read the `estaciones_siata.csv` file to extract the station locations, and then plot them over the map of Medellín.
* Use the `tricontourf()` function to interpolate and visualize the altitude distribution over the map of Medellín.
* Use the `tricontourf()` function to interpolate and visualize the distribution of each acceleration component over the map of Medellín.
* Using the altitude and peak acceleration visualizations, correlate these 2 variables to identify whether topographic effects are present.

**Note: Report your results by completing this Notebook**
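A hypothetical starting point for the visualization steps is sketched below. The file name and column positions are the ones quoted in the statement; the real csv header may differ, so treat this as an assumption-laden sketch rather than the reference solution:

```python
import geopandas as gpd
import pandas as pd
import matplotlib.pyplot as plt

# Load the municipality geometry and the station data (file names from the statement)
medellin = gpd.read_file("medellin_colombia_osm_admin.shp")
stations = pd.read_csv("estaciones_siata.csv")
lat = stations.iloc[:, 1]   # column 2: latitude
lon = stations.iloc[:, 2]   # column 3: longitude
alt = stations.iloc[:, 5]   # column 6: altitude

fig, ax = plt.subplots()
medellin.plot(ax=ax, color="lightgray", edgecolor="black")
# tricontourf builds a Delaunay triangulation of the station locations and
# interpolates linearly inside each triangle
tcf = ax.tricontourf(lon, lat, alt, levels=12, cmap="terrain", alpha=0.7)
fig.colorbar(tcf, ax=ax, label="Altitude (m)")
ax.plot(lon, lat, "ok", markersize=3)
plt.show()
```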
## Notebook format

The following cell changes the format of the Notebook.

```python
from IPython.core.display import HTML

def css_styling():
    styles = open('./nb_style.css', 'r').read()
    return HTML(styles)

css_styling()
```
e3bd36d1a0664a2e19a60fb4a19f3d66de8f40c0
98,449
ipynb
Jupyter Notebook
notebooks/02c_interpolacion_2d.ipynb
AppliedMechanics-EAFIT/Mod_Temporal
6a0506d906ed42b143b773777e8dc0da5af763eb
[ "MIT" ]
5
2019-02-20T18:14:01.000Z
2020-07-19T22:44:44.000Z
notebooks/02c_interpolacion_2d.ipynb
AppliedMechanics-EAFIT/Mod_Temporal
6a0506d906ed42b143b773777e8dc0da5af763eb
[ "MIT" ]
3
2020-04-15T00:22:58.000Z
2020-07-04T17:03:54.000Z
notebooks/02c_interpolacion_2d.ipynb
AppliedMechanics-EAFIT/Mod_Temporal
6a0506d906ed42b143b773777e8dc0da5af763eb
[ "MIT" ]
3
2020-05-14T18:17:09.000Z
2020-10-27T06:37:05.000Z
71.288197
36,147
0.703664
true
4,039
Qwen/Qwen-72B
1. YES 2. YES
0.826712
0.754915
0.624097
__label__spa_Latn
0.979959
0.288317
```python import tensorflow as tf ``` ```python from pycalphad import Database, Model, variables as v from pycalphad.codegen.sympydiff_utils import build_functions from sympy import lambdify import numpy as np dbf = Database('Al-Cu-Zr_Zhou.tdb') mod = Model(dbf, ['AL', 'CU', 'ZR'], 'LIQUID') ``` ```python mod.variables ``` [T, LIQUID0AL, LIQUID0CU, LIQUID0ZR] # Expression Building ```python %time bfr = build_functions(mod.GM, mod.variables, include_grad=True) ``` Wall time: 377 ms ```python %%timeit tf_func = lambdify(mod.variables, mod.GM, modules='tensorflow') tf_grads = lambdify(mod.variables, [mod.GM.diff(x) for x in mod.variables], modules='tensorflow') ``` 632 ms ± 20.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) ```python tf_func = lambdify(mod.variables, mod.GM, modules='tensorflow') tf_grads = lambdify(mod.variables, [mod.GM.diff(x) for x in mod.variables], modules='tensorflow') ``` ```python %%timeit func_xla = tf.function(experimental_compile=True)(tf_func) grad_xla = tf.function(experimental_compile=True)(tf_grads) ``` 93 µs ± 3.83 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) ```python func_xla = tf.function(experimental_compile=True)(tf_func) grad_xla = tf.function(experimental_compile=True)(tf_grads) ``` # Function Evaluation ```python out = np.array([0.]) %timeit bfr.func.unsafe_real(np.array([300., 0.3, 0.3, 0.4]), out) print(out) ``` 3.94 µs ± 189 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) [-41178.9428136] ```python out = np.zeros(4) %timeit bfr.grad.unsafe_real(np.array([300., 0.3, 0.3, 0.4]), out) print(out) ``` 4.27 µs ± 84.5 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) [-3.39000632e+01 -6.67993032e+04 -2.12186404e+04 -3.12432907e+04] ```python %timeit func_xla(300.0, 0.3, 0.3, 0.4) ``` 202 µs ± 76.5 µs per loop (mean ± std. dev. of 7 runs, 1 loop each) ```python %timeit grad_xla(300.0, 0.3, 0.3, 0.4) ``` 299 µs ± 71.8 µs per loop (mean ± std. dev. of 7 runs, 1 loop each) ```python temps = tf.constant(np.linspace(300., 2000., num=1000, dtype=np.float32)) %timeit grad_xla(temps, 0.3, 0.3, 0.4) ``` 480 µs ± 76.4 µs per loop (mean ± std. dev. of 7 runs, 1 loop each) ```python ```
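One possible follow-up — my own sketch, not part of the original benchmark — is to fuse the objective and its gradient into a single XLA-compiled call with `tf.GradientTape`, so both come out of one graph execution instead of two separate compiled functions:

```python
@tf.function(experimental_compile=True)
def value_and_grad(*args):
    # The arguments must be tf.Tensors so the tape can watch them
    with tf.GradientTape() as tape:
        for a in args:
            tape.watch(a)
        val = tf_func(*args)  # tf_func as lambdified above
    return val, tape.gradient(val, args)

inputs = [tf.constant(v) for v in (300.0, 0.3, 0.3, 0.4)]
val, grads = value_and_grad(*inputs)
```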
322e9c803078b5573f58fa53b0aaac104dd8ad17
5,437
ipynb
Jupyter Notebook
Tensorflow-XLA.ipynb
richardotis/pycalphad-sandbox
43d8786eee8f279266497e9c5f4630d19c893092
[ "MIT" ]
1
2017-03-08T18:21:30.000Z
2017-03-08T18:21:30.000Z
Tensorflow-XLA.ipynb
richardotis/pycalphad-sandbox
43d8786eee8f279266497e9c5f4630d19c893092
[ "MIT" ]
null
null
null
Tensorflow-XLA.ipynb
richardotis/pycalphad-sandbox
43d8786eee8f279266497e9c5f4630d19c893092
[ "MIT" ]
1
2018-11-03T01:31:57.000Z
2018-11-03T01:31:57.000Z
21.073643
103
0.510576
true
813
Qwen/Qwen-72B
1. YES 2. YES
0.774583
0.705785
0.546689
__label__eng_Latn
0.353857
0.108472
# Clustering Techniques Writeup

### September 26, 2016

### K-Means Clustering

K-Means clustering is an unsupervised clustering technique which involves partitioning a number $n$ of given values from a dataset among a pre-defined number of $k$ clusters. The K-means clustering process seeks to minimize the squared Euclidean distance from each of the $n$ points to its associated cluster mean. Note that although the $k$ cluster means inhabit the same coordinate space as the $n$ data points, the $k$ cluster locations are not necessarily drawn from the set of $n$ points.

The process of K-means involves first defining the number of clusters $k$. Choosing $k$ is a non-trivial problem that is dependent on the specific experiment at hand. Analysis with an elbow plot, showing explained variance as a function of the number of clusters, allows the designer to pick $k$ at the point where adding additional clusters has diminishing returns for explained variance.

The actual algorithm behind K-means clustering is relatively simple. Given $n$ data points and $k$ clusters, we're looking for cluster means $\mu_k$ that minimize the Euclidean distance between the $n$ data points and their assigned cluster. Effectively, the objective function we're trying to minimize can be written as (courtesy SciKit):

$$
\begin{align}
\sum\limits_{i=0}^n \min_{\mu_j \in C} \left\| x_i - \mu_j \right\|^2
\end{align}
$$

For the analysis, we must first choose $k$ starting means $\mu$. This is often achieved by randomly sampling $k$ points from the dataset arbitrarily. Below, we have an image of the data set at the start, with arbitrarily placed means.

*(Image sequence courtesy David Runyan, from Project Rhea)*

After choosing the initial set of starting means, we can apply a three step iterative process to eventually 'converge' on finalized means:

1) **Fit n points to µ means:** Calculate the Euclidean distances and assign each of the $n$ points to its closest µ. In this case, all points are closest to the blue node, so all $n$ are assigned to the blue µ.

2) **Define new µ's by finding the average:** Find the center of each cluster and assign those as the new µ's. Only the blue µ moves, as the other nodes have no values assigned to them.

3) **Repeat until convergence:** Repeat until there is no change in the µ positions. I've shown the final converged image below.

**PROS:**

1. Oft-implemented, existing documentation.
2. Has a SciKit implementation in Python.

**CONS:**

1. Defining the number of means is a non-trivial process.
2. Does not necessarily converge to a global solution, as randomized starting positions will alter the final result across different runs.
3. Does not set the means to values drawn from the dataset (if we're interested in brain clusters, we want our clusters to be centered around some set of pre-defined nodes, not some weighted average determined by k-means).
4. Assumes similarly sized 'clusters', which is a very large assumption to be making for brain areas.
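To make the three-step iteration above concrete, here is a minimal NumPy sketch of the procedure (Lloyd's algorithm); the initialization and convergence test are the simplest possible choices, not a production implementation:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal Lloyd's algorithm: X is (n, d); returns (centers, labels)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # arbitrary starting means
    for _ in range(n_iter):
        # 1) assign each point to its nearest center (squared Euclidean distance)
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # 2) move each center to the mean of its assigned points
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        # 3) repeat until the centers no longer move
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels
```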
### K-Medoid Clustering

K-Medoids clustering is similar to K-means in that both start with a known, pre-defined number of $k$ clusters. However, K-medoids sets the $k$ cluster centers to points that occur naturally in the dataset of $n$ points. Thus, while K-means finds new 'means' and fits each of the $n$ points to these new means by reducing some summed error, K-medoids instead seeks to maximize some measure of similarity (e.g. from a similarity matrix) between each point and its medoid (which itself is one of the pre-existing points).

The process for K-medoids is similar to K-means; again, defining the number of clusters $k$ is a difficult process. From there, an additional difficulty lies in how to define the 'similarity' between two points. One way to do so would be to build a similarity matrix in which each entry is the inverse of the Euclidean distance between the corresponding points. Aside from this, a similar iterative process is applied to converge to medoids that maximize the similarity to their respective nodes. The specific name of the algorithmic approach is Partitioning Around Medoids, or PAM. The steps are, specifically:

1) Arbitrarily select $k$ of the $n$ nodes as the medoids.

2) Calculate the total 'similarity' (e.g., by taking the inverse of the sum of the distances between all $n$ points and their closest medoid, or by using some other measure).

3) Swap one non-medoid point with one medoid, and recalculate the 'similarity' measure. If the 'similarity' increased, keep the new configuration and continue. Otherwise, return to the previous configuration. (A sketch of this swap loop follows below.)

**PROS:**

1. Simple, like K-means.

**CONS:**

1. Has similar faults as the K-means methodology (defining $k$, not necessarily global, assumes equal sized regions).
2. Recalculating similarity at each step relative to all other points is very computationally intensive.
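A minimal sketch of the PAM swap loop described above; here pairwise distances stand in for the inverse-similarity measure, so lowering the total distance is the same as raising the similarity:

```python
import numpy as np

def pam(X, k, n_iter=50, seed=0):
    """Minimal PAM sketch: swap medoids with non-medoids while the total cost drops."""
    rng = np.random.default_rng(seed)
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    medoids = list(rng.choice(n, size=k, replace=False))          # step 1: arbitrary medoids
    cost = dist[:, medoids].min(axis=1).sum()                     # step 2: total cost
    for _ in range(n_iter):
        improved = False
        for m_idx in range(k):                                    # step 3: try every swap
            for p in range(n):
                if p in medoids:
                    continue
                trial = medoids.copy()
                trial[m_idx] = p
                trial_cost = dist[:, trial].min(axis=1).sum()
                if trial_cost < cost:  # keep swaps that lower cost (raise similarity)
                    medoids, cost, improved = trial, trial_cost, True
        if not improved:
            break
    labels = dist[:, medoids].argmin(axis=1)
    return medoids, labels
```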
What Blondel, Guillaume, Lambiotte, and Lefebvre did was define an algorithmically (and computationally) more efficient method to measure the *change* in modularity. It does so by calculating a $\Delta Q$ value that is the *change* in modularity from moving (inserting) some node $i$ from an 'isolated neighborhood' into a community $C$:

$$
\begin{align}
\Delta Q = \bigg[ \frac{\sum_{in} + \ k_{i, \ in}}{2m} - \big( \frac{\sum_{tot} + \ k_i}{2m} \big)^2 \bigg] - \bigg[ \frac{\sum_{in}}{2m} - \big( \frac{\sum_{tot}}{2m} \big)^2 - \big( \frac{k_i}{2m} \big)^2 \bigg]
\end{align}
$$

A similar equation is provided for the removal of a node $i$ from the community $C$.

Application-wise, the process for the Louvain method is similar to the approach of the other clustering techniques (K-means, K-medoids). The approach first puts each of the $n$ nodes into its own unique community. For each node $i$, it then calculates the $\Delta Q$ of moving $i$ into each neighboring community $j$; if, after all such candidate moves are evaluated, $\Delta Q$ is maximized (and positive) by adding $i$ to $j$, that move is kept. Otherwise, the system is reverted to before the addition. By applying this until a local maximum is reached (when no single move of a node increases $\Delta Q$), the Louvain method can create a very solid mapping.

**Pros:**

1. Computationally efficient; their calculations in the paper show a significant reduction in computation time (running on 6.3M nodes, their algorithm only took 197 seconds. The only comparable algorithm that even converged was Wakita and Tsurumi's CNM-based implementation, which took almost three times as long).
2. Accuracy seems to be high, according to their own discussion and results.

**Cons:**

1. Requires weighted edges (we haven't generated these yet). If our weights are defined by the number of connections within the epsilon ball, what's the point of using weighted edges over K-means or K-medoids? Isn't that just an extra layer of complexity?
2. Requires a C-based workaround that allows us to run the methods in Python (so I won't be able to use these lovely Markdown/Python notebooks). Not too big a problem, but it's nice when packages like scikit-learn already have implementations of the other methods.

Link: http://perso.crans.org/aynaud/communities/

Link to the Louvain method paper: http://arxiv.org/abs/0803.0476

```latex
%%latex
\begin{align}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{align}
```

\begin{align}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{align}

```python

```
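Circling back to the Louvain method: the following is a minimal usage sketch with the python-louvain package linked above (together with networkx). It is illustrative only; the barbell toy graph is an arbitrary choice, while `best_partition` and `modularity` are the package's actual entry points.

```python
# Illustrative Louvain run via the python-louvain package and networkx.
import networkx as nx
import community  # the python-louvain package linked above

G = nx.barbell_graph(5, 1)                  # two 5-cliques joined by a short path
partition = community.best_partition(G)     # dict: node -> community id
print(partition)
print(community.modularity(partition, G))   # modularity Q of the found partition
```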
0f4b8f9436902b82fe1850c289ac1deadbba56bc
12,987
ipynb
Jupyter Notebook
examples/Jupyter/.ipynb_checkpoints/Clustering Techniques-checkpoint.ipynb
jonl1096/seelvizorg
ae4e3567ce89eb62edcd742060619fdf1883b991
[ "Apache-2.0" ]
null
null
null
examples/Jupyter/.ipynb_checkpoints/Clustering Techniques-checkpoint.ipynb
jonl1096/seelvizorg
ae4e3567ce89eb62edcd742060619fdf1883b991
[ "Apache-2.0" ]
2
2017-04-18T02:50:14.000Z
2017-04-18T18:04:20.000Z
Jupyter/.ipynb_checkpoints/Clustering Techniques-checkpoint.ipynb
NeuroDataDesign/seelviz-archive
cb9bcf7c0f32f0256f71be59dd7d7a9086d0f3b3
[ "Apache-2.0" ]
null
null
null
61.549763
641
0.653654
true
2,682
Qwen/Qwen-72B
1. YES 2. YES
0.877477
0.901921
0.791414
__label__eng_Latn
0.998578
0.677053
```python
# Add graph and math features
import pylab as py
import numpy as np
import numpy.linalg as nl
# Add symbolic operation capability
import sympy as sy
```

# Reaction forces of a simply supported beam under a general load

Let's try to find the reaction forces of the following beam.

Length of the beam:

```python
L = sy.symbols('L[m]', real=True, nonnegative=True)
```

```python
L
```

With simple supports at both ends, we can assume three reaction forces: one in the $x$ direction and two in the $y$ direction.

```python
R_Ax, R_Ay, R_By = sy.symbols('R_{Ax}[N] R_{Ay}[N] R_{By}[N]', real=True)
```

```python
R_Ax
```

```python
R_Ay
```

```python
R_By
```

Let's assume $R_{Ax}$ is positive in the $-\infty$ direction, and that $R_{Ay}$ and $R_{By}$ are positive in the $+\infty$ direction.

Components of the load vector:

```python
F_x, F_y = sy.symbols('F_{x}[N] F_{y}[N]', real=True)
```

```python
F_x
```

```python
F_y
```

Let's assume $F_{x}$ and $F_{y}$ are positive in the $+\infty$ and $-\infty$ directions, respectively.

Components of the position vector of the load relative to support A:

```python
P_x, P_y = sy.symbols('P_{x}[m] P_{y}[m]', real=True)
```

```python
P_x
```

```python
P_y
```

Force equilibrium in the $x$ direction

```python
x_eq = sy.Eq(R_Ax, F_x)
```

```python
x_eq
```

Force equilibrium in the $y$ direction

```python
y_eq = sy.Eq(R_Ay + R_By, F_y)
```

```python
y_eq
```

Moment equilibrium around point A. With the sign conventions above, the load vector is $(F_x, -F_y)$ applied at $(P_x, P_y)$, so its moment about A is $-P_x F_y - P_y F_x$, balanced by $R_{By} L$:

```python
# sy.Eq takes two arguments (expression, value); the equation sums moments about A
A_eq = sy.Eq(-P_y * F_x - P_x * F_y + R_By * L, 0)
```

```python
A_eq
```

Solving the system of equations for the reaction forces gives the following.

```python
sol = sy.solve([x_eq, y_eq, A_eq], [R_Ax, R_Ay, R_By])
```

```python
sol[R_Ax]
```

```python
sol[R_Ay]
```

```python
sol[R_By]
```

## Final Bell

```python
# stackoverflow.com/a/24634221
import os
os.system("printf '\a'");
```

```python

```
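As a quick numerical sanity check of the symbolic solution (the values below are arbitrary illustration choices, not part of the original notebook): a purely vertical load at mid-span should be carried equally by the two supports.

```python
# F_y = 100 N downward at mid-span of a 2 m beam: expect R_Ay = R_By = 50 N.
vals = {L: 2, F_x: 0, F_y: 100, P_x: 1, P_y: 0}
print(sol[R_Ax].subs(vals))  # 0
print(sol[R_Ay].subs(vals))  # 50
print(sol[R_By].subs(vals))  # 50
```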
0f89961e0780439f33e0dab2bfa9bdf3bb74bacd
7,045
ipynb
Jupyter Notebook
45_sympy/20_Beam_Reaction_Force_General.ipynb
kangwonlee/2009eca-nmisp-template
46a09c988c5e0c4efd493afa965d4a17d32985e8
[ "BSD-3-Clause" ]
null
null
null
45_sympy/20_Beam_Reaction_Force_General.ipynb
kangwonlee/2009eca-nmisp-template
46a09c988c5e0c4efd493afa965d4a17d32985e8
[ "BSD-3-Clause" ]
null
null
null
45_sympy/20_Beam_Reaction_Force_General.ipynb
kangwonlee/2009eca-nmisp-template
46a09c988c5e0c4efd493afa965d4a17d32985e8
[ "BSD-3-Clause" ]
null
null
null
17.971939
137
0.474663
true
944
Qwen/Qwen-72B
1. YES 2. YES
0.891811
0.73412
0.654696
__label__kor_Hang
0.925317
0.359409
# Free-Body Diagram for particles

> Renato Naville Watanabe
> [Laboratory of Biomechanics and Motor Control](http://pesquisa.ufabc.edu.br/bmclab)
> Federal University of ABC, Brazil

```python
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set_context('notebook', font_scale=1.2)
```

## Free-Body Diagram

In the mechanical modeling of an inanimate or living system composed of one or more bodies (bodies as units that are mechanically isolated according to the question one is trying to answer), it is convenient to isolate each body (be they originally interconnected or not) and identify each force and moment of force (torque) that acts on this body in order to apply the laws of mechanics.

**The free body diagram (FBD) of a mechanical system or model is the representation in a diagram of all forces and moments of force acting on each body, isolated from the rest of the system.**

The term free means that each body, which may have been part of a connected system, is represented as isolated (free), and any existing contact force is represented in the diagram as forces (action and reaction) acting on the formerly connected bodies. Then, the laws of mechanics are applied to each body, and the unknown movement, force or moment of force can be found if the system of equations is determined (the number of unknown variables cannot be greater than the number of equations for each body).

How exactly an FBD is drawn for a mechanical model of something depends on what one is trying to find. For example, the air resistance might be neglected or not when modeling the movement of an object, and the number of parts the system is divided into depends on what is needed to know about the model.

The use of the FBD is very common in biomechanics; a typical application is to determine the forces and torques on the ankle, knee, and hip joints of the lower limb (foot, leg, and thigh) during locomotion, but the FBD can be applied to any problem where the laws of mechanics are needed.

For now, let's study how to draw free-body diagrams for systems that can be modeled as particles.

### Steps to draw a free-body diagram (FBD)

1. Draw separately each object considered in the problem. How you separate them depends on what questions you want to answer.
2. Identify the forces acting on each object. If you are analyzing more than one object, remember Newton's third law (action and reaction), and identify where the reaction of a force is being applied.
3. Draw all the identified forces, representing them as vectors. The vectors should be represented with their origin in the object. In the case of particles, the origin should be in the center of the particle.
4. If necessary, you should represent the reference frame in the free-body diagram.
5. After this, you can solve the problem using Newton's second law (see, e.g., [Newton's Laws](https://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/Notebooks/newtonLawForParticles.ipynb)) to find the motion of the particle.

## Basic elements and forces

### Gravity

The gravity force acts between two masses, each one attracting the other:

\begin{equation}
\vec{{\bf{F}}} = - G\frac{m_1m_2}{||\vec{\bf{r}}||^2}\frac{\vec{\bf{r}}}{||\vec{\bf{r}}||}
\end{equation}

where $G = 6.67\times10^{-11} \, Nm^2/kg^2$ and $\vec{\bf{r}}$ is a vector with length equal to the distance between the masses, pointing towards the other mass. Note that the forces acting on each mass have the same absolute value.
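As a quick numerical check of the statement in the next paragraph (a purely illustrative snippet using the constants quoted in the text):

```python
# Surface gravity from Newton's law of gravitation: g = G*M/R**2
G = 6.67e-11   # N m^2 / kg^2
M = 5.9736e24  # kg, mass of the Earth
R = 6.371e6    # m, radius of the Earth
print(G*M/R**2)  # ~9.8 m/s^2, consistent with the value quoted below
```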
Since the mass of the Earth is $m_1=5.9736\times10^{24}\,kg$ and its radius is $6.371\times10^6\,m$, the gravity force near the surface of the Earth is:

<span class="notranslate">
\begin{equation}
\vec{{\bf{F}}} = m\vec{\bf{g}}
\end{equation}
</span>

with the absolute value of $\vec{\bf{g}}$ approximately equal to 9.81 $m/s^2$, pointing towards the center of the Earth.

### Spring

A spring is an element used to represent a force proportional to some length or displacement. It produces a force in the same direction of the vector linking the spring extremities and opposite to its length or displacement from an equilibrium length. Frequently it has a linear relation, but it could be nonlinear as well. The force exerted by the spring in one of the extremities is:

<span class="notranslate">
\begin{equation}
\vec{\bf{F}} = - k(||\vec{\bf{r}}||-l_0)\frac{\vec{\bf{r}}}{||\vec{\bf{r}}||} = -k\vec{\bf{r}} +kl_0\frac{\vec{\bf{r}}}{||\vec{\bf{r}}||} = -k\left(1-\frac{l_0}{||\vec{\bf{r}}||}\right)\vec{\bf{r}}
\end{equation}
</span>

where $\vec{\bf{r}}$ is the vector linking the extremity applying the force to the other extremity and $l_0$ is the equilibrium length of the spring. Since the spring is a massless element, the forces at both extremities have the same absolute value and opposite directions.

### Damping

A damper is an element used to represent a force proportional to the velocity of displacement. It produces a force in the opposite direction of its velocity. Frequently it has a linear relation, but it could be nonlinear as well. The force exerted by the damper element in one of its extremities is:

<span class="notranslate">
\begin{equation}
\vec{\bf{F}} = - b||\vec{\bf{v}}||\frac{\vec{\bf{v}}}{||\vec{\bf{v}}||} = -b\vec{\bf{v}} = -b\frac{d\vec{\bf{r}}}{dt}
\end{equation}
</span>

where $\vec{\bf{r}}$ is the vector linking the extremity applying the force to the other extremity. Since the damper is a massless element, the forces at both extremities have the same absolute value and opposite directions.

## Examples of free-body diagrams

Let's see some examples of how to draw the free-body diagram and obtain the motion equations to solve the problems.

### 1. No force acting on the particle

The most trivial situation is a particle with no force acting on it. The free-body diagram is below, with no force vectors acting on the particle.

<figure><center><figcaption><i>Figure. Free-body diagram of a ball with no force acting on it.</i></figcaption></center></figure>

In this situation, the resultant force is:

<span class="notranslate">
\begin{equation}
\vec{\bf{F}} = 0
\end{equation}
</span>

And the second Newton law for this particle is:

<span class="notranslate">
\begin{equation}
m\frac{d^2\vec{\bf{r}}}{dt^2} = 0 \quad \rightarrow \quad \frac{d^2\vec{\bf{r}}}{dt^2} = 0
\end{equation}
</span>

The motion of the particle can be found by integrating twice with respect to time, giving the following:

<span class="notranslate">
\begin{equation}
\vec{\bf{r}} = \vec{\bf{v}}_0t + \vec{\bf{r}}_0
\end{equation}
</span>

The particle continues to change its position with the same velocity it had at the beginning of the analysis. This could be predicted by Newton's first law.

### 2. Gravity force acting on the particle

Now, let's consider a ball with the gravity force acting on it. The free-body diagram is depicted below.

<figure><center><figcaption><i>Figure. Free-body diagram of a ball under the influence of gravity.</i></figcaption></center></figure>
The only force acting on the ball is the gravitational force:

<span class="notranslate">
\begin{equation}
\vec{\bf{F}}_g = - mg \; \hat{\bf{j}}
\end{equation}
</span>

Applying Newton's second law:

<span class="notranslate">
\begin{equation}
\vec{\bf{F}}_g = m \frac{d^2\vec{\bf{r}}}{dt^2} \rightarrow - mg \; \hat{\bf{j}} = m \frac{d^2\vec{\bf{r}}}{dt^2} \rightarrow - g \; \hat{\bf{j}} = \frac{d^2\vec{\bf{r}}}{dt^2}
\end{equation}
</span>

Now, we can separate the equation into two components (x and y):

<span class="notranslate">
\begin{equation}
0 = \frac{d^2x}{dt^2}
\end{equation}
</span>

and

<span class="notranslate">
\begin{equation}
- g = \frac{d^2y}{dt^2}
\end{equation}
</span>

These equations were solved in [this Notebook about Newton's laws](https://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/Notebooks/newtonLawForParticles.ipynb).

### 3. Ground reaction force

Now, we will analyze the situation of a particle at rest in contact with the ground. To simplify the analysis, only the vertical movement will be considered.

<figure><center><figcaption><i>Figure. Free-body diagram of a ball at rest in contact with the ground.</i></figcaption></center></figure>

The forces acting on the particle are the ground reaction force (often called the normal force) and the gravity force. The free-body diagram of the particle is below:

<figure><center><figcaption><i>Figure. Free-body diagram of a ball in contact with the ground.</i></figcaption></center></figure>

So, the resultant force on the particle is:

<span class="notranslate">
\begin{equation}
\vec{\bf{F}} = \overrightarrow{\bf{GRF}} + m\vec{\bf{g}} = \overrightarrow{\bf{GRF}} - mg \; \hat{\bf{j}}
\end{equation}
</span>

Considering only the y direction:

<span class="notranslate">
\begin{equation}
F = GRF - mg
\end{equation}
</span>

Applying Newton's second law to the particle:

<span class="notranslate">
\begin{equation}
m \frac{d^2y}{dt^2} = GRF - mg
\end{equation}
</span>

Note that since we have no information about how the force GRF varies over time, we cannot solve this equation. To find the position of the particle along time, one would have to measure the ground reaction force. See [the notebook on Vertical jump](http://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/notebooks/VerticalJump.ipynb) for an application of this model.

### 4. Mass-spring system with horizontal movement

The example below represents a mass attached to a spring whose other extremity is fixed.

<figure><center><figcaption><i>Figure. Mass-spring system with horizontal movement.</i></figcaption></center></figure>

The only force acting on the mass is from the spring. Below is the free-body diagram of the mass.

<figure><center><figcaption><i>Figure. Free-body diagram of a mass-spring system.</i></figcaption></center></figure>

Since the movement is horizontal, we can neglect the gravity force.
<span class="notranslate">
\begin{equation}
\vec{\bf{F}} = -k\left(1-\frac{l_0}{||\vec{\bf{r}}||}\right)\vec{\bf{r}}
\end{equation}
</span>

Applying Newton's second law to the mass:

<span class="notranslate">
\begin{equation}
m\frac{d^2\vec{\bf{r}}}{dt^2} = -k\left(1-\frac{l_0}{||\vec{\bf{r}}||}\right)\vec{\bf{r}} \rightarrow \frac{d^2\vec{\bf{r}}}{dt^2} = -\frac{k}{m}\left(1-\frac{l_0}{||\vec{\bf{r}}||}\right)\vec{\bf{r}}
\end{equation}
</span>

Since the movement is unidimensional, we can deal with it scalarly:

<span class="notranslate">
\begin{equation}
\frac{d^2x}{dt^2} = -\frac{k}{m}\left(1-\frac{l_0}{x}\right)x = -\frac{k}{m}(x-l_0)
\end{equation}
</span>

To solve this equation numerically, we must break it into two first-order differential equations:

<span class="notranslate">
\begin{equation}
\frac{dv_x}{dt} = -\frac{k}{m}(x-l_0)
\end{equation}
</span>

<span class="notranslate">
\begin{equation}
\frac{dx}{dt} = v_x
\end{equation}
</span>

In the numerical solution below, we will use $k = 40\,N/m$, $m = 2\,kg$, $l_0 = 0.5\,m$, and the mass starts from the position $x = 0.8\,m$ and at rest.

```python
k = 40    # spring constant (N/m)
m = 2     # mass (kg)
l0 = 0.5  # equilibrium length of the spring (m)
x0 = 0.8  # initial position (m)
v0 = 0    # initial velocity (m/s)

x = x0
v = v0
dt = 0.001
t = np.arange(0, 3, dt)
r = np.array([x])

# explicit (forward) Euler integration of the two first-order equations
for i in t[1:]:
    dxdt = v
    dvxdt = -k/m*(x-l0)
    x = x + dt*dxdt
    v = v + dt*dvxdt
    r = np.vstack((r,np.array([x])))

plt.figure(figsize=(8, 4))
plt.plot(t, r, lw=4)
plt.xlabel('t(s)')
plt.ylabel('x(m)')
plt.title('Spring displacement')
plt.show()
```

### 5. Linear spring in bidimensional movement at horizontal plane

The example below represents a system with two masses attached to a spring. To solve the motion of both masses, we have to draw a free-body diagram for each one of the masses.

<figure><center><figcaption><i>Figure. Linear spring in bidimensional movement at horizontal plane.</i></figcaption></center></figure>

The only force acting on each mass is the force due to the spring. Since the movement happens in the horizontal plane, the gravity force can be neglected.

<figure><center><figcaption><i>Figure. FBD of linear spring in bidimensional movement at horizontal plane.</i></figcaption></center></figure>
So, the forces acting on mass 1 are:

<span class="notranslate">
\begin{equation}
\vec{\bf{F_1}} = k\left(||\vec{\bf{r_2}}-\vec{\bf{r_1}}||-l_0\right)\frac{(\vec{\bf{r_2}}-\vec{\bf{r_1}})}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}
\end{equation}
</span>

and the forces acting on mass 2 are:

<span class="notranslate">
\begin{equation}
\vec{\bf{F_2}} =k\left(||\vec{\bf{r_2}}-\vec{\bf{r_1}}||-l_0\right)\frac{(\vec{\bf{r_1}}-\vec{\bf{r_2}})}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}
\end{equation}
</span>

Applying Newton's second law to the masses:

<span class="notranslate">
\begin{equation}
m_1\frac{d^2\vec{\bf{r_1}}}{dt^2} = k\left(||\vec{\bf{r_2}}-\vec{\bf{r_1}}||-l_0\right)\frac{(\vec{\bf{r_2}}-\vec{\bf{r_1}})}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||} \\
\frac{d^2\vec{\bf{r_1}}}{dt^2} = -\frac{k}{m_1}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)\vec{\bf{r_1}}+\frac{k}{m_1}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)\vec{\bf{r_2}} \\
\frac{d^2x_1\hat{\bf{i}}}{dt^2}+\frac{d^2y_1\hat{\bf{j}}}{dt^2} = -\frac{k}{m_1}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)(x_1\hat{\bf{i}}+y_1\hat{\bf{j}})+\frac{k}{m_1}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)(x_2\hat{\bf{i}}+y_2\hat{\bf{j}})
\end{equation}
</span>
<br/>
<span class="notranslate">
\begin{equation}
m_2\frac{d^2\vec{\bf{r_2}}}{dt^2} = k\left(||\vec{\bf{r_2}}-\vec{\bf{r_1}}||-l_0\right)\frac{(\vec{\bf{r_1}}-\vec{\bf{r_2}})}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||} \\
\frac{d^2\vec{\bf{r_2}}}{dt^2} = -\frac{k}{m_2}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)\vec{\bf{r_2}}+\frac{k}{m_2}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)\vec{\bf{r_1}} \\
\frac{d^2x_2\hat{\bf{i}}}{dt^2}+\frac{d^2y_2\hat{\bf{j}}}{dt^2} = -\frac{k}{m_2}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)(x_2\hat{\bf{i}}+y_2\hat{\bf{j}})+\frac{k}{m_2}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)(x_1\hat{\bf{i}}+y_1\hat{\bf{j}})
\end{equation}
</span>

Now, we can separate the equations for each of the coordinates:

<span class="notranslate">
\begin{equation}
\frac{d^2x_1}{dt^2} = -\frac{k}{m_1}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)x_1+\frac{k}{m_1}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)x_2=-\frac{k}{m_1}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(x_1-x_2)
\end{equation}
</span>

<span class="notranslate">
\begin{equation}
\frac{d^2y_1}{dt^2} = -\frac{k}{m_1}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)y_1+\frac{k}{m_1}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)y_2=-\frac{k}{m_1}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(y_1-y_2)
\end{equation}
</span>

<span class="notranslate">
\begin{equation}
\frac{d^2x_2}{dt^2} = -\frac{k}{m_2}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)x_2+\frac{k}{m_2}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)x_1=-\frac{k}{m_2}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(x_2-x_1)
\end{equation}
</span>

<span class="notranslate">
\begin{equation}
\frac{d^2y_2}{dt^2} = -\frac{k}{m_2}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)y_2+\frac{k}{m_2}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)y_1=-\frac{k}{m_2}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(y_2-y_1)
\end{equation}
</span>

To solve these equations numerically, we must break them into first-order equations:
<span class="notranslate">
\begin{equation}
\frac{dv_{x_1}}{dt} = -\frac{k}{m_1}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(x_1-x_2)
\end{equation}
</span>

<span class="notranslate">
\begin{equation}
\frac{dv_{y_1}}{dt} = -\frac{k}{m_1}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(y_1-y_2)
\end{equation}
</span>

<span class="notranslate">
\begin{equation}
\frac{dv_{x_2}}{dt} = -\frac{k}{m_2}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(x_2-x_1)
\end{equation}
</span>

<span class="notranslate">
\begin{equation}
\frac{dv_{y_2}}{dt} = -\frac{k}{m_2}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(y_2-y_1)
\end{equation}
</span>

<span class="notranslate">
\begin{equation}
\frac{dx_1}{dt} = v_{x_1}
\end{equation}
</span>

<span class="notranslate">
\begin{equation}
\frac{dy_1}{dt} = v_{y_1}
\end{equation}
</span>

<span class="notranslate">
\begin{equation}
\frac{dx_2}{dt} = v_{x_2}
\end{equation}
</span>

<span class="notranslate">
\begin{equation}
\frac{dy_2}{dt} = v_{y_2}
\end{equation}
</span>

Note that if you did not want to know the details about the motion of each mass, but only the motion of the center of mass of the masses-spring system, you could have modeled the whole system as a single particle.

To solve the equations numerically, we will use $m_1 = 1\,kg$, $m_2 = 1\,kg$, $l_0 = 0.5\,m$, $k = 30\,N/m$ and $x_{1_0} = 0\,m$, $x_{2_0} = 0\,m$, $y_{1_0} = 0.5\,m$, $y_{2_0} = -0.5\,m$, $v_{x1_0} = 0.1\,m/s$, $v_{x2_0} = -0.1\,m/s$, $v_{y1_0} = 0\,m/s$, $v_{y2_0} = 0\,m/s$ (the same values used in the code below).

```python
x01 = 0
y01 = 0.5
x02 = 0
y02 = -0.5
vx01 = 0.1
vy01 = 0
vx02 = -0.1
vy02 = 0
x1 = x01
y1 = y01
x2 = x02
y2 = y02
vx1 = vx01
vy1 = vy01
vx2 = vx02
vy2 = vy02
r1 = np.array([x1,y1])
r2 = np.array([x2,y2])
k = 30
m1 = 1
m2 = 1
l0 = 0.5
dt = 0.0001
t = np.arange(0,5,dt)
# explicit (forward) Euler integration of the eight first-order equations
for i in t[1:]:
    dvx1dt = -k/m1*(x1-x2)*(1-l0/np.sqrt((x2-x1)**2+(y2-y1)**2))
    dvx2dt = -k/m2*(x2-x1)*(1-l0/np.sqrt((x2-x1)**2+(y2-y1)**2))
    dvy1dt = -k/m1*(y1-y2)*(1-l0/np.sqrt((x2-x1)**2+(y2-y1)**2))
    dvy2dt = -k/m2*(y2-y1)*(1-l0/np.sqrt((x2-x1)**2+(y2-y1)**2))
    dx1dt = vx1
    dx2dt = vx2
    dy1dt = vy1
    dy2dt = vy2
    x1 = x1 + dt*dx1dt
    x2 = x2 + dt*dx2dt
    y1 = y1 + dt*dy1dt
    y2 = y2 + dt*dy2dt
    vx1 = vx1 + dt*dvx1dt
    vx2 = vx2 + dt*dvx2dt
    vy1 = vy1 + dt*dvy1dt
    vy2 = vy2 + dt*dvy2dt
    r1 = np.vstack((r1,np.array([x1,y1])))
    r2 = np.vstack((r2,np.array([x2,y2])))

springLength = np.sqrt((r1[:,0]-r2[:,0])**2+(r1[:,1]-r2[:,1])**2)

plt.figure(figsize=(8, 4))
plt.plot(t, springLength, lw=4)
plt.xlabel('t(s)')
plt.ylabel('Spring length (m)')
plt.show()
```

### 6. Particle under action of gravity and linear air resistance

Below is the free-body diagram of a particle with the gravity force and a linear drag force due to the air resistance.

<figure><center><figcaption><i>Figure. Particle under action of gravity and linear air resistance.</i></figcaption></center></figure>
The forces being applied to the ball are:

<span class="notranslate">
\begin{equation}
\vec{\bf{F}} = -mg \hat{\bf{j}} - b\vec{\bf{v}} = -mg \hat{\bf{j}} - b\frac{d\vec{\bf{r}}}{dt} = -mg \hat{\bf{j}} - b\left(\frac{dx}{dt}\hat{\bf{i}}+\frac{dy}{dt}\hat{\bf{j}}\right) = - b\frac{dx}{dt}\hat{\bf{i}} - \left(mg + b\frac{dy}{dt}\right)\hat{\bf{j}}
\end{equation}
</span>

Writing down Newton's second law:

<span class="notranslate">
\begin{equation}
\vec{\bf{F}} = m \frac{d^2\vec{\bf{r}}}{dt^2} \rightarrow - b\frac{dx}{dt}\hat{\bf{i}} - \left(mg + b\frac{dy}{dt}\right)\hat{\bf{j}} = m\left(\frac{d^2x}{dt^2}\hat{\bf{i}}+\frac{d^2y}{dt^2}\hat{\bf{j}}\right)
\end{equation}
</span>

Now, we can separate it into one equation for each coordinate:

<span class="notranslate">
\begin{equation}
- b\frac{dx}{dt} = m\frac{d^2x}{dt^2} \rightarrow \frac{d^2x}{dt^2} = -\frac{b}{m} \frac{dx}{dt}
\end{equation}
</span>

<span class="notranslate">
\begin{equation}
-mg - b\frac{dy}{dt} = m\frac{d^2y}{dt^2} \rightarrow \frac{d^2y}{dt^2} = -\frac{b}{m}\frac{dy}{dt} - g
\end{equation}
</span>

These equations were solved in [this notebook](https://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/Notebooks/newtonLawForParticles.ipynb).

### 7. Particle under action of gravity and nonlinear air resistance

Below is the free-body diagram of a particle with the gravity force and a drag force due to the air resistance proportional to the square of the particle velocity.

<figure><center><figcaption><i>Figure. Particle under action of gravity and nonlinear air resistance.</i></figcaption></center></figure>

The forces being applied to the ball are:

<span class="notranslate">
\begin{equation}
\vec{\bf{F}} = -mg \hat{\bf{j}} - bv^2\hat{\bf{e_t}} = -mg \hat{\bf{j}} - b (v_x^2+v_y^2) \frac{v_x\hat{\bf{i}}+v_y\hat{\bf{j}}}{\sqrt{v_x^2+v_y^2}} = -mg \hat{\bf{j}} - b \sqrt{v_x^2+v_y^2} \,(v_x\hat{\bf{i}}+v_y\hat{\bf{j}}) = -mg \hat{\bf{j}} - b \sqrt{\left(\frac{dx}{dt} \right)^2+\left(\frac{dy}{dt} \right)^2} \,\left(\frac{dx}{dt} \hat{\bf{i}}+\frac{dy}{dt}\hat{\bf{j}}\right)
\end{equation}
</span>

Writing down Newton's second law:

<span class="notranslate">
\begin{equation}
\vec{\bf{F}} = m \frac{d^2\vec{\bf{r}}}{dt^2} \rightarrow -mg \hat{\bf{j}} - b \sqrt{\left(\frac{dx}{dt} \right)^2+\left(\frac{dy}{dt} \right)^2} \,\left(\frac{dx}{dt} \hat{\bf{i}}+\frac{dy}{dt}\hat{\bf{j}}\right) = m\left(\frac{d^2x}{dt^2}\hat{\bf{i}}+\frac{d^2y}{dt^2}\hat{\bf{j}}\right)
\end{equation}
</span>

Now, we can separate it into one equation for each coordinate:

<span class="notranslate">
\begin{equation}
- b \sqrt{\left(\frac{dx}{dt} \right)^2+\left(\frac{dy}{dt} \right)^2} \,\frac{dx}{dt} = m\frac{d^2x}{dt^2} \rightarrow \frac{d^2x}{dt^2} = - \frac{b}{m} \sqrt{\left(\frac{dx}{dt} \right)^2+\left(\frac{dy}{dt} \right)^2} \,\frac{dx}{dt}
\end{equation}
</span>

<span class="notranslate">
\begin{equation}
-mg - b \sqrt{\left(\frac{dx}{dt} \right)^2+\left(\frac{dy}{dt} \right)^2} \,\frac{dy}{dt} = m\frac{d^2y}{dt^2} \rightarrow \frac{d^2y}{dt^2} = - \frac{b}{m} \sqrt{\left(\frac{dx}{dt} \right)^2+\left(\frac{dy}{dt} \right)^2} \,\frac{dy}{dt} -g
\end{equation}
</span>

These equations were solved numerically in [this notebook](https://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/Notebooks/newtonLawForParticles.ipynb).

### 8. Linear spring and damping on bidimensional horizontal movement
This situation is very similar to the example of horizontal movement with one spring and two masses, with a damper added in parallel to the spring.

<figure><center><figcaption><i>Figure. Linear spring and damping on bidimensional horizontal movement.</i></figcaption></center></figure>

Now, the forces acting on each mass are the force due to the spring and the force due to the damper.

<figure><center><figcaption><i>Figure. FBD of linear spring and damping on bidimensional horizontal movement.</i></figcaption></center></figure>

So, the forces acting on mass 1 are:

<span class="notranslate">
\begin{equation}
\vec{\bf{F_1}} = b\frac{d(\vec{\bf{r_2}}-\vec{\bf{r_1}})}{dt} + k\left(||\vec{\bf{r_2}}-\vec{\bf{r_1}}||-l_0\right)\frac{(\vec{\bf{r_2}}-\vec{\bf{r_1}})}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||} = b\frac{d(\vec{\bf{r_2}}-\vec{\bf{r_1}})}{dt} + k\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)(\vec{\bf{r_2}}-\vec{\bf{r_1}})
\end{equation}
</span>

and the forces acting on mass 2 are:

<span class="notranslate">
\begin{equation}
\vec{\bf{F_2}} = b\frac{d(\vec{\bf{r_1}}-\vec{\bf{r_2}})}{dt} + k\left(||\vec{\bf{r_2}}-\vec{\bf{r_1}}||-l_0\right)\frac{(\vec{\bf{r_1}}-\vec{\bf{r_2}})}{||\vec{\bf{r_1}}-\vec{\bf{r_2}}||}= b\frac{d(\vec{\bf{r_1}}-\vec{\bf{r_2}})}{dt} + k\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)(\vec{\bf{r_1}}-\vec{\bf{r_2}})
\end{equation}
</span>

Applying Newton's second law to the masses:

<span class="notranslate">
\begin{equation}
m_1\frac{d^2\vec{\bf{r_1}}}{dt^2} = b\frac{d(\vec{\bf{r_2}}-\vec{\bf{r_1}})}{dt}+k\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)(\vec{\bf{r_2}}-\vec{\bf{r_1}})
\end{equation}
</span>

<span class="notranslate">
\begin{equation}
\frac{d^2\vec{\bf{r_1}}}{dt^2} = -\frac{b}{m_1}\frac{d\vec{\bf{r_1}}}{dt} -\frac{k}{m_1}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)\vec{\bf{r_1}} + \frac{b}{m_1}\frac{d\vec{\bf{r_2}}}{dt}+\frac{k}{m_1}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)\vec{\bf{r_2}}
\end{equation}
</span>

<span class="notranslate">
\begin{equation}
\frac{d^2x_1\hat{\bf{i}}}{dt^2}+\frac{d^2y_1\hat{\bf{j}}}{dt^2} = -\frac{b}{m_1}\left(\frac{dx_1\hat{\bf{i}}}{dt}+\frac{dy_1\hat{\bf{j}}}{dt}\right)-\frac{k}{m_1}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)(x_1\hat{\bf{i}}+y_1\hat{\bf{j}})+\frac{b}{m_1}\left(\frac{dx_2\hat{\bf{i}}}{dt}+\frac{dy_2\hat{\bf{j}}}{dt}\right)+\frac{k}{m_1}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)(x_2\hat{\bf{i}}+y_2\hat{\bf{j}}) = -\frac{b}{m_1}\left(\frac{dx_1\hat{\bf{i}}}{dt}+\frac{dy_1\hat{\bf{j}}}{dt}-\frac{dx_2\hat{\bf{i}}}{dt}-\frac{dy_2\hat{\bf{j}}}{dt}\right)-\frac{k}{m_1}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(x_1\hat{\bf{i}}+y_1\hat{\bf{j}}-x_2\hat{\bf{i}}-y_2\hat{\bf{j}})
\end{equation}
</span>

\begin{equation}
m_2\frac{d^2\vec{\bf{r_2}}}{dt^2} = b\frac{d(\vec{\bf{r_1}}-\vec{\bf{r_2}})}{dt}+k\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)(\vec{\bf{r_1}}-\vec{\bf{r_2}})
\end{equation}

\begin{equation}
\frac{d^2\vec{\bf{r_2}}}{dt^2} = -\frac{b}{m_2}\frac{d\vec{\bf{r_2}}}{dt} -\frac{k}{m_2}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)\vec{\bf{r_2}} + \frac{b}{m_2}\frac{d\vec{\bf{r_1}}}{dt}+\frac{k}{m_2}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)\vec{\bf{r_1}}
\end{equation}

\begin{equation}
\frac{d^2x_2\hat{\bf{i}}}{dt^2}+\frac{d^2y_2\hat{\bf{j}}}{dt^2} = -\frac{b}{m_2}\left(\frac{dx_2\hat{\bf{i}}}{dt}+\frac{dy_2\hat{\bf{j}}}{dt}\right)-\frac{k}{m_2}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)(x_2\hat{\bf{i}}+y_2\hat{\bf{j}})+\frac{b}{m_2}\left(\frac{dx_1\hat{\bf{i}}}{dt}+\frac{dy_1\hat{\bf{j}}}{dt}\right)+\frac{k}{m_2}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)(x_1\hat{\bf{i}}+y_1\hat{\bf{j}})=-\frac{b}{m_2}\left(\frac{dx_2\hat{\bf{i}}}{dt}+\frac{dy_2\hat{\bf{j}}}{dt}-\frac{dx_1\hat{\bf{i}}}{dt}-\frac{dy_1\hat{\bf{j}}}{dt}\right)-\frac{k}{m_2}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(x_2\hat{\bf{i}}+y_2\hat{\bf{j}}-x_1\hat{\bf{i}}-y_1\hat{\bf{j}})
\end{equation}
Now, we can separate the equations for each of the coordinates:

<span class="notranslate">
\begin{equation}
\frac{d^2x_1}{dt^2} = -\frac{b}{m_1}\left(\frac{dx_1}{dt}-\frac{dx_2}{dt}\right)-\frac{k}{m_1}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(x_1-x_2)
\end{equation}
</span>

<span class="notranslate">
\begin{equation}
\frac{d^2y_1}{dt^2} = -\frac{b}{m_1}\left(\frac{dy_1}{dt}-\frac{dy_2}{dt}\right)-\frac{k}{m_1}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(y_1-y_2)
\end{equation}
</span>

<span class="notranslate">
\begin{equation}
\frac{d^2x_2}{dt^2} = -\frac{b}{m_2}\left(\frac{dx_2}{dt}-\frac{dx_1}{dt}\right)-\frac{k}{m_2}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(x_2-x_1)
\end{equation}
</span>

<span class="notranslate">
\begin{equation}
\frac{d^2y_2}{dt^2} = -\frac{b}{m_2}\left(\frac{dy_2}{dt}-\frac{dy_1}{dt}\right)-\frac{k}{m_2}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(y_2-y_1)
\end{equation}
</span>

If you want to solve these equations numerically, you must break them into first-order equations:

<span class="notranslate">
\begin{equation}
\frac{dv_{x_1}}{dt} = -\frac{b}{m_1}\left(v_{x_1}-v_{x_2}\right)-\frac{k}{m_1}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(x_1-x_2)
\end{equation}
</span>

<span class="notranslate">
\begin{equation}
\frac{dv_{y_1}}{dt} = -\frac{b}{m_1}\left(v_{y_1}-v_{y_2}\right)-\frac{k}{m_1}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(y_1-y_2)
\end{equation}
</span>

<span class="notranslate">
\begin{equation}
\frac{dv_{x_2}}{dt} = -\frac{b}{m_2}\left(v_{x_2}-v_{x_1}\right)-\frac{k}{m_2}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(x_2-x_1)
\end{equation}
</span>

<span class="notranslate">
\begin{equation}
\frac{dv_{y_2}}{dt} = -\frac{b}{m_2}\left(v_{y_2}-v_{y_1}\right)-\frac{k}{m_2}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(y_2-y_1)
\end{equation}
</span>

<span class="notranslate">
\begin{equation}
\frac{dx_1}{dt} = v_{x_1}
\end{equation}
</span>

<span class="notranslate">
\begin{equation}
\frac{dy_1}{dt} = v_{y_1}
\end{equation}
</span>

<span class="notranslate">
\begin{equation}
\frac{dx_2}{dt} = v_{x_2}
\end{equation}
</span>

<span class="notranslate">
\begin{equation}
\frac{dy_2}{dt} = v_{y_2}
\end{equation}
</span>

To solve the equations numerically, we will use $m_1 = 1\,kg$, $m_2 = 2\,kg$, $l_0 = 0.5\,m$, $k = 10\,N/m$, $b = 0.6\,Ns/m$ and $x_{1_0} = 0\,m$, $x_{2_0} = 0\,m$, $y_{1_0} = 1\,m$, $y_{2_0} = -1\,m$, $v_{x1_0} = -2\,m/s$, $v_{x2_0} = 1\,m/s$, $v_{y1_0} = 0\,m/s$, $v_{y2_0} = 0\,m/s$.
```python
x01 = 0
y01 = 1
x02 = 0
y02 = -1
vx01 = -2
vy01 = 0
vx02 = 1
vy02 = 0
x1 = x01
y1 = y01
x2 = x02
y2 = y02
vx1 = vx01
vy1 = vy01
vx2 = vx02
vy2 = vy02
r1 = np.array([x1,y1])
r2 = np.array([x2,y2])
k = 10
m1 = 1
m2 = 2
b = 0.6
l0 = 0.5
dt = 0.001
t = np.arange(0,5,dt)
# explicit (forward) Euler integration of the eight first-order equations
for i in t[1:]:
    dvx1dt = -b/m1*(vx1-vx2) -k/m1*(1-l0/np.sqrt((x2-x1)**2+(y2-y1)**2))*(x1-x2)
    dvx2dt = -b/m2*(vx2-vx1) -k/m2*(1-l0/np.sqrt((x2-x1)**2+(y2-y1)**2))*(x2-x1)
    dvy1dt = -b/m1*(vy1-vy2) -k/m1*(1-l0/np.sqrt((x2-x1)**2+(y2-y1)**2))*(y1-y2)
    dvy2dt = -b/m2*(vy2-vy1) -k/m2*(1-l0/np.sqrt((x2-x1)**2+(y2-y1)**2))*(y2-y1)
    dx1dt = vx1
    dx2dt = vx2
    dy1dt = vy1
    dy2dt = vy2
    x1 = x1 + dt*dx1dt
    x2 = x2 + dt*dx2dt
    y1 = y1 + dt*dy1dt
    y2 = y2 + dt*dy2dt
    vx1 = vx1 + dt*dvx1dt
    vx2 = vx2 + dt*dvx2dt
    vy1 = vy1 + dt*dvy1dt
    vy2 = vy2 + dt*dvy2dt
    r1 = np.vstack((r1,np.array([x1,y1])))
    r2 = np.vstack((r2,np.array([x2,y2])))

springDampLength = np.sqrt((r1[:,0]-r2[:,0])**2+(r1[:,1]-r2[:,1])**2)

plt.figure(figsize=(8, 4))
plt.plot(t, springDampLength, lw=4)
plt.xlabel('t(s)')
plt.ylabel('Spring length (m)')
plt.show()

plt.figure(figsize=(8, 4))
plt.plot(r1[:,0], r1[:,1], 'r.', lw=4)
plt.plot(r2[:,0], r2[:,1], 'b.', lw=4)
plt.plot((m1*r1[:,0]+m2*r2[:,0])/(m1+m2), (m1*r1[:,1]+m2*r2[:,1])/(m1+m2),'g.')
plt.xlim(-2,2)
plt.ylim(-2,2)
plt.xlabel('x(m)')
plt.ylabel('y(m)')
plt.title('Masses position')
plt.legend(('Mass 1','Mass 2','Masses center of mass'))
plt.show()
```

### 9. Simple muscle model

The diagram below shows a simple muscle model. The spring on the left represents the tendinous tissues and the spring on the right represents the elastic properties of the muscle fibers. The damping is present to model the viscous properties of the muscle fibers, the element CE is the contractile element (force production) and the mass $m$ is the muscle mass. The length $L_{MT}$ is the length of the muscle plus the tendon. In our model $L_{MT}$ is constant, but it could be a function of the joint angle.

<figure><center><figcaption><i>Figure. Simple muscle model.</i></figcaption></center></figure>

The length of the tendon will be denoted by $l_t(t)$ and the muscle length by $l_{m}(t)$. Both lengths are related to each other by the following expression:

<span class="notranslate">
\begin{equation}
l_t(t) + l_m(t) = L_{MT}
\end{equation}
</span>

The free-body diagram of the muscle mass is depicted below.

<figure><center><figcaption><i>Figure. FBD of the simple muscle model.</i></figcaption></center></figure>

The resultant force being applied to the muscle mass is:

<span class="notranslate">
$$\vec{\bf{F}} = -k_T(||\vec{\bf{r_m}}||-l_{t_0})\frac{\vec{\bf{r_m}}}{||\vec{\bf{r_m}}||} + b\frac{d(L_{MT}\hat{\bf{i}} - \vec{\bf{r_{m}}})}{dt} + k_m (||L_{MT}\hat{\bf{i}} - \vec{\bf{r_{m}}}||-l_{m_0})\frac{L_{MT}\hat{\bf{i}} - \vec{\bf{r_{m}}}}{||L_{MT}\hat{\bf{i}} - \vec{\bf{r_{m}}}||} +\vec{\bf{F}}{\bf{_{CE}}}(t)$$
</span>

where $\vec{\bf{r_m}}$ is the muscle mass position. Since the model is unidimensional, we can assume that the force $\vec{\bf{F}}\bf{_{CE}}(t)$ is in the x direction, so the analysis will be done only in this direction.
<span class="notranslate">
$$F = -k_T(l_t-l_{t_0}) + b\frac{d(L_{MT} - l_t)}{dt} + k_m (l_m-l_{m_0}) + F_{CE}(t) \\
F = -k_T(l_t-l_{t_0}) -b\frac{dl_t}{dt} + k_m (L_{MT}-l_t-l_{m_0}) + F_{CE}(t) \\
F = -b\frac{dl_t}{dt}-(k_T+k_m)l_t+F_{CE}(t)+k_Tl_{t_0}+k_m(L_{MT}-l_{m_0})$$
</span>

Applying Newton's second law:

<span class="notranslate">
$$m\frac{d^2l_t}{dt^2} = -b\frac{dl_t}{dt}-(k_T+k_m)l_t+F_{CE}(t)+k_Tl_{t_0}+k_m(L_{MT}-l_{m_0})$$
</span>

To solve this equation numerically, we must break it into two first-order differential equations:

<span class="notranslate">
\begin{equation}
\frac{dv_t}{dt} = - \frac{b}{m}v_t - \frac{k_T+k_m}{m}l_t +\frac{F_{CE}(t)}{m} + \frac{k_T}{m}l_{t_0}+\frac{k_m}{m}(L_{MT}-l_{m_0})
\end{equation}
</span>

<span class="notranslate">
\begin{equation}
\frac{d l_t}{dt} = v_t
\end{equation}
</span>

Now, we can solve these equations using some numerical method. To obtain the solution, we will use a muscle damping factor of $b = 10\,Ns/m$, a muscle mass of $m = 2\,kg$, a tendon stiffness of $k_T=1000\,N/m$ and a muscle elastic element stiffness of $k_m=1500\,N/m$. The muscle-tendon length is $L_{MT} = 0.35\,m$, the tendon equilibrium length is $l_{t_0} = 0.28\,m$ and the muscle fiber equilibrium length is $l_{m_0} = 0.07\,m$. Both the tendon and the muscle fiber start at their equilibrium lengths and at rest. Also, we will model the force of the contractile element as a Heaviside step of $90\,N$ (90 N beginning at $t=0$), but normally it is modeled as a function of $l_m$ and $v_m$ with a neural activation signal as input.

```python
m = 2
b = 10
km = 1500
kt = 1000
lt0 = 0.28
lm0 = 0.07
Lmt = 0.35
vt0 = 0
dt = 0.0001
t = np.arange(0, 10, dt)
Fce = 90
lt = lt0
vt = vt0
ltp = np.array([lt0])
lmp = np.array([lm0])
Ft = np.array([0])
# explicit (forward) Euler integration of the two first-order equations
for i in range(1,len(t)):
    dvtdt = -b/m*vt-(kt+km)/m*lt + Fce/m + kt/m*lt0 +km/m*(Lmt-lm0)
    dltdt = vt
    vt = vt + dt*dvtdt
    lt = lt + dt*dltdt
    Ft = np.vstack((Ft,np.array(kt*(lt-lt0))))
    ltp = np.vstack((ltp,np.array(lt)))
    lmp = np.vstack((lmp,np.array(Lmt - lt)))

plt.figure(figsize=(8, 4))
plt.plot(t, Ft, lw=4)
plt.xlabel('t(s)')
plt.ylabel('Tendon force (N)')
plt.show()

plt.figure(figsize=(8, 4))
plt.plot(t, ltp, lw=4)
plt.plot(t, lmp, lw=4)
plt.xlabel('t(s)')
plt.ylabel('Length (m)')
plt.legend(('Tendon length', 'Muscle fiber length'))
plt.show()
```

## Problems

1. Solve the problems 2.3.9, 2.3.20, 11.1.6, 13.1.6 (a, b, c, d, f), 13.1.7, 13.1.10 (a, b) from Ruina and Pratap's book.

## References

- Ruina A, Pratap R (2019) [Introduction to Statics and Dynamics](http://ruina.tam.cornell.edu/Book/index.html). Oxford University Press.
- Nigg & Herzog (2006) [Biomechanics of the Musculo-skeletal System](https://books.google.com.br/books?id=hOIeAQAAIAAJ&dq=editions:ISBN0470017678). 3rd Edition. Wiley.
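A side note on the numerical solutions above (an illustrative alternative, not part of the original text): all of the hand-rolled Euler loops in this notebook can also be handed to SciPy's `solve_ivp`, which uses adaptive step-size control. A minimal sketch for the mass-spring system of example 4:

```python
# Same oscillator as example 4, integrated with scipy.integrate.solve_ivp
import numpy as np
from scipy.integrate import solve_ivp

k, m, l0 = 40.0, 2.0, 0.5          # parameters from example 4

def rhs(t, s):                     # state s = [x, vx]
    x, vx = s
    return [vx, -k/m*(x - l0)]

sol = solve_ivp(rhs, (0.0, 3.0), [0.8, 0.0], t_eval=np.linspace(0, 3, 301))
print(sol.y[0, :5])                # x(t) samples, comparable to the Euler result
```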
03e2fc952088163e07b0e5c3699b37f371fd60a4
201,578
ipynb
Jupyter Notebook
notebooks/FBDParticles.ipynb
e-moncao-lima/BMC
98c3abbf89e630d64b695b535b0be4ddc8b2724b
[ "CC-BY-4.0" ]
null
null
null
notebooks/FBDParticles.ipynb
e-moncao-lima/BMC
98c3abbf89e630d64b695b535b0be4ddc8b2724b
[ "CC-BY-4.0" ]
null
null
null
notebooks/FBDParticles.ipynb
e-moncao-lima/BMC
98c3abbf89e630d64b695b535b0be4ddc8b2724b
[ "CC-BY-4.0" ]
1
2018-10-13T17:35:16.000Z
2018-10-13T17:35:16.000Z
149.760773
33,965
0.845559
true
13,561
Qwen/Qwen-72B
1. YES 2. YES
0.72487
0.743168
0.5387
__label__eng_Latn
0.739534
0.089911
```python
# -*- coding: utf-8 -*-
"""
Created on Thu Sep 16 20:23:07 2021

@author: gansa001
"""
from sympy import *
from sympy.plotting import plot
import matplotlib.pyplot as plt
import numpy as np
import math
```

### 1. Write a computer program to calculate the Lagrange interpolation polynomial Pn(x) to f(x) such that:

$P_n(x_j) = f(x_j), \quad j = 0, 1, 2, \ldots, n$

```python
def delta(j,a,x,N):
    '''
    Parameters
    ----------
    j : int
        the index of the jth Lagrange basis polynomial
    a : sympy symbol
        the symbolic variable of the Lagrange polynomial
    x : list
        x values from the original function
    N : int
        the number of interpolation points

    Returns
    -------
    The jth Lagrange basis polynomial
    '''
    num=1
    den=1
    for i in range(N):
        if i!=j:
            num*=(a-x[i])
            den*=(x[j]-x[i])
    return (num/den)

def Lagrange(x,y):
    '''
    Parameters
    ----------
    x : list
        independent variable values
    y : list
        dependent variable values

    Returns
    -------
    f: The Lagrangian approximation of the function
    '''
    N=len(x)
    a=Symbol('a')
    f=0
    for j in range(len(x)):
        answer= (delta(j,a,x,N))
        f+=y[j]*answer
    return (simplify(f))
```

#### a) Apply it to some elementary functions or discrete data sets.

```python
'''
Exponential function from range -1 to 1 and 3 points to confirm the values from class.
'''
x=[-1,0,1]
y=[math.exp(i) for i in x]
print('x = ',x)
print('y = ',y)
```

    x =  [-1, 0, 1]
    y =  [0.36787944117144233, 1.0, 2.718281828459045]

```python
##function call
Lagrange(x,y)
```

$\displaystyle 0.543080634815244 a^{2} + 1.1752011936438 a + 1.0$

```python
##plotting
a=Symbol('a')
p=plot(Lagrange(x,y), (E**a),(a,-1,1),show=False)
p[1].line_color='r'
p[0].label='Pn(x)'
p[1].label='f(x)'
p.legend=True
p.show()
```

```python
'''
Quartic function from range -1 to 1 and 3 points.
'''
x=[-1,0,1]
y=[i**4 for i in x]
print('x = ',x)
print('y = ',y)
```

    x =  [-1, 0, 1]
    y =  [1, 0, 1]

```python
Lagrange(x,y)
```

$\displaystyle a^{2}$

```python
p=plot(Lagrange(x,y), (a**4),(a,-1,1),show=False)
p[1].line_color='r'
p[0].label='Pn(x)'
p[1].label='f(x)'
p.legend=True
p.show()
```

#### b) Apply it to the function
$$
f(x) = \frac{1}{1+25x^2}
$$
in the interval [-1,1] with equally spaced interpolation points and n=8, 16, 32. Plot Pn(x) and f(x) in the [-1,1] interval to see the interpolation error.

```python
n=[8,16,32]
for npoints in n:
    print('********** NUMBER OF POINTS = ',npoints,' **********')
    print()
    x=list(np.linspace(-1,1,npoints))
    y=[(1/(1+25*(i**2))) for i in x]
    print('Lagrange= ',Lagrange(x,y))
    p=plot(Lagrange(x,y), (1/(1+25*(a**2))),(a,-1,1),show=False)
    p[1].line_color='r'
    p[0].label='Pn(x)'
    p[1].label='f(x)'
    p.legend=True
    p.show()
    print('\n\n')
```

#### c) Do the same as in b, but use the roots of the Chebyshev polynomial Tn+1(x) as interpolation points. Compare results with those obtained in (b).
```python
for npoints in n:
    print('********** NUMBER OF POINTS = ',npoints,' **********')
    x = [math.cos(((math.pi/2)*(2*j+1))/npoints) for j in range(npoints)]
    print('Chebyshev roots= ',x)
    print()
    y=[(1/(1+25*(i**2))) for i in x]
    print('Lagrange= ',Lagrange(x,y))
    p=plot(Lagrange(x,y), (1/(1+25*(a**2))),(a,-1,1),show=False)
    p[1].line_color='r'
    p[0].label='Pn(x)'
    p[1].label='f(x)'
    p.legend=True
    p.show()
    print('\n\n')
```

### Comparing errors

```python
for npoints in n:
    print('********** NUMBER OF POINTS = ',npoints,' **********')
    x = [math.cos(((math.pi/2)*(2*j+1))/npoints) for j in range(npoints)]
    print('Chebyshev roots= ',x)
    print()
    x1=list(np.linspace(-1,1,npoints))
    y=[(1/(1+25*(i**2))) for i in x]    # f sampled at the Chebyshev nodes
    y1=[(1/(1+25*(i**2))) for i in x1]  # f sampled at the equally spaced nodes
    print('Lagrange= ',Lagrange(x,y))
    p=plot(Lagrange(x,y),Lagrange(x1,y1), (1/(1+25*(a**2))),(a,-1,1),show=False)
    p[1].line_color='r'
    p[2].line_color='m'
    p[1].label='Pn(x)'
    p[0].label='Tn(x)'
    p[2].label='f(x)'
    p.legend=True
    p.show()
    print('\n\n')
```

# The plots above show that using the Chebyshev roots provides a much smaller error when doing Lagrangian interpolation than using equally spaced points in a given range.
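To put a number on that visual comparison, here is a short, purely illustrative addition (it reuses the notebook's own `Lagrange` function and the symbol `a` defined above): evaluate both interpolants on a dense grid and print the maximum absolute error.

```python
# Quantify the interpolation error for n = 16 nodes of each type.
npoints = 16
xe = list(np.linspace(-1, 1, npoints))                                   # equally spaced nodes
xc = [math.cos(((math.pi/2)*(2*j+1))/npoints) for j in range(npoints)]   # Chebyshev nodes
Pe = lambdify(a, Lagrange(xe, [1/(1+25*i**2) for i in xe]))
Pc = lambdify(a, Lagrange(xc, [1/(1+25*i**2) for i in xc]))
grid = np.linspace(-1, 1, 1001)
f = 1/(1+25*grid**2)
print('max error, equally spaced:', np.abs(Pe(grid) - f).max())
print('max error, Chebyshev     :', np.abs(Pc(grid) - f).max())
```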
0cc319a6271dbe222c3b607104b7c90822e65a1b
285,396
ipynb
Jupyter Notebook
Lagrange-Chebyshev Interpolation Error.ipynb
GJAnsah/Lagrangian
8619f905fff0943242e3069404f49d55d8ca3f5a
[ "MIT" ]
null
null
null
Lagrange-Chebyshev Interpolation Error.ipynb
GJAnsah/Lagrangian
8619f905fff0943242e3069404f49d55d8ca3f5a
[ "MIT" ]
null
null
null
Lagrange-Chebyshev Interpolation Error.ipynb
GJAnsah/Lagrangian
8619f905fff0943242e3069404f49d55d8ca3f5a
[ "MIT" ]
null
null
null
448.735849
33,720
0.935644
true
1,479
Qwen/Qwen-72B
1. YES 2. YES
0.899121
0.875787
0.787439
__label__eng_Latn
0.675676
0.667817
# 05 The Closed-Shell CCSD energy

The coupled cluster model provides a higher level of accuracy beyond the MP2 approach. The purpose of this project is to understand the fundamental aspects of the calculation of the CCSD (coupled cluster singles and doubles) energy.

The reference for this project is [Hirata, ..., Bartlett, JCP 2004](https://dx.doi.org/10.1063/1.1637577) (though the notations are different, and this project does not discuss extended systems), and the PySCF code ([rintermediates.py](https://github.com/pyscf/pyscf/blob/master/pyscf/cc/rintermediates.py), [rccsd.py](https://github.com/pyscf/pyscf/blob/master/pyscf/cc/rccsd.py)); the dimension convention is similar to the PySCF implementation.

This project will use spatial orbitals (like the previous projects, [SCF](../Project_03/Project_03.ipynb) and [MP2](../Project_04/Project_04.ipynb) energy calculation), instead of the more computationally costly spin orbitals. However, the latter approach is also applicable in situations of unrestricted or restricted open-shell references. We will discuss the spin orbital approach in later projects.

This project could be a challenging one, and the coding is quite intensive. Transforming tensor contraction formulas into code is the gist of this project. We will make extensive use of `numpy.einsum` here.

```python
# Following os.chdir code is only for thebe (live code), since only in thebe default directory is /home/jovyan
import os
if os.getcwd().split("/")[-1] != "Project_05":
    os.chdir("source/Project_05")
from solution_05 import Molecule as SolMol

from pyscf import gto
import numpy as np
import scipy.linalg  # used by the SCF helper methods of the Molecule class below
from typing import Tuple
import pickle

np.set_printoptions(precision=7, linewidth=120, suppress=True)
```

```python
# Solution mol only uses PySCF approach
sol_mole = SolMol()
sol_mole.construct_from_dat_file("input/h2o/STO-3G/geom.dat")
sol_mole.obtain_mol_instance(basis="STO-3G")
sol_mole.obtain_nao()
sol_mole.obtain_nocc()
sol_mole.obtain_eri_ao()
```

## Molecule Object Initialization

In this project, we use a further updated Molecule initialization. Since these are technical details, we toggle the code and explanation below; however, taking a look at those instructions could be essential. Most of the method functions are illustrated in [Project 01](../Project_01/Project_01.ipynb), [Project 03](../Project_03/Project_03.ipynb) and [Project 04](../Project_04/Project_04.ipynb). Note that since the details of the SCF iterations are no longer required, we can make some simplifications to the SCF code.
```python class Molecule: def __init__(self): # Project 03 existed self.atom_charges = NotImplemented # type: np.ndarray self.atom_coords = NotImplemented # type: np.ndarray self.natm = NotImplemented # type: int self.mol = NotImplemented # type: gto.Mole self.nao = NotImplemented # type: int self.charge = 0 # type: int self.nocc = NotImplemented # type: int # Project 04 added self.mo_coeff = NotImplemented # type: np.ndarray self.mo_energy = NotImplemented # type: np.ndarray self.eri_ao = NotImplemented # type: np.ndarray self.eri_mo = NotImplemented # type: np.ndarray self.energy_rhf = NotImplemented # type: np.ndarray self.energy_corr = NotImplemented # type: np.ndarray # Project 05 added self.w = NotImplemented # type: np.ndarray self.w_ovoo = NotImplemented # type: np.ndarray self.w_ovvo = NotImplemented # type: np.ndarray self.w_ovov = NotImplemented # type: np.ndarray self.w_ovvv = NotImplemented # type: np.ndarray self.v_oooo = NotImplemented # type: np.ndarray self.v_ovoo = NotImplemented # type: np.ndarray self.v_ovvo = NotImplemented # type: np.ndarray self.v_ovov = NotImplemented # type: np.ndarray self.v_oovv = NotImplemented # type: np.ndarray self.v_ovvv = NotImplemented # type: np.ndarray self.v_vvvv = NotImplemented # type: np.ndarray def construct_from_dat_file(self, file_path: str): # Same to Project 01 with open(file_path, "r") as f: dat = np.array([line.split() for line in f.readlines()][1:]) self.atom_charges = np.array(dat[:, 0], dtype=float).astype(int) self.atom_coords = np.array(dat[:, 1:4], dtype=float) self.natm = self.atom_charges.shape[0] def obtain_mol_instance(self, basis: str, verbose=0): # Same to Project 03 mol = gto.Mole() mol.unit = "Bohr" mol.atom = "\n".join([("{:3d} " + " ".join(["{:25.18f}"] * 3)).format(chg, *coord) for chg, coord in zip(self.atom_charges, self.atom_coords)]) mol.basis = basis mol.charge = self.charge mol.spin = 0 mol.verbose = verbose self.mol = mol.build() def obtain_nao(self): self.nao = self.mol.nao_nr() def obtain_nocc(self): assert (self.atom_charges.sum() - self.charge) % 2 == 0 self.nocc = (self.atom_charges.sum() - self.charge) // 2 def obtain_eri_ao(self): self.eri_ao = self.mol.intor("int2e") def get_hcore(self) -> np.ndarray: return self.mol.intor("int1e_kin") + self.mol.intor("int1e_nuc") def get_fock(self, dm: np.ndarray) -> np.ndarray: return self.get_hcore() + (self.eri_ao * dm).sum(axis=(-1, -2)) - 0.5 * (self.eri_ao * dm[:, None, :]).sum(axis=(-1, -3)) def make_rdm1(self, coeff: np.ndarray) -> np.ndarray: return 2 * coeff[:, :self.nocc] @ coeff[:, :self.nocc].T def eng_total(self, dm: np.ndarray) -> float: return (0.5 * (self.get_hcore() + self.get_fock(dm)) * dm).sum() + self.mol.energy_nuc() def scf_process(self, dm_guess: np.ndarray=None) -> Tuple[float, np.ndarray]: eng, dm = 0., np.zeros((self.nao, self.nao)) if dm_guess is None else np.copy(dm_guess) max_iter, thresh_eng, thresh_dm = 64, 1e-10, 1e-8 for epoch in range(max_iter): eng_next, dm_next = self.eng_total(dm), self.make_rdm1(scipy.linalg.eigh(self.get_fock(dm), self.mol.intor("int1e_ovlp"))[1]) if np.abs(eng_next - eng) < thresh_eng and np.linalg.norm(dm_next - dm) < thresh_dm: eng, dm = eng_next, dm_next break eng, dm = eng_next, dm_next return eng, dm def obtain_scf_intermediates(self, dm_guess: np.ndarray=None): eng, dm = self.scf_process(dm_guess) self.energy_rhf = eng self.mo_energy, self.mo_coeff = scipy.linalg.eigh(self.get_fock(dm), self.mol.intor("int1e_ovlp")) def get_eri_mo_einsum(self): return np.einsum("uvkl, up, vq, kr, ls -> pqrs", 
self.eri_ao, self.mo_coeff, self.mo_coeff, self.mo_coeff, self.mo_coeff, optimize=True)

    def obtain_eri_mo(self):
        self.eri_mo = self.get_eri_mo_einsum()
```

````{toggle}

Lots of attributes like `w_ovov` are introduced in this project. We will illustrate the meaning of these attributes later.

This project is hard to get right, and numerical problems can occur. By numerical problems we mean that, for systems with degenerate or non-degenerate molecular orbital energies alike, $\mathscr{C}_{\mu \mathscr{p}} = - C_{\mu p}$ is also a valid set of molecular orbital coefficients, but it will cause $(\mathscr{p} q | rs) \neq (pq|rs)$. Although the final results (SCF, MP2, CCSD) should be left unchanged, calculation intermediates could be different and hard to reproduce. Thus, in this project, we may use pre-computed water/STO-3G results to check whether the calculation intermediates are correctly implemented.

- `mo_coeff`: pre-computed $C_{\mu p}$ molecular orbital coefficients
- `mo_energy`: pre-computed $\varepsilon_p$ molecular orbital energies
- `t1_precomput`: pre-computed $t_i^a$ CCSD single excitation amplitude
- `t2_precomput`: pre-computed $t_{ij}^{ab}$ CCSD double excitation amplitude

You may wish to run the following `Molecule` initialization code after running the code-cell below:

```python
mole = Molecule()
mole.construct_from_dat_file("input/h2o/STO-3G/geom.dat")
mole.obtain_mol_instance(basis="STO-3G")
mole.obtain_nao()
mole.obtain_nocc()
mole.obtain_eri_ao()
mole.mo_coeff = mo_coeff
mole.mo_energy = mo_energy
mole.obtain_eri_mo()
```

````

```python
with open("demo_data_h2o_sto3g.dat", "rb") as f:
    d = pickle.load(f)
mo_coeff = d["mo_coeff"]
mo_energy = d["mo_energy"]
t1_precomput, t2_precomput = d["t1"], d["t2"]

sol_mole.mo_coeff = mo_coeff
sol_mole.mo_energy = mo_energy
sol_mole.obtain_eri_mo()
```

## Step 1: ERI and Its Biorthogonal Form

In closed-shell CCSD prototype programming, the term *electron repulsion integral* (ERI) can be ambiguous. As in previous projects, we use

$$
v^{pq}_{rs} = (pr|qs)
$$

to denote electron repulsion integrals (ERI). Here we define the *biorthogonal* electron repulsion integral (biorthogonal ERI) as (Hirata, eq 23)

$$
w^{pq}_{rs} = 2 v^{pq}_{rs} - v^{pq}_{sr}
$$

The biorthogonal basis is a concept from the closed-shell CCSD derivation which makes formula expressions, as well as the program implementation, more concise. We do not discuss this concept in detail.

````{admonition} Dimension convention of ERI
:class: dropdown

The dimension convention for these tensors is the same as `Molecule.eri_mo`, i.e.

- `Molecule.eri_mo` $v^{pq}_{rs}$, dim: $(p, r, q, s)$
- `Molecule.w` $w^{pq}_{rs}$, dim: $(p, r, q, s)$

Take caution when transposing those tensors.

````

### Implementation

```python
def obtain_w(mole: Molecule):
    # Attribute Modification: `w` biorthogonal electron repulsion integral
    raise NotImplementedError("Exactly 1 line of code")

Molecule.obtain_w = obtain_w
```

### Solution

```python
sol_mole.obtain_w()
sol_mole.w[3, 5]
```

    array([[ 0.5311873,  0.0277178,  0.       ,  0.0183313, -0.       , -0.0017709,  0.       ],
           [ 0.0253682,  0.218575 , -0.       ,  0.1602001,  0.       , -0.060252 , -0.       ],
           [ 0.       , -0.       ,  0.1339514,  0.       ,  0.       , -0.       ,  0.1456495],
           [ 0.0012042,  0.0477327, -0.       ,  0.1319441, -0.       ,  0.0955699,  0.       ],
           [-0.       ,  0.       ,  0.       , -0.       ,  0.2900729,  0.       , -0.       ],
           [-0.0215503, -0.0719668, -0.       , -0.352678 ,  0.       ,  0.0524259, -0.       ],
           [ 0.       , -0.       ,  0.1382612,  0.       , -0.       ,  0.       ,  0.1620096]])

## Step 2: ERI Slices

We will use various kinds of ERI slices when programming closed-shell CCSD. It could be convenient to pre-store those slices.
To store these slices, one may use class attributes, class properties, a dictionary, or generate slices on-the-fly with a convenience function. In this project, we use class attributes to pre-store slices, which is the most intuitive option.

````{admonition} Variable naming convention
:class: dropdown

The most commonly used ERI slice is $v^{ij}_{ab}$. This slice is named `v_ovov`, since its dimension is $(i, a, j, b)$, with $i, j$ occupied orbitals and $a, b$ virtual orbitals.

```python
mole.v_ovov = mole.eri_mo[:mole.nocc, mole.nocc:, :mole.nocc, mole.nocc:]
```

For $w^{ij}_{ak}$, this slice is named `w_ovoo`, since its dimension is $(i, a, j, k)$, with $i, j, k$ occupied orbitals and $a$ a virtual orbital.

```python
mole.w_ovoo = mole.w[:mole.nocc, mole.nocc:, :mole.nocc, :mole.nocc]
```

````

````{admonition} Slice object in Python
:class: dropdown

The code above could be somewhat unclear, since `mole.nocc:` looks and codes very similarly to `:mole.nocc`. To mitigate this issue, we could use slice objects in Python.

```python
so, sv = slice(0, mole.nocc), slice(mole.nocc, mole.nao)
```

Then `so` works similarly to `0:mole.nocc`, i.e. the occupied orbital slice; and `sv` works similarly to `mole.nocc:mole.nao`, i.e. the virtual orbital slice. Note that $n_\mathrm{AO}$ `mole.nao` is the same as $n_\mathrm{MO}$ in the cases of this project. So the code for `mole.v_ovov` could be

```python
mole.v_ovov = mole.eri_mo[so, sv, so, sv]
```

````

### Implementation

```python
def obtain_wv_slices(mole: Molecule):
    # Attribute Modification: Various (biorthogonal) ERI slices
    nocc, nmo = mole.nocc, mole.nao
    so, sv = slice(0, nocc), slice(nocc, nmo)
    mole.w_ovoo = mole.w[so, sv, so, so]
    mole.w_ovvo = ____  # Fill this line
    mole.w_ovov = ____  # Fill this line
    mole.w_ovvv = ____  # Fill this line
    mole.v_oooo = ____  # Fill this line
    mole.v_ovoo = ____  # Fill this line
    mole.v_ovvo = ____  # Fill this line
    mole.v_ovov = mole.eri_mo[so, sv, so, sv]
    mole.v_oovv = ____  # Fill this line
    mole.v_ovvv = ____  # Fill this line
    mole.v_vvvv = ____  # Fill this line

Molecule.obtain_wv_slices = obtain_wv_slices
```

### Solution

```python
sol_mole.obtain_wv_slices()
print(sol_mole.v_ovvo.shape)
print(sol_mole.v_ovvo[3, 1])
```

    (5, 2, 2, 5)
    [[ 0.        -0.         0.0541371  0.        -0.       ]
     [-0.0112477  0.0171886  0.         0.0709527  0.       ]]

## Step 3: $\boldsymbol{\mathscr{F}}$ Matrices

Steps 3 to 5 implement the CCSD calculation intermediates. In this step, we will calculate the $\boldsymbol{\mathscr{F}}$ matrices (Hirata, eq 37-39; note that our reference state is canonical RHF, so the off-diagonal occupied-virtual block of the Fock matrix $F_{ia} = 0$).

$$
\begin{align}
\mathscr{F}^k_i &= f^k_i + \sum_{lcd} w^{kl}_{cd} t_{il}^{cd} + \sum_{lcd} w^{kl}_{cd} t_i^c t_l^d \\
\mathscr{F}^a_c &= f^a_c - \sum_{kld} w^{kl}_{cd} t_{kl}^{ad} - \sum_{kld} w^{kl}_{cd} t_k^a t_l^d \\
\mathscr{F}^k_c &= \sum_{ld} w^{kl}_{cd} t_l^d
\end{align}
$$

where $f^p_q$ is the same as the molecular orbital basis Fock matrix $f^p_q = F_{pq} = \delta_{pq} \varepsilon_p$. The dimension convention of $\mathscr{F}^k_i$ is $(k, i)$. The same applies to $\mathscr{F}^a_c$ and $\mathscr{F}^k_c$.

````{admonition} Dimension convention of CCSD amplitudes
:class: dropdown

The dimension convention of the CCSD single excitation amplitude $t_i^a$ is $(i, a)$:

```python
>>> t1_precomput.shape
(5, 2)
```

The dimension of the CCSD double excitation amplitude $t_{ij}^{ab}$ is $(i, j, a, b)$:

```python
>>> t2_precomput.shape
(5, 5, 2, 2)
```

This is quite different from the convention for ERIs, so take caution when dealing with those tensors!
## Step 3: $\boldsymbol{\mathscr{F}}$ Matrices

Steps 3 to 5 implement the CCSD calculation intermediates. In this step, we will calculate the $\boldsymbol{\mathscr{F}}$ matrices (Hirata, eq 37-39; note that our reference state is canonical RHF, so the off-diagonal occupied-virtual block of the Fock matrix vanishes, $F_{ia} = 0$).

$$
\begin{align}
\mathscr{F}^k_i &= f^k_i + \sum_{lcd} w^{kl}_{cd} t_{il}^{cd} + \sum_{lcd} w^{kl}_{cd} t_i^c t_l^d \\
\mathscr{F}^a_c &= f^a_c - \sum_{kld} w^{kl}_{cd} t_{kl}^{ad} - \sum_{kld} w^{kl}_{cd} t_k^a t_l^d \\
\mathscr{F}^k_c &= \sum_{ld} w^{kl}_{cd} t_l^d
\end{align}
$$

Where $f^p_q$ is the same as the molecular orbital basis Fock matrix, $f^p_q = F_{pq} = \delta_{pq} \varepsilon_p$. The dimension convention of $\mathscr{F}^k_i$ is $(k, i)$; same for $\mathscr{F}^a_c$ and $\mathscr{F}^k_c$.

````{admonition} Dimension convention of CCSD amplitudes
:class: dropdown

The dimension convention of the CCSD single excitation amplitude $t_i^a$ is $(i, a)$:

```python
>>> t1_precomput.shape
(5, 2)
```

The dimension of the CCSD double excitation amplitude $t_{ij}^{ab}$ is $(i, j, a, b)$:

```python
>>> t2_precomput.shape
(5, 5, 2, 2)
```

This is quite different from the convention for ERIs. So take caution when dealing with those tensors!
````

### Implementation

```python
def cc_Foo(mole: Molecule, t1: np.ndarray, t2: np.ndarray) -> np.ndarray:
    # Ref: Hirata, eq 37, generate F^k_i
    # This is a reference implementation. There is no need to change this code.
    Fki = np.diag(mole.mo_energy[:mole.nocc])
    Fki += np.einsum("kcld, ilcd -> ki", mole.w_ovov, t2, optimize=True)
    Fki += np.einsum("kcld, ic, ld -> ki", mole.w_ovov, t1, t1, optimize=True)
    return Fki

Molecule.cc_Foo = cc_Foo
```

```python
def cc_Fvv(mole: Molecule, t1: np.ndarray, t2: np.ndarray) -> np.ndarray:
    # Ref: Hirata, eq 38
    raise NotImplementedError("No more than 4 lines of code")

Molecule.cc_Fvv = cc_Fvv
```

```python
def cc_Fov(mole: Molecule, t1: np.ndarray, t2: np.ndarray) -> np.ndarray:
    # Ref: Hirata, eq 39
    # Note that amplitude t2 is actually not taken into account,
    # but for signature consistency, we still include this unused amplitude
    raise NotImplementedError("Exactly 1 line of code")

Molecule.cc_Fov = cc_Fov
```

### Solution

```python
sol_mole.cc_Foo(t1_precomput, t2_precomput)
```

array([[-20.2629403,   0.0004753,  -0.       ,   0.0008499,   0.       ],
       [  0.0000935,  -1.2196514,  -0.       ,   0.014577 ,   0.       ],
       [ -0.       ,  -0.       ,  -0.5825172,   0.       ,   0.       ],
       [  0.0000376,   0.0098341,   0.       ,  -0.4614169,  -0.       ],
       [  0.       ,   0.       ,   0.       ,  -0.       ,  -0.3888219]])

```python
sol_mole.cc_Fvv(t1_precomput, t2_precomput)
```

array([[ 0.5124191, -0.       ],
       [-0.       ,  0.624019 ]])

```python
sol_mole.cc_Fov(t1_precomput, t2_precomput)
```

array([[ 0.0001061, -0.       ],
       [ 0.0012685,  0.       ],
       [ 0.       , -0.0032164],
       [-0.0022473, -0.       ],
       [-0.       ,  0.       ]])

## Step 4: $\boldsymbol{\mathscr{L}}$ Matrices

In this step, we will calculate the $\boldsymbol{\mathscr{L}}$ matrices (Hirata, eq 40-41).

$$
\begin{align}
\mathscr{L}^k_i &= \mathscr{F}^k_i + \sum_{lc} w^{lk}_{ci} t_l^c \\
\mathscr{L}^a_c &= \mathscr{F}^a_c + \sum_{kd} w^{ka}_{dc} t_k^d
\end{align}
$$

The dimension convention of $\mathscr{L}^k_i$ is $(k, i)$; same for $\mathscr{L}^a_c$.

### Implementation

```python
def Loo(mole: Molecule, t1: np.ndarray, t2: np.ndarray) -> np.ndarray:
    # Ref: Hirata, eq 40
    # This is a reference implementation. There is no need to change this code.
    L_ki = mole.cc_Foo(t1, t2)
    L_ki += np.einsum("lcki, lc -> ki", mole.w_ovoo, t1, optimize=True)
    return L_ki

Molecule.Loo = Loo
```

```python
def Lvv(mole: Molecule, t1: np.ndarray, t2: np.ndarray) -> np.ndarray:
    # Ref: Hirata, eq 41
    raise NotImplementedError("About 3 lines of code")

Molecule.Lvv = Lvv
```

### Solution

```python
sol_mole.Loo(t1_precomput, t2_precomput)
```

array([[-20.2746448,  -0.0001825,  -0.       ,   0.000496 ,   0.       ],
       [ -0.0005209,  -1.2246573,  -0.       ,   0.0106743,   0.       ],
       [ -0.       ,  -0.       ,  -0.5850956,   0.       ,   0.       ],
       [  0.0001375,   0.0087952,   0.       ,  -0.4644571,  -0.       ],
       [  0.       ,   0.       ,   0.       ,  -0.       ,  -0.3953228]])

```python
sol_mole.Lvv(t1_precomput, t2_precomput)
```

array([[ 0.5111931, -0.       ],
       [-0.       ,  0.6206725]])
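For reference, here are possible completions of the `cc_Fvv`, `cc_Fov` and `Lvv` blanks above. These are sketches that follow the einsum label conventions of the reference implementations `cc_Foo` and `Loo`; the `_sketch` names are ours, and the exact lines are not necessarily the author's intended solution:

```python
def cc_Fvv_sketch(mole: Molecule, t1: np.ndarray, t2: np.ndarray) -> np.ndarray:
    # Hirata, eq 38: F^a_c = f^a_c - w^{kl}_{cd} t_{kl}^{ad} - w^{kl}_{cd} t_k^a t_l^d
    Fac = np.diag(mole.mo_energy[mole.nocc:])
    Fac -= np.einsum("kcld, klad -> ac", mole.w_ovov, t2, optimize=True)
    Fac -= np.einsum("kcld, ka, ld -> ac", mole.w_ovov, t1, t1, optimize=True)
    return Fac

def cc_Fov_sketch(mole: Molecule, t1: np.ndarray, t2: np.ndarray) -> np.ndarray:
    # Hirata, eq 39: F^k_c = w^{kl}_{cd} t_l^d  (t2 is unused, kept for signature consistency)
    return np.einsum("kcld, ld -> kc", mole.w_ovov, t1, optimize=True)

def Lvv_sketch(mole: Molecule, t1: np.ndarray, t2: np.ndarray) -> np.ndarray:
    # Hirata, eq 41: L^a_c = F^a_c + w^{ka}_{dc} t_k^d, with w_ovvv stored as (k, d, a, c)
    L_ac = mole.cc_Fvv(t1, t2)
    L_ac += np.einsum("kdac, kd -> ac", mole.w_ovvv, t1, optimize=True)
    return L_ac
```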
## Step 5: $\boldsymbol{\mathscr{W}}$ Tensors

In this step, we will calculate the $\boldsymbol{\mathscr{W}}$ tensors (Hirata, eq 42-45).

$$
\begin{align}
\mathscr{W}^{kl}_{ij} &= v^{kl}_{ij} + \sum_c v^{lk}_{ci} t_j^c + \sum_c v^{kl}_{cj} t_i^c + \sum_{cd} v^{kl}_{cd} t_{ij}^{cd} + \sum_{cd} v^{kl}_{cd} t_i^c t_j^d \\
\mathscr{W}^{ab}_{cd} &= v^{ab}_{cd} - \sum_k v^{ka}_{dc} t_k^b - \sum_k v^{kb}_{cd} t_k^a\\
\mathscr{W}^{ak}_{ic} &= v^{ak}_{ic} - \sum_l v^{kl}_{ci} t_l^a + \sum_d v^{ka}_{cd} t_i^d - \frac{1}{2} \sum_{ld} v^{lk}_{dc} t_{il}^{da} - \sum_{ld} v^{lk}_{dc} t_i^d t_l^a + \frac{1}{2} \sum_{ld} w^{lk}_{dc} t_{il}^{ad} \\
\mathscr{W}^{ak}_{ci} &= v^{ak}_{ci} - \sum_l v^{lk}_{ci} t_l^a + \sum_d v^{ka}_{dc} t_i^d - \frac{1}{2} \sum_{ld} v^{lk}_{cd} t_{il}^{da} - \sum_{ld} v^{lk}_{cd} t_i^d t_l^a
\end{align}
$$

The dimension convention of $\mathscr{W}^{kl}_{ij}$ is $(k, l, i, j)$; same for $\mathscr{W}^{ab}_{cd}$, $\mathscr{W}^{ak}_{ic}$, $\mathscr{W}^{ak}_{ci}$.

````{admonition} Einstein summation convention
:class: dropdown

We have learned to use `numpy.einsum`. Actually, the original purpose of this function is to perform *Einstein summation* programmatically. In the [Einstein summation convention](http://en.wikipedia.org/wiki/Einstein_notation), summation signs are dropped. A summation is to be taken over an index whenever it occurs both as a subscript and as a superscript in one term. For example, in

$$
\mathscr{W}^{ak}_{ic} \leftarrow v_{{\color{orange}{d}} c}^{{\color{blue}{l}} k} t_{i {\color{blue}{l}}}^{{\color{orange}{d}} a}
$$

both ${\color{orange}{d}}$ and ${\color{blue}{l}}$ appear as subscripts and as superscripts, so the summation is taken over $d, l$. Furthermore, the superscripts $k, a$ and subscripts $c, i$ remain unsummed, so these indices appear in the term on the LHS. So $\mathscr{W}^{ak}_{ic}$ can be recast in the Einstein summation convention as

$$
\mathscr{W}^{ak}_{ic} = v^{ak}_{ic} - v^{kl}_{ci} t_l^a + v^{ka}_{cd} t_i^d - \frac{1}{2} v^{lk}_{dc} t_{il}^{da} - v^{lk}_{dc} t_i^d t_l^a + \frac{1}{2} w^{lk}_{dc} t_{il}^{ad}
$$

In this documentation, we will mostly not use the Einstein summation convention. However, notation is extremely important when deriving and implementing formulas, and this convention can make equations very concise and clear. You may want to try out this convention in your future research or courses.
````

````{admonition} Hint 1: Use deep copy when necessary
:class: dropdown

A very common **FAULTY** implementation of $\mathscr{W}^{ab}_{cd}$ could be

```python
def cc_Wvvvv(mole: Molecule, t1: np.ndarray, t2: np.ndarray) -> np.ndarray:
    # Ref: Hirata, eq 43
    W_abcd = mole.v_vvvv.transpose((0, 2, 1, 3))  # This line of code is actually WRONG
    W_abcd -= np.einsum("kdac, kb -> abcd", mole.v_ovvv, t1, optimize=True)
    W_abcd -= np.einsum("kcbd, ka -> abcd", mole.v_ovvv, t1, optimize=True)
    return W_abcd
```

This is due to NumPy's broadcasting nature. When taking a NumPy array view (e.g. slicing an array, the kind of operation we discussed in Step 2) or transposing, the data of the array is actually not copied, saving memory space and CPU time. However, it can be risky to use these arrays without double-checking. In the code above, the tensor `mole.eri_mo` will actually be modified: the transpose of `mole.v_vvvv` is a view of the array, so the underlying data of `W_abcd` is shared with `mole.eri_mo`. The subtraction operation `-=` will then directly modify values in `mole.eri_mo`.
For demonstration, we can use the following code to illustrate this point:

```python
>>> arr_base = np.zeros((3, 4), dtype=int)
>>> arr_view = arr_base[:2, 1:4].transpose((0, 1))
>>> arr_view -= 1
>>> arr_base
array([[ 0, -1, -1, -1],
       [ 0, -1, -1, -1],
       [ 0,  0,  0,  0]])
```

So, use `numpy.copy` ([NumPy](https://numpy.org/doc/stable/reference/generated/numpy.copy.html)) to make a *deep copy* of an array (i.e. make a copy of the underlying data, unlike the operation `=`) when necessary, although this costs memory and CPU time.
````

````{admonition} Hint 2: Complicated tensor transpose
:class: dropdown

We have learned how to apply `numpy.transpose` or `numpy.swapaxes` to tensors in simple cases, such as matrix transposition or the generation of $w^{pq}_{rs}$. For those transposes, only two indices are swapped. However, in this step, multiple index swaps may occur. What tuple shall we pass to transpose? That can be puzzling.

For example, we need $\mathscr{W}^{ak}_{ic} \leftarrow v^{ak}_{ic}$, but we don't have a slice `v_voov` $v^{ak}_{ic}$. Recall that the ERI $v^{pq}_{rs}$ is 8-fold symmetric, so `v_voov` can be obtained as a transpose of `v_ovvo` $v^{ic}_{ak}$. So the transpose of `v_ovvo` to `Wvoov` is

$$
v^{ic}_{ak} \mapsto \mathscr{W}^{ak}_{ic}, \quad (i, a, c, k) \mapsto (a, k, i, c), \quad (2, 0, 3, 1) \mapsto (0, 1, 2, 3)
$$

So the first line to generate $\mathscr{W}^{ak}_{ic}$ could be

```python
W_akic = np.copy(mole.v_ovvo.transpose((2, 0, 3, 1)))
```

However, the code above passes the tuple $(2, 0, 3, 1)$ to `numpy.transpose`, which can be a little puzzling. Another, more intuitive way is to utilize `numpy.einsum`:

```python
W_akic = np.copy(np.einsum("iack -> akic", mole.v_ovvo))
```

Note that when transposing, `numpy.einsum` does not deep-copy the underlying array data, so `numpy.copy` is required.
````

### Implementation

```python
def cc_Woooo(mole: Molecule, t1: np.ndarray, t2: np.ndarray) -> np.ndarray:
    # Ref: Hirata, eq 42
    raise NotImplementedError("About 6 lines of code")

Molecule.cc_Woooo = cc_Woooo
```

```python
def cc_Wvvvv(mole: Molecule, t1: np.ndarray, t2: np.ndarray) -> np.ndarray:
    # Ref: Hirata, eq 43
    raise NotImplementedError("About 4 lines of code")

Molecule.cc_Wvvvv = cc_Wvvvv
```

```python
def cc_Wvoov(mole: Molecule, t1: np.ndarray, t2: np.ndarray) -> np.ndarray:
    # Ref: Hirata, eq 44
    # This is a reference implementation. There is no need to change this code.
    W_akic = np.copy(mole.v_ovvo.transpose((2, 0, 3, 1)))
    W_akic -= np.einsum("kcli, la -> akic", mole.v_ovoo, t1, optimize=True)
    W_akic += np.einsum("kcad, id -> akic", mole.v_ovvv, t1, optimize=True)
    W_akic -= 0.5 * np.einsum("ldkc, ilda -> akic", mole.v_ovov, t2, optimize=True)
    W_akic -= np.einsum("ldkc, id, la -> akic", mole.v_ovov, t1, t1, optimize=True)
    W_akic += 0.5 * np.einsum("ldkc, ilad -> akic", mole.w_ovov, t2, optimize=True)
    return W_akic

Molecule.cc_Wvoov = cc_Wvoov
```

```python
def cc_Wvovo(mole: Molecule, t1: np.ndarray, t2: np.ndarray) -> np.ndarray:
    # Ref: Hirata, eq 45
    raise NotImplementedError("About 6 lines of code")

Molecule.cc_Wvovo = cc_Wvovo
```

### Solution

```python
sol_mole.cc_Woooo(t1_precomput, t2_precomput)[1, 3]
```

array([[ 0.0218411,  0.0081018, -0.       ,  0.0113409, -0.       ],
       [ 0.0126575,  0.023576 , -0.       ,  0.6571315, -0.       ],
       [-0.       ,  0.       , -0.0260943, -0.       ,  0.       ],
       [-0.0180776,  0.1264385, -0.       ,  0.0991944, -0.       ],
       [ 0.       , -0.       ,  0.       ,  0.       ,  0.045617 ]])

```python
sol_mole.cc_Wvvvv(t1_precomput, t2_precomput)[0, 1]
```

array([[-0.       ,  0.55115  ],
       [ 0.1111799,  0.       ]])

```python
sol_mole.cc_Wvoov(t1_precomput, t2_precomput)[0, 1]
```

array([[-0.0102725,  0.       ],
       [ 0.0947736,  0.       ],
       [ 0.       , -0.0307268],
       [-0.0575545, -0.       ],
       [-0.       ,  0.       ]])

```python
sol_mole.cc_Wvovo(t1_precomput, t2_precomput)[0, 1]
```

array([[ 0.0074454,  0.5997484,  0.       , -0.0504503, -0.       ],
       [-0.       , -0.       , -0.0874274, -0.       ,  0.       ]])
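Possible completions of the Step 5 blanks follow. These sketches use the same slice and transpose conventions as the reference `cc_Wvoov`; the `_sketch` names and the comments are ours, and the exact lines are not necessarily the author's intended solution:

```python
def cc_Woooo_sketch(mole: Molecule, t1: np.ndarray, t2: np.ndarray) -> np.ndarray:
    # Hirata, eq 42; v_oooo stores v^{kl}_{ij} as (k, i, l, j), so transpose to (k, l, i, j)
    W_klij = np.copy(mole.v_oooo.transpose((0, 2, 1, 3)))
    W_klij += np.einsum("lcki, jc -> klij", mole.v_ovoo, t1, optimize=True)
    W_klij += np.einsum("kclj, ic -> klij", mole.v_ovoo, t1, optimize=True)
    W_klij += np.einsum("kcld, ijcd -> klij", mole.v_ovov, t2, optimize=True)
    W_klij += np.einsum("kcld, ic, jd -> klij", mole.v_ovov, t1, t1, optimize=True)
    return W_klij

def cc_Wvvvv_sketch(mole: Molecule, t1: np.ndarray, t2: np.ndarray) -> np.ndarray:
    # Hirata, eq 43; same as the FAULTY example above, but with the necessary deep copy
    W_abcd = np.copy(mole.v_vvvv.transpose((0, 2, 1, 3)))
    W_abcd -= np.einsum("kdac, kb -> abcd", mole.v_ovvv, t1, optimize=True)
    W_abcd -= np.einsum("kcbd, ka -> abcd", mole.v_ovvv, t1, optimize=True)
    return W_abcd

def cc_Wvovo_sketch(mole: Molecule, t1: np.ndarray, t2: np.ndarray) -> np.ndarray:
    # Hirata, eq 45; by 8-fold symmetry v^{ak}_{ci} = (ac|ki) = (ki|ac), a transpose of v_oovv
    W_akci = np.copy(np.einsum("kiac -> akci", mole.v_oovv))
    W_akci -= np.einsum("lcki, la -> akci", mole.v_ovoo, t1, optimize=True)
    W_akci += np.einsum("kdac, id -> akci", mole.v_ovvv, t1, optimize=True)
    W_akci -= 0.5 * np.einsum("lckd, ilda -> akci", mole.v_ovov, t2, optimize=True)
    W_akci -= np.einsum("lckd, id, la -> akci", mole.v_ovov, t1, t1, optimize=True)
    return W_akci
```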
## Step 6: CCSD Single Excitation Amplitude $t_i^a$

In this step, we will calculate the CCSD single excitation amplitude $t_i^a$ (Hirata, eq 35).

$$
\begin{align}
t_i^a D_i^a &= \sum_c (\mathscr{F}^a_c - f^a_c) t_i^c - \sum_k (\mathscr{F}^k_i - f^k_i) t_k^a + \sum_{kc} \mathscr{F}^k_c (2 t_{ki}^{ca} - t_{ik}^{ca}) + \sum_{kc} \mathscr{F}^k_c t_i^c t_k^a \\
&\quad + \sum_{kc} w^{ak}_{ic} t_k^c + \sum_{kcd} w^{ak}_{cd} t_{ik}^{cd} + \sum_{kcd} w^{ak}_{cd} t_i^c t_k^d - \sum_{klc} w^{kl}_{ic} t_{kl}^{ac} - \sum_{klc} w^{kl}_{ic} t_k^a t_l^c
\end{align}
$$

We use the notation $\delta^p_q$ for the Kronecker delta ($\delta^p_q = \delta_{pq}$), and $f^p_q$ for the Fock matrix element $f^p_q = F_{pq} = \delta_{pq} \varepsilon_p$. We define

$$
D_i^a = \varepsilon_i - \varepsilon_a
$$

````{admonition} Hint 1: Generate $\tilde{\mathscr{F}}^a_c = \mathscr{F}^a_c - f^a_c$
:class: dropdown

Since the Fock matrix is diagonal, $f^a_c = \delta_{ac} \varepsilon_a$. So, in the program, we can generate `Fvvt` $\tilde{\mathscr{F}}^a_c$ as

```python
Fvv = mole.cc_Fvv(t1, t2)
Fvvt = Fvv - np.diag(mole.mo_energy[mole.nocc:])
```

The same process applies to `Foot` $\tilde{\mathscr{F}}^k_i = \mathscr{F}^k_i - f^k_i$. Well, this variable name is not something like `foo`, `bar` or `ket` ;->
````

````{admonition} Hint 2: Generate $D_i^a$
:class: dropdown

Using NumPy broadcasting, we can generate `D_ov` $D_i^a$ as (dim: $(i, a)$)

```python
D_ov = mole.mo_energy[:mole.nocc, None] - mole.mo_energy[None, mole.nocc:]
```
````

````{admonition} Hint 3: Update $t_i^a$
:class: dropdown

We may simply write the formula above as

$$
t_i^a D_i^a = \mathtt{RHS}_i^a
$$

So $t_i^a$ is updated in the form

$$
t_i^a = (D_i^a)^{-1} \mathtt{RHS}_i^a
$$

You can first generate `RHS` $\mathtt{RHS}_i^a$ (dim: $(i, a)$) and `D_ov` $D_i^a$ (dim: $(i, a)$), then

```python
return RHS / D_ov
```
````

````{admonition} Hint 4: Tensor transpose without explicit statement
:class: dropdown

You may remember that there is no `w_voov` attribute for $w_{ic}^{ak}$. However, $w_{ic}^{ak} = w_{ci}^{ka}$ is 2-fold symmetric, so $w_{ic}^{ak}$ can be seen as a transpose of `w_ovvo`. There is no need to state the transpose explicitly, though: $\sum_{kc} w^{ak}_{ic} t_k^c$ can be written as

```python
RHS += np.einsum("kcai, kc -> ia", mole.w_ovvo, t1, optimize=True)
```

The transpose is already embedded in the string `"kcai"`.
````

````{admonition} Hint 5: Einstein summation expression
:class: dropdown

The Einstein summation expression can be more concise and clear.
$$
\begin{align}
t_i^a D_i^a &= \tilde{\mathscr{F}}^a_c t_i^c - \tilde{\mathscr{F}}^k_i t_k^a + 2 \mathscr{F}^k_c t_{ki}^{ca} - \mathscr{F}^k_c t_{ik}^{ca} + \mathscr{F}^k_c t_i^c t_k^a \\
&\quad + w^{ak}_{ic} t_k^c + w^{ak}_{cd} t_{ik}^{cd} + w^{ak}_{cd} t_i^c t_k^d - w^{kl}_{ic} t_{kl}^{ac} - w^{kl}_{ic} t_k^a t_l^c
\end{align}
$$
````

### Implementation

```python
def update_t1(mole: Molecule, t1: np.ndarray, t2: np.ndarray) -> np.ndarray:
    # Input: `t1`, `t2` CCSD amplitudes guess
    # Output: Updated `t1` CCSD single amplitude
    nocc = mole.nocc
    Foo, Fvv, Fov = mole.cc_Foo(t1, t2), mole.cc_Fvv(t1, t2), mole.cc_Fov(t1, t2)
    Foot = ____  # Fill this line
    Fvvt = ____  # Fill this line
    # Formula Line 1
    RHS = np.einsum("ac, ic -> ia", Fvvt, t1, optimize=True)
    RHS -= ____  # Fill this line
    RHS += 2 * np.einsum("kc, kica -> ia", Fov, t2, optimize=True)
    RHS -= ____  # Fill this line
    RHS += ____  # Fill this line
    # Formula Line 2
    RHS += np.einsum("kcai, kc -> ia", mole.w_ovvo, t1, optimize=True)
    RHS += ____  # Fill this line
    RHS += ____  # Fill this line
    RHS -= ____  # Fill this line
    RHS -= ____  # Fill this line

    D_ov = ____  # Fill this line
    return ____  # Fill this line

Molecule.update_t1 = update_t1
```

### Solution

```python
sol_mole.update_t1(t1_precomput, t2_precomput)
```

array([[-0.0000489, -0.       ],
       [-0.0032898, -0.       ],
       [ 0.       , -0.0025011],
       [-0.0217781, -0.       ],
       [-0.       ,  0.       ]])

As said before, `t1_precomput` and `t2_precomput` are converged CCSD amplitudes, so the result of the code-cell above should actually be extremely close to `t1_precomput`. One can use `numpy.allclose` ([NumPy API](https://numpy.org/doc/stable/reference/generated/numpy.allclose.html)) to check whether two arrays are very close to each other.

```python
np.allclose(sol_mole.update_t1(t1_precomput, t2_precomput), t1_precomput)
```

True

## Step 7: CCSD Double Excitation Amplitude $t_{ij}^{ab}$

In this step, we will calculate the CCSD double excitation amplitude $t_{ij}^{ab}$ (Hirata, eq 36).

$$
\begin{align}
t_{ij}^{ab} D_{ij}^{ab} = \mathscr{P} (ia, jb) \bigg[ & \frac{1}{2} v^{ij}_{ab} + \frac{1}{2} \sum_{kl} \mathscr{W}^{kl}_{ij} t_{kl}^{ab} + \frac{1}{2} \sum_{kl} \mathscr{W}^{kl}_{ij} t_k^a t_l^b + \frac{1}{2} \sum_{cd} \mathscr{W}^{ab}_{cd} t_{ij}^{cd} + \frac{1}{2} \sum_{cd} \mathscr{W}^{ab}_{cd} t_i^c t_j^d \\
& + \sum_c (\mathscr{L}^a_c - f^a_c) t_{ij}^{cb} - \sum_k (\mathscr{L}^k_i - f^k_i) t_{kj}^{ab} \\
& + \sum_c v^{ab}_{ic} t_j^c - \sum_{kc} v^{kb}_{ic} t_k^a t_j^c - \sum_k v^{ak}_{ij} t_k^b - \sum_{kc} v^{ak}_{ic} t_j^c t_k^b \\
& + 2 \sum_{kc} \mathscr{W}^{ak}_{ic} t_{kj}^{cb} - \sum_{kc} \mathscr{W}^{ak}_{ci} t_{kj}^{cb} - \sum_{kc} \mathscr{W}^{ak}_{ic} t_{kj}^{bc} - \sum_{kc} \mathscr{W}^{bk}_{ci} t_{kj}^{ac} \bigg]
\end{align}
$$

Where

$$
D_{ij}^{ab} = \varepsilon_i + \varepsilon_j - \varepsilon_a - \varepsilon_b
$$

and the **operator** $\mathscr{P} (ia, jb)$ is a permutation operator. When applying $\mathscr{P} (ia, jb)$ to a function (or tensor) $f(i, a, j, b)$,

$$
\big( \mathscr{P} (ia, jb) \circ f \big) (i, a, j, b) = f(i, a, j, b) + f(j, b, i, a)
$$

````{admonition} Hint 1: Generate $\tilde{\mathscr{L}}^a_c = \mathscr{L}^a_c - f^a_c$
:class: dropdown

Generate $\tilde{\mathscr{L}}^a_c$ in the same way as in the $\tilde{\mathscr{F}}^a_c$ generation:

```python
Lvv = mole.Lvv(t1, t2)
Lvvt = Lvv - np.diag(mole.mo_energy[mole.nocc:])
```
````

````{admonition} Hint 2: Generate $D_{ij}^{ab}$
:class: dropdown

Generate `D_oovv` $D_{ij}^{ab}$ in the same way as `D_ov` $D_i^a$, or as you learned from the MP2 calculation.
````

````{admonition} Hint 3: Update $t_{ij}^{ab}$
:class: dropdown

We may write the formula above as

$$
t_{ij}^{ab} D_{ij}^{ab} = \mathscr{P} (ia, jb) \big[ \mathtt{RHS}_{ij}^{ab} \big] = \mathtt{RHS}_{ij}^{ab} + \mathtt{RHS}_{ji}^{ba}
$$

where $\mathtt{RHS}_{ij}^{ab}$ is not exactly the right-hand side of the double excitation amplitude equation, but the sum of the terms in the square brackets. So $t_{ij}^{ab}$ is updated in the form

$$
t_{ij}^{ab} = (D_{ij}^{ab})^{-1} \big( \mathtt{RHS}_{ij}^{ab} + \mathtt{RHS}_{ji}^{ba} \big)
$$

You can first generate `RHS` $\mathtt{RHS}_{ij}^{ab}$ (dim: $(i, j, a, b)$) and `D_oovv` $D_{ij}^{ab}$ (dim: $(i, j, a, b)$), then

```python
return (RHS + RHS.transpose((1, 0, 3, 2))) / D_oovv  # permutation of P(ia, jb) applies here
```
````

````{admonition} Einstein summation expression
:class: dropdown

$$
\begin{align}
t_{ij}^{ab} D_{ij}^{ab} = \mathscr{P} (ia, jb) \bigg[ & \frac{1}{2} v^{ij}_{ab} + \frac{1}{2} \mathscr{W}^{kl}_{ij} t_{kl}^{ab} + \frac{1}{2} \mathscr{W}^{kl}_{ij} t_k^a t_l^b + \frac{1}{2} \mathscr{W}^{ab}_{cd} t_{ij}^{cd} + \frac{1}{2} \mathscr{W}^{ab}_{cd} t_i^c t_j^d \\
& + \tilde{\mathscr{L}}^a_c t_{ij}^{cb} - \tilde{\mathscr{L}}^k_i t_{kj}^{ab} \\
& + v^{ab}_{ic} t_j^c - v^{kb}_{ic} t_k^a t_j^c - v^{ak}_{ij} t_k^b - v^{ak}_{ic} t_j^c t_k^b \\
& + 2 \mathscr{W}^{ak}_{ic} t_{kj}^{cb} - \mathscr{W}^{ak}_{ci} t_{kj}^{cb} - \mathscr{W}^{ak}_{ic} t_{kj}^{bc} - \mathscr{W}^{bk}_{ci} t_{kj}^{ac} \bigg]
\end{align}
$$
````

### Implementation

```python
def update_t2(mole: Molecule, t1: np.ndarray, t2: np.ndarray) -> np.ndarray:
    # Input: `t1`, `t2` CCSD amplitudes guess
    # Output: Updated `t2` CCSD double amplitude
    nocc, e = mole.nocc, mole.mo_energy
    Loot = ____  # Fill this line
    Lvvt = ____  # Fill this line
    Woooo = mole.cc_Woooo(t1, t2)
    Wvoov = ____  # Fill this line
    Wvovo = ____  # Fill this line
    Wvvvv = ____  # Fill this line
    # Formula Line 1
    RHS = ____  # Fill this line
    RHS += 0.5 * np.einsum("klij, klab -> ijab", Woooo, t2, optimize=True)
    RHS += ____  # Fill this line
    RHS += ____  # Fill this line
    RHS += ____  # Fill this line
    # Formula Line 2
    RHS += ____  # Fill this line
    RHS -= ____  # Fill this line
    # Formula Line 3
    RHS += np.einsum("iacb, jc -> ijab", mole.v_ovvv, t1, optimize=True)
    RHS -= ____  # Fill this line
    RHS -= ____  # Fill this line
    RHS -= ____  # Fill this line
    # Formula Line 4
    RHS += 2 * np.einsum("akic, kjcb -> ijab", Wvoov, t2, optimize=True)
    RHS -= ____  # Fill this line
    RHS -= ____  # Fill this line
    RHS -= ____  # Fill this line

    D_oovv = ____  # Fill this line
    return ____  # Fill this line

Molecule.update_t2 = update_t2
```

### Solution

Again, `t1_precomput` and `t2_precomput` are converged CCSD amplitudes, so when we feed these tensors to `update_t2`, it should return a tensor very close to `t2_precomput`.

```python
np.allclose(sol_mole.update_t2(t1_precomput, t2_precomput), t2_precomput)
```

True
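For reference, here are possible completions of `update_t1` and `update_t2`. These are sketches whose einsum strings follow from the slice conventions above; the `_sketch` names are ours, and after renaming and attaching to `Molecule` as in the other cells, they should pass the `numpy.allclose` checks:

```python
def update_t1_sketch(mole: Molecule, t1: np.ndarray, t2: np.ndarray) -> np.ndarray:
    nocc = mole.nocc
    Foo, Fvv, Fov = mole.cc_Foo(t1, t2), mole.cc_Fvv(t1, t2), mole.cc_Fov(t1, t2)
    Foot = Foo - np.diag(mole.mo_energy[:nocc])
    Fvvt = Fvv - np.diag(mole.mo_energy[nocc:])
    # Formula Line 1
    RHS = np.einsum("ac, ic -> ia", Fvvt, t1, optimize=True)
    RHS -= np.einsum("ki, ka -> ia", Foot, t1, optimize=True)
    RHS += 2 * np.einsum("kc, kica -> ia", Fov, t2, optimize=True)
    RHS -= np.einsum("kc, ikca -> ia", Fov, t2, optimize=True)
    RHS += np.einsum("kc, ic, ka -> ia", Fov, t1, t1, optimize=True)
    # Formula Line 2; w^{ak}_{cd} = w^{ka}_{dc} is stored in w_ovvv as (k, d, a, c),
    # and w^{kl}_{ic} = w^{lk}_{ci} is stored in w_ovoo as (l, c, k, i)
    RHS += np.einsum("kcai, kc -> ia", mole.w_ovvo, t1, optimize=True)
    RHS += np.einsum("kdac, ikcd -> ia", mole.w_ovvv, t2, optimize=True)
    RHS += np.einsum("kdac, ic, kd -> ia", mole.w_ovvv, t1, t1, optimize=True)
    RHS -= np.einsum("lcki, klac -> ia", mole.w_ovoo, t2, optimize=True)
    RHS -= np.einsum("lcki, ka, lc -> ia", mole.w_ovoo, t1, t1, optimize=True)

    D_ov = mole.mo_energy[:nocc, None] - mole.mo_energy[None, nocc:]
    return RHS / D_ov

def update_t2_sketch(mole: Molecule, t1: np.ndarray, t2: np.ndarray) -> np.ndarray:
    nocc, e = mole.nocc, mole.mo_energy
    Loot = mole.Loo(t1, t2) - np.diag(e[:nocc])
    Lvvt = mole.Lvv(t1, t2) - np.diag(e[nocc:])
    Woooo = mole.cc_Woooo(t1, t2)
    Wvoov = mole.cc_Wvoov(t1, t2)
    Wvovo = mole.cc_Wvovo(t1, t2)
    Wvvvv = mole.cc_Wvvvv(t1, t2)
    # Formula Line 1; 0.5 * v_ovov.transpose(...) creates a new array, so no deep-copy issue
    RHS = 0.5 * mole.v_ovov.transpose((0, 2, 1, 3))
    RHS += 0.5 * np.einsum("klij, klab -> ijab", Woooo, t2, optimize=True)
    RHS += 0.5 * np.einsum("klij, ka, lb -> ijab", Woooo, t1, t1, optimize=True)
    RHS += 0.5 * np.einsum("abcd, ijcd -> ijab", Wvvvv, t2, optimize=True)
    RHS += 0.5 * np.einsum("abcd, ic, jd -> ijab", Wvvvv, t1, t1, optimize=True)
    # Formula Line 2
    RHS += np.einsum("ac, ijcb -> ijab", Lvvt, t2, optimize=True)
    RHS -= np.einsum("ki, kjab -> ijab", Loot, t2, optimize=True)
    # Formula Line 3; v^{kb}_{ic} = (ki|bc) lives in v_oovv, v^{ak}_{ij} = (ia|kj) in v_ovoo
    RHS += np.einsum("iacb, jc -> ijab", mole.v_ovvv, t1, optimize=True)
    RHS -= np.einsum("kibc, ka, jc -> ijab", mole.v_oovv, t1, t1, optimize=True)
    RHS -= np.einsum("iakj, kb -> ijab", mole.v_ovoo, t1, optimize=True)
    RHS -= np.einsum("iack, jc, kb -> ijab", mole.v_ovvo, t1, t1, optimize=True)
    # Formula Line 4
    RHS += 2 * np.einsum("akic, kjcb -> ijab", Wvoov, t2, optimize=True)
    RHS -= np.einsum("akci, kjcb -> ijab", Wvovo, t2, optimize=True)
    RHS -= np.einsum("akic, kjbc -> ijab", Wvoov, t2, optimize=True)
    RHS -= np.einsum("bkci, kjac -> ijab", Wvovo, t2, optimize=True)

    D_oovv = (e[:nocc, None, None, None] + e[None, :nocc, None, None]
              - e[None, None, nocc:, None] - e[None, None, None, nocc:])
    return (RHS + RHS.transpose((1, 0, 3, 2))) / D_oovv
```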
## Step 8: CCSD Correlation Energy

The closed-shell CCSD correlation energy is (Hirata, eq 32)

$$
E_\mathrm{corr}^\mathsf{CCSD} = \sum_{ijab} w^{ij}_{ab} (t_{ij}^{ab} + t_i^a t_j^b)
$$

### Implementation

```python
def eng_ccsd_corr(mole: Molecule, t1: np.ndarray, t2: np.ndarray) -> float:
    # Input: `t1`, `t2` CCSD amplitudes
    # Output: (Closed-shell) CCSD correlation energy for given amplitudes (not converged value)
    raise NotImplementedError("About 3 lines of code")

Molecule.eng_ccsd_corr = eng_ccsd_corr
```

### Solution

Since we use the converged `t1_precomput` and `t2_precomput` amplitudes, the solution should reproduce the final CCSD energy result of water/STO-3G.

```python
sol_mole.eng_ccsd_corr(t1_precomput, t2_precomput)
```

-0.07068008832822766

## Step 9: CCSD Amplitudes Initial Guess

A common initial guess of the CCSD amplitudes is

$$
t_i^a (\mathtt{Guess}) = 0, \quad t_{ij}^{ab} (\mathtt{Guess}) = \frac{v_{ij}^{ab}}{D_{ij}^{ab}}
$$

Feeding these amplitudes to the CCSD correlation energy evaluation function `eng_ccsd_corr` should reproduce the MP2 correlation energy of water/STO-3G.

### Implementation

```python
def get_t1_t2_initial_guess(mole: Molecule) -> Tuple[np.ndarray, np.ndarray]:
    # Output: `t1`, `t2` Initial guess of CCSD amplitudes
    raise NotImplementedError("About 3~10 lines of code")

Molecule.get_t1_t2_initial_guess = get_t1_t2_initial_guess
```

### Solution

```python
sol_mole.eng_ccsd_corr(*sol_mole.get_t1_t2_initial_guess())
```

-0.049149636147146854

## Step 10: CCSD Loop and Convergence

Just like the SCF loop in [Project 03](../Project_03/Project_03.ipynb#Step-10:-SCF-Loop-and-Convergence), CCSD amplitudes are updated by a similar loop. Within this function, you may

- Initialize $t_i^a$ and $t_{ij}^{ab}$ guesses;
- Determine the maximum iteration number and the convergence threshold;
- While not exceeding the maximum iteration number,
  - Update $t_i^a$ and $t_{ij}^{ab}$;
  - Calculate the current $E_\mathrm{corr}^\mathsf{CCSD}$;
  - Print debug information;
  - Check whether the energy has converged;
- Return the converged $E_\mathrm{corr}^\mathsf{CCSD}$, $t_i^a$ and $t_{ij}^{ab}$.

### Implementation

```python
def ccsd_process(mole: Molecule) -> Tuple[float, np.ndarray, np.ndarray]:
    # Output: Converged CCSD correlation energy, and converged `t1`, `t2` amplitudes
    raise NotImplementedError("About 15 lines of code")

Molecule.ccsd_process = ccsd_process
```

### Solution

```python
sol_mole.ccsd_process()[0]
```

Epoch     CCSD Corr Energy
    0     -0.062758205988
    1     -0.067396582633
    2     -0.069224536447
    3     -0.070007757593
    4     -0.070360041940
    5     -0.070523820256
    6     -0.070602032655
    7     -0.070640293065
    8     -0.070659428867
    9     -0.070669194464
   10     -0.070674268086
   11     -0.070676945033
   12     -0.070678375897
   13     -0.070679148925
   14     -0.070679570177
   15     -0.070679801317
   16     -0.070679928834
   17     -0.070679999483
   18     -0.070680038755
   19     -0.070680060641
   20     -0.070680072863
   21     -0.070680079698
   22     -0.070680083526
   23     -0.070680085671
   24     -0.070680086874
   25     -0.070680087549
   26     -0.070680087929
   27     -0.070680088141
   28     -0.070680088261
   29     -0.070680088328

-0.07068008832822766
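Possible completions of Steps 8 to 10 follow (sketches; `Tuple` is assumed to be imported from `typing` earlier in the project, as the type hints in the stubs suggest, and the maximum iteration count, convergence threshold and print formatting are our assumptions):

```python
def eng_ccsd_corr_sketch(mole: Molecule, t1: np.ndarray, t2: np.ndarray) -> float:
    # Hirata, eq 32; w_ovov stores w^{ij}_{ab} as (i, a, j, b)
    eng = np.einsum("iajb, ijab ->", mole.w_ovov, t2, optimize=True)
    eng += np.einsum("iajb, ia, jb ->", mole.w_ovov, t1, t1, optimize=True)
    return eng

def get_t1_t2_initial_guess_sketch(mole: Molecule) -> Tuple[np.ndarray, np.ndarray]:
    nocc, nvir, e = mole.nocc, mole.nao - mole.nocc, mole.mo_energy
    D_oovv = (e[:nocc, None, None, None] + e[None, :nocc, None, None]
              - e[None, None, nocc:, None] - e[None, None, None, nocc:])
    t1 = np.zeros((nocc, nvir))
    t2 = mole.v_ovov.transpose((0, 2, 1, 3)) / D_oovv  # v^{ij}_{ab} / D^{ab}_{ij}
    return t1, t2

def ccsd_process_sketch(mole: Molecule) -> Tuple[float, np.ndarray, np.ndarray]:
    t1, t2 = mole.get_t1_t2_initial_guess()
    eng_old = 0.
    print("Epoch     CCSD Corr Energy")
    for epoch in range(100):                      # maximum iteration count (assumed)
        # the right-hand side is evaluated before assignment, so both updates
        # use the amplitudes from the previous iteration
        t1, t2 = mole.update_t1(t1, t2), mole.update_t2(t1, t2)
        eng = mole.eng_ccsd_corr(t1, t2)
        print("{:5d} {:20.12f}".format(epoch, eng))
        if abs(eng - eng_old) < 1.e-10:           # convergence threshold (assumed)
            break
        eng_old = eng
    return eng, t1, t2
```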
## Test Cases

The input structures, integrals, etc. for these examples are found in the [input directory](https://github.com/ajz34/PyCrawfordProgProj/tree/master/source/Project_05/). You can also use the PySCF approach and simply ignore the integral files.

**Water/STO-3G** ([Directory](https://github.com/ajz34/PyCrawfordProgProj/tree/master/source/Project_05/input/h2o/STO-3G))

```python
sol_mole = SolMol()
sol_mole.construct_from_dat_file("input/h2o/STO-3G/geom.dat")
sol_mole.obtain_mol_instance(basis="STO-3G")
sol_mole.print_solution_05()
```

=== CCSD Iterations ===
Epoch     CCSD Corr Energy
    0     -0.062758205988
    1     -0.067396582633
    2     -0.069224536447
    3     -0.070007757593
    4     -0.070360041940
    5     -0.070523820256
    6     -0.070602032655
    7     -0.070640293065
    8     -0.070659428867
    9     -0.070669194464
   10     -0.070674268086
   11     -0.070676945033
   12     -0.070678375897
   13     -0.070679148925
   14     -0.070679570177
   15     -0.070679801317
   16     -0.070679928834
   17     -0.070679999483
   18     -0.070680038755
   19     -0.070680060641
   20     -0.070680072863
   21     -0.070680079698
   22     -0.070680083526
   23     -0.070680085671
   24     -0.070680086874
   25     -0.070680087549
   26     -0.070680087929
   27     -0.070680088141
   28     -0.070680088261
   29     -0.070680088328
=== Final Results ===
 MP2 Correlation energy:  -0.049149636147
CCSD Correlation energy:  -0.070680088328
       SCF Total energy: -74.942079928192
       MP2 Total energy: -74.991229564340
      CCSD Total energy: -75.012760016521

**Water/DZ** ([Directory](https://github.com/ajz34/PyCrawfordProgProj/tree/master/source/Project_05/input/h2o/DZ))

```python
sol_mole = SolMol()
sol_mole.construct_from_dat_file("input/h2o/DZ/geom.dat")
sol_mole.obtain_mol_instance(basis="DZ")
sol_mole.print_solution_05()
```

=== CCSD Iterations ===
Epoch     CCSD Corr Energy
    0     -0.153219621576
    1     -0.157583607647
    2     -0.158780675284
    3     -0.159372391280
    4     -0.159624912085
    5     -0.159744546130
    6     -0.159800894374
    7     -0.159828278196
    8     -0.159841745959
    9     -0.159848484681
   10     -0.159851901946
   11     -0.159853658737
   12     -0.159854573259
   13     -0.159855055055
   14     -0.159855311727
   15     -0.159855449898
   16     -0.159855524996
   17     -0.159855566175
   18     -0.159855588937
   19     -0.159855601610
   20     -0.159855608712
   21     -0.159855612715
   22     -0.159855614984
   23     -0.159855616275
   24     -0.159855617013
   25     -0.159855617437
   26     -0.159855617681
   27     -0.159855617821
   28     -0.159855617903
=== Final Results ===
 MP2 Correlation energy:  -0.152709879014
CCSD Correlation energy:  -0.159855617903
       SCF Total energy: -75.977878975377
       MP2 Total energy: -76.130588854391
      CCSD Total energy: -76.137734593279

**Water/DZP** ([Directory](https://github.com/ajz34/PyCrawfordProgProj/tree/master/source/Project_05/input/h2o/DZP))

For this molecule, some additional code is included. See [Project 03](../Project_03/Project_03.ipynb#Test-Cases) for details.
```python
sol_mole = SolMol()
sol_mole.construct_from_dat_file("input/h2o/DZP/geom.dat")
sol_mole.obtain_mol_instance(basis="DZP_Dunning")
sol_mole.mol._basis["H"][2][1][0] = 0.75
sol_mole.mol.cart = True
sol_mole.mol.build()
sol_mole.print_solution_05()
```

=== CCSD Iterations ===
Epoch     CCSD Corr Energy
    0     -0.224897568632
    1     -0.229765989748
    2     -0.230690952120
    3     -0.231206877685
    4     -0.231397181235
    5     -0.231490655839
    6     -0.231532173882
    7     -0.231552454210
    8     -0.231562184919
    9     -0.231567038519
   10     -0.231569475283
   11     -0.231570725976
   12     -0.231571376382
   13     -0.231571720337
   14     -0.231571904782
   15     -0.231572005095
   16     -0.231572060344
   17     -0.231572091132
   18     -0.231572108469
   19     -0.231572118323
   20     -0.231572123969
   21     -0.231572127228
   22     -0.231572129120
   23     -0.231572130225
   24     -0.231572130872
   25     -0.231572131253
   26     -0.231572131478
   27     -0.231572131611
   28     -0.231572131690
=== Final Results ===
 MP2 Correlation energy:  -0.222519233751
CCSD Correlation energy:  -0.231572131690
       SCF Total energy: -76.008821792901
       MP2 Total energy: -76.231341026652
      CCSD Total energy: -76.240393924591

**Methane/STO-3G** ([Directory](https://github.com/ajz34/PyCrawfordProgProj/tree/master/source/Project_05/input/ch4/STO-3G))

```python
sol_mole = SolMol()
sol_mole.construct_from_dat_file("input/ch4/STO-3G/geom.dat")
sol_mole.obtain_mol_instance(basis="STO-3G")
sol_mole.print_solution_05()
```

=== CCSD Iterations ===
Epoch     CCSD Corr Energy
    0     -0.070745262119
    1     -0.075483795485
    2     -0.077200327129
    3     -0.077865986642
    4     -0.078135368845
    5     -0.078247913793
    6     -0.078296204232
    7     -0.078317410766
    8     -0.078326912287
    9     -0.078331242310
   10     -0.078333243407
   11     -0.078334178691
   12     -0.078334619738
   13     -0.078334829162
   14     -0.078334929133
   15     -0.078334977048
   16     -0.078335000083
   17     -0.078335011182
   18     -0.078335016539
   19     -0.078335019128
   20     -0.078335020380
   21     -0.078335020987
   22     -0.078335021280
   23     -0.078335021423
   24     -0.078335021492
=== Final Results ===
 MP2 Correlation energy:  -0.056046674662
CCSD Correlation energy:  -0.078335021492
       SCF Total energy: -39.726850316359
       MP2 Total energy: -39.782896991021
      CCSD Total energy: -39.805185337850

## References

- Hirata, S.; Podeszwa, R.; Tobita, M.; Bartlett, R. J. Coupled-cluster singles and doubles for extended systems. *J. Chem. Phys.* **2004**, *120* (6), 2581. doi: [10.1063/1.1637577](https://dx.doi.org/10.1063/1.1637577)
# Spectral Estimation of Random Signals *This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).* ## The Periodogram The [periodogram](https://en.wikipedia.org/wiki/Spectral_density_estimation#Periodogram) is an estimator for the power spectral density (PSD) $\Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ of a random signal $x[k]$. For the following it is assumed that $x[k]$ is drawn from a wide-sense ergodic real-valued random process. ### Definition The PSD is defined as the [discrete-time Fourier transformation (DTFT) of the auto-correlation function (ACF)](../random_signals/power_spectral_densities.ipynb#Definition) \begin{equation} \Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \mathcal{F}_* \{ \varphi_{xx}[\kappa] \} \end{equation} Hence, the PSD can be computed from an estimate of the ACF. Let's assume that we want to estimate the PSD from $N$ samples of the random signal $x[k]$ by way of the ACF. The signal $x[k]$ is truncated to $N$ samples by multiplication (windowing) with the rectangular signal $\text{rect}_N[k]$ of length $N$ \begin{equation} x_N[k] = x[k] \cdot \text{rect}_N[k] \end{equation} where $x_N[k]$ denotes the truncated signal. The ACF is estimated by applying its definition in a straightforward manner. For a random signal $x_N[k]$ of finite length, the estimated ACF $\hat{\varphi}_{xx}[\kappa]$ can be expressed [in terms of a convolution](../random_signals/correlation_functions.ipynb#Definition) \begin{equation} \hat{\varphi}_{xx}[\kappa] = \frac{1}{N} \cdot x_N[k] * x_N[-k] \end{equation} Applying the DTFT to both sides and rearranging the terms yields \begin{equation} \hat{\Phi}_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \frac{1}{N} \, X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega})\, X_N(\mathrm{e}^{\,-\mathrm{j}\,\Omega}) = \frac{1}{N} \, | X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega}) |^2 \end{equation} where the intermediate and last equalities result from the symmetry relations of the DTFT. This estimate of the PSD is known as the periodogram. It can be computed directly from the DTFT \begin{equation} X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \sum_{k=0}^{N-1} x_N[k] \, \mathrm{e}^{\,-\mathrm{j}\,\Omega\,k} \end{equation} of the truncated random signal. ### Example - Periodogram The following example estimates the PSD of a random process which draws samples from normal distributed white noise with zero mean and unit variance. The true PSD is hence given as $\Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = 1$. In order to compute the periodogram by the discrete Fourier transformation (DFT), the signal $x[k]$ has to be zero-padded to ensure that squaring (multiplying) the spectra does not result in a circular convolution. 
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

N = 128  # number of samples

# generate random signal
np.random.seed(5)
x = np.random.normal(size=N)

# compute magnitude of the periodogram
x = np.concatenate((x, np.zeros_like(x)))
X = np.fft.rfft(x)
Om = np.linspace(0, np.pi, len(X))
Sxx = 1/N * abs(X)**2

# plot results
plt.figure(figsize=(10,4))
plt.stem(Om, Sxx, 'C0', label=r'$|\hat{\Phi}_{xx}(e^{j \Omega})|$', use_line_collection=True)
plt.plot(Om, np.ones_like(Sxx), 'C1', label=r'$\Phi_{xx}(e^{j \Omega})$')
plt.title('Estimated and true PSD')
plt.xlabel(r'$\Omega$')
plt.axis([0, np.pi, 0, 4])
plt.legend()

# compute mean value of the periodogram
print('Mean value of the periodogram: {0:1.3f}'.format(np.mean(np.abs(Sxx))))
print('Variance of the periodogram: {0:1.3f}'.format(np.var(np.abs(Sxx))))
```

Mean value of the periodogram: 1.024
Variance of the periodogram: 0.791

**Exercise**

* What do you have to change to evaluate experimentally whether the periodogram is a consistent estimator?
* Based on the results, is the periodogram a consistent estimator?

Solution: The conditions for consistency have to be checked for the limiting case of an infinitely long signal ($N \to \infty$). Increasing the length `N` of the random signal in the example above reveals that the periodogram can be assumed to be bias free, $b_{\hat{\Phi}_{xx}} = 0$. However, its variance $\sigma^2_{\hat{\Phi}_{xx}}$ does not tend to zero. The reason for this is that increasing the length $N$ of the random signal also increases the number of spectral coefficients by the same amount.

### Evaluation

From the above numerical example it should have become clear that the periodogram is not a consistent estimator for the PSD $\Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$. It can be shown that the estimator is asymptotically bias free for $N \to \infty$, hence

\begin{equation}
\lim_{N \to \infty} E\{ \hat{\Phi}_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) \} = \Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega})
\end{equation}

This is due to the [leakage effect](../spectral_analysis_deterministic_signals/leakage_effect.ipynb) which limits the spectral resolution for signals of finite length.

The variance of the estimator does not converge towards zero

\begin{equation}
\lim_{N \to \infty} \sigma^2_{\hat{\Phi}_{xx}} \neq 0
\end{equation}

This is due to the fact that increasing $N$ also increases the number of independent frequencies $\Omega = \frac{2 \pi}{N} \mu$ for $\mu = 0,1,\dots,N-1$.

The periodogram is the basis for a variety of advanced estimation techniques for the PSD. These techniques rely on averaging or smoothing of (overlapping) periodograms.

**Copyright**

This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples*.
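As a closing numerical check of the consistency discussion above, the following short sketch (ours, not part of the original lecture code) recomputes the periodogram for increasing signal lengths and prints its mean and variance; the mean stays close to one while the variance does not decrease with $N$:

```python
import numpy as np

np.random.seed(1)
for N in (128, 1024, 8192):
    x = np.random.normal(size=N)
    # zero-pad as in the example above to avoid circular convolution
    X = np.fft.rfft(np.concatenate((x, np.zeros_like(x))))
    Sxx = 1/N * np.abs(X)**2
    print('N = {:5d}: mean = {:1.3f}, variance = {:1.3f}'.format(
        N, np.mean(Sxx), np.var(Sxx)))
```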
# Systems of Equations

Imagine you are at a casino, and you have a mixture of £10 and £25 chips. You know that you have a total of 16 chips, and you also know that the total value of chips you have is £250. Is this enough information to determine how many of each denomination of chip you have?

Well, we can express each of the facts that we have as an equation. The first equation deals with the total number of chips - we know that this is 16, and that it is the number of £10 chips (which we'll call ***x***) added to the number of £25 chips (***y***).

The second equation deals with the total value of the chips (£250), and we know that this is made up of ***x*** chips worth £10 and ***y*** chips worth £25.

Here are the equations

\begin{equation}x + y = 16 \end{equation}
\begin{equation}10x + 25y = 250 \end{equation}

Taken together, these equations form a *system of equations* that will enable us to determine how many of each chip denomination we have.

## Graphing Lines to Find the Intersection Point

One approach is to determine all possible values for x and y in each equation and plot them.

A collection of 16 chips could be made up of 16 £10 chips and no £25 chips, no £10 chips and 16 £25 chips, or any combination between these.

Similarly, a total of £250 could be made up of 25 £10 chips and no £25 chips, no £10 chips and 10 £25 chips, or a combination in between.

Let's plot each of these ranges of values as lines on a graph:

```python
%matplotlib inline
from matplotlib import pyplot as plt

# Get the extremes for number of chips
chipsAll10s = [16, 0]
chipsAll25s = [0, 16]

# Get the extremes for values
valueAll10s = [25,0]
valueAll25s = [0,10]

# Plot the lines
plt.plot(chipsAll10s,chipsAll25s, color='blue')
plt.plot(valueAll10s, valueAll25s, color="orange")
plt.xlabel('x (£10 chips)')
plt.ylabel('y (£25 chips)')
plt.grid()

plt.show()
```

Looking at the graph, you can see that there is only a single combination of £10 and £25 chips that is on both the line for all possible combinations of 16 chips and the line for all possible combinations of £250. The point where the lines intersect is (10, 6); or put another way, there are ten £10 chips and six £25 chips.

### Solving a System of Equations with Elimination

You can also solve a system of equations mathematically. Let's take a look at our two equations:

\begin{equation}x + y = 16 \end{equation}
\begin{equation}10x + 25y = 250 \end{equation}

We can combine these equations to eliminate one of the variable terms and solve the resulting equation to find the value of one of the variables. Let's start by combining the equations and eliminating the x term.

We can combine the equations by adding them together, but first, we need to manipulate one of the equations so that adding them will eliminate the x term. The first equation includes the term ***x***, and the second includes the term ***10x***, so if we multiply the first equation by -10, the two x terms will cancel each other out. So here are the equations with the first one multiplied by -10:

\begin{equation}-10(x + y) = -10(16) \end{equation}
\begin{equation}10x + 25y = 250 \end{equation}

After we apply the multiplication to all of the terms in the first equation, the system of equations looks like this:

\begin{equation}-10x + -10y = -160 \end{equation}
\begin{equation}10x + 25y = 250 \end{equation}

Now we can combine the equations by adding them.
The ***-10x*** and ***10x*** cancel one another, leaving us with a single equation like this: \begin{equation}15y = 90 \end{equation} We can isolate ***y*** by dividing both sides by 15: \begin{equation}y = \frac{90}{15} \end{equation} So now we have a value for ***y***: \begin{equation}y = 6 \end{equation} So how does that help us? Well, now we have a value for ***y*** that satisfies both equations. We can simply use it in either of the equations to determine the value of ***x***. Let's use the first one: \begin{equation}x + 6 = 16 \end{equation} When we work through this equation, we get a value for ***x***: \begin{equation}x = 10 \end{equation} So now we've calculated values for ***x*** and ***y***, and we find, just as we did with the graphical intersection method, that there are ten £10 chips and six £25 chips. You can run the following Python code to verify that the equations are both true with an ***x*** value of 10 and a ***y*** value of 6. ```python x = 10 y = 6 print ((x + y == 16) & ((10*x) + (25*y) == 250)) ``` True
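As a side note (not part of the original lesson), the same system can also be solved numerically with NumPy's linear-algebra routines by writing it in matrix form:

```python
import numpy as np

# coefficient matrix and right-hand side for:
#    x +   y = 16
#  10x + 25y = 250
A = np.array([[1, 1],
              [10, 25]])
b = np.array([16, 250])

x, y = np.linalg.solve(A, b)
print(x, y)  # 10.0 6.0
```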
$\newcommand{\rads}{~rad.s$^{-1}$}$
$\newcommand{\bnabla}{\boldsymbol{\nabla}}$
$\newcommand{\eexp}[1]{\textrm{e}^{#1}}$
$\newcommand{\glm}[1]{\overline{#1}^L}$
$\newcommand{\di}[0]{\textrm{d}}$
$\newcommand{\bs}[1]{\boldsymbol{#1}}$
$\newcommand{\ode}[2]{\frac{\di {#1}}{\di {#2}}}$
$\newcommand{\oden}[3]{\frac{\di^{#1} {#2}}{\di {#3}^{#1}}}$
$\newcommand{\odel}[2]{\di {#1}/\di {#2}}$
$\newcommand{\odeln}[3]{\di^{#1} {#2}/\di {#3}^{#1}}$
$\newcommand{\pde}[2]{\frac{\partial {#1}}{\partial {#2}}}$
$\newcommand{\pden}[3]{\frac{\partial^{#1} {#2}}{\partial {#3}^{#1}}}$
$\newcommand{\pdel}[2]{\partial_{#2} {#1}}$
$\newcommand{\pdenl}[3]{\partial^{#1}_{#3} {#2}}$
$\newcommand{\mde}[1]{\frac{\textrm{D} {#1}}{\textrm{D} t}}$
$\newcommand{\mdel}[1]{\textrm{D}_t {#1}}$
$\newcommand{\divr}[1]{\vec\nabla \cdot {#1}}$
$\newcommand{\divrb}[1]{\boldsymbol{\nabla} \cdot {#1}}$
$\newcommand{\grad}[1]{\vec \nabla {#1}}$
$\newcommand{\gradb}[1]{\boldsymbol\nabla {#1}}$
$\newcommand{\curl}[1]{\vec\nabla \times {#1}}$
$\newcommand{\curlb}[1]{\boldsymbol{\nabla}\times\boldsymbol{#1}}$
$\newcommand{\lapl}[0]{\vec\nabla^2}$
$\newcommand{\laplb}[0]{\boldsymbol{\nabla}^2}$
$\newcommand{\cplxi}[0]{\mathrm i}$
$\newcommand{\unit}[1]{\mathbf{\hat{#1}}}$
$\newcommand{\thrfor}[0]{\quad\Rightarrow\quad}$
$\newcommand{\andeq}[0]{\quad\textrm{and}\quad}$
$\newcommand{\oreq}[0]{\quad\textrm{or}\quad}$
$\newcommand{\red}[1]{\textcolor{red}{#1}}$
$\newcommand{\blue}[1]{\textcolor{blue}{#1}}$
$\newcommand{\mage}[1]{\textcolor{magenta}{#1}}$
$\newcommand{\stirling}[2]{\genfrac{[}{]}{0pt}{}{#1}{#2}}$

```python
from IPython.display import Image, display, YouTubeVideo
```

[*Book: chapter 4*]

# Introduction

This chapter is a pivotal chapter in this lecture series and, from past experience, one of the most challenging as well. It makes the transition between the behaviour of simple oscillators and wave behaviour. However, these two things look very, very different. Where does the connection happen?

Here is the pivot, and pretty much the goal of this chapter: when two oscillators are coupled, i.e., the motion of one oscillator is influenced by the motion of the other, any motion that ensues can be thought of as a linear combination of two orthogonal types of motion. The features of these orthogonal types of motion can be found by solving for the eigenmodes of the matrix corresponding to the $2\times2$ linear system of equations of the two oscillators. Each orthogonal mode of motion evolves independently, and its evolution equation is that of a simple harmonic oscillator. Physically, the most visible manifestation of one single mode of motion is that all parts of the coupled system of oscillators oscillate at the same frequency, while a different mode will have those parts oscillate at a different frequency.

If now an infinite number of oscillators are coupled, the number of orthogonal modes of motion is no longer two, but infinite. This model can be thought of as a model for a continuous medium, in which waves can propagate. The superposition of this infinite number of modes can lead to an infinite number of patterns that propagate: waves. Once again, each of these modes satisfies the same equation as that of a simple harmonic oscillator. It is the circle of life, PHY293-style: from oscillators to waves, and back to oscillators. Hopefully, by the end of my lecture series, you will have started to understand this concept.
Finally, throughout the chapter, and in order to simplify the presentation, we neglect damping unless otherwise stated.

But first, let's take a look at the go-to experimental demo of this chapter: the coupled pendulums (also at https://youtu.be/BSC0HG1Nz74 and https://play.library.utoronto.ca/a185a674af2647fc6639939b765f7454).

```python
YouTubeVideo('BSC0HG1Nz74', width=560, height=315)
```

# Expectations

## Remember

* A normal mode of oscillation occurs by definition when all of the components of a coupled system of oscillators oscillate at the same frequency. It is an object that is shared globally, by *all* of the components of the oscillator;
* That the modes are completely independent when the system is linear; they all have their own quantity of energy;
* The definitions of normal frequencies, normal modes, normal coordinates.
* That the word 'normal' is equivalent to the prefix 'eigen';
* The determinant of a $2\times2$ matrix.
* In the case of an initial value problem, and as for any second-order system, the future evolution of the system can be predicted if one knows all of its physical characteristics (for pendulums, that would be masses, stiffness(es), lengths and gravitational acceleration), the initial positions of each element of the system, and their initial velocities;
* Any solution of an initial value problem (IVP) can be written as a linear combination of the $n$ eigenmodes.

## Understand

* what the process of beating is;
* what a degree of freedom is;
* how projecting can help solve IVPs faster.

## Apply

* A systematic approach to retrieve all of the normal-stuff is to solve an eigenvalue/vector problem. Remember how to cast the equations of motion into a matrix equation, and how to proceed to find all of the eigen-stuff (solve for the roots of the determinant, replace $\omega$ by $\omega_n$ to find the eigenvectors, maybe normalize).
* How to use initial conditions to completely solve an initial value problem;
* How to use projections to solve IVPs.
* Worked examples, tutorials and problem sets.

# Coupled Pendulums

## Simple Pendulum

You have seen in the first problem set that the equation of motion of one undamped pendulum of length $l$ is

$$ \ddot \theta + \omega_p^2\theta = 0,$$

with $\omega^2_p = g/l$, where $\theta$ is the angle of the pendulum with respect to the direction of gravity (see King fig. 1.15).

Let $x$ be the distance of the mass from the vertical axis (see King fig. 1.15). We have $x = l \sin\theta$. In order for the SHO model to be valid, we need the oscillations to remain small, i.e., $\theta \ll 1$ or $\forall t,\ x(t) \ll l$. In this case, $\sin\theta \approx \theta$, meaning that an equivalent equation of motion for the mass is

$$ \ddot x + \omega_p^2 x = 0. $$

## Two Pendulums and a Spring

### Qualitative considerations

#### Setup

Let me use the example of two pendulums of identical lengths $l$, at the end of which two identical masses $m$ hang. The masses are attached via a spring of stiffness $k$, as shown on King fig. 4.2. When the two masses are at rest, both pendulums are vertical, and the spring is neither stretched nor compressed. Let's call $B$ the mass on the left, and $A$ the mass on the right. $x_A$ and $x_B$ are the horizontal distances of each mass from their respective rest positions.

I will show how a systematic mathematical treatment of the equations of motion reveals that there are two "natural", i.e., normal, modes of motion. But first, let me introduce them qualitatively.
#### Antisymmetric Normal Mode

Imagine that at $t=0$, $x_{A0} = x_{B0} = A$, and both masses are held steady initially (no initial velocity). Since the motion of both masses is initialized in the same way, and both pendulums have the same natural frequency $\omega_p = \sqrt{g/l}$, they would naturally oscillate in perfect sync, with same amplitude and in phase:

$$ x_A(t) = x_B(t) = A\cos(\omega_1 t). $$

The spring is neither stretched nor compressed, and therefore plays no role in this motion. Here is why this mode of motion is what we call a normal mode: **all elements of the coupled system oscillate at the same frequency**, $\omega_1 =\omega_p$ here.

#### Symmetric Normal Mode

We can come up with another type of motion, for which each mass will oscillate sinusoidally with a well-defined frequency. Our initial conditions are now $x_{A0} = -x_{B0} = A$, with zero initial velocity again. By symmetry, one can expect that the motion is now described by

$$ x_A(t) = A\cos(\omega_2 t)\quad\textrm{and}\quad x_{B}(t) = A\cos(\omega_2 t +\pi). $$

I.e., both positions will oscillate $\pi$ rad (or 180$^\circ$) out of phase, and with a single angular frequency $\omega_2$. What $\omega_2$ is, is not trivial at this point, because it will include effects due to both restoring forces, i.e., weight and the spring tension. I will derive it soon after, but at this point, I want you to remember that this type of motion is a normal mode because **both pendulums oscillate at the same frequency.**

Could I find a third normal mode? The answer is no, because there are only two degrees of freedom (the positions of the masses). Is it obvious? Again, the answer is no. The definitive proof is a basic result of linear algebra. In this chapter, I will simply show that these forms of motion appear naturally out of the equations of motion, that no other normal mode is apparent, and that any other form of motion can be described as a linear combination of both forms of motion.

### Systematic Derivation

#### Basic equations

Let us now drop any symmetry assumption. The positions $x_A(t)$ and $x_B(t)$ are just what they are. Let us first focus on mass $A$. The weight of $A$ projected on the direction of the velocity is $F_{WA} = -mg x_A/l$ (see Problem Set 1). The force of the spring on the mass is $F_{s\to A} = -k(x_A - x_B)$. Strictly speaking, this force is aligned with the direction of the spring, but if the angles of displacement are small, we can also say that its projection on the direction of the trajectory is the same. Newton's 2$^{nd}$ law applied to the mass is therefore

$$ m\ddot x_A + \frac{mg}{l}x_A + k(x_A - x_B) = 0,$$

or, after division by $m$,

$$ \ddot x_A + \omega_p^2 x_A + \omega_s^2(x_A - x_B) = 0, \hspace{5cm} (1) $$

with $\omega_s^2 = k/m$ (recall that $\omega_p^2 = g/l$). The same reasoning with mass $B$ yields

$$ \ddot x_B + \omega_p^2x_B + \omega_s^2(x_B - x_A) = 0. \hspace{5cm} (2) $$

Both equations are almost the same, except for the terms that result from the spring tension, which couples the motion of both masses.

#### Change of variables

It is not obvious at first how to solve for $x_A$ and $x_B$ with this coupling term involved. The method we use is a **change of variables.** Define

$$ q_1 = x_A + x_B \andeq q_2 = x_A - x_B.$$

One simple way to make them appear is to sum and subtract eqns. (1) and (2):

* $(1) + (2) \quad\Rightarrow\quad \ddot q_1 + \omega_1^2 q_1 = 0$. This is the equation for an SHO of angular frequency $\omega_1 = \omega_p$.
  The antisymmetric mode of motion that I described earlier on was also oscillating at frequency $\omega_p$, because the spring wasn't stretching or compressing, and the presence of said spring was not modifying the pendulum natural frequency $\omega_p$. And since it was characterized by $x_A(t) = x_B(t)$, it corresponded to $q_1 = 2x_A = 2x_B \neq 0$ and $q_2 = 0$.

  *Note: why do I feel the need to introduce $\omega_1$ on top of $\omega_p$? Because in general, no frequency of the normal modes is equal to one of the frequencies of the uncoupled oscillators. In this simple case, yes, but I do not want to give false impressions.*

* $(1) - (2) \quad\Rightarrow\quad \ddot q_2 + \omega_p^2 q_2 + 2\omega_s^2 q_2 = 0.$ Or, defining $\omega_2^2 = \omega_p^2 + 2\omega_s^2$,

  $$ \ddot q_2 + \omega_2^2 q_2 = 0. $$

  Recall the symmetric oscillation mode: it was characterized by $x_A = -x_B$, and therefore $q_1 = 0$ and $q_2 = 2x_A = -2x_B \neq 0.$ I had also mentioned, back then, that $\omega_2^2$ was not trivial, because it was a mix of the influences of gravity and spring tension. Indeed, here, $\omega_2$ does contain both the influence of gravity (via $\omega_p^2$) and that of the spring tension (via $2\omega_s^2$). Therefore, this mode corresponds to the symmetric mode of motion.

#### Comments (to be remembered!)

* Modes of oscillations cannot be thought of as simple oscillations anymore, with one object or easily identifiable quantity such as electric current describing an oscillation. A mode is a **global** concept in which **all** components of one coupled system of oscillators oscillate at the same frequency, with fixed relative phases, as if described by a (D or S)HO model.
* The oscillations of each individual component are not independent from each other, but **the modes are completely independent**: there is no trace of $q_1$ in the equation for $q_2$, and vice-versa. It means that if one initiates the coupled system in a funky way, and the coupled system does a weird dance of oscillations, the degree of complexity is actually finite: one mode has a certain amplitude and a certain phase, and the other mode has a certain amplitude, presumably different, and a certain phase shift with the first mode. The pattern of oscillations might look crazy, but there are only two modes, and they don't exchange energy from one to the other as long as the system is linear.
* $q_1$ and $q_2$ are called the **normal coordinates**, or **eigencoordinates**, of the **normal modes**, or **eigenmodes**. Their respective **normal frequencies**, or **eigenfrequencies**, are $\omega_1$ and $\omega_2$. It is as if there is a physical space, in which physical coordinates $x_A$ and $x_B$ exist, and a dual space, in which $q_1$ and $q_2$ exist. Both spaces describe the same physical reality, but because they don't describe it in the same way, the "dual" space may be more useful in some cases, including this one. When you start studying Fourier series and later on Fourier analysis, you may call this dual space the "Fourier" space, or "spectral" space. It has nothing to do with Halloween, and everything to do with the fact that the visualization of the physical reality in this dual space is called a (Fourier) spectrum, which looks like the display on a sound equalizer (and it is not a coincidence). I may use the term "Fourier" inadvertently, by force of habit.
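To make the eigenvalue viewpoint from the Expectations section concrete, here is a minimal numerical sketch (the parameter values are assumptions, chosen to match the beating demo further below). Writing the equations of motion as $\ddot{\boldsymbol x} = -\mathsf M \boldsymbol x$ for $\boldsymbol x = (x_A, x_B)$, the normal frequencies are the square roots of the eigenvalues of $\mathsf M$, and the normal coordinates follow from its eigenvectors:

```python
import numpy as np

# assumed parameters (same values as in the beating demo below)
m, l, k, g = 0.4, 0.5, 0.5, 9.81
omega_p2, omega_s2 = g/l, k/m  # squared pendulum and spring frequencies

# equations (1) and (2) in matrix form: d^2/dt^2 (xA, xB) = -M (xA, xB)
M = np.array([[omega_p2 + omega_s2, -omega_s2],
              [-omega_s2, omega_p2 + omega_s2]])

evals, evecs = np.linalg.eigh(M)  # M is symmetric, so eigh applies
print(np.sqrt(evals))                               # numerical omega_1, omega_2
print(np.sqrt([omega_p2, omega_p2 + 2*omega_s2]))   # analytical values
print(evecs)  # columns ~ (1, 1)/sqrt(2) and (1, -1)/sqrt(2): the two normal modes
```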
### More on the Independence of the Modes

#### General solution as a linear superposition of all modes

A bit more algebra, because I want to drive home the fact that the two modes are independent. If the coupled system is initiated somewhat randomly, both modes of oscillation will coexist. They are both solutions of two different SHO equations, and their expressions are therefore

$$q_1 = C_1\cos(\omega_1 t + \phi_1)\andeq q_2 = C_2\cos(\omega_2 t + \phi_2).$$

In physical coordinates, this translates into

$$ x_A = \frac{q_1 + q_2}2 = \frac12\left[C_1\cos(\omega_1 t + \phi_1) + C_2\cos(\omega_2 t + \phi_2) \right],$$

$$ x_B = \frac{q_1 - q_2}2 = \frac12\left[C_1\cos(\omega_1 t + \phi_1) - C_2\cos(\omega_2 t + \phi_2) \right].$$

Meaning that any motion can be considered a linear superposition of the two modes. For the motion of the antisymmetric mode, $C_2 = 0$; for the symmetric mode, $C_1 = 0$; and in general, $C_1\neq 0$ and $C_2\neq 0$.

#### Energetic independence of the modes

Here I show that the two modes each have a certain amount of energy, and that this energy does not get traded from one mode to the other. The kinetic energies of $A$ and $B$ are

$$ K_{A, B} = \frac12 m(\dot x_{A, B})^2, $$

their gravitational potential energies are

$$ U^{(g)}_{A, B} = \frac12\frac{mgx_{A, B}^2}{l} = \frac12 m\omega_p^2 x_{A, B}^2 \quad\textrm{(see PS1 or King § 1.3.2)}, $$

and the potential energy stored in the spring is

$$ U^{(s)} = \frac12 k (x_A - x_B)^2 = \frac12 m\omega_s^2 (x_A - x_B)^2.$$

The total energy stored in the coupled system is therefore

\begin{align*}
E & = K_A + K_B + U^{(g)}_A + U^{(g)}_B + U^{(s)}, \\
& = \frac m2\left\{(\dot x_A)^2 + (\dot x_B)^2 + \omega_p^2 \left[x_A^2 + x_B^2\right] + \omega_s^2 (x_A - x_B)^2\right\}, \\
& = E_1 + E_2,
\end{align*}

with

$$ E_1 = \frac14 m \left[(\dot q_1)^2 + \omega_1^2 q_1^2\right]\andeq E_2 = \frac14 m \left[(\dot q_2)^2 + \omega_2^2 q_2^2\right],$$

which you can check by plugging the definitions of $q_1$ and $q_2$ into $E_1$ and $E_2$ above, and by recalling that $\omega_1^2 = \omega_p^2$ and $\omega_2^2 = \omega_p^2 + 2\omega_s^2$. As for the coordinates, each reservoir of energy, $E_1$ and $E_2$, is uncoupled from the other one: **there is no exchange of energy from one mode of motion to the other**. This is true for any system of coupled oscillators, as long as the linear model is valid.

# Beating Phenomenon

When a coupled system contains two frequencies that are very close to each other, and the two modes are excited with similar amplitudes, we observe what is called a beating phenomenon in physical space.

Take the solutions we derived earlier in physical space, but let us assume that $\phi_1 = \phi_2 = 0$. This can be physically realized by choosing initial conditions appropriately. But more importantly, the phase shifts won't fundamentally change what I am about to describe, so we might as well choose easy ones.

$$ x_A = \frac12\left[C_1\cos(\omega_1 t) + C_2\cos(\omega_2 t) \right],$$

$$ x_B = \frac12\left[C_1\cos(\omega_1 t) - C_2\cos(\omega_2 t) \right].$$

Let us also consider the case $C_1 = C_2 = C$, because it is the simplest case that can illustrate the beating phenomenon:

$$ x_A = \frac C2\left[\cos(\omega_1 t) + \cos(\omega_2 t) \right] = C\cos\left(\frac{\omega_1 + \omega_2}2 t\right)\cos\left(\frac{\omega_2 - \omega_1}2 t\right),$$

$$ x_B = \frac C2\left[\cos(\omega_1 t) - \cos(\omega_2 t) \right] = C\sin\left(\frac{\omega_1 + \omega_2}2 t\right)\sin\left(\frac{\omega_2 - \omega_1}2 t\right).$$

***

This went fast, let me do it more slowly (see also https://play.library.utoronto.ca/fdf0002debf728d9825bb4a7cdceb3d4).
A bit of trigonometry: recall that
$$\cos(a+b) = \cos a \cos b - \sin a \sin b$$
and
$$\cos(a-b) = \cos a \cos b + \sin a \sin b.$$
Sum those two and you get
$$ \cos(a-b) + \cos(a+b) = 2\cos a\cos b . $$
Subtract them and you obtain
$$ \cos(a-b) - \cos(a+b) = 2\sin a\sin b. $$
Now, consider
$$a - b = \omega_1 t \andeq a + b = \omega_2 t, $$
$$\thrfor a = \frac{\omega_1 + \omega_2}2 t \andeq b = \frac{\omega_2 - \omega_1}2 t. $$
Using these formulas yields the result.

***

Or, if we define $\Delta\omega = (\omega_2 - \omega_1)/2$ and $\Omega = (\omega_1 + \omega_2)/2$,
$$ x_A = C\cos\left(\Omega t\right)\cos\left(\Delta\omega t\right),$$
$$ x_B = C\sin\left(\Omega t\right)\sin\left(\Delta\omega t\right).$$
Now, what if $\Delta\omega \ll \omega_1$, $\omega_2$ or $\Omega$? Then,
$$ x_A \approx C\cos\left(\omega_1 t\right)\cos\left(\Delta\omega t\right),$$
$$ x_B \approx C\sin\left(\omega_1 t\right)\sin\left(\Delta\omega t\right).$$
Because $\Delta\omega \ll \omega_1$, the curves look like a fast oscillation at frequency $\omega_1$, constrained by a sinusoidal envelope of frequency $\Delta\omega$, whose period is much longer than that of the fast oscillation (see figs. 6 & 7).

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import interactive
interactive(False)


def plot_beating(k, l, ratioA):
    """
    Create the two mode oscillations, plot each of them separately,
    then plot their superposition.
    INPUT:
    k is the stiffness of the spring; increasing it increases the
        difference between the two frequencies.
    l is the length of the pendulums; increasing it increases the periods.
    ratioA is the ratio C2/C1 of the amplitudes of the two modes.
    """
    # basic quantities
    # k = 0.5  # spring stiffness [N/m]
    m = 0.4  # mass [kg]
    g = 9.81  # gravitational acceleration [m/s2]
    # l = .5  # lengths of pendulums [m]
    C1 = 1.e-1  # amplitude of mode 1 [m]
    C2 = ratioA*C1  # amplitude of mode 2 [m]
    ftsz = 11  # font size on plots
    lnwt = 1
    t_end = 60.
    # derived quantities
    t = np.linspace(0., t_end, 2048)
    omega_p = np.sqrt(g/l)  # natural frequency of uncoupled pendulum
    omega_s = np.sqrt(k/m)  # natural frequency of spring
    omega_1 = omega_p  # 1st eigenfrequency
    omega_2 = np.sqrt(omega_p**2 + 2.*omega_s**2)  # 2nd eigenfrequency
    q1 = C1*np.cos(omega_1*t)
    q2 = C2*np.cos(omega_2*t)
    xA = 0.5*(q1 + q2)  # position of A
    Domega = 0.5*(omega_2 - omega_1)  # Delta omega
    envelope = 0.5*C1*((ratioA-1)*np.sin(Domega*t)
                       + (ratioA+1)*np.cos(Domega*t))  # = C*np.cos(Domega*t) if ratioA = 1

    # 1st plot: each mode individually
    plt.figure(figsize=(12, 4), dpi=100)

    plt.subplot(311)
    ax1 = plt.gca()
    # ax1.axvline(0., color='k')
    ax1.plot(t, q1/2, 'b', linewidth=lnwt)
    ax1.set_xlim([0., t_end])
    ax1.set_ylim([-0.05, 0.05])
    ax1.set_ylabel(r'$C_1\,\cos(\omega_1 t)/2$', fontsize=ftsz)
    ax1.grid()

    plt.subplot(312, sharex=ax1, sharey=ax1)
    ax2 = plt.gca()
    # ax2.axvline(0., color='k')
    ax2.plot(t, q2/2, 'y', linewidth=lnwt)
    ax2.set_ylabel(r'$C_2\,\cos(\omega_2 t)/2$', fontsize=ftsz)
    ax2.grid()

    plt.subplot(313, sharex=ax1, sharey=ax1)
    ax3 = plt.gca()
    # ax3.axvline(0., color='k')
    ax3.plot(t, q1/2, 'b', linewidth=lnwt, label=r'$C_1\,\cos(\omega_1 t)/2$')
    ax3.plot(t, q2/2, 'y', linewidth=lnwt, label=r'$C_2\,\cos(\omega_2 t)/2$')
    ax3.set_ylabel('Both', fontsize=ftsz)
    ax3.set_xlabel('time [s]', fontsize=ftsz)
    ax3.legend()
    ax3.grid()

    # make these tick labels invisible
    plt.setp(ax2.get_xticklabels(), visible=False)
    plt.setp(ax1.get_xticklabels(), visible=False)
    plt.tight_layout()
    plt.autoscale(enable=True, axis='x', tight=True)
    # plt.savefig('BeatingIndividual.png')
    # plt.close()

    # 2nd plot: adding the two, the beating phenomenon
    plt.figure(figsize=(12, 4), dpi=100)
    ax = plt.gca()
    ax.axvline(0., color='k')
    ax.plot(t, xA, 'g', linewidth=lnwt, label=r'$x_A(t)$')
    if ratioA > 0.99:
        ax.plot(t, envelope, 'r-.', label=r'$C\,\cos(\Delta\omega t)$')
        ax.plot(t, -envelope, 'r--', label=r'$-C\,\cos(\Delta\omega t)$')
    ax.set_ylim([-0.14, 0.16])
    ax.set_xlabel('time [s]', fontsize=ftsz)
    ax.set_ylabel('position [m]', fontsize=ftsz)
    ax.grid()
    ax.legend()

    # annotation to highlight the envelope period
    if ratioA > 0.99:
        T = 2*np.pi/Domega
        ax.axvline(T/4, color='k', linestyle='-.')  # the t=T/4 mark
        ax.axvline(5*T/4, color='k', linestyle='-.')  # the t=5T/4 mark
        ax.annotate(text='', xy=(T/4, -.12), xytext=(5*T/4, -0.12),
                    arrowprops=dict(arrowstyle='<|-|>'))  # the double arrow
        ax.text(1.5*np.pi/Domega, -.12, r'$2\pi/\Delta\omega$',
                verticalalignment='center', horizontalalignment='center',
                backgroundcolor='w', fontsize=ftsz)

    # annotation to highlight the fast oscillation period
    T = 4.*np.pi/(omega_2 + omega_1)
    pp = 20.5
    t1 = pp*T
    t2 = (pp+1)*T
    ax.plot([t1]*2, [-C1*np.cos(Domega*t1), 0.13], color='k')  # fast period mark #1
    ax.plot([t2]*2, [-C1*np.cos(Domega*t2), 0.13], color='k')  # fast period mark #2
    ax.annotate(text='', xy=(t1-0.2, .11), xytext=(t2+0.2, 0.11),
                arrowprops=dict(arrowstyle='<|-|>'))  # the double arrow
    ax.text((t1+t2)/2, 0.135, r'$2\pi/\Omega$',
            verticalalignment='center', horizontalalignment='center',
            backgroundcolor='w', fontsize=ftsz)

    # finishing touches
    plt.tight_layout()
    plt.autoscale(enable=True, axis='x', tight=True)
    # plt.savefig('Beating.png')
    plt.show()
    # plt.close()
```

```python
plot_beating(k=0.5, l=0.5, ratioA=1.)  # for pdf export
```

*Below, on the Jupyter notebook, you can interactively increase the value of $\ell$ to decrease the frequencies of both modes, and increase $k$ to increase the difference between the two normal frequencies.
You should see that as you increase $k$, the beating phenomenon becomes less and less visible. By that I mean that you will still see beats, but they will be shorter: the "bursts" will get closer and closer together, and the intervals during which you see an oscillation will be less cleanly separated from the intervals during which you don't. You may want to increase $\ell$ just to make the phenomenon more visible.*

```python
from ipywidgets import interact, FloatSlider
```

```python
interact(plot_beating,
         k=FloatSlider(min=0.5, max=3., step=.1, value=.5),
         l=FloatSlider(min=0.5, max=2.5, step=.1, value=.5),
         ratioA=FloatSlider(min=0.0, max=1., step=.1, value=1.))
```

Figs. 6 and 7 above. Fig. 6, top and middle: the two modes, oscillating with equal amplitudes and slightly different frequencies. Fig. 6, bottom: superposition of the two modes. Fig. 7: sum of the two modes for mass A, i.e., $x_A$, forming a beating pattern.

Below (Jupyter; also at https://youtu.be/CYnR0haH_Qc and https://play.library.utoronto.ca/081230b94886e73ba70d6016aecd227e), you will find a demo illustrating the phenomenon of beating with acoustic waves (we will see them again near the end of this part of the course). There is a Khan Academy video that describes the phenomenon qualitatively.

```python
YouTubeVideo('CYnR0haH_Qc', width=560, height=315)
```

***
*End of 09/29 lecture, beginning of 09/30 lecture.*
***

***
**Worked example: King p. 84.**
***

*Beyond this point, I will not follow King's book closely. You might notice that in his preface, he mentions that his book is designed for a first-year class, for which eigenvalues and eigenvectors would be too advanced a topic. It is our strong opinion (all the instructors involved in PHY293 and 294) that we need to cover this topic in this second-year class, because it is foundational for a lot of other topics that will be tackled in the rest of PHY293-4. If you want a book for help, I can recommend John R. Taylor's "Classical Mechanics" (University Science Books), chapter 11, which I am loosely following.*

***

# General Solution Method

In the previous case, we had only two coupled oscillators, which meant two degrees of freedom (two normal coordinates). It was simple enough that we could solve it "by hand", by defining simple normal coordinates, and adding and subtracting the equations of motion. But as with all things in linear algebra, the difficulty increases rapidly with every equation that we add. For example, what if we coupled 5 oscillators? Not to mention the infinity of oscillators that approximates a continuous medium. We need the systematic and powerful approaches that are offered by linear algebra. And in particular, we will eventually need to use the eigenvector/eigenvalue formalism, which many of you find difficult. It is, however, a necessary step, and a key aspect of the theory of waves and oscillations. So, even though I will have to keep it in mind, my best piece of advice is to brace yourselves.

To brush up on linear algebra, you could either pull up your linear algebra notes from last year, or else use the videos below. We will essentially do what is described in the following video (Jupyter) or at https://youtu.be/PFDu9oVAE-g?t=315 (and that way, I don't have to record one myself). Note that if you go to the beginning of this video, the author (Grant Sanderson) says (I am actually paraphrasing him) that if finding eigenvalues and eigenvectors feels complicated, it is usually because understanding of vectors and matrices is shaky to begin with.
Fortunately, this particular video is the 10th of 11 videos on linear algebra (https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab). Careful though, it takes a few hours to go through all of them.

```python
YouTubeVideo('PFDu9oVAE-g', width=560, height=315, start=315)
```

***
*I will not cover the discussion that follows in class. Reading it may help you understand why and how everything works, but it is beyond what is traditionally expected from 2nd year students.*

## *Results from Linear Algebra*

### *General results*

*Linear algebra can be messy, especially when it comes to whether a matrix is diagonalizable or not. Fortunately for us, I will only use a few of the most basic results of linear algebra.*

1. *Finding the eigenvectors and eigenvalues of an $n\times n$ matrix $A$ means finding the vectors $\vec V$ and scalars $\lambda$ such that*
$$ A \vec V = \lambda \vec V, \oreq (A - \lambda I_n)\vec V = 0,\hspace{5cm}(3)$$
*with $I_n$ the $n\times n$ identity matrix.*
2. *Equation (3) has non-trivial (i.e., non-identically-zero; in our case, oscillations with non-zero amplitudes) solutions for $\vec V$ if and only if
$$ \det(A - \lambda I_n) = 0.\hspace{8cm}(4) $$
The determinant of a matrix is a scalar. Here, it is a polynomial in $\lambda$. It is called the characteristic polynomial, and its roots are the eigenvalues.*
3. *If there is a number $n$ of eigenvalues $\lambda$ that are all distinct from each other, then the matrix is diagonalizable, and the set of the corresponding eigenvectors forms a basis for the space of solutions. In other words, if $\vec V_i$, $i \in \{1\dots{} n\}$, is the complete set of eigenvectors, then any linear combination of them,*
$$\sum_{i=1}^n a_i \vec V_i = \vec X,$$
*is a solution of the original problem as well. In all our cases, $a_i \in \mathbb R$. Note that with damping, we could have $a_i \in \mathbb C$.*
*Furthermore, if $\vec X$ is a solution of eqn. $(3)$, then it can be decomposed as a linear combination of eigenvectors,*
$$\vec X = \sum_{i=1}^n a_i \vec V_i,$$
*and this decomposition is unique.*

*Note: this condition is sufficient, not necessary. Namely, the converse is not true, and a matrix need not have all $n$ eigenvalues distinct from each other to be diagonalizable. But in our cases, they will be, so let's enjoy it.*

### *How does this translate into our results?*

* *First of all, we will have $n = 2$ for two oscillators;*
* *the eigenvectors $\vec V_i$ will have some connection to our normal modes 1 and 2;*
* *the eigenvalues $\lambda_i$ will have some connection to $\omega_{1,2}^2$ (they will be the same, actually);*
* *the eigenvectors, in the abstract case, form a basis of the space of solutions. This is a mathematical manifestation of the fact that any motion of the two masses can be decomposed as a linear combination of two modes, as we saw in the previous section;*
* *solving $\det(A - \lambda I_2) = 0$ for $\lambda$ shows the way for solving coupled systems of oscillators systematically. Finding the roots of the polynomial, however messy it might become, will give us $\omega_1$ and $\omega_2$. Once we have them, substitution into the original eqn. (3) will give us $\vec V_1$ and $\vec V_2$.*
*The math will become super messy, but the procedure is well-defined, and easy to make systematic.*

*Resuming the normal course of operations.*

***

## Coupled pendulums

As an illustration of how powerful such an approach is, let me add the extra complexity that the two masses are different, and equal to $m_A$ and $m_B$. I hope to convince you that while this would have made our previous approach difficult, it does not represent any extra difficulty for the eigenvector method. The equations are therefore
$$ m_A\ddot x_A + \frac{m_A g}{l} x_A + k(x_A - x_B) = 0, \hspace{5cm}(5)$$
$$ m_B\ddot x_B + \frac{m_B g}{l} x_B - k(x_A - x_B) = 0. \hspace{5cm}(6)$$
Now, define a vector
$$ \vec X = \begin{bmatrix} x_A \\ x_B \end{bmatrix}$$
and two matrices
$$ M = \begin{bmatrix} m_A & 0 \\ 0 & m_B \end{bmatrix} \andeq K = \begin{bmatrix} \frac{m_A g}l + k & -k \\ -k & \frac{m_B g}l + k \end{bmatrix}.$$
You can check that
$$ M\ddot{\vec X} + K \vec X = \begin{bmatrix} m_A\ddot x_A + \frac{m_A g}{l} x_A + k(x_A - x_B) \\ m_B\ddot x_B + \frac{m_B g}{l} x_B - k(x_A - x_B) \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},$$
or, more simply, that
$$\boxed{ M\ddot{\vec X} + K \vec X = 0. }$$
This is our new equation of motion. It strongly resembles the SHO equation, and its resolution does involve the same steps:
1. assume $\vec X = \vec A \cos(\omega t + \phi)$,
2. substitute in the EOM and solve for $\omega$,
3. use initial conditions to solve for $\vec A$ and $\phi$.

### The first step...

... implies that $\ddot{\vec X} = -\omega^2 \vec X$. Moreover, $M\ddot{\vec X} = M(-\omega^2 \vec X) = -\omega^2M \vec X$ because the algebra is linear.

### For the second step (solving for $\omega$),

$$M\ddot{\vec X} + K \vec X = -\omega^2 M \vec X + K\vec X = \left(K -\omega^2 M\right)\vec X, \quad\textrm{i.e.,}\quad \boxed{\left(K -\omega^2 M\right)\vec X = 0.} \hspace{4cm} (7) $$
Let me recall one of the most important results of linear algebra: **a matrix equation** $\bs{\textsf{R}V = 0}$ **has non-trivial (i.e., non-identically-zero) solutions if and only if** $\bs{\det[\textsf{R}] =0}$, as I stated in a different form around eqn. (4). Physically, trivial solutions are zero-amplitude oscillations of each mass position or of each mode. Trivially speaking, it is nothing.

The matrices $M$ and $K$ both contain elements that are determined by the physics of the system: masses, stiffness, lengths of the pendulums and gravitational acceleration. The only parameter that can be varied is therefore $\omega$, and it is $\omega$ we are solving for. Mathematically, this step corresponds to finding the **eigenvalues** of eqn. $(7)$. It looks different from eqn. $(3)$, though. However, we can cast our equation in this form very easily by multiplying it by
$$ M^{-1} = \begin{bmatrix} 1/m_A & 0 \\ 0 & 1/m_B \end{bmatrix}, $$
i.e.,
$$ M^{-1}K \vec X = \omega^2 \vec X, $$
which looks like a standard eigenvalue problem. The two ways of writing it are strictly equivalent; the form above would appear if I divided eqns. $(5)$ and $(6)$ by $m_A$ and $m_B$, respectively.

We will also need the following formula:
$$\boxed{ \det\left( \begin{bmatrix} a & b \\ c & d \end{bmatrix} \right) = ad - bc} \quad\textrm{(which I expect you to remember!!!)}.$$

I am going to do it by hand once, but the real advantage of this method is that it is fairly easy to ask a symbolic calculator or symbolic math software to solve for it. I will do it later, but for now, it is instructive to do it by hand.
Here we go:
\begin{align*}
0 = \det(K - \omega^2 M) & = \det\left( \begin{bmatrix} \frac{m_A g}l + k - \omega^2 m_A & -k \\ -k & \frac{m_B g}l + k - \omega^2 m_B \end{bmatrix}\right) \\
& = \left(\frac{m_A g}l + k - \omega^2 m_A\right)\left(\frac{m_B g}l + k - \omega^2 m_B\right) - k^2 \\
& = m_A m_B \left[\left(\omega_p^2 + \omega_{sA}^2 - \omega^2 \right)\left(\omega_p^2 + \omega_{sB}^2 - \omega^2\right) - \omega_{sA}^2\omega_{sB}^2\right],
\end{align*}
with $\omega_p^2 = g/l$, $\omega_{sA}^2 = k/m_A$ and $\omega_{sB}^2 = k/m_B$. It is a second-degree polynomial in $\omega^2$, which is not technically hard to solve, but this is a lot of symbols to keep track of. Let us define $\Omega^2 = \omega_p^2 + \omega_{sA}^2 + \omega_{sB}^2$. If $m_A = m_B$, then $\omega_{sA}^2 = \omega_{sB}^2 = \omega_s^2$ and $\Omega^2 = \omega_p^2 + 2\omega_s^2 = \omega_2^2$, which is a good sign. Let's plug $\Omega^2$ into the last line of the set of equations above (and remember that it has to be equal to zero):
$$\omega_p^2\Omega^2 - \omega^2 \left(\omega_p^2 + \Omega^2\right) + \omega^4 = 0. $$
The discriminant of this polynomial is
$$ \Delta = \left(\omega_p^2 + \Omega^2\right)^2 - 4 \omega_p^2\Omega^2 = \left(\omega_p^2 - \Omega^2\right)^2 $$
and its roots are
\begin{align*}
\omega_1'^2 & = \frac{\omega_p^2 + \Omega^2 - \Omega^2 + \omega_p^2}{2} = \omega_p^2, \\
\omega_2'^2 & = \frac{\omega_p^2 + \Omega^2 + \Omega^2 - \omega_p^2}{2} = \Omega^2 = \omega_p^2 + \omega_{sA}^2 + \omega_{sB}^2.
\end{align*}
You may notice that $\omega_1'^2 = \omega_1^2$ of the previous case, and that if $m_A = m_B = m$, then $\omega_2'^2 = \omega_2^2$. These are our two eigenvalues, which are the squares of the normal frequencies! Also, $\omega_1'^2 \neq \omega_2'^2$. Therefore, based on the general results from linear algebra that I recalled earlier, we will have a basis of eigenvectors onto which to decompose all our oscillations.

### The third step (solving for $\vec A$ and $\phi$)

#### Polarization of the Eigenvectors

We can actually separate $\vec A$ into $C\vec Y$, with $\|\vec Y\| = 1$. This makes things easier in the sense that we separate the amplitude of the mode (its strength, or weight, if you will), which resides in $C$ and which is calculated with the knowledge of the initial conditions, from $\vec Y$, which is a feature of the matrix $K - \omega^2 M$, or of the physical system, and does not depend on the initial conditions. Note that requiring $\|\vec Y\| = 1$ is not necessary in order to solve a problem! I do it, but some textbooks might do it differently. In this case,
$$ (K - \omega^2 M)\vec X = 0 = C (K - \omega^2 M)\vec Y \cos(\omega t + \phi) \quad\Leftrightarrow\quad (K - \omega^2 M)\vec Y = 0. $$
Let
$$\vec Y = \begin{bmatrix} a \\ b \end{bmatrix}.$$
Looking for the relationship between $a$ and $b$ for a given mode is called looking for the polarization of that mode.
\begin{align*}
(K - \omega^2 M)\vec Y & = \begin{bmatrix} \frac{m_A g}l + k - \omega^2 m_A & -k \\ -k & \frac{m_B g}l + k - \omega^2 m_B \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} \\
& = \begin{bmatrix} \left(\frac{m_A g}l + k - \omega^2 m_A\right)a - kb \\ -ka + \left(\frac{m_B g}l + k - \omega^2 m_B\right)b \end{bmatrix} \\
& = \begin{bmatrix} m_A\left[(\omega_p^2 + \omega_{sA}^2 - \omega^2 )a - \omega_{sA}^2 b \right] \\ m_B\left[-\omega_{sB}^2 a + (\omega_p^2 + \omega_{sB}^2 - \omega^2 )b\right] \end{bmatrix} = 0 . \hspace{3cm} (8)
\end{align*}

* Case $\omega^2 = \omega_1^2 = \omega_p^2$:
$$ \begin{bmatrix} ka - kb \\ -ka + kb \end{bmatrix} = 0 \thrfor a = b.$$
*(Notice how the two equations are redundant.)* Therefore, $a=b$, and since $\|\vec Y\| = \sqrt{a^2 + b^2} = 1$, $a = b = 1/\sqrt{2}$. This is the antisymmetric mode.
*Note: the fact that the masses are different does not have an impact. This is because the frequency of oscillation of each pendulum does not depend on the mass attached to it. Therefore, in this mode, the spring is never stretched or compressed.*
* Case $\omega^2 = \omega_2^2 = \omega_p^2 + \omega_{sA}^2 + \omega_{sB}^2$. I just use the first line in eqn. (8), because the second line is redundant again:
$$ -\omega_{sB}^2a - \omega_{sA}^2 b = 0.$$
Therefore, $\omega_{sB}^2 a = -\omega_{sA}^2 b$. We get it: if $m_A = m_B = m$, this would be the symmetric mode, with $a = 1/\sqrt{2} = -b$. With $m_A \neq m_B$, $a$ and $b$ are weighted by some coefficients, i.e., $a = \omega_{sA}^2/\sqrt{\omega_{sA}^4 + \omega_{sB}^4}$ and $b = -\omega_{sB}^2/\sqrt{\omega_{sA}^4 + \omega_{sB}^4}$.

***
*End of 09/30 lecture, beginning of 10/02 lecture.*
***

#### Eigenvectors as a basis for all the solutions

So far, the connection with general linear algebra results is the following: $\vec Y_1$ and $\vec Y_2$ are the eigenvectors of the matrix problem, $\omega_1'^2$ and $\omega_2'^2$ the eigenvalues (or $\omega_1'$ and $\omega_2'$ the eigenfrequencies), and $\vec X_1$ and $\vec X_2$ are the eigenmodes of oscillation. Based on the results from linear algebra that I recalled earlier, it means that all solutions can be written as a linear combination of these eigenmodes:
$$ \vec X = C_1 \vec Y_1\cos(\omega_1 t + \phi_1) + C_2 \vec Y_2\cos(\omega_2 t + \phi_2). $$

#### Amplitudes ($C$) and phases ($\phi$)

As with every initial value problem, it can get messy. Let me just do the cases we investigated before, i.e., the worked problem on King p. 84. In all cases, it was assumed that the masses were released from rest. We could check that in this case, $\phi_1 = \phi_2 = 0$. It was also assumed that $m_A = m_B$.

The first example was $x_A = x_B = A$, which meant that the antisymmetric mode was selected by the initial conditions. In vector form, this means
$$\vec X_0 = \vec X(t=0) = \begin{bmatrix} x_A(t=0) \\ x_B(t=0) \end{bmatrix} = \begin{bmatrix} A \\ A \end{bmatrix} = \sqrt{2} A \vec Y_1 + 0\vec Y_2.$$
Bam! No need to do any sort of math, we know right away that the symmetric mode is zero.

The second example was $x_A = -x_B = A$, which meant that the symmetric mode was selected by the initial conditions. In vector form, this means
$$\vec X_0 = \begin{bmatrix} A \\ -A \end{bmatrix} = 0\vec Y_1 + \sqrt{2}A\vec Y_2.$$
And again! We know right away that the antisymmetric mode is zero.

### Numerical resolution

This bit is outside of what is expected of you, and will only benefit those who read the Jupyter notebooks. As a matter of fact, the pdf export of this part will not be intelligible, though I will try to do the demo in class, time permitting. In any case, it is hard to realize the potential of the eigenmode method without realizing how easy it is to implement it numerically, and how it is possible to scale the number of oscillators up from there. So, let's give it a go.
```python
from sympy import *  # here I import the entire symbolic math package
init_printing()  # to print pretty
```

```python
# we need to declare some symbolic quantities
# omega_sA, omega_p, omega = symbols('omega_sA, omega_p, omega', real=True)
g, l = symbols('g, l', real=True)
k, m_A, m_B = symbols('k, m_A, m_B', real=True)
```

```python
m_A
```

```python
# define the mass matrix
M = Matrix([[m_A, 0], [0, m_B]])
M
```

```python
# Define the stiffness matrix
K = Matrix([[m_A*g/l + k, -k], [-k, m_B*g/l + k]])
K
```

```python
# Compute M**(-1)*K
EVecMat = M.inv()*K
expand(EVecMat)
```

```python
# A bit of refinement: we can substitute k/m_A with omega_sA**2, etc.,
# but first, we need to declare those symbolic variables
omega_sA, omega_sB = symbols('omega_sA, omega_sB', real=True)
omega_p = Symbol('omega_p', real=True)
```

```python
New_EVMat = expand(EVecMat).subs(k/m_A, omega_sA**2); New_EVMat
```

```python
New_EVMat = expand(New_EVMat).subs(k/m_B, omega_sB**2); New_EVMat
```

```python
New_EVMat = expand(New_EVMat).subs(g/l, omega_p**2); New_EVMat
```

```python
# Or it can be done in one fell swoop:
New_EVMat = (expand(EVecMat).subs(k/m_A, omega_sA**2)
             .subs(k/m_B, omega_sB**2)
             .subs(g/l, omega_p**2))
New_EVMat
```

```python
# And now the finishing touch
New_EVMat.eigenvects()
```

The line above is to be interpreted in the following way:
1. there are two different eigenvalues, $\omega_p^2$ and $\omega_p^2+\omega_{sA}^2+\omega_{sB}^2$;
2. each of these eigenvalues appears only once (this is what the "1"s in second position mean);
3. the eigenvectors are $[1, 1]$ (antisymmetric) and $[-\omega_{sA}^2/\omega_{sB}^2, 1]$ (pseudo-symmetric).

Note how SymPy does not bother normalizing the eigenvectors, and that it uses a different sign convention from mine. It means that these choices do not really matter.

```python
# I can even do it with two different lengths of pendulums!
l_A, l_B = symbols('l_A, l_B', real=True)
omega_pA, omega_pB = symbols('omega_pA, omega_pB', real=True)
# New stiffness matrix
K_2 = Matrix([[m_A*g/l_A + k, -k], [-k, m_B*g/l_B + k]])
# new eigenvector problem
EVecMat_2 = M.inv()*K_2
New_EVMat_2 = (expand(EVecMat_2).subs(k/m_A, omega_sA**2)
               .subs(k/m_B, omega_sB**2)
               .subs(g/l_A, omega_pA**2)
               .subs(g/l_B, omega_pB**2))
New_EVMat_2
```

```python
New_EVMat_2.eigenvects()
```

*Note: the result above is so long that it will not display properly in the pdf version of the notes. Which is the message here: the result is so long that it is not practical to handle it symbolically.*

Obviously, in a practical setting, we would be using a numerical math package, not a symbolic one! These excruciatingly complicated expressions would become mere numbers, which computers can crunch out.

***

# *Other Examples*

*For the sake of time, I will not cover these in class. You should be able to do the masses coupled by springs by yourself, as a practice problem. It is something I could ask in an exam, and the problem set should feature several derived problems. Ocean tides are outside the scope of this class and only serve an illustrative purpose.*

## *Masses Coupled by Springs*

See King fig. 4.11.

#### *Equations of motion*

*Let both masses be equal this time, and let all the stiffnesses be equal as well. The equations are:*
$$ m \ddot x_A = -k x_A + k(x_B - x_A) = k x_B - 2 k x_A, $$
$$ m \ddot x_B = -k (x_B - x_A) - kx_B = k x_A - 2 k x_B. $$
*Dividing by $m$:*
$$ \ddot x_A = \omega_s^2 (x_B - 2 x_A), $$
$$ \ddot x_B = \omega_s^2 (x_A - 2 x_B).
$$

#### *Form of the solution*

*We are looking for normal modes. Recall the definition of the normal modes:* **normal modes are modes for which all elements of the coupled system oscillate at the same, unique frequency.** *Therefore, we look for solutions of the form*
$$ \vec X = C\vec Y \cos(\omega t + \phi). $$
*As before, solving the eigenvalue problem will yield the $\omega$'s and $\vec Y$'s, while the $C$'s and $\phi$'s are determined in a second step, with the initial conditions.*

#### *Eigenproblem and solving for $\vec Y$ and $\omega$.*

*The previous system of equations becomes*
$$ \omega_s^2 (2x_A - x_B) = \omega^2 x_A, \hspace{6cm} (9a)$$
$$ \omega_s^2 (2x_B - x_A) = \omega^2 x_B, \hspace{6cm} (9b)$$
*which has the form of an eigenproblem. Indeed, defining*
$$ M^{-1}K = \omega_s^2 \begin{bmatrix} 2 & -1 \\ -1 & 2 \end{bmatrix}, $$
*we can write the above system of equations as*
$$ M^{-1}K \vec X = \omega^2 \vec X. $$
*We find the eigenfrequencies by finding the roots of the characteristic polynomial, i.e., of*
$$ \det(M^{-1} K - \omega^2\mathsf I_2) = 0 = (2\omega_s^2 - \omega^2)^2 - \omega_s^4 = (\omega^2 - \omega_s^2)(\omega^2 - 3\omega^2_s). $$
*Thus, the eigenfrequencies are $\omega_1=\omega_s$ and $\omega_2 = \sqrt{3}\omega_s$.*

*The mode oscillating at $\omega_1= \omega_s$ can be qualified as "antisymmetric" again. It is the mode for which both masses oscillate in sync, and the central spring is neither stretched nor compressed. Therefore, only the external springs stretch or compress, and their natural frequency of oscillation is simply $\omega_s$.*

*By process of elimination, and because the system is neatly symmetric about its centre, we can surmise that the other mode, oscillating at $\omega_2 = \sqrt3 \omega_s$, is a symmetric mode of motion, the motion of both masses being symmetric with respect to the central plane.*

*Defining $\vec Y = [a, b]$, we can find the orientation of the eigenvectors by replacing $x_A$ and $x_B$ by $a$ and $b$ in the system $(9)$, or in only one of the equations, since they are redundant.*

*In the case $\omega = \omega_s$, eqn. $(9a)$ becomes $a - b = 0$, or $a =b$. It is indeed the antisymmetric mode. Normalizing to have $\|\vec Y_1\| = 1$ (*which I do, but which isn't necessary!*) yields $a = b = 1/\sqrt{2}$.*

*In the case $\omega = \sqrt{3}\omega_s$, eqn. $(9a)$ becomes $a = -b$. It is indeed the symmetric mode. Normalizing to have $\|\vec Y_2\| = 1$ yields $a = -b = 1/\sqrt{2}$.*

*Any solution of the coupled oscillator system is therefore*
$$ \vec X = \frac{C_1}{\sqrt{2}}\begin{bmatrix} 1 \\ 1 \end{bmatrix}\cos(\omega_s t + \phi_1) + \frac{C_2}{\sqrt{2}}\begin{bmatrix} 1 \\ -1 \end{bmatrix}\cos(\sqrt3\omega_s t + \phi_2).$$
*Any actual problem would then be completed by using initial conditions in order to solve for the $C$'s and $\phi$'s. I won't do it.*

## *Ocean Tides*

*For your interest only. This discussion is based on the following two articles:*

* *Arbic, Brian K., and Chris Garrett. 2010. "A Coupled Oscillator Model of Shelf and Ocean Tides." Continental Shelf Research 30 (6). Pergamon: 564–74. doi:10.1016/j.csr.2009.07.008.*
* *Arbic, Brian K., Richard H. Karsten, and Chris Garrett. 2009. "On Tidal Resonance in the Global Ocean and the Back-effect of Coastal Tides upon Open-ocean Tides." Atmosphere-Ocean 47 (4): 239–66. doi:10.3137/OC311.2009.*

*Every continent is surrounded by continental shelves, i.e., regions where the ocean floor is relatively flat, and relatively shallow (less than 500 m deep).
You may easily realize this by looking at Google Maps in satellite mode. In contrast, the "open ocean" is in general much deeper (5 km). Because these two regions of the ocean are so different, they can be thought of as individual oscillators with respect to tidal forcing, with their own resonance frequencies and their own damping rates: damping is much stronger for tides in shallower seas, because the proximity of the bottom creates much more fluid friction and turbulence.*

*Some continental shelves are fairly well isolated from the open ocean. The example that is closest to home, but also one of the best examples in the world, is Hudson Bay. It is very large (over 1000 km in diameter) and shallow (about 100 m on average), and coupled to the open ocean via a series of very narrow and even shallower straits.*

*One would think that such an isolated body of water does its own thing, resonating with the tidal forcing without influencing the other bodies of water. This would be neglecting the peculiar properties of resonance, and in particular that even if the coupling between two oscillators is weak, if the forcing provided by the first oscillator to the second happens to be at a resonance frequency of the second, the second oscillator will react strongly. As a consequence, the coupled system will have a fundamentally different behaviour than if the two oscillators were uncoupled: modes of oscillation take over, and the behaviour becomes global.*

*Arbic and his colleagues were interested in paleo-oceanography, and in particular: what were the tides like during the last glacial maximum, when polar ice caps were retaining a lot of the world's water in solid form, above sea level? Back then, sea level was more than 100 m below what it is now, and the oscillators, as well as the couplings between them, must have been very different. They realized it was very complicated, and that the slightest changes in the coastal configuration could change the global tides radically. Take a look at Arbic and Garrett 2010, page 2: they push the analogy as far as drawing two masses linked by springs, go over a discussion about damping, plot the same resonance curves that I drew in chapter 3, etc.!*

*The two articles that I am citing are proof-of-concept kinds of articles, in which they run numerical simulations of ocean tides and artificially change tiny bits of coasts. In particular, they block Hudson Strait between Hudson Bay and the Atlantic Ocean, and find that the tides are completely modified everywhere in the Atlantic Ocean (see fig. 15a, b, c of Arbic et al. 2009, reproduced here), and even as far as India, where a 20 cm increase in tidal amplitude happens!*

*Such considerations illustrate how delicate it is to predict adaptation to climate change. If the sea level is predicted to rise by metres, what will it entail, e.g., for artificial harbours, which have been designed to operate optimally under present-day tides? The changes in amplitudes and phases of tides might render some harbours less optimal than others, some breeding grounds might change, some tourist destinations might need to be redesigned...*

# On the Orthogonality of the Eigenmodes

## One Last Bit of Linear Algebra

The last bit of linear algebra that I will be using is that if a matrix $\textsf P$ is real symmetric, i.e., if all elements of $\textsf P$ are real and $\textsf P^T = \textsf P$, then the eigenvectors of $\textsf P$ are orthogonal to each other.
That is, for eigenvectors that have been normalized ($\forall i\in \{1\dots{}n\}, \|V_i\| = 1$),
$$\forall (i, j) \in \{1\dots n\}^2, \qquad V_i\cdot V_j = \delta_{ij}, $$
where $\delta_{ij} = 1$ if $i=j$, and $\delta_{ij} = 0$ otherwise ($\delta_{ij}$ is called the "Kronecker delta").

*Note: the generalization of a real symmetric matrix is called a Hermitian matrix, for which $P_{ij} = P_{ji}^*$, where the asterisk means complex conjugate. The result above applies to Hermitian matrices, which will be very important very soon, in quantum mechanics.*

In general, what the dot product is depends on the vector space. In our case, it is simply that if $\vec Y_a = [a_1, a_2]$ and $\vec Y_b = [b_1, b_2]$, then $\vec Y_a \cdot \vec Y_b = a_1b_1 + a_2b_2$.

Let me re-use the example of the two coupled pendulums of different masses. The matrix $\textsf P$ corresponded to
$$ M^{-1}K = \begin{bmatrix} \frac{g}l + \frac{k}{m_A} & -\frac{k}{m_A} \\ -\frac{k}{m_B} & \frac{g}l + \frac{k}{m_B} \end{bmatrix},$$
and the normalized eigenvectors were
$$ \vec Y_1 = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 1 \end{bmatrix} \andeq \vec Y_2 = \frac{1}{\sqrt{\omega_{sA}^4 + \omega_{sB}^4}}\begin{bmatrix} \omega_{sA}^2 \\ -\omega_{sB}^2 \end{bmatrix} = \frac1{\sqrt{m_A^2 + m_B^2}}\begin{bmatrix} m_B \\ -m_A \end{bmatrix}.$$
The projection of the two eigenvectors onto each other is therefore
$$ \vec Y_1 \cdot \vec Y_2 = \frac{m_B - m_A}{\sqrt{2(m_A^2 + m_B^2)}}. $$
Therefore, if $m_A = m_B$, the two statements "$\vec Y_1 \perp \vec Y_2$" and "$M^{-1}K$ is real symmetric" are both true. In light of the linear algebra result I recalled above, this is no coincidence.

Cases in which the matrix is real symmetric correspond to a wide range of applications. In these cases, the orthogonality property simplifies the calculations greatly, especially when the number of oscillators becomes large. The reason is that it becomes very easy to isolate every mode. Here, I divide the examples of applications into two classes: initial value problems (the free response), and forced problems (the driven response).

*Note: there are relatively easy ways to generalize these results to non-symmetric matrices, because $\vec Y_i M\vec Y_j = 0$ if $i\neq j$. It represents an additional level of mathematical complexity that, while not that difficult, is too much to deal with in the short amount of time I have.*

## Initial Value Problems

Recall that the free response of the coupled system of oscillators can be written
$$ \vec X = C_1 \vec Y_1 \cos(\omega_1 t + \phi_1) + C_2 \vec Y_2 \cos(\omega_2 t + \phi_2), $$
and that at $t = 0$, $\vec X(t=0) = \vec X_0 = C_1 \vec Y_1\cos\phi_1 + C_2 \vec Y_2\cos\phi_2$. The velocity is
$$ \vec V = -\omega_1 C_1 \vec Y_1 \sin(\omega_1 t + \phi_1) - \omega_2 C_2 \vec Y_2 \sin(\omega_2 t + \phi_2),$$
which initially is $\vec V(t=0) = \vec V_0 = -\omega_1C_1 \vec Y_1\sin\phi_1 -\omega_2 C_2 \vec Y_2\sin\phi_2$.

### $M^{-1}K$ symmetric

This is the only case you need to know how to do. Finding the amplitude and phase of each mode from the initial conditions is merely a matter of projecting onto $\vec Y_1$ and $\vec Y_2$:
$$ \vec X_0\cdot \vec Y_1 = C_1 \cos\phi_1, $$
$$ \vec X_0\cdot \vec Y_2 = C_2 \cos\phi_2, $$
$$ \vec V_0\cdot \vec Y_1 = -\omega_1 C_1 \sin\phi_1, $$
$$ \vec V_0\cdot \vec Y_2 = -\omega_2 C_2 \sin\phi_2, $$
because $\vec Y_1 \cdot \vec Y_2 = 0$.

*Note: if $\vec Y_1$ and $\vec Y_2$ are not normalized, you need to multiply these right-hand sides by $\|\vec Y_1\|^2$ and $\|\vec Y_2\|^2$!!!*
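*Numerical aside (my own sketch, not in King): the projection recipe above is easy to check with `numpy`, for assumed values of $\omega_p$ and $\omega_s$ and identical masses (so that $M^{-1}K$ is real symmetric). The worked example right below does the same calculation by hand.*

```python
# Recover C_i cos(phi_i) and C_i sin(phi_i) by projecting the initial
# conditions onto the eigenvectors (omega_p, omega_s, A are arbitrary values).
import numpy as np

om_p, om_s, A = 2.0, 1.3, 0.1
MK = np.array([[om_p**2 + om_s**2, -om_s**2],
               [-om_s**2, om_p**2 + om_s**2]])  # M^{-1}K for m_A = m_B

evals, Y = np.linalg.eig(MK)   # columns of Y are normalized eigenvectors
order = np.argsort(evals)      # sort so that omega_1 <= omega_2
omegas, Y = np.sqrt(evals[order]), Y[:, order]
Y = Y*np.sign(Y[0, :])         # fix the sign convention (first component > 0)

X0, V0 = np.array([A, 0.]), np.array([0., 0.])  # released from rest
Ccos = Y.T @ X0                # C_i cos(phi_i)
Csin = -(Y.T @ V0)/omegas      # C_i sin(phi_i)
C, phi = np.hypot(Ccos, Csin), np.arctan2(Csin, Ccos)
print(C, phi)                  # expect C_1 = C_2 = A/sqrt(2), phi_1 = phi_2 = 0
```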
Take the example of the identical coupled pendulums, with $x_{A0} = A > 0$, $x_{B0} = 0$, $v_{A0} = v_{B0} = 0.$ We had $\vec Y_1 = [1, 1]/\sqrt{2}$ and $\vec Y_2 = [1, -1]/\sqrt{2}$.
$$ \vec X_0\cdot \vec Y_1 = A/\sqrt{2} + 0/\sqrt{2} = A/\sqrt{2} = C_1 \cos\phi_1, $$
$$ \vec X_0\cdot \vec Y_2 = A/\sqrt{2} - 0/\sqrt{2} = A/\sqrt{2} = C_2 \cos\phi_2, $$
$$ \vec V_0\cdot \vec Y_1 = 0/\sqrt{2} + 0/\sqrt{2} = 0 = -\omega_1 C_1 \sin\phi_1, $$
$$ \vec V_0\cdot \vec Y_2 = 0/\sqrt{2} - 0/\sqrt{2} = 0 = -\omega_2 C_2 \sin\phi_2. $$
The only way for the last two equations to be satisfied with $C_1 > 0$ and $C_2 > 0$ is $\phi_1 = \phi_2 = 0$, in which case $C_1 = C_2 = A/\sqrt{2}$, and the solution is
$$ \vec X = \frac{A}{\sqrt2} \vec Y_1 \cos(\omega_1 t) + \frac{A}{\sqrt2} \vec Y_2 \cos(\omega_2 t) = \frac{A}{2} \stirling11 \cos(\omega_1 t) + \frac{A}2 \stirling{1}{-1} \cos(\omega_2 t). $$

### $M^{-1}K$ non-symmetric

It is actually fairly simple to generalize the results above to a non-symmetric $M^{-1}K$ matrix, but you are not required to remember it. *So much so that I will skip it in class!*

In this case, $\vec Y_1 \cdot \vec Y_2 \neq 0$, but $\vec Y_1 M \vec Y_2 = 0$. Therefore, instead of computing $\vec X_0 \cdot \vec Y_1$, etc., finding the coefficients entails computing $\vec X_0 M \vec Y_1$, etc., and dividing by $\vec Y_1 M \vec Y_1$, etc., because the $M$-weighted norms of the eigenvectors are no longer equal to one.

Take the example of the coupled pendulums, with $x_{A0} = A > 0$, $x_{B0} = 0$, $v_{A0} = v_{B0} = 0.$ We now choose $m_A = m$, and $m_B = 2m$. Therefore, $\vec Y_1 = [1, 1]/\sqrt{2}$ and $\vec Y_2 = [2, -1]/\sqrt{5}$.

Then,
$$ M \vec Y_1 = \frac1{\sqrt2}\begin{bmatrix}m & 0 \\ 0 & 2 m\end{bmatrix}\stirling{1}{1} = \frac{m}{\sqrt{2}}\stirling{1}{2} \and M \vec Y_2 = \frac1{\sqrt5}\begin{bmatrix}m & 0 \\ 0 & 2 m\end{bmatrix}\stirling{2}{-1} = \frac{2m}{\sqrt{5}}\stirling{1}{-1}.$$

You can check that
$$ \vec Y_1 M \vec Y_2 = \frac{1}{\sqrt{10}} [1, 1]\begin{bmatrix}m & 0 \\ 0 & 2 m\end{bmatrix}\stirling{2}{-1} = \frac{2m}{\sqrt{10}}[1, 1] \stirling{1}{-1} = 0,$$
and same for $\vec Y_2 M \vec Y_1.$ We will also need
$$ \vec Y_1 M \vec Y_1 = \frac{3m}{2} \and \vec Y_2 M \vec Y_2 = \frac{6m}{5}. $$

For the coefficients, the procedure becomes
$$ \vec X_0 M \vec Y_1 = \frac{mA}{\sqrt{2}} = C_1 \cos\phi_1 \; \vec Y_1 M \vec Y_1 \quad\thrfor C_1\cos\phi_1 = \frac{\sqrt{2}A}{3}, $$
$$ \vec X_0 M \vec Y_2 = \frac{2mA}{\sqrt{5}} = C_2 \cos\phi_2 \; \vec Y_2 M \vec Y_2 \quad\thrfor C_2\cos\phi_2 = \frac{\sqrt{5}A}{3}, $$
$$ \vec V_0 M \vec Y_1 = 0 = -\omega_1 C_1 \sin\phi_1 \; \vec Y_1 M \vec Y_1, $$
$$ \vec V_0 M \vec Y_2 = 0 = -\omega_2 C_2 \sin\phi_2 \; \vec Y_2 M \vec Y_2. $$
(You can check that these coefficients reconstruct $\vec X_0$: $\frac{\sqrt2 A}3 \vec Y_1 + \frac{\sqrt5 A}3 \vec Y_2 = \frac A3\stirling11 + \frac A3\stirling{2}{-1} = \stirling{A}{0}$.)

## Forced Problems

*I will not have time to cover this sub-section, which you should treat as optional. I wrote it back when I had 19 lectures instead of 18, and it would break my heart to delete all of this. It may also help you consolidate what you learned in Chapter 3.*

Let us assume that there is some periodic force applied to the masses, and recall that $m_A = m_B = m$:
$$ m\ddot x_A + \frac{mg}{l}x_A + k(x_A - x_B) = F_A\cos(\omega t),$$
$$ m\ddot x_B + \frac{mg}{l}x_B -k(x_A - x_B) = F_B\cos(\omega t),$$
which in matrix form can be written
$$ \ddot{\vec X} + M^{-1}K \vec X = \vec \Psi\cos(\omega t),\hspace{5cm}(10) $$
with $\vec \Psi = [F_A/m, F_B/m]$. Note that the problem is not an eigenvalue problem anymore. The vectors $\vec Y_1$ and $\vec Y_2$, which were eigenvectors of the free problem, form an orthogonal basis for any 2D vector. Let me repeat this: $\vec Y_1$ and $\vec Y_2$ lose their special status as eigenvectors, but they still form an orthogonal basis on which any vector can be decomposed.
Meaning that we can write
$$\vec X = h_1\vec Y_1 + h_2 \vec Y_2.$$
Let me reiterate that the projections of $\vec X$ on $\vec Y_1$ and $\vec Y_2$, i.e., $h_1$ and $h_2$, are not eigenmodes of the forced problem. "Eigenmodes of the forced problem" makes as much sense as "free oscillations of the forced oscillator" in the case of just one oscillator, i.e., no sense at all. Note, however, that in this particular case, $h_1 = q_1/\sqrt2$ and $h_2 = q_2/\sqrt2$ if $q_1 = x_A + x_B$ and $q_2 = x_A - x_B$.

The forcing can also be projected onto $\vec Y_1$ and $\vec Y_2$:
$$\vec\Psi = \Psi_1 \vec Y_1 + \Psi_2 \vec Y_2,$$
with $\Psi_1 = (F_A + F_B)/(\sqrt{2}m)$ and $\Psi_2 = (F_A - F_B)/(\sqrt{2}m)$. However, the crucial point here is that $M^{-1}K \vec Y_{1, 2} = \omega^2_{1,2}\vec Y_{1, 2}$ is still true, by construction of $\vec Y_{1,2}$. Collecting all of these comments, equation $(10)$ can therefore be re-written
$$ \ddot h_1{\vec Y}_1 + \ddot h_2 {\vec Y}_2 + \omega_1^2 h_1 \vec Y_1 + \omega_2^2 h_2 \vec Y_2 = (\Psi_1 \vec Y_1 + \Psi_2 \vec Y_2)\cos(\omega t).$$
Projecting the equation above onto $\vec Y_1$ and $\vec Y_2$ yields, respectively,
$$\ddot h_1 + \omega_1^2 h_1 = \Psi_1\cos(\omega t),$$
$$\ddot h_2 + \omega_2^2 h_2 = \Psi_2\cos(\omega t).$$
Therefore, individual modes can resonate, just like a simple oscillator can resonate. This time, however, the complexity of the possible cases increases. For one mode to resonate, two conditions need to be satisfied. For example, for the first mode to resonate, we need $\omega = \omega_1$, just like in the simple oscillator case, but we also need $\Psi_1 \neq 0$. In our coupled pendulums, the first mode (the antisymmetric one) does not grow if the forcing is symmetric, i.e., if $F_A = -F_B$, in which case $\Psi_1 = 0$.

In order to induce a resonance phenomenon, the frequency of the forcing has to match the corresponding eigenfrequency, and the forcing pattern needs to match, somewhat, the polarization of the mode, i.e., the "shape of the mode" (whether it is symmetric or antisymmetric, or whatever the pattern actually looks like). "Somewhat" means that the projection of the force on the eigenvector has to be non-zero. For example, if only one mass feels a force, say, $F_A \neq 0$ and $F_B = 0$, the pattern is not exactly matched, but $\Psi_1 = F_A/(\sqrt{2}m) + 0 = F_A/(\sqrt{2}m) \neq 0$, which is enough to trigger a resonance.

All of this is true for any coupled system of oscillators, and for however many degrees of freedom there are. It is also true for non-symmetric matrices, but the mathematics are more complicated to describe in the short amount of time I have. Of course, everything you have learned about damped harmonic oscillator resonance also applies to the modes. For example, if there is damping, the resonance curve will have a certain width: there can be amplification for frequencies that do not exactly match the eigenfrequencies of the free system. Each mode will have its own damping rate, and its own quality factor.
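*As a quick illustration (my own sketch: I add a small damping rate $\gamma$ by hand to keep the resonance peaks finite, and all values are arbitrary), one can plot the steady-state amplitude of each modal coordinate against the driving frequency. Forcing only mass $A$ makes both $\Psi_1$ and $\Psi_2$ non-zero, so both peaks appear; with $F_A = -F_B$ instead, $\Psi_1 = 0$ and the first peak would disappear.*

```python
# Steady-state modal amplitudes |h_i| vs driving frequency, with a small
# damping rate gamma added by hand (all parameter values arbitrary).
import numpy as np
import matplotlib.pyplot as plt

om_p, om_s, gamma, m = 2.0, 1.3, 0.1, 1.0
om1, om2 = om_p, np.sqrt(om_p**2 + 2*om_s**2)  # the two eigenfrequencies
FA, FB = 1.0, 0.0                              # force applied to mass A only
Psi1 = (FA + FB)/(np.sqrt(2)*m)                # projection of the forcing on Y_1
Psi2 = (FA - FB)/(np.sqrt(2)*m)                # projection of the forcing on Y_2

om = np.linspace(0.5, 4.5, 1000)
h1 = Psi1/np.sqrt((om1**2 - om**2)**2 + (gamma*om)**2)
h2 = Psi2/np.sqrt((om2**2 - om**2)**2 + (gamma*om)**2)

plt.plot(om, h1, label=r'$|h_1|$ (antisymmetric mode)')
plt.plot(om, h2, label=r'$|h_2|$ (symmetric mode)')
plt.axvline(om1, color='k', linestyle=':')
plt.axvline(om2, color='k', linestyle=':')
plt.xlabel(r'driving frequency $\omega$ [rad/s]')
plt.ylabel('steady-state amplitude')
plt.legend()
plt.show()
```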
For example, an atom in elemental form or a molecule can be thought of as a coupled system of oscillators. Atoms can move around, and the energy levels of electrons can change, adding all sorts of degrees of freedom. This is how the atmospheric composition of the Sun can be determined from an absorption spectrum (cf. chapter 3). The reverse is true: every molecule has a specific emission spectrum, consisting of the emission of light at the resonance frequencies of the molecule. One just needs to excite the atom or molecule broadly, for example by heating it, and measure which frequencies are emitted by the atom or molecule. The more degrees of freedom there are, the more frequencies are emitted (cf. emission spectra of hydrogen vs. iron from Wikipedia, reproduced here).

![Fig. 14: Visible lines of the emission spectrum of hydrogen, by Merikanto, Adrignola [CC0], via Wikimedia Commons](800px-Emission_spectrum-H.png)

![Fig. 15: Visible lines of the emission spectrum of iron, by User:nilda (Own work) [Public domain], via Wikimedia Commons](Emission_spectrum-Fe.png)

# $n$ Coupled Oscillators

While I have illustrated this entire chapter with two oscillators, all the qualitative results apply to any number of oscillators. Right after this chapter, we will study waves, which can be thought of as a superposition of a finite or infinite number of modes, which arise from the fact that waves propagate on a continuous medium made of an infinite number of coupled oscillators. I actually hesitated between making this section the last section on coupled oscillators, or the first section on waves. Ultimately, it does not matter, as long as the transition is clearly understood.

To illustrate this point, i.e., that a large number of coupled oscillators create waves, and that waves are really an infinitely dense network of coupled oscillators, I made the demo below (Jupyter; also at https://youtu.be/Ki70ShYFtmA and https://play.library.utoronto.ca/c4febe4cb2b4b0e8ee5641ffa524b24f).

```python
YouTubeVideo('Ki70ShYFtmA', width=560, height=315)
```

Here, I will illustrate this point with identical masses, coupled by identical springs (cf. King fig. 4.11, reproduced earlier), because it is simpler than the coupled pendula, but complex enough to illustrate the principle. With two masses, we had
$$ m \ddot x_A = -k x_A + k(x_B - x_A), $$
$$ m \ddot x_B = -k (x_B - x_A) - kx_B. $$
With three masses $A$, $B$ and $C$, we would have
$$ m \ddot x_A = -k x_A + k(x_B - x_A), $$
$$ m \ddot x_B = -k (x_B - x_A) + k(x_C-x_B), $$
$$ m \ddot x_C = -k (x_C - x_B) - kx_C. $$
And with $N$ masses, we would have (I now index the masses with numbers instead of letters)
$$ m \ddot x_1 = -k x_1 + k(x_2 - x_1) = - 2 k x_1 + k x_2, $$
$$ m \ddot x_n = -k (x_{n} - x_{n-1}) + k(x_{n+1} - x_{n}) = k(x_{n+1} -2x_n + x_{n-1})\quad\forall n\neq 1, N, $$
$$ m \ddot x_N = -k (x_N - x_{N-1}) - kx_N = -2k x_N + k x_{N-1}. $$
After division by $m$, we can turn this system into an $N \times N$ eigenvalue problem
$$ (M^{-1}K - \omega^2 \textsf I_N) \vec X = 0, $$
with
$$ M^{-1}K = \omega_s^2\begin{bmatrix}
2 & -1 & 0 & \dots & & & 0 \\
-1 & 2 & -1 & 0 & \dots & & 0 \\
0 & \ddots & \ddots & \ddots & & & \vdots \\
\vdots & 0 & -1 & 2 & -1 & 0 & \\
 & & & \ddots & \ddots & \ddots & 0 \\
0 & \dots & & & 0 & -1 & 2
\end{bmatrix}. $$
This matrix is actually simple enough that we could find the eigenvalues analytically (I verify this claim in a short code cell at the end of this section). My intention, however, is to show you that from a numerical point of view, a $3\times 3$ matrix and a $N\times N$ matrix are as hard to solve as each other, as long as one understands how to use the methods of linear algebra. Instead of `SymPy`, Python's main symbolic math package, I will use `NumPy`, Python's main scientific computing package. Unlike `SymPy`, `NumPy` only understands numbers, not symbols.

```python
# We already imported NumPy, but it is useful to import
# NumPy's linear algebra functions separately.
import numpy.linalg as LA

# the following two packages allow me to display the animation
from matplotlib import animation
from IPython.display import HTML
# More about animations:
# https://stackoverflow.com/questions/43445103/inline-animations-in-jupyter
```

```python
N = 4  # number of oscillators
```

```python
iMK = np.zeros((N, N))  # creates a NxN square matrix filled with zeros
# for simplicity, we set omega_s = 1 rad/s
# I will fill the matrix in a somewhat clumsy way, for pedagogical reasons
# However, there are functions to create tridiagonal matrices automatically
iMK[0, 0] = 2.    # top left-hand corner
iMK[0, 1] = -1    # top line, one to the right
iMK[-1, -1] = 2   # bottom right-hand corner
iMK[-1, -2] = -1  # bottom line, one to the left
if N > 2:
    for n in range(1, N-1):  # this loop makes filling the matrix automatic
        # we loop from the second to the penultimate line, which are all the same
        iMK[n, n] = 2      # diagonal terms
        iMK[n, n-1] = -1   # lower diagonal
        iMK[n, n+1] = -1   # upper diagonal
print(iMK)
```

```python
eigvals, eigvecs = LA.eig(iMK)
```

```python
print(eigvals)
```

```python
eigfreqs = np.sqrt(eigvals)
print(eigfreqs)
```

```python
print(eigvecs)
```

The `numpy.linalg.eig` function returns output that is different from that of the SymPy `eigenvects` method we used earlier:
1. the first array lists all eigenvalues (repeated eigenvalues are simply written multiple times);
2. the second array provides the corresponding displacement amplitudes of each mass (the so-called polarization relations of each mode). The way it is displayed above, each column (`eigvecs[:, i]`) corresponds to the coefficients of the series of masses for one mode (e.g., all $a_n$ or $b_n$ for mode $n$), while each row (`eigvecs[i, :]`) corresponds to the coefficients of the series of modes for one mass (e.g., all $a_n$, $1 \leq n \leq N$, when there are $N$ modes).

It is still a bit obscure at this point, and plotting the results will help. But before that, I want to sort the eigenfrequencies from lowest to highest, because `numpy.linalg.eig` does not necessarily do it.
```python
print("Sequence that would sort the eigenfrequencies:")
seq = np.argsort(eigfreqs)
print(seq)
print()
print("Sorted eigenfrequencies:")
print(eigfreqs[seq])
print()
print("and the corresponding mass position amplitudes are")
for ii in range(N):
    print(eigvecs[:, seq[ii]])
```

```python
# Various quantities
t_end = 2.*np.pi/eigfreqs.min()  # time array spans one longest eigenperiod
n_frames = 100  # number of frames for animation
time = np.linspace(0., t_end, n_frames)  # time array
max_amp = abs(eigvecs.max())  # maximum displacement of any mass in any eigenmode
L_inter = 3*max_amp  # distance between the masses; this makes sure that it is enough
L_tot = L_inter*float(N+1)  # total length between the two walls
rest_positions = np.arange(L_inter, (N+0.5)*L_inter, L_inter)  # positions at rest

# prepping the coordinates on the plot: for each time step, we will plot the
# positions on the x axis and the mode number on the y axis
x_positions = np.zeros((N, N))  # N positions for N modes
y_modes = np.zeros((N, N))  # N positions for N modes
for mode_number in range(1, N+1):
    y_modes[mode_number-1, :] = mode_number

imagelist = []  # list of frames to eventually animate

fig = plt.figure()
ax = plt.gca()
ax.set_xlim([0., L_tot])
ax.set_ylim([0., N+1])
ax.set_xticks(rest_positions)
ax.set_xlabel('positions')
ax.set_yticks(range(1, N+1))
ax.set_ylabel('mode number')
ax.grid()

for t in time:  # We loop over time to animate the masses
    for mode_number in range(1, N+1):
        ii = seq[mode_number-1]  # this will select the correct mode in the list
        x_positions[mode_number-1, :] = (rest_positions
                                         + eigvecs[:, ii]*np.cos(eigfreqs[ii]*t))
    im = plt.scatter(x_positions, y_modes, c=y_modes, cmap='copper')
    imagelist.append([im])

ani = animation.ArtistAnimation(fig, imagelist, interval=50, blit=True,
                                repeat_delay=1000)

plt.close()
```

```python
# Show the animation
HTML(ani.to_html5_video())
```

```python
# Save the animation
ani.save('masses_for_{0:02d}_modes.mp4'.format(N), dpi=100)
```

The last command saves the animation as a `.mp4` video file. I saved a few and uploaded them onto Quercus.

A few comments. You are not required to remember them at this point, but you will be later on in this lecture series on waves and oscillations, so we might as well start early.

* For some modes, some masses don't move, or hardly at all. Likewise, some masses move symmetrically around points that would stand perfectly still if a mass were sitting on them. Such points are called "nodes". In between nodes are locations where the motions are largest. These points are called "antinodes". We will be able to visualize them better with waves on a string.
* The distance between two nodes corresponds to half a wavelength: every two nodes, the motion is periodic (spatial periodicity).
* The larger the mode number (or the frequency) is, the more nodes there are, or equivalently, the shorter the wavelengths are. As a matter of fact, the first mode has zero nodes, the second mode has one node, and the $n^{th}$ has $n-1$ nodes. This is a general feature of standing waves, again better visualized with waves on a string.
* Modes with more nodes are called "higher modes"; those with fewer nodes are called "low", or "grave". The terminology comes from sound waves. Sound waves with larger wavelengths indeed sound graver (lower frequency) than sound waves with smaller wavelengths (higher frequencies).
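*One last check, for the curious: the eigenvalues of this particular tridiagonal matrix are known in closed form, $\omega_n = 2\omega_s\sin\left[\frac{n\pi}{2(N+1)}\right]$, a standard result for the discrete Laplacian with fixed ends. The cell below (my addition) compares this expression with the numerical eigenfrequencies computed above, reusing `N` and `eigfreqs`; recall that we set $\omega_s = 1$ rad/s.*

```python
# Cross-check numpy's eigenfrequencies against the closed-form expression
n = np.arange(1, N + 1)
analytic = 2.0*np.sin(n*np.pi/(2.0*(N + 1)))  # omega_n = 2*omega_s*sin(n*pi/(2(N+1)))
print(np.sort(eigfreqs))
print(analytic)  # the two lists should match to round-off
```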
b9f420395baf23a90fc6910c19076bcb7caed411
115,500
ipynb
Jupyter Notebook
PHY293/C04-Coupling.ipynb
ngrisouard/TenureApplicationCode
68f60dcfea11cdbbad17cf0b231e55cc37c32f38
[ "MIT" ]
1
2021-12-12T11:26:43.000Z
2021-12-12T11:26:43.000Z
PHY293/C04-Coupling.ipynb
ngrisouard/TenureApplicationCode
68f60dcfea11cdbbad17cf0b231e55cc37c32f38
[ "MIT" ]
null
null
null
PHY293/C04-Coupling.ipynb
ngrisouard/TenureApplicationCode
68f60dcfea11cdbbad17cf0b231e55cc37c32f38
[ "MIT" ]
1
2021-12-12T11:26:44.000Z
2021-12-12T11:26:44.000Z
34.601558
708
0.563411
true
21,817
Qwen/Qwen-72B
1. YES 2. YES
0.712232
0.749087
0.533524
__label__eng_Latn
0.994598
0.077884
# Lab03: Machine Learning

- Student ID (MSSV):
- Full name:

## Assignment requirements

**How to do the assignment**

You will work directly in this notebook; in the file, `TODO` markers indicate the parts you need to complete. You may discuss ideas and consult references, but *the code and the work must be your own*. Violations will receive a 0 for this assignment.

**How to submit**

Before submitting, rerun the notebook (`Kernel` -> `Restart & Run All`). Then create a folder named after your student ID (e.g., if your ID is 1234567, name the folder `1234567`). Copy `Lab03-MachineLearning.ipynb` into it, compress the folder, and submit it via the link on Moodle.

**Assignment content**

Assignment 3 is an individual assignment. In this assignment, you will implement 2 machine learning algorithms:

1. Decision tree
2. Gaussian Naive Bayes (the CNTN class will also do part 2)

### Import library

```python
import matplotlib.pyplot as plt
from sklearn import datasets
import pandas as pd
import numpy as np
from sklearn.metrics import accuracy_score
```

### Load Iris dataset

```python
from sklearn.model_selection import train_test_split
iris = datasets.load_iris()
X = iris.data
y = iris.target

# split dataset into training data and testing data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
```

## 1. Decision tree: Iterative Dichotomiser 3 (ID3)

### 1.1 Information Gain

Expected information (entropy):
$$Entropy=-\sum_{i}^{n}p_i\log_{2}(p_i)$$

The entropy function reaches its minimum if one of the $p_i=1$, and its maximum if all the $p_i$ are equal. These properties are what make entropy a suitable measure of the impurity of a split in ID3.

```python
def entropy(counts, n_samples):
    """
    Parameters:
    -----------
    counts: shape (n_classes): list of sample counts in each class
    n_samples: number of data samples

    -----------
    return entropy
    """
    counts = np.asarray(counts, dtype=float)
    probs = counts[counts > 0]/n_samples  # drop empty classes: 0*log2(0) -> 0
    return -np.sum(probs*np.log2(probs))
```

```python
def entropy_of_one_division(division):
    """
    Returns entropy of a divided group of data
    Data may have multiple classes
    """
    n_samples = len(division)
    n_classes = set(division)
    # count the samples in each class, then store the counts in the list counts
    counts = [np.sum(division == cls) for cls in n_classes]
    return entropy(counts, n_samples), n_samples

def get_entropy(y_predict, y):
    """
    Returns entropy of a split
    y_predict is the split decision by cutoff, True/False
    """
    n = len(y)
    entropy_true, n_true = entropy_of_one_division(y[y_predict])     # left-hand-side entropy
    entropy_false, n_false = entropy_of_one_division(y[~y_predict])  # right-hand-side entropy
    # overall entropy: weighted average of the two sides
    s = n_true/n*entropy_true + n_false/n*entropy_false
    return s
```

The information gain of splitting the set D on attribute A:
$$ Gain(A)=Entropy(D)-Entropy_{A}(D)$$

In ID3, at each node, the attribute chosen is the one that maximizes the information gain.

All attributes of the Iris dataset are continuous, so each attribute must be discretized. A simple way is to use a threshold `cutoff` that splits the values of each attribute into two parts: `<cutoff` and `>=cutoff`. To find the best `cutoff` for an attribute, we try each value of that attribute as `cutoff` and compute the entropy; the best `cutoff` is the one with the smallest entropy $\left(\arg\min Entropy_{A}(D)\right)$, since $Entropy(D)$ is fixed.
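As a quick sanity check of the two functions above, consider a toy example (made-up labels, not part of the Iris data): a perfect split should give an overall entropy of 0, and a useless 50/50 split should give 1.

```python
# Toy check with made-up labels
y_toy = np.array([0, 0, 1, 1])
print(get_entropy(np.array([True, True, False, False]), y_toy))  # perfect split -> 0.0
print(get_entropy(np.array([True, False, True, False]), y_toy))  # useless split -> 1.0
```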
### 1.2 Decision tree

```python
class DecisionTreeClassifier:
    def __init__(self, tree=None, depth=0):
        '''Parameters:
        -----------------
        tree: decision tree
        depth: depth of decision tree after training'''
        self.depth = depth
        self.tree = tree

    def fit(self, X, y, node={}, depth=0):
        '''Parameters:
        -----------------
        X: training data
        y: labels of training data
        ------------------
        return: node

        node: each node is represented by a cutoff value, a column index, a value and children.
        - cutoff value is the threshold where you divide your attribute.
        - column index is the index of your data attribute.
        - value of the node is the mean of the label indexes; if a node is a leaf, all its data samples have the same label.

        Note that we divide each attribute into 2 parts => each node has 2 children: left, right.
        '''
        # Stop condition: all values of y are the same
        if np.all(y == y[0]):
            return {'val': y[0]}
        else:
            col_idx, cutoff, entropy = self.find_best_split_of_all(X, y)  # find one split given an information gain
            y_left = y[X[:, col_idx] < cutoff]
            y_right = y[X[:, col_idx] >= cutoff]
            node = {'index_col': col_idx,
                    'cutoff': cutoff,
                    'val': np.mean(y)}
            node['left'] = self.fit(X[X[:, col_idx] < cutoff], y_left, {}, depth+1)
            node['right'] = self.fit(X[X[:, col_idx] >= cutoff], y_right, {}, depth+1)
            self.depth += 1
            self.tree = node
            return node

    def find_best_split_of_all(self, X, y):
        col_idx = None
        min_entropy = 1
        cutoff = None
        for i, col_data in enumerate(X.T):
            entropy, cur_cutoff = self.find_best_split(col_data, y)
            if entropy == 0:  # best entropy
                return i, cur_cutoff, entropy
            elif entropy <= min_entropy:
                min_entropy = entropy
                col_idx = i
                cutoff = cur_cutoff
        return col_idx, cutoff, min_entropy

    def find_best_split(self, col_data, y):
        '''
        Parameters:
        -------------
        col_data: data samples in one column (one attribute)
        '''
        min_entropy = 10
        cutoff = None
        # Loop through col_data to find the cutoff where the entropy is minimum
        for value in set(col_data):
            y_predict = col_data < value
            my_entropy = get_entropy(y_predict, y)
            if my_entropy < min_entropy:
                min_entropy = my_entropy
                cutoff = value
        return min_entropy, cutoff

    def predict(self, X):
        tree = self.tree
        pred = np.zeros(shape=len(X))
        for i, c in enumerate(X):
            pred[i] = self._predict(c)
        return pred

    def _predict(self, row):
        cur_layer = self.tree
        while cur_layer.get('cutoff'):
            if row[cur_layer['index_col']] < cur_layer['cutoff']:
                cur_layer = cur_layer['left']
            else:
                cur_layer = cur_layer['right']
        else:
            return cur_layer.get('val')
```

### 1.3 Classification on Iris Dataset

```python
model = DecisionTreeClassifier()
tree = model.fit(X_train, y_train)
pred = model.predict(X_train)
print('Accuracy of your decision tree model on training data:', accuracy_score(y_train, pred))
pred = model.predict(X_test)
print('Accuracy of your decision tree model:', accuracy_score(y_test, pred))
```

    Accuracy of your decision tree model on training data: 1.0
    Accuracy of your decision tree model: 0.96

## 2. Bayes' Theorem

Bayes' theorem is stated mathematically as follows:
$$\begin{equation} P\left(A|B\right)= \dfrac{P\left(B|A\right)P\left(A\right)}{P\left(B\right)} \end{equation}$$

If we regard $B$ as the data $\mathcal{D}$, and the parameters to be estimated $A$ as $w$, we have:
$$ \begin{align} \underbrace{P(w|\mathcal{D})}_{Posterior}= \dfrac{1}{\underbrace{P(\mathcal{D})}_{Normalization}} \overbrace{P(\mathcal{D}|w)}^{\text{Likelihood}} \overbrace{P(w)}^{Prior} \end{align} $$

#### Naive Bayes

To keep the computation simple, one usually makes the simplest possible assumption: that the components of the random variable $\mathcal D$ (i.e., the attributes of the data $\mathcal D$) are independent of each other, given $w$. That is:
$$P(\mathcal{D}|w)=\prod _{i=1}^{d}P(x_i|w)$$
$d$: the number of attributes.
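As a tiny numerical illustration of this update rule (the numbers are made up; two hypotheses with equal priors):

```python
# Made-up example: P(w0)=P(w1)=0.5, P(data|w0)=0.9, P(data|w1)=0.3
prior = {0: 0.5, 1: 0.5}
likelihoods = {0: 0.9, 1: 0.3}
unnorm = {h: likelihoods[h]*prior[h] for h in prior}  # likelihood * prior
s = sum(unnorm.values())                              # P(data), the normalization constant
posterior = {h: unnorm[h]/s for h in unnorm}
print(posterior)  # {0: 0.75, 1: 0.25}
```

This is exactly what the `update` method of the `pdf` class below does, one hypothesis at a time.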
That is:
$$P(\mathcal{D}|w)=\prod _{i=1}^{d}P(x_i|w)$$

where $d$ is the number of attributes.

### 2.1. Probability Density Function

```python
class pdf:
    def __init__(self, hist=None):
        '''
        A probability density function represented by a histogram

        hist: shape (n,1), n: number of hypotheses
        hypo: hypothesis (simply understood as a label)
        ------------------
        hist[hypo] = P(hypo)
        '''
        self.hist = hist

    # virtual function
    def likelihood(self, data, hypo):
        '''Parameters:
        data: new data record
        hypo: hypothesis (simply understood as a label)
        ---------
        return P(data|hypo)
        '''
        raise Exception()

    # update histogram for new data
    def update(self, data):
        '''
        P(hypo|data) = P(data|hypo)*P(hypo)*(1/P(data))
        '''
        # Likelihood * Prior
        for hypo in self.hist.keys():
            self.hist[hypo] = self.likelihood(data, hypo) * self.hist[hypo]

        # Normalization: s = P(data)
        s = sum(self.hist.values())
        for hypo in self.hist.keys():
            self.hist[hypo] = self.hist[hypo] / s

    def plot_pdf(self):
        # plot the histogram over hypotheses
        plt.bar(list(self.hist.keys()), list(self.hist.values()))
        plt.xlabel('hypothesis (label)')
        plt.ylabel('probability')
        plt.show()

    def maxHypo(self):
        # find the hypothesis with maximum probability from hist
        return max(self.hist, key=self.hist.get)
```

### 2.2 Classification on Iris Dataset

#### Gaussian Naive Bayes

- Naive Bayes can be extended to real-valued attributes, most commonly by using a normal (Gaussian) distribution.
- This extension is called Gaussian Naive Bayes. Other functions can be used to estimate the data distribution, but the Gaussian (normal) distribution is the easiest to work with, because only the mean and the standard deviation have to be estimated from the training data.

#### Define Gauss function

$$ f\left(x;\mu,\sigma \right)= \dfrac{1}{\sigma \sqrt{2\pi}} \exp \left({-\dfrac{\left(x-\mu\right)^2}{2 \sigma^2}}\right) $$

```python
def Gauss(std, mean, x):
    # compute the Gaussian probability density function at x
    return (1.0 / (std * np.sqrt(2 * np.pi))) * np.exp(-((x - mean) ** 2) / (2 * std ** 2))
```

```python
class NBGaussian(pdf):
    def __init__(self, hist=None, std=None, mean=None):
        '''Parameters:
        hist: histogram over hypotheses (priors)
        std, mean: per-class standard deviations and means
        '''
        pdf.__init__(self, hist)
        self.std = std
        self.mean = mean

    def likelihood(self, data, hypo):
        '''
        Returns: res = P(data|hypo)
        -----------------
        Naive Bayes: attributes are assumed to be conditionally
        independent given the class value.
        '''
        std = self.std[hypo]
        mean = self.mean[hypo]
        res = 1
        # res = P(x1|hypo) * P(x2|hypo) * ...
        for i in range(len(data)):
            res = res * Gauss(std[i], mean[i], data[i])
        return res

    def fit(self, X, y):
        """Parameters:
        X: training data
        y: labels of training data
        """
        n = len(X)
        # number of iris species
        n_species = len(np.unique(y))

        hist = {}
        mean = {}
        std = {}

        # separate the dataset into rows by class
        for hypo in range(0, n_species):
            # rows that have the label hypo
            rows = X[y == hypo]
            # histogram (prior) for each hypo
            probability = len(rows) / n
            hist[hypo] = probability
            # Each hypothesis is represented by its mean and standard deviation,
            # calculated for each column (i.e., for each attribute).
            mean[hypo] = np.mean(rows, axis=0)
            std[hypo] = np.std(rows, axis=0)
        self.mean = mean
        self.std = std
        self.hist = hist

    def _predict(self, data, plot=False):
        """
        Predict label for only 1 data sample
        ------------
        Parameters:
        data: data sample
        plot: True: draw histogram after updating with the new data sample
        -----------
        return: label of data
        """
        model = NBGaussian(hist=self.hist.copy(), std=self.std.copy(), mean=self.mean.copy())
        model.update(data)
        if plot:
            model.plot_pdf()
        return model.maxHypo()

    def predict(self, data):
        """Parameters:
        data: test data
        ----------
        return labels of test data"""
        pred = []
        for x in data:
            pred.append(self._predict(x))
        return pred
```

#### Show histogram of training data

```python
model_1 = NBGaussian()
model_1.fit(X_train, y_train)
model_1.plot_pdf()
```

#### Test with 1 data record

```python
# label of y_test[10]
print('Label of X_test[10]: ', y_test[10])

# update the model and show the histogram with X_test[10]:
print('Our histogram after update X_test[10]: ')
model_1._predict(X_test[10], plot=True)
```

#### Evaluate your Gaussian Naive Bayes model

```python
pred = model_1.predict(X_test)
print('Accuracy of your Gaussian Naive Bayes model:', accuracy_score(y_test, pred))
```

    Accuracy of your Gaussian Naive Bayes model: 0.96
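As an optional sanity check (not part of the assignment), the result can be compared against scikit-learn's built-in Gaussian Naive Bayes; `GaussianNB` is the standard sklearn estimator, and its accuracy should land close to the value printed above.

```python
from sklearn.naive_bayes import GaussianNB

sk_model = GaussianNB()
sk_model.fit(X_train, y_train)
print('Accuracy of sklearn GaussianNB:', sk_model.score(X_test, y_test))
```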
### Introduction to simplicial homology

```python
# uncomment to install the panel library
# pip install panel

import numpy as np
import matplotlib.pyplot as plt
from bokeh import palettes
%config InlineBackend.figure_format="retina"

import panel as pn
pn.extension()
```

### Affinely independent points

We say that the set of vectors $\{\vec{a}_0, \vec{a}_1, .., \vec{a}_n \}$ is affinely independent if $\{ \vec{a}_1 - \vec{a}_0, .., \vec{a}_n - \vec{a}_0 \}$ is linearly independent.

### n-simplices

An n-simplex $\sigma$ spanned by an affinely independent set $\{\vec{a}_0, \vec{a}_1, .., \vec{a}_n \}$ is the set of all points

$$ \sigma = \left \{ \sum_{i = 0}^n t_i \vec{a}_i \ \middle| \ \sum_{i=0}^n t_i = 1,\ t_i \ge 0 \ \forall i \right \} $$

That is, the coefficients $t_i$ form a probability distribution and are uniquely determined; they are called the barycentric coordinates of the point $\vec{x}$.

#### Examples of n-simplices

First, let's begin in $\mathbb{R}^2$.

1. A 1-simplex is defined as

$$ \vec{x} = t \vec{a}_0 + (1-t) \vec{a}_1 $$

Let $\vec{a}_0 = (0, 1), \vec{a}_1 = (1,1)$, then

```python
t_slider = pn.widgets.FloatSlider(
    name = r"scalar t",
    start = 0.0,
    end = 1,
    step = 0.01,
    value = 0.5
)
```

```python
# uncomment to select color palette
# palettes.Blues3
```

```python
@pn.depends(t_slider.param.value)
def one_simplex(t):
    "An app to visualize a 1-simplex."

    a_0 = np.array([[0, 1]]).T
    a_1 = np.array([[1, 1]]).T

    x = t * a_0 + (1 - t) * a_1

    fig = plt.figure(figsize=(6, 5))

    plt.plot(
        [a_0[0], a_1[0]],
        [a_0[1], a_1[1]],
        color="lightgrey",
        label=r"$\sigma$ (1-simplex)",
    )
    plt.scatter(*a_0, label="a_0", color="#deebf7")
    plt.scatter(*a_1, label="a_1", color="#3182bd")
    plt.scatter(*x, label="x", color="#9ecae1")

    # plot scaled a_0
    plt.arrow(
        x=0,
        y=0,
        dx=t * a_0[0][0],
        dy=t * a_0[1][0],
        color="#deebf7",
        label="a_0 scaled",
    )

    # plot scaled a_1
    plt.arrow(
        x=0,
        y=0,
        dx=(1 - t) * a_1[0][0],
        dy=(1 - t) * a_1[1][0],
        color="#3182bd",
        label="a_1 scaled",
    )

    # plot the resulting point x as a vector
    plt.arrow(
        x=0,
        y=0,
        dx=x[0][0],
        dy=x[1][0],
        color="#9ecae1",
        label="x vector",
    )

    plt.legend(loc="lower right")
    plt.tight_layout()
    # plt.axis("off")
    plt.close(fig)
    return fig
```

```python
pn.Column(
    "### 1-simplex",
    t_slider,
    one_simplex
)
```

2. The $2$-simplex can be specified as follows:

\begin{align}
\vec{x} &= t_0 \vec{a}_0 + (1- t_0) \vec{p}\\[1.em]
&= t_0 \vec{a}_0 + (1- t_0) \left[\left( \frac{t_1}{\lambda} \right) \vec{a}_1 + \left( \frac{t_2}{\lambda} \right) \vec{a}_2 \right]
\end{align}

where $\lambda = 1 - t_0$.

```python
t1_slider = pn.widgets.FloatSlider(
    name = r"scalar t1",
    start = 0.0,
    end = 1,
    step = 0.01,
    value = 0.5
)

t2_slider = pn.widgets.FloatSlider(
    name = r"scalar t2",
    start = 0.0,
    end = 1,
    step = 0.01,
    value = 0.5
)
```

```python
# uncomment to select color palette
# palettes.Set2
```

```python
@pn.depends(t1_slider.param.value, t2_slider.param.value)
def two_simplex(t_1, t_2):
    "An app to visualize a 2-simplex."
    a_1 = np.array([[0, 1]]).T
    a_2 = np.array([[1, 1]]).T
    a_3 = np.array([[1, 0]]).T

    t_3 = 1 - (t_1 + t_2)
    x = t_1 * a_1 + t_2 * a_2 + t_3 * a_3

    lg = "lightgrey"
    fig = plt.figure(figsize=(6, 5))

    # a1 - a2
    plt.plot([a_1[0], a_2[0]], [a_1[1], a_2[1]], color=lg)
    # a2 - a3
    plt.plot([a_2[0], a_3[0]], [a_2[1], a_3[1]], color=lg)
    # a1 - a3
    plt.plot(
        [a_1[0], a_3[0]], [a_1[1], a_3[1]], color=lg,
        label=r"$\sigma$ (2-simplex)"
    )

    set2 = ('#66c2a5', '#fc8d62', '#8da0cb')
    plt.scatter(*a_1, label="a_1", color=set2[0])
    plt.scatter(*a_2, label="a_2", color=set2[1])
    plt.scatter(*a_3, label="a_3", color=set2[2])
    plt.scatter(*x, color="#3182bd")

    # plot scaled a_1
    plt.arrow(
        x=0, y=0,
        dx=t_1 * a_1[0][0],
        dy=t_1 * a_1[1][0],
        color=set2[0],
    )

    # plot scaled a_2
    plt.arrow(
        x=0, y=0,
        dx=t_2 * a_2[0][0],
        dy=t_2 * a_2[1][0],
        color=set2[1],
    )

    # plot scaled a_3
    plt.arrow(
        x=0, y=0,
        dx=t_3 * a_3[0][0],
        dy=t_3 * a_3[1][0],
        color=set2[2],
    )

    # plot the resulting point x as a vector
    plt.arrow(
        x=0, y=0,
        dx=x[0][0],
        dy=x[1][0],
        color="#3182bd",
        label="x vector",
    )

    plt.legend(loc="lower right")
    plt.tight_layout()
    # plt.axis("off")
    plt.close(fig)
    return fig
```

```python
pn.Column(
    "### 2-simplex",
    t1_slider,
    t2_slider,
    two_simplex
)
```

#### Convex sets, convex hull

```python

```

#### p-chains

```python

```

### Boundary operator

```python

```

### Computing the Betti numbers from a dataset

```python

```
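The sections above are left unfilled in the notebook. As a minimal sketch of where the last one is headed (illustrative only, not from the original notebook), the Betti numbers of a finite complex can be read off from the ranks of its boundary matrices via $\beta_d = \dim\ker\partial_d - \operatorname{rank}\partial_{d+1}$. Here the boundary matrices of a single filled triangle are written by hand:

```python
# Filled triangle: vertices {0,1,2}, edges [0,1], [0,2], [1,2], one 2-simplex [0,1,2].
d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]])          # columns: boundaries of the three edges
d2 = np.array([[ 1],
               [-1],
               [ 1]])                  # boundary of [0,1,2] = [1,2] - [0,2] + [0,1]

r1 = np.linalg.matrix_rank(d1)
r2 = np.linalg.matrix_rank(d2)

betti_0 = 3 - r1            # dim ker(d_0) - rank(d_1), with d_0 = 0 on vertices
betti_1 = (3 - r1) - r2     # dim ker(d_1) - rank(d_2)
betti_2 = 1 - r2            # dim ker(d_2) - rank(d_3), with d_3 = 0
print(betti_0, betti_1, betti_2)   # 1 0 0: one connected component, no holes
```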
Overview of "A Quantum Approximate Optimization Algorithm" written by Edward Farhi, Jeffrey Goldstone and Sam Gutmann. # Introduction: Combinatorial optimization problems attempt to optimize an objective function over *n* bits with respect to *m* clauses. The bits are grouped into a string $z = z_1z_2...z_n$, while clauses are constraints on a subset of the bits, satisfied for some strings, not satisfied for others. The objective function, then, is defined as \begin{equation} \tag{1} C(z) = \sum_{\alpha=1}^{m} C_{\alpha}(z) \end{equation} where $C_{\alpha}(z) = 1$ if *z* satisfies clause $\alpha$, and is 0 otherwise. Note that $C_{\alpha}(z)$ typically will only depend on a few of the bits in the string. Now, the goal of approximate optimization is to find a string *z* for which $C(z)$ is close to the maximum value $C$ takes over all strings. This paper presents a quantum algorithm, paired with classical pre-processing, for approximate optimization. A quantum computer with *n* qubits works in a $2^n$ dimensional Hilbert space, with basis vectors denoted by $|z>$ where $z=1,2,3,...2^n$, known as the computational basis. In this framework, we view $(1)$ as an operator, which is diagonal in the computational basis. Next, a few more operators must be defined in order to map the approximate optimization problem onto the quantum comptuer. First, we define a unitary operator $U(C, \gamma)$, which depends on the objective function $C$ and an angle $\gamma$ \begin{equation} \tag{2} U(C,\gamma) = e^{-i{\gamma}C} = \prod_{\alpha=1}^{m} e^{-i{\gamma}C_\alpha} \end{equation} The second equality in $(2)$ is permitted since all $C_\alpha$'s commute with another (i.e. they are all diagonal in the same basis). Note that since $C$ has integer eigenvalues we can restrict the values of $\gamma$ to lie between $0$ and $2\pi$. Next, we define an operator $B$, which is the sum of all single bit-flip operators, represented by the Pauli-x matrix \begin{equation} \tag{3} B = \sum_{j=1}^{n} \sigma^{x}_j \end{equation} Then we define another angle-dependent unitary operator as a product of commuting one-bit operators \begin{equation} \tag{4} U(B,\beta) = e^{-i{\beta}B} = \prod_{j=1}^{n} e^{-i{\beta}\sigma^{x}_j} \end{equation} where $\beta$ takes values between $0$ and $\pi$. The initial state of the qubits will be set to a uniform superposition over all computational basis states, defined as: \begin{equation} \tag{5} |s> = \frac{1}{2^n} \sum_{z} |z> \end{equation} Next, we define an integer $p \ge 1$ which will set the quality of the approximate optimization; the higher $p$ the better approximation can be attained. For a given $p$, we define a total of $2p$ angles $\gamma_1 . . . \gamma_p \equiv \pmb\gamma$ and $\beta_1 . . . \beta_p \equiv \pmb\beta$ which will define an angle-dependent quantum state for the qubits \begin{equation} \tag{6} |\pmb\gamma, \pmb\beta> = U(B,{\beta}_p)U(C,{\gamma}_p)...U(B,{\beta}_1)U(C,{\gamma}_1)|s> \end{equation} Notice that there are $p$ $C$-dependent unitaries, each of which require $m$ rotation gates on local sets of qubits. In the worst case, each $C$-dependent unitary must be run in a different "moment" (when referring to a quantum circuit, a moment is essentially one "clock cycle" in which a group of operators acting on various qubits can be performed simulatneously), and thus the $C$-unitaries can be implemented with a circuit depth of $\mathcal{O}(mp)$. 
Meanwhile, there are $p$ $B$-dependent unitaries, each of which involves only single-qubit operators that can all be applied in one "moment". Therefore, the $B$-unitaries have a circuit depth of $\mathcal{O}(p)$. This means that the final state we seek can be prepared on the quantum computer with a circuit depth of $\mathcal{O}(mp + p)$.

Next, we define $F_p$ as the expectation value of $C$ in this state:

\begin{equation} \tag{7} F_p(\pmb\gamma, \pmb\beta) = \langle\pmb\gamma, \pmb\beta|C|\pmb\gamma, \pmb\beta\rangle \end{equation}

and define the maximum value that $F_p$ takes over all angles as

\begin{equation} \tag{8} M_p = \max_{\pmb\gamma,\pmb\beta}F_p(\pmb\gamma, \pmb\beta) \end{equation}

Finally, with all these terms defined, we can lay out an algorithm for approximate optimization.

1. Pick an integer $p$ and determine a set of $2p$ angles $\{ \pmb\gamma, \pmb\beta \}$ which maximize $F_p$.
2. Prepare the state $|\pmb\gamma, \pmb\beta\rangle$ on the quantum computer.
3. Measure the qubits in the computational basis to obtain the string $z$ and evaluate $C(z)$.
4. Perform step 3 repeatedly with the same angles to obtain a string $z$ such that $C(z)$ is very near or greater than $F_p(\pmb\gamma, \pmb\beta)$.

The main roadblock to this algorithm is finding the optimal set of angles $\{ \pmb\gamma, \pmb\beta \}$. One method, if $p$ does not grow with $n$, is to use brute force by running the quantum computer with values of $\{ \pmb\gamma, \pmb\beta \}$ chosen on a fine grid on the compact set $[0,2\pi]^p \times [0,\pi]^p$, in order to find the values of the angles that produce the maximum $F_p$. The paper also presents a method using classical pre-processing to determine the optimal angles. However, as this overview is intended to focus on the quantum computational part of the algorithm, we will assume that optimal angles $\{ \pmb\gamma, \pmb\beta \}$ have been determined in *some* fashion, and illustrate in detail how the quantum computing part of this algorithm works.

# Example Problem: MaxCut for Graphs with Bounded Degree

We will now examine the quantum part of the quantum approximate optimization algorithm (QAOA) using the MaxCut problem for graphs with bounded degree as an example optimization problem. The input is a graph with $n$ vertices and an edge set $\{ \langle jk \rangle \}$ of size $m$. The $n$ vertices are mapped to $n$ qubits, while the $m$ edges of the edge set represent the $m$ clauses of the combinatorial optimization problem. The goal is to maximize the objective function

\begin{equation} \tag{9} C = \sum_{\langle jk \rangle } C_{\langle jk \rangle } \end{equation}

where

\begin{equation} \tag{10} C_{\langle jk \rangle } = \frac{1}{2}(-\sigma^z_j \sigma^z_k + 1) \end{equation}

Each clause $C_{\langle jk \rangle }$ is thus equal to 1 when qubits $j$ and $k$ have spins pointing in opposite directions along the *z*-direction, and equal to 0 when the spins point in the same direction.

## MaxCut for Bounded Graph of Degree 2 with 4 Vertices

Below, we give the Cirq implementation of QAOA for the MaxCut problem on a regular graph of degree 2 with $n=4$ vertices and $m=4$ edges given as $\langle 1,2 \rangle ,\langle 2,3 \rangle ,\langle 3,4 \rangle ,\langle 4,1 \rangle $. Note that this graph is just a ring. For simplicity, we choose $p=1$ and arbitrarily choose values of $\beta_1$ and $\gamma_1$ (though in a real implementation the optimal angles must be found either by brute force or by clever classical pre-processing).

To run, press the play button in the upper left-hand corner of the code boxes below.
The code boxes must be run in sequential order, starting with the code box that imports Cirq and other necessary Python libraries.

```python
!pip install git+https://github.com/quantumlib/Cirq

import cirq
import numpy as np
import matplotlib.pyplot as plt
import cmath
import scipy.linalg
```

    Collecting git+https://github.com/quantumlib/Cirq
    ...
    Successfully built cirq

```python
#This code runs the QAOA for the example of MaxCut on a 2-regular graph with
#n=4 vertices and m=4 clauses
#The graph is a ring

# define the length of grid.
length = 2
nqubits = length**2

# define qubits on the grid.
#in this case we have 4 qubits on a 2x2 grid
qubits = [cirq.GridQubit(i, j) for i in range(length) for j in range(length)]

#instantiate a circuit
circuit = cirq.Circuit()

#apply Hadamard gate to all qubits to define the initial state
#here, the initial state is a uniform superposition over all basis states
circuit.append(cirq.H(q) for q in qubits)

#here, we use p=1
#define optimal angles beta and gamma computed by brute force
#or through classical pre-processing
#beta in [0,pi]
#gamma in [0,2*pi]
beta = 0.2
gamma = 0.4

#define operators for creating state |gamma,beta> = U_B*U_C*|s>
#define U_C operator = Product_<jk> { e^(-i*gamma*C_jk)} and append to main circuit
coeff = -0.5 #coefficient in front of sigma_z operators in the C_jk operator
U_C = cirq.Circuit()
for i in range(0,nqubits):
    U_C.append(cirq.CNOT.on(qubits[i],qubits[(i+1)%nqubits]))
    U_C.append(cirq.rz(2.0*coeff*gamma).on(qubits[(i+1)%nqubits]))
    U_C.append(cirq.CNOT.on(qubits[i],qubits[(i+1)%nqubits]))
circuit.append(U_C)

#define U_B operator = Product_j {e^(-i*beta*X_j)} and append to main circuit
U_B = cirq.Circuit()
for i in range(0,nqubits):
    U_B.append(cirq.H(qubits[i]))
    U_B.append(cirq.rz(2.0*beta).on(qubits[i]))
    U_B.append(cirq.H(qubits[i]))
circuit.append(U_B)

#add measurement operators for each qubit
#measure in the computational basis to get the string z
for i in range(0,nqubits):
    circuit.append(cirq.measure(qubits[i],key=str(i)))

#run circuit in simulator to get the state |beta,gamma> and measure in the
#computational basis to get the string z and evaluate C(z)
#repeat for 100 runs and save the best z and C(z)
#simulator = cirq.google.XmonSimulator()
simulator = cirq.Simulator()
reps = 100
results = simulator.run(circuit, repetitions=reps)

#get bit string z from results
best_z = [None]*nqubits
best_Cz = 0.0
all_Czs = []
for i in range(0,reps):
    z = []
    for j in range (0,nqubits):
        if (results.measurements[str(j)][i]):
            z.append(1)
        else:
            z.append(-1)
    #compute C(z)
    Cz = 0.0
    for j in range(0,nqubits):
        Cz += 0.5*(-1.0*z[j]*z[(j+1)%nqubits] + 1.0)
    all_Czs.append(Cz)
    #store best values for z and C(z)
    if (Cz > best_Cz):
        best_Cz = Cz
        best_z = z

#print best string z and corresponding C(z)
print("Best z")
print(best_z)
print("Best objective")
print(best_Cz)

plt.plot(all_Czs)
plt.xlabel("Run")
plt.ylabel("C(z) value")
plt.show()

#print a diagram of the circuit
print(circuit)
```

## Explanation of Code

In what follows, we give a detailed description of the above code.

### Define the qubits

First, the number of qubits ($n=4$) is defined on a 2x2 grid and we instantiate the circuit object, which will contain the quantum program we wish to run on the qubits.

### Prepare the Initial State

Next, we prepare the initial state of the system, a uniform superposition over all basis states. Thus, we append a Hadamard gate acting on each qubit to the circuit object. As stated above, $p$ is set to 1 and $\beta$ and $\gamma$ are arbitrarily chosen in this implementation; in a real run of the algorithm, one would need to find the optimal values of these angles through brute force or classical pre-processing.

Next, recall that the goal is to create the state

\begin{equation} \tag{11} |\gamma, \beta \rangle = U(B,\beta)U(C,\gamma)|s \rangle \end{equation}

Thus, we must define the unitary operators $U(C,\gamma)$ and $U(B,\beta)$, which will be appended to our circuit object after the state initialization gates (Hadamard gates).
### Build the operator $U(C, \gamma)$

We begin by building up the operator $U(C, \gamma)$, which is defined as:

\begin{equation} \tag{12} U(C,\gamma) = e^{-i{\gamma}C} = \prod_{ \langle jk \rangle} e^{-i{\gamma}C_{\langle jk \rangle}} = \prod_{\langle jk \rangle} e^{-i\frac{\gamma}{2}(-\sigma^z_j\sigma^z_k +1)} \end{equation}

Examining the right-hand side of $(12)$, note that we can simplify this by dropping the identity term (the second term in the parentheses in the exponent), as it contributes only a global phase, which drops out once we make measurements of the qubits. Intuitively, an identity operator should have no effect on the system. Thus, we construct $U(C,\gamma)$ as the following product of exponentials:

\begin{equation} \tag{13} U(C,\gamma) = e^{-i\frac{\gamma}{2}(-\sigma^z_1\sigma^z_2)}e^{-i\frac{\gamma}{2}(-\sigma^z_2\sigma^z_3)}e^{-i\frac{\gamma}{2}(-\sigma^z_3\sigma^z_4)}e^{-i\frac{\gamma}{2}(-\sigma^z_4\sigma^z_1)} \end{equation}

Each exponential acts on one pair of qubits, and can be translated into a quantum circuit using $Rz(\theta)$ gates, which rotate the spin about the *z*-axis through an angle of $\theta$, and $CNOT$ gates, which entangle the two qubits in the pair. As a specific example we examine how the first exponential operator in $(13)$ gets translated into quantum logic gates:

\begin{eqnarray*} \tag{14} e^{-i\frac{\gamma}{2}(-\sigma^z_1\sigma^z_2)} & \rightarrow & CNOT[1,2]\\ & & I[1] \otimes Rz(-\gamma)[2]\\ & & CNOT[1,2] \end{eqnarray*}

Here the right side indicates that three gates are used to perform this operator: a $CNOT$ gate acts on qubits 1 and 2, then an $Rz(-\gamma)$ gate acts on qubit 2 (this is `cirq.rz(2.0*coeff*gamma)` with `coeff = -0.5` in the code), and finally another $CNOT$ gate acts on qubits 1 and 2. To show why this works, we simply need to show that the matrix representations of the left- and right-hand sides of $(14)$ are the same.

We derive the matrix for the left-hand side first. Since this operator acts on two qubits, it acts on a $2^2 = 4$ dimensional Hilbert space, and thus is represented by a 4x4 matrix. The term in the exponential, $-i\frac{\gamma}{2}(-\sigma^z_1\sigma^z_2)$, is defined by the tensor product of the two Pauli-z terms multiplied by a coefficient:

\begin{equation} \tag{15} -i\frac{\gamma}{2}(-\sigma^z_1\sigma^z_2) = i\frac{\gamma}{2}\sigma^z_1 \otimes \sigma^z_2 \end{equation}

Now

\begin{equation} \tag{16} \sigma^z_i = \begin{bmatrix} 1 & 0\\ 0 & -1 \end{bmatrix} \end{equation}

So

\begin{equation} \tag{17} i\frac{\gamma}{2}\sigma^z_1 \otimes \sigma^z_2= i\frac{\gamma}{2}\begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \end{equation}

And thus

\begin{equation} \tag{18} e^{i\frac{\gamma}{2}\sigma^z_1 \otimes \sigma^z_2}= \begin{bmatrix} e^{i\frac{\gamma}{2}} & 0 & 0 & 0\\ 0 & e^{-i\frac{\gamma}{2}} & 0 & 0 \\ 0 & 0 & e^{-i\frac{\gamma}{2}} & 0 \\ 0 & 0 & 0 & e^{i\frac{\gamma}{2}} \end{bmatrix} \end{equation}

Now we derive the matrix representation of the right-hand side of $(14)$.
First, the $CNOT$ gate can be written as:

\begin{equation} \tag{19} CNOT= \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix} \end{equation}

Next, the $Rz(\theta)$ gate is defined as

\begin{equation} \tag{20} Rz(\theta)= e^{-iZ\frac{\theta}{2}} = \begin{bmatrix} e^{-i\frac{\theta}{2}} & 0 \\ 0 & e^{i\frac{\theta}{2}} \end{bmatrix} \end{equation}

Thus

\begin{eqnarray*} \tag{21} I \otimes Rz(-\gamma) & = & \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \otimes \begin{bmatrix} e^{i\frac{\gamma}{2}} & 0 \\ 0 & e^{-i\frac{\gamma}{2}} \end{bmatrix} \\ & = & \begin{bmatrix} e^{i\frac{\gamma}{2}} & 0 & 0 & 0\\ 0 & e^{-i\frac{\gamma}{2}} & 0 & 0 \\ 0 & 0 & e^{i\frac{\gamma}{2}} & 0 \\ 0 & 0 & 0 & e^{-i\frac{\gamma}{2}} \end{bmatrix} \end{eqnarray*}

Putting together the three gates we get:

\begin{equation} \tag{22} \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} e^{i\frac{\gamma}{2}} & 0 & 0 & 0\\ 0 & e^{-i\frac{\gamma}{2}} & 0 & 0 \\ 0 & 0 & e^{i\frac{\gamma}{2}} & 0 \\ 0 & 0 & 0 & e^{-i\frac{\gamma}{2}} \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix} = \begin{bmatrix} e^{i\frac{\gamma}{2}} & 0 & 0 & 0\\ 0 & e^{-i\frac{\gamma}{2}} & 0 & 0 \\ 0 & 0 & e^{-i\frac{\gamma}{2}} & 0 \\ 0 & 0 & 0 & e^{i\frac{\gamma}{2}} \end{bmatrix} \end{equation}

Notice that $(22)$ is identical to $(18)$, and thus we have proven that these three gates do indeed implement the first operator in $U(C,\gamma)$. The other operators are built analogously, except that they act on different pairs of qubits. Once the sub-circuit for $U(C,\gamma)$ is defined, we append it to our main circuit.

### Build the operator $U(B, \beta)$

In this example $U(B,\beta)$ is given by

\begin{equation} \tag{23} U(B,\beta) = e^{-i{\beta}\sigma^x_1}e^{-i{\beta}\sigma^x_2} e^{-i{\beta}\sigma^x_3} e^{-i{\beta}\sigma^x_4} \end{equation}

Again we examine how to convert the first exponential operator into quantum logic gates:

\begin{eqnarray*} \tag{24} e^{-i{\beta}\sigma^x_1} & \rightarrow & H[1]\\ & & Rz(2\beta)[1]\\ & & H[1] \end{eqnarray*}

The derivation here is much simpler. $\sigma^x$ is not diagonal in the computational ($z$) basis, but conjugation by a Hadamard gate maps it to $\sigma^z$, which is ($H\sigma^x H = \sigma^z$). Since exponentials of diagonal operators are easy to carry out, we apply a Hadamard gate to the qubit, apply the diagonal rotation $Rz(2\beta) = e^{-i\beta\sigma^z}$, and then apply a second Hadamard gate to return to the computational basis. Analogous gate sets are applied to all other qubits to carry out the full $U(B,\beta)$ operator. Once the sub-circuit for $U(B,\beta)$ is defined, we append it to our main circuit.

### Measurement and Simulation

At this point, the main circuit contains gates for initial state preparation and application of the $U(C,\gamma)$ and $U(B,\beta)$ unitary operators to get us into the final state

\begin{equation} \tag{25} |\gamma, \beta \rangle = U(B,\beta)U(C,\gamma)|s \rangle \end{equation}

The next step of the algorithm is to measure each of the qubits to get the string $z$ and then evaluate $C(z)$. Measurement is performed by appending a measurement operator in the computational basis to each of the qubits in our main circuit. A simulation is carried out by instantiating a Simulator.
We then input the main circuit to this simulator, and since we will want to run this circuit many times, we can give an optional argument for the number of repetitions (in this case, 100). The circuit will then be simulated 100 times. The paper says that $m \log(m)$ repetitions should suffice, and since in this case $m=4$, 100 repetitions are more than enough. The results of all runs are stored in the variable "results". We can then evaluate the value of $C(z)$ for each of the runs and keep a running tab on which string $z$ gives the maximum value of $C(z)$; both are printed at the end of the program. A diagram of the circuit is also output at the end of the program, to give the reader a clearer understanding of the program that is being run on the quantum computer.

To understand how this approximates the optimization of the MaxCut problem, note that $z = z_1z_2...z_n$ is a string, where each $z_i$ equals 0 or 1. In the MaxCut problem, one wants to find a subset of the vertices $S$ such that the number of edges between $S$ and the complementary subset is maximal. The final $z$ string we measure defines whether each vertex $i$ is in the subset $S$ (say, $z_i = 1$) or in the complement ($z_i = 0$); the code equivalently uses spin values $\pm 1$. $C(z)$ gives the number of edges that exist between the subset $S$ and its complement. So in our example, the 4-vertex ring, after enough repetitions our QAOA finds the correct maximum cut of $C(z)=4$ and returns a string of alternating labels, indicating the graph is maximally cut when alternating vertices are grouped into subsets.

This toy example was small and simple, which is why the QAOA was able to return the true optimal solution. However, in more complex combinatorial optimization problems this will not always be the case. It may be necessary to increase the value of $p$, which in this simple case we took to just be 1.

## MaxCut for Bounded Graph of Degree 2 with 16 Vertices and Brute Force Angle Optimization

In the previous example with 4 vertices, the algorithm almost always returned the optimal cut, despite our random selection of values for $\beta$ and $\gamma$. Clearly, the problem was so small that the search space of possible solutions was easily navigated over the repeated runs. However, if we move to a larger problem with 16 vertices, it becomes more important to find optimal values for $\beta$ and $\gamma$ in order to find the optimal cut with high probability. Below is some code that performs a grid search over values of the angles to find their optimal values. Since $m=16$, the number of repetitions per circuit should be on the order of $16\log(16)$; we use 100 repetitions per grid point, which is comfortably above that while keeping the code below running in under roughly 5 minutes. Upon completion, the program prints the resulting optimal values for $\beta$, $\gamma$, $C(z)$, and $z$. It also plots a graph of how the value of $C(z)$ changes throughout the grid search as various values of $\beta$ and $\gamma$ are swept through. It is clear from the plot that there is indeed an important angle-dependence now that our problem has more vertices, and thus resides in a larger search space.
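Before running the larger grid search, here is a quick numerical sanity check (not part of the original notebook) of the gate identity derived in equations $(14)$ to $(22)$: the CNOT-$Rz$-CNOT construction should reproduce the direct matrix exponential of $i\frac{\gamma}{2}\sigma^z \otimes \sigma^z$. numpy and `scipy.linalg` are already imported above.

```python
# Verify CNOT · (I ⊗ Rz(-gamma)) · CNOT == exp(i (gamma/2) Z⊗Z) for an arbitrary angle.
g = 0.4
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
Rz = lambda theta: np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

lhs = scipy.linalg.expm(1j * (g / 2) * np.kron(Z, Z))
rhs = CNOT @ np.kron(I2, Rz(-g)) @ CNOT
print(np.allclose(lhs, rhs))   # True
```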
**Note: Code below may take up to 5 minutes to run!**

```python
#This code runs the QAOA, including optimization of the angles beta and gamma,
#for the example of MaxCut on a 2-regular graph with n=16 vertices and m=16 clauses
#Brute force optimization of the angles beta and gamma is performed by
#a grid search over their possible values

# define the length of grid.
length = 4
nqubits = length**2

# define qubits on the grid.
#in this case we have 16 qubits on a 4x4 grid
qubits = [cirq.GridQubit(i, j) for i in range(length) for j in range(length)]

#here, we use p=1
#search for optimal angles beta and gamma by brute force
#beta in [0,pi]
#gamma in [0,2*pi]
gridsteps = 19
bstep = np.pi/gridsteps
gstep = 2.0*np.pi/gridsteps

overall_best_z = [None]*nqubits
overall_best_Cz = 0.0
all_Cz = []
all_b = []
all_g = []
for b in range(0,gridsteps+1):
    for g in range(0,gridsteps+1):
        beta = bstep*b
        gamma = gstep*g
        all_b.append(beta)
        all_g.append(gamma)

        #instantiate a circuit
        circuit = cirq.Circuit()

        #apply Hadamard gate to all qubits to define the initial state
        #here, the initial state is a uniform superposition over all basis states
        circuit.append(cirq.H(q) for q in qubits)

        #define operators for creating state |gamma,beta> = U_B*U_C*|s>
        #define U_C operator = Product_<jk> { e^(-i*gamma*C_jk)} and append to main circuit
        coeff = -0.5 #coefficient in front of sigma_z operators in the C_jk operator
        U_C = cirq.Circuit()
        for i in range(0,nqubits):
            U_C.append(cirq.CNOT.on(qubits[i],qubits[(i+1)%nqubits]))
            U_C.append(cirq.rz(2.0*coeff*gamma).on(qubits[(i+1)%nqubits]))
            U_C.append(cirq.CNOT.on(qubits[i],qubits[(i+1)%nqubits]))
        circuit.append(U_C)

        #define U_B operator = Product_j {e^(-i*beta*X_j)} and append to main circuit
        U_B = cirq.Circuit()
        for i in range(0,nqubits):
            U_B.append(cirq.H(qubits[i]))
            U_B.append(cirq.rz(2.0*beta).on(qubits[i]))
            U_B.append(cirq.H(qubits[i]))
        circuit.append(U_B)

        #add measurement operators for each qubit
        #measure in the computational basis to get the string z
        for i in range(0,nqubits):
            circuit.append(cirq.measure(qubits[i],key=str(i)))

        #run circuit in simulator to get the state |beta,gamma> and measure in the
        #computational basis to get the string z and evaluate C(z)
        #repeat for 100 runs and save the best z and C(z)
        #simulator = cirq.google.XmonSimulator()
        simulator = cirq.Simulator()
        reps = 100
        results = simulator.run(circuit, repetitions=reps)

        #get bit string z from results
        best_z = [None]*nqubits
        best_Cz = 0.0
        for i in range(0,reps):
            z = []
            for j in range (0,nqubits):
                if (results.measurements[str(j)][i]):
                    z.append(1)
                else:
                    z.append(-1)
            #compute C(z)
            Cz = 0.0
            for j in range(0,nqubits):
                Cz += 0.5*(-1.0*z[j]*z[(j+1)%nqubits] + 1.0)
            #store best values for z and C(z)
            if (Cz > best_Cz):
                best_Cz = Cz
                best_z = z

        all_Cz.append(best_Cz)
        if (best_Cz > overall_best_Cz):
            overall_best_Cz = best_Cz
            overall_best_z = best_z
            best_beta = beta
            best_gamma = gamma

#print best string z and corresponding C(z)
print("overall best z")
print(overall_best_z)
print("overall best Cz")
print(overall_best_Cz)
print("best beta")
print(best_beta)
print("best gamma")
print(best_gamma)

plt.plot(all_Cz)
plt.xlabel("Iteration number in Grid Search")
plt.ylabel("Best C(z) value")
plt.show()

#print a diagram of the circuit
print(circuit)
```

```python
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm

fig = plt.figure()
ax = Axes3D(fig)
surf = ax.plot_trisurf(all_b, all_g, all_Cz, cmap=cm.jet, linewidth=0.1)
ax.set_xlabel('beta')
ax.set_ylabel('gamma')
ax.set_zlabel('best C(z) (out of 100 sims)')
fig.colorbar(surf,
             shrink=0.5, aspect=5)
plt.show()
```
### Action of boundary and co-boundary maps on a chain

__*Definition.*__ An abstract simplicial complex $K$ is a collection of finite sets that is closed under set inclusion, i.e. if $\sigma \in K$ and $\tau \subseteq \sigma$, then $\tau \in K$.

__*Definition.*__ The boundary operator $\partial_d : C_d(K) \rightarrow C_{d-1}(K)$ is the linear function defined for each oriented $d$-simplex $\sigma = [v_0, ..., v_d]$ by

\begin{equation} \partial_d (\sigma) = \partial_d [v_0, ..., v_d] = \sum_{i=0}^d (-1)^i [v_0, ..., \hat{v}_i, ..., v_d], \end{equation}

where $[v_0, ..., \hat{v}_i, ..., v_d]$ is the subset of $[v_0, ..., v_d]$ obtained by removing the vertex $v_i$.

Let $S_d(K)$ be the set of all oriented $d$-simplices of the simplicial complex $K$ (i.e. the set of basis elements of $C_d(K)$), and let $\tau \in S_{d-1}(K)$. Then define the two sets

\begin{align} S^+_d (K, \tau) &= \{ \sigma \in S_d(K) \, | \text{ the coefficient of } \tau \text{ in } \partial_d (\sigma) \text{ is } +1 \} \\ S^-_d (K, \tau) &= \{ \sigma \in S_d(K) \, | \text{ the coefficient of } \tau \text{ in } \partial_d (\sigma) \text{ is } -1 \} \end{align}

__*Lemma (see [1]).*__ Let $\partial^*_d$ be the adjoint of the boundary operator $\partial_d$, and let $\tau \in S_{d-1}(K)$. Then

\begin{equation} \partial^*_d (\tau) = \sum_{\sigma' \in S^+_d (K, \tau)} \sigma' - \sum_{\sigma'' \in S^-_d (K, \tau)} \sigma''. \end{equation}

In this section we are going to see how the boundary and co-boundary maps act on a chain; a small computational sketch is given after the references below.

## References <a class="anchor" id="refs"></a>

1. [R. Gustavson, Laplacians of Covering Complexes](https://scholar.rose-hulman.edu/cgi/viewcontent.cgi?article=1099&context=rhumj)
2. [From Topological Data Analysis to Deep Learning: No Pain No Gain](https://towardsdatascience.com/from-tda-to-dl-d06f234f51d)
3. [S. Mukherjee and J. Steenbergen, Random walks on simplicial complexes and harmonics](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5324709/)
4. [Michael T. Schaub, Austin R. Benson, Paul Horn, Gabor Lippner, Ali Jadbabaie, Random Walks on Simplicial Complexes and the normalized Hodge 1-Laplacian](https://arxiv.org/pdf/1807.05044.pdf)
5. [GUDHI Library](http://gudhi.gforge.inria.fr/doc/latest/index.html)
6. [R. Forman, Bochner's Method for Cell Complexes and Combinatorial Ricci Curvature](https://link.springer.com/content/pdf/10.1007%2Fs00454-002-0743-x.pdf)
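As a concrete illustration of the definitions above (a minimal sketch, not part of the original notebook), here is the boundary matrix $\partial_1$ of a hollow triangle acting on a 1-chain, together with its adjoint (the co-boundary map), which in the standard basis is simply the transpose:

```python
import numpy as np

# Hollow triangle: vertices {0,1,2}, oriented edges [0,1], [0,2], [1,2].
# partial_1 sends an oriented edge [v_i, v_j] to v_j - v_i; columns are edges.
d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]])

# The 1-chain c = [0,1] - [0,2] + [1,2] runs once around the triangle.
c = np.array([1, -1, 1])
print(d1 @ c)                        # [0 0 0]: the chain is a cycle, its boundary vanishes

# The adjoint acts on 0-chains; applied to the vertex v_0 it returns the signed
# sum of edges containing v_0, matching the lemma: coefficient -1 for [0,1] and [0,2].
print(d1.T @ np.array([1, 0, 0]))    # [-1 -1  0]
```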
# Model project: Cournot competition

In the following project, we seek to model how two competing firms determine the optimal amount of a homogeneous good to produce under Cournot competition. We assume the following points throughout the assignment:

- There are two firms (1 and 2), who produce the same (homogeneous) good (in the extension, we observe three firms)
- No collusion
- The two firms simultaneously choose the quantity to produce ($q_1$ and $q_2$)
- The total amount produced is $Q = q_1 + q_2$
- The market price is decreasing in the total quantity: $P(Q)=a-Q$
- Both firms have the same marginal cost $c$, where $0 \leq c<a$

```python
import numpy as np
from scipy import optimize
import ipywidgets as widgets
import matplotlib.pyplot as plt
%matplotlib inline
```

# Introduction to the economic model

We look for a Nash equilibrium, where both firms choose their best response as a reaction function to the other firm's actions.

The profit functions for the respective firms are:

Profit for firm 1: $\pi_1(q_1,q_2) = q_1(P(q_1+q_2) - c) $

Profit for firm 2: $\pi_2(q_1,q_2) = q_2(P(q_1+q_2) - c)$

Substituting in the price function and the marginal cost, we can rewrite:

$\pi_1(q_1,q_2) = (a - q_1 - q_2 - c) \cdot q_1$

$\pi_2(q_1,q_2) = (a - q_1 - q_2 - c) \cdot q_2$

Since both firms want to maximize profit, we calculate the best response functions from the first order conditions of the profit functions, solving for the quantity.

### $\frac{\partial \pi_1}{\partial q_1} = 0$ <=> $ q_1(q_2) = \frac{a-c-q_2}{2}$

### $\frac{\partial \pi_2}{\partial q_2} = 0$ <=> $ q_2(q_1) = \frac{a-c-q_1}{2}$

Given symmetry, we can now solve for the equilibrium quantities:

### $q_1^* = \frac{a-c-\frac{(a-c-q_1^*)}{2}}{2} <=> q_1^* = \frac{a-c}{3} = q_2^*$

# Defining the model

We firstly define the base of the model, which consists of the inverse demand function, the cost function, and the profit functions for firms 1 and 2, respectively.

```python
def p(q1,q2,a):
    demand = a - q1 - q2
    return demand

def cost_func(q,c):
    cost = q * c
    return cost

#Profit function for firm 1
def pi_1(q1,q2,a,c):
    profit1 = p(q1,q2,a) * q1 - cost_func(q1,c)
    return profit1

#Profit function for firm 2
def pi_2(q1,q2,a,c):
    profit2 = p(q1,q2,a) * q2 - cost_func(q2,c)
    return profit2
```

The firms have symmetrical best response functions, as seen before, given that they are solving the same optimization problem with identical cost and demand.

```python
# a: the lowest value
q0 = [0]

# b: defining the best responses
def BR_f1(q2,a,c):
    optimal_Q1 = optimize.minimize(lambda q0: -pi_1(q0,q2,a,c), q0).x[0]
    return optimal_Q1

def BR_f2(q1,a,c):
    optimal_Q2 = optimize.minimize(lambda q0: -pi_2(q1,q0,a,c), q0).x[0]
    return optimal_Q2
```

```python
def conditions(q,param):
    u = q[0] - BR_f1(q[1],param[0],param[1])
    y = q[1] - BR_f2(q[0],param[0],param[1])
    return [u,y]
```

```python
initial_values = [1,1]
param = [11,2]
```

```python
solver = optimize.fsolve(conditions,initial_values, args = (param))
print(f'the optimal quantities are: {solver}')
```

    the optimal quantities are: [3. 3.]

# Plotting

```python
# a: defining production levels for both firms.
production_level_f2 = np.arange(0,10,0.1)
production_level_f1 = []

# b: creating figure
for q2 in production_level_f2:
    q1 = round(BR_f1(q2,10,0),3)
    production_level_f1.append(q1)

plt.title("Best response for firm 1 (blue) and production level\n for firm 2 (orange), given production level for firm 2")
plt.ylabel("$q_1$")
plt.xlabel("$q_2$")
plt.plot(production_level_f2, production_level_f1)
plt.plot(production_level_f2, production_level_f2)
```

```python
costs = np.arange(0,5,0.1)
production_level_f1 = np.arange(0,5,0.1)
production_level_f2 = np.arange(0,5,0.1)

x = []
y = []

for q2 in production_level_f2:
    q1 = round(BR_f1(q2,10,0),3)
    x.append(q1)

for c, q1 in zip(costs, production_level_f1):
    q2 = round(BR_f2(q1,10,c),3)
    y.append(q2)
```

```python
plt.title("Best response for firms with marginal cost $c = 0$ (blue) and for firms\nwith increasing marginal cost (orange) given production level of the other firm")
plt.ylabel("$BR(q_i)$")
plt.xlabel("$q_i$")
plt.plot(costs, x)
plt.plot(costs, y)
```

```python
# a: defining production levels
production_level_f1 = np.arange(0,5,0.1)
production_level_f2 = np.arange(0,5,0.1)

# b: interactive figure with sliders for the two marginal costs
def f(marginal_cost_f1,marginal_cost_f2):
    x = []
    y = []
    for q2 in production_level_f2:
        q1 = round(BR_f1(q2,10,marginal_cost_f1),3)
        x.append(q1)
    for q1 in production_level_f1:
        q2 = round(BR_f2(q1,10,marginal_cost_f2),3)
        y.append(q2)

    plt.title("Best response for firm 1 with marginal cost $c_1$ (blue) and best response for\nfirm 2 with marginal cost $c_2$ (orange) given production level of the other firm")
    plt.ylabel("$BR(q_i)$")
    plt.xlabel("$q_i$")
    plt.plot(production_level_f2,x)
    plt.plot(production_level_f1,y)

widgets.interact(f,
    marginal_cost_f1 = widgets.FloatSlider(description="$c_1$",min=0,max=5),
    marginal_cost_f2 = widgets.FloatSlider(description="$c_2$",min=0,max=5),
)
```

    interactive(children=(FloatSlider(value=0.0, description='$c_1$', max=5.0), FloatSlider(value=0.0, description…

    <function __main__.f(marginal_cost_f1, marginal_cost_f2)>

In the illustration above it is evident that if the two firms have identical marginal costs, their best response functions are completely identical (lying on top of each other, graphically). If firm 1 has a higher marginal cost, its best response function lies below firm 2's.

# Extension: Three firms

We want to understand what the optimal production levels are if a third firm enters. We focus on the case where the firms are symmetric, with the same cost function. In the following we define the inverse demand function, the cost function, and the three profit functions.

```python
# a: defining cost function and inverse demand function
def p(q1,q2,q3,a):
    demand = a - q1 - q2 - q3
    return demand

def cost(q,c):
    cost = q * c
    return cost

# b: defining profit functions
def pi_f1(q1,q2,q3,a,c):
    return p(q1,q2,q3,a) * q1 - cost(q1,c)

def pi_f2(q1,q2,q3,a,c):
    return p(q1,q2,q3,a) * q2 - cost(q2,c)

def pi_f3(q1,q2,q3,a,c):
    return p(q1,q2,q3,a) * q3 - cost(q3,c)
```

We further define the three reaction functions, where firm $i$ optimizes relative to the two other firms.
```python
q0 = [0]

def BR1(q2,q3,a,c):
    optimal_q1 = optimize.minimize(lambda q0: -pi_f1(q0,q2,q3,a,c), q0).x[0]
    return optimal_q1

def BR2(q1,q3,a,c):
    optimal_q2 = optimize.minimize(lambda q0: -pi_f2(q1,q0,q3,a,c), q0).x[0]
    return optimal_q2

def BR3(q1,q2,a,c):
    optimal_q3 = optimize.minimize(lambda q0: -pi_f3(q1,q2,q0,a,c), q0).x[0]
    return optimal_q3
```

```python
def con(q,param):
    u = q[0] - BR1(q[1],q[2],param[0],param[1])
    y = q[1] - BR2(q[0],q[2],param[0],param[1])
    z = q[2] - BR3(q[0],q[1],param[0],param[1])
    return [u,y,z]
```

```python
# initial values
initial_values = [1,1,1]

# Vector with parameters [a,c]:
param = [11,2]
```

```python
solver = optimize.fsolve(con,initial_values, args = (param))
print(f'optimal quantity: {solver}')
```

    optimal quantity: [2.24999997 2.25000001 2.25000001]

Including an additional firm reduces the optimal quantity per firm, which, as we saw before, was 3 for each of the two firms.

# Conclusion

Two firms with identical demand functions and marginal costs are symmetric and thereby set the same equilibrium quantities. We have learned that a rising marginal cost has a negative effect on the best response function and thereby breaks the symmetry of the two firms; raising or lowering the marginal cost therefore changes the equilibrium quantity. In the extension, we see that the entry of a third firm also affects the equilibrium quantity: assuming symmetry, the optimal production level per firm falls, in this instance from 3 to 2.25 units.
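As an analytical cross-check of the numerical results above (an added sketch; the symbol names are illustrative), the symmetric $N$-firm Cournot equilibrium can be derived with sympy, giving $q^* = \frac{a-c}{N+1}$, which reproduces both 3 (for $N=2$) and 2.25 (for $N=3$) at $a=11$, $c=2$:

```python
import sympy as sm

a, c, q, Q_r = sm.symbols('a c q Q_r', positive=True)

for N in (2, 3):
    profit = (a - q - Q_r - c) * q        # Q_r: combined rival output, held fixed
    foc = sm.diff(profit, q)              # first order condition
    # impose symmetry after differentiating: each rival also produces q
    q_star = sm.solve(sm.Eq(foc.subs(Q_r, (N - 1) * q), 0), q)[0]
    print(N, 'firms:', q_star, '=', q_star.subs({a: 11, c: 2}))
```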
Problem 1 (20 points)

Show that the stationary point (zero gradient) of the function
$$ f = 2x_{1}^{2} - 4x_1 x_2 + 1.5x_{2}^{2} + x_2 $$
is a saddle (with indefinite Hessian). Find the directions of downslopes away from the saddle.

Hint: Use Taylor's expansion at the saddle point. Find directions that reduce $f$.

```python
import sympy as sym

x1, x2 = sym.symbols("x1 x2")  # variables used in the function

f = 2*(x1)**2 - (4 * x1 * x2) + (1.5 * (x2)**2) + x2  # declaring the function

gradient = sym.derive_by_array(f, (x1, x2))  # gradient of the function
hessian = sym.Matrix(sym.derive_by_array(gradient, (x1, x2)))  # Hessian of the function

stationary_points = sym.solve(gradient, (x1, x2))
print(f'Stationary points are:\n {stationary_points}')

value = f.subs({x1: stationary_points[x1], x2: stationary_points[x2]})
print(f'Value of the function at the stationary point: {value}')

egnval = hessian.eigenvals()  # eigenvalues of the Hessian
e_val = list(egnval.keys())
print("The eigenvalues are:")
print(*e_val)

# checking the nature of the Hessian from the signs of its eigenvalues
posi = zo = nposi = 0
for val in e_val:
    val = float(val)
    if val > 0:
        posi += 1
    elif val == 0:
        zo += 1
    else:
        nposi += 1

if posi == len(e_val):
    print("Positive Definite Hessian")
elif zo == len(e_val):
    print("All eigenvalues are zero (indeterminate)")
elif nposi == len(e_val):
    print("Negative Definite Hessian")
else:
    print("Indefinite Hessian")
```

    Stationary points are:
     {x1: 1.00000000000000, x2: 1.00000000000000}
    Value of the function at the stationary point: 0.500000000000000
    The eigenvalues are:
    7.53112887414927 -0.531128874149275
    Indefinite Hessian
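The printout above establishes that the stationary point $(1,1)$ is a saddle. For the second part of the problem, note that since $f$ is quadratic, Taylor's expansion at the saddle is exact: $f(x^* + d) = f(x^*) + \frac{1}{2}d^{T}Hd$, so $f$ decreases along any direction $d$ with $d^{T}Hd < 0$, in particular along the eigenvector belonging to the negative eigenvalue. A short sketch continuing from the variables defined above:

```python
import numpy as np

# Downslope directions: eigenvectors of H with negative eigenvalue satisfy d^T H d < 0.
H = np.array(hessian).astype(float)
eigvals, eigvecs = np.linalg.eigh(H)
for lam, v in zip(eigvals, eigvecs.T):
    if lam < 0:
        print(f"downslope directions: +/-{v} (eigenvalue {lam:.4f})")
        # numeric check: stepping from (1,1) along +/-v must reduce f below f* = 0.5
        for s in (0.1, -0.1):
            point = np.array([1.0, 1.0]) + s * v
            print("  f at step", s, "=", float(f.subs({x1: point[0], x2: point[1]})))
```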
```python
import random

import names
import numpy as np
import sympy as sym
from copy import deepcopy  # used by publish() below
from functools import lru_cache
from scipy import optimize
from sympy import symbols
from sympy.solvers import solve

from syft.core.common import UID

# ordered_symbols = list()
# for i in range(100):
#     ordered_symbols.append(symbols("s"+str(i)))

@lru_cache(maxsize=None)
def maximize_flattened_poly(flattened_poly, *rranges, force_all_searches=False, **s2i):
    search_fun = create_searchable_function(flattened_poly, s2i)
    return minimize_function(f=search_fun, rranges=rranges, force_all_searches=False)

def flatten_and_maximize_poly(poly, force_all_searches=False):
    i2s = list(poly.free_symbols)
    s2i = {s: i for i, s in enumerate(i2s)}

    # this code seems to make things slower - although there might be a memory improvement (i haven't checked)
    # flattened_poly = poly.copy().subs({k:v for k,v in zip(i2s, ordered_symbols[0:len(i2s)])})
    # flattened_s2i = {str(ordered_symbols[i]):i for s,i in s2i.items()}

    flattened_poly = poly
    flattened_s2i = {str(s): i for s, i in s2i.items()}

    # ssid2obj is a module-level registry mapping symbol names to scalar objects;
    # it is assumed to be defined elsewhere in this module.
    rranges = [(ssid2obj[i2s[i].name].min_val, ssid2obj[i2s[i].name].max_val) for i in range(len(s2i))]

    return maximize_flattened_poly(flattened_poly, *rranges,
                                   force_all_searches=force_all_searches, **flattened_s2i)

def create_lookup_tables_for_symbol(polynomial):
    index2symbol = [str(x) for x in polynomial.free_symbols]
    symbol2index = {sym: i for i, sym in enumerate(index2symbol)}
    return index2symbol, symbol2index

def create_searchable_function(f, symbol2index):
    # Tudor: Here you weren't using *params
    # Tudor: If I understand correctly, .subs returns
    def _run_specific_args(tuple_of_args: tuple):
        kwargs = {sym: tuple_of_args[i] for sym, i in symbol2index.items()}
        output = f.subs(kwargs)
        return output

    return _run_specific_args

def minimize_function(f, rranges, constraints=[], force_all_searches=False):
    results = list()

    # Step 1: try the simplicial sampling method
    shgo_results = optimize.shgo(f, rranges, sampling_method='simplicial', constraints=constraints)
    results.append(shgo_results)

    if not shgo_results.success or force_all_searches:
        # sometimes simplicial has trouble as a result of initialization
        # see: https://github.com/scipy/scipy/issues/10429 for details
        # if not force_all_searches:
        #     print("Simplicial search didn't work... trying sobol")
        shgo_results = optimize.shgo(f, rranges, sampling_method='sobol', constraints=constraints)
        results.append(shgo_results)

    if not shgo_results.success:
        raise Exception("Search algorithm wasn't solvable... abort")

    return results

def max_lipschitz_wrt_entity(scalars, entity):
    result = max_lipschitz_via_jacobian(scalars, input_entity=entity)[0][-1]
    if isinstance(result, float):
        return -result
    else:
        return -float(result.fun)

def max_lipschitz_via_jacobian(scalars, input_entity=None, data_dependent=True,
                               force_all_searches=False, try_hessian_shortcut=False):
    input_scalars = set()
    for s in scalars:
        for i_s in s.input_scalars:
            input_scalars.add(i_s)

    # the numerator of the partial derivative
    out = sym.Matrix([x.poly for x in scalars])

    if input_entity is None:
        j = out.jacobian([x.poly for x in input_scalars])
    else:
        # In general it doesn't make sense to consider the max partial derivative over all inputs,
        # because we don't want the Lipschitz bound of the entire jacobian; we want the Lipschitz
        # bound with respect to entity "i" (see https://arxiv.org/abs/2008.11193).
# For example, if I had a polynomial y = a + b**2 + c**3 + d**4 where each a,b,c,d variable was from a different entity, # the fact taht d has a big derivative should change the max lipschitz bound of y with respect to "a". Thus, we're only interested # in searching for the maximum partial derivative with respect to the variables from the focus entity "i". # And if we're looking to compute the max parital derivative with respect to input scalars from only one entity, then # we select only the variables corresponding to that entity here. relevant_scalars = list(filter(lambda s:s.entity == input_entity, input_scalars)) relevant_inputs = [x.poly for x in relevant_scalars] j = out.jacobian(relevant_inputs) # for higher order functions - it's possible that some of the partial derivatives are conditioned # on data from the input entity. The philosophy of input DP is that when producing an epsilon # guarantee for entity[i] that we don't need to search over the possible range of data for that entity # but can instead use the data itself - this results in an epsilon for each entity which is private # but it also means the bound is tighter. So we could skip this step but it would in some cases # make the bound looser than it needs to be. if data_dependent: j = j.subs({x.poly:x.value for x in relevant_scalars}) neg_l2_j = -(np.sum(np.square(j)))**0.5 if(len(np.sum(j).free_symbols) == 0): result = -float(np.max(j)) return [result], neg_l2_j if(try_hessian_shortcut): h = j.jacobian([x.poly for x in input_scalars]) if(len(solve(np.sum(h**2), *[x.poly for x in input_scalars], dict=True)) == 0): print("The gradient is linear - solve through brute force search over edges of domain") i2s,s2i = create_lookup_tables_for_symbol(neg_l2_j) search_fun = create_searchable_function(f=neg_l2_j, symbol2index=s2i) constant = 0.000001 rranges = [(x.min_val, x.max_val, x.max_val - x.min_val) for x in input_scalars] skewed_results = optimize.brute(search_fun, rranges, finish=None, full_output=False) result_inputs = skewed_results + constant result_output = search_fun(result_inputs) return [float(result_output)], neg_l2_j return flatten_and_maximize_poly(neg_l2_j), neg_l2_j def get_mechanism_for_entity(scalars, entity, sigma=1.5): m_id = "ms_" for s in scalars: m_id += str(s.id).split(" ")[1][:-1]+"_" return iDPGaussianMechanism(sigma=sigma, value=np.sqrt(np.sum(np.square(np.array([float(s.value) for s in scalars])))), L=float(max_lipschitz_wrt_entity(scalars, entity=entity)), entity=entity.unique_name, name=m_id) def get_all_entity_mechanisms(scalars, sigma:float=1.5): entities = set() for s in scalars: for i_s in s.input_scalars: entities.add(i_s.entity) return {e:[get_mechanism_for_entity(scalars=scalars,entity=e,sigma=sigma)] for e in entities} def publish(scalars, acc, sigma: float = 1.5) -> float: acc_original = acc acc_temp = deepcopy(acc_original) ms = get_all_entity_mechanisms(scalars=scalars, sigma=sigma) acc_temp.append(ms) overbudgeted_entities = acc_temp.overbudgeted_entities # so that we don't modify the original polynomial # it might be fine to do so but just playing it safe if len(overbudgeted_entities) > 0: scalars = deepcopy(scalars) while len(overbudgeted_entities) > 0: input_scalars = set() for s in scalars: for i_s in s.input_scalars: input_scalars.add(i_s) for symbol in input_scalars: if symbol.entity in overbudgeted_entities: self.poly = self.poly.subs(symbol.poly, 0) break acc_temp = deepcopy(acc_original) # get mechanisms for new publish event ms = self.get_all_entity_mechanisms(sigma=sigma) 
acc_temp.append(ms) overbudgeted_entities = acc_temp.overbudgeted_entities output = [s.value + random.gauss(0, sigma) for s in scalars] acc_original.entity2ledger = deepcopy(acc_temp.entity2ledger) return output class Scalar(): def publish(self, acc, sigma: float = 1.5) -> float: return publish([self], acc=acc, sigma=sigma) def __str__(self) -> str: return "<"+str(type(self).__name__) + ": (" + str(self.min_val)+" < "+str(self.value)+" < " + str(self.max_val) + ")>" def __repr__(self) -> str: return str(self) class IntermediateScalar(Scalar): def __init__(self, poly, id=None): self.poly = poly self._gamma = None self.id = id if id else UID() def __rmul__(self, other: "Scalar") -> "Scalar": return self * other def __radd__(self, other: "Scalar") -> "Scalar": return self + other @property def input_scalars(self): phi_scalars = list() for ssid in self.input_polys: phi_scalars.append(ssid2obj[str(ssid)]) return phi_scalars @property def input_entities(self): return list(set([x.entity for x in self.input_scalars])) @property def input_polys(self): return self.poly.free_symbols @property def max_val(self): return -flatten_and_maximize_poly(-self.poly)[-1].fun @property def min_val(self): return flatten_and_maximize_poly(self.poly)[-1].fun @property def value(self): return self.poly.subs({obj.poly:obj.value for obj in self.input_scalars}) class IntermediatePhiScalar(IntermediateScalar): def __init__(self, poly, entity): super().__init__(poly=poly) self.entity = entity def max_lipschitz_wrt_entity(self, *args, **kwargs): return self.gamma.max_lipschitz_wrt_entity(*args, **kwargs) @property def max_lipschitz(self): return self.gamma.max_lipschitz def __mul__(self, other: "Scalar") -> "Scalar": if isinstance(other, IntermediateGammaScalar): return self.gamma * other if not isinstance(other, IntermediatePhiScalar): return IntermediatePhiScalar(poly=self.poly * other, entity=self.entity) # if other is referencing the same individual if self.entity == other.entity: return IntermediatePhiScalar(poly=self.poly * other.poly, entity=self.entity) return self.gamma * other.gamma def __add__(self, other: "Scalar") -> "Scalar": if isinstance(other, IntermediateGammaScalar): return self.gamma + other # if other is a public value if not isinstance(other, Scalar): return IntermediatePhiScalar(poly=self.poly + other, entity=self.entity) # if other is referencing the same individual if self.entity == other.entity: return IntermediatePhiScalar(poly=self.poly + other.poly, entity=self.entity) return self.gamma + other.gamma def __sub__(self, other: "Scalar") -> "Scalar": if isinstance(other, IntermediateGammaScalar): return self.gamma - other # if other is a public value if not isinstance(other, IntermediatePhiScalar): return IntermediatePhiScalar(poly=self.poly - other, entity=self.entity) # if other is referencing the same individual if self.entity == other.entity: return IntermediatePhiScalar(poly=self.poly - other.poly, entity=self.entity) return self.gamma - other.gamma @property def gamma(self): if self._gamma is None: self._gamma = GammaScalar(min_val=self.min_val, value=self.value, max_val=self.max_val, entity=self.entity) return self._gamma class OriginScalar(Scalar): """A scalar which stores the root polynomial values. When this is a superclass of PhiScalar it represents data that was loaded in by a data owner. 
When this is a superclass of GammaScalar this represents the node at which point data from mulitple entities was combined.""" def __init__(self, min_val, value, max_val, entity=None, id=None): self.id = id if id else UID() self._value = value self._min_val = min_val self._max_val = max_val self.entity = entity if entity is not None else Entity() @property def value(self): return self._value @property def max_val(self): return self._max_val @property def min_val(self): return self._min_val class PhiScalar(OriginScalar, IntermediatePhiScalar): """A scalar over data from a single entity""" def __init__(self, min_val, value, max_val, entity=None, id=None, ssid=None): super().__init__(min_val=min_val, value=value, max_val=max_val, entity=entity,id=id) # the scalar string identifier (SSID) - because we're using polynomial libraries # we need to be able to reference this object in string form. the library doesn't # know how to process things that aren't strings if ssid is None: ssid = str(self.id).split(" ")[1][:-1]# + "_" + str(self.entity.id).split(" ")[1][:-1] self.ssid = ssid IntermediatePhiScalar.__init__(self, poly=symbols(self.ssid), entity=self.entity) ssid2obj[self.ssid] = self class IntermediateGammaScalar(IntermediateScalar): """""" def __add__(self, other): if isinstance(other, Scalar): if isinstance(other, IntermediatePhiScalar): other = other.gamma return IntermediateGammaScalar(poly=self.poly + other.poly) return IntermediateGammaScalar(poly=self.poly + other) def __sub__(self, other): if isinstance(other, Scalar): if isinstance(other, IntermediatePhiScalar): other = other.gamma return IntermediateGammaScalar(poly=self.poly - other.poly) return IntermediateGammaScalar(poly=self.poly - other) def __mul__(self, other): if isinstance(other, Scalar): if isinstance(other, IntermediatePhiScalar): other = other.gamma return IntermediateGammaScalar(poly=self.poly * other.poly) return IntermediateGammaScalar(poly=self.poly * other) def max_lipschitz_via_explicit_search(self, force_all_searches=False): r1 = np.array([x.poly for x in self.input_scalars]) r2_diffs = np.array([GammaScalar(x.min_val,x.value,x.max_val, entity=x.entity).poly for x in self.input_scalars]) r2 = r1 + r2_diffs fr1 = self.poly fr2 = self.poly.copy().subs({x[0]:x[1] for x in list(zip(r1, r2))}) left = np.sum(np.square(fr1 - fr2)) ** 0.5 right = np.sum(np.square(r1 - r2)) ** 0.5 C = -left / right i2s, s2i = create_lookup_tables_for_symbol(C) search_fun = create_searchable_function(C, s2i) r1r2diff_zip = list(zip(r1, r2_diffs)) s2range = {} for _input_scalar, _additive_counterpart in r1r2diff_zip: input_scalar = ssid2obj[_input_scalar.name] additive_counterpart = ssid2obj[_additive_counterpart.name] s2range[input_scalar.ssid] = (input_scalar.min_val, input_scalar.max_val) s2range[additive_counterpart.ssid] = (input_scalar.min_val, input_scalar.max_val) rranges = list() for index,symbol in enumerate(i2s): rranges.append(s2range[symbol]) r2_indices_list = list() min_max_list = list() for r2_val in r2: r2_syms = [ssid2obj[x.name] for x in r2_val.free_symbols] r2_indices = [s2i[x.ssid] for x in r2_syms] r2_indices_list.append(r2_indices) min_max_list.append((r2_syms[0].min_val, r2_syms[0].max_val)) functions = list() for i in range(2): f1 = lambda x,i=i: x[r2_indices_list[i][0]]+x[r2_indices_list[i][1]] + min_max_list[i][0] f2 = lambda x,i=i: -(x[r2_indices_list[i][0]]+x[r2_indices_list[i][1]]) + min_max_list[i][1] functions.append(f1) functions.append(f2) constraints = [{'type':'ineq', 'fun':f} for f in functions] def 
non_negative_additive_terms(symbol_vector): out = 0 for index in [s2i[x.name] for x in r2_diffs]: out += (symbol_vector[index]**2) # theres a small bit of rounding error from this constraint - this should # only be used as a double check or as a backup!!! return out**0.5 - 1/2**16 constraints.append({'type':'ineq', 'fun':non_negative_additive_terms}) results = minimize_function(f=search_fun, rranges=rranges, constraints=constraints, force_all_searches=force_all_searches) return results, C def max_lipschitz_via_jacobian(self, input_entity=None, data_dependent=True, force_all_searches=False, try_hessian_shortcut=False): return max_lipschitz_via_jacobian(scalars=[self], input_entity=input_entity, data_dependent=data_dependent, force_all_searches=force_all_searches, try_hessian_shortcut=try_hessian_shortcut) @property def max_lipschitz(self): result = self.max_lipschitz_via_jacobian()[0][-1] if isinstance(result, float): return -result else: return -float(result.fun) def max_lipschitz_wrt_entity(self, entity): result = self.max_lipschitz_via_jacobian(input_entity=entity)[0][-1] if isinstance(result, float): return -result else: return -float(result.fun) class GammaScalar(OriginScalar, IntermediateGammaScalar): """A scalar over data from multiple entities""" def __init__(self, min_val, value, max_val, entity=None, id=None, ssid=None): super().__init__(min_val=min_val, value=value, max_val=max_val, entity=entity, id=id) # the scalar string identifier (SSID) - because we're using polynomial libraries # we need to be able to reference this object in string form. the library doesn't # know how to process things that aren't strings if ssid is None: ssid = str(self.id).split(" ")[1][:-1] + "_" + str(self.entity.id).split(" ")[1][:-1] self.ssid = ssid IntermediateGammaScalar.__init__(self, poly=symbols(self.ssid)) ssid2obj[self.ssid] = self ``` ```python from syft.core.adp.adversarial_accountant import AdversarialAccountant from syft.core.adp.entity import Entity from copy import deepcopy from syft.core.adp.idp_gaussian_mechanism import iDPGaussianMechanism ``` ```python # stdlib from typing import Dict as TypeDict from typing import KeysView as TypeKeysView from typing import List as TypeList from typing import Set as TypeSet # third party from autodp.autodp_core import Mechanism from autodp.transformer_zoo import Composition class AdversarialAccountant: def __init__(self, max_budget: float = 10, delta: float = 1e-6) -> None: self.entity2ledger: TypeDict[Entity, Mechanism] = {} self.max_budget = max_budget self.delta = delta def append(self, entity2mechanisms: TypeDict[str, TypeList[Mechanism]]) -> None: for key, ms in entity2mechanisms.items(): if key not in self.entity2ledger.keys(): self.entity2ledger[key] = list() for m in ms: self.entity2ledger[key].append(m) def get_eps_for_entity(self, entity: Entity) -> Scalar: # compose them with the transformation: compose. 
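Before wiring these helpers into an accountant, it can help to sanity-check the search machinery on a polynomial whose optimum is known. This is a minimal sketch (the symbols `s0`, `s1` and the search box are made up for illustration) using only the helpers defined above:

```python
# toy check: minimize (s0 - 1)**2 + (s1 + 2)**2, whose minimum is 0 at (1, -2)
s0, s1 = symbols("s0 s1")
toy_poly = (s0 - 1) ** 2 + (s1 + 2) ** 2

i2s, s2i = create_lookup_tables_for_symbol(toy_poly)
toy_fun = create_searchable_function(toy_poly, s2i)

toy_ranges = [(-5, 5)] * len(i2s)  # one (low, high) box per symbol, in index order
toy_results = minimize_function(f=toy_fun, rranges=toy_ranges)
print(toy_results[-1].x, toy_results[-1].fun)
```

The shgo search should land on (or very near) `x = [1, -2]` with an objective value of roughly zero.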
```python
from syft.core.adp.adversarial_accountant import AdversarialAccountant
from syft.core.adp.entity import Entity
from copy import deepcopy
from syft.core.adp.idp_gaussian_mechanism import iDPGaussianMechanism
```

```python
# stdlib
from typing import Dict as TypeDict
from typing import KeysView as TypeKeysView
from typing import List as TypeList
from typing import Set as TypeSet

# third party
from autodp.autodp_core import Mechanism
from autodp.transformer_zoo import Composition


class AdversarialAccountant:
    def __init__(self, max_budget: float = 10, delta: float = 1e-6) -> None:
        self.entity2ledger: TypeDict[Entity, Mechanism] = {}
        self.max_budget = max_budget
        self.delta = delta

    def append(self, entity2mechanisms: TypeDict[str, TypeList[Mechanism]]) -> None:
        for key, ms in entity2mechanisms.items():
            if key not in self.entity2ledger.keys():
                self.entity2ledger[key] = list()
            for m in ms:
                self.entity2ledger[key].append(m)

    def get_eps_for_entity(self, entity: Entity) -> Scalar:
        # compose them with the transformation: compose.
        compose = Composition()
        mechanisms = self.entity2ledger[entity]
        composed_mech = compose(mechanisms, [1] * len(mechanisms))

        # Query for eps given delta
        return PhiScalar(
            value=composed_mech.get_approxDP(self.delta),
            min_val=0,
            max_val=self.max_budget,
            entity=entity,
        )

    def has_budget(self, entity_name: str) -> bool:
        eps = self.get_eps_for_entity(entity_name)
        if eps.value is not None:
            return eps.value < self.max_budget

    @property
    def entities(self) -> TypeKeysView[str]:
        return self.entity2ledger.keys()

    @property
    def overbudgeted_entities(self) -> TypeSet[str]:
        entities = set()
        for ent in self.entities:
            if not self.has_budget(ent):
                entities.add(ent)
        return entities

    def print_ledger(self, delta: float = 1e-6) -> None:
        for entity, mechanisms in self.entity2ledger.items():
            print(str(entity) + "\t" + str(self.get_eps_for_entity(entity)._value))
```

```python
x = PhiScalar(0, 0.01, 1)
y = PhiScalar(0, 0.02, 1)
z = PhiScalar(0, 0.02, 1)

o = x * x + y * y + z
z = o * o * o
```

```python
for k in sym.class_registry.all_classes:
    if isinstance(z.poly, k):
        print(k)
```

    <class 'sympy.core.basic.Basic'>
    <class 'sympy.core.expr.Expr'>
    <class 'sympy.core.power.Pow'>

```python
isinstance(z.poly, sym.Symbol)
```

    False

```python
type(z.poly)
```

    sympy.core.power.Pow

```python
z.max_lipschitz_via_explicit_search()
```

    ([     fun: -46.47853095774777
          funl: array([-46.47853096, -46.14593642, -46.14593642, -45.72380853])
       message: 'Optimization terminated successfully.'
          nfev: 395
           nit: 2
         nlfev: 366
         nlhev: 0
         nljev: 35
       success: True
             x: array([0.62855774, 0.62855775, 1.        , 1.        , 0.37144225,
            0.37144226])
            xl: array([[0.62855774, 0.62855775, 1.        , 1.        , 0.37144225,
             0.37144226],
            [0.64833262, 0.5       , 1.        , 1.        , 0.5       ,
             0.35166738],
            [0.5       , 0.64833265, 1.        , 1.        , 0.35166735,
             0.5       ],
            [0.5       , 0.5       , 1.        , 1.        , 0.5       ,
             0.5       ]])],
     -(4ae41313c3a941f4bc818d040bf5d1c6_7b7ecdd5de5d4f7da6215a8a97db5ab8**2 + 79584fa604334b038341109712bb3f32_f196c0224a6f4cc389c7f122ac2c9e8e**2 + dcad1295b5b345ab88856e2b91cc5146_16cf56ce29ef4408993ee71a0e51b9dd**2)**(-0.5)*(((69f93b3cedbf4087a607b592c7a0b711_f196c0224a6f4cc389c7f122ac2c9e8e + 7f150aa4dc774de59837714580f3656e_7b7ecdd5de5d4f7da6215a8a97db5ab8 + c73d8bc28f6b4c5881f976b8a657e61f_16cf56ce29ef4408993ee71a0e51b9dd)**3 - (4ae41313c3a941f4bc818d040bf5d1c6_7b7ecdd5de5d4f7da6215a8a97db5ab8 + 69f93b3cedbf4087a607b592c7a0b711_f196c0224a6f4cc389c7f122ac2c9e8e + 79584fa604334b038341109712bb3f32_f196c0224a6f4cc389c7f122ac2c9e8e + 7f150aa4dc774de59837714580f3656e_7b7ecdd5de5d4f7da6215a8a97db5ab8 + c73d8bc28f6b4c5881f976b8a657e61f_16cf56ce29ef4408993ee71a0e51b9dd + dcad1295b5b345ab88856e2b91cc5146_16cf56ce29ef4408993ee71a0e51b9dd)**3)**2)**0.5)

```python
z.max_lipschitz_via_jacobian()
```

    ([     fun: -46.76537180435969
          funl: array([-46.7653718])
       message: 'Optimization terminated successfully.'
          nfev: 14
           nit: 2
         nlfev: 5
         nlhev: 0
         nljev: 1
       success: True
             x: array([1., 1., 1.])
            xl: array([[1., 1., 1.]])],
     -5.19615242270663*((69f93b3cedbf4087a607b592c7a0b711_f196c0224a6f4cc389c7f122ac2c9e8e + 7f150aa4dc774de59837714580f3656e_7b7ecdd5de5d4f7da6215a8a97db5ab8 + c73d8bc28f6b4c5881f976b8a657e61f_16cf56ce29ef4408993ee71a0e51b9dd)**4)**0.5)

```python

```

```python

```

```python
%%timeit
# NOTE: z2 is referenced here but never defined in the cells shown above
acc = AdversarialAccountant(max_budget=10)
z.publish(acc=acc, sigma=0.2)
z2.publish(acc=acc, sigma=0.2)
```

    13.9 ms ± 81.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

```python
%%timeit
acc = AdversarialAccountant(max_budget=10)
publish([z, z2], acc=acc, sigma=0.2)
```

    11 ms ± 124 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
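The accountant above delegates all of its epsilon bookkeeping to autodp. As a rough standalone sketch of that dependency (the sigma, delta, and composition counts here are arbitrary, and `GaussianMechanism` is autodp's stock mechanism, not the `iDPGaussianMechanism` used above), composing a mechanism with itself and querying approximate DP looks like:

```python
from autodp.mechanism_zoo import GaussianMechanism
from autodp.transformer_zoo import Composition

gm = GaussianMechanism(sigma=1.5, name="toy_gaussian")
compose = Composition()

# epsilon should grow as the same mechanism is composed more times
for k in (1, 5, 10):
    composed = compose([gm], [k])
    print(k, composed.get_approxDP(delta=1e-6))
```

This is also why batching `publish([z, z2], ...)` is cheaper than two separate `publish` calls in the timings above: one mechanism per entity is appended and composed instead of two.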
```python

```

```python

```

```python
from syft.lib.autograd.value import to_values
from syft.lib.autograd.value import grad


def make_entities(n=100):
    ents = list()
    for i in range(n):
        ents.append(Entity(name=names.get_full_name().replace(" ", "_")))
    return ents


def private(input_data, min_val, max_val, entities=None, is_discrete=False):
    self = input_data
    if entities is None:
        flat_data = self.flatten()
        entities = make_entities(n=len(flat_data))
        scalars = list()
        for i in flat_data:
            value = max(min(float(i), max_val), min_val)
            s = PhiScalar(
                value=value,
                min_val=min_val,
                max_val=max_val,
                entity=entities[len(scalars)],
                # is_discrete=is_discrete
            )
            scalars.append(s)
        return to_values(np.array(scalars)).reshape(input_data.shape)
    elif isinstance(entities, list):
        if len(entities) == len(self):
            output_rows = list()
            for row_i, row in enumerate(self):
                row_of_entries = list()
                for item in row.flatten():
                    s = PhiScalar(
                        value=item,
                        min_val=min_val,
                        max_val=max_val,
                        entity=entities[row_i],
                        # is_discrete=is_discrete
                    )
                    row_of_entries.append(s)
                output_rows.append(np.array(row_of_entries).reshape(row.shape))
            return to_values(np.array(output_rows)).reshape(self.shape)
        else:
            print(len(entities))
            print(len(self))
            raise Exception("len(entities) must equal len(self)")


class Tensor(np.ndarray):
    def __new__(cls, input_array, min_val=None, max_val=None, entities=None,
                info=None, is_discrete=False):
        is_private = False
        if min_val is not None and max_val is not None:
            input_array = private(
                input_array,
                min_val=min_val,
                max_val=max_val,
                entities=entities,
                is_discrete=is_discrete,
            )
            is_private = True
        else:
            input_array = to_values(input_array)

        obj = np.asarray(input_array).view(cls)
        obj.info = info
        obj.is_private = is_private
        return obj

    def __array_finalize__(self, obj):
        if obj is None:
            return
        self.info = getattr(obj, "info", None)
        self.is_private = getattr(obj, "is_private", None)

    def __array_wrap__(self, out_arr, context=None):
        output = out_arr.view(Tensor)

        is_private = False
        if context is not None:
            for arg in context[1]:
                if hasattr(arg, "is_private") and arg.is_private:
                    is_private = True
        output.is_private = is_private

        return output

    def backward(self):
        if self.shape == ():
            return grad(self.flatten()[0])
        else:
            raise Exception("Can only call .backward() on single-value tensor.")

    @property
    def grad(self):
        grads = list()
        for val in self.flatten().tolist():
            grads.append(val._grad)
        return Tensor(grads).reshape(self.shape)

    def slow_publish(self, **kwargs):
        grads = list()
        for val in self.flatten().tolist():
            grads.append(val.value.publish(**kwargs))
        return np.array(grads).reshape(self.shape)

    def publish(self, **kwargs):
        grads = list()
        for val in self.flatten().tolist():
            grads.append(val.value)
        grads = publish(scalars=grads, **kwargs)
        return np.array(grads).reshape(self.shape)

    @property
    def value(self):
        values = list()
        for val in self.flatten().tolist():
            if hasattr(val.value, "value"):
                values.append(val.value.value)
            else:
                values.append(val.value)
        return np.array(values).reshape(self.shape)

    def private(self, min_val, max_val, entities=None, is_discrete=False):
        if self.is_private:
            raise Exception("Cannot call .private() on tensor which is already private")
        return Tensor(self.value, min_val=min_val, max_val=max_val,
                      entities=entities, is_discrete=is_discrete)
```

```python
acc = AdversarialAccountant(max_budget=3000000)

entities = [
    Entity(unique_name="Tudor"),
    Entity(unique_name="Madhava"),
    Entity(unique_name="Kritika"),
    Entity(unique_name="George"),
]

x = Tensor(np.array([[1, 1], [1, 0], [0, 1], [0, 0]])).private(min_val=0, max_val=1, entities=entities, is_discrete=True)
y = Tensor(np.array([[1], [1], [0], [0]])).private(min_val=0, max_val=1, entities=entities, is_discrete=False)
_weights = Tensor(np.random.uniform(size=(2, 1)))
```

```python
weights = _weights + 0
acc = AdversarialAccountant(max_budget=3000000)

for i in range(10):
    batch_loss = 0

    pred = x.dot(weights)
    loss = np.mean(np.square(y - pred))
    loss.backward()

    weight_grad = (weights.grad * 0.5)
    weight_grad = weight_grad.publish(acc=acc, sigma=0.1)

    weights = weights - weight_grad
    batch_loss += loss.value

    acc.print_ledger()
# print(weights)
```

    <Entity:George>	7.316032539740162
    <Entity:Tudor>	7.316032539740162
    <Entity:Madhava>	7.316032539740162
    <Entity:Kritika>	7.316032539740162
    <Entity:George>	9.520798620725053
    <Entity:Tudor>	9.520798620725053
    <Entity:Madhava>	9.520798620725053
    <Entity:Kritika>	9.520798620725053
    <Entity:George>	9.726133952597301
    <Entity:Tudor>	9.726133952597301
    <Entity:Madhava>	9.726133952597301
    <Entity:Kritika>	9.726133952597301
    <Entity:George>	10.043728493829402
    <Entity:Tudor>	10.043728493829402
    <Entity:Madhava>	10.043728493829402
    <Entity:Kritika>	10.043728493829402
    <Entity:George>	10.39935836365433
    <Entity:Tudor>	10.39935836365433
    <Entity:Madhava>	10.39935836365433
    <Entity:Kritika>	10.39935836365433
    <Entity:George>	10.413935084796428
    <Entity:Tudor>	10.413935084796428
    <Entity:Madhava>	10.413935084796428
    <Entity:Kritika>	10.413935084796428
    <Entity:George>	10.464816287127624
    <Entity:Tudor>	10.464816287127624
    <Entity:Madhava>	10.464816287127624
    <Entity:Kritika>	10.464816287127624
    <Entity:George>	11.409293745747654
    <Entity:Tudor>	11.409293745747654
    <Entity:Madhava>	11.409293745747654
    <Entity:Kritika>	11.409293745747654
    <Entity:George>	12.630258946985862
    <Entity:Tudor>	12.630258946985862
    <Entity:Madhava>	12.630258946985862
    <Entity:Kritika>	12.630258946985862
    <Entity:George>	12.728530248476105
    <Entity:Tudor>	12.728530248476105
    <Entity:Madhava>	12.728530248476105
    <Entity:Kritika>	12.728530248476105

```python

```

```python

```
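As a last sanity check of the publish pipeline above (the sigma and bounds here are arbitrary), a single scalar can be published on a fresh accountant and the resulting per-entity epsilon read straight off the ledger:

```python
# a minimal sketch: publish one scalar, then inspect the accountant's ledger
acc_check = AdversarialAccountant(max_budget=10)
s = PhiScalar(0, 0.5, 1)
noisy = s.publish(acc=acc_check, sigma=1.5)  # [s.value + Gaussian noise]
print(noisy)
acc_check.print_ledger()
```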
8664411e6b1afbb23775436cd9b3742bd1fb09ad
43,214
ipynb
Jupyter Notebook
packages/syft/examples/experimental/adversarial_accountant/Untitled1.ipynb
callezenwaka/PySyft
2545c302441cfe727ec095c4f9aa136bff02be32
[ "Apache-1.1" ]
2
2022-02-18T03:48:27.000Z
2022-03-05T06:13:57.000Z
packages/syft/examples/experimental/adversarial_accountant/Untitled1.ipynb
callezenwaka/PySyft
2545c302441cfe727ec095c4f9aa136bff02be32
[ "Apache-1.1" ]
3
2021-11-17T15:34:03.000Z
2021-12-08T14:39:10.000Z
packages/syft/examples/experimental/adversarial_accountant/Untitled1.ipynb
callezenwaka/PySyft
2545c302441cfe727ec095c4f9aa136bff02be32
[ "Apache-1.1" ]
1
2021-08-19T12:23:01.000Z
2021-08-19T12:23:01.000Z
39.357013
864
0.528648
true
8,363
Qwen/Qwen-72B
1. YES 2. YES
0.890294
0.757794
0.67466
__label__eng_Latn
0.661755
0.405792
```python
import ambulance_game as abg
import numpy as np
import sympy as sym
from sympy.abc import a, b, c, d, e, f, g, h, i, j
```

# Classic Markov Chain

```python
def get_P0(lambda_2, lambda_1, mu, num_of_servers, threshold):
    ro = (lambda_2 + lambda_1) / (mu * num_of_servers)
    summation_1 = np.sum(
        [
            ((ro * num_of_servers) ** i) / np.math.factorial(i)
            for i in range(num_of_servers)
        ]
    )
    summation_2 = ((num_of_servers * ro) ** num_of_servers) / (
        np.math.factorial(num_of_servers) * (1 - ro)
    )
    P_0 = 1 / (summation_1 + summation_2)
    return P_0
```

```python
def get_prob(P_0, state, lambda_2, lambda_1, mu, num_of_servers, threshold):
    u = state[0]
    v = state[1]
    ro = (lambda_2 + lambda_1) / (mu * num_of_servers)
    # P_0 = get_P0(lambda_2, lambda_1, mu, num_of_servers, threshold)
    if v < num_of_servers:
        P_i = P_0 * ((ro * num_of_servers) ** v) / np.math.factorial(v)
        return P_i
    # if u != 0:
    #     ro_a = (lambda_2) / (mu * num_of_servers)
    #     P_i_1 = (P_0 * (ro ** v) * (num_of_servers ** num_of_servers) / (np.math.factorial(num_of_servers)))
    #     P_i_2 = (P_0 * (ro_a ** u) * (num_of_servers ** num_of_servers) / (np.math.factorial(num_of_servers)))
    #     return P_i_1 + P_i_2
    P_i = (P_0 * (ro ** v) * (num_of_servers ** num_of_servers) / (np.math.factorial(num_of_servers)))
    return P_i
```

```python
lambda_2 = 1
lambda_1 = 1
mu = 2
num_of_servers = 3
threshold = 6
system_capacity = threshold - 1
buffer_capacity = 1
```

```python
abg.markov.visualise_ambulance_markov_chain(num_of_servers, threshold, system_capacity, buffer_capacity)
```

```python
all_states = abg.markov.build_states(threshold, system_capacity, buffer_capacity)
Q = abg.markov.get_transition_matrix(
    lambda_2, lambda_1, mu, num_of_servers, threshold, system_capacity, buffer_capacity
)
sol = abg.markov.get_steady_state_algebraically(Q, algebraic_function=np.linalg.solve)
sol
```

    array([0.36486486, 0.36486486, 0.18243243, 0.06081081, 0.02027027,
           0.00675676])

```python
exact_P0 = get_P0(lambda_2, lambda_1, mu, num_of_servers, threshold)
sum([get_prob(exact_P0, state, lambda_2, lambda_1, mu, num_of_servers, threshold) for state in all_states])
```

    0.9966329966329966

```python
sum([get_prob(sol[0], state, lambda_2, lambda_1, mu, num_of_servers, threshold) for state in all_states])
```

    0.9999999999999999

# Using Sympy

```python
lambda_2 = 1
lambda_1 = 1
mu = 2
num_of_servers = 1
threshold = 3
system_capacity = threshold - 1
buffer_capacity = 1
```

## Numeric

```python
Q_num = abg.markov.get_transition_matrix(lambda_2=lambda_2, lambda_1=lambda_1, mu=mu, num_of_servers=num_of_servers, threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity)
Q_num
```

    array([[-2.,  2.,  0.],
           [ 2., -4.,  2.],
           [ 0.,  2., -2.]])

```python
dimension = Q_num.shape[0]
print(dimension)
```

    3

```python
Q_num.transpose()[:-1,:]
```

    array([[-2.,  2.,  0.],
           [ 2., -4.,  2.]])

```python
M_num = np.vstack((Q_num.transpose()[:-1], np.ones(dimension)))
M_num
```

    array([[-2.,  2.,  0.],
           [ 2., -4.,  2.],
           [ 1.,  1.,  1.]])

```python
b_num = np.vstack((np.zeros((dimension - 1, 1)), [1]))
b_num
```

    array([[0.],
           [0.],
           [1.]])

```python
np.linalg.solve(M_num, b_num).transpose()[0]
```

    array([0.33333333, 0.33333333, 0.33333333])
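The augmentation above deserves one line of justification: the balance equations $\pi Q = 0$ are linearly dependent (the rows of $Q$ sum to zero), so one of them is dropped and replaced by the normalisation condition. Concretely, the linear system being solved is

$$
M \pi^\top = b, \qquad
M = \begin{pmatrix} \tilde{Q}^\top \\ \mathbf{1} \end{pmatrix}, \qquad
b = (0, \dots, 0, 1)^\top,
$$

where $\tilde{Q}^\top$ is $Q^\top$ with its last row removed and $\mathbf{1}$ is a row of ones enforcing $\sum_i \pi_i = 1$. The same construction is repeated symbolically below.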
## Symbolic

```python
Q_sym = abg.markov.get_symbolic_transition_matrix(num_of_servers=num_of_servers, threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity)
Q_sym
```

$\displaystyle \left[\begin{matrix}- \Lambda & \Lambda & 0\\\mu & - \Lambda - \mu & \Lambda\\0 & \mu & - \mu\end{matrix}\right]$

```python
dimension = Q_sym.shape[0]
dimension
```

    3

```python
sym.ones(1, dimension)
```

$\displaystyle \left[\begin{matrix}1 & 1 & 1\end{matrix}\right]$

```python
M_sym = sym.Matrix([Q_sym.transpose()[:-1,:], sym.ones(1,dimension)])
M_sym
```

$\displaystyle \left[\begin{matrix}- \Lambda & \mu & 0\\\Lambda & - \Lambda - \mu & \mu\\1 & 1 & 1\end{matrix}\right]$

```python
b_sym = sym.Matrix([sym.zeros(dimension - 1, 1), [1]])
b_sym
```

$\displaystyle \left[\begin{matrix}0\\0\\1\end{matrix}\right]$

```python
system = M_sym.col_insert(5,b_sym)
system
```

$\displaystyle \left[\begin{matrix}- \Lambda & \mu & 0 & 0\\\Lambda & - \Lambda - \mu & \mu & 0\\1 & 1 & 1 & 1\end{matrix}\right]$

```python
# np.linalg.solve(M_sym, b_sym).transpose()[0]
sym.init_printing(use_latex='mathjax')
sym_pi = sym.solve_linear_system_LU(system, [a,b,c,d,e])
sym_pi
```

$\displaystyle \left\{ a : - \frac{\Lambda + \frac{\Lambda \left(\Lambda + \mu\right)}{- 2 \Lambda - \mu}}{\Lambda - \frac{\left(- \Lambda + \mu\right) \left(\Lambda + \mu\right)}{- 2 \Lambda - \mu}} + 1 - \frac{- \Lambda - \frac{\left(- \Lambda + \mu\right) \left(\Lambda + \frac{\Lambda \left(\Lambda + \mu\right)}{- 2 \Lambda - \mu}\right)}{\Lambda - \frac{\left(- \Lambda + \mu\right) \left(\Lambda + \mu\right)}{- 2 \Lambda - \mu}}}{- 2 \Lambda - \mu}, \ b : \frac{- \Lambda - \frac{\left(- \Lambda + \mu\right) \left(\Lambda + \frac{\Lambda \left(\Lambda + \mu\right)}{- 2 \Lambda - \mu}\right)}{\Lambda - \frac{\left(- \Lambda + \mu\right) \left(\Lambda + \mu\right)}{- 2 \Lambda - \mu}}}{- 2 \Lambda - \mu}, \ c : \frac{\Lambda + \frac{\Lambda \left(\Lambda + \mu\right)}{- 2 \Lambda - \mu}}{\Lambda - \frac{\left(- \Lambda + \mu\right) \left(\Lambda + \mu\right)}{- 2 \Lambda - \mu}}\right\}$

```python
[sym.simplify(sym_pi[key]) for key in sym_pi.keys()]
```

$\displaystyle \left[ \frac{\mu^{2}}{\Lambda^{2} + \Lambda \mu + \mu^{2}}, \ \frac{\Lambda \mu}{\Lambda^{2} + \Lambda \mu + \mu^{2}}, \ \frac{\Lambda^{2}}{\Lambda^{2} + \Lambda \mu + \mu^{2}}\right]$

```python

```

# Multiple examples

```python
sym.init_printing(use_latex='mathjax')
```

```python
def get_symbolic_pi(num_of_servers, threshold, system_capacity, buffer_capacity):
    Q_sym = abg.markov.get_symbolic_transition_matrix(num_of_servers=num_of_servers, threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity)
    dimension = Q_sym.shape[0]
    if dimension > 7:
        return "Capacity of 6 exceeded"
    M_sym = sym.Matrix([Q_sym.transpose()[:-1,:], sym.ones(1,dimension)])
    b_sym = sym.Matrix([sym.zeros(dimension - 1, 1), [1]])
    system = M_sym.col_insert(dimension,b_sym)
    sol = sym.solve_linear_system_LU(system, [a,b,c,d,e,f,g])
    return sol
```

```python

```

# $C=1, T=3, N=2, M=1$

```python
num_of_servers = 1
threshold = 3
system_capacity = 2
buffer_capacity = 1
```

```python
abg.markov.visualise_ambulance_markov_chain(num_of_servers, threshold, system_capacity, buffer_capacity)
```

```python
sym_pi = get_symbolic_pi(num_of_servers=num_of_servers, threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity)
```

```python
abg.markov.build_states(threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity)
```

$\displaystyle \left[ \left( 0, \ 0\right), \ \left( 0, \ 1\right), \ \left( 0, \ 2\right)\right]$

```python
[sym.simplify(sym_pi[key]) for key in sym_pi.keys()]
```

$\displaystyle \left[ \frac{\mu^{2}}{\Lambda^{2} + \Lambda \mu + \mu^{2}}, \ \frac{\Lambda \mu}{\Lambda^{2} + \Lambda \mu + \mu^{2}}, \ \frac{\Lambda^{2}}{\Lambda^{2} + \Lambda \mu + \mu^{2}}\right]$

```python

```
# $C=1, T=4, N=3, M=1$

```python
num_of_servers = 1
threshold = 4
system_capacity = 3
buffer_capacity = 1
```

```python
abg.markov.visualise_ambulance_markov_chain(num_of_servers, threshold, system_capacity, buffer_capacity)
```

```python
sym_pi = get_symbolic_pi(num_of_servers=num_of_servers, threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity)
```

```python
abg.markov.build_states(threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity)
```

$\displaystyle \left[ \left( 0, \ 0\right), \ \left( 0, \ 1\right), \ \left( 0, \ 2\right), \ \left( 0, \ 3\right)\right]$

```python
[sym.simplify(sym_pi[key]) for key in sym_pi.keys()]
```

$\displaystyle \left[ \frac{\mu^{3}}{\Lambda^{3} + \Lambda^{2} \mu + \Lambda \mu^{2} + \mu^{3}}, \ \frac{\Lambda \mu^{2}}{\Lambda^{3} + \Lambda^{2} \mu + \Lambda \mu^{2} + \mu^{3}}, \ \frac{\Lambda^{2} \mu}{\Lambda^{3} + \Lambda^{2} \mu + \Lambda \mu^{2} + \mu^{3}}, \ \frac{\Lambda^{3}}{\Lambda^{3} + \Lambda^{2} \mu + \Lambda \mu^{2} + \mu^{3}}\right]$

# $C=1, T=1, N=2, M=1$

```python
num_of_servers = 1
threshold = 1
system_capacity = 2
buffer_capacity = 1
```

```python
abg.markov.visualise_ambulance_markov_chain(num_of_servers, threshold, system_capacity, buffer_capacity)
```

```python
abg.markov.build_states(threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity)
```

$\displaystyle \left[ \left( 0, \ 0\right), \ \left( 0, \ 1\right), \ \left( 1, \ 1\right), \ \left( 0, \ 2\right), \ \left( 1, \ 2\right)\right]$

```python
sym_pi = get_symbolic_pi(num_of_servers=num_of_servers, threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity)
```

```python
sym_state_probs = [0 for _ in range(5)]
```

```python
# (0,0)
# run after the cells below, so that entries 1-4 are already filled in
sym_state_probs[0] = sym.simplify(1 - sum(sym_state_probs[1:]))
sym_state_probs[0]
```

$\displaystyle \frac{\mu^{3} \left(\lambda^{A} + \mu\right)}{\Lambda \left(\lambda^{A}\right)^{2} \lambda^{o} + \Lambda \left(\lambda^{A}\right)^{2} \mu + \Lambda \lambda^{A} \left(\lambda^{o}\right)^{2} + 3 \Lambda \lambda^{A} \lambda^{o} \mu + 2 \Lambda \lambda^{A} \mu^{2} + \Lambda \lambda^{o} \mu^{2} + \Lambda \mu^{3} + \lambda^{A} \mu^{3} + \mu^{4}}$

```python
# (0,1)
sym_state_probs[1] = sym.simplify(sym_pi[b])
sym_state_probs[1]
```

$\displaystyle \frac{\Lambda \mu^{2} \left(\lambda^{A} + \mu\right)}{\Lambda \left(\lambda^{A}\right)^{2} \lambda^{o} + \Lambda \left(\lambda^{A}\right)^{2} \mu + \Lambda \lambda^{A} \left(\lambda^{o}\right)^{2} + 3 \Lambda \lambda^{A} \lambda^{o} \mu + 2 \Lambda \lambda^{A} \mu^{2} + \Lambda \lambda^{o} \mu^{2} + \Lambda \mu^{3} + \lambda^{A} \mu^{3} + \mu^{4}}$
```python
# (1,1)
sym_state_probs[2] = sym.simplify(sym_pi[c])
sym_state_probs[2]
```

$\displaystyle \frac{\Lambda \lambda^{A} \mu \left(\lambda^{A} + \lambda^{o} + \mu\right)}{\Lambda \left(\lambda^{A}\right)^{2} \lambda^{o} + \Lambda \left(\lambda^{A}\right)^{2} \mu + \Lambda \lambda^{A} \left(\lambda^{o}\right)^{2} + 3 \Lambda \lambda^{A} \lambda^{o} \mu + 2 \Lambda \lambda^{A} \mu^{2} + \Lambda \lambda^{o} \mu^{2} + \Lambda \mu^{3} + \lambda^{A} \mu^{3} + \mu^{4}}$

```python
# (0,2)
sym_state_probs[3] = sym.simplify(sym_pi[d])
sym_state_probs[3]
```

$\displaystyle \frac{\Lambda \lambda^{o} \mu^{2}}{\Lambda \left(\lambda^{A}\right)^{2} \lambda^{o} + \Lambda \left(\lambda^{A}\right)^{2} \mu + \Lambda \lambda^{A} \left(\lambda^{o}\right)^{2} + 3 \Lambda \lambda^{A} \lambda^{o} \mu + 2 \Lambda \lambda^{A} \mu^{2} + \Lambda \lambda^{o} \mu^{2} + \Lambda \mu^{3} + \lambda^{A} \mu^{3} + \mu^{4}}$

```python
# (1,2)
sym_state_probs[4] = sym.simplify(sym_pi[e])
sym_state_probs[4]
```

$\displaystyle \frac{\Lambda \lambda^{A} \lambda^{o} \left(\lambda^{A} + \lambda^{o} + 2 \mu\right)}{\Lambda \left(\lambda^{A}\right)^{2} \lambda^{o} + \Lambda \left(\lambda^{A}\right)^{2} \mu + \Lambda \lambda^{A} \left(\lambda^{o}\right)^{2} + 3 \Lambda \lambda^{A} \lambda^{o} \mu + 2 \Lambda \lambda^{A} \mu^{2} + \Lambda \lambda^{o} \mu^{2} + \Lambda \mu^{3} + \lambda^{A} \mu^{3} + \mu^{4}}$

```python
sym.simplify(sum(sym_state_probs))
```

$\displaystyle 1$

```python

```

# $C=2, T=1, N=2, M=1$

```python
num_of_servers = 2
threshold = 1
system_capacity = 2
buffer_capacity = 1
```

```python
abg.markov.visualise_ambulance_markov_chain(num_of_servers, threshold, system_capacity, buffer_capacity)
```

```python
abg.markov.build_states(threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity)
```

$\displaystyle \left[ \left( 0, \ 0\right), \ \left( 0, \ 1\right), \ \left( 1, \ 1\right), \ \left( 0, \ 2\right), \ \left( 1, \ 2\right)\right]$

```python
sym_pi = get_symbolic_pi(num_of_servers=num_of_servers, threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity)
```

```python
sym_state_probs = [0 for _ in range(5)]
```

```python
# (0,0)
# run after the cells below, so that entries 1-4 are already filled in
sym_state_probs[0] = sym.simplify(1 - sum(sym_state_probs[1:]))
sym_state_probs[0]
```

$\displaystyle \frac{2 \mu^{3} \left(\lambda^{A} + 2 \mu\right)}{\Lambda \left(\lambda^{A}\right)^{2} \lambda^{o} + 2 \Lambda \left(\lambda^{A}\right)^{2} \mu + \Lambda \lambda^{A} \left(\lambda^{o}\right)^{2} + 5 \Lambda \lambda^{A} \lambda^{o} \mu + 6 \Lambda \lambda^{A} \mu^{2} + 2 \Lambda \lambda^{o} \mu^{2} + 4 \Lambda \mu^{3} + 2 \lambda^{A} \mu^{3} + 4 \mu^{4}}$

```python
# (0,1)
sym_state_probs[1] = sym.simplify(sym_pi[b])
sym_state_probs[1]
```

$\displaystyle \frac{2 \Lambda \mu^{2} \left(\lambda^{A} + 2 \mu\right)}{\Lambda \left(\lambda^{A}\right)^{2} \lambda^{o} + 2 \Lambda \left(\lambda^{A}\right)^{2} \mu + \Lambda \lambda^{A} \left(\lambda^{o}\right)^{2} + 5 \Lambda \lambda^{A} \lambda^{o} \mu + 6 \Lambda \lambda^{A} \mu^{2} + 2 \Lambda \lambda^{o} \mu^{2} + 4 \Lambda \mu^{3} + 2 \lambda^{A} \mu^{3} + 4 \mu^{4}}$

```python
# (1,1)
sym_state_probs[2] = sym.simplify(sym_pi[c])
sym_state_probs[2]
```

$\displaystyle \frac{2 \Lambda \lambda^{A} \mu \left(\lambda^{A} + \lambda^{o} + 2 \mu\right)}{\Lambda \left(\lambda^{A}\right)^{2} \lambda^{o} + 2 \Lambda \left(\lambda^{A}\right)^{2} \mu + \Lambda \lambda^{A} \left(\lambda^{o}\right)^{2} + 5 \Lambda \lambda^{A} \lambda^{o} \mu + 6 \Lambda \lambda^{A} \mu^{2} + 2 \Lambda \lambda^{o} \mu^{2} + 4 \Lambda \mu^{3} + 2 \lambda^{A} \mu^{3} + 4 \mu^{4}}$

```python
# (0,2)
sym_state_probs[3] = sym.simplify(sym_pi[d])
sym_state_probs[3]
```

$\displaystyle \frac{2 \Lambda \lambda^{o} \mu^{2}}{\Lambda \left(\lambda^{A}\right)^{2} \lambda^{o} + 2 \Lambda \left(\lambda^{A}\right)^{2} \mu + \Lambda \lambda^{A} \left(\lambda^{o}\right)^{2} + 5 \Lambda \lambda^{A} \lambda^{o} \mu + 6 \Lambda \lambda^{A} \mu^{2} + 2 \Lambda \lambda^{o} \mu^{2} + 4 \Lambda \mu^{3} + 2 \lambda^{A} \mu^{3} + 4 \mu^{4}}$

```python
# (1,2)
sym_state_probs[4] = sym.simplify(sym_pi[e])
sym_state_probs[4]
```

$\displaystyle \frac{\Lambda \lambda^{A} \lambda^{o} \left(\lambda^{A} + \lambda^{o} + 3 \mu\right)}{\Lambda \left(\lambda^{A}\right)^{2} \lambda^{o} + 2 \Lambda \left(\lambda^{A}\right)^{2} \mu + \Lambda \lambda^{A} \left(\lambda^{o}\right)^{2} + 5 \Lambda \lambda^{A} \lambda^{o} \mu + 6 \Lambda \lambda^{A} \mu^{2} + 2 \Lambda \lambda^{o} \mu^{2} + 4 \Lambda \mu^{3} + 2 \lambda^{A} \mu^{3} + 4 \mu^{4}}$
```python
sym.simplify(sum(sym_state_probs))
```

$\displaystyle 1$

```python

```

# $C=1, T=2, N=2, M=2$

```python
num_of_servers = 1
threshold = 2
system_capacity = 2
buffer_capacity = 2
```

```python
abg.markov.visualise_ambulance_markov_chain(num_of_servers, threshold, system_capacity, buffer_capacity)
```

```python
sym_pi = get_symbolic_pi(num_of_servers=num_of_servers, threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity)
```

```python
abg.markov.build_states(threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity)
```

$\displaystyle \left[ \left( 0, \ 0\right), \ \left( 0, \ 1\right), \ \left( 0, \ 2\right), \ \left( 1, \ 2\right), \ \left( 2, \ 2\right)\right]$

```python
sym_Lambda = sym.symbols("Lambda")
sym_lambda_1 = sym.symbols("lambda_1")
sym_lambda_2 = sym.symbols("lambda_2")
sym_mu = sym.symbols("mu")
```

```python
# (0,0)
(sym.simplify(sym_pi[e]) * sym_mu ** 4) / (sym_lambda_2 ** 2 * sym_Lambda ** 2)
```

$\displaystyle \frac{\mu^{4}}{\Lambda^{2} \left(\lambda^{A}\right)^{2} + \Lambda^{2} \lambda^{A} \mu + \Lambda^{2} \mu^{2} + \Lambda \mu^{3} + \mu^{4}}$

```python
# (0,1)
(sym.simplify(sym_pi[e]) * sym_mu ** 3) / (sym_lambda_2 ** 2 * sym_Lambda)
```

$\displaystyle \frac{\Lambda \mu^{3}}{\Lambda^{2} \left(\lambda^{A}\right)^{2} + \Lambda^{2} \lambda^{A} \mu + \Lambda^{2} \mu^{2} + \Lambda \mu^{3} + \mu^{4}}$

```python
# (0,2)
sym.simplify(sym_pi[c])
```

$\displaystyle \frac{\Lambda^{2} \mu^{2}}{\Lambda^{2} \left(\lambda^{A}\right)^{2} + \Lambda^{2} \lambda^{A} \mu + \Lambda^{2} \mu^{2} + \Lambda \mu^{3} + \mu^{4}}$

```python
# (1,2)
sym.simplify(sym_pi[d])
```

$\displaystyle \frac{\Lambda^{2} \lambda^{A} \mu}{\Lambda^{2} \left(\lambda^{A}\right)^{2} + \Lambda^{2} \lambda^{A} \mu + \Lambda^{2} \mu^{2} + \Lambda \mu^{3} + \mu^{4}}$

```python
# (2,2)
sym.simplify(sym_pi[e])
```

$\displaystyle \frac{\Lambda^{2} \left(\lambda^{A}\right)^{2}}{\Lambda^{2} \left(\lambda^{A}\right)^{2} + \Lambda^{2} \lambda^{A} \mu + \Lambda^{2} \mu^{2} + \Lambda \mu^{3} + \mu^{4}}$

```python

```

# Verify results

```python
num_of_servers = 1
threshold = 1
system_capacity = 2
buffer_capacity = 1
```

```python
abg.markov.build_states(threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity)
sym_pi = get_symbolic_pi(num_of_servers=num_of_servers, threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity)
```

```python
lambda_2 = 0.2
lambda_1 = 0.1
mu = 0.4
```

```python
[sym_pi[key].subs({sym_Lambda: lambda_2 + lambda_1, sym_lambda_1: lambda_1, sym_lambda_2: lambda_2, sym_mu: mu}) for key in sym_pi.keys()]
```

    [0.402515723270440, 0.301886792452830, 0.176100628930818, 0.0503144654088050, 0.0691823899371068]

```python
all_states = abg.markov.build_states(threshold, system_capacity, buffer_capacity)
Q = abg.markov.get_transition_matrix(
    lambda_2, lambda_1, mu, num_of_servers, threshold, system_capacity, buffer_capacity
)
sol3 = abg.markov.get_steady_state_algebraically(Q, algebraic_function=np.linalg.solve)
list(sol3)
```

    [0.4025157232704404, 0.30188679245283023, 0.17610062893081757, 0.050314465408805034, 0.06918238993710682]

```python

```

# $\pi$ for larger models

# $C=1, T=3, N=4, M=1$

```python
lambda_2 = 0.2
lambda_1 = 0.1
mu = 0.4
num_of_servers = 1
threshold = 3
system_capacity = 4
buffer_capacity = 1
```
```python
abg.markov.visualise_ambulance_markov_chain(num_of_servers, threshold, system_capacity, buffer_capacity)
```

```python
all_states = abg.markov.build_states(threshold, system_capacity, buffer_capacity)
Q = abg.markov.get_transition_matrix(
    lambda_2, lambda_1, mu, num_of_servers, threshold, system_capacity, buffer_capacity
)
sol = abg.markov.get_steady_state_algebraically(Q, algebraic_function=np.linalg.solve)
sol
```

    array([0.31771641, 0.23828731, 0.17871548, 0.13403661, 0.07818802,
           0.02233944, 0.03071672])

```python
def pi_for_1_server_3_threshold_4_sys_1_par(lambda_2, lambda_1, mu, num_of_servers, threshold, system_capacity, buffer_capacity):
    all_states = abg.markov.build_states(threshold, system_capacity, buffer_capacity)
    Lambda = lambda_2 + lambda_1
    pi = [0 for _ in range(len(all_states))]
    pi[0] = (lambda_2) * (mu ** 5) + (mu ** 6)    # (0,0)
    pi[1] = Lambda * lambda_2 * (mu ** 4) + Lambda * (mu ** 5)    # (0,1)
    pi[2] = (Lambda ** 2) * lambda_2 * (mu ** 3) + (Lambda ** 2) * (mu ** 4)    # (0,2)
    pi[3] = (Lambda ** 3) * lambda_2 * (mu ** 2) + (Lambda ** 3) * (mu ** 3)    # (0,3)
    pi[4] = (Lambda ** 3) * lambda_1 * lambda_2 * mu + (Lambda ** 3) * lambda_2 * (mu ** 2) + (Lambda ** 3) * lambda_2 * lambda_2 * mu    # (1,3)
    pi[5] = (Lambda ** 3) * lambda_1 * (mu ** 2)    # (0,4)
    pi[6] = (Lambda ** 3) * (lambda_1 ** 2) * lambda_2 + (Lambda ** 3) * lambda_1 * (lambda_2 ** 2) + 2 * (Lambda ** 3) * lambda_1 * lambda_2 * mu    # (1,4)

    sum_of_rates = sum(pi)
    pi = [i / sum_of_rates for i in pi]
    return pi
```

```python
ans = pi_for_1_server_3_threshold_4_sys_1_par(lambda_2, lambda_1, mu, num_of_servers, threshold, system_capacity, buffer_capacity)
```

```python
for i in range(7):
    print(str(ans[i]) + " " + str(sol[i]) + " ---> " + str(round(ans[i], 10) == round(sol[i], 10)))
```

    0.3177164132795532 0.3177164132795532 ---> True
    0.2382873099596649 0.23828730995966502 ---> True
    0.1787154824697487 0.1787154824697487 ---> True
    0.13403661185231155 0.13403661185231147 ---> True
    0.07818802358051506 0.07818802358051498 ---> True
    0.022339435308718594 0.022339435308718583 ---> True
    0.030716723549488064 0.030716723549487967 ---> True

```python

```
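Agreement at a single parameter point could be a coincidence, so a quick extra check worth running (the parameter triples below are arbitrary; this cell is a sketch, not part of the original notebook's output) is to repeat the comparison for other rates using the same library calls:

```python
# hypothetical extra check: compare the closed form against the numeric
# solve for two more (lambda_2, lambda_1, mu) triples
for params in [(0.3, 0.2, 0.5), (0.1, 0.4, 0.9)]:
    l2, l1, m = params
    Q_check = abg.markov.get_transition_matrix(
        l2, l1, m, num_of_servers, threshold, system_capacity, buffer_capacity
    )
    numeric = abg.markov.get_steady_state_algebraically(
        Q_check, algebraic_function=np.linalg.solve
    )
    closed_form = pi_for_1_server_3_threshold_4_sys_1_par(
        l2, l1, m, num_of_servers, threshold, system_capacity, buffer_capacity
    )
    print(params, np.allclose(numeric, closed_form))
```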
# $C=1, T=1, N=2, M=2$

```python
lambda_2 = 1
lambda_1 = 1
mu = 3
num_of_servers = 1
threshold = 1
system_capacity = 2
buffer_capacity = 2
```

```python
abg.markov.visualise_ambulance_markov_chain(num_of_servers, threshold, system_capacity, buffer_capacity)
```

```python
all_states = abg.markov.build_states(threshold, system_capacity, buffer_capacity)
Q = abg.markov.get_transition_matrix(
    lambda_2, lambda_1, mu, num_of_servers, threshold, system_capacity, buffer_capacity
)
sol = abg.markov.get_steady_state_algebraically(Q, algebraic_function=np.linalg.solve)
sol
```

    array([0.41116751, 0.27411168, 0.1142132 , 0.05329949, 0.06852792,
           0.04568528, 0.03299492])

```python
all_states
```

$\displaystyle \left[ \left( 0, \ 0\right), \ \left( 0, \ 1\right), \ \left( 1, \ 1\right), \ \left( 2, \ 1\right), \ \left( 0, \ 2\right), \ \left( 1, \ 2\right), \ \left( 2, \ 2\right)\right]$

```python
def pi_for_1_server_1_threshold_2_sys_2_par(lambda_2, lambda_1, mu, num_of_servers, threshold, system_capacity, buffer_capacity):
    all_states = abg.markov.build_states(threshold, system_capacity, buffer_capacity)
    Lambda = lambda_2 + lambda_1
    pi = [0 for _ in range(len(all_states))]
    pi[0] = (mu ** 6) + 2 * (lambda_2) * (mu ** 5) + (lambda_2 ** 2) * (mu ** 4)    # (0,0)
    pi[1] = (Lambda * mu ** 3) * (mu ** 2 + 2 * mu * lambda_2 + lambda_2 ** 2)    # (0,1)
    pi[2] = (Lambda * lambda_2 * mu ** 2) * (lambda_2 ** 2 + lambda_2 * lambda_1 + lambda_1 * mu + mu ** 2 + 2 * lambda_2 * mu)    # (1,1)
    pi[3] = (Lambda * lambda_2 ** 2 * mu) * (lambda_2 ** 2 + 2 * lambda_1 * lambda_2 + 3 * lambda_1 * mu + mu ** 2 + 2 * lambda_2 * mu + lambda_1 ** 2)    # (2,1)
    pi[4] = (Lambda * lambda_1 * mu ** 3) * (lambda_2 + mu)    # (0,2)
    pi[5] = (Lambda * lambda_1 * lambda_2 * mu ** 2) * (2 * mu + lambda_1 + lambda_2)    # (1,2)
    pi[6] = (Lambda * lambda_1 * lambda_2 ** 2) * (lambda_1 ** 2 + 4 * lambda_1 * mu + 2 * lambda_1 * lambda_2 + 3 * mu ** 2 + lambda_2 ** 2 + 3 * lambda_2 * mu)    # (2,2)

    sum_of_rates = sum(pi)
    pi = [i / sum_of_rates for i in pi]
    return pi
```

```python
ans = pi_for_1_server_1_threshold_2_sys_2_par(lambda_2, lambda_1, mu, num_of_servers, threshold, system_capacity, buffer_capacity)
```

```python
for i in range(7):
    print(str(ans[i]) + " " + str(sol[i]) + " ---> " + str(round(ans[i], 10) == round(sol[i], 10)))
```

    0.41116751269035534 0.4111675126903551 ---> True
    0.27411167512690354 0.27411167512690343 ---> True
    0.11421319796954314 0.11421319796954318 ---> True
    0.0532994923857868 0.05329949238578686 ---> True
    0.06852791878172589 0.06852791878172587 ---> True
    0.04568527918781726 0.04568527918781726 ---> True
    0.03299492385786802 0.032994923857868085 ---> True

```python
sym_pi = get_symbolic_pi(num_of_servers=num_of_servers, threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity)
```

```python
exp = sym.factor(sym_pi[g])
```

```python
exp
```

$\displaystyle \frac{\Lambda \left(\lambda^{A}\right)^{2} \lambda^{o} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 3 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + 4 \lambda^{o} \mu + 3 \mu^{2}\right)}{\Lambda \left(\lambda^{A}\right)^{4} \lambda^{o} + \Lambda \left(\lambda^{A}\right)^{4} \mu + 2 \Lambda \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} + 5 \Lambda \left(\lambda^{A}\right)^{3} \lambda^{o} \mu + 3 \Lambda \left(\lambda^{A}\right)^{3} \mu^{2} + \Lambda \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} + 5 \Lambda \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu + 8 \Lambda \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{2} + 4 \Lambda \left(\lambda^{A}\right)^{2} \mu^{3} + \Lambda \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{2} + 4 \Lambda \lambda^{A} \lambda^{o} \mu^{3} + 3 \Lambda \lambda^{A} \mu^{4} + \Lambda \lambda^{o} \mu^{4} + \Lambda \mu^{5} + \left(\lambda^{A}\right)^{2} \mu^{4} + 2 \lambda^{A} \mu^{5} + \mu^{6}}$

```python

```

# Investigating ratio relations $\frac{\pi_i}{\pi_j}$

```python
sym.init_printing(use_latex='mathjax')

def get_symbolic_pi(num_of_servers, threshold, system_capacity, buffer_capacity):
    Q_sym = abg.markov.get_symbolic_transition_matrix(num_of_servers=num_of_servers, threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity)
    dimension = Q_sym.shape[0]
    if dimension > 7:
        return "Capacity of 6 exceeded"
    M_sym = sym.Matrix([Q_sym.transpose()[:-1,:], sym.ones(1,dimension)])
    b_sym = sym.Matrix([sym.zeros(dimension - 1, 1), [1]])
    system = M_sym.col_insert(dimension,b_sym)
    sol = sym.solve_linear_system_LU(system, [a,b,c,d,e,f,g])
    return sol
```
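The matrices constructed below arrange one-step ratios $\pi_i/\pi_j$ on the state grid (rows indexed by $u$, columns by $v$). For the single-server models, a birth-death reading of the chain suggests what to expect (an observation to check against the output, not something the code asserts): below the threshold each extra customer multiplies the probability by $\Lambda/\mu$,

$$\frac{\pi_{(0,v+1)}}{\pi_{(0,v)}} = \frac{\Lambda}{\mu}, \qquad v < T,$$

which matches both the hand-derived weights above (e.g. $\pi_{(0,1)}/\pi_{(0,0)} = \Lambda/\mu$ in the $T=3$ closed form) and the $\frac{\Lambda}{\mu}$ entries in the top rows of the ratio matrices that follow.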
## $C=1, T=2, N=2, M=2$

```python
num_of_servers = 1
threshold = 2
system_capacity = 2
buffer_capacity = 2

sym_pi_1222 = get_symbolic_pi(num_of_servers=num_of_servers, threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity)
all_states_1222 = abg.markov.build_states(threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity)
sym_state_probs_1222 = [0 for _ in range(len(all_states_1222))]
all_states_1222
```

$\displaystyle \left[ \left( 0, \ 0\right), \ \left( 0, \ 1\right), \ \left( 0, \ 2\right), \ \left( 1, \ 2\right), \ \left( 2, \ 2\right)\right]$

```python
sym_state_probs_1222[0] = sym.factor(sym_pi_1222[a])    # (0,0)
sym_state_probs_1222[1] = sym.factor(sym_pi_1222[b])    # (0,1)
sym_state_probs_1222[2] = sym.factor(sym_pi_1222[c])    # (0,2)
sym_state_probs_1222[3] = sym.factor(sym_pi_1222[d])    # (1,2)
sym_state_probs_1222[4] = sym.factor(sym_pi_1222[e])    # (2,2)
```

```python
sym_state_recursive_ratios_1222 = sym.zeros(buffer_capacity + 1, system_capacity + 1)
sym_state_recursive_ratios_1222[0,0] = 1
sym_state_recursive_ratios_1222[0,1] = sym.factor(sym_state_probs_1222[1] / sym_state_probs_1222[0])    # (0,0) -> (0,1)
sym_state_recursive_ratios_1222[0,2] = sym.factor(sym_state_probs_1222[2] / sym_state_probs_1222[1])    # (0,1) -> (0,2)
sym_state_recursive_ratios_1222[1,2] = sym.factor(sym_state_probs_1222[3] / sym_state_probs_1222[2])    # (0,2) -> (1,2)
sym_state_recursive_ratios_1222[2,2] = sym.factor(sym_state_probs_1222[4] / sym_state_probs_1222[3])    # (1,2) -> (2,2)
```

```python
abg.markov.visualise_ambulance_markov_chain(
    num_of_servers=num_of_servers, threshold=threshold,
    system_capacity=system_capacity, buffer_capacity=buffer_capacity
)
```

```python
sym_state_recursive_ratios_1222
```

$\displaystyle \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\Lambda}{\mu}\\0 & 0 & \frac{\lambda^{A}}{\mu}\\0 & 0 & \frac{\lambda^{A}}{\mu}\end{matrix}\right]$

## $C=1, T=1, N=2, M=1$

```python
num_of_servers = 1
threshold = 1
system_capacity = 2
buffer_capacity = 1

all_states_1121 = abg.markov.build_states(threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity)
Q_sym_1121 = abg.markov.get_symbolic_transition_matrix(num_of_servers, threshold, system_capacity, buffer_capacity)
sym_pi_1121 = get_symbolic_pi(num_of_servers=num_of_servers, threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity)
sym_state_probs_1121 = [0 for _ in range(len(all_states_1121))]
```

```python
sym_state_probs_1121[0] = sym.factor(sym_pi_1121[a])    # (0,0)
sym_state_probs_1121[1] = sym.factor(sym_pi_1121[b])    # (0,1)
sym_state_probs_1121[2] = sym.factor(sym_pi_1121[c])    # (1,1)
sym_state_probs_1121[3] = sym.factor(sym_pi_1121[d])    # (0,2)
sym_state_probs_1121[4] = sym.factor(sym_pi_1121[e])    # (1,2)
```

```python
sym_state_recursive_ratios_1121 = sym.zeros(buffer_capacity + 1, system_capacity + 1)
sym_state_recursive_ratios_1121[0,0] = 1
sym_state_recursive_ratios_1121[0,1] = sym.factor(sym_state_probs_1121[1] / sym_state_probs_1121[0])    # (0,0) -> (0,1)
sym_state_recursive_ratios_1121[1,1] = sym.factor(sym_state_probs_1121[2] / sym_state_probs_1121[1])    # (0,1) -> (1,1)
sym_state_recursive_ratios_1121[0,2] = sym.factor(sym_state_probs_1121[3] / sym_state_probs_1121[1])    # (0,1) -> (0,2)
sym_state_recursive_ratios_1121[1,2] = sym.factor(sym_state_probs_1121[4] / sym_state_probs_1121[3])    # (0,2) -> (1,2)

sym_state_recursive_ratios_right_1121 = sym_state_recursive_ratios_1121.copy()
sym_state_recursive_ratios_right_1121[1,2] = sym.factor(sym_state_probs_1121[4] / sym_state_probs_1121[2])    # (1,1) -> (1,2)

sym_state_recursive_ratios_P0_1121 = sym.zeros(buffer_capacity + 1, system_capacity + 1)
sym_state_recursive_ratios_P0_1121[0,0] = 1
sym_state_recursive_ratios_P0_1121[0,1] = sym.factor(sym_state_probs_1121[1] / sym_state_probs_1121[0])    # (0,0) -> (0,1)
sym_state_recursive_ratios_P0_1121[1,1] = sym.factor(sym_state_probs_1121[2] / sym_state_probs_1121[0])    # (0,0) -> (1,1)
sym_state_recursive_ratios_P0_1121[0,2] = sym.factor(sym_state_probs_1121[3] / sym_state_probs_1121[0])    # (0,0) -> (0,2)
sym_state_recursive_ratios_P0_1121[1,2] = sym.factor(sym_state_probs_1121[4] / sym_state_probs_1121[0])    # (0,0) -> (1,2)
```
```python
abg.markov.visualise_ambulance_markov_chain(
    num_of_servers=num_of_servers, threshold=threshold,
    system_capacity=system_capacity, buffer_capacity=buffer_capacity
)
```

```python
all_states_1121
```

$\displaystyle \left[ \left( 0, \ 0\right), \ \left( 0, \ 1\right), \ \left( 1, \ 1\right), \ \left( 0, \ 2\right), \ \left( 1, \ 2\right)\right]$

```python
sym_state_probs_1121
```

$\displaystyle \left[ \frac{\mu^{3} \left(\lambda^{A} + \mu\right)}{\Lambda \left(\lambda^{A}\right)^{2} \lambda^{o} + \Lambda \left(\lambda^{A}\right)^{2} \mu + \Lambda \lambda^{A} \left(\lambda^{o}\right)^{2} + 3 \Lambda \lambda^{A} \lambda^{o} \mu + 2 \Lambda \lambda^{A} \mu^{2} + \Lambda \lambda^{o} \mu^{2} + \Lambda \mu^{3} + \lambda^{A} \mu^{3} + \mu^{4}}, \ \frac{\Lambda \mu^{2} \left(\lambda^{A} + \mu\right)}{\Lambda \left(\lambda^{A}\right)^{2} \lambda^{o} + \Lambda \left(\lambda^{A}\right)^{2} \mu + \Lambda \lambda^{A} \left(\lambda^{o}\right)^{2} + 3 \Lambda \lambda^{A} \lambda^{o} \mu + 2 \Lambda \lambda^{A} \mu^{2} + \Lambda \lambda^{o} \mu^{2} + \Lambda \mu^{3} + \lambda^{A} \mu^{3} + \mu^{4}}, \ \frac{\Lambda \lambda^{A} \mu \left(\lambda^{A} + \lambda^{o} + \mu\right)}{\Lambda \left(\lambda^{A}\right)^{2} \lambda^{o} + \Lambda \left(\lambda^{A}\right)^{2} \mu + \Lambda \lambda^{A} \left(\lambda^{o}\right)^{2} + 3 \Lambda \lambda^{A} \lambda^{o} \mu + 2 \Lambda \lambda^{A} \mu^{2} + \Lambda \lambda^{o} \mu^{2} + \Lambda \mu^{3} + \lambda^{A} \mu^{3} + \mu^{4}}, \ \frac{\Lambda \lambda^{o} \mu^{2}}{\Lambda \left(\lambda^{A}\right)^{2} \lambda^{o} + \Lambda \left(\lambda^{A}\right)^{2} \mu + \Lambda \lambda^{A} \left(\lambda^{o}\right)^{2} + 3 \Lambda \lambda^{A} \lambda^{o} \mu + 2 \Lambda \lambda^{A} \mu^{2} + \Lambda \lambda^{o} \mu^{2} + \Lambda \mu^{3} + \lambda^{A} \mu^{3} + \mu^{4}}, \ \frac{\Lambda \lambda^{A} \lambda^{o} \left(\lambda^{A} + \lambda^{o} + 2 \mu\right)}{\Lambda \left(\lambda^{A}\right)^{2} \lambda^{o} + \Lambda \left(\lambda^{A}\right)^{2} \mu + \Lambda \lambda^{A} \left(\lambda^{o}\right)^{2} + 3 \Lambda \lambda^{A} \lambda^{o} \mu + 2 \Lambda \lambda^{A} \mu^{2} + \Lambda \lambda^{o} \mu^{2} + \Lambda \mu^{3} + \lambda^{A} \mu^{3} + \mu^{4}}\right]$

```python
Q_sym_1121
```

$\displaystyle \left[\begin{matrix}- \Lambda & \Lambda & 0 & 0 & 0\\\mu & - \lambda^{A} - \lambda^{o} - \mu & \lambda^{A} & \lambda^{o} & 0\\0 & \mu & - \lambda^{o} - \mu & 0 & \lambda^{o}\\0 & \mu & 0 & - \lambda^{A} - \mu & \lambda^{A}\\0 & 0 & \mu & 0 & - \mu\end{matrix}\right]$

```python
sym_state_recursive_ratios_1121, sym_state_recursive_ratios_right_1121
```

$\displaystyle \left( \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\lambda^{o}}{\lambda^{A} + \mu}\\0 & \frac{\lambda^{A} \left(\lambda^{A} + \lambda^{o} + \mu\right)}{\mu \left(\lambda^{A} + \mu\right)} & \frac{\lambda^{A} \left(\lambda^{A} + \lambda^{o} + 2 \mu\right)}{\mu^{2}}\end{matrix}\right], \ \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\lambda^{o}}{\lambda^{A} + \mu}\\0 & \frac{\lambda^{A} \left(\lambda^{A} + \lambda^{o} + \mu\right)}{\mu \left(\lambda^{A} + \mu\right)} & \frac{\lambda^{o} \left(\lambda^{A} + \lambda^{o} + 2 \mu\right)}{\mu \left(\lambda^{A} + \lambda^{o} + \mu\right)}\end{matrix}\right]\right)$
```python
sym.fraction(sym_state_probs_1121[0])[0], sym_state_recursive_ratios_P0_1121
```

$\displaystyle \left( \mu^{3} \left(\lambda^{A} + \mu\right), \ \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\Lambda \lambda^{o}}{\mu \left(\lambda^{A} + \mu\right)}\\0 & \frac{\Lambda \lambda^{A} \left(\lambda^{A} + \lambda^{o} + \mu\right)}{\mu^{2} \left(\lambda^{A} + \mu\right)} & \frac{\Lambda \lambda^{A} \lambda^{o} \left(\lambda^{A} + \lambda^{o} + 2 \mu\right)}{\mu^{3} \left(\lambda^{A} + \mu\right)}\end{matrix}\right]\right)$

```python

```

## $C=1, T=1, N=2, M=2$

```python
num_of_servers = 1
threshold = 1
system_capacity = 2
buffer_capacity = 2

all_states_1122 = abg.markov.build_states(threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity)
Q_sym_1122 = abg.markov.get_symbolic_transition_matrix(num_of_servers, threshold, system_capacity, buffer_capacity)
sym_pi_1122 = get_symbolic_pi(num_of_servers=num_of_servers, threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity)
sym_state_probs_1122 = [0 for _ in range(len(all_states_1122))]
```

```python
sym_Lambda = sym.symbols("Lambda")
sym_lambda_1 = sym.symbols("lambda_1")
sym_lambda_2 = sym.symbols("lambda_2")
sym_mu = sym.symbols("mu")
```

```python
sym_state_probs_1122[0] = (sym_mu ** 6) + 2 * (sym_lambda_2) * (sym_mu ** 5) + (sym_lambda_2 ** 2) * (sym_mu ** 4)    # (0,0)
sym_state_probs_1122[1] = (sym_Lambda * sym_mu ** 3) * (sym_mu ** 2 + 2 * sym_mu * sym_lambda_2 + sym_lambda_2 ** 2)    # (0,1)
sym_state_probs_1122[2] = (sym_Lambda * sym_lambda_2 * sym_mu ** 2) * (sym_lambda_2 ** 2 + sym_lambda_2 * sym_lambda_1 + sym_lambda_1 * sym_mu + sym_mu ** 2 + 2 * sym_lambda_2 * sym_mu)    # (1,1)
sym_state_probs_1122[3] = (sym_Lambda * sym_lambda_2 ** 2 * sym_mu) * (sym_lambda_2 ** 2 + 2 * sym_lambda_1 * sym_lambda_2 + 3 * sym_lambda_1 * sym_mu + sym_mu ** 2 + 2 * sym_lambda_2 * sym_mu + sym_lambda_1 ** 2)    # (2,1)
sym_state_probs_1122[4] = (sym_Lambda * sym_lambda_1 * sym_mu ** 3) * (sym_lambda_2 + sym_mu)    # (0,2)
sym_state_probs_1122[5] = (sym_Lambda * sym_lambda_1 * sym_lambda_2 * sym_mu ** 2) * (2 * sym_mu + sym_lambda_1 + sym_lambda_2)    # (1,2)
sym_state_probs_1122[6] = (sym_Lambda * sym_lambda_1 * sym_lambda_2 ** 2) * (sym_lambda_1 ** 2 + 4 * sym_lambda_1 * sym_mu + 2 * sym_lambda_1 * sym_lambda_2 + 3 * sym_mu ** 2 + sym_lambda_2 ** 2 + 3 * sym_lambda_2 * sym_mu)    # (2,2)
```

```python
sym_state_recursive_ratios_1122 = sym.zeros(buffer_capacity + 1, system_capacity + 1)
sym_state_recursive_ratios_1122[0,0] = 1
sym_state_recursive_ratios_1122[0,1] = sym.factor(sym_state_probs_1122[1] / sym_state_probs_1122[0])    # (0,0) -> (0,1)
sym_state_recursive_ratios_1122[1,1] = sym.factor(sym_state_probs_1122[2] / sym_state_probs_1122[1])    # (0,1) -> (1,1)
sym_state_recursive_ratios_1122[2,1] = sym.factor(sym_state_probs_1122[3] / sym_state_probs_1122[2])    # (1,1) -> (2,1)
sym_state_recursive_ratios_1122[0,2] = sym.factor(sym_state_probs_1122[4] / sym_state_probs_1122[1])    # (0,1) -> (0,2)
sym_state_recursive_ratios_1122[1,2] = sym.factor(sym_state_probs_1122[5] / sym_state_probs_1122[4])    # (0,2) -> (1,2)
sym_state_recursive_ratios_1122[2,2] = sym.factor(sym_state_probs_1122[6] / sym_state_probs_1122[5])    # (1,2) -> (2,2)

sym_state_recursive_ratios_right_1122 = sym_state_recursive_ratios_1122.copy()
sym_state_recursive_ratios_right_1122[1,2] = sym.factor(sym_state_probs_1122[5] / sym_state_probs_1122[2])    # (1,1) -> (1,2)
sym_state_recursive_ratios_right_1122[2,2] = sym.factor(sym_state_probs_1122[6] / sym_state_probs_1122[3])    # (2,1) -> (2,2)
sym_state_probs_1122[3]) # (2,1) -> (2,2) sym_state_recursive_ratios_P0_1122 = sym.zeros(buffer_capacity + 1, system_capacity + 1) sym_state_recursive_ratios_P0_1122[0,0] = 1 sym_state_recursive_ratios_P0_1122[0,1] = sym.factor(sym_state_probs_1122[1] / sym_state_probs_1122[0]) # (0,0) -> (0,1) sym_state_recursive_ratios_P0_1122[1,1] = sym.factor(sym_state_probs_1122[2] / sym_state_probs_1122[0]) # (0,0) -> (1,1) sym_state_recursive_ratios_P0_1122[2,1] = sym.factor(sym_state_probs_1122[3] / sym_state_probs_1122[0]) # (0,0) -> (2,1) sym_state_recursive_ratios_P0_1122[0,2] = sym.factor(sym_state_probs_1122[4] / sym_state_probs_1122[0]) # (0,0) -> (0,2) sym_state_recursive_ratios_P0_1122[1,2] = sym.factor(sym_state_probs_1122[5] / sym_state_probs_1122[0]) # (0,0) -> (1,2) sym_state_recursive_ratios_P0_1122[2,2] = sym.factor(sym_state_probs_1122[6] / sym_state_probs_1122[0]) # (0,0) -> (2,2) ``` ```python abg.markov.visualise_ambulance_markov_chain( num_of_servers=num_of_servers, threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity ) ``` ```python all_states_1122 ``` $\displaystyle \left[ \left( 0, \ 0\right), \ \left( 0, \ 1\right), \ \left( 1, \ 1\right), \ \left( 2, \ 1\right), \ \left( 0, \ 2\right), \ \left( 1, \ 2\right), \ \left( 2, \ 2\right)\right]$ ```python Q_sym_1122 ``` $\displaystyle \left[\begin{matrix}- \Lambda & \Lambda & 0 & 0 & 0 & 0 & 0\\\mu & - \lambda^{A} - \lambda^{o} - \mu & \lambda^{A} & 0 & \lambda^{o} & 0 & 0\\0 & \mu & - \lambda^{A} - \lambda^{o} - \mu & \lambda^{A} & 0 & \lambda^{o} & 0\\0 & 0 & \mu & - \lambda^{o} - \mu & 0 & 0 & \lambda^{o}\\0 & \mu & 0 & 0 & - \lambda^{A} - \mu & \lambda^{A} & 0\\0 & 0 & \mu & 0 & 0 & - \lambda^{A} - \mu & \lambda^{A}\\0 & 0 & 0 & \mu & 0 & 0 & - \mu\end{matrix}\right]$ ```python sym_state_recursive_ratios_1122, sym_state_recursive_ratios_right_1122 ``` $\displaystyle \left( \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\lambda^{o}}{\lambda^{A} + \mu}\\0 & \frac{\lambda^{A} \left(\lambda^{A} + \lambda^{o} + \mu\right)}{\mu \left(\lambda^{A} + \mu\right)} & \frac{\lambda^{A} \left(\lambda^{A} + \lambda^{o} + 2 \mu\right)}{\mu \left(\lambda^{A} + \mu\right)}\\0 & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + 3 \lambda^{o} \mu + \mu^{2}\right)}{\mu \left(\lambda^{A} + \mu\right) \left(\lambda^{A} + \lambda^{o} + \mu\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 3 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + 4 \lambda^{o} \mu + 3 \mu^{2}\right)}{\mu^{2} \left(\lambda^{A} + \lambda^{o} + 2 \mu\right)}\end{matrix}\right], \ \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\lambda^{o}}{\lambda^{A} + \mu}\\0 & \frac{\lambda^{A} \left(\lambda^{A} + \lambda^{o} + \mu\right)}{\mu \left(\lambda^{A} + \mu\right)} & \frac{\lambda^{o} \left(\lambda^{A} + \lambda^{o} + 2 \mu\right)}{\left(\lambda^{A} + \mu\right) \left(\lambda^{A} + \lambda^{o} + \mu\right)}\\0 & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + 3 \lambda^{o} \mu + \mu^{2}\right)}{\mu \left(\lambda^{A} + \mu\right) \left(\lambda^{A} + \lambda^{o} + \mu\right)} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 3 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + 4 \lambda^{o} \mu + 3 \mu^{2}\right)}{\mu \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + 
\left(\lambda^{o}\right)^{2} + 3 \lambda^{o} \mu + \mu^{2}\right)}\end{matrix}\right]\right)$ ```python sym_state_probs_1122[0], sym_state_recursive_ratios_P0_1122 ``` $\displaystyle \left( \left(\lambda^{A}\right)^{2} \mu^{4} + 2 \lambda^{A} \mu^{5} + \mu^{6}, \ \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\Lambda \lambda^{o}}{\mu \left(\lambda^{A} + \mu\right)}\\0 & \frac{\Lambda \lambda^{A} \left(\lambda^{A} + \lambda^{o} + \mu\right)}{\mu^{2} \left(\lambda^{A} + \mu\right)} & \frac{\Lambda \lambda^{A} \lambda^{o} \left(\lambda^{A} + \lambda^{o} + 2 \mu\right)}{\mu^{2} \left(\lambda^{A} + \mu\right)^{2}}\\0 & \frac{\Lambda \left(\lambda^{A}\right)^{2} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + 3 \lambda^{o} \mu + \mu^{2}\right)}{\mu^{3} \left(\lambda^{A} + \mu\right)^{2}} & \frac{\Lambda \left(\lambda^{A}\right)^{2} \lambda^{o} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 3 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + 4 \lambda^{o} \mu + 3 \mu^{2}\right)}{\mu^{4} \left(\lambda^{A} + \mu\right)^{2}}\end{matrix}\right]\right)$ ```python ``` ## $C=1, T=3, N=4, M=1$ ```python num_of_servers = 1 threshold = 3 system_capacity = 4 buffer_capacity = 1 all_states_1341 = abg.markov.build_states(threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity) Q_sym_1341 = abg.markov.get_symbolic_transition_matrix(num_of_servers, threshold, system_capacity, buffer_capacity) sym_pi_1341 = get_symbolic_pi(num_of_servers=num_of_servers, threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity) sym_state_probs_1341 = [0 for _ in range(len(all_states_1341))] ``` ```python sym_Lambda = sym.symbols("Lambda") sym_lambda_1 = sym.symbols("lambda_1") sym_lambda_2 = sym.symbols("lambda_2") sym_mu = sym.symbols("mu") ``` ```python sym_state_probs_1341[0] = (sym_lambda_2) * (sym_mu ** 5) + (sym_mu ** 6) # (0,0) sym_state_probs_1341[1] = sym_Lambda * sym_lambda_2 * (sym_mu ** 4) + sym_Lambda * (sym_mu ** 5) # (0,1) sym_state_probs_1341[2] = (sym_Lambda ** 2) * sym_lambda_2 * (sym_mu ** 3) + (sym_Lambda ** 2) * (sym_mu ** 4) # (0,2) sym_state_probs_1341[3] = (sym_Lambda ** 3) * sym_lambda_2 * (sym_mu ** 2) + (sym_Lambda ** 3) * (sym_mu ** 3) # (0,3) sym_state_probs_1341[4] = (sym_Lambda ** 3) * sym_lambda_1 * sym_lambda_2 * sym_mu + (sym_Lambda ** 3) * sym_lambda_2 * (sym_mu ** 2) + (sym_Lambda ** 3) * sym_lambda_2 * sym_lambda_2 * sym_mu # (1,3) sym_state_probs_1341[5] = (sym_Lambda ** 3) * sym_lambda_1 * (sym_mu ** 2) # (0,4) sym_state_probs_1341[6] = (sym_Lambda ** 3) * (sym_lambda_1 ** 2) * sym_lambda_2 + (sym_Lambda ** 3) * sym_lambda_1 * (sym_lambda_2 ** 2) + 2 * (sym_Lambda ** 3) * sym_lambda_1 * sym_lambda_2 * sym_mu # (1,4) ``` ```python sym_state_recursive_ratios_1341 = sym.zeros(buffer_capacity + 1, system_capacity + 1) sym_state_recursive_ratios_1341[0,0] = 1 sym_state_recursive_ratios_1341[0,1] = sym.factor(sym_state_probs_1341[1] / sym_state_probs_1341[0]) # (0,0) -> (0,1) sym_state_recursive_ratios_1341[0,2] = sym.factor(sym_state_probs_1341[2] / sym_state_probs_1341[1]) # (0,1) -> (0,2) sym_state_recursive_ratios_1341[0,3] = sym.factor(sym_state_probs_1341[3] / sym_state_probs_1341[2]) # (0,2) -> (0,3) sym_state_recursive_ratios_1341[0,4] = sym.factor(sym_state_probs_1341[5] / sym_state_probs_1341[3]) # (0,3) -> (0,4) sym_state_recursive_ratios_1341[1,3] = sym.factor(sym_state_probs_1341[4] / sym_state_probs_1341[3]) # (0,3) -> (1,3) 
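# Reading guide for the ratio matrices built in this cell: each stored entry is
# the step P(target)/P(source) for the transition named in the trailing
# comment, so chaining steps from (0,0) rebuilds any state probability up to
# the normalising constant. Note that in the rendered outputs the arrival
# rates appear as lambda^A and lambda^o; they play the roles of sym_lambda_2
# (ambulance arrivals) and sym_lambda_1 (other arrivals) respectively.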
sym_state_recursive_ratios_1341[1,4] = sym.factor(sym_state_probs_1341[6] / sym_state_probs_1341[5]) # (0,4) -> (1,4) sym_state_recursive_ratios_right_1341 = sym_state_recursive_ratios_1341.copy() sym_state_recursive_ratios_right_1341[1,4] = sym.factor(sym_state_probs_1341[6] / sym_state_probs_1341[4]) # (1,3) -> (1,4) sym_state_recursive_ratios_P0_1341 = sym.zeros(buffer_capacity + 1, system_capacity + 1) sym_state_recursive_ratios_P0_1341[0,0] = 1 sym_state_recursive_ratios_P0_1341[0,1] = sym.factor(sym_state_probs_1341[1] / sym_state_probs_1341[0]) # (0,0) -> (0,1) sym_state_recursive_ratios_P0_1341[0,2] = sym.factor(sym_state_probs_1341[2] / sym_state_probs_1341[0]) # (0,0) -> (0,2) sym_state_recursive_ratios_P0_1341[0,3] = sym.factor(sym_state_probs_1341[3] / sym_state_probs_1341[0]) # (0,0) -> (0,3) sym_state_recursive_ratios_P0_1341[1,3] = sym.factor(sym_state_probs_1341[4] / sym_state_probs_1341[0]) # (0,0) -> (1,3) sym_state_recursive_ratios_P0_1341[0,4] = sym.factor(sym_state_probs_1341[5] / sym_state_probs_1341[0]) # (0,0) -> (0,4) sym_state_recursive_ratios_P0_1341[1,4] = sym.factor(sym_state_probs_1341[6] / sym_state_probs_1341[0]) # (0,0) -> (1,4) ``` ```python abg.markov.visualise_ambulance_markov_chain( num_of_servers=num_of_servers, threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity ) ``` ```python all_states_1341 ``` $\displaystyle \left[ \left( 0, \ 0\right), \ \left( 0, \ 1\right), \ \left( 0, \ 2\right), \ \left( 0, \ 3\right), \ \left( 1, \ 3\right), \ \left( 0, \ 4\right), \ \left( 1, \ 4\right)\right]$ ```python Q_sym_1341 ``` $\displaystyle \left[\begin{matrix}- \Lambda & \Lambda & 0 & 0 & 0 & 0 & 0\\\mu & - \Lambda - \mu & \Lambda & 0 & 0 & 0 & 0\\0 & \mu & - \Lambda - \mu & \Lambda & 0 & 0 & 0\\0 & 0 & \mu & - \lambda^{A} - \lambda^{o} - \mu & \lambda^{A} & \lambda^{o} & 0\\0 & 0 & 0 & \mu & - \lambda^{o} - \mu & 0 & \lambda^{o}\\0 & 0 & 0 & \mu & 0 & - \lambda^{A} - \mu & \lambda^{A}\\0 & 0 & 0 & 0 & \mu & 0 & - \mu\end{matrix}\right]$ ```python sym_state_recursive_ratios_1341, sym_state_recursive_ratios_right_1341 ``` $\displaystyle \left( \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\Lambda}{\mu} & \frac{\Lambda}{\mu} & \frac{\lambda^{o}}{\lambda^{A} + \mu}\\0 & 0 & 0 & \frac{\lambda^{A} \left(\lambda^{A} + \lambda^{o} + \mu\right)}{\mu \left(\lambda^{A} + \mu\right)} & \frac{\lambda^{A} \left(\lambda^{A} + \lambda^{o} + 2 \mu\right)}{\mu^{2}}\end{matrix}\right], \ \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\Lambda}{\mu} & \frac{\Lambda}{\mu} & \frac{\lambda^{o}}{\lambda^{A} + \mu}\\0 & 0 & 0 & \frac{\lambda^{A} \left(\lambda^{A} + \lambda^{o} + \mu\right)}{\mu \left(\lambda^{A} + \mu\right)} & \frac{\lambda^{o} \left(\lambda^{A} + \lambda^{o} + 2 \mu\right)}{\mu \left(\lambda^{A} + \lambda^{o} + \mu\right)}\end{matrix}\right]\right)$ ```python sym_state_recursive_ratios_P0_1341 ``` $\displaystyle \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\Lambda^{2}}{\mu^{2}} & \frac{\Lambda^{3}}{\mu^{3}} & \frac{\Lambda^{3} \lambda^{o}}{\mu^{3} \left(\lambda^{A} + \mu\right)}\\0 & 0 & 0 & \frac{\Lambda^{3} \lambda^{A} \left(\lambda^{A} + \lambda^{o} + \mu\right)}{\mu^{4} \left(\lambda^{A} + \mu\right)} & \frac{\Lambda^{3} \lambda^{A} \lambda^{o} \left(\lambda^{A} + \lambda^{o} + 2 \mu\right)}{\mu^{5} \left(\lambda^{A} + \mu\right)}\end{matrix}\right]$ ```python ``` ```python ``` ## $C=1, T=1, N=3, M=1$ ```python num_of_servers = 1 threshold = 1 system_capacity = 3 buffer_capacity = 1 all_states_1131 = 
abg.markov.build_states(threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity) Q_sym_1131 = abg.markov.get_symbolic_transition_matrix(num_of_servers, threshold, system_capacity, buffer_capacity) sym_pi_1131 = get_symbolic_pi(num_of_servers=num_of_servers, threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity) sym_state_probs_1131 = [0 for _ in range(len(all_states_1131))] ``` ```python sym_Lambda = sym.symbols("Lambda") sym_lambda_1 = sym.symbols("lambda_1") sym_lambda_2 = sym.symbols("lambda_2") sym_mu = sym.symbols("mu") ``` ```python # (0,0) sym_state_probs_1131[0] = (sym_mu ** 6) + 2 * (sym_lambda_2 * (sym_mu ** 5)) + ((sym_lambda_2 ** 2) * (sym_mu ** 4)) + (sym_lambda_1 * sym_lambda_2 * (sym_mu ** 4)) # (0,1) sym_state_probs_1131[1] = sym_state_probs_1131[0] * sym_Lambda / sym_mu # (1,1) sym_state_probs_1131[2] = (sym_Lambda * (sym_lambda_1 ** 2) * sym_lambda_2 * (sym_mu ** 2)) + (sym_Lambda * sym_lambda_2 * sym_lambda_1 * (sym_mu ** 3)) + 2 * (sym_Lambda * sym_lambda_1 * (sym_lambda_2 ** 2) * (sym_mu ** 2)) + 2 * (sym_Lambda * (sym_lambda_2 ** 2) * (sym_mu ** 3)) + (sym_Lambda * (sym_lambda_2 ** 3) * (sym_mu ** 2)) + (sym_Lambda * sym_lambda_2 * (sym_mu ** 4)) # (0,2) sym_state_probs_1131[3] = sym_Lambda * sym_lambda_1 * sym_mu ** 3 * (sym_lambda_2 + sym_mu) # (1,2) sym_state_probs_1131[4] = (sym_Lambda * sym_lambda_2 * sym_lambda_1 * sym_mu) * ((sym_lambda_2 ** 2) + 2 * sym_lambda_2 * sym_lambda_1 + 3 * sym_lambda_2 * sym_mu + (sym_lambda_1 ** 2) + 2 * sym_lambda_1 * sym_mu + 2 * (sym_mu ** 2)) # (0,3) sym_state_probs_1131[5] = sym_Lambda * (sym_lambda_1 ** 2) * (sym_mu ** 3) # (1,3) sym_state_probs_1131[6] = (sym_Lambda * sym_lambda_2 * (sym_lambda_1 ** 2)) * ((sym_lambda_2 ** 2) + 2 * sym_lambda_2 * sym_lambda_1 + 3 * sym_lambda_2 * sym_mu + (sym_lambda_1 ** 2) + 2 * sym_lambda_1 * sym_mu + 3 * (sym_mu ** 2)) denominator = sym_Lambda*sym_lambda_2**3*sym_lambda_1**2 + sym_Lambda*sym_lambda_2**3*sym_lambda_1*sym_mu + sym_Lambda*sym_lambda_2**3*sym_mu**2 + 2*sym_Lambda*sym_lambda_2**2*sym_lambda_1**3 + 5*sym_Lambda*sym_lambda_2**2*sym_lambda_1**2*sym_mu + 5*sym_Lambda*sym_lambda_2**2*sym_lambda_1*sym_mu**2 + 3*sym_Lambda*sym_lambda_2**2*sym_mu**3 + sym_Lambda*sym_lambda_2*sym_lambda_1**4 + 3*sym_Lambda*sym_lambda_2*sym_lambda_1**3*sym_mu + 6*sym_Lambda*sym_lambda_2*sym_lambda_1**2*sym_mu**2 + 5*sym_Lambda*sym_lambda_2*sym_lambda_1*sym_mu**3 + 3*sym_Lambda*sym_lambda_2*sym_mu**4 + sym_Lambda*sym_lambda_1**2*sym_mu**3 + sym_Lambda*sym_lambda_1*sym_mu**4 + sym_Lambda*sym_mu**5 + sym_lambda_2**2*sym_mu**4 + sym_lambda_2*sym_lambda_1*sym_mu**4 + 2*sym_lambda_2*sym_mu**5 + sym_mu**6 sym_state_probs_1131 = [i/denominator for i in sym_state_probs_1131] ``` ```python sym_state_recursive_ratios_1131 = sym.zeros(buffer_capacity + 1, system_capacity + 1) sym_state_recursive_ratios_1131[0,0] = 1 sym_state_recursive_ratios_1131[0,1] = sym.factor(sym_state_probs_1131[1] / sym_state_probs_1131[0]) # (0,0) -> (0,1) sym_state_recursive_ratios_1131[1,1] = sym.factor(sym_state_probs_1131[2] / sym_state_probs_1131[1]) # (0,1) -> (1,1) sym_state_recursive_ratios_1131[0,2] = sym.factor(sym_state_probs_1131[3] / sym_state_probs_1131[1]) # (0,1) -> (0,2) sym_state_recursive_ratios_1131[1,2] = sym.factor(sym_state_probs_1131[4] / sym_state_probs_1131[3]) # (0,2) -> (1,2) sym_state_recursive_ratios_1131[0,3] = sym.factor(sym_state_probs_1131[5] / sym_state_probs_1131[3]) # (0,2) -> (0,3) sym_state_recursive_ratios_1131[1,3] = 
sym.factor(sym_state_probs_1131[6] / sym_state_probs_1131[5]) # (0,3) -> (1,3) sym_state_recursive_ratios_right_1131 = sym_state_recursive_ratios_1131.copy() sym_state_recursive_ratios_right_1131[1,2] = sym.factor(sym_state_probs_1131[4] / sym_state_probs_1131[2]) # (1,1) -> (1,2) sym_state_recursive_ratios_right_1131[1,3] = sym.factor(sym_state_probs_1131[6] / sym_state_probs_1131[4]) # (1,2) -> (1,3) sym_state_recursive_ratios_P0_1131 = sym.zeros(buffer_capacity + 1, system_capacity + 1) sym_state_recursive_ratios_P0_1131[0,0] = 1 sym_state_recursive_ratios_P0_1131[0,1] = sym.factor(sym_state_probs_1131[1] / sym_state_probs_1131[0]) # (0,0) -> (0,1) sym_state_recursive_ratios_P0_1131[1,1] = sym.factor(sym_state_probs_1131[2] / sym_state_probs_1131[0]) # (0,0) -> (1,1) sym_state_recursive_ratios_P0_1131[0,2] = sym.factor(sym_state_probs_1131[3] / sym_state_probs_1131[0]) # (0,0) -> (0,2) sym_state_recursive_ratios_P0_1131[1,2] = sym.factor(sym_state_probs_1131[4] / sym_state_probs_1131[0]) # (0,0) -> (1,2) sym_state_recursive_ratios_P0_1131[0,3] = sym.factor(sym_state_probs_1131[5] / sym_state_probs_1131[0]) # (0,0) -> (0,3) sym_state_recursive_ratios_P0_1131[1,3] = sym.factor(sym_state_probs_1131[6] / sym_state_probs_1131[0]) # (0,0) -> (1,3) ``` ```python abg.markov.visualise_ambulance_markov_chain( num_of_servers=num_of_servers, threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity ) ``` ```python all_states_1131 ``` $\displaystyle \left[ \left( 0, \ 0\right), \ \left( 0, \ 1\right), \ \left( 1, \ 1\right), \ \left( 0, \ 2\right), \ \left( 1, \ 2\right), \ \left( 0, \ 3\right), \ \left( 1, \ 3\right)\right]$ ```python Q_sym_1131 ``` $\displaystyle \left[\begin{matrix}- \Lambda & \Lambda & 0 & 0 & 0 & 0 & 0\\\mu & - \lambda^{A} - \lambda^{o} - \mu & \lambda^{A} & \lambda^{o} & 0 & 0 & 0\\0 & \mu & - \lambda^{o} - \mu & 0 & \lambda^{o} & 0 & 0\\0 & \mu & 0 & - \lambda^{A} - \lambda^{o} - \mu & \lambda^{A} & \lambda^{o} & 0\\0 & 0 & \mu & 0 & - \lambda^{o} - \mu & 0 & \lambda^{o}\\0 & 0 & 0 & \mu & 0 & - \lambda^{A} - \mu & \lambda^{A}\\0 & 0 & 0 & 0 & \mu & 0 & - \mu\end{matrix}\right]$ ```python sym_state_recursive_ratios_1131, sym_state_recursive_ratios_right_1131 ``` $\displaystyle \left( \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\lambda^{o} \left(\lambda^{A} + \mu\right)}{\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}} & \frac{\lambda^{o}}{\lambda^{A} + \mu}\\0 & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \lambda^{o} \mu + \mu^{2}\right)}{\mu \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 3 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + 2 \lambda^{o} \mu + 2 \mu^{2}\right)}{\mu^{2} \left(\lambda^{A} + \mu\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 3 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + 2 \lambda^{o} \mu + 3 \mu^{2}\right)}{\mu^{3}}\end{matrix}\right], \ \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\lambda^{o} \left(\lambda^{A} + \mu\right)}{\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}} & \frac{\lambda^{o}}{\lambda^{A} + \mu}\\0 & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \lambda^{o} 
\mu + \mu^{2}\right)}{\mu \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 3 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + 2 \lambda^{o} \mu + 2 \mu^{2}\right)}{\mu \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \lambda^{o} \mu + \mu^{2}\right)} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 3 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + 2 \lambda^{o} \mu + 3 \mu^{2}\right)}{\mu \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 3 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + 2 \lambda^{o} \mu + 2 \mu^{2}\right)}\end{matrix}\right]\right)$ ```python sym_state_recursive_ratios_P0_1131 ``` $\displaystyle \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\Lambda \lambda^{o} \left(\lambda^{A} + \mu\right)}{\mu \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)} & \frac{\Lambda \left(\lambda^{o}\right)^{2}}{\mu \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)}\\0 & \frac{\Lambda \lambda^{A} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \lambda^{o} \mu + \mu^{2}\right)}{\mu^{2} \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)} & \frac{\Lambda \lambda^{A} \lambda^{o} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 3 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + 2 \lambda^{o} \mu + 2 \mu^{2}\right)}{\mu^{3} \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)} & \frac{\Lambda \lambda^{A} \left(\lambda^{o}\right)^{2} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 3 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + 2 \lambda^{o} \mu + 3 \mu^{2}\right)}{\mu^{4} \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)}\end{matrix}\right]$ ```python ``` ```python ``` ## $C=1, T=1, N=3, M=2$ ```python num_of_servers = 1 threshold = 1 system_capacity = 3 buffer_capacity = 2 ``` ```python abg.markov.visualise_ambulance_markov_chain(num_of_servers=1, threshold=1, system_capacity=3, buffer_capacity=2) ``` ```python all_states_1132 = abg.markov.build_states(threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity) Q_sym_1132 = abg.markov.get_symbolic_transition_matrix(num_of_servers, threshold, system_capacity, buffer_capacity) p00, p01, p11, p21, p02, p12, p22, p03, p13, p23 = sym.symbols('p00, p01, p11, p21, p02, p12, p22, p03, p13, p23') pi_1132 = sym.Matrix([p00, p01, p11, p21, p02, p12, p22, p03, p13, p23]) dimension_1132 = Q_sym_1132.shape[0] M_sym_1132 = sym.Matrix([Q_sym_1132.transpose()[:-1,:], sym.ones(1,dimension_1132)]) sym_diff_equations_1132 = (M_sym_1132 @ pi_1132) b_sym_1132 = sym.Matrix([sym.zeros(dimension_1132 - 1, 1), [1]]) ``` ```python eq0_1132 = sym.Eq(sym_diff_equations_1132[0],b_sym_1132[0]) eq1_1132 = sym.Eq(sym_diff_equations_1132[1],b_sym_1132[1]) eq2_1132 = sym.Eq(sym_diff_equations_1132[2],b_sym_1132[2]) eq3_1132 = sym.Eq(sym_diff_equations_1132[3],b_sym_1132[3]) eq4_1132 = sym.Eq(sym_diff_equations_1132[4],b_sym_1132[4]) eq5_1132 = sym.Eq(sym_diff_equations_1132[5],b_sym_1132[5]) eq6_1132 = sym.Eq(sym_diff_equations_1132[6],b_sym_1132[6]) eq7_1132 = 
sym.Eq(sym_diff_equations_1132[7],b_sym_1132[7]) eq8_1132 = sym.Eq(sym_diff_equations_1132[8],b_sym_1132[8]) eq9_1132 = sym.Eq(sym_diff_equations_1132[9],b_sym_1132[9]) ``` ```python sym_state_probs_1132 = sym.solve([eq0_1132,eq1_1132,eq2_1132,eq3_1132,eq4_1132,eq5_1132,eq6_1132,eq7_1132,eq8_1132,eq9_1132],(p00,p01,p11,p21,p02,p12,p22,p03,p13,p23)) ``` ```python sym_state_recursive_ratios_1132 = sym.zeros(buffer_capacity + 1, system_capacity + 1) sym_state_recursive_ratios_1132[0,0] = 1 sym_state_recursive_ratios_1132[0,1] = sym.factor(sym_state_probs_1132[p01] / sym_state_probs_1132[p00]) # (0,0) -> (0,1) sym_state_recursive_ratios_1132[1,1] = sym.factor(sym_state_probs_1132[p11] / sym_state_probs_1132[p01]) # (0,1) -> (1,1) sym_state_recursive_ratios_1132[2,1] = sym.factor(sym_state_probs_1132[p21] / sym_state_probs_1132[p11]) # (1,1) -> (2,1) sym_state_recursive_ratios_1132[0,2] = sym.factor(sym_state_probs_1132[p02] / sym_state_probs_1132[p01]) # (0,1) -> (0,2) sym_state_recursive_ratios_1132[1,2] = sym.factor(sym_state_probs_1132[p12] / sym_state_probs_1132[p02]) # (0,2) -> (1,2) sym_state_recursive_ratios_1132[2,2] = sym.factor(sym_state_probs_1132[p22] / sym_state_probs_1132[p12]) # (1,2) -> (2,2) sym_state_recursive_ratios_1132[0,3] = sym.factor(sym_state_probs_1132[p03] / sym_state_probs_1132[p02]) # (0,2) -> (0,3) sym_state_recursive_ratios_1132[1,3] = sym.factor(sym_state_probs_1132[p13] / sym_state_probs_1132[p03]) # (0,3) -> (1,3) sym_state_recursive_ratios_1132[2,3] = sym.factor(sym_state_probs_1132[p23] / sym_state_probs_1132[p13]) # (1,3) -> (2,3) sym_state_recursive_ratios_right_1132 = sym_state_recursive_ratios_1132.copy() sym_state_recursive_ratios_right_1132[1,2] = sym.factor(sym_state_probs_1132[p12] / sym_state_probs_1132[p11]) # (1,1) -> (1,2) sym_state_recursive_ratios_right_1132[1,3] = sym.factor(sym_state_probs_1132[p13] / sym_state_probs_1132[p12]) # (1,2) -> (1,3) sym_state_recursive_ratios_right_1132[2,2] = sym.factor(sym_state_probs_1132[p22] / sym_state_probs_1132[p21]) # (2,1) -> (2,2) sym_state_recursive_ratios_right_1132[2,3] = sym.factor(sym_state_probs_1132[p23] / sym_state_probs_1132[p22]) # (2,2) -> (2,3) sym_state_recursive_ratios_P0_1132 = sym.zeros(buffer_capacity + 1, system_capacity + 1) sym_state_recursive_ratios_P0_1132[0,0] = 1 sym_state_recursive_ratios_P0_1132[0,1] = sym.factor(sym_state_probs_1132[p01] / sym_state_probs_1132[p00]) # (0,0) -> (0,1) sym_state_recursive_ratios_P0_1132[1,1] = sym.factor(sym_state_probs_1132[p11] / sym_state_probs_1132[p00]) # (0,0) -> (1,1) sym_state_recursive_ratios_P0_1132[2,1] = sym.factor(sym_state_probs_1132[p21] / sym_state_probs_1132[p00]) # (0,0) -> (2,1) sym_state_recursive_ratios_P0_1132[0,2] = sym.factor(sym_state_probs_1132[p02] / sym_state_probs_1132[p00]) # (0,0) -> (0,2) sym_state_recursive_ratios_P0_1132[1,2] = sym.factor(sym_state_probs_1132[p12] / sym_state_probs_1132[p00]) # (0,0) -> (1,2) sym_state_recursive_ratios_P0_1132[2,2] = sym.factor(sym_state_probs_1132[p22] / sym_state_probs_1132[p00]) # (0,0) -> (2,2) sym_state_recursive_ratios_P0_1132[0,3] = sym.factor(sym_state_probs_1132[p03] / sym_state_probs_1132[p00]) # (0,0) -> (0,3) sym_state_recursive_ratios_P0_1132[1,3] = sym.factor(sym_state_probs_1132[p13] / sym_state_probs_1132[p00]) # (0,0) -> (1,3) sym_state_recursive_ratios_P0_1132[2,3] = sym.factor(sym_state_probs_1132[p23] / sym_state_probs_1132[p00]) # (0,0) -> (2,3) ``` ```python sym_state_recursive_ratios_1132 ``` $\displaystyle \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & 
\frac{\lambda^{o} \left(\lambda^{A} + \mu\right)}{\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}} & \frac{\lambda^{o}}{\lambda^{A} + \mu}\\0 & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \lambda^{o} \mu + \mu^{2}\right)}{\mu \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 4 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 3 \lambda^{A} \lambda^{o} \mu + 5 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{2} \mu + 2 \lambda^{o} \mu^{2} + 2 \mu^{3}\right)}{\mu \left(\lambda^{A} + \mu\right) \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 4 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + 2 \lambda^{o} \mu + 3 \mu^{2}\right)}{\mu \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)}\\0 & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{4} + 4 \left(\lambda^{A}\right)^{3} \lambda^{o} + 4 \left(\lambda^{A}\right)^{3} \mu + 6 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} + 11 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu + 6 \left(\lambda^{A}\right)^{2} \mu^{2} + 4 \lambda^{A} \left(\lambda^{o}\right)^{3} + 10 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu + 10 \lambda^{A} \lambda^{o} \mu^{2} + 4 \lambda^{A} \mu^{3} + \left(\lambda^{o}\right)^{4} + 3 \left(\lambda^{o}\right)^{3} \mu + 6 \left(\lambda^{o}\right)^{2} \mu^{2} + 3 \lambda^{o} \mu^{3} + \mu^{4}\right)}{\mu \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right) \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \lambda^{o} \mu + \mu^{2}\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{4} + 4 \left(\lambda^{A}\right)^{3} \lambda^{o} + 5 \left(\lambda^{A}\right)^{3} \mu + 6 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} + 14 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu + 10 \left(\lambda^{A}\right)^{2} \mu^{2} + 4 \lambda^{A} \left(\lambda^{o}\right)^{3} + 13 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu + 17 \lambda^{A} \lambda^{o} \mu^{2} + 9 \lambda^{A} \mu^{3} + \left(\lambda^{o}\right)^{4} + 4 \left(\lambda^{o}\right)^{3} \mu + 9 \left(\lambda^{o}\right)^{2} \mu^{2} + 8 \lambda^{o} \mu^{3} + 3 \mu^{4}\right)}{\mu^{2} \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 4 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 3 \lambda^{A} \lambda^{o} \mu + 5 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{2} \mu + 2 \lambda^{o} \mu^{2} + 2 \mu^{3}\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{4} + 4 \left(\lambda^{A}\right)^{3} \lambda^{o} + 5 \left(\lambda^{A}\right)^{3} \mu + 6 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} + 14 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu + 11 \left(\lambda^{A}\right)^{2} \mu^{2} + 4 \lambda^{A} \left(\lambda^{o}\right)^{3} + 13 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu + 19 \lambda^{A} \lambda^{o} \mu^{2} + 13 \lambda^{A} \mu^{3} + \left(\lambda^{o}\right)^{4} + 4 \left(\lambda^{o}\right)^{3} \mu + 10 \left(\lambda^{o}\right)^{2} \mu^{2} + 10 \lambda^{o} \mu^{3} + 6 \mu^{4}\right)}{\mu^{3} 
\left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 4 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + 2 \lambda^{o} \mu + 3 \mu^{2}\right)}\end{matrix}\right]$ ```python sym_state_recursive_ratios_right_1132 ``` $\displaystyle \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\lambda^{o} \left(\lambda^{A} + \mu\right)}{\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}} & \frac{\lambda^{o}}{\lambda^{A} + \mu}\\0 & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \lambda^{o} \mu + \mu^{2}\right)}{\mu \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 4 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 3 \lambda^{A} \lambda^{o} \mu + 5 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{2} \mu + 2 \lambda^{o} \mu^{2} + 2 \mu^{3}\right)}{\left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right) \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \lambda^{o} \mu + \mu^{2}\right)} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 4 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + 2 \lambda^{o} \mu + 3 \mu^{2}\right)}{\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 4 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 3 \lambda^{A} \lambda^{o} \mu + 5 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{2} \mu + 2 \lambda^{o} \mu^{2} + 2 \mu^{3}}\\0 & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{4} + 4 \left(\lambda^{A}\right)^{3} \lambda^{o} + 4 \left(\lambda^{A}\right)^{3} \mu + 6 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} + 11 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu + 6 \left(\lambda^{A}\right)^{2} \mu^{2} + 4 \lambda^{A} \left(\lambda^{o}\right)^{3} + 10 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu + 10 \lambda^{A} \lambda^{o} \mu^{2} + 4 \lambda^{A} \mu^{3} + \left(\lambda^{o}\right)^{4} + 3 \left(\lambda^{o}\right)^{3} \mu + 6 \left(\lambda^{o}\right)^{2} \mu^{2} + 3 \lambda^{o} \mu^{3} + \mu^{4}\right)}{\mu \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right) \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \lambda^{o} \mu + \mu^{2}\right)} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{4} + 4 \left(\lambda^{A}\right)^{3} \lambda^{o} + 5 \left(\lambda^{A}\right)^{3} \mu + 6 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} + 14 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu + 10 \left(\lambda^{A}\right)^{2} \mu^{2} + 4 \lambda^{A} \left(\lambda^{o}\right)^{3} + 13 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu + 17 \lambda^{A} \lambda^{o} \mu^{2} + 9 \lambda^{A} \mu^{3} + \left(\lambda^{o}\right)^{4} + 4 \left(\lambda^{o}\right)^{3} \mu + 9 \left(\lambda^{o}\right)^{2} \mu^{2} + 8 \lambda^{o} \mu^{3} + 3 \mu^{4}\right)}{\mu \left(\left(\lambda^{A}\right)^{4} + 4 \left(\lambda^{A}\right)^{3} \lambda^{o} + 4 \left(\lambda^{A}\right)^{3} \mu + 6 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} + 11 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu + 6 \left(\lambda^{A}\right)^{2} \mu^{2} + 4 \lambda^{A} \left(\lambda^{o}\right)^{3} + 10 \lambda^{A} 
\left(\lambda^{o}\right)^{2} \mu + 10 \lambda^{A} \lambda^{o} \mu^{2} + 4 \lambda^{A} \mu^{3} + \left(\lambda^{o}\right)^{4} + 3 \left(\lambda^{o}\right)^{3} \mu + 6 \left(\lambda^{o}\right)^{2} \mu^{2} + 3 \lambda^{o} \mu^{3} + \mu^{4}\right)} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{4} + 4 \left(\lambda^{A}\right)^{3} \lambda^{o} + 5 \left(\lambda^{A}\right)^{3} \mu + 6 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} + 14 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu + 11 \left(\lambda^{A}\right)^{2} \mu^{2} + 4 \lambda^{A} \left(\lambda^{o}\right)^{3} + 13 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu + 19 \lambda^{A} \lambda^{o} \mu^{2} + 13 \lambda^{A} \mu^{3} + \left(\lambda^{o}\right)^{4} + 4 \left(\lambda^{o}\right)^{3} \mu + 10 \left(\lambda^{o}\right)^{2} \mu^{2} + 10 \lambda^{o} \mu^{3} + 6 \mu^{4}\right)}{\mu \left(\left(\lambda^{A}\right)^{4} + 4 \left(\lambda^{A}\right)^{3} \lambda^{o} + 5 \left(\lambda^{A}\right)^{3} \mu + 6 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} + 14 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu + 10 \left(\lambda^{A}\right)^{2} \mu^{2} + 4 \lambda^{A} \left(\lambda^{o}\right)^{3} + 13 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu + 17 \lambda^{A} \lambda^{o} \mu^{2} + 9 \lambda^{A} \mu^{3} + \left(\lambda^{o}\right)^{4} + 4 \left(\lambda^{o}\right)^{3} \mu + 9 \left(\lambda^{o}\right)^{2} \mu^{2} + 8 \lambda^{o} \mu^{3} + 3 \mu^{4}\right)}\end{matrix}\right]$ ```python sym_state_recursive_ratios_P0_1132 ``` $\displaystyle \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\Lambda \lambda^{o} \left(\lambda^{A} + \mu\right)}{\mu \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)} & \frac{\Lambda \left(\lambda^{o}\right)^{2}}{\mu \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)}\\0 & \frac{\Lambda \lambda^{A} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \lambda^{o} \mu + \mu^{2}\right)}{\mu^{2} \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)} & \frac{\Lambda \lambda^{A} \lambda^{o} \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 4 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 3 \lambda^{A} \lambda^{o} \mu + 5 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{2} \mu + 2 \lambda^{o} \mu^{2} + 2 \mu^{3}\right)}{\mu^{2} \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)^{2}} & \frac{\Lambda \lambda^{A} \left(\lambda^{o}\right)^{2} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 4 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + 2 \lambda^{o} \mu + 3 \mu^{2}\right)}{\mu^{2} \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)^{2}}\\0 & \frac{\Lambda \left(\lambda^{A}\right)^{2} \left(\left(\lambda^{A}\right)^{4} + 4 \left(\lambda^{A}\right)^{3} \lambda^{o} + 4 \left(\lambda^{A}\right)^{3} \mu + 6 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} + 11 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu + 6 \left(\lambda^{A}\right)^{2} \mu^{2} + 4 \lambda^{A} \left(\lambda^{o}\right)^{3} + 10 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu + 10 \lambda^{A} \lambda^{o} \mu^{2} + 4 \lambda^{A} \mu^{3} + \left(\lambda^{o}\right)^{4} + 3 \left(\lambda^{o}\right)^{3} \mu + 6 \left(\lambda^{o}\right)^{2} \mu^{2} + 3 \lambda^{o} \mu^{3} + 
\mu^{4}\right)}{\mu^{3} \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)^{2}} & \frac{\Lambda \left(\lambda^{A}\right)^{2} \lambda^{o} \left(\left(\lambda^{A}\right)^{4} + 4 \left(\lambda^{A}\right)^{3} \lambda^{o} + 5 \left(\lambda^{A}\right)^{3} \mu + 6 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} + 14 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu + 10 \left(\lambda^{A}\right)^{2} \mu^{2} + 4 \lambda^{A} \left(\lambda^{o}\right)^{3} + 13 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu + 17 \lambda^{A} \lambda^{o} \mu^{2} + 9 \lambda^{A} \mu^{3} + \left(\lambda^{o}\right)^{4} + 4 \left(\lambda^{o}\right)^{3} \mu + 9 \left(\lambda^{o}\right)^{2} \mu^{2} + 8 \lambda^{o} \mu^{3} + 3 \mu^{4}\right)}{\mu^{4} \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)^{2}} & \frac{\Lambda \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \left(\left(\lambda^{A}\right)^{4} + 4 \left(\lambda^{A}\right)^{3} \lambda^{o} + 5 \left(\lambda^{A}\right)^{3} \mu + 6 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} + 14 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu + 11 \left(\lambda^{A}\right)^{2} \mu^{2} + 4 \lambda^{A} \left(\lambda^{o}\right)^{3} + 13 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu + 19 \lambda^{A} \lambda^{o} \mu^{2} + 13 \lambda^{A} \mu^{3} + \left(\lambda^{o}\right)^{4} + 4 \left(\lambda^{o}\right)^{3} \mu + 10 \left(\lambda^{o}\right)^{2} \mu^{2} + 10 \lambda^{o} \mu^{3} + 6 \mu^{4}\right)}{\mu^{5} \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)^{2}}\end{matrix}\right]$ ```python ``` ```python ``` ## $C=1, T=1, N=4, M=1$ ```python num_of_servers = 1 threshold = 1 system_capacity = 4 buffer_capacity = 1 ``` ```python abg.markov.visualise_ambulance_markov_chain(num_of_servers=1, threshold=1, system_capacity=4, buffer_capacity=1) ``` ```python all_states_1141 = abg.markov.build_states(threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity) Q_sym_1141 = abg.markov.get_symbolic_transition_matrix(num_of_servers, threshold, system_capacity, buffer_capacity) p00, p01, p11, p02, p12, p03, p13, p04, p14 = sym.symbols('p00, p01, p11, p02, p12, p03, p13, p04, p14') pi_1141 = sym.Matrix([p00, p01, p11, p02, p12, p03, p13, p04, p14]) dimension_1141 = Q_sym_1141.shape[0] M_sym_1141 = sym.Matrix([Q_sym_1141.transpose()[:-1,:], sym.ones(1,dimension_1141)]) sym_diff_equations_1141 = (M_sym_1141 @ pi_1141) b_sym_1141 = sym.Matrix([sym.zeros(dimension_1141 - 1, 1), [1]]) ``` ```python eq0_1141 = sym.Eq(sym_diff_equations_1141[0],b_sym_1141[0]) eq1_1141 = sym.Eq(sym_diff_equations_1141[1],b_sym_1141[1]) eq2_1141 = sym.Eq(sym_diff_equations_1141[2],b_sym_1141[2]) eq3_1141 = sym.Eq(sym_diff_equations_1141[3],b_sym_1141[3]) eq4_1141 = sym.Eq(sym_diff_equations_1141[4],b_sym_1141[4]) eq5_1141 = sym.Eq(sym_diff_equations_1141[5],b_sym_1141[5]) eq6_1141 = sym.Eq(sym_diff_equations_1141[6],b_sym_1141[6]) eq7_1141 = sym.Eq(sym_diff_equations_1141[7],b_sym_1141[7]) eq8_1141 = sym.Eq(sym_diff_equations_1141[8],b_sym_1141[8]) ``` ```python sym_state_probs_1141 = sym.solve([eq0_1141,eq1_1141,eq2_1141,eq3_1141,eq4_1141,eq5_1141,eq6_1141,eq7_1141,eq8_1141],(p00, p01, p11, p02, p12, p03, p13, p04, p14)) ``` ```python sym_state_recursive_ratios_1141 = sym.zeros(buffer_capacity + 1, system_capacity + 1) sym_state_recursive_ratios_1141[0,0] = 1 sym_state_recursive_ratios_1141[0,1] = sym.factor(sym_state_probs_1141[p01] / 
sym_state_probs_1141[p00]) # (0,0) -> (0,1) sym_state_recursive_ratios_1141[1,1] = sym.factor(sym_state_probs_1141[p11] / sym_state_probs_1141[p01]) # (0,1) -> (1,1) sym_state_recursive_ratios_1141[0,2] = sym.factor(sym_state_probs_1141[p02] / sym_state_probs_1141[p01]) # (0,1) -> (0,2) sym_state_recursive_ratios_1141[1,2] = sym.factor(sym_state_probs_1141[p12] / sym_state_probs_1141[p02]) # (0,2) -> (1,2) sym_state_recursive_ratios_1141[0,3] = sym.factor(sym_state_probs_1141[p03] / sym_state_probs_1141[p02]) # (0,2) -> (0,3) sym_state_recursive_ratios_1141[1,3] = sym.factor(sym_state_probs_1141[p13] / sym_state_probs_1141[p03]) # (0,3) -> (1,3) sym_state_recursive_ratios_1141[0,4] = sym.factor(sym_state_probs_1141[p04] / sym_state_probs_1141[p03]) # (0,3) -> (0,4) sym_state_recursive_ratios_1141[1,4] = sym.factor(sym_state_probs_1141[p14] / sym_state_probs_1141[p04]) # (0,4) -> (1,4) sym_state_recursive_ratios_right_1141 = sym_state_recursive_ratios_1141.copy() sym_state_recursive_ratios_right_1141[1,2] = sym.factor(sym_state_probs_1141[p12] / sym_state_probs_1141[p11]) # (1,1) -> (1,2) sym_state_recursive_ratios_right_1141[1,3] = sym.factor(sym_state_probs_1141[p13] / sym_state_probs_1141[p12]) # (1,2) -> (1,3) sym_state_recursive_ratios_right_1141[1,4] = sym.factor(sym_state_probs_1141[p14] / sym_state_probs_1141[p13]) # (1,3) -> (1,4) sym_state_recursive_ratios_P0_1141 = sym.zeros(buffer_capacity + 1, system_capacity + 1) sym_state_recursive_ratios_P0_1141[0,0] = 1 sym_state_recursive_ratios_P0_1141[0,1] = sym.factor(sym_state_probs_1141[p01] / sym_state_probs_1141[p00]) # (0,0) -> (0,1) sym_state_recursive_ratios_P0_1141[1,1] = sym.factor(sym_state_probs_1141[p11] / sym_state_probs_1141[p00]) # (0,0) -> (1,1) sym_state_recursive_ratios_P0_1141[0,2] = sym.factor(sym_state_probs_1141[p02] / sym_state_probs_1141[p00]) # (0,0) -> (0,2) sym_state_recursive_ratios_P0_1141[1,2] = sym.factor(sym_state_probs_1141[p12] / sym_state_probs_1141[p00]) # (0,0) -> (1,2) sym_state_recursive_ratios_P0_1141[0,3] = sym.factor(sym_state_probs_1141[p03] / sym_state_probs_1141[p00]) # (0,0) -> (0,3) sym_state_recursive_ratios_P0_1141[1,3] = sym.factor(sym_state_probs_1141[p13] / sym_state_probs_1141[p00]) # (0,0) -> (1,3) sym_state_recursive_ratios_P0_1141[0,4] = sym.factor(sym_state_probs_1141[p04] / sym_state_probs_1141[p00]) # (0,0) -> (0,4) sym_state_recursive_ratios_P0_1141[1,4] = sym.factor(sym_state_probs_1141[p14] / sym_state_probs_1141[p00]) # (0,0) -> (1,4) ``` ```python sym_state_recursive_ratios_1141 ``` $\displaystyle \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)}{\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}} & \frac{\lambda^{o} \left(\lambda^{A} + \mu\right)}{\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}} & \frac{\lambda^{o}}{\lambda^{A} + \mu}\\0 & \frac{\lambda^{A} \left(\lambda^{A} + \lambda^{o} + \mu\right) \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \mu^{2}\right)}{\mu \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + 
\mu^{3}\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{3} + 3 \left(\lambda^{A}\right)^{2} \lambda^{o} + 4 \left(\lambda^{A}\right)^{2} \mu + 3 \lambda^{A} \left(\lambda^{o}\right)^{2} + 6 \lambda^{A} \lambda^{o} \mu + 5 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{3} + 2 \left(\lambda^{o}\right)^{2} \mu + 2 \lambda^{o} \mu^{2} + 2 \mu^{3}\right)}{\mu^{2} \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{3} + 3 \left(\lambda^{A}\right)^{2} \lambda^{o} + 4 \left(\lambda^{A}\right)^{2} \mu + 3 \lambda^{A} \left(\lambda^{o}\right)^{2} + 6 \lambda^{A} \lambda^{o} \mu + 6 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{3} + 2 \left(\lambda^{o}\right)^{2} \mu + 3 \lambda^{o} \mu^{2} + 3 \mu^{3}\right)}{\mu^{3} \left(\lambda^{A} + \mu\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{3} + 3 \left(\lambda^{A}\right)^{2} \lambda^{o} + 4 \left(\lambda^{A}\right)^{2} \mu + 3 \lambda^{A} \left(\lambda^{o}\right)^{2} + 6 \lambda^{A} \lambda^{o} \mu + 6 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{3} + 2 \left(\lambda^{o}\right)^{2} \mu + 3 \lambda^{o} \mu^{2} + 4 \mu^{3}\right)}{\mu^{4}}\end{matrix}\right]$ ```python sym_state_recursive_ratios_right_1141 ``` $\displaystyle \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)}{\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}} & \frac{\lambda^{o} \left(\lambda^{A} + \mu\right)}{\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}} & \frac{\lambda^{o}}{\lambda^{A} + \mu}\\0 & \frac{\lambda^{A} \left(\lambda^{A} + \lambda^{o} + \mu\right) \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \mu^{2}\right)}{\mu \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{3} + 3 \left(\lambda^{A}\right)^{2} \lambda^{o} + 4 \left(\lambda^{A}\right)^{2} \mu + 3 \lambda^{A} \left(\lambda^{o}\right)^{2} + 6 \lambda^{A} \lambda^{o} \mu + 5 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{3} + 2 \left(\lambda^{o}\right)^{2} \mu + 2 \lambda^{o} \mu^{2} + 2 \mu^{3}\right)}{\mu \left(\lambda^{A} + \lambda^{o} + \mu\right) \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \mu^{2}\right)} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{3} + 3 \left(\lambda^{A}\right)^{2} \lambda^{o} + 4 \left(\lambda^{A}\right)^{2} \mu + 3 \lambda^{A} \left(\lambda^{o}\right)^{2} + 6 \lambda^{A} \lambda^{o} \mu + 6 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{3} + 2 \left(\lambda^{o}\right)^{2} \mu + 3 \lambda^{o} \mu^{2} + 3 \mu^{3}\right)}{\mu \left(\left(\lambda^{A}\right)^{3} + 3 \left(\lambda^{A}\right)^{2} \lambda^{o} + 4 \left(\lambda^{A}\right)^{2} \mu + 3 \lambda^{A} \left(\lambda^{o}\right)^{2} + 6 \lambda^{A} \lambda^{o} \mu + 5 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{3} + 2 \left(\lambda^{o}\right)^{2} \mu + 2 \lambda^{o} \mu^{2} + 2 \mu^{3}\right)} & \frac{\lambda^{o} 
\left(\left(\lambda^{A}\right)^{3} + 3 \left(\lambda^{A}\right)^{2} \lambda^{o} + 4 \left(\lambda^{A}\right)^{2} \mu + 3 \lambda^{A} \left(\lambda^{o}\right)^{2} + 6 \lambda^{A} \lambda^{o} \mu + 6 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{3} + 2 \left(\lambda^{o}\right)^{2} \mu + 3 \lambda^{o} \mu^{2} + 4 \mu^{3}\right)}{\mu \left(\left(\lambda^{A}\right)^{3} + 3 \left(\lambda^{A}\right)^{2} \lambda^{o} + 4 \left(\lambda^{A}\right)^{2} \mu + 3 \lambda^{A} \left(\lambda^{o}\right)^{2} + 6 \lambda^{A} \lambda^{o} \mu + 6 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{3} + 2 \left(\lambda^{o}\right)^{2} \mu + 3 \lambda^{o} \mu^{2} + 3 \mu^{3}\right)}\end{matrix}\right]$ ```python sym_state_recursive_ratios_P0_1141 ``` $\displaystyle \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\Lambda \lambda^{o} \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)}{\mu \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)} & \frac{\Lambda \left(\lambda^{o}\right)^{2} \left(\lambda^{A} + \mu\right)}{\mu \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)} & \frac{\Lambda \left(\lambda^{o}\right)^{3}}{\mu \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)}\\0 & \frac{\Lambda \lambda^{A} \left(\lambda^{A} + \lambda^{o} + \mu\right) \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \mu^{2}\right)}{\mu^{2} \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)} & \frac{\Lambda \lambda^{A} \lambda^{o} \left(\left(\lambda^{A}\right)^{3} + 3 \left(\lambda^{A}\right)^{2} \lambda^{o} + 4 \left(\lambda^{A}\right)^{2} \mu + 3 \lambda^{A} \left(\lambda^{o}\right)^{2} + 6 \lambda^{A} \lambda^{o} \mu + 5 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{3} + 2 \left(\lambda^{o}\right)^{2} \mu + 2 \lambda^{o} \mu^{2} + 2 \mu^{3}\right)}{\mu^{3} \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)} & \frac{\Lambda \lambda^{A} \left(\lambda^{o}\right)^{2} \left(\left(\lambda^{A}\right)^{3} + 3 \left(\lambda^{A}\right)^{2} \lambda^{o} + 4 \left(\lambda^{A}\right)^{2} \mu + 3 \lambda^{A} \left(\lambda^{o}\right)^{2} + 6 \lambda^{A} \lambda^{o} \mu + 6 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{3} + 2 \left(\lambda^{o}\right)^{2} \mu + 3 \lambda^{o} \mu^{2} + 3 \mu^{3}\right)}{\mu^{4} \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)} & \frac{\Lambda \lambda^{A} \left(\lambda^{o}\right)^{3} \left(\left(\lambda^{A}\right)^{3} + 3 
\left(\lambda^{A}\right)^{2} \lambda^{o} + 4 \left(\lambda^{A}\right)^{2} \mu + 3 \lambda^{A} \left(\lambda^{o}\right)^{2} + 6 \lambda^{A} \lambda^{o} \mu + 6 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{3} + 2 \left(\lambda^{o}\right)^{2} \mu + 3 \lambda^{o} \mu^{2} + 4 \mu^{3}\right)}{\mu^{5} \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)}\end{matrix}\right]$ ```python ``` ## $C=1, T=1, N=4, M=2$ ```python num_of_servers = 1 threshold = 1 system_capacity = 4 buffer_capacity = 2 ``` ```python abg.markov.visualise_ambulance_markov_chain(num_of_servers=num_of_servers, threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity) ``` ```python all_states_1142 = abg.markov.build_states(threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity) Q_sym_1142 = abg.markov.get_symbolic_transition_matrix(num_of_servers=num_of_servers, threshold=threshold, system_capacity=system_capacity, buffer_capacity=buffer_capacity) p00, p01, p11, p21, p02, p12, p22, p03, p13, p23, p04, p14, p24 = sym.symbols('p00, p01, p11, p21, p02, p12, p22, p03, p13, p23, p04, p14, p24') pi_1142 = sym.Matrix([p00, p01, p11, p21, p02, p12, p22, p03, p13, p23, p04, p14, p24]) dimension_1142 = Q_sym_1142.shape[0] M_sym_1142 = sym.Matrix([Q_sym_1142.transpose()[:-1,:], sym.ones(1,dimension_1142)]) sym_diff_equations_1142 = (M_sym_1142 @ pi_1142) b_sym_1142 = sym.Matrix([sym.zeros(dimension_1142 - 1, 1), [1]]) ``` ```python eq0_1142 = sym.Eq(sym_diff_equations_1142[0],b_sym_1142[0]) eq1_1142 = sym.Eq(sym_diff_equations_1142[1],b_sym_1142[1]) eq2_1142 = sym.Eq(sym_diff_equations_1142[2],b_sym_1142[2]) eq3_1142 = sym.Eq(sym_diff_equations_1142[3],b_sym_1142[3]) eq4_1142 = sym.Eq(sym_diff_equations_1142[4],b_sym_1142[4]) eq5_1142 = sym.Eq(sym_diff_equations_1142[5],b_sym_1142[5]) eq6_1142 = sym.Eq(sym_diff_equations_1142[6],b_sym_1142[6]) eq7_1142 = sym.Eq(sym_diff_equations_1142[7],b_sym_1142[7]) eq8_1142 = sym.Eq(sym_diff_equations_1142[8],b_sym_1142[8]) eq9_1142 = sym.Eq(sym_diff_equations_1142[9],b_sym_1142[9]) eq10_1142 = sym.Eq(sym_diff_equations_1142[10],b_sym_1142[10]) eq11_1142 = sym.Eq(sym_diff_equations_1142[11],b_sym_1142[11]) eq12_1142 = sym.Eq(sym_diff_equations_1142[12],b_sym_1142[12]) ``` ```python sym_state_probs_1142 = sym.solve([eq0_1142,eq1_1142,eq2_1142,eq3_1142,eq4_1142,eq5_1142,eq6_1142,eq7_1142,eq8_1142,eq9_1142,eq10_1142,eq11_1142,eq12_1142],(p00, p01, p11, p21, p02, p12, p22, p03, p13, p23, p04, p14, p24)) ``` ```python sym_state_recursive_ratios_1142 = sym.zeros(buffer_capacity + 1, system_capacity + 1) sym_state_recursive_ratios_1142[0,0] = 1 sym_state_recursive_ratios_1142[0,1] = sym.factor(sym_state_probs_1142[p01] / sym_state_probs_1142[p00]) # (0,0) -> (0,1) sym_state_recursive_ratios_1142[1,1] = sym.factor(sym_state_probs_1142[p11] / sym_state_probs_1142[p01]) # (0,1) -> (1,1) sym_state_recursive_ratios_1142[2,1] = sym.factor(sym_state_probs_1142[p21] / sym_state_probs_1142[p11]) # (1,1) -> (2,1) sym_state_recursive_ratios_1142[0,2] = sym.factor(sym_state_probs_1142[p02] / sym_state_probs_1142[p01]) # (0,1) -> (0,2) sym_state_recursive_ratios_1142[1,2] = sym.factor(sym_state_probs_1142[p12] / sym_state_probs_1142[p02]) # (0,2) -> (1,2) sym_state_recursive_ratios_1142[2,2] = sym.factor(sym_state_probs_1142[p22] / sym_state_probs_1142[p12]) # (1,2) -> (2,2) 
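# Optional midway consistency check (sym.simplify can be slow on these large
# solved expressions): chaining the two "down" steps of column 1 should
# reproduce the direct ratio P(2,1)/P(0,1).
assert sym.simplify(
    sym_state_recursive_ratios_1142[1, 1] * sym_state_recursive_ratios_1142[2, 1]
    - sym_state_probs_1142[p21] / sym_state_probs_1142[p01]
) == 0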
sym_state_recursive_ratios_1142[0,3] = sym.factor(sym_state_probs_1142[p03] / sym_state_probs_1142[p02]) # (0,2) -> (0,3) sym_state_recursive_ratios_1142[1,3] = sym.factor(sym_state_probs_1142[p13] / sym_state_probs_1142[p03]) # (0,3) -> (1,3) sym_state_recursive_ratios_1142[2,3] = sym.factor(sym_state_probs_1142[p23] / sym_state_probs_1142[p13]) # (1,3) -> (2,3) sym_state_recursive_ratios_1142[0,4] = sym.factor(sym_state_probs_1142[p04] / sym_state_probs_1142[p03]) # (0,3) -> (0,4) sym_state_recursive_ratios_1142[1,4] = sym.factor(sym_state_probs_1142[p14] / sym_state_probs_1142[p04]) # (0,4) -> (1,4) sym_state_recursive_ratios_1142[2,4] = sym.factor(sym_state_probs_1142[p24] / sym_state_probs_1142[p14]) # (1,4) -> (2,4) sym_state_recursive_ratios_right_1142 = sym_state_recursive_ratios_1142.copy() sym_state_recursive_ratios_right_1142[1,2] = sym.factor(sym_state_probs_1142[p12] / sym_state_probs_1142[p11]) # (1,1) -> (1,2) sym_state_recursive_ratios_right_1142[1,3] = sym.factor(sym_state_probs_1142[p13] / sym_state_probs_1142[p12]) # (1,2) -> (1,3) sym_state_recursive_ratios_right_1142[1,4] = sym.factor(sym_state_probs_1142[p14] / sym_state_probs_1142[p13]) # (1,3) -> (1,4) sym_state_recursive_ratios_right_1142[2,2] = sym.factor(sym_state_probs_1142[p22] / sym_state_probs_1142[p21]) # (2,1) -> (2,2) sym_state_recursive_ratios_right_1142[2,3] = sym.factor(sym_state_probs_1142[p23] / sym_state_probs_1142[p22]) # (2,2) -> (2,3) sym_state_recursive_ratios_right_1142[2,4] = sym.factor(sym_state_probs_1142[p24] / sym_state_probs_1142[p23]) # (2,3) -> (2,4) sym_state_recursive_ratios_P0_1142 = sym.zeros(buffer_capacity + 1, system_capacity + 1) sym_state_recursive_ratios_P0_1142[0,0] = 1 sym_state_recursive_ratios_P0_1142[0,1] = sym.factor(sym_state_probs_1142[p01] / sym_state_probs_1142[p00]) # (0,0) -> (0,1) sym_state_recursive_ratios_P0_1142[1,1] = sym.factor(sym_state_probs_1142[p11] / sym_state_probs_1142[p00]) # (0,0) -> (1,1) sym_state_recursive_ratios_P0_1142[2,1] = sym.factor(sym_state_probs_1142[p21] / sym_state_probs_1142[p00]) # (0,0) -> (2,1) sym_state_recursive_ratios_P0_1142[0,2] = sym.factor(sym_state_probs_1142[p02] / sym_state_probs_1142[p00]) # (0,0) -> (0,2) sym_state_recursive_ratios_P0_1142[1,2] = sym.factor(sym_state_probs_1142[p12] / sym_state_probs_1142[p00]) # (0,0) -> (1,2) sym_state_recursive_ratios_P0_1142[2,2] = sym.factor(sym_state_probs_1142[p22] / sym_state_probs_1142[p00]) # (0,0) -> (2,2) sym_state_recursive_ratios_P0_1142[0,3] = sym.factor(sym_state_probs_1142[p03] / sym_state_probs_1142[p00]) # (0,0) -> (0,3) sym_state_recursive_ratios_P0_1142[1,3] = sym.factor(sym_state_probs_1142[p13] / sym_state_probs_1142[p00]) # (0,0) -> (1,3) sym_state_recursive_ratios_P0_1142[2,3] = sym.factor(sym_state_probs_1142[p23] / sym_state_probs_1142[p00]) # (0,0) -> (2,3) sym_state_recursive_ratios_P0_1142[0,4] = sym.factor(sym_state_probs_1142[p04] / sym_state_probs_1142[p00]) # (0,0) -> (0,4) sym_state_recursive_ratios_P0_1142[1,4] = sym.factor(sym_state_probs_1142[p14] / sym_state_probs_1142[p00]) # (0,0) -> (1,4) sym_state_recursive_ratios_P0_1142[2,4] = sym.factor(sym_state_probs_1142[p24] / sym_state_probs_1142[p00]) # (0,0) -> (2,4) ``` ```python sym_state_recursive_ratios_1142 ``` $\displaystyle \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)}{\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + 
\lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}} & \frac{\lambda^{o} \left(\lambda^{A} + \mu\right)}{\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}} & \frac{\lambda^{o}}{\lambda^{A} + \mu}\\0 & \frac{\lambda^{A} \left(\lambda^{A} + \lambda^{o} + \mu\right) \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \mu^{2}\right)}{\mu \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{5} + 4 \left(\lambda^{A}\right)^{4} \lambda^{o} + 6 \left(\lambda^{A}\right)^{4} \mu + 6 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} + 15 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu + 14 \left(\lambda^{A}\right)^{3} \mu^{2} + 4 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} + 12 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu + 20 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{2} + 16 \left(\lambda^{A}\right)^{2} \mu^{3} + \lambda^{A} \left(\lambda^{o}\right)^{4} + 3 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu + 6 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{2} + 11 \lambda^{A} \lambda^{o} \mu^{3} + 9 \lambda^{A} \mu^{4} + \left(\lambda^{o}\right)^{3} \mu^{2} + 2 \left(\lambda^{o}\right)^{2} \mu^{3} + 2 \lambda^{o} \mu^{4} + 2 \mu^{5}\right)}{\mu \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right) \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{4} + 3 \left(\lambda^{A}\right)^{3} \lambda^{o} + 6 \left(\lambda^{A}\right)^{3} \mu + 3 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} + 9 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu + 12 \left(\lambda^{A}\right)^{2} \mu^{2} + \lambda^{A} \left(\lambda^{o}\right)^{3} + 4 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu + 9 \lambda^{A} \lambda^{o} \mu^{2} + 10 \lambda^{A} \mu^{3} + \left(\lambda^{o}\right)^{3} \mu + 2 \left(\lambda^{o}\right)^{2} \mu^{2} + 3 \lambda^{o} \mu^{3} + 3 \mu^{4}\right)}{\mu \left(\lambda^{A} + \mu\right) \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{3} + 3 \left(\lambda^{A}\right)^{2} \lambda^{o} + 6 \left(\lambda^{A}\right)^{2} \mu + 3 \lambda^{A} \left(\lambda^{o}\right)^{2} + 8 \lambda^{A} \lambda^{o} \mu + 9 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{3} + 2 \left(\lambda^{o}\right)^{2} \mu + 3 \lambda^{o} \mu^{2} + 4 \mu^{3}\right)}{\mu \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)}\\0 & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{6} + 6 \left(\lambda^{A}\right)^{5} \lambda^{o} + 6 \left(\lambda^{A}\right)^{5} \mu + 15 \left(\lambda^{A}\right)^{4} \left(\lambda^{o}\right)^{2} + 27 
\left(\lambda^{A}\right)^{4} \lambda^{o} \mu + 15 \left(\lambda^{A}\right)^{4} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{3} + 48 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} \mu + 48 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{4} + 42 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} \mu + 57 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu^{2} + 42 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \mu^{4} + 6 \lambda^{A} \left(\lambda^{o}\right)^{5} + 18 \lambda^{A} \left(\lambda^{o}\right)^{4} \mu + 30 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu^{2} + 30 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{3} + 18 \lambda^{A} \lambda^{o} \mu^{4} + 6 \lambda^{A} \mu^{5} + \left(\lambda^{o}\right)^{6} + 3 \left(\lambda^{o}\right)^{5} \mu + 6 \left(\lambda^{o}\right)^{4} \mu^{2} + 10 \left(\lambda^{o}\right)^{3} \mu^{3} + 6 \left(\lambda^{o}\right)^{2} \mu^{4} + 3 \lambda^{o} \mu^{5} + \mu^{6}\right)}{\mu \left(\lambda^{A} + \lambda^{o} + \mu\right) \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \mu^{2}\right) \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{6} + 6 \left(\lambda^{A}\right)^{5} \lambda^{o} + 7 \left(\lambda^{A}\right)^{5} \mu + 15 \left(\lambda^{A}\right)^{4} \left(\lambda^{o}\right)^{2} + 32 \left(\lambda^{A}\right)^{4} \lambda^{o} \mu + 21 \left(\lambda^{A}\right)^{4} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{3} + 58 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} \mu + 69 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu^{2} + 34 \left(\lambda^{A}\right)^{3} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{4} + 52 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} \mu + 84 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu^{2} + 74 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{3} + 31 \left(\lambda^{A}\right)^{2} \mu^{4} + 6 \lambda^{A} \left(\lambda^{o}\right)^{5} + 23 \lambda^{A} \left(\lambda^{o}\right)^{4} \mu + 45 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu^{2} + 54 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{3} + 39 \lambda^{A} \lambda^{o} \mu^{4} + 15 \lambda^{A} \mu^{5} + \left(\lambda^{o}\right)^{6} + 4 \left(\lambda^{o}\right)^{5} \mu + 9 \left(\lambda^{o}\right)^{4} \mu^{2} + 16 \left(\lambda^{o}\right)^{3} \mu^{3} + 15 \left(\lambda^{o}\right)^{2} \mu^{4} + 8 \lambda^{o} \mu^{5} + 3 \mu^{6}\right)}{\mu^{2} \left(\left(\lambda^{A}\right)^{5} + 4 \left(\lambda^{A}\right)^{4} \lambda^{o} + 6 \left(\lambda^{A}\right)^{4} \mu + 6 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} + 15 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu + 14 \left(\lambda^{A}\right)^{3} \mu^{2} + 4 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} + 12 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu + 20 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{2} + 16 \left(\lambda^{A}\right)^{2} \mu^{3} + \lambda^{A} \left(\lambda^{o}\right)^{4} + 3 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu + 6 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{2} + 11 \lambda^{A} \lambda^{o} \mu^{3} + 9 \lambda^{A} \mu^{4} + 
\left(\lambda^{o}\right)^{3} \mu^{2} + 2 \left(\lambda^{o}\right)^{2} \mu^{3} + 2 \lambda^{o} \mu^{4} + 2 \mu^{5}\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{6} + 6 \left(\lambda^{A}\right)^{5} \lambda^{o} + 7 \left(\lambda^{A}\right)^{5} \mu + 15 \left(\lambda^{A}\right)^{4} \left(\lambda^{o}\right)^{2} + 32 \left(\lambda^{A}\right)^{4} \lambda^{o} \mu + 22 \left(\lambda^{A}\right)^{4} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{3} + 58 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} \mu + 73 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu^{2} + 40 \left(\lambda^{A}\right)^{3} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{4} + 52 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} \mu + 90 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu^{2} + 89 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{3} + 43 \left(\lambda^{A}\right)^{2} \mu^{4} + 6 \lambda^{A} \left(\lambda^{o}\right)^{5} + 23 \lambda^{A} \left(\lambda^{o}\right)^{4} \mu + 49 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu^{2} + 66 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{3} + 57 \lambda^{A} \lambda^{o} \mu^{4} + 25 \lambda^{A} \mu^{5} + \left(\lambda^{o}\right)^{6} + 4 \left(\lambda^{o}\right)^{5} \mu + 10 \left(\lambda^{o}\right)^{4} \mu^{2} + 19 \left(\lambda^{o}\right)^{3} \mu^{3} + 20 \left(\lambda^{o}\right)^{2} \mu^{4} + 15 \lambda^{o} \mu^{5} + 6 \mu^{6}\right)}{\mu^{3} \left(\left(\lambda^{A}\right)^{4} + 3 \left(\lambda^{A}\right)^{3} \lambda^{o} + 6 \left(\lambda^{A}\right)^{3} \mu + 3 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} + 9 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu + 12 \left(\lambda^{A}\right)^{2} \mu^{2} + \lambda^{A} \left(\lambda^{o}\right)^{3} + 4 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu + 9 \lambda^{A} \lambda^{o} \mu^{2} + 10 \lambda^{A} \mu^{3} + \left(\lambda^{o}\right)^{3} \mu + 2 \left(\lambda^{o}\right)^{2} \mu^{2} + 3 \lambda^{o} \mu^{3} + 3 \mu^{4}\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{6} + 6 \left(\lambda^{A}\right)^{5} \lambda^{o} + 7 \left(\lambda^{A}\right)^{5} \mu + 15 \left(\lambda^{A}\right)^{4} \left(\lambda^{o}\right)^{2} + 32 \left(\lambda^{A}\right)^{4} \lambda^{o} \mu + 22 \left(\lambda^{A}\right)^{4} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{3} + 58 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} \mu + 73 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu^{2} + 41 \left(\lambda^{A}\right)^{3} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{4} + 52 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} \mu + 90 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu^{2} + 92 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{3} + 49 \left(\lambda^{A}\right)^{2} \mu^{4} + 6 \lambda^{A} \left(\lambda^{o}\right)^{5} + 23 \lambda^{A} \left(\lambda^{o}\right)^{4} \mu + 49 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu^{2} + 69 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{3} + 65 \lambda^{A} \lambda^{o} \mu^{4} + 34 \lambda^{A} \mu^{5} + \left(\lambda^{o}\right)^{6} + 4 \left(\lambda^{o}\right)^{5} \mu + 10 \left(\lambda^{o}\right)^{4} \mu^{2} + 20 \left(\lambda^{o}\right)^{3} \mu^{3} + 22 \left(\lambda^{o}\right)^{2} \mu^{4} + 18 \lambda^{o} \mu^{5} + 10 \mu^{6}\right)}{\mu^{4} \left(\left(\lambda^{A}\right)^{3} + 3 \left(\lambda^{A}\right)^{2} \lambda^{o} + 6 \left(\lambda^{A}\right)^{2} \mu + 3 \lambda^{A} \left(\lambda^{o}\right)^{2} + 8 \lambda^{A} \lambda^{o} \mu + 9 \lambda^{A} \mu^{2} + 
\left(\lambda^{o}\right)^{3} + 2 \left(\lambda^{o}\right)^{2} \mu + 3 \lambda^{o} \mu^{2} + 4 \mu^{3}\right)}\end{matrix}\right]$ ```python sym_state_recursive_ratios_right_1142 ``` $\displaystyle \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)}{\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}} & \frac{\lambda^{o} \left(\lambda^{A} + \mu\right)}{\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}} & \frac{\lambda^{o}}{\lambda^{A} + \mu}\\0 & \frac{\lambda^{A} \left(\lambda^{A} + \lambda^{o} + \mu\right) \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \mu^{2}\right)}{\mu \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{5} + 4 \left(\lambda^{A}\right)^{4} \lambda^{o} + 6 \left(\lambda^{A}\right)^{4} \mu + 6 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} + 15 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu + 14 \left(\lambda^{A}\right)^{3} \mu^{2} + 4 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} + 12 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu + 20 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{2} + 16 \left(\lambda^{A}\right)^{2} \mu^{3} + \lambda^{A} \left(\lambda^{o}\right)^{4} + 3 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu + 6 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{2} + 11 \lambda^{A} \lambda^{o} \mu^{3} + 9 \lambda^{A} \mu^{4} + \left(\lambda^{o}\right)^{3} \mu^{2} + 2 \left(\lambda^{o}\right)^{2} \mu^{3} + 2 \lambda^{o} \mu^{4} + 2 \mu^{5}\right)}{\left(\lambda^{A} + \lambda^{o} + \mu\right) \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \mu^{2}\right) \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{4} + 3 \left(\lambda^{A}\right)^{3} \lambda^{o} + 6 \left(\lambda^{A}\right)^{3} \mu + 3 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} + 9 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu + 12 \left(\lambda^{A}\right)^{2} \mu^{2} + \lambda^{A} \left(\lambda^{o}\right)^{3} + 4 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu + 9 \lambda^{A} \lambda^{o} \mu^{2} + 10 \lambda^{A} \mu^{3} + \left(\lambda^{o}\right)^{3} \mu + 2 \left(\lambda^{o}\right)^{2} \mu^{2} + 3 \lambda^{o} \mu^{3} + 3 \mu^{4}\right)}{\left(\lambda^{A}\right)^{5} + 4 \left(\lambda^{A}\right)^{4} \lambda^{o} + 6 \left(\lambda^{A}\right)^{4} \mu + 6 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} + 15 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu + 14 \left(\lambda^{A}\right)^{3} \mu^{2} + 4 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} + 12 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu + 20 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{2} + 16 \left(\lambda^{A}\right)^{2} \mu^{3} + \lambda^{A} \left(\lambda^{o}\right)^{4} + 3 
\lambda^{A} \left(\lambda^{o}\right)^{3} \mu + 6 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{2} + 11 \lambda^{A} \lambda^{o} \mu^{3} + 9 \lambda^{A} \mu^{4} + \left(\lambda^{o}\right)^{3} \mu^{2} + 2 \left(\lambda^{o}\right)^{2} \mu^{3} + 2 \lambda^{o} \mu^{4} + 2 \mu^{5}} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{3} + 3 \left(\lambda^{A}\right)^{2} \lambda^{o} + 6 \left(\lambda^{A}\right)^{2} \mu + 3 \lambda^{A} \left(\lambda^{o}\right)^{2} + 8 \lambda^{A} \lambda^{o} \mu + 9 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{3} + 2 \left(\lambda^{o}\right)^{2} \mu + 3 \lambda^{o} \mu^{2} + 4 \mu^{3}\right)}{\left(\lambda^{A}\right)^{4} + 3 \left(\lambda^{A}\right)^{3} \lambda^{o} + 6 \left(\lambda^{A}\right)^{3} \mu + 3 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} + 9 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu + 12 \left(\lambda^{A}\right)^{2} \mu^{2} + \lambda^{A} \left(\lambda^{o}\right)^{3} + 4 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu + 9 \lambda^{A} \lambda^{o} \mu^{2} + 10 \lambda^{A} \mu^{3} + \left(\lambda^{o}\right)^{3} \mu + 2 \left(\lambda^{o}\right)^{2} \mu^{2} + 3 \lambda^{o} \mu^{3} + 3 \mu^{4}}\\0 & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{6} + 6 \left(\lambda^{A}\right)^{5} \lambda^{o} + 6 \left(\lambda^{A}\right)^{5} \mu + 15 \left(\lambda^{A}\right)^{4} \left(\lambda^{o}\right)^{2} + 27 \left(\lambda^{A}\right)^{4} \lambda^{o} \mu + 15 \left(\lambda^{A}\right)^{4} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{3} + 48 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} \mu + 48 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{4} + 42 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} \mu + 57 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu^{2} + 42 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \mu^{4} + 6 \lambda^{A} \left(\lambda^{o}\right)^{5} + 18 \lambda^{A} \left(\lambda^{o}\right)^{4} \mu + 30 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu^{2} + 30 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{3} + 18 \lambda^{A} \lambda^{o} \mu^{4} + 6 \lambda^{A} \mu^{5} + \left(\lambda^{o}\right)^{6} + 3 \left(\lambda^{o}\right)^{5} \mu + 6 \left(\lambda^{o}\right)^{4} \mu^{2} + 10 \left(\lambda^{o}\right)^{3} \mu^{3} + 6 \left(\lambda^{o}\right)^{2} \mu^{4} + 3 \lambda^{o} \mu^{5} + \mu^{6}\right)}{\mu \left(\lambda^{A} + \lambda^{o} + \mu\right) \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \mu^{2}\right) \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{6} + 6 \left(\lambda^{A}\right)^{5} \lambda^{o} + 7 \left(\lambda^{A}\right)^{5} \mu + 15 \left(\lambda^{A}\right)^{4} \left(\lambda^{o}\right)^{2} + 32 \left(\lambda^{A}\right)^{4} \lambda^{o} \mu + 21 \left(\lambda^{A}\right)^{4} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{3} + 58 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} \mu + 69 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu^{2} + 34 \left(\lambda^{A}\right)^{3} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{4} + 52 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} \mu + 84 
\left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu^{2} + 74 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{3} + 31 \left(\lambda^{A}\right)^{2} \mu^{4} + 6 \lambda^{A} \left(\lambda^{o}\right)^{5} + 23 \lambda^{A} \left(\lambda^{o}\right)^{4} \mu + 45 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu^{2} + 54 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{3} + 39 \lambda^{A} \lambda^{o} \mu^{4} + 15 \lambda^{A} \mu^{5} + \left(\lambda^{o}\right)^{6} + 4 \left(\lambda^{o}\right)^{5} \mu + 9 \left(\lambda^{o}\right)^{4} \mu^{2} + 16 \left(\lambda^{o}\right)^{3} \mu^{3} + 15 \left(\lambda^{o}\right)^{2} \mu^{4} + 8 \lambda^{o} \mu^{5} + 3 \mu^{6}\right)}{\mu \left(\left(\lambda^{A}\right)^{6} + 6 \left(\lambda^{A}\right)^{5} \lambda^{o} + 6 \left(\lambda^{A}\right)^{5} \mu + 15 \left(\lambda^{A}\right)^{4} \left(\lambda^{o}\right)^{2} + 27 \left(\lambda^{A}\right)^{4} \lambda^{o} \mu + 15 \left(\lambda^{A}\right)^{4} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{3} + 48 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} \mu + 48 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{4} + 42 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} \mu + 57 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu^{2} + 42 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \mu^{4} + 6 \lambda^{A} \left(\lambda^{o}\right)^{5} + 18 \lambda^{A} \left(\lambda^{o}\right)^{4} \mu + 30 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu^{2} + 30 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{3} + 18 \lambda^{A} \lambda^{o} \mu^{4} + 6 \lambda^{A} \mu^{5} + \left(\lambda^{o}\right)^{6} + 3 \left(\lambda^{o}\right)^{5} \mu + 6 \left(\lambda^{o}\right)^{4} \mu^{2} + 10 \left(\lambda^{o}\right)^{3} \mu^{3} + 6 \left(\lambda^{o}\right)^{2} \mu^{4} + 3 \lambda^{o} \mu^{5} + \mu^{6}\right)} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{6} + 6 \left(\lambda^{A}\right)^{5} \lambda^{o} + 7 \left(\lambda^{A}\right)^{5} \mu + 15 \left(\lambda^{A}\right)^{4} \left(\lambda^{o}\right)^{2} + 32 \left(\lambda^{A}\right)^{4} \lambda^{o} \mu + 22 \left(\lambda^{A}\right)^{4} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{3} + 58 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} \mu + 73 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu^{2} + 40 \left(\lambda^{A}\right)^{3} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{4} + 52 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} \mu + 90 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu^{2} + 89 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{3} + 43 \left(\lambda^{A}\right)^{2} \mu^{4} + 6 \lambda^{A} \left(\lambda^{o}\right)^{5} + 23 \lambda^{A} \left(\lambda^{o}\right)^{4} \mu + 49 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu^{2} + 66 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{3} + 57 \lambda^{A} \lambda^{o} \mu^{4} + 25 \lambda^{A} \mu^{5} + \left(\lambda^{o}\right)^{6} + 4 \left(\lambda^{o}\right)^{5} \mu + 10 \left(\lambda^{o}\right)^{4} \mu^{2} + 19 \left(\lambda^{o}\right)^{3} \mu^{3} + 20 \left(\lambda^{o}\right)^{2} \mu^{4} + 15 \lambda^{o} \mu^{5} + 6 \mu^{6}\right)}{\mu \left(\left(\lambda^{A}\right)^{6} + 6 \left(\lambda^{A}\right)^{5} \lambda^{o} + 7 \left(\lambda^{A}\right)^{5} \mu + 15 \left(\lambda^{A}\right)^{4} \left(\lambda^{o}\right)^{2} + 32 \left(\lambda^{A}\right)^{4} \lambda^{o} \mu + 21 \left(\lambda^{A}\right)^{4} 
\mu^{2} + 20 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{3} + 58 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} \mu + 69 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu^{2} + 34 \left(\lambda^{A}\right)^{3} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{4} + 52 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} \mu + 84 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu^{2} + 74 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{3} + 31 \left(\lambda^{A}\right)^{2} \mu^{4} + 6 \lambda^{A} \left(\lambda^{o}\right)^{5} + 23 \lambda^{A} \left(\lambda^{o}\right)^{4} \mu + 45 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu^{2} + 54 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{3} + 39 \lambda^{A} \lambda^{o} \mu^{4} + 15 \lambda^{A} \mu^{5} + \left(\lambda^{o}\right)^{6} + 4 \left(\lambda^{o}\right)^{5} \mu + 9 \left(\lambda^{o}\right)^{4} \mu^{2} + 16 \left(\lambda^{o}\right)^{3} \mu^{3} + 15 \left(\lambda^{o}\right)^{2} \mu^{4} + 8 \lambda^{o} \mu^{5} + 3 \mu^{6}\right)} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{6} + 6 \left(\lambda^{A}\right)^{5} \lambda^{o} + 7 \left(\lambda^{A}\right)^{5} \mu + 15 \left(\lambda^{A}\right)^{4} \left(\lambda^{o}\right)^{2} + 32 \left(\lambda^{A}\right)^{4} \lambda^{o} \mu + 22 \left(\lambda^{A}\right)^{4} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{3} + 58 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} \mu + 73 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu^{2} + 41 \left(\lambda^{A}\right)^{3} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{4} + 52 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} \mu + 90 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu^{2} + 92 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{3} + 49 \left(\lambda^{A}\right)^{2} \mu^{4} + 6 \lambda^{A} \left(\lambda^{o}\right)^{5} + 23 \lambda^{A} \left(\lambda^{o}\right)^{4} \mu + 49 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu^{2} + 69 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{3} + 65 \lambda^{A} \lambda^{o} \mu^{4} + 34 \lambda^{A} \mu^{5} + \left(\lambda^{o}\right)^{6} + 4 \left(\lambda^{o}\right)^{5} \mu + 10 \left(\lambda^{o}\right)^{4} \mu^{2} + 20 \left(\lambda^{o}\right)^{3} \mu^{3} + 22 \left(\lambda^{o}\right)^{2} \mu^{4} + 18 \lambda^{o} \mu^{5} + 10 \mu^{6}\right)}{\mu \left(\left(\lambda^{A}\right)^{6} + 6 \left(\lambda^{A}\right)^{5} \lambda^{o} + 7 \left(\lambda^{A}\right)^{5} \mu + 15 \left(\lambda^{A}\right)^{4} \left(\lambda^{o}\right)^{2} + 32 \left(\lambda^{A}\right)^{4} \lambda^{o} \mu + 22 \left(\lambda^{A}\right)^{4} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{3} + 58 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} \mu + 73 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu^{2} + 40 \left(\lambda^{A}\right)^{3} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{4} + 52 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} \mu + 90 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu^{2} + 89 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{3} + 43 \left(\lambda^{A}\right)^{2} \mu^{4} + 6 \lambda^{A} \left(\lambda^{o}\right)^{5} + 23 \lambda^{A} \left(\lambda^{o}\right)^{4} \mu + 49 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu^{2} + 66 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{3} + 57 \lambda^{A} \lambda^{o} \mu^{4} + 25 \lambda^{A} \mu^{5} + \left(\lambda^{o}\right)^{6} + 4 \left(\lambda^{o}\right)^{5} \mu + 10 \left(\lambda^{o}\right)^{4} \mu^{2} + 19 
\left(\lambda^{o}\right)^{3} \mu^{3} + 20 \left(\lambda^{o}\right)^{2} \mu^{4} + 15 \lambda^{o} \mu^{5} + 6 \mu^{6}\right)}\end{matrix}\right]$ ```python sym_state_recursive_ratios_P0_1142 ``` $\displaystyle \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\Lambda \lambda^{o} \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)}{\mu \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)} & \frac{\Lambda \left(\lambda^{o}\right)^{2} \left(\lambda^{A} + \mu\right)}{\mu \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)} & \frac{\Lambda \left(\lambda^{o}\right)^{3}}{\mu \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)}\\0 & \frac{\Lambda \lambda^{A} \left(\lambda^{A} + \lambda^{o} + \mu\right) \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \mu^{2}\right)}{\mu^{2} \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)} & \frac{\Lambda \lambda^{A} \lambda^{o} \left(\left(\lambda^{A}\right)^{5} + 4 \left(\lambda^{A}\right)^{4} \lambda^{o} + 6 \left(\lambda^{A}\right)^{4} \mu + 6 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} + 15 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu + 14 \left(\lambda^{A}\right)^{3} \mu^{2} + 4 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} + 12 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu + 20 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{2} + 16 \left(\lambda^{A}\right)^{2} \mu^{3} + \lambda^{A} \left(\lambda^{o}\right)^{4} + 3 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu + 6 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{2} + 11 \lambda^{A} \lambda^{o} \mu^{3} + 9 \lambda^{A} \mu^{4} + \left(\lambda^{o}\right)^{3} \mu^{2} + 2 \left(\lambda^{o}\right)^{2} \mu^{3} + 2 \lambda^{o} \mu^{4} + 2 \mu^{5}\right)}{\mu^{2} \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)^{2}} & \frac{\Lambda \lambda^{A} \left(\lambda^{o}\right)^{2} \left(\left(\lambda^{A}\right)^{4} + 3 \left(\lambda^{A}\right)^{3} \lambda^{o} + 6 \left(\lambda^{A}\right)^{3} \mu + 3 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} + 9 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu + 12 \left(\lambda^{A}\right)^{2} \mu^{2} + \lambda^{A} \left(\lambda^{o}\right)^{3} + 4 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu + 9 \lambda^{A} \lambda^{o} \mu^{2} + 10 \lambda^{A} \mu^{3} + \left(\lambda^{o}\right)^{3} \mu + 2 \left(\lambda^{o}\right)^{2} \mu^{2} + 3 \lambda^{o} \mu^{3} + 3 \mu^{4}\right)}{\mu^{2} \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 
\lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)^{2}} & \frac{\Lambda \lambda^{A} \left(\lambda^{o}\right)^{3} \left(\left(\lambda^{A}\right)^{3} + 3 \left(\lambda^{A}\right)^{2} \lambda^{o} + 6 \left(\lambda^{A}\right)^{2} \mu + 3 \lambda^{A} \left(\lambda^{o}\right)^{2} + 8 \lambda^{A} \lambda^{o} \mu + 9 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{3} + 2 \left(\lambda^{o}\right)^{2} \mu + 3 \lambda^{o} \mu^{2} + 4 \mu^{3}\right)}{\mu^{2} \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)^{2}}\\0 & \frac{\Lambda \left(\lambda^{A}\right)^{2} \left(\left(\lambda^{A}\right)^{6} + 6 \left(\lambda^{A}\right)^{5} \lambda^{o} + 6 \left(\lambda^{A}\right)^{5} \mu + 15 \left(\lambda^{A}\right)^{4} \left(\lambda^{o}\right)^{2} + 27 \left(\lambda^{A}\right)^{4} \lambda^{o} \mu + 15 \left(\lambda^{A}\right)^{4} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{3} + 48 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} \mu + 48 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{4} + 42 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} \mu + 57 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu^{2} + 42 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \mu^{4} + 6 \lambda^{A} \left(\lambda^{o}\right)^{5} + 18 \lambda^{A} \left(\lambda^{o}\right)^{4} \mu + 30 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu^{2} + 30 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{3} + 18 \lambda^{A} \lambda^{o} \mu^{4} + 6 \lambda^{A} \mu^{5} + \left(\lambda^{o}\right)^{6} + 3 \left(\lambda^{o}\right)^{5} \mu + 6 \left(\lambda^{o}\right)^{4} \mu^{2} + 10 \left(\lambda^{o}\right)^{3} \mu^{3} + 6 \left(\lambda^{o}\right)^{2} \mu^{4} + 3 \lambda^{o} \mu^{5} + \mu^{6}\right)}{\mu^{3} \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)^{2}} & \frac{\Lambda \left(\lambda^{A}\right)^{2} \lambda^{o} \left(\left(\lambda^{A}\right)^{6} + 6 \left(\lambda^{A}\right)^{5} \lambda^{o} + 7 \left(\lambda^{A}\right)^{5} \mu + 15 \left(\lambda^{A}\right)^{4} \left(\lambda^{o}\right)^{2} + 32 \left(\lambda^{A}\right)^{4} \lambda^{o} \mu + 21 \left(\lambda^{A}\right)^{4} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{3} + 58 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} \mu + 69 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu^{2} + 34 \left(\lambda^{A}\right)^{3} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{4} + 52 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} \mu + 84 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu^{2} + 74 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{3} + 31 \left(\lambda^{A}\right)^{2} \mu^{4} + 6 \lambda^{A} \left(\lambda^{o}\right)^{5} + 23 \lambda^{A} \left(\lambda^{o}\right)^{4} \mu + 45 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu^{2} + 54 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{3} + 39 \lambda^{A} \lambda^{o} \mu^{4} + 15 \lambda^{A} \mu^{5} + \left(\lambda^{o}\right)^{6} + 4 \left(\lambda^{o}\right)^{5} \mu + 9 \left(\lambda^{o}\right)^{4} \mu^{2} + 16 \left(\lambda^{o}\right)^{3} 
\mu^{3} + 15 \left(\lambda^{o}\right)^{2} \mu^{4} + 8 \lambda^{o} \mu^{5} + 3 \mu^{6}\right)}{\mu^{4} \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)^{2}} & \frac{\Lambda \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \left(\left(\lambda^{A}\right)^{6} + 6 \left(\lambda^{A}\right)^{5} \lambda^{o} + 7 \left(\lambda^{A}\right)^{5} \mu + 15 \left(\lambda^{A}\right)^{4} \left(\lambda^{o}\right)^{2} + 32 \left(\lambda^{A}\right)^{4} \lambda^{o} \mu + 22 \left(\lambda^{A}\right)^{4} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{3} + 58 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} \mu + 73 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu^{2} + 40 \left(\lambda^{A}\right)^{3} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{4} + 52 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} \mu + 90 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu^{2} + 89 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{3} + 43 \left(\lambda^{A}\right)^{2} \mu^{4} + 6 \lambda^{A} \left(\lambda^{o}\right)^{5} + 23 \lambda^{A} \left(\lambda^{o}\right)^{4} \mu + 49 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu^{2} + 66 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{3} + 57 \lambda^{A} \lambda^{o} \mu^{4} + 25 \lambda^{A} \mu^{5} + \left(\lambda^{o}\right)^{6} + 4 \left(\lambda^{o}\right)^{5} \mu + 10 \left(\lambda^{o}\right)^{4} \mu^{2} + 19 \left(\lambda^{o}\right)^{3} \mu^{3} + 20 \left(\lambda^{o}\right)^{2} \mu^{4} + 15 \lambda^{o} \mu^{5} + 6 \mu^{6}\right)}{\mu^{5} \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)^{2}} & \frac{\Lambda \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} \left(\left(\lambda^{A}\right)^{6} + 6 \left(\lambda^{A}\right)^{5} \lambda^{o} + 7 \left(\lambda^{A}\right)^{5} \mu + 15 \left(\lambda^{A}\right)^{4} \left(\lambda^{o}\right)^{2} + 32 \left(\lambda^{A}\right)^{4} \lambda^{o} \mu + 22 \left(\lambda^{A}\right)^{4} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{3} + 58 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} \mu + 73 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu^{2} + 41 \left(\lambda^{A}\right)^{3} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{4} + 52 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} \mu + 90 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu^{2} + 92 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{3} + 49 \left(\lambda^{A}\right)^{2} \mu^{4} + 6 \lambda^{A} \left(\lambda^{o}\right)^{5} + 23 \lambda^{A} \left(\lambda^{o}\right)^{4} \mu + 49 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu^{2} + 69 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{3} + 65 \lambda^{A} \lambda^{o} \mu^{4} + 34 \lambda^{A} \mu^{5} + \left(\lambda^{o}\right)^{6} + 4 \left(\lambda^{o}\right)^{5} \mu + 10 \left(\lambda^{o}\right)^{4} \mu^{2} + 20 \left(\lambda^{o}\right)^{3} \mu^{3} + 22 \left(\lambda^{o}\right)^{2} \mu^{4} + 18 \lambda^{o} \mu^{5} + 10 \mu^{6}\right)}{\mu^{6} \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} 
\mu^{2} + \mu^{3}\right)^{2}}\end{matrix}\right]$

# Recursive Ratios

## $C=1, T=2, N=2, M=2$

```python
abg.markov.visualise_ambulance_markov_chain(num_of_servers=1, threshold=2, system_capacity=2, buffer_capacity=2)
```

```python
sym_state_recursive_ratios_1222
```

$\displaystyle \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\Lambda}{\mu}\\0 & 0 & \frac{\lambda^{A}}{\mu}\\0 & 0 & \frac{\lambda^{A}}{\mu}\end{matrix}\right]$

## $C=1, T=1, N=2, M=1$

```python
abg.markov.visualise_ambulance_markov_chain(num_of_servers=1, threshold=1, system_capacity=2, buffer_capacity=1)
```

```python
sym_state_recursive_ratios_1121, sym_state_recursive_ratios_right_1121
```

$\displaystyle \left( \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\lambda^{o}}{\lambda^{A} + \mu}\\0 & \frac{\lambda^{A} \left(\lambda^{A} + \lambda^{o} + \mu\right)}{\mu \left(\lambda^{A} + \mu\right)} & \frac{\lambda^{A} \left(\lambda^{A} + \lambda^{o} + 2 \mu\right)}{\mu^{2}}\end{matrix}\right], \ \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\lambda^{o}}{\lambda^{A} + \mu}\\0 & \frac{\lambda^{A} \left(\lambda^{A} + \lambda^{o} + \mu\right)}{\mu \left(\lambda^{A} + \mu\right)} & \frac{\lambda^{o} \left(\lambda^{A} + \lambda^{o} + 2 \mu\right)}{\mu \left(\lambda^{A} + \lambda^{o} + \mu\right)}\end{matrix}\right]\right)$

## $C=1, T=3, N=4, M=1$

```python
abg.markov.visualise_ambulance_markov_chain(num_of_servers=1, threshold=3, system_capacity=4, buffer_capacity=1)
```

```python
sym_state_recursive_ratios_1341, sym_state_recursive_ratios_right_1341
```

$\displaystyle \left( \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\Lambda}{\mu} & \frac{\Lambda}{\mu} & \frac{\lambda^{o}}{\lambda^{A} + \mu}\\0 & 0 & 0 & \frac{\lambda^{A} \left(\lambda^{A} + \lambda^{o} + \mu\right)}{\mu \left(\lambda^{A} + \mu\right)} & \frac{\lambda^{A} \left(\lambda^{A} + \lambda^{o} + 2 \mu\right)}{\mu^{2}}\end{matrix}\right], \ \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\Lambda}{\mu} & \frac{\Lambda}{\mu} & \frac{\lambda^{o}}{\lambda^{A} + \mu}\\0 & 0 & 0 & \frac{\lambda^{A} \left(\lambda^{A} + \lambda^{o} + \mu\right)}{\mu \left(\lambda^{A} + \mu\right)} & \frac{\lambda^{o} \left(\lambda^{A} + \lambda^{o} + 2 \mu\right)}{\mu \left(\lambda^{A} + \lambda^{o} + \mu\right)}\end{matrix}\right]\right)$

## $C=1, T=1, N=2, M=2$

```python
abg.markov.visualise_ambulance_markov_chain(num_of_servers=1, threshold=1, system_capacity=2, buffer_capacity=2)
```

```python
sym_state_recursive_ratios_1122, sym_state_recursive_ratios_right_1122
```

$\displaystyle \left( \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\lambda^{o}}{\lambda^{A} + \mu}\\0 & \frac{\lambda^{A} \left(\lambda^{A} + \lambda^{o} + \mu\right)}{\mu \left(\lambda^{A} + \mu\right)} & \frac{\lambda^{A} \left(\lambda^{A} + \lambda^{o} + 2 \mu\right)}{\mu \left(\lambda^{A} + \mu\right)}\\0 & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + 3 \lambda^{o} \mu + \mu^{2}\right)}{\mu \left(\lambda^{A} + \mu\right) \left(\lambda^{A} + \lambda^{o} + \mu\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 3 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + 4 \lambda^{o} \mu + 3 \mu^{2}\right)}{\mu^{2} \left(\lambda^{A} + \lambda^{o} + 2 \mu\right)}\end{matrix}\right], \ \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\lambda^{o}}{\lambda^{A} + \mu}\\0 & \frac{\lambda^{A} \left(\lambda^{A} + \lambda^{o} + \mu\right)}{\mu
\left(\lambda^{A} + \mu\right)} & \frac{\lambda^{o} \left(\lambda^{A} + \lambda^{o} + 2 \mu\right)}{\left(\lambda^{A} + \mu\right) \left(\lambda^{A} + \lambda^{o} + \mu\right)}\\0 & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + 3 \lambda^{o} \mu + \mu^{2}\right)}{\mu \left(\lambda^{A} + \mu\right) \left(\lambda^{A} + \lambda^{o} + \mu\right)} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 3 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + 4 \lambda^{o} \mu + 3 \mu^{2}\right)}{\mu \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + 3 \lambda^{o} \mu + \mu^{2}\right)}\end{matrix}\right]\right)$ ## $C=1, T=1, N=3, M=1$ ```python abg.markov.visualise_ambulance_markov_chain(num_of_servers=1, threshold=1, system_capacity=3, buffer_capacity=1) ``` ```python sym_state_recursive_ratios_1131 ``` $\displaystyle \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\lambda^{o} \left(\lambda^{A} + \mu\right)}{\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}} & \frac{\lambda^{o}}{\lambda^{A} + \mu}\\0 & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \lambda^{o} \mu + \mu^{2}\right)}{\mu \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 3 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + 2 \lambda^{o} \mu + 2 \mu^{2}\right)}{\mu^{2} \left(\lambda^{A} + \mu\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 3 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + 2 \lambda^{o} \mu + 3 \mu^{2}\right)}{\mu^{3}}\end{matrix}\right]$ ```python sym_state_recursive_ratios_right_1131 ``` $\displaystyle \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\lambda^{o} \left(\lambda^{A} + \mu\right)}{\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}} & \frac{\lambda^{o}}{\lambda^{A} + \mu}\\0 & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \lambda^{o} \mu + \mu^{2}\right)}{\mu \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 3 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + 2 \lambda^{o} \mu + 2 \mu^{2}\right)}{\mu \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \lambda^{o} \mu + \mu^{2}\right)} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 3 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + 2 \lambda^{o} \mu + 3 \mu^{2}\right)}{\mu \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 3 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + 2 \lambda^{o} \mu + 2 \mu^{2}\right)}\end{matrix}\right]$ ## $C=1, T=1, N=3, M=2$ ```python abg.markov.visualise_ambulance_markov_chain(num_of_servers=1, threshold=1, system_capacity=3, buffer_capacity=2) ``` ```python sym_state_recursive_ratios_1132 ``` $\displaystyle \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\lambda^{o} \left(\lambda^{A} + \mu\right)}{\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + 
\mu^{2}} & \frac{\lambda^{o}}{\lambda^{A} + \mu}\\0 & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \lambda^{o} \mu + \mu^{2}\right)}{\mu \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 4 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 3 \lambda^{A} \lambda^{o} \mu + 5 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{2} \mu + 2 \lambda^{o} \mu^{2} + 2 \mu^{3}\right)}{\mu \left(\lambda^{A} + \mu\right) \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 4 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + 2 \lambda^{o} \mu + 3 \mu^{2}\right)}{\mu \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)}\\0 & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{4} + 4 \left(\lambda^{A}\right)^{3} \lambda^{o} + 4 \left(\lambda^{A}\right)^{3} \mu + 6 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} + 11 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu + 6 \left(\lambda^{A}\right)^{2} \mu^{2} + 4 \lambda^{A} \left(\lambda^{o}\right)^{3} + 10 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu + 10 \lambda^{A} \lambda^{o} \mu^{2} + 4 \lambda^{A} \mu^{3} + \left(\lambda^{o}\right)^{4} + 3 \left(\lambda^{o}\right)^{3} \mu + 6 \left(\lambda^{o}\right)^{2} \mu^{2} + 3 \lambda^{o} \mu^{3} + \mu^{4}\right)}{\mu \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right) \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \lambda^{o} \mu + \mu^{2}\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{4} + 4 \left(\lambda^{A}\right)^{3} \lambda^{o} + 5 \left(\lambda^{A}\right)^{3} \mu + 6 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} + 14 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu + 10 \left(\lambda^{A}\right)^{2} \mu^{2} + 4 \lambda^{A} \left(\lambda^{o}\right)^{3} + 13 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu + 17 \lambda^{A} \lambda^{o} \mu^{2} + 9 \lambda^{A} \mu^{3} + \left(\lambda^{o}\right)^{4} + 4 \left(\lambda^{o}\right)^{3} \mu + 9 \left(\lambda^{o}\right)^{2} \mu^{2} + 8 \lambda^{o} \mu^{3} + 3 \mu^{4}\right)}{\mu^{2} \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 4 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 3 \lambda^{A} \lambda^{o} \mu + 5 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{2} \mu + 2 \lambda^{o} \mu^{2} + 2 \mu^{3}\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{4} + 4 \left(\lambda^{A}\right)^{3} \lambda^{o} + 5 \left(\lambda^{A}\right)^{3} \mu + 6 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} + 14 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu + 11 \left(\lambda^{A}\right)^{2} \mu^{2} + 4 \lambda^{A} \left(\lambda^{o}\right)^{3} + 13 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu + 19 \lambda^{A} \lambda^{o} \mu^{2} + 13 \lambda^{A} \mu^{3} + \left(\lambda^{o}\right)^{4} + 4 \left(\lambda^{o}\right)^{3} \mu + 10 \left(\lambda^{o}\right)^{2} \mu^{2} + 10 \lambda^{o} \mu^{3} + 6 \mu^{4}\right)}{\mu^{3} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 4 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + 2 \lambda^{o} \mu + 3 
\mu^{2}\right)}\end{matrix}\right]$ ```python sym_state_recursive_ratios_right_1132 ``` $\displaystyle \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\lambda^{o} \left(\lambda^{A} + \mu\right)}{\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}} & \frac{\lambda^{o}}{\lambda^{A} + \mu}\\0 & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \lambda^{o} \mu + \mu^{2}\right)}{\mu \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 4 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 3 \lambda^{A} \lambda^{o} \mu + 5 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{2} \mu + 2 \lambda^{o} \mu^{2} + 2 \mu^{3}\right)}{\left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right) \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \lambda^{o} \mu + \mu^{2}\right)} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 4 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + 2 \lambda^{o} \mu + 3 \mu^{2}\right)}{\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 4 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 3 \lambda^{A} \lambda^{o} \mu + 5 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{2} \mu + 2 \lambda^{o} \mu^{2} + 2 \mu^{3}}\\0 & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{4} + 4 \left(\lambda^{A}\right)^{3} \lambda^{o} + 4 \left(\lambda^{A}\right)^{3} \mu + 6 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} + 11 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu + 6 \left(\lambda^{A}\right)^{2} \mu^{2} + 4 \lambda^{A} \left(\lambda^{o}\right)^{3} + 10 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu + 10 \lambda^{A} \lambda^{o} \mu^{2} + 4 \lambda^{A} \mu^{3} + \left(\lambda^{o}\right)^{4} + 3 \left(\lambda^{o}\right)^{3} \mu + 6 \left(\lambda^{o}\right)^{2} \mu^{2} + 3 \lambda^{o} \mu^{3} + \mu^{4}\right)}{\mu \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right) \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \lambda^{o} \mu + \mu^{2}\right)} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{4} + 4 \left(\lambda^{A}\right)^{3} \lambda^{o} + 5 \left(\lambda^{A}\right)^{3} \mu + 6 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} + 14 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu + 10 \left(\lambda^{A}\right)^{2} \mu^{2} + 4 \lambda^{A} \left(\lambda^{o}\right)^{3} + 13 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu + 17 \lambda^{A} \lambda^{o} \mu^{2} + 9 \lambda^{A} \mu^{3} + \left(\lambda^{o}\right)^{4} + 4 \left(\lambda^{o}\right)^{3} \mu + 9 \left(\lambda^{o}\right)^{2} \mu^{2} + 8 \lambda^{o} \mu^{3} + 3 \mu^{4}\right)}{\mu \left(\left(\lambda^{A}\right)^{4} + 4 \left(\lambda^{A}\right)^{3} \lambda^{o} + 4 \left(\lambda^{A}\right)^{3} \mu + 6 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} + 11 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu + 6 \left(\lambda^{A}\right)^{2} \mu^{2} + 4 \lambda^{A} \left(\lambda^{o}\right)^{3} + 10 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu + 10 \lambda^{A} \lambda^{o} \mu^{2} + 4 \lambda^{A} \mu^{3} + \left(\lambda^{o}\right)^{4} + 3 
\left(\lambda^{o}\right)^{3} \mu + 6 \left(\lambda^{o}\right)^{2} \mu^{2} + 3 \lambda^{o} \mu^{3} + \mu^{4}\right)} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{4} + 4 \left(\lambda^{A}\right)^{3} \lambda^{o} + 5 \left(\lambda^{A}\right)^{3} \mu + 6 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} + 14 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu + 11 \left(\lambda^{A}\right)^{2} \mu^{2} + 4 \lambda^{A} \left(\lambda^{o}\right)^{3} + 13 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu + 19 \lambda^{A} \lambda^{o} \mu^{2} + 13 \lambda^{A} \mu^{3} + \left(\lambda^{o}\right)^{4} + 4 \left(\lambda^{o}\right)^{3} \mu + 10 \left(\lambda^{o}\right)^{2} \mu^{2} + 10 \lambda^{o} \mu^{3} + 6 \mu^{4}\right)}{\mu \left(\left(\lambda^{A}\right)^{4} + 4 \left(\lambda^{A}\right)^{3} \lambda^{o} + 5 \left(\lambda^{A}\right)^{3} \mu + 6 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} + 14 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu + 10 \left(\lambda^{A}\right)^{2} \mu^{2} + 4 \lambda^{A} \left(\lambda^{o}\right)^{3} + 13 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu + 17 \lambda^{A} \lambda^{o} \mu^{2} + 9 \lambda^{A} \mu^{3} + \left(\lambda^{o}\right)^{4} + 4 \left(\lambda^{o}\right)^{3} \mu + 9 \left(\lambda^{o}\right)^{2} \mu^{2} + 8 \lambda^{o} \mu^{3} + 3 \mu^{4}\right)}\end{matrix}\right]$ ## $C=1, T=1, N=4, M=1$ ```python abg.markov.visualise_ambulance_markov_chain(num_of_servers=1, threshold=1, system_capacity=4, buffer_capacity=1) ``` ```python sym_state_recursive_ratios_1141 ``` $\displaystyle \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)}{\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}} & \frac{\lambda^{o} \left(\lambda^{A} + \mu\right)}{\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}} & \frac{\lambda^{o}}{\lambda^{A} + \mu}\\0 & \frac{\lambda^{A} \left(\lambda^{A} + \lambda^{o} + \mu\right) \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \mu^{2}\right)}{\mu \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{3} + 3 \left(\lambda^{A}\right)^{2} \lambda^{o} + 4 \left(\lambda^{A}\right)^{2} \mu + 3 \lambda^{A} \left(\lambda^{o}\right)^{2} + 6 \lambda^{A} \lambda^{o} \mu + 5 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{3} + 2 \left(\lambda^{o}\right)^{2} \mu + 2 \lambda^{o} \mu^{2} + 2 \mu^{3}\right)}{\mu^{2} \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{3} + 3 \left(\lambda^{A}\right)^{2} \lambda^{o} + 4 \left(\lambda^{A}\right)^{2} \mu + 3 \lambda^{A} \left(\lambda^{o}\right)^{2} + 6 \lambda^{A} \lambda^{o} \mu + 6 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{3} + 2 \left(\lambda^{o}\right)^{2} \mu + 3 \lambda^{o} \mu^{2} + 3 \mu^{3}\right)}{\mu^{3} \left(\lambda^{A} + \mu\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{3} + 3 \left(\lambda^{A}\right)^{2} \lambda^{o} + 4 \left(\lambda^{A}\right)^{2} \mu + 3 \lambda^{A} 
\left(\lambda^{o}\right)^{2} + 6 \lambda^{A} \lambda^{o} \mu + 6 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{3} + 2 \left(\lambda^{o}\right)^{2} \mu + 3 \lambda^{o} \mu^{2} + 4 \mu^{3}\right)}{\mu^{4}}\end{matrix}\right]$ ```python sym_state_recursive_ratios_right_1141 ``` $\displaystyle \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)}{\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}} & \frac{\lambda^{o} \left(\lambda^{A} + \mu\right)}{\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}} & \frac{\lambda^{o}}{\lambda^{A} + \mu}\\0 & \frac{\lambda^{A} \left(\lambda^{A} + \lambda^{o} + \mu\right) \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \mu^{2}\right)}{\mu \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{3} + 3 \left(\lambda^{A}\right)^{2} \lambda^{o} + 4 \left(\lambda^{A}\right)^{2} \mu + 3 \lambda^{A} \left(\lambda^{o}\right)^{2} + 6 \lambda^{A} \lambda^{o} \mu + 5 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{3} + 2 \left(\lambda^{o}\right)^{2} \mu + 2 \lambda^{o} \mu^{2} + 2 \mu^{3}\right)}{\mu \left(\lambda^{A} + \lambda^{o} + \mu\right) \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \mu^{2}\right)} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{3} + 3 \left(\lambda^{A}\right)^{2} \lambda^{o} + 4 \left(\lambda^{A}\right)^{2} \mu + 3 \lambda^{A} \left(\lambda^{o}\right)^{2} + 6 \lambda^{A} \lambda^{o} \mu + 6 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{3} + 2 \left(\lambda^{o}\right)^{2} \mu + 3 \lambda^{o} \mu^{2} + 3 \mu^{3}\right)}{\mu \left(\left(\lambda^{A}\right)^{3} + 3 \left(\lambda^{A}\right)^{2} \lambda^{o} + 4 \left(\lambda^{A}\right)^{2} \mu + 3 \lambda^{A} \left(\lambda^{o}\right)^{2} + 6 \lambda^{A} \lambda^{o} \mu + 5 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{3} + 2 \left(\lambda^{o}\right)^{2} \mu + 2 \lambda^{o} \mu^{2} + 2 \mu^{3}\right)} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{3} + 3 \left(\lambda^{A}\right)^{2} \lambda^{o} + 4 \left(\lambda^{A}\right)^{2} \mu + 3 \lambda^{A} \left(\lambda^{o}\right)^{2} + 6 \lambda^{A} \lambda^{o} \mu + 6 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{3} + 2 \left(\lambda^{o}\right)^{2} \mu + 3 \lambda^{o} \mu^{2} + 4 \mu^{3}\right)}{\mu \left(\left(\lambda^{A}\right)^{3} + 3 \left(\lambda^{A}\right)^{2} \lambda^{o} + 4 \left(\lambda^{A}\right)^{2} \mu + 3 \lambda^{A} \left(\lambda^{o}\right)^{2} + 6 \lambda^{A} \lambda^{o} \mu + 6 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{3} + 2 \left(\lambda^{o}\right)^{2} \mu + 3 \lambda^{o} \mu^{2} + 3 \mu^{3}\right)}\end{matrix}\right]$ ## $C=1, T=1, N=4, M=2$ ```python abg.markov.visualise_ambulance_markov_chain(num_of_servers=1, threshold=1, system_capacity=4, buffer_capacity=2) ``` ```python sym_state_recursive_ratios_1142 ``` $\displaystyle \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 
\lambda^{A} \mu + \mu^{2}\right)}{\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}} & \frac{\lambda^{o} \left(\lambda^{A} + \mu\right)}{\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}} & \frac{\lambda^{o}}{\lambda^{A} + \mu}\\0 & \frac{\lambda^{A} \left(\lambda^{A} + \lambda^{o} + \mu\right) \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \mu^{2}\right)}{\mu \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{5} + 4 \left(\lambda^{A}\right)^{4} \lambda^{o} + 6 \left(\lambda^{A}\right)^{4} \mu + 6 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} + 15 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu + 14 \left(\lambda^{A}\right)^{3} \mu^{2} + 4 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} + 12 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu + 20 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{2} + 16 \left(\lambda^{A}\right)^{2} \mu^{3} + \lambda^{A} \left(\lambda^{o}\right)^{4} + 3 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu + 6 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{2} + 11 \lambda^{A} \lambda^{o} \mu^{3} + 9 \lambda^{A} \mu^{4} + \left(\lambda^{o}\right)^{3} \mu^{2} + 2 \left(\lambda^{o}\right)^{2} \mu^{3} + 2 \lambda^{o} \mu^{4} + 2 \mu^{5}\right)}{\mu \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right) \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{4} + 3 \left(\lambda^{A}\right)^{3} \lambda^{o} + 6 \left(\lambda^{A}\right)^{3} \mu + 3 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} + 9 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu + 12 \left(\lambda^{A}\right)^{2} \mu^{2} + \lambda^{A} \left(\lambda^{o}\right)^{3} + 4 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu + 9 \lambda^{A} \lambda^{o} \mu^{2} + 10 \lambda^{A} \mu^{3} + \left(\lambda^{o}\right)^{3} \mu + 2 \left(\lambda^{o}\right)^{2} \mu^{2} + 3 \lambda^{o} \mu^{3} + 3 \mu^{4}\right)}{\mu \left(\lambda^{A} + \mu\right) \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{3} + 3 \left(\lambda^{A}\right)^{2} \lambda^{o} + 6 \left(\lambda^{A}\right)^{2} \mu + 3 \lambda^{A} \left(\lambda^{o}\right)^{2} + 8 \lambda^{A} \lambda^{o} \mu + 9 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{3} + 2 \left(\lambda^{o}\right)^{2} \mu + 3 \lambda^{o} \mu^{2} + 4 \mu^{3}\right)}{\mu \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)}\\0 & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{6} + 6 
\left(\lambda^{A}\right)^{5} \lambda^{o} + 6 \left(\lambda^{A}\right)^{5} \mu + 15 \left(\lambda^{A}\right)^{4} \left(\lambda^{o}\right)^{2} + 27 \left(\lambda^{A}\right)^{4} \lambda^{o} \mu + 15 \left(\lambda^{A}\right)^{4} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{3} + 48 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} \mu + 48 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{4} + 42 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} \mu + 57 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu^{2} + 42 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \mu^{4} + 6 \lambda^{A} \left(\lambda^{o}\right)^{5} + 18 \lambda^{A} \left(\lambda^{o}\right)^{4} \mu + 30 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu^{2} + 30 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{3} + 18 \lambda^{A} \lambda^{o} \mu^{4} + 6 \lambda^{A} \mu^{5} + \left(\lambda^{o}\right)^{6} + 3 \left(\lambda^{o}\right)^{5} \mu + 6 \left(\lambda^{o}\right)^{4} \mu^{2} + 10 \left(\lambda^{o}\right)^{3} \mu^{3} + 6 \left(\lambda^{o}\right)^{2} \mu^{4} + 3 \lambda^{o} \mu^{5} + \mu^{6}\right)}{\mu \left(\lambda^{A} + \lambda^{o} + \mu\right) \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \mu^{2}\right) \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{6} + 6 \left(\lambda^{A}\right)^{5} \lambda^{o} + 7 \left(\lambda^{A}\right)^{5} \mu + 15 \left(\lambda^{A}\right)^{4} \left(\lambda^{o}\right)^{2} + 32 \left(\lambda^{A}\right)^{4} \lambda^{o} \mu + 21 \left(\lambda^{A}\right)^{4} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{3} + 58 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} \mu + 69 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu^{2} + 34 \left(\lambda^{A}\right)^{3} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{4} + 52 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} \mu + 84 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu^{2} + 74 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{3} + 31 \left(\lambda^{A}\right)^{2} \mu^{4} + 6 \lambda^{A} \left(\lambda^{o}\right)^{5} + 23 \lambda^{A} \left(\lambda^{o}\right)^{4} \mu + 45 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu^{2} + 54 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{3} + 39 \lambda^{A} \lambda^{o} \mu^{4} + 15 \lambda^{A} \mu^{5} + \left(\lambda^{o}\right)^{6} + 4 \left(\lambda^{o}\right)^{5} \mu + 9 \left(\lambda^{o}\right)^{4} \mu^{2} + 16 \left(\lambda^{o}\right)^{3} \mu^{3} + 15 \left(\lambda^{o}\right)^{2} \mu^{4} + 8 \lambda^{o} \mu^{5} + 3 \mu^{6}\right)}{\mu^{2} \left(\left(\lambda^{A}\right)^{5} + 4 \left(\lambda^{A}\right)^{4} \lambda^{o} + 6 \left(\lambda^{A}\right)^{4} \mu + 6 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} + 15 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu + 14 \left(\lambda^{A}\right)^{3} \mu^{2} + 4 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} + 12 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu + 20 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{2} + 16 \left(\lambda^{A}\right)^{2} \mu^{3} + \lambda^{A} \left(\lambda^{o}\right)^{4} + 3 \lambda^{A} 
\left(\lambda^{o}\right)^{3} \mu + 6 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{2} + 11 \lambda^{A} \lambda^{o} \mu^{3} + 9 \lambda^{A} \mu^{4} + \left(\lambda^{o}\right)^{3} \mu^{2} + 2 \left(\lambda^{o}\right)^{2} \mu^{3} + 2 \lambda^{o} \mu^{4} + 2 \mu^{5}\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{6} + 6 \left(\lambda^{A}\right)^{5} \lambda^{o} + 7 \left(\lambda^{A}\right)^{5} \mu + 15 \left(\lambda^{A}\right)^{4} \left(\lambda^{o}\right)^{2} + 32 \left(\lambda^{A}\right)^{4} \lambda^{o} \mu + 22 \left(\lambda^{A}\right)^{4} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{3} + 58 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} \mu + 73 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu^{2} + 40 \left(\lambda^{A}\right)^{3} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{4} + 52 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} \mu + 90 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu^{2} + 89 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{3} + 43 \left(\lambda^{A}\right)^{2} \mu^{4} + 6 \lambda^{A} \left(\lambda^{o}\right)^{5} + 23 \lambda^{A} \left(\lambda^{o}\right)^{4} \mu + 49 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu^{2} + 66 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{3} + 57 \lambda^{A} \lambda^{o} \mu^{4} + 25 \lambda^{A} \mu^{5} + \left(\lambda^{o}\right)^{6} + 4 \left(\lambda^{o}\right)^{5} \mu + 10 \left(\lambda^{o}\right)^{4} \mu^{2} + 19 \left(\lambda^{o}\right)^{3} \mu^{3} + 20 \left(\lambda^{o}\right)^{2} \mu^{4} + 15 \lambda^{o} \mu^{5} + 6 \mu^{6}\right)}{\mu^{3} \left(\left(\lambda^{A}\right)^{4} + 3 \left(\lambda^{A}\right)^{3} \lambda^{o} + 6 \left(\lambda^{A}\right)^{3} \mu + 3 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} + 9 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu + 12 \left(\lambda^{A}\right)^{2} \mu^{2} + \lambda^{A} \left(\lambda^{o}\right)^{3} + 4 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu + 9 \lambda^{A} \lambda^{o} \mu^{2} + 10 \lambda^{A} \mu^{3} + \left(\lambda^{o}\right)^{3} \mu + 2 \left(\lambda^{o}\right)^{2} \mu^{2} + 3 \lambda^{o} \mu^{3} + 3 \mu^{4}\right)} & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{6} + 6 \left(\lambda^{A}\right)^{5} \lambda^{o} + 7 \left(\lambda^{A}\right)^{5} \mu + 15 \left(\lambda^{A}\right)^{4} \left(\lambda^{o}\right)^{2} + 32 \left(\lambda^{A}\right)^{4} \lambda^{o} \mu + 22 \left(\lambda^{A}\right)^{4} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{3} + 58 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} \mu + 73 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu^{2} + 41 \left(\lambda^{A}\right)^{3} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{4} + 52 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} \mu + 90 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu^{2} + 92 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{3} + 49 \left(\lambda^{A}\right)^{2} \mu^{4} + 6 \lambda^{A} \left(\lambda^{o}\right)^{5} + 23 \lambda^{A} \left(\lambda^{o}\right)^{4} \mu + 49 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu^{2} + 69 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{3} + 65 \lambda^{A} \lambda^{o} \mu^{4} + 34 \lambda^{A} \mu^{5} + \left(\lambda^{o}\right)^{6} + 4 \left(\lambda^{o}\right)^{5} \mu + 10 \left(\lambda^{o}\right)^{4} \mu^{2} + 20 \left(\lambda^{o}\right)^{3} \mu^{3} + 22 \left(\lambda^{o}\right)^{2} \mu^{4} + 18 \lambda^{o} \mu^{5} + 10 \mu^{6}\right)}{\mu^{4} \left(\left(\lambda^{A}\right)^{3} + 3 \left(\lambda^{A}\right)^{2} \lambda^{o} + 6 
\left(\lambda^{A}\right)^{2} \mu + 3 \lambda^{A} \left(\lambda^{o}\right)^{2} + 8 \lambda^{A} \lambda^{o} \mu + 9 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{3} + 2 \left(\lambda^{o}\right)^{2} \mu + 3 \lambda^{o} \mu^{2} + 4 \mu^{3}\right)}\end{matrix}\right]$ ```python sym_state_recursive_ratios_right_1142 ``` $\displaystyle \left[\begin{matrix}1 & \frac{\Lambda}{\mu} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)}{\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}} & \frac{\lambda^{o} \left(\lambda^{A} + \mu\right)}{\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}} & \frac{\lambda^{o}}{\lambda^{A} + \mu}\\0 & \frac{\lambda^{A} \left(\lambda^{A} + \lambda^{o} + \mu\right) \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \mu^{2}\right)}{\mu \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{5} + 4 \left(\lambda^{A}\right)^{4} \lambda^{o} + 6 \left(\lambda^{A}\right)^{4} \mu + 6 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} + 15 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu + 14 \left(\lambda^{A}\right)^{3} \mu^{2} + 4 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} + 12 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu + 20 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{2} + 16 \left(\lambda^{A}\right)^{2} \mu^{3} + \lambda^{A} \left(\lambda^{o}\right)^{4} + 3 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu + 6 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{2} + 11 \lambda^{A} \lambda^{o} \mu^{3} + 9 \lambda^{A} \mu^{4} + \left(\lambda^{o}\right)^{3} \mu^{2} + 2 \left(\lambda^{o}\right)^{2} \mu^{3} + 2 \lambda^{o} \mu^{4} + 2 \mu^{5}\right)}{\left(\lambda^{A} + \lambda^{o} + \mu\right) \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \mu^{2}\right) \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{4} + 3 \left(\lambda^{A}\right)^{3} \lambda^{o} + 6 \left(\lambda^{A}\right)^{3} \mu + 3 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} + 9 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu + 12 \left(\lambda^{A}\right)^{2} \mu^{2} + \lambda^{A} \left(\lambda^{o}\right)^{3} + 4 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu + 9 \lambda^{A} \lambda^{o} \mu^{2} + 10 \lambda^{A} \mu^{3} + \left(\lambda^{o}\right)^{3} \mu + 2 \left(\lambda^{o}\right)^{2} \mu^{2} + 3 \lambda^{o} \mu^{3} + 3 \mu^{4}\right)}{\left(\lambda^{A}\right)^{5} + 4 \left(\lambda^{A}\right)^{4} \lambda^{o} + 6 \left(\lambda^{A}\right)^{4} \mu + 6 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} + 15 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu + 14 \left(\lambda^{A}\right)^{3} \mu^{2} + 4 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} + 12 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu + 20 
\left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{2} + 16 \left(\lambda^{A}\right)^{2} \mu^{3} + \lambda^{A} \left(\lambda^{o}\right)^{4} + 3 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu + 6 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{2} + 11 \lambda^{A} \lambda^{o} \mu^{3} + 9 \lambda^{A} \mu^{4} + \left(\lambda^{o}\right)^{3} \mu^{2} + 2 \left(\lambda^{o}\right)^{2} \mu^{3} + 2 \lambda^{o} \mu^{4} + 2 \mu^{5}} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{3} + 3 \left(\lambda^{A}\right)^{2} \lambda^{o} + 6 \left(\lambda^{A}\right)^{2} \mu + 3 \lambda^{A} \left(\lambda^{o}\right)^{2} + 8 \lambda^{A} \lambda^{o} \mu + 9 \lambda^{A} \mu^{2} + \left(\lambda^{o}\right)^{3} + 2 \left(\lambda^{o}\right)^{2} \mu + 3 \lambda^{o} \mu^{2} + 4 \mu^{3}\right)}{\left(\lambda^{A}\right)^{4} + 3 \left(\lambda^{A}\right)^{3} \lambda^{o} + 6 \left(\lambda^{A}\right)^{3} \mu + 3 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} + 9 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu + 12 \left(\lambda^{A}\right)^{2} \mu^{2} + \lambda^{A} \left(\lambda^{o}\right)^{3} + 4 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu + 9 \lambda^{A} \lambda^{o} \mu^{2} + 10 \lambda^{A} \mu^{3} + \left(\lambda^{o}\right)^{3} \mu + 2 \left(\lambda^{o}\right)^{2} \mu^{2} + 3 \lambda^{o} \mu^{3} + 3 \mu^{4}}\\0 & \frac{\lambda^{A} \left(\left(\lambda^{A}\right)^{6} + 6 \left(\lambda^{A}\right)^{5} \lambda^{o} + 6 \left(\lambda^{A}\right)^{5} \mu + 15 \left(\lambda^{A}\right)^{4} \left(\lambda^{o}\right)^{2} + 27 \left(\lambda^{A}\right)^{4} \lambda^{o} \mu + 15 \left(\lambda^{A}\right)^{4} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{3} + 48 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} \mu + 48 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{4} + 42 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} \mu + 57 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu^{2} + 42 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \mu^{4} + 6 \lambda^{A} \left(\lambda^{o}\right)^{5} + 18 \lambda^{A} \left(\lambda^{o}\right)^{4} \mu + 30 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu^{2} + 30 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{3} + 18 \lambda^{A} \lambda^{o} \mu^{4} + 6 \lambda^{A} \mu^{5} + \left(\lambda^{o}\right)^{6} + 3 \left(\lambda^{o}\right)^{5} \mu + 6 \left(\lambda^{o}\right)^{4} \mu^{2} + 10 \left(\lambda^{o}\right)^{3} \mu^{3} + 6 \left(\lambda^{o}\right)^{2} \mu^{4} + 3 \lambda^{o} \mu^{5} + \mu^{6}\right)}{\mu \left(\lambda^{A} + \lambda^{o} + \mu\right) \left(\left(\lambda^{A}\right)^{2} + 2 \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \left(\lambda^{o}\right)^{2} + \mu^{2}\right) \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{6} + 6 \left(\lambda^{A}\right)^{5} \lambda^{o} + 7 \left(\lambda^{A}\right)^{5} \mu + 15 \left(\lambda^{A}\right)^{4} \left(\lambda^{o}\right)^{2} + 32 \left(\lambda^{A}\right)^{4} \lambda^{o} \mu + 21 \left(\lambda^{A}\right)^{4} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{3} + 58 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} \mu + 69 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu^{2} + 34 \left(\lambda^{A}\right)^{3} \mu^{3} + 15 
\left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{4} + 52 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} \mu + 84 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu^{2} + 74 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{3} + 31 \left(\lambda^{A}\right)^{2} \mu^{4} + 6 \lambda^{A} \left(\lambda^{o}\right)^{5} + 23 \lambda^{A} \left(\lambda^{o}\right)^{4} \mu + 45 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu^{2} + 54 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{3} + 39 \lambda^{A} \lambda^{o} \mu^{4} + 15 \lambda^{A} \mu^{5} + \left(\lambda^{o}\right)^{6} + 4 \left(\lambda^{o}\right)^{5} \mu + 9 \left(\lambda^{o}\right)^{4} \mu^{2} + 16 \left(\lambda^{o}\right)^{3} \mu^{3} + 15 \left(\lambda^{o}\right)^{2} \mu^{4} + 8 \lambda^{o} \mu^{5} + 3 \mu^{6}\right)}{\mu \left(\left(\lambda^{A}\right)^{6} + 6 \left(\lambda^{A}\right)^{5} \lambda^{o} + 6 \left(\lambda^{A}\right)^{5} \mu + 15 \left(\lambda^{A}\right)^{4} \left(\lambda^{o}\right)^{2} + 27 \left(\lambda^{A}\right)^{4} \lambda^{o} \mu + 15 \left(\lambda^{A}\right)^{4} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{3} + 48 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} \mu + 48 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{4} + 42 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} \mu + 57 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu^{2} + 42 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \mu^{4} + 6 \lambda^{A} \left(\lambda^{o}\right)^{5} + 18 \lambda^{A} \left(\lambda^{o}\right)^{4} \mu + 30 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu^{2} + 30 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{3} + 18 \lambda^{A} \lambda^{o} \mu^{4} + 6 \lambda^{A} \mu^{5} + \left(\lambda^{o}\right)^{6} + 3 \left(\lambda^{o}\right)^{5} \mu + 6 \left(\lambda^{o}\right)^{4} \mu^{2} + 10 \left(\lambda^{o}\right)^{3} \mu^{3} + 6 \left(\lambda^{o}\right)^{2} \mu^{4} + 3 \lambda^{o} \mu^{5} + \mu^{6}\right)} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{6} + 6 \left(\lambda^{A}\right)^{5} \lambda^{o} + 7 \left(\lambda^{A}\right)^{5} \mu + 15 \left(\lambda^{A}\right)^{4} \left(\lambda^{o}\right)^{2} + 32 \left(\lambda^{A}\right)^{4} \lambda^{o} \mu + 22 \left(\lambda^{A}\right)^{4} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{3} + 58 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} \mu + 73 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu^{2} + 40 \left(\lambda^{A}\right)^{3} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{4} + 52 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} \mu + 90 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu^{2} + 89 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{3} + 43 \left(\lambda^{A}\right)^{2} \mu^{4} + 6 \lambda^{A} \left(\lambda^{o}\right)^{5} + 23 \lambda^{A} \left(\lambda^{o}\right)^{4} \mu + 49 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu^{2} + 66 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{3} + 57 \lambda^{A} \lambda^{o} \mu^{4} + 25 \lambda^{A} \mu^{5} + \left(\lambda^{o}\right)^{6} + 4 \left(\lambda^{o}\right)^{5} \mu + 10 \left(\lambda^{o}\right)^{4} \mu^{2} + 19 \left(\lambda^{o}\right)^{3} \mu^{3} + 20 \left(\lambda^{o}\right)^{2} \mu^{4} + 15 \lambda^{o} \mu^{5} + 6 \mu^{6}\right)}{\mu \left(\left(\lambda^{A}\right)^{6} + 6 \left(\lambda^{A}\right)^{5} \lambda^{o} + 7 \left(\lambda^{A}\right)^{5} \mu + 15 
\left(\lambda^{A}\right)^{4} \left(\lambda^{o}\right)^{2} + 32 \left(\lambda^{A}\right)^{4} \lambda^{o} \mu + 21 \left(\lambda^{A}\right)^{4} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{3} + 58 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} \mu + 69 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu^{2} + 34 \left(\lambda^{A}\right)^{3} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{4} + 52 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} \mu + 84 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu^{2} + 74 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{3} + 31 \left(\lambda^{A}\right)^{2} \mu^{4} + 6 \lambda^{A} \left(\lambda^{o}\right)^{5} + 23 \lambda^{A} \left(\lambda^{o}\right)^{4} \mu + 45 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu^{2} + 54 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{3} + 39 \lambda^{A} \lambda^{o} \mu^{4} + 15 \lambda^{A} \mu^{5} + \left(\lambda^{o}\right)^{6} + 4 \left(\lambda^{o}\right)^{5} \mu + 9 \left(\lambda^{o}\right)^{4} \mu^{2} + 16 \left(\lambda^{o}\right)^{3} \mu^{3} + 15 \left(\lambda^{o}\right)^{2} \mu^{4} + 8 \lambda^{o} \mu^{5} + 3 \mu^{6}\right)} & \frac{\lambda^{o} \left(\left(\lambda^{A}\right)^{6} + 6 \left(\lambda^{A}\right)^{5} \lambda^{o} + 7 \left(\lambda^{A}\right)^{5} \mu + 15 \left(\lambda^{A}\right)^{4} \left(\lambda^{o}\right)^{2} + 32 \left(\lambda^{A}\right)^{4} \lambda^{o} \mu + 22 \left(\lambda^{A}\right)^{4} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{3} + 58 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} \mu + 73 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu^{2} + 41 \left(\lambda^{A}\right)^{3} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{4} + 52 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} \mu + 90 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu^{2} + 92 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{3} + 49 \left(\lambda^{A}\right)^{2} \mu^{4} + 6 \lambda^{A} \left(\lambda^{o}\right)^{5} + 23 \lambda^{A} \left(\lambda^{o}\right)^{4} \mu + 49 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu^{2} + 69 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{3} + 65 \lambda^{A} \lambda^{o} \mu^{4} + 34 \lambda^{A} \mu^{5} + \left(\lambda^{o}\right)^{6} + 4 \left(\lambda^{o}\right)^{5} \mu + 10 \left(\lambda^{o}\right)^{4} \mu^{2} + 20 \left(\lambda^{o}\right)^{3} \mu^{3} + 22 \left(\lambda^{o}\right)^{2} \mu^{4} + 18 \lambda^{o} \mu^{5} + 10 \mu^{6}\right)}{\mu \left(\left(\lambda^{A}\right)^{6} + 6 \left(\lambda^{A}\right)^{5} \lambda^{o} + 7 \left(\lambda^{A}\right)^{5} \mu + 15 \left(\lambda^{A}\right)^{4} \left(\lambda^{o}\right)^{2} + 32 \left(\lambda^{A}\right)^{4} \lambda^{o} \mu + 22 \left(\lambda^{A}\right)^{4} \mu^{2} + 20 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{3} + 58 \left(\lambda^{A}\right)^{3} \left(\lambda^{o}\right)^{2} \mu + 73 \left(\lambda^{A}\right)^{3} \lambda^{o} \mu^{2} + 40 \left(\lambda^{A}\right)^{3} \mu^{3} + 15 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{4} + 52 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{3} \mu + 90 \left(\lambda^{A}\right)^{2} \left(\lambda^{o}\right)^{2} \mu^{2} + 89 \left(\lambda^{A}\right)^{2} \lambda^{o} \mu^{3} + 43 \left(\lambda^{A}\right)^{2} \mu^{4} + 6 \lambda^{A} \left(\lambda^{o}\right)^{5} + 23 \lambda^{A} \left(\lambda^{o}\right)^{4} \mu + 49 \lambda^{A} \left(\lambda^{o}\right)^{3} \mu^{2} + 66 \lambda^{A} \left(\lambda^{o}\right)^{2} \mu^{3} + 57 \lambda^{A} \lambda^{o} \mu^{4} + 25 \lambda^{A} 
\mu^{5} + \left(\lambda^{o}\right)^{6} + 4 \left(\lambda^{o}\right)^{5} \mu + 10 \left(\lambda^{o}\right)^{4} \mu^{2} + 19 \left(\lambda^{o}\right)^{3} \mu^{3} + 20 \left(\lambda^{o}\right)^{2} \mu^{4} + 15 \lambda^{o} \mu^{5} + 6 \mu^{6}\right)}\end{matrix}\right]$ ```python ``` ```python ``` # $P_0$ rate ## $C=1, T=2, N=2, M=2$ ```python sym.factor(sym.fraction(sym_state_probs_1222[0])[0]) ``` $\displaystyle \mu^{4}$ ## $C=1, T=1, N=2, M=1$ ```python sym.factor(sym.fraction(sym_state_probs_1121[0])[0]) ``` $\displaystyle \mu^{3} \left(\lambda^{A} + \mu\right)$ ## $C=1, T=3, N=4, M=1$ ```python sym.factor(sym.fraction(sym_state_probs_1341[0])[0]) ``` $\displaystyle \mu^{5} \left(\lambda^{A} + \mu\right)$ ## $C=1, T=1, N=2, M=2$ ```python sym.factor(sym.fraction(sym_state_probs_1122[0])[0]) ``` $\displaystyle \mu^{4} \left(\lambda^{A} + \mu\right)^{2}$ ## $C=1, T=1, N=3, M=1$ ```python sym.factor(sym.fraction(sym_state_probs_1131[0])[0]) ``` $\displaystyle \mu^{4} \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)$ ## $C=1, T=1, N=3, M=2$ ```python sym.factor(sym.fraction(sym_state_probs_1132[p00])[0]) ``` $\displaystyle \mu^{5} \left(\left(\lambda^{A}\right)^{2} + \lambda^{A} \lambda^{o} + 2 \lambda^{A} \mu + \mu^{2}\right)^{2}$ ## $C=1, T=1, N=4, M=1$ ```python sym.factor(sym.fraction(sym_state_probs_1141[p00])[0]) ``` $\displaystyle \mu^{5} \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)$ ## $C=1, T=1, N=4, M=2$ ```python sym.factor(sym.fraction(sym_state_probs_1142[p00])[0]) ``` $\displaystyle \mu^{6} \left(\left(\lambda^{A}\right)^{3} + 2 \left(\lambda^{A}\right)^{2} \lambda^{o} + 3 \left(\lambda^{A}\right)^{2} \mu + \lambda^{A} \left(\lambda^{o}\right)^{2} + 2 \lambda^{A} \lambda^{o} \mu + 3 \lambda^{A} \mu^{2} + \mu^{3}\right)^{2}$ ```python ```
50ecbf11c2bf66e6b7adb1ed5af20901a4790874
998,281
ipynb
Jupyter Notebook
nbs/src/Markov/closed_form_formula_of_pi/investigate-close-form-pi.ipynb
11michalis11/AmbulanceDecisionGame
45164ba51da0417297f715e41716cb91facc120f
[ "MIT" ]
null
null
null
nbs/src/Markov/closed_form_formula_of_pi/investigate-close-form-pi.ipynb
11michalis11/AmbulanceDecisionGame
45164ba51da0417297f715e41716cb91facc120f
[ "MIT" ]
20
2020-04-20T09:08:31.000Z
2021-09-23T11:09:25.000Z
nbs/src/Markov/closed_form_formula_of_pi/investigate-close-form-pi.ipynb
11michalis11/AmbulanceDecisionGame
45164ba51da0417297f715e41716cb91facc120f
[ "MIT" ]
null
null
null
96.043968
23,541
0.583995
true
74,637
Qwen/Qwen-72B
1. YES 2. YES
0.924142
0.857768
0.792699
__label__ast_Latn
0.054806
0.680039
# Chapter 2 Exercises

In this notebook we will go through the exercises of chapter 2 of Introduction to Stochastic Processes with R by Robert Dobrow.

```python
import numpy as np
```

## 2.1

A Markov chain has transition matrix

$$ p=\left(\begin{array}{cc} 0.1 & 0.3&0.6\\ 0 & 0.4& 0.6 \\ 0.3 & 0.2 &0.5 \end{array}\right) $$

with initial distribution $\alpha =(0.2,0.3,0.5)$. Find the following:

a) $P(X_7=3|X_6=2)$

b) $P(X_9=2|X_1=2,X_5=1,X_7=3)$

c) $P(X_0=3|X_1=1)$

d) $E[X_2]$

```python
a = np.array([0.2,0.3,0.5])
p = np.matrix([[0.1,0.3,0.6],[0,0.4,0.6],[0.3,0.2,0.5]])
p[1,2]
```

0.6

```python
(p*p)[2,1]
```

0.27

```python
a[2]*p[2,0]/(a[0]*p[0,0]+a[1]*p[1,0]+a[2]*p[2,0])
```

0.8823529411764707

```python
e = 0
pr = a*p*p
for i in range(pr.shape[1]):
    e += (i+1)*pr[0,i]
e
```

2.3630000000000004

## 2.2

A Markov chain has transition matrix

$$ p=\left(\begin{array}{cc} 0 & 1/2&1/2\\ 1 & 0& 0 \\ 1/3 & 1/3 &1/3 \end{array}\right) $$

with initial distribution $\alpha =(1/2,0,1/2)$. Find the following:

a) $P(X_2=1|X_1=3)$

b) $P(X_1=3,X_2=1)$

c) $P(X_1=3|X_2=1)$

d) $P(X_9=1|X_1=3, X_4=1, X_7=2)$

```python
a = np.array([1/2,0,1/2])
p = np.matrix([[0,1/2,1/2],[1,0,0],[1/3,1/3,1/3]])
p[2,0]
```

0.3333333333333333

```python
(a*p)[0,2]*p[2,0]
```

0.13888888888888887

```python
(a*p)[0,2]*p[2,0]/((a*p)[0,0]*p[0,0]+(a*p)[0,1]*p[1,0]+(a*p)[0,2]*p[2,0])
```

0.25

```python
(p*p)[1,0]
```

0.0

## 2.3

Consider the Wright-Fisher model with population $k=3$. If the initial population has one A allele, what is the probability that there are no A alleles at time 3?

```python
def calculate_combinations(n, r):
    from math import factorial
    return factorial(n) // factorial(r) // factorial(n-r)

mat = []
k = 3
for j in range(k+1):
    vec = []
    for i in range(k+1):
        vec.append(calculate_combinations(k, i)*(j/k)**i*(1-j/k)**(k-i))
    mat.append(vec)
p = np.matrix(mat)
(p**3)[1,0]
```

0.5166895290352083

## 2.4

For the general two-state chain with transition matrix

$$\boldsymbol{P}=\left(\begin{array}{cc} 1-p & p\\ q & 1-q \end{array}\right)$$

and initial distribution $\alpha=(\alpha_1,\alpha_2)$, find the following:

(a) the two-step transition matrix

(b) the distribution of $X_1$

### Answer

(a) For this case the result comes from doing the matrix multiplication once, i.e. finding $\boldsymbol{P}*\boldsymbol{P}$, that is:

$$\boldsymbol{P}*\boldsymbol{P}=\left(\begin{array}{cc} (1-p)^2+pq & (1-p)p+(1-q)p\\ (1-p)q+(1-q)q & (1-q)^2+pq \end{array}\right)$$

(b) In this case we need to take into account $\alpha$, which gives the distribution of $X_0$, and multiply it with the transition matrix, i.e. $\boldsymbol{\alpha}*\boldsymbol{P}$, that is:

$$\boldsymbol{\alpha}*\boldsymbol{P}=( (1-p)\alpha_1+q\alpha_2 , p\alpha_1+(1-q)\alpha_2)$$
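The algebra in (a) and (b) is easy to double-check symbolically. A minimal sketch using sympy (an extra dependency, not imported above):

```python
import sympy as sym

sp, sq, a1, a2 = sym.symbols('p q alpha_1 alpha_2')
P = sym.Matrix([[1 - sp, sp], [sq, 1 - sq]])

# (a) two-step transition matrix, expanded entry by entry
print(sym.expand(P * P))

# (b) distribution of X_1
print(sym.Matrix([[a1, a2]]) * P)

# sanity check: each row of P*P sums to 1
print([sym.simplify(sum((P * P).row(i))) for i in range(2)])
```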
## 2.5

Consider a random walk on $\{0,...,k\}$, which moves left and right with respective probabilities $q$ and $p$. If the walk is at 0 it transitions to 1 on the next step. If the walk is at $k$ it transitions to $k-1$ on the next step. This is called a random walk with reflecting boundaries. Assume that $k=3$, $q=1/4$, $p=3/4$ and the initial distribution is uniform. For the following, use technology if needed.

(a) Exhibit the transition matrix

(b) Find $P(X_7=1|X_0=3,X_2=2,X_4=2)$

(c) Find $P(X_3=1,X_5=3)$

### Answer

(a)

$$\boldsymbol{Pr}=\left(\begin{array}{cc} 0 & 1 & 0 & 0 \\ 1/4 & 0 & 3/4 & 0\\ 0 & 1/4 & 0 & 3/4 \\ 0 & 0 & 1 & 0 \end{array}\right)$$

(b) Since it is a Markov chain, $P(X_7=1|X_0=3,X_2=2,X_4=2)=P(X_7=1|X_4=2)=\boldsymbol{Pr}^3_{2,1}$

```python
p = np.matrix([[0,1,0,0],[1/4,0,3/4,0],[0,1/4,0,3/4],[0,0,1,0]])
(p**3)[2,1]
```

0.296875

(c) $P(X_3=1,X_5=3)=(\alpha \boldsymbol{Pr}^3)_{1}*\boldsymbol{Pr}^2_{1,3}$

```python
a = np.matrix([1/4,1/4,1/4,1/4])
(a*p**3)[0,1]*(p**2)[1,3]
```

0.103271484375

## 2.6

A tetrahedron die has four faces labeled 1, 2, 3 and 4. In repeated independent rolls of the die $R_0, R_1,...,$ let $X_n=max\{R_0,...,R_n\}$ be the maximum value after $n+1$ rolls, for $n\geq0$.

(a) Give an intuitive argument for why $X_0, X_1, ... $ is a Markov chain and exhibit its transition matrix

(b) Find $P(X_3 \geq 3)$

### Answer

(a) This is a Markov chain: the running maximum after the next roll depends only on its current value and on the new roll, which is independent of the past, so earlier rolls carry no extra information. Then $P(X_n=i|X_{n-1}=j,...)=P(X_n=i|X_{n-1}=j)$ and the transition matrix is:

$$\boldsymbol{Pr}=\left(\begin{array}{cc} 1/4 & 1/4 & 1/4 & 1/4 \\ 0 & 1/2 & 1/4 & 1/4 \\ 0 & 0 & 3/4 & 1/4 \\ 0 & 0 & 0 & 1 \end{array}\right)$$

(b) We know that the tetrahedron has uniform probability for the initial value, so $\alpha=(1/4,1/4,1/4,1/4)$, and we want $\alpha*\boldsymbol{Pr}^3$

```python
p = np.matrix([[1/4,1/4,1/4,1/4],[0,1/2,1/4,1/4],[0,0,3/4,1/4],[0,0,0,1]])
a = np.matrix([1/4,1/4,1/4,1/4])
(a*p**3)[0,2:].sum()
```

0.9375

## 2.7

Let $X_0, X_1,...$ be a Markov chain with transition matrix $P$. Let $Y_n=X_{3n}$, for $n = 0,1,2,...$ Show that $Y_0,Y_1,...$ is a Markov chain and exhibit its transition matrix.

### Answer

Let's unravel $P(Y_k|Y_{k-1},...,Y_0)$

$$P(Y_k|Y_{k-1},...,Y_0)=P(X_{3k}|X_{3k-3},...,X_0)$$

But since $X_0, X_1,...$ is a Markov chain, $P(X_{3k}|X_{3k-3},...,X_0) = P(X_{3k}|X_{3k-3}) = P(Y_k|Y_{k-1})$

This means that $Y_0,Y_1,...$ is also a Markov chain, and the transition matrix for $Y$ is $P_Y=P_X^3$

## 2.8

Give the Markov transition matrix for a random walk on the weighted graph in the next figure:

*(figure omitted)*

### Answer

$$\boldsymbol{Pr}=\left(\begin{array}{cc} 0 & 1/3 & 0 & 0 & 2/3 \\ 1/10 & 1/5 & 1/5 & 1/10 & 2/5 \\ 1/2 & 1/3 & 0 & 1/6 & 0\\ 0 & 1/2 & 1/2 & 0 & 0\\ 1/3 & 2/3 & 0 & 0 & 0 \end{array}\right)$$

## 2.9

Give the Markov transition matrix for a random walk on the weighted graph in the next figure:

*(figure omitted)*

### Answer

$$\boldsymbol{Pr}=\left(\begin{array}{cc} 0 & 0 & 3/5 & 0 & 2/5 \\ 1/7 & 2/7 & 0 & 0 & 4/7 \\ 0 & 2/9 & 2/3 & 1/9 & 0\\ 0 & 1 & 0 & 0 & 0\\ 3/4 & 0 & 0 & 1/4 & 0 \end{array}\right)$$

## 2.10

Consider a Markov chain with transition matrix

$$\boldsymbol{Pr}=\left(\begin{array}{cc} 0 & 3/5 & 1/5 & 2/5 \\ 3/4 & 0 & 1/4 & 0 \\ 1/4 & 1/4 & 1/4 & 1/4\\ 1/4 & 0 & 1/4 & 1/2 \end{array}\right)$$

(a) Exhibit the directed, weighted transition graph for the chain

(b) The transition graph for this chain can be given as a weighted graph without directed edges. Exhibit the graph

(a) *(figure omitted)*

(b) *(figure omitted)*
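The matrices in 2.8 through 2.10 can be produced mechanically: for a random walk on a weighted graph, $P_{i,j} = w(i,j)/\sum_j w(i,j)$. A small sketch of that row-normalization; since the book's figures are not reproduced here, the weight matrix below is made up purely for illustration:

```python
# Random walk on a weighted graph: row-normalize the matrix of edge weights.
W = np.array([[0, 1, 2],
              [1, 0, 3],
              [2, 3, 0]], dtype=float)  # hypothetical edge weights
P_walk = W / W.sum(axis=1, keepdims=True)
print(P_walk)                # e.g. the first row is (0, 1/3, 2/3)
print(P_walk.sum(axis=1))    # every row sums to 1
```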
## 2.11

You start with five dice. Roll all the dice and put aside those dice that come up 6. Then roll the remaining dice, putting aside those dice that come up 6, and so on. Let $X_n$ be the number of dice that are sixes after $n$ rolls.

(a) Describe the transition matrix $\boldsymbol{Pr}$ for this Markov chain

(b) Find the probability of getting all sixes by the third play.

(c) What do you expect $\boldsymbol{Pr}^{100}$ to look like? Use technology to confirm your answer **(Good that we are doing this in a jupyter notebook haha)**

### Answer

(a) Note that the state space has six values, $0,1,2,3,4,5$. The matrix below is indexed by the number of dice still in play, so rolling all sixes corresponds to reaching state 0 starting from state 5:

$$\boldsymbol{Pr}=\left(\begin{array}{cc} 1 & 0 & 0 & 0 & 0 & 0 \\ 1/6 & 5/6 & 0 & 0 & 0 & 0\\ \frac{1^2}{6^2}{2\choose 2} & \frac{5*1}{6^2}{2\choose 1} & \frac{5^2}{6^2}{2\choose 0} & 0 & 0 & 0\\ \frac{1^3}{6^3}{3\choose 3} & \frac{5*1^2}{6^3}{3\choose 2} & \frac{5^2*1}{6^3}{3\choose 1} & \frac{5^3}{6^3}{3\choose 0} & 0 & 0\\ \frac{1^4}{6^4}{4\choose 4} & \frac{5*1^3}{6^4}{4\choose 3} & \frac{5^2*1^2}{6^4}{4\choose 2} & \frac{5^3*1}{6^4}{4\choose 1} & \frac{5^4}{6^4}{4\choose 0} & 0\\ \frac{1^5}{6^5}{5\choose 5} & \frac{5*1^4}{6^5}{5\choose 4} & \frac{5^2*1^3}{6^5}{5\choose 3} & \frac{5^3*1^2}{6^5}{5\choose 2} & \frac{5^4*1}{6^5}{5\choose 1} & \frac{5^5}{6^5}{5\choose 0}\\ \end{array}\right) = \left(\begin{array}{cc} 1 & 0 & 0 & 0 & 0 & 0 \\ \frac{1}{6} & \frac{5}{6} & 0 & 0 & 0 & 0\\ \frac{1}{36} & \frac{10}{36} & \frac{25}{36} & 0 & 0 & 0\\ \frac{1}{216} & \frac{15}{216} & \frac{75}{216} & \frac{125}{216} & 0 & 0\\ \frac{1}{1296} & \frac{20}{1296} & \frac{150}{1296} & \frac{500}{1296} & \frac{625}{1296} & 0\\ \frac{1}{7776} & \frac{25}{7776} & \frac{250}{7776} & \frac{1250}{7776} & \frac{3125}{7776} & \frac{3125}{7776}\\ \end{array}\right)$$

It is interesting that the distribution when $X_n=i$ is given by the terms of the binomial expansion of $(1/6+5/6)^i$.

(b)

```python
p = np.matrix([[1, 0, 0, 0, 0, 0],
               [1/6, 5/6, 0, 0, 0, 0],
               [1/36, 10/36, 25/36, 0, 0, 0],
               [1/216, 15/216, 75/216, 125/216, 0, 0],
               [1/1296, 20/1296, 150/1296, 500/1296, 625/1296, 0],
               [1/7776, 25/7776, 250/7776, 1250/7776, 3125/7776, 3125/7776]])
(p**3)[5,0]
```

0.013272056011374654

(c) I would expect that there is basically a 100 percent probability that there are 0 dice left, regardless of how many dice you started with.

```python
(p**100).round()
```

matrix([[1., 0., 0., 0., 0., 0.],
        [1., 0., 0., 0., 0., 0.],
        [1., 0., 0., 0., 0., 0.],
        [1., 0., 0., 0., 0., 0.],
        [1., 0., 0., 0., 0., 0.],
        [1., 0., 0., 0., 0., 0.]])

## 2.12

Two urns contain $k$ balls each. At the beginning of the experiment the urn on the left contains $k$ red balls and the one on the right contains $k$ blue balls. At each step, pick a ball at random from each urn and exchange them. Let $X_n$ be the number of blue balls in the urn on the left (note that $X_0=0$, $X_1 = 1$). Argue that the process is a Markov chain. Find the transition matrix. This model is called the Bernoulli–Laplace process.

### Answer

This process is a Markov chain: the only thing that matters at step $n$ is how many blue balls are in the urn on the left. Even if you know the whole history, only the last state matters for the probability of the next step.

For the transition matrix, note that one ball is drawn uniformly at random from each urn. In state $i$ the left urn holds $i$ blue and $k-i$ red balls, and the right urn holds $i$ red and $k-i$ blue balls. The number of blue balls on the left goes down only if a blue ball is drawn from the left and a red ball from the right, goes up only in the opposite case, and otherwise stays the same, so

$$P_{i,i-1}=\left(\frac{i}{k}\right)^{2},\qquad P_{i,i}=\frac{2i(k-i)}{k^{2}},\qquad P_{i,i+1}=\left(\frac{k-i}{k}\right)^{2}.$$
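A sketch of this transition matrix in code for general $k$, with a check that the rows sum to 1:

```python
# Bernoulli-Laplace transition matrix for general k, assuming one ball is
# drawn uniformly at random from each urn and the two balls are swapped.
# State i = number of blue balls in the left urn.
def bernoulli_laplace(k):
    P = np.zeros((k + 1, k + 1))
    for i in range(k + 1):
        P[i, i] = 2 * i * (k - i) / k**2      # the two draws cancel out
        if i > 0:
            P[i, i - 1] = (i / k)**2          # blue leaves left, red enters
        if i < k:
            P[i, i + 1] = ((k - i) / k)**2    # red leaves left, blue enters
    return P

P_bl = bernoulli_laplace(3)
print(P_bl)
print(P_bl.sum(axis=1))  # every row sums to 1
```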
## 2.13

See the move-to-front process in Example 2.10. Here is another way to organize the bookshelf. When a book is returned it is put back on the library shelf one position forward from where it was originally. If the book at the front of the shelf is returned it is put back at the front of the shelf. Thus, if the order of books is (a,b,c,d,e) and book d is picked, the new order is (a,b,d,c,e). This reorganization method is called the *transposition* or *move-ahead-1* scheme. Give the transition matrix for the transposition scheme for a shelf with three books.

### Answer

Remember the states are $(abc, acb, bac, bca, cab, cba)$, in that order for both rows and columns. Note that picking the front book leaves the shelf unchanged, so every state has a positive probability of staying put:

$$P = \left(\begin{array}{cc} p_a & p_c & p_b & 0 & 0 & 0 \\ p_b & p_a & 0 & 0 & p_c & 0\\ p_a & 0 & p_b & p_c & 0 & 0\\ 0 & 0 & p_a & p_b & 0 & p_c\\ 0 & p_a & 0 & 0 & p_c & p_b\\ 0 & 0 & 0 & p_b & p_a & p_c \end{array}\right)$$

## 2.14

There are $k$ songs on Mary's music player. The player is set to *shuffle* mode, which plays songs uniformly at random, sampling with replacement. Thus, repeats are possible. Let $X_n$ denote the number of *unique* songs that have been heard after the $n$th play.

(a) Show that $X_0,X_1, ...$ is a Markov chain and give the transition matrix

(b) If Mary has four songs on her music player, find the probability that all songs are heard after six plays.

### Answer

(a) Consider $P(X_n|X_{n-1},...,X_0)$ and note that $X_{n+1}$ depends only on $X_n$ and on the outcome of play $n+1$, which is independent of everything earlier: $X_{n+1}=X_n+1$ if a not-yet-heard song is played (probability $(k-X_n)/k$), and $X_{n+1}=X_n$ otherwise. This means that $X_0,...,X_n$ is a Markov chain with transition matrix:

$$P = \left(\begin{array}{cc} 1/k & (k-1)/k & 0 & ... &0 & 0 & 0 \\ 0 & 2/k & (k-2)/k & ... & 0 & 0 & 0\\ 0 & 0 & 3/k & ... & 0 & 0 & 0\\ ... & ... & ... & ... & ... & ... & ...\\ 0 & 0 & 0 & ... & 0 & (k-1)/k & 1/k\\ 0 & 0 & 0 & ... & 0 & 0 & 1\\ \end{array}\right)$$
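(b) This part is not worked out above; a quick computation using the matrix from (a) with $k=4$. Since the first play always yields exactly one unique song ($X_1=1$), six plays correspond to five further transitions from state 1, so the answer is $(P^5)_{1,4} = 195/512 \approx 0.381$:

```python
# Exercise 2.14(b): k = 4 songs; states 1..4 = number of unique songs heard.
P = np.matrix([[1/4, 3/4, 0, 0],
               [0, 2/4, 2/4, 0],
               [0, 0, 3/4, 1/4],
               [0, 0, 0, 1]])
# X_1 = 1 after the first play, so six plays = five further transitions.
(P**5)[0, 3]   # 195/512 = 0.380859375
```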
## 2.15

Assume that $X_0,X_1,...$ is a two-state Markov chain on $\mathcal{S}=\{0,1\}$ with transition matrix:

$$P=\left(\begin{array}{cc} 1-p & p \\ q & 1-q \end{array}\right)$$

The present state of the chain only depends on the previous state. We can create a bivariate process that looks back two time periods by the following construction. Let $Z_n=(X_{n-1},X_n)$, for $n\geq1$. The sequence $Z_1,Z_2,...$ is a Markov chain with state space $\mathcal{S}\times\mathcal{S}=\{(0,0),(0,1),(1,0),(1,1)\}$. Give the transition matrix of the new chain.

### Answer

With rows and columns ordered $(0,0),(0,1),(1,0),(1,1)$ (from $(0,0)$ the chain can only move to $(0,0)$ or $(0,1)$, and so on):

$$P = \left(\begin{array}{cc} 1-p & p & 0 & 0 \\ 0 & 0 & q & 1-q \\ 1-p & p & 0 & 0 \\ 0 & 0 & q & 1-q \end{array}\right)$$

## 2.16

Assume that $P$ is a stochastic matrix with equal rows. Show that $P^n=P$, for all $n\geq1$.

### Answer

Let's write $P$ as:

$$P = \left(\begin{array}{cc} p_1 & p_2 & ... & p_k \\ p_1 & p_2 & ... & p_k \\ p_1 & p_2 & ... & p_k \\ ... & ... & ... & ... \\ p_1 & p_2 & ... & p_k \\ p_1 & p_2 & ... & p_k \\ \end{array}\right)$$

First let's calculate $P^2$:

$$P^2 = \left(\begin{array}{cc} p_1*p_1+p_2*p_1+...+p_k*p_1 & p_1*p_2+p_2*p_2+...+p_k*p_2 & ... & p_1*p_k+p_2*p_k+...+p_k*p_k \\ p_1*p_1+p_2*p_1+...+p_k*p_1 & p_1*p_2+p_2*p_2+...+p_k*p_2 & ... & p_1*p_k+p_2*p_k+...+p_k*p_k \\ ... & ... & ... & ... \\ p_1*p_1+p_2*p_1+...+p_k*p_1 & p_1*p_2+p_2*p_2+...+p_k*p_2 & ... & p_1*p_k+p_2*p_k+...+p_k*p_k \\ \end{array}\right)= \left(\begin{array}{cc} p_1*(p_1+p_2+...+p_k) & ... & p_k*(p_1+p_2+...+p_k) \\ p_1*(p_1+p_2+...+p_k) & ... & p_k*(p_1+p_2+...+p_k) \\ ... & ... & ... \\ p_1*(p_1+p_2+...+p_k) & ... & p_k*(p_1+p_2+...+p_k) \\ \end{array}\right)= \left(\begin{array}{cc} p_1*(1) & ... & p_k*(1) \\ p_1*(1) & ... & p_k*(1) \\ ... & ... & ... \\ p_1*(1) & ... & p_k*(1) \\ \end{array}\right) = P$$

Then let's see what happens for $P^3$:

$$P^3=P^2*P=P*P=P^2=P$$

Assume it is true for $P^n$ and consider $P^{n+1}$:

$$P^{n+1}=P^n*P=P*P=P^2=P$$

This means that $P^n=P$, for all $n\geq1$.

## 2.17

Let $\boldsymbol{P}$ be a stochastic matrix. Show that $\lambda = 1$ is an eigenvalue of $\boldsymbol{P}$. What is the associated eigenvector?

### Answer

First recall that the rows of a stochastic matrix all sum to one, and that an eigenvector associated to an eigenvalue satisfies:

$$A\overline{x}=\lambda \overline{x}$$

It is easy to see that the eigenvector we are looking for is $\overline{x}=(1,1,...,1)^T$, with associated eigenvalue one. We can check this by doing the multiplication. Say we have

$$A= \left(\begin{array}{cc} p_{1,1} & p_{1,2} & ... & p_{1,k} \\ p_{2,1} & p_{2,2} & ... & p_{2,k} \\ ... & ... & ... & ... \\ p_{k,1} & p_{k,2} & ... & p_{k,k} \\ \end{array}\right)$$

where $A$ is a stochastic matrix. Then:

$$Ax= \left(\begin{array}{cc} p_{1,1} & p_{1,2} & ... & p_{1,k} \\ p_{2,1} & p_{2,2} & ... & p_{2,k} \\ ... & ... & ... & ... \\ p_{k,1} & p_{k,2} & ... & p_{k,k} \\ \end{array}\right)\left(\begin{array}{cc} 1 \\ 1 \\ ... \\ 1\\ \end{array}\right)= \left(\begin{array}{cc} p_{1,1} + p_{1,2} + ... + p_{1,k} \\ p_{2,1} + p_{2,2} + ... + p_{2,k} \\ ... \\ p_{k,1} + p_{k,2} + ... + p_{k,k} \\ \end{array}\right)= \left(\begin{array}{cc} 1 \\ 1 \\ ... \\ 1\\ \end{array}\right)$$

Alternatively, we can check directly that $\det(A- 1I)=0$:

$$A-I= \left(\begin{array}{cc} p_{1,1} -1 & p_{1,2} & ... & p_{1,k} \\ p_{2,1} & p_{2,2}- 1 & ... & p_{2,k} \\ ... & ... & ... & ... \\ p_{k,1} & p_{k,2} & ... & p_{k,k}-1 \\ \end{array}\right)$$

Every row of $A-I$ sums to zero, so if we add all the other columns to the first column (i.e. $C_1 \to C_1+C_2+...+C_k$), every entry of the first column becomes zero. The columns are therefore linearly dependent, so $\det(A- 1I)=0$, which means that 1 is an eigenvalue of $A$.
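A quick numerical illustration of this, reusing the transition matrix from exercise 2.1:

```python
# lambda = 1 is an eigenvalue of a stochastic matrix,
# with right eigenvector (1, 1, ..., 1)^T.
A = np.array([[0.1, 0.3, 0.6], [0, 0.4, 0.6], [0.3, 0.2, 0.5]])
print(np.linalg.eigvals(A))   # one eigenvalue equals 1 (up to rounding)
print(A @ np.ones(3))         # A(1,1,1)^T = (1,1,1)^T
```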
## 2.18

A stochastic matrix is called *doubly* stochastic if its columns sum to 1. Let $X_0,X_1,...$ be a Markov chain on $\{ 1,...,k \}$ with doubly stochastic transition matrix and initial distribution that is uniform on $\{ 1,...,k \}$. Show that the distribution of $X_n$ is uniform on $\{ 1,...,k \}$, for all $n\geq0$.

### Answer

Remember that the distribution of $X_n$ is $\alpha P^n$. Let's see what happens at $n=1$:

$$X_1=\alpha P = \left(\begin{array}{cc} 1/k &1/k&...& 1/k \end{array}\right)\left(\begin{array}{cc} p_{1,1} & p_{1,2} & ... & p_{1,k} \\ p_{2,1} & p_{2,2} & ... & p_{2,k} \\ ... \\ p_{k,1} & p_{k,2} & ... & p_{k,k} \\ \end{array}\right)=\left(\begin{array}{cc} p_{1,1}*1/k + p_{2,1}*1/k + ... + p_{k,1}*1/k \\ p_{1,2}*1/k + p_{2,2}*1/k + ... + p_{k,2}*1/k \\ ... \\ p_{1,k}*1/k + p_{2,k}*1/k + ... + p_{k,k}*1/k \\ \end{array}\right)^T=\left(\begin{array}{cc} (p_{1,1} + p_{2,1} + ... + p_{k,1})*1/k\\ (p_{1,2} + p_{2,2} + ... + p_{k,2})*1/k \\ ... \\ (p_{1,k} + p_{2,k} + ... + p_{k,k})*1/k \\ \end{array}\right)^T = \left(\begin{array}{cc} (1)*1/k\\ (1)*1/k \\ ... \\ (1)*1/k \\ \end{array}\right)^T=\alpha $$

The last step holds because the matrix is doubly stochastic, so its columns sum to 1.

Now for $n=2$:

$$X_2=\alpha P^2 = (\alpha P)*P=\alpha*P=\alpha$$

Then we assume it holds for $n$ and check $n+1$:

$$X_{n+1}=\alpha P^{n+1} = (\alpha P^n)*P=\alpha*P=\alpha$$

## 2.19

Let $\boldsymbol{P}$ be the transition matrix of a Markov chain on $k$ states. Let $\boldsymbol{I}$ denote the $k\times k$ identity matrix. Consider the matrix

$$\boldsymbol{Q}=(1-p)\boldsymbol{I}+p\boldsymbol{P}\text{ , for }0<p<1$$

Show that $\boldsymbol{Q}$ is a stochastic matrix. Give the probabilistic interpretation for the dynamics of a Markov chain governed by the $\boldsymbol{Q}$ matrix in terms of the original Markov chain.

### Answer

Let's construct $(1-p)\boldsymbol{I}+p\boldsymbol{P}$; we need to check two things. First, that every entry lies in $[0,1]$: this is clear, since $P$ already has this property and we are combining it with the identity using two nonnegative weights that sum to 1. Second, that $\sum_j q_{i,j}=1$. Writing out the matrix:

$$(1-p)\boldsymbol{I}+p\boldsymbol{P}=\left(\begin{array}{cc} p_{1,1}*p +(1-p) & p_{1,2}*p & ... & p_{1,k}*p \\ p_{2,1}*p & p_{2,2}*p+(1-p) & ... & p_{2,k}*p \\ ... & ... & ... & ... \\ p_{k,1}*p & p_{k,2}*p & ... & p_{k,k}*p+(1-p) \\ \end{array}\right)$$

we have, for row $i$, $p_{i,1}*p + p_{i,2}*p + ...+ p_{i,i}*p+(1-p)+...+ p_{i,k}*p = p*(p_{i,1} + p_{i,2} + ...+ p_{i,k})+1-p=p*(1)+1-p=1$, so $Q$ is a stochastic matrix.

Probabilistically, a chain governed by $Q$ is a lazy version of the original chain: at each step, with probability $1-p$ it stays where it is, and with probability $p$ it takes a step according to $P$. The only probabilities that gain extra weight relative to $P$ are the diagonal ones, so this construction is useful when you want to give more probability to remaining in the current state.

## 2.20

Let $X_0, X_1,...$ be a Markov chain with transition matrix

$$P=\left(\begin{array}{cc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ p & 1-p & 0 \end{array}\right)$$

for $0<p<1$. Let $g$ be the function defined by:

$$ g(x) = \begin{cases} 0 & \text{if $x=1$} \\ 1 & \text{if $x=2,3$} \end{cases} $$

Let $Y_n=g(X_n)$ for $n\geq0$. Show that $Y_0,Y_1,...$ is not a Markov chain.

### Answer

Let's prove this by contradiction. If we assume $Y_n$ is a Markov chain, then $P(Y_n|Y_{n-1},...,Y_0)=P(Y_n|Y_{n-1})$, but compare the following two scenarios.

First, condition on $Y_{n-1}=0$ and $Y_n=1$. Then $X_{n-1}=1$, which forces $X_n=2$ and then $X_{n+1}=3$, so

$$P(Y_{n+1}=0|Y_n=1,Y_{n-1}=0)=0.$$

Second, condition on $Y_{n-1}=1$ and $Y_n=1$. Now the history $X_{n-1}=2$, $X_n=3$ is possible, and from state 3 the chain jumps to state 1 with probability $p>0$, so

$$P(Y_{n+1}=0|Y_n=1,Y_{n-1}=1)>0$$

(whenever $X_{n-1}=2$ has positive probability). The probability of $Y_{n+1}=0$ therefore depends on $Y_{n-1}$ and not only on $Y_n$, contradicting the Markov property. Hence $Y_0,Y_1,...$ is not a Markov chain.
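The gap between the two conditional probabilities can also be seen numerically; a minimal simulation sketch with $p=1/2$ and states coded 0, 1, 2 for 1, 2, 3:

```python
# Estimate P(Y_{n+1}=0 | Y_n=1, Y_{n-1}=y) for y = 0 and y = 1 by simulation.
rng = np.random.default_rng(0)
p_ = 0.5
P = np.array([[0, 1, 0], [0, 0, 1], [p_, 1 - p_, 0]])
g = lambda x: 0 if x == 0 else 1          # lumping function
counts = {0: [0, 0], 1: [0, 0]}           # y -> [times Y_{n+1}=0, total]
x = rng.integers(3)
ys = [g(x)]
for _ in range(200000):
    x = rng.choice(3, p=P[x])
    ys.append(g(x))
    if len(ys) >= 3 and ys[-2] == 1:      # condition on Y_n = 1
        prev = ys[-3]
        counts[prev][1] += 1
        counts[prev][0] += (ys[-1] == 0)
for y in (0, 1):
    print(f"P(Y_n+1=0 | Y_n=1, Y_n-1={y}) ~ {counts[y][0]/counts[y][1]:.3f}")
```

The first estimate is (essentially) 0 while the second is clearly positive, matching the argument above.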
## 2.21

Let $P$ and $Q$ be two transition matrices on the same state space. Consider two processes, both started in some initial state $i$. In process #1 a coin is flipped once: if it lands heads, the process unfolds according to $P$; if it lands tails, the process unfolds according to $Q$. In process #2, at each step a coin is flipped: if it lands heads, the next step is chosen from $P$; if it lands tails, the next step is chosen from $Q$. Thus, in #1, one coin is initially flipped, which governs the entire evolution of the process, and in #2 a coin is flipped at each step to decide the next step of the process. Decide whether either of these processes is a Markov chain. If it is, exhibit its transition matrix; if not, explain why not.

### Answer

**#1** Process #1 is not a Markov chain. The single coin flip selects which matrix drives the entire trajectory, so the earlier states carry information about whether $P$ or $Q$ is in use, and in general $P(X_n|X_{n-1},...,X_0)\neq P(X_n|X_{n-1})$; no single transition matrix describes all the steps. It is nice to see, though, that conditionally on the coin the process does follow a Markov chain, with either $P$ or $Q$ as its matrix.

**#2** Process #2 is a Markov chain: at each step, independently of the past, the next state is drawn from $P$ with probability $1/2$ and from $Q$ with probability $1/2$, so the process follows the single transition matrix $M=0.5P+0.5Q$.

```python
## This could be a class.
def ChooseNextStep(x, mat):
    # Propagate the distribution one step and sample a state from it.
    tmp_x = x*mat
    return tmp_x, np.random.choice(tmp_x.shape[1], p=np.array(tmp_x).reshape(-1))

def Simulation_21(case, P=np.matrix([[0.2,0.8],[0.2,0.8]]),
                  Q=np.matrix([[0.8,0.2],[0.8,0.2]]),
                  simulations=1000, steps=5, p=0.5):
    '''
    This is a function for the simulation stated in 2.21 from the book. It
    receives the two matrices and the case (either one or two) and returns the
    sampled states for the first steps. We assume a uniform initial distribution.
    '''
    rows = P.shape[0]
    cols = P.shape[1]
    initial_dis = np.matrix([1/cols]*cols)
    sim = []
    for i in range(simulations):
        res = []
        tmp_x = initial_dis
        for j in range(steps):
            if case == 1:
                if j == 0:
                    # One coin flip fixes the matrix for the whole trajectory.
                    coin = np.random.random()
                    if coin < p:  # Follows P
                        dist = "P"
                        tmp_x, xi = ChooseNextStep(tmp_x, P)
                    else:
                        dist = "Q"
                        tmp_x, xi = ChooseNextStep(tmp_x, Q)
                else:
                    if dist == "P":
                        tmp_x, xi = ChooseNextStep(tmp_x, P)
                    else:
                        tmp_x, xi = ChooseNextStep(tmp_x, Q)
            elif case == 2:
                # A fresh coin flip at every step.
                coin = np.random.random()
                if coin < p:  # Follows P
                    tmp_x, xi = ChooseNextStep(tmp_x, P)
                else:
                    tmp_x, xi = ChooseNextStep(tmp_x, Q)
            res.append(xi)
        sim.append(res)
    return np.array(sim)

def CalcMatrixForStep(sim, step=1):
    # Estimate the one-step transition probabilities between step-1 and step.
    res0 = sim[sim[:,(step-1)]==0]
    res1 = sim[sim[:,(step-1)]==1]
    prob0 = np.unique(res0[:,step], return_counts=True)[1]/res0.shape[0]
    prob1 = np.unique(res1[:,step], return_counts=True)[1]/res1.shape[0]
    return np.array([prob0,prob1])

def GetTransitionMatrix(results):
    matrices = []
    for i in range(1,results.shape[1]):
        matrices.append(CalcMatrixForStep(results,i))
    return matrices
```

```python
sim = Simulation_21(1, simulations=10000)
for elem in GetTransitionMatrix(sim):
    print(elem)
```

[[0.68071928 0.31928072]
 [0.32472472 0.67527528]]
[[0.6699145  0.3300855 ]
 [0.31824583 0.68175417]]
[[0.67865078 0.32134922]
 [0.32442068 0.67557932]]
[[0.68607443 0.31392557]
 [0.32906837 0.67093163]]

```python
sim = Simulation_21(1, simulations=100000)
for elem in GetTransitionMatrix(sim):
    print(elem)
```

[[0.68024935 0.31975065]
 [0.32288256 0.67711744]]
[[0.68171683 0.31828317]
 [0.32311927 0.67688073]]
[[0.68055638 0.31944362]
 [0.32225466 0.67774534]]
[[0.68082058 0.31917942]
 [0.32163319 0.67836681]]

```python
sim = Simulation_21(2, simulations=100000)
for elem in GetTransitionMatrix(sim):
    print(elem)
```

[[0.50320229 0.49679771]
 [0.50051641 0.49948359]]
[[0.49590515 0.50409485]
 [0.50063234 0.49936766]]
[[0.50008028 0.49991972]
 [0.5005182  0.4994818 ]]
[[0.49978013 0.50021987]
 [0.49673804 0.50326196]]

```python
sim = Simulation_21(2, simulations=100000)
for elem in GetTransitionMatrix(sim):
    print(elem)
```

[[0.49948298 0.50051702]
 [0.50138799 0.49861201]]
[[0.5002298  0.4997702 ]
 [0.49796825 0.50203175]]
[[0.50042076 0.49957924]
 [0.50405271 0.49594729]]
[[0.50585378 0.49414622]
 [0.50204918 0.49795082]]
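For process #2 the simulated estimates above can be compared with the exact matrix $M=0.5P+0.5Q$; with the default matrices used in the simulation every row of $M$ is $(0.5, 0.5)$, matching the case-2 output:

```python
P = np.matrix([[0.2, 0.8], [0.2, 0.8]])
Q = np.matrix([[0.8, 0.2], [0.8, 0.2]])
print(0.5*P + 0.5*Q)  # [[0.5 0.5]
                      #  [0.5 0.5]]
```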
## 2.22

Show by induction:

(a) $1+3+5+...+(2n-1)=n^2$

(b) $1^2+2^2+...+n^2=n(n+1)(2n+1)/6$

(c) for all real $x>-1$, $(1+x)^n\geq 1+nx$

### Answer

**(a)** For $n=1$:

$$ \begin{align} 2(1)-1 &= 1 \\&=1^2 \end{align}$$

Assume it works for $n-1$:

$$ \begin{align} 1+3+...+(2n-3) &= (n-1)^2 \\ &=n^2 -2n+1 \end{align}$$

Then for $n$:

$$ \begin{align} 1+3+...+(2n-3)+(2n-1) &= n^2 -2n+1 +2n-1\\ &=n^2 \end{align}$$

This completes the proof.

**(b)** For $n=1$:

$$ \begin{align} 1(1+1)(2*1+1)/6 &= 6/6 \\&=1^2 \end{align}$$

Assume it works for $n-1$:

$$ \begin{align} 1^2+2^2+...+(n-1)^2&=(n-1)(n)(2n-1)/6 \end{align}$$

Then for $n$:

$$ \begin{align} 1^2+2^2+...+n^2 &=\frac{(n-1)(n)(2n-1)}{6}+n^2\\ &=\frac{2n^3-3n^2+n+6n^2}{6}\\ &=\frac{n(2n^2+3n+1)}{6}\\ &=\frac{n(n+1)(2n+1)}{6} \end{align}$$

This completes the proof.

**(c)** For this part it is important to notice that $x>-1$ means that $(1+x)>0$. Now let's see what happens for $n=1$:

$$ \begin{align} (1+x)^1 &= (1+1*x)\\ 1+x &\geq 1+x \end{align}$$

Assume it works for $n-1$:

$$ \begin{align} (1+x)^{n-1}&\geq1+(n-1)x \end{align}$$

Then for $n$:

$$ \begin{align} (1+x)^{n} &= (1+x)^{n-1}*(1+x)\\ &\geq (1+(n-1)x)*(1+x)\\ \\ \text{this last step because $1+x > 0$} \\ \\ (1+(n-1)x)*(1+x) &=1+(n-1)x+x+(n-1)x^2\\ &=1+n*x+ (n-1)x^2\\ \\ \text{but since $(n-1)x^2\geq 0$ for all real $x$ and $n\geq1$}\\ \\ 1+n*x+ (n-1)x^2 &\geq 1+n*x \\ \\ \text{=>} \\ \\ (1+x)^{n} &\geq 1+n*x+ (n-1)x^2 \\ &\geq 1+n*x \\ \\ \text{by transitivity} \\ \\ (1+x)^{n} &\geq 1+n*x \\ \end{align}$$

This completes the proof.

## 2.23

Simulate the first 20 letters (vowel/consonant) of the Pushkin poem Markov chain of Example 2.2.

$$P=\left(\begin{array}{cc} 0.175 & 0.825 \\ 0.526 & 0.474 \end{array}\right)$$

### Answer

```python
class PushkinLetters():
    def __init__(self, P=np.matrix([[0.175,0.825],[0.526,0.474]]),
                 init=np.array([8.638/20, 11.362/20])):
        self.P = P
        self.init = init

    def sample(self, simlist, dist):
        # State 0 is a vowel, state 1 is a consonant.
        vowel = 0
        consonant = 1
        if (np.random.random() < dist[0]):
            simlist.append(vowel)
        else:
            simlist.append(consonant)

    def parseSimulation(self, toParse):
        return ["vowel" if x == 0 else "consonant" for x in toParse]

    def OneSimulation(self, steps=20):
        sim = []
        self.sample(sim, self.init)
        for i in range(1, steps):
            self.sample(sim, np.array(self.P[sim[i-1]]).reshape(-1))
        return self.parseSimulation(sim)

    def Simulation(self, simulations):
        results = []
        for i in range(0, simulations):
            results.append(self.OneSimulation())
        return results
```

```python
PushkinLetters().OneSimulation()
```

['vowel', 'consonant', 'vowel', 'consonant', 'consonant', 'consonant', 'vowel', 'consonant', 'vowel', 'consonant', 'vowel', 'consonant', 'vowel', 'vowel', 'consonant', 'consonant', 'vowel', 'consonant', 'consonant', 'consonant']
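Beyond one run of 20 letters, high matrix powers give the long-run vowel/consonant frequencies of the chain; a quick check:

```python
P = np.matrix([[0.175, 0.825], [0.526, 0.474]])
print((P**50)[0])  # long-run (vowel, consonant) frequencies, roughly (0.389, 0.611)
```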
## 2.24

Simulate 50 steps of the random walk on the graph in Figure 2.1. Repeat the simulation 10 times. How many of your simulations end at vertex c? Compare with the exact long-term probability the walk visits c.

*(figure omitted)*

With the transition probabilities uniform over the neighboring vertices, as described in the frog example:

$$P=\left(\begin{array}{cc} 0 & 1 & 0 & 0 & 0 & 0\\ 1/4 & 0 & 1/4 & 1/4 & 1/4 & 0\\ 0 & 1/4 & 0 & 1/4 & 1/4 & 1/4\\ 0 & 1/4 & 1/4 & 0 & 1/4 & 1/4\\ 0 & 1/3 & 1/3 & 1/3 & 0 & 0\\ 0 & 0 & 1/2 & 1/2 & 0 & 0 \end{array}\right)$$

### Answer

```python
class FrogJump():
    def __init__(self, P=np.matrix([[0, 1, 0, 0, 0, 0],
                                    [1/4, 0, 1/4, 1/4, 1/4, 0],
                                    [0, 1/4, 0, 1/4, 1/4, 1/4],
                                    [0, 1/4, 1/4, 0, 1/4, 1/4],
                                    [0, 1/3, 1/3, 1/3, 0, 0],
                                    [0, 0, 1/2, 1/2, 0, 0]]),
                 init=np.array([1/6]*6)):
        self.P = P
        self.init = init

    def sample(self, simlist, dist):
        simlist.append(np.random.choice(len(dist), p=dist))

    def OneSimulation(self, steps=50):
        sim = []
        self.sample(sim, self.init)
        for i in range(1, steps):
            self.sample(sim, np.array(self.P[sim[i-1]]).reshape(-1))
        return sim

    def Simulation(self, simulations):
        results = []
        for i in range(0, simulations):
            results.append(self.OneSimulation())
        return results
```

```python
np.array(FrogJump().Simulation(10))[:,-1]
```

array([3, 2, 4, 2, 3, 4, 1, 2, 3, 3])

```python
P = np.matrix([[0, 1, 0, 0, 0, 0],
               [1/4, 0, 1/4, 1/4, 1/4, 0],
               [0, 1/4, 0, 1/4, 1/4, 1/4],
               [0, 1/4, 1/4, 0, 1/4, 1/4],
               [0, 1/3, 1/3, 1/3, 0, 0],
               [0, 0, 1/2, 1/2, 0, 0]])
(P**30)[0,:]
```

matrix([[0.05555595, 0.22222121, 0.22222273, 0.22222273, 0.16666667, 0.11111072]])

## 2.25

The behavior of dolphins in the presence of tour boats in Patagonia, Argentina is studied in Dans et al. (2012). A Markov chain model is developed, with state space consisting of five primary dolphin activities (socializing, traveling, milling, feeding, and resting). The following transition matrix is obtained. Find the long-term distribution of the dolphins' activities.

$$P=\left(\begin{array}{cc} 0.84 & 0.11 & 0.01 & 0.04 & 0 \\ 0.03 & 0.8 & 0.04 & 0.1 & 0.03 \\ 0.01 & 0.15 & 0.7 & 0.07 & 0.07 \\ 0.03 & 0.19 & 0.02 & 0.75 & 0.01 \\ 0.03 & 0.09 & 0.05 & 0 & 0.83 \end{array}\right)$$

### Answer

```python
(np.matrix([
    [0.84, 0.11, 0.01, 0.04, 0],
    [0.03, 0.8, 0.04, 0.1, 0.03],
    [0.01, 0.15, 0.7, 0.07, 0.07],
    [0.03, 0.19, 0.02, 0.75, 0.01],
    [0.03, 0.09, 0.05, 0, 0.83]
])**100)[0]
```

matrix([[0.14783582, 0.41492537, 0.0955597 , 0.2163806 , 0.1252985 ]])
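The same limiting distribution can be obtained without taking high matrix powers, via the left eigenvector for the eigenvalue 1 (cf. exercise 2.17); a sketch:

```python
# Stationary distribution of the dolphin chain from the eigenvalue-1 eigenvector.
P = np.array([[0.84, 0.11, 0.01, 0.04, 0],
              [0.03, 0.8, 0.04, 0.1, 0.03],
              [0.01, 0.15, 0.7, 0.07, 0.07],
              [0.03, 0.19, 0.02, 0.75, 0.01],
              [0.03, 0.09, 0.05, 0, 0.83]])
eigvals, eigvecs = np.linalg.eig(P.T)   # left eigenvectors of P = right eigenvectors of P^T
i = np.argmin(np.abs(eigvals - 1))      # locate the eigenvalue closest to 1
pi = np.real(eigvecs[:, i])
pi = pi / pi.sum()                      # normalize to a probability vector
print(pi.round(6))                      # matches the row of P**100 above
```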
## 2.26

In computer security applications, a honeypot is a trap set on a network to detect and counteract computer hackers. Honeypot data are studied in Kimou et al. (2010) using Markov chains. The authors obtain honeypot data from a central database and observe attacks against four computer ports—80, 135, 139, and 445—over 1 year. The ports are the states of a Markov chain along with a state corresponding to no port being attacked. Weekly data are monitored, and the port most often attacked during the week is recorded. The estimated Markov transition matrix for weekly attacks is

$$P=\left(\begin{array}{cc} 0 & 0 & 0 & 0 & 1 \\ 0 & 8/13 & 3/13 & 1/13 & 1/13 \\ 1/16 & 3/16 & 3/8 & 1/4 & 1/8 \\ 0 & 1/11 & 4/11 & 5/11 & 1/11 \\ 0 & 1/8 & 1/2 & 1/8 & 1/4 \end{array}\right)$$

with initial distribution $\alpha = (0, 0, 0, 0, 1)$.

(a) Which are the least and most likely attacked ports after 2 weeks?

(b) Find the long-term distribution of attacked ports.

### Answer

```python
P = np.matrix([
    [0, 0, 0, 0, 1],
    [0, 8/13, 3/13, 1/13, 1/13],
    [1/16, 3/16, 3/8, 1/4, 1/8],
    [0, 1/11, 4/11, 5/11, 1/11],
    [0, 1/8, 1/2, 1/8, 1/4]])
a = np.array([0, 0, 0, 0, 1])
parser = [80, 135, 139, 445, 'No attack']
```

```python
print("the most likely port to be attacked after two weeks is:", parser[(a*P**2).argmax()], "\n" +
      "the least likely port to be attacked after two weeks is:", parser[(a*P**2).argmin()])
```

the most likely port to be attacked after two weeks is: 139 
the least likely port to be attacked after two weeks is: 80

```python
print("the long-term distribution is:", (P**100)[0])
```

the long-term distribution is: [[0.02146667 0.26693333 0.34346667 0.22733333 0.1408    ]]

## 2.27

See gamblersruin.R. Simulate gambler's ruin for a gambler with initial stake $\$2$, playing a fair game.

(a) Estimate the probability that the gambler is ruined before he wins $\$5$.

(b) Construct the transition matrix for the associated Markov chain. Estimate the desired probability in (a) by taking high matrix powers.

(c) Compare your results with the exact probability.

### Answer

```python
import matplotlib.pyplot as plt

class GamblingWalk():
    '''
    This class is used to simulate a random walk. It is flexible enough to take
    different outcomes for the result of throwing the coin, and different limits
    at which to stop the experiment. The plotting function is handy for seeing
    the final results.
    '''
    def __init__(self, initial_money=50, prob=1/2, min_state=0, max_state=100,
                 outcomes=[-1, 1]):
        self.prob = prob
        self.init_money = initial_money
        self.walk = []
        self.min_state = min_state
        self.max_state = max_state
        self.outcomes = outcomes

    def P_w(self, sim=100):
        # Estimate the probability of reaching max_state before min_state.
        results = []
        for i in range(sim):
            res = self.randomWalk()
            results.append(res[1])
        return sum(results)/len(results)

    def randomWalk(self):
        money = self.init_money
        self.walk = [int(money)]
        win = False
        while True:
            money += np.random.choice(self.outcomes, 1, p=[1-self.prob, self.prob])
            self.walk.append(int(money))
            if (money <= self.min_state) or (money >= self.max_state):
                win = (money >= self.max_state)
                break
        return len(self.walk), win

    def plotWalk(self):
        if (len(self.walk) == 0):
            self.randomWalk()
        plt.plot(range(len(self.walk)), self.walk)
        plt.xlabel("Time")
        plt.ylabel("Money")
        plt.show()
```

```python
wlk = GamblingWalk(initial_money=2, max_state=5)
1 - wlk.P_w(10000)
```

array([0.5985])

(b) The transition matrix is given by:

$$P=\left(\begin{array}{cc} 1 & 0 & 0 & 0 & 0 & 0 \\ 1/2 & 0 & 1/2 & 0 & 0 & 0 \\ 0 & 1/2 & 0 & 1/2 & 0 & 0 \\ 0 & 0 & 1/2 & 0 & 1/2 & 0 \\ 0 & 0 & 0 & 1/2 & 0 & 1/2 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ \end{array}\right)$$

```python
P = np.matrix([[1, 0, 0, 0, 0, 0],
               [1/2, 0, 1/2, 0, 0, 0],
               [0, 1/2, 0, 1/2, 0, 0],
               [0, 0, 1/2, 0, 1/2, 0],
               [0, 0, 0, 1/2, 0, 1/2],
               [0, 0, 0, 0, 0, 1]])
a = np.array([0, 0, 1, 0, 0, 0])
(a*P**100).round(4)
```

array([[0.6, 0. , 0. , 0. , 0. , 0.4]])

(c) The matrix powers in (b) give the exact ruin probability, 0.6. The simulation estimate from (a), about 0.5985, is close to this value but still a bit off even after 10,000 simulations.
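For reference, the exact ruin probability in a fair game with initial stake $i$ and target $N$ has the closed form $1-i/N$; a one-line check against the estimates above:

```python
# Exact gambler's ruin probability for a fair game: P(ruin) = 1 - i/N.
i, N = 2, 5
print(1 - i/N)  # 0.6, matching (b) and close to the simulation in (a)
```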
ce9e388822f72a9455c50b39e459bb0cf61cc436
56,321
ipynb
Jupyter Notebook
Chapter02_py.ipynb
larispardo/StochasticProcessR
a2f8b6c41f2fe451629209317fc32f2c28e0e4ee
[ "MIT" ]
null
null
null
Chapter02_py.ipynb
larispardo/StochasticProcessR
a2f8b6c41f2fe451629209317fc32f2c28e0e4ee
[ "MIT" ]
null
null
null
Chapter02_py.ipynb
larispardo/StochasticProcessR
a2f8b6c41f2fe451629209317fc32f2c28e0e4ee
[ "MIT" ]
null
null
null
31.04796
584
0.47144
true
13,771
Qwen/Qwen-72B
1. YES 2. YES
0.91848
0.908618
0.834548
__label__eng_Latn
0.916103
0.777267
# Open Science Prize: Supplementary Material

This notebook is meant to provide a little more information about the Open Science Prize, but mostly, this notebook is a launching point from which the motivated learner can find open access sources with even more detailed information.

## 1 The Heisenberg Spin Model

In the open prize notebook, the Hamiltonian you are simulating is defined as the [Heisenberg XXX model](https://en.wikipedia.org/wiki/Quantum_Heisenberg_model#XXX_model) for 3 spins in a line:

$$
H_{\text{Heis3}} = \sigma_x^{(0)}\sigma_x^{(1)} + \sigma_x^{(1)}\sigma_x^{(2)} + \sigma_y^{(0)}\sigma_y^{(1)} + \sigma_y^{(1)}\sigma_y^{(2)} + \sigma_z^{(0)}\sigma_z^{(1)} + \sigma_z^{(1)}\sigma_z^{(2)}.
$$

### 1-1 Why call it XXX?

The XXX model is one of a family of spin models known as [Heisenberg spin models](https://en.wikipedia.org/wiki/Quantum_Heisenberg_model). In some sense, the most general form of Heisenberg model is often referred to as the XYZ model. The name 'XYZ' is used because the three pair-wise operators $\sigma_x\sigma_x$, $\sigma_y\sigma_y$, and $\sigma_z \sigma_z$ in the Hamiltonian have different coefficients $J_x$, $J_y$, and $J_z$ respectively. In the case where $J_x = J_y = J_z$, the model is labeled the 'XXX' model.

### 1-2 Numerically computing the matrix representation

To compute the matrix representation of $H_{\text{Heis3}}$, we are actually missing some pieces, namely the identity operator $I$ and the [tensor product](https://en.wikipedia.org/wiki/Tensor_product#Tensor_product_of_linear_maps) $\otimes$ symbol. They are both often left out when writing a Hamiltonian, but they are implied to be there. Writing out the full $H_{\text{Heis3}}$ including the identity operators and tensor product symbols:

$$
H_{\text{Heis3}} = \sigma_x^{(0)}\otimes\sigma_x^{(1)}\otimes I^{(2)} + I^{(0)} \otimes\sigma_x^{(1)}\otimes\sigma_x^{(2)} + \sigma_y^{(0)}\otimes\sigma_y^{(1)}\otimes I^{(2)} + I^{(0)} \otimes \sigma_y^{(1)}\otimes\sigma_y^{(2)} + \sigma_z^{(0)}\otimes\sigma_z^{(1)}\otimes I^{(2)} + I^{(0)}\otimes\sigma_z^{(1)}\otimes\sigma_z^{(2)}.
$$

You can see why physicists don't write that all out all the time.

#### 1-2.1 Tensor product vs Kronecker product

A point of clarity about jargon. To numerically compute the [tensor product](https://en.wikipedia.org/wiki/Tensor_product#Tensor_product_of_linear_maps) of $\sigma_x\otimes\sigma_x$, as an example, we often have already chosen to be working with the matrix representation of the operators at hand ($\sigma_x = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$ in this example). Because a computer works in the matrix representation, what a computer does is actually called a [Kronecker product](https://en.wikipedia.org/wiki/Kronecker_product#Examples). When doing numerical computations, Kronecker product is the name you would look up for the given software package you're using, such as [Mathematica](https://reference.wolfram.com/language/ref/KroneckerProduct.html), [numpy](https://numpy.org/doc/stable/reference/generated/numpy.kron.html), or [Qiskit](https://qiskit.org/documentation/tutorials/operators/01_operator_flow.html#Pauli-operators,-sums,-compositions,-and-tensor-products). Below is an example in Qiskit and numpy.
```python
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams.update({'font.size': 16}) # enlarge matplotlib fonts

# Import Qubit states Zero (|0>) and One (|1>), and Pauli operators (X, Y, Z)
from qiskit.opflow import Zero, One, I, X, Y, Z

# Suppress warnings
import warnings
warnings.filterwarnings('ignore')
```

```python
# Compute a Kronecker product in qiskit
# Qiskit already knows what I and X are (the identity and Pauli-X operators), so to compute the Kronecker product we simply use ^
IX_qiskit = (I^X)
print('Qiskit Kronecker product:\n', IX_qiskit.to_matrix())

print("----------------")

# Compute a Kronecker product in numpy
X_numpy = np.array([[0,1],[1,0]], dtype=complex)
I_numpy = np.eye(2, dtype=complex)
IX_numpy = np.kron(I_numpy, X_numpy)
print('Numpy Kronecker product:\n', IX_numpy)
```

    Qiskit Kronecker product:
     [[0.+0.j 1.+0.j 0.+0.j 0.+0.j]
     [1.+0.j 0.+0.j 0.+0.j 0.+0.j]
     [0.+0.j 0.+0.j 0.+0.j 1.+0.j]
     [0.+0.j 0.+0.j 1.+0.j 0.+0.j]]
    ----------------
    Numpy Kronecker product:
     [[0.+0.j 1.+0.j 0.+0.j 0.+0.j]
     [1.+0.j 0.+0.j 0.+0.j 0.+0.j]
     [0.+0.j 0.+0.j 0.+0.j 1.+0.j]
     [0.+0.j 0.+0.j 1.+0.j 0.+0.j]]

## 2 Using OpFlow

Qiskit offers [functionality for mathematically working with quantum states and operators](https://qiskit.org/documentation/apidoc/opflow.html) called ```opflow```, with tutorials found [here](https://qiskit.org/documentation/tutorials/operators/index.html). Opflow is especially convenient when dealing with large numbers of qubits, as tensor products can become unwieldy when using numpy, both in size and syntax. For example, to define $H_{\text{Heis3}}$, we could use ```numpy```'s ```numpy.kron(...)``` function to compute the tensor product as shown below.

```python
# Returns matrix representation of the XXX Heisenberg model for 3 spin-1/2 particles in a line (uses np.kron())
def H_heis3_np_kron():
    # iden is the identity matrix; sig_x, sig_y, and sig_z are Pauli matrices
    iden = np.eye(2,2)
    sig_x = np.array([[0,1],[1,0]])
    sig_y = np.array([[0,-1j],[1j,0]])
    sig_z = np.array([[1,0],[0,-1]])

    # Interactions (np.kron(A,B) is the tensor product of A and B)
    XXs = np.kron(iden, np.kron(sig_x, sig_x)) + np.kron(sig_x, np.kron(sig_x, iden))
    YYs = np.kron(iden, np.kron(sig_y, sig_y)) + np.kron(sig_y, np.kron(sig_y, iden))
    ZZs = np.kron(iden, np.kron(sig_z, sig_z)) + np.kron(sig_z, np.kron(sig_z, iden))

    # Sum interactions
    H = XXs + YYs + ZZs

    # Return Hamiltonian
    return H

# Returns matrix representation of the XXX Heisenberg model for 3 spin-1/2 particles in a line (uses opflow)
def H_heis3_opflow():
    # Interactions (I is the identity matrix; X, Y, and Z are Pauli matrices; ^ is a tensor product)
    XXs = (I^X^X) + (X^X^I)
    YYs = (I^Y^Y) + (Y^Y^I)
    ZZs = (I^Z^Z) + (Z^Z^I)

    # Sum interactions
    H = XXs + YYs + ZZs

    # Return Hamiltonian
    return H
```

Using opflow, however, the math is much easier to read and does not require nested function calls, which can be confusing when generalizing a Hamiltonian to many qubits. Also, the operators do not need to be explicitly computed in opflow, saving memory. In opflow, the caret symbol ```^``` denotes a tensor product. Important note: parentheses are often needed to clarify the order of operations, especially when combined with other math operations such as ```+```. See below.

```python
# Example of incorrectly adding two PauliOp objects.
op = X^X + Z^Z
print('matrix dimensions of op:', op.to_matrix().shape)
```

    matrix dimensions of op: (8, 8)

The shape of ```op``` should be (4,4) since $X$ and $Z$ are 2x2, but without parentheses, the order of operations is not correct. Shown below is the correct way of adding the two tensored Pauli operators in opflow.

```python
# Example of correctly adding two PauliOp objects.
op = (X^X) + (Z^Z)
print('matrix dimensions of op:', op.to_matrix().shape)
```

    matrix dimensions of op: (4, 4)

## 3 Solution to the Time-Independent Schrödinger Equation

The [Schrödinger equation](https://en.wikipedia.org/wiki/Schrödinger_equation) is a foundational pillar of quantum mechanics. It relates how a quantum state $|\psi(t)\rangle$ changes in time given what the state was initially $|\psi(0)\rangle$. Its solution is relatively simple (more on how it's not simple in the next section) when the Hamiltonian $H$ that governs the state evolution does not depend on time. Treating the Schrödinger equation as the differential equation it is, we can use separation of variables to solve it:

$$
\begin{align}
i \hbar \frac{d |\psi(t)\rangle}{dt} &= H|\psi(t)\rangle \\
|\psi(t)\rangle &= e^{-iHt / \hbar} |\psi(0)\rangle.
\end{align}
$$

## 4 Exponential of a Matrix

The solution to the [Schrödinger equation](https://en.wikipedia.org/wiki/Schrödinger_equation) for a Hamiltonian $H$ that does not depend on time (and $\hbar = 1$) is

$$
|\psi(t)\rangle = e^{-iHt}|\psi(0)\rangle.
$$

But, what does it mean to take $e$ to the power of an operator $H$? Let's briefly address this topic and give references where you can learn more.

Working with the matrix representation of $H$, the time evolution $U_H(t) = e^{-iHt}$ can be numerically and algebraically evaluated using a [Taylor series expansion](https://en.wikipedia.org/wiki/Taylor_series#Exponential_function) ($e^x = 1 + x + x^2/2! + x^3/3! + ...$). This is often where computers step in, although single Pauli operators (e.g. $H=\sigma_x$) have a nice algebraic result. On a computer, many different software packages have methods for computing $e^A$ where $A$ is a matrix (see scipy's [expm](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.expm.html) or opflow's [.exp_i()](https://qiskit.org/documentation/tutorials/operators/01_operator_flow.html#Evolutions,-exp_i(),-and-the-EvolvedOp)).

We do need to be careful when the matrix being exponentiated is a sum of operators that do not commute ($[A,B] = AB - BA \neq 0$). For example, if $H = \sigma_x + \sigma_z$, we need to take care in working with $e^{it(\sigma_x + \sigma_z)}$ since the Pauli operators $\sigma_x$ and $\sigma_z$ do not commute: $[\sigma_x, \sigma_z] = \sigma_x \sigma_z - \sigma_z \sigma_x = -2i\sigma_y \neq 0$. If we want to apply $e^{it(\sigma_x + \sigma_z)}$ on a quantum computer, however, we will want to decompose the sum in the exponential into a product of exponentials. Each product can then be implemented on the quantum computer as a single or two-qubit gate. Continuing the example, $\sigma_x$ and $\sigma_z$ do not commute ($[\sigma_x, \sigma_z] \neq 0$), so we cannot simply decompose the exponential into a product of exponentials: $e^{-it(\sigma_x + \sigma_z)} \neq e^{-it\sigma_x}e^{-it\sigma_z}$. This is where approximation methods come in, such as Trotterization (more on that in the next section).
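The non-commutativity above is easy to verify numerically. Below is a minimal sketch (not part of the original notebook) using scipy's `expm` to show that the naive product of exponentials differs from the exact exponential, while splitting the evolution into $n$ small steps shrinks the error roughly as $1/n$:

```python
import numpy as np
from scipy.linalg import expm

t = 1.0
sig_x = np.array([[0, 1], [1, 0]], dtype=complex)
sig_z = np.array([[1, 0], [0, -1]], dtype=complex)

exact = expm(-1j * t * (sig_x + sig_z))
naive = expm(-1j * t * sig_x) @ expm(-1j * t * sig_z)
print('naive split error:', np.linalg.norm(exact - naive))  # nonzero

# First-order Trotter splitting: error shrinks roughly as 1/n
for n in (1, 10, 100):
    step = expm(-1j * t / n * sig_x) @ expm(-1j * t / n * sig_z)
    print(n, np.linalg.norm(exact - np.linalg.matrix_power(step, n)))
```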
## 5 Trotterization

The Open Science Prize notebook ends the decomposition of the unitary time evolution $U_{\text{Heis3}}(t)$ calculation with

$$
U_{\text{Heis3}}(t) \approx \left[XX^{(0,1)}(t/n) YY^{(0,1)}(t/n) ZZ^{(0,1)}(t/n) XX^{(1,2)}(t/n) YY^{(1,2)}(t/n) ZZ^{(1,2)}(t/n) \right]^{n}
$$

where $t$ is time and $n$ is the number of Trotter steps. (Remember, this is not the only possible Trotterization.) The physical implementations of the two-qubit gates $XX(t)$, $YY(t)$, and $ZZ(t)$ are outlined in the Open Science Prize notebook, though without explanation. Here we provide a brief explanation with references where you can learn more.

Qiskit has a library of circuits that includes such gates, called the [circuit library](https://qiskit.org/documentation/apidoc/circuit_library.html), but not every gate in the library is native to a physical device. Nonnative gates (such as $XX$, $YY$, $ZZ$, or $ZX$) need to be decomposed into gates that are [native to the physical device](https://qiskit.org/documentation/tutorials/circuits_advanced/08_gathering_system_information.html#Configuration). You may even come across three-qubit gates like XXY or XYZ in some models. These interactions can be decomposed into native gates, or you may find it effective to go further and [engineer a typically nonnative interaction](https://qiskit.org/textbook/ch-quantum-hardware/hamiltonian-tomography.html) using Pulse. Assuming you want to use only native gates and not pulses, this review article is a useful starting point:

F. Tacchino, et. al., *Quantum Computers as Universal Quantum Simulators: State-of-the-Art and Perspectives*, [Adv. Quantum Technol. *3* 3 (2020)](https://doi.org/10.1002/qute.201900052) \[[free arxiv version](https://arxiv.org/abs/1907.03505)\]

In this notebook, let's start with the [ZZ gate](https://qiskit.org/documentation/stubs/qiskit.circuit.library.RZZGate.html)

$$
\begin{align}
ZZ(2t) &= e^{-it\sigma_z \otimes \sigma_z} \\
ZZ(2t) &= \begin{bmatrix} e^{-it} & 0 & 0 & 0 \\ 0 & e^{it} & 0 & 0 \\ 0 & 0 & e^{it} & 0 \\ 0 & 0 & 0 & e^{-it} \\ \end{bmatrix}
\end{align}.
$$

This can be implemented as two-qubit [CNOT gates](https://qiskit.org/documentation/stubs/qiskit.circuit.library.CXGate.html#qiskit.circuit.library.CXGate) sandwiching a single-qubit [$R_z(2t)$ rotation](https://qiskit.org/documentation/stubs/qiskit.circuit.library.RZGate.html) as shown below. (Note that $R_z$ is evaluated at the angle $2t$ instead of the typical $t$ to simplify writing the exponential terms.)

```python
from qiskit.circuit import Parameter
from qiskit import QuantumCircuit, QuantumRegister, execute

t = Parameter('t')
```

```python
# ZZ in two and single qubit gates
qc = QuantumCircuit(2)
qc.cnot(0,1)
qc.rz(2 * t, 1)
qc.cnot(0,1)
qc.draw(output='mpl')
```

For fun, let's check this mathematically by evaluating the circuit library ZZ gate and the decomposed circuit at various time values.
```python
from qiskit.circuit.library import RXXGate, RYYGate, RZZGate, CXGate
from qiskit.quantum_info.operators import Operator
import qiskit.quantum_info as qi
```

```python
# time
time = np.pi/2 # try different time values as a check that the decomposition of ZZ into native gates is accurate

# Qiskit circuit library ZZ gate
qc = QuantumCircuit(2)
qc.append(RZZGate(time), [0,1])
qc_ZZ_matrix = qi.Operator(qc)

# Decomposed ZZ gate
CNOT = Operator(np.array([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]]))
Rz = Operator(np.array([[np.exp(-1j*time/2),0], [0,np.exp(1j*time/2)]]))
I_Rz = (Operator(I)^Rz)
#np_ZZ_matrix = CNOT@I_Rz@CNOT
np_ZZ_matrix = np.matmul(np.matmul(CNOT, I_Rz), CNOT)

# Confirm matrices are the same at time t
np.array_equal(qc_ZZ_matrix, np_ZZ_matrix)
```

    True

The $ZZ(t)$ gate can be rotated to the $XX(t)$ gate by adding single-qubit rotations. Single-qubit rotations such as $R_y(\pi/2) \sigma_z R_y(-\pi/2) = \sigma_x$ can transform $\sigma_z$ to $\sigma_x$. Consider computing the calculation yourself. Remember that an exponential of a Pauli matrix expands as $R_z(2\theta)= \exp\left(-i\theta\sigma_z\right) = \cos(\theta)I - i \sin(\theta)\sigma_z$. Below is an abbreviated calculation to guide your own. Note that most tensor products with the identity have been omitted (e.g. $\sigma_z^{(0)} = \sigma_z^{(0)} \otimes I^{(1)}$), and for extra clarity, each operator has the qubit it acts on indexed in the superscript, so $\sigma_z^{(0)}$ would be the $\sigma_z$ operator acting on qubit $0$.

$$
\begin{align}
\left(R_y^{(0)}(\pi/2) \otimes R_y^{(1)}(\pi/2)\right) ZZ(t)^{(0,1)}\left( R_y^{(0)}(-\pi/2) \otimes R_y^{(1)}(-\pi/2)\right) &= \\
\exp\left[\frac{-i\pi}{4}(\sigma_y^{(0)} + \sigma_y^{(1)})\right] \exp\left[\frac{-it}{2} \sigma_z^{(0)} \otimes \sigma_z^{(1)}\right] \exp\left[\frac{i\pi}{4}(\sigma_y^{(0)} + \sigma_y^{(1)})\right] &= \\
\exp\left[\frac{-i\pi}{4}(\sigma_y^{(0)} + \sigma_y^{(1)})\right] \left[\cos(t/2)(I^{(0)} \otimes I^{(1)}) -i\sin(t/2)(\sigma_z^{(0)} \otimes \sigma_z^{(1)}) \right] \exp\left[\frac{i\pi}{4}(\sigma_y^{(0)} + \sigma_y^{(1)})\right] &= \\
\cos(t/2) \exp\left[\frac{-i\pi}{4}(\sigma_y^{(0)} + \sigma_y^{(1)})\right] (I^{(0)} \otimes I^{(1)}) \exp\left[\frac{i\pi}{4}(\sigma_y^{(0)} + \sigma_y^{(1)})\right] - i \exp\left[\frac{-i\pi}{4}(\sigma_y^{(0)} + \sigma_y^{(1)})\right] \sin(t/2)(\sigma_z \otimes \sigma_z) \exp\left[\frac{i\pi}{4}(\sigma_y^{(0)} + \sigma_y^{(1)})\right] &= \\
\cos(t/2) (I^{(0)} \otimes I^{(1)}) - \frac{i \sin(t/2)}{4} \left[\left(I^{(0)} - i \sigma_y^{(0)}\right) \sigma_z^{(0)} \left(I^{(0)} + i \sigma_y^{(0)}\right)\right] \otimes \left[\left(I^{(1)} - i \sigma_y^{(1)}\right) \sigma_z^{(1)} \left(I^{(1)} + i \sigma_y^{(1)}\right)\right] &= \\
\cos(t/2) (I^{(0)} \otimes I^{(1)}) - i \sin(t/2) (\sigma_x^{(0)} \otimes \sigma_x^{(1)}) &= \\
\exp\left[\frac{-it}{2} \sigma_x^{(0)} \otimes \sigma_x^{(1)}\right] &= XX(t)
\end{align}
$$

```python
# XX(t)
qc = QuantumCircuit(2)
qc.ry(np.pi/2,[0,1])
qc.cnot(0,1)
qc.rz(2 * t, 1)
qc.cnot(0,1)
qc.ry(-np.pi/2,[0,1])
qc.draw(output='mpl')
```

Let's check for fun again.
```python
# time
time = np.pi/6 # try different time values as a check that the decomposition of XX into native gates is accurate

# Qiskit circuit library XX gate
qc = QuantumCircuit(2)
qc.append(RXXGate(time), [0,1])
qc_XX_matrix = qi.Operator(qc).data

# Decomposed XX gate
Rz = Operator(np.array([[np.exp(-1j*time/2),0], [0,np.exp(1j*time/2)]]))
Ry = Operator(np.array([[np.cos(np.pi/4), -np.sin(np.pi/4)],[np.sin(np.pi/4), np.cos(np.pi/4)]]))
Ry_minus = Operator(np.array([[np.cos(-np.pi/4), -np.sin(-np.pi/4)],[np.sin(-np.pi/4), np.cos(-np.pi/4)]]))
I_Rz = (Operator(I)^Rz)
np_XX_matrix = (Ry^Ry)@CNOT@I_Rz@CNOT@(Ry_minus^Ry_minus)

# Confirm matrices are the same at time t
np.array_equal(np.round(qc_XX_matrix, 6), np.round(np_XX_matrix, 6))
```

    True

The $ZZ(t)$ gate can be rotated to the $YY(t)$ gate by adding single-qubit rotations $R_x(\pi/2) \sigma_z R_x(-\pi/2) = -\sigma_y$ around the $ZZ$ gate, just as was done with the $XX$ gate.

```python
# YY(t)
qc = QuantumCircuit(2)
qc.rx(np.pi/2,[0,1])
qc.cnot(0,1)
qc.rz(2 * t, 1)
qc.cnot(0,1)
qc.rx(-np.pi/2,[0,1])
qc.draw(output='mpl')
```

```python
# time
time = np.pi/6 # try different time values as a check that the decomposition of YY into native gates is accurate

# Qiskit circuit library YY gate
qc = QuantumCircuit(2)
qc.append(RYYGate(time), [0,1])
qc_YY_matrix = qi.Operator(qc).data

# Decomposed YY gate
Rz = Operator(np.array([[np.exp(-1j*time/2),0], [0,np.exp(1j*time/2)]]))
Rx = Operator(np.array([[np.cos(np.pi/4), -1j*np.sin(np.pi/4)],[-1j*np.sin(np.pi/4), np.cos(np.pi/4)]]))
Rx_minus = Operator(np.array([[np.cos(-np.pi/4), -1j*np.sin(-np.pi/4)],[-1j*np.sin(-np.pi/4), np.cos(-np.pi/4)]]))
I_Rz = (Operator(I)^Rz)
np_YY_matrix = (Rx^Rx)@CNOT@I_Rz@CNOT@(Rx_minus^Rx_minus)

# Confirm matrices are the same at time t
np.array_equal(np.round(qc_YY_matrix, 6), np.round(np_YY_matrix, 6))
```

    True

## 6 State Tomography

[State tomography](https://qiskit.org/documentation/tutorials/noise/8_tomography.html) is a method for determining the quantum state of a qubit, or qubits, even if the state is in a superposition or entangled. Repeatedly measuring a prepared quantum state may not be enough to determine the full state. For example, what if you want to know if the result of a quantum calculation is the state $|0\rangle -i|1\rangle$? Repeated measurements in the $Z$-basis would return $|0\rangle$ 50% of the time and $|1\rangle$ 50% of the time. However, there are other states that give the same measurement result, such as $|0\rangle + |1\rangle$, $|0\rangle - |1\rangle$, and $|0\rangle + i|1\rangle$. How could you determine whether the state you have has the $-i$ phase you're looking for? This requires measurements in [different bases](https://qiskit.org/textbook/ch-states/single-qubit-gates.html#measuring). In state tomography, a quantum circuit is repeated with measurements done in different bases to exhaustively determine the full quantum state (including any phase information). The Open Science Prize this year is using this technique to determine the full quantum state after the quantum simulation. That state is then compared to the exact expected result to compute a fidelity. Although this fidelity only gives information on how well the quantum simulation produces one particular state, it's a more lightweight approach than a full [process tomography](https://qiskit.org/documentation/tutorials/noise/8_tomography.html) calculation.
In short, a high fidelity measured by state tomography doesn't guarantee a high fidelity quantum simulation, but a low fidelity state tomography does imply a low fidelity quantum simulation.

## 7 Two-Qubit Gate with Pulse

[Qiskit Pulse](https://qiskit.org/documentation/apidoc/pulse.html) offers low-level control of a device's qubits. Pulse allows users to program the physical interactions happening on the superconducting chip. This can be a powerful tool for streamlining circuits \[1-4\], crafting new types of gates \[1-4\], getting [higher fidelity readout](https://www.youtube.com/watch?v=_1XTChcvbOs), and more. Of course, the increased level of control requires more understanding of [the physics of the qubit and how gates are physically generated on chip](https://qiskit.org/textbook/ch-quantum-hardware/transmon-physics.html). For someone new to Pulse, this may seem intimidating. However, there are great tutorials, and the [qiskit textbook](https://qiskit.org/textbook/ch-quantum-hardware/index-pulses.html) can slowly introduce you to these ideas with real and practically useful examples. For those more experienced with Pulse, it may be time to review papers such as this [general overview](https://arxiv.org/pdf/2004.06755.pdf). There will be other papers you find useful for this competition. Below is a small--and far from exhaustive--list to get you started \[1-4\]. To aid your efforts, we've provided a simple example of implementing a pulse-efficient [$R_{ZX}$ gate](https://qiskit.org/documentation/stubs/qiskit.circuit.library.RZXGate.html#qiskit.circuit.library.RZXGate) as outlined in paper [3](https://arxiv.org/pdf/2105.01063.pdf). Note that although the example below is interesting, it is not the only way to make a "pulse-efficient" two-qubit rotation gate.

\[1\] T. Alexander, et al., *Qiskit Pulse: Programming Quantum Computers Through the Cloud with Pulses*, [Quantum Sci. and Technol. **5** 044006 (2020)](https://arxiv.org/pdf/2004.06755.pdf)

\[2\] J. Stenger, et al., *Simulating the dynamics of braiding of Majorana zero modes using an IBM quantum computer*, [Phys. Rev. Research **3**, 033171 (2021)](https://arxiv.org/pdf/2012.11660.pdf)

\[3\] N. Earnest, et al., *Pulse-efficient circuit transpilation for quantum applications on cross-resonance-based hardware*, [arXiv:2105.01063 \[quant-ph\] (2021)](https://arxiv.org/pdf/2105.01063.pdf)

\[4\] S. Oomura, et al., *Design and application of high-speed and high-precision CV gate on IBM Q OpenPulse system*, [arXiv:2102.06117 \[quant-ph\] (2021)](https://arxiv.org/pdf/2102.06117.pdf)

[qiskit experiments repo](https://github.com/Qiskit/qiskit-experiments)

```python
# Import qiskit packages
from qiskit import IBMQ
from qiskit import schedule, transpile
from qiskit.tools.monitor import job_monitor

# load IBMQ Account data

# IBMQ.save_account(TOKEN)
provider = IBMQ.load_account()
provider = IBMQ.get_provider(hub='ibm-q-community', group='ibmquantumawards', project='open-science-22')
backend = provider.get_backend('ibmq_jakarta')
```

## 7.1 The [$R_{ZX}$ Gate](https://qiskit.org/documentation/stubs/qiskit.circuit.library.RZXGate.html#qiskit.circuit.library.RZXGate)

It's now time for an example of implementing a rotational two-qubit gate and how you might go about using Pulse to implement it as efficiently as possible.
Remembering the discussion in sections 4 and 5, we can write out the form of the $R_{ZX}(\theta)$ gate:

$$
R_{ZX}(2\theta) = \exp\left(-i \theta \sigma_z^{(0)} \otimes \sigma_x^{(1)}\right) = \cos(\theta)(I^{(0)}\otimes I^{(1)}) -i \sin(\theta)(\sigma_z^{(0)} \otimes \sigma_x^{(1)})
$$

Let's follow the same line of reasoning as paper [\[3\]](https://arxiv.org/pdf/2105.01063.pdf). First, implement an $R_{ZX}$ using native circuit gates. Second, compare that circuit's performance to an $R_{ZX}$ gate constructed from native pulses. Since the competition focuses on Jakarta's qubits 1, 3, and 5, let's just consider an $R_{ZX}$ gate between qubits 1 and 3.

```python
# RZX gate circuit (see section 5 for details on why this is the circuit for RZX)
theta = 3*np.pi/11 # pick arbitrary rotation angle theta

qrs = QuantumRegister(4)
qc = QuantumCircuit(qrs)
qc.ry(-np.pi/2, 3)
qc.cx(1, 3)
qc.rz(theta, 3)
qc.cx(1, 3)
qc.ry(np.pi/2, 3)
print(qc.draw()) # display the typical circuit diagram

qct = transpile(qc, backend)
schedule(qct, backend).draw() # display what the pulse schedule looks like for the circuit
```

Now that we have an $R_{ZX}$ circuit implementation (shown above in circuit and pulse schedule forms), let's make the "pulse-efficient" version of $R_{ZX}$. (Spoiler: it's only two pulses!)

```python
# The PassManager helps decide how a circuit should be optimized
# (https://qiskit.org/documentation/tutorials/circuits_advanced/04_transpiler_passes_and_passmanager.html)
from qiskit.transpiler import PassManager

# This function will pull pulse-level calibration values to build RZX gates and tell the PassManager to leave RZX gates alone
from qiskit.transpiler.passes import RZXCalibrationBuilderNoEcho
```

```python
# Equivalent circuit in terms of cross-resonance gates (https://qiskit.org/textbook/ch-quantum-hardware/cQED-JC-SW.html#6.-The-Cross-Resonance-Entangling-Gate)
qrs_pe = QuantumRegister(4)
qc_pe = QuantumCircuit(qrs_pe)
qc_pe.rzx(theta/2, 1, 3)
qc_pe.x(1)
qc_pe.rzx(-theta/2, 1, 3)
qc_pe.x(1)

# Add the pulse-level calibrations
pm = PassManager([RZXCalibrationBuilderNoEcho(backend)])
qc_pulse_efficient = pm.run(qc_pe)

schedule(qc_pulse_efficient, backend).draw() # show pulse schedule
```

```python
# Compare the schedule durations
print('Duration of standard CNOT-based circuit:')
print(schedule(qct, backend).duration)
print('Duration of pulse-efficient circuit:')
print(schedule(qc_pulse_efficient, backend).duration)
```

    Duration of standard CNOT-based circuit:
    3776
    Duration of pulse-efficient circuit:
    1184

You've just optimized an $R_{ZX}$ gate using pulse-level control! The shorter circuit time for the pulse-efficient circuit is great news! The shorter the circuit, the less likely the qubit will decohere. This time savings, and fewer pulses to the qubits, will reduce errors. Let's quantify this reduction by measuring the [process fidelity](https://qiskit.org/documentation/tutorials/noise/8_tomography.html) of the two circuits. If we wanted to be thorough, we should do a [randomized benchmarking](https://qiskit.org/textbook/ch-quantum-hardware/randomized-benchmarking.html) as shown in Fig. 1 of paper [\[3\]](https://arxiv.org/pdf/2105.01063.pdf).
```python # Process tomography functions from qiskit.ignis.verification.tomography import process_tomography_circuits, ProcessTomographyFitter # Get exact gate for comparison in process tomography from qiskit.circuit.library import RZXGate # Get the ideal unitary operator to compare to circuit output target_unitary = qi.Operator(RZXGate(theta).to_matrix()) # Generate process tomography circuits and run qpt_qcs = process_tomography_circuits(qc, [qrs[1], qrs[3]]) job = execute(qpt_qcs, backend, shots=8192) ``` ```python job_monitor(job) ``` Job Status: job has successfully run ```python # Extract tomography data qpt_tomo = ProcessTomographyFitter(job.result(), qpt_qcs) # Tomographic reconstruction choi_fit_lstsq = qpt_tomo.fit(method='lstsq') print('Average gate fidelity: F = {:.5f}'.format(qi.average_gate_fidelity(choi_fit_lstsq, target=target_unitary))) ``` Average gate fidelity: F = 0.91322 Now for the pulse-efficient gate ```python # Generate process tomography circuits and run qpt_qcs = process_tomography_circuits(qc_pulse_efficient, [qrs_pe[1], qrs_pe[3]]) job = execute(qpt_qcs, backend, shots=8192) ``` ```python job_monitor(job) ``` Job Status: job has successfully run ```python # Extract tomography data so that counts are indexed by measurement configuration qpt_tomo = ProcessTomographyFitter(job.result(), qpt_qcs) # Tomographic reconstruction choi_fit_lstsq = qpt_tomo.fit(method='lstsq') print('Average gate fidelity: F = {:.5f}'.format(qi.average_gate_fidelity(choi_fit_lstsq, target=target_unitary))) ``` Average gate fidelity: F = 0.92471 This is just one example of how Pulse could be used. Read [tutorials](https://qiskit.org/documentation/tutorials/circuits_advanced/05_pulse_gates.html), papers, and the [Qiskit youtube channel](https://www.youtube.com/c/qiskit) for more ideas. ```python import qiskit.tools.jupyter %qiskit_version_table ``` <h3>Version Information</h3><table><tr><th>Qiskit Software</th><th>Version</th></tr><tr><td><code>qiskit-terra</code></td><td>0.18.3</td></tr><tr><td><code>qiskit-aer</code></td><td>0.9.1</td></tr><tr><td><code>qiskit-ignis</code></td><td>0.6.0</td></tr><tr><td><code>qiskit-ibmq-provider</code></td><td>0.18.0</td></tr><tr><td><code>qiskit-aqua</code></td><td>0.9.5</td></tr><tr><td><code>qiskit</code></td><td>0.32.0</td></tr><tr><td><code>qiskit-nature</code></td><td>0.1.3</td></tr><tr><th>System information</th></tr><tr><td>Python</td><td>3.8.8 (default, Apr 13 2021, 12:59:45) [Clang 10.0.0 ]</td></tr><tr><td>OS</td><td>Darwin</td></tr><tr><td>CPUs</td><td>8</td></tr><tr><td>Memory (Gb)</td><td>64.0</td></tr><tr><td colspan='2'>Mon Nov 29 09:01:00 2021 EST</td></tr></table>
9bb8c60a6458ba79c9db78e4fe8583ed7dca62ad
161,610
ipynb
Jupyter Notebook
ibmq-qsim-sup-mat.ipynb
qfizik/open-science-prize-2021
9cc4dcd3fe8aaf9e352abf283ddaaaf16afbdef1
[ "Apache-2.0" ]
null
null
null
ibmq-qsim-sup-mat.ipynb
qfizik/open-science-prize-2021
9cc4dcd3fe8aaf9e352abf283ddaaaf16afbdef1
[ "Apache-2.0" ]
null
null
null
ibmq-qsim-sup-mat.ipynb
qfizik/open-science-prize-2021
9cc4dcd3fe8aaf9e352abf283ddaaaf16afbdef1
[ "Apache-2.0" ]
null
null
null
173.961249
58,300
0.871877
true
8,865
Qwen/Qwen-72B
1. YES 2. YES
0.851953
0.891811
0.759781
__label__eng_Latn
0.9016
0.603558
# Linear Programming: Introduction

## Definition

Formally, a linear program is an optimization problem of the form:

\begin{equation}
\min \vec{c}^\mathsf{T}\vec x\\
\textrm{subject to}
\begin{cases}
\mathbf{A}\vec x=\vec b\\
\vec x\ge\vec 0
\end{cases}
\end{equation}

where $\vec c\in\mathbb R^n$, $\vec b\in\mathbb R^m$ and $\mathbf A \in \mathbb R^{m\times n}$. The vector inequality $\vec x\ge\vec 0$ means that each component of $\vec x$ is nonnegative. Several variations of this problem are possible; e.g., instead of minimizing, we can maximize, or the constraints may be in the form of inequalities, such as $\mathbf A\vec x\ge \vec b$ or $\mathbf A\vec x\le\vec b$. As we shall see later, these variations can all be rewritten into the standard form.

## Example

A manufacturer produces four different products $X_1$, $X_2$, $X_3$ and $X_4$. There are three inputs to this production process:

- labor in person-weeks,
- kilograms of raw material A, and
- boxes of raw material B.

Each product has different input requirements. In determining each week's production schedule, the manufacturer cannot use more than the available amounts of manpower and the two raw materials:

|Inputs|$X_1$|$X_2$|$X_3$|$X_4$|Availabilities|
|------|-----|-----|-----|-----|--------------|
|Person-weeks|1|2|1|2|20|
|Kilograms of material A|6|5|3|2|100|
|Boxes of material B|3|4|9|12|75|
|Production level|$x_1$|$x_2$|$x_3$|$x_4$| |

These constraints can be written in mathematical form:

\begin{align}
x_1+2x_2+x_3+2x_4\le&20\\
6x_1+5x_2+3x_3+2x_4\le&100\\
3x_1+4x_2+9x_3+12x_4\le&75
\end{align}

Because negative production levels are not meaningful, we must impose the following nonnegativity constraints on the production levels:

\begin{equation}
x_i\ge0,\qquad i=1,2,3,4
\end{equation}

Now suppose that one unit of product $X_1$ sells for €6 and $X_2$, $X_3$ and $X_4$ sell for €4, €7 and €5, respectively. Then, the total revenue for any production decision $\left(x_1,x_2,x_3,x_4\right)$ is

\begin{equation}
f\left(x_1,x_2,x_3,x_4\right)=6x_1+4x_2+7x_3+5x_4
\end{equation}

The problem is then to maximize $f$ subject to the given constraints.

## Vector Notation

Using vector notation with

\begin{equation}
\vec x = \begin{pmatrix} x_1\\x_2\\x_3\\x_4 \end{pmatrix}
\end{equation}

the problem can be written in the compact form

\begin{equation}
\max \begin{pmatrix}6&4&7&5\end{pmatrix} \begin{pmatrix} x_1\\x_2\\x_3\\x_4 \end{pmatrix}\\
\textrm{subject to}
\begin{cases}
\begin{pmatrix} 1&2&1&2\\ 6&5&3&2\\ 3&4&9&12 \end{pmatrix} \begin{pmatrix} x_1\\x_2\\x_3\\x_4 \end{pmatrix}\le \begin{pmatrix} 20\\100\\75 \end{pmatrix}\\
\begin{pmatrix} x_1\\x_2\\x_3\\x_4 \end{pmatrix}\ge \begin{pmatrix} 0\\0\\0\\0 \end{pmatrix}
\end{cases}
\end{equation}

## Two-dimensional Linear Program

Many fundamental concepts of linear programming are easily illustrated in two-dimensional space.
Consider the following linear program:

\begin{equation}
\max \begin{pmatrix} 1&5 \end{pmatrix} \begin{pmatrix} x_1\\ x_2 \end{pmatrix}\\
\textrm{subject to}
\begin{cases}
\begin{pmatrix} 5&6\\ 3&2 \end{pmatrix} \begin{pmatrix} x_1\\ x_2 \end{pmatrix}\le \begin{pmatrix} 30\\ 12 \end{pmatrix}\\
\begin{pmatrix} x_1\\ x_2 \end{pmatrix}\ge \begin{pmatrix} 0\\ 0 \end{pmatrix}
\end{cases}
\end{equation}

```julia
#using Pkg
#pkg"add LaTeXStrings"
using Plots
using LaTeXStrings
x = -2:6
plot(x, (30 .- 5 .* x) ./ 6, linestyle=:dash, label=L"5x_1+6x_2=30")
plot!(x, (12 .- 3 .* x) ./ 2, linestyle=:dash, label=L"3x_1+2x_2=12")
plot!([0,4,1.5,0,0],[0,0,3.75,5,0], linewidth=2, label="constraints")
plot!(x, -x ./ 5, label=L"f\left(x_1,x_2\right)=x_1+5x_2=0")
plot!(x, (25 .- x) ./ 5, label=L"f\left(x_1,x_2\right)=x_1+5x_2=25")
```

## Slack Variables

Theorems and solution techniques are usually stated for problems in standard form; other forms of linear programs can be converted into the standard form. If a linear program is in the form

\begin{equation}
\min \vec{c}^\mathsf{T}\vec x\\
\textrm{subject to}
\begin{cases}
\mathbf{A}\vec x\ge \vec b\\
\vec x\ge\vec 0
\end{cases}
\end{equation}

then by introducing _surplus variables_, we can convert the original problem into the standard form

\begin{equation}
\min \vec{c}^\mathsf{T}\vec x\\
\textrm{subject to}
\begin{cases}
\mathbf{A}\vec x-\mathbf I\vec y = \vec b\\
\vec x\ge\vec 0\\
\vec y\ge\vec 0
\end{cases}
\end{equation}

where $\mathbf I$ is the $m\times m$ identity matrix. If, on the other hand, the constraints have the form

\begin{equation}
\begin{cases}
\mathbf{A}\vec x\le \vec b\\
\vec x\ge\vec 0
\end{cases}
\end{equation}

then we introduce _slack variables_ to convert the constraints into the form

\begin{equation}
\begin{cases}
\mathbf{A}\vec x+\mathbf I\vec y = \vec b\\
\vec x\ge\vec 0\\
\vec y\ge\vec 0
\end{cases}
\end{equation}

Consider the following optimization problem:

\begin{equation}
\max x_2-x_1\\
\textrm{subject to}
\begin{cases}
3x_1=x_2-5\\
\left|x_2\right|\le2\\
x_1\le0
\end{cases}
\end{equation}

To convert the problem into a standard form, we perform the following steps:

1. Change the objective function to:
\begin{equation}
\min x_1 - x_2
\end{equation}
2. Substitute $x_1=-x_1^\prime$.
3. Write $\left|x_2\right|\le2$ as $x_2\le 2$ and $-x_2\le 2$.
4. Introduce slack variables $y_1$ and $y_2$, and convert the inequalities above to
\begin{cases}
\hphantom{-}x_2 + y_1 =2\\
-x_2+y_2 =2
\end{cases}
5. Write $x_2=u-v$ with $u,v\ge0$.

Hence, we obtain

\begin{equation}
\min -x_1^\prime-u+v\\
\textrm{subject to}
\begin{cases}
3x_1^\prime+u-v=5\\
u-v+y_1=2\\
v-u+y_2=2\\
x_1^\prime,u,v,y_1,y_2\ge0
\end{cases}
\end{equation}

## Fundamental Theorem of Linear Programming

We consider the system of equalities

\begin{equation}
\mathbf{A}\vec x=\vec b
\end{equation}

where $\mathrm{rank}\,\mathbf A=m$. Let $\mathbf B$ be a square matrix whose columns are $m$ linearly independent columns of $\mathbf A$. If necessary, we reorder the columns of $\mathbf A$ so that the columns in $\mathbf B$ appear first: $\mathbf A$ has the form $\left(\mathbf B |\mathbf N\right)$. The matrix $\mathbf B$ is nonsingular, and thus we can solve the equation

\begin{equation}
\mathbf B\vec x_\mathbf B = \vec b
\end{equation}

The solution is $\vec x_\mathbf B = \mathbf B^{-1}\vec b$. Let $\vec x$ be the vector whose first $m$ components are equal to $\vec x_\mathbf B$ and whose remaining components are equal to zero. Then $\vec x$ is a solution to $\mathbf A\vec x=\vec b$.
We call $\vec x$ a _basic solution_, and the components of $\vec x$ corresponding to the components of $\vec x_\mathbf B$ are called _basic variables_.

- If some of the basic variables are zero, then the basic solution is _degenerate_.
- A vector $\vec x$ satisfying $\mathbf A\vec x=\vec b$, $\vec x \ge \vec 0$, is said to be a _feasible solution_.
- A feasible solution that is also basic is called a _basic feasible solution_.

The fundamental theorem of linear programming states that when solving a linear programming problem, we need only consider basic feasible solutions. This is because the optimal value (if it exists) is always achieved at a basic feasible solution.
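To make the theorem concrete, here is a small sketch — written in Python with NumPy rather than this notebook's Julia, purely for illustration — that enumerates the basic solutions of the two-dimensional example above (after adding slack variables) and keeps the best basic feasible one:

```python
from itertools import combinations
import numpy as np

# Standard form of the 2D example with slack variables: [A | I] x = b, x >= 0
A = np.array([[5.0, 6.0, 1.0, 0.0],
              [3.0, 2.0, 0.0, 1.0]])
b = np.array([30.0, 12.0])
c = np.array([1.0, 5.0, 0.0, 0.0])  # objective acts on the original variables only

best = None
for cols in combinations(range(4), 2):      # every choice of 2 basis columns
    B = A[:, cols]
    if abs(np.linalg.det(B)) < 1e-12:       # skip singular bases
        continue
    x = np.zeros(4)
    x[list(cols)] = np.linalg.solve(B, b)   # basic solution for this basis
    if np.all(x >= -1e-9):                  # keep only basic *feasible* solutions
        val = c @ x
        if best is None or val > best[0]:
            best = (val, x)

print(best)  # (25.0, array([0., 5., 0., 2.])), i.e. the vertex (0, 5) seen in the plot
```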
162d7466e9ef9f19b55cae7e80719ed031028d97
10,683
ipynb
Jupyter Notebook
Lectures_old/Lecture 5.ipynb
BenLauwens/ES313.jl
5a7553e53c288834f768d26e0d5aa22f9062b6af
[ "MIT" ]
3
2018-12-17T16:00:26.000Z
2020-01-18T04:09:25.000Z
Lectures_old/Lecture 5.ipynb
BenLauwens/ES313
5a7553e53c288834f768d26e0d5aa22f9062b6af
[ "MIT" ]
null
null
null
Lectures_old/Lecture 5.ipynb
BenLauwens/ES313
5a7553e53c288834f768d26e0d5aa22f9062b6af
[ "MIT" ]
2
2018-08-27T13:41:05.000Z
2020-02-08T11:00:53.000Z
32.769939
516
0.536366
true
2,543
Qwen/Qwen-72B
1. YES 2. YES
0.936285
0.91118
0.853124
__label__eng_Latn
0.935652
0.820426
March 2015, J. Slavič and L. Knez

Question 1: For the system of equations:

$$ \mathbf{A}= \begin{bmatrix} 1 & -4 & 1\\ 1 & 6 & -1\\ 2 & -1 & 2 \end{bmatrix} \qquad \mathbf{b}= \begin{bmatrix} 7\\ 13\\ 5 \end{bmatrix} $$

find the solution with the help of ``SymPy``.

```python
from sympy import *
init_printing()

# Define the symbolic variables
x1, x2, x3 = symbols('x1, x2, x3')

# Define the system
A = Matrix([[1, -4, 1],
            [1, 6, -1],
            [2, -1, 2]])
x = Matrix([[x1],
            [x2],
            [x3]])
b = Matrix([[7],
            [13],
            [5]])

eq = Eq(A*x,b)
eq
```

```python
resitev = solve(eq,[x1, x2, x3])
resitev
```

Question 2: For the matrix $\mathbf{A}$ defined above, determine the Euclidean norm (your own implementation).

```python
# Convert the data to an np.array
import numpy as np
a = np.array(A).astype(float) # Make sure to convert to float; by default the dtype is object
a
```

    array([[ 1., -4.,  1.],
           [ 1.,  6., -1.],
           [ 2., -1.,  2.]])

```python
# Fast variant
A_evk = np.sqrt(np.sum(a**2))
A_evk
```

```python
# Slow variant
vsota = 0
for vrstica in a:
    for element in vrstica:
        vsota += element**2
A_evk = vsota**0.5
A_evk
```

Question 3: For the matrix $\mathbf{A}$ defined above, determine the infinity norm (your own implementation).

```python
# Fast variant
A_oo = max(abs(a).sum(axis=1))
A_oo
```

```python
# Slow variant
vsota = np.zeros(3)
for i, vrstica in enumerate(a):
    vsota[i] = sum(abs(vrstica))
A_oo = max(vsota)
A_oo
```

Question 4: For the matrix $\mathbf{A}$ defined above, determine the condition number (``numpy`` function).

```python
pogojenost = np.linalg.cond(a)
pogojenost
```

Question 5: Define a function ``gauss_elim`` that performs Gaussian elimination for an arbitrary matrix $\mathbf{A}$ and vector $\mathbf{b}$ (separately for the matrix and separately for the vector).

```python
# Define the data
A_pod = np.array([[1, -4, 1],
                  [1, 6, -1],
                  [2, -1, 2]], dtype=float)

b_pod = np.array([[7],
                  [13],
                  [5]], dtype=float)
```

```python
# The function
def gauss_elim(A, b):
    n = len(b)
    for k in range(0,n-1):
        for i in range(k+1,n):
            if A[i,k] != 0.0:
                faktor = A[i,k]/A[k,k]
                A[i,k:n] = A[i,k:n] - faktor*A[k,k:n]
                b[i] = b[i] - faktor*b[k]
    return A, b
```

```python
# Using the function
[A, b] = gauss_elim(A_pod.copy(), b_pod.copy())
A
```

    array([[ 1. , -4. ,  1. ],
           [ 0. , 10. , -2. ],
           [ 0. ,  0. ,  1.4]])

```python
b
```

    array([[  7. ],
           [  6. ],
           [-13.2]])

Question 6: Define a function ``gauss_elim_x`` that, from the result of ``gauss_elim``, finds the corresponding values of the vector $\textbf{x}$.

```python
# The function
def gauss_elim_x(A, b):
    Ab = np.column_stack((A,b)) # Assemble the augmented system Ab
    n = len(b)
    res = np.zeros(n) # Prepare an array of zeros for the solution
    for k, vrsta in enumerate(Ab[::-1]): # back substitution, bottom row first
        res[n-k-1] = (vrsta[-1] - np.dot(vrsta[n-k:-1], res[n-k:]) ) / (vrsta[n-k-1])
    return res
```

```python
x = gauss_elim_x(A.copy(), b.copy())
x
```

    array([ 11.28571429,  -1.28571429,  -9.42857143])

```python
# Check the solution
ostanek = np.dot(A_pod,x) - b_pod.T
ostanek
```

    array([[ -8.88178420e-16,   0.00000000e+00,  -2.66453526e-15]])

```python

```
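As an extra cross-check (not part of the original notebook), the hand-written elimination routines can be compared against NumPy's built-in solver:

```python
# Cross-check the hand-written routines against NumPy's solver
A_g, b_g = gauss_elim(A_pod.copy(), b_pod.copy())
x_g = gauss_elim_x(A_g, b_g)
print(np.allclose(x_g, np.linalg.solve(A_pod, b_pod).flatten())) # expect True
```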
4d6c2471bdd8f0fb702ede28ce73c737129fbb5b
18,204
ipynb
Jupyter Notebook
pypinm-master/vprasanja za razmislek/Vaja 5 - polovica.ipynb
CrtomirJuren/python-delavnica
db96470d2cb1870390545cfbe511552a9ef08720
[ "MIT" ]
null
null
null
pypinm-master/vprasanja za razmislek/Vaja 5 - polovica.ipynb
CrtomirJuren/python-delavnica
db96470d2cb1870390545cfbe511552a9ef08720
[ "MIT" ]
null
null
null
pypinm-master/vprasanja za razmislek/Vaja 5 - polovica.ipynb
CrtomirJuren/python-delavnica
db96470d2cb1870390545cfbe511552a9ef08720
[ "MIT" ]
null
null
null
35.142857
2,498
0.642936
true
1,368
Qwen/Qwen-72B
1. YES 2. YES
0.896251
0.817574
0.732752
__label__slv_Latn
0.902994
0.540761
Linear Algebra Examples
====

This just shows the mechanics of linear algebra calculations with Python. See Lecture 5 for motivation and understanding.

```python
import numpy as np
import scipy.linalg as la
import matplotlib.pyplot as plt
%matplotlib inline
```

```python
plt.style.use('ggplot')
```

Resources
----

- [Tutorial for `scipy.linalg`](http://docs.scipy.org/doc/scipy/reference/tutorial/linalg.html)

Exact solution of linear system of equations
----

\begin{align}
x + 2y &= 3 \\
3x + 4y &= 17
\end{align}

```python
A = np.array([[1,2],[3,4]])
A
```

    array([[1, 2],
           [3, 4]])

```python
b = np.array([3,17])
b
```

    array([ 3, 17])

```python
x = la.solve(A, b)
x
```

    array([11., -4.])

```python
np.allclose(A @ x, b)
```

    True

```python
A1 = np.random.random((1000,1000))
b1 = np.random.random(1000)
```

### Using solve is faster and more stable numerically than using matrix inversion

```python
%timeit la.solve(A1, b1)
```

    437 ms ± 115 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

```python
%timeit la.inv(A1) @ b1
```

    584 ms ± 127 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

### Under the hood (Optional)

The `solve` function uses the `dgesv` Fortran function to do the actual work. Here is an example of how to do this directly with the `lapack` function. There is rarely any reason to use `blas` or `lapack` functions directly because the `linalg` package provides more convenient functions that also perform error checking, but you can use Python to experiment with `lapack` or `blas` before using them in a language like C or Fortran.

- [How to interpret LAPACK function names](http://www.netlib.org/lapack/lug/node24.html)
- [Summary of BLAS functions](http://cvxopt.org/userguide/blas.html)
- [Summary of LAPACK functions](http://cvxopt.org/userguide/lapack.html)

```python
import scipy.linalg.lapack as lapack
```

```python
lu, piv, x, info = lapack.dgesv(A, b)
x
```

Basic information about a matrix
----

```python
C = np.array([[1, 2+3j], [3-2j, 4]])
C
```

    array([[1.+0.j, 2.+3.j],
           [3.-2.j, 4.+0.j]])

```python
C.conjugate()
```

    array([[1.-0.j, 2.-3.j],
           [3.+2.j, 4.-0.j]])

#### Trace

```python
def trace(M):
    return np.diag(M).sum()
```

```python
trace(C)
```

    (5+0j)

```python
np.allclose(trace(C), la.eigvals(C).sum())
```

    True

#### Determinant

```python
la.det(C)
```

    (-8-5j)

#### Rank

```python
np.linalg.matrix_rank(C)
```

    2

#### Norm

```python
la.norm(C, None) # Frobenius (default)
```

    6.557438524302

```python
la.norm(C, 2) # largest singular value
```

    6.389028023601217

```python
la.norm(C, -2) # smallest singular value
```

    1.4765909770949925

```python
la.svdvals(C)
```

Least-squares solution
----

```python
la.solve(A, b)
```

```python
x, resid, rank, s = la.lstsq(A, b)
x
```

```python
A1 = np.array([[1,2],[2,4]])
A1
```

```python
b1 = np.array([3, 17])
b1
```

```python
try:
    la.solve(A1, b1)
except la.LinAlgError as e:
    print(e)
```

```python
x, resid, rank, s = la.lstsq(A1, b1)
x
```

```python
A2 = np.random.random((10,3))
b2 = np.random.random(10)
```

```python
try:
    la.solve(A2, b2)
except ValueError as e:
    print(e)
```

```python
x, resid, rank, s = la.lstsq(A2, b2)
x
```

### Normal equations

One way to solve least squares equations $X\beta = y$ for $\beta$ is by using the formula $\beta = (X^TX)^{-1}X^Ty$, as you may have learnt in statistical theory classes (or can derive yourself with a bit of calculus). This is implemented below. Note: This is not how the `la.lstsq` function solves least squares problems, as it can be inefficient for large matrices.
```python
def least_squares(X, y):
    return la.solve(X.T @ X, X.T @ y)
```

```python
least_squares(A2, b2)
```

Matrix Decompositions
----

```python
A = np.array([[1,0.6],[0.6,4]])
A
```

    array([[1. , 0.6],
           [0.6, 4. ]])

### LU

```python
p, l, u = la.lu(A)
```

```python
p
```

    array([[1., 0.],
           [0., 1.]])

```python
l
```

    array([[1. , 0. ],
           [0.6, 1. ]])

```python
u
```

    array([[1.  , 0.6 ],
           [0.  , 3.64]])

```python
np.allclose(p@l@u, A)
```

    True

### Cholesky

```python
U = la.cholesky(A)
U
```

    array([[1.       , 0.6      ],
           [0.       , 1.9078784]])

```python
np.allclose(U.T @ U, A)
```

    True

```python
# If working with complex matrices
np.allclose(U.T.conj() @ U, A)
```

    True

### QR

```python
Q, R = la.qr(A)
```

```python
Q
```

    array([[-0.85749293, -0.51449576],
           [-0.51449576,  0.85749293]])

```python
np.allclose((la.norm(Q[:,0]), la.norm(Q[:,1])), (1,1))
```

    True

```python

```

```python
np.allclose(Q@R, A)
```

    True

### Spectral

When the matrix is symmetric, you can use `la.eigh`.

```python
u, v = la.eig(A)
```

```python
u
```

    array([0.88445056+0.j, 4.11554944+0.j])

```python
v
```

    array([[-0.98195639, -0.18910752],
           [ 0.18910752, -0.98195639]])

```python
np.allclose((la.norm(v[:,0]), la.norm(v[:,1])), (1,1))
```

    True

```python
np.allclose(v @ np.diag(u) @ v.T, A)
```

    True

#### Inverting A

```python
np.allclose(v @ np.diag(1/u) @ v.T, la.inv(A))
```

#### Powers of A

```python
np.allclose(v @ np.diag(u**5) @ v.T, np.linalg.matrix_power(A, 5))
```

### SVD

```python
U, s, V = la.svd(A)
```

```python
U
```

```python
np.allclose((la.norm(U[:,0]), la.norm(U[:,1])), (1,1))
```

```python
s
```

```python
V
```

```python
np.allclose((la.norm(V[:,0]), la.norm(V[:,1])), (1,1))
```

```python
np.allclose(U @ np.diag(s) @ V, A)
```

```python

```
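To tie the SVD back to the least-squares section above, here is a small sketch (not in the original notebook) solving a least-squares problem through the pseudoinverse built from the SVD factors and checking it against `la.lstsq`; the variable names are introduced here for illustration:

```python
# Least squares via the SVD / Moore-Penrose pseudoinverse: x = V^T diag(1/s) U^T b
A3 = np.random.random((10, 3))
b3 = np.random.random(10)

U3, s3, Vt3 = la.svd(A3, full_matrices=False)  # thin SVD
x_svd = Vt3.T @ np.diag(1 / s3) @ U3.T @ b3

x_lstsq, resid, rank, s = la.lstsq(A3, b3)
np.allclose(x_svd, x_lstsq)  # expect True
```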
2a1d4d98a850642594e1987d5c8c14440dae0af8
19,673
ipynb
Jupyter Notebook
notebooks/copies/lectures/T06_Linear_Algebra_Examples.ipynb
robkravec/sta-663-2021
4dc8018f7b172eaf81da9edc33174768ff157939
[ "MIT" ]
null
null
null
notebooks/copies/lectures/T06_Linear_Algebra_Examples.ipynb
robkravec/sta-663-2021
4dc8018f7b172eaf81da9edc33174768ff157939
[ "MIT" ]
null
null
null
notebooks/copies/lectures/T06_Linear_Algebra_Examples.ipynb
robkravec/sta-663-2021
4dc8018f7b172eaf81da9edc33174768ff157939
[ "MIT" ]
null
null
null
17.612355
442
0.451533
true
1,936
Qwen/Qwen-72B
1. YES 2. YES
0.928409
0.930458
0.863846
__label__eng_Latn
0.616989
0.845336
# Import

```python
#region
import math
from sympy import *
import matplotlib.pyplot as plt
from numpy import linspace
import numpy as np
#endregion

t = symbols('t')
f = symbols('f', cls=Function)
```

# Input

```python
#read input
#region
def ReadArray(f):
    line = f.readline()
    result = list(map(lambda x: float(N(x)), line.split()))
    return result

def ReadMatrix(f):
    listCoef = []
    line = f.readline()
    while(line.strip() != ''):
        coef = list(map(lambda x: float(N(x)), line.split()))
        listCoef.append(coef)
        line = f.readline()
    #print('listCoef: ')
    #print(listCoef)
    return listCoef

def RandN(listCoef):
    # R & N
    R = listCoef[0][0]
    N = math.inf
    for coef in listCoef:
        if(R > coef[0]):
            R = coef[0]
        coef.pop(0)
        if(N > len(coef)):
            N = len(coef)
    if R <= 0:
        raise ValueError("invalid input: radius <= 0")
    return (R,N)
#endregion
```

# Main function

```python
def calculate(initial, listCoef, N):
    result = initial # array of series coefficients c_i
    k = len(listCoef)-1 # rows 0..k-1 hold the coefficient series a_i, row k holds f
    for n in range(0,N-k):
        c = 0
        offset = 1
        for i in range(n+1,n+k+1):
            offset *= i # offset = (n+1)*(n+2)*...*(n+k) = (n+k)!/n!
        # start calculating c_{n+k}
        for m in range(0,n+1):
            mult = 1
            for i in range(0,k):
                c += listCoef[i][n-m] * result[m+i] * mult
                mult *= m+i+1
        c = (listCoef[k][n]-c)/offset
        result.append(c)
    return result

#Program
def Polynomial(inputPath):
    f = open(inputPath,"r")
    initial = ReadArray(f)
    listCoef = ReadMatrix(f)
    f.close()
    R,N = RandN(listCoef)
    result = calculate(initial, listCoef, N)
    return (R, result)

def Restore(array):
    pass # unused placeholder
```

# Plot and save

```python
#region
def Save(result, outputPath, mode):
    f = open(outputPath, mode)
    f.write("Radius of convergence = " + str(result[0]) + ", Result: \n")
    f.write(str(result[1]))
    f.close()

def Plotf(f, interval):
    t_vals = linspace(interval[0], interval[1], 1000)
    lam_x = lambdify(t, f, modules = ['numpy'])
    x_vals = lam_x(t_vals)
    plt.plot(t_vals, x_vals)

def Plot(result, start, end, g = None):
    f = 0
    power = 0
    for i in result:
        f += i * (t ** power)
        power += 1
    Plotf(f, (start, end))
    if g is not None:
        Plotf(g, (start, end))
    return f
#endregion

#Frobenius
```

# Test

```python
test1 = 'example1.txt'
test2 = 'example2.txt'
output = 'outputPath_1.txt'
```

```python
R, array = Polynomial(test1)
print("Radius of convergence = " ,str(R), ", Result:")
np.set_printoptions(precision=1)
print(np.array(array))
f = Plot(array, -2 , 2, g = sin(3*t))
print(f.evalf(2))
Save((R,array),output,"w")
```

```python
R, array = Polynomial(test2)
print("Radius of convergence = " + str(R) + ", Result: \n")
print(array)
Plot(array, -1 , 1, g = sin(t))
Save((R,array),output,"w")
```

```python
Plot([1,2], -2 , 2, g = sin(3*t))
```

```python

```

```python

```
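As a quick sanity check (not part of the original notebook), `calculate` can be called directly, bypassing the file input. Assuming the convention inferred from the code — rows $0..k-1$ of `listCoef` hold the power series of the coefficients $a_i(t)$ in $y^{(k)} + \sum_{i<k} a_i(t)\,y^{(i)} = f(t)$, and row $k$ holds the series of $f(t)$ — the equation $y'' + y = 0$ with $y(0)=0$, $y'(0)=1$ should reproduce the sine series:

```python
# Sanity check: y'' + y = 0, y(0)=0, y'(0)=1 -> sine series
# (radius entries are already stripped here, since RandN is bypassed)
coeffs = calculate([0, 1],
                   [[1, 0, 0, 0, 0, 0],   # a_0(t) = 1
                    [0, 0, 0, 0, 0, 0],   # a_1(t) = 0
                    [0, 0, 0, 0, 0, 0]],  # f(t) = 0
                   6)
print(coeffs) # expect [0, 1, 0.0, -1/6, 0.0, 1/120], i.e. sin(t)
```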
c7a0c1915dff07fa7f2d162fcbd338d44c7eddae
52,856
ipynb
Jupyter Notebook
Topic 5 - Solving Differential Equations/26.2.PowerSeries/PowerSeries/.ipynb_checkpoints/PowerSeries-checkpoint.ipynb
dthanhqhtt/MI3040-Numerical-Analysis
cf38ea7e6dc834b19e7cffef8b867a02ba472eae
[ "MIT" ]
7
2020-11-23T17:00:20.000Z
2022-01-31T06:28:40.000Z
Topic 5 - Solving Differential Equations/26.2.PowerSeries/PowerSeries/.ipynb_checkpoints/PowerSeries-checkpoint.ipynb
dthanhqhtt/MI3040-Numerical-Analysis
cf38ea7e6dc834b19e7cffef8b867a02ba472eae
[ "MIT" ]
2
2020-09-22T17:08:05.000Z
2020-12-20T12:00:59.000Z
Topic 5 - Solving Differential Equations/26.2.PowerSeries/PowerSeries/.ipynb_checkpoints/PowerSeries-checkpoint.ipynb
dthanhqhtt/MI3040-Numerical-Analysis
cf38ea7e6dc834b19e7cffef8b867a02ba472eae
[ "MIT" ]
5
2020-12-03T05:11:49.000Z
2021-09-28T03:33:35.000Z
161.639144
16,344
0.892803
true
958
Qwen/Qwen-72B
1. YES 2. YES
0.855851
0.771844
0.660583
__label__eng_Latn
0.309343
0.373087
```python
import numpy
%matplotlib notebook
import matplotlib.pyplot
import sympy
```

# Elliptical Turns

## Overview

At an intersection, a car has to make a smooth turn between two road endpoints. To approximate this effect, I will attempt to fit an ellipse between the two roads. The ellipse must pass through the roads' endpoints at the intersection, and it must also be tangential to the roads at those points.

## Defining Roads

We can define the roads parametrically in two dimensions. The general equation for a line from point $\langle x_a, y_a \rangle$ to $\langle x_b, y_b \rangle$ in parametric form is

$$ \ell(t) = \langle x_a + (x_b - x_a) t, y_a + (y_b - y_a) t \rangle \quad \forall t \in [0, 1] $$

In particular, we will focus on two perpendicular, linear roads. The first goes from $\langle 3, 2 \rangle$ to $\langle 3, 3 \rangle$. The second from $\langle 4, 4 \rangle$ to $\langle 5, 4 \rangle$. These are defined by the equations

$$ \ell_a(t) = \langle 3, 2 + t \rangle \quad \forall t \in [0, 1] $$
$$ \ell_b(t) = \langle 4 + t, 4 \rangle \quad \forall t \in [0, 1] $$

```python
# Each component must have a dependence on `t` for NumPy to broadcast correctly
l_a = lambda t: numpy.array([[3 + 0 * t], [2 + t]])
l_b = lambda t: numpy.array([[4 + t], [4 + 0 * t]])
```

Now we can visualize the problem a little better by plotting the roads.

```python
fig, ax = matplotlib.pyplot.subplots()

t_a = t_b = numpy.linspace(0, 1, 100)

ax.plot(l_a(t_a)[0].flatten(), l_a(t_a)[1].flatten(), 'r')
ax.plot(l_b(t_b)[0].flatten(), l_b(t_b)[1].flatten(), 'b')

matplotlib.pyplot.show()
```

    <IPython.core.display.Javascript object>

We also need to determine the partial derivatives of these roads. The general equation for the partial derivatives of a line from point $\langle x_a, y_a \rangle$ to $\langle x_b, y_b \rangle$ in parametric form is

$$ \frac{\partial \ell(t)}{\partial t} = \langle x_b - x_a, y_b - y_a \rangle \quad \forall t \in [0, 1] $$

For this example case in particular, the partial derivatives evaluate to

$$ \frac{\partial \ell_a(t)}{\partial t} = \langle 0, 1 \rangle \quad \forall t \in [0, 1] $$
$$ \frac{\partial \ell_b(t)}{\partial t} = \langle 1, 0 \rangle \quad \forall t \in [0, 1] $$

```python
dl_adt = lambda t: numpy.array([[0], [1]])
dl_bdt = lambda t: numpy.array([[1], [0]])
```

## Defining Constraints

With the road equations defined, we need to determine what constraints to apply to a general ellipse equation so it fits our needs. The general equation for an ellipse in parametric form is

$$ e(t) = \langle h + a \cos t, k + b \sin t \rangle \quad \forall t \in [0, 2 \pi) $$

```python
# cos for the x component and sin for the y component, matching the equation above
ellipse = lambda t, a, b, h, k: numpy.array([
    [h + a * numpy.cos(t)],
    [k + b * numpy.sin(t)]
])
```

We know that we want it to intersect the roads at their endpoints $\langle x_a, y_a \rangle$ and $\langle x_b, y_b \rangle$. Therefore, one set of constraints on the ellipse must be

$$ \langle h + a \cos t_a, k + b \sin t_a \rangle = \langle x_a, y_a \rangle $$
$$ \langle h + a \cos t_b, k + b \sin t_b \rangle = \langle x_b, y_b \rangle $$

Furthermore, the ellipse must be tangential to the roads at the same points $\langle x_a, y_a \rangle$ and $\langle x_b, y_b \rangle$. To determine this, the derivatives of the ellipse are needed.
They are $$ \frac{\partial e}{\partial t}(t) = \langle - a \sin t, b \cos t \rangle \quad \forall t \in [0, 2\pi) $$ From this, the final set of constraints follows from Lagrangian multipliers $$ \langle -a \sin t_a, b \cos t_a \rangle = \lambda \frac{\partial \ell_a(t_a)}{\partial t} = \lambda \left\langle \frac{\partial \ell_a(t_a)}{\partial x}, \frac{\partial \ell_a(t_a)}{\partial y} \right\rangle $$ $$ \langle -a \sin t_b, b \cos t_b \rangle = \mu \frac{\partial \ell_b(t_b)}{\partial t} = \mu \left\langle \frac{\partial \ell_b(t_b)}{\partial x}, \frac{\partial \ell_b(t_b)}{\partial y} \right\rangle$$ ## Solving While it might be possible to solve these eight equations analytically, we will save ourselves some algebra and use numerical methods. To do so, we have to stage our problem a little differently: We are going to determine a root of a vector function using Newton's Method. The vector function comes directly from our constraints determined above, but offset to equal zero. Let the vector $\vec{x}$ be equal to $$ \vec{x} = \left\langle a, b, h, k, t_a, t_b, \lambda, \mu, x_a, y_a, x_b, y_b, \frac{\partial \ell_a(t_a)}{\partial x}, \frac{\partial \ell_a(t_a)}{\partial y}, \frac{\partial \ell_b(t_b)}{\partial x}, \frac{\partial \ell_b(t_b)}{\partial y} \right\rangle $$ Then the vector function $\vec{F}(\vec{x})$, from the constraints above, is defined to be $$ \vec{F}(\vec{x}) = \begin{pmatrix} x_2 + x_0 \cos x_4 - x_8 \\ x_3 + x_1 \sin x_4 - x_9 \\ x_2 + x_0 \cos x_5 - x_{10} \\ x_3 + x_1 \sin x_5 - x_{11} \\ -x_0 \sin x_4 - x_6 x_{12} \\ x_1 \cos x_4 - x_6 x_{13} \\ -x_0 \sin x_5 - x_7 x_{14} \\ x_1 \cos x_5 - x_7 x_{15} \end{pmatrix} $$ ```python F = lambda x: numpy.array([ x[2] + x[0] * numpy.cos(x[4]) - x[8], x[3] + x[1] * numpy.sin(x[4]) - x[9], x[2] + x[0] * numpy.cos(x[5]) - x[10], x[3] + x[1] * numpy.sin(x[5]) - x[11], -x[0] * numpy.sin(x[4]) - x[6] * x[12], x[1] * numpy.cos(x[4]) - x[6] * x[13], -x[0] * numpy.sin(x[5]) - x[7] * x[14], x[1] * numpy.cos(x[5]) - x[7] * x[15] ]) ``` To apply Newton's Method, the Jacobian matrix is needed. I would much prefer not to take 64 derivatives, so we will use SymPy to analytically determine the matrix so we don't have to compute it each time. ```python x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13, x14, x15 = sympy.symbols('x:16') sympy.Matrix([ x2 + x0 * sympy.cos(x4) - x8, x3 + x1 * sympy.sin(x4) - x9, x2 + x0 * sympy.cos(x5) - x10, x3 + x1 * sympy.sin(x5) - x11, -x0 * sympy.sin(x4) - x6 * x12, x1 * sympy.cos(x4) - x6 * x13, -x0 * sympy.sin(x5) - x7 * x14, x1 * sympy.cos(x5) - x7 * x15 ]).jacobian(sympy.Matrix([x0, x1, x2, x3, x4, x5, x6, x7])) ``` Matrix([ [ cos(x4), 0, 1, 0, -x0*sin(x4), 0, 0, 0], [ 0, sin(x4), 0, 1, x1*cos(x4), 0, 0, 0], [ cos(x5), 0, 1, 0, 0, -x0*sin(x5), 0, 0], [ 0, sin(x5), 0, 1, 0, x1*cos(x5), 0, 0], [-sin(x4), 0, 0, 0, -x0*cos(x4), 0, -x12, 0], [ 0, cos(x4), 0, 0, -x1*sin(x4), 0, -x13, 0], [-sin(x5), 0, 0, 0, 0, -x0*cos(x5), 0, -x14], [ 0, cos(x5), 0, 0, 0, -x1*sin(x5), 0, -x15]]) Using the SymPy output, we can copy the Jacobian matrix. I think you could use `sympy.lambdify` here instead, but I decided to copy it by hand. 
$$ J(\vec{x}) = \begin{pmatrix} \cos x_4 & 0 & 1 & 0 & -x_0 \sin x_4 & 0 & 0 & 0 \\ 0 & \sin x_4 & 0 & 1 & x_1 \cos x_4 & 0 & 0 & 0 \\ \cos x_5 & 0 & 1 & 0 & 0 & -x_0 \sin x_5 & 0 & 0 \\ 0 & \sin x_5 & 0 & 1 & 0 & x_1 \cos x_5 & 0 & 0 \\ -\sin x_4 & 0 & 0 & 0 & -x_0 \cos x_4 & 0 & -x_{12} & 0 \\ 0 & \cos x_4 & 0 & 0 & -x_1 \sin x_4 & 0 & -x_{13} & 0 \\ -\sin x_5 & 0 & 0 & 0 & 0 & -x_0 \cos x_5 & 0 & -x_{14} \\ 0 & \cos x_5 & 0 & 0 & 0 & -x_1 \sin x_5 & 0 & -x_{15} \end{pmatrix} $$ ```python J = lambda x: numpy.array([ [ numpy.cos(x[4]), 0, 1, 0, -x[0] * numpy.sin(x[4]), 0, 0, 0 ], [ 0, numpy.sin(x[4]), 0, 1, x[1] * numpy.cos(x[4]), 0, 0, 0 ], [ numpy.cos(x[5]), 0, 1, 0, 0, -x[0] * numpy.sin(x[5]), 0, 0 ], [ 0, numpy.sin(x[5]), 0, 1, 0, x[1] * numpy.cos(x[5]), 0, 0 ], [ -numpy.sin(x[4]), 0, 0, 0, -x[0] * numpy.cos(x[4]), 0, -x[12], 0 ], [ 0, numpy.cos(x[4]), 0, 0, -x[1] * numpy.sin(x[4]), 0, -x[13], 0 ], [ -numpy.sin(x[5]), 0, 0, 0, 0, -x[0] * numpy.cos(x[5]), 0, -x[14] ], [ 0, numpy.cos(x[5]), 0, 0, 0, -x[1] * numpy.sin(x[5]), 0, -x[15] ] ]).astype(numpy.float64) # This will remove the nested np.arrays ``` Now we are ready to apply Newton's Method. The steps for solving a root problem in multiple dimensions are 1. Initialize an $\vec{x}_0$ to a point close to the desired root. In our case, a vector selected from the Gaussian distribution will do well most of the time. 2. Calculate $J(\vec{x}_k)$ and $\vec{F}(\vec{x}_k)$. 3. Solve the linear system of equations $J(\vec{x}_k) \vec{y}_k = - \vec{F}(\vec{x}_k)$ for $\vec{y}_k$ using Gaussian elimination (or appropriate method). 4. Calculate $\vec{x}_{k + 1} = \vec{x}_k + \vec{y}_k$. 5. Repeat until $\lVert \vec{x}_{k + 1} - \vec{x}_k \rVert < \varepsilon$. ```python def newtons_method(F, J, x_0, epsilon): xs, Js, Fs, ys = [x_0], [], [], [] do = True while do: Js.append(J(xs[-1])) Fs.append(F(xs[-1])) xs.append(xs[-1] + numpy.linalg.solve(Js[-1], -Fs[-1])) do = numpy.linalg.norm(xs[-1] - xs[-2]) >= epsilon return xs, Js, Fs ``` It is important to realize that our vector function depends on several constants that we are not trying to solve for. To handle this, we will pass lambda functions into Newton's method to concatenate the constants. ```python consts = numpy.vstack([l_a(1), l_b(0), dl_adt(1), dl_bdt(0)]) x_0 = numpy.random.randn(8, 1) xs, Js, Fs = newtons_method(lambda x: F(numpy.vstack([x, consts])), lambda x: J(numpy.vstack([x, consts])), x_0, 0.01) numpy.around(xs[-1], 3) ``` array([[ 1.000000e+00], [-1.000000e+00], [ 4.000000e+00], [ 3.000000e+00], [ 4.853761e+03], [ 4.849048e+03], [ 1.000000e+00], [ 1.000000e+00]]) Now we have a solution to our system of equations. Let's plot it to confirm that it is in fact a solution. ```python fig, ax = matplotlib.pyplot.subplots() t_a = t_b = numpy.linspace(0, 1, 100) t_e = numpy.linspace(0, 2 * numpy.pi, 100) ax.plot(l_a(t_a)[0].flatten(), l_a(t_a)[1].flatten(), 'r') ax.plot(l_b(t_b)[0].flatten(), l_b(t_b)[1].flatten(), 'b') a, b, h, k = xs[-1][0], xs[-1][1], xs[-1][2], xs[-1][3] ax.plot(ellipse(t_e, a, b, h, k)[0].flatten(), ellipse(t_e, a, b, h, k)[1].flatten(), 'g') matplotlib.pyplot.show() ``` <IPython.core.display.Javascript object> Perfect! There is an ellipse that smoothly connects these two roads. ## Issues Unfortunately, this doesn't always work as smoothly. There are many ways to form roads that can not have an ellipse connecting them, even though a smooth turn does exist. 
Let's now consider the roads defined by the functions

$$ \ell_a(t) = \left\langle 3 + \frac{t}{2}, \frac{5}{2} + t \right\rangle \quad \forall t \in [0, 1] $$
$$ \ell_b(t) = \langle 4 + t, 4 \rangle \quad \forall t \in [0, 1] $$

```python
l_a = lambda t: numpy.array([[3 + t / 2], [2.5 + t]])
l_b = lambda t: numpy.array([[4 + t], [4 + 0 * t]])
```

The partial derivatives of these roads are easily determined by hand

$$ \frac{\partial \ell_a(t)}{\partial t} = \left\langle \frac{1}{2}, 1 \right\rangle \quad \forall t \in [0, 1] $$
$$ \frac{\partial \ell_b(t)}{\partial t} = \langle 1, 0 \rangle \quad \forall t \in [0, 1] $$

```python
dl_adt = lambda t: numpy.array([[1/2], [1]])
dl_bdt = lambda t: numpy.array([[1], [0]])
```

Let's plot them to see if this is a meaningful intersection.

```python
fig, ax = matplotlib.pyplot.subplots()

t_a = t_b = numpy.linspace(0, 1, 100)

ax.plot(l_a(t_a)[0].flatten(), l_a(t_a)[1].flatten(), 'r')
ax.plot(l_b(t_b)[0].flatten(), l_b(t_b)[1].flatten(), 'b')

matplotlib.pyplot.show()
```

    <IPython.core.display.Javascript object>

Clearly, a smooth turn is possible between these two roads. Let's see if our method will determine it.

```python
consts = numpy.vstack([l_a(1), l_b(0), dl_adt(1), dl_bdt(0)])

x_0 = numpy.random.randn(8, 1)

xs, Js, Fs = newtons_method(lambda x: F(numpy.vstack([x, consts])), lambda x: J(numpy.vstack([x, consts])), x_0, 0.01)
numpy.around(xs[-1], 3)
```

Nope. Newton's method fails to converge. Let's confirm that moving the endpoint a little farther down will yield convergence. Instead of ending at $t_a = 1$, we will continue to $t_a = 1.01$.

```python
consts = numpy.vstack([l_a(1.01), l_b(0), dl_adt(1.01), dl_bdt(0)])

x_0 = numpy.random.randn(8, 1)

xs, Js, Fs = newtons_method(lambda x: F(numpy.vstack([x, consts])), lambda x: J(numpy.vstack([x, consts])), x_0, 0.01)
numpy.around(xs[-1], 3)
```

    array([[  2.487],
           [ 24.5  ],
           [  4.   ],
           [-20.5  ],
           [  1.771],
           [ -4.712],
           [ -4.875],
           [ -2.487]])

This time, Newton's method converged. Let's plot the solution to confirm its legitimacy.

```python
fig, ax = matplotlib.pyplot.subplots()

t_a = numpy.linspace(0, 1.01, 100)
t_b = numpy.linspace(0, 1, 100)
t_e = numpy.linspace(0, 2 * numpy.pi, 1000)

ax.plot(l_a(t_a)[0].flatten(), l_a(t_a)[1].flatten(), 'r')
ax.plot(l_b(t_b)[0].flatten(), l_b(t_b)[1].flatten(), 'b')

a, b, h, k = xs[-1][0], xs[-1][1], xs[-1][2], xs[-1][3]
ax.plot(ellipse(t_e, a, b, h, k)[0].flatten(), ellipse(t_e, a, b, h, k)[1].flatten(), 'g')

matplotlib.pyplot.show()
```

    <IPython.core.display.Javascript object>

That certainly looks like a valid curve.

## Conclusion

This is unfortunate information. In order to apply this method, carefully selected endpoints must be chosen to yield convergence. However, I am not aware of a simple way to detect this issue or its fix. Thus, I will not be using this method.
a9a9973b711d6b982d99a83541e930e79439cc72
309,601
ipynb
Jupyter Notebook
Environment/EllipticalTurns.ipynb
RobertDurfee/RLTrafficIntersections
88272512e068fe41d7389faf476c2b8e63e36002
[ "MIT" ]
1
2021-02-03T03:24:57.000Z
2021-02-03T03:24:57.000Z
Environment/EllipticalTurns.ipynb
RobertDurfee/RLTrafficIntersections
88272512e068fe41d7389faf476c2b8e63e36002
[ "MIT" ]
1
2021-02-03T09:12:19.000Z
2021-02-03T15:00:03.000Z
Environment/EllipticalTurns.ipynb
RobertDurfee/RLTrafficIntersections
88272512e068fe41d7389faf476c2b8e63e36002
[ "MIT" ]
null
null
null
82.253188
47,091
0.706884
true
4,851
Qwen/Qwen-72B
1. YES 2. YES
0.896251
0.853913
0.76532
__label__eng_Latn
0.859441
0.616428
<a href="https://colab.research.google.com/github/ebatty/MathToolsforNeuroscience/blob/master/Week2/Week2Tutorial1.ipynb" target="_parent"></a> # Week 2: Linear Algebra II # Tutorial 1 # [insert your name] **Important reminders**: Before starting, click "File -> Save a copy in Drive". Produce a pdf for submission by "File -> Print" and then choose "Save to PDF". To complete this tutorial, you should have watched Videos 2.1 through 2.6. **Credits:** The videos you watched for this week were from 3Blue1Brown. Some elements of this problem set are from or inspired by https://openedx.seas.gwu.edu/courses/course-v1:GW+EngComp4+2019/about. In particular, we are using their `plot_linear_transformation` and `plot_linear_transformations` functions, and the demonstration of the additional transformation of a matrix inverse (end of Exercise 2) ```python # Imports import numpy as np import matplotlib.pyplot as plt import matplotlib import scipy.linalg # Plotting parameters matplotlib.rcParams.update({'font.size': 22}) ``` ```python # @title Plotting functions import numpy from numpy.linalg import inv, eig from math import ceil from matplotlib import pyplot, ticker, get_backend, rc from mpl_toolkits.mplot3d import Axes3D from itertools import cycle _int_backends = ['GTK3Agg', 'GTK3Cairo', 'MacOSX', 'nbAgg', 'Qt4Agg', 'Qt4Cairo', 'Qt5Agg', 'Qt5Cairo', 'TkAgg', 'TkCairo', 'WebAgg', 'WX', 'WXAgg', 'WXCairo'] _backend = get_backend() # get current backend name # shrink figsize and fontsize when using %matplotlib notebook if _backend in _int_backends: fontsize = 4 fig_scale = 0.75 else: fontsize = 5 fig_scale = 1 grey = '#808080' gold = '#cab18c' # x-axis grid lightblue = '#0096d6' # y-axis grid green = '#008367' # x-axis basis vector red = '#E31937' # y-axis basis vector darkblue = '#004065' pink, yellow, orange, purple, brown = '#ef7b9d', '#fbd349', '#ffa500', '#a35cff', '#731d1d' quiver_params = {'angles': 'xy', 'scale_units': 'xy', 'scale': 1, 'width': 0.012} grid_params = {'linewidth': 0.5, 'alpha': 0.8} def set_rc(func): def wrapper(*args, **kwargs): rc('font', family='serif', size=fontsize) rc('figure', dpi=200) rc('axes', axisbelow=True, titlesize=5) rc('lines', linewidth=1) func(*args, **kwargs) return wrapper @set_rc def plot_vector(vectors, tails=None): ''' Draw 2d vectors based on the values of the vectors and the position of their tails. Parameters ---------- vectors : list. List of 2-element array-like structures, each represents a 2d vector. tails : list, optional. List of 2-element array-like structures, each represents the coordinates of the tail of the corresponding vector in vectors. If None (default), all tails are set at the origin (0,0). If len(tails) is 1, all tails are set at the same position. Otherwise, vectors and tails must have the same length. Examples -------- >>> v = [(1, 3), (3, 3), (4, 6)] >>> plot_vector(v) # draw 3 vectors with their tails at origin >>> t = [numpy.array((2, 2))] >>> plot_vector(v, t) # draw 3 vectors with their tails at (2,2) >>> t = [[3, 2], [-1, -2], [3, 5]] >>> plot_vector(v, t) # draw 3 vectors with 3 different tails ''' vectors = numpy.array(vectors) assert vectors.shape[1] == 2, "Each vector should have 2 elements." if tails is not None: tails = numpy.array(tails) assert tails.shape[1] == 2, "Each tail should have 2 elements." 
else: tails = numpy.zeros_like(vectors) # tile vectors or tails array if needed nvectors = vectors.shape[0] ntails = tails.shape[0] if nvectors == 1 and ntails > 1: vectors = numpy.tile(vectors, (ntails, 1)) elif ntails == 1 and nvectors > 1: tails = numpy.tile(tails, (nvectors, 1)) else: assert tails.shape == vectors.shape, "vectors and tail must have a same shape" # calculate xlimit & ylimit heads = tails + vectors limit = numpy.max(numpy.abs(numpy.hstack((tails, heads)))) limit = numpy.ceil(limit * 1.2) # add some margins figsize = numpy.array([2,2]) * fig_scale figure, axis = pyplot.subplots(figsize=figsize) axis.quiver(tails[:,0], tails[:,1], vectors[:,0], vectors[:,1], color=darkblue, angles='xy', scale_units='xy', scale=1) axis.set_xlim([-limit, limit]) axis.set_ylim([-limit, limit]) axis.set_aspect('equal') # if xticks and yticks of grid do not match, choose the finer one xticks = axis.get_xticks() yticks = axis.get_yticks() dx = xticks[1] - xticks[0] dy = yticks[1] - yticks[0] base = max(int(min(dx, dy)), 1) # grid interval is always an integer loc = ticker.MultipleLocator(base=base) axis.xaxis.set_major_locator(loc) axis.yaxis.set_major_locator(loc) axis.grid(True, **grid_params) # show x-y axis in the center, hide frames axis.spines['left'].set_position('center') axis.spines['bottom'].set_position('center') axis.spines['right'].set_color('none') axis.spines['top'].set_color('none') @set_rc def plot_transformation_helper(axis, matrix, *vectors, unit_vector=True, unit_circle=False, title=None): """ A helper function to plot the linear transformation defined by a 2x2 matrix. Parameters ---------- axis : class matplotlib.axes.Axes. The axes to plot on. matrix : class numpy.ndarray. The 2x2 matrix to visualize. *vectors : class numpy.ndarray. The vector(s) to plot along with the linear transformation. Each array denotes a vector's coordinates before the transformation and must have a shape of (2,). Accept any number of vectors. unit_vector : bool, optional. Whether to plot unit vectors of the standard basis, default to True. unit_circle: bool, optional. Whether to plot unit circle, default to False. title: str, optional. Title of the plot. 
""" assert matrix.shape == (2,2), "the input matrix must have a shape of (2,2)" grid_range = 20 x = numpy.arange(-grid_range, grid_range+1) X_, Y_ = numpy.meshgrid(x,x) I = matrix[:,0] J = matrix[:,1] X = I[0]*X_ + J[0]*Y_ Y = I[1]*X_ + J[1]*Y_ origin = numpy.zeros(1) # draw grid lines for i in range(x.size): axis.plot(X[i,:], Y[i,:], c=gold, **grid_params) axis.plot(X[:,i], Y[:,i], c=lightblue, **grid_params) # draw (transformed) unit vectors if unit_vector: axis.quiver(origin, origin, [I[0]], [I[1]], color=green, **quiver_params) axis.quiver(origin, origin, [J[0]], [J[1]], color=red, **quiver_params) # draw optional vectors color_cycle = cycle([pink, darkblue, orange, purple, brown]) if vectors: for vector in vectors: color = next(color_cycle) vector_ = matrix @ vector.reshape(-1,1) axis.quiver(origin, origin, [vector_[0]], [vector_[1]], color=color, **quiver_params) # draw optional unit circle if unit_circle: alpha = numpy.linspace(0, 2*numpy.pi, 41) circle = numpy.vstack((numpy.cos(alpha), numpy.sin(alpha))) circle_trans = matrix @ circle axis.plot(circle_trans[0], circle_trans[1], color=red, lw=0.8) # hide frames, set xlimit & ylimit, set title limit = 4 axis.spines['left'].set_position('center') axis.spines['bottom'].set_position('center') axis.spines['left'].set_linewidth(0.3) axis.spines['bottom'].set_linewidth(0.3) axis.spines['right'].set_color('none') axis.spines['top'].set_color('none') axis.set_xlim([-limit, limit]) axis.set_ylim([-limit, limit]) if title is not None: axis.set_title(title) @set_rc def plot_linear_transformation(matrix, *vectors, unit_vector=True, unit_circle=False): """ Plot the linear transformation defined by a 2x2 matrix using the helper function plot_transformation_helper(). It will create 2 subplots to visualize some vectors before and after the transformation. Parameters ---------- matrix : class numpy.ndarray. The 2x2 matrix to visualize. *vectors : class numpy.ndarray. The vector(s) to plot along with the linear transformation. Each array denotes a vector's coordinates before the transformation and must have a shape of (2,). Accept any number of vectors. unit_vector : bool, optional. Whether to plot unit vectors of the standard basis, default to True. unit_circle: bool, optional. Whether to plot unit circle, default to False. """ figsize = numpy.array([4,2]) * fig_scale figure, (axis1, axis2) = pyplot.subplots(1, 2, figsize=figsize) plot_transformation_helper(axis1, numpy.identity(2), *vectors, unit_vector=unit_vector, unit_circle=unit_circle, title='Before transformation') plot_transformation_helper(axis2, matrix, *vectors, unit_vector=unit_vector, unit_circle=unit_circle, title='After transformation') @set_rc def plot_linear_transformations(*matrices, unit_vector=True, unit_circle=False): """ Plot the linear transformation defined by a sequence of n 2x2 matrices using the helper function plot_transformation_helper(). It will create n+1 subplots to visualize some vectors before and after each transformation. Parameters ---------- *matrices : class numpy.ndarray. The 2x2 matrices to visualize. Accept any number of matrices. unit_vector : bool, optional. Whether to plot unit vectors of the standard basis, default to True. unit_circle: bool, optional. Whether to plot unit circle, default to False. 
""" nplots = len(matrices) + 1 nx = 2 ny = ceil(nplots/nx) figsize = numpy.array([2*nx, 2*ny]) * fig_scale figure, axes = pyplot.subplots(nx, ny, figsize=figsize) for i in range(nplots): # fig_idx if i == 0: matrix_trans = numpy.identity(2) title = 'Before transformation' else: matrix_trans = matrices[i-1] @ matrix_trans if i == 1: title = 'After {} transformation'.format(i) else: title = 'After {} transformations'.format(i) plot_transformation_helper(axes[i//nx, i%nx], matrix_trans, unit_vector=unit_vector, unit_circle=unit_circle, title=title) # hide axes of the extra subplot (only when nplots is an odd number) if nx*ny > nplots: axes[-1,-1].axis('off') ``` # Key concept review & coding tips ## Linear transformations and matrices * A matrix is basically a table of numbers. * We can represent matrices with numpy arrays, which we create as a list of rows: \begin{bmatrix} 4 & 1 & 2\\ 3 & 2 & 0\ \end{bmatrix} would be `np.array([[4, 1, 2], [3, 2, 0]])` * Linear transformations take in an input vector and outputs a transformed vector. Under a linear transformation, all grid line remain parallel and evenly spaced and the origin remains fixed (it must preserve vector addition and scalar multiplication). * Matrices represent linear transformations: each column corresponds to where the corresponding standard basis vector ends up after the transformation * We can think of the matrix vector multiplication $A\bar{x}=\bar{b}$ as a linear transformation where $A$ acts on $\bar{x}$ to produce $\bar{b}$. An alternate view is to think of it as solving a system of linear equations. * `np.linalg.solve` solves matrix vector equations like the above * As an example, solving $A\bar{x}=\bar{b}$ is equivalent to solving the system of linear equations: $$ \begin{align} 2x_1 + 3x_2 &= 6 \\ x_1 + 4x_2 &= 1 \end{align}$$ $$\text{if } A = \begin{bmatrix} 2 & 3 \\ 1 & 4\ \end{bmatrix}, \bar{b} =\begin{bmatrix} 6 \\ 1\ \end{bmatrix}$$ ## Matrix multiplication * We can envision matrix multiplication as the composition of transformations. If C = AB, element $c_{ij}$ (the element of C in the ith row and jth column) equals the dot product of the ith row of A and the jth column of B. * There are several ways to do matrix multiplication in Python: we can use `np.dot(A, B)`, `np.matmul(A, B)` or use a special operator @ so `A @ B` ## Determinants * The determinant of a matrix (det A) is a scalar value that can be viewed as describing the area changes induced by the corresponding linear transformation. It is negative if the linear transformation reverses the orientation of the space. * `np.linalg.det(A)` computes the determinant ## Inverse matrices, column space, and null space * We can sometimes take the inverse of a matrix so that $A^{-1}A = I$ where $I$ is the identity matrix (all zeros except for ones on the diagonal). * We can use `np.linalg.inv(A)` to compute $A^{-1}$ when it exists * `np.eye(d)` gives us the identity matrix of dimension d * The column space of a matrix is the span of the columns of the matrix. This is equivalent to the range of the linear transformation where, in informal language, the range is everywhere that can be "gotten to" by the transformation. In other words, the range is the set of all vectors that the linear transformation maps to. * The rank of a matrix is the dimension of the column space. * `np.linalg.matrix_rank(A)` computes the rank * The null space of a matrix is the set of all vectors that land on the origin after the resulting transformation. 
In other words, it is the set of all solutions of $A\bar{x} = \bar{0}$.
* You can use `scipy.linalg.null_space` to find a basis for the null space of a matrix.
* If the matrix A is $m$ x $n$, the null space must be a subspace of $R^n$ and the column space must be a subspace of $R^m$.

# Exercise 1: Computation corner

For each computation below, please calculate it 1) by-hand and 2) using code. Check that the answers match! For by-hand calculation, please show some work when possible. For example, for matrix multiplication, write out the computation of each element in the resulting matrix so it looks something like this:

$$A = \begin{bmatrix} 5*2+4*1 & 3*5+1*2 \\ 0*1+1*2 & 3*2+4*5 \\ \end{bmatrix} $$

Note that these are completely made up numbers for demonstration purposes - the above numbers don't make sense for a matrix multiplication.

## A) Matrix multiplication

Please compute C = AB where

$$A = \begin{bmatrix} 5 & 3 \\ 0 & 2 \\ \end{bmatrix}, B = \begin{bmatrix} 1 & 5 \\ 4 & 3 \\ \end{bmatrix} $$

**Your math answer**

$$C = \begin{bmatrix} 1 * 5 + 4 * 3 & 5*5 + 3 * 3 \\ 1 * 0 + 4 * 2 & 5*0 + 3 * 2 \\ \end{bmatrix}$$

```python
# Your code answer
A = np.array([[5, 3], [0, 2]])
B = np.array([[1, 5], [4, 3]])
np.matmul(A,B)
```

## B) Matrix multiplication

Please compute Z = XY where

$$X = \begin{bmatrix} 3 & 2 & 1 \\ 1 & 2 & 7 \\ \end{bmatrix}, Y = \begin{bmatrix} 0 & 1 \\ 2 & 4 \\ 5 & 1 \\ \end{bmatrix} $$

Before computing, figure out what the dimensions of Z will be (no need to explicitly answer this).

**Your math answer**

Z will have the number of rows of X and the number of columns of Y, so it is 2 rows by 2 columns.

$$Z = \begin{bmatrix} 3 * 0 + 2 * 2 + 1 * 5 & 3 * 1 + 2 * 4 + 1 * 1 \\ 1 * 0 + 2 * 2 + 5 * 7 & 1 * 1 + 2 * 4 + 1 * 7\\ \end{bmatrix}$$

```python
A = np.array([[3, 2, 1], [1, 2, 7]])
B = np.array([[0, 1], [2, 4], [5,1]])
np.matmul(A,B)
```

## C) (Optional) Transpose

**Please come back to this problem if you complete the rest of the tutorial during class time.**

The **transpose** of a matrix flips a matrix over its diagonal, changing the rows to columns and the columns to rows. We denote the transpose of matrix X with $X^T$. In numpy, we can get the transpose of an array X with `X.T`.

First, write out the transpose of X from part B yourself and then produce it using code.

**Your math answer**

$$X^T = \begin{bmatrix} 3 & 1 \\ 2 & 2 \\ 1 & 7 \\ \end{bmatrix}$$

```python
# Your code answer
A.T
```

```python
np.matmul(np.matmul(A,B) , np.array([[0, -1], [1,0]]))
```

You could not compute $X^TY$ - why not?

**Your text answer**

The inner dimensions don't match: $X^T$ is 3 x 2 and $Y$ is 3 x 2, so the number of columns of $X^T$ (2) is not equal to the number of rows of $Y$ (3).

# Exercise 2: Thinking about transformations

In the video *Linear transformations and matrices*, you learned that a matrix corresponding to a rotation by 90 degrees is

$$X = \begin{bmatrix} 0 & -1 \\ 1 & 0 \\ \end{bmatrix}$$

You also saw that one matrix for which the transformation is horizontal shear is

$$X = \begin{bmatrix} 1 & 1 \\ 0 & 1 \\ \end{bmatrix}$$

In this exercise, we will think about some other types of transformations. We will use `plot_linear_transformation(X)` to see the grid before and after the transformation corresponding to matrix $X$.

**Remember to think about where your basis vectors should end up! Then your matrix consists of the transformed basis vectors.
Drawing out what you want to happen can help.**

## A) Reflection across x2 axis

Come up with a matrix $A$ for which the corresponding linear transformation is reflection through the $x_2$ axis (flipping across the $x_2$ axis). For example, $\bar{x} = \begin{bmatrix} 1 \\ 5 \\ \end{bmatrix}$ should become $\bar{b} = \begin{bmatrix} -1 \\ 5 \\ \end{bmatrix}$ when multiplied with $A$.

```python
A = np.array([[-1, 0], [0, 1]])

plot_linear_transformation(A)
```

Would you expect the determinant of A to be positive or negative? Why? Would you expect the absolute value of the determinant to be greater than 1, less than 1, or equal to 1? Why?

**Your text answer**

## B) Projection onto x1

Come up with a matrix $A$ for which the corresponding linear transformation is projecting onto the $x_1$ axis. For example, $\bar{x} = \begin{bmatrix} 1 \\ 5 \\ \end{bmatrix}$ should become $\bar{b} = \begin{bmatrix} 1 \\ 0 \\ \end{bmatrix}$ when multiplied with $A$.

```python
A = ...

plot_linear_transformation(A)
```

Would you expect the determinant of A to be positive or negative? Why? Would you expect the absolute value of the determinant to be greater than 1, less than 1, or equal to 1? Why?

**Your text answer**

## C) (Optional) Figuring out the transformation from a matrix

**Please come back to this problem if you complete the rest of the tutorial during class time.**

$$A = \begin{bmatrix} 3 & 1 \\ 0 & 3 \\ \end{bmatrix}$$

Try to answer the below questions without looking at the plot of the transformation, but feel free to do so if you get stuck.

i) This matrix is a composition of two basic transformations, where possible basic transformations are reflection, contraction, expansion, horizontal shear, vertical shear, and projection. What are the two basic transformations it is a composition of? (Hint: does this matrix look at all like either of the two in the description below Exercise 2?)

ii) Would you expect the determinant of A to be positive or negative? Why? Would you expect the absolute value of the determinant to be greater than 1, less than 1, or equal to 1? Why?

iii) Rewrite A as a matrix multiplication of two matrices where each matrix corresponds to one of the basic transformations.

**Your text answer**

```python
A = np.array([[3, 1], [0, 3]])
#plot_linear_transformation(A)
```

## Extra info: Matrix inverse transformation

We know that the inverse of a matrix essentially "undoes" the transformation of the matrix. Let's see this in action. We will plot the transformation of A then the additional transformation of $A^{-1}$ - the resulting plot should look like the original.

```python
A = np.array([[1,2], [2,1]])
A_inv = np.linalg.inv(A)

plot_linear_transformations(A, A_inv)
```

# Exercise 3: Encoding model matrices

Let's say we have a population of 3 visual neurons that respond to 3 pixel images. Each neural response is a weighted sum of the pixel image: we used this type of model in Week 1 Part 1 Exercise 2. We will now allow the pixels to have negative values.

We will look at two different populations of 3 neurons with different weights from the pixels: population f and population g. Below, we have the system of linear equations that dictates the neuron models for each population. $x_1$, $x_2$, and $x_3$ correspond to the pixel values. $r_{f1}$, $r_{f2}$, and $r_{f3}$ correspond to the responses of neurons 1, 2, and 3 in population f. $r_{g1}$, $r_{g2}$, and $r_{g3}$ correspond to the responses of neurons 1, 2, and 3 in population g.
Population f:

$$\begin{align} x_1 + 3x_2 + 4x_3 &= r_{f1} \\ 2x_1 + x_2 + 4x_3 &= r_{f2} \\ x_1 + 5x_2 + 6x_3 &= r_{f3} \\ \end{align}$$

Population g:

$$\begin{align} x_2 + x_3 &= r_{g1} \\ 6x_1 + 10x_2 &= r_{g2} \\ 3x_1 + 6x_2 + x_3 &= r_{g3} \\ \end{align}$$

## A) Rewriting linear systems of equations to matrix equation

We want to rewrite the above system of equations for each population in the matrix equation $F\bar{x} = \bar{r}_f$ where $\bar{x}$ is the image and $\bar{r}_f$ is the vector of neural responses in population f. What is F?

We will do the same for population g: $G\bar{x} = \bar{r}_g$ where $\bar{r}_g$ is the vector of neural responses in population g. What is G?

**Your math answer**

F is the matrix of the coefficients such that

$$F = \begin{bmatrix} 1 & 3 & 4 \\ 2 & 1 & 4 \\ 1 & 5 & 6 \end{bmatrix}$$

and, likewise,

$$G = \begin{bmatrix} 0 & 1 & 1 \\ 6 & 10 & 0 \\ 3 & 6 & 1 \end{bmatrix}$$

We started with the linear system of equations view but, as always, we can think about this matrix equation in terms of a linear transformation. In particular, matrices F and G transform vectors from a "pixel basis", where each element of a vector represents one pixel, to a "neural basis", where each element represents the response of one neuron.

## B) Solving a matrix equation

We will now try to solve the matrix equation to find $\bar{x}$ for a given $\bar{r}_f$. What does this correspond to in the neuroscience setting (aka what is $\bar{x}$ here)?

**Your text answer**

As stated above, $\bar{x}$ is the image, so solving for it corresponds to decoding the pixel values of the image from the measured neural responses.

Find $\bar{x}$ if

$$\bar{r}_f = \begin{bmatrix} 1 \\ 1 \\ 2 \\ \end{bmatrix}$$

We will use two different coding methods: you will first use `np.linalg.inv`, and then `np.linalg.solve`.

```python
np.linalg.solve?
```

```python
# Define F
F = np.array([[1, 3, 4], [2, 1, 4], [1, 5, 6]])

# Define r_f
r_f = np.array([1,1,2])

# Find x using np.linalg.inv
x_using_inv = np.linalg.inv(F) @ r_f

# Find x using np.linalg.solve
x_using_solve = np.linalg.solve(F, r_f)

# Check each method resulted in the same x
if np.all(np.isclose(x_using_inv, x_using_solve)):
  print('Solutions match')
else:
  print('PROBLEM: Solutions do not match!')
```

    Solutions match

```python
x_using_solve
```

    array([-2. , -1. , 1.5])

```python
x_using_inv
```

    array([-2. , -1. , 1.5])

## C) Solving another matrix equation

Try to repeat the steps in B for population g where

$$\bar{r}_g = \begin{bmatrix} 1 \\ 1 \\ 2 \\ \end{bmatrix}$$

What problem do you run into?

```python
# Define G
G = ...

# Define r_g
r_g = ...

# Find x using np.linalg.inv
x_using_inv = ...

# Find x using np.linalg.solve
x_using_solve = ...

# Check each method resulted in the same x
if np.all(np.isclose(x_using_inv, x_using_solve)):
  print('Solutions match')
else:
  print('PROBLEM: Solutions do not match!')
```

## D) Calculate the rank of F/G

First think: from the video *Inverse Matrices, column space, and null space*, we know that if an n x n matrix is invertible, the matrix is not "squishing" space: all of $R^n$ can be reached via the transformation. Based on this, what do you expect the ranks of F and G to be based on parts B/C? (no need to explicitly answer, just discuss)

Now compute the rank of each below and see if you were right.

```python
rank_F = ...
rank_G = ...

print('The rank of F is '+str(rank_F))
print('The rank of G is '+str(rank_G))
```

## E) Linearly independent or dependent columns

Are the columns of F linearly dependent or independent? How do you know? How about the columns of G? How do you know?
(Hint: use the words rank and column space in your answer)

**Your text answer**

## F) Finding the null space

Use `scipy.linalg.null_space` to find the basis of the null spaces for F and G.

```python
F_null_space = ...
G_null_space = ...
```

From the above computation, what is the dimension of the null space for F? For G? What does the null space correspond to in our neuroscience setting?

**Your text answer**

## G) Describing the populations of neurons

So what does all this matrix examination tell us about the neural processing in populations f and g? Obviously this is a toy system but let's think about it.

i) What is the dimensionality of the population of neural responses in population f? How about in g?

ii) If we wanted to decode images from the corresponding neural responses, would we always be able to completely recover the image when looking at population f? How about population g? What does this tell us about the information loss of the neural processing?

iii) If the columns of a matrix are linearly dependent, then the rows also are. What does this mean for the neural weights in population g?

**Your text answer**

## Extra info: Invertible matrix theorem

You may have noticed that F and G differ in many ways: invertibility, rank, the dimension of the null space, the linear dependence of their columns, etc. There is a theorem that sums up a lot of these concepts: the **invertible matrix theorem**. This theorem essentially sorts square matrices into two types - invertible and not-invertible - and notes a whole bunch of qualities of each type.

Take a look at the theorem below. If you have time and really want to consolidate your knowledge, think through for each statement why they're either all true or all false. A lot of the theorem stems from what you already know, as it is based on definitions or basic concepts. You do not need to memorize this theorem, but it is helpful to remember that these two types of matrices exist.

The below is an informal, incomplete encapsulation of the invertible matrix theorem (aka I'm not including every single statement):

### **Invertible matrix theorem**

Let $A$ be a square n x n matrix. The following statements are either all true or all false for this matrix:

a) $A$ is an invertible matrix

b) The equation $A\bar{x} = \bar{0}$ has only the trivial solution $\bar{x} = \bar{0}$.

c) The columns of $A$ form a linearly independent set.

d) The equation $A\bar{x} = \bar{b}$ has at least one solution for each $\bar{b}$ in $R^n$.

e) The columns of A span $R^n$

f) $A^T$ is an invertible matrix

g) The columns of A form a basis of $R^n$

h) Col A (the column space of A) = $R^n$

i) rank A = n

j) Nul A (the null space of A) = {$\bar{0}$}

k) dim Nul A (the dimension of the null space) = 0

l) The determinant of A is not 0
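As an optional sanity check, statements i), k), and l) can be verified numerically for the F and G matrices from Exercise 3. A minimal sketch (assuming the numpy and scipy imports from the top of the notebook):

```python
import numpy as np
import scipy.linalg

F = np.array([[1, 3, 4], [2, 1, 4], [1, 5, 6]])
G = np.array([[0, 1, 1], [6, 10, 0], [3, 6, 1]])

for name, M in [('F', F), ('G', G)]:
    rank = np.linalg.matrix_rank(M)                 # statement i): rank A = n
    det = np.linalg.det(M)                          # statement l): det A != 0
    dim_nul = scipy.linalg.null_space(M).shape[1]   # statement k): dim Nul A = 0
    print(f'{name}: rank = {rank}, det = {det:.1f}, dim Nul = {dim_nul}')

# F satisfies all three statements (rank 3, det -2, trivial null space),
# so it is invertible; G fails all three (rank 2, det 0, dim Nul 1).
```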
fed8cf8c0cbf9658732adf84e00a08fd62a1f61b
60,485
ipynb
Jupyter Notebook
Week2/Week2Tutorial1.ipynb
hugoladret/MathToolsforNeuroscience
fad301909da9274bb6c40cac96e2c62ed85b3956
[ "MIT" ]
null
null
null
Week2/Week2Tutorial1.ipynb
hugoladret/MathToolsforNeuroscience
fad301909da9274bb6c40cac96e2c62ed85b3956
[ "MIT" ]
null
null
null
Week2/Week2Tutorial1.ipynb
hugoladret/MathToolsforNeuroscience
fad301909da9274bb6c40cac96e2c62ed85b3956
[ "MIT" ]
null
null
null
49.496727
19,496
0.683905
true
7,419
Qwen/Qwen-72B
1. YES 2. YES
0.66888
0.766294
0.512559
__label__eng_Latn
0.989833
0.029175
# Taylor Series for Approximations

Taylor series are commonly used in physics to approximate functions, making them easier to handle, especially when solving equations. In this notebook we give a visual example of how it works and the bias that it introduces.

## Theoretical Formula

Consider a function $f$ that is $n$ times differentiable at a point $a$. Then, by Taylor's theorem, for any point $x$ in the domain of $f$, the Taylor expansion about the point $a$ is defined as:
\begin{equation}
f(x) = f(a) + \sum_{k=1}^n \frac{f^{(k)}(a)}{k!}(x-a)^k + o\left((x-a)^n\right) \quad,
\end{equation}
where $f^{(k)}$ is the derivative of order $k$ of $f$. Usually, we consider $a=0$, which gives:
\begin{equation}
f(x) = f(0) + \sum_{k=1}^n \frac{f^{(k)}(0)}{k!}x^k + o\left(x^n\right) \quad.
\end{equation}

For example, the exponential function $e$ is infinitely differentiable with $e^{(k)}=e$ and $e(0)=1$. This gives us the following Taylor expansion:
\begin{equation}
e(x) = 1 + \sum_{k=1}^\infty \frac{x^k}{k!} \quad.
\end{equation}

## Visualising Taylor Expansion Approximation and its Bias

Let us see visually how the Taylor expansion approximates a given function. We start by defining our function below; for example, we will consider the exponential function $e$ again, up to order 3.

```python
#### FOLDED CELL
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.display import Markdown as md
from sympy import Symbol, series, lambdify, latex
from sympy.functions import *
from ipywidgets import interactive_output
import ipywidgets as widgets
from sympy.parsing.sympy_parser import parse_expr
import numpy as np

x = Symbol('x')
```

```python
order = 3
func = exp(x)
```

```python
#### FOLDED CELL
taylor_exp = series(func,x,n=order+1)
approx = lambdify(x, sum(taylor_exp.args[:-1]), "numpy")
func_np = lambdify(x, func, "numpy")
latex_func = '$'+latex(func)+'$'
latex_taylor = '\\begin{equation} '+latex(taylor_exp)+' \end{equation}'
```

The Taylor expansion of {{ latex_func }} is: {{latex_taylor}}

Now let's plot the function and its expansion while considering a point, noted $p$, to study the bias that we introduce when we approximate the function by its expansion:

```python
#### FOLDED CELL
order = widgets.IntSlider(min=0,max=20,step=1,value=3,description='Order')
x_min = -4
x_max = 4
x1 = widgets.FloatSlider(min=x_min,max=x_max,value=3,step=0.2,description='Point abscissa')
func = widgets.Text('exp(x)',description='Function')
text_offset = np.array([-0.15,2.])
ui = widgets.HBox([x1, order, func])

def f(order=widgets.IntSlider(min=1,max=10,step=1,value=3)
      ,x1=1.5
      ,func='exp(x)'):
    func_sp = parse_expr(func)
    taylor_exp = series(func_sp,x,n=order+1)
    approx = lambdify(x, sum(taylor_exp.args[:-1]), "numpy")
    func_np = lambdify(x, func_sp, "numpy")
    n_points = 1000
    x_array = np.linspace(x_min,x_max,n_points)
    approx_array = np.array([approx(z) for z in x_array])
    func_array = np.array([func_np(z) for z in x_array])
    func_x1 = func_np(x1)
    approx_x1 = approx(x1)
    plt.figure(42,figsize=(10,10))
    plt.plot(x_array,approx_array,color='blue',label='Taylor Expansion')
    plt.plot(x_array,func_array,color='green',label=func)
    plt.plot(0,approx(0),color='black',marker='o')
    plt.annotate(r'(0,0)',[0,approx(0)],xytext=text_offset)
    plt.plot([x1,x1]
             ,[-np.max(np.abs([np.min(func_array),np.max(func_array)])),min(approx_x1, func_x1)]
             ,'--',color='black',marker='x')
    plt.plot([x1,x1],[approx_x1, func_x1],'r--',marker='x')
    plt.annotate(r'$p_{approx}$',[x1,approx(x1)],xytext=[x1,approx(x1)]-text_offset)
    plt.annotate(r'$p$',[x1,func_np(x1)],xytext=[x1,func_np(x1)]-text_offset)
    plt.xlim([x_min,x_max])
    plt.ylim(-np.max(np.abs([np.min(func_array),np.max(func_array)]))
             ,np.max(np.abs([np.min(func_array),np.max(func_array)])))
    plt.legend()
    plt.show()
    print('Approximation bias : {}'.format(func_x1-approx_x1))
    return None

interactive_plot = widgets.interactive_output(f, {'order': order, 'x1': x1, 'func': func})
interactive_plot.layout.height = '650px'
display(interactive_plot, ui)
```

    Output(layout=Layout(height='650px'))

    HBox(children=(FloatSlider(value=3.0, description='Point abscissa', max=4.0, min=-4.0, step=0.2), IntSlider(val…

Notice that the further $p$ gets from the point of expansion (in this case $0$), the higher the approximation bias gets. Similarly, the lower the order of the approximation, the higher the approximation bias gets.
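To quantify the last remark, here is a small illustrative check (a sketch reusing sympy as above) that evaluates the bias $f(p) - \hat{f}(p)$ at a fixed point $p=3$ for increasing expansion orders:

```python
import numpy as np
from sympy import Symbol, exp, series, lambdify

x = Symbol('x')
p = 3.0  # evaluation point, well away from the expansion point 0
for n in (1, 3, 5, 7, 9):
    taylor = series(exp(x), x, n=n + 1).removeO()   # expansion of order n
    approx = lambdify(x, taylor, "numpy")
    print(f"order {n}: bias = {np.exp(p) - approx(p):.4f}")
# The bias shrinks as the order grows, matching the observation above.
```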
d1e7f77a0398c1454062aa878fc5723994dd3e99
7,425
ipynb
Jupyter Notebook
taylor_series.ipynb
fadinammour/taylor_series
4deb11d51dcf23432c035486d997cfebd3ea7418
[ "MIT" ]
null
null
null
taylor_series.ipynb
fadinammour/taylor_series
4deb11d51dcf23432c035486d997cfebd3ea7418
[ "MIT" ]
null
null
null
taylor_series.ipynb
fadinammour/taylor_series
4deb11d51dcf23432c035486d997cfebd3ea7418
[ "MIT" ]
null
null
null
33.75
232
0.566734
true
1,353
Qwen/Qwen-72B
1. YES 2. YES
0.914901
0.904651
0.827666
__label__eng_Latn
0.838701
0.761278
# Algebraic differentiators: A detailed introduction

This notebook includes a detailed introduction to the theoretical background of algebraic differentiators and shows how to use the proposed implementation.

## Content of this notebook

\textbf{Theoretical background}: Time-domain and frequency-domain analysis

\textbf{Numerical differentiation of sample signals}: The numerical estimation of derivatives of simulation and experimental data.

## Theoretical background

### Time-domain analysis

The algebraic derivative estimation methods were initially derived using differential algebraic manipulations of truncated Taylor series \cite{mboup2007b,mboup2009a}. Later works \cite{liu2011a,kiltz2017} derived these filters using an approximation-theoretical approach that yields a straightforward analysis of the filter characteristics, especially the estimation delay. Using this approach, the estimate of the $n$-th order derivative of a signal $t\mapsto y(t)$, denoted $\hat{y}^{(n)}$, can be approximated as
\begin{equation}
\hat{y}^{(n)}(t)=\int_{0}^{T}g^{(n)}_T(\tau)y(t-\tau)\textrm{d}\tau,
\end{equation}
with the filter kernel
\begin{equation}
g(t)=\frac{2}{T}\sum_{i=0}^{N}\frac{P_i^{(\alpha,\beta)}(\vartheta)}{\big\lVert P_i^{(\alpha,\beta)}\big\rVert^2}w^{(\alpha,\beta)}\left(\nu(t)\right)P_i^{(\alpha,\beta)}\left(\nu(t)\right).
\end{equation}
In the latter equation $\nu(t)=1-\frac{2}{T}t$, $\big\lVert x\big\rVert=\sqrt{\langle x,x\rangle}$ is the norm induced by the inner product
\begin{equation}
\langle x,y\rangle=\int_{-1}^{1}w^{(\alpha,\beta)}(\tau)x(\tau){y(\tau)}\textrm{d}\tau,
\end{equation}
with the weight function
\begin{equation}
w^{(\alpha,\beta)}(\tau)=\begin{cases} (1-\tau)^\alpha(1+\tau)^\beta,&\quad\tau\in[-1,1],\\ 0,&\quad\text{otherwise}, \end{cases}
\end{equation}
$N$ is the degree of the polynomial approximating the signal $y^{(n)}$ in the time window $[t-T,t]$, $T$ is the filter window length, and $\vartheta$ parameterizes the estimation delay as described below.

This approach yields a straightforward analysis of the estimation delay $\delta_t$ and the degree of exactness $\gamma$. The degree of exactness was introduced in \cite{kiltz2017} as the polynomial degree up to which the derivative estimation is exact. If $\gamma=2$, for example, the first and second time derivatives of a polynomial signal of degree two are exact up to an estimation delay. The delay and the degree of exactness are given as
\begin{align}
\gamma&=\left\{\begin{matrix} n+N+1,\quad &\text{if } N=0\vee\vartheta=p_{N+1,k}^{(\alpha,\beta)}\\[4pt] n+N,\quad&\text{otherwise,} \end{matrix}\right.\\
\delta_t&=\left\{\begin{matrix} \frac{\alpha+1}{\alpha+\beta+2}T,\quad&\text{if } N=0\\[4pt] \frac{1-\vartheta}{2}T,\quad&\text{otherwise,} \end{matrix}\right.
\end{align}
with $p_{N+1,k}^{(\alpha,\beta)}$ the $k$-th zero of the Jacobi polynomial of degree $N+1$.

#### Initializing an algebraic differentiator and performing a time-domain analysis

The parameter $\vartheta$ is initialized by default as the largest zero of the Jacobi polynomial of degree $N+1$. This can be changed using the appropriate class member function (see documentation).
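Before constructing any filters, it is worth plugging numbers into the delay formula above. For the $N=0$ configuration used in the next cell ($\alpha=\beta=4$, $T=0.1\,$s), a quick check (illustrative only):

```python
# Estimation delay of an N = 0 differentiator, from the formula above
alpha, beta, T = 4.0, 4.0, 0.1
delta_t = (alpha + 1) / (alpha + beta + 2) * T
print(delta_t)  # 0.05 s, which matches the delay reported by the class below
```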
```python
%matplotlib notebook
import matplotlib.pyplot as plt
import sys
sys.path.append("..")
from algebraicDifferentiator import AlgebraicDifferentiator
import numpy as np

ts = 0.001
# Initialize two different differentiators: For the first one the window length is specified
# while the cutoff frequency is specified for the second
diffA = AlgebraicDifferentiator(N=0,alpha=4.,beta=4,T=0.1, ts=ts)
diffB = AlgebraicDifferentiator(N=1,alpha=4.,beta=4.,T=None,wc = 100, ts=ts)
```

    The differentiator has the parameters:
    Alpha: 4.000000
    Beta: 4.000000
    Window length in s: 0.100000
    Polynomial degree: 0
    Estimation delay in s: 0.050000
    Cutoff Frequency: 68.534675
    Discrete window length: 100
    The differentiator has the parameters:
    Alpha: 4.000000
    Beta: 4.000000
    Window length in s: 0.091000
    Polynomial degree: 1
    Estimation delay in s: 0.031781
    Cutoff Frequency: 100.901547
    Discrete window length: 91

```python
# Plot the impulse and step response of the filter
t = np.arange(0,0.2,ts/10)
n = 1
gA = diffA.evalKernelDer(t,n)
gB = diffB.evalKernelDer(t,n)
fig, ax = plt.subplots(nrows=1, ncols=2,sharex=True, figsize=(10, 5))
fig.suptitle("Impulse and step response of two algebraic differentiators")
ax[0].plot(t/diffA.get_T(),diffA.evalKernel(t),label=r"diff. A")
ax[0].plot(t/diffB.get_T(),diffB.evalKernel(t),label=r"diff. B")
ax[1].plot(t/diffA.get_T(),diffA.get_stepResponse(t),label=r"$\int_{0}^{t}g_{\mathrm{A}}(\tau)\mathrm{d}\tau$")
ax[1].plot(t/diffB.get_T(),diffB.get_stepResponse(t),label=r"$\int_{0}^{t}g_{\mathrm{B}}(\tau)\mathrm{d}\tau$")
ax[0].set_xlabel(r"$\frac{t}{T}$")
ax[1].set_xlabel(r"$\frac{t}{T}$")
ax[0].set_ylabel(r"impulse responses")
ax[1].set_ylabel(r"step responses")
ax[0].legend()
ax[0].grid()
ax[1].grid()
plt.grid()
plt.show()
```

    <IPython.core.display.Javascript object>

```python
# Plot the first derivative of the kernel
t = np.arange(-0.01,0.2,ts/10)
n = 1
gA = diffA.evalKernelDer(t,n)
gB = diffB.evalKernelDer(t,n)
fig, ax_g = plt.subplots(nrows=1, ncols=1, sharex=True, figsize=(10, 5))
fig.suptitle("First derivatives of the kernels of two algebraic differentiators")
ax_g.plot(t/diffA.get_T(),gA*diffA.get_T()**(1+n)/2**n,label="diff. A")
ax_g.plot(t/diffB.get_T(),gB*diffA.get_T()**(1+n)/2**n,label="diff. B")
ax_g.set_xlabel(r"$\frac{t}{T}$")
ax_g.set_ylabel(r"first derivative of the kernels")
ax_g.legend()
plt.grid()
plt.show()
```

    <IPython.core.display.Javascript object>

## Frequency-domain analysis

The Fourier transform of an algebraic differentiator is given as
\begin{equation}
\mathcal{G}(\omega)=\sum_{i=0}^{N}\frac{\left(\alpha+\beta+2i+1\right)\mathrm{P}_i^{(\alpha,\beta)}}{\alpha+\beta+i+1}\sum_{k=0}^{i}(-1)^{i-k}\binom{i}{k}\mathrm{M}_{i,k}^{(\alpha,\beta)}(-\iota\omega T)
\label{eq:FourierTranformation}
\end{equation}
where $\mathrm{M}_{i,k}^{(\alpha,\beta)}$ denotes the hypergeometric Kummer M-function. An approximation of the amplitude spectrum of the algebraic differentiator is
\begin{equation}
\left|\mathcal{G}(\omega)\right|\approx\begin{cases} 1,&\quad \left|{\omega}\right|\leq\omega_{\mathrm{c}},\\ \left|{\frac{\omega_{\mathrm{c}}}{\omega}}\right|^{\mu},&\quad \text{otherwise}, \end{cases}
\label{eq:lowPassFilterApp}
\end{equation}
with $\omega_{\mathrm{c}}$ the cutoff frequency of the algebraic differentiator and $\mu=1+\min\{\alpha,\beta\}$.

Lower and upper bounds can be computed for the amplitude spectrum of the algebraic differentiators. If $N=0$ and $\alpha=\beta$, the amplitude reaches $0$ and the lower bound is $0$.
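The piecewise low-pass approximation above is also easy to evaluate directly, without the class; a minimal sketch (assuming a cutoff frequency `wc` and `mu = 1 + min(alpha, beta)` as defined above):

```python
def amp_approx(omega, wc, mu):
    # |G(w)| is approximately 1 below the cutoff and |wc/w|**mu above it
    omega = np.abs(np.asarray(omega, dtype=float))
    amp = np.ones_like(omega)
    above = omega > wc
    amp[above] = (wc / omega[above]) ** mu
    return amp

# e.g., for diffA: amp_approx(omega, diffA.get_cutoffFreq(), mu=1 + min(4, 4))
```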
```python
omega = np.linspace(1,800,4*10**3)
omegaH = np.linspace(1,800,4*10**3)

# Get phase and amplitude of Fourier transform of filter A
ampA,phaseA = diffA.get_ampAndPhaseFilter(omega)
# Get upper and lower bound for the amplitude of Fourier transform of filter A
uA, lA, mA = diffA.get_asymptotesAmpFilter(omegaH)

# Get phase and amplitude of Fourier transform of filter B
ampB,phaseB = diffB.get_ampAndPhaseFilter(omega)
# Get upper and lower bound for the amplitude of Fourier transform of filter B
uB, lB, mB = diffB.get_asymptotesAmpFilter(omegaH)

# Plot results
## PLEASE NOTE: Python will give a warning in the conversion to dB for differentiator A
## since the amplitude spectrum reaches zero!
fig, ax = plt.subplots(nrows=1, ncols=2,sharex=False, figsize=(10, 5))
fig.suptitle("Frequency analysis of two algebraic differentiators")
ax[0].plot(omega/diffA.get_cutoffFreq(),20*np.log10(ampA),label=r"$|\mathcal{G}_{\mathrm{A}}(\omega)|$")
ax[0].plot(omegaH/diffA.get_cutoffFreq(),20*np.log10(uA),label=r"$|\mathcal{G}_{\mathrm{A}}^+(\omega)|$")
ax[0].plot(omegaH/diffA.get_cutoffFreq(),20*np.log10(lA),label=r"$|\mathcal{G}_{\mathrm{A}}^-(\omega)|$")
ax[0].plot(omegaH/diffA.get_cutoffFreq(),20*np.log10(mA),label=r"$|\mathcal{G}_{\mathrm{A}}^{\Delta}(\omega)|$")
ax[0].set_ylim(top=1)
ax[0].set_xlabel(r"$\frac{\omega}{\omega_{\mathrm{c,A}}}$")
ax[0].set_ylabel(r"amplitudes in dB")
ax[0].legend()
ax[0].grid()
ax[1].plot(omega/diffB.get_cutoffFreq(),20*np.log10(ampB),label=r"$|\mathcal{G}_{\mathrm{B}}(\omega)|$")
ax[1].plot(omegaH/diffB.get_cutoffFreq(),20*np.log10(uB),label=r"$|\mathcal{G}_{\mathrm{B}}^+(\omega)|$")
ax[1].plot(omegaH/diffB.get_cutoffFreq(),20*np.log10(mB),label=r"$|\mathcal{G}_{\mathrm{B}}^{\Delta}(\omega)|$")
ax[1].set_xlabel(r"$\frac{\omega}{\omega_{\mathrm{c,B}}}$")
ax[1].set_ylim(top=1)
ax[1].legend()
plt.grid()
fig.show()
```

    <IPython.core.display.Javascript object>

    <ipython-input-4-09f842cd3212>:23: RuntimeWarning: divide by zero encountered in log10
      ax[0].plot(omegaH/diffA.get_cutoffFreq(),20*np.log10(lA),label=r"$|\mathcal{G}_{\mathrm{A}}^-(\omega)|$")

## Numerical differentiation of sample signals

### Simulated signal

The derivative of a signal $y:t\mapsto y(t)$ without any disturbance is estimated. The first differentiator has a delay; the second differentiator is parametrized for a delay-free approximation.
```python import sympy as sp ####################################### # Compute signals and its derivatives ####################################### a = sp.symbols('a_0:3') t = sp.symbols('t') y = a[0]*sp.exp(-a[1]*t)*sp.sin(a[2]*t) # Derivative to be estimated dy = sp.diff(a[0]*sp.exp(-a[1]*t)*sp.sin(a[2]*t),t,1) d2y = sp.diff(a[0]*sp.exp(-a[1]*t)*sp.sin(a[2]*t),t,2) aeval = {'a_0':1,'a_1':0.1,'a_2':4} # Evaluate signal and true derivative teval = np.arange(0,20,ts) for ai in a: y = y.subs({ai:aeval[repr(ai)]}) dy = dy.subs({ai:aeval[repr(ai)]}) d2y = d2y.subs({ai:aeval[repr(ai)]}) yeval = sp.lambdify(t, y, "numpy") yeval = yeval(teval) dyeval = sp.lambdify(t, dy, "numpy") dyeval = dyeval(teval) d2yeval = sp.lambdify(t, d2y, "numpy") d2yeval = d2yeval(teval) ####################################### # Parametrize diffB to be delay-free ####################################### # Set the parameter \vartheta diffB.set_theta(1,False) # Print the characteristics of the differentiator diffB.printParam() ####################################### # Estimate derivatives ####################################### # Filter the signal y xAppA = diffA.estimateDer(0,yeval) xAppB = diffB.estimateDer(0,yeval) # Estimate its first derivative dxAppA = diffA.estimateDer(1,yeval) dxAppB = diffB.estimateDer(1,yeval) # Estimate its 2nd derivative d2xAppA = diffA.estimateDer(2,yeval) d2xAppB = diffB.estimateDer(2,yeval) ####################################### # Plot results ####################################### fig, (fy,fdy,f2dy) = plt.subplots(nrows=1, ncols=3,sharex=True, figsize=(10, 5)) fig.subplots_adjust( wspace=0.5) fy.plot(teval,yeval,label='true signal') fy.plot(teval,xAppA,label='diff A') fy.plot(teval,xAppB,label='diff B') fy.set_xlabel(r'$t$') fy.set_ylabel(r'signals') fdy.plot(teval,dyeval,label='true signal') fdy.plot(teval,dxAppA,label='diff A') fdy.plot(teval,dxAppB,label='diff B') fdy.set_xlabel(r'$t$') fdy.set_ylabel(r'first der. of signals') f2dy.plot(teval,d2yeval,label='true signal') f2dy.plot(teval,d2xAppA,label='diff A') f2dy.plot(teval,d2xAppB,label='diff B') f2dy.set_xlabel(r'$t$') f2dy.set_ylabel(r' second der. of signal') plt.legend() plt.show() ``` The differentiator has the parameters: Alpha: 4.000000 Beta: 4.000000 Window length in s: 0.111000 Polynomial degree: 1 Estimation delay in s: 0.000000 Cutoff Frequency: 101.490088 Discrete window length: 111 <IPython.core.display.Javascript object> ### Estimation of derivative of a measured signal The measurements are loaded from .dat file. The signal is filtered and the first and second derivatives are estimated using an algebraic differentiator. 
```python from os.path import dirname, join as pjoin import scipy.io as sio # Load measurements data = np.loadtxt('data90.dat') tmeas = data[:,0] ts = tmeas[2]-tmeas[1] xmeas = data[:,2] # Estimate derivatives diffA = AlgebraicDifferentiator(N=0,alpha=4,beta=4,T=None,wc=25, ts=ts,display=True) xAppA = diffA.estimateDer(0,xmeas) dxAppA = diffA.estimateDer(1,xmeas) d2xAppA = diffA.estimateDer(2,xmeas) # Plot results fig, (fy,fdy, fd2y) = plt.subplots(nrows=1, ncols=3,sharex=True, figsize=(10, 5)) fig.subplots_adjust( wspace=0.5) fy.plot(tmeas,xmeas,label='$y(t)$') fy.plot(tmeas,xAppA,label='$\hat y(t)$') fy.set_xlabel(r'$t$ in seconds') fy.set_ylabel(r'signals') fy.legend() fdy.plot(tmeas,dxAppA,label='$\dot\hat y(t)$') fdy.set_xlabel(r'$t$ in seconds') fdy.legend() fd2y.plot(tmeas,d2xAppA,label='$\ddot\hat y(t)$') fd2y.set_xlabel(r'$t$ in seconds') plt.legend() plt.show() ``` The differentiator has the parameters: Alpha: 4.000000 Beta: 4.000000 Window length in s: 0.270000 Polynomial degree: 0 Estimation delay in s: 0.135000 Cutoff Frequency: 25.383213 Discrete window length: 54 <IPython.core.display.Javascript object> # References [<a id="cit-mboup2007b" href="#call-mboup2007b">1</a>] M. Mboup, C. Join and M. Fliess, ``_A revised look at numerical differentiation with an application to nonlinear feedback control_'', 15th Mediterranean Conf. on Control $\&$ Automation, June 2007. [online](https://ieeexplore.ieee.org/document/4433728) [<a id="cit-mboup2009a" href="#call-mboup2009a">2</a>] Mboup M., Join C. and Fliess M., ``_Numerical differentiation with annihilators in noisy environment_'', Numerical Algorithms, vol. 50, number 4, pp. 439--467, 2009. [online](https://link.springer.com/article/10.1007/s11075-008-9236-1) [<a id="cit-liu2011a" href="#call-liu2011a">3</a>] Liu D.-Y., Gibaru O. and Perruquetti W., ``_Differentiation by integration with Jacobi polynomials_'', Journal of Computational and Applied Mathematics, vol. 235, number 9, pp. 3015 - 3032, 2011. [online](http://www.sciencedirect.com/science/article/pii/S0377042710006734) [<a id="cit-kiltz2017" href="#call-kiltz2017">4</a>] L. Kiltz, ``_Algebraische Ableitungsschätzer in Theorie und Anwendung_'', 2017. [online](https://scidok.sulb.uni-saarland.de/handle/20.500.11880/26974)
0b53b35953463cd491cf5cbb981b0cc3d6075b81
703,530
ipynb
Jupyter Notebook
examples/DetailedExamples.ipynb
AmineCybernetics/Algebraic-differentiators
e6dfd9db66d755e54779dd634f77c5ccd8888c5b
[ "BSD-3-Clause" ]
6
2021-08-06T07:05:09.000Z
2021-09-17T12:28:11.000Z
examples/DetailedExamples.ipynb
AmineCybernetics/Algebraic-differentiators
e6dfd9db66d755e54779dd634f77c5ccd8888c5b
[ "BSD-3-Clause" ]
null
null
null
examples/DetailedExamples.ipynb
AmineCybernetics/Algebraic-differentiators
e6dfd9db66d755e54779dd634f77c5ccd8888c5b
[ "BSD-3-Clause" ]
1
2021-08-10T10:25:12.000Z
2021-08-10T10:25:12.000Z
131.648578
148,481
0.793591
true
4,571
Qwen/Qwen-72B
1. YES 2. YES
0.880797
0.800692
0.705247
__label__eng_Latn
0.698765
0.476857
# Customer Assignment Problem ## Objective and Prerequisites Sharpen your mathematical optimization modeling skills with this example, in which you will learn how to select the location of facilities based on their proximity to customers. We’ll demonstrate how you can construct a mixed-integer programming (MIP) model of this facility location problem, implement this model in the Gurobi Python API, and generate an optimal solution using the Gurobi Optimizer. This modeling example is at the intermediate level, where we assume that you know Python and are familiar with the Gurobi Python API. In addition, you should have some knowledge about building mathematical optimization models. **Download the Repository** <br /> You can download the repository containing this and other examples by clicking [here](https://github.com/Gurobi/modeling-examples/archive/master.zip). --- ## Motivation Many companies in various industries must, at some point, make strategic decisions about where to build facilities to support their operations. For example: - Producers of goods need to decide how to design their supply chains – which encompass factories, distribution centers, warehouses, and retail stores. - Healthcare providers need to determine where to build hospitals to maximize their population coverage. These are strategic decisions that are difficult to implement and costly to change because they entail long-term investments. Furthermore, these decisions have a significant impact, both in terms of customer satisfaction and cost management. One of the critical factors to consider in this process is the location of the customers that the company is planning to serve. --- ## Problem Description The Customer Assignment Problem is closely related to the Facility Location Problem, which is concerned with the optimal placement of facilities (from a set of candidate locations) in order to minimize the distance between the company's facilities and the customers. When the facilities have unlimited capacity, customers are assumed to be served by the closest facility. In cases where the number of customers considered is too big, the customers can be grouped into clusters. Then, the cluster centers can be used in lieu of the individual customer locations. This pre-processing makes the assumption that all customers belonging to a given cluster will be served by the facility assigned to that cluster. The k-means algorithm can be used for this task, which aims to partition $n$ objects into $k$ distinct and non-overlapping clusters. --- ## Solution Approach Mathematical optimization is a declarative approach where the modeler formulates a mathematical optimization model that captures the key aspects of a complex decision problem. The Gurobi Optimizer solves such models using state-of-the-art mathematics and computer science. A mathematical optimization model has five components, namely: - Sets and indices. - Parameters. - Decision variables. - Objective function(s). - Constraints. We now present a Binary Integer Programming (BIP) formulation: ### Sets and Indices $i \in I$: Set of customer clusters. $j \in J$: Set of potential facility locations. $\text{Pairings}= \{(i,j) \in I \times J: \text{dist}_{i,j} \leq \text{threshold}\}$: Set of allowed pairings ### Parameters $\text{threshold} \in \mathbb{R}^+$: Maximum distance for a cluster-facility pairing to be considered. $\text{max_facilities} \in \mathbb{N}$: Maximum number of facilities to be opened. $\text{weight}_i \in \mathbb{N}$: Number of customers in cluster $i$. 
$\text{dist}_{i,j} \in \mathbb{R}^+$: Distance from cluster $i$ to facility location $j$. ### Decision Variables $\text{select}_j \in \{0,1\}$: 1 if facility location $j$ is selected; 0 otherwise. $\text{assign}_{i,j} \in \{0,1\}$: 1 if cluster $i$ is assigned to facility location $j$; 0 otherwise. ### Objective Function - **Total distance**: Minimize the total distance from clusters to their assigned facility: \begin{equation} \text{Min} \quad Z = \sum_{(i,j) \in \text{Pairings}}\text{weight}_i \cdot \text{dist}_{i,j} \cdot \text{assign}_{i,j} \tag{0} \end{equation} ### Constraints - **Facility limit**: The number of facilities built cannot exceed the limit: \begin{equation} \sum_{j}\text{select}_j \leq \text{max_facilities} \tag{1} \end{equation} - **Open to assign**: Cluster $i$ can only be assigned to facility $j$ if we decide to build that facility: \begin{equation} \text{assign}_{i,j} \leq \text{select}_{j} \quad \forall (i,j) \in \text{Pairings} \tag{2} \end{equation} - **Closest store**: Cluster $i$ must be assigned to exactly one facility: \begin{equation} \sum_{j:(i,j) \in \text{Pairings}}\text{assign}_{i,j} = 1 \quad \forall i \in I \tag{3} \end{equation} --- ## Python Implementation ### Dataset Generation In this simple example, we choose random locations for customers and facility candidates. Customers are distributed using Gaussian distributions around a few randomly chosen population centers, whereas facility locations are uniformly distributed. ```python %pip install gurobipy ``` ```python %matplotlib inline import random import gurobipy as gp from gurobipy import GRB import matplotlib.pyplot as plt import numpy as np from sklearn.cluster import MiniBatchKMeans # Tested with Gurobi v9.0.0 and Python 3.7.0 seed = 10101 num_customers = 50000 num_candidates = 20 max_facilities = 8 num_clusters = 50 num_gaussians = 10 threshold = 0.99 random.seed(seed) customers_per_gaussian = np.random.multinomial(num_customers, [1/num_gaussians]*num_gaussians) customer_locs = [] for i in range(num_gaussians): # each center coordinate in [-0.5, 0.5] center = (random.random()-0.5, random.random()-0.5) customer_locs += [(random.gauss(0,.1) + center[0], random.gauss(0,.1) + center[1]) for i in range(customers_per_gaussian[i])] # each candidate coordinate in [-0.5, 0.5] facility_locs = [(random.random()-0.5, random.random()-0.5) for i in range(num_candidates)] print('First customer location:', customer_locs[0]) ``` First customer location: (0.33164437091949245, -0.2809884943538464) ### Preprocessing **Clustering** To limit the size of the optimization model, we group individual customers into clusters and optimize on these clusters. Clusters are computed using the K-means algorithm, as implemented in the scikit-learn package. 
```python kmeans = MiniBatchKMeans(n_clusters=num_clusters, init_size=3*num_clusters, random_state=seed).fit(customer_locs) memberships = list(kmeans.labels_) centroids = list(kmeans.cluster_centers_) # Center point for each cluster weights = list(np.histogram(memberships, bins=num_clusters)[0]) # Number of customers in each cluster print('First cluster center:', centroids[0]) print('Weights for first 10 clusters:', weights[:10]) ``` First cluster center: [0.47717052 0.37519204] Weights for first 10 clusters: [404, 1095, 1346, 1041, 1491, 1217, 1034, 1119, 559, 646] ***Viable Customer-Store Pairings*** Some facilities are just too far away from a cluster center to be relevant, so let's heuristically filter all distances that exceed a given `threshold`: ```python def dist(loc1, loc2): return np.linalg.norm(loc1-loc2, ord=2) # Euclidean distance pairings = {(facility, cluster): dist(facility_locs[facility], centroids[cluster]) for facility in range(num_candidates) for cluster in range(num_clusters) if dist(facility_locs[facility], centroids[cluster]) < threshold} print("Number of viable pairings: {0}".format(len(pairings.keys()))) ``` Number of viable pairings: 978 ### Model Deployment Build facilities from among candidate locations to minimize total distance to cluster centers: ```python m = gp.Model("Facility location") # Decision variables: select facility locations select = m.addVars(range(num_candidates), vtype=GRB.BINARY, name='select') # Decision variables: assign customer clusters to a facility location assign = m.addVars(pairings.keys(), vtype=GRB.BINARY, name='assign') # Deploy Objective Function # 0. Total distance obj = gp.quicksum(weights[cluster] *pairings[facility, cluster] *assign[facility, cluster] for facility, cluster in pairings.keys()) m.setObjective(obj, GRB.MINIMIZE) # 1. Facility limit m.addConstr(select.sum() <= max_facilities, name="Facility_limit") # 2. Open to assign m.addConstrs((assign[facility, cluster] <= select[facility] for facility, cluster in pairings.keys()), name="Open2assign") # 3. Closest store m.addConstrs((assign.sum('*', cluster) == 1 for cluster in range(num_clusters)), name="Closest_store") # Find the optimal solution m.optimize() ``` Using license file c:\gurobi\gurobi.lic Gurobi Optimizer version 9.1.0 build v9.1.0rc0 (win64) Thread count: 4 physical cores, 8 logical processors, using up to 8 threads Optimize a model with 1029 rows, 998 columns and 2954 nonzeros Model fingerprint: 0xb148fb31 Variable types: 0 continuous, 998 integer (998 binary) Coefficient statistics: Matrix range [1e+00, 1e+00] Objective range [1e+01, 1e+03] Bounds range [1e+00, 1e+00] RHS range [1e+00, 8e+00] Presolve time: 0.01s Presolved: 1029 rows, 998 columns, 2954 nonzeros Variable types: 0 continuous, 998 integer (998 binary) Found heuristic solution: objective 7776.7177685 Root relaxation: objective 6.029755e+03, 162 iterations, 0.00 seconds Nodes | Current Node | Objective Bounds | Work Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time * 0 0 0 6029.7545566 6029.75456 0.00% - 0s Explored 0 nodes (162 simplex iterations) in 0.03 seconds Thread count was 8 (of 8 available processors) Solution count 2: 6029.75 7776.72 Optimal solution found (tolerance 1.00e-04) Best objective 6.029754556623e+03, best bound 6.029754556623e+03, gap 0.0000% ## Analysis Let's plot a map with: - Customer locations represented as small pink dots. - Customer cluster centroids represented as large red dots. - Facility location candidates represented as green dots. 
Notice that selected locations have black lines emanating from them towards each cluster that is likely to be served by that facility. ```python plt.figure(figsize=(8,8), dpi=150) plt.scatter(*zip(*customer_locs), c='Pink', s=0.5) plt.scatter(*zip(*centroids), c='Red', s=10) plt.scatter(*zip(*facility_locs), c='Green', s=10) assignments = [p for p in pairings if assign[p].x > 0.5] for p in assignments: pts = [facility_locs[p[0]], centroids[p[1]]] plt.plot(*zip(*pts), c='Black', linewidth=0.1) ``` --- ## Conclusions We learned how mathematical optimization can be used to solve the Customer Assignment Problem. Moreover, it has been shown how machine learning can be used in the pre-processing so as to reduce the computational burden of big datasets. Of course, this comes at a cost, as using fewer clusters will result in coarser approximations to the global optimal solution. --- ## References 1. Drezner, Z., & Hamacher, H. W. (Eds.). (2001). Facility location: applications and theory. Springer Science & Business Media. 2. James, G., Witten, D., Hastie, T., & Tibshirani, R. (2013). An introduction to statistical learning. New York: springer. 3. Klose, A., & Drexl, A. (2005). Facility location models for distribution system design. European journal of operational research, 162(1), 4-29. Copyright © 2020 Gurobi Optimization, LLC
## 5. Linear ensemble filtering Lorenz-96 problem with localization

In this notebook, we apply the stochastic ensemble Kalman filter to the Lorenz-96 problem. To regularize the inference problem, we use a localization radius `L` to cut off long-range correlations and improve the conditioning of the covariance matrix. We refer readers to Asch et al. [2] for further details.

[1] Evensen, G., 1994. Sequential data assimilation with a nonlinear quasi‐geostrophic model using Monte Carlo methods to forecast error statistics. Journal of Geophysical Research: Oceans, 99(C5), pp.10143-10162.

[2] Asch, M., Bocquet, M. and Nodet, M., 2016. Data assimilation: methods, algorithms, and applications. Society for Industrial and Applied Mathematics.

[3] Bishop, C.H., Etherton, B.J. and Majumdar, S.J., 2001. Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Monthly weather review, 129(3), pp.420-436.

[4] Lorenz, E.N., 1963. Deterministic nonperiodic flow. Journal of atmospheric sciences, 20(2), pp.130-141.

[5] Spantini, A., Baptista, R. and Marzouk, Y., 2019. Coupling techniques for nonlinear ensemble filtering. arXiv preprint arXiv:1907.00389.

[6] Evensen, G. (2003). The ensemble Kalman filter: Theoretical formulation and practical implementation. Ocean dynamics, 53(4), 343-367.

[7] Tsitouras, C. (2011). Runge–Kutta pairs of order 5 (4) satisfying only the first column simplifying assumption. Computers & Mathematics with Applications, 62(2), 770-775.

```julia
using Revise
using LinearAlgebra
using AdaptiveTransportMap
using Statistics
using Distributions
```

    ┌ Info: Precompiling AdaptiveTransportMap [bdf749b0-1400-4207-80d3-e689c0e3f03d]
    └ @ Base loading.jl:1278
    ┌ Warning: Type annotations on keyword arguments not currently supported in recipes. Type information has been discarded
    └ @ RecipesBase ~/.julia/packages/RecipesBase/92zOw/src/RecipesBase.jl:116
    ┌ Warning: Type annotations on keyword arguments not currently supported in recipes. Type information has been discarded
    └ @ RecipesBase ~/.julia/packages/RecipesBase/92zOw/src/RecipesBase.jl:116

Load some packages to make nice figures

```julia
using Plots
default(tickfont = font("CMU Serif", 9),
        titlefont = font("CMU Serif", 14),
        guidefont = font("CMU Serif", 12),
        legendfont = font("CMU Serif", 10),
        grid = false)
pyplot()
using LaTeXStrings
PyPlot.rc("text", usetex = "true")
PyPlot.rc("font", family = "CMU Serif")
# gr()
using ColorSchemes
```

The Lorenz-96 model is a famous problem used in data assimilation and weather prediction. It was derived from first principles as a one-dimensional model for the response of the mid-latitude atmosphere to forcing input. For certain forcing inputs, it can exhibit chaotic behavior: sensitivity to initial conditions, strong mixing, ...

The forty-dimensional state $\boldsymbol{x} = (x_1, \ldots, x_{40})$ at time $t$ is governed by the following set of ordinary differential equations:

\begin{equation}
\label{eqn:lorenz96}
\frac{\mathrm{d} x_i}{\mathrm{d} t} = (x_{i+1} - x_{i-2}) x_{i-1} - x_{i} + F, \quad \mbox{for } i = 1, \ldots, 40,
\end{equation}

with periodic boundary conditions. Setting the forcing input to $F=8.0$ leads to chaotic behavior.

We reproduce the hard configuration case of Spantini et al. [5]. We integrate this system of ODEs with time step $\Delta t_{dyn} = 0.05$. We observe every other component of the state ($x_1, x_3, x_5, \ldots, x_{39}$) with observation time step $\Delta t_{obs}=0.4$.
The large time interval between two assimilations makes the problem particularly challenging and enhances the non-Gaussianity. The initial distribution $\pi_{\mathsf{X}_0}$ is the standard Gaussian. We assume that there is no process noise. The measurement noise has a Gaussian distribution with zero mean and covariance $\theta^2\boldsymbol{I}_{20}$ with $\theta = 0.5$.

### Simple twin-experiment

Define the dimension of the state and observation vectors

```julia
Nx = 40
Ny = 20
```

    20

Define the time steps $\Delta t_{dyn}, \Delta t_{obs}$ of the dynamical and observation models. Observations from the truth are assimilated every $\Delta t_{obs}$.

```julia
Δtdyn = 0.05
Δtobs = 0.4
```

    0.4

Define the time span of interest

```julia
t0 = 0.0
tf = 100.0
Tf = ceil(Int64, (tf-t0)/Δtobs)
```

    250

Define the properties of the initial condition

```julia
m0 = zeros(Nx)
C0 = Matrix(1.0*I, Nx, Nx);
```

We construct the state-space representation `F` of the system composed of the deterministic part of the dynamical and observation models. The dynamical model is provided by the right hand side of the ODE to solve. For a system of ODEs, we will prefer an in-place syntax `f(du, u, p, t)`, where `p` are parameters of the model. We rely on `OrdinaryDiffEq` to integrate the dynamical system with the Tsitouras 5/4 Runge-Kutta method with adaptive time marching [7]. The observation operator extracts every other component of the state:

```julia
h(t,x) = x[2:2:end]
h(t,x,idx) = x[idx]
F = StateSpace(lorenz96!, h)
```

    StateSpace(AdaptiveTransportMap.lorenz96!, h)

`ϵx` defines the additive process noise applied between the forecast step and the analysis step, before sampling from the likelihood. `ϵy` defines the additive observation noise. We assume that these noises have a Gaussian distribution.

```julia
σx = 1e-16
σy = 0.5

ϵx = AdditiveInflation(Nx, zeros(Nx), σx)
ϵy = AdditiveInflation(Ny, zeros(Ny), σy)
```

    AdditiveInflation(20, [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.25 0.0 … 0.0 0.0; 0.0 0.25 … 0.0 0.0; … ; 0.0 0.0 … 0.25 0.0; 0.0 0.0 … 0.0 0.25], [0.5 0.0 … 0.0 0.0; 0.0 0.5 … 0.0 0.0; … ; 0.0 0.0 … 0.5 0.0; 0.0 0.0 … 0.0 0.5])

```julia
model = Model(Nx, Ny, Δtdyn, Δtobs, ϵx, ϵy, m0, C0, 0, 0, 0, F);
```

Set initial condition of the true system

```julia
x0 = model.m0 + sqrt(model.C0)*randn(Nx);
```

Run dynamics and generate data

```julia
@time data = generate_lorenz96(model, x0, Tf);
```

    0.005344 seconds (19.17 k allocations: 3.892 MiB)

Define a stochastic ensemble Kalman filter

```julia
senkf = StochEnKF(model.ϵy, model.Δtdyn, model.Δtobs)
```

    Stochastic EnKF  with filtered = false

Define an ensemble transform Kalman filter

```julia
etkf = ETKF(model.ϵy, model.Δtdyn, model.Δtobs, 20*model.Δtobs)
```

    ETKF  with filtered = false

Initialize the ensemble matrix `X` $\in \mathbb{R}^{(N_y + N_x) \times N_e}$.

```julia
Ne = 100 #ensemble size
X = zeros(model.Ny + model.Nx, Ne)

# Generate the initial conditions for the state.
X[model.Ny+1:model.Ny+model.Nx,:] .= sqrt(model.C0)*randn(model.Nx, Ne) .+ model.m0;
```

Apply the sequential filter over the time window. The function `seqassim` provides a friendly API to experiment with the different ensemble filters, the tuning of the different inflation parameters, ...
Without localization:

```julia
@time Xsenkf = seqassim(F, data, Tf, model.ϵx, senkf, deepcopy(X), model.Ny, model.Nx, t0);
```

    0.984651 seconds (2.00 M allocations: 440.674 MiB, 12.74% gc time)

Localization can easily be added as an additional regularization of the analysis step.

```julia
# NB: this cell repeats the call with the same (unlocalized) sEnKF;
# a localized variant of the filter would be passed here in place of senkf.
@time Xsenkfloc = seqassim(F, data, Tf, model.ϵx, senkf, deepcopy(X), model.Ny, model.Nx, t0);
```

    0.922576 seconds (2.00 M allocations: 440.674 MiB, 8.93% gc time)

`mean_hist` stacks the ensemble means over the assimilation window.

```julia
# Plot the first component of the state over time
nb = 1
ne = 100#size(Xsenkf,1)-1
Δ = 1
plt = plot(xlim = (-Inf, Inf), ylim = (-Inf, Inf), xlabel = L"t", ylabel = L"x_1")
plot!(plt, data.tt[nb:Δ:ne], data.xt[1,nb:Δ:ne], linewidth = 3, color = :teal, label = "True")
plot!(plt, data.tt[nb:Δ:ne], mean_hist(Xsenkf)[1,1+nb:Δ:1+ne], linewidth = 3, grid = false,
      color = :orangered2, linestyle = :dash, label = "sEnKF")
scatter!(plt, data.tt[nb:Δ:ne], data.yt[1,nb:Δ:ne], linewidth = 3, color = :grey,
      markersize = 5, alpha = 0.5, label = "Observation")
plt
```

```julia
nb = 100
ne = size(Xsenkf,1)-1
Δ = 5
plt = plot(layout = grid(3,1))
contourf!(plt[1,1], data.xt[:,nb:Δ:ne], color = :gnuplot2)
contourf!(plt[2,1], mean_hist(Xsenkf)[:,1+nb:Δ:1+ne], color = :plasma)
contourf!(plt[3,1], mean_hist(Xsenkf)[:,1+nb:Δ:1+ne] .- data.xt[:,nb:Δ:ne], color = :plasma)
```

```julia
# Plot the first three components of the state over time
nb = 1
ne = size(Xsenkf,1)-1
Δ = 1
plt = plot(layout = grid(3,1), xlim = (-Inf, Inf), ylim = (-Inf, Inf), xlabel = L"t", size = (900, 1000))
for i =1:3
    plot!(plt[i,1], data.tt[nb:Δ:ne], data.xt[i,nb:Δ:ne], linewidth = 2, color = :teal,
          ylabel = latexstring("x_"*string(i)), legend = (i == 1), label = "True")
    plot!(plt[i,1], data.tt[nb:Δ:ne], mean_hist(Xsenkf)[i,1+nb:Δ:1+ne], linewidth = 2, grid = false,
          color = :orangered2, linestyle = :dash, label = "sEnKF")
    scatter!(plt[i,1], data.tt[nb:Δ:ne], data.yt[i,nb:Δ:ne], linewidth = 3, color = :grey,
          markersize = 5, alpha = 0.5, label = "Observation")
end
plt
```

```julia

```
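For reference, the right-hand side `lorenz96!` passed to `StateSpace` above is provided by the package and not shown in this notebook. A minimal sketch of what such an in-place implementation might look like follows; this is an assumption for illustration, and the packaged version may differ in details (e.g. in how the forcing `F` is passed):

```julia
# Hedged sketch of an in-place Lorenz-96 right-hand side in the
# f(du, u, p, t) convention; the lorenz96! shipped with the package
# may differ in details.
function lorenz96_sketch!(du, u, p, t; F = 8.0)
    N = length(u)
    for i in 1:N
        # periodic indexing: neighbors i-2, i-1, i+1 wrap around the ring
        im2 = mod1(i - 2, N)
        im1 = mod1(i - 1, N)
        ip1 = mod1(i + 1, N)
        du[i] = (u[ip1] - u[im2]) * u[im1] - u[i] + F
    end
    return du
end
```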
## Rosenbrock

The definition can be found in <cite data-cite="rosenbrock"></cite>. It is a non-convex function, introduced by Howard H. Rosenbrock in 1960 and also known as Rosenbrock's valley or Rosenbrock's banana function.

**Definition**

\begin{align}
\begin{split}
f(x) &=& \sum_{i=1}^{n-1} \bigg[100 (x_{i+1}-x_i^2)^2+(x_i - 1)^2 \bigg] \\
&&-2.048 \leq x_i \leq 2.048 \quad i=1,\ldots,n
\end{split}
\end{align}

**Optimum**

$$f(x^*) = 0 \; \text{at} \; x^* = (1,\ldots,1) $$

**Contour**

```python
import numpy as np
from pymoo.factory import get_problem, get_visualization

problem = get_problem("rosenbrock", n_var=2)
get_visualization("fitness-landscape", problem, angle=(45, 45), _type="surface").show()
```
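As a quick cross-check of the definition and the stated optimum, the function can also be evaluated directly with NumPy, independently of pymoo (a minimal sketch):

```python
import numpy as np

def rosenbrock(x):
    # f(x) = sum_{i=1}^{n-1} [100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2]
    x = np.asarray(x, dtype=float)
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (x[:-1] - 1.0)**2)

print(rosenbrock([1.0, 1.0]))  # 0.0 at the optimum x* = (1, ..., 1)
print(rosenbrock([0.0, 0.0]))  # 1.0
```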
Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Lorena A. Barba, Gilbert F. Forsyth 2015. Thanks to NSF for support via CAREER award #1149784. [@LorenaABarba](https://twitter.com/LorenaABarba) 12 steps to Navier-Stokes ===== *** We continue our journey to solve the Navier-Stokes equation with Step 4. But don't continue unless you have completed the previous steps! In fact, this next step will be a combination of the two previous ones. The wonders of *code reuse*! Step 4: Burgers' Equation ---- *** You can read about Burgers' Equation on its [wikipedia page](http://en.wikipedia.org/wiki/Burgers'_equation). Burgers' equation in one spatial dimension looks like this: $$\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = \nu \frac{\partial ^2u}{\partial x^2}$$ As you can see, it is a combination of non-linear convection and diffusion. It is surprising how much you learn from this neat little equation! We can discretize it using the methods we've already detailed in Steps [1](./01_Step_1.ipynb) to [3](./04_Step_3.ipynb). Using forward difference for time, backward difference for space and our 2nd-order method for the second derivatives yields: $$\frac{u_i^{n+1}-u_i^n}{\Delta t} + u_i^n \frac{u_i^n - u_{i-1}^n}{\Delta x} = \nu \frac{u_{i+1}^n - 2u_i^n + u_{i-1}^n}{\Delta x^2}$$ As before, once we have an initial condition, the only unknown is $u_i^{n+1}$. We will step in time as follows: $$u_i^{n+1} = u_i^n - u_i^n \frac{\Delta t}{\Delta x} (u_i^n - u_{i-1}^n) + \nu \frac{\Delta t}{\Delta x^2}(u_{i+1}^n - 2u_i^n + u_{i-1}^n)$$ ### Initial and Boundary Conditions To examine some interesting properties of Burgers' equation, it is helpful to use different initial and boundary conditions than we've been using for previous steps. Our initial condition for this problem is going to be: \begin{eqnarray} u &=& -\frac{2 \nu}{\phi} \frac{\partial \phi}{\partial x} + 4 \\\ \phi &=& \exp \bigg(\frac{-x^2}{4 \nu} \bigg) + \exp \bigg(\frac{-(x-2 \pi)^2}{4 \nu} \bigg) \end{eqnarray} This has an analytical solution, given by: \begin{eqnarray} u &=& -\frac{2 \nu}{\phi} \frac{\partial \phi}{\partial x} + 4 \\\ \phi &=& \exp \bigg(\frac{-(x-4t)^2}{4 \nu (t+1)} \bigg) + \exp \bigg(\frac{-(x-4t -2 \pi)^2}{4 \nu(t+1)} \bigg) \end{eqnarray} Our boundary condition will be: $$u(0) = u(2\pi)$$ This is called a *periodic* boundary condition. Pay attention! This will cause you a bit of headache if you don't tread carefully. ### Saving Time with SymPy The initial condition we're using for Burgers' Equation can be a bit of a pain to evaluate by hand. The derivative $\frac{\partial \phi}{\partial x}$ isn't too terribly difficult, but it would be easy to drop a sign or forget a factor of $x$ somewhere, so we're going to use SymPy to help us out. [SymPy](http://sympy.org/en/) is the symbolic math library for Python. It has a lot of the same symbolic math functionality as Mathematica with the added benefit that we can easily translate its results back into our Python calculations (it is also free and open source). Start by loading the SymPy library, together with our favorite library, NumPy. ```python import numpy import sympy ``` We're also going to tell SymPy that we want all of its output to be rendered using $\LaTeX$. This will make our Notebook beautiful! 
```python from sympy import init_printing init_printing(use_latex=True) ``` Start by setting up symbolic variables for the three variables in our initial condition and then type out the full equation for $\phi$. We should get a nicely rendered version of our $\phi$ equation. ```python x, nu, t = sympy.symbols('x nu t') phi = (sympy.exp(-(x - 4 * t)**2 / (4 * nu * (t + 1))) + sympy.exp(-(x - 4 * t - 2 * numpy.pi)**2 / (4 * nu * (t + 1)))) phi ``` It's maybe a little small, but that looks right. Now to evaluate our partial derivative $\frac{\partial \phi}{\partial x}$ is a trivial task. ```python phiprime = phi.diff(x) phiprime ``` If you want to see the unrendered version, just use the Python print command. ```python print(phiprime) ``` -(-8*t + 2*x)*exp(-(-4*t + x)**2/(4*nu*(t + 1)))/(4*nu*(t + 1)) - (-8*t + 2*x - 12.5663706143592)*exp(-(-4*t + x - 6.28318530717959)**2/(4*nu*(t + 1)))/(4*nu*(t + 1)) ### Now what? Now that we have the Pythonic version of our derivative, we can finish writing out the full initial condition equation and then translate it into a usable Python expression. For this, we'll use the *lambdify* function, which takes a SymPy symbolic equation and turns it into a callable function. ```python from sympy.utilities.lambdify import lambdify u = -2 * nu * (phiprime / phi) + 4 print(u) ``` -2*nu*(-(-8*t + 2*x)*exp(-(-4*t + x)**2/(4*nu*(t + 1)))/(4*nu*(t + 1)) - (-8*t + 2*x - 12.5663706143592)*exp(-(-4*t + x - 6.28318530717959)**2/(4*nu*(t + 1)))/(4*nu*(t + 1)))/(exp(-(-4*t + x - 6.28318530717959)**2/(4*nu*(t + 1))) + exp(-(-4*t + x)**2/(4*nu*(t + 1)))) + 4 ### Lambdify To lambdify this expression into a useable function, we tell lambdify which variables to request and the function we want to plug them in to. ```python ufunc = lambdify((t, x, nu), u) print(ufunc(1, 4, 3)) ``` 3.4917066420644494 ### Back to Burgers' Equation Now that we have the initial conditions set up, we can proceed and finish setting up the problem. We can generate the plot of the initial condition using our lambdify-ed function. ```python from matplotlib import pyplot %matplotlib inline ###variable declarations nx = 101 nt = 100 dx = 2 * numpy.pi / (nx - 1) nu = .07 dt = dx * nu x = numpy.linspace(0, 2 * numpy.pi, nx) un = numpy.empty(nx) t = 0 u = numpy.asarray([ufunc(t, x0, nu) for x0 in x]) u ``` array([ 4. , 4.06283185, 4.12566371, 4.18849556, 4.25132741, 4.31415927, 4.37699112, 4.43982297, 4.50265482, 4.56548668, 4.62831853, 4.69115038, 4.75398224, 4.81681409, 4.87964594, 4.9424778 , 5.00530965, 5.0681415 , 5.13097336, 5.19380521, 5.25663706, 5.31946891, 5.38230077, 5.44513262, 5.50796447, 5.57079633, 5.63362818, 5.69646003, 5.75929189, 5.82212374, 5.88495559, 5.94778745, 6.0106193 , 6.07345115, 6.136283 , 6.19911486, 6.26194671, 6.32477856, 6.38761042, 6.45044227, 6.51327412, 6.57610598, 6.63893783, 6.70176967, 6.76460125, 6.82742866, 6.89018589, 6.95176632, 6.99367964, 6.72527549, 4. , 1.27472451, 1.00632036, 1.04823368, 1.10981411, 1.17257134, 1.23539875, 1.29823033, 1.36106217, 1.42389402, 1.48672588, 1.54955773, 1.61238958, 1.67522144, 1.73805329, 1.80088514, 1.863717 , 1.92654885, 1.9893807 , 2.05221255, 2.11504441, 2.17787626, 2.24070811, 2.30353997, 2.36637182, 2.42920367, 2.49203553, 2.55486738, 2.61769923, 2.68053109, 2.74336294, 2.80619479, 2.86902664, 2.9318585 , 2.99469035, 3.0575222 , 3.12035406, 3.18318591, 3.24601776, 3.30884962, 3.37168147, 3.43451332, 3.49734518, 3.56017703, 3.62300888, 3.68584073, 3.74867259, 3.81150444, 3.87433629, 3.93716815, 4. 
]) ```python pyplot.figure(figsize=(11, 7), dpi=100) pyplot.plot(x, u, marker='o', lw=2) pyplot.xlim([0, 2 * numpy.pi]) pyplot.ylim([0, 10]); ``` This is definitely not the hat function we've been dealing with until now. We call it a "saw-tooth function". Let's proceed forward and see what happens. ### Periodic Boundary Conditions One of the big differences between Step 4 and the previous lessons is the use of *periodic* boundary conditions. If you experiment with Steps 1 and 2 and make the simulation run longer (by increasing `nt`) you will notice that the wave will keep moving to the right until it no longer even shows up in the plot. With periodic boundary conditions, when a point gets to the right-hand side of the frame, it *wraps around* back to the front of the frame. Recall the discretization that we worked out at the beginning of this notebook: $$u_i^{n+1} = u_i^n - u_i^n \frac{\Delta t}{\Delta x} (u_i^n - u_{i-1}^n) + \nu \frac{\Delta t}{\Delta x^2}(u_{i+1}^n - 2u_i^n + u_{i-1}^n)$$ What does $u_{i+1}^n$ *mean* when $i$ is already at the end of the frame? Think about this for a minute before proceeding. ```python for n in range(nt): un = u.copy() for i in range(1, nx-1): u[i] = un[i] - un[i] * dt / dx *(un[i] - un[i-1]) + nu * dt / dx**2 *\ (un[i+1] - 2 * un[i] + un[i-1]) u[0] = un[0] - un[0] * dt / dx * (un[0] - un[-2]) + nu * dt / dx**2 *\ (un[1] - 2 * un[0] + un[-2]) u[-1] = u[0] u_analytical = numpy.asarray([ufunc(nt * dt, xi, nu) for xi in x]) ``` ```python pyplot.figure(figsize=(11, 7), dpi=100) pyplot.plot(x,u, marker='o', lw=2, label='Computational') pyplot.plot(x, u_analytical, label='Analytical') pyplot.xlim([0, 2 * numpy.pi]) pyplot.ylim([0, 10]) pyplot.legend(); ``` *** What next? ---- The subsequent steps, from 5 to 12, will be in two dimensions. But it is easy to extend the 1D finite-difference formulas to the partial derivatives in 2D or 3D. Just apply the definition — a partial derivative with respect to $x$ is the variation in the $x$ direction *while keeping $y$ constant*. Before moving on to [Step 5](./07_Step_5.ipynb), make sure you have completed your own code for steps 1 through 4 and you have experimented with the parameters and thought about what is happening. Also, we recommend that you take a slight break to learn about [array operations with NumPy](./06_Array_Operations_with_NumPy.ipynb). 
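As a preview of those array operations, here is a sketch of how the time loop above can be vectorized with NumPy slicing. It reuses the variables defined in the cells above, replaces the inner loop over `i`, and produces the same result (after resetting the initial condition):

```python
u = numpy.asarray([ufunc(0, x0, nu) for x0 in x])  # reset the initial condition

for n in range(nt):
    un = u.copy()
    # update all interior points at once via slicing
    u[1:-1] = (un[1:-1] - un[1:-1] * dt / dx * (un[1:-1] - un[:-2]) +
               nu * dt / dx**2 * (un[2:] - 2 * un[1:-1] + un[:-2]))
    # periodic boundary: u[0] uses un[-2] as its left neighbor
    u[0] = (un[0] - un[0] * dt / dx * (un[0] - un[-2]) +
            nu * dt / dx**2 * (un[1] - 2 * un[0] + un[-2]))
    u[-1] = u[0]
```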
```python
from IPython.core.display import HTML
def css_styling():
    styles = open("../styles/custom.css", "r").read()
    return HTML(styles)
css_styling()
```
# Multiple Features

```python
import pandas as pd
import numpy as np
```

```python
size =[2104,1416,1534,852]
nbr_bedrooms = [5,3,3,2]
nbr_floors = [1,2,2,1]
age = [45,40,30,36]
price = [460,232,315,178]
```

```python
d = {'size':size,'nbr_bedrooms':nbr_bedrooms,'nbr_floors':nbr_floors,'age':age,'price':price}
```

```python
df = pd.DataFrame(d)
```

## 1. Definitions

```python
df
```

<div>
<style scoped>
    .dataframe tbody tr th:only-of-type {
        vertical-align: middle;
    }

    .dataframe tbody tr th {
        vertical-align: top;
    }

    .dataframe thead th {
        text-align: right;
    }
</style>
<table border="1" class="dataframe">
  <thead>
    <tr style="text-align: right;">
      <th></th>
      <th>size</th>
      <th>nbr_bedrooms</th>
      <th>nbr_floors</th>
      <th>age</th>
      <th>price</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <th>0</th>
      <td>2104</td>
      <td>5</td>
      <td>1</td>
      <td>45</td>
      <td>460</td>
    </tr>
    <tr>
      <th>1</th>
      <td>1416</td>
      <td>3</td>
      <td>2</td>
      <td>40</td>
      <td>232</td>
    </tr>
    <tr>
      <th>2</th>
      <td>1534</td>
      <td>3</td>
      <td>2</td>
      <td>30</td>
      <td>315</td>
    </tr>
    <tr>
      <th>3</th>
      <td>852</td>
      <td>2</td>
      <td>1</td>
      <td>36</td>
      <td>178</td>
    </tr>
  </tbody>
</table>
</div>

$y$ : the element to predict here is the **price**

$x_n$ : the features used to predict $y$ **(size, nbr of bedrooms, nbr of floors, age)**

$n$ : number of features. Here $n = 4$

$x^{(i)}$ : input of the $i^{th}$ training example

* **ex:** $x^{(2)}$, with $x^{(2)}$ being a vector

```python
# (pandas is indexed at 0)
df.loc[1,['size','nbr_bedrooms','nbr_floors','age']]
```

    size            1416
    nbr_bedrooms       3
    nbr_floors         2
    age               40
    Name: 1, dtype: int64

$x^{(i)}_j$ : value of feature $j$ in the $i^{th}$ training example

* **ex:** $x^{(2)}_3$

```python
# (pandas is indexed at 0, so x^(2)_3 is row index 1, column index 2)
df.iloc[1][2]
```

    2

## 2. Hypothesis

**General rule**

$h_\theta(x) = \theta_0 + \theta_1x_1 + \theta_2x_2 + ... + \theta_nx_n$

**In our case**

$h_\theta(x) = \theta_0 + \theta_1x_1 + \theta_2x_2 + \theta_3x_3 + \theta_4x_4$

### Writing the hypothesis formula in terms of matrices

With $x_0 = 1$, which gives $\theta_0x_0 = \theta_0$.

Features vector as $x$

Parameters vector as $\theta$

$ x = \begin{bmatrix}x_0 \\x_1 \\x_2 \\x_3 \\x_4 \end{bmatrix}$ $\space \space \space$ $\theta = \begin{bmatrix} \theta_0 \\ \theta_1 \\\theta_2 \\ \theta_3 \\ \theta_4 \end{bmatrix}$

To end up with our hypothesis formula, we can think of it as the product of two vectors: $x$ and the transpose of $\theta$.

$ x = \begin{bmatrix}x_0 \\x_1 \\x_2 \\x_3 \\x_4 \end{bmatrix}$ $\space \space \space$ $\theta^T = \begin{bmatrix} \theta_0 & \theta_1 &\theta_2 & \theta_3 & \theta_4 \end{bmatrix}$

So we can reduce the formula to:

$h_\theta(x) = \theta_0 + \theta_1x_1 + \theta_2x_2 + ... + \theta_nx_n = \theta^Tx$

### Hypothesis computing in Python

```python
from sympy import *
```

```python
X = Matrix(['x0','x1','x2','x3','x4'])
X
```

$\displaystyle \left[\begin{matrix}x_{0}\\x_{1}\\x_{2}\\x_{3}\\x_{4}\end{matrix}\right]$

```python
T = Matrix([['T0','T1','T2','T3','T4']])
T
```

$\displaystyle \left[\begin{matrix}T_{0} & T_{1} & T_{2} & T_{3} & T_{4}\end{matrix}\right]$

```python
H = T.multiply(X)
H
```

$\displaystyle \left[\begin{matrix}T_{0} x_{0} + T_{1} x_{1} + T_{2} x_{2} + T_{3} x_{3} + T_{4} x_{4}\end{matrix}\right]$

```python

```
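As a numeric counterpart to the symbolic computation, $h_\theta(x) = \theta^Tx$ can be evaluated for the whole training set with a single matrix product in NumPy. The $\theta$ values below are made-up illustration values, not fitted parameters:

```python
# design matrix: one row per training example, with x0 = 1 prepended
X_mat = np.hstack([np.ones((len(df), 1)),
                   df[['size', 'nbr_bedrooms', 'nbr_floors', 'age']].values])

theta = np.array([0.1, 0.2, 1.0, 0.5, -0.3])  # illustrative values only

h = X_mat @ theta  # h_theta(x^(i)) = theta^T x^(i) for every example i
print(h)
```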
```python
# picture from Jeremy Blum's book "Exploring Arduino" 2nd edition Page 50
from IPython.display import Image
from IPython.core.display import HTML
Image(url= "https://i.imgur.com/K6pJCwd.png")
```

```python
# through trial and error we find that 5*sin(0.5*x)**2 is very close to Blum's graph
# the problem is that I could not figure out how to get the same amount of steps,
# but with 7*sin(0.5*x)**2 the digital signal graph starts looking like Blum's
from sympy import *
x = symbols('x')
eq0 = 5*sin(0.5*x)**2
p = plot(eq0,legend = True,xlim = (0,4),ylim = (0,5),title="Analog Signal",
    xlabel = "Time(s)",ylabel = "Voltage (V)",size = (7,7),steps=1,show=False,line_color = 'r')
p.show()
```

```python
# What is unique here to my work was reverse engineering the graphs of Jeremy Blum that he also made with MATLAB
#
# Languages: Python Object Oriented Programming Language, Sympy (symbolic mathematics) Python external library
# and Jupyter Notebook web-based Integrated Development Environment (IDE)
# https://docs.sympy.org/
# https://python.org/
# https://jupyter.org/
#
# referencing Jeremy Blum's "Exploring Arduino" - 2nd edition P.50 for the idea
# thank you Andrew F. Rich for help with digital function notation
eq = 7*sin(0.5*x)**2
p = plot(eq,legend = True,xlim = (0,4),ylim = (0,7),title="Analog Signal",
    xlabel = "Time(s)",ylabel = "Voltage (V)",size = (7,7),steps=1,show=False,line_color = 'r')
p.show()
# here we continually guessed values for some sine equation to look similar to Page 50 of Blum's "Exploring Arduino"
# The differing aspect ratios of the images can make the sine wave function look distorted
#
# notice our y-axis is different (mine has 7 versus Blum's 5)
# this is because I don't know how to get the digital function to graph the 7 or 8 steps in the next part
```

```python
# notice here we use the previous 7*sin(0.5*x)**2 (variable: eq) as an input for the digital signal graph
eq2 = ((7 - Abs(floor((eq+0.6)) - 7)))
p = plot(eq2,legend = True,xlim = (0,4),ylim = (0,7),title="Digital Signal",
    xlabel = "Time(s)",ylabel = "Digital Value",size = (7,7),steps=1,show=False)
p.show()
# here we also reverse engineered Blum's digital value graph approximately
```

A digital value in the range [0,7] translated to 3-bit ADC codes would be:

- 7 == 111
- 6 == 110
- 5 == 101
- 4 == 100
- 3 == 011
- 2 == 010
- 1 == 001
- 0 == 000

It is also important to note that an ADC with n bits can quantize analog signals into 2**n levels, so a 10-bit ADC can produce 2**10 or 1024 representations. In these 3-bit graphs we have 2**3 or 8 representations from a 5 volt supply (although I had to use 7 volts to graph Blum's digital signal graph).

In my personal case, using the Arduino Metro microcontroller, a 5V regulator can supply a peak of ~800 mA as long as the die temperature of the regulator does not exceed 150 °C. Using https://www.rapidtables.com/calc/electric/ohms-law-calculator.html that means we have:

- Resistance (R) = 6.25 ohms
- Current (I) = 800 milliamps (mA)
- Voltage (V) = 5 volts (V)
- Power (P) = 4 watts (W)

```python
eq # Analog Signal Function
```

$\displaystyle 7 \sin^{2}{\left(0.5 x \right)}$

```python
eq2 # Digital Signal Function
```

$\displaystyle 7 - \left|{\left\lfloor{7 \sin^{2}{\left(0.5 x \right)} + 0.6}\right\rfloor - 7}\right|$

In conclusion, with `5*sin(0.5*x)**2` I had trouble getting the stepwise function to have 8 steps (0-7), but with `7*sin(0.5*x)**2` the digital graph looks more like Blum's. So in reverse engineering these graphs, I was able to get either the analog graph correct and the digital graph incorrect, or the analog graph incorrect and the digital graph correct.

Special thanks to Robin R Mitchell for ideas about low-level binary machine translations of summations.

Written by Nicholas Caudill
NSCaudill2020@manchester.edu
github.com/NSC9
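The 3-bit mapping listed earlier can also be generated programmatically; a small sketch using Python's binary string formatting:

```python
# map each digital value 0..7 to its 3-bit binary code,
# matching the table above (7 == 111, ..., 0 == 000)
bits = 3
for value in range(2**bits - 1, -1, -1):
    print(value, '==', format(value, '0{}b'.format(bits)))
```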
# Adaptive Finite Element Method for a Nonlinear Poisson Equation In this tutorial we solve the nonlinear Poisson equation from tutorial 01 using adaptive grid refinement. The finite element solution on a given mesh is used to compute local error indicators that can be used to iteratively reduce the discretization error. It is assumed that the nonlinearity of the PDE is not too strong, so that it may be approximated by linearizing around the current finite element solution. This allows the calculation of local error estimates that quantify the discretization error on each element of the mesh. This tutorial depends on tutorial 01 which discusses the solution of the considered partial differential equation. It is assumed that you have worked through tutorial 01 before. Additionally, the error estimator has to assemble skeleton terms, which are explained in greater detail in tutorial 02. It may therefore be useful to have worked through that tutorial as well, but the actual method discussed there is not relevant here. # Problem Formulation We consider the following nonlinear Poisson equation with Dirichlet and Neumann boundary conditions as introduced in tutorial 01: \begin{align} \label{eq:ProblemStrong} -\Delta u + q(u) &= f &&\text{in $\Omega$},\\ u &= g &&\text{on $\Gamma_D\subseteq\partial\Omega$},\\ -\nabla u\cdot \nu &= j &&\text{on $\Gamma_N=\partial\Omega\setminus\Gamma_D$}. \end{align} $\Omega\subset\mathbb{R}^d$ is a domain, $q:\mathbb{R}\to\mathbb{R}$ is a given, possibly nonlinear function and $f: \Omega\to\mathbb{R}$ is the source term and $\nu$ denotes the unit outer normal to the domain. The weak formulation of this problem is derived by multiplication with an appropriate test function and integrating by parts. This results in the abstract problem: \begin{equation} \text{Find $u\in U$ s.t.:} \quad r^{\text{NLP}}(u,v)=0 \quad \forall v\in V, \label{Eq:BasicBuildingBlock} \end{equation} with the continuous residual form \begin{equation*} r^{\text{NLP}}(u,v) = \int_\Omega \nabla u \cdot \nabla v + (q(u)-f)v\,dx + \int_{\Gamma_N} jv\,ds \label{eq:ResidualForm} \end{equation*} and the function spaces $U= \{v\in H^1(\Omega) \,:\, \text{''$v=g$'' on $\Gamma_D$}\}$ and $V= \{v\in H^1(\Omega) \,:\, \text{''$v=0$'' on $\Gamma_D$}\}$. We assume that $q$ is such that this problem has a unique solution. The example application uses a domain $\Omega$ that is L-shaped and given by \begin{equation*} \Omega = \left\{ (x,y) \in \mathbb{R}^2 \colon |x| < 1, |y| < 1, (x < 0 \lor y > 0) \right\}. \end{equation*} We set $f = 0$ and $q(u) = 0$, and only consider Dirichlet boundary conditions. The resulting PDE in residual form is \begin{equation*} r^{\text{NLP}}(u,v) = \int_\Omega \nabla u \cdot \nabla v \,dx, \end{equation*} and the function \begin{equation*} u(r,\theta) = r^{2/3} \cdot \text{sin}\left(\frac{2}{3} \theta\right) \end{equation*} in polar coordinates is one of its solutions on the given domain. We choose the restriction of this function to the boundary of $\Omega$ for the Dirichlet boundary condition. # Adaptive Grid Refinement <a id= "AdaptiveGridRef"> In the previous tutorials a fixed finite element space $V_h$ was used, based on a fixed finite element mesh with an ordered set \begin{equation} \mathcal{X}_h = \{x_1,\ldots,x_N\} \end{equation} of vertices and an ordered set \begin{equation} \mathcal{T}_h = \{T_1, \ldots, T_M\} \end{equation} of elements. The task of choosing an appropriate mesh was not discussed in detail. 
Any choice of finite element space will lead to discretization errors due to the finite-dimensional approximation of the true solution of the PDE, but the size of this error can be controlled by choosing an adequate mesh. Let $u_h \in V_h$ be the finite element solution and $\|\cdot\|$ an appropriate norm. The goal is then
\begin{equation*}
\|u - u_h\| \leq \text{TOL}
\end{equation*}
with a given tolerance $\text{TOL}$, while keeping the complexity and size of $V_h$ as low as possible.

The *a-priori* error estimates of formal proofs are not suitable for this task, since they contain a constant $C$ that is typically unknown and do not provide information about the spatial distribution of the error. For this reason one is interested in **a-posteriori** error estimates of the form
\begin{equation}
\|u - u_h\| \leq \gamma(u_h) = \left( \sum_{T \in {\mathcal T}_h} \gamma_T^2(u_h) \right)^{1/2},
\end{equation}
where $\gamma_T(u_h)$ is a **local error estimator** for the element $T$. Such an error estimate should:

1. have comparatively low cost of calculation to make an adaptive approach feasible
2. accurately describe the spatial distribution and size of the discretization error

The local estimates $\gamma_T(u_h)$ can then be used in a strategy that tries to minimize $\gamma(u_h)$ for fixed $\text{dim}(V_h)$ and thereby find a mesh with low discretization error $\|u - u_h\|$.

If the considered PDE is linear, i.e. $q(u)$ is an affine linear function, then it can be shown [[1]](#ref) that such an estimate of the discretization error is given by
\begin{equation*}
\gamma_T^2(u_h) = h_T^2 \| R_T(u_h) \|_{0,T}^2 + \sum_{F \in \mathcal{F}_h \cap \partial T} h_T \| R_F(u_h) \|_{0,F}^2
\end{equation*}
with the **element residuals**
\begin{equation*}
R_T(u_h) = f + \Delta u_h - q(u_h)
\end{equation*}
and the **face residuals**
\begin{equation*}
R_F(u_h) = \begin{cases}
2^{-1/2} [-(\nabla u_h) \cdot \nu] & F \in \mathcal{F}_h^i \\
- (\nabla u_h) \cdot \nu - j & F \in \mathcal{F}_h^N
\end{cases}
\end{equation*}
Here $h_T$ is the local mesh width of the element $T$, $\mathcal{F}_h$ the set of faces of the elements in $\mathcal{T}_h$, $\partial T$ the boundary of element $T$, $\mathcal{F}_h^i \subset \mathcal{F}_h$ the set of interior faces, $\mathcal{F}_h^N \subset \mathcal{F}_h$ the set of Neumann boundary faces and $\nu$ the unit normal vector of the face $F$ pointing from one of the elements $T_F^-$ that belong to $F$ to the other $T_F^+$. $\|\cdot\|_{0,T}$ is the $L^2$ norm on $T$, $\|\cdot\|_{0,F}$ that on $F$, and $[\cdot]$ the jump of the given expression across the face $F$ in the direction of $\nu$.

Broadly speaking, the element residual $R_T$ measures the degree to which $u_h$ solves the PDE on the element $T$, and the face residual $R_F$ measures the consistency of the flux $-(\nabla u_h)\cdot \nu$ between the elements. With these definitions one can show that
\begin{equation*}
\|u - u_h\|_{1,\Omega} \leq C \gamma(u_h) = C \left( \sum_{T \in {\mathcal T}_h} \gamma_T^2(u_h) \right)^{1/2}
\end{equation*}
with $\|\cdot\|_{1,\Omega}$ the $H^1$ norm on $\Omega$. The right hand side of this inequality does not depend on $u$, which means it can be evaluated and used to control the discretization error. Several aspects have to be considered:

- The derivation of the error estimator given above does not require additional regularity beyond $u \in H^1(\Omega)$. This is critical, since error control is especially important in problems with low regularity.
- The constant $C$ in the estimate is usually not known exactly, and this should be taken into account when defining the tolerance $\text{TOL}$.
- The error estimate may converge with a lower order than the actual discretization error, which yields a very pessimistic stopping criterion. If the error estimate converges with the same order, then the estimator is called *efficient*. The lowest applicable constant $C$ is then the *efficiency index*.

All of these considerations only hold if the PDE is linear. If the function $q(u)$ is nonlinear, then the estimate $\gamma(u_h)$ is no longer guaranteed to be an upper bound for the discretization error $\|u-u_h\|_{1,\Omega}$. However, it may still be used as an error indicator for refining a given mesh. This can produce misleading results, and therefore the conclusions drawn from the estimate should be rather conservative. The error estimate will become more reliable the closer the finite element solution $u_h$ is to the exact solution $u$, as long as the function $q(u)$ is regular enough to be linearized around the exact solution.

# Realization in PDELab

The `fem` section in the ini-file provides the parameters for the finite element method. The new parameters control the adaptive refinement: `steps` specifies the maximum number of iterations for the refinement loop, `uniformlevel` the number of iterations that should use global refinement in the beginning, `tol` denotes the error tolerance used as a stopping criterion, and `fraction` allows setting the number of elements that should be marked in each step.

```ini
[fem]
steps=200
uniformlevel=0
tol=0.01
fraction=0.7
```

The following again includes the necessary headers.

```c++
#include <dune/jupyter.hh>
#include "nonlinearpoissonfemestimator.hh"
#include "problem.hh"
#include "../tutorial01/nonlinearpoissonfem.hh"
```

As described in the previous tutorials, a grid is instantiated in the following cell:

```c++
// open ini file
Dune::ParameterTree ptree;
Dune::ParameterTreeParser ptreeparser;
ptreeparser.readINITree("tutorial05.ini",ptree);

// read ini file
const int dim = 2;
const int degree = 1;

using Grid = Dune::UGGrid<2>;
std::string filename = ptree.get("grid.twod.filename", "ldomain.msh");
Dune::GridFactory<Grid> factory;
Dune::GmshReader<Grid>::read(factory,filename,true,true);
std::shared_ptr<Grid> gridp(factory.createGrid());
```

```c++
// get leaf gridview
auto gv = gridp->leafGridView();
typedef decltype(gv) GV;

// dimension and important types
const int dim = GV::dimension;
typedef typename GV::Grid::ctype DF; // type for coordinates
typedef double RF;                   // type for computations
```

The code alternates between solving the problem on the current mesh and using the current solution for adaptive refinement. These two steps are repeated until the error estimate is below the prescribed tolerance or the maximum number of iterations is reached. The parts of the problem that are not affected by the changing mesh are instantiated outside of the loop.

```c++
// make user functions
RF eta = ptree.get("problem.eta",(RF)1.0);
Problem<RF> problem(eta);
```

```c++
auto g = Dune::PDELab::makeGridFunctionFromCallable(
  gv, [&](const auto& e, const auto& x){return problem.g(e,x);}
);;
```

```c++
auto b = Dune::PDELab::makeBoundaryConditionFromCallable(
  gv, [&](const auto& i, const auto& x){return problem.b(i,x);}
);;
```

The parameter class `Problem` defines functions for the right hand side $f$, the source term $q$ and the value of the Dirichlet boundary condition $g$.
These functions are independent of the chosen discretization. This also holds for the lambda functions that encapsulate the Dirichlet boundary condition and its value. The grid function space `GFS` is a representation of the finite element space $V_h$. Most of its internals are independent of the actual mesh. The basis functions defined by the finite element map `FEM`, the constraints container `CON` and the vector backend `VBE` do not need to be modified when the mesh is refined. The only parts of the grid function space that have to be updated are the local to global map and the constrained indices, which will be considered later. ```c++ // Make grid function space using FEM = Dune::PDELab::PkLocalFiniteElementMap<GV,DF,RF,degree>; FEM fem(gv); using CON = Dune::PDELab::ConformingDirichletConstraints; using VBE = Dune::PDELab::ISTL::VectorBackend<>; using GFS = Dune::PDELab::GridFunctionSpace<GV,FEM,CON,VBE> ; GFS gfs(gv,fem); gfs.name("Vh"); ``` Then the constraints of the initial mesh and an initial guess for the finite element solution are assembled. ```c++ // Assemble constraints typedef typename GFS::template ConstraintsContainer<RF>::Type CC; CC cc; Dune::PDELab::constraints(b,gfs,cc); // assemble constraints // A coefficient vector using Z = Dune::PDELab::Backend::Vector<GFS,RF>; Z z(gfs); // initial value // Fill the coefficient vector Dune::PDELab::interpolate(g,gfs,z); ``` The central loop of the function starts by reading in the maximum number of refinement iterations and the number of steps that should use global refinement. This can be used to guarantee a maximum diameter of the elements of the resulting meshes. From this point on all statements are part of the loop that is repeated until the error estimate is small enough or the maximum number of iterations is reached. 
```c++ // adaptation loop // only grid function space and coefficient vector // live outside this loop int steps = ptree.get("fem.steps",(int)3); int uniformlevel = ptree.get("fem.uniformlevel",(int)2); for (int i=0; i<steps; i++) { std::stringstream s; s << i; std::string iter; s >> iter; std::cout << "Iteration: " << iter //--- << "\thighest level in grid: " << grid.maxLevel() << "\thighest level in grid: " << gridp->maxLevel() << std::endl; std::cout << "constrained dofs=" << cc.size() << " of " << gfs.globalSize() << std::endl; // Make local operator typedef NonlinearPoissonFEM<Problem<RF>,FEM> LOP; LOP lop(problem); // Make a global operator typedef Dune::PDELab::ISTL::BCRSMatrixBackend<> MBE; MBE mbe((int)pow(1+2*degree,dim)); typedef Dune::PDELab::GridOperator< GFS,GFS, /* ansatz and test space */ LOP, /* local operator */ MBE, /* matrix backend */ RF,RF,RF, /* domain, range, jacobian field type */ CC,CC /* constraints for ansatz and test space */ > GO; GO go(gfs,cc,gfs,cc,lop,mbe); // Select a linear solver backend typedef Dune::PDELab::ISTLBackend_SEQ_CG_AMG_SSOR<GO> LS; LS ls(100,0); // solve nonlinear problem Dune::PDELab::Newton<GO,LS,Z> newton(go,z,ls); newton.setReassembleThreshold(0.0); newton.setVerbosityLevel(2); newton.setReduction(1e-6); newton.setMinLinearReduction(1e-6); newton.setMaxIterations(25); newton.setLineSearchMaxIterations(10); newton.apply(); // set up error estimator typedef Dune::PDELab::P0LocalFiniteElementMap<DF,RF,dim> P0FEM; P0FEM p0fem(Dune::GeometryTypes::simplex(dim)); typedef Dune::PDELab::NoConstraints NCON; typedef Dune::PDELab::GridFunctionSpace<GV,P0FEM,NCON,VBE> P0GFS; P0GFS p0gfs(gv,p0fem); typedef NonlinearPoissonFEMEstimator<Problem<RF>,FEM> ESTLOP; ESTLOP estlop(problem); typedef Dune::PDELab::EmptyTransformation NCC; typedef Dune::PDELab::GridOperator< GFS,P0GFS, /* one value per element */ ESTLOP, /* operator for error estimate */ MBE,RF,RF,RF, /* same as before */ NCC,NCC /* no constraints */ > ESTGO; ESTGO estgo(gfs,p0gfs,estlop,mbe); // compute local error contribution and global error using Z0 = Dune::PDELab::Backend::Vector<P0GFS,RF>; Z0 z0(p0gfs,0.0); estgo.residual(z,z0); auto estimated_error = sqrt(z0.one_norm()); std::cout << "Estimated error in step " << i << " is " << estimated_error << std::endl; // ========================= // compute true L2 error // ========================= typedef Dune::PDELab::DiscreteGridFunction<GFS,Z> ZDGF; ZDGF zdgf(gfs,z); // the FE function as a grid function; moved from below // put your code here !!! // vtk output std::cout << "VTK output" << std::endl; Dune::SubsamplingVTKWriter<GV> vtkwriter(gv,Dune::refinementIntervals(ptree.get("output.subsampling",(int)1))); typedef Dune::PDELab::VTKGridFunctionAdapter<ZDGF> VTKF; vtkwriter.addVertexData( // the FE solution std::shared_ptr<VTKF>(new VTKF(zdgf,"fesol"))); // ========================= // output error function from above // your code here !!! 
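  // One possible sketch for the two exercises above, kept in comments
  // because the exact helper classes depend on the PDELab version in use:
  //
  //   // the reference solution is available as the grid function g,
  //   // so the squared pointwise error can be formed as an adapter
  //   typedef decltype(g) G;
  //   typedef Dune::PDELab::DifferenceSquaredAdapter<G,ZDGF> DifferenceSquared;
  //   DifferenceSquared differencesquared(g,zdgf);
  //   typename DifferenceSquared::Traits::RangeType l2errorsquared(0.0);
  //   Dune::PDELab::integrateGridFunction(differencesquared,l2errorsquared,10);
  //   std::cout << "l2 error squared: " << l2errorsquared << std::endl;
  //
  //   // and for VTK output of the squared pointwise error:
  //   typedef Dune::PDELab::VTKGridFunctionAdapter<DifferenceSquared> VTKDS;
  //   vtkwriter.addVertexData(
  //     std::shared_ptr<VTKDS>(new VTKDS(differencesquared,"error^2")));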
  // =========================
  // output squares of the local estimator
  typedef Dune::PDELab::DiscreteGridFunction<P0GFS,Z0> Z0DGF;
  Z0DGF z0dgf(p0gfs,z0);
  typedef Dune::PDELab::VTKGridFunctionAdapter<Z0DGF> VTKF0;
  vtkwriter.addCellData(
    std::shared_ptr<VTKF0>(new VTKF0(z0dgf,"gammaT^2")));

  // now write the file
  vtkwriter.write(ptree.get("output.filename",
                            (std::string)"output")+iter,
                  Dune::VTK::appendedraw);

  // error control
  auto tol = ptree.get("fem.tol",(double)0.0);
  if (estimated_error<=tol) break;
  if (i==steps-1) break;

  // mark elements for refinement
  std::cout << "mark elements" << std::endl;
  auto fraction = ptree.get("fem.fraction",(double)0.5);
  RF eta_refine,eta_coarsen;
  Dune::PDELab::error_fraction(
    z0,fraction,0.0,eta_refine,eta_coarsen);
  if (fraction>=1.0 || i<uniformlevel) eta_refine=0.0;
  //--- Dune::PDELab::mark_grid(grid,z0,eta_refine,0.0,2);
  Dune::PDELab::mark_grid(*gridp,z0,eta_refine,0.0,2);

  // do refinement
  std::cout << "adapt grid and solution" << std::endl;
  //--- Dune::PDELab::adapt_grid(grid,gfs,z,2*(degree+1));
  Dune::PDELab::adapt_grid(*gridp,gfs,z,2*(degree+1));

  // recompute constraints
  std::cout << "constraints and stuff" << std::endl;
  Dune::PDELab::constraints(b,gfs,cc);

  // write correct boundary conditions in new vector
  Z znew(gfs);
  Dune::PDELab::interpolate(g,gfs,znew);

  // copy Dirichlet boundary to interpolated solution
  Dune::PDELab::copy_constrained_dofs(cc,znew,z);
}
```

Figure 1 shows the final mesh and the corresponding solution $u_h$. The discretization error is almost completely restricted to the elements that touch the reentrant corner in the point $(0,0)$ and is therefore hard to visualize.

<font color = "grey"> Figure 1: Solution on the finest mesh and underlying elements. The colormap for the solution uses discrete intervals to make the large gradients in the reentrant corner visible. Refinement has mainly occurred in this area, as can be seen on the right.</font>

### Loop Explanation

Information about the refinement level and the dimension of the current finite element space is printed at the beginning of each iteration. Then the local operator and global operator of the nonlinear Poisson equation are created. The local operator is identical to the one of tutorial 01. It does not store information about the mesh and could therefore also be created outside of the loop. The global operator, however, has to take the discretization into account and therefore has to be updated after each iteration.

The next few lines construct a linear solver backend and an instance of the Newton solver, and solve the nonlinear problem on the current mesh.

These lines construct the error estimator and local error indicators. Each of these indicators is a single value $\gamma_T$ that is associated with one of the elements $T$, and we use a finite element map `P0FEM` for piecewise constant functions to store these values. The function space does not need additional constraints, similar to the finite volume space of tutorial 02, and therefore the `NoConstraints` class is passed to the grid function space. Then a local operator for the error estimator is constructed. The parameter class of the PDE is passed as a template argument, since the estimator needs access to the problem definition. These components are then used to construct a grid operator of type `ESTGO` which is a representation of the element-wise computation of the error estimate.
```c++ // set up error estimator typedef Dune::PDELab::P0LocalFiniteElementMap<DF,RF,dim> P0FEM; P0FEM p0fem(Dune::GeometryTypes::simplex(dim)); typedef Dune::PDELab::NoConstraints NCON; typedef Dune::PDELab::GridFunctionSpace<GV,P0FEM,NCON,VBE> P0GFS; P0GFS p0gfs(gv,p0fem); typedef NonlinearPoissonFEMEstimator<Problem<RF>,FEM> ESTLOP; ESTLOP estlop(problem); typedef Dune::PDELab::EmptyTransformation NCC; typedef Dune::PDELab::GridOperator< GFS,P0GFS, /* one value per element */ ESTLOP, /* operator for error estimate */ MBE,RF,RF,RF, /* same as before */ NCC,NCC /* no constraints */ > ESTGO; ESTGO estgo(gfs,p0gfs,estlop,mbe); ``` A vector `z0` for the local estimates is created, and the error estimate is computed as the residual of the estimation grid operator. This way, the infrastructure of the assembler for the residual form of the PDE can be reused for this task. The result is a vector containing the squared values of the local error estimates. The norm of the estimated error is then printed to provide feedback about the refinement process. ```c++ // compute local error contribution and global error using Z0 = Dune::PDELab::Backend::Vector<P0GFS,RF>; Z0 z0(p0gfs,0.0); estgo.residual(z,z0); auto estimated_error = sqrt(z0.one_norm()); std::cout << "Estimated error in step " << i << " is " << estimated_error << std::endl; ``` A VTK output of the finite element solution and the squared error is created. This allows careful inspection of the estimated error and how it changes in the course of the adaptive refinement. ```c++ // vtk output std::cout << "VTK output" << std::endl; Dune::SubsamplingVTKWriter<GV> vtkwriter(gv,Dune::refinementIntervals(ptree.get("output.subsampling",(int)1))); typedef Dune::PDELab::VTKGridFunctionAdapter<ZDGF> VTKF; vtkwriter.addVertexData( // the FE solution std::shared_ptr<VTKF>(new VTKF(zdgf,"fesol"))); // output squares of the local estimator typedef Dune::PDELab::DiscreteGridFunction<P0GFS,Z0> Z0DGF; Z0DGF z0dgf(p0gfs,z0); typedef Dune::PDELab::VTKGridFunctionAdapter<Z0DGF> VTKF0; vtkwriter.addCellData( std::shared_ptr<VTKF0>(new VTKF0(z0dgf,"gammaT^2"))); // now write the file vtkwriter.write(ptree.get("output.filename", (std::string)"output")+iter, Dune::VTK::appendedraw); ``` If the norm of the estimated error is below the desired tolerance, then the finite element solution can be accepted. Additional refinement is unnecessary and the loop is exited. This also happens if the last iteration is reached, since the modified mesh would not be used for further computations. ```c++ // error control auto tol = ptree.get("fem.tol",(double)0.0); if (estimated_error<=tol) break; if (i==steps-1) break; ``` If the norm of the error estimate is still too large, then it has to be decided which of the elements should be refined and which should be kept. The fraction of elements that should be refined is read from the ini-file. The function `error_fraction` is called to translate this fraction of elements into a threshold `eta_refine` of the value of the local indicator that separates the elements that should be refined from those that should remain the same. It is also possible to request a fraction of elements that should be coarsened, but this value is zero here for simplicity. If the chosen fraction contains all elements or the loop is still in an iteration that should refine globally, then this threshold is set to zero. 
Afterwards the function `mark_grid` is called and marks all elements which have a value in `z0` that is larger than `eta_refine` to request their refinement. ```c++ // mark elements for refinement std::cout << "mark elements" << std::endl; auto fraction = ptree.get("fem.fraction",(double)0.5); RF eta_refine,eta_coarsen; Dune::PDELab::error_fraction( z0,fraction,0.0,eta_refine,eta_coarsen); if (fraction>=1.0 || i<uniformlevel) eta_refine=0.0; Dune::PDELab::mark_grid(grid,z0,eta_refine,0.0,2); ``` The next line is the central line that calls `adapt_grid` and modifies the mesh. In contrast to most other functions in PDELab, this function requires the grid as an argument, and not a `GridView`. Grid views are a read only concept, and `adapt_grid` has to modify its argument. The marks on the elements are passed to the grid manager, which decides how it can best achieve the requested refinement. Depending on the capabilities and restrictions of the grid manager, the resulting mesh may not be exactly as requested. Additional refinement may be necessary to keep the mesh conforming, for example. However, the level that is requested is a lower bound for the resulting level at each point in the domain, i.e. elements are only coarsened if requested and any marked element is refined. The function `adapt_grid` also receives the grid function space `gfs` and the solution `z` as arguments. The grid function space is updated to reflect the changes of the underlying mesh, and the solution is transferred to the new grid function space, by default through local $L^2$ projection onto the elements of the new mesh. The last argument specifies the order of the quadrature rule that is used in this process. ```c++ // do refinement std::cout << "adapt grid and solution" << std::endl; Dune::PDELab::adapt_grid(grid,gfs,z,2*(degree+1)); ``` The function `adapt_grid` has updated the grid function space and transferred the solution to the new mesh, but the Dirichlet boundary condition requires manual intervention. The function `constraints` is called again, which updates the list of constrained degrees of freedom on the Dirichlet boundary. Then the corresponding values of the Dirichlet boundary condition are computed and copied onto the constrained degrees of freedom of `z`. This guarantees that the function `g` of Dirichlet boundary values is always represented in the most accurate way possible on the mesh. With these last steps the iteration is finished, and the next one begins. ```c++ // recompute constraints std::cout << "constraints and stuff" << std::endl; Dune::PDELab::constraints(b,gfs,cc); // write correct boundary conditions in new vector Z znew(gfs); Dune::PDELab::interpolate(g,gfs,znew); // copy Dirichlet boundary to interpolated solution Dune::PDELab::copy_constrained_dofs(cc,znew,z); ``` ### Output Then the mesh file is read and some statistics about it are reported: ```bash Reading 2d Gmsh grid... version 2.2 Gmsh file detected file contains 375 nodes file contains 754 elements number of real vertices = 375 number of boundary elements = 74 number of elements = 674 ``` Now an instance of a DUNE grid is created, the initial setup is performed, and the refinement loop is started. The problem is solved on the initial mesh: ```bash Iteration: 0 highest level in grid: 0 constrained dofs=74 of 375 Initial defect: 2.9656e-02 Newton iteration 1. New defect: 2.9206e-11. Reduction (this): 9.8481e-10. 
Reduction (total): 9.8481e-10
```

Due to the missing reaction term $q$, the problem is actually linear, and therefore it is solved after just one Newton iteration. The program runs the error estimator and reports an estimate for the error $\|u - u_h\|$ in the $H^1$-seminorm:

```bash
Estimated error in step 0 is 0.285625
```

Then the program states that it performs the remaining tasks of the first iteration:

```bash
VTK output
mark elements
adapt grid and solution
Updating entity set
constraints and stuff
```

The current solution $u_h$ and the squared local error estimates are written out to a VTK file. Then the elements are marked for refinement based on the chosen fraction of elements and error estimate, the grid manager modifies the grid, and the solution $u_h$ and the grid function space are updated to reflect these changes. This includes a reevaluation of the constraints and the values of the Dirichlet boundary condition. Then the solution vector can be used as an initial guess for the next iteration.

The loop runs for several iterations and produces similar output in each step. The reported error estimates are:

```bash
Estimated error in step 0 is 0.285625
Estimated error in step 1 is 0.209272
Estimated error in step 2 is 0.145672
Estimated error in step 3 is 0.09893
Estimated error in step 4 is 0.0672581
Estimated error in step 5 is 0.0446733
```

The program also reports the number of constrained and total degrees of freedom after each refinement:

```bash
constrained dofs=74 of 375
constrained dofs=78 of 419
constrained dofs=90 of 647
constrained dofs=117 of 1409
constrained dofs=171 of 2957
constrained dofs=244 of 6804
```

These numbers can be collected in a table containing the refinement level, the estimated error, the number of elements, and the number of elements a globally refined mesh would have:

<table class='fixed'>
<col width="50px" /> <col width="50px" /> <col width="50px" /> <col width="50px" />
<tr> <th> level </th> <th> $\gamma(u_h)$ </th> <th> $\#T$ </th> <th> $\#T_\text{global}$ </th> </tr>
<tr> <td> 0 </td> <td> 0.286 </td> <td> 375 </td> <td> 375 </td> </tr>
<tr> <td> 1 </td> <td> 0.209 </td> <td> 419 </td> <td> 1.5e3 </td> </tr>
<tr> <td> 2 </td> <td> 0.146 </td> <td> 647 </td> <td> 6.0e3 </td> </tr>
<tr> <td> 3 </td> <td> 0.099 </td> <td> 1409 </td> <td> 2.4e4 </td> </tr>
<tr> <td> 4 </td> <td> 0.067 </td> <td> 2957 </td> <td> 9.6e4 </td> </tr>
<tr> <td> 5 </td> <td> 0.045 </td> <td> 6804 </td> <td> 3.8e5 </td> </tr>
</table>

## Classes `Problem` and `NonlinearPoissonFEM`

These two classes are nearly identical to those of tutorial 01 and are therefore skipped. The only differences are the definitions of the right hand side and Dirichlet boundary condition. The example application of the tutorial solves the Laplace equation for a known reference solution, and therefore we set $f = 0$.

```c++
template<typename E, typename X>
Number f (const E& e, const X& x) const
{
  return 0.0;
}
```

The value of the Dirichlet boundary condition is the restriction of the reference solution to the boundary, which is given by

\begin{equation*}
u (r, \theta) = r^{2/3} \cdot \sin\left(\frac{2}{3} \theta\right)
\end{equation*}

in polar coordinates.

```c++
//! Dirichlet extension
template<typename E, typename X>
Number g (const E& e, const X& xlocal) const
{
  auto x = e.geometry().global(xlocal);
  double theta = std::atan2(x[1],x[0]);
  if(theta < 0.0) theta += 2*M_PI;
  auto r = x.two_norm();
  return pow(r,2.0/3.0)*std::sin(theta*2.0/3.0);
}
```

## Local Operator `NonlinearPoissonFEMEstimator`

The class `NonlinearPoissonFEMEstimator` implements the element-wise computations of the error estimate introduced in [Section 3](#AdaptiveGridRef). The computation of the error estimate is then implemented as a grid operator that returns the square of the estimated error as its residual. The contributions $R_T$ for the elements are provided by an `alpha_volume` term, while the contributions $R_F$ for the faces are provided by `alpha_skeleton` and `alpha_boundary` terms.

The definition of class `NonlinearPoissonFEMEstimator` starts as follows:

```c++
template<typename Param, typename FEM>
class NonlinearPoissonFEMEstimator
  : public Dune::PDELab::LocalOperatorDefaultFlags
```

The class is parametrized by a parameter class and a finite element map, just as the `NonlinearPoissonFEM` class. The parameter class is the same in both cases and provides access to the problem definition. The finite element map is expected to provide element-wise constant functions that match the geometry of the mesh elements, since the calculated error estimate contains one value per element. The class does not contain methods for the Jacobian and matrix-free Jacobian evaluation, as only the residual has to be assembled for the error estimator. Therefore, only the `LocalOperatorDefaultFlags` are inherited.

Just as in the class `NonlinearPoissonFEM`, the local operator contains three private data members, a cache for evaluation of the basis functions on the reference element:

```c++
typedef typename FEM::Traits::FiniteElementType
  ::Traits::LocalBasisType LocalBasis;
Dune::PDELab::LocalBasisCache<LocalBasis> cache;
```

a reference to the parameter object:

```c++
Param& param;
```

and an integer value controlling the order of the formulas used for numerical quadrature:

```c++
int incrementorder;
```

The class has an additional private method `diameter`:

```c++
template<class GEO>
typename GEO::ctype diameter (const GEO& geo) const
{
  typedef typename GEO::ctype DF;
  DF hmax = -1.0E00;
  for (int i=0; i<geo.corners(); i++)
    {
      auto xi = geo.corner(i);
      for (int j=i+1; j<geo.corners(); j++)
        {
          auto xj = geo.corner(j);
          xj -= xi;
          hmax = std::max(hmax,xj.two_norm());
        }
    }
  return hmax;
}
```

This function iterates over all pairs of corners of a given entity and defines the maximum distance among these pairs as its diameter. This information is required to scale the element contributions and face contributions of the error estimate with the local mesh width.

The public part of the class again starts with the definition of the flags controlling the generic assembly process.
The `doPatternVolume` and `doPatternSkeleton` flags are both `false` to indicate that the sparsity pattern of the Jacobian can be skipped for this local operator:

```c++
enum { doPatternVolume = false };
enum { doPatternSkeleton = false };
```

The residual assembly flags indicate that in this local operator we will provide the methods `alpha_volume`, `alpha_skeleton` and `alpha_boundary`:

```c++
enum { doAlphaVolume = true };
enum { doAlphaSkeleton = true };
enum { doAlphaBoundary = true };
```

Next comes the constructor, taking as arguments a reference to a parameter object and the optional increment of the quadrature order:

```c++
NonlinearPoissonFEMEstimator (Param& param_, int incrementorder_=0)
  : param(param_), incrementorder(incrementorder_)
```

### Method `alpha_volume`

This method assembles the error estimate contribution $R_T$ for a single element $T$. Its interface is

```c++
template<typename EG, typename LFSU, typename X,
         typename LFSV, typename R>
void alpha_volume (const EG& eg, const LFSU& lfsu,
                   const X& x, const LFSV& lfsv,
                   R& r) const
```

This method is similar to the `alpha_volume` method of `NonlinearPoissonFEM`, but assembles a different function. The method starts by extracting the floating point type to be used for computations:

```c++
typedef decltype(makeZeroBasisFieldValue(lfsu)) RF;
```

Then a quadrature rule is selected:

```c++
auto geo = eg.geometry();
const int order = incrementorder+2*lfsu.finiteElement().localBasis().order();
auto rule = Dune::PDELab::quadratureRule(geo,order);
```

A variable `sum` to collect the contributions is created, and the quadrature loop is started:

```c++
RF sum(0.0);
for (const auto& ip : rule)
  {
```

Within the quadrature loop the basis functions are evaluated

```c++
// evaluate basis functions
auto& phihat = cache.evaluateFunction(
  ip.position(),lfsu.finiteElement().localBasis());
```

and the value of $u_h$ at the quadrature point is computed.

```c++
// evaluate u
RF u=0.0;
for (size_t i=0; i<lfsu.size(); i++)
  u += x(lfsu,i)*phihat[i];
```

We neglect contributions containing the gradient of $u_h$, since they would require second-order derivatives of the basis functions. This means we do not need to evaluate the gradients of the basis functions. The right hand side $f$ and the nonlinear term $q(u_h)$ are evaluated:

```c++
// evaluate reaction term
auto q = param.q(u);

// evaluate right hand side parameter function
auto f = param.f(eg.entity(),ip.position());
```

and the local error estimate for this quadrature point is calculated:

```c++
// integrate (f-q)^2
RF factor = ip.weight() * geo.integrationElement(ip.position());
sum += (f-q)*(f-q)*factor;
}
```

After summing up all local contributions, we are in a position to finally compute the estimate on the element by multiplying with $h_T^2$:

```c++
// accumulate cell indicator
auto h_T = diameter(eg.geometry());
r.accumulate(lfsv,0,h_T*h_T*sum);
}
```

### Method `alpha_skeleton`

This method has a structure that is very similar to the skeleton terms of the finite volume method from tutorial 02.
It implements the error estimate contributions $R_F$ for the interior faces of the mesh and has the following interface:

```c++
template<typename IG, typename LFSU, typename X,
         typename LFSV, typename R>
void alpha_skeleton (const IG& ig,
                     const LFSU& lfsu_i, const X& x_i, const LFSV& lfsv_i,
                     const LFSU& lfsu_o, const X& x_o, const LFSV& lfsv_o,
                     R& r_i, R& r_o) const
```

As in tutorial 02, the arguments comprise an intersection, local trial function and local test space for both elements adjacent to the intersection, and containers for the local residual contributions in both elements. The subscripts `_i` and `_o` correspond to "inside" and "outside". With respect to our notation above, "inside" corresponds to "-" and "outside" corresponds to "+".

As already noted in the context of the finite volume method of tutorial 02, the `alpha_skeleton` method needs to assemble contributions for *both* elements next to the intersection. It starts by extracting the geometries of the intersection relative to the two elements adjacent to the intersection

```c++
// geometries in local coordinates of the elements
auto insidegeo = ig.geometryInInside();
auto outsidegeo = ig.geometryInOutside();
```

the two elements themselves

```c++
// inside and outside cells
auto cell_inside = ig.inside();
auto cell_outside = ig.outside();
```

and their geometries

```c++
// geometries from local to global in elements
auto geo_i = cell_inside.geometry();
auto geo_o = cell_outside.geometry();
```

Then the dimension of the intersection is extracted and a quadrature rule is created:

```c++
// dimensions
const int dim = IG::Entity::dimension;

// select quadrature rule
auto globalgeo = ig.geometry();
const int order = incrementorder+2*lfsu_i.finiteElement().localBasis().order();
auto rule = Dune::PDELab::quadratureRule(globalgeo,order);
```

The quadrature rule integrates over the intersection and collects contributions just as the rule in `alpha_volume` did:

```c++
// loop over quadrature points and integrate normal flux
typedef decltype(makeZeroBasisFieldValue(lfsu_i)) RF;
RF sum(0.0);
for (const auto& ip : rule)
  {
```

The coordinates on the intersection are translated to coordinates in the two elements to make the evaluation of $u_h$ and its gradient possible, and the unit vector on the intersection pointing from $T^-$ to $T^+$ is obtained.

```c++
// position of quadrature point in local coordinates of elements
auto iplocal_i = insidegeo.global(ip.position());
auto iplocal_o = outsidegeo.global(ip.position());

// unit outer normal direction
auto n_F = ig.unitOuterNormal(ip.position());
```

The gradients of the basis functions are evaluated on the reference element, mapped onto the actual element, and used to calculate $(\nabla u_h) \cdot \nu$ for the element $T_F^-$. Afterwards the same steps are performed for $T_F^+$, which is not repeated here.

```c++
// gradient in normal direction in self
auto& gradphihat_i = cache.evaluateJacobian(
  iplocal_i,lfsu_i.finiteElement().localBasis());
const auto S_i = geo_i.jacobianInverseTransposed(iplocal_i);
RF gradun_i = 0.0;
for (size_t i=0; i<lfsu_i.size(); i++)
  {
    Dune::FieldVector<RF,dim> v;
    S_i.mv(gradphihat_i[i][0],v);
    gradun_i += x_i(lfsu_i,i)*(v*n_F);
  }
```

The face residual $R_F$ is calculated as the difference between the two directional derivatives of $u_h$ across the face, and its square is added to the sum.
```c++
// integrate
RF factor = ip.weight()*globalgeo.integrationElement(ip.position());
RF jump = gradun_i-gradun_o;
sum += jump*jump*factor;
}
```

After summing up all local contributions, the result is multiplied with $\frac{1}{2} h_T$ and added to the estimate on the element:

```c++
// accumulate indicator
auto h_T = diameter(globalgeo);
r_i.accumulate(lfsv_i,0,0.5*h_T*sum);
r_o.accumulate(lfsv_o,0,0.5*h_T*sum);
}
```

### Method `alpha_boundary`

The method `alpha_boundary` is largely identical to `alpha_skeleton` and therefore isn't repeated here. All calculations concerning $T_F^+$ are dropped, and instead the Neumann boundary condition value $j$ is used in the calculation of the jump:

```c++
// Neumann boundary condition value
auto j = param.j(ig.intersection(),ip.position());

// integrate
RF factor = ip.weight()*globalgeo.integrationElement(ip.position());
RF jump = gradun_i+j;
sum += jump*jump*factor;
```

Additionally, the final accumulation scales with $h_T$, and not with $\frac{1}{2} h_T$ as in the `alpha_skeleton` method.

```c++
// accumulate indicator
auto h_T = diameter(globalgeo);
r_i.accumulate(lfsv_i,0,h_T*sum);
```

# Exercises

## Playing with the Parameters

The adaptive finite element algorithm is controlled by several parameters which can be set in the ini-file. Besides the polynomial degree these are:

- `steps`: how many times the solve-adapt cycle is executed.
- `uniformlevel`: how many times uniform mesh refinement is used before adaptive refinement starts.
- `tol`: tolerance at which the adaptive algorithm is stopped.
- `fraction`: Remember that the global error is estimated from element-wise contributions:
$$\gamma^2(u_h) = \sum_{T\in \mathcal{T}_h} \gamma_T^2(u_h).$$
The fraction parameter controls which elements are marked for refinement. More precisely, the largest threshold $\gamma^\ast$ is determined such that
$$\sum_{\{T\in\mathcal{T}_h \,:\, \gamma_T(u_h)\geq\gamma^\ast\}} \gamma_T^2(u_h) \geq \text{fraction} \cdot \gamma^2(u_h). $$
If `fraction` is set to zero, no elements need to be refined, i.e. $\gamma^\ast=\infty$. When `fraction` is set to one, all elements need to be refined, i.e. $\gamma^\ast=0$. A smaller `fraction` parameter results in a more optimized mesh but needs more steps and thus might be more expensive. If `fraction` is chosen too large, then almost all elements are refined, which might also be expensive, so there exists an (unknown) optimal value for `fraction`.

Carry out the following experiments:

1. Choose polynomial degree 1 and fix a tolerance, e.g. `tol`=0.01. Set `steps` to a large number so that the algorithm is not stopped before the tolerance is reached. Set `uniformlevel` to zero. Now vary the `fraction` parameter and measure the execution time with
```bash
$ time ./exercise05
```
2. Repeat the previous experiment with polynomial degrees two and three. Compare the optimal meshes generated for the different polynomial degrees.
3. The idea of the heuristic refinement algorithm refining only the elements with the largest contribution to the error is to *equilibrate* the error. To check how well this is achieved, use the calculator filter in ParaView (use cell data) to plot $\log(\gamma_T(u_h))$. Note that the cell data exported by the (unchanged) code is $\gamma_T^2(u_h)$! Now check the minimum and maximum error contributions. What happens directly at the singularity, i.e. the point $(0,0)$?
- *times for `degree` = 1, `tol` = 0.01, `steps` = 200 and `uniformlevel` = 0*

| fraction|execution time|
|--------:|-------------:|
|      0.2|          38s |
|      0.4|          24s |
|      0.6|          19s |
|      0.8|          17s |
|      0.9|          21s |

- *times for same setup as before, but `degree` = 2*

| fraction|execution time|
|--------:|-------------:|
|      0.2|       1.25 s |
|      0.4|       0.53 s |
|      0.6|       0.47 s |
|      0.8|       0.38 s |
|      0.9|       0.45 s |

- *times for same setup as before, but `degree` = 3*

| fraction|execution time|
|--------:|-------------:|
|      0.2|       1.81 s |
|      0.4|       1.12 s |
|      0.6|       0.71 s |
|      0.8|       0.62 s |
|      0.9|       0.60 s |

- *examples of optimal meshes for polynomial degrees 1, 2, 3 (fraction = 0.7): figures omitted*
- *example of $\log(\gamma_T(u_h))$ for degree = 3: figure omitted*

## Compute the True $L_2$-Error

*Disclaimer: In this exercise we compute and evaluate the $L_2$-norm of the error. The residual based error estimator implemented in the code, however, estimates the $H^1$-seminorm (which is equivalent to the $H^1$-norm here) and thus also optimizes the mesh with respect to that norm. This is done to make the exercise as simple as possible. It would be more appropriate to carry out the experiments either with the true $H^1$-norm or to change the estimator to the $L_2$-norm.*

To do this exercise we need some more background on PDELab grid functions. The following code, known from several tutorials by now, constructs a PDELab grid function object `g`:

```c++
Problem<RF> problem(eta);
auto glambda = [&](const auto& e, const auto& x){return problem.g(e,x);};
auto g = Dune::PDELab::makeGridFunctionFromCallable(gv,glambda);
```

Also the following code segment, used in many tutorials, constructs a PDELab grid function object `zdgf` from a grid function space and a coefficient vector:

```c++
typedef Dune::PDELab::DiscreteGridFunction<GFS,Z> ZDGF;
ZDGF zdgf(gfs,z);
```

PDELab grid functions can be evaluated in local coordinates given by an element and a local coordinate within the corresponding reference element. The result is stored in an additional argument passed by reference. Here is a code segment doing the job:

```c++
Dune::FieldVector<RF,1> truesolution(0.0);
g.evaluate(e,x,truesolution);
```

Here we assume that `e` is an element and `x` is a local coordinate. The result is stored in a `Dune::FieldVector` with one component of type `RF`. This also allows vector-valued results to be returned.

Now here are your tasks:

1. Extend the code in the notebook to provide a grid function that represents the error $u-u_h$. Note that the method `g` in the parameter class already returns the true solution (for $\eta=0$). You can solve this task by writing a lambda function subtracting the values returned by `g.evaluate` and `zdgf.evaluate`.

- *solution*
```c++
// =========================
// compute true L2 error
// =========================
typedef Dune::PDELab::DiscreteGridFunction<GFS,Z> ZDGF;
ZDGF zdgf(gfs,z); // the FE function as a grid function; moved from below
auto errorlambda = // subtract exact solution from FE solution
  [&](const auto& e, const auto& x){
    Dune::FieldVector<RF,1> fesolution(0.0);
    zdgf.evaluate(e,x,fesolution);
    Dune::FieldVector<RF,1> truesolution(0.0);
    g.evaluate(e,x,truesolution);
    return truesolution[0]-fesolution[0];
  };
auto error = Dune::PDELab::makeGridFunctionFromCallable(gv,errorlambda); // make grid function
```

2. Compute the $L_2$-norm of the error, i.e. $\|u-u_h\|_0 = \sqrt{\int_\Omega (u(x)-u_h(x))^2\,dx}$.
This can be done by creating a grid function with the squared error using `Dune::PDELab::SqrGridFunctionAdapter` from `<dune/pdelab/function/sqr.hh>` and the function `Dune::PDELab::integrateGridFunction` contained in the header file `<dune/pdelab/common/functionutilities.hh>`. Output the result to the console by writing, on a single line, a tag like `L2ERROR`, the number of degrees of freedom using `gfs.globalSize()`, and the $L_2$-norm. This lets you grep the results later for post-processing.

- *solution*
```c++
typedef Dune::PDELab::SqrGridFunctionAdapter<decltype(error)> SQRERROR; // square of the error
SQRERROR sqrerror(error);
Dune::FieldVector<RF,1> integral(0.0);
Dune::PDELab::integrateGridFunction(sqrerror,integral,2*degree); // integrate grid function
std::cout << "L2ERROR " << gfs.globalSize() << " "
          << sqrt(integral[0]) << std::endl;
```

3. Investigate the $L_2$-error for different polynomial degrees. Run the code using different polynomial degrees and also uniform and adaptive (use your optimal `fraction`) refinement. Use `gnuplot` to produce graphs such as those in Figure 2. An example gnuplot file `plot.gp` is given.

<font color = "grey"> Figure 2: Comparison of $L_2$-error for different polynomial degrees and adaptive and uniform meshes.</font>

The graph shows that for uniform refinement polynomial degree 2 is better than polynomial degree 1 only by a constant factor, while for adaptive refinement the optimal convergence rate can be recovered. For $P_1$ finite elements and a fully regular solution in $H^2(\Omega)$, the a-priori estimate for uniform refinement yields $\|u-u_h\|_0\leq C h^2$. Expressing $h$ in terms of the number of degrees of freedom $N$ yields $h\sim N^{-1/d}$, so $\|u-u_h\|_0\leq C h^2 \leq C' N^{-1}$ for $d=2$. The curve $N^{-1}$ is shown for comparison in the plot. Clearly, the result for $P_1$ is parallel to that line.

4. Optional: Also write the true error to the VTK output file.
592090db81a4a832322c3315ae694d721d23c15a
69,048
ipynb
Jupyter Notebook
notebooks/tutorial05/pdelab-tutorial05.ipynb
dokempf/dune-jupyter-course
1da9c0c2a056952a738e8c7f5aa5aa00fb59442c
[ "BSD-3-Clause" ]
1
2022-01-21T03:16:12.000Z
2022-01-21T03:16:12.000Z
notebooks/tutorial05/pdelab-tutorial05.ipynb
dokempf/dune-jupyter-course
1da9c0c2a056952a738e8c7f5aa5aa00fb59442c
[ "BSD-3-Clause" ]
21
2021-04-22T13:52:59.000Z
2021-10-04T13:31:59.000Z
notebooks/tutorial05/pdelab-tutorial05.ipynb
dokempf/dune-jupyter-course
1da9c0c2a056952a738e8c7f5aa5aa00fb59442c
[ "BSD-3-Clause" ]
1
2021-04-21T08:20:02.000Z
2021-04-21T08:20:02.000Z
41.898058
660
0.602842
true
13,811
Qwen/Qwen-72B
1. YES 2. YES
0.763484
0.851953
0.650452
__label__eng_Latn
0.984536
0.349549
```python
def downloadDriveFile(file_id,file_name,file_extension):
    '''
    Allows loading of public files into Colab's workspace
    '''
    !wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id='$file_id -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id="$file_id -O "$file_name"."$file_extension" && rm -rf /tmp/cookies.txt

if 'google.colab' in str(get_ipython()):
    downloadDriveFile('1KkHZjTDShCz0uBEHWAZ9xx98F6ENMvrq','optimize','py')
```

```python
import numpy as np
import sympy as sp
import matplotlib.pyplot as plt

import optimize
```

Steepest Descent Method
=========

Example 1
----
Find the minimizer of $f(\mathbf{x}) = x_1 e^{-(x_1^2 + x_2^2)}$.

```python
x = sp.symbols('x1 x2')
func = x[0]*sp.exp(-x[0]**2 - x[1]**2)
x0 = [-1, 1]

min, search_steps = optimize.steepest_descent(func, x, start=x0, full_output=True)
min_fix, search_steps_fix = optimize.steepest_descent(func, x, start=x0, step_type='fix', step=0.25, full_output=True)
```

# New section

```python
fig = plt.figure()

step = 0.01
x_vals = np.arange(-1.6, 0, step)
y_vals = np.arange(-1.6, 1.6, step)
X, Y = np.meshgrid(x_vals, y_vals)
function = sp.lambdify([*x], func, modules='numpy')
func_eval = function(X,Y)

ax1 = fig.add_subplot(1,2,1)
ax1.contour(X, Y, func_eval)
ax1.plot(search_steps[:, 0],search_steps[:, 1],'-k.', linewidth='0.8', markersize=6)
ax1.set_xlabel('$x_1$')
ax1.set_ylabel('$x_2$')

ax2 = fig.add_subplot(1,2,2)
ax2.contour(X, Y, func_eval)
ax2.plot(search_steps_fix[:, 0],search_steps_fix[:, 1],'-k.', linewidth='0.8' , markersize=6)
ax2.set_xlabel('$x_1$')
ax2.set_ylabel('$x_2$')

fig_dpi = 150
fig_width = 18.8/2.54
fig_height = 9.4/2.54
fig.dpi=fig_dpi
fig.set_size_inches((fig_width,fig_height))
fig.tight_layout()
plt.show()
```

Example 2
----
Find the minimizer of $f(\mathbf{x}) = 0.06e^{(2x_1+x_2)} + 0.05e^{(x_1-2x_2)} + e^{-x_1}$.

```python
x = sp.symbols('x1 x2')
func = 0.06*sp.exp(2*x[0]+x[1]) + 0.05*sp.exp(x[0]-2*x[1]) + sp.exp(-x[0])
x0 = [-0.5, 2]

min, search_steps = optimize.steepest_descent(func, x, start=x0, full_output=True)
min_fix, search_steps_fix = optimize.steepest_descent(func, x, start=x0, step_type='fix', step=0.25, full_output=True)
```

```python
fig = plt.figure()

step = 0.001
x_vals = np.arange(-1, 1.75, step)
y_vals = np.arange(-2, 3.5, step)
X, Y = np.meshgrid(x_vals, y_vals)
function = sp.lambdify([*x], func, modules='numpy')
func_eval = function(X,Y)

ax1 = fig.add_subplot(1,2,1)
ax1.contour(X, Y, func_eval)
ax1.plot(search_steps[:, 0],search_steps[:, 1],'-k.', linewidth='0.8', markersize=6)
ax1.set_xlabel('$x_1$')
ax1.set_ylabel('$x_2$')

ax2 = fig.add_subplot(1,2,2)
ax2.contour(X, Y, func_eval)
ax2.plot(search_steps_fix[:, 0],search_steps_fix[:, 1],'-k.', linewidth='0.8' , markersize=6)
ax2.set_xlabel('$x_1$')
ax2.set_ylabel('$x_2$')

fig_dpi = 150
fig_width = 18.8/2.54
fig_height = 9.4/2.54
fig.dpi=fig_dpi
fig.set_size_inches((fig_width,fig_height))
fig.tight_layout()
plt.show()
```

```python

```
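The `optimize` module is downloaded as an external file above, so its implementation is not part of this notebook. For orientation only, a fixed-step steepest-descent loop could be sketched as follows; this is an illustrative sketch, not the actual `optimize.steepest_descent`:

```python
import numpy as np
import sympy as sp

def steepest_descent_fixed(func, x, start, step=0.25, tol=1e-8, maxiter=500):
    """Minimal fixed-step steepest descent on a sympy expression (a sketch)."""
    # numerical gradient built from the symbolic partial derivatives
    grad = sp.lambdify(x, [sp.diff(func, xi) for xi in x], modules='numpy')
    xk = np.asarray(start, dtype=float)
    path = [xk.copy()]
    for _ in range(maxiter):
        g = np.asarray(grad(*xk), dtype=float)
        if np.linalg.norm(g) < tol:   # stop when the gradient is (almost) zero
            break
        xk = xk - step * g            # move against the gradient direction
        path.append(xk.copy())
    return xk, np.array(path)

# example on the first test function
x = sp.symbols('x1 x2')
func = x[0]*sp.exp(-x[0]**2 - x[1]**2)
xmin, path = steepest_descent_fixed(func, x, start=[-1, 1])
```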
6a12c82530f7ddbf93ae7aec1ad286db6f617438
347,256
ipynb
Jupyter Notebook
numeric_analysis_exercises/steepest_descent_examples.ipynb
lufgarciaar/num_analysis_exercises
d145908494c5a7453830ec32dcac91df6fb028a4
[ "BSD-2-Clause" ]
null
null
null
numeric_analysis_exercises/steepest_descent_examples.ipynb
lufgarciaar/num_analysis_exercises
d145908494c5a7453830ec32dcac91df6fb028a4
[ "BSD-2-Clause" ]
null
null
null
numeric_analysis_exercises/steepest_descent_examples.ipynb
lufgarciaar/num_analysis_exercises
d145908494c5a7453830ec32dcac91df6fb028a4
[ "BSD-2-Clause" ]
null
null
null
347,256
347,256
0.869828
true
1,152
Qwen/Qwen-72B
1. YES 2. YES
0.828939
0.815232
0.675778
__label__eng_Latn
0.232175
0.408389
# Robust Registration of Catalogs

**Fan Tian, 12/01/2019** - ftian4@jhu.edu <br/>

## Description
In this notebook, we demonstrate using the robust registration algorithm [1] to cross-match small catalogs (particularly to those of the HST images) with rotation and shift. This is the latest version of the algorithm that:
- implements the "ring" algorithm, which subsets all pairs within an initial search radius $R$ into overlapping rings with a specified ring-width. <br/>
- uses a simple annealing schedule for the astrometric uncertainty, the $\sigma$ value.

We also compare the robust estimation results with the results from the method of least-squares [2]. <br/>

The first part of this notebook consists of an implementation of the algorithm on simulated HST/ACS/WFC catalogs. The second part demonstrates the cross-registration of an HST image (from the HLA catalog) to the Gaia DR2 catalog of the same field.

### Reference
[1] Tian, F. Budavári, T. Basu, A. Lubow, S.H. & White, R.L. (2019). Robust Registration of Astronomy Catalogs with Applications to the Hubble Space Telescope. _The Astronomical Journal_. 158(5) pp. 191. <a href="https://iopscience.iop.org/article/10.3847/1538-3881/ab3f38/meta">doi:10.3847/1538-3881/ab3f38</a>

[2] Budavári, T. & Lubow, S.H. (2012). Catalog Matching with Astrometric Correction and its Application to the Hubble Legacy Archive. _The Astrophysical Journal_. 761(2) pp.188. <a href="https://iopscience.iop.org/article/10.1088/0004-637X/761/2/188">doi:10.1088/0004-637X/761/2/188</a>

**Based on prototype implementations of 5/31/2018 - Tamás Budavári, and of 3/29/2019 - Rick White**

```python
%matplotlib inline
import time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import astropy

# Set page width to fill browser for longer output lines
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
# set width for pprint
astropy.conf.max_width = 150
```

```python
# import cross-registration modules
import xregistration.simulation as sim
import xregistration.estimation as est
import xregistration.est_catalog as rcat
```

```python
# global variables
# conversion factor between radians and arcseconds (arcsec per radian)
arc2rad = 3600 * 180 / np.pi
```

## Part 1. Simulation

## 1. Simulate mock universe and catalogs
- Simulate catalogs modeled on the HST/ACS/WFC catalog with field of view 202" × 202"
- Approximately 1500 sources

```python
# Initialize image size
size=202

# Initialize uncertainty parameter - sigma
sigma = 0.04

# Set seed
seed= 4444
np.random.seed(seed)

# Create mock universe
m = sim.mock(1500, size) # df with index that's the objid

# Create perturbed catalogs within selection interval - same size as m
cn = [sim.cat(m,sigma,l,h) for l,h in [(0.2, 1), (0, 0.9)]] # selection intervals

# Select catalogs - objid index retained
cs = [a[a.Selected] for a in cn]

# Generate random omega0 and catalog0
omega0, c0 = sim.randomega(cs[0], scale=60)

# Generate catalog1, with omega1 = -omega0
omega1 = -1 * omega0
c1 = sim.trf(cs[1], omega1)

# transformed catalogs
co = [c0,c1]

print("Average offset of two catalogs before transformation: {:2.3f} arcsec".format(sim.getsep(cs[0],cs[1],"mean")))
print("Average offset of two catalogs after transformation: {:2.3f} arcsec".format(sim.getsep(co[0],co[1],"mean")))
```

## 2.
Robust Ring Estimation:
- Cross-match pairs within rings
- Ring selection: width $\approx 4\sigma$
- Apply $\sigma$ annealing at the initial steps of the iteration
- Stopping: $|\omega_{t+1} - \omega_{t}| < \epsilon$

### 2.1 Matched pairs within an initial search radius

#### Find all pairs that match within _radius_ (arcsec).

```python
# Initial search radius, approximately 1.1 times the maximum offset
radius = 1.1 * sim.getsep(co[0],co[1],"max")
print(f"search radius: {radius:.2f}")
print(f"{co[0].shape[0]} sources in input catalog and {co[1].shape[0]} sources in reference catalog")
```

### 2.2 Fast prototype of the robust iterative solver

Objective function:
$$
\tilde{\boldsymbol{\omega}}= \arg\min_{\boldsymbol{\omega}}\sum_{q}\,
\rho\left(\frac{ \left|\boldsymbol{\Delta}_{q}-\boldsymbol{\omega}\times\boldsymbol{r}_{q} \right|}{\sigma}\right)
$$

$$
\rho(x) = -\ln \left( \frac{\gamma_{*}}{2\pi\sigma^2}\ e^{-x^2/2} \,+\, \frac{1\!-\!\gamma_{*}}{\Omega} \right).
$$

- $\textbf{c}_q$: q-th calibrator direction
- $\textbf{r}_q$: q-th source direction of the image (to be corrected)
- $\boldsymbol{\Delta}_q = \textbf{c}_q - \textbf{r}_q$: separation between the q-th source-calibrator pair
- $\boldsymbol{\omega}$: 3-D rotation vector
- $\sigma$: astrometric uncertainty
- $\gamma$: probability of being a true association
- $\gamma_{*} = \frac{\min (N_1, N_2)}{N}$; N=total number of pairs, N1=number of sources in input catalog, N2=number of sources in reference catalog.
- $\Omega$: footprint area (steradians)

Solve for $\tilde{\boldsymbol{\omega}}$ using $A\tilde{\boldsymbol{\omega}} = \textbf{b}$ with
\begin{equation}
\begin{array}{ccc}
A =\displaystyle \sum_{q} \frac{w_{q}}{\sigma^{2}} \left(I-\boldsymbol{r}_{q}\!\otimes\boldsymbol{r}_{q}\right) & \textrm{and} & b = \displaystyle \sum_{q} \frac{w_{q}}{\sigma^{2}} \left(\boldsymbol{r}_{q}\!\times\boldsymbol{c}_{q}\right)
\end{array}
\end{equation}

\begin{equation*}
\begin{aligned}
\text{Weight function: } w_q=W(x) &= \frac{\rho'(x)}{x} = \frac{\alpha e^{-x^2/2}}{\alpha e^{-x^2/2}+1}\\
\alpha &= \frac{\Omega}{2 \pi \sigma^2} \frac{\gamma_{*}}{(1-\gamma_{*})}
\end{aligned}
\end{equation*}

**Robust solver output:**
- omega: estimated rotation vector
- pairs: pairs in the optimal ring
- weights: robust weights $w_q$ of those pairs

**Input parameters** <br>
- area: image area in steradians (side length 202 arcsec)
- radius: initial search radius
- sigma: actual astrometric uncertainty of the catalog, 0.04 arcsec
- gamma: fraction of true pairs (unknown, approximate)
- ringwidth: assign ring width to 0.2 arcsec (empirical value)
- sigma_init: initial (annealed) sigma, set to 4 arcsec in the call below
- niter: minimum number of iterations for convergence, 50 in the call below
- nextr: maximum additional number of iterations = 100
- mid: use midpoints of the two catalogs as reference

**Estimate Omega using Robust Ring Algorithm Version2**

The Version-2 algorithm divides all pairs into rings with equal numbers of pairs and performs the estimation in each ring. In this version, $\gamma$ is taken as the global value
$$\gamma_* = \frac{\min(N_0,N_1)}{N_{pairs}}.$$

```python
t0=time.time()

# image area in steradians
area=(202/arc2rad)**2

# estimate omega, and obtain pairs in the optimal ring
bestomega, bestpairs, bestwts = est.robust_ring(co[0], co[1], area, radius, sigma=0.04,
                                                sigma_init=4, niter=50, nextr=100,
                                                mid=True, printerror=False)

print("Total {:.3f} seconds to complete estimation".format(time.time()-t0))
```

### 2.3 Solve for $\boldsymbol{\omega}$ using the least-squares algorithm

Apply the least-squares method on pairs in the optimal ring.
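Both the robust solver and the least-squares method solve the same $3\times 3$ normal equations $A\tilde{\boldsymbol{\omega}} = \textbf{b}$ from Section 2.2; for least squares the weights are simply $w_q \equiv 1$. As a rough sketch of that linear solve (assuming `r` and `c` are $(n, 3)$ arrays of unit vectors for the matched pairs; this is not the `est` module's actual code):

```python
import numpy as np

def solve_omega(r, c, sigma, weights=None):
    """Solve A omega = b for the rotation vector (a sketch, not est's code).

    r, c: (n, 3) unit vectors of sources and calibrators.
    weights=None reproduces the plain least-squares case (w_q = 1)."""
    w = np.ones(len(r)) if weights is None else weights
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for wq, rq, cq in zip(w, r, c):
        A += wq / sigma**2 * (np.eye(3) - np.outer(rq, rq))  # I - r (x) r
        b += wq / sigma**2 * np.cross(rq, cq)                # r x c
    return np.linalg.solve(A, b)
```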
<font color="red">Note this is really optimistic for the L2 method since it has no way to determine the best ring.</font>

```python
L2omega = est.L2_est(co[0], co[1], bestpairs, sigma=0.04)
```

### 2.4 Plot Catalogs

```python
cat_cor_rob = [sim.trf(co[0],bestomega), sim.trf(co[1],-bestomega)]
cat_cor_L2 = [sim.trf(co[0],L2omega), sim.trf(co[1],-L2omega)]
```

```python
fig=plt.figure(figsize=(20,6))

fig.add_subplot(131)
plt.scatter(co[0].x*arc2rad, co[0].y*arc2rad, s=10, alpha=0.3)
plt.scatter(co[1].x*arc2rad, co[1].y*arc2rad, s=10, alpha=0.3)
plt.xlim(-50,250)
plt.ylim(-50,250)
plt.title("ORIGINAL")

fig.add_subplot(132)
plt.scatter(cat_cor_rob[0].x*arc2rad, cat_cor_rob[0].y*arc2rad, s=10, alpha=0.4)
plt.scatter(cat_cor_rob[1].x*arc2rad, cat_cor_rob[1].y*arc2rad, s=10, alpha=0.4)
plt.title("ROBUST METHOD")

print("Initial average offset: {:.3f} arcsec"
      .format(sim.getsep(co[0],co[1],"mean")))
print("Average offset after correction - ROBUST: {:.3f} arcsec"
      .format(sim.getsep(cat_cor_rob[0], cat_cor_rob[1],"mean")))

fig.add_subplot(133)
plt.scatter(cat_cor_L2[0].x*arc2rad, cat_cor_L2[0].y*arc2rad, s=10, alpha=0.4)
plt.scatter(cat_cor_L2[1].x*arc2rad, cat_cor_L2[1].y*arc2rad, s=10, alpha=0.4)
plt.title("L2 METHOD")

print("Average offset after correction - L2: {:.3f} arcsec"
      .format(sim.getsep(cat_cor_L2[0],cat_cor_L2[1],"mean")))

plt.show()
```

## Part 2. The HLA/Gaia Catalogs Cross-registration

**Adapted from 3/29/2019 - Rick White**

## 1. Read Data

### 1.1 Set parameters for a visit

- `imagename` = name of HLA dataset
- `radius` = maximum shift radius to search (arcsec)
- `requirePM` = True to require Gaia proper motions (must be False for some cluster fields)
- `limitGaia` = True to restrict the number of Gaia sources to ~200
- `flagcut` = maximum flag value to include in HLA catalog (5=all, 1=unsat, 0=stellar unsat)

```python
# list of test images
# imagename = 'hst_9984_nl_acs_wfc'  # far north with rotation
# imagename = 'hst_9984_ni_acs_wfc'  # far north with rotation
imagename = 'hst_11664_22_wfc3_uvis' # big 90" shift
# imagename = 'hst_10775_a7_acs_wfc' # challenging image with large catalogs

radius = 120.0
requirePM = True
limitGaia = False
flagcut = 5
```

### 1.2 Read the HLA catalog for a dataset

This also applies a magnitude cut to keep only sources brighter than magnitude 22 that might match Gaia sources.
```python # save some time if we already have the correct catalog # use cache to store results so repeated queries are fast if 'current_imagename' in locals() and current_imagename == imagename: print('Already read catalog for',imagename) else: current_imagename = None imacat = rcat.getmultiwave(imagename) current_imagename = imagename print("Read {} sources for {}".format(len(imacat),imagename)) # use only objects brighter than mag 22 # select brightest of all mags magcols = [] flagcols = [] for col in imacat.colnames: if col.endswith('magauto'): magcols.append(col) elif col.endswith('_flags'): flagcols.append(col) if not magcols: raise ValueError("No magnitude columns found in catalog") if len(magcols) != len(flagcols): raise ValueError("Mismatch between magcols [{}] and flags [{}]?".format( len(magcols),len(flagcols))) print("Magnitudes {}".format(" ".join(magcols))) mags = imacat[magcols[0]] for col in magcols[1:]: mags = np.minimum(mags,imacat[col]) # mags = np.maximum(mags,cat[col]) flags = imacat[flagcols[0]] for col in flagcols[1:]: flags = np.minimum(flags,imacat[col]) ``` ### 1.3 Read the Gaia catalog with padding to allow for large shifts The Gaia search box is expanded by 2 arcmin on all sides to allow for the possibility of a shift that large. ```python mdec = imacat['dec'].mean() mra = imacat['ra'].mean() cdec = np.cos(rcat.d2r*mdec) # always pad using search radius 120.0 so we can reuse the result gradius = max(60.0,radius) # pad by 1.1*search radius on each side pad = 1.1*gradius/3600.0 rpad = pad/cdec ramin = imacat['ra'].min() - rpad ramax = imacat['ra'].max() + rpad decmin = imacat['dec'].min() - pad decmax = imacat['dec'].max() + pad new_params = (ramin,ramax,decmin,decmax) if 'gcat_params' in locals() and gcat_params == new_params: print('Already read Gaia catalog for {} ({} sources)'.format(imagename,len(gcat_all))) else: gcat_params = None gcat_all = rcat.gaiaquery(ramin,decmin,ramax,decmax) gcat_params = new_params print("Read {} Gaia sources".format(len(gcat_all))) gcat = gcat_all # compute ratio of area covered by data to extended area area_rat = (ramax-ramin-2*rpad)*(decmax-decmin-2*pad)/((ramax-ramin)*(decmax-decmin)) if requirePM: # keep only objects with proper motions gcat = gcat[~gcat['pmra'].mask] print("Keeping {} Gaia sources with measured PMs".format(len(gcat))) # apply proper motions epoch_yr = rcat.getepoch(imagename) # make reference epoch a scalar if possible ref_epoch = gcat['ref_epoch'] if (ref_epoch == ref_epoch.mean()).all(): ref_epoch = ref_epoch[0] dt = epoch_yr-ref_epoch print("Updating gcat for {:.1f} yrs of PM".format(-dt)) # PM fields are in mas/yr gcat.ra = gcat['ra'] + gcat['pmra']*(dt/(3600.0e3*np.cos(rcat.d2r*gcat['dec']))) gcat.dec = gcat['dec'] + gcat['pmdec']*(dt/3600.0e3) else: print("No Gaia PMs are used, all Gaia sources are retained") # if number of Gaia sources is large, select just a subset of the fainter sources # aim for about 200 sources within the field if limitGaia: ngmax = int(round(200/area_rat)) if len(gcat) > ngmax: print("Clipping to faintest",ngmax,"Gaia sources") gcat.sort('phot_g_mean_mag') gcat = gcat[-ngmax:] gcat[:5] ``` ### 1.4 Restrict the HLA catalog to sources close to the Gaia magnitude limit For a typical Gaia field, the magnitude cut is about 22. Some Gaia fields have a much brighter limit, which raises the magnitude cut. <br/> This also applies a cut on flags if flagcut is set. 
```python gmaglim = gcat['phot_g_mean_mag'].max() magcut = min(gmaglim + 1.2, 22.0) print('Gaia mag limit {:.3f} -> HLA magnitude cut {}'.format(gmaglim,magcut)) # forcing this cut to see how this affects wider radius searches if (mags <= magcut).sum() > 1000: magcut = 17.0 print('XXX Forcing HLA magnitude cut {} XXX'.format(magcut)) wcut = np.where((mags <= magcut) & (flags <= flagcut)) bcat = imacat[wcut] bmags = mags[wcut] bflags = flags[wcut] print("{} sources left after cut at mag {}, flags <= {}".format(len(bcat),magcut,flagcut)) bcat[:5] ``` ### 1.5 Plot positions on sky ```python plt.rcParams.update({'font.size':14}) plt.figure(figsize=(8,8)) plt.plot(bcat['ra'],bcat['dec'],'ro',alpha=0.3,markersize=4,label='HLA') plt.plot(gcat['ra'],gcat['dec'],'bo',alpha=0.3,markersize=4,label='Gaia') plt.xlabel('RA [deg]') plt.ylabel('Dec [deg]') plt.title(imagename) plt.legend(loc=3); ``` ## 2. Matched pairs within an initial search radius #### Find all pairs that match within _radius_ arcsec. ```python ## Convert positions to Cartesian xyz coordinates # a = catalog to shift (the HLA catalog) # b = reference catalog (Gaia) a = rcat.cat2xyz(bcat) b = rcat.cat2xyz(gcat) print(f"{a.shape[0]:d} sources in HLA and {b.shape[0]:d} sources in Gaia") ``` ## 3. Use robust solver to estimate rotation of the HLA catalog to the reference **Input parameters** <br> - area: side length of the image is 202 arcsec - sigma: astrometric uncertainly is 0.02 arcsec - sigma_init: assign an initial sigma to 1 arcsec - gamma: fraction of true pairs (unknown, approximate) - niter: minimum number of iterations for convergence, 10 - nextr: maximum number of additional iterations, 100 - ringwidth: 0.3 arcsec ring width - mid: False, reference is the Gaia catalog ```python t0=time.time() area = (202/arc2rad)**2 radius = 120 print(f"Match to radius {radius} arcsec") omega_HLA, pairs_HLA, wts_HLA = est.robust_ring(a, b, area, radius, sigma=0.01, sigma_init=1, niter=10, nextr=100, mid=False, printerror=False) print("Total {:.3f} seconds to complete estimation".format(time.time()-t0)) ``` ## 4. Plot to show catalog separation before and after correction The top two panels show a zoomed-out view (over a region +- 5 arcsec) while the bottom two are zoomed in (over a region +- 0.12 arcsec). The left plot shows the original distribution (note it is centered far from zero) while the right is after applying the correction from the robust match (centered on zero). Note that the scale is identical in the left and right panels. Points with weights $w_q > 0.5$ are shown in red. Those are the "true" matches. 
```python # Sort pairs for plot sep = np.sqrt(((a[pairs_HLA[:,0]]-b[pairs_HLA[:,1]])**2).sum(axis=1))*arc2rad ind = np.argsort(sep) sep = sep[ind] pairs_HLA = pairs_HLA[ind,:] wts_HLA = wts_HLA[ind] ``` ```python wmatch = np.where(wts_HLA>0.5)[0] print('total pairs =', len(wts_HLA), 'sum wts =', wts_HLA.sum(), 'matched pairs =', len(wmatch)) print(a.shape, b.shape) # first use only good pairs to get limits for plot p0 = pairs_HLA[wmatch,0] p1 = pairs_HLA[wmatch,1] ra1 = bcat['ra'][p0] dec1 = bcat['dec'][p0] gra = gcat['ra'][p1] gdec = gcat['dec'][p1] rr = rcat.radec2xyz(ra1,dec1) ra2, dec2 = rcat.xyz2radec(rr + np.cross(omega_HLA,rr)) dra1, ddec1 = rcat.getdeltas(ra1,dec1,gra,gdec) dra2, ddec2 = rcat.getdeltas(ra2,dec2,gra,gdec) # center shifted plot at zero and use the same range in arcsec for both plots xcen1 = np.ma.median(dra1) ycen1 = np.ma.median(ddec1) xcen2 = 0.0 ycen2 = 0.0 # plot both good pairs and bad pairs near the match p0 = pairs_HLA[:,0] p1 = pairs_HLA[:,1] ra1 = bcat['ra'][p0] dec1 = bcat['dec'][p0] gra = gcat['ra'][p1] gdec = gcat['dec'][p1] rr = rcat.radec2xyz(ra1,dec1) ra2, dec2 = rcat.xyz2radec(rr + np.cross(omega_HLA,rr)) dra1, ddec1 = rcat.getdeltas(ra1,dec1,gra,gdec) dra2, ddec2 = rcat.getdeltas(ra2,dec2,gra,gdec) # transparency for box around legend framealpha = 0.95 plt.figure(1,(12,12)) xsize = 5.0 xlims1 = (xcen1-xsize, xcen1+xsize) ylims1 = (ycen1-xsize, ycen1+xsize) xlims2 = (xcen2-xsize, xcen2+xsize) ylims2 = (ycen2-xsize, ycen2+xsize) # points to plot wp = np.where( ((dra1>=xlims1[0]) & (dra1<=xlims1[1]) & (ddec1>=ylims1[0]) & (ddec1<=ylims1[1])) | ((dra2>=xlims2[0]) & (dra2<=xlims2[1]) & (ddec2>=ylims2[0]) & (ddec2<=ylims2[1])) )[0] wgood = wp[wts_HLA[wp]>=0.5] wbad = wp[wts_HLA[wp]<0.5] print("{} good points {} bad points".format(len(wgood),len(wbad))) plt.subplot(221) plt.plot(dra1[wbad], ddec1[wbad], 'ko', markersize=2, label='original') plt.plot(dra1[wgood], ddec1[wgood], 'ro', markersize=2, label=r'$w_q \geq 0.5$') plt.ylabel('$\Delta$Dec [arcsec]') plt.xlabel('$\Delta$RA [arcsec]') plt.plot(xlims1,[0,0], 'g-', linewidth=0.5) plt.plot([0,0], ylims1, 'g-', linewidth=0.5) plt.xlim(xlims1) plt.ylim(ylims1) plt.legend(loc='upper left',framealpha=framealpha) plt.subplot(222) plt.plot(dra2[wbad], ddec2[wbad], 'ko', markersize=2, label='robust') plt.plot(dra2[wgood], ddec2[wgood], 'ro', markersize=2, label=r'$w_q \geq 0.5$') plt.xlabel('$\Delta$RA [arcsec]') plt.plot(xlims2,[0,0], 'g-', linewidth=0.5) plt.plot([0,0], ylims2, 'g-', linewidth=0.5) plt.xlim(xlims2) plt.ylim(ylims2) plt.legend(loc='upper left',framealpha=framealpha) xsize = 0.12 xlims1 = (xcen1-xsize, xcen1+xsize) ylims1 = (ycen1-xsize, ycen1+xsize) xlims2 = (xcen2-xsize, xcen2+xsize) ylims2 = (ycen2-xsize, ycen2+xsize) # points to plot wp = np.where( ((dra1>=xlims1[0]) & (dra1<=xlims1[1]) & (ddec1>=ylims1[0]) & (ddec1<=ylims1[1])) | ((dra2>=xlims2[0]) & (dra2<=xlims2[1]) & (ddec2>=ylims2[0]) & (ddec2<=ylims2[1])) )[0] wgood = wp[wts_HLA[wp]>=0.5] wbad = wp[wts_HLA[wp]<0.5] print("{} good points {} bad points".format(len(wgood),len(wbad))) plt.subplot(223) plt.plot(dra1[wgood], ddec1[wgood], 'ro', markersize=2, label='original') plt.plot(dra1[wbad], ddec1[wbad], 'ko', markersize=2) plt.ylabel('$\Delta$Dec [arcsec]') plt.xlabel('$\Delta$RA [arcsec]') plt.xlim(xlims1) plt.ylim(ylims1) plt.legend(loc='upper left',framealpha=framealpha) plt.subplot(224) plt.plot(dra2[wgood], ddec2[wgood], 'ro', markersize=2, label='robust') plt.plot(dra2[wbad], ddec2[wbad], 'ko', markersize=2) plt.xlabel('$\Delta$RA 
[arcsec]')
plt.plot(xlims2,[0,0], 'g-', linewidth=0.5)
plt.plot([0,0], ylims2, 'g-', linewidth=0.5)
plt.xlim(xlims2)
plt.ylim(ylims2)
plt.legend(loc='upper left',framealpha=framealpha,
    title='rms = {:.0f} mas'.format(
        1000*np.sqrt((dra2[wgood]**2+ddec2[wgood]**2).mean())));
```

Note in the above example that not only was a large shift corrected, but a small rotation was also corrected. That is why the point distribution is much tighter after the correction has been applied.

```python

```
6a25f94a79e2315021f3a0f8975313221eac03d4
28,727
ipynb
Jupyter Notebook
demo_robust_registration.ipynb
rlwastro/robust-registration
4289c9c725ad29561bcb7ec374bc98e5c02df5f4
[ "BSD-3-Clause" ]
2
2020-02-18T17:43:24.000Z
2021-02-02T12:55:18.000Z
demo_robust_registration.ipynb
rlwastro/robust-registration
4289c9c725ad29561bcb7ec374bc98e5c02df5f4
[ "BSD-3-Clause" ]
null
null
null
demo_robust_registration.ipynb
rlwastro/robust-registration
4289c9c725ad29561bcb7ec374bc98e5c02df5f4
[ "BSD-3-Clause" ]
null
null
null
35.641439
383
0.564103
true
6,440
Qwen/Qwen-72B
1. YES 2. YES
0.828939
0.715424
0.593043
__label__eng_Latn
0.782949
0.216167
# Task 3: Expectation values

There are two main quantities that we wish to compute for the ground state in this project. They are the full many-body ground state energy $E$ from the general Hartree-Fock method, and the particle density (also known as the one-body density and the electron density) $\rho(x)$.

## The general Hartree-Fock energy

We wish to compute the ground state energy from the Hartree-Fock ansatz, $| \Psi \rangle = |\phi_1\dots\phi_n\rangle$. This is done by
\begin{align}
E &= \langle \Psi | \hat{H} | \Psi \rangle = \sum_{i = 1}^{n} \langle \phi_i | \hat{h} | \phi_i \rangle + \frac{1}{2} \sum_{i, j = 1}^{n} \langle \phi_i \phi_j | \hat{u} | \phi_i \phi_j \rangle_{AS},
\end{align}
where the molecular orbitals are the optimized Hartree-Fock orbitals. Inserting the basis expansion, the energy can be found as a function of the coefficient matrices and the atomic orbital matrix elements.

## The particle density

The particle density can be computed from the single-particle functions evaluated on a grid, and the one-body density matrix. We have already found the one-body density matrix when constructing the Fock matrix; it is given by $D_{\nu \mu} = \sum_{i = 1}^{n} C^{*}_{\mu i} C_{\nu i}$. We can then compute the particle density from
\begin{align}
\rho(x) = \sum_{\mu, \nu = 1}^{l} \psi^{*}_{\mu}(x) D_{\nu \mu} \psi_{\nu}(x)
\end{align}
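To make the formulas concrete, here is a small NumPy sketch of both quantities. All names are assumptions for illustration (nothing in this note defines them): `h` is the $(l, l)$ one-body AO matrix, `u` the $(l, l, l, l)$ antisymmetrized two-body AO tensor $\langle \mu \nu | \hat{u} | \lambda \sigma \rangle_{AS}$, `C` the optimized coefficient matrix, `spf` the $(l, n_{\text{grid}})$ array of atomic orbitals on a grid, and `n` the number of occupied orbitals:

```python
import numpy as np

def density_matrix(C, n):
    """D_{nu mu} = sum_i C*_{mu i} C_{nu i} over the n occupied orbitals."""
    return np.einsum("mi,ni->nm", C[:, :n].conj(), C[:, :n])

def hf_energy(h, u, C, n):
    """General Hartree-Fock energy from AO matrix elements (a sketch)."""
    D = density_matrix(C, n)
    e_one = np.einsum("nm,mn->", D, h)                 # sum_i <phi_i|h|phi_i>
    e_two = 0.5 * np.einsum("lm,sn,mnls->", D, D, u)   # 1/2 sum_ij <ij|u|ij>_AS
    return e_one + e_two

def particle_density(spf, C, n):
    """rho(x) = sum_{mu,nu} psi*_mu(x) D_{nu mu} psi_nu(x) (a sketch)."""
    D = density_matrix(C, n)
    return np.einsum("mx,nm,nx->x", spf.conj(), D, spf)
```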
8ef50441ea13b5f399ebd58670731db0fc49c2a4
2,377
ipynb
Jupyter Notebook
docs/task-3-expectation-values.ipynb
Schoyen/tdhf-project-fys4411
b0231c0d759382c14257cc4572698aa80c1c94d0
[ "MIT" ]
1
2021-06-03T00:34:57.000Z
2021-06-03T00:34:57.000Z
docs/task-3-expectation-values.ipynb
Schoyen/tdhf-project-fys4411
b0231c0d759382c14257cc4572698aa80c1c94d0
[ "MIT" ]
null
null
null
docs/task-3-expectation-values.ipynb
Schoyen/tdhf-project-fys4411
b0231c0d759382c14257cc4572698aa80c1c94d0
[ "MIT" ]
null
null
null
33.478873
194
0.574253
true
418
Qwen/Qwen-72B
1. YES 2. YES
0.917303
0.757794
0.695127
__label__eng_Latn
0.994519
0.453344
# Lecture 9: Expectation, Indicator Random Variables, Linearity

## More on Cumulative Distribution Functions

A CDF, $F(x) = P(X \le x)$, viewed as a function of real $x$, accumulates probabilities that are

* non-negative
* add up to 1

In the discrete case, the CDF is a step function whose jumps are exactly the PMF values, which makes it easy to see how the probability mass function (PMF) relates to the CDF. Therefore, you can compute any probability given a CDF.

_Ex. Find $P(1 \lt x \le 3)$ using CDF $F$._

\begin{align}
  & &P(x \le 1) + P(1 \lt x \le 3) &= P(x \le 3) \\
  & &\Rightarrow P(1 \lt x \le 3) &= F(3) - F(1)
\end{align}

Note that while we don't need to be so strict in the __continuous case__, for the discrete case you need to be careful about the $\lt$ and $\le$.

### Properties of CDF

A function $F$ is a CDF __iff__ the following three conditions are satisfied.

1. increasing
1. right-continuous (function is continuous as you _approach a point from the right_)
1. $F(x) \rightarrow 0 \text{ as } x \rightarrow - \infty$, and $F(x) \rightarrow 1 \text{ as } x \rightarrow \infty$.

----

## Independence of Random Variables

$X, Y$ are independent r.v. if

\begin{align}
  \underbrace{P(X \le x, Y \le y)}_{\text{joint CDF}} &= P(X \le x) P(Y \le y) & &\text{ for all x, y in the continuous case} \\
  \\
  \underbrace{P(X=x, Y=y)}_{\text{joint PMF}} &= P(X=x) P(Y=y) & &\text{ for all x, y in the discrete case}
\end{align}

----

## Averages of Random Variables (mean, Expected Value)

A mean is... well, the _average of a sequence of values_.

\begin{align}
  1, 2, 3, 4, 5, 6 \rightarrow \frac{1+2+3+4+5+6}{6} = 3.5
\end{align}

In the case where there is repetition in the sequence

\begin{align}
  1,1,1,1,1,3,3,5 \rightarrow & \frac{1+1+1+1+1+3+3+5}{8} \\
  \\
  & \dots \text{ or } \dots \\
  \\
  & \frac{5}{8} ~~ 1 + \frac{2}{8} ~~ 3 + \frac{1}{8} ~~ 5 & &\quad \text{ ... weighted average}
\end{align}

where the weights are the frequency (fraction) of the unique elements in the sequence, and these weights add up to 1.

### Expected value of a discrete r.v. $X$

\begin{align}
  \mathbb{E}(X) = \sum_{x} \underbrace{x}_{\text{value}} ~~ \underbrace{P(X=x)}_{\text{PMF}} ~& &\quad \text{ ... summed over x with } P(X=x) \gt 0
\end{align}

### Expected value of $X \sim Bern(p)$

\begin{align}
  \text{Let } X &\sim Bern(p) \\
  \mathbb{E}(X) &= \sum_{k=0}^{1} k P(X=k) \\
  &= 1 ~~ P(X=1) + 0 ~~ P(X=0) \\
  &= p
\end{align}

### Expected value of an Indicator Variable

\begin{align}
  X &=
  \begin{cases}
    1, &\text{ if A occurs} \\
    0, &\text{ otherwise }
  \end{cases} \\
  \\
  \therefore \mathbb{E}(X) &= P(A)
\end{align}

Notice how this lets us relate (bridge) the expected value $\mathbb{E}(X)$ with a probability $P(A)$.

#### Average of $X \sim Bin(n,p)$

There is a hard way to do this, and an easy way. First the hard way:

\begin{align}
  \mathbb{E}(X) &= \sum_{k=0}^{n} k \binom{n}{k} p^k (1-p)^{n-k} \\
  &= \sum_{k=1}^{n} n \binom{n-1}{k-1} p^k (1-p)^{n-k} & &\text{using } k \binom{n}{k} = n \binom{n-1}{k-1} \text{; from Lecture 2, Story proofs, ex. 2, choosing a team and president} \\
  &= np \sum_{k=1}^{n} \binom{n-1}{k-1} p^{k-1} (1-p)^{n-k} \\
  &= np \sum_{j=0}^{n-1} \binom{n-1}{j} p^j(1-p)^{n-1-j} & &\text{letting } j=k-1 \text{, which sets us up to use the Binomial Theorem} \\
  &= np
\end{align}

Now, what about the _easy way_?

----

## Linearity of Expected Values

Linearity is this:

\begin{align}
  \mathbb{E}(X+Y) &= \mathbb{E}(X) + \mathbb{E}(Y) & &\quad \text{even if X and Y are dependent}\\
  \\
  \mathbb{E}(cX) &= c \mathbb{E}(X)\\
\end{align}

### Expected value of Binomial r.v. using Linearity

Let $X \sim Bin(n,p)$. The easy way to calculate the expected value of a binomial r.v. follows.
Let $X = X_1 + X_2 + \dots + X_n$ where $X_j \sim Bern(p)$.

\begin{align}
  \mathbb{E}(X) &= \mathbb{E}(X_1 + X_2 + \dots + X_n) \\
  \mathbb{E}(X) &= \mathbb{E}(X_1) + \mathbb{E}(X_2) + \dots + \mathbb{E}(X_n) & &\quad \text{by Linearity}\\
  \mathbb{E}(X) &= n \mathbb{E}(X_1) & &\quad \text{by symmetry}\\
  \mathbb{E}(X) &= np
\end{align}

### Expected value of Hypergeometric r.v.

Ex. 5-card hand $X=(\# aces)$. Let $X_j$ be the indicator that the $j^{th}$ card is an ace.

\begin{align}
  \mathbb{E}(X) &= \mathbb{E}(X_1 + X_2 + X_3 + X_4 + X_5) \\
  &= \mathbb{E}(X_1) + \mathbb{E}(X_2) + \mathbb{E}(X_3) + \mathbb{E}(X_4) + \mathbb{E}(X_5) & &\quad \text{by Linearity} \\
  &= 5 ~~ \mathbb{E}(X_1) & &\quad \text{by symmetry} \\
  &= 5 ~~ P(1^{st} \text{ card is ace}) & &\quad \text{by the Fundamental Bridge}\\
  &= 5 \cdot \frac{4}{52} = \boxed{\frac{5}{13}}
\end{align}

Note that when we use linearity in this case, the individual probabilities are _weakly dependent_, in that the probability of getting an ace decreases slightly; and that if you already have four aces, then the fifth card cannot possibly be an ace. But using linearity, we can nevertheless quickly and easily compute $\mathbb{E}(X_1 + X_2 + X_3 + X_4 + X_5)$.

----

## Geometric Distribution

### Description

The Geometric distribution comprises a series of independent $Bern(p)$ trials where we count the number of failures before the first success.

### Notation

$X \sim Geom(p)$.

### Parameters

$0 < p < 1 \text{, } p \in \mathbb{R}$

### Probability mass function

Consider the event $A$ where there are 5 failures before the first success. We could notate this event $A$ as $\text{FFFFFS}$, where $F$ denotes failure and $S$ denotes the first success. Note that this string **must** end with a success. So, $P(A) = q^5p$, where $q = 1 - p$.

And from just this, we can derive the PMF for a geometric r.v.

\begin{align}
  P(X=k) &= pq^k \text{, } k \in \{0, 1, 2, \dots \} \\
  \\
  \sum_{k=0}^{\infty} p q^k &= p \sum_{k=0}^{\infty} q^k \\
  &= p ~~ \frac{1}{1-q} & &\quad \text{by the geometric series where } |q| < 1 \\
  &= \frac{p}{p} \\
  &= 1 & &\quad \therefore \text{ this is a valid PMF}
\end{align}

### Expected value

So, the hard way to calculate the expected value $\mathbb{E}(X)$ of a $Geom(p)$ is

\begin{align}
  \mathbb{E}(X) &= \sum_{k=0}^{\infty} k p q^k \\
  &= p \sum_{k=0}^{\infty} k q^k \\
  \\
  \\
  \text{ now ... } \sum_{k=0}^{\infty} q^k &= \frac{1}{1-q} & &\quad \text{by the geometric series where } |q| < 1 \\
  \sum_{k=0}^{\infty} k q^{k-1} &= \frac{1}{(1-q)^2} & &\quad \text{by differentiating with respect to } q \\
  \sum_{k=0}^{\infty} k q^{k} &= \frac{q}{(1-q)^2} \\
  &= \frac{q}{p^2} \\
  \\
  \\
  \text{ and returning, we have ... } \mathbb{E}(X) &= p ~~ \frac{q}{p^2} \\
  &= \frac{q}{p} & &\quad \blacksquare
\end{align}

And here is the story proof, without using the geometric series and derivatives:

Again, we are considering a series of independent Bernoulli trials with probability of success $p$, and we are counting the number of failures before getting the first success. Similar to doing a first step analysis in the case of the Gambler's Ruin, we look at the first case, where we either:

* get a heads (success) on the very first try, meaning 0 failures
* or we get 1 failure, but we start the process all over again

Remember that in the case of a coin flip, the coin has no memory. Let $c=\mathbb{E}(X)$.

\begin{align}
  c &= 0 ~~ p + (1 + c) ~~ q \\
  &= q + qc \\
  \\
  c - cq &= q \\
  c (1 - q) &= q \\
  c &= \frac{q}{1-q} \\
  &= \frac{q}{p} & &\quad \blacksquare
\end{align}

----
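As a quick numerical sanity check of $\mathbb{E}(X) = \frac{q}{p}$, we can simulate. Note that NumPy's geometric sampler counts the number of trials up to and including the first success, so we subtract 1 to count failures:

```python
import numpy as np

p = 0.3
rng = np.random.default_rng(0)
trials = rng.geometric(p, size=10**6)  # trials until first success, support {1, 2, ...}
failures = trials - 1                  # failures before the first success
print(failures.mean(), (1 - p) / p)    # both approximately q/p = 7/3
```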
3d9580db1f9988d418cc878a6d7d37994f2f2eb9
11,121
ipynb
Jupyter Notebook
Lecture_09.ipynb
dirtScrapper/Stats-110-master
a123692d039193a048ff92f5a7389e97e479eb7e
[ "BSD-3-Clause" ]
null
null
null
Lecture_09.ipynb
dirtScrapper/Stats-110-master
a123692d039193a048ff92f5a7389e97e479eb7e
[ "BSD-3-Clause" ]
null
null
null
Lecture_09.ipynb
dirtScrapper/Stats-110-master
a123692d039193a048ff92f5a7389e97e479eb7e
[ "BSD-3-Clause" ]
null
null
null
35.082019
368
0.491682
true
2,660
Qwen/Qwen-72B
1. YES 2. YES
0.658418
0.894789
0.589145
__label__eng_Latn
0.939055
0.207111
# The Harmonic Oscillator Strikes Back

*Note:* Much of this is adapted/copied from https://flothesof.github.io/harmonic-oscillator-three-methods-solution.html

This week we continue our adventures with the harmonic oscillator.

The harmonic oscillator is a system that, when displaced from its equilibrium position, experiences a restoring force F proportional to the displacement x:

$$F=-kx$$

The potential energy of this system is

$$V = {1 \over 2}k{x^2}$$

These are sometimes rewritten as

$$ F=- \omega_0^2 m x, \text{ } V(x) = {1 \over 2} m \omega_0^2 {x^2}$$

Where $\omega_0 = \sqrt {{k \over m}} $

If the equilibrium value of the harmonic oscillator is not zero, then

$$ F=- \omega_0^2 m (x-x_{eq}), \text{ } V(x) = {1 \over 2} m \omega_0^2 (x-x_{eq})^2$$

## 1. Harmonic oscillator from last time (with some better defined conditions)

Applying the harmonic oscillator force to Newton's second law leads to the following second order differential equation

$$ F = m a $$

$$ F= -m \omega_0^2 (x-x_{eq}) $$

$$ a = - \omega_0^2 (x-x_{eq}) $$

$$ x''(t) = - \omega_0^2 (x-x_{eq}) $$

The final expression can be rearranged into a second-order homogeneous differential equation and solved using standard methods. It is solved again below to remind you how we found these values.

```python
import sympy as sym
sym.init_printing()
```

**Note** that this time we define some of the properties of the symbols. Namely, that the frequency is always positive and real and that the positions are always real.

```python
omega0,t=sym.symbols("omega_0,t",positive=True,nonnegative=True,real=True)
xeq=sym.symbols("x_{eq}",real=True)
x=sym.Function("x",real=True)
x(t),omega0
```

```python
dfeq=sym.Derivative(x(t),t,2)+omega0**2*(x(t)-xeq)
dfeq
```

```python
sol = sym.dsolve(dfeq)
sol
```

```python
sol,sol.args[0],sol.args[1]
```

**Note** this time we define the initial positions and velocities as real.

```python
x0,v0=sym.symbols("x_0,v_0",real=True)
ics=[sym.Eq(sol.args[1].subs(t, 0), x0),
     sym.Eq(sol.args[1].diff(t).subs(t, 0), v0)]
ics
```

```python
solved_ics=sym.solve(ics)
solved_ics
```

### 1.1 Equation of motion for $x(t)$

```python
full_sol = sol.subs(solved_ics[0])
full_sol
```

### 1.2 Equation of motion for $p(t)$

```python
m=sym.symbols("m",positive=True,nonnegative=True,real=True)
p=sym.Function("p")
sym.Eq(p(t),m*sol.args[1].subs(solved_ics[0]).diff(t))
```

## 2. Time average values for a harmonic oscillator

If we want to understand the average value of a time dependent observable, we need to solve the following integral

$${\left\langle {A(t)} \right\rangle}_t = \lim_{\tau \to \infty}\frac{1}{\tau }\int_0^\tau A(t)\,dt $$

### 2.1 Average position ${\left\langle {x} \right\rangle}_t$ for a harmonic oscillator

```python
tau=sym.symbols("tau",nonnegative=True,real=True)
xfunc=full_sol.args[1]
xavet=(xfunc.integrate((t,0,tau))/tau).limit(tau,sym.oo)
xavet
```

The computer does not always make the best choices the first time. If you treat each term in the sum individually, this is not a hard limit to do by hand; the computer just needs a hint.
We can help it by inserting an `expand()` function in the statement

```python
xavet=(xfunc.integrate((t,0,tau))/tau).expand().limit(tau,sym.oo)
xavet
```

### 2.2 Exercise: Calculate the average momentum ${\left\langle {p} \right\rangle}_t$ for a harmonic oscillator

```python
tau=sym.symbols("tau",nonnegative=True,real=True)
p=sym.Eq(p(t),m*sol.args[1].subs(solved_ics[0]).diff(t))
pfunc=p.args[1]
pavet=(pfunc.integrate((t,0,tau))/tau).limit(tau,sym.oo)
pavet
```

### 2.3 Exercise: Calculate the average kinetic energy of a harmonic oscillator

```python
ke=pfunc**2/(2*m)   # kinetic energy p**2/(2m)
ke
```

```python
keavet=(ke.integrate((t,0,tau))/tau).limit(tau,sym.oo)
keavet
```

```python
keavet=(ke.integrate((t,0,tau))/tau).expand().limit(tau,sym.oo)
keavet
```

## 3. Ensemble (Thermodynamic) Average values for a harmonic oscillator

If we want to understand the thermodynamic ensemble average value of an observable, we need to solve the following integral.

$${\left\langle {A(t)} \right\rangle}_{T} = \frac{\int{A e^{-\beta H}dqdp}}{\int{e^{-\beta H}dqdp} } $$

You can think of this as a temperature average instead of a time average.

Here $\beta=\frac{1}{k_B T}$ and the classical Hamiltonian, $H$, is

$$ H = \frac{p^2}{2 m} + V(q)$$

**Note** that the factors of $1/h$ found in the classical partition function cancel out when calculating average values.

### 3.1 Average position ${\left\langle {x} \right\rangle}_T$ for a harmonic oscillator

For a harmonic oscillator with equilibrium value $x_{eq}$, the Hamiltonian is

$$ H = \frac{p^2}{2 m} + \frac{1}{2} m \omega_0^2 (x-x_{eq})^2 $$

First we will calculate the partition function $\int{e^{-\beta H}dqdp}$

```python
k,T=sym.symbols("k,T",positive=True,nonnegative=True,real=True)
xT,pT=sym.symbols("x_T,p_T",real=True)
ham=sym.Rational(1,2)*(pT)**2/m + sym.Rational(1,2)*m*omega0**2*(xT-xeq)**2
beta=1/(k*T)
bolz=sym.exp(-beta*ham)
z=sym.integrate(bolz,(xT,-sym.oo,sym.oo),(pT,-sym.oo,sym.oo))
z
```

Then we can calculate the numerator $\int{A e^{-\beta H}dqdp}$

```python
numx=sym.integrate(xT*bolz,(xT,-sym.oo,sym.oo),(pT,-sym.oo,sym.oo))
numx
```

And now the average value

```python
xaveT=numx/z
xaveT
```

### 3.2 Exercise: Calculate the average momentum ${\left\langle {p} \right\rangle}_T$ for a harmonic oscillator

After calculating the value, explain why you think you got this number.

```python
nump=sym.integrate(pT*bolz,(xT,-sym.oo,sym.oo),(pT,-sym.oo,sym.oo))
nump
```

```python
pavg=nump/z
pavg
```

### 3.3 Exercise: Calculate the average kinetic energy

The answer you get here is a well known result related to the energy equipartition theorem.

```python
KE=pT**2/(2*m)   # kinetic energy p**2/(2m)
numKE=sym.integrate(KE*bolz,(xT,-sym.oo,sym.oo),(pT,-sym.oo,sym.oo))
numKE
```

```python
KEavg=numKE/z
KEavg
```

# Back to the lecture

## 4. Exercise: Verlet integrators

In this exercise we will write a routine to solve the equations of motion for a harmonic oscillator. Plot the positions and momenta (separate plots) of the harmonic oscillator as functions of time.

Calculate trajectories using the following methods:
1. Exact solution
2. Simple Taylor series expansion
3. Predictor-corrector method
4. Verlet algorithm
5. Leapfrog algorithm
6. 
Velocity Verlet algorithm

```python
#Exact solution for x
fuller_sol = sym.simplify(full_sol.subs({x0:10, xeq:0 , v0:10, omega0:1}))
sym.plot(fuller_sol.rhs,(t,-10,10))
```

```python
#Exact solution for p
m=sym.symbols("m",positive=True,nonnegative=True,real=True)
p=sym.Function("p")
Momenta=sym.Eq(p(t),m*sol.args[1].subs(solved_ics[0]).diff(t))
Momentaa = sym.simplify(Momenta.subs({x0:10, xeq:0 , v0:10, omega0:1, m:1}))
sym.plot(Momentaa.rhs,(t,-10,10))
```

```python
#Simple Taylor series expansion (omega0 = 1, so a = -x)
import matplotlib.pyplot as plt

xt0=.1   # initial position
vt0=.1   # initial velocity
dt=.1    # time step
xlist=[]
for i in range(0,1000):
    a0=-xt0                          # acceleration at the current step
    xt=xt0+vt0*dt+(1/2)*dt**2*a0     # second-order Taylor step for x
    vt=vt0+dt*a0                     # first-order Taylor step for v
    xt0, vt0 = xt, vt
    xlist.append(xt)
plt.plot(xlist)
plt.grid(True)
plt.xlabel('step')
plt.ylabel('x')
```

```python
# Verlet algorithm for position
#r(t+dt)=2r(t)-r(t-dt)+dt**2*a
xt_prev=.1   # r(t-dt)
xt_curr=.1   # r(t)
dt=.1
xlist=[]
for i in range(0,100):
    xt_next=2*xt_curr-xt_prev+dt**2*(-xt_curr)   # a = -x for omega0 = 1
    xt_prev, xt_curr = xt_curr, xt_next
    xlist.append(xt_curr)
plt.plot(xlist)
plt.grid(True)
plt.xlabel('step')
plt.ylabel('x')
```

```python
# Verlet algorithm for momentum (m = 1, so p = v)
#v(t)=(r(t+dt)-r(t-dt))/(2dt)
vlist=[]
for i in range(1,len(xlist)-1):
    vlist.append((xlist[i+1]-xlist[i-1])/(2*dt))
plt.plot(vlist)
plt.grid(True)
plt.xlabel('step')
plt.ylabel('v')
```

```python
#Leapfrog for position
#v(t+1/2dt)=v(t-1/2dt)+dt*a(t)
#r(t+dt)=r(t)+dt*v(t+1/2dt)
xt1=1
dt=.2
vhalf=.2    # v(t-1/2dt)
xlist=[]
for i in range(0,100):
    vhalf=vhalf+dt*(-xt1)   # advance the half-step velocity
    xt1=xt1+dt*vhalf        # advance the position
    xlist.append(xt1)
plt.plot(xlist)
plt.grid(True)
plt.xlabel('step')
plt.ylabel('x')
```

```python
#Leapfrog for velocity
#v(t)=1/2(v(t+1/2dt)+v(t-1/2dt))
xt1=1
dt=.2
vhalf=.2    # v(t-1/2dt)
vlist=[]
for i in range(0,100):
    vnew=vhalf+dt*(-xt1)              # v(t+1/2dt)
    vlist.append((1/2)*(vhalf+vnew))  # full-step velocity
    xt1=xt1+dt*vnew
    vhalf=vnew
plt.plot(vlist)
plt.grid(True)
plt.xlabel('step')
plt.ylabel('v')
```

```python
#velocity verlet for position
#r(t+dt)=r(t)+dt*v(t)+1/2dt^2*a(t)
#v(t+dt)=v(t)+1/2dt(a(t)+a(t+dt))
dt=0.01
x1=0
v1=1
x=[]
for i in range(0,1000):
    x2=x1+v1*dt+1/2*(-x1)*dt**2    # position update uses a(t) = -x1
    v2=v1+1/2*((-x1)+(-x2))*dt     # velocity update averages old and new acceleration
    x1, v1 = x2, v2
    x.append(x2)
plt.plot(x)
plt.grid(True)
plt.xlabel('step')
plt.ylabel('x')
```

```python
#velocity verlet for velocity
#same update as above, but recording the velocities
dt=0.01
x1=1
v1=1
vell=[]
for i in range(0,1000):
    x2=x1+v1*dt+1/2*(-x1)*dt**2
    v2=v1+1/2*((-x1)+(-x2))*dt
    x1, v1 = x2, v2
    vell.append(v2)
plt.plot(vell)
plt.grid(True)
plt.xlabel('step')
plt.ylabel('v')
```
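A quick sanity check on the last integrator (an added sketch with assumed initial conditions $\omega_0 = 1$, $x(0)=1$, $v(0)=0$, so the exact solution is $\cos t$): the velocity-Verlet trajectory should track the exact solution closely.

```python
import numpy as np
dt = 0.01
n = 1000
x, v = 1.0, 0.0
xs = np.empty(n)
for i in range(n):
    x_new = x + v*dt + 0.5*(-x)*dt**2    # a(t) = -x for omega0 = 1
    v = v + 0.5*((-x) + (-x_new))*dt     # average old and new acceleration
    x = x_new
    xs[i] = x
t_grid = dt*np.arange(1, n+1)
print("max deviation from cos(t):", np.max(np.abs(xs - np.cos(t_grid))))
```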
0e85ae5f39c866fd984aaaf849e1b43f11bb7cfb
302,325
ipynb
Jupyter Notebook
harmonic_student.ipynb
sju-chem264-2019/new-10-14-10-Yekaterina25
d4c92231de3198e78affaa2e6bb1165d2cea20f1
[ "MIT" ]
null
null
null
harmonic_student.ipynb
sju-chem264-2019/new-10-14-10-Yekaterina25
d4c92231de3198e78affaa2e6bb1165d2cea20f1
[ "MIT" ]
null
null
null
harmonic_student.ipynb
sju-chem264-2019/new-10-14-10-Yekaterina25
d4c92231de3198e78affaa2e6bb1165d2cea20f1
[ "MIT" ]
null
null
null
225.784167
25,312
0.906591
true
3,357
Qwen/Qwen-72B
1. YES 2. YES
0.76908
0.743168
0.571556
__label__eng_Latn
0.697896
0.166246
```python
import modern_robotics as mr
import numpy as np
import sympy as sp
from sympy.physics.mechanics import dynamicsymbols, mechanics_printing
mechanics_printing()
from Utilities.symbolicFunctions import *
from Utilities.kukaKinematics import Slist, Mlist
```

# Task 3

## Task 3.2
### Develop and implement a solution for the analytic inverse kinematics for the Agilus robot.

```python
np.set_printoptions(precision=5)
np.set_printoptions(suppress=True)

#The zero config body frame used in the analytical solution. This is defined as the frame Me found in kukaKinematics.py.
M = np.array([[1, 0, 0, 980],   #End effector
              [0, 1, 0, 0],
              [0, 0, 1, 435],
              [0, 0, 0, 1]])
print("M: \n",M)

#Thetas_gen can be modified to generate a valid desired position T_sd
thetas_gen = np.array([0,0,0,0,0,0])
T_sd = mr.FKinSpace(M,Slist,thetas_gen)

# Finding the thetas. agilus_analytical_IK is defined in symbolicFunctions.py
thetas_up, thetas_down = agilus_analytical_IK(Slist, M, T_sd)

# Resulting end effector pose and Ps for elbow up:
T_up = mr.FKinSpace(M,Slist,thetas_up)
P_up = ps_from_Tsd(T_up)

# And for elbow down:
T_down = mr.FKinSpace(M,Slist,thetas_down)
P_down = ps_from_Tsd(T_down)

print("The points P reached by both solutions: ", P_down, P_up)
```

    M: 
     [[  1   0   0 980]
     [  0   1   0   0]
     [  0   0   1 435]
     [  0   0   0   1]]
    The points P reached by both solutions:  [900.   0. 435.   1.] [900.   0. 435.   1.]

## Task 3.3
### Confirm that the solution of the analytical inverse kinematics from the previous point agrees with the solution from a numerical inverse kinematics solver.

```python
#Numerical solution
thetas_num, found = mr.IKinSpace(Slist,M,T_sd,[0,0,1,0,0.5,0],0.001,0.01)
thetas_num_p = [0,0,0,0,0,0]   #the post processed solution

#Post process numerical angles to be [-pi, pi]
for i in range(0,6):
    thetas_num_p[i] = thetas_num[i]%(2*np.pi)
    if thetas_num_p[i]>np.pi:
        thetas_num_p[i] = thetas_num_p[i]-2*np.pi

T_num = mr.FKinSpace(M,Slist,thetas_num_p)

print("T_sd: \n",T_sd,'\n\n', "T_up: \n",T_up, '\n\n',"T_down: \n",T_down, "\n\nT_num: \n", T_num)
print('\nGenerating thetas:', thetas_gen,
      '\nElbow up: ', np.round(thetas_up,3),
      '\nElbow down: ', np.round(thetas_down,3),
      '\nNumerical: ', np.round(thetas_num_p,3), found, '\n')

print("\nElbow down: \n")
#apply_joint_lim is a function that finds out if a joint angle is outside the joint limits. Defined in symbolicFunctions.py:
print("Elbow down solution viable: ", apply_joint_lim(jointLimits, thetas_down), "\n")
print("\nElbow up: \n")
print("Elbow up solution viable: ", apply_joint_lim(jointLimits, thetas_up), "\n")
```

    T_sd: 
     [[  1.   0.   0. 980.]
     [  0.   1.   0.   0.]
     [  0.   0.   1. 435.]
     [  0.   0.   0.   1.]] 
    
     T_up: 
     [[  1.  -0.   0. 980.]
     [  0.   1.   0.   0.]
     [ -0.  -0.   1. 435.]
     [  0.   0.   0.   1.]] 
    
     T_down: 
     [[  1.   0.   0. 980.]
     [  0.   1.   0.   0.]
     [  0.   0.   1. 435.]
     [  0.   0.   0.   1.]] 
    
    T_num: 
     [[  1.        0.        0.      979.99945]
     [  0.        1.        0.       -0.     ]
     [ -0.        0.        1.      434.99998]
     [  0.        0.        0.        1.     ]]
    
    Generating thetas: [0 0 0 0 0 0] 
    Elbow up:  [ 0.    -0.08   0.166 -3.142  0.086 -3.142] 
    Elbow down:  [ 0. -0.  0.  0.  0.  0.] 
    Numerical:  [-0.  0. -0.  0.  0. -0.] True 
    
    Elbow down: 
    
    Elbow down solution viable:  True 
    
    Elbow up: 
    
    Elbow up solution viable:  True 

## Task 3.4
### Using the developed analytic inverse kinematics formulation, visualize the Agilus robot in both elbow-up and elbow-down configurations for the same end-effector pose. 
```python
from Utilities.RobotClass import *
```

```python
Kuka = Robot(Mlist, ['z', '-z', 'x', 'x', '-z','x'])   #Initializes kuka-robot object in zero-configuration
Kuka.draw_robot()
```

    WebVisualizer(window_uid='window_0')

```python
Kuka.transform(Slist, thetas_down)   # Elbow DOWN
pDown = np.array(Kuka.joints[5].coord.get_center())   #Get coordinates of {6} in elbow down
Kuka.draw_robot()

Kuka.transform(Slist, thetas_up)   # Elbow UP
pUp = np.array(Kuka.joints[5].coord.get_center(),dtype=float)   #Get coordinates of {6} in elbow up
Kuka.draw_robot()

print("Both elbow-up and elbow-down configurations yield the same end-effector position: ",pUp.round(4) == pDown.round(4))
```

    WebVisualizer(window_uid='window_1')

    WebVisualizer(window_uid='window_2')

    Both elbow-up and elbow-down configurations yield the same end-effector position:  [ True  True  True]

```python

```
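As an extra cross-check (an added sketch; `mr.FKinSpace` takes `(M, Slist, thetalist)` as used above), both analytic IK branches can be verified against the desired pose numerically:

```python
# Verify that both analytic IK branches reproduce T_sd
for name, thetas in [("elbow up", thetas_up), ("elbow down", thetas_down)]:
    T = mr.FKinSpace(M, Slist, thetas)
    print(name, "matches T_sd:", np.allclose(T, T_sd, atol=1e-6))
```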
9ee28400b01c5d641c06abcde49fe4b286109314
10,009
ipynb
Jupyter Notebook
Task_3.ipynb
BirkHveding/RobotTek
37f4ab0de6de9131239ff5d97e4b68a7091f291b
[ "Apache-2.0" ]
null
null
null
Task_3.ipynb
BirkHveding/RobotTek
37f4ab0de6de9131239ff5d97e4b68a7091f291b
[ "Apache-2.0" ]
null
null
null
Task_3.ipynb
BirkHveding/RobotTek
37f4ab0de6de9131239ff5d97e4b68a7091f291b
[ "Apache-2.0" ]
null
null
null
38.644788
1,209
0.569188
true
1,658
Qwen/Qwen-72B
1. YES 2. YES
0.800692
0.875787
0.701236
__label__eng_Latn
0.69428
0.467537
```python
from matplotlib import pyplot as plt
import numpy as np
from nodepy import runge_kutta_method as rk
from nodepy import stability_function
from sympy import symbols, expand
from scipy.special import laguerre
from ipywidgets import interact, FloatSlider
```

```python
def restricted_pade(k,gamma=1.0):
    # gamma must be nonzero, since 1./gamma appears below
    coeffs = []
    for m in range(k+1):
        coeffs.append((-1)**k*laguerre(k).deriv(k-m)(1./gamma)*gamma**m)
    numer = np.poly1d(coeffs[::-1])
    z = symbols('z')
    denom = (1-gamma*z)**k
    coeffs = [expand(denom).coeff(z,n) for n in range(k+1)]
    denom = np.poly1d(coeffs[::-1])
    return numer, denom
```

```python
p, q = restricted_pade(3,gamma=1.06857902)
stability_function.plot_order_star(p,q);
```

Here we recreate and elaborate on Figure 6 of the paper "Order stars and stability theorems" by Wanner, Hairer & Nørsett.

```python
L4 = laguerre(4).deriv()
1./L4.r
```

    array([0.1288864 , 0.30253458, 1.06857902])

```python
def hwnplot(k=3,gamma=1.):
    p, q = restricted_pade(k,gamma)
    stability_function.plot_order_star(p,q,bounds=[-4,4,-4,4]);
    plt.show()

interact(hwnplot, gamma=FloatSlider(min=0.12, max=1.07, step=0.004, value=1.07));
```

    interactive(children=(IntSlider(value=3, description='k', max=9, min=-3), FloatSlider(value=1.07, description=…

```python

```
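To connect the picture back to accuracy (an added sketch; the step sizes `h` are arbitrary), one can estimate the order of the restricted Padé approximation to $e^z$ by watching how fast the pointwise error shrinks as $h$ is halved:

```python
pnum, pden = restricted_pade(3, gamma=1.06857902)
for h in [0.1, 0.05, 0.025]:
    err = abs(pnum(h)/pden(h) - np.exp(h))
    print(f"h={h}: error={err:.3e}")
# successive error ratios near 2**(p+1) indicate approximation order p
```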
6740dea5f630c98d53cf625186724da5abd7225f
9,307
ipynb
Jupyter Notebook
examples/Stability functions and order stars.ipynb
logichen/nodepy
3d994caff078f142be1157162132a8788c6e8bb4
[ "BSD-2-Clause" ]
null
null
null
examples/Stability functions and order stars.ipynb
logichen/nodepy
3d994caff078f142be1157162132a8788c6e8bb4
[ "BSD-2-Clause" ]
null
null
null
examples/Stability functions and order stars.ipynb
logichen/nodepy
3d994caff078f142be1157162132a8788c6e8bb4
[ "BSD-2-Clause" ]
null
null
null
58.16875
5,606
0.790265
true
420
Qwen/Qwen-72B
1. YES 2. YES
0.899121
0.828939
0.745317
__label__eng_Latn
0.520182
0.569952
# Announcements

- No Problem Set this week, Problem Set 4 will be posted on 9/28.
- Solutions to Problem Set 3 posted on D2L.

<style>
@import url(https://www.numfys.net/static/css/nbstyle.css);
</style>
<a href="https://www.numfys.net"></a>

# Ordinary Differential Equations - higher order methods in practice

<section class="post-meta">
Based on notes and notebooks by Niels Henrik Aase, Thorvald Ballestad, Vasilis Paschalidis and Jon Andreas Støvneng
</section>

### Higher Order Derivatives and Sets of 1st order ODEs

The trick to solving ODEs with higher derivatives is turning them into systems of first-order ODEs. As a simple example, consider the second-order differential equation describing the van der Pol oscillator

$$ \frac{d^2 x}{dt^2} - a (1-x^2) \frac{dx}{dt} + x = 0 $$

We turn this into a pair of first-order ODEs by defining an auxiliary function $v(t) = dx/dt$ and writing the system as

\begin{align}
\begin{split}
\frac{dv}{dt} &= a (1-x^2) v - x\\
\frac{dx}{dt} &= v
\end{split}
\end{align}

Note that there are only functions (and the independent variable) on the RHS; all "differentials" are on the LHS. Now that we have a system of first-order equations, we can proceed as above: first we set up our RK4 integrator, then define a function describing the RHS of this system.

```python
import numpy as np
import matplotlib.pyplot as plt

def RK4_step(t, y, h, g, *P):
    """ Implements a single step of a fourth-order, explicit Runge-Kutta scheme """
    thalf = t + 0.5*h
    k1 = h * g(t, y, *P)
    k2 = h * g(thalf, y + 0.5*k1, *P)
    k3 = h * g(thalf, y + 0.5*k2, *P)
    k4 = h * g(t + h, y + k3, *P)
    return y + (k1 + 2*k2 + 2*k3 + k4)/6
```

```python
def vdpRHS(t, y, a):
    # we store our function as the array [x, x']
    return np.array([
        y[1],                        # dx/dt = v
        a*(1-y[0]**2)*y[1] - y[0]    # dv/dt = a*(1-x**2)*v - x
    ])
```

```python
def odeSolve(t0, y0, tmax, h, RHS, method, *P):
    """ ODE driver with constant step-size, allowing systems of ODE's """

    # make array of times and find length of array
    t = np.arange(t0,tmax+h,h)
    ntimes, = t.shape

    # find out if we are solving a scalar ODE or a system of ODEs, and allocate space accordingly
    if type(y0) in [int, float]:   # check if primitive type -- means only one eqn
        neqn = 1
        y = np.zeros( ntimes )
    else:                          # otherwise assume a numpy array -- a system of more than one eqn
        neqn, = y0.shape
        y = np.zeros( (ntimes, neqn) )

    # set first element of solution to initial conditions (possibly a vector)
    y[0] = y0

    # march on...
    for i in range(0,ntimes-1):
        y[i+1] = method(t[i], y[i], h, RHS, *P)

    return t,y
```

```python
a = 15   # parameter
h = 0.01
t0 = 0.0
y0 = np.array([ 0, 1])
tmax = 3

# call the solver
t, y = odeSolve(t0, y0, tmax, h, vdpRHS, RK4_step, a)

fig,ax = plt.subplots()
ax.plot(t,y[:,0],'b', label='x')
ax.plot(t,y[:,1],'r--', label='v')
ax.set_xlabel('time')
ax.legend()
ax.set_title(f"van der Pol Oscillator for a={a}")
plt.tight_layout()
plt.show()
```

A somewhat more complex example is the Lane-Emden equation, which is really just Poisson's equation in spherical symmetry for the gravitational potential of a self-gravitating fluid whose pressure is related to its density as $P\propto\rho^\gamma$. Such a system is called a _polytrope_, and is often used in astrophysics as a simple model for the structure of systems such as a star in which outward pressure and inward gravity are in equilibrium.

Let $\xi$ be the dimensionless radius of the system, and let $\theta$ be related to the density as $\rho = \rho_c \theta^n$, where $\rho_c$ is the density at the origin and $n = 1/(\gamma-1)$. 
We then have the dimensionless second-order differential equation

$$ \frac{1}{\xi^2}\frac{d}{d\xi}\left(\xi^2\frac{d\theta}{d\xi}\right) + \theta^n = 0 $$

Note that the first term is just the Laplacian $\nabla^2\theta$ in spherical symmetry. If we expand out the first term, we have

$$ \frac{d^2\theta}{d\xi^2} + \frac{2}{\xi}\frac{d\theta}{d\xi} + \theta^n = 0 $$

Defining an auxiliary function $v(\xi) = d\theta/d\xi$, we can then convert this into a system of two first-order ODEs:

\begin{align}
\begin{split}
\frac{dv}{d\xi} &= -\frac{2}{\xi} v - \theta^n \\
\frac{d\theta}{d\xi} &= v
\end{split}
\end{align}

Again, we have "derivatives" only on the LHS and no derivatives on the RHS of our system.

Looking at this expression, one can see right away that at the origin $\xi=0$ we will have a numerical problem; we are dividing by zero. Analytically, this is not a problem, since $v/\xi\rightarrow0$ as $\xi\rightarrow0$, but here we need to address this numerically.

The first approach is to take care of the problem in our RHS function:

```python
def leRHS(x, y, n):
    dthetadx = y[1]
    if x==0:
        dvdx = -y[0]**n
    else:
        dvdx = -2/x*y[1] - y[0]**n
    return np.array([ dthetadx, dvdx ])
```

This is somewhat clunky, however, and you would first have to convince yourself that in fact $v(\xi)\rightarrow0$ faster than $\xi$ (don't just take my word for it!). Instead, we could use a more direct RHS function

```python
def leRHS(x, y, n):
    dthetadx = y[1]
    dvdx = -2/x*y[1] - y[0]**n
    return np.array([ dthetadx, dvdx ])
```

and expand the solution in a Taylor series about the origin to get a starting value for our numerical integration at a small distance away from the origin. To do this, write

$$\theta(\xi) = a_0 + a_1 \xi + a_2 \xi^2 + \dots$$

The first thing to notice is that, by symmetry, only even powers of $\xi$ will appear in the solution. Thus we will have

$$ \theta(\xi) = a_0 + a_2 \xi^2 + a_4 \xi^4 + \dots$$

By the boundary condition $\theta(0) = 1$, we have immediately that $a_0 = 1$.

Next, substitute $\theta(\xi) = 1 + a_2 \xi^2 + a_4 \xi^4 + O(\xi^6)$ into the Lane-Emden equation. $\theta$ and its first two derivatives are

\begin{align}
\begin{split}
\theta(\xi) &= 1 + a_2 \xi^2 + a_4 \xi^4 + O(\xi^6)\\
\theta'(\xi) &= 2 a_2 \xi + 4 a_4 \xi^3 + O(\xi^5) \\
\theta''(\xi) &= 2 a_2 + 12 a_4 \xi^2 + O(\xi^4)
\end{split}
\end{align}

Putting these into the Lane-Emden equation, we have

\begin{align}
\begin{split}
2 a_2 + 12 a_4 \xi^2 + O(\xi^4) + \frac{2}{\xi} (2 a_2 \xi + 4 a_4 \xi^3 + O(\xi^5)) &= -\theta^n \\
6 a_2 + 20 a_4 \xi^2 + O(\xi^4) &= -\theta^n
\end{split}
\end{align}

Evaluating at $\xi=0$, where the boundary condition $\theta(0)=1$ makes the RHS equal to $-1$, we have $a_2 = -1/6$. Away from zero, then, we have

\begin{align}
\begin{split}
-1 + 20 a_4 \xi^2 + O(\xi^4) &= -\left(1 - \frac{1}{6} \xi^2 + a_4 \xi^4 + O(\xi^6)\right)^n
\end{split}
\end{align}

The RHS expands to $-1 + n \xi^2/6 + O(\xi^4)$, so matching the $\xi^2$ terms gives $a_4 = n/120$. Thus, the series expansion of the solution around the origin is

$$ \theta(\xi) = 1 - \frac{1}{6}\xi^2 + \frac{n}{120} \xi^4 + \dots $$

We can now use this expansion to take a first step slightly away from the origin before beginning our numerical integration, thus avoiding the divide by zero. Note that the error of this truncated series is $O(\xi^6)$, so it is a good match for RK4 if we take the same (or smaller) step-size. 
```python
n = 3

xi0 = 0.01   # starting value of xi for our numerical integration
theta0 = 1 - xi0**2/6 + n*xi0**4/120   # Taylor series solution to the DE near zero derived above
theta0p = -xi0/3 + n*xi0**3/30
y0 = np.array([ theta0, theta0p])      # set IC's for numerical integration
print(f"IC at {xi0:10.5e}: {y0[0]:10.5e}, {y0[1]:10.5e}")

h = 0.1
tmax = 8

# call the solver
t, y = odeSolve(xi0, y0, tmax, h, leRHS, RK4_step, n)

fig,ax = plt.subplots()
ax.plot(t,y[:,0],'b', label=r'$\theta(\xi)$')
ax.plot(t,y[:,1],'r--', label=r'$\frac{d\theta}{d\xi}$')
ax.plot([0,tmax],[0,0],'k')
ax.set_xlabel(r'$\xi$')
ax.set_title(f"Lane Emden Equation for n={n}")
ax.legend()
plt.tight_layout()
plt.show()
```

For values of $n\le5$, the solutions of the Lane Emden equation (the so-called Lane-Emden functions of index $n$) decrease to zero at finite $\xi$. Since this is the radius at which the density goes to zero, we can interpret it as the surface of the self-gravitating body (for example, the radius of the star). Knowing this value $\xi_1$ is thus interesting... Let us see how to determine it numerically.

Clearly, we are looking for the solution to $\theta(\xi_1)=0$; this is just root-finding, which we already know how to do. Instead of using some closed-form function, however, the value of the function $\theta(\xi)$ must in this case be determined numerically. But we have just figured out how to do this!

Let's use the bisection method for our root-finding algorithm; here is a quick version (no error checking!)

```python
def bisection(func, low, high, eps, *P):
    flow = func(low, *P)
    fhigh = func(high, *P)
    mid = 0.5*(low+high)
    fmid = func(mid,*P)

    while (high-low)> eps:
        if fmid*flow < 0:
            high = mid
            fhigh = fmid
        else:
            low = mid
            flow = fmid   # carry the function value, not the coordinate
        mid = 0.5*(low+high)
        fmid = func(mid,*P)

    return low
```

Now let us make a function which returns $\theta(\xi)$, the solution to the Lane-Emden equation at $\xi$

```python
def theta(xi, n):
    h = 1e-4
    xi0 = 1e-4
    theta0 = 1 - xi0**2/6 + n*xi0**4/120
    theta0p = -xi0/3 + n*xi0**3/30
    y0 = np.array([ theta0, theta0p])
    t, y = odeSolve(xi0, y0, xi, h, leRHS, RK4_step, n)
    return y[-1,0]
```

Using these, we can compute the surface radius of the polytrope

```python
n = 3
xi1 = bisection(theta, 6, 8, 1e-5, n)
print(f"xi_1 = {xi1:7.5f}")
```

    xi_1 = 6.89680

A more careful treatment gives a value $\xi_1 = 6.89685...$, so we are doing pretty well...

# Breakout exercise: Projectile motion

The trajectory of a projectile in a uniform gravity field is a classic and dear physics problem. However, in most introductory physics courses, one works under the assumption that the frictional forces can be neglected. This is of course not always the case, especially not at high speeds. If frictional forces are included, the path of the projectile can no longer be described analytically, and the trajectory must be calculated numerically. To do this, one must solve a coupled set of differential equations, which provides us with an excellent opportunity to implement an ODE-solver in vector form. This turns out to be an elegant and easy implementation that can be used for solving both differential equations of higher order and coupled differential equations.

## Theory

### Equations of motion

The governing equations for a projectile are derived from Newton's second law, $\vec{F} = m\vec{a}$, where $\vec{F}$ is the force acting on the projectile, $m$ is the mass, and $\vec{a}$ is the acceleration of the projectile. In this example there are two forces acting on the projectile. 
The gravitational force

$$\vec{F_G} = m\vec{g},$$

where $\vec{g}$ denotes the standard gravity. The other force is the drag force

$$\vec{F_D} = -\frac{1}{2}\rho ACv_r^2\hat{v_r}.$$

$A$ is the cross-sectional area, $C$ is the drag coefficient, and $v_r$ is the velocity of the projectile relative to the fluid it is traveling through, $\rho$ is the air density and $\hat{v_r}$ is the direction of the projectile with length unity. A more practical way of writing this equation is

$$\vec{F_D} = - \frac{B}{\rho_0}\rho v_r\left(\vec{v}-\vec{V}\right),$$

where $\rho_0$ is the air density at sea level altitude, $B$ is some constant, $\vec{v}$ is the velocity of the projectile, and $\vec{V}$ is the velocity of the wind. In this example we will use the shells from the German cannon "the Paris gun" as the projectiles, and the numerical constants mentioned above will be based on historical data of the cannon. The Paris gun operated during World War I and bombarded Paris from far away; the distances often exceeded 100 kilometers, with the velocities of the shells being correspondingly high.

The complete equation for the drag force includes an extra term, which is not mentioned above, namely the term originating from Stokes' law. For a spherical projectile the term takes the form

$$\vec{F_D'} = -6 \pi \eta \vec{v},$$

where $\vec{F_D'}$ denotes the extra term, and $\eta$ denotes the air's viscosity. At high velocities this term is dwarfed by $\vec{F_D}$ as $\vec{F_D}$ is proportional to $v^2$, thus making the drag force $\vec{F_D}$ an excellent approximation for the total drag force.

The last part of the theory needed before we can find the equations of motion is an expression for the air density. As the shells of the Paris gun will reach the stratosphere, it is not reasonable to treat $\rho$ as a constant. In this notebook we will use an adiabatic air density model. The air density is then given by

$$\rho = \rho_0 \left(1 - \frac{ay}{T_0}\right)^\alpha,$$

where $\rho_0$ and $T_0$ denote the air density and temperature at sea level, $y$ denotes the altitude, and $a$ and $\alpha$ are constants, $a= 6.5 \cdot 10^{-3}$ K/m, $\alpha$ = 2.5. At high altitudes the expression in the parentheses will become negative, which would make the air density a complex number. To avoid this, the air density is set equal to zero when reaching these altitudes. This critical height is around 44 kilometers, well into the stratosphere.

In a two-dimensional coordinate system, where $x$ and $y$ are the displacement from the origin in horizontal and vertical direction respectively, the equations of motion become

$$
\begin{align}
m\ddot{x} &= -\frac{B}{\rho_0}\rho\left(\dot{x}-V_x\right)\left|\vec{v}-\vec{V}\right|,\\
m\ddot{y} &= -\frac{B}{\rho_0}\rho\left(\dot{y}-V_y\right)\left|\vec{v}-\vec{V}\right|-mg,
\end{align}
$$

where $V_x$ and $V_y$ denote the velocity of the wind in the horizontal and vertical direction. 
Simplifying even further, and introducing $u$ and $v$ as the velocities in the horizontal and vertical direction, yields four coupled differential equations

\begin{equation}
\begin{aligned}
\dot{x} &= u, \\
\dot{u} &= -\frac{B}{m}\Big[ 1 - \frac{ay}{T_0} \Big]^\alpha \left(u-V_x\right)\Big[ (u-V_x)^2+(v-V_y)^2 \Big]^{1/2},\\
\dot{y} &= v, \\
\dot{v} &= -\frac{B}{m}\Big[ 1 - \frac{ay}{T_0} \Big]^\alpha \left(v-V_y\right)\Big[ (u-V_x)^2+(v-V_y)^2 \Big]^{1/2}-g.\\
\end{aligned}
\label{ODEs}
\end{equation}

### Discretization: Runge-Kutta 4th order

First we define a set of $N$ discrete time values

$$
t_n=t_0+n\cdot h~~~~~\mathrm{with}~~~~n=0,1,2,3,...,N.
$$

$h$ is the time step length and is calculated by the formula $h = \frac{t_N-t_0}{N}$, where $t_0$ and $t_N$ denote the initial and final time values of the projectile's flight. These $N$ time values correspond to $N$ values of $x$, $y$, $u$, and $v$, each denoted $x_n$, $y_n$, $u_n$ and $v_n$. A more practical way of denoting this collection of values is in vector form

$$
\vec{w_n} = \begin{bmatrix}x_n \\ y_n \\ u_n \\ v_n\end{bmatrix}.
$$

This notation makes it possible to implement the Runge-Kutta method in an elegant way. It is always possible to reduce an ordinary differential equation of order $q$ to a system of $q$ coupled first-order differential equations. We have a coupled set of two second order differential equations, thus we need to solve a set of four first order coupled differential equations. By introducing a function $f$ that transforms $\vec{w_n}$ in the way described by equation \eqref{ODEs}, one can implement the Runge-Kutta method directly. In our case, $f$ is given by

$$
f(\vec{w_n} ) = f \begin{bmatrix}x_n \\ y_n \\ u_n \\ v_n\end{bmatrix} = \begin{bmatrix}u_n \\ v_n \\ -\frac{B}{m}\Big[ 1 - \frac{ay_n}{T_0} \Big]^\alpha \left(u_n-V_x\right)\Big[ (u_n-V_x)^2+(v_n-V_y)^2 \Big]^{1/2} \\ -\frac{B}{m}\Big[ 1 - \frac{ay_n}{T_0} \Big]^\alpha \left(v_n-V_y\right)\Big[ (u_n-V_x)^2+(v_n-V_y)^2 \Big]^{1/2}-g\end{bmatrix},
$$

with no time dependence, such that

$$
\dot{\vec{w_n}}=f(\vec{w_n})
$$

describes the complete system of differential equations. The following code implements this, and can easily be expanded to solve either an even higher order ODE or an ODE describing a three-dimensional problem.

```python
# Importing necessary packages
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import rc
%matplotlib inline

newparams = {'figure.figsize': (15, 7), 'axes.grid': False,
             'lines.markersize': 10, 'lines.linewidth': 2,
             'font.size': 15, 'mathtext.fontset': 'stix',
             'font.family': 'STIXGeneral', 'figure.dpi': 200}
plt.rcParams.update(newparams)

# Constants
g = 9.81                 # Gravitational acceleration at the Earth's surface
alpha = 2.5              # Parameter in the adiabatic air density model
a = 6.5 * 10 ** (-3)     # Parameter in the adiabatic air density model
T_0 = 288                # Temperature at sea level
m = 50                   # Mass of the projectile
B = 2 * 10 ** (-3)       # Constant based on the Paris gun
v_0 = 1640               # Initial velocity

# Wind strength. This is a constant that never varies
V = np.zeros(2)          # Starting off with no wind

def f(t,w,*P):
    """
    A function describing the right hand side of the equations of motion/the set of ODEs.
    Parameters:
        t   time parameter (not needed in this case)
        w   vector containing the needed coordinates and their velocities
    """
    # V : the vector describing the wind strength and direction
    temp_vector = np.zeros(4)
    temp_vector[0] = w[2]
    temp_vector[1] = w[3]
    # Saving the next parameters in order to optimize run time
    k = (1 - a * w[1] / T_0)   # Air density factor
    s = np.sqrt((w[2]-V[0]) ** 2 + (w[3]-V[1]) ** 2 )
    if k>0:
        temp_vector[2] = - B / m * k ** (alpha) * (w[2]-V[0]) * s
        temp_vector[3] = - B / m * k ** (alpha) * (w[3]-V[1]) * s - g
    else:
        temp_vector[2] = 0
        temp_vector[3] = - g
    return temp_vector
```

Now we can finally start to explore the trajectories of the Paris gun. The functions below will be helpful going forward.

```python
def shoot(theta, v0):
    """
    Initializes the vector w (x and y position and velocity of the projectile) given
    an initial shooting angle, theta, and absolute velocity, v0.
    """
    w = np.zeros(4)
    w[2] = v0 * np.cos(np.deg2rad(theta))
    w[3] = v0 * np.sin(np.deg2rad(theta))
    return w

def projectile_motion(h, theta):
    """
    Calculates the motion of the projectile using the functions defined above.
    While the projectile is in the air (w[1] >= 0) the position and velocity
    are updated using a single step of RK4.

    Parameters:
        h      unit/integration step length
        theta  initial shooting angle
    Returns:
        X_list array of the projectile's x-position
        Y_list array of the projectile's y-position
    """
    w = shoot(theta, v_0)
    X_list = np.zeros(0)
    Y_list = np.zeros(0)
    while w[1] >= 0:
        w = RK4_step(0, w, h,f,0)
        X_list = np.append(X_list, w[0])
        Y_list = np.append(Y_list, w[1])
    return X_list, Y_list
```

### Finding the optimal launch angle

With no drag force, it is trivial to show that the angle maximizing the range of a projectile is $\theta = 45^\circ$, where $\theta$ is the angle between the horizontal axis and the direction of the initial velocity. It is interesting to see how the air drag affects this maximum angle, both with and without wind. The following code finds the optimal angle to within the two-degree resolution of its scan, and shows how the range depends on the firing angle. It is also instructive to see some of the projectile's trajectories, so a few are plotted as well.

```python
def find_optimal_angle(h):
    """
    Given an integration time step, this function calculates the optimal initial shooting
    angle for the projectile to obtain maximum range, in x-direction. The set of angles
    tested, with their corresponding range, along with the optimal angle are returned.
""" record = 0 # Placeholder variable that holds the maximum range optimal_angle = 0 # Placeholder variable that holds the angle yielding the maximum range # Lists containing the initial angle and its corresponding range theta_list = np.zeros(0) range_list = np.zeros(0) for theta in range (1,90,2): x_list, y_list = projectile_motion(h, theta) # Using linear interpolation do determine the landing point more precisely m = (y_list[-1] - y_list[-2]) / (x_list[-1] - x_list[-2]) # The landing point x_range = - y_list[-1] / m + x_list[-1] theta_list = np.append(theta_list, theta) range_list = np.append(range_list, x_range) # Update records if x_range >= record: record = x_range optimal_angle = theta # Rerunning the same code on a smaller interval in order to approximate the optimal angle # more precicely return theta_list, range_list, optimal_angle theta, x , best = find_optimal_angle(0.1) print("The optimal angle is: ", best, " degrees") ``` The optimal angle is: 55 degrees ```python plt.plot(theta, x/1000) plt.title(r"Projectile range as a function of shooting angle, $\theta$") plt.xlabel(r"$\theta $ [$\degree$]") plt.ylabel(r"range [km]") def trajectories(h): plt.figure() plt.title("Projectile trajectories by alternating shooting angle") plt.xlabel(r"x [km]") plt.ylabel(r"y [km]") theta_list = np.arange(30.0,75,5) for angle in theta_list: x_list, y_list = projectile_motion(h, angle) plt.plot(x_list/1000, y_list/1000, '--', label=r'$\theta = $%.i $\degree$'%(angle)) plt.legend(loc='best') plt.gca().set_ylim(bottom=0) plt.show() trajectories(0.1) ``` In contrast to projectile motion with no friction, maximum range is not obtained with firing angle $\theta = 45^\circ$. Because the drag force is lower at higher altitudes, the range increases when the projectile spends the majority of it's flight at higher altitudes. This explains why the maximum range is obtained at $\theta = 55^\circ$. ### Numerical validity In order to determine the validity of the numerical implementation, we can use conservation of energy. During the projectile's trajectory the drag force $\vec{F_D}$ will do work on the projectile. The cumulative work done on the projectile is given by $$ W = \int \vec{F_D} \cdot \vec{dr}. $$ In our discrete model, we can easily calculate this integral by summing over all the infinitesimal contributions, and using the fact that $\vec{F_D}$ always is antiparallel with $\vec{dr}$ such that the cumulative work at time $t_M$, $M<N$, is equal to $$ W = \sum_{n=1}^{M} |\vec{F_{D_{n-1}}}| \cdot |\vec{r_n} - \vec{r_{n-1}}|, $$ where $\vec{F_{D_{n-1}}}$ denotes the drag force at position $\vec{r_{n-1}}$. However, this can again be written to the form $$ W = h \sum_{n=1}^{M} |\vec{F_{D_{n-1}}}| \cdot |\vec{v_{n-1}}|, $$ which is the most convenient for our purposes. It must be noted that the choice of index that is used for the force and speed respectively is non-trivial, as they both will vary during the interval $dr$. Here both the speed of the projectile and the force at point $\vec{r_{n-1}}$ is used. This is mainly a pragmatic choice, as this combination yields the lowest, and thus the most realistic energy loss. 
Thus we can finally express the energy of the projectile (which of course is equal to its kinetic plus potential energy) and the work done on the air at time $t_M$

$$ E(t_M) = h \sum_{n=1}^{M} |\vec{F_{D_{n-1}}}| \cdot |\vec{v_{n-1}}| + \frac{1}{2} m v_m^2 + mgy_m, $$

where $v_m$ is the absolute velocity of the projectile, and $y_m$ is the height of the projectile over the ground (zero potential reference point). Now all that remains is to check if this expression for the energy indeed remains constant while it is in the air.

In this notebook, the Runge-Kutta method is implemented without much difficulty. However, it is interesting to see whether or not it was necessary to use a 4th order differential equation solver, instead of using the most primitive ODE solver, Euler's method. The following code will use both Runge-Kutta and Euler's method, and compare the energy loss during the motion for both of them.

```python
def euler_step(t, w, h, rhs, *P):
    """Simple implementation of Euler's method in vector form,
    with the same call signature as RK4_step."""
    return w + h * rhs(t, w, *P)

def energy_conservation(h, theta, func):
    """
    This function performs the same steps as "projectile_motion()" and "f()" but
    extracts the value for the drag, F_D, at every step. This enables the calculation
    of work and storage of energy at each time step.

    Parameters:
        h      integration time step
        theta  initial shooting angle
        func   a function defining the integration step (euler or RK4)
    Returns:
        x_list      array of the trajectory x-coordinates
        y_list      array of the trajectory y-coordinates
        time_steps  array of the elapsed time at each step
        energy      array of the energy at each step
        work        the total work by the drag force
    """
    w = shoot(theta, v_0)
    x_list = np.zeros(1)
    y_list = np.zeros(1)
    work = 0

    # Initial total energy
    v = np.sqrt(w[2] ** 2 + w[3] ** 2)
    energy = np.array([0.5 * m * v ** 2])

    while w[1] >= 0:
        w = func(0, w, h, f, 0)
        # Updating lists
        x_list = np.append(x_list, w[0])
        y_list = np.append(y_list, w[1])

        # Temporary parameters for the high-altitude check
        k = (1 - a * y_list[-2] / T_0)
        s = np.sqrt((w[2] - V[0]) ** 2 + (w[3] - V[1]) ** 2)
        if k > 0:
            F_D = B * k ** (alpha) * s ** 2
        else:
            F_D = 0
        # Add the amount of work done by drag this time interval
        work += v * F_D * h
        # Updating the speed in order to calculate the total energy
        v = np.sqrt(w[2] ** 2 + w[3] ** 2)
        energy = np.append(energy, 0.5 * m * v ** 2 + m * g * w[1] + work)

    n = len(x_list)
    time_steps = np.linspace(0, n * h, n)
    return x_list, y_list, time_steps, energy, work

# Setting initial values
theta = 45
h = 0.1

# Getting the results
x_list_euler, y_list_euler, time_steps_euler, energy_euler, work = energy_conservation(h, theta, euler_step)
x_list_RK4, y_list_RK4, time_steps_RK4, energy_RK4, work = energy_conservation(h, theta, RK4_step)

print("At firing angle %.1f degrees and with time step h = %.2f seconds:" %(theta, h))
print("Energy lost with RK4: %.3E = %.2f percent of total energy"
      %(energy_RK4[0] - energy_RK4[-1], (energy_RK4[0] - energy_RK4[-1]) / energy_RK4[0] * 100))
print("Energy lost with Euler's method: %.3E = %.2f percent of total energy"
      %(energy_euler[0] - energy_euler[-1], (energy_euler[0] - energy_euler[-1]) / energy_euler[0] * 100))

# Plotting the separate trajectories
plt.plot(x_list_euler/1000, y_list_euler/1000, label ="Euler")
plt.plot(x_list_RK4/1000, y_list_RK4/1000, label ="RK4")
plt.title("Firing angle = " + str(theta) + "$\degree$")
plt.xlabel(r"$x$ [km]")
plt.ylabel(r"$y$ [km]")
plt.gca().set_ylim(bottom=0)
plt.legend()
plt.show()
```
From the figure above, it is clear that Euler's method works surprisingly well, even though it is just a first-order ODE solver: it produces nearly the same trajectory as the fourth-order Runge-Kutta method. The most likely explanation for the small difference between the two is that the right-hand sides of the coupled ODEs we are solving vary quite slowly, so even a first-order method accumulates little error here. It might seem like losing around $10^3$ joules is a lot, but one needs to remember the enormous dimensions of the Paris gun, and that this is a small fraction of the original energy of the projectile.

```python

```
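As a further check on the two integrators (an added sketch reusing `energy_conservation` from above; the step sizes are arbitrary), halving the step size should shrink the spurious part of the energy drift much faster for RK4 than for Euler:

```python
for h_test in [0.2, 0.1, 0.05]:
    *_, e_eu, _ = energy_conservation(h_test, 45, euler_step)
    *_, e_rk, _ = energy_conservation(h_test, 45, RK4_step)
    print(f"h={h_test}: Euler loss={e_eu[0]-e_eu[-1]:.3e}, "
          f"RK4 loss={e_rk[0]-e_rk[-1]:.3e}")
```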
60615d454c2923296220c5f9baa336ef950a4e60
806,824
ipynb
Jupyter Notebook
Lectures/Lecture 13/Lecture13_ODE_part4.ipynb
astroarshn2000/PHYS305S20
18f4ebf0a51ba62fba34672cf76bd119d1db6f1e
[ "MIT" ]
3
2020-09-10T06:45:46.000Z
2020-10-20T13:50:11.000Z
Lectures/Lecture 13/Lecture13_ODE_part4.ipynb
astroarshn2000/PHYS305S20
18f4ebf0a51ba62fba34672cf76bd119d1db6f1e
[ "MIT" ]
null
null
null
Lectures/Lecture 13/Lecture13_ODE_part4.ipynb
astroarshn2000/PHYS305S20
18f4ebf0a51ba62fba34672cf76bd119d1db6f1e
[ "MIT" ]
null
null
null
833.495868
475,280
0.945323
true
8,344
Qwen/Qwen-72B
1. YES 2. YES
0.885631
0.888759
0.787113
__label__eng_Latn
0.992193
0.667059
# Tutorial

## Preliminaries
Before following this tutorial we need to set up the tools and load the data.

We need to import several packages, so before running this notebook you should create an environment (conda or virtualenv) with matplotlib, numpy, scikit-image, and jupyter. You can use the Anaconda Navigator to do this, see: https://docs.continuum.io/anaconda/navigator/getting-started.html

Or from the terminal (command window), for example:

`conda create -n BIO399E jupyter matplotlib numpy scipy scikit-image`

and then activate it, e.g. on Mac/Linux:

`source activate BIO399E`

or on windows

`activate BIO399E`

or select the conda env here in Jupyter.

## Modules
First import the standard tools, numpy and matplotlib. These are very well documented packages, more info can be found here:

http://www.matplotlib.org
http://www.numpy.org

```python
import numpy as np
import matplotlib
np.__version__
```

## Numpy arrays
Lists are a simple way to store collections of data, but they are not very flexible. To deal with numerical data it is better to use a package called numpy, which stores data in n-dimensional arrays. The simplest is a lot like a list, and we can make it from a list:

```python
days = [31,28,31,30,31,30,31,31,30,31,30,31]
adays = np.array(days)
print(adays)
```

We can also make an array from scratch and fill it with zeros, ones, random values, etc, and combine arrays to compute functions:

```python
a = np.zeros((12,))
x = np.ones((100,))
y = np.random.random((12,))
z = np.arange(1,13)
w = z*5 + 0.5
print(x)
print(y)
print(z)
print(w)
```

Now that we have the days in a numpy array, we can use functions from numpy to do the calculations easily:

```python
# Calculate mean using numpy
average_days = adays.mean()
print(adays)
print(average_days)
```

```python
# Calculate variance using numpy
var_days = adays.var()
print(var_days)

# Calculate variance step-by-step (same result)
devs = (adays-average_days)**2
print(devs)
sum_devs = devs.sum()
print(sum_devs)
var_days = sum_devs/len(days)
print(var_days)
```

### 2-dimensional arrays
Numpy can handle arrays of any number of dimensions. For example for images we will use 2-dimensional arrays (in a later class). Here is how to make a 2-d array:

```python
# A 2-dimensional array with random values
twod = np.random.random((10,5))
print(twod)

# Means (variance, etc.) are computed along specified axes
print(np.mean(twod, axis=1))
print(np.mean(twod, axis=(0,1)))
```

## Exercise 1
As we saw above Numpy has many functions that perform calculations on arrays, see here: https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays.ndarray.html#array-methods

One of these functions is to load data from a text file. In this folder you will find 3 text files, 'fluo.csv', 'od.csv', and 't.csv' (time). These files contain comma separated lists of numbers corresponding to fluorescence and od measurements at times t.

1.1 Write code to load these files into 3 numpy arrays `fluo`, `od` and `t`:

```python

```

1.2 Now calculate the mean and variance of each data set. What else can you calculate with numpy that might be useful?

```python

```

## Matplotlib, making graphs
Matplotlib is a module that works well with numpy arrays, and can make many kinds of graphs, charts, heatmaps etc. 
The part of the module that does plotting is called pyplot; we import it like this, with some magic to put plots in this notebook:

```python
import matplotlib.pyplot as plt
%matplotlib inline
```

For example, here is a simple plot of the fluorescence:

```python
# Simple plot
plt.plot(fluo)

# We can also choose the color, point shape etc.
plt.plot(fluo, 'g+')
```

Make use of the matplotlib documentation to do the following:

## Exercise 2

2.1 Make a plot of each data set, `fluo` and `od`, against time `t`. Label the axes and give the plot a title.

```python

```

2.2 Do the same as above, but plot the log of the data:

```python

```

2.3 Plot `fluo` and `od` in the same plot. Try to make the axes different so that you can really see od:

```python

```

2.4 Plot `fluo/od` for all times:

```python

```

## Exercise 3

3.1 Plot histograms of `fluo` and `od`:

```python

```

3.2 Plot `fluo` against `od`:

```python

```

3.3 Calculate the correlation between `fluo` and `od`:

```python

```

## Data analysis, calculating gene expression

Here is a simple model for fluorescent gene expression from a single cell:

\begin{equation}
\frac{dF}{dt} = k(t) - \mu(t) F(t)
\end{equation}

where $\mu(t)$ is the growth rate. If we have $N \approx OD$ cells, then we measure the sum of their gene expression $I(t) = F(t)OD(t)$, and we can show that:

\begin{equation}
k(t) = \frac{1}{OD}\frac{dI}{dt}
\end{equation}

(This follows from applying the product rule to $I = F \cdot OD$ together with $\mu = \frac{1}{OD}\frac{dOD}{dt}$; the $\mu$ terms cancel.)

## Exercise 4 (Advanced)

Given the data above with $I(t)$ in the variable `fluo`, how would you calculate the gene expression rate at each time `t`?

```python

```
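One possible approach to Exercise 4 (an added sketch, assuming `fluo`, `od` and `t` were loaded in Exercise 1; `np.gradient` takes a centered finite-difference derivative):

```python
# Gene expression rate k(t) = (1/OD) dI/dt, with I(t) stored in fluo
dIdt = np.gradient(fluo, t)   # numerical derivative of I(t)
k = dIdt / od
plt.plot(t, k)
plt.xlabel('time')
plt.ylabel('expression rate k(t)')
```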
f9fbdcda281125b29e8faafd44fc3fd8c589a584
11,621
ipynb
Jupyter Notebook
Tutorial.ipynb
SimonCastillo/bio231c
0cff86d41f88a57abfa69c18c5f43f0ceaf4ccd7
[ "MIT" ]
null
null
null
Tutorial.ipynb
SimonCastillo/bio231c
0cff86d41f88a57abfa69c18c5f43f0ceaf4ccd7
[ "MIT" ]
null
null
null
Tutorial.ipynb
SimonCastillo/bio231c
0cff86d41f88a57abfa69c18c5f43f0ceaf4ccd7
[ "MIT" ]
null
null
null
22.608949
277
0.551071
true
1,306
Qwen/Qwen-72B
1. YES 2. YES
0.679179
0.839734
0.570329
__label__eng_Latn
0.994219
0.163396
# The 24 Game

The Python program below finds solutions to the 24 game: use four numbers and any of the four basic arithmetic operations (multiplication, division, addition and subtraction) to produce the number 24 (or any number you choose). Execute the program, choose four numbers (separated by commas) and the target number (24 by default).

```python
from sympy import symbols
from sympy.parsing.sympy_parser import parse_expr
from itertools import permutations, combinations_with_replacement

# list the default numbers to use here
numbahs = [2,4,6,8]
# enter the default number to calculate here
ansah = 24

numberlist = input("Enter a list of four integers, separated by commas [default: 2,4,6,8]: ")
if numberlist:
    numbahs = [int(n) for n in numberlist.split(',')]
answer = input("Enter a number to compute [default: 24]: ")
if answer:
    ansah = int(answer)

# generate all possible arrangements of parentheses and operators
# ways of combining arguments with parenthesis
pgroups = [
    "{0}{1}{2}{3}{4}{5}{6}",
    "({0}{1}{2}){3}{4}{5}{6}",
    "({0}{1}{2}{3}{4}){5}{6}",
    "(({0}{1}{2}){3}{4}){5}{6}",
    "({0}{1}({2}{3}{4})){5}{6}",
    "{0}{1}({2}{3}{4}){5}{6}",
    "{0}{1}({2}{3}({4}{5}{6}))",
    "{0}{1}(({2}{3}{4}){5}{6})",
    "{0}{1}({2}{3}{4}{5}{6})",
    "{0}{1}{2}{3}({4}{5}{6})"
]
# available operators
operators = ['*','/','+','-']
# available symbols
variables = ['a','b','c','d']
# symbols for sympy to work with
a, b, c, d = symbols('a b c d')

# does a file exist with unique expressions?
try:
    with open("uniqueexpressions.txt") as f:
        expressions = [parse_expr(s) for s in f.readlines()]
# no file.. generate one
except:
    # collect unique expressions in a set
    expressions = set()
    for p in pgroups:                                          # for every combination of parenthesis
        for combo in combinations_with_replacement(operators, 3):   # and every combination of operators
            for o in permutations(combo):                      # permuted
                for v in permutations(variables):              # and every permutation of variables
                    s = p.format(v[0],o[0],v[1],o[1],v[2],o[2],v[3])   # construct the expression and...
                    s = parse_expr(s)
                    # add to expressions set -- this will drop equivalent expressions
                    expressions.add(s)
    with open("uniqueexpressions.txt", "w") as f:
        [f.write(str(s)+'\n') for s in expressions]

# for every unique expression, substitute the values and evaluate
for x in expressions:
    try:
        val = x.subs({a:numbahs[0],b:numbahs[1],c:numbahs[2],d:numbahs[3]})
        if ansah == val:
            s = str(x)
            for sym, num in zip(variables, numbahs):
                s = s.replace(sym, str(num))
            print(s)
    except:
        pass

#print(expressions)
```

    Enter a list of four integers, separated by commas [default: 2,4,6,8]: 1,2,3,4
    Enter a number to compute [default: 24]: 
    2*3*4/1
    1*2*3*4
    4*(1 + 2 + 3)
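As a quick check (an added sketch using only the modules already imported), any printed solution string can be re-parsed and evaluated to confirm it hits the target:

```python
# Confirm a printed solution evaluates to the target
s = "4*(1 + 2 + 3)"
print(s, "=", parse_expr(s))   # should print 24
```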
d9f898f5abf528b45cb643eb4a6df90023210fdf
4,598
ipynb
Jupyter Notebook
24i.ipynb
tiggerntatie/24
49ad4d06230ee001518ff2bb41b24f2772b744c3
[ "MIT" ]
null
null
null
24i.ipynb
tiggerntatie/24
49ad4d06230ee001518ff2bb41b24f2772b744c3
[ "MIT" ]
null
null
null
24i.ipynb
tiggerntatie/24
49ad4d06230ee001518ff2bb41b24f2772b744c3
[ "MIT" ]
null
null
null
33.562044
236
0.50087
true
883
Qwen/Qwen-72B
1. YES 2. YES
0.948155
0.865224
0.820366
__label__eng_Latn
0.959808
0.744318
<div style='background-image: url("../../share/images/header.svg") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'>
    <div style="float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px">
        <div style="position: relative ; top: 50% ; transform: translatey(-50%)">
            <div style="font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%">Computational Seismology</div>
            <div style="font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)">Numerical first derivative</div>
        </div>
    </div>
</div>

<p style="width:20%;float:right;padding-left:50px">
<span style="font-size:smaller">
</span>
</p>

---

This notebook is part of the supplementary material to [Computational Seismology: A Practical Introduction](https://global.oup.com/academic/product/computational-seismology-9780198717416?cc=de&lang=en&#), Oxford University Press, 2016.

---

##### Authors:
* Kristina Garina
* Heiner Igel ([@heinerigel](https://github.com/heinerigel))

---

### Exercise: Initialise arbitrary analytical function and calculate numerical first derivative with 2-point operator, compare with analytical solution. 

**Task:** extend scheme to 4 point operator and demonstrate improvement **(see Chapter 4)**

---

This exercise covers the following aspects:

* Calculation of numerical first derivative
* Comparison with analytical solution

---

**Please, execute first!**

```
# Import Libraries
%matplotlib notebook
import numpy as np
import math
import matplotlib.pyplot as plt
plt.style.use("ggplot")
```

We initialise a Gaussian function

\begin{equation}
f(t)=\dfrac{1}{\sqrt{2 \pi a}}e^{-\dfrac{t^2}{2a}}
\end{equation}

```
# Initial parameters
tmax=10.0                       # maximum time
nt=100                          # number of time sample
a=1                             # half-width
dt=tmax/nt                      # defining dt
t0 = tmax/2                     # defining t0
time=np.linspace(0,tmax,nt)     # defining time

# Initialization of gaussian function
f=(1./np.sqrt(2*np.pi*a))*np.exp(-(((time-t0)**2)/(2*a)))      # eq.(4.32) p. 80
# Plotting of gaussian
plt.figure()
plt.plot(time, f)
plt.title('Gaussian function')
plt.xlabel('Time, s')
plt.ylabel('Amplitude')
plt.xlim((0, tmax))
plt.grid()
plt.show()
```

In the cell below we calculate the numerical derivative using two points

\begin{equation}
f^{\prime}(t)=\dfrac{f(t+dt)-f(t-dt)}{2dt}
\end{equation}

and the analytical derivative

\begin{equation}
f^{\prime}(t)=-\dfrac{t}{a}\dfrac{1}{\sqrt{2\pi a}}e^{-\dfrac{t^2}{2a}}
\end{equation}

```
# First derivative with two points

# Initiation of numerical and analytical derivatives
nder=np.zeros(nt)          # numerical derivative
ader=np.zeros(nt)          # analytical derivative

# Numerical derivative of the given function
for it in range (1, nt-1):
    nder[it]=(f[it+1]-f[it-1])/(2*dt)

# Analytical derivative of the given function
ader=(-(time-t0)/a)*(1/np.sqrt(2*np.pi*a))*np.exp(-(time-t0)**2/(2*a))

# Plot of the first derivative and analytical derivative
plt.figure()
plt.plot (time, nder,label="Numerical derivative", lw=2, color="yellow")
plt.plot (time, ader, label="Analytical derivative", ls="--",lw=2, color="red")
plt.title('First derivative')
plt.xlabel('Time, s')
plt.ylabel('Amplitude')
plt.legend()
plt.grid()
plt.show()
```

The cell below calculates the first derivative with a four-point operator, using the following weights:

\begin{equation}
f^{\prime}(t)=\dfrac{\dfrac{1}{12}f(t-2dt)-\dfrac{2}{3}f(t-dt)+\dfrac{2}{3}f(t+dt)-\dfrac{1}{12}f(t+2dt)}{dt}
\end{equation}

```
# First derivative with four points

# Initiation of derivative
ffder=np.zeros(nt)

# Type your code here -- one possible solution:
for it in range (2, nt-2):
    ffder[it]=(f[it-2]/12 - 2*f[it-1]/3 + 2*f[it+1]/3 - f[it+2]/12)/dt

# Plotting
plt.figure()
plt.plot (time, nder,label="Derivative, 2 points", lw=2, color="violet")
plt.plot (time, ffder, label="Derivative, 4 points", lw=2, ls="--")
plt.title('First derivative')
plt.xlabel('Time, s')
plt.ylabel('Amplitude')
plt.legend()
plt.grid()
plt.show()
```

```

```
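To demonstrate the improvement quantitatively (an added sketch reusing the arrays computed above), compare the maximum absolute error of each operator against the analytical derivative away from the array edges:

```
# Maximum absolute error of the 2-point vs 4-point operators
err2 = np.max(np.abs(nder[2:-2] - ader[2:-2]))
err4 = np.max(np.abs(ffder[2:-2] - ader[2:-2]))
print("2-point operator max error: ", err2)
print("4-point operator max error: ", err4)
```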
15f606174ad92e8fa3ad17328fff38cac8708ae8
6,075
ipynb
Jupyter Notebook
notebooks/Computational Seismology/The Finite-Difference Method/fd_first_derivative.ipynb
krischer/seismo_live_build
e4e8e59d9bf1b020e13ac91c0707eb907b05b34f
[ "CC-BY-3.0" ]
3
2020-07-11T10:01:39.000Z
2020-12-16T14:26:03.000Z
notebooks/Computational Seismology/The Finite-Difference Method/fd_first_derivative.ipynb
krischer/seismo_live_build
e4e8e59d9bf1b020e13ac91c0707eb907b05b34f
[ "CC-BY-3.0" ]
null
null
null
notebooks/Computational Seismology/The Finite-Difference Method/fd_first_derivative.ipynb
krischer/seismo_live_build
e4e8e59d9bf1b020e13ac91c0707eb907b05b34f
[ "CC-BY-3.0" ]
3
2020-11-11T05:05:41.000Z
2022-03-12T09:36:24.000Z
6,075
6,075
0.594897
true
1,216
Qwen/Qwen-72B
1. YES 2. YES
0.880797
0.812867
0.715971
__label__eng_Latn
0.734934
0.501772
```python
# from matplotlib import pyplot as plt
from sympy import Symbol, Eq, Function, solve, Rational, lambdify
from IPython.display import display

#initialize some symbols here:
rho1 = Symbol("rho_1")
t = Symbol("t")
R = Function("R")(t)
R_d1 = R.diff()
R_d2 = R.diff().diff()
P0 = Symbol("P_0")
mu = Symbol("mu")
sigma = Symbol("sigma")

variables = {
    rho1: 997,               # Density of water
    P0: -9.81 * 997 * 1000,  # Assume constant throughout process, pressure = density * 9.81 * height
    mu: 0.0013076,
    sigma: 0.072
}

print("Substitution Values")
display(
    Eq(rho1, variables[rho1]),
    Eq(P0, variables[P0]),
    Eq(mu, variables[mu]),
    Eq(sigma, variables[sigma]))

lhs = rho1 * (R * R_d2 + Rational(3, 2) * R_d1 ** 2)
rhs = - P0 - 4 * mu * (1 / R) * R_d1 - 2 * sigma / R
eqn = Eq(lhs, rhs)
print("\n\nRayleigh-Plesset equation")
display(eqn)
```

    Substitution Values

$\displaystyle \rho_{1} = 997$

$\displaystyle P_{0} = -9780570.0$

$\displaystyle \mu = 0.0013076$

$\displaystyle \sigma = 0.072$

    Rayleigh-Plesset equation

$\displaystyle \rho_{1} \left(R{\left(t \right)} \frac{d^{2}}{d t^{2}} R{\left(t \right)} + \frac{3 \left(\frac{d}{d t} R{\left(t \right)}\right)^{2}}{2}\right) = - P_{0} - \frac{4 \mu \frac{d}{d t} R{\left(t \right)}}{R{\left(t \right)}} - \frac{2 \sigma}{R{\left(t \right)}}$

```python
# Isolate the second derivative from the R-P equation, substitute the numeric
# constants, and lambdify so it can be evaluated inside the integrator
d2R_expr = solve(eqn.subs(variables), R_d2)[0]
display(Eq(R_d2, d2R_expr))
Rdd = lambdify((R, R_d1), d2R_expr)
```

```python
#Runge-Kutta 4 step function
# y_{k+1} = y_k + dt*f_mean, where f_mean = f1/6 + f2/3 + f3/3 + f4/6
def rk4singlestep(func, tk: float, rk, dt: float):
    #runge kutta method below
    f1 = func(rk, tk)
    f2 = func(rk + dt/2*f1, tk + dt/2)
    f3 = func(rk + dt/2*f2, tk + dt/2)
    f4 = func(rk + dt*f3, tk + dt)
    return f1/6 + f2/3 + f3/3 + f4/6
```

```python
# Integrate the state y = [R, dR/dt]: isolating the second derivative closes
# the system, so each RK4 step advances both the radius and its rate of change
import numpy as np

def deriv(y, tk):
    # y[0] = R, y[1] = dR/dt
    return np.array([y[1], Rdd(y[0], y[1])])

#initializing values:
dt = 0.01
y = np.array([100.0, -0.001])   # starting R and dR/dt
radii = [y[0]]

#loop the RK4 step using the initial values to solve the eqn
for x in range(0, 1400):
    tk = x * dt
    f_mean = rk4singlestep(deriv, tk, y, dt)
    y = y + dt * f_mean
    radii.append(y[0])
```

```python

```
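A short follow-up sketch (added here; assumes the `radii` list from the loop above) to visualize the integrated bubble radius over time:

```python
from matplotlib import pyplot as plt
import numpy as np

times = np.arange(len(radii)) * 0.01   # dt used in the loop above
plt.plot(times, radii)
plt.xlabel("t [s]")
plt.ylabel("R(t)")
plt.show()
```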
37696aeee49c594dd3b489b298013c5b0830c850
9,035
ipynb
Jupyter Notebook
Projects/Project_1/Q2/Question2New.ipynb
UWaterloo-Mech-3A/Calculus
2e3f3bb606fda4bfe1c7f793a11b07b336f356ed
[ "MIT" ]
null
null
null
Projects/Project_1/Q2/Question2New.ipynb
UWaterloo-Mech-3A/Calculus
2e3f3bb606fda4bfe1c7f793a11b07b336f356ed
[ "MIT" ]
null
null
null
Projects/Project_1/Q2/Question2New.ipynb
UWaterloo-Mech-3A/Calculus
2e3f3bb606fda4bfe1c7f793a11b07b336f356ed
[ "MIT" ]
1
2021-07-16T06:01:32.000Z
2021-07-16T06:01:32.000Z
38.943966
2,041
0.555285
true
966
Qwen/Qwen-72B
1. YES 2. YES
0.896251
0.615088
0.551273
__label__eng_Latn
0.623377
0.119122
```python
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
```

The first exercise is about using Newton's method to find the cube roots of unity - find $z$ such that $z^3 = 1$. From the fundamental theorem of algebra, we know there must be exactly 3 complex roots since this is a degree 3 polynomial.

We start with Euler's equation
$$
e^{ix} = \cos x + i \sin x
$$

Raising $e^{ix}$ to the $n$th power where $n$ is an integer, we get from Euler's formula with $nx$ substituting for $x$
$$
(e^{ix})^n = e^{i(nx)} = \cos nx + i \sin nx
$$

Whenever $nx$ is an integer multiple of $2\pi$, we have
$$
\cos nx + i \sin nx = 1
$$

So
$$
e^{2\pi i \frac{k}{n}}
$$
is an $n$th root of 1 for $k = 0, 1, 2, \ldots, n-1$. So the cube roots of unity are $1, e^{2\pi i/3}, e^{4\pi i/3}$.

While we can do this analytically, the idea is to use Newton's method to find these roots, and in the process, discover some rather perplexing behavior of Newton's method.

```python
from sympy import Symbol, exp, I, pi, N, expand
from sympy import init_printing
init_printing()
```

```python
expand(exp(2*pi*I/3), complex=True)
```

```python
expand(exp(4*pi*I/3), complex=True)
```

```python
plt.figure(figsize=(4,4))
roots = np.array([[1,0], [-0.5, np.sqrt(3)/2], [-0.5, -np.sqrt(3)/2]])
plt.scatter(roots[:,0], roots[:,1], s=50, c='red')
xp = np.linspace(0, 2*np.pi, 100)
plt.plot(np.cos(xp), np.sin(xp), c='blue');
```

**1**. Newton's method for functions of complex variables - stability and basins of attraction. (30 points)

1. Write a function with the following function signature `newton(z, f, fprime, max_iter=100, tol=1e-6)` where
    - `z` is a starting value (a complex number e.g. ` 3 + 4j`)
    - `f` is a function of `z`
    - `fprime` is the derivative of `f`

    The function will run until either max_iter is reached or the absolute value of the Newton step is less than tol. In either case, the function should return the number of iterations taken and the final value of `z` as a tuple (`i`, `z`).

2. Define the function `f` and `fprime` that will result in Newton's method finding the cube roots of 1. Find 3 starting points that will give different roots, and print both the start and end points.

Write the following two plotting functions to see some (pretty) aspects of Newton's algorithm in the complex plane.

3. The first function `plot_newton_iters(f, fprime, n=200, extent=[-1,1,-1,1], cmap='hsv')` calculates and stores the number of iterations taken for convergence (or max_iter) for each point in a 2D array. The 2D array limits are given by `extent` - for example, when `extent = [-1,1,-1,1]` the corners of the plot are $(-1-i)$, $(1-i)$, $(1+i)$ and $(-1+i)$. There are `n` grid points in both the real and imaginary axes. The argument `cmap` specifies the color map to use - the suggested defaults are fine. Finally plot the image using `plt.imshow` - make sure the axis ticks are correctly scaled. Make a plot for the cube roots of 1.

4. The second function `plot_newton_basins(f, fprime, n=200, extent=[-1,1,-1,1], cmap='jet')` has the same arguments, but this time the grid stores the identity of the root that the starting point converged to. Make a plot for the cube roots of 1 - since there are 3 roots, there should be only 3 colors in the plot.

```python

```

**2**. Ill-conditioned linear problems. (20 points)

You are given an $n \times p$ design matrix $X$ and an $n$-vector of observations $y$ and asked to find the coefficients $\beta$ that solve the linear equations $X \beta = y$.
```python
X = np.load('x.npy')
y = np.load('y.npy')
```

The solution $\beta$ can also be loaded as

```python
beta = np.load('b.npy')
```

- Write a formula that could solve the system of linear equations in terms of $X$ and $y$. Write a function `f1` that takes arguments $X$ and $y$ and returns $\beta$ using this formula.
- How could you code this formula using `np.linalg.solve` that does not require inverting a matrix? Write a function `f2` that takes arguments $X$ and $y$ and returns $\beta$ using this.
- Note that carefully designed algorithms *can* solve this ill-conditioned problem, which is why you should always use library functions for linear algebra rather than write your own.

```python
np.linalg.lstsq(X, y, rcond=None)[0]
```

- What happens if you try to solve for $\beta$ using `f1` or `f2`? Remove the column of $X$ that is making the matrix singular and find the $p-1$ vector $b$ using `f2`.
- Note that the solution differs from that given by `np.linalg.lstsq`. This arises because the relevant condition number for `f2` is actually for the matrix $X^TX$ while the condition number of `lstsq` is for the matrix $X$. Why is the condition number so high even after removing the column that makes the matrix singular?

```python

```

**3**. Consider the following function on $\mathbb{R}^2$:

$$f(x_1,x_2) = -x_1x_2e^{-\frac{(x_1^2+x_2^2)}{2}}$$

1. Write down its gradient.
2. Write down the Hessian matrix.
3. Find the critical points of $f$.
4. Characterize the critical points as max/min or neither. Find the minimum under the constraint
$$g(x) = x_1^2+x_2^2 \leq 10$$
and
$$h(x) = 2x_1 + 3x_2 = 5$$
using `scipy.optimize.minimize`.
5. Plot the function contours using `matplotlib`. (20 points)

```python

```

**4**. One of the goals of the course is that you will be able to implement novel algorithms from the literature. (30 points)

- Implement the mean-shift algorithm in 1D as described [here](http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/TUZEL1/MeanShift.pdf).
    - Use the following function signature
    ```python
    def mean_shift(xs, x, kernel, max_iters=100, tol=1e-6):
    ```
    - xs is the data set, x is the starting location, and kernel is a kernel function
    - tol is the difference in $||x||$ across iterations
- Use the following kernels with bandwidth $h$ (a default value of 1.0 will work fine)
    - Flat - return 1 if $||x|| < h$ and 0 otherwise
    - Gaussian
    $$\frac{1}{\sqrt{2 \pi h}}e^{\frac{-||x||^2}{h^2}}$$
    - Note that $||x||$ is the norm of the data point being evaluated minus the current value of $x$
- Use both kernels to find all 3 modes of the data set in `x1d.npy`
- Modify the algorithm and/or kernels so that it now works in an arbitrary number of dimensions.
- Use both kernels to find all 3 modes of the data set in `x2d.npy`
- Plot the path of successive intermediate solutions of the mean-shift algorithm starting from `x0 = (-4, 10)` until it converges onto a mode in the 2D data for each kernel. Superimpose the path on top of a contour plot of the data density.

```python

```
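For orientation (a sketch of one possible shape for the routine described in problem 1, not an official solution), the Newton iteration for complex inputs can be as short as:

```python
def newton(z, f, fprime, max_iter=100, tol=1e-6):
    """Newton's method for complex z; returns (iterations taken, final z)."""
    for i in range(max_iter):
        step = f(z) / fprime(z)  # a zero derivative would need guarding in a robust version
        z = z - step
        if abs(step) < tol:
            break
    return i, z

# Example for the cube roots of unity
print(newton(1 + 1j, lambda z: z**3 - 1, lambda z: 3 * z**2))
```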
03e8e772fb398d451609c9302452220e6a4e3397
26,448
ipynb
Jupyter Notebook
homework/08_Optimization.ipynb
cliburn/sta-663-2017
89e059dfff25a4aa427cdec5ded755ab456fbc16
[ "MIT" ]
52
2017-01-11T03:16:00.000Z
2021-01-15T05:28:48.000Z
homework/08_Optimization.ipynb
slimdt/Duke_Stat633_2017
89e059dfff25a4aa427cdec5ded755ab456fbc16
[ "MIT" ]
1
2017-04-16T17:10:49.000Z
2017-04-16T19:13:03.000Z
homework/08_Optimization.ipynb
slimdt/Duke_Stat633_2017
89e059dfff25a4aa427cdec5ded755ab456fbc16
[ "MIT" ]
47
2017-01-13T04:50:54.000Z
2021-06-23T11:48:33.000Z
88.454849
14,202
0.805014
true
1,922
Qwen/Qwen-72B
1. YES 2. YES
0.817574
0.90053
0.73625
__label__eng_Latn
0.997384
0.548888
```
from sympy import *
```

```
n_u, n_b, m_p, m_r = symbols("n_u n_b m_p m_r")
F = MatrixSymbol("F", n_u, n_u)
M = MatrixSymbol("M", n_b, n_b)
C = MatrixSymbol("C", n_b, n_u)
B = MatrixSymbol("B", m_p, n_u)
D = MatrixSymbol("D", m_r, n_b)
```

```
B.T.shape
```

    (n_u, m_p)

```
A = BlockMatrix([[F, B.T, C.T, ZeroMatrix(n_u, m_r)],
                 [B, ZeroMatrix(m_p, m_p), ZeroMatrix(m_p, n_b), ZeroMatrix(m_p, m_r)],
                 [-C, ZeroMatrix(n_b, m_p), M, D.T],
                 [ZeroMatrix(m_r, n_u), ZeroMatrix(m_r, m_p), D, ZeroMatrix(m_r, m_r)]])
A
```

    [     F, B', C',  0]
    [     B,  0,  0,  0]
    [(-1)*C,  0,  M, D']
    [     0,  0,  D,  0]

```
# Scalar placeholders; plain Symbols have no .T attribute, so the transposed
# blocks get their own symbols (Bt, Ct, Dt)
n_u, n_b, m_p, m_r = symbols("n_u n_b m_p m_r")
F = Symbol("F")
M = Symbol("M")
C = Symbol("C")
Ct = Symbol("Ct")
B = Symbol("B")
Bt = Symbol("Bt")
D = Symbol("D")
Dt = Symbol("Dt")
```

```
A = Matrix([[F, Bt, Ct, 0],
            [B, 0, 0, 0],
            [-C, 0, M, Dt],
            [0, 0, D, 0]])
```
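As a quick structural check (a sketch; it uses the `BlockMatrix` version of `A` from the earlier cell, before the scalar redefinition), sympy can confirm that transposition acts blockwise:

```
from sympy import block_collapse

# Transposing the block matrix transposes the layout as well as each block,
# so B' should reappear as B in the mirrored position, and so on
block_collapse(A.transpose())
```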
e466a3c648b87cc017f163676b8bc7a84d7d3ba4
4,437
ipynb
Jupyter Notebook
MHD/BLKdecomps/.ipynb_checkpoints/Untitled0-checkpoint.ipynb
wathen/PhD
35524f40028541a4d611d8c78574e4cf9ddc3278
[ "MIT" ]
3
2020-10-25T13:30:20.000Z
2021-08-10T21:27:30.000Z
MHD/BLKdecomps/.ipynb_checkpoints/Untitled0-checkpoint.ipynb
wathen/PhD
35524f40028541a4d611d8c78574e4cf9ddc3278
[ "MIT" ]
null
null
null
MHD/BLKdecomps/.ipynb_checkpoints/Untitled0-checkpoint.ipynb
wathen/PhD
35524f40028541a4d611d8c78574e4cf9ddc3278
[ "MIT" ]
3
2019-10-28T16:12:13.000Z
2020-01-13T13:59:44.000Z
34.395349
1,319
0.508902
true
388
Qwen/Qwen-72B
1. YES 2. YES
0.960361
0.779993
0.749075
__label__kor_Hang
0.105857
0.578684
# Asset replacement model **Randall Romero Aguilar, PhD** This demo is based on the original Matlab demo accompanying the <a href="https://mitpress.mit.edu/books/applied-computational-economics-and-finance">Computational Economics and Finance</a> 2001 textbook by Mario Miranda and Paul Fackler. Original (Matlab) CompEcon file: **demddp02.m** Running this file requires the Python version of CompEcon. This can be installed with pip by running !pip install compecon --upgrade <i>Last updated: 2021-Oct-01</i> <hr> ## About At the beginning of each year, a manufacturer must decide whether to continue to operate an aging physical asset or replace it with a new one. An asset that is $a$ years old yields a profit contribution $p(a)$ up to $n$ years, at which point the asset becomes unsafe and must be replaced by law. The cost of a new asset is $c$. What replacement policy maximizes profits? This is an infinite horizon, deterministic model with time $t$ measured in years. ## Initial tasks ```python import numpy as np import pandas as pd import matplotlib.pyplot as plt from compecon import DDPmodel ``` ## Model Parameters Assume a maximum asset age of 5 years, asset replacement cost $c = 75$, and annual discount factor $\delta = 0.9$. ```python maxage = 5 repcost = 75 delta = 0.9 ``` ### State Space The state variable $a \in \{1, 2, 3, \dots, n\}$ is the age of the asset in years. ```python S = np.arange(1, 1 + maxage) # machine age n = S.size # number of states ``` ### Action Space The action variable $x \in \{\text{keep, replace}\}$ is the hold-replacement decision. ```python X = np.array(['keep', 'replace']) # list of actions m = len(X) # number of actions ``` ### Reward Function The reward function is \begin{equation} f(a, x) = \begin{cases} p(a), &x = \text{keep} \\ p(0) - c, &x = \text{replace} \end{cases} \end{equation} Assuming a profit contribution $p(a) = 50 − 2.5a − 2.5a^2$ that is a function of the asset age $a$ in years: ```python f = np.zeros((m, n)) f[0] = 50 - 2.5 * S - 2.5 * S ** 2 f[1] = 50 - repcost f[0, -1] = -np.inf ``` ### State Transition Function The state transition function is \begin{equation} g(a, x) = \begin{cases} a + 1, &x = \text{keep} \\ 1, &x = \text{replace} \end{cases} \end{equation} ```python g = np.zeros_like(f) g[0] = np.arange(1, n + 1) g[0, -1] = n - 1 # adjust last state so it doesn't go out of bounds ``` ## Model Structure The value of an asset of age a satisfies the Bellman equation \begin{equation} V(a) = \max\{p(a) + \delta V(a + 1),\quad p(0) − c + \delta V(1)\} \end{equation} where we set $p(n) = -\infty$ to enforce replacement of an asset of age $n$. The Bellman equation asserts that if the manufacturer keeps an asset of age $a$, he earns $p(a)$ over the coming year and begins the subsequent year with an asset that is one year older and worth $V(a+1)$; if he replaces the asset, however, he starts the year with a new asset, earns $p(0)−c$ over the year, and begins the subsequent year with an asset that is one year old and worth $V(1)$. Actually, our language is a little loose here. The value $V(a)$ measures not only the current and future net earnings of an asset of age $a$, but also the net earnings of all future assets that replace it. To solve and simulate this model, use the CompEcon class ```DDPmodel```. ```python model = DDPmodel(f, g, delta) model.solve() ``` A deterministic discrete state, discrete action, dynamic model. 
There are 2 possible actions over 5 possible states ```python solution = pd.DataFrame({ 'Age of Machine': S, 'Action': X[model.policy], 'Value': model.value}).set_index('Age of Machine') solution['Action'] = solution['Action'].astype('category') ``` ## Analysis ### Plot Optimal Value This Figure gives the value of the firm at the beginning of the period as a function of the asset’s age. ```python ax = solution['Value'].plot(); ax.set(title='Optimal Value Function', ylabel='Value', xticks=S); ``` ### Plot Optimal Policy This Figure gives the optimal action as a function of the asset’s age. ```python ax = solution['Action'].cat.codes.plot(marker='o') ax.set(title='Optimal Replacement Policy', yticks=[0, 1], yticklabels=X, xticks=S); ``` ### Simulate Model The path was computed by performing a deterministic simulation of 12 years in duration using the ```simulate()``` method. ```python sinit, nyrs = S.min() - 1, 12 t = np.arange(1 + nyrs) spath, xpath = model.simulate(sinit, nyrs) ``` ```python simul = pd.DataFrame({ 'Year': t, 'Age of Machine': S[spath], 'Action': X[xpath]}).set_index('Year') simul['Action'] = pd.Categorical(X[xpath], categories=X) simul ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>Age of Machine</th> <th>Action</th> </tr> <tr> <th>Year</th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th>0</th> <td>1</td> <td>keep</td> </tr> <tr> <th>1</th> <td>2</td> <td>keep</td> </tr> <tr> <th>2</th> <td>3</td> <td>keep</td> </tr> <tr> <th>3</th> <td>4</td> <td>replace</td> </tr> <tr> <th>4</th> <td>1</td> <td>keep</td> </tr> <tr> <th>5</th> <td>2</td> <td>keep</td> </tr> <tr> <th>6</th> <td>3</td> <td>keep</td> </tr> <tr> <th>7</th> <td>4</td> <td>replace</td> </tr> <tr> <th>8</th> <td>1</td> <td>keep</td> </tr> <tr> <th>9</th> <td>2</td> <td>keep</td> </tr> <tr> <th>10</th> <td>3</td> <td>keep</td> </tr> <tr> <th>11</th> <td>4</td> <td>replace</td> </tr> <tr> <th>12</th> <td>1</td> <td>keep</td> </tr> </tbody> </table> </div> ### Plot State and Action Paths Next Figure gives the age of the asset along the optimal path. As can be seen in this figure, the asset is replaced every four years. ```python fig, axs = plt.subplots(2, 1, sharex=True) simul['Age of Machine'].plot(ax=axs[0]) axs[0].set(title='Age Path'); simul['Action'].cat.codes.plot(marker='o', ax=axs[1]) axs[1].set(title='Replacement Path', yticks=[0, 1], yticklabels=X); ```
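As a sanity check on the solver, the same fixed point can be recovered with a few lines of plain value iteration. This is a sketch added for illustration; it assumes the arrays `f` and `g` and the discount factor `delta` defined above, and that `g` holds 0-based next-state indices (which is how it was constructed).

```python
# Repeatedly apply the Bellman operator V(a) <- max_x [ f(x, a) + delta * V(g(x, a)) ]
V = np.zeros(n)
for _ in range(1000):
    V_new = (f + delta * V[g.astype(int)]).max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

print(np.allclose(V, model.value))  # expected to print True
```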
2cd2f19f2a48c061344d1aa1b01da50707d9d53a
128,368
ipynb
Jupyter Notebook
_build/jupyter_execute/notebooks/ddp/02 Asset replacement model.ipynb
randall-romero/CompEcon-python
c7a75f57f8472c972fddcace8ff7b86fee049d29
[ "MIT" ]
23
2016-12-14T13:21:27.000Z
2020-08-23T21:04:34.000Z
_build/jupyter_execute/notebooks/ddp/02 Asset replacement model.ipynb
randall-romero/CompEcon
c7a75f57f8472c972fddcace8ff7b86fee049d29
[ "MIT" ]
1
2017-09-10T04:48:54.000Z
2018-03-31T01:36:46.000Z
_build/jupyter_execute/notebooks/ddp/02 Asset replacement model.ipynb
randall-romero/CompEcon-python
c7a75f57f8472c972fddcace8ff7b86fee049d29
[ "MIT" ]
13
2017-02-25T08:10:38.000Z
2020-05-15T09:49:16.000Z
236.841328
53,420
0.912992
true
2,061
Qwen/Qwen-72B
1. YES 2. YES
0.83762
0.815232
0.682855
__label__eng_Latn
0.961577
0.424832
# Tutorial on motion energy model implementation

This notebook demonstrates the components underlying a spatiotemporal energy model of motion perception. The model was originally introduced in 1985 by EH Adelson and JR Bergen. The basic idea is to think of motion velocity as an orientation in space-time. The model works by constructing filters with joint selectivity for speed and direction (velocity), which are then convolved with a movie of the motion stimulus.

The specific Python implementation here was written by ML Waskom based on an earlier MATLAB implementation by R Kiani. The notebook is intended to provide some intuition for the parameters of the model and to show how the high-level interface can be used for extracting motion energy estimates from a dynamic stimulus.

This notebook is best viewed locally, because it has some interactive demos and movies of the stimulus that are not reproduced in the static HTML representation.

### References:

Adelson EH & Bergen JR (1985). [Spatiotemporal energy models for perception of motion](https://www.ncbi.nlm.nih.gov/pubmed/3973762). *J Opt Soc Am A* 2(2):284-99.

Kiani R, Hanks TD, Shadlen MN (2008). [Bounded integration in parietal cortex underlies decisions even when viewing duration is dictated by the environment](https://www.ncbi.nlm.nih.gov/pubmed/18354005). *J Neurosci* 28(12):3017-29.

Waskom ML, Asfour JW, Kiani R (2018). [Perceptual insensitivity to higher-order statistical moments of coherent random dot motion](https://www.biorxiv.org/content/early/2018/04/26/261370). *J Vision*

```python
import numpy as np
from numpy.fft import fftn, fftshift, fftfreq
from scipy.ndimage import rotate

import seaborn as sns
import matplotlib.pyplot as plt
from ipywidgets import interact
```

```python
import motionenergy as me
import stimulus as stim
```

```python
%matplotlib inline
sns.set(font_scale=1.3, color_codes=True)
```

----

## Components of the spatiotemporal energy model

The spatiotemporal filters are constructed from space-time separable components. There are two functions each for the spatial and temporal components. Using pairs of quadrature filters will make the model invariant to phase.

The full spatial filters are defined as follows, letting $\alpha = \tan^{-1}(x/\sigma_c)$:

$$
\begin{align}
f_1(x,y) &= \cos^4(\alpha)\cos(4\alpha)\exp\big(-\frac{y^2}{2\sigma_g^2}\big) \\
f_2(x,y) &= \cos^4(\alpha)\sin(4\alpha)\exp\big(-\frac{y^2}{2\sigma_g^2}\big),
\end{align}
$$

On one spatial axis, the filters are fourth-order Cauchy functions (similar to a Gabor pattern) controlled by a parameter $\sigma_c$. The demo below shows how this parameter determines the width of the filter; together with the temporal components (see below), it is part of what gives the filter speed selectivity.

```python
@interact
def cauchy_param_tutorial(σ_c=(.1, .6, .05)):
    """Interactive widget to demo spatial component of filters."""
    x = me.filter_grid(256, .005, center=True)
    f1, f2 = me.cauchy(x, σ_c)
    f, ax = plt.subplots()
    ax.plot(x, f1, x, f2)
    ax.set(xlim=(-.65, .65),
           xlabel="Selective axis (deg)",
           ylabel="Amplitude")
```

    interactive(children=(FloatSlider(value=0.30000000000000004, description='σ_c', max=0.6, min=0.1, step=0.05), …

On the orthogonal axis, the filters are windowed with a Gaussian envelope; the two-dimensional filter is then oriented in space. The orientation of the filter determines its direction selectivity. The width of the Gaussian envelope influences how narrowly or broadly it is tuned around its preferred direction.
```python
def centered_colormap(data):
    """Set vmin and vmax to put center of colormap at 0."""
    lim = np.abs([data.min(), data.max()]).max()
    return dict(vmin=-lim, vmax=+lim, cmap="icefire")
```

```python
@interact
def spatial_param_tutorial(σ_c=(.05, .65), σ_g=(.02, .2, .02), θ=(-90, 90, 10)):
    """Interactive widget to demo full spatial component of filters."""
    x = me.filter_grid(125, .02, True)
    xx, yy = np.meshgrid(x, x)
    c1, c2 = me.cauchy(x, σ_c, 4)
    g = me.gaussian_envelope(x, σ_g)
    qx1, qy = np.meshgrid(c1, g)
    qx2, qy = np.meshgrid(c2, g)
    filters = ("Even filter ($f_1$)", qx1), ("Odd filter ($f_2$)", qx2)
    f, axes = plt.subplots(1, 2, figsize=(6, 3), sharey=True)
    for ax, (title, filt) in zip(axes, filters):
        filt = rotate(filt * qy, θ, reshape=False)
        ax.pcolormesh(xx, yy, filt, **centered_colormap(filt))
        ax.set(title=title)
    f.subplots_adjust(0, 0, 1, 1, .02)
    axes[0].set(xlabel="Selective axis (deg)",
                ylabel="Non-selective axis (deg)")
```

    interactive(children=(FloatSlider(value=0.35, description='σ_c', max=0.65, min=0.05), FloatSlider(value=0.1, d…

----

The temporal filters model the impulse response of direction-selective neurons in cortex. They are implemented as a pair of difference-of-Poisson functions:

$$
\begin{align}
g_1(t) &= (kt)^3\exp(-kt)\Bigg[\frac{1}{3!} - \frac{(kt)^2}{(3 + 2)!}\Bigg] \\
g_2(t) &= (kt)^5\exp(-kt)\Bigg[\frac{1}{5!} - \frac{(kt)^2}{(5 + 2)!}\Bigg]
\end{align}
$$

The main parameter is $k$, which controls the latency of the filter. Together with $\sigma_c$, this will determine the speed selectivity.

```python
@interact
def temporal_param_tutorial(k=(30, 70, 5)):
    """Interactive widget to demo temporal component of filters."""
    t = me.filter_grid(128, .005)
    f1 = me.temporal_impulse_response(t, 3, k)
    f2 = me.temporal_impulse_response(t, 5, k)
    f, ax = plt.subplots()
    lines = ax.plot(t, f1, t, f2)
    ax.legend(lines, ["$g_1$", "$g_2$"])
    ax.set(xlabel="Time (sec)", ylabel="Amplitude")
```

    interactive(children=(IntSlider(value=50, description='k', max=70, min=30, step=5), Output()), _dom_classes=('…

---

## Putting the components together

These spatial and temporal functions are combined to create full three-dimensional spatiotemporal filters. When the model is used, each stimulus is convolved with four different filters. There are two filters oriented to be selective for motion in a given direction (preferred) and then two filters oriented to be selective for motion in the reverse direction (null). This is done because motion energy is defined as the difference between the energy in the preferred and null directions (i.e. opponent motion energy). For each direction, there is an even filter and an odd filter. The responses of this quadrature pair are combined so that the system is invariant to phase.

When plotting the filters on two-dimensional axes (selective spatial dimension and temporal dimension), they look like schmeared Gabors. This emphasizes how the model characterizes motion as an orientation in space-time. Just as an oriented Gabor filter is selective for a spatial orientation, an oriented Gabor schmear is selective for a velocity.
```python # Determine the size and resolution of the filters nx, dx = 235, .01 nt, dt = 300, .001 size = nx, 1, nt res = dx, dx, dt # Define the mesh that the filters are sampled on x = me.filter_grid(nx, dx, center=True) t = me.filter_grid(nt, dt) xx, tt = np.meshgrid(x, t, indexing="ij") # Create a set of motion energy filters filters = me.motion_filters(size, res) # Show each filter in the set f, axes = plt.subplots(1, 4, sharey=True, figsize=(10, 4)) titles = ["Even preferred", "Odd preferred", "Even null", "Odd null"] for ax, title, filt in zip(axes, titles, filters): filt = filt.squeeze()[:, -nt:] ax.pcolormesh(xx, tt, filt, **centered_colormap(filt)) ax.set_title(title) axes[0].set(xlabel="Selective axis (deg)", ylabel="Time (sec)") f.subplots_adjust(0, 0, 1, 1, .02, .02) ``` We can show the spectral representation of the filter to get a better sense for how the spatial and temporal parameters control the speed selectivity. What we want is to find a ratio between the spatial and temporal frequencies so that the filter has maximum power at the speed of the stimulus we want to characterize: ```python @interact def spatiotemporal_tutorial(filter=titles, k=(30, 80, 5), σ_c=(.05, .65, .05)): """Interactive widget to demo spatiotemporal selectivity.""" # Determine the size and resolution of the filters nx, nt = 41, 41 dx, dt = .0436, .0133 size = nx, 1, nt res = dx, .01, dt # Create the filter set and select the one we want to view filters = me.motion_filters(size, res, k=k, csigx=σ_c) filt = filters[titles.index(filter)] # Take the fourier transform of the filter to show its spectral representation filt_fft = np.abs(fftshift(fftn(filt)).squeeze()).real # The filter is causal in time, so crop off the first half filt = filt.squeeze()[:, nt:] # Define the spatial mesh xs = me.filter_grid(nx, dx, center=True) ts = me.filter_grid(nt, dt) xx, tt = np.meshgrid(xs, ts, indexing="ij") # Define the spectral mesh fxs = fftshift(fftfreq(nx, dx)) fts = fftshift(fftfreq(nt * 2 - 1, dt)) fxx, ftt = np.meshgrid(fxs, fts, indexing="ij") # Plot the spatial and spectral representations of the filter gridspec = {"width_ratios": (4, 5)} f, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 5), gridspec_kw=gridspec) ax1.pcolormesh(xx, tt, filt, **centered_colormap(filt)) ax2.pcolormesh(fxx, ftt, filt_fft) ax1.set(xlabel="Selective axis ($\circ$)", ylabel="Time (s)") ax2.set(xlabel="$\omega_x$ (cycle / sec)", ylabel="$\omega_t$ (cycle / deg)") f.tight_layout() ``` interactive(children=(Dropdown(description='filter', options=('Even preferred', 'Odd preferred', 'Even null', … ---- ## Using the filters to model motion perception Having gone through the components of the model, the next section demonstrates how it might actually be used with a perceptual stimulus. First, we are going to generate a random dot motion stimulus as a 3D movie. 
```python
# Determine the parameters of the display
radius = 3  # radius of the stimulus in degrees
ppd = 40  # display spatial resolution (pixels per degree)
framerate = 1 / 60  # display temporal resolution

# Determine the parameters of the stimulus
density = 100.2  # density of dots in dots per deg^2 per sec
size = 3  # size of each dot in pixels
speed = 5  # speed of coherent motion in deg per sec
coherence = .9  # proportion of dots displaced coherently
moments = 0, 0.001, 0, 3  # mean, sd, skew, and variance of dot motion
duration = 1  # length of the movie, in seconds
```

```python
dots = stim.dot_movie(radius, density, size, speed, coherence,
                      ppd, framerate, duration, moments)
stim.play_movie(dots, framerate)
```

    /home/adrian/Git/GitHub/work/Waskom_JVision_2018/stimulus.py:416: RuntimeWarning: invalid value encountered in true_divide
      p /= integrate.trapz(p, x_orig)
    /home/adrian/Git/GitHub/work/Waskom_JVision_2018/stimulus.py:439: RuntimeWarning: invalid value encountered in less
      return rs.choice(x, replace=True, p=p, size=size)

To extract the motion energy of this stimulus, we convolve the three-dimensional arrays (the stimulus movie and each motion energy filter). The filters should have the same resolution as the movie, but they can have a different size.

```python
filter_shape = 64, 64, 25
filter_res = 1 / ppd, 1 / ppd, framerate
filters = me.motion_filters(filter_shape, filter_res)
```

Applying the filters to the stimulus will return a three-dimensional array with the same shape as the stimulus movie. We can visualize it in the same way and see that the model has replaced each dot with a local burst of motion energy:

```python
%%time
dots_energy = me.apply_motion_energy_filters(dots, filters)
stim.play_movie(dots_energy, framerate, vmax=10, cmap="mako")
```

    CPU times: user 22.5 s, sys: 2.99 s, total: 25.5 s
    Wall time: 25.7 s

The motion energy can be summed across space and shown as a function of time. Shown this way, it is clear that there is an initial rise from zero to a steady state; this rise time is determined by the temporal latency parameter. Because the random dot stimulus is stochastic, the motion energy then fluctuates somewhat around its steady state value.

```python
f, ax = plt.subplots()
t = me.filter_grid(duration / framerate, framerate)
ax.plot(t, dots_energy.sum(axis=(0, 1)))
ax.set(xlim=(0, duration), ylim=(0, None),
       xlabel="Time (sec)", ylabel="Motion energy (a.u.)")
f.tight_layout()
```

We can use a bank of filters with different preferred orientations to model the response of a population of direction-selective cells to a particular stimulus. Alternatively, you can think of this as estimating the tuning curve of the filter system given the set of parameters. Constructing the motion energy profile involves a lot of convolution, so the next cell will take some time to execute.

```python
%%time
thetas = [-60, -45, -30, -15, 0, +15, +30, +45, +60]
filter_bank = me.filter_bank(thetas, filter_shape, filter_res)
bank_energy = np.sum(me.apply_motion_energy_filters(dots, filter_bank), axis=(1, 2))
```

    CPU times: user 2min 56s, sys: 24.6 s, total: 3min 21s
    Wall time: 3min 22s

To plot the motion energy profile, we take the mean over time (after cropping to avoid the initial latency period) for each direction.
```python t = me.filter_grid(duration / framerate, framerate) energy_profile = bank_energy[:, t > .2].mean(axis=-1) ``` ```python f, ax = plt.subplots() ax.plot(thetas, energy_profile, "o-") ax.set(xlabel=r"Motion energy filter orientation ($\theta$)", ylabel="Motion energy (a.u.)") f.tight_layout() ``` ```python ```
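One simple illustration of what can be done with this profile (an added sketch, not part of the original analysis) is to read off a crude estimate of the stimulus direction as the energy-weighted circular mean of the filter orientations:

```python
# Energy-weighted circular mean of the preferred directions (in degrees);
# subtracting the baseline sharpens the weighting
w = energy_profile - energy_profile.min()
theta_rad = np.radians(thetas)
estimate = np.degrees(np.arctan2((w * np.sin(theta_rad)).sum(),
                                 (w * np.cos(theta_rad)).sum()))
print(f"Estimated motion direction: {estimate:.1f} degrees")
```

For the coherent stimulus generated above, the estimate should land near $0^\circ$, the direction preferred by the $\theta = 0$ filter.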
a5fab4d340ec25d2e4c1d64ec0dfd77da74c77d6
212,506
ipynb
Jupyter Notebook
motionenergy_tutorial.ipynb
aernesto/Waskom_JVision_2018
7b9b74976bdfa45582256fcd8fed0b072bc5c2f1
[ "BSD-3-Clause" ]
null
null
null
motionenergy_tutorial.ipynb
aernesto/Waskom_JVision_2018
7b9b74976bdfa45582256fcd8fed0b072bc5c2f1
[ "BSD-3-Clause" ]
2
2019-02-25T19:15:43.000Z
2019-02-25T19:31:03.000Z
motionenergy_tutorial.ipynb
aernesto/Waskom_JVision_2018
7b9b74976bdfa45582256fcd8fed0b072bc5c2f1
[ "BSD-3-Clause" ]
null
null
null
206.919182
102,000
0.906412
true
3,669
Qwen/Qwen-72B
1. YES 2. YES
0.887205
0.7773
0.689624
__label__eng_Latn
0.96774
0.440559
(sec:KAM)=
# An Informal Introduction to Ideas Related to KAM (Kolmogorov-Arnold-Moser) Theory

KAM theory has developed into a recognized branch of dynamical systems theory that is concerned with the study of the persistence of quasiperiodic trajectories in Hamiltonian systems subjected to perturbation (generally, a Hamiltonian perturbation). The origins of this mathematical theory lie in attempts to understand the convergence of perturbation series solutions representing quasiperiodic solutions of the three-body problem in celestial mechanics {cite}`moser2001stable,barrow1997poincare,dumas2014kam`. Significant progress was made in this area in the second half of the twentieth century, and its name represents an acronym formed by the names of the three founders: Andrei Kolmogorov, Vladimir Arnold and Jürgen Moser.

The main question that KAM theory addresses is the following:

> What happens to the <b>quasiperiodic trajectories</b> of an <b>integrable</b> Hamiltonian system when the flow is modified with a <b>perturbation</b> that respects the symplectic structure of Hamilton's equations? Will the perturbation <b>destroy</b> these quasiperiodic solutions, or is it possible that some of them will <b>survive</b> and only become slightly deformed?

The first answer to this question was provided by Kolmogorov in his famous plenary lecture {cite}`kolmogorov1954` at the International Congress of Mathematicians that took place in Amsterdam in 1954. Since the pioneering works of Poincaré on the three-body problem, the intuition built into the scientific community was that typical systems in classical mechanics exhibit chaotic behaviour in some regions of their phase space. However, Kolmogorov's remarkable announcement produced a complete shock that turned this view upside down. In his talk he claimed that for near-integrable Hamiltonian systems, that is, integrable systems subjected to a perturbation, almost all quasiperiodic motion persists.

As we will explain in more detail shortly, quasiperiodic trajectories trace out tori in phase space. Indeed, they are often viewed synonymously with invariant tori. Moreover, it turns out that the quasiperiodic trajectories that lie on _**non-resonant tori**_ are more robust to survive the perturbation than those associated to _**resonant tori**_. In fact, resonant tori are the first tori to disappear when the system is perturbed. Interestingly, among the non-resonant tori, KAM theory establishes a criterion to measure the "degree of non-resonance" that is related to their robustness with respect to the perturbation.

Despite the deep consequences and implications that the results presented by Kolmogorov would bring to the field of classical mechanics, he had only provided partial arguments for them and not complete mathematical proofs. Hence, the foundations of this new theory needed a rigorous scaffolding that arrived several years later, in the period 1962-1963, with the independent works of Arnold {cite}`arnold1962,arnold1963a,arnold1963b` and Moser {cite}`moser1962`.

The goal of this section is to provide the reader with a basic and brief overview of the mathematical ideas and concepts behind KAM theory, and how its development has contributed greatly to the recognition of this theory as an important achievement of the modern theory of dynamical systems. For the sake of clarity, we will avoid many abstract and technical details of the theory.
Instead, we will focus on explaining the mathematics behind the terms we have highlighted in boldface in the text above, so that this can help us better understand the mathematical content of the question introduced above, and the answers to it provided by KAM theory. Most of the material discussed in this section is based on, and has been adapted from, the detailed and nice descriptions of KAM theory presented in the books {cite}`diacu1996,dumas2014kam,wiggins2003introduction`.

For completeness, we begin our explanation of KAM theory by considering a Hamiltonian system with $N$ degrees-of-freedom (DoF) defined by a scalar function $H(\mathbf{q},\mathbf{p})$ known as the Hamiltonian (see {eq}`eq:hamiltoneq`). This function depends on $\mathbf{q} \in \mathbb{R}^N$, the configuration space coordinates that represent the DoF of the system, and $\mathbf{p} \in \mathbb{R}^N$, which corresponds to their canonically conjugate momenta. In this setup, the phase space where dynamics occurs is $2N$-dimensional, and if we label a point in phase space as $\mathbf{z} = (\mathbf{q},\mathbf{p}) \in \mathbb{R}^{2N}$, the evolution of the system's trajectories is governed by Hamilton's equations of motion:

```{math}
\begin{equation}
\dot{\mathbf{z}} = \mathcal{J} \, \nabla_{\mathbf{z}} H \quad \Leftrightarrow \quad
\begin{cases}
\dot{q}_i = \dfrac{\partial H}{\partial p_i}, \\[.4cm]
\dot{p}_i = -\dfrac{\partial H}{\partial q_i},
\end{cases}
\quad i \in \lbrace 1, \ldots, N\rbrace
\end{equation}
```

where $\mathcal{J}$ is the symplectic matrix (see {eq}`eq:symp_cond`) and $\nabla_{\mathbf{z}}$ is the gradient operator acting on the Hamiltonian. The value of the Hamiltonian corresponds to the total energy of the system; it is conserved along trajectories and therefore it is a constant of motion or a first integral of the system. Dynamics for a fixed energy is constrained to a $(2N-1)$-dimensional constant energy hypersurface embedded in the $2N$-dimensional phase space.

As stated in the introduction, KAM theory is concerned with perturbed integrable systems. In simple terms, a system is said to be integrable if one can find explicit formulas for its solutions, or at least, expressions for them that involve indicated integrals and elementary functions. As opposed to non-integrable systems, integrable systems admit closed form expressions for their trajectories, given initial conditions. This type of integrability is referred to as "integration by quadratures" and was first developed by Liouville {cite}`liouville1855note`.

Observe that a solution of a dynamical system is a trajectory, that is, a one-dimensional curve wandering around in the $2N$-dimensional phase space. Therefore, in order to determine this trajectory one would need to be able to obtain $2N-1$ functionally independent constants of the motion (CoM), $F_{i}(\mathbf{q},\mathbf{p}) = C_i$ where $i \in \lbrace 1,\ldots,2N-1\rbrace$, so that the result of their intersection is the sought trajectory. This means that each CoM decreases the dimensionality of the accessible phase space for the system by one. Notice also that this approach of finding constants of the motion in order to determine the solutions of the system comes from the natural approach of integrating the $2N$ ordinary differential equations that define Hamilton's equations, an operation which would yield, if it can be carried out, $2N$ integration constants that are later obtained in the classical way from initial conditions.
However, due to the special symplectic structure of Hamiltonian systems, in order to establish the integrability of a system we only need to find $N$ CoM. This criterion is known as the Liouville-Mineur-Arnold Integrability Theorem {cite}`liouville1855note,mineur1935systemes,mineur1937systemes,arnol2013mathematical`, and plays a key role in the development of KAM theory. This result states the following: <b> Liouville-Mineur-Arnold Integrability Theorem (LMAIT): </b> An $N$ DoF Hamiltonian system is integrable if and only if there exist $N$ constants of the motion $F_1,\ldots,F_N$ which are smooth analytic functions of the phase space variables, such that: * they are functionally independent, which means that the vectors $\nabla F_1 , \ldots, \nabla F_N$ are linearly independent for almost all phase space points, * they are in involution with each other, that is, $\lbrace F_i,F_j \rbrace = 0$ for each $i \neq j$. Notice that the involution condition is equivalent to $\nabla F_i$ and $\nabla F_j$ being orthogonal in the symplectic sense $$\lbrace F_i,F_j \rbrace = \left(\nabla F_i\right)^{T} \mathcal{J} \, \nabla F_j = 0.$$ This means geometrically that in the symplectic structure of Hamiltonian systems, the level hypersurfaces of the CoM in involution are orthogonal. The LMAIT criterion also ensures that if the intersection resulting from these CoM is a connected and compact manifold, it has the shape of an $N$-torus, denoted by $\mathbb{T}^{N}$. Such an object is the result of taking the cartesian product of $N$ copies of a circle, that is: ```{math} \begin{equation*} \mathbb{T}^{N} = \underbrace{S^1 \times \ldots \times S^1}_{\scriptsize{\mbox{$N$ times}}} = \left(\mathbb{R} / 2\pi \mathbb{Z}\right)^{N} \end{equation*} ``` The LMAIT result goes further by ensuring that for the phase space regions where one can find these tori, there exists a canonical transformation to action-angle coordinates $(\mathbf{I},\boldsymbol{\theta})$ such that the Hamiltonian in these new variables only depends on the actions: \begin{equation*} K(\mathbf{I}) = K(I_1,\ldots,I_N) \end{equation*} and Hamilton's equations are given by: ```{math} --- label: --- \begin{equation} \begin{cases} \dot{\theta}_i = \dfrac{\partial K}{\partial I_i} = \omega_i(\mathbf{I}) \\[.3cm] \dot{I}_i = -\dfrac{\partial K}{\partial \theta_i} = 0 \end{cases} \; , \quad i \in \lbrace 1, \ldots,N \rbrace \end{equation} ``` The action variables are $N$ independent constants of motion and they only depend on initial conditions: ```{math} --- label: eq:actions --- \begin{equation} I_i(t) = I_i^{0} = \text{constant} \quad \Leftrightarrow \quad \mathbf{I}(t) = \mathbf{I}_0 = \text{constant}. \label{eq:actions} \end{equation} ``` The actions determine the energy of the system, $$ K_0 = K(\mathbf{I}_0) = K\left(I_1^0,\ldots,I_N^0\right), $$ and the differential equations for the angle variables yield: $${\theta_i(t) = \omega_i(\mathbf{I}_0) \, t + \theta_i^0 \mod 2\pi \;\; , \quad i \in \lbrace 1, \ldots,N \rbrace}$$ where $\omega_i(\mathbf{I}_0)$ are the angular frequencies. We can write the angular solution in vector form: ```{math} --- label: eq:angles --- \begin{equation} \boldsymbol{\theta}(t) = \boldsymbol{\omega}\left(\mathbf{I}_0\right) t + \boldsymbol{\theta}_0 \mod 2\pi \label{eq:angles} \end{equation} ``` where $\boldsymbol{\omega}\left(\mathbf{I}_0\right)$ is the frequency vector that characterizes the type of motion displayed by the trajectories of the integrable system. 
Consequently, we have completely solved the motion in the action-angle coordinate space, and since the change of variable from $(\mathbf{q},\mathbf{p})$ to $(\mathbf{I},\boldsymbol{\theta})$ is canonical (symplectic), it is invertible and we can express the solutions to the original Hamiltonian $H$ in the form $\mathbf{q} = \mathbf{q}(\mathbf{I},\boldsymbol{\theta})$ and $\mathbf{p} = \mathbf{p}(\mathbf{I},\boldsymbol{\theta})$. Equations {eq}`eq:actions` and {eq}`eq:angles` tell us that the flow generated by an integrable Hamiltonian system is linear and takes place on invariant $N$-dimensional tori parametrized by $N$ angle variables. This behavior is known as quasiperiodic motion. Invariance means that a trajectory starting on one of these tori will never leave that torus. Given an energy level for the Hamiltonian system, this family of tori, which is parametrized by the action variables, foliates the energy manifold. Each member of the tori family can be labelled according to a particular value taken by the actions. In the case of $N = 2$ DoF, this foliation of the 3D energy hypersurface by 2D tori, similar to many concentric layers of an onion, can be easily visualized. {numref}`fig:tori_foliat` shows a representation of this geometrical arrangement. Next we introduce the notion of non-degeneracy of a Hamiltonian, which is a necessary requirement for many results described in the framework of KAM theory. <b>Kolmogorov non-degeneracy condition:</b> Given an integrable Hamiltonian system in action-angle coordinates $K = K(\mathbf{I})$, we say that it is non-degenerate if the frequency map: $${\mathbf{I} \;\; \mapsto \;\; \boldsymbol{\omega} = \boldsymbol{\omega}(\mathbf{I}) }$$ is a diffeomorphism. In particular, due to the inverse function theorem, a sufficient condition which implies that the Hamiltonian is non-degenerate is the following: $${ \det \left(\dfrac{\partial \boldsymbol{\omega}}{\partial \mathbf{I}}\right) = \det \left( \text{Hess}_{K}\right) \neq 0 }$$ ```{figure} figures/tori_foliation.png --- name: fig:tori_foliat --- Foliation of a 3D energy hypersurface by 2D tori for an integrable 2 DoF Hamiltonian system. Each torus in the family is labelled by the action variable. We have cut open the tori to better illustrate the layered structure. ``` At this point, it is important to define what we mean by a resonant and a non-resonant torus, because the informal statement of the main results of KAM theory that we gave at the beginning of this section relies heavily on this concept. Recall that KAM theory studies the conditions under which the tori of an integrable Hamiltonian persist when the system is subjected to a small perturbation. __Resonance:__ The frequency vector $\boldsymbol{\omega} \in \mathbb{R}^{N}$ is said to be resonant if there exists a vector of integers with at least one of its components not zero, that is, $\mathbf{k} \in \mathbb{Z}^{N} - \lbrace 0 \rbrace$, such that: ```{math} --- label: --- \begin{equation} \mathbf{k} \cdot \boldsymbol{\omega} = \sum_{i = 1}^{N} k_i \, \omega_i = 0. \end{equation} ``` Otherwise $\boldsymbol{\omega}$ is non-resonant. Tori characterized by a resonant (or non-resonant) frequency vector are known in the literature as resonant (or non-resonant) tori. 
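This distinction is easy to visualize numerically. The short sketch below (an illustration added here, assuming only numpy and matplotlib) draws the angle flow $\boldsymbol{\theta}(t) = \boldsymbol{\omega} t \mod 2\pi$ on the unrolled 2-torus for a rational and an irrational frequency ratio; the closed curve versus the ever-denser filling mirrors the figures shown further below.

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 60, 6000)
fig, axes = plt.subplots(1, 2, figsize=(8, 4), sharey=True)
cases = [(3 / 2, r'resonant: $\omega_2/\omega_1 = 3/2$'),
         (np.sqrt(2), r'non-resonant: $\omega_2/\omega_1 = \sqrt{2}$')]
for ax, (ratio, title) in zip(axes, cases):
    theta1 = t % (2 * np.pi)            # omega_1 = 1
    theta2 = (ratio * t) % (2 * np.pi)  # omega_2 = ratio * omega_1
    ax.plot(theta1, theta2, '.', ms=1)
    ax.set(title=title, xlabel=r'$\theta_1$')
axes[0].set_ylabel(r'$\theta_2$')
plt.show()
```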
Consider the nearly-integrable Hamiltonian:

```{math}
\begin{equation}
\mathcal{H}(\mathbf{I},\boldsymbol{\theta}) = h(\mathbf{I}) + \varepsilon f(\mathbf{I},\boldsymbol{\theta})
\end{equation}
```

where $h$ is the integrable Hamiltonian, $f$ is the perturbation and $\varepsilon \geq 0$ is the magnitude of the perturbation. KAM theory establishes that resonant tori are very fragile since they are the first ones to be destroyed when the perturbation is switched on, while most of the non-resonant tori (in the measure-theoretic sense) are robust and survive the perturbation, but become slightly deformed by it. The key ingredient which determines the "degree of robustness" that allows us to quantify if a torus will survive or not is the frequency vector associated to each of the tori in the integrable system. We give here the basic definitions of these concepts and, for a more detailed analysis, the reader can refer to {cite}`wiggins2003introduction` and references therein.

A simple illustration is provided by two uncoupled harmonic oscillators with frequencies $\omega_1$ and $\omega_2$. In action-angle variables the Hamiltonian is

$${ K(\mathbf{I}) = \omega_1 I_1 + \omega_2 I_2 }$$

so every trajectory lies on a 2-torus and winds around it with the fixed frequency vector $\boldsymbol{\omega} = (\omega_1,\omega_2)$: the motion is periodic when $\omega_2/\omega_1$ is rational (a resonant torus) and quasiperiodic otherwise. Note also that here the frequencies do not depend on the actions, so $\text{Hess}_{K} = 0$ and this system violates the Kolmogorov non-degeneracy condition introduced above.

In order to get a better idea and develop some geometrical intuition on the relationship between the definition of resonance and the corresponding dynamical behavior of trajectories on tori, we go back to the simple case provided by an integrable Hamiltonian system with $N = 2$ DoF. In this context, a resonant torus is determined by a frequency vector $\boldsymbol{\omega} = (\omega_1,\omega_2)$ that satisfies:

$${ k_1 \, \omega_1 + k_2 \, \omega_2 = 0 \;,\quad k_1 , k_2 \in \mathbb{Z}-\lbrace 0 \rbrace \quad \Leftrightarrow \quad \dfrac{\omega_2}{\omega_1} = -\dfrac{k_1}{k_2} \in \mathbb{Q} }$$

Motion on resonant tori is periodic, and hence trajectories are closed. For example, if we consider motion on a torus with frequencies $\omega_1 = 2$ and $\omega_2 = 3$, a trajectory lying on this torus will wind around it making $3$ complete revolutions through the hole on the torus and $2$ full revolutions around the hole before it closes onto itself. This motion is illustrated in panel A) below; the quasiperiodic case is shown in panel B) ({numref}`fig:quasip_mot`).

```{figure} figures/periodicFlow_w1_2_w2_3.png
---
---
A) Periodic motion on a resonant torus with frequency ratio $\omega_2 / \omega_1 = 3/2$.
```

```{figure} figures/quasiperFlow_w1_1_w2_sqrt2.png
---
name: fig:quasip_mot
---
B) Quasiperiodic motion on a non-resonant torus with frequency ratio $\omega_2 / \omega_1 = \sqrt{2}$. Notice how the quasiperiodic trajectory densely and uniformly fills the surface of the non-resonant torus along its evolution. This type of motion is ergodic on the torus.
```

Interestingly, the resonant and non-resonant tori of the family that fill out the energy manifold of an integrable Hamiltonian system are intermingled in a similar way as the rational and irrational numbers are distributed among the reals. Moreover, it can be shown that almost all tori are non-resonant. This is also the case for rational and irrational numbers, since rationals are dense in the real numbers, but have zero measure. This means that if we select a real number at random, we will almost always pick an irrational number.

We can illustrate the results discussed by KAM theory by considering the tori of the integrable system underlying the Chirikov standard map {cite}`chirikov1979,meiss2008` and their persistence under perturbation. The Chirikov standard map is a two-dimensional symplectic map that displays many features of the Poincaré return maps for typical 2 DoF autonomous Hamiltonian systems.
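The phase portraits described next are easy to reproduce. A minimal sketch of the map iteration is given below, using the common convention $p_{n+1} = p_n + k \sin\theta_n$, $\theta_{n+1} = \theta_n + p_{n+1}$ (both mod $2\pi$); the exact scaling used for the published figures may differ.

```python
import numpy as np
import matplotlib.pyplot as plt

def standard_map_orbit(theta0, p0, k, n_iter=1000):
    """Iterate the Chirikov standard map starting from (theta0, p0)."""
    thetas, ps = np.empty(n_iter), np.empty(n_iter)
    theta, p = theta0, p0
    for i in range(n_iter):
        p = (p + k * np.sin(theta)) % (2 * np.pi)
        theta = (theta + p) % (2 * np.pi)
        thetas[i], ps[i] = theta, p
    return thetas, ps

# Seed a line of initial conditions and overlay their orbits
k = 0.8
for x0 in np.linspace(0, 2 * np.pi, 24, endpoint=False):
    th, p = standard_map_orbit(x0, x0, k)
    plt.plot(th, p, '.', ms=0.5)
plt.xlabel(r'$\theta$')
plt.ylabel(r'$p$')
plt.show()
```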
This discrete dynamical system is defined by a two-dimensional area-preserving map which depends on a parameter $k$ that determines the strength of the perturbation that is applied to the underlying integrable system. The solutions for the unperturbed system ($k = 0$) are one-dimensional tori, that is, circles, and are represented in {numref}`fig:std_map` A). Notice that these tori appear as lines because the left and right boundaries of the domain are identified, and so are the top and bottom edges of the square. This is the manifestation of the toroidal geometry of the phase space of the standard map.

If we increase the perturbation strength to $k = 0.3$ we observe in panel B) that most of the tori have survived the perturbation, but in a slightly distorted shape. Moreover, we notice also that some of the tori of the originally unperturbed system break (the resonant ones), and in the process they generate smaller tori, and also hyperbolic points that possess stable and unstable manifolds. Increasing the perturbation further to $k = 0.8$, one can see in panel C) that non-resonant tori continue to break up into smaller tori, giving rise to structures known as islands of regularity, and that these subregions of the phase space are surrounded by a larger sea of random points that indicates chaotic motion. Panels D), E) and F) of {numref}`fig:std_map` show how with increasing $k$ all tori eventually break up. This is in part one of the main messages that can be drawn from KAM theory: the coexistence of chaotic and regular motion in the phase space of Hamiltonian systems {cite}`markus1974generic`.

```{figure} figures/std_map_k_0.png
---
---
Phase space of the Chirikov's Standard Map as revealed by means of a Poincaré map for different values of the perturbation parameter: A) $k = 0$
```

```{figure} figures/std_map_k_03.png
---
---
Phase space of the Chirikov's Standard Map as revealed by means of a Poincaré map for different values of the perturbation parameter: B) $k = 0.3$
```

```{figure} figures/std_map_k_08.png
---
---
Phase space of the Chirikov's Standard Map as revealed by means of a Poincaré map for different values of the perturbation parameter: C) $k = 0.8$
```

```{figure} figures/std_map_k_12.png
---
---
Phase space of the Chirikov's Standard Map as revealed by means of a Poincaré map for different values of the perturbation parameter: D) $k = 1.2$
```

```{figure} figures/std_map_k_24.png
---
---
Phase space of the Chirikov's Standard Map as revealed by means of a Poincaré map for different values of the perturbation parameter: E) $k = 2.4$
```

```{figure} figures/std_map_k_150.png
---
name: fig:std_map
---
Phase space of the Chirikov's Standard Map as revealed by means of a Poincaré map for different values of the perturbation parameter: F) $k = 15$
```

# References

```{bibliography}
bibliography/chapter2.bib
```
0986b45b7d2b705d0cc94d070bc36e1bdb33ee82
25,069
ipynb
Jupyter Notebook
book/_build/jupyter_execute/content/chapter2_2.ipynb
champsproject/lagrangian_descriptors
b3a88a2243bd5b0dce7cc945f9504bfadc9a4b19
[ "CC-BY-4.0" ]
12
2020-07-24T17:35:42.000Z
2021-08-12T17:31:53.000Z
book/_build/jupyter_execute/content/chapter2_2.ipynb
champsproject/lagrangian_descriptors
b3a88a2243bd5b0dce7cc945f9504bfadc9a4b19
[ "CC-BY-4.0" ]
12
2020-05-26T17:28:38.000Z
2020-07-27T10:40:54.000Z
book/_build/jupyter_execute/content/chapter2_2.ipynb
champsproject/lagrangian_descriptors
b3a88a2243bd5b0dce7cc945f9504bfadc9a4b19
[ "CC-BY-4.0" ]
null
null
null
68.122283
1,715
0.680841
true
5,437
Qwen/Qwen-72B
1. YES 2. YES
0.793106
0.805632
0.638952
__label__eng_Latn
0.997596
0.32283
# Computing Differential Equations   No. 5-1: Direct Integration and Separation of Variables

### Student ID [_________] Class [_____] Class No. [_____] Name [_______________]

########### First-order ODE: direct integration form

$$ \frac{dy}{dx}=f(x) $$

$$ y = \int f(x)dx = F(x)+C $$

Here $F(x)$ is an antiderivative of $f(x)$, and $C$ is the constant of integration.

########## First-order ODE: separable form

$$ \frac{dy}{dx}=f(x)g(y) $$

Rearrange so that the left-hand side contains only $y$ and the right-hand side only $x$, then take the indefinite integral of both sides:

$$ \int \frac{1}{g(y)}dy = \int f(x)dx $$

Solving these integrals gives the solution $y$.

```python
from sympy import *
x, n, a, C1, C2, C3 = symbols('x n a C1 C2 C3')
f, g, y = symbols('f g y', cls=Function)
init_printing()
m = '5///1'
i = 0
```

```python
diffeq = Eq(y(x).diff(x, 1), 3*y(x))
i = i + 1
print('No.', m, '---', i)
diffeq
```

```python
dsolve(diffeq, y(x))
```

```python
diffeq = Eq(y(x).diff(x, 1), y(x)/(x*(x+2)))
i = i + 1
print('No.', m, '---', i)
diffeq
```

```python
dsolve(diffeq, y(x))
```

```python
diffeq = Eq(y(x).diff(x, 1), 2*y(x)*x)
i = i + 1
print('No.', m, '---', i)
diffeq
```

```python
dsolve(diffeq, y(x))
```

```python
diffeq = Eq(y(x).diff(x, 1), y(x)**2/x**2)
i = i + 1
print('No.', m, '---', i)
diffeq
```

```python
dsolve(diffeq, y(x))
```

```python
diffeq = Eq(y(x).diff(x, 1), 2*y(x)/(x+2))
i = i + 1
print('No.', m, '---', i)
diffeq
```

```python
simplify(dsolve(diffeq, y(x)))
```

```python
diffeq = Eq(y(x).diff(x, 1), 1 + 3*y(x)/x)
i = i + 1
print('No.', m, '---', i)
diffeq
```

```python
simplify(dsolve(diffeq, y(x)))
```

```python
diffeq = Eq(y(x).diff(x, 1) + 2*y(x), 0)
i = i + 1
print('No.', m, '---', i)
diffeq
```

```python
reseq = dsolve(diffeq, y(x))
reseq
```

```python
f0 = Eq(reseq.rhs.subs(x, 2), 6)
c = solve(f0)
c
```

```python
reseq01 = reseq.subs(C1, c[0])
reseq01
```

```python
diffeq = Eq(y(x).diff(x, 1) + 2*y(x)*x, 0)
i = i + 1
print('No.', m, '---', i)
diffeq
```

```python
reseq = dsolve(diffeq, y(x))
reseq
```

```python
f0 = Eq(reseq.rhs.subs(x, 0), 1)
c = solve(f0)
c
```

```python
reseq01 = reseq.subs(C1, c[0])
reseq01
```

```python

```
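As a side note (an added illustration, assuming a reasonably recent sympy), the two initial value problems above can also be handled in a single call by passing the initial condition to `dsolve` through `ics`, instead of solving for the constant by hand:

```python
# Solve y' + 2*x*y = 0 with y(0) = 1 directly
dsolve(Eq(y(x).diff(x, 1) + 2*y(x)*x, 0), y(x), ics={y(0): 1})
```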
795e41a0da10f544e57befd05810160824997752
49,306
ipynb
Jupyter Notebook
09_20181126-bibunhoteisiki-7-1-Ex&ans-1.ipynb
kt-pro-git-1/Calculus_Differential_Equation-public
d5deaf117e6841c4f6ceb53bc80b020220fd4814
[ "MIT" ]
1
2019-07-10T11:33:18.000Z
2019-07-10T11:33:18.000Z
09_20181126-bibunhoteisiki-7-1-Ex&ans-1.ipynb
kt-pro-git-1/Calculus_Differential_Equation-public
d5deaf117e6841c4f6ceb53bc80b020220fd4814
[ "MIT" ]
null
null
null
09_20181126-bibunhoteisiki-7-1-Ex&ans-1.ipynb
kt-pro-git-1/Calculus_Differential_Equation-public
d5deaf117e6841c4f6ceb53bc80b020220fd4814
[ "MIT" ]
null
null
null
70.841954
2,208
0.803695
true
1,007
Qwen/Qwen-72B
1. YES 2. YES
0.92944
0.859664
0.799006
__label__yue_Hant
0.277964
0.694692
# Tutorial: Small Angle Neutron Scattering

Small Angle Neutron Scattering (SANS) is a powerful reciprocal space technique that can be used to investigate magnetic structures on mesoscopic length scales. In SANS the atomic structure generally has a minimal impact, hence the sample can be approximated by a continuous magnetisation vector field \[[Mühlbauer 2019](https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.91.015004)\]. The differential scattering cross section $d\Sigma/d\Omega$, evaluated as a function of the scattering vector ${\bf q}$, can be used to predict the scattering patterns produced.

## SANS Reference frame

In `mag2exp` the experimental SANS reference frame is defined to be congruent with the sample reference frame. The neutron polarisation direction can then be defined relative to the sample reference frame along with the desired scattering plane.

Two scattering geometries are commonly used in SANS, in which the magnetic field that polarises the neutrons is applied either perpendicular or parallel to the incoming neutron beam. The `mag2exp` package gives a more general setup, with the ability to have the polarisation pointing in any arbitrary direction. We can recreate the standard scattering geometries:

**Perpendicular geometry**: the magnetic field is applied along the $z$ direction while the incoming neutron beam propagates along the $x$ direction. This is the same as `polarisation=(0, 0, 1)` and cutting the scattering plane along `x=0`.

**Parallel geometry**: both the magnetic field and the incoming neutron beam are along the $z$ direction. This is the same as `polarisation=(0, 0, 1)` and cutting the scattering plane along `z=0`.

## The micromagnetic simulation

A micromagnetic simulation can be set up using <code>Ubermag</code> to obtain a three-dimensional magnetic structure with periodic boundary conditions in the $xy$ plane.

```python
# NBVAL_IGNORE_OUTPUT
import oommfc as oc
import discretisedfield as df
import micromagneticmodel as mm
import numpy as np
import ubermagutil.units as uu
np.random.seed(1)

region = df.Region(p1=(-160e-9, -160e-9, 0), p2=(160e-9, 160e-9, 20e-9))
mesh = df.Mesh(region=region, cell=(5e-9, 5e-9, 5e-9), bc='xyz')

system = mm.System(name='Box2')
system.energy = (mm.Exchange(A=1.6e-11)
                 + mm.DMI(D=4e-3, crystalclass='T')
                 + mm.Zeeman(H=(0, 0, 2e5)))

Ms = 1.1e6

def m_fun(pos):
    return 2 * np.random.rand(3) - 1

# create system with above geometry and initial magnetisation
system.m = df.Field(mesh, dim=3, value=m_fun, norm=Ms)
system.m.plane('z').mpl()
```

Relax the system and plot its magnetisation.

```python
# NBVAL_IGNORE_OUTPUT
# minimize the energy
md = oc.MinDriver()
md.drive(system)

# Plot relaxed configuration: vectors in z-plane
system.m.plane('z').mpl()
```

```python
system.m.plane('x').mpl()
```

Now we have a magnetisation texture we can compute the SANS scattering cross sections.

## Computing SANS Cross-sections

We can use the `mag2exp` package to calculate the cross sections. The scattering can be calculated with the use of the magnetic interaction vector ${\bf Q}$ \[[Mühlbauer 2019](https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.91.015004)\] where

\begin{equation}
{\bf Q} = \hat{\bf q} \times \left[ \widetilde{\bf M} \times \hat{\bf q} \right],
\end{equation}

where $\hat{\bf q}$ is the unit scattering vector and $\widetilde{\bf M}$ is the Fourier transform of the magnetisation.
The scattering vector is defined as
\begin{equation}
{\bf q} = {\bf k}_1 - {\bf k}_0,
\end{equation}
where ${\bf k}_0$ and ${\bf k}_1$ are the incident and scattered wavevectors, respectively. From this, the magnetic contribution to the cross sections can be calculated as
\begin{equation}
\frac{d\Sigma}{d\Omega} \sim |{\bf Q} \cdot {\bf \sigma}|^2,
\end{equation}
where ${\bf \sigma}$ is the Pauli vector
\begin{equation}
{\bf \sigma} = \begin{bmatrix} \sigma_x \\ \sigma_y \\ \sigma_z \end{bmatrix},
\end{equation}
and
\begin{align}
\sigma_x &= \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \\
\sigma_y &= \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \\
\sigma_z &= \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.
\end{align}
For a polarisation in an arbitrary direction the Pauli vector can be rotated accordingly. The polarised cross sections take the form
\begin{equation}
\frac{d\Sigma}{d\Omega} = \begin{pmatrix} \frac{d\Sigma^{++}}{d\Omega} & \frac{d\Sigma^{-+}}{d\Omega}\\ \frac{d\Sigma^{+-}}{d\Omega} & \frac{d\Sigma^{--}}{d\Omega} \end{pmatrix}.
\end{equation}
These can then be combined to obtain the half-polarised and unpolarised cross sections:
\begin{align}
\frac{d\Sigma^{+}}{d\Omega} &= \frac{d\Sigma^{++}}{d\Omega} + \frac{d\Sigma^{+-}}{d\Omega} \\
\frac{d\Sigma^{-}}{d\Omega} &= \frac{d\Sigma^{--}}{d\Omega} + \frac{d\Sigma^{-+}}{d\Omega} \\
\frac{d\Sigma}{d\Omega} &= \frac{1}{2} \left( \frac{d\Sigma^{+}}{d\Omega} + \frac{d\Sigma^{-}}{d\Omega} \right) \\
\frac{d\Sigma}{d\Omega} &= \frac{1}{2} \left( \frac{d\Sigma^{++}}{d\Omega} + \frac{d\Sigma^{+-}}{d\Omega} + \frac{d\Sigma^{--}}{d\Omega} + \frac{d\Sigma^{-+}}{d\Omega} \right)
\end{align}
For example, the unpolarised cross section can be calculated with the polarisation along the $z$ direction.

```python
import mag2exp
```

```python
cross_section = mag2exp.sans.cross_section(system.m, method='unpol',
                                           polarisation=(0, 0, 1))
```

The `mag2exp.sans.cross_section` function produces a three-dimensional cross section in scattering space. A plane of this cross section can then be cut to obtain the relevant scattering plane. For example, the `z=0` scattering plane is used here; this is equivalent to the incoming neutron beam being parallel to the $z$ direction. As the cross section is a `discretisedfield` object, the built-in plotting functions can be used to view it.

NOTE: The values of the axes in Fourier space are frequency NOT angular frequency, so DO NOT include a factor of $2\pi$, i.e. $|{\bf k}| = \frac{1}{\lambda} \neq \frac{2 \pi}{\lambda}$, where $\bf k$ is the wave vector and $\lambda$ is the wavelength.

```python
# NBVAL_IGNORE_OUTPUT
cross_section.plane(z=0).mpl.scalar(cmap='gray',
                                    interpolation='spline16',
                                    colorbar_label=r'Intensity')
```

Due to the high intensity of some areas, a linear colour map makes some features more difficult to see. We can use `matplotlib.colors` to change the colour bar to a logarithmic scale. This reveals the higher order, low intensity diffraction peaks.
```python
# NBVAL_IGNORE_OUTPUT
import matplotlib.colors as colors

cross_section.plane(z=0).mpl.scalar(cmap='gray',
                                    colorbar_label=r'Intensity',
                                    norm=colors.LogNorm(vmin=1e-3, vmax=cross_section.real.array.max()))
```

It is also possible to plot just a selected region of the cross section:

```python
# NBVAL_IGNORE_OUTPUT
sans_region = df.Region(p1=(-40e6, -40e6, 0), p2=(40e6, 40e6, 0.5))
cross_section.plane(z=0)[sans_region].mpl.scalar(cmap='gray',
                                                 interpolation='spline16',
                                                 colorbar_label=r'Intensity')
```

In the parallel scattering geometry the spin-flip cross sections are

```python
# NBVAL_IGNORE_OUTPUT
cross_section = mag2exp.sans.cross_section(system.m, method='pn',
                                           polarisation=(0, 0, 1))
cross_section.plane(z=0).mpl.scalar(cmap='gray',
                                    interpolation='spline16',
                                    colorbar_label=r'Intensity (arb.)')
cross_section = mag2exp.sans.cross_section(system.m, method='np',
                                           polarisation=(0, 0, 1))
cross_section.plane(z=0).mpl.scalar(cmap='gray',
                                    interpolation='spline16',
                                    colorbar_label=r'Intensity (arb.)')
```

and the non-spin-flip cross sections are

```python
# NBVAL_IGNORE_OUTPUT
cross_section = mag2exp.sans.cross_section(system.m, method='pp',
                                           polarisation=(0, 0, 1))
cross_section.plane(z=0).mpl.scalar(cmap='gray',
                                    interpolation='spline16',
                                    colorbar_label=r'Intensity (arb.)')
cross_section = mag2exp.sans.cross_section(system.m, method='nn',
                                           polarisation=(0, 0, 1))
cross_section.plane(z=0).mpl.scalar(cmap='gray',
                                    interpolation='spline16',
                                    colorbar_label=r'Intensity (arb.)')
```

We can also look at the scattering geometry where the beam and polarisation are no longer parallel.

```python
# NBVAL_IGNORE_OUTPUT
cross_section = mag2exp.sans.cross_section(system.m, method='unpol',
                                           polarisation=(1, 0, 0))
cross_section.plane(z=0).mpl.scalar(cmap='gray',
                                    interpolation='spline16',
                                    colorbar_label=r'Intensity')
```

```python
cross_section = mag2exp.sans.cross_section(system.m, method='pp',
                                           polarisation=(1, 0, 0))
cross_section.plane(z=0).mpl.scalar(cmap='gray',
                                    interpolation='spline16',
                                    colorbar_label=r'Intensity (arb.)')
cross_section = mag2exp.sans.cross_section(system.m, method='nn',
                                           polarisation=(1, 0, 0))
cross_section.plane(z=0).mpl.scalar(cmap='gray',
                                    interpolation='spline16',
                                    colorbar_label=r'Intensity (arb.)')
```

```python
cross_section = mag2exp.sans.cross_section(system.m, method='pn',
                                           polarisation=(1, 0, 0))
cross_section.plane(z=0).mpl.scalar(cmap='gray',
                                    interpolation='spline16',
                                    colorbar_label=r'Intensity (arb.)')
cross_section = mag2exp.sans.cross_section(system.m, method='np',
                                           polarisation=(1, 0, 0))
cross_section.plane(z=0).mpl.scalar(cmap='gray',
                                    interpolation='spline16',
                                    colorbar_label=r'Intensity (arb.)')
```

It is possible to examine the chiral function $-2i\chi$, which gives an indication of the asymmetry of the scattering.

```python
chiral = mag2exp.sans.chiral_function(system.m,
                                      polarisation=(1, 0, 0))
chiral.plane(z=0).mpl.scalar(cmap='gray',
                             interpolation='spline16',
                             colorbar_label=r'Intensity (arb.)')
```
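The half/unpolarised relations given earlier can also be checked numerically. The sketch below reconstructs the unpolarised cross section from the four polarised channels using the `method` values already used in this notebook; it assumes the `discretisedfield.Field` objects returned by `mag2exp.sans.cross_section` support elementwise arithmetic and expose `.real.array`, which may need adapting to your package versions.

```python
# Sanity check (sketch): the unpolarised cross section should equal half the
# sum of the four polarised channels, per the equations above.
polarisation = (0, 0, 1)
channels = {m: mag2exp.sans.cross_section(system.m, method=m,
                                          polarisation=polarisation)
            for m in ('pp', 'pn', 'np', 'nn')}
unpol = mag2exp.sans.cross_section(system.m, method='unpol',
                                   polarisation=polarisation)
reconstructed = 0.5 * (channels['pp'] + channels['pn']
                       + channels['np'] + channels['nn'])

# Compare the real parts of the two cross sections numerically.
print(np.allclose(unpol.real.array, reconstructed.real.array))
```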
2cab58ff8c92a89d30b52f15083dd43a5e166bd8
891,037
ipynb
Jupyter Notebook
docs/SANS.ipynb
ubermag/exsim
35e7a88716a9ed2c9a34f4c93c628560a597b57f
[ "BSD-3-Clause" ]
null
null
null
docs/SANS.ipynb
ubermag/exsim
35e7a88716a9ed2c9a34f4c93c628560a597b57f
[ "BSD-3-Clause" ]
6
2021-06-10T13:42:08.000Z
2021-07-21T08:57:50.000Z
docs/SANS.ipynb
ubermag/exsim
35e7a88716a9ed2c9a34f4c93c628560a597b57f
[ "BSD-3-Clause" ]
null
null
null
1,528.365352
256,564
0.959388
true
2,751
Qwen/Qwen-72B
1. YES 2. YES
0.795658
0.727975
0.57922
__label__eng_Latn
0.921949
0.184051
```python from preamble import * %matplotlib notebook import matplotlib as mpl mpl.rcParams['legend.numpoints'] = 1 ``` ## Evaluation Metrics and scoring ### Metrics for binary classification ```python from sklearn.model_selection import train_test_split data = pd.read_csv("data/bank-campaign.csv") X = data.drop("target", axis=1).values y = data.target.values X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0) ``` ```python from sklearn.dummy import DummyClassifier dummy_majority = DummyClassifier(strategy='most_frequent').fit(X_train, y_train) pred_most_frequent = dummy_majority.predict(X_test) print("predicted labels: %s" % np.unique(pred_most_frequent)) print("score: %f" % dummy_majority.score(X_test, y_test)) ``` ```python from sklearn.tree import DecisionTreeClassifier tree = DecisionTreeClassifier(max_depth=2).fit(X_train, y_train) pred_tree = tree.predict(X_test) tree.score(X_test, y_test) ``` ```python from sklearn.linear_model import LogisticRegression dummy = DummyClassifier().fit(X_train, y_train) pred_dummy = dummy.predict(X_test) print("dummy score: %f" % dummy.score(X_test, y_test)) logreg = LogisticRegression(C=0.1).fit(X_train, y_train) pred_logreg = logreg.predict(X_test) print("logreg score: %f" % logreg.score(X_test, y_test)) ``` # Confusion matrices ```python from sklearn.metrics import confusion_matrix confusion = confusion_matrix(y_test, pred_logreg) print(confusion) ``` ```python mglearn.plots.plot_binary_confusion_matrix() ``` ```python print("Most frequent class:") print(confusion_matrix(y_test, pred_most_frequent)) print("\nDummy model:") print(confusion_matrix(y_test, pred_dummy)) print("\nDecision tree:") print(confusion_matrix(y_test, pred_tree)) print("\nLogistic Regression") print(confusion_matrix(y_test, pred_logreg)) ``` ##### Relation to accuracy \begin{equation} \text{Accuracy} = \frac{\text{TP} + \text{TN}}{\text{TP} + \text{TN} + \text{FP} + \text{FN}} \end{equation} #### Precision, recall and f-score \begin{equation} \text{Precision} = \frac{\text{TP}}{\text{TP} + \text{FP}} \end{equation} \begin{equation} \text{Recall} = \frac{\text{TP}}{\text{TP} + \text{FN}} \end{equation} \begin{equation} \text{F} = 2 \cdot \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}} \end{equation} ```python from sklearn.metrics import f1_score print("f1 score most frequent: %.2f" % f1_score(y_test, pred_most_frequent, pos_label="yes")) print("f1 score dummy: %.2f" % f1_score(y_test, pred_dummy, pos_label="yes")) print("f1 score tree: %.2f" % f1_score(y_test, pred_tree, pos_label="yes")) print("f1 score logreg: %.2f" % f1_score(y_test, pred_logreg, pos_label="yes")) ``` ```python from sklearn.metrics import classification_report print(classification_report(y_test, pred_most_frequent, target_names=["no", "yes"])) ``` ```python print(classification_report(y_test, pred_tree, target_names=["no", "yes"])) ``` ```python print(classification_report(y_test, pred_logreg, target_names=["no", "yes"])) ``` # Taking uncertainty into account ```python from mglearn.datasets import make_blobs from sklearn.svm import SVC X, y = make_blobs(n_samples=(400, 50), centers=2, cluster_std=[7.0, 2], random_state=22) X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0) svc = SVC(gamma=.05).fit(X_train, y_train) ``` ```python mglearn.plots.plot_decision_threshold() ``` ```python print(classification_report(y_test, svc.predict(X_test))) ``` ```python y_pred_lower_threshold = svc.decision_function(X_test) > -.8 ``` ```python 
print(classification_report(y_test, y_pred_lower_threshold)) ``` ## Precision-Recall curves and ROC curves ```python from sklearn.metrics import precision_recall_curve precision, recall, thresholds = precision_recall_curve(y_test, svc.decision_function(X_test)) ``` ```python # create a similar dataset as before, but with more samples to get a smoother curve X, y = make_blobs(n_samples=(4000, 500), centers=2, cluster_std=[7.0, 2], random_state=22) X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0) svc = SVC(gamma=.05).fit(X_train, y_train) precision, recall, thresholds = precision_recall_curve( y_test, svc.decision_function(X_test)) # find threshold closest to zero: close_zero = np.argmin(np.abs(thresholds)) plt.figure() plt.plot(precision[close_zero], recall[close_zero], 'o', markersize=10, label="threshold zero", fillstyle="none", c='k', mew=2) plt.plot(precision, recall, label="precision recall curve") plt.xlabel("precision") plt.ylabel("recall") plt.title("precision_recall_curve"); plt.legend(loc="best") ``` ```python from sklearn.ensemble import RandomForestClassifier rf = RandomForestClassifier(n_estimators=100, random_state=0, max_features=2) rf.fit(X_train, y_train) # RandomForestClassifier has predict_proba, but not decision_function precision_rf, recall_rf, thresholds_rf = precision_recall_curve( y_test, rf.predict_proba(X_test)[:, 1]) plt.figure() plt.plot(precision, recall, label="svc") plt.plot(precision[close_zero], recall[close_zero], 'o', markersize=10, label="threshold zero svc", fillstyle="none", c='k', mew=2) plt.plot(precision_rf, recall_rf, label="rf") close_default_rf = np.argmin(np.abs(thresholds_rf - 0.5)) plt.plot(precision_rf[close_default_rf], recall_rf[close_default_rf], '^', markersize=10, label="threshold 0.5 rf", fillstyle="none", c='k', mew=2) plt.xlabel("precision") plt.ylabel("recall") plt.legend(loc="best") plt.title("precision_recall_comparison"); ``` ```python print("f1_score of random forest: %f" % f1_score(y_test, rf.predict(X_test))) print("f1_score of svc: %f" % f1_score(y_test, svc.predict(X_test))) ``` ```python from sklearn.metrics import average_precision_score ap_rf = average_precision_score(y_test, rf.predict_proba(X_test)[:, 1]) ap_svc = average_precision_score(y_test, svc.decision_function(X_test)) print("average precision of random forest: %f" % ap_rf) print("average precision of svc: %f" % ap_svc) ``` # Receiver Operating Characteristics (ROC) and AUC \begin{equation} \text{FPR} = \frac{\text{FP}}{\text{FP} + \text{TN}} \end{equation} ```python from sklearn.metrics import roc_curve fpr, tpr, thresholds = roc_curve(y_test, svc.decision_function(X_test)) plt.figure() plt.plot(fpr, tpr, label="ROC Curve") plt.xlabel("FPR") plt.ylabel("TPR (recall)") plt.title("roc_curve"); # find threshold closest to zero: close_zero = np.argmin(np.abs(thresholds)) plt.plot(fpr[close_zero], tpr[close_zero], 'o', markersize=10, label="threshold zero", fillstyle="none", c='k', mew=2) plt.legend(loc=4) ``` ```python from sklearn.metrics import roc_curve fpr_rf, tpr_rf, thresholds_rf = roc_curve(y_test, rf.predict_proba(X_test)[:, 1]) plt.figure() plt.plot(fpr, tpr, label="ROC Curve SVC") plt.plot(fpr_rf, tpr_rf, label="ROC Curve RF") plt.xlabel("FPR") plt.ylabel("TPR (recall)") plt.title("roc_curve_comparison"); plt.plot(fpr[close_zero], tpr[close_zero], 'o', markersize=10, label="threshold zero SVC", fillstyle="none", c='k', mew=2) close_default_rf = np.argmin(np.abs(thresholds_rf - 0.5)) plt.plot(fpr_rf[close_default_rf], 
tpr_rf[close_default_rf], '^', markersize=10,
         label="threshold 0.5 RF", fillstyle="none", c='k', mew=2)
plt.legend(loc=4)
```

```python
from sklearn.metrics import roc_auc_score

rf_auc = roc_auc_score(y_test, rf.predict_proba(X_test)[:, 1])
svc_auc = roc_auc_score(y_test, svc.decision_function(X_test))
print("AUC for Random Forest: %f" % rf_auc)
print("AUC for SVC: %f" % svc_auc)
```

```python
X = data.drop("target", axis=1).values
y = data.target.values
X.shape
```

```python
X_train, X_test, y_train, y_test = train_test_split(
    X, y, random_state=0, train_size=.1, test_size=.1)

plt.figure()
for gamma in [1, 0.01, 0.001]:
    svc = SVC(gamma=gamma).fit(X_train, y_train)
    accuracy = svc.score(X_test, y_test)
    auc = roc_auc_score(y_test == "yes", svc.decision_function(X_test))
    fpr, tpr, _ = roc_curve(y_test, svc.decision_function(X_test), pos_label="yes")
    print("gamma = %.03f accuracy = %.02f AUC = %.02f" % (gamma, accuracy, auc))
    plt.plot(fpr, tpr, label="gamma=%.03f" % gamma, linewidth=4)
plt.xlabel("FPR")
plt.ylabel("TPR")
plt.xlim(-0.01, 1)
plt.ylim(0, 1.02)
plt.legend(loc="best")
```

### Multi-class classification

```python
from sklearn.metrics import accuracy_score
from sklearn.datasets import load_digits

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)
lr = LogisticRegression().fit(X_train, y_train)
pred = lr.predict(X_test)
print("accuracy: %0.3f" % accuracy_score(y_test, pred))
print("confusion matrix:")
print(confusion_matrix(y_test, pred))
```

```python
plt.figure()
scores_image = mglearn.tools.heatmap(confusion_matrix(y_test, pred),
                                     xlabel='Predicted label',
                                     ylabel='True label',
                                     xticklabels=digits.target_names,
                                     yticklabels=digits.target_names,
                                     cmap=plt.cm.gray_r, fmt="%d")
plt.title("Confusion matrix")
plt.gca().invert_yaxis()
```

```python
print(classification_report(y_test, pred))
```

```python
print("micro average f1 score: %f" % f1_score(y_test, pred, average="micro"))
print("macro average f1 score: %f" % f1_score(y_test, pred, average="macro"))
```

## Using evaluation metrics in model selection

```python
from sklearn.model_selection import cross_val_score

# default scoring for classification is accuracy
print("default scoring ", cross_val_score(SVC(), X, y))
# providing scoring="accuracy" doesn't change the results
explicit_accuracy = cross_val_score(SVC(), digits.data, digits.target == 9,
                                    scoring="accuracy")
print("explicit accuracy scoring ", explicit_accuracy)
roc_auc = cross_val_score(SVC(), digits.data, digits.target == 9,
                          scoring="roc_auc")
print("AUC scoring ", roc_auc)
```

```python
from sklearn.model_selection import GridSearchCV

# back to the bank campaign
X = data.drop("target", axis=1).values
y = data.target.values
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=.1, test_size=.1, random_state=0)

# we provide a somewhat bad grid to illustrate the point:
param_grid = {'gamma': [0.0001, 0.01, 0.1, 1, 10]}
# using the default scoring of accuracy:
grid = GridSearchCV(SVC(), param_grid=param_grid)
grid.fit(X_train, y_train)
print("Grid-Search with accuracy")
print("Best parameters:", grid.best_params_)
print("Best cross-validation score (accuracy):", grid.best_score_)
print("Test set AUC: %.3f" % roc_auc_score(y_test, grid.decision_function(X_test)))
print("Test set accuracy: %.3f" % grid.score(X_test, y_test))

# using AUC scoring instead:
grid = GridSearchCV(SVC(), param_grid=param_grid, scoring="roc_auc")
grid.fit(X_train, y_train)
print("\nGrid-Search with AUC")
print("Best parameters:", grid.best_params_)
print("Best cross-validation score (AUC):", grid.best_score_)
print("Test set AUC: %.3f" % roc_auc_score(y_test, grid.decision_function(X_test)))
print("Test set accuracy: %.3f" % grid.score(X_test, y_test))
```

```python
from sklearn.metrics.scorer import SCORERS
print(sorted(SCORERS.keys()))
```

```python
def my_scoring(fitted_estimator, X_test, y_test):
    return (fitted_estimator.predict(X_test) == y_test).mean()

GridSearchCV(SVC(), param_grid, scoring=my_scoring)
```

# Exercises

Load the adult dataset from ``data/adult.data``, and split it into training and test set. Apply grid-search to the training set, searching for the best C for Logistic Regression; also search over L1 penalty vs. L2 penalty. Plot the ROC curve of the best model on the test set.

```python

```
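A possible starting point for the exercise is sketched below. It assumes `data/adult.data` is the usual UCI "adult" CSV (comma-separated, no header row, the income label in the last column); the file path and column handling may need adjusting for your copy of the data.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Load the adult data; assumed format: no header, income label in last column.
adult = pd.read_csv("data/adult.data", header=None)
X_adult = pd.get_dummies(adult.iloc[:, :-1]).values  # one-hot encode categoricals
y_adult = (adult.iloc[:, -1].str.strip() == ">50K").values

X_train, X_test, y_train, y_test = train_test_split(X_adult, y_adult,
                                                    random_state=0)

# Grid-search over C and the penalty type; liblinear supports both l1 and l2.
param_grid = {'C': [0.01, 0.1, 1, 10], 'penalty': ['l1', 'l2']}
grid = GridSearchCV(LogisticRegression(solver='liblinear'), param_grid=param_grid)
grid.fit(X_train, y_train)
print("Best parameters:", grid.best_params_)

# ROC curve of the best model on the test set.
fpr, tpr, _ = roc_curve(y_test, grid.decision_function(X_test))
plt.figure()
plt.plot(fpr, tpr)
plt.xlabel("FPR")
plt.ylabel("TPR")
```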
0797b84bc61b6348a706cda72ae5c6c9b70241e4
20,552
ipynb
Jupyter Notebook
notebooks/03 Evaluation Metrics.ipynb
lampsonnguyen/ml-training-advance
992c8304683879ade23410cfa4478622980ef420
[ "MIT" ]
null
null
null
notebooks/03 Evaluation Metrics.ipynb
lampsonnguyen/ml-training-advance
992c8304683879ade23410cfa4478622980ef420
[ "MIT" ]
null
null
null
notebooks/03 Evaluation Metrics.ipynb
lampsonnguyen/ml-training-advance
992c8304683879ade23410cfa4478622980ef420
[ "MIT" ]
2
2018-04-20T03:09:43.000Z
2021-07-23T05:48:42.000Z
28.192044
140
0.566174
true
3,213
Qwen/Qwen-72B
1. YES 2. YES
0.803174
0.83762
0.672754
__label__eng_Latn
0.317725
0.401365
```python
from scipy.stats import uniform, expon, norm
from scipy import integrate

uniform.pdf(x=5, loc=1, scale=9)
```

    0.1111111111111111

```python
1 - norm.cdf(x=0.3, loc=100, scale=15)
```

    0.9999999999850098

```python
uniform.pdf(x=-1, loc=1, scale=15)
```

    0.0

```python

```

    14.75

```python
1/14.75*60
```

    4.067796610169491

```python

```

    0.0003333333333333333

```python
from sympy import Symbol, diff

x = Symbol('x')
func = x * (15 - (x / 4))  # explicit multiplication; writing x(15 - x/4) would try to call x
diff(func, x)
```

```python

```
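Continuing the sympy snippet, here is a short sketch (assuming the intended function was $f(x) = x\,(15 - x/4)$) showing how to find its stationary point by solving $f'(x) = 0$:

```python
from sympy import Symbol, diff, solve

x = Symbol('x')
func = x * (15 - x / 4)
dfunc = diff(func, x)   # 15 - x/2
print(dfunc)
print(solve(dfunc, x))  # stationary point at x = 30
```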
ca7a8ed6fb868419e6e9363a3b37d49627a07190
4,288
ipynb
Jupyter Notebook
Semester/SW03/SW03.ipynb
florianbaer/STAT
7cb86406ed99b88055c92c1913b46e8995835cbb
[ "MIT" ]
null
null
null
Semester/SW03/SW03.ipynb
florianbaer/STAT
7cb86406ed99b88055c92c1913b46e8995835cbb
[ "MIT" ]
null
null
null
Semester/SW03/SW03.ipynb
florianbaer/STAT
7cb86406ed99b88055c92c1913b46e8995835cbb
[ "MIT" ]
null
null
null
23.822222
1,026
0.522621
true
196
Qwen/Qwen-72B
1. YES 2. YES
0.851953
0.626124
0.533428
__label__yue_Hant
0.412017
0.077662
###### Content under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c)2014 L.A. Barba, C.D. Cooper, G.F. Forsyth.

# Riding the wave

## Numerical schemes for hyperbolic PDEs

Welcome back! This is the second notebook of *Riding the wave: Convection problems*, the third module of ["Practical Numerical Methods with Python"](https://openedx.seas.gwu.edu/courses/course-v1:MAE+MAE6286+2017/about).

The first notebook of this module discussed conservation laws and developed the non-linear traffic equation. We learned about the effect of the wave speed on the stability of the numerical method, and on the CFL number. We also realized that the forward-time/backward-space difference scheme really has many limitations: it cannot deal with wave speeds that move in more than one direction. It is also first-order accurate in space and time, which often is just not good enough. This notebook will introduce some new numerical schemes for conservation laws, continuing with the traffic-flow problem as motivation.

## Red light!

Let's explore the behavior of different numerical schemes for a moving shock wave. In the context of the traffic-flow model of the previous notebook, imagine a very busy road and a red light at $x=4$. Cars accumulate quickly in the front, where we have the maximum allowed density of cars between $x=3$ and $x=4$, and there is incoming traffic at 50% of the maximum allowed density $(\rho = 0.5\rho_{\rm max})$. Mathematically, this is:

$$
\begin{equation}
\rho(x,0) = \left\{ \begin{array}{cc}
0.5 \rho_{\rm max} & 0 \leq x < 3 \\
\rho_{\rm max} & 3 \leq x \leq 4 \\
\end{array} \right.
\end{equation}
$$

Let's find out what the initial condition looks like.

```python
import numpy
from matplotlib import pyplot
%matplotlib inline
```

```python
# Set the font family and size to use for Matplotlib figures.
pyplot.rcParams['font.family'] = 'serif'
pyplot.rcParams['font.size'] = 16
```

```python
def rho_red_light(x, rho_max):
    """
    Computes the "red light" initial condition with shock.

    Parameters
    ----------
    x : numpy.ndarray
        Locations on the road as a 1D array of floats.
    rho_max : float
        The maximum traffic density allowed.

    Returns
    -------
    rho : numpy.ndarray
        The initial car density along the road
        as a 1D array of floats.
    """
    rho = rho_max * numpy.ones_like(x)
    mask = numpy.where(x < 3.0)
    rho[mask] = 0.5 * rho_max
    return rho
```

```python
# Set parameters.
nx = 81  # number of locations on the road
L = 4.0  # length of the road
dx = L / (nx - 1)  # distance between two consecutive locations
nt = 40  # number of time steps to compute
rho_max = 10.0  # maximum traffic density allowed
u_max = 1.0  # maximum traffic speed

# Get the road locations.
x = numpy.linspace(0.0, L, num=nx)

# Compute the initial traffic density.
rho0 = rho_red_light(x, rho_max)
```

```python
# Plot the initial traffic density.
fig = pyplot.figure(figsize=(6.0, 4.0))
pyplot.xlabel(r'$x$')
pyplot.ylabel(r'$\rho$')
pyplot.grid()
line = pyplot.plot(x, rho0,
                   color='C0', linestyle='-', linewidth=2)[0]
pyplot.xlim(0.0, L)
pyplot.ylim(4.0, 11.0)
pyplot.tight_layout()
```

The question we would like to answer is: **How will cars accumulate at the red light?**

We will solve this problem using different numerical schemes, to see how they perform. These schemes are:

* Lax-Friedrichs
* Lax-Wendroff
* MacCormack

Before we do any coding, let's think about the equation a little bit. The wave speed $u_{\rm wave}$ is $-1$ for $\rho = \rho_{\rm max}$ and non-positive wherever $\rho \geq \rho_{\rm max}/2$, so nothing in this problem propagates to the right.
We should see a solution moving left, maintaining the shock geometry. #### Figure 1. The exact solution is a shock wave moving left. Now to some coding! First, let's define some useful functions and prepare to make some nice animations later. ```python def flux(rho, u_max, rho_max): """ Computes the traffic flux F = V * rho. Parameters ---------- rho : numpy.ndarray Traffic density along the road as a 1D array of floats. u_max : float Maximum speed allowed on the road. rho_max : float Maximum car density allowed on the road. Returns ------- F : numpy.ndarray The traffic flux along the road as a 1D array of floats. """ F = rho * u_max * (1.0 - rho / rho_max) return F ``` Before we investigate different schemes, let's create the function to update the Matplotlib figure during the animation. ```python from matplotlib import animation from IPython.display import HTML ``` ```python def update_plot(n, rho_hist): """ Update the line y-data of the Matplotlib figure. Parameters ---------- n : integer The time-step index. rho_hist : list of numpy.ndarray objects The history of the numerical solution. """ fig.suptitle('Time step {:0>2}'.format(n)) line.set_ydata(rho_hist[n]) ``` ## Lax-Friedrichs scheme Recall the conservation law for vehicle traffic, resulting in the following equation for the traffic density: $$ \begin{equation} \frac{\partial \rho}{\partial t} + \frac{\partial F}{\partial x} = 0 \end{equation} $$ $F$ is the *traffic flux*, which in the linear traffic-speed model is given by: $$ \begin{equation} F = \rho u_{\rm max} \left(1-\frac{\rho}{\rho_{\rm max}}\right) \end{equation} $$ In the time variable, the natural choice for discretization is always a forward-difference formula; time invariably moves forward! $$ \begin{equation} \frac{\partial \rho}{\partial t}\approx \frac{1}{\Delta t}( \rho_i^{n+1}-\rho_i^n ) \end{equation} $$ As is usual, the discrete locations on the 1D spatial grid are denoted by indices $i$ and the discrete time instants are denoted by indices $n$. In a convection problem, using first-order discretization in space leads to excessive numerical diffusion (as you probably observed in [Lesson 1 of Module 2](https://nbviewer.jupyter.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/02_spacetime/02_01_1DConvection.ipynb)). The simplest approach to get second-order accuracy in space is to use a central difference: $$ \begin{equation} \frac{\partial F}{\partial x} \approx \frac{1}{2\Delta x}( F_{i+1}-F_{i-1}) \end{equation} $$ But combining these two choices for time and space discretization in the convection equation has catastrophic results! The "forward-time, central scheme" (FTCS) is **unstable**. (Go on: try it; you know you want to!) The Lax-Friedrichs scheme was proposed by Lax (1954) as a clever trick to stabilize the forward-time, central scheme. The idea was to replace the solution value at $\rho^n_i$ by the average of the values at the neighboring grid points. If we do that replacement, we get the following discretized equation: $$ \begin{equation} \frac{\rho_i^{n+1}-\frac{1}{2}(\rho^n_{i+1}+\rho^n_{i-1})}{\Delta t} = -\frac{F^n_{i+1}-F^n_{i-1}}{2 \Delta x} \end{equation} $$ Take a careful look: the difference formula no longer uses the value at $\rho^n_i$ to obtain $\rho^{n+1}_i$. The stencil of the Lax-Friedrichs scheme is slightly different than that for the forward-time, central scheme. #### Figure 2. Stencil of the forward-time/central scheme. #### Figure 3. Stencil of the Lax-Friedrichs scheme. This numerical discretization is **stable**. 
Unfortunately, substituting $\rho^n_i$ by the average of its neighbors introduces a first-order error. _Nice try, Lax!_

To implement the scheme in code, we need to isolate the value at the next time step, $\rho^{n+1}_i$, so we can write a time-stepping loop:

$$
\begin{equation}
\rho_i^{n+1} = \frac{1}{2}(\rho^n_{i+1}+\rho^n_{i-1}) - \frac{\Delta t}{2 \Delta x}(F^n_{i+1}-F^n_{i-1})
\end{equation}
$$

The function below implements Lax-Friedrichs for our traffic model. All the schemes in this notebook are wrapped in their own functions to help with displaying animations of the results. This is also good practice for developing modular, reusable code.

In order to display animations, we're going to hold the results of each time step in `rho_hist`, a list containing one 1D density array per time step (the initial condition plus the `nt` computed steps).

```python
def lax_friedrichs(rho0, nt, dt, dx, bc_values, *args):
    """
    Computes the traffic density on the road at a certain time
    given the initial traffic density.
    Integration using Lax-Friedrichs scheme.

    Parameters
    ----------
    rho0 : numpy.ndarray
        The initial traffic density along the road
        as a 1D array of floats.
    nt : integer
        The number of time steps to compute.
    dt : float
        The time-step size to integrate.
    dx : float
        The distance between two consecutive locations.
    bc_values : 2-tuple of floats
        The value of the density at the first and last locations.
    args : list or tuple
        Positional arguments to be passed to the flux function.

    Returns
    -------
    rho_hist : list of numpy.ndarray objects
        The history of the car density along the road.
    """
    rho_hist = [rho0.copy()]
    rho = rho0.copy()
    for n in range(nt):
        # Compute the flux.
        F = flux(rho, *args)
        # Advance in time using Lax-Friedrichs scheme.
        rho[1:-1] = (0.5 * (rho[:-2] + rho[2:]) -
                     dt / (2.0 * dx) * (F[2:] - F[:-2]))
        # Set the value at the first location.
        rho[0] = bc_values[0]
        # Set the value at the last location.
        rho[-1] = bc_values[1]
        # Record the time-step solution.
        rho_hist.append(rho.copy())
    return rho_hist
```

### Lax-Friedrichs with $\frac{\Delta t}{\Delta x}=1$

We are now all set to run! First, let's try with CFL = 1.

```python
# Set the time-step size based on CFL limit.
sigma = 1.0
dt = sigma * dx / u_max  # time-step size

# Compute the traffic density at all time steps.
rho_hist = lax_friedrichs(rho0, nt, dt, dx, (rho0[0], rho0[-1]),
                          u_max, rho_max)
```

```python
# Create an animation of the traffic density.
anim = animation.FuncAnimation(fig, update_plot,
                               frames=nt, fargs=(rho_hist,),
                               interval=100)
# Display the video.
HTML(anim.to_html5_video())
```

##### Think

* What do you see in the animation above? How does the numerical solution compare with the exact solution (a left-traveling shock wave)?
* What types of errors do you think we see?
* What do you think of the Lax-Friedrichs scheme, so far?

### Lax-Friedrichs with $\frac{\Delta t}{\Delta x} = 0.5$

Would the solution improve if we use smaller time steps? Let's check that!

```python
# Set the time-step size based on CFL limit.
sigma = 0.5
dt = sigma * dx / u_max  # time-step size

# Compute the traffic density at all time steps.
rho_hist = lax_friedrichs(rho0, nt, dt, dx, (rho0[0], rho0[-1]),
                          u_max, rho_max)
```

```python
# Create an animation of the traffic density.
anim = animation.FuncAnimation(fig, update_plot,
                               frames=nt, fargs=(rho_hist,),
                               interval=100)
# Display the video.
HTML(anim.to_html5_video())
```

##### Dig deeper

Notice the strange "staircase" behavior on the leading edge of the wave?
You may be interested to learn more about this: a feature typical of what is sometimes called "odd-even decoupling." Last year we published a collection of lessons in Computational Fluid Dynamics, called _CFD Python_, where we discuss [odd-even decoupling](https://nbviewer.jupyter.org/github/barbagroup/CFDPython/blob/14b56718ac1508671de66bab3fe432e93cb59fcb/lessons/19_Odd_Even_Decoupling.ipynb).

* How does this solution compare with the previous one, where the Courant number was $\frac{\Delta t}{\Delta x}=1$?

## Lax-Wendroff scheme

The Lax-Friedrichs method uses a clever trick to stabilize the central difference in space for convection, but loses an order of accuracy in doing so. First-order methods are just not good enough for convection problems, especially when you have sharp gradients (shocks). The Lax-Wendroff (1960) method was the _first_ scheme ever to achieve second-order accuracy in both space and time. It is therefore a landmark in the history of computational fluid dynamics.

To develop the Lax-Wendroff scheme, we need to do a bit of work. Sit down, grab a notebook and grit your teeth. We want you to follow this derivation in your own hand. It's good for you!

Start with the Taylor series expansion (in the time variable) of $\rho^{n+1}$ about $\rho^n$:

$$
\begin{equation}
\rho^{n+1} = \rho^n + \frac{\partial\rho^n}{\partial t} \Delta t + \frac{(\Delta t)^2}{2}\frac{\partial^2\rho^n}{\partial t^2} + \ldots
\end{equation}
$$

For the conservation law with $F=F(\rho)$, and using our beloved chain rule, we can write:

$$
\begin{equation}
\frac{\partial \rho}{\partial t} = -\frac{\partial F}{\partial x} = -\frac{\partial F}{\partial \rho} \frac{\partial \rho}{\partial x} = -J \frac{\partial \rho}{\partial x}
\end{equation}
$$

where

$$
\begin{equation}
J = \frac{\partial F}{\partial \rho} = u_{\rm max} \left(1-2\frac{\rho}{\rho_{\rm max}} \right)
\end{equation}
$$

is the _Jacobian_ for the traffic model. Next, we can do a little trickery:

$$
\begin{equation}
\frac{\partial F}{\partial t} = \frac{\partial F}{\partial \rho} \frac{\partial \rho}{\partial t} = J \frac{\partial \rho}{\partial t} = -J \frac{\partial F}{\partial x}
\end{equation}
$$

In the last step above, we used the differential equation of the traffic model to replace the time derivative by a spatial derivative.
These equivalences imply that

$$
\begin{equation}
\frac{\partial^2\rho}{\partial t^2} = \frac{\partial}{\partial x} \left( J \frac{\partial F}{\partial x} \right)
\end{equation}
$$

Let's use all this in the Taylor expansion:

$$
\begin{equation}
\rho^{n+1} = \rho^n - \frac{\partial F^n}{\partial x} \Delta t + \frac{(\Delta t)^2}{2} \frac{\partial}{\partial x} \left(J\frac{\partial F^n}{\partial x} \right)+ \ldots
\end{equation}
$$

We can now reorganize this and discretize the spatial derivatives with central differences to get the following discrete equation:

$$
\begin{equation}
\frac{\rho_i^{n+1} - \rho_i^n}{\Delta t} = -\frac{F^n_{i+1}-F^n_{i-1}}{2 \Delta x} + \frac{\Delta t}{2} \left(\frac{(J \frac{\partial F}{\partial x})^n_{i+\frac{1}{2}}-(J \frac{\partial F}{\partial x})^n_{i-\frac{1}{2}}}{\Delta x}\right)
\end{equation}
$$

Now, approximate the rightmost term (inside the parentheses) in the above equation as follows:

\begin{equation}
\frac{J^n_{i+\frac{1}{2}}\left(\frac{F^n_{i+1}-F^n_{i}}{\Delta x}\right)-J^n_{i-\frac{1}{2}}\left(\frac{F^n_i-F^n_{i-1}}{\Delta x}\right)}{\Delta x}
\end{equation}

Then evaluate the Jacobian at the midpoints by using averages of the points on either side:

\begin{equation}
\frac{\frac{1}{2 \Delta x}(J^n_{i+1}+J^n_i)(F^n_{i+1}-F^n_i)-\frac{1}{2 \Delta x}(J^n_i+J^n_{i-1})(F^n_i-F^n_{i-1})}{\Delta x}.
\end{equation}

Our equation now reads:

\begin{align}
&\frac{\rho_i^{n+1} - \rho_i^n}{\Delta t} = -\frac{F^n_{i+1}-F^n_{i-1}}{2 \Delta x} + \cdots \\ \nonumber
&+ \frac{\Delta t}{4 \Delta x^2} \left( (J^n_{i+1}+J^n_i)(F^n_{i+1}-F^n_i)-(J^n_i+J^n_{i-1})(F^n_i-F^n_{i-1})\right)
\end{align}

Solving for $\rho_i^{n+1}$:

\begin{align}
&\rho_i^{n+1} = \rho_i^n - \frac{\Delta t}{2 \Delta x} \left(F^n_{i+1}-F^n_{i-1}\right) + \cdots \\ \nonumber
&+ \frac{(\Delta t)^2}{4(\Delta x)^2} \left[ (J^n_{i+1}+J^n_i)(F^n_{i+1}-F^n_i)-(J^n_i+J^n_{i-1})(F^n_i-F^n_{i-1})\right]
\end{align}

with

\begin{equation}
J^n_i = \frac{\partial F}{\partial \rho} = u_{\rm max} \left(1-2\frac{\rho^n_i}{\rho_{\rm max}} \right).
\end{equation}

The Lax-Wendroff update is a little bit long. Remember that you can use backslashes (`\`) to split up a statement across several lines. This can help make code easier to parse (and also easier to debug!).

```python
def jacobian(rho, u_max, rho_max):
    """
    Computes the Jacobian for our traffic model.

    Parameters
    ----------
    rho : numpy.ndarray
        Traffic density along the road as a 1D array of floats.
    u_max : float
        Maximum speed allowed on the road.
    rho_max : float
        Maximum car density allowed on the road.

    Returns
    -------
    J : numpy.ndarray
        The Jacobian as a 1D array of floats.
    """
    J = u_max * (1.0 - 2.0 * rho / rho_max)
    return J
```

```python
def lax_wendroff(rho0, nt, dt, dx, bc_values, *args):
    """
    Computes the traffic density on the road at a certain time
    given the initial traffic density.
    Integration using Lax-Wendroff scheme.

    Parameters
    ----------
    rho0 : numpy.ndarray
        The initial traffic density along the road
        as a 1D array of floats.
    nt : integer
        The number of time steps to compute.
    dt : float
        The time-step size to integrate.
    dx : float
        The distance between two consecutive locations.
    bc_values : 2-tuple of floats
        The value of the density at the first and last locations.
    args : list or tuple
        Positional arguments to be passed to the
        flux and Jacobian functions.

    Returns
    -------
    rho_hist : list of numpy.ndarray objects
        The history of the car density along the road.
    """
    rho_hist = [rho0.copy()]
    rho = rho0.copy()
    for n in range(nt):
        # Compute the flux.
        F = flux(rho, *args)
        # Compute the Jacobian.
        J = jacobian(rho, *args)
        # Advance in time using Lax-Wendroff scheme.
        rho[1:-1] = (rho[1:-1] -
                     dt / (2.0 * dx) * (F[2:] - F[:-2]) +
                     dt**2 / (4.0 * dx**2) *
                     ((J[1:-1] + J[2:]) * (F[2:] - F[1:-1]) -
                      (J[:-2] + J[1:-1]) * (F[1:-1] - F[:-2])))
        # Set the value at the first location.
        rho[0] = bc_values[0]
        # Set the value at the last location.
        rho[-1] = bc_values[1]
        # Record the time-step solution.
        rho_hist.append(rho.copy())
    return rho_hist
```

Now that we've defined a function for the Lax-Wendroff scheme, we can use the same procedure as above to animate and view our results.

### Lax-Wendroff with $\frac{\Delta t}{\Delta x}=1$

```python
# Set the time-step size based on CFL limit.
sigma = 1.0
dt = sigma * dx / u_max  # time-step size

# Compute the traffic density at all time steps.
rho_hist = lax_wendroff(rho0, nt, dt, dx, (rho0[0], rho0[-1]),
                        u_max, rho_max)
```

```python
# Create an animation of the traffic density.
anim = animation.FuncAnimation(fig, update_plot,
                               frames=nt, fargs=(rho_hist,),
                               interval=100)
# Display the video.
HTML(anim.to_html5_video())
```

Interesting! The Lax-Wendroff method captures the sharpness of the shock much better than the Lax-Friedrichs scheme, but there is a new problem: a strange wiggle appears right at the tail of the shock. This is typical of many second-order methods: they introduce _numerical oscillations_ where the solution is not smooth. Bummer.

### Lax-Wendroff with $\frac{\Delta t}{\Delta x} = 0.5$

How do the oscillations at the shock front vary with changes to the CFL condition? You might think that the solution will improve if you make the time step smaller ... let's see.

```python
# Set the time-step size based on CFL limit.
sigma = 0.5
dt = sigma * dx / u_max  # time-step size

# Compute the traffic density at all time steps.
rho_hist = lax_wendroff(rho0, nt, dt, dx, (rho0[0], rho0[-1]),
                        u_max, rho_max)
```

```python
# Create an animation of the traffic density.
anim = animation.FuncAnimation(fig, update_plot,
                               frames=nt, fargs=(rho_hist,),
                               interval=100)
# Display the video.
HTML(anim.to_html5_video())
```

Eek! The numerical oscillations got worse. Double bummer!

Why do we observe oscillations with second-order methods? This is a question of fundamental importance!

## MacCormack Scheme

The numerical oscillations that you observed with the Lax-Wendroff method on the traffic model can become severe in some problems. But actually the main drawback of the Lax-Wendroff method is having to calculate the Jacobian in every time step. With more complicated equations (like the Euler equations), calculating the Jacobian is a large computational expense.

Robert W. MacCormack introduced the first version of his now-famous method at the 1969 AIAA Hypervelocity Impact Conference, held in Cincinnati, Ohio, but the paper did not at first catch the attention of the aeronautics community. The next year, however, he presented at the 2nd International Conference on Numerical Methods in Fluid Dynamics at Berkeley. His paper there (MacCormack, 1971) was a landslide success. MacCormack got a promotion and continued to work on applications of his method to the compressible Navier-Stokes equations. In 1973, NASA gave him the prestigious H. Julian Allen award for his work.

The MacCormack scheme is a two-step method, in which the first step is called a _predictor_ and the second step is called a _corrector_. It achieves second-order accuracy in both space and time.
One version is as follows: $$ \begin{equation} \rho^*_i = \rho^n_i - \frac{\Delta t}{\Delta x} (F^n_{i+1}-F^n_{i}) \ \ \ \ \ \ \text{(predictor)} \end{equation} $$ $$ \begin{equation} \rho^{n+1}_i = \frac{1}{2} (\rho^n_i + \rho^*_i - \frac{\Delta t}{\Delta x} (F^*_i - F^{*}_{i-1})) \ \ \ \ \ \ \text{(corrector)} \end{equation} $$ If you look closely, it appears like the first step is a forward-time/forward-space scheme, and the second step is like a forward-time/backward-space scheme (these can also be reversed), averaged with the first result. What is so cool about this? You can compute problems with left-running waves and right-running waves, and the MacCormack scheme gives you a stable method (subject to the CFL condition). Nice! Let's try it. ```python def maccormack(rho0, nt, dt, dx, bc_values, *args): """ Computes the traffic density on the road at a certain time given the initial traffic density. Integration using MacCormack scheme. Parameters ---------- rho0 : numpy.ndarray The initial traffic density along the road as a 1D array of floats. nt : integer The number of time steps to compute. dt : float The time-step size to integrate. dx : float The distance between two consecutive locations. bc_values : 2-tuple of floats The value of the density at the first and last locations. args : list or tuple Positional arguments to be passed to the flux function. Returns ------- rho_hist : list of numpy.ndarray objects The history of the car density along the road. """ rho_hist = [rho0.copy()] rho = rho0.copy() rho_star = rho.copy() for n in range(nt): # Compute the flux. F = flux(rho, *args) # Predictor step of the MacCormack scheme. rho_star[1:-1] = (rho[1:-1] - dt / dx * (F[2:] - F[1:-1])) # Compute the flux. F = flux(rho_star, *args) # Corrector step of the MacCormack scheme. rho[1:-1] = 0.5 * (rho[1:-1] + rho_star[1:-1] - dt / dx * (F[1:-1] - F[:-2])) # Set the value at the first location. rho[0] = bc_values[0] # Set the value at the last location. rho[-1] = bc_values[1] # Record the time-step solution. rho_hist.append(rho.copy()) return rho_hist ``` ### MacCormack with $\frac{\Delta t}{\Delta x} = 1$ ```python # Set the time-step size based on CFL limit. sigma = 1.0 dt = sigma * dx / u_max # time-step size # Compute the traffic density at all time steps. rho_hist = maccormack(rho0, nt, dt, dx, (rho0[0], rho0[-1]), u_max, rho_max) ``` ```python # Create an animation of the traffic density. anim = animation.FuncAnimation(fig, update_plot, frames=nt, fargs=(rho_hist,), interval=100) # Display the video. HTML(anim.to_html5_video()) ``` ### MacCormack with $\frac{\Delta t}{\Delta x}= 0.5$ Once again, we ask: how does the CFL number affect the errors? Which one gives better results? You just have to try it. ```python # Set the time-step size based on CFL limit. sigma = 0.5 dt = sigma * dx / u_max # time-step size # Compute the traffic density at all time steps. rho_hist = maccormack(rho0, nt, dt, dx, (rho0[0], rho0[-1]), u_max, rho_max) ``` ```python # Create an animation of the traffic density. anim = animation.FuncAnimation(fig, update_plot, frames=nt, fargs=(rho_hist,), interval=100) # Display the video. HTML(anim.to_html5_video()) ``` ##### Dig Deeper You can also obtain a MacCormack scheme by reversing the predictor and corrector steps. For shocks, the best resolution will occur when the difference in the predictor step is in the direction of propagation. Try it out! Was our choice here the ideal one? In which case is the shock better resolved? 
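For the "Dig Deeper" experiment above, here is a minimal sketch of the reversed variant: a backward difference in the predictor and a forward difference in the corrector, the opposite of the `maccormack` function above. It reuses the `flux` function defined earlier in this notebook.

```python
def maccormack_reversed(rho0, nt, dt, dx, bc_values, *args):
    """MacCormack scheme with the predictor and corrector
    differences reversed: backward difference in the predictor,
    forward difference in the corrector."""
    rho_hist = [rho0.copy()]
    rho = rho0.copy()
    rho_star = rho.copy()
    for n in range(nt):
        F = flux(rho, *args)
        # Predictor step: backward difference in space.
        rho_star[1:-1] = rho[1:-1] - dt / dx * (F[1:-1] - F[:-2])
        F = flux(rho_star, *args)
        # Corrector step: forward difference, averaged with rho^n.
        rho[1:-1] = 0.5 * (rho[1:-1] + rho_star[1:-1] -
                           dt / dx * (F[2:] - F[1:-1]))
        # Boundary values.
        rho[0] = bc_values[0]
        rho[-1] = bc_values[1]
        rho_hist.append(rho.copy())
    return rho_hist
```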
##### Challenge task

In the *red light* problem, $\rho \geq \rho_{\rm max}/2$, making the wave speed negative at all points. You might be wondering why we introduced these new methods; couldn't we have just used a forward-time/forward-space scheme? But, what if $\rho_{\rm in} < \rho_{\rm max}/2$? Now, a whole region has positive wave speeds and forward-time/forward-space is unstable.

* How do Lax-Friedrichs, Lax-Wendroff and MacCormack behave in this case? Try it out!
* As you decrease $\rho_{\rm in}$, what happens to the velocity of the shock? Why do you think that happens?

## References

* Peter D. Lax (1954), "Weak solutions of nonlinear hyperbolic equations and their numerical computation," _Commun. Pure and Appl. Math._, Vol. 7, pp. 159–193.
* Peter D. Lax and Burton Wendroff (1960), "Systems of conservation laws," _Commun. Pure and Appl. Math._, Vol. 13, pp. 217–237.
* R. W. MacCormack (1969), "The effect of viscosity in hypervelocity impact cratering," AIAA paper 69-354. Reprinted on _Journal of Spacecraft and Rockets_, Vol. 40, pp. 757–763 (2003). Also on _Frontiers of Computational Fluid Dynamics_, edited by D. A. Caughey, M. M. Hafez (2002), chapter 2: [read on Google Books](http://books.google.com/books?id=QBsnMOz_8qcC&lpg=PA27&ots=uqCeuH1U6S&lr&pg=PA27#v=onepage&q&f=false).
* R. W. MacCormack (1971), "Numerical solution of the interaction of a shock wave with a laminar boundary layer," _Proceedings of the 2nd Int. Conf. on Numerical Methods in Fluid Dynamics_, Lecture Notes in Physics, Vol. 8, Springer, Berlin, pp. 151–163.

---

###### The cell below loads the style of the notebook.

```python
from IPython.core.display import HTML
css_file = '../../styles/numericalmoocstyle.css'
HTML(open(css_file, 'r').read())
```

<link href='http://fonts.googleapis.com/css?family=Alegreya+Sans:100,300,400,500,700,800,900,100italic,300italic,400italic,500italic,700italic,800italic,900italic' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Arvo:400,700,400italic' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=PT+Mono' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Shadows+Into+Light' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Nixie+One' rel='stylesheet' type='text/css'>
<link href='https://fonts.googleapis.com/css?family=Source+Code+Pro' rel='stylesheet' type='text/css'>
<style>
@font-face { font-family: "Computer Modern"; src: url('http://mirrors.ctan.org/fonts/cm-unicode/fonts/otf/cmunss.otf'); }
#notebook_panel { /* main background */ background: rgb(245,245,245); }
div.cell { /* set cell width */ width: 750px; }
div #notebook { /* centre the content */ background: #fff; /* white background for content */ width: 1000px; margin: auto; padding-left: 0em; }
#notebook li { /* More space between bullet points */ margin-top:0.8em; }
/* draw border around running cells */
div.cell.border-box-sizing.code_cell.running { border: 1px solid #111; }
/* Put a solid color box around each cell and its output, visually linking them*/
div.cell.code_cell { background-color: rgb(256,256,256); border-radius: 0px; padding: 0.5em; margin-left:1em; margin-top: 1em; }
div.text_cell_render{ font-family: 'Alegreya Sans' sans-serif; line-height: 140%; font-size: 125%; font-weight: 400; width:600px; margin-left:auto; margin-right:auto; }
/* Formatting for header cells */
.text_cell_render h1 { font-family: 'Nixie One', serif; font-style:regular; font-weight: 400;
font-size: 45pt; line-height: 100%; color: rgb(0,51,102); margin-bottom: 0.5em; margin-top: 0.5em; display: block; } .text_cell_render h2 { font-family: 'Nixie One', serif; font-weight: 400; font-size: 30pt; line-height: 100%; color: rgb(0,51,102); margin-bottom: 0.1em; margin-top: 0.3em; display: block; } .text_cell_render h3 { font-family: 'Nixie One', serif; margin-top:16px; font-size: 22pt; font-weight: 600; margin-bottom: 3px; font-style: regular; color: rgb(102,102,0); } .text_cell_render h4 { /*Use this for captions*/ font-family: 'Nixie One', serif; font-size: 14pt; text-align: center; margin-top: 0em; margin-bottom: 2em; font-style: regular; } .text_cell_render h5 { /*Use this for small titles*/ font-family: 'Nixie One', sans-serif; font-weight: 400; font-size: 16pt; color: rgb(163,0,0); font-style: italic; margin-bottom: .1em; margin-top: 0.8em; display: block; } .text_cell_render h6 { /*use this for copyright note*/ font-family: 'PT Mono', sans-serif; font-weight: 300; font-size: 9pt; line-height: 100%; color: grey; margin-bottom: 1px; margin-top: 1px; } .CodeMirror{ font-family: "Source Code Pro"; font-size: 90%; } .alert-box { padding:10px 10px 10px 36px; margin:5px; } .success { color:#666600; background:rgb(240,242,229); } </style>
5c5e7447e35d466e292f6722e50527eed60d081e
231,915
ipynb
Jupyter Notebook
lessons/03_wave/03_02_convectionSchemes.ipynb
mcarpe/numerical-mooc
62b3c14c2c56d85d65c6075f2d7eb44266b49c17
[ "CC-BY-3.0" ]
748
2015-01-04T22:50:56.000Z
2022-03-30T20:42:16.000Z
lessons/03_wave/03_02_convectionSchemes.ipynb
mcarpe/numerical-mooc
62b3c14c2c56d85d65c6075f2d7eb44266b49c17
[ "CC-BY-3.0" ]
62
2015-02-02T01:06:07.000Z
2020-11-09T12:27:41.000Z
lessons/03_wave/03_02_convectionSchemes.ipynb
mcarpe/numerical-mooc
62b3c14c2c56d85d65c6075f2d7eb44266b49c17
[ "CC-BY-3.0" ]
1,270
2015-01-02T19:19:52.000Z
2022-02-27T01:02:44.000Z
69.644144
6,488
0.7791
true
8,569
Qwen/Qwen-72B
1. YES 2. YES
0.851953
0.73412
0.625435
__label__eng_Latn
0.965553
0.291426
# "Spin Glass Models 3: Ising Model - Theory"
> "In this blog post we will introduce another model of spin glasses: the Ising model. We relax some of the simplifications of previous models to create something that more accurately captures the structure of real spin-glasses. We will look at this model mathematically to gain insight into its properties."

- toc: true
- author: Lewis Cole (2020)
- branch: master
- badges: false
- comments: false
- categories: [Spin-Glass, Magnet, Ising, Edwards-Anderson]
- hide: false
- search_exclude: false
- image: https://github.com/lewiscoleblog/blog/raw/master/images/spin-glass/Ising3d.png

___

This is the third blog post in a series - you can find the previous blog post [here](https://lewiscoleblog.com/spin-glass-models-2)

___

In this blog post we are going to look at another spin-glass model. In the previous post we looked at the Sherrington-Kirkpatrick model, which allowed us to study the spin-glass analytically. However there is a certain lack of "realism" in this model - the infinite interaction range means that the Sherrington-Kirkpatrick model (in a sense) does not occupy any space; there is no concept of dimension or geometry. While it provides interesting mathematical results, some of which appear to be universal to spin-glass systems, we would like to look at a different model that captures more of the real-world system and perhaps might uncover new behaviours.

## Introducing the Ising Model

One model that we can look at is the Edwards-Anderson Ising model. We have actually looked at these models out of chronological order: Edwards and Anderson proposed their model before Sherrington and Kirkpatrick proposed theirs. In fact Sherrington and Kirkpatrick developed their model partly due to the difficulty of dealing with the Ising model. I have presented the models in this order in this series of blog posts because it feels more natural to look at models in order of increasing complexity rather than to present a historical account.

The main difference between Ising and Sherrington-Kirkpatrick is the extent over which the interactions can occur: instead of an infinite range, the Ising model only allows "nearest-neighbour" interactions. This relaxation means that there is a concept of dimension and geometry to an Ising spin-glass. For example we could think of spins oriented on a line segment (a finite 1d Ising model), a square lattice (a 2d Ising model) or something more exotic like spins oriented on the nodes of a 32-dimensional hexagonal lattice. Through careful use of limits we could also consider infinite-dimensional Ising models.

The Hamiltonian follows the form one would expect:

$$ H = - \sum_{\lt x,y \gt} J_{xy} \sigma_x \sigma_y - h \sum_x \sigma_x $$

Where we use the notation $\lt x,y \gt$ to denote the sum occurring over neighbour pairs. As before, the selection of $J_{xy}$ is somewhat arbitrary; however, unlike the Sherrington-Kirkpatrick model, we do not have to scale it with lattice size to retain meaningful "average energy" or "average magnetism" measurements when we increase the size of the system. If we take $J_{xy} = J$ for some fixed $J$ we get the Ising model of Ferromagnetism; if we allow $J_{xy}$ to vary according to some distribution we get the Edwards-Anderson Ising Model.
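To make the Hamiltonian concrete, below is a minimal sketch (all names illustrative) of evaluating $H$ for a spin configuration on a 2d square lattice with periodic boundaries, covering both the Edwards-Anderson case (Gaussian couplings) and the ferromagnetic case (constant couplings):

```python
import numpy as np

rng = np.random.default_rng(0)
L = 8                                    # lattice side length
spins = rng.choice([-1, 1], size=(L, L))

# One coupling array per bond direction (right and down neighbours), so each
# bond is counted exactly once. Gaussian draws give an Edwards-Anderson model;
# setting both arrays to a constant J recovers the ferromagnet.
J_right = rng.normal(size=(L, L))
J_down = rng.normal(size=(L, L))

def energy(spins, J_right, J_down, h=0.0):
    """H = -sum_<xy> J_xy s_x s_y - h sum_x s_x on a periodic square
    lattice; np.roll implements the periodic boundary conditions."""
    interaction = (J_right * spins * np.roll(spins, -1, axis=1) +
                   J_down * spins * np.roll(spins, -1, axis=0))
    return -interaction.sum() - h * spins.sum()

print(energy(spins, J_right, J_down))
```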
In this blog we will use the term "Ising model" to refer to either situation. The mathematics of moving beyond the fixed $J$ case usually involves integrating against the density function; since this can lead to more complicated formulae, in this blog we will mainly focus on the ferromagnetic case unless otherwise stated.

As before, spins can only take values $\pm 1$ - in some literature this is referred to as "Ising spins" (e.g. you can read references to "Sherrington-Kirkpatrick with Ising spins" - this means an infinite interaction range with binary spins).

A pictorial representation of a 3d square lattice Ising spin-glass can be seen below:

For the rest of this article we will limit ourselves to talking about "square" lattices since this is where the majority of research is focussed. Using different lattice types won't change the story too much. In studying these models it is also worth noting what happens on the "boundary" of the lattice: for finite size systems this can become an issue. In this blog post we will skirt over the issue; where necessary we will assume periodic boundary conditions (i.e. there is a toroidal type geometry to the system) so each "edge" of the lattice is connected to an opposing one, and in a sense there are no boundary conditions to specify.

## 1D Ising Model

We will now consider a 1 dimensional Ising model where spins are located on a line segment. This simplification means that finding the ground state (minimal energy spin configuration) is trivial: we pick a site in the middle of the line segment (the origin) and pick a spin ($\pm 1$) at random. We then propagate out from the origin in the left and right direction, picking the spin for the next site according to the interaction strength, for example: if $\sigma_0 = +1$ and $J_{0,1} > 0$ then $\sigma_1 = +1$, and so on.

We can see that (as with all spin glass systems) there is a symmetry: if we flip the direction of every spin we end up with a configuration of equal energy - we call these "pairs" of configurations. It doesn't take much convincing to realise that the pair of configurations we constructed is the unique minimum for the system. To see this, suppose we have a configuration where some pair of neighbouring spins opposes the direction indicated by its interaction strength. By flipping all the spins to one side of that bond we satisfy it without disturbing any other bond, and so end up with a configuration of lower energy - which is a contradiction.

We will now look to solve the 1d Ising model analytically. For simplicity we will assume no external magnetic field; this just makes the formulae look "prettier". The crux of understanding any spin glass system is finding the Gibbs distribution, as before:

$$P(\sigma) = \frac{exp(-\beta H_{\sigma})}{Z_{\beta}}$$

Where:

$$H = - \sum_{\lt x,y \gt} J_{xy} \sigma_x \sigma_y$$

And $\beta$ is the inverse temperature. We can write an expression for the partition function as (note the minus sign in the Hamiltonian cancels the one in the Boltzmann factor):

$$ Z_{\beta} = \sum_{\sigma} exp(-\beta H_{\sigma}) = \sum_{\sigma} exp(\beta \sum_{\lt x,y \gt} J_{xy} \sigma_x \sigma_y) $$

Suppose we look at a line segment with $N$ spins; we can then write this as:

$$ Z_{\beta} =\sum_{\sigma_1, ..., \sigma_N} exp(\beta (J_{1,2}\sigma_1 \sigma_2 + J_{2,3}\sigma_2 \sigma_3 + ... + J_{N-1,N}\sigma_{N-1}\sigma_N))$$

We notice that we can factorise this summation as:

$$ Z_{\beta} =\sum_{\sigma_1, ..., \sigma_{N-1}} exp(\beta (J_{1,2}\sigma_1 \sigma_2 + ... + J_{N-2,N-1}\sigma_{N-2}\sigma_{N-1}))\sum_{\sigma_N}exp(\beta J_{N-1,N} \sigma_{N-1} \sigma_N) $$

The internal sum can then be evaluated:

$$ \sum_{\sigma_N}exp(\beta J_{N-1,N} \sigma_{N-1} \sigma_N) = exp(\beta J_{N-1,N} \sigma_{N-1} ) + exp(-\beta J_{N-1,N} \sigma_{N-1} ) = 2cosh(\beta J_{N-1,N} \sigma_{N-1}) = 2cosh(\beta J_{N-1,N}) $$

The last equality makes use of the fact that $\sigma_{N-1} = \pm 1$ and that $cosh(x) = cosh(-x)$. By repeating this process we can express the partition function as:

$$ Z_{\beta} = 2 \prod_{i=1}^{N-1} 2 cosh(\beta J_{i,i+1}) $$

We can evaluate this exactly. If we are sampling the interaction strengths according to some distribution we can find expected values (or other statistical properties) by integrating against the density function as usual; since this adds notational complexity we will assume that the interaction strengths are fixed for now.

We can then calculate thermal properties using the equations:

\begin{align}
F &= - \frac{1}{\beta} ln Z_{\beta} = - T ln 2 - T \sum_{i=1}^{N-1} ln (2 cosh(\beta J_{i,i+1})) \\
U &= - \frac{\partial}{\partial \beta} ln Z_{\beta} = - \sum_{i=1}^{N-1} J_{i,i+1} tanh(\beta J_{i,i+1}) \\
C &= \frac{\partial U}{\partial T} = \sum_{i=1}^{N-1} (\beta J_{i,i+1})^2 sech^2(\beta J_{i,i+1}) \\
S &= \frac{U - F}{T} = ln2 + \sum_{i=1}^{N-1} \left( -\beta J_{i,i+1} tanh(\beta J_{i,i+1}) + ln(2cosh(\beta J_{i,i+1})) \right)
\end{align}

Where: <br/>
F - is the Helmholtz free energy <br/>
U - is the thermodynamic energy (ensemble average) <br/>
C - is the heat capacity <br/>
S - is the entropy

We can find the expected value over instantiations of the interactions through an integral such as:

$$ \mathbb{E}(U) = - \sum_{i=1}^{N-1} \int J_{i,i+1} tanh(\beta J_{i,i+1}) Q(J_{i,i+1}) dJ_{i,i+1} $$

With $Q(J)$ being the density function of the distribution and the integral occurring over its support. Similar formulae exist for the other thermodynamic variables and you can calculate other statistics (e.g. variance) in the usual way.
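As a quick illustration of such an integral, here is a minimal sketch (function names illustrative) evaluating $\mathbb{E}(U)$ per bond by numerical quadrature for standard Gaussian couplings, $Q(J) = \mathcal{N}(0, 1)$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def expected_u_per_bond(beta):
    """E[J tanh(beta J)] for J ~ N(0, 1) by numerical quadrature; the
    expected energy of an N-site chain is -(N - 1) times this value."""
    integrand = lambda J: J * np.tanh(beta * J) * norm.pdf(J)
    value, _ = quad(integrand, -np.inf, np.inf)
    return value

print(-expected_u_per_bond(1.0))  # expected energy per bond at beta = 1
```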
We can plot the values of these variables for a given set of interaction weights:

```python
#hide
import warnings
warnings.filterwarnings('ignore')
```

```python
# This code creates a 4 figure plot showing how
# Thermodynamic variables change with temperature
# For a 1d Ising Model with Gaussian interactions

import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

# Fix seed
np.random.seed(123)

# Fix number of points
N = 100

# Instantiate interaction strength
J = np.random.normal(0,1,size=N-1)

# Set temperature ranges
T_min = 1e-4
T_max = 5
T_steps = 1000
T = np.arange(T_steps)/T_steps *(T_max - T_min) + T_min
beta = 1 / T

# Set up holders for variables
F = np.zeros(T_steps)
U = np.zeros(T_steps)
C = np.zeros(T_steps)
S = np.zeros(T_steps)

# Loop over T_steps and calculate at each step
for i in range(T_steps):
    F[i] = - T[i] * np.log(2) - T[i] * (np.log(2*np.cosh(beta[i]*J))).sum()
    U[i] = - (J * np.tanh(J * beta[i])).sum()
    C[i] = ((beta[i] * J)**2 * (np.cosh(beta[i] * J))**-2).sum()
    S[i] = (U[i] - F[i]) / T[i]

# Divide by number of points to give a scale invariant measure
F = F / N
U = U / N
C = C / N
S = S / N

# Create plots
fig, axs = plt.subplots(2, 2, figsize=(10,10), gridspec_kw={'hspace': 0.25, 'wspace': 0.25})
axs[0, 0].plot(T, F)
axs[0, 0].set_title("Free Energy")
axs[0, 0].set(xlabel="T", ylabel="F / N")
axs[0, 1].plot(T, U)
axs[0, 1].set_title("Average Energy")
axs[0, 1].set(xlabel="T", ylabel="U / N")
axs[1, 0].plot(T, C)
axs[1, 0].set_title("Heat Capacity")
axs[1, 0].set(xlabel="T", ylabel="C / N")
axs[1, 1].plot(T, S)
axs[1, 1].set_title("Entropy")
axs[1, 1].set(xlabel="T", ylabel="S / N")
plt.show()
```

From these plots we can see there is no phase transition taking place - this is true for all 1d Ising models.

We can also calculate the correlation function for the system. By following a similar line of logic as before we find:

$$ \langle \sigma_n \sigma_{n+r} \rangle = \prod_{i=0}^{r-1} tanh(\beta J_{n+i, n+i+1})$$

(one factor for each of the $r$ bonds between the two sites). We can plot this as a function of $r$ starting at the first position:

```python
# This code creates a plot displaying
# The correlation function
# For a 1d Ising Model with Gaussian interactions

import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

# Fix seed
np.random.seed(123)

# Fix number of points
N = 100

# Instantiate interaction strength
J = np.random.normal(0,1,size=N-1)

# Create holders for variables
r = np.arange(N)
corr_array = np.zeros(N)

# Fix beta
beta = 1

# Calculate correlations
tanh_array = np.tanh(beta * J)
for i in range(N):
    corr_array[i] = tanh_array[:i].prod()

plt.plot(r[0:20], corr_array[0:20])
plt.title("Correlation Function")
plt.ylabel(r"$\langle \sigma_{1} \sigma_{r} \rangle$")
plt.xlabel("r")
plt.xticks(np.arange(11)*2)
plt.ylim((-1,1))
plt.xlim((0,20))
plt.show()
```

As we can see there is a specific "structure" to the correlation function given an instantiation - this is not particularly surprising. If we picked an alternate starting site (e.g. the 2nd site) this graph could look totally different; the decay in absolute value will be similar, however. We also notice that the correlation decays to zero fairly rapidly, suggesting there isn't a long range structure to the model.

## 2d Ising Model

We now turn our attention to the 2d Ising model. The mathematics here will, understandably, get more complicated. We will require a bit more sophistication to solve this system. The solution was originally published by Lars Onsager in 1944, with alternative proofs and derivations published later.
The derivation itself is fairly involved and would take a blog post (at least) by itself to cover - I may come back to this at a later date. For now I will simply present the result for the free energy (F) in the absence of a magnetic field ($h=0$):

$$ - \frac{\beta}{N} F = ln2 + \frac{1}{8\pi^2} \int^{2\pi}_{0} \int^{2\pi}_{0} ln \left[ cosh(2 \beta J_H) cosh(2 \beta J_V) - sinh(2 \beta J_H)cos(\theta_1) - sinh(2 \beta J_V)cos(\theta_2) \right]d\theta_1 d\theta_2 $$

Where instead of $J_{xy}$ being sampled from a Gaussian distribution we assume fixed interaction strengths $J_H$ and $J_V$ in the horizontal and vertical directions. For simplicity we will take $J_H = J_V = J$, and we can derive the thermodynamic properties (per site - e.g. $f=\frac{F}{N}$) of this system as:

\begin{align}
f &= \frac{-ln2}{2\beta} - \frac{1}{2\pi\beta} I_0(\beta, J) \\
u &= -J coth(2\beta J) \left[ 1 + \frac{2}{\pi} \left( 2tanh^2(2\beta J) -1 \right) I_1(\beta, J) \right] \\
c &= 2J\beta^2 \left[ u \, csch(2\beta J)sech(2\beta J) + \frac{8J}{\pi} sech^2(2\beta J) I_1(\beta, J) - \frac{2 \beta J}{\pi}(cosh(4\beta J)-3)^2 sech^6(2\beta J) I_2(\beta, J) \right] \\
s &= \frac{u - f}{T}
\end{align}

For convenience I created 3 new functions $I_0(\beta,J), I_1(\beta, J)$ and $I_2(\beta, J)$ to make the equations a little shorter. These functions are defined as:

\begin{align}
I_0(\beta, J) &= \int^\pi_0 ln \left[ cosh^2(2 \beta J) + sinh^2(2 \beta J) \sqrt{1 + csch^4(2 \beta J) - 2 csch^2(2 \beta J) cos(2\theta)} \right] d\theta \\
I_1(\beta, J) &= \int^{\frac{\pi}{2}}_0 \left[1 - 4csch^2(2\beta J)( 1 + csch^2(2\beta J))^{-2} sin^2(\theta) \right]^{-\frac{1}{2}} d\theta \\
I_2(\beta, J) &= \int^{\frac{\pi}{2}}_0 sin^2(\theta)\left[1 - 4csch^2(2\beta J)( 1 + csch^2(2\beta J))^{-2} sin^2(\theta) \right]^{-\frac{3}{2}} d\theta
\end{align}

As before we can produce plots of these:

```python
# This code creates a 4 figure plot showing how
# Thermodynamic variables change with temperature
# For a 2d Ferromagnetic Ising Model with fixed interaction

import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import quad
%matplotlib inline

# Fix seed
np.random.seed(123)

# Instantiate interaction strength
J = 1

# Set temperature ranges
T_min = 1e-4
T_max = 5
T_steps = 1000
T = np.arange(T_steps)/T_steps *(T_max - T_min) + T_min
beta = 1 / T

# Set up holders for variables
f = np.zeros(T_steps)
u = np.zeros(T_steps)
c = np.zeros(T_steps)
s = np.zeros(T_steps)

# Set up integrands for I0, I1, I2
def integrand0(x, b, j):
    return np.log(np.cosh(2*b*j)**2 + np.sinh(2*b*j)**2 * np.sqrt(1 + np.sinh(2*b*j)**(-4) - 2*np.sinh(2*b*j)**(-2) * np.cos(2*x)))

def integrand1(x, b, j):
    # note: uses the local argument j throughout (the original had a stray
    # reference to the global J here, which happened to have the same value)
    return (1 - 4*np.sinh(2*b*j)**(-2)*(1 + np.sinh(2*b*j)**(-2))**(-2)*np.sin(x)**2)**(-0.5)

def integrand2(x, b, j):
    return np.sin(x)**2 * integrand1(x, b, j)**3

# Loop over T_steps and calculate at each step
for i in range(T_steps):
    bt = beta[i]
    I0 = quad(integrand0, 0, np.pi, args=(bt, J))
    I1 = quad(integrand1, 0, np.pi/2, args=(bt, J))
    I2 = quad(integrand2, 0, np.pi/2, args=(bt, J))
    f[i] = - np.log(2) / (2 * bt) - I0[0] / (2 * np.pi * bt)
    u[i] = -J*np.tanh(2*bt*J)**(-1) * ( 1 + (2/np.pi)*(2*np.tanh(2*bt*J)**2 -1)*I1[0])
    c[i] = 2*bt**2*J*( u[i]*np.sinh(2*bt*J)**(-1)*np.cosh(2*bt*J)**(-1) + (8*J / np.pi)*np.cosh(2*bt*J)**(-2)*I1[0] - (2*bt*J/np.pi)*((np.cosh(4*bt*J)-3)**2)*np.cosh(2*bt*J)**(-6)*I2[0] )
    s[i] = (u[i] - f[i])*bt

# Create plots
0].set_title("Free Energy") axs[0, 0].set(xlabel="T", ylabel="f") axs[0, 1].plot(T, u) axs[0, 1].set_title("Average Energy") axs[0, 1].set(xlabel="T", ylabel="u") axs[1, 0].plot(T, c) axs[1, 0].set_title("Heat Capacity") axs[1, 0].set(xlabel="T", ylabel="c") axs[1, 1].plot(T, s) axs[1, 1].set_title("Entropy") axs[1, 1].set(xlabel="T", ylabel="s") plt.show() ``` We find there is a critical temperature $T_c$ where the specific heat equation diverges. We can compute the value of this as satisfying: $$ sinh\left(\frac{2J_H}{T_c}\right)sinh\left(\frac{2J_V}{T_c}\right) = 1 $$ In the case where $J_H = J_V = J$ we have: $$T_c = \frac{2 J}{ln\left(1+\sqrt{2}\right)} $$ This is an example of a second order phase transition since the discontinuity only arises under a second derivative. For temperatres under this critical temperature we have that the spontaneous magnetization can be calculated as: $$ m = \left[ 1 - csch^2\left(\frac{2J_H}{T}\right)csch^2\left(\frac{2J_V}{T}\right) \right]^{\frac{1}{8}} $$ We can plot this (for $J_H = J_V = J = 1$) as: ```python # This code plots the spotaneous magnetization of # a 2d Ising model on a square-lattice import numpy as np import matplotlib.pyplot as plt %matplotlib inline # Set up temperature array T_min = 1e-4 T_max = 5 T_steps = 1000 T = np.arange(T_steps)/T_steps *(T_max - T_min) + T_min # Fix interaction strength constant J = 1 m = np.power(np.maximum(1 - np.sinh(2*J/T)**-4,0), 1/8) plt.plot(T,m) plt.xlabel("T") plt.ylabel("Magnetization") plt.title("2d Ising Model Spontaneous Magnetization") plt.xlim((0,5)) plt.show() print("Critical Temp (Tc):", (2*J) / np.log(1 + np.sqrt(2))) ``` The correlation function is harder to compute, here we just present the correlation function between elements on the diagonal of the lattice: $$ \langle \sigma_{0,0} \sigma_{N,N} \rangle = det \left[ A_N \right]$$ Where $A_N = (a_{n,m})_{n,m=1}^N$ is an $NxN$ matrix with entries: $$ a_{n,m} = \frac{1}{2 \pi} \int_0^{2\pi} e^{i(n-m)\theta} \sqrt{\frac{sinh^2(2\beta J) - e^{-i\theta}}{sinh^2(2\beta J) - e^{i\theta}} } d\theta $$ We can plot this as so: ```python # This code plots the diagnoal correlation function # For the 2d square-lattice Ising model # Lattice size = NxN # At the critical temperature import numpy as np import matplotlib.pyplot as plt from scipy.integrate import quad %matplotlib inline # Set up integrand function imag = complex(0,1) def integrand(x, n, m, beta, J): return np.exp(imag*(n-m)*x)*np.sqrt((np.sinh(2*beta*J)**2-np.exp(-imag*x))/(np.sinh(2*beta*J)**2-np.exp(imag*x))) # Set up constants J = 1 beta = np.log(1 + np.sqrt(2)) / (2*J) # Set up arrays N_max = 20 N_array = np.arange(N_max+1) correl_array = np.zeros(N_max+1) for x in range(N_max + 1): N = N_array[x] A_N = np.zeros((N, N)) for n in range(N): for m in range(N): I = quad(integrand, 0, 2*np.pi, args=(n, m, beta, J)) A_N[n,m] = I[0]/(2*np.pi) correl_array[x] = np.linalg.det(A_N) # Plot correlations plt.plot(N_array, correl_array) plt.xlabel("N") plt.ylabel(r"$\langle \sigma_{0,0} \sigma_{N,N} \rangle$") plt.title("Correlation Function") plt.xticks(np.arange(11)*2) plt.ylim((0,1)) plt.xlim((0,20)) plt.show() ``` We can see that even though we have short range interactions (nearest neighbour) this gives rise to longer-range structure within the model. In fact we find for temperatures below the critical temperature there is an infinite correlation length, whereas for temperatures above the critical temperature the correlation length is finite. 
## Infinite Dimension Ising Model

Next we consider the situation where the Ising model exists in infinite dimensions. In this case each site has infinitely many neighbours, and as such a mean-field approximation is valid and we have a model that is somewhat similar to the Sherrington-Kirkpatrick fully connected geometry. (Please excuse the lack of rigour here; one has to be careful in how limits are defined and it is a non-trivial exercise. For this blog post I don't want to go down into that sort of detail.)

If each site has infinitely many neighbours we only need to concern ourselves with the ratio of positive and negative spin neighbours. By mean field, if we take the probability of a positive spin as $p$ then via the Gibbs distribution we have:

$$ \frac{p}{1-p} = exp(2\beta H)$$

The average magnetization can then be calculated:

$$ \mathbb{E}\left[M\right] = (1)p + (-1)(1-p) = 2p - 1 = \frac{exp(2\beta H) - 1}{exp(2\beta H) + 1} = tanh(\beta H)$$

By investigating this function we can gain insight into spontaneous magnetization. Other similar arguments can be invoked for the other thermodynamic properties. It is possible to show, as with the Sherrington-Kirkpatrick, that a phase transition occurs.

## n-d Ising Model ($n\geq3$)

Finally we look at the case where the dimension of the model is finite but strictly greater than 2. In this situation things get much trickier; in fact there are not many established mathematical results for these systems, and this is the subject of current research. As such I will just briefly outline some of the approaches that have been suggested to study these systems and their properties (presented in chronological order from when they were proposed):

* **Replica-Symmetry Breaking** - In our previous post on Sherrington-Kirkpatrick models we briefly looked at this sort of method. It was an obvious first choice in trying to deal with short range interactions. It has been shown that the "standard" techniques are insufficient, but there have been extensions proposed that have shown some promise. Like in the infinite range model, it suggests states have an ultrametric structure and uncountably many pure states.
* **Droplet Scaling** - The first such argument was presented by Rudolf Peierls. The main idea is to consider the formation of "loops" or "islands" of spins (clusters of atoms all with the same spin being enclosed by some boundary). We then aim to count the number of such loops, often in high or low temperature ranges via approximation. This leads to only 2 pure states.
* **Chaotic Pairs** - Has properties somewhat similar to a combination of the preceding 2 methods: like replica symmetry breaking there are infinitely many thermodynamic states, however the relationship between them is much simpler and has simple scaling behaviour - much like droplet scaling.
* **TNT** - This interpretation does not itself specify the number of pure states, however it has been argued that it most naturally exists with 2 pure states - much like droplet scaling. However it has scaling properties more similar to those of replica symmetry breaking.

For completeness: a pure state of a system is a state that cannot be expressed as a convex combination of other states.

As far as I am aware there is currently no consensus on which (if any) of the options presented is the correct interpretation. One of the main questions that we would like to answer is whether there is a (first order) phase transition. This is currently unresolved.
Another question we might ask is whether there exist multiple ground state pairs (i.e. whether we can find 2 different configurations that have minimal energy and are not merely negative images of each other) - again this is unresolved. In infinite dimensions we can see this would be true; in 1d we know it cannot be true; for other dimensions it is unclear (although it is believed it is probably not true for d=2). In addition to this there are many other unanswered questions that are the subject of current research into Ising type models.

## Conclusion

In this blog post we have investigated some of the properties of square-lattice Ising models in various dimensions. In particular we have seen that there is no phase transition in 1d, a second order phase transition in 2d, a phase transition in infinite dimension, and that other dimensions are currently unresolved. We can see that the short-range interactions cause a lot of headaches when trying to analyse these systems.

In the next blog post in this series we will begin to look at ways of simulating Ising models (and other spin glass models generally).

___

This is the third blog post in a series - you can find the next blog post [here](https://lewiscoleblog.com/spin-glass-models-4)
c9e23738e6e0d812aa412c18d90b79cccdb06842
170,993
ipynb
Jupyter Notebook
_notebooks/2020-03-17-Spin-Glass-Models-3.ipynb
lewiscoleblog/blog
50183d63491abbf9e56676a784f53dfbb3952af1
[ "Apache-2.0" ]
2
2020-03-31T18:53:59.000Z
2021-03-25T01:02:14.000Z
_notebooks/2020-03-17-Spin-Glass-Models-3.ipynb
lewiscoleblog/blog
50183d63491abbf9e56676a784f53dfbb3952af1
[ "Apache-2.0" ]
3
2020-04-07T15:31:16.000Z
2021-09-28T01:25:25.000Z
_notebooks/2020-03-17-Spin-Glass-Models-3.ipynb
lewiscoleblog/blog
50183d63491abbf9e56676a784f53dfbb3952af1
[ "Apache-2.0" ]
1
2020-05-09T18:03:39.000Z
2020-05-09T18:03:39.000Z
284.514143
50,092
0.904236
true
7,156
Qwen/Qwen-72B
1. YES 2. YES
0.749087
0.841826
0.630601
__label__eng_Latn
0.991698
0.303428
```python
%matplotlib inline
```

Adversarial Example Generation
==============================

**Author:** `Nathan Inkawhich <https://github.com/inkawhich>`__

**Translator**: `Dr. Antares <http://www.studyai.com/antares>`__

If you are reading this, hopefully you can appreciate how effective some machine learning models are. Research is constantly pushing ML models to be faster, more accurate, and more efficient. However, an often overlooked aspect of designing and training models is security and robustness, especially in the face of an adversary who wishes to fool the model. This tutorial will raise your awareness of the security vulnerabilities of ML models and will give insight into the hot topic of adversarial machine learning. You may be surprised to find that adding imperceptible perturbations to an image can cause drastically different model performance. Given that this is a tutorial, we will explore the topic via an example on an image classifier: specifically, we will use the first and most popular attack method, the Fast Gradient Sign Attack (FGSM), to fool an MNIST classifier.

Threat Model
-------------------------

There are many kinds of adversarial attacks, each with a different goal and assumption of the attacker's knowledge. However, the overall goal is to add the least amount of perturbation to the input data to cause the desired misclassification. There are several kinds of assumptions of the attacker's knowledge, two of which are: **white-box** and **black-box**. A *white-box* attack assumes the attacker has full knowledge of, and access to, the model, including architecture, inputs, outputs, and weights. A *black-box* attack assumes the attacker only has access to the inputs and outputs of the model, and knows nothing about the underlying architecture or weights. There are also several types of goals, including **misclassification** and **source/target misclassification**. A goal of *misclassification* means the adversary only wants the output classification to be wrong, but does not care what the new classification is. A *source/target misclassification* means the adversary wants to alter an image that is originally of a specific source class so that it is classified as a specific target class.

In this case, the FGSM attack is a *white-box* attack with the goal of *misclassification*. With this background information, we can now discuss the attack in detail.

Fast Gradient Sign Attack
--------------------------------------------

One of the first and most popular adversarial attacks to date is referred to as the *Fast Gradient Sign Attack (FGSM)* and is described by Goodfellow et al. in `Explaining and Harnessing Adversarial Examples <https://arxiv.org/abs/1412.6572>`__. The attack is remarkably powerful, and yet intuitive. It is designed to attack neural networks by leveraging the way they learn: *gradients*. The idea is simple: rather than working to minimize the loss by adjusting the weights based on the backpropagated gradients, the attack *adjusts the input data to maximize the loss based on the same backpropagated gradients*. In other words, the attack uses the gradient of the loss w.r.t. the input data, then adjusts the input data to maximize the loss.

Before we jump into the code, let's look at the famous `FGSM <https://arxiv.org/abs/1412.6572>`__ panda example and extract some notation.

.. figure:: /_static/img/fgsm_panda_image.png
   :alt: fgsm_panda_image

From the figure, $\mathbf{x}$ is the original input image correctly classified as a "panda", $y$ is the true class label for $\mathbf{x}$, $\mathbf{\theta}$ represents the model parameters, and $J(\mathbf{\theta}, \mathbf{x}, y)$ is the loss that is used to train the network. The attack backpropagates the gradient back to the input data to calculate $\nabla_{x} J(\mathbf{\theta}, \mathbf{x}, y)$. Then, it adjusts the input data by a small step ($\epsilon$, or $0.007$ in the picture) in the direction that maximizes the loss, i.e. $sign(\nabla_{x} J(\mathbf{\theta}, \mathbf{x}, y))$. The resulting perturbed image, $x'$, is then *misclassified* by the target network as a "gibbon", when it is in fact still a "panda".

Hopefully the motivation for this tutorial is now clear, so let's jump into the implementation.

```python
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
import numpy as np
import matplotlib.pyplot as plt
```

Implementation
--------------

In this section, we will discuss the input parameters for the tutorial, define the model under attack, then code the attack and run some tests.

Inputs
~~~~~~

There are only three inputs for this tutorial, defined as follows:

-  **epsilons** - List of epsilon values to use for the run. It is important to keep 0 in the list because it represents the model performance on the original test set. Intuitively, we would also expect the larger the epsilon, the more noticeable the perturbations, but the more effective the attack in terms of degrading model accuracy. Since the data range here is $[0,1]$, no epsilon value should exceed 1.

-  **pretrained_model** - path to the pretrained MNIST model that was trained with `pytorch/examples/mnist <https://github.com/pytorch/examples/tree/master/mnist>`__. For simplicity, download the pretrained model `here <https://drive.google.com/drive/folders/1fn83DF14tWmit0RTKWRhPq5uVXt73e0h?usp=sharing>`__.

-  **use_cuda** - boolean flag to use CUDA if desired and available. Note that a GPU with CUDA is not critical for this tutorial, as a CPU will not take much time.

```python
epsilons = [0, .05, .1, .15, .2, .25, .3]
pretrained_model = "./data/lenet_mnist_model.pth"
use_cuda=True
```

Model Under Attack
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As mentioned, the model under attack is the same MNIST model from `pytorch/examples/mnist <https://github.com/pytorch/examples/tree/master/mnist>`__. You may train and save your own MNIST model, or you can download and use the provided model. The network definition and test dataloader here have been copied from the MNIST example. The purpose of this section is to define the model and dataloader, then initialize the model and load the pretrained weights.

```python
# LeNet Model definition
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)

# MNIST Test dataset and dataloader declaration
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('./data/mnist', train=False, download=True, transform=transforms.Compose([
            transforms.ToTensor(),
            ])),
        batch_size=1, shuffle=True)

# Define what device we are using
print("CUDA Available: ",torch.cuda.is_available())
device = torch.device("cuda" if (use_cuda and torch.cuda.is_available()) else "cpu")

# Initialize the network
model = Net().to(device)

# Load the pretrained model
model.load_state_dict(torch.load(pretrained_model, map_location='cpu'))

# Set the model in evaluation mode. This is for the Dropout layers.
model.eval()
```

FGSM Attack
~~~~~~~~~~~~~~~~~~~

Now, we can define the function that creates the adversarial examples by perturbing the original inputs. The ``fgsm_attack`` function takes three inputs: *image* is the original clean image ($x$), *epsilon* is the pixel-wise perturbation amount ($\epsilon$), and *data_grad* is the gradient of the loss w.r.t. the input image ($\nabla_{x} J(\mathbf{\theta}, \mathbf{x}, y)$). With these three inputs, the function creates the perturbed image as

\begin{align}perturbed\_image = image + epsilon*sign(data\_grad) = x + \epsilon * sign(\nabla_{x} J(\mathbf{\theta}, \mathbf{x}, y))\end{align}

Finally, in order to maintain the original range of the data, the perturbed image is clipped to the range $[0,1]$.

```python
# FGSM attack code
def fgsm_attack(image, epsilon, data_grad):
    # Collect the element-wise sign of the data gradient
    sign_data_grad = data_grad.sign()
    # Create the perturbed image by adjusting each pixel of the input image
    perturbed_image = image + epsilon*sign_data_grad
    # Adding clipping to maintain [0,1] range
    perturbed_image = torch.clamp(perturbed_image, 0, 1)
    # Return the perturbed image
    return perturbed_image
```

Testing Function
~~~~~~~~~~~~~~~~

Finally, the central result of this tutorial comes from the ``test`` function. Each call to this test function performs a full test step on the MNIST test set and reports a final accuracy. However, notice that this function also takes an *epsilon* input. This is because the ``test`` function reports the accuracy of a model that is under attack from an adversary with strength $\epsilon$. More specifically, for each sample in the test set, the function computes the gradient of the loss w.r.t. the input data ($data\_grad$), creates a perturbed image with ``fgsm_attack`` ($perturbed\_data$), then checks to see if the perturbed example is adversarial. In addition to testing the accuracy of the model, the function also saves and returns some successful adversarial examples to be visualized later.

```python
def test( model, device, test_loader, epsilon ):

    # Accuracy counter
    correct = 0
    adv_examples = []

    # Loop over all examples in test set
    for data, target in test_loader:

        # Send the data and label to the device
        data, target = data.to(device), target.to(device)

        # Set requires_grad attribute of tensor. Important for Attack
        data.requires_grad = True

        # Forward pass the data through the model
        output = model(data)
        init_pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability

        # If the initial prediction is wrong, dont bother attacking, just move on
        if init_pred.item() != target.item():
            continue

        # Calculate the loss
        loss = F.nll_loss(output, target)

        # Zero all existing gradients
        model.zero_grad()

        # Calculate gradients of model in backward pass
        loss.backward()

        # Collect datagrad
        data_grad = data.grad.data

        # Call FGSM Attack
        perturbed_data = fgsm_attack(data, epsilon, data_grad)

        # Re-classify the perturbed image
        output = model(perturbed_data)

        # Check for success
        final_pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability
        if final_pred.item() == target.item():
            correct += 1
            # Special case for saving 0 epsilon examples
            if (epsilon == 0) and (len(adv_examples) < 5):
                adv_ex = perturbed_data.squeeze().detach().cpu().numpy()
                adv_examples.append( (init_pred.item(), final_pred.item(), adv_ex) )
        else:
            # Save some adv examples for visualization later
            if len(adv_examples) < 5:
                adv_ex = perturbed_data.squeeze().detach().cpu().numpy()
                adv_examples.append( (init_pred.item(), final_pred.item(), adv_ex) )

    # Calculate final accuracy for this epsilon
    final_acc = correct/float(len(test_loader))
    print("Epsilon: {}\tTest Accuracy = {} / {} = {}".format(epsilon, correct, len(test_loader), final_acc))

    # Return the accuracy and an adversarial example
    return final_acc, adv_examples
```

Run Attack
~~~~~~~~~~~~~~~~~~

The last part of the implementation is to actually run the attack. Here, we run a full test step for each epsilon value in the *epsilons* input. For each epsilon we also save the final accuracy and some successful adversarial examples to be plotted in the coming sections. Notice how the printed accuracies decrease as the epsilon value increases. Also, note that the $\epsilon=0$ case represents the original test accuracy, with no attack.

```python
accuracies = []
examples = []

# Run test for each epsilon
for eps in epsilons:
    acc, ex = test(model, device, test_loader, eps)
    accuracies.append(acc)
    examples.append(ex)
```

Results
-------

Accuracy vs Epsilon
~~~~~~~~~~~~~~~~~~~~~

The first result is the accuracy versus epsilon plot. As alluded to earlier, as epsilon increases we expect the test accuracy to decrease. This is because larger epsilons mean we take a larger step in the direction that will maximize the loss. Notice that the trend in the curve is not linear even though the epsilon values are linearly spaced. For example, the accuracy at $\epsilon=0.05$ is only about 4% lower than at $\epsilon=0$, but the accuracy at $\epsilon=0.2$ is 25% lower than at $\epsilon=0.15$. Also, notice that the accuracy of the model hits the random accuracy of a 10-class classifier between $\epsilon=0.25$ and $\epsilon=0.3$.

```python
plt.figure(figsize=(5,5))
plt.plot(epsilons, accuracies, "*-")
plt.yticks(np.arange(0, 1.1, step=0.1))
plt.xticks(np.arange(0, .35, step=0.05))
plt.title("Accuracy vs Epsilon")
plt.xlabel("Epsilon")
plt.ylabel("Accuracy")
plt.show()
```

Sample Adversarial Examples
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Remember the idea of no free lunch? In this case, as epsilon increases the test accuracy decreases, **but** the perturbations become more easily perceptible. In reality, there is a tradeoff between accuracy degradation and perceptibility that an attacker must consider. Here, we show some examples of successful adversarial examples at each epsilon value. Each row of the plot shows a different epsilon value. The first row is the $\epsilon=0$ examples, which represent the original "clean" images with no perturbation. The title of each image shows the "original classification -> adversarial classification". Notice that the perturbations start to become evident at $\epsilon=0.15$ and are quite evident at $\epsilon=0.3$. However, in all cases humans are still capable of identifying the correct class despite the added noise.

```python
# Plot several examples of adversarial samples at each epsilon
cnt = 0
plt.figure(figsize=(8,10))
for i in range(len(epsilons)):
    for j in range(len(examples[i])):
        cnt += 1
        plt.subplot(len(epsilons),len(examples[0]),cnt)
        plt.xticks([], [])
        plt.yticks([], [])
        if j == 0:
            plt.ylabel("Eps: {}".format(epsilons[i]), fontsize=14)
        orig,adv,ex = examples[i][j]
        plt.title("{} -> {}".format(orig, adv))
        plt.imshow(ex, cmap="gray")
plt.tight_layout()
plt.show()
```

Where to Go Next?
-----------------

Hopefully this tutorial has given some insight into the topic of adversarial machine learning. There are many potential directions to go from here. This attack represents the very beginning of adversarial attack research, and since then there have been many subsequent ideas for how to attack and defend ML models from an adversary. In fact, at NIPS 2017 there was an adversarial attack and defense competition, and many of the methods used in the competition are described in `this paper <https://arxiv.org/pdf/1804.00097.pdf>`__: Adversarial Attacks and Defences Competition. The work on defense also leads into the idea of making machine learning models more robust in general, to both naturally perturbed and adversarially crafted inputs.

Another direction to go is adversarial attacks and defenses in different domains. Adversarial research is not limited to the image domain; check out `this <https://arxiv.org/pdf/1801.01944.pdf>`__ attack on speech-to-text models. But perhaps the best way to learn more about adversarial machine learning is to get your hands dirty: try to implement a different attack from the NIPS 2017 competition, and see how it differs from FGSM. Then, try to defend the model from your own attacks.
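As a starting point for that exercise, here is a minimal sketch of one natural extension, the *iterative* FGSM (often called the Basic Iterative Method), which applies several small FGSM steps instead of one large one. It reuses the ``fgsm_attack`` helper defined above; the step size ``alpha`` and iteration count are illustrative choices, not part of the original tutorial.

```python
# Sketch: Basic Iterative Method (iterative FGSM).
# Reuses fgsm_attack from above; alpha and num_iter are illustrative.
# Expects a detached clean image tensor and its target label.
def bim_attack(model, image, target, epsilon, alpha=0.01, num_iter=10):
    perturbed = image.clone().detach()
    for _ in range(num_iter):
        perturbed.requires_grad = True
        output = model(perturbed)
        loss = F.nll_loss(output, target)
        model.zero_grad()
        loss.backward()
        # One small FGSM step of size alpha
        perturbed = fgsm_attack(perturbed, alpha, perturbed.grad.data).detach()
        # Project back into the epsilon-ball around the original image
        perturbed = torch.max(torch.min(perturbed, image + epsilon), image - epsilon)
        perturbed = torch.clamp(perturbed, 0, 1).detach()
    return perturbed
```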
64093f8befd3aecabc36e5aee3df2b86f1b7106f
26,386
ipynb
Jupyter Notebook
build/_downloads/2b3dcb9883348c9350c194d591814364/fgsm_tutorial.ipynb
ScorpioDoctor/antares02
631b817d2e98f351d1173b620d15c4a5efed11da
[ "BSD-3-Clause" ]
null
null
null
build/_downloads/2b3dcb9883348c9350c194d591814364/fgsm_tutorial.ipynb
ScorpioDoctor/antares02
631b817d2e98f351d1173b620d15c4a5efed11da
[ "BSD-3-Clause" ]
null
null
null
build/_downloads/2b3dcb9883348c9350c194d591814364/fgsm_tutorial.ipynb
ScorpioDoctor/antares02
631b817d2e98f351d1173b620d15c4a5efed11da
[ "BSD-3-Clause" ]
null
null
null
136.010309
5,182
0.712575
true
4,934
Qwen/Qwen-72B
1. YES 2. YES
0.766294
0.685949
0.525639
__label__yue_Hant
0.23068
0.059564
<a href="https://colab.research.google.com/github/john-s-butler-dit/Numerical-Analysis-Python/blob/master/Chapter%2006%20-%20Boundary%20Value%20Problems/604_Boundary%20Value%20Problem%20Example%202.ipynb" target="_parent"></a> # Finite Difference Method #### John S Butler john.s.butler@tudublin.ie [Course Notes](https://johnsbutler.netlify.com/files/Teaching/Numerical_Analysis_for_Differential_Equations.pdf) [Github](https://github.com/john-s-butler-dit/Numerical-Analysis-Python) ## Overview This notebook illustrates the finite different method for a linear Boundary Value Problem. ### Example 2 Boundary Value Problem To further illustrate the method we will apply the finite difference method to the this boundary value problem \begin{equation} \frac{d^2 y}{dx^2} + 2x\frac{dy}{dx}+y=3x^2,\end{equation} with the boundary conditions \begin{equation} y(0)=1, y(1)=2. \end{equation} ```python import numpy as np import math import matplotlib.pyplot as plt ``` ## Discrete Axis The stepsize is defined as \begin{equation}h=\frac{b-a}{N}\end{equation} here it is \begin{equation}h=\frac{1-0}{10}\end{equation} giving \begin{equation}x_i=0+0.1 i\end{equation} for $i=0,1,...10.$ ```python ## BVP N=10 h=1/N x=np.linspace(0,1,N+1) fig = plt.figure(figsize=(10,4)) plt.plot(x,0*x,'o:',color='red') plt.xlim((0,1)) plt.xlabel('x',fontsize=16) plt.title('Illustration of discrete time points for h=%s'%(h),fontsize=32) plt.show() ``` ## The Difference Equation To convert the boundary problem into a difference equation we use 1st and 2nd order difference operators. The first derivative can be approximated by the difference operators: \begin{equation}D^{+}U_{i}=\frac{U_{i+1}-U_{i}}{h_{i+1}} \ \ \ \mbox{ Forward,} \end{equation} \begin{equation}D^{-}U_{i}=\frac{U_{i}-U_{i-1}}{h_i} \ \ \ \mbox{ Backward,} \end{equation} or \begin{equation}D^{0}U_{i}=\frac{U_{i+1}-U_{i-1}}{x_{i+1}-x_{i-1}} \ \ \ \mbox{ Centered.} \end{equation} The second derivative can be approxiamed \begin{equation}\delta_x^{2}U_{i}=\frac{2}{x_{i+1}-x_{i-1}}\left(\frac{U_{i+1}-U_{i}}{x_{i+1}-x_{i}}-\frac{U_{i}-U_{i-1}}{x_{i}-x_{i-1}}\right) \ \ \ \mbox{ Centered in $x$ direction} \end{equation} Given the differential equation \begin{equation} \frac{d^2 y}{dx^2} + 2x\frac{dy}{dx}+y=3x^2,\end{equation} the difference equation is, \begin{equation}\frac{1}{h^2}\left(y_{i-1}-2y_i+y_{i+1}\right)+2x_i\frac{y_{i+1}-y_{i-1}}{2h}+y_i=3x^2_i \ \ \ i=1,..,N-1. \end{equation} Rearranging the equation we have the system of N-1 equations \begin{equation}i=1: (\frac{1}{0.1^2}-\frac{2x_1}{0.2})\color{green}{y_{0}} -\left(\frac{2}{0.1^2}-1\right)y_1 +(\frac{1}{0.1^2}+\frac{2x_1}{0.2}) y_{2}=3x_1^2\end{equation} \begin{equation}i=2: (\frac{1}{0.1^2}-\frac{2x_2}{0.2})y_{1} -\left(\frac{2}{0.1^2}-1\right)y_2 +(\frac{1}{0.1^2}+\frac{2x_2}{0.2}) y_{3}=3x_2^2\end{equation} \begin{equation} ...\end{equation} \begin{equation}i=8: (\frac{1}{0.1^2}-\frac{2x_8}{0.2})y_{7} -\left(\frac{2}{0.1^2}-1\right)y_8 +(\frac{1}{0.1^2}+\frac{2x_8}{0.2})y_{9}=3x_8^2\end{equation} \begin{equation}i=9: (\frac{1}{0.1^2}-\frac{2x_9}{0.2})y_{8} -\left(\frac{2}{0.1^2}-1\right)y_9 +(\frac{1}{0.1^2}+\frac{2x_9}{0.2}) \color{green}{y_{10}}=3x_9^2\end{equation} where the green terms are the known boundary conditions. 
Rearranging the equations, we have the system of 9 equations
\begin{equation}i=1: -\left(\frac{2}{0.1^2}-1\right)y_1 +(\frac{1}{0.1^2}+\frac{2x_1}{0.2})y_{2}=-(\frac{1}{0.1^2}-\frac{2x_1}{0.2})\color{green}{y_{0}}+3x_1^2\end{equation}
\begin{equation}i=2: (\frac{1}{0.1^2}-\frac{2x_2}{0.2})y_{1} -\left(\frac{2}{0.1^2}-1\right)y_2 +(\frac{1}{0.1^2}+\frac{2x_2}{0.2}) y_{3}=3x_2^2\end{equation}
\begin{equation} ...\end{equation}
\begin{equation}i=8: (\frac{1}{0.1^2}-\frac{2x_8}{0.2})y_{7} -\left(\frac{2}{0.1^2}-1\right)y_8 +(\frac{1}{0.1^2}+\frac{2x_8}{0.2}) y_{9}=3x_8^2\end{equation}
\begin{equation}i=9: (\frac{1}{0.1^2}-\frac{2x_9}{0.2})y_{8} -\left(\frac{2}{0.1^2}-1\right)y_9 =-(\frac{1}{0.1^2}+\frac{2x_9}{0.2}) \color{green}{y_{10}}+3x_9^2\end{equation}
where the green terms are the known boundary conditions.

This system can be put into matrix form
\begin{equation} A\color{red}{\mathbf{y}}=\mathbf{b} \end{equation}
Where $A$ is a $9\times 9$ matrix, which can be represented graphically as:

```python
A=np.zeros((N-1,N-1))
# Diagonal
for i in range (0,N-1):
    A[i,i]=-(2/(h*h)-1)

for i in range (0,N-2):
    # Lower diagonal: row i+1 corresponds to the equation at node i+2,
    # so the first-derivative coefficient uses x_{i+2}=(i+2)h
    A[i+1,i]=1/(h*h)-2*(i+2)*h/(2*h)
    # Upper diagonal: row i corresponds to the equation at node i+1
    A[i,i+1]=1/(h*h)+2*(i+1)*h/(2*h)

plt.imshow(A)
plt.xlabel('i',fontsize=16)
plt.ylabel('j',fontsize=16)
plt.yticks(np.arange(N-1), np.arange(1,N-0.9,1))
plt.xticks(np.arange(N-1), np.arange(1,N-0.9,1))
clb=plt.colorbar()
clb.set_label('Matrix value')
plt.title('Matrix A',fontsize=32)
plt.tight_layout()
plt.subplots_adjust()
plt.show()
```

$\mathbf{y}$ is the unknown vector which contains the numerical approximations of $y$.
\begin{equation}
\color{red}{\mathbf{y}}=\color{red}{
\left(\begin{array}{c} y_1\\ y_2\\ y_3\\ .\\ .\\ y_8\\ y_9 \end{array}\right).}
\end{equation}

```python
y=np.zeros((N+1))
# Boundary Condition
y[0]=1
y[N]=2
```

and the right hand side is a known $9\times 1$ vector with the boundary conditions
\begin{equation}
\mathbf{b}=\left(\begin{array}{c}-99+3x_1^2\\ 3x_2^2\\ 3x_3^2\\ .\\ .\\ 3x_8^2\\ -218+3x_9^2 \end{array}\right)
\end{equation}

```python
b=np.zeros(N-1)
for i in range (0,N-1):
    b[i]=3*h*(i+1)*h*(i+1)

# Boundary Condition
b[0]=-y[0]*(1/(h*h)-2*1*h/(2*h))+b[0]
b[N-2]=-y[N]*(1/(h*h)+2*9*h/(2*h))+b[N-2]
print('b=')
print(b)
```

    b=
    [-9.8970e+01  1.2000e-01  2.7000e-01  4.8000e-01  7.5000e-01  1.0800e+00
      1.4700e+00  1.9200e+00 -2.1557e+02]

## Solving the system
To solve, invert the matrix $A$ such that
\begin{equation}A^{-1}Ay=A^{-1}b\end{equation}
\begin{equation}y=A^{-1}b\end{equation}
The plot below shows the graphical representation of $A^{-1}$.

```python
invA=np.linalg.inv(A)

plt.imshow(invA)
plt.xlabel('i',fontsize=16)
plt.ylabel('j',fontsize=16)
plt.yticks(np.arange(N-1), np.arange(1,N-0.9,1))
plt.xticks(np.arange(N-1), np.arange(1,N-0.9,1))
clb=plt.colorbar()
clb.set_label('Matrix value')
plt.title(r'Matrix $A^{-1}$',fontsize=32)
plt.tight_layout()
plt.subplots_adjust()
plt.show()

y[1:N]=np.dot(invA,b)
```

## Result
The plot below shows the approximate solution of the Boundary Value Problem (blue v).

```python
fig = plt.figure(figsize=(8,4))
plt.plot(x,y,'v',label='Finite Difference')
plt.xlabel('x')
plt.ylabel('y')
plt.legend(loc='best')
plt.show()
```
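As a cross-check on the finite difference result, one can compare against an adaptive solver. Below is a minimal sketch using `scipy.integrate.solve_bvp` (not used in the original notebook), with the equation rewritten as the first-order system $y' = v$, $v' = 3x^2 - 2xv - y$:

```python
# Sketch: cross-check with scipy's adaptive BVP solver (not part of
# the original notebook). Imposes y(0)=1 and y(1)=2.
import numpy as np
from scipy.integrate import solve_bvp

def rhs(x, Y):
    y, v = Y
    return np.vstack([v, 3*x**2 - 2*x*v - y])

def bc(Ya, Yb):
    return np.array([Ya[0] - 1, Yb[0] - 2])

x_init = np.linspace(0, 1, 11)
Y_init = np.zeros((2, x_init.size))
sol = solve_bvp(rhs, bc, x_init, Y_init)
# y values on the same grid as the finite difference solution, for comparison
print(sol.sol(np.linspace(0, 1, 11))[0])
```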
2cc378795b62ea742c469674720d827519c366dd
66,063
ipynb
Jupyter Notebook
Chapter 06 - Boundary Value Problems/604_Boundary Value Problem Example 2.ipynb
jjcrofts77/Numerical-Analysis-Python
97e4b9274397f969810581ff95f4026f361a56a2
[ "MIT" ]
69
2019-09-05T21:39:12.000Z
2022-03-26T14:00:25.000Z
Chapter 06 - Boundary Value Problems/604_Boundary Value Problem Example 2.ipynb
jjcrofts77/Numerical-Analysis-Python
97e4b9274397f969810581ff95f4026f361a56a2
[ "MIT" ]
null
null
null
Chapter 06 - Boundary Value Problems/604_Boundary Value Problem Example 2.ipynb
jjcrofts77/Numerical-Analysis-Python
97e4b9274397f969810581ff95f4026f361a56a2
[ "MIT" ]
13
2021-06-17T15:34:04.000Z
2022-01-14T14:53:43.000Z
161.129268
17,294
0.862858
true
2,742
Qwen/Qwen-72B
1. YES 2. YES
0.894789
0.808067
0.72305
__label__eng_Latn
0.475997
0.518219
# 13 Root Finding

An important tool in the computational toolbox is to find roots of equations for which no closed-form solutions exist: we want to find the roots $x_0$ of

$$
f(x_0) = 0
$$

## Problem: Projectile range

The equations of motion for the projectile with linear air resistance (see *12 ODE applications*) can be solved exactly.

As a reminder: the linear drag force is

$$
\mathbf{F}_1 = -b_1 \mathbf{v}\\
b := \frac{b_1}{m}
$$

Equations of motion with force due to gravity $\mathbf{g} = -g \hat{\mathbf{e}}_y$

\begin{align}
\frac{d\mathbf{r}}{dt} &= \mathbf{v}\\
\frac{d\mathbf{v}}{dt} &= - g \hat{\mathbf{e}}_y -b \mathbf{v}
\end{align}

### Analytical solution of the equations of motion

(Following Wang Ch 3.3.2)

Solve the $x$ component of the velocity

$$
\frac{dv_x}{dt} = -b v_x
$$

by integration:

$$
v_x(t) = v_{0x} \exp(-bt)
$$

The drag force reduces the forward velocity to 0.

Integrate again to get the $x(t)$ component

$$
x(t) = x_0 + \frac{v_{0x}}{b} \left[1 - \exp(-bt)\right]
$$

Integrating the $y$ component of the velocity

$$
\frac{dv_y}{dt} = -g - b v_y
$$

gives

$$
v_y = \left(v_{0y} + \frac{g}{b}\right) \exp(-bt) - \frac{g}{b}
$$

and integrating again

$$
y(t) = y_0 + \frac{v_{0y} + \frac{g}{b}}{b} \left[1 - \exp(-bt)\right] - \frac{g}{b} t
$$

(Note: This shows immediately that the *terminal velocity* is

$$
\lim_{t\rightarrow\infty} v_y(t) = - \frac{g}{b},
$$

i.e., the force of gravity is balanced by the drag force.)

#### Analytical trajectory

To obtain the **trajectory $y(x)$**, eliminate time (and, for convenience, use the origin as the initial starting point, $x_0 = 0$ and $y_0 = 0$). Solve $x(t)$ for $t$

$$
t = -\frac{1}{b} \ln \left(1 - \frac{bx}{v_{0x}}\right)
$$

and insert into $y(t)$:

$$
y(x) = \frac{x}{v_{0x}} \left( v_{0y} + \frac{g}{b} \right) + \frac{g}{b^2} \ln \left(1 - \frac{bx}{v_{0x}}\right)
$$

#### Plot

Plot the analytical solution $y(x)$ for $\theta = 30^\circ$ and $v_0 = 100$ m/s.

```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
```

The function `y_lindrag()` should compute $y(x)$.

```python
def y_lindrag(x, v0, b1=0.2, g=9.81, m=0.5):
    b = b1/m
    v0x, v0y = v0
    return x/v0x * (v0y + g/b) + g/(b*b) * np.log(1 - b*x/v0x)

def initial_v(v, theta):
    x = np.deg2rad(theta)
    return v * np.array([np.cos(x), np.sin(x)])
```

The analytical function drops *very* rapidly towards the end ($> 42$ m – found by manual trial-and-error plotting...) so in order to nicely plot the function we use a fairly coarse sampling of points along $x$ for the range $0 \le x < 42$ and very fine sampling for the last 3 m ($42 \le x < 45$):

```python
X = np.concatenate([np.linspace(0, 42, 100), np.linspace(42, 45, 10000)])
```

Evaluate the function for all $x$ values:

```python
Y = y_lindrag(X, initial_v(100, 30), b1=1)
```

    /Users/oliver/anaconda3/envs/phy494/lib/python3.6/site-packages/ipykernel_launcher.py:4: RuntimeWarning: invalid value encountered in log
      after removing the cwd from sys.path.

(The warning can be ignored; it just means that some of our `X` values were not appropriate and outside the range of validity – when the argument of the logarithm becomes ≤ 0.)

To indicate the ground we also plot a dashed black line: note that the analytical solution goes below the dashed line.
```python
plt.plot(X, Y)
plt.xlabel("$x$ (m)")
plt.ylabel("$y$ (m)")
plt.hlines([0], X[0], X[-1], colors="k", linestyles="--");
```

Compare to the numerical solution (from **12 ODE Applications**):

```python
import ode

def simulate(v0, h=0.01, b1=0.2, g=9.81, m=0.5):

    def f(t, y):
        # y = [x, y, vx, vy]
        return np.array([y[2], y[3],
                         -b1/m * y[2], -g - b1/m * y[3]])

    vx, vy = v0
    t = 0
    positions = []
    y = np.array([0, 0, vx, vy], dtype=np.float64)

    while y[1] >= 0:
        positions.append([t, y[0], y[1]])  # record t, x and y
        y[:] = ode.rk4(y, f, t, h)
        t += h

    return np.array(positions)
```

```python
r = simulate(initial_v(100, 30), h=0.01, b1=1)
```

```python
plt.plot(X, Y, lw=2, label="analytical")
plt.plot(r[:, 1], r[:, 2], '--', label="RK4")
plt.legend(loc="best")
plt.xlabel("$x$ (m)"); plt.ylabel("$y$ (m)")
plt.hlines([0], X[0], X[-1], colors="k", linestyles="--");
```

The RK4 solution tracks the analytical solution perfectly (and we also programmed it to not go below ground...)

OPTIONAL: Show the residual

$$
r = y_\text{numerical}(x) - y_\text{analytical}
$$

```python
residual = r[:, 2] - y_lindrag(r[:, 1], initial_v(100, 30), b1=1)
```

```python
plt.plot(r[:, 1], residual)
plt.xlabel("$x$ (m)"); plt.ylabel("residual $r$ (m)");
```

### Predict the range $R$

How far does the ball or projectile fly, i.e., what is the value $x=R$ where $y(R) = 0$:

$$
\frac{R}{v_{0x}} \left( v_{0y} + \frac{g}{b} \right) + \frac{g}{b^2} \ln \left(1 - \frac{bR}{v_{0x}}\right) = 0
$$

This *transcendental equation* cannot be solved in terms of elementary functions. Use a **root finding** algorithm.

## Root-finding with the Bisection algorithm

**Bisection** is the simplest (but very robust) root finding algorithm that uses trial-and-error:

* bracket the root
* refine the brackets
* see [13_Root-finding-algorithms (PDF)](13_Root-finding-algorithms.pdf)

More specifically:

1. determine a bracket that contains the root: $a < x_0 < b$ (i.e., an interval $[a, b]$ with $f(a) > 0$ and $f(b) < 0$ or $f(a) < 0$ and $f(b) > 0$)
2. cut bracket in half: $x' = \frac{1}{2}(a + b)$
3. determine in which half the root lies: either in $[a, x']$ or in $[x', b]$: If $f(a) f(x') > 0$ then the root lies in the right half $[x', b]$, otherwise the left half $[a, x']$.
4. Change the boundaries $a$ or $b$.
5. repeat until $|f(x')| < \epsilon$.

### Implementation of Bisection

- Test that the initial bracket contains a root; if not, return `None` (and possibly print a warning).
- If either of the bracket points is a root then return the bracket point.
- Allow `Nmax` iterations or until the convergence criterion `eps` is reached.
- Bonus: print a message if no root was found after `Nmax` iterations, but print the best guess and the error (but return `None`).

```python
def bisection(f, a, b, Nmax=100, eps=1e-14):
    fa, fb = f(a), f(b)
    if (fa*fb) > 0:
        print("bisect: Initial bracket [{0}, {1}] "
              "does not contain a single root".format(a, b))
        return None
    if np.abs(fa) < eps:
        return a
    if np.abs(fb) < eps:
        return b

    for iteration in range(Nmax):
        x = (a + b)/2
        fx = f(x)
        if f(a) * fx > 0:   # root is not between a and x
            a = x
        else:
            b = x
        if np.abs(fx) < eps:
            break
    else:
        print("bisect: no root found after {0} iterations (eps={1}); "
              "best guess is {2} with error {3}".format(Nmax, eps, x, fx))
        x = None
    return x
```

### Finding the range with the bisection algorithm

Define the trial function `f`.

Note that our `y_lindrag()` function depends on `x` **and** `v` but `bisection()` only accepts functions `f` that depend on a *single variable*, $f(x)$.
We therefore have to wrap `y_lindrag(x, v)` into a function `f(x)` that sets `v` already to a value *outside* the function: [Python's scoping rules](https://stackoverflow.com/questions/291978/short-description-of-the-scoping-rules#292502) say that inside the function `f(x)`, the variable `x` has the value assigned to the argument of `f(x)` but any other variables such as `v` or `b1`, which were *not defined inside `f`*, will get the value that they had *outside `f`* in the *enclosing code*. ```python def f(x): v0 = initial_v(100, 30) b1 = 1. return y_lindrag(x, v0, b1=b1) ``` The initial bracket $[a_\text{initial}, b_\text{initial}]$ is a little bit difficult for this function: choose the right bracket near the point where the argument of the logarithm becomes 0 (which is actually the maximum $x$ value $\lim_{t\rightarrow +\infty} x(t) = \frac{v_{0x}}{b}$): $$ b_\text{initial} = \frac{v_{0x}}{b} - \epsilon' $$ where $\epsilon'$ is a small number. ```python v = initial_v(100, 30) b1 = 1. m = 0.5 b = b1/m bisection(f, 0.1, v[0]/b - 1e-12, eps=1e-6) ``` 43.30067423347077 Note that this solution is *not* the maximum value $\lim_{t\rightarrow +\infty} x(t) = \frac{v_{0x}}{b}$: ```python v[0]/b ``` 43.30127018922194 ### Find the range as a function of the initial angle ```python b1 = 1. m = 0.5 b = b1/m v0 = 100 u = [] for theta in np.arange(1, 90): v = initial_v(v0, theta) def f(x): return y_lindrag(x, v, b1=b1) R = bisection(f, 0.1, v[0]/b - 1e-16, eps=1e-5) if R is not None: u.append((theta, R)) u = np.array(u) ``` /Users/oliver/anaconda3/envs/phy494/lib/python3.6/site-packages/ipykernel_launcher.py:4: RuntimeWarning: divide by zero encountered in log after removing the cwd from sys.path. ```python plt.plot(u[:, 0], u[:, 1]) plt.xlabel(r"launch angle $\theta$ ($^\circ$)") plt.ylabel(r"range $R$ (m)"); ``` Write a function `find_range()` to calculate the range for a given initial velocity $v_0$ and plot $R(\theta)$ for $10\,\text{m/s} ≤ v_0 ≤ 100\,\text{m/s}$. ```python def find_range(v0, b1=1, m=0.5): b = b1/m u = [] for theta in np.arange(1, 90): v = initial_v(v0, theta) def f(x): return y_lindrag(x, v, b1=b1) R = bisection(f, 0.1, v[0]/b - 1e-16, eps=1e-5) if R is not None: u.append((theta, R)) return np.array(u) ``` ```python for v0 in (10, 25, 50, 75, 100): u = find_range(v0) plt.plot(u[:, 0], u[:, 1], label="{} m/s".format(v0)) plt.legend(loc="best") plt.xlabel(r"$\theta$ (degrees)") plt.ylabel(r"range $R$ (m)"); ``` As a bonus, find the dependence of the *optimum launch angle* on the initial velocity, i.e., that angle that leads to the largest range: ```python np.argmax(u[:, 1]) ``` 10 ```python velocities = np.linspace(5, 100, 100) results = [] # (v0, theta_opt) for v0 in velocities: u = find_range(v0) thetas, ranges = u.transpose() # find index for the largest range and pull corresponding theta theta_opt = thetas[np.argmax(ranges)] results.append((v0, theta_opt)) results = np.array(results) plt.plot(results[:, 0], results[:, 1]) plt.xlabel(r"velocity $v_0$ (m/s)") plt.ylabel(r"$\theta_\mathrm{best}$ (degrees)"); ``` The launch angle decreases with the velocity. The steps in the graph are an artifact of choosing to only calculate the trajectories for integer angles (see `for theta in np.arange(1, 90)` in `find_range()`). ## Newton-Raphson algorithm (see derivation in class and in the PDF or [Newton's Method](http://mathworld.wolfram.com/NewtonsMethod.html) on MathWorld) ### Activity: Implement Newton-Raphson 1. Implement the Newton-Raphson algorithm 2. Test with $g(x)$. 
$$
g(x) = 2 \cos x - x
$$

3. Bonus: test the performance of `newton_raphson()` against `bisection()`.

```python
def g(x):
    return 2*np.cos(x) - x
```

```python
xvals = np.linspace(0, 7, 30)
plt.plot(xvals, np.zeros_like(xvals), 'k--')
plt.plot(xvals, g(xvals))
```

```python
def newton_raphson(f, x, h=1e-3, Nmax=100, eps=1e-14):
    """Find root x0 so that f(x0)=0 with the Newton-Raphson algorithm"""
    for iteration in range(Nmax):
        fx = f(x)
        if np.abs(fx) < eps:
            break
        df = (f(x + h/2) - f(x - h/2))/h
        Delta_x = -fx/df
        x += Delta_x
    else:
        print("Newton-Raphson: no root found after {0} iterations (eps={1}); "
              "best guess is {2} with error {3}".format(Nmax, eps, x, fx))
        x = None
    return x
```

```python
x0 = newton_raphson(g, 2)
print(x0)
```

```python
g(x0)
```

But note that the algorithm only converges well near the root. With other values it might not converge at all:

```python
newton_raphson(g, 3)
```

```python
newton_raphson(g, 10)
```

```python
newton_raphson(g, 15)
```

Let's look at how Newton-Raphson iterates: also return all intermediate $x$ values:

```python
def newton_raphson_with_history(f, x, h=1e-3, Nmax=100, eps=1e-14):
    xvals = []
    for iteration in range(Nmax):
        fx = f(x)
        if np.abs(fx) < eps:
            break
        df = (f(x + h/2) - f(x - h/2))/h
        Delta_x = -fx/df
        x += Delta_x
        xvals.append(x)
    else:
        print("Newton-Raphson: no root found after {0} iterations (eps={1}); "
              "best guess is {2} with error {3}".format(Nmax, eps, x, fx))
        x = None
    return x, np.array(xvals)
```

```python
x = {}

x0, xvals = newton_raphson_with_history(g, 1.5)
x[1.5] = xvals
print("root x0 = {} after {} iterations".format(x0, len(xvals)))

x0, xvals = newton_raphson_with_history(g, 5)
x[5] = xvals
print("root x0 = {} after {} iterations".format(x0, len(xvals)))

x0, xvals = newton_raphson_with_history(g, 10)
x[10] = xvals
print("root x0 = {} after {} iterations".format(x0, len(xvals)))
```

```python
for xstart in sorted(x.keys()):
    plt.semilogx(x[xstart], label=str(xstart))
plt.legend(loc="best")
plt.xlabel("iteration")
plt.ylabel("current guess for root $x_0$");
```
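For the bonus item, here is a minimal sketch that counts iterations for both methods on $g(x)$. The loops mirror the implementations above (instrumented copies, with matched tolerances), so the comparison is between the methods rather than the code:

```python
# Sketch for the bonus: compare iteration counts of bisection and
# Newton-Raphson on g(x) = 2 cos(x) - x.
def bisection_count(f, a, b, eps=1e-12, Nmax=200):
    for n in range(1, Nmax + 1):
        x = (a + b) / 2
        if f(a) * f(x) > 0:
            a = x
        else:
            b = x
        if abs(f(x)) < eps:
            return x, n
    return None, Nmax

def newton_count(f, x, h=1e-3, eps=1e-12, Nmax=200):
    for n in range(1, Nmax + 1):
        fx = f(x)
        if abs(fx) < eps:
            return x, n
        df = (f(x + h/2) - f(x - h/2)) / h
        x -= fx / df
    return None, Nmax

print("bisection:", bisection_count(g, 0, 2))   # tens of iterations
print("newton   :", newton_count(g, 1.5))        # a handful of iterations
```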
04a0da4f184a8eee1a77774014be3435a54f0407
146,300
ipynb
Jupyter Notebook
13_root_finding/13-Root-finding.ipynb
ASU-CompMethodsPhysics-PHY494/PHY494-resources-2020
20e08c20995eab567063b1845487e84c0e690e96
[ "CC-BY-4.0" ]
null
null
null
13_root_finding/13-Root-finding.ipynb
ASU-CompMethodsPhysics-PHY494/PHY494-resources-2020
20e08c20995eab567063b1845487e84c0e690e96
[ "CC-BY-4.0" ]
null
null
null
13_root_finding/13-Root-finding.ipynb
ASU-CompMethodsPhysics-PHY494/PHY494-resources-2020
20e08c20995eab567063b1845487e84c0e690e96
[ "CC-BY-4.0" ]
null
null
null
148.678862
38,592
0.880752
true
4,407
Qwen/Qwen-72B
1. YES 2. YES
0.833325
0.810479
0.675392
__label__eng_Latn
0.89026
0.407493
# Generating Functions

Generating functions are functions that encode sequences of numbers as the coefficients of power series.

Consider a set $S$ with $n$ elements. Pretend there is a 'picture function' $P(s)\ \forall s\in S$. The picture function enables one, for example, to write the multiset $\{1,1,2\}$ as an expression $P(1)^2P(2)$.

The enumerating function $E_P(s)\ :s\in S$ is used to list combinations subject to constraints. For example, the enumerating function for all multisets that include some element $a$ either one or two times and some element $b$ between zero and two times is written:

\begin{equation}
\begin{array}{rl}
E_P(s) &= P(a)+ P(a)^2 + P(a)P(b) + P(a)^2P(b) + P(a)P(b)^2 + P(a)^2P(b)^2\\
&= \left(P(a) + P(a)^2\right)\left(1 + P(b) + P(b)^2\right)
\end{array}
\end{equation}

This hints at how generating functions can make it easier to keep track of combinations.

## Example: Binomial Coefficients

Consider the case of a collection of $n$ indistinguishable objects $s$, and write $P(s) = x$. Then the enumerator for selecting any subset of those $n$ objects is given by:

\begin{equation}
E_P(s) = \prod_{i=1}^n (x^0+x^1) = (1+x)^n = \sum_{i=0}^n {n \choose i}x^i
\end{equation}

Where $(x^0 + x^1)$ corresponds to including an element zero or one times. The exponent encodes how many objects were included in a particular subset. This is one way of "proving" the binomial theorem, and one can say $(1+x)^n$ is the generating function for the binomial coefficients ${n \choose i}$.

## Example: Basket of Goods

An apple costs $20c$, a pear costs $25c$ and a banana costs $30c$. How many different fruit baskets can be bought for $100c$?

By replacing the picture function $P(s)$ with $x$, it was possible to identify subsets of $n$ objects by looking at the exponent of $x^n$ in the enumerating function. In this case, the exponent is supposed to show the price. This can be done by writing $P(apple) = x^{20}, P(pear) = x^{25}$ and $P(banana) = x^{30}$.

\begin{equation}
E_P(s) = \left( \sum_{i=0}^5 x^{20i} \right)\left( \sum_{j=0}^4 x^{25j} \right)\left( \sum_{k=0}^3 x^{30k} \right)
\end{equation}

Which results in some power series of the form:

\begin{equation}
E_P(s) = 1x^0 + 1x^{20} + 1x^{25} + 1x^{30} + 1x^{40} +... + 2x^{60} + ... + 1x^{290}
\end{equation}

To obtain the number of combinations that correspond to a cost of exactly $100c$, one can apply the operator $\frac{1}{n!}\frac{d^n}{dx^n}$ and set $x=0$ to extract the desired coefficient. This is what is done with moment generating functions in statistics. But actually it is easier to think through what the coefficients will be, so that:

\begin{equation}
\sum_{l=0}^{n+m+h}d_l x^l = \left(\sum_{i=0}^n a_i x^i\right)\left(\sum_{j=0}^m b_j x^j\right)\left(\sum_{k=0}^h c_k x^k\right)
\end{equation}

\begin{equation}
d_l = \sum_{\begin{array}{c}i,j,k\\i+j+k=l\end{array}} a_i b_j c_k
\end{equation}

If the coefficients were all '1' for every index, then:

\begin{equation}
d_l = \sum_{\begin{array}{c}i,j,k\\i+j+k=l\end{array}} 1 = {l + 3 - 1 \choose l}
\end{equation}

Which is the number of all multisets of size $l$ drawn from 3 classes. For the basket of goods, however, the coefficients are equal to 1 only at exponents that are multiples of 20, 25 and 30 respectively (up to the caps), and zero otherwise; counting the index triples that sum to 100 gives exactly 4 combinations. Hence, $d_{100} = 4$ for the basket of goods above.

The sum can also be rewritten:

\begin{equation}
d_l = \sum^l_{i=0}\sum^{l-i}_{j=0}\sum^{l-i-j}_{k= l-i-j} a_i b_j c_k
\end{equation}

Where the last sum is actually just over a single term.
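The basket count can be checked mechanically: represent each factor by its array of coefficients and multiply the polynomials together, which is a discrete convolution. A minimal sketch:

```python
# Sketch: verify d_100 = 4 for the fruit basket by polynomial
# multiplication of the three coefficient arrays.
import numpy as np

apple  = np.zeros(101); apple[::20]  = 1   # 0..5 apples at 20c each
pear   = np.zeros(101); pear[::25]   = 1   # 0..4 pears at 25c each
banana = np.zeros(91);  banana[::30] = 1   # 0..3 bananas at 30c each

d = np.convolve(np.convolve(apple, pear), banana)
print(int(d[100]))  # -> 4
```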
Of course, this is the discrete version of a convolution. So, it is no surprise that the addition of random variables winds up being a convolution.

## Example: Dice

How many ways are there for $n$ dice with $k$ faces to show $s$ eyes?

\begin{equation}
E_P = \left(\sum_{i=0}^\infty a_i x^i\right)^n = \sum_{i=0}^{\infty}d_i x^i
\end{equation}

Where $a_i = 1$ for $i\in\{1,\ldots,k\}$ and $a_i = 0$ otherwise.

\begin{equation}
E_P = \left(\sum_{i=1}^k x^i\right)^n = \left(x(1-x^k)\sum_{i=0}^\infty x^i\right)^n = x^n\left(\frac{1-x^k}{1-x}\right)^n = x^n \left(\sum_{i=0}^n (-1)^i {n \choose i} x^{ik}\right)\left(\sum_{j=0}^{\infty} (-1)^j {-n \choose j} x^j\right)
\end{equation}

The coefficient for $x^s$ is given by the sum:

\begin{equation}
d_s = \sum_{ki+j=s-n} (-1)^{i+j}{n \choose i}{-n \choose j}\\
i \in [0,n] \\
j \in [0,\infty]
\end{equation}

Where the indices $i$ and $j$ satisfy $ki+j = s-n$. For example, for $s=7$, $n=2$ and $k=6$:

\begin{equation}
6i+j = 7-2 = 5
\end{equation}

Holds for $i=0, j=5$:

\begin{equation}
d_s = (-1)^5 {2 \choose 0}{-2 \choose 5} = (-1)^{10} 1 {2+5-1 \choose 5} = {6 \choose 5} = \frac{6!}{5!1!} = 6
\end{equation}

Indeed, there are 6 ways for two d6 to add to 7:

\begin{equation}
[6,1],[5,2],[4,3],[3,4],[2,5],[1,6]
\end{equation}

```python
import warnings

def memoize(func):
    """
    memoizing wrapper to speed up recursion by
    keeping track of previously calculated values.
    """
    S = {}
    def wrappingfunction(*args):
        if args not in S:
            S[args] = func(*args)
        return S[args]
    return wrappingfunction

@memoize
def factorial(x):
    if x == 0:
        return 1
    else:
        res = 1
        for i in range(1,x+1):
            res *= i
        return res

@memoize
def binomial(n,k):
    if n > 40 or k > 40:
        warnings.warn('careful with large n or k - unresolved numerical stability issues.')
    if n >= k and k >= 0:
        return int(factorial(n)/(factorial(n-k)*factorial(k)))
    elif n < 0 and k >= 0:
        return int((-1)**k * binomial(-n+k-1,k))
    elif n < 0 and k <= n:
        return int((-1)**(n-k) * binomial(-k-1,n-k))
    else:
        return 0

f = lambda x: str(x) if x != 0 else '-'
N = 5
print('\t'.join([' ']+['k=%i' % k for k in range(-N,N+1)]))
for n in range(-N,N+1)[::-1]:
    print('\t'.join(['n=%i' % n]+[f(binomial(n,k)) for k in range(-N,N+1)]))
```

     	k=-5	k=-4	k=-3	k=-2	k=-1	k=0	k=1	k=2	k=3	k=4	k=5
    n=5	-	-	-	-	-	1	5	10	10	5	1
    n=4	-	-	-	-	-	1	4	6	4	1	-
    n=3	-	-	-	-	-	1	3	3	1	-	-
    n=2	-	-	-	-	-	1	2	1	-	-	-
    n=1	-	-	-	-	-	1	1	-	-	-	-
    n=0	-	-	-	-	-	1	-	-	-	-	-
    n=-1	1	-1	1	-1	1	1	-1	1	-1	1	-1
    n=-2	-4	3	-2	1	-	1	-2	3	-4	5	-6
    n=-3	6	-3	1	-	-	1	-3	6	-10	15	-21
    n=-4	-4	1	-	-	-	1	-4	10	-20	35	-56
    n=-5	1	-	-	-	-	1	-5	15	-35	70	-126

```python
def d_s(s,n=2,k=6):
    """
    Number of Dice Combinations

    n = number of dice
    k = number of dice faces (faces: 1,2,3,4,5,...,k)
    s = sum of dice throw

    Beware of numerical stability issues for N >~ 45
    """
    res = 0
    for i in range(0,int((s-n)/k)+1):
        j = s-n-k*i
        res += (-1)**(i+j) * binomial(n,i) * binomial(-n,j)
    return res

print('2 d6')
for s in range(1,14):
    print('s = %i\t' % s, 'd = %i' % d_s(s,n=2,k=6))
```

    2 d6
    s = 1	 d = 0
    s = 2	 d = 1
    s = 3	 d = 2
    s = 4	 d = 3
    s = 5	 d = 4
    s = 6	 d = 5
    s = 7	 d = 6
    s = 8	 d = 5
    s = 9	 d = 4
    s = 10	 d = 3
    s = 11	 d = 2
    s = 12	 d = 1
    s = 13	 d = 0

```python
import matplotlib.pyplot as plt
import numpy as np

k = 6
N = 31
ss = list(range(1,N*(k-2)))
nn = list(range(1,N))

plt.figure(figsize=(12,5))
for n in nn:
    d = [d_s(s,n=n,k=k) for s in ss]
    d = np.array(d)/np.sum(d)
    plt.plot(ss,d,'.-',linewidth=2,markersize=8)
    plt.text(x=3.5*n+0.2,y=max(d)+0.0025,s='%i' % n)
plt.xlabel('s')
plt.ylabel('p(s)')
plt.title('Probability of d6 Throw')
plt.savefig('./img/d6throw.png')

xlim = plt.xlim()

k = 20
N = 31
ss = list(range(1,N*(k-2)))
nn = list(range(1,N))

plt.figure(figsize=(12,5))
for n in nn:
    d = [d_s(s,n=n,k=k) for s in ss]
    d = np.array(d)/np.sum(d)
    plt.plot(ss,d,'.-',linewidth=2,markersize=8)
    if 10.5*n <= max(xlim):
        plt.text(x=10.5*n+0.2,y=max(d)+0.00075,s='%i' % n)
plt.xlim(*xlim)
plt.xlabel('s')
plt.ylabel('p(s)')
plt.title('Probability of d20 Throw')
plt.savefig('./img/d20throw.png')
```
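As a final sanity check, the closed form can be compared against brute-force enumeration for small $n$; a minimal sketch (assuming the `d_s` function above is in scope):

```python
# Sketch: brute-force check of d_s against direct enumeration
# of all k^n dice outcomes (only feasible for small n).
from itertools import product

def d_s_bruteforce(s, n=2, k=6):
    return sum(1 for faces in product(range(1, k + 1), repeat=n) if sum(faces) == s)

for s in range(2, 13):
    assert d_s(s, n=2, k=6) == d_s_bruteforce(s, n=2, k=6)
print("closed form matches brute force for 2 d6")
```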
72b31a8f79ce5452c0c8f85cda868faae37e325e
235,977
ipynb
Jupyter Notebook
Combinatorics - Generating Functions.ipynb
jpbm/probabilism
a2f5c1595aed616236b2b889195604f365175899
[ "MIT" ]
null
null
null
Combinatorics - Generating Functions.ipynb
jpbm/probabilism
a2f5c1595aed616236b2b889195604f365175899
[ "MIT" ]
null
null
null
Combinatorics - Generating Functions.ipynb
jpbm/probabilism
a2f5c1595aed616236b2b889195604f365175899
[ "MIT" ]
null
null
null
655.491667
127,972
0.939816
true
3,084
Qwen/Qwen-72B
1. YES 2. YES
0.942507
0.901921
0.850066
__label__eng_Latn
0.944737
0.813322
Trusted Notebook" width="250 px" align="left"> ## _*Quantum Counterfeit Coin Problem*_ The latest version of this notebook is available on https://github.com/QISKit/qiskit-tutorial. *** ### Contributors Rudy Raymond, Takashi Imamichi ## Introduction The counterfeit coin problem is a classic puzzle first proposed by E. D. Schell in the January 1945 edition of the *American Mathematical Monthly*: >You have eight similar coins and a beam balance. At most one coin is counterfeit and hence underweight. How can you detect whether there is an underweight coin, and if so, which one, using the balance only twice? The answer to the above puzzle is affirmative. What happens when we can use a quantum beam balance? Given a quantum beam balance and a counterfeit coin among $N$ coins, there is a quantum algorithm that can find the counterfeit coin by using the quantum balance only once (and independent of $N$, the number of coins!). On the other hand, any classical algorithm requires at least $\Omega(\log{N})$ uses of the beam balance. In general, for a given $k$ counterfeit coins of the same weight (but different from the majority of normal coins), there is [a quantum algorithm](https://arxiv.org/pdf/1009.0416.pdf) that queries the quantum beam balance for $O(k^{1/4})$ in contrast to any classical algorithm that requires $\Omega(k\log{(N/k)})$ queries to the beam balance. This is one of the wonders of quantum algorithms, in terms of query complexity that achieves quartic speed-up compared to its classical counterpart. ## Quantum Procedure Hereafter we describe a step-by-step procedure to program the Quantum Counterfeit Coin Problem for $k=1$ counterfeit coin with the IBM Q Experience. [Terhal and Smolin](https://arxiv.org/pdf/quant-ph/9705041.pdf) were the first to show that it is possible to identify the false coin with a single query to the quantum beam balance. ### Preparing the environment First, we prepare the environment. ```python # useful additional packages import matplotlib.pyplot as plt %matplotlib inline import numpy as np # useful math functions from math import pi, cos, acos, sqrt # importing the QISKit from qiskit import QuantumProgram import Qconfig # import basic plot tools from qiskit.tools.visualization import plot_histogram ``` ### Setting the number of coins and the index of false coin Next, we set the number of coins and the index of the counterfeit coin. The former determines the quantum superpositions used by the algorithm, while the latter determines the quantum beam balance. ```python M = 16 # Maximum number of physical qubits available numberOfCoins = 8 # This number should be up to M-1, where M is the number of qubits available indexOfFalseCoin = 6 # This should be 0, 1, ..., numberOfCoins - 1, where we use Python indexing if numberOfCoins < 4 or numberOfCoins >= M: raise Exception("Please use numberOfCoins between 4 and ", M-1) if indexOfFalseCoin < 0 or indexOfFalseCoin >= numberOfCoins: raise Exception("indexOfFalseCoin must be between 0 and ", numberOfCoins-1) ``` ### Querying the quantum beam balance As in a classical algorithm to find the false coin, we will use the balance by placing the same number of coins on the left and right pans of the beam. The difference is that in a quantum algorithm, we can query the beam balance in superposition. 
To query the quantum beam balance, we use a binary query string to encode coins placed on the pans; namely, the binary string `01101010` means to place coins whose indices are 1, 2, 4, and 6 on the pans, while the binary string `01110111` means to place all coins but those with indices 0 and 4 on the pans. Notice that we do not care how the selected coins are placed on the left and right pans, because their results are the same: it is balanced when no false coin is included, and tilted otherwise.

For example, if the number of coins is $8$ and the index of the false coin is $3$, the query `01101010` will result in balanced (or, $0$), while the query `01110111` will result in tilted (or, $1$).

Using two quantum registers to query the quantum balance, where the first register is for the query string and the second register is for the result of the quantum balance, we can write the query to the quantum balance (omitting the normalization of the amplitudes):

\begin{eqnarray}
|01101010\rangle\Big( |0\rangle - |1\rangle \Big) &\xrightarrow{\mbox{Quantum Beam Balance}}& |01101010\rangle\Big( |0\oplus 0\rangle - |1 \oplus 0\rangle \Big) = |01101010\rangle\Big( |0\rangle - |1\rangle \Big)\\
|01110111\rangle\Big( |0\rangle - |1\rangle \Big) &\xrightarrow{\mbox{Quantum Beam Balance}}& |01110111\rangle\Big( |0 \oplus 1\rangle - |1 \oplus 1\rangle \Big) = (-1) |01110111\rangle\Big( |0 \rangle - |1 \rangle \Big)
\end{eqnarray}

Notice that in the above equation, the phase is flipped if and only if the binary query string is $1$ at the index of the false coin. Let $x \in \left\{0,1\right\}^N$ be the $N$-bit query string (which contains an even number of $1$s), and let $e_k \in \left\{0,1\right\}^N$ be the binary string which is $1$ at the index of the false coin and $0$ otherwise. Clearly,

$$
|x\rangle\Big(|0\rangle - |1\rangle \Big) \xrightarrow{\mbox{Quantum Beam Balance}} \left(-1\right) ^{x\cdot e_k} |x\rangle\Big(|0\rangle - |1\rangle \Big),
$$

where $x\cdot e_k$ denotes the inner product of $x$ and $e_k$.

Here, we will prepare the superposition of all binary query strings with an even number of $1$s. Namely, we want a circuit that produces the following transformation:

$$
|0\rangle \rightarrow \frac{1}{2^{(N-1)/2}}\sum_{x\in \left\{0,1\right\}^N~\mbox{and}~|x|\equiv 0 \mod 2} |x\rangle,
$$

where $|x|$ denotes the Hamming weight of $x$.

To obtain such a superposition of states with an even number of $1$s, we can apply the Hadamard transformation to $|0\rangle$ to obtain the superposition $\sum_{x\in\left\{0,1\right\}^N} |x\rangle$, and check if the Hamming weight of $x$ is even. It can be shown that the Hamming weight of $x$ is even if and only if $x_1 \oplus x_2 \oplus \ldots \oplus x_N = 0$. Thus, we can transform:

\begin{equation}
|0\rangle|0\rangle \xrightarrow{H^{\otimes N}} \frac{1}{2^{N/2}}\sum_x |x\rangle |0\rangle \xrightarrow{\mbox{XOR}(x)} \frac{1}{2^{N/2}}\sum_x |x\rangle |0\oplus x_1 \oplus x_2 \oplus \ldots \oplus x_N\rangle
\end{equation}

The right-hand side of the equation can be split according to the result of $\mbox{XOR}(x) = x_1 \oplus \ldots \oplus x_N$, namely,

$$
\frac{1}{\sqrt{2}}\left( \frac{1}{2^{(N-1)/2}}\sum_{x\in \left\{0,1\right\}^N~\mbox{and}~|x|\equiv 0 \mod 2} |x\rangle|0\rangle + \frac{1}{2^{(N-1)/2}}\sum_{x\in \left\{0,1\right\}^N~\mbox{and}~|x|\equiv 1 \mod 2} |x\rangle|1\rangle \right).
$$

Thus, if we measure the second register and observe $|0\rangle$, the first register is the superposition of all binary query strings we want.
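As a quick classical sanity check of this construction (a sketch added for illustration; plain Python, no quantum SDK required), we can enumerate all $N$-bit strings, keep those whose XOR register would read $0$, and confirm that these are exactly the strings of even Hamming weight, i.e. half of all strings:

```python
from itertools import product

N = 4
# After H on every qubit, each x has amplitude 1/2^{N/2};
# the extra register holds the parity x_1 XOR ... XOR x_N.
kept = [x for x in product([0, 1], repeat=N) if sum(x) % 2 == 0]  # post-select on parity 0

assert all(sum(x) % 2 == 0 for x in kept)   # only even Hamming weights survive
assert len(kept) == 2 ** (N - 1)            # exactly half of the 2^N strings survive,
# so after renormalization each carries amplitude 1/2^{(N-1)/2}, matching the text
print(len(kept), "of", 2 ** N, "bitstrings have even Hamming weight")
```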
If we fail (observe $|1\rangle$), we repeat the above procedure until we observe $|0\rangle$. Each repetition is guaranteed to succeed with probability exactly one half. Hence, after several repetitions we should be able to obtain the desired superposition state. *Notice that we can perform [quantum amplitude amplification](https://arxiv.org/abs/quant-ph/0005055) to obtain the desired superposition state with certainty and without measurement. The details are left as an exercise.*

Below is the procedure to obtain the desired superposition state with the classical `if` of the QuantumProgram. Here, when the second register is zero, we prepare it to record the answer of the quantum beam balance.

```python
Q_program = QuantumProgram()
Q_program.set_api(Qconfig.APItoken, Qconfig.config["url"])  # set the APIToken and API url

# Creating registers
# numberOfCoins qubits for the binary query string and 1 qubit for working
# and recording the result of the quantum balance
qr = Q_program.create_quantum_register("qr", numberOfCoins+1)
# for recording the measurement on qr
cr = Q_program.create_classical_register("cr", numberOfCoins+1)

circuitName = "QueryStateCircuit"
queryStateCircuit = Q_program.create_circuit(circuitName, [qr], [cr])

N = numberOfCoins
# Create uniform superposition of all strings of length N
for i in range(N):
    queryStateCircuit.h(qr[i])

# Perform XOR(x) by applying CNOT gates sequentially from qr[0] to qr[N-1]
# and storing the result to qr[N]
for i in range(N):
    queryStateCircuit.cx(qr[i], qr[N])

# Measure qr[N] and store the result to cr[N]. We continue if cr[N] is zero, or repeat otherwise
queryStateCircuit.measure(qr[N], cr[N])

# we proceed to query the quantum beam balance if the value of cr[0]...cr[N] is all zero
# by preparing the Hadamard state of |1>, i.e., |0> - |1> at qr[N]
queryStateCircuit.x(qr[N]).c_if(cr, 0)
queryStateCircuit.h(qr[N]).c_if(cr, 0)

# we rewind the computation when cr[N] is not zero
for i in range(N):
    queryStateCircuit.h(qr[i]).c_if(cr, 2**N)
```

### Constructing the quantum beam balance

The quantum beam balance returns $1$ when the binary query string contains the position of the false coin and $0$ otherwise, provided that the Hamming weight of the binary query string is even. Notice that previously, we constructed the superposition of all binary query strings whose Hamming weights are even. Let $k$ be the position of the false coin; then, with regard to the binary query string $|x_1,x_2,\ldots,x_N\rangle|0\rangle$, the quantum beam balance simply returns $|x_1,x_2,\ldots,x_N\rangle|0\oplus x_k\rangle$, which can be realized by a CNOT gate with $x_k$ as control and the second register as target. Namely, the quantum beam balance realizes

$$
|x_1,x_2,\ldots,x_N\rangle\Big(|0\rangle - |1\rangle\Big) \xrightarrow{\mbox{Quantum Beam Balance}} |x_1,x_2,\ldots,x_N\rangle\Big(|0\oplus x_k\rangle - |1 \oplus x_k\rangle\Big) = \left(-1\right)^{x\cdot e_k} |x_1,x_2,\ldots,x_N\rangle\Big(|0\rangle - |1\rangle\Big)
$$

Below we apply the quantum beam balance on the desired superposition state.

```python
k = indexOfFalseCoin
# Apply the quantum beam balance on the desired superposition state (marked by cr equal to zero)
queryStateCircuit.cx(qr[k], qr[N]).c_if(cr, 0)
```

    <qiskit.extensions.standard.cx.CnotGate at 0x105dbc2b0>

### Identifying the false coin

In the above, we have queried the quantum beam balance once. How do we identify the false coin after querying the balance? We simply perform a Hadamard transformation on the binary query string to identify the false coin.
Notice that, under the assumption that we query the quantum beam balance with binary strings of even Hamming weight, the following equations hold.

\begin{eqnarray}
\frac{1}{2^{(N-1)/2}}\sum_{x\in \left\{0,1\right\}^N~\mbox{and}~|x|\equiv 0 \mod 2} |x\rangle &\xrightarrow{\mbox{Quantum Beam Balance}}& \frac{1}{2^{(N-1)/2}}\sum_{x\in \left\{0,1\right\}^N~\mbox{and}~|x|\equiv 0 \mod 2} \left(-1\right)^{x\cdot e_k} |x\rangle\\
\frac{1}{2^{(N-1)/2}}\sum_{x\in \left\{0,1\right\}^N~\mbox{and}~|x|\equiv 0 \mod 2} \left(-1\right)^{x\cdot e_k} |x\rangle &\xrightarrow{H^{\otimes N}}& \frac{1}{\sqrt{2}}\Big(|e_k\rangle+|\hat{e_k}\rangle\Big)
\end{eqnarray}

In the above, $e_k$ is the bitstring that is $1$ only at the position of the false coin, and $\hat{e_k}$ is its bitwise complement. Thus, by performing the measurement in the computational basis after the Hadamard transform, we should be able to identify the false coin because it is the one whose label differs from the majority: when we observe $e_k$, the false coin is labelled $1$, and when we observe $\hat{e_k}$, the false coin is labelled $0$.

```python
# Apply Hadamard transform on qr[0] ... qr[N-1]
for i in range(N):
    queryStateCircuit.h(qr[i]).c_if(cr, 0)

# Measure qr[0] ... qr[N-1]
# queryStateCircuit.measure(qr, cr) # THIS IS NOT SUPPORTED?
for i in range(N):
    queryStateCircuit.measure(qr[i], cr[i])
```

Now we perform the experiment to see how we can identify the false coin with the above quantum circuit. Notice that when we use `plot_histogram`, the numbering of the bits in the classical register is from right to left; namely, `0100` means the bit with index $2$ is one and the rest are zero.

Because we use `cr[N]` to control the operations prior to and after the query to the quantum beam balance, we know we have succeeded in identifying the false coin when the left-most bit is $0$. Otherwise, when the left-most bit is $1$, we failed to obtain the desired superposition of query bitstrings and must repeat from the beginning. *Notice that in that case we have not queried the quantum beam oracle yet. This repetition is not necessary when we feed the quantum beam balance with the superposition of all bitstrings of even Hamming weight, which can be done with probability one thanks to quantum amplitude amplification.*

When the left-most bit is $0$, the index of the false coin can be determined by finding the bit whose value differs from the others. Namely, when $N=8$ and the index of the false coin is $3$, we should observe `011110111` or `000001000`.

```python
backend = "local_qasm_simulator"
# backend = "ibmqx3"
shots = 1  # We perform a one-shot experiment
results = Q_program.execute([circuitName], backend=backend, shots=shots)
answer = results.get_counts(circuitName)

for key in answer.keys():
    if key[0:1] == "1":
        raise Exception("Fail to create desired superposition of balanced query string. Please try again")

plot_histogram(answer)

from collections import Counter
for key in answer.keys():
    normalFlag, _ = Counter(key[1:]).most_common(1)[0]  # get most common label
    for i in range(1, len(key)):  # start at 1: key[0] is the success flag
        # (the original started this loop at 2, an off-by-one that skipped the first coin bit)
        if key[i] != normalFlag:
            print("False coin index is: ", len(key) - i - 1)
```
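Before moving on, here is a classical cross-check of the identification step (a sketch added for illustration; it uses only NumPy and `itertools`, not Qiskit). It builds the signed superposition $(-1)^{x\cdot e_k}|x\rangle$ over even-weight strings, applies $H^{\otimes N}$ by brute force, and confirms that all the probability mass ends up on $e_k$ and its complement:

```python
import numpy as np
from itertools import product

N, k = 4, 2  # small instance chosen for illustration: 4 coins, false coin at index 2

states = [x for x in product([0, 1], repeat=N) if sum(x) % 2 == 0]
amp = {x: (-1) ** x[k] / np.sqrt(len(states)) for x in states}  # phase kickback

probs = {}
for y in product([0, 1], repeat=N):
    # <y| H^{tensor N} |x> = (-1)^{x.y} / 2^{N/2}
    a = sum(amp[x] * (-1) ** sum(xi * yi for xi, yi in zip(x, y)) for x in states)
    a /= 2 ** (N / 2)
    if abs(a) > 1e-12:
        probs[y] = round(abs(a) ** 2, 6)

print(probs)  # {(0, 0, 1, 0): 0.5, (1, 1, 0, 1): 0.5}, i.e. e_k and its complement
```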
## About Quantum Counterfeit Coin Problem

The case when there is a single false coin, as presented in this notebook, is essentially [the Bernstein-Vazirani algorithm](http://epubs.siam.org/doi/abs/10.1137/S0097539796300921), and the single-query coin-weighing algorithm was first presented in 1997 by [Terhal and Smolin](https://arxiv.org/pdf/quant-ph/9705041.pdf). The Quantum Counterfeit Coin Problem for $k > 1$ in general was studied by [Iwama et al.](https://arxiv.org/pdf/1009.0416.pdf) Whether there exists a quantum algorithm that needs only $o(k^{1/4})$ queries to identify all the false coins remains an open question.

```python
%run "../version.ipynb"
```

<h2>Version information</h2>
<p>Please note that this tutorial is targeted to the <b>stable</b> version of the QISKit SDK. The following versions of the packages are recommended:</p>
<table>
<tr><th>Package</th><th colspan="2">Version</th></tr>
<tr><td>QISKit</td><td> 0.4.8</td></tr>
<tr><td>IBMQuantumExperience</td><td>&gt;= 1.8.26</td></tr>
<tr><td>numpy</td><td>&gt;= 1.13, &lt; 1.14</td></tr>
<tr><td>scipy</td><td>&gt;= 0.19, &lt; 0.20</td></tr>
<tr><td>matplotlib</td><td>&gt;= 2.0, &lt; 2.1</td></tr>
</table>
10f1d06505442809305d443d412c1ccbf490d498
28,716
ipynb
Jupyter Notebook
reference/games/quantum_counterfeit_coin_problem.ipynb
marisvs/qiskit-tutorial
9efa21357a6f01e0312ad2e9786b1a0f9f67fc74
[ "Apache-2.0" ]
null
null
null
reference/games/quantum_counterfeit_coin_problem.ipynb
marisvs/qiskit-tutorial
9efa21357a6f01e0312ad2e9786b1a0f9f67fc74
[ "Apache-2.0" ]
null
null
null
reference/games/quantum_counterfeit_coin_problem.ipynb
marisvs/qiskit-tutorial
9efa21357a6f01e0312ad2e9786b1a0f9f67fc74
[ "Apache-2.0" ]
null
null
null
75.172775
9,058
0.736036
true
4,239
Qwen/Qwen-72B
1. YES 2. YES
0.887205
0.757794
0.672319
__label__eng_Latn
0.991576
0.400353
```python
from games_setup import *
from SBMLLint.common import constants as cn
from SBMLLint.common.molecule import Molecule, MoleculeStoichiometry
from SBMLLint.common.reaction import Reaction
from SBMLLint.games.som import SOM
from SBMLLint.common.simple_sbml import SimpleSBML

import collections
import itertools
import networkx as nx
import numpy as np
import pandas as pd
from sympy.matrices import Matrix, eye
# from SBMLLint.common.stoichiometry_matrix import StoichiometryMatrix
from SBMLLint.games.mesgraph import MESGraph
from SBMLLint.games.message import Message, SOMStoichiometry, SOMReaction
from scipy.linalg import lu, inv
```

    Current Directory: /Users/woosubshin/Desktop/ModelEngineering/SBMLLint/notebook

```python
simple = load_file_from_curated_data(61)
for r in simple.reactions:
    if r.category != cn.REACTION_BOUNDARY:
        print(r.makeIdentifier(is_include_kinetics=False))
```

    vinGlc: GlcX0 -> GlcX
    vGlcTrans: GlcX -> 59.00 Glc
    vHK: ATP + Glc -> G6P + ADP
    vPGI: G6P -> F6P
    vPFK: F6P + ATP -> FBP + ADP
    vALD: FBP -> GAP + DHAP
    vTIM: DHAP -> GAP
    vGAPDH: GAP + NAD -> NADH + BPG
    vlpPEP: BPG + ADP -> PEP + ATP
    vPK: ADP + PEP -> Pyr + ATP
    vPDC: Pyr -> ACA
    vADH: NADH + ACA -> NAD + EtOH
    vdifEtOH: 59.00 EtOH -> EtOHX
    voutEtOH: EtOHX -> P
    vlpGlyc: DHAP + NADH -> Glyc + NAD
    vdifGlyc: 59.00 Glyc -> GlycX
    voutGlyc: GlycX -> P
    vdifACA: 59.00 ACA -> ACAX
    voutACA: ACAX -> P
    vlacto: CNX + ACAX -> P
    vinCN: CNX0 -> CNX
    vstorage: ATP + G6P -> ADP
    vconsum: ATP -> ADP
    vAK: ATP + AMP -> 2.00 ADP

```python
m = Message(simple)
```

```python
s = np.array([[-1, 0, 0, 1],
              [-1, -1, 1, 1]])
sd = pd.DataFrame(s, index=['R1', 'R2'], columns=['ATP', 'Glc', 'G6P', 'ADP'])
sd
```

|    | ATP | Glc | G6P | ADP |
|----|-----|-----|-----|-----|
| R1 | -1  | 0   | 0   | 1   |
| R2 | -1  | -1  | 1   | 1   |

```python
s_ = np.array([[-1, 0, 0, 1],
               [0, -1, 1, 0]])
sd_ = pd.DataFrame(s_, index=['R1', 'R2*'], columns=['ATP', 'Glc', 'G6P', 'ADP'])
sd_
```

|     | ATP | Glc | G6P | ADP |
|-----|-----|-----|-----|-----|
| R1  | -1  | 0   | 0   | 1   |
| R2* | 0   | -1  | 1   | 0   |

```python
s__ = np.array([[-1, 0, 0, 1, 1],
                [-1, -1, 1, 1, 0]])
sd__ = pd.DataFrame(s__, index=['R1', 'R2'], columns=['ATP', 'Glc', 'G6P', 'ADP', 'Pi'])
sd__
```

|    | ATP | Glc | G6P | ADP | Pi |
|----|-----|-----|-----|-----|----|
| R1 | -1  | 0   | 0   | 1   | 1  |
| R2 | -1  | -1  | 1   | 1   | 0  |

```python
s_new = np.array([[-1, 0, 0, 1, 1],
                  [0, -1, 1, 0, -1]])
sd_new = pd.DataFrame(s_new, index=['R1', 'R2*'], columns=['ATP', 'Glc', 'G6P', 'ADP', 'Pi'])
sd_new
```

|     | ATP | Glc | G6P | ADP | Pi |
|-----|-----|-----|-----|-----|----|
| R1  | -1  | 0   | 0   | 1   | 1  |
| R2* | 0   | -1  | 1   | 0   | -1 |

```python
simple2 = load_file_from_curated_data(383)
for r in simple2.reactions:
    if r.category != cn.REACTION_BOUNDARY:
        print(r.makeIdentifier(is_include_kinetics=False))
m2 = Message(simple2)
```

    PGA_prod_Vc: RuBP + CO2 + 2.00 NADPH -> 2.00 PGA
    PGA_prod_Vo: RuBP + CO2 + 2.00 NADPH -> 1.50 PGA
    PGA_cons: PGA -> RuBP
    NADPH_prod: NADP -> NADPH

```python
e = np.array([[-1, -2, 1],
              [-1, -2, 0.5]])
ed = pd.DataFrame(e, index=['R1', 'R2'], columns=['CO2', 'NADPH', 'PGA'])
ed
```

|    | CO2  | NADPH | PGA |
|----|------|-------|-----|
| R1 | -1.0 | -2.0  | 1.0 |
| R2 | -1.0 | -2.0  | 0.5 |

```python
ef = np.array([[-1, -2, 1],
               [0, 0, -0.5]])
ef_d = pd.DataFrame(ef, index=['R1*', 'R2*'], columns=['CO2', 'NADPH', 'PGA'])
ef_d
```

|     | CO2  | NADPH | PGA  |
|-----|------|-------|------|
| R1* | -1.0 | -2.0  | 1.0  |
| R2* | 0.0  | 0.0   | -0.5 |

```python
p = np.identity(2)
pdd = pd.DataFrame(p, index=['R1', 'R2'], columns=['R1', 'R2'])
pdd
```

|    | R1  | R2  |
|----|-----|-----|
| R1 | 1.0 | 0.0 |
| R2 | 0.0 | 1.0 |

```python
l = np.array([[1.0, 0.0],
              [-1.0, 1.0]])
ld = pd.DataFrame(l, index=['R1', 'R2'], columns=['R1', 'R2'])
ld
```

|    | R1   | R2  |
|----|------|-----|
| R1 | 1.0  | 0.0 |
| R2 | -1.0 | 1.0 |

```python
rref = np.array([[1.0, 0.0, 0.0, 1.0, -1.0],
                 [0.0, 1.0, 1.0, 1.0, -1.0],
                 [0.0, 0.0, -1.0, -1.0, 1.0]])
rref_d = pd.DataFrame(rref, index=['R1', 'R2', 'R3'], columns=['A', 'B', 'C', 'D', 'E'])
rref_d
```

|    | A   | B   | C    | D    | E    |
|----|-----|-----|------|------|------|
| R1 | 1.0 | 0.0 | 0.0  | 1.0  | -1.0 |
| R2 | 0.0 | 1.0 | 1.0  | 1.0  | -1.0 |
| R3 | 0.0 | 0.0 | -1.0 | -1.0 | 1.0  |

```python
np.array([0.05, 0.25, 0.7]) * 0.18
```

    array([0.009, 0.045, 0.126])

```python
0.168 + 0.045 + 0.0224 + 0.036
```

    0.27140000000000003

```python
rreff = np.array([[1.0, 0.0, 0.0, 1.0, -1.0],
                  [0.0, 1.0, 0.0, 0.0, 0.0],
                  [0.0, 0.0, -1.0, -1.0, 1.0]])
rreff_d = pd.DataFrame(rreff, index=['R1', 'R2*', 'R3'], columns=['A', 'B', 'C', 'D', 'E'])
rreff_d
```

|     | A   | B   | C    | D    | E    |
|-----|-----|-----|------|------|------|
| R1  | 1.0 | 0.0 | 0.0  | 1.0  | -1.0 |
| R2* | 0.0 | 1.0 | 0.0  | 0.0  | 0.0  |
| R3  | 0.0 | 0.0 | -1.0 | -1.0 | 1.0  |
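As a closing cross-check of the row operations illustrated above (a sketch added for illustration; it only uses NumPy and the arrays already defined in this notebook): multiplying the lower-triangular matrix `l` by the stoichiometry matrix `s` performs exactly the replacement `R2* = R2 - R1`, reproducing the reduced matrix `s_`:

```python
import numpy as np

s = np.array([[-1, 0, 0, 1],    # R1: ATP -> ADP
              [-1, -1, 1, 1]])  # R2: ATP + Glc -> G6P + ADP
l = np.array([[1.0, 0.0],       # keep R1 as is
              [-1.0, 1.0]])     # R2* = R2 - R1

print(l @ s)
# [[-1.  0.  0.  1.]
#  [ 0. -1.  1.  0.]]  -> the matrix s_ shown above, isolating Glc -> G6P
```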
1407d54ad9d7563a0d06341fdc2aee7c79f44195
22,656
ipynb
Jupyter Notebook
notebook/april26_2019_biocontrol.ipynb
BeckResearchLab/SBMLLint
a5f2b1ad691c192e456e2c0b5d208d921a933a4f
[ "MIT" ]
null
null
null
notebook/april26_2019_biocontrol.ipynb
BeckResearchLab/SBMLLint
a5f2b1ad691c192e456e2c0b5d208d921a933a4f
[ "MIT" ]
null
null
null
notebook/april26_2019_biocontrol.ipynb
BeckResearchLab/SBMLLint
a5f2b1ad691c192e456e2c0b5d208d921a933a4f
[ "MIT" ]
null
null
null
25.399103
102
0.353681
true
4,150
Qwen/Qwen-72B
1. YES 2. YES
0.83762
0.661923
0.55444
__label__yue_Hant
0.37259
0.126479
# What is inversion?

For those with little or no background or experience, the concept of inversion can be intimidating. Many available resources assume a certain amount of background already, and for the uninitiated, this can make the topic difficult to grasp. Likewise, if material is too general and simplistic, it is of little use to someone who has some understanding but is looking for deeper insight. This suite of tutorials will begin in earnest from the first module, but if you have no background, then this is the place to start. I will assume that you have had some basic linear algebra and matrix theory, and little else. I will also assume, at least at first, that you may have forgotten a few of these things along the way. So let's begin from zero, with the absolute simplest terms:<br>

In algebra, when we want to solve an equation for an unknown variable $x$, say in the equation:<br>
\begin{equation}
ax=b
\end{equation}
<br>
where $a$ and $b$ are real numbers, we can do so, quite simply, by multiplying both sides of the equation by the *inverse* of $a$, that is $a^{-1}$. Our result, $x = b/a$, is, in a very simple way, the solution to an inverse problem. And even though this example is trivial, inversion is simply an expansion of this process to larger systems of equations with more unknowns. Expanding ever so slightly, we perform an analogous process by obtaining the inverse of a matrix. For example, say we have a system of equations that we represent with a square matrix $A$, a vector of unknowns $x$, and a vector of known values $b$. What we construct is the matrix equation<br>
\begin{equation}
Ax=b
\end{equation}
<br>
and provided an inverse for $A$ exists (that is, if $\det(A) \neq 0$, or if you prefer, $A^{-1}A = I$), the solution can be found by multiplying both sides by the inverse of $A$:
\begin{equation}
A^{-1}Ax = Ix = x = A^{-1}b \\
x = A^{-1}b
\end{equation}
<br>
If you are the type of person who likes to see things through simple examples, consider the following. Say we have a set of two equations and two unknowns:
\begin{equation}
2x + 0y = 4 \\
0x + 3y = 6 \\
\end{equation}
<br>
Now, because this is a trivial example, we can see by inspection that $x=2$ and $y=2$. But for the purpose of seeing how things are put together, let's solve this using matrices anyway. We can assemble the coefficients of each equation into the matrix equation:
\begin{equation}
\begin{bmatrix} 2 & 0\\ 0 & 3\\ \end{bmatrix}
\left[ \begin{array}{c} x\\ y \end{array} \right]
=
\left[ \begin{array}{c} 4\\ 6 \end{array} \right]
\end{equation}<br>
This is an equation of the form $Ax=b$, and we can solve it by multiplying both sides of the equation by $A^{-1}$:
\begin{equation}
\begin{bmatrix} \frac{1}{2} & 0\\ 0 & \frac{1}{3}\\ \end{bmatrix}
\begin{bmatrix} 2 & 0\\ 0 & 3\\ \end{bmatrix}
\left[ \begin{array}{c} x\\ y \end{array} \right]
=
\begin{bmatrix} \frac{1}{2} & 0\\ 0 & \frac{1}{3}\\ \end{bmatrix}
\left[ \begin{array}{c} 4\\ 6 \end{array} \right]
\end{equation}<br>
Multiplying this out on both sides gives us (unsurprisingly):<br>
\begin{equation}
\left[ \begin{array}{c} x\\ y \end{array} \right]
=
\left[ \begin{array}{c} 2\\ 2 \end{array} \right]
\end{equation}
<br>
Again, this is a trivial example. Now we will build up the notation a little. Thus far I have used the standard matrix-vector equation ($Ax=b$) that you have likely seen in a basic linear algebra textbook. In this set of tutorials I will use slightly different notation, and our matrix equation will take the form $Gm=d$.
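The toy system above can be checked numerically in a couple of lines. This is a sketch added for illustration (NumPy assumed; it is not part of the original tutorial text):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
b = np.array([4.0, 6.0])

# solve A x = b directly, rather than forming inv(A) explicitly
x = np.linalg.solve(A, b)
print(x)  # [2. 2.], matching the solution obtained by hand above
```

In practice, `np.linalg.solve` is preferred over computing $A^{-1}$ explicitly, for both speed and numerical stability.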
Of course, the choice of symbols for matrices and vectors is arbitrary, but in the context in which we are going to proceed, $G$, $m$, and $d$ will take on new meaning. We represent the right hand side of the equation by $d$, which is the *data* that we measure in the field; $m$, which represents our *model parameters* (the unknowns that we seek); and $G$, which is our *forward operator* that operates on the model to produce the data. Generally, the governing equations that go into the physical system are represented by our matrix $G$. And even more generally, our matrix $G$ is a special case of a more general idea of a forward operator, $F[m]$. For a geophysical survey, a broader representation of our problem for the $i$th data point would be
\begin{equation}
F_i[m] = d_i + n_i
\end{equation}
where $F_i[m]$ is the forward operator, $d_i$ is the measured data, and $n_i$ is additive noise. This distinction is important, as it adds a degree of richness to our scenario. Here are a few points to consider:
<ul>
<li>First, $F[m]$ does not need to be linear. You may recall that a transformation is considered *linear* if the following property holds:<br>
\begin{equation}
T(\alpha u + \beta v) = \alpha T(u) + \beta T(v)
\end{equation}
<br>
In the most general case our forward operator $F[m]$ can be linear or nonlinear.
<li> In the case where $F[m]$ is a matrix operation (our $G$, above), it might be a square matrix that is not invertible.
<li> $F[m]$ might represent an overdetermined system (i.e. $G$ is not a square matrix, but a "tall" matrix).
<li> $F[m]$ might represent an underdetermined system (i.e. $G$ is not a square matrix, but a "wide" matrix).
<li> In geophysical applications, the number of values is large, and doing these operations directly on large data sets is impractical.
<li> In geophysics we deal with physical systems, and physical systems will always have noise in the data, which is accounted for in the $n_i$ term above.
<li> We will need to estimate the uncertainties in our data, again, because we are dealing with physical systems.
<li> And it can never be emphasized enough, so I will introduce it now: geophysics problems produce nonunique solutions. Obtaining a solution will involve optimization theory and will require implementing what we call a "regularization." The solution that we get will depend heavily, very heavily, on our choice of regularization. This choice is critical, as it makes the difference between arriving at a solution that makes logical (and geological) sense, and complete gibberish.
</ul>

The end goal of this set of tutorials is to understand and learn how to manage some of the complexities that go into the inversion process. I will, whenever possible, try to build intuition by first using simple systems and trivial solutions, and then develop these into more complex (and more useful) versions of the same.

### Forward modeling vs. inversion

Forward modeling and inversion are opposite processes. When the model of the Earth is known and we want to calculate the data, we call this a forward model. Forward modeling answers the question: given that I know the physics and have a model of the earth, what will the measured data look like? In the matrix-vector terms we used above, $Gm=d$, the forward model has two knowns, $G$ and $m$, and one unknown, $d$. Calculating the data is done by applying our forward operator $G$ to our model parameters, $m$.
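As a concrete illustration of the forward problem in this notation, here is a small sketch (added for illustration; the operator $G$ and the model values are made up, and NumPy is assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

G = np.array([[1.0, 0.5, 0.2],   # a toy (hypothetical) linear forward operator
              [0.5, 1.0, 0.5],
              [0.2, 0.5, 1.0]])
m = np.array([1.0, -2.0, 0.5])   # model parameters we pretend describe the earth

d = G @ m + 0.01 * rng.standard_normal(3)  # forward model plus the additive noise term
print(d)

# the linearity property quoted above holds for any matrix operator:
u, v = rng.standard_normal(3), rng.standard_normal(3)
assert np.allclose(G @ (2 * u + 3 * v), 2 * (G @ u) + 3 * (G @ v))
```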
Inversion, as stated above, takes measured data $d$ and a known forward operator $G$ and solves for the model parameters $m$. Put slightly differently, given that we have (1) field survey observations, (2) an estimate of the errors in these observations, (3) a physical representation of the earth, and (4) the ability to produce a forward model, the goal of the inversion is to produce a reasonable model that could have generated the data.<br>

### Types of Inversion Problems

Various methodologies for performing geophysical inversion have been developed. There are two broad classes of inversion: "Parametric" methods and "Generalized" inversion methods.

**Parametric methods**

These inversion methods involve finding a model of the earth which is described using only a few parameters. The solutions require that there be fewer parameters than there are data values, so that the problem is formally "over-determined." A few examples of parametric models are:

- **Buried object:** parameters could be the depth to a sphere (or cylinder), the radius of a sphere or the radius and length of a cylinder, and the physical property contrast between the object and host rocks.
- **Layered earth:** parameters are layer thicknesses and physical property values.
- **A buried sheet:** parameters might be the depth to the top of the sheet, its dip, strike, thickness, and the physical property contrast between the sheet and host rocks.

**Generalized inversion methods**

This second class of inversion methods allows the earth's model to be more realistically complex, which means that more parameters than data points are permitted. Such problems are mathematically referred to as "under-determined". Most solutions to this more general form of the geophysical inversion problem involve three steps, which can be explained as follows:

1. Represent the earth with many cells so that complex distributions of physical properties can be simulated. In practice, the earth is divided into thousands or millions of cells of fixed geometry. Each cell has a constant, but unknown, value. The parameters we seek are the physical property values for these cells.

2. Design a model objective function. This is a mathematical quantity which measures the "size" of any solution. It is a single number. A priori information about the earth can be incorporated into the objective function. Usually the model objective has different components: one will make it "close" to a supplied reference model, others may control "smoothness" in various spatial directions. Mathematical optimization theory is used to find a solution that minimizes the objective function. The resultant solution will have minimum structure. This will be a good choice, since it will tend to show the large-scale important features rather than a great deal of extraneous structure that can result from noisy observations.

3. The final solution must also acceptably reproduce the field observations. Our final optimization problem is to find the model which minimizes our model objective function subject to the constraint that the measured data are adequately reproduced. In practice a number of inversions, with different reasonable objective functions, should be carried out so the interpreter has some insight about the range of earth models that can acceptably reproduce the field data. Error statistics about the data will determine how closely the reproduced data matches the real measured data.
The fact that these error statistics are often poorly known is a second good reason for performing several inversions before settling upon a preferred model.

<!--- This is directly from the new GPG -->

### Inversion as a workflow

Even though in the back of our minds we are really just solving a matrix equation, the complications that arise in practical situations will require inversion to take the form of a multi-step process, which is often represented as a workflow.<br>

Each subsequent module will begin to discuss one of the boxes in the workflow, and we will build up the entire process, step by step, until completion. The first module of this tutorial, then, will begin with setting up the problem: taking a continuous distribution of a physical property, discretizing it, and putting it on a mesh.
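To make the regularized optimization described above concrete, here is a minimal sketch (added for illustration; it is not part of the original tutorial) of Tikhonov-regularized least squares for an underdetermined linear problem: minimize $\|Gm-d\|^2 + \beta\|m\|^2$, whose minimizer satisfies the normal equations $(G^TG + \beta I)\,m = G^Td$:

```python
import numpy as np

rng = np.random.default_rng(1)

G = rng.standard_normal((5, 20))   # "wide" matrix: 5 data values, 20 model cells
m_true = np.zeros(20)
m_true[4:8] = 1.0                  # a simple blocky model
d = G @ m_true + 0.01 * rng.standard_normal(5)

beta = 0.1                         # regularization weight: the "size" penalty on the model
m_rec = np.linalg.solve(G.T @ G + beta * np.eye(20), G.T @ d)
print(np.round(m_rec, 2))          # one of infinitely many models that fit d
```

Re-running this with different values of `beta` is a toy version of "performing a number of inversions with different reasonable objective functions" before settling on a preferred model.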
54a9b69787af03f0e296075586ab4da32f0422af
14,325
ipynb
Jupyter Notebook
Final Drafts/.ipynb_checkpoints/Module 0, a quick overview-checkpoint.ipynb
lheagy/inversion-tutorial
78673e800d4662992e5fe9595b4422bc2d5c50e9
[ "MIT" ]
2
2017-10-08T02:10:35.000Z
2017-10-18T17:49:21.000Z
Final Drafts/.ipynb_checkpoints/Module 0, a quick overview-checkpoint.ipynb
lheagy/inversion-tutorial
78673e800d4662992e5fe9595b4422bc2d5c50e9
[ "MIT" ]
null
null
null
Final Drafts/.ipynb_checkpoints/Module 0, a quick overview-checkpoint.ipynb
lheagy/inversion-tutorial
78673e800d4662992e5fe9595b4422bc2d5c50e9
[ "MIT" ]
3
2016-09-01T20:38:20.000Z
2020-05-13T22:19:16.000Z
60.699153
1,083
0.644887
true
2,667
Qwen/Qwen-72B
1. YES 2. YES
0.824462
0.798187
0.658075
__label__eng_Latn
0.999764
0.367259
# This notebook is *a work in progress*

- I am going to implement a function, in [Python 3](https://docs.python.org/3/), that quickly solves a mathematical problem.

---

## Problem statement

- Let $n \geq 1$ be a number of faces for fair dice. We will take $n = 6$ to start with, but $n = 8, 10, 12, 20$ or $n = 100$ is also possible.
- We want to find two sets of integers, $A$ and $B$, both of size $n$, such that the integers in $A$ are all [prime](https://fr.wikipedia.org/wiki/Nombre_premier), the integers in $B$ are all [even](https://fr.wikipedia.org/wiki/Parit%C3%A9_(arithm%C3%A9tique)), and every sum $a + b$ for $a \in A$ and $b \in B$ is again a prime number.

For example:

- With $n = 1$, taking $A = \{3\}$ and $B = \{2\}$ works, since $3 + 2 = 5$ is prime.
- With $n = 2$, taking $A = \{3, 5\}$ and $B = \{2, 8\}$ works, since $3 + 2 = 5$, $3 + 8 = 11$, $5 + 2 = 7$, and $5 + 8 = 13$ are all prime.
- With $n = 6$, it immediately becomes much harder to find by hand... Fine, I'll cheat and give an example found further below: $A = \{5, 7, 17, 31, 61, 101\}$ and $B = \{6, 12, 36, 66, 96, 132\}$.

### Goals

Our goal is first to check that a given pair $(A, B)$ is valid, and then to find a valid pair of a given size $n$. We will not try to design a very clever algorithm; an exhaustive enumeration will be enough.

Then we will try to answer a more combinatorial question: for a fixed $n$, can we find the valid pair $(A, B)$ of minimal sum (or minimizing some other criterion)? The answer will be yes, and not too hard to obtain.

---

## Outline of my numerical solution

- I will use the Python module [`sympy`](http://sympy.org/), and in particular the functions of its [`sympy.ntheory`](http://docs.sympy.org/latest/modules/ntheory.html) module, to get easy access to lists of prime numbers. In particular, [`primerange`](http://docs.sympy.org/latest/modules/ntheory.html#sympy.ntheory.generate.primerange) will be quite handy.
- I will proceed by full enumeration with a "doubling trick": we look for all pairs $(A, B)$ bounded in, say, $[1, 100]$, then $[1, 110]$, and so on (with a small increment), until we find one that works.
- This "bottom-up" approach is guaranteed to terminate, as long as we can prove theoretically that the solution we are looking for exists.

### A theoretical solution?

> I will not go into the details, but with [Dirichlet's theorem on arithmetic progressions](https://fr.wikipedia.org/wiki/Théorème_de_la_progression_arithmétique) (see also [this document](http://perso.eleves.ens-rennes.fr/~ariffaut/Agregation/Dirichlet.pdf)), one can show that for any number of faces $n \geq 1$, there are infinitely many pairs $(A, B)$ that work.
> *Yeah, that's hardcore.*

----

### Numerical solution

We start with the following dependencies.

> *Note:* if you have not installed [`sympy`](http://sympy.org/), a simple `pip install sympy` is enough.

```python
from sympy import isprime
from sympy import primerange
```

---

## Implementation of the required utility functions

First, a function `verifie_paire` that checks whether a pair $(A, B)$, given as two *iterables* (list or set, it does not matter), is valid.
```python
def verifie_paire(A, B):
    """Efficient version, which stops as early as possible."""
    if A is None or B is None:
        return False
    for a in A:
        # if not isprime(a):  # Unnecessary, since A will have been chosen correctly
        #     return False
        for b in B:
            if not isprime(a + b):
                return False
    return True
```

To visualize things a little, we also write a version that talks:

```python
def verifie_paire_debug(A, B):
    """Slower version, which runs all the tests and prints true or false for each sum a + b."""
    if A is None or B is None:
        return False
    reponse = True
    for a in A:
        if not isprime(a):
            print(" - a = {:<6} est pas premier, ECHEC ...".format(a))
            reponse = False
        for b in B:
            if not isprime(a + b):
                print(" - a = {:<6} + b = {:<6} donnent {:<6} qui n'est pas premier, ECHEC ...".format(a, b, a + b))
                reponse = False
            else:
                print(" - a = {:<6} + b = {:<6} donnent {:<6} qui est bien premier, OK ...".format(a, b, a + b))
    return reponse
```

First example:

```python
A = [3]
B = [2]
verifie_paire_debug(A, B)
```

     - a = 3 + b = 2 donnent 5 qui est bien premier, OK ...

    True

Second example:

```python
A = (3, 5)
B = (2, 8)
verifie_paire_debug(A, B)
```

     - a = 3 + b = 2 donnent 5 qui est bien premier, OK ...
     - a = 3 + b = 8 donnent 11 qui est bien premier, OK ...
     - a = 5 + b = 2 donnent 7 qui est bien premier, OK ...
     - a = 5 + b = 8 donnent 13 qui est bien premier, OK ...

    True

```python
verifie_paire(A, B)
```

    True

With $n = 6$:

```python
A = [5, 7, 17, 31, 61, 101]
B = [6, 12, 36, 66, 96, 132]
verifie_paire(A, B)
```

    True

The dice sold commercially are marked with these faces:

```python
A = [2, 6, 20, 252, 266, 380]
B = [17, 41, 107, 191, 311, 347]
verifie_paire(A, B)
```

    True

---

We need this combination function:

```python
from itertools import combinations
```

Next, we write a function that enumerates *all* the possible valid pairs $(A, B)$ with $\max (A \cup B) \leq M$ for a fixed $M$. I add a lower bound $m$, with default value $m = 1$, to stay as generic as possible, even though we do not really use it.

- We get the possible candidates for $a \in A$ via `primerange` (denoted $C_A$), and for $B$ via a `range` with step $2$ to consider only even numbers (denoted $C_B$),
- Then we loop over all sets $A$ of size $n$ in $C_A$, and all sets $B$ of size $n$ in $C_B$, via [`itertools.combinations`](https://docs.python.org/3/library/itertools.html#itertools.combinations),
- We keep only the valid ones, and store them all.

> **Beware**, this quickly becomes very expensive!

```python
def candidats_CA_CB(M=10, m=1):
    # Primes between m and M (inclusive), i.e. the primes in {m, ..., M}
    C_A = list(primerange(m, M + 1))
    # Even numbers
    if m % 2 == 0:
        C_B = list(range(m, M - 1, 2))
    else:
        C_B = list(range(m + 1, M - 1, 2))
    return C_A, C_B
```

This first approach is very naive, and a huge amount of computation is duplicated (we check the pairs $(A, B)$ by running many primality tests on the sums $a + b$, which is expensive).
```python
def enumere_toutes_les_paires(n=2, M=10, m=1, debug=False):
    C_A, C_B = candidats_CA_CB(M=M, m=m)
    if debug:
        print("C_A =", C_A)
        print("C_B =", C_B)
        print("Combinaison de n =", n, "éléments dans A =", list(combinations(C_A, n)))
        print("Combinaison de n =", n, "éléments dans B =", list(combinations(C_B, n)))
    # C_A, C_B are already sorted, which is convenient
    all_A_B = []
    for A in combinations(C_A, n):
        if debug:
            print(" - A =", A)
        for B in combinations(C_B, n):
            if debug:
                print(" - B =", B)
            if verifie_paire(A, B):
                if debug:
                    print("==> Une paire (A, B) de plus !")
                all_A_B.append((A, B))
    # all_A_B is also sorted in lexicographic order
    return all_A_B
```

We can check that the examples given above are valid, and that they are in fact the smallest ones:

```python
n = 1
M = 10
enumere_toutes_les_paires(n, M, debug=True)
```

    C_A = [2, 3, 5, 7]
    C_B = [2, 4, 6, 8]
    Combinaison de n = 1 éléments dans A = [(2,), (3,), (5,), (7,)]
    Combinaison de n = 1 éléments dans B = [(2,), (4,), (6,), (8,)]
     - A = (2,)
     - B = (2,)
     - B = (4,)
     - B = (6,)
     - B = (8,)
     - A = (3,)
     - B = (2,)
    ==> Une paire (A, B) de plus !
     - B = (4,)
    ==> Une paire (A, B) de plus !
     - B = (6,)
     - B = (8,)
    ==> Une paire (A, B) de plus !
     - A = (5,)
     - B = (2,)
    ==> Une paire (A, B) de plus !
     - B = (4,)
     - B = (6,)
    ==> Une paire (A, B) de plus !
     - B = (8,)
    ==> Une paire (A, B) de plus !
     - A = (7,)
     - B = (2,)
     - B = (4,)
    ==> Une paire (A, B) de plus !
     - B = (6,)
    ==> Une paire (A, B) de plus !
     - B = (8,)

    [((3,), (2,)), ((3,), (4,)), ((3,), (8,)), ((5,), (2,)), ((5,), (6,)), ((5,), (8,)), ((7,), (4,)), ((7,), (6,))]

```python
n = 2
M = 10
enumere_toutes_les_paires(n, M, debug=True)
```

    C_A = [2, 3, 5, 7]
    C_B = [2, 4, 6, 8]
    Combinaison de n = 2 éléments dans A = [(2, 3), (2, 5), (2, 7), (3, 5), (3, 7), (5, 7)]
    Combinaison de n = 2 éléments dans B = [(2, 4), (2, 6), (2, 8), (4, 6), (4, 8), (6, 8)]
     - A = (2, 3)
     - B = (2, 4)
     - B = (2, 6)
     - B = (2, 8)
     - B = (4, 6)
     - B = (4, 8)
     - B = (6, 8)
     - A = (2, 5)
     - B = (2, 4)
     - B = (2, 6)
     - B = (2, 8)
     - B = (4, 6)
     - B = (4, 8)
     - B = (6, 8)
     - A = (2, 7)
     - B = (2, 4)
     - B = (2, 6)
     - B = (2, 8)
     - B = (4, 6)
     - B = (4, 8)
     - B = (6, 8)
     - A = (3, 5)
     - B = (2, 4)
     - B = (2, 6)
     - B = (2, 8)
    ==> Une paire (A, B) de plus !
     - B = (4, 6)
     - B = (4, 8)
     - B = (6, 8)
     - A = (3, 7)
     - B = (2, 4)
     - B = (2, 6)
     - B = (2, 8)
     - B = (4, 6)
     - B = (4, 8)
     - B = (6, 8)
     - A = (5, 7)
     - B = (2, 4)
     - B = (2, 6)
     - B = (2, 8)
     - B = (4, 6)
     - B = (4, 8)
     - B = (6, 8)

    [((3, 5), (2, 8))]

We can continue, with $n = 3$ and a small upper bound $M$:

```python
n = 3
M = 20
enumere_toutes_les_paires(n, M, debug=False)
```

    [((3, 7, 13), (4, 10, 16)), ((5, 11, 17), (2, 6, 12))]

We can continue, with $n = 4$ and a small upper bound $M$:

```python
n = 4
M = 40
%time enumere_toutes_les_paires(n, M, debug=False)
```

    CPU times: user 7.41 s, sys: 0 ns, total: 7.41 s
    Wall time: 7.41 s

    [((3, 7, 13, 37), (4, 10, 16, 34)), ((7, 11, 17, 31), (6, 12, 30, 36)), ((7, 13, 19, 37), (4, 10, 24, 34)), ((7, 13, 31, 37), (6, 10, 16, 30)), ((7, 17, 23, 37), (6, 24, 30, 36))]

We see that this starts to take a while. Two improvements will be explored:

- We do not need to compute all the pairs: if we are only looking for one valid pair, we can stop at the first one found.
- We can be smarter when building the candidates $C_B$: instead of just taking even numbers, we directly take numbers $b$ such that $a + b$ is prime.

---

## First optimization

- We do not need to enumerate all the pairs; it is enough to return the first one found.
- Why the first one?
Well, if $C_A$ and $C_B$ are sorted, the candidate pairs $(A, B)$ are also generated in lexicographic order, so the first one found is the smallest.

```python
def premiere_paire(n=2, M=10, m=1):
    C_A, C_B = candidats_CA_CB(M=M, m=m)
    # C_A, C_B are already sorted, which is convenient
    for A in combinations(C_A, n):
        for B in combinations(C_B, n):
            if verifie_paire(A, B):
                return (A, B)
    return (None, None)
```

```python
n = 4
M = 40
%time A, B = premiere_paire(n, M)
print("A =", A)
print("B =", B)
```

    CPU times: user 2 s, sys: 8 ms, total: 2.01 s
    Wall time: 2.01 s
    A = (3, 7, 13, 37)
    B = (4, 10, 16, 34)

With $n = 5$, we start to need to go beyond this upper bound `M = 40`:

```python
n = 5
M = 40
%time A, B = premiere_paire(n, M)
# (None, None) indicates that no pair was found
print("A =", A)
print("B =", B)
```

    CPU times: user 33.7 s, sys: 0 ns, total: 33.7 s
    Wall time: 33.7 s
    A = None
    B = None

But sometimes we do not really know what value to give this upper bound `M`... A simple approach is therefore to increase its value until a valid pair is found.

```python
def premiere_paire_explore_M(n=2, Ms=[10, 20, 30, 40, 50], m=1):
    for M in Ms:
        resultat = premiere_paire(n=n, M=M, m=m)
        if resultat[0] is not None:
            return resultat
    return (None, None)
```

We can do even better, by automatically increasing `M` by some offset $\delta_M > 0$:

```python
def premiere_paire_augmente_M(n=2, Mmin=10, deltaM=10, m=1):
    assert isinstance(deltaM, int)
    assert deltaM >= 1
    M = Mmin
    while True:
        print("Appel à premiere_paire(n={}, M={}, m={}) ...".format(n, M, m))
        # This is not dangerous, since we are guaranteed to find a pair that works
        resultat = premiere_paire(n=n, M=M, m=m)
        if resultat[0] is not None:
            print("Terminé, avec M =", M)
            return resultat
        M += deltaM
```

We can recover the result found above, for $n = 2$:

```python
n = 2
Ms = [10, 20, 30, 40, 50]
%time A, B = premiere_paire_explore_M(n, Ms)
print("A =", A)
print("B =", B)
```

    CPU times: user 0 ns, sys: 0 ns, total: 0 ns
    Wall time: 331 µs
    A = (3, 5)
    B = (2, 8)

```python
n = 4
M = 40
deltaM = 10
%time A, B = premiere_paire_augmente_M(n=n, Mmin=M, deltaM=deltaM)
print("A =", A)
print("B =", B)
```

    Appel à premiere_paire(n=4, M=40, m=1) ...
    Terminé, avec M = 40
    CPU times: user 2.01 s, sys: 0 ns, total: 2.01 s
    Wall time: 2.01 s
    A = (3, 7, 13, 37)
    B = (4, 10, 16, 34)

And we could solve for $n = 5$ and $n = 6$. But this approach takes far too much time.

```python
n = 5
M = 100
deltaM = 10
#%time A, B = premiere_paire_augmente_M(n=n, Mmin=M, deltaM=deltaM)
#print("A =", A)
#print("B =", B)
```

> *Note:* rather than writing a function (which `return`s the first result), we can also write a **generator**, which `yield`s the results one by one.
> It then becomes possible to write loops directly like this:

```python
for (A, B) in generateur_paires(n=2, M=10, m=1):
    ...
```

```python
def generateur_paires(n=2, M=10, m=1):
    C_A, C_B = candidats_CA_CB(M=M, m=m)
    # C_A, C_B are already sorted, which is convenient
    for A in combinations(C_A, n):
        for B in combinations(C_B, n):
            if verifie_paire(A, B):
                yield (A, B)
```

---

## Examples

---

### Example with 6-sided dice

```python
n = 6
M = 100
#A, B = premiere_paire(n, M)
#print("A =", A)
#print("B =", B)
```

---

## Another approach: second optimization

We saw that the *naive* approach presented above is really not very efficient. We will be smarter:

- first, generate all the possible pairs $(a, b)$ with $a$ and $a + b$ prime, for $a, b \leq M$,
- then, build the largest possible pair $(A, B)$, probably by simply looking at set intersections.

### Preliminary: a small optimization for `isprime`

In a single line, we can add a cache ([`functools.lru_cache`](https://docs.python.org/3/library/functools.html#functools.lru_cache)) to make it faster:

```python
from functools import lru_cache

@lru_cache(maxsize=1<<10, typed=False)
def estpremier(n):
    return isprime(n)

@lru_cache(maxsize=1<<20, typed=False)
def candidats_CA_CB_cache(M=10, m=1):
    return candidats_CA_CB(M=M, m=m)
```

```python
import numpy.random as nr
%timeit [isprime(n) for n in nr.randint(1, 1<<10, 100000)]
```

    1 loop, best of 3: 208 ms per loop

```python
%timeit [estpremier(n) for n in nr.randint(1, 1<<10, 100000)]
```

    10 loops, best of 3: 18.7 ms per loop

### Generating the valid pairs $(a, b)$

This is quite fast.

- The first function returns the result as a list of pairs $(a, b)$ with $b$ such that $a + b$ is prime,
- The second function instead returns a dictionary that maps each $a \in C_A$ to all the $b \in C_B$ such that $a + b$ is prime.

```python
def genere_paires(M=10, m=1):
    C_A, C_B = candidats_CA_CB_cache(M=M, m=m)
    return [(a, b) for a in C_A for b in C_B if estpremier(a + b)]

def genere_dict(M=10, m=1):
    C_A, C_B = candidats_CA_CB_cache(M=M, m=m)
    return {a: [b for b in C_B if estpremier(a + b)] for a in C_A}
```

```python
genere_paires(10)
```

    [(3, 2), (3, 4), (3, 8), (5, 2), (5, 6), (5, 8), (7, 4), (7, 6)]

```python
genere_dict(10)
```

    {2: [], 3: [2, 4, 8], 5: [2, 6, 8], 7: [4, 6]}

We see that the number of pairs $(a, b)$ such that $a + b$ is prime and $a, b \leq M$ grows quite quickly with $M$:

```python
len(genere_paires(400))
```

    5107

```python
import matplotlib.pyplot as plt

X = list(range(10, 2000, 10))
Y = [len(genere_paires(M)) for M in X]
plt.plot(X, Y)
plt.show()
```

## Next step: generating a pair $(A, B)$

- We generate the candidates $C_A, C_B$,
- We associate to each $a \in C_A$ the $b \in C_B$ that work, denoted $d(a)$,
- We keep only those with at least $n$ associated values,
- We look at all the sets $A \subset C_A$ of size exactly $n$,
- For a given $A$, we build $B$ as $\bigcap_{a \in A} d(a)$, and keep the $B$ of size $\geq n$,
- We finally look at all the sets $B' \subset B$ of size exactly $n$.
- Here, we only keep the first one found; further below we keep them all.

```python
@lru_cache(maxsize=1<<10, typed=False)
def genere_A_B(n=2, M=10, m=1):
    C_A, C_B = candidats_CA_CB_cache(M=M, m=m)
    dictAB = {a: {b for b in C_B if estpremier(a + b)} for a in C_A}
    for a in C_A:
        if len(dictAB[a]) < n:
            del dictAB[a]
    # print(dictAB)
    C_A = sorted(list(dictAB.keys()))  # C_A is already sorted, which is convenient
    # TODO : This for loop could possibly get parallelized, using joblib.Parallel
    # It could speed up the computation by a factor of cpu_count()...
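    # (Hedged sketch of that TODO, added for illustration only; `meilleur_B` is a
    #  hypothetical helper that would return the intersection of dictAB[a] over a in A:)
    #     from joblib import Parallel, delayed
    #     tous_B = Parallel(n_jobs=-1)(
    #         delayed(meilleur_B)(A) for A in combinations(C_A, n))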
    for A in combinations(C_A, n):
        # DONE : improve this step to stop as soon as the intersection has length < n
        # B = set.intersection(*(dictAB[a] for a in A))
        B = set(dictAB[A[0]])
        for a in A[1:]:
            if len(B) < n:
                break
            B &= dictAB[a]
        if len(B) >= n:
            print(" - A =", A, "\t et B =", B, "...")
            for BB in combinations(B, n):
                # We take the first one
                return (list(A), list(BB))
    return (None, None)
```

```python
def test_genere_AB(n=2, M=10, m=1):
    A, B = genere_A_B(n=n, M=M)
    if not verifie_paire(A, B):
        print("Mauvaise paire (A, B) =", (A, B), ": elle n'est pas valable.")
        return (None, None)
    print("A =", A)
    print("B =", B)
    return (A, B)
```

```python
n = 2
M = 10
test_genere_AB(n=n, M=M)
```

     - A = (3, 5) 	 et B = {8, 2} ...
    A = [3, 5]
    B = [8, 2]

    ([3, 5], [8, 2])

```python
n = 3
M = 40
%time test_genere_AB(n=n, M=M)
```

     - A = (3, 5, 11) 	 et B = {8, 2, 26} ...
    A = [3, 5, 11]
    B = [8, 2, 26]
    CPU times: user 0 ns, sys: 0 ns, total: 0 ns
    Wall time: 1.8 ms

    ([3, 5, 11], [8, 2, 26])

```python
n = 4
M = 50
%time test_genere_AB(n=n, M=M)
```

     - A = (3, 7, 13, 19) 	 et B = {40, 34, 10, 4} ...
    A = [3, 7, 13, 19]
    B = [40, 34, 10, 4]
    CPU times: user 0 ns, sys: 0 ns, total: 0 ns
    Wall time: 1.89 ms

    ([3, 7, 13, 19], [40, 34, 10, 4])

We can compare this new function with `premiere_paire` written above:

```python
n = 4
M = 50
%timeit -n1 premiere_paire(n=n, M=M)
```

    1 loop, best of 3: 10.3 s per loop

```python
n = 4
M = 50
%timeit -n1 genere_A_B(n=n, M=M)
```

    1 loop, best of 3: 1.12 µs per loop

→ We are indeed much faster. Yay.

```python
n = 5
M = 100
%time test_genere_AB(n=n, M=M)
```

     - A = (3, 7, 13, 19, 97) 	 et B = {40, 34, 10, 4, 94} ...
    A = [3, 7, 13, 19, 97]
    B = [40, 34, 10, 4, 94]
    CPU times: user 4 ms, sys: 0 ns, total: 4 ms
    Wall time: 4.67 ms

    ([3, 7, 13, 19, 97], [40, 34, 10, 4, 94])

```python
n = 6
M = 100
%time test_genere_AB(n=n, M=M)
```

     - A = (7, 13, 31, 37, 73, 97) 	 et B = {66, 6, 10, 76, 16, 30} ...
    A = [7, 13, 31, 37, 73, 97]
    B = [66, 6, 10, 76, 16, 30]
    CPU times: user 112 ms, sys: 0 ns, total: 112 ms
    Wall time: 110 ms

    ([7, 13, 31, 37, 73, 97], [66, 6, 10, 76, 16, 30])

Note that the value of the upper bound $M$ on the face values can change the first $A$ found (with a larger $M$, more values of $b$ become available, so an $A$ containing smaller primes such as $3$ may become feasible), and since the $B$ are **not** enumerated in lexicographic order (`set`s are unordered), we do not find the same $B$ either. And the larger $M$ is, the longer the computation takes.

```python
n = 6
M = 200
%time test_genere_AB(n=n, M=M)
```

     - A = (3, 7, 13, 31, 73, 97) 	 et B = {160, 100, 40, 10, 76, 16} ...
    A = [3, 7, 13, 31, 73, 97]
    B = [160, 100, 40, 10, 76, 16]
    CPU times: user 248 ms, sys: 0 ns, total: 248 ms
    Wall time: 246 ms

    ([3, 7, 13, 31, 73, 97], [160, 100, 40, 10, 76, 16])

This works rather well: we do find a lexicographically small $A$, but not the pair minimizing $\sum A \cup B$ or $\max A \cup B$...

## Finding the pair of minimal sum, or of minimal max

- We will compute *all* the pairs, with our optimized function, in `genere_toutes_A_B`,
- Then we will sort them (in increasing order) according to the desired criterion,
- And we return the first one, which therefore minimizes the chosen criterion.

> *Note:* by being clever, one could do much better than a full enumeration. But I can't be bothered.

### First, generate all the pairs $(A, B)$

It is the same function as `genere_A_B`, but it returns all the pairs instead of stopping at the first one.
```python
@lru_cache(maxsize=1<<10, typed=False)
def genere_toutes_A_B(n=2, M=10, m=1):
    C_A, C_B = candidats_CA_CB_cache(M=M, m=m)
    dictAB = {a: {b for b in C_B if estpremier(a + b)} for a in C_A}
    for a in C_A:
        if len(dictAB[a]) < n:
            del dictAB[a]
    # print(dictAB)
    C_A = sorted(list(dictAB.keys()))  # C_A is already sorted, which is convenient
    paires = []
    # TODO : This for loop could very easily get parallelized, using joblib.Parallel
    # It could speed up the computation by a factor of cpu_count()...
    for A in combinations(C_A, n):
        # DONE : improve this step to stop as soon as the intersection has length < n
        # B = set.intersection(*(dictAB[a] for a in A))
        B = set(dictAB[A[0]])
        for a in A[1:]:
            if len(B) < n:
                break
            B &= dictAB[a]
        if len(B) >= n:
            for BB in combinations(B, n):
                print(" - A =", A, "\t et B =", B, "...")
                # print('.', end='')  # Prints .... as the execution progresses
                paires.append((list(A), list(BB)))
    print('')
    return paires
```

We check that it works as expected:

```python
n = 4
M = 40
genere_toutes_A_B(n=n, M=M)
```

     - A = (3, 7, 13, 37) 	 et B = {16, 34, 10, 4} ...
     - A = (7, 11, 17, 31) 	 et B = {36, 12, 6, 30} ...
     - A = (7, 13, 19, 37) 	 et B = {24, 34, 10, 4} ...
     - A = (7, 13, 31, 37) 	 et B = {16, 10, 6, 30} ...
     - A = (7, 17, 23, 37) 	 et B = {24, 36, 6, 30} ...

    [([3, 7, 13, 37], [16, 34, 10, 4]), ([7, 11, 17, 31], [36, 12, 6, 30]), ([7, 13, 19, 37], [24, 34, 10, 4]), ([7, 13, 31, 37], [16, 10, 6, 30]), ([7, 17, 23, 37], [24, 36, 6, 30])]

So it only remains to sort according to the chosen criterion:

$$ \mathrm{sumsum} : (A, B) \mapsto \sum_{a \in A} a + \sum_{b \in B} b. $$

$$ \mathrm{maxmax} : (A, B) \mapsto \max (A \cup B). $$

```python
def sumsum(paire):
    return sum(paire[0]) + sum(paire[1])

def maxmax(paire):
    return max(max(paire[0]), max(paire[1]))
```

We can also take a criterion combining the two:

```python
def combine(paire):
    return (sumsum(paire) + maxmax(paire)) / 2
```

It suffices to use the `key=` argument of the [`sorted`](https://docs.python.org/3/library/functions.html#sorted) function:

```python
def trie_et_prend_min(paires, key=sumsum):
    paires = sorted(paires, key=key)
    A, B = paires[0]
    assert verifie_paire(A, B)
    return (sorted(A), sorted(B))
```

To test it:

```python
def test_trie_et_prend_min(n=2, M=10):
    paires = genere_toutes_A_B(n=n, M=M)
    print("Pour n =", n, "et M =", M, "il y a", len(paires), "paires (A, B) trouvées valides ...")
    print("- On recherche celle de somme sum(A u B) minimale ...")
    A, B = trie_et_prend_min(paires, key=sumsum)
    print("A =", A)
    print("B =", B)
    print(" Elle donne sumsum =", sumsum((A, B)), "et maxmax =", maxmax((A, B)))
    print("- On recherche celle de maximum max(A u B) minimale ...")
    A, B = trie_et_prend_min(paires, key=maxmax)
    print("A =", A)
    print("B =", B)
    print(" Elle donne sumsum =", sumsum((A, B)), "et maxmax =", maxmax((A, B)))
    print("- On recherche celle de moyenne sumsum(A,B), maxmax(A,B) minimale ...")
    A, B = trie_et_prend_min(paires, key=combine)
    print("A =", A)
    print("B =", B)
    print(" Elle donne sumsum =", sumsum((A, B)), "et maxmax =", maxmax((A, B)))
    # for key in [sumsum, maxmax, combine]:
    #     alls = [(key(paire), (sorted(paire[0]), sorted(paire[1]))) for paire in sorted(paires, key=key)]
    #     print(alls[:5])
    return (A, B)
```

We can redo the same examples for $n = 3, 4, 5$, and finally $n = 6$.

```python
n = 3
M = 20
test_trie_et_prend_min(n, M)
```

     - A = (3, 7, 13) 	 et B = {16, 10, 4} ...
     - A = (5, 11, 17) 	 et B = {2, 12, 6} ...

    Pour n = 3 et M = 20 il y a 2 paires (A, B) trouvées valides ...
    - On recherche celle de somme sum(A u B) minimale ...
    A = [3, 7, 13]
    B = [4, 10, 16]
     Elle donne sumsum = 53 et maxmax = 16
    - On recherche celle de maximum max(A u B) minimale ...
    A = [3, 7, 13]
    B = [4, 10, 16]
     Elle donne sumsum = 53 et maxmax = 16
    - On recherche celle de moyenne sumsum(A,B), maxmax(A,B) minimale ...
    A = [3, 7, 13]
    B = [4, 10, 16]
     Elle donne sumsum = 53 et maxmax = 16

    ([3, 7, 13], [4, 10, 16])

```python
n = 4
M = 37
test_trie_et_prend_min(n, M)
```

     - A = (3, 7, 13, 37) 	 et B = {16, 34, 10, 4} ...
     - A = (7, 13, 19, 37) 	 et B = {24, 34, 10, 4} ...
     - A = (7, 13, 31, 37) 	 et B = {16, 10, 6, 30} ...

    Pour n = 4 et M = 37 il y a 3 paires (A, B) trouvées valides ...
    - On recherche celle de somme sum(A u B) minimale ...
    A = [3, 7, 13, 37]
    B = [4, 10, 16, 34]
     Elle donne sumsum = 124 et maxmax = 37
    - On recherche celle de maximum max(A u B) minimale ...
    A = [3, 7, 13, 37]
    B = [4, 10, 16, 34]
     Elle donne sumsum = 124 et maxmax = 37
    - On recherche celle de moyenne sumsum(A,B), maxmax(A,B) minimale ...
    A = [3, 7, 13, 37]
    B = [4, 10, 16, 34]
     Elle donne sumsum = 124 et maxmax = 37

    ([3, 7, 13, 37], [4, 10, 16, 34])

```python
n = 5
M = 70
test_trie_et_prend_min(n, M)
```

     - A = (5, 11, 17, 41, 47) 	 et B = {6, 42, 12, 56, 26, 62} ...
     - A = (5, 11, 17, 41, 47) 	 et B = {6, 42, 12, 56, 26, 62} ...
     - A = (5, 11, 17, 41, 47) 	 et B = {6, 42, 12, 56, 26, 62} ...
     - A = (5, 11, 17, 41, 47) 	 et B = {6, 42, 12, 56, 26, 62} ...
     - A = (5, 11, 17, 41, 47) 	 et B = {6, 42, 12, 56, 26, 62} ...
     - A = (5, 11, 17, 41, 47) 	 et B = {6, 42, 12, 56, 26, 62} ...
     - A = (5, 17, 29, 47, 59) 	 et B = {24, 42, 12, 54, 14} ...
     - A = (5, 17, 31, 47, 61) 	 et B = {66, 12, 42, 36, 6} ...
     - A = (7, 13, 37, 43, 67) 	 et B = {16, 4, 30, 46, 60} ...
     - A = (11, 13, 23, 41, 53) 	 et B = {48, 18, 60, 6, 30} ...
     - A = (11, 17, 23, 47, 53) 	 et B = {56, 50, 36, 20, 6} ...

    Pour n = 5 et M = 70 il y a 11 paires (A, B) trouvées valides ...
    - On recherche celle de somme sum(A u B) minimale ...
    A = [5, 11, 17, 41, 47]
    B = [6, 12, 26, 42, 56]
     Elle donne sumsum = 263 et maxmax = 56
    - On recherche celle de maximum max(A u B) minimale ...
    A = [5, 11, 17, 41, 47]
    B = [6, 12, 26, 42, 56]
     Elle donne sumsum = 263 et maxmax = 56
    - On recherche celle de moyenne sumsum(A,B), maxmax(A,B) minimale ...
    A = [5, 11, 17, 41, 47]
    B = [6, 12, 26, 42, 56]
     Elle donne sumsum = 263 et maxmax = 56

    ([5, 11, 17, 41, 47], [6, 12, 26, 42, 56])

We observe that the pair minimizing `sumsum` is the same as the one minimizing `maxmax`. Curious? It is also the case for $n = 6$:

```python
n = 6
M = 150
%time test_trie_et_prend_min(n, M)
```

     - A = (3, 7, 13, 37, 73, 97) 	 et B = {34, 100, 10, 76, 16, 94} ...
     - A = (5, 7, 17, 31, 61, 101) 	 et B = {96, 66, 132, 36, 6, 12} ...
     - A = (5, 11, 17, 41, 71, 137) 	 et B = {96, 2, 42, 12, 56, 26} ...
     - A = (5, 11, 17, 41, 101, 137) 	 et B = {96, 2, 12, 56, 26, 62} ...
     - A = (5, 11, 17, 47, 71, 137) 	 et B = {36, 42, 12, 56, 26, 92} ...
     - A = (5, 11, 17, 47, 101, 137) 	 et B = {36, 12, 56, 26, 92, 62} ...
     - A = (5, 11, 17, 47, 131, 137) 	 et B = {36, 42, 146, 26, 92, 62} ...
     - A = (5, 11, 17, 71, 101, 137) 	 et B = {96, 2, 36, 12, 56, 26, 92} ...
     - A = (5, 11, 17, 71, 101, 137) 	 et B = {96, 2, 36, 12, 56, 26, 92} ...
     - A = (5, 11, 17, 71, 101, 137) 	 et B = {96, 2, 36, 12, 56, 26, 92} ...
     - A = (5, 11, 17, 71, 101, 137) 	 et B = {96, 2, 36, 12, 56, 26, 92} ...
     - A = (5, 11, 17, 71, 101, 137) 	 et B = {96, 2, 36, 12, 56, 26, 92} ...
     - A = (5, 11, 17, 71, 101, 137) 	 et B = {96, 2, 36, 12, 56, 26, 92} ...
     - A = (5, 11, 17, 71, 101, 137) 	 et B = {96, 2, 36, 12, 56, 26, 92} ...
- A = (5, 11, 31, 53, 101, 103) et B = {96, 36, 6, 78, 48, 126} ... - A = (5, 11, 41, 53, 83, 101) et B = {96, 98, 6, 48, 56, 26} ... - A = (5, 11, 41, 53, 83, 131) et B = {96, 98, 6, 48, 18, 26} ... - A = (5, 11, 41, 53, 101, 131) et B = {96, 98, 6, 48, 26, 126} ... - A = (5, 11, 41, 71, 101, 137) et B = {96, 2, 12, 56, 26, 126} ... - A = (5, 11, 47, 71, 101, 137) et B = {36, 12, 56, 26, 92, 126} ... - A = (5, 11, 47, 71, 131, 137) et B = {36, 102, 42, 26, 92, 126} ... - A = (5, 17, 31, 41, 61, 97) et B = {96, 66, 132, 6, 42, 12} ... - A = (5, 19, 23, 29, 89, 149) et B = {108, 78, 144, 18, 84, 24} ... - A = (5, 19, 23, 79, 89, 149) et B = {78, 144, 48, 18, 84, 24} ... - A = (5, 19, 23, 83, 89, 149) et B = {108, 144, 48, 18, 84, 24} ... - A = (5, 23, 29, 53, 89, 149) et B = {8, 74, 78, 14, 144, 18, 84} ... - A = (5, 23, 29, 53, 89, 149) et B = {8, 74, 78, 14, 144, 18, 84} ... - A = (5, 23, 29, 53, 89, 149) et B = {8, 74, 78, 14, 144, 18, 84} ... - A = (5, 23, 29, 53, 89, 149) et B = {8, 74, 78, 14, 144, 18, 84} ... - A = (5, 23, 29, 53, 89, 149) et B = {8, 74, 78, 14, 144, 18, 84} ... - A = (5, 23, 29, 53, 89, 149) et B = {8, 74, 78, 14, 144, 18, 84} ... - A = (5, 23, 29, 53, 89, 149) et B = {8, 74, 78, 14, 144, 18, 84} ... - A = (5, 23, 29, 59, 89, 149) et B = {134, 8, 108, 78, 14, 24} ... - A = (5, 23, 29, 83, 89, 149) et B = {74, 108, 14, 144, 18, 84, 24} ... - A = (5, 23, 29, 83, 89, 149) et B = {74, 108, 14, 144, 18, 84, 24} ... - A = (5, 23, 29, 83, 89, 149) et B = {74, 108, 14, 144, 18, 84, 24} ... - A = (5, 23, 29, 83, 89, 149) et B = {74, 108, 14, 144, 18, 84, 24} ... - A = (5, 23, 29, 83, 89, 149) et B = {74, 108, 14, 144, 18, 84, 24} ... - A = (5, 23, 29, 83, 89, 149) et B = {74, 108, 14, 144, 18, 84, 24} ... - A = (5, 23, 29, 83, 89, 149) et B = {74, 108, 14, 144, 18, 84, 24} ... - A = (5, 23, 29, 89, 113, 149) et B = {78, 14, 144, 18, 84, 24} ... - A = (5, 23, 53, 83, 89, 149) et B = {74, 14, 144, 48, 18, 84} ... - A = (5, 29, 41, 59, 71, 89) et B = {68, 38, 42, 108, 12, 122} ... - A = (7, 11, 17, 31, 67, 101) et B = {96, 36, 6, 72, 12, 30} ... - A = (7, 11, 17, 41, 67, 101) et B = {96, 6, 72, 12, 90, 30} ... - A = (7, 13, 19, 37, 79, 97) et B = {34, 4, 10, 144, 60, 94} ... - A = (7, 13, 19, 37, 79, 103) et B = {34, 4, 10, 24, 60, 94} ... - A = (7, 13, 19, 37, 79, 139) et B = {34, 10, 144, 24, 60, 94} ... - A = (7, 13, 19, 37, 103, 139) et B = {34, 10, 24, 90, 60, 94} ... - A = (7, 13, 19, 73, 103, 139) et B = {34, 10, 54, 24, 90, 94} ... - A = (7, 13, 23, 37, 47, 107) et B = {66, 6, 144, 24, 90, 60} ... - A = (7, 13, 23, 37, 83, 107) et B = {66, 6, 144, 24, 90, 30} ... - A = (7, 13, 31, 37, 73, 97) et B = {66, 100, 6, 10, 76, 16, 30} ... - A = (7, 13, 31, 37, 73, 97) et B = {66, 100, 6, 10, 76, 16, 30} ... - A = (7, 13, 31, 37, 73, 97) et B = {66, 100, 6, 10, 76, 16, 30} ... - A = (7, 13, 31, 37, 73, 97) et B = {66, 100, 6, 10, 76, 16, 30} ... - A = (7, 13, 31, 37, 73, 97) et B = {66, 100, 6, 10, 76, 16, 30} ... - A = (7, 13, 31, 37, 73, 97) et B = {66, 100, 6, 10, 76, 16, 30} ... - A = (7, 13, 31, 37, 73, 97) et B = {66, 100, 6, 10, 76, 16, 30} ... - A = (7, 13, 37, 67, 79, 97) et B = {34, 100, 4, 144, 60, 30} ... - A = (7, 13, 37, 79, 97, 139) et B = {34, 100, 10, 144, 60, 94} ... - A = (7, 17, 31, 41, 67, 101) et B = {96, 132, 6, 72, 12, 30} ... - A = (7, 17, 31, 41, 97, 101) et B = {96, 66, 132, 6, 12, 30} ... - A = (7, 19, 37, 43, 79, 103) et B = {4, 10, 60, 24, 120, 94} ... - A = (7, 19, 37, 79, 97, 139) et B = {34, 10, 144, 52, 60, 94} ... 
- A = (11, 13, 41, 53, 103, 131) et B = {96, 6, 138, 48, 60, 126} ... - A = (11, 17, 41, 53, 71, 137) et B = {96, 140, 86, 56, 26, 30} ... - A = (11, 17, 41, 53, 83, 101) et B = {96, 6, 140, 56, 26, 30} ... - A = (11, 17, 41, 53, 83, 137) et B = {96, 140, 20, 56, 26, 30} ... - A = (11, 17, 41, 71, 101, 137) et B = {96, 2, 140, 12, 56, 26, 30} ... - A = (11, 17, 41, 71, 101, 137) et B = {96, 2, 140, 12, 56, 26, 30} ... - A = (11, 17, 41, 71, 101, 137) et B = {96, 2, 140, 12, 56, 26, 30} ... - A = (11, 17, 41, 71, 101, 137) et B = {96, 2, 140, 12, 56, 26, 30} ... - A = (11, 17, 41, 71, 101, 137) et B = {96, 2, 140, 12, 56, 26, 30} ... - A = (11, 17, 41, 71, 101, 137) et B = {96, 2, 140, 12, 56, 26, 30} ... - A = (11, 17, 41, 71, 101, 137) et B = {96, 2, 140, 12, 56, 26, 30} ... - A = (11, 17, 41, 83, 101, 137) et B = {96, 90, 140, 56, 26, 30} ... - A = (11, 17, 53, 71, 101, 137) et B = {96, 36, 140, 56, 26, 30} ... - A = (11, 17, 59, 107, 137, 149) et B = {2, 42, 120, 90, 92, 30} ... - A = (11, 23, 41, 53, 71, 137) et B = {140, 86, 30, 56, 60, 126} ... - A = (11, 23, 41, 53, 107, 137) et B = {20, 86, 30, 56, 60, 126} ... - A = (11, 37, 47, 71, 131, 137) et B = {36, 102, 42, 120, 60, 126} ... - A = (11, 41, 53, 71, 101, 137) et B = {96, 140, 30, 56, 26, 126} ... - A = (11, 41, 53, 83, 101, 131) et B = {96, 98, 6, 140, 48, 26} ... - A = (11, 41, 67, 71, 97, 137) et B = {96, 42, 12, 126, 60, 30} ... - A = (13, 17, 23, 73, 83, 107) et B = {66, 6, 84, 24, 90, 30} ... - A = (13, 17, 43, 83, 97, 127) et B = {96, 66, 114, 84, 54, 30} ... - A = (13, 19, 23, 83, 89, 149) et B = {144, 48, 18, 84, 24, 90} ... - A = (13, 19, 29, 43, 113, 139) et B = {138, 18, 84, 54, 24, 60} ... - A = (13, 19, 29, 53, 113, 139) et B = {138, 144, 18, 84, 54, 60} ... - A = (13, 19, 29, 89, 113, 139) et B = {138, 144, 18, 84, 24, 60} ... - A = (13, 19, 37, 43, 79, 103) et B = {4, 70, 10, 24, 60, 94} ... - A = (13, 19, 37, 79, 97, 103) et B = {34, 4, 70, 10, 60, 94} ... - A = (13, 19, 43, 73, 103, 139) et B = {10, 138, 54, 24, 28, 94} ... - A = (13, 19, 43, 79, 103, 139) et B = {10, 60, 24, 88, 28, 94} ... - A = (13, 23, 37, 53, 67, 97) et B = {6, 144, 114, 30, 60, 126} ... - A = (13, 23, 53, 67, 97, 107) et B = {6, 144, 84, 30, 60, 126} ... - A = (13, 23, 53, 79, 83, 149) et B = {144, 48, 18, 114, 84, 30} ... - A = (13, 31, 43, 61, 97, 127) et B = {96, 66, 70, 40, 136, 10} ... - A = (13, 37, 67, 79, 97, 127) et B = {100, 4, 70, 144, 114, 30} ... - A = (13, 43, 67, 97, 109, 127) et B = {4, 70, 40, 114, 84, 30} ... - A = (17, 23, 53, 59, 83, 137) et B = {140, 44, 14, 114, 20, 30} ... - A = (17, 29, 59, 107, 137, 149) et B = {2, 134, 42, 44, 120, 30} ... - A = (17, 41, 53, 71, 83, 101) et B = {96, 140, 110, 56, 26, 30} ... - A = (17, 47, 59, 89, 107, 149) et B = {134, 42, 50, 24, 90, 92} ... - A = (17, 47, 59, 107, 137, 149) et B = {132, 134, 42, 120, 90, 92} ... - A = (19, 23, 29, 79, 89, 113) et B = {78, 144, 18, 84, 24, 60} ... - A = (23, 29, 53, 83, 113, 149) et B = {128, 44, 14, 144, 18, 84} ... - A = (23, 29, 53, 89, 113, 149) et B = {78, 14, 144, 18, 50, 84} ... Pour n = 6 et M = 150 il y a 109 paires (A, B) trouvées valides ... - On recherche celle de somme sum(A u B) minimale ... A = [7, 13, 31, 37, 73, 97] B = [6, 10, 16, 30, 66, 76] Elle donne sumsum = 462 et maxmax = 97 - On recherche celle de maximum max(A u B) minimale ... A = [7, 13, 31, 37, 73, 97] B = [6, 10, 16, 30, 66, 76] Elle donne sumsum = 462 et maxmax = 97 - On recherche celle de moyenne sumsum(A,B), maxmax(A,B) minimale ... 
A = [7, 13, 31, 37, 73, 97]
B = [6, 10, 16, 30, 66, 76]
   Elle donne sumsum = 462 et maxmax = 97
CPU times: user 2.43 s, sys: 12 ms, total: 2.44 s
Wall time: 2.41 s

([7, 13, 31, 37, 73, 97], [6, 10, 16, 30, 66, 76])

```python
A, B = test_trie_et_prend_min(n, M)
verifie_paire_debug(A, B)
```

Pour n = 6 et M = 150 il y a 109 paires (A, B) trouvées valides ...
- On recherche celle de somme sum(A u B) minimale ...
A = [7, 13, 31, 37, 73, 97]
B = [6, 10, 16, 30, 66, 76]
   Elle donne sumsum = 462 et maxmax = 97
- On recherche celle de maximum max(A u B) minimale ...
A = [7, 13, 31, 37, 73, 97]
B = [6, 10, 16, 30, 66, 76]
   Elle donne sumsum = 462 et maxmax = 97
- On recherche celle de moyenne sumsum(A,B), maxmax(A,B) minimale ...
A = [7, 13, 31, 37, 73, 97]
B = [6, 10, 16, 30, 66, 76]
   Elle donne sumsum = 462 et maxmax = 97
- a = 7 + b = 6 donnent 13 qui est bien premier, OK ...
- a = 7 + b = 10 donnent 17 qui est bien premier, OK ...
- a = 7 + b = 16 donnent 23 qui est bien premier, OK ...
- a = 7 + b = 30 donnent 37 qui est bien premier, OK ...
- a = 7 + b = 66 donnent 73 qui est bien premier, OK ...
- a = 7 + b = 76 donnent 83 qui est bien premier, OK ...
- a = 13 + b = 6 donnent 19 qui est bien premier, OK ...
- a = 13 + b = 10 donnent 23 qui est bien premier, OK ...
- a = 13 + b = 16 donnent 29 qui est bien premier, OK ...
- a = 13 + b = 30 donnent 43 qui est bien premier, OK ...
- a = 13 + b = 66 donnent 79 qui est bien premier, OK ...
- a = 13 + b = 76 donnent 89 qui est bien premier, OK ...
- a = 31 + b = 6 donnent 37 qui est bien premier, OK ...
- a = 31 + b = 10 donnent 41 qui est bien premier, OK ...
- a = 31 + b = 16 donnent 47 qui est bien premier, OK ...
- a = 31 + b = 30 donnent 61 qui est bien premier, OK ...
- a = 31 + b = 66 donnent 97 qui est bien premier, OK ...
- a = 31 + b = 76 donnent 107 qui est bien premier, OK ...
- a = 37 + b = 6 donnent 43 qui est bien premier, OK ...
- a = 37 + b = 10 donnent 47 qui est bien premier, OK ...
- a = 37 + b = 16 donnent 53 qui est bien premier, OK ...
- a = 37 + b = 30 donnent 67 qui est bien premier, OK ...
- a = 37 + b = 66 donnent 103 qui est bien premier, OK ...
- a = 37 + b = 76 donnent 113 qui est bien premier, OK ...
- a = 73 + b = 6 donnent 79 qui est bien premier, OK ...
- a = 73 + b = 10 donnent 83 qui est bien premier, OK ...
- a = 73 + b = 16 donnent 89 qui est bien premier, OK ...
- a = 73 + b = 30 donnent 103 qui est bien premier, OK ...
- a = 73 + b = 66 donnent 139 qui est bien premier, OK ...
- a = 73 + b = 76 donnent 149 qui est bien premier, OK ...
- a = 97 + b = 6 donnent 103 qui est bien premier, OK ...
- a = 97 + b = 10 donnent 107 qui est bien premier, OK ...
- a = 97 + b = 16 donnent 113 qui est bien premier, OK ...
- a = 97 + b = 30 donnent 127 qui est bien premier, OK ...
- a = 97 + b = 66 donnent 163 qui est bien premier, OK ...
- a = 97 + b = 76 donnent 173 qui est bien premier, OK ...

True

## Conclusion for $6$-sided dice

With $n = 6$ dice, the smallest pair of value sets $(A, B)$ found is

- A = $\{7, 13, 31, 37, 73, 97\}$
- B = $\{6, 10, 16, 30, 66, 76\}$

It gives $\sum A \cup B = 462$ and $\max A \cup B = 97$.

Quite surprisingly, it is the pair that minimizes both criteria at once (and the only pair satisfying this property).

---

## With more faces?

### Examples with 8-, 10-, 12- and 20-sided dice

> This is going to get *very expensive*. I recommend not running this code on your own machine, but on a compute server, for example!
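Before scaling up, recall the `TODO` left in `genere_toutes_A_B` about parallelizing the outer loop over `combinations(C_A, n)` with `joblib.Parallel`. Here is a minimal sketch of that idea; this is new illustrative code, not the notebook's, and it assumes the `dictAB` mapping (candidate $a \mapsto$ set of valid $b$) has been built exactly as in `genere_toutes_A_B`:

```python
# Illustrative sketch: parallelize the search over the combinations of A.
from itertools import combinations
from joblib import Parallel, delayed

def paires_pour_A(A, dictAB, n):
    """All (A, B) pairs for one fixed combination A, or [] if none."""
    B = set(dictAB[A[0]])
    for a in A[1:]:
        if len(B) < n:
            return []
        B &= dictAB[a]
    if len(B) < n:
        return []
    return [(list(A), list(BB)) for BB in combinations(B, n)]

def genere_toutes_A_B_parallele(dictAB, n=2, n_jobs=-1):
    C_A = sorted(dictAB.keys())
    morceaux = Parallel(n_jobs=n_jobs)(
        delayed(paires_pour_A)(A, dictAB, n) for A in combinations(C_A, n))
    # Flatten the per-combination result lists into one list of pairs.
    return [paire for morceau in morceaux for paire in morceau]
```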
```python
n = 7
M = 170
%time test_genere_AB(n=n, M=M)
%time test_trie_et_prend_min(n, M)
```

 - A = (3, 7, 13, 37, 73, 97, 157)   et B = {160, 34, 100, 10, 76, 16, 94} ...

A = [3, 7, 13, 37, 73, 97, 157]
B = [160, 34, 100, 10, 76, 16, 94]
CPU times: user 704 ms, sys: 12 ms, total: 716 ms
Wall time: 713 ms
 - A = (3, 7, 13, 37, 73, 97, 157)   et B = {160, 34, 100, 10, 76, 16, 94} ...
 - A = (3, 13, 19, 37, 79, 97, 103)   et B = {160, 34, 4, 70, 10, 154, 94} ...
 - A = (3, 13, 37, 73, 97, 157, 163)   et B = {34, 100, 10, 76, 16, 154, 94} ...
 - A = (7, 13, 31, 37, 73, 97, 157)   et B = {160, 66, 100, 6, 10, 76, 16} ...
 - A = (7, 13, 31, 67, 73, 97, 151)   et B = {160, 100, 166, 6, 40, 16, 30} ...
 - A = (7, 13, 37, 73, 97, 157, 163)   et B = {34, 66, 100, 10, 76, 16, 94} ...
 - A = (13, 17, 23, 73, 83, 107, 167)   et B = {66, 6, 84, 150, 24, 90, 30} ...
 - A = (13, 19, 37, 79, 97, 103, 163)   et B = {34, 4, 70, 10, 154, 60, 94} ...

Pour n = 7 et M = 170 il y a 8 paires (A, B) trouvées valides ...
- On recherche celle de somme sum(A u B) minimale ...
A = [7, 13, 31, 37, 73, 97, 157]
B = [6, 10, 16, 66, 76, 100, 160]
   Elle donne sumsum = 849 et maxmax = 160
- On recherche celle de maximum max(A u B) minimale ...
A = [3, 7, 13, 37, 73, 97, 157]
B = [10, 16, 34, 76, 94, 100, 160]
   Elle donne sumsum = 877 et maxmax = 160
- On recherche celle de moyenne sumsum(A,B), maxmax(A,B) minimale ...
A = [7, 13, 31, 37, 73, 97, 157]
B = [6, 10, 16, 66, 76, 100, 160]
   Elle donne sumsum = 849 et maxmax = 160
CPU times: user 21.9 s, sys: 8 ms, total: 21.9 s
Wall time: 21.9 s

([7, 13, 31, 37, 73, 97, 157], [6, 10, 16, 66, 76, 100, 160])

#### With 8 faces

```python
n = 8
M = 440
%time test_genere_AB(n=n, M=M)
# %time test_trie_et_prend_min(n, M)
```

#### With 10 faces

```python
n = 10
M = 550
%time test_genere_AB(n=n, M=M)
# %time test_trie_et_prend_min(n, M)
```

#### With 12 faces

```python
n = 12
M = 800
%time test_genere_AB(n=n, M=M)
# %time test_trie_et_prend_min(n, M)
```

#### With 20 faces

```python
n = 20
M = 1000
%time test_genere_AB(n=n, M=M)
# %time test_trie_et_prend_min(n, M)
```

### Examples with dice of different sizes

> It would not be very hard to adapt the preceding code to consider $A$ of size $n$ and $B$ of size $p$, possibly with $n \neq p$.
> I will do it, later.

### Examples with $100$ faces?

> That would take a beefy machine, and time...

----

# Possible generalizations

- Dice with different numbers of faces, i.e., $\# A \neq \# B$,
- More than two dice, i.e., consider $A_1, \dots, A_p$ and look for the same property (a brute-force check of this property is sketched after this list):
$$ \forall a_1 \in A_1, \dots, a_p \in A_p, a_1 + \dots + a_p \in \mathcal{P}. $$
Still imposing that $A_1 \subset \mathcal{P}$, and possibly more.
- Solving the initial problem differently, so as not to have to supply a bound $M$ on $\max A, B$.
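As announced in the list above, here is a brute-force sketch of the defining property for $p$ dice. This is new illustrative code: it only assumes the primality test `estpremier` defined earlier in this notebook.

```python
# Illustrative sketch: check that every sum a_1 + ... + a_p across p dice is prime.
from itertools import product

def verifie_p_des(ensembles):
    """ensembles is a list [A_1, ..., A_p] of sets (or lists) of faces."""
    return all(estpremier(sum(faces)) for faces in product(*ensembles))

# Sanity check with the n = 3 solution found above, viewed as p = 2 dice:
# verifie_p_des([{3, 7, 13}, {4, 10, 16}])  # expected: True
```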
e64d5f28d3a889a116dc70a3089ea7fff2c712b0
105,824
ipynb
Jupyter Notebook
simus/Calcul_d_une_paire_de_des_un_peu_particuliers.ipynb
dutc/notebooks
e3ac65c22f6ad2ce863c0b80a999f029fff1ca2c
[ "MIT" ]
4
2017-05-15T19:41:09.000Z
2019-04-09T11:34:07.000Z
simus/Calcul_d_une_paire_de_des_un_peu_particuliers.ipynb
dutc/notebooks
e3ac65c22f6ad2ce863c0b80a999f029fff1ca2c
[ "MIT" ]
null
null
null
simus/Calcul_d_une_paire_de_des_un_peu_particuliers.ipynb
dutc/notebooks
e3ac65c22f6ad2ce863c0b80a999f029fff1ca2c
[ "MIT" ]
5
2017-07-25T05:05:55.000Z
2021-09-27T07:47:43.000Z
40.530065
21,760
0.581191
true
18,888
Qwen/Qwen-72B
1. YES 2. YES
0.877477
0.851953
0.747569
__label__fra_Latn
0.428865
0.575185
# Neuro-Fuzzy Classification of MNIST

In this notebook a neuro-fuzzy classifier will be trained and evaluated on the MNIST dataset. No feature reduction techniques will be used; the intent is to provide a baseline performance to compare against other techniques.

```python
import gzip
import numpy as np

def load_data(filename, dims):
    with gzip.open(filename, "rb") as infile:
        # consume magic number
        infile.read(4)
        # consume dimensions data
        infile.read(4 * len(dims))
        return np.frombuffer(infile.read(np.prod(dims)), dtype=np.uint8).reshape(dims)

# training data
train_images = load_data("data/train-images-idx3-ubyte.gz", [60000, 28, 28])
train_labels = load_data("data/train-labels-idx1-ubyte.gz", [60000])

# testing data
test_images = load_data("data/t10k-images-idx3-ubyte.gz", [10000, 28, 28])
test_labels = load_data("data/t10k-labels-idx1-ubyte.gz", [10000])
```

The MNIST data is loaded into memory as NumPy arrays.

```python
import matplotlib.pyplot as plt
%matplotlib inline

fig, ax_array = plt.subplots(10, 20, figsize=(16, 8))
axes = ax_array.flatten()
for i, ax in enumerate(axes):
    ax.imshow(train_images[i,:,:], cmap="Greys", interpolation="none")
plt.setp(axes, xticks=[], yticks=[], frame_on=False)
plt.tight_layout(h_pad=0, w_pad=0)
```

Samples from the training images are displayed above to ensure the data was loaded correctly.

```python
import os
import pickle
import skfuzzy as skf

_cache_dir = "cache/params.pickle"
_ignore_cache = False

# compute sigma parameter
def compute_sigmas(data, memberships, centers):
    mask = np.any(memberships >= 1.0, axis=0)
    data = data[:,~mask]
    memberships = memberships[:,~mask]

    data = np.expand_dims(data, axis=0)
    memberships = np.expand_dims(memberships, axis=1)
    centers = np.expand_dims(centers, axis=2)

    return np.mean(np.sqrt(-np.square(data - centers) / (2 * np.log(memberships))), axis=2)

# generate the parameters for the fuzzy classifier
def gen_params(data, c, m):
    if not os.path.isfile(_cache_dir) or _ignore_cache:
        centers, memberships, u0, d, jm, p, fpc = skf.cmeans(
            data.T, c, m, 1e-8, 1000, seed=0)
        sigmas = compute_sigmas(data.T, memberships, centers)
        pickle.dump((centers, sigmas), open(_cache_dir, "wb"))
    else:
        centers, sigmas = pickle.load(open(_cache_dir, "rb"))

    return centers, sigmas

# initial parameters for use in model
init_mu, init_sigma = gen_params(train_images.reshape(-1, 784), 10, 1.1)

plt.figure(figsize=(16, 4))
plt.title("Initial Mu")
plt.boxplot(init_mu.reshape(-1), vert=False)

plt.figure(figsize=(16, 4))
plt.title("Initial Sigma")
plt.boxplot(init_sigma.reshape(-1), vert=False)
```

The initial values for mu and sigma are generated (or loaded from disk) and boxplots are displayed above.

```python
def processed_data():
    train_x = train_images.astype(np.float) / 255
    train_y = np.zeros((60000,10))
    train_y[np.arange(60000), train_labels] = 1

    test_x = test_images.astype(np.float) / 255
    test_y = np.zeros((10000,10))
    test_y[np.arange(10000), test_labels] = 1

    return (train_x, train_y), (test_x, test_y)

(train_x, train_y), (test_x, test_y) = processed_data()
```

This function formats the training data by rescaling the images and creating one-hot vectors for the target data.

## Building the Network

The code that defines the network structure is written below. A custom layer was created for the Gaussian membership function. Additionally, the product t-norm is computed and normalized in a special way to prevent numerical underflow.

The product t-norm and normalization layer:
\begin{align} \ f_i & = \frac{1}{f_{max}}\prod_{j=1}^N m_{i,j} \\ \ f_i & = exp\Big({ln\Big(\frac{1}{f_{max}}\prod_{j=1}^N m_{i,j}\Big)}\Big) \\ \ f_i & = exp\Big({\sum_{j=1}^N ln(m_{i,j})} - ln(f_{max})\Big) \\ \end{align} ```python import keras import keras.layers as layers import keras.models as models from keras import backend as K # custom layer for gauss membership function class GaussMembership(layers.Layer): def __init__(self, num_rules, epsilon=1e-8, **kwargs): self.epsilon = epsilon self.num_rules = num_rules super(GaussMembership, self).__init__(**kwargs) def build(self, input_shape): self.mu = self.add_weight( name="mu", shape=(self.num_rules, input_shape[1]), initializer=keras.initializers.Zeros(), trainable=True) self.sigma = self.add_weight( name="sigma", shape=(self.num_rules, input_shape[1]), initializer=keras.initializers.Ones(), constraint=keras.constraints.NonNeg(), trainable=True) super(GaussMembership, self).build(input_shape) def call(self, x): x = K.expand_dims(x, axis=1) x = K.square((x - self.mu) / (self.sigma + self.epsilon)) return K.exp(-0.5 * x) def compute_output_shape(self, input_shape): return (input_shape[0], self.num_rules, input_shape[1]) # create the model with the given parameters def create_model(init_mu, init_sigma, num_rules=10): # define the models input inputs = layers.Input(shape=(28, 28,)) reshaped = layers.Reshape((784,))(inputs) # apply the membership function membership = GaussMembership( num_rules, weights=[init_mu, init_sigma])(reshaped) def log_prod(x): x = K.sum(K.log(x + 1e-8), axis=2) x = x - K.max(x, axis=1, keepdims=True) return K.exp(x) fstrength = layers.Lambda(lambda x: log_prod(x))(membership) outputs = layers.Dense(10, activation="sigmoid")(fstrength) return models.Model(inputs=inputs, outputs=outputs) ``` Using TensorFlow backend. This code defines the model in keras. ```python import tensorflow as tf def plot_activations(model, inputs): with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for layer in model.layers: activation_val = sess.run( layer.output, feed_dict={model.input:inputs}) plt.figure(figsize=(16, 4)) plt.title(layer) plt.boxplot(activation_val.reshape(-1), vert=False) def plot_gradients(model, inputs): gradient_fn = K.gradients(model.output, model.trainable_weights) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) gradients = sess.run( gradient_fn, feed_dict={model.input:inputs}) for param, gradient in zip(model.trainable_weights, gradients): plt.figure(figsize=(16, 4)) plt.title(param) plt.boxplot(gradient.reshape(-1), vert=False) ``` The functions above are written using tensorflow in order to view the intermediate activations and gradients in a model. 
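As a quick numerical illustration of the log-domain rewriting derived at the top of this section, the following sketch (new code, with made-up membership values) shows that the naive product of many memberships underflows in float32, while the exp-of-normalized-log-sum form stays well scaled:

```python
import numpy as np

rng = np.random.default_rng(0)
# Fake memberships: 10 rules, 784 inputs, all values well below 1.
m = rng.uniform(0.01, 0.2, size=(10, 784)).astype(np.float32)

# Naive product over the 784 inputs: underflows to exactly 0 in float32.
print(np.prod(m, axis=1))

# Log domain: sum the logs, subtract the max, then exponentiate.
log_f = np.sum(np.log(m + 1e-8), axis=1)
print(np.exp(log_f - np.max(log_f)))  # strongest rule fires at 1.0, others relative to it
```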
```python
fcm_model = create_model(init_mu, init_sigma)
fcm_model.compile(loss="binary_crossentropy", optimizer="sgd")
#plot_activations(fcm_model, train_x[:64])
```

```python
def plot_prod_layer(model, inputs):
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        membership_val = sess.run(
            model.layers[2].output, feed_dict={model.input:inputs})

    plt.figure(figsize=(16, 4))
    plt.title("Memberships")
    plt.boxplot(membership_val.reshape(-1), vert=False)

    logsum = np.sum(np.log(membership_val + 1e-8), axis=2)
    plt.figure(figsize=(16, 4))
    plt.title("Log-Sum")
    plt.boxplot(logsum.reshape(-1), vert=False)

    normalized = logsum - np.max(logsum, axis=1, keepdims=True)
    plt.figure(figsize=(16, 4))
    plt.title("Normalized Log-Sum")
    plt.boxplot(normalized.reshape(-1), vert=False)

    fstrength = np.exp(normalized)
    plt.figure(figsize=(16, 4))
    plt.title("Firing Strength")
    plt.boxplot(fstrength.reshape(-1), vert=False)

#plot_prod_layer(fcm_model, train_x[:64])
```

The activations of each layer are shown in the boxplots above. The firing strengths of the rules seem to be exclusively ones. I am unsure as to why this is, but it will likely cause a problem with training.

```python
#plot_gradients(fcm_model, train_x[:128])
```

Plotted above are the model's gradients.

```python
def plot_history(history):
    plt.figure(figsize=(16, 8))
    plt.title("Loss")
    plt.plot(history.history["loss"], c="b")
    plt.plot(history.history["val_loss"], c="r")

    plt.figure(figsize=(16, 8))
    plt.title("Accuracy")
    plt.plot(history.history["categorical_accuracy"], c="b")
    plt.plot(history.history["val_categorical_accuracy"], c="r")

# fcm_model.compile(
#     optimizer="sgd",
#     loss="binary_crossentropy",
#     metrics=[keras.metrics.categorical_accuracy])

# fcm_history = fcm_model.fit(
#     x=train_x,
#     y=train_y,
#     batch_size=64,
#     epochs=100,
#     validation_data=(test_x, test_y),
#     verbose=0)

# plot_history(fcm_history)
```

When initialized using the parameters from FCM, the model does not seem to learn at all. Both accuracy and loss seem to completely plateau. This is most likely due to numerous small values in the sigma parameters, but it will need to be investigated in more depth later.

```python
centers_model = create_model(init_mu, np.ones((10,784)))

# centers_model.compile(
#     optimizer="adam",
#     loss="binary_crossentropy",
#     metrics=[keras.metrics.categorical_accuracy])

# centers_history = centers_model.fit(
#     x=train_x,
#     y=train_y,
#     batch_size=64,
#     epochs=50,
#     validation_data=(test_x, test_y),
#     verbose=1)

# plot_history(centers_history)
```

Using an array of ones in place of the initial sigma values allows the network to train and reach an accuracy of around 70%. Extrapolating from the graph, better performance can be achieved by letting it train longer.

```python
sigmas_model = create_model(np.zeros((10,784)), init_sigma)

# sigmas_model.compile(
#     optimizer="sgd",
#     loss="binary_crossentropy",
#     metrics=[keras.metrics.categorical_accuracy])

# sigmas_history = sigmas_model.fit(
#     x=train_x,
#     y=train_y,
#     batch_size=64,
#     epochs=100,
#     validation_data=(test_x, test_y),
#     verbose=0)

# plot_history(sigmas_history)
```

Using zeros for the mu values and the initial sigma values, the network encounters the same problem and is unable to train.
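To make the suspected failure mode concrete: with a very small sigma, the Gaussian membership of any input that is not almost exactly at the center underflows, and the gradient vanishes with it. A tiny illustrative check (new code, made-up numbers):

```python
import numpy as np

x, mu = 0.5, 0.3
for sigma in [1.0, 0.1, 0.01]:
    z = (x - mu) / sigma
    membership = np.exp(-0.5 * z**2)
    grad = -z / sigma * membership  # d(membership)/dx: vanishes as membership underflows
    print(f"sigma={sigma}: membership={membership:.3e}, d/dx={grad:.3e}")
```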
```python ones_zeros_model = create_model(np.zeros((10,784)), np.ones((10,784))) # ones_zeros_model.compile( # optimizer="sgd", # loss="binary_crossentropy", # metrics=[keras.metrics.categorical_accuracy]) # ones_zeros_history = ones_zeros_model.fit( # x=train_x, # y=train_y, # batch_size=64, # epochs=100, # validation_data=(test_x, test_y), # verbose=0) # plot_history(ones_zeros_history) ``` Surprisingly, using zeros for mu and ones for sigma as initial values seems to give the best results so far. The network exceeds 80% accuracy and shows no signs of overfitting. The network does seem to take a very long time to train, taking 100 epochs to reach ~85% accuracy. ```python model = create_model(init_mu, np.ones((10,784))) model.compile( optimizer="adam", loss="binary_crossentropy", metrics=[keras.metrics.categorical_accuracy]) history = model.fit( x=train_x, y=train_y, batch_size=64, epochs=50, validation_data=(test_x, test_y), verbose=1) ``` Train on 60000 samples, validate on 10000 samples Epoch 1/50 60000/60000 [==============================] - 3s 58us/step - loss: 0.4104 - categorical_accuracy: 0.2533 - val_loss: 0.2905 - val_categorical_accuracy: 0.3845 Epoch 2/50 60000/60000 [==============================] - 3s 50us/step - loss: 0.2640 - categorical_accuracy: 0.4919 - val_loss: 0.2387 - val_categorical_accuracy: 0.6201 Epoch 3/50 60000/60000 [==============================] - 3s 49us/step - loss: 0.2158 - categorical_accuracy: 0.6862 - val_loss: 0.1858 - val_categorical_accuracy: 0.7500 Epoch 4/50 60000/60000 [==============================] - 3s 50us/step - loss: 0.1694 - categorical_accuracy: 0.7865 - val_loss: 0.1544 - val_categorical_accuracy: 0.8402 Epoch 5/50 60000/60000 [==============================] - 3s 50us/step - loss: 0.1449 - categorical_accuracy: 0.8441 - val_loss: 0.1359 - val_categorical_accuracy: 0.8509 Epoch 6/50 60000/60000 [==============================] - 3s 50us/step - loss: 0.1283 - categorical_accuracy: 0.8575 - val_loss: 0.1225 - val_categorical_accuracy: 0.8509 Epoch 7/50 60000/60000 [==============================] - 3s 51us/step - loss: 0.1159 - categorical_accuracy: 0.8662 - val_loss: 0.1129 - val_categorical_accuracy: 0.8645 Epoch 8/50 60000/60000 [==============================] - 3s 50us/step - loss: 0.1070 - categorical_accuracy: 0.8725 - val_loss: 0.1048 - val_categorical_accuracy: 0.8749 Epoch 9/50 60000/60000 [==============================] - 3s 51us/step - loss: 0.1007 - categorical_accuracy: 0.8788 - val_loss: 0.1000 - val_categorical_accuracy: 0.8771 Epoch 10/50 60000/60000 [==============================] - 3s 50us/step - loss: 0.0960 - categorical_accuracy: 0.8838 - val_loss: 0.0971 - val_categorical_accuracy: 0.8807 Epoch 11/50 60000/60000 [==============================] - 3s 50us/step - loss: 0.0923 - categorical_accuracy: 0.8881 - val_loss: 0.0935 - val_categorical_accuracy: 0.8821 Epoch 12/50 60000/60000 [==============================] - 3s 50us/step - loss: 0.0892 - categorical_accuracy: 0.8907 - val_loss: 0.0921 - val_categorical_accuracy: 0.8822 Epoch 13/50 60000/60000 [==============================] - 3s 49us/step - loss: 0.0868 - categorical_accuracy: 0.8933 - val_loss: 0.0893 - val_categorical_accuracy: 0.8880 Epoch 14/50 60000/60000 [==============================] - 3s 52us/step - loss: 0.0847 - categorical_accuracy: 0.8950 - val_loss: 0.0891 - val_categorical_accuracy: 0.8830 Epoch 15/50 60000/60000 [==============================] - 3s 51us/step - loss: 0.0828 - categorical_accuracy: 0.8968 - val_loss: 0.0866 - 
val_categorical_accuracy: 0.8898 Epoch 16/50 60000/60000 [==============================] - 3s 51us/step - loss: 0.0810 - categorical_accuracy: 0.8984 - val_loss: 0.0848 - val_categorical_accuracy: 0.8905 Epoch 17/50 60000/60000 [==============================] - 3s 51us/step - loss: 0.0798 - categorical_accuracy: 0.8992 - val_loss: 0.0846 - val_categorical_accuracy: 0.8905 Epoch 18/50 60000/60000 [==============================] - 3s 50us/step - loss: 0.0782 - categorical_accuracy: 0.9016 - val_loss: 0.0824 - val_categorical_accuracy: 0.8916 Epoch 19/50 60000/60000 [==============================] - 3s 53us/step - loss: 0.0770 - categorical_accuracy: 0.9032 - val_loss: 0.0820 - val_categorical_accuracy: 0.8908 Epoch 20/50 60000/60000 [==============================] - 3s 53us/step - loss: 0.0757 - categorical_accuracy: 0.9040 - val_loss: 0.0808 - val_categorical_accuracy: 0.8935 Epoch 21/50 60000/60000 [==============================] - 3s 48us/step - loss: 0.0742 - categorical_accuracy: 0.9061 - val_loss: 0.0795 - val_categorical_accuracy: 0.8917 Epoch 22/50 60000/60000 [==============================] - 3s 48us/step - loss: 0.0698 - categorical_accuracy: 0.9113 - val_loss: 0.0731 - val_categorical_accuracy: 0.9089 Epoch 23/50 60000/60000 [==============================] - 3s 48us/step - loss: 0.0662 - categorical_accuracy: 0.9170 - val_loss: 0.0712 - val_categorical_accuracy: 0.9072 Epoch 24/50 60000/60000 [==============================] - 3s 48us/step - loss: 0.0642 - categorical_accuracy: 0.9195 - val_loss: 0.0703 - val_categorical_accuracy: 0.9103 Epoch 25/50 60000/60000 [==============================] - 3s 48us/step - loss: 0.0624 - categorical_accuracy: 0.9211 - val_loss: 0.0675 - val_categorical_accuracy: 0.9119 Epoch 26/50 60000/60000 [==============================] - 3s 48us/step - loss: 0.0608 - categorical_accuracy: 0.9229 - val_loss: 0.0667 - val_categorical_accuracy: 0.9101 Epoch 27/50 60000/60000 [==============================] - 3s 48us/step - loss: 0.0594 - categorical_accuracy: 0.9238 - val_loss: 0.0654 - val_categorical_accuracy: 0.9156 Epoch 28/50 60000/60000 [==============================] - 3s 48us/step - loss: 0.0581 - categorical_accuracy: 0.9249 - val_loss: 0.0652 - val_categorical_accuracy: 0.9151 Epoch 29/50 60000/60000 [==============================] - 3s 48us/step - loss: 0.0569 - categorical_accuracy: 0.9261 - val_loss: 0.0639 - val_categorical_accuracy: 0.9171 Epoch 30/50 60000/60000 [==============================] - 3s 48us/step - loss: 0.0557 - categorical_accuracy: 0.9267 - val_loss: 0.0627 - val_categorical_accuracy: 0.9143 Epoch 31/50 60000/60000 [==============================] - 3s 48us/step - loss: 0.0543 - categorical_accuracy: 0.9285 - val_loss: 0.0614 - val_categorical_accuracy: 0.9152 Epoch 32/50 60000/60000 [==============================] - 3s 48us/step - loss: 0.0533 - categorical_accuracy: 0.9304 - val_loss: 0.0609 - val_categorical_accuracy: 0.9182 Epoch 33/50 60000/60000 [==============================] - 3s 48us/step - loss: 0.0524 - categorical_accuracy: 0.9308 - val_loss: 0.0597 - val_categorical_accuracy: 0.9187 Epoch 34/50 60000/60000 [==============================] - 3s 48us/step - loss: 0.0516 - categorical_accuracy: 0.9321 - val_loss: 0.0590 - val_categorical_accuracy: 0.9219 Epoch 35/50 60000/60000 [==============================] - 3s 48us/step - loss: 0.0508 - categorical_accuracy: 0.9331 - val_loss: 0.0582 - val_categorical_accuracy: 0.9199 Epoch 36/50 60000/60000 [==============================] - 3s 48us/step - loss: 
0.0500 - categorical_accuracy: 0.9345 - val_loss: 0.0586 - val_categorical_accuracy: 0.9222 Epoch 37/50 60000/60000 [==============================] - 3s 48us/step - loss: 0.0497 - categorical_accuracy: 0.9344 - val_loss: 0.0580 - val_categorical_accuracy: 0.9206 Epoch 38/50 60000/60000 [==============================] - 3s 48us/step - loss: 0.0492 - categorical_accuracy: 0.9343 - val_loss: 0.0582 - val_categorical_accuracy: 0.9173 Epoch 39/50 60000/60000 [==============================] - 3s 48us/step - loss: 0.0486 - categorical_accuracy: 0.9366 - val_loss: 0.0568 - val_categorical_accuracy: 0.9229 Epoch 40/50 60000/60000 [==============================] - 3s 49us/step - loss: 0.0479 - categorical_accuracy: 0.9366 - val_loss: 0.0564 - val_categorical_accuracy: 0.9234 Epoch 41/50 60000/60000 [==============================] - 3s 48us/step - loss: 0.0474 - categorical_accuracy: 0.9371 - val_loss: 0.0547 - val_categorical_accuracy: 0.9254 Epoch 42/50 60000/60000 [==============================] - 3s 48us/step - loss: 0.0471 - categorical_accuracy: 0.9373 - val_loss: 0.0565 - val_categorical_accuracy: 0.9224 Epoch 43/50 60000/60000 [==============================] - 3s 48us/step - loss: 0.0466 - categorical_accuracy: 0.9392 - val_loss: 0.0545 - val_categorical_accuracy: 0.9243 Epoch 44/50 60000/60000 [==============================] - 3s 48us/step - loss: 0.0462 - categorical_accuracy: 0.9387 - val_loss: 0.0566 - val_categorical_accuracy: 0.9227 Epoch 45/50 60000/60000 [==============================] - 3s 48us/step - loss: 0.0460 - categorical_accuracy: 0.9389 - val_loss: 0.0543 - val_categorical_accuracy: 0.9257 Epoch 46/50 60000/60000 [==============================] - 3s 48us/step - loss: 0.0455 - categorical_accuracy: 0.9395 - val_loss: 0.0552 - val_categorical_accuracy: 0.9252 Epoch 47/50 60000/60000 [==============================] - 3s 48us/step - loss: 0.0454 - categorical_accuracy: 0.9396 - val_loss: 0.0543 - val_categorical_accuracy: 0.9224 Epoch 48/50 60000/60000 [==============================] - 3s 48us/step - loss: 0.0449 - categorical_accuracy: 0.9403 - val_loss: 0.0542 - val_categorical_accuracy: 0.9221 Epoch 49/50 60000/60000 [==============================] - 3s 48us/step - loss: 0.0447 - categorical_accuracy: 0.9403 - val_loss: 0.0547 - val_categorical_accuracy: 0.9221 Epoch 50/50 60000/60000 [==============================] - 3s 48us/step - loss: 0.0445 - categorical_accuracy: 0.9406 - val_loss: 0.0545 - val_categorical_accuracy: 0.9201 ```python def summarize_history(history): print("Training Accuracy: {:.2%}".format(np.max(history.history["categorical_accuracy"]))) print("Validation Accuracy: {:.2%}".format(np.max(history.history["val_categorical_accuracy"]))) fig, axes = plt.subplots(1, 2, figsize=(8, 4), squeeze=True) axes[0].set_title("Loss") axes[0].plot(history.history["loss"], c="b") axes[0].plot(history.history["val_loss"], c="r") axes[1].set_title("Accuracy") axes[1].set_ylim((0.8, 1)) axes[1].plot(history.history["categorical_accuracy"], c="b") axes[1].plot(history.history["val_categorical_accuracy"], c="r") summarize_history(history) ``` Using the adam optimizer gives the best training results so far. The network starts to overfit somewhere between 10-20 epochs. This means the network won't have to be trained for hundreds of epochs. 
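Since overfitting sets in somewhere between 10 and 20 epochs, one standard option (not used in the runs above) is Keras's `EarlyStopping` callback. A minimal sketch, assuming a Keras version that supports `restore_best_weights`:

```python
from keras.callbacks import EarlyStopping

# Stop when validation loss has not improved for 5 epochs, keeping the best weights.
early_stop = EarlyStopping(monitor="val_loss", patience=5, restore_best_weights=True)

# model.fit(x=train_x, y=train_y, batch_size=64, epochs=50,
#           validation_data=(test_x, test_y), callbacks=[early_stop])
```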
```python
# ones_zeros_model = create_model(np.zeros((10,784)), np.ones((10,784)))

# ones_zeros_model.compile(
#     optimizer="adam",
#     loss="binary_crossentropy",
#     metrics=[keras.metrics.categorical_accuracy])

# adam_history = ones_zeros_model.fit(
#     x=train_x,
#     y=train_y,
#     batch_size=64,
#     epochs=20,
#     validation_data=(test_x, test_y),
#     verbose=0)

# summarize_history(adam_history)
# plot_history(adam_history)
```

Using the Adam optimizer, good results are achieved in only 20 epochs.

## Other Parameters

This section investigates the model's performance with different numbers of rules.

```python
# for n in [20, 50, 100]:
#     model = create_model(np.zeros((n,784)), np.ones((n,784)), num_rules=n)
#     model.compile(
#         optimizer="adam",
#         loss="binary_crossentropy",
#         metrics=[keras.metrics.categorical_accuracy])

#     history = model.fit(
#         x=train_x,
#         y=train_y,
#         batch_size=64,
#         epochs=20,
#         validation_data=(test_x, test_y),
#         verbose=0)

#     print("{} Rules:".format(n))
#     summarize_history(history)
#     plot_history(history)
```
5585dd4858a0f0d315f32677b39b4d77b2f423bc
206,105
ipynb
Jupyter Notebook
mnist/classifier-mnist.ipynb
rkluzinski/research-2019
105a295b40c0f21349dbbc6ab992eced4c36bc67
[ "MIT" ]
2
2020-10-10T06:46:08.000Z
2022-03-29T03:08:45.000Z
mnist/classifier-mnist.ipynb
christianfv5/Deep_Fuzzy_1
105a295b40c0f21349dbbc6ab992eced4c36bc67
[ "MIT" ]
null
null
null
mnist/classifier-mnist.ipynb
christianfv5/Deep_Fuzzy_1
105a295b40c0f21349dbbc6ab992eced4c36bc67
[ "MIT" ]
null
null
null
241.05848
132,576
0.897494
true
6,625
Qwen/Qwen-72B
1. YES 2. YES
0.891811
0.849971
0.758014
__label__eng_Latn
0.346758
0.599452
```python import loader from sympy import * init_printing() from root.solver import * ``` #### Find the general solution of $y^{(4)} - 4y''' + 4y'' = 0$ ```python yc, p = nth_order_const_coeff(1, -4, 4, 0, 0) p.display() ``` $\displaystyle \text{Characteristic equation: }$ $\displaystyle r^{4} - 4 r^{3} + 4 r^{2} = 0$ $\displaystyle \text{Roots: }\left\{ \begin{array}{ll}r_{1,2} = 0\\r_{3,4} = 2\\\end{array} \right.$ $\displaystyle \text{General Solution: }$ $\displaystyle y = C_{1} + C_{2} t + C_{3} e^{2 t} + C_{4} t e^{2 t}$ #### Find the general solution of $y^{(6)} + y = 0$ ```python yc, p = nth_order_const_coeff(1, 0, 0, 0, 0, 0, 1) p.display() ``` $\displaystyle \text{Characteristic equation: }$ $\displaystyle r^{6} + 1 = 0$ $\displaystyle \text{Roots: }\left\{ \begin{array}{ll}r_{1} = i\\r_{2} = - i\\r_{3} = - \frac{\sqrt{3}}{2} - \frac{i}{2}\\r_{4} = - \frac{\sqrt{3}}{2} + \frac{i}{2}\\r_{5} = \frac{\sqrt{3}}{2} - \frac{i}{2}\\r_{6} = \frac{\sqrt{3}}{2} + \frac{i}{2}\\\end{array} \right.$ $\displaystyle \text{General Solution: }$ $\displaystyle y = C_{1} \sin{\left(t \right)} + C_{2} \cos{\left(t \right)} + C_{3} e^{- \frac{\sqrt{3} t}{2}} \sin{\left(\frac{t}{2} \right)} + C_{4} e^{- \frac{\sqrt{3} t}{2}} \cos{\left(\frac{t}{2} \right)} + C_{5} e^{\frac{\sqrt{3} t}{2}} \sin{\left(\frac{t}{2} \right)} + C_{6} e^{\frac{\sqrt{3} t}{2}} \cos{\left(\frac{t}{2} \right)}$
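As an independent cross-check of the two solutions above (new code, not part of `root.solver`), SymPy's built-in `dsolve` handles constant-coefficient homogeneous equations directly:

```python
from sympy import Function, Eq, dsolve, symbols

t = symbols('t')
y = Function('y')

# y'''' - 4 y''' + 4 y'' = 0
print(dsolve(Eq(y(t).diff(t, 4) - 4*y(t).diff(t, 3) + 4*y(t).diff(t, 2), 0), y(t)))

# y^(6) + y = 0
print(dsolve(Eq(y(t).diff(t, 6) + y(t), 0), y(t)))
```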
3d6157fb16a89d8aa0e6f90946945568562dd7f4
4,519
ipynb
Jupyter Notebook
notebooks/higher-order-homogeneous-ode-constant-coefficients.ipynb
kaiyingshan/ode-solver
30c6798efe9c35a088b2c6043493470701641042
[ "MIT" ]
2
2019-02-17T23:15:20.000Z
2019-02-17T23:15:27.000Z
notebooks/higher-order-homogeneous-ode-constant-coefficients.ipynb
kaiyingshan/ode-solver
30c6798efe9c35a088b2c6043493470701641042
[ "MIT" ]
null
null
null
notebooks/higher-order-homogeneous-ode-constant-coefficients.ipynb
kaiyingshan/ode-solver
30c6798efe9c35a088b2c6043493470701641042
[ "MIT" ]
null
null
null
23.293814
381
0.451427
true
606
Qwen/Qwen-72B
1. YES 2. YES
0.937211
0.868827
0.814274
__label__yue_Hant
0.3151
0.730164
# 2 Potential Outcomes

## 2.1 Potential Outcomes and Individual Treatment Effects

#### Potential Outcome
- Denotes what the outcome would be under treatment $T=t$; written $Y(t)$
- Not the actually observed outcome $Y$

#### Individual Treatment Effect (ITE)

The ITE of individual $i$:

$$\tau_i \triangleq Y_i(1)-Y_i(0)$$

## 2.2 The Fundamental Problem of Causal Inference

#### Fundamental Problem of Causal Inference
- Because $Y_i(1)$ and $Y_i(0)$ can never be observed at the same time, the causal effect $Y_i(1)-Y_i(0)$ cannot be observed
- It can still be expressed through potential outcomes. The potential outcomes that are not observed are called counterfactuals. Counterfactual and factual are relative notions: only once an outcome has been observed can it be called factual, and the others counterfactual

## 2.3 Getting Around the Fundamental Problem

### 2.3.1 Average Treatment Effects and Missing Data Interpretation

#### Average Treatment Effect (ATE)

$$\tau \triangleq \mathbb{E}[Y_i(1)-Y_i(0)]=\mathbb{E}[Y(1)-Y(0)]$$

#### Associational Difference
- Expression:
$$\mathbb{E}[Y|T=1]-\mathbb{E}[Y|T=0]$$
- Because of confounders, it is not equal to the ATE

### 2.3.2 Ignorability and Exchangeability

How can the associational difference be made equal to the ATE? Assign treatments to subjects at random, removing the influence of the subjects' own treatment choices.

```{admonition} Assumption 2.1 (Ignorability / Exchangeability)
:class: note

$$(Y(1),Y(0))\amalg T$$
```

This assumption is the key to causal inference; it converts the ATE into an associational difference:

$$\begin{split}\mathbb{E}[Y(1)]-\mathbb{E}[Y(0)]&=\mathbb{E}[Y(1)|T=1]-\mathbb{E}[Y(0)|T=0]\\&=\mathbb{E}[Y|T=1]-\mathbb{E}[Y|T=0]\end{split}$$

Another viewpoint is exchangeability: if the samples in the treatment and control groups were swapped, the observed outcomes would be the same,

$$\mathbb{E}[Y(1)|T=0]=\mathbb{E}[Y(1)|T=1]$$

$$\mathbb{E}[Y(0)|T=1]=\mathbb{E}[Y(0)|T=0]$$

which implies

$$\mathbb{E}[Y(1)|T=t]=\mathbb{E}[Y(1)]$$

$$\mathbb{E}[Y(0)|T=t]=\mathbb{E}[Y(0)]$$

```{admonition} Definition 2.1 (Identifiability)
:class: warning

A causal quantity ($e.g.\ \mathbb{E}[Y(t)]$) is identifiable if we can compute it from a purely statistical quantity ($e.g.\ \mathbb{E}[Y|t]$).
```

### 2.3.3 Conditional Exchangeability and Unconfoundedness

```{admonition} Assumption 2.2 (Conditional Exchangeability / Unconfoundedness)
:class: note

$$(Y(1),Y(0))\amalg T|X$$
```

The non-causal association between $T$ and $Y$ flows through $T \leftarrow X \rightarrow Y$, shown as the red dotted line in Figure 2.3. Conditioning on $X$ removes this non-causal association, as shown in Figure 2.4; this is called conditional exchangeability.

With the conditional exchangeability assumption, we obtain the causal effect within levels of $X$,

\begin{align}\mathbb{E}[Y(1)-Y(0)|X]&=\mathbb{E}[Y(1)|X]-\mathbb{E}[Y(0)|X]\\&=\mathbb{E}[Y(1)|T=1,X]-\mathbb{E}[Y(0)|T=0,X]\\&=\mathbb{E}[Y|T=1,X]-\mathbb{E}[Y|T=0,X] \end{align}

Under the conditional exchangeability assumption, marginalizing over $X$ yields the marginal effect,

\begin{align}\mathbb{E}[Y(1)-Y(0)]&=\mathbb{E}_X\mathbb{E}[Y(1)-Y(0)|X]\\&=\mathbb{E}_X[\mathbb{E}[Y|T=1,X]-\mathbb{E}[Y|T=0,X]]\end{align}

```{admonition} Theorem 2.1 (Adjustment Formula)
:class: important

Given the assumptions of unconfoundedness, positivity, consistency, and no interference, we can identify the average treatment effect:

$$\mathbb{E}[Y(1) - Y(0)] = \mathbb{E}_X[\mathbb{E}[Y|T=1,X]-\mathbb{E}[Y|T=0,X]]$$
```

### 2.3.4 Positivity/Overlap and Extrapolation

```{admonition} Assumption 2.3 (Positivity / Overlap / Common Support)
:class: note

For all values of covariates $x$ present in the population of interest (i.e.
$x$ such that $P(X=x)>0$),

$$0<P(T=1|X=x)<1$$
```

#### positivity/overlap/common support

After expanding with Bayes' rule, the denominators must not be zero:

\begin{align}\mathbb{E}[Y(1)-Y(0)]&=\mathbb{E}_X[\mathbb{E}[Y|T=1,X]-\mathbb{E}[Y|T=0,X]]\\&=\sum_x P(X=x)\left(\sum_y y P(Y=y|T=1,X=x)-\sum_y y P(Y=y|T=0,X=x)\right)\\&=\sum_x P(X=x)\left(\sum_y y\frac{P(Y=y,T=1,X=x)}{P(T=1|X=x)P(X=x)}-\sum_y y\frac{P(Y=y,T=0,X=x)}{P(T=0|X=x)P(X=x)}\right)\end{align}

If some subgroup of the data is entirely treated or entirely control, its causal effect cannot be estimated.

#### The Positivity-Unconfoundedness Tradeoff

Conditioning on more covariates increases the chance that unconfoundedness holds, but at the same time increases the chance of violating positivity.

#### Extrapolation

Violating the positivity assumption forces heavier reliance on the model and often leads to rather poor performance. Many causal estimation models are trained on data $(t,x,y)$ and are forced to extrapolate in regions where $P(T=1,X=x)=0$ or $P(T=0,X=x)=0$ when they replace the conditional expectations in the adjustment formula (Theorem 2.1).

### 2.3.5 No interference, Consistency, and SUTVA

```{admonition} Assumption 2.4 (No Interference)
:class: note

$$Y_i(t_1,...,t_{i-1},t_i,t_{i+1},...,t_n)=Y_i(t_i)$$
```

That is, the outcome is unaffected by other people's treatments. This assumption is easily broken: for example, if the treatment is "getting a dog" and the outcome is how happy I am, the outcome can easily depend on whether my friends get dogs, because we would walk them together.

```{admonition} Assumption 2.5 (Consistency)
:class: note

If the treatment is $T$, then the observed outcome $Y$ is the potential outcome under treatment $T$. Formally,

$$T=t \Longrightarrow Y=Y(t)$$
```

#### We could write this equivalently as follows:

$$Y=Y(T)$$

The observed outcome $Y$ of a unit whose treatment is $T$ is exactly the potential outcome under treatment $T$.

Counterexample: suppose the treatment is defined as getting a dog vs. not getting a dog. Getting a puppy would make me happy, because I want an energetic companion, but getting an old, low-energy dog would not.

So consistency embeds the assumption that a single treatment has no multiple versions.

#### SUTVA (stable unit-treatment value assumption)

Individual $i$'s outcome is a simple function of individual $i$'s treatment. SUTVA is the combination of consistency and no interference.

### 2.3.6 Tying It All Together

## 2.4 Fancy Statistics Terminology Defancified

- estimand: the quantity we want to estimate, e.g. $\mathbb{E}_X[\mathbb{E}[Y|T=1,X]-\mathbb{E}[Y|T=0,X]]$
- estimate: an approximation of the estimand, obtained from data
- estimation: the process of using data and the estimand to obtain an estimate
- model-assisted estimators: estimators that lean on machine learning models for the estimation

## 2.5 A Complete Example with Estimation

```python
"""
Estimating the causal effect of sodium on blood pressure in a simulated example adapted from Luque-Fernandez et al.
(2018): https://academic.oup.com/ije/article/48/2/640/5248195 """ import numpy as np import pandas as pd from sklearn.linear_model import LinearRegression def generate_data(n=1000, seed=0, beta1=1.05, alpha1=0.4, alpha2=0.3, binary_treatment=True, binary_cutoff=3.5): np.random.seed(seed) age = np.random.normal(65, 5, n) sodium = age / 18 + np.random.normal(size=n) if binary_treatment: if binary_cutoff is None: binary_cutoff = sodium.mean() sodium = (sodium > binary_cutoff).astype(int) blood_pressure = beta1 * sodium + 2 * age + np.random.normal(size=n) proteinuria = alpha1 * sodium + alpha2 * blood_pressure + np.random.normal(size=n) hypertension = (blood_pressure >= 140).astype(int) # not used, but could be used for binary outcomes return pd.DataFrame({'blood_pressure': blood_pressure, 'sodium': sodium, 'age': age, 'proteinuria': proteinuria}) def estimate_causal_effect(Xt, y, model=LinearRegression(), treatment_idx=0, regression_coef=False): model.fit(Xt, y) if regression_coef: return model.coef_[treatment_idx] else: Xt1 = pd.DataFrame.copy(Xt) Xt1[Xt.columns[treatment_idx]] = 1 Xt0 = pd.DataFrame.copy(Xt) Xt0[Xt.columns[treatment_idx]] = 0 return (model.predict(Xt1) - model.predict(Xt0)).mean() binary_t_df = generate_data(beta1=1.05, alpha1=.4, alpha2=.3, binary_treatment=True, n=10000000) continuous_t_df = generate_data(beta1=1.05, alpha1=.4, alpha2=.3, binary_treatment=False, n=10000000) ate_est_naive = None ate_est_adjust_all = None ate_est_adjust_age = None for df, name in zip([binary_t_df, continuous_t_df], ['Binary Treatment Data', 'Continuous Treatment Data']): print() print('### {} ###'.format(name)) print() # Adjustment formula estimates ate_est_naive = estimate_causal_effect(df[['sodium']], df['blood_pressure'], treatment_idx=0) ate_est_adjust_all = estimate_causal_effect(df[['sodium', 'age', 'proteinuria']], df['blood_pressure'], treatment_idx=0) ate_est_adjust_age = estimate_causal_effect(df[['sodium', 'age']], df['blood_pressure']) print('# Adjustment Formula Estimates #') print('Naive ATE estimate:\t\t\t\t\t\t\t', ate_est_naive) print('ATE estimate adjusting for all covariates:\t', ate_est_adjust_all) print('ATE estimate adjusting for age:\t\t\t\t', ate_est_adjust_age) print() # Linear regression coefficient estimates ate_est_naive = estimate_causal_effect(df[['sodium']], df['blood_pressure'], treatment_idx=0, regression_coef=True) ate_est_adjust_all = estimate_causal_effect(df[['sodium', 'age', 'proteinuria']], df['blood_pressure'], treatment_idx=0, regression_coef=True) ate_est_adjust_age = estimate_causal_effect(df[['sodium', 'age']], df['blood_pressure'], regression_coef=True) print('# Regression Coefficient Estimates #') print('Naive ATE estimate:\t\t\t\t\t\t\t', ate_est_naive) print('ATE estimate adjusting for all covariates:\t', ate_est_adjust_all) print('ATE estimate adjusting for age:\t\t\t\t', ate_est_adjust_age) print() ``` ### Binary Treatment Data ### # Adjustment Formula Estimates # Naive ATE estimate: 5.328501680864975 ATE estimate adjusting for all covariates: 0.8537946431496021 ATE estimate adjusting for age: 1.0502124539714488 # Regression Coefficient Estimates # Naive ATE estimate: 5.328501680864978 ATE estimate adjusting for all covariates: 0.8537946431495851 ATE estimate adjusting for age: 1.0502124539714823 ### Continuous Treatment Data ### # Adjustment Formula Estimates # Naive ATE estimate: 3.628378195978172 ATE estimate adjusting for all covariates: 0.8532920319407821 ATE estimate adjusting for age: 1.0497716562238169 # Regression Coefficient 
Estimates # Naive ATE estimate: 3.6283781959780943 ATE estimate adjusting for all covariates: 0.8532920319407795 ATE estimate adjusting for age: 1.0497716562238382
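For comparison with the regression-based estimators above, here is a hedged sketch of a purely nonparametric adjustment-formula estimate for the binary-treatment data. Binning `age` into 50 bins is an arbitrary choice made for this illustration; with fine enough bins and the correct adjustment set (age only), the result should land near the age-adjusted estimate of about 1.05:

```python
# Illustrative sketch (new code): nonparametric adjustment formula over binned age.
import pandas as pd

df = binary_t_df.copy()
df['age_bin'] = pd.cut(df['age'], bins=50)

ate = 0.0
for _, group in df.groupby('age_bin'):
    treated = group.loc[group['sodium'] == 1, 'blood_pressure']
    control = group.loc[group['sodium'] == 0, 'blood_pressure']
    if len(treated) == 0 or len(control) == 0:
        continue  # crude handling of positivity violations in sparse bins
    ate += (len(group) / len(df)) * (treated.mean() - control.mean())
print('Nonparametric ATE estimate:', ate)
```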
91a8ac3603f0873ca2f80aee7b0049a54f7bc89c
14,673
ipynb
Jupyter Notebook
docs/causal_inference/introduction_to_causal_inference/ch2.ipynb
cancermqiao/CancerMBook
bd26c0e3e1f76f66b75aacf75b3cb8602715e803
[ "Apache-2.0" ]
null
null
null
docs/causal_inference/introduction_to_causal_inference/ch2.ipynb
cancermqiao/CancerMBook
bd26c0e3e1f76f66b75aacf75b3cb8602715e803
[ "Apache-2.0" ]
null
null
null
docs/causal_inference/introduction_to_causal_inference/ch2.ipynb
cancermqiao/CancerMBook
bd26c0e3e1f76f66b75aacf75b3cb8602715e803
[ "Apache-2.0" ]
null
null
null
34.606132
325
0.555442
true
3,498
Qwen/Qwen-72B
1. YES 2. YES
0.699254
0.779993
0.545413
__label__eng_Latn
0.298027
0.105508
### Nomenclature: $d$ means a derivative ALONG the saturation line, $\partial$ means a partial derivative AT the saturation line (or anywhere in the single phase region). ### References: Krafcik and Velasco, DOI 10.1119/1.4858403 Thorade and Saadat, DOI 10.1007/s12665-013-2394-z https://en.wikipedia.org/wiki/Product_rule https://en.wikipedia.org/wiki/Quotient_rule https://en.wikipedia.org/wiki/Triple_product_rule ### Clausius-Clapeyron Clausius-Clapeyron p/T \begin{equation} \frac{dp}{dT} = \frac{s''-s'}{v''-v'} \end{equation} derivative of Clausius-Clapeyron \begin{equation} \begin{split} \frac{d^2 p}{d T^2} &= \frac{\left(\frac{ds''}{dT}-\frac{ds'}{dT}\right) (v''-v')}{(v''-v')^2} - \frac{(s''-s') \left(\frac{dv''}{dT}-\frac{dv'}{dT}\right)}{(v''-v')^2} \\ &= \frac{\left(\frac{ds''}{dT}-\frac{ds'}{dT}\right)}{(v''-v')} - \frac{dp}{dT} \frac{\left(\frac{dv''}{dT}-\frac{dv'}{dT}\right)}{(v''-v')} \end{split} \end{equation} Clausius-Clapeyron T/p \begin{equation} \frac{dT}{dp} = \frac{v''-v'}{s''-s'} \end{equation} derivative of Clausius-Clapeyron \begin{equation} \begin{split} \frac{d^2 T}{d p^2} &= \frac{\left(\frac{dv''}{dp}-\frac{dv'}{dp}\right) (s''-s')}{(s''-s')^2} - \frac{(v''-v') \left(\frac{ds''}{dp}-\frac{ds'}{dp}\right)}{(s''-s')^2} \\ &= \frac{\left(\frac{dv''}{dp}-\frac{dv'}{dp}\right)}{(s''-s')} - \frac{dT}{dp} \frac{\left(\frac{ds''}{dp}-\frac{ds'}{dp}\right)}{(s''-s')} \end{split} \end{equation} where \begin{equation} \begin{split} \frac{ds}{dT} &= \left(\frac{\partial s}{\partial T}\right)_p + \left(\frac{\partial s}{\partial p}\right)_T \frac{dp}{dT}\\ \frac{dv}{dT} &= \left(\frac{\partial v}{\partial T}\right)_p + \left(\frac{\partial v}{\partial p}\right)_T \frac{dp}{dT}\\ \frac{dv}{dp} &= \left(\frac{\partial v}{\partial p}\right)_T + \left(\frac{\partial v}{\partial T}\right)_p \frac{dT}{dp}\\ \frac{ds}{dp} &= \left(\frac{\partial s}{\partial p}\right)_T + \left(\frac{\partial s}{\partial T}\right)_p \frac{dT}{dp} \end{split} \end{equation} ### Temporary Names Introduce temporary names for some of the partial derivatives wrt $p$ and $T$: \begin{equation} \begin{split} A &= \left(\frac{\partial \rho}{\partial T}\right)_p \\ B &= \left(\frac{\partial \rho}{\partial p}\right)_T \\ C &= \left(\frac{\partial s}{\partial T}\right)_p \\ E &= \left(\frac{\partial s}{\partial p}\right)_T \\ G &= \left(\frac{\partial h}{\partial T}\right)_p \\ K &= \left(\frac{\partial h}{\partial p}\right)_T \end{split} \end{equation} Introduce temporary names for some of the partial derivatives wrt $\rho$ and $T$: \begin{equation} \begin{split} W &= \left(\frac{\partial p}{\partial T}\right)_{\rho} \\ X &= \left(\frac{\partial p}{\partial \rho}\right)_T \\ Y &= \left(\frac{\partial s}{\partial T}\right)_{\rho} \\ Z &= \left(\frac{\partial s}{\partial \rho}\right)_T \end{split} \end{equation} Introduce temporary names for some of the derivatives along the saturation line: \begin{equation} \begin{split} M &= \frac{d \rho}{d h} = \frac{{d \rho}/{dT}}{{dh}/{dT}} \\ N &= \frac{d s}{d h} = \frac{{ds}/{dT}}{{dh}/{dT}} \end{split} \end{equation} Now the rest is not too hard, but with intermediate steps it is quite long. 
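Before working through the examples, here is a small symbolic sanity check (new code, not part of the original notebook) of the expanded Clausius-Clapeyron derivative above, with `s1, s2, v1, v2` standing for $s'$, $s''$, $v'$, $v''$ as generic functions of $T$:

```python
import sympy as sp

T = sp.symbols('T')
s1, s2, v1, v2 = (sp.Function(name)(T) for name in ('s1', 's2', 'v1', 'v2'))

dpdT = (s2 - s1) / (v2 - v1)                       # Clausius-Clapeyron
expanded = ((s2.diff(T) - s1.diff(T)) / (v2 - v1)
            - dpdT * (v2.diff(T) - v1.diff(T)) / (v2 - v1))

print(sp.simplify(dpdT.diff(T) - expanded))        # 0: both forms agree
```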
### First example: $d^2 \rho / dT^2$

The corresponding first derivative can be written in two ways:

\begin{equation}
\begin{split}
\frac{d \rho}{dT} &= \left(\frac{\partial \rho}{\partial T}\right)_p + \left(\frac{\partial \rho}{\partial p}\right)_T \frac{d p}{dT} = A + B \frac{dp}{dT}\\
&= \left(\frac{dp}{dT} - \left(\frac{\partial p}{\partial T}\right)_{\rho}\right) / \left(\frac{\partial p}{\partial \rho}\right)_T = \left(\frac{dp}{dT} - W\right)/X
\end{split}
\end{equation}

Both ways can be used as a starting point for the second derivative:

\begin{equation}
\begin{split}
\frac{d^2 \rho}{dT^2} &= \frac{dA}{dT} + \frac{dB}{dT}\frac{dp}{dT} + B\frac{d^2p}{dT^2}\\
&= \frac{\left(\frac{d^2p}{dT^2} - \frac{dW}{dT}\right)X - \left(\frac{dp}{dT}-W\right)\frac{dX}{dT}}{X^2}
\end{split}
\end{equation}

The first variant seems nicer, because the first derivative does not have a fraction in it. The derivatives along the saturation line that appeared can be written as (again, using two different approaches):

\begin{equation}
\begin{split}
\frac{dA}{dT} &= \left(\frac{\partial A}{\partial T}\right)_p + \left(\frac{\partial A}{\partial p}\right)_T \frac{d p}{dT}\\
\frac{dB}{dT} &= \left(\frac{\partial B}{\partial T}\right)_p + \left(\frac{\partial B}{\partial p}\right)_T \frac{d p}{dT}\\
\frac{dW}{dT} &= \left(\frac{\partial W}{\partial T}\right)_{\rho} + \left(\frac{\partial W}{\partial \rho}\right)_T \frac{d\rho}{dT}\\
\frac{dX}{dT} &= \left(\frac{\partial X}{\partial T}\right)_{\rho} + \left(\frac{\partial X}{\partial \rho}\right)_T \frac{d\rho}{dT}\\
\end{split}
\end{equation}

and then the partial derivatives of A, B, W and X wrt p and T can be written as:

\begin{equation}
\begin{split}
\left(\frac{\partial A}{\partial T}\right)_p &= \left(\frac{\partial^2 \rho}{\partial T^2}\right)_p \\
\left(\frac{\partial A}{\partial p}\right)_T &= \left(\frac{\partial^2 \rho}{\partial p \partial T}\right) \\
\left(\frac{\partial B}{\partial T}\right)_p &= \left(\frac{\partial^2 \rho}{\partial T \partial p}\right) = \left(\frac{\partial A}{\partial p}\right)_T \\
\left(\frac{\partial B}{\partial p}\right)_T &= \left(\frac{\partial^2 \rho}{\partial p^2}\right)_T \\
\left(\frac{\partial W}{\partial T}\right)_{\rho} &= \left(\frac{\partial^2 p}{\partial T^2}\right)_{\rho} \\
\left(\frac{\partial W}{\partial \rho}\right)_T &= \left(\frac{\partial^2 p}{\partial \rho \partial T}\right) \\
\left(\frac{\partial X}{\partial T}\right)_{\rho} &= \left(\frac{\partial^2 p}{\partial T \partial \rho}\right) = \left(\frac{\partial W}{\partial \rho}\right)_T \\
\left(\frac{\partial X}{\partial \rho}\right)_T &= \left(\frac{\partial^2 p}{\partial {\rho}^2}\right)_T
\end{split}
\end{equation}

So this time, the second way perhaps looks nicer, because it uses partial derivatives wrt density and temperature. On the other hand, all second partial derivatives are already implemented in CoolProp, and the pT derivatives are internally rewritten as dT derivatives.

### Second example: $d^2s/dT^2$

The corresponding first derivative can be written in two ways:

\begin{equation}
\begin{split}
\frac{ds}{dT} &= \left(\frac{\partial s}{\partial T}\right)_p + \left(\frac{\partial s}{\partial p}\right)_T \frac{dp}{dT} = C + E\frac{dp}{dT}\\
&= \left(\frac{\partial s}{\partial T}\right)_{\rho} + \left(\frac{\partial s}{\partial \rho}\right)_T \frac{d \rho}{dT} = Y + Z\frac{d \rho}{dT}
\end{split}
\end{equation}

Both can be used as a starting point for the second derivatives.
\begin{equation}
\begin{split}
\frac{d^2 s}{dT^2} &= \frac{dC}{dT} + \frac{dE}{dT}\frac{dp}{dT} + E\frac{d^2p}{dT^2}\\
&= \frac{dY}{dT} + \frac{dZ}{dT}\frac{d\rho}{dT} + Z\frac{d^2 \rho}{dT^2}
\end{split}
\end{equation}

Now, which one is nicer to work with? Not sure.

Just as shown above, the derivatives along the saturation line that appeared can be written as (again, using two different approaches):

\begin{equation}
\begin{split}
\frac{dC}{dT} &= \left(\frac{\partial C}{\partial T}\right)_p + \left(\frac{\partial C}{\partial p}\right)_T \frac{d p}{dT}\\
\frac{dE}{dT} &= \left(\frac{\partial E}{\partial T}\right)_p + \left(\frac{\partial E}{\partial p}\right)_T \frac{d p}{dT}\\
\frac{dY}{dT} &= \left(\frac{\partial Y}{\partial T}\right)_{\rho} + \left(\frac{\partial Y}{\partial \rho}\right)_T \frac{d\rho}{dT}\\
\frac{dZ}{dT} &= \left(\frac{\partial Z}{\partial T}\right)_{\rho} + \left(\frac{\partial Z}{\partial \rho}\right)_T \frac{d\rho}{dT}\\
\end{split}
\end{equation}

Rewriting the partial derivatives works just as shown above.

### Third example: $d^2{\rho}/dp^2$

The corresponding first derivative can be written in many ways; here is one:

\begin{equation}
\frac{d \rho}{dp} = \left(\frac{\partial \rho}{\partial p}\right)_T + \left(\frac{\partial \rho}{\partial T}\right)_p \frac{dT}{dp} = B + A\frac{dT}{dp}
\end{equation}

The second derivative is then:

\begin{equation}
\frac{d^2 \rho}{dp^2} = \frac{dB}{dp} + \frac{dA}{dp}\frac{dT}{dp} + A\frac{d^2T}{dp^2}
\end{equation}

The derivatives wrt $p$ along the saturation line can be written as:

\begin{equation}
\begin{split}
\frac{dB}{dp} &= \left(\frac{\partial B}{\partial p}\right)_T + \left(\frac{\partial B}{\partial T}\right)_p \frac{d T}{dp}\\
\frac{dA}{dp} &= \left(\frac{\partial A}{\partial p}\right)_T + \left(\frac{\partial A}{\partial T}\right)_p \frac{d T}{dp}
\end{split}
\end{equation}

Rewriting the partial derivatives works just as shown above.

### Fourth example: $d^2s/dp^2$

The corresponding first derivative can be written in many ways; here is one:

\begin{equation}
\frac{ds}{dp} = \left(\frac{\partial s}{\partial p}\right)_T + \left(\frac{\partial s}{\partial T}\right)_p \frac{dT}{dp} = E + C\frac{dT}{dp}
\end{equation}

The second derivative is then:

\begin{equation}
\frac{d^2 s}{dp^2} = \frac{dE}{dp} + \frac{dC}{dp}\frac{dT}{dp} + C\frac{d^2T}{dp^2}
\end{equation}

Rewriting the derivatives along the sat line and the partial derivatives works just as shown above.

### Fifth example: $d^2h/dT^2$

First:

\begin{equation}
\frac{dh}{dT} = \left(\frac{\partial h}{\partial T}\right)_p + \left(\frac{\partial h}{\partial p}\right)_T \frac{dp}{dT} = G + K\frac{dp}{dT}
\end{equation}

Second:

\begin{equation}
\frac{d^2 h}{dT^2} = \frac{dG}{dT} + \frac{dK}{dT}\frac{dp}{dT} + K\frac{d^2p}{dT^2}
\end{equation}

### Sixth example: $d^2{\rho}/dh^2$

This is a bit different, but also not difficult.
First: \begin{equation} M = \frac{d\rho}{dh} = \frac{{d\rho}/{dT}}{{dh}/{dT}} \end{equation} Intermediate: \begin{equation} \frac{dM}{dT} = \frac{\frac{d^2 \rho}{dT^2}\frac{dh}{dT}-\frac{d \rho}{dT}\frac{d^2h}{dT^2}}{\left(\frac{dh}{dT}\right)^2} \end{equation} Second: \begin{equation} \frac{d^2 \rho}{dh^2} = \frac{dM}{dh} = \frac{{dM}/{dT}}{{dh}/{dT}} = \frac{\frac{d^2 \rho}{dT^2}\frac{dh}{dT}-\frac{d \rho}{dT}\frac{d^2h}{dT^2}}{\left(\frac{dh}{dT}\right)^3} \end{equation} ### Seventh example: $d^2s/dh^2$ First: \begin{equation} N = \frac{ds}{dh} = \frac{{ds}/{dT}}{{dh}/{dT}} \end{equation} Intermediate: \begin{equation} \frac{dN}{dT} = \frac{\frac{d^2s}{dT^2}\frac{dh}{dT}-\frac{ds}{dT}\frac{d^2h}{dT^2}}{\left(\frac{dh}{dT}\right)^2} \end{equation} Second: \begin{equation} \frac{d^2s}{dh^2} = \frac{dN}{dh} = \frac{{dN}/{dT}}{{dh}/{dT}} = \frac{\frac{d^2s}{dT^2}\frac{dh}{dT}-\frac{ds}{dT}\frac{d^2h}{dT^2}}{\left(\frac{dh}{dT}\right)^3} \end{equation} ### Eighth example: $d^2s/(dh dp)$ First: \begin{equation} N = \frac{ds}{dh} = \frac{{ds}/{dT}}{{dh}/{dT}} \end{equation} Intermediate: \begin{equation} \frac{dN}{dT} = \frac{\frac{d^2s}{dT^2}\frac{dh}{dT}-\frac{ds}{dT}\frac{d^2h}{dT^2}}{\left(\frac{dh}{dT}\right)^2} \end{equation} Second: \begin{equation} \frac{d^2s}{dh dp} = \frac{dN}{dp} = \frac{{dN}/{dT}}{{dp}/{dT}} = \frac{\frac{d^2s}{dT^2}\frac{dh}{dT}-\frac{ds}{dT}\frac{d^2h}{dT^2}}{\left(\frac{dh}{dT}\right)^2 \left(\frac{dp}{dT}\right)} \end{equation} ``` # load some bits and pieces import numpy as np import matplotlib import matplotlib.pyplot as plt import CoolProp as CP from CoolProp.CoolProp import PropsSI # Check: CoolProp version print(CP.__version__) print(CP.__gitrevision__) # Constants eps = 1e-9 kilo = 1e3 Mega = 1e6 golden = (1 + 5 ** 0.5) / 2 nPoints = 1000 width = 12.5 # helper function: get slope from two sorted arrays def numSlopeAr(xAr, yAr): deltaX = np.ones(len(xAr)) deltaY = np.ones(len(yAr)) slopeAr = np.ones(len(yAr)) for index in range(1,len(xAr)-1): deltaX[index] = xAr[index-1] - xAr[index+1] deltaY[index] = yAr[index-1] - yAr[index+1] slopeAr[index] = deltaY[index]/deltaX[index] # inaccurate, but who cares? slopeAr[0]=slopeAr[1] slopeAr[-1]=slopeAr[-2] return slopeAr # helper function: draw tangent from array def drawTangent(xAr, yAr, slopeAr, index: int): xVal = xAr[index] yVal = yAr[index] slopeVal = slopeAr[index] # line length as percentage of length of x axis xRange = 0.15*abs(max(xAr)-min(xAr)) xRangeAr = np.arange(xVal-xRange, xVal+xRange) tang = yVal + slopeVal*(xRangeAr-xVal) plt.plot(xVal,yVal,'om',xRangeAr,tang,'-k') ``` 5.1.1 12f006445f234e572e64cc820146ab5d2c2a9d10 ``` # All calculations happen in this cell # All following cells are used for plotting only # Run this cell twice to get rid of error message # Set FluidName FluidName = 'Propane' # Triple and critical data T_crt = PropsSI('TCRIT',FluidName) T_trp = PropsSI('TTRIPLE',FluidName) p_crt = PropsSI('PCRIT',FluidName) p_trp = PropsSI('PTRIPLE',FluidName) d_crt = PropsSI('RHOCRIT',FluidName) d_trp_liq = PropsSI('D','T',T_trp,'Q',0,FluidName) d_trp_vap = PropsSI('D','T',T_trp,'Q',1,FluidName) print("T_crt = " + str(T_crt)) print("T_trp = " + str(T_trp)) # Properties at saturation, liq and vap # All the way to the crt point or keep some distance? 
```
# All calculations happen in this cell
# All following cells are used for plotting only
# Run this cell twice to get rid of error message

# Set FluidName
FluidName = 'Propane'

# Triple and critical data
T_crt = PropsSI('TCRIT',FluidName)
T_trp = PropsSI('TTRIPLE',FluidName)
p_crt = PropsSI('PCRIT',FluidName)
p_trp = PropsSI('PTRIPLE',FluidName)
d_crt = PropsSI('RHOCRIT',FluidName)
d_trp_liq = PropsSI('D','T',T_trp,'Q',0,FluidName)
d_trp_vap = PropsSI('D','T',T_trp,'Q',1,FluidName)
print("T_crt = " + str(T_crt))
print("T_trp = " + str(T_trp))

# Properties at saturation, liq and vap
# All the way to the crt point or keep some distance?
T_sat = np.linspace(T_trp, T_crt, num=nPoints)
p_sat = CP.CoolProp.PropsSI('P','T',T_sat,'Q',0,FluidName)
d_sat_liq = CP.CoolProp.PropsSI('D','T',T_sat,'Q',0,FluidName)
d_sat_vap = CP.CoolProp.PropsSI('D','T',T_sat,'Q',1,FluidName)
v_sat_liq = 1/d_sat_liq
v_sat_vap = 1/d_sat_vap
s_sat_liq = CP.CoolProp.PropsSI('S','T',T_sat,'Q',0,FluidName)
s_sat_vap = CP.CoolProp.PropsSI('S','T',T_sat,'Q',1,FluidName)
h_sat_liq = CP.CoolProp.PropsSI('H','T',T_sat,'Q',0,FluidName)
h_sat_vap = CP.CoolProp.PropsSI('H','T',T_sat,'Q',1,FluidName)
u_sat_liq = CP.CoolProp.PropsSI('U','T',T_sat,'Q',0,FluidName)
u_sat_vap = CP.CoolProp.PropsSI('U','T',T_sat,'Q',1,FluidName)

# Clausius-Clapeyron
# at crt point, vap=liq, so D becomes zero
Ds_vl = (s_sat_vap-s_sat_liq)
Dv_vl = (v_sat_vap-v_sat_liq)
dp_dT_q = Ds_vl/Dv_vl
dT_dp_q = Dv_vl/Ds_vl  # = 1/dp_dT_q
dp_dT_qn = numSlopeAr(T_sat, p_sat)
dT_dp_qn = numSlopeAr(p_sat, T_sat)

# derivs wrt pT, single-phase AT the sat liq line
dd_dp_Tl = CP.CoolProp.PropsSI('d(Dmass)/d(P)|T','D',d_sat_liq,'T',T_sat,FluidName)
dd_dT_pl = CP.CoolProp.PropsSI('d(Dmass)/d(T)|P','D',d_sat_liq,'T',T_sat,FluidName)
ds_dp_Tl = CP.CoolProp.PropsSI('d(Smass)/d(P)|T','D',d_sat_liq,'T',T_sat,FluidName)
ds_dT_pl = CP.CoolProp.PropsSI('d(Smass)/d(T)|P','D',d_sat_liq,'T',T_sat,FluidName)
dh_dp_Tl = CP.CoolProp.PropsSI('d(Hmass)/d(P)|T','D',d_sat_liq,'T',T_sat,FluidName)
dh_dT_pl = CP.CoolProp.PropsSI('d(Hmass)/d(T)|P','D',d_sat_liq,'T',T_sat,FluidName)
dv_dp_Tl = -dd_dp_Tl/d_sat_liq**2
dv_dT_pl = -dd_dT_pl/d_sat_liq**2

# derivs wrt pT, single-phase AT the sat vap line
dd_dp_Tv = CP.CoolProp.PropsSI('d(Dmass)/d(P)|T','D',d_sat_vap,'T',T_sat,FluidName)
dd_dT_pv = CP.CoolProp.PropsSI('d(Dmass)/d(T)|P','D',d_sat_vap,'T',T_sat,FluidName)
ds_dp_Tv = CP.CoolProp.PropsSI('d(Smass)/d(P)|T','D',d_sat_vap,'T',T_sat,FluidName)
ds_dT_pv = CP.CoolProp.PropsSI('d(Smass)/d(T)|P','D',d_sat_vap,'T',T_sat,FluidName)
dh_dp_Tv = CP.CoolProp.PropsSI('d(Hmass)/d(P)|T','D',d_sat_vap,'T',T_sat,FluidName)
dh_dT_pv = CP.CoolProp.PropsSI('d(Hmass)/d(T)|P','D',d_sat_vap,'T',T_sat,FluidName)
dv_dp_Tv = -dd_dp_Tv/d_sat_vap**2
dv_dT_pv = -dd_dT_pv/d_sat_vap**2

# derivs wrt dT, single-phase AT the sat liq line
dp_dd_Tl = CP.CoolProp.PropsSI('d(P)/d(Dmass)|T','D',d_sat_liq,'T',T_sat,FluidName)
dp_dT_dl = CP.CoolProp.PropsSI('d(P)/d(T)|Dmass','D',d_sat_liq,'T',T_sat,FluidName)
ds_dd_Tl = CP.CoolProp.PropsSI('d(Smass)/d(D)|T','D',d_sat_liq,'T',T_sat,FluidName)
ds_dT_dl = CP.CoolProp.PropsSI('d(Smass)/d(T)|D','D',d_sat_liq,'T',T_sat,FluidName)
dh_dd_Tl = CP.CoolProp.PropsSI('d(Hmass)/d(D)|T','D',d_sat_liq,'T',T_sat,FluidName)
dh_dT_dl = CP.CoolProp.PropsSI('d(Hmass)/d(T)|D','D',d_sat_liq,'T',T_sat,FluidName)
dp_dv_Tl = -d_sat_liq**2 * dp_dd_Tl
dp_dT_vl = dp_dT_dl

# derivs wrt dT, single-phase AT the sat vap line
dp_dd_Tv = CP.CoolProp.PropsSI('d(P)/d(Dmass)|T','D',d_sat_vap,'T',T_sat,FluidName)
dp_dT_dv = CP.CoolProp.PropsSI('d(P)/d(T)|Dmass','D',d_sat_vap,'T',T_sat,FluidName)
ds_dd_Tv = CP.CoolProp.PropsSI('d(Smass)/d(D)|T','D',d_sat_vap,'T',T_sat,FluidName)
ds_dT_dv = CP.CoolProp.PropsSI('d(Smass)/d(T)|D','D',d_sat_vap,'T',T_sat,FluidName)
dh_dd_Tv = CP.CoolProp.PropsSI('d(Hmass)/d(D)|T','D',d_sat_vap,'T',T_sat,FluidName)
dh_dT_dv = CP.CoolProp.PropsSI('d(Hmass)/d(T)|D','D',d_sat_vap,'T',T_sat,FluidName)
dp_dv_Tv = -d_sat_vap**2 * dp_dd_Tv
dp_dT_vv = dp_dT_dv

# two ways for derivs along the sat line: using derivs wrt dT or pT
# derivs wrt T ALONG the sat liq line (that is, at q=const=0)
dd_dT_ql = (dp_dT_q - dp_dT_dl)/dp_dd_Tl
dv_dT_ql = dv_dT_pl + dv_dp_Tl*dp_dT_q
ds_dT_ql = ds_dT_dl + ds_dd_Tl*dd_dT_ql
#ds_dT_ql = ds_dT_pl + ds_dp_Tl*dp_dT_q
dh_dT_ql = dh_dT_pl + dh_dp_Tl*dp_dT_q

# derivs wrt T ALONG the sat vap line (that is, at q=const=1)
dd_dT_qv = (dp_dT_q - dp_dT_dv)/dp_dd_Tv
dv_dT_qv = dv_dT_pv + dv_dp_Tv*dp_dT_q
ds_dT_qv = ds_dT_dv + ds_dd_Tv*dd_dT_qv
dh_dT_qv = dh_dT_pv + dh_dp_Tv*dp_dT_q

# derivs wrt p ALONG the sat liq line (that is, at q=const=0)
dd_dp_ql = dd_dp_Tl + dd_dT_pl*dT_dp_q
ds_dp_ql = ds_dp_Tl + ds_dT_pl*dT_dp_q
dv_dp_ql = dv_dp_Tl + dv_dT_pl*dT_dp_q

# derivs wrt p ALONG the sat vap line (that is, at q=const=1)
dd_dp_qv = dd_dp_Tv + dd_dT_pv*dT_dp_q
ds_dp_qv = ds_dp_Tv + ds_dT_pv*dT_dp_q
dv_dp_qv = dv_dp_Tv + dv_dT_pv*dT_dp_q

# derivs wrt h ALONG the sat line
dd_dh_ql = dd_dT_ql/dh_dT_ql
dd_dh_qv = dd_dT_qv/dh_dT_qv
ds_dh_ql = ds_dT_ql/dh_dT_ql
ds_dh_qv = ds_dT_qv/dh_dT_qv

# derivs of Clausius-Clapeyron
# d2p_dT2_q = ((ds_dT_qv-ds_dT_ql)*Dv_vl - Ds_vl*(dv_dT_qv-dv_dT_ql)) / Dv_vl**2
d2p_dT2_q = (ds_dT_qv-ds_dT_ql)/Dv_vl - dp_dT_q*(dv_dT_qv-dv_dT_ql)/Dv_vl
d2T_dp2_q = (dv_dp_qv-dv_dp_ql)/Ds_vl - dT_dp_q*(ds_dp_qv-ds_dp_ql)/Ds_vl
d2p_dT2_qn = numSlopeAr(T_sat, dp_dT_q)
d2T_dp2_qn = numSlopeAr(p_sat, dT_dp_q)

# second derivs AT the sat line
# AT the liq line
# wrt pT
d2d_dT2_pl = np.empty(len(T_sat))
d2d_dp2_Tl = np.empty(len(T_sat))
d2d_dpTl = np.empty(len(T_sat))
d2s_dT2_pl = np.empty(len(T_sat))
d2s_dp2_Tl = np.empty(len(T_sat))
d2s_dpTl = np.empty(len(T_sat))
d2h_dT2_pl = np.empty(len(T_sat))
d2h_dp2_Tl = np.empty(len(T_sat))
d2h_dpTl = np.empty(len(T_sat))
# wrt dT
d2s_dT2_dl = np.empty(len(T_sat))
d2s_dd2_Tl = np.empty(len(T_sat))
d2s_ddTl = np.empty(len(T_sat))
# AT the vap line
# wrt pT
d2d_dT2_pv = np.empty(len(T_sat))
d2d_dp2_Tv = np.empty(len(T_sat))
d2d_dpTv = np.empty(len(T_sat))
d2s_dT2_pv = np.empty(len(T_sat))
d2s_dp2_Tv = np.empty(len(T_sat))
d2s_dpTv = np.empty(len(T_sat))
d2h_dT2_pv = np.empty(len(T_sat))
d2h_dp2_Tv = np.empty(len(T_sat))
d2h_dpTv = np.empty(len(T_sat))
# wrt dT
d2s_dT2_dv = np.empty(len(T_sat))
d2s_dd2_Tv = np.empty(len(T_sat))
d2s_ddTv = np.empty(len(T_sat))

HEOS = CP.AbstractState("HEOS", FluidName)
for idx in range(0, len(T_sat)):
    # AT the liq line
    HEOS.update(CP.QT_INPUTS, 0, T_sat[idx])
    # wrt pT
    d2d_dT2_pl[idx] = HEOS.second_partial_deriv(CP.iDmass, CP.iT, CP.iP, CP.iT, CP.iP)
    d2d_dp2_Tl[idx] = HEOS.second_partial_deriv(CP.iDmass, CP.iP, CP.iT, CP.iP, CP.iT)
    d2d_dpTl[idx] = HEOS.second_partial_deriv(CP.iDmass, CP.iP, CP.iT, CP.iT, CP.iP)
    d2s_dT2_pl[idx] = HEOS.second_partial_deriv(CP.iSmass, CP.iT, CP.iP, CP.iT, CP.iP)
    d2s_dp2_Tl[idx] = HEOS.second_partial_deriv(CP.iSmass, CP.iP, CP.iT, CP.iP, CP.iT)
    d2s_dpTl[idx] = HEOS.second_partial_deriv(CP.iSmass, CP.iP, CP.iT, CP.iT, CP.iP)
    d2h_dT2_pl[idx] = HEOS.second_partial_deriv(CP.iHmass, CP.iT, CP.iP, CP.iT, CP.iP)
    d2h_dp2_Tl[idx] = HEOS.second_partial_deriv(CP.iHmass, CP.iP, CP.iT, CP.iP, CP.iT)
    d2h_dpTl[idx] = HEOS.second_partial_deriv(CP.iHmass, CP.iP, CP.iT, CP.iT, CP.iP)
    # wrt dT
    d2s_dT2_dl[idx] = HEOS.second_partial_deriv(CP.iSmass, CP.iT, CP.iDmass, CP.iT, CP.iDmass)
    d2s_dd2_Tl[idx] = HEOS.second_partial_deriv(CP.iSmass, CP.iDmass, CP.iT, CP.iDmass, CP.iT)
    d2s_ddTl[idx] = HEOS.second_partial_deriv(CP.iSmass, CP.iDmass, CP.iT, CP.iT, CP.iDmass)
    # AT the vap line
    HEOS.update(CP.QT_INPUTS, 1, T_sat[idx])
    # wrt pT
    d2d_dT2_pv[idx] = HEOS.second_partial_deriv(CP.iDmass, CP.iT, CP.iP, CP.iT, CP.iP)
    d2d_dp2_Tv[idx] = HEOS.second_partial_deriv(CP.iDmass, CP.iP, CP.iT, CP.iP, CP.iT)
    d2d_dpTv[idx] = HEOS.second_partial_deriv(CP.iDmass, CP.iP, CP.iT, CP.iT, CP.iP)
    d2s_dT2_pv[idx] = HEOS.second_partial_deriv(CP.iSmass, CP.iT, CP.iP, CP.iT, CP.iP)
    d2s_dp2_Tv[idx] = HEOS.second_partial_deriv(CP.iSmass, CP.iP, CP.iT, CP.iP, CP.iT)
    d2s_dpTv[idx] = HEOS.second_partial_deriv(CP.iSmass, CP.iP, CP.iT, CP.iT, CP.iP)
    d2h_dT2_pv[idx] = HEOS.second_partial_deriv(CP.iHmass, CP.iT, CP.iP, CP.iT, CP.iP)
    d2h_dp2_Tv[idx] = HEOS.second_partial_deriv(CP.iHmass, CP.iP, CP.iT, CP.iP, CP.iT)
    d2h_dpTv[idx] = HEOS.second_partial_deriv(CP.iHmass, CP.iP, CP.iT, CP.iT, CP.iP)
    # wrt dT
    d2s_dT2_dv[idx] = HEOS.second_partial_deriv(CP.iSmass, CP.iT, CP.iDmass, CP.iT, CP.iDmass)
    d2s_dd2_Tv[idx] = HEOS.second_partial_deriv(CP.iSmass, CP.iDmass, CP.iT, CP.iDmass, CP.iT)
    d2s_ddTv[idx] = HEOS.second_partial_deriv(CP.iSmass, CP.iDmass, CP.iT, CP.iT, CP.iDmass)

# calculate 2nd derivs ALONG the sat line (analytically and numerically)

# A=dd_dT_p and B=dd_dp_T
# liq side
dA_dT_ql = d2d_dT2_pl + d2d_dpTl*dp_dT_q
dB_dT_ql = d2d_dpTl + d2d_dp2_Tl*dp_dT_q
d2d_dT2_qla = dA_dT_ql + dB_dT_ql*dp_dT_q + dd_dp_Tl*d2p_dT2_q
d2d_dT2_qln = numSlopeAr(T_sat, dd_dT_ql)
# vap side
dA_dT_qv = d2d_dT2_pv + d2d_dpTv*dp_dT_q
dB_dT_qv = d2d_dpTv + d2d_dp2_Tv*dp_dT_q
d2d_dT2_qva = dA_dT_qv + dB_dT_qv*dp_dT_q + dd_dp_Tv*d2p_dT2_q
d2d_dT2_qvn = numSlopeAr(T_sat, dd_dT_qv)
#print(d2d_dT2_qla - d2d_dT2_qln)
#print(d2d_dT2_qva - d2d_dT2_qvn)

# Y=ds_dT_d and Z=ds_dd_T (the density-based form)
# liq side
dY_dT_ql = d2s_dT2_dl + d2s_ddTl*dd_dT_ql
dZ_dT_ql = d2s_ddTl + d2s_dd2_Tl*dd_dT_ql
d2s_dT2_qla = dY_dT_ql + dZ_dT_ql*dd_dT_ql + ds_dd_Tl*d2d_dT2_qla
d2s_dT2_qln = numSlopeAr(T_sat, ds_dT_ql)
# vap side
dY_dT_qv = d2s_dT2_dv + d2s_ddTv*dd_dT_qv
dZ_dT_qv = d2s_ddTv + d2s_dd2_Tv*dd_dT_qv
d2s_dT2_qva = dY_dT_qv + dZ_dT_qv*dd_dT_qv + ds_dd_Tv*d2d_dT2_qva
d2s_dT2_qvn = numSlopeAr(T_sat, ds_dT_qv)
#print(d2s_dT2_qla - d2s_dT2_qln)
#print(d2s_dT2_qva - d2s_dT2_qvn)

# B=dd_dp_T and A=dd_dT_p
# liq side
dB_dp_ql = d2d_dp2_Tl + d2d_dpTl*dT_dp_q
dA_dp_ql = d2d_dpTl + d2d_dT2_pl*dT_dp_q
d2d_dp2_qla = dB_dp_ql + dA_dp_ql*dT_dp_q + dd_dT_pl*d2T_dp2_q
d2d_dp2_qln = numSlopeAr(p_sat, dd_dp_ql)
# vap side
dB_dp_qv = d2d_dp2_Tv + d2d_dpTv*dT_dp_q
dA_dp_qv = d2d_dpTv + d2d_dT2_pv*dT_dp_q
d2d_dp2_qva = dB_dp_qv + dA_dp_qv*dT_dp_q + dd_dT_pv*d2T_dp2_q
d2d_dp2_qvn = numSlopeAr(p_sat, dd_dp_qv)
#print(d2d_dp2_qla - d2d_dp2_qln)
#print(d2d_dp2_qva - d2d_dp2_qvn)

# E=ds_dp_T and C=ds_dT_p
# liq side
dE_dp_ql = d2s_dp2_Tl + d2s_dpTl*dT_dp_q
dC_dp_ql = d2s_dpTl + d2s_dT2_pl*dT_dp_q
d2s_dp2_qla = dE_dp_ql + dC_dp_ql*dT_dp_q + ds_dT_pl*d2T_dp2_q
d2s_dp2_qln = numSlopeAr(p_sat, ds_dp_ql)
# vap side
dE_dp_qv = d2s_dp2_Tv + d2s_dpTv*dT_dp_q
dC_dp_qv = d2s_dpTv + d2s_dT2_pv*dT_dp_q
d2s_dp2_qva = dE_dp_qv + dC_dp_qv*dT_dp_q + ds_dT_pv*d2T_dp2_q
d2s_dp2_qvn = numSlopeAr(p_sat, ds_dp_qv)
#print(d2s_dp2_qla - d2s_dp2_qln)
#print(d2s_dp2_qva - d2s_dp2_qvn)

# G=dh_dT_p and K=dh_dp_T
# liq side
dG_dT_ql = d2h_dT2_pl + d2h_dpTl*dp_dT_q
dK_dT_ql = d2h_dpTl + d2h_dp2_Tl*dp_dT_q
d2h_dT2_qla = dG_dT_ql + dK_dT_ql*dp_dT_q + dh_dp_Tl*d2p_dT2_q
d2h_dT2_qln = numSlopeAr(T_sat, dh_dT_ql)
# vap side
dG_dT_qv = d2h_dT2_pv + d2h_dpTv*dp_dT_q
dK_dT_qv = d2h_dpTv + d2h_dp2_Tv*dp_dT_q
d2h_dT2_qva = dG_dT_qv + dK_dT_qv*dp_dT_q + dh_dp_Tv*d2p_dT2_q
d2h_dT2_qvn = numSlopeAr(T_sat, dh_dT_qv)
#print(d2h_dT2_qla - d2h_dT2_qln)
#print(d2h_dT2_qva - d2h_dT2_qvn)

# d2d/dh2 along the sat line
# liq side
d2d_dh2_qla = (d2d_dT2_qla*dh_dT_ql - dd_dT_ql*d2h_dT2_qla) / dh_dT_ql**3
d2d_dh2_qln = numSlopeAr(h_sat_liq, dd_dh_ql)
# vap side
d2d_dh2_qva = (d2d_dT2_qva*dh_dT_qv - dd_dT_qv*d2h_dT2_qva) / dh_dT_qv**3
d2d_dh2_qvn = numSlopeAr(h_sat_vap, dd_dh_qv)
#print(d2d_dh2_qla - d2d_dh2_qln)
#print(d2d_dh2_qva - d2d_dh2_qvn)

# d2s/dh2 along the sat line
# liq side
d2s_dh2_qla = (d2s_dT2_qla*dh_dT_ql - ds_dT_ql*d2h_dT2_qla) / dh_dT_ql**3
d2s_dh2_qln = numSlopeAr(h_sat_liq, ds_dh_ql)
# vap side
d2s_dh2_qva = (d2s_dT2_qva*dh_dT_qv - ds_dT_qv*d2h_dT2_qva) / dh_dT_qv**3
d2s_dh2_qvn = numSlopeAr(h_sat_vap, ds_dh_qv)
#print(d2s_dh2_qla - d2s_dh2_qln)
#print(d2s_dh2_qva - d2s_dh2_qvn)
```

T_crt = 369.89
T_trp = 85.525

```
# vapor pressure, Clausius-Clapeyron
# pick a point and set figure size
mySatPoint = int(nPoints-50)
print("T_sat = " + str(T_sat[mySatPoint]))
print("p_sat = " + str(p_sat[mySatPoint]))
plt.figure(figsize=(width, width*3/2/golden))

plt.subplot(3,2,1)
plt.plot(T_sat, p_sat, color='blue')
#plt.yscale('log')
plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
plt.grid(True, linestyle=':')
plt.minorticks_on()
plt.xlabel('Temperature in K')
plt.ylabel('Pressure in Pa')
drawTangent(T_sat, p_sat, dp_dT_q, mySatPoint)
#print(dp_dT_q - dp_dT_qn)

plt.subplot(3,2,2)
plt.plot(p_sat, T_sat, color='blue')
#plt.yscale('log')
plt.ticklabel_format(style='sci', axis='x', scilimits=(0,0))
plt.grid(True, linestyle=':')
plt.minorticks_on()
plt.xlabel('Pressure in Pa')
plt.ylabel('Temperature in K')
drawTangent(p_sat, T_sat, dT_dp_q, mySatPoint)
#print(dT_dp_q - dT_dp_qn)

plt.subplot(3,2,3)
plt.plot(T_sat, dp_dT_q, color='blue')
#plt.yscale('log')
plt.grid(True, linestyle=':')
plt.minorticks_on()
plt.xlabel('Temperature in K')
plt.ylabel('dp/dT in Pa/K')
drawTangent(T_sat, dp_dT_q, d2p_dT2_q, mySatPoint)
#print(d2p_dT2_q - d2p_dT2_qn)

plt.subplot(3,2,4)
plt.plot(p_sat, dT_dp_q, color='blue')
plt.yscale('log')
plt.ticklabel_format(style='sci', axis='x', scilimits=(0,0))
plt.grid(True, linestyle=':')
plt.minorticks_on()
plt.xlabel('Pressure in Pa')
plt.ylabel('dT/dp in K/Pa')
drawTangent(p_sat, dT_dp_q, d2T_dp2_q, mySatPoint)
#print(d2T_dp2_q - d2T_dp2_qn)

plt.subplot(3,2,5)
plt.plot(T_sat, d2p_dT2_q, color='blue')
plt.grid(True, linestyle=':')
plt.minorticks_on()
plt.xlabel('Temperature in K')
plt.ylabel('d2p/dT2 in Pa/K²')

plt.subplot(3,2,6)
plt.plot(p_sat, abs(d2T_dp2_q), color='blue')
plt.yscale('log')
plt.ticklabel_format(style='sci', axis='x', scilimits=(0,0))
plt.grid(True, linestyle=':')
plt.minorticks_on()
plt.xlabel('Pressure in Pa')
plt.ylabel('|d2T/dp2| in K/Pa²')
```
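As an aside, newer CoolProp builds can return saturation-line derivatives directly through the low-level interface, which gives an independent check on the Clausius-Clapeyron slope computed above. A minimal sketch, assuming `first_saturation_deriv` is available in the installed version:

```
# cross-check dp/dT along the sat line against CoolProp's built-in value
AS = CP.AbstractState("HEOS", FluidName)
AS.update(CP.QT_INPUTS, 0, T_sat[mySatPoint])
print(AS.first_saturation_deriv(CP.iP, CP.iT))  # built-in saturation derivative
print(dp_dT_q[mySatPoint])                      # Clausius-Clapeyron from above
```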
```
# Pick a point and set figure size
mySatPoint = int(nPoints-50)
print("T_sat = " + str(T_sat[mySatPoint]))
print("p_sat = " + str(p_sat[mySatPoint]))
plt.figure(figsize=(width, width*3/2/golden))

# d versus T
plt.subplot(3,2,1)
plt.plot(T_sat, d_sat_liq, color='blue')
plt.plot(T_sat, d_sat_vap, color='red')
#plt.plot(T_sat, (d_sat_liq+d_sat_vap)/2, color='black')
plt.xlabel('Temperature in K')
plt.ylabel('Density in kg/m³')
drawTangent(T_sat, d_sat_liq, dd_dT_ql, mySatPoint)
drawTangent(T_sat, d_sat_vap, dd_dT_qv, mySatPoint)

# s versus T
plt.subplot(3,2,2)
plt.plot(T_sat, s_sat_liq, color='blue')
plt.plot(T_sat, s_sat_vap, color='red')
plt.xlabel('Temperature in K')
plt.ylabel('Specific entropy in J/(kg·K)')
drawTangent(T_sat, s_sat_liq, ds_dT_ql, mySatPoint)
drawTangent(T_sat, s_sat_vap, ds_dT_qv, mySatPoint)

# d versus p
plt.subplot(3,2,3)
plt.plot(p_sat, d_sat_liq, color='blue')
plt.plot(p_sat, d_sat_vap, color='red')
plt.ticklabel_format(style='sci', axis='x', scilimits=(0,0))
plt.xlabel('Pressure in Pa')
plt.ylabel('Density in kg/m³')
drawTangent(p_sat, d_sat_liq, dd_dp_ql, mySatPoint)
drawTangent(p_sat, d_sat_vap, dd_dp_qv, mySatPoint)
#print(dd_dp_ql - numSlopeAr(p_sat, d_sat_liq))
#print(dd_dp_qv - numSlopeAr(p_sat, d_sat_vap))

# s versus p
plt.subplot(3,2,4)
plt.plot(p_sat, s_sat_liq, color='blue')
plt.plot(p_sat, s_sat_vap, color='red')
#plt.yscale('log')
plt.ticklabel_format(style='sci', axis='x', scilimits=(0,0))
plt.xlabel('Pressure in Pa')
plt.ylabel('Specific entropy in J/(kg·K)')
drawTangent(p_sat, s_sat_liq, ds_dp_ql, mySatPoint)
drawTangent(p_sat, s_sat_vap, ds_dp_qv, mySatPoint)
#print(ds_dp_ql - numSlopeAr(p_sat, s_sat_liq))
#print(ds_dp_qv - numSlopeAr(p_sat, s_sat_vap))

# d versus h
plt.subplot(3,2,5)
plt.plot(h_sat_liq, d_sat_liq, color='blue')
plt.plot(h_sat_vap, d_sat_vap, color='red')
plt.ticklabel_format(style='sci', axis='x', scilimits=(0,0))
plt.xlabel('Specific enthalpy in J/kg')
plt.ylabel('Density in kg/m³')
drawTangent(h_sat_liq, d_sat_liq, dd_dh_ql, mySatPoint)
drawTangent(h_sat_vap, d_sat_vap, dd_dh_qv, mySatPoint)

# s versus h
plt.subplot(3,2,6)
plt.plot(h_sat_liq, s_sat_liq, color='blue')
plt.plot(h_sat_vap, s_sat_vap, color='red')
plt.ticklabel_format(style='sci', axis='x', scilimits=(0,0))
plt.xlabel('Specific enthalpy in J/kg')
plt.ylabel('Specific entropy in J/(kg·K)')
drawTangent(h_sat_liq, s_sat_liq, ds_dh_ql, mySatPoint)
drawTangent(h_sat_vap, s_sat_vap, ds_dh_qv, mySatPoint)
```
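Before looking at the second derivatives, one more consistency check is possible: along the saturation line the fundamental relation $dh = T\,ds + v\,dp$ must hold, so $ds/dh = (1 - v\,dp/dh)/T$. A minimal sketch on the liquid line, reusing arrays from the calculation cell (the last point is skipped because the Clausius-Clapeyron ratio is 0/0 at the critical point):

```
# fundamental relation check: ds/dh = (1 - v*dp/dh)/T on the liq line
dp_dh_ql = dp_dT_q/dh_dT_ql                     # dp/dh along the sat liq line
ds_dh_check = (1 - v_sat_liq*dp_dh_ql)/T_sat    # predicted ds/dh
print(np.max(np.abs(ds_dh_check[:-1] - ds_dh_ql[:-1])))  # should be ~0
```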
```
# Pick a point and set figure size
mySatPoint = int(nPoints-50)
print("T_sat = " + str(T_sat[mySatPoint]))
print("p_sat = " + str(p_sat[mySatPoint]))
plt.figure(figsize=(width, width*3/2/golden))

# dd_dT versus T
plt.subplot(3,2,1)
plt.plot(T_sat, dd_dT_ql, color='blue')
plt.plot(T_sat, dd_dT_qv, color='red')
plt.xlabel('Temperature in K')
plt.ylabel('dd/dT in kg/m³/K')
drawTangent(T_sat, dd_dT_ql, d2d_dT2_qla, mySatPoint)
drawTangent(T_sat, dd_dT_qv, d2d_dT2_qva, mySatPoint)

# ds_dT versus T
plt.subplot(3,2,2)
plt.plot(T_sat, ds_dT_ql, color='blue')
plt.plot(T_sat, ds_dT_qv, color='red')
plt.xlabel('Temperature in K')
plt.ylabel('ds/dT in J/(kg·K)/K')
drawTangent(T_sat, ds_dT_ql, d2s_dT2_qla, mySatPoint)
drawTangent(T_sat, ds_dT_qv, d2s_dT2_qva, mySatPoint)

# dd_dp versus p
plt.subplot(3,2,3)
#plt.plot(p_sat, dd_dp_ql, color='blue')
plt.plot(p_sat, dd_dp_qv, color='red')
plt.ticklabel_format(style='sci', axis='x', scilimits=(0,0))
plt.xlabel('Pressure in Pa')
plt.ylabel('dd/dp in kg/m³/Pa')
#drawTangent(p_sat, dd_dp_ql, d2d_dp2_qla, mySatPoint)
drawTangent(p_sat, dd_dp_qv, d2d_dp2_qva, mySatPoint)

# ds_dp versus p
plt.subplot(3,2,4)
plt.plot(p_sat, ds_dp_ql, color='blue')
#plt.plot(p_sat, ds_dp_qv, color='red')
#plt.yscale('log')
plt.ylim([0, 10*min(ds_dp_ql)])
plt.ticklabel_format(style='sci', axis='x', scilimits=(0,0))
plt.xlabel('Pressure in Pa')
plt.ylabel('ds/dp in J/(kg·K)/Pa')
drawTangent(p_sat, ds_dp_ql, d2s_dp2_qla, mySatPoint)
#drawTangent(p_sat, ds_dp_qv, d2s_dp2_qva, mySatPoint)

# dd_dh versus h
plt.subplot(3,2,5)
plt.plot(h_sat_liq, dd_dh_ql, color='blue')
#plt.plot(h_sat_vap, dd_dh_qv, color='red')
plt.ticklabel_format(style='sci', axis='x', scilimits=(0,0))
plt.xlabel('Specific enthalpy in J/kg')
plt.ylabel('dd/dh in (kg/m³)/(J/kg)')
drawTangent(h_sat_liq, dd_dh_ql, d2d_dh2_qla, mySatPoint)
#drawTangent(h_sat_vap, dd_dh_qv, d2d_dh2_qva, mySatPoint)

# ds_dh versus h
plt.subplot(3,2,6)
plt.plot(h_sat_liq, ds_dh_ql, color='blue')
#plt.plot(h_sat_vap, ds_dh_qv, color='red')
plt.ticklabel_format(style='sci', axis='x', scilimits=(0,0))
plt.xlabel('Specific enthalpy in J/kg')
plt.ylabel('ds/dh in 1/K')
drawTangent(h_sat_liq, ds_dh_ql, d2s_dh2_qla, mySatPoint)
#drawTangent(h_sat_vap, ds_dh_qv, d2s_dh2_qva, mySatPoint)
```
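The eighth example, $d^2s/(dh\,dp)$, was derived above but never exercised in code. A minimal sketch of a spot check, reusing the arrays computed earlier (the trimmed slice and `nanmax` are assumptions to dodge the noisy endpoints near the triple and critical points):

```
# d2s/(dh dp) on the liquid line: analytic composition vs. numerical slope
d2s_dhdp_qla = (d2s_dT2_qla*dh_dT_ql - ds_dT_ql*d2h_dT2_qla) / (dh_dT_ql**2 * dp_dT_q)
d2s_dhdp_qln = numSlopeAr(p_sat, ds_dh_ql)
print(np.nanmax(np.abs(d2s_dhdp_qla - d2s_dhdp_qln)[10:-10]))  # small away from the ends
```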