# zbMATH — the first resource for mathematics
Extensions of the mountain pass theorem. (English) Zbl 0564.58012
The paper contains a number of extensions of the mountain pass lemma of A. Ambrosetti and P. H. Rabinowitz [(*) ibid. 14, 349-381 (1973; Zbl 0273.49063)]. The lemma gives sufficient conditions for the existence of critical points of continuously Fréchet differentiable functionals $$I: X\to {\mathbb{R}}$$ on a real Banach space X. The hypotheses of the lemma and its variants consist of a compactness condition and geometric restraints on the functional I. It was shown in (*) how the lemma may be applied to prove the existence of weak solutions for differential equations. (See also the survey article by L. Nirenberg [Bull. Am. Math. Soc., New Ser. 4, 267-302 (1981; Zbl 0468.47040)] for an introduction.)
The authors of the paper under review study variants of the geometric restraints on the functional I. At the same time they make statements as to whether one obtains local minima, maxima or saddle points. For example, take $$K_b=\{x\in X \mid I(x)=b,\ I'(x)=0\},$$ the set of all critical points with critical value b. If b is the value given in the original mountain pass lemma and X is infinite dimensional, then $$K_b$$ contains at least one saddle point. Finally, the authors give modifications of the above-mentioned results for periodic functionals. In this case one needs an adapted version of the compactness condition. For a different type of extension of the mountain pass lemma and its applications we would like to mention results of M. Struwe [Math. Ann. 261, 399-412 (1982; Zbl 0506.35034); J. Reine Angew. Math. 349, 1-23 (1984; Zbl 0521.49028)]. In these papers the differentiability requirement for the functional I is weakened.
Reviewer: G.Warnecke
##### MSC:
58E05 Abstract critical point theory (Morse theory, Lyusternik-Shnirel’man theory, etc.) in infinite-dimensional spaces 57R70 Critical points and critical submanifolds in differential topology 49Q99 Manifolds and measure-geometric topics
Full Text:
##### References:
[1] Ambrosetti, A.; Rabinowitz, P. H., Dual variational methods in critical point theory and applications, J. Funct. Anal. 14, 349-381 (1973) · Zbl 0273.49063
[2] Brezis, H.; Coron, J. M.; Nirenberg, L., Free vibrations for a nonlinear wave equation and a theorem of P. Rabinowitz, Comm. Pure Appl. Math. 33, 667-684 (1980) · Zbl 0484.35057
[3] Clark, D. C., A variant of the Lusternik-Schnirelman theory, Indiana Univ. Math. J. 22, 65-74 (1972) · Zbl 0228.58006
[4] Hofer, H., A note on the topological degree at a critical point of mountain pass type, (), 309-315 · Zbl 0545.58015
[5] Mawhin, J.; Willem, M., Variational methods and boundary value problems for vector second order differential equations and applications to the pendulum equation, J. Diff. Equations 52, 264-287 (1984) · Zbl 0557.34036
[6] Pucci, P.; Serrin, J., A mountain pass theorem, J. Diff. Equations, in press · Zbl 0585.58006
[7] Ni, W. M., Some minimax principles and their applications in nonlinear elliptic equations, J. Analyse Math. 37, 248-278 (1980)
[8] Rabinowitz, P. H., Variational methods for nonlinear eigenvalue problems, (), Varenna · Zbl 0212.16504
[9] Rabinowitz, P. H., Some aspects of critical point theory, MRC Technical Report No. 2465, University of Wisconsin (1983)
Tag Info
1
For a neutron of that speed, the uncertainty in the momentum is expected to be less than the momentum magnitude. Using the actual momentum will be an upper bound on the momentum uncertainty. That correlates to a lower bound on the position uncertainty. So, $\Delta x$ is lower bounded by $\hbar/(2p)$: $$\Delta x \ge \frac{\hbar}{2m_nv}.$$ $\Delta x$ could ...
-1
Gravity fluctuations will always cause vibrations in atoms and molecules limiting the lowest temperature obtainable. Closer to the mass source, the stronger the gravity field. As stated by Asaf earlier, evaporative cooling will lower the temperature only so far. Adding a magnetic field may temporarily increase temperature by increasing vibrations in the ...
0
I think you are probably misinterpreting the context here. If you read the previous line carefully it says "there is always an undetermined interaction between observer and observed; there is nothing we can do to avoid the interaction or to allow for it ahead of time." And later he just says that, because the photon can be scattered within the 2θ' angle ...
1
There is yet another solution (maybe more elementary)$^1$, with some components of the answers from Qmechanic and JoshPhysics (Currently I'm taking my first QM course and I don't quite understand the solution of Qmechanic, and this answer complement JoshPhysics's answer) the solution uses the Heisenberg Equation: The time evolution of an operator $\hat{A}$ ...
6
The temperature limit for laser cooling is not related to gravity but to the always-present momentum kick during absorption/emission of photons. Ultracold atom experiments typically use laser cooling as an initial stage, and afterwards evaporative cooling is used to reach the lowest temperatures. In evaporative cooling the most energetic atoms are discarded ...
4
Summary Using the entropic uncertainty principle, one can show that $μ_qμ_p≥\frac{π}{4e}$, where $μ$ is the mean deviation. This corresponds to $F≥\frac{π^2}{4e}=0.9077$ using the notations of AccidentalFourierTransform’s answer. I don’t think this bound is optimal, but didn’t manage to find a better proof. To simplify the expressions, I’ll assume $ℏ=1$, ...
4
This is a great example of how hard it is to popularize quantum mechanics. Greene's example is not quite right, because classically, the butterfly does have a definite position and momentum, at all times. We can also measure these values simultaneously to arbitrary accuracy, as your friend says. (As for your concern about exposure time, we could decrease ...
0
The Heisenberg uncertainty principle is a basic foundation stone of quantum mechanics, and is derivable from the commutator relations of the quantum mechanical operators describing the pair of variables participating in the HUP. You are discussing the energy–time uncertainty relation. For an individual particle, it describes a locus in the time versus energy ...
3
It cannot be proven, because "wave-particle duality" is not a mathematical statement. It most definitely is not "logically true". Can you try to make it mathematical? A mathematical framework The "complementarity principle" was introduced in order to better understand some features of quantum mechanics in the early days. The problem is that if you consider ...
1
The uncertainty principle never said that nothing can be measured simultaneously with accuracy. Uncertainty principle states that it is not possible to measure two canonically conjugate quantities at the same time with accuracy. Like you cannot measure the x component of momentum $p_x$ and the x coordinate position simultaneously with accuracy. But the x ...
0
I am not satisfied with the published replies, so I will try my own, as a metrologist (an expert in measurement units, but not in theoretical physics). The question is clearly referring to the experimental frame while all the answers refer only to the theoretical frame, so they do not talk to or understand each other. Here we are dealing with two ...
1
The point dipole is an approximation from classical physics - note that it also involves an infinite field strength in its center, where the field amplitude is not differentiable. I think such a source is not compatible with the common approach to quantum mechanics. If you take such a very small, subwavelength source, it is true that the evanescent near ...
11
We can assume WLOG that $\bar x=\bar p=0$ and $\hbar =1$. We don't assume that the wave-functions are normalised. Let $$\sigma_x\equiv \frac{\int \mathrm dx\; |x|\;|\psi(x)|^2}{\int\mathrm dx\; |\psi(x)|^2}$$ and $$\sigma_p\equiv \frac{\int \mathrm dp\; |p|\;|\tilde \psi(p)|^2}{\int\mathrm dx\; |\psi(x)|^2}$$ Using $$\int\mathrm dp\ |p|\;\mathrm ...
4
I went back to the derivation of the Heisenberg uncertainty principle and tried to modify it. Not sure if what I've come up with is worth anything, but you'll be the judge: The original derivation Let $\hat{A} = \hat{x} - \bar{x}$ and $\hat{B} = \hat{p} - \bar{p}$. Then the inner product of the state $| \phi\rangle = (\hat{A} + i \lambda ...$
0
As in the link you give, the functional form depends on the probability distribution used, and these differ widely; nothing as general as the Heisenberg form can appear. The quantum mechanical equivalent requires the solution for the specific boundary problem. In any case, the HUP is about deltas, i.e. uncertainties, and not only standard deviations as ...
2
I) In this answer we will consider the microscopic description of classical E&M only. The Lorentz force reads $$\tag{1} {\bf F}~:=~q({\bf E}+{\bf v}\times {\bf B})~=~\frac{\mathrm d}{\mathrm dt}\frac{\partial U}{\partial {\bf v}}- \frac{\partial U}{\partial {\bf r}}~=~-q\frac{\mathrm d{\bf A}}{\mathrm dt} - \frac{\partial U}{\partial {\bf r}}, ...
# Understanding this explanation about Big O notation
I'm trying to learn the Big O Notation...and I got a bit confused by this article:
https://brilliant.org/practice/big-o-notation-2/?chapter=intro-to-algorithms&pane=1838
where it says that f(x) = 4x and g(x) = 10x, (...), and that one could look at the Big O notation by dividing f(x) by g(x): 10x/4x
Shouldn't it be 4x/10x instead in this very example? (since f(x) = 4x and g(x) = 10x) Or is just me who got it all wrong?...
Kind regards,
c
The best way to look at big $O$ notation is the following: $f(x)$ and $g(x)$ have the same $O$ complexity if you can find positive constants $c_1, c_2 \in \mathbb{R}$ such that $f(x) \leq c_1 \cdot g(x)$ and $g(x) \leq c_2 \cdot f(x)$ for all $x$.
So, for example $4x$ and $10x$ are both in the same complexity class because $4x \leq 1 \cdot 10x$ and $10x \leq 3 \cdot 4x$. We name the complexity class they belong to $O(x)$, because $x$ is the simplest of all expressions of the form $c \cdot x$ so we use it as a representative.
Also, one can write equalities with big $O$; $$O(4x) = O(10x) = O(x) = O(192839182x) \neq O(x^2)$$
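A quick numerical check of the two inequalities above (illustrative Python, using the same constants $c_1 = 1$ and $c_2 = 3$ as in the example):

```python
# Verify f(x) <= c1*g(x) and g(x) <= c2*f(x) for f(x) = 4x, g(x) = 10x.
def f(x):
    return 4 * x

def g(x):
    return 10 * x

for x in [1, 10, 100, 1000, 10**6]:
    assert f(x) <= 1 * g(x)   # 4x <= 1 * 10x
    assert g(x) <= 3 * f(x)   # 10x <= 3 * 4x

print("Both bounds hold at every tested x, so 4x and 10x belong to the same class O(x).")
```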
Some basic complexity classes are (in order of complexity, from lower to higher):
• $O(\log n)$
• $O(n)$
• $O(n \log n)$
• $O(n^2)$
• $\ldots$
• $O(n^k)$
• $\ldots$
• $O(2^n)$
• $\ldots$
and every complexity class on this list is not equal to any other.
• This is basically right but please don't confuse complexity classes and orders of growth. A complexity class is a class of computational problems, based on some kind of resource usage; $O(...)$ is a class of mathematical functions. There are no complexity classes in your answer because you're talking only about the growth rate of mathematical functions. – David Richerby Aug 20 '18 at 22:46
• Not for all $x$, it is enough if it is valid for $x$ large enough (we are interested in the functions for "very large" values of $x$, for suitable "very large"). – vonbrand Mar 3 '20 at 16:17
# Simulation of pinned diffusion process
Suppose I have a stochastic differential equation (in the Ito sense): $$dX_t = \mu(X_t)\,dt + \sigma(X_t)\, dW_t$$ in $\mathbb{R}^n$, where I know that $X_0=a$ and $X_T=b$. In other words, the process has been "pinned" at fixed times $0$ and $T$.
I want to know how to simulate such an equation (i.e. produce trajectories numerically).
I've seen some questions ( 1, 2, 3, 4, 5 ) on the "Brownian Bridge", which is a special case of this.
Edit (081617): it appears that this process is also referred to as an Ito bridge or as a diffusion bridge. It turns out this is not as easy as I'd hoped. A promising paper (found with these better search terms) is Simulation of multivariate diffusion bridges by Bladt et al. Any help/suggestions are still appreciated!
• Have you tried Euler Maruyama method to simulate it ? Aug 16, 2017 at 19:32
• @Khosrotash How can I apply Euler-Maruyama to a pinned diffusion? Aug 16, 2017 at 19:41
• Do you mean $$@ t=0 \to x(0)=a \\@t=T \to x(T)=b$$ and $a,b$ are assumed ? Aug 16, 2017 at 19:44
• @Khosrotash Yes indeed, all are fixed or known in advance. Aug 16, 2017 at 19:52
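For the Brownian-bridge special case mentioned in the question (constant diffusion coefficient, zero drift), one standard approach is to apply Euler–Maruyama to the conditioned SDE $dX_t = \frac{b - X_t}{T - t}\,dt + \sigma\,dW_t$. The Python sketch below illustrates only this special case; the general multivariate diffusion bridge needs the machinery in the Bladt et al. paper.

```python
# Euler-Maruyama simulation of a Brownian bridge pinned at X_0 = a and X_T = b.
# This is the special case mu(x) = 0, sigma(x) = const of the question's SDE;
# general (mu, sigma) bridges require e.g. the Bladt et al. approach.
import numpy as np

def brownian_bridge(a, b, T, sigma, n_steps, rng):
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = a
    for i in range(n_steps):
        t = i * dt
        drift = (b - x[i]) / (T - t)        # guiding term pulls the path toward b
        x[i + 1] = x[i] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    x[-1] = b                               # pin the endpoint exactly
    return x

rng = np.random.default_rng(0)
path = brownian_bridge(a=0.0, b=1.0, T=1.0, sigma=0.5, n_steps=1000, rng=rng)
print(path[0], path[-1])                    # 0.0 1.0 -- both endpoints are hit
```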
# Chapter 6 Visualizing data in R – An intro to ggplot
These notes accompany portions of Chapter 2 — Displaying Data — of our textbook, which we revisit in Section 9. The reading below is required; the textbook reading is not.
Motivating scenarios: you have a fresh new data set and want to check it out. How do you go about looking into it?
Learning goals: By the end of this chapter you should be able to:
• Build a simple ggplot.
• Explain the idea of mapping data onto aesthetics, and the use of different geoms.
• Match common plots to common data type.
• Use geoms in ggplot to generate the common plots (above).
There is no external reading for this chapter, but watch the embedded videos, and complete all embedded learnr exercises. Then go to canvas to fill out the evaluation. You will need to make three very different types of plots from the mpg data.
## 6.1 A quick intro to data visualization.
Recall that as bio-statisticians, we bring data to bear on critical biological questions, and communicate these results to interested folks. A key component of this process is visualizing our data.
### 6.1.1 Exploratory and explanatory visualizations
We generally think of two extremes of the goals of data visualization
• In exploratory visualizations we aim to identify any interesting patterns in the data, we also conduct quality control to see if there are patterns indicating mistakes or biases in our data, and to think about appropriate transformations of data. On the whole, our goal in exploratory data analysis is to understand the stories in the data.
• In explanatory visualizations we aim to communicate our results to a broader audience. Here our goals are communication and persuasion. When developing explanatory plots we consider our audience (scientists? consumers? experts?) and how we are communicating (talk? website? paper?).
The ggplot2 package in R is well suited for both purposes. Today we focus on exploratory visualization in ggplot2 because
1. They are the starting point of all statistical analyses.
2. You can do them with less ggplot2 knowledge.
3. They take less time to make than explanatory plots.
Later in the term we will show how we can use ggplot2 to make high quality explanatory plots.
### 6.1.2 Centering plots on biology
Whether developing an exploratory or explanatory plot, you should think hard about the biology you hope to convey before jumping into a plot. Ask yourself
• What do you hope to learn from this plot?
• Which is the response variable (we usually place that on the y-axis)?
• Are data numeric or categorical?
• If they are categorical are they ordinal, and if so what order should they be in?
The answers to these questions should guide our data visualization strategy, as this is a key step in our statistical analysis of a dataset. The best plots should evoke an immediate understanding of the (potentially complex) data. Put another way, a plot should highlight both the biological question and its answer.
Before jumping into making a plot in R, it is often useful to take a step back, think about your main biological question, and take a pencil and paper to sketch some ideas and potential outcomes. I do this to prepare my mind to interpret different results, and to ensure that I’m using R to answer my questions, rather than getting sucked into so much R-ing that I forget why I even started. With this in mind, we’re ready to get introduced to ggplotting!
### Remembering our set up from last chapter
msleep <- msleep %>%
mutate(log10_brainwt = log10(brainwt),
log10_bodywt = log10(bodywt))
msleep_plot1 <- ggplot(data = msleep, aes(x = log10_brainwt)) # save plot
msleep_histogram <- msleep_plot1 +
geom_histogram(bins =10, color = "white")
## 6.2 Common types of plots
As we saw in the section, Centering plots on biology, we want our biological questions and the structure of the data to guide our plotting choices. So, before we get started on making plots, we should think about our data.
• What are the variable names?
• What are the types of variables?
• What are our motivating questions and how do the data map onto these questions?
• Etc…
Using the msleep data set below, we briefly work through a rough guide on how the structure of our data can translate into a plot style, and how we translate that into a geom in ggplot. So as a first step you should look at the data – either with the view() function or a quick glimpse() – and reflect on your questions before plotting. This also helps us remember the name and data type of each variable.
glimpse(msleep)
## Rows: 83
## Columns: 13
## $ name          <chr> "Cheetah", "Owl monkey", "Mountain beaver", "Greater short-tailed shrew", …
## $ genus         <chr> "Acinonyx", "Aotus", "Aplodontia", "Blarina", "Bos", "Bradypus", "Callorhi…
## $ vore          <chr> "carni", "omni", "herbi", "omni", "herbi", "herbi", "carni", NA, "carni", …
## $ order         <chr> "Carnivora", "Primates", "Rodentia", "Soricomorpha", "Artiodactyla", "Pilo…
## $ conservation  <chr> "lc", NA, "nt", "lc", "domesticated", NA, "vu", NA, "domesticated", "lc", …
## $ sleep_total   <dbl> 12.1, 17.0, 14.4, 14.9, 4.0, 14.4, 8.7, 7.0, 10.1, 3.0, 5.3, 9.4, 10.0, 12…
## $ sleep_rem     <dbl> NA, 1.8, 2.4, 2.3, 0.7, 2.2, 1.4, NA, 2.9, NA, 0.6, 0.8, 0.7, 1.5, 2.2, 2.…
## $ sleep_cycle   <dbl> NA, NA, NA, 0.1333333, 0.6666667, 0.7666667, 0.3833333, NA, 0.3333333, NA,…
## $ awake         <dbl> 11.90, 7.00, 9.60, 9.10, 20.00, 9.60, 15.30, 17.00, 13.90, 21.00, 18.70, 1…
## $ brainwt       <dbl> NA, 0.01550, NA, 0.00029, 0.42300, NA, NA, NA, 0.07000, 0.09820, 0.11500, …
## $ bodywt        <dbl> 50.000, 0.480, 1.350, 0.019, 600.000, 3.850, 20.490, 0.045, 14.000, 14.800…
## $ log10_brainwt <dbl> NA, -1.8096683, NA, -3.5376020, -0.3736596, NA, NA, NA, -1.1549020, -1.007…
## $ log10_bodywt  <dbl> 1.6989700, -0.3187588, 0.1303338, -1.7212464, 2.7781513, 0.5854607, 1.3115…
Now we’re nearly ready to get started, but first, some caveats
1. These are very preliminary exploratory plots – and you may need more advanced plotting R talents to make plots that better help you see patterns. We will cover these in Chapters YB ADD, where we focus on explanatory plots.
2. There are not always cookie cutter solutions, with more complex data you may need more complex visualizations.
That said, the simple visualization and R tricks we learn below are the essential building blocks of most data presentation. So, let’s get started!
There is a lot of stuff below. We will revisit all of it again and again over the term, so you don’t need to master it now – think of this as your first exposure. You’ll get more comfortable and this will become more natural over time.
### 6.2.1 One variable
With one variable, we use plots to visualize the relative frequency (on the y-axis) of the values it takes (on the x-axis).
gg-plotting one variable: We map our one variable of interest onto x with aes(x = <x_variable>), where we replace <x_variable> with our x-variable. The mapping of frequency onto the y happens automatically.
#### One categorical variable
Say we wanted to know how many carnivores, herbivores, insectivores, and omnivores are in the msleep data set. From the output of the glimpse() function above, we know that vore is a categorical variable, so we want a simple bar plot, which we make with geom_bar().
ggplot(data = msleep, aes(x = vore)) +
geom_bar()
We can also pipe data into the ggplot() call after doing stuff to the data. For example, the code below removes NA values from our plot.
msleep %>%
filter(!is.na(vore)) %>%
ggplot(aes(x = vore)) +
geom_bar()
If the same data were instead presented as one categorical variable for vore (with each vore listed once) and another variable, n, for counts:
count(msleep, vore)
## # A tibble: 5 x 2
##   vore        n
##   <chr>   <int>
## 1 carni      19
## 2 herbi      32
## 3 insecti     5
## 4 omni       20
## 5 NA          7
We could recreate figure 6.1 with geom_col(), again mapping vore to the x aesthetic and now mapping the count n to the y aesthetic, as follows:
count(msleep, vore) %>%
ggplot(aes(x = vore, y = n))+
geom_col()
#### One continuous variable
We are often interested to know how variable our data is, and to think about the shape of this variability. Revisiting our data on mammal sleep patterns, we might be interested to evaluate the variability in how long mammals sleep.
• Do all species sleep roughly the same amount?
• Is the data bimodal (with two humps)?
• Do some species sleep for an extraordinarily long or short amount of time?
We can look into this with a histogram or a density plot.
##### One continuous variable: A histogram
We use the histogram geom, geom_histogram(), to make a histogram in R.
ggplot(msleep, aes(x = log10_brainwt))+
geom_histogram(bins = 10, color = "white") # Bins tells R we want 10 bins, and color = white tells R we want white lines between our bins
## Warning: Removed 27 rows containing non-finite values (stat_bin).
In a histogram, each value on the x represents some interval of values of our continuous variable (in this case, we had 10 bins, but we could have, for example, looked at sleep in one-hour bins with binwidth = 1), while y-values show how many observations correspond to an interval on the x.
When making a histogram it is worth exploring numerous binwidths to ensure you’re not fooling yourself
##### One continuous variable: A density plot
We use the density geom, geom_density(), to make a density plot in R.
ggplot(msleep, aes(x = log10_brainwt))+
geom_density(fill = "blue")
Sometimes we prefer a smooth density plot to a histogram, as this can allow us to not get too distracted by a few bumps (on the other hand, we can also miss important variability, so be careful). We again map log10_brainwt onto the x aesthetic, but now use geom_density().
### 6.2.2 Two variables
With two variables, we want to highlight the association between them. In the plots below, we show how the way this association is presented can influence our biological interpretation and take-home messages.
#### Two categorical variables
With two categorical variables, we usually add color to a barplot to identify the second group. We can choose to stack the bars, group (dodge) them side by side, or fill each bar to a common height.
Below, we’ll make one of each of these graphs to look at this for the association between mammal order and diet, limiting our view to orders with five or more species with data. Which of these you choose depends on the message, story and details. For example, a filled barplot is nice because we can see proportions, but a bummer because we don’t get to see counts. The book advocates for mosaic plots, which I really like but skip here because they are a bit esoteric. Look into the ggmosaic package, and its vignette if you want to make one.
First, we process our data, making use of the tricks we learned in Handling data in R. To do so, we filter() for not NA diets, add_count() to see how many species we have in each order, and filter() for orders with five or more species with diet data.
# Data processing
msleep_data_ordervore <- msleep %>%
filter(!is.na(vore)) %>% # Only cases with data for diet
add_count(order) %>% # Find counts for each order
filter(n >= 5) # Lets only hold on to orders with 5 or more species with data
##### Two categorical variables: A stacked bar plot
ggplot(data = msleep_data_ordervore, aes(x = order, fill= vore))+
geom_bar()
Stacked barplots are best suited for cases when we’re primarily interested in total counts (e.g. how many species do we have data for in each order), and less interested in comparing the categories going into these counts. Rarely is this the best choice, so don’t expect to make too many stacked barplots.
##### Two categorical variables: A grouped bar plot
ggplot(data = msleep_data_ordervore, aes(x = order, fill= vore))+
geom_bar(position = position_dodge(preserve = "single"))
Grouped barplots are best suited for cases when we’re primarily interested in comparing the categories going into these counts. This is often the best choice, as we get to see counts. However the total number in each group is harder to see in a grouped than a stacked barplot (e.g. it’s easy to see that we have the same number of primates and carnivores in Fig. 6.3, while this is harder to see in Fig. 6.4).
##### Two categorical variables: A filled bar plot
ggplot(data = msleep_data_ordervore, aes(x = order, fill= vore))+
geom_bar(position = "fill")
Filled barplots are simply stacked barplots standardized to the same height. In other words, they are like stacked bar plots without their greatest strength. This is rarely a good idea, except for cases with only two or three options for each of numerous categories.
#### 6.2.2.1 One categorical and one continuous variable.
##### One categorical and one continuous variable: Multiple histograms
A straightforward way to show the continuous values for different categories is to make a separate histogram for each category using the geom_histogram() and facet_wrap() functions in ggplot.
msleep_data_ordervore_hist <- ggplot(msleep_data_ordervore, aes(x= log10_bodywt))+
geom_histogram(bins = 10)
msleep_data_ordervore_hist +
facet_wrap(~order, ncol = 1)
When doing this, be sure to keep visual comparisons simple by ensuring there’s only one column. Note how Figure 6.6 makes it much easier to compare distributions than does Figure 6.7.
msleep_data_ordervore_hist +
facet_wrap(~order, nrow = 1)
##### One categorical and one continuous variable: Density plots
ggplot(msleep_data_ordervore, aes(x= bodywt, fill = order))+
geom_density(alpha = .3)+
scale_x_continuous(trans = "log10")
While many histograms can be nice, they can also take up a lot of space. Sometimes we can more succinctly show distributions for each group with overlaid density plots (geom_density()). While this can be succinct, it can also get too crammed, so have a look and see which display is best for your data and question.
##### One categorical and one continuous variable: Boxplots, jitterplots etc..
Histograms and density plots communicate the shapes of distributions, but we often hope to compare means and get a sense of variability.
• Boxplots (Figure 6.9A) summarize distributions by showing all quartiles – often showing outliers with points. e.g. ggplot(aes(x = order, y = bodywt)) + geom_boxplot().
• Jitterplots (Figure 6.9B) show all data points, spreading them out over the x-axis. e.g. ggplot(aes(x = order, y = bodywt)) + geom_jitter().
• We can combine both to get the best of both worlds (Figure 6.9C). e.g. ggplot(aes(x = order, y = bodywt)) + geom_boxplot() + geom_jitter().
#### 6.2.2.2 Two continuous variables
ggplot(msleep_data_ordervore, aes(x = log10_bodywt, y = log10_brainwt))+
geom_point()
With two continuous variables, we want a graph that visually displays the association between them. A scatterplot displays the explanatory variable on the x-axis and the response variable on the y-axis. The scatterplot in figure 6.10 shows a clear increase in brain size with body size across mammal species when both are on $$log_{10}$$ scales.
### 6.2.3 More dimensions
ggplot(msleep_data_ordervore,
aes(x = log10_bodywt, y = log10_brainwt, color = vore, shape = order))+
geom_point()
What if we wanted to see even more? Like let’s say we wanted to know if we found a similar relationship between brain weight and body weight across orders and/or if this relationship was mediated by diet. We can pack more info into these plots.
⚠️ Beware, sometimes shapes are hard to differentiate.⚠️ Facetting might make these patterns stand out.
ggplot(msleep_data_ordervore, aes(x = log10_bodywt, y = log10_brainwt, color = vore))+
geom_point()+
facet_wrap(~order, nrow = 1)
### 6.2.4 Interactive plots with the plotly package
Often when I get a fresh data set I want to know a bit more about the data points (to e.g. identify outliers or make sense of things). The plotly package is super useful for this, as it makes interactive graphs that we can explore.
# install.packages("plotly") first install plotly, if it's not installed yet
library(plotly) # now tell R you want to use plotly
# Click on the plot below to explore the data!
big_plot <- ggplot(msleep_data_ordervore,
aes(x = log10_bodywt, y = log10_brainwt,
color = vore, shape = order, label = name))+
geom_point()
ggplotly(big_plot)
#### Decoration vs information
ggplot(msleep_data_ordervore, aes(x = log10_bodywt, y = log10_brainwt))+
geom_point(color = "firebrick", size = 3, alpha = .5)
We have used the aes() argument to provide information. For example, in Figure 5.15 we used color to show diet by typing aes(…, color = vore). But what if we just want a fun color for all data points? We can do this by specifying color outside of the aes() argument. The same goes for other attributes, like size, or transparency (alpha)…
## 6.3 ggplot Assignment
Watch the video about getting started with ggplot
Complete RStudio’s primer on data visualization basics.
Complete the glimpse intro (4.3.1) and the quiz.
Make three plots from the mpg data and describe the patterns they highlight.
Fill out the quiz on canvas, which is very similar to the one below.
## 6.4 ggplot2 review / reference
### 6.4.1 ggplot2: cheat sheet
There is no need to memorize anything, check out this handy cheat sheet!
#### 6.4.1.1 ggplot2: common functions, aesthetics, and geoms
##### The ggplot() function
• Takes arguments data = and mapping =.
• We usually leave these implied and type e.g. ggplot(my.data, aes(...)) rather than ggplot(data = my.data, mapping = aes(...)).
• We can pipe data into the ggplot() function, so my.data %>% ggplot(aes(…)) does the same thing as ggplot(my.data, aes(…)).
##### Arguments for aes() function
The aes() function takes many potential arguments each of which specifies the aesthetic we are mapping onto a variable:
###### x, y, and label:
• x: What is shown on the x-axis.
• y: What is shown on the y-axis.
• label: What is shown as text in the plot (when using geom_text())
##### Faceting
Faceting allows us to use the concept of small multiples to highlight patterns.
For one facetted variable: facet_wrap(~ <var>, ncol = <n>)
For two facetted variables: facet_grid(<var1> ~ <var2>), where one variable is shown by rows and the other by columns.
Why must the matrices be positive semidefinite? What is the input authority cost? What is the purpose of multiplying the transpose then the positive semidefinite matrix then the matrix itself?
Why must the matrices be positive semidefinite?
If we only consider real numbers, the definition of a PSD matrix $$A\in\mathbb{R}^{n\times n}$$ is $$z^\top A z \ge 0$$ for $$z \in \mathbb{R}^n$$.
By restricting ourselves to PSD matrices, we know that the loss $$J$$ must always be bounded below by 0, because a sum of non-negative numbers is non-negative. In particular, the problem is then minimizing a convex quadratic (strongly convex when the matrices are positive definite), so a minimum exists. That's nice!
Now consider a matrix $$B$$ that does not have the PSD property. The quantity $$z^\top B z$$ could be positive, negative, or neither.
Your optimization procedure is minimizing the loss $$J$$. If your matrix is, for example negative definite, then you could always improve the loss by making the sums of these quadratic forms arbitrarily negative, ever smaller. This is akin to minimizing a line with nonzero slope: there's no minimum to find!
What is the input authority cost?
No idea. You'll have to read the slides, or the cited works, or contact the author.
What is the purpose of multiplying the transpose then the positive semidefinite matrix then the matrix itself?
This is called a quadratic form, and it shows up all over the place in math because of its role in defining PD and PSD matrices. What it means in the specific terms of this optimization depends on the context of the problem: where do these matrices come from, and what do they mean?
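A small numerical illustration of these quadratic forms (the matrices below are invented for the example): a PSD matrix never yields a negative value, while an indefinite one yields both signs, which is exactly why the loss can become unbounded below.

```python
# z^T A z for a PSD matrix vs. an indefinite matrix (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

M = rng.standard_normal((3, 3))
A_psd = M.T @ M                        # M^T M is always positive semidefinite
A_indef = np.diag([1.0, -1.0, 0.0])    # eigenvalues of both signs -> indefinite

for _ in range(5):
    z = rng.standard_normal(3)
    assert z @ A_psd @ z >= 0          # never negative for a PSD matrix

print(np.array([1.0, 0.0, 0.0]) @ A_indef @ np.array([1.0, 0.0, 0.0]))   #  1.0
print(np.array([0.0, 1.0, 0.0]) @ A_indef @ np.array([0.0, 1.0, 0.0]))   # -1.0
```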
# NAG Library Routine Document
## 1Purpose
f16jtf (blas_zamin_val) computes, with respect to absolute value, the smallest component of a complex vector, along with the index of that component.
## 2Specification
Fortran Interface
Subroutine f16jtf ( n, x, incx, k, r)
Integer, Intent (In) :: n, incx
Integer, Intent (Out) :: k
Real (Kind=nag_wp), Intent (Out) :: r
Complex (Kind=nag_wp), Intent (In) :: x(1+(n-1)*ABS(incx))
#include <nagmk26.h>
void f16jtf_ (const Integer *n, const Complex x[], const Integer *incx, Integer *k, double *r)
The routine may be called by its BLAST name blas_zamin_val.
## 3Description
f16jtf (blas_zamin_val) computes, with respect to absolute value, the smallest component, $r$, of an $n$-element complex vector $x$, and determines the smallest index, $k$, such that
$r = \left|\mathrm{Re}\,x_k\right| + \left|\mathrm{Im}\,x_k\right| = \min_j\left\{\left|\mathrm{Re}\,x_j\right| + \left|\mathrm{Im}\,x_j\right|\right\}.$
## 4References
Basic Linear Algebra Subprograms Technical (BLAST) Forum (2001) Basic Linear Algebra Subprograms Technical (BLAST) Forum Standard University of Tennessee, Knoxville, Tennessee http://www.netlib.org/blas/blast-forum/blas-report.pdf
## 5Arguments
1: $\mathbf{n}$ – Integer Input
On entry: $n$, the number of elements in $x$.
2: $\mathbf{x}\left(1+\left({\mathbf{n}}-1\right)×\left|{\mathbf{incx}}\right|\right)$ – Complex (Kind=nag_wp) array Input
On entry: the $n$-element vector $x$.
If ${\mathbf{incx}}>0$, ${x}_{\mathit{i}}$ must be stored in ${\mathbf{x}}\left(\left(\mathit{i}-1\right)×{\mathbf{incx}}+1\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$.
If ${\mathbf{incx}}<0$, ${x}_{\mathit{i}}$ must be stored in ${\mathbf{x}}\left(\left({\mathbf{n}}-\mathit{i}\right)×\left|{\mathbf{incx}}\right|+1\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$.
Intermediate elements of x are not referenced. If ${\mathbf{n}}=0$, x is not referenced.
3: $\mathbf{incx}$ – Integer Input
On entry: the increment in the subscripts of x between successive elements of $x$.
Constraint: ${\mathbf{incx}}\ne 0$.
4: $\mathbf{k}$ – Integer Output
On exit: $k$, the index, from the set $\left\{1,2,\dots ,{\mathbf{n}}\right\}$, of the smallest component of $x$ with respect to absolute value. If ${\mathbf{n}}\le 0$ on input then k is returned as $0$.
5: $\mathbf{r}$ – Real (Kind=nag_wp) Output
On exit: $r$, the smallest component of $x$ with respect to absolute value. If ${\mathbf{n}}\le 0$ on input then r is returned as $0.0$.
## 6Error Indicators and Warnings
If ${\mathbf{incx}}=0$, an error message is printed and program execution is terminated.
## 7Accuracy
The BLAS standard requires accurate implementations which avoid unnecessary over/underflow (see Section 2.7 of Basic Linear Algebra Subprograms Technical (BLAST) Forum (2001)).
## 8Parallelism and Performance
f16jtf (blas_zamin_val) is not threaded in any implementation.
## 9Further Comments
None.
## 10Example
This example computes the smallest component with respect to absolute value and index of that component for the vector
$x = \left(-4+2.1i,\; 3.7+4.5i,\; -6+1.2i\right)^\mathrm{T}.$
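A plain-Python sketch of the computation this routine performs for that vector (it mirrors the definition in Section 3 rather than calling the NAG library):

```python
# Smallest |Re x_j| + |Im x_j| and its 1-based index, as defined in Section 3.
x = [-4 + 2.1j, 3.7 + 4.5j, -6 + 1.2j]

k, r = min(
    ((j + 1, abs(z.real) + abs(z.imag)) for j, z in enumerate(x)),
    key=lambda pair: pair[1],
)
print(k, r)   # 1 6.1 -- the first component, since |-4| + |2.1| = 6.1
```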
### 10.1Program Text
Program Text (f16jtfe.f90)
### 10.2Program Data
Program Data (f16jtfe.d)
### 10.3Program Results
Program Results (f16jtfe.r)
# Determining a δ
• Sep 4th 2009, 08:53 AM
Rker
Determining a δ
I have absolutely no idea how to solve these type of problems. My teacher gave a lecture about this subject two days ago, and I took a look at this stickied thread, but I'm still stuck. :s
In exercises 1–8, numerically and graphically determine a δ corresponding to (a) ε = 0.1 and (b) ε = 0.05. Graph the function in the ε–δ window [x-range is (a − δ, a + δ) and y-range is (L − ε, L + ε)] to verify that your choice works.
1.
$\lim_{x\to 0}(x^2 + 1) = 1$
In exercises 9–20, symbolically find δ in terms of ε.
15.
$\lim_{x\to 1}\frac{x^2 + x - 2}{x - 1} = 3$
52.
A fiberglass company ships its glass as spherical marbles. If the volume of each marble must be within ε of π/6, how close does the radius need to be to 1/2?
• Sep 4th 2009, 10:11 AM
VonNemo19
Quote:
Originally Posted by Rker
I have absolutely no idea how to solve these type of problems. My teacher gave a lecture about this subject two days ago, and I took a look at this stickied thread, but I'm still stuck. :s
In exercises 1–8, numerically and graphically determine a δ corresponding to (a) ε = 0.1 and (b) ε = 0.05. Graph the function in the ε–δ window [x-range is (a − δ, a + δ) and y-range is (L − ε, L + ε)] to verify that your choice works.
1.
$\lim_{x\to 0}(x^2 + 1) = 1$
In exercises 9–20, symbolically find δ in terms of ε.
15.
$\lim_{x\to 1}\frac{x^2 + x - 2}{x - 1} = 3$
52.
A fiberglass company ships its glass as spherical marbles. If the volume of each marble must be within ε of π/6, how close does the radius need to be to 1/2?
For 1.
You wish to show that
$\lim_{x\to0}(x^2+1)=1$.
To do this we must have
$|f(x)-L|<\epsilon$ whenever $0<|x-a|<\delta$.
So, given that $\epsilon=0.1$, we proceed:
$|(x^2+1)-1|<0.1$
$|x^2|<0.1$. Since $x^2\ge0$ for all x,
$x^2<0.1$
Can you see how to find delta?
PS Finding a delta graphically is easy. Just draw the graph, then draw the lines $L+\epsilon$ and $L-\epsilon$. Where those lines intersect the graph, draw vertical lines down to the x-axis. The distance from the nearest such line to $x=a$ is $\delta$.
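For reference, one choice consistent with the inequality above: since $x^2<\epsilon$ exactly when $|x-0|<\sqrt{\epsilon}$, we may take
$\delta=\sqrt{\epsilon}$, so $\epsilon=0.1$ gives $\delta=\sqrt{0.1}\approx 0.316$ and $\epsilon=0.05$ gives $\delta=\sqrt{0.05}\approx 0.224$.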
Record Details
Title:
Reply to “Comment on ‘Ultrafast terahertz-field-driven ionic response in ferroelectric $BaTiO_{3}$' ”
Affiliation(s):
EuXFEL staff, Other
Author group:
Instrument FXE
Abstract:
In this reply to S. Durbin’s comment on our original paper “Ultrafast terahertz-field-driven ionic response in ferroelectric $BaTiO_{3}$,” we concur that his final equations 8 and 9 more accurately describe the change in diffracted intensity as a function of Ti displacement. We also provide an alternative derivation based on an ensemble average over unit cells. The conclusions of the paper are unaffected by this correction.
Imprint:
American Physical Society, 2018
Journal Information:
Physical Review B, 97 (22), 226102 (2018)
Language(s):
English
# Start
## MPG.PuRe
This is the publication repository of the Max Planck Society.
It contains bibliographic data and numerous full texts of the publications of its researchers.
The repository is based on PubMan, a publication repository software developed by the Max Planck Digital Library.
Currently we are working on the migration of the data base of the predecessor system eDoc into this repository.
### Search for publications here
... or browse through different categories.
## Tools and Interfaces
#### Search and Export
Do you want to integrate your PubMan Data within an external system?
Necessary queries can be carried out via our REST-Interface!
#### Control of Named Entities (CoNE)
Search and administrate controlled vocabularies for persons, journals, classifications or languages.
## Most Recently Released Items
Duvigneau, Stefanie; Kettner, Alexander; Carius, Lisa; Griehl, Carola ...
-
2021-06-17
Renn, Jürgen
-
2021-06-17
Crisp, Tyrone; Meir, Ehud; Onn, Uri
-
2021-06-17
We construct, for any finite commutative ring $R$, a family of representations of the general linear group $\mathrm{GL}_n(R)$ whose intertwining ...
Shen, Yubin
-
2021-06-17
## The Arctic Has Barfed
I was scanning my blog stats the other day – partly to see if people were reading my new post on the Blue Mountains bushfires, partly because I just like graphs – when I noticed that an article I wrote nearly two years ago was suddenly getting more views than ever before:
The article in question highlights the scientific inaccuracies of the 2004 film The Day After Tomorrow, in which global warming leads to a new ice age. Now that I’ve taken more courses in thermodynamics I could definitely expand on the original post if I had the time and inclination to watch the film again…
I did a bit more digging in my stats and discovered that most viewers are reaching this article through Google searches such as “is the day after tomorrow true”, “is the day after tomorrow likely to happen”, and “movie review of a day after tomorrow if it is possible or impossible.” The answers are no, no, and impossible, respectively.
But why the sudden surge in interest? I think it is probably related to the record cold temperatures across much of the United States, an event which media outlets have dubbed the “polar vortex”. I prefer “Arctic barf”.
Part of the extremely cold air mass which covers the Arctic has essentially detached and spilled southward over North America. In other words, the Arctic has barfed on the USA. Less sexy terminology than “polar vortex”, perhaps, but I would argue it is more enlightening.
Greg Laden also has a good explanation:
The Polar Vortex, a huge system of swirling air that normally contains the polar cold air has shifted so it is not sitting right on the pole as it usually does. We are not seeing an expansion of cold, an ice age, or an anti-global warming phenomenon. We are seeing the usual cold polar air taking an excursion.
Note that other regions such as Alaska and much of Europe are currently experiencing unusually warm winter weather. On balance, the planet isn’t any colder than normal. The cold patches are just moving around in an unusual way.
Having grown up in the Canadian Prairies, where we experience daily lows below -30°C for at least a few days each year (and for nearly a month straight so far this winter), I can’t say I have a lot of sympathy. Or maybe I’m just bitter because I never got a day off school due to the cold? But seriously, nothing has to shut down if you plug in the cars at night and bundle up like an astronaut. We’ve been doing it for years.
## A Simple Stochastic Climate Model: Climate Sensitivity
Last time I derived the following ODE for temperature T at time t:
$\frac{dT}{dt} = - \frac{T(t) - S F(t)}{\tau}$
where S and τ are constants, and F(t) is the net radiative forcing at time t. Eventually I will discuss each of these terms in detail; this post will focus on S.
At equilibrium, when dT/dt = 0, the ODE necessitates T(t) = S F(t). A physical interpretation for S becomes apparent: it measures the equilibrium change in temperature per unit forcing, also known as climate sensitivity.
A great deal of research has been conducted with the aim of quantifying climate sensitivity, through paleoclimate analyses, modelling experiments, and instrumental data. Overall, these assessments show that climate sensitivity is on the order of 3 K per doubling of CO2 (divide by 5.35 ln 2 W/m2 to convert to warming per unit forcing).
The IPCC AR4 report (note that AR5 was not yet published at the time of my calculations) compared many different probability distribution functions (PDFs) of climate sensitivity, shown below. They follow the same general shape of a shifted distribution with a long tail to the right, and average 5-95% confidence intervals of around 1.5 to 7 K per doubling of CO2.
Box 10.2, Figure 1 of the IPCC AR4 WG1: Probability distribution functions of climate sensitivity (a), 5-95% confidence intervals (b).
These PDFs generally consist of discrete data points that are not publicly available. Consequently, sampling from any existing PDF would be difficult. Instead, I chose to create my own PDF of climate sensitivity, modelled as a log-normal distribution (e raised to the power of a normal distribution) with the same shape and bounds as the existing datasets.
The challenge was to find values for μ and σ, the mean and standard deviation of the corresponding normal distribution, such that a value z sampled from the log-normal distribution satisfies
$P(z < 1.5) = 0.05 \quad \text{and} \quad P(z < 7) = 0.95,$
matching the 5-95% range of roughly 1.5 to 7 K per doubling of CO2 noted above; written out, each condition has the form $P(z < c) = \tfrac{1}{2}\left[1 + \text{erf}\left(\tfrac{\ln c - \mu}{\sigma\sqrt{2}}\right)\right]$.
Since erf, the error function, cannot be evaluated analytically, this two-parameter problem must be solved numerically. I built a simple particle swarm optimizer to find the solution, which consistently yielded results of μ = 1.1757, σ = 0.4683.
The upper tail of a log-normal distribution is unbounded, so I truncated the distribution at 10 K, consistent with existing PDFs (see figure above). At the beginning of each simulation, climate sensitivity in my model is sampled from this distribution and held fixed for the entire run. A histogram of 10^6 sampled points, shown below, has the desired characteristics.
Histogram of 10^6 points sampled from the log-normal distribution used for climate sensitivity in the model.
Note that in order to be used in the ODE, the sampled points must then be converted to units of K m2/W (warming per unit forcing) by dividing by 5.35 ln 2 W/m2, the forcing from doubled CO2.
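A sketch of this sampling step (written here in Python rather than the model's Matlab; the seed and sample size are arbitrary):

```python
# Sample climate sensitivity from the truncated log-normal described above
# (mu = 1.1757, sigma = 0.4683, truncated at 10 K per doubling of CO2),
# then convert to warming per unit forcing by dividing by 5.35*ln(2) W/m^2.
import numpy as np

MU, SIGMA = 1.1757, 0.4683
F_2XCO2 = 5.35 * np.log(2)                  # forcing from doubled CO2, W/m^2

def sample_climate_sensitivity(rng):
    """One draw of S, in K m^2/W, held fixed for an entire model run."""
    while True:
        s = rng.lognormal(mean=MU, sigma=SIGMA)   # K per doubling of CO2
        if s <= 10.0:                             # truncate the upper tail
            return s / F_2XCO2

rng = np.random.default_rng(42)
samples = F_2XCO2 * np.array([sample_climate_sensitivity(rng) for _ in range(10_000)])
print(np.percentile(samples, [5, 95]))      # roughly [1.5, 7] K per doubling
```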
## Bits and Pieces
Now that the academic summer is over, I have left Australia and returned home to Canada. It is great to be with my friends and family again, but I really miss the ocean and the giant monster bats. Not to mention the lab: after four months as a proper scientist, it’s very hard to be an undergrad again.
While I continue to settle in, move to a new apartment, and recover from jet lag (which is way worse in this direction!), here are a few pieces of reading to tide you over:
Scott Johnson from Ars Technica wrote a fabulous piece about climate modelling, and the process by which scientists build and test new components. The article is accurate and compelling, and features interviews with two of my former supervisors (Steve Easterbrook and Andrew Weaver) and lots of other great communicators (Gavin Schmidt and Richard Alley, to name a few).
I have just started reading A Short History of Nearly Everything by Bill Bryson. So far, it is one of the best pieces of science writing I have ever read. As well as being funny and easy to understand, it makes me excited about areas of science I haven’t studied since high school.
Finally, my third and final paper from last summer in Victoria was published in the August edition of Journal of Climate. The full text (subscription required) is available here. It is a companion paper to our recent Climate of the Past study, and compares the projections of EMICs (Earth System Models of Intermediate Complexity) when forced with different RCP scenarios. In a nutshell, we found that even after anthropogenic emissions fall to zero, it takes a very long time for CO2 concentrations to recover, even longer for global temperatures to start falling, and longer still for sea level rise (caused by thermal expansion alone, i.e. neglecting the melting of ice sheets) to stabilize, let alone reverse.
## A Simple Stochastic Climate Model: Deriving the Backbone
Last time I introduced the concept of a simple climate model which uses stochastic techniques to simulate uncertainty in our knowledge of the climate system. Here I will derive the backbone of this model, an ODE describing the response of global temperature to net radiative forcing. This derivation is based on unpublished work by Nathan Urban – many thanks!
In reality, the climate system should be modelled not as a single ODE, but as a coupled system of hundreds of PDEs in four dimensions. Such a task is about as arduous as numerical science can get, but dozens of research groups around the world have built GCMs (General Circulation Models, or Global Climate Models, depending on who you talk to) which come quite close to this ideal.
Each GCM has taken hundreds of person-years to develop, and I only had eight weeks. So for the purposes of this project, I treat the Earth as a spatially uniform body with a single temperature. This is clearly a huge simplification but I decided it was necessary.
Let’s start by defining T1(t) to be the absolute temperature of this spatially uniform Earth at time t, and let its heat capacity be C. Therefore,
$C \: T_1(t) = E$
where E is the change in energy required to warm the Earth from 0 K to temperature T1. Taking the time derivative of both sides,
$C \: \frac{dT_1}{dt} = \frac{dE}{dt}$
Now, divide through by A, the surface area of the Earth:
$c \: \frac{dT_1}{dt} = \frac{1}{A} \frac{dE}{dt}$
where c = C/A is the heat capacity per unit area. Note that the right side of the equation, a change in energy per unit time per unit area, has units of W/m2. We can express this as the difference of incoming and outgoing radiative fluxes, I(t) and O(t) respectively:
$c \: \frac{dT_1}{dt} = I(t)- O(t)$
By the Stefan-Boltzmann Law,
$c \: \frac{dT_1}{dt} = I(t) - \epsilon \sigma T_1(t)^4$
where ϵ is the emissivity of the Earth and σ is the Stefan-Boltzmann constant.
To consider the effect of a change in temperature, suppose that T1(t) = T0 + T(t), where T0 is an initial equilibrium temperature and T(t) is a temperature anomaly. Substituting into the equation,
$c \: \frac{d(T_0 + T(t))}{dt} = I(t) - \epsilon \sigma (T_0 + T(t))^4$
Noting that T0 is a constant, and also factoring the right side,
$c \: \frac{dT}{dt} = I(t) - \epsilon \sigma T_0^4 (1 + \tfrac{T(t)}{T_0})^4$
Since the absolute temperature of the Earth is around 280 K, and we are interested in perturbations of around 5 K, we can assume that T(t)/T0 ≪ 1. So we can linearize (1 + T(t)/T0)^4 using a Taylor expansion about T(t) = 0:
$c \: \frac{dT}{dt} = I(t) - \epsilon \sigma T_0^4 (1 + 4 \tfrac{T(t)}{T_0} + O[(\tfrac{T(t)}{T_0})^2])$
$\approx I(t) - \epsilon \sigma T_0^4 (1 + 4 \tfrac{T(t)}{T_0})$
$= I(t) - \epsilon \sigma T_0^4 - 4 \epsilon \sigma T_0^3 T(t)$
Next, let O0 = ϵσT0^4 be the initial outgoing flux. So,
$c \: \frac{dT}{dt} = I(t) - O_0 - 4 \epsilon \sigma T_0^3 T(t)$
Let F(t) = I(t) – O0 be the radiative forcing at time t. Making this substitution as well as dividing by c, we have
$\frac{dT}{dt} = \frac{F(t) - 4 \epsilon \sigma T_0^3 T(t)}{c}$
Dividing each term by 4ϵσT0^3 and rearranging the numerator,
$\frac{dT}{dt} = - \frac{T(t) - \tfrac{1}{4 \epsilon \sigma T_0^3} F(t)}{\tfrac{c}{4 \epsilon \sigma T_0^3}}$
Finally, let S = 1/(4ϵσT0^3) and τ = cS. Our final equation is
$\frac{dT}{dt} = - \frac{T(t) - S F(t)}{\tau}$
While S depends on the initial temperature T0, all of the model runs for this project begin in the preindustrial period when global temperature is approximately constant. Therefore, we can treat S as a parameter independent of initial conditions. As I will show in the next post, the uncertainty in S based on climate system dynamics far overwhelms any error we might introduce by disregarding T0.
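To make the behaviour of this equation concrete, here is a minimal forward-Euler sketch with assumed values for S, τ, and the forcing (an illustration of the backbone ODE only, not the full stochastic model):

```python
# Integrate dT/dt = -(T - S*F)/tau with forward Euler, using assumed values:
# S ~ 0.8 K m^2/W (about 3 K per CO2 doubling) and tau ~ 8 years.
import numpy as np

S, tau = 0.8, 8.0          # K m^2/W, years (assumed for illustration)
dt, n_steps = 0.1, 1000    # 0.1-year steps over 100 years

T = np.zeros(n_steps + 1)              # temperature anomaly, K
F = np.full(n_steps, 3.7)              # constant forcing ~ doubled CO2, W/m^2

for i in range(n_steps):
    T[i + 1] = T[i] - dt * (T[i] - S * F[i]) / tau

print(T[-1])   # relaxes toward the equilibrium S*F = 0.8 * 3.7 ~ 3 K
```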
## A Simple Stochastic Climate Model: Introduction
This winter I took a course in computational physics, which has probably been my favourite undergraduate course to date. Essentially it was an advanced numerical methods course, but from a very practical point of view. We got a lot of practice using numerical techniques to solve realistic problems, rather than just analysing error estimates and proving conditions of convergence. As a math student I found this refreshing, and incredibly useful for my research career.
We all had to complete a term project of our choice, and I decided to build a small climate model. I was particularly interested in the stochastic techniques taught in the course, and given that modern GCMs and EMICs are almost entirely deterministic, it was possible that I could contribute something original to the field.
The basic premise of my model is this: All anthropogenic forcings are deterministic, and chosen by the user. Everything else is determined stochastically: parameters such as climate sensitivity are sampled from probability distributions, whereas natural forcings are randomly generated but follow the same general pattern that exists in observations. The idea is to run this model with the same anthropogenic input hundreds of times and build up a probability distribution of future temperature trajectories. The spread in possible scenarios is entirely due to uncertainty in the natural processes involved.
This approach mimics the real world, because the only part of the climate system we have full control over is our own actions. Other influences on climate are out of our control, sometimes poorly understood, and often unpredictable. It is just begging to be modelled as a stochastic system. (Not that it is actually stochastic, of course; in fact, I understand that nothing is truly stochastic, even random number generators – unless you can find a counterexample using quantum mechanics? But that’s a discussion for another time.)
A word of caution: I built this model in about eight weeks. As such, it is highly simplified and leaves out a lot of processes. You should never ever use it for real climate projections. This project is purely an exercise in numerical methods, and an exploration of the possible role of stochastic techniques in climate modelling.
Over the coming weeks, I will write a series of posts that explains each component of my simple stochastic climate model in detail. I will show the results from some sample simulations, and discuss how one might apply these stochastic techniques to existing GCMs. I also plan to make the code available to anyone who’s interested – it’s written in Matlab, although I might translate it to a free language like Python, partly because I need an excuse to finally learn Python.
I am very excited to finally share this project with you all! Check back soon for the next installment.
## Climate change and the jet stream
Here in the northern mid-latitudes (much of Canada and the US, Europe, and the northern half of Asia) our weather is governed by the jet stream. This high-altitude wind current, flowing rapidly from west to east, separates cold Arctic air (to the north) from warmer temperate air (to the south). So on a given day, if you’re north of the jet stream, the weather will probably be cold; if you’re to the south, it will probably be warm; and if the jet stream is passing over you, you’re likely to get rain or snow.
The jet stream isn’t straight, though; it’s rather wavy in the north-south direction, with peaks and troughs. So it’s entirely possible for Calgary to experience a cold spell (sitting in a trough of the jet stream) while Winnipeg, almost directly to the east, has a heat wave (sitting in a peak). The farther north and south these peaks and troughs extend, the more extreme these temperature anomalies tend to be.
Sometimes a large peak or trough will hang around for weeks on end, held in place by certain air pressure patterns. This phenomenon is known as “blocking”, and is often associated with extreme weather. For example, the 2010 heat wave in Russia coincided with a large, stationary, long-lived peak in the polar jet stream. Wildfires, heat stroke, and crop failure ensued. Not a pretty picture.
As climate change adds more energy to the atmosphere, it would be naive to expect all the wind currents to stay exactly the same. Predicting the changes is a complicated business, but a recent study by Jennifer Francis and Stephen Vavrus made headway on the polar jet stream. Using North American and North Atlantic atmospheric reanalyses (models forced with observations rather than a spin-up) from 1979-2010, they found that Arctic amplification – the faster rate at which the Arctic warms, compared to the rest of the world – makes the jet stream slower and wavier. As a result, blocking events become more likely.
Arctic amplification occurs because of the ice-albedo effect: there is more snow and ice available in the Arctic to melt and decrease the albedo of the region. (Faster-than-average warming is not seen in much of Antarctica, because a great deal of thermal inertia is provided to the continent in the form of strong circumpolar wind and ocean currents.) This amplification is particularly strong in autumn and winter.
Now, remembering that atmospheric pressure is directly related to temperature, and pressure decreases with height, warming a region will increase the height at which pressure falls to 500 hPa. (That is, it will raise the 500 hPa “ceiling”.) Below that, the 1000 hPa ceiling doesn’t rise very much, because surface pressure doesn’t usually go much above 1000 hPa anyway. So in total, the vertical portion of the atmosphere that falls between 1000 and 500 hPa becomes thicker as a result of warming.
Since the Arctic is warming faster than the midlatitudes to the south, the temperature difference between these two regions is smaller. Therefore, the difference in 1000-500 hPa thickness is also smaller. Running through a lot of complicated physics equations, this has two main effects:
1. Winds in the east-west direction (including the jet stream) travel more slowly.
2. Peaks of the jet stream are pulled farther north, making the current wavier.
Also, both of these effects reinforce each other: slow jet streams tend to be wavier, and wavy jet streams tend to travel more slowly. The correlation between relative 1000-500 hPa thickness and these two effects is not statistically significant in spring, but it is in the other three seasons. Also, melting sea ice and declining snow cover on land are well correlated to relative 1000-500 hPa thickness, which makes sense because these changes are the drivers of Arctic amplification.
Consequently, there is now data to back up the hypothesis that climate change is causing more extreme fall and winter weather in the mid-latitudes, and in both directions: unusual cold as well as unusual heat. Saying that global warming can cause regional cold spells is not a nefarious move by climate scientists in an attempt to make every possible outcome support their theory, as some paranoid pundits have claimed. Rather, it is another step in our understanding of a complex, non-linear system with high regional variability.
Many recent events, such as record snowfalls in the US during the winters of 2009-10 and 2010-11, are consistent with this mechanism – they occurred during blocking episodes in the jet stream, at times when Arctic amplification was particularly strong. They may or may not have happened anyway if climate change weren't in the picture. However, if this hypothesis endures, we can expect more extreme weather from all sides – hotter, colder, wetter, drier – as climate change continues. Don't throw away your snow shovels just yet.
## Climate Change and Atlantic Circulation
Today my very first scientific publication is appearing in Geophysical Research Letters. During my summer at UVic, I helped out with a model intercomparison project regarding the effect of climate change on Atlantic circulation, and was listed as a coauthor on the resulting paper. I suppose I am a proper scientist now, rather than just a scientist larva.
The Atlantic meridional overturning circulation (AMOC for short) is an integral part of the global ocean conveyor belt. In the North Atlantic, a massive amount of water near the surface, cooling down on its way to the poles, becomes dense enough to sink. From there it goes on a thousand-year journey around the world – inching its way along the bottom of the ocean, looping around Antarctica – before finally warming up enough to rise back to the surface. A whole multitude of currents depend on the AMOC, most famously the Gulf Stream, which keeps Europe pleasantly warm.
Some have hypothesized that climate change might shut down the AMOC: the extra heat and freshwater (from melting ice) coming into the North Atlantic could conceivably lower the density of surface water enough to stop it sinking. This happened as the world was coming out of the last ice age, in an event known as the Younger Dryas: a huge ice sheet over North America suddenly gave way, drained into the North Atlantic, and shut down the AMOC. Europe, cut off from the Gulf Stream and at the mercy of the ice-albedo feedback, experienced another thousand years of glacial conditions.
A shutdown today would not lead to another ice age, but it could cause some serious regional cooling over Europe, among other impacts that we don’t fully understand. Today, though, there’s a lot less ice to start with. Could the AMOC still shut down? If not, how much will it weaken due to climate change? So far, scientists have answered these two questions with “probably not” and “something like 25%” respectively. In this study, we analysed 30 climate models (25 complex CMIP5 models, and 5 smaller, less complex EMICs) and came up with basically the same answer. It’s important to note that none of the models include dynamic ice sheets (computational glacial dynamics is a headache and a half), which might affect our results.
Models ran the four standard RCP experiments from 2006-2100. Not every model completed every RCP, and some extended their simulations to 2300 or 3000. In total, there were over 30 000 model years of data. We measured the “strength” of the AMOC using the standard unit Sv (Sverdrups), where each Sv is 1 million cubic metres of water per second.
Only two models simulated an AMOC collapse, and only at the tail end of the most extreme scenario (RCP8.5, which quite frankly gives me a stomachache). Bern3D, an EMIC from Switzerland, showed a MOC strength of essentially zero by the year 3000; CNRM-CM5, a GCM from France, stabilized near zero by 2300. In general, the models showed only a moderate weakening of the AMOC by 2100, with best estimates ranging from a 22% drop for RCP2.6 to a 40% drop for RCP8.5 (with respect to preindustrial conditions).
Are these somewhat-reassuring results trustworthy? Or is the Atlantic circulation in today’s climate models intrinsically too stable? Our model intercomparison also addressed that question, using a neat little scalar metric known as Fov: the net amount of freshwater travelling from the AMOC to the South Atlantic.
The current thinking in physical oceanography is that the AMOC is more or less binary – it’s either “on” or “off”. When AMOC strength is below a certain level (let’s call it A), its only stable state is “off”, and the strength will converge to zero as the currents shut down. When AMOC strength is above some other level (let’s call it B), its only stable state is “on”, and if you were to artificially shut it off, it would bounce right back up to its original level. However, when AMOC strength is between A and B, both conditions can be stable, so whether it’s on or off depends on where it started. This phenomenon is known as hysteresis, and is found in many systems in nature.
This figure was not part of the paper. I made it just now in MS Paint.
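Hysteresis is easy to reproduce in a toy model. The sketch below is a generic bistable system written in Python (my illustration only; it contains none of the actual ocean physics or the Fov metric), in which a forcing is swept slowly up and then back down:

```python
import numpy as np

def equilibrate(x, forcing, dt=0.02, steps=5000):
    """Relax the toy bistable system dx/dt = -x**3 + x + forcing to a steady state."""
    for _ in range(steps):
        x += dt * (-x**3 + x + forcing)
    return x

forcings = np.linspace(-0.6, 0.6, 61)
x = -1.0                     # start on the lower ("off") branch
up_branch, down_branch = [], []

for f in forcings:           # slowly sweep the forcing up...
    x = equilibrate(x, f)
    up_branch.append(x)

for f in forcings[::-1]:     # ...then sweep it back down
    x = equilibrate(x, f)
    down_branch.append(x)

# In the middle of the sweep the two lists disagree: for the same forcing, the
# state the system settles into depends on where it started. That history
# dependence is hysteresis; outside that window the system is monostable.
```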
Here’s the key part: when AMOC strength is less than A or greater than B, Fov is positive and the system is monostable. When AMOC strength is between A and B, Fov is negative and the system is bistable. The physical justification for Fov is its association with the salt advection feedback, the sign of which is opposite Fov: positive Fov means the salt advection feedback is negative (i.e. stabilizing the current state, so monostable); a negative Fov means the salt advection feedback is positive (i.e. reinforcing changes in either direction, so bistable).
Most observational estimates (largely ocean reanalyses) have Fov as slightly negative. If models’ AMOCs really were too stable, their Fov‘s should be positive. In our intercomparison, we found both positives and negatives – the models were kind of all over the place with respect to Fov. So maybe some models are overly stable, but certainly not all of them, or even the majority.
As part of this project, I got to write a new section of code for the UVic model, which calculated Fov each timestep and included the annual mean in the model output. Software development on a large, established project with many contributors can be tricky, and the process involved a great deal of head-scratching, but it was a lot of fun. Programming is so satisfying.
Beyond that, my main contribution to the project was creating the figures and calculating the multi-model statistics, which got a bit unwieldy as the model count approached 30, but we made it work. I am now extremely well-versed in IDL graphics keywords, which I’m sure will come in handy again. Unfortunately I don’t think I can reproduce any figures here, as the paper’s not open-access.
I was pretty paranoid while coding and doing calculations, though – I kept worrying that I would make a mistake, never catch it, and have it dredged out by contrarians a decade later (“Kate-gate”, they would call it). As a climate scientist, I suppose that comes with the job these days. But I can live with it, because this stuff is just so darned interesting.
## Permafrost Projections
During my summer at UVic, two PhD students at the lab (Andrew MacDougall and Chris Avis) as well as my supervisor (Andrew Weaver) wrote a paper modelling the permafrost carbon feedback, which was recently published in Nature Geoscience. I read a draft version of this paper several months ago, and am very excited to finally share it here.
Studying the permafrost carbon feedback is at once exciting (because it has been left out of climate models for so long) and terrifying (because it has the potential to be a real game-changer). There is about twice as much carbon frozen into permafrost than there is floating around in the entire atmosphere. As high CO2 levels cause the world to warm, some of the permafrost will thaw and release this carbon as more CO2 – causing more warming, and so on. Previous climate model simulations involving permafrost have measured the CO2 released during thaw, but haven’t actually applied it to the atmosphere and allowed it to change the climate. This UVic study is the first to close that feedback loop (in climate model speak we call this “fully coupled”).
The permafrost part of the land component was already in place – it was developed for Chris's PhD thesis, and implemented in a previous paper. It involved converting the existing single-layer soil model to a multi-layer model where some layers can be frozen year-round. Also, instead of the four RCP scenarios, the authors used DEPs (Diagnosed Emission Pathways): exactly the same as the RCPs, except that CO2 emissions, rather than concentrations, are given to the model as input. This was necessary so that the extra emissions from permafrost thaw could feed back on the CO2 concentrations the model calculates as it runs.
As a result, permafrost added an extra 44, 104, 185, and 279 ppm of CO2 to the atmosphere for DEP 2.6, 4.5, 6.0, and 8.5 respectively. However, the extra warming by 2100 was about the same for each DEP, with central estimates around 0.25 °C. Interestingly, the logarithmic effect of CO2 on climate (adding 10 ppm to the atmosphere causes more warming when the background concentration is 300 ppm than when it is 400 ppm) managed to cancel out the increasing amounts of permafrost thaw. By 2300, the central estimates of extra warming were more variable, and ranged from 0.13 to 1.69 °C when full uncertainty ranges were taken into account. Altering climate sensitivity (by means of an artificial feedback), in particular, had a large effect.
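(To put numbers on that logarithmic effect: using the standard simplified expression for CO2 forcing, $\Delta F \approx 5.35 \ln(C/C_0)$ W m⁻², going from 300 to 310 ppm adds about $5.35\ln(310/300) \approx 0.18$ W m⁻² of forcing, while going from 400 to 410 ppm adds only about $5.35\ln(410/400) \approx 0.13$ W m⁻². These particular numbers are my illustration, not values from the paper.)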
As a result of the thawing permafrost, the land switched from a carbon sink (net CO2 absorber) to a carbon source (net CO2 emitter) decades earlier than it would have otherwise – before 2100 for every DEP. The ocean kept absorbing carbon, but in some scenarios the carbon source of the land outweighed the carbon sink of the ocean. That is, even without human emissions, the land was emitting more CO2 than the ocean could soak up. Concentrations kept climbing indefinitely, even if human emissions suddenly dropped to zero. This is the part of the paper that made me want to hide under my desk.
This scenario wasn’t too hard to reach, either – if climate sensitivity was greater than 3°C warming per doubling of CO2 (about a 50% chance, as 3°C is the median estimate by scientists today), and people followed DEP 8.5 to at least 2013 before stopping all emissions (a very intense scenario, but I wouldn’t underestimate our ability to dig up fossil fuels and burn them really fast), permafrost thaw ensured that CO2 concentrations kept rising on their own in a self-sustaining loop. The scenarios didn’t run past 2300, but I’m sure that if you left it long enough the ocean would eventually win and CO2 would start to fall. The ocean always wins in the end, but things can be pretty nasty until then.
As if that weren’t enough, the paper goes on to list a whole bunch of reasons why their values are likely underestimates. For example, they assumed that all emissions from permafrost were CO2, rather than the much stronger CH4 which is easily produced in oxygen-depleted soil; the UVic model is also known to underestimate Arctic amplification of climate change (how much faster the Arctic warms than the rest of the planet). Most of the uncertainties – and there are many – are in the direction we don’t want, suggesting that the problem will be worse than what we see in the model.
This paper went in my mental “oh shit” folder, because it made me realize that we are starting to lose control over the climate system. No matter what path we follow – even if we manage slightly negative emissions, i.e. artificially removing CO2 from the atmosphere – this model suggests we’ve got an extra 0.25°C in the pipeline due to permafrost. It doesn’t sound like much, but add that to the 0.8°C we’ve already seen, and take technological inertia into account (it’s simply not feasible to stop all emissions overnight), and we’re coming perilously close to the big nonlinearity (i.e. tipping point) that many argue is between 1.5 and 2°C. Take political inertia into account (most governments are nowhere near even creating a plan to reduce emissions), and we’ve long passed it.
Just because we’re probably going to miss the the first tipping point, though, doesn’t mean we should throw up our hands and give up. 2°C is bad, but 5°C is awful, and 10°C is unthinkable. The situation can always get worse if we let it, and how irresponsible would it be if we did?
## Modelling Geoengineering, Part II
Near the end of my summer at the UVic Climate Lab, all the scientists seemed to go on vacation at the same time and us summer students were left to our own devices. I was instructed to teach Jeremy, Andrew Weaver’s other summer student, how to use the UVic climate model – he had been working with weather station data for most of the summer, but was interested in Earth system modelling too.
Jeremy caught on quickly to the basics of configuration and I/O, and after only a day or two, we wanted to do something more exciting than the standard test simulations. Remembering an old post I wrote, I dug up this paper (open access) by Damon Matthews and Ken Caldeira, which modelled geoengineering by reducing incoming solar radiation uniformly across the globe. We decided to replicate their method on the newest version of the UVic ESCM, using the four RCP scenarios in place of the old A2 scenario. We only took CO2 forcing into account, though: other greenhouse gases would have been easy enough to add in, but sulphate aerosols are spatially heterogeneous and would complicate the algorithm substantially.
Since we were interested in the carbon cycle response to geoengineering, we wanted to prescribe CO2 emissions, rather than concentrations. However, the RCP scenarios prescribe concentrations, so we had to run the model with each concentration trajectory and find the equivalent emissions timeseries. Since the UVic model includes a reasonably complete carbon cycle, it can “diagnose” emissions by calculating the change in atmospheric carbon, subtracting contributions from land and ocean CO2 fluxes, and assigning the residual to anthropogenic sources.
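The bookkeeping is simple enough to sketch. The snippet below is my own Python illustration of the idea, not the UVic model's actual code; the variable names, units, and sign conventions are assumptions:

```python
import numpy as np

def diagnose_emissions(c_atm, f_land, f_ocean):
    """Back out anthropogenic CO2 emissions from a concentration-driven run.

    c_atm   : atmospheric carbon burden at each year (PgC)
    f_land  : net land-to-atmosphere carbon flux each year (PgC/yr, positive = release)
    f_ocean : net ocean-to-atmosphere carbon flux each year (PgC/yr, positive = release)

    The year-to-year change in atmospheric carbon equals anthropogenic emissions
    plus the natural fluxes, so the residual is attributed to anthropogenic sources.
    """
    d_atm = np.diff(c_atm)                    # change in atmospheric carbon each year
    return d_atm - f_land[1:] - f_ocean[1:]   # residual = diagnosed emissions
```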
After a few failed attempts to represent geoengineering without editing the model code (e.g., altering the volcanic forcing input file), we realized it was unavoidable. Model development is always a bit of a headache, but it makes you feel like a superhero when everything falls into place. The job was fairly small – just a few lines that culminated in equation 1 from the original paper – but it still took several hours to puzzle through the necessary variable names and header files! Essentially, every timestep the model calculates the forcing from CO2 and reduces incoming solar radiation to offset that, taking changing planetary albedo into account. When we were confident that the code was working correctly, we ran all four RCPs from 2006-2300 with geoengineering turned on. The results were interesting (see below for further discussion) but we had one burning question: what would happen if geoengineering were suddenly turned off?
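Conceptually, the per-timestep adjustment looks something like the Python sketch below. This is my paraphrase with assumed names and the standard simplified CO2 forcing expression; the authoritative version is equation 1 of Matthews and Caldeira, which may differ in detail:

```python
import numpy as np

def reduced_solar_constant(s0, co2, co2_ref=280.0, albedo=0.3):
    """Scale the solar constant down so the lost absorbed shortwave offsets CO2 forcing.

    s0      : unperturbed solar constant (W/m^2), roughly 1365
    co2     : current CO2 concentration (ppm)
    co2_ref : reference (preindustrial) concentration (ppm)
    albedo  : current planetary albedo (updated by the model each timestep)
    """
    f_co2 = 5.35 * np.log(co2 / co2_ref)     # simplified CO2 radiative forcing (W/m^2)
    # Globally averaged absorbed shortwave is S0 * (1 - albedo) / 4, so the drop
    # in S0 needed to cancel f_co2 is:
    delta_s0 = 4.0 * f_co2 / (1.0 - albedo)
    return s0 - delta_s0
```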
By this time, having completed several thousand years of model simulations, we realized that we were getting a bit carried away. But nobody else had models in the queue – again, they were all on vacation – so our simulations were running three times faster than normal. Using restart files (written every 100 years) as our starting point, we turned off geoengineering instantaneously for RCPs 6.0 and 8.5, after 100 years as well as 200 years.
## Results
Similarly to previous experiments, our representation of geoengineering still led to sizable regional climate changes. Although average global temperatures fell down to preindustrial levels, the poles remained warmer than preindustrial while the tropics were cooler:
Also, nearly everywhere on the globe became drier than in preindustrial times. Subtropical areas were particularly hard-hit. I suspect that some of the drying over the Amazon and the Congo is due to deforestation since preindustrial times, though:
Jeremy also made some plots of key one-dimensional variables for RCP8.5, showing the results of no geoengineering (i.e. the regular RCP – yellow), geoengineering for the entire simulation (red), and geoengineering turned off in 2106 (green) or 2206 (blue):
It only took about 20 years for average global temperature to fall back to preindustrial levels. Changes in solar radiation definitely work quickly. Unfortunately, changes in the other direction work quickly too: shutting off geoengineering overnight led to rates of warming up to 5°C per decade, as the climate system finally reacted to all the extra CO2. To put that in perspective, we're currently warming around 0.2°C per decade, which far surpasses historical climate changes like the Ice Ages.
Sea level rise (due to thermal expansion only – the ice sheet component of the model isn’t yet fully implemented) is directly related to temperature, but changes extremely slowly. When geoengineering is turned off, the reversals in sea level trajectory look more like linear offsets from the regular RCP.
Sea ice area, in contrast, reacts quite quickly to changes in temperature. Note that this data gives annual averages, rather than annual minimums, so we can’t tell when the Arctic Ocean first becomes ice-free. Also, note that sea ice area is declining ever so slightly even with geoengineering – this is because the poles are still warming a little bit, while the tropics cool.
Things get really interesting when you look at the carbon cycle. Geoengineering actually reduced atmospheric CO2 concentrations compared to the regular RCP. This was expected, due to the dual nature of carbon cycle feedbacks. Geoengineering allows natural carbon sinks to enjoy all the benefits of high CO2 without the associated drawbacks of high temperatures, and these sinks become stronger as a result. From looking at the different sinks, we found that the sequestration was due almost entirely to the land, rather than the ocean:
In this graph, positive values mean that the land is a net carbon sink (absorbing CO2), while negative values mean it is a net carbon source (releasing CO2). Note the large negative spikes when geoengineering is turned off: the land, adjusting to the sudden warming, spits out much of the carbon that it had previously absorbed.
Within the land component, we found that the strengthening carbon sink was due almost entirely to soil carbon, rather than vegetation:
This graph shows total carbon content, rather than fluxes – think of it as the integral of the previous graph, but discounting vegetation carbon.
Finally, the lower atmospheric CO2 led to lower dissolved CO2 in the ocean, and alleviated ocean acidification very slightly. Again, this benefit quickly went away when geoengineering was turned off.
## Conclusions
Is geoengineering worth it? I don’t know. I can certainly imagine scenarios in which it’s the lesser of two evils, and find it plausible (even probable) that we will reach such a scenario within my lifetime. But it’s not something to undertake lightly. As I’ve said before, desperate governments are likely to use geoengineering whether or not it’s safe, so we should do as much research as possible ahead of time to find the safest form of implementation.
The modelling of geoengineering is in its infancy, and I have a few ideas for improvement. In particular, I think it would be interesting to use a complex atmospheric chemistry component to allow for spatial variation in the forcing reduction through sulphate aerosols: increase the aerosol optical depth over one source country, for example, and let it disperse over time. I’d also like to try modelling different kinds of geoengineering – sulphate aerosols as well as mirrors in space and iron fertilization of the ocean.
Jeremy and I didn’t research anything that others haven’t, so this project isn’t original enough for publication, but it was a fun way to stretch our brains. It was also a good topic for a post, and hopefully others will learn something from our experiments.
Above all, leave over-eager summer students alone at your own risk. They just might get into something like this. |
# Probability Quiz – 1
Question 1
A year is selected at random. What is the probability that it contains 53 Mondays if every fourth year is a leap year?
- 5/28
- 3/22
- 1/7
- 6/53
Question 2
There are three cartons, each containing a different number of soda bottles. The first carton has 10 bottles, of which four are flat, the second has six bottles, of which one is flat, and the third carton has eight bottles, of which three are flat. What is the probability of a flat bottle being selected when a bottle is chosen at random from one of the three cartons?
- 25/62
- 113/360
- 123/360
- 113/180
Question 3
A die is thrown. Let A be the event that the number obtained is greater than 3. Let B be the event that the number obtained is less than 5. Then P(A∪B) is
- 2/5
- 3/5
- 1
- 1/4
Question 4
One ticket is selected at random from 50 tickets numbered 00, 01, 02, …, 49. Then the probability that the sum of the digits on the selected ticket is 8, given that the product of these digits is zero, equals
- 1/14
- 1/7
- 5/14
- 1/50
Question 5
In a plane, 5 lines of lengths 2, 3, 4, 5 and 6 cm are lying. What is the probability that a triangle cannot be formed by joining three randomly chosen lines end to end?
- $\frac{3}{10}$
- $\frac{7}{10}$
- $\frac{1}{2}$
- 1
Question 6
There are 7 boys and 8 girls in a class. A teacher has 3 items, viz. a pen, a pencil and an eraser, each 5 in number. He distributes the items, one to each student. What is the probability that a boy selected at random has either a pencil or an eraser?
- 2/3
- 2/21
- 14/45
- None of these
Question 7
A locker at the RBI building can be opened by dialling a fixed three-digit code (between 000 and 999). Chhota Chetan, a terrorist, only knows that the number is a three-digit number and has only one six. Using this information he tries to open the locker by dialling three digits at random. The probability that he succeeds in his endeavor is
- $\frac{1}{243}$
- $\frac{1}{900}$
- $\frac{1}{1000}$
- $\frac{1}{216}$
Question 8
A pair of fair dice are rolled together, till a sum of either 5 or 7 is obtained. The probability that the sum 5 happens before sum 7 is
- 0.45
- 0.4
- 0.5
- 0.5
Question 9
In the previous question, what is the probability of getting sum 7 before sum 5?
- 0.6
- 0.55
- 0.4
- 0.5
Question 10
Numbers are selected at random, one at a time, from the numbers 00, 01, 02, …, 99 with replacement. An event E occurs if and only if the product of the two digits of a selected number is 18. If four numbers are selected, then the probability that E occurs at least 3 times is
- $\frac{97}{390625}$
- $\frac{98}{390625}$
- $\frac{97}{380626}$
- $\frac{97}{380625}$
# [pstricks] Antw: Re: recursion
John Culleton john at wexfordpress.com
Sun Oct 28 13:40:26 CET 2007
On Sunday 28 October 2007 03:36:50 am Robert Salvador wrote:
> Thank you very much, Alan!
>
> I am doing what you thought I am doing. And this \edef is exactly what I
> need. All is working now :-))
> By the way: where can I find information about these TeX (??) - commands
> like \def, \edef, \gdef, ... and all the others that one can use in
> LaTeX together with pstricks?
>
> Robert
LaTeX is really a layer cake and the bottom layer is Knuth's original
primitive commands. The next layer is (most of) the plain tex format. This is
incorporated into the LaTeX format layer; next come optional LaTeX styles.
\def, \edef etc. come from that bottom layer. The TeXBook is a reliable, if
somewhat hard to follow, guide. Beyond that canonical work I use _TeX for
the Impatient_, _A Beginner's Book of TeX_, and _TeX by Topic_, in that order.
Two of the three (Impatient and Topic) can be downloaded free. I ultimately
bought a used paper copy of Impatient via Amazon Marketplace however because
I use it so frequently and a ring binder is clumsy.
I prefer plain tex, plain pdftex or Context to LaTeX or pdflatex.
--
John Culleton
Want to know what I really think?
http://apps.wexfordpress.net/blog/
And my must-read (free) short list:
http://wexfordpress.com/tex/shortlist.pdf |
# Homework Help: Solve integral
1. Jan 8, 2012
### Elliptic
1. The problem statement, all variables and given/known data
Solve the integral and express it through the gamma function.
2. Relevant equations
cos(theta)^(2k+1)
3. The attempt at a solution
[attachment: image of the attempt]
2. Jan 8, 2012
### Simon Bridge
You mean:
$$\int_0^{\frac{\pi}{2}} \cos^{2k+1}(\theta)d\theta$$... eg: evaluate the definite integral of an arbitrary odd-power of cosine.
The standard approach is to start by integrating by parts.
You'll end up with a reducing formula which you can turn into a ratio of factorials - apply the limits - after which it is a matter of relating that to the factorial form of the gamma function.
eg. http://mathworld.wolfram.com/CosineIntegral.html
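(For reference, carrying that reduction through for integer $k \ge 0$ gives
$$\int_0^{\frac{\pi}{2}} \cos^{2k+1}(\theta)\,d\theta = \frac{\sqrt{\pi}\;\Gamma(k+1)}{2\,\Gamma\!\left(k+\frac{3}{2}\right)} = \frac{(2k)!!}{(2k+1)!!} = \frac{4^k\,(k!)^2}{(2k+1)!},$$
which is the gamma/factorial form the problem asks for.)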
3. Jan 8, 2012
### Elliptic
It's difficult to see what is happening here.
[attachment: image of working]
4. Jan 8, 2012
### Simon Bridge
If it was easy there'd be no point setting it as a problem.
I'm not going to do it for you ...
Do you know what a gamma function is? You can represent it as a factorial?
Can you identify where you are having trouble seeing what is going on?
Perhaps you should try to do the derivation for yourself?
Last edited: Jan 8, 2012
5. Jan 8, 2012
Thanks.
[attachment: image of working]
6. Jan 8, 2012
### Simon Bridge
Really? And I thought I was being mean.....
The trig-form of the beta function aye - yep, that's a tad more elegant than the path I was suggesting before (the more usual one)... but relies on a hand-wave: do you know how the beta function is derived?
Also - you have $\frac{1}{2}B(\frac{1}{2},k+1)$ but you've spotted that.
If you look at the cosine formula - you have to evaluate the limits ... at first it looks grim because it gives you a sum of terms like $\sin\theta\cos^{2k}\theta$ which is zero at both limits ... unless k=0 ... which is the first term in the sum, which is 1.
After that it is a matter of subbing in the factorial representation of the gamma function.
Which would be a concrete proof.
Yours is shorter and if you have the beta function in class notes then you should be fine using it. |
# 'egen' command: Does Stata select the right level from an attribute?
#### nmarti222
##### New Member
I'm doing my Master's thesis on a Hybrid Choice model, using the example of a fitness center.
My dataset has 81 respondents who have to choose 12 times (12 choice tasks) among 3 alternatives ("alternative 1", "alternative 2" and "nochoice"). There are different attributes for each alternative considered (type of access, weekly access, days per week and price). For the first alternative, the attributes were labelled as alt1_type, alt1_weac, alt1_dayp, alt1_pri. However, for the second alternative the attributes were labelled as alt2_type, alt2_weac, alt2_dayp, alt2_pri. The attribute I want to focus on is the "type of access" (present in alt1_type and alt2_type) and the corresponding levels are "mixed gender" and "female only".
EXPLANATION OF THE PROBLEM --> The level ("mixed gender" and "female only") of each attribute ("type of access", present in "alt1_type" and "alt2_type") changes randomly between alternatives 1 and 2. For example, in the first choice task (out of 12) a respondent can choose "female only", which appears under "alt1_type", while "mixed gender" appears under "alt2_type". In the second choice task "mixed gender" appears under "alt1_type" and "female only" under "alt2_type". Levels are randomly assigned to each alternative until the 12 choice tasks are completed by the survey respondent. For "nochoice" the level is always the same, so there is no problem there.
PROBLEM --> With the code below I'm assuming that the first alternative is always "female only" (or "mixed gender") and the second is always "mixed gender" (or "female only") and the third is always "nochoice" (that in that case is always the same, so no problem).
In the dataset I have a variable called "choice" that equals 1 when the first alternative is selected, 2 when the second alternative is selected, and 3 when "nochoice" is selected. How can I choose the correct level from each attribute?
MY "WRONG" CODE IN Stata -->
Code:
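* For each respondent (userid), count the rows where alternative 1, alternative 2, or no choice was selected.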
egen choice1 = sum(choice == 1), by(userid)
egen choice2 = sum(choice == 2), by(userid)
egen nochoice = sum(choice == 3), by(userid)
And after that the code continues like this, in order to get the distribution of the selected alternatives 1, 2 and nochoice.
Code:
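* Convert the counts to proportions of the 12 choice tasks (rounded to two decimals) and add the two purchase alternatives together.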
gen choice1n = round(choice1/12,.01)
gen choice2n = round(choice2/12,.01)
gen nochoicen = round(nochoice/12,.01)
gen choice12n = choice1n+choice2n
Any idea is more than welcome!
Many thanks!
Nerea |
# I need to find a unit vector perpendicular to vector b
1. Apr 7, 2006
### danago
Hey. Here is the question:
$$\underline{b}= -8\underline{i} - 6\underline{j}$$
I need to find a unit vector perpendicular to vector b.
Ive come up with the following:
$$10 \times |\underline{x}| \cos 90 = \underline{b} \bullet \underline{x}$$
I dont know if what ive done is even close to what i need to do, but from there, im completely stuck.
Any help greatly appreciated.
Dan.
2. Apr 7, 2006
### nrqed
Well, the equation you wrote does not give you any information since cos 90 =0.
Write your unknown vector as $a {\vec i} + b {\vec j}$ and then impose that the dot product of this with your vector above gives zero (write out the scalar product explicitly in terms of *components*, not in terms of magnitude and angle). You will get one equation for two unknowns so there will be an infinite number of solutions. Just pick a value for a (ANY value, except zero) and solve for b. Then you can normalize your vector by dividing it by its magnitude.
3. Apr 7, 2006
### Euclid
Clearly the solution is in the x-y plane. Let's say the solution is $$\textbf{z}=x\textbf{i}+y\textbf{j}$$. You want to solve $$\textbf{z} \cdot \textbf{b} = -8x-6y = 0$$, subject to the constraint $$x^2+y^2 =1$$. How would you normally go about solving these?
4. Apr 7, 2006
### danago
thanks for both of those posts, but im not really understanding them. nrqed, you referred to scalar product, and i wouldnt have a clue what that means.
And euclid, im really lost with what youre trying to say sorry.
Thanks for attempting to help me anyway.
5. Apr 7, 2006
### Euclid
Two vectors a and b are orthogonal (by definition) if their dot product is zero.
If $$\textbf{a} = a_1 \textbf{i}+a_2\textbf{j}$$ and $$\textbf{b} = b_1\textbf{i}+b_2\textbf{j}$$, then their dot product is
$$\textbf{a}\cdot \textbf{b} =a_1b_1+a_2b_2$$
The length of the vector $$\textbf{a}$$ is
$$|a|=\sqrt{a_1^2+a_2^2}$$.
Hence your probem is to find a vector $$\textbf{z}$$ such that $$\textbf{z}\cdot \textbf{b} = 0$$ and $$|\textbf{z}|=1$$. Simply follow the definitions to get the system of equations above.
BTW, nrqed's method is more efficient than solving the two equations directly. It works because if $$\textbf{a}$$ and $$\textbf{b}$$ are orthogonal, so are $$c\textbf{a}$$ and $$\textbf{b}$$ for any scalar c.
Last edited: Apr 7, 2006
6. Apr 7, 2006
### nrqed
Sorry about the confusion... "scalar product" and "dot product" are two terms representing the same thing. You seem to already know about this type of product between two vectors because this is essentially what you wrote as ${\underline b} \cdot {\underline x}$ in your first post.
However, have you learned that there are *two* ways to calculate this product? One is using the form you wrote, the other way involves multiplying components and adding them, as Euclid wrote. Have you seen this? It's this other way of calculating a dot product that you need to solve this problem. The equation you wrote is correct but not useful for this type of problem.
Hope this helps.
Patrick
7. Apr 7, 2006
### danago
ahhh i think i understand now.
So from that, i can write two equations:
$$-8a-6b=0$$
and
$$a^2+b^2=1$$
The first equation as another way of writing the dot product, and the second equation because the solution is a unit vector, then i solve them as simultaneous equations?
I came up with the final vector:
$$\textbf{z}=-0.6\textbf{i}+0.8\textbf{j}$$
8. Apr 7, 2006
### Euclid
Yup, that's right!
9. Apr 7, 2006
### danago
Yay. Thanks so much to both of you for the help. Makes sense to me now :)
10. Apr 8, 2006
### nrqed
That's perfect!
Notice that there is one other solution (makes sense, right? I mean if you have a vector, it is possible to get *two* different unit vectors which will be perpendicular to it...one at 90 degrees on one side and one on the other side). That other solution comes from a^2+b^2 = 1: when you isolate a (or b), you can take two different roots.
Good for you!
I was glad to help but you did most of the work.
Regards
Patrick
11. Apr 8, 2006
### danago
thats because when i write $a^2+b^2=1$ in terms of b i get
$b=\pm\sqrt{1-a^2}$ right? which means that the second solution would be $\textbf{z}=0.6\textbf{i}-0.8\textbf{j}$, the negative of $$\textbf{z}$$?
Makes perfect sense to me. Thanks alot :) |
# How do you solve x^3 -3x^2 +16x -48 = 0?
Feb 16, 2017
$3$ and $\pm 4 i$. See the Socratic graph of the cubic, showing the x-intercept at 3.
#### Explanation:
From sign changes in the coefficients, the equation has at most 3 positive roots. There are no changes in sign
when x is changed to $- x$, and so there are no negative roots.
The cubic is 0 at x = 3. So, it becomes
$\left(x - 3\right) \left({x}^{2} + 16\right)$
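One way to see this factorisation is by grouping: ${x}^{3} - 3 {x}^{2} + 16 x - 48 = {x}^{2} \left(x - 3\right) + 16 \left(x - 3\right) = \left(x - 3\right) \left({x}^{2} + 16\right)$.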
The other solutions come from ${x}^{2} + 16 = 0$, giving $x = \pm 4 i$
graph{(x-3)(x^2+16) [2, 4, -500, 500]}
Not to scale. It is large y vs small x, for approximating the solution. |
## 18 October 2011
### Previously
I blogged some ideas about Library paths for Klink, the Kernel implementation I wrote. I listed several desiderata, based on lessons from the past. I also blogged about how I'd like bridge the gap between package as used by developer and package as used by user.
• (The desiderata from the previous blog post, plus:)
• Should co-operate well with development. Switching from use to development shouldn't require gross changes or compete with library support.
• Can fetch libraries automatically with reasonable power and control
• In particular, automatable enough to support "remote autoloading" but ultimately should be under the user's control.
• Support clean packaging
## Fetching libraries: mostly lean on git
The well-loved version manager git provides most of what I'd want, out of the box:
• Co-operates well with development (More than co-operates, that's what it's usually for)
• Reasonably compact for non-development. You can clone a repo with depth=1
• Fetching
• Via URL (git protocol or otherwise)
• Doesn't treat URLs as sexps - only a mild problem.
• Finding out what's there to be fetched, in the sense of available versions (eg, looking for latest stable release)
git ls-remote --tags URL
• But we have to distinguish version tags from other tags, which AIUI don't refer to versions.
• Secure digital signatures are easy
• Creating them
git tag -s
• Verifying them
git verify-tag
• Excluding local customizations from being updated
• This is possible with .gitignore and some care
• But customizations will live somewhere else entirely (See below)
• Practices supporting stable releases. git-flow (code and practices) does this.
• NOT a well-behaved heterogeneous tree of libraries.
Of course git does not support knowing that a repo is intended as Kernel code. Looking at filename extensions does, but that seems to require fetching the repo first. For the same reason, it can't easily be any file that "lives" in the repo. It should be something about the repo itself.
So the convention I propose is that the presence of a branch named --kernel-source-release indicates a branch of stable Kernel code. Tags on that branch would indicate available versions, so even if coders are working informally and doing unstable work on "master", only tagged versions would be offered.
But does keeping --kernel-source-release up to date require extra effort for the maintainer? IIUC, git can simply make --kernel-source-release track "master", so if a coder's workflow is organized, he needn't make any extra effort beyond issuing a one-time command. Branch tracking is intended for remotes, but seems to support this.
Should there be other branches, like --kernel-source-unstable or --kernel-source-development? I suspect they're not needed, and any use of unstable branches should be specifically configured by the daring user.
I'm not proposing to permanently tie Klink (much less Kernel) specifically to git forever. But it serves so well and is so well supported that I'm not concerned.
## Where to put it all?
That addressed how we can fetch code. In doing so, it put some restrictions on how we can organize the files on disk. So I should at least sketch how it could work on disk.
### The easy part
Of course one would configure directories for libraries to live in. Presumably one would distinguish system, local, and user.
### Path configuration
But the stow approach still left issues of where exactly to stow things. We can't solve it in the file system. That would result in one of two ugly things:
• Making each project represent the entire library filespace, with its real code living at some depth below the project root.
• Making each project physically live in a mirror of the target filespace. This would have all the problems we were avoiding above plus more.
So I propose per-project configuration data to tell stow about paths. I'd allow binding at least these things:
prefix
The library prefix, being a list of symbols.
parts
List of sub-parts, each being a list of: a part type (source, info, etc.), a path within the repo, and a list of extra prefix components (possibly empty).
For example,
((prefix (std util my-app))
 (parts
  ((source "src" ())
   (source "tests" (tests))
   (info "doc" ())
   (default-customizations "defaults" ())
   (public-key "pub_key.asc" ()))))
That would live in a file with a reserved name, say "%kernel-paths" in the repo root. As the example implies, the contents of that file would be sexps, but it wouldn't be code as such. It'd be bindings, to be evaluated in a "sandbox" environment that supported little or no functionality. The expressions seem to be just literals, so no more is required.
## Dependencies and version identity
### Surfeit of ways to express version identity
There are a number of ways to indicate versions. All have their strengths:
• ID hash
• Automatic
• Unique
• Says nothing about stability and features
• Release timestamp
• Time ordered
• Nearly unique, but can mess up.
• Says nothing about stability and features
• Version major.minor.patch
• Just a little work
• Expresses stability
• Expresses time order, but can be messed up.
• Test-satisfaction
• Lots of work
• Almost unused
• Automatically expresses stability and features
• No good convention for communicating the nature of tests
• "stable", "unstable", "release", "current".
• Expresses only stability and currency
• By named sub-features
• Just a little work
• Expresses dependencies neatly
• Expressive
• Not automatic
I chose sub-feature names, based on how well that works for emacs libraries, a real stress test. That is, I choose for code to express dependencies in a form like:
(require (li bra ry name) (feature-1 feature-2))
### Co-ordinating sub-features with version identity
The other forms of version identity still exist as useful data: ID hash, version tags, results of tests. What makes sense to me is to translate them into sets of provided features. Do this somewhere between the repository and the require statement. require would still just see sets of features.
Desiderata for this translation:
• Shouldn't be too much work for the developer.
• Probably easiest to support automatic rules and allow particular exceptions. With a git-flow workflow, this could almost be automatic. As soon as a feature branch is merged into "master", that version and later versions would be deemed to have a feature of that name.
• Should be expressable at multiple points in the pipeline, at least:
• Annotations in the source code itself
• In the repo (In case the source code annotations had to be corrected)
• Stand-alone indexes of library identities. Such indexes would be libraries in their own right. Presumably they'd also record other version-relevant attributes such as signature and URL.
• Locally by user
• Should be derivable from many types of data, at least:
• Branches (eg, everything on "master" branch has the feature stable)
• Tag text (eg, all versions after (2 3 3) provide foo-feature)
• Tag signature (eg, check it against a public key, possibly found in the repo)
• Source code annotations (eg, after coding foo-feature, write (provide-features ear lier fea tures foo-feature))
• Tests (eg, annotate foo-feature's (sub)test suite to indicate that passing it all means foo-feature is provided)
• ID
• To express specific exceptions (eg, ID af84925ebdaf4 does not provide works)
• To potentially compile a mapping from ID to features
• Upstream data. Eg, the bundles of library identities might largely collect and filter data from the libraries
• Should be potentially independent of library's presence, so it can be consulted before fetching a version of a library.
• Should potentially bundle groups of features under single names, to let require statements require them concisely.
### Dependencies
With sub-features, we don't even need Scheme's modest treatment of dependencies, at least not in require. Instead, we could avoid bad versions by indicating that they lack a feature, or possibly possess a negative feature.
The usual configuration might implicitly require:
• works
• stable
• trusted-source
• all-tests-passed
The set of implicitly required features must be configurable by the user, eg for a developer to work on unstable branches.
## Library namespace conventions
On the whole, I like the CPAN namespace conventions. I'd like to suggest these additional (sub-)library-naming conventions:
raw
This interface provides "raw" functionality that favors regular operation and controllability over guessing intentions.
dwim
This interface provides "dwim" functionality that tries to do what is probably meant.
test
This sub-library contains tests for the library immediately enclosing it
testhelp
This sub-library contains code that helps test libraries that use the library immediately enclosing it. In particular, it should provide instances of objects the library builds or operates on for test purposes.
interaction
This library has no functionality per se, it combine one or more functional libraries with an interface (keybindings, menus, or w/e). This is intended to encourage separation of concerns.
inside-out
This library is young and has not yet been organized into a well-behaved namespace with parts. It can have sub-libraries, and their names should evolve to mirror the overall library organization so that it can become a real library.
(inside-out new-app)
user
This user is providing a library that doesn't yet have an official "home" in the namespace. The second component is a unique user-name.
(user tehom-blog/blogspot.com inside-out new-app)
(user tehom-blog/blogspot.com std utility new-util)
## Mutability and Signals
Recently I've been working on Rosegarden, the music sequencer. It uses Qt which uses signals.
Signals implement the Observer pattern, where an object notifies "observers" via signals. A signal is connected to one or more "slots" in other objects. The slots are basically normal methods, except they return nothing (void). When a signal is emitted, Qt arranges for the slots to be called, other than those of deleted objects. So far, I find it works easily and elegantly.
This made me wonder: Could signals take the place of mutability in Scheme? And might that give us both referential transparency and reactiveness simultaneously?
There's not much support for signals for Lisp and Scheme. There's Cells, but it seemed to be conceived of as just a super-spreadsheet. I want to go much further and use signals to re-imagine the basics of object mutation.
## Quasi-mutation: The general idea
Basically, we'd use signals between constant objects to fake mutability; a toy sketch follows the list below.
• Objects can't mutate.
• "Slots" are closures.
• Signals are emitted with respect to particular objects.
• Not "by objects". Again, we're not object-oriented. We're just indexing on objects.
• Ability to emit is controlled by access to objects and signal types.
• Indexing on one particular argument seems overly special, so I contemplate indexing on any relevant arguments. This is again similar to generic functions.
• Signals can be connected to slots.
• The signals go to where the object's respective signal is connected. They are indexed on objects.
• Constructors connect the signal replaced from the parts to the constructed object.
• More precisely, to a closure that knows the object.
• The closure would fully represent the objects' relation. For instance, mutable pairs might have the slots new-car and new-cdr with the obvious meanings.
• But not for immutable objects. Immutable objects' slots would not be new-car and new-cdr, they would raise error.
• The constructed object can access its part objects, in appropriate ways by its own lights. For instance, a pair object could retrieve its car and cdr objects.
• This particular signal replaced is not exposed.
• The details of replaced will be refined below.
• Slots such as new-car will typically:
• Construct a near-copy of the object, with the new part in the old part's place. This effectively connects a new version of the object to the new part and disconnects it from the old part.
• Emit replaced with respect to the object, propagating the change.
• "Normal" setters such as set-car! emit replaced wrt the old object with the new object as value.
• That's just the classical way. There's plenty of room to do clever new things with signals.
• As above, doing this to immutable objects causes error.
• Constructed objects behaving this way would include at least:
• Mutable pairs (and therefore mutable lists and trees)
• Environments
• Continuations. While not often considered object-like, continuations have parts such as the current environment and their parent continuations.
• External-world-ish objects such as ports react to signals in their own appropriate way, not necessarily propagating them further.
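Here is the toy sketch promised above, in Python rather than Kernel (all names and details are mine, not Klink's): parts are never modified in place; instead, the constructor wires a replaced signal from each part to a slot that builds a new version of the pair and emits replaced in turn.

```python
class Signals:
    """A minimal signal hub: connections are indexed on (object identity, signal name)."""
    def __init__(self):
        self._slots = {}

    def connect(self, obj, signal, slot):
        self._slots.setdefault((id(obj), signal), []).append(slot)

    def emit(self, obj, signal, *args):
        # Unlike Qt, this toy ignores object deletion and scheduling entirely.
        for slot in self._slots.get((id(obj), signal), []):
            slot(*args)

signals = Signals()

class Pair:
    """A pair whose parts never change; 'mutation' builds a new version instead."""
    def __init__(self, car, cdr):
        self._car, self._cdr = car, cdr
        # The constructor connects 'replaced' from each part to this pair's slots.
        # (Toy assumption: parts are distinct objects, so id() identifies them.)
        signals.connect(car, "replaced", self._new_car)
        signals.connect(cdr, "replaced", self._new_cdr)

    # Slots: build a near-copy with the new part in place, then propagate 'replaced'.
    def _new_car(self, new_car):
        signals.emit(self, "replaced", Pair(new_car, self._cdr))

    def _new_cdr(self, new_cdr):
        signals.emit(self, "replaced", Pair(self._car, new_cdr))

def set_car(pair, new_car):
    """The classical setter: emit 'replaced' with respect to the old car."""
    signals.emit(pair._car, "replaced", new_car)
```

An immutable pair would differ only in what the constructor connects: slots that raise an error instead of building a replacement, as described below.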
## As if
Literally implementing every pair with at least two signals between itself and its car and cdr seems prohibitive if not impossible. Physically, it couldn't be the only mechanism of mutation. So I'm talking about a mechanism that acts as if it's continuous down to basics like pairs and lists, but really uses a more modest mechanism where it can (presumably containment, as now).
## Object identity
Above, I said "constructed object can access a part object", not "a part's value". Since objects no longer ever change value, the difference is subtle. It's just this: the object has a single set of signal connections. So it has a single identity. So there is a trace of object identity remaining.
One could represent identity value-wise by saying that values consist of a (classical) value and an "object-identity" value, and that object-identity values are opaque and unique except as shared by this mechanism. So signals are connected with respect to object-identity values.
It has a flavor of "the long way around", but it lets us treat objects entirely as values.
### Object versioning
Different versions of an object have different Object-IDs. In a conventional imperative language this distinction wouldn't arise, since two simultaneous references to the same object can't have different values. But here, one can both mutate an object as far as the world is concerned and hold onto its old value. But it should never be the case that objects are eq? but not equal?. So different versions have different Object-IDs.
### The equality predicates
Equality:
equal?
is what it normally is in Scheme or Kernel: The objects have the same current value.
eq?
A and B are eq? just if
• (equal? identity-of-a identity-of-b)
=?
is just an optimization of equal?
## Signals and immutability
I mentioned that I was thinking about immutability in regard to this. So far, I've just described how to duplicate mutability with signals.
For immutable objects, some or all slots would still get connections, but would raise error instead of propagating mutation.
But that's only half of the control we should have over mutation. We'd also like to guarantee that certain evaluations don't mutate even mutable objects they have access to, eg their arguments, the environment, and dynamic variables. The "foo!" convention indicates this (negatively) but doesn't enforce anything. "foo!" convention notwithstanding, we'd like to guarantee this from outside an arbitrary call, not from inside trusted combiners.
### Blocking signals
So we'd like to sometimes block signals. If signals were emitted anyways, they'd be errors and would not reach their destinations. So if replaced is blocked, code either doesn't try to mutate objects or tries and raises an error. Either is consistent with immutability.
ISTM the simplest way to block signals is to disconnect their existing connections and connect them to error combiners. When the call is done, restore their original connections. However, that doesn't play well with asynchronous execution.
Instead, we'll make a copy of the original object that will (probably lazily) infect its parts with "don't mutate me in this scope".
### Scope
For a traditional imperative language, where flow of control and scope are structurally the same, we could block signals in specific scopes, recursively. But for Scheme and Kernel, that won't suffice. What would happen if an object is passed to a continuation and mutated there? We've broken the guarantee that the object wouldn't be mutated. Any time we let objects be passed abnormally, this can happen.
We might try to:
1. raise error if affected objects are passed to continuation applications, or
2. "infect" the other scope with the signal restrictions.
Neither is appealing. In this mechanism, continuing normally is also passing to a less restrictive scope. And continuing normally should behave about the same way as continuing abnormally to the same destination. We also don't want error returns to permanently "freeze" objects.
So ISTM we must distinguish between continuing to a (not necessarily proper) parent of the restricting scope (normally or otherwise) and continuing elsewhere. Signal blocks are removed just if control reaches a parent. This is essentially how Kernel guards reckon continuation parentage.
### Doing this for all object parts
We'd usually want to say that no part of any argument to a combiner can be mutated. It's easy enough to treat signal connections from the root argobject. But we need to "infect" the whole object with immutability, and not infect local objects, which may well be temporary and legitimately mutable.
Since these arguments are ultimately derived from the root argobject, what we can do is arrange for accessors to give immutable objects in their turn. But they have to be only temporarily immutable - blocked, as we said above. And we'd prefer to manage it lazily.
So I propose that accessing a blocked object gives only objects:
• whose replaced signals are re-routed to error, as above, until control "escapes normally" (an improper parent continuation is reached)
• which are in turn blocked, meaning they have the same property infecting all of their accessors.
#### Non-pair containers
Non-pair containers are not treated by mechanisms like copy-es-immutable. But we want to treat immutability fully, so they have to be treated too. This is the case even for:
• Environments
• Encapsulation types. In this case, their sole accessor is required to do this, as all accessors are.
• Ports. Administrative state or not, they can be immutable.
• Closures. Anything they return is considered accessed and automatically gets this blockingness. Their internal parts (static environment, etc) need not be blocked.
• Continuations. Like closures, what they return is considered accessed and automatically gets this blockingness.
#### Exemptions
We'd like to be able to exempt particular objects from this. Some combiners mutate an argument but shouldn't mutate anything else. There'd probably be a signal-block spec that would specify this.
### Blocking signals to keyed dynamic objects
We can easily extend the above to deal with dynamic environment, but keyed dynamic objects are not so simple. Their accessors would be covered by the above if derived from the argobject or the dynamic environment, but they need not be.
So we need an additional rule: Keyed dynamic objects are blocked if accessed in the dynamic scope of the blocking. That's recursive like other blockings. Keyed rebindings in the dynamic scope aren't, because one might bind a temporary that's legitimately mutable.
Side note: I'd like to see a stylistic convention to differentiate between combiners that mutate their arguments ("foo!") and combiners that mutate something else, meaning either their dynamic environment or a dynamic variable (Say "foo!!")
## Interface to be defined
I've already written a lot, so I'll leave this part as just a sketch. To support all this, we need to define an interface.
• A means of defining new signal types
• Returns a signal emitter
• Returns a signal identifier for connect to use.
• A signal scheduler. By general principles, the user should be able to use his own, at least for exposed signals. It's not too hard to write one with closures and continuations.
• Means for the user to emit signals
• Means for the user to connect and disconnect signals
• Not exposed for many built-in objects such as pairs.
• "capture all current", as for blocking
• "disconnect all"
• block-all-signals: connects all signals to error continuation
• Possible argument: except
• (Maybe) block-signal: connects given signal to error continuation
• A "dirty-flag" mechanism, often useful with signals.
• Possibly a priority mechanism.
## Some possibilities this raises
• Use the signal scheduling mechanism for managing constraints. Once we can enforce immutability, constraints become very tempting.
• Provide general signals
• Provide an interface to Lone-COWs, a sort of copy-on-write object optimized for usually being uniquely referenced/owned.
• Supplying "out-of-band" signals to a debugger or similar. They really do need to be out-of-band.
• Provide a broadly applicable interface for redo/undo. It could basically just capture historical copies of objects.
## Previously
Klink is my stand-alone implementation of Kernel by John Shutt.
## Types of immutability
There are several types of immutability used or contemplated in Klink.
Complete immutability
Not much to be said.
Pair immutability
A pair that forever holds the same two objects, though those objects' contents may change.
List immutability
For instance, a list where you can change the contents, but not the number of elements.
Recursive structural immutability
An immutable tree of mutable non-pair objects.
Eg, what ports typically have. When you read or write to a port, it remains "the same thing" administratively.
### Complete immutability
Available in C, used in some places such as symbols and strings. AFAICT there's no way to specify its use or non-use in Kernel; some objects just have it because of what they are or how they are created.
### Recursive structural immutability (Tree immutability)
Supported via copy-es-immutable
No C flag for it. Like complete immutability, some objects just have it because of what type they are.
## Non-recursive structural immutability?
If you read section 4.7.2 of the Kernel report (copy-es-immutable), you may notice that Pair immutability and List immutability are actually extensions. So I figured I should at least advance a rationale for them.
Is non-recursive immutability worth supporting? ISTM it's already strongly suggested by Kernel.
Implied by an informal type
Some combiners take finite lists as arguments; all applicatives require a finite list as argobject. That distinguishes the finite list as at least an informal type. There's a predicate for it, finite-list?, but pairs that "are" finite lists can "become" other sorts of lists (dotted or circular), so it falls short of being a formal type. This seems like an irregularity to me. Structural immutability would solve it.
Implied by implied algorithms
For some combiners (eg reduce), any practical algorithm seems to require doing sub-operations on counted lists. That implies a structurally immutable list, because otherwise the count could theoretically become wrong; in practice it's saved from this by writing the algorithm carefully. So there are at least ephemeral, implied structurally immutable lists present.
Vectors
John once told me that he would eventually like to have vectors in Kernel. Vectors are optimized structurally immutable finite lists.
Opportunity for optimization
There's an opportunity for other optimizations where list immutability is used and recognized. In particular, caching a list's element count is often nice if one can be sure the list count won't change.
## Comparison table
| Type of immutability | Pair | List | Tree |
| --- | --- | --- | --- |
| Special care needed for shared objects? | No | Only own tail | Yes |
# Thread: the limit of (1+1/n)^n=e
1. ## the limit of (1+1/n)^n=e
hallo,
does any one knows how to prove that lim(1+1/n)^n=e when n goes to zero.
thanks
omri
2. Originally Posted by omrimalek
hallo,
does any one knows how to prove that lim(1+1/n)^n=e when n goes to zero.
thanks
omri
I assume that you made a typo because $\lim_{n \mapsto 0}\left(1+\frac1n\right)^n = 1$
3. Hi
$L=\lim_{n \to 0} \left(1+\frac 1n\right)^n$
$\ln(L)=\lim_{n \to 0} n \cdot \ln \left(1+\frac 1n\right)$
But when $n \to 0 ~,~ \frac 1n \gg 1$
Therefore $\ln(L)=\lim_{n \to 0} n \cdot \ln \left(\frac 1n\right)$
Substituting $u=\frac 1n$, we get :
$\ln(L)=\lim_{u \to \infty} \frac 1u \cdot \ln(u)=0$.
Therefore, $L = 1.$
4. Originally Posted by earboth
I assume that you made a typo because $\lim_{n \mapsto 0}\left(1+\frac1n\right)^n = 1$
May be he means as $n \to \infty$??
Which can be done by taking logs then showing the limit is $1$ by any number of methods including L'Hopitals rule.
RonL
5. ## sorry i made a mistake...
i ment when n goes to infinity...
6. $\lim_{n\to\infty}\left ( 1 + \frac{1}{n} \right )^n=e$
This is the definition of number e. So we can't prove that this is e, but we can show that this definition is consistent with other rules and definitions.
7. Originally Posted by wingless
$\lim_{n\to\infty}\left ( 1 + \frac{1}{n} \right )^n=e$
This is the definition of number e. So we can't prove that this is e, but we can show that this definition is consistent with other rules and definitions.
It can be proved but that requires assuming some other definition which would otherwise be proved from this definition .....
What can be done without assuming any definitions is proving that the limit exists and that it lies between 2 and 3.
8. ## thank you all!
9. Let $\varphi = \lim_{x \to \infty } \left( {1 + \frac{1}{x}} \right)^x .$
(Assuming the limit does exist.)
Since the logarithm is continuous on its domain, we can interchange the function and taking limits.
$\ln \varphi = \ln \Bigg[ {\lim_{x \to \infty } \left( {1 + \frac{1}{x}} \right)^x } \Bigg] = \lim_{x \to \infty } x\ln \left( {1 + \frac{1}{x}} \right).$
Make the substitution $u=\frac1x,$
$\ln \varphi =\lim_{u\to0}\frac1u\ln(1+u).$ Since $\ln (1 + u) = \int_1^{1 + u} {\frac{1}{\tau }\,d\tau } ,\,1 \le\tau\le 1 + u,$
$\frac{1}{1 + u} \le \frac{1}{\tau } \le 1\,\therefore \,\frac{u}{1 + u} \le \int_1^{1 + u} {\frac{1}{\tau}\,d\tau} \le u.$
So $\frac1{1+u}\le\frac1u\ln(1+u)\le1.$
Take the limit when $u\to0,$ then by the Squeeze Theorem we can conclude that $\lim_{u\to0}\frac1u\ln(1+u)=1.$
Finally $\ln\varphi=1\,\therefore\,\varphi=e.\quad\blacksquare$
10. Originally Posted by Moo
Hi
You just have to be careful about one thing over here. You assumed the limit exists. How do you know the limit exists? Note, this is why Krizalid wrote "assuming it exists". To prove this we can note the function $f:(0,\infty)\to \mathbb{R}$ defined as $f(x) = (1+\tfrac{1}{x})^x$ is increasing since $f' >0$. Therefore, the sequence $x_n = (1+\tfrac{1}{n})^n$ is an increasing sequence. Now show that $\{ x_n\}$ is bounded. So we have an increasing bounded sequence and therefore we have a limit.
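For completeness, one standard way to see the boundedness is the binomial theorem: $\left(1+\tfrac1n\right)^n=\sum_{k=0}^{n}\binom{n}{k}\frac{1}{n^k}\le \sum_{k=0}^{n}\frac{1}{k!}\le 1+\sum_{k=1}^{n}\frac{1}{2^{k-1}}<3$, using $\binom{n}{k}\frac{1}{n^k}=\frac{1}{k!}\prod_{j=1}^{k-1}\left(1-\frac{j}{n}\right)\le\frac{1}{k!}$ and $k!\ge 2^{k-1}$.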
Originally Posted by wingless
$\lim_{n\to\infty}\left ( 1 + \frac{1}{n} \right )^n=e$
This is the definition of number e. So we can't prove that this is e, but we can show that this definition is consistent with other rules and definitions.
That is not how I like to define $e$. I like to define $\log x = \smallint_1^x \tfrac{d\mu}{\mu}$. And we can define $e$ to be the (unique) number such that $\log (e) = 1$. If we define $e$ this way then it would follow that $(1+\tfrac{1}{n})^n\to e$. But whatever, it depends on your style of defining logarithmic functions. I just find that the approach I use is the cleanest and smoothest.
11. Originally Posted by wingless
$\lim_{n\to\infty}\left ( 1 + \frac{1}{n} \right )^n=e$
This is the definition of number e. So we can't prove that this is e, but we can show that this definition is consistent with other rules and definitions.
It is not $the$ definition but $a$ definition, how one proves it depends on what one is supposed to know, and how it has been defined. It is quite common for it to be defined as the base of natural logarithms.
RonL
12. Originally Posted by CaptainBlack
It is not $the$ definition but $a$ definition, how one proves it depends on what one is supposed to know, and how it has been defined. It is quite common for it to be defined as the base of natural logarithms.
RonL
Thanks to you both, TPH and CaptainBlack. I know that there are tons of definitions for e, but this definition is the oldest one, so it seemed nonsense to me to prove it.
13. I will present this proof for what it's worth, though the poster's proof builds from this. This proof builds on the differentiability of ln(x), to be more exact the derivative of ln(x) at x=1.
Using the definition of the derivative, and since 1/x = 1 at x = 1, we get:
$1=\lim_{h\to 0}\frac{ln(1+h)-ln(1)}{h}$
$=\lim_{h\to 0}\frac{ln(1+h)}{h}$
$=\lim_{h\to 0} ln(1+h)^{\frac{1}{h}}$
Therefore, it follows that:
$e=e^{\lim_{h\to 0} ln(1+h)^{\frac{1}{h}}}$
from the continuity of $e^{x}$ can be written this way:
$e=\lim_{h\to 0}e^{ln(1+h)^{\frac{1}{h}}}=\lim_{h\to 0}(1+h)^{\frac{1}{h}}$
Now, we recognize this limit. The others play off it.
To show $\lim_{x\to {\infty}}\left(1+\frac{1}{x}\right)^{x}=e$
merely let $t=\frac{1}{x}$.
This changes the limit to $t\to 0$ and we have said limit.
# Convert Tensor
This function copies elements from the input tensor to the output with data conversion according to the output tensor type parameters.
For example, the function can:
• convert data according to new element type: fx16 to fx8 and backward
• change data according to new data parameter: increase/decrease the number of fractional bits while keeping the same element type for FX data
Conversion is performed using:
• rounding when the number of significant bits increases
• saturation when the number of significant bits decreases
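As a rough illustration of what rounding and saturation mean for FX data (this Python sketch follows one common fixed-point convention; it is not the MLI implementation, and the function name and parameters are made up):

```python
def convert_fx(values, in_frac_bits, out_frac_bits, out_bits=8):
    """Rescale fixed-point values to a new number of fractional bits.

    Rounding is applied where fractional precision is lost, and the result
    is saturated to the representable range of the output container.
    """
    lo, hi = -(1 << (out_bits - 1)), (1 << (out_bits - 1)) - 1
    shift = out_frac_bits - in_frac_bits
    out = []
    for v in values:
        scaled = v << shift if shift >= 0 else round(v / (1 << -shift))
        out.append(max(lo, min(hi, int(scaled))))   # saturation to out_bits
    return out

# fx16 values with 12 fractional bits -> fx8 with 6 fractional bits
print(convert_fx([6144, 32000], in_frac_bits=12, out_frac_bits=6, out_bits=8))
# [96, 127]: 1.5 survives the conversion (96 = 1.5 * 2**6), the large value saturates
```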
This operation does not change the tensor shape; the shape is copied from input to output.
This kernel supports in-place computation, but only for conversions that do not increase the data size, so that it does not lead to undefined behavior. Therefore, output and input may point to exactly the same memory (without any offset), except for fx8-to-fx16 conversion. In-place computation might affect performance on some platforms.
## Kernel Interface
### Prototype
mli_status mli_hlp_convert_tensor(
    mli_tensor *in,
    mli_tensor *out
);
### Parameters

Kernel Interface Parameters

| Parameter | Description |
| --- | --- |
| in | [IN] Pointer to input tensor |
| out | [OUT] Pointer to output tensor |

Returns: status code
## Conditions for Applying the Function
• Input must be a valid tensor (see mli_tensor Structure).
• Before processing, the output tensor must contain a valid pointer to a buffer with sufficient capacity for storing the result (that is, the total number of elements in the input tensor).
• The output tensor also must contain valid element type and its parameter (el_params.fx.frac_bits).
• Before processing, the output tensor does not have to contain valid shape and rank - they are copied from input tensor. |
# American Institute of Mathematical Sciences
November 2015, 9(4): 1139-1169. doi: 10.3934/ipi.2015.9.1139
## Bilevel optimization for calibrating point spread functions in blind deconvolution
1 Department of Mathematics, Humboldt-Universität zu Berlin, Unter den Linden 6, 10099 Berlin, Germany, Germany
Received October 2014 Revised March 2015 Published October 2015
Blind deconvolution problems arise in many imaging modalities, where both the underlying point spread function, which parameterizes the convolution operator, and the source image need to be identified. In this work, a novel bilevel optimization approach to blind deconvolution is proposed. The lower-level problem refers to the minimization of a total-variation model, as is typically done in non-blind image deconvolution. The upper-level objective takes into account additional statistical information depending on the particular imaging modality. Bilevel problems of such type are investigated systematically. Analytical properties of the lower-level solution mapping are established based on Robinson's strong regularity condition. Furthermore, several stationarity conditions are derived from the variational geometry induced by the lower-level problem. Numerically, a projected-gradient-type method is employed to obtain a Clarke-type stationary point and its convergence properties are analyzed. We also implement an efficient version of the proposed algorithm and test it through the experiments on point spread function calibration and multiframe blind deconvolution.
Citation: Michael Hintermüller, Tao Wu. Bilevel optimization for calibrating point spread functions in blind deconvolution. Inverse Problems & Imaging, 2015, 9 (4) : 1139-1169. doi: 10.3934/ipi.2015.9.1139
Data for Example 6.9
Haptoglo
## Format
A data frame/tibble with eight observations on one variable
concent
haptoglobin concentration (in grams per liter)
## References
Kitchens, L. J. (2003) Basic Statistics and Data Analysis. Pacific Grove, CA: Brooks/Cole, a division of Thomson Learning.
## Examples
shapiro.test(Haptoglo$concent)
#>
#> Shapiro-Wilk normality test
#>
#> data: Haptoglo$concent
#> W = 0.93818, p-value = 0.5932
#>
t.test(Haptoglo$concent, mu = 2, alternative = "less")
#>
#> One Sample t-test
#>
#> data: Haptoglo$concent
#> t = -0.58143, df = 7, p-value = 0.2896
#> alternative hypothesis: true mean is less than 2
#> 95 percent confidence interval:
#> -Inf 2.595669
#> sample estimates:
#> mean of x
#> 1.73625
#> |
# Introduction
## Background and Motivation
Heart disease has been the leading cause of death in the world for the last twenty years. It is therefore of great importance to look for ways to prevent it. In this project, funduscopy images of the retinas of tens of thousands of participants collected by the UK Biobank, together with a dataset of biologically relevant variables measured on those images, are used for two different purposes.
An image of retina being taken by funduscopy.
First, GWAS analysis of some of the variables in the dataset allows us to look at their concrete importance in the genome. Second, the dataset was used as a means of refining the selection of retinal images so that they could be fed to a classification model called Dense Net, whose output is a prediction of hypertension. A key point associated with both of these analyses - especially for the classification part - is that mathematically adequate data cleaning should improve the relevant GWAS p-values and the accuracy of hypertension prediction.
## Data cleaning process
The data has been collected from the UK biobank and consists of :
1. Retina images of left eyes, right eyes, or both left and right eyes of the participants. Also, a few hundred participants have had replica images of either their left or right eye taken.
2. A 92366x47 dataset with one row per left or right retina image. Columns refer to biologically relevant variables previously measured on those images.
The cleaning process has involved :
1. Removing 15 variables by recommendation of the assistants and dividing the dataset into two: one (of size 78254x32) containing only participants who had both their left (labelled "L") and right (labelled "R") eyes taken and nothing else, and the other (of size 464x32) containing each replica (labelled "1") image alongside its original (labelled "0").
2. For every participant, every variable, and in both datasets: applying $\delta = \frac{|L-R|}{L+R}$ to the left-right dataset and $\delta = \frac{|0-1|}{0+1}$ to the original-replica dataset. This delta computes the relative distance between either L and R, or 0 and 1.
3. Computing the t-test and Cohen's d (the effect size) between each corresponding variable of the two datasets and removing the 5 variables with significant p-values after Bonferroni correction for 32 tests (a minimal sketch of this computation follows after this list). This was done because, for the classification model and to predict hypertension, it is better for input images of left and right eyes not to have striking differences between them; otherwise the machine could lose accuracy by accounting for these supplementary data instead of focusing on the overall structure of the images it analyses. We can check whether each variable has a high left-right difference by comparing it to the corresponding variable's 0-1 difference; if a variable has a low left-right difference - a low delta(L, R) - its delta(L, R) distribution should be distributed similarly to its corresponding delta(0, 1) variable, because a replica has by definition no other difference with its original than the technical variability related to the way it was practically captured.
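A minimal sketch of this cleaning step, using a small synthetic stand-in for the real UK Biobank tables and illustrative variable names:

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
cols = ["FD_all", "tortuosity"]                        # illustrative variable names
left    = pd.DataFrame(rng.normal(1.0, 0.10, (200, 2)), columns=cols)
right   = pd.DataFrame(rng.normal(1.0, 0.10, (200, 2)), columns=cols)
orig    = pd.DataFrame(rng.normal(1.0, 0.10, (60, 2)),  columns=cols)
replica = orig + rng.normal(0.0, 0.01, (60, 2))        # replicas differ only by noise

def relative_delta(a, b):
    """delta = |a - b| / (a + b), per participant and per variable."""
    return (a - b).abs() / (a + b)

delta_lr, delta_01 = relative_delta(left, right), relative_delta(orig, replica)

alpha = 0.05 / len(cols)           # Bonferroni correction (32 tests in the report)
kept = []
for col in cols:
    t, p = stats.ttest_ind(delta_lr[col], delta_01[col], equal_var=False)
    pooled_sd = np.sqrt((delta_lr[col].var() + delta_01[col].var()) / 2)
    cohens_d = (delta_lr[col].mean() - delta_01[col].mean()) / pooled_sd  # effect size
    if p >= alpha:                 # keep variables whose L-R delta looks like 0-1 delta
        kept.append(col)

cleaned = delta_lr[kept]
```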
The classification stage then used the 39127x27 cleaned and transformed delta(L, R) dataset for the selection of its images.
# Deep Learning Model
This section focuses on using the previously defined delta variable to sort the images used as input for the classifier. A CNN model was built by the CBG to predict hypertension from retina fundus images. We wished to improve the predictions by reducing technical error in the input images. The statistical tests performed in the first part allow us to select the variable for which delta(L, R) can be used as an approximation of technical error (that is, of delta(0, 1)), i.e., the variable with the smallest difference between delta(L, R) and delta(0, 1).
The delta values for the "FD_all" variable were used here to discriminate participants: participants with the highest delta values were excluded. We ran the model with 10 different sets of images: retaining 90%, 80%, 70%, 60%, and 50% of images using the delta values, and the same fractions selected at random for comparison.
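For concreteness, the two selection schemes can be sketched as follows; the delta array below is a synthetic placeholder for the per-participant FD_all deltas:

```python
import numpy as np

rng = np.random.default_rng(1)
delta_fd = rng.lognormal(mean=-3.0, sigma=0.5, size=39127)   # placeholder deltas

def keep_by_delta(delta, frac):
    """Keep the fraction of participants with the smallest technical asymmetry."""
    return delta <= np.quantile(delta, frac)

for frac in (0.9, 0.8, 0.7, 0.6, 0.5):
    mask_delta  = keep_by_delta(delta_fd, frac)
    mask_random = rng.random(delta_fd.size) < frac           # size-matched control
    print(frac, mask_delta.sum(), mask_random.sum())
```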
## Results
The ROC and training accuracy curves were extracted after every run. The shape of both curves didn't change much from run to run, but notable changes in AUROC were observed.
The AUC values for the different sets of images seem to follow a general trend: AUC decreases as dataset size decreases for the randomly selected images, and increases when the images are filtered by delta.
However, the inherent variation in AUC results from run to run makes it hard to draw conclusions from such little data. Running the model at least thrice with each set of images would allow us to get a much clearer idea of what is actually happening, and to do statistical tests.
# GWAS
The goal of the GWAS was to investigate whether the asymmetry of the eyes could have genetic origins. The variables with the largest left-right difference were selected: fractal dimension and tortuosity. The phenotype for the GWAS was the delta (delta = abs(L-R)/(L+R)) of fractal dimension and tortuosity. That way, we would be able to identify genes responsible for asymmetry in these variables. Two rounds of GWAS were performed. The first one had approximately 40'000 subjects and the second one had approximately 50'000 subjects.
## Results
The results were largely not significant. Only one GWAS was very slightly significant: the fractal dimension with the larger set of participants (indicated by the red circle).
Had the GWAS shown a significant peak, we could then have investigated the part of the genome associated with it by looking up the reference SNP cluster ID (rsID) in NCBI. We could then have identified genes associated with fractal dimension asymmetry in the eyes.
# Brownian motion ish
1. Sep 13, 2012
### sjweinberg
Suppose I have a large particle of mass $M$ that is randomly emitting small particles. The magnitude of the momenta of the small particles is $\delta p$ (and it is equal for all of them). Each particle is launched in a random direction (in 3 spatial dimensions--although we can work with 1 dimension if it's much easier). Assume also that these particles are emitted at a uniform rate with time $\delta t$ between emissions.
So here's my issue. It seems to me that this is a random walk in momentum space. What I would like to know is how to estimate the displacement of the particle after $N$ particles are pooped out. Thus, I need some way to "integrate the velocity".
However, I want to stress that I only care about an order of magnitude estimate of the displacement here. Has anyone dealt with this kind of a situation?
I appreciate any help greatly!
2. Sep 13, 2012
### ImaLooser
We have the sum of N independent identically distributed random variables, so this is going to converge to a Gaussian very quickly, that is with N>30 or so. The momentum will follow a 3-D Gaussian with mean zero; that has got to be available somewhere. (The magnitude of a 2D Gaussian vector follows a Rayleigh distribution.)
The 1D case will be a binomial distribution that converges to a Gaussian.
3. Sep 13, 2012
### sjweinberg
I am aware that the momentum distribution will converge to a Gaussian of width $\sim \sqrt{N} \delta p$. However, do you know what this will mean for the position distribution? In other words, I am really interested in the distribution of the quantity $\sum_{i} p(t_{i})$ where the sum is taken over time steps for the random walk.
My concern is that even though $p$ is expected to be $\sim \sqrt{N} \delta p$ at the end of the walk, I think that the sum may "accelerate" away from the origin because $p$ drifts from its origin.
4. Sep 14, 2012
### Staff: Mentor
From a dimensional analysis: $\overline{|x|}=c~ \delta t~\delta v~ N^\alpha$
A quick simulation indicates $\alpha \approx 1.5$ and $c\approx 1/2$ in the 1-dimensional case. In 3 dimensions, c might be different, while alpha should stay the same.
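A throwaway version of that kind of 1-dimensional simulation (parameter values arbitrary) might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, dp, M, N, trials = 1.0, 1.0, 1.0, 1000, 2000

kicks = rng.choice([-dp, dp], size=(trials, N))   # each emission kicks p by +/- dp
p = np.cumsum(kicks, axis=1)                      # momentum after each emission
x = np.cumsum(p / M, axis=1) * dt                 # displacement, crude Euler sum

print(np.abs(x[:, -1]).mean())                    # mean |x| from the simulation
print(0.5 * dt * (dp / M) * N**1.5)               # compare with c ~ 1/2, alpha = 3/2
```

In the Gaussian limit the 1-D prefactor can be computed exactly as $\sqrt{2/(3\pi)}\approx 0.46$, consistent with the fitted $c\approx 1/2$ above.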
5. Sep 14, 2012
### sjweinberg
Thanks for the help. In fact, your estimation of $\alpha = \frac{3}{2}$ is the same thing I estimated with the following sketchy method:
Let $n(t) = \frac{t}{\delta t}$ be the number of particles emitted after time $t$. Then, the speed of the large particle at time $t$ can be estimated as $\frac{\delta p \sqrt{n(t)}}{M} = \frac{\delta p }{M} \sqrt{\frac{t}{\delta t}}$.
Then $\left| x(t) \right| \sim \int_{0}^{t} \left| v(t) \right| dt \sim \delta t \delta v \left(\frac{t}{\delta t}\right)^{3/2}$.
I feel that this estimate is probably an overestimate which is where your $c \sim 1/2$ may come from.
Thanks again. |
# Generalised Lyndon-Schützenberger Equations
We fully characterise the solutions of the generalised Lyndon-Schützenberger word equations $$u_1 \cdots u_\ell = v_1 \cdots v_m w_1 \cdots w_n$$, where $$u_i \in \{u, \theta(u)\}$$ for all $$1 \leq i \leq \ell$$, $$v_j \in \{v, \theta(v)\}$$ for all $$1 \leq j \leq m$$, $$w_k \in \{w, \theta(w)\}$$ for all $$1 \leq k \leq n$$, and $$\theta$$ is an antimorphic involution. More precisely, we show for which $$\ell$$, $$m$$, and $$n$$ such an equation has only $$\theta$$-periodic solutions, i.e., $$u$$, $$v$$, and $$w$$ are in $$\{t, \theta(t)\}^\ast$$ for some word $$t$$, closing an open problem by Czeizler et al. (2011).
# Pls help me i forgot how to do this...
Selling Price = $70
Rate of Sales Tax = 6%
What is the sales tax? What is the total price?
My options (you can choose more than one):
[ ] $4.00
[ ] $4.02
[ ] $4.20
[ ] $420
[ ] $490
[ ] $74
[ ] $74.02
[ ] $74.20
Can someone tell me how to do this cus I forgot and I have a review and a test.
Mar 15, 2021
### 1 Answer
#1
6% is just $$\frac{6}{100}$$ = 0.06
To get the sales tax, you take the product of 70 and 0.06 (70 x 0.06) = $4.20
Then the total price is just 70 + 4.2 (since it's tax and not a discount), which gives you $74.20
Mar 15, 2021 |
Parabolic paths vs Elliptical paths.
1. Oct 1, 2005
mprm86
We are always taught that a projectile describes a parabolic path (neglecting air resistance), but the path is actually elliptical. So, my question is this: A projectile is thrown in point A (on the ground), it reaches a maximum height H, and it finally falls in point B (same height as A, that is, the ground). Which will be the difference between the paths if (a) it is elliptical, and (b) it is parabolic? Any ideas, suggestions?
P.S. The answer I'm looking for is of the kind of 1 part in a million or so.
2. Oct 1, 2005
mathman
Where did you get this idea? The path is parabolic. You can approximate a parabola by an ellipse as close as you want by simply moving the foci farther apart. The parabola can be looked at as the limit as the separation becomes infinite.
Added note: You may have a point since the earth is not flat. The distant focus will be the center of the earth.
3. Oct 1, 2005
rcgldr
The classic parabolic path assumes a flat earth.
If the projectile travels below escape velocity, the path is elliptical.
If the projectile travels exactly at escape velocity, the path is parabolic.
If the projectile travels faster than escape velocity, the path is hyperbolic.
A link for some formulas (go to orbital mechanics page)
http://www.braeunig.us/space
4. Oct 2, 2005
Galileo
What's responsible for an elliptic path (if v< v_escape) is not the curvature of the earth, but the variation of the gravitational force with height.
You could solve Newton's law under an inverse-square force field to find the actual path. The variation of g with height is too small to be worth taking into consideration when throwing stuff in the air, though. (Air resistance is WAY more dominant.)
5. Oct 2, 2005
rcgldr
No one mentioned curvature of the earth in this thread. My reference to a parabola being correct for flat earth was a reference to treating gravity as being effectively generated from a flat plane instead of effectively from a point source (in which case you get an elliptical path). |
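As a rough order-of-magnitude estimate of the effect the original post asks about (assuming a throw with maximum height of order $H \sim 10\ \mathrm{m}$): expanding the inverse-square law about the surface, $g(h)=\frac{GM}{(R_E+h)^2}\approx g_0\left(1-\frac{2h}{R_E}\right)$, so the field changes by only about $2H/R_E\sim 3\times10^{-6}$ over the trajectory, and that ratio sets the scale of the fractional difference between the flat-earth parabola and the true ellipse.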
As an example, let the original function be $y=x^2$. The reflected equation, as reflected across the line $y=x$, would then be $x=y^2$. Reflection over $y=x$: the function $y=x^2$ is reflected over the line $y=x$. Consider an example where the original function is $\displaystyle y = (x-2)^2$. In general, a horizontal stretch is given by the equation $y = f(cx)$. Although the concept is simple, it has the most advanced mathematical process of the transformations discussed. If we rotate this function by 90 degrees, the new function reads: $[x\sin(\frac{\pi}{2}) + y\cos(\frac{\pi}{2})] = [x\cos(\frac{\pi}{2}) - y\sin(\frac{\pi}{2})]^2$. Reflections produce a mirror image of a function. In general, a vertical translation is given by the equation $\displaystyle y = f(x) + b$. Multiplying the independent variable $x$ by a constant greater than one causes all the $x$ values of an equation to increase. Therefore the horizontal reflection produces the equation $y = f(-x) = (-x-2)^2$. The original function we will use is $y=x^2$. Translating the function up the $y$-axis by two produces the equation $y=x^2+2$, and translating the function down the $y$-axis by two produces the equation $y=x^2-2$. Vertical translations: the function $f(x)=x^2$ is translated both up and down by two. You should include at least two values above and below the middle value for x in the table for the sake of symmetry. The positive numbers on the y-axis are above the point (0, 0), and the negative numbers on the y-axis are below the point (0, 0). Determine whether a given transformation is an example of translation, scaling, rotation, or reflection. If $b>1$, the graph stretches with respect to the $y$-axis, or vertically. The movement is caused by the addition or subtraction of a constant from a function. The mirror image of this function across the $y$-axis would then be $f(-x) = -x^5$. A vertical reflection is given by the equation $y = -f(x)$ and results in the curve being "reflected" across the x-axis. The graph has now physically gotten "taller", with every point on the graph of the original function being multiplied by two. In this example, put the value of the axis of symmetry (x = 0) in the middle of the table. The result is that the curve becomes flipped over the $x$-axis. As an example, consider again the initial sinusoidal function $y=\sin(x)$; if we want to induce horizontal shrinking, the new function becomes $y = f(3x) = \sin(3x)$. Parabolas are also symmetrical, which means they can be folded along a line so that all of the points on one side of the fold line coincide with the corresponding points on the other side of the fold line. This leads to a "shrunken" appearance in the horizontal direction.
A rotation is a transformation that is performed by "spinning" the object around a fixed point known as the center of rotation. Put arrows at the ends. If the function $f(x)$ is multiplied by a value less than one, all the $y$ values of the equation will decrease, leading to a "shrunken" appearance in the vertical direction. For this section we will focus on the two axes and the line $y=x$. Again, the original function is $y=x^2$; shifting the function to the left by two produces the equation $y = f(x+2) = (x+2)^2$. To stretch or shrink the graph in the x direction, divide or multiply the input by a constant. This change will cause the graph of the function to move, shift, or stretch, depending on the type of transformation. Let's use the same basic quadratic function to look at horizontal translations. In general, a vertical stretch is given by the equation $y=bf(x)$. A translation is a function that moves every point a constant distance in a specified direction. In this case the axis of symmetry is x = 0 (which is the y-axis of the coordinate plane). When either $f(x)$ or $x$ is multiplied by a number, functions can "stretch" or "shrink" vertically or horizontally, respectively, when graphed. Calculate the corresponding values for y or f(x). Record the value of y, and that gives you a point to use when graphing the parabola. Make a two-column table. If $b$ is greater than one the function will undergo vertical stretching, and if $b$ is less than one the function will undergo vertical shrinking. A translation can be interpreted as shifting the origin of the coordinate system. A horizontal reflection is a reflection across the $y$-axis, given by the equation $y = f(-x)$. In this general equation, all $x$ values are switched to their negative counterparts while the $y$ values remain the same.
Keep in mind the U-shape of the parabola. Here $f(x)$ is some given function and $b$ is the constant that we are adding to cause a translation. Points on it include (-1, 1), (1, 1), (-2, 4), and (2, 4). A translation of a function is a shift in one or more directions. A transformation takes a basic function and changes it slightly with predetermined methods. Vertical reflection: the function $y=x^2$ is reflected over the $x$-axis. If a parabola is "given," that implies that its equation is provided. You can shift a parabola based on its equation. Multiplying the entire function $f(x)$ by a constant greater than one causes all the $y$ values of an equation to increase. As an example, let $f(x) = x^3$.
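The translations and reflections above are easy to sanity-check numerically; a small sketch (using the same $f(x)=(x-2)^2$ example from above, everything else illustrative) is:

```python
def f(x):
    return (x - 2) ** 2

shift_up   = lambda x: f(x) + 2    # vertical translation: y = f(x) + b with b = 2
shift_left = lambda x: f(x + 2)    # horizontal translation to the left by 2
reflect_x  = lambda x: -f(x)       # vertical reflection across the x-axis
reflect_y  = lambda x: f(-x)       # horizontal reflection across the y-axis

for x in (-2, 0, 2, 4):
    print(x, f(x), shift_up(x), shift_left(x), reflect_x(x), reflect_y(x))
```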
# Almost Consecutive
Calculus Level 5
$1 + \frac {4}{6} + \frac {4 \cdot 5}{6\cdot 9}+ \frac {4 \cdot 5\cdot 6}{6\cdot 9\cdot 12} + \frac {4 \cdot 5\cdot 6 \cdot 7}{6\cdot 9\cdot 12 \cdot 15} + \ldots$
If the value of the series above is in the form of $$\frac a b$$ where $$a,b$$ are coprime positive integers, what is the value of $$a + b$$?
# Small Open Web Math Dataset v2
A 10k-sample shuffled subset of OpenWebMath, ensuring randomized selection of high-quality mathematical text.